* [gentoo-commits] proj/linux-patches:5.8 commit in: /
@ 2020-06-16 18:22 Mike Pagano
0 siblings, 0 replies; 30+ messages in thread
From: Mike Pagano @ 2020-06-16 18:22 UTC (permalink / raw)
To: gentoo-commits
commit: 88a265090e58371f7c1081152ef04d991c59cd9b
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Tue Jun 16 18:21:22 2020 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Tue Jun 16 18:21:22 2020 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=88a26509
Genpatches initial commit:
Support for namespace user.pax.* on tmpfs.
Enable link security restrictions by default.
Bluetooth: Check key sizes only when Secure Simple Pairing
is enabled. See bug #686758
tmp513 requires REGMAP_I2C to build. Select it by default in
Kconfig. See bug #710790. Thanks to Phil Stracchino
VIDEO_TVP5150 requires REGMAP_I2C to build. Select it by
default in Kconfig. See bug #721096. Thanks to Max Steel
sign-file: full functionality with modern LibreSSL
Add Gentoo Linux support config settings and defaults.
Support for ZSTD compressed kernel
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 56 +++
1500_XATTR_USER_PREFIX.patch | 67 ++++
...ble-link-security-restrictions-by-default.patch | 20 +
...zes-only-if-Secure-Simple-Pairing-enabled.patch | 37 ++
...3-Fix-build-issue-by-selecting-CONFIG_REG.patch | 30 ++
...0-Fix-build-issue-by-selecting-REGMAP-I2C.patch | 10 +
2920_sign-file-patch-for-libressl.patch | 16 +
..._ZSTD-v5-1-8-prepare-zstd-for-preboot-env.patch | 82 ++++
...STD-v5-2-8-prepare-xxhash-for-preboot-env.patch | 94 +++++
...STD-v5-3-8-add-zstd-support-to-decompress.patch | 422 +++++++++++++++++++++
...-v5-4-8-add-support-for-zstd-compres-kern.patch | 65 ++++
...add-support-for-zstd-compressed-initramfs.patch | 50 +++
5005_ZSTD-v5-6-8-bump-ZO-z-extra-bytes-margin.patch | 20 +
...v5-7-8-support-for-ZSTD-compressed-kernel.patch | 92 +++++
...5-8-8-gitignore-add-ZSTD-compressed-files.patch | 12 +
15 files changed, 1073 insertions(+)
diff --git a/0000_README b/0000_README
index 9018993..8373b37 100644
--- a/0000_README
+++ b/0000_README
@@ -43,6 +43,62 @@ EXPERIMENTAL
Individual Patch Descriptions:
--------------------------------------------------------------------------
+Patch: 1500_XATTR_USER_PREFIX.patch
+From: https://bugs.gentoo.org/show_bug.cgi?id=470644
+Desc: Support for namespace user.pax.* on tmpfs.
+
+Patch: 1510_fs-enable-link-security-restrictions-by-default.patch
+From: http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
+Desc: Enable link security restrictions by default.
+
+Patch: 2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch
+From: https://lore.kernel.org/linux-bluetooth/20190522070540.48895-1-marcel@holtmann.org/raw
+Desc: Bluetooth: Check key sizes only when Secure Simple Pairing is enabled. See bug #686758
+
+Patch: 2900_tmp513-Fix-build-issue-by-selecting-CONFIG_REG.patch
+From: https://bugs.gentoo.org/710790
+Desc:   tmp513 requires REGMAP_I2C to build. Select it by default in Kconfig. See bug #710790. Thanks to Phil Stracchino
+
+Patch: 2910_TVP5150-Fix-build-issue-by-selecting-REGMAP-I2C.patch
+From: https://bugs.gentoo.org/721096
+Desc:   VIDEO_TVP5150 requires REGMAP_I2C to build. Select it by default in Kconfig. See bug #721096. Thanks to Max Steel
+
+Patch: 2920_sign-file-patch-for-libressl.patch
+From: https://bugs.gentoo.org/717166
+Desc: sign-file: full functionality with modern LibreSSL
+
Patch: 4567_distro-Gentoo-Kconfig.patch
From: Tom Wijsman <TomWij@gentoo.org>
Desc: Add Gentoo Linux support config settings and defaults.
+
+Patch: 5000_ZSTD-v5-1-8-prepare-zstd-for-preboot-env.patch
+From: https://lkml.org/lkml/2020/4/1/29
+Desc: lib: prepare zstd for preboot environment
+
+Patch: 5001_ZSTD-v5-2-8-prepare-xxhash-for-preboot-env.patch
+From: https://lkml.org/lkml/2020/4/1/29
+Desc: lib: prepare xxhash for preboot environment
+
+Patch: 5002_ZSTD-v5-3-8-add-zstd-support-to-decompress.patch
+From: https://lkml.org/lkml/2020/4/1/29
+Desc: lib: add zstd support to decompress
+
+Patch: 5003_ZSTD-v5-4-8-add-support-for-zstd-compres-kern.patch
+From: https://lkml.org/lkml/2020/4/1/29
+Desc: init: add support for zstd compressed kernel
+
+Patch: 5004_ZSTD-v5-5-8-add-support-for-zstd-compressed-initramfs.patch
+From: https://lkml.org/lkml/2020/4/1/29
+Desc: usr: add support for zstd compressed initramfs
+
+Patch: 5005_ZSTD-v5-6-8-bump-ZO-z-extra-bytes-margin.patch
+From: https://lkml.org/lkml/2020/4/1/29
+Desc: x86: bump ZO_z_extra_bytes margin for zstd
+
+Patch: 5006_ZSTD-v5-7-8-support-for-ZSTD-compressed-kernel.patch
+From: https://lkml.org/lkml/2020/4/1/29
+Desc: x86: Add support for ZSTD compressed kernel
+
+Patch: 5007_ZSTD-v5-8-8-gitignore-add-ZSTD-compressed-files.patch
+From: https://lkml.org/lkml/2020/4/1/29
+Desc: .gitignore: add ZSTD-compressed files
diff --git a/1500_XATTR_USER_PREFIX.patch b/1500_XATTR_USER_PREFIX.patch
new file mode 100644
index 0000000..245dcc2
--- /dev/null
+++ b/1500_XATTR_USER_PREFIX.patch
@@ -0,0 +1,67 @@
+From: Anthony G. Basile <blueness@gentoo.org>
+
+This patch adds support for a restricted user-controlled namespace on
+tmpfs filesystem used to house PaX flags. The namespace must be of the
+form user.pax.* and its value cannot exceed a size of 8 bytes.
+
+This is needed on all Gentoo systems so that XATTR_PAX flags
+are preserved for users who might build packages using portage on
+a tmpfs system with a non-hardened kernel and then switch to a
+hardened kernel with XATTR_PAX enabled.
+
+The namespace is added to any user with Extended Attribute support
+enabled for tmpfs. Users who do not enable xattrs will not have
+the XATTR_PAX flags preserved.
+
+diff --git a/include/uapi/linux/xattr.h b/include/uapi/linux/xattr.h
+index 1590c49..5eab462 100644
+--- a/include/uapi/linux/xattr.h
++++ b/include/uapi/linux/xattr.h
+@@ -73,5 +73,9 @@
+ #define XATTR_POSIX_ACL_DEFAULT "posix_acl_default"
+ #define XATTR_NAME_POSIX_ACL_DEFAULT XATTR_SYSTEM_PREFIX XATTR_POSIX_ACL_DEFAULT
+
++/* User namespace */
++#define XATTR_PAX_PREFIX XATTR_USER_PREFIX "pax."
++#define XATTR_PAX_FLAGS_SUFFIX "flags"
++#define XATTR_NAME_PAX_FLAGS XATTR_PAX_PREFIX XATTR_PAX_FLAGS_SUFFIX
+
+ #endif /* _UAPI_LINUX_XATTR_H */
+--- a/mm/shmem.c 2020-05-04 15:30:27.042035334 -0400
++++ b/mm/shmem.c 2020-05-04 15:34:57.013881725 -0400
+@@ -3238,6 +3238,14 @@ static int shmem_xattr_handler_set(const
+ struct shmem_inode_info *info = SHMEM_I(inode);
+
+ name = xattr_full_name(handler, name);
++
++ if (!strncmp(name, XATTR_USER_PREFIX, XATTR_USER_PREFIX_LEN)) {
++ if (strcmp(name, XATTR_NAME_PAX_FLAGS))
++ return -EOPNOTSUPP;
++ if (size > 8)
++ return -EINVAL;
++ }
++
+ return simple_xattr_set(&info->xattrs, name, value, size, flags, NULL);
+ }
+
+@@ -3253,6 +3261,12 @@ static const struct xattr_handler shmem_
+ .set = shmem_xattr_handler_set,
+ };
+
++static const struct xattr_handler shmem_user_xattr_handler = {
++ .prefix = XATTR_USER_PREFIX,
++ .get = shmem_xattr_handler_get,
++ .set = shmem_xattr_handler_set,
++};
++
+ static const struct xattr_handler *shmem_xattr_handlers[] = {
+ #ifdef CONFIG_TMPFS_POSIX_ACL
+ &posix_acl_access_xattr_handler,
+@@ -3260,6 +3274,7 @@ static const struct xattr_handler *shmem
+ #endif
+ &shmem_security_xattr_handler,
+ &shmem_trusted_xattr_handler,
++ &shmem_user_xattr_handler,
+ NULL
+ };
+
diff --git a/1510_fs-enable-link-security-restrictions-by-default.patch b/1510_fs-enable-link-security-restrictions-by-default.patch
new file mode 100644
index 0000000..f0ed144
--- /dev/null
+++ b/1510_fs-enable-link-security-restrictions-by-default.patch
@@ -0,0 +1,20 @@
+From: Ben Hutchings <ben@decadent.org.uk>
+Subject: fs: Enable link security restrictions by default
+Date: Fri, 02 Nov 2012 05:32:06 +0000
+Bug-Debian: https://bugs.debian.org/609455
+Forwarded: not-needed
+This reverts commit 561ec64ae67ef25cac8d72bb9c4bfc955edfd415
+('VFS: don't do protected {sym,hard}links by default').
+--- a/fs/namei.c 2018-09-28 07:56:07.770005006 -0400
++++ b/fs/namei.c 2018-09-28 07:56:43.370349204 -0400
+@@ -885,8 +885,8 @@ static inline void put_link(struct namei
+ path_put(&last->link);
+ }
+
+-int sysctl_protected_symlinks __read_mostly = 0;
+-int sysctl_protected_hardlinks __read_mostly = 0;
++int sysctl_protected_symlinks __read_mostly = 1;
++int sysctl_protected_hardlinks __read_mostly = 1;
+ int sysctl_protected_fifos __read_mostly;
+ int sysctl_protected_regular __read_mostly;
+
diff --git a/2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch b/2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch
new file mode 100644
index 0000000..394ad48
--- /dev/null
+++ b/2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch
@@ -0,0 +1,37 @@
+Encryption is only mandatory when both sides are using Secure Simple
+Pairing, which means the key size check only makes sense in that case.
+
+On legacy Bluetooth 2.0 and earlier devices, such as mice, encryption was
+optional, which caused an issue when the key size check was not bound to
+the use of Secure Simple Pairing.
+
+Fixes: d5bb334a8e17 ("Bluetooth: Align minimum encryption key size for LE and BR/EDR connections")
+Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
+Cc: stable@vger.kernel.org
+---
+ net/bluetooth/hci_conn.c | 9 +++++++--
+ 1 file changed, 7 insertions(+), 2 deletions(-)
+
+diff --git a/net/bluetooth/hci_conn.c b/net/bluetooth/hci_conn.c
+index 3cf0764d5793..7516cdde3373 100644
+--- a/net/bluetooth/hci_conn.c
++++ b/net/bluetooth/hci_conn.c
+@@ -1272,8 +1272,13 @@ int hci_conn_check_link_mode(struct hci_conn *conn)
+ return 0;
+ }
+
+- if (hci_conn_ssp_enabled(conn) &&
+- !test_bit(HCI_CONN_ENCRYPT, &conn->flags))
++ /* If Secure Simple Pairing is not enabled, then legacy connection
++ * setup is used and no encryption or key sizes can be enforced.
++ */
++ if (!hci_conn_ssp_enabled(conn))
++ return 1;
++
++ if (!test_bit(HCI_CONN_ENCRYPT, &conn->flags))
+ return 0;
+
+ /* The minimum encryption key size needs to be enforced by the
+--
+2.20.1
diff --git a/2900_tmp513-Fix-build-issue-by-selecting-CONFIG_REG.patch b/2900_tmp513-Fix-build-issue-by-selecting-CONFIG_REG.patch
new file mode 100644
index 0000000..4335685
--- /dev/null
+++ b/2900_tmp513-Fix-build-issue-by-selecting-CONFIG_REG.patch
@@ -0,0 +1,30 @@
+From dc328d75a6f37f4ff11a81ae16b1ec88c3197640 Mon Sep 17 00:00:00 2001
+From: Mike Pagano <mpagano@gentoo.org>
+Date: Mon, 23 Mar 2020 08:20:06 -0400
+Subject: [PATCH 1/1] This driver requires REGMAP_I2C to build. Select it by
+ default in Kconfig. Reported at gentoo bugzilla:
+ https://bugs.gentoo.org/710790
+Cc: mpagano@gentoo.org
+
+Reported-by: Phil Stracchino <phils@caerllewys.net>
+
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ drivers/hwmon/Kconfig | 1 +
+ 1 file changed, 1 insertion(+)
+
+diff --git a/drivers/hwmon/Kconfig b/drivers/hwmon/Kconfig
+index 47ac20aee06f..530b4f29ba85 100644
+--- a/drivers/hwmon/Kconfig
++++ b/drivers/hwmon/Kconfig
+@@ -1769,6 +1769,7 @@ config SENSORS_TMP421
+ config SENSORS_TMP513
+ tristate "Texas Instruments TMP513 and compatibles"
+ depends on I2C
++ select REGMAP_I2C
+ help
+ If you say yes here you get support for Texas Instruments TMP512,
+ and TMP513 temperature and power supply sensor chips.
+--
+2.24.1
+
diff --git a/2910_TVP5150-Fix-build-issue-by-selecting-REGMAP-I2C.patch b/2910_TVP5150-Fix-build-issue-by-selecting-REGMAP-I2C.patch
new file mode 100644
index 0000000..1bc058e
--- /dev/null
+++ b/2910_TVP5150-Fix-build-issue-by-selecting-REGMAP-I2C.patch
@@ -0,0 +1,10 @@
+--- a/drivers/media/i2c/Kconfig 2020-05-13 12:38:05.102903309 -0400
++++ b/drivers/media/i2c/Kconfig 2020-05-13 12:38:51.283171977 -0400
+@@ -378,6 +378,7 @@ config VIDEO_TVP514X
+ config VIDEO_TVP5150
+ tristate "Texas Instruments TVP5150 video decoder"
+ depends on VIDEO_V4L2 && I2C
++ select REGMAP_I2C
+ select V4L2_FWNODE
+ help
+ Support for the Texas Instruments TVP5150 video decoder.
diff --git a/2920_sign-file-patch-for-libressl.patch b/2920_sign-file-patch-for-libressl.patch
new file mode 100644
index 0000000..e6ec017
--- /dev/null
+++ b/2920_sign-file-patch-for-libressl.patch
@@ -0,0 +1,16 @@
+--- a/scripts/sign-file.c 2020-05-20 18:47:21.282820662 -0400
++++ b/scripts/sign-file.c 2020-05-20 18:48:37.991081899 -0400
+@@ -41,9 +41,10 @@
+ * signing with anything other than SHA1 - so we're stuck with that if such is
+ * the case.
+ */
+-#if defined(LIBRESSL_VERSION_NUMBER) || \
+- OPENSSL_VERSION_NUMBER < 0x10000000L || \
+- defined(OPENSSL_NO_CMS)
++#if defined(OPENSSL_NO_CMS) || \
++ ( defined(LIBRESSL_VERSION_NUMBER) \
++ && (LIBRESSL_VERSION_NUMBER < 0x3010000fL) ) || \
++ OPENSSL_VERSION_NUMBER < 0x10000000L
+ #define USE_PKCS7
+ #endif
+ #ifndef USE_PKCS7
diff --git a/5000_ZSTD-v5-1-8-prepare-zstd-for-preboot-env.patch b/5000_ZSTD-v5-1-8-prepare-zstd-for-preboot-env.patch
new file mode 100644
index 0000000..297a8d4
--- /dev/null
+++ b/5000_ZSTD-v5-1-8-prepare-zstd-for-preboot-env.patch
@@ -0,0 +1,82 @@
+diff --git a/lib/zstd/decompress.c b/lib/zstd/decompress.c
+index 269ee9a796c1..73ded63278cf 100644
+--- a/lib/zstd/decompress.c
++++ b/lib/zstd/decompress.c
+@@ -2490,6 +2490,7 @@ size_t ZSTD_decompressStream(ZSTD_DStream *zds, ZSTD_outBuffer *output, ZSTD_inB
+ }
+ }
+
++#ifndef ZSTD_PREBOOT
+ EXPORT_SYMBOL(ZSTD_DCtxWorkspaceBound);
+ EXPORT_SYMBOL(ZSTD_initDCtx);
+ EXPORT_SYMBOL(ZSTD_decompressDCtx);
+@@ -2529,3 +2530,4 @@ EXPORT_SYMBOL(ZSTD_insertBlock);
+
+ MODULE_LICENSE("Dual BSD/GPL");
+ MODULE_DESCRIPTION("Zstd Decompressor");
++#endif
+diff --git a/lib/zstd/fse_decompress.c b/lib/zstd/fse_decompress.c
+index a84300e5a013..0b353530fb3f 100644
+--- a/lib/zstd/fse_decompress.c
++++ b/lib/zstd/fse_decompress.c
+@@ -47,6 +47,7 @@
+ ****************************************************************/
+ #include "bitstream.h"
+ #include "fse.h"
++#include "zstd_internal.h"
+ #include <linux/compiler.h>
+ #include <linux/kernel.h>
+ #include <linux/string.h> /* memcpy, memset */
+@@ -60,14 +61,6 @@
+ enum { FSE_static_assert = 1 / (int)(!!(c)) }; \
+ } /* use only *after* variable declarations */
+
+-/* check and forward error code */
+-#define CHECK_F(f) \
+- { \
+- size_t const e = f; \
+- if (FSE_isError(e)) \
+- return e; \
+- }
+-
+ /* **************************************************************
+ * Templates
+ ****************************************************************/
+diff --git a/lib/zstd/zstd_internal.h b/lib/zstd/zstd_internal.h
+index 1a79fab9e13a..dac753397f86 100644
+--- a/lib/zstd/zstd_internal.h
++++ b/lib/zstd/zstd_internal.h
+@@ -127,7 +127,14 @@ static const U32 OF_defaultNormLog = OF_DEFAULTNORMLOG;
+ * Shared functions to include for inlining
+ *********************************************/
+ ZSTD_STATIC void ZSTD_copy8(void *dst, const void *src) {
+- memcpy(dst, src, 8);
++ /*
++ * zstd relies heavily on gcc being able to analyze and inline this
++ * memcpy() call, since it is called in a tight loop. Preboot mode
++ * is compiled in freestanding mode, which stops gcc from analyzing
++ * memcpy(). Use __builtin_memcpy() to tell gcc to analyze this as a
++ * regular memcpy().
++ */
++ __builtin_memcpy(dst, src, 8);
+ }
+ /*! ZSTD_wildcopy() :
+ * custom version of memcpy(), can copy up to 7 bytes too many (8 bytes if length==0) */
+@@ -137,13 +144,16 @@ ZSTD_STATIC void ZSTD_wildcopy(void *dst, const void *src, ptrdiff_t length)
+ const BYTE* ip = (const BYTE*)src;
+ BYTE* op = (BYTE*)dst;
+ BYTE* const oend = op + length;
+- /* Work around https://gcc.gnu.org/bugzilla/show_bug.cgi?id=81388.
++#if defined(GCC_VERSION) && GCC_VERSION >= 70000 && GCC_VERSION < 70200
++ /*
++ * Work around https://gcc.gnu.org/bugzilla/show_bug.cgi?id=81388.
+ * Avoid the bad case where the loop only runs once by handling the
+ * special case separately. This doesn't trigger the bug because it
+ * doesn't involve pointer/integer overflow.
+ */
+ if (length <= 8)
+ return ZSTD_copy8(dst, src);
++#endif
+ do {
+ ZSTD_copy8(op, ip);
+ op += 8;
diff --git a/5001_ZSTD-v5-2-8-prepare-xxhash-for-preboot-env.patch b/5001_ZSTD-v5-2-8-prepare-xxhash-for-preboot-env.patch
new file mode 100644
index 0000000..88e4674
--- /dev/null
+++ b/5001_ZSTD-v5-2-8-prepare-xxhash-for-preboot-env.patch
@@ -0,0 +1,94 @@
+diff --git a/lib/xxhash.c b/lib/xxhash.c
+index aa61e2a3802f..b4364e011392 100644
+--- a/lib/xxhash.c
++++ b/lib/xxhash.c
+@@ -80,13 +80,11 @@ void xxh32_copy_state(struct xxh32_state *dst, const struct xxh32_state *src)
+ {
+ memcpy(dst, src, sizeof(*dst));
+ }
+-EXPORT_SYMBOL(xxh32_copy_state);
+
+ void xxh64_copy_state(struct xxh64_state *dst, const struct xxh64_state *src)
+ {
+ memcpy(dst, src, sizeof(*dst));
+ }
+-EXPORT_SYMBOL(xxh64_copy_state);
+
+ /*-***************************
+ * Simple Hash Functions
+@@ -151,7 +149,6 @@ uint32_t xxh32(const void *input, const size_t len, const uint32_t seed)
+
+ return h32;
+ }
+-EXPORT_SYMBOL(xxh32);
+
+ static uint64_t xxh64_round(uint64_t acc, const uint64_t input)
+ {
+@@ -234,7 +231,6 @@ uint64_t xxh64(const void *input, const size_t len, const uint64_t seed)
+
+ return h64;
+ }
+-EXPORT_SYMBOL(xxh64);
+
+ /*-**************************************************
+ * Advanced Hash Functions
+@@ -251,7 +247,6 @@ void xxh32_reset(struct xxh32_state *statePtr, const uint32_t seed)
+ state.v4 = seed - PRIME32_1;
+ memcpy(statePtr, &state, sizeof(state));
+ }
+-EXPORT_SYMBOL(xxh32_reset);
+
+ void xxh64_reset(struct xxh64_state *statePtr, const uint64_t seed)
+ {
+@@ -265,7 +260,6 @@ void xxh64_reset(struct xxh64_state *statePtr, const uint64_t seed)
+ state.v4 = seed - PRIME64_1;
+ memcpy(statePtr, &state, sizeof(state));
+ }
+-EXPORT_SYMBOL(xxh64_reset);
+
+ int xxh32_update(struct xxh32_state *state, const void *input, const size_t len)
+ {
+@@ -334,7 +328,6 @@ int xxh32_update(struct xxh32_state *state, const void *input, const size_t len)
+
+ return 0;
+ }
+-EXPORT_SYMBOL(xxh32_update);
+
+ uint32_t xxh32_digest(const struct xxh32_state *state)
+ {
+@@ -372,7 +365,6 @@ uint32_t xxh32_digest(const struct xxh32_state *state)
+
+ return h32;
+ }
+-EXPORT_SYMBOL(xxh32_digest);
+
+ int xxh64_update(struct xxh64_state *state, const void *input, const size_t len)
+ {
+@@ -439,7 +431,6 @@ int xxh64_update(struct xxh64_state *state, const void *input, const size_t len)
+
+ return 0;
+ }
+-EXPORT_SYMBOL(xxh64_update);
+
+ uint64_t xxh64_digest(const struct xxh64_state *state)
+ {
+@@ -494,7 +485,19 @@ uint64_t xxh64_digest(const struct xxh64_state *state)
+
+ return h64;
+ }
++
++#ifndef XXH_PREBOOT
++EXPORT_SYMBOL(xxh32_copy_state);
++EXPORT_SYMBOL(xxh64_copy_state);
++EXPORT_SYMBOL(xxh32);
++EXPORT_SYMBOL(xxh64);
++EXPORT_SYMBOL(xxh32_reset);
++EXPORT_SYMBOL(xxh64_reset);
++EXPORT_SYMBOL(xxh32_update);
++EXPORT_SYMBOL(xxh32_digest);
++EXPORT_SYMBOL(xxh64_update);
+ EXPORT_SYMBOL(xxh64_digest);
+
+ MODULE_LICENSE("Dual BSD/GPL");
+ MODULE_DESCRIPTION("xxHash");
++#endif
diff --git a/5002_ZSTD-v5-3-8-add-zstd-support-to-decompress.patch b/5002_ZSTD-v5-3-8-add-zstd-support-to-decompress.patch
new file mode 100644
index 0000000..1c22fa3
--- /dev/null
+++ b/5002_ZSTD-v5-3-8-add-zstd-support-to-decompress.patch
@@ -0,0 +1,422 @@
+diff --git a/include/linux/decompress/unzstd.h b/include/linux/decompress/unzstd.h
+new file mode 100644
+index 000000000000..56d539ae880f
+--- /dev/null
++++ b/include/linux/decompress/unzstd.h
+@@ -0,0 +1,11 @@
++/* SPDX-License-Identifier: GPL-2.0 */
++#ifndef LINUX_DECOMPRESS_UNZSTD_H
++#define LINUX_DECOMPRESS_UNZSTD_H
++
++int unzstd(unsigned char *inbuf, long len,
++ long (*fill)(void*, unsigned long),
++ long (*flush)(void*, unsigned long),
++ unsigned char *output,
++ long *pos,
++ void (*error_fn)(char *x));
++#endif
+diff --git a/lib/Kconfig b/lib/Kconfig
+index 5d53f9609c25..e883aecb9279 100644
+--- a/lib/Kconfig
++++ b/lib/Kconfig
+@@ -336,6 +336,10 @@ config DECOMPRESS_LZ4
+ select LZ4_DECOMPRESS
+ tristate
+
++config DECOMPRESS_ZSTD
++ select ZSTD_DECOMPRESS
++ tristate
++
+ #
+ # Generic allocator support is selected if needed
+ #
+diff --git a/lib/Makefile b/lib/Makefile
+index ab68a8674360..3ce4ac296611 100644
+--- a/lib/Makefile
++++ b/lib/Makefile
+@@ -166,6 +166,7 @@ lib-$(CONFIG_DECOMPRESS_LZMA) += decompress_unlzma.o
+ lib-$(CONFIG_DECOMPRESS_XZ) += decompress_unxz.o
+ lib-$(CONFIG_DECOMPRESS_LZO) += decompress_unlzo.o
+ lib-$(CONFIG_DECOMPRESS_LZ4) += decompress_unlz4.o
++lib-$(CONFIG_DECOMPRESS_ZSTD) += decompress_unzstd.o
+
+ obj-$(CONFIG_TEXTSEARCH) += textsearch.o
+ obj-$(CONFIG_TEXTSEARCH_KMP) += ts_kmp.o
+diff --git a/lib/decompress.c b/lib/decompress.c
+index 857ab1af1ef3..ab3fc90ffc64 100644
+--- a/lib/decompress.c
++++ b/lib/decompress.c
+@@ -13,6 +13,7 @@
+ #include <linux/decompress/inflate.h>
+ #include <linux/decompress/unlzo.h>
+ #include <linux/decompress/unlz4.h>
++#include <linux/decompress/unzstd.h>
+
+ #include <linux/types.h>
+ #include <linux/string.h>
+@@ -37,6 +38,9 @@
+ #ifndef CONFIG_DECOMPRESS_LZ4
+ # define unlz4 NULL
+ #endif
++#ifndef CONFIG_DECOMPRESS_ZSTD
++# define unzstd NULL
++#endif
+
+ struct compress_format {
+ unsigned char magic[2];
+@@ -52,6 +56,7 @@ static const struct compress_format compressed_formats[] __initconst = {
+ { {0xfd, 0x37}, "xz", unxz },
+ { {0x89, 0x4c}, "lzo", unlzo },
+ { {0x02, 0x21}, "lz4", unlz4 },
++ { {0x28, 0xb5}, "zstd", unzstd },
+ { {0, 0}, NULL, NULL }
+ };
+
+diff --git a/lib/decompress_unzstd.c b/lib/decompress_unzstd.c
+new file mode 100644
+index 000000000000..f317afab502f
+--- /dev/null
++++ b/lib/decompress_unzstd.c
+@@ -0,0 +1,342 @@
++// SPDX-License-Identifier: GPL-2.0
++
++/*
++ * Important notes about in-place decompression
++ *
++ * At least on x86, the kernel is decompressed in place: the compressed data
++ * is placed to the end of the output buffer, and the decompressor overwrites
++ * most of the compressed data. There must be enough safety margin to
++ * guarantee that the write position is always behind the read position.
++ *
++ * The safety margin for ZSTD with a 128 KB block size is calculated below.
++ * Note that the margin with ZSTD is bigger than with GZIP or XZ!
++ *
++ * The worst case for in-place decompression is that the beginning of
++ * the file is compressed extremely well, and the rest of the file is
++ * uncompressible. Thus, we must look for worst-case expansion when the
++ * compressor is encoding uncompressible data.
++ *
++ * The structure of the .zst file in case of a compressed kernel is as follows.
++ * Maximum sizes (as bytes) of the fields are in parentheses.
++ *
++ * Frame Header: (18)
++ * Blocks: (N)
++ * Checksum: (4)
++ *
++ * The frame header and checksum overhead is at most 22 bytes.
++ *
++ * ZSTD stores the data in blocks. Each block has a header whose size is
++ * a 3 bytes. After the block header, there is up to 128 KB of payload.
++ * The maximum uncompressed size of the payload is 128 KB. The minimum
++ * uncompressed size of the payload is never less than the payload size
++ * (excluding the block header).
++ *
++ * The assumption, that the uncompressed size of the payload is never
++ * smaller than the payload itself, is valid only when talking about
++ * the payload as a whole. It is possible that the payload has parts where
++ * the decompressor consumes more input than it produces output. Calculating
++ * the worst case for this would be tricky. Instead of trying to do that,
++ * let's simply make sure that the decompressor never overwrites any bytes
++ * of the payload which it is currently reading.
++ *
++ * Now we have enough information to calculate the safety margin. We need
++ * - 22 bytes for the .zst file format headers;
++ * - 3 bytes per every 128 KiB of uncompressed size (one block header per
++ * block); and
++ * - 128 KiB (biggest possible zstd block size) to make sure that the
++ * decompressor never overwrites anything from the block it is currently
++ * reading.
++ *
++ * We get the following formula:
++ *
++ * safety_margin = 22 + uncompressed_size * 3 / 131072 + 131072
++ * <= 22 + (uncompressed_size >> 15) + 131072
++ */
++
++/*
++ * Preboot environments #include "path/to/decompress_unzstd.c".
++ * All of the source files we depend on must be #included.
++ * zstd's only source dependency is xxhash, which has no source
++ * dependencies.
++ *
++ * zstd and xxhash avoid declaring themselves as modules
++ * when ZSTD_PREBOOT and XXH_PREBOOT are defined.
++ */
++#ifdef STATIC
++# define ZSTD_PREBOOT
++# define XXH_PREBOOT
++# include "xxhash.c"
++# include "zstd/entropy_common.c"
++# include "zstd/fse_decompress.c"
++# include "zstd/huf_decompress.c"
++# include "zstd/zstd_common.c"
++# include "zstd/decompress.c"
++#endif
++
++#include <linux/decompress/mm.h>
++#include <linux/kernel.h>
++#include <linux/zstd.h>
++
++/* 128MB is the maximum window size supported by zstd. */
++#define ZSTD_WINDOWSIZE_MAX (1 << ZSTD_WINDOWLOG_MAX)
++/* Size of the input and output buffers in multi-call mode.
++ * Pick a larger size because it isn't used during kernel decompression,
++ * since that is single pass, and we have to allocate a large buffer for
++ * zstd's window anyways. The larger size speeds up initramfs decompression.
++ */
++#define ZSTD_IOBUF_SIZE (1 << 17)
++
++static int INIT handle_zstd_error(size_t ret, void (*error)(char *x))
++{
++ const int err = ZSTD_getErrorCode(ret);
++
++ if (!ZSTD_isError(ret))
++ return 0;
++
++ switch (err) {
++ case ZSTD_error_memory_allocation:
++ error("ZSTD decompressor ran out of memory");
++ break;
++ case ZSTD_error_prefix_unknown:
++ error("Input is not in the ZSTD format (wrong magic bytes)");
++ break;
++ case ZSTD_error_dstSize_tooSmall:
++ case ZSTD_error_corruption_detected:
++ case ZSTD_error_checksum_wrong:
++ error("ZSTD-compressed data is corrupt");
++ break;
++ default:
++ error("ZSTD-compressed data is probably corrupt");
++ break;
++ }
++ return -1;
++}
++
++/*
++ * Handle the case where we have the entire input and output in one segment.
++ * We can allocate less memory (no circular buffer for the sliding window),
++ * and avoid some memcpy() calls.
++ */
++static int INIT decompress_single(const u8 *in_buf, long in_len, u8 *out_buf,
++ long out_len, long *in_pos,
++ void (*error)(char *x))
++{
++ const size_t wksp_size = ZSTD_DCtxWorkspaceBound();
++ void *wksp = large_malloc(wksp_size);
++ ZSTD_DCtx *dctx = ZSTD_initDCtx(wksp, wksp_size);
++ int err;
++ size_t ret;
++
++ if (dctx == NULL) {
++ error("Out of memory while allocating ZSTD_DCtx");
++ err = -1;
++ goto out;
++ }
++ /*
++ * Find out how large the frame actually is, there may be junk at
++ * the end of the frame that ZSTD_decompressDCtx() can't handle.
++ */
++ ret = ZSTD_findFrameCompressedSize(in_buf, in_len);
++ err = handle_zstd_error(ret, error);
++ if (err)
++ goto out;
++ in_len = (long)ret;
++
++ ret = ZSTD_decompressDCtx(dctx, out_buf, out_len, in_buf, in_len);
++ err = handle_zstd_error(ret, error);
++ if (err)
++ goto out;
++
++ if (in_pos != NULL)
++ *in_pos = in_len;
++
++ err = 0;
++out:
++ if (wksp != NULL)
++ large_free(wksp);
++ return err;
++}
++
++static int INIT __unzstd(unsigned char *in_buf, long in_len,
++ long (*fill)(void*, unsigned long),
++ long (*flush)(void*, unsigned long),
++ unsigned char *out_buf, long out_len,
++ long *in_pos,
++ void (*error)(char *x))
++{
++ ZSTD_inBuffer in;
++ ZSTD_outBuffer out;
++ ZSTD_frameParams params;
++ void *in_allocated = NULL;
++ void *out_allocated = NULL;
++ void *wksp = NULL;
++ size_t wksp_size;
++ ZSTD_DStream *dstream;
++ int err;
++ size_t ret;
++
++ if (out_len == 0)
++ out_len = LONG_MAX; /* no limit */
++
++ if (fill == NULL && flush == NULL)
++ /*
++ * We can decompress faster and with less memory when we have a
++ * single chunk.
++ */
++ return decompress_single(in_buf, in_len, out_buf, out_len,
++ in_pos, error);
++
++ /*
++ * If in_buf is not provided, we must be using fill(), so allocate
++ * a large enough buffer. If it is provided, it must be at least
++ * ZSTD_IOBUF_SIZE large.
++ */
++ if (in_buf == NULL) {
++ in_allocated = large_malloc(ZSTD_IOBUF_SIZE);
++ if (in_allocated == NULL) {
++ error("Out of memory while allocating input buffer");
++ err = -1;
++ goto out;
++ }
++ in_buf = in_allocated;
++ in_len = 0;
++ }
++ /* Read the first chunk, since we need to decode the frame header. */
++ if (fill != NULL)
++ in_len = fill(in_buf, ZSTD_IOBUF_SIZE);
++ if (in_len < 0) {
++ error("ZSTD-compressed data is truncated");
++ err = -1;
++ goto out;
++ }
++ /* Set the first non-empty input buffer. */
++ in.src = in_buf;
++ in.pos = 0;
++ in.size = in_len;
++ /* Allocate the output buffer if we are using flush(). */
++ if (flush != NULL) {
++ out_allocated = large_malloc(ZSTD_IOBUF_SIZE);
++ if (out_allocated == NULL) {
++ error("Out of memory while allocating output buffer");
++ err = -1;
++ goto out;
++ }
++ out_buf = out_allocated;
++ out_len = ZSTD_IOBUF_SIZE;
++ }
++ /* Set the output buffer. */
++ out.dst = out_buf;
++ out.pos = 0;
++ out.size = out_len;
++
++ /*
++ * We need to know the window size to allocate the ZSTD_DStream.
++ * Since we are streaming, we need to allocate a buffer for the sliding
++ * window. The window size varies from 1 KB to ZSTD_WINDOWSIZE_MAX
++ * (8 MB), so it is important to use the actual value so as not to
++ * waste memory when it is smaller.
++ */
++	ret = ZSTD_getFrameParams(&params, in.src, in.size);
++ err = handle_zstd_error(ret, error);
++ if (err)
++ goto out;
++ if (ret != 0) {
++ error("ZSTD-compressed data has an incomplete frame header");
++ err = -1;
++ goto out;
++ }
++ if (params.windowSize > ZSTD_WINDOWSIZE_MAX) {
++ error("ZSTD-compressed data has too large a window size");
++ err = -1;
++ goto out;
++ }
++
++ /*
++ * Allocate the ZSTD_DStream now that we know how much memory is
++ * required.
++ */
++ wksp_size = ZSTD_DStreamWorkspaceBound(params.windowSize);
++ wksp = large_malloc(wksp_size);
++ dstream = ZSTD_initDStream(params.windowSize, wksp, wksp_size);
++ if (dstream == NULL) {
++ error("Out of memory while allocating ZSTD_DStream");
++ err = -1;
++ goto out;
++ }
++
++ /*
++ * Decompression loop:
++ * Read more data if necessary (error if no more data can be read).
++ * Call the decompression function, which returns 0 when finished.
++ * Flush any data produced if using flush().
++ */
++ if (in_pos != NULL)
++ *in_pos = 0;
++ do {
++ /*
++ * If we need to reload data, either we have fill() and can
++ * try to get more data, or we don't and the input is truncated.
++ */
++ if (in.pos == in.size) {
++ if (in_pos != NULL)
++ *in_pos += in.pos;
++ in_len = fill ? fill(in_buf, ZSTD_IOBUF_SIZE) : -1;
++ if (in_len < 0) {
++ error("ZSTD-compressed data is truncated");
++ err = -1;
++ goto out;
++ }
++ in.pos = 0;
++ in.size = in_len;
++ }
++ /* Returns zero when the frame is complete. */
++ ret = ZSTD_decompressStream(dstream, &out, &in);
++ err = handle_zstd_error(ret, error);
++ if (err)
++ goto out;
++ /* Flush all of the data produced if using flush(). */
++ if (flush != NULL && out.pos > 0) {
++ if (out.pos != flush(out.dst, out.pos)) {
++ error("Failed to flush()");
++ err = -1;
++ goto out;
++ }
++ out.pos = 0;
++ }
++ } while (ret != 0);
++
++ if (in_pos != NULL)
++ *in_pos += in.pos;
++
++ err = 0;
++out:
++ if (in_allocated != NULL)
++ large_free(in_allocated);
++ if (out_allocated != NULL)
++ large_free(out_allocated);
++ if (wksp != NULL)
++ large_free(wksp);
++ return err;
++}
++
++#ifndef ZSTD_PREBOOT
++STATIC int INIT unzstd(unsigned char *buf, long len,
++ long (*fill)(void*, unsigned long),
++ long (*flush)(void*, unsigned long),
++ unsigned char *out_buf,
++ long *pos,
++ void (*error)(char *x))
++{
++ return __unzstd(buf, len, fill, flush, out_buf, 0, pos, error);
++}
++#else
++STATIC int INIT __decompress(unsigned char *buf, long len,
++ long (*fill)(void*, unsigned long),
++ long (*flush)(void*, unsigned long),
++ unsigned char *out_buf, long out_len,
++ long *pos,
++ void (*error)(char *x))
++{
++ return __unzstd(buf, len, fill, flush, out_buf, out_len, pos, error);
++}
++#endif
diff --git a/5003_ZSTD-v5-4-8-add-support-for-zstd-compres-kern.patch b/5003_ZSTD-v5-4-8-add-support-for-zstd-compres-kern.patch
new file mode 100644
index 0000000..d9dc79e
--- /dev/null
+++ b/5003_ZSTD-v5-4-8-add-support-for-zstd-compres-kern.patch
@@ -0,0 +1,65 @@
+diff --git a/init/Kconfig b/init/Kconfig
+index 492bb7000aa4..806874fdd663 100644
+--- a/init/Kconfig
++++ b/init/Kconfig
+@@ -176,13 +176,16 @@ config HAVE_KERNEL_LZO
+ config HAVE_KERNEL_LZ4
+ bool
+
++config HAVE_KERNEL_ZSTD
++ bool
++
+ config HAVE_KERNEL_UNCOMPRESSED
+ bool
+
+ choice
+ prompt "Kernel compression mode"
+ default KERNEL_GZIP
+- depends on HAVE_KERNEL_GZIP || HAVE_KERNEL_BZIP2 || HAVE_KERNEL_LZMA || HAVE_KERNEL_XZ || HAVE_KERNEL_LZO || HAVE_KERNEL_LZ4 || HAVE_KERNEL_UNCOMPRESSED
++ depends on HAVE_KERNEL_GZIP || HAVE_KERNEL_BZIP2 || HAVE_KERNEL_LZMA || HAVE_KERNEL_XZ || HAVE_KERNEL_LZO || HAVE_KERNEL_LZ4 || HAVE_KERNEL_ZSTD || HAVE_KERNEL_UNCOMPRESSED
+ help
+ The linux kernel is a kind of self-extracting executable.
+ Several compression algorithms are available, which differ
+@@ -261,6 +264,16 @@ config KERNEL_LZ4
+ is about 8% bigger than LZO. But the decompression speed is
+ faster than LZO.
+
++config KERNEL_ZSTD
++ bool "ZSTD"
++ depends on HAVE_KERNEL_ZSTD
++ help
++ ZSTD is a compression algorithm targeting intermediate compression
++ with fast decompression speed. It will compress better than GZIP and
++ decompress around the same speed as LZO, but slower than LZ4. You
++ will need at least 192 KB of RAM for booting. The zstd command
++ line tool is required for compression.
++
+ config KERNEL_UNCOMPRESSED
+ bool "None"
+ depends on HAVE_KERNEL_UNCOMPRESSED
+diff --git a/scripts/Makefile.lib b/scripts/Makefile.lib
+index b12dd5ba4896..efe69b78d455 100644
+--- a/scripts/Makefile.lib
++++ b/scripts/Makefile.lib
+@@ -405,6 +405,21 @@ quiet_cmd_xzkern = XZKERN $@
+ quiet_cmd_xzmisc = XZMISC $@
+ cmd_xzmisc = cat $(real-prereqs) | xz --check=crc32 --lzma2=dict=1MiB > $@
+
++# ZSTD
++# ---------------------------------------------------------------------------
++# Appends the uncompressed size of the data using size_append. The .zst
++# format has the size information available at the beginning of the file too,
++# but it's in a more complex format and it's good to avoid changing the part
++# of the boot code that reads the uncompressed size.
++# Note that the bytes added by size_append will make the zstd tool think that
++# the file is corrupt. This is expected.
++
++quiet_cmd_zstd = ZSTD $@
++cmd_zstd = (cat $(filter-out FORCE,$^) | \
++ zstd -19 && \
++ $(call size_append, $(filter-out FORCE,$^))) > $@ || \
++ (rm -f $@ ; false)
++
+ # ASM offsets
+ # ---------------------------------------------------------------------------
+
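The Makefile.lib comment above notes that cmd_zstd appends the uncompressed size after the compressed stream via Kbuild's size_append, which is why the plain zstd tool then reports the file as corrupt. As a rough standalone sketch of that append step (emit_le32 is a hypothetical stand-in for the real Make macro, and the 300000-byte payload is purely illustrative):

```shell
# Sketch, not the real Kbuild macro: size_append emits the payload size
# as a 32-bit little-endian integer after the compressed stream.
emit_le32() {
    v=$1
    for i in 0 1 2 3; do
        # octal escapes keep this portable across POSIX printf implementations
        printf "$(printf '\\%03o' $(( (v >> (8 * i)) & 0xff )))"
    done
}

out="${TMPDIR:-/tmp}/payload.zst.sketch"
head -c 300000 /dev/zero > "$out"   # stand-in for the `zstd -19` output
emit_le32 300000 >> "$out"          # what $(call size_append, ...) adds
wc -c < "$out"                      # 4 bytes longer than the payload
```

The boot code can then read those trailing 4 bytes to learn the output size without parsing the .zst frame header.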
diff --git a/5004_ZSTD-v5-5-8-add-support-for-zstd-compressed-initramfs.patch b/5004_ZSTD-v5-5-8-add-support-for-zstd-compressed-initramfs.patch
new file mode 100644
index 0000000..0096db1
--- /dev/null
+++ b/5004_ZSTD-v5-5-8-add-support-for-zstd-compressed-initramfs.patch
@@ -0,0 +1,50 @@
+diff --git a/usr/Kconfig b/usr/Kconfig
+index 96afb03b65f9..2599bc21c1b2 100644
+--- a/usr/Kconfig
++++ b/usr/Kconfig
+@@ -100,6 +100,15 @@ config RD_LZ4
+ Support loading of a LZ4 encoded initial ramdisk or cpio buffer
+ If unsure, say N.
+
++config RD_ZSTD
++ bool "Support initial ramdisk/ramfs compressed using ZSTD"
++ default y
++ depends on BLK_DEV_INITRD
++ select DECOMPRESS_ZSTD
++ help
++ Support loading of a ZSTD encoded initial ramdisk or cpio buffer.
++ If unsure, say N.
++
+ choice
+ prompt "Built-in initramfs compression mode"
+ depends on INITRAMFS_SOURCE != ""
+@@ -196,6 +205,17 @@ config INITRAMFS_COMPRESSION_LZ4
+ If you choose this, keep in mind that most distros don't provide lz4
+ by default which could cause a build failure.
+
++config INITRAMFS_COMPRESSION_ZSTD
++ bool "ZSTD"
++ depends on RD_ZSTD
++ help
++ ZSTD is a compression algorithm targeting intermediate compression
++ with fast decompression speed. It will compress better than GZIP and
++ decompress around the same speed as LZO, but slower than LZ4.
++
++ If you choose this, keep in mind that you may need to install the zstd
++ tool to be able to compress the initramfs.
++
+ config INITRAMFS_COMPRESSION_NONE
+ bool "None"
+ help
+diff --git a/usr/Makefile b/usr/Makefile
+index c12e6b15ce72..b1a81a40eab1 100644
+--- a/usr/Makefile
++++ b/usr/Makefile
+@@ -15,6 +15,7 @@ compress-$(CONFIG_INITRAMFS_COMPRESSION_LZMA) := lzma
+ compress-$(CONFIG_INITRAMFS_COMPRESSION_XZ) := xzmisc
+ compress-$(CONFIG_INITRAMFS_COMPRESSION_LZO) := lzo
+ compress-$(CONFIG_INITRAMFS_COMPRESSION_LZ4) := lz4
++compress-$(CONFIG_INITRAMFS_COMPRESSION_ZSTD) := zstd
+
+ obj-$(CONFIG_BLK_DEV_INITRD) := initramfs_data.o
+
diff --git a/5005_ZSTD-v5-6-8-bump-ZO-z-extra-bytes-margin.patch b/5005_ZSTD-v5-6-8-bump-ZO-z-extra-bytes-margin.patch
new file mode 100644
index 0000000..4e86d56
--- /dev/null
+++ b/5005_ZSTD-v5-6-8-bump-ZO-z-extra-bytes-margin.patch
@@ -0,0 +1,20 @@
+diff --git a/arch/x86/boot/header.S b/arch/x86/boot/header.S
+index 735ad7f21ab0..6dbd7e9f74c9 100644
+--- a/arch/x86/boot/header.S
++++ b/arch/x86/boot/header.S
+@@ -539,8 +539,14 @@ pref_address: .quad LOAD_PHYSICAL_ADDR # preferred load addr
+ # the size-dependent part now grows so fast.
+ #
+ # extra_bytes = (uncompressed_size >> 8) + 65536
++#
++# ZSTD compressed data grows by at most 3 bytes per 128K, and only has a 22
++# byte fixed overhead but has a maximum block size of 128K, so it needs a
++# larger margin.
++#
++# extra_bytes = (uncompressed_size >> 8) + 131072
+
+-#define ZO_z_extra_bytes ((ZO_z_output_len >> 8) + 65536)
++#define ZO_z_extra_bytes ((ZO_z_output_len >> 8) + 131072)
+ #if ZO_z_output_len > ZO_z_input_len
+ # define ZO_z_extract_offset (ZO_z_output_len + ZO_z_extra_bytes - \
+ ZO_z_input_len)
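The margin bump in the header.S hunk above is easy to sanity-check with shell arithmetic; the 64 MiB uncompressed size below is an illustrative example, not a real build figure:

```shell
# Compare the old and new ZO_z_extra_bytes formulas for a sample 64 MiB
# uncompressed kernel image (out_len is an example value only).
out_len=$(( 64 * 1024 * 1024 ))
old_margin=$(( (out_len >> 8) + 65536 ))    # pre-patch: gzip-era margin
zstd_margin=$(( (out_len >> 8) + 131072 ))  # patched: covers ZSTD's 128K blocks
echo "old=$old_margin zstd=$zstd_margin"
```

For this example the patched formula reserves 393216 bytes instead of 327680, i.e. exactly the extra 64K the larger fixed term adds.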
diff --git a/5006_ZSTD-v5-7-8-support-for-ZSTD-compressed-kernel.patch b/5006_ZSTD-v5-7-8-support-for-ZSTD-compressed-kernel.patch
new file mode 100644
index 0000000..6147136
--- /dev/null
+++ b/5006_ZSTD-v5-7-8-support-for-ZSTD-compressed-kernel.patch
@@ -0,0 +1,92 @@
+diff --git a/Documentation/x86/boot.rst b/Documentation/x86/boot.rst
+index fa7ddc0428c8..0404e99dc1d4 100644
+--- a/Documentation/x86/boot.rst
++++ b/Documentation/x86/boot.rst
+@@ -782,9 +782,9 @@ Protocol: 2.08+
+ uncompressed data should be determined using the standard magic
+ numbers. The currently supported compression formats are gzip
+ (magic numbers 1F 8B or 1F 9E), bzip2 (magic number 42 5A), LZMA
+- (magic number 5D 00), XZ (magic number FD 37), and LZ4 (magic number
+- 02 21). The uncompressed payload is currently always ELF (magic
+- number 7F 45 4C 46).
++ (magic number 5D 00), XZ (magic number FD 37), LZ4 (magic number
++ 02 21) and ZSTD (magic number 28 B5). The uncompressed payload is
++ currently always ELF (magic number 7F 45 4C 46).
+
+ ============ ==============
+ Field name: payload_length
+diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
+index 886fa8368256..912f783bc01a 100644
+--- a/arch/x86/Kconfig
++++ b/arch/x86/Kconfig
+@@ -185,6 +185,7 @@ config X86
+ select HAVE_KERNEL_LZMA
+ select HAVE_KERNEL_LZO
+ select HAVE_KERNEL_XZ
++ select HAVE_KERNEL_ZSTD
+ select HAVE_KPROBES
+ select HAVE_KPROBES_ON_FTRACE
+ select HAVE_FUNCTION_ERROR_INJECTION
+diff --git a/arch/x86/boot/compressed/Makefile b/arch/x86/boot/compressed/Makefile
+index 7619742f91c9..471e61400a2e 100644
+--- a/arch/x86/boot/compressed/Makefile
++++ b/arch/x86/boot/compressed/Makefile
+@@ -26,7 +26,7 @@ OBJECT_FILES_NON_STANDARD := y
+ KCOV_INSTRUMENT := n
+
+ targets := vmlinux vmlinux.bin vmlinux.bin.gz vmlinux.bin.bz2 vmlinux.bin.lzma \
+- vmlinux.bin.xz vmlinux.bin.lzo vmlinux.bin.lz4
++ vmlinux.bin.xz vmlinux.bin.lzo vmlinux.bin.lz4 vmlinux.bin.zst
+
+ KBUILD_CFLAGS := -m$(BITS) -O2
+ KBUILD_CFLAGS += -fno-strict-aliasing $(call cc-option, -fPIE, -fPIC)
+@@ -145,6 +145,8 @@ $(obj)/vmlinux.bin.lzo: $(vmlinux.bin.all-y) FORCE
+ $(call if_changed,lzo)
+ $(obj)/vmlinux.bin.lz4: $(vmlinux.bin.all-y) FORCE
+ $(call if_changed,lz4)
++$(obj)/vmlinux.bin.zst: $(vmlinux.bin.all-y) FORCE
++ $(call if_changed,zstd)
+
+ suffix-$(CONFIG_KERNEL_GZIP) := gz
+ suffix-$(CONFIG_KERNEL_BZIP2) := bz2
+@@ -152,6 +154,7 @@ suffix-$(CONFIG_KERNEL_LZMA) := lzma
+ suffix-$(CONFIG_KERNEL_XZ) := xz
+ suffix-$(CONFIG_KERNEL_LZO) := lzo
+ suffix-$(CONFIG_KERNEL_LZ4) := lz4
++suffix-$(CONFIG_KERNEL_ZSTD) := zst
+
+ quiet_cmd_mkpiggy = MKPIGGY $@
+ cmd_mkpiggy = $(obj)/mkpiggy $< > $@
+diff --git a/arch/x86/boot/compressed/misc.c b/arch/x86/boot/compressed/misc.c
+index 9652d5c2afda..39e592d0e0b4 100644
+--- a/arch/x86/boot/compressed/misc.c
++++ b/arch/x86/boot/compressed/misc.c
+@@ -77,6 +77,10 @@ static int lines, cols;
+ #ifdef CONFIG_KERNEL_LZ4
+ #include "../../../../lib/decompress_unlz4.c"
+ #endif
++
++#ifdef CONFIG_KERNEL_ZSTD
++#include "../../../../lib/decompress_unzstd.c"
++#endif
+ /*
+ * NOTE: When adding a new decompressor, please update the analysis in
+ * ../header.S.
+diff --git a/arch/x86/include/asm/boot.h b/arch/x86/include/asm/boot.h
+index 680c320363db..d6dd43d25d9f 100644
+--- a/arch/x86/include/asm/boot.h
++++ b/arch/x86/include/asm/boot.h
+@@ -24,9 +24,11 @@
+ # error "Invalid value for CONFIG_PHYSICAL_ALIGN"
+ #endif
+
+-#ifdef CONFIG_KERNEL_BZIP2
++#if defined(CONFIG_KERNEL_BZIP2)
+ # define BOOT_HEAP_SIZE 0x400000
+-#else /* !CONFIG_KERNEL_BZIP2 */
++#elif defined(CONFIG_KERNEL_ZSTD)
++# define BOOT_HEAP_SIZE 0x30000
++#else
+ # define BOOT_HEAP_SIZE 0x10000
+ #endif
+
diff --git a/5007_ZSTD-v5-8-8-gitignore-add-ZSTD-compressed-files.patch b/5007_ZSTD-v5-8-8-gitignore-add-ZSTD-compressed-files.patch
new file mode 100644
index 0000000..adf8578
--- /dev/null
+++ b/5007_ZSTD-v5-8-8-gitignore-add-ZSTD-compressed-files.patch
@@ -0,0 +1,12 @@
+diff --git a/.gitignore b/.gitignore
+index 2258e906f01c..23871de69072 100644
+--- a/.gitignore
++++ b/.gitignore
+@@ -44,6 +44,7 @@
+ *.tab.[ch]
+ *.tar
+ *.xz
++*.zst
+ Module.symvers
+ modules.builtin
+ modules.order
* [gentoo-commits] proj/linux-patches:5.8 commit in: /
@ 2020-06-29 17:25 Mike Pagano
From: Mike Pagano @ 2020-06-29 17:25 UTC
To: gentoo-commits
commit: d26a417699e150bcd84755beeb7969a29b512526
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Mon Jun 29 17:24:59 2020 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Mon Jun 29 17:24:59 2020 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=d26a4176
Kernel patch enables gcc >= v10.1 optimizations for additional CPUs.
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
5013_enable-cpu-optimizations-for-gcc10.patch | 671 ++++++++++++++++++++++++++
2 files changed, 675 insertions(+)
diff --git a/0000_README b/0000_README
index 8373b37..f93b340 100644
--- a/0000_README
+++ b/0000_README
@@ -102,3 +102,7 @@ Desc: x86: Add support for ZSTD compressed kernel
Patch: 5007_ZSTD-v5-8-8-gitignore-add-ZSTD-compressed-files.patch
From: https://lkml.org/lkml/2020/4/1/29
Desc: .gitignore: add ZSTD-compressed files
+
+Patch: 5013_enable-cpu-optimizations-for-gcc10.patch
+From: https://github.com/graysky2/kernel_gcc_patch/
+Desc: Kernel patch enables gcc >= v10.1 optimizations for additional CPUs.
diff --git a/5013_enable-cpu-optimizations-for-gcc10.patch b/5013_enable-cpu-optimizations-for-gcc10.patch
new file mode 100644
index 0000000..01cbaa7
--- /dev/null
+++ b/5013_enable-cpu-optimizations-for-gcc10.patch
@@ -0,0 +1,671 @@
+WARNING
+This patch works with gcc versions 10.1+ and with kernel version 5.8+ and should
+NOT be applied when compiling on older versions of gcc due to key name changes
+of the march flags introduced with the version 4.9 release of gcc.[1]
+
+Use the older version of this patch hosted on the same github for older
+versions of gcc.
+
+FEATURES
+This patch adds additional CPU options to the Linux kernel accessible under:
+ Processor type and features --->
+ Processor family --->
+
+The expanded microarchitectures include:
+* AMD Improved K8-family
+* AMD K10-family
+* AMD Family 10h (Barcelona)
+* AMD Family 14h (Bobcat)
+* AMD Family 16h (Jaguar)
+* AMD Family 15h (Bulldozer)
+* AMD Family 15h (Piledriver)
+* AMD Family 15h (Steamroller)
+* AMD Family 15h (Excavator)
+* AMD Family 17h (Zen)
+* AMD Family 17h (Zen 2)
+* Intel Silvermont low-power processors
+* Intel Goldmont low-power processors (Apollo Lake and Denverton)
+* Intel Goldmont Plus low-power processors (Gemini Lake)
+* Intel 1st Gen Core i3/i5/i7 (Nehalem)
+* Intel 1.5 Gen Core i3/i5/i7 (Westmere)
+* Intel 2nd Gen Core i3/i5/i7 (Sandybridge)
+* Intel 3rd Gen Core i3/i5/i7 (Ivybridge)
+* Intel 4th Gen Core i3/i5/i7 (Haswell)
+* Intel 5th Gen Core i3/i5/i7 (Broadwell)
+* Intel 6th Gen Core i3/i5/i7 (Skylake)
+* Intel 6th Gen Core i7/i9 (Skylake X)
+* Intel 8th Gen Core i3/i5/i7 (Cannon Lake)
+* Intel 10th Gen Core i7/i9 (Ice Lake)
+* Intel Xeon (Cascade Lake)
+* Intel Xeon (Cooper Lake)
+* Intel 3rd Gen 10nm++ i3/i5/i7/i9-family (Tiger Lake)
+
+It also offers to compile passing the 'native' option which, "selects the CPU
+to generate code for at compilation time by determining the processor type of
+the compiling machine. Using -march=native enables all instruction subsets
+supported by the local machine and will produce code optimized for the local
+machine under the constraints of the selected instruction set."[2]
+
+Do NOT try using the 'native' option on AMD Piledriver, Steamroller, or
+Excavator CPUs (-march=bdver{2,3,4} flag). The build will error out due to the
+kernel's objtool issue with these.[3a,b]
+
+MINOR NOTES
+This patch also changes 'atom' to 'bonnell' in accordance with the gcc v4.9
+changes. Note that upstream is using the deprecated 'march=atom' flags when I
+believe it should use the newer 'march=bonnell' flag for atom processors.[4]
+
+It is not recommended to compile on Atom-CPUs with the 'native' option.[5] The
+recommendation is to use the 'atom' option instead.
+
+BENEFITS
+Small but real speed increases are measurable using a make-based benchmark
+comparing a generic kernel to one built with one of the respective microarchs.
+
+See the following experimental evidence supporting this statement:
+https://github.com/graysky2/kernel_gcc_patch
+
+REQUIREMENTS
+linux version >=5.8
+gcc version >=10.1
+
+ACKNOWLEDGMENTS
+This patch builds on the seminal work by Jeroen.[6]
+
+REFERENCES
+1. https://gcc.gnu.org/gcc-4.9/changes.html
+2. https://gcc.gnu.org/onlinedocs/gcc/x86-Options.html
+3a. https://gcc.gnu.org/bugzilla/show_bug.cgi?id=95671#c11
+3b. https://github.com/graysky2/kernel_gcc_patch/issues/55
+4. https://bugzilla.kernel.org/show_bug.cgi?id=77461
+5. https://github.com/graysky2/kernel_gcc_patch/issues/15
+6. http://www.linuxforge.net/docs/linux/linux-gcc.php
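The 'native' option described above has GCC pick the target at compile time. A quick way to see what -march=native resolves to on a given host is sketched below; it assumes a host gcc and degrades to a note when gcc is absent:

```shell
# Inspect what -march=native would select on this machine (illustrative).
# `gcc -Q --help=target` prints the resolved target options.
if command -v gcc >/dev/null 2>&1; then
    gcc -Q --help=target -march=native 2>/dev/null \
        | grep -m1 -- '-march=' || echo "march: unknown"
else
    echo "gcc not installed"
fi
```

This is the same information the compiler uses when MNATIVE is selected, so it is a reasonable preview of which Processor family entry matches your hardware.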
+
+--- a/arch/x86/include/asm/vermagic.h 2020-06-14 15:45:04.000000000 -0400
++++ b/arch/x86/include/asm/vermagic.h 2020-06-15 09:28:19.867840705 -0400
+@@ -17,6 +17,40 @@
+ #define MODULE_PROC_FAMILY "586MMX "
+ #elif defined CONFIG_MCORE2
+ #define MODULE_PROC_FAMILY "CORE2 "
++#elif defined CONFIG_MNATIVE
++#define MODULE_PROC_FAMILY "NATIVE "
++#elif defined CONFIG_MNEHALEM
++#define MODULE_PROC_FAMILY "NEHALEM "
++#elif defined CONFIG_MWESTMERE
++#define MODULE_PROC_FAMILY "WESTMERE "
++#elif defined CONFIG_MSILVERMONT
++#define MODULE_PROC_FAMILY "SILVERMONT "
++#elif defined CONFIG_MGOLDMONT
++#define MODULE_PROC_FAMILY "GOLDMONT "
++#elif defined CONFIG_MGOLDMONTPLUS
++#define MODULE_PROC_FAMILY "GOLDMONTPLUS "
++#elif defined CONFIG_MSANDYBRIDGE
++#define MODULE_PROC_FAMILY "SANDYBRIDGE "
++#elif defined CONFIG_MIVYBRIDGE
++#define MODULE_PROC_FAMILY "IVYBRIDGE "
++#elif defined CONFIG_MHASWELL
++#define MODULE_PROC_FAMILY "HASWELL "
++#elif defined CONFIG_MBROADWELL
++#define MODULE_PROC_FAMILY "BROADWELL "
++#elif defined CONFIG_MSKYLAKE
++#define MODULE_PROC_FAMILY "SKYLAKE "
++#elif defined CONFIG_MSKYLAKEX
++#define MODULE_PROC_FAMILY "SKYLAKEX "
++#elif defined CONFIG_MCANNONLAKE
++#define MODULE_PROC_FAMILY "CANNONLAKE "
++#elif defined CONFIG_MICELAKE
++#define MODULE_PROC_FAMILY "ICELAKE "
++#elif defined CONFIG_MCASCADELAKE
++#define MODULE_PROC_FAMILY "CASCADELAKE "
++#elif defined CONFIG_MCOOPERLAKE
++#define MODULE_PROC_FAMILY "COOPERLAKE "
++#elif defined CONFIG_MTIGERLAKE
++#define MODULE_PROC_FAMILY "TIGERLAKE "
+ #elif defined CONFIG_MATOM
+ #define MODULE_PROC_FAMILY "ATOM "
+ #elif defined CONFIG_M686
+@@ -35,6 +69,28 @@
+ #define MODULE_PROC_FAMILY "K7 "
+ #elif defined CONFIG_MK8
+ #define MODULE_PROC_FAMILY "K8 "
++#elif defined CONFIG_MK8SSE3
++#define MODULE_PROC_FAMILY "K8SSE3 "
++#elif defined CONFIG_MK10
++#define MODULE_PROC_FAMILY "K10 "
++#elif defined CONFIG_MBARCELONA
++#define MODULE_PROC_FAMILY "BARCELONA "
++#elif defined CONFIG_MBOBCAT
++#define MODULE_PROC_FAMILY "BOBCAT "
++#elif defined CONFIG_MBULLDOZER
++#define MODULE_PROC_FAMILY "BULLDOZER "
++#elif defined CONFIG_MPILEDRIVER
++#define MODULE_PROC_FAMILY "PILEDRIVER "
++#elif defined CONFIG_MSTEAMROLLER
++#define MODULE_PROC_FAMILY "STEAMROLLER "
++#elif defined CONFIG_MJAGUAR
++#define MODULE_PROC_FAMILY "JAGUAR "
++#elif defined CONFIG_MEXCAVATOR
++#define MODULE_PROC_FAMILY "EXCAVATOR "
++#elif defined CONFIG_MZEN
++#define MODULE_PROC_FAMILY "ZEN "
++#elif defined CONFIG_MZEN2
++#define MODULE_PROC_FAMILY "ZEN2 "
+ #elif defined CONFIG_MELAN
+ #define MODULE_PROC_FAMILY "ELAN "
+ #elif defined CONFIG_MCRUSOE
+--- a/arch/x86/Kconfig.cpu 2020-06-14 15:45:04.000000000 -0400
++++ b/arch/x86/Kconfig.cpu 2020-06-15 09:28:19.871174111 -0400
+@@ -123,6 +123,7 @@ config MPENTIUMM
+ config MPENTIUM4
+ bool "Pentium-4/Celeron(P4-based)/Pentium-4 M/older Xeon"
+ depends on X86_32
++ select X86_P6_NOP
+ help
+ Select this for Intel Pentium 4 chips. This includes the
+ Pentium 4, Pentium D, P4-based Celeron and Xeon, and
+@@ -155,9 +156,8 @@ config MPENTIUM4
+ -Paxville
+ -Dempsey
+
+-
+ config MK6
+- bool "K6/K6-II/K6-III"
++ bool "AMD K6/K6-II/K6-III"
+ depends on X86_32
+ help
+ Select this for an AMD K6-family processor. Enables use of
+@@ -165,7 +165,7 @@ config MK6
+ flags to GCC.
+
+ config MK7
+- bool "Athlon/Duron/K7"
++ bool "AMD Athlon/Duron/K7"
+ depends on X86_32
+ help
+ Select this for an AMD Athlon K7-family processor. Enables use of
+@@ -173,12 +173,90 @@ config MK7
+ flags to GCC.
+
+ config MK8
+- bool "Opteron/Athlon64/Hammer/K8"
++ bool "AMD Opteron/Athlon64/Hammer/K8"
+ help
+ Select this for an AMD Opteron or Athlon64 Hammer-family processor.
+ Enables use of some extended instructions, and passes appropriate
+ optimization flags to GCC.
+
++config MK8SSE3
++ bool "AMD Opteron/Athlon64/Hammer/K8 with SSE3"
++ help
++ Select this for improved AMD Opteron or Athlon64 Hammer-family processors.
++ Enables use of some extended instructions, and passes appropriate
++ optimization flags to GCC.
++
++config MK10
++ bool "AMD 61xx/7x50/PhenomX3/X4/II/K10"
++ help
++ Select this for an AMD 61xx Eight-Core Magny-Cours, Athlon X2 7x50,
++ Phenom X3/X4/II, Athlon II X2/X3/X4, or Turion II-family processor.
++ Enables use of some extended instructions, and passes appropriate
++ optimization flags to GCC.
++
++config MBARCELONA
++ bool "AMD Barcelona"
++ help
++ Select this for AMD Family 10h Barcelona processors.
++
++ Enables -march=barcelona
++
++config MBOBCAT
++ bool "AMD Bobcat"
++ help
++ Select this for AMD Family 14h Bobcat processors.
++
++ Enables -march=btver1
++
++config MJAGUAR
++ bool "AMD Jaguar"
++ help
++ Select this for AMD Family 16h Jaguar processors.
++
++ Enables -march=btver2
++
++config MBULLDOZER
++ bool "AMD Bulldozer"
++ help
++ Select this for AMD Family 15h Bulldozer processors.
++
++ Enables -march=bdver1
++
++config MPILEDRIVER
++ bool "AMD Piledriver"
++ help
++ Select this for AMD Family 15h Piledriver processors.
++
++ Enables -march=bdver2
++
++config MSTEAMROLLER
++ bool "AMD Steamroller"
++ help
++ Select this for AMD Family 15h Steamroller processors.
++
++ Enables -march=bdver3
++
++config MEXCAVATOR
++ bool "AMD Excavator"
++ help
++ Select this for AMD Family 15h Excavator processors.
++
++ Enables -march=bdver4
++
++config MZEN
++ bool "AMD Zen"
++ help
++ Select this for AMD Family 17h Zen processors.
++
++ Enables -march=znver1
++
++config MZEN2
++ bool "AMD Zen 2"
++ help
++ Select this for AMD Family 17h Zen 2 processors.
++
++ Enables -march=znver2
++
+ config MCRUSOE
+ bool "Crusoe"
+ depends on X86_32
+@@ -260,6 +338,7 @@ config MVIAC7
+
+ config MPSC
+ bool "Intel P4 / older Netburst based Xeon"
++ select X86_P6_NOP
+ depends on X86_64
+ help
+ Optimize for Intel Pentium 4, Pentium D and older Nocona/Dempsey
+@@ -269,8 +348,19 @@ config MPSC
+ using the cpu family field
+ in /proc/cpuinfo. Family 15 is an older Xeon, Family 6 a newer one.
+
++config MATOM
++ bool "Intel Atom"
++ select X86_P6_NOP
++ help
++
++ Select this for the Intel Atom platform. Intel Atom CPUs have an
++ in-order pipelining architecture and thus can benefit from
++ accordingly optimized code. Use a recent GCC with specific Atom
++ support in order to fully benefit from selecting this option.
++
+ config MCORE2
+- bool "Core 2/newer Xeon"
++ bool "Intel Core 2"
++ select X86_P6_NOP
+ help
+
+ Select this for Intel Core 2 and newer Core 2 Xeons (Xeon 51xx and
+@@ -278,14 +368,151 @@ config MCORE2
+ family in /proc/cpuinfo. Newer ones have 6 and older ones 15
+ (not a typo)
+
+-config MATOM
+- bool "Intel Atom"
++ Enables -march=core2
++
++config MNEHALEM
++ bool "Intel Nehalem"
++ select X86_P6_NOP
+ help
+
+- Select this for the Intel Atom platform. Intel Atom CPUs have an
+- in-order pipelining architecture and thus can benefit from
+- accordingly optimized code. Use a recent GCC with specific Atom
+- support in order to fully benefit from selecting this option.
++ Select this for 1st Gen Core processors in the Nehalem family.
++
++ Enables -march=nehalem
++
++config MWESTMERE
++ bool "Intel Westmere"
++ select X86_P6_NOP
++ help
++
++ Select this for the Intel Westmere formerly Nehalem-C family.
++
++ Enables -march=westmere
++
++config MSILVERMONT
++ bool "Intel Silvermont"
++ select X86_P6_NOP
++ help
++
++ Select this for the Intel Silvermont platform.
++
++ Enables -march=silvermont
++
++config MGOLDMONT
++ bool "Intel Goldmont"
++ select X86_P6_NOP
++ help
++
++ Select this for the Intel Goldmont platform including Apollo Lake and Denverton.
++
++ Enables -march=goldmont
++
++config MGOLDMONTPLUS
++ bool "Intel Goldmont Plus"
++ select X86_P6_NOP
++ help
++
++ Select this for the Intel Goldmont Plus platform including Gemini Lake.
++
++ Enables -march=goldmont-plus
++
++config MSANDYBRIDGE
++ bool "Intel Sandy Bridge"
++ select X86_P6_NOP
++ help
++
++ Select this for 2nd Gen Core processors in the Sandy Bridge family.
++
++ Enables -march=sandybridge
++
++config MIVYBRIDGE
++ bool "Intel Ivy Bridge"
++ select X86_P6_NOP
++ help
++
++ Select this for 3rd Gen Core processors in the Ivy Bridge family.
++
++ Enables -march=ivybridge
++
++config MHASWELL
++ bool "Intel Haswell"
++ select X86_P6_NOP
++ help
++
++ Select this for 4th Gen Core processors in the Haswell family.
++
++ Enables -march=haswell
++
++config MBROADWELL
++ bool "Intel Broadwell"
++ select X86_P6_NOP
++ help
++
++ Select this for 5th Gen Core processors in the Broadwell family.
++
++ Enables -march=broadwell
++
++config MSKYLAKE
++ bool "Intel Skylake"
++ select X86_P6_NOP
++ help
++
++ Select this for 6th Gen Core processors in the Skylake family.
++
++ Enables -march=skylake
++
++config MSKYLAKEX
++ bool "Intel Skylake X"
++ select X86_P6_NOP
++ help
++
++ Select this for 6th Gen Core processors in the Skylake X family.
++
++ Enables -march=skylake-avx512
++
++config MCANNONLAKE
++ bool "Intel Cannon Lake"
++ select X86_P6_NOP
++ help
++
++ Select this for 8th Gen Core processors
++
++ Enables -march=cannonlake
++
++config MICELAKE
++ bool "Intel Ice Lake"
++ select X86_P6_NOP
++ help
++
++ Select this for 10th Gen Core processors in the Ice Lake family.
++
++ Enables -march=icelake-client
++
++config MCASCADELAKE
++ bool "Intel Cascade Lake"
++ select X86_P6_NOP
++ help
++
++ Select this for Xeon processors in the Cascade Lake family.
++
++ Enables -march=cascadelake
++
++config MCOOPERLAKE
++ bool "Intel Cooper Lake"
++ select X86_P6_NOP
++ help
++
++ Select this for Xeon processors in the Cooper Lake family.
++
++ Enables -march=cooperlake
++
++config MTIGERLAKE
++ bool "Intel Tiger Lake"
++ select X86_P6_NOP
++ help
++
++ Select this for third-generation 10 nm process processors in the Tiger Lake family.
++
++ Enables -march=tigerlake
+
+ config GENERIC_CPU
+ bool "Generic-x86-64"
+@@ -294,6 +521,19 @@ config GENERIC_CPU
+ Generic x86-64 CPU.
+ Run equally well on all x86-64 CPUs.
+
++config MNATIVE
++ bool "Native optimizations autodetected by GCC"
++ help
++
++ GCC 4.2 and above support -march=native, which automatically detects
++ the optimum settings to use based on your processor. -march=native
++ also detects and applies additional settings beyond -march specific
++ to your CPU (e.g. -msse4). Unless you have a specific reason not to
++ (e.g. distcc cross-compiling), you should probably be using
++ -march=native rather than anything listed below.
++
++ Enables -march=native
++
+ endchoice
+
+ config X86_GENERIC
+@@ -318,7 +558,7 @@ config X86_INTERNODE_CACHE_SHIFT
+ config X86_L1_CACHE_SHIFT
+ int
+ default "7" if MPENTIUM4 || MPSC
+- default "6" if MK7 || MK8 || MPENTIUMM || MCORE2 || MATOM || MVIAC7 || X86_GENERIC || GENERIC_CPU
++ default "6" if MK7 || MK8 || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MJAGUAR || MPENTIUMM || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MNATIVE || MATOM || MVIAC7 || X86_GENERIC || GENERIC_CPU
+ default "4" if MELAN || M486SX || M486 || MGEODEGX1
+ default "5" if MWINCHIP3D || MWINCHIPC6 || MCRUSOE || MEFFICEON || MCYRIXIII || MK6 || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || M586 || MVIAC3_2 || MGEODE_LX
+
+@@ -336,35 +576,36 @@ config X86_ALIGNMENT_16
+
+ config X86_INTEL_USERCOPY
+ def_bool y
+- depends on MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M586MMX || X86_GENERIC || MK8 || MK7 || MEFFICEON || MCORE2
++ depends on MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M586MMX || X86_GENERIC || MK8 || MK8SSE3 || MK7 || MEFFICEON || MCORE2 || MK10 || MBARCELONA || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MNATIVE
+
+ config X86_USE_PPRO_CHECKSUM
+ def_bool y
+- depends on MWINCHIP3D || MWINCHIPC6 || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MK8 || MVIAC3_2 || MVIAC7 || MEFFICEON || MGEODE_LX || MCORE2 || MATOM
++ depends on MWINCHIP3D || MWINCHIPC6 || MCYRIXIII || MK7 || MK6 || MK10 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MK8 || MK8SSE3 || MVIAC3_2 || MVIAC7 || MEFFICEON || MGEODE_LX || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MATOM || MNATIVE
+
+ config X86_USE_3DNOW
+ def_bool y
+ depends on (MCYRIXIII || MK7 || MGEODE_LX) && !UML
+
+-#
+-# P6_NOPs are a relatively minor optimization that require a family >=
+-# 6 processor, except that it is broken on certain VIA chips.
+-# Furthermore, AMD chips prefer a totally different sequence of NOPs
+-# (which work on all CPUs). In addition, it looks like Virtual PC
+-# does not understand them.
+-#
+-# As a result, disallow these if we're not compiling for X86_64 (these
+-# NOPs do work on all x86-64 capable chips); the list of processors in
+-# the right-hand clause are the cores that benefit from this optimization.
+-#
+ config X86_P6_NOP
+- def_bool y
+- depends on X86_64
+- depends on (MCORE2 || MPENTIUM4 || MPSC)
++ default n
++ bool "Support for P6_NOPs on Intel chips"
++ depends on (MCORE2 || MPENTIUM4 || MPSC || MATOM || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MNATIVE)
++ help
++ P6_NOPs are a relatively minor optimization that require a family >=
++ 6 processor, except that it is broken on certain VIA chips.
++ Furthermore, AMD chips prefer a totally different sequence of NOPs
++ (which work on all CPUs). In addition, it looks like Virtual PC
++ does not understand them.
++
++ As a result, disallow these if we're not compiling for X86_64 (these
++ NOPs do work on all x86-64 capable chips); the list of processors in
++ the right-hand clause are the cores that benefit from this optimization.
++
++ Say Y if you have Intel CPU newer than Pentium Pro, N otherwise.
+
+ config X86_TSC
+ def_bool y
+- depends on (MWINCHIP3D || MCRUSOE || MEFFICEON || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || MK8 || MVIAC3_2 || MVIAC7 || MGEODEGX1 || MGEODE_LX || MCORE2 || MATOM) || X86_64
++ depends on (MWINCHIP3D || MCRUSOE || MEFFICEON || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || MK8 || MK8SSE3 || MVIAC3_2 || MVIAC7 || MGEODEGX1 || MGEODE_LX || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MNATIVE || MATOM) || X86_64
+
+ config X86_CMPXCHG64
+ def_bool y
+@@ -374,7 +615,7 @@ config X86_CMPXCHG64
+ # generates cmov.
+ config X86_CMOV
+ def_bool y
+- depends on (MK8 || MK7 || MCORE2 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MCRUSOE || MEFFICEON || X86_64 || MATOM || MGEODE_LX)
++ depends on (MK8 || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MJAGUAR || MK7 || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MCRUSOE || MEFFICEON || X86_64 || MNATIVE || MATOM || MGEODE_LX)
+
+ config X86_MINIMUM_CPU_FAMILY
+ int
+--- a/arch/x86/Makefile 2020-06-14 15:45:04.000000000 -0400
++++ b/arch/x86/Makefile 2020-06-15 09:28:19.871174111 -0400
+@@ -119,13 +119,60 @@ else
+ KBUILD_CFLAGS += $(call cc-option,-mskip-rax-setup)
+
+ # FIXME - should be integrated in Makefile.cpu (Makefile_32.cpu)
++ cflags-$(CONFIG_MNATIVE) += $(call cc-option,-march=native)
+ cflags-$(CONFIG_MK8) += $(call cc-option,-march=k8)
++ cflags-$(CONFIG_MK8SSE3) += $(call cc-option,-march=k8-sse3,-mtune=k8)
++ cflags-$(CONFIG_MK10) += $(call cc-option,-march=amdfam10)
++ cflags-$(CONFIG_MBARCELONA) += $(call cc-option,-march=barcelona)
++ cflags-$(CONFIG_MBOBCAT) += $(call cc-option,-march=btver1)
++ cflags-$(CONFIG_MJAGUAR) += $(call cc-option,-march=btver2)
++ cflags-$(CONFIG_MBULLDOZER) += $(call cc-option,-march=bdver1)
++ cflags-$(CONFIG_MPILEDRIVER) += $(call cc-option,-march=bdver2)
++ cflags-$(CONFIG_MPILEDRIVER) += $(call cc-option,-mno-tbm)
++ cflags-$(CONFIG_MSTEAMROLLER) += $(call cc-option,-march=bdver3)
++ cflags-$(CONFIG_MSTEAMROLLER) += $(call cc-option,-mno-tbm)
++ cflags-$(CONFIG_MEXCAVATOR) += $(call cc-option,-march=bdver4)
++ cflags-$(CONFIG_MEXCAVATOR) += $(call cc-option,-mno-tbm)
++ cflags-$(CONFIG_MZEN) += $(call cc-option,-march=znver1)
++ cflags-$(CONFIG_MZEN2) += $(call cc-option,-march=znver2)
+ cflags-$(CONFIG_MPSC) += $(call cc-option,-march=nocona)
+
+ cflags-$(CONFIG_MCORE2) += \
+- $(call cc-option,-march=core2,$(call cc-option,-mtune=generic))
+- cflags-$(CONFIG_MATOM) += $(call cc-option,-march=atom) \
+- $(call cc-option,-mtune=atom,$(call cc-option,-mtune=generic))
++ $(call cc-option,-march=core2,$(call cc-option,-mtune=core2))
++ cflags-$(CONFIG_MNEHALEM) += \
++ $(call cc-option,-march=nehalem,$(call cc-option,-mtune=nehalem))
++ cflags-$(CONFIG_MWESTMERE) += \
++ $(call cc-option,-march=westmere,$(call cc-option,-mtune=westmere))
++ cflags-$(CONFIG_MSILVERMONT) += \
++ $(call cc-option,-march=silvermont,$(call cc-option,-mtune=silvermont))
++ cflags-$(CONFIG_MGOLDMONT) += \
++ $(call cc-option,-march=goldmont,$(call cc-option,-mtune=goldmont))
++ cflags-$(CONFIG_MGOLDMONTPLUS) += \
++ $(call cc-option,-march=goldmont-plus,$(call cc-option,-mtune=goldmont-plus))
++ cflags-$(CONFIG_MSANDYBRIDGE) += \
++ $(call cc-option,-march=sandybridge,$(call cc-option,-mtune=sandybridge))
++ cflags-$(CONFIG_MIVYBRIDGE) += \
++ $(call cc-option,-march=ivybridge,$(call cc-option,-mtune=ivybridge))
++ cflags-$(CONFIG_MHASWELL) += \
++ $(call cc-option,-march=haswell,$(call cc-option,-mtune=haswell))
++ cflags-$(CONFIG_MBROADWELL) += \
++ $(call cc-option,-march=broadwell,$(call cc-option,-mtune=broadwell))
++ cflags-$(CONFIG_MSKYLAKE) += \
++ $(call cc-option,-march=skylake,$(call cc-option,-mtune=skylake))
++ cflags-$(CONFIG_MSKYLAKEX) += \
++ $(call cc-option,-march=skylake-avx512,$(call cc-option,-mtune=skylake-avx512))
++ cflags-$(CONFIG_MCANNONLAKE) += \
++ $(call cc-option,-march=cannonlake,$(call cc-option,-mtune=cannonlake))
++ cflags-$(CONFIG_MICELAKE) += \
++ $(call cc-option,-march=icelake-client,$(call cc-option,-mtune=icelake-client))
++ cflags-$(CONFIG_MCASCADELAKE) += \
++ $(call cc-option,-march=cascadelake,$(call cc-option,-mtune=cascadelake))
++ cflags-$(CONFIG_MCOOPERLAKE) += \
++ $(call cc-option,-march=cooperlake,$(call cc-option,-mtune=cooperlake))
++ cflags-$(CONFIG_MTIGERLAKE) += \
++ $(call cc-option,-march=tigerlake,$(call cc-option,-mtune=tigerlake))
++ cflags-$(CONFIG_MATOM) += $(call cc-option,-march=bonnell) \
++ $(call cc-option,-mtune=bonnell,$(call cc-option,-mtune=generic))
+ cflags-$(CONFIG_GENERIC_CPU) += $(call cc-option,-mtune=generic)
+ KBUILD_CFLAGS += $(cflags-y)
+
+--- a/arch/x86/Makefile_32.cpu 2020-06-14 15:45:04.000000000 -0400
++++ b/arch/x86/Makefile_32.cpu 2020-06-15 09:28:19.871174111 -0400
+@@ -24,7 +24,19 @@ cflags-$(CONFIG_MK6) += -march=k6
+ # Please note, that patches that add -march=athlon-xp and friends are pointless.
+ # They make zero difference whatsosever to performance at this time.
+ cflags-$(CONFIG_MK7) += -march=athlon
++cflags-$(CONFIG_MNATIVE) += $(call cc-option,-march=native)
+ cflags-$(CONFIG_MK8) += $(call cc-option,-march=k8,-march=athlon)
++cflags-$(CONFIG_MK8SSE3) += $(call cc-option,-march=k8-sse3,-march=athlon)
++cflags-$(CONFIG_MK10) += $(call cc-option,-march=amdfam10,-march=athlon)
++cflags-$(CONFIG_MBARCELONA) += $(call cc-option,-march=barcelona,-march=athlon)
++cflags-$(CONFIG_MBOBCAT) += $(call cc-option,-march=btver1,-march=athlon)
++cflags-$(CONFIG_MJAGUAR) += $(call cc-option,-march=btver2,-march=athlon)
++cflags-$(CONFIG_MBULLDOZER) += $(call cc-option,-march=bdver1,-march=athlon)
++cflags-$(CONFIG_MPILEDRIVER) += $(call cc-option,-march=bdver2,-march=athlon)
++cflags-$(CONFIG_MSTEAMROLLER) += $(call cc-option,-march=bdver3,-march=athlon)
++cflags-$(CONFIG_MEXCAVATOR) += $(call cc-option,-march=bdver4,-march=athlon)
++cflags-$(CONFIG_MZEN) += $(call cc-option,-march=znver1,-march=athlon)
++cflags-$(CONFIG_MZEN2) += $(call cc-option,-march=znver2,-march=athlon)
+ cflags-$(CONFIG_MCRUSOE) += -march=i686 -falign-functions=0 -falign-jumps=0 -falign-loops=0
+ cflags-$(CONFIG_MEFFICEON) += -march=i686 $(call tune,pentium3) -falign-functions=0 -falign-jumps=0 -falign-loops=0
+ cflags-$(CONFIG_MWINCHIPC6) += $(call cc-option,-march=winchip-c6,-march=i586)
+@@ -33,8 +45,24 @@ cflags-$(CONFIG_MCYRIXIII) += $(call cc-
+ cflags-$(CONFIG_MVIAC3_2) += $(call cc-option,-march=c3-2,-march=i686)
+ cflags-$(CONFIG_MVIAC7) += -march=i686
+ cflags-$(CONFIG_MCORE2) += -march=i686 $(call tune,core2)
+-cflags-$(CONFIG_MATOM) += $(call cc-option,-march=atom,$(call cc-option,-march=core2,-march=i686)) \
+- $(call cc-option,-mtune=atom,$(call cc-option,-mtune=generic))
++cflags-$(CONFIG_MNEHALEM) += -march=i686 $(call tune,nehalem)
++cflags-$(CONFIG_MWESTMERE) += -march=i686 $(call tune,westmere)
++cflags-$(CONFIG_MSILVERMONT) += -march=i686 $(call tune,silvermont)
++cflags-$(CONFIG_MGOLDMONT) += -march=i686 $(call tune,goldmont)
++cflags-$(CONFIG_MGOLDMONTPLUS) += -march=i686 $(call tune,goldmont-plus)
++cflags-$(CONFIG_MSANDYBRIDGE) += -march=i686 $(call tune,sandybridge)
++cflags-$(CONFIG_MIVYBRIDGE) += -march=i686 $(call tune,ivybridge)
++cflags-$(CONFIG_MHASWELL) += -march=i686 $(call tune,haswell)
++cflags-$(CONFIG_MBROADWELL) += -march=i686 $(call tune,broadwell)
++cflags-$(CONFIG_MSKYLAKE) += -march=i686 $(call tune,skylake)
++cflags-$(CONFIG_MSKYLAKEX) += -march=i686 $(call tune,skylake-avx512)
++cflags-$(CONFIG_MCANNONLAKE) += -march=i686 $(call tune,cannonlake)
++cflags-$(CONFIG_MICELAKE) += -march=i686 $(call tune,icelake-client)
++cflags-$(CONFIG_MCASCADELAKE) += -march=i686 $(call tune,cascadelake)
++cflags-$(CONFIG_MCOOPERLAKE) += -march=i686 $(call tune,cooperlake)
++cflags-$(CONFIG_MTIGERLAKE) += -march=i686 $(call tune,tigerlake)
++cflags-$(CONFIG_MATOM) += $(call cc-option,-march=bonnell,$(call cc-option,-march=core2,-march=i686)) \
++ $(call cc-option,-mtune=bonnell,$(call cc-option,-mtune=generic))
+
+ # AMD Elan support
+ cflags-$(CONFIG_MELAN) += -march=i486
^ permalink raw reply related [flat|nested] 30+ messages in thread
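The Makefile hunks in the patch above rely on Kbuild's `cc-option` helper: `$(call cc-option,FLAG,FALLBACK)` expands to FLAG only if the compiler accepts it, otherwise to FALLBACK. A minimal stand-alone sketch of that behavior (an assumption-laden illustration, not Kbuild's actual make-function; it assumes `${CC:-cc}` is a working gcc or clang):

```shell
#!/bin/sh
# Sketch of Kbuild's cc-option: print flag $1 if the compiler accepts it
# when compiling an empty translation unit, else print the fallback $2.
cc_option() {
    if "${CC:-cc}" -Werror "$1" -c -x c /dev/null -o /dev/null 2>/dev/null; then
        printf '%s\n' "$1"
    else
        printf '%s\n' "$2"
    fi
}

cc_option -march=native ''                     # typically prints -march=native
cc_option -march=no-such-cpu -mtune=generic    # rejected flag: prints the fallback
```

This is why a line like `cflags-$(CONFIG_MK8SSE3) += $(call cc-option,-march=k8-sse3,-mtune=k8)` degrades gracefully on toolchains that predate a given -march value.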
* [gentoo-commits] proj/linux-patches:5.8 commit in: /
@ 2020-06-29 17:33 Mike Pagano
0 siblings, 0 replies; 30+ messages in thread
From: Mike Pagano @ 2020-06-29 17:33 UTC (permalink / raw
To: gentoo-commits
commit: cc4e39b1cd2307d4bd50e1b6147cf22b351f5db3
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Mon Jun 29 17:33:41 2020 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Mon Jun 29 17:33:41 2020 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=cc4e39b1
Kernel patch enables gcc = v9.1+ optimizations for additional CPUs.
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 6 +-
5012_enable-cpu-optimizations-for-gcc91.patch | 641 ++++++++++++++++++++++++++
2 files changed, 646 insertions(+), 1 deletion(-)
diff --git a/0000_README b/0000_README
index f93b340..b9ce21a 100644
--- a/0000_README
+++ b/0000_README
@@ -103,6 +103,10 @@ Patch: 5007_ZSTD-v5-8-8-gitignore-add-ZSTD-compressed-files.patch
From: https://lkml.org/lkml/2020/4/1/29
Desc: .gitignore: add ZSTD-compressed files
+Patch: 5012_enable-cpu-optimizations-for-gcc91.patch
+From: https://github.com/graysky2/kernel_gcc_patch/
+Desc: Kernel patch enables gcc = v9.1+ optimizations for additional CPUs.
+
Patch: 5013_enable-cpu-optimizations-for-gcc10.patch
From: https://github.com/graysky2/kernel_gcc_patch/
-Desc: Kernel patch enables gcc >= v10.1 optimizations for additional CPUs.
+Desc: Kernel patch enables gcc = v10.1+ optimizations for additional CPUs.
diff --git a/5012_enable-cpu-optimizations-for-gcc91.patch b/5012_enable-cpu-optimizations-for-gcc91.patch
new file mode 100644
index 0000000..2f16153
--- /dev/null
+++ b/5012_enable-cpu-optimizations-for-gcc91.patch
@@ -0,0 +1,641 @@
+WARNING
+This patch works with gcc versions 9.1+ and with kernel version 5.7+. It should
+NOT be applied when compiling with older gcc versions due to the -march flag
+name changes introduced in the gcc 4.9 release.[1]
+
+Use the older version of this patch hosted on the same github for older
+versions of gcc.
+
+FEATURES
+This patch adds additional CPU options to the Linux kernel accessible under:
+ Processor type and features --->
+ Processor family --->
+
+The expanded microarchitectures include:
+* AMD Improved K8-family
+* AMD K10-family
+* AMD Family 10h (Barcelona)
+* AMD Family 14h (Bobcat)
+* AMD Family 16h (Jaguar)
+* AMD Family 15h (Bulldozer)
+* AMD Family 15h (Piledriver)
+* AMD Family 15h (Steamroller)
+* AMD Family 15h (Excavator)
+* AMD Family 17h (Zen)
+* AMD Family 17h (Zen 2)
+* Intel Silvermont low-power processors
+* Intel Goldmont low-power processors (Apollo Lake and Denverton)
+* Intel Goldmont Plus low-power processors (Gemini Lake)
+* Intel 1st Gen Core i3/i5/i7 (Nehalem)
+* Intel 1.5 Gen Core i3/i5/i7 (Westmere)
+* Intel 2nd Gen Core i3/i5/i7 (Sandybridge)
+* Intel 3rd Gen Core i3/i5/i7 (Ivybridge)
+* Intel 4th Gen Core i3/i5/i7 (Haswell)
+* Intel 5th Gen Core i3/i5/i7 (Broadwell)
+* Intel 6th Gen Core i3/i5/i7 (Skylake)
+* Intel 6th Gen Core i7/i9 (Skylake X)
+* Intel 8th Gen Core i3/i5/i7 (Cannon Lake)
+* Intel 10th Gen Core i7/i9 (Ice Lake)
+* Intel Xeon (Cascade Lake)
+
+It also offers to compile passing the 'native' option, which "selects the CPU
+to generate code for at compilation time by determining the processor type of
+the compiling machine. Using -march=native enables all instruction subsets
+supported by the local machine and will produce code optimized for the local
+machine under the constraints of the selected instruction set."[2]
+
+Do NOT try using the 'native' option on AMD Piledriver, Steamroller, or
+Excavator CPUs (-march=bdver{2,3,4} flag). The build will error out due to
+the kernel's objtool issue with these.[3a,b]
+
+MINOR NOTES
+This patch also changes 'atom' to 'bonnell' in accordance with the gcc v4.9
+changes. Note that upstream is using the deprecated 'march=atom' flag when I
+believe it should use the newer 'march=bonnell' flag for atom processors.[4]
+
+It is not recommended to compile on Atom CPUs with the 'native' option.[5] The
+recommendation is to use the 'atom' option instead.
+
+BENEFITS
+Small but real speed increases are measurable using a 'make' benchmark as the
+endpoint, comparing a generic kernel to one built with one of the respective
+microarchitectures.
+
+See the following experimental evidence supporting this statement:
+https://github.com/graysky2/kernel_gcc_patch
+
+REQUIREMENTS
+linux version >=5.7
+gcc version >=9.1 and <10
+
+ACKNOWLEDGMENTS
+This patch builds on the seminal work by Jeroen.[6]
+
+REFERENCES
+1. https://gcc.gnu.org/gcc-4.9/changes.html
+2. https://gcc.gnu.org/onlinedocs/gcc/x86-Options.html
+3a. https://gcc.gnu.org/bugzilla/show_bug.cgi?id=95671#c11
+3b. https://github.com/graysky2/kernel_gcc_patch/issues/55
+4. https://bugzilla.kernel.org/show_bug.cgi?id=77461
+5. https://github.com/graysky2/kernel_gcc_patch/issues/15
+6. http://www.linuxforge.net/docs/linux/linux-gcc.php
+
+--- a/arch/x86/include/asm/vermagic.h 2020-06-10 14:21:45.000000000 -0400
++++ b/arch/x86/include/asm/vermagic.h 2020-06-15 10:44:10.437477053 -0400
+@@ -17,6 +17,36 @@
+ #define MODULE_PROC_FAMILY "586MMX "
+ #elif defined CONFIG_MCORE2
+ #define MODULE_PROC_FAMILY "CORE2 "
++#elif defined CONFIG_MNATIVE
++#define MODULE_PROC_FAMILY "NATIVE "
++#elif defined CONFIG_MNEHALEM
++#define MODULE_PROC_FAMILY "NEHALEM "
++#elif defined CONFIG_MWESTMERE
++#define MODULE_PROC_FAMILY "WESTMERE "
++#elif defined CONFIG_MSILVERMONT
++#define MODULE_PROC_FAMILY "SILVERMONT "
++#elif defined CONFIG_MGOLDMONT
++#define MODULE_PROC_FAMILY "GOLDMONT "
++#elif defined CONFIG_MGOLDMONTPLUS
++#define MODULE_PROC_FAMILY "GOLDMONTPLUS "
++#elif defined CONFIG_MSANDYBRIDGE
++#define MODULE_PROC_FAMILY "SANDYBRIDGE "
++#elif defined CONFIG_MIVYBRIDGE
++#define MODULE_PROC_FAMILY "IVYBRIDGE "
++#elif defined CONFIG_MHASWELL
++#define MODULE_PROC_FAMILY "HASWELL "
++#elif defined CONFIG_MBROADWELL
++#define MODULE_PROC_FAMILY "BROADWELL "
++#elif defined CONFIG_MSKYLAKE
++#define MODULE_PROC_FAMILY "SKYLAKE "
++#elif defined CONFIG_MSKYLAKEX
++#define MODULE_PROC_FAMILY "SKYLAKEX "
++#elif defined CONFIG_MCANNONLAKE
++#define MODULE_PROC_FAMILY "CANNONLAKE "
++#elif defined CONFIG_MICELAKE
++#define MODULE_PROC_FAMILY "ICELAKE "
++#elif defined CONFIG_MCASCADELAKE
++#define MODULE_PROC_FAMILY "CASCADELAKE "
+ #elif defined CONFIG_MATOM
+ #define MODULE_PROC_FAMILY "ATOM "
+ #elif defined CONFIG_M686
+@@ -35,6 +65,28 @@
+ #define MODULE_PROC_FAMILY "K7 "
+ #elif defined CONFIG_MK8
+ #define MODULE_PROC_FAMILY "K8 "
++#elif defined CONFIG_MK8SSE3
++#define MODULE_PROC_FAMILY "K8SSE3 "
++#elif defined CONFIG_MK10
++#define MODULE_PROC_FAMILY "K10 "
++#elif defined CONFIG_MBARCELONA
++#define MODULE_PROC_FAMILY "BARCELONA "
++#elif defined CONFIG_MBOBCAT
++#define MODULE_PROC_FAMILY "BOBCAT "
++#elif defined CONFIG_MBULLDOZER
++#define MODULE_PROC_FAMILY "BULLDOZER "
++#elif defined CONFIG_MPILEDRIVER
++#define MODULE_PROC_FAMILY "PILEDRIVER "
++#elif defined CONFIG_MSTEAMROLLER
++#define MODULE_PROC_FAMILY "STEAMROLLER "
++#elif defined CONFIG_MJAGUAR
++#define MODULE_PROC_FAMILY "JAGUAR "
++#elif defined CONFIG_MEXCAVATOR
++#define MODULE_PROC_FAMILY "EXCAVATOR "
++#elif defined CONFIG_MZEN
++#define MODULE_PROC_FAMILY "ZEN "
++#elif defined CONFIG_MZEN2
++#define MODULE_PROC_FAMILY "ZEN2 "
+ #elif defined CONFIG_MELAN
+ #define MODULE_PROC_FAMILY "ELAN "
+ #elif defined CONFIG_MCRUSOE
+--- a/arch/x86/Kconfig.cpu 2020-06-10 14:21:45.000000000 -0400
++++ b/arch/x86/Kconfig.cpu 2020-06-15 10:44:10.437477053 -0400
+@@ -123,6 +123,7 @@ config MPENTIUMM
+ config MPENTIUM4
+ bool "Pentium-4/Celeron(P4-based)/Pentium-4 M/older Xeon"
+ depends on X86_32
++ select X86_P6_NOP
+ ---help---
+ Select this for Intel Pentium 4 chips. This includes the
+ Pentium 4, Pentium D, P4-based Celeron and Xeon, and
+@@ -155,9 +156,8 @@ config MPENTIUM4
+ -Paxville
+ -Dempsey
+
+-
+ config MK6
+- bool "K6/K6-II/K6-III"
++ bool "AMD K6/K6-II/K6-III"
+ depends on X86_32
+ ---help---
+ Select this for an AMD K6-family processor. Enables use of
+@@ -165,7 +165,7 @@ config MK6
+ flags to GCC.
+
+ config MK7
+- bool "Athlon/Duron/K7"
++ bool "AMD Athlon/Duron/K7"
+ depends on X86_32
+ ---help---
+ Select this for an AMD Athlon K7-family processor. Enables use of
+@@ -173,12 +173,90 @@ config MK7
+ flags to GCC.
+
+ config MK8
+- bool "Opteron/Athlon64/Hammer/K8"
++ bool "AMD Opteron/Athlon64/Hammer/K8"
+ ---help---
+ Select this for an AMD Opteron or Athlon64 Hammer-family processor.
+ Enables use of some extended instructions, and passes appropriate
+ optimization flags to GCC.
+
++config MK8SSE3
++ bool "AMD Opteron/Athlon64/Hammer/K8 with SSE3"
++ ---help---
++ Select this for improved AMD Opteron or Athlon64 Hammer-family processors.
++ Enables use of some extended instructions, and passes appropriate
++ optimization flags to GCC.
++
++config MK10
++ bool "AMD 61xx/7x50/PhenomX3/X4/II/K10"
++ ---help---
++ Select this for an AMD 61xx Eight-Core Magny-Cours, Athlon X2 7x50,
++ Phenom X3/X4/II, Athlon II X2/X3/X4, or Turion II-family processor.
++ Enables use of some extended instructions, and passes appropriate
++ optimization flags to GCC.
++
++config MBARCELONA
++ bool "AMD Barcelona"
++ ---help---
++ Select this for AMD Family 10h Barcelona processors.
++
++ Enables -march=barcelona
++
++config MBOBCAT
++ bool "AMD Bobcat"
++ ---help---
++ Select this for AMD Family 14h Bobcat processors.
++
++ Enables -march=btver1
++
++config MJAGUAR
++ bool "AMD Jaguar"
++ ---help---
++ Select this for AMD Family 16h Jaguar processors.
++
++ Enables -march=btver2
++
++config MBULLDOZER
++ bool "AMD Bulldozer"
++ ---help---
++ Select this for AMD Family 15h Bulldozer processors.
++
++ Enables -march=bdver1
++
++config MPILEDRIVER
++ bool "AMD Piledriver"
++ ---help---
++ Select this for AMD Family 15h Piledriver processors.
++
++ Enables -march=bdver2
++
++config MSTEAMROLLER
++ bool "AMD Steamroller"
++ ---help---
++ Select this for AMD Family 15h Steamroller processors.
++
++ Enables -march=bdver3
++
++config MEXCAVATOR
++ bool "AMD Excavator"
++ ---help---
++ Select this for AMD Family 15h Excavator processors.
++
++ Enables -march=bdver4
++
++config MZEN
++ bool "AMD Zen"
++ ---help---
++ Select this for AMD Family 17h Zen processors.
++
++ Enables -march=znver1
++
++config MZEN2
++ bool "AMD Zen 2"
++ ---help---
++ Select this for AMD Family 17h Zen 2 processors.
++
++ Enables -march=znver2
++
+ config MCRUSOE
+ bool "Crusoe"
+ depends on X86_32
+@@ -260,6 +338,7 @@ config MVIAC7
+
+ config MPSC
+ bool "Intel P4 / older Netburst based Xeon"
++ select X86_P6_NOP
+ depends on X86_64
+ ---help---
+ Optimize for Intel Pentium 4, Pentium D and older Nocona/Dempsey
+@@ -269,8 +348,19 @@ config MPSC
+ using the cpu family field
+ in /proc/cpuinfo. Family 15 is an older Xeon, Family 6 a newer one.
+
++config MATOM
++ bool "Intel Atom"
++ select X86_P6_NOP
++ ---help---
++
++ Select this for the Intel Atom platform. Intel Atom CPUs have an
++ in-order pipelining architecture and thus can benefit from
++ accordingly optimized code. Use a recent GCC with specific Atom
++ support in order to fully benefit from selecting this option.
++
+ config MCORE2
+- bool "Core 2/newer Xeon"
++ bool "Intel Core 2"
++ select X86_P6_NOP
+ ---help---
+
+ Select this for Intel Core 2 and newer Core 2 Xeons (Xeon 51xx and
+@@ -278,14 +368,133 @@ config MCORE2
+ family in /proc/cpuinfo. Newer ones have 6 and older ones 15
+ (not a typo)
+
+-config MATOM
+- bool "Intel Atom"
++ Enables -march=core2
++
++config MNEHALEM
++ bool "Intel Nehalem"
++ select X86_P6_NOP
+ ---help---
+
+- Select this for the Intel Atom platform. Intel Atom CPUs have an
+- in-order pipelining architecture and thus can benefit from
+- accordingly optimized code. Use a recent GCC with specific Atom
+- support in order to fully benefit from selecting this option.
++ Select this for 1st Gen Core processors in the Nehalem family.
++
++ Enables -march=nehalem
++
++config MWESTMERE
++ bool "Intel Westmere"
++ select X86_P6_NOP
++ ---help---
++
++ Select this for the Intel Westmere formerly Nehalem-C family.
++
++ Enables -march=westmere
++
++config MSILVERMONT
++ bool "Intel Silvermont"
++ select X86_P6_NOP
++ ---help---
++
++ Select this for the Intel Silvermont platform.
++
++ Enables -march=silvermont
++
++config MGOLDMONT
++ bool "Intel Goldmont"
++ select X86_P6_NOP
++ ---help---
++
++ Select this for the Intel Goldmont platform including Apollo Lake and Denverton.
++
++ Enables -march=goldmont
++
++config MGOLDMONTPLUS
++ bool "Intel Goldmont Plus"
++ select X86_P6_NOP
++ ---help---
++
++ Select this for the Intel Goldmont Plus platform including Gemini Lake.
++
++ Enables -march=goldmont-plus
++
++config MSANDYBRIDGE
++ bool "Intel Sandy Bridge"
++ select X86_P6_NOP
++ ---help---
++
++ Select this for 2nd Gen Core processors in the Sandy Bridge family.
++
++ Enables -march=sandybridge
++
++config MIVYBRIDGE
++ bool "Intel Ivy Bridge"
++ select X86_P6_NOP
++ ---help---
++
++ Select this for 3rd Gen Core processors in the Ivy Bridge family.
++
++ Enables -march=ivybridge
++
++config MHASWELL
++ bool "Intel Haswell"
++ select X86_P6_NOP
++ ---help---
++
++ Select this for 4th Gen Core processors in the Haswell family.
++
++ Enables -march=haswell
++
++config MBROADWELL
++ bool "Intel Broadwell"
++ select X86_P6_NOP
++ ---help---
++
++ Select this for 5th Gen Core processors in the Broadwell family.
++
++ Enables -march=broadwell
++
++config MSKYLAKE
++ bool "Intel Skylake"
++ select X86_P6_NOP
++ ---help---
++
++ Select this for 6th Gen Core processors in the Skylake family.
++
++ Enables -march=skylake
++
++config MSKYLAKEX
++ bool "Intel Skylake X"
++ select X86_P6_NOP
++ ---help---
++
++ Select this for 6th Gen Core processors in the Skylake X family.
++
++ Enables -march=skylake-avx512
++
++config MCANNONLAKE
++ bool "Intel Cannon Lake"
++ select X86_P6_NOP
++ ---help---
++
++ Select this for 8th Gen Core processors
++
++ Enables -march=cannonlake
++
++config MICELAKE
++ bool "Intel Ice Lake"
++ select X86_P6_NOP
++ ---help---
++
++ Select this for 10th Gen Core processors in the Ice Lake family.
++
++ Enables -march=icelake-client
++
++config MCASCADELAKE
++ bool "Intel Cascade Lake"
++ select X86_P6_NOP
++ ---help---
++
++ Select this for Xeon processors in the Cascade Lake family.
++
++ Enables -march=cascadelake
+
+ config GENERIC_CPU
+ bool "Generic-x86-64"
+@@ -294,6 +503,19 @@ config GENERIC_CPU
+ Generic x86-64 CPU.
+ Run equally well on all x86-64 CPUs.
+
++config MNATIVE
++ bool "Native optimizations autodetected by GCC"
++ ---help---
++
++ GCC 4.2 and above support -march=native, which automatically detects
++ the optimum settings to use based on your processor. -march=native
++ also detects and applies additional settings beyond -march specific
++ to your CPU, (eg. -msse4). Unless you have a specific reason not to
++ (e.g. distcc cross-compiling), you should probably be using
++ -march=native rather than anything listed below.
++
++ Enables -march=native
++
+ endchoice
+
+ config X86_GENERIC
+@@ -318,7 +540,7 @@ config X86_INTERNODE_CACHE_SHIFT
+ config X86_L1_CACHE_SHIFT
+ int
+ default "7" if MPENTIUM4 || MPSC
+- default "6" if MK7 || MK8 || MPENTIUMM || MCORE2 || MATOM || MVIAC7 || X86_GENERIC || GENERIC_CPU
++ default "6" if MK7 || MK8 || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MJAGUAR || MPENTIUMM || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MNATIVE || MATOM || MVIAC7 || X86_GENERIC || GENERIC_CPU
+ default "4" if MELAN || M486SX || M486 || MGEODEGX1
+ default "5" if MWINCHIP3D || MWINCHIPC6 || MCRUSOE || MEFFICEON || MCYRIXIII || MK6 || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || M586 || MVIAC3_2 || MGEODE_LX
+
+@@ -336,35 +558,36 @@ config X86_ALIGNMENT_16
+
+ config X86_INTEL_USERCOPY
+ def_bool y
+- depends on MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M586MMX || X86_GENERIC || MK8 || MK7 || MEFFICEON || MCORE2
++ depends on MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M586MMX || X86_GENERIC || MK8 || MK8SSE3 || MK7 || MEFFICEON || MCORE2 || MK10 || MBARCELONA || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MNATIVE
+
+ config X86_USE_PPRO_CHECKSUM
+ def_bool y
+- depends on MWINCHIP3D || MWINCHIPC6 || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MK8 || MVIAC3_2 || MVIAC7 || MEFFICEON || MGEODE_LX || MCORE2 || MATOM
++ depends on MWINCHIP3D || MWINCHIPC6 || MCYRIXIII || MK7 || MK6 || MK10 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MK8 || MK8SSE3 || MVIAC3_2 || MVIAC7 || MEFFICEON || MGEODE_LX || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MATOM || MNATIVE
+
+ config X86_USE_3DNOW
+ def_bool y
+ depends on (MCYRIXIII || MK7 || MGEODE_LX) && !UML
+
+-#
+-# P6_NOPs are a relatively minor optimization that require a family >=
+-# 6 processor, except that it is broken on certain VIA chips.
+-# Furthermore, AMD chips prefer a totally different sequence of NOPs
+-# (which work on all CPUs). In addition, it looks like Virtual PC
+-# does not understand them.
+-#
+-# As a result, disallow these if we're not compiling for X86_64 (these
+-# NOPs do work on all x86-64 capable chips); the list of processors in
+-# the right-hand clause are the cores that benefit from this optimization.
+-#
+ config X86_P6_NOP
+- def_bool y
+- depends on X86_64
+- depends on (MCORE2 || MPENTIUM4 || MPSC)
++ default n
++ bool "Support for P6_NOPs on Intel chips"
++ depends on (MCORE2 || MPENTIUM4 || MPSC || MATOM || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MNATIVE)
++ ---help---
++ P6_NOPs are a relatively minor optimization that require a family >=
++ 6 processor, except that it is broken on certain VIA chips.
++ Furthermore, AMD chips prefer a totally different sequence of NOPs
++ (which work on all CPUs). In addition, it looks like Virtual PC
++ does not understand them.
++
++ As a result, disallow these if we're not compiling for X86_64 (these
++ NOPs do work on all x86-64 capable chips); the list of processors in
++ the right-hand clause are the cores that benefit from this optimization.
++
++ Say Y if you have Intel CPU newer than Pentium Pro, N otherwise.
+
+ config X86_TSC
+ def_bool y
+- depends on (MWINCHIP3D || MCRUSOE || MEFFICEON || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || MK8 || MVIAC3_2 || MVIAC7 || MGEODEGX1 || MGEODE_LX || MCORE2 || MATOM) || X86_64
++ depends on (MWINCHIP3D || MCRUSOE || MEFFICEON || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || MK8 || MK8SSE3 || MVIAC3_2 || MVIAC7 || MGEODEGX1 || MGEODE_LX || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MNATIVE || MATOM) || X86_64
+
+ config X86_CMPXCHG64
+ def_bool y
+@@ -374,7 +597,7 @@ config X86_CMPXCHG64
+ # generates cmov.
+ config X86_CMOV
+ def_bool y
+- depends on (MK8 || MK7 || MCORE2 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MCRUSOE || MEFFICEON || X86_64 || MATOM || MGEODE_LX)
++ depends on (MK8 || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MJAGUAR || MK7 || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MCRUSOE || MEFFICEON || X86_64 || MNATIVE || MATOM || MGEODE_LX)
+
+ config X86_MINIMUM_CPU_FAMILY
+ int
+--- a/arch/x86/Makefile 2020-06-10 14:21:45.000000000 -0400
++++ b/arch/x86/Makefile 2020-06-15 10:44:35.608035680 -0400
+@@ -119,13 +119,56 @@ else
+ KBUILD_CFLAGS += $(call cc-option,-mskip-rax-setup)
+
+ # FIXME - should be integrated in Makefile.cpu (Makefile_32.cpu)
++ cflags-$(CONFIG_MNATIVE) += $(call cc-option,-march=native)
+ cflags-$(CONFIG_MK8) += $(call cc-option,-march=k8)
++ cflags-$(CONFIG_MK8SSE3) += $(call cc-option,-march=k8-sse3,-mtune=k8)
++ cflags-$(CONFIG_MK10) += $(call cc-option,-march=amdfam10)
++ cflags-$(CONFIG_MBARCELONA) += $(call cc-option,-march=barcelona)
++ cflags-$(CONFIG_MBOBCAT) += $(call cc-option,-march=btver1)
++ cflags-$(CONFIG_MJAGUAR) += $(call cc-option,-march=btver2)
++ cflags-$(CONFIG_MBULLDOZER) += $(call cc-option,-march=bdver1)
++ cflags-$(CONFIG_MPILEDRIVER) += $(call cc-option,-march=bdver2)
++ cflags-$(CONFIG_MPILEDRIVER) += $(call cc-option,-mno-tbm)
++ cflags-$(CONFIG_MSTEAMROLLER) += $(call cc-option,-march=bdver3)
++ cflags-$(CONFIG_MSTEAMROLLER) += $(call cc-option,-mno-tbm)
++ cflags-$(CONFIG_MEXCAVATOR) += $(call cc-option,-march=bdver4)
++ cflags-$(CONFIG_MEXCAVATOR) += $(call cc-option,-mno-tbm)
++ cflags-$(CONFIG_MZEN) += $(call cc-option,-march=znver1)
++ cflags-$(CONFIG_MZEN2) += $(call cc-option,-march=znver2)
+ cflags-$(CONFIG_MPSC) += $(call cc-option,-march=nocona)
+
+ cflags-$(CONFIG_MCORE2) += \
+- $(call cc-option,-march=core2,$(call cc-option,-mtune=generic))
+- cflags-$(CONFIG_MATOM) += $(call cc-option,-march=atom) \
+- $(call cc-option,-mtune=atom,$(call cc-option,-mtune=generic))
++ $(call cc-option,-march=core2,$(call cc-option,-mtune=core2))
++ cflags-$(CONFIG_MNEHALEM) += \
++ $(call cc-option,-march=nehalem,$(call cc-option,-mtune=nehalem))
++ cflags-$(CONFIG_MWESTMERE) += \
++ $(call cc-option,-march=westmere,$(call cc-option,-mtune=westmere))
++ cflags-$(CONFIG_MSILVERMONT) += \
++ $(call cc-option,-march=silvermont,$(call cc-option,-mtune=silvermont))
++ cflags-$(CONFIG_MGOLDMONT) += \
++ $(call cc-option,-march=goldmont,$(call cc-option,-mtune=goldmont))
++ cflags-$(CONFIG_MGOLDMONTPLUS) += \
++ $(call cc-option,-march=goldmont-plus,$(call cc-option,-mtune=goldmont-plus))
++ cflags-$(CONFIG_MSANDYBRIDGE) += \
++ $(call cc-option,-march=sandybridge,$(call cc-option,-mtune=sandybridge))
++ cflags-$(CONFIG_MIVYBRIDGE) += \
++ $(call cc-option,-march=ivybridge,$(call cc-option,-mtune=ivybridge))
++ cflags-$(CONFIG_MHASWELL) += \
++ $(call cc-option,-march=haswell,$(call cc-option,-mtune=haswell))
++ cflags-$(CONFIG_MBROADWELL) += \
++ $(call cc-option,-march=broadwell,$(call cc-option,-mtune=broadwell))
++ cflags-$(CONFIG_MSKYLAKE) += \
++ $(call cc-option,-march=skylake,$(call cc-option,-mtune=skylake))
++ cflags-$(CONFIG_MSKYLAKEX) += \
++ $(call cc-option,-march=skylake-avx512,$(call cc-option,-mtune=skylake-avx512))
++ cflags-$(CONFIG_MCANNONLAKE) += \
++ $(call cc-option,-march=cannonlake,$(call cc-option,-mtune=cannonlake))
++ cflags-$(CONFIG_MICELAKE) += \
++ $(call cc-option,-march=icelake-client,$(call cc-option,-mtune=icelake-client))
++ cflags-$(CONFIG_MCASCADELAKE) += \
++ $(call cc-option,-march=cascadelake,$(call cc-option,-mtune=cascadelake))
++ cflags-$(CONFIG_MATOM) += $(call cc-option,-march=bonnell) \
++ $(call cc-option,-mtune=bonnell,$(call cc-option,-mtune=generic))
+ cflags-$(CONFIG_GENERIC_CPU) += $(call cc-option,-mtune=generic)
+ KBUILD_CFLAGS += $(cflags-y)
+
+--- a/arch/x86/Makefile_32.cpu 2020-06-10 14:21:45.000000000 -0400
++++ b/arch/x86/Makefile_32.cpu 2020-06-15 10:44:10.437477053 -0400
+@@ -24,7 +24,19 @@ cflags-$(CONFIG_MK6) += -march=k6
+ # Please note, that patches that add -march=athlon-xp and friends are pointless.
+ # They make zero difference whatsosever to performance at this time.
+ cflags-$(CONFIG_MK7) += -march=athlon
++cflags-$(CONFIG_MNATIVE) += $(call cc-option,-march=native)
+ cflags-$(CONFIG_MK8) += $(call cc-option,-march=k8,-march=athlon)
++cflags-$(CONFIG_MK8SSE3) += $(call cc-option,-march=k8-sse3,-march=athlon)
++cflags-$(CONFIG_MK10) += $(call cc-option,-march=amdfam10,-march=athlon)
++cflags-$(CONFIG_MBARCELONA) += $(call cc-option,-march=barcelona,-march=athlon)
++cflags-$(CONFIG_MBOBCAT) += $(call cc-option,-march=btver1,-march=athlon)
++cflags-$(CONFIG_MJAGUAR) += $(call cc-option,-march=btver2,-march=athlon)
++cflags-$(CONFIG_MBULLDOZER) += $(call cc-option,-march=bdver1,-march=athlon)
++cflags-$(CONFIG_MPILEDRIVER) += $(call cc-option,-march=bdver2,-march=athlon)
++cflags-$(CONFIG_MSTEAMROLLER) += $(call cc-option,-march=bdver3,-march=athlon)
++cflags-$(CONFIG_MEXCAVATOR) += $(call cc-option,-march=bdver4,-march=athlon)
++cflags-$(CONFIG_MZEN) += $(call cc-option,-march=znver1,-march=athlon)
++cflags-$(CONFIG_MZEN2) += $(call cc-option,-march=znver2,-march=athlon)
+ cflags-$(CONFIG_MCRUSOE) += -march=i686 -falign-functions=0 -falign-jumps=0 -falign-loops=0
+ cflags-$(CONFIG_MEFFICEON) += -march=i686 $(call tune,pentium3) -falign-functions=0 -falign-jumps=0 -falign-loops=0
+ cflags-$(CONFIG_MWINCHIPC6) += $(call cc-option,-march=winchip-c6,-march=i586)
+@@ -33,8 +45,22 @@ cflags-$(CONFIG_MCYRIXIII) += $(call cc-
+ cflags-$(CONFIG_MVIAC3_2) += $(call cc-option,-march=c3-2,-march=i686)
+ cflags-$(CONFIG_MVIAC7) += -march=i686
+ cflags-$(CONFIG_MCORE2) += -march=i686 $(call tune,core2)
+-cflags-$(CONFIG_MATOM) += $(call cc-option,-march=atom,$(call cc-option,-march=core2,-march=i686)) \
+- $(call cc-option,-mtune=atom,$(call cc-option,-mtune=generic))
++cflags-$(CONFIG_MNEHALEM) += -march=i686 $(call tune,nehalem)
++cflags-$(CONFIG_MWESTMERE) += -march=i686 $(call tune,westmere)
++cflags-$(CONFIG_MSILVERMONT) += -march=i686 $(call tune,silvermont)
++cflags-$(CONFIG_MGOLDMONT) += -march=i686 $(call tune,goldmont)
++cflags-$(CONFIG_MGOLDMONTPLUS) += -march=i686 $(call tune,goldmont-plus)
++cflags-$(CONFIG_MSANDYBRIDGE) += -march=i686 $(call tune,sandybridge)
++cflags-$(CONFIG_MIVYBRIDGE) += -march=i686 $(call tune,ivybridge)
++cflags-$(CONFIG_MHASWELL) += -march=i686 $(call tune,haswell)
++cflags-$(CONFIG_MBROADWELL) += -march=i686 $(call tune,broadwell)
++cflags-$(CONFIG_MSKYLAKE) += -march=i686 $(call tune,skylake)
++cflags-$(CONFIG_MSKYLAKEX) += -march=i686 $(call tune,skylake-avx512)
++cflags-$(CONFIG_MCANNONLAKE) += -march=i686 $(call tune,cannonlake)
++cflags-$(CONFIG_MICELAKE) += -march=i686 $(call tune,icelake-client)
++cflags-$(CONFIG_MCASCADELAKE) += -march=i686 $(call tune,cascadelake)
++cflags-$(CONFIG_MATOM) += $(call cc-option,-march=bonnell,$(call cc-option,-march=core2,-march=i686)) \
++ $(call cc-option,-mtune=bonnell,$(call cc-option,-mtune=generic))
+
+ # AMD Elan support
+ cflags-$(CONFIG_MELAN) += -march=i486
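The hunks above lean heavily on Kbuild's `cc-option` helper: try a flag, fall back to a safe one if the compiler rejects it. A minimal standalone sketch of that behavior (not Kbuild itself; `CC` defaulting to `cc` is an assumption here):

```shell
# Sketch of $(call cc-option,<flag>,<fallback>): probe whether the compiler
# accepts the first flag and emit it, otherwise emit the fallback.
CC=${CC:-cc}
cc_option() {
    if $CC -Werror "$1" -c -x c /dev/null -o /dev/null 2>/dev/null; then
        printf '%s\n' "$1"
    else
        printf '%s\n' "$2"
    fi
}
# e.g. on a toolchain without znver2 support this falls back to -march=athlon:
cc_option -march=znver2 -march=athlon
```

This is why every new CPU entry above can safely name a recent `-march=` value: older compilers silently get the older flag.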
* [gentoo-commits] proj/linux-patches:5.8 commit in: /
@ 2020-08-03 11:35 Mike Pagano
0 siblings, 0 replies; 30+ messages in thread
From: Mike Pagano @ 2020-08-03 11:35 UTC (permalink / raw
To: gentoo-commits
commit: a44f851975ac335d3c7565abb15f31cca1b38c6c
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Mon Aug 3 11:35:05 2020 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Mon Aug 3 11:35:05 2020 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=a44f8519
Update ZSTD Patch
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 16 ++--
...ZSTD-v10-1-8-prepare-zstd-for-preboot-env.patch | 17 ----
...TD-v10-2-8-prepare-xxhash-for-preboot-env.patch | 29 ++++---
...STD-v5-2-8-prepare-xxhash-for-preboot-env.patch | 94 ----------------------
...TD-v10-3-8-add-zstd-support-to-decompress.patch | 50 +++++++++---
...v10-4-8-add-support-for-zstd-compres-kern.patch | 0
...add-support-for-zstd-compressed-initramfs.patch | 0
| 49 ++++++++---
...10-7-8-support-for-ZSTD-compressed-kernel.patch | 2 +-
...0-8-8-gitignore-add-ZSTD-compressed-files.patch | 12 +++
10 files changed, 116 insertions(+), 153 deletions(-)
diff --git a/0000_README b/0000_README
index b9ce21a..6e07572 100644
--- a/0000_README
+++ b/0000_README
@@ -71,35 +71,35 @@ Patch: 4567_distro-Gentoo-Kconfig.patch
From: Tom Wijsman <TomWij@gentoo.org>
Desc: Add Gentoo Linux support config settings and defaults.
-Patch: 5000_ZSTD-v5-1-8-prepare-zstd-for-preboot-env.patch
+Patch: 5000_ZSTD-v10-1-8-prepare-zstd-for-preboot-env.patch
From: https://lkml.org/lkml/2020/4/1/29
Desc: lib: prepare zstd for preboot environment
-Patch: 5001_ZSTD-v5-2-8-prepare-xxhash-for-preboot-env.patch
+Patch: 5001_ZSTD-v10-2-8-prepare-xxhash-for-preboot-env.patch
From: https://lkml.org/lkml/2020/4/1/29
Desc: lib: prepare xxhash for preboot environment
-Patch: 5002_ZSTD-v5-3-8-add-zstd-support-to-decompress.patch
+Patch: 5002_ZSTD-v10-3-8-add-zstd-support-to-decompress.patch
From: https://lkml.org/lkml/2020/4/1/29
Desc: lib: add zstd support to decompress
-Patch: 5003_ZSTD-v5-4-8-add-support-for-zstd-compres-kern.patch
+Patch: 5003_ZSTD-v10-4-8-add-support-for-zstd-compres-kern.patch
From: https://lkml.org/lkml/2020/4/1/29
Desc: init: add support for zstd compressed kernel
-Patch: 5004_ZSTD-v5-5-8-add-support-for-zstd-compressed-initramfs.patch
+Patch: 5004_ZSTD-v10-5-8-add-support-for-zstd-compressed-initramfs.patch
From: https://lkml.org/lkml/2020/4/1/29
Desc: usr: add support for zstd compressed initramfs
-Patch: 5005_ZSTD-v5-6-8-bump-ZO-z-extra-bytes-margin.patch
+Patch: 5005_ZSTD-v10-6-8-bump-ZO-z-extra-bytes-margin.patch
From: https://lkml.org/lkml/2020/4/1/29
Desc: x86: bump ZO_z_extra_bytes margin for zstd
-Patch: 5006_ZSTD-v5-7-8-support-for-ZSTD-compressed-kernel.patch
+Patch: 5006_ZSTD-v10-7-8-support-for-ZSTD-compressed-kernel.patch
From: https://lkml.org/lkml/2020/4/1/29
Desc: x86: Add support for ZSTD compressed kernel
-Patch: 5007_ZSTD-v5-8-8-gitignore-add-ZSTD-compressed-files.patch
+Patch: 5007_ZSTD-v10-8-8-gitignore-add-ZSTD-compressed-files.patch
From: https://lkml.org/lkml/2020/4/1/29
Desc: .gitignore: add ZSTD-compressed files
diff --git a/5000_ZSTD-v5-1-8-prepare-zstd-for-preboot-env.patch b/5000_ZSTD-v10-1-8-prepare-zstd-for-preboot-env.patch
similarity index 82%
rename from 5000_ZSTD-v5-1-8-prepare-zstd-for-preboot-env.patch
rename to 5000_ZSTD-v10-1-8-prepare-zstd-for-preboot-env.patch
index 297a8d4..c13b091 100644
--- a/5000_ZSTD-v5-1-8-prepare-zstd-for-preboot-env.patch
+++ b/5000_ZSTD-v10-1-8-prepare-zstd-for-preboot-env.patch
@@ -1,20 +1,3 @@
-diff --git a/lib/zstd/decompress.c b/lib/zstd/decompress.c
-index 269ee9a796c1..73ded63278cf 100644
---- a/lib/zstd/decompress.c
-+++ b/lib/zstd/decompress.c
-@@ -2490,6 +2490,7 @@ size_t ZSTD_decompressStream(ZSTD_DStream *zds, ZSTD_outBuffer *output, ZSTD_inB
- }
- }
-
-+#ifndef ZSTD_PREBOOT
- EXPORT_SYMBOL(ZSTD_DCtxWorkspaceBound);
- EXPORT_SYMBOL(ZSTD_initDCtx);
- EXPORT_SYMBOL(ZSTD_decompressDCtx);
-@@ -2529,3 +2530,4 @@ EXPORT_SYMBOL(ZSTD_insertBlock);
-
- MODULE_LICENSE("Dual BSD/GPL");
- MODULE_DESCRIPTION("Zstd Decompressor");
-+#endif
diff --git a/lib/zstd/fse_decompress.c b/lib/zstd/fse_decompress.c
index a84300e5a013..0b353530fb3f 100644
--- a/lib/zstd/fse_decompress.c
diff --git a/5002_ZSTD-v5-3-8-add-zstd-support-to-decompress.patch b/5001_ZSTD-v10-2-8-prepare-xxhash-for-preboot-env.patch
similarity index 94%
rename from 5002_ZSTD-v5-3-8-add-zstd-support-to-decompress.patch
rename to 5001_ZSTD-v10-2-8-prepare-xxhash-for-preboot-env.patch
index 1c22fa3..b18164c 100644
--- a/5002_ZSTD-v5-3-8-add-zstd-support-to-decompress.patch
+++ b/5001_ZSTD-v10-2-8-prepare-xxhash-for-preboot-env.patch
@@ -16,10 +16,10 @@ index 000000000000..56d539ae880f
+ void (*error_fn)(char *x));
+#endif
diff --git a/lib/Kconfig b/lib/Kconfig
-index 5d53f9609c25..e883aecb9279 100644
+index df3f3da95990..a5d6f23c4cab 100644
--- a/lib/Kconfig
+++ b/lib/Kconfig
-@@ -336,6 +336,10 @@ config DECOMPRESS_LZ4
+@@ -342,6 +342,10 @@ config DECOMPRESS_LZ4
select LZ4_DECOMPRESS
tristate
@@ -31,10 +31,10 @@ index 5d53f9609c25..e883aecb9279 100644
# Generic allocator support is selected if needed
#
diff --git a/lib/Makefile b/lib/Makefile
-index ab68a8674360..3ce4ac296611 100644
+index b1c42c10073b..2ba9642a3a87 100644
--- a/lib/Makefile
+++ b/lib/Makefile
-@@ -166,6 +166,7 @@ lib-$(CONFIG_DECOMPRESS_LZMA) += decompress_unlzma.o
+@@ -170,6 +170,7 @@ lib-$(CONFIG_DECOMPRESS_LZMA) += decompress_unlzma.o
lib-$(CONFIG_DECOMPRESS_XZ) += decompress_unxz.o
lib-$(CONFIG_DECOMPRESS_LZO) += decompress_unlzo.o
lib-$(CONFIG_DECOMPRESS_LZ4) += decompress_unlz4.o
@@ -74,10 +74,10 @@ index 857ab1af1ef3..ab3fc90ffc64 100644
diff --git a/lib/decompress_unzstd.c b/lib/decompress_unzstd.c
new file mode 100644
-index 000000000000..f317afab502f
+index 000000000000..0ad2c15479ed
--- /dev/null
+++ b/lib/decompress_unzstd.c
-@@ -0,0 +1,342 @@
+@@ -0,0 +1,345 @@
+// SPDX-License-Identifier: GPL-2.0
+
+/*
@@ -139,12 +139,14 @@ index 000000000000..f317afab502f
+ * zstd's only source dependeny is xxhash, which has no source
+ * dependencies.
+ *
-+ * zstd and xxhash avoid declaring themselves as modules
-+ * when ZSTD_PREBOOT and XXH_PREBOOT are defined.
++ * When UNZSTD_PREBOOT is defined we declare __decompress(), which is
++ * used for kernel decompression, instead of unzstd().
++ *
++ * Define __DISABLE_EXPORTS in preboot environments to prevent symbols
++ * from xxhash and zstd from being exported by the EXPORT_SYMBOL macro.
+ */
+#ifdef STATIC
-+# define ZSTD_PREBOOT
-+# define XXH_PREBOOT
++# define UNZSTD_PREBOOT
+# include "xxhash.c"
+# include "zstd/entropy_common.c"
+# include "zstd/fse_decompress.c"
@@ -159,10 +161,11 @@ index 000000000000..f317afab502f
+
+/* 128MB is the maximum window size supported by zstd. */
+#define ZSTD_WINDOWSIZE_MAX (1 << ZSTD_WINDOWLOG_MAX)
-+/* Size of the input and output buffers in multi-call mode.
++/*
++ * Size of the input and output buffers in multi-call mode.
+ * Pick a larger size because it isn't used during kernel decompression,
+ * since that is single pass, and we have to allocate a large buffer for
-+ * zstd's window anyways. The larger size speeds up initramfs decompression.
++ * zstd's window anyway. The larger size speeds up initramfs decompression.
+ */
+#define ZSTD_IOBUF_SIZE (1 << 17)
+
@@ -399,7 +402,7 @@ index 000000000000..f317afab502f
+ return err;
+}
+
-+#ifndef ZSTD_PREBOOT
++#ifndef UNZSTD_PREBOOT
+STATIC int INIT unzstd(unsigned char *buf, long len,
+ long (*fill)(void*, unsigned long),
+ long (*flush)(void*, unsigned long),
diff --git a/5001_ZSTD-v5-2-8-prepare-xxhash-for-preboot-env.patch b/5001_ZSTD-v5-2-8-prepare-xxhash-for-preboot-env.patch
deleted file mode 100644
index 88e4674..0000000
--- a/5001_ZSTD-v5-2-8-prepare-xxhash-for-preboot-env.patch
+++ /dev/null
@@ -1,94 +0,0 @@
-diff --git a/lib/xxhash.c b/lib/xxhash.c
-index aa61e2a3802f..b4364e011392 100644
---- a/lib/xxhash.c
-+++ b/lib/xxhash.c
-@@ -80,13 +80,11 @@ void xxh32_copy_state(struct xxh32_state *dst, const struct xxh32_state *src)
- {
- memcpy(dst, src, sizeof(*dst));
- }
--EXPORT_SYMBOL(xxh32_copy_state);
-
- void xxh64_copy_state(struct xxh64_state *dst, const struct xxh64_state *src)
- {
- memcpy(dst, src, sizeof(*dst));
- }
--EXPORT_SYMBOL(xxh64_copy_state);
-
- /*-***************************
- * Simple Hash Functions
-@@ -151,7 +149,6 @@ uint32_t xxh32(const void *input, const size_t len, const uint32_t seed)
-
- return h32;
- }
--EXPORT_SYMBOL(xxh32);
-
- static uint64_t xxh64_round(uint64_t acc, const uint64_t input)
- {
-@@ -234,7 +231,6 @@ uint64_t xxh64(const void *input, const size_t len, const uint64_t seed)
-
- return h64;
- }
--EXPORT_SYMBOL(xxh64);
-
- /*-**************************************************
- * Advanced Hash Functions
-@@ -251,7 +247,6 @@ void xxh32_reset(struct xxh32_state *statePtr, const uint32_t seed)
- state.v4 = seed - PRIME32_1;
- memcpy(statePtr, &state, sizeof(state));
- }
--EXPORT_SYMBOL(xxh32_reset);
-
- void xxh64_reset(struct xxh64_state *statePtr, const uint64_t seed)
- {
-@@ -265,7 +260,6 @@ void xxh64_reset(struct xxh64_state *statePtr, const uint64_t seed)
- state.v4 = seed - PRIME64_1;
- memcpy(statePtr, &state, sizeof(state));
- }
--EXPORT_SYMBOL(xxh64_reset);
-
- int xxh32_update(struct xxh32_state *state, const void *input, const size_t len)
- {
-@@ -334,7 +328,6 @@ int xxh32_update(struct xxh32_state *state, const void *input, const size_t len)
-
- return 0;
- }
--EXPORT_SYMBOL(xxh32_update);
-
- uint32_t xxh32_digest(const struct xxh32_state *state)
- {
-@@ -372,7 +365,6 @@ uint32_t xxh32_digest(const struct xxh32_state *state)
-
- return h32;
- }
--EXPORT_SYMBOL(xxh32_digest);
-
- int xxh64_update(struct xxh64_state *state, const void *input, const size_t len)
- {
-@@ -439,7 +431,6 @@ int xxh64_update(struct xxh64_state *state, const void *input, const size_t len)
-
- return 0;
- }
--EXPORT_SYMBOL(xxh64_update);
-
- uint64_t xxh64_digest(const struct xxh64_state *state)
- {
-@@ -494,7 +485,19 @@ uint64_t xxh64_digest(const struct xxh64_state *state)
-
- return h64;
- }
-+
-+#ifndef XXH_PREBOOT
-+EXPORT_SYMBOL(xxh32_copy_state);
-+EXPORT_SYMBOL(xxh64_copy_state);
-+EXPORT_SYMBOL(xxh32);
-+EXPORT_SYMBOL(xxh64);
-+EXPORT_SYMBOL(xxh32_reset);
-+EXPORT_SYMBOL(xxh64_reset);
-+EXPORT_SYMBOL(xxh32_update);
-+EXPORT_SYMBOL(xxh32_digest);
-+EXPORT_SYMBOL(xxh64_update);
- EXPORT_SYMBOL(xxh64_digest);
-
- MODULE_LICENSE("Dual BSD/GPL");
- MODULE_DESCRIPTION("xxHash");
-+#endif
diff --git a/5003_ZSTD-v5-4-8-add-support-for-zstd-compres-kern.patch b/5002_ZSTD-v10-3-8-add-zstd-support-to-decompress.patch
similarity index 52%
rename from 5003_ZSTD-v5-4-8-add-support-for-zstd-compres-kern.patch
rename to 5002_ZSTD-v10-3-8-add-zstd-support-to-decompress.patch
index d9dc79e..a277f5e 100644
--- a/5003_ZSTD-v5-4-8-add-support-for-zstd-compres-kern.patch
+++ b/5002_ZSTD-v10-3-8-add-zstd-support-to-decompress.patch
@@ -1,8 +1,29 @@
+diff --git a/Makefile b/Makefile
+index 229e67f2ff75..565084f347bd 100644
+--- a/Makefile
++++ b/Makefile
+@@ -464,6 +464,7 @@ KLZOP = lzop
+ LZMA = lzma
+ LZ4 = lz4c
+ XZ = xz
++ZSTD = zstd
+
+ CHECKFLAGS := -D__linux__ -Dlinux -D__STDC__ -Dunix -D__unix__ \
+ -Wbitwise -Wno-return-void -Wno-unknown-attribute $(CF)
+@@ -512,7 +513,7 @@ CLANG_FLAGS :=
+ export ARCH SRCARCH CONFIG_SHELL BASH HOSTCC KBUILD_HOSTCFLAGS CROSS_COMPILE LD CC
+ export CPP AR NM STRIP OBJCOPY OBJDUMP OBJSIZE READELF PAHOLE LEX YACC AWK INSTALLKERNEL
+ export PERL PYTHON PYTHON3 CHECK CHECKFLAGS MAKE UTS_MACHINE HOSTCXX
+-export KGZIP KBZIP2 KLZOP LZMA LZ4 XZ
++export KGZIP KBZIP2 KLZOP LZMA LZ4 XZ ZSTD
+ export KBUILD_HOSTCXXFLAGS KBUILD_HOSTLDFLAGS KBUILD_HOSTLDLIBS LDFLAGS_MODULE
+
+ export KBUILD_CPPFLAGS NOSTDINC_FLAGS LINUXINCLUDE OBJCOPYFLAGS KBUILD_LDFLAGS
diff --git a/init/Kconfig b/init/Kconfig
-index 492bb7000aa4..806874fdd663 100644
+index 0498af567f70..2b6409fec53f 100644
--- a/init/Kconfig
+++ b/init/Kconfig
-@@ -176,13 +176,16 @@ config HAVE_KERNEL_LZO
+@@ -191,13 +191,16 @@ config HAVE_KERNEL_LZO
config HAVE_KERNEL_LZ4
bool
@@ -20,7 +41,7 @@ index 492bb7000aa4..806874fdd663 100644
help
The linux kernel is a kind of self-extracting executable.
Several compression algorithms are available, which differ
-@@ -261,6 +264,16 @@ config KERNEL_LZ4
+@@ -276,6 +279,16 @@ config KERNEL_LZ4
is about 8% bigger than LZO. But the decompression speed is
faster than LZO.
@@ -32,18 +53,18 @@ index 492bb7000aa4..806874fdd663 100644
+ with fast decompression speed. It will compress better than GZIP and
+ decompress around the same speed as LZO, but slower than LZ4. You
+ will need at least 192 KB RAM or more for booting. The zstd command
-+ line tools is required for compression.
++ line tool is required for compression.
+
config KERNEL_UNCOMPRESSED
bool "None"
depends on HAVE_KERNEL_UNCOMPRESSED
diff --git a/scripts/Makefile.lib b/scripts/Makefile.lib
-index b12dd5ba4896..efe69b78d455 100644
+index 916b2f7f7098..54f7b7eb580b 100644
--- a/scripts/Makefile.lib
+++ b/scripts/Makefile.lib
-@@ -405,6 +405,21 @@ quiet_cmd_xzkern = XZKERN $@
+@@ -413,6 +413,28 @@ quiet_cmd_xzkern = XZKERN $@
quiet_cmd_xzmisc = XZMISC $@
- cmd_xzmisc = cat $(real-prereqs) | xz --check=crc32 --lzma2=dict=1MiB > $@
+ cmd_xzmisc = cat $(real-prereqs) | $(XZ) --check=crc32 --lzma2=dict=1MiB > $@
+# ZSTD
+# ---------------------------------------------------------------------------
@@ -51,14 +72,21 @@ index b12dd5ba4896..efe69b78d455 100644
+# format has the size information available at the beginning of the file too,
+# but it's in a more complex format and it's good to avoid changing the part
+# of the boot code that reads the uncompressed size.
++#
+# Note that the bytes added by size_append will make the zstd tool think that
+# the file is corrupt. This is expected.
++#
++# zstd uses a maximum window size of 8 MB. zstd22 uses a maximum window size of
++# 128 MB. zstd22 is used for kernel compression because it is decompressed in a
++# single pass, so zstd doesn't need to allocate a window buffer. When streaming
++# decompression is used, like initramfs decompression, zstd22 should likely not
++# be used because it would require zstd to allocate a 128 MB buffer.
+
+quiet_cmd_zstd = ZSTD $@
-+cmd_zstd = (cat $(filter-out FORCE,$^) | \
-+ zstd -19 && \
-+ $(call size_append, $(filter-out FORCE,$^))) > $@ || \
-+ (rm -f $@ ; false)
++ cmd_zstd = { cat $(real-prereqs) | $(ZSTD) -19; $(size_append); } > $@
++
++quiet_cmd_zstd22 = ZSTD22 $@
++ cmd_zstd22 = { cat $(real-prereqs) | $(ZSTD) -22 --ultra; $(size_append); } > $@
+
# ASM offsets
# ---------------------------------------------------------------------------
diff --git a/5004_ZSTD-v5-5-8-add-support-for-zstd-compressed-initramfs.patch b/5003_ZSTD-v10-4-8-add-support-for-zstd-compres-kern.patch
similarity index 100%
rename from 5004_ZSTD-v5-5-8-add-support-for-zstd-compressed-initramfs.patch
rename to 5003_ZSTD-v10-4-8-add-support-for-zstd-compres-kern.patch
diff --git a/5005_ZSTD-v5-6-8-bump-ZO-z-extra-bytes-margin.patch b/5004_ZSTD-v10-5-8-add-support-for-zstd-compressed-initramfs.patch
similarity index 100%
rename from 5005_ZSTD-v5-6-8-bump-ZO-z-extra-bytes-margin.patch
rename to 5004_ZSTD-v10-5-8-add-support-for-zstd-compressed-initramfs.patch
diff --git a/5006_ZSTD-v5-7-8-support-for-ZSTD-compressed-kernel.patch b/5005_ZSTD-v10-6-8-bump-ZO-z-extra-bytes-margin.patch
similarity index 67%
rename from 5006_ZSTD-v5-7-8-support-for-ZSTD-compressed-kernel.patch
rename to 5005_ZSTD-v10-6-8-bump-ZO-z-extra-bytes-margin.patch
index 6147136..c9615c0 100644
--- a/5006_ZSTD-v5-7-8-support-for-ZSTD-compressed-kernel.patch
+++ b/5005_ZSTD-v10-6-8-bump-ZO-z-extra-bytes-margin.patch
@@ -1,5 +1,5 @@
diff --git a/Documentation/x86/boot.rst b/Documentation/x86/boot.rst
-index fa7ddc0428c8..0404e99dc1d4 100644
+index 5325c71ca877..7fafc7ac00d7 100644
--- a/Documentation/x86/boot.rst
+++ b/Documentation/x86/boot.rst
@@ -782,9 +782,9 @@ Protocol: 2.08+
@@ -16,10 +16,10 @@ index fa7ddc0428c8..0404e99dc1d4 100644
============ ==============
Field name: payload_length
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
-index 886fa8368256..912f783bc01a 100644
+index 883da0abf779..4a64395bc35d 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
-@@ -185,6 +185,7 @@ config X86
+@@ -188,6 +188,7 @@ config X86
select HAVE_KERNEL_LZMA
select HAVE_KERNEL_LZO
select HAVE_KERNEL_XZ
@@ -28,7 +28,7 @@ index 886fa8368256..912f783bc01a 100644
select HAVE_KPROBES_ON_FTRACE
select HAVE_FUNCTION_ERROR_INJECTION
diff --git a/arch/x86/boot/compressed/Makefile b/arch/x86/boot/compressed/Makefile
-index 7619742f91c9..471e61400a2e 100644
+index 5a828fde7a42..c08714ae76ec 100644
--- a/arch/x86/boot/compressed/Makefile
+++ b/arch/x86/boot/compressed/Makefile
@@ -26,7 +26,7 @@ OBJECT_FILES_NON_STANDARD := y
@@ -40,16 +40,24 @@ index 7619742f91c9..471e61400a2e 100644
KBUILD_CFLAGS := -m$(BITS) -O2
KBUILD_CFLAGS += -fno-strict-aliasing $(call cc-option, -fPIE, -fPIC)
-@@ -145,6 +145,8 @@ $(obj)/vmlinux.bin.lzo: $(vmlinux.bin.all-y) FORCE
+@@ -42,6 +42,7 @@ KBUILD_CFLAGS += $(call cc-disable-warning, gnu)
+ KBUILD_CFLAGS += -Wno-pointer-sign
+ KBUILD_CFLAGS += $(call cc-option,-fmacro-prefix-map=$(srctree)/=)
+ KBUILD_CFLAGS += -fno-asynchronous-unwind-tables
++KBUILD_CFLAGS += -D__DISABLE_EXPORTS
+
+ KBUILD_AFLAGS := $(KBUILD_CFLAGS) -D__ASSEMBLY__
+ GCOV_PROFILE := n
+@@ -145,6 +146,8 @@ $(obj)/vmlinux.bin.lzo: $(vmlinux.bin.all-y) FORCE
$(call if_changed,lzo)
$(obj)/vmlinux.bin.lz4: $(vmlinux.bin.all-y) FORCE
$(call if_changed,lz4)
+$(obj)/vmlinux.bin.zst: $(vmlinux.bin.all-y) FORCE
-+ $(call if_changed,zstd)
++ $(call if_changed,zstd22)
suffix-$(CONFIG_KERNEL_GZIP) := gz
suffix-$(CONFIG_KERNEL_BZIP2) := bz2
-@@ -152,6 +154,7 @@ suffix-$(CONFIG_KERNEL_LZMA) := lzma
+@@ -152,6 +155,7 @@ suffix-$(CONFIG_KERNEL_LZMA) := lzma
suffix-$(CONFIG_KERNEL_XZ) := xz
suffix-$(CONFIG_KERNEL_LZO) := lzo
suffix-$(CONFIG_KERNEL_LZ4) := lz4
@@ -57,6 +65,24 @@ index 7619742f91c9..471e61400a2e 100644
quiet_cmd_mkpiggy = MKPIGGY $@
cmd_mkpiggy = $(obj)/mkpiggy $< > $@
+diff --git a/arch/x86/boot/compressed/kaslr.c b/arch/x86/boot/compressed/kaslr.c
+index d7408af55738..0048269180d5 100644
+--- a/arch/x86/boot/compressed/kaslr.c
++++ b/arch/x86/boot/compressed/kaslr.c
+@@ -19,13 +19,6 @@
+ */
+ #define BOOT_CTYPE_H
+
+-/*
+- * _ctype[] in lib/ctype.c is needed by isspace() of linux/ctype.h.
+- * While both lib/ctype.c and lib/cmdline.c will bring EXPORT_SYMBOL
+- * which is meaningless and will cause compiling error in some cases.
+- */
+-#define __DISABLE_EXPORTS
+-
+ #include "misc.h"
+ #include "error.h"
+ #include "../string.h"
diff --git a/arch/x86/boot/compressed/misc.c b/arch/x86/boot/compressed/misc.c
index 9652d5c2afda..39e592d0e0b4 100644
--- a/arch/x86/boot/compressed/misc.c
@@ -73,10 +99,10 @@ index 9652d5c2afda..39e592d0e0b4 100644
* NOTE: When adding a new decompressor, please update the analysis in
* ../header.S.
diff --git a/arch/x86/include/asm/boot.h b/arch/x86/include/asm/boot.h
-index 680c320363db..d6dd43d25d9f 100644
+index 680c320363db..9191280d9ea3 100644
--- a/arch/x86/include/asm/boot.h
+++ b/arch/x86/include/asm/boot.h
-@@ -24,9 +24,11 @@
+@@ -24,9 +24,16 @@
# error "Invalid value for CONFIG_PHYSICAL_ALIGN"
#endif
@@ -85,6 +111,11 @@ index 680c320363db..d6dd43d25d9f 100644
# define BOOT_HEAP_SIZE 0x400000
-#else /* !CONFIG_KERNEL_BZIP2 */
+#elif defined(CONFIG_KERNEL_ZSTD)
++/*
++ * Zstd needs to allocate the ZSTD_DCtx in order to decompress the kernel.
++ * The ZSTD_DCtx is ~160KB, so set the heap size to 192KB because it is a
++ * round number and to allow some slack.
++ */
+# define BOOT_HEAP_SIZE 0x30000
+#else
# define BOOT_HEAP_SIZE 0x10000
diff --git a/5007_ZSTD-v5-8-8-gitignore-add-ZSTD-compressed-files.patch b/5006_ZSTD-v10-7-8-support-for-ZSTD-compressed-kernel.patch
similarity index 80%
rename from 5007_ZSTD-v5-8-8-gitignore-add-ZSTD-compressed-files.patch
rename to 5006_ZSTD-v10-7-8-support-for-ZSTD-compressed-kernel.patch
index adf8578..ec12df5 100644
--- a/5007_ZSTD-v5-8-8-gitignore-add-ZSTD-compressed-files.patch
+++ b/5006_ZSTD-v10-7-8-support-for-ZSTD-compressed-kernel.patch
@@ -1,5 +1,5 @@
diff --git a/.gitignore b/.gitignore
-index 2258e906f01c..23871de69072 100644
+index d5f4804ed07c..162bd2b67bdf 100644
--- a/.gitignore
+++ b/.gitignore
@@ -44,6 +44,7 @@
diff --git a/5007_ZSTD-v10-8-8-gitignore-add-ZSTD-compressed-files.patch b/5007_ZSTD-v10-8-8-gitignore-add-ZSTD-compressed-files.patch
new file mode 100644
index 0000000..3c9ea69
--- /dev/null
+++ b/5007_ZSTD-v10-8-8-gitignore-add-ZSTD-compressed-files.patch
@@ -0,0 +1,12 @@
+diff --git a/Documentation/dontdiff b/Documentation/dontdiff
+index ef9519c32c55..e361fc95ca29 100644
+--- a/Documentation/dontdiff
++++ b/Documentation/dontdiff
+@@ -55,6 +55,7 @@
+ *.ver
+ *.xml
+ *.xz
++*.zst
+ *_MODULES
+ *_vga16.c
+ *~
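The `cmd_zstd`/`cmd_zstd22` rules updated above pipe the input through zstd and then append the uncompressed size via Kbuild's `size_append`, so the boot code can read the payload size from the end of the file. A rough shell sketch of that size-append step (the function name is illustrative, not the actual Kbuild macro):

```shell
# Append a file's current size to it as a 32-bit little-endian integer,
# mimicking what Kbuild's size_append does after compression.
append_le32_size() {
    size=$(wc -c < "$1")
    b0=$(( size        & 0xff))
    b1=$(((size >>  8) & 0xff))
    b2=$(((size >> 16) & 0xff))
    b3=$(((size >> 24) & 0xff))
    # emit the four bytes via octal escapes, lowest byte first
    printf "$(printf '\\%03o\\%03o\\%03o\\%03o' "$b0" "$b1" "$b2" "$b3")" >> "$1"
}
```

As the patch comment notes, these trailing bytes make the `zstd` tool itself report the file as corrupt; that is expected.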
* [gentoo-commits] proj/linux-patches:5.8 commit in: /
@ 2020-08-03 14:42 Mike Pagano
0 siblings, 0 replies; 30+ messages in thread
From: Mike Pagano @ 2020-08-03 14:42 UTC (permalink / raw
To: gentoo-commits
commit: 1ca9312bc86801564378e2465414c6ba223d5bd3
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Mon Aug 3 14:42:01 2020 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Mon Aug 3 14:42:01 2020 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=1ca9312b
Remove broken patch (gcc opts for gcc 9.1.X)
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 -
5012_enable-cpu-optimizations-for-gcc91.patch | 641 --------------------------
2 files changed, 645 deletions(-)
diff --git a/0000_README b/0000_README
index 6e07572..27a700b 100644
--- a/0000_README
+++ b/0000_README
@@ -103,10 +103,6 @@ Patch: 5007_ZSTD-v10-8-8-gitignore-add-ZSTD-compressed-files.patch
From: https://lkml.org/lkml/2020/4/1/29
Desc: .gitignore: add ZSTD-compressed files
-Patch: 5012_enable-cpu-optimizations-for-gcc91.patch
-From: https://github.com/graysky2/kernel_gcc_patch/
-Desc: Kernel patch enables gcc = v9.1+ optimizations for additional CPUs.
-
Patch: 5013_enable-cpu-optimizations-for-gcc10.patch
From: https://github.com/graysky2/kernel_gcc_patch/
Desc: Kernel patch enables gcc = v10.1+ optimizations for additional CPUs.
diff --git a/5012_enable-cpu-optimizations-for-gcc91.patch b/5012_enable-cpu-optimizations-for-gcc91.patch
deleted file mode 100644
index 2f16153..0000000
--- a/5012_enable-cpu-optimizations-for-gcc91.patch
+++ /dev/null
@@ -1,641 +0,0 @@
-WARNING
-This patch works with gcc versions 9.1+ and with kernel version 5.7+ and should
-NOT be applied when compiling on older versions of gcc due to key name changes
-of the march flags introduced with the version 4.9 release of gcc.[1]
-
-Use the older version of this patch hosted on the same github for older
-versions of gcc.
-
-FEATURES
-This patch adds additional CPU options to the Linux kernel accessible under:
- Processor type and features --->
- Processor family --->
-
-The expanded microarchitectures include:
-* AMD Improved K8-family
-* AMD K10-family
-* AMD Family 10h (Barcelona)
-* AMD Family 14h (Bobcat)
-* AMD Family 16h (Jaguar)
-* AMD Family 15h (Bulldozer)
-* AMD Family 15h (Piledriver)
-* AMD Family 15h (Steamroller)
-* AMD Family 15h (Excavator)
-* AMD Family 17h (Zen)
-* AMD Family 17h (Zen 2)
-* Intel Silvermont low-power processors
-* Intel Goldmont low-power processors (Apollo Lake and Denverton)
-* Intel Goldmont Plus low-power processors (Gemini Lake)
-* Intel 1st Gen Core i3/i5/i7 (Nehalem)
-* Intel 1.5 Gen Core i3/i5/i7 (Westmere)
-* Intel 2nd Gen Core i3/i5/i7 (Sandybridge)
-* Intel 3rd Gen Core i3/i5/i7 (Ivybridge)
-* Intel 4th Gen Core i3/i5/i7 (Haswell)
-* Intel 5th Gen Core i3/i5/i7 (Broadwell)
-* Intel 6th Gen Core i3/i5/i7 (Skylake)
-* Intel 6th Gen Core i7/i9 (Skylake X)
-* Intel 8th Gen Core i3/i5/i7 (Cannon Lake)
-* Intel 10th Gen Core i7/i9 (Ice Lake)
-* Intel Xeon (Cascade Lake)
-
-It also offers to compile passing the 'native' option which, "selects the CPU
-to generate code for at compilation time by determining the processor type of
-the compiling machine. Using -march=native enables all instruction subsets
-supported by the local machine and will produce code optimized for the local
-machine under the constraints of the selected instruction set."[2]
-
-Do NOT try using the 'native' option on AMD Piledriver, Steamroller, or
-Excavator CPUs (-march=bdver{2,3,4} flag). The build will error out due to the
-kernel's objtool issue with these.[3a,b]
-
-MINOR NOTES
-This patch also changes 'atom' to 'bonnell' in accordance with the gcc v4.9
-changes. Note that upstream is using the deprecated 'march=atom' flags when I
-believe it should use the newer 'march=bonnell' flag for atom processors.[4]
-
-It is not recommended to compile on Atom-CPUs with the 'native' option.[5] The
-recommendation is to use the 'atom' option instead.
-
-BENEFITS
-Small but real speed increases are measurable using a make endpoint comparing
-a generic kernel to one built with one of the respective microarchs.
-
-See the following experimental evidence supporting this statement:
-https://github.com/graysky2/kernel_gcc_patch
-
-REQUIREMENTS
-linux version >=5.7
-gcc version >=9.1 and <10
-
-ACKNOWLEDGMENTS
-This patch builds on the seminal work by Jeroen.[6]
-
-REFERENCES
-1. https://gcc.gnu.org/gcc-4.9/changes.html
-2. https://gcc.gnu.org/onlinedocs/gcc/x86-Options.html
-3a. https://gcc.gnu.org/bugzilla/show_bug.cgi?id=95671#c11
-3b. https://github.com/graysky2/kernel_gcc_patch/issues/55
-4. https://bugzilla.kernel.org/show_bug.cgi?id=77461
-5. https://github.com/graysky2/kernel_gcc_patch/issues/15
-6. http://www.linuxforge.net/docs/linux/linux-gcc.php
-
---- a/arch/x86/include/asm/vermagic.h 2020-06-10 14:21:45.000000000 -0400
-+++ b/arch/x86/include/asm/vermagic.h 2020-06-15 10:44:10.437477053 -0400
-@@ -17,6 +17,36 @@
- #define MODULE_PROC_FAMILY "586MMX "
- #elif defined CONFIG_MCORE2
- #define MODULE_PROC_FAMILY "CORE2 "
-+#elif defined CONFIG_MNATIVE
-+#define MODULE_PROC_FAMILY "NATIVE "
-+#elif defined CONFIG_MNEHALEM
-+#define MODULE_PROC_FAMILY "NEHALEM "
-+#elif defined CONFIG_MWESTMERE
-+#define MODULE_PROC_FAMILY "WESTMERE "
-+#elif defined CONFIG_MSILVERMONT
-+#define MODULE_PROC_FAMILY "SILVERMONT "
-+#elif defined CONFIG_MGOLDMONT
-+#define MODULE_PROC_FAMILY "GOLDMONT "
-+#elif defined CONFIG_MGOLDMONTPLUS
-+#define MODULE_PROC_FAMILY "GOLDMONTPLUS "
-+#elif defined CONFIG_MSANDYBRIDGE
-+#define MODULE_PROC_FAMILY "SANDYBRIDGE "
-+#elif defined CONFIG_MIVYBRIDGE
-+#define MODULE_PROC_FAMILY "IVYBRIDGE "
-+#elif defined CONFIG_MHASWELL
-+#define MODULE_PROC_FAMILY "HASWELL "
-+#elif defined CONFIG_MBROADWELL
-+#define MODULE_PROC_FAMILY "BROADWELL "
-+#elif defined CONFIG_MSKYLAKE
-+#define MODULE_PROC_FAMILY "SKYLAKE "
-+#elif defined CONFIG_MSKYLAKEX
-+#define MODULE_PROC_FAMILY "SKYLAKEX "
-+#elif defined CONFIG_MCANNONLAKE
-+#define MODULE_PROC_FAMILY "CANNONLAKE "
-+#elif defined CONFIG_MICELAKE
-+#define MODULE_PROC_FAMILY "ICELAKE "
-+#elif defined CONFIG_MCASCADELAKE
-+#define MODULE_PROC_FAMILY "CASCADELAKE "
- #elif defined CONFIG_MATOM
- #define MODULE_PROC_FAMILY "ATOM "
- #elif defined CONFIG_M686
-@@ -35,6 +65,28 @@
- #define MODULE_PROC_FAMILY "K7 "
- #elif defined CONFIG_MK8
- #define MODULE_PROC_FAMILY "K8 "
-+#elif defined CONFIG_MK8SSE3
-+#define MODULE_PROC_FAMILY "K8SSE3 "
-+#elif defined CONFIG_MK10
-+#define MODULE_PROC_FAMILY "K10 "
-+#elif defined CONFIG_MBARCELONA
-+#define MODULE_PROC_FAMILY "BARCELONA "
-+#elif defined CONFIG_MBOBCAT
-+#define MODULE_PROC_FAMILY "BOBCAT "
-+#elif defined CONFIG_MBULLDOZER
-+#define MODULE_PROC_FAMILY "BULLDOZER "
-+#elif defined CONFIG_MPILEDRIVER
-+#define MODULE_PROC_FAMILY "PILEDRIVER "
-+#elif defined CONFIG_MSTEAMROLLER
-+#define MODULE_PROC_FAMILY "STEAMROLLER "
-+#elif defined CONFIG_MJAGUAR
-+#define MODULE_PROC_FAMILY "JAGUAR "
-+#elif defined CONFIG_MEXCAVATOR
-+#define MODULE_PROC_FAMILY "EXCAVATOR "
-+#elif defined CONFIG_MZEN
-+#define MODULE_PROC_FAMILY "ZEN "
-+#elif defined CONFIG_MZEN2
-+#define MODULE_PROC_FAMILY "ZEN2 "
- #elif defined CONFIG_MELAN
- #define MODULE_PROC_FAMILY "ELAN "
- #elif defined CONFIG_MCRUSOE
---- a/arch/x86/Kconfig.cpu 2020-06-10 14:21:45.000000000 -0400
-+++ b/arch/x86/Kconfig.cpu 2020-06-15 10:44:10.437477053 -0400
-@@ -123,6 +123,7 @@ config MPENTIUMM
- config MPENTIUM4
- bool "Pentium-4/Celeron(P4-based)/Pentium-4 M/older Xeon"
- depends on X86_32
-+ select X86_P6_NOP
- ---help---
- Select this for Intel Pentium 4 chips. This includes the
- Pentium 4, Pentium D, P4-based Celeron and Xeon, and
-@@ -155,9 +156,8 @@ config MPENTIUM4
- -Paxville
- -Dempsey
-
--
- config MK6
-- bool "K6/K6-II/K6-III"
-+ bool "AMD K6/K6-II/K6-III"
- depends on X86_32
- ---help---
- Select this for an AMD K6-family processor. Enables use of
-@@ -165,7 +165,7 @@ config MK6
- flags to GCC.
-
- config MK7
-- bool "Athlon/Duron/K7"
-+ bool "AMD Athlon/Duron/K7"
- depends on X86_32
- ---help---
- Select this for an AMD Athlon K7-family processor. Enables use of
-@@ -173,12 +173,90 @@ config MK7
- flags to GCC.
-
- config MK8
-- bool "Opteron/Athlon64/Hammer/K8"
-+ bool "AMD Opteron/Athlon64/Hammer/K8"
- ---help---
- Select this for an AMD Opteron or Athlon64 Hammer-family processor.
- Enables use of some extended instructions, and passes appropriate
- optimization flags to GCC.
-
-+config MK8SSE3
-+ bool "AMD Opteron/Athlon64/Hammer/K8 with SSE3"
-+ ---help---
-+ Select this for improved AMD Opteron or Athlon64 Hammer-family processors.
-+ Enables use of some extended instructions, and passes appropriate
-+ optimization flags to GCC.
-+
-+config MK10
-+ bool "AMD 61xx/7x50/PhenomX3/X4/II/K10"
-+ ---help---
-+ Select this for an AMD 61xx Eight-Core Magny-Cours, Athlon X2 7x50,
-+ Phenom X3/X4/II, Athlon II X2/X3/X4, or Turion II-family processor.
-+ Enables use of some extended instructions, and passes appropriate
-+ optimization flags to GCC.
-+
-+config MBARCELONA
-+ bool "AMD Barcelona"
-+ ---help---
-+ Select this for AMD Family 10h Barcelona processors.
-+
-+ Enables -march=barcelona
-+
-+config MBOBCAT
-+ bool "AMD Bobcat"
-+ ---help---
-+ Select this for AMD Family 14h Bobcat processors.
-+
-+ Enables -march=btver1
-+
-+config MJAGUAR
-+ bool "AMD Jaguar"
-+ ---help---
-+ Select this for AMD Family 16h Jaguar processors.
-+
-+ Enables -march=btver2
-+
-+config MBULLDOZER
-+ bool "AMD Bulldozer"
-+ ---help---
-+ Select this for AMD Family 15h Bulldozer processors.
-+
-+ Enables -march=bdver1
-+
-+config MPILEDRIVER
-+ bool "AMD Piledriver"
-+ ---help---
-+ Select this for AMD Family 15h Piledriver processors.
-+
-+ Enables -march=bdver2
-+
-+config MSTEAMROLLER
-+ bool "AMD Steamroller"
-+ ---help---
-+ Select this for AMD Family 15h Steamroller processors.
-+
-+ Enables -march=bdver3
-+
-+config MEXCAVATOR
-+ bool "AMD Excavator"
-+ ---help---
-+ Select this for AMD Family 15h Excavator processors.
-+
-+ Enables -march=bdver4
-+
-+config MZEN
-+ bool "AMD Zen"
-+ ---help---
-+ Select this for AMD Family 17h Zen processors.
-+
-+ Enables -march=znver1
-+
-+config MZEN2
-+ bool "AMD Zen 2"
-+ ---help---
-+ Select this for AMD Family 17h Zen 2 processors.
-+
-+ Enables -march=znver2
-+
- config MCRUSOE
- bool "Crusoe"
- depends on X86_32
-@@ -260,6 +338,7 @@ config MVIAC7
-
- config MPSC
- bool "Intel P4 / older Netburst based Xeon"
-+ select X86_P6_NOP
- depends on X86_64
- ---help---
- Optimize for Intel Pentium 4, Pentium D and older Nocona/Dempsey
-@@ -269,8 +348,19 @@ config MPSC
- using the cpu family field
- in /proc/cpuinfo. Family 15 is an older Xeon, Family 6 a newer one.
-
-+config MATOM
-+ bool "Intel Atom"
-+ select X86_P6_NOP
-+ ---help---
-+
-+ Select this for the Intel Atom platform. Intel Atom CPUs have an
-+ in-order pipelining architecture and thus can benefit from
-+ accordingly optimized code. Use a recent GCC with specific Atom
-+ support in order to fully benefit from selecting this option.
-+
- config MCORE2
-- bool "Core 2/newer Xeon"
-+ bool "Intel Core 2"
-+ select X86_P6_NOP
- ---help---
-
- Select this for Intel Core 2 and newer Core 2 Xeons (Xeon 51xx and
-@@ -278,14 +368,133 @@ config MCORE2
- family in /proc/cpuinfo. Newer ones have 6 and older ones 15
- (not a typo)
-
--config MATOM
-- bool "Intel Atom"
-+ Enables -march=core2
-+
-+config MNEHALEM
-+ bool "Intel Nehalem"
-+ select X86_P6_NOP
- ---help---
-
-- Select this for the Intel Atom platform. Intel Atom CPUs have an
-- in-order pipelining architecture and thus can benefit from
-- accordingly optimized code. Use a recent GCC with specific Atom
-- support in order to fully benefit from selecting this option.
-+ Select this for 1st Gen Core processors in the Nehalem family.
-+
-+ Enables -march=nehalem
-+
-+config MWESTMERE
-+ bool "Intel Westmere"
-+ select X86_P6_NOP
-+ ---help---
-+
-+ Select this for the Intel Westmere formerly Nehalem-C family.
-+
-+ Enables -march=westmere
-+
-+config MSILVERMONT
-+ bool "Intel Silvermont"
-+ select X86_P6_NOP
-+ ---help---
-+
-+ Select this for the Intel Silvermont platform.
-+
-+ Enables -march=silvermont
-+
-+config MGOLDMONT
-+ bool "Intel Goldmont"
-+ select X86_P6_NOP
-+ ---help---
-+
-+ Select this for the Intel Goldmont platform including Apollo Lake and Denverton.
-+
-+ Enables -march=goldmont
-+
-+config MGOLDMONTPLUS
-+ bool "Intel Goldmont Plus"
-+ select X86_P6_NOP
-+ ---help---
-+
-+ Select this for the Intel Goldmont Plus platform including Gemini Lake.
-+
-+ Enables -march=goldmont-plus
-+
-+config MSANDYBRIDGE
-+ bool "Intel Sandy Bridge"
-+ select X86_P6_NOP
-+ ---help---
-+
-+ Select this for 2nd Gen Core processors in the Sandy Bridge family.
-+
-+ Enables -march=sandybridge
-+
-+config MIVYBRIDGE
-+ bool "Intel Ivy Bridge"
-+ select X86_P6_NOP
-+ ---help---
-+
-+ Select this for 3rd Gen Core processors in the Ivy Bridge family.
-+
-+ Enables -march=ivybridge
-+
-+config MHASWELL
-+ bool "Intel Haswell"
-+ select X86_P6_NOP
-+ ---help---
-+
-+ Select this for 4th Gen Core processors in the Haswell family.
-+
-+ Enables -march=haswell
-+
-+config MBROADWELL
-+ bool "Intel Broadwell"
-+ select X86_P6_NOP
-+ ---help---
-+
-+ Select this for 5th Gen Core processors in the Broadwell family.
-+
-+ Enables -march=broadwell
-+
-+config MSKYLAKE
-+ bool "Intel Skylake"
-+ select X86_P6_NOP
-+ ---help---
-+
-+ Select this for 6th Gen Core processors in the Skylake family.
-+
-+ Enables -march=skylake
-+
-+config MSKYLAKEX
-+ bool "Intel Skylake X"
-+ select X86_P6_NOP
-+ ---help---
-+
-+ Select this for 6th Gen Core processors in the Skylake X family.
-+
-+ Enables -march=skylake-avx512
-+
-+config MCANNONLAKE
-+ bool "Intel Cannon Lake"
-+ select X86_P6_NOP
-+ ---help---
-+
-+ Select this for 8th Gen Core processors
-+
-+ Enables -march=cannonlake
-+
-+config MICELAKE
-+ bool "Intel Ice Lake"
-+ select X86_P6_NOP
-+ ---help---
-+
-+ Select this for 10th Gen Core processors in the Ice Lake family.
-+
-+ Enables -march=icelake-client
-+
-+config MCASCADELAKE
-+ bool "Intel Cascade Lake"
-+ select X86_P6_NOP
-+ ---help---
-+
-+ Select this for Xeon processors in the Cascade Lake family.
-+
-+ Enables -march=cascadelake
-
- config GENERIC_CPU
- bool "Generic-x86-64"
-@@ -294,6 +503,19 @@ config GENERIC_CPU
- Generic x86-64 CPU.
- Run equally well on all x86-64 CPUs.
-
-+config MNATIVE
-+ bool "Native optimizations autodetected by GCC"
-+ ---help---
-+
-+ GCC 4.2 and above support -march=native, which automatically detects
-+ the optimum settings to use based on your processor. -march=native
-+ also detects and applies additional settings beyond -march specific
-+ to your CPU, (eg. -msse4). Unless you have a specific reason not to
-+ (e.g. distcc cross-compiling), you should probably be using
-+ -march=native rather than anything listed below.
-+
-+ Enables -march=native
-+
- endchoice
-
- config X86_GENERIC
-@@ -318,7 +540,7 @@ config X86_INTERNODE_CACHE_SHIFT
- config X86_L1_CACHE_SHIFT
- int
- default "7" if MPENTIUM4 || MPSC
-- default "6" if MK7 || MK8 || MPENTIUMM || MCORE2 || MATOM || MVIAC7 || X86_GENERIC || GENERIC_CPU
-+ default "6" if MK7 || MK8 || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MJAGUAR || MPENTIUMM || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MNATIVE || MATOM || MVIAC7 || X86_GENERIC || GENERIC_CPU
- default "4" if MELAN || M486SX || M486 || MGEODEGX1
- default "5" if MWINCHIP3D || MWINCHIPC6 || MCRUSOE || MEFFICEON || MCYRIXIII || MK6 || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || M586 || MVIAC3_2 || MGEODE_LX
-
-@@ -336,35 +558,36 @@ config X86_ALIGNMENT_16
-
- config X86_INTEL_USERCOPY
- def_bool y
-- depends on MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M586MMX || X86_GENERIC || MK8 || MK7 || MEFFICEON || MCORE2
-+ depends on MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M586MMX || X86_GENERIC || MK8 || MK8SSE3 || MK7 || MEFFICEON || MCORE2 || MK10 || MBARCELONA || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MNATIVE
-
- config X86_USE_PPRO_CHECKSUM
- def_bool y
-- depends on MWINCHIP3D || MWINCHIPC6 || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MK8 || MVIAC3_2 || MVIAC7 || MEFFICEON || MGEODE_LX || MCORE2 || MATOM
-+ depends on MWINCHIP3D || MWINCHIPC6 || MCYRIXIII || MK7 || MK6 || MK10 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MK8 || MK8SSE3 || MVIAC3_2 || MVIAC7 || MEFFICEON || MGEODE_LX || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MATOM || MNATIVE
-
- config X86_USE_3DNOW
- def_bool y
- depends on (MCYRIXIII || MK7 || MGEODE_LX) && !UML
-
--#
--# P6_NOPs are a relatively minor optimization that require a family >=
--# 6 processor, except that it is broken on certain VIA chips.
--# Furthermore, AMD chips prefer a totally different sequence of NOPs
--# (which work on all CPUs). In addition, it looks like Virtual PC
--# does not understand them.
--#
--# As a result, disallow these if we're not compiling for X86_64 (these
--# NOPs do work on all x86-64 capable chips); the list of processors in
--# the right-hand clause are the cores that benefit from this optimization.
--#
- config X86_P6_NOP
-- def_bool y
-- depends on X86_64
-- depends on (MCORE2 || MPENTIUM4 || MPSC)
-+ default n
-+ bool "Support for P6_NOPs on Intel chips"
-+ depends on (MCORE2 || MPENTIUM4 || MPSC || MATOM || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MNATIVE)
-+ ---help---
-+ P6_NOPs are a relatively minor optimization that require a family >=
-+ 6 processor, except that it is broken on certain VIA chips.
-+ Furthermore, AMD chips prefer a totally different sequence of NOPs
-+ (which work on all CPUs). In addition, it looks like Virtual PC
-+ does not understand them.
-+
-+ As a result, disallow these if we're not compiling for X86_64 (these
-+ NOPs do work on all x86-64 capable chips); the list of processors in
-+ the right-hand clause are the cores that benefit from this optimization.
-+
-+ Say Y if you have Intel CPU newer than Pentium Pro, N otherwise.
-
- config X86_TSC
- def_bool y
-- depends on (MWINCHIP3D || MCRUSOE || MEFFICEON || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || MK8 || MVIAC3_2 || MVIAC7 || MGEODEGX1 || MGEODE_LX || MCORE2 || MATOM) || X86_64
-+ depends on (MWINCHIP3D || MCRUSOE || MEFFICEON || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || MK8 || MK8SSE3 || MVIAC3_2 || MVIAC7 || MGEODEGX1 || MGEODE_LX || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MNATIVE || MATOM) || X86_64
-
- config X86_CMPXCHG64
- def_bool y
-@@ -374,7 +597,7 @@ config X86_CMPXCHG64
- # generates cmov.
- config X86_CMOV
- def_bool y
-- depends on (MK8 || MK7 || MCORE2 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MCRUSOE || MEFFICEON || X86_64 || MATOM || MGEODE_LX)
-+ depends on (MK8 || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MJAGUAR || MK7 || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MCRUSOE || MEFFICEON || X86_64 || MNATIVE || MATOM || MGEODE_LX)
-
- config X86_MINIMUM_CPU_FAMILY
- int
---- a/arch/x86/Makefile 2020-06-10 14:21:45.000000000 -0400
-+++ b/arch/x86/Makefile 2020-06-15 10:44:35.608035680 -0400
-@@ -119,13 +119,56 @@ else
- KBUILD_CFLAGS += $(call cc-option,-mskip-rax-setup)
-
- # FIXME - should be integrated in Makefile.cpu (Makefile_32.cpu)
-+ cflags-$(CONFIG_MNATIVE) += $(call cc-option,-march=native)
- cflags-$(CONFIG_MK8) += $(call cc-option,-march=k8)
-+ cflags-$(CONFIG_MK8SSE3) += $(call cc-option,-march=k8-sse3,-mtune=k8)
-+ cflags-$(CONFIG_MK10) += $(call cc-option,-march=amdfam10)
-+ cflags-$(CONFIG_MBARCELONA) += $(call cc-option,-march=barcelona)
-+ cflags-$(CONFIG_MBOBCAT) += $(call cc-option,-march=btver1)
-+ cflags-$(CONFIG_MJAGUAR) += $(call cc-option,-march=btver2)
-+ cflags-$(CONFIG_MBULLDOZER) += $(call cc-option,-march=bdver1)
-+ cflags-$(CONFIG_MPILEDRIVER) += $(call cc-option,-march=bdver2)
-+ cflags-$(CONFIG_MPILEDRIVER) += $(call cc-option,-mno-tbm)
-+ cflags-$(CONFIG_MSTEAMROLLER) += $(call cc-option,-march=bdver3)
-+ cflags-$(CONFIG_MSTEAMROLLER) += $(call cc-option,-mno-tbm)
-+ cflags-$(CONFIG_MEXCAVATOR) += $(call cc-option,-march=bdver4)
-+ cflags-$(CONFIG_MEXCAVATOR) += $(call cc-option,-mno-tbm)
-+ cflags-$(CONFIG_MZEN) += $(call cc-option,-march=znver1)
-+ cflags-$(CONFIG_MZEN2) += $(call cc-option,-march=znver2)
- cflags-$(CONFIG_MPSC) += $(call cc-option,-march=nocona)
-
- cflags-$(CONFIG_MCORE2) += \
-- $(call cc-option,-march=core2,$(call cc-option,-mtune=generic))
-- cflags-$(CONFIG_MATOM) += $(call cc-option,-march=atom) \
-- $(call cc-option,-mtune=atom,$(call cc-option,-mtune=generic))
-+ $(call cc-option,-march=core2,$(call cc-option,-mtune=core2))
-+ cflags-$(CONFIG_MNEHALEM) += \
-+ $(call cc-option,-march=nehalem,$(call cc-option,-mtune=nehalem))
-+ cflags-$(CONFIG_MWESTMERE) += \
-+ $(call cc-option,-march=westmere,$(call cc-option,-mtune=westmere))
-+ cflags-$(CONFIG_MSILVERMONT) += \
-+ $(call cc-option,-march=silvermont,$(call cc-option,-mtune=silvermont))
-+ cflags-$(CONFIG_MGOLDMONT) += \
-+ $(call cc-option,-march=goldmont,$(call cc-option,-mtune=goldmont))
-+ cflags-$(CONFIG_MGOLDMONTPLUS) += \
-+ $(call cc-option,-march=goldmont-plus,$(call cc-option,-mtune=goldmont-plus))
-+ cflags-$(CONFIG_MSANDYBRIDGE) += \
-+ $(call cc-option,-march=sandybridge,$(call cc-option,-mtune=sandybridge))
-+ cflags-$(CONFIG_MIVYBRIDGE) += \
-+ $(call cc-option,-march=ivybridge,$(call cc-option,-mtune=ivybridge))
-+ cflags-$(CONFIG_MHASWELL) += \
-+ $(call cc-option,-march=haswell,$(call cc-option,-mtune=haswell))
-+ cflags-$(CONFIG_MBROADWELL) += \
-+ $(call cc-option,-march=broadwell,$(call cc-option,-mtune=broadwell))
-+ cflags-$(CONFIG_MSKYLAKE) += \
-+ $(call cc-option,-march=skylake,$(call cc-option,-mtune=skylake))
-+ cflags-$(CONFIG_MSKYLAKEX) += \
-+ $(call cc-option,-march=skylake-avx512,$(call cc-option,-mtune=skylake-avx512))
-+ cflags-$(CONFIG_MCANNONLAKE) += \
-+ $(call cc-option,-march=cannonlake,$(call cc-option,-mtune=cannonlake))
-+ cflags-$(CONFIG_MICELAKE) += \
-+ $(call cc-option,-march=icelake-client,$(call cc-option,-mtune=icelake-client))
-+ cflags-$(CONFIG_MCASCADELAKE) += \
-+ $(call cc-option,-march=cascadelake,$(call cc-option,-mtune=cascadelake))
-+ cflags-$(CONFIG_MATOM) += $(call cc-option,-march=bonnell) \
-+ $(call cc-option,-mtune=bonnell,$(call cc-option,-mtune=generic))
- cflags-$(CONFIG_GENERIC_CPU) += $(call cc-option,-mtune=generic)
- KBUILD_CFLAGS += $(cflags-y)
-
---- a/arch/x86/Makefile_32.cpu 2020-06-10 14:21:45.000000000 -0400
-+++ b/arch/x86/Makefile_32.cpu 2020-06-15 10:44:10.437477053 -0400
-@@ -24,7 +24,19 @@ cflags-$(CONFIG_MK6) += -march=k6
- # Please note, that patches that add -march=athlon-xp and friends are pointless.
- # They make zero difference whatsosever to performance at this time.
- cflags-$(CONFIG_MK7) += -march=athlon
-+cflags-$(CONFIG_MNATIVE) += $(call cc-option,-march=native)
- cflags-$(CONFIG_MK8) += $(call cc-option,-march=k8,-march=athlon)
-+cflags-$(CONFIG_MK8SSE3) += $(call cc-option,-march=k8-sse3,-march=athlon)
-+cflags-$(CONFIG_MK10) += $(call cc-option,-march=amdfam10,-march=athlon)
-+cflags-$(CONFIG_MBARCELONA) += $(call cc-option,-march=barcelona,-march=athlon)
-+cflags-$(CONFIG_MBOBCAT) += $(call cc-option,-march=btver1,-march=athlon)
-+cflags-$(CONFIG_MJAGUAR) += $(call cc-option,-march=btver2,-march=athlon)
-+cflags-$(CONFIG_MBULLDOZER) += $(call cc-option,-march=bdver1,-march=athlon)
-+cflags-$(CONFIG_MPILEDRIVER) += $(call cc-option,-march=bdver2,-march=athlon)
-+cflags-$(CONFIG_MSTEAMROLLER) += $(call cc-option,-march=bdver3,-march=athlon)
-+cflags-$(CONFIG_MEXCAVATOR) += $(call cc-option,-march=bdver4,-march=athlon)
-+cflags-$(CONFIG_MZEN) += $(call cc-option,-march=znver1,-march=athlon)
-+cflags-$(CONFIG_MZEN2) += $(call cc-option,-march=znver2,-march=athlon)
- cflags-$(CONFIG_MCRUSOE) += -march=i686 -falign-functions=0 -falign-jumps=0 -falign-loops=0
- cflags-$(CONFIG_MEFFICEON) += -march=i686 $(call tune,pentium3) -falign-functions=0 -falign-jumps=0 -falign-loops=0
- cflags-$(CONFIG_MWINCHIPC6) += $(call cc-option,-march=winchip-c6,-march=i586)
-@@ -33,8 +45,22 @@ cflags-$(CONFIG_MCYRIXIII) += $(call cc-
- cflags-$(CONFIG_MVIAC3_2) += $(call cc-option,-march=c3-2,-march=i686)
- cflags-$(CONFIG_MVIAC7) += -march=i686
- cflags-$(CONFIG_MCORE2) += -march=i686 $(call tune,core2)
--cflags-$(CONFIG_MATOM) += $(call cc-option,-march=atom,$(call cc-option,-march=core2,-march=i686)) \
-- $(call cc-option,-mtune=atom,$(call cc-option,-mtune=generic))
-+cflags-$(CONFIG_MNEHALEM) += -march=i686 $(call tune,nehalem)
-+cflags-$(CONFIG_MWESTMERE) += -march=i686 $(call tune,westmere)
-+cflags-$(CONFIG_MSILVERMONT) += -march=i686 $(call tune,silvermont)
-+cflags-$(CONFIG_MGOLDMONT) += -march=i686 $(call tune,goldmont)
-+cflags-$(CONFIG_MGOLDMONTPLUS) += -march=i686 $(call tune,goldmont-plus)
-+cflags-$(CONFIG_MSANDYBRIDGE) += -march=i686 $(call tune,sandybridge)
-+cflags-$(CONFIG_MIVYBRIDGE) += -march=i686 $(call tune,ivybridge)
-+cflags-$(CONFIG_MHASWELL) += -march=i686 $(call tune,haswell)
-+cflags-$(CONFIG_MBROADWELL) += -march=i686 $(call tune,broadwell)
-+cflags-$(CONFIG_MSKYLAKE) += -march=i686 $(call tune,skylake)
-+cflags-$(CONFIG_MSKYLAKEX) += -march=i686 $(call tune,skylake-avx512)
-+cflags-$(CONFIG_MCANNONLAKE) += -march=i686 $(call tune,cannonlake)
-+cflags-$(CONFIG_MICELAKE) += -march=i686 $(call tune,icelake-client)
-+cflags-$(CONFIG_MCASCADELAKE) += -march=i686 $(call tune,cascadelake)
-+cflags-$(CONFIG_MATOM) += $(call cc-option,-march=bonnell,$(call cc-option,-march=core2,-march=i686)) \
-+ $(call cc-option,-mtune=bonnell,$(call cc-option,-mtune=generic))
-
- # AMD Elan support
- cflags-$(CONFIG_MELAN) += -march=i486
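The Makefile hunks above lean heavily on `$(call cc-option,<flag>,<fallback>)`, which compiles a trivial input with the candidate flag and substitutes the fallback if the compiler rejects it — this is what lets the patch request `-march=goldmont-plus` or `-march=znver2` without breaking builds on older toolchains. A minimal shell sketch of that probe (an illustration only; the kernel's real helper lives in scripts/Kbuild.include, and `cc_option` here is a hypothetical name):

```shell
# Rough stand-in for the kernel's $(call cc-option,<flag>,<fallback>):
# try compiling an empty program with the flag; echo the flag on
# success, the fallback otherwise.
cc_option() {
    flag="$1"
    fallback="$2"
    if echo 'int main(void) { return 0; }' | \
        "${CC:-cc}" -Werror "$flag" -x c -c -o /dev/null - 2>/dev/null; then
        echo "$flag"
    else
        echo "$fallback"
    fi
}

# A supported flag passes through; an unknown one falls back.
cc_option -march=native -mtune=generic
cc_option -march=no-such-cpu -mtune=generic
```

This also explains the two-step entries in the patch such as `cflags-$(CONFIG_MATOM) += $(call cc-option,-march=bonnell,...)`: the second argument is the degraded-but-safe choice when the first is unavailable.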
* [gentoo-commits] proj/linux-patches:5.8 commit in: /
@ 2020-08-13 11:55 Alice Ferrazzi
0 siblings, 0 replies; 30+ messages in thread
From: Alice Ferrazzi @ 2020-08-13 11:55 UTC (permalink / raw
To: gentoo-commits
commit: 0bd0ee5b1603e02c51c443038b26805987228a4a
Author: Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
AuthorDate: Thu Aug 13 11:54:28 2020 +0000
Commit: Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
CommitDate: Thu Aug 13 11:54:44 2020 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=0bd0ee5b
Linux patch 5.8.1
Signed-off-by: Alice Ferrazzi <alicef <AT> gentoo.org>
0000_README | 4 +
1000_linux-5.8.1.patch | 1623 ++++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 1627 insertions(+)
diff --git a/0000_README b/0000_README
index 27a700b..8e9f53b 100644
--- a/0000_README
+++ b/0000_README
@@ -43,6 +43,10 @@ EXPERIMENTAL
Individual Patch Descriptions:
--------------------------------------------------------------------------
+Patch: 1000_linux-5.8.1.patch
+From: http://www.kernel.org
+Desc: Linux 5.8.1
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1000_linux-5.8.1.patch b/1000_linux-5.8.1.patch
new file mode 100644
index 0000000..dc3e0ca
--- /dev/null
+++ b/1000_linux-5.8.1.patch
@@ -0,0 +1,1623 @@
+diff --git a/Makefile b/Makefile
+index 24a4c1b97bb0..7932464518f1 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 8
+-SUBLEVEL = 0
++SUBLEVEL = 1
+ EXTRAVERSION =
+ NAME = Kleptomaniac Octopus
+
+diff --git a/arch/arm64/include/asm/archrandom.h b/arch/arm64/include/asm/archrandom.h
+index fc1594a0710e..44209f6146aa 100644
+--- a/arch/arm64/include/asm/archrandom.h
++++ b/arch/arm64/include/asm/archrandom.h
+@@ -6,7 +6,6 @@
+
+ #include <linux/bug.h>
+ #include <linux/kernel.h>
+-#include <linux/random.h>
+ #include <asm/cpufeature.h>
+
+ static inline bool __arm64_rndr(unsigned long *v)
+diff --git a/arch/arm64/kernel/kaslr.c b/arch/arm64/kernel/kaslr.c
+index 07c4c8cc4a67..b181e0544b79 100644
+--- a/arch/arm64/kernel/kaslr.c
++++ b/arch/arm64/kernel/kaslr.c
+@@ -11,8 +11,8 @@
+ #include <linux/sched.h>
+ #include <linux/types.h>
+ #include <linux/pgtable.h>
++#include <linux/random.h>
+
+-#include <asm/archrandom.h>
+ #include <asm/cacheflush.h>
+ #include <asm/fixmap.h>
+ #include <asm/kernel-pgtable.h>
+@@ -84,6 +84,7 @@ u64 __init kaslr_early_init(u64 dt_phys)
+ void *fdt;
+ u64 seed, offset, mask, module_range;
+ const u8 *cmdline, *str;
++ unsigned long raw;
+ int size;
+
+ /*
+@@ -122,15 +123,12 @@ u64 __init kaslr_early_init(u64 dt_phys)
+ }
+
+ /*
+- * Mix in any entropy obtainable architecturally, open coded
+- * since this runs extremely early.
++ * Mix in any entropy obtainable architecturally if enabled
++ * and supported.
+ */
+- if (__early_cpu_has_rndr()) {
+- unsigned long raw;
+
+- if (__arm64_rndr(&raw))
+- seed ^= raw;
+- }
++ if (arch_get_random_seed_long_early(&raw))
++ seed ^= raw;
+
+ if (!seed) {
+ kaslr_status = KASLR_DISABLED_NO_SEED;
+diff --git a/arch/powerpc/include/asm/kasan.h b/arch/powerpc/include/asm/kasan.h
+index be85c7005fb1..d635b96c7ea6 100644
+--- a/arch/powerpc/include/asm/kasan.h
++++ b/arch/powerpc/include/asm/kasan.h
+@@ -27,10 +27,12 @@
+
+ #ifdef CONFIG_KASAN
+ void kasan_early_init(void);
++void kasan_mmu_init(void);
+ void kasan_init(void);
+ void kasan_late_init(void);
+ #else
+ static inline void kasan_init(void) { }
++static inline void kasan_mmu_init(void) { }
+ static inline void kasan_late_init(void) { }
+ #endif
+
+diff --git a/arch/powerpc/mm/init_32.c b/arch/powerpc/mm/init_32.c
+index 5a5469eb3174..bf1717f8d5f4 100644
+--- a/arch/powerpc/mm/init_32.c
++++ b/arch/powerpc/mm/init_32.c
+@@ -171,6 +171,8 @@ void __init MMU_init(void)
+ btext_unmap();
+ #endif
+
++ kasan_mmu_init();
++
+ setup_kup();
+
+ /* Shortly after that, the entire linear mapping will be available */
+diff --git a/arch/powerpc/mm/kasan/kasan_init_32.c b/arch/powerpc/mm/kasan/kasan_init_32.c
+index 0760e1e754e4..019b0c0bbbf3 100644
+--- a/arch/powerpc/mm/kasan/kasan_init_32.c
++++ b/arch/powerpc/mm/kasan/kasan_init_32.c
+@@ -117,14 +117,27 @@ static void __init kasan_unmap_early_shadow_vmalloc(void)
+ kasan_update_early_region(k_start, k_end, __pte(0));
+ }
+
+-static void __init kasan_mmu_init(void)
++void __init kasan_mmu_init(void)
+ {
+ int ret;
++
++ if (early_mmu_has_feature(MMU_FTR_HPTE_TABLE) ||
++ IS_ENABLED(CONFIG_KASAN_VMALLOC)) {
++ ret = kasan_init_shadow_page_tables(KASAN_SHADOW_START, KASAN_SHADOW_END);
++
++ if (ret)
++ panic("kasan: kasan_init_shadow_page_tables() failed");
++ }
++}
++
++void __init kasan_init(void)
++{
+ struct memblock_region *reg;
+
+ for_each_memblock(memory, reg) {
+ phys_addr_t base = reg->base;
+ phys_addr_t top = min(base + reg->size, total_lowmem);
++ int ret;
+
+ if (base >= top)
+ continue;
+@@ -134,20 +147,6 @@ static void __init kasan_mmu_init(void)
+ panic("kasan: kasan_init_region() failed");
+ }
+
+- if (early_mmu_has_feature(MMU_FTR_HPTE_TABLE) ||
+- IS_ENABLED(CONFIG_KASAN_VMALLOC)) {
+- ret = kasan_init_shadow_page_tables(KASAN_SHADOW_START, KASAN_SHADOW_END);
+-
+- if (ret)
+- panic("kasan: kasan_init_shadow_page_tables() failed");
+- }
+-
+-}
+-
+-void __init kasan_init(void)
+-{
+- kasan_mmu_init();
+-
+ kasan_remap_early_shadow_ro();
+
+ clear_page(kasan_early_shadow_page);
+diff --git a/drivers/android/binder.c b/drivers/android/binder.c
+index f50c5f182bb5..5b310eea9e52 100644
+--- a/drivers/android/binder.c
++++ b/drivers/android/binder.c
+@@ -2982,6 +2982,12 @@ static void binder_transaction(struct binder_proc *proc,
+ goto err_dead_binder;
+ }
+ e->to_node = target_node->debug_id;
++ if (WARN_ON(proc == target_proc)) {
++ return_error = BR_FAILED_REPLY;
++ return_error_param = -EINVAL;
++ return_error_line = __LINE__;
++ goto err_invalid_target_handle;
++ }
+ if (security_binder_transaction(proc->tsk,
+ target_proc->tsk) < 0) {
+ return_error = BR_FAILED_REPLY;
+@@ -3635,10 +3641,17 @@ static int binder_thread_write(struct binder_proc *proc,
+ struct binder_node *ctx_mgr_node;
+ mutex_lock(&context->context_mgr_node_lock);
+ ctx_mgr_node = context->binder_context_mgr_node;
+- if (ctx_mgr_node)
++ if (ctx_mgr_node) {
++ if (ctx_mgr_node->proc == proc) {
++ binder_user_error("%d:%d context manager tried to acquire desc 0\n",
++ proc->pid, thread->pid);
++ mutex_unlock(&context->context_mgr_node_lock);
++ return -EINVAL;
++ }
+ ret = binder_inc_ref_for_node(
+ proc, ctx_mgr_node,
+ strong, NULL, &rdata);
++ }
+ mutex_unlock(&context->context_mgr_node_lock);
+ }
+ if (ret)
+diff --git a/drivers/gpio/gpio-max77620.c b/drivers/gpio/gpio-max77620.c
+index 313bd02dd893..bd6c4faea639 100644
+--- a/drivers/gpio/gpio-max77620.c
++++ b/drivers/gpio/gpio-max77620.c
+@@ -305,8 +305,9 @@ static int max77620_gpio_probe(struct platform_device *pdev)
+ gpiochip_irqchip_add_nested(&mgpio->gpio_chip, &max77620_gpio_irqchip,
+ 0, handle_edge_irq, IRQ_TYPE_NONE);
+
+- ret = request_threaded_irq(gpio_irq, NULL, max77620_gpio_irqhandler,
+- IRQF_ONESHOT, "max77620-gpio", mgpio);
++ ret = devm_request_threaded_irq(&pdev->dev, gpio_irq, NULL,
++ max77620_gpio_irqhandler, IRQF_ONESHOT,
++ "max77620-gpio", mgpio);
+ if (ret < 0) {
+ dev_err(&pdev->dev, "failed to request IRQ: %d\n", ret);
+ return ret;
+diff --git a/drivers/leds/leds-88pm860x.c b/drivers/leds/leds-88pm860x.c
+index b3044c9a8120..465c3755cf2e 100644
+--- a/drivers/leds/leds-88pm860x.c
++++ b/drivers/leds/leds-88pm860x.c
+@@ -203,21 +203,33 @@ static int pm860x_led_probe(struct platform_device *pdev)
+ data->cdev.brightness_set_blocking = pm860x_led_set;
+ mutex_init(&data->lock);
+
+- ret = devm_led_classdev_register(chip->dev, &data->cdev);
++ ret = led_classdev_register(chip->dev, &data->cdev);
+ if (ret < 0) {
+ dev_err(&pdev->dev, "Failed to register LED: %d\n", ret);
+ return ret;
+ }
+ pm860x_led_set(&data->cdev, 0);
++
++ platform_set_drvdata(pdev, data);
++
+ return 0;
+ }
+
++static int pm860x_led_remove(struct platform_device *pdev)
++{
++ struct pm860x_led *data = platform_get_drvdata(pdev);
++
++ led_classdev_unregister(&data->cdev);
++
++ return 0;
++}
+
+ static struct platform_driver pm860x_led_driver = {
+ .driver = {
+ .name = "88pm860x-led",
+ },
+ .probe = pm860x_led_probe,
++ .remove = pm860x_led_remove,
+ };
+
+ module_platform_driver(pm860x_led_driver);
+diff --git a/drivers/leds/leds-da903x.c b/drivers/leds/leds-da903x.c
+index ed1b303f699f..2b5fb00438a2 100644
+--- a/drivers/leds/leds-da903x.c
++++ b/drivers/leds/leds-da903x.c
+@@ -110,12 +110,23 @@ static int da903x_led_probe(struct platform_device *pdev)
+ led->flags = pdata->flags;
+ led->master = pdev->dev.parent;
+
+- ret = devm_led_classdev_register(led->master, &led->cdev);
++ ret = led_classdev_register(led->master, &led->cdev);
+ if (ret) {
+ dev_err(&pdev->dev, "failed to register LED %d\n", id);
+ return ret;
+ }
+
++ platform_set_drvdata(pdev, led);
++
++ return 0;
++}
++
++static int da903x_led_remove(struct platform_device *pdev)
++{
++ struct da903x_led *led = platform_get_drvdata(pdev);
++
++ led_classdev_unregister(&led->cdev);
++
+ return 0;
+ }
+
+@@ -124,6 +135,7 @@ static struct platform_driver da903x_led_driver = {
+ .name = "da903x-led",
+ },
+ .probe = da903x_led_probe,
++ .remove = da903x_led_remove,
+ };
+
+ module_platform_driver(da903x_led_driver);
+diff --git a/drivers/leds/leds-lm3533.c b/drivers/leds/leds-lm3533.c
+index 9504ad405aef..b3edee703193 100644
+--- a/drivers/leds/leds-lm3533.c
++++ b/drivers/leds/leds-lm3533.c
+@@ -694,7 +694,7 @@ static int lm3533_led_probe(struct platform_device *pdev)
+
+ platform_set_drvdata(pdev, led);
+
+- ret = devm_led_classdev_register(pdev->dev.parent, &led->cdev);
++ ret = led_classdev_register(pdev->dev.parent, &led->cdev);
+ if (ret) {
+ dev_err(&pdev->dev, "failed to register LED %d\n", pdev->id);
+ return ret;
+@@ -704,13 +704,18 @@ static int lm3533_led_probe(struct platform_device *pdev)
+
+ ret = lm3533_led_setup(led, pdata);
+ if (ret)
+- return ret;
++ goto err_deregister;
+
+ ret = lm3533_ctrlbank_enable(&led->cb);
+ if (ret)
+- return ret;
++ goto err_deregister;
+
+ return 0;
++
++err_deregister:
++ led_classdev_unregister(&led->cdev);
++
++ return ret;
+ }
+
+ static int lm3533_led_remove(struct platform_device *pdev)
+@@ -720,6 +725,7 @@ static int lm3533_led_remove(struct platform_device *pdev)
+ dev_dbg(&pdev->dev, "%s\n", __func__);
+
+ lm3533_ctrlbank_disable(&led->cb);
++ led_classdev_unregister(&led->cdev);
+
+ return 0;
+ }
+diff --git a/drivers/leds/leds-lm36274.c b/drivers/leds/leds-lm36274.c
+index 836b60c9a2b8..db842eeb7ca2 100644
+--- a/drivers/leds/leds-lm36274.c
++++ b/drivers/leds/leds-lm36274.c
+@@ -133,7 +133,7 @@ static int lm36274_probe(struct platform_device *pdev)
+ lm36274_data->pdev = pdev;
+ lm36274_data->dev = lmu->dev;
+ lm36274_data->regmap = lmu->regmap;
+- dev_set_drvdata(&pdev->dev, lm36274_data);
++ platform_set_drvdata(pdev, lm36274_data);
+
+ ret = lm36274_parse_dt(lm36274_data);
+ if (ret) {
+@@ -147,8 +147,16 @@ static int lm36274_probe(struct platform_device *pdev)
+ return ret;
+ }
+
+- return devm_led_classdev_register(lm36274_data->dev,
+- &lm36274_data->led_dev);
++ return led_classdev_register(lm36274_data->dev, &lm36274_data->led_dev);
++}
++
++static int lm36274_remove(struct platform_device *pdev)
++{
++ struct lm36274 *lm36274_data = platform_get_drvdata(pdev);
++
++ led_classdev_unregister(&lm36274_data->led_dev);
++
++ return 0;
+ }
+
+ static const struct of_device_id of_lm36274_leds_match[] = {
+@@ -159,6 +167,7 @@ MODULE_DEVICE_TABLE(of, of_lm36274_leds_match);
+
+ static struct platform_driver lm36274_driver = {
+ .probe = lm36274_probe,
++ .remove = lm36274_remove,
+ .driver = {
+ .name = "lm36274-leds",
+ },
+diff --git a/drivers/leds/leds-wm831x-status.c b/drivers/leds/leds-wm831x-status.c
+index 082df7f1dd90..67f4235cb28a 100644
+--- a/drivers/leds/leds-wm831x-status.c
++++ b/drivers/leds/leds-wm831x-status.c
+@@ -269,12 +269,23 @@ static int wm831x_status_probe(struct platform_device *pdev)
+ drvdata->cdev.blink_set = wm831x_status_blink_set;
+ drvdata->cdev.groups = wm831x_status_groups;
+
+- ret = devm_led_classdev_register(wm831x->dev, &drvdata->cdev);
++ ret = led_classdev_register(wm831x->dev, &drvdata->cdev);
+ if (ret < 0) {
+ dev_err(&pdev->dev, "Failed to register LED: %d\n", ret);
+ return ret;
+ }
+
++ platform_set_drvdata(pdev, drvdata);
++
++ return 0;
++}
++
++static int wm831x_status_remove(struct platform_device *pdev)
++{
++ struct wm831x_status *drvdata = platform_get_drvdata(pdev);
++
++ led_classdev_unregister(&drvdata->cdev);
++
+ return 0;
+ }
+
+@@ -283,6 +294,7 @@ static struct platform_driver wm831x_status_driver = {
+ .name = "wm831x-status",
+ },
+ .probe = wm831x_status_probe,
++ .remove = wm831x_status_remove,
+ };
+
+ module_platform_driver(wm831x_status_driver);
+diff --git a/drivers/misc/lkdtm/heap.c b/drivers/misc/lkdtm/heap.c
+index 3c5cec85edce..1323bc16f113 100644
+--- a/drivers/misc/lkdtm/heap.c
++++ b/drivers/misc/lkdtm/heap.c
+@@ -58,11 +58,12 @@ void lkdtm_READ_AFTER_FREE(void)
+ int *base, *val, saw;
+ size_t len = 1024;
+ /*
+- * The slub allocator uses the first word to store the free
+- * pointer in some configurations. Use the middle of the
+- * allocation to avoid running into the freelist
++ * The slub allocator will use the either the first word or
++ * the middle of the allocation to store the free pointer,
++ * depending on configurations. Store in the second word to
++ * avoid running into the freelist.
+ */
+- size_t offset = (len / sizeof(*base)) / 2;
++ size_t offset = sizeof(*base);
+
+ base = kmalloc(len, GFP_KERNEL);
+ if (!base) {
+diff --git a/drivers/mtd/mtdchar.c b/drivers/mtd/mtdchar.c
+index c5935b2f9cd1..b40f46a43fc6 100644
+--- a/drivers/mtd/mtdchar.c
++++ b/drivers/mtd/mtdchar.c
+@@ -355,9 +355,6 @@ static int mtdchar_writeoob(struct file *file, struct mtd_info *mtd,
+ uint32_t retlen;
+ int ret = 0;
+
+- if (!(file->f_mode & FMODE_WRITE))
+- return -EPERM;
+-
+ if (length > 4096)
+ return -EINVAL;
+
+@@ -643,6 +640,48 @@ static int mtdchar_ioctl(struct file *file, u_int cmd, u_long arg)
+
+ pr_debug("MTD_ioctl\n");
+
++ /*
++ * Check the file mode to require "dangerous" commands to have write
++ * permissions.
++ */
++ switch (cmd) {
++ /* "safe" commands */
++ case MEMGETREGIONCOUNT:
++ case MEMGETREGIONINFO:
++ case MEMGETINFO:
++ case MEMREADOOB:
++ case MEMREADOOB64:
++ case MEMLOCK:
++ case MEMUNLOCK:
++ case MEMISLOCKED:
++ case MEMGETOOBSEL:
++ case MEMGETBADBLOCK:
++ case MEMSETBADBLOCK:
++ case OTPSELECT:
++ case OTPGETREGIONCOUNT:
++ case OTPGETREGIONINFO:
++ case OTPLOCK:
++ case ECCGETLAYOUT:
++ case ECCGETSTATS:
++ case MTDFILEMODE:
++ case BLKPG:
++ case BLKRRPART:
++ break;
++
++ /* "dangerous" commands */
++ case MEMERASE:
++ case MEMERASE64:
++ case MEMWRITEOOB:
++ case MEMWRITEOOB64:
++ case MEMWRITE:
++ if (!(file->f_mode & FMODE_WRITE))
++ return -EPERM;
++ break;
++
++ default:
++ return -ENOTTY;
++ }
++
+ switch (cmd) {
+ case MEMGETREGIONCOUNT:
+ if (copy_to_user(argp, &(mtd->numeraseregions), sizeof(int)))
+@@ -690,9 +729,6 @@ static int mtdchar_ioctl(struct file *file, u_int cmd, u_long arg)
+ {
+ struct erase_info *erase;
+
+- if(!(file->f_mode & FMODE_WRITE))
+- return -EPERM;
+-
+ erase=kzalloc(sizeof(struct erase_info),GFP_KERNEL);
+ if (!erase)
+ ret = -ENOMEM;
+@@ -985,9 +1021,6 @@ static int mtdchar_ioctl(struct file *file, u_int cmd, u_long arg)
+ ret = 0;
+ break;
+ }
+-
+- default:
+- ret = -ENOTTY;
+ }
+
+ return ret;
+@@ -1031,6 +1064,11 @@ static long mtdchar_compat_ioctl(struct file *file, unsigned int cmd,
+ struct mtd_oob_buf32 buf;
+ struct mtd_oob_buf32 __user *buf_user = argp;
+
++ if (!(file->f_mode & FMODE_WRITE)) {
++ ret = -EPERM;
++ break;
++ }
++
+ if (copy_from_user(&buf, argp, sizeof(buf)))
+ ret = -EFAULT;
+ else
+diff --git a/drivers/pci/controller/pci-tegra.c b/drivers/pci/controller/pci-tegra.c
+index 235b456698fc..b532d5082fb6 100644
+--- a/drivers/pci/controller/pci-tegra.c
++++ b/drivers/pci/controller/pci-tegra.c
+@@ -181,13 +181,6 @@
+
+ #define AFI_PEXBIAS_CTRL_0 0x168
+
+-#define RP_PRIV_XP_DL 0x00000494
+-#define RP_PRIV_XP_DL_GEN2_UPD_FC_TSHOLD (0x1ff << 1)
+-
+-#define RP_RX_HDR_LIMIT 0x00000e00
+-#define RP_RX_HDR_LIMIT_PW_MASK (0xff << 8)
+-#define RP_RX_HDR_LIMIT_PW (0x0e << 8)
+-
+ #define RP_ECTL_2_R1 0x00000e84
+ #define RP_ECTL_2_R1_RX_CTLE_1C_MASK 0xffff
+
+@@ -323,7 +316,6 @@ struct tegra_pcie_soc {
+ bool program_uphy;
+ bool update_clamp_threshold;
+ bool program_deskew_time;
+- bool raw_violation_fixup;
+ bool update_fc_timer;
+ bool has_cache_bars;
+ struct {
+@@ -659,23 +651,6 @@ static void tegra_pcie_apply_sw_fixup(struct tegra_pcie_port *port)
+ writel(value, port->base + RP_VEND_CTL0);
+ }
+
+- /* Fixup for read after write violation. */
+- if (soc->raw_violation_fixup) {
+- value = readl(port->base + RP_RX_HDR_LIMIT);
+- value &= ~RP_RX_HDR_LIMIT_PW_MASK;
+- value |= RP_RX_HDR_LIMIT_PW;
+- writel(value, port->base + RP_RX_HDR_LIMIT);
+-
+- value = readl(port->base + RP_PRIV_XP_DL);
+- value |= RP_PRIV_XP_DL_GEN2_UPD_FC_TSHOLD;
+- writel(value, port->base + RP_PRIV_XP_DL);
+-
+- value = readl(port->base + RP_VEND_XP);
+- value &= ~RP_VEND_XP_UPDATE_FC_THRESHOLD_MASK;
+- value |= soc->update_fc_threshold;
+- writel(value, port->base + RP_VEND_XP);
+- }
+-
+ if (soc->update_fc_timer) {
+ value = readl(port->base + RP_VEND_XP);
+ value &= ~RP_VEND_XP_UPDATE_FC_THRESHOLD_MASK;
+@@ -2416,7 +2391,6 @@ static const struct tegra_pcie_soc tegra20_pcie = {
+ .program_uphy = true,
+ .update_clamp_threshold = false,
+ .program_deskew_time = false,
+- .raw_violation_fixup = false,
+ .update_fc_timer = false,
+ .has_cache_bars = true,
+ .ectl.enable = false,
+@@ -2446,7 +2420,6 @@ static const struct tegra_pcie_soc tegra30_pcie = {
+ .program_uphy = true,
+ .update_clamp_threshold = false,
+ .program_deskew_time = false,
+- .raw_violation_fixup = false,
+ .update_fc_timer = false,
+ .has_cache_bars = false,
+ .ectl.enable = false,
+@@ -2459,8 +2432,6 @@ static const struct tegra_pcie_soc tegra124_pcie = {
+ .pads_pll_ctl = PADS_PLL_CTL_TEGRA30,
+ .tx_ref_sel = PADS_PLL_CTL_TXCLKREF_BUF_EN,
+ .pads_refclk_cfg0 = 0x44ac44ac,
+- /* FC threshold is bit[25:18] */
+- .update_fc_threshold = 0x03fc0000,
+ .has_pex_clkreq_en = true,
+ .has_pex_bias_ctrl = true,
+ .has_intr_prsnt_sense = true,
+@@ -2470,7 +2441,6 @@ static const struct tegra_pcie_soc tegra124_pcie = {
+ .program_uphy = true,
+ .update_clamp_threshold = true,
+ .program_deskew_time = false,
+- .raw_violation_fixup = true,
+ .update_fc_timer = false,
+ .has_cache_bars = false,
+ .ectl.enable = false,
+@@ -2494,7 +2464,6 @@ static const struct tegra_pcie_soc tegra210_pcie = {
+ .program_uphy = true,
+ .update_clamp_threshold = true,
+ .program_deskew_time = true,
+- .raw_violation_fixup = false,
+ .update_fc_timer = true,
+ .has_cache_bars = false,
+ .ectl = {
+@@ -2536,7 +2505,6 @@ static const struct tegra_pcie_soc tegra186_pcie = {
+ .program_uphy = false,
+ .update_clamp_threshold = false,
+ .program_deskew_time = false,
+- .raw_violation_fixup = false,
+ .update_fc_timer = false,
+ .has_cache_bars = false,
+ .ectl.enable = false,
+diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c
+index ad4fc829cbb2..a483c0720a0c 100644
+--- a/drivers/scsi/ufs/ufshcd.c
++++ b/drivers/scsi/ufs/ufshcd.c
+@@ -1925,8 +1925,11 @@ static void ufshcd_clk_scaling_update_busy(struct ufs_hba *hba)
+ static inline
+ void ufshcd_send_command(struct ufs_hba *hba, unsigned int task_tag)
+ {
+- hba->lrb[task_tag].issue_time_stamp = ktime_get();
+- hba->lrb[task_tag].compl_time_stamp = ktime_set(0, 0);
++ struct ufshcd_lrb *lrbp = &hba->lrb[task_tag];
++
++ lrbp->issue_time_stamp = ktime_get();
++ lrbp->compl_time_stamp = ktime_set(0, 0);
++ ufshcd_vops_setup_xfer_req(hba, task_tag, (lrbp->cmd ? true : false));
+ ufshcd_add_command_trace(hba, task_tag, "send");
+ ufshcd_clk_scaling_start_busy(hba);
+ __set_bit(task_tag, &hba->outstanding_reqs);
+@@ -2536,7 +2539,6 @@ static int ufshcd_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *cmd)
+
+ /* issue command to the controller */
+ spin_lock_irqsave(hba->host->host_lock, flags);
+- ufshcd_vops_setup_xfer_req(hba, tag, true);
+ ufshcd_send_command(hba, tag);
+ out_unlock:
+ spin_unlock_irqrestore(hba->host->host_lock, flags);
+@@ -2723,7 +2725,6 @@ static int ufshcd_exec_dev_cmd(struct ufs_hba *hba,
+ /* Make sure descriptors are ready before ringing the doorbell */
+ wmb();
+ spin_lock_irqsave(hba->host->host_lock, flags);
+- ufshcd_vops_setup_xfer_req(hba, tag, false);
+ ufshcd_send_command(hba, tag);
+ spin_unlock_irqrestore(hba->host->host_lock, flags);
+
+diff --git a/drivers/staging/android/ashmem.c b/drivers/staging/android/ashmem.c
+index c05a214191da..10b4be1f3e78 100644
+--- a/drivers/staging/android/ashmem.c
++++ b/drivers/staging/android/ashmem.c
+@@ -95,6 +95,15 @@ static DEFINE_MUTEX(ashmem_mutex);
+ static struct kmem_cache *ashmem_area_cachep __read_mostly;
+ static struct kmem_cache *ashmem_range_cachep __read_mostly;
+
++/*
++ * A separate lockdep class for the backing shmem inodes to resolve the lockdep
++ * warning about the race between kswapd taking fs_reclaim before inode_lock
++ * and write syscall taking inode_lock and then fs_reclaim.
++ * Note that such race is impossible because ashmem does not support write
++ * syscalls operating on the backing shmem.
++ */
++static struct lock_class_key backing_shmem_inode_class;
++
+ static inline unsigned long range_size(struct ashmem_range *range)
+ {
+ return range->pgend - range->pgstart + 1;
+@@ -396,6 +405,7 @@ static int ashmem_mmap(struct file *file, struct vm_area_struct *vma)
+ if (!asma->file) {
+ char *name = ASHMEM_NAME_DEF;
+ struct file *vmfile;
++ struct inode *inode;
+
+ if (asma->name[ASHMEM_NAME_PREFIX_LEN] != '\0')
+ name = asma->name;
+@@ -407,6 +417,8 @@ static int ashmem_mmap(struct file *file, struct vm_area_struct *vma)
+ goto out;
+ }
+ vmfile->f_mode |= FMODE_LSEEK;
++ inode = file_inode(vmfile);
++ lockdep_set_class(&inode->i_rwsem, &backing_shmem_inode_class);
+ asma->file = vmfile;
+ /*
+ * override mmap operation of the vmfile so that it can't be
+diff --git a/drivers/staging/rtl8188eu/core/rtw_mlme.c b/drivers/staging/rtl8188eu/core/rtw_mlme.c
+index 9de2d421f6b1..4f2abe1e14d5 100644
+--- a/drivers/staging/rtl8188eu/core/rtw_mlme.c
++++ b/drivers/staging/rtl8188eu/core/rtw_mlme.c
+@@ -1729,9 +1729,11 @@ int rtw_restruct_sec_ie(struct adapter *adapter, u8 *in_ie, u8 *out_ie, uint in_
+ if ((ndisauthmode == Ndis802_11AuthModeWPA) ||
+ (ndisauthmode == Ndis802_11AuthModeWPAPSK))
+ authmode = _WPA_IE_ID_;
+- if ((ndisauthmode == Ndis802_11AuthModeWPA2) ||
++ else if ((ndisauthmode == Ndis802_11AuthModeWPA2) ||
+ (ndisauthmode == Ndis802_11AuthModeWPA2PSK))
+ authmode = _WPA2_IE_ID_;
++ else
++ authmode = 0x0;
+
+ if (check_fwstate(pmlmepriv, WIFI_UNDER_WPS)) {
+ memcpy(out_ie + ielength, psecuritypriv->wps_ie, psecuritypriv->wps_ie_len);
+diff --git a/drivers/staging/rtl8712/hal_init.c b/drivers/staging/rtl8712/hal_init.c
+index 40145c0338e4..42c0a3c947f1 100644
+--- a/drivers/staging/rtl8712/hal_init.c
++++ b/drivers/staging/rtl8712/hal_init.c
+@@ -33,7 +33,6 @@ static void rtl871x_load_fw_cb(const struct firmware *firmware, void *context)
+ {
+ struct _adapter *adapter = context;
+
+- complete(&adapter->rtl8712_fw_ready);
+ if (!firmware) {
+ struct usb_device *udev = adapter->dvobjpriv.pusbdev;
+ struct usb_interface *usb_intf = adapter->pusb_intf;
+@@ -41,11 +40,13 @@ static void rtl871x_load_fw_cb(const struct firmware *firmware, void *context)
+ dev_err(&udev->dev, "r8712u: Firmware request failed\n");
+ usb_put_dev(udev);
+ usb_set_intfdata(usb_intf, NULL);
++ complete(&adapter->rtl8712_fw_ready);
+ return;
+ }
+ adapter->fw = firmware;
+ /* firmware available - start netdev */
+ register_netdev(adapter->pnetdev);
++ complete(&adapter->rtl8712_fw_ready);
+ }
+
+ static const char firmware_file[] = "rtlwifi/rtl8712u.bin";
+diff --git a/drivers/staging/rtl8712/usb_intf.c b/drivers/staging/rtl8712/usb_intf.c
+index a87562f632a7..2fcd65260f4c 100644
+--- a/drivers/staging/rtl8712/usb_intf.c
++++ b/drivers/staging/rtl8712/usb_intf.c
+@@ -595,13 +595,17 @@ static void r871xu_dev_remove(struct usb_interface *pusb_intf)
+ if (pnetdev) {
+ struct _adapter *padapter = netdev_priv(pnetdev);
+
+- usb_set_intfdata(pusb_intf, NULL);
+- release_firmware(padapter->fw);
+ /* never exit with a firmware callback pending */
+ wait_for_completion(&padapter->rtl8712_fw_ready);
++ pnetdev = usb_get_intfdata(pusb_intf);
++ usb_set_intfdata(pusb_intf, NULL);
++ if (!pnetdev)
++ goto firmware_load_fail;
++ release_firmware(padapter->fw);
+ if (drvpriv.drv_registered)
+ padapter->surprise_removed = true;
+- unregister_netdev(pnetdev); /* will call netdev_close() */
++ if (pnetdev->reg_state != NETREG_UNINITIALIZED)
++ unregister_netdev(pnetdev); /* will call netdev_close() */
+ flush_scheduled_work();
+ udelay(1);
+ /* Stop driver mlme relation timer */
+@@ -614,6 +618,7 @@ static void r871xu_dev_remove(struct usb_interface *pusb_intf)
+ */
+ usb_put_dev(udev);
+ }
++firmware_load_fail:
+ /* If we didn't unplug usb dongle and remove/insert module, driver
+ * fails on sitesurvey for the first time when device is up.
+ * Reset usb port for sitesurvey fail issue.
+diff --git a/drivers/usb/host/xhci-pci.c b/drivers/usb/host/xhci-pci.c
+index 9234c82e70e4..3feaafebfe58 100644
+--- a/drivers/usb/host/xhci-pci.c
++++ b/drivers/usb/host/xhci-pci.c
+@@ -57,7 +57,10 @@
+ #define PCI_DEVICE_ID_AMD_PROMONTORYA_3 0x43ba
+ #define PCI_DEVICE_ID_AMD_PROMONTORYA_2 0x43bb
+ #define PCI_DEVICE_ID_AMD_PROMONTORYA_1 0x43bc
++#define PCI_DEVICE_ID_ASMEDIA_1042_XHCI 0x1042
+ #define PCI_DEVICE_ID_ASMEDIA_1042A_XHCI 0x1142
++#define PCI_DEVICE_ID_ASMEDIA_1142_XHCI 0x1242
++#define PCI_DEVICE_ID_ASMEDIA_2142_XHCI 0x2142
+
+ static const char hcd_name[] = "xhci_hcd";
+
+@@ -260,13 +263,14 @@ static void xhci_pci_quirks(struct device *dev, struct xhci_hcd *xhci)
+ xhci->quirks |= XHCI_LPM_SUPPORT;
+
+ if (pdev->vendor == PCI_VENDOR_ID_ASMEDIA &&
+- pdev->device == 0x1042)
++ pdev->device == PCI_DEVICE_ID_ASMEDIA_1042_XHCI)
+ xhci->quirks |= XHCI_BROKEN_STREAMS;
+ if (pdev->vendor == PCI_VENDOR_ID_ASMEDIA &&
+- pdev->device == 0x1142)
++ pdev->device == PCI_DEVICE_ID_ASMEDIA_1042A_XHCI)
+ xhci->quirks |= XHCI_TRUST_TX_LENGTH;
+ if (pdev->vendor == PCI_VENDOR_ID_ASMEDIA &&
+- pdev->device == 0x2142)
++ (pdev->device == PCI_DEVICE_ID_ASMEDIA_1142_XHCI ||
++ pdev->device == PCI_DEVICE_ID_ASMEDIA_2142_XHCI))
+ xhci->quirks |= XHCI_NO_64BIT_SUPPORT;
+
+ if (pdev->vendor == PCI_VENDOR_ID_ASMEDIA &&
+diff --git a/drivers/usb/misc/iowarrior.c b/drivers/usb/misc/iowarrior.c
+index dce20301e367..103c69c692ba 100644
+--- a/drivers/usb/misc/iowarrior.c
++++ b/drivers/usb/misc/iowarrior.c
+@@ -2,8 +2,9 @@
+ /*
+ * Native support for the I/O-Warrior USB devices
+ *
+- * Copyright (c) 2003-2005 Code Mercenaries GmbH
+- * written by Christian Lucht <lucht@codemercs.com>
++ * Copyright (c) 2003-2005, 2020 Code Mercenaries GmbH
++ * written by Christian Lucht <lucht@codemercs.com> and
++ * Christoph Jung <jung@codemercs.com>
+ *
+ * based on
+
+@@ -802,14 +803,28 @@ static int iowarrior_probe(struct usb_interface *interface,
+
+ /* we have to check the report_size often, so remember it in the endianness suitable for our machine */
+ dev->report_size = usb_endpoint_maxp(dev->int_in_endpoint);
+- if ((dev->interface->cur_altsetting->desc.bInterfaceNumber == 0) &&
+- ((dev->product_id == USB_DEVICE_ID_CODEMERCS_IOW56) ||
+- (dev->product_id == USB_DEVICE_ID_CODEMERCS_IOW56AM) ||
+- (dev->product_id == USB_DEVICE_ID_CODEMERCS_IOW28) ||
+- (dev->product_id == USB_DEVICE_ID_CODEMERCS_IOW28L) ||
+- (dev->product_id == USB_DEVICE_ID_CODEMERCS_IOW100)))
+- /* IOWarrior56 has wMaxPacketSize different from report size */
+- dev->report_size = 7;
++
++ /*
++ * Some devices need the report size to be different than the
++ * endpoint size.
++ */
++ if (dev->interface->cur_altsetting->desc.bInterfaceNumber == 0) {
++ switch (dev->product_id) {
++ case USB_DEVICE_ID_CODEMERCS_IOW56:
++ case USB_DEVICE_ID_CODEMERCS_IOW56AM:
++ dev->report_size = 7;
++ break;
++
++ case USB_DEVICE_ID_CODEMERCS_IOW28:
++ case USB_DEVICE_ID_CODEMERCS_IOW28L:
++ dev->report_size = 4;
++ break;
++
++ case USB_DEVICE_ID_CODEMERCS_IOW100:
++ dev->report_size = 13;
++ break;
++ }
++ }
+
+ /* create the urb and buffer for reading */
+ dev->int_in_urb = usb_alloc_urb(0, GFP_KERNEL);
+diff --git a/drivers/usb/serial/qcserial.c b/drivers/usb/serial/qcserial.c
+index d147feae83e6..0f60363c1bbc 100644
+--- a/drivers/usb/serial/qcserial.c
++++ b/drivers/usb/serial/qcserial.c
+@@ -155,6 +155,7 @@ static const struct usb_device_id id_table[] = {
+ {DEVICE_SWI(0x1199, 0x9056)}, /* Sierra Wireless Modem */
+ {DEVICE_SWI(0x1199, 0x9060)}, /* Sierra Wireless Modem */
+ {DEVICE_SWI(0x1199, 0x9061)}, /* Sierra Wireless Modem */
++ {DEVICE_SWI(0x1199, 0x9062)}, /* Sierra Wireless EM7305 QDL */
+ {DEVICE_SWI(0x1199, 0x9063)}, /* Sierra Wireless EM7305 */
+ {DEVICE_SWI(0x1199, 0x9070)}, /* Sierra Wireless MC74xx */
+ {DEVICE_SWI(0x1199, 0x9071)}, /* Sierra Wireless MC74xx */
+diff --git a/drivers/video/console/vgacon.c b/drivers/video/console/vgacon.c
+index 998b0de1812f..e9254b3085a3 100644
+--- a/drivers/video/console/vgacon.c
++++ b/drivers/video/console/vgacon.c
+@@ -251,6 +251,10 @@ static void vgacon_scrollback_update(struct vc_data *c, int t, int count)
+ p = (void *) (c->vc_origin + t * c->vc_size_row);
+
+ while (count--) {
++ if ((vgacon_scrollback_cur->tail + c->vc_size_row) >
++ vgacon_scrollback_cur->size)
++ vgacon_scrollback_cur->tail = 0;
++
+ scr_memcpyw(vgacon_scrollback_cur->data +
+ vgacon_scrollback_cur->tail,
+ p, c->vc_size_row);
+diff --git a/drivers/video/fbdev/omap2/omapfb/dss/dss.c b/drivers/video/fbdev/omap2/omapfb/dss/dss.c
+index 7252d22dd117..bfc5c4c5a26a 100644
+--- a/drivers/video/fbdev/omap2/omapfb/dss/dss.c
++++ b/drivers/video/fbdev/omap2/omapfb/dss/dss.c
+@@ -833,7 +833,7 @@ static const struct dss_features omap34xx_dss_feats = {
+ };
+
+ static const struct dss_features omap3630_dss_feats = {
+- .fck_div_max = 32,
++ .fck_div_max = 31,
+ .dss_fck_multiplier = 1,
+ .parent_clk_name = "dpll4_ck",
+ .dpi_select_source = &dss_dpi_select_source_omap2_omap3,
+diff --git a/fs/xattr.c b/fs/xattr.c
+index 91608d9bfc6a..95f38f57347f 100644
+--- a/fs/xattr.c
++++ b/fs/xattr.c
+@@ -204,10 +204,22 @@ int __vfs_setxattr_noperm(struct dentry *dentry, const char *name,
+ return error;
+ }
+
+-
++/**
++ * __vfs_setxattr_locked: set an extended attribute while holding the inode
++ * lock
++ *
++ * @dentry - object to perform setxattr on
++ * @name - xattr name to set
++ * @value - value to set @name to
++ * @size - size of @value
++ * @flags - flags to pass into filesystem operations
++ * @delegated_inode - on return, will contain an inode pointer that
++ * a delegation was broken on, NULL if none.
++ */
+ int
+-vfs_setxattr(struct dentry *dentry, const char *name, const void *value,
+- size_t size, int flags)
++__vfs_setxattr_locked(struct dentry *dentry, const char *name,
++ const void *value, size_t size, int flags,
++ struct inode **delegated_inode)
+ {
+ struct inode *inode = dentry->d_inode;
+ int error;
+@@ -216,15 +228,40 @@ vfs_setxattr(struct dentry *dentry, const char *name, const void *value,
+ if (error)
+ return error;
+
+- inode_lock(inode);
+ error = security_inode_setxattr(dentry, name, value, size, flags);
+ if (error)
+ goto out;
+
++ error = try_break_deleg(inode, delegated_inode);
++ if (error)
++ goto out;
++
+ error = __vfs_setxattr_noperm(dentry, name, value, size, flags);
+
+ out:
++ return error;
++}
++EXPORT_SYMBOL_GPL(__vfs_setxattr_locked);
++
++int
++vfs_setxattr(struct dentry *dentry, const char *name, const void *value,
++ size_t size, int flags)
++{
++ struct inode *inode = dentry->d_inode;
++ struct inode *delegated_inode = NULL;
++ int error;
++
++retry_deleg:
++ inode_lock(inode);
++ error = __vfs_setxattr_locked(dentry, name, value, size, flags,
++ &delegated_inode);
+ inode_unlock(inode);
++
++ if (delegated_inode) {
++ error = break_deleg_wait(&delegated_inode);
++ if (!error)
++ goto retry_deleg;
++ }
+ return error;
+ }
+ EXPORT_SYMBOL_GPL(vfs_setxattr);
+@@ -378,8 +415,18 @@ __vfs_removexattr(struct dentry *dentry, const char *name)
+ }
+ EXPORT_SYMBOL(__vfs_removexattr);
+
++/**
++ * __vfs_removexattr_locked: set an extended attribute while holding the inode
++ * lock
++ *
++ * @dentry - object to perform setxattr on
++ * @name - name of xattr to remove
++ * @delegated_inode - on return, will contain an inode pointer that
++ * a delegation was broken on, NULL if none.
++ */
+ int
+-vfs_removexattr(struct dentry *dentry, const char *name)
++__vfs_removexattr_locked(struct dentry *dentry, const char *name,
++ struct inode **delegated_inode)
+ {
+ struct inode *inode = dentry->d_inode;
+ int error;
+@@ -388,11 +435,14 @@ vfs_removexattr(struct dentry *dentry, const char *name)
+ if (error)
+ return error;
+
+- inode_lock(inode);
+ error = security_inode_removexattr(dentry, name);
+ if (error)
+ goto out;
+
++ error = try_break_deleg(inode, delegated_inode);
++ if (error)
++ goto out;
++
+ error = __vfs_removexattr(dentry, name);
+
+ if (!error) {
+@@ -401,12 +451,32 @@ vfs_removexattr(struct dentry *dentry, const char *name)
+ }
+
+ out:
++ return error;
++}
++EXPORT_SYMBOL_GPL(__vfs_removexattr_locked);
++
++int
++vfs_removexattr(struct dentry *dentry, const char *name)
++{
++ struct inode *inode = dentry->d_inode;
++ struct inode *delegated_inode = NULL;
++ int error;
++
++retry_deleg:
++ inode_lock(inode);
++ error = __vfs_removexattr_locked(dentry, name, &delegated_inode);
+ inode_unlock(inode);
++
++ if (delegated_inode) {
++ error = break_deleg_wait(&delegated_inode);
++ if (!error)
++ goto retry_deleg;
++ }
++
+ return error;
+ }
+ EXPORT_SYMBOL_GPL(vfs_removexattr);
+
+-
+ /*
+ * Extended attribute SET operations
+ */
+diff --git a/include/linux/prandom.h b/include/linux/prandom.h
+new file mode 100644
+index 000000000000..aa16e6468f91
+--- /dev/null
++++ b/include/linux/prandom.h
+@@ -0,0 +1,78 @@
++/* SPDX-License-Identifier: GPL-2.0 */
++/*
++ * include/linux/prandom.h
++ *
++ * Include file for the fast pseudo-random 32-bit
++ * generation.
++ */
++#ifndef _LINUX_PRANDOM_H
++#define _LINUX_PRANDOM_H
++
++#include <linux/types.h>
++#include <linux/percpu.h>
++
++u32 prandom_u32(void);
++void prandom_bytes(void *buf, size_t nbytes);
++void prandom_seed(u32 seed);
++void prandom_reseed_late(void);
++
++struct rnd_state {
++ __u32 s1, s2, s3, s4;
++};
++
++DECLARE_PER_CPU(struct rnd_state, net_rand_state);
++
++u32 prandom_u32_state(struct rnd_state *state);
++void prandom_bytes_state(struct rnd_state *state, void *buf, size_t nbytes);
++void prandom_seed_full_state(struct rnd_state __percpu *pcpu_state);
++
++#define prandom_init_once(pcpu_state) \
++ DO_ONCE(prandom_seed_full_state, (pcpu_state))
++
++/**
++ * prandom_u32_max - returns a pseudo-random number in interval [0, ep_ro)
++ * @ep_ro: right open interval endpoint
++ *
++ * Returns a pseudo-random number that is in interval [0, ep_ro). Note
++ * that the result depends on PRNG being well distributed in [0, ~0U]
++ * u32 space. Here we use maximally equidistributed combined Tausworthe
++ * generator, that is, prandom_u32(). This is useful when requesting a
++ * random index of an array containing ep_ro elements, for example.
++ *
++ * Returns: pseudo-random number in interval [0, ep_ro)
++ */
++static inline u32 prandom_u32_max(u32 ep_ro)
++{
++ return (u32)(((u64) prandom_u32() * ep_ro) >> 32);
++}
++
++/*
++ * Handle minimum values for seeds
++ */
++static inline u32 __seed(u32 x, u32 m)
++{
++ return (x < m) ? x + m : x;
++}
++
++/**
++ * prandom_seed_state - set seed for prandom_u32_state().
++ * @state: pointer to state structure to receive the seed.
++ * @seed: arbitrary 64-bit value to use as a seed.
++ */
++static inline void prandom_seed_state(struct rnd_state *state, u64 seed)
++{
++ u32 i = (seed >> 32) ^ (seed << 10) ^ seed;
++
++ state->s1 = __seed(i, 2U);
++ state->s2 = __seed(i, 8U);
++ state->s3 = __seed(i, 16U);
++ state->s4 = __seed(i, 128U);
++}
++
++/* Pseudo random number generator from numerical recipes. */
++static inline u32 next_pseudo_random32(u32 seed)
++{
++ return seed * 1664525 + 1013904223;
++}
++
++#endif
+diff --git a/include/linux/random.h b/include/linux/random.h
+index 9ab7443bd91b..f45b8be3e3c4 100644
+--- a/include/linux/random.h
++++ b/include/linux/random.h
+@@ -11,7 +11,6 @@
+ #include <linux/kernel.h>
+ #include <linux/list.h>
+ #include <linux/once.h>
+-#include <asm/percpu.h>
+
+ #include <uapi/linux/random.h>
+
+@@ -111,63 +110,12 @@ declare_get_random_var_wait(long)
+
+ unsigned long randomize_page(unsigned long start, unsigned long range);
+
+-u32 prandom_u32(void);
+-void prandom_bytes(void *buf, size_t nbytes);
+-void prandom_seed(u32 seed);
+-void prandom_reseed_late(void);
+-
+-struct rnd_state {
+- __u32 s1, s2, s3, s4;
+-};
+-
+-DECLARE_PER_CPU(struct rnd_state, net_rand_state);
+-
+-u32 prandom_u32_state(struct rnd_state *state);
+-void prandom_bytes_state(struct rnd_state *state, void *buf, size_t nbytes);
+-void prandom_seed_full_state(struct rnd_state __percpu *pcpu_state);
+-
+-#define prandom_init_once(pcpu_state) \
+- DO_ONCE(prandom_seed_full_state, (pcpu_state))
+-
+-/**
+- * prandom_u32_max - returns a pseudo-random number in interval [0, ep_ro)
+- * @ep_ro: right open interval endpoint
+- *
+- * Returns a pseudo-random number that is in interval [0, ep_ro). Note
+- * that the result depends on PRNG being well distributed in [0, ~0U]
+- * u32 space. Here we use maximally equidistributed combined Tausworthe
+- * generator, that is, prandom_u32(). This is useful when requesting a
+- * random index of an array containing ep_ro elements, for example.
+- *
+- * Returns: pseudo-random number in interval [0, ep_ro)
+- */
+-static inline u32 prandom_u32_max(u32 ep_ro)
+-{
+- return (u32)(((u64) prandom_u32() * ep_ro) >> 32);
+-}
+-
+ /*
+- * Handle minimum values for seeds
+- */
+-static inline u32 __seed(u32 x, u32 m)
+-{
+- return (x < m) ? x + m : x;
+-}
+-
+-/**
+- * prandom_seed_state - set seed for prandom_u32_state().
+- * @state: pointer to state structure to receive the seed.
+- * @seed: arbitrary 64-bit value to use as a seed.
++ * This is designed to be standalone for just prandom
++ * users, but for now we include it from <linux/random.h>
++ * for legacy reasons.
+ */
+-static inline void prandom_seed_state(struct rnd_state *state, u64 seed)
+-{
+- u32 i = (seed >> 32) ^ (seed << 10) ^ seed;
+-
+- state->s1 = __seed(i, 2U);
+- state->s2 = __seed(i, 8U);
+- state->s3 = __seed(i, 16U);
+- state->s4 = __seed(i, 128U);
+-}
++#include <linux/prandom.h>
+
+ #ifdef CONFIG_ARCH_RANDOM
+ # include <asm/archrandom.h>
+@@ -210,10 +158,4 @@ static inline bool __init arch_get_random_long_early(unsigned long *v)
+ }
+ #endif
+
+-/* Pseudo random number generator from numerical recipes. */
+-static inline u32 next_pseudo_random32(u32 seed)
+-{
+- return seed * 1664525 + 1013904223;
+-}
+-
+ #endif /* _LINUX_RANDOM_H */
+diff --git a/include/linux/xattr.h b/include/linux/xattr.h
+index c5afaf8ca7a2..902b740b6cac 100644
+--- a/include/linux/xattr.h
++++ b/include/linux/xattr.h
+@@ -52,8 +52,10 @@ ssize_t vfs_getxattr(struct dentry *, const char *, void *, size_t);
+ ssize_t vfs_listxattr(struct dentry *d, char *list, size_t size);
+ int __vfs_setxattr(struct dentry *, struct inode *, const char *, const void *, size_t, int);
+ int __vfs_setxattr_noperm(struct dentry *, const char *, const void *, size_t, int);
++int __vfs_setxattr_locked(struct dentry *, const char *, const void *, size_t, int, struct inode **);
+ int vfs_setxattr(struct dentry *, const char *, const void *, size_t, int);
+ int __vfs_removexattr(struct dentry *, const char *);
++int __vfs_removexattr_locked(struct dentry *, const char *, struct inode **);
+ int vfs_removexattr(struct dentry *, const char *);
+
+ ssize_t generic_listxattr(struct dentry *dentry, char *buffer, size_t buffer_size);
+diff --git a/net/bluetooth/hci_event.c b/net/bluetooth/hci_event.c
+index af9d7f2ff8ba..6c6c9a81bee2 100644
+--- a/net/bluetooth/hci_event.c
++++ b/net/bluetooth/hci_event.c
+@@ -2520,7 +2520,7 @@ static void hci_inquiry_result_evt(struct hci_dev *hdev, struct sk_buff *skb)
+
+ BT_DBG("%s num_rsp %d", hdev->name, num_rsp);
+
+- if (!num_rsp)
++ if (!num_rsp || skb->len < num_rsp * sizeof(*info) + 1)
+ return;
+
+ if (hci_dev_test_flag(hdev, HCI_PERIODIC_INQ))
+@@ -4166,6 +4166,9 @@ static void hci_inquiry_result_with_rssi_evt(struct hci_dev *hdev,
+ struct inquiry_info_with_rssi_and_pscan_mode *info;
+ info = (void *) (skb->data + 1);
+
++ if (skb->len < num_rsp * sizeof(*info) + 1)
++ goto unlock;
++
+ for (; num_rsp; num_rsp--, info++) {
+ u32 flags;
+
+@@ -4187,6 +4190,9 @@ static void hci_inquiry_result_with_rssi_evt(struct hci_dev *hdev,
+ } else {
+ struct inquiry_info_with_rssi *info = (void *) (skb->data + 1);
+
++ if (skb->len < num_rsp * sizeof(*info) + 1)
++ goto unlock;
++
+ for (; num_rsp; num_rsp--, info++) {
+ u32 flags;
+
+@@ -4207,6 +4213,7 @@ static void hci_inquiry_result_with_rssi_evt(struct hci_dev *hdev,
+ }
+ }
+
++unlock:
+ hci_dev_unlock(hdev);
+ }
+
+@@ -4382,7 +4389,7 @@ static void hci_extended_inquiry_result_evt(struct hci_dev *hdev,
+
+ BT_DBG("%s num_rsp %d", hdev->name, num_rsp);
+
+- if (!num_rsp)
++ if (!num_rsp || skb->len < num_rsp * sizeof(*info) + 1)
+ return;
+
+ if (hci_dev_test_flag(hdev, HCI_PERIODIC_INQ))
+diff --git a/scripts/coccinelle/misc/add_namespace.cocci b/scripts/coccinelle/misc/add_namespace.cocci
+index 99e93a6c2e24..cbf1614163cb 100644
+--- a/scripts/coccinelle/misc/add_namespace.cocci
++++ b/scripts/coccinelle/misc/add_namespace.cocci
+@@ -6,6 +6,7 @@
+ /// add a missing namespace tag to a module source file.
+ ///
+
++virtual nsdeps
+ virtual report
+
+ @has_ns_import@
+@@ -16,10 +17,15 @@ MODULE_IMPORT_NS(ns);
+
+ // Add missing imports, but only adjacent to a MODULE_LICENSE statement.
+ // That ensures we are adding it only to the main module source file.
+-@do_import depends on !has_ns_import@
++@do_import depends on !has_ns_import && nsdeps@
+ declarer name MODULE_LICENSE;
+ expression license;
+ identifier virtual.ns;
+ @@
+ MODULE_LICENSE(license);
+ + MODULE_IMPORT_NS(ns);
++
++// Dummy rule for report mode that would otherwise be empty and make spatch
++// fail ("No rules apply.")
++@script:python depends on report@
++@@
+diff --git a/scripts/nsdeps b/scripts/nsdeps
+index 03a8e7cbe6c7..dab4c1a0e27d 100644
+--- a/scripts/nsdeps
++++ b/scripts/nsdeps
+@@ -29,7 +29,7 @@ fi
+
+ generate_deps_for_ns() {
+ $SPATCH --very-quiet --in-place --sp-file \
+- $srctree/scripts/coccinelle/misc/add_namespace.cocci -D ns=$1 $2
++ $srctree/scripts/coccinelle/misc/add_namespace.cocci -D nsdeps -D ns=$1 $2
+ }
+
+ generate_deps() {
+diff --git a/security/integrity/ima/Kconfig b/security/integrity/ima/Kconfig
+index edde88dbe576..62dc11a5af01 100644
+--- a/security/integrity/ima/Kconfig
++++ b/security/integrity/ima/Kconfig
+@@ -232,7 +232,7 @@ config IMA_APPRAISE_REQUIRE_POLICY_SIGS
+
+ config IMA_APPRAISE_BOOTPARAM
+ bool "ima_appraise boot parameter"
+- depends on IMA_APPRAISE && !IMA_ARCH_POLICY
++ depends on IMA_APPRAISE
+ default y
+ help
+ This option enables the different "ima_appraise=" modes
+diff --git a/security/integrity/ima/ima_appraise.c b/security/integrity/ima/ima_appraise.c
+index a9649b04b9f1..28a59508c6bd 100644
+--- a/security/integrity/ima/ima_appraise.c
++++ b/security/integrity/ima/ima_appraise.c
+@@ -19,6 +19,12 @@
+ static int __init default_appraise_setup(char *str)
+ {
+ #ifdef CONFIG_IMA_APPRAISE_BOOTPARAM
++ if (arch_ima_get_secureboot()) {
++ pr_info("Secure boot enabled: ignoring ima_appraise=%s boot parameter option",
++ str);
++ return 1;
++ }
++
+ if (strncmp(str, "off", 3) == 0)
+ ima_appraise = 0;
+ else if (strncmp(str, "log", 3) == 0)
+diff --git a/security/smack/smackfs.c b/security/smack/smackfs.c
+index c21b656b3263..840a192e9337 100644
+--- a/security/smack/smackfs.c
++++ b/security/smack/smackfs.c
+@@ -2720,7 +2720,6 @@ static int smk_open_relabel_self(struct inode *inode, struct file *file)
+ static ssize_t smk_write_relabel_self(struct file *file, const char __user *buf,
+ size_t count, loff_t *ppos)
+ {
+- struct task_smack *tsp = smack_cred(current_cred());
+ char *data;
+ int rc;
+ LIST_HEAD(list_tmp);
+@@ -2745,11 +2744,21 @@ static ssize_t smk_write_relabel_self(struct file *file, const char __user *buf,
+ kfree(data);
+
+ if (!rc || (rc == -EINVAL && list_empty(&list_tmp))) {
++ struct cred *new;
++ struct task_smack *tsp;
++
++ new = prepare_creds();
++ if (!new) {
++ rc = -ENOMEM;
++ goto out;
++ }
++ tsp = smack_cred(new);
+ smk_destroy_label_list(&tsp->smk_relabel);
+ list_splice(&list_tmp, &tsp->smk_relabel);
++ commit_creds(new);
+ return count;
+ }
+-
++out:
+ smk_destroy_label_list(&list_tmp);
+ return rc;
+ }
+diff --git a/sound/core/seq/oss/seq_oss.c b/sound/core/seq/oss/seq_oss.c
+index 17f913657304..c8b9c0b315d8 100644
+--- a/sound/core/seq/oss/seq_oss.c
++++ b/sound/core/seq/oss/seq_oss.c
+@@ -168,10 +168,16 @@ static long
+ odev_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
+ {
+ struct seq_oss_devinfo *dp;
++ long rc;
++
+ dp = file->private_data;
+ if (snd_BUG_ON(!dp))
+ return -ENXIO;
+- return snd_seq_oss_ioctl(dp, cmd, arg);
++
++ mutex_lock(&register_mutex);
++ rc = snd_seq_oss_ioctl(dp, cmd, arg);
++ mutex_unlock(&register_mutex);
++ return rc;
+ }
+
+ #ifdef CONFIG_COMPAT
+diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c
+index 3fbba2e51e36..4c23b169ac67 100644
+--- a/sound/pci/hda/hda_intel.c
++++ b/sound/pci/hda/hda_intel.c
+@@ -2354,7 +2354,6 @@ static int azx_probe_continue(struct azx *chip)
+
+ if (azx_has_pm_runtime(chip)) {
+ pm_runtime_use_autosuspend(&pci->dev);
+- pm_runtime_allow(&pci->dev);
+ pm_runtime_put_autosuspend(&pci->dev);
+ }
+
+diff --git a/sound/pci/hda/patch_ca0132.c b/sound/pci/hda/patch_ca0132.c
+index 34fe753a46fb..6dfa864d3fe7 100644
+--- a/sound/pci/hda/patch_ca0132.c
++++ b/sound/pci/hda/patch_ca0132.c
+@@ -1182,6 +1182,7 @@ static const struct snd_pci_quirk ca0132_quirks[] = {
+ SND_PCI_QUIRK(0x1458, 0xA036, "Gigabyte GA-Z170X-Gaming 7", QUIRK_R3DI),
+ SND_PCI_QUIRK(0x3842, 0x1038, "EVGA X99 Classified", QUIRK_R3DI),
+ SND_PCI_QUIRK(0x1102, 0x0013, "Recon3D", QUIRK_R3D),
++ SND_PCI_QUIRK(0x1102, 0x0018, "Recon3D", QUIRK_R3D),
+ SND_PCI_QUIRK(0x1102, 0x0051, "Sound Blaster AE-5", QUIRK_AE5),
+ {}
+ };
+@@ -4671,7 +4672,7 @@ static int ca0132_alt_select_in(struct hda_codec *codec)
+ tmp = FLOAT_ONE;
+ break;
+ case QUIRK_AE5:
+- ca0113_mmio_command_set(codec, 0x48, 0x28, 0x00);
++ ca0113_mmio_command_set(codec, 0x30, 0x28, 0x00);
+ tmp = FLOAT_THREE;
+ break;
+ default:
+@@ -4717,7 +4718,7 @@ static int ca0132_alt_select_in(struct hda_codec *codec)
+ r3di_gpio_mic_set(codec, R3DI_REAR_MIC);
+ break;
+ case QUIRK_AE5:
+- ca0113_mmio_command_set(codec, 0x48, 0x28, 0x00);
++ ca0113_mmio_command_set(codec, 0x30, 0x28, 0x00);
+ break;
+ default:
+ break;
+@@ -4756,7 +4757,7 @@ static int ca0132_alt_select_in(struct hda_codec *codec)
+ tmp = FLOAT_ONE;
+ break;
+ case QUIRK_AE5:
+- ca0113_mmio_command_set(codec, 0x48, 0x28, 0x3f);
++ ca0113_mmio_command_set(codec, 0x30, 0x28, 0x3f);
+ tmp = FLOAT_THREE;
+ break;
+ default:
+@@ -5748,6 +5749,11 @@ static int ca0132_switch_get(struct snd_kcontrol *kcontrol,
+ return 0;
+ }
+
++ if (nid == ZXR_HEADPHONE_GAIN) {
++ *valp = spec->zxr_gain_set;
++ return 0;
++ }
++
+ return 0;
+ }
+
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 29f5878f0c50..417c8e17d839 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -6166,6 +6166,11 @@ enum {
+ ALC289_FIXUP_ASUS_GA502,
+ ALC256_FIXUP_ACER_MIC_NO_PRESENCE,
+ ALC285_FIXUP_HP_GPIO_AMP_INIT,
++ ALC269_FIXUP_CZC_B20,
++ ALC269_FIXUP_CZC_TMI,
++ ALC269_FIXUP_CZC_L101,
++ ALC269_FIXUP_LEMOTE_A1802,
++ ALC269_FIXUP_LEMOTE_A190X,
+ };
+
+ static const struct hda_fixup alc269_fixups[] = {
+@@ -7404,6 +7409,89 @@ static const struct hda_fixup alc269_fixups[] = {
+ .chained = true,
+ .chain_id = ALC285_FIXUP_HP_GPIO_LED
+ },
++ [ALC269_FIXUP_CZC_B20] = {
++ .type = HDA_FIXUP_PINS,
++ .v.pins = (const struct hda_pintbl[]) {
++ { 0x12, 0x411111f0 },
++ { 0x14, 0x90170110 }, /* speaker */
++ { 0x15, 0x032f1020 }, /* HP out */
++ { 0x17, 0x411111f0 },
++ { 0x18, 0x03ab1040 }, /* mic */
++ { 0x19, 0xb7a7013f },
++ { 0x1a, 0x0181305f },
++ { 0x1b, 0x411111f0 },
++ { 0x1d, 0x411111f0 },
++ { 0x1e, 0x411111f0 },
++ { }
++ },
++ .chain_id = ALC269_FIXUP_DMIC,
++ },
++ [ALC269_FIXUP_CZC_TMI] = {
++ .type = HDA_FIXUP_PINS,
++ .v.pins = (const struct hda_pintbl[]) {
++ { 0x12, 0x4000c000 },
++ { 0x14, 0x90170110 }, /* speaker */
++ { 0x15, 0x0421401f }, /* HP out */
++ { 0x17, 0x411111f0 },
++ { 0x18, 0x04a19020 }, /* mic */
++ { 0x19, 0x411111f0 },
++ { 0x1a, 0x411111f0 },
++ { 0x1b, 0x411111f0 },
++ { 0x1d, 0x40448505 },
++ { 0x1e, 0x411111f0 },
++ { 0x20, 0x8000ffff },
++ { }
++ },
++ .chain_id = ALC269_FIXUP_DMIC,
++ },
++ [ALC269_FIXUP_CZC_L101] = {
++ .type = HDA_FIXUP_PINS,
++ .v.pins = (const struct hda_pintbl[]) {
++ { 0x12, 0x40000000 },
++ { 0x14, 0x01014010 }, /* speaker */
++ { 0x15, 0x411111f0 }, /* HP out */
++ { 0x16, 0x411111f0 },
++ { 0x18, 0x01a19020 }, /* mic */
++ { 0x19, 0x02a19021 },
++ { 0x1a, 0x0181302f },
++ { 0x1b, 0x0221401f },
++ { 0x1c, 0x411111f0 },
++ { 0x1d, 0x4044c601 },
++ { 0x1e, 0x411111f0 },
++ { }
++ },
++ .chain_id = ALC269_FIXUP_DMIC,
++ },
++ [ALC269_FIXUP_LEMOTE_A1802] = {
++ .type = HDA_FIXUP_PINS,
++ .v.pins = (const struct hda_pintbl[]) {
++ { 0x12, 0x40000000 },
++ { 0x14, 0x90170110 }, /* speaker */
++ { 0x17, 0x411111f0 },
++ { 0x18, 0x03a19040 }, /* mic1 */
++ { 0x19, 0x90a70130 }, /* mic2 */
++ { 0x1a, 0x411111f0 },
++ { 0x1b, 0x411111f0 },
++ { 0x1d, 0x40489d2d },
++ { 0x1e, 0x411111f0 },
++ { 0x20, 0x0003ffff },
++ { 0x21, 0x03214020 },
++ { }
++ },
++ .chain_id = ALC269_FIXUP_DMIC,
++ },
++ [ALC269_FIXUP_LEMOTE_A190X] = {
++ .type = HDA_FIXUP_PINS,
++ .v.pins = (const struct hda_pintbl[]) {
++ { 0x14, 0x99130110 }, /* speaker */
++ { 0x15, 0x0121401f }, /* HP out */
++ { 0x18, 0x01a19c20 }, /* rear mic */
++ { 0x19, 0x99a3092f }, /* front mic */
++ { 0x1b, 0x0201401f }, /* front lineout */
++ { }
++ },
++ .chain_id = ALC269_FIXUP_DMIC,
++ },
+ };
+
+ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+@@ -7693,9 +7781,14 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x17aa, 0x3bf8, "Quanta FL1", ALC269_FIXUP_PCM_44K),
+ SND_PCI_QUIRK(0x17aa, 0x9e54, "LENOVO NB", ALC269_FIXUP_LENOVO_EAPD),
+ SND_PCI_QUIRK(0x19e5, 0x3204, "Huawei MACH-WX9", ALC256_FIXUP_HUAWEI_MACH_WX9_PINS),
++ SND_PCI_QUIRK(0x1b35, 0x1235, "CZC B20", ALC269_FIXUP_CZC_B20),
++ SND_PCI_QUIRK(0x1b35, 0x1236, "CZC TMI", ALC269_FIXUP_CZC_TMI),
++ SND_PCI_QUIRK(0x1b35, 0x1237, "CZC L101", ALC269_FIXUP_CZC_L101),
+ SND_PCI_QUIRK(0x1b7d, 0xa831, "Ordissimo EVE2 ", ALC269VB_FIXUP_ORDISSIMO_EVE2), /* Also known as Malata PC-B1303 */
+ SND_PCI_QUIRK(0x1d72, 0x1901, "RedmiBook 14", ALC256_FIXUP_ASUS_HEADSET_MIC),
+ SND_PCI_QUIRK(0x10ec, 0x118c, "Medion EE4254 MD62100", ALC256_FIXUP_MEDION_HEADSET_NO_PRESENCE),
++ SND_PCI_QUIRK(0x1c06, 0x2013, "Lemote A1802", ALC269_FIXUP_LEMOTE_A1802),
++ SND_PCI_QUIRK(0x1c06, 0x2015, "Lemote A190X", ALC269_FIXUP_LEMOTE_A190X),
+
+ #if 0
+ /* Below is a quirk table taken from the old code.
+@@ -8951,6 +9044,7 @@ enum {
+ ALC662_FIXUP_LED_GPIO1,
+ ALC662_FIXUP_IDEAPAD,
+ ALC272_FIXUP_MARIO,
++ ALC662_FIXUP_CZC_ET26,
+ ALC662_FIXUP_CZC_P10T,
+ ALC662_FIXUP_SKU_IGNORE,
+ ALC662_FIXUP_HP_RP5800,
+@@ -9020,6 +9114,25 @@ static const struct hda_fixup alc662_fixups[] = {
+ .type = HDA_FIXUP_FUNC,
+ .v.func = alc272_fixup_mario,
+ },
++ [ALC662_FIXUP_CZC_ET26] = {
++ .type = HDA_FIXUP_PINS,
++ .v.pins = (const struct hda_pintbl[]) {
++ {0x12, 0x403cc000},
++ {0x14, 0x90170110}, /* speaker */
++ {0x15, 0x411111f0},
++ {0x16, 0x411111f0},
++ {0x18, 0x01a19030}, /* mic */
++ {0x19, 0x90a7013f}, /* int-mic */
++ {0x1a, 0x01014020},
++ {0x1b, 0x0121401f},
++ {0x1c, 0x411111f0},
++ {0x1d, 0x411111f0},
++ {0x1e, 0x40478e35},
++ {}
++ },
++ .chained = true,
++ .chain_id = ALC662_FIXUP_SKU_IGNORE
++ },
+ [ALC662_FIXUP_CZC_P10T] = {
+ .type = HDA_FIXUP_VERBS,
+ .v.verbs = (const struct hda_verb[]) {
+@@ -9403,6 +9516,7 @@ static const struct snd_pci_quirk alc662_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x1849, 0x5892, "ASRock B150M", ALC892_FIXUP_ASROCK_MOBO),
+ SND_PCI_QUIRK(0x19da, 0xa130, "Zotac Z68", ALC662_FIXUP_ZOTAC_Z68),
+ SND_PCI_QUIRK(0x1b0a, 0x01b8, "ACER Veriton", ALC662_FIXUP_ACER_VERITON),
++ SND_PCI_QUIRK(0x1b35, 0x1234, "CZC ET26", ALC662_FIXUP_CZC_ET26),
+ SND_PCI_QUIRK(0x1b35, 0x2206, "CZC P10T", ALC662_FIXUP_CZC_P10T),
+ SND_PCI_QUIRK(0x1025, 0x0566, "Acer Aspire Ethos 8951G", ALC669_FIXUP_ACER_ASPIRE_ETHOS),
+
* [gentoo-commits] proj/linux-patches:5.8 commit in: /
@ 2020-08-19 9:16 Alice Ferrazzi
0 siblings, 0 replies; 30+ messages in thread
From: Alice Ferrazzi @ 2020-08-19 9:16 UTC (permalink / raw
To: gentoo-commits
commit: af2b20a2aff29338c1974510f50b2b97f1c7cead
Author: Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
AuthorDate: Wed Aug 19 09:15:23 2020 +0000
Commit: Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
CommitDate: Wed Aug 19 09:16:02 2020 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=af2b20a2
Linux patch 5.8.2
Signed-off-by: Alice Ferrazzi <alicef <AT> gentoo.org>
0000_README | 4 +
1001_linux-5.8.2.patch | 17484 +++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 17488 insertions(+)
diff --git a/0000_README b/0000_README
index 8e9f53b..2409f92 100644
--- a/0000_README
+++ b/0000_README
@@ -47,6 +47,10 @@ Patch: 1000_linux-5.8.1.patch
From: http://www.kernel.org
Desc: Linux 5.8.1
+Patch: 1001_linux-5.8.2.patch
+From: http://www.kernel.org
+Desc: Linux 5.8.2
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1001_linux-5.8.2.patch b/1001_linux-5.8.2.patch
new file mode 100644
index 0000000..e74313e
--- /dev/null
+++ b/1001_linux-5.8.2.patch
@@ -0,0 +1,17484 @@
+diff --git a/Documentation/ABI/testing/sysfs-bus-iio b/Documentation/ABI/testing/sysfs-bus-iio
+index d3e53a6d8331..5c62bfb0f3f5 100644
+--- a/Documentation/ABI/testing/sysfs-bus-iio
++++ b/Documentation/ABI/testing/sysfs-bus-iio
+@@ -1569,7 +1569,8 @@ What: /sys/bus/iio/devices/iio:deviceX/in_concentrationX_voc_raw
+ KernelVersion: 4.3
+ Contact: linux-iio@vger.kernel.org
+ Description:
+- Raw (unscaled no offset etc.) percentage reading of a substance.
++ Raw (unscaled no offset etc.) reading of a substance. Units
++ after application of scale and offset are percents.
+
+ What: /sys/bus/iio/devices/iio:deviceX/in_resistance_raw
+ What: /sys/bus/iio/devices/iio:deviceX/in_resistanceX_raw
+diff --git a/Documentation/core-api/cpu_hotplug.rst b/Documentation/core-api/cpu_hotplug.rst
+index 4a50ab7817f7..b1ae1ac159cf 100644
+--- a/Documentation/core-api/cpu_hotplug.rst
++++ b/Documentation/core-api/cpu_hotplug.rst
+@@ -50,13 +50,6 @@ Command Line Switches
+
+ This option is limited to the X86 and S390 architecture.
+
+-``cede_offline={"off","on"}``
+- Use this option to disable/enable putting offlined processors to an extended
+- ``H_CEDE`` state on supported pseries platforms. If nothing is specified,
+- ``cede_offline`` is set to "on".
+-
+- This option is limited to the PowerPC architecture.
+-
+ ``cpu0_hotplug``
+ Allow to shutdown CPU0.
+
+diff --git a/Documentation/devicetree/bindings/phy/socionext,uniphier-usb3hs-phy.yaml b/Documentation/devicetree/bindings/phy/socionext,uniphier-usb3hs-phy.yaml
+index f88d36207b87..c871d462c952 100644
+--- a/Documentation/devicetree/bindings/phy/socionext,uniphier-usb3hs-phy.yaml
++++ b/Documentation/devicetree/bindings/phy/socionext,uniphier-usb3hs-phy.yaml
+@@ -31,12 +31,16 @@ properties:
+
+ clocks:
+ minItems: 1
+- maxItems: 2
++ maxItems: 3
+
+ clock-names:
+ oneOf:
+ - const: link # for PXs2
+- - items: # for PXs3
++ - items: # for PXs3 with phy-ext
++ - const: link
++ - const: phy
++ - const: phy-ext
++ - items: # for others
+ - const: link
+ - const: phy
+
+diff --git a/Makefile b/Makefile
+index 7932464518f1..6940f82a15cc 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 8
+-SUBLEVEL = 1
++SUBLEVEL = 2
+ EXTRAVERSION =
+ NAME = Kleptomaniac Octopus
+
+diff --git a/arch/arm/boot/dts/at91-sama5d3_xplained.dts b/arch/arm/boot/dts/at91-sama5d3_xplained.dts
+index 61f068a7b362..7abf555cd2fe 100644
+--- a/arch/arm/boot/dts/at91-sama5d3_xplained.dts
++++ b/arch/arm/boot/dts/at91-sama5d3_xplained.dts
+@@ -128,7 +128,7 @@
+ };
+
+ macb0: ethernet@f0028000 {
+- phy-mode = "rgmii";
++ phy-mode = "rgmii-rxid";
+ #address-cells = <1>;
+ #size-cells = <0>;
+ status = "okay";
+diff --git a/arch/arm/boot/dts/exynos5422-odroid-core.dtsi b/arch/arm/boot/dts/exynos5422-odroid-core.dtsi
+index ab27ff8bc3dc..afe090578e8f 100644
+--- a/arch/arm/boot/dts/exynos5422-odroid-core.dtsi
++++ b/arch/arm/boot/dts/exynos5422-odroid-core.dtsi
+@@ -411,12 +411,6 @@
+ status = "okay";
+ };
+
+-&bus_fsys {
+- operating-points-v2 = <&bus_fsys2_opp_table>;
+- devfreq = <&bus_wcore>;
+- status = "okay";
+-};
+-
+ &bus_fsys2 {
+ operating-points-v2 = <&bus_fsys2_opp_table>;
+ devfreq = <&bus_wcore>;
+diff --git a/arch/arm/boot/dts/exynos5800.dtsi b/arch/arm/boot/dts/exynos5800.dtsi
+index dfb99ab53c3e..526729dad53f 100644
+--- a/arch/arm/boot/dts/exynos5800.dtsi
++++ b/arch/arm/boot/dts/exynos5800.dtsi
+@@ -23,17 +23,17 @@
+ &cluster_a15_opp_table {
+ opp-2000000000 {
+ opp-hz = /bits/ 64 <2000000000>;
+- opp-microvolt = <1312500>;
++ opp-microvolt = <1312500 1312500 1500000>;
+ clock-latency-ns = <140000>;
+ };
+ opp-1900000000 {
+ opp-hz = /bits/ 64 <1900000000>;
+- opp-microvolt = <1262500>;
++ opp-microvolt = <1262500 1262500 1500000>;
+ clock-latency-ns = <140000>;
+ };
+ opp-1800000000 {
+ opp-hz = /bits/ 64 <1800000000>;
+- opp-microvolt = <1237500>;
++ opp-microvolt = <1237500 1237500 1500000>;
+ clock-latency-ns = <140000>;
+ };
+ opp-1700000000 {
+diff --git a/arch/arm/boot/dts/r8a7793-gose.dts b/arch/arm/boot/dts/r8a7793-gose.dts
+index 79baf06019f5..10c3536b8e3d 100644
+--- a/arch/arm/boot/dts/r8a7793-gose.dts
++++ b/arch/arm/boot/dts/r8a7793-gose.dts
+@@ -336,7 +336,7 @@
+ reg = <0x20>;
+ remote = <&vin1>;
+
+- port {
++ ports {
+ #address-cells = <1>;
+ #size-cells = <0>;
+
+@@ -394,7 +394,7 @@
+ interrupts = <2 IRQ_TYPE_LEVEL_LOW>;
+ default-input = <0>;
+
+- port {
++ ports {
+ #address-cells = <1>;
+ #size-cells = <0>;
+
+diff --git a/arch/arm/boot/dts/stm32mp15-pinctrl.dtsi b/arch/arm/boot/dts/stm32mp15-pinctrl.dtsi
+index 7eb858732d6d..cc505458da2f 100644
+--- a/arch/arm/boot/dts/stm32mp15-pinctrl.dtsi
++++ b/arch/arm/boot/dts/stm32mp15-pinctrl.dtsi
+@@ -1574,143 +1574,157 @@
+ };
+ };
+
+- usart2_pins_a: usart2-0 {
++ uart4_pins_a: uart4-0 {
+ pins1 {
+- pinmux = <STM32_PINMUX('F', 5, AF7)>, /* USART2_TX */
+- <STM32_PINMUX('D', 4, AF7)>; /* USART2_RTS */
++ pinmux = <STM32_PINMUX('G', 11, AF6)>; /* UART4_TX */
+ bias-disable;
+ drive-push-pull;
+ slew-rate = <0>;
+ };
+ pins2 {
+- pinmux = <STM32_PINMUX('D', 6, AF7)>, /* USART2_RX */
+- <STM32_PINMUX('D', 3, AF7)>; /* USART2_CTS_NSS */
++ pinmux = <STM32_PINMUX('B', 2, AF8)>; /* UART4_RX */
+ bias-disable;
+ };
+ };
+
+- usart2_sleep_pins_a: usart2-sleep-0 {
+- pins {
+- pinmux = <STM32_PINMUX('F', 5, ANALOG)>, /* USART2_TX */
+- <STM32_PINMUX('D', 4, ANALOG)>, /* USART2_RTS */
+- <STM32_PINMUX('D', 6, ANALOG)>, /* USART2_RX */
+- <STM32_PINMUX('D', 3, ANALOG)>; /* USART2_CTS_NSS */
+- };
+- };
+-
+- usart2_pins_b: usart2-1 {
++ uart4_pins_b: uart4-1 {
+ pins1 {
+- pinmux = <STM32_PINMUX('F', 5, AF7)>, /* USART2_TX */
+- <STM32_PINMUX('A', 1, AF7)>; /* USART2_RTS */
++ pinmux = <STM32_PINMUX('D', 1, AF8)>; /* UART4_TX */
+ bias-disable;
+ drive-push-pull;
+ slew-rate = <0>;
+ };
+ pins2 {
+- pinmux = <STM32_PINMUX('F', 4, AF7)>, /* USART2_RX */
+- <STM32_PINMUX('E', 15, AF7)>; /* USART2_CTS_NSS */
++ pinmux = <STM32_PINMUX('B', 2, AF8)>; /* UART4_RX */
+ bias-disable;
+ };
+ };
+
+- usart2_sleep_pins_b: usart2-sleep-1 {
+- pins {
+- pinmux = <STM32_PINMUX('F', 5, ANALOG)>, /* USART2_TX */
+- <STM32_PINMUX('A', 1, ANALOG)>, /* USART2_RTS */
+- <STM32_PINMUX('F', 4, ANALOG)>, /* USART2_RX */
+- <STM32_PINMUX('E', 15, ANALOG)>; /* USART2_CTS_NSS */
++ uart4_pins_c: uart4-2 {
++ pins1 {
++ pinmux = <STM32_PINMUX('G', 11, AF6)>; /* UART4_TX */
++ bias-disable;
++ drive-push-pull;
++ slew-rate = <0>;
++ };
++ pins2 {
++ pinmux = <STM32_PINMUX('B', 2, AF8)>; /* UART4_RX */
++ bias-disable;
+ };
+ };
+
+- usart3_pins_a: usart3-0 {
++ uart7_pins_a: uart7-0 {
+ pins1 {
+- pinmux = <STM32_PINMUX('B', 10, AF7)>; /* USART3_TX */
++ pinmux = <STM32_PINMUX('E', 8, AF7)>; /* UART7_TX */
+ bias-disable;
+ drive-push-pull;
+ slew-rate = <0>;
+ };
+ pins2 {
+- pinmux = <STM32_PINMUX('B', 12, AF8)>; /* USART3_RX */
++ pinmux = <STM32_PINMUX('E', 7, AF7)>, /* UART7_RX */
++ <STM32_PINMUX('E', 10, AF7)>, /* UART7_CTS */
++ <STM32_PINMUX('E', 9, AF7)>; /* UART7_RTS */
+ bias-disable;
+ };
+ };
+
+- uart4_pins_a: uart4-0 {
++ uart7_pins_b: uart7-1 {
+ pins1 {
+- pinmux = <STM32_PINMUX('G', 11, AF6)>; /* UART4_TX */
++ pinmux = <STM32_PINMUX('F', 7, AF7)>; /* UART7_TX */
+ bias-disable;
+ drive-push-pull;
+ slew-rate = <0>;
+ };
+ pins2 {
+- pinmux = <STM32_PINMUX('B', 2, AF8)>; /* UART4_RX */
++ pinmux = <STM32_PINMUX('F', 6, AF7)>; /* UART7_RX */
+ bias-disable;
+ };
+ };
+
+- uart4_pins_b: uart4-1 {
++ uart8_pins_a: uart8-0 {
+ pins1 {
+- pinmux = <STM32_PINMUX('D', 1, AF8)>; /* UART4_TX */
++ pinmux = <STM32_PINMUX('E', 1, AF8)>; /* UART8_TX */
+ bias-disable;
+ drive-push-pull;
+ slew-rate = <0>;
+ };
+ pins2 {
+- pinmux = <STM32_PINMUX('B', 2, AF8)>; /* UART4_RX */
++ pinmux = <STM32_PINMUX('E', 0, AF8)>; /* UART8_RX */
+ bias-disable;
+ };
+ };
+
+- uart4_pins_c: uart4-2 {
+- pins1 {
+- pinmux = <STM32_PINMUX('G', 11, AF6)>; /* UART4_TX */
++ spi4_pins_a: spi4-0 {
++ pins {
++ pinmux = <STM32_PINMUX('E', 12, AF5)>, /* SPI4_SCK */
++ <STM32_PINMUX('E', 6, AF5)>; /* SPI4_MOSI */
+ bias-disable;
+ drive-push-pull;
+- slew-rate = <0>;
++ slew-rate = <1>;
+ };
+ pins2 {
+- pinmux = <STM32_PINMUX('B', 2, AF8)>; /* UART4_RX */
++ pinmux = <STM32_PINMUX('E', 13, AF5)>; /* SPI4_MISO */
+ bias-disable;
+ };
+ };
+
+- uart7_pins_a: uart7-0 {
++ usart2_pins_a: usart2-0 {
+ pins1 {
+- pinmux = <STM32_PINMUX('E', 8, AF7)>; /* UART4_TX */
++ pinmux = <STM32_PINMUX('F', 5, AF7)>, /* USART2_TX */
++ <STM32_PINMUX('D', 4, AF7)>; /* USART2_RTS */
+ bias-disable;
+ drive-push-pull;
+ slew-rate = <0>;
+ };
+ pins2 {
+- pinmux = <STM32_PINMUX('E', 7, AF7)>, /* UART4_RX */
+- <STM32_PINMUX('E', 10, AF7)>, /* UART4_CTS */
+- <STM32_PINMUX('E', 9, AF7)>; /* UART4_RTS */
++ pinmux = <STM32_PINMUX('D', 6, AF7)>, /* USART2_RX */
++ <STM32_PINMUX('D', 3, AF7)>; /* USART2_CTS_NSS */
+ bias-disable;
+ };
+ };
+
+- uart7_pins_b: uart7-1 {
++ usart2_sleep_pins_a: usart2-sleep-0 {
++ pins {
++ pinmux = <STM32_PINMUX('F', 5, ANALOG)>, /* USART2_TX */
++ <STM32_PINMUX('D', 4, ANALOG)>, /* USART2_RTS */
++ <STM32_PINMUX('D', 6, ANALOG)>, /* USART2_RX */
++ <STM32_PINMUX('D', 3, ANALOG)>; /* USART2_CTS_NSS */
++ };
++ };
++
++ usart2_pins_b: usart2-1 {
+ pins1 {
+- pinmux = <STM32_PINMUX('F', 7, AF7)>; /* UART7_TX */
++ pinmux = <STM32_PINMUX('F', 5, AF7)>, /* USART2_TX */
++ <STM32_PINMUX('A', 1, AF7)>; /* USART2_RTS */
+ bias-disable;
+ drive-push-pull;
+ slew-rate = <0>;
+ };
+ pins2 {
+- pinmux = <STM32_PINMUX('F', 6, AF7)>; /* UART7_RX */
++ pinmux = <STM32_PINMUX('F', 4, AF7)>, /* USART2_RX */
++ <STM32_PINMUX('E', 15, AF7)>; /* USART2_CTS_NSS */
+ bias-disable;
+ };
+ };
+
+- uart8_pins_a: uart8-0 {
++ usart2_sleep_pins_b: usart2-sleep-1 {
++ pins {
++ pinmux = <STM32_PINMUX('F', 5, ANALOG)>, /* USART2_TX */
++ <STM32_PINMUX('A', 1, ANALOG)>, /* USART2_RTS */
++ <STM32_PINMUX('F', 4, ANALOG)>, /* USART2_RX */
++ <STM32_PINMUX('E', 15, ANALOG)>; /* USART2_CTS_NSS */
++ };
++ };
++
++ usart3_pins_a: usart3-0 {
+ pins1 {
+- pinmux = <STM32_PINMUX('E', 1, AF8)>; /* UART8_TX */
++ pinmux = <STM32_PINMUX('B', 10, AF7)>; /* USART3_TX */
+ bias-disable;
+ drive-push-pull;
+ slew-rate = <0>;
+ };
+ pins2 {
+- pinmux = <STM32_PINMUX('E', 0, AF8)>; /* UART8_RX */
++ pinmux = <STM32_PINMUX('B', 12, AF8)>; /* USART3_RX */
+ bias-disable;
+ };
+ };
+@@ -1776,18 +1790,4 @@
+ bias-disable;
+ };
+ };
+-
+- spi4_pins_a: spi4-0 {
+- pins {
+- pinmux = <STM32_PINMUX('E', 12, AF5)>, /* SPI4_SCK */
+- <STM32_PINMUX('E', 6, AF5)>; /* SPI4_MOSI */
+- bias-disable;
+- drive-push-pull;
+- slew-rate = <1>;
+- };
+- pins2 {
+- pinmux = <STM32_PINMUX('E', 13, AF5)>; /* SPI4_MISO */
+- bias-disable;
+- };
+- };
+ };
+diff --git a/arch/arm/boot/dts/sunxi-bananapi-m2-plus-v1.2.dtsi b/arch/arm/boot/dts/sunxi-bananapi-m2-plus-v1.2.dtsi
+index 22466afd38a3..235994a4a2eb 100644
+--- a/arch/arm/boot/dts/sunxi-bananapi-m2-plus-v1.2.dtsi
++++ b/arch/arm/boot/dts/sunxi-bananapi-m2-plus-v1.2.dtsi
+@@ -16,15 +16,27 @@
+ regulator-type = "voltage";
+ regulator-boot-on;
+ regulator-always-on;
+- regulator-min-microvolt = <1100000>;
+- regulator-max-microvolt = <1300000>;
++ regulator-min-microvolt = <1108475>;
++ regulator-max-microvolt = <1308475>;
+ regulator-ramp-delay = <50>; /* 4ms */
+ gpios = <&r_pio 0 1 GPIO_ACTIVE_HIGH>; /* PL1 */
+ gpios-states = <0x1>;
+- states = <1100000 0>, <1300000 1>;
++ states = <1108475 0>, <1308475 1>;
+ };
+ };
+
+ &cpu0 {
+ cpu-supply = <&reg_vdd_cpux>;
+ };
++
++&cpu1 {
++ cpu-supply = <&reg_vdd_cpux>;
++};
++
++&cpu2 {
++ cpu-supply = <&reg_vdd_cpux>;
++};
++
++&cpu3 {
++ cpu-supply = <&reg_vdd_cpux>;
++};
+diff --git a/arch/arm/kernel/stacktrace.c b/arch/arm/kernel/stacktrace.c
+index cc726afea023..76ea4178a55c 100644
+--- a/arch/arm/kernel/stacktrace.c
++++ b/arch/arm/kernel/stacktrace.c
+@@ -22,6 +22,19 @@
+ * A simple function epilogue looks like this:
+ * ldm sp, {fp, sp, pc}
+ *
++ * When compiled with clang, pc and sp are not pushed. A simple function
++ * prologue looks like this when built with clang:
++ *
++ * stmdb {..., fp, lr}
++ * add fp, sp, #x
++ * sub sp, sp, #y
++ *
++ * A simple function epilogue looks like this when built with clang:
++ *
++ * sub sp, fp, #x
++ * ldm {..., fp, pc}
++ *
++ *
+ * Note that with framepointer enabled, even the leaf functions have the same
+ * prologue and epilogue, therefore we can ignore the LR value in this case.
+ */
+@@ -34,6 +47,16 @@ int notrace unwind_frame(struct stackframe *frame)
+ low = frame->sp;
+ high = ALIGN(low, THREAD_SIZE);
+
++#ifdef CONFIG_CC_IS_CLANG
++ /* check current frame pointer is within bounds */
++ if (fp < low + 4 || fp > high - 4)
++ return -EINVAL;
++
++ frame->sp = frame->fp;
++ frame->fp = *(unsigned long *)(fp);
++ frame->pc = frame->lr;
++ frame->lr = *(unsigned long *)(fp + 4);
++#else
+ /* check current frame pointer is within bounds */
+ if (fp < low + 12 || fp > high - 4)
+ return -EINVAL;
+@@ -42,6 +65,7 @@ int notrace unwind_frame(struct stackframe *frame)
+ frame->fp = *(unsigned long *)(fp - 12);
+ frame->sp = *(unsigned long *)(fp - 8);
+ frame->pc = *(unsigned long *)(fp - 4);
++#endif
+
+ return 0;
+ }
+diff --git a/arch/arm/mach-at91/pm.c b/arch/arm/mach-at91/pm.c
+index 074bde64064e..2aab043441e8 100644
+--- a/arch/arm/mach-at91/pm.c
++++ b/arch/arm/mach-at91/pm.c
+@@ -592,13 +592,13 @@ static void __init at91_pm_sram_init(void)
+ sram_pool = gen_pool_get(&pdev->dev, NULL);
+ if (!sram_pool) {
+ pr_warn("%s: sram pool unavailable!\n", __func__);
+- return;
++ goto out_put_device;
+ }
+
+ sram_base = gen_pool_alloc(sram_pool, at91_pm_suspend_in_sram_sz);
+ if (!sram_base) {
+ pr_warn("%s: unable to alloc sram!\n", __func__);
+- return;
++ goto out_put_device;
+ }
+
+ sram_pbase = gen_pool_virt_to_phys(sram_pool, sram_base);
+@@ -606,12 +606,17 @@ static void __init at91_pm_sram_init(void)
+ at91_pm_suspend_in_sram_sz, false);
+ if (!at91_suspend_sram_fn) {
+ pr_warn("SRAM: Could not map\n");
+- return;
++ goto out_put_device;
+ }
+
+ /* Copy the pm suspend handler to SRAM */
+ at91_suspend_sram_fn = fncpy(at91_suspend_sram_fn,
+ &at91_pm_suspend_in_sram, at91_pm_suspend_in_sram_sz);
++ return;
++
++out_put_device:
++ put_device(&pdev->dev);
++ return;
+ }
+
+ static bool __init at91_is_pm_mode_active(int pm_mode)
+diff --git a/arch/arm/mach-exynos/exynos.c b/arch/arm/mach-exynos/exynos.c
+index 7a8d1555db40..36c37444485a 100644
+--- a/arch/arm/mach-exynos/exynos.c
++++ b/arch/arm/mach-exynos/exynos.c
+@@ -193,7 +193,7 @@ static void __init exynos_dt_fixup(void)
+ }
+
+ DT_MACHINE_START(EXYNOS_DT, "Samsung Exynos (Flattened Device Tree)")
+- .l2c_aux_val = 0x3c400001,
++ .l2c_aux_val = 0x3c400000,
+ .l2c_aux_mask = 0xc20fffff,
+ .smp = smp_ops(exynos_smp_ops),
+ .map_io = exynos_init_io,
+diff --git a/arch/arm/mach-exynos/mcpm-exynos.c b/arch/arm/mach-exynos/mcpm-exynos.c
+index 9a681b421ae1..cd861c57d5ad 100644
+--- a/arch/arm/mach-exynos/mcpm-exynos.c
++++ b/arch/arm/mach-exynos/mcpm-exynos.c
+@@ -26,6 +26,7 @@
+ #define EXYNOS5420_USE_L2_COMMON_UP_STATE BIT(30)
+
+ static void __iomem *ns_sram_base_addr __ro_after_init;
++static bool secure_firmware __ro_after_init;
+
+ /*
+ * The common v7_exit_coherency_flush API could not be used because of the
+@@ -58,15 +59,16 @@ static void __iomem *ns_sram_base_addr __ro_after_init;
+ static int exynos_cpu_powerup(unsigned int cpu, unsigned int cluster)
+ {
+ unsigned int cpunr = cpu + (cluster * EXYNOS5420_CPUS_PER_CLUSTER);
++ bool state;
+
+ pr_debug("%s: cpu %u cluster %u\n", __func__, cpu, cluster);
+ if (cpu >= EXYNOS5420_CPUS_PER_CLUSTER ||
+ cluster >= EXYNOS5420_NR_CLUSTERS)
+ return -EINVAL;
+
+- if (!exynos_cpu_power_state(cpunr)) {
+- exynos_cpu_power_up(cpunr);
+-
++ state = exynos_cpu_power_state(cpunr);
++ exynos_cpu_power_up(cpunr);
++ if (!state && secure_firmware) {
+ /*
+ * This assumes the cluster number of the big cores(Cortex A15)
+ * is 0 and the Little cores(Cortex A7) is 1.
+@@ -258,6 +260,8 @@ static int __init exynos_mcpm_init(void)
+ return -ENOMEM;
+ }
+
++ secure_firmware = exynos_secure_firmware_available();
++
+ /*
+ * To increase the stability of KFC reset we need to program
+ * the PMU SPARE3 register
+diff --git a/arch/arm/mach-socfpga/pm.c b/arch/arm/mach-socfpga/pm.c
+index 6ed887cf8dc9..365c0428b21b 100644
+--- a/arch/arm/mach-socfpga/pm.c
++++ b/arch/arm/mach-socfpga/pm.c
+@@ -49,14 +49,14 @@ static int socfpga_setup_ocram_self_refresh(void)
+ if (!ocram_pool) {
+ pr_warn("%s: ocram pool unavailable!\n", __func__);
+ ret = -ENODEV;
+- goto put_node;
++ goto put_device;
+ }
+
+ ocram_base = gen_pool_alloc(ocram_pool, socfpga_sdram_self_refresh_sz);
+ if (!ocram_base) {
+ pr_warn("%s: unable to alloc ocram!\n", __func__);
+ ret = -ENOMEM;
+- goto put_node;
++ goto put_device;
+ }
+
+ ocram_pbase = gen_pool_virt_to_phys(ocram_pool, ocram_base);
+@@ -67,7 +67,7 @@ static int socfpga_setup_ocram_self_refresh(void)
+ if (!suspend_ocram_base) {
+ pr_warn("%s: __arm_ioremap_exec failed!\n", __func__);
+ ret = -ENOMEM;
+- goto put_node;
++ goto put_device;
+ }
+
+ /* Copy the code that puts DDR in self refresh to ocram */
+@@ -81,6 +81,8 @@ static int socfpga_setup_ocram_self_refresh(void)
+ if (!socfpga_sdram_self_refresh_in_ocram)
+ ret = -EFAULT;
+
++put_device:
++ put_device(&pdev->dev);
+ put_node:
+ of_node_put(np);
+
+diff --git a/arch/arm64/boot/dts/allwinner/sun50i-a64-pinephone.dtsi b/arch/arm64/boot/dts/allwinner/sun50i-a64-pinephone.dtsi
+index cefda145c3c9..342733a20c33 100644
+--- a/arch/arm64/boot/dts/allwinner/sun50i-a64-pinephone.dtsi
++++ b/arch/arm64/boot/dts/allwinner/sun50i-a64-pinephone.dtsi
+@@ -279,7 +279,7 @@
+
+ &reg_dldo4 {
+ regulator-min-microvolt = <1800000>;
+- regulator-max-microvolt = <3300000>;
++ regulator-max-microvolt = <1800000>;
+ regulator-name = "vcc-wifi-io";
+ };
+
+diff --git a/arch/arm64/boot/dts/amlogic/meson-g12b-w400.dtsi b/arch/arm64/boot/dts/amlogic/meson-g12b-w400.dtsi
+index 98b70d216a6f..2802ddbb83ac 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-g12b-w400.dtsi
++++ b/arch/arm64/boot/dts/amlogic/meson-g12b-w400.dtsi
+@@ -336,9 +336,11 @@
+
+ bus-width = <4>;
+ cap-sd-highspeed;
+- sd-uhs-sdr50;
+ max-frequency = <100000000>;
+
++ /* WiFi firmware requires power to be kept while in suspend */
++ keep-power-in-suspend;
++
+ non-removable;
+ disable-wp;
+
+@@ -398,7 +400,7 @@
+ shutdown-gpios = <&gpio GPIOX_17 GPIO_ACTIVE_HIGH>;
+ max-speed = <2000000>;
+ clocks = <&wifi32k>;
+- clock-names = "lpo";
++ clock-names = "lpo";
+ };
+ };
+
+diff --git a/arch/arm64/boot/dts/amlogic/meson-khadas-vim3.dtsi b/arch/arm64/boot/dts/amlogic/meson-khadas-vim3.dtsi
+index 1ef1e3672b96..ff5ba85b7562 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-khadas-vim3.dtsi
++++ b/arch/arm64/boot/dts/amlogic/meson-khadas-vim3.dtsi
+@@ -270,7 +270,6 @@
+
+ bus-width = <4>;
+ cap-sd-highspeed;
+- sd-uhs-sdr50;
+ max-frequency = <100000000>;
+
+ non-removable;
+diff --git a/arch/arm64/boot/dts/amlogic/meson-sm1-khadas-vim3l.dts b/arch/arm64/boot/dts/amlogic/meson-sm1-khadas-vim3l.dts
+index dbbf29a0dbf6..026b21708b07 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-sm1-khadas-vim3l.dts
++++ b/arch/arm64/boot/dts/amlogic/meson-sm1-khadas-vim3l.dts
+@@ -88,6 +88,10 @@
+ status = "okay";
+ };
+
++&sd_emmc_a {
++ sd-uhs-sdr50;
++};
++
+ &usb {
+ phys = <&usb2_phy0>, <&usb2_phy1>;
+ phy-names = "usb2-phy0", "usb2-phy1";
+diff --git a/arch/arm64/boot/dts/exynos/exynos7-espresso.dts b/arch/arm64/boot/dts/exynos/exynos7-espresso.dts
+index 7af288fa9475..a9412805c1d6 100644
+--- a/arch/arm64/boot/dts/exynos/exynos7-espresso.dts
++++ b/arch/arm64/boot/dts/exynos/exynos7-espresso.dts
+@@ -157,6 +157,7 @@
+ regulator-min-microvolt = <700000>;
+ regulator-max-microvolt = <1150000>;
+ regulator-enable-ramp-delay = <125>;
++ regulator-always-on;
+ };
+
+ ldo8_reg: LDO8 {
+diff --git a/arch/arm64/boot/dts/hisilicon/hi3660-hikey960.dts b/arch/arm64/boot/dts/hisilicon/hi3660-hikey960.dts
+index e035cf195b19..8c4bfbaf3a80 100644
+--- a/arch/arm64/boot/dts/hisilicon/hi3660-hikey960.dts
++++ b/arch/arm64/boot/dts/hisilicon/hi3660-hikey960.dts
+@@ -530,6 +530,17 @@
+ status = "ok";
+ compatible = "adi,adv7533";
+ reg = <0x39>;
++ adi,dsi-lanes = <4>;
++ ports {
++ #address-cells = <1>;
++ #size-cells = <0>;
++ port@0 {
++ reg = <0>;
++ };
++ port@1 {
++ reg = <1>;
++ };
++ };
+ };
+ };
+
+diff --git a/arch/arm64/boot/dts/hisilicon/hi6220-hikey.dts b/arch/arm64/boot/dts/hisilicon/hi6220-hikey.dts
+index c14205cd6bf5..3e47150c05ec 100644
+--- a/arch/arm64/boot/dts/hisilicon/hi6220-hikey.dts
++++ b/arch/arm64/boot/dts/hisilicon/hi6220-hikey.dts
+@@ -516,7 +516,7 @@
+ reg = <0x39>;
+ interrupt-parent = <&gpio1>;
+ interrupts = <1 2>;
+- pd-gpio = <&gpio0 4 0>;
++ pd-gpios = <&gpio0 4 0>;
+ adi,dsi-lanes = <4>;
+ #sound-dai-cells = <0>;
+
+diff --git a/arch/arm64/boot/dts/qcom/msm8916-pins.dtsi b/arch/arm64/boot/dts/qcom/msm8916-pins.dtsi
+index e9c00367f7fd..5785bf0a807c 100644
+--- a/arch/arm64/boot/dts/qcom/msm8916-pins.dtsi
++++ b/arch/arm64/boot/dts/qcom/msm8916-pins.dtsi
+@@ -556,7 +556,7 @@
+ pins = "gpio63", "gpio64", "gpio65", "gpio66",
+ "gpio67", "gpio68";
+ drive-strength = <8>;
+- bias-pull-none;
++ bias-disable;
+ };
+ };
+ cdc_pdm_lines_sus: pdm-lines-off {
+@@ -585,7 +585,7 @@
+ pins = "gpio113", "gpio114", "gpio115",
+ "gpio116";
+ drive-strength = <8>;
+- bias-pull-none;
++ bias-disable;
+ };
+ };
+
+@@ -613,7 +613,7 @@
+ pinconf {
+ pins = "gpio110";
+ drive-strength = <8>;
+- bias-pull-none;
++ bias-disable;
+ };
+ };
+
+@@ -639,7 +639,7 @@
+ pinconf {
+ pins = "gpio116";
+ drive-strength = <8>;
+- bias-pull-none;
++ bias-disable;
+ };
+ };
+ ext_mclk_tlmm_lines_sus: mclk-lines-off {
+@@ -667,7 +667,7 @@
+ pins = "gpio112", "gpio117", "gpio118",
+ "gpio119";
+ drive-strength = <8>;
+- bias-pull-none;
++ bias-disable;
+ };
+ };
+ ext_sec_tlmm_lines_sus: tlmm-lines-off {
+diff --git a/arch/arm64/boot/dts/renesas/r8a774a1.dtsi b/arch/arm64/boot/dts/renesas/r8a774a1.dtsi
+index a603d947970e..16b059d7fd01 100644
+--- a/arch/arm64/boot/dts/renesas/r8a774a1.dtsi
++++ b/arch/arm64/boot/dts/renesas/r8a774a1.dtsi
+@@ -2250,7 +2250,7 @@
+ status = "disabled";
+ };
+
+- sdhi0: sd@ee100000 {
++ sdhi0: mmc@ee100000 {
+ compatible = "renesas,sdhi-r8a774a1",
+ "renesas,rcar-gen3-sdhi";
+ reg = <0 0xee100000 0 0x2000>;
+@@ -2262,7 +2262,7 @@
+ status = "disabled";
+ };
+
+- sdhi1: sd@ee120000 {
++ sdhi1: mmc@ee120000 {
+ compatible = "renesas,sdhi-r8a774a1",
+ "renesas,rcar-gen3-sdhi";
+ reg = <0 0xee120000 0 0x2000>;
+@@ -2274,7 +2274,7 @@
+ status = "disabled";
+ };
+
+- sdhi2: sd@ee140000 {
++ sdhi2: mmc@ee140000 {
+ compatible = "renesas,sdhi-r8a774a1",
+ "renesas,rcar-gen3-sdhi";
+ reg = <0 0xee140000 0 0x2000>;
+@@ -2286,7 +2286,7 @@
+ status = "disabled";
+ };
+
+- sdhi3: sd@ee160000 {
++ sdhi3: mmc@ee160000 {
+ compatible = "renesas,sdhi-r8a774a1",
+ "renesas,rcar-gen3-sdhi";
+ reg = <0 0xee160000 0 0x2000>;
+diff --git a/arch/arm64/boot/dts/renesas/r8a774b1.dtsi b/arch/arm64/boot/dts/renesas/r8a774b1.dtsi
+index 1e51855c7cd3..6db8b6a4d191 100644
+--- a/arch/arm64/boot/dts/renesas/r8a774b1.dtsi
++++ b/arch/arm64/boot/dts/renesas/r8a774b1.dtsi
+@@ -2108,7 +2108,7 @@
+ status = "disabled";
+ };
+
+- sdhi0: sd@ee100000 {
++ sdhi0: mmc@ee100000 {
+ compatible = "renesas,sdhi-r8a774b1",
+ "renesas,rcar-gen3-sdhi";
+ reg = <0 0xee100000 0 0x2000>;
+@@ -2120,7 +2120,7 @@
+ status = "disabled";
+ };
+
+- sdhi1: sd@ee120000 {
++ sdhi1: mmc@ee120000 {
+ compatible = "renesas,sdhi-r8a774b1",
+ "renesas,rcar-gen3-sdhi";
+ reg = <0 0xee120000 0 0x2000>;
+@@ -2132,7 +2132,7 @@
+ status = "disabled";
+ };
+
+- sdhi2: sd@ee140000 {
++ sdhi2: mmc@ee140000 {
+ compatible = "renesas,sdhi-r8a774b1",
+ "renesas,rcar-gen3-sdhi";
+ reg = <0 0xee140000 0 0x2000>;
+@@ -2144,7 +2144,7 @@
+ status = "disabled";
+ };
+
+- sdhi3: sd@ee160000 {
++ sdhi3: mmc@ee160000 {
+ compatible = "renesas,sdhi-r8a774b1",
+ "renesas,rcar-gen3-sdhi";
+ reg = <0 0xee160000 0 0x2000>;
+diff --git a/arch/arm64/boot/dts/renesas/r8a774c0.dtsi b/arch/arm64/boot/dts/renesas/r8a774c0.dtsi
+index 5c72a7efbb03..42171190cce4 100644
+--- a/arch/arm64/boot/dts/renesas/r8a774c0.dtsi
++++ b/arch/arm64/boot/dts/renesas/r8a774c0.dtsi
+@@ -1618,7 +1618,7 @@
+ status = "disabled";
+ };
+
+- sdhi0: sd@ee100000 {
++ sdhi0: mmc@ee100000 {
+ compatible = "renesas,sdhi-r8a774c0",
+ "renesas,rcar-gen3-sdhi";
+ reg = <0 0xee100000 0 0x2000>;
+@@ -1630,7 +1630,7 @@
+ status = "disabled";
+ };
+
+- sdhi1: sd@ee120000 {
++ sdhi1: mmc@ee120000 {
+ compatible = "renesas,sdhi-r8a774c0",
+ "renesas,rcar-gen3-sdhi";
+ reg = <0 0xee120000 0 0x2000>;
+@@ -1642,7 +1642,7 @@
+ status = "disabled";
+ };
+
+- sdhi3: sd@ee160000 {
++ sdhi3: mmc@ee160000 {
+ compatible = "renesas,sdhi-r8a774c0",
+ "renesas,rcar-gen3-sdhi";
+ reg = <0 0xee160000 0 0x2000>;
+diff --git a/arch/arm64/boot/dts/renesas/r8a77951.dtsi b/arch/arm64/boot/dts/renesas/r8a77951.dtsi
+index 61d67d9714ab..9beb8e76d923 100644
+--- a/arch/arm64/boot/dts/renesas/r8a77951.dtsi
++++ b/arch/arm64/boot/dts/renesas/r8a77951.dtsi
+@@ -2590,7 +2590,7 @@
+ status = "disabled";
+ };
+
+- sdhi0: sd@ee100000 {
++ sdhi0: mmc@ee100000 {
+ compatible = "renesas,sdhi-r8a7795",
+ "renesas,rcar-gen3-sdhi";
+ reg = <0 0xee100000 0 0x2000>;
+@@ -2603,7 +2603,7 @@
+ status = "disabled";
+ };
+
+- sdhi1: sd@ee120000 {
++ sdhi1: mmc@ee120000 {
+ compatible = "renesas,sdhi-r8a7795",
+ "renesas,rcar-gen3-sdhi";
+ reg = <0 0xee120000 0 0x2000>;
+@@ -2616,7 +2616,7 @@
+ status = "disabled";
+ };
+
+- sdhi2: sd@ee140000 {
++ sdhi2: mmc@ee140000 {
+ compatible = "renesas,sdhi-r8a7795",
+ "renesas,rcar-gen3-sdhi";
+ reg = <0 0xee140000 0 0x2000>;
+@@ -2629,7 +2629,7 @@
+ status = "disabled";
+ };
+
+- sdhi3: sd@ee160000 {
++ sdhi3: mmc@ee160000 {
+ compatible = "renesas,sdhi-r8a7795",
+ "renesas,rcar-gen3-sdhi";
+ reg = <0 0xee160000 0 0x2000>;
+diff --git a/arch/arm64/boot/dts/renesas/r8a77960.dtsi b/arch/arm64/boot/dts/renesas/r8a77960.dtsi
+index 33bf62acffbb..4dfb7f076787 100644
+--- a/arch/arm64/boot/dts/renesas/r8a77960.dtsi
++++ b/arch/arm64/boot/dts/renesas/r8a77960.dtsi
+@@ -2394,7 +2394,7 @@
+ status = "disabled";
+ };
+
+- sdhi0: sd@ee100000 {
++ sdhi0: mmc@ee100000 {
+ compatible = "renesas,sdhi-r8a7796",
+ "renesas,rcar-gen3-sdhi";
+ reg = <0 0xee100000 0 0x2000>;
+@@ -2407,7 +2407,7 @@
+ status = "disabled";
+ };
+
+- sdhi1: sd@ee120000 {
++ sdhi1: mmc@ee120000 {
+ compatible = "renesas,sdhi-r8a7796",
+ "renesas,rcar-gen3-sdhi";
+ reg = <0 0xee120000 0 0x2000>;
+@@ -2420,7 +2420,7 @@
+ status = "disabled";
+ };
+
+- sdhi2: sd@ee140000 {
++ sdhi2: mmc@ee140000 {
+ compatible = "renesas,sdhi-r8a7796",
+ "renesas,rcar-gen3-sdhi";
+ reg = <0 0xee140000 0 0x2000>;
+@@ -2433,7 +2433,7 @@
+ status = "disabled";
+ };
+
+- sdhi3: sd@ee160000 {
++ sdhi3: mmc@ee160000 {
+ compatible = "renesas,sdhi-r8a7796",
+ "renesas,rcar-gen3-sdhi";
+ reg = <0 0xee160000 0 0x2000>;
+diff --git a/arch/arm64/boot/dts/renesas/r8a77961.dtsi b/arch/arm64/boot/dts/renesas/r8a77961.dtsi
+index 760e738b75b3..eabb0e635cd4 100644
+--- a/arch/arm64/boot/dts/renesas/r8a77961.dtsi
++++ b/arch/arm64/boot/dts/renesas/r8a77961.dtsi
+@@ -1257,7 +1257,7 @@
+ status = "disabled";
+ };
+
+- sdhi0: sd@ee100000 {
++ sdhi0: mmc@ee100000 {
+ compatible = "renesas,sdhi-r8a77961",
+ "renesas,rcar-gen3-sdhi";
+ reg = <0 0xee100000 0 0x2000>;
+@@ -1269,7 +1269,7 @@
+ status = "disabled";
+ };
+
+- sdhi1: sd@ee120000 {
++ sdhi1: mmc@ee120000 {
+ compatible = "renesas,sdhi-r8a77961",
+ "renesas,rcar-gen3-sdhi";
+ reg = <0 0xee120000 0 0x2000>;
+@@ -1281,7 +1281,7 @@
+ status = "disabled";
+ };
+
+- sdhi2: sd@ee140000 {
++ sdhi2: mmc@ee140000 {
+ compatible = "renesas,sdhi-r8a77961",
+ "renesas,rcar-gen3-sdhi";
+ reg = <0 0xee140000 0 0x2000>;
+@@ -1293,7 +1293,7 @@
+ status = "disabled";
+ };
+
+- sdhi3: sd@ee160000 {
++ sdhi3: mmc@ee160000 {
+ compatible = "renesas,sdhi-r8a77961",
+ "renesas,rcar-gen3-sdhi";
+ reg = <0 0xee160000 0 0x2000>;
+diff --git a/arch/arm64/boot/dts/renesas/r8a77965.dtsi b/arch/arm64/boot/dts/renesas/r8a77965.dtsi
+index 6f7ab39fd282..fe4dc12e2bdf 100644
+--- a/arch/arm64/boot/dts/renesas/r8a77965.dtsi
++++ b/arch/arm64/boot/dts/renesas/r8a77965.dtsi
+@@ -2120,7 +2120,7 @@
+ status = "disabled";
+ };
+
+- sdhi0: sd@ee100000 {
++ sdhi0: mmc@ee100000 {
+ compatible = "renesas,sdhi-r8a77965",
+ "renesas,rcar-gen3-sdhi";
+ reg = <0 0xee100000 0 0x2000>;
+@@ -2133,7 +2133,7 @@
+ status = "disabled";
+ };
+
+- sdhi1: sd@ee120000 {
++ sdhi1: mmc@ee120000 {
+ compatible = "renesas,sdhi-r8a77965",
+ "renesas,rcar-gen3-sdhi";
+ reg = <0 0xee120000 0 0x2000>;
+@@ -2146,7 +2146,7 @@
+ status = "disabled";
+ };
+
+- sdhi2: sd@ee140000 {
++ sdhi2: mmc@ee140000 {
+ compatible = "renesas,sdhi-r8a77965",
+ "renesas,rcar-gen3-sdhi";
+ reg = <0 0xee140000 0 0x2000>;
+@@ -2159,7 +2159,7 @@
+ status = "disabled";
+ };
+
+- sdhi3: sd@ee160000 {
++ sdhi3: mmc@ee160000 {
+ compatible = "renesas,sdhi-r8a77965",
+ "renesas,rcar-gen3-sdhi";
+ reg = <0 0xee160000 0 0x2000>;
+diff --git a/arch/arm64/boot/dts/renesas/r8a77990.dtsi b/arch/arm64/boot/dts/renesas/r8a77990.dtsi
+index cd11f24744d4..1991bdc36792 100644
+--- a/arch/arm64/boot/dts/renesas/r8a77990.dtsi
++++ b/arch/arm64/boot/dts/renesas/r8a77990.dtsi
+@@ -1595,7 +1595,7 @@
+ status = "disabled";
+ };
+
+- sdhi0: sd@ee100000 {
++ sdhi0: mmc@ee100000 {
+ compatible = "renesas,sdhi-r8a77990",
+ "renesas,rcar-gen3-sdhi";
+ reg = <0 0xee100000 0 0x2000>;
+@@ -1608,7 +1608,7 @@
+ status = "disabled";
+ };
+
+- sdhi1: sd@ee120000 {
++ sdhi1: mmc@ee120000 {
+ compatible = "renesas,sdhi-r8a77990",
+ "renesas,rcar-gen3-sdhi";
+ reg = <0 0xee120000 0 0x2000>;
+@@ -1621,7 +1621,7 @@
+ status = "disabled";
+ };
+
+- sdhi3: sd@ee160000 {
++ sdhi3: mmc@ee160000 {
+ compatible = "renesas,sdhi-r8a77990",
+ "renesas,rcar-gen3-sdhi";
+ reg = <0 0xee160000 0 0x2000>;
+diff --git a/arch/arm64/boot/dts/renesas/r8a77995.dtsi b/arch/arm64/boot/dts/renesas/r8a77995.dtsi
+index e5617ec0f49c..2c2272f5f5b5 100644
+--- a/arch/arm64/boot/dts/renesas/r8a77995.dtsi
++++ b/arch/arm64/boot/dts/renesas/r8a77995.dtsi
+@@ -916,7 +916,7 @@
+ status = "disabled";
+ };
+
+- sdhi2: sd@ee140000 {
++ sdhi2: mmc@ee140000 {
+ compatible = "renesas,sdhi-r8a77995",
+ "renesas,rcar-gen3-sdhi";
+ reg = <0 0xee140000 0 0x2000>;
+diff --git a/arch/arm64/boot/dts/rockchip/rk3368-lion.dtsi b/arch/arm64/boot/dts/rockchip/rk3368-lion.dtsi
+index e17311e09082..216aafd90e7f 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3368-lion.dtsi
++++ b/arch/arm64/boot/dts/rockchip/rk3368-lion.dtsi
+@@ -156,7 +156,7 @@
+ pinctrl-0 = <&rgmii_pins>;
+ snps,reset-active-low;
+ snps,reset-delays-us = <0 10000 50000>;
+- snps,reset-gpio = <&gpio3 RK_PB3 GPIO_ACTIVE_HIGH>;
++ snps,reset-gpio = <&gpio3 RK_PB3 GPIO_ACTIVE_LOW>;
+ tx_delay = <0x10>;
+ rx_delay = <0x10>;
+ status = "okay";
+diff --git a/arch/arm64/boot/dts/rockchip/rk3399-puma.dtsi b/arch/arm64/boot/dts/rockchip/rk3399-puma.dtsi
+index 07694b196fdb..72c06abd27ea 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3399-puma.dtsi
++++ b/arch/arm64/boot/dts/rockchip/rk3399-puma.dtsi
+@@ -101,7 +101,7 @@
+
+ vcc5v0_host: vcc5v0-host-regulator {
+ compatible = "regulator-fixed";
+- gpio = <&gpio4 RK_PA3 GPIO_ACTIVE_HIGH>;
++ gpio = <&gpio4 RK_PA3 GPIO_ACTIVE_LOW>;
+ enable-active-low;
+ pinctrl-names = "default";
+ pinctrl-0 = <&vcc5v0_host_en>;
+@@ -157,7 +157,7 @@
+ phy-mode = "rgmii";
+ pinctrl-names = "default";
+ pinctrl-0 = <&rgmii_pins>;
+- snps,reset-gpio = <&gpio3 RK_PC0 GPIO_ACTIVE_HIGH>;
++ snps,reset-gpio = <&gpio3 RK_PC0 GPIO_ACTIVE_LOW>;
+ snps,reset-active-low;
+ snps,reset-delays-us = <0 10000 50000>;
+ tx_delay = <0x10>;
+diff --git a/arch/m68k/mac/iop.c b/arch/m68k/mac/iop.c
+index d3775afb0f07..bfc8daf50744 100644
+--- a/arch/m68k/mac/iop.c
++++ b/arch/m68k/mac/iop.c
+@@ -183,7 +183,7 @@ static __inline__ void iop_writeb(volatile struct mac_iop *iop, __u16 addr, __u8
+
+ static __inline__ void iop_stop(volatile struct mac_iop *iop)
+ {
+- iop->status_ctrl &= ~IOP_RUN;
++ iop->status_ctrl = IOP_AUTOINC;
+ }
+
+ static __inline__ void iop_start(volatile struct mac_iop *iop)
+@@ -191,14 +191,9 @@ static __inline__ void iop_start(volatile struct mac_iop *iop)
+ iop->status_ctrl = IOP_RUN | IOP_AUTOINC;
+ }
+
+-static __inline__ void iop_bypass(volatile struct mac_iop *iop)
+-{
+- iop->status_ctrl |= IOP_BYPASS;
+-}
+-
+ static __inline__ void iop_interrupt(volatile struct mac_iop *iop)
+ {
+- iop->status_ctrl |= IOP_IRQ;
++ iop->status_ctrl = IOP_IRQ | IOP_RUN | IOP_AUTOINC;
+ }
+
+ static int iop_alive(volatile struct mac_iop *iop)
+@@ -244,7 +239,6 @@ void __init iop_preinit(void)
+ } else {
+ iop_base[IOP_NUM_SCC] = (struct mac_iop *) SCC_IOP_BASE_QUADRA;
+ }
+- iop_base[IOP_NUM_SCC]->status_ctrl = 0x87;
+ iop_scc_present = 1;
+ } else {
+ iop_base[IOP_NUM_SCC] = NULL;
+@@ -256,7 +250,7 @@ void __init iop_preinit(void)
+ } else {
+ iop_base[IOP_NUM_ISM] = (struct mac_iop *) ISM_IOP_BASE_QUADRA;
+ }
+- iop_base[IOP_NUM_ISM]->status_ctrl = 0;
++ iop_stop(iop_base[IOP_NUM_ISM]);
+ iop_ism_present = 1;
+ } else {
+ iop_base[IOP_NUM_ISM] = NULL;
+@@ -415,7 +409,8 @@ static void iop_handle_send(uint iop_num, uint chan)
+ msg->status = IOP_MSGSTATUS_UNUSED;
+ msg = msg->next;
+ iop_send_queue[iop_num][chan] = msg;
+- if (msg) iop_do_send(msg);
++ if (msg && iop_readb(iop, IOP_ADDR_SEND_STATE + chan) == IOP_MSG_IDLE)
++ iop_do_send(msg);
+ }
+
+ /*
+@@ -489,16 +484,12 @@ int iop_send_message(uint iop_num, uint chan, void *privdata,
+
+ if (!(q = iop_send_queue[iop_num][chan])) {
+ iop_send_queue[iop_num][chan] = msg;
++ iop_do_send(msg);
+ } else {
+ while (q->next) q = q->next;
+ q->next = msg;
+ }
+
+- if (iop_readb(iop_base[iop_num],
+- IOP_ADDR_SEND_STATE + chan) == IOP_MSG_IDLE) {
+- iop_do_send(msg);
+- }
+-
+ return 0;
+ }
+
+diff --git a/arch/mips/cavium-octeon/octeon-usb.c b/arch/mips/cavium-octeon/octeon-usb.c
+index 1fd85c559700..950e6c6e8629 100644
+--- a/arch/mips/cavium-octeon/octeon-usb.c
++++ b/arch/mips/cavium-octeon/octeon-usb.c
+@@ -518,6 +518,7 @@ static int __init dwc3_octeon_device_init(void)
+
+ res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ if (res == NULL) {
++ put_device(&pdev->dev);
+ dev_err(&pdev->dev, "No memory resources\n");
+ return -ENXIO;
+ }
+@@ -529,8 +530,10 @@ static int __init dwc3_octeon_device_init(void)
+ * know the difference.
+ */
+ base = devm_ioremap_resource(&pdev->dev, res);
+- if (IS_ERR(base))
++ if (IS_ERR(base)) {
++ put_device(&pdev->dev);
+ return PTR_ERR(base);
++ }
+
+ mutex_lock(&dwc3_octeon_clocks_mutex);
+ dwc3_octeon_clocks_start(&pdev->dev, (u64)base);
+diff --git a/arch/mips/include/asm/cpu-features.h b/arch/mips/include/asm/cpu-features.h
+index 724dfddcab92..0b1bc7ed913b 100644
+--- a/arch/mips/include/asm/cpu-features.h
++++ b/arch/mips/include/asm/cpu-features.h
+@@ -568,6 +568,10 @@
+ # define cpu_has_mac2008_only __opt(MIPS_CPU_MAC_2008_ONLY)
+ #endif
+
++#ifndef cpu_has_ftlbparex
++# define cpu_has_ftlbparex __opt(MIPS_CPU_FTLBPAREX)
++#endif
++
+ #ifdef CONFIG_SMP
+ /*
+ * Some systems share FTLB RAMs between threads within a core (siblings in
+diff --git a/arch/mips/include/asm/cpu.h b/arch/mips/include/asm/cpu.h
+index 104a509312b3..3a4773714b29 100644
+--- a/arch/mips/include/asm/cpu.h
++++ b/arch/mips/include/asm/cpu.h
+@@ -425,6 +425,7 @@ enum cpu_type_enum {
+ #define MIPS_CPU_MM_SYSAD BIT_ULL(58) /* CPU supports write-through SysAD Valid merge */
+ #define MIPS_CPU_MM_FULL BIT_ULL(59) /* CPU supports write-through full merge */
+ #define MIPS_CPU_MAC_2008_ONLY BIT_ULL(60) /* CPU Only support MAC2008 Fused multiply-add instruction */
++#define MIPS_CPU_FTLBPAREX BIT_ULL(61) /* CPU has FTLB parity exception */
+
+ /*
+ * CPU ASE encodings
+diff --git a/arch/mips/kernel/cpu-probe.c b/arch/mips/kernel/cpu-probe.c
+index def1659fe262..3404011eb7cf 100644
+--- a/arch/mips/kernel/cpu-probe.c
++++ b/arch/mips/kernel/cpu-probe.c
+@@ -1827,6 +1827,19 @@ static inline void cpu_probe_mips(struct cpuinfo_mips *c, unsigned int cpu)
+ default:
+ break;
+ }
++
++ /* Recent MIPS cores use the implementation-dependent ExcCode 16 for
++ * cache/FTLB parity exceptions.
++ */
++ switch (__get_cpu_type(c->cputype)) {
++ case CPU_PROAPTIV:
++ case CPU_P5600:
++ case CPU_P6600:
++ case CPU_I6400:
++ case CPU_I6500:
++ c->options |= MIPS_CPU_FTLBPAREX;
++ break;
++ }
+ }
+
+ static inline void cpu_probe_alchemy(struct cpuinfo_mips *c, unsigned int cpu)
+diff --git a/arch/mips/kernel/traps.c b/arch/mips/kernel/traps.c
+index f655af68176c..e664d8b43e72 100644
+--- a/arch/mips/kernel/traps.c
++++ b/arch/mips/kernel/traps.c
+@@ -2457,7 +2457,8 @@ void __init trap_init(void)
+ if (cpu_has_fpu && !cpu_has_nofpuex)
+ set_except_vector(EXCCODE_FPE, handle_fpe);
+
+- set_except_vector(MIPS_EXCCODE_TLBPAR, handle_ftlb);
++ if (cpu_has_ftlbparex)
++ set_except_vector(MIPS_EXCCODE_TLBPAR, handle_ftlb);
+
+ if (cpu_has_rixiex) {
+ set_except_vector(EXCCODE_TLBRI, tlb_do_page_fault_0);
+diff --git a/arch/mips/kvm/vz.c b/arch/mips/kvm/vz.c
+index d9c462c14163..8397e623b926 100644
+--- a/arch/mips/kvm/vz.c
++++ b/arch/mips/kvm/vz.c
+@@ -29,7 +29,9 @@
+ #include <linux/kvm_host.h>
+
+ #include "interrupt.h"
++#ifdef CONFIG_CPU_LOONGSON64
+ #include "loongson_regs.h"
++#endif
+
+ #include "trace.h"
+
+diff --git a/arch/mips/pci/pci-xtalk-bridge.c b/arch/mips/pci/pci-xtalk-bridge.c
+index 5958217861b8..9b3cc775c55e 100644
+--- a/arch/mips/pci/pci-xtalk-bridge.c
++++ b/arch/mips/pci/pci-xtalk-bridge.c
+@@ -728,6 +728,7 @@ err_free_resource:
+ pci_free_resource_list(&host->windows);
+ err_remove_domain:
+ irq_domain_remove(domain);
++ irq_domain_free_fwnode(fn);
+ return err;
+ }
+
+@@ -735,8 +736,10 @@ static int bridge_remove(struct platform_device *pdev)
+ {
+ struct pci_bus *bus = platform_get_drvdata(pdev);
+ struct bridge_controller *bc = BRIDGE_CONTROLLER(bus);
++ struct fwnode_handle *fn = bc->domain->fwnode;
+
+ irq_domain_remove(bc->domain);
++ irq_domain_free_fwnode(fn);
+ pci_lock_rescan_remove();
+ pci_stop_root_bus(bus);
+ pci_remove_root_bus(bus);
+diff --git a/arch/parisc/include/asm/barrier.h b/arch/parisc/include/asm/barrier.h
+index dbaaca84f27f..640d46edf32e 100644
+--- a/arch/parisc/include/asm/barrier.h
++++ b/arch/parisc/include/asm/barrier.h
+@@ -26,6 +26,67 @@
+ #define __smp_rmb() mb()
+ #define __smp_wmb() mb()
+
++#define __smp_store_release(p, v) \
++do { \
++ typeof(p) __p = (p); \
++ union { typeof(*p) __val; char __c[1]; } __u = \
++ { .__val = (__force typeof(*p)) (v) }; \
++ compiletime_assert_atomic_type(*p); \
++ switch (sizeof(*p)) { \
++ case 1: \
++ asm volatile("stb,ma %0,0(%1)" \
++ : : "r"(*(__u8 *)__u.__c), "r"(__p) \
++ : "memory"); \
++ break; \
++ case 2: \
++ asm volatile("sth,ma %0,0(%1)" \
++ : : "r"(*(__u16 *)__u.__c), "r"(__p) \
++ : "memory"); \
++ break; \
++ case 4: \
++ asm volatile("stw,ma %0,0(%1)" \
++ : : "r"(*(__u32 *)__u.__c), "r"(__p) \
++ : "memory"); \
++ break; \
++ case 8: \
++ if (IS_ENABLED(CONFIG_64BIT)) \
++ asm volatile("std,ma %0,0(%1)" \
++ : : "r"(*(__u64 *)__u.__c), "r"(__p) \
++ : "memory"); \
++ break; \
++ } \
++} while (0)
++
++#define __smp_load_acquire(p) \
++({ \
++ union { typeof(*p) __val; char __c[1]; } __u; \
++ typeof(p) __p = (p); \
++ compiletime_assert_atomic_type(*p); \
++ switch (sizeof(*p)) { \
++ case 1: \
++ asm volatile("ldb,ma 0(%1),%0" \
++ : "=r"(*(__u8 *)__u.__c) : "r"(__p) \
++ : "memory"); \
++ break; \
++ case 2: \
++ asm volatile("ldh,ma 0(%1),%0" \
++ : "=r"(*(__u16 *)__u.__c) : "r"(__p) \
++ : "memory"); \
++ break; \
++ case 4: \
++ asm volatile("ldw,ma 0(%1),%0" \
++ : "=r"(*(__u32 *)__u.__c) : "r"(__p) \
++ : "memory"); \
++ break; \
++ case 8: \
++ if (IS_ENABLED(CONFIG_64BIT)) \
++ asm volatile("ldd,ma 0(%1),%0" \
++ : "=r"(*(__u64 *)__u.__c) : "r"(__p) \
++ : "memory"); \
++ break; \
++ } \
++ __u.__val; \
++})
+ #include <asm-generic/barrier.h>
+
+ #endif /* !__ASSEMBLY__ */
+diff --git a/arch/parisc/include/asm/spinlock.h b/arch/parisc/include/asm/spinlock.h
+index 70fecb8dc4e2..51b6c47f802f 100644
+--- a/arch/parisc/include/asm/spinlock.h
++++ b/arch/parisc/include/asm/spinlock.h
+@@ -10,34 +10,25 @@
+ static inline int arch_spin_is_locked(arch_spinlock_t *x)
+ {
+ volatile unsigned int *a = __ldcw_align(x);
+- smp_mb();
+ return *a == 0;
+ }
+
+-static inline void arch_spin_lock(arch_spinlock_t *x)
+-{
+- volatile unsigned int *a;
+-
+- a = __ldcw_align(x);
+- while (__ldcw(a) == 0)
+- while (*a == 0)
+- cpu_relax();
+-}
++#define arch_spin_lock(lock) arch_spin_lock_flags(lock, 0)
+
+ static inline void arch_spin_lock_flags(arch_spinlock_t *x,
+ unsigned long flags)
+ {
+ volatile unsigned int *a;
+- unsigned long flags_dis;
+
+ a = __ldcw_align(x);
+- while (__ldcw(a) == 0) {
+- local_save_flags(flags_dis);
+- local_irq_restore(flags);
++ while (__ldcw(a) == 0)
+ while (*a == 0)
+- cpu_relax();
+- local_irq_restore(flags_dis);
+- }
++ if (flags & PSW_SM_I) {
++ local_irq_enable();
++ cpu_relax();
++ local_irq_disable();
++ } else
++ cpu_relax();
+ }
+ #define arch_spin_lock_flags arch_spin_lock_flags
+
+@@ -46,12 +37,8 @@ static inline void arch_spin_unlock(arch_spinlock_t *x)
+ volatile unsigned int *a;
+
+ a = __ldcw_align(x);
+-#ifdef CONFIG_SMP
+- (void) __ldcw(a);
+-#else
+- mb();
+-#endif
+- *a = 1;
++ /* Release with ordered store. */
++ __asm__ __volatile__("stw,ma %0,0(%1)" : : "r"(1), "r"(a) : "memory");
+ }
+
+ static inline int arch_spin_trylock(arch_spinlock_t *x)
+diff --git a/arch/parisc/kernel/entry.S b/arch/parisc/kernel/entry.S
+index 4b484ec7c7da..519f9056fd00 100644
+--- a/arch/parisc/kernel/entry.S
++++ b/arch/parisc/kernel/entry.S
+@@ -454,7 +454,6 @@
+ nop
+ LDREG 0(\ptp),\pte
+ bb,<,n \pte,_PAGE_PRESENT_BIT,3f
+- LDCW 0(\tmp),\tmp1
+ b \fault
+ stw \spc,0(\tmp)
+ 99: ALTERNATIVE(98b, 99b, ALT_COND_NO_SMP, INSN_NOP)
+@@ -464,23 +463,26 @@
+ 3:
+ .endm
+
+- /* Release pa_tlb_lock lock without reloading lock address. */
+- .macro tlb_unlock0 spc,tmp,tmp1
++ /* Release pa_tlb_lock lock without reloading lock address.
++ Note that the values in the register spc are limited to
++ NR_SPACE_IDS (262144). Thus, the stw instruction always
++ stores a nonzero value even when register spc is 64 bits.
++ We use an ordered store to ensure all prior accesses are
++ performed prior to releasing the lock. */
++ .macro tlb_unlock0 spc,tmp
+ #ifdef CONFIG_SMP
+ 98: or,COND(=) %r0,\spc,%r0
+- LDCW 0(\tmp),\tmp1
+- or,COND(=) %r0,\spc,%r0
+- stw \spc,0(\tmp)
++ stw,ma \spc,0(\tmp)
+ 99: ALTERNATIVE(98b, 99b, ALT_COND_NO_SMP, INSN_NOP)
+ #endif
+ .endm
+
+ /* Release pa_tlb_lock lock. */
+- .macro tlb_unlock1 spc,tmp,tmp1
++ .macro tlb_unlock1 spc,tmp
+ #ifdef CONFIG_SMP
+ 98: load_pa_tlb_lock \tmp
+ 99: ALTERNATIVE(98b, 99b, ALT_COND_NO_SMP, INSN_NOP)
+- tlb_unlock0 \spc,\tmp,\tmp1
++ tlb_unlock0 \spc,\tmp
+ #endif
+ .endm
+
+@@ -1163,7 +1165,7 @@ dtlb_miss_20w:
+
+ idtlbt pte,prot
+
+- tlb_unlock1 spc,t0,t1
++ tlb_unlock1 spc,t0
+ rfir
+ nop
+
+@@ -1189,7 +1191,7 @@ nadtlb_miss_20w:
+
+ idtlbt pte,prot
+
+- tlb_unlock1 spc,t0,t1
++ tlb_unlock1 spc,t0
+ rfir
+ nop
+
+@@ -1223,7 +1225,7 @@ dtlb_miss_11:
+
+ mtsp t1, %sr1 /* Restore sr1 */
+
+- tlb_unlock1 spc,t0,t1
++ tlb_unlock1 spc,t0
+ rfir
+ nop
+
+@@ -1256,7 +1258,7 @@ nadtlb_miss_11:
+
+ mtsp t1, %sr1 /* Restore sr1 */
+
+- tlb_unlock1 spc,t0,t1
++ tlb_unlock1 spc,t0
+ rfir
+ nop
+
+@@ -1285,7 +1287,7 @@ dtlb_miss_20:
+
+ idtlbt pte,prot
+
+- tlb_unlock1 spc,t0,t1
++ tlb_unlock1 spc,t0
+ rfir
+ nop
+
+@@ -1313,7 +1315,7 @@ nadtlb_miss_20:
+
+ idtlbt pte,prot
+
+- tlb_unlock1 spc,t0,t1
++ tlb_unlock1 spc,t0
+ rfir
+ nop
+
+@@ -1420,7 +1422,7 @@ itlb_miss_20w:
+
+ iitlbt pte,prot
+
+- tlb_unlock1 spc,t0,t1
++ tlb_unlock1 spc,t0
+ rfir
+ nop
+
+@@ -1444,7 +1446,7 @@ naitlb_miss_20w:
+
+ iitlbt pte,prot
+
+- tlb_unlock1 spc,t0,t1
++ tlb_unlock1 spc,t0
+ rfir
+ nop
+
+@@ -1478,7 +1480,7 @@ itlb_miss_11:
+
+ mtsp t1, %sr1 /* Restore sr1 */
+
+- tlb_unlock1 spc,t0,t1
++ tlb_unlock1 spc,t0
+ rfir
+ nop
+
+@@ -1502,7 +1504,7 @@ naitlb_miss_11:
+
+ mtsp t1, %sr1 /* Restore sr1 */
+
+- tlb_unlock1 spc,t0,t1
++ tlb_unlock1 spc,t0
+ rfir
+ nop
+
+@@ -1532,7 +1534,7 @@ itlb_miss_20:
+
+ iitlbt pte,prot
+
+- tlb_unlock1 spc,t0,t1
++ tlb_unlock1 spc,t0
+ rfir
+ nop
+
+@@ -1552,7 +1554,7 @@ naitlb_miss_20:
+
+ iitlbt pte,prot
+
+- tlb_unlock1 spc,t0,t1
++ tlb_unlock1 spc,t0
+ rfir
+ nop
+
+@@ -1582,7 +1584,7 @@ dbit_trap_20w:
+
+ idtlbt pte,prot
+
+- tlb_unlock0 spc,t0,t1
++ tlb_unlock0 spc,t0
+ rfir
+ nop
+ #else
+@@ -1608,7 +1610,7 @@ dbit_trap_11:
+
+ mtsp t1, %sr1 /* Restore sr1 */
+
+- tlb_unlock0 spc,t0,t1
++ tlb_unlock0 spc,t0
+ rfir
+ nop
+
+@@ -1628,7 +1630,7 @@ dbit_trap_20:
+
+ idtlbt pte,prot
+
+- tlb_unlock0 spc,t0,t1
++ tlb_unlock0 spc,t0
+ rfir
+ nop
+ #endif
+diff --git a/arch/parisc/kernel/syscall.S b/arch/parisc/kernel/syscall.S
+index f05c9d5b6b9e..3ad61a177f5b 100644
+--- a/arch/parisc/kernel/syscall.S
++++ b/arch/parisc/kernel/syscall.S
+@@ -640,11 +640,7 @@ cas_action:
+ sub,<> %r28, %r25, %r0
+ 2: stw %r24, 0(%r26)
+ /* Free lock */
+-#ifdef CONFIG_SMP
+-98: LDCW 0(%sr2,%r20), %r1 /* Barrier */
+-99: ALTERNATIVE(98b, 99b, ALT_COND_NO_SMP, INSN_NOP)
+-#endif
+- stw %r20, 0(%sr2,%r20)
++ stw,ma %r20, 0(%sr2,%r20)
+ #if ENABLE_LWS_DEBUG
+ /* Clear thread register indicator */
+ stw %r0, 4(%sr2,%r20)
+@@ -658,11 +654,7 @@ cas_action:
+ 3:
+ /* Error occurred on load or store */
+ /* Free lock */
+-#ifdef CONFIG_SMP
+-98: LDCW 0(%sr2,%r20), %r1 /* Barrier */
+-99: ALTERNATIVE(98b, 99b, ALT_COND_NO_SMP, INSN_NOP)
+-#endif
+- stw %r20, 0(%sr2,%r20)
++ stw,ma %r20, 0(%sr2,%r20)
+ #if ENABLE_LWS_DEBUG
+ stw %r0, 4(%sr2,%r20)
+ #endif
+@@ -863,11 +855,7 @@ cas2_action:
+
+ cas2_end:
+ /* Free lock */
+-#ifdef CONFIG_SMP
+-98: LDCW 0(%sr2,%r20), %r1 /* Barrier */
+-99: ALTERNATIVE(98b, 99b, ALT_COND_NO_SMP, INSN_NOP)
+-#endif
+- stw %r20, 0(%sr2,%r20)
++ stw,ma %r20, 0(%sr2,%r20)
+ /* Enable interrupts */
+ ssm PSW_SM_I, %r0
+ /* Return to userspace, set no error */
+@@ -877,11 +865,7 @@ cas2_end:
+ 22:
+ /* Error occurred on load or store */
+ /* Free lock */
+-#ifdef CONFIG_SMP
+-98: LDCW 0(%sr2,%r20), %r1 /* Barrier */
+-99: ALTERNATIVE(98b, 99b, ALT_COND_NO_SMP, INSN_NOP)
+-#endif
+- stw %r20, 0(%sr2,%r20)
++ stw,ma %r20, 0(%sr2,%r20)
+ ssm PSW_SM_I, %r0
+ ldo 1(%r0),%r28
+ b lws_exit
+diff --git a/arch/powerpc/boot/Makefile b/arch/powerpc/boot/Makefile
+index 63d7456b9518..2039ed41250d 100644
+--- a/arch/powerpc/boot/Makefile
++++ b/arch/powerpc/boot/Makefile
+@@ -117,7 +117,7 @@ src-wlib-y := string.S crt0.S stdio.c decompress.c main.c \
+ elf_util.c $(zlib-y) devtree.c stdlib.c \
+ oflib.c ofconsole.c cuboot.c
+
+-src-wlib-$(CONFIG_PPC_MPC52XX) += mpc52xx-psc.c
++src-wlib-$(CONFIG_PPC_MPC52xx) += mpc52xx-psc.c
+ src-wlib-$(CONFIG_PPC64_BOOT_WRAPPER) += opal-calls.S opal.c
+ ifndef CONFIG_PPC64_BOOT_WRAPPER
+ src-wlib-y += crtsavres.S
+diff --git a/arch/powerpc/boot/serial.c b/arch/powerpc/boot/serial.c
+index 0bfa7e87e546..9a19e5905485 100644
+--- a/arch/powerpc/boot/serial.c
++++ b/arch/powerpc/boot/serial.c
+@@ -128,7 +128,7 @@ int serial_console_init(void)
+ dt_is_compatible(devp, "fsl,cpm2-smc-uart"))
+ rc = cpm_console_init(devp, &serial_cd);
+ #endif
+-#ifdef CONFIG_PPC_MPC52XX
++#ifdef CONFIG_PPC_MPC52xx
+ else if (dt_is_compatible(devp, "fsl,mpc5200-psc-uart"))
+ rc = mpc5200_psc_console_init(devp, &serial_cd);
+ #endif
+diff --git a/arch/powerpc/include/asm/fixmap.h b/arch/powerpc/include/asm/fixmap.h
+index 29188810ba30..925cf89cbf4b 100644
+--- a/arch/powerpc/include/asm/fixmap.h
++++ b/arch/powerpc/include/asm/fixmap.h
+@@ -52,7 +52,7 @@ enum fixed_addresses {
+ FIX_HOLE,
+ /* reserve the top 128K for early debugging purposes */
+ FIX_EARLY_DEBUG_TOP = FIX_HOLE,
+- FIX_EARLY_DEBUG_BASE = FIX_EARLY_DEBUG_TOP+((128*1024)/PAGE_SIZE)-1,
++ FIX_EARLY_DEBUG_BASE = FIX_EARLY_DEBUG_TOP+(ALIGN(SZ_128, PAGE_SIZE)/PAGE_SIZE)-1,
+ #ifdef CONFIG_HIGHMEM
+ FIX_KMAP_BEGIN, /* reserved pte's for temporary kernel mappings */
+ FIX_KMAP_END = FIX_KMAP_BEGIN+(KM_TYPE_NR*NR_CPUS)-1,
+diff --git a/arch/powerpc/include/asm/perf_event.h b/arch/powerpc/include/asm/perf_event.h
+index eed3954082fa..1e8b2e1ec1db 100644
+--- a/arch/powerpc/include/asm/perf_event.h
++++ b/arch/powerpc/include/asm/perf_event.h
+@@ -12,6 +12,8 @@
+
+ #ifdef CONFIG_PPC_PERF_CTRS
+ #include <asm/perf_event_server.h>
++#else
++static inline bool is_sier_available(void) { return false; }
+ #endif
+
+ #ifdef CONFIG_FSL_EMB_PERF_EVENT
+diff --git a/arch/powerpc/include/asm/ptrace.h b/arch/powerpc/include/asm/ptrace.h
+index ac3970fff0d5..8b5814a36b76 100644
+--- a/arch/powerpc/include/asm/ptrace.h
++++ b/arch/powerpc/include/asm/ptrace.h
+@@ -238,7 +238,7 @@ static inline void set_trap_norestart(struct pt_regs *regs)
+ }
+
+ #define arch_has_single_step() (1)
+-#ifndef CONFIG_BOOK3S_601
++#ifndef CONFIG_PPC_BOOK3S_601
+ #define arch_has_block_step() (true)
+ #else
+ #define arch_has_block_step() (false)
+diff --git a/arch/powerpc/include/asm/rtas.h b/arch/powerpc/include/asm/rtas.h
+index 014968f25f7e..0107d724e9da 100644
+--- a/arch/powerpc/include/asm/rtas.h
++++ b/arch/powerpc/include/asm/rtas.h
+@@ -253,8 +253,6 @@ extern int rtas_set_indicator_fast(int indicator, int index, int new_value);
+ extern void rtas_progress(char *s, unsigned short hex);
+ extern int rtas_suspend_cpu(struct rtas_suspend_me_data *data);
+ extern int rtas_suspend_last_cpu(struct rtas_suspend_me_data *data);
+-extern int rtas_online_cpus_mask(cpumask_var_t cpus);
+-extern int rtas_offline_cpus_mask(cpumask_var_t cpus);
+ extern int rtas_ibm_suspend_me(u64 handle);
+
+ struct rtc_time;
+diff --git a/arch/powerpc/include/asm/timex.h b/arch/powerpc/include/asm/timex.h
+index d2d2c4bd8435..6047402b0a4d 100644
+--- a/arch/powerpc/include/asm/timex.h
++++ b/arch/powerpc/include/asm/timex.h
+@@ -17,7 +17,7 @@ typedef unsigned long cycles_t;
+
+ static inline cycles_t get_cycles(void)
+ {
+- if (IS_ENABLED(CONFIG_BOOK3S_601))
++ if (IS_ENABLED(CONFIG_PPC_BOOK3S_601))
+ return 0;
+
+ return mftb();
+diff --git a/arch/powerpc/kernel/hw_breakpoint.c b/arch/powerpc/kernel/hw_breakpoint.c
+index 0000daf0e1da..c55e67bab271 100644
+--- a/arch/powerpc/kernel/hw_breakpoint.c
++++ b/arch/powerpc/kernel/hw_breakpoint.c
+@@ -419,7 +419,7 @@ static int hw_breakpoint_validate_len(struct arch_hw_breakpoint *hw)
+ if (dawr_enabled()) {
+ max_len = DAWR_MAX_LEN;
+ /* DAWR region can't cross 512 bytes boundary */
+- if (ALIGN(start_addr, SZ_512M) != ALIGN(end_addr - 1, SZ_512M))
++ if (ALIGN_DOWN(start_addr, SZ_512) != ALIGN_DOWN(end_addr - 1, SZ_512))
+ return -EINVAL;
+ } else if (IS_ENABLED(CONFIG_PPC_8xx)) {
+ /* 8xx can setup a range without limitation */
+@@ -498,11 +498,11 @@ static bool dar_in_user_range(unsigned long dar, struct arch_hw_breakpoint *info
+ return ((info->address <= dar) && (dar - info->address < info->len));
+ }
+
+-static bool dar_user_range_overlaps(unsigned long dar, int size,
+- struct arch_hw_breakpoint *info)
++static bool ea_user_range_overlaps(unsigned long ea, int size,
++ struct arch_hw_breakpoint *info)
+ {
+- return ((dar < info->address + info->len) &&
+- (dar + size > info->address));
++ return ((ea < info->address + info->len) &&
++ (ea + size > info->address));
+ }
+
+ static bool dar_in_hw_range(unsigned long dar, struct arch_hw_breakpoint *info)
+@@ -515,20 +515,22 @@ static bool dar_in_hw_range(unsigned long dar, struct arch_hw_breakpoint *info)
+ return ((hw_start_addr <= dar) && (hw_end_addr > dar));
+ }
+
+-static bool dar_hw_range_overlaps(unsigned long dar, int size,
+- struct arch_hw_breakpoint *info)
++static bool ea_hw_range_overlaps(unsigned long ea, int size,
++ struct arch_hw_breakpoint *info)
+ {
+ unsigned long hw_start_addr, hw_end_addr;
+
+ hw_start_addr = ALIGN_DOWN(info->address, HW_BREAKPOINT_SIZE);
+ hw_end_addr = ALIGN(info->address + info->len, HW_BREAKPOINT_SIZE);
+
+- return ((dar < hw_end_addr) && (dar + size > hw_start_addr));
++ return ((ea < hw_end_addr) && (ea + size > hw_start_addr));
+ }
+
+ /*
+ * If hw has multiple DAWR registers, we also need to check all
+ * dawrx constraint bits to confirm this is _really_ a valid event.
++ * If type is UNKNOWN, but privilege level matches, consider it as
++ * a positive match.
+ */
+ static bool check_dawrx_constraints(struct pt_regs *regs, int type,
+ struct arch_hw_breakpoint *info)
+@@ -536,7 +538,12 @@ static bool check_dawrx_constraints(struct pt_regs *regs, int type,
+ if (OP_IS_LOAD(type) && !(info->type & HW_BRK_TYPE_READ))
+ return false;
+
+- if (OP_IS_STORE(type) && !(info->type & HW_BRK_TYPE_WRITE))
++ /*
++ * The Cache Management instructions other than dcbz never
++ * cause a match. i.e. if type is CACHEOP, the instruction
++ * is dcbz, and dcbz is treated as Store.
++ */
++ if ((OP_IS_STORE(type) || type == CACHEOP) && !(info->type & HW_BRK_TYPE_WRITE))
+ return false;
+
+ if (is_kernel_addr(regs->nip) && !(info->type & HW_BRK_TYPE_KERNEL))
+@@ -553,7 +560,8 @@ static bool check_dawrx_constraints(struct pt_regs *regs, int type,
+ * including extraneous exception. Otherwise return false.
+ */
+ static bool check_constraints(struct pt_regs *regs, struct ppc_inst instr,
+- int type, int size, struct arch_hw_breakpoint *info)
++ unsigned long ea, int type, int size,
++ struct arch_hw_breakpoint *info)
+ {
+ bool in_user_range = dar_in_user_range(regs->dar, info);
+ bool dawrx_constraints;
+@@ -569,22 +577,27 @@ static bool check_constraints(struct pt_regs *regs, struct ppc_inst instr,
+ }
+
+ if (unlikely(ppc_inst_equal(instr, ppc_inst(0)))) {
+- if (in_user_range)
+- return true;
++ if (cpu_has_feature(CPU_FTR_ARCH_31) &&
++ !dar_in_hw_range(regs->dar, info))
++ return false;
+
+- if (dar_in_hw_range(regs->dar, info)) {
+- info->type |= HW_BRK_TYPE_EXTRANEOUS_IRQ;
+- return true;
+- }
+- return false;
++ return true;
+ }
+
+ dawrx_constraints = check_dawrx_constraints(regs, type, info);
+
+- if (dar_user_range_overlaps(regs->dar, size, info))
++ if (type == UNKNOWN) {
++ if (cpu_has_feature(CPU_FTR_ARCH_31) &&
++ !dar_in_hw_range(regs->dar, info))
++ return false;
++
++ return dawrx_constraints;
++ }
++
++ if (ea_user_range_overlaps(ea, size, info))
+ return dawrx_constraints;
+
+- if (dar_hw_range_overlaps(regs->dar, size, info)) {
++ if (ea_hw_range_overlaps(ea, size, info)) {
+ if (dawrx_constraints) {
+ info->type |= HW_BRK_TYPE_EXTRANEOUS_IRQ;
+ return true;
+@@ -593,8 +606,17 @@ static bool check_constraints(struct pt_regs *regs, struct ppc_inst instr,
+ return false;
+ }
+
++static int cache_op_size(void)
++{
++#ifdef __powerpc64__
++ return ppc64_caches.l1d.block_size;
++#else
++ return L1_CACHE_BYTES;
++#endif
++}
++
+ static void get_instr_detail(struct pt_regs *regs, struct ppc_inst *instr,
+- int *type, int *size, bool *larx_stcx)
++ int *type, int *size, unsigned long *ea)
+ {
+ struct instruction_op op;
+
+@@ -602,16 +624,23 @@ static void get_instr_detail(struct pt_regs *regs, struct ppc_inst *instr,
+ return;
+
+ analyse_instr(&op, regs, *instr);
+-
+- /*
+- * Set size = 8 if analyse_instr() fails. If it's a userspace
+- * watchpoint(valid or extraneous), we can notify user about it.
+- * If it's a kernel watchpoint, instruction emulation will fail
+- * in stepping_handler() and watchpoint will be disabled.
+- */
+ *type = GETTYPE(op.type);
+- *size = !(*type == UNKNOWN) ? GETSIZE(op.type) : 8;
+- *larx_stcx = (*type == LARX || *type == STCX);
++ *ea = op.ea;
++#ifdef __powerpc64__
++ if (!(regs->msr & MSR_64BIT))
++ *ea &= 0xffffffffUL;
++#endif
++
++ *size = GETSIZE(op.type);
++ if (*type == CACHEOP) {
++ *size = cache_op_size();
++ *ea &= ~(*size - 1);
++ }
++}
++
++static bool is_larx_stcx_instr(int type)
++{
++ return type == LARX || type == STCX;
+ }
+
+ /*
+@@ -678,7 +707,7 @@ int hw_breakpoint_handler(struct die_args *args)
+ struct ppc_inst instr = ppc_inst(0);
+ int type = 0;
+ int size = 0;
+- bool larx_stcx = false;
++ unsigned long ea;
+
+ /* Disable breakpoints during exception handling */
+ hw_breakpoint_disable();
+@@ -692,7 +721,7 @@ int hw_breakpoint_handler(struct die_args *args)
+ rcu_read_lock();
+
+ if (!IS_ENABLED(CONFIG_PPC_8xx))
+- get_instr_detail(regs, &instr, &type, &size, &larx_stcx);
++ get_instr_detail(regs, &instr, &type, &size, &ea);
+
+ for (i = 0; i < nr_wp_slots(); i++) {
+ bp[i] = __this_cpu_read(bp_per_reg[i]);
+@@ -702,7 +731,7 @@ int hw_breakpoint_handler(struct die_args *args)
+ info[i] = counter_arch_bp(bp[i]);
+ info[i]->type &= ~HW_BRK_TYPE_EXTRANEOUS_IRQ;
+
+- if (check_constraints(regs, instr, type, size, info[i])) {
++ if (check_constraints(regs, instr, ea, type, size, info[i])) {
+ if (!IS_ENABLED(CONFIG_PPC_8xx) &&
+ ppc_inst_equal(instr, ppc_inst(0))) {
+ handler_error(bp[i], info[i]);
+@@ -744,7 +773,7 @@ int hw_breakpoint_handler(struct die_args *args)
+ }
+
+ if (!IS_ENABLED(CONFIG_PPC_8xx)) {
+- if (larx_stcx) {
++ if (is_larx_stcx_instr(type)) {
+ for (i = 0; i < nr_wp_slots(); i++) {
+ if (!hit[i])
+ continue;
+diff --git a/arch/powerpc/kernel/rtas.c b/arch/powerpc/kernel/rtas.c
+index a09eba03f180..806d554ce357 100644
+--- a/arch/powerpc/kernel/rtas.c
++++ b/arch/powerpc/kernel/rtas.c
+@@ -843,96 +843,6 @@ static void rtas_percpu_suspend_me(void *info)
+ __rtas_suspend_cpu((struct rtas_suspend_me_data *)info, 1);
+ }
+
+-enum rtas_cpu_state {
+- DOWN,
+- UP,
+-};
+-
+-#ifndef CONFIG_SMP
+-static int rtas_cpu_state_change_mask(enum rtas_cpu_state state,
+- cpumask_var_t cpus)
+-{
+- if (!cpumask_empty(cpus)) {
+- cpumask_clear(cpus);
+- return -EINVAL;
+- } else
+- return 0;
+-}
+-#else
+-/* On return cpumask will be altered to indicate CPUs changed.
+- * CPUs with states changed will be set in the mask,
+- * CPUs with status unchanged will be unset in the mask. */
+-static int rtas_cpu_state_change_mask(enum rtas_cpu_state state,
+- cpumask_var_t cpus)
+-{
+- int cpu;
+- int cpuret = 0;
+- int ret = 0;
+-
+- if (cpumask_empty(cpus))
+- return 0;
+-
+- for_each_cpu(cpu, cpus) {
+- struct device *dev = get_cpu_device(cpu);
+-
+- switch (state) {
+- case DOWN:
+- cpuret = device_offline(dev);
+- break;
+- case UP:
+- cpuret = device_online(dev);
+- break;
+- }
+- if (cpuret < 0) {
+- pr_debug("%s: cpu_%s for cpu#%d returned %d.\n",
+- __func__,
+- ((state == UP) ? "up" : "down"),
+- cpu, cpuret);
+- if (!ret)
+- ret = cpuret;
+- if (state == UP) {
+- /* clear bits for unchanged cpus, return */
+- cpumask_shift_right(cpus, cpus, cpu);
+- cpumask_shift_left(cpus, cpus, cpu);
+- break;
+- } else {
+- /* clear bit for unchanged cpu, continue */
+- cpumask_clear_cpu(cpu, cpus);
+- }
+- }
+- cond_resched();
+- }
+-
+- return ret;
+-}
+-#endif
+-
+-int rtas_online_cpus_mask(cpumask_var_t cpus)
+-{
+- int ret;
+-
+- ret = rtas_cpu_state_change_mask(UP, cpus);
+-
+- if (ret) {
+- cpumask_var_t tmp_mask;
+-
+- if (!alloc_cpumask_var(&tmp_mask, GFP_KERNEL))
+- return ret;
+-
+- /* Use tmp_mask to preserve cpus mask from first failure */
+- cpumask_copy(tmp_mask, cpus);
+- rtas_offline_cpus_mask(tmp_mask);
+- free_cpumask_var(tmp_mask);
+- }
+-
+- return ret;
+-}
+-
+-int rtas_offline_cpus_mask(cpumask_var_t cpus)
+-{
+- return rtas_cpu_state_change_mask(DOWN, cpus);
+-}
+-
+ int rtas_ibm_suspend_me(u64 handle)
+ {
+ long state;
+@@ -940,8 +850,6 @@ int rtas_ibm_suspend_me(u64 handle)
+ unsigned long retbuf[PLPAR_HCALL_BUFSIZE];
+ struct rtas_suspend_me_data data;
+ DECLARE_COMPLETION_ONSTACK(done);
+- cpumask_var_t offline_mask;
+- int cpuret;
+
+ if (!rtas_service_present("ibm,suspend-me"))
+ return -ENOSYS;
+@@ -962,9 +870,6 @@ int rtas_ibm_suspend_me(u64 handle)
+ return -EIO;
+ }
+
+- if (!alloc_cpumask_var(&offline_mask, GFP_KERNEL))
+- return -ENOMEM;
+-
+ atomic_set(&data.working, 0);
+ atomic_set(&data.done, 0);
+ atomic_set(&data.error, 0);
+@@ -973,24 +878,8 @@ int rtas_ibm_suspend_me(u64 handle)
+
+ lock_device_hotplug();
+
+- /* All present CPUs must be online */
+- cpumask_andnot(offline_mask, cpu_present_mask, cpu_online_mask);
+- cpuret = rtas_online_cpus_mask(offline_mask);
+- if (cpuret) {
+- pr_err("%s: Could not bring present CPUs online.\n", __func__);
+- atomic_set(&data.error, cpuret);
+- goto out;
+- }
+-
+ cpu_hotplug_disable();
+
+- /* Check if we raced with a CPU-Offline Operation */
+- if (!cpumask_equal(cpu_present_mask, cpu_online_mask)) {
+- pr_info("%s: Raced against a concurrent CPU-Offline\n", __func__);
+- atomic_set(&data.error, -EAGAIN);
+- goto out_hotplug_enable;
+- }
+-
+ /* Call function on all CPUs. One of us will make the
+ * rtas call
+ */
+@@ -1001,18 +890,11 @@ int rtas_ibm_suspend_me(u64 handle)
+ if (atomic_read(&data.error) != 0)
+ printk(KERN_ERR "Error doing global join\n");
+
+-out_hotplug_enable:
+- cpu_hotplug_enable();
+
+- /* Take down CPUs not online prior to suspend */
+- cpuret = rtas_offline_cpus_mask(offline_mask);
+- if (cpuret)
+- pr_warn("%s: Could not restore CPUs to offline state.\n",
+- __func__);
++ cpu_hotplug_enable();
+
+-out:
+ unlock_device_hotplug();
+- free_cpumask_var(offline_mask);
++
+ return atomic_read(&data.error);
+ }
+
+diff --git a/arch/powerpc/kernel/vdso.c b/arch/powerpc/kernel/vdso.c
+index e0f4ba45b6cc..8dad44262e75 100644
+--- a/arch/powerpc/kernel/vdso.c
++++ b/arch/powerpc/kernel/vdso.c
+@@ -677,7 +677,7 @@ int vdso_getcpu_init(void)
+ node = cpu_to_node(cpu);
+ WARN_ON_ONCE(node > 0xffff);
+
+- val = (cpu & 0xfff) | ((node & 0xffff) << 16);
++ val = (cpu & 0xffff) | ((node & 0xffff) << 16);
+ mtspr(SPRN_SPRG_VDSO_WRITE, val);
+ get_paca()->sprg_vdso = val;
+
+diff --git a/arch/powerpc/mm/book3s64/hash_utils.c b/arch/powerpc/mm/book3s64/hash_utils.c
+index 9b9f92ad0e7a..3d5c9092feb1 100644
+--- a/arch/powerpc/mm/book3s64/hash_utils.c
++++ b/arch/powerpc/mm/book3s64/hash_utils.c
+@@ -663,11 +663,10 @@ static void __init htab_init_page_sizes(void)
+ * Pick a size for the linear mapping. Currently, we only
+ * support 16M, 1M and 4K which is the default
+ */
+- if (IS_ENABLED(STRICT_KERNEL_RWX) &&
++ if (IS_ENABLED(CONFIG_STRICT_KERNEL_RWX) &&
+ (unsigned long)_stext % 0x1000000) {
+ if (mmu_psize_defs[MMU_PAGE_16M].shift)
+- pr_warn("Kernel not 16M aligned, "
+- "disabling 16M linear map alignment");
++ pr_warn("Kernel not 16M aligned, disabling 16M linear map alignment\n");
+ aligned = false;
+ }
+
+diff --git a/arch/powerpc/mm/book3s64/pkeys.c b/arch/powerpc/mm/book3s64/pkeys.c
+index d174106bab67..82ace6acb0aa 100644
+--- a/arch/powerpc/mm/book3s64/pkeys.c
++++ b/arch/powerpc/mm/book3s64/pkeys.c
+@@ -83,13 +83,17 @@ static int pkey_initialize(void)
+ scan_pkey_feature();
+
+ /*
+- * Let's assume 32 pkeys on P8 bare metal, if its not defined by device
+- * tree. We make this exception since skiboot forgot to expose this
+- * property on power8.
++ * Let's assume 32 pkeys on P8/P9 bare metal, if its not defined by device
++ * tree. We make this exception since some version of skiboot forgot to
++ * expose this property on power8/9.
+ */
+- if (!pkeys_devtree_defined && !firmware_has_feature(FW_FEATURE_LPAR) &&
+- cpu_has_feature(CPU_FTRS_POWER8))
+- pkeys_total = 32;
++ if (!pkeys_devtree_defined && !firmware_has_feature(FW_FEATURE_LPAR)) {
++ unsigned long pvr = mfspr(SPRN_PVR);
++
++ if (PVR_VER(pvr) == PVR_POWER8 || PVR_VER(pvr) == PVR_POWER8E ||
++ PVR_VER(pvr) == PVR_POWER8NVL || PVR_VER(pvr) == PVR_POWER9)
++ pkeys_total = 32;
++ }
+
+ /*
+ * Adjust the upper limit, based on the number of bits supported by
+diff --git a/arch/powerpc/mm/book3s64/radix_pgtable.c b/arch/powerpc/mm/book3s64/radix_pgtable.c
+index bb00e0cba119..c2989c171883 100644
+--- a/arch/powerpc/mm/book3s64/radix_pgtable.c
++++ b/arch/powerpc/mm/book3s64/radix_pgtable.c
+@@ -700,6 +700,21 @@ static void free_pmd_table(pmd_t *pmd_start, pud_t *pud)
+ pud_clear(pud);
+ }
+
++static void free_pud_table(pud_t *pud_start, p4d_t *p4d)
++{
++ pud_t *pud;
++ int i;
++
++ for (i = 0; i < PTRS_PER_PUD; i++) {
++ pud = pud_start + i;
++ if (!pud_none(*pud))
++ return;
++ }
++
++ pud_free(&init_mm, pud_start);
++ p4d_clear(p4d);
++}
++
+ struct change_mapping_params {
+ pte_t *pte;
+ unsigned long start;
+@@ -874,6 +889,7 @@ static void __meminit remove_pagetable(unsigned long start, unsigned long end)
+
+ pud_base = (pud_t *)p4d_page_vaddr(*p4d);
+ remove_pud_table(pud_base, addr, next);
++ free_pud_table(pud_base, p4d);
+ }
+
+ spin_unlock(&init_mm.page_table_lock);
+diff --git a/arch/powerpc/platforms/cell/spufs/coredump.c b/arch/powerpc/platforms/cell/spufs/coredump.c
+index 3b75e8f60609..014d1c045bc3 100644
+--- a/arch/powerpc/platforms/cell/spufs/coredump.c
++++ b/arch/powerpc/platforms/cell/spufs/coredump.c
+@@ -105,7 +105,7 @@ static int spufs_arch_write_note(struct spu_context *ctx, int i,
+ size_t sz = spufs_coredump_read[i].size;
+ char fullname[80];
+ struct elf_note en;
+- size_t ret;
++ int ret;
+
+ sprintf(fullname, "SPU/%d/%s", dfd, spufs_coredump_read[i].name);
+ en.n_namesz = strlen(fullname) + 1;
+diff --git a/arch/powerpc/platforms/pseries/hotplug-cpu.c b/arch/powerpc/platforms/pseries/hotplug-cpu.c
+index 3e8cbfe7a80f..6d4ee03d476a 100644
+--- a/arch/powerpc/platforms/pseries/hotplug-cpu.c
++++ b/arch/powerpc/platforms/pseries/hotplug-cpu.c
+@@ -35,54 +35,10 @@
+ #include <asm/topology.h>
+
+ #include "pseries.h"
+-#include "offline_states.h"
+
+ /* This version can't take the spinlock, because it never returns */
+ static int rtas_stop_self_token = RTAS_UNKNOWN_SERVICE;
+
+-static DEFINE_PER_CPU(enum cpu_state_vals, preferred_offline_state) =
+- CPU_STATE_OFFLINE;
+-static DEFINE_PER_CPU(enum cpu_state_vals, current_state) = CPU_STATE_OFFLINE;
+-
+-static enum cpu_state_vals default_offline_state = CPU_STATE_OFFLINE;
+-
+-static bool cede_offline_enabled __read_mostly = true;
+-
+-/*
+- * Enable/disable cede_offline when available.
+- */
+-static int __init setup_cede_offline(char *str)
+-{
+- return (kstrtobool(str, &cede_offline_enabled) == 0);
+-}
+-
+-__setup("cede_offline=", setup_cede_offline);
+-
+-enum cpu_state_vals get_cpu_current_state(int cpu)
+-{
+- return per_cpu(current_state, cpu);
+-}
+-
+-void set_cpu_current_state(int cpu, enum cpu_state_vals state)
+-{
+- per_cpu(current_state, cpu) = state;
+-}
+-
+-enum cpu_state_vals get_preferred_offline_state(int cpu)
+-{
+- return per_cpu(preferred_offline_state, cpu);
+-}
+-
+-void set_preferred_offline_state(int cpu, enum cpu_state_vals state)
+-{
+- per_cpu(preferred_offline_state, cpu) = state;
+-}
+-
+-void set_default_offline_state(int cpu)
+-{
+- per_cpu(preferred_offline_state, cpu) = default_offline_state;
+-}
+-
+ static void rtas_stop_self(void)
+ {
+ static struct rtas_args args;
+@@ -101,9 +57,7 @@ static void rtas_stop_self(void)
+
+ static void pseries_mach_cpu_die(void)
+ {
+- unsigned int cpu = smp_processor_id();
+ unsigned int hwcpu = hard_smp_processor_id();
+- u8 cede_latency_hint = 0;
+
+ local_irq_disable();
+ idle_task_exit();
+@@ -112,49 +66,6 @@ static void pseries_mach_cpu_die(void)
+ else
+ xics_teardown_cpu();
+
+- if (get_preferred_offline_state(cpu) == CPU_STATE_INACTIVE) {
+- set_cpu_current_state(cpu, CPU_STATE_INACTIVE);
+- if (ppc_md.suspend_disable_cpu)
+- ppc_md.suspend_disable_cpu();
+-
+- cede_latency_hint = 2;
+-
+- get_lppaca()->idle = 1;
+- if (!lppaca_shared_proc(get_lppaca()))
+- get_lppaca()->donate_dedicated_cpu = 1;
+-
+- while (get_preferred_offline_state(cpu) == CPU_STATE_INACTIVE) {
+- while (!prep_irq_for_idle()) {
+- local_irq_enable();
+- local_irq_disable();
+- }
+-
+- extended_cede_processor(cede_latency_hint);
+- }
+-
+- local_irq_disable();
+-
+- if (!lppaca_shared_proc(get_lppaca()))
+- get_lppaca()->donate_dedicated_cpu = 0;
+- get_lppaca()->idle = 0;
+-
+- if (get_preferred_offline_state(cpu) == CPU_STATE_ONLINE) {
+- unregister_slb_shadow(hwcpu);
+-
+- hard_irq_disable();
+- /*
+- * Call to start_secondary_resume() will not return.
+- * Kernel stack will be reset and start_secondary()
+- * will be called to continue the online operation.
+- */
+- start_secondary_resume();
+- }
+- }
+-
+- /* Requested state is CPU_STATE_OFFLINE at this point */
+- WARN_ON(get_preferred_offline_state(cpu) != CPU_STATE_OFFLINE);
+-
+- set_cpu_current_state(cpu, CPU_STATE_OFFLINE);
+ unregister_slb_shadow(hwcpu);
+ rtas_stop_self();
+
+@@ -200,24 +111,13 @@ static void pseries_cpu_die(unsigned int cpu)
+ int cpu_status = 1;
+ unsigned int pcpu = get_hard_smp_processor_id(cpu);
+
+- if (get_preferred_offline_state(cpu) == CPU_STATE_INACTIVE) {
+- cpu_status = 1;
+- for (tries = 0; tries < 5000; tries++) {
+- if (get_cpu_current_state(cpu) == CPU_STATE_INACTIVE) {
+- cpu_status = 0;
+- break;
+- }
+- msleep(1);
+- }
+- } else if (get_preferred_offline_state(cpu) == CPU_STATE_OFFLINE) {
++ for (tries = 0; tries < 25; tries++) {
++ cpu_status = smp_query_cpu_stopped(pcpu);
++ if (cpu_status == QCSS_STOPPED ||
++ cpu_status == QCSS_HARDWARE_ERROR)
++ break;
++ cpu_relax();
+
+- for (tries = 0; tries < 25; tries++) {
+- cpu_status = smp_query_cpu_stopped(pcpu);
+- if (cpu_status == QCSS_STOPPED ||
+- cpu_status == QCSS_HARDWARE_ERROR)
+- break;
+- cpu_relax();
+- }
+ }
+
+ if (cpu_status != 0) {
+@@ -359,28 +259,15 @@ static int dlpar_offline_cpu(struct device_node *dn)
+ if (get_hard_smp_processor_id(cpu) != thread)
+ continue;
+
+- if (get_cpu_current_state(cpu) == CPU_STATE_OFFLINE)
++ if (!cpu_online(cpu))
+ break;
+
+- if (get_cpu_current_state(cpu) == CPU_STATE_ONLINE) {
+- set_preferred_offline_state(cpu,
+- CPU_STATE_OFFLINE);
+- cpu_maps_update_done();
+- timed_topology_update(1);
+- rc = device_offline(get_cpu_device(cpu));
+- if (rc)
+- goto out;
+- cpu_maps_update_begin();
+- break;
+- }
+-
+- /*
+- * The cpu is in CPU_STATE_INACTIVE.
+- * Upgrade it's state to CPU_STATE_OFFLINE.
+- */
+- set_preferred_offline_state(cpu, CPU_STATE_OFFLINE);
+- WARN_ON(plpar_hcall_norets(H_PROD, thread) != H_SUCCESS);
+- __cpu_die(cpu);
++ cpu_maps_update_done();
++ timed_topology_update(1);
++ rc = device_offline(get_cpu_device(cpu));
++ if (rc)
++ goto out;
++ cpu_maps_update_begin();
+ break;
+ }
+ if (cpu == num_possible_cpus()) {
+@@ -414,8 +301,6 @@ static int dlpar_online_cpu(struct device_node *dn)
+ for_each_present_cpu(cpu) {
+ if (get_hard_smp_processor_id(cpu) != thread)
+ continue;
+- BUG_ON(get_cpu_current_state(cpu)
+- != CPU_STATE_OFFLINE);
+ cpu_maps_update_done();
+ timed_topology_update(1);
+ find_and_online_cpu_nid(cpu);
+@@ -854,7 +739,6 @@ static int dlpar_cpu_add_by_count(u32 cpus_to_add)
+ parent = of_find_node_by_path("/cpus");
+ if (!parent) {
+ pr_warn("Could not find CPU root node in device tree\n");
+- kfree(cpu_drcs);
+ return -1;
+ }
+
+@@ -1013,27 +897,8 @@ static struct notifier_block pseries_smp_nb = {
+ .notifier_call = pseries_smp_notifier,
+ };
+
+-#define MAX_CEDE_LATENCY_LEVELS 4
+-#define CEDE_LATENCY_PARAM_LENGTH 10
+-#define CEDE_LATENCY_PARAM_MAX_LENGTH \
+- (MAX_CEDE_LATENCY_LEVELS * CEDE_LATENCY_PARAM_LENGTH * sizeof(char))
+-#define CEDE_LATENCY_TOKEN 45
+-
+-static char cede_parameters[CEDE_LATENCY_PARAM_MAX_LENGTH];
+-
+-static int parse_cede_parameters(void)
+-{
+- memset(cede_parameters, 0, CEDE_LATENCY_PARAM_MAX_LENGTH);
+- return rtas_call(rtas_token("ibm,get-system-parameter"), 3, 1,
+- NULL,
+- CEDE_LATENCY_TOKEN,
+- __pa(cede_parameters),
+- CEDE_LATENCY_PARAM_MAX_LENGTH);
+-}
+-
+ static int __init pseries_cpu_hotplug_init(void)
+ {
+- int cpu;
+ int qcss_tok;
+
+ #ifdef CONFIG_ARCH_CPU_PROBE_RELEASE
+@@ -1056,16 +921,8 @@ static int __init pseries_cpu_hotplug_init(void)
+ smp_ops->cpu_die = pseries_cpu_die;
+
+ /* Processors can be added/removed only on LPAR */
+- if (firmware_has_feature(FW_FEATURE_LPAR)) {
++ if (firmware_has_feature(FW_FEATURE_LPAR))
+ of_reconfig_notifier_register(&pseries_smp_nb);
+- cpu_maps_update_begin();
+- if (cede_offline_enabled && parse_cede_parameters() == 0) {
+- default_offline_state = CPU_STATE_INACTIVE;
+- for_each_online_cpu(cpu)
+- set_default_offline_state(cpu);
+- }
+- cpu_maps_update_done();
+- }
+
+ return 0;
+ }
+diff --git a/arch/powerpc/platforms/pseries/offline_states.h b/arch/powerpc/platforms/pseries/offline_states.h
+deleted file mode 100644
+index 51414aee2862..000000000000
+--- a/arch/powerpc/platforms/pseries/offline_states.h
++++ /dev/null
+@@ -1,38 +0,0 @@
+-/* SPDX-License-Identifier: GPL-2.0 */
+-#ifndef _OFFLINE_STATES_H_
+-#define _OFFLINE_STATES_H_
+-
+-/* Cpu offline states go here */
+-enum cpu_state_vals {
+- CPU_STATE_OFFLINE,
+- CPU_STATE_INACTIVE,
+- CPU_STATE_ONLINE,
+- CPU_MAX_OFFLINE_STATES
+-};
+-
+-#ifdef CONFIG_HOTPLUG_CPU
+-extern enum cpu_state_vals get_cpu_current_state(int cpu);
+-extern void set_cpu_current_state(int cpu, enum cpu_state_vals state);
+-extern void set_preferred_offline_state(int cpu, enum cpu_state_vals state);
+-extern void set_default_offline_state(int cpu);
+-#else
+-static inline enum cpu_state_vals get_cpu_current_state(int cpu)
+-{
+- return CPU_STATE_ONLINE;
+-}
+-
+-static inline void set_cpu_current_state(int cpu, enum cpu_state_vals state)
+-{
+-}
+-
+-static inline void set_preferred_offline_state(int cpu, enum cpu_state_vals state)
+-{
+-}
+-
+-static inline void set_default_offline_state(int cpu)
+-{
+-}
+-#endif
+-
+-extern enum cpu_state_vals get_preferred_offline_state(int cpu);
+-#endif
+diff --git a/arch/powerpc/platforms/pseries/pmem.c b/arch/powerpc/platforms/pseries/pmem.c
+index f860a897a9e0..f827de7087e9 100644
+--- a/arch/powerpc/platforms/pseries/pmem.c
++++ b/arch/powerpc/platforms/pseries/pmem.c
+@@ -24,7 +24,6 @@
+ #include <asm/topology.h>
+
+ #include "pseries.h"
+-#include "offline_states.h"
+
+ static struct device_node *pmem_node;
+
+diff --git a/arch/powerpc/platforms/pseries/smp.c b/arch/powerpc/platforms/pseries/smp.c
+index 6891710833be..7ebacac03dc3 100644
+--- a/arch/powerpc/platforms/pseries/smp.c
++++ b/arch/powerpc/platforms/pseries/smp.c
+@@ -44,8 +44,6 @@
+ #include <asm/svm.h>
+
+ #include "pseries.h"
+-#include "offline_states.h"
+-
+
+ /*
+ * The Primary thread of each non-boot processor was started from the OF client
+@@ -108,10 +106,7 @@ static inline int smp_startup_cpu(unsigned int lcpu)
+
+ /* Fixup atomic count: it exited inside IRQ handler. */
+ task_thread_info(paca_ptrs[lcpu]->__current)->preempt_count = 0;
+-#ifdef CONFIG_HOTPLUG_CPU
+- if (get_cpu_current_state(lcpu) == CPU_STATE_INACTIVE)
+- goto out;
+-#endif
++
+ /*
+ * If the RTAS start-cpu token does not exist then presume the
+ * cpu is already spinning.
+@@ -126,9 +121,6 @@ static inline int smp_startup_cpu(unsigned int lcpu)
+ return 0;
+ }
+
+-#ifdef CONFIG_HOTPLUG_CPU
+-out:
+-#endif
+ return 1;
+ }
+
+@@ -143,10 +135,6 @@ static void smp_setup_cpu(int cpu)
+ vpa_init(cpu);
+
+ cpumask_clear_cpu(cpu, of_spin_mask);
+-#ifdef CONFIG_HOTPLUG_CPU
+- set_cpu_current_state(cpu, CPU_STATE_ONLINE);
+- set_default_offline_state(cpu);
+-#endif
+ }
+
+ static int smp_pSeries_kick_cpu(int nr)
+@@ -163,20 +151,6 @@ static int smp_pSeries_kick_cpu(int nr)
+ * the processor will continue on to secondary_start
+ */
+ paca_ptrs[nr]->cpu_start = 1;
+-#ifdef CONFIG_HOTPLUG_CPU
+- set_preferred_offline_state(nr, CPU_STATE_ONLINE);
+-
+- if (get_cpu_current_state(nr) == CPU_STATE_INACTIVE) {
+- long rc;
+- unsigned long hcpuid;
+-
+- hcpuid = get_hard_smp_processor_id(nr);
+- rc = plpar_hcall_norets(H_PROD, hcpuid);
+- if (rc != H_SUCCESS)
+- printk(KERN_ERR "Error: Prod to wake up processor %d "
+- "Ret= %ld\n", nr, rc);
+- }
+-#endif
+
+ return 0;
+ }
+diff --git a/arch/powerpc/platforms/pseries/suspend.c b/arch/powerpc/platforms/pseries/suspend.c
+index 0a24a5a185f0..f789693f61f4 100644
+--- a/arch/powerpc/platforms/pseries/suspend.c
++++ b/arch/powerpc/platforms/pseries/suspend.c
+@@ -132,15 +132,11 @@ static ssize_t store_hibernate(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t count)
+ {
+- cpumask_var_t offline_mask;
+ int rc;
+
+ if (!capable(CAP_SYS_ADMIN))
+ return -EPERM;
+
+- if (!alloc_cpumask_var(&offline_mask, GFP_KERNEL))
+- return -ENOMEM;
+-
+ stream_id = simple_strtoul(buf, NULL, 16);
+
+ do {
+@@ -150,32 +146,16 @@ static ssize_t store_hibernate(struct device *dev,
+ } while (rc == -EAGAIN);
+
+ if (!rc) {
+- /* All present CPUs must be online */
+- cpumask_andnot(offline_mask, cpu_present_mask,
+- cpu_online_mask);
+- rc = rtas_online_cpus_mask(offline_mask);
+- if (rc) {
+- pr_err("%s: Could not bring present CPUs online.\n",
+- __func__);
+- goto out;
+- }
+-
+ stop_topology_update();
+ rc = pm_suspend(PM_SUSPEND_MEM);
+ start_topology_update();
+-
+- /* Take down CPUs not online prior to suspend */
+- if (!rtas_offline_cpus_mask(offline_mask))
+- pr_warn("%s: Could not restore CPUs to offline "
+- "state.\n", __func__);
+ }
+
+ stream_id = 0;
+
+ if (!rc)
+ rc = count;
+-out:
+- free_cpumask_var(offline_mask);
++
+ return rc;
+ }
+
+diff --git a/arch/s390/include/asm/topology.h b/arch/s390/include/asm/topology.h
+index fbb507504a3b..3a0ac0c7a9a3 100644
+--- a/arch/s390/include/asm/topology.h
++++ b/arch/s390/include/asm/topology.h
+@@ -86,12 +86,6 @@ static inline const struct cpumask *cpumask_of_node(int node)
+
+ #define pcibus_to_node(bus) __pcibus_to_node(bus)
+
+-#define node_distance(a, b) __node_distance(a, b)
+-static inline int __node_distance(int a, int b)
+-{
+- return 0;
+-}
+-
+ #else /* !CONFIG_NUMA */
+
+ #define numa_node_id numa_node_id
+diff --git a/arch/s390/mm/gmap.c b/arch/s390/mm/gmap.c
+index 190357ff86b3..46c1bf2a3b4b 100644
+--- a/arch/s390/mm/gmap.c
++++ b/arch/s390/mm/gmap.c
+@@ -2485,23 +2485,36 @@ void gmap_sync_dirty_log_pmd(struct gmap *gmap, unsigned long bitmap[4],
+ }
+ EXPORT_SYMBOL_GPL(gmap_sync_dirty_log_pmd);
+
++#ifdef CONFIG_TRANSPARENT_HUGEPAGE
++static int thp_split_walk_pmd_entry(pmd_t *pmd, unsigned long addr,
++ unsigned long end, struct mm_walk *walk)
++{
++ struct vm_area_struct *vma = walk->vma;
++
++ split_huge_pmd(vma, pmd, addr);
++ return 0;
++}
++
++static const struct mm_walk_ops thp_split_walk_ops = {
++ .pmd_entry = thp_split_walk_pmd_entry,
++};
++
+ static inline void thp_split_mm(struct mm_struct *mm)
+ {
+-#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+ struct vm_area_struct *vma;
+- unsigned long addr;
+
+ for (vma = mm->mmap; vma != NULL; vma = vma->vm_next) {
+- for (addr = vma->vm_start;
+- addr < vma->vm_end;
+- addr += PAGE_SIZE)
+- follow_page(vma, addr, FOLL_SPLIT);
+ vma->vm_flags &= ~VM_HUGEPAGE;
+ vma->vm_flags |= VM_NOHUGEPAGE;
++ walk_page_vma(vma, &thp_split_walk_ops, NULL);
+ }
+ mm->def_flags |= VM_NOHUGEPAGE;
+-#endif
+ }
++#else
++static inline void thp_split_mm(struct mm_struct *mm)
++{
++}
++#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
+
+ /*
+ * Remove all empty zero pages from the mapping for lazy refaulting
+diff --git a/arch/s390/net/bpf_jit_comp.c b/arch/s390/net/bpf_jit_comp.c
+index f4242b894cf2..a78c5b59e1ab 100644
+--- a/arch/s390/net/bpf_jit_comp.c
++++ b/arch/s390/net/bpf_jit_comp.c
+@@ -489,6 +489,24 @@ static void save_restore_regs(struct bpf_jit *jit, int op, u32 stack_depth)
+ } while (re <= last);
+ }
+
++static void bpf_skip(struct bpf_jit *jit, int size)
++{
++ if (size >= 6 && !is_valid_rel(size)) {
++ /* brcl 0xf,size */
++ EMIT6_PCREL_RIL(0xc0f4000000, size);
++ size -= 6;
++ } else if (size >= 4 && is_valid_rel(size)) {
++ /* brc 0xf,size */
++ EMIT4_PCREL(0xa7f40000, size);
++ size -= 4;
++ }
++ while (size >= 2) {
++ /* bcr 0,%0 */
++ _EMIT2(0x0700);
++ size -= 2;
++ }
++}
++
+ /*
+ * Emit function prologue
+ *
+@@ -1268,8 +1286,12 @@ static noinline int bpf_jit_insn(struct bpf_jit *jit, struct bpf_prog *fp,
+ last = (i == fp->len - 1) ? 1 : 0;
+ if (last)
+ break;
+- /* j <exit> */
+- EMIT4_PCREL(0xa7f40000, jit->exit_ip - jit->prg);
++ if (!is_first_pass(jit) && can_use_rel(jit, jit->exit_ip))
++ /* brc 0xf, <exit> */
++ EMIT4_PCREL_RIC(0xa7040000, 0xf, jit->exit_ip);
++ else
++ /* brcl 0xf, <exit> */
++ EMIT6_PCREL_RILC(0xc0040000, 0xf, jit->exit_ip);
+ break;
+ /*
+ * Branch relative (number of skipped instructions) to offset on
+@@ -1417,21 +1439,10 @@ branch_ks:
+ }
+ break;
+ branch_ku:
+- is_jmp32 = BPF_CLASS(insn->code) == BPF_JMP32;
+- /* clfi or clgfi %dst,imm */
+- EMIT6_IMM(is_jmp32 ? 0xc20f0000 : 0xc20e0000,
+- dst_reg, imm);
+- if (!is_first_pass(jit) &&
+- can_use_rel(jit, addrs[i + off + 1])) {
+- /* brc mask,off */
+- EMIT4_PCREL_RIC(0xa7040000,
+- mask >> 12, addrs[i + off + 1]);
+- } else {
+- /* brcl mask,off */
+- EMIT6_PCREL_RILC(0xc0040000,
+- mask >> 12, addrs[i + off + 1]);
+- }
+- break;
++ /* lgfi %w1,imm (load sign extend imm) */
++ src_reg = REG_1;
++ EMIT6_IMM(0xc0010000, src_reg, imm);
++ goto branch_xu;
+ branch_xs:
+ is_jmp32 = BPF_CLASS(insn->code) == BPF_JMP32;
+ if (!is_first_pass(jit) &&
+@@ -1510,7 +1521,14 @@ static bool bpf_is_new_addr_sane(struct bpf_jit *jit, int i)
+ */
+ static int bpf_set_addr(struct bpf_jit *jit, int i)
+ {
+- if (!bpf_is_new_addr_sane(jit, i))
++ int delta;
++
++ if (is_codegen_pass(jit)) {
++ delta = jit->prg - jit->addrs[i];
++ if (delta < 0)
++ bpf_skip(jit, -delta);
++ }
++ if (WARN_ON_ONCE(!bpf_is_new_addr_sane(jit, i)))
+ return -1;
+ jit->addrs[i] = jit->prg;
+ return 0;
+diff --git a/arch/x86/crypto/aes_ctrby8_avx-x86_64.S b/arch/x86/crypto/aes_ctrby8_avx-x86_64.S
+index ec437db1fa54..494a3bda8487 100644
+--- a/arch/x86/crypto/aes_ctrby8_avx-x86_64.S
++++ b/arch/x86/crypto/aes_ctrby8_avx-x86_64.S
+@@ -127,10 +127,6 @@ ddq_add_8:
+
+ /* generate a unique variable for ddq_add_x */
+
+-.macro setddq n
+- var_ddq_add = ddq_add_\n
+-.endm
+-
+ /* generate a unique variable for xmm register */
+ .macro setxdata n
+ var_xdata = %xmm\n
+@@ -140,9 +136,7 @@ ddq_add_8:
+
+ .macro club name, id
+ .altmacro
+- .if \name == DDQ_DATA
+- setddq %\id
+- .elseif \name == XDATA
++ .if \name == XDATA
+ setxdata %\id
+ .endif
+ .noaltmacro
+@@ -165,9 +159,8 @@ ddq_add_8:
+
+ .set i, 1
+ .rept (by - 1)
+- club DDQ_DATA, i
+ club XDATA, i
+- vpaddq var_ddq_add(%rip), xcounter, var_xdata
++ vpaddq (ddq_add_1 + 16 * (i - 1))(%rip), xcounter, var_xdata
+ vptest ddq_low_msk(%rip), var_xdata
+ jnz 1f
+ vpaddq ddq_high_add_1(%rip), var_xdata, var_xdata
+@@ -180,8 +173,7 @@ ddq_add_8:
+ vmovdqa 1*16(p_keys), xkeyA
+
+ vpxor xkey0, xdata0, xdata0
+- club DDQ_DATA, by
+- vpaddq var_ddq_add(%rip), xcounter, xcounter
++ vpaddq (ddq_add_1 + 16 * (by - 1))(%rip), xcounter, xcounter
+ vptest ddq_low_msk(%rip), xcounter
+ jnz 1f
+ vpaddq ddq_high_add_1(%rip), xcounter, xcounter
+diff --git a/arch/x86/crypto/aesni-intel_asm.S b/arch/x86/crypto/aesni-intel_asm.S
+index 54e7d15dbd0d..7d4298e6d4cb 100644
+--- a/arch/x86/crypto/aesni-intel_asm.S
++++ b/arch/x86/crypto/aesni-intel_asm.S
+@@ -266,7 +266,7 @@ ALL_F: .octa 0xffffffffffffffffffffffffffffffff
+ PSHUFB_XMM %xmm2, %xmm0
+ movdqu %xmm0, CurCount(%arg2) # ctx_data.current_counter = iv
+
+- PRECOMPUTE \SUBKEY, %xmm1, %xmm2, %xmm3, %xmm4, %xmm5, %xmm6, %xmm7,
++ PRECOMPUTE \SUBKEY, %xmm1, %xmm2, %xmm3, %xmm4, %xmm5, %xmm6, %xmm7
+ movdqu HashKey(%arg2), %xmm13
+
+ CALC_AAD_HASH %xmm13, \AAD, \AADLEN, %xmm0, %xmm1, %xmm2, %xmm3, \
+@@ -978,7 +978,7 @@ _initial_blocks_done\@:
+ * arg1, %arg3, %arg4 are used as pointers only, not modified
+ * %r11 is the data offset value
+ */
+-.macro GHASH_4_ENCRYPT_4_PARALLEL_ENC TMP1 TMP2 TMP3 TMP4 TMP5 \
++.macro GHASH_4_ENCRYPT_4_PARALLEL_enc TMP1 TMP2 TMP3 TMP4 TMP5 \
+ TMP6 XMM0 XMM1 XMM2 XMM3 XMM4 XMM5 XMM6 XMM7 XMM8 operation
+
+ movdqa \XMM1, \XMM5
+@@ -1186,7 +1186,7 @@ aes_loop_par_enc_done\@:
+ * arg1, %arg3, %arg4 are used as pointers only, not modified
+ * %r11 is the data offset value
+ */
+-.macro GHASH_4_ENCRYPT_4_PARALLEL_DEC TMP1 TMP2 TMP3 TMP4 TMP5 \
++.macro GHASH_4_ENCRYPT_4_PARALLEL_dec TMP1 TMP2 TMP3 TMP4 TMP5 \
+ TMP6 XMM0 XMM1 XMM2 XMM3 XMM4 XMM5 XMM6 XMM7 XMM8 operation
+
+ movdqa \XMM1, \XMM5
+diff --git a/arch/x86/crypto/crc32c-pcl-intel-asm_64.S b/arch/x86/crypto/crc32c-pcl-intel-asm_64.S
+index 8501ec4532f4..442599cbe796 100644
+--- a/arch/x86/crypto/crc32c-pcl-intel-asm_64.S
++++ b/arch/x86/crypto/crc32c-pcl-intel-asm_64.S
+@@ -170,7 +170,7 @@ continue_block:
+
+ ## branch into array
+ lea jump_table(%rip), %bufp
+- movzxw (%bufp, %rax, 2), len
++ movzwq (%bufp, %rax, 2), len
+ lea crc_array(%rip), %bufp
+ lea (%bufp, len, 1), %bufp
+ JMP_NOSPEC bufp
+diff --git a/arch/x86/events/intel/uncore_snb.c b/arch/x86/events/intel/uncore_snb.c
+index 3de1065eefc4..1038e9f1e354 100644
+--- a/arch/x86/events/intel/uncore_snb.c
++++ b/arch/x86/events/intel/uncore_snb.c
+@@ -1085,6 +1085,7 @@ static struct pci_dev *tgl_uncore_get_mc_dev(void)
+ }
+
+ #define TGL_UNCORE_MMIO_IMC_MEM_OFFSET 0x10000
++#define TGL_UNCORE_PCI_IMC_MAP_SIZE 0xe000
+
+ static void tgl_uncore_imc_freerunning_init_box(struct intel_uncore_box *box)
+ {
+@@ -1112,7 +1113,7 @@ static void tgl_uncore_imc_freerunning_init_box(struct intel_uncore_box *box)
+ addr |= ((resource_size_t)mch_bar << 32);
+ #endif
+
+- box->io_addr = ioremap(addr, SNB_UNCORE_PCI_IMC_MAP_SIZE);
++ box->io_addr = ioremap(addr, TGL_UNCORE_PCI_IMC_MAP_SIZE);
+ }
+
+ static struct intel_uncore_ops tgl_uncore_imc_freerunning_ops = {
+diff --git a/arch/x86/include/asm/topology.h b/arch/x86/include/asm/topology.h
+index 79d8d5496330..f4234575f3fd 100644
+--- a/arch/x86/include/asm/topology.h
++++ b/arch/x86/include/asm/topology.h
+@@ -193,7 +193,7 @@ static inline void sched_clear_itmt_support(void)
+ }
+ #endif /* CONFIG_SCHED_MC_PRIO */
+
+-#ifdef CONFIG_SMP
++#if defined(CONFIG_SMP) && defined(CONFIG_X86_64)
+ #include <asm/cpufeature.h>
+
+ DECLARE_STATIC_KEY_FALSE(arch_scale_freq_key);
+diff --git a/arch/x86/include/asm/uaccess.h b/arch/x86/include/asm/uaccess.h
+index 18dfa07d3ef0..2f3e8f2a958f 100644
+--- a/arch/x86/include/asm/uaccess.h
++++ b/arch/x86/include/asm/uaccess.h
+@@ -314,11 +314,14 @@ do { \
+
+ #define __get_user_size(x, ptr, size, retval) \
+ do { \
++ unsigned char x_u8__; \
++ \
+ retval = 0; \
+ __chk_user_ptr(ptr); \
+ switch (size) { \
+ case 1: \
+- __get_user_asm(x, ptr, retval, "b", "=q"); \
++ __get_user_asm(x_u8__, ptr, retval, "b", "=q"); \
++ (x) = x_u8__; \
+ break; \
+ case 2: \
+ __get_user_asm(x, ptr, retval, "w", "=r"); \
+diff --git a/arch/x86/kernel/apic/io_apic.c b/arch/x86/kernel/apic/io_apic.c
+index 81ffcfbfaef2..21325a4a78b9 100644
+--- a/arch/x86/kernel/apic/io_apic.c
++++ b/arch/x86/kernel/apic/io_apic.c
+@@ -2335,8 +2335,13 @@ static int mp_irqdomain_create(int ioapic)
+
+ static void ioapic_destroy_irqdomain(int idx)
+ {
++ struct ioapic_domain_cfg *cfg = &ioapics[idx].irqdomain_cfg;
++ struct fwnode_handle *fn = ioapics[idx].irqdomain->fwnode;
++
+ if (ioapics[idx].irqdomain) {
+ irq_domain_remove(ioapics[idx].irqdomain);
++ if (!cfg->dev)
++ irq_domain_free_fwnode(fn);
+ ioapics[idx].irqdomain = NULL;
+ }
+ }
+diff --git a/arch/x86/kernel/cpu/mce/inject.c b/arch/x86/kernel/cpu/mce/inject.c
+index 0593b192eb8f..7843ab3fde09 100644
+--- a/arch/x86/kernel/cpu/mce/inject.c
++++ b/arch/x86/kernel/cpu/mce/inject.c
+@@ -511,7 +511,7 @@ static void do_inject(void)
+ */
+ if (inj_type == DFR_INT_INJ) {
+ i_mce.status |= MCI_STATUS_DEFERRED;
+- i_mce.status |= (i_mce.status & ~MCI_STATUS_UC);
++ i_mce.status &= ~MCI_STATUS_UC;
+ }
+
+ /*
+diff --git a/arch/x86/kernel/process_64.c b/arch/x86/kernel/process_64.c
+index 9a97415b2139..3ebc70bd01e8 100644
+--- a/arch/x86/kernel/process_64.c
++++ b/arch/x86/kernel/process_64.c
+@@ -314,7 +314,7 @@ static unsigned long x86_fsgsbase_read_task(struct task_struct *task,
+ */
+ mutex_lock(&task->mm->context.lock);
+ ldt = task->mm->context.ldt;
+- if (unlikely(idx >= ldt->nr_entries))
++ if (unlikely(!ldt || idx >= ldt->nr_entries))
+ base = 0;
+ else
+ base = get_desc_base(ldt->entries + idx);
+diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
+index ffbd9a3d78d8..518ac6bf752e 100644
+--- a/arch/x86/kernel/smpboot.c
++++ b/arch/x86/kernel/smpboot.c
+@@ -56,6 +56,7 @@
+ #include <linux/cpuidle.h>
+ #include <linux/numa.h>
+ #include <linux/pgtable.h>
++#include <linux/overflow.h>
+
+ #include <asm/acpi.h>
+ #include <asm/desc.h>
+@@ -1777,6 +1778,7 @@ void native_play_dead(void)
+
+ #endif
+
++#ifdef CONFIG_X86_64
+ /*
+ * APERF/MPERF frequency ratio computation.
+ *
+@@ -1975,6 +1977,7 @@ static bool core_set_max_freq_ratio(u64 *base_freq, u64 *turbo_freq)
+ static bool intel_set_max_freq_ratio(void)
+ {
+ u64 base_freq, turbo_freq;
++ u64 turbo_ratio;
+
+ if (slv_set_max_freq_ratio(&base_freq, &turbo_freq))
+ goto out;
+@@ -2000,15 +2003,23 @@ out:
+ /*
+ * Some hypervisors advertise X86_FEATURE_APERFMPERF
+ * but then fill all MSR's with zeroes.
++ * Some CPUs have turbo boost but don't declare any turbo ratio
++ * in MSR_TURBO_RATIO_LIMIT.
+ */
+- if (!base_freq) {
+- pr_debug("Couldn't determine cpu base frequency, necessary for scale-invariant accounting.\n");
++ if (!base_freq || !turbo_freq) {
++ pr_debug("Couldn't determine cpu base or turbo frequency, necessary for scale-invariant accounting.\n");
+ return false;
+ }
+
+- arch_turbo_freq_ratio = div_u64(turbo_freq * SCHED_CAPACITY_SCALE,
+- base_freq);
++ turbo_ratio = div_u64(turbo_freq * SCHED_CAPACITY_SCALE, base_freq);
++ if (!turbo_ratio) {
++ pr_debug("Non-zero turbo and base frequencies led to a 0 ratio.\n");
++ return false;
++ }
++
++ arch_turbo_freq_ratio = turbo_ratio;
+ arch_set_max_freq_ratio(turbo_disabled());
++
+ return true;
+ }
+
+@@ -2048,11 +2059,19 @@ static void init_freq_invariance(bool secondary)
+ }
+ }
+
++static void disable_freq_invariance_workfn(struct work_struct *work)
++{
++ static_branch_disable(&arch_scale_freq_key);
++}
++
++static DECLARE_WORK(disable_freq_invariance_work,
++ disable_freq_invariance_workfn);
++
+ DEFINE_PER_CPU(unsigned long, arch_freq_scale) = SCHED_CAPACITY_SCALE;
+
+ void arch_scale_freq_tick(void)
+ {
+- u64 freq_scale;
++ u64 freq_scale = SCHED_CAPACITY_SCALE;
+ u64 aperf, mperf;
+ u64 acnt, mcnt;
+
+@@ -2064,19 +2083,32 @@ void arch_scale_freq_tick(void)
+
+ acnt = aperf - this_cpu_read(arch_prev_aperf);
+ mcnt = mperf - this_cpu_read(arch_prev_mperf);
+- if (!mcnt)
+- return;
+
+ this_cpu_write(arch_prev_aperf, aperf);
+ this_cpu_write(arch_prev_mperf, mperf);
+
+- acnt <<= 2*SCHED_CAPACITY_SHIFT;
+- mcnt *= arch_max_freq_ratio;
++ if (check_shl_overflow(acnt, 2*SCHED_CAPACITY_SHIFT, &acnt))
++ goto error;
++
++ if (check_mul_overflow(mcnt, arch_max_freq_ratio, &mcnt) || !mcnt)
++ goto error;
+
+ freq_scale = div64_u64(acnt, mcnt);
++ if (!freq_scale)
++ goto error;
+
+ if (freq_scale > SCHED_CAPACITY_SCALE)
+ freq_scale = SCHED_CAPACITY_SCALE;
+
+ this_cpu_write(arch_freq_scale, freq_scale);
++ return;
++
++error:
++ pr_warn("Scheduler frequency invariance went wobbly, disabling!\n");
++ schedule_work(&disable_freq_invariance_work);
++}
++#else
++static inline void init_freq_invariance(bool secondary)
++{
+ }
++#endif /* CONFIG_X86_64 */
+diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
+index 5bbf76189afa..f8ead44c3265 100644
+--- a/arch/x86/kvm/svm/svm.c
++++ b/arch/x86/kvm/svm/svm.c
+@@ -2522,7 +2522,7 @@ static int svm_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr)
+ !guest_cpuid_has(vcpu, X86_FEATURE_AMD_SSBD))
+ return 1;
+
+- if (data & ~kvm_spec_ctrl_valid_bits(vcpu))
++ if (kvm_spec_ctrl_test_value(data))
+ return 1;
+
+ svm->spec_ctrl = data;
+diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
+index 13745f2a5ecd..eb33c764d159 100644
+--- a/arch/x86/kvm/vmx/vmx.c
++++ b/arch/x86/kvm/vmx/vmx.c
+@@ -2062,7 +2062,7 @@ static int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
+ !guest_cpuid_has(vcpu, X86_FEATURE_SPEC_CTRL))
+ return 1;
+
+- if (data & ~kvm_spec_ctrl_valid_bits(vcpu))
++ if (kvm_spec_ctrl_test_value(data))
+ return 1;
+
+ vmx->spec_ctrl = data;
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index 88c593f83b28..4fe976c2495e 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -10676,28 +10676,32 @@ bool kvm_arch_no_poll(struct kvm_vcpu *vcpu)
+ }
+ EXPORT_SYMBOL_GPL(kvm_arch_no_poll);
+
+-u64 kvm_spec_ctrl_valid_bits(struct kvm_vcpu *vcpu)
++
++int kvm_spec_ctrl_test_value(u64 value)
+ {
+- uint64_t bits = SPEC_CTRL_IBRS | SPEC_CTRL_STIBP | SPEC_CTRL_SSBD;
++ /*
++ * test that setting IA32_SPEC_CTRL to given value
++ * is allowed by the host processor
++ */
++
++ u64 saved_value;
++ unsigned long flags;
++ int ret = 0;
+
+- /* The STIBP bit doesn't fault even if it's not advertised */
+- if (!guest_cpuid_has(vcpu, X86_FEATURE_SPEC_CTRL) &&
+- !guest_cpuid_has(vcpu, X86_FEATURE_AMD_IBRS))
+- bits &= ~(SPEC_CTRL_IBRS | SPEC_CTRL_STIBP);
+- if (!boot_cpu_has(X86_FEATURE_SPEC_CTRL) &&
+- !boot_cpu_has(X86_FEATURE_AMD_IBRS))
+- bits &= ~(SPEC_CTRL_IBRS | SPEC_CTRL_STIBP);
++ local_irq_save(flags);
+
+- if (!guest_cpuid_has(vcpu, X86_FEATURE_SPEC_CTRL_SSBD) &&
+- !guest_cpuid_has(vcpu, X86_FEATURE_AMD_SSBD))
+- bits &= ~SPEC_CTRL_SSBD;
+- if (!boot_cpu_has(X86_FEATURE_SPEC_CTRL_SSBD) &&
+- !boot_cpu_has(X86_FEATURE_AMD_SSBD))
+- bits &= ~SPEC_CTRL_SSBD;
++ if (rdmsrl_safe(MSR_IA32_SPEC_CTRL, &saved_value))
++ ret = 1;
++ else if (wrmsrl_safe(MSR_IA32_SPEC_CTRL, value))
++ ret = 1;
++ else
++ wrmsrl(MSR_IA32_SPEC_CTRL, saved_value);
+
+- return bits;
++ local_irq_restore(flags);
++
++ return ret;
+ }
+-EXPORT_SYMBOL_GPL(kvm_spec_ctrl_valid_bits);
++EXPORT_SYMBOL_GPL(kvm_spec_ctrl_test_value);
+
+ EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_exit);
+ EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_fast_mmio);
+diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
+index 6eb62e97e59f..1878799d8661 100644
+--- a/arch/x86/kvm/x86.h
++++ b/arch/x86/kvm/x86.h
+@@ -363,7 +363,7 @@ static inline bool kvm_dr7_valid(u64 data)
+
+ void kvm_load_guest_xsave_state(struct kvm_vcpu *vcpu);
+ void kvm_load_host_xsave_state(struct kvm_vcpu *vcpu);
+-u64 kvm_spec_ctrl_valid_bits(struct kvm_vcpu *vcpu);
++int kvm_spec_ctrl_test_value(u64 value);
+ bool kvm_vcpu_exit_request(struct kvm_vcpu *vcpu);
+
+ #endif
+diff --git a/block/blk-iocost.c b/block/blk-iocost.c
+index 8ac4aad66ebc..86ba6fd254e1 100644
+--- a/block/blk-iocost.c
++++ b/block/blk-iocost.c
+@@ -1370,7 +1370,7 @@ static void ioc_timer_fn(struct timer_list *timer)
+ * should have woken up in the last period and expire idle iocgs.
+ */
+ list_for_each_entry_safe(iocg, tiocg, &ioc->active_iocgs, active_list) {
+- if (!waitqueue_active(&iocg->waitq) && iocg->abs_vdebt &&
++ if (!waitqueue_active(&iocg->waitq) && !iocg->abs_vdebt &&
+ !iocg_is_idle(iocg))
+ continue;
+
+diff --git a/block/blk-zoned.c b/block/blk-zoned.c
+index 23831fa8701d..480dfff69a00 100644
+--- a/block/blk-zoned.c
++++ b/block/blk-zoned.c
+@@ -497,6 +497,9 @@ int blk_revalidate_disk_zones(struct gendisk *disk,
+ if (WARN_ON_ONCE(!queue_is_mq(q)))
+ return -EIO;
+
++ if (!get_capacity(disk))
++ return -EIO;
++
+ /*
+ * Ensure that all memory allocations in this context are done as if
+ * GFP_NOIO was specified.
+diff --git a/drivers/acpi/acpica/exprep.c b/drivers/acpi/acpica/exprep.c
+index a4e306690a21..4a0f03157e08 100644
+--- a/drivers/acpi/acpica/exprep.c
++++ b/drivers/acpi/acpica/exprep.c
+@@ -473,10 +473,6 @@ acpi_status acpi_ex_prep_field_value(struct acpi_create_field_info *info)
+ (u8)access_byte_width;
+ }
+ }
+- /* An additional reference for the container */
+-
+- acpi_ut_add_reference(obj_desc->field.region_obj);
+-
+ ACPI_DEBUG_PRINT((ACPI_DB_BFIELD,
+ "RegionField: BitOff %X, Off %X, Gran %X, Region %p\n",
+ obj_desc->field.start_field_bit_offset,
+diff --git a/drivers/acpi/acpica/utdelete.c b/drivers/acpi/acpica/utdelete.c
+index c365faf4e6cd..4c0d4e434196 100644
+--- a/drivers/acpi/acpica/utdelete.c
++++ b/drivers/acpi/acpica/utdelete.c
+@@ -568,11 +568,6 @@ acpi_ut_update_object_reference(union acpi_operand_object *object, u16 action)
+ next_object = object->buffer_field.buffer_obj;
+ break;
+
+- case ACPI_TYPE_LOCAL_REGION_FIELD:
+-
+- next_object = object->field.region_obj;
+- break;
+-
+ case ACPI_TYPE_LOCAL_BANK_FIELD:
+
+ next_object = object->bank_field.bank_obj;
+@@ -613,6 +608,7 @@ acpi_ut_update_object_reference(union acpi_operand_object *object, u16 action)
+ }
+ break;
+
++ case ACPI_TYPE_LOCAL_REGION_FIELD:
+ case ACPI_TYPE_REGION:
+ default:
+
+diff --git a/drivers/base/dd.c b/drivers/base/dd.c
+index 48ca81cb8ebc..e8628716ea34 100644
+--- a/drivers/base/dd.c
++++ b/drivers/base/dd.c
+@@ -276,7 +276,7 @@ static void deferred_probe_timeout_work_func(struct work_struct *work)
+
+ list_for_each_entry_safe(private, p, &deferred_probe_pending_list, deferred_probe)
+ dev_info(private->device, "deferred probe pending\n");
+- wake_up(&probe_timeout_waitqueue);
++ wake_up_all(&probe_timeout_waitqueue);
+ }
+ static DECLARE_DELAYED_WORK(deferred_probe_timeout_work, deferred_probe_timeout_work_func);
+
+@@ -487,7 +487,8 @@ static int really_probe(struct device *dev, struct device_driver *drv)
+ drv->bus->name, __func__, drv->name, dev_name(dev));
+ if (!list_empty(&dev->devres_head)) {
+ dev_crit(dev, "Resources present before probing\n");
+- return -EBUSY;
++ ret = -EBUSY;
++ goto done;
+ }
+
+ re_probe:
+@@ -607,7 +608,7 @@ pinctrl_bind_failed:
+ ret = 0;
+ done:
+ atomic_dec(&probe_count);
+- wake_up(&probe_waitqueue);
++ wake_up_all(&probe_waitqueue);
+ return ret;
+ }
+
+diff --git a/drivers/base/firmware_loader/fallback_platform.c b/drivers/base/firmware_loader/fallback_platform.c
+index cdd2c9a9f38a..685edb7dd05a 100644
+--- a/drivers/base/firmware_loader/fallback_platform.c
++++ b/drivers/base/firmware_loader/fallback_platform.c
+@@ -25,7 +25,10 @@ int firmware_fallback_platform(struct fw_priv *fw_priv, u32 opt_flags)
+ if (rc)
+ return rc; /* rc == -ENOENT when the fw was not found */
+
+- fw_priv->data = vmalloc(size);
++ if (fw_priv->data && size > fw_priv->allocated_size)
++ return -ENOMEM;
++ if (!fw_priv->data)
++ fw_priv->data = vmalloc(size);
+ if (!fw_priv->data)
+ return -ENOMEM;
+
+diff --git a/drivers/block/loop.c b/drivers/block/loop.c
+index 475e1a738560..776083963ee6 100644
+--- a/drivers/block/loop.c
++++ b/drivers/block/loop.c
+@@ -2402,6 +2402,8 @@ static void __exit loop_exit(void)
+
+ range = max_loop ? max_loop << part_shift : 1UL << MINORBITS;
+
++ mutex_lock(&loop_ctl_mutex);
++
+ idr_for_each(&loop_index_idr, &loop_exit_cb, NULL);
+ idr_destroy(&loop_index_idr);
+
+@@ -2409,6 +2411,8 @@ static void __exit loop_exit(void)
+ unregister_blkdev(LOOP_MAJOR, "loop");
+
+ misc_deregister(&loop_misc);
++
++ mutex_unlock(&loop_ctl_mutex);
+ }
+
+ module_init(loop_init);
+diff --git a/drivers/bluetooth/btmrvl_sdio.c b/drivers/bluetooth/btmrvl_sdio.c
+index a296f8526433..64ee799c1761 100644
+--- a/drivers/bluetooth/btmrvl_sdio.c
++++ b/drivers/bluetooth/btmrvl_sdio.c
+@@ -328,7 +328,7 @@ static const struct btmrvl_sdio_device btmrvl_sdio_sd8897 = {
+
+ static const struct btmrvl_sdio_device btmrvl_sdio_sd8977 = {
+ .helper = NULL,
+- .firmware = "mrvl/sd8977_uapsta.bin",
++ .firmware = "mrvl/sdsd8977_combo_v2.bin",
+ .reg = &btmrvl_reg_8977,
+ .support_pscan_win_report = true,
+ .sd_blksz_fw_dl = 256,
+@@ -346,7 +346,7 @@ static const struct btmrvl_sdio_device btmrvl_sdio_sd8987 = {
+
+ static const struct btmrvl_sdio_device btmrvl_sdio_sd8997 = {
+ .helper = NULL,
+- .firmware = "mrvl/sd8997_uapsta.bin",
++ .firmware = "mrvl/sdsd8997_combo_v4.bin",
+ .reg = &btmrvl_reg_8997,
+ .support_pscan_win_report = true,
+ .sd_blksz_fw_dl = 256,
+@@ -1831,6 +1831,6 @@ MODULE_FIRMWARE("mrvl/sd8787_uapsta.bin");
+ MODULE_FIRMWARE("mrvl/sd8797_uapsta.bin");
+ MODULE_FIRMWARE("mrvl/sd8887_uapsta.bin");
+ MODULE_FIRMWARE("mrvl/sd8897_uapsta.bin");
+-MODULE_FIRMWARE("mrvl/sd8977_uapsta.bin");
++MODULE_FIRMWARE("mrvl/sdsd8977_combo_v2.bin");
+ MODULE_FIRMWARE("mrvl/sd8987_uapsta.bin");
+-MODULE_FIRMWARE("mrvl/sd8997_uapsta.bin");
++MODULE_FIRMWARE("mrvl/sdsd8997_combo_v4.bin");
+diff --git a/drivers/bluetooth/btmtksdio.c b/drivers/bluetooth/btmtksdio.c
+index bff095be2f97..c7ab7a23bd67 100644
+--- a/drivers/bluetooth/btmtksdio.c
++++ b/drivers/bluetooth/btmtksdio.c
+@@ -685,7 +685,7 @@ static int mtk_setup_firmware(struct hci_dev *hdev, const char *fwname)
+ const u8 *fw_ptr;
+ size_t fw_size;
+ int err, dlen;
+- u8 flag;
++ u8 flag, param;
+
+ err = request_firmware(&fw, fwname, &hdev->dev);
+ if (err < 0) {
+@@ -693,6 +693,20 @@ static int mtk_setup_firmware(struct hci_dev *hdev, const char *fwname)
+ return err;
+ }
+
++ /* Power on data RAM the firmware relies on. */
++ param = 1;
++ wmt_params.op = MTK_WMT_FUNC_CTRL;
++ wmt_params.flag = 3;
++ wmt_params.dlen = sizeof(param);
++ wmt_params.data = &param;
++ wmt_params.status = NULL;
++
++ err = mtk_hci_wmt_sync(hdev, &wmt_params);
++ if (err < 0) {
++ bt_dev_err(hdev, "Failed to power on data RAM (%d)", err);
++ return err;
++ }
++
+ fw_ptr = fw->data;
+ fw_size = fw->size;
+
+diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c
+index 5f022e9cf667..a5fef9aa419f 100644
+--- a/drivers/bluetooth/btusb.c
++++ b/drivers/bluetooth/btusb.c
+@@ -1720,6 +1720,7 @@ static int btusb_setup_csr(struct hci_dev *hdev)
+ {
+ struct hci_rp_read_local_version *rp;
+ struct sk_buff *skb;
++ bool is_fake = false;
+
+ BT_DBG("%s", hdev->name);
+
+@@ -1739,18 +1740,69 @@ static int btusb_setup_csr(struct hci_dev *hdev)
+
+ rp = (struct hci_rp_read_local_version *)skb->data;
+
+- /* Detect controllers which aren't real CSR ones. */
++ /* Detect a wide host of Chinese controllers that aren't CSR.
++ *
++ * Known fake bcdDevices: 0x0100, 0x0134, 0x1915, 0x2520, 0x7558, 0x8891
++ *
++ * The main thing they have in common is that these are really popular low-cost
++ * options that support newer Bluetooth versions but rely on heavy VID/PID
++ * squatting of this poor old Bluetooth 1.1 device. Even sold as such.
++ *
++ * We detect actual CSR devices by checking that the HCI manufacturer code
++ * is Cambridge Silicon Radio (10) and ensuring that LMP sub-version and
++ * HCI rev values always match. As they both store the firmware number.
++ */
+ if (le16_to_cpu(rp->manufacturer) != 10 ||
+- le16_to_cpu(rp->lmp_subver) == 0x0c5c) {
++ le16_to_cpu(rp->hci_rev) != le16_to_cpu(rp->lmp_subver))
++ is_fake = true;
++
++ /* Known legit CSR firmware build numbers and their supported BT versions:
++ * - 1.1 (0x1) -> 0x0073, 0x020d, 0x033c, 0x034e
++ * - 1.2 (0x2) -> 0x04d9, 0x0529
++ * - 2.0 (0x3) -> 0x07a6, 0x07ad, 0x0c5c
++ * - 2.1 (0x4) -> 0x149c, 0x1735, 0x1899 (0x1899 is a BlueCore4-External)
++ * - 4.0 (0x6) -> 0x1d86, 0x2031, 0x22bb
++ *
++ * e.g. Real CSR dongles with LMP subversion 0x73 are old enough that
++ * support BT 1.1 only; so it's a dead giveaway when some
++ * third-party BT 4.0 dongle reuses it.
++ */
++ else if (le16_to_cpu(rp->lmp_subver) <= 0x034e &&
++ le16_to_cpu(rp->hci_ver) > BLUETOOTH_VER_1_1)
++ is_fake = true;
++
++ else if (le16_to_cpu(rp->lmp_subver) <= 0x0529 &&
++ le16_to_cpu(rp->hci_ver) > BLUETOOTH_VER_1_2)
++ is_fake = true;
++
++ else if (le16_to_cpu(rp->lmp_subver) <= 0x0c5c &&
++ le16_to_cpu(rp->hci_ver) > BLUETOOTH_VER_2_0)
++ is_fake = true;
++
++ else if (le16_to_cpu(rp->lmp_subver) <= 0x1899 &&
++ le16_to_cpu(rp->hci_ver) > BLUETOOTH_VER_2_1)
++ is_fake = true;
++
++ else if (le16_to_cpu(rp->lmp_subver) <= 0x22bb &&
++ le16_to_cpu(rp->hci_ver) > BLUETOOTH_VER_4_0)
++ is_fake = true;
++
++ if (is_fake) {
++ bt_dev_warn(hdev, "CSR: Unbranded CSR clone detected; adding workarounds...");
++
++ /* Generally these clones have big discrepancies between
++ * advertised features and what's actually supported.
++ * Probably will need to be expanded in the future;
++ * without these the controller will lock up.
++ */
++ set_bit(HCI_QUIRK_BROKEN_STORED_LINK_KEY, &hdev->quirks);
++ set_bit(HCI_QUIRK_BROKEN_ERR_DATA_REPORTING, &hdev->quirks);
++
+ /* Clear the reset quirk since this is not an actual
+ * early Bluetooth 1.1 device from CSR.
+ */
+ clear_bit(HCI_QUIRK_RESET_ON_CLOSE, &hdev->quirks);
+-
+- /* These fake CSR controllers have all a broken
+- * stored link key handling and so just disable it.
+- */
+- set_bit(HCI_QUIRK_BROKEN_STORED_LINK_KEY, &hdev->quirks);
++ clear_bit(HCI_QUIRK_SIMULTANEOUS_DISCOVERY, &hdev->quirks);
+ }
+
+ kfree_skb(skb);
+@@ -2925,7 +2977,7 @@ static int btusb_mtk_setup_firmware(struct hci_dev *hdev, const char *fwname)
+ const u8 *fw_ptr;
+ size_t fw_size;
+ int err, dlen;
+- u8 flag;
++ u8 flag, param;
+
+ err = request_firmware(&fw, fwname, &hdev->dev);
+ if (err < 0) {
+@@ -2933,6 +2985,20 @@ static int btusb_mtk_setup_firmware(struct hci_dev *hdev, const char *fwname)
+ return err;
+ }
+
++ /* Power on data RAM the firmware relies on. */
++ param = 1;
++ wmt_params.op = BTMTK_WMT_FUNC_CTRL;
++ wmt_params.flag = 3;
++ wmt_params.dlen = sizeof(param);
++ wmt_params.data = &param;
++ wmt_params.status = NULL;
++
++ err = btusb_mtk_hci_wmt_sync(hdev, &wmt_params);
++ if (err < 0) {
++ bt_dev_err(hdev, "Failed to power on data RAM (%d)", err);
++ return err;
++ }
++
+ fw_ptr = fw->data;
+ fw_size = fw->size;
+
+@@ -4001,11 +4067,13 @@ static int btusb_probe(struct usb_interface *intf,
+ if (bcdDevice < 0x117)
+ set_bit(HCI_QUIRK_RESET_ON_CLOSE, &hdev->quirks);
+
++ /* This must be set first in case we disable it for fakes */
++ set_bit(HCI_QUIRK_SIMULTANEOUS_DISCOVERY, &hdev->quirks);
++
+ /* Fake CSR devices with broken commands */
+- if (bcdDevice <= 0x100 || bcdDevice == 0x134)
++ if (le16_to_cpu(udev->descriptor.idVendor) == 0x0a12 &&
++ le16_to_cpu(udev->descriptor.idProduct) == 0x0001)
+ hdev->setup = btusb_setup_csr;
+-
+- set_bit(HCI_QUIRK_SIMULTANEOUS_DISCOVERY, &hdev->quirks);
+ }
+
+ if (id->driver_info & BTUSB_SNIFFER) {
+diff --git a/drivers/bluetooth/hci_h5.c b/drivers/bluetooth/hci_h5.c
+index e60b2e0773db..e41854e0d79a 100644
+--- a/drivers/bluetooth/hci_h5.c
++++ b/drivers/bluetooth/hci_h5.c
+@@ -793,7 +793,7 @@ static int h5_serdev_probe(struct serdev_device *serdev)
+ if (!h5)
+ return -ENOMEM;
+
+- set_bit(HCI_UART_RESET_ON_INIT, &h5->serdev_hu.flags);
++ set_bit(HCI_UART_RESET_ON_INIT, &h5->serdev_hu.hdev_flags);
+
+ h5->hu = &h5->serdev_hu;
+ h5->serdev_hu.serdev = serdev;
+diff --git a/drivers/bluetooth/hci_qca.c b/drivers/bluetooth/hci_qca.c
+index 81c3c38baba1..9150b0c3f302 100644
+--- a/drivers/bluetooth/hci_qca.c
++++ b/drivers/bluetooth/hci_qca.c
+@@ -46,7 +46,7 @@
+ #define HCI_MAX_IBS_SIZE 10
+
+ #define IBS_WAKE_RETRANS_TIMEOUT_MS 100
+-#define IBS_BTSOC_TX_IDLE_TIMEOUT_MS 40
++#define IBS_BTSOC_TX_IDLE_TIMEOUT_MS 200
+ #define IBS_HOST_TX_IDLE_TIMEOUT_MS 2000
+ #define CMD_TRANS_TIMEOUT_MS 100
+ #define MEMDUMP_TIMEOUT_MS 8000
+@@ -72,7 +72,8 @@ enum qca_flags {
+ QCA_DROP_VENDOR_EVENT,
+ QCA_SUSPENDING,
+ QCA_MEMDUMP_COLLECTION,
+- QCA_HW_ERROR_EVENT
++ QCA_HW_ERROR_EVENT,
++ QCA_SSR_TRIGGERED
+ };
+
+ enum qca_capabilities {
+@@ -862,6 +863,13 @@ static int qca_enqueue(struct hci_uart *hu, struct sk_buff *skb)
+ BT_DBG("hu %p qca enq skb %p tx_ibs_state %d", hu, skb,
+ qca->tx_ibs_state);
+
++ if (test_bit(QCA_SSR_TRIGGERED, &qca->flags)) {
++ /* As SSR is in progress, ignore the packets */
++ bt_dev_dbg(hu->hdev, "SSR is in progress");
++ kfree_skb(skb);
++ return 0;
++ }
++
+ /* Prepend skb with frame type */
+ memcpy(skb_push(skb, 1), &hci_skb_pkt_type(skb), 1);
+
+@@ -983,8 +991,11 @@ static void qca_controller_memdump(struct work_struct *work)
+ while ((skb = skb_dequeue(&qca->rx_memdump_q))) {
+
+ mutex_lock(&qca->hci_memdump_lock);
+- /* Skip processing the received packets if timeout detected. */
+- if (qca->memdump_state == QCA_MEMDUMP_TIMEOUT) {
++ /* Skip processing the received packets if timeout detected
++ * or memdump collection completed.
++ */
++ if (qca->memdump_state == QCA_MEMDUMP_TIMEOUT ||
++ qca->memdump_state == QCA_MEMDUMP_COLLECTED) {
+ mutex_unlock(&qca->hci_memdump_lock);
+ return;
+ }
+@@ -1128,6 +1139,7 @@ static int qca_controller_memdump_event(struct hci_dev *hdev,
+ struct hci_uart *hu = hci_get_drvdata(hdev);
+ struct qca_data *qca = hu->priv;
+
++ set_bit(QCA_SSR_TRIGGERED, &qca->flags);
+ skb_queue_tail(&qca->rx_memdump_q, skb);
+ queue_work(qca->workqueue, &qca->ctrl_memdump_evt);
+
+@@ -1485,9 +1497,8 @@ static void qca_hw_error(struct hci_dev *hdev, u8 code)
+ {
+ struct hci_uart *hu = hci_get_drvdata(hdev);
+ struct qca_data *qca = hu->priv;
+- struct qca_memdump_data *qca_memdump = qca->qca_memdump;
+- char *memdump_buf = NULL;
+
++ set_bit(QCA_SSR_TRIGGERED, &qca->flags);
+ set_bit(QCA_HW_ERROR_EVENT, &qca->flags);
+ bt_dev_info(hdev, "mem_dump_status: %d", qca->memdump_state);
+
+@@ -1509,19 +1520,23 @@ static void qca_hw_error(struct hci_dev *hdev, u8 code)
+ qca_wait_for_dump_collection(hdev);
+ }
+
++ mutex_lock(&qca->hci_memdump_lock);
+ if (qca->memdump_state != QCA_MEMDUMP_COLLECTED) {
+ bt_dev_err(hu->hdev, "clearing allocated memory due to memdump timeout");
+- mutex_lock(&qca->hci_memdump_lock);
+- if (qca_memdump)
+- memdump_buf = qca_memdump->memdump_buf_head;
+- vfree(memdump_buf);
+- kfree(qca_memdump);
+- qca->qca_memdump = NULL;
++ if (qca->qca_memdump) {
++ vfree(qca->qca_memdump->memdump_buf_head);
++ kfree(qca->qca_memdump);
++ qca->qca_memdump = NULL;
++ }
+ qca->memdump_state = QCA_MEMDUMP_TIMEOUT;
+ cancel_delayed_work(&qca->ctrl_memdump_timeout);
+- skb_queue_purge(&qca->rx_memdump_q);
+- mutex_unlock(&qca->hci_memdump_lock);
++ }
++ mutex_unlock(&qca->hci_memdump_lock);
++
++ if (qca->memdump_state == QCA_MEMDUMP_TIMEOUT ||
++ qca->memdump_state == QCA_MEMDUMP_COLLECTED) {
+ cancel_work_sync(&qca->ctrl_memdump_evt);
++ skb_queue_purge(&qca->rx_memdump_q);
+ }
+
+ clear_bit(QCA_HW_ERROR_EVENT, &qca->flags);
+@@ -1532,10 +1547,30 @@ static void qca_cmd_timeout(struct hci_dev *hdev)
+ struct hci_uart *hu = hci_get_drvdata(hdev);
+ struct qca_data *qca = hu->priv;
+
+- if (qca->memdump_state == QCA_MEMDUMP_IDLE)
++ set_bit(QCA_SSR_TRIGGERED, &qca->flags);
++ if (qca->memdump_state == QCA_MEMDUMP_IDLE) {
++ set_bit(QCA_MEMDUMP_COLLECTION, &qca->flags);
+ qca_send_crashbuffer(hu);
+- else
+- bt_dev_info(hdev, "Dump collection is in process");
++ qca_wait_for_dump_collection(hdev);
++ } else if (qca->memdump_state == QCA_MEMDUMP_COLLECTING) {
++ /* Let us wait here until memory dump collected or
++ * memory dump timer expired.
++ */
++ bt_dev_info(hdev, "waiting for dump to complete");
++ qca_wait_for_dump_collection(hdev);
++ }
++
++ mutex_lock(&qca->hci_memdump_lock);
++ if (qca->memdump_state != QCA_MEMDUMP_COLLECTED) {
++ qca->memdump_state = QCA_MEMDUMP_TIMEOUT;
++ if (!test_bit(QCA_HW_ERROR_EVENT, &qca->flags)) {
++ /* Inject hw error event to reset the device
++ * and driver.
++ */
++ hci_reset_dev(hu->hdev);
++ }
++ }
++ mutex_unlock(&qca->hci_memdump_lock);
+ }
+
+ static int qca_wcn3990_init(struct hci_uart *hu)
+@@ -1641,11 +1676,15 @@ static int qca_setup(struct hci_uart *hu)
+ bt_dev_info(hdev, "setting up %s",
+ qca_is_wcn399x(soc_type) ? "wcn399x" : "ROME/QCA6390");
+
++ qca->memdump_state = QCA_MEMDUMP_IDLE;
++
+ retry:
+ ret = qca_power_on(hdev);
+ if (ret)
+ return ret;
+
++ clear_bit(QCA_SSR_TRIGGERED, &qca->flags);
++
+ if (qca_is_wcn399x(soc_type)) {
+ set_bit(HCI_QUIRK_USE_BDADDR_PROPERTY, &hdev->quirks);
+
+@@ -1788,9 +1827,6 @@ static void qca_power_shutdown(struct hci_uart *hu)
+ qca_flush(hu);
+ spin_unlock_irqrestore(&qca->hci_ibs_lock, flags);
+
+- hu->hdev->hw_error = NULL;
+- hu->hdev->cmd_timeout = NULL;
+-
+ /* Non-serdev device usually is powered by external power
+ * and don't need additional action in driver for power down
+ */
+@@ -1812,6 +1848,9 @@ static int qca_power_off(struct hci_dev *hdev)
+ struct qca_data *qca = hu->priv;
+ enum qca_btsoc_type soc_type = qca_soc_type(hu);
+
++ hu->hdev->hw_error = NULL;
++ hu->hdev->cmd_timeout = NULL;
++
+ /* Stop sending shutdown command if soc crashes. */
+ if (soc_type != QCA_ROME
+ && qca->memdump_state == QCA_MEMDUMP_IDLE) {
+@@ -1819,7 +1858,6 @@ static int qca_power_off(struct hci_dev *hdev)
+ usleep_range(8000, 10000);
+ }
+
+- qca->memdump_state = QCA_MEMDUMP_IDLE;
+ qca_power_shutdown(hu);
+ return 0;
+ }
+@@ -1962,17 +2000,17 @@ static int qca_serdev_probe(struct serdev_device *serdev)
+ }
+
+ qcadev->susclk = devm_clk_get_optional(&serdev->dev, NULL);
+- if (!qcadev->susclk) {
++ if (IS_ERR(qcadev->susclk)) {
+ dev_warn(&serdev->dev, "failed to acquire clk\n");
+- } else {
+- err = clk_set_rate(qcadev->susclk, SUSCLK_RATE_32KHZ);
+- if (err)
+- return err;
+-
+- err = clk_prepare_enable(qcadev->susclk);
+- if (err)
+- return err;
++ return PTR_ERR(qcadev->susclk);
+ }
++ err = clk_set_rate(qcadev->susclk, SUSCLK_RATE_32KHZ);
++ if (err)
++ return err;
++
++ err = clk_prepare_enable(qcadev->susclk);
++ if (err)
++ return err;
+
+ err = hci_uart_register_device(&qcadev->serdev_hu, &qca_proto);
+ if (err) {
+@@ -2083,8 +2121,6 @@ static int __maybe_unused qca_suspend(struct device *dev)
+
+ qca->tx_ibs_state = HCI_IBS_TX_ASLEEP;
+ qca->ibs_sent_slps++;
+-
+- qca_wq_serial_tx_clock_vote_off(&qca->ws_tx_vote_off);
+ break;
+
+ case HCI_IBS_TX_ASLEEP:
+@@ -2112,8 +2148,10 @@ static int __maybe_unused qca_suspend(struct device *dev)
+ qca->rx_ibs_state == HCI_IBS_RX_ASLEEP,
+ msecs_to_jiffies(IBS_BTSOC_TX_IDLE_TIMEOUT_MS));
+
+- if (ret > 0)
++ if (ret > 0) {
++ qca_wq_serial_tx_clock_vote_off(&qca->ws_tx_vote_off);
+ return 0;
++ }
+
+ if (ret == 0)
+ ret = -ETIMEDOUT;
+diff --git a/drivers/bluetooth/hci_serdev.c b/drivers/bluetooth/hci_serdev.c
+index 599855e4c57c..7b233312e723 100644
+--- a/drivers/bluetooth/hci_serdev.c
++++ b/drivers/bluetooth/hci_serdev.c
+@@ -355,7 +355,8 @@ void hci_uart_unregister_device(struct hci_uart *hu)
+ struct hci_dev *hdev = hu->hdev;
+
+ clear_bit(HCI_UART_PROTO_READY, &hu->flags);
+- hci_unregister_dev(hdev);
++ if (test_bit(HCI_UART_REGISTERED, &hu->flags))
++ hci_unregister_dev(hdev);
+ hci_free_dev(hdev);
+
+ cancel_work_sync(&hu->write_work);
+diff --git a/drivers/bus/ti-sysc.c b/drivers/bus/ti-sysc.c
+index 191c97b84715..fb5a901fd89e 100644
+--- a/drivers/bus/ti-sysc.c
++++ b/drivers/bus/ti-sysc.c
+@@ -1395,6 +1395,10 @@ static const struct sysc_revision_quirk sysc_revision_quirks[] = {
+ SYSC_QUIRK_SWSUP_SIDLE | SYSC_QUIRK_SWSUP_MSTANDBY),
+ SYSC_QUIRK("tptc", 0, 0, -ENODEV, -ENODEV, 0x40007c00, 0xffffffff,
+ SYSC_QUIRK_SWSUP_SIDLE | SYSC_QUIRK_SWSUP_MSTANDBY),
++ SYSC_QUIRK("usb_host_hs", 0, 0, 0x10, 0x14, 0x50700100, 0xffffffff,
++ SYSC_QUIRK_SWSUP_SIDLE | SYSC_QUIRK_SWSUP_MSTANDBY),
++ SYSC_QUIRK("usb_host_hs", 0, 0, 0x10, -ENODEV, 0x50700101, 0xffffffff,
++ SYSC_QUIRK_SWSUP_SIDLE | SYSC_QUIRK_SWSUP_MSTANDBY),
+ SYSC_QUIRK("usb_otg_hs", 0, 0x400, 0x404, 0x408, 0x00000050,
+ 0xffffffff, SYSC_QUIRK_SWSUP_SIDLE | SYSC_QUIRK_SWSUP_MSTANDBY),
+ SYSC_QUIRK("usb_otg_hs", 0, 0, 0x10, -ENODEV, 0x4ea2080d, 0xffffffff,
+@@ -1473,8 +1477,6 @@ static const struct sysc_revision_quirk sysc_revision_quirks[] = {
+ SYSC_QUIRK("tpcc", 0, 0, -ENODEV, -ENODEV, 0x40014c00, 0xffffffff, 0),
+ SYSC_QUIRK("usbhstll", 0, 0, 0x10, 0x14, 0x00000004, 0xffffffff, 0),
+ SYSC_QUIRK("usbhstll", 0, 0, 0x10, 0x14, 0x00000008, 0xffffffff, 0),
+- SYSC_QUIRK("usb_host_hs", 0, 0, 0x10, 0x14, 0x50700100, 0xffffffff, 0),
+- SYSC_QUIRK("usb_host_hs", 0, 0, 0x10, -ENODEV, 0x50700101, 0xffffffff, 0),
+ SYSC_QUIRK("venc", 0x58003000, 0, -ENODEV, -ENODEV, 0x00000002, 0xffffffff, 0),
+ SYSC_QUIRK("vfpe", 0, 0, 0x104, -ENODEV, 0x4d001200, 0xffffffff, 0),
+ #endif
+diff --git a/drivers/char/agp/intel-gtt.c b/drivers/char/agp/intel-gtt.c
+index 4b34a5195c65..5bfdf222d5f9 100644
+--- a/drivers/char/agp/intel-gtt.c
++++ b/drivers/char/agp/intel-gtt.c
+@@ -304,8 +304,10 @@ static int intel_gtt_setup_scratch_page(void)
+ if (intel_private.needs_dmar) {
+ dma_addr = pci_map_page(intel_private.pcidev, page, 0,
+ PAGE_SIZE, PCI_DMA_BIDIRECTIONAL);
+- if (pci_dma_mapping_error(intel_private.pcidev, dma_addr))
++ if (pci_dma_mapping_error(intel_private.pcidev, dma_addr)) {
++ __free_page(page);
+ return -EINVAL;
++ }
+
+ intel_private.scratch_page_dma = dma_addr;
+ } else
+diff --git a/drivers/char/tpm/tpm-chip.c b/drivers/char/tpm/tpm-chip.c
+index 8c77e88012e9..ddaeceb7e109 100644
+--- a/drivers/char/tpm/tpm-chip.c
++++ b/drivers/char/tpm/tpm-chip.c
+@@ -386,13 +386,8 @@ struct tpm_chip *tpm_chip_alloc(struct device *pdev,
+ chip->cdev.owner = THIS_MODULE;
+ chip->cdevs.owner = THIS_MODULE;
+
+- chip->work_space.context_buf = kzalloc(PAGE_SIZE, GFP_KERNEL);
+- if (!chip->work_space.context_buf) {
+- rc = -ENOMEM;
+- goto out;
+- }
+- chip->work_space.session_buf = kzalloc(PAGE_SIZE, GFP_KERNEL);
+- if (!chip->work_space.session_buf) {
++ rc = tpm2_init_space(&chip->work_space, TPM2_SPACE_BUFFER_SIZE);
++ if (rc) {
+ rc = -ENOMEM;
+ goto out;
+ }
+diff --git a/drivers/char/tpm/tpm.h b/drivers/char/tpm/tpm.h
+index 0fbcede241ea..947d1db0a5cc 100644
+--- a/drivers/char/tpm/tpm.h
++++ b/drivers/char/tpm/tpm.h
+@@ -59,6 +59,9 @@ enum tpm_addr {
+
+ #define TPM_TAG_RQU_COMMAND 193
+
++/* TPM2 specific constants. */
++#define TPM2_SPACE_BUFFER_SIZE 16384 /* 16 kB */
++
+ struct stclear_flags_t {
+ __be16 tag;
+ u8 deactivated;
+@@ -228,7 +231,7 @@ unsigned long tpm2_calc_ordinal_duration(struct tpm_chip *chip, u32 ordinal);
+ int tpm2_probe(struct tpm_chip *chip);
+ int tpm2_get_cc_attrs_tbl(struct tpm_chip *chip);
+ int tpm2_find_cc(struct tpm_chip *chip, u32 cc);
+-int tpm2_init_space(struct tpm_space *space);
++int tpm2_init_space(struct tpm_space *space, unsigned int buf_size);
+ void tpm2_del_space(struct tpm_chip *chip, struct tpm_space *space);
+ void tpm2_flush_space(struct tpm_chip *chip);
+ int tpm2_prepare_space(struct tpm_chip *chip, struct tpm_space *space, u8 *cmd,
+diff --git a/drivers/char/tpm/tpm2-space.c b/drivers/char/tpm/tpm2-space.c
+index 982d341d8837..784b8b3cb903 100644
+--- a/drivers/char/tpm/tpm2-space.c
++++ b/drivers/char/tpm/tpm2-space.c
+@@ -38,18 +38,21 @@ static void tpm2_flush_sessions(struct tpm_chip *chip, struct tpm_space *space)
+ }
+ }
+
+-int tpm2_init_space(struct tpm_space *space)
++int tpm2_init_space(struct tpm_space *space, unsigned int buf_size)
+ {
+- space->context_buf = kzalloc(PAGE_SIZE, GFP_KERNEL);
++ space->context_buf = kzalloc(buf_size, GFP_KERNEL);
+ if (!space->context_buf)
+ return -ENOMEM;
+
+- space->session_buf = kzalloc(PAGE_SIZE, GFP_KERNEL);
++ space->session_buf = kzalloc(buf_size, GFP_KERNEL);
+ if (space->session_buf == NULL) {
+ kfree(space->context_buf);
++ /* Prevent caller getting a dangling pointer. */
++ space->context_buf = NULL;
+ return -ENOMEM;
+ }
+
++ space->buf_size = buf_size;
+ return 0;
+ }
+
+@@ -311,8 +314,10 @@ int tpm2_prepare_space(struct tpm_chip *chip, struct tpm_space *space, u8 *cmd,
+ sizeof(space->context_tbl));
+ memcpy(&chip->work_space.session_tbl, &space->session_tbl,
+ sizeof(space->session_tbl));
+- memcpy(chip->work_space.context_buf, space->context_buf, PAGE_SIZE);
+- memcpy(chip->work_space.session_buf, space->session_buf, PAGE_SIZE);
++ memcpy(chip->work_space.context_buf, space->context_buf,
++ space->buf_size);
++ memcpy(chip->work_space.session_buf, space->session_buf,
++ space->buf_size);
+
+ rc = tpm2_load_space(chip);
+ if (rc) {
+@@ -492,7 +497,7 @@ static int tpm2_save_space(struct tpm_chip *chip)
+ continue;
+
+ rc = tpm2_save_context(chip, space->context_tbl[i],
+- space->context_buf, PAGE_SIZE,
++ space->context_buf, space->buf_size,
+ &offset);
+ if (rc == -ENOENT) {
+ space->context_tbl[i] = 0;
+@@ -509,9 +514,8 @@ static int tpm2_save_space(struct tpm_chip *chip)
+ continue;
+
+ rc = tpm2_save_context(chip, space->session_tbl[i],
+- space->session_buf, PAGE_SIZE,
++ space->session_buf, space->buf_size,
+ &offset);
+-
+ if (rc == -ENOENT) {
+ /* handle error saving session, just forget it */
+ space->session_tbl[i] = 0;
+@@ -557,8 +561,10 @@ int tpm2_commit_space(struct tpm_chip *chip, struct tpm_space *space,
+ sizeof(space->context_tbl));
+ memcpy(&space->session_tbl, &chip->work_space.session_tbl,
+ sizeof(space->session_tbl));
+- memcpy(space->context_buf, chip->work_space.context_buf, PAGE_SIZE);
+- memcpy(space->session_buf, chip->work_space.session_buf, PAGE_SIZE);
++ memcpy(space->context_buf, chip->work_space.context_buf,
++ space->buf_size);
++ memcpy(space->session_buf, chip->work_space.session_buf,
++ space->buf_size);
+
+ return 0;
+ out:
+diff --git a/drivers/char/tpm/tpmrm-dev.c b/drivers/char/tpm/tpmrm-dev.c
+index 7a0a7051a06f..eef0fb06ea83 100644
+--- a/drivers/char/tpm/tpmrm-dev.c
++++ b/drivers/char/tpm/tpmrm-dev.c
+@@ -21,7 +21,7 @@ static int tpmrm_open(struct inode *inode, struct file *file)
+ if (priv == NULL)
+ return -ENOMEM;
+
+- rc = tpm2_init_space(&priv->space);
++ rc = tpm2_init_space(&priv->space, TPM2_SPACE_BUFFER_SIZE);
+ if (rc) {
+ kfree(priv);
+ return -ENOMEM;
+diff --git a/drivers/clk/bcm/clk-bcm63xx-gate.c b/drivers/clk/bcm/clk-bcm63xx-gate.c
+index 98e884957db8..911a29bd744e 100644
+--- a/drivers/clk/bcm/clk-bcm63xx-gate.c
++++ b/drivers/clk/bcm/clk-bcm63xx-gate.c
+@@ -155,6 +155,7 @@ static int clk_bcm63xx_probe(struct platform_device *pdev)
+
+ for (entry = table; entry->name; entry++)
+ maxbit = max_t(u8, maxbit, entry->bit);
++ maxbit++;
+
+ hw = devm_kzalloc(&pdev->dev, struct_size(hw, data.hws, maxbit),
+ GFP_KERNEL);
+diff --git a/drivers/clk/clk-scmi.c b/drivers/clk/clk-scmi.c
+index c491f5de0f3f..c754dfbb73fd 100644
+--- a/drivers/clk/clk-scmi.c
++++ b/drivers/clk/clk-scmi.c
+@@ -103,6 +103,8 @@ static const struct clk_ops scmi_clk_ops = {
+ static int scmi_clk_ops_init(struct device *dev, struct scmi_clk *sclk)
+ {
+ int ret;
++ unsigned long min_rate, max_rate;
++
+ struct clk_init_data init = {
+ .flags = CLK_GET_RATE_NOCACHE,
+ .num_parents = 0,
+@@ -112,9 +114,23 @@ static int scmi_clk_ops_init(struct device *dev, struct scmi_clk *sclk)
+
+ sclk->hw.init = &init;
+ ret = devm_clk_hw_register(dev, &sclk->hw);
+- if (!ret)
+- clk_hw_set_rate_range(&sclk->hw, sclk->info->range.min_rate,
+- sclk->info->range.max_rate);
++ if (ret)
++ return ret;
++
++ if (sclk->info->rate_discrete) {
++ int num_rates = sclk->info->list.num_rates;
++
++ if (num_rates <= 0)
++ return -EINVAL;
++
++ min_rate = sclk->info->list.rates[0];
++ max_rate = sclk->info->list.rates[num_rates - 1];
++ } else {
++ min_rate = sclk->info->range.min_rate;
++ max_rate = sclk->info->range.max_rate;
++ }
++
++ clk_hw_set_rate_range(&sclk->hw, min_rate, max_rate);
+ return ret;
+ }
+
+diff --git a/drivers/clk/qcom/gcc-sc7180.c b/drivers/clk/qcom/gcc-sc7180.c
+index ca4383e3a02a..538677befb86 100644
+--- a/drivers/clk/qcom/gcc-sc7180.c
++++ b/drivers/clk/qcom/gcc-sc7180.c
+@@ -1061,7 +1061,7 @@ static struct clk_branch gcc_disp_gpll0_clk_src = {
+ .hw = &gpll0.clkr.hw,
+ },
+ .num_parents = 1,
+- .ops = &clk_branch2_ops,
++ .ops = &clk_branch2_aon_ops,
+ },
+ },
+ };
+diff --git a/drivers/clk/qcom/gcc-sdm845.c b/drivers/clk/qcom/gcc-sdm845.c
+index f6ce888098be..90f7febaf528 100644
+--- a/drivers/clk/qcom/gcc-sdm845.c
++++ b/drivers/clk/qcom/gcc-sdm845.c
+@@ -1,6 +1,6 @@
+ // SPDX-License-Identifier: GPL-2.0
+ /*
+- * Copyright (c) 2018, The Linux Foundation. All rights reserved.
++ * Copyright (c) 2018, 2020, The Linux Foundation. All rights reserved.
+ */
+
+ #include <linux/kernel.h>
+@@ -1344,7 +1344,7 @@ static struct clk_branch gcc_disp_gpll0_clk_src = {
+ "gpll0",
+ },
+ .num_parents = 1,
+- .ops = &clk_branch2_ops,
++ .ops = &clk_branch2_aon_ops,
+ },
+ },
+ };
+diff --git a/drivers/cpufreq/Kconfig.arm b/drivers/cpufreq/Kconfig.arm
+index c6cbfc8baf72..a967894c4613 100644
+--- a/drivers/cpufreq/Kconfig.arm
++++ b/drivers/cpufreq/Kconfig.arm
+@@ -41,6 +41,7 @@ config ARM_ARMADA_37XX_CPUFREQ
+ config ARM_ARMADA_8K_CPUFREQ
+ tristate "Armada 8K CPUFreq driver"
+ depends on ARCH_MVEBU && CPUFREQ_DT
++ select ARMADA_AP_CPU_CLK
+ help
+ This enables the CPUFreq driver support for Marvell
+ Armada8k SOCs.
+diff --git a/drivers/cpufreq/armada-37xx-cpufreq.c b/drivers/cpufreq/armada-37xx-cpufreq.c
+index aa0f06dec959..df1c941260d1 100644
+--- a/drivers/cpufreq/armada-37xx-cpufreq.c
++++ b/drivers/cpufreq/armada-37xx-cpufreq.c
+@@ -456,6 +456,7 @@ static int __init armada37xx_cpufreq_driver_init(void)
+ /* Now that everything is setup, enable the DVFS at hardware level */
+ armada37xx_cpufreq_enable_dvfs(nb_pm_base);
+
++ memset(&pdata, 0, sizeof(pdata));
+ pdata.suspend = armada37xx_cpufreq_suspend;
+ pdata.resume = armada37xx_cpufreq_resume;
+
+diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c
+index 0128de3603df..e9e8200a0211 100644
+--- a/drivers/cpufreq/cpufreq.c
++++ b/drivers/cpufreq/cpufreq.c
+@@ -621,6 +621,24 @@ static struct cpufreq_governor *find_governor(const char *str_governor)
+ return NULL;
+ }
+
++static struct cpufreq_governor *get_governor(const char *str_governor)
++{
++ struct cpufreq_governor *t;
++
++ mutex_lock(&cpufreq_governor_mutex);
++ t = find_governor(str_governor);
++ if (!t)
++ goto unlock;
++
++ if (!try_module_get(t->owner))
++ t = NULL;
++
++unlock:
++ mutex_unlock(&cpufreq_governor_mutex);
++
++ return t;
++}
++
+ static unsigned int cpufreq_parse_policy(char *str_governor)
+ {
+ if (!strncasecmp(str_governor, "performance", CPUFREQ_NAME_LEN))
+@@ -640,28 +658,14 @@ static struct cpufreq_governor *cpufreq_parse_governor(char *str_governor)
+ {
+ struct cpufreq_governor *t;
+
+- mutex_lock(&cpufreq_governor_mutex);
+-
+- t = find_governor(str_governor);
+- if (!t) {
+- int ret;
+-
+- mutex_unlock(&cpufreq_governor_mutex);
+-
+- ret = request_module("cpufreq_%s", str_governor);
+- if (ret)
+- return NULL;
+-
+- mutex_lock(&cpufreq_governor_mutex);
++ t = get_governor(str_governor);
++ if (t)
++ return t;
+
+- t = find_governor(str_governor);
+- }
+- if (t && !try_module_get(t->owner))
+- t = NULL;
+-
+- mutex_unlock(&cpufreq_governor_mutex);
++ if (request_module("cpufreq_%s", str_governor))
++ return NULL;
+
+- return t;
++ return get_governor(str_governor);
+ }
+
+ /**
+@@ -815,12 +819,14 @@ static ssize_t show_scaling_available_governors(struct cpufreq_policy *policy,
+ goto out;
+ }
+
++ mutex_lock(&cpufreq_governor_mutex);
+ for_each_governor(t) {
+ if (i >= (ssize_t) ((PAGE_SIZE / sizeof(char))
+ - (CPUFREQ_NAME_LEN + 2)))
+- goto out;
++ break;
+ i += scnprintf(&buf[i], CPUFREQ_NAME_PLEN, "%s ", t->name);
+ }
++ mutex_unlock(&cpufreq_governor_mutex);
+ out:
+ i += sprintf(&buf[i], "\n");
+ return i;
+@@ -1058,15 +1064,17 @@ static int cpufreq_init_policy(struct cpufreq_policy *policy)
+ struct cpufreq_governor *def_gov = cpufreq_default_governor();
+ struct cpufreq_governor *gov = NULL;
+ unsigned int pol = CPUFREQ_POLICY_UNKNOWN;
++ int ret;
+
+ if (has_target()) {
+ /* Update policy governor to the one used before hotplug. */
+- gov = find_governor(policy->last_governor);
++ gov = get_governor(policy->last_governor);
+ if (gov) {
+ pr_debug("Restoring governor %s for cpu %d\n",
+ policy->governor->name, policy->cpu);
+ } else if (def_gov) {
+ gov = def_gov;
++ __module_get(gov->owner);
+ } else {
+ return -ENODATA;
+ }
+@@ -1089,7 +1097,11 @@ static int cpufreq_init_policy(struct cpufreq_policy *policy)
+ return -ENODATA;
+ }
+
+- return cpufreq_set_policy(policy, gov, pol);
++ ret = cpufreq_set_policy(policy, gov, pol);
++ if (gov)
++ module_put(gov->owner);
++
++ return ret;
+ }
+
+ static int cpufreq_add_policy_cpu(struct cpufreq_policy *policy, unsigned int cpu)
+diff --git a/drivers/crypto/caam/caamalg.c b/drivers/crypto/caam/caamalg.c
+index b2f9882bc010..bf90a4fcabd1 100644
+--- a/drivers/crypto/caam/caamalg.c
++++ b/drivers/crypto/caam/caamalg.c
+@@ -838,7 +838,7 @@ static int xts_skcipher_setkey(struct crypto_skcipher *skcipher, const u8 *key,
+ u32 *desc;
+
+ if (keylen != 2 * AES_MIN_KEY_SIZE && keylen != 2 * AES_MAX_KEY_SIZE) {
+- dev_err(jrdev, "key size mismatch\n");
++ dev_dbg(jrdev, "key size mismatch\n");
+ return -EINVAL;
+ }
+
+diff --git a/drivers/crypto/caam/caamalg_qi.c b/drivers/crypto/caam/caamalg_qi.c
+index 27e36bdf6163..315d53499ce8 100644
+--- a/drivers/crypto/caam/caamalg_qi.c
++++ b/drivers/crypto/caam/caamalg_qi.c
+@@ -728,7 +728,7 @@ static int xts_skcipher_setkey(struct crypto_skcipher *skcipher, const u8 *key,
+ int ret = 0;
+
+ if (keylen != 2 * AES_MIN_KEY_SIZE && keylen != 2 * AES_MAX_KEY_SIZE) {
+- dev_err(jrdev, "key size mismatch\n");
++ dev_dbg(jrdev, "key size mismatch\n");
+ return -EINVAL;
+ }
+
+diff --git a/drivers/crypto/caam/caamalg_qi2.c b/drivers/crypto/caam/caamalg_qi2.c
+index 28669cbecf77..e1b6bc6ef091 100644
+--- a/drivers/crypto/caam/caamalg_qi2.c
++++ b/drivers/crypto/caam/caamalg_qi2.c
+@@ -1058,7 +1058,7 @@ static int xts_skcipher_setkey(struct crypto_skcipher *skcipher, const u8 *key,
+ u32 *desc;
+
+ if (keylen != 2 * AES_MIN_KEY_SIZE && keylen != 2 * AES_MAX_KEY_SIZE) {
+- dev_err(dev, "key size mismatch\n");
++ dev_dbg(dev, "key size mismatch\n");
+ return -EINVAL;
+ }
+
+diff --git a/drivers/crypto/cavium/cpt/cptvf_algs.c b/drivers/crypto/cavium/cpt/cptvf_algs.c
+index 1be1adffff1d..2e4bf90c5798 100644
+--- a/drivers/crypto/cavium/cpt/cptvf_algs.c
++++ b/drivers/crypto/cavium/cpt/cptvf_algs.c
+@@ -200,6 +200,7 @@ static inline int cvm_enc_dec(struct skcipher_request *req, u32 enc)
+ int status;
+
+ memset(req_info, 0, sizeof(struct cpt_request_info));
++ req_info->may_sleep = (req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP) != 0;
+ memset(fctx, 0, sizeof(struct fc_context));
+ create_input_list(req, enc, enc_iv_len);
+ create_output_list(req, enc_iv_len);
+diff --git a/drivers/crypto/cavium/cpt/cptvf_reqmanager.c b/drivers/crypto/cavium/cpt/cptvf_reqmanager.c
+index 7a24019356b5..e343249c8d05 100644
+--- a/drivers/crypto/cavium/cpt/cptvf_reqmanager.c
++++ b/drivers/crypto/cavium/cpt/cptvf_reqmanager.c
+@@ -133,7 +133,7 @@ static inline int setup_sgio_list(struct cpt_vf *cptvf,
+
+ /* Setup gather (input) components */
+ g_sz_bytes = ((req->incnt + 3) / 4) * sizeof(struct sglist_component);
+- info->gather_components = kzalloc(g_sz_bytes, GFP_KERNEL);
++ info->gather_components = kzalloc(g_sz_bytes, req->may_sleep ? GFP_KERNEL : GFP_ATOMIC);
+ if (!info->gather_components) {
+ ret = -ENOMEM;
+ goto scatter_gather_clean;
+@@ -150,7 +150,7 @@ static inline int setup_sgio_list(struct cpt_vf *cptvf,
+
+ /* Setup scatter (output) components */
+ s_sz_bytes = ((req->outcnt + 3) / 4) * sizeof(struct sglist_component);
+- info->scatter_components = kzalloc(s_sz_bytes, GFP_KERNEL);
++ info->scatter_components = kzalloc(s_sz_bytes, req->may_sleep ? GFP_KERNEL : GFP_ATOMIC);
+ if (!info->scatter_components) {
+ ret = -ENOMEM;
+ goto scatter_gather_clean;
+@@ -167,7 +167,7 @@ static inline int setup_sgio_list(struct cpt_vf *cptvf,
+
+ /* Create and initialize DPTR */
+ info->dlen = g_sz_bytes + s_sz_bytes + SG_LIST_HDR_SIZE;
+- info->in_buffer = kzalloc(info->dlen, GFP_KERNEL);
++ info->in_buffer = kzalloc(info->dlen, req->may_sleep ? GFP_KERNEL : GFP_ATOMIC);
+ if (!info->in_buffer) {
+ ret = -ENOMEM;
+ goto scatter_gather_clean;
+@@ -195,7 +195,7 @@ static inline int setup_sgio_list(struct cpt_vf *cptvf,
+ }
+
+ /* Create and initialize RPTR */
+- info->out_buffer = kzalloc(COMPLETION_CODE_SIZE, GFP_KERNEL);
++ info->out_buffer = kzalloc(COMPLETION_CODE_SIZE, req->may_sleep ? GFP_KERNEL : GFP_ATOMIC);
+ if (!info->out_buffer) {
+ ret = -ENOMEM;
+ goto scatter_gather_clean;
+@@ -421,7 +421,7 @@ int process_request(struct cpt_vf *cptvf, struct cpt_request_info *req)
+ struct cpt_vq_command vq_cmd;
+ union cpt_inst_s cptinst;
+
+- info = kzalloc(sizeof(*info), GFP_KERNEL);
++ info = kzalloc(sizeof(*info), req->may_sleep ? GFP_KERNEL : GFP_ATOMIC);
+ if (unlikely(!info)) {
+ dev_err(&pdev->dev, "Unable to allocate memory for info_buffer\n");
+ return -ENOMEM;
+@@ -443,7 +443,7 @@ int process_request(struct cpt_vf *cptvf, struct cpt_request_info *req)
+ * Get buffer for union cpt_res_s response
+ * structure and its physical address
+ */
+- info->completion_addr = kzalloc(sizeof(union cpt_res_s), GFP_KERNEL);
++ info->completion_addr = kzalloc(sizeof(union cpt_res_s), req->may_sleep ? GFP_KERNEL : GFP_ATOMIC);
+ if (unlikely(!info->completion_addr)) {
+ dev_err(&pdev->dev, "Unable to allocate memory for completion_addr\n");
+ ret = -ENOMEM;
+diff --git a/drivers/crypto/cavium/cpt/request_manager.h b/drivers/crypto/cavium/cpt/request_manager.h
+index 3514b082eca7..1e8dd9ebcc17 100644
+--- a/drivers/crypto/cavium/cpt/request_manager.h
++++ b/drivers/crypto/cavium/cpt/request_manager.h
+@@ -62,6 +62,8 @@ struct cpt_request_info {
+ union ctrl_info ctrl; /* User control information */
+ struct cptvf_request req; /* Request Information (Core specific) */
+
++ bool may_sleep;
++
+ struct buf_ptr in[MAX_BUF_CNT];
+ struct buf_ptr out[MAX_BUF_CNT];
+
+diff --git a/drivers/crypto/ccp/ccp-dev.h b/drivers/crypto/ccp/ccp-dev.h
+index 3f68262d9ab4..87a34d91fdf7 100644
+--- a/drivers/crypto/ccp/ccp-dev.h
++++ b/drivers/crypto/ccp/ccp-dev.h
+@@ -469,6 +469,7 @@ struct ccp_sg_workarea {
+ unsigned int sg_used;
+
+ struct scatterlist *dma_sg;
++ struct scatterlist *dma_sg_head;
+ struct device *dma_dev;
+ unsigned int dma_count;
+ enum dma_data_direction dma_dir;
+diff --git a/drivers/crypto/ccp/ccp-ops.c b/drivers/crypto/ccp/ccp-ops.c
+index 422193690fd4..64112c736810 100644
+--- a/drivers/crypto/ccp/ccp-ops.c
++++ b/drivers/crypto/ccp/ccp-ops.c
+@@ -63,7 +63,7 @@ static u32 ccp_gen_jobid(struct ccp_device *ccp)
+ static void ccp_sg_free(struct ccp_sg_workarea *wa)
+ {
+ if (wa->dma_count)
+- dma_unmap_sg(wa->dma_dev, wa->dma_sg, wa->nents, wa->dma_dir);
++ dma_unmap_sg(wa->dma_dev, wa->dma_sg_head, wa->nents, wa->dma_dir);
+
+ wa->dma_count = 0;
+ }
+@@ -92,6 +92,7 @@ static int ccp_init_sg_workarea(struct ccp_sg_workarea *wa, struct device *dev,
+ return 0;
+
+ wa->dma_sg = sg;
++ wa->dma_sg_head = sg;
+ wa->dma_dev = dev;
+ wa->dma_dir = dma_dir;
+ wa->dma_count = dma_map_sg(dev, sg, wa->nents, dma_dir);
+@@ -104,14 +105,28 @@ static int ccp_init_sg_workarea(struct ccp_sg_workarea *wa, struct device *dev,
+ static void ccp_update_sg_workarea(struct ccp_sg_workarea *wa, unsigned int len)
+ {
+ unsigned int nbytes = min_t(u64, len, wa->bytes_left);
++ unsigned int sg_combined_len = 0;
+
+ if (!wa->sg)
+ return;
+
+ wa->sg_used += nbytes;
+ wa->bytes_left -= nbytes;
+- if (wa->sg_used == wa->sg->length) {
+- wa->sg = sg_next(wa->sg);
++ if (wa->sg_used == sg_dma_len(wa->dma_sg)) {
++ /* Advance to the next DMA scatterlist entry */
++ wa->dma_sg = sg_next(wa->dma_sg);
++
++ /* In the case that the DMA mapped scatterlist has entries
++ * that have been merged, the non-DMA mapped scatterlist
++ * must be advanced multiple times for each merged entry.
++ * This ensures that the current non-DMA mapped entry
++ * corresponds to the current DMA mapped entry.
++ */
++ do {
++ sg_combined_len += wa->sg->length;
++ wa->sg = sg_next(wa->sg);
++ } while (wa->sg_used > sg_combined_len);
++
+ wa->sg_used = 0;
+ }
+ }
+@@ -299,7 +314,7 @@ static unsigned int ccp_queue_buf(struct ccp_data *data, unsigned int from)
+ /* Update the structures and generate the count */
+ buf_count = 0;
+ while (sg_wa->bytes_left && (buf_count < dm_wa->length)) {
+- nbytes = min(sg_wa->sg->length - sg_wa->sg_used,
++ nbytes = min(sg_dma_len(sg_wa->dma_sg) - sg_wa->sg_used,
+ dm_wa->length - buf_count);
+ nbytes = min_t(u64, sg_wa->bytes_left, nbytes);
+
+@@ -331,11 +346,11 @@ static void ccp_prepare_data(struct ccp_data *src, struct ccp_data *dst,
+ * and destination. The resulting len values will always be <= UINT_MAX
+ * because the dma length is an unsigned int.
+ */
+- sg_src_len = sg_dma_len(src->sg_wa.sg) - src->sg_wa.sg_used;
++ sg_src_len = sg_dma_len(src->sg_wa.dma_sg) - src->sg_wa.sg_used;
+ sg_src_len = min_t(u64, src->sg_wa.bytes_left, sg_src_len);
+
+ if (dst) {
+- sg_dst_len = sg_dma_len(dst->sg_wa.sg) - dst->sg_wa.sg_used;
++ sg_dst_len = sg_dma_len(dst->sg_wa.dma_sg) - dst->sg_wa.sg_used;
+ sg_dst_len = min_t(u64, src->sg_wa.bytes_left, sg_dst_len);
+ op_len = min(sg_src_len, sg_dst_len);
+ } else {
+@@ -365,7 +380,7 @@ static void ccp_prepare_data(struct ccp_data *src, struct ccp_data *dst,
+ /* Enough data in the sg element, but we need to
+ * adjust for any previously copied data
+ */
+- op->src.u.dma.address = sg_dma_address(src->sg_wa.sg);
++ op->src.u.dma.address = sg_dma_address(src->sg_wa.dma_sg);
+ op->src.u.dma.offset = src->sg_wa.sg_used;
+ op->src.u.dma.length = op_len & ~(block_size - 1);
+
+@@ -386,7 +401,7 @@ static void ccp_prepare_data(struct ccp_data *src, struct ccp_data *dst,
+ /* Enough room in the sg element, but we need to
+ * adjust for any previously used area
+ */
+- op->dst.u.dma.address = sg_dma_address(dst->sg_wa.sg);
++ op->dst.u.dma.address = sg_dma_address(dst->sg_wa.dma_sg);
+ op->dst.u.dma.offset = dst->sg_wa.sg_used;
+ op->dst.u.dma.length = op->src.u.dma.length;
+ }
+@@ -2028,7 +2043,7 @@ ccp_run_passthru_cmd(struct ccp_cmd_queue *cmd_q, struct ccp_cmd *cmd)
+ dst.sg_wa.sg_used = 0;
+ for (i = 1; i <= src.sg_wa.dma_count; i++) {
+ if (!dst.sg_wa.sg ||
+- (dst.sg_wa.sg->length < src.sg_wa.sg->length)) {
++ (sg_dma_len(dst.sg_wa.sg) < sg_dma_len(src.sg_wa.sg))) {
+ ret = -EINVAL;
+ goto e_dst;
+ }
+@@ -2054,8 +2069,8 @@ ccp_run_passthru_cmd(struct ccp_cmd_queue *cmd_q, struct ccp_cmd *cmd)
+ goto e_dst;
+ }
+
+- dst.sg_wa.sg_used += src.sg_wa.sg->length;
+- if (dst.sg_wa.sg_used == dst.sg_wa.sg->length) {
++ dst.sg_wa.sg_used += sg_dma_len(src.sg_wa.sg);
++ if (dst.sg_wa.sg_used == sg_dma_len(dst.sg_wa.sg)) {
+ dst.sg_wa.sg = sg_next(dst.sg_wa.sg);
+ dst.sg_wa.sg_used = 0;
+ }
+diff --git a/drivers/crypto/ccree/cc_cipher.c b/drivers/crypto/ccree/cc_cipher.c
+index 872ea3ff1c6b..f144fe04748b 100644
+--- a/drivers/crypto/ccree/cc_cipher.c
++++ b/drivers/crypto/ccree/cc_cipher.c
+@@ -159,7 +159,6 @@ static int cc_cipher_init(struct crypto_tfm *tfm)
+ skcipher_alg.base);
+ struct device *dev = drvdata_to_dev(cc_alg->drvdata);
+ unsigned int max_key_buf_size = cc_alg->skcipher_alg.max_keysize;
+- int rc = 0;
+
+ dev_dbg(dev, "Initializing context @%p for %s\n", ctx_p,
+ crypto_tfm_alg_name(tfm));
+@@ -171,10 +170,19 @@ static int cc_cipher_init(struct crypto_tfm *tfm)
+ ctx_p->flow_mode = cc_alg->flow_mode;
+ ctx_p->drvdata = cc_alg->drvdata;
+
++ if (ctx_p->cipher_mode == DRV_CIPHER_ESSIV) {
++ /* Alloc hash tfm for essiv */
++ ctx_p->shash_tfm = crypto_alloc_shash("sha256-generic", 0, 0);
++ if (IS_ERR(ctx_p->shash_tfm)) {
++ dev_err(dev, "Error allocating hash tfm for ESSIV.\n");
++ return PTR_ERR(ctx_p->shash_tfm);
++ }
++ }
++
+ /* Allocate key buffer, cache line aligned */
+ ctx_p->user.key = kmalloc(max_key_buf_size, GFP_KERNEL);
+ if (!ctx_p->user.key)
+- return -ENOMEM;
++ goto free_shash;
+
+ dev_dbg(dev, "Allocated key buffer in context. key=@%p\n",
+ ctx_p->user.key);
+@@ -186,21 +194,19 @@ static int cc_cipher_init(struct crypto_tfm *tfm)
+ if (dma_mapping_error(dev, ctx_p->user.key_dma_addr)) {
+ dev_err(dev, "Mapping Key %u B at va=%pK for DMA failed\n",
+ max_key_buf_size, ctx_p->user.key);
+- return -ENOMEM;
++ goto free_key;
+ }
+ dev_dbg(dev, "Mapped key %u B at va=%pK to dma=%pad\n",
+ max_key_buf_size, ctx_p->user.key, &ctx_p->user.key_dma_addr);
+
+- if (ctx_p->cipher_mode == DRV_CIPHER_ESSIV) {
+- /* Alloc hash tfm for essiv */
+- ctx_p->shash_tfm = crypto_alloc_shash("sha256-generic", 0, 0);
+- if (IS_ERR(ctx_p->shash_tfm)) {
+- dev_err(dev, "Error allocating hash tfm for ESSIV.\n");
+- return PTR_ERR(ctx_p->shash_tfm);
+- }
+- }
++ return 0;
+
+- return rc;
++free_key:
++ kfree(ctx_p->user.key);
++free_shash:
++ crypto_free_shash(ctx_p->shash_tfm);
++
++ return -ENOMEM;
+ }
+
+ static void cc_cipher_exit(struct crypto_tfm *tfm)
+diff --git a/drivers/crypto/hisilicon/sec/sec_algs.c b/drivers/crypto/hisilicon/sec/sec_algs.c
+index c27e7160d2df..4ad4ffd90cee 100644
+--- a/drivers/crypto/hisilicon/sec/sec_algs.c
++++ b/drivers/crypto/hisilicon/sec/sec_algs.c
+@@ -175,7 +175,8 @@ static int sec_alloc_and_fill_hw_sgl(struct sec_hw_sgl **sec_sgl,
+ dma_addr_t *psec_sgl,
+ struct scatterlist *sgl,
+ int count,
+- struct sec_dev_info *info)
++ struct sec_dev_info *info,
++ gfp_t gfp)
+ {
+ struct sec_hw_sgl *sgl_current = NULL;
+ struct sec_hw_sgl *sgl_next;
+@@ -190,7 +191,7 @@ static int sec_alloc_and_fill_hw_sgl(struct sec_hw_sgl **sec_sgl,
+ sge_index = i % SEC_MAX_SGE_NUM;
+ if (sge_index == 0) {
+ sgl_next = dma_pool_zalloc(info->hw_sgl_pool,
+- GFP_KERNEL, &sgl_next_dma);
++ gfp, &sgl_next_dma);
+ if (!sgl_next) {
+ ret = -ENOMEM;
+ goto err_free_hw_sgls;
+@@ -545,14 +546,14 @@ void sec_alg_callback(struct sec_bd_info *resp, void *shadow)
+ }
+
+ static int sec_alg_alloc_and_calc_split_sizes(int length, size_t **split_sizes,
+- int *steps)
++ int *steps, gfp_t gfp)
+ {
+ size_t *sizes;
+ int i;
+
+ /* Split into suitable sized blocks */
+ *steps = roundup(length, SEC_REQ_LIMIT) / SEC_REQ_LIMIT;
+- sizes = kcalloc(*steps, sizeof(*sizes), GFP_KERNEL);
++ sizes = kcalloc(*steps, sizeof(*sizes), gfp);
+ if (!sizes)
+ return -ENOMEM;
+
+@@ -568,7 +569,7 @@ static int sec_map_and_split_sg(struct scatterlist *sgl, size_t *split_sizes,
+ int steps, struct scatterlist ***splits,
+ int **splits_nents,
+ int sgl_len_in,
+- struct device *dev)
++ struct device *dev, gfp_t gfp)
+ {
+ int ret, count;
+
+@@ -576,12 +577,12 @@ static int sec_map_and_split_sg(struct scatterlist *sgl, size_t *split_sizes,
+ if (!count)
+ return -EINVAL;
+
+- *splits = kcalloc(steps, sizeof(struct scatterlist *), GFP_KERNEL);
++ *splits = kcalloc(steps, sizeof(struct scatterlist *), gfp);
+ if (!*splits) {
+ ret = -ENOMEM;
+ goto err_unmap_sg;
+ }
+- *splits_nents = kcalloc(steps, sizeof(int), GFP_KERNEL);
++ *splits_nents = kcalloc(steps, sizeof(int), gfp);
+ if (!*splits_nents) {
+ ret = -ENOMEM;
+ goto err_free_splits;
+@@ -589,7 +590,7 @@ static int sec_map_and_split_sg(struct scatterlist *sgl, size_t *split_sizes,
+
+ /* output the scatter list before and after this */
+ ret = sg_split(sgl, count, 0, steps, split_sizes,
+- *splits, *splits_nents, GFP_KERNEL);
++ *splits, *splits_nents, gfp);
+ if (ret) {
+ ret = -ENOMEM;
+ goto err_free_splits_nents;
+@@ -630,13 +631,13 @@ static struct sec_request_el
+ int el_size, bool different_dest,
+ struct scatterlist *sgl_in, int n_ents_in,
+ struct scatterlist *sgl_out, int n_ents_out,
+- struct sec_dev_info *info)
++ struct sec_dev_info *info, gfp_t gfp)
+ {
+ struct sec_request_el *el;
+ struct sec_bd_info *req;
+ int ret;
+
+- el = kzalloc(sizeof(*el), GFP_KERNEL);
++ el = kzalloc(sizeof(*el), gfp);
+ if (!el)
+ return ERR_PTR(-ENOMEM);
+ el->el_length = el_size;
+@@ -668,7 +669,7 @@ static struct sec_request_el
+ el->sgl_in = sgl_in;
+
+ ret = sec_alloc_and_fill_hw_sgl(&el->in, &el->dma_in, el->sgl_in,
+- n_ents_in, info);
++ n_ents_in, info, gfp);
+ if (ret)
+ goto err_free_el;
+
+@@ -679,7 +680,7 @@ static struct sec_request_el
+ el->sgl_out = sgl_out;
+ ret = sec_alloc_and_fill_hw_sgl(&el->out, &el->dma_out,
+ el->sgl_out,
+- n_ents_out, info);
++ n_ents_out, info, gfp);
+ if (ret)
+ goto err_free_hw_sgl_in;
+
+@@ -720,6 +721,7 @@ static int sec_alg_skcipher_crypto(struct skcipher_request *skreq,
+ int *splits_out_nents = NULL;
+ struct sec_request_el *el, *temp;
+ bool split = skreq->src != skreq->dst;
++ gfp_t gfp = skreq->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP ? GFP_KERNEL : GFP_ATOMIC;
+
+ mutex_init(&sec_req->lock);
+ sec_req->req_base = &skreq->base;
+@@ -728,13 +730,13 @@ static int sec_alg_skcipher_crypto(struct skcipher_request *skreq,
+ sec_req->len_in = sg_nents(skreq->src);
+
+ ret = sec_alg_alloc_and_calc_split_sizes(skreq->cryptlen, &split_sizes,
+- &steps);
++ &steps, gfp);
+ if (ret)
+ return ret;
+ sec_req->num_elements = steps;
+ ret = sec_map_and_split_sg(skreq->src, split_sizes, steps, &splits_in,
+ &splits_in_nents, sec_req->len_in,
+- info->dev);
++ info->dev, gfp);
+ if (ret)
+ goto err_free_split_sizes;
+
+@@ -742,7 +744,7 @@ static int sec_alg_skcipher_crypto(struct skcipher_request *skreq,
+ sec_req->len_out = sg_nents(skreq->dst);
+ ret = sec_map_and_split_sg(skreq->dst, split_sizes, steps,
+ &splits_out, &splits_out_nents,
+- sec_req->len_out, info->dev);
++ sec_req->len_out, info->dev, gfp);
+ if (ret)
+ goto err_unmap_in_sg;
+ }
+@@ -775,7 +777,7 @@ static int sec_alg_skcipher_crypto(struct skcipher_request *skreq,
+ splits_in[i], splits_in_nents[i],
+ split ? splits_out[i] : NULL,
+ split ? splits_out_nents[i] : 0,
+- info);
++ info, gfp);
+ if (IS_ERR(el)) {
+ ret = PTR_ERR(el);
+ goto err_free_elements;
+diff --git a/drivers/crypto/qat/qat_common/qat_algs.c b/drivers/crypto/qat/qat_common/qat_algs.c
+index e14d3dd291f0..1b050391c0c9 100644
+--- a/drivers/crypto/qat/qat_common/qat_algs.c
++++ b/drivers/crypto/qat/qat_common/qat_algs.c
+@@ -55,6 +55,7 @@
+ #include <crypto/hmac.h>
+ #include <crypto/algapi.h>
+ #include <crypto/authenc.h>
++#include <crypto/xts.h>
+ #include <linux/dma-mapping.h>
+ #include "adf_accel_devices.h"
+ #include "adf_transport.h"
+@@ -1102,6 +1103,14 @@ static int qat_alg_skcipher_blk_encrypt(struct skcipher_request *req)
+ return qat_alg_skcipher_encrypt(req);
+ }
+
++static int qat_alg_skcipher_xts_encrypt(struct skcipher_request *req)
++{
++ if (req->cryptlen < XTS_BLOCK_SIZE)
++ return -EINVAL;
++
++ return qat_alg_skcipher_encrypt(req);
++}
++
+ static int qat_alg_skcipher_decrypt(struct skcipher_request *req)
+ {
+ struct crypto_skcipher *stfm = crypto_skcipher_reqtfm(req);
+@@ -1161,6 +1170,15 @@ static int qat_alg_skcipher_blk_decrypt(struct skcipher_request *req)
+
+ return qat_alg_skcipher_decrypt(req);
+ }
++
++static int qat_alg_skcipher_xts_decrypt(struct skcipher_request *req)
++{
++ if (req->cryptlen < XTS_BLOCK_SIZE)
++ return -EINVAL;
++
++ return qat_alg_skcipher_decrypt(req);
++}
++
+ static int qat_alg_aead_init(struct crypto_aead *tfm,
+ enum icp_qat_hw_auth_algo hash,
+ const char *hash_name)
+@@ -1354,8 +1372,8 @@ static struct skcipher_alg qat_skciphers[] = { {
+ .init = qat_alg_skcipher_init_tfm,
+ .exit = qat_alg_skcipher_exit_tfm,
+ .setkey = qat_alg_skcipher_xts_setkey,
+- .decrypt = qat_alg_skcipher_blk_decrypt,
+- .encrypt = qat_alg_skcipher_blk_encrypt,
++ .decrypt = qat_alg_skcipher_xts_decrypt,
++ .encrypt = qat_alg_skcipher_xts_encrypt,
+ .min_keysize = 2 * AES_MIN_KEY_SIZE,
+ .max_keysize = 2 * AES_MAX_KEY_SIZE,
+ .ivsize = AES_BLOCK_SIZE,
+diff --git a/drivers/crypto/qat/qat_common/qat_uclo.c b/drivers/crypto/qat/qat_common/qat_uclo.c
+index 6bd8f6a2a24f..aeb03081415c 100644
+--- a/drivers/crypto/qat/qat_common/qat_uclo.c
++++ b/drivers/crypto/qat/qat_common/qat_uclo.c
+@@ -332,13 +332,18 @@ static int qat_uclo_create_batch_init_list(struct icp_qat_fw_loader_handle
+ }
+ return 0;
+ out_err:
++ /* Do not free the list head unless we allocated it. */
++ tail_old = tail_old->next;
++ if (flag) {
++ kfree(*init_tab_base);
++ *init_tab_base = NULL;
++ }
++
+ while (tail_old) {
+ mem_init = tail_old->next;
+ kfree(tail_old);
+ tail_old = mem_init;
+ }
+- if (flag)
+- kfree(*init_tab_base);
+ return -ENOMEM;
+ }
+
+diff --git a/drivers/devfreq/devfreq.c b/drivers/devfreq/devfreq.c
+index 52b9c3e141f3..46c84dce6544 100644
+--- a/drivers/devfreq/devfreq.c
++++ b/drivers/devfreq/devfreq.c
+@@ -1657,8 +1657,7 @@ static int devfreq_summary_show(struct seq_file *s, void *data)
+ unsigned long cur_freq, min_freq, max_freq;
+ unsigned int polling_ms;
+
+- seq_printf(s, "%-30s %-10s %-10s %-15s %10s %12s %12s %12s\n",
+- "dev_name",
++ seq_printf(s, "%-30s %-30s %-15s %10s %12s %12s %12s\n",
+ "dev",
+ "parent_dev",
+ "governor",
+@@ -1666,10 +1665,9 @@ static int devfreq_summary_show(struct seq_file *s, void *data)
+ "cur_freq_Hz",
+ "min_freq_Hz",
+ "max_freq_Hz");
+- seq_printf(s, "%30s %10s %10s %15s %10s %12s %12s %12s\n",
++ seq_printf(s, "%30s %30s %15s %10s %12s %12s %12s\n",
++ "------------------------------",
+ "------------------------------",
+- "----------",
+- "----------",
+ "---------------",
+ "----------",
+ "------------",
+@@ -1698,8 +1696,7 @@ static int devfreq_summary_show(struct seq_file *s, void *data)
+ mutex_unlock(&devfreq->lock);
+
+ seq_printf(s,
+- "%-30s %-10s %-10s %-15s %10d %12ld %12ld %12ld\n",
+- dev_name(devfreq->dev.parent),
++ "%-30s %-30s %-15s %10d %12ld %12ld %12ld\n",
+ dev_name(&devfreq->dev),
+ p_devfreq ? dev_name(&p_devfreq->dev) : "null",
+ devfreq->governor_name,
+diff --git a/drivers/devfreq/rk3399_dmc.c b/drivers/devfreq/rk3399_dmc.c
+index 24f04f78285b..027769e39f9b 100644
+--- a/drivers/devfreq/rk3399_dmc.c
++++ b/drivers/devfreq/rk3399_dmc.c
+@@ -95,18 +95,20 @@ static int rk3399_dmcfreq_target(struct device *dev, unsigned long *freq,
+
+ mutex_lock(&dmcfreq->lock);
+
+- if (target_rate >= dmcfreq->odt_dis_freq)
+- odt_enable = true;
+-
+- /*
+- * This makes a SMC call to the TF-A to set the DDR PD (power-down)
+- * timings and to enable or disable the ODT (on-die termination)
+- * resistors.
+- */
+- arm_smccc_smc(ROCKCHIP_SIP_DRAM_FREQ, dmcfreq->odt_pd_arg0,
+- dmcfreq->odt_pd_arg1,
+- ROCKCHIP_SIP_CONFIG_DRAM_SET_ODT_PD,
+- odt_enable, 0, 0, 0, &res);
++ if (dmcfreq->regmap_pmu) {
++ if (target_rate >= dmcfreq->odt_dis_freq)
++ odt_enable = true;
++
++ /*
++ * This makes a SMC call to the TF-A to set the DDR PD
++ * (power-down) timings and to enable or disable the
++ * ODT (on-die termination) resistors.
++ */
++ arm_smccc_smc(ROCKCHIP_SIP_DRAM_FREQ, dmcfreq->odt_pd_arg0,
++ dmcfreq->odt_pd_arg1,
++ ROCKCHIP_SIP_CONFIG_DRAM_SET_ODT_PD,
++ odt_enable, 0, 0, 0, &res);
++ }
+
+ /*
+ * If frequency scaling from low to high, adjust voltage first.
+@@ -371,13 +373,14 @@ static int rk3399_dmcfreq_probe(struct platform_device *pdev)
+ }
+
+ node = of_parse_phandle(np, "rockchip,pmu", 0);
+- if (node) {
+- data->regmap_pmu = syscon_node_to_regmap(node);
+- of_node_put(node);
+- if (IS_ERR(data->regmap_pmu)) {
+- ret = PTR_ERR(data->regmap_pmu);
+- goto err_edev;
+- }
++ if (!node)
++ goto no_pmu;
++
++ data->regmap_pmu = syscon_node_to_regmap(node);
++ of_node_put(node);
++ if (IS_ERR(data->regmap_pmu)) {
++ ret = PTR_ERR(data->regmap_pmu);
++ goto err_edev;
+ }
+
+ regmap_read(data->regmap_pmu, RK3399_PMUGRF_OS_REG2, &val);
+@@ -399,6 +402,7 @@ static int rk3399_dmcfreq_probe(struct platform_device *pdev)
+ goto err_edev;
+ };
+
++no_pmu:
+ arm_smccc_smc(ROCKCHIP_SIP_DRAM_FREQ, 0, 0,
+ ROCKCHIP_SIP_CONFIG_DRAM_INIT,
+ 0, 0, 0, 0, &res);
+diff --git a/drivers/dma-buf/st-dma-fence-chain.c b/drivers/dma-buf/st-dma-fence-chain.c
+index 5d45ba7ba3cd..9525f7f56119 100644
+--- a/drivers/dma-buf/st-dma-fence-chain.c
++++ b/drivers/dma-buf/st-dma-fence-chain.c
+@@ -318,15 +318,16 @@ static int find_out_of_order(void *arg)
+ goto err;
+ }
+
+- if (fence && fence != fc.chains[1]) {
++ /*
++ * We signaled the middle fence (2) of the 1-2-3 chain. The behavior
++ * of the dma-fence-chain is to make us wait for all the fences up to
++ * the point we want. Since fence 1 is still not signaled, this is what
++ * we should get as fence to wait upon (fence 2 being garbage
++ * collected during the traversal of the chain).
++ */
++ if (fence != fc.chains[0]) {
+ pr_err("Incorrect chain-fence.seqno:%lld reported for completed seqno:2\n",
+- fence->seqno);
+-
+- dma_fence_get(fence);
+- err = dma_fence_chain_find_seqno(&fence, 2);
+- dma_fence_put(fence);
+- if (err)
+- pr_err("Reported %d for finding self!\n", err);
++ fence ? fence->seqno : 0);
+
+ err = -EINVAL;
+ }
+@@ -415,20 +416,18 @@ static int __find_race(void *arg)
+ if (!fence)
+ goto signal;
+
+- err = dma_fence_chain_find_seqno(&fence, seqno);
+- if (err) {
+- pr_err("Reported an invalid fence for find-self:%d\n",
+- seqno);
+- dma_fence_put(fence);
+- break;
+- }
+-
+- if (fence->seqno < seqno) {
+- pr_err("Reported an earlier fence.seqno:%lld for seqno:%d\n",
+- fence->seqno, seqno);
+- err = -EINVAL;
+- dma_fence_put(fence);
+- break;
++ /*
++ * We can only find ourselves if we are on the fence we were
++ * looking for.
++ */
++ if (fence->seqno == seqno) {
++ err = dma_fence_chain_find_seqno(&fence, seqno);
++ if (err) {
++ pr_err("Reported an invalid fence for find-self:%d\n",
++ seqno);
++ dma_fence_put(fence);
++ break;
++ }
+ }
+
+ dma_fence_put(fence);
+diff --git a/drivers/edac/edac_device_sysfs.c b/drivers/edac/edac_device_sysfs.c
+index 0e7ea3591b78..5e7593753799 100644
+--- a/drivers/edac/edac_device_sysfs.c
++++ b/drivers/edac/edac_device_sysfs.c
+@@ -275,6 +275,7 @@ int edac_device_register_sysfs_main_kobj(struct edac_device_ctl_info *edac_dev)
+
+ /* Error exit stack */
+ err_kobj_reg:
++ kobject_put(&edac_dev->kobj);
+ module_put(edac_dev->owner);
+
+ err_out:
+diff --git a/drivers/edac/edac_pci_sysfs.c b/drivers/edac/edac_pci_sysfs.c
+index 72c9eb9fdffb..53042af7262e 100644
+--- a/drivers/edac/edac_pci_sysfs.c
++++ b/drivers/edac/edac_pci_sysfs.c
+@@ -386,7 +386,7 @@ static int edac_pci_main_kobj_setup(void)
+
+ /* Error unwind statck */
+ kobject_init_and_add_fail:
+- kfree(edac_pci_top_main_kobj);
++ kobject_put(edac_pci_top_main_kobj);
+
+ kzalloc_fail:
+ module_put(THIS_MODULE);
+diff --git a/drivers/firmware/arm_scmi/scmi_pm_domain.c b/drivers/firmware/arm_scmi/scmi_pm_domain.c
+index bafbfe358f97..9e44479f0284 100644
+--- a/drivers/firmware/arm_scmi/scmi_pm_domain.c
++++ b/drivers/firmware/arm_scmi/scmi_pm_domain.c
+@@ -85,7 +85,10 @@ static int scmi_pm_domain_probe(struct scmi_device *sdev)
+ for (i = 0; i < num_domains; i++, scmi_pd++) {
+ u32 state;
+
+- domains[i] = &scmi_pd->genpd;
++ if (handle->power_ops->state_get(handle, i, &state)) {
++ dev_warn(dev, "failed to get state for domain %d\n", i);
++ continue;
++ }
+
+ scmi_pd->domain = i;
+ scmi_pd->handle = handle;
+@@ -94,13 +97,10 @@ static int scmi_pm_domain_probe(struct scmi_device *sdev)
+ scmi_pd->genpd.power_off = scmi_pd_power_off;
+ scmi_pd->genpd.power_on = scmi_pd_power_on;
+
+- if (handle->power_ops->state_get(handle, i, &state)) {
+- dev_warn(dev, "failed to get state for domain %d\n", i);
+- continue;
+- }
+-
+ pm_genpd_init(&scmi_pd->genpd, NULL,
+ state == SCMI_POWER_STATE_GENERIC_OFF);
++
++ domains[i] = &scmi_pd->genpd;
+ }
+
+ scmi_pd_data->domains = domains;
+diff --git a/drivers/firmware/qcom_scm.c b/drivers/firmware/qcom_scm.c
+index 0e7233a20f34..d4fda210adfe 100644
+--- a/drivers/firmware/qcom_scm.c
++++ b/drivers/firmware/qcom_scm.c
+@@ -391,7 +391,7 @@ static int __qcom_scm_set_dload_mode(struct device *dev, bool enable)
+
+ desc.args[1] = enable ? QCOM_SCM_BOOT_SET_DLOAD_MODE : 0;
+
+- return qcom_scm_call(__scm->dev, &desc, NULL);
++ return qcom_scm_call_atomic(__scm->dev, &desc, NULL);
+ }
+
+ static void qcom_scm_set_download_mode(bool enable)
+@@ -650,7 +650,7 @@ int qcom_scm_io_readl(phys_addr_t addr, unsigned int *val)
+ int ret;
+
+
+- ret = qcom_scm_call(__scm->dev, &desc, &res);
++ ret = qcom_scm_call_atomic(__scm->dev, &desc, &res);
+ if (ret >= 0)
+ *val = res.result[0];
+
+@@ -669,8 +669,7 @@ int qcom_scm_io_writel(phys_addr_t addr, unsigned int val)
+ .owner = ARM_SMCCC_OWNER_SIP,
+ };
+
+-
+- return qcom_scm_call(__scm->dev, &desc, NULL);
++ return qcom_scm_call_atomic(__scm->dev, &desc, NULL);
+ }
+ EXPORT_SYMBOL(qcom_scm_io_writel);
+
+diff --git a/drivers/gpio/gpiolib-devres.c b/drivers/gpio/gpiolib-devres.c
+index 5c91c4365da1..7dbce4c4ebdf 100644
+--- a/drivers/gpio/gpiolib-devres.c
++++ b/drivers/gpio/gpiolib-devres.c
+@@ -487,10 +487,12 @@ static void devm_gpio_chip_release(struct device *dev, void *res)
+ }
+
+ /**
+- * devm_gpiochip_add_data() - Resource managed gpiochip_add_data()
++ * devm_gpiochip_add_data_with_key() - Resource managed gpiochip_add_data_with_key()
+ * @dev: pointer to the device that gpio_chip belongs to.
+ * @gc: the GPIO chip to register
+ * @data: driver-private data associated with this chip
++ * @lock_key: lockdep class for IRQ lock
++ * @request_key: lockdep class for IRQ request
+ *
+ * Context: potentially before irqs will work
+ *
+@@ -501,8 +503,9 @@ static void devm_gpio_chip_release(struct device *dev, void *res)
+ * gc->base is invalid or already associated with a different chip.
+ * Otherwise it returns zero as a success code.
+ */
+-int devm_gpiochip_add_data(struct device *dev, struct gpio_chip *gc,
+- void *data)
++int devm_gpiochip_add_data_with_key(struct device *dev, struct gpio_chip *gc, void *data,
++ struct lock_class_key *lock_key,
++ struct lock_class_key *request_key)
+ {
+ struct gpio_chip **ptr;
+ int ret;
+@@ -512,7 +515,7 @@ int devm_gpiochip_add_data(struct device *dev, struct gpio_chip *gc,
+ if (!ptr)
+ return -ENOMEM;
+
+- ret = gpiochip_add_data(gc, data);
++ ret = gpiochip_add_data_with_key(gc, data, lock_key, request_key);
+ if (ret < 0) {
+ devres_free(ptr);
+ return ret;
+@@ -523,4 +526,4 @@ int devm_gpiochip_add_data(struct device *dev, struct gpio_chip *gc,
+
+ return 0;
+ }
+-EXPORT_SYMBOL_GPL(devm_gpiochip_add_data);
++EXPORT_SYMBOL_GPL(devm_gpiochip_add_data_with_key);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
+index a414da22a359..f87b225437fc 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
+@@ -223,12 +223,16 @@ static int amdgpu_debugfs_process_reg_op(bool read, struct file *f,
+ *pos &= (1UL << 22) - 1;
+
+ r = pm_runtime_get_sync(adev->ddev->dev);
+- if (r < 0)
++ if (r < 0) {
++ pm_runtime_put_autosuspend(adev->ddev->dev);
+ return r;
++ }
+
+ r = amdgpu_virt_enable_access_debugfs(adev);
+- if (r < 0)
++ if (r < 0) {
++ pm_runtime_put_autosuspend(adev->ddev->dev);
+ return r;
++ }
+
+ if (use_bank) {
+ if ((sh_bank != 0xFFFFFFFF && sh_bank >= adev->gfx.config.max_sh_per_se) ||
+@@ -332,12 +336,16 @@ static ssize_t amdgpu_debugfs_regs_pcie_read(struct file *f, char __user *buf,
+ return -EINVAL;
+
+ r = pm_runtime_get_sync(adev->ddev->dev);
+- if (r < 0)
++ if (r < 0) {
++ pm_runtime_put_autosuspend(adev->ddev->dev);
+ return r;
++ }
+
+ r = amdgpu_virt_enable_access_debugfs(adev);
+- if (r < 0)
++ if (r < 0) {
++ pm_runtime_put_autosuspend(adev->ddev->dev);
+ return r;
++ }
+
+ while (size) {
+ uint32_t value;
+@@ -387,12 +395,16 @@ static ssize_t amdgpu_debugfs_regs_pcie_write(struct file *f, const char __user
+ return -EINVAL;
+
+ r = pm_runtime_get_sync(adev->ddev->dev);
+- if (r < 0)
++ if (r < 0) {
++ pm_runtime_put_autosuspend(adev->ddev->dev);
+ return r;
++ }
+
+ r = amdgpu_virt_enable_access_debugfs(adev);
+- if (r < 0)
++ if (r < 0) {
++ pm_runtime_put_autosuspend(adev->ddev->dev);
+ return r;
++ }
+
+ while (size) {
+ uint32_t value;
+@@ -443,12 +455,16 @@ static ssize_t amdgpu_debugfs_regs_didt_read(struct file *f, char __user *buf,
+ return -EINVAL;
+
+ r = pm_runtime_get_sync(adev->ddev->dev);
+- if (r < 0)
++ if (r < 0) {
++ pm_runtime_put_autosuspend(adev->ddev->dev);
+ return r;
++ }
+
+ r = amdgpu_virt_enable_access_debugfs(adev);
+- if (r < 0)
++ if (r < 0) {
++ pm_runtime_put_autosuspend(adev->ddev->dev);
+ return r;
++ }
+
+ while (size) {
+ uint32_t value;
+@@ -498,12 +514,16 @@ static ssize_t amdgpu_debugfs_regs_didt_write(struct file *f, const char __user
+ return -EINVAL;
+
+ r = pm_runtime_get_sync(adev->ddev->dev);
+- if (r < 0)
++ if (r < 0) {
++ pm_runtime_put_autosuspend(adev->ddev->dev);
+ return r;
++ }
+
+ r = amdgpu_virt_enable_access_debugfs(adev);
+- if (r < 0)
++ if (r < 0) {
++ pm_runtime_put_autosuspend(adev->ddev->dev);
+ return r;
++ }
+
+ while (size) {
+ uint32_t value;
+@@ -554,12 +574,16 @@ static ssize_t amdgpu_debugfs_regs_smc_read(struct file *f, char __user *buf,
+ return -EINVAL;
+
+ r = pm_runtime_get_sync(adev->ddev->dev);
+- if (r < 0)
++ if (r < 0) {
++ pm_runtime_put_autosuspend(adev->ddev->dev);
+ return r;
++ }
+
+ r = amdgpu_virt_enable_access_debugfs(adev);
+- if (r < 0)
++ if (r < 0) {
++ pm_runtime_put_autosuspend(adev->ddev->dev);
+ return r;
++ }
+
+ while (size) {
+ uint32_t value;
+@@ -609,12 +633,16 @@ static ssize_t amdgpu_debugfs_regs_smc_write(struct file *f, const char __user *
+ return -EINVAL;
+
+ r = pm_runtime_get_sync(adev->ddev->dev);
+- if (r < 0)
++ if (r < 0) {
++ pm_runtime_put_autosuspend(adev->ddev->dev);
+ return r;
++ }
+
+ r = amdgpu_virt_enable_access_debugfs(adev);
+- if (r < 0)
++ if (r < 0) {
++ pm_runtime_put_autosuspend(adev->ddev->dev);
+ return r;
++ }
+
+ while (size) {
+ uint32_t value;
+@@ -764,12 +792,16 @@ static ssize_t amdgpu_debugfs_sensor_read(struct file *f, char __user *buf,
+ valuesize = sizeof(values);
+
+ r = pm_runtime_get_sync(adev->ddev->dev);
+- if (r < 0)
++ if (r < 0) {
++ pm_runtime_put_autosuspend(adev->ddev->dev);
+ return r;
++ }
+
+ r = amdgpu_virt_enable_access_debugfs(adev);
+- if (r < 0)
++ if (r < 0) {
++ pm_runtime_put_autosuspend(adev->ddev->dev);
+ return r;
++ }
+
+ r = amdgpu_dpm_read_sensor(adev, idx, &values[0], &valuesize);
+
+@@ -842,12 +874,16 @@ static ssize_t amdgpu_debugfs_wave_read(struct file *f, char __user *buf,
+ simd = (*pos & GENMASK_ULL(44, 37)) >> 37;
+
+ r = pm_runtime_get_sync(adev->ddev->dev);
+- if (r < 0)
++ if (r < 0) {
++ pm_runtime_put_autosuspend(adev->ddev->dev);
+ return r;
++ }
+
+ r = amdgpu_virt_enable_access_debugfs(adev);
+- if (r < 0)
++ if (r < 0) {
++ pm_runtime_put_autosuspend(adev->ddev->dev);
+ return r;
++ }
+
+ /* switch to the specific se/sh/cu */
+ mutex_lock(&adev->grbm_idx_mutex);
+@@ -941,7 +977,7 @@ static ssize_t amdgpu_debugfs_gpr_read(struct file *f, char __user *buf,
+
+ r = amdgpu_virt_enable_access_debugfs(adev);
+ if (r < 0)
+- return r;
++ goto err;
+
+ /* switch to the specific se/sh/cu */
+ mutex_lock(&adev->grbm_idx_mutex);
+@@ -977,6 +1013,7 @@ static ssize_t amdgpu_debugfs_gpr_read(struct file *f, char __user *buf,
+ }
+
+ err:
++ pm_runtime_put_autosuspend(adev->ddev->dev);
+ kfree(data);
+ amdgpu_virt_disable_access_debugfs(adev);
+ return result;
+@@ -1003,8 +1040,10 @@ static ssize_t amdgpu_debugfs_gfxoff_write(struct file *f, const char __user *bu
+ return -EINVAL;
+
+ r = pm_runtime_get_sync(adev->ddev->dev);
+- if (r < 0)
++ if (r < 0) {
++ pm_runtime_put_autosuspend(adev->ddev->dev);
+ return r;
++ }
+
+ while (size) {
+ uint32_t value;
+@@ -1140,8 +1179,10 @@ static int amdgpu_debugfs_test_ib(struct seq_file *m, void *data)
+ int r = 0, i;
+
+ r = pm_runtime_get_sync(dev->dev);
+- if (r < 0)
++ if (r < 0) {
++ pm_runtime_put_autosuspend(adev->ddev->dev);
+ return r;
++ }
+
+ /* Avoid accidently unparking the sched thread during GPU reset */
+ mutex_lock(&adev->lock_reset);
+@@ -1197,8 +1238,10 @@ static int amdgpu_debugfs_evict_vram(struct seq_file *m, void *data)
+ int r;
+
+ r = pm_runtime_get_sync(dev->dev);
+- if (r < 0)
++ if (r < 0) {
++ pm_runtime_put_autosuspend(adev->ddev->dev);
+ return r;
++ }
+
+ seq_printf(m, "(%d)\n", amdgpu_bo_evict_vram(adev));
+
+@@ -1216,8 +1259,10 @@ static int amdgpu_debugfs_evict_gtt(struct seq_file *m, void *data)
+ int r;
+
+ r = pm_runtime_get_sync(dev->dev);
+- if (r < 0)
++ if (r < 0) {
++ pm_runtime_put_autosuspend(adev->ddev->dev);
+ return r;
++ }
+
+ seq_printf(m, "(%d)\n", ttm_bo_evict_mm(&adev->mman.bdev, TTM_PL_TT));
+
+@@ -1417,16 +1462,16 @@ static int amdgpu_debugfs_sclk_set(void *data, u64 val)
+ return -EINVAL;
+
+ ret = pm_runtime_get_sync(adev->ddev->dev);
+- if (ret < 0)
++ if (ret < 0) {
++ pm_runtime_put_autosuspend(adev->ddev->dev);
+ return ret;
++ }
+
+ if (is_support_sw_smu(adev)) {
+ ret = smu_get_dpm_freq_range(&adev->smu, SMU_SCLK, &min_freq, &max_freq, true);
+ if (ret || val > max_freq || val < min_freq)
+ return -EINVAL;
+ ret = smu_set_soft_freq_range(&adev->smu, SMU_SCLK, (uint32_t)val, (uint32_t)val, true);
+- } else {
+- return 0;
+ }
+
+ pm_runtime_mark_last_busy(adev->ddev->dev);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
+index 43d8ed7dbd00..652c57a3b847 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
+@@ -587,7 +587,7 @@ struct drm_gem_object *amdgpu_gem_prime_import(struct drm_device *dev,
+ attach = dma_buf_dynamic_attach(dma_buf, dev->dev,
+ &amdgpu_dma_buf_attach_ops, obj);
+ if (IS_ERR(attach)) {
+- drm_gem_object_put(obj);
++ drm_gem_object_put_unlocked(obj);
+ return ERR_CAST(attach);
+ }
+
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
+index d878fe7fee51..3414e119f0cb 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
+@@ -416,7 +416,9 @@ int amdgpu_fence_driver_start_ring(struct amdgpu_ring *ring,
+ ring->fence_drv.gpu_addr = adev->uvd.inst[ring->me].gpu_addr + index;
+ }
+ amdgpu_fence_write(ring, atomic_read(&ring->fence_drv.last_seq));
+- amdgpu_irq_get(adev, irq_src, irq_type);
++
++ if (irq_src)
++ amdgpu_irq_get(adev, irq_src, irq_type);
+
+ ring->fence_drv.irq_src = irq_src;
+ ring->fence_drv.irq_type = irq_type;
+@@ -537,8 +539,9 @@ void amdgpu_fence_driver_fini(struct amdgpu_device *adev)
+ /* no need to trigger GPU reset as we are unloading */
+ amdgpu_fence_driver_force_completion(ring);
+ }
+- amdgpu_irq_put(adev, ring->fence_drv.irq_src,
+- ring->fence_drv.irq_type);
++ if (ring->fence_drv.irq_src)
++ amdgpu_irq_put(adev, ring->fence_drv.irq_src,
++ ring->fence_drv.irq_type);
+ drm_sched_fini(&ring->sched);
+ del_timer_sync(&ring->fence_drv.fallback_timer);
+ for (j = 0; j <= ring->fence_drv.num_fences_mask; ++j)
+@@ -574,8 +577,9 @@ void amdgpu_fence_driver_suspend(struct amdgpu_device *adev)
+ }
+
+ /* disable the interrupt */
+- amdgpu_irq_put(adev, ring->fence_drv.irq_src,
+- ring->fence_drv.irq_type);
++ if (ring->fence_drv.irq_src)
++ amdgpu_irq_put(adev, ring->fence_drv.irq_src,
++ ring->fence_drv.irq_type);
+ }
+ }
+
+@@ -601,8 +605,9 @@ void amdgpu_fence_driver_resume(struct amdgpu_device *adev)
+ continue;
+
+ /* enable the interrupt */
+- amdgpu_irq_get(adev, ring->fence_drv.irq_src,
+- ring->fence_drv.irq_type);
++ if (ring->fence_drv.irq_src)
++ amdgpu_irq_get(adev, ring->fence_drv.irq_src,
++ ring->fence_drv.irq_type);
+ }
+ }
+
+diff --git a/drivers/gpu/drm/amd/amdgpu/jpeg_v2_5.c b/drivers/gpu/drm/amd/amdgpu/jpeg_v2_5.c
+index 713c32560445..25ebf8f19b85 100644
+--- a/drivers/gpu/drm/amd/amdgpu/jpeg_v2_5.c
++++ b/drivers/gpu/drm/amd/amdgpu/jpeg_v2_5.c
+@@ -462,7 +462,7 @@ static int jpeg_v2_5_wait_for_idle(void *handle)
+ return ret;
+ }
+
+- return ret;
++ return 0;
+ }
+
+ static int jpeg_v2_5_set_clockgating_state(void *handle,
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_pp_smu.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_pp_smu.c
+index a2e1a73f66b8..5c6a6ae48d39 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_pp_smu.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_pp_smu.c
+@@ -106,7 +106,7 @@ bool dm_pp_apply_display_requirements(
+ adev->powerplay.pp_funcs->display_configuration_change(
+ adev->powerplay.pp_handle,
+ &adev->pm.pm_display_cfg);
+- else
++ else if (adev->smu.ppt_funcs)
+ smu_display_configuration_change(smu,
+ &adev->pm.pm_display_cfg);
+
+@@ -530,6 +530,8 @@ bool dm_pp_get_static_clocks(
+ &pp_clk_info);
+ else if (adev->smu.ppt_funcs)
+ ret = smu_get_current_clocks(&adev->smu, &pp_clk_info);
++ else
++ return false;
+ if (ret)
+ return false;
+
+@@ -590,7 +592,7 @@ void pp_rv_set_wm_ranges(struct pp_smu *pp,
+ if (pp_funcs && pp_funcs->set_watermarks_for_clocks_ranges)
+ pp_funcs->set_watermarks_for_clocks_ranges(pp_handle,
+ &wm_with_clock_ranges);
+- else
++ else if (adev->smu.ppt_funcs)
+ smu_set_watermarks_for_clock_ranges(&adev->smu,
+ &wm_with_clock_ranges);
+ }
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link.c b/drivers/gpu/drm/amd/display/dc/core/dc_link.c
+index 48ab51533d5d..841cc051b7d0 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc_link.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc_link.c
+@@ -3298,9 +3298,11 @@ void core_link_disable_stream(struct pipe_ctx *pipe_ctx)
+ write_i2c_redriver_setting(pipe_ctx, false);
+ }
+ }
+- dc->hwss.disable_stream(pipe_ctx);
+
+ disable_link(pipe_ctx->stream->link, pipe_ctx->stream->signal);
++
++ dc->hwss.disable_stream(pipe_ctx);
++
+ if (pipe_ctx->stream->timing.flags.DSC) {
+ if (dc_is_dp_signal(pipe_ctx->stream->signal))
+ dp_set_dsc_enable(pipe_ctx, false);
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link_ddc.c b/drivers/gpu/drm/amd/display/dc/core/dc_link_ddc.c
+index aefd29a440b5..be8f265976b0 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc_link_ddc.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc_link_ddc.c
+@@ -503,7 +503,7 @@ bool dal_ddc_service_query_ddc_data(
+ uint8_t *read_buf,
+ uint32_t read_size)
+ {
+- bool ret = false;
++ bool success = true;
+ uint32_t payload_size =
+ dal_ddc_service_is_in_aux_transaction_mode(ddc) ?
+ DEFAULT_AUX_MAX_DATA_SIZE : EDID_SEGMENT_SIZE;
+@@ -527,7 +527,6 @@ bool dal_ddc_service_query_ddc_data(
+ * but we want to read 256 over i2c!!!!*/
+ if (dal_ddc_service_is_in_aux_transaction_mode(ddc)) {
+ struct aux_payload payload;
+- bool read_available = true;
+
+ payload.i2c_over_aux = true;
+ payload.address = address;
+@@ -536,21 +535,26 @@ bool dal_ddc_service_query_ddc_data(
+
+ if (write_size != 0) {
+ payload.write = true;
+- payload.mot = false;
++ /* should not set mot (middle of transaction) to 0
++ * if there are pending read payloads
++ */
++ payload.mot = read_size == 0 ? false : true;
+ payload.length = write_size;
+ payload.data = write_buf;
+
+- ret = dal_ddc_submit_aux_command(ddc, &payload);
+- read_available = ret;
++ success = dal_ddc_submit_aux_command(ddc, &payload);
+ }
+
+- if (read_size != 0 && read_available) {
++ if (read_size != 0 && success) {
+ payload.write = false;
++ /* should set mot (middle of transaction) to 0
++ * since it is the last payload to send
++ */
+ payload.mot = false;
+ payload.length = read_size;
+ payload.data = read_buf;
+
+- ret = dal_ddc_submit_aux_command(ddc, &payload);
++ success = dal_ddc_submit_aux_command(ddc, &payload);
+ }
+ } else {
+ struct i2c_command command = {0};
+@@ -573,7 +577,7 @@ bool dal_ddc_service_query_ddc_data(
+ command.number_of_payloads =
+ dal_ddc_i2c_payloads_get_count(&payloads);
+
+- ret = dm_helpers_submit_i2c(
++ success = dm_helpers_submit_i2c(
+ ddc->ctx,
+ ddc->link,
+ &command);
+@@ -581,7 +585,7 @@ bool dal_ddc_service_query_ddc_data(
+ dal_ddc_i2c_payloads_destroy(&payloads);
+ }
+
+- return ret;
++ return success;
+ }
+
+ bool dal_ddc_submit_aux_command(struct ddc_service *ddc,
+@@ -598,7 +602,7 @@ bool dal_ddc_submit_aux_command(struct ddc_service *ddc,
+
+ do {
+ struct aux_payload current_payload;
+- bool is_end_of_payload = (retrieved + DEFAULT_AUX_MAX_DATA_SIZE) >
++ bool is_end_of_payload = (retrieved + DEFAULT_AUX_MAX_DATA_SIZE) >=
+ payload->length;
+
+ current_payload.address = payload->address;
+@@ -607,7 +611,10 @@ bool dal_ddc_submit_aux_command(struct ddc_service *ddc,
+ current_payload.i2c_over_aux = payload->i2c_over_aux;
+ current_payload.length = is_end_of_payload ?
+ payload->length - retrieved : DEFAULT_AUX_MAX_DATA_SIZE;
+- current_payload.mot = !is_end_of_payload;
++ /* set mot (middle of transaction) to false
++ * if it is the last payload
++ */
++ current_payload.mot = is_end_of_payload ? payload->mot : true;
+ current_payload.reply = payload->reply;
+ current_payload.write = payload->write;
+
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c b/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
+index 91cd884d6f25..6124af571bff 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
+@@ -1102,6 +1102,10 @@ static inline enum link_training_result perform_link_training_int(
+ dpcd_pattern.v1_4.TRAINING_PATTERN_SET = DPCD_TRAINING_PATTERN_VIDEOIDLE;
+ dpcd_set_training_pattern(link, dpcd_pattern);
+
++ /* delay 5ms after notifying sink of idle pattern before switching output */
++ if (link->connector_signal != SIGNAL_TYPE_EDP)
++ msleep(5);
++
+ /* 4. mainlink output idle pattern*/
+ dp_set_hw_test_pattern(link, DP_TEST_PATTERN_VIDEO_MODE, NULL, 0);
+
+@@ -1551,6 +1555,12 @@ bool perform_link_training_with_retries(
+ struct dc_link *link = stream->link;
+ enum dp_panel_mode panel_mode = dp_get_panel_mode(link);
+
++ /* We need to do this before the link training to ensure the idle pattern in SST
++ * mode will be sent right after the link training
++ */
++ link->link_enc->funcs->connect_dig_be_to_fe(link->link_enc,
++ pipe_ctx->stream_res.stream_enc->id, true);
++
+ for (j = 0; j < attempts; ++j) {
+
+ dp_enable_link_phy(
+@@ -1567,12 +1577,6 @@ bool perform_link_training_with_retries(
+
+ dp_set_panel_mode(link, panel_mode);
+
+- /* We need to do this before the link training to ensure the idle pattern in SST
+- * mode will be sent right after the link training
+- */
+- link->link_enc->funcs->connect_dig_be_to_fe(link->link_enc,
+- pipe_ctx->stream_res.stream_enc->id, true);
+-
+ if (link->aux_access_disabled) {
+ dc_link_dp_perform_link_training_skip_aux(link, link_setting);
+ return true;
+diff --git a/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c b/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c
+index b77e9dc16086..2af1d74d16ad 100644
+--- a/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c
++++ b/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c
+@@ -1069,8 +1069,17 @@ void dce110_blank_stream(struct pipe_ctx *pipe_ctx)
+ link->dc->hwss.set_abm_immediate_disable(pipe_ctx);
+ }
+
+- if (dc_is_dp_signal(pipe_ctx->stream->signal))
++ if (dc_is_dp_signal(pipe_ctx->stream->signal)) {
+ pipe_ctx->stream_res.stream_enc->funcs->dp_blank(pipe_ctx->stream_res.stream_enc);
++
++ /*
++ * After the output is set to idle pattern, some sinks need time to
++ * recognize the stream has changed or they enter protection state and hang.
++ */
++ if (!dc_is_embedded_signal(pipe_ctx->stream->signal))
++ msleep(60);
++ }
++
+ }
+
+
+diff --git a/drivers/gpu/drm/amd/powerplay/arcturus_ppt.c b/drivers/gpu/drm/amd/powerplay/arcturus_ppt.c
+index 27c5fc9572b2..e4630a76d7bf 100644
+--- a/drivers/gpu/drm/amd/powerplay/arcturus_ppt.c
++++ b/drivers/gpu/drm/amd/powerplay/arcturus_ppt.c
+@@ -2042,8 +2042,6 @@ static void arcturus_fill_eeprom_i2c_req(SwI2cRequest_t *req, bool write,
+ {
+ int i;
+
+- BUG_ON(numbytes > MAX_SW_I2C_COMMANDS);
+-
+ req->I2CcontrollerPort = 0;
+ req->I2CSpeed = 2;
+ req->SlaveAddress = address;
+@@ -2081,6 +2079,12 @@ static int arcturus_i2c_eeprom_read_data(struct i2c_adapter *control,
+ struct smu_table_context *smu_table = &adev->smu.smu_table;
+ struct smu_table *table = &smu_table->driver_table;
+
++ if (numbytes > MAX_SW_I2C_COMMANDS) {
++ dev_err(adev->dev, "numbytes requested %d is over max allowed %d\n",
++ numbytes, MAX_SW_I2C_COMMANDS);
++ return -EINVAL;
++ }
++
+ memset(&req, 0, sizeof(req));
+ arcturus_fill_eeprom_i2c_req(&req, false, address, numbytes, data);
+
+@@ -2117,6 +2121,12 @@ static int arcturus_i2c_eeprom_write_data(struct i2c_adapter *control,
+ SwI2cRequest_t req;
+ struct amdgpu_device *adev = to_amdgpu_device(control);
+
++ if (numbytes > MAX_SW_I2C_COMMANDS) {
++ dev_err(adev->dev, "numbytes requested %d is over max allowed %d\n",
++ numbytes, MAX_SW_I2C_COMMANDS);
++ return -EINVAL;
++ }
++
+ memset(&req, 0, sizeof(req));
+ arcturus_fill_eeprom_i2c_req(&req, true, address, numbytes, data);
+
+diff --git a/drivers/gpu/drm/amd/powerplay/smu_v11_0.c b/drivers/gpu/drm/amd/powerplay/smu_v11_0.c
+index aa76c2cea747..7897be877b96 100644
+--- a/drivers/gpu/drm/amd/powerplay/smu_v11_0.c
++++ b/drivers/gpu/drm/amd/powerplay/smu_v11_0.c
+@@ -164,7 +164,8 @@ int smu_v11_0_init_microcode(struct smu_context *smu)
+ chip_name = "navi12";
+ break;
+ default:
+- BUG();
++ dev_err(adev->dev, "Unsupported ASIC type %d\n", adev->asic_type);
++ return -EINVAL;
+ }
+
+ snprintf(fw_name, sizeof(fw_name), "amdgpu/%s_smc.bin", chip_name);
+diff --git a/drivers/gpu/drm/arm/malidp_planes.c b/drivers/gpu/drm/arm/malidp_planes.c
+index 37715cc6064e..ab45ac445045 100644
+--- a/drivers/gpu/drm/arm/malidp_planes.c
++++ b/drivers/gpu/drm/arm/malidp_planes.c
+@@ -928,7 +928,7 @@ int malidp_de_planes_init(struct drm_device *drm)
+ const struct malidp_hw_regmap *map = &malidp->dev->hw->map;
+ struct malidp_plane *plane = NULL;
+ enum drm_plane_type plane_type;
+- unsigned long crtcs = 1 << drm->mode_config.num_crtc;
++ unsigned long crtcs = BIT(drm->mode_config.num_crtc);
+ unsigned long flags = DRM_MODE_ROTATE_0 | DRM_MODE_ROTATE_90 | DRM_MODE_ROTATE_180 |
+ DRM_MODE_ROTATE_270 | DRM_MODE_REFLECT_X | DRM_MODE_REFLECT_Y;
+ unsigned int blend_caps = BIT(DRM_MODE_BLEND_PIXEL_NONE) |
+diff --git a/drivers/gpu/drm/bridge/sil-sii8620.c b/drivers/gpu/drm/bridge/sil-sii8620.c
+index 92acd336aa89..ca98133411aa 100644
+--- a/drivers/gpu/drm/bridge/sil-sii8620.c
++++ b/drivers/gpu/drm/bridge/sil-sii8620.c
+@@ -178,7 +178,7 @@ static void sii8620_read_buf(struct sii8620 *ctx, u16 addr, u8 *buf, int len)
+
+ static u8 sii8620_readb(struct sii8620 *ctx, u16 addr)
+ {
+- u8 ret;
++ u8 ret = 0;
+
+ sii8620_read_buf(ctx, addr, &ret, 1);
+ return ret;
+diff --git a/drivers/gpu/drm/bridge/ti-sn65dsi86.c b/drivers/gpu/drm/bridge/ti-sn65dsi86.c
+index 6ad688b320ae..8a0e34f2160a 100644
+--- a/drivers/gpu/drm/bridge/ti-sn65dsi86.c
++++ b/drivers/gpu/drm/bridge/ti-sn65dsi86.c
+@@ -475,7 +475,7 @@ static int ti_sn_bridge_calc_min_dp_rate_idx(struct ti_sn_bridge *pdata)
+ 1000 * pdata->dp_lanes * DP_CLK_FUDGE_DEN);
+
+ for (i = 1; i < ARRAY_SIZE(ti_sn_bridge_dp_rate_lut) - 1; i++)
+- if (ti_sn_bridge_dp_rate_lut[i] > dp_rate_mhz)
++ if (ti_sn_bridge_dp_rate_lut[i] >= dp_rate_mhz)
+ break;
+
+ return i;
+@@ -827,6 +827,12 @@ static ssize_t ti_sn_aux_transfer(struct drm_dp_aux *aux,
+ buf[i]);
+ }
+
++ /* Clear old status bits before start so we don't get confused */
++ regmap_write(pdata->regmap, SN_AUX_CMD_STATUS_REG,
++ AUX_IRQ_STATUS_NAT_I2C_FAIL |
++ AUX_IRQ_STATUS_AUX_RPLY_TOUT |
++ AUX_IRQ_STATUS_AUX_SHORT);
++
+ regmap_write(pdata->regmap, SN_AUX_CMD_REG, request_val | AUX_CMD_SEND);
+
+ ret = regmap_read_poll_timeout(pdata->regmap, SN_AUX_CMD_REG, val,
+diff --git a/drivers/gpu/drm/drm_debugfs.c b/drivers/gpu/drm/drm_debugfs.c
+index 2bea22130703..bfe4602f206b 100644
+--- a/drivers/gpu/drm/drm_debugfs.c
++++ b/drivers/gpu/drm/drm_debugfs.c
+@@ -311,13 +311,13 @@ static ssize_t connector_write(struct file *file, const char __user *ubuf,
+
+ buf[len] = '\0';
+
+- if (!strcmp(buf, "on"))
++ if (sysfs_streq(buf, "on"))
+ connector->force = DRM_FORCE_ON;
+- else if (!strcmp(buf, "digital"))
++ else if (sysfs_streq(buf, "digital"))
+ connector->force = DRM_FORCE_ON_DIGITAL;
+- else if (!strcmp(buf, "off"))
++ else if (sysfs_streq(buf, "off"))
+ connector->force = DRM_FORCE_OFF;
+- else if (!strcmp(buf, "unspecified"))
++ else if (sysfs_streq(buf, "unspecified"))
+ connector->force = DRM_FORCE_UNSPECIFIED;
+ else
+ return -EINVAL;
+diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
+index ee2058ad482c..d22480ebb29e 100644
+--- a/drivers/gpu/drm/drm_gem.c
++++ b/drivers/gpu/drm/drm_gem.c
+@@ -709,6 +709,8 @@ int drm_gem_objects_lookup(struct drm_file *filp, void __user *bo_handles,
+ if (!objs)
+ return -ENOMEM;
+
++ *objs_out = objs;
++
+ handles = kvmalloc_array(count, sizeof(u32), GFP_KERNEL);
+ if (!handles) {
+ ret = -ENOMEM;
+@@ -722,8 +724,6 @@ int drm_gem_objects_lookup(struct drm_file *filp, void __user *bo_handles,
+ }
+
+ ret = objects_lookup(filp, handles, count, objs);
+- *objs_out = objs;
+-
+ out:
+ kvfree(handles);
+ return ret;
+diff --git a/drivers/gpu/drm/drm_mipi_dsi.c b/drivers/gpu/drm/drm_mipi_dsi.c
+index 55531895dde6..37b03fefbdf6 100644
+--- a/drivers/gpu/drm/drm_mipi_dsi.c
++++ b/drivers/gpu/drm/drm_mipi_dsi.c
+@@ -1082,11 +1082,11 @@ EXPORT_SYMBOL(mipi_dsi_dcs_set_pixel_format);
+ */
+ int mipi_dsi_dcs_set_tear_scanline(struct mipi_dsi_device *dsi, u16 scanline)
+ {
+- u8 payload[3] = { MIPI_DCS_SET_TEAR_SCANLINE, scanline >> 8,
+- scanline & 0xff };
++ u8 payload[2] = { scanline >> 8, scanline & 0xff };
+ ssize_t err;
+
+- err = mipi_dsi_generic_write(dsi, payload, sizeof(payload));
++ err = mipi_dsi_dcs_write(dsi, MIPI_DCS_SET_TEAR_SCANLINE, payload,
++ sizeof(payload));
+ if (err < 0)
+ return err;
+
+diff --git a/drivers/gpu/drm/drm_mm.c b/drivers/gpu/drm/drm_mm.c
+index f4ca1ff80af9..60e9a9c91e9d 100644
+--- a/drivers/gpu/drm/drm_mm.c
++++ b/drivers/gpu/drm/drm_mm.c
+@@ -407,7 +407,7 @@ next_hole_high_addr(struct drm_mm_node *entry, u64 size)
+ left_node = rb_entry(left_rb_node,
+ struct drm_mm_node, rb_hole_addr);
+ if ((left_node->subtree_max_hole < size ||
+- entry->size == entry->subtree_max_hole) &&
++ HOLE_SIZE(entry) == entry->subtree_max_hole) &&
+ parent_rb_node && parent_rb_node->rb_left != rb_node)
+ return rb_hole_addr_to_node(parent_rb_node);
+ }
+@@ -447,7 +447,7 @@ next_hole_low_addr(struct drm_mm_node *entry, u64 size)
+ right_node = rb_entry(right_rb_node,
+ struct drm_mm_node, rb_hole_addr);
+ if ((right_node->subtree_max_hole < size ||
+- entry->size == entry->subtree_max_hole) &&
++ HOLE_SIZE(entry) == entry->subtree_max_hole) &&
+ parent_rb_node && parent_rb_node->rb_right != rb_node)
+ return rb_hole_addr_to_node(parent_rb_node);
+ }
+diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gpu.c b/drivers/gpu/drm/etnaviv/etnaviv_gpu.c
+index a31eeff2b297..4a512b062df8 100644
+--- a/drivers/gpu/drm/etnaviv/etnaviv_gpu.c
++++ b/drivers/gpu/drm/etnaviv/etnaviv_gpu.c
+@@ -722,7 +722,7 @@ int etnaviv_gpu_init(struct etnaviv_gpu *gpu)
+ ret = pm_runtime_get_sync(gpu->dev);
+ if (ret < 0) {
+ dev_err(gpu->dev, "Failed to enable GPU power domain\n");
+- return ret;
++ goto pm_put;
+ }
+
+ etnaviv_hw_identify(gpu);
+@@ -819,6 +819,7 @@ int etnaviv_gpu_init(struct etnaviv_gpu *gpu)
+
+ fail:
+ pm_runtime_mark_last_busy(gpu->dev);
++pm_put:
+ pm_runtime_put_autosuspend(gpu->dev);
+
+ return ret;
+@@ -859,7 +860,7 @@ int etnaviv_gpu_debugfs(struct etnaviv_gpu *gpu, struct seq_file *m)
+
+ ret = pm_runtime_get_sync(gpu->dev);
+ if (ret < 0)
+- return ret;
++ goto pm_put;
+
+ dma_lo = gpu_read(gpu, VIVS_FE_DMA_LOW);
+ dma_hi = gpu_read(gpu, VIVS_FE_DMA_HIGH);
+@@ -1003,6 +1004,7 @@ int etnaviv_gpu_debugfs(struct etnaviv_gpu *gpu, struct seq_file *m)
+ ret = 0;
+
+ pm_runtime_mark_last_busy(gpu->dev);
++pm_put:
+ pm_runtime_put_autosuspend(gpu->dev);
+
+ return ret;
+@@ -1016,7 +1018,7 @@ void etnaviv_gpu_recover_hang(struct etnaviv_gpu *gpu)
+ dev_err(gpu->dev, "recover hung GPU!\n");
+
+ if (pm_runtime_get_sync(gpu->dev) < 0)
+- return;
++ goto pm_put;
+
+ mutex_lock(&gpu->lock);
+
+@@ -1035,6 +1037,7 @@ void etnaviv_gpu_recover_hang(struct etnaviv_gpu *gpu)
+
+ mutex_unlock(&gpu->lock);
+ pm_runtime_mark_last_busy(gpu->dev);
++pm_put:
+ pm_runtime_put_autosuspend(gpu->dev);
+ }
+
+@@ -1308,8 +1311,10 @@ struct dma_fence *etnaviv_gpu_submit(struct etnaviv_gem_submit *submit)
+
+ if (!submit->runtime_resumed) {
+ ret = pm_runtime_get_sync(gpu->dev);
+- if (ret < 0)
++ if (ret < 0) {
++ pm_runtime_put_noidle(gpu->dev);
+ return NULL;
++ }
+ submit->runtime_resumed = true;
+ }
+
+@@ -1326,6 +1331,7 @@ struct dma_fence *etnaviv_gpu_submit(struct etnaviv_gem_submit *submit)
+ ret = event_alloc(gpu, nr_events, event);
+ if (ret) {
+ DRM_ERROR("no free events\n");
++ pm_runtime_put_noidle(gpu->dev);
+ return NULL;
+ }
+
+@@ -1496,7 +1502,7 @@ static int etnaviv_gpu_clk_enable(struct etnaviv_gpu *gpu)
+ if (gpu->clk_bus) {
+ ret = clk_prepare_enable(gpu->clk_bus);
+ if (ret)
+- return ret;
++ goto disable_clk_reg;
+ }
+
+ if (gpu->clk_core) {
+@@ -1519,6 +1525,9 @@ disable_clk_core:
+ disable_clk_bus:
+ if (gpu->clk_bus)
+ clk_disable_unprepare(gpu->clk_bus);
++disable_clk_reg:
++ if (gpu->clk_reg)
++ clk_disable_unprepare(gpu->clk_reg);
+
+ return ret;
+ }
+diff --git a/drivers/gpu/drm/imx/dw_hdmi-imx.c b/drivers/gpu/drm/imx/dw_hdmi-imx.c
+index ba4ca17fd4d8..87869b9997a6 100644
+--- a/drivers/gpu/drm/imx/dw_hdmi-imx.c
++++ b/drivers/gpu/drm/imx/dw_hdmi-imx.c
+@@ -209,9 +209,8 @@ static int dw_hdmi_imx_bind(struct device *dev, struct device *master,
+ if (!pdev->dev.of_node)
+ return -ENODEV;
+
+- hdmi = devm_kzalloc(&pdev->dev, sizeof(*hdmi), GFP_KERNEL);
+- if (!hdmi)
+- return -ENOMEM;
++ hdmi = dev_get_drvdata(dev);
++ memset(hdmi, 0, sizeof(*hdmi));
+
+ match = of_match_node(dw_hdmi_imx_dt_ids, pdev->dev.of_node);
+ plat_data = match->data;
+@@ -235,8 +234,6 @@ static int dw_hdmi_imx_bind(struct device *dev, struct device *master,
+ drm_encoder_helper_add(encoder, &dw_hdmi_imx_encoder_helper_funcs);
+ drm_simple_encoder_init(drm, encoder, DRM_MODE_ENCODER_TMDS);
+
+- platform_set_drvdata(pdev, hdmi);
+-
+ hdmi->hdmi = dw_hdmi_bind(pdev, encoder, plat_data);
+
+ /*
+@@ -266,6 +263,14 @@ static const struct component_ops dw_hdmi_imx_ops = {
+
+ static int dw_hdmi_imx_probe(struct platform_device *pdev)
+ {
++ struct imx_hdmi *hdmi;
++
++ hdmi = devm_kzalloc(&pdev->dev, sizeof(*hdmi), GFP_KERNEL);
++ if (!hdmi)
++ return -ENOMEM;
++
++ platform_set_drvdata(pdev, hdmi);
++
+ return component_add(&pdev->dev, &dw_hdmi_imx_ops);
+ }
+
+diff --git a/drivers/gpu/drm/imx/imx-drm-core.c b/drivers/gpu/drm/imx/imx-drm-core.c
+index 2e38f1a5cf8d..3421043a558d 100644
+--- a/drivers/gpu/drm/imx/imx-drm-core.c
++++ b/drivers/gpu/drm/imx/imx-drm-core.c
+@@ -275,9 +275,10 @@ static void imx_drm_unbind(struct device *dev)
+
+ drm_kms_helper_poll_fini(drm);
+
++ component_unbind_all(drm->dev, drm);
++
+ drm_mode_config_cleanup(drm);
+
+- component_unbind_all(drm->dev, drm);
+ dev_set_drvdata(dev, NULL);
+
+ drm_dev_put(drm);
+diff --git a/drivers/gpu/drm/imx/imx-ldb.c b/drivers/gpu/drm/imx/imx-ldb.c
+index 66ea68e8da87..1823af9936c9 100644
+--- a/drivers/gpu/drm/imx/imx-ldb.c
++++ b/drivers/gpu/drm/imx/imx-ldb.c
+@@ -590,9 +590,8 @@ static int imx_ldb_bind(struct device *dev, struct device *master, void *data)
+ int ret;
+ int i;
+
+- imx_ldb = devm_kzalloc(dev, sizeof(*imx_ldb), GFP_KERNEL);
+- if (!imx_ldb)
+- return -ENOMEM;
++ imx_ldb = dev_get_drvdata(dev);
++ memset(imx_ldb, 0, sizeof(*imx_ldb));
+
+ imx_ldb->regmap = syscon_regmap_lookup_by_phandle(np, "gpr");
+ if (IS_ERR(imx_ldb->regmap)) {
+@@ -700,8 +699,6 @@ static int imx_ldb_bind(struct device *dev, struct device *master, void *data)
+ }
+ }
+
+- dev_set_drvdata(dev, imx_ldb);
+-
+ return 0;
+
+ free_child:
+@@ -733,6 +730,14 @@ static const struct component_ops imx_ldb_ops = {
+
+ static int imx_ldb_probe(struct platform_device *pdev)
+ {
++ struct imx_ldb *imx_ldb;
++
++ imx_ldb = devm_kzalloc(&pdev->dev, sizeof(*imx_ldb), GFP_KERNEL);
++ if (!imx_ldb)
++ return -ENOMEM;
++
++ platform_set_drvdata(pdev, imx_ldb);
++
+ return component_add(&pdev->dev, &imx_ldb_ops);
+ }
+
+diff --git a/drivers/gpu/drm/imx/imx-tve.c b/drivers/gpu/drm/imx/imx-tve.c
+index ee63782c77e9..3758de3e09bd 100644
+--- a/drivers/gpu/drm/imx/imx-tve.c
++++ b/drivers/gpu/drm/imx/imx-tve.c
+@@ -490,6 +490,13 @@ static int imx_tve_register(struct drm_device *drm, struct imx_tve *tve)
+ return 0;
+ }
+
++static void imx_tve_disable_regulator(void *data)
++{
++ struct imx_tve *tve = data;
++
++ regulator_disable(tve->dac_reg);
++}
++
+ static bool imx_tve_readable_reg(struct device *dev, unsigned int reg)
+ {
+ return (reg % 4 == 0) && (reg <= 0xdc);
+@@ -542,9 +549,8 @@ static int imx_tve_bind(struct device *dev, struct device *master, void *data)
+ int irq;
+ int ret;
+
+- tve = devm_kzalloc(dev, sizeof(*tve), GFP_KERNEL);
+- if (!tve)
+- return -ENOMEM;
++ tve = dev_get_drvdata(dev);
++ memset(tve, 0, sizeof(*tve));
+
+ tve->dev = dev;
+ spin_lock_init(&tve->lock);
+@@ -614,6 +620,9 @@ static int imx_tve_bind(struct device *dev, struct device *master, void *data)
+ ret = regulator_enable(tve->dac_reg);
+ if (ret)
+ return ret;
++ ret = devm_add_action_or_reset(dev, imx_tve_disable_regulator, tve);
++ if (ret)
++ return ret;
+ }
+
+ tve->clk = devm_clk_get(dev, "tve");
+@@ -655,27 +664,23 @@ static int imx_tve_bind(struct device *dev, struct device *master, void *data)
+ if (ret)
+ return ret;
+
+- dev_set_drvdata(dev, tve);
+-
+ return 0;
+ }
+
+-static void imx_tve_unbind(struct device *dev, struct device *master,
+- void *data)
+-{
+- struct imx_tve *tve = dev_get_drvdata(dev);
+-
+- if (!IS_ERR(tve->dac_reg))
+- regulator_disable(tve->dac_reg);
+-}
+-
+ static const struct component_ops imx_tve_ops = {
+ .bind = imx_tve_bind,
+- .unbind = imx_tve_unbind,
+ };
+
+ static int imx_tve_probe(struct platform_device *pdev)
+ {
++ struct imx_tve *tve;
++
++ tve = devm_kzalloc(&pdev->dev, sizeof(*tve), GFP_KERNEL);
++ if (!tve)
++ return -ENOMEM;
++
++ platform_set_drvdata(pdev, tve);
++
+ return component_add(&pdev->dev, &imx_tve_ops);
+ }
+
+diff --git a/drivers/gpu/drm/imx/ipuv3-crtc.c b/drivers/gpu/drm/imx/ipuv3-crtc.c
+index 63c0284f8b3c..2256c9789fc2 100644
+--- a/drivers/gpu/drm/imx/ipuv3-crtc.c
++++ b/drivers/gpu/drm/imx/ipuv3-crtc.c
+@@ -438,21 +438,13 @@ static int ipu_drm_bind(struct device *dev, struct device *master, void *data)
+ struct ipu_client_platformdata *pdata = dev->platform_data;
+ struct drm_device *drm = data;
+ struct ipu_crtc *ipu_crtc;
+- int ret;
+
+- ipu_crtc = devm_kzalloc(dev, sizeof(*ipu_crtc), GFP_KERNEL);
+- if (!ipu_crtc)
+- return -ENOMEM;
++ ipu_crtc = dev_get_drvdata(dev);
++ memset(ipu_crtc, 0, sizeof(*ipu_crtc));
+
+ ipu_crtc->dev = dev;
+
+- ret = ipu_crtc_init(ipu_crtc, pdata, drm);
+- if (ret)
+- return ret;
+-
+- dev_set_drvdata(dev, ipu_crtc);
+-
+- return 0;
++ return ipu_crtc_init(ipu_crtc, pdata, drm);
+ }
+
+ static void ipu_drm_unbind(struct device *dev, struct device *master,
+@@ -474,6 +466,7 @@ static const struct component_ops ipu_crtc_ops = {
+ static int ipu_drm_probe(struct platform_device *pdev)
+ {
+ struct device *dev = &pdev->dev;
++ struct ipu_crtc *ipu_crtc;
+ int ret;
+
+ if (!dev->platform_data)
+@@ -483,6 +476,12 @@ static int ipu_drm_probe(struct platform_device *pdev)
+ if (ret)
+ return ret;
+
++ ipu_crtc = devm_kzalloc(dev, sizeof(*ipu_crtc), GFP_KERNEL);
++ if (!ipu_crtc)
++ return -ENOMEM;
++
++ dev_set_drvdata(dev, ipu_crtc);
++
+ return component_add(dev, &ipu_crtc_ops);
+ }
+
+diff --git a/drivers/gpu/drm/imx/parallel-display.c b/drivers/gpu/drm/imx/parallel-display.c
+index ac916c84a631..622eabe9efb3 100644
+--- a/drivers/gpu/drm/imx/parallel-display.c
++++ b/drivers/gpu/drm/imx/parallel-display.c
+@@ -326,9 +326,8 @@ static int imx_pd_bind(struct device *dev, struct device *master, void *data)
+ u32 bus_format = 0;
+ const char *fmt;
+
+- imxpd = devm_kzalloc(dev, sizeof(*imxpd), GFP_KERNEL);
+- if (!imxpd)
+- return -ENOMEM;
++ imxpd = dev_get_drvdata(dev);
++ memset(imxpd, 0, sizeof(*imxpd));
+
+ edidp = of_get_property(np, "edid", &imxpd->edid_len);
+ if (edidp)
+@@ -359,8 +358,6 @@ static int imx_pd_bind(struct device *dev, struct device *master, void *data)
+ if (ret)
+ return ret;
+
+- dev_set_drvdata(dev, imxpd);
+-
+ return 0;
+ }
+
+@@ -382,6 +379,14 @@ static const struct component_ops imx_pd_ops = {
+
+ static int imx_pd_probe(struct platform_device *pdev)
+ {
++ struct imx_parallel_display *imxpd;
++
++ imxpd = devm_kzalloc(&pdev->dev, sizeof(*imxpd), GFP_KERNEL);
++ if (!imxpd)
++ return -ENOMEM;
++
++ platform_set_drvdata(pdev, imxpd);
++
+ return component_add(&pdev->dev, &imx_pd_ops);
+ }
+
+diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
+index 21e77d67151f..1d330204c465 100644
+--- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
++++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
+@@ -854,10 +854,19 @@ int a6xx_gmu_resume(struct a6xx_gpu *a6xx_gpu)
+ /* Turn on the resources */
+ pm_runtime_get_sync(gmu->dev);
+
++ /*
++ * "enable" the GX power domain which won't actually do anything but it
++ * will make sure that the refcounting is correct in case we need to
++ * bring down the GX after a GMU failure
++ */
++ if (!IS_ERR_OR_NULL(gmu->gxpd))
++ pm_runtime_get_sync(gmu->gxpd);
++
+ /* Use a known rate to bring up the GMU */
+ clk_set_rate(gmu->core_clk, 200000000);
+ ret = clk_bulk_prepare_enable(gmu->nr_clocks, gmu->clocks);
+ if (ret) {
++ pm_runtime_put(gmu->gxpd);
+ pm_runtime_put(gmu->dev);
+ return ret;
+ }
+@@ -903,19 +912,12 @@ int a6xx_gmu_resume(struct a6xx_gpu *a6xx_gpu)
+ else
+ a6xx_hfi_set_freq(gmu, gmu->current_perf_index);
+
+- /*
+- * "enable" the GX power domain which won't actually do anything but it
+- * will make sure that the refcounting is correct in case we need to
+- * bring down the GX after a GMU failure
+- */
+- if (!IS_ERR_OR_NULL(gmu->gxpd))
+- pm_runtime_get(gmu->gxpd);
+-
+ out:
+ /* On failure, shut down the GMU to leave it in a good state */
+ if (ret) {
+ disable_irq(gmu->gmu_irq);
+ a6xx_rpmh_stop(gmu);
++ pm_runtime_put(gmu->gxpd);
+ pm_runtime_put(gmu->dev);
+ }
+
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c
+index e15b42a780e0..969d95aa873c 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c
+@@ -389,7 +389,7 @@ static void dpu_crtc_frame_event_cb(void *data, u32 event)
+ spin_unlock_irqrestore(&dpu_crtc->spin_lock, flags);
+
+ if (!fevent) {
+- DRM_ERROR("crtc%d event %d overflow\n", crtc->base.id, event);
++ DRM_ERROR_RATELIMITED("crtc%d event %d overflow\n", crtc->base.id, event);
+ return;
+ }
+
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c
+index 29d4fde3172b..8ef2f62e4111 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c
+@@ -43,6 +43,10 @@
+
+ #define DSPP_SC7180_MASK BIT(DPU_DSPP_PCC)
+
++#define INTF_SDM845_MASK (0)
++
++#define INTF_SC7180_MASK BIT(DPU_INTF_INPUT_CTRL) | BIT(DPU_INTF_TE)
++
+ #define DEFAULT_PIXEL_RAM_SIZE (50 * 1024)
+ #define DEFAULT_DPU_LINE_WIDTH 2048
+ #define DEFAULT_DPU_OUTPUT_LINE_WIDTH 2560
+@@ -400,26 +404,26 @@ static struct dpu_pingpong_cfg sc7180_pp[] = {
+ /*************************************************************
+ * INTF sub blocks config
+ *************************************************************/
+-#define INTF_BLK(_name, _id, _base, _type, _ctrl_id) \
++#define INTF_BLK(_name, _id, _base, _type, _ctrl_id, _features) \
+ {\
+ .name = _name, .id = _id, \
+ .base = _base, .len = 0x280, \
+- .features = BIT(DPU_CTL_ACTIVE_CFG), \
++ .features = _features, \
+ .type = _type, \
+ .controller_id = _ctrl_id, \
+ .prog_fetch_lines_worst_case = 24 \
+ }
+
+ static const struct dpu_intf_cfg sdm845_intf[] = {
+- INTF_BLK("intf_0", INTF_0, 0x6A000, INTF_DP, 0),
+- INTF_BLK("intf_1", INTF_1, 0x6A800, INTF_DSI, 0),
+- INTF_BLK("intf_2", INTF_2, 0x6B000, INTF_DSI, 1),
+- INTF_BLK("intf_3", INTF_3, 0x6B800, INTF_DP, 1),
++ INTF_BLK("intf_0", INTF_0, 0x6A000, INTF_DP, 0, INTF_SDM845_MASK),
++ INTF_BLK("intf_1", INTF_1, 0x6A800, INTF_DSI, 0, INTF_SDM845_MASK),
++ INTF_BLK("intf_2", INTF_2, 0x6B000, INTF_DSI, 1, INTF_SDM845_MASK),
++ INTF_BLK("intf_3", INTF_3, 0x6B800, INTF_DP, 1, INTF_SDM845_MASK),
+ };
+
+ static const struct dpu_intf_cfg sc7180_intf[] = {
+- INTF_BLK("intf_0", INTF_0, 0x6A000, INTF_DP, 0),
+- INTF_BLK("intf_1", INTF_1, 0x6A800, INTF_DSI, 0),
++ INTF_BLK("intf_0", INTF_0, 0x6A000, INTF_DP, 0, INTF_SC7180_MASK),
++ INTF_BLK("intf_1", INTF_1, 0x6A800, INTF_DSI, 0, INTF_SC7180_MASK),
+ };
+
+ /*************************************************************
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.h b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.h
+index f7de43838c69..e4206206a174 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.h
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.h
+@@ -185,6 +185,19 @@ enum {
+ DPU_CTL_MAX
+ };
+
++/**
++ * INTF sub-blocks
++ * @DPU_INTF_INPUT_CTRL Supports the setting of pp block from which
++ * pixel data arrives to this INTF
++ * @DPU_INTF_TE INTF block has TE configuration support
++ * @DPU_INTF_MAX
++ */
++enum {
++ DPU_INTF_INPUT_CTRL = 0x1,
++ DPU_INTF_TE,
++ DPU_INTF_MAX
++};
++
+ /**
+ * VBIF sub-blocks and features
+ * @DPU_VBIF_QOS_OTLIM VBIF supports OT Limit
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_intf.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_intf.c
+index efe9a5719c6b..64f556d693dd 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_intf.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_intf.c
+@@ -225,14 +225,9 @@ static void dpu_hw_intf_bind_pingpong_blk(
+ bool enable,
+ const enum dpu_pingpong pp)
+ {
+- struct dpu_hw_blk_reg_map *c;
++ struct dpu_hw_blk_reg_map *c = &intf->hw;
+ u32 mux_cfg;
+
+- if (!intf)
+- return;
+-
+- c = &intf->hw;
+-
+ mux_cfg = DPU_REG_READ(c, INTF_MUX);
+ mux_cfg &= ~0xf;
+
+@@ -280,7 +275,7 @@ static void _setup_intf_ops(struct dpu_hw_intf_ops *ops,
+ ops->get_status = dpu_hw_intf_get_status;
+ ops->enable_timing = dpu_hw_intf_enable_timing_engine;
+ ops->get_line_count = dpu_hw_intf_get_line_count;
+- if (cap & BIT(DPU_CTL_ACTIVE_CFG))
++ if (cap & BIT(DPU_INTF_INPUT_CTRL))
+ ops->bind_pingpong_blk = dpu_hw_intf_bind_pingpong_blk;
+ }
+
+diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
+index 6277fde13df9..f63bb7e452d2 100644
+--- a/drivers/gpu/drm/msm/msm_gem.c
++++ b/drivers/gpu/drm/msm/msm_gem.c
+@@ -994,10 +994,8 @@ int msm_gem_new_handle(struct drm_device *dev, struct drm_file *file,
+
+ static int msm_gem_new_impl(struct drm_device *dev,
+ uint32_t size, uint32_t flags,
+- struct drm_gem_object **obj,
+- bool struct_mutex_locked)
++ struct drm_gem_object **obj)
+ {
+- struct msm_drm_private *priv = dev->dev_private;
+ struct msm_gem_object *msm_obj;
+
+ switch (flags & MSM_BO_CACHE_MASK) {
+@@ -1023,15 +1021,6 @@ static int msm_gem_new_impl(struct drm_device *dev,
+ INIT_LIST_HEAD(&msm_obj->submit_entry);
+ INIT_LIST_HEAD(&msm_obj->vmas);
+
+- if (struct_mutex_locked) {
+- WARN_ON(!mutex_is_locked(&dev->struct_mutex));
+- list_add_tail(&msm_obj->mm_list, &priv->inactive_list);
+- } else {
+- mutex_lock(&dev->struct_mutex);
+- list_add_tail(&msm_obj->mm_list, &priv->inactive_list);
+- mutex_unlock(&dev->struct_mutex);
+- }
+-
+ *obj = &msm_obj->base;
+
+ return 0;
+@@ -1041,6 +1030,7 @@ static struct drm_gem_object *_msm_gem_new(struct drm_device *dev,
+ uint32_t size, uint32_t flags, bool struct_mutex_locked)
+ {
+ struct msm_drm_private *priv = dev->dev_private;
++ struct msm_gem_object *msm_obj;
+ struct drm_gem_object *obj = NULL;
+ bool use_vram = false;
+ int ret;
+@@ -1061,14 +1051,15 @@ static struct drm_gem_object *_msm_gem_new(struct drm_device *dev,
+ if (size == 0)
+ return ERR_PTR(-EINVAL);
+
+- ret = msm_gem_new_impl(dev, size, flags, &obj, struct_mutex_locked);
++ ret = msm_gem_new_impl(dev, size, flags, &obj);
+ if (ret)
+ goto fail;
+
++ msm_obj = to_msm_bo(obj);
++
+ if (use_vram) {
+ struct msm_gem_vma *vma;
+ struct page **pages;
+- struct msm_gem_object *msm_obj = to_msm_bo(obj);
+
+ mutex_lock(&msm_obj->lock);
+
+@@ -1103,6 +1094,15 @@ static struct drm_gem_object *_msm_gem_new(struct drm_device *dev,
+ mapping_set_gfp_mask(obj->filp->f_mapping, GFP_HIGHUSER);
+ }
+
++ if (struct_mutex_locked) {
++ WARN_ON(!mutex_is_locked(&dev->struct_mutex));
++ list_add_tail(&msm_obj->mm_list, &priv->inactive_list);
++ } else {
++ mutex_lock(&dev->struct_mutex);
++ list_add_tail(&msm_obj->mm_list, &priv->inactive_list);
++ mutex_unlock(&dev->struct_mutex);
++ }
++
+ return obj;
+
+ fail:
+@@ -1125,6 +1125,7 @@ struct drm_gem_object *msm_gem_new(struct drm_device *dev,
+ struct drm_gem_object *msm_gem_import(struct drm_device *dev,
+ struct dma_buf *dmabuf, struct sg_table *sgt)
+ {
++ struct msm_drm_private *priv = dev->dev_private;
+ struct msm_gem_object *msm_obj;
+ struct drm_gem_object *obj;
+ uint32_t size;
+@@ -1138,7 +1139,7 @@ struct drm_gem_object *msm_gem_import(struct drm_device *dev,
+
+ size = PAGE_ALIGN(dmabuf->size);
+
+- ret = msm_gem_new_impl(dev, size, MSM_BO_WC, &obj, false);
++ ret = msm_gem_new_impl(dev, size, MSM_BO_WC, &obj);
+ if (ret)
+ goto fail;
+
+@@ -1163,6 +1164,11 @@ struct drm_gem_object *msm_gem_import(struct drm_device *dev,
+ }
+
+ mutex_unlock(&msm_obj->lock);
++
++ mutex_lock(&dev->struct_mutex);
++ list_add_tail(&msm_obj->mm_list, &priv->inactive_list);
++ mutex_unlock(&dev->struct_mutex);
++
+ return obj;
+
+ fail:
+diff --git a/drivers/gpu/drm/nouveau/dispnv50/head.c b/drivers/gpu/drm/nouveau/dispnv50/head.c
+index 8f6455697ba7..ed6819519f6d 100644
+--- a/drivers/gpu/drm/nouveau/dispnv50/head.c
++++ b/drivers/gpu/drm/nouveau/dispnv50/head.c
+@@ -84,18 +84,20 @@ nv50_head_atomic_check_dither(struct nv50_head_atom *armh,
+ {
+ u32 mode = 0x00;
+
+- if (asyc->dither.mode == DITHERING_MODE_AUTO) {
+- if (asyh->base.depth > asyh->or.bpc * 3)
+- mode = DITHERING_MODE_DYNAMIC2X2;
+- } else {
+- mode = asyc->dither.mode;
+- }
++ if (asyc->dither.mode) {
++ if (asyc->dither.mode == DITHERING_MODE_AUTO) {
++ if (asyh->base.depth > asyh->or.bpc * 3)
++ mode = DITHERING_MODE_DYNAMIC2X2;
++ } else {
++ mode = asyc->dither.mode;
++ }
+
+- if (asyc->dither.depth == DITHERING_DEPTH_AUTO) {
+- if (asyh->or.bpc >= 8)
+- mode |= DITHERING_DEPTH_8BPC;
+- } else {
+- mode |= asyc->dither.depth;
++ if (asyc->dither.depth == DITHERING_DEPTH_AUTO) {
++ if (asyh->or.bpc >= 8)
++ mode |= DITHERING_DEPTH_8BPC;
++ } else {
++ mode |= asyc->dither.depth;
++ }
+ }
+
+ asyh->dither.enable = mode;
+diff --git a/drivers/gpu/drm/nouveau/nouveau_debugfs.c b/drivers/gpu/drm/nouveau/nouveau_debugfs.c
+index 63b5c8cf9ae4..8f63cda3db17 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_debugfs.c
++++ b/drivers/gpu/drm/nouveau/nouveau_debugfs.c
+@@ -54,8 +54,10 @@ nouveau_debugfs_strap_peek(struct seq_file *m, void *data)
+ int ret;
+
+ ret = pm_runtime_get_sync(drm->dev->dev);
+- if (ret < 0 && ret != -EACCES)
++ if (ret < 0 && ret != -EACCES) {
++ pm_runtime_put_autosuspend(drm->dev->dev);
+ return ret;
++ }
+
+ seq_printf(m, "0x%08x\n",
+ nvif_rd32(&drm->client.device.object, 0x101000));
+diff --git a/drivers/gpu/drm/nouveau/nouveau_drm.c b/drivers/gpu/drm/nouveau/nouveau_drm.c
+index ac93d12201dc..880d962c1b19 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_drm.c
++++ b/drivers/gpu/drm/nouveau/nouveau_drm.c
+@@ -1026,8 +1026,10 @@ nouveau_drm_open(struct drm_device *dev, struct drm_file *fpriv)
+
+ /* need to bring up power immediately if opening device */
+ ret = pm_runtime_get_sync(dev->dev);
+- if (ret < 0 && ret != -EACCES)
++ if (ret < 0 && ret != -EACCES) {
++ pm_runtime_put_autosuspend(dev->dev);
+ return ret;
++ }
+
+ get_task_comm(tmpname, current);
+ snprintf(name, sizeof(name), "%s[%d]", tmpname, pid_nr(fpriv->pid));
+@@ -1109,8 +1111,10 @@ nouveau_drm_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
+ long ret;
+
+ ret = pm_runtime_get_sync(dev->dev);
+- if (ret < 0 && ret != -EACCES)
++ if (ret < 0 && ret != -EACCES) {
++ pm_runtime_put_autosuspend(dev->dev);
+ return ret;
++ }
+
+ switch (_IOC_NR(cmd) - DRM_COMMAND_BASE) {
+ case DRM_NOUVEAU_NVIF:
+diff --git a/drivers/gpu/drm/nouveau/nouveau_gem.c b/drivers/gpu/drm/nouveau/nouveau_gem.c
+index 4c3f131ad31d..c5ee5b7364a0 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_gem.c
++++ b/drivers/gpu/drm/nouveau/nouveau_gem.c
+@@ -45,8 +45,10 @@ nouveau_gem_object_del(struct drm_gem_object *gem)
+ int ret;
+
+ ret = pm_runtime_get_sync(dev);
+- if (WARN_ON(ret < 0 && ret != -EACCES))
++ if (WARN_ON(ret < 0 && ret != -EACCES)) {
++ pm_runtime_put_autosuspend(dev);
+ return;
++ }
+
+ if (gem->import_attach)
+ drm_prime_gem_destroy(gem, nvbo->bo.sg);
+diff --git a/drivers/gpu/drm/nouveau/nouveau_sgdma.c b/drivers/gpu/drm/nouveau/nouveau_sgdma.c
+index feaac908efed..34403b810dba 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_sgdma.c
++++ b/drivers/gpu/drm/nouveau/nouveau_sgdma.c
+@@ -96,12 +96,9 @@ nouveau_sgdma_create_ttm(struct ttm_buffer_object *bo, uint32_t page_flags)
+ else
+ nvbe->ttm.ttm.func = &nv50_sgdma_backend;
+
+- if (ttm_dma_tt_init(&nvbe->ttm, bo, page_flags))
+- /*
+- * A failing ttm_dma_tt_init() will call ttm_tt_destroy()
+- * and thus our nouveau_sgdma_destroy() hook, so we don't need
+- * to free nvbe here.
+- */
++ if (ttm_dma_tt_init(&nvbe->ttm, bo, page_flags)) {
++ kfree(nvbe);
+ return NULL;
++ }
+ return &nvbe->ttm.ttm;
+ }
+diff --git a/drivers/gpu/drm/panel/panel-simple.c b/drivers/gpu/drm/panel/panel-simple.c
+index 4aeb960ccf15..444b77490a42 100644
+--- a/drivers/gpu/drm/panel/panel-simple.c
++++ b/drivers/gpu/drm/panel/panel-simple.c
+@@ -2304,7 +2304,7 @@ static const struct drm_display_mode lg_lb070wv8_mode = {
+ static const struct panel_desc lg_lb070wv8 = {
+ .modes = &lg_lb070wv8_mode,
+ .num_modes = 1,
+- .bpc = 16,
++ .bpc = 8,
+ .size = {
+ .width = 151,
+ .height = 91,
+diff --git a/drivers/gpu/drm/panfrost/panfrost_job.c b/drivers/gpu/drm/panfrost/panfrost_job.c
+index 7914b1570841..f9519afca29d 100644
+--- a/drivers/gpu/drm/panfrost/panfrost_job.c
++++ b/drivers/gpu/drm/panfrost/panfrost_job.c
+@@ -145,6 +145,8 @@ static void panfrost_job_hw_submit(struct panfrost_job *job, int js)
+ u64 jc_head = job->jc;
+ int ret;
+
++ panfrost_devfreq_record_busy(pfdev);
++
+ ret = pm_runtime_get_sync(pfdev->dev);
+ if (ret < 0)
+ return;
+@@ -155,7 +157,6 @@ static void panfrost_job_hw_submit(struct panfrost_job *job, int js)
+ }
+
+ cfg = panfrost_mmu_as_get(pfdev, &job->file_priv->mmu);
+- panfrost_devfreq_record_busy(pfdev);
+
+ job_write(pfdev, JS_HEAD_NEXT_LO(js), jc_head & 0xFFFFFFFF);
+ job_write(pfdev, JS_HEAD_NEXT_HI(js), jc_head >> 32);
+@@ -410,12 +411,12 @@ static void panfrost_job_timedout(struct drm_sched_job *sched_job)
+ for (i = 0; i < NUM_JOB_SLOTS; i++) {
+ if (pfdev->jobs[i]) {
+ pm_runtime_put_noidle(pfdev->dev);
++ panfrost_devfreq_record_idle(pfdev);
+ pfdev->jobs[i] = NULL;
+ }
+ }
+ spin_unlock_irqrestore(&pfdev->js->job_lock, flags);
+
+- panfrost_devfreq_record_idle(pfdev);
+ panfrost_device_reset(pfdev);
+
+ for (i = 0; i < NUM_JOB_SLOTS; i++)
+diff --git a/drivers/gpu/drm/radeon/ci_dpm.c b/drivers/gpu/drm/radeon/ci_dpm.c
+index f434efdeca44..ba20c6f03719 100644
+--- a/drivers/gpu/drm/radeon/ci_dpm.c
++++ b/drivers/gpu/drm/radeon/ci_dpm.c
+@@ -4351,7 +4351,7 @@ static int ci_set_mc_special_registers(struct radeon_device *rdev,
+ table->mc_reg_table_entry[k].mc_data[j] |= 0x100;
+ }
+ j++;
+- if (j > SMU7_DISCRETE_MC_REGISTER_ARRAY_SIZE)
++ if (j >= SMU7_DISCRETE_MC_REGISTER_ARRAY_SIZE)
+ return -EINVAL;
+
+ if (!pi->mem_gddr5) {
+diff --git a/drivers/gpu/drm/radeon/radeon_display.c b/drivers/gpu/drm/radeon/radeon_display.c
+index 35db79a168bf..df1a7eb73651 100644
+--- a/drivers/gpu/drm/radeon/radeon_display.c
++++ b/drivers/gpu/drm/radeon/radeon_display.c
+@@ -635,8 +635,10 @@ radeon_crtc_set_config(struct drm_mode_set *set,
+ dev = set->crtc->dev;
+
+ ret = pm_runtime_get_sync(dev->dev);
+- if (ret < 0)
++ if (ret < 0) {
++ pm_runtime_put_autosuspend(dev->dev);
+ return ret;
++ }
+
+ ret = drm_crtc_helper_set_config(set, ctx);
+
+diff --git a/drivers/gpu/drm/radeon/radeon_drv.c b/drivers/gpu/drm/radeon/radeon_drv.c
+index bbb0883e8ce6..4cd30613fa1d 100644
+--- a/drivers/gpu/drm/radeon/radeon_drv.c
++++ b/drivers/gpu/drm/radeon/radeon_drv.c
+@@ -171,12 +171,7 @@ int radeon_no_wb;
+ int radeon_modeset = -1;
+ int radeon_dynclks = -1;
+ int radeon_r4xx_atom = 0;
+-#ifdef __powerpc__
+-/* Default to PCI on PowerPC (fdo #95017) */
+ int radeon_agpmode = -1;
+-#else
+-int radeon_agpmode = 0;
+-#endif
+ int radeon_vram_limit = 0;
+ int radeon_gart_size = -1; /* auto */
+ int radeon_benchmarking = 0;
+@@ -549,8 +544,10 @@ long radeon_drm_ioctl(struct file *filp,
+ long ret;
+ dev = file_priv->minor->dev;
+ ret = pm_runtime_get_sync(dev->dev);
+- if (ret < 0)
++ if (ret < 0) {
++ pm_runtime_put_autosuspend(dev->dev);
+ return ret;
++ }
+
+ ret = drm_ioctl(filp, cmd, arg);
+
+diff --git a/drivers/gpu/drm/radeon/radeon_kms.c b/drivers/gpu/drm/radeon/radeon_kms.c
+index c5d1dc9618a4..99ee60f8b604 100644
+--- a/drivers/gpu/drm/radeon/radeon_kms.c
++++ b/drivers/gpu/drm/radeon/radeon_kms.c
+@@ -638,8 +638,10 @@ int radeon_driver_open_kms(struct drm_device *dev, struct drm_file *file_priv)
+ file_priv->driver_priv = NULL;
+
+ r = pm_runtime_get_sync(dev->dev);
+- if (r < 0)
++ if (r < 0) {
++ pm_runtime_put_autosuspend(dev->dev);
+ return r;
++ }
+
+ /* new gpu have virtual address space support */
+ if (rdev->family >= CHIP_CAYMAN) {
+diff --git a/drivers/gpu/drm/stm/ltdc.c b/drivers/gpu/drm/stm/ltdc.c
+index f894968d6e45..3f590d916e91 100644
+--- a/drivers/gpu/drm/stm/ltdc.c
++++ b/drivers/gpu/drm/stm/ltdc.c
+@@ -423,9 +423,12 @@ static void ltdc_crtc_atomic_enable(struct drm_crtc *crtc,
+ struct drm_crtc_state *old_state)
+ {
+ struct ltdc_device *ldev = crtc_to_ltdc(crtc);
++ struct drm_device *ddev = crtc->dev;
+
+ DRM_DEBUG_DRIVER("\n");
+
++ pm_runtime_get_sync(ddev->dev);
++
+ /* Sets the background color value */
+ reg_write(ldev->regs, LTDC_BCCR, BCCR_BCBLACK);
+
+diff --git a/drivers/gpu/drm/tilcdc/tilcdc_panel.c b/drivers/gpu/drm/tilcdc/tilcdc_panel.c
+index 12823d60c4e8..4be53768f014 100644
+--- a/drivers/gpu/drm/tilcdc/tilcdc_panel.c
++++ b/drivers/gpu/drm/tilcdc/tilcdc_panel.c
+@@ -139,12 +139,16 @@ static int panel_connector_get_modes(struct drm_connector *connector)
+ int i;
+
+ for (i = 0; i < timings->num_timings; i++) {
+- struct drm_display_mode *mode = drm_mode_create(dev);
++ struct drm_display_mode *mode;
+ struct videomode vm;
+
+ if (videomode_from_timings(timings, &vm, i))
+ break;
+
++ mode = drm_mode_create(dev);
++ if (!mode)
++ break;
++
+ drm_display_mode_from_videomode(&vm, mode);
+
+ mode->type = DRM_MODE_TYPE_DRIVER;
+diff --git a/drivers/gpu/drm/ttm/ttm_tt.c b/drivers/gpu/drm/ttm/ttm_tt.c
+index 2ec448e1d663..9f296b9da05b 100644
+--- a/drivers/gpu/drm/ttm/ttm_tt.c
++++ b/drivers/gpu/drm/ttm/ttm_tt.c
+@@ -242,7 +242,6 @@ int ttm_tt_init(struct ttm_tt *ttm, struct ttm_buffer_object *bo,
+ ttm_tt_init_fields(ttm, bo, page_flags);
+
+ if (ttm_tt_alloc_page_directory(ttm)) {
+- ttm_tt_destroy(ttm);
+ pr_err("Failed allocating page table\n");
+ return -ENOMEM;
+ }
+@@ -266,7 +265,6 @@ int ttm_dma_tt_init(struct ttm_dma_tt *ttm_dma, struct ttm_buffer_object *bo,
+
+ INIT_LIST_HEAD(&ttm_dma->pages_list);
+ if (ttm_dma_tt_alloc_page_directory(ttm_dma)) {
+- ttm_tt_destroy(ttm);
+ pr_err("Failed allocating page table\n");
+ return -ENOMEM;
+ }
+@@ -288,7 +286,6 @@ int ttm_sg_tt_init(struct ttm_dma_tt *ttm_dma, struct ttm_buffer_object *bo,
+ else
+ ret = ttm_dma_tt_alloc_page_directory(ttm_dma);
+ if (ret) {
+- ttm_tt_destroy(ttm);
+ pr_err("Failed allocating page table\n");
+ return -ENOMEM;
+ }
+diff --git a/drivers/gpu/drm/xen/xen_drm_front.c b/drivers/gpu/drm/xen/xen_drm_front.c
+index 1fd458e877ca..51818e76facd 100644
+--- a/drivers/gpu/drm/xen/xen_drm_front.c
++++ b/drivers/gpu/drm/xen/xen_drm_front.c
+@@ -400,8 +400,8 @@ static int xen_drm_drv_dumb_create(struct drm_file *filp,
+ args->size = args->pitch * args->height;
+
+ obj = xen_drm_front_gem_create(dev, args->size);
+- if (IS_ERR_OR_NULL(obj)) {
+- ret = PTR_ERR_OR_ZERO(obj);
++ if (IS_ERR(obj)) {
++ ret = PTR_ERR(obj);
+ goto fail;
+ }
+
+diff --git a/drivers/gpu/drm/xen/xen_drm_front_gem.c b/drivers/gpu/drm/xen/xen_drm_front_gem.c
+index f0b85e094111..4ec8a49241e1 100644
+--- a/drivers/gpu/drm/xen/xen_drm_front_gem.c
++++ b/drivers/gpu/drm/xen/xen_drm_front_gem.c
+@@ -83,7 +83,7 @@ static struct xen_gem_object *gem_create(struct drm_device *dev, size_t size)
+
+ size = round_up(size, PAGE_SIZE);
+ xen_obj = gem_create_obj(dev, size);
+- if (IS_ERR_OR_NULL(xen_obj))
++ if (IS_ERR(xen_obj))
+ return xen_obj;
+
+ if (drm_info->front_info->cfg.be_alloc) {
+@@ -117,7 +117,7 @@ static struct xen_gem_object *gem_create(struct drm_device *dev, size_t size)
+ */
+ xen_obj->num_pages = DIV_ROUND_UP(size, PAGE_SIZE);
+ xen_obj->pages = drm_gem_get_pages(&xen_obj->base);
+- if (IS_ERR_OR_NULL(xen_obj->pages)) {
++ if (IS_ERR(xen_obj->pages)) {
+ ret = PTR_ERR(xen_obj->pages);
+ xen_obj->pages = NULL;
+ goto fail;
+@@ -136,7 +136,7 @@ struct drm_gem_object *xen_drm_front_gem_create(struct drm_device *dev,
+ struct xen_gem_object *xen_obj;
+
+ xen_obj = gem_create(dev, size);
+- if (IS_ERR_OR_NULL(xen_obj))
++ if (IS_ERR(xen_obj))
+ return ERR_CAST(xen_obj);
+
+ return &xen_obj->base;
+@@ -194,7 +194,7 @@ xen_drm_front_gem_import_sg_table(struct drm_device *dev,
+
+ size = attach->dmabuf->size;
+ xen_obj = gem_create_obj(dev, size);
+- if (IS_ERR_OR_NULL(xen_obj))
++ if (IS_ERR(xen_obj))
+ return ERR_CAST(xen_obj);
+
+ ret = gem_alloc_pages_array(xen_obj, size);
+diff --git a/drivers/gpu/drm/xen/xen_drm_front_kms.c b/drivers/gpu/drm/xen/xen_drm_front_kms.c
+index 78096bbcd226..ef11b1e4de39 100644
+--- a/drivers/gpu/drm/xen/xen_drm_front_kms.c
++++ b/drivers/gpu/drm/xen/xen_drm_front_kms.c
+@@ -60,7 +60,7 @@ fb_create(struct drm_device *dev, struct drm_file *filp,
+ int ret;
+
+ fb = drm_gem_fb_create_with_funcs(dev, filp, mode_cmd, &fb_funcs);
+- if (IS_ERR_OR_NULL(fb))
++ if (IS_ERR(fb))
+ return fb;
+
+ gem_obj = fb->obj[0];
+diff --git a/drivers/gpu/host1x/debug.c b/drivers/gpu/host1x/debug.c
+index c0392672a842..1b4997bda1c7 100644
+--- a/drivers/gpu/host1x/debug.c
++++ b/drivers/gpu/host1x/debug.c
+@@ -16,6 +16,8 @@
+ #include "debug.h"
+ #include "channel.h"
+
++static DEFINE_MUTEX(debug_lock);
++
+ unsigned int host1x_debug_trace_cmdbuf;
+
+ static pid_t host1x_debug_force_timeout_pid;
+@@ -52,12 +54,14 @@ static int show_channel(struct host1x_channel *ch, void *data, bool show_fifo)
+ struct output *o = data;
+
+ mutex_lock(&ch->cdma.lock);
++ mutex_lock(&debug_lock);
+
+ if (show_fifo)
+ host1x_hw_show_channel_fifo(m, ch, o);
+
+ host1x_hw_show_channel_cdma(m, ch, o);
+
++ mutex_unlock(&debug_lock);
+ mutex_unlock(&ch->cdma.lock);
+
+ return 0;
+diff --git a/drivers/gpu/ipu-v3/ipu-common.c b/drivers/gpu/ipu-v3/ipu-common.c
+index ee2a025e54cf..b3dae9ec1a38 100644
+--- a/drivers/gpu/ipu-v3/ipu-common.c
++++ b/drivers/gpu/ipu-v3/ipu-common.c
+@@ -124,6 +124,8 @@ enum ipu_color_space ipu_pixelformat_to_colorspace(u32 pixelformat)
+ case V4L2_PIX_FMT_RGBX32:
+ case V4L2_PIX_FMT_ARGB32:
+ case V4L2_PIX_FMT_XRGB32:
++ case V4L2_PIX_FMT_RGB32:
++ case V4L2_PIX_FMT_BGR32:
+ return IPUV3_COLORSPACE_RGB;
+ default:
+ return IPUV3_COLORSPACE_UNKNOWN;
+diff --git a/drivers/hid/hid-input.c b/drivers/hid/hid-input.c
+index dea9cc65bf80..e8641ce677e4 100644
+--- a/drivers/hid/hid-input.c
++++ b/drivers/hid/hid-input.c
+@@ -350,13 +350,13 @@ static int hidinput_query_battery_capacity(struct hid_device *dev)
+ u8 *buf;
+ int ret;
+
+- buf = kmalloc(2, GFP_KERNEL);
++ buf = kmalloc(4, GFP_KERNEL);
+ if (!buf)
+ return -ENOMEM;
+
+- ret = hid_hw_raw_request(dev, dev->battery_report_id, buf, 2,
++ ret = hid_hw_raw_request(dev, dev->battery_report_id, buf, 4,
+ dev->battery_report_type, HID_REQ_GET_REPORT);
+- if (ret != 2) {
++ if (ret < 2) {
+ kfree(buf);
+ return -ENODATA;
+ }
+diff --git a/drivers/hwtracing/coresight/coresight-etm4x.c b/drivers/hwtracing/coresight/coresight-etm4x.c
+index 0c35cd5e0d1d..6089c481f8f1 100644
+--- a/drivers/hwtracing/coresight/coresight-etm4x.c
++++ b/drivers/hwtracing/coresight/coresight-etm4x.c
+@@ -507,6 +507,12 @@ static void etm4_disable_hw(void *info)
+ readl_relaxed(drvdata->base + TRCSSCSRn(i));
+ }
+
++ /* read back the current counter values */
++ for (i = 0; i < drvdata->nr_cntr; i++) {
++ config->cntr_val[i] =
++ readl_relaxed(drvdata->base + TRCCNTVRn(i));
++ }
++
+ coresight_disclaim_device_unlocked(drvdata->base);
+
+ CS_LOCK(drvdata->base);
+@@ -1196,8 +1202,8 @@ static int etm4_cpu_save(struct etmv4_drvdata *drvdata)
+ }
+
+ for (i = 0; i < drvdata->nr_addr_cmp * 2; i++) {
+- state->trcacvr[i] = readl(drvdata->base + TRCACVRn(i));
+- state->trcacatr[i] = readl(drvdata->base + TRCACATRn(i));
++ state->trcacvr[i] = readq(drvdata->base + TRCACVRn(i));
++ state->trcacatr[i] = readq(drvdata->base + TRCACATRn(i));
+ }
+
+ /*
+@@ -1208,10 +1214,10 @@ static int etm4_cpu_save(struct etmv4_drvdata *drvdata)
+ */
+
+ for (i = 0; i < drvdata->numcidc; i++)
+- state->trccidcvr[i] = readl(drvdata->base + TRCCIDCVRn(i));
++ state->trccidcvr[i] = readq(drvdata->base + TRCCIDCVRn(i));
+
+ for (i = 0; i < drvdata->numvmidc; i++)
+- state->trcvmidcvr[i] = readl(drvdata->base + TRCVMIDCVRn(i));
++ state->trcvmidcvr[i] = readq(drvdata->base + TRCVMIDCVRn(i));
+
+ state->trccidcctlr0 = readl(drvdata->base + TRCCIDCCTLR0);
+ state->trccidcctlr1 = readl(drvdata->base + TRCCIDCCTLR1);
+@@ -1309,18 +1315,18 @@ static void etm4_cpu_restore(struct etmv4_drvdata *drvdata)
+ }
+
+ for (i = 0; i < drvdata->nr_addr_cmp * 2; i++) {
+- writel_relaxed(state->trcacvr[i],
++ writeq_relaxed(state->trcacvr[i],
+ drvdata->base + TRCACVRn(i));
+- writel_relaxed(state->trcacatr[i],
++ writeq_relaxed(state->trcacatr[i],
+ drvdata->base + TRCACATRn(i));
+ }
+
+ for (i = 0; i < drvdata->numcidc; i++)
+- writel_relaxed(state->trccidcvr[i],
++ writeq_relaxed(state->trccidcvr[i],
+ drvdata->base + TRCCIDCVRn(i));
+
+ for (i = 0; i < drvdata->numvmidc; i++)
+- writel_relaxed(state->trcvmidcvr[i],
++ writeq_relaxed(state->trcvmidcvr[i],
+ drvdata->base + TRCVMIDCVRn(i));
+
+ writel_relaxed(state->trccidcctlr0, drvdata->base + TRCCIDCCTLR0);
+diff --git a/drivers/hwtracing/coresight/coresight-etm4x.h b/drivers/hwtracing/coresight/coresight-etm4x.h
+index 4a695bf90582..47729e04aac7 100644
+--- a/drivers/hwtracing/coresight/coresight-etm4x.h
++++ b/drivers/hwtracing/coresight/coresight-etm4x.h
+@@ -133,7 +133,7 @@
+ #define ETMv4_MAX_CTXID_CMP 8
+ #define ETM_MAX_VMID_CMP 8
+ #define ETM_MAX_PE_CMP 8
+-#define ETM_MAX_RES_SEL 16
++#define ETM_MAX_RES_SEL 32
+ #define ETM_MAX_SS_CMP 8
+
+ #define ETM_ARCH_V4 0x40
+@@ -325,7 +325,7 @@ struct etmv4_save_state {
+ u32 trccntctlr[ETMv4_MAX_CNTR];
+ u32 trccntvr[ETMv4_MAX_CNTR];
+
+- u32 trcrsctlr[ETM_MAX_RES_SEL * 2];
++ u32 trcrsctlr[ETM_MAX_RES_SEL];
+
+ u32 trcssccr[ETM_MAX_SS_CMP];
+ u32 trcsscsr[ETM_MAX_SS_CMP];
+@@ -334,7 +334,7 @@ struct etmv4_save_state {
+ u64 trcacvr[ETM_MAX_SINGLE_ADDR_CMP];
+ u64 trcacatr[ETM_MAX_SINGLE_ADDR_CMP];
+ u64 trccidcvr[ETMv4_MAX_CTXID_CMP];
+- u32 trcvmidcvr[ETM_MAX_VMID_CMP];
++ u64 trcvmidcvr[ETM_MAX_VMID_CMP];
+ u32 trccidcctlr0;
+ u32 trccidcctlr1;
+ u32 trcvmidcctlr0;
+diff --git a/drivers/hwtracing/coresight/coresight-tmc-etf.c b/drivers/hwtracing/coresight/coresight-tmc-etf.c
+index 36cce2bfb744..6375504ba8b0 100644
+--- a/drivers/hwtracing/coresight/coresight-tmc-etf.c
++++ b/drivers/hwtracing/coresight/coresight-tmc-etf.c
+@@ -639,15 +639,14 @@ int tmc_read_unprepare_etb(struct tmc_drvdata *drvdata)
+
+ spin_lock_irqsave(&drvdata->spinlock, flags);
+
+- /* There is no point in reading a TMC in HW FIFO mode */
+- mode = readl_relaxed(drvdata->base + TMC_MODE);
+- if (mode != TMC_MODE_CIRCULAR_BUFFER) {
+- spin_unlock_irqrestore(&drvdata->spinlock, flags);
+- return -EINVAL;
+- }
+-
+ /* Re-enable the TMC if need be */
+ if (drvdata->mode == CS_MODE_SYSFS) {
++ /* There is no point in reading a TMC in HW FIFO mode */
++ mode = readl_relaxed(drvdata->base + TMC_MODE);
++ if (mode != TMC_MODE_CIRCULAR_BUFFER) {
++ spin_unlock_irqrestore(&drvdata->spinlock, flags);
++ return -EINVAL;
++ }
+ /*
+ * The trace run will continue with the same allocated trace
+ * buffer. As such zero-out the buffer so that we don't end
+diff --git a/drivers/iio/amplifiers/ad8366.c b/drivers/iio/amplifiers/ad8366.c
+index 62167b87caea..8345ba65d41d 100644
+--- a/drivers/iio/amplifiers/ad8366.c
++++ b/drivers/iio/amplifiers/ad8366.c
+@@ -262,8 +262,11 @@ static int ad8366_probe(struct spi_device *spi)
+ case ID_ADA4961:
+ case ID_ADL5240:
+ case ID_HMC1119:
+- st->reset_gpio = devm_gpiod_get(&spi->dev, "reset",
+- GPIOD_OUT_HIGH);
++ st->reset_gpio = devm_gpiod_get_optional(&spi->dev, "reset", GPIOD_OUT_HIGH);
++ if (IS_ERR(st->reset_gpio)) {
++ ret = PTR_ERR(st->reset_gpio);
++ goto error_disable_reg;
++ }
+ indio_dev->channels = ada4961_channels;
+ indio_dev->num_channels = ARRAY_SIZE(ada4961_channels);
+ break;
+diff --git a/drivers/infiniband/core/device.c b/drivers/infiniband/core/device.c
+index 905a2beaf885..eadba29432dd 100644
+--- a/drivers/infiniband/core/device.c
++++ b/drivers/infiniband/core/device.c
+@@ -1339,6 +1339,10 @@ out:
+ return ret;
+ }
+
++static void prevent_dealloc_device(struct ib_device *ib_dev)
++{
++}
++
+ /**
+ * ib_register_device - Register an IB device with IB core
+ * @device: Device to register
+@@ -1409,11 +1413,11 @@ int ib_register_device(struct ib_device *device, const char *name)
+ * possibility for a parallel unregistration along with this
+ * error flow. Since we have a refcount here we know any
+ * parallel flow is stopped in disable_device and will see the
+- * NULL pointers, causing the responsibility to
++ * special dealloc_driver pointer, causing the responsibility to
+ * ib_dealloc_device() to revert back to this thread.
+ */
+ dealloc_fn = device->ops.dealloc_driver;
+- device->ops.dealloc_driver = NULL;
++ device->ops.dealloc_driver = prevent_dealloc_device;
+ ib_device_put(device);
+ __ib_unregister_device(device);
+ device->ops.dealloc_driver = dealloc_fn;
+@@ -1462,7 +1466,8 @@ static void __ib_unregister_device(struct ib_device *ib_dev)
+ * Drivers using the new flow may not call ib_dealloc_device except
+ * in error unwind prior to registration success.
+ */
+- if (ib_dev->ops.dealloc_driver) {
++ if (ib_dev->ops.dealloc_driver &&
++ ib_dev->ops.dealloc_driver != prevent_dealloc_device) {
+ WARN_ON(kref_read(&ib_dev->dev.kobj.kref) <= 1);
+ ib_dealloc_device(ib_dev);
+ }
+diff --git a/drivers/infiniband/core/nldev.c b/drivers/infiniband/core/nldev.c
+index e16105be2eb2..98cd6403ca60 100644
+--- a/drivers/infiniband/core/nldev.c
++++ b/drivers/infiniband/core/nldev.c
+@@ -738,9 +738,6 @@ static int fill_stat_counter_qps(struct sk_buff *msg,
+ xa_lock(&rt->xa);
+ xa_for_each(&rt->xa, id, res) {
+ qp = container_of(res, struct ib_qp, res);
+- if (qp->qp_type == IB_QPT_RAW_PACKET && !capable(CAP_NET_RAW))
+- continue;
+-
+ if (!qp->counter || (qp->counter->id != counter->id))
+ continue;
+
+diff --git a/drivers/infiniband/core/verbs.c b/drivers/infiniband/core/verbs.c
+index 53d6505c0c7b..f369f0a19e85 100644
+--- a/drivers/infiniband/core/verbs.c
++++ b/drivers/infiniband/core/verbs.c
+@@ -1712,7 +1712,7 @@ static int _ib_modify_qp(struct ib_qp *qp, struct ib_qp_attr *attr,
+ if (!(rdma_protocol_ib(qp->device,
+ attr->alt_ah_attr.port_num) &&
+ rdma_protocol_ib(qp->device, port))) {
+- ret = EINVAL;
++ ret = -EINVAL;
+ goto out;
+ }
+ }
+diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+index 0618ced45bf8..eb71b941d21b 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
++++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+@@ -3053,6 +3053,7 @@ static void get_cqe_status(struct hns_roce_dev *hr_dev, struct hns_roce_qp *qp,
+ IB_WC_RETRY_EXC_ERR },
+ { HNS_ROCE_CQE_V2_RNR_RETRY_EXC_ERR, IB_WC_RNR_RETRY_EXC_ERR },
+ { HNS_ROCE_CQE_V2_REMOTE_ABORT_ERR, IB_WC_REM_ABORT_ERR },
++ { HNS_ROCE_CQE_V2_GENERAL_ERR, IB_WC_GENERAL_ERR}
+ };
+
+ u32 cqe_status = roce_get_field(cqe->byte_4, V2_CQE_BYTE_4_STATUS_M,
+@@ -3074,6 +3075,14 @@ static void get_cqe_status(struct hns_roce_dev *hr_dev, struct hns_roce_qp *qp,
+ print_hex_dump(KERN_ERR, "", DUMP_PREFIX_NONE, 16, 4, cqe,
+ sizeof(*cqe), false);
+
++ /*
++ * For hns ROCEE, GENERAL_ERR is an error type that is not defined in
++ * the standard protocol, the driver must ignore it and needn't to set
++ * the QP to an error state.
++ */
++ if (cqe_status == HNS_ROCE_CQE_V2_GENERAL_ERR)
++ return;
++
+ /*
+ * Hip08 hardware cannot flush the WQEs in SQ/RQ if the QP state gets
+ * into errored mode. Hence, as a workaround to this hardware
+@@ -4301,7 +4310,9 @@ static bool check_qp_state(enum ib_qp_state cur_state,
+ [IB_QPS_RTR] = { [IB_QPS_RESET] = true,
+ [IB_QPS_RTS] = true,
+ [IB_QPS_ERR] = true },
+- [IB_QPS_RTS] = { [IB_QPS_RESET] = true, [IB_QPS_ERR] = true },
++ [IB_QPS_RTS] = { [IB_QPS_RESET] = true,
++ [IB_QPS_RTS] = true,
++ [IB_QPS_ERR] = true },
+ [IB_QPS_SQD] = {},
+ [IB_QPS_SQE] = {},
+ [IB_QPS_ERR] = { [IB_QPS_RESET] = true, [IB_QPS_ERR] = true }
+diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.h b/drivers/infiniband/hw/hns/hns_roce_hw_v2.h
+index e176b0aaa4ac..e6c385ced187 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.h
++++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.h
+@@ -230,6 +230,7 @@ enum {
+ HNS_ROCE_CQE_V2_TRANSPORT_RETRY_EXC_ERR = 0x15,
+ HNS_ROCE_CQE_V2_RNR_RETRY_EXC_ERR = 0x16,
+ HNS_ROCE_CQE_V2_REMOTE_ABORT_ERR = 0x22,
++ HNS_ROCE_CQE_V2_GENERAL_ERR = 0x23,
+
+ HNS_ROCE_V2_CQE_STATUS_MASK = 0xff,
+ };
+diff --git a/drivers/infiniband/hw/qedr/qedr.h b/drivers/infiniband/hw/qedr/qedr.h
+index fdf90ecb2699..460292179b32 100644
+--- a/drivers/infiniband/hw/qedr/qedr.h
++++ b/drivers/infiniband/hw/qedr/qedr.h
+@@ -235,6 +235,7 @@ struct qedr_ucontext {
+ u32 dpi_size;
+ u16 dpi;
+ bool db_rec;
++ u8 edpm_mode;
+ };
+
+ union db_prod32 {
+@@ -344,10 +345,10 @@ struct qedr_srq_hwq_info {
+ u32 wqe_prod;
+ u32 sge_prod;
+ u32 wr_prod_cnt;
+- u32 wr_cons_cnt;
++ atomic_t wr_cons_cnt;
+ u32 num_elems;
+
+- u32 *virt_prod_pair_addr;
++ struct rdma_srq_producers *virt_prod_pair_addr;
+ dma_addr_t phy_prod_pair_addr;
+ };
+
+diff --git a/drivers/infiniband/hw/qedr/verbs.c b/drivers/infiniband/hw/qedr/verbs.c
+index 9b9e80266367..1a7f1f805be3 100644
+--- a/drivers/infiniband/hw/qedr/verbs.c
++++ b/drivers/infiniband/hw/qedr/verbs.c
+@@ -275,7 +275,8 @@ int qedr_alloc_ucontext(struct ib_ucontext *uctx, struct ib_udata *udata)
+ DP_ERR(dev, "Problem copying data from user space\n");
+ return -EFAULT;
+ }
+-
++ ctx->edpm_mode = !!(ureq.context_flags &
++ QEDR_ALLOC_UCTX_EDPM_MODE);
+ ctx->db_rec = !!(ureq.context_flags & QEDR_ALLOC_UCTX_DB_REC);
+ }
+
+@@ -316,11 +317,15 @@ int qedr_alloc_ucontext(struct ib_ucontext *uctx, struct ib_udata *udata)
+ uresp.dpm_flags = QEDR_DPM_TYPE_IWARP_LEGACY;
+ else
+ uresp.dpm_flags = QEDR_DPM_TYPE_ROCE_ENHANCED |
+- QEDR_DPM_TYPE_ROCE_LEGACY;
++ QEDR_DPM_TYPE_ROCE_LEGACY |
++ QEDR_DPM_TYPE_ROCE_EDPM_MODE;
+
+- uresp.dpm_flags |= QEDR_DPM_SIZES_SET;
+- uresp.ldpm_limit_size = QEDR_LDPM_MAX_SIZE;
+- uresp.edpm_trans_size = QEDR_EDPM_TRANS_SIZE;
++ if (ureq.context_flags & QEDR_SUPPORT_DPM_SIZES) {
++ uresp.dpm_flags |= QEDR_DPM_SIZES_SET;
++ uresp.ldpm_limit_size = QEDR_LDPM_MAX_SIZE;
++ uresp.edpm_trans_size = QEDR_EDPM_TRANS_SIZE;
++ uresp.edpm_limit_size = QEDR_EDPM_MAX_SIZE;
++ }
+
+ uresp.wids_enabled = 1;
+ uresp.wid_count = oparams.wid_count;
+@@ -1750,7 +1755,7 @@ static int qedr_create_user_qp(struct qedr_dev *dev,
+ struct qed_rdma_create_qp_out_params out_params;
+ struct qedr_pd *pd = get_qedr_pd(ibpd);
+ struct qedr_create_qp_uresp uresp;
+- struct qedr_ucontext *ctx = NULL;
++ struct qedr_ucontext *ctx = pd ? pd->uctx : NULL;
+ struct qedr_create_qp_ureq ureq;
+ int alloc_and_init = rdma_protocol_roce(&dev->ibdev, 1);
+ int rc = -EINVAL;
+@@ -1788,6 +1793,9 @@ static int qedr_create_user_qp(struct qedr_dev *dev,
+ in_params.rq_pbl_ptr = qp->urq.pbl_tbl->pa;
+ }
+
++ if (ctx)
++ SET_FIELD(in_params.flags, QED_ROCE_EDPM_MODE, ctx->edpm_mode);
++
+ qp->qed_qp = dev->ops->rdma_create_qp(dev->rdma_ctx,
+ &in_params, &out_params);
+
+@@ -3686,7 +3694,7 @@ static u32 qedr_srq_elem_left(struct qedr_srq_hwq_info *hw_srq)
+ * count and consumer count and subtract it from max
+ * work request supported so that we get elements left.
+ */
+- used = hw_srq->wr_prod_cnt - hw_srq->wr_cons_cnt;
++ used = hw_srq->wr_prod_cnt - (u32)atomic_read(&hw_srq->wr_cons_cnt);
+
+ return hw_srq->max_wr - used;
+ }
+@@ -3701,7 +3709,6 @@ int qedr_post_srq_recv(struct ib_srq *ibsrq, const struct ib_recv_wr *wr,
+ unsigned long flags;
+ int status = 0;
+ u32 num_sge;
+- u32 offset;
+
+ spin_lock_irqsave(&srq->lock, flags);
+
+@@ -3714,7 +3721,8 @@ int qedr_post_srq_recv(struct ib_srq *ibsrq, const struct ib_recv_wr *wr,
+ if (!qedr_srq_elem_left(hw_srq) ||
+ wr->num_sge > srq->hw_srq.max_sges) {
+ DP_ERR(dev, "Can't post WR (%d,%d) || (%d > %d)\n",
+- hw_srq->wr_prod_cnt, hw_srq->wr_cons_cnt,
++ hw_srq->wr_prod_cnt,
++ atomic_read(&hw_srq->wr_cons_cnt),
+ wr->num_sge, srq->hw_srq.max_sges);
+ status = -ENOMEM;
+ *bad_wr = wr;
+@@ -3748,22 +3756,20 @@ int qedr_post_srq_recv(struct ib_srq *ibsrq, const struct ib_recv_wr *wr,
+ hw_srq->sge_prod++;
+ }
+
+- /* Flush WQE and SGE information before
++ /* Update WQE and SGE information before
+ * updating producer.
+ */
+- wmb();
++ dma_wmb();
+
+ /* SRQ producer is 8 bytes. Need to update SGE producer index
+ * in first 4 bytes and need to update WQE producer in
+ * next 4 bytes.
+ */
+- *srq->hw_srq.virt_prod_pair_addr = hw_srq->sge_prod;
+- offset = offsetof(struct rdma_srq_producers, wqe_prod);
+- *((u8 *)srq->hw_srq.virt_prod_pair_addr + offset) =
+- hw_srq->wqe_prod;
++ srq->hw_srq.virt_prod_pair_addr->sge_prod = hw_srq->sge_prod;
++ /* Make sure sge producer is updated first */
++ dma_wmb();
++ srq->hw_srq.virt_prod_pair_addr->wqe_prod = hw_srq->wqe_prod;
+
+- /* Flush producer after updating it. */
+- wmb();
+ wr = wr->next;
+ }
+
+@@ -4182,7 +4188,7 @@ static int process_resp_one_srq(struct qedr_dev *dev, struct qedr_qp *qp,
+ } else {
+ __process_resp_one(dev, qp, cq, wc, resp, wr_id);
+ }
+- srq->hw_srq.wr_cons_cnt++;
++ atomic_inc(&srq->hw_srq.wr_cons_cnt);
+
+ return 1;
+ }
+diff --git a/drivers/infiniband/sw/rxe/rxe_recv.c b/drivers/infiniband/sw/rxe/rxe_recv.c
+index 831ad578a7b2..46e111c218fd 100644
+--- a/drivers/infiniband/sw/rxe/rxe_recv.c
++++ b/drivers/infiniband/sw/rxe/rxe_recv.c
+@@ -330,10 +330,14 @@ err1:
+
+ static int rxe_match_dgid(struct rxe_dev *rxe, struct sk_buff *skb)
+ {
++ struct rxe_pkt_info *pkt = SKB_TO_PKT(skb);
+ const struct ib_gid_attr *gid_attr;
+ union ib_gid dgid;
+ union ib_gid *pdgid;
+
++ if (pkt->mask & RXE_LOOPBACK_MASK)
++ return 0;
++
+ if (skb->protocol == htons(ETH_P_IP)) {
+ ipv6_addr_set_v4mapped(ip_hdr(skb)->daddr,
+ (struct in6_addr *)&dgid);
+@@ -366,7 +370,7 @@ void rxe_rcv(struct sk_buff *skb)
+ if (unlikely(skb->len < pkt->offset + RXE_BTH_BYTES))
+ goto drop;
+
+- if (unlikely(rxe_match_dgid(rxe, skb) < 0)) {
++ if (rxe_match_dgid(rxe, skb) < 0) {
+ pr_warn_ratelimited("failed matching dgid\n");
+ goto drop;
+ }
+diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.c b/drivers/infiniband/sw/rxe/rxe_verbs.c
+index b8a22af724e8..84fec5fd798d 100644
+--- a/drivers/infiniband/sw/rxe/rxe_verbs.c
++++ b/drivers/infiniband/sw/rxe/rxe_verbs.c
+@@ -684,6 +684,7 @@ static int rxe_post_send_kernel(struct rxe_qp *qp, const struct ib_send_wr *wr,
+ unsigned int mask;
+ unsigned int length = 0;
+ int i;
++ struct ib_send_wr *next;
+
+ while (wr) {
+ mask = wr_opcode_mask(wr->opcode, qp);
+@@ -700,6 +701,8 @@ static int rxe_post_send_kernel(struct rxe_qp *qp, const struct ib_send_wr *wr,
+ break;
+ }
+
++ next = wr->next;
++
+ length = 0;
+ for (i = 0; i < wr->num_sge; i++)
+ length += wr->sg_list[i].length;
+@@ -710,7 +713,7 @@ static int rxe_post_send_kernel(struct rxe_qp *qp, const struct ib_send_wr *wr,
+ *bad_wr = wr;
+ break;
+ }
+- wr = wr->next;
++ wr = next;
+ }
+
+ rxe_run_task(&qp->req.task, 1);
+diff --git a/drivers/infiniband/ulp/rtrs/rtrs-clt.c b/drivers/infiniband/ulp/rtrs/rtrs-clt.c
+index 564388a85603..776e89231c52 100644
+--- a/drivers/infiniband/ulp/rtrs/rtrs-clt.c
++++ b/drivers/infiniband/ulp/rtrs/rtrs-clt.c
+@@ -12,6 +12,7 @@
+
+ #include <linux/module.h>
+ #include <linux/rculist.h>
++#include <linux/random.h>
+
+ #include "rtrs-clt.h"
+ #include "rtrs-log.h"
+@@ -23,6 +24,12 @@
+ * leads to "false positives" failed reconnect attempts
+ */
+ #define RTRS_RECONNECT_BACKOFF 1000
++/*
++ * Wait for additional random time between 0 and 8 seconds
++ * before starting to reconnect to avoid clients reconnecting
++ * all at once in case of a major network outage
++ */
++#define RTRS_RECONNECT_SEED 8
+
+ MODULE_DESCRIPTION("RDMA Transport Client");
+ MODULE_LICENSE("GPL");
+@@ -306,7 +313,8 @@ static void rtrs_rdma_error_recovery(struct rtrs_clt_con *con)
+ */
+ delay_ms = clt->reconnect_delay_sec * 1000;
+ queue_delayed_work(rtrs_wq, &sess->reconnect_dwork,
+- msecs_to_jiffies(delay_ms));
++ msecs_to_jiffies(delay_ms +
++ prandom_u32() % RTRS_RECONNECT_SEED));
+ } else {
+ /*
+ * Error can happen just on establishing new connection,
+@@ -2503,7 +2511,9 @@ reconnect_again:
+ sess->stats->reconnects.fail_cnt++;
+ delay_ms = clt->reconnect_delay_sec * 1000;
+ queue_delayed_work(rtrs_wq, &sess->reconnect_dwork,
+- msecs_to_jiffies(delay_ms));
++ msecs_to_jiffies(delay_ms +
++ prandom_u32() %
++ RTRS_RECONNECT_SEED));
+ }
+ }
+
+@@ -2972,7 +2982,7 @@ static int __init rtrs_client_init(void)
+ pr_err("Failed to create rtrs-client dev class\n");
+ return PTR_ERR(rtrs_clt_dev_class);
+ }
+- rtrs_wq = alloc_workqueue("rtrs_client_wq", WQ_MEM_RECLAIM, 0);
++ rtrs_wq = alloc_workqueue("rtrs_client_wq", 0, 0);
+ if (!rtrs_wq) {
+ class_destroy(rtrs_clt_dev_class);
+ return -ENOMEM;
+diff --git a/drivers/infiniband/ulp/rtrs/rtrs-srv.c b/drivers/infiniband/ulp/rtrs/rtrs-srv.c
+index 0d9241f5d9e6..a219bd1bdbc2 100644
+--- a/drivers/infiniband/ulp/rtrs/rtrs-srv.c
++++ b/drivers/infiniband/ulp/rtrs/rtrs-srv.c
+@@ -2150,7 +2150,7 @@ static int __init rtrs_server_init(void)
+ err = PTR_ERR(rtrs_dev_class);
+ goto out_chunk_pool;
+ }
+- rtrs_wq = alloc_workqueue("rtrs_server_wq", WQ_MEM_RECLAIM, 0);
++ rtrs_wq = alloc_workqueue("rtrs_server_wq", 0, 0);
+ if (!rtrs_wq) {
+ err = -ENOMEM;
+ goto out_dev_class;
+diff --git a/drivers/iommu/intel/dmar.c b/drivers/iommu/intel/dmar.c
+index 683b812c5c47..16f47041f1bf 100644
+--- a/drivers/iommu/intel/dmar.c
++++ b/drivers/iommu/intel/dmar.c
+@@ -1102,6 +1102,7 @@ static int alloc_iommu(struct dmar_drhd_unit *drhd)
+ }
+
+ drhd->iommu = iommu;
++ iommu->drhd = drhd;
+
+ return 0;
+
+diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
+index d759e7234e98..a459eac96754 100644
+--- a/drivers/iommu/intel/iommu.c
++++ b/drivers/iommu/intel/iommu.c
+@@ -356,6 +356,7 @@ static int intel_iommu_strict;
+ static int intel_iommu_superpage = 1;
+ static int iommu_identity_mapping;
+ static int intel_no_bounce;
++static int iommu_skip_te_disable;
+
+ #define IDENTMAP_GFX 2
+ #define IDENTMAP_AZALIA 4
+@@ -1629,6 +1630,10 @@ static void iommu_disable_translation(struct intel_iommu *iommu)
+ u32 sts;
+ unsigned long flag;
+
++ if (iommu_skip_te_disable && iommu->drhd->gfx_dedicated &&
++ (cap_read_drain(iommu->cap) || cap_write_drain(iommu->cap)))
++ return;
++
+ raw_spin_lock_irqsave(&iommu->register_lock, flag);
+ iommu->gcmd &= ~DMA_GCMD_TE;
+ writel(iommu->gcmd, iommu->reg + DMAR_GCMD_REG);
+@@ -4039,6 +4044,7 @@ static void __init init_no_remapping_devices(void)
+
+ /* This IOMMU has *only* gfx devices. Either bypass it or
+ set the gfx_mapped flag, as appropriate */
++ drhd->gfx_dedicated = 1;
+ if (!dmar_map_gfx) {
+ drhd->ignored = 1;
+ for_each_active_dev_scope(drhd->devices,
+@@ -6182,6 +6188,27 @@ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x0044, quirk_calpella_no_shadow_g
+ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x0062, quirk_calpella_no_shadow_gtt);
+ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x006a, quirk_calpella_no_shadow_gtt);
+
++static void quirk_igfx_skip_te_disable(struct pci_dev *dev)
++{
++ unsigned short ver;
++
++ if (!IS_GFX_DEVICE(dev))
++ return;
++
++ ver = (dev->device >> 8) & 0xff;
++ if (ver != 0x45 && ver != 0x46 && ver != 0x4c &&
++ ver != 0x4e && ver != 0x8a && ver != 0x98 &&
++ ver != 0x9a)
++ return;
++
++ if (risky_device(dev))
++ return;
++
++ pci_info(dev, "Skip IOMMU disabling for graphics\n");
++ iommu_skip_te_disable = 1;
++}
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_ANY_ID, quirk_igfx_skip_te_disable);
++
+ /* On Tylersburg chipsets, some BIOSes have been known to enable the
+ ISOCH DMAR unit for the Azalia sound device, but not give it any
+ TLB entries, which causes it to deadlock. Check for that. We do
+diff --git a/drivers/iommu/intel/irq_remapping.c b/drivers/iommu/intel/irq_remapping.c
+index 9564d23d094f..aa096b333a99 100644
+--- a/drivers/iommu/intel/irq_remapping.c
++++ b/drivers/iommu/intel/irq_remapping.c
+@@ -628,13 +628,21 @@ out_free_table:
+
+ static void intel_teardown_irq_remapping(struct intel_iommu *iommu)
+ {
++ struct fwnode_handle *fn;
++
+ if (iommu && iommu->ir_table) {
+ if (iommu->ir_msi_domain) {
++ fn = iommu->ir_msi_domain->fwnode;
++
+ irq_domain_remove(iommu->ir_msi_domain);
++ irq_domain_free_fwnode(fn);
+ iommu->ir_msi_domain = NULL;
+ }
+ if (iommu->ir_domain) {
++ fn = iommu->ir_domain->fwnode;
++
+ irq_domain_remove(iommu->ir_domain);
++ irq_domain_free_fwnode(fn);
+ iommu->ir_domain = NULL;
+ }
+ free_pages((unsigned long)iommu->ir_table->base,
+diff --git a/drivers/irqchip/irq-bcm7038-l1.c b/drivers/irqchip/irq-bcm7038-l1.c
+index fd7c537fb42a..4127eeab10af 100644
+--- a/drivers/irqchip/irq-bcm7038-l1.c
++++ b/drivers/irqchip/irq-bcm7038-l1.c
+@@ -327,7 +327,11 @@ static int bcm7038_l1_suspend(void)
+ u32 val;
+
+ /* Wakeup interrupt should only come from the boot cpu */
++#ifdef CONFIG_SMP
+ boot_cpu = cpu_logical_map(0);
++#else
++ boot_cpu = 0;
++#endif
+
+ list_for_each_entry(intc, &bcm7038_l1_intcs_list, list) {
+ for (word = 0; word < intc->n_words; word++) {
+@@ -347,7 +351,11 @@ static void bcm7038_l1_resume(void)
+ struct bcm7038_l1_chip *intc;
+ int boot_cpu, word;
+
++#ifdef CONFIG_SMP
+ boot_cpu = cpu_logical_map(0);
++#else
++ boot_cpu = 0;
++#endif
+
+ list_for_each_entry(intc, &bcm7038_l1_intcs_list, list) {
+ for (word = 0; word < intc->n_words; word++) {
+diff --git a/drivers/irqchip/irq-gic-v3-its.c b/drivers/irqchip/irq-gic-v3-its.c
+index beac4caefad9..da44bfa48bc2 100644
+--- a/drivers/irqchip/irq-gic-v3-its.c
++++ b/drivers/irqchip/irq-gic-v3-its.c
+@@ -2814,7 +2814,7 @@ static int allocate_vpe_l1_table(void)
+ if (val & GICR_VPROPBASER_4_1_VALID)
+ goto out;
+
+- gic_data_rdist()->vpe_table_mask = kzalloc(sizeof(cpumask_t), GFP_KERNEL);
++ gic_data_rdist()->vpe_table_mask = kzalloc(sizeof(cpumask_t), GFP_ATOMIC);
+ if (!gic_data_rdist()->vpe_table_mask)
+ return -ENOMEM;
+
+@@ -2881,7 +2881,7 @@ static int allocate_vpe_l1_table(void)
+
+ pr_debug("np = %d, npg = %lld, psz = %d, epp = %d, esz = %d\n",
+ np, npg, psz, epp, esz);
+- page = alloc_pages(GFP_KERNEL | __GFP_ZERO, get_order(np * PAGE_SIZE));
++ page = alloc_pages(GFP_ATOMIC | __GFP_ZERO, get_order(np * PAGE_SIZE));
+ if (!page)
+ return -ENOMEM;
+
+diff --git a/drivers/irqchip/irq-loongson-htvec.c b/drivers/irqchip/irq-loongson-htvec.c
+index 1ece9337c78d..720cf96ae90e 100644
+--- a/drivers/irqchip/irq-loongson-htvec.c
++++ b/drivers/irqchip/irq-loongson-htvec.c
+@@ -109,11 +109,14 @@ static struct irq_chip htvec_irq_chip = {
+ static int htvec_domain_alloc(struct irq_domain *domain, unsigned int virq,
+ unsigned int nr_irqs, void *arg)
+ {
++ int ret;
+ unsigned long hwirq;
+ unsigned int type, i;
+ struct htvec *priv = domain->host_data;
+
+- irq_domain_translate_onecell(domain, arg, &hwirq, &type);
++ ret = irq_domain_translate_onecell(domain, arg, &hwirq, &type);
++ if (ret)
++ return ret;
+
+ for (i = 0; i < nr_irqs; i++) {
+ irq_domain_set_info(domain, virq + i, hwirq + i, &htvec_irq_chip,
+@@ -192,7 +195,7 @@ static int htvec_of_init(struct device_node *node,
+ if (!priv->htvec_domain) {
+ pr_err("Failed to create IRQ domain\n");
+ err = -ENOMEM;
+- goto iounmap_base;
++ goto irq_dispose;
+ }
+
+ htvec_reset(priv);
+@@ -203,6 +206,9 @@ static int htvec_of_init(struct device_node *node,
+
+ return 0;
+
++irq_dispose:
++ for (; i > 0; i--)
++ irq_dispose_mapping(parent_irq[i - 1]);
+ iounmap_base:
+ iounmap(priv->base);
+ free_priv:
+diff --git a/drivers/irqchip/irq-loongson-liointc.c b/drivers/irqchip/irq-loongson-liointc.c
+index 63b61474a0cc..6ef86a334c62 100644
+--- a/drivers/irqchip/irq-loongson-liointc.c
++++ b/drivers/irqchip/irq-loongson-liointc.c
+@@ -114,6 +114,7 @@ static int liointc_set_type(struct irq_data *data, unsigned int type)
+ liointc_set_bit(gc, LIOINTC_REG_INTC_POL, mask, false);
+ break;
+ default:
++ irq_gc_unlock_irqrestore(gc, flags);
+ return -EINVAL;
+ }
+ irq_gc_unlock_irqrestore(gc, flags);
+diff --git a/drivers/irqchip/irq-loongson-pch-pic.c b/drivers/irqchip/irq-loongson-pch-pic.c
+index 2a05b9305012..9bf6b9a5f734 100644
+--- a/drivers/irqchip/irq-loongson-pch-pic.c
++++ b/drivers/irqchip/irq-loongson-pch-pic.c
+@@ -64,15 +64,6 @@ static void pch_pic_bitclr(struct pch_pic *priv, int offset, int bit)
+ raw_spin_unlock(&priv->pic_lock);
+ }
+
+-static void pch_pic_eoi_irq(struct irq_data *d)
+-{
+- u32 idx = PIC_REG_IDX(d->hwirq);
+- struct pch_pic *priv = irq_data_get_irq_chip_data(d);
+-
+- writel(BIT(PIC_REG_BIT(d->hwirq)),
+- priv->base + PCH_PIC_CLR + idx * 4);
+-}
+-
+ static void pch_pic_mask_irq(struct irq_data *d)
+ {
+ struct pch_pic *priv = irq_data_get_irq_chip_data(d);
+@@ -85,6 +76,9 @@ static void pch_pic_unmask_irq(struct irq_data *d)
+ {
+ struct pch_pic *priv = irq_data_get_irq_chip_data(d);
+
++ writel(BIT(PIC_REG_BIT(d->hwirq)),
++ priv->base + PCH_PIC_CLR + PIC_REG_IDX(d->hwirq) * 4);
++
+ irq_chip_unmask_parent(d);
+ pch_pic_bitclr(priv, PCH_PIC_MASK, d->hwirq);
+ }
+@@ -124,7 +118,6 @@ static struct irq_chip pch_pic_irq_chip = {
+ .irq_mask = pch_pic_mask_irq,
+ .irq_unmask = pch_pic_unmask_irq,
+ .irq_ack = irq_chip_ack_parent,
+- .irq_eoi = pch_pic_eoi_irq,
+ .irq_set_affinity = irq_chip_set_affinity_parent,
+ .irq_set_type = pch_pic_set_type,
+ };
+@@ -135,22 +128,25 @@ static int pch_pic_alloc(struct irq_domain *domain, unsigned int virq,
+ int err;
+ unsigned int type;
+ unsigned long hwirq;
+- struct irq_fwspec fwspec;
++ struct irq_fwspec *fwspec = arg;
++ struct irq_fwspec parent_fwspec;
+ struct pch_pic *priv = domain->host_data;
+
+- irq_domain_translate_twocell(domain, arg, &hwirq, &type);
++ err = irq_domain_translate_twocell(domain, fwspec, &hwirq, &type);
++ if (err)
++ return err;
+
+- fwspec.fwnode = domain->parent->fwnode;
+- fwspec.param_count = 1;
+- fwspec.param[0] = hwirq + priv->ht_vec_base;
++ parent_fwspec.fwnode = domain->parent->fwnode;
++ parent_fwspec.param_count = 1;
++ parent_fwspec.param[0] = hwirq + priv->ht_vec_base;
+
+- err = irq_domain_alloc_irqs_parent(domain, virq, 1, &fwspec);
++ err = irq_domain_alloc_irqs_parent(domain, virq, 1, &parent_fwspec);
+ if (err)
+ return err;
+
+ irq_domain_set_info(domain, virq, hwirq,
+ &pch_pic_irq_chip, priv,
+- handle_fasteoi_ack_irq, NULL, NULL);
++ handle_level_irq, NULL, NULL);
+ irq_set_probe(virq);
+
+ return 0;
+diff --git a/drivers/irqchip/irq-mtk-sysirq.c b/drivers/irqchip/irq-mtk-sysirq.c
+index 73eae5966a40..6ff98b87e5c0 100644
+--- a/drivers/irqchip/irq-mtk-sysirq.c
++++ b/drivers/irqchip/irq-mtk-sysirq.c
+@@ -15,7 +15,7 @@
+ #include <linux/spinlock.h>
+
+ struct mtk_sysirq_chip_data {
+- spinlock_t lock;
++ raw_spinlock_t lock;
+ u32 nr_intpol_bases;
+ void __iomem **intpol_bases;
+ u32 *intpol_words;
+@@ -37,7 +37,7 @@ static int mtk_sysirq_set_type(struct irq_data *data, unsigned int type)
+ reg_index = chip_data->which_word[hwirq];
+ offset = hwirq & 0x1f;
+
+- spin_lock_irqsave(&chip_data->lock, flags);
++ raw_spin_lock_irqsave(&chip_data->lock, flags);
+ value = readl_relaxed(base + reg_index * 4);
+ if (type == IRQ_TYPE_LEVEL_LOW || type == IRQ_TYPE_EDGE_FALLING) {
+ if (type == IRQ_TYPE_LEVEL_LOW)
+@@ -53,7 +53,7 @@ static int mtk_sysirq_set_type(struct irq_data *data, unsigned int type)
+
+ data = data->parent_data;
+ ret = data->chip->irq_set_type(data, type);
+- spin_unlock_irqrestore(&chip_data->lock, flags);
++ raw_spin_unlock_irqrestore(&chip_data->lock, flags);
+ return ret;
+ }
+
+@@ -212,7 +212,7 @@ static int __init mtk_sysirq_of_init(struct device_node *node,
+ ret = -ENOMEM;
+ goto out_free_which_word;
+ }
+- spin_lock_init(&chip_data->lock);
++ raw_spin_lock_init(&chip_data->lock);
+
+ return 0;
+
+diff --git a/drivers/irqchip/irq-ti-sci-inta.c b/drivers/irqchip/irq-ti-sci-inta.c
+index 7e3ebf6ed2cd..be0a35d91796 100644
+--- a/drivers/irqchip/irq-ti-sci-inta.c
++++ b/drivers/irqchip/irq-ti-sci-inta.c
+@@ -572,7 +572,7 @@ static int ti_sci_inta_irq_domain_probe(struct platform_device *pdev)
+ res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ inta->base = devm_ioremap_resource(dev, res);
+ if (IS_ERR(inta->base))
+- return -ENODEV;
++ return PTR_ERR(inta->base);
+
+ domain = irq_domain_add_linear(dev_of_node(dev),
+ ti_sci_get_num_resources(inta->vint),
+diff --git a/drivers/leds/led-class.c b/drivers/leds/led-class.c
+index 3363a6551a70..cc3929f858b6 100644
+--- a/drivers/leds/led-class.c
++++ b/drivers/leds/led-class.c
+@@ -173,6 +173,7 @@ void led_classdev_suspend(struct led_classdev *led_cdev)
+ {
+ led_cdev->flags |= LED_SUSPENDED;
+ led_set_brightness_nopm(led_cdev, 0);
++ flush_work(&led_cdev->set_brightness_work);
+ }
+ EXPORT_SYMBOL_GPL(led_classdev_suspend);
+
+diff --git a/drivers/leds/leds-lm355x.c b/drivers/leds/leds-lm355x.c
+index 11ce05249751..b2eb2e1e9c04 100644
+--- a/drivers/leds/leds-lm355x.c
++++ b/drivers/leds/leds-lm355x.c
+@@ -164,18 +164,19 @@ static int lm355x_chip_init(struct lm355x_chip_data *chip)
+ /* input and output pins configuration */
+ switch (chip->type) {
+ case CHIP_LM3554:
+- reg_val = pdata->pin_tx2 | pdata->ntc_pin;
++ reg_val = (u32)pdata->pin_tx2 | (u32)pdata->ntc_pin;
+ ret = regmap_update_bits(chip->regmap, 0xE0, 0x28, reg_val);
+ if (ret < 0)
+ goto out;
+- reg_val = pdata->pass_mode;
++ reg_val = (u32)pdata->pass_mode;
+ ret = regmap_update_bits(chip->regmap, 0xA0, 0x04, reg_val);
+ if (ret < 0)
+ goto out;
+ break;
+
+ case CHIP_LM3556:
+- reg_val = pdata->pin_tx2 | pdata->ntc_pin | pdata->pass_mode;
++ reg_val = (u32)pdata->pin_tx2 | (u32)pdata->ntc_pin |
++ (u32)pdata->pass_mode;
+ ret = regmap_update_bits(chip->regmap, 0x0A, 0xC4, reg_val);
+ if (ret < 0)
+ goto out;
+diff --git a/drivers/macintosh/via-macii.c b/drivers/macintosh/via-macii.c
+index ac824d7b2dcf..6aa903529570 100644
+--- a/drivers/macintosh/via-macii.c
++++ b/drivers/macintosh/via-macii.c
+@@ -270,15 +270,12 @@ static int macii_autopoll(int devs)
+ unsigned long flags;
+ int err = 0;
+
++ local_irq_save(flags);
++
+ /* bit 1 == device 1, and so on. */
+ autopoll_devs = devs & 0xFFFE;
+
+- if (!autopoll_devs)
+- return 0;
+-
+- local_irq_save(flags);
+-
+- if (current_req == NULL) {
++ if (autopoll_devs && !current_req) {
+ /* Send a Talk Reg 0. The controller will repeatedly transmit
+ * this as long as it is idle.
+ */
+diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
+index 2014016f9a60..445bb84ee27f 100644
+--- a/drivers/md/bcache/super.c
++++ b/drivers/md/bcache/super.c
+@@ -2100,7 +2100,14 @@ found:
+ sysfs_create_link(&c->kobj, &ca->kobj, buf))
+ goto err;
+
+- if (ca->sb.seq > c->sb.seq) {
++ /*
++ * A special case is both ca->sb.seq and c->sb.seq are 0,
++ * such condition happens on a new created cache device whose
++ * super block is never flushed yet. In this case c->sb.version
++ * and other members should be updated too, otherwise we will
++ * have a mistaken super block version in cache set.
++ */
++ if (ca->sb.seq > c->sb.seq || c->sb.seq == 0) {
+ c->sb.version = ca->sb.version;
+ memcpy(c->sb.set_uuid, ca->sb.set_uuid, 16);
+ c->sb.flags = ca->sb.flags;
+diff --git a/drivers/md/md-cluster.c b/drivers/md/md-cluster.c
+index 813a99ffa86f..73fd50e77975 100644
+--- a/drivers/md/md-cluster.c
++++ b/drivers/md/md-cluster.c
+@@ -1518,6 +1518,7 @@ static void unlock_all_bitmaps(struct mddev *mddev)
+ }
+ }
+ kfree(cinfo->other_bitmap_lockres);
++ cinfo->other_bitmap_lockres = NULL;
+ }
+ }
+
+diff --git a/drivers/md/md.c b/drivers/md/md.c
+index f567f536b529..90756450b958 100644
+--- a/drivers/md/md.c
++++ b/drivers/md/md.c
+@@ -470,17 +470,18 @@ static blk_qc_t md_make_request(struct request_queue *q, struct bio *bio)
+ struct mddev *mddev = bio->bi_disk->private_data;
+ unsigned int sectors;
+
+- if (unlikely(test_bit(MD_BROKEN, &mddev->flags)) && (rw == WRITE)) {
++ if (mddev == NULL || mddev->pers == NULL) {
+ bio_io_error(bio);
+ return BLK_QC_T_NONE;
+ }
+
+- blk_queue_split(q, &bio);
+-
+- if (mddev == NULL || mddev->pers == NULL) {
++ if (unlikely(test_bit(MD_BROKEN, &mddev->flags)) && (rw == WRITE)) {
+ bio_io_error(bio);
+ return BLK_QC_T_NONE;
+ }
++
++ blk_queue_split(q, &bio);
++
+ if (mddev->ro == 1 && unlikely(rw == WRITE)) {
+ if (bio_sectors(bio) != 0)
+ bio->bi_status = BLK_STS_IOERR;
+diff --git a/drivers/media/cec/platform/cros-ec/cros-ec-cec.c b/drivers/media/cec/platform/cros-ec/cros-ec-cec.c
+index 0e7e2772f08f..2d95e16cd248 100644
+--- a/drivers/media/cec/platform/cros-ec/cros-ec-cec.c
++++ b/drivers/media/cec/platform/cros-ec/cros-ec-cec.c
+@@ -277,11 +277,7 @@ static int cros_ec_cec_probe(struct platform_device *pdev)
+ platform_set_drvdata(pdev, cros_ec_cec);
+ cros_ec_cec->cros_ec = cros_ec;
+
+- ret = device_init_wakeup(&pdev->dev, 1);
+- if (ret) {
+- dev_err(&pdev->dev, "failed to initialize wakeup\n");
+- return ret;
+- }
++ device_init_wakeup(&pdev->dev, 1);
+
+ cros_ec_cec->adap = cec_allocate_adapter(&cros_ec_cec_ops, cros_ec_cec,
+ DRV_NAME,
+diff --git a/drivers/media/firewire/firedtv-fw.c b/drivers/media/firewire/firedtv-fw.c
+index 97144734eb05..3f1ca40b9b98 100644
+--- a/drivers/media/firewire/firedtv-fw.c
++++ b/drivers/media/firewire/firedtv-fw.c
+@@ -272,6 +272,8 @@ static int node_probe(struct fw_unit *unit, const struct ieee1394_device_id *id)
+
+ name_len = fw_csr_string(unit->directory, CSR_MODEL,
+ name, sizeof(name));
++ if (name_len < 0)
++ return name_len;
+ for (i = ARRAY_SIZE(model_names); --i; )
+ if (strlen(model_names[i]) <= name_len &&
+ strncmp(name, model_names[i], name_len) == 0)
+diff --git a/drivers/media/i2c/tvp5150.c b/drivers/media/i2c/tvp5150.c
+index eb39cf5ea089..9df575238952 100644
+--- a/drivers/media/i2c/tvp5150.c
++++ b/drivers/media/i2c/tvp5150.c
+@@ -1664,8 +1664,10 @@ static int tvp5150_registered(struct v4l2_subdev *sd)
+ return 0;
+
+ err:
+- for (i = 0; i < decoder->connectors_num; i++)
++ for (i = 0; i < decoder->connectors_num; i++) {
+ media_device_unregister_entity(&decoder->connectors[i].ent);
++ media_entity_cleanup(&decoder->connectors[i].ent);
++ }
+ return ret;
+ #endif
+
+@@ -2248,8 +2250,10 @@ static int tvp5150_remove(struct i2c_client *c)
+
+ for (i = 0; i < decoder->connectors_num; i++)
+ v4l2_fwnode_connector_free(&decoder->connectors[i].base);
+- for (i = 0; i < decoder->connectors_num; i++)
++ for (i = 0; i < decoder->connectors_num; i++) {
+ media_device_unregister_entity(&decoder->connectors[i].ent);
++ media_entity_cleanup(&decoder->connectors[i].ent);
++ }
+ v4l2_async_unregister_subdev(sd);
+ v4l2_ctrl_handler_free(&decoder->hdl);
+ pm_runtime_disable(&c->dev);
+diff --git a/drivers/media/mc/mc-request.c b/drivers/media/mc/mc-request.c
+index e3fca436c75b..c0782fd96c59 100644
+--- a/drivers/media/mc/mc-request.c
++++ b/drivers/media/mc/mc-request.c
+@@ -296,9 +296,18 @@ int media_request_alloc(struct media_device *mdev, int *alloc_fd)
+ if (WARN_ON(!mdev->ops->req_alloc ^ !mdev->ops->req_free))
+ return -ENOMEM;
+
++ if (mdev->ops->req_alloc)
++ req = mdev->ops->req_alloc(mdev);
++ else
++ req = kzalloc(sizeof(*req), GFP_KERNEL);
++ if (!req)
++ return -ENOMEM;
++
+ fd = get_unused_fd_flags(O_CLOEXEC);
+- if (fd < 0)
+- return fd;
++ if (fd < 0) {
++ ret = fd;
++ goto err_free_req;
++ }
+
+ filp = anon_inode_getfile("request", &request_fops, NULL, O_CLOEXEC);
+ if (IS_ERR(filp)) {
+@@ -306,15 +315,6 @@ int media_request_alloc(struct media_device *mdev, int *alloc_fd)
+ goto err_put_fd;
+ }
+
+- if (mdev->ops->req_alloc)
+- req = mdev->ops->req_alloc(mdev);
+- else
+- req = kzalloc(sizeof(*req), GFP_KERNEL);
+- if (!req) {
+- ret = -ENOMEM;
+- goto err_fput;
+- }
+-
+ filp->private_data = req;
+ req->mdev = mdev;
+ req->state = MEDIA_REQUEST_STATE_IDLE;
+@@ -336,12 +336,15 @@ int media_request_alloc(struct media_device *mdev, int *alloc_fd)
+
+ return 0;
+
+-err_fput:
+- fput(filp);
+-
+ err_put_fd:
+ put_unused_fd(fd);
+
++err_free_req:
++ if (mdev->ops->req_free)
++ mdev->ops->req_free(req);
++ else
++ kfree(req);
++
+ return ret;
+ }
+
+diff --git a/drivers/media/platform/exynos4-is/media-dev.c b/drivers/media/platform/exynos4-is/media-dev.c
+index 9aaf3b8060d5..9c31d950cddf 100644
+--- a/drivers/media/platform/exynos4-is/media-dev.c
++++ b/drivers/media/platform/exynos4-is/media-dev.c
+@@ -1270,6 +1270,9 @@ static int fimc_md_get_pinctrl(struct fimc_md *fmd)
+
+ pctl->state_idle = pinctrl_lookup_state(pctl->pinctrl,
+ PINCTRL_STATE_IDLE);
++ if (IS_ERR(pctl->state_idle))
++ return PTR_ERR(pctl->state_idle);
++
+ return 0;
+ }
+
+diff --git a/drivers/media/platform/marvell-ccic/mcam-core.c b/drivers/media/platform/marvell-ccic/mcam-core.c
+index 09775b6624c6..326e79b8531c 100644
+--- a/drivers/media/platform/marvell-ccic/mcam-core.c
++++ b/drivers/media/platform/marvell-ccic/mcam-core.c
+@@ -1940,6 +1940,7 @@ int mccic_register(struct mcam_camera *cam)
+ out:
+ v4l2_async_notifier_unregister(&cam->notifier);
+ v4l2_device_unregister(&cam->v4l2_dev);
++ v4l2_async_notifier_cleanup(&cam->notifier);
+ return ret;
+ }
+ EXPORT_SYMBOL_GPL(mccic_register);
+@@ -1961,6 +1962,7 @@ void mccic_shutdown(struct mcam_camera *cam)
+ v4l2_ctrl_handler_free(&cam->ctrl_handler);
+ v4l2_async_notifier_unregister(&cam->notifier);
+ v4l2_device_unregister(&cam->v4l2_dev);
++ v4l2_async_notifier_cleanup(&cam->notifier);
+ }
+ EXPORT_SYMBOL_GPL(mccic_shutdown);
+
+diff --git a/drivers/media/platform/mtk-mdp/mtk_mdp_comp.c b/drivers/media/platform/mtk-mdp/mtk_mdp_comp.c
+index 58abfbdfb82d..90b6d939f3ad 100644
+--- a/drivers/media/platform/mtk-mdp/mtk_mdp_comp.c
++++ b/drivers/media/platform/mtk-mdp/mtk_mdp_comp.c
+@@ -96,6 +96,7 @@ int mtk_mdp_comp_init(struct device *dev, struct device_node *node,
+ {
+ struct device_node *larb_node;
+ struct platform_device *larb_pdev;
++ int ret;
+ int i;
+
+ if (comp_id < 0 || comp_id >= MTK_MDP_COMP_ID_MAX) {
+@@ -113,8 +114,8 @@ int mtk_mdp_comp_init(struct device *dev, struct device_node *node,
+ if (IS_ERR(comp->clk[i])) {
+ if (PTR_ERR(comp->clk[i]) != -EPROBE_DEFER)
+ dev_err(dev, "Failed to get clock\n");
+-
+- return PTR_ERR(comp->clk[i]);
++ ret = PTR_ERR(comp->clk[i]);
++ goto put_dev;
+ }
+
+ /* Only RDMA needs two clocks */
+@@ -133,20 +134,27 @@ int mtk_mdp_comp_init(struct device *dev, struct device_node *node,
+ if (!larb_node) {
+ dev_err(dev,
+ "Missing mediadek,larb phandle in %pOF node\n", node);
+- return -EINVAL;
++ ret = -EINVAL;
++ goto put_dev;
+ }
+
+ larb_pdev = of_find_device_by_node(larb_node);
+ if (!larb_pdev) {
+ dev_warn(dev, "Waiting for larb device %pOF\n", larb_node);
+ of_node_put(larb_node);
+- return -EPROBE_DEFER;
++ ret = -EPROBE_DEFER;
++ goto put_dev;
+ }
+ of_node_put(larb_node);
+
+ comp->larb_dev = &larb_pdev->dev;
+
+ return 0;
++
++put_dev:
++ of_node_put(comp->dev_node);
++
++ return ret;
+ }
+
+ void mtk_mdp_comp_deinit(struct device *dev, struct mtk_mdp_comp *comp)
+diff --git a/drivers/media/platform/omap3isp/isppreview.c b/drivers/media/platform/omap3isp/isppreview.c
+index 4dbdf3180d10..607b7685c982 100644
+--- a/drivers/media/platform/omap3isp/isppreview.c
++++ b/drivers/media/platform/omap3isp/isppreview.c
+@@ -2287,7 +2287,7 @@ static int preview_init_entities(struct isp_prev_device *prev)
+ me->ops = &preview_media_ops;
+ ret = media_entity_pads_init(me, PREV_PADS_NUM, pads);
+ if (ret < 0)
+- return ret;
++ goto error_handler_free;
+
+ preview_init_formats(sd, NULL);
+
+@@ -2320,6 +2320,8 @@ error_video_out:
+ omap3isp_video_cleanup(&prev->video_in);
+ error_video_in:
+ media_entity_cleanup(&prev->subdev.entity);
++error_handler_free:
++ v4l2_ctrl_handler_free(&prev->ctrls);
+ return ret;
+ }
+
+diff --git a/drivers/media/platform/s5p-g2d/g2d.c b/drivers/media/platform/s5p-g2d/g2d.c
+index 6932fd47071b..15bcb7f6e113 100644
+--- a/drivers/media/platform/s5p-g2d/g2d.c
++++ b/drivers/media/platform/s5p-g2d/g2d.c
+@@ -695,21 +695,13 @@ static int g2d_probe(struct platform_device *pdev)
+ vfd->lock = &dev->mutex;
+ vfd->v4l2_dev = &dev->v4l2_dev;
+ vfd->device_caps = V4L2_CAP_VIDEO_M2M | V4L2_CAP_STREAMING;
+- ret = video_register_device(vfd, VFL_TYPE_VIDEO, 0);
+- if (ret) {
+- v4l2_err(&dev->v4l2_dev, "Failed to register video device\n");
+- goto rel_vdev;
+- }
+- video_set_drvdata(vfd, dev);
+- dev->vfd = vfd;
+- v4l2_info(&dev->v4l2_dev, "device registered as /dev/video%d\n",
+- vfd->num);
++
+ platform_set_drvdata(pdev, dev);
+ dev->m2m_dev = v4l2_m2m_init(&g2d_m2m_ops);
+ if (IS_ERR(dev->m2m_dev)) {
+ v4l2_err(&dev->v4l2_dev, "Failed to init mem2mem device\n");
+ ret = PTR_ERR(dev->m2m_dev);
+- goto unreg_video_dev;
++ goto rel_vdev;
+ }
+
+ def_frame.stride = (def_frame.width * def_frame.fmt->depth) >> 3;
+@@ -717,14 +709,24 @@ static int g2d_probe(struct platform_device *pdev)
+ of_id = of_match_node(exynos_g2d_match, pdev->dev.of_node);
+ if (!of_id) {
+ ret = -ENODEV;
+- goto unreg_video_dev;
++ goto free_m2m;
+ }
+ dev->variant = (struct g2d_variant *)of_id->data;
+
++ ret = video_register_device(vfd, VFL_TYPE_VIDEO, 0);
++ if (ret) {
++ v4l2_err(&dev->v4l2_dev, "Failed to register video device\n");
++ goto free_m2m;
++ }
++ video_set_drvdata(vfd, dev);
++ dev->vfd = vfd;
++ v4l2_info(&dev->v4l2_dev, "device registered as /dev/video%d\n",
++ vfd->num);
++
+ return 0;
+
+-unreg_video_dev:
+- video_unregister_device(dev->vfd);
++free_m2m:
++ v4l2_m2m_release(dev->m2m_dev);
+ rel_vdev:
+ video_device_release(vfd);
+ unreg_v4l2_dev:
+diff --git a/drivers/media/usb/dvb-usb/Kconfig b/drivers/media/usb/dvb-usb/Kconfig
+index 15d29c91662f..25ba03edcb5c 100644
+--- a/drivers/media/usb/dvb-usb/Kconfig
++++ b/drivers/media/usb/dvb-usb/Kconfig
+@@ -151,6 +151,7 @@ config DVB_USB_CXUSB
+ config DVB_USB_CXUSB_ANALOG
+ bool "Analog support for the Conexant USB2.0 hybrid reference design"
+ depends on DVB_USB_CXUSB && VIDEO_V4L2
++ depends on VIDEO_V4L2=y || VIDEO_V4L2=DVB_USB_CXUSB
+ select VIDEO_CX25840
+ select VIDEOBUF2_VMALLOC
+ help
+diff --git a/drivers/media/usb/go7007/go7007-usb.c b/drivers/media/usb/go7007/go7007-usb.c
+index f889c9d740cd..dbf0455d5d50 100644
+--- a/drivers/media/usb/go7007/go7007-usb.c
++++ b/drivers/media/usb/go7007/go7007-usb.c
+@@ -1132,6 +1132,10 @@ static int go7007_usb_probe(struct usb_interface *intf,
+ go->hpi_ops = &go7007_usb_onboard_hpi_ops;
+ go->hpi_context = usb;
+
++ ep = usb->usbdev->ep_in[4];
++ if (!ep)
++ return -ENODEV;
++
+ /* Allocate the URB and buffer for receiving incoming interrupts */
+ usb->intr_urb = usb_alloc_urb(0, GFP_KERNEL);
+ if (usb->intr_urb == NULL)
+@@ -1141,7 +1145,6 @@ static int go7007_usb_probe(struct usb_interface *intf,
+ if (usb->intr_urb->transfer_buffer == NULL)
+ goto allocfail;
+
+- ep = usb->usbdev->ep_in[4];
+ if (usb_endpoint_type(&ep->desc) == USB_ENDPOINT_XFER_BULK)
+ usb_fill_bulk_urb(usb->intr_urb, usb->usbdev,
+ usb_rcvbulkpipe(usb->usbdev, 4),
+@@ -1263,9 +1266,13 @@ static int go7007_usb_probe(struct usb_interface *intf,
+
+ /* Allocate the URBs and buffers for receiving the video stream */
+ if (board->flags & GO7007_USB_EZUSB) {
++ if (!usb->usbdev->ep_in[6])
++ goto allocfail;
+ v_urb_len = 1024;
+ video_pipe = usb_rcvbulkpipe(usb->usbdev, 6);
+ } else {
++ if (!usb->usbdev->ep_in[1])
++ goto allocfail;
+ v_urb_len = 512;
+ video_pipe = usb_rcvbulkpipe(usb->usbdev, 1);
+ }
+@@ -1285,6 +1292,8 @@ static int go7007_usb_probe(struct usb_interface *intf,
+ /* Allocate the URBs and buffers for receiving the audio stream */
+ if ((board->flags & GO7007_USB_EZUSB) &&
+ (board->main_info.flags & GO7007_BOARD_HAS_AUDIO)) {
++ if (!usb->usbdev->ep_in[8])
++ goto allocfail;
+ for (i = 0; i < 8; ++i) {
+ usb->audio_urbs[i] = usb_alloc_urb(0, GFP_KERNEL);
+ if (usb->audio_urbs[i] == NULL)
+diff --git a/drivers/memory/samsung/exynos5422-dmc.c b/drivers/memory/samsung/exynos5422-dmc.c
+index 25196d6268e2..85b31d3de57a 100644
+--- a/drivers/memory/samsung/exynos5422-dmc.c
++++ b/drivers/memory/samsung/exynos5422-dmc.c
+@@ -270,12 +270,14 @@ static int find_target_freq_idx(struct exynos5_dmc *dmc,
+ * This function switches between these banks according to the
+ * currently used clock source.
+ */
+-static void exynos5_switch_timing_regs(struct exynos5_dmc *dmc, bool set)
++static int exynos5_switch_timing_regs(struct exynos5_dmc *dmc, bool set)
+ {
+ unsigned int reg;
+ int ret;
+
+ ret = regmap_read(dmc->clk_regmap, CDREX_LPDDR3PHY_CON3, ®);
++ if (ret)
++ return ret;
+
+ if (set)
+ reg |= EXYNOS5_TIMING_SET_SWI;
+@@ -283,6 +285,8 @@ static void exynos5_switch_timing_regs(struct exynos5_dmc *dmc, bool set)
+ reg &= ~EXYNOS5_TIMING_SET_SWI;
+
+ regmap_write(dmc->clk_regmap, CDREX_LPDDR3PHY_CON3, reg);
++
++ return 0;
+ }
+
+ /**
+@@ -516,7 +520,7 @@ exynos5_dmc_switch_to_bypass_configuration(struct exynos5_dmc *dmc,
+ /*
+ * Delays are long enough, so use them for the new coming clock.
+ */
+- exynos5_switch_timing_regs(dmc, USE_MX_MSPLL_TIMINGS);
++ ret = exynos5_switch_timing_regs(dmc, USE_MX_MSPLL_TIMINGS);
+
+ return ret;
+ }
+@@ -577,7 +581,9 @@ exynos5_dmc_change_freq_and_volt(struct exynos5_dmc *dmc,
+
+ clk_set_rate(dmc->fout_bpll, target_rate);
+
+- exynos5_switch_timing_regs(dmc, USE_BPLL_TIMINGS);
++ ret = exynos5_switch_timing_regs(dmc, USE_BPLL_TIMINGS);
++ if (ret)
++ goto disable_clocks;
+
+ ret = clk_set_parent(dmc->mout_mclk_cdrex, dmc->mout_bpll);
+ if (ret)
+diff --git a/drivers/memory/tegra/tegra186-emc.c b/drivers/memory/tegra/tegra186-emc.c
+index 97f26bc77ad4..c900948881d5 100644
+--- a/drivers/memory/tegra/tegra186-emc.c
++++ b/drivers/memory/tegra/tegra186-emc.c
+@@ -185,7 +185,7 @@ static int tegra186_emc_probe(struct platform_device *pdev)
+ if (IS_ERR(emc->clk)) {
+ err = PTR_ERR(emc->clk);
+ dev_err(&pdev->dev, "failed to get EMC clock: %d\n", err);
+- return err;
++ goto put_bpmp;
+ }
+
+ platform_set_drvdata(pdev, emc);
+@@ -201,7 +201,7 @@ static int tegra186_emc_probe(struct platform_device *pdev)
+ err = tegra_bpmp_transfer(emc->bpmp, &msg);
+ if (err < 0) {
+ dev_err(&pdev->dev, "failed to EMC DVFS pairs: %d\n", err);
+- return err;
++ goto put_bpmp;
+ }
+
+ emc->debugfs.min_rate = ULONG_MAX;
+@@ -211,8 +211,10 @@ static int tegra186_emc_probe(struct platform_device *pdev)
+
+ emc->dvfs = devm_kmalloc_array(&pdev->dev, emc->num_dvfs,
+ sizeof(*emc->dvfs), GFP_KERNEL);
+- if (!emc->dvfs)
+- return -ENOMEM;
++ if (!emc->dvfs) {
++ err = -ENOMEM;
++ goto put_bpmp;
++ }
+
+ dev_dbg(&pdev->dev, "%u DVFS pairs:\n", emc->num_dvfs);
+
+@@ -237,7 +239,7 @@ static int tegra186_emc_probe(struct platform_device *pdev)
+ "failed to set rate range [%lu-%lu] for %pC\n",
+ emc->debugfs.min_rate, emc->debugfs.max_rate,
+ emc->clk);
+- return err;
++ goto put_bpmp;
+ }
+
+ emc->debugfs.root = debugfs_create_dir("emc", NULL);
+@@ -254,6 +256,10 @@ static int tegra186_emc_probe(struct platform_device *pdev)
+ emc, &tegra186_emc_debug_max_rate_fops);
+
+ return 0;
++
++put_bpmp:
++ tegra_bpmp_put(emc->bpmp);
++ return err;
+ }
+
+ static int tegra186_emc_remove(struct platform_device *pdev)
+diff --git a/drivers/mfd/ioc3.c b/drivers/mfd/ioc3.c
+index 74cee7cb0afc..d939ccc46509 100644
+--- a/drivers/mfd/ioc3.c
++++ b/drivers/mfd/ioc3.c
+@@ -616,7 +616,10 @@ static int ioc3_mfd_probe(struct pci_dev *pdev,
+ /* Remove all already added MFD devices */
+ mfd_remove_devices(&ipd->pdev->dev);
+ if (ipd->domain) {
++ struct fwnode_handle *fn = ipd->domain->fwnode;
++
+ irq_domain_remove(ipd->domain);
++ irq_domain_free_fwnode(fn);
+ free_irq(ipd->domain_irq, (void *)ipd);
+ }
+ pci_iounmap(pdev, regs);
+@@ -643,7 +646,10 @@ static void ioc3_mfd_remove(struct pci_dev *pdev)
+ /* Release resources */
+ mfd_remove_devices(&ipd->pdev->dev);
+ if (ipd->domain) {
++ struct fwnode_handle *fn = ipd->domain->fwnode;
++
+ irq_domain_remove(ipd->domain);
++ irq_domain_free_fwnode(fn);
+ free_irq(ipd->domain_irq, (void *)ipd);
+ }
+ pci_iounmap(pdev, ipd->regs);
+diff --git a/drivers/misc/cxl/sysfs.c b/drivers/misc/cxl/sysfs.c
+index f0263d1a1fdf..d97a243ad30c 100644
+--- a/drivers/misc/cxl/sysfs.c
++++ b/drivers/misc/cxl/sysfs.c
+@@ -624,7 +624,7 @@ static struct afu_config_record *cxl_sysfs_afu_new_cr(struct cxl_afu *afu, int c
+ rc = kobject_init_and_add(&cr->kobj, &afu_config_record_type,
+ &afu->dev.kobj, "cr%i", cr->cr);
+ if (rc)
+- goto err;
++ goto err1;
+
+ rc = sysfs_create_bin_file(&cr->kobj, &cr->config_attr);
+ if (rc)
+diff --git a/drivers/misc/lkdtm/bugs.c b/drivers/misc/lkdtm/bugs.c
+index 736675f0a246..10338800f6be 100644
+--- a/drivers/misc/lkdtm/bugs.c
++++ b/drivers/misc/lkdtm/bugs.c
+@@ -13,7 +13,7 @@
+ #include <linux/uaccess.h>
+ #include <linux/slab.h>
+
+-#ifdef CONFIG_X86_32
++#if IS_ENABLED(CONFIG_X86_32) && !IS_ENABLED(CONFIG_UML)
+ #include <asm/desc.h>
+ #endif
+
+@@ -118,9 +118,8 @@ noinline void lkdtm_CORRUPT_STACK(void)
+ /* Use default char array length that triggers stack protection. */
+ char data[8] __aligned(sizeof(void *));
+
+- __lkdtm_CORRUPT_STACK(&data);
+-
+- pr_info("Corrupted stack containing char array ...\n");
++ pr_info("Corrupting stack containing char array ...\n");
++ __lkdtm_CORRUPT_STACK((void *)&data);
+ }
+
+ /* Same as above but will only get a canary with -fstack-protector-strong */
+@@ -131,9 +130,8 @@ noinline void lkdtm_CORRUPT_STACK_STRONG(void)
+ unsigned long *ptr;
+ } data __aligned(sizeof(void *));
+
+- __lkdtm_CORRUPT_STACK(&data);
+-
+- pr_info("Corrupted stack containing union ...\n");
++ pr_info("Corrupting stack containing union ...\n");
++ __lkdtm_CORRUPT_STACK((void *)&data);
+ }
+
+ void lkdtm_UNALIGNED_LOAD_STORE_WRITE(void)
+@@ -248,6 +246,7 @@ void lkdtm_ARRAY_BOUNDS(void)
+
+ kfree(not_checked);
+ kfree(checked);
++ pr_err("FAIL: survived array bounds overflow!\n");
+ }
+
+ void lkdtm_CORRUPT_LIST_ADD(void)
+@@ -419,7 +418,7 @@ void lkdtm_UNSET_SMEP(void)
+
+ void lkdtm_DOUBLE_FAULT(void)
+ {
+-#ifdef CONFIG_X86_32
++#if IS_ENABLED(CONFIG_X86_32) && !IS_ENABLED(CONFIG_UML)
+ /*
+ * Trigger #DF by setting the stack limit to zero. This clobbers
+ * a GDT TLS slot, which is okay because the current task will die
+@@ -454,38 +453,42 @@ void lkdtm_DOUBLE_FAULT(void)
+ #endif
+ }
+
+-#ifdef CONFIG_ARM64_PTR_AUTH
++#ifdef CONFIG_ARM64
+ static noinline void change_pac_parameters(void)
+ {
+- /* Reset the keys of current task */
+- ptrauth_thread_init_kernel(current);
+- ptrauth_thread_switch_kernel(current);
++ if (IS_ENABLED(CONFIG_ARM64_PTR_AUTH)) {
++ /* Reset the keys of current task */
++ ptrauth_thread_init_kernel(current);
++ ptrauth_thread_switch_kernel(current);
++ }
+ }
++#endif
+
+-#define CORRUPT_PAC_ITERATE 10
+ noinline void lkdtm_CORRUPT_PAC(void)
+ {
++#ifdef CONFIG_ARM64
++#define CORRUPT_PAC_ITERATE 10
+ int i;
+
++ if (!IS_ENABLED(CONFIG_ARM64_PTR_AUTH))
++ pr_err("FAIL: kernel not built with CONFIG_ARM64_PTR_AUTH\n");
++
+ if (!system_supports_address_auth()) {
+- pr_err("FAIL: arm64 pointer authentication feature not present\n");
++ pr_err("FAIL: CPU lacks pointer authentication feature\n");
+ return;
+ }
+
+- pr_info("Change the PAC parameters to force function return failure\n");
++ pr_info("changing PAC parameters to force function return failure...\n");
+ /*
+- * Pac is a hash value computed from input keys, return address and
++ * PAC is a hash value computed from input keys, return address and
+ * stack pointer. As pac has fewer bits so there is a chance of
+ * collision, so iterate few times to reduce the collision probability.
+ */
+ for (i = 0; i < CORRUPT_PAC_ITERATE; i++)
+ change_pac_parameters();
+
+- pr_err("FAIL: %s test failed. Kernel may be unstable from here\n", __func__);
+-}
+-#else /* !CONFIG_ARM64_PTR_AUTH */
+-noinline void lkdtm_CORRUPT_PAC(void)
+-{
+- pr_err("FAIL: arm64 pointer authentication config disabled\n");
+-}
++ pr_err("FAIL: survived PAC changes! Kernel may be unstable from here\n");
++#else
++ pr_err("XFAIL: this test is arm64-only\n");
+ #endif
++}
+diff --git a/drivers/misc/lkdtm/lkdtm.h b/drivers/misc/lkdtm/lkdtm.h
+index 601a2156a0d4..8878538b2c13 100644
+--- a/drivers/misc/lkdtm/lkdtm.h
++++ b/drivers/misc/lkdtm/lkdtm.h
+@@ -31,9 +31,7 @@ void lkdtm_CORRUPT_USER_DS(void);
+ void lkdtm_STACK_GUARD_PAGE_LEADING(void);
+ void lkdtm_STACK_GUARD_PAGE_TRAILING(void);
+ void lkdtm_UNSET_SMEP(void);
+-#ifdef CONFIG_X86_32
+ void lkdtm_DOUBLE_FAULT(void);
+-#endif
+ void lkdtm_CORRUPT_PAC(void);
+
+ /* lkdtm_heap.c */
+diff --git a/drivers/misc/lkdtm/perms.c b/drivers/misc/lkdtm/perms.c
+index 62f76d506f04..2dede2ef658f 100644
+--- a/drivers/misc/lkdtm/perms.c
++++ b/drivers/misc/lkdtm/perms.c
+@@ -57,6 +57,7 @@ static noinline void execute_location(void *dst, bool write)
+ }
+ pr_info("attempting bad execution at %px\n", func);
+ func();
++ pr_err("FAIL: func returned\n");
+ }
+
+ static void execute_user_location(void *dst)
+@@ -75,20 +76,22 @@ static void execute_user_location(void *dst)
+ return;
+ pr_info("attempting bad execution at %px\n", func);
+ func();
++ pr_err("FAIL: func returned\n");
+ }
+
+ void lkdtm_WRITE_RO(void)
+ {
+- /* Explicitly cast away "const" for the test. */
+- unsigned long *ptr = (unsigned long *)&rodata;
++ /* Explicitly cast away "const" for the test and make volatile. */
++ volatile unsigned long *ptr = (unsigned long *)&rodata;
+
+ pr_info("attempting bad rodata write at %px\n", ptr);
+ *ptr ^= 0xabcd1234;
++ pr_err("FAIL: survived bad write\n");
+ }
+
+ void lkdtm_WRITE_RO_AFTER_INIT(void)
+ {
+- unsigned long *ptr = &ro_after_init;
++ volatile unsigned long *ptr = &ro_after_init;
+
+ /*
+ * Verify we were written to during init. Since an Oops
+@@ -102,19 +105,21 @@ void lkdtm_WRITE_RO_AFTER_INIT(void)
+
+ pr_info("attempting bad ro_after_init write at %px\n", ptr);
+ *ptr ^= 0xabcd1234;
++ pr_err("FAIL: survived bad write\n");
+ }
+
+ void lkdtm_WRITE_KERN(void)
+ {
+ size_t size;
+- unsigned char *ptr;
++ volatile unsigned char *ptr;
+
+ size = (unsigned long)do_overwritten - (unsigned long)do_nothing;
+ ptr = (unsigned char *)do_overwritten;
+
+ pr_info("attempting bad %zu byte write at %px\n", size, ptr);
+- memcpy(ptr, (unsigned char *)do_nothing, size);
++ memcpy((void *)ptr, (unsigned char *)do_nothing, size);
+ flush_icache_range((unsigned long)ptr, (unsigned long)(ptr + size));
++ pr_err("FAIL: survived bad write\n");
+
+ do_overwritten();
+ }
+@@ -193,9 +198,11 @@ void lkdtm_ACCESS_USERSPACE(void)
+ pr_info("attempting bad read at %px\n", ptr);
+ tmp = *ptr;
+ tmp += 0xc0dec0de;
++ pr_err("FAIL: survived bad read\n");
+
+ pr_info("attempting bad write at %px\n", ptr);
+ *ptr = tmp;
++ pr_err("FAIL: survived bad write\n");
+
+ vm_munmap(user_addr, PAGE_SIZE);
+ }
+@@ -203,19 +210,20 @@ void lkdtm_ACCESS_USERSPACE(void)
+ void lkdtm_ACCESS_NULL(void)
+ {
+ unsigned long tmp;
+- unsigned long *ptr = (unsigned long *)NULL;
++ volatile unsigned long *ptr = (unsigned long *)NULL;
+
+ pr_info("attempting bad read at %px\n", ptr);
+ tmp = *ptr;
+ tmp += 0xc0dec0de;
++ pr_err("FAIL: survived bad read\n");
+
+ pr_info("attempting bad write at %px\n", ptr);
+ *ptr = tmp;
++ pr_err("FAIL: survived bad write\n");
+ }
+
+ void __init lkdtm_perms_init(void)
+ {
+ /* Make sure we can write to __ro_after_init values during __init */
+ ro_after_init |= 0xAA;
+-
+ }
+diff --git a/drivers/misc/lkdtm/usercopy.c b/drivers/misc/lkdtm/usercopy.c
+index e172719dd86d..b833367a45d0 100644
+--- a/drivers/misc/lkdtm/usercopy.c
++++ b/drivers/misc/lkdtm/usercopy.c
+@@ -304,19 +304,22 @@ void lkdtm_USERCOPY_KERNEL(void)
+ return;
+ }
+
+- pr_info("attempting good copy_to_user from kernel rodata\n");
++ pr_info("attempting good copy_to_user from kernel rodata: %px\n",
++ test_text);
+ if (copy_to_user((void __user *)user_addr, test_text,
+ unconst + sizeof(test_text))) {
+ pr_warn("copy_to_user failed unexpectedly?!\n");
+ goto free_user;
+ }
+
+- pr_info("attempting bad copy_to_user from kernel text\n");
++ pr_info("attempting bad copy_to_user from kernel text: %px\n",
++ vm_mmap);
+ if (copy_to_user((void __user *)user_addr, vm_mmap,
+ unconst + PAGE_SIZE)) {
+ pr_warn("copy_to_user failed, but lacked Oops\n");
+ goto free_user;
+ }
++ pr_err("FAIL: survived bad copy_to_user()\n");
+
+ free_user:
+ vm_munmap(user_addr, PAGE_SIZE);
+diff --git a/drivers/mmc/host/sdhci-cadence.c b/drivers/mmc/host/sdhci-cadence.c
+index 4a6c9ba82538..4d9f7681817c 100644
+--- a/drivers/mmc/host/sdhci-cadence.c
++++ b/drivers/mmc/host/sdhci-cadence.c
+@@ -202,57 +202,6 @@ static u32 sdhci_cdns_get_emmc_mode(struct sdhci_cdns_priv *priv)
+ return FIELD_GET(SDHCI_CDNS_HRS06_MODE, tmp);
+ }
+
+-static void sdhci_cdns_set_uhs_signaling(struct sdhci_host *host,
+- unsigned int timing)
+-{
+- struct sdhci_cdns_priv *priv = sdhci_cdns_priv(host);
+- u32 mode;
+-
+- switch (timing) {
+- case MMC_TIMING_MMC_HS:
+- mode = SDHCI_CDNS_HRS06_MODE_MMC_SDR;
+- break;
+- case MMC_TIMING_MMC_DDR52:
+- mode = SDHCI_CDNS_HRS06_MODE_MMC_DDR;
+- break;
+- case MMC_TIMING_MMC_HS200:
+- mode = SDHCI_CDNS_HRS06_MODE_MMC_HS200;
+- break;
+- case MMC_TIMING_MMC_HS400:
+- if (priv->enhanced_strobe)
+- mode = SDHCI_CDNS_HRS06_MODE_MMC_HS400ES;
+- else
+- mode = SDHCI_CDNS_HRS06_MODE_MMC_HS400;
+- break;
+- default:
+- mode = SDHCI_CDNS_HRS06_MODE_SD;
+- break;
+- }
+-
+- sdhci_cdns_set_emmc_mode(priv, mode);
+-
+- /* For SD, fall back to the default handler */
+- if (mode == SDHCI_CDNS_HRS06_MODE_SD)
+- sdhci_set_uhs_signaling(host, timing);
+-}
+-
+-static const struct sdhci_ops sdhci_cdns_ops = {
+- .set_clock = sdhci_set_clock,
+- .get_timeout_clock = sdhci_cdns_get_timeout_clock,
+- .set_bus_width = sdhci_set_bus_width,
+- .reset = sdhci_reset,
+- .set_uhs_signaling = sdhci_cdns_set_uhs_signaling,
+-};
+-
+-static const struct sdhci_pltfm_data sdhci_cdns_uniphier_pltfm_data = {
+- .ops = &sdhci_cdns_ops,
+- .quirks2 = SDHCI_QUIRK2_PRESET_VALUE_BROKEN,
+-};
+-
+-static const struct sdhci_pltfm_data sdhci_cdns_pltfm_data = {
+- .ops = &sdhci_cdns_ops,
+-};
+-
+ static int sdhci_cdns_set_tune_val(struct sdhci_host *host, unsigned int val)
+ {
+ struct sdhci_cdns_priv *priv = sdhci_cdns_priv(host);
+@@ -286,23 +235,24 @@ static int sdhci_cdns_set_tune_val(struct sdhci_host *host, unsigned int val)
+ return 0;
+ }
+
+-static int sdhci_cdns_execute_tuning(struct mmc_host *mmc, u32 opcode)
++/*
++ * In SD mode, software must not use the hardware tuning and instead perform
++ * an almost identical procedure to eMMC.
++ */
++static int sdhci_cdns_execute_tuning(struct sdhci_host *host, u32 opcode)
+ {
+- struct sdhci_host *host = mmc_priv(mmc);
+ int cur_streak = 0;
+ int max_streak = 0;
+ int end_of_streak = 0;
+ int i;
+
+ /*
+- * This handler only implements the eMMC tuning that is specific to
+- * this controller. Fall back to the standard method for SD timing.
++ * Do not execute tuning for UHS_SDR50 or UHS_DDR50.
++ * The delay is set by probe, based on the DT properties.
+ */
+- if (host->timing != MMC_TIMING_MMC_HS200)
+- return sdhci_execute_tuning(mmc, opcode);
+-
+- if (WARN_ON(opcode != MMC_SEND_TUNING_BLOCK_HS200))
+- return -EINVAL;
++ if (host->timing != MMC_TIMING_MMC_HS200 &&
++ host->timing != MMC_TIMING_UHS_SDR104)
++ return 0;
+
+ for (i = 0; i < SDHCI_CDNS_MAX_TUNING_LOOP; i++) {
+ if (sdhci_cdns_set_tune_val(host, i) ||
+@@ -325,6 +275,58 @@ static int sdhci_cdns_execute_tuning(struct mmc_host *mmc, u32 opcode)
+ return sdhci_cdns_set_tune_val(host, end_of_streak - max_streak / 2);
+ }
+
++static void sdhci_cdns_set_uhs_signaling(struct sdhci_host *host,
++ unsigned int timing)
++{
++ struct sdhci_cdns_priv *priv = sdhci_cdns_priv(host);
++ u32 mode;
++
++ switch (timing) {
++ case MMC_TIMING_MMC_HS:
++ mode = SDHCI_CDNS_HRS06_MODE_MMC_SDR;
++ break;
++ case MMC_TIMING_MMC_DDR52:
++ mode = SDHCI_CDNS_HRS06_MODE_MMC_DDR;
++ break;
++ case MMC_TIMING_MMC_HS200:
++ mode = SDHCI_CDNS_HRS06_MODE_MMC_HS200;
++ break;
++ case MMC_TIMING_MMC_HS400:
++ if (priv->enhanced_strobe)
++ mode = SDHCI_CDNS_HRS06_MODE_MMC_HS400ES;
++ else
++ mode = SDHCI_CDNS_HRS06_MODE_MMC_HS400;
++ break;
++ default:
++ mode = SDHCI_CDNS_HRS06_MODE_SD;
++ break;
++ }
++
++ sdhci_cdns_set_emmc_mode(priv, mode);
++
++ /* For SD, fall back to the default handler */
++ if (mode == SDHCI_CDNS_HRS06_MODE_SD)
++ sdhci_set_uhs_signaling(host, timing);
++}
++
++static const struct sdhci_ops sdhci_cdns_ops = {
++ .set_clock = sdhci_set_clock,
++ .get_timeout_clock = sdhci_cdns_get_timeout_clock,
++ .set_bus_width = sdhci_set_bus_width,
++ .reset = sdhci_reset,
++ .platform_execute_tuning = sdhci_cdns_execute_tuning,
++ .set_uhs_signaling = sdhci_cdns_set_uhs_signaling,
++};
++
++static const struct sdhci_pltfm_data sdhci_cdns_uniphier_pltfm_data = {
++ .ops = &sdhci_cdns_ops,
++ .quirks2 = SDHCI_QUIRK2_PRESET_VALUE_BROKEN,
++};
++
++static const struct sdhci_pltfm_data sdhci_cdns_pltfm_data = {
++ .ops = &sdhci_cdns_ops,
++};
++
+ static void sdhci_cdns_hs400_enhanced_strobe(struct mmc_host *mmc,
+ struct mmc_ios *ios)
+ {
+@@ -385,7 +387,6 @@ static int sdhci_cdns_probe(struct platform_device *pdev)
+ priv->hrs_addr = host->ioaddr;
+ priv->enhanced_strobe = false;
+ host->ioaddr += SDHCI_CDNS_SRS_BASE;
+- host->mmc_host_ops.execute_tuning = sdhci_cdns_execute_tuning;
+ host->mmc_host_ops.hs400_enhanced_strobe =
+ sdhci_cdns_hs400_enhanced_strobe;
+ sdhci_enable_v4_mode(host);
+diff --git a/drivers/mmc/host/sdhci-of-arasan.c b/drivers/mmc/host/sdhci-of-arasan.c
+index db9b544465cd..fb26e743e1fd 100644
+--- a/drivers/mmc/host/sdhci-of-arasan.c
++++ b/drivers/mmc/host/sdhci-of-arasan.c
+@@ -1299,6 +1299,8 @@ sdhci_arasan_register_sdcardclk(struct sdhci_arasan_data *sdhci_arasan,
+ clk_data->sdcardclk_hw.init = &sdcardclk_init;
+ clk_data->sdcardclk =
+ devm_clk_register(dev, &clk_data->sdcardclk_hw);
++ if (IS_ERR(clk_data->sdcardclk))
++ return PTR_ERR(clk_data->sdcardclk);
+ clk_data->sdcardclk_hw.init = NULL;
+
+ ret = of_clk_add_provider(np, of_clk_src_simple_get,
+@@ -1349,6 +1351,8 @@ sdhci_arasan_register_sampleclk(struct sdhci_arasan_data *sdhci_arasan,
+ clk_data->sampleclk_hw.init = &sampleclk_init;
+ clk_data->sampleclk =
+ devm_clk_register(dev, &clk_data->sampleclk_hw);
++ if (IS_ERR(clk_data->sampleclk))
++ return PTR_ERR(clk_data->sampleclk);
+ clk_data->sampleclk_hw.init = NULL;
+
+ ret = of_clk_add_provider(np, of_clk_src_simple_get,
+diff --git a/drivers/mmc/host/sdhci-pci-o2micro.c b/drivers/mmc/host/sdhci-pci-o2micro.c
+index e2a846885902..ed3c605fcf0c 100644
+--- a/drivers/mmc/host/sdhci-pci-o2micro.c
++++ b/drivers/mmc/host/sdhci-pci-o2micro.c
+@@ -561,6 +561,12 @@ static int sdhci_pci_o2_probe_slot(struct sdhci_pci_slot *slot)
+ slot->host->mmc_host_ops.get_cd = sdhci_o2_get_cd;
+ }
+
++ if (chip->pdev->device == PCI_DEVICE_ID_O2_SEABIRD1) {
++ slot->host->mmc_host_ops.get_cd = sdhci_o2_get_cd;
++ host->mmc->caps2 |= MMC_CAP2_NO_SDIO;
++ host->quirks2 |= SDHCI_QUIRK2_PRESET_VALUE_BROKEN;
++ }
++
+ host->mmc_host_ops.execute_tuning = sdhci_o2_execute_tuning;
+
+ if (chip->pdev->device != PCI_DEVICE_ID_O2_FUJIN2)
+diff --git a/drivers/most/core.c b/drivers/most/core.c
+index f781c46cd4af..353ab277cbc6 100644
+--- a/drivers/most/core.c
++++ b/drivers/most/core.c
+@@ -1283,10 +1283,8 @@ int most_register_interface(struct most_interface *iface)
+ struct most_channel *c;
+
+ if (!iface || !iface->enqueue || !iface->configure ||
+- !iface->poison_channel || (iface->num_channels > MAX_CHANNELS)) {
+- dev_err(iface->dev, "Bad interface or channel overflow\n");
++ !iface->poison_channel || (iface->num_channels > MAX_CHANNELS))
+ return -EINVAL;
+- }
+
+ id = ida_simple_get(&mdev_id, 0, 0, GFP_KERNEL);
+ if (id < 0) {
+diff --git a/drivers/mtd/nand/raw/brcmnand/brcmnand.c b/drivers/mtd/nand/raw/brcmnand/brcmnand.c
+index 44068e9eea03..ac934a715a19 100644
+--- a/drivers/mtd/nand/raw/brcmnand/brcmnand.c
++++ b/drivers/mtd/nand/raw/brcmnand/brcmnand.c
+@@ -3023,8 +3023,9 @@ int brcmnand_probe(struct platform_device *pdev, struct brcmnand_soc *soc)
+ if (ret < 0)
+ goto err;
+
+- /* set edu transfer function to call */
+- ctrl->dma_trans = brcmnand_edu_trans;
++ if (has_edu(ctrl))
++ /* set edu transfer function to call */
++ ctrl->dma_trans = brcmnand_edu_trans;
+ }
+
+ /* Disable automatic device ID config, direct addressing */
+diff --git a/drivers/mtd/nand/raw/qcom_nandc.c b/drivers/mtd/nand/raw/qcom_nandc.c
+index f1daf330951b..78b5f211598c 100644
+--- a/drivers/mtd/nand/raw/qcom_nandc.c
++++ b/drivers/mtd/nand/raw/qcom_nandc.c
+@@ -459,11 +459,13 @@ struct qcom_nand_host {
+ * among different NAND controllers.
+ * @ecc_modes - ecc mode for NAND
+ * @is_bam - whether NAND controller is using BAM
++ * @is_qpic - whether NAND CTRL is part of qpic IP
+ * @dev_cmd_reg_start - NAND_DEV_CMD_* registers starting offset
+ */
+ struct qcom_nandc_props {
+ u32 ecc_modes;
+ bool is_bam;
++ bool is_qpic;
+ u32 dev_cmd_reg_start;
+ };
+
+@@ -2774,7 +2776,8 @@ static int qcom_nandc_setup(struct qcom_nand_controller *nandc)
+ u32 nand_ctrl;
+
+ /* kill onenand */
+- nandc_write(nandc, SFLASHC_BURST_CFG, 0);
++ if (!nandc->props->is_qpic)
++ nandc_write(nandc, SFLASHC_BURST_CFG, 0);
+ nandc_write(nandc, dev_cmd_reg_addr(nandc, NAND_DEV_CMD_VLD),
+ NAND_DEV_CMD_VLD_VAL);
+
+@@ -3035,12 +3038,14 @@ static const struct qcom_nandc_props ipq806x_nandc_props = {
+ static const struct qcom_nandc_props ipq4019_nandc_props = {
+ .ecc_modes = (ECC_BCH_4BIT | ECC_BCH_8BIT),
+ .is_bam = true,
++ .is_qpic = true,
+ .dev_cmd_reg_start = 0x0,
+ };
+
+ static const struct qcom_nandc_props ipq8074_nandc_props = {
+ .ecc_modes = (ECC_BCH_4BIT | ECC_BCH_8BIT),
+ .is_bam = true,
++ .is_qpic = true,
+ .dev_cmd_reg_start = 0x7000,
+ };
+
+diff --git a/drivers/mtd/spi-nor/controllers/intel-spi.c b/drivers/mtd/spi-nor/controllers/intel-spi.c
+index 61d2a0ad2131..3259c9fc981f 100644
+--- a/drivers/mtd/spi-nor/controllers/intel-spi.c
++++ b/drivers/mtd/spi-nor/controllers/intel-spi.c
+@@ -612,6 +612,15 @@ static int intel_spi_write_reg(struct spi_nor *nor, u8 opcode, const u8 *buf,
+ return 0;
+ }
+
++ /*
++ * We hope that HW sequencer will do the right thing automatically and
++ * with the SW sequencer we cannot use preopcode anyway, so just ignore
++ * the Write Disable operation and pretend it was completed
++ * successfully.
++ */
++ if (opcode == SPINOR_OP_WRDI)
++ return 0;
++
+ writel(0, ispi->base + FADDR);
+
+ /* Write the value beforehand */
+diff --git a/drivers/net/dsa/mv88e6xxx/chip.c b/drivers/net/dsa/mv88e6xxx/chip.c
+index fee16c947c2e..359043659327 100644
+--- a/drivers/net/dsa/mv88e6xxx/chip.c
++++ b/drivers/net/dsa/mv88e6xxx/chip.c
+@@ -3485,7 +3485,6 @@ static const struct mv88e6xxx_ops mv88e6097_ops = {
+ .port_set_frame_mode = mv88e6351_port_set_frame_mode,
+ .port_set_egress_floods = mv88e6352_port_set_egress_floods,
+ .port_set_ether_type = mv88e6351_port_set_ether_type,
+- .port_set_jumbo_size = mv88e6165_port_set_jumbo_size,
+ .port_egress_rate_limiting = mv88e6095_port_egress_rate_limiting,
+ .port_pause_limit = mv88e6097_port_pause_limit,
+ .port_disable_learn_limit = mv88e6xxx_port_disable_learn_limit,
+diff --git a/drivers/net/dsa/rtl8366.c b/drivers/net/dsa/rtl8366.c
+index ac88caca5ad4..1368816abaed 100644
+--- a/drivers/net/dsa/rtl8366.c
++++ b/drivers/net/dsa/rtl8366.c
+@@ -43,18 +43,26 @@ int rtl8366_set_vlan(struct realtek_smi *smi, int vid, u32 member,
+ int ret;
+ int i;
+
++ dev_dbg(smi->dev,
++ "setting VLAN%d 4k members: 0x%02x, untagged: 0x%02x\n",
++ vid, member, untag);
++
+ /* Update the 4K table */
+ ret = smi->ops->get_vlan_4k(smi, vid, &vlan4k);
+ if (ret)
+ return ret;
+
+- vlan4k.member = member;
+- vlan4k.untag = untag;
++ vlan4k.member |= member;
++ vlan4k.untag |= untag;
+ vlan4k.fid = fid;
+ ret = smi->ops->set_vlan_4k(smi, &vlan4k);
+ if (ret)
+ return ret;
+
++ dev_dbg(smi->dev,
++ "resulting VLAN%d 4k members: 0x%02x, untagged: 0x%02x\n",
++ vid, vlan4k.member, vlan4k.untag);
++
+ /* Try to find an existing MC entry for this VID */
+ for (i = 0; i < smi->num_vlan_mc; i++) {
+ struct rtl8366_vlan_mc vlanmc;
+@@ -65,11 +73,16 @@ int rtl8366_set_vlan(struct realtek_smi *smi, int vid, u32 member,
+
+ if (vid == vlanmc.vid) {
+ /* update the MC entry */
+- vlanmc.member = member;
+- vlanmc.untag = untag;
++ vlanmc.member |= member;
++ vlanmc.untag |= untag;
+ vlanmc.fid = fid;
+
+ ret = smi->ops->set_vlan_mc(smi, i, &vlanmc);
++
++ dev_dbg(smi->dev,
++ "resulting VLAN%d MC members: 0x%02x, untagged: 0x%02x\n",
++ vid, vlanmc.member, vlanmc.untag);
++
+ break;
+ }
+ }
+@@ -384,7 +397,7 @@ void rtl8366_vlan_add(struct dsa_switch *ds, int port,
+ if (dsa_is_dsa_port(ds, port) || dsa_is_cpu_port(ds, port))
+ dev_err(smi->dev, "port is DSA or CPU port\n");
+
+- for (vid = vlan->vid_begin; vid <= vlan->vid_end; ++vid) {
++ for (vid = vlan->vid_begin; vid <= vlan->vid_end; vid++) {
+ int pvid_val = 0;
+
+ dev_info(smi->dev, "add VLAN %04x\n", vid);
+@@ -407,13 +420,13 @@ void rtl8366_vlan_add(struct dsa_switch *ds, int port,
+ if (ret < 0)
+ return;
+ }
+- }
+
+- ret = rtl8366_set_vlan(smi, port, member, untag, 0);
+- if (ret)
+- dev_err(smi->dev,
+- "failed to set up VLAN %04x",
+- vid);
++ ret = rtl8366_set_vlan(smi, vid, member, untag, 0);
++ if (ret)
++ dev_err(smi->dev,
++ "failed to set up VLAN %04x",
++ vid);
++ }
+ }
+ EXPORT_SYMBOL_GPL(rtl8366_vlan_add);
+
+diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_ethtool.c b/drivers/net/ethernet/aquantia/atlantic/aq_ethtool.c
+index 743d3b13b39d..bb1fc6052bcf 100644
+--- a/drivers/net/ethernet/aquantia/atlantic/aq_ethtool.c
++++ b/drivers/net/ethernet/aquantia/atlantic/aq_ethtool.c
+@@ -123,21 +123,21 @@ static const char aq_macsec_stat_names[][ETH_GSTRING_LEN] = {
+ "MACSec OutUnctrlHitDropRedir",
+ };
+
+-static const char *aq_macsec_txsc_stat_names[] = {
++static const char * const aq_macsec_txsc_stat_names[] = {
+ "MACSecTXSC%d ProtectedPkts",
+ "MACSecTXSC%d EncryptedPkts",
+ "MACSecTXSC%d ProtectedOctets",
+ "MACSecTXSC%d EncryptedOctets",
+ };
+
+-static const char *aq_macsec_txsa_stat_names[] = {
++static const char * const aq_macsec_txsa_stat_names[] = {
+ "MACSecTXSC%dSA%d HitDropRedirect",
+ "MACSecTXSC%dSA%d Protected2Pkts",
+ "MACSecTXSC%dSA%d ProtectedPkts",
+ "MACSecTXSC%dSA%d EncryptedPkts",
+ };
+
+-static const char *aq_macsec_rxsa_stat_names[] = {
++static const char * const aq_macsec_rxsa_stat_names[] = {
+ "MACSecRXSC%dSA%d UntaggedHitPkts",
+ "MACSecRXSC%dSA%d CtrlHitDrpRedir",
+ "MACSecRXSC%dSA%d NotUsingSa",
+diff --git a/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_a0.c b/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_a0.c
+index a312864969af..6640fd35b29b 100644
+--- a/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_a0.c
++++ b/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_a0.c
+@@ -782,7 +782,7 @@ static int hw_atl_a0_hw_multicast_list_set(struct aq_hw_s *self,
+ int err = 0;
+
+ if (count > (HW_ATL_A0_MAC_MAX - HW_ATL_A0_MAC_MIN)) {
+- err = EBADRQC;
++ err = -EBADRQC;
+ goto err_exit;
+ }
+ for (self->aq_nic_cfg->mc_list_count = 0U;
+diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c
+index 2213e6ab8151..4b1b5928b104 100644
+--- a/drivers/net/ethernet/cadence/macb_main.c
++++ b/drivers/net/ethernet/cadence/macb_main.c
+@@ -578,7 +578,7 @@ static void macb_mac_config(struct phylink_config *config, unsigned int mode,
+ if (bp->caps & MACB_CAPS_MACB_IS_EMAC) {
+ if (state->interface == PHY_INTERFACE_MODE_RMII)
+ ctrl |= MACB_BIT(RM9200_RMII);
+- } else {
++ } else if (macb_is_gem(bp)) {
+ ctrl &= ~(GEM_BIT(SGMIIEN) | GEM_BIT(PCSSEL));
+
+ if (state->interface == PHY_INTERFACE_MODE_SGMII)
+@@ -639,10 +639,13 @@ static void macb_mac_link_up(struct phylink_config *config,
+ ctrl |= MACB_BIT(FD);
+
+ if (!(bp->caps & MACB_CAPS_MACB_IS_EMAC)) {
+- ctrl &= ~(GEM_BIT(GBE) | MACB_BIT(PAE));
++ ctrl &= ~MACB_BIT(PAE);
++ if (macb_is_gem(bp)) {
++ ctrl &= ~GEM_BIT(GBE);
+
+- if (speed == SPEED_1000)
+- ctrl |= GEM_BIT(GBE);
++ if (speed == SPEED_1000)
++ ctrl |= GEM_BIT(GBE);
++ }
+
+ /* We do not support MLO_PAUSE_RX yet */
+ if (tx_pause)
+diff --git a/drivers/net/ethernet/cavium/liquidio/cn23xx_pf_device.c b/drivers/net/ethernet/cavium/liquidio/cn23xx_pf_device.c
+index 43d11c38b38a..4cddd628d41b 100644
+--- a/drivers/net/ethernet/cavium/liquidio/cn23xx_pf_device.c
++++ b/drivers/net/ethernet/cavium/liquidio/cn23xx_pf_device.c
+@@ -1167,7 +1167,7 @@ static int cn23xx_get_pf_num(struct octeon_device *oct)
+ oct->pf_num = ((fdl_bit >> CN23XX_PCIE_SRIOV_FDL_BIT_POS) &
+ CN23XX_PCIE_SRIOV_FDL_MASK);
+ } else {
+- ret = EINVAL;
++ ret = -EINVAL;
+
+ /* Under some virtual environments, extended PCI regs are
+ * inaccessible, in which case the above read will have failed.
+diff --git a/drivers/net/ethernet/cavium/thunder/nicvf_main.c b/drivers/net/ethernet/cavium/thunder/nicvf_main.c
+index 2ba0ce115e63..4fee95584e31 100644
+--- a/drivers/net/ethernet/cavium/thunder/nicvf_main.c
++++ b/drivers/net/ethernet/cavium/thunder/nicvf_main.c
+@@ -2042,11 +2042,11 @@ static void nicvf_set_rx_mode_task(struct work_struct *work_arg)
+ /* Save message data locally to prevent them from
+ * being overwritten by next ndo_set_rx_mode call().
+ */
+- spin_lock(&nic->rx_mode_wq_lock);
++ spin_lock_bh(&nic->rx_mode_wq_lock);
+ mode = vf_work->mode;
+ mc = vf_work->mc;
+ vf_work->mc = NULL;
+- spin_unlock(&nic->rx_mode_wq_lock);
++ spin_unlock_bh(&nic->rx_mode_wq_lock);
+
+ __nicvf_set_rx_mode_task(mode, mc, nic);
+ }
+@@ -2180,6 +2180,9 @@ static int nicvf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ nic->max_queues *= 2;
+ nic->ptp_clock = ptp_clock;
+
++ /* Initialize mutex that serializes usage of VF's mailbox */
++ mutex_init(&nic->rx_mode_mtx);
++
+ /* MAP VF's configuration registers */
+ nic->reg_base = pcim_iomap(pdev, PCI_CFG_REG_BAR_NUM, 0);
+ if (!nic->reg_base) {
+@@ -2256,7 +2259,6 @@ static int nicvf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+
+ INIT_WORK(&nic->rx_mode_work.work, nicvf_set_rx_mode_task);
+ spin_lock_init(&nic->rx_mode_wq_lock);
+- mutex_init(&nic->rx_mode_mtx);
+
+ err = register_netdev(netdev);
+ if (err) {
+diff --git a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
+index 0998ceb1a26e..a4b2b18009c1 100644
+--- a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
++++ b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
+@@ -1109,7 +1109,7 @@ static void drain_bufs(struct dpaa2_eth_priv *priv, int count)
+ buf_array, count);
+ if (ret < 0) {
+ if (ret == -EBUSY &&
+- retries++ >= DPAA2_ETH_SWP_BUSY_RETRIES)
++ retries++ < DPAA2_ETH_SWP_BUSY_RETRIES)
+ continue;
+ netdev_err(priv->net_dev, "dpaa2_io_service_acquire() failed\n");
+ return;
+@@ -2207,7 +2207,7 @@ close:
+ free:
+ fsl_mc_object_free(dpcon);
+
+- return NULL;
++ return ERR_PTR(err);
+ }
+
+ static void free_dpcon(struct dpaa2_eth_priv *priv,
+@@ -2231,8 +2231,8 @@ alloc_channel(struct dpaa2_eth_priv *priv)
+ return NULL;
+
+ channel->dpcon = setup_dpcon(priv);
+- if (IS_ERR_OR_NULL(channel->dpcon)) {
+- err = PTR_ERR_OR_ZERO(channel->dpcon);
++ if (IS_ERR(channel->dpcon)) {
++ err = PTR_ERR(channel->dpcon);
+ goto err_setup;
+ }
+
+diff --git a/drivers/net/ethernet/freescale/fman/fman.c b/drivers/net/ethernet/freescale/fman/fman.c
+index f151d6e111dd..ef67e8599b39 100644
+--- a/drivers/net/ethernet/freescale/fman/fman.c
++++ b/drivers/net/ethernet/freescale/fman/fman.c
+@@ -1398,8 +1398,7 @@ static void enable_time_stamp(struct fman *fman)
+ {
+ struct fman_fpm_regs __iomem *fpm_rg = fman->fpm_regs;
+ u16 fm_clk_freq = fman->state->fm_clk_freq;
+- u32 tmp, intgr, ts_freq;
+- u64 frac;
++ u32 tmp, intgr, ts_freq, frac;
+
+ ts_freq = (u32)(1 << fman->state->count1_micro_bit);
+ /* configure timestamp so that bit 8 will count 1 microsecond
+diff --git a/drivers/net/ethernet/freescale/fman/fman_dtsec.c b/drivers/net/ethernet/freescale/fman/fman_dtsec.c
+index 004c266802a8..bce3c9398887 100644
+--- a/drivers/net/ethernet/freescale/fman/fman_dtsec.c
++++ b/drivers/net/ethernet/freescale/fman/fman_dtsec.c
+@@ -1200,7 +1200,7 @@ int dtsec_del_hash_mac_address(struct fman_mac *dtsec, enet_addr_t *eth_addr)
+ list_for_each(pos,
+ &dtsec->multicast_addr_hash->lsts[bucket]) {
+ hash_entry = ETH_HASH_ENTRY_OBJ(pos);
+- if (hash_entry->addr == addr) {
++ if (hash_entry && hash_entry->addr == addr) {
+ list_del_init(&hash_entry->node);
+ kfree(hash_entry);
+ break;
+@@ -1213,7 +1213,7 @@ int dtsec_del_hash_mac_address(struct fman_mac *dtsec, enet_addr_t *eth_addr)
+ list_for_each(pos,
+ &dtsec->unicast_addr_hash->lsts[bucket]) {
+ hash_entry = ETH_HASH_ENTRY_OBJ(pos);
+- if (hash_entry->addr == addr) {
++ if (hash_entry && hash_entry->addr == addr) {
+ list_del_init(&hash_entry->node);
+ kfree(hash_entry);
+ break;
+diff --git a/drivers/net/ethernet/freescale/fman/fman_mac.h b/drivers/net/ethernet/freescale/fman/fman_mac.h
+index dd6d0526f6c1..19f327efdaff 100644
+--- a/drivers/net/ethernet/freescale/fman/fman_mac.h
++++ b/drivers/net/ethernet/freescale/fman/fman_mac.h
+@@ -252,7 +252,7 @@ static inline struct eth_hash_t *alloc_hash_table(u16 size)
+ struct eth_hash_t *hash;
+
+ /* Allocate address hash table */
+- hash = kmalloc_array(size, sizeof(struct eth_hash_t *), GFP_KERNEL);
++ hash = kmalloc(sizeof(*hash), GFP_KERNEL);
+ if (!hash)
+ return NULL;
+
+diff --git a/drivers/net/ethernet/freescale/fman/fman_memac.c b/drivers/net/ethernet/freescale/fman/fman_memac.c
+index a5500ede4070..645764abdaae 100644
+--- a/drivers/net/ethernet/freescale/fman/fman_memac.c
++++ b/drivers/net/ethernet/freescale/fman/fman_memac.c
+@@ -852,7 +852,6 @@ int memac_set_tx_pause_frames(struct fman_mac *memac, u8 priority,
+
+ tmp = ioread32be(&regs->command_config);
+ tmp &= ~CMD_CFG_PFC_MODE;
+- priority = 0;
+
+ iowrite32be(tmp, &regs->command_config);
+
+@@ -982,7 +981,7 @@ int memac_del_hash_mac_address(struct fman_mac *memac, enet_addr_t *eth_addr)
+
+ list_for_each(pos, &memac->multicast_addr_hash->lsts[hash]) {
+ hash_entry = ETH_HASH_ENTRY_OBJ(pos);
+- if (hash_entry->addr == addr) {
++ if (hash_entry && hash_entry->addr == addr) {
+ list_del_init(&hash_entry->node);
+ kfree(hash_entry);
+ break;
+diff --git a/drivers/net/ethernet/freescale/fman/fman_port.c b/drivers/net/ethernet/freescale/fman/fman_port.c
+index 87b26f063cc8..c27df153f895 100644
+--- a/drivers/net/ethernet/freescale/fman/fman_port.c
++++ b/drivers/net/ethernet/freescale/fman/fman_port.c
+@@ -1767,6 +1767,7 @@ static int fman_port_probe(struct platform_device *of_dev)
+ struct fman_port *port;
+ struct fman *fman;
+ struct device_node *fm_node, *port_node;
++ struct platform_device *fm_pdev;
+ struct resource res;
+ struct resource *dev_res;
+ u32 val;
+@@ -1791,8 +1792,14 @@ static int fman_port_probe(struct platform_device *of_dev)
+ goto return_err;
+ }
+
+- fman = dev_get_drvdata(&of_find_device_by_node(fm_node)->dev);
++ fm_pdev = of_find_device_by_node(fm_node);
+ of_node_put(fm_node);
++ if (!fm_pdev) {
++ err = -EINVAL;
++ goto return_err;
++ }
++
++ fman = dev_get_drvdata(&fm_pdev->dev);
+ if (!fman) {
+ err = -EINVAL;
+ goto return_err;
+diff --git a/drivers/net/ethernet/freescale/fman/fman_tgec.c b/drivers/net/ethernet/freescale/fman/fman_tgec.c
+index 8c7eb878d5b4..41946b16f6c7 100644
+--- a/drivers/net/ethernet/freescale/fman/fman_tgec.c
++++ b/drivers/net/ethernet/freescale/fman/fman_tgec.c
+@@ -626,7 +626,7 @@ int tgec_del_hash_mac_address(struct fman_mac *tgec, enet_addr_t *eth_addr)
+
+ list_for_each(pos, &tgec->multicast_addr_hash->lsts[hash]) {
+ hash_entry = ETH_HASH_ENTRY_OBJ(pos);
+- if (hash_entry->addr == addr) {
++ if (hash_entry && hash_entry->addr == addr) {
+ list_del_init(&hash_entry->node);
+ kfree(hash_entry);
+ break;
+diff --git a/drivers/net/ethernet/intel/iavf/iavf_main.c b/drivers/net/ethernet/intel/iavf/iavf_main.c
+index fa82768e5eda..d338efe5f3f5 100644
+--- a/drivers/net/ethernet/intel/iavf/iavf_main.c
++++ b/drivers/net/ethernet/intel/iavf/iavf_main.c
+@@ -1863,8 +1863,10 @@ static int iavf_init_get_resources(struct iavf_adapter *adapter)
+
+ adapter->rss_key = kzalloc(adapter->rss_key_size, GFP_KERNEL);
+ adapter->rss_lut = kzalloc(adapter->rss_lut_size, GFP_KERNEL);
+- if (!adapter->rss_key || !adapter->rss_lut)
++ if (!adapter->rss_key || !adapter->rss_lut) {
++ err = -ENOMEM;
+ goto err_mem;
++ }
+ if (RSS_AQ(adapter))
+ adapter->aq_required |= IAVF_FLAG_AQ_CONFIGURE_RSS;
+ else
+@@ -1946,7 +1948,10 @@ static void iavf_watchdog_task(struct work_struct *work)
+ iavf_send_api_ver(adapter);
+ }
+ } else {
+- if (!iavf_process_aq_command(adapter) &&
++ /* An error will be returned if no commands were
++ * processed; use this opportunity to update stats
++ */
++ if (iavf_process_aq_command(adapter) &&
+ adapter->state == __IAVF_RUNNING)
+ iavf_request_stats(adapter);
+ }
+diff --git a/drivers/net/ethernet/intel/ice/ice_flex_pipe.c b/drivers/net/ethernet/intel/ice/ice_flex_pipe.c
+index 4420fc02f7e7..504a02b071ce 100644
+--- a/drivers/net/ethernet/intel/ice/ice_flex_pipe.c
++++ b/drivers/net/ethernet/intel/ice/ice_flex_pipe.c
+@@ -2922,6 +2922,8 @@ static void ice_free_flow_profs(struct ice_hw *hw, u8 blk_idx)
+ ICE_FLOW_ENTRY_HNDL(e));
+
+ list_del(&p->l_entry);
++
++ mutex_destroy(&p->entries_lock);
+ devm_kfree(ice_hw_to_dev(hw), p);
+ }
+ mutex_unlock(&hw->fl_profs_locks[blk_idx]);
+@@ -3039,7 +3041,7 @@ void ice_clear_hw_tbls(struct ice_hw *hw)
+ memset(prof_redir->t, 0,
+ prof_redir->count * sizeof(*prof_redir->t));
+
+- memset(es->t, 0, es->count * sizeof(*es->t));
++ memset(es->t, 0, es->count * sizeof(*es->t) * es->fvw);
+ memset(es->ref_count, 0, es->count * sizeof(*es->ref_count));
+ memset(es->written, 0, es->count * sizeof(*es->written));
+ }
+@@ -3150,10 +3152,12 @@ enum ice_status ice_init_hw_tbls(struct ice_hw *hw)
+ es->ref_count = devm_kcalloc(ice_hw_to_dev(hw), es->count,
+ sizeof(*es->ref_count),
+ GFP_KERNEL);
++ if (!es->ref_count)
++ goto err;
+
+ es->written = devm_kcalloc(ice_hw_to_dev(hw), es->count,
+ sizeof(*es->written), GFP_KERNEL);
+- if (!es->ref_count)
++ if (!es->written)
+ goto err;
+ }
+ return 0;
+diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
+index 24f4d8e0da98..ee72397813d4 100644
+--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
++++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
+@@ -2981,6 +2981,7 @@ static int mvpp2_rx(struct mvpp2_port *port, struct napi_struct *napi,
+ err = mvpp2_rx_refill(port, bm_pool, pool);
+ if (err) {
+ netdev_err(port->dev, "failed to refill BM pools\n");
++ dev_kfree_skb_any(skb);
+ goto err_drop_frame;
+ }
+
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
+index 2569bb6228b6..2e5f7efb82a8 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
+@@ -847,18 +847,15 @@ static int connect_fts_in_prio(struct mlx5_core_dev *dev,
+ {
+ struct mlx5_flow_root_namespace *root = find_root(&prio->node);
+ struct mlx5_flow_table *iter;
+- int i = 0;
+ int err;
+
+ fs_for_each_ft(iter, prio) {
+- i++;
+ err = root->cmds->modify_flow_table(root, iter, ft);
+ if (err) {
+- mlx5_core_warn(dev, "Failed to modify flow table %d\n",
+- iter->id);
++ mlx5_core_err(dev,
++ "Failed to modify flow table id %d, type %d, err %d\n",
++ iter->id, iter->type, err);
+ /* The driver is out of sync with the FW */
+- if (i > 1)
+- WARN_ON(true);
+ return err;
+ }
+ }
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/fs_dr.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/fs_dr.c
+index 8887b2440c7d..9b08eb557a31 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/fs_dr.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/fs_dr.c
+@@ -279,29 +279,9 @@ static int mlx5_cmd_dr_create_fte(struct mlx5_flow_root_namespace *ns,
+
+ /* The order of the actions are must to be keep, only the following
+ * order is supported by SW steering:
+- * TX: push vlan -> modify header -> encap
++ * TX: modify header -> push vlan -> encap
+ * RX: decap -> pop vlan -> modify header
+ */
+- if (fte->action.action & MLX5_FLOW_CONTEXT_ACTION_VLAN_PUSH) {
+- tmp_action = create_action_push_vlan(domain, &fte->action.vlan[0]);
+- if (!tmp_action) {
+- err = -ENOMEM;
+- goto free_actions;
+- }
+- fs_dr_actions[fs_dr_num_actions++] = tmp_action;
+- actions[num_actions++] = tmp_action;
+- }
+-
+- if (fte->action.action & MLX5_FLOW_CONTEXT_ACTION_VLAN_PUSH_2) {
+- tmp_action = create_action_push_vlan(domain, &fte->action.vlan[1]);
+- if (!tmp_action) {
+- err = -ENOMEM;
+- goto free_actions;
+- }
+- fs_dr_actions[fs_dr_num_actions++] = tmp_action;
+- actions[num_actions++] = tmp_action;
+- }
+-
+ if (fte->action.action & MLX5_FLOW_CONTEXT_ACTION_DECAP) {
+ enum mlx5dr_action_reformat_type decap_type =
+ DR_ACTION_REFORMAT_TYP_TNL_L2_TO_L2;
+@@ -354,6 +334,26 @@ static int mlx5_cmd_dr_create_fte(struct mlx5_flow_root_namespace *ns,
+ actions[num_actions++] =
+ fte->action.modify_hdr->action.dr_action;
+
++ if (fte->action.action & MLX5_FLOW_CONTEXT_ACTION_VLAN_PUSH) {
++ tmp_action = create_action_push_vlan(domain, &fte->action.vlan[0]);
++ if (!tmp_action) {
++ err = -ENOMEM;
++ goto free_actions;
++ }
++ fs_dr_actions[fs_dr_num_actions++] = tmp_action;
++ actions[num_actions++] = tmp_action;
++ }
++
++ if (fte->action.action & MLX5_FLOW_CONTEXT_ACTION_VLAN_PUSH_2) {
++ tmp_action = create_action_push_vlan(domain, &fte->action.vlan[1]);
++ if (!tmp_action) {
++ err = -ENOMEM;
++ goto free_actions;
++ }
++ fs_dr_actions[fs_dr_num_actions++] = tmp_action;
++ actions[num_actions++] = tmp_action;
++ }
++
+ if (delay_encap_set)
+ actions[num_actions++] =
+ fte->action.pkt_reformat->action.dr_action;
+diff --git a/drivers/net/ethernet/mscc/ocelot.c b/drivers/net/ethernet/mscc/ocelot.c
+index f17da67a4622..d0b79cca5184 100644
+--- a/drivers/net/ethernet/mscc/ocelot.c
++++ b/drivers/net/ethernet/mscc/ocelot.c
+@@ -1605,14 +1605,14 @@ static int ocelot_port_obj_add_mdb(struct net_device *dev,
+ addr[0] = 0;
+
+ if (!new) {
+- addr[2] = mc->ports << 0;
+- addr[1] = mc->ports << 8;
++ addr[1] = mc->ports >> 8;
++ addr[2] = mc->ports & 0xff;
+ ocelot_mact_forget(ocelot, addr, vid);
+ }
+
+ mc->ports |= BIT(port);
+- addr[2] = mc->ports << 0;
+- addr[1] = mc->ports << 8;
++ addr[1] = mc->ports >> 8;
++ addr[2] = mc->ports & 0xff;
+
+ return ocelot_mact_learn(ocelot, 0, addr, vid, ENTRYTYPE_MACv4);
+ }
+@@ -1636,9 +1636,9 @@ static int ocelot_port_obj_del_mdb(struct net_device *dev,
+ return -ENOENT;
+
+ memcpy(addr, mc->addr, ETH_ALEN);
+- addr[2] = mc->ports << 0;
+- addr[1] = mc->ports << 8;
+ addr[0] = 0;
++ addr[1] = mc->ports >> 8;
++ addr[2] = mc->ports & 0xff;
+ ocelot_mact_forget(ocelot, addr, vid);
+
+ mc->ports &= ~BIT(port);
+@@ -1648,8 +1648,8 @@ static int ocelot_port_obj_del_mdb(struct net_device *dev,
+ return 0;
+ }
+
+- addr[2] = mc->ports << 0;
+- addr[1] = mc->ports << 8;
++ addr[1] = mc->ports >> 8;
++ addr[2] = mc->ports & 0xff;
+
+ return ocelot_mact_learn(ocelot, 0, addr, vid, ENTRYTYPE_MACv4);
+ }
+diff --git a/drivers/net/ethernet/pensando/ionic/ionic_bus_pci.c b/drivers/net/ethernet/pensando/ionic/ionic_bus_pci.c
+index 2924cde440aa..85c686c16741 100644
+--- a/drivers/net/ethernet/pensando/ionic/ionic_bus_pci.c
++++ b/drivers/net/ethernet/pensando/ionic/ionic_bus_pci.c
+@@ -247,12 +247,11 @@ static int ionic_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ goto err_out_pci_disable_device;
+ }
+
+- pci_set_master(pdev);
+ pcie_print_link_status(pdev);
+
+ err = ionic_map_bars(ionic);
+ if (err)
+- goto err_out_pci_clear_master;
++ goto err_out_pci_disable_device;
+
+ /* Configure the device */
+ err = ionic_setup(ionic);
+@@ -260,6 +259,7 @@ static int ionic_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ dev_err(dev, "Cannot setup device: %d, aborting\n", err);
+ goto err_out_unmap_bars;
+ }
++ pci_set_master(pdev);
+
+ err = ionic_identify(ionic);
+ if (err) {
+@@ -350,6 +350,7 @@ err_out_reset:
+ ionic_reset(ionic);
+ err_out_teardown:
+ ionic_dev_teardown(ionic);
++ pci_clear_master(pdev);
+ /* Don't fail the probe for these errors, keep
+ * the hw interface around for inspection
+ */
+@@ -358,8 +359,6 @@ err_out_teardown:
+ err_out_unmap_bars:
+ ionic_unmap_bars(ionic);
+ pci_release_regions(pdev);
+-err_out_pci_clear_master:
+- pci_clear_master(pdev);
+ err_out_pci_disable_device:
+ pci_disable_device(pdev);
+ err_out_debugfs_del_dev:
+@@ -389,9 +388,9 @@ static void ionic_remove(struct pci_dev *pdev)
+ ionic_port_reset(ionic);
+ ionic_reset(ionic);
+ ionic_dev_teardown(ionic);
++ pci_clear_master(pdev);
+ ionic_unmap_bars(ionic);
+ pci_release_regions(pdev);
+- pci_clear_master(pdev);
+ pci_disable_device(pdev);
+ ionic_debugfs_del_dev(ionic);
+ mutex_destroy(&ionic->dev_cmd_lock);
+diff --git a/drivers/net/ethernet/pensando/ionic/ionic_lif.c b/drivers/net/ethernet/pensando/ionic/ionic_lif.c
+index e55d41546cff..aa93f9a6252d 100644
+--- a/drivers/net/ethernet/pensando/ionic/ionic_lif.c
++++ b/drivers/net/ethernet/pensando/ionic/ionic_lif.c
+@@ -723,7 +723,7 @@ static bool ionic_notifyq_service(struct ionic_cq *cq,
+ eid = le64_to_cpu(comp->event.eid);
+
+ /* Have we run out of new completions to process? */
+- if (eid <= lif->last_eid)
++ if ((s64)(eid - lif->last_eid) <= 0)
+ return false;
+
+ lif->last_eid = eid;
+diff --git a/drivers/net/ethernet/qlogic/qed/qed_cxt.c b/drivers/net/ethernet/qlogic/qed/qed_cxt.c
+index d13ec88313c3..eb70fdddddbf 100644
+--- a/drivers/net/ethernet/qlogic/qed/qed_cxt.c
++++ b/drivers/net/ethernet/qlogic/qed/qed_cxt.c
+@@ -2355,6 +2355,11 @@ qed_cxt_free_ilt_range(struct qed_hwfn *p_hwfn,
+ elem_size = SRQ_CXT_SIZE;
+ p_blk = &p_cli->pf_blks[SRQ_BLK];
+ break;
++ case QED_ELEM_XRC_SRQ:
++ p_cli = &p_hwfn->p_cxt_mngr->clients[ILT_CLI_TSDM];
++ elem_size = XRC_SRQ_CXT_SIZE;
++ p_blk = &p_cli->pf_blks[SRQ_BLK];
++ break;
+ case QED_ELEM_TASK:
+ p_cli = &p_hwfn->p_cxt_mngr->clients[ILT_CLI_CDUT];
+ elem_size = TYPE1_TASK_CXT_SIZE(p_hwfn);
+diff --git a/drivers/net/ethernet/qlogic/qed/qed_rdma.c b/drivers/net/ethernet/qlogic/qed/qed_rdma.c
+index 19c0c8864da1..4ad5f21de79e 100644
+--- a/drivers/net/ethernet/qlogic/qed/qed_rdma.c
++++ b/drivers/net/ethernet/qlogic/qed/qed_rdma.c
+@@ -404,6 +404,7 @@ static void qed_rdma_resc_free(struct qed_hwfn *p_hwfn)
+ qed_rdma_bmap_free(p_hwfn, &p_hwfn->p_rdma_info->srq_map, 1);
+ qed_rdma_bmap_free(p_hwfn, &p_hwfn->p_rdma_info->real_cid_map, 1);
+ qed_rdma_bmap_free(p_hwfn, &p_hwfn->p_rdma_info->xrc_srq_map, 1);
++ qed_rdma_bmap_free(p_hwfn, &p_hwfn->p_rdma_info->xrcd_map, 1);
+
+ kfree(p_rdma_info->port);
+ kfree(p_rdma_info->dev);
+diff --git a/drivers/net/ethernet/sgi/ioc3-eth.c b/drivers/net/ethernet/sgi/ioc3-eth.c
+index 6646eba9f57f..6eef0f45b133 100644
+--- a/drivers/net/ethernet/sgi/ioc3-eth.c
++++ b/drivers/net/ethernet/sgi/ioc3-eth.c
+@@ -951,7 +951,7 @@ out_stop:
+ dma_free_coherent(ip->dma_dev, RX_RING_SIZE, ip->rxr,
+ ip->rxr_dma);
+ if (ip->tx_ring)
+- dma_free_coherent(ip->dma_dev, TX_RING_SIZE, ip->tx_ring,
++ dma_free_coherent(ip->dma_dev, TX_RING_SIZE + SZ_16K - 1, ip->tx_ring,
+ ip->txr_dma);
+ out_free:
+ free_netdev(dev);
+@@ -964,7 +964,7 @@ static int ioc3eth_remove(struct platform_device *pdev)
+ struct ioc3_private *ip = netdev_priv(dev);
+
+ dma_free_coherent(ip->dma_dev, RX_RING_SIZE, ip->rxr, ip->rxr_dma);
+- dma_free_coherent(ip->dma_dev, TX_RING_SIZE, ip->tx_ring, ip->txr_dma);
++ dma_free_coherent(ip->dma_dev, TX_RING_SIZE + SZ_16K - 1, ip->tx_ring, ip->txr_dma);
+
+ unregister_netdev(dev);
+ del_timer_sync(&ip->ioc3_timer);
+diff --git a/drivers/net/ethernet/ti/am65-cpsw-nuss.c b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
+index 6d778bc3d012..88832277edd5 100644
+--- a/drivers/net/ethernet/ti/am65-cpsw-nuss.c
++++ b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
+@@ -223,6 +223,9 @@ static int am65_cpsw_nuss_ndo_slave_add_vid(struct net_device *ndev,
+ u32 port_mask, unreg_mcast = 0;
+ int ret;
+
++ if (!netif_running(ndev) || !vid)
++ return 0;
++
+ ret = pm_runtime_get_sync(common->dev);
+ if (ret < 0) {
+ pm_runtime_put_noidle(common->dev);
+@@ -246,6 +249,9 @@ static int am65_cpsw_nuss_ndo_slave_kill_vid(struct net_device *ndev,
+ struct am65_cpsw_common *common = am65_ndev_to_common(ndev);
+ int ret;
+
++ if (!netif_running(ndev) || !vid)
++ return 0;
++
+ ret = pm_runtime_get_sync(common->dev);
+ if (ret < 0) {
+ pm_runtime_put_noidle(common->dev);
+@@ -571,6 +577,16 @@ static int am65_cpsw_nuss_ndo_slave_stop(struct net_device *ndev)
+ return 0;
+ }
+
++static int cpsw_restore_vlans(struct net_device *vdev, int vid, void *arg)
++{
++ struct am65_cpsw_port *port = arg;
++
++ if (!vdev)
++ return 0;
++
++ return am65_cpsw_nuss_ndo_slave_add_vid(port->ndev, 0, vid);
++}
++
+ static int am65_cpsw_nuss_ndo_slave_open(struct net_device *ndev)
+ {
+ struct am65_cpsw_common *common = am65_ndev_to_common(ndev);
+@@ -644,6 +660,9 @@ static int am65_cpsw_nuss_ndo_slave_open(struct net_device *ndev)
+ }
+ }
+
++ /* restore vlan configurations */
++ vlan_for_each(ndev, cpsw_restore_vlans, port);
++
+ phy_attached_info(port->slave.phy);
+ phy_start(port->slave.phy);
+
+diff --git a/drivers/net/ethernet/toshiba/spider_net.c b/drivers/net/ethernet/toshiba/spider_net.c
+index 3902b3aeb0c2..94267e1f5d30 100644
+--- a/drivers/net/ethernet/toshiba/spider_net.c
++++ b/drivers/net/ethernet/toshiba/spider_net.c
+@@ -283,8 +283,8 @@ spider_net_free_chain(struct spider_net_card *card,
+ descr = descr->next;
+ } while (descr != chain->ring);
+
+- dma_free_coherent(&card->pdev->dev, chain->num_desc,
+- chain->hwring, chain->dma_addr);
++ dma_free_coherent(&card->pdev->dev, chain->num_desc * sizeof(struct spider_net_hw_descr),
++ chain->hwring, chain->dma_addr);
+ }
+
+ /**
+diff --git a/drivers/net/ethernet/xilinx/ll_temac_main.c b/drivers/net/ethernet/xilinx/ll_temac_main.c
+index 929244064abd..9a15f14daa47 100644
+--- a/drivers/net/ethernet/xilinx/ll_temac_main.c
++++ b/drivers/net/ethernet/xilinx/ll_temac_main.c
+@@ -1407,10 +1407,8 @@ static int temac_probe(struct platform_device *pdev)
+ }
+
+ /* map device registers */
+- res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+- lp->regs = devm_ioremap(&pdev->dev, res->start,
+- resource_size(res));
+- if (!lp->regs) {
++ lp->regs = devm_platform_ioremap_resource_byname(pdev, 0);
++ if (IS_ERR(lp->regs)) {
+ dev_err(&pdev->dev, "could not map TEMAC registers\n");
+ return -ENOMEM;
+ }
+diff --git a/drivers/net/hyperv/netvsc_drv.c b/drivers/net/hyperv/netvsc_drv.c
+index 6267f706e8ee..0d779bba1b01 100644
+--- a/drivers/net/hyperv/netvsc_drv.c
++++ b/drivers/net/hyperv/netvsc_drv.c
+@@ -532,12 +532,13 @@ static int netvsc_xmit(struct sk_buff *skb, struct net_device *net, bool xdp_tx)
+ u32 hash;
+ struct hv_page_buffer pb[MAX_PAGE_BUFFER_COUNT];
+
+- /* if VF is present and up then redirect packets
+- * already called with rcu_read_lock_bh
++ /* If VF is present and up then redirect packets to it.
++ * Skip the VF if it is marked down or has no carrier.
++ * If netpoll is in use, then VF cannot be used either.
+ */
+ vf_netdev = rcu_dereference_bh(net_device_ctx->vf_netdev);
+ if (vf_netdev && netif_running(vf_netdev) &&
+- !netpoll_tx_running(net))
++ netif_carrier_ok(vf_netdev) && !netpoll_tx_running(net))
+ return netvsc_vf_xmit(net, vf_netdev, skb);
+
+ /* We will atmost need two pages to describe the rndis
+diff --git a/drivers/net/phy/marvell10g.c b/drivers/net/phy/marvell10g.c
+index d4c2e62b2439..179f5ea405d8 100644
+--- a/drivers/net/phy/marvell10g.c
++++ b/drivers/net/phy/marvell10g.c
+@@ -205,13 +205,6 @@ static int mv3310_hwmon_config(struct phy_device *phydev, bool enable)
+ MV_V2_TEMP_CTRL_MASK, val);
+ }
+
+-static void mv3310_hwmon_disable(void *data)
+-{
+- struct phy_device *phydev = data;
+-
+- mv3310_hwmon_config(phydev, false);
+-}
+-
+ static int mv3310_hwmon_probe(struct phy_device *phydev)
+ {
+ struct device *dev = &phydev->mdio.dev;
+@@ -235,10 +228,6 @@ static int mv3310_hwmon_probe(struct phy_device *phydev)
+ if (ret)
+ return ret;
+
+- ret = devm_add_action_or_reset(dev, mv3310_hwmon_disable, phydev);
+- if (ret)
+- return ret;
+-
+ priv->hwmon_dev = devm_hwmon_device_register_with_info(dev,
+ priv->hwmon_name, phydev,
+ &mv3310_hwmon_chip_info, NULL);
+@@ -423,6 +412,11 @@ static int mv3310_probe(struct phy_device *phydev)
+ return phy_sfp_probe(phydev, &mv3310_sfp_ops);
+ }
+
++static void mv3310_remove(struct phy_device *phydev)
++{
++ mv3310_hwmon_config(phydev, false);
++}
++
+ static int mv3310_suspend(struct phy_device *phydev)
+ {
+ return mv3310_power_down(phydev);
+@@ -762,6 +756,7 @@ static struct phy_driver mv3310_drivers[] = {
+ .read_status = mv3310_read_status,
+ .get_tunable = mv3310_get_tunable,
+ .set_tunable = mv3310_set_tunable,
++ .remove = mv3310_remove,
+ },
+ {
+ .phy_id = MARVELL_PHY_ID_88E2110,
+@@ -776,6 +771,7 @@ static struct phy_driver mv3310_drivers[] = {
+ .read_status = mv3310_read_status,
+ .get_tunable = mv3310_get_tunable,
+ .set_tunable = mv3310_set_tunable,
++ .remove = mv3310_remove,
+ },
+ };
+
+diff --git a/drivers/net/phy/mscc/mscc_main.c b/drivers/net/phy/mscc/mscc_main.c
+index 5ddc44f87eaf..8f5f2586e784 100644
+--- a/drivers/net/phy/mscc/mscc_main.c
++++ b/drivers/net/phy/mscc/mscc_main.c
+@@ -1379,6 +1379,11 @@ static int vsc8584_config_init(struct phy_device *phydev)
+ if (ret)
+ goto err;
+
++ ret = phy_base_write(phydev, MSCC_EXT_PAGE_ACCESS,
++ MSCC_PHY_PAGE_STANDARD);
++ if (ret)
++ goto err;
++
+ if (!phy_interface_is_rgmii(phydev)) {
+ val = PROC_CMD_MCB_ACCESS_MAC_CONF | PROC_CMD_RST_CONF_PORT |
+ PROC_CMD_READ_MOD_WRITE_PORT;
+@@ -1751,7 +1756,11 @@ static int vsc8514_config_init(struct phy_device *phydev)
+ val &= ~MAC_CFG_MASK;
+ val |= MAC_CFG_QSGMII;
+ ret = phy_base_write(phydev, MSCC_PHY_MAC_CFG_FASTLINK, val);
++ if (ret)
++ goto err;
+
++ ret = phy_base_write(phydev, MSCC_EXT_PAGE_ACCESS,
++ MSCC_PHY_PAGE_STANDARD);
+ if (ret)
+ goto err;
+
+diff --git a/drivers/net/phy/phy_device.c b/drivers/net/phy/phy_device.c
+index b4978c5fb2ca..98369430a3be 100644
+--- a/drivers/net/phy/phy_device.c
++++ b/drivers/net/phy/phy_device.c
+@@ -616,7 +616,9 @@ struct phy_device *phy_device_create(struct mii_bus *bus, int addr, u32 phy_id,
+ if (c45_ids)
+ dev->c45_ids = *c45_ids;
+ dev->irq = bus->irq[addr];
++
+ dev_set_name(&mdiodev->dev, PHY_ID_FMT, bus->id, addr);
++ device_initialize(&mdiodev->dev);
+
+ dev->state = PHY_DOWN;
+
+@@ -650,10 +652,8 @@ struct phy_device *phy_device_create(struct mii_bus *bus, int addr, u32 phy_id,
+ ret = phy_request_driver_module(dev, phy_id);
+ }
+
+- if (!ret) {
+- device_initialize(&mdiodev->dev);
+- } else {
+- kfree(dev);
++ if (ret) {
++ put_device(&mdiodev->dev);
+ dev = ERR_PTR(ret);
+ }
+
+diff --git a/drivers/net/usb/r8152.c b/drivers/net/usb/r8152.c
+index 7d39f998535d..2b02fefd094d 100644
+--- a/drivers/net/usb/r8152.c
++++ b/drivers/net/usb/r8152.c
+@@ -1504,7 +1504,7 @@ static int determine_ethernet_addr(struct r8152 *tp, struct sockaddr *sa)
+
+ sa->sa_family = dev->type;
+
+- ret = eth_platform_get_mac_address(&dev->dev, sa->sa_data);
++ ret = eth_platform_get_mac_address(&tp->udev->dev, sa->sa_data);
+ if (ret < 0) {
+ if (tp->version == RTL_VER_01) {
+ ret = pla_ocp_read(tp, PLA_IDR, 8, sa->sa_data);
+diff --git a/drivers/net/vmxnet3/vmxnet3_drv.c b/drivers/net/vmxnet3/vmxnet3_drv.c
+index ca395f9679d0..2818015324b8 100644
+--- a/drivers/net/vmxnet3/vmxnet3_drv.c
++++ b/drivers/net/vmxnet3/vmxnet3_drv.c
+@@ -886,7 +886,8 @@ vmxnet3_parse_hdr(struct sk_buff *skb, struct vmxnet3_tx_queue *tq,
+
+ switch (protocol) {
+ case IPPROTO_TCP:
+- ctx->l4_hdr_size = tcp_hdrlen(skb);
++ ctx->l4_hdr_size = skb->encapsulation ? inner_tcp_hdrlen(skb) :
++ tcp_hdrlen(skb);
+ break;
+ case IPPROTO_UDP:
+ ctx->l4_hdr_size = sizeof(struct udphdr);
+diff --git a/drivers/net/vxlan.c b/drivers/net/vxlan.c
+index a7c3939264b0..35a7d409d8d3 100644
+--- a/drivers/net/vxlan.c
++++ b/drivers/net/vxlan.c
+@@ -2722,7 +2722,7 @@ static void vxlan_xmit_one(struct sk_buff *skb, struct net_device *dev,
+ ndst = &rt->dst;
+ skb_tunnel_check_pmtu(skb, ndst, VXLAN_HEADROOM);
+
+- tos = ip_tunnel_ecn_encap(RT_TOS(tos), old_iph, skb);
++ tos = ip_tunnel_ecn_encap(tos, old_iph, skb);
+ ttl = ttl ? : ip4_dst_hoplimit(&rt->dst);
+ err = vxlan_build_skb(skb, ndst, sizeof(struct iphdr),
+ vni, md, flags, udp_sum);
+@@ -2762,7 +2762,7 @@ static void vxlan_xmit_one(struct sk_buff *skb, struct net_device *dev,
+
+ skb_tunnel_check_pmtu(skb, ndst, VXLAN6_HEADROOM);
+
+- tos = ip_tunnel_ecn_encap(RT_TOS(tos), old_iph, skb);
++ tos = ip_tunnel_ecn_encap(tos, old_iph, skb);
+ ttl = ttl ? : ip6_dst_hoplimit(ndst);
+ skb_scrub_packet(skb, xnet);
+ err = vxlan_build_skb(skb, ndst, sizeof(struct ipv6hdr),
+diff --git a/drivers/net/wan/lapbether.c b/drivers/net/wan/lapbether.c
+index b2868433718f..1ea15f2123ed 100644
+--- a/drivers/net/wan/lapbether.c
++++ b/drivers/net/wan/lapbether.c
+@@ -157,6 +157,12 @@ static netdev_tx_t lapbeth_xmit(struct sk_buff *skb,
+ if (!netif_running(dev))
+ goto drop;
+
++ /* There should be a pseudo header of 1 byte added by upper layers.
++ * Check to make sure it is present before reading it.
++ */
++ if (skb->len < 1)
++ goto drop;
++
+ switch (skb->data[0]) {
+ case X25_IFACE_DATA:
+ break;
+@@ -305,6 +311,7 @@ static void lapbeth_setup(struct net_device *dev)
+ dev->netdev_ops = &lapbeth_netdev_ops;
+ dev->needs_free_netdev = true;
+ dev->type = ARPHRD_X25;
++ dev->hard_header_len = 0;
+ dev->mtu = 1000;
+ dev->addr_len = 0;
+ }
+@@ -331,7 +338,8 @@ static int lapbeth_new_device(struct net_device *dev)
+ * then this driver prepends a length field of 2 bytes,
+ * then the underlying Ethernet device prepends its own header.
+ */
+- ndev->hard_header_len = -1 + 3 + 2 + dev->hard_header_len;
++ ndev->needed_headroom = -1 + 3 + 2 + dev->hard_header_len
++ + dev->needed_headroom;
+
+ lapbeth = netdev_priv(ndev);
+ lapbeth->axdev = ndev;
+diff --git a/drivers/net/wireless/ath/ath10k/htt_tx.c b/drivers/net/wireless/ath/ath10k/htt_tx.c
+index 4fd10ac3a941..bbe869575855 100644
+--- a/drivers/net/wireless/ath/ath10k/htt_tx.c
++++ b/drivers/net/wireless/ath/ath10k/htt_tx.c
+@@ -1591,7 +1591,9 @@ static int ath10k_htt_tx_32(struct ath10k_htt *htt,
+ err_unmap_msdu:
+ dma_unmap_single(dev, skb_cb->paddr, msdu->len, DMA_TO_DEVICE);
+ err_free_msdu_id:
++ spin_lock_bh(&htt->tx_lock);
+ ath10k_htt_tx_free_msdu_id(htt, msdu_id);
++ spin_unlock_bh(&htt->tx_lock);
+ err:
+ return res;
+ }
+@@ -1798,7 +1800,9 @@ static int ath10k_htt_tx_64(struct ath10k_htt *htt,
+ err_unmap_msdu:
+ dma_unmap_single(dev, skb_cb->paddr, msdu->len, DMA_TO_DEVICE);
+ err_free_msdu_id:
++ spin_lock_bh(&htt->tx_lock);
+ ath10k_htt_tx_free_msdu_id(htt, msdu_id);
++ spin_unlock_bh(&htt->tx_lock);
+ err:
+ return res;
+ }
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fwil_types.h b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fwil_types.h
+index de0ef1b545c4..2e31cc10c195 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fwil_types.h
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fwil_types.h
+@@ -19,7 +19,7 @@
+ #define BRCMF_ARP_OL_PEER_AUTO_REPLY 0x00000008
+
+ #define BRCMF_BSS_INFO_VERSION 109 /* curr ver of brcmf_bss_info_le struct */
+-#define BRCMF_BSS_RSSI_ON_CHANNEL 0x0002
++#define BRCMF_BSS_RSSI_ON_CHANNEL 0x0004
+
+ #define BRCMF_STA_BRCM 0x00000001 /* Running a Broadcom driver */
+ #define BRCMF_STA_WME 0x00000002 /* WMM association */
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fwsignal.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fwsignal.c
+index 09701262330d..babaac682f13 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fwsignal.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fwsignal.c
+@@ -621,6 +621,7 @@ static inline int brcmf_fws_hanger_poppkt(struct brcmf_fws_hanger *h,
+ static void brcmf_fws_psq_flush(struct brcmf_fws_info *fws, struct pktq *q,
+ int ifidx)
+ {
++ struct brcmf_fws_hanger_item *hi;
+ bool (*matchfn)(struct sk_buff *, void *) = NULL;
+ struct sk_buff *skb;
+ int prec;
+@@ -632,6 +633,9 @@ static void brcmf_fws_psq_flush(struct brcmf_fws_info *fws, struct pktq *q,
+ skb = brcmu_pktq_pdeq_match(q, prec, matchfn, &ifidx);
+ while (skb) {
+ hslot = brcmf_skb_htod_tag_get_field(skb, HSLOT);
++ hi = &fws->hanger.items[hslot];
++ WARN_ON(skb != hi->pkt);
++ hi->state = BRCMF_FWS_HANGER_ITEM_STATE_FREE;
+ brcmf_fws_hanger_poppkt(&fws->hanger, hslot, &skb,
+ true);
+ brcmu_pkt_buf_free_skb(skb);
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/sdio.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/sdio.c
+index 310d8075f5d7..bc02168ebb53 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/sdio.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/sdio.c
+@@ -3699,7 +3699,11 @@ static void brcmf_sdio_bus_watchdog(struct brcmf_sdio *bus)
+ if (bus->idlecount > bus->idletime) {
+ brcmf_dbg(SDIO, "idle\n");
+ sdio_claim_host(bus->sdiodev->func1);
+- brcmf_sdio_wd_timer(bus, false);
++#ifdef DEBUG
++ if (!BRCMF_FWCON_ON() ||
++ bus->console_interval == 0)
++#endif
++ brcmf_sdio_wd_timer(bus, false);
+ bus->idlecount = 0;
+ brcmf_sdio_bus_sleep(bus, true, false);
+ sdio_release_host(bus->sdiodev->func1);
+diff --git a/drivers/net/wireless/intel/iwlegacy/common.c b/drivers/net/wireless/intel/iwlegacy/common.c
+index 348c17ce72f5..f78e062df572 100644
+--- a/drivers/net/wireless/intel/iwlegacy/common.c
++++ b/drivers/net/wireless/intel/iwlegacy/common.c
+@@ -4286,8 +4286,8 @@ il_apm_init(struct il_priv *il)
+ * power savings, even without L1.
+ */
+ if (il->cfg->set_l0s) {
+- pcie_capability_read_word(il->pci_dev, PCI_EXP_LNKCTL, &lctl);
+- if (lctl & PCI_EXP_LNKCTL_ASPM_L1) {
++ ret = pcie_capability_read_word(il->pci_dev, PCI_EXP_LNKCTL, &lctl);
++ if (!ret && (lctl & PCI_EXP_LNKCTL_ASPM_L1)) {
+ /* L1-ASPM enabled; disable(!) L0S */
+ il_set_bit(il, CSR_GIO_REG,
+ CSR_GIO_REG_VAL_L0S_ENABLED);
+diff --git a/drivers/net/wireless/marvell/mwifiex/sdio.h b/drivers/net/wireless/marvell/mwifiex/sdio.h
+index 71cd8629b28e..8b476b007c5e 100644
+--- a/drivers/net/wireless/marvell/mwifiex/sdio.h
++++ b/drivers/net/wireless/marvell/mwifiex/sdio.h
+@@ -36,9 +36,9 @@
+ #define SD8897_DEFAULT_FW_NAME "mrvl/sd8897_uapsta.bin"
+ #define SD8887_DEFAULT_FW_NAME "mrvl/sd8887_uapsta.bin"
+ #define SD8801_DEFAULT_FW_NAME "mrvl/sd8801_uapsta.bin"
+-#define SD8977_DEFAULT_FW_NAME "mrvl/sd8977_uapsta.bin"
++#define SD8977_DEFAULT_FW_NAME "mrvl/sdsd8977_combo_v2.bin"
+ #define SD8987_DEFAULT_FW_NAME "mrvl/sd8987_uapsta.bin"
+-#define SD8997_DEFAULT_FW_NAME "mrvl/sd8997_uapsta.bin"
++#define SD8997_DEFAULT_FW_NAME "mrvl/sdsd8997_combo_v4.bin"
+
+ #define BLOCK_MODE 1
+ #define BYTE_MODE 0
+diff --git a/drivers/net/wireless/marvell/mwifiex/sta_cmdresp.c b/drivers/net/wireless/marvell/mwifiex/sta_cmdresp.c
+index f21660149f58..962d8bfe6f10 100644
+--- a/drivers/net/wireless/marvell/mwifiex/sta_cmdresp.c
++++ b/drivers/net/wireless/marvell/mwifiex/sta_cmdresp.c
+@@ -580,6 +580,11 @@ static int mwifiex_ret_802_11_key_material_v1(struct mwifiex_private *priv,
+ {
+ struct host_cmd_ds_802_11_key_material *key =
+ &resp->params.key_material;
++ int len;
++
++ len = le16_to_cpu(key->key_param_set.key_len);
++ if (len > sizeof(key->key_param_set.key))
++ return -EINVAL;
+
+ if (le16_to_cpu(key->action) == HostCmd_ACT_GEN_SET) {
+ if ((le16_to_cpu(key->key_param_set.key_info) & KEY_MCAST)) {
+@@ -593,9 +598,8 @@ static int mwifiex_ret_802_11_key_material_v1(struct mwifiex_private *priv,
+
+ memset(priv->aes_key.key_param_set.key, 0,
+ sizeof(key->key_param_set.key));
+- priv->aes_key.key_param_set.key_len = key->key_param_set.key_len;
+- memcpy(priv->aes_key.key_param_set.key, key->key_param_set.key,
+- le16_to_cpu(priv->aes_key.key_param_set.key_len));
++ priv->aes_key.key_param_set.key_len = cpu_to_le16(len);
++ memcpy(priv->aes_key.key_param_set.key, key->key_param_set.key, len);
+
+ return 0;
+ }
+@@ -610,9 +614,14 @@ static int mwifiex_ret_802_11_key_material_v2(struct mwifiex_private *priv,
+ struct host_cmd_ds_command *resp)
+ {
+ struct host_cmd_ds_802_11_key_material_v2 *key_v2;
+- __le16 len;
++ int len;
+
+ key_v2 = &resp->params.key_material_v2;
++
++ len = le16_to_cpu(key_v2->key_param_set.key_params.aes.key_len);
++ if (len > WLAN_KEY_LEN_CCMP)
++ return -EINVAL;
++
+ if (le16_to_cpu(key_v2->action) == HostCmd_ACT_GEN_SET) {
+ if ((le16_to_cpu(key_v2->key_param_set.key_info) & KEY_MCAST)) {
+ mwifiex_dbg(priv->adapter, INFO, "info: key: GTK is set\n");
+@@ -628,10 +637,9 @@ static int mwifiex_ret_802_11_key_material_v2(struct mwifiex_private *priv,
+ memset(priv->aes_key_v2.key_param_set.key_params.aes.key, 0,
+ WLAN_KEY_LEN_CCMP);
+ priv->aes_key_v2.key_param_set.key_params.aes.key_len =
+- key_v2->key_param_set.key_params.aes.key_len;
+- len = priv->aes_key_v2.key_param_set.key_params.aes.key_len;
++ cpu_to_le16(len);
+ memcpy(priv->aes_key_v2.key_param_set.key_params.aes.key,
+- key_v2->key_param_set.key_params.aes.key, le16_to_cpu(len));
++ key_v2->key_param_set.key_params.aes.key, len);
+
+ return 0;
+ }
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7615/mcu.c b/drivers/net/wireless/mediatek/mt76/mt7615/mcu.c
+index 6e869b8c5e26..cb8c1d80ead9 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7615/mcu.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7615/mcu.c
+@@ -180,8 +180,10 @@ mt7615_mcu_parse_response(struct mt7615_dev *dev, int cmd,
+ struct mt7615_mcu_rxd *rxd = (struct mt7615_mcu_rxd *)skb->data;
+ int ret = 0;
+
+- if (seq != rxd->seq)
+- return -EAGAIN;
++ if (seq != rxd->seq) {
++ ret = -EAGAIN;
++ goto out;
++ }
+
+ switch (cmd) {
+ case MCU_CMD_PATCH_SEM_CONTROL:
+@@ -208,6 +210,7 @@ mt7615_mcu_parse_response(struct mt7615_dev *dev, int cmd,
+ default:
+ break;
+ }
++out:
+ dev_kfree_skb(skb);
+
+ return ret;
+@@ -1206,8 +1209,12 @@ mt7615_mcu_wtbl_sta_add(struct mt7615_dev *dev, struct ieee80211_vif *vif,
+ skb = enable ? wskb : sskb;
+
+ err = __mt76_mcu_skb_send_msg(&dev->mt76, skb, cmd, true);
+- if (err < 0)
++ if (err < 0) {
++ skb = enable ? sskb : wskb;
++ dev_kfree_skb(skb);
++
+ return err;
++ }
+
+ cmd = enable ? MCU_EXT_CMD_STA_REC_UPDATE : MCU_EXT_CMD_WTBL_UPDATE;
+ skb = enable ? sskb : wskb;
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7615/usb.c b/drivers/net/wireless/mediatek/mt76/mt7615/usb.c
+index 5be6704770ad..7906e6a71c5b 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7615/usb.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7615/usb.c
+@@ -166,12 +166,16 @@ __mt7663u_mac_set_key(struct mt7615_dev *dev,
+
+ lockdep_assert_held(&dev->mt76.mutex);
+
+- if (!sta)
+- return -EINVAL;
++ if (!sta) {
++ err = -EINVAL;
++ goto out;
++ }
+
+ cipher = mt7615_mac_get_cipher(key->cipher);
+- if (cipher == MT_CIPHER_NONE)
+- return -EOPNOTSUPP;
++ if (cipher == MT_CIPHER_NONE) {
++ err = -EOPNOTSUPP;
++ goto out;
++ }
+
+ wcid = &wd->sta->wcid;
+
+@@ -179,19 +183,22 @@ __mt7663u_mac_set_key(struct mt7615_dev *dev,
+ err = mt7615_mac_wtbl_update_key(dev, wcid, key->key, key->keylen,
+ cipher, key->cmd);
+ if (err < 0)
+- return err;
++ goto out;
+
+ err = mt7615_mac_wtbl_update_pk(dev, wcid, cipher, key->keyidx,
+ key->cmd);
+ if (err < 0)
+- return err;
++ goto out;
+
+ if (key->cmd == SET_KEY)
+ wcid->cipher |= BIT(cipher);
+ else
+ wcid->cipher &= ~BIT(cipher);
+
+- return 0;
++out:
++ kfree(key->key);
++
++ return err;
+ }
+
+ void mt7663u_wtbl_work(struct work_struct *work)
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7615/usb_mcu.c b/drivers/net/wireless/mediatek/mt76/mt7615/usb_mcu.c
+index cd709fd617db..3e66ff98cab8 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7615/usb_mcu.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7615/usb_mcu.c
+@@ -34,7 +34,6 @@ mt7663u_mcu_send_message(struct mt76_dev *mdev, struct sk_buff *skb,
+
+ ret = mt76u_bulk_msg(&dev->mt76, skb->data, skb->len, NULL,
+ 1000, ep);
+- dev_kfree_skb(skb);
+ if (ret < 0)
+ goto out;
+
+@@ -43,6 +42,7 @@ mt7663u_mcu_send_message(struct mt76_dev *mdev, struct sk_buff *skb,
+
+ out:
+ mutex_unlock(&mdev->mcu.mutex);
++ dev_kfree_skb(skb);
+
+ return ret;
+ }
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/debugfs.c b/drivers/net/wireless/mediatek/mt76/mt7915/debugfs.c
+index 5278bee812f1..7e48f56b5b08 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/debugfs.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/debugfs.c
+@@ -384,6 +384,7 @@ int mt7915_init_debugfs(struct mt7915_dev *dev)
+ return 0;
+ }
+
++#ifdef CONFIG_MAC80211_DEBUGFS
+ /** per-station debugfs **/
+
+ /* usage: <tx mode> <ldpc> <stbc> <bw> <gi> <nss> <mcs> */
+@@ -461,3 +462,4 @@ void mt7915_sta_add_debugfs(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
+ debugfs_create_file("fixed_rate", 0600, dir, sta, &fops_fixed_rate);
+ debugfs_create_file("stats", 0400, dir, sta, &fops_sta_stats);
+ }
++#endif
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/mcu.c b/drivers/net/wireless/mediatek/mt76/mt7915/mcu.c
+index c8c12c740c1a..8fb8255650a7 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/mcu.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/mcu.c
+@@ -505,15 +505,22 @@ static void
+ mt7915_mcu_tx_rate_report(struct mt7915_dev *dev, struct sk_buff *skb)
+ {
+ struct mt7915_mcu_ra_info *ra = (struct mt7915_mcu_ra_info *)skb->data;
+- u16 wcidx = le16_to_cpu(ra->wlan_idx);
+- struct mt76_wcid *wcid = rcu_dereference(dev->mt76.wcid[wcidx]);
+- struct mt7915_sta *msta = container_of(wcid, struct mt7915_sta, wcid);
+- struct mt7915_sta_stats *stats = &msta->stats;
+- struct mt76_phy *mphy = &dev->mphy;
+ struct rate_info rate = {}, prob_rate = {};
++ u16 probe = le16_to_cpu(ra->prob_up_rate);
+ u16 attempts = le16_to_cpu(ra->attempts);
+ u16 curr = le16_to_cpu(ra->curr_rate);
+- u16 probe = le16_to_cpu(ra->prob_up_rate);
++ u16 wcidx = le16_to_cpu(ra->wlan_idx);
++ struct mt76_phy *mphy = &dev->mphy;
++ struct mt7915_sta_stats *stats;
++ struct mt7915_sta *msta;
++ struct mt76_wcid *wcid;
++
++ if (wcidx >= MT76_N_WCIDS)
++ return;
++
++ wcid = rcu_dereference(dev->mt76.wcid[wcidx]);
++ msta = container_of(wcid, struct mt7915_sta, wcid);
++ stats = &msta->stats;
+
+ if (msta->wcid.ext_phy && dev->mt76.phy2)
+ mphy = dev->mt76.phy2;
+diff --git a/drivers/net/wireless/quantenna/qtnfmac/core.c b/drivers/net/wireless/quantenna/qtnfmac/core.c
+index eea777f8acea..6aafff9d4231 100644
+--- a/drivers/net/wireless/quantenna/qtnfmac/core.c
++++ b/drivers/net/wireless/quantenna/qtnfmac/core.c
+@@ -446,8 +446,11 @@ static struct qtnf_wmac *qtnf_core_mac_alloc(struct qtnf_bus *bus,
+ }
+
+ wiphy = qtnf_wiphy_allocate(bus, pdev);
+- if (!wiphy)
++ if (!wiphy) {
++ if (pdev)
++ platform_device_unregister(pdev);
+ return ERR_PTR(-ENOMEM);
++ }
+
+ mac = wiphy_priv(wiphy);
+
+diff --git a/drivers/net/wireless/realtek/rtw88/coex.c b/drivers/net/wireless/realtek/rtw88/coex.c
+index cbf3d503df1c..30ebe426a4ab 100644
+--- a/drivers/net/wireless/realtek/rtw88/coex.c
++++ b/drivers/net/wireless/realtek/rtw88/coex.c
+@@ -1934,7 +1934,8 @@ static void rtw_coex_run_coex(struct rtw_dev *rtwdev, u8 reason)
+ if (coex_stat->wl_under_ips)
+ return;
+
+- if (coex->freeze && !coex_stat->bt_setup_link)
++ if (coex->freeze && coex_dm->reason == COEX_RSN_BTINFO &&
++ !coex_stat->bt_setup_link)
+ return;
+
+ coex_stat->cnt_wl[COEX_CNT_WL_COEXRUN]++;
+diff --git a/drivers/net/wireless/realtek/rtw88/fw.c b/drivers/net/wireless/realtek/rtw88/fw.c
+index 6478fd7a78f6..13e79482f6d5 100644
+--- a/drivers/net/wireless/realtek/rtw88/fw.c
++++ b/drivers/net/wireless/realtek/rtw88/fw.c
+@@ -456,7 +456,7 @@ void rtw_fw_send_ra_info(struct rtw_dev *rtwdev, struct rtw_sta_info *si)
+ SET_RA_INFO_INIT_RA_LVL(h2c_pkt, si->init_ra_lv);
+ SET_RA_INFO_SGI_EN(h2c_pkt, si->sgi_enable);
+ SET_RA_INFO_BW_MODE(h2c_pkt, si->bw_mode);
+- SET_RA_INFO_LDPC(h2c_pkt, si->ldpc_en);
++ SET_RA_INFO_LDPC(h2c_pkt, !!si->ldpc_en);
+ SET_RA_INFO_NO_UPDATE(h2c_pkt, no_update);
+ SET_RA_INFO_VHT_EN(h2c_pkt, si->vht_enable);
+ SET_RA_INFO_DIS_PT(h2c_pkt, disable_pt);
+diff --git a/drivers/net/wireless/realtek/rtw88/main.c b/drivers/net/wireless/realtek/rtw88/main.c
+index 0eefafc51c62..665d4bbdee6a 100644
+--- a/drivers/net/wireless/realtek/rtw88/main.c
++++ b/drivers/net/wireless/realtek/rtw88/main.c
+@@ -722,8 +722,6 @@ void rtw_update_sta_info(struct rtw_dev *rtwdev, struct rtw_sta_info *si)
+ stbc_en = VHT_STBC_EN;
+ if (sta->vht_cap.cap & IEEE80211_VHT_CAP_RXLDPC)
+ ldpc_en = VHT_LDPC_EN;
+- if (sta->vht_cap.cap & IEEE80211_VHT_CAP_SHORT_GI_80)
+- is_support_sgi = true;
+ } else if (sta->ht_cap.ht_supported) {
+ ra_mask |= (sta->ht_cap.mcs.rx_mask[1] << 20) |
+ (sta->ht_cap.mcs.rx_mask[0] << 12);
+@@ -731,9 +729,6 @@ void rtw_update_sta_info(struct rtw_dev *rtwdev, struct rtw_sta_info *si)
+ stbc_en = HT_STBC_EN;
+ if (sta->ht_cap.cap & IEEE80211_HT_CAP_LDPC_CODING)
+ ldpc_en = HT_LDPC_EN;
+- if (sta->ht_cap.cap & IEEE80211_HT_CAP_SGI_20 ||
+- sta->ht_cap.cap & IEEE80211_HT_CAP_SGI_40)
+- is_support_sgi = true;
+ }
+
+ if (efuse->hw_cap.nss == 1)
+@@ -775,12 +770,18 @@ void rtw_update_sta_info(struct rtw_dev *rtwdev, struct rtw_sta_info *si)
+ switch (sta->bandwidth) {
+ case IEEE80211_STA_RX_BW_80:
+ bw_mode = RTW_CHANNEL_WIDTH_80;
++ is_support_sgi = sta->vht_cap.vht_supported &&
++ (sta->vht_cap.cap & IEEE80211_VHT_CAP_SHORT_GI_80);
+ break;
+ case IEEE80211_STA_RX_BW_40:
+ bw_mode = RTW_CHANNEL_WIDTH_40;
++ is_support_sgi = sta->ht_cap.ht_supported &&
++ (sta->ht_cap.cap & IEEE80211_HT_CAP_SGI_40);
+ break;
+ default:
+ bw_mode = RTW_CHANNEL_WIDTH_20;
++ is_support_sgi = sta->ht_cap.ht_supported &&
++ (sta->ht_cap.cap & IEEE80211_HT_CAP_SGI_20);
+ break;
+ }
+
+diff --git a/drivers/net/wireless/realtek/rtw88/rtw8822ce.c b/drivers/net/wireless/realtek/rtw88/rtw8822ce.c
+index 7b6bd990651e..026ac49ce6e3 100644
+--- a/drivers/net/wireless/realtek/rtw88/rtw8822ce.c
++++ b/drivers/net/wireless/realtek/rtw88/rtw8822ce.c
+@@ -11,6 +11,10 @@ static const struct pci_device_id rtw_8822ce_id_table[] = {
+ PCI_DEVICE(PCI_VENDOR_ID_REALTEK, 0xC822),
+ .driver_data = (kernel_ulong_t)&rtw8822c_hw_spec
+ },
++ {
++ PCI_DEVICE(PCI_VENDOR_ID_REALTEK, 0xC82F),
++ .driver_data = (kernel_ulong_t)&rtw8822c_hw_spec
++ },
+ {}
+ };
+ MODULE_DEVICE_TABLE(pci, rtw_8822ce_id_table);
+diff --git a/drivers/net/wireless/ti/wl1251/event.c b/drivers/net/wireless/ti/wl1251/event.c
+index 850864dbafa1..e6d426edab56 100644
+--- a/drivers/net/wireless/ti/wl1251/event.c
++++ b/drivers/net/wireless/ti/wl1251/event.c
+@@ -70,7 +70,7 @@ static int wl1251_event_ps_report(struct wl1251 *wl,
+ break;
+ }
+
+- return 0;
++ return ret;
+ }
+
+ static void wl1251_event_mbox_dump(struct event_mailbox *mbox)
+diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
+index 66509472fe06..57d51148e71b 100644
+--- a/drivers/nvme/host/multipath.c
++++ b/drivers/nvme/host/multipath.c
+@@ -246,6 +246,12 @@ static struct nvme_ns *nvme_round_robin_path(struct nvme_ns_head *head,
+ fallback = ns;
+ }
+
++ /* No optimized path found, re-check the current path */
++ if (!nvme_path_is_disabled(old) &&
++ old->ana_state == NVME_ANA_OPTIMIZED) {
++ found = old;
++ goto out;
++ }
+ if (!fallback)
+ return NULL;
+ found = fallback;
+@@ -266,10 +272,13 @@ inline struct nvme_ns *nvme_find_path(struct nvme_ns_head *head)
+ struct nvme_ns *ns;
+
+ ns = srcu_dereference(head->current_path[node], &head->srcu);
+- if (READ_ONCE(head->subsys->iopolicy) == NVME_IOPOLICY_RR && ns)
+- ns = nvme_round_robin_path(head, node, ns);
+- if (unlikely(!ns || !nvme_path_is_optimized(ns)))
+- ns = __nvme_find_path(head, node);
++ if (unlikely(!ns))
++ return __nvme_find_path(head, node);
++
++ if (READ_ONCE(head->subsys->iopolicy) == NVME_IOPOLICY_RR)
++ return nvme_round_robin_path(head, node, ns);
++ if (unlikely(!nvme_path_is_optimized(ns)))
++ return __nvme_find_path(head, node);
+ return ns;
+ }
+
+diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
+index 13506a87a444..af0cfd25ed7a 100644
+--- a/drivers/nvme/host/rdma.c
++++ b/drivers/nvme/host/rdma.c
+@@ -941,15 +941,20 @@ static int nvme_rdma_configure_io_queues(struct nvme_rdma_ctrl *ctrl, bool new)
+ ret = PTR_ERR(ctrl->ctrl.connect_q);
+ goto out_free_tag_set;
+ }
+- } else {
+- blk_mq_update_nr_hw_queues(&ctrl->tag_set,
+- ctrl->ctrl.queue_count - 1);
+ }
+
+ ret = nvme_rdma_start_io_queues(ctrl);
+ if (ret)
+ goto out_cleanup_connect_q;
+
++ if (!new) {
++ nvme_start_queues(&ctrl->ctrl);
++ nvme_wait_freeze(&ctrl->ctrl);
++ blk_mq_update_nr_hw_queues(ctrl->ctrl.tagset,
++ ctrl->ctrl.queue_count - 1);
++ nvme_unfreeze(&ctrl->ctrl);
++ }
++
+ return 0;
+
+ out_cleanup_connect_q:
+@@ -982,6 +987,7 @@ static void nvme_rdma_teardown_io_queues(struct nvme_rdma_ctrl *ctrl,
+ bool remove)
+ {
+ if (ctrl->ctrl.queue_count > 1) {
++ nvme_start_freeze(&ctrl->ctrl);
+ nvme_stop_queues(&ctrl->ctrl);
+ nvme_rdma_stop_io_queues(ctrl);
+ if (ctrl->ctrl.tagset) {
+diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
+index f3a91818167b..83bb329d4113 100644
+--- a/drivers/nvme/host/tcp.c
++++ b/drivers/nvme/host/tcp.c
+@@ -1744,15 +1744,20 @@ static int nvme_tcp_configure_io_queues(struct nvme_ctrl *ctrl, bool new)
+ ret = PTR_ERR(ctrl->connect_q);
+ goto out_free_tag_set;
+ }
+- } else {
+- blk_mq_update_nr_hw_queues(ctrl->tagset,
+- ctrl->queue_count - 1);
+ }
+
+ ret = nvme_tcp_start_io_queues(ctrl);
+ if (ret)
+ goto out_cleanup_connect_q;
+
++ if (!new) {
++ nvme_start_queues(ctrl);
++ nvme_wait_freeze(ctrl);
++ blk_mq_update_nr_hw_queues(ctrl->tagset,
++ ctrl->queue_count - 1);
++ nvme_unfreeze(ctrl);
++ }
++
+ return 0;
+
+ out_cleanup_connect_q:
+@@ -1857,6 +1862,7 @@ static void nvme_tcp_teardown_io_queues(struct nvme_ctrl *ctrl,
+ {
+ if (ctrl->queue_count <= 1)
+ return;
++ nvme_start_freeze(ctrl);
+ nvme_stop_queues(ctrl);
+ nvme_tcp_stop_io_queues(ctrl);
+ if (ctrl->tagset) {
+diff --git a/drivers/nvmem/sprd-efuse.c b/drivers/nvmem/sprd-efuse.c
+index 925feb21d5ad..59523245db8a 100644
+--- a/drivers/nvmem/sprd-efuse.c
++++ b/drivers/nvmem/sprd-efuse.c
+@@ -378,8 +378,8 @@ static int sprd_efuse_probe(struct platform_device *pdev)
+ return -ENOMEM;
+
+ efuse->base = devm_platform_ioremap_resource(pdev, 0);
+- if (!efuse->base)
+- return -ENOMEM;
++ if (IS_ERR(efuse->base))
++ return PTR_ERR(efuse->base);
+
+ ret = of_hwspin_lock_get_id(np, 0);
+ if (ret < 0) {
+diff --git a/drivers/parisc/sba_iommu.c b/drivers/parisc/sba_iommu.c
+index 7e112829d250..00785fa81ff7 100644
+--- a/drivers/parisc/sba_iommu.c
++++ b/drivers/parisc/sba_iommu.c
+@@ -1270,7 +1270,7 @@ sba_ioc_init_pluto(struct parisc_device *sba, struct ioc *ioc, int ioc_num)
+ ** (one that doesn't overlap memory or LMMIO space) in the
+ ** IBASE and IMASK registers.
+ */
+- ioc->ibase = READ_REG(ioc->ioc_hpa + IOC_IBASE);
++ ioc->ibase = READ_REG(ioc->ioc_hpa + IOC_IBASE) & ~0x1fffffULL;
+ iova_space_size = ~(READ_REG(ioc->ioc_hpa + IOC_IMASK) & 0xFFFFFFFFUL) + 1;
+
+ if ((ioc->ibase < 0xfed00000UL) && ((ioc->ibase + iova_space_size) > 0xfee00000UL)) {
+diff --git a/drivers/pci/access.c b/drivers/pci/access.c
+index 79c4a2ef269a..9793f17fa184 100644
+--- a/drivers/pci/access.c
++++ b/drivers/pci/access.c
+@@ -204,17 +204,13 @@ EXPORT_SYMBOL(pci_bus_set_ops);
+ static DECLARE_WAIT_QUEUE_HEAD(pci_cfg_wait);
+
+ static noinline void pci_wait_cfg(struct pci_dev *dev)
++ __must_hold(&pci_lock)
+ {
+- DECLARE_WAITQUEUE(wait, current);
+-
+- __add_wait_queue(&pci_cfg_wait, &wait);
+ do {
+- set_current_state(TASK_UNINTERRUPTIBLE);
+ raw_spin_unlock_irq(&pci_lock);
+- schedule();
++ wait_event(pci_cfg_wait, !dev->block_cfg_access);
+ raw_spin_lock_irq(&pci_lock);
+ } while (dev->block_cfg_access);
+- __remove_wait_queue(&pci_cfg_wait, &wait);
+ }
+
+ /* Returns 0 on success, negative values indicate error. */
+diff --git a/drivers/pci/controller/cadence/pcie-cadence-ep.c b/drivers/pci/controller/cadence/pcie-cadence-ep.c
+index 1c15c8352125..4a829ccff7d0 100644
+--- a/drivers/pci/controller/cadence/pcie-cadence-ep.c
++++ b/drivers/pci/controller/cadence/pcie-cadence-ep.c
+@@ -8,7 +8,6 @@
+ #include <linux/of.h>
+ #include <linux/pci-epc.h>
+ #include <linux/platform_device.h>
+-#include <linux/pm_runtime.h>
+ #include <linux/sizes.h>
+
+ #include "pcie-cadence.h"
+@@ -440,8 +439,7 @@ int cdns_pcie_ep_setup(struct cdns_pcie_ep *ep)
+ epc = devm_pci_epc_create(dev, &cdns_pcie_epc_ops);
+ if (IS_ERR(epc)) {
+ dev_err(dev, "failed to create epc device\n");
+- ret = PTR_ERR(epc);
+- goto err_init;
++ return PTR_ERR(epc);
+ }
+
+ epc_set_drvdata(epc, ep);
+@@ -453,7 +451,7 @@ int cdns_pcie_ep_setup(struct cdns_pcie_ep *ep)
+ resource_size(pcie->mem_res), PAGE_SIZE);
+ if (ret < 0) {
+ dev_err(dev, "failed to initialize the memory space\n");
+- goto err_init;
++ return ret;
+ }
+
+ ep->irq_cpu_addr = pci_epc_mem_alloc_addr(epc, &ep->irq_phys_addr,
+@@ -472,8 +470,5 @@ int cdns_pcie_ep_setup(struct cdns_pcie_ep *ep)
+ free_epc_mem:
+ pci_epc_mem_exit(epc);
+
+- err_init:
+- pm_runtime_put_sync(dev);
+-
+ return ret;
+ }
+diff --git a/drivers/pci/controller/cadence/pcie-cadence-host.c b/drivers/pci/controller/cadence/pcie-cadence-host.c
+index 8c2543f28ba0..b2411e8e6f18 100644
+--- a/drivers/pci/controller/cadence/pcie-cadence-host.c
++++ b/drivers/pci/controller/cadence/pcie-cadence-host.c
+@@ -7,7 +7,6 @@
+ #include <linux/of_address.h>
+ #include <linux/of_pci.h>
+ #include <linux/platform_device.h>
+-#include <linux/pm_runtime.h>
+
+ #include "pcie-cadence.h"
+
+@@ -70,6 +69,7 @@ static int cdns_pcie_host_init_root_port(struct cdns_pcie_rc *rc)
+ {
+ struct cdns_pcie *pcie = &rc->pcie;
+ u32 value, ctrl;
++ u32 id;
+
+ /*
+ * Set the root complex BAR configuration register:
+@@ -89,8 +89,12 @@ static int cdns_pcie_host_init_root_port(struct cdns_pcie_rc *rc)
+ cdns_pcie_writel(pcie, CDNS_PCIE_LM_RC_BAR_CFG, value);
+
+ /* Set root port configuration space */
+- if (rc->vendor_id != 0xffff)
+- cdns_pcie_rp_writew(pcie, PCI_VENDOR_ID, rc->vendor_id);
++ if (rc->vendor_id != 0xffff) {
++ id = CDNS_PCIE_LM_ID_VENDOR(rc->vendor_id) |
++ CDNS_PCIE_LM_ID_SUBSYS(rc->vendor_id);
++ cdns_pcie_writel(pcie, CDNS_PCIE_LM_ID, id);
++ }
++
+ if (rc->device_id != 0xffff)
+ cdns_pcie_rp_writew(pcie, PCI_DEVICE_ID, rc->device_id);
+
+@@ -250,7 +254,7 @@ int cdns_pcie_host_setup(struct cdns_pcie_rc *rc)
+
+ ret = cdns_pcie_host_init(dev, &resources, rc);
+ if (ret)
+- goto err_init;
++ return ret;
+
+ list_splice_init(&resources, &bridge->windows);
+ bridge->dev.parent = dev;
+@@ -268,8 +272,5 @@ int cdns_pcie_host_setup(struct cdns_pcie_rc *rc)
+ err_host_probe:
+ pci_free_resource_list(&resources);
+
+- err_init:
+- pm_runtime_put_sync(dev);
+-
+ return ret;
+ }
+diff --git a/drivers/pci/controller/pci-loongson.c b/drivers/pci/controller/pci-loongson.c
+index 459009c8a4a0..58b862aaa6e9 100644
+--- a/drivers/pci/controller/pci-loongson.c
++++ b/drivers/pci/controller/pci-loongson.c
+@@ -37,11 +37,11 @@ static void bridge_class_quirk(struct pci_dev *dev)
+ {
+ dev->class = PCI_CLASS_BRIDGE_PCI << 8;
+ }
+-DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_LOONGSON,
++DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_LOONGSON,
+ DEV_PCIE_PORT_0, bridge_class_quirk);
+-DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_LOONGSON,
++DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_LOONGSON,
+ DEV_PCIE_PORT_1, bridge_class_quirk);
+-DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_LOONGSON,
++DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_LOONGSON,
+ DEV_PCIE_PORT_2, bridge_class_quirk);
+
+ static void system_bus_quirk(struct pci_dev *pdev)
+diff --git a/drivers/pci/controller/pcie-rcar-host.c b/drivers/pci/controller/pcie-rcar-host.c
+index d210a36561be..060c24f5221e 100644
+--- a/drivers/pci/controller/pcie-rcar-host.c
++++ b/drivers/pci/controller/pcie-rcar-host.c
+@@ -986,7 +986,7 @@ static int rcar_pcie_probe(struct platform_device *pdev)
+ err = pm_runtime_get_sync(pcie->dev);
+ if (err < 0) {
+ dev_err(pcie->dev, "pm_runtime_get_sync failed\n");
+- goto err_pm_disable;
++ goto err_pm_put;
+ }
+
+ err = rcar_pcie_get_resources(host);
+@@ -1057,8 +1057,6 @@ err_unmap_msi_irqs:
+
+ err_pm_put:
+ pm_runtime_put(dev);
+-
+-err_pm_disable:
+ pm_runtime_disable(dev);
+ pci_free_resource_list(&host->resources);
+
+diff --git a/drivers/pci/controller/vmd.c b/drivers/pci/controller/vmd.c
+index 9a64cf90c291..ebec0a6e77ed 100644
+--- a/drivers/pci/controller/vmd.c
++++ b/drivers/pci/controller/vmd.c
+@@ -560,6 +560,7 @@ static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features)
+ if (!vmd->bus) {
+ pci_free_resource_list(&resources);
+ irq_domain_remove(vmd->irq_domain);
++ irq_domain_free_fwnode(fn);
+ return -ENODEV;
+ }
+
+@@ -673,6 +674,7 @@ static void vmd_cleanup_srcu(struct vmd_dev *vmd)
+ static void vmd_remove(struct pci_dev *dev)
+ {
+ struct vmd_dev *vmd = pci_get_drvdata(dev);
++ struct fwnode_handle *fn = vmd->irq_domain->fwnode;
+
+ sysfs_remove_link(&vmd->dev->dev.kobj, "domain");
+ pci_stop_root_bus(vmd->bus);
+@@ -680,6 +682,7 @@ static void vmd_remove(struct pci_dev *dev)
+ vmd_cleanup_srcu(vmd);
+ vmd_detach_resources(vmd);
+ irq_domain_remove(vmd->irq_domain);
++ irq_domain_free_fwnode(fn);
+ }
+
+ #ifdef CONFIG_PM_SLEEP
+diff --git a/drivers/pci/pcie/aspm.c b/drivers/pci/pcie/aspm.c
+index b17e5ffd31b1..253c30cc1967 100644
+--- a/drivers/pci/pcie/aspm.c
++++ b/drivers/pci/pcie/aspm.c
+@@ -1182,6 +1182,7 @@ static int pcie_aspm_get_policy(char *buffer, const struct kernel_param *kp)
+ cnt += sprintf(buffer + cnt, "[%s] ", policy_str[i]);
+ else
+ cnt += sprintf(buffer + cnt, "%s ", policy_str[i]);
++ cnt += sprintf(buffer + cnt, "\n");
+ return cnt;
+ }
+
+diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
+index 2ea61abd5830..d442219cd270 100644
+--- a/drivers/pci/quirks.c
++++ b/drivers/pci/quirks.c
+@@ -4422,6 +4422,8 @@ static int pci_quirk_amd_sb_acs(struct pci_dev *dev, u16 acs_flags)
+ if (ACPI_FAILURE(status))
+ return -ENODEV;
+
++ acpi_put_table(header);
++
+ /* Filter out flags not applicable to multifunction */
+ acs_flags &= (PCI_ACS_RR | PCI_ACS_CR | PCI_ACS_EC | PCI_ACS_DT);
+
+diff --git a/drivers/phy/cadence/phy-cadence-salvo.c b/drivers/phy/cadence/phy-cadence-salvo.c
+index 1ecbb964cd21..016514e4aa54 100644
+--- a/drivers/phy/cadence/phy-cadence-salvo.c
++++ b/drivers/phy/cadence/phy-cadence-salvo.c
+@@ -88,7 +88,7 @@
+ #define TB_ADDR_TX_RCVDETSC_CTRL 0x4124
+
+ /* TB_ADDR_TX_RCVDETSC_CTRL */
+-#define RXDET_IN_P3_32KHZ BIT(1)
++#define RXDET_IN_P3_32KHZ BIT(0)
+
+ struct cdns_reg_pairs {
+ u16 val;
+diff --git a/drivers/phy/marvell/phy-armada38x-comphy.c b/drivers/phy/marvell/phy-armada38x-comphy.c
+index 6960dfd8ad8c..0fe408964334 100644
+--- a/drivers/phy/marvell/phy-armada38x-comphy.c
++++ b/drivers/phy/marvell/phy-armada38x-comphy.c
+@@ -41,6 +41,7 @@ struct a38x_comphy_lane {
+
+ struct a38x_comphy {
+ void __iomem *base;
++ void __iomem *conf;
+ struct device *dev;
+ struct a38x_comphy_lane lane[MAX_A38X_COMPHY];
+ };
+@@ -54,6 +55,21 @@ static const u8 gbe_mux[MAX_A38X_COMPHY][MAX_A38X_PORTS] = {
+ { 0, 0, 3 },
+ };
+
++static void a38x_set_conf(struct a38x_comphy_lane *lane, bool enable)
++{
++ struct a38x_comphy *priv = lane->priv;
++ u32 conf;
++
++ if (priv->conf) {
++ conf = readl_relaxed(priv->conf);
++ if (enable)
++ conf |= BIT(lane->port);
++ else
++ conf &= ~BIT(lane->port);
++ writel(conf, priv->conf);
++ }
++}
++
+ static void a38x_comphy_set_reg(struct a38x_comphy_lane *lane,
+ unsigned int offset, u32 mask, u32 value)
+ {
+@@ -97,6 +113,7 @@ static int a38x_comphy_set_mode(struct phy *phy, enum phy_mode mode, int sub)
+ {
+ struct a38x_comphy_lane *lane = phy_get_drvdata(phy);
+ unsigned int gen;
++ int ret;
+
+ if (mode != PHY_MODE_ETHERNET)
+ return -EINVAL;
+@@ -115,13 +132,20 @@ static int a38x_comphy_set_mode(struct phy *phy, enum phy_mode mode, int sub)
+ return -EINVAL;
+ }
+
++ a38x_set_conf(lane, false);
++
+ a38x_comphy_set_speed(lane, gen, gen);
+
+- return a38x_comphy_poll(lane, COMPHY_STAT1,
+- COMPHY_STAT1_PLL_RDY_TX |
+- COMPHY_STAT1_PLL_RDY_RX,
+- COMPHY_STAT1_PLL_RDY_TX |
+- COMPHY_STAT1_PLL_RDY_RX);
++ ret = a38x_comphy_poll(lane, COMPHY_STAT1,
++ COMPHY_STAT1_PLL_RDY_TX |
++ COMPHY_STAT1_PLL_RDY_RX,
++ COMPHY_STAT1_PLL_RDY_TX |
++ COMPHY_STAT1_PLL_RDY_RX);
++
++ if (ret == 0)
++ a38x_set_conf(lane, true);
++
++ return ret;
+ }
+
+ static const struct phy_ops a38x_comphy_ops = {
+@@ -174,14 +198,21 @@ static int a38x_comphy_probe(struct platform_device *pdev)
+ if (!priv)
+ return -ENOMEM;
+
+- res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+- base = devm_ioremap_resource(&pdev->dev, res);
++ base = devm_platform_ioremap_resource(pdev, 0);
+ if (IS_ERR(base))
+ return PTR_ERR(base);
+
+ priv->dev = &pdev->dev;
+ priv->base = base;
+
++ /* Optional */
++ res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "conf");
++ if (res) {
++ priv->conf = devm_ioremap_resource(&pdev->dev, res);
++ if (IS_ERR(priv->conf))
++ return PTR_ERR(priv->conf);
++ }
++
+ for_each_available_child_of_node(pdev->dev.of_node, child) {
+ struct phy *phy;
+ int ret;
+diff --git a/drivers/phy/renesas/phy-rcar-gen3-usb2.c b/drivers/phy/renesas/phy-rcar-gen3-usb2.c
+index bfb22f868857..5087b7c44d55 100644
+--- a/drivers/phy/renesas/phy-rcar-gen3-usb2.c
++++ b/drivers/phy/renesas/phy-rcar-gen3-usb2.c
+@@ -111,6 +111,7 @@ struct rcar_gen3_chan {
+ struct work_struct work;
+ struct mutex lock; /* protects rphys[...].powered */
+ enum usb_dr_mode dr_mode;
++ int irq;
+ bool extcon_host;
+ bool is_otg_channel;
+ bool uses_otg_pins;
+@@ -389,12 +390,38 @@ static void rcar_gen3_init_otg(struct rcar_gen3_chan *ch)
+ rcar_gen3_device_recognition(ch);
+ }
+
++static irqreturn_t rcar_gen3_phy_usb2_irq(int irq, void *_ch)
++{
++ struct rcar_gen3_chan *ch = _ch;
++ void __iomem *usb2_base = ch->base;
++ u32 status = readl(usb2_base + USB2_OBINTSTA);
++ irqreturn_t ret = IRQ_NONE;
++
++ if (status & USB2_OBINT_BITS) {
++ dev_vdbg(ch->dev, "%s: %08x\n", __func__, status);
++ writel(USB2_OBINT_BITS, usb2_base + USB2_OBINTSTA);
++ rcar_gen3_device_recognition(ch);
++ ret = IRQ_HANDLED;
++ }
++
++ return ret;
++}
++
+ static int rcar_gen3_phy_usb2_init(struct phy *p)
+ {
+ struct rcar_gen3_phy *rphy = phy_get_drvdata(p);
+ struct rcar_gen3_chan *channel = rphy->ch;
+ void __iomem *usb2_base = channel->base;
+ u32 val;
++ int ret;
++
++ if (!rcar_gen3_is_any_rphy_initialized(channel) && channel->irq >= 0) {
++ INIT_WORK(&channel->work, rcar_gen3_phy_usb2_work);
++ ret = request_irq(channel->irq, rcar_gen3_phy_usb2_irq,
++ IRQF_SHARED, dev_name(channel->dev), channel);
++ if (ret < 0)
++ dev_err(channel->dev, "No irq handler (%d)\n", channel->irq);
++ }
+
+ /* Initialize USB2 part */
+ val = readl(usb2_base + USB2_INT_ENABLE);
+@@ -433,6 +460,9 @@ static int rcar_gen3_phy_usb2_exit(struct phy *p)
+ val &= ~USB2_INT_ENABLE_UCOM_INTEN;
+ writel(val, usb2_base + USB2_INT_ENABLE);
+
++ if (channel->irq >= 0 && !rcar_gen3_is_any_rphy_initialized(channel))
++ free_irq(channel->irq, channel);
++
+ return 0;
+ }
+
+@@ -503,23 +533,6 @@ static const struct phy_ops rz_g1c_phy_usb2_ops = {
+ .owner = THIS_MODULE,
+ };
+
+-static irqreturn_t rcar_gen3_phy_usb2_irq(int irq, void *_ch)
+-{
+- struct rcar_gen3_chan *ch = _ch;
+- void __iomem *usb2_base = ch->base;
+- u32 status = readl(usb2_base + USB2_OBINTSTA);
+- irqreturn_t ret = IRQ_NONE;
+-
+- if (status & USB2_OBINT_BITS) {
+- dev_vdbg(ch->dev, "%s: %08x\n", __func__, status);
+- writel(USB2_OBINT_BITS, usb2_base + USB2_OBINTSTA);
+- rcar_gen3_device_recognition(ch);
+- ret = IRQ_HANDLED;
+- }
+-
+- return ret;
+-}
+-
+ static const struct of_device_id rcar_gen3_phy_usb2_match_table[] = {
+ {
+ .compatible = "renesas,usb2-phy-r8a77470",
+@@ -598,7 +611,7 @@ static int rcar_gen3_phy_usb2_probe(struct platform_device *pdev)
+ struct phy_provider *provider;
+ struct resource *res;
+ const struct phy_ops *phy_usb2_ops;
+- int irq, ret = 0, i;
++ int ret = 0, i;
+
+ if (!dev->of_node) {
+ dev_err(dev, "This driver needs device tree\n");
+@@ -614,16 +627,8 @@ static int rcar_gen3_phy_usb2_probe(struct platform_device *pdev)
+ if (IS_ERR(channel->base))
+ return PTR_ERR(channel->base);
+
+- /* call request_irq for OTG */
+- irq = platform_get_irq_optional(pdev, 0);
+- if (irq >= 0) {
+- INIT_WORK(&channel->work, rcar_gen3_phy_usb2_work);
+- irq = devm_request_irq(dev, irq, rcar_gen3_phy_usb2_irq,
+- IRQF_SHARED, dev_name(dev), channel);
+- if (irq < 0)
+- dev_err(dev, "No irq handler (%d)\n", irq);
+- }
+-
++ /* get irq number here and request_irq for OTG in phy_init */
++ channel->irq = platform_get_irq_optional(pdev, 0);
+ channel->dr_mode = rcar_gen3_get_dr_mode(dev->of_node);
+ if (channel->dr_mode != USB_DR_MODE_UNKNOWN) {
+ int ret;
+diff --git a/drivers/phy/samsung/phy-exynos5-usbdrd.c b/drivers/phy/samsung/phy-exynos5-usbdrd.c
+index e510732afb8b..7f6279fb4f8f 100644
+--- a/drivers/phy/samsung/phy-exynos5-usbdrd.c
++++ b/drivers/phy/samsung/phy-exynos5-usbdrd.c
+@@ -714,7 +714,9 @@ static int exynos5_usbdrd_phy_calibrate(struct phy *phy)
+ struct phy_usb_instance *inst = phy_get_drvdata(phy);
+ struct exynos5_usbdrd_phy *phy_drd = to_usbdrd_phy(inst);
+
+- return exynos5420_usbdrd_phy_calibrate(phy_drd);
++ if (inst->phy_cfg->id == EXYNOS5_DRDPHY_UTMI)
++ return exynos5420_usbdrd_phy_calibrate(phy_drd);
++ return 0;
+ }
+
+ static const struct phy_ops exynos5_usbdrd_phy_ops = {
+diff --git a/drivers/pinctrl/pinctrl-single.c b/drivers/pinctrl/pinctrl-single.c
+index f3a8a465d27e..02f677eb1d53 100644
+--- a/drivers/pinctrl/pinctrl-single.c
++++ b/drivers/pinctrl/pinctrl-single.c
+@@ -916,7 +916,7 @@ static int pcs_parse_pinconf(struct pcs_device *pcs, struct device_node *np,
+
+ /* If pinconf isn't supported, don't parse properties in below. */
+ if (!PCS_HAS_PINCONF)
+- return 0;
++ return -ENOTSUPP;
+
+ /* cacluate how much properties are supported in current node */
+ for (i = 0; i < ARRAY_SIZE(prop2); i++) {
+@@ -928,7 +928,7 @@ static int pcs_parse_pinconf(struct pcs_device *pcs, struct device_node *np,
+ nconfs++;
+ }
+ if (!nconfs)
+- return 0;
++ return -ENOTSUPP;
+
+ func->conf = devm_kcalloc(pcs->dev,
+ nconfs, sizeof(struct pcs_conf_vals),
+@@ -1056,9 +1056,12 @@ static int pcs_parse_one_pinctrl_entry(struct pcs_device *pcs,
+
+ if (PCS_HAS_PINCONF && function) {
+ res = pcs_parse_pinconf(pcs, np, function, map);
+- if (res)
++ if (res == 0)
++ *num_maps = 2;
++ else if (res == -ENOTSUPP)
++ *num_maps = 1;
++ else
+ goto free_pingroups;
+- *num_maps = 2;
+ } else {
+ *num_maps = 1;
+ }
+diff --git a/drivers/platform/x86/asus-nb-wmi.c b/drivers/platform/x86/asus-nb-wmi.c
+index 8c4d00482ef0..6c42f73c1dfd 100644
+--- a/drivers/platform/x86/asus-nb-wmi.c
++++ b/drivers/platform/x86/asus-nb-wmi.c
+@@ -110,6 +110,16 @@ static struct quirk_entry quirk_asus_forceals = {
+ .wmi_force_als_set = true,
+ };
+
++static struct quirk_entry quirk_asus_ga401i = {
++ .wmi_backlight_power = true,
++ .wmi_backlight_set_devstate = true,
++};
++
++static struct quirk_entry quirk_asus_ga502i = {
++ .wmi_backlight_power = true,
++ .wmi_backlight_set_devstate = true,
++};
++
+ static int dmi_matched(const struct dmi_system_id *dmi)
+ {
+ pr_info("Identified laptop model '%s'\n", dmi->ident);
+@@ -411,6 +421,78 @@ static const struct dmi_system_id asus_quirks[] = {
+ },
+ .driver_data = &quirk_asus_forceals,
+ },
++ {
++ .callback = dmi_matched,
++ .ident = "ASUSTeK COMPUTER INC. GA401IH",
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
++ DMI_MATCH(DMI_PRODUCT_NAME, "GA401IH"),
++ },
++ .driver_data = &quirk_asus_ga401i,
++ },
++ {
++ .callback = dmi_matched,
++ .ident = "ASUSTeK COMPUTER INC. GA401II",
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
++ DMI_MATCH(DMI_PRODUCT_NAME, "GA401II"),
++ },
++ .driver_data = &quirk_asus_ga401i,
++ },
++ {
++ .callback = dmi_matched,
++ .ident = "ASUSTeK COMPUTER INC. GA401IU",
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
++ DMI_MATCH(DMI_PRODUCT_NAME, "GA401IU"),
++ },
++ .driver_data = &quirk_asus_ga401i,
++ },
++ {
++ .callback = dmi_matched,
++ .ident = "ASUSTeK COMPUTER INC. GA401IV",
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
++ DMI_MATCH(DMI_PRODUCT_NAME, "GA401IV"),
++ },
++ .driver_data = &quirk_asus_ga401i,
++ },
++ {
++ .callback = dmi_matched,
++ .ident = "ASUSTeK COMPUTER INC. GA401IVC",
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
++ DMI_MATCH(DMI_PRODUCT_NAME, "GA401IVC"),
++ },
++ .driver_data = &quirk_asus_ga401i,
++ },
++ {
++ .callback = dmi_matched,
++ .ident = "ASUSTeK COMPUTER INC. GA502II",
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
++ DMI_MATCH(DMI_PRODUCT_NAME, "GA502II"),
++ },
++ .driver_data = &quirk_asus_ga502i,
++ },
++ {
++ .callback = dmi_matched,
++ .ident = "ASUSTeK COMPUTER INC. GA502IU",
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
++ DMI_MATCH(DMI_PRODUCT_NAME, "GA502IU"),
++ },
++ .driver_data = &quirk_asus_ga502i,
++ },
++ {
++ .callback = dmi_matched,
++ .ident = "ASUSTeK COMPUTER INC. GA502IV",
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
++ DMI_MATCH(DMI_PRODUCT_NAME, "GA502IV"),
++ },
++ .driver_data = &quirk_asus_ga502i,
++ },
+ {},
+ };
+
+diff --git a/drivers/platform/x86/intel-hid.c b/drivers/platform/x86/intel-hid.c
+index 9ee79b74311c..86261970bd8f 100644
+--- a/drivers/platform/x86/intel-hid.c
++++ b/drivers/platform/x86/intel-hid.c
+@@ -571,7 +571,7 @@ check_acpi_dev(acpi_handle handle, u32 lvl, void *context, void **rv)
+ return AE_OK;
+
+ if (acpi_match_device_ids(dev, ids) == 0)
+- if (acpi_create_platform_device(dev, NULL))
++ if (!IS_ERR_OR_NULL(acpi_create_platform_device(dev, NULL)))
+ dev_info(&dev->dev,
+ "intel-hid: created platform device\n");
+
+diff --git a/drivers/platform/x86/intel-vbtn.c b/drivers/platform/x86/intel-vbtn.c
+index 0487b606a274..e85d8e58320c 100644
+--- a/drivers/platform/x86/intel-vbtn.c
++++ b/drivers/platform/x86/intel-vbtn.c
+@@ -299,7 +299,7 @@ check_acpi_dev(acpi_handle handle, u32 lvl, void *context, void **rv)
+ return AE_OK;
+
+ if (acpi_match_device_ids(dev, ids) == 0)
+- if (acpi_create_platform_device(dev, NULL))
++ if (!IS_ERR_OR_NULL(acpi_create_platform_device(dev, NULL)))
+ dev_info(&dev->dev,
+ "intel-vbtn: created platform device\n");
+
+diff --git a/drivers/power/supply/88pm860x_battery.c b/drivers/power/supply/88pm860x_battery.c
+index 1308f3a185f3..590da88a17a2 100644
+--- a/drivers/power/supply/88pm860x_battery.c
++++ b/drivers/power/supply/88pm860x_battery.c
+@@ -433,7 +433,7 @@ static void pm860x_init_battery(struct pm860x_battery_info *info)
+ int ret;
+ int data;
+ int bat_remove;
+- int soc;
++ int soc = 0;
+
+ /* measure enable on GPADC1 */
+ data = MEAS1_GP1;
+@@ -496,7 +496,9 @@ static void pm860x_init_battery(struct pm860x_battery_info *info)
+ }
+ mutex_unlock(&info->lock);
+
+- calc_soc(info, OCV_MODE_ACTIVE, &soc);
++ ret = calc_soc(info, OCV_MODE_ACTIVE, &soc);
++ if (ret < 0)
++ goto out;
+
+ data = pm860x_reg_read(info->i2c, PM8607_POWER_UP_LOG);
+ bat_remove = data & BAT_WU_LOG;
+diff --git a/drivers/regulator/core.c b/drivers/regulator/core.c
+index 03154f5b939f..720f28844795 100644
+--- a/drivers/regulator/core.c
++++ b/drivers/regulator/core.c
+@@ -5023,7 +5023,6 @@ regulator_register(const struct regulator_desc *regulator_desc,
+ struct regulator_dev *rdev;
+ bool dangling_cfg_gpiod = false;
+ bool dangling_of_gpiod = false;
+- bool reg_device_fail = false;
+ struct device *dev;
+ int ret, i;
+
+@@ -5152,10 +5151,12 @@ regulator_register(const struct regulator_desc *regulator_desc,
+ }
+
+ /* register with sysfs */
++ device_initialize(&rdev->dev);
+ rdev->dev.class = &regulator_class;
+ rdev->dev.parent = dev;
+ dev_set_name(&rdev->dev, "regulator.%lu",
+ (unsigned long) atomic_inc_return(&regulator_no));
++ dev_set_drvdata(&rdev->dev, rdev);
+
+ /* set regulator constraints */
+ if (init_data)
+@@ -5206,12 +5207,9 @@ regulator_register(const struct regulator_desc *regulator_desc,
+ !rdev->desc->fixed_uV)
+ rdev->is_switch = true;
+
+- dev_set_drvdata(&rdev->dev, rdev);
+- ret = device_register(&rdev->dev);
+- if (ret != 0) {
+- reg_device_fail = true;
++ ret = device_add(&rdev->dev);
++ if (ret != 0)
+ goto unset_supplies;
+- }
+
+ rdev_init_debugfs(rdev);
+
+@@ -5233,17 +5231,15 @@ unset_supplies:
+ mutex_unlock(&regulator_list_mutex);
+ wash:
+ kfree(rdev->coupling_desc.coupled_rdevs);
+- kfree(rdev->constraints);
+ mutex_lock(&regulator_list_mutex);
+ regulator_ena_gpio_free(rdev);
+ mutex_unlock(&regulator_list_mutex);
++ put_device(&rdev->dev);
++ rdev = NULL;
+ clean:
+ if (dangling_of_gpiod)
+ gpiod_put(config->ena_gpiod);
+- if (reg_device_fail)
+- put_device(&rdev->dev);
+- else
+- kfree(rdev);
++ kfree(rdev);
+ kfree(config);
+ rinse:
+ if (dangling_cfg_gpiod)
+diff --git a/drivers/reset/reset-intel-gw.c b/drivers/reset/reset-intel-gw.c
+index 854238444616..effc177db80a 100644
+--- a/drivers/reset/reset-intel-gw.c
++++ b/drivers/reset/reset-intel-gw.c
+@@ -15,9 +15,9 @@
+ #define RCU_RST_STAT 0x0024
+ #define RCU_RST_REQ 0x0048
+
+-#define REG_OFFSET GENMASK(31, 16)
+-#define BIT_OFFSET GENMASK(15, 8)
+-#define STAT_BIT_OFFSET GENMASK(7, 0)
++#define REG_OFFSET_MASK GENMASK(31, 16)
++#define BIT_OFFSET_MASK GENMASK(15, 8)
++#define STAT_BIT_OFFSET_MASK GENMASK(7, 0)
+
+ #define to_reset_data(x) container_of(x, struct intel_reset_data, rcdev)
+
+@@ -51,11 +51,11 @@ static u32 id_to_reg_and_bit_offsets(struct intel_reset_data *data,
+ unsigned long id, u32 *rst_req,
+ u32 *req_bit, u32 *stat_bit)
+ {
+- *rst_req = FIELD_GET(REG_OFFSET, id);
+- *req_bit = FIELD_GET(BIT_OFFSET, id);
++ *rst_req = FIELD_GET(REG_OFFSET_MASK, id);
++ *req_bit = FIELD_GET(BIT_OFFSET_MASK, id);
+
+ if (data->soc_data->legacy)
+- *stat_bit = FIELD_GET(STAT_BIT_OFFSET, id);
++ *stat_bit = FIELD_GET(STAT_BIT_OFFSET_MASK, id);
+ else
+ *stat_bit = *req_bit;
+
+@@ -141,14 +141,14 @@ static int intel_reset_xlate(struct reset_controller_dev *rcdev,
+ if (spec->args[1] > 31)
+ return -EINVAL;
+
+- id = FIELD_PREP(REG_OFFSET, spec->args[0]);
+- id |= FIELD_PREP(BIT_OFFSET, spec->args[1]);
++ id = FIELD_PREP(REG_OFFSET_MASK, spec->args[0]);
++ id |= FIELD_PREP(BIT_OFFSET_MASK, spec->args[1]);
+
+ if (data->soc_data->legacy) {
+ if (spec->args[2] > 31)
+ return -EINVAL;
+
+- id |= FIELD_PREP(STAT_BIT_OFFSET, spec->args[2]);
++ id |= FIELD_PREP(STAT_BIT_OFFSET_MASK, spec->args[2]);
+ }
+
+ return id;
+@@ -210,11 +210,11 @@ static int intel_reset_probe(struct platform_device *pdev)
+ if (ret)
+ return ret;
+
+- data->reboot_id = FIELD_PREP(REG_OFFSET, rb_id[0]);
+- data->reboot_id |= FIELD_PREP(BIT_OFFSET, rb_id[1]);
++ data->reboot_id = FIELD_PREP(REG_OFFSET_MASK, rb_id[0]);
++ data->reboot_id |= FIELD_PREP(BIT_OFFSET_MASK, rb_id[1]);
+
+ if (data->soc_data->legacy)
+- data->reboot_id |= FIELD_PREP(STAT_BIT_OFFSET, rb_id[2]);
++ data->reboot_id |= FIELD_PREP(STAT_BIT_OFFSET_MASK, rb_id[2]);
+
+ data->restart_nb.notifier_call = intel_reset_restart_handler;
+ data->restart_nb.priority = 128;
+diff --git a/drivers/s390/block/dasd_diag.c b/drivers/s390/block/dasd_diag.c
+index facb588d09e4..069d6b39cacf 100644
+--- a/drivers/s390/block/dasd_diag.c
++++ b/drivers/s390/block/dasd_diag.c
+@@ -319,7 +319,7 @@ dasd_diag_check_device(struct dasd_device *device)
+ struct dasd_diag_characteristics *rdc_data;
+ struct vtoc_cms_label *label;
+ struct dasd_block *block;
+- struct dasd_diag_bio bio;
++ struct dasd_diag_bio *bio;
+ unsigned int sb, bsize;
+ blocknum_t end_block;
+ int rc;
+@@ -395,29 +395,36 @@ dasd_diag_check_device(struct dasd_device *device)
+ rc = -ENOMEM;
+ goto out;
+ }
++ bio = kzalloc(sizeof(*bio), GFP_KERNEL);
++ if (bio == NULL) {
++ DBF_DEV_EVENT(DBF_WARNING, device, "%s",
++ "No memory to allocate initialization bio");
++ rc = -ENOMEM;
++ goto out_label;
++ }
+ rc = 0;
+ end_block = 0;
+ /* try all sizes - needed for ECKD devices */
+ for (bsize = 512; bsize <= PAGE_SIZE; bsize <<= 1) {
+ mdsk_init_io(device, bsize, 0, &end_block);
+- memset(&bio, 0, sizeof (struct dasd_diag_bio));
+- bio.type = MDSK_READ_REQ;
+- bio.block_number = private->pt_block + 1;
+- bio.buffer = label;
++ memset(bio, 0, sizeof(*bio));
++ bio->type = MDSK_READ_REQ;
++ bio->block_number = private->pt_block + 1;
++ bio->buffer = label;
+ memset(&private->iob, 0, sizeof (struct dasd_diag_rw_io));
+ private->iob.dev_nr = rdc_data->dev_nr;
+ private->iob.key = 0;
+ private->iob.flags = 0; /* do synchronous io */
+ private->iob.block_count = 1;
+ private->iob.interrupt_params = 0;
+- private->iob.bio_list = &bio;
++ private->iob.bio_list = bio;
+ private->iob.flaga = DASD_DIAG_FLAGA_DEFAULT;
+ rc = dia250(&private->iob, RW_BIO);
+ if (rc == 3) {
+ pr_warn("%s: A 64-bit DIAG call failed\n",
+ dev_name(&device->cdev->dev));
+ rc = -EOPNOTSUPP;
+- goto out_label;
++ goto out_bio;
+ }
+ mdsk_term_io(device);
+ if (rc == 0)
+@@ -427,7 +434,7 @@ dasd_diag_check_device(struct dasd_device *device)
+ pr_warn("%s: Accessing the DASD failed because of an incorrect format (rc=%d)\n",
+ dev_name(&device->cdev->dev), rc);
+ rc = -EIO;
+- goto out_label;
++ goto out_bio;
+ }
+ /* check for label block */
+ if (memcmp(label->label_id, DASD_DIAG_CMS1,
+@@ -457,6 +464,8 @@ dasd_diag_check_device(struct dasd_device *device)
+ (rc == 4) ? ", read-only device" : "");
+ rc = 0;
+ }
++out_bio:
++ kfree(bio);
+ out_label:
+ free_page((long) label);
+ out:
+diff --git a/drivers/s390/net/qeth_core_main.c b/drivers/s390/net/qeth_core_main.c
+index 88e998de2d03..c8e57007c423 100644
+--- a/drivers/s390/net/qeth_core_main.c
++++ b/drivers/s390/net/qeth_core_main.c
+@@ -204,12 +204,17 @@ EXPORT_SYMBOL_GPL(qeth_threads_running);
+ void qeth_clear_working_pool_list(struct qeth_card *card)
+ {
+ struct qeth_buffer_pool_entry *pool_entry, *tmp;
++ struct qeth_qdio_q *queue = card->qdio.in_q;
++ unsigned int i;
+
+ QETH_CARD_TEXT(card, 5, "clwrklst");
+ list_for_each_entry_safe(pool_entry, tmp,
+ &card->qdio.in_buf_pool.entry_list, list){
+ list_del(&pool_entry->list);
+ }
++
++ for (i = 0; i < ARRAY_SIZE(queue->bufs); i++)
++ queue->bufs[i].pool_entry = NULL;
+ }
+ EXPORT_SYMBOL_GPL(qeth_clear_working_pool_list);
+
+@@ -2951,7 +2956,7 @@ static struct qeth_buffer_pool_entry *qeth_find_free_buffer_pool_entry(
+ static int qeth_init_input_buffer(struct qeth_card *card,
+ struct qeth_qdio_buffer *buf)
+ {
+- struct qeth_buffer_pool_entry *pool_entry;
++ struct qeth_buffer_pool_entry *pool_entry = buf->pool_entry;
+ int i;
+
+ if ((card->options.cq == QETH_CQ_ENABLED) && (!buf->rx_skb)) {
+@@ -2962,9 +2967,13 @@ static int qeth_init_input_buffer(struct qeth_card *card,
+ return -ENOMEM;
+ }
+
+- pool_entry = qeth_find_free_buffer_pool_entry(card);
+- if (!pool_entry)
+- return -ENOBUFS;
++ if (!pool_entry) {
++ pool_entry = qeth_find_free_buffer_pool_entry(card);
++ if (!pool_entry)
++ return -ENOBUFS;
++
++ buf->pool_entry = pool_entry;
++ }
+
+ /*
+ * since the buffer is accessed only from the input_tasklet
+@@ -2972,8 +2981,6 @@ static int qeth_init_input_buffer(struct qeth_card *card,
+ * the QETH_IN_BUF_REQUEUE_THRESHOLD we should never run out off
+ * buffers
+ */
+-
+- buf->pool_entry = pool_entry;
+ for (i = 0; i < QETH_MAX_BUFFER_ELEMENTS(card); ++i) {
+ buf->buffer->element[i].length = PAGE_SIZE;
+ buf->buffer->element[i].addr =
+@@ -5802,6 +5809,7 @@ static unsigned int qeth_rx_poll(struct qeth_card *card, int budget)
+ if (done) {
+ QETH_CARD_STAT_INC(card, rx_bufs);
+ qeth_put_buffer_pool_entry(card, buffer->pool_entry);
++ buffer->pool_entry = NULL;
+ qeth_queue_input_buffer(card, card->rx.b_index);
+ card->rx.b_count--;
+
+diff --git a/drivers/s390/net/qeth_l2_main.c b/drivers/s390/net/qeth_l2_main.c
+index 2d3bca3c0141..b4e06aeb6dc1 100644
+--- a/drivers/s390/net/qeth_l2_main.c
++++ b/drivers/s390/net/qeth_l2_main.c
+@@ -1142,6 +1142,10 @@ static void qeth_bridge_state_change(struct qeth_card *card,
+ int extrasize;
+
+ QETH_CARD_TEXT(card, 2, "brstchng");
++ if (qports->num_entries == 0) {
++ QETH_CARD_TEXT(card, 2, "BPempty");
++ return;
++ }
+ if (qports->entry_length != sizeof(struct qeth_sbp_port_entry)) {
+ QETH_CARD_TEXT_(card, 2, "BPsz%04x", qports->entry_length);
+ return;
+diff --git a/drivers/scsi/arm/cumana_2.c b/drivers/scsi/arm/cumana_2.c
+index 65691c21f133..29294f0ef8a9 100644
+--- a/drivers/scsi/arm/cumana_2.c
++++ b/drivers/scsi/arm/cumana_2.c
+@@ -450,7 +450,7 @@ static int cumanascsi2_probe(struct expansion_card *ec,
+
+ if (info->info.scsi.dma != NO_DMA)
+ free_dma(info->info.scsi.dma);
+- free_irq(ec->irq, host);
++ free_irq(ec->irq, info);
+
+ out_release:
+ fas216_release(host);
+diff --git a/drivers/scsi/arm/eesox.c b/drivers/scsi/arm/eesox.c
+index 6e204a2e0c8d..591ae2a6dd74 100644
+--- a/drivers/scsi/arm/eesox.c
++++ b/drivers/scsi/arm/eesox.c
+@@ -571,7 +571,7 @@ static int eesoxscsi_probe(struct expansion_card *ec, const struct ecard_id *id)
+
+ if (info->info.scsi.dma != NO_DMA)
+ free_dma(info->info.scsi.dma);
+- free_irq(ec->irq, host);
++ free_irq(ec->irq, info);
+
+ out_remove:
+ fas216_remove(host);
+diff --git a/drivers/scsi/arm/powertec.c b/drivers/scsi/arm/powertec.c
+index 772a13e5fd91..d99ef014528e 100644
+--- a/drivers/scsi/arm/powertec.c
++++ b/drivers/scsi/arm/powertec.c
+@@ -378,7 +378,7 @@ static int powertecscsi_probe(struct expansion_card *ec,
+
+ if (info->info.scsi.dma != NO_DMA)
+ free_dma(info->info.scsi.dma);
+- free_irq(ec->irq, host);
++ free_irq(ec->irq, info);
+
+ out_release:
+ fas216_release(host);
+diff --git a/drivers/scsi/megaraid/megaraid_sas_base.c b/drivers/scsi/megaraid/megaraid_sas_base.c
+index 00668335c2af..924ea9f4cdd0 100644
+--- a/drivers/scsi/megaraid/megaraid_sas_base.c
++++ b/drivers/scsi/megaraid/megaraid_sas_base.c
+@@ -5602,9 +5602,13 @@ megasas_setup_irqs_msix(struct megasas_instance *instance, u8 is_probe)
+ &instance->irq_context[i])) {
+ dev_err(&instance->pdev->dev,
+ "Failed to register IRQ for vector %d.\n", i);
+- for (j = 0; j < i; j++)
++ for (j = 0; j < i; j++) {
++ if (j < instance->low_latency_index_start)
++ irq_set_affinity_hint(
++ pci_irq_vector(pdev, j), NULL);
+ free_irq(pci_irq_vector(pdev, j),
+ &instance->irq_context[j]);
++ }
+ /* Retry irq register for IO_APIC*/
+ instance->msix_vectors = 0;
+ instance->msix_load_balance = false;
+@@ -5642,6 +5646,9 @@ megasas_destroy_irqs(struct megasas_instance *instance) {
+
+ if (instance->msix_vectors)
+ for (i = 0; i < instance->msix_vectors; i++) {
++ if (i < instance->low_latency_index_start)
++ irq_set_affinity_hint(
++ pci_irq_vector(instance->pdev, i), NULL);
+ free_irq(pci_irq_vector(instance->pdev, i),
+ &instance->irq_context[i]);
+ }
+diff --git a/drivers/scsi/mesh.c b/drivers/scsi/mesh.c
+index f9f8f4921654..fd1d03064079 100644
+--- a/drivers/scsi/mesh.c
++++ b/drivers/scsi/mesh.c
+@@ -1045,6 +1045,8 @@ static void handle_error(struct mesh_state *ms)
+ while ((in_8(&mr->bus_status1) & BS1_RST) != 0)
+ udelay(1);
+ printk("done\n");
++ if (ms->dma_started)
++ halt_dma(ms);
+ handle_reset(ms);
+ /* request_q is empty, no point in mesh_start() */
+ return;
+@@ -1357,7 +1359,8 @@ static void halt_dma(struct mesh_state *ms)
+ ms->conn_tgt, ms->data_ptr, scsi_bufflen(cmd),
+ ms->tgts[ms->conn_tgt].data_goes_out);
+ }
+- scsi_dma_unmap(cmd);
++ if (cmd)
++ scsi_dma_unmap(cmd);
+ ms->dma_started = 0;
+ }
+
+@@ -1712,6 +1715,9 @@ static int mesh_host_reset(struct scsi_cmnd *cmd)
+
+ spin_lock_irqsave(ms->host->host_lock, flags);
+
++ if (ms->dma_started)
++ halt_dma(ms);
++
+ /* Reset the controller & dbdma channel */
+ out_le32(&md->control, (RUN|PAUSE|FLUSH|WAKE) << 16); /* stop dma */
+ out_8(&mr->exception, 0xff); /* clear all exception bits */
+diff --git a/drivers/scsi/qla2xxx/qla_iocb.c b/drivers/scsi/qla2xxx/qla_iocb.c
+index 8865c35d3421..7c2ad8c18398 100644
+--- a/drivers/scsi/qla2xxx/qla_iocb.c
++++ b/drivers/scsi/qla2xxx/qla_iocb.c
+@@ -2305,8 +2305,8 @@ __qla2x00_alloc_iocbs(struct qla_qpair *qpair, srb_t *sp)
+ pkt = req->ring_ptr;
+ memset(pkt, 0, REQUEST_ENTRY_SIZE);
+ if (IS_QLAFX00(ha)) {
+- wrt_reg_byte((void __iomem *)&pkt->entry_count, req_cnt);
+- wrt_reg_word((void __iomem *)&pkt->handle, handle);
++ wrt_reg_byte((u8 __force __iomem *)&pkt->entry_count, req_cnt);
++ wrt_reg_dword((__le32 __force __iomem *)&pkt->handle, handle);
+ } else {
+ pkt->entry_count = req_cnt;
+ pkt->handle = handle;
+diff --git a/drivers/scsi/scsi_debug.c b/drivers/scsi/scsi_debug.c
+index 843cccb38cb7..b0d93bf79978 100644
+--- a/drivers/scsi/scsi_debug.c
++++ b/drivers/scsi/scsi_debug.c
+@@ -6610,6 +6610,12 @@ static int __init scsi_debug_init(void)
+ pr_err("submit_queues must be 1 or more\n");
+ return -EINVAL;
+ }
++
++ if ((sdebug_max_queue > SDEBUG_CANQUEUE) || (sdebug_max_queue < 1)) {
++ pr_err("max_queue must be in range [1, %d]\n", SDEBUG_CANQUEUE);
++ return -EINVAL;
++ }
++
+ sdebug_q_arr = kcalloc(submit_queues, sizeof(struct sdebug_queue),
+ GFP_KERNEL);
+ if (sdebug_q_arr == NULL)
+diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
+index 06056e9ec333..ae620dada8ce 100644
+--- a/drivers/scsi/scsi_lib.c
++++ b/drivers/scsi/scsi_lib.c
+@@ -2841,8 +2841,10 @@ scsi_host_block(struct Scsi_Host *shost)
+ mutex_lock(&sdev->state_mutex);
+ ret = scsi_internal_device_block_nowait(sdev);
+ mutex_unlock(&sdev->state_mutex);
+- if (ret)
++ if (ret) {
++ scsi_device_put(sdev);
+ break;
++ }
+ }
+
+ /*
+diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c
+index a483c0720a0c..e412e43d2382 100644
+--- a/drivers/scsi/ufs/ufshcd.c
++++ b/drivers/scsi/ufs/ufshcd.c
+@@ -1314,6 +1314,7 @@ static int ufshcd_devfreq_get_dev_status(struct device *dev,
+ unsigned long flags;
+ struct list_head *clk_list = &hba->clk_list_head;
+ struct ufs_clk_info *clki;
++ ktime_t curr_t;
+
+ if (!ufshcd_is_clkscaling_supported(hba))
+ return -EINVAL;
+@@ -1321,6 +1322,7 @@ static int ufshcd_devfreq_get_dev_status(struct device *dev,
+ memset(stat, 0, sizeof(*stat));
+
+ spin_lock_irqsave(hba->host->host_lock, flags);
++ curr_t = ktime_get();
+ if (!scaling->window_start_t)
+ goto start_window;
+
+@@ -1332,18 +1334,17 @@ static int ufshcd_devfreq_get_dev_status(struct device *dev,
+ */
+ stat->current_frequency = clki->curr_freq;
+ if (scaling->is_busy_started)
+- scaling->tot_busy_t += ktime_to_us(ktime_sub(ktime_get(),
+- scaling->busy_start_t));
++ scaling->tot_busy_t += ktime_us_delta(curr_t,
++ scaling->busy_start_t);
+
+- stat->total_time = jiffies_to_usecs((long)jiffies -
+- (long)scaling->window_start_t);
++ stat->total_time = ktime_us_delta(curr_t, scaling->window_start_t);
+ stat->busy_time = scaling->tot_busy_t;
+ start_window:
+- scaling->window_start_t = jiffies;
++ scaling->window_start_t = curr_t;
+ scaling->tot_busy_t = 0;
+
+ if (hba->outstanding_reqs) {
+- scaling->busy_start_t = ktime_get();
++ scaling->busy_start_t = curr_t;
+ scaling->is_busy_started = true;
+ } else {
+ scaling->busy_start_t = 0;
+@@ -1877,6 +1878,7 @@ static void ufshcd_exit_clk_gating(struct ufs_hba *hba)
+ static void ufshcd_clk_scaling_start_busy(struct ufs_hba *hba)
+ {
+ bool queue_resume_work = false;
++ ktime_t curr_t = ktime_get();
+
+ if (!ufshcd_is_clkscaling_supported(hba))
+ return;
+@@ -1892,13 +1894,13 @@ static void ufshcd_clk_scaling_start_busy(struct ufs_hba *hba)
+ &hba->clk_scaling.resume_work);
+
+ if (!hba->clk_scaling.window_start_t) {
+- hba->clk_scaling.window_start_t = jiffies;
++ hba->clk_scaling.window_start_t = curr_t;
+ hba->clk_scaling.tot_busy_t = 0;
+ hba->clk_scaling.is_busy_started = false;
+ }
+
+ if (!hba->clk_scaling.is_busy_started) {
+- hba->clk_scaling.busy_start_t = ktime_get();
++ hba->clk_scaling.busy_start_t = curr_t;
+ hba->clk_scaling.is_busy_started = true;
+ }
+ }
+@@ -6816,20 +6818,30 @@ out:
+
+ static void ufshcd_wb_probe(struct ufs_hba *hba, u8 *desc_buf)
+ {
++ struct ufs_dev_info *dev_info = &hba->dev_info;
+ u8 lun;
+ u32 d_lu_wb_buf_alloc;
+
+ if (!ufshcd_is_wb_allowed(hba))
+ return;
++ /*
++ * Probe WB only for UFS-2.2 and UFS-3.1 (and later) devices or
++ * UFS devices with quirk UFS_DEVICE_QUIRK_SUPPORT_EXTENDED_FEATURES
++ * enabled
++ */
++ if (!(dev_info->wspecversion >= 0x310 ||
++ dev_info->wspecversion == 0x220 ||
++ (hba->dev_quirks & UFS_DEVICE_QUIRK_SUPPORT_EXTENDED_FEATURES)))
++ goto wb_disabled;
+
+ if (hba->desc_size.dev_desc < DEVICE_DESC_PARAM_EXT_UFS_FEATURE_SUP + 4)
+ goto wb_disabled;
+
+- hba->dev_info.d_ext_ufs_feature_sup =
++ dev_info->d_ext_ufs_feature_sup =
+ get_unaligned_be32(desc_buf +
+ DEVICE_DESC_PARAM_EXT_UFS_FEATURE_SUP);
+
+- if (!(hba->dev_info.d_ext_ufs_feature_sup & UFS_DEV_WRITE_BOOSTER_SUP))
++ if (!(dev_info->d_ext_ufs_feature_sup & UFS_DEV_WRITE_BOOSTER_SUP))
+ goto wb_disabled;
+
+ /*
+@@ -6838,17 +6850,17 @@ static void ufshcd_wb_probe(struct ufs_hba *hba, u8 *desc_buf)
+ * a max of 1 lun would have wb buffer configured.
+ * Now only shared buffer mode is supported.
+ */
+- hba->dev_info.b_wb_buffer_type =
++ dev_info->b_wb_buffer_type =
+ desc_buf[DEVICE_DESC_PARAM_WB_TYPE];
+
+- hba->dev_info.b_presrv_uspc_en =
++ dev_info->b_presrv_uspc_en =
+ desc_buf[DEVICE_DESC_PARAM_WB_PRESRV_USRSPC_EN];
+
+- if (hba->dev_info.b_wb_buffer_type == WB_BUF_MODE_SHARED) {
+- hba->dev_info.d_wb_alloc_units =
++ if (dev_info->b_wb_buffer_type == WB_BUF_MODE_SHARED) {
++ dev_info->d_wb_alloc_units =
+ get_unaligned_be32(desc_buf +
+ DEVICE_DESC_PARAM_WB_SHARED_ALLOC_UNITS);
+- if (!hba->dev_info.d_wb_alloc_units)
++ if (!dev_info->d_wb_alloc_units)
+ goto wb_disabled;
+ } else {
+ for (lun = 0; lun < UFS_UPIU_MAX_WB_LUN_ID; lun++) {
+@@ -6859,7 +6871,7 @@ static void ufshcd_wb_probe(struct ufs_hba *hba, u8 *desc_buf)
+ (u8 *)&d_lu_wb_buf_alloc,
+ sizeof(d_lu_wb_buf_alloc));
+ if (d_lu_wb_buf_alloc) {
+- hba->dev_info.wb_dedicated_lu = lun;
++ dev_info->wb_dedicated_lu = lun;
+ break;
+ }
+ }
+@@ -6948,14 +6960,7 @@ static int ufs_get_device_desc(struct ufs_hba *hba)
+
+ ufs_fixup_device_setup(hba);
+
+- /*
+- * Probe WB only for UFS-3.1 devices or UFS devices with quirk
+- * UFS_DEVICE_QUIRK_SUPPORT_EXTENDED_FEATURES enabled
+- */
+- if (dev_info->wspecversion >= 0x310 ||
+- dev_info->wspecversion == 0x220 ||
+- (hba->dev_quirks & UFS_DEVICE_QUIRK_SUPPORT_EXTENDED_FEATURES))
+- ufshcd_wb_probe(hba, desc_buf);
++ ufshcd_wb_probe(hba, desc_buf);
+
+ /*
+ * ufshcd_read_string_desc returns size of the string
+diff --git a/drivers/scsi/ufs/ufshcd.h b/drivers/scsi/ufs/ufshcd.h
+index bf97d616e597..16187be98a94 100644
+--- a/drivers/scsi/ufs/ufshcd.h
++++ b/drivers/scsi/ufs/ufshcd.h
+@@ -411,7 +411,7 @@ struct ufs_saved_pwr_info {
+ struct ufs_clk_scaling {
+ int active_reqs;
+ unsigned long tot_busy_t;
+- unsigned long window_start_t;
++ ktime_t window_start_t;
+ ktime_t busy_start_t;
+ struct device_attribute enable_attr;
+ struct ufs_saved_pwr_info saved_pwr_info;
+diff --git a/drivers/soc/qcom/pdr_interface.c b/drivers/soc/qcom/pdr_interface.c
+index bdcf16f88a97..4c9225f15c4e 100644
+--- a/drivers/soc/qcom/pdr_interface.c
++++ b/drivers/soc/qcom/pdr_interface.c
+@@ -278,13 +278,15 @@ static void pdr_indack_work(struct work_struct *work)
+
+ list_for_each_entry_safe(ind, tmp, &pdr->indack_list, node) {
+ pds = ind->pds;
+- pdr_send_indack_msg(pdr, pds, ind->transaction_id);
+
+ mutex_lock(&pdr->status_lock);
+ pds->state = ind->curr_state;
+ pdr->status(pds->state, pds->service_path, pdr->priv);
+ mutex_unlock(&pdr->status_lock);
+
++ /* Ack the indication after clients release the PD resources */
++ pdr_send_indack_msg(pdr, pds, ind->transaction_id);
++
+ mutex_lock(&pdr->list_lock);
+ list_del(&ind->node);
+ mutex_unlock(&pdr->list_lock);
+diff --git a/drivers/soc/qcom/rpmh-rsc.c b/drivers/soc/qcom/rpmh-rsc.c
+index 076fd27f3081..ae6675782581 100644
+--- a/drivers/soc/qcom/rpmh-rsc.c
++++ b/drivers/soc/qcom/rpmh-rsc.c
+@@ -175,13 +175,21 @@ static void write_tcs_reg(const struct rsc_drv *drv, int reg, int tcs_id,
+ static void write_tcs_reg_sync(const struct rsc_drv *drv, int reg, int tcs_id,
+ u32 data)
+ {
+- u32 new_data;
++ int i;
+
+ writel(data, tcs_reg_addr(drv, reg, tcs_id));
+- if (readl_poll_timeout_atomic(tcs_reg_addr(drv, reg, tcs_id), new_data,
+- new_data == data, 1, USEC_PER_SEC))
+- pr_err("%s: error writing %#x to %d:%#x\n", drv->name,
+- data, tcs_id, reg);
++
++ /*
++ * Wait until we read back the same value. Use a counter rather than
++ * ktime for timeout since this may be called after timekeeping stops.
++ */
++ for (i = 0; i < USEC_PER_SEC; i++) {
++ if (readl(tcs_reg_addr(drv, reg, tcs_id)) == data)
++ return;
++ udelay(1);
++ }
++ pr_err("%s: error writing %#x to %d:%#x\n", drv->name,
++ data, tcs_id, reg);
+ }
+
+ /**
+@@ -1023,6 +1031,7 @@ static struct platform_driver rpmh_driver = {
+ .driver = {
+ .name = "rpmh",
+ .of_match_table = rpmh_drv_match,
++ .suppress_bind_attrs = true,
+ },
+ };
+
+diff --git a/drivers/spi/spi-dw-dma.c b/drivers/spi/spi-dw-dma.c
+index 5986c520b196..bb390ff67d1d 100644
+--- a/drivers/spi/spi-dw-dma.c
++++ b/drivers/spi/spi-dw-dma.c
+@@ -372,8 +372,20 @@ static int dw_spi_dma_setup(struct dw_spi *dws, struct spi_transfer *xfer)
+ {
+ u16 imr = 0, dma_ctrl = 0;
+
++ /*
++ * Having a Rx DMA channel serviced with higher priority than a Tx DMA
++ * channel might not be enough to provide a well balanced DMA-based
++ * SPI transfer interface. There might still be moments when the Tx DMA
++ * channel is occasionally handled faster than the Rx DMA channel.
++	 * That in turn will eventually cause an SPI Rx FIFO overflow if the
++	 * SPI bus speed is high enough to fill the SPI Rx FIFO before it's
++	 * cleared by the Rx DMA channel. To fix the problem, the Tx DMA
++	 * activity is intentionally slowed down by limiting the SPI Tx
++	 * FIFO depth to twice the Tx burst length calculated earlier by
++	 * the dw_spi_dma_maxburst_init() method.
++ */
+ dw_writel(dws, DW_SPI_DMARDLR, dws->rxburst - 1);
+- dw_writel(dws, DW_SPI_DMATDLR, dws->fifo_len - dws->txburst);
++ dw_writel(dws, DW_SPI_DMATDLR, dws->txburst);
+
+ if (xfer->tx_buf)
+ dma_ctrl |= SPI_DMA_TDMAE;
+diff --git a/drivers/spi/spi-lantiq-ssc.c b/drivers/spi/spi-lantiq-ssc.c
+index 1fd7ee53d451..049a64451c75 100644
+--- a/drivers/spi/spi-lantiq-ssc.c
++++ b/drivers/spi/spi-lantiq-ssc.c
+@@ -184,6 +184,7 @@ struct lantiq_ssc_spi {
+ unsigned int tx_fifo_size;
+ unsigned int rx_fifo_size;
+ unsigned int base_cs;
++ unsigned int fdx_tx_level;
+ };
+
+ static u32 lantiq_ssc_readl(const struct lantiq_ssc_spi *spi, u32 reg)
+@@ -481,6 +482,7 @@ static void tx_fifo_write(struct lantiq_ssc_spi *spi)
+ u32 data;
+ unsigned int tx_free = tx_fifo_free(spi);
+
++ spi->fdx_tx_level = 0;
+ while (spi->tx_todo && tx_free) {
+ switch (spi->bits_per_word) {
+ case 2 ... 8:
+@@ -509,6 +511,7 @@ static void tx_fifo_write(struct lantiq_ssc_spi *spi)
+
+ lantiq_ssc_writel(spi, data, LTQ_SPI_TB);
+ tx_free--;
++ spi->fdx_tx_level++;
+ }
+ }
+
+@@ -520,6 +523,13 @@ static void rx_fifo_read_full_duplex(struct lantiq_ssc_spi *spi)
+ u32 data;
+ unsigned int rx_fill = rx_fifo_level(spi);
+
++ /*
++	 * Wait until all expected data has been shifted in.
++ * Otherwise, rx overrun may occur.
++ */
++ while (rx_fill != spi->fdx_tx_level)
++ rx_fill = rx_fifo_level(spi);
++
+ while (rx_fill) {
+ data = lantiq_ssc_readl(spi, LTQ_SPI_RB);
+
+@@ -899,7 +909,7 @@ static int lantiq_ssc_probe(struct platform_device *pdev)
+ master->bits_per_word_mask = SPI_BPW_RANGE_MASK(2, 8) |
+ SPI_BPW_MASK(16) | SPI_BPW_MASK(32);
+
+- spi->wq = alloc_ordered_workqueue(dev_name(dev), 0);
++ spi->wq = alloc_ordered_workqueue(dev_name(dev), WQ_MEM_RECLAIM);
+ if (!spi->wq) {
+ err = -ENOMEM;
+ goto err_clk_put;
+diff --git a/drivers/spi/spi-rockchip.c b/drivers/spi/spi-rockchip.c
+index 9b8a5e1233c0..4776aa815c3f 100644
+--- a/drivers/spi/spi-rockchip.c
++++ b/drivers/spi/spi-rockchip.c
+@@ -288,7 +288,7 @@ static void rockchip_spi_pio_writer(struct rockchip_spi *rs)
+ static void rockchip_spi_pio_reader(struct rockchip_spi *rs)
+ {
+ u32 words = readl_relaxed(rs->regs + ROCKCHIP_SPI_RXFLR);
+- u32 rx_left = rs->rx_left - words;
++ u32 rx_left = (rs->rx_left > words) ? rs->rx_left - words : 0;
+
+ /* the hardware doesn't allow us to change fifo threshold
+ * level while spi is enabled, so instead make sure to leave
+diff --git a/drivers/spi/spidev.c b/drivers/spi/spidev.c
+index 59e07675ef86..455e99c4958e 100644
+--- a/drivers/spi/spidev.c
++++ b/drivers/spi/spidev.c
+@@ -224,6 +224,11 @@ static int spidev_message(struct spidev_data *spidev,
+ for (n = n_xfers, k_tmp = k_xfers, u_tmp = u_xfers;
+ n;
+ n--, k_tmp++, u_tmp++) {
++		/* Ensure that subsequent allocations from rx_buf/tx_buf also meet
++ * DMA alignment requirements.
++ */
++ unsigned int len_aligned = ALIGN(u_tmp->len, ARCH_KMALLOC_MINALIGN);
++
+ k_tmp->len = u_tmp->len;
+
+ total += k_tmp->len;
+@@ -239,17 +244,17 @@ static int spidev_message(struct spidev_data *spidev,
+
+ if (u_tmp->rx_buf) {
+ /* this transfer needs space in RX bounce buffer */
+- rx_total += k_tmp->len;
++ rx_total += len_aligned;
+ if (rx_total > bufsiz) {
+ status = -EMSGSIZE;
+ goto done;
+ }
+ k_tmp->rx_buf = rx_buf;
+- rx_buf += k_tmp->len;
++ rx_buf += len_aligned;
+ }
+ if (u_tmp->tx_buf) {
+ /* this transfer needs space in TX bounce buffer */
+- tx_total += k_tmp->len;
++ tx_total += len_aligned;
+ if (tx_total > bufsiz) {
+ status = -EMSGSIZE;
+ goto done;
+@@ -259,7 +264,7 @@ static int spidev_message(struct spidev_data *spidev,
+ (uintptr_t) u_tmp->tx_buf,
+ u_tmp->len))
+ goto done;
+- tx_buf += k_tmp->len;
++ tx_buf += len_aligned;
+ }
+
+ k_tmp->cs_change = !!u_tmp->cs_change;
+@@ -293,16 +298,16 @@ static int spidev_message(struct spidev_data *spidev,
+ goto done;
+
+ /* copy any rx data out of bounce buffer */
+- rx_buf = spidev->rx_buffer;
+- for (n = n_xfers, u_tmp = u_xfers; n; n--, u_tmp++) {
++ for (n = n_xfers, k_tmp = k_xfers, u_tmp = u_xfers;
++ n;
++ n--, k_tmp++, u_tmp++) {
+ if (u_tmp->rx_buf) {
+ if (copy_to_user((u8 __user *)
+- (uintptr_t) u_tmp->rx_buf, rx_buf,
++ (uintptr_t) u_tmp->rx_buf, k_tmp->rx_buf,
+ u_tmp->len)) {
+ status = -EFAULT;
+ goto done;
+ }
+- rx_buf += u_tmp->len;
+ }
+ }
+ status = total;
+diff --git a/drivers/staging/media/allegro-dvt/allegro-core.c b/drivers/staging/media/allegro-dvt/allegro-core.c
+index 70f133a842dd..3ed66aae741d 100644
+--- a/drivers/staging/media/allegro-dvt/allegro-core.c
++++ b/drivers/staging/media/allegro-dvt/allegro-core.c
+@@ -3065,9 +3065,9 @@ static int allegro_probe(struct platform_device *pdev)
+ return -EINVAL;
+ }
+ regs = devm_ioremap(&pdev->dev, res->start, resource_size(res));
+- if (IS_ERR(regs)) {
++ if (!regs) {
+ dev_err(&pdev->dev, "failed to map registers\n");
+- return PTR_ERR(regs);
++ return -ENOMEM;
+ }
+ dev->regmap = devm_regmap_init_mmio(&pdev->dev, regs,
+ &allegro_regmap_config);
+@@ -3085,9 +3085,9 @@ static int allegro_probe(struct platform_device *pdev)
+ sram_regs = devm_ioremap(&pdev->dev,
+ sram_res->start,
+ resource_size(sram_res));
+- if (IS_ERR(sram_regs)) {
++ if (!sram_regs) {
+ dev_err(&pdev->dev, "failed to map sram\n");
+- return PTR_ERR(sram_regs);
++ return -ENOMEM;
+ }
+ dev->sram = devm_regmap_init_mmio(&pdev->dev, sram_regs,
+ &allegro_sram_config);
+diff --git a/drivers/staging/media/rkisp1/rkisp1-resizer.c b/drivers/staging/media/rkisp1/rkisp1-resizer.c
+index d049374413dc..e188944941b5 100644
+--- a/drivers/staging/media/rkisp1/rkisp1-resizer.c
++++ b/drivers/staging/media/rkisp1/rkisp1-resizer.c
+@@ -437,8 +437,8 @@ static int rkisp1_rsz_enum_mbus_code(struct v4l2_subdev *sd,
+ u32 pad = code->pad;
+ int ret;
+
+- /* supported mbus codes are the same in isp sink pad */
+- code->pad = RKISP1_ISP_PAD_SINK_VIDEO;
++ /* supported mbus codes are the same in isp video src pad */
++ code->pad = RKISP1_ISP_PAD_SOURCE_VIDEO;
+ ret = v4l2_subdev_call(&rsz->rkisp1->isp.sd, pad, enum_mbus_code,
+ &dummy_cfg, code);
+
+@@ -553,11 +553,11 @@ static void rkisp1_rsz_set_sink_fmt(struct rkisp1_resizer *rsz,
+ src_fmt->code = sink_fmt->code;
+
+ sink_fmt->width = clamp_t(u32, format->width,
+- rsz->config->min_rsz_width,
+- rsz->config->max_rsz_width);
++ RKISP1_ISP_MIN_WIDTH,
++ RKISP1_ISP_MAX_WIDTH);
+ sink_fmt->height = clamp_t(u32, format->height,
+- rsz->config->min_rsz_height,
+- rsz->config->max_rsz_height);
++ RKISP1_ISP_MIN_HEIGHT,
++ RKISP1_ISP_MAX_HEIGHT);
+
+ *format = *sink_fmt;
+
+diff --git a/drivers/staging/rtl8192u/r8192U_core.c b/drivers/staging/rtl8192u/r8192U_core.c
+index fcfb9024a83f..6ec65187bef9 100644
+--- a/drivers/staging/rtl8192u/r8192U_core.c
++++ b/drivers/staging/rtl8192u/r8192U_core.c
+@@ -2374,7 +2374,7 @@ static int rtl8192_read_eeprom_info(struct net_device *dev)
+ ret = eprom_read(dev, (EEPROM_TX_PW_INDEX_CCK >> 1));
+ if (ret < 0)
+ return ret;
+- priv->EEPROMTxPowerLevelCCK = ((u16)ret & 0xff) >> 8;
++ priv->EEPROMTxPowerLevelCCK = ((u16)ret & 0xff00) >> 8;
+ } else
+ priv->EEPROMTxPowerLevelCCK = 0x10;
+ RT_TRACE(COMP_EPROM, "CCK Tx Power Levl: 0x%02x\n", priv->EEPROMTxPowerLevelCCK);
+diff --git a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c
+index 28ea8c3a4cba..355590f1e130 100644
+--- a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c
++++ b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c
+@@ -2805,6 +2805,7 @@ failed_platform_init:
+
+ static int vchiq_remove(struct platform_device *pdev)
+ {
++ platform_device_unregister(bcm2835_audio);
+ platform_device_unregister(bcm2835_camera);
+ vchiq_debugfs_deinit();
+ device_destroy(vchiq_class, vchiq_devid);
+diff --git a/drivers/thermal/intel/int340x_thermal/processor_thermal_device.c b/drivers/thermal/intel/int340x_thermal/processor_thermal_device.c
+index 297db1d2d960..81e8b15ef405 100644
+--- a/drivers/thermal/intel/int340x_thermal/processor_thermal_device.c
++++ b/drivers/thermal/intel/int340x_thermal/processor_thermal_device.c
+@@ -43,7 +43,7 @@
+ #define PCI_DEVICE_ID_PROC_ICL_THERMAL 0x8a03
+
+ /* JasperLake thermal reporting device */
+-#define PCI_DEVICE_ID_PROC_JSL_THERMAL 0x4503
++#define PCI_DEVICE_ID_PROC_JSL_THERMAL 0x4E03
+
+ /* TigerLake thermal reporting device */
+ #define PCI_DEVICE_ID_PROC_TGL_THERMAL 0x9A03
+diff --git a/drivers/thermal/ti-soc-thermal/ti-thermal-common.c b/drivers/thermal/ti-soc-thermal/ti-thermal-common.c
+index 85776db4bf34..2ce4b19f312a 100644
+--- a/drivers/thermal/ti-soc-thermal/ti-thermal-common.c
++++ b/drivers/thermal/ti-soc-thermal/ti-thermal-common.c
+@@ -169,7 +169,7 @@ int ti_thermal_expose_sensor(struct ti_bandgap *bgp, int id,
+
+ data = ti_bandgap_get_sensor_data(bgp, id);
+
+- if (!IS_ERR_OR_NULL(data))
++ if (IS_ERR_OR_NULL(data))
+ data = ti_thermal_build_data(bgp, id);
+
+ if (!data)
+diff --git a/drivers/usb/cdns3/gadget.c b/drivers/usb/cdns3/gadget.c
+index 5e24c2e57c0d..37ae7fc5f8dd 100644
+--- a/drivers/usb/cdns3/gadget.c
++++ b/drivers/usb/cdns3/gadget.c
+@@ -242,9 +242,10 @@ int cdns3_allocate_trb_pool(struct cdns3_endpoint *priv_ep)
+ return -ENOMEM;
+
+ priv_ep->alloc_ring_size = ring_size;
+- memset(priv_ep->trb_pool, 0, ring_size);
+ }
+
++ memset(priv_ep->trb_pool, 0, ring_size);
++
+ priv_ep->num_trbs = num_trbs;
+
+ if (!priv_ep->num)
+diff --git a/drivers/usb/core/quirks.c b/drivers/usb/core/quirks.c
+index e0b77674869c..c96c50faccf7 100644
+--- a/drivers/usb/core/quirks.c
++++ b/drivers/usb/core/quirks.c
+@@ -25,17 +25,23 @@ static unsigned int quirk_count;
+
+ static char quirks_param[128];
+
+-static int quirks_param_set(const char *val, const struct kernel_param *kp)
++static int quirks_param_set(const char *value, const struct kernel_param *kp)
+ {
+- char *p, *field;
++ char *val, *p, *field;
+ u16 vid, pid;
+ u32 flags;
+ size_t i;
+ int err;
+
++ val = kstrdup(value, GFP_KERNEL);
++ if (!val)
++ return -ENOMEM;
++
+ err = param_set_copystring(val, kp);
+- if (err)
++ if (err) {
++ kfree(val);
+ return err;
++ }
+
+ mutex_lock(&quirk_mutex);
+
+@@ -60,10 +66,11 @@ static int quirks_param_set(const char *val, const struct kernel_param *kp)
+ if (!quirk_list) {
+ quirk_count = 0;
+ mutex_unlock(&quirk_mutex);
++ kfree(val);
+ return -ENOMEM;
+ }
+
+- for (i = 0, p = (char *)val; p && *p;) {
++ for (i = 0, p = val; p && *p;) {
+ /* Each entry consists of VID:PID:flags */
+ field = strsep(&p, ":");
+ if (!field)
+@@ -144,6 +151,7 @@ static int quirks_param_set(const char *val, const struct kernel_param *kp)
+
+ unlock:
+ mutex_unlock(&quirk_mutex);
++ kfree(val);
+
+ return 0;
+ }
+diff --git a/drivers/usb/dwc2/platform.c b/drivers/usb/dwc2/platform.c
+index cb8ddbd53718..db9fd4bd1a38 100644
+--- a/drivers/usb/dwc2/platform.c
++++ b/drivers/usb/dwc2/platform.c
+@@ -582,6 +582,7 @@ static int dwc2_driver_probe(struct platform_device *dev)
+ if (hsotg->gadget_enabled) {
+ retval = usb_add_gadget_udc(hsotg->dev, &hsotg->gadget);
+ if (retval) {
++ hsotg->gadget.udc = NULL;
+ dwc2_hsotg_remove(hsotg);
+ goto error_init;
+ }
+@@ -593,7 +594,8 @@ error_init:
+ if (hsotg->params.activate_stm_id_vb_detection)
+ regulator_disable(hsotg->usb33d);
+ error:
+- dwc2_lowlevel_hw_disable(hsotg);
++ if (hsotg->dr_mode != USB_DR_MODE_PERIPHERAL)
++ dwc2_lowlevel_hw_disable(hsotg);
+ return retval;
+ }
+
+diff --git a/drivers/usb/dwc3/dwc3-meson-g12a.c b/drivers/usb/dwc3/dwc3-meson-g12a.c
+index 1f7f4d88ed9d..88b75b5a039c 100644
+--- a/drivers/usb/dwc3/dwc3-meson-g12a.c
++++ b/drivers/usb/dwc3/dwc3-meson-g12a.c
+@@ -737,13 +737,13 @@ static int dwc3_meson_g12a_probe(struct platform_device *pdev)
+ goto err_disable_clks;
+ }
+
+- ret = reset_control_reset(priv->reset);
++ ret = reset_control_deassert(priv->reset);
+ if (ret)
+- goto err_disable_clks;
++ goto err_assert_reset;
+
+ ret = dwc3_meson_g12a_get_phys(priv);
+ if (ret)
+- goto err_disable_clks;
++ goto err_assert_reset;
+
+ ret = priv->drvdata->setup_regmaps(priv, base);
+ if (ret)
+@@ -752,7 +752,7 @@ static int dwc3_meson_g12a_probe(struct platform_device *pdev)
+ if (priv->vbus) {
+ ret = regulator_enable(priv->vbus);
+ if (ret)
+- goto err_disable_clks;
++ goto err_assert_reset;
+ }
+
+ /* Get dr_mode */
+@@ -765,13 +765,13 @@ static int dwc3_meson_g12a_probe(struct platform_device *pdev)
+
+ ret = priv->drvdata->usb_init(priv);
+ if (ret)
+- goto err_disable_clks;
++ goto err_assert_reset;
+
+ /* Init PHYs */
+ for (i = 0 ; i < PHY_COUNT ; ++i) {
+ ret = phy_init(priv->phys[i]);
+ if (ret)
+- goto err_disable_clks;
++ goto err_assert_reset;
+ }
+
+ /* Set PHY Power */
+@@ -809,6 +809,9 @@ err_phys_exit:
+ for (i = 0 ; i < PHY_COUNT ; ++i)
+ phy_exit(priv->phys[i]);
+
++err_assert_reset:
++ reset_control_assert(priv->reset);
++
+ err_disable_clks:
+ clk_bulk_disable_unprepare(priv->drvdata->num_clks,
+ priv->drvdata->clks);
+diff --git a/drivers/usb/gadget/function/f_uac2.c b/drivers/usb/gadget/function/f_uac2.c
+index db2d4980cb35..3633df6d7610 100644
+--- a/drivers/usb/gadget/function/f_uac2.c
++++ b/drivers/usb/gadget/function/f_uac2.c
+@@ -215,10 +215,7 @@ static struct uac2_ac_header_descriptor ac_hdr_desc = {
+ .bDescriptorSubtype = UAC_MS_HEADER,
+ .bcdADC = cpu_to_le16(0x200),
+ .bCategory = UAC2_FUNCTION_IO_BOX,
+- .wTotalLength = cpu_to_le16(sizeof in_clk_src_desc
+- + sizeof out_clk_src_desc + sizeof usb_out_it_desc
+- + sizeof io_in_it_desc + sizeof usb_in_ot_desc
+- + sizeof io_out_ot_desc),
++ /* .wTotalLength = DYNAMIC */
+ .bmControls = 0,
+ };
+
+@@ -501,7 +498,7 @@ static void setup_descriptor(struct f_uac2_opts *opts)
+ as_in_hdr_desc.bTerminalLink = usb_in_ot_desc.bTerminalID;
+
+ iad_desc.bInterfaceCount = 1;
+- ac_hdr_desc.wTotalLength = 0;
++ ac_hdr_desc.wTotalLength = cpu_to_le16(sizeof(ac_hdr_desc));
+
+ if (EPIN_EN(opts)) {
+ u16 len = le16_to_cpu(ac_hdr_desc.wTotalLength);
+diff --git a/drivers/usb/gadget/udc/bdc/bdc_core.c b/drivers/usb/gadget/udc/bdc/bdc_core.c
+index 02a3a774670b..2dca11f0a744 100644
+--- a/drivers/usb/gadget/udc/bdc/bdc_core.c
++++ b/drivers/usb/gadget/udc/bdc/bdc_core.c
+@@ -282,6 +282,7 @@ static void bdc_mem_init(struct bdc *bdc, bool reinit)
+ * in that case reinit is passed as 1
+ */
+ if (reinit) {
++ int i;
+ /* Enable interrupts */
+ temp = bdc_readl(bdc->regs, BDC_BDCSC);
+ temp |= BDC_GIE;
+@@ -291,6 +292,9 @@ static void bdc_mem_init(struct bdc *bdc, bool reinit)
+ /* Initialize SRR to 0 */
+ memset(bdc->srr.sr_bds, 0,
+ NUM_SR_ENTRIES * sizeof(struct bdc_bd));
++ /* clear ep flags to avoid post disconnect stops/deconfigs */
++ for (i = 1; i < bdc->num_eps; ++i)
++ bdc->bdc_ep_array[i]->flags = 0;
+ } else {
+ /* One time initiaization only */
+ /* Enable status report function pointers */
+@@ -599,9 +603,14 @@ static int bdc_remove(struct platform_device *pdev)
+ static int bdc_suspend(struct device *dev)
+ {
+ struct bdc *bdc = dev_get_drvdata(dev);
++ int ret;
+
+- clk_disable_unprepare(bdc->clk);
+- return 0;
++ /* Halt the controller */
++ ret = bdc_stop(bdc);
++ if (!ret)
++ clk_disable_unprepare(bdc->clk);
++
++ return ret;
+ }
+
+ static int bdc_resume(struct device *dev)
+diff --git a/drivers/usb/gadget/udc/bdc/bdc_ep.c b/drivers/usb/gadget/udc/bdc/bdc_ep.c
+index d49c6dc1082d..9ddc0b4e92c9 100644
+--- a/drivers/usb/gadget/udc/bdc/bdc_ep.c
++++ b/drivers/usb/gadget/udc/bdc/bdc_ep.c
+@@ -615,7 +615,6 @@ int bdc_ep_enable(struct bdc_ep *ep)
+ }
+ bdc_dbg_bd_list(bdc, ep);
+ /* only for ep0: config ep is called for ep0 from connect event */
+- ep->flags |= BDC_EP_ENABLED;
+ if (ep->ep_num == 1)
+ return ret;
+
+@@ -759,10 +758,13 @@ static int ep_dequeue(struct bdc_ep *ep, struct bdc_req *req)
+ __func__, ep->name, start_bdi, end_bdi);
+ dev_dbg(bdc->dev, "ep_dequeue ep=%p ep->desc=%p\n",
+ ep, (void *)ep->usb_ep.desc);
+- /* Stop the ep to see where the HW is ? */
+- ret = bdc_stop_ep(bdc, ep->ep_num);
+- /* if there is an issue with stopping ep, then no need to go further */
+- if (ret)
++ /* if still connected, stop the ep to see where the HW is ? */
++ if (!(bdc_readl(bdc->regs, BDC_USPC) & BDC_PST_MASK)) {
++ ret = bdc_stop_ep(bdc, ep->ep_num);
++ /* if there is an issue, then no need to go further */
++ if (ret)
++ return 0;
++ } else
+ return 0;
+
+ /*
+@@ -1911,7 +1913,9 @@ static int bdc_gadget_ep_disable(struct usb_ep *_ep)
+ __func__, ep->name, ep->flags);
+
+ if (!(ep->flags & BDC_EP_ENABLED)) {
+- dev_warn(bdc->dev, "%s is already disabled\n", ep->name);
++ if (bdc->gadget.speed != USB_SPEED_UNKNOWN)
++ dev_warn(bdc->dev, "%s is already disabled\n",
++ ep->name);
+ return 0;
+ }
+ spin_lock_irqsave(&bdc->lock, flags);
+diff --git a/drivers/usb/gadget/udc/net2280.c b/drivers/usb/gadget/udc/net2280.c
+index 5eff85eeaa5a..7530bd9a08c4 100644
+--- a/drivers/usb/gadget/udc/net2280.c
++++ b/drivers/usb/gadget/udc/net2280.c
+@@ -3781,8 +3781,10 @@ static int net2280_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ return 0;
+
+ done:
+- if (dev)
++ if (dev) {
+ net2280_remove(pdev);
++ kfree(dev);
++ }
+ return retval;
+ }
+
+diff --git a/drivers/usb/mtu3/mtu3_core.c b/drivers/usb/mtu3/mtu3_core.c
+index 9dd02160cca9..e3780d4d6514 100644
+--- a/drivers/usb/mtu3/mtu3_core.c
++++ b/drivers/usb/mtu3/mtu3_core.c
+@@ -131,8 +131,12 @@ static void mtu3_device_disable(struct mtu3 *mtu)
+ mtu3_setbits(ibase, SSUSB_U2_CTRL(0),
+ SSUSB_U2_PORT_DIS | SSUSB_U2_PORT_PDN);
+
+- if (mtu->ssusb->dr_mode == USB_DR_MODE_OTG)
++ if (mtu->ssusb->dr_mode == USB_DR_MODE_OTG) {
+ mtu3_clrbits(ibase, SSUSB_U2_CTRL(0), SSUSB_U2_PORT_OTG_SEL);
++ if (mtu->is_u3_ip)
++ mtu3_clrbits(ibase, SSUSB_U3_CTRL(0),
++ SSUSB_U3_PORT_DUAL_MODE);
++ }
+
+ mtu3_setbits(ibase, U3D_SSUSB_IP_PW_CTRL2, SSUSB_IP_DEV_PDN);
+ }
+diff --git a/drivers/usb/serial/cp210x.c b/drivers/usb/serial/cp210x.c
+index f5143eedbc48..a90801ef0055 100644
+--- a/drivers/usb/serial/cp210x.c
++++ b/drivers/usb/serial/cp210x.c
+@@ -272,6 +272,8 @@ static struct usb_serial_driver cp210x_device = {
+ .break_ctl = cp210x_break_ctl,
+ .set_termios = cp210x_set_termios,
+ .tx_empty = cp210x_tx_empty,
++ .throttle = usb_serial_generic_throttle,
++ .unthrottle = usb_serial_generic_unthrottle,
+ .tiocmget = cp210x_tiocmget,
+ .tiocmset = cp210x_tiocmset,
+ .attach = cp210x_attach,
+@@ -915,6 +917,7 @@ static void cp210x_get_termios_port(struct usb_serial_port *port,
+ u32 baud;
+ u16 bits;
+ u32 ctl_hs;
++ u32 flow_repl;
+
+ cp210x_read_u32_reg(port, CP210X_GET_BAUDRATE, &baud);
+
+@@ -1015,6 +1018,22 @@ static void cp210x_get_termios_port(struct usb_serial_port *port,
+ ctl_hs = le32_to_cpu(flow_ctl.ulControlHandshake);
+ if (ctl_hs & CP210X_SERIAL_CTS_HANDSHAKE) {
+ dev_dbg(dev, "%s - flow control = CRTSCTS\n", __func__);
++ /*
++ * When the port is closed, the CP210x hardware disables
++ * auto-RTS and RTS is deasserted but it leaves auto-CTS when
++ * in hardware flow control mode. When re-opening the port, if
++ * auto-CTS is enabled on the cp210x, then auto-RTS must be
++ * re-enabled in the driver.
++ */
++ flow_repl = le32_to_cpu(flow_ctl.ulFlowReplace);
++ flow_repl &= ~CP210X_SERIAL_RTS_MASK;
++ flow_repl |= CP210X_SERIAL_RTS_SHIFT(CP210X_SERIAL_RTS_FLOW_CTL);
++ flow_ctl.ulFlowReplace = cpu_to_le32(flow_repl);
++ cp210x_write_reg_block(port,
++ CP210X_SET_FLOW,
++ &flow_ctl,
++ sizeof(flow_ctl));
++
+ cflag |= CRTSCTS;
+ } else {
+ dev_dbg(dev, "%s - flow control = NONE\n", __func__);
+diff --git a/drivers/usb/serial/iuu_phoenix.c b/drivers/usb/serial/iuu_phoenix.c
+index b8dfeb4fb2ed..ffbb2a8901b2 100644
+--- a/drivers/usb/serial/iuu_phoenix.c
++++ b/drivers/usb/serial/iuu_phoenix.c
+@@ -353,10 +353,11 @@ static void iuu_led_activity_on(struct urb *urb)
+ struct usb_serial_port *port = urb->context;
+ int result;
+ char *buf_ptr = port->write_urb->transfer_buffer;
+- *buf_ptr++ = IUU_SET_LED;
++
+ if (xmas) {
+- get_random_bytes(buf_ptr, 6);
+- *(buf_ptr+7) = 1;
++ buf_ptr[0] = IUU_SET_LED;
++ get_random_bytes(buf_ptr + 1, 6);
++ buf_ptr[7] = 1;
+ } else {
+ iuu_rgbf_fill_buffer(buf_ptr, 255, 255, 0, 0, 0, 0, 255);
+ }
+@@ -374,13 +375,14 @@ static void iuu_led_activity_off(struct urb *urb)
+ struct usb_serial_port *port = urb->context;
+ int result;
+ char *buf_ptr = port->write_urb->transfer_buffer;
++
+ if (xmas) {
+ iuu_rxcmd(urb);
+ return;
+- } else {
+- *buf_ptr++ = IUU_SET_LED;
+- iuu_rgbf_fill_buffer(buf_ptr, 0, 0, 255, 255, 0, 0, 255);
+ }
++
++ iuu_rgbf_fill_buffer(buf_ptr, 0, 0, 255, 255, 0, 0, 255);
++
+ usb_fill_bulk_urb(port->write_urb, port->serial->dev,
+ usb_sndbulkpipe(port->serial->dev,
+ port->bulk_out_endpointAddress),
+diff --git a/drivers/vdpa/vdpa_sim/vdpa_sim.c b/drivers/vdpa/vdpa_sim/vdpa_sim.c
+index c7334cc65bb2..8ac6f341dcc1 100644
+--- a/drivers/vdpa/vdpa_sim/vdpa_sim.c
++++ b/drivers/vdpa/vdpa_sim/vdpa_sim.c
+@@ -70,6 +70,8 @@ struct vdpasim {
+ u32 status;
+ u32 generation;
+ u64 features;
++ /* spinlock to synchronize iommu table */
++ spinlock_t iommu_lock;
+ };
+
+ static struct vdpasim *vdpasim_dev;
+@@ -118,7 +120,9 @@ static void vdpasim_reset(struct vdpasim *vdpasim)
+ for (i = 0; i < VDPASIM_VQ_NUM; i++)
+ vdpasim_vq_reset(&vdpasim->vqs[i]);
+
++ spin_lock(&vdpasim->iommu_lock);
+ vhost_iotlb_reset(vdpasim->iommu);
++ spin_unlock(&vdpasim->iommu_lock);
+
+ vdpasim->features = 0;
+ vdpasim->status = 0;
+@@ -236,8 +240,10 @@ static dma_addr_t vdpasim_map_page(struct device *dev, struct page *page,
+ /* For simplicity, use identical mapping to avoid e.g iova
+ * allocator.
+ */
++ spin_lock(&vdpasim->iommu_lock);
+ ret = vhost_iotlb_add_range(iommu, pa, pa + size - 1,
+ pa, dir_to_perm(dir));
++ spin_unlock(&vdpasim->iommu_lock);
+ if (ret)
+ return DMA_MAPPING_ERROR;
+
+@@ -251,8 +257,10 @@ static void vdpasim_unmap_page(struct device *dev, dma_addr_t dma_addr,
+ struct vdpasim *vdpasim = dev_to_sim(dev);
+ struct vhost_iotlb *iommu = vdpasim->iommu;
+
++ spin_lock(&vdpasim->iommu_lock);
+ vhost_iotlb_del_range(iommu, (u64)dma_addr,
+ (u64)dma_addr + size - 1);
++ spin_unlock(&vdpasim->iommu_lock);
+ }
+
+ static void *vdpasim_alloc_coherent(struct device *dev, size_t size,
+@@ -264,9 +272,10 @@ static void *vdpasim_alloc_coherent(struct device *dev, size_t size,
+ void *addr = kmalloc(size, flag);
+ int ret;
+
+- if (!addr)
++ spin_lock(&vdpasim->iommu_lock);
++ if (!addr) {
+ *dma_addr = DMA_MAPPING_ERROR;
+- else {
++ } else {
+ u64 pa = virt_to_phys(addr);
+
+ ret = vhost_iotlb_add_range(iommu, (u64)pa,
+@@ -279,6 +288,7 @@ static void *vdpasim_alloc_coherent(struct device *dev, size_t size,
+ } else
+ *dma_addr = (dma_addr_t)pa;
+ }
++ spin_unlock(&vdpasim->iommu_lock);
+
+ return addr;
+ }
+@@ -290,8 +300,11 @@ static void vdpasim_free_coherent(struct device *dev, size_t size,
+ struct vdpasim *vdpasim = dev_to_sim(dev);
+ struct vhost_iotlb *iommu = vdpasim->iommu;
+
++ spin_lock(&vdpasim->iommu_lock);
+ vhost_iotlb_del_range(iommu, (u64)dma_addr,
+ (u64)dma_addr + size - 1);
++ spin_unlock(&vdpasim->iommu_lock);
++
+ kfree(phys_to_virt((uintptr_t)dma_addr));
+ }
+
+@@ -532,6 +545,7 @@ static int vdpasim_set_map(struct vdpa_device *vdpa,
+ u64 start = 0ULL, last = 0ULL - 1;
+ int ret;
+
++ spin_lock(&vdpasim->iommu_lock);
+ vhost_iotlb_reset(vdpasim->iommu);
+
+ for (map = vhost_iotlb_itree_first(iotlb, start, last); map;
+@@ -541,10 +555,12 @@ static int vdpasim_set_map(struct vdpa_device *vdpa,
+ if (ret)
+ goto err;
+ }
++ spin_unlock(&vdpasim->iommu_lock);
+ return 0;
+
+ err:
+ vhost_iotlb_reset(vdpasim->iommu);
++ spin_unlock(&vdpasim->iommu_lock);
+ return ret;
+ }
+
+@@ -552,16 +568,23 @@ static int vdpasim_dma_map(struct vdpa_device *vdpa, u64 iova, u64 size,
+ u64 pa, u32 perm)
+ {
+ struct vdpasim *vdpasim = vdpa_to_sim(vdpa);
++ int ret;
+
+- return vhost_iotlb_add_range(vdpasim->iommu, iova,
+- iova + size - 1, pa, perm);
++ spin_lock(&vdpasim->iommu_lock);
++ ret = vhost_iotlb_add_range(vdpasim->iommu, iova, iova + size - 1, pa,
++ perm);
++ spin_unlock(&vdpasim->iommu_lock);
++
++ return ret;
+ }
+
+ static int vdpasim_dma_unmap(struct vdpa_device *vdpa, u64 iova, u64 size)
+ {
+ struct vdpasim *vdpasim = vdpa_to_sim(vdpa);
+
++ spin_lock(&vdpasim->iommu_lock);
+ vhost_iotlb_del_range(vdpasim->iommu, iova, iova + size - 1);
++ spin_unlock(&vdpasim->iommu_lock);
+
+ return 0;
+ }
+diff --git a/drivers/video/console/newport_con.c b/drivers/video/console/newport_con.c
+index 504cda38763e..df3c52d72159 100644
+--- a/drivers/video/console/newport_con.c
++++ b/drivers/video/console/newport_con.c
+@@ -31,6 +31,8 @@
+ #include <linux/linux_logo.h>
+ #include <linux/font.h>
+
++#define NEWPORT_LEN 0x10000
++
+ #define FONT_DATA ((unsigned char *)font_vga_8x16.data)
+
+ /* borrowed from fbcon.c */
+@@ -42,6 +44,7 @@
+ static unsigned char *font_data[MAX_NR_CONSOLES];
+
+ static struct newport_regs *npregs;
++static unsigned long newport_addr;
+
+ static int logo_active;
+ static int topscan;
+@@ -701,7 +704,6 @@ const struct consw newport_con = {
+ static int newport_probe(struct gio_device *dev,
+ const struct gio_device_id *id)
+ {
+- unsigned long newport_addr;
+ int err;
+
+ if (!dev->resource.start)
+@@ -711,7 +713,7 @@ static int newport_probe(struct gio_device *dev,
+ return -EBUSY; /* we only support one Newport as console */
+
+ newport_addr = dev->resource.start + 0xF0000;
+- if (!request_mem_region(newport_addr, 0x10000, "Newport"))
++ if (!request_mem_region(newport_addr, NEWPORT_LEN, "Newport"))
+ return -ENODEV;
+
+ npregs = (struct newport_regs *)/* ioremap cannot fail */
+@@ -719,6 +721,11 @@ static int newport_probe(struct gio_device *dev,
+ console_lock();
+ err = do_take_over_console(&newport_con, 0, MAX_NR_CONSOLES - 1, 1);
+ console_unlock();
++
++ if (err) {
++ iounmap((void *)npregs);
++ release_mem_region(newport_addr, NEWPORT_LEN);
++ }
+ return err;
+ }
+
+@@ -726,6 +733,7 @@ static void newport_remove(struct gio_device *dev)
+ {
+ give_up_console(&newport_con);
+ iounmap((void *)npregs);
++ release_mem_region(newport_addr, NEWPORT_LEN);
+ }
+
+ static struct gio_device_id newport_ids[] = {
+diff --git a/drivers/video/fbdev/neofb.c b/drivers/video/fbdev/neofb.c
+index f5a676bfd67a..09a20d4ab35f 100644
+--- a/drivers/video/fbdev/neofb.c
++++ b/drivers/video/fbdev/neofb.c
+@@ -1819,6 +1819,7 @@ static int neo_scan_monitor(struct fb_info *info)
+ #else
+ printk(KERN_ERR
+ "neofb: Only 640x480, 800x600/480 and 1024x768 panels are currently supported\n");
++ kfree(info->monspecs.modedb);
+ return -1;
+ #endif
+ default:
+diff --git a/drivers/video/fbdev/pxafb.c b/drivers/video/fbdev/pxafb.c
+index 00b96a78676e..6f972bed410a 100644
+--- a/drivers/video/fbdev/pxafb.c
++++ b/drivers/video/fbdev/pxafb.c
+@@ -2417,8 +2417,8 @@ static int pxafb_remove(struct platform_device *dev)
+
+ free_pages_exact(fbi->video_mem, fbi->video_mem_size);
+
+- dma_free_wc(&dev->dev, fbi->dma_buff_size, fbi->dma_buff,
+- fbi->dma_buff_phys);
++ dma_free_coherent(&dev->dev, fbi->dma_buff_size, fbi->dma_buff,
++ fbi->dma_buff_phys);
+
+ return 0;
+ }
+diff --git a/drivers/video/fbdev/savage/savagefb_driver.c b/drivers/video/fbdev/savage/savagefb_driver.c
+index 3c8ae87f0ea7..3fd87aeb6c79 100644
+--- a/drivers/video/fbdev/savage/savagefb_driver.c
++++ b/drivers/video/fbdev/savage/savagefb_driver.c
+@@ -2157,6 +2157,8 @@ static int savage_init_fb_info(struct fb_info *info, struct pci_dev *dev,
+ info->flags |= FBINFO_HWACCEL_COPYAREA |
+ FBINFO_HWACCEL_FILLRECT |
+ FBINFO_HWACCEL_IMAGEBLIT;
++ else
++ kfree(info->pixmap.addr);
+ }
+ #endif
+ return err;
+diff --git a/drivers/video/fbdev/sm712fb.c b/drivers/video/fbdev/sm712fb.c
+index 6a1b4a853d9e..8cd655d6d628 100644
+--- a/drivers/video/fbdev/sm712fb.c
++++ b/drivers/video/fbdev/sm712fb.c
+@@ -1429,6 +1429,8 @@ static int smtc_map_smem(struct smtcfb_info *sfb,
+ static void smtc_unmap_smem(struct smtcfb_info *sfb)
+ {
+ if (sfb && sfb->fb->screen_base) {
++ if (sfb->chip_id == 0x720)
++ sfb->fb->screen_base -= 0x00200000;
+ iounmap(sfb->fb->screen_base);
+ sfb->fb->screen_base = NULL;
+ }
+diff --git a/drivers/xen/balloon.c b/drivers/xen/balloon.c
+index 77c57568e5d7..292413b27575 100644
+--- a/drivers/xen/balloon.c
++++ b/drivers/xen/balloon.c
+@@ -568,11 +568,13 @@ static int add_ballooned_pages(int nr_pages)
+ if (xen_hotplug_unpopulated) {
+ st = reserve_additional_memory();
+ if (st != BP_ECANCELED) {
++ int rc;
++
+ mutex_unlock(&balloon_mutex);
+- wait_event(balloon_wq,
++ rc = wait_event_interruptible(balloon_wq,
+ !list_empty(&ballooned_pages));
+ mutex_lock(&balloon_mutex);
+- return 0;
++ return rc ? -ENOMEM : 0;
+ }
+ }
+
+@@ -630,6 +632,12 @@ int alloc_xenballooned_pages(int nr_pages, struct page **pages)
+ out_undo:
+ mutex_unlock(&balloon_mutex);
+ free_xenballooned_pages(pgno, pages);
++ /*
++ * NB: free_xenballooned_pages will only subtract pgno pages, but since
++ * target_unpopulated is incremented with nr_pages at the start we need
++ * to remove the remaining ones also, or accounting will be screwed.
++ */
++ balloon_stats.target_unpopulated -= nr_pages - pgno;
+ return ret;
+ }
+ EXPORT_SYMBOL(alloc_xenballooned_pages);
+diff --git a/drivers/xen/gntdev-dmabuf.c b/drivers/xen/gntdev-dmabuf.c
+index 75d3bb948bf3..b1b6eebafd5d 100644
+--- a/drivers/xen/gntdev-dmabuf.c
++++ b/drivers/xen/gntdev-dmabuf.c
+@@ -613,6 +613,14 @@ dmabuf_imp_to_refs(struct gntdev_dmabuf_priv *priv, struct device *dev,
+ goto fail_detach;
+ }
+
++ /* Check that we have zero offset. */
++ if (sgt->sgl->offset) {
++ ret = ERR_PTR(-EINVAL);
++ pr_debug("DMA buffer has %d bytes offset, user-space expects 0\n",
++ sgt->sgl->offset);
++ goto fail_unmap;
++ }
++
+ /* Check number of pages that imported buffer has. */
+ if (attach->dmabuf->size != gntdev_dmabuf->nr_pages << PAGE_SHIFT) {
+ ret = ERR_PTR(-EINVAL);
+diff --git a/fs/9p/v9fs.c b/fs/9p/v9fs.c
+index 15a99f9c7253..39def020a074 100644
+--- a/fs/9p/v9fs.c
++++ b/fs/9p/v9fs.c
+@@ -500,10 +500,9 @@ void v9fs_session_close(struct v9fs_session_info *v9ses)
+ }
+
+ #ifdef CONFIG_9P_FSCACHE
+- if (v9ses->fscache) {
++ if (v9ses->fscache)
+ v9fs_cache_session_put_cookie(v9ses);
+- kfree(v9ses->cachetag);
+- }
++ kfree(v9ses->cachetag);
+ #endif
+ kfree(v9ses->uname);
+ kfree(v9ses->aname);
+diff --git a/fs/btrfs/ctree.h b/fs/btrfs/ctree.h
+index d404cce8ae40..7c8efa0c3ee6 100644
+--- a/fs/btrfs/ctree.h
++++ b/fs/btrfs/ctree.h
+@@ -2982,6 +2982,8 @@ int btrfs_dirty_pages(struct inode *inode, struct page **pages,
+ size_t num_pages, loff_t pos, size_t write_bytes,
+ struct extent_state **cached);
+ int btrfs_fdatawrite_range(struct inode *inode, loff_t start, loff_t end);
++int btrfs_check_can_nocow(struct btrfs_inode *inode, loff_t pos,
++ size_t *write_bytes, bool nowait);
+
+ /* tree-defrag.c */
+ int btrfs_defrag_leaves(struct btrfs_trans_handle *trans,
+diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
+index c0bc35f932bf..96223813b618 100644
+--- a/fs/btrfs/extent-tree.c
++++ b/fs/btrfs/extent-tree.c
+@@ -5466,6 +5466,14 @@ int btrfs_drop_snapshot(struct btrfs_root *root, int update_ref, int for_reloc)
+ }
+ }
+
++ /*
++ * This subvolume is going to be completely dropped, and won't be
++ * recorded as dirty roots, thus pertrans meta rsv will not be freed at
++ * commit transaction time. So free it here manually.
++ */
++ btrfs_qgroup_convert_reserved_meta(root, INT_MAX);
++ btrfs_qgroup_free_meta_all_pertrans(root);
++
+ if (test_bit(BTRFS_ROOT_IN_RADIX, &root->state))
+ btrfs_add_dropped_root(trans, root);
+ else
+diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
+index 60278e52c37a..eeaee346f5a9 100644
+--- a/fs/btrfs/extent_io.c
++++ b/fs/btrfs/extent_io.c
+@@ -4516,6 +4516,8 @@ int try_release_extent_mapping(struct page *page, gfp_t mask)
+
+ /* once for us */
+ free_extent_map(em);
++
++ cond_resched(); /* Allow large-extent preemption. */
+ }
+ }
+ return try_release_extent_state(tree, page, mask);
+diff --git a/fs/btrfs/file.c b/fs/btrfs/file.c
+index b0d2c976587e..1523aa4eaff0 100644
+--- a/fs/btrfs/file.c
++++ b/fs/btrfs/file.c
+@@ -1532,8 +1532,8 @@ lock_and_cleanup_extent_if_need(struct btrfs_inode *inode, struct page **pages,
+ return ret;
+ }
+
+-static noinline int check_can_nocow(struct btrfs_inode *inode, loff_t pos,
+- size_t *write_bytes, bool nowait)
++int btrfs_check_can_nocow(struct btrfs_inode *inode, loff_t pos,
++ size_t *write_bytes, bool nowait)
+ {
+ struct btrfs_fs_info *fs_info = inode->root->fs_info;
+ struct btrfs_root *root = inode->root;
+@@ -1648,8 +1648,8 @@ static noinline ssize_t btrfs_buffered_write(struct kiocb *iocb,
+ if (ret < 0) {
+ if ((BTRFS_I(inode)->flags & (BTRFS_INODE_NODATACOW |
+ BTRFS_INODE_PREALLOC)) &&
+- check_can_nocow(BTRFS_I(inode), pos,
+- &write_bytes, false) > 0) {
++ btrfs_check_can_nocow(BTRFS_I(inode), pos,
++ &write_bytes, false) > 0) {
+ /*
+ * For nodata cow case, no need to reserve
+ * data space.
+@@ -1928,8 +1928,8 @@ static ssize_t btrfs_file_write_iter(struct kiocb *iocb,
+ */
+ if (!(BTRFS_I(inode)->flags & (BTRFS_INODE_NODATACOW |
+ BTRFS_INODE_PREALLOC)) ||
+- check_can_nocow(BTRFS_I(inode), pos, &nocow_bytes,
+- true) <= 0) {
++ btrfs_check_can_nocow(BTRFS_I(inode), pos, &nocow_bytes,
++ true) <= 0) {
+ inode_unlock(inode);
+ return -EAGAIN;
+ }
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index 6862cd7e21a9..3f77ec5de8ec 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -4511,11 +4511,13 @@ int btrfs_truncate_block(struct inode *inode, loff_t from, loff_t len,
+ struct extent_state *cached_state = NULL;
+ struct extent_changeset *data_reserved = NULL;
+ char *kaddr;
++ bool only_release_metadata = false;
+ u32 blocksize = fs_info->sectorsize;
+ pgoff_t index = from >> PAGE_SHIFT;
+ unsigned offset = from & (blocksize - 1);
+ struct page *page;
+ gfp_t mask = btrfs_alloc_write_mask(mapping);
++ size_t write_bytes = blocksize;
+ int ret = 0;
+ u64 block_start;
+ u64 block_end;
+@@ -4527,11 +4529,27 @@ int btrfs_truncate_block(struct inode *inode, loff_t from, loff_t len,
+ block_start = round_down(from, blocksize);
+ block_end = block_start + blocksize - 1;
+
+- ret = btrfs_delalloc_reserve_space(inode, &data_reserved,
+- block_start, blocksize);
+- if (ret)
+- goto out;
+
++ ret = btrfs_check_data_free_space(inode, &data_reserved, block_start,
++ blocksize);
++ if (ret < 0) {
++ if ((BTRFS_I(inode)->flags & (BTRFS_INODE_NODATACOW |
++ BTRFS_INODE_PREALLOC)) &&
++ btrfs_check_can_nocow(BTRFS_I(inode), block_start,
++ &write_bytes, false) > 0) {
++ /* For nocow case, no need to reserve data space */
++ only_release_metadata = true;
++ } else {
++ goto out;
++ }
++ }
++ ret = btrfs_delalloc_reserve_metadata(BTRFS_I(inode), blocksize);
++ if (ret < 0) {
++ if (!only_release_metadata)
++ btrfs_free_reserved_data_space(inode, data_reserved,
++ block_start, blocksize);
++ goto out;
++ }
+ again:
+ page = find_or_create_page(mapping, index, mask);
+ if (!page) {
+@@ -4600,14 +4618,26 @@ again:
+ set_page_dirty(page);
+ unlock_extent_cached(io_tree, block_start, block_end, &cached_state);
+
++ if (only_release_metadata)
++ set_extent_bit(&BTRFS_I(inode)->io_tree, block_start,
++ block_end, EXTENT_NORESERVE, NULL, NULL,
++ GFP_NOFS);
++
+ out_unlock:
+- if (ret)
+- btrfs_delalloc_release_space(inode, data_reserved, block_start,
+- blocksize, true);
++ if (ret) {
++ if (only_release_metadata)
++ btrfs_delalloc_release_metadata(BTRFS_I(inode),
++ blocksize, true);
++ else
++ btrfs_delalloc_release_space(inode, data_reserved,
++ block_start, blocksize, true);
++ }
+ btrfs_delalloc_release_extents(BTRFS_I(inode), blocksize);
+ unlock_page(page);
+ put_page(page);
+ out:
++ if (only_release_metadata)
++ btrfs_drew_write_unlock(&BTRFS_I(inode)->root->snapshot_lock);
+ extent_changeset_free(data_reserved);
+ return ret;
+ }
+diff --git a/fs/btrfs/space-info.c b/fs/btrfs/space-info.c
+index c7bd3fdd7792..475968ccbd1d 100644
+--- a/fs/btrfs/space-info.c
++++ b/fs/btrfs/space-info.c
+@@ -468,8 +468,8 @@ again:
+ "block group %llu has %llu bytes, %llu used %llu pinned %llu reserved %s",
+ cache->start, cache->length, cache->used, cache->pinned,
+ cache->reserved, cache->ro ? "[readonly]" : "");
+- btrfs_dump_free_space(cache, bytes);
+ spin_unlock(&cache->lock);
++ btrfs_dump_free_space(cache, bytes);
+ }
+ if (++index < BTRFS_NR_RAID_TYPES)
+ goto again;
+diff --git a/fs/dlm/lockspace.c b/fs/dlm/lockspace.c
+index e93670ecfae5..624617c12250 100644
+--- a/fs/dlm/lockspace.c
++++ b/fs/dlm/lockspace.c
+@@ -622,6 +622,9 @@ static int new_lockspace(const char *name, const char *cluster,
+ wait_event(ls->ls_recover_lock_wait,
+ test_bit(LSFL_RECOVER_LOCK, &ls->ls_flags));
+
++ /* let kobject handle freeing of ls if there's an error */
++ do_unreg = 1;
++
+ ls->ls_kobj.kset = dlm_kset;
+ error = kobject_init_and_add(&ls->ls_kobj, &dlm_ktype, NULL,
+ "%s", ls->ls_name);
+@@ -629,9 +632,6 @@ static int new_lockspace(const char *name, const char *cluster,
+ goto out_recoverd;
+ kobject_uevent(&ls->ls_kobj, KOBJ_ADD);
+
+- /* let kobject handle freeing of ls if there's an error */
+- do_unreg = 1;
+-
+ /* This uevent triggers dlm_controld in userspace to add us to the
+ group of nodes that are members of this lockspace (managed by the
+ cluster infrastructure.) Once it's done that, it tells us who the
+diff --git a/fs/erofs/inode.c b/fs/erofs/inode.c
+index 7dd4bbe9674f..586f9d0a8b2f 100644
+--- a/fs/erofs/inode.c
++++ b/fs/erofs/inode.c
+@@ -8,31 +8,80 @@
+
+ #include <trace/events/erofs.h>
+
+-/* no locking */
+-static int erofs_read_inode(struct inode *inode, void *data)
++/*
++ * if inode is successfully read, return its inode page (or sometimes
++ * the inode payload page if it's an extended inode) in order to fill
++ * inline data if possible.
++ */
++static struct page *erofs_read_inode(struct inode *inode,
++ unsigned int *ofs)
+ {
++ struct super_block *sb = inode->i_sb;
++ struct erofs_sb_info *sbi = EROFS_SB(sb);
+ struct erofs_inode *vi = EROFS_I(inode);
+- struct erofs_inode_compact *dic = data;
+- struct erofs_inode_extended *die;
++ const erofs_off_t inode_loc = iloc(sbi, vi->nid);
++
++ erofs_blk_t blkaddr, nblks = 0;
++ struct page *page;
++ struct erofs_inode_compact *dic;
++ struct erofs_inode_extended *die, *copied = NULL;
++ unsigned int ifmt;
++ int err;
+
+- const unsigned int ifmt = le16_to_cpu(dic->i_format);
+- struct erofs_sb_info *sbi = EROFS_SB(inode->i_sb);
+- erofs_blk_t nblks = 0;
++ blkaddr = erofs_blknr(inode_loc);
++ *ofs = erofs_blkoff(inode_loc);
+
+- vi->datalayout = erofs_inode_datalayout(ifmt);
++ erofs_dbg("%s, reading inode nid %llu at %u of blkaddr %u",
++ __func__, vi->nid, *ofs, blkaddr);
++
++ page = erofs_get_meta_page(sb, blkaddr);
++ if (IS_ERR(page)) {
++ erofs_err(sb, "failed to get inode (nid: %llu) page, err %ld",
++ vi->nid, PTR_ERR(page));
++ return page;
++ }
+
++ dic = page_address(page) + *ofs;
++ ifmt = le16_to_cpu(dic->i_format);
++
++ vi->datalayout = erofs_inode_datalayout(ifmt);
+ if (vi->datalayout >= EROFS_INODE_DATALAYOUT_MAX) {
+ erofs_err(inode->i_sb, "unsupported datalayout %u of nid %llu",
+ vi->datalayout, vi->nid);
+- DBG_BUGON(1);
+- return -EOPNOTSUPP;
++ err = -EOPNOTSUPP;
++ goto err_out;
+ }
+
+ switch (erofs_inode_version(ifmt)) {
+ case EROFS_INODE_LAYOUT_EXTENDED:
+- die = data;
+-
+ vi->inode_isize = sizeof(struct erofs_inode_extended);
++ /* check if the inode acrosses page boundary */
++ if (*ofs + vi->inode_isize <= PAGE_SIZE) {
++ *ofs += vi->inode_isize;
++ die = (struct erofs_inode_extended *)dic;
++ } else {
++ const unsigned int gotten = PAGE_SIZE - *ofs;
++
++ copied = kmalloc(vi->inode_isize, GFP_NOFS);
++ if (!copied) {
++ err = -ENOMEM;
++ goto err_out;
++ }
++ memcpy(copied, dic, gotten);
++ unlock_page(page);
++ put_page(page);
++
++ page = erofs_get_meta_page(sb, blkaddr + 1);
++ if (IS_ERR(page)) {
++ erofs_err(sb, "failed to get inode payload page (nid: %llu), err %ld",
++ vi->nid, PTR_ERR(page));
++ kfree(copied);
++ return page;
++ }
++ *ofs = vi->inode_isize - gotten;
++ memcpy((u8 *)copied + gotten, page_address(page), *ofs);
++ die = copied;
++ }
+ vi->xattr_isize = erofs_xattr_ibody_size(die->i_xattr_icount);
+
+ inode->i_mode = le16_to_cpu(die->i_mode);
+@@ -69,9 +118,12 @@ static int erofs_read_inode(struct inode *inode, void *data)
+ /* total blocks for compressed files */
+ if (erofs_inode_is_data_compressed(vi->datalayout))
+ nblks = le32_to_cpu(die->i_u.compressed_blocks);
++
++ kfree(copied);
+ break;
+ case EROFS_INODE_LAYOUT_COMPACT:
+ vi->inode_isize = sizeof(struct erofs_inode_compact);
++ *ofs += vi->inode_isize;
+ vi->xattr_isize = erofs_xattr_ibody_size(dic->i_xattr_icount);
+
+ inode->i_mode = le16_to_cpu(dic->i_mode);
+@@ -111,8 +163,8 @@ static int erofs_read_inode(struct inode *inode, void *data)
+ erofs_err(inode->i_sb,
+ "unsupported on-disk inode version %u of nid %llu",
+ erofs_inode_version(ifmt), vi->nid);
+- DBG_BUGON(1);
+- return -EOPNOTSUPP;
++ err = -EOPNOTSUPP;
++ goto err_out;
+ }
+
+ if (!nblks)
+@@ -120,13 +172,18 @@ static int erofs_read_inode(struct inode *inode, void *data)
+ inode->i_blocks = roundup(inode->i_size, EROFS_BLKSIZ) >> 9;
+ else
+ inode->i_blocks = nblks << LOG_SECTORS_PER_BLOCK;
+- return 0;
++ return page;
+
+ bogusimode:
+ erofs_err(inode->i_sb, "bogus i_mode (%o) @ nid %llu",
+ inode->i_mode, vi->nid);
++ err = -EFSCORRUPTED;
++err_out:
+ DBG_BUGON(1);
+- return -EFSCORRUPTED;
++ kfree(copied);
++ unlock_page(page);
++ put_page(page);
++ return ERR_PTR(err);
+ }
+
+ static int erofs_fill_symlink(struct inode *inode, void *data,
+@@ -146,7 +203,7 @@ static int erofs_fill_symlink(struct inode *inode, void *data,
+ if (!lnk)
+ return -ENOMEM;
+
+- m_pofs += vi->inode_isize + vi->xattr_isize;
++ m_pofs += vi->xattr_isize;
+ /* inline symlink data shouldn't cross page boundary as well */
+ if (m_pofs + inode->i_size > PAGE_SIZE) {
+ kfree(lnk);
+@@ -167,37 +224,17 @@ static int erofs_fill_symlink(struct inode *inode, void *data,
+
+ static int erofs_fill_inode(struct inode *inode, int isdir)
+ {
+- struct super_block *sb = inode->i_sb;
+ struct erofs_inode *vi = EROFS_I(inode);
+ struct page *page;
+- void *data;
+- int err;
+- erofs_blk_t blkaddr;
+ unsigned int ofs;
+- erofs_off_t inode_loc;
++ int err = 0;
+
+ trace_erofs_fill_inode(inode, isdir);
+- inode_loc = iloc(EROFS_SB(sb), vi->nid);
+- blkaddr = erofs_blknr(inode_loc);
+- ofs = erofs_blkoff(inode_loc);
+-
+- erofs_dbg("%s, reading inode nid %llu at %u of blkaddr %u",
+- __func__, vi->nid, ofs, blkaddr);
+
+- page = erofs_get_meta_page(sb, blkaddr);
+-
+- if (IS_ERR(page)) {
+- erofs_err(sb, "failed to get inode (nid: %llu) page, err %ld",
+- vi->nid, PTR_ERR(page));
++ /* read inode base data from disk */
++ page = erofs_read_inode(inode, &ofs);
++ if (IS_ERR(page))
+ return PTR_ERR(page);
+- }
+-
+- DBG_BUGON(!PageUptodate(page));
+- data = page_address(page);
+-
+- err = erofs_read_inode(inode, data + ofs);
+- if (err)
+- goto out_unlock;
+
+ /* setup the new inode */
+ switch (inode->i_mode & S_IFMT) {
+@@ -210,7 +247,7 @@ static int erofs_fill_inode(struct inode *inode, int isdir)
+ inode->i_fop = &erofs_dir_fops;
+ break;
+ case S_IFLNK:
+- err = erofs_fill_symlink(inode, data, ofs);
++ err = erofs_fill_symlink(inode, page_address(page), ofs);
+ if (err)
+ goto out_unlock;
+ inode_nohighmem(inode);
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+index 493e5047e67c..f926d94867f7 100644
+--- a/fs/io_uring.c
++++ b/fs/io_uring.c
+@@ -669,12 +669,12 @@ struct io_kiocb {
+ * restore the work, if needed.
+ */
+ struct {
+- struct callback_head task_work;
+ struct hlist_node hash_node;
+ struct async_poll *apoll;
+ };
+ struct io_wq_work work;
+ };
++ struct callback_head task_work;
+ };
+
+ #define IO_PLUG_THRESHOLD 2
+@@ -1549,12 +1549,9 @@ static void io_req_link_next(struct io_kiocb *req, struct io_kiocb **nxtptr)
+ /*
+ * Called if REQ_F_LINK_HEAD is set, and we fail the head request
+ */
+-static void io_fail_links(struct io_kiocb *req)
++static void __io_fail_links(struct io_kiocb *req)
+ {
+ struct io_ring_ctx *ctx = req->ctx;
+- unsigned long flags;
+-
+- spin_lock_irqsave(&ctx->completion_lock, flags);
+
+ while (!list_empty(&req->link_list)) {
+ struct io_kiocb *link = list_first_entry(&req->link_list,
+@@ -1568,13 +1565,29 @@ static void io_fail_links(struct io_kiocb *req)
+ io_link_cancel_timeout(link);
+ } else {
+ io_cqring_fill_event(link, -ECANCELED);
++ link->flags |= REQ_F_COMP_LOCKED;
+ __io_double_put_req(link);
+ }
+ req->flags &= ~REQ_F_LINK_TIMEOUT;
+ }
+
+ io_commit_cqring(ctx);
+- spin_unlock_irqrestore(&ctx->completion_lock, flags);
++}
++
++static void io_fail_links(struct io_kiocb *req)
++{
++ struct io_ring_ctx *ctx = req->ctx;
++
++ if (!(req->flags & REQ_F_COMP_LOCKED)) {
++ unsigned long flags;
++
++ spin_lock_irqsave(&ctx->completion_lock, flags);
++ __io_fail_links(req);
++ spin_unlock_irqrestore(&ctx->completion_lock, flags);
++ } else {
++ __io_fail_links(req);
++ }
++
+ io_cqring_ev_posted(ctx);
+ }
+
+@@ -1747,6 +1760,17 @@ static int io_put_kbuf(struct io_kiocb *req)
+ return cflags;
+ }
+
++static inline bool io_run_task_work(void)
++{
++ if (current->task_works) {
++ __set_current_state(TASK_RUNNING);
++ task_work_run();
++ return true;
++ }
++
++ return false;
++}
++
+ static void io_iopoll_queue(struct list_head *again)
+ {
+ struct io_kiocb *req;
+@@ -1936,6 +1960,7 @@ static int io_iopoll_check(struct io_ring_ctx *ctx, unsigned *nr_events,
+ */
+ if (!(++iters & 7)) {
+ mutex_unlock(&ctx->uring_lock);
++ io_run_task_work();
+ mutex_lock(&ctx->uring_lock);
+ }
+
+@@ -2661,8 +2686,10 @@ static int io_read(struct io_kiocb *req, bool force_nonblock)
+
+ if (req->file->f_op->read_iter)
+ ret2 = call_read_iter(req->file, kiocb, &iter);
+- else
++ else if (req->file->f_op->read)
+ ret2 = loop_rw_iter(READ, req->file, kiocb, &iter);
++ else
++ ret2 = -EINVAL;
+
+ /* Catch -EAGAIN return for forced non-blocking submission */
+ if (!force_nonblock || ret2 != -EAGAIN) {
+@@ -2776,8 +2803,10 @@ static int io_write(struct io_kiocb *req, bool force_nonblock)
+
+ if (req->file->f_op->write_iter)
+ ret2 = call_write_iter(req->file, kiocb, &iter);
+- else
++ else if (req->file->f_op->write)
+ ret2 = loop_rw_iter(WRITE, req->file, kiocb, &iter);
++ else
++ ret2 = -EINVAL;
+
+ if (!force_nonblock)
+ current->signal->rlim[RLIMIT_FSIZE].rlim_cur = RLIM_INFINITY;
+@@ -4088,22 +4117,22 @@ static int io_req_task_work_add(struct io_kiocb *req, struct callback_head *cb)
+ {
+ struct task_struct *tsk = req->task;
+ struct io_ring_ctx *ctx = req->ctx;
+- int ret, notify = TWA_RESUME;
++ int ret, notify;
+
+ /*
+- * SQPOLL kernel thread doesn't need notification, just a wakeup.
+- * If we're not using an eventfd, then TWA_RESUME is always fine,
+- * as we won't have dependencies between request completions for
+- * other kernel wait conditions.
++ * SQPOLL kernel thread doesn't need notification, just a wakeup. For
++ * all other cases, use TWA_SIGNAL unconditionally to ensure we're
++ * processing task_work. There's no reliable way to tell if TWA_RESUME
++ * will do the job.
+ */
+- if (ctx->flags & IORING_SETUP_SQPOLL)
+- notify = 0;
+- else if (ctx->cq_ev_fd)
++ notify = 0;
++ if (!(ctx->flags & IORING_SETUP_SQPOLL))
+ notify = TWA_SIGNAL;
+
+ ret = task_work_add(tsk, cb, notify);
+ if (!ret)
+ wake_up_process(tsk);
++
+ return ret;
+ }
+
+@@ -4124,6 +4153,8 @@ static int __io_async_wake(struct io_kiocb *req, struct io_poll_iocb *poll,
+ tsk = req->task;
+ req->result = mask;
+ init_task_work(&req->task_work, func);
++ percpu_ref_get(&req->ctx->refs);
++
+ /*
+ * If this fails, then the task is exiting. When a task exits, the
+ * work gets canceled, so just cancel this request as well instead
+@@ -4160,9 +4191,24 @@ static bool io_poll_rewait(struct io_kiocb *req, struct io_poll_iocb *poll)
+ return false;
+ }
+
+-static void io_poll_remove_double(struct io_kiocb *req, void *data)
++static struct io_poll_iocb *io_poll_get_double(struct io_kiocb *req)
++{
++ /* pure poll stashes this in ->io, poll driven retry elsewhere */
++ if (req->opcode == IORING_OP_POLL_ADD)
++ return (struct io_poll_iocb *) req->io;
++ return req->apoll->double_poll;
++}
++
++static struct io_poll_iocb *io_poll_get_single(struct io_kiocb *req)
++{
++ if (req->opcode == IORING_OP_POLL_ADD)
++ return &req->poll;
++ return &req->apoll->poll;
++}
++
++static void io_poll_remove_double(struct io_kiocb *req)
+ {
+- struct io_poll_iocb *poll = data;
++ struct io_poll_iocb *poll = io_poll_get_double(req);
+
+ lockdep_assert_held(&req->ctx->completion_lock);
+
+@@ -4182,7 +4228,7 @@ static void io_poll_complete(struct io_kiocb *req, __poll_t mask, int error)
+ {
+ struct io_ring_ctx *ctx = req->ctx;
+
+- io_poll_remove_double(req, req->io);
++ io_poll_remove_double(req);
+ req->poll.done = true;
+ io_cqring_fill_event(req, error ? error : mangle_poll(mask));
+ io_commit_cqring(ctx);
+@@ -4208,6 +4254,7 @@ static void io_poll_task_handler(struct io_kiocb *req, struct io_kiocb **nxt)
+ static void io_poll_task_func(struct callback_head *cb)
+ {
+ struct io_kiocb *req = container_of(cb, struct io_kiocb, task_work);
++ struct io_ring_ctx *ctx = req->ctx;
+ struct io_kiocb *nxt = NULL;
+
+ io_poll_task_handler(req, &nxt);
+@@ -4218,13 +4265,14 @@ static void io_poll_task_func(struct callback_head *cb)
+ __io_queue_sqe(nxt, NULL);
+ mutex_unlock(&ctx->uring_lock);
+ }
++ percpu_ref_put(&ctx->refs);
+ }
+
+ static int io_poll_double_wake(struct wait_queue_entry *wait, unsigned mode,
+ int sync, void *key)
+ {
+ struct io_kiocb *req = wait->private;
+- struct io_poll_iocb *poll = req->apoll->double_poll;
++ struct io_poll_iocb *poll = io_poll_get_single(req);
+ __poll_t mask = key_to_poll(key);
+
+ /* for instances that support it check for an event match first: */
+@@ -4238,6 +4286,8 @@ static int io_poll_double_wake(struct wait_queue_entry *wait, unsigned mode,
+ done = list_empty(&poll->wait.entry);
+ if (!done)
+ list_del_init(&poll->wait.entry);
++ /* make sure double remove sees this as being gone */
++ wait->private = NULL;
+ spin_unlock(&poll->head->lock);
+ if (!done)
+ __io_async_wake(req, poll, mask, io_poll_task_func);
+@@ -4332,6 +4382,7 @@ static void io_async_task_func(struct callback_head *cb)
+
+ if (io_poll_rewait(req, &apoll->poll)) {
+ spin_unlock_irq(&ctx->completion_lock);
++ percpu_ref_put(&ctx->refs);
+ return;
+ }
+
+@@ -4346,7 +4397,7 @@ static void io_async_task_func(struct callback_head *cb)
+ }
+ }
+
+- io_poll_remove_double(req, apoll->double_poll);
++ io_poll_remove_double(req);
+ spin_unlock_irq(&ctx->completion_lock);
+
+ /* restore ->work in case we need to retry again */
+@@ -4356,7 +4407,6 @@ static void io_async_task_func(struct callback_head *cb)
+ kfree(apoll);
+
+ if (!canceled) {
+- __set_current_state(TASK_RUNNING);
+ if (io_sq_thread_acquire_mm(ctx, req)) {
+ io_cqring_add_event(req, -EFAULT);
+ goto end_req;
+@@ -4370,6 +4420,7 @@ end_req:
+ req_set_fail_links(req);
+ io_double_put_req(req);
+ }
++ percpu_ref_put(&ctx->refs);
+ }
+
+ static int io_async_wake(struct wait_queue_entry *wait, unsigned mode, int sync,
+@@ -4472,8 +4523,8 @@ static bool io_arm_poll_handler(struct io_kiocb *req)
+
+ ret = __io_arm_poll_handler(req, &apoll->poll, &ipt, mask,
+ io_async_wake);
+- if (ret) {
+- io_poll_remove_double(req, apoll->double_poll);
++ if (ret || ipt.error) {
++ io_poll_remove_double(req);
+ spin_unlock_irq(&ctx->completion_lock);
+ if (req->flags & REQ_F_WORK_INITIALIZED)
+ memcpy(&req->work, &apoll->work, sizeof(req->work));
+@@ -4507,14 +4558,13 @@ static bool io_poll_remove_one(struct io_kiocb *req)
+ {
+ bool do_complete;
+
++ io_poll_remove_double(req);
++
+ if (req->opcode == IORING_OP_POLL_ADD) {
+- io_poll_remove_double(req, req->io);
+ do_complete = __io_poll_remove_one(req, &req->poll);
+ } else {
+ struct async_poll *apoll = req->apoll;
+
+- io_poll_remove_double(req, apoll->double_poll);
+-
+ /* non-poll requests have submit ref still */
+ do_complete = __io_poll_remove_one(req, &apoll->poll);
+ if (do_complete) {
+@@ -4536,6 +4586,7 @@ static bool io_poll_remove_one(struct io_kiocb *req)
+ io_cqring_fill_event(req, -ECANCELED);
+ io_commit_cqring(req->ctx);
+ req->flags |= REQ_F_COMP_LOCKED;
++ req_set_fail_links(req);
+ io_put_req(req);
+ }
+
+@@ -4709,6 +4760,23 @@ static enum hrtimer_restart io_timeout_fn(struct hrtimer *timer)
+ return HRTIMER_NORESTART;
+ }
+
++static int __io_timeout_cancel(struct io_kiocb *req)
++{
++ int ret;
++
++ list_del_init(&req->list);
++
++ ret = hrtimer_try_to_cancel(&req->io->timeout.timer);
++ if (ret == -1)
++ return -EALREADY;
++
++ req_set_fail_links(req);
++ req->flags |= REQ_F_COMP_LOCKED;
++ io_cqring_fill_event(req, -ECANCELED);
++ io_put_req(req);
++ return 0;
++}
++
+ static int io_timeout_cancel(struct io_ring_ctx *ctx, __u64 user_data)
+ {
+ struct io_kiocb *req;
+@@ -4716,7 +4784,6 @@ static int io_timeout_cancel(struct io_ring_ctx *ctx, __u64 user_data)
+
+ list_for_each_entry(req, &ctx->timeout_list, list) {
+ if (user_data == req->user_data) {
+- list_del_init(&req->list);
+ ret = 0;
+ break;
+ }
+@@ -4725,14 +4792,7 @@ static int io_timeout_cancel(struct io_ring_ctx *ctx, __u64 user_data)
+ if (ret == -ENOENT)
+ return ret;
+
+- ret = hrtimer_try_to_cancel(&req->io->timeout.timer);
+- if (ret == -1)
+- return -EALREADY;
+-
+- req_set_fail_links(req);
+- io_cqring_fill_event(req, -ECANCELED);
+- io_put_req(req);
+- return 0;
++ return __io_timeout_cancel(req);
+ }
+
+ static int io_timeout_remove_prep(struct io_kiocb *req,
+@@ -6082,8 +6142,7 @@ static int io_sq_thread(void *data)
+ if (!list_empty(&ctx->poll_list) || need_resched() ||
+ (!time_after(jiffies, timeout) && ret != -EBUSY &&
+ !percpu_ref_is_dying(&ctx->refs))) {
+- if (current->task_works)
+- task_work_run();
++ io_run_task_work();
+ cond_resched();
+ continue;
+ }
+@@ -6115,8 +6174,7 @@ static int io_sq_thread(void *data)
+ finish_wait(&ctx->sqo_wait, &wait);
+ break;
+ }
+- if (current->task_works) {
+- task_work_run();
++ if (io_run_task_work()) {
+ finish_wait(&ctx->sqo_wait, &wait);
+ continue;
+ }
+@@ -6145,8 +6203,7 @@ static int io_sq_thread(void *data)
+ timeout = jiffies + ctx->sq_thread_idle;
+ }
+
+- if (current->task_works)
+- task_work_run();
++ io_run_task_work();
+
+ io_sq_thread_drop_mm(ctx);
+ revert_creds(old_cred);
+@@ -6211,9 +6268,8 @@ static int io_cqring_wait(struct io_ring_ctx *ctx, int min_events,
+ do {
+ if (io_cqring_events(ctx, false) >= min_events)
+ return 0;
+- if (!current->task_works)
++ if (!io_run_task_work())
+ break;
+- task_work_run();
+ } while (1);
+
+ if (sig) {
+@@ -6235,8 +6291,8 @@ static int io_cqring_wait(struct io_ring_ctx *ctx, int min_events,
+ prepare_to_wait_exclusive(&ctx->wait, &iowq.wq,
+ TASK_INTERRUPTIBLE);
+ /* make sure we run task_work before checking for signals */
+- if (current->task_works)
+- task_work_run();
++ if (io_run_task_work())
++ continue;
+ if (signal_pending(current)) {
+ if (current->jobctl & JOBCTL_TASK_WORK) {
+ spin_lock_irq(¤t->sighand->siglock);
+@@ -7086,6 +7142,9 @@ static unsigned long rings_size(unsigned sq_entries, unsigned cq_entries,
+ return SIZE_MAX;
+ #endif
+
++ if (sq_offset)
++ *sq_offset = off;
++
+ sq_array_size = array_size(sizeof(u32), sq_entries);
+ if (sq_array_size == SIZE_MAX)
+ return SIZE_MAX;
+@@ -7093,9 +7152,6 @@ static unsigned long rings_size(unsigned sq_entries, unsigned cq_entries,
+ if (check_add_overflow(off, sq_array_size, &off))
+ return SIZE_MAX;
+
+- if (sq_offset)
+- *sq_offset = off;
+-
+ return off;
+ }
+
+@@ -7488,6 +7544,71 @@ static bool io_wq_files_match(struct io_wq_work *work, void *data)
+ return work->files == files;
+ }
+
++/*
++ * Returns true if 'preq' is the link parent of 'req'
++ */
++static bool io_match_link(struct io_kiocb *preq, struct io_kiocb *req)
++{
++ struct io_kiocb *link;
++
++ if (!(preq->flags & REQ_F_LINK_HEAD))
++ return false;
++
++ list_for_each_entry(link, &preq->link_list, link_list) {
++ if (link == req)
++ return true;
++ }
++
++ return false;
++}
++
++/*
++ * We're looking to cancel 'req' because it's holding on to our files, but
++ * 'req' could be a link to another request. See if it is, and cancel that
++ * parent request if so.
++ */
++static bool io_poll_remove_link(struct io_ring_ctx *ctx, struct io_kiocb *req)
++{
++ struct hlist_node *tmp;
++ struct io_kiocb *preq;
++ bool found = false;
++ int i;
++
++ spin_lock_irq(&ctx->completion_lock);
++ for (i = 0; i < (1U << ctx->cancel_hash_bits); i++) {
++ struct hlist_head *list;
++
++ list = &ctx->cancel_hash[i];
++ hlist_for_each_entry_safe(preq, tmp, list, hash_node) {
++ found = io_match_link(preq, req);
++ if (found) {
++ io_poll_remove_one(preq);
++ break;
++ }
++ }
++ }
++ spin_unlock_irq(&ctx->completion_lock);
++ return found;
++}
++
++static bool io_timeout_remove_link(struct io_ring_ctx *ctx,
++ struct io_kiocb *req)
++{
++ struct io_kiocb *preq;
++ bool found = false;
++
++ spin_lock_irq(&ctx->completion_lock);
++ list_for_each_entry(preq, &ctx->timeout_list, list) {
++ found = io_match_link(preq, req);
++ if (found) {
++ __io_timeout_cancel(preq);
++ break;
++ }
++ }
++ spin_unlock_irq(&ctx->completion_lock);
++ return found;
++}
++
+ static void io_uring_cancel_files(struct io_ring_ctx *ctx,
+ struct files_struct *files)
+ {
+@@ -7529,10 +7650,10 @@ static void io_uring_cancel_files(struct io_ring_ctx *ctx,
+ clear_bit(0, &ctx->cq_check_overflow);
+ ctx->rings->sq_flags &= ~IORING_SQ_CQ_OVERFLOW;
+ }
+- spin_unlock_irq(&ctx->completion_lock);
+-
+ WRITE_ONCE(ctx->rings->cq_overflow,
+ atomic_inc_return(&ctx->cached_cq_overflow));
++ io_commit_cqring(ctx);
++ spin_unlock_irq(&ctx->completion_lock);
+
+ /*
+ * Put inflight ref and overflow ref. If that's
+@@ -7545,6 +7666,9 @@ static void io_uring_cancel_files(struct io_ring_ctx *ctx,
+ }
+ } else {
+ io_wq_cancel_work(ctx->io_wq, &cancel_req->work);
++ /* could be a link, check and remove if it is */
++ if (!io_poll_remove_link(ctx, cancel_req))
++ io_timeout_remove_link(ctx, cancel_req);
+ io_put_req(cancel_req);
+ }
+
+@@ -7655,8 +7779,7 @@ SYSCALL_DEFINE6(io_uring_enter, unsigned int, fd, u32, to_submit,
+ int submitted = 0;
+ struct fd f;
+
+- if (current->task_works)
+- task_work_run();
++ io_run_task_work();
+
+ if (flags & ~(IORING_ENTER_GETEVENTS | IORING_ENTER_SQ_WAKEUP))
+ return -EINVAL;
+@@ -7828,6 +7951,10 @@ static int io_allocate_scq_urings(struct io_ring_ctx *ctx,
+ struct io_rings *rings;
+ size_t size, sq_array_offset;
+
++ /* make sure these are sane, as we already accounted them */
++ ctx->sq_entries = p->sq_entries;
++ ctx->cq_entries = p->cq_entries;
++
+ size = rings_size(p->sq_entries, p->cq_entries, &sq_array_offset);
+ if (size == SIZE_MAX)
+ return -EOVERFLOW;
+@@ -7844,8 +7971,6 @@ static int io_allocate_scq_urings(struct io_ring_ctx *ctx,
+ rings->cq_ring_entries = p->cq_entries;
+ ctx->sq_mask = rings->sq_ring_mask;
+ ctx->cq_mask = rings->cq_ring_mask;
+- ctx->sq_entries = rings->sq_ring_entries;
+- ctx->cq_entries = rings->cq_ring_entries;
+
+ size = array_size(sizeof(struct io_uring_sqe), p->sq_entries);
+ if (size == SIZE_MAX) {
+diff --git a/fs/kernfs/file.c b/fs/kernfs/file.c
+index 06b342d8462b..e23b3f62483c 100644
+--- a/fs/kernfs/file.c
++++ b/fs/kernfs/file.c
+@@ -912,7 +912,7 @@ repeat:
+ }
+
+ fsnotify(inode, FS_MODIFY, inode, FSNOTIFY_EVENT_INODE,
+- &name, 0);
++ NULL, 0);
+ iput(inode);
+ }
+
+diff --git a/fs/minix/inode.c b/fs/minix/inode.c
+index 7cb5fd38eb14..0dd929346f3f 100644
+--- a/fs/minix/inode.c
++++ b/fs/minix/inode.c
+@@ -150,6 +150,23 @@ static int minix_remount (struct super_block * sb, int * flags, char * data)
+ return 0;
+ }
+
++static bool minix_check_superblock(struct minix_sb_info *sbi)
++{
++ if (sbi->s_imap_blocks == 0 || sbi->s_zmap_blocks == 0)
++ return false;
++
++ /*
++ * s_max_size must not exceed the block mapping limitation. This check
++ * is only needed for V1 filesystems, since V2/V3 support an extra level
++ * of indirect blocks which places the limit well above U32_MAX.
++ */
++ if (sbi->s_version == MINIX_V1 &&
++ sbi->s_max_size > (7 + 512 + 512*512) * BLOCK_SIZE)
++ return false;
++
++ return true;
++}
++
+ static int minix_fill_super(struct super_block *s, void *data, int silent)
+ {
+ struct buffer_head *bh;
+@@ -228,11 +245,12 @@ static int minix_fill_super(struct super_block *s, void *data, int silent)
+ } else
+ goto out_no_fs;
+
++ if (!minix_check_superblock(sbi))
++ goto out_illegal_sb;
++
+ /*
+ * Allocate the buffer map to keep the superblock small.
+ */
+- if (sbi->s_imap_blocks == 0 || sbi->s_zmap_blocks == 0)
+- goto out_illegal_sb;
+ i = (sbi->s_imap_blocks + sbi->s_zmap_blocks) * sizeof(bh);
+ map = kzalloc(i, GFP_KERNEL);
+ if (!map)
+@@ -468,6 +486,13 @@ static struct inode *V1_minix_iget(struct inode *inode)
+ iget_failed(inode);
+ return ERR_PTR(-EIO);
+ }
++ if (raw_inode->i_nlinks == 0) {
++ printk("MINIX-fs: deleted inode referenced: %lu\n",
++ inode->i_ino);
++ brelse(bh);
++ iget_failed(inode);
++ return ERR_PTR(-ESTALE);
++ }
+ inode->i_mode = raw_inode->i_mode;
+ i_uid_write(inode, raw_inode->i_uid);
+ i_gid_write(inode, raw_inode->i_gid);
+@@ -501,6 +526,13 @@ static struct inode *V2_minix_iget(struct inode *inode)
+ iget_failed(inode);
+ return ERR_PTR(-EIO);
+ }
++ if (raw_inode->i_nlinks == 0) {
++ printk("MINIX-fs: deleted inode referenced: %lu\n",
++ inode->i_ino);
++ brelse(bh);
++ iget_failed(inode);
++ return ERR_PTR(-ESTALE);
++ }
+ inode->i_mode = raw_inode->i_mode;
+ i_uid_write(inode, raw_inode->i_uid);
+ i_gid_write(inode, raw_inode->i_gid);
+diff --git a/fs/minix/itree_common.c b/fs/minix/itree_common.c
+index 043c3fdbc8e7..446148792f41 100644
+--- a/fs/minix/itree_common.c
++++ b/fs/minix/itree_common.c
+@@ -75,6 +75,7 @@ static int alloc_branch(struct inode *inode,
+ int n = 0;
+ int i;
+ int parent = minix_new_block(inode);
++ int err = -ENOSPC;
+
+ branch[0].key = cpu_to_block(parent);
+ if (parent) for (n = 1; n < num; n++) {
+@@ -85,6 +86,11 @@ static int alloc_branch(struct inode *inode,
+ break;
+ branch[n].key = cpu_to_block(nr);
+ bh = sb_getblk(inode->i_sb, parent);
++ if (!bh) {
++ minix_free_block(inode, nr);
++ err = -ENOMEM;
++ break;
++ }
+ lock_buffer(bh);
+ memset(bh->b_data, 0, bh->b_size);
+ branch[n].bh = bh;
+@@ -103,7 +109,7 @@ static int alloc_branch(struct inode *inode,
+ bforget(branch[i].bh);
+ for (i = 0; i < n; i++)
+ minix_free_block(inode, block_to_cpu(branch[i].key));
+- return -ENOSPC;
++ return err;
+ }
+
+ static inline int splice_branch(struct inode *inode,
+diff --git a/fs/nfs/pnfs.c b/fs/nfs/pnfs.c
+index dd2e14f5875d..d61dac48dff5 100644
+--- a/fs/nfs/pnfs.c
++++ b/fs/nfs/pnfs.c
+@@ -1226,31 +1226,27 @@ out:
+ return status;
+ }
+
++static bool
++pnfs_layout_segments_returnable(struct pnfs_layout_hdr *lo,
++ enum pnfs_iomode iomode,
++ u32 seq)
++{
++ struct pnfs_layout_range recall_range = {
++ .length = NFS4_MAX_UINT64,
++ .iomode = iomode,
++ };
++ return pnfs_mark_matching_lsegs_return(lo, &lo->plh_return_segs,
++ &recall_range, seq) != -EBUSY;
++}
++
+ /* Return true if layoutreturn is needed */
+ static bool
+ pnfs_layout_need_return(struct pnfs_layout_hdr *lo)
+ {
+- struct pnfs_layout_segment *s;
+- enum pnfs_iomode iomode;
+- u32 seq;
+-
+ if (!test_bit(NFS_LAYOUT_RETURN_REQUESTED, &lo->plh_flags))
+ return false;
+-
+- seq = lo->plh_return_seq;
+- iomode = lo->plh_return_iomode;
+-
+- /* Defer layoutreturn until all recalled lsegs are done */
+- list_for_each_entry(s, &lo->plh_segs, pls_list) {
+- if (seq && pnfs_seqid_is_newer(s->pls_seq, seq))
+- continue;
+- if (iomode != IOMODE_ANY && s->pls_range.iomode != iomode)
+- continue;
+- if (test_bit(NFS_LSEG_LAYOUTRETURN, &s->pls_flags))
+- return false;
+- }
+-
+- return true;
++ return pnfs_layout_segments_returnable(lo, lo->plh_return_iomode,
++ lo->plh_return_seq);
+ }
+
+ static void pnfs_layoutreturn_before_put_layout_hdr(struct pnfs_layout_hdr *lo)
+@@ -2392,16 +2388,6 @@ out_forget:
+ return ERR_PTR(-EAGAIN);
+ }
+
+-static int
+-mark_lseg_invalid_or_return(struct pnfs_layout_segment *lseg,
+- struct list_head *tmp_list)
+-{
+- if (!mark_lseg_invalid(lseg, tmp_list))
+- return 0;
+- pnfs_cache_lseg_for_layoutreturn(lseg->pls_layout, lseg);
+- return 1;
+-}
+-
+ /**
+ * pnfs_mark_matching_lsegs_return - Free or return matching layout segments
+ * @lo: pointer to layout header
+@@ -2438,7 +2424,7 @@ pnfs_mark_matching_lsegs_return(struct pnfs_layout_hdr *lo,
+ lseg, lseg->pls_range.iomode,
+ lseg->pls_range.offset,
+ lseg->pls_range.length);
+- if (mark_lseg_invalid_or_return(lseg, tmp_list))
++ if (mark_lseg_invalid(lseg, tmp_list))
+ continue;
+ remaining++;
+ set_bit(NFS_LSEG_LAYOUTRETURN, &lseg->pls_flags);
+diff --git a/fs/nfsd/nfs4recover.c b/fs/nfsd/nfs4recover.c
+index 9e40dfecf1b1..186fa2c2c6ba 100644
+--- a/fs/nfsd/nfs4recover.c
++++ b/fs/nfsd/nfs4recover.c
+@@ -747,13 +747,11 @@ struct cld_upcall {
+ };
+
+ static int
+-__cld_pipe_upcall(struct rpc_pipe *pipe, void *cmsg)
++__cld_pipe_upcall(struct rpc_pipe *pipe, void *cmsg, struct nfsd_net *nn)
+ {
+ int ret;
+ struct rpc_pipe_msg msg;
+ struct cld_upcall *cup = container_of(cmsg, struct cld_upcall, cu_u);
+- struct nfsd_net *nn = net_generic(pipe->dentry->d_sb->s_fs_info,
+- nfsd_net_id);
+
+ memset(&msg, 0, sizeof(msg));
+ msg.data = cmsg;
+@@ -773,7 +771,7 @@ out:
+ }
+
+ static int
+-cld_pipe_upcall(struct rpc_pipe *pipe, void *cmsg)
++cld_pipe_upcall(struct rpc_pipe *pipe, void *cmsg, struct nfsd_net *nn)
+ {
+ int ret;
+
+@@ -782,7 +780,7 @@ cld_pipe_upcall(struct rpc_pipe *pipe, void *cmsg)
+ * upcalls queued.
+ */
+ do {
+- ret = __cld_pipe_upcall(pipe, cmsg);
++ ret = __cld_pipe_upcall(pipe, cmsg, nn);
+ } while (ret == -EAGAIN);
+
+ return ret;
+@@ -1115,7 +1113,7 @@ nfsd4_cld_create(struct nfs4_client *clp)
+ memcpy(cup->cu_u.cu_msg.cm_u.cm_name.cn_id, clp->cl_name.data,
+ clp->cl_name.len);
+
+- ret = cld_pipe_upcall(cn->cn_pipe, &cup->cu_u.cu_msg);
++ ret = cld_pipe_upcall(cn->cn_pipe, &cup->cu_u.cu_msg, nn);
+ if (!ret) {
+ ret = cup->cu_u.cu_msg.cm_status;
+ set_bit(NFSD4_CLIENT_STABLE, &clp->cl_flags);
+@@ -1180,7 +1178,7 @@ nfsd4_cld_create_v2(struct nfs4_client *clp)
+ } else
+ cmsg->cm_u.cm_clntinfo.cc_princhash.cp_len = 0;
+
+- ret = cld_pipe_upcall(cn->cn_pipe, cmsg);
++ ret = cld_pipe_upcall(cn->cn_pipe, cmsg, nn);
+ if (!ret) {
+ ret = cmsg->cm_status;
+ set_bit(NFSD4_CLIENT_STABLE, &clp->cl_flags);
+@@ -1218,7 +1216,7 @@ nfsd4_cld_remove(struct nfs4_client *clp)
+ memcpy(cup->cu_u.cu_msg.cm_u.cm_name.cn_id, clp->cl_name.data,
+ clp->cl_name.len);
+
+- ret = cld_pipe_upcall(cn->cn_pipe, &cup->cu_u.cu_msg);
++ ret = cld_pipe_upcall(cn->cn_pipe, &cup->cu_u.cu_msg, nn);
+ if (!ret) {
+ ret = cup->cu_u.cu_msg.cm_status;
+ clear_bit(NFSD4_CLIENT_STABLE, &clp->cl_flags);
+@@ -1261,7 +1259,7 @@ nfsd4_cld_check_v0(struct nfs4_client *clp)
+ memcpy(cup->cu_u.cu_msg.cm_u.cm_name.cn_id, clp->cl_name.data,
+ clp->cl_name.len);
+
+- ret = cld_pipe_upcall(cn->cn_pipe, &cup->cu_u.cu_msg);
++ ret = cld_pipe_upcall(cn->cn_pipe, &cup->cu_u.cu_msg, nn);
+ if (!ret) {
+ ret = cup->cu_u.cu_msg.cm_status;
+ set_bit(NFSD4_CLIENT_STABLE, &clp->cl_flags);
+@@ -1404,7 +1402,7 @@ nfsd4_cld_grace_start(struct nfsd_net *nn)
+ }
+
+ cup->cu_u.cu_msg.cm_cmd = Cld_GraceStart;
+- ret = cld_pipe_upcall(cn->cn_pipe, &cup->cu_u.cu_msg);
++ ret = cld_pipe_upcall(cn->cn_pipe, &cup->cu_u.cu_msg, nn);
+ if (!ret)
+ ret = cup->cu_u.cu_msg.cm_status;
+
+@@ -1432,7 +1430,7 @@ nfsd4_cld_grace_done_v0(struct nfsd_net *nn)
+
+ cup->cu_u.cu_msg.cm_cmd = Cld_GraceDone;
+ cup->cu_u.cu_msg.cm_u.cm_gracetime = nn->boot_time;
+- ret = cld_pipe_upcall(cn->cn_pipe, &cup->cu_u.cu_msg);
++ ret = cld_pipe_upcall(cn->cn_pipe, &cup->cu_u.cu_msg, nn);
+ if (!ret)
+ ret = cup->cu_u.cu_msg.cm_status;
+
+@@ -1460,7 +1458,7 @@ nfsd4_cld_grace_done(struct nfsd_net *nn)
+ }
+
+ cup->cu_u.cu_msg.cm_cmd = Cld_GraceDone;
+- ret = cld_pipe_upcall(cn->cn_pipe, &cup->cu_u.cu_msg);
++ ret = cld_pipe_upcall(cn->cn_pipe, &cup->cu_u.cu_msg, nn);
+ if (!ret)
+ ret = cup->cu_u.cu_msg.cm_status;
+
+@@ -1524,7 +1522,7 @@ nfsd4_cld_get_version(struct nfsd_net *nn)
+ goto out_err;
+ }
+ cup->cu_u.cu_msg.cm_cmd = Cld_GetVersion;
+- ret = cld_pipe_upcall(cn->cn_pipe, &cup->cu_u.cu_msg);
++ ret = cld_pipe_upcall(cn->cn_pipe, &cup->cu_u.cu_msg, nn);
+ if (!ret) {
+ ret = cup->cu_u.cu_msg.cm_status;
+ if (ret)
+diff --git a/fs/ocfs2/dlmglue.c b/fs/ocfs2/dlmglue.c
+index 751bc4dc7466..8e3a369086db 100644
+--- a/fs/ocfs2/dlmglue.c
++++ b/fs/ocfs2/dlmglue.c
+@@ -2871,9 +2871,15 @@ int ocfs2_nfs_sync_lock(struct ocfs2_super *osb, int ex)
+
+ status = ocfs2_cluster_lock(osb, lockres, ex ? LKM_EXMODE : LKM_PRMODE,
+ 0, 0);
+- if (status < 0)
++ if (status < 0) {
+ mlog(ML_ERROR, "lock on nfs sync lock failed %d\n", status);
+
++ if (ex)
++ up_write(&osb->nfs_sync_rwlock);
++ else
++ up_read(&osb->nfs_sync_rwlock);
++ }
++
+ return status;
+ }
+
+diff --git a/fs/pstore/platform.c b/fs/pstore/platform.c
+index a9e297eefdff..36714df37d5d 100644
+--- a/fs/pstore/platform.c
++++ b/fs/pstore/platform.c
+@@ -269,6 +269,9 @@ static int pstore_compress(const void *in, void *out,
+ {
+ int ret;
+
++ if (!IS_ENABLED(CONFIG_PSTORE_COMPRESSION))
++ return -EINVAL;
++
+ ret = crypto_comp_compress(tfm, in, inlen, out, &outlen);
+ if (ret) {
+ pr_err("crypto_comp_compress failed, ret = %d!\n", ret);
+@@ -668,7 +671,7 @@ static void decompress_record(struct pstore_record *record)
+ int unzipped_len;
+ char *unzipped, *workspace;
+
+- if (!record->compressed)
++ if (!IS_ENABLED(CONFIG_PSTORE_COMPRESSION) || !record->compressed)
+ return;
+
+ /* Only PSTORE_TYPE_DMESG support compression. */
+diff --git a/fs/xfs/libxfs/xfs_shared.h b/fs/xfs/libxfs/xfs_shared.h
+index c45acbd3add9..708feb8eac76 100644
+--- a/fs/xfs/libxfs/xfs_shared.h
++++ b/fs/xfs/libxfs/xfs_shared.h
+@@ -65,6 +65,7 @@ void xfs_log_get_max_trans_res(struct xfs_mount *mp,
+ #define XFS_TRANS_DQ_DIRTY 0x10 /* at least one dquot in trx dirty */
+ #define XFS_TRANS_RESERVE 0x20 /* OK to use reserved data blocks */
+ #define XFS_TRANS_NO_WRITECOUNT 0x40 /* do not elevate SB writecount */
++#define XFS_TRANS_RES_FDBLKS 0x80 /* reserve newly freed blocks */
+ /*
+ * LOWMODE is used by the allocator to activate the lowspace algorithm - when
+ * free space is running low the extent allocator may choose to allocate an
+diff --git a/fs/xfs/libxfs/xfs_trans_space.h b/fs/xfs/libxfs/xfs_trans_space.h
+index 88221c7a04cc..c6df01a2a158 100644
+--- a/fs/xfs/libxfs/xfs_trans_space.h
++++ b/fs/xfs/libxfs/xfs_trans_space.h
+@@ -57,7 +57,7 @@
+ XFS_DAREMOVE_SPACE_RES(mp, XFS_DATA_FORK)
+ #define XFS_IALLOC_SPACE_RES(mp) \
+ (M_IGEO(mp)->ialloc_blks + \
+- (xfs_sb_version_hasfinobt(&mp->m_sb) ? 2 : 1 * \
++ ((xfs_sb_version_hasfinobt(&mp->m_sb) ? 2 : 1) * \
+ (M_IGEO(mp)->inobt_maxlevels - 1)))
+
+ /*
+diff --git a/fs/xfs/scrub/bmap.c b/fs/xfs/scrub/bmap.c
+index 7badd6dfe544..955302e7cdde 100644
+--- a/fs/xfs/scrub/bmap.c
++++ b/fs/xfs/scrub/bmap.c
+@@ -45,9 +45,27 @@ xchk_setup_inode_bmap(
+ */
+ if (S_ISREG(VFS_I(sc->ip)->i_mode) &&
+ sc->sm->sm_type == XFS_SCRUB_TYPE_BMBTD) {
++ struct address_space *mapping = VFS_I(sc->ip)->i_mapping;
++
+ inode_dio_wait(VFS_I(sc->ip));
+- error = filemap_write_and_wait(VFS_I(sc->ip)->i_mapping);
+- if (error)
++
++ /*
++ * Try to flush all incore state to disk before we examine the
++ * space mappings for the data fork. Leave accumulated errors
++ * in the mapping for the writer threads to consume.
++ *
++ * On ENOSPC or EIO writeback errors, we continue into the
++ * extent mapping checks because write failures do not
++ * necessarily imply anything about the correctness of the file
++ * metadata. The metadata and the file data could be on
++ * completely separate devices; a media failure might only
++ * affect a subset of the disk, etc. We can handle delalloc
++ * extents in the scrubber, so leaving them in memory is fine.
++ */
++ error = filemap_fdatawrite(mapping);
++ if (!error)
++ error = filemap_fdatawait_keep_errors(mapping);
++ if (error && (error != -ENOSPC && error != -EIO))
+ goto out;
+ }
+
+diff --git a/fs/xfs/xfs_bmap_util.c b/fs/xfs/xfs_bmap_util.c
+index f37f5cc4b19f..afdc7f8e0e70 100644
+--- a/fs/xfs/xfs_bmap_util.c
++++ b/fs/xfs/xfs_bmap_util.c
+@@ -1567,6 +1567,7 @@ xfs_swap_extents(
+ int lock_flags;
+ uint64_t f;
+ int resblks = 0;
++ unsigned int flags = 0;
+
+ /*
+ * Lock the inodes against other IO, page faults and truncate to
+@@ -1630,17 +1631,16 @@ xfs_swap_extents(
+ resblks += XFS_SWAP_RMAP_SPACE_RES(mp, tipnext, w);
+
+ /*
+- * Handle the corner case where either inode might straddle the
+- * btree format boundary. If so, the inode could bounce between
+- * btree <-> extent format on unmap -> remap cycles, freeing and
+- * allocating a bmapbt block each time.
++ * If either inode straddles a bmapbt block allocation boundary,
++ * the rmapbt algorithm triggers repeated allocs and frees as
++ * extents are remapped. This can exhaust the block reservation
++ * prematurely and cause shutdown. Return freed blocks to the
++ * transaction reservation to counter this behavior.
+ */
+- if (ipnext == (XFS_IFORK_MAXEXT(ip, w) + 1))
+- resblks += XFS_IFORK_MAXEXT(ip, w);
+- if (tipnext == (XFS_IFORK_MAXEXT(tip, w) + 1))
+- resblks += XFS_IFORK_MAXEXT(tip, w);
++ flags |= XFS_TRANS_RES_FDBLKS;
+ }
+- error = xfs_trans_alloc(mp, &M_RES(mp)->tr_write, resblks, 0, 0, &tp);
++ error = xfs_trans_alloc(mp, &M_RES(mp)->tr_write, resblks, 0, flags,
++ &tp);
+ if (error)
+ goto out_unlock;
+
+diff --git a/fs/xfs/xfs_qm.c b/fs/xfs/xfs_qm.c
+index d6cd83317344..938023dd8ce5 100644
+--- a/fs/xfs/xfs_qm.c
++++ b/fs/xfs/xfs_qm.c
+@@ -148,6 +148,7 @@ xfs_qm_dqpurge(
+ error = xfs_bwrite(bp);
+ xfs_buf_relse(bp);
+ } else if (error == -EAGAIN) {
++ dqp->dq_flags &= ~XFS_DQ_FREEING;
+ goto out_unlock;
+ }
+ xfs_dqflock(dqp);
+diff --git a/fs/xfs/xfs_reflink.c b/fs/xfs/xfs_reflink.c
+index 107bf2a2f344..d89201d40891 100644
+--- a/fs/xfs/xfs_reflink.c
++++ b/fs/xfs/xfs_reflink.c
+@@ -1003,6 +1003,7 @@ xfs_reflink_remap_extent(
+ xfs_filblks_t rlen;
+ xfs_filblks_t unmap_len;
+ xfs_off_t newlen;
++ int64_t qres;
+ int error;
+
+ unmap_len = irec->br_startoff + irec->br_blockcount - destoff;
+@@ -1025,13 +1026,19 @@ xfs_reflink_remap_extent(
+ xfs_ilock(ip, XFS_ILOCK_EXCL);
+ xfs_trans_ijoin(tp, ip, 0);
+
+- /* If we're not just clearing space, then do we have enough quota? */
+- if (real_extent) {
+- error = xfs_trans_reserve_quota_nblks(tp, ip,
+- irec->br_blockcount, 0, XFS_QMOPT_RES_REGBLKS);
+- if (error)
+- goto out_cancel;
+- }
++ /*
++ * Reserve quota for this operation. We don't know if the first unmap
++ * in the dest file will cause a bmap btree split, so we always reserve
++ * at least enough blocks for that split. If the extent being mapped
++ * in is written, we need to reserve quota for that too.
++ */
++ qres = XFS_EXTENTADD_SPACE_RES(mp, XFS_DATA_FORK);
++ if (real_extent)
++ qres += irec->br_blockcount;
++ error = xfs_trans_reserve_quota_nblks(tp, ip, qres, 0,
++ XFS_QMOPT_RES_REGBLKS);
++ if (error)
++ goto out_cancel;
+
+ trace_xfs_reflink_remap(ip, irec->br_startoff,
+ irec->br_blockcount, irec->br_startblock);
+diff --git a/fs/xfs/xfs_trans.c b/fs/xfs/xfs_trans.c
+index 3c94e5ff4316..0ad72a83edac 100644
+--- a/fs/xfs/xfs_trans.c
++++ b/fs/xfs/xfs_trans.c
+@@ -107,7 +107,8 @@ xfs_trans_dup(
+
+ ntp->t_flags = XFS_TRANS_PERM_LOG_RES |
+ (tp->t_flags & XFS_TRANS_RESERVE) |
+- (tp->t_flags & XFS_TRANS_NO_WRITECOUNT);
++ (tp->t_flags & XFS_TRANS_NO_WRITECOUNT) |
++ (tp->t_flags & XFS_TRANS_RES_FDBLKS);
+ /* We gave our writer reference to the new transaction */
+ tp->t_flags |= XFS_TRANS_NO_WRITECOUNT;
+ ntp->t_ticket = xfs_log_ticket_get(tp->t_ticket);
+@@ -272,6 +273,8 @@ xfs_trans_alloc(
+ */
+ WARN_ON(resp->tr_logres > 0 &&
+ mp->m_super->s_writers.frozen == SB_FREEZE_COMPLETE);
++ ASSERT(!(flags & XFS_TRANS_RES_FDBLKS) ||
++ xfs_sb_version_haslazysbcount(&mp->m_sb));
+
+ tp->t_magic = XFS_TRANS_HEADER_MAGIC;
+ tp->t_flags = flags;
+@@ -365,6 +368,20 @@ xfs_trans_mod_sb(
+ tp->t_blk_res_used += (uint)-delta;
+ if (tp->t_blk_res_used > tp->t_blk_res)
+ xfs_force_shutdown(mp, SHUTDOWN_CORRUPT_INCORE);
++ } else if (delta > 0 && (tp->t_flags & XFS_TRANS_RES_FDBLKS)) {
++ int64_t blkres_delta;
++
++ /*
++ * Return freed blocks directly to the reservation
++ * instead of the global pool, being careful not to
++ * overflow the trans counter. This is used to preserve
++ * reservation across chains of transaction rolls that
++ * repeatedly free and allocate blocks.
++ */
++ blkres_delta = min_t(int64_t, delta,
++ UINT_MAX - tp->t_blk_res);
++ tp->t_blk_res += blkres_delta;
++ delta -= blkres_delta;
+ }
+ tp->t_fdblocks_delta += delta;
+ if (xfs_sb_version_haslazysbcount(&mp->m_sb))
+diff --git a/include/asm-generic/vmlinux.lds.h b/include/asm-generic/vmlinux.lds.h
+index 052e0f05a984..9e9b1ec30b90 100644
+--- a/include/asm-generic/vmlinux.lds.h
++++ b/include/asm-generic/vmlinux.lds.h
+@@ -375,6 +375,7 @@
+ */
+ #ifndef RO_AFTER_INIT_DATA
+ #define RO_AFTER_INIT_DATA \
++ . = ALIGN(8); \
+ __start_ro_after_init = .; \
+ *(.data..ro_after_init) \
+ JUMP_TABLE_DATA \
+diff --git a/include/linux/bitfield.h b/include/linux/bitfield.h
+index 48ea093ff04c..4e035aca6f7e 100644
+--- a/include/linux/bitfield.h
++++ b/include/linux/bitfield.h
+@@ -77,7 +77,7 @@
+ */
+ #define FIELD_FIT(_mask, _val) \
+ ({ \
+- __BF_FIELD_CHECK(_mask, 0ULL, _val, "FIELD_FIT: "); \
++ __BF_FIELD_CHECK(_mask, 0ULL, 0ULL, "FIELD_FIT: "); \
+ !((((typeof(_mask))_val) << __bf_shf(_mask)) & ~(_mask)); \
+ })
+
+diff --git a/include/linux/dmar.h b/include/linux/dmar.h
+index d7bf029df737..65565820328a 100644
+--- a/include/linux/dmar.h
++++ b/include/linux/dmar.h
+@@ -48,6 +48,7 @@ struct dmar_drhd_unit {
+ u16 segment; /* PCI domain */
+ u8 ignored:1; /* ignore drhd */
+ u8 include_all:1;
++ u8 gfx_dedicated:1; /* graphic dedicated */
+ struct intel_iommu *iommu;
+ };
+
+diff --git a/include/linux/gpio/driver.h b/include/linux/gpio/driver.h
+index c4f272af7af5..e6217d8e2e9f 100644
+--- a/include/linux/gpio/driver.h
++++ b/include/linux/gpio/driver.h
+@@ -509,8 +509,16 @@ extern int gpiochip_add_data_with_key(struct gpio_chip *gc, void *data,
+ gpiochip_add_data_with_key(gc, data, &lock_key, \
+ &request_key); \
+ })
++#define devm_gpiochip_add_data(dev, gc, data) ({ \
++ static struct lock_class_key lock_key; \
++ static struct lock_class_key request_key; \
++ devm_gpiochip_add_data_with_key(dev, gc, data, &lock_key, \
++ &request_key); \
++ })
+ #else
+ #define gpiochip_add_data(gc, data) gpiochip_add_data_with_key(gc, data, NULL, NULL)
++#define devm_gpiochip_add_data(dev, gc, data) \
++ devm_gpiochip_add_data_with_key(dev, gc, data, NULL, NULL)
+ #endif /* CONFIG_LOCKDEP */
+
+ static inline int gpiochip_add(struct gpio_chip *gc)
+@@ -518,8 +526,9 @@ static inline int gpiochip_add(struct gpio_chip *gc)
+ return gpiochip_add_data(gc, NULL);
+ }
+ extern void gpiochip_remove(struct gpio_chip *gc);
+-extern int devm_gpiochip_add_data(struct device *dev, struct gpio_chip *gc,
+- void *data);
++extern int devm_gpiochip_add_data_with_key(struct device *dev, struct gpio_chip *gc, void *data,
++ struct lock_class_key *lock_key,
++ struct lock_class_key *request_key);
+
+ extern struct gpio_chip *gpiochip_find(void *data,
+ int (*match)(struct gpio_chip *gc, void *data));
+diff --git a/include/linux/gpio/regmap.h b/include/linux/gpio/regmap.h
+index 4c1e6b34e824..ad76f3d0a6ba 100644
+--- a/include/linux/gpio/regmap.h
++++ b/include/linux/gpio/regmap.h
+@@ -8,7 +8,7 @@ struct gpio_regmap;
+ struct irq_domain;
+ struct regmap;
+
+-#define GPIO_REGMAP_ADDR_ZERO ((unsigned long)(-1))
++#define GPIO_REGMAP_ADDR_ZERO ((unsigned int)(-1))
+ #define GPIO_REGMAP_ADDR(addr) ((addr) ? : GPIO_REGMAP_ADDR_ZERO)
+
+ /**
+diff --git a/include/linux/intel-iommu.h b/include/linux/intel-iommu.h
+index 3e8fa1c7a1e6..04bd9279c3fb 100644
+--- a/include/linux/intel-iommu.h
++++ b/include/linux/intel-iommu.h
+@@ -600,6 +600,8 @@ struct intel_iommu {
+ struct iommu_device iommu; /* IOMMU core code handle */
+ int node;
+ u32 flags; /* Software defined flags */
++
++ struct dmar_drhd_unit *drhd;
+ };
+
+ /* PCI domain-device relationship */
+diff --git a/include/linux/tpm.h b/include/linux/tpm.h
+index 03e9b184411b..8f4ff39f51e7 100644
+--- a/include/linux/tpm.h
++++ b/include/linux/tpm.h
+@@ -96,6 +96,7 @@ struct tpm_space {
+ u8 *context_buf;
+ u32 session_tbl[3];
+ u8 *session_buf;
++ u32 buf_size;
+ };
+
+ struct tpm_bios_log {
+diff --git a/include/linux/tpm_eventlog.h b/include/linux/tpm_eventlog.h
+index 64356b199e94..739ba9a03ec1 100644
+--- a/include/linux/tpm_eventlog.h
++++ b/include/linux/tpm_eventlog.h
+@@ -211,9 +211,16 @@ static inline int __calc_tpm2_event_size(struct tcg_pcr_event2_head *event,
+
+ efispecid = (struct tcg_efi_specid_event_head *)event_header->event;
+
+- /* Check if event is malformed. */
++ /*
++ * Perform validation of the event in order to identify malformed
++ * events. This function may be asked to parse arbitrary byte sequences
++ * immediately following a valid event log. The caller expects this
++ * function to recognize that the byte sequence is not a valid event
++ * and to return an event size of 0.
++ */
+ if (memcmp(efispecid->signature, TCG_SPECID_SIG,
+- sizeof(TCG_SPECID_SIG)) || count > efispecid->num_algs) {
++ sizeof(TCG_SPECID_SIG)) ||
++ !efispecid->num_algs || count != efispecid->num_algs) {
+ size = 0;
+ goto out;
+ }
+diff --git a/include/linux/tracepoint.h b/include/linux/tracepoint.h
+index a1fecf311621..3a5b717d92e8 100644
+--- a/include/linux/tracepoint.h
++++ b/include/linux/tracepoint.h
+@@ -361,7 +361,7 @@ static inline struct tracepoint *tracepoint_ptr_deref(tracepoint_ptr_t *p)
+ static const char *___tp_str __tracepoint_string = str; \
+ ___tp_str; \
+ })
+-#define __tracepoint_string __attribute__((section("__tracepoint_str")))
++#define __tracepoint_string __attribute__((section("__tracepoint_str"), used))
+ #else
+ /*
+ * tracepoint_string() is used to save the string address for userspace
+diff --git a/include/net/bluetooth/bluetooth.h b/include/net/bluetooth/bluetooth.h
+index 18190055374c..155019220c47 100644
+--- a/include/net/bluetooth/bluetooth.h
++++ b/include/net/bluetooth/bluetooth.h
+@@ -41,6 +41,8 @@
+ #define BLUETOOTH_VER_1_1 1
+ #define BLUETOOTH_VER_1_2 2
+ #define BLUETOOTH_VER_2_0 3
++#define BLUETOOTH_VER_2_1 4
++#define BLUETOOTH_VER_4_0 6
+
+ /* Reserv for core and drivers use */
+ #define BT_SKB_RESERVE 8
+diff --git a/include/net/bluetooth/hci.h b/include/net/bluetooth/hci.h
+index 16ab6ce87883..1c321b6d1ed8 100644
+--- a/include/net/bluetooth/hci.h
++++ b/include/net/bluetooth/hci.h
+@@ -227,6 +227,17 @@ enum {
+ * supported.
+ */
+ HCI_QUIRK_VALID_LE_STATES,
++
++ /* When this quirk is set, then erroneous data reporting
++ * is ignored. This is mainly due to the fact that the HCI
++ * Read Default Erroneous Data Reporting command is advertised,
++ * but not supported; these controllers often reply with unknown
+ * command and tend to lock up randomly, needing a hard reset.
++ *
++ * This quirk can be set before hci_register_dev is called or
++ * during the hdev->setup vendor callback.
++ */
++ HCI_QUIRK_BROKEN_ERR_DATA_REPORTING,
+ };
+
+ /* HCI device flags */
+diff --git a/include/net/bluetooth/hci_core.h b/include/net/bluetooth/hci_core.h
+index cdd4f1db8670..da3728871e85 100644
+--- a/include/net/bluetooth/hci_core.h
++++ b/include/net/bluetooth/hci_core.h
+@@ -1387,7 +1387,7 @@ static inline void hci_encrypt_cfm(struct hci_conn *conn, __u8 status)
+ __u8 encrypt;
+
+ if (conn->state == BT_CONFIG) {
+- if (status)
++ if (!status)
+ conn->state = BT_CONNECTED;
+
+ hci_connect_cfm(conn, status);
+diff --git a/include/net/inet_connection_sock.h b/include/net/inet_connection_sock.h
+index e5b388f5fa20..1d59bf55bb4d 100644
+--- a/include/net/inet_connection_sock.h
++++ b/include/net/inet_connection_sock.h
+@@ -316,6 +316,10 @@ int inet_csk_compat_getsockopt(struct sock *sk, int level, int optname,
+ int inet_csk_compat_setsockopt(struct sock *sk, int level, int optname,
+ char __user *optval, unsigned int optlen);
+
++/* update the fast reuse flag when adding a socket */
++void inet_csk_update_fastreuse(struct inet_bind_bucket *tb,
++ struct sock *sk);
++
+ struct dst_entry *inet_csk_update_pmtu(struct sock *sk, u32 mtu);
+
+ #define TCP_PINGPONG_THRESH 3
+diff --git a/include/net/ip_vs.h b/include/net/ip_vs.h
+index 83be2d93b407..fe96aa462d05 100644
+--- a/include/net/ip_vs.h
++++ b/include/net/ip_vs.h
+@@ -1624,18 +1624,16 @@ static inline void ip_vs_conn_drop_conntrack(struct ip_vs_conn *cp)
+ }
+ #endif /* CONFIG_IP_VS_NFCT */
+
+-/* Really using conntrack? */
+-static inline bool ip_vs_conn_uses_conntrack(struct ip_vs_conn *cp,
+- struct sk_buff *skb)
++/* Using old conntrack that can not be redirected to another real server? */
++static inline bool ip_vs_conn_uses_old_conntrack(struct ip_vs_conn *cp,
++ struct sk_buff *skb)
+ {
+ #ifdef CONFIG_IP_VS_NFCT
+ enum ip_conntrack_info ctinfo;
+ struct nf_conn *ct;
+
+- if (!(cp->flags & IP_VS_CONN_F_NFCT))
+- return false;
+ ct = nf_ct_get(skb, &ctinfo);
+- if (ct)
++ if (ct && nf_ct_is_confirmed(ct))
+ return true;
+ #endif
+ return false;
+diff --git a/include/net/tcp.h b/include/net/tcp.h
+index 4de9485f73d9..0c1d2843a6d7 100644
+--- a/include/net/tcp.h
++++ b/include/net/tcp.h
+@@ -1664,6 +1664,8 @@ void tcp_fastopen_destroy_cipher(struct sock *sk);
+ void tcp_fastopen_ctx_destroy(struct net *net);
+ int tcp_fastopen_reset_cipher(struct net *net, struct sock *sk,
+ void *primary_key, void *backup_key);
++int tcp_fastopen_get_cipher(struct net *net, struct inet_connection_sock *icsk,
++ u64 *key);
+ void tcp_fastopen_add_skb(struct sock *sk, struct sk_buff *skb);
+ struct sock *tcp_try_fastopen(struct sock *sk, struct sk_buff *skb,
+ struct request_sock *req,
+diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
+index 8bd33050b7bb..a3fd55194e0b 100644
+--- a/include/uapi/linux/bpf.h
++++ b/include/uapi/linux/bpf.h
+@@ -3168,7 +3168,7 @@ union bpf_attr {
+ * Return
+ * The id is returned or 0 in case the id could not be retrieved.
+ *
+- * int bpf_ringbuf_output(void *ringbuf, void *data, u64 size, u64 flags)
++ * long bpf_ringbuf_output(void *ringbuf, void *data, u64 size, u64 flags)
+ * Description
+ * Copy *size* bytes from *data* into a ring buffer *ringbuf*.
+ * If **BPF_RB_NO_WAKEUP** is specified in *flags*, no notification
+diff --git a/include/uapi/linux/seccomp.h b/include/uapi/linux/seccomp.h
+index c1735455bc53..965290f7dcc2 100644
+--- a/include/uapi/linux/seccomp.h
++++ b/include/uapi/linux/seccomp.h
+@@ -123,5 +123,6 @@ struct seccomp_notif_resp {
+ #define SECCOMP_IOCTL_NOTIF_RECV SECCOMP_IOWR(0, struct seccomp_notif)
+ #define SECCOMP_IOCTL_NOTIF_SEND SECCOMP_IOWR(1, \
+ struct seccomp_notif_resp)
+-#define SECCOMP_IOCTL_NOTIF_ID_VALID SECCOMP_IOR(2, __u64)
++#define SECCOMP_IOCTL_NOTIF_ID_VALID SECCOMP_IOW(2, __u64)
++
+ #endif /* _UAPI_LINUX_SECCOMP_H */
+diff --git a/include/uapi/rdma/qedr-abi.h b/include/uapi/rdma/qedr-abi.h
+index a0b83c9d4498..bf7333b2b5d7 100644
+--- a/include/uapi/rdma/qedr-abi.h
++++ b/include/uapi/rdma/qedr-abi.h
+@@ -39,8 +39,9 @@
+
+ /* user kernel communication data structures. */
+ enum qedr_alloc_ucontext_flags {
+- QEDR_ALLOC_UCTX_RESERVED = 1 << 0,
+- QEDR_ALLOC_UCTX_DB_REC = 1 << 1
++ QEDR_ALLOC_UCTX_EDPM_MODE = 1 << 0,
++ QEDR_ALLOC_UCTX_DB_REC = 1 << 1,
++ QEDR_SUPPORT_DPM_SIZES = 1 << 2,
+ };
+
+ struct qedr_alloc_ucontext_req {
+@@ -50,13 +51,14 @@ struct qedr_alloc_ucontext_req {
+
+ #define QEDR_LDPM_MAX_SIZE (8192)
+ #define QEDR_EDPM_TRANS_SIZE (64)
++#define QEDR_EDPM_MAX_SIZE (ROCE_REQ_MAX_INLINE_DATA_SIZE)
+
+ enum qedr_rdma_dpm_type {
+ QEDR_DPM_TYPE_NONE = 0,
+ QEDR_DPM_TYPE_ROCE_ENHANCED = 1 << 0,
+ QEDR_DPM_TYPE_ROCE_LEGACY = 1 << 1,
+ QEDR_DPM_TYPE_IWARP_LEGACY = 1 << 2,
+- QEDR_DPM_TYPE_RESERVED = 1 << 3,
++ QEDR_DPM_TYPE_ROCE_EDPM_MODE = 1 << 3,
+ QEDR_DPM_SIZES_SET = 1 << 4,
+ };
+
+@@ -77,6 +79,8 @@ struct qedr_alloc_ucontext_resp {
+ __u16 ldpm_limit_size;
+ __u8 edpm_trans_size;
+ __u8 reserved;
++ __u16 edpm_limit_size;
++ __u8 padding[6];
+ };
+
+ struct qedr_alloc_pd_ureq {
+diff --git a/kernel/bpf/map_iter.c b/kernel/bpf/map_iter.c
+index c69071e334bf..1a04c168563d 100644
+--- a/kernel/bpf/map_iter.c
++++ b/kernel/bpf/map_iter.c
+@@ -6,7 +6,7 @@
+ #include <linux/kernel.h>
+
+ struct bpf_iter_seq_map_info {
+- u32 mid;
++ u32 map_id;
+ };
+
+ static void *bpf_map_seq_start(struct seq_file *seq, loff_t *pos)
+@@ -14,27 +14,23 @@ static void *bpf_map_seq_start(struct seq_file *seq, loff_t *pos)
+ struct bpf_iter_seq_map_info *info = seq->private;
+ struct bpf_map *map;
+
+- map = bpf_map_get_curr_or_next(&info->mid);
++ map = bpf_map_get_curr_or_next(&info->map_id);
+ if (!map)
+ return NULL;
+
+- ++*pos;
++ if (*pos == 0)
++ ++*pos;
+ return map;
+ }
+
+ static void *bpf_map_seq_next(struct seq_file *seq, void *v, loff_t *pos)
+ {
+ struct bpf_iter_seq_map_info *info = seq->private;
+- struct bpf_map *map;
+
+ ++*pos;
+- ++info->mid;
++ ++info->map_id;
+ bpf_map_put((struct bpf_map *)v);
+- map = bpf_map_get_curr_or_next(&info->mid);
+- if (!map)
+- return NULL;
+-
+- return map;
++ return bpf_map_get_curr_or_next(&info->map_id);
+ }
+
+ struct bpf_iter__bpf_map {
+diff --git a/kernel/bpf/task_iter.c b/kernel/bpf/task_iter.c
+index 4dbf2b6035f8..ac7869a38999 100644
+--- a/kernel/bpf/task_iter.c
++++ b/kernel/bpf/task_iter.c
+@@ -50,7 +50,8 @@ static void *task_seq_start(struct seq_file *seq, loff_t *pos)
+ if (!task)
+ return NULL;
+
+- ++*pos;
++ if (*pos == 0)
++ ++*pos;
+ return task;
+ }
+
+@@ -209,7 +210,8 @@ static void *task_file_seq_start(struct seq_file *seq, loff_t *pos)
+ return NULL;
+ }
+
+- ++*pos;
++ if (*pos == 0)
++ ++*pos;
+ info->task = task;
+ info->files = files;
+
+diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
+index 6c6569e0586c..1e9e500ff790 100644
+--- a/kernel/rcu/tree.c
++++ b/kernel/rcu/tree.c
+@@ -3105,7 +3105,7 @@ static void kfree_rcu_work(struct work_struct *work)
+ static inline bool queue_kfree_rcu_work(struct kfree_rcu_cpu *krcp)
+ {
+ struct kfree_rcu_cpu_work *krwp;
+- bool queued = false;
++ bool repeat = false;
+ int i;
+
+ lockdep_assert_held(&krcp->lock);
+@@ -3143,11 +3143,14 @@ static inline bool queue_kfree_rcu_work(struct kfree_rcu_cpu *krcp)
+ * been detached following each other, one by one.
+ */
+ queue_rcu_work(system_wq, &krwp->rcu_work);
+- queued = true;
+ }
++
++ /* Repeat if any corresponding "free" channel is still busy. */
++ if (krcp->bhead || krcp->head)
++ repeat = true;
+ }
+
+- return queued;
++ return !repeat;
+ }
+
+ static inline void kfree_rcu_drain_unlock(struct kfree_rcu_cpu *krcp,
+diff --git a/kernel/sched/core.c b/kernel/sched/core.c
+index 2142c6767682..c3cbdc436e2e 100644
+--- a/kernel/sched/core.c
++++ b/kernel/sched/core.c
+@@ -1237,6 +1237,20 @@ static void uclamp_fork(struct task_struct *p)
+ }
+ }
+
++static void __init init_uclamp_rq(struct rq *rq)
++{
++ enum uclamp_id clamp_id;
++ struct uclamp_rq *uc_rq = rq->uclamp;
++
++ for_each_clamp_id(clamp_id) {
++ uc_rq[clamp_id] = (struct uclamp_rq) {
++ .value = uclamp_none(clamp_id)
++ };
++ }
++
++ rq->uclamp_flags = 0;
++}
++
+ static void __init init_uclamp(void)
+ {
+ struct uclamp_se uc_max = {};
+@@ -1245,11 +1259,8 @@ static void __init init_uclamp(void)
+
+ mutex_init(&uclamp_mutex);
+
+- for_each_possible_cpu(cpu) {
+- memset(&cpu_rq(cpu)->uclamp, 0,
+- sizeof(struct uclamp_rq)*UCLAMP_CNT);
+- cpu_rq(cpu)->uclamp_flags = 0;
+- }
++ for_each_possible_cpu(cpu)
++ init_uclamp_rq(cpu_rq(cpu));
+
+ for_each_clamp_id(clamp_id) {
+ uclamp_se_set(&init_task.uclamp_req[clamp_id],
+diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
+index 04fa8dbcfa4d..6b3b59cc51d6 100644
+--- a/kernel/sched/fair.c
++++ b/kernel/sched/fair.c
+@@ -10027,7 +10027,12 @@ static void kick_ilb(unsigned int flags)
+ {
+ int ilb_cpu;
+
+- nohz.next_balance++;
++ /*
++	 * Increase nohz.next_balance only if a full ilb is triggered, but
++ * not if we only update stats.
++ */
++ if (flags & NOHZ_BALANCE_KICK)
++ nohz.next_balance = jiffies+1;
+
+ ilb_cpu = find_new_ilb();
+
+@@ -10348,6 +10353,14 @@ static bool _nohz_idle_balance(struct rq *this_rq, unsigned int flags,
+ }
+ }
+
++ /*
++ * next_balance will be updated only when there is a need.
++	 * When the CPU is attached to a null domain, for example, it will not be
++ * updated.
++ */
++ if (likely(update_next_balance))
++ nohz.next_balance = next_balance;
++
+ /* Newly idle CPU doesn't need an update */
+ if (idle != CPU_NEWLY_IDLE) {
+ update_blocked_averages(this_cpu);
+@@ -10368,14 +10381,6 @@ abort:
+ if (has_blocked_load)
+ WRITE_ONCE(nohz.has_blocked, 1);
+
+- /*
+- * next_balance will be updated only when there is a need.
+- * When the CPU is attached to null domain for ex, it will not be
+- * updated.
+- */
+- if (likely(update_next_balance))
+- nohz.next_balance = next_balance;
+-
+ return ret;
+ }
+
+diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
+index ba81187bb7af..9079d865a935 100644
+--- a/kernel/sched/topology.c
++++ b/kernel/sched/topology.c
+@@ -1328,7 +1328,7 @@ sd_init(struct sched_domain_topology_level *tl,
+ sd_flags = (*tl->sd_flags)();
+ if (WARN_ONCE(sd_flags & ~TOPOLOGY_SD_FLAGS,
+ "wrong sd_flags in topology description\n"))
+- sd_flags &= ~TOPOLOGY_SD_FLAGS;
++ sd_flags &= TOPOLOGY_SD_FLAGS;
+
+ /* Apply detected topology flags */
+ sd_flags |= dflags;
+diff --git a/kernel/seccomp.c b/kernel/seccomp.c
+index d653d8426de9..c461ba992513 100644
+--- a/kernel/seccomp.c
++++ b/kernel/seccomp.c
+@@ -42,6 +42,14 @@
+ #include <linux/uaccess.h>
+ #include <linux/anon_inodes.h>
+
++/*
++ * When SECCOMP_IOCTL_NOTIF_ID_VALID was first introduced, it had the
++ * wrong direction flag in the ioctl number. This is the broken one,
++ * which the kernel needs to keep supporting until all userspaces stop
++ * using the wrong command number.
++ */
++#define SECCOMP_IOCTL_NOTIF_ID_VALID_WRONG_DIR SECCOMP_IOR(2, __u64)
++
+ enum notify_state {
+ SECCOMP_NOTIFY_INIT,
+ SECCOMP_NOTIFY_SENT,
+@@ -1186,6 +1194,7 @@ static long seccomp_notify_ioctl(struct file *file, unsigned int cmd,
+ return seccomp_notify_recv(filter, buf);
+ case SECCOMP_IOCTL_NOTIF_SEND:
+ return seccomp_notify_send(filter, buf);
++ case SECCOMP_IOCTL_NOTIF_ID_VALID_WRONG_DIR:
+ case SECCOMP_IOCTL_NOTIF_ID_VALID:
+ return seccomp_notify_id_valid(filter, buf);
+ default:
+diff --git a/kernel/signal.c b/kernel/signal.c
+index 6f16f7c5d375..42b67d2cea37 100644
+--- a/kernel/signal.c
++++ b/kernel/signal.c
+@@ -2541,7 +2541,21 @@ bool get_signal(struct ksignal *ksig)
+
+ relock:
+ spin_lock_irq(&sighand->siglock);
+- current->jobctl &= ~JOBCTL_TASK_WORK;
++ /*
++	 * Make sure we can safely read ->jobctl in task_work_add(). As Oleg
++ * states:
++ *
++ * It pairs with mb (implied by cmpxchg) before READ_ONCE. So we
++ * roughly have
++ *
++ * task_work_add: get_signal:
++ * STORE(task->task_works, new_work); STORE(task->jobctl);
++ * mb(); mb();
++ * LOAD(task->jobctl); LOAD(task->task_works);
++ *
++	 * and we can rely on STORE-MB-LOAD [in task_work_add].
++ */
++ smp_store_mb(current->jobctl, current->jobctl & ~JOBCTL_TASK_WORK);
+ if (unlikely(current->task_works)) {
+ spin_unlock_irq(&sighand->siglock);
+ task_work_run();
+diff --git a/kernel/task_work.c b/kernel/task_work.c
+index 5c0848ca1287..613b2d634af8 100644
+--- a/kernel/task_work.c
++++ b/kernel/task_work.c
+@@ -42,7 +42,13 @@ task_work_add(struct task_struct *task, struct callback_head *work, int notify)
+ set_notify_resume(task);
+ break;
+ case TWA_SIGNAL:
+- if (lock_task_sighand(task, &flags)) {
++ /*
++ * Only grab the sighand lock if we don't already have some
++ * task_work pending. This pairs with the smp_store_mb()
++ * in get_signal(), see comment there.
++ */
++ if (!(READ_ONCE(task->jobctl) & JOBCTL_TASK_WORK) &&
++ lock_task_sighand(task, &flags)) {
+ task->jobctl |= JOBCTL_TASK_WORK;
+ signal_wake_up(task, 0);
+ unlock_task_sighand(task, &flags);
+diff --git a/kernel/time/tick-sched.c b/kernel/time/tick-sched.c
+index 3e2dc9b8858c..f0199a4ba1ad 100644
+--- a/kernel/time/tick-sched.c
++++ b/kernel/time/tick-sched.c
+@@ -351,16 +351,24 @@ void tick_nohz_dep_clear_cpu(int cpu, enum tick_dep_bits bit)
+ EXPORT_SYMBOL_GPL(tick_nohz_dep_clear_cpu);
+
+ /*
+- * Set a per-task tick dependency. Posix CPU timers need this in order to elapse
+- * per task timers.
++ * Set a per-task tick dependency. RCU needs this. Posix CPU timers also
++ * need it in order to elapse per-task timers.
+ */
+ void tick_nohz_dep_set_task(struct task_struct *tsk, enum tick_dep_bits bit)
+ {
+- /*
+- * We could optimize this with just kicking the target running the task
+- * if that noise matters for nohz full users.
+- */
+- tick_nohz_dep_set_all(&tsk->tick_dep_mask, bit);
++ if (!atomic_fetch_or(BIT(bit), &tsk->tick_dep_mask)) {
++ if (tsk == current) {
++ preempt_disable();
++ tick_nohz_full_kick();
++ preempt_enable();
++ } else {
++ /*
++ * Some future tick_nohz_full_kick_task()
++ * should optimize this.
++ */
++ tick_nohz_full_kick_all();
++ }
++ }
+ }
+ EXPORT_SYMBOL_GPL(tick_nohz_dep_set_task);
+
+diff --git a/kernel/trace/blktrace.c b/kernel/trace/blktrace.c
+index 5ef0484513ec..588e8e396019 100644
+--- a/kernel/trace/blktrace.c
++++ b/kernel/trace/blktrace.c
+@@ -522,10 +522,18 @@ static int do_blk_trace_setup(struct request_queue *q, char *name, dev_t dev,
+ if (!bt->msg_data)
+ goto err;
+
+- ret = -ENOENT;
+-
+- dir = debugfs_lookup(buts->name, blk_debugfs_root);
+- if (!dir)
++#ifdef CONFIG_BLK_DEBUG_FS
++ /*
++ * When tracing whole make_request drivers (multiqueue) block devices,
++ * reuse the existing debugfs directory created by the block layer on
++	 * init. For request-based block devices, partition block devices,
++	 * and scsi-generic block devices, we create a temporary new debugfs
++ * directory that will be removed once the trace ends.
++ */
++ if (queue_is_mq(q) && bdev && bdev == bdev->bd_contains)
++ dir = q->debugfs_dir;
++ else
++#endif
+ bt->dir = dir = debugfs_create_dir(buts->name, blk_debugfs_root);
+
+ bt->dev = dev;
+@@ -563,8 +571,6 @@ static int do_blk_trace_setup(struct request_queue *q, char *name, dev_t dev,
+
+ ret = 0;
+ err:
+- if (dir && !bt->dir)
+- dput(dir);
+ if (ret)
+ blk_trace_free(bt);
+ return ret;
+diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
+index 1903b80db6eb..7d879fae3777 100644
+--- a/kernel/trace/ftrace.c
++++ b/kernel/trace/ftrace.c
+@@ -139,9 +139,6 @@ static inline void ftrace_ops_init(struct ftrace_ops *ops)
+ #endif
+ }
+
+-#define FTRACE_PID_IGNORE -1
+-#define FTRACE_PID_TRACE -2
+-
+ static void ftrace_pid_func(unsigned long ip, unsigned long parent_ip,
+ struct ftrace_ops *op, struct pt_regs *regs)
+ {
+diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
+index bb62269724d5..6fc6da55b94e 100644
+--- a/kernel/trace/trace.c
++++ b/kernel/trace/trace.c
+@@ -5887,7 +5887,7 @@ int tracing_set_tracer(struct trace_array *tr, const char *buf)
+ }
+
+ /* If trace pipe files are being read, we can't change the tracer */
+- if (tr->current_trace->ref) {
++ if (tr->trace_ref) {
+ ret = -EBUSY;
+ goto out;
+ }
+@@ -6103,7 +6103,7 @@ static int tracing_open_pipe(struct inode *inode, struct file *filp)
+
+ nonseekable_open(inode, filp);
+
+- tr->current_trace->ref++;
++ tr->trace_ref++;
+ out:
+ mutex_unlock(&trace_types_lock);
+ return ret;
+@@ -6122,7 +6122,7 @@ static int tracing_release_pipe(struct inode *inode, struct file *file)
+
+ mutex_lock(&trace_types_lock);
+
+- tr->current_trace->ref--;
++ tr->trace_ref--;
+
+ if (iter->trace->pipe_close)
+ iter->trace->pipe_close(iter);
+@@ -7424,7 +7424,7 @@ static int tracing_buffers_open(struct inode *inode, struct file *filp)
+
+ filp->private_data = info;
+
+- tr->current_trace->ref++;
++ tr->trace_ref++;
+
+ mutex_unlock(&trace_types_lock);
+
+@@ -7525,7 +7525,7 @@ static int tracing_buffers_release(struct inode *inode, struct file *file)
+
+ mutex_lock(&trace_types_lock);
+
+- iter->tr->current_trace->ref--;
++ iter->tr->trace_ref--;
+
+ __trace_array_put(iter->tr);
+
+@@ -8733,7 +8733,7 @@ static int __remove_instance(struct trace_array *tr)
+ int i;
+
+ /* Reference counter for a newly created trace array = 1. */
+- if (tr->ref > 1 || (tr->current_trace && tr->current_trace->ref))
++ if (tr->ref > 1 || (tr->current_trace && tr->trace_ref))
+ return -EBUSY;
+
+ list_del(&tr->list);
+diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
+index 13db4000af3f..610d21355526 100644
+--- a/kernel/trace/trace.h
++++ b/kernel/trace/trace.h
+@@ -356,6 +356,7 @@ struct trace_array {
+ struct trace_event_file *trace_marker_file;
+ cpumask_var_t tracing_cpumask; /* only trace on set CPUs */
+ int ref;
++ int trace_ref;
+ #ifdef CONFIG_FUNCTION_TRACER
+ struct ftrace_ops *ops;
+ struct trace_pid_list __rcu *function_pids;
+@@ -547,7 +548,6 @@ struct tracer {
+ struct tracer *next;
+ struct tracer_flags *flags;
+ int enabled;
+- int ref;
+ bool print_max;
+ bool allow_instances;
+ #ifdef CONFIG_TRACER_MAX_TRACE
+@@ -1103,6 +1103,10 @@ print_graph_function_flags(struct trace_iterator *iter, u32 flags)
+ extern struct list_head ftrace_pids;
+
+ #ifdef CONFIG_FUNCTION_TRACER
++
++#define FTRACE_PID_IGNORE -1
++#define FTRACE_PID_TRACE -2
++
+ struct ftrace_func_command {
+ struct list_head list;
+ char *name;
+@@ -1114,7 +1118,8 @@ struct ftrace_func_command {
+ extern bool ftrace_filter_param __initdata;
+ static inline int ftrace_trace_task(struct trace_array *tr)
+ {
+- return !this_cpu_read(tr->array_buffer.data->ftrace_ignore_pid);
++ return this_cpu_read(tr->array_buffer.data->ftrace_ignore_pid) !=
++ FTRACE_PID_IGNORE;
+ }
+ extern int ftrace_is_dead(void);
+ int ftrace_create_function_files(struct trace_array *tr,
+diff --git a/lib/crc-t10dif.c b/lib/crc-t10dif.c
+index 8cc01a603416..c9acf1c12cfc 100644
+--- a/lib/crc-t10dif.c
++++ b/lib/crc-t10dif.c
+@@ -19,39 +19,46 @@
+ static struct crypto_shash __rcu *crct10dif_tfm;
+ static struct static_key crct10dif_fallback __read_mostly;
+ static DEFINE_MUTEX(crc_t10dif_mutex);
++static struct work_struct crct10dif_rehash_work;
+
+-static int crc_t10dif_rehash(struct notifier_block *self, unsigned long val, void *data)
++static int crc_t10dif_notify(struct notifier_block *self, unsigned long val, void *data)
+ {
+ struct crypto_alg *alg = data;
+- struct crypto_shash *new, *old;
+
+ if (val != CRYPTO_MSG_ALG_LOADED ||
+ static_key_false(&crct10dif_fallback) ||
+ strncmp(alg->cra_name, CRC_T10DIF_STRING, strlen(CRC_T10DIF_STRING)))
+ return 0;
+
++ schedule_work(&crct10dif_rehash_work);
++ return 0;
++}
++
++static void crc_t10dif_rehash(struct work_struct *work)
++{
++ struct crypto_shash *new, *old;
++
+ mutex_lock(&crc_t10dif_mutex);
+ old = rcu_dereference_protected(crct10dif_tfm,
+ lockdep_is_held(&crc_t10dif_mutex));
+ if (!old) {
+ mutex_unlock(&crc_t10dif_mutex);
+- return 0;
++ return;
+ }
+ new = crypto_alloc_shash("crct10dif", 0, 0);
+ if (IS_ERR(new)) {
+ mutex_unlock(&crc_t10dif_mutex);
+- return 0;
++ return;
+ }
+ rcu_assign_pointer(crct10dif_tfm, new);
+ mutex_unlock(&crc_t10dif_mutex);
+
+ synchronize_rcu();
+ crypto_free_shash(old);
+- return 0;
+ }
+
+ static struct notifier_block crc_t10dif_nb = {
+- .notifier_call = crc_t10dif_rehash,
++ .notifier_call = crc_t10dif_notify,
+ };
+
+ __u16 crc_t10dif_update(__u16 crc, const unsigned char *buffer, size_t len)
+@@ -86,19 +93,26 @@ EXPORT_SYMBOL(crc_t10dif);
+
+ static int __init crc_t10dif_mod_init(void)
+ {
++ struct crypto_shash *tfm;
++
++ INIT_WORK(&crct10dif_rehash_work, crc_t10dif_rehash);
+ crypto_register_notifier(&crc_t10dif_nb);
+- crct10dif_tfm = crypto_alloc_shash("crct10dif", 0, 0);
+- if (IS_ERR(crct10dif_tfm)) {
++ mutex_lock(&crc_t10dif_mutex);
++ tfm = crypto_alloc_shash("crct10dif", 0, 0);
++ if (IS_ERR(tfm)) {
+ static_key_slow_inc(&crct10dif_fallback);
+- crct10dif_tfm = NULL;
++ tfm = NULL;
+ }
++ RCU_INIT_POINTER(crct10dif_tfm, tfm);
++ mutex_unlock(&crc_t10dif_mutex);
+ return 0;
+ }
+
+ static void __exit crc_t10dif_mod_fini(void)
+ {
+ crypto_unregister_notifier(&crc_t10dif_nb);
+- crypto_free_shash(crct10dif_tfm);
++ cancel_work_sync(&crct10dif_rehash_work);
++ crypto_free_shash(rcu_dereference_protected(crct10dif_tfm, 1));
+ }
+
+ module_init(crc_t10dif_mod_init);
+@@ -106,11 +120,27 @@ module_exit(crc_t10dif_mod_fini);
+
+ static int crc_t10dif_transform_show(char *buffer, const struct kernel_param *kp)
+ {
++ struct crypto_shash *tfm;
++ const char *name;
++ int len;
++
+ if (static_key_false(&crct10dif_fallback))
+ return sprintf(buffer, "fallback\n");
+
+- return sprintf(buffer, "%s\n",
+- crypto_tfm_alg_driver_name(crypto_shash_tfm(crct10dif_tfm)));
++ rcu_read_lock();
++ tfm = rcu_dereference(crct10dif_tfm);
++ if (!tfm) {
++ len = sprintf(buffer, "init\n");
++ goto unlock;
++ }
++
++ name = crypto_tfm_alg_driver_name(crypto_shash_tfm(tfm));
++ len = sprintf(buffer, "%s\n", name);
++
++unlock:
++ rcu_read_unlock();
++
++ return len;
+ }
+
+ module_param_call(transform, NULL, crc_t10dif_transform_show, NULL, 0644);
+diff --git a/lib/dynamic_debug.c b/lib/dynamic_debug.c
+index 321437bbf87d..98876a8255c7 100644
+--- a/lib/dynamic_debug.c
++++ b/lib/dynamic_debug.c
+@@ -87,22 +87,22 @@ static struct { unsigned flag:8; char opt_char; } opt_array[] = {
+ { _DPRINTK_FLAGS_NONE, '_' },
+ };
+
++struct flagsbuf { char buf[ARRAY_SIZE(opt_array)+1]; };
++
+ /* format a string into buf[] which describes the _ddebug's flags */
+-static char *ddebug_describe_flags(struct _ddebug *dp, char *buf,
+- size_t maxlen)
++static char *ddebug_describe_flags(unsigned int flags, struct flagsbuf *fb)
+ {
+- char *p = buf;
++ char *p = fb->buf;
+ int i;
+
+- BUG_ON(maxlen < 6);
+ for (i = 0; i < ARRAY_SIZE(opt_array); ++i)
+- if (dp->flags & opt_array[i].flag)
++ if (flags & opt_array[i].flag)
+ *p++ = opt_array[i].opt_char;
+- if (p == buf)
++ if (p == fb->buf)
+ *p++ = '_';
+ *p = '\0';
+
+- return buf;
++ return fb->buf;
+ }
+
+ #define vpr_info(fmt, ...) \
+@@ -144,7 +144,7 @@ static int ddebug_change(const struct ddebug_query *query,
+ struct ddebug_table *dt;
+ unsigned int newflags;
+ unsigned int nfound = 0;
+- char flagbuf[10];
++ struct flagsbuf fbuf;
+
+ /* search for matching ddebugs */
+ mutex_lock(&ddebug_lock);
+@@ -201,8 +201,7 @@ static int ddebug_change(const struct ddebug_query *query,
+ vpr_info("changed %s:%d [%s]%s =%s\n",
+ trim_prefix(dp->filename), dp->lineno,
+ dt->mod_name, dp->function,
+- ddebug_describe_flags(dp, flagbuf,
+- sizeof(flagbuf)));
++ ddebug_describe_flags(dp->flags, &fbuf));
+ }
+ }
+ mutex_unlock(&ddebug_lock);
+@@ -816,7 +815,7 @@ static int ddebug_proc_show(struct seq_file *m, void *p)
+ {
+ struct ddebug_iter *iter = m->private;
+ struct _ddebug *dp = p;
+- char flagsbuf[10];
++ struct flagsbuf flags;
+
+ vpr_info("called m=%p p=%p\n", m, p);
+
+@@ -829,7 +828,7 @@ static int ddebug_proc_show(struct seq_file *m, void *p)
+ seq_printf(m, "%s:%u [%s]%s =%s \"",
+ trim_prefix(dp->filename), dp->lineno,
+ iter->table->mod_name, dp->function,
+- ddebug_describe_flags(dp, flagsbuf, sizeof(flagsbuf)));
++ ddebug_describe_flags(dp->flags, &flags));
+ seq_escape(m, dp->format, "\t\r\n\"");
+ seq_puts(m, "\"\n");
+
+diff --git a/lib/kobject.c b/lib/kobject.c
+index 1e4b7382a88e..3afb939f2a1c 100644
+--- a/lib/kobject.c
++++ b/lib/kobject.c
+@@ -599,14 +599,7 @@ out:
+ }
+ EXPORT_SYMBOL_GPL(kobject_move);
+
+-/**
+- * kobject_del() - Unlink kobject from hierarchy.
+- * @kobj: object.
+- *
+- * This is the function that should be called to delete an object
+- * successfully added via kobject_add().
+- */
+-void kobject_del(struct kobject *kobj)
++static void __kobject_del(struct kobject *kobj)
+ {
+ struct kernfs_node *sd;
+ const struct kobj_type *ktype;
+@@ -632,9 +625,23 @@ void kobject_del(struct kobject *kobj)
+
+ kobj->state_in_sysfs = 0;
+ kobj_kset_leave(kobj);
+- kobject_put(kobj->parent);
+ kobj->parent = NULL;
+ }
++
++/**
++ * kobject_del() - Unlink kobject from hierarchy.
++ * @kobj: object.
++ *
++ * This is the function that should be called to delete an object
++ * successfully added via kobject_add().
++ */
++void kobject_del(struct kobject *kobj)
++{
++ struct kobject *parent = kobj->parent;
++
++ __kobject_del(kobj);
++ kobject_put(parent);
++}
+ EXPORT_SYMBOL(kobject_del);
+
+ /**
+@@ -670,6 +677,7 @@ EXPORT_SYMBOL(kobject_get_unless_zero);
+ */
+ static void kobject_cleanup(struct kobject *kobj)
+ {
++ struct kobject *parent = kobj->parent;
+ struct kobj_type *t = get_ktype(kobj);
+ const char *name = kobj->name;
+
+@@ -684,7 +692,10 @@ static void kobject_cleanup(struct kobject *kobj)
+ if (kobj->state_in_sysfs) {
+ pr_debug("kobject: '%s' (%p): auto cleanup kobject_del\n",
+ kobject_name(kobj), kobj);
+- kobject_del(kobj);
++ __kobject_del(kobj);
++ } else {
++ /* avoid dropping the parent reference unnecessarily */
++ parent = NULL;
+ }
+
+ if (t && t->release) {
+@@ -698,6 +709,8 @@ static void kobject_cleanup(struct kobject *kobj)
+ pr_debug("kobject: '%s': free name\n", name);
+ kfree_const(name);
+ }
++
++ kobject_put(parent);
+ }
+
+ #ifdef CONFIG_DEBUG_KOBJECT_RELEASE
+diff --git a/mm/mmap.c b/mm/mmap.c
+index 8c7ca737a19b..dcdab2675a21 100644
+--- a/mm/mmap.c
++++ b/mm/mmap.c
+@@ -3171,6 +3171,7 @@ void exit_mmap(struct mm_struct *mm)
+ if (vma->vm_flags & VM_ACCOUNT)
+ nr_accounted += vma_pages(vma);
+ vma = remove_vma(vma);
++ cond_resched();
+ }
+ vm_unacct_memory(nr_accounted);
+ }
+diff --git a/mm/vmstat.c b/mm/vmstat.c
+index 3fb23a21f6dd..7fb01d225337 100644
+--- a/mm/vmstat.c
++++ b/mm/vmstat.c
+@@ -1596,12 +1596,6 @@ static void zoneinfo_show_print(struct seq_file *m, pg_data_t *pgdat,
+ zone->present_pages,
+ zone_managed_pages(zone));
+
+- /* If unpopulated, no other information is useful */
+- if (!populated_zone(zone)) {
+- seq_putc(m, '\n');
+- return;
+- }
+-
+ seq_printf(m,
+ "\n protection: (%ld",
+ zone->lowmem_reserve[0]);
+@@ -1609,6 +1603,12 @@ static void zoneinfo_show_print(struct seq_file *m, pg_data_t *pgdat,
+ seq_printf(m, ", %ld", zone->lowmem_reserve[i]);
+ seq_putc(m, ')');
+
++ /* If unpopulated, no other information is useful */
++ if (!populated_zone(zone)) {
++ seq_putc(m, '\n');
++ return;
++ }
++
+ for (i = 0; i < NR_VM_ZONE_STAT_ITEMS; i++)
+ seq_printf(m, "\n %-12s %lu", zone_stat_name(i),
+ zone_page_state(zone, i));
+diff --git a/net/bluetooth/6lowpan.c b/net/bluetooth/6lowpan.c
+index bb55d92691b0..cff4944d5b66 100644
+--- a/net/bluetooth/6lowpan.c
++++ b/net/bluetooth/6lowpan.c
+@@ -50,6 +50,7 @@ static bool enable_6lowpan;
+ /* We are listening incoming connections via this channel
+ */
+ static struct l2cap_chan *listen_chan;
++static DEFINE_MUTEX(set_lock);
+
+ struct lowpan_peer {
+ struct list_head list;
+@@ -1078,12 +1079,14 @@ static void do_enable_set(struct work_struct *work)
+
+ enable_6lowpan = set_enable->flag;
+
++ mutex_lock(&set_lock);
+ if (listen_chan) {
+ l2cap_chan_close(listen_chan, 0);
+ l2cap_chan_put(listen_chan);
+ }
+
+ listen_chan = bt_6lowpan_listen();
++ mutex_unlock(&set_lock);
+
+ kfree(set_enable);
+ }
+@@ -1135,11 +1138,13 @@ static ssize_t lowpan_control_write(struct file *fp,
+ if (ret == -EINVAL)
+ return ret;
+
++ mutex_lock(&set_lock);
+ if (listen_chan) {
+ l2cap_chan_close(listen_chan, 0);
+ l2cap_chan_put(listen_chan);
+ listen_chan = NULL;
+ }
++ mutex_unlock(&set_lock);
+
+ if (conn) {
+ struct lowpan_peer *peer;
+diff --git a/net/bluetooth/hci_core.c b/net/bluetooth/hci_core.c
+index dbe2d79f233f..41fba93d857a 100644
+--- a/net/bluetooth/hci_core.c
++++ b/net/bluetooth/hci_core.c
+@@ -606,7 +606,8 @@ static int hci_init3_req(struct hci_request *req, unsigned long opt)
+ if (hdev->commands[8] & 0x01)
+ hci_req_add(req, HCI_OP_READ_PAGE_SCAN_ACTIVITY, 0, NULL);
+
+- if (hdev->commands[18] & 0x04)
++ if (hdev->commands[18] & 0x04 &&
++ !test_bit(HCI_QUIRK_BROKEN_ERR_DATA_REPORTING, &hdev->quirks))
+ hci_req_add(req, HCI_OP_READ_DEF_ERR_DATA_REPORTING, 0, NULL);
+
+ /* Some older Broadcom based Bluetooth 1.2 controllers do not
+@@ -851,7 +852,8 @@ static int hci_init4_req(struct hci_request *req, unsigned long opt)
+ /* Set erroneous data reporting if supported to the wideband speech
+ * setting value
+ */
+- if (hdev->commands[18] & 0x08) {
++ if (hdev->commands[18] & 0x08 &&
++ !test_bit(HCI_QUIRK_BROKEN_ERR_DATA_REPORTING, &hdev->quirks)) {
+ bool enabled = hci_dev_test_flag(hdev,
+ HCI_WIDEBAND_SPEECH_ENABLED);
+
+@@ -3289,10 +3291,10 @@ static int hci_suspend_wait_event(struct hci_dev *hdev)
+ WAKE_COND, SUSPEND_NOTIFIER_TIMEOUT);
+
+ if (ret == 0) {
+- bt_dev_dbg(hdev, "Timed out waiting for suspend");
++ bt_dev_err(hdev, "Timed out waiting for suspend events");
+ for (i = 0; i < __SUSPEND_NUM_TASKS; ++i) {
+ if (test_bit(i, hdev->suspend_tasks))
+- bt_dev_dbg(hdev, "Bit %d is set", i);
++ bt_dev_err(hdev, "Suspend timeout bit: %d", i);
+ clear_bit(i, hdev->suspend_tasks);
+ }
+
+@@ -3360,12 +3362,15 @@ static int hci_suspend_notifier(struct notifier_block *nb, unsigned long action,
+ ret = hci_change_suspend_state(hdev, BT_RUNNING);
+ }
+
+- /* If suspend failed, restore it to running */
+- if (ret && action == PM_SUSPEND_PREPARE)
+- hci_change_suspend_state(hdev, BT_RUNNING);
+-
+ done:
+- return ret ? notifier_from_errno(-EBUSY) : NOTIFY_STOP;
++ /* We always allow suspend even if suspend preparation failed and
++ * attempt to recover in resume.
++ */
++ if (ret)
++ bt_dev_err(hdev, "Suspend notifier action (%lu) failed: %d",
++ action, ret);
++
++ return NOTIFY_STOP;
+ }
+
+ /* Alloc HCI device */
+@@ -3603,9 +3608,10 @@ void hci_unregister_dev(struct hci_dev *hdev)
+
+ cancel_work_sync(&hdev->power_on);
+
+- hci_dev_do_close(hdev);
+-
+ unregister_pm_notifier(&hdev->suspend_notifier);
++ cancel_work_sync(&hdev->suspend_prepare);
++
++ hci_dev_do_close(hdev);
+
+ if (!test_bit(HCI_INIT, &hdev->flags) &&
+ !hci_dev_test_flag(hdev, HCI_SETUP) &&
+diff --git a/net/bpfilter/bpfilter_kern.c b/net/bpfilter/bpfilter_kern.c
+index 4494ea6056cd..42b88a92afe9 100644
+--- a/net/bpfilter/bpfilter_kern.c
++++ b/net/bpfilter/bpfilter_kern.c
+@@ -50,6 +50,7 @@ static int __bpfilter_process_sockopt(struct sock *sk, int optname,
+ req.len = optlen;
+ if (!bpfilter_ops.info.pid)
+ goto out;
++ pos = 0;
+ n = kernel_write(bpfilter_ops.info.pipe_to_umh, &req, sizeof(req),
+ &pos);
+ if (n != sizeof(req)) {
+diff --git a/net/core/sock.c b/net/core/sock.c
+index 2e5b7870e5d3..a14a8cb6ccca 100644
+--- a/net/core/sock.c
++++ b/net/core/sock.c
+@@ -3443,6 +3443,16 @@ static void sock_inuse_add(struct net *net, int val)
+ }
+ #endif
+
++static void tw_prot_cleanup(struct timewait_sock_ops *twsk_prot)
++{
++ if (!twsk_prot)
++ return;
++ kfree(twsk_prot->twsk_slab_name);
++ twsk_prot->twsk_slab_name = NULL;
++ kmem_cache_destroy(twsk_prot->twsk_slab);
++ twsk_prot->twsk_slab = NULL;
++}
++
+ static void req_prot_cleanup(struct request_sock_ops *rsk_prot)
+ {
+ if (!rsk_prot)
+@@ -3513,7 +3523,7 @@ int proto_register(struct proto *prot, int alloc_slab)
+ prot->slab_flags,
+ NULL);
+ if (prot->twsk_prot->twsk_slab == NULL)
+- goto out_free_timewait_sock_slab_name;
++ goto out_free_timewait_sock_slab;
+ }
+ }
+
+@@ -3521,15 +3531,15 @@ int proto_register(struct proto *prot, int alloc_slab)
+ ret = assign_proto_idx(prot);
+ if (ret) {
+ mutex_unlock(&proto_list_mutex);
+- goto out_free_timewait_sock_slab_name;
++ goto out_free_timewait_sock_slab;
+ }
+ list_add(&prot->node, &proto_list);
+ mutex_unlock(&proto_list_mutex);
+ return ret;
+
+-out_free_timewait_sock_slab_name:
++out_free_timewait_sock_slab:
+ if (alloc_slab && prot->twsk_prot)
+- kfree(prot->twsk_prot->twsk_slab_name);
++ tw_prot_cleanup(prot->twsk_prot);
+ out_free_request_sock_slab:
+ if (alloc_slab) {
+ req_prot_cleanup(prot->rsk_prot);
+@@ -3553,12 +3563,7 @@ void proto_unregister(struct proto *prot)
+ prot->slab = NULL;
+
+ req_prot_cleanup(prot->rsk_prot);
+-
+- if (prot->twsk_prot != NULL && prot->twsk_prot->twsk_slab != NULL) {
+- kmem_cache_destroy(prot->twsk_prot->twsk_slab);
+- kfree(prot->twsk_prot->twsk_slab_name);
+- prot->twsk_prot->twsk_slab = NULL;
+- }
++ tw_prot_cleanup(prot->twsk_prot);
+ }
+ EXPORT_SYMBOL(proto_unregister);
+
+diff --git a/net/ipv4/inet_connection_sock.c b/net/ipv4/inet_connection_sock.c
+index afaf582a5aa9..a1be020bde8e 100644
+--- a/net/ipv4/inet_connection_sock.c
++++ b/net/ipv4/inet_connection_sock.c
+@@ -296,6 +296,57 @@ static inline int sk_reuseport_match(struct inet_bind_bucket *tb,
+ ipv6_only_sock(sk), true, false);
+ }
+
++void inet_csk_update_fastreuse(struct inet_bind_bucket *tb,
++ struct sock *sk)
++{
++ kuid_t uid = sock_i_uid(sk);
++ bool reuse = sk->sk_reuse && sk->sk_state != TCP_LISTEN;
++
++ if (hlist_empty(&tb->owners)) {
++ tb->fastreuse = reuse;
++ if (sk->sk_reuseport) {
++ tb->fastreuseport = FASTREUSEPORT_ANY;
++ tb->fastuid = uid;
++ tb->fast_rcv_saddr = sk->sk_rcv_saddr;
++ tb->fast_ipv6_only = ipv6_only_sock(sk);
++ tb->fast_sk_family = sk->sk_family;
++#if IS_ENABLED(CONFIG_IPV6)
++ tb->fast_v6_rcv_saddr = sk->sk_v6_rcv_saddr;
++#endif
++ } else {
++ tb->fastreuseport = 0;
++ }
++ } else {
++ if (!reuse)
++ tb->fastreuse = 0;
++ if (sk->sk_reuseport) {
++ /* We didn't match or we don't have fastreuseport set on
++ * the tb, but we have sk_reuseport set on this socket
++ * and we know that there are no bind conflicts with
++ * this socket in this tb, so reset our tb's reuseport
++ * settings so that any subsequent sockets that match
++ * our current socket will be put on the fast path.
++ *
++ * If we reset we need to set FASTREUSEPORT_STRICT so we
++ * do extra checking for all subsequent sk_reuseport
++ * socks.
++ */
++ if (!sk_reuseport_match(tb, sk)) {
++ tb->fastreuseport = FASTREUSEPORT_STRICT;
++ tb->fastuid = uid;
++ tb->fast_rcv_saddr = sk->sk_rcv_saddr;
++ tb->fast_ipv6_only = ipv6_only_sock(sk);
++ tb->fast_sk_family = sk->sk_family;
++#if IS_ENABLED(CONFIG_IPV6)
++ tb->fast_v6_rcv_saddr = sk->sk_v6_rcv_saddr;
++#endif
++ }
++ } else {
++ tb->fastreuseport = 0;
++ }
++ }
++}
++
+ /* Obtain a reference to a local port for the given sock,
+ * if snum is zero it means select any available local port.
+ * We try to allocate an odd port (and leave even ports for connect())
+@@ -308,7 +359,6 @@ int inet_csk_get_port(struct sock *sk, unsigned short snum)
+ struct inet_bind_hashbucket *head;
+ struct net *net = sock_net(sk);
+ struct inet_bind_bucket *tb = NULL;
+- kuid_t uid = sock_i_uid(sk);
+ int l3mdev;
+
+ l3mdev = inet_sk_bound_l3mdev(sk);
+@@ -345,49 +395,8 @@ tb_found:
+ goto fail_unlock;
+ }
+ success:
+- if (hlist_empty(&tb->owners)) {
+- tb->fastreuse = reuse;
+- if (sk->sk_reuseport) {
+- tb->fastreuseport = FASTREUSEPORT_ANY;
+- tb->fastuid = uid;
+- tb->fast_rcv_saddr = sk->sk_rcv_saddr;
+- tb->fast_ipv6_only = ipv6_only_sock(sk);
+- tb->fast_sk_family = sk->sk_family;
+-#if IS_ENABLED(CONFIG_IPV6)
+- tb->fast_v6_rcv_saddr = sk->sk_v6_rcv_saddr;
+-#endif
+- } else {
+- tb->fastreuseport = 0;
+- }
+- } else {
+- if (!reuse)
+- tb->fastreuse = 0;
+- if (sk->sk_reuseport) {
+- /* We didn't match or we don't have fastreuseport set on
+- * the tb, but we have sk_reuseport set on this socket
+- * and we know that there are no bind conflicts with
+- * this socket in this tb, so reset our tb's reuseport
+- * settings so that any subsequent sockets that match
+- * our current socket will be put on the fast path.
+- *
+- * If we reset we need to set FASTREUSEPORT_STRICT so we
+- * do extra checking for all subsequent sk_reuseport
+- * socks.
+- */
+- if (!sk_reuseport_match(tb, sk)) {
+- tb->fastreuseport = FASTREUSEPORT_STRICT;
+- tb->fastuid = uid;
+- tb->fast_rcv_saddr = sk->sk_rcv_saddr;
+- tb->fast_ipv6_only = ipv6_only_sock(sk);
+- tb->fast_sk_family = sk->sk_family;
+-#if IS_ENABLED(CONFIG_IPV6)
+- tb->fast_v6_rcv_saddr = sk->sk_v6_rcv_saddr;
+-#endif
+- }
+- } else {
+- tb->fastreuseport = 0;
+- }
+- }
++ inet_csk_update_fastreuse(tb, sk);
++
+ if (!inet_csk(sk)->icsk_bind_hash)
+ inet_bind_hash(sk, tb, port);
+ WARN_ON(inet_csk(sk)->icsk_bind_hash != tb);
+diff --git a/net/ipv4/inet_hashtables.c b/net/ipv4/inet_hashtables.c
+index 2bbaaf0c7176..006a34b18537 100644
+--- a/net/ipv4/inet_hashtables.c
++++ b/net/ipv4/inet_hashtables.c
+@@ -163,6 +163,7 @@ int __inet_inherit_port(const struct sock *sk, struct sock *child)
+ return -ENOMEM;
+ }
+ }
++ inet_csk_update_fastreuse(tb, child);
+ }
+ inet_bind_hash(child, tb, port);
+ spin_unlock(&head->lock);
+diff --git a/net/ipv4/sysctl_net_ipv4.c b/net/ipv4/sysctl_net_ipv4.c
+index 5653e3b011bf..54023a46db04 100644
+--- a/net/ipv4/sysctl_net_ipv4.c
++++ b/net/ipv4/sysctl_net_ipv4.c
+@@ -301,24 +301,16 @@ static int proc_tcp_fastopen_key(struct ctl_table *table, int write,
+ struct ctl_table tbl = { .maxlen = ((TCP_FASTOPEN_KEY_LENGTH *
+ 2 * TCP_FASTOPEN_KEY_MAX) +
+ (TCP_FASTOPEN_KEY_MAX * 5)) };
+- struct tcp_fastopen_context *ctx;
+- u32 user_key[TCP_FASTOPEN_KEY_MAX * 4];
+- __le32 key[TCP_FASTOPEN_KEY_MAX * 4];
++ u32 user_key[TCP_FASTOPEN_KEY_BUF_LENGTH / sizeof(u32)];
++ __le32 key[TCP_FASTOPEN_KEY_BUF_LENGTH / sizeof(__le32)];
+ char *backup_data;
+- int ret, i = 0, off = 0, n_keys = 0;
++ int ret, i = 0, off = 0, n_keys;
+
+ tbl.data = kmalloc(tbl.maxlen, GFP_KERNEL);
+ if (!tbl.data)
+ return -ENOMEM;
+
+- rcu_read_lock();
+- ctx = rcu_dereference(net->ipv4.tcp_fastopen_ctx);
+- if (ctx) {
+- n_keys = tcp_fastopen_context_len(ctx);
+- memcpy(&key[0], &ctx->key[0], TCP_FASTOPEN_KEY_LENGTH * n_keys);
+- }
+- rcu_read_unlock();
+-
++ n_keys = tcp_fastopen_get_cipher(net, NULL, (u64 *)key);
+ if (!n_keys) {
+ memset(&key[0], 0, TCP_FASTOPEN_KEY_LENGTH);
+ n_keys = 1;
+diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
+index 6f0caf9a866d..30c1142584b1 100644
+--- a/net/ipv4/tcp.c
++++ b/net/ipv4/tcp.c
+@@ -3694,22 +3694,14 @@ static int do_tcp_getsockopt(struct sock *sk, int level,
+ return 0;
+
+ case TCP_FASTOPEN_KEY: {
+- __u8 key[TCP_FASTOPEN_KEY_BUF_LENGTH];
+- struct tcp_fastopen_context *ctx;
+- unsigned int key_len = 0;
++ u64 key[TCP_FASTOPEN_KEY_BUF_LENGTH / sizeof(u64)];
++ unsigned int key_len;
+
+ if (get_user(len, optlen))
+ return -EFAULT;
+
+- rcu_read_lock();
+- ctx = rcu_dereference(icsk->icsk_accept_queue.fastopenq.ctx);
+- if (ctx) {
+- key_len = tcp_fastopen_context_len(ctx) *
+- TCP_FASTOPEN_KEY_LENGTH;
+- memcpy(&key[0], &ctx->key[0], key_len);
+- }
+- rcu_read_unlock();
+-
++ key_len = tcp_fastopen_get_cipher(net, icsk, key) *
++ TCP_FASTOPEN_KEY_LENGTH;
+ len = min_t(unsigned int, len, key_len);
+ if (put_user(len, optlen))
+ return -EFAULT;
+diff --git a/net/ipv4/tcp_fastopen.c b/net/ipv4/tcp_fastopen.c
+index 19ad9586c720..1bb85821f1e6 100644
+--- a/net/ipv4/tcp_fastopen.c
++++ b/net/ipv4/tcp_fastopen.c
+@@ -108,6 +108,29 @@ out:
+ return err;
+ }
+
++int tcp_fastopen_get_cipher(struct net *net, struct inet_connection_sock *icsk,
++ u64 *key)
++{
++ struct tcp_fastopen_context *ctx;
++ int n_keys = 0, i;
++
++ rcu_read_lock();
++ if (icsk)
++ ctx = rcu_dereference(icsk->icsk_accept_queue.fastopenq.ctx);
++ else
++ ctx = rcu_dereference(net->ipv4.tcp_fastopen_ctx);
++ if (ctx) {
++ n_keys = tcp_fastopen_context_len(ctx);
++ for (i = 0; i < n_keys; i++) {
++ put_unaligned_le64(ctx->key[i].key[0], key + (i * 2));
++ put_unaligned_le64(ctx->key[i].key[1], key + (i * 2) + 1);
++ }
++ }
++ rcu_read_unlock();
++
++ return n_keys;
++}
++
+ static bool __tcp_fastopen_cookie_gen_cipher(struct request_sock *req,
+ struct sk_buff *syn,
+ const siphash_key_t *key,
+diff --git a/net/netfilter/ipvs/ip_vs_core.c b/net/netfilter/ipvs/ip_vs_core.c
+index aa6a603a2425..517f6a2ac15a 100644
+--- a/net/netfilter/ipvs/ip_vs_core.c
++++ b/net/netfilter/ipvs/ip_vs_core.c
+@@ -2066,14 +2066,14 @@ ip_vs_in(struct netns_ipvs *ipvs, unsigned int hooknum, struct sk_buff *skb, int
+
+ conn_reuse_mode = sysctl_conn_reuse_mode(ipvs);
+ if (conn_reuse_mode && !iph.fragoffs && is_new_conn(skb, &iph) && cp) {
+- bool uses_ct = false, resched = false;
++ bool old_ct = false, resched = false;
+
+ if (unlikely(sysctl_expire_nodest_conn(ipvs)) && cp->dest &&
+ unlikely(!atomic_read(&cp->dest->weight))) {
+ resched = true;
+- uses_ct = ip_vs_conn_uses_conntrack(cp, skb);
++ old_ct = ip_vs_conn_uses_old_conntrack(cp, skb);
+ } else if (is_new_conn_expected(cp, conn_reuse_mode)) {
+- uses_ct = ip_vs_conn_uses_conntrack(cp, skb);
++ old_ct = ip_vs_conn_uses_old_conntrack(cp, skb);
+ if (!atomic_read(&cp->n_control)) {
+ resched = true;
+ } else {
+@@ -2081,15 +2081,17 @@ ip_vs_in(struct netns_ipvs *ipvs, unsigned int hooknum, struct sk_buff *skb, int
+ * that uses conntrack while it is still
+ * referenced by controlled connection(s).
+ */
+- resched = !uses_ct;
++ resched = !old_ct;
+ }
+ }
+
+ if (resched) {
++ if (!old_ct)
++ cp->flags &= ~IP_VS_CONN_F_NFCT;
+ if (!atomic_read(&cp->n_control))
+ ip_vs_conn_expire_now(cp);
+ __ip_vs_conn_put(cp);
+- if (uses_ct)
++ if (old_ct)
+ return NF_DROP;
+ cp = NULL;
+ }
+diff --git a/net/netfilter/nft_meta.c b/net/netfilter/nft_meta.c
+index 951b6e87ed5d..7bc6537f3ccb 100644
+--- a/net/netfilter/nft_meta.c
++++ b/net/netfilter/nft_meta.c
+@@ -253,7 +253,7 @@ static bool nft_meta_get_eval_ifname(enum nft_meta_keys key, u32 *dest,
+ return false;
+ break;
+ case NFT_META_IIFGROUP:
+- if (!nft_meta_store_ifgroup(dest, nft_out(pkt)))
++ if (!nft_meta_store_ifgroup(dest, nft_in(pkt)))
+ return false;
+ break;
+ case NFT_META_OIFGROUP:
+diff --git a/net/nfc/rawsock.c b/net/nfc/rawsock.c
+index ba5ffd3badd3..b5c867fe3232 100644
+--- a/net/nfc/rawsock.c
++++ b/net/nfc/rawsock.c
+@@ -332,10 +332,13 @@ static int rawsock_create(struct net *net, struct socket *sock,
+ if ((sock->type != SOCK_SEQPACKET) && (sock->type != SOCK_RAW))
+ return -ESOCKTNOSUPPORT;
+
+- if (sock->type == SOCK_RAW)
++ if (sock->type == SOCK_RAW) {
++ if (!capable(CAP_NET_RAW))
++ return -EPERM;
+ sock->ops = &rawsock_raw_ops;
+- else
++ } else {
+ sock->ops = &rawsock_ops;
++ }
+
+ sk = sk_alloc(net, PF_NFC, GFP_ATOMIC, nfc_proto->proto, kern);
+ if (!sk)
+diff --git a/net/packet/af_packet.c b/net/packet/af_packet.c
+index 29bd405adbbd..301f41d4929b 100644
+--- a/net/packet/af_packet.c
++++ b/net/packet/af_packet.c
+@@ -942,6 +942,7 @@ static int prb_queue_frozen(struct tpacket_kbdq_core *pkc)
+ }
+
+ static void prb_clear_blk_fill_status(struct packet_ring_buffer *rb)
++ __releases(&pkc->blk_fill_in_prog_lock)
+ {
+ struct tpacket_kbdq_core *pkc = GET_PBDQC_FROM_RB(rb);
+ atomic_dec(&pkc->blk_fill_in_prog);
+@@ -989,6 +990,7 @@ static void prb_fill_curr_block(char *curr,
+ struct tpacket_kbdq_core *pkc,
+ struct tpacket_block_desc *pbd,
+ unsigned int len)
++ __acquires(&pkc->blk_fill_in_prog_lock)
+ {
+ struct tpacket3_hdr *ppd;
+
+@@ -2286,8 +2288,11 @@ static int tpacket_rcv(struct sk_buff *skb, struct net_device *dev,
+ if (do_vnet &&
+ virtio_net_hdr_from_skb(skb, h.raw + macoff -
+ sizeof(struct virtio_net_hdr),
+- vio_le(), true, 0))
++ vio_le(), true, 0)) {
++ if (po->tp_version == TPACKET_V3)
++ prb_clear_blk_fill_status(&po->rx_ring);
+ goto drop_n_account;
++ }
+
+ if (po->tp_version <= TPACKET_V2) {
+ packet_increment_rx_head(po, &po->rx_ring);
+@@ -2393,7 +2398,7 @@ static int tpacket_rcv(struct sk_buff *skb, struct net_device *dev,
+ __clear_bit(slot_id, po->rx_ring.rx_owner_map);
+ spin_unlock(&sk->sk_receive_queue.lock);
+ sk->sk_data_ready(sk);
+- } else {
++ } else if (po->tp_version == TPACKET_V3) {
+ prb_clear_blk_fill_status(&po->rx_ring);
+ }
+
+diff --git a/net/socket.c b/net/socket.c
+index 976426d03f09..481fd5f25669 100644
+--- a/net/socket.c
++++ b/net/socket.c
+@@ -500,7 +500,7 @@ static struct socket *sockfd_lookup_light(int fd, int *err, int *fput_needed)
+ if (f.file) {
+ sock = sock_from_file(f.file, err);
+ if (likely(sock)) {
+- *fput_needed = f.flags;
++ *fput_needed = f.flags & FDPUT_FPUT;
+ return sock;
+ }
+ fdput(f);
+diff --git a/net/sunrpc/auth_gss/gss_krb5_wrap.c b/net/sunrpc/auth_gss/gss_krb5_wrap.c
+index cf0fd170ac18..90b8329fef82 100644
+--- a/net/sunrpc/auth_gss/gss_krb5_wrap.c
++++ b/net/sunrpc/auth_gss/gss_krb5_wrap.c
+@@ -584,7 +584,7 @@ gss_unwrap_kerberos_v2(struct krb5_ctx *kctx, int offset, int len,
+ buf->head[0].iov_len);
+ memmove(ptr, ptr + GSS_KRB5_TOK_HDR_LEN + headskip, movelen);
+ buf->head[0].iov_len -= GSS_KRB5_TOK_HDR_LEN + headskip;
+- buf->len = len - GSS_KRB5_TOK_HDR_LEN + headskip;
++ buf->len = len - (GSS_KRB5_TOK_HDR_LEN + headskip);
+
+ /* Trim off the trailing "extra count" and checksum blob */
+ xdr_buf_trim(buf, ec + GSS_KRB5_TOK_HDR_LEN + tailskip);
+diff --git a/net/sunrpc/auth_gss/svcauth_gss.c b/net/sunrpc/auth_gss/svcauth_gss.c
+index 46027d0c903f..c28051f7d217 100644
+--- a/net/sunrpc/auth_gss/svcauth_gss.c
++++ b/net/sunrpc/auth_gss/svcauth_gss.c
+@@ -958,7 +958,6 @@ unwrap_priv_data(struct svc_rqst *rqstp, struct xdr_buf *buf, u32 seq, struct gs
+
+ maj_stat = gss_unwrap(ctx, 0, priv_len, buf);
+ pad = priv_len - buf->len;
+- buf->len -= pad;
+ /* The upper layers assume the buffer is aligned on 4-byte boundaries.
+ * In the krb5p case, at least, the data ends up offset, so we need to
+ * move it around. */
+diff --git a/net/sunrpc/xprtrdma/svc_rdma_rw.c b/net/sunrpc/xprtrdma/svc_rdma_rw.c
+index 5eb35309ecef..83806fa94def 100644
+--- a/net/sunrpc/xprtrdma/svc_rdma_rw.c
++++ b/net/sunrpc/xprtrdma/svc_rdma_rw.c
+@@ -684,7 +684,6 @@ static int svc_rdma_build_read_chunk(struct svc_rqst *rqstp,
+ struct svc_rdma_read_info *info,
+ __be32 *p)
+ {
+- unsigned int i;
+ int ret;
+
+ ret = -EINVAL;
+@@ -707,12 +706,6 @@ static int svc_rdma_build_read_chunk(struct svc_rqst *rqstp,
+ info->ri_chunklen += rs_length;
+ }
+
+- /* Pages under I/O have been copied to head->rc_pages.
+- * Prevent their premature release by svc_xprt_release() .
+- */
+- for (i = 0; i < info->ri_readctxt->rc_page_count; i++)
+- rqstp->rq_pages[i] = NULL;
+-
+ return ret;
+ }
+
+@@ -807,6 +800,26 @@ out:
+ return ret;
+ }
+
++/* Pages under I/O have been copied to head->rc_pages. Ensure they
++ * are not released by svc_xprt_release() until the I/O is complete.
++ *
++ * This has to be done after all Read WRs are constructed to properly
++ * handle a page that is part of I/O on behalf of two different RDMA
++ * segments.
++ *
++ * Do this only if I/O has been posted. Otherwise, we do indeed want
++ * svc_xprt_release() to clean things up properly.
++ */
++static void svc_rdma_save_io_pages(struct svc_rqst *rqstp,
++ const unsigned int start,
++ const unsigned int num_pages)
++{
++ unsigned int i;
++
++ for (i = start; i < num_pages + start; i++)
++ rqstp->rq_pages[i] = NULL;
++}
++
+ /**
+ * svc_rdma_recv_read_chunk - Pull a Read chunk from the client
+ * @rdma: controlling RDMA transport
+@@ -860,6 +873,7 @@ int svc_rdma_recv_read_chunk(struct svcxprt_rdma *rdma, struct svc_rqst *rqstp,
+ ret = svc_rdma_post_chunk_ctxt(&info->ri_cc);
+ if (ret < 0)
+ goto out_err;
++ svc_rdma_save_io_pages(rqstp, 0, head->rc_page_count);
+ return 0;
+
+ out_err:
+diff --git a/net/tls/tls_device.c b/net/tls/tls_device.c
+index 0e55f8365ce2..0cbad566f281 100644
+--- a/net/tls/tls_device.c
++++ b/net/tls/tls_device.c
+@@ -561,7 +561,7 @@ int tls_device_sendpage(struct sock *sk, struct page *page,
+ {
+ struct tls_context *tls_ctx = tls_get_ctx(sk);
+ struct iov_iter msg_iter;
+- char *kaddr = kmap(page);
++ char *kaddr;
+ struct kvec iov;
+ int rc;
+
+@@ -576,6 +576,7 @@ int tls_device_sendpage(struct sock *sk, struct page *page,
+ goto out;
+ }
+
++ kaddr = kmap(page);
+ iov.iov_base = kaddr + offset;
+ iov.iov_len = size;
+ iov_iter_kvec(&msg_iter, WRITE, &iov, 1, size);
+diff --git a/net/vmw_vsock/af_vsock.c b/net/vmw_vsock/af_vsock.c
+index 626bf9044418..6cd0df1c5caf 100644
+--- a/net/vmw_vsock/af_vsock.c
++++ b/net/vmw_vsock/af_vsock.c
+@@ -1032,7 +1032,7 @@ static __poll_t vsock_poll(struct file *file, struct socket *sock,
+ }
+
+ /* Connected sockets that can produce data can be written. */
+- if (sk->sk_state == TCP_ESTABLISHED) {
++ if (transport && sk->sk_state == TCP_ESTABLISHED) {
+ if (!(sk->sk_shutdown & SEND_SHUTDOWN)) {
+ bool space_avail_now = false;
+ int ret = transport->notify_poll_out(
+diff --git a/samples/bpf/fds_example.c b/samples/bpf/fds_example.c
+index d5992f787232..59f45fef5110 100644
+--- a/samples/bpf/fds_example.c
++++ b/samples/bpf/fds_example.c
+@@ -30,6 +30,8 @@
+ #define BPF_M_MAP 1
+ #define BPF_M_PROG 2
+
++char bpf_log_buf[BPF_LOG_BUF_SIZE];
++
+ static void usage(void)
+ {
+ printf("Usage: fds_example [...]\n");
+@@ -57,7 +59,6 @@ static int bpf_prog_create(const char *object)
+ BPF_EXIT_INSN(),
+ };
+ size_t insns_cnt = sizeof(insns) / sizeof(struct bpf_insn);
+- char bpf_log_buf[BPF_LOG_BUF_SIZE];
+ struct bpf_object *obj;
+ int prog_fd;
+
+diff --git a/samples/bpf/map_perf_test_kern.c b/samples/bpf/map_perf_test_kern.c
+index 12e91ae64d4d..c9b31193ca12 100644
+--- a/samples/bpf/map_perf_test_kern.c
++++ b/samples/bpf/map_perf_test_kern.c
+@@ -11,6 +11,8 @@
+ #include <bpf/bpf_helpers.h>
+ #include "bpf_legacy.h"
+ #include <bpf/bpf_tracing.h>
++#include <bpf/bpf_core_read.h>
++#include "trace_common.h"
+
+ #define MAX_ENTRIES 1000
+ #define MAX_NR_CPUS 1024
+@@ -154,9 +156,10 @@ int stress_percpu_hmap_alloc(struct pt_regs *ctx)
+ return 0;
+ }
+
+-SEC("kprobe/sys_connect")
++SEC("kprobe/" SYSCALL(sys_connect))
+ int stress_lru_hmap_alloc(struct pt_regs *ctx)
+ {
++ struct pt_regs *real_regs = (struct pt_regs *)PT_REGS_PARM1_CORE(ctx);
+ char fmt[] = "Failed at stress_lru_hmap_alloc. ret:%d\n";
+ union {
+ u16 dst6[8];
+@@ -175,8 +178,8 @@ int stress_lru_hmap_alloc(struct pt_regs *ctx)
+ long val = 1;
+ u32 key = 0;
+
+- in6 = (struct sockaddr_in6 *)PT_REGS_PARM2(ctx);
+- addrlen = (int)PT_REGS_PARM3(ctx);
++ in6 = (struct sockaddr_in6 *)PT_REGS_PARM2_CORE(real_regs);
++ addrlen = (int)PT_REGS_PARM3_CORE(real_regs);
+
+ if (addrlen != sizeof(*in6))
+ return 0;
+diff --git a/samples/bpf/test_map_in_map_kern.c b/samples/bpf/test_map_in_map_kern.c
+index 6cee61e8ce9b..36a203e69064 100644
+--- a/samples/bpf/test_map_in_map_kern.c
++++ b/samples/bpf/test_map_in_map_kern.c
+@@ -13,6 +13,8 @@
+ #include <bpf/bpf_helpers.h>
+ #include "bpf_legacy.h"
+ #include <bpf/bpf_tracing.h>
++#include <bpf/bpf_core_read.h>
++#include "trace_common.h"
+
+ #define MAX_NR_PORTS 65536
+
+@@ -102,9 +104,10 @@ static __always_inline int do_inline_hash_lookup(void *inner_map, u32 port)
+ return result ? *result : -ENOENT;
+ }
+
+-SEC("kprobe/sys_connect")
++SEC("kprobe/" SYSCALL(sys_connect))
+ int trace_sys_connect(struct pt_regs *ctx)
+ {
++ struct pt_regs *real_regs = (struct pt_regs *)PT_REGS_PARM1_CORE(ctx);
+ struct sockaddr_in6 *in6;
+ u16 test_case, port, dst6[8];
+ int addrlen, ret, inline_ret, ret_key = 0;
+@@ -112,8 +115,8 @@ int trace_sys_connect(struct pt_regs *ctx)
+ void *outer_map, *inner_map;
+ bool inline_hash = false;
+
+- in6 = (struct sockaddr_in6 *)PT_REGS_PARM2(ctx);
+- addrlen = (int)PT_REGS_PARM3(ctx);
++ in6 = (struct sockaddr_in6 *)PT_REGS_PARM2_CORE(real_regs);
++ addrlen = (int)PT_REGS_PARM3_CORE(real_regs);
+
+ if (addrlen != sizeof(*in6))
+ return 0;
+diff --git a/samples/bpf/test_probe_write_user_kern.c b/samples/bpf/test_probe_write_user_kern.c
+index f033f36a13a3..fd651a65281e 100644
+--- a/samples/bpf/test_probe_write_user_kern.c
++++ b/samples/bpf/test_probe_write_user_kern.c
+@@ -10,6 +10,8 @@
+ #include <linux/version.h>
+ #include <bpf/bpf_helpers.h>
+ #include <bpf/bpf_tracing.h>
++#include <bpf/bpf_core_read.h>
++#include "trace_common.h"
+
+ struct bpf_map_def SEC("maps") dnat_map = {
+ .type = BPF_MAP_TYPE_HASH,
+@@ -26,13 +28,14 @@ struct bpf_map_def SEC("maps") dnat_map = {
+ * This example sits on a syscall, and the syscall ABI is relatively stable
+ * of course, across platforms, and over time, the ABI may change.
+ */
+-SEC("kprobe/sys_connect")
++SEC("kprobe/" SYSCALL(sys_connect))
+ int bpf_prog1(struct pt_regs *ctx)
+ {
++ struct pt_regs *real_regs = (struct pt_regs *)PT_REGS_PARM1_CORE(ctx);
++ void *sockaddr_arg = (void *)PT_REGS_PARM2_CORE(real_regs);
++ int sockaddr_len = (int)PT_REGS_PARM3_CORE(real_regs);
+ struct sockaddr_in new_addr, orig_addr = {};
+ struct sockaddr_in *mapped_addr;
+- void *sockaddr_arg = (void *)PT_REGS_PARM2(ctx);
+- int sockaddr_len = (int)PT_REGS_PARM3(ctx);
+
+ if (sockaddr_len > sizeof(orig_addr))
+ return 0;
+diff --git a/scripts/recordmcount.c b/scripts/recordmcount.c
+index 7225107a9aaf..e59022b3f125 100644
+--- a/scripts/recordmcount.c
++++ b/scripts/recordmcount.c
+@@ -434,6 +434,11 @@ static int arm_is_fake_mcount(Elf32_Rel const *rp)
+ return 1;
+ }
+
++static int arm64_is_fake_mcount(Elf64_Rel const *rp)
++{
++ return ELF64_R_TYPE(w(rp->r_info)) != R_AARCH64_CALL26;
++}
++
+ /* 64-bit EM_MIPS has weird ELF64_Rela.r_info.
+ * http://techpubs.sgi.com/library/manuals/4000/007-4658-001/pdf/007-4658-001.pdf
+ * We interpret Table 29 Relocation Operation (Elf64_Rel, Elf64_Rela) [p.40]
+@@ -547,6 +552,7 @@ static int do_file(char const *const fname)
+ make_nop = make_nop_arm64;
+ rel_type_nop = R_AARCH64_NONE;
+ ideal_nop = ideal_nop4_arm64;
++ is_fake_mcount64 = arm64_is_fake_mcount;
+ break;
+ case EM_IA_64: reltype = R_IA64_IMM64; break;
+ case EM_MIPS: /* reltype: e_class */ break;
+diff --git a/scripts/selinux/mdp/mdp.c b/scripts/selinux/mdp/mdp.c
+index 576d11a60417..6ceb88eb9b59 100644
+--- a/scripts/selinux/mdp/mdp.c
++++ b/scripts/selinux/mdp/mdp.c
+@@ -67,8 +67,14 @@ int main(int argc, char *argv[])
+
+ initial_sid_to_string_len = sizeof(initial_sid_to_string) / sizeof (char *);
+ /* print out the sids */
+- for (i = 1; i < initial_sid_to_string_len; i++)
+- fprintf(fout, "sid %s\n", initial_sid_to_string[i]);
++ for (i = 1; i < initial_sid_to_string_len; i++) {
++ const char *name = initial_sid_to_string[i];
++
++ if (name)
++ fprintf(fout, "sid %s\n", name);
++ else
++ fprintf(fout, "sid unused%d\n", i);
++ }
+ fprintf(fout, "\n");
+
+ /* print out the class permissions */
+@@ -126,9 +132,16 @@ int main(int argc, char *argv[])
+ #define OBJUSERROLETYPE "user_u:object_r:base_t"
+
+ /* default sids */
+- for (i = 1; i < initial_sid_to_string_len; i++)
+- fprintf(fout, "sid %s " SUBJUSERROLETYPE "%s\n",
+- initial_sid_to_string[i], mls ? ":" SYSTEMLOW : "");
++ for (i = 1; i < initial_sid_to_string_len; i++) {
++ const char *name = initial_sid_to_string[i];
++
++ if (name)
++ fprintf(fout, "sid %s ", name);
++ else
++ fprintf(fout, "sid unused%d\n", i);
++ fprintf(fout, SUBJUSERROLETYPE "%s\n",
++ mls ? ":" SYSTEMLOW : "");
++ }
+ fprintf(fout, "\n");
+
+ #define FS_USE(behavior, fstype) \
+diff --git a/security/integrity/ima/ima.h b/security/integrity/ima/ima.h
+index 9d94080bdad8..f0748f8ca47e 100644
+--- a/security/integrity/ima/ima.h
++++ b/security/integrity/ima/ima.h
+@@ -404,6 +404,7 @@ static inline void ima_free_modsig(struct modsig *modsig)
+ #ifdef CONFIG_IMA_LSM_RULES
+
+ #define security_filter_rule_init security_audit_rule_init
++#define security_filter_rule_free security_audit_rule_free
+ #define security_filter_rule_match security_audit_rule_match
+
+ #else
+@@ -414,6 +415,10 @@ static inline int security_filter_rule_init(u32 field, u32 op, char *rulestr,
+ return -EINVAL;
+ }
+
++static inline void security_filter_rule_free(void *lsmrule)
++{
++}
++
+ static inline int security_filter_rule_match(u32 secid, u32 field, u32 op,
+ void *lsmrule)
+ {
+diff --git a/security/integrity/ima/ima_policy.c b/security/integrity/ima/ima_policy.c
+index e493063a3c34..3e3e568c8130 100644
+--- a/security/integrity/ima/ima_policy.c
++++ b/security/integrity/ima/ima_policy.c
+@@ -258,9 +258,24 @@ static void ima_lsm_free_rule(struct ima_rule_entry *entry)
+ int i;
+
+ for (i = 0; i < MAX_LSM_RULES; i++) {
+- kfree(entry->lsm[i].rule);
++ security_filter_rule_free(entry->lsm[i].rule);
+ kfree(entry->lsm[i].args_p);
+ }
++}
++
++static void ima_free_rule(struct ima_rule_entry *entry)
++{
++ if (!entry)
++ return;
++
++ /*
++ * entry->template->fields may be allocated in ima_parse_rule() but that
++ * reference is owned by the corresponding ima_template_desc element in
++ * the defined_templates list and cannot be freed here
++ */
++ kfree(entry->fsname);
++ kfree(entry->keyrings);
++ ima_lsm_free_rule(entry);
+ kfree(entry);
+ }
+
+@@ -302,6 +317,7 @@ static struct ima_rule_entry *ima_lsm_copy_rule(struct ima_rule_entry *entry)
+
+ out_err:
+ ima_lsm_free_rule(nentry);
++ kfree(nentry);
+ return NULL;
+ }
+
+@@ -315,11 +331,29 @@ static int ima_lsm_update_rule(struct ima_rule_entry *entry)
+
+ list_replace_rcu(&entry->list, &nentry->list);
+ synchronize_rcu();
++ /*
++ * ima_lsm_copy_rule() shallow copied all references, except for the
++ * LSM references, from entry to nentry so we only want to free the LSM
++ * references and the entry itself. All other memory references will now
++ * be owned by nentry.
++ */
+ ima_lsm_free_rule(entry);
++ kfree(entry);
+
+ return 0;
+ }
+
++static bool ima_rule_contains_lsm_cond(struct ima_rule_entry *entry)
++{
++ int i;
++
++ for (i = 0; i < MAX_LSM_RULES; i++)
++ if (entry->lsm[i].args_p)
++ return true;
++
++ return false;
++}
++
+ /*
+ * The LSM policy can be reloaded, leaving the IMA LSM based rules referring
+ * to the old, stale LSM policy. Update the IMA LSM based rules to reflect
+@@ -890,6 +924,7 @@ static int ima_lsm_rule_init(struct ima_rule_entry *entry,
+
+ if (ima_rules == &ima_default_rules) {
+ kfree(entry->lsm[lsm_rule].args_p);
++ entry->lsm[lsm_rule].args_p = NULL;
+ result = -EINVAL;
+ } else
+ result = 0;
+@@ -949,6 +984,60 @@ static void check_template_modsig(const struct ima_template_desc *template)
+ #undef MSG
+ }
+
++static bool ima_validate_rule(struct ima_rule_entry *entry)
++{
++ /* Ensure that the action is set */
++ if (entry->action == UNKNOWN)
++ return false;
++
++ /*
++ * Ensure that the hook function is compatible with the other
++ * components of the rule
++ */
++ switch (entry->func) {
++ case NONE:
++ case FILE_CHECK:
++ case MMAP_CHECK:
++ case BPRM_CHECK:
++ case CREDS_CHECK:
++ case POST_SETATTR:
++ case MODULE_CHECK:
++ case FIRMWARE_CHECK:
++ case KEXEC_KERNEL_CHECK:
++ case KEXEC_INITRAMFS_CHECK:
++ case POLICY_CHECK:
++ /* Validation of these hook functions is in ima_parse_rule() */
++ break;
++ case KEXEC_CMDLINE:
++ if (entry->action & ~(MEASURE | DONT_MEASURE))
++ return false;
++
++ if (entry->flags & ~(IMA_FUNC | IMA_PCR))
++ return false;
++
++ if (ima_rule_contains_lsm_cond(entry))
++ return false;
++
++ break;
++ case KEY_CHECK:
++ if (entry->action & ~(MEASURE | DONT_MEASURE))
++ return false;
++
++ if (entry->flags & ~(IMA_FUNC | IMA_UID | IMA_PCR |
++ IMA_KEYRINGS))
++ return false;
++
++ if (ima_rule_contains_lsm_cond(entry))
++ return false;
++
++ break;
++ default:
++ return false;
++ }
++
++ return true;
++}
++
+ static int ima_parse_rule(char *rule, struct ima_rule_entry *entry)
+ {
+ struct audit_buffer *ab;
+@@ -1126,7 +1215,6 @@ static int ima_parse_rule(char *rule, struct ima_rule_entry *entry)
+ keyrings_len = strlen(args[0].from) + 1;
+
+ if ((entry->keyrings) ||
+- (entry->action != MEASURE) ||
+ (entry->func != KEY_CHECK) ||
+ (keyrings_len < 2)) {
+ result = -EINVAL;
+@@ -1332,7 +1420,7 @@ static int ima_parse_rule(char *rule, struct ima_rule_entry *entry)
+ break;
+ }
+ }
+- if (!result && (entry->action == UNKNOWN))
++ if (!result && !ima_validate_rule(entry))
+ result = -EINVAL;
+ else if (entry->action == APPRAISE)
+ temp_ima_appraise |= ima_appraise_flag(entry->func);
+@@ -1381,7 +1469,7 @@ ssize_t ima_parse_add_rule(char *rule)
+
+ result = ima_parse_rule(p, entry);
+ if (result) {
+- kfree(entry);
++ ima_free_rule(entry);
+ integrity_audit_msg(AUDIT_INTEGRITY_STATUS, NULL,
+ NULL, op, "invalid-policy", result,
+ audit_info);
+@@ -1402,15 +1490,11 @@ ssize_t ima_parse_add_rule(char *rule)
+ void ima_delete_rules(void)
+ {
+ struct ima_rule_entry *entry, *tmp;
+- int i;
+
+ temp_ima_appraise = 0;
+ list_for_each_entry_safe(entry, tmp, &ima_temp_rules, list) {
+- for (i = 0; i < MAX_LSM_RULES; i++)
+- kfree(entry->lsm[i].args_p);
+-
+ list_del(&entry->list);
+- kfree(entry);
++ ima_free_rule(entry);
+ }
+ }
+
+diff --git a/security/smack/smackfs.c b/security/smack/smackfs.c
+index 840a192e9337..9c4308077574 100644
+--- a/security/smack/smackfs.c
++++ b/security/smack/smackfs.c
+@@ -884,7 +884,7 @@ static ssize_t smk_set_cipso(struct file *file, const char __user *buf,
+ }
+
+ ret = sscanf(rule, "%d", &maplevel);
+- if (ret != 1 || maplevel > SMACK_CIPSO_MAXLEVEL)
++ if (ret != 1 || maplevel < 0 || maplevel > SMACK_CIPSO_MAXLEVEL)
+ goto out;
+
+ rule += SMK_DIGITLEN;
+@@ -905,6 +905,10 @@ static ssize_t smk_set_cipso(struct file *file, const char __user *buf,
+
+ for (i = 0; i < catlen; i++) {
+ rule += SMK_DIGITLEN;
++ if (rule > data + count) {
++ rc = -EOVERFLOW;
++ goto out;
++ }
+ ret = sscanf(rule, "%u", &cat);
+ if (ret != 1 || cat > SMACK_CIPSO_MAXCATNUM)
+ goto out;
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 417c8e17d839..00d155b98c1d 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -4118,7 +4118,7 @@ static int micmute_led_set(struct led_classdev *led_cdev,
+ struct alc_spec *spec = codec->spec;
+
+ alc_update_gpio_led(codec, spec->gpio_mic_led_mask,
+- spec->micmute_led_polarity, !!brightness);
++ spec->micmute_led_polarity, !brightness);
+ return 0;
+ }
+
+@@ -4173,8 +4173,6 @@ static void alc285_fixup_hp_gpio_led(struct hda_codec *codec,
+ {
+ struct alc_spec *spec = codec->spec;
+
+- spec->micmute_led_polarity = 1;
+-
+ alc_fixup_hp_gpio_led(codec, action, 0x04, 0x01);
+ }
+
+diff --git a/sound/soc/codecs/hdac_hda.c b/sound/soc/codecs/hdac_hda.c
+index 473efe9ef998..b0370bb10c14 100644
+--- a/sound/soc/codecs/hdac_hda.c
++++ b/sound/soc/codecs/hdac_hda.c
+@@ -289,7 +289,6 @@ static int hdac_hda_dai_open(struct snd_pcm_substream *substream,
+ struct hdac_hda_priv *hda_pvt;
+ struct hda_pcm_stream *hda_stream;
+ struct hda_pcm *pcm;
+- int ret;
+
+ hda_pvt = snd_soc_component_get_drvdata(component);
+ pcm = snd_soc_find_pcm_from_dai(hda_pvt, dai);
+@@ -300,11 +299,7 @@ static int hdac_hda_dai_open(struct snd_pcm_substream *substream,
+
+ hda_stream = &pcm->stream[substream->stream];
+
+- ret = hda_stream->ops.open(hda_stream, &hda_pvt->codec, substream);
+- if (ret < 0)
+- snd_hda_codec_pcm_put(pcm);
+-
+- return ret;
++ return hda_stream->ops.open(hda_stream, &hda_pvt->codec, substream);
+ }
+
+ static void hdac_hda_dai_close(struct snd_pcm_substream *substream,
+diff --git a/sound/soc/codecs/tas2770.c b/sound/soc/codecs/tas2770.c
+index 54c8135fe43c..cf071121c839 100644
+--- a/sound/soc/codecs/tas2770.c
++++ b/sound/soc/codecs/tas2770.c
+@@ -758,8 +758,7 @@ static int tas2770_i2c_probe(struct i2c_client *client,
+ }
+ }
+
+- tas2770->reset_gpio = devm_gpiod_get_optional(tas2770->dev,
+- "reset-gpio",
++ tas2770->reset_gpio = devm_gpiod_get_optional(tas2770->dev, "reset",
+ GPIOD_OUT_HIGH);
+ if (IS_ERR(tas2770->reset_gpio)) {
+ if (PTR_ERR(tas2770->reset_gpio) == -EPROBE_DEFER) {
+diff --git a/sound/soc/fsl/fsl_easrc.c b/sound/soc/fsl/fsl_easrc.c
+index c6b5eb2d2af7..fff1f02dadfe 100644
+--- a/sound/soc/fsl/fsl_easrc.c
++++ b/sound/soc/fsl/fsl_easrc.c
+@@ -1133,7 +1133,7 @@ int fsl_easrc_set_ctx_format(struct fsl_asrc_pair *ctx,
+ struct fsl_easrc_ctx_priv *ctx_priv = ctx->private;
+ struct fsl_easrc_data_fmt *in_fmt = &ctx_priv->in_params.fmt;
+ struct fsl_easrc_data_fmt *out_fmt = &ctx_priv->out_params.fmt;
+- int ret;
++ int ret = 0;
+
+ /* Get the bitfield values for input data format */
+ if (in_raw_format && out_raw_format) {
+diff --git a/sound/soc/fsl/fsl_sai.c b/sound/soc/fsl/fsl_sai.c
+index 9d436b0c5718..7031869a023a 100644
+--- a/sound/soc/fsl/fsl_sai.c
++++ b/sound/soc/fsl/fsl_sai.c
+@@ -680,10 +680,11 @@ static int fsl_sai_dai_probe(struct snd_soc_dai *cpu_dai)
+ regmap_write(sai->regmap, FSL_SAI_RCSR(ofs), 0);
+
+ regmap_update_bits(sai->regmap, FSL_SAI_TCR1(ofs),
+- FSL_SAI_CR1_RFW_MASK,
++ FSL_SAI_CR1_RFW_MASK(sai->soc_data->fifo_depth),
+ sai->soc_data->fifo_depth - FSL_SAI_MAXBURST_TX);
+ regmap_update_bits(sai->regmap, FSL_SAI_RCR1(ofs),
+- FSL_SAI_CR1_RFW_MASK, FSL_SAI_MAXBURST_RX - 1);
++ FSL_SAI_CR1_RFW_MASK(sai->soc_data->fifo_depth),
++ FSL_SAI_MAXBURST_RX - 1);
+
+ snd_soc_dai_init_dma_data(cpu_dai, &sai->dma_params_tx,
+ &sai->dma_params_rx);
+diff --git a/sound/soc/fsl/fsl_sai.h b/sound/soc/fsl/fsl_sai.h
+index 76b15deea80c..6aba7d28f5f3 100644
+--- a/sound/soc/fsl/fsl_sai.h
++++ b/sound/soc/fsl/fsl_sai.h
+@@ -94,7 +94,7 @@
+ #define FSL_SAI_CSR_FRDE BIT(0)
+
+ /* SAI Transmit and Receive Configuration 1 Register */
+-#define FSL_SAI_CR1_RFW_MASK 0x1f
++#define FSL_SAI_CR1_RFW_MASK(x) ((x) - 1)
+
+ /* SAI Transmit and Receive Configuration 2 Register */
+ #define FSL_SAI_CR2_SYNC BIT(30)
+diff --git a/sound/soc/intel/boards/bxt_rt298.c b/sound/soc/intel/boards/bxt_rt298.c
+index 7a4decf34191..c84c60df17db 100644
+--- a/sound/soc/intel/boards/bxt_rt298.c
++++ b/sound/soc/intel/boards/bxt_rt298.c
+@@ -565,6 +565,7 @@ static int bxt_card_late_probe(struct snd_soc_card *card)
+ /* broxton audio machine driver for SPT + RT298S */
+ static struct snd_soc_card broxton_rt298 = {
+ .name = "broxton-rt298",
++ .owner = THIS_MODULE,
+ .dai_link = broxton_rt298_dais,
+ .num_links = ARRAY_SIZE(broxton_rt298_dais),
+ .controls = broxton_controls,
+@@ -580,6 +581,7 @@ static struct snd_soc_card broxton_rt298 = {
+
+ static struct snd_soc_card geminilake_rt298 = {
+ .name = "geminilake-rt298",
++ .owner = THIS_MODULE,
+ .dai_link = broxton_rt298_dais,
+ .num_links = ARRAY_SIZE(broxton_rt298_dais),
+ .controls = broxton_controls,
+diff --git a/sound/soc/intel/boards/cml_rt1011_rt5682.c b/sound/soc/intel/boards/cml_rt1011_rt5682.c
+index 68eff29daf8f..23dd8c5fc1e7 100644
+--- a/sound/soc/intel/boards/cml_rt1011_rt5682.c
++++ b/sound/soc/intel/boards/cml_rt1011_rt5682.c
+@@ -34,7 +34,6 @@
+ #define SOF_RT1011_SPEAKER_WR BIT(1)
+ #define SOF_RT1011_SPEAKER_TL BIT(2)
+ #define SOF_RT1011_SPEAKER_TR BIT(3)
+-#define SPK_CH 4
+
+ /* Default: Woofer speakers */
+ static unsigned long sof_rt1011_quirk = SOF_RT1011_SPEAKER_WL |
+@@ -376,10 +375,17 @@ SND_SOC_DAILINK_DEF(ssp0_codec,
+
+ SND_SOC_DAILINK_DEF(ssp1_pin,
+ DAILINK_COMP_ARRAY(COMP_CPU("SSP1 Pin")));
+-SND_SOC_DAILINK_DEF(ssp1_codec,
++SND_SOC_DAILINK_DEF(ssp1_codec_2spk,
+ DAILINK_COMP_ARRAY(
+ /* WL */ COMP_CODEC("i2c-10EC1011:00", CML_RT1011_CODEC_DAI),
+ /* WR */ COMP_CODEC("i2c-10EC1011:01", CML_RT1011_CODEC_DAI)));
++SND_SOC_DAILINK_DEF(ssp1_codec_4spk,
++ DAILINK_COMP_ARRAY(
++ /* WL */ COMP_CODEC("i2c-10EC1011:00", CML_RT1011_CODEC_DAI),
++ /* WR */ COMP_CODEC("i2c-10EC1011:01", CML_RT1011_CODEC_DAI),
++ /* TL */ COMP_CODEC("i2c-10EC1011:02", CML_RT1011_CODEC_DAI),
++ /* TR */ COMP_CODEC("i2c-10EC1011:03", CML_RT1011_CODEC_DAI)));
++
+
+ SND_SOC_DAILINK_DEF(dmic_pin,
+ DAILINK_COMP_ARRAY(COMP_CPU("DMIC01 Pin")));
+@@ -475,7 +481,7 @@ static struct snd_soc_dai_link cml_rt1011_rt5682_dailink[] = {
+ .no_pcm = 1,
+ .init = cml_rt1011_spk_init,
+ .ops = &cml_rt1011_ops,
+- SND_SOC_DAILINK_REG(ssp1_pin, ssp1_codec, platform),
++ SND_SOC_DAILINK_REG(ssp1_pin, ssp1_codec_2spk, platform),
+ },
+ };
+
+@@ -488,11 +494,21 @@ static struct snd_soc_codec_conf rt1011_conf[] = {
+ .dlc = COMP_CODEC_CONF("i2c-10EC1011:01"),
+ .name_prefix = "WR",
+ },
++ /* single configuration structure for 2 and 4 channels */
++ {
++ .dlc = COMP_CODEC_CONF("i2c-10EC1011:02"),
++ .name_prefix = "TL",
++ },
++ {
++ .dlc = COMP_CODEC_CONF("i2c-10EC1011:03"),
++ .name_prefix = "TR",
++ },
+ };
+
+ /* Cometlake audio machine driver for RT1011 and RT5682 */
+ static struct snd_soc_card snd_soc_card_cml = {
+ .name = "cml_rt1011_rt5682",
++ .owner = THIS_MODULE,
+ .dai_link = cml_rt1011_rt5682_dailink,
+ .num_links = ARRAY_SIZE(cml_rt1011_rt5682_dailink),
+ .codec_conf = rt1011_conf,
+@@ -509,8 +525,6 @@ static struct snd_soc_card snd_soc_card_cml = {
+
+ static int snd_cml_rt1011_probe(struct platform_device *pdev)
+ {
+- struct snd_soc_dai_link_component *rt1011_dais_components;
+- struct snd_soc_codec_conf *rt1011_dais_confs;
+ struct card_private *ctx;
+ struct snd_soc_acpi_mach *mach;
+ const char *platform_name;
+@@ -529,65 +543,15 @@ static int snd_cml_rt1011_probe(struct platform_device *pdev)
+
+ dev_info(&pdev->dev, "sof_rt1011_quirk = %lx\n", sof_rt1011_quirk);
+
++ /* when 4 speakers are available, update codec config */
+ if (sof_rt1011_quirk & (SOF_RT1011_SPEAKER_TL |
+ SOF_RT1011_SPEAKER_TR)) {
+- rt1011_dais_confs = devm_kzalloc(&pdev->dev,
+- sizeof(struct snd_soc_codec_conf) *
+- SPK_CH, GFP_KERNEL);
+-
+- if (!rt1011_dais_confs)
+- return -ENOMEM;
+-
+- rt1011_dais_components = devm_kzalloc(&pdev->dev,
+- sizeof(struct snd_soc_dai_link_component) *
+- SPK_CH, GFP_KERNEL);
+-
+- if (!rt1011_dais_components)
+- return -ENOMEM;
+-
+- for (i = 0; i < SPK_CH; i++) {
+- rt1011_dais_confs[i].dlc.name = devm_kasprintf(&pdev->dev,
+- GFP_KERNEL,
+- "i2c-10EC1011:0%d",
+- i);
+-
+- if (!rt1011_dais_confs[i].dlc.name)
+- return -ENOMEM;
+-
+- switch (i) {
+- case 0:
+- rt1011_dais_confs[i].name_prefix = "WL";
+- break;
+- case 1:
+- rt1011_dais_confs[i].name_prefix = "WR";
+- break;
+- case 2:
+- rt1011_dais_confs[i].name_prefix = "TL";
+- break;
+- case 3:
+- rt1011_dais_confs[i].name_prefix = "TR";
+- break;
+- default:
+- return -EINVAL;
+- }
+- rt1011_dais_components[i].name = devm_kasprintf(&pdev->dev,
+- GFP_KERNEL,
+- "i2c-10EC1011:0%d",
+- i);
+- if (!rt1011_dais_components[i].name)
+- return -ENOMEM;
+-
+- rt1011_dais_components[i].dai_name = CML_RT1011_CODEC_DAI;
+- }
+-
+- snd_soc_card_cml.codec_conf = rt1011_dais_confs;
+- snd_soc_card_cml.num_configs = SPK_CH;
+-
+ for (i = 0; i < ARRAY_SIZE(cml_rt1011_rt5682_dailink); i++) {
+ if (!strcmp(cml_rt1011_rt5682_dailink[i].codecs->dai_name,
+- CML_RT1011_CODEC_DAI)) {
+- cml_rt1011_rt5682_dailink[i].codecs = rt1011_dais_components;
+- cml_rt1011_rt5682_dailink[i].num_codecs = SPK_CH;
++ CML_RT1011_CODEC_DAI)) {
++ cml_rt1011_rt5682_dailink[i].codecs = ssp1_codec_4spk;
++ cml_rt1011_rt5682_dailink[i].num_codecs =
++ ARRAY_SIZE(ssp1_codec_4spk);
+ }
+ }
+ }
+diff --git a/sound/soc/intel/boards/sof_sdw.c b/sound/soc/intel/boards/sof_sdw.c
+index e1c1a8ba78e6..1bfd9613449e 100644
+--- a/sound/soc/intel/boards/sof_sdw.c
++++ b/sound/soc/intel/boards/sof_sdw.c
+@@ -893,6 +893,7 @@ static const char sdw_card_long_name[] = "Intel Soundwire SOF";
+
+ static struct snd_soc_card card_sof_sdw = {
+ .name = "soundwire",
++ .owner = THIS_MODULE,
+ .late_probe = sof_sdw_hdmi_card_late_probe,
+ .codec_conf = codec_conf,
+ .num_configs = ARRAY_SIZE(codec_conf),
+diff --git a/sound/soc/meson/axg-card.c b/sound/soc/meson/axg-card.c
+index 89f7f64747cd..33058518c3da 100644
+--- a/sound/soc/meson/axg-card.c
++++ b/sound/soc/meson/axg-card.c
+@@ -116,7 +116,7 @@ static int axg_card_add_tdm_loopback(struct snd_soc_card *card,
+
+ lb = &card->dai_link[*index + 1];
+
+- lb->name = kasprintf(GFP_KERNEL, "%s-lb", pad->name);
++ lb->name = devm_kasprintf(card->dev, GFP_KERNEL, "%s-lb", pad->name);
+ if (!lb->name)
+ return -ENOMEM;
+
+@@ -327,20 +327,22 @@ static int axg_card_add_link(struct snd_soc_card *card, struct device_node *np,
+ return ret;
+
+ if (axg_card_cpu_is_playback_fe(dai_link->cpus->of_node))
+- ret = meson_card_set_fe_link(card, dai_link, np, true);
++ return meson_card_set_fe_link(card, dai_link, np, true);
+ else if (axg_card_cpu_is_capture_fe(dai_link->cpus->of_node))
+- ret = meson_card_set_fe_link(card, dai_link, np, false);
+- else
+- ret = meson_card_set_be_link(card, dai_link, np);
++ return meson_card_set_fe_link(card, dai_link, np, false);
+
++
++ ret = meson_card_set_be_link(card, dai_link, np);
+ if (ret)
+ return ret;
+
+- if (axg_card_cpu_is_tdm_iface(dai_link->cpus->of_node))
+- ret = axg_card_parse_tdm(card, np, index);
+- else if (axg_card_cpu_is_codec(dai_link->cpus->of_node)) {
++ if (axg_card_cpu_is_codec(dai_link->cpus->of_node)) {
+ dai_link->params = &codec_params;
+- dai_link->no_pcm = 0; /* link is not a DPCM BE */
++ } else {
++ dai_link->no_pcm = 1;
++ snd_soc_dai_link_set_capabilities(dai_link);
++ if (axg_card_cpu_is_tdm_iface(dai_link->cpus->of_node))
++ ret = axg_card_parse_tdm(card, np, index);
+ }
+
+ return ret;
+diff --git a/sound/soc/meson/axg-tdm-formatter.c b/sound/soc/meson/axg-tdm-formatter.c
+index 358c8c0d861c..f7e8e9da68a0 100644
+--- a/sound/soc/meson/axg-tdm-formatter.c
++++ b/sound/soc/meson/axg-tdm-formatter.c
+@@ -70,7 +70,7 @@ EXPORT_SYMBOL_GPL(axg_tdm_formatter_set_channel_masks);
+ static int axg_tdm_formatter_enable(struct axg_tdm_formatter *formatter)
+ {
+ struct axg_tdm_stream *ts = formatter->stream;
+- bool invert = formatter->drv->quirks->invert_sclk;
++ bool invert;
+ int ret;
+
+ /* Do nothing if the formatter is already enabled */
+@@ -96,11 +96,12 @@ static int axg_tdm_formatter_enable(struct axg_tdm_formatter *formatter)
+ return ret;
+
+ /*
+- * If sclk is inverted, invert it back and provide the inversion
+- * required by the formatter
++ * If sclk is inverted, it means the bit should be latched on the
++ * rising edge which is what our HW expects. If not, we need to
++ * invert it before the formatter.
+ */
+- invert ^= axg_tdm_sclk_invert(ts->iface->fmt);
+- ret = clk_set_phase(formatter->sclk, invert ? 180 : 0);
++ invert = axg_tdm_sclk_invert(ts->iface->fmt);
++ ret = clk_set_phase(formatter->sclk, invert ? 0 : 180);
+ if (ret)
+ return ret;
+
+diff --git a/sound/soc/meson/axg-tdm-formatter.h b/sound/soc/meson/axg-tdm-formatter.h
+index 9ef98e955cb2..a1f0dcc0ff13 100644
+--- a/sound/soc/meson/axg-tdm-formatter.h
++++ b/sound/soc/meson/axg-tdm-formatter.h
+@@ -16,7 +16,6 @@ struct snd_kcontrol;
+
+ struct axg_tdm_formatter_hw {
+ unsigned int skew_offset;
+- bool invert_sclk;
+ };
+
+ struct axg_tdm_formatter_ops {
+diff --git a/sound/soc/meson/axg-tdm-interface.c b/sound/soc/meson/axg-tdm-interface.c
+index 6de27238e9df..36df30915378 100644
+--- a/sound/soc/meson/axg-tdm-interface.c
++++ b/sound/soc/meson/axg-tdm-interface.c
+@@ -119,18 +119,25 @@ static int axg_tdm_iface_set_fmt(struct snd_soc_dai *dai, unsigned int fmt)
+ {
+ struct axg_tdm_iface *iface = snd_soc_dai_get_drvdata(dai);
+
+- /* These modes are not supported */
+- if (fmt & (SND_SOC_DAIFMT_CBS_CFM | SND_SOC_DAIFMT_CBM_CFS)) {
++ switch (fmt & SND_SOC_DAIFMT_MASTER_MASK) {
++ case SND_SOC_DAIFMT_CBS_CFS:
++ if (!iface->mclk) {
++ dev_err(dai->dev, "cpu clock master: mclk missing\n");
++ return -ENODEV;
++ }
++ break;
++
++ case SND_SOC_DAIFMT_CBM_CFM:
++ break;
++
++ case SND_SOC_DAIFMT_CBS_CFM:
++ case SND_SOC_DAIFMT_CBM_CFS:
+ dev_err(dai->dev, "only CBS_CFS and CBM_CFM are supported\n");
++ /* Fall-through */
++ default:
+ return -EINVAL;
+ }
+
+- /* If the TDM interface is the clock master, it requires mclk */
+- if (!iface->mclk && (fmt & SND_SOC_DAIFMT_CBS_CFS)) {
+- dev_err(dai->dev, "cpu clock master: mclk missing\n");
+- return -ENODEV;
+- }
+-
+ iface->fmt = fmt;
+ return 0;
+ }
+@@ -319,7 +326,8 @@ static int axg_tdm_iface_hw_params(struct snd_pcm_substream *substream,
+ if (ret)
+ return ret;
+
+- if (iface->fmt & SND_SOC_DAIFMT_CBS_CFS) {
++ if ((iface->fmt & SND_SOC_DAIFMT_MASTER_MASK) ==
++ SND_SOC_DAIFMT_CBS_CFS) {
+ ret = axg_tdm_iface_set_sclk(dai, params);
+ if (ret)
+ return ret;
+diff --git a/sound/soc/meson/axg-tdmin.c b/sound/soc/meson/axg-tdmin.c
+index 973d4c02ef8d..88ed95ae886b 100644
+--- a/sound/soc/meson/axg-tdmin.c
++++ b/sound/soc/meson/axg-tdmin.c
+@@ -228,15 +228,29 @@ static const struct axg_tdm_formatter_driver axg_tdmin_drv = {
+ .regmap_cfg = &axg_tdmin_regmap_cfg,
+ .ops = &axg_tdmin_ops,
+ .quirks = &(const struct axg_tdm_formatter_hw) {
+- .invert_sclk = false,
+ .skew_offset = 2,
+ },
+ };
+
++static const struct axg_tdm_formatter_driver g12a_tdmin_drv = {
++ .component_drv = &axg_tdmin_component_drv,
++ .regmap_cfg = &axg_tdmin_regmap_cfg,
++ .ops = &axg_tdmin_ops,
++ .quirks = &(const struct axg_tdm_formatter_hw) {
++ .skew_offset = 3,
++ },
++};
++
+ static const struct of_device_id axg_tdmin_of_match[] = {
+ {
+ .compatible = "amlogic,axg-tdmin",
+ .data = &axg_tdmin_drv,
++ }, {
++ .compatible = "amlogic,g12a-tdmin",
++ .data = &g12a_tdmin_drv,
++ }, {
++ .compatible = "amlogic,sm1-tdmin",
++ .data = &g12a_tdmin_drv,
+ }, {}
+ };
+ MODULE_DEVICE_TABLE(of, axg_tdmin_of_match);
+diff --git a/sound/soc/meson/axg-tdmout.c b/sound/soc/meson/axg-tdmout.c
+index 418ec314b37d..3ceabddae629 100644
+--- a/sound/soc/meson/axg-tdmout.c
++++ b/sound/soc/meson/axg-tdmout.c
+@@ -238,7 +238,6 @@ static const struct axg_tdm_formatter_driver axg_tdmout_drv = {
+ .regmap_cfg = &axg_tdmout_regmap_cfg,
+ .ops = &axg_tdmout_ops,
+ .quirks = &(const struct axg_tdm_formatter_hw) {
+- .invert_sclk = true,
+ .skew_offset = 1,
+ },
+ };
+@@ -248,7 +247,6 @@ static const struct axg_tdm_formatter_driver g12a_tdmout_drv = {
+ .regmap_cfg = &axg_tdmout_regmap_cfg,
+ .ops = &axg_tdmout_ops,
+ .quirks = &(const struct axg_tdm_formatter_hw) {
+- .invert_sclk = true,
+ .skew_offset = 2,
+ },
+ };
+@@ -309,7 +307,6 @@ static const struct axg_tdm_formatter_driver sm1_tdmout_drv = {
+ .regmap_cfg = &axg_tdmout_regmap_cfg,
+ .ops = &axg_tdmout_ops,
+ .quirks = &(const struct axg_tdm_formatter_hw) {
+- .invert_sclk = true,
+ .skew_offset = 2,
+ },
+ };
+diff --git a/sound/soc/meson/gx-card.c b/sound/soc/meson/gx-card.c
+index 4abf7efb7eac..fdd2d5303b2a 100644
+--- a/sound/soc/meson/gx-card.c
++++ b/sound/soc/meson/gx-card.c
+@@ -96,21 +96,21 @@ static int gx_card_add_link(struct snd_soc_card *card, struct device_node *np,
+ return ret;
+
+ if (gx_card_cpu_identify(dai_link->cpus, "FIFO"))
+- ret = meson_card_set_fe_link(card, dai_link, np, true);
+- else
+- ret = meson_card_set_be_link(card, dai_link, np);
++ return meson_card_set_fe_link(card, dai_link, np, true);
+
++ ret = meson_card_set_be_link(card, dai_link, np);
+ if (ret)
+ return ret;
+
+- /* Check if the cpu is the i2s encoder and parse i2s data */
+- if (gx_card_cpu_identify(dai_link->cpus, "I2S Encoder"))
+- ret = gx_card_parse_i2s(card, np, index);
+-
+ /* Or apply codec to codec params if necessary */
+- else if (gx_card_cpu_identify(dai_link->cpus, "CODEC CTRL")) {
++ if (gx_card_cpu_identify(dai_link->cpus, "CODEC CTRL")) {
+ dai_link->params = &codec_params;
+- dai_link->no_pcm = 0; /* link is not a DPCM BE */
++ } else {
++ dai_link->no_pcm = 1;
++ snd_soc_dai_link_set_capabilities(dai_link);
++ /* Check if the cpu is the i2s encoder and parse i2s data */
++ if (gx_card_cpu_identify(dai_link->cpus, "I2S Encoder"))
++ ret = gx_card_parse_i2s(card, np, index);
+ }
+
+ return ret;
+diff --git a/sound/soc/meson/meson-card-utils.c b/sound/soc/meson/meson-card-utils.c
+index 5a4a91c88734..c734131ff0d6 100644
+--- a/sound/soc/meson/meson-card-utils.c
++++ b/sound/soc/meson/meson-card-utils.c
+@@ -147,10 +147,6 @@ int meson_card_set_be_link(struct snd_soc_card *card,
+ struct device_node *np;
+ int ret, num_codecs;
+
+- link->no_pcm = 1;
+- link->dpcm_playback = 1;
+- link->dpcm_capture = 1;
+-
+ num_codecs = of_get_child_count(node);
+ if (!num_codecs) {
+ dev_err(card->dev, "be link %s has no codec\n",
+diff --git a/sound/soc/soc-core.c b/sound/soc/soc-core.c
+index 2b8abf88ec60..f1d641cd48da 100644
+--- a/sound/soc/soc-core.c
++++ b/sound/soc/soc-core.c
+@@ -446,7 +446,6 @@ static struct snd_soc_pcm_runtime *soc_new_pcm_runtime(
+
+ dev->parent = card->dev;
+ dev->release = soc_release_rtd_dev;
+- dev->groups = soc_dev_attr_groups;
+
+ dev_set_name(dev, "%s", dai_link->name);
+
+@@ -503,6 +502,10 @@ static struct snd_soc_pcm_runtime *soc_new_pcm_runtime(
+ /* see for_each_card_rtds */
+ list_add_tail(&rtd->list, &card->rtd_list);
+
++ ret = device_add_groups(dev, soc_dev_attr_groups);
++ if (ret < 0)
++ goto free_rtd;
++
+ return rtd;
+
+ free_rtd:
+diff --git a/sound/soc/soc-dai.c b/sound/soc/soc-dai.c
+index 457159975b01..cecbbed2de9d 100644
+--- a/sound/soc/soc-dai.c
++++ b/sound/soc/soc-dai.c
+@@ -400,28 +400,30 @@ void snd_soc_dai_link_set_capabilities(struct snd_soc_dai_link *dai_link)
+ struct snd_soc_dai_link_component *codec;
+ struct snd_soc_dai *dai;
+ bool supported[SNDRV_PCM_STREAM_LAST + 1];
++ bool supported_cpu;
++ bool supported_codec;
+ int direction;
+ int i;
+
+ for_each_pcm_streams(direction) {
+- supported[direction] = true;
++ supported_cpu = false;
++ supported_codec = false;
+
+ for_each_link_cpus(dai_link, i, cpu) {
+ dai = snd_soc_find_dai(cpu);
+- if (!dai || !snd_soc_dai_stream_valid(dai, direction)) {
+- supported[direction] = false;
++ if (dai && snd_soc_dai_stream_valid(dai, direction)) {
++ supported_cpu = true;
+ break;
+ }
+ }
+- if (!supported[direction])
+- continue;
+ for_each_link_codecs(dai_link, i, codec) {
+ dai = snd_soc_find_dai(codec);
+- if (!dai || !snd_soc_dai_stream_valid(dai, direction)) {
+- supported[direction] = false;
++ if (dai && snd_soc_dai_stream_valid(dai, direction)) {
++ supported_codec = true;
+ break;
+ }
+ }
++ supported[direction] = supported_cpu && supported_codec;
+ }
+
+ dai_link->dpcm_playback = supported[SNDRV_PCM_STREAM_PLAYBACK];
+diff --git a/sound/soc/soc-pcm.c b/sound/soc/soc-pcm.c
+index c517064f5391..74baf1fce053 100644
+--- a/sound/soc/soc-pcm.c
++++ b/sound/soc/soc-pcm.c
+@@ -2802,30 +2802,36 @@ int soc_new_pcm(struct snd_soc_pcm_runtime *rtd, int num)
+ if (rtd->dai_link->dpcm_playback) {
+ stream = SNDRV_PCM_STREAM_PLAYBACK;
+
+- for_each_rtd_cpu_dais(rtd, i, cpu_dai)
+- if (!snd_soc_dai_stream_valid(cpu_dai,
+- stream)) {
+- dev_err(rtd->card->dev,
+- "CPU DAI %s for rtd %s does not support playback\n",
+- cpu_dai->name,
+- rtd->dai_link->stream_name);
+- return -EINVAL;
++ for_each_rtd_cpu_dais(rtd, i, cpu_dai) {
++ if (snd_soc_dai_stream_valid(cpu_dai, stream)) {
++ playback = 1;
++ break;
+ }
+- playback = 1;
++ }
++
++ if (!playback) {
++ dev_err(rtd->card->dev,
++ "No CPU DAIs support playback for stream %s\n",
++ rtd->dai_link->stream_name);
++ return -EINVAL;
++ }
+ }
+ if (rtd->dai_link->dpcm_capture) {
+ stream = SNDRV_PCM_STREAM_CAPTURE;
+
+- for_each_rtd_cpu_dais(rtd, i, cpu_dai)
+- if (!snd_soc_dai_stream_valid(cpu_dai,
+- stream)) {
+- dev_err(rtd->card->dev,
+- "CPU DAI %s for rtd %s does not support capture\n",
+- cpu_dai->name,
+- rtd->dai_link->stream_name);
+- return -EINVAL;
++ for_each_rtd_cpu_dais(rtd, i, cpu_dai) {
++ if (snd_soc_dai_stream_valid(cpu_dai, stream)) {
++ capture = 1;
++ break;
+ }
+- capture = 1;
++ }
++
++ if (!capture) {
++ dev_err(rtd->card->dev,
++ "No CPU DAIs support capture for stream %s\n",
++ rtd->dai_link->stream_name);
++ return -EINVAL;
++ }
+ }
+ } else {
+ /* Adapt stream for codec2codec links */
+diff --git a/sound/soc/sof/nocodec.c b/sound/soc/sof/nocodec.c
+index d03b5be31255..9e922df6a710 100644
+--- a/sound/soc/sof/nocodec.c
++++ b/sound/soc/sof/nocodec.c
+@@ -14,6 +14,7 @@
+
+ static struct snd_soc_card sof_nocodec_card = {
+ .name = "nocodec", /* the sof- prefix is added by the core */
++ .owner = THIS_MODULE
+ };
+
+ static int sof_nocodec_bes_setup(struct device *dev,
+diff --git a/sound/usb/card.h b/sound/usb/card.h
+index de43267b9c8a..5351d7183b1b 100644
+--- a/sound/usb/card.h
++++ b/sound/usb/card.h
+@@ -137,6 +137,7 @@ struct snd_usb_substream {
+ unsigned int tx_length_quirk:1; /* add length specifier to transfers */
+ unsigned int fmt_type; /* USB audio format type (1-3) */
+ unsigned int pkt_offset_adj; /* Bytes to drop from beginning of packets (for non-compliant devices) */
++ unsigned int stream_offset_adj; /* Bytes to drop from beginning of stream (for non-compliant devices) */
+
+ unsigned int running: 1; /* running status */
+
+diff --git a/sound/usb/mixer_quirks.c b/sound/usb/mixer_quirks.c
+index cec1cfd7edb7..199cdbfdc761 100644
+--- a/sound/usb/mixer_quirks.c
++++ b/sound/usb/mixer_quirks.c
+@@ -185,6 +185,7 @@ static const struct rc_config {
+ { USB_ID(0x041e, 0x3042), 0, 1, 1, 1, 1, 0x000d }, /* Usb X-Fi S51 */
+ { USB_ID(0x041e, 0x30df), 0, 1, 1, 1, 1, 0x000d }, /* Usb X-Fi S51 Pro */
+ { USB_ID(0x041e, 0x3237), 0, 1, 1, 1, 1, 0x000d }, /* Usb X-Fi S51 Pro */
++ { USB_ID(0x041e, 0x3263), 0, 1, 1, 1, 1, 0x000d }, /* Usb X-Fi S51 Pro */
+ { USB_ID(0x041e, 0x3048), 2, 2, 6, 6, 2, 0x6e91 }, /* Toshiba SB0500 */
+ };
+
+diff --git a/sound/usb/pcm.c b/sound/usb/pcm.c
+index a69d9e75f66f..eb3cececda79 100644
+--- a/sound/usb/pcm.c
++++ b/sound/usb/pcm.c
+@@ -1420,6 +1420,12 @@ static void retire_capture_urb(struct snd_usb_substream *subs,
+ // continue;
+ }
+ bytes = urb->iso_frame_desc[i].actual_length;
++ if (subs->stream_offset_adj > 0) {
++ unsigned int adj = min(subs->stream_offset_adj, bytes);
++ cp += adj;
++ bytes -= adj;
++ subs->stream_offset_adj -= adj;
++ }
+ frames = bytes / stride;
+ if (!subs->txfr_quirk)
+ bytes = frames * stride;
+diff --git a/sound/usb/quirks-table.h b/sound/usb/quirks-table.h
+index 9092cc0aa807..a53eb67ad4bd 100644
+--- a/sound/usb/quirks-table.h
++++ b/sound/usb/quirks-table.h
+@@ -3541,6 +3541,62 @@ AU0828_DEVICE(0x2040, 0x7270, "Hauppauge", "HVR-950Q"),
+ }
+ }
+ },
++{
++ /*
++ * PIONEER DJ DDJ-RB
++ * PCM is 4 channels out, 2 dummy channels in @ 44.1 fixed
++ * The feedback for the output is the dummy input.
++ */
++ USB_DEVICE_VENDOR_SPEC(0x2b73, 0x000e),
++ .driver_info = (unsigned long) &(const struct snd_usb_audio_quirk) {
++ .ifnum = QUIRK_ANY_INTERFACE,
++ .type = QUIRK_COMPOSITE,
++ .data = (const struct snd_usb_audio_quirk[]) {
++ {
++ .ifnum = 0,
++ .type = QUIRK_AUDIO_FIXED_ENDPOINT,
++ .data = &(const struct audioformat) {
++ .formats = SNDRV_PCM_FMTBIT_S24_3LE,
++ .channels = 4,
++ .iface = 0,
++ .altsetting = 1,
++ .altset_idx = 1,
++ .endpoint = 0x01,
++ .ep_attr = USB_ENDPOINT_XFER_ISOC|
++ USB_ENDPOINT_SYNC_ASYNC,
++ .rates = SNDRV_PCM_RATE_44100,
++ .rate_min = 44100,
++ .rate_max = 44100,
++ .nr_rates = 1,
++ .rate_table = (unsigned int[]) { 44100 }
++ }
++ },
++ {
++ .ifnum = 0,
++ .type = QUIRK_AUDIO_FIXED_ENDPOINT,
++ .data = &(const struct audioformat) {
++ .formats = SNDRV_PCM_FMTBIT_S24_3LE,
++ .channels = 2,
++ .iface = 0,
++ .altsetting = 1,
++ .altset_idx = 1,
++ .endpoint = 0x82,
++ .ep_attr = USB_ENDPOINT_XFER_ISOC|
++ USB_ENDPOINT_SYNC_ASYNC|
++ USB_ENDPOINT_USAGE_IMPLICIT_FB,
++ .rates = SNDRV_PCM_RATE_44100,
++ .rate_min = 44100,
++ .rate_max = 44100,
++ .nr_rates = 1,
++ .rate_table = (unsigned int[]) { 44100 }
++ }
++ },
++ {
++ .ifnum = -1
++ }
++ }
++ }
++},
+
+ #define ALC1220_VB_DESKTOP(vend, prod) { \
+ USB_DEVICE(vend, prod), \
+@@ -3645,7 +3701,13 @@ ALC1220_VB_DESKTOP(0x26ce, 0x0a01), /* Asrock TRX40 Creator */
+ * with.
+ */
+ {
+- USB_DEVICE(0x534d, 0x2109),
++ .match_flags = USB_DEVICE_ID_MATCH_DEVICE |
++ USB_DEVICE_ID_MATCH_INT_CLASS |
++ USB_DEVICE_ID_MATCH_INT_SUBCLASS,
++ .idVendor = 0x534d,
++ .idProduct = 0x2109,
++ .bInterfaceClass = USB_CLASS_AUDIO,
++ .bInterfaceSubClass = USB_SUBCLASS_AUDIOCONTROL,
+ .driver_info = (unsigned long) &(const struct snd_usb_audio_quirk) {
+ .vendor_name = "MacroSilicon",
+ .product_name = "MS2109",
+diff --git a/sound/usb/quirks.c b/sound/usb/quirks.c
+index fca72730a802..ef1c1cf040b4 100644
+--- a/sound/usb/quirks.c
++++ b/sound/usb/quirks.c
+@@ -1495,6 +1495,9 @@ void snd_usb_set_format_quirk(struct snd_usb_substream *subs,
+ case USB_ID(0x2b73, 0x000a): /* Pioneer DJ DJM-900NXS2 */
+ pioneer_djm_set_format_quirk(subs);
+ break;
++ case USB_ID(0x534d, 0x2109): /* MacroSilicon MS2109 */
++ subs->stream_offset_adj = 2;
++ break;
+ }
+ }
+
+diff --git a/sound/usb/stream.c b/sound/usb/stream.c
+index 15296f2c902c..e03ff2a7a73f 100644
+--- a/sound/usb/stream.c
++++ b/sound/usb/stream.c
+@@ -94,6 +94,7 @@ static void snd_usb_init_substream(struct snd_usb_stream *as,
+ subs->tx_length_quirk = as->chip->tx_length_quirk;
+ subs->speed = snd_usb_get_speed(subs->dev);
+ subs->pkt_offset_adj = 0;
++ subs->stream_offset_adj = 0;
+
+ snd_usb_set_pcm_ops(as->pcm, stream);
+
+diff --git a/tools/bpf/bpftool/btf.c b/tools/bpf/bpftool/btf.c
+index faac8189b285..c2f1fd414820 100644
+--- a/tools/bpf/bpftool/btf.c
++++ b/tools/bpf/bpftool/btf.c
+@@ -596,7 +596,7 @@ static int do_dump(int argc, char **argv)
+ goto done;
+ }
+ if (!btf) {
+- err = ENOENT;
++ err = -ENOENT;
+ p_err("can't find btf with ID (%u)", btf_id);
+ goto done;
+ }
+diff --git a/tools/bpf/bpftool/gen.c b/tools/bpf/bpftool/gen.c
+index 10de76b296ba..540ffde0b03a 100644
+--- a/tools/bpf/bpftool/gen.c
++++ b/tools/bpf/bpftool/gen.c
+@@ -305,8 +305,11 @@ static int do_skeleton(int argc, char **argv)
+ opts.object_name = obj_name;
+ obj = bpf_object__open_mem(obj_data, file_sz, &opts);
+ if (IS_ERR(obj)) {
++ char err_buf[256];
++
++ libbpf_strerror(PTR_ERR(obj), err_buf, sizeof(err_buf));
++ p_err("failed to open BPF object file: %s", err_buf);
+ obj = NULL;
+- p_err("failed to open BPF object file: %ld", PTR_ERR(obj));
+ goto out;
+ }
+
+diff --git a/tools/build/Build.include b/tools/build/Build.include
+index 9ec01f4454f9..585486e40995 100644
+--- a/tools/build/Build.include
++++ b/tools/build/Build.include
+@@ -74,7 +74,8 @@ dep-cmd = $(if $(wildcard $(fixdep)),
+ # dependencies in the cmd file
+ if_changed_dep = $(if $(strip $(any-prereq) $(arg-check)), \
+ @set -e; \
+- $(echo-cmd) $(cmd_$(1)) && $(dep-cmd))
++ $(echo-cmd) $(cmd_$(1)); \
++ $(dep-cmd))
+
+ # if_changed - execute command if any prerequisite is newer than
+ # target, or command line has changed
+diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
+index 8bd33050b7bb..a3fd55194e0b 100644
+--- a/tools/include/uapi/linux/bpf.h
++++ b/tools/include/uapi/linux/bpf.h
+@@ -3168,7 +3168,7 @@ union bpf_attr {
+ * Return
+ * The id is returned or 0 in case the id could not be retrieved.
+ *
+- * int bpf_ringbuf_output(void *ringbuf, void *data, u64 size, u64 flags)
++ * long bpf_ringbuf_output(void *ringbuf, void *data, u64 size, u64 flags)
+ * Description
+ * Copy *size* bytes from *data* into a ring buffer *ringbuf*.
+ * If **BPF_RB_NO_WAKEUP** is specified in *flags*, no notification
+diff --git a/tools/lib/bpf/bpf_tracing.h b/tools/lib/bpf/bpf_tracing.h
+index 58eceb884df3..eebf020cbe3e 100644
+--- a/tools/lib/bpf/bpf_tracing.h
++++ b/tools/lib/bpf/bpf_tracing.h
+@@ -215,7 +215,7 @@ struct pt_regs;
+ #define PT_REGS_PARM5(x) ((x)->regs[8])
+ #define PT_REGS_RET(x) ((x)->regs[31])
+ #define PT_REGS_FP(x) ((x)->regs[30]) /* Works only with CONFIG_FRAME_POINTER */
+-#define PT_REGS_RC(x) ((x)->regs[1])
++#define PT_REGS_RC(x) ((x)->regs[2])
+ #define PT_REGS_SP(x) ((x)->regs[29])
+ #define PT_REGS_IP(x) ((x)->cp0_epc)
+
+@@ -226,7 +226,7 @@ struct pt_regs;
+ #define PT_REGS_PARM5_CORE(x) BPF_CORE_READ((x), regs[8])
+ #define PT_REGS_RET_CORE(x) BPF_CORE_READ((x), regs[31])
+ #define PT_REGS_FP_CORE(x) BPF_CORE_READ((x), regs[30])
+-#define PT_REGS_RC_CORE(x) BPF_CORE_READ((x), regs[1])
++#define PT_REGS_RC_CORE(x) BPF_CORE_READ((x), regs[2])
+ #define PT_REGS_SP_CORE(x) BPF_CORE_READ((x), regs[29])
+ #define PT_REGS_IP_CORE(x) BPF_CORE_READ((x), cp0_epc)
+
+diff --git a/tools/testing/kunit/kunit.py b/tools/testing/kunit/kunit.py
+index f9b769f3437d..425ef40067e7 100755
+--- a/tools/testing/kunit/kunit.py
++++ b/tools/testing/kunit/kunit.py
+@@ -240,12 +240,6 @@ def main(argv, linux=None):
+ if cli_args.subcommand == 'run':
+ if not os.path.exists(cli_args.build_dir):
+ os.mkdir(cli_args.build_dir)
+- kunit_kernel.kunitconfig_path = os.path.join(
+- cli_args.build_dir,
+- kunit_kernel.kunitconfig_path)
+-
+- if not os.path.exists(kunit_kernel.kunitconfig_path):
+- create_default_kunitconfig()
+
+ if not linux:
+ linux = kunit_kernel.LinuxSourceTree()
+@@ -263,12 +257,6 @@ def main(argv, linux=None):
+ if cli_args.build_dir:
+ if not os.path.exists(cli_args.build_dir):
+ os.mkdir(cli_args.build_dir)
+- kunit_kernel.kunitconfig_path = os.path.join(
+- cli_args.build_dir,
+- kunit_kernel.kunitconfig_path)
+-
+- if not os.path.exists(kunit_kernel.kunitconfig_path):
+- create_default_kunitconfig()
+
+ if not linux:
+ linux = kunit_kernel.LinuxSourceTree()
+@@ -285,12 +273,6 @@ def main(argv, linux=None):
+ if cli_args.build_dir:
+ if not os.path.exists(cli_args.build_dir):
+ os.mkdir(cli_args.build_dir)
+- kunit_kernel.kunitconfig_path = os.path.join(
+- cli_args.build_dir,
+- kunit_kernel.kunitconfig_path)
+-
+- if not os.path.exists(kunit_kernel.kunitconfig_path):
+- create_default_kunitconfig()
+
+ if not linux:
+ linux = kunit_kernel.LinuxSourceTree()
+@@ -309,12 +291,6 @@ def main(argv, linux=None):
+ if cli_args.build_dir:
+ if not os.path.exists(cli_args.build_dir):
+ os.mkdir(cli_args.build_dir)
+- kunit_kernel.kunitconfig_path = os.path.join(
+- cli_args.build_dir,
+- kunit_kernel.kunitconfig_path)
+-
+- if not os.path.exists(kunit_kernel.kunitconfig_path):
+- create_default_kunitconfig()
+
+ if not linux:
+ linux = kunit_kernel.LinuxSourceTree()
+diff --git a/tools/testing/kunit/kunit_kernel.py b/tools/testing/kunit/kunit_kernel.py
+index 63dbda2d029f..e20e2056cb38 100644
+--- a/tools/testing/kunit/kunit_kernel.py
++++ b/tools/testing/kunit/kunit_kernel.py
+@@ -34,7 +34,7 @@ class LinuxSourceTreeOperations(object):
+
+ def make_mrproper(self):
+ try:
+- subprocess.check_output(['make', 'mrproper'])
++ subprocess.check_output(['make', 'mrproper'], stderr=subprocess.STDOUT)
+ except OSError as e:
+ raise ConfigError('Could not call make command: ' + e)
+ except subprocess.CalledProcessError as e:
+@@ -47,7 +47,7 @@ class LinuxSourceTreeOperations(object):
+ if build_dir:
+ command += ['O=' + build_dir]
+ try:
+- subprocess.check_output(command, stderr=subprocess.PIPE)
++ subprocess.check_output(command, stderr=subprocess.STDOUT)
+ except OSError as e:
+ raise ConfigError('Could not call make command: ' + e)
+ except subprocess.CalledProcessError as e:
+@@ -77,7 +77,7 @@ class LinuxSourceTreeOperations(object):
+ if build_dir:
+ command += ['O=' + build_dir]
+ try:
+- subprocess.check_output(command)
++ subprocess.check_output(command, stderr=subprocess.STDOUT)
+ except OSError as e:
+ raise BuildError('Could not call execute make: ' + e)
+ except subprocess.CalledProcessError as e:
+diff --git a/tools/testing/kunit/kunit_tool_test.py b/tools/testing/kunit/kunit_tool_test.py
+index f9eeaea94cad..287c74d821c3 100755
+--- a/tools/testing/kunit/kunit_tool_test.py
++++ b/tools/testing/kunit/kunit_tool_test.py
+@@ -251,21 +251,21 @@ class KUnitMainTest(unittest.TestCase):
+ pass
+
+ def test_config_passes_args_pass(self):
+- kunit.main(['config'], self.linux_source_mock)
++ kunit.main(['config', '--build_dir=.kunit'], self.linux_source_mock)
+ assert self.linux_source_mock.build_reconfig.call_count == 1
+ assert self.linux_source_mock.run_kernel.call_count == 0
+
+ def test_build_passes_args_pass(self):
+ kunit.main(['build'], self.linux_source_mock)
+ assert self.linux_source_mock.build_reconfig.call_count == 0
+- self.linux_source_mock.build_um_kernel.assert_called_once_with(False, 8, '', None)
++ self.linux_source_mock.build_um_kernel.assert_called_once_with(False, 8, '.kunit', None)
+ assert self.linux_source_mock.run_kernel.call_count == 0
+
+ def test_exec_passes_args_pass(self):
+ kunit.main(['exec'], self.linux_source_mock)
+ assert self.linux_source_mock.build_reconfig.call_count == 0
+ assert self.linux_source_mock.run_kernel.call_count == 1
+- self.linux_source_mock.run_kernel.assert_called_once_with(build_dir='', timeout=300)
++ self.linux_source_mock.run_kernel.assert_called_once_with(build_dir='.kunit', timeout=300)
+ self.print_mock.assert_any_call(StrContains('Testing complete.'))
+
+ def test_run_passes_args_pass(self):
+@@ -273,7 +273,7 @@ class KUnitMainTest(unittest.TestCase):
+ assert self.linux_source_mock.build_reconfig.call_count == 1
+ assert self.linux_source_mock.run_kernel.call_count == 1
+ self.linux_source_mock.run_kernel.assert_called_once_with(
+- build_dir='', timeout=300)
++ build_dir='.kunit', timeout=300)
+ self.print_mock.assert_any_call(StrContains('Testing complete.'))
+
+ def test_exec_passes_args_fail(self):
+@@ -313,7 +313,7 @@ class KUnitMainTest(unittest.TestCase):
+ def test_exec_timeout(self):
+ timeout = 3453
+ kunit.main(['exec', '--timeout', str(timeout)], self.linux_source_mock)
+- self.linux_source_mock.run_kernel.assert_called_once_with(build_dir='', timeout=timeout)
++ self.linux_source_mock.run_kernel.assert_called_once_with(build_dir='.kunit', timeout=timeout)
+ self.print_mock.assert_any_call(StrContains('Testing complete.'))
+
+ def test_run_timeout(self):
+@@ -321,12 +321,12 @@ class KUnitMainTest(unittest.TestCase):
+ kunit.main(['run', '--timeout', str(timeout)], self.linux_source_mock)
+ assert self.linux_source_mock.build_reconfig.call_count == 1
+ self.linux_source_mock.run_kernel.assert_called_once_with(
+- build_dir='', timeout=timeout)
++ build_dir='.kunit', timeout=timeout)
+ self.print_mock.assert_any_call(StrContains('Testing complete.'))
+
+ def test_run_builddir(self):
+ build_dir = '.kunit'
+- kunit.main(['run', '--build_dir', build_dir], self.linux_source_mock)
++ kunit.main(['run', '--build_dir=.kunit'], self.linux_source_mock)
+ assert self.linux_source_mock.build_reconfig.call_count == 1
+ self.linux_source_mock.run_kernel.assert_called_once_with(
+ build_dir=build_dir, timeout=300)
+diff --git a/tools/testing/selftests/lkdtm/run.sh b/tools/testing/selftests/lkdtm/run.sh
+index ee64ff8df8f4..8383eb89d88a 100755
+--- a/tools/testing/selftests/lkdtm/run.sh
++++ b/tools/testing/selftests/lkdtm/run.sh
+@@ -8,6 +8,7 @@
+ #
+ set -e
+ TRIGGER=/sys/kernel/debug/provoke-crash/DIRECT
++CLEAR_ONCE=/sys/kernel/debug/clear_warn_once
+ KSELFTEST_SKIP_TEST=4
+
+ # Verify we have LKDTM available in the kernel.
+@@ -67,6 +68,11 @@ cleanup() {
+ }
+ trap cleanup EXIT
+
++# Reset WARN_ONCE counters so we trip it each time this runs.
++if [ -w $CLEAR_ONCE ] ; then
++ echo 1 > $CLEAR_ONCE
++fi
++
+ # Save existing dmesg so we can detect new content below
+ dmesg > "$DMESG"
+
+diff --git a/tools/testing/selftests/lkdtm/tests.txt b/tools/testing/selftests/lkdtm/tests.txt
+index 92ca32143ae5..9d266e79c6a2 100644
+--- a/tools/testing/selftests/lkdtm/tests.txt
++++ b/tools/testing/selftests/lkdtm/tests.txt
+@@ -14,6 +14,7 @@ STACK_GUARD_PAGE_LEADING
+ STACK_GUARD_PAGE_TRAILING
+ UNSET_SMEP CR4 bits went missing
+ DOUBLE_FAULT
++CORRUPT_PAC
+ UNALIGNED_LOAD_STORE_WRITE
+ #OVERWRITE_ALLOCATION Corrupts memory on failure
+ #WRITE_AFTER_FREE Corrupts memory on failure
+diff --git a/tools/testing/selftests/net/msg_zerocopy.c b/tools/testing/selftests/net/msg_zerocopy.c
+index 4b02933cab8a..bdc03a2097e8 100644
+--- a/tools/testing/selftests/net/msg_zerocopy.c
++++ b/tools/testing/selftests/net/msg_zerocopy.c
+@@ -125,9 +125,8 @@ static int do_setcpu(int cpu)
+ CPU_ZERO(&mask);
+ CPU_SET(cpu, &mask);
+ if (sched_setaffinity(0, sizeof(mask), &mask))
+- error(1, 0, "setaffinity %d", cpu);
+-
+- if (cfg_verbose)
++ fprintf(stderr, "cpu: unable to pin, may increase variance.\n");
++ else if (cfg_verbose)
+ fprintf(stderr, "cpu: %u\n", cpu);
+
+ return 0;
+diff --git a/tools/testing/selftests/powerpc/benchmarks/context_switch.c b/tools/testing/selftests/powerpc/benchmarks/context_switch.c
+index a2e8c9da7fa5..d50cc05df495 100644
+--- a/tools/testing/selftests/powerpc/benchmarks/context_switch.c
++++ b/tools/testing/selftests/powerpc/benchmarks/context_switch.c
+@@ -19,6 +19,7 @@
+ #include <limits.h>
+ #include <sys/time.h>
+ #include <sys/syscall.h>
++#include <sys/sysinfo.h>
+ #include <sys/types.h>
+ #include <sys/shm.h>
+ #include <linux/futex.h>
+@@ -104,8 +105,9 @@ static void start_thread_on(void *(*fn)(void *), void *arg, unsigned long cpu)
+
+ static void start_process_on(void *(*fn)(void *), void *arg, unsigned long cpu)
+ {
+- int pid;
+- cpu_set_t cpuset;
++ int pid, ncpus;
++ cpu_set_t *cpuset;
++ size_t size;
+
+ pid = fork();
+ if (pid == -1) {
+@@ -116,14 +118,23 @@ static void start_process_on(void *(*fn)(void *), void *arg, unsigned long cpu)
+ if (pid)
+ return;
+
+- CPU_ZERO(&cpuset);
+- CPU_SET(cpu, &cpuset);
++ ncpus = get_nprocs();
++ size = CPU_ALLOC_SIZE(ncpus);
++ cpuset = CPU_ALLOC(ncpus);
++ if (!cpuset) {
++ perror("malloc");
++ exit(1);
++ }
++ CPU_ZERO_S(size, cpuset);
++ CPU_SET_S(cpu, size, cpuset);
+
+- if (sched_setaffinity(0, sizeof(cpuset), &cpuset)) {
++ if (sched_setaffinity(0, size, cpuset)) {
+ perror("sched_setaffinity");
++ CPU_FREE(cpuset);
+ exit(1);
+ }
+
++ CPU_FREE(cpuset);
+ fn(arg);
+
+ exit(0);
+diff --git a/tools/testing/selftests/powerpc/eeh/eeh-functions.sh b/tools/testing/selftests/powerpc/eeh/eeh-functions.sh
+index f52ed92b53e7..00dc32c0ed75 100755
+--- a/tools/testing/selftests/powerpc/eeh/eeh-functions.sh
++++ b/tools/testing/selftests/powerpc/eeh/eeh-functions.sh
+@@ -5,12 +5,17 @@ pe_ok() {
+ local dev="$1"
+ local path="/sys/bus/pci/devices/$dev/eeh_pe_state"
+
+- if ! [ -e "$path" ] ; then
++ # if a driver doesn't support the error handling callbacks then the
++ # device is recovered by removing and re-probing it. This causes the
++ # sysfs directory to disappear so read the PE state once and squash
++ # any potential error messages
++ local eeh_state="$(cat $path 2>/dev/null)"
++ if [ -z "$eeh_state" ]; then
+ return 1;
+ fi
+
+- local fw_state="$(cut -d' ' -f1 < $path)"
+- local sw_state="$(cut -d' ' -f2 < $path)"
++ local fw_state="$(echo $eeh_state | cut -d' ' -f1)"
++ local sw_state="$(echo $eeh_state | cut -d' ' -f2)"
+
+ # If EEH_PE_ISOLATED or EEH_PE_RECOVERING are set then the PE is in an
+ # error state or being recovered. Either way, not ok.
+diff --git a/tools/testing/selftests/powerpc/utils.c b/tools/testing/selftests/powerpc/utils.c
+index 5ee0e98c4896..eb530e73e02c 100644
+--- a/tools/testing/selftests/powerpc/utils.c
++++ b/tools/testing/selftests/powerpc/utils.c
+@@ -16,6 +16,7 @@
+ #include <string.h>
+ #include <sys/ioctl.h>
+ #include <sys/stat.h>
++#include <sys/sysinfo.h>
+ #include <sys/types.h>
+ #include <sys/utsname.h>
+ #include <unistd.h>
+@@ -88,28 +89,40 @@ void *get_auxv_entry(int type)
+
+ int pick_online_cpu(void)
+ {
+- cpu_set_t mask;
+- int cpu;
++ int ncpus, cpu = -1;
++ cpu_set_t *mask;
++ size_t size;
++
++ ncpus = get_nprocs_conf();
++ size = CPU_ALLOC_SIZE(ncpus);
++ mask = CPU_ALLOC(ncpus);
++ if (!mask) {
++ perror("malloc");
++ return -1;
++ }
+
+- CPU_ZERO(&mask);
++ CPU_ZERO_S(size, mask);
+
+- if (sched_getaffinity(0, sizeof(mask), &mask)) {
++ if (sched_getaffinity(0, size, mask)) {
+ perror("sched_getaffinity");
+- return -1;
++ goto done;
+ }
+
+ /* We prefer a primary thread, but skip 0 */
+- for (cpu = 8; cpu < CPU_SETSIZE; cpu += 8)
+- if (CPU_ISSET(cpu, &mask))
+- return cpu;
++ for (cpu = 8; cpu < ncpus; cpu += 8)
++ if (CPU_ISSET_S(cpu, size, mask))
++ goto done;
+
+ /* Search for anything, but in reverse */
+- for (cpu = CPU_SETSIZE - 1; cpu >= 0; cpu--)
+- if (CPU_ISSET(cpu, &mask))
+- return cpu;
++ for (cpu = ncpus - 1; cpu >= 0; cpu--)
++ if (CPU_ISSET_S(cpu, size, mask))
++ goto done;
+
+ printf("No cpus in affinity mask?!\n");
+- return -1;
++
++done:
++ CPU_FREE(mask);
++ return cpu;
+ }
+
+ bool is_ppc64le(void)
+diff --git a/tools/testing/selftests/seccomp/seccomp_bpf.c b/tools/testing/selftests/seccomp/seccomp_bpf.c
+index 252140a52553..ccf276e13882 100644
+--- a/tools/testing/selftests/seccomp/seccomp_bpf.c
++++ b/tools/testing/selftests/seccomp/seccomp_bpf.c
+@@ -180,7 +180,7 @@ struct seccomp_metadata {
+ #define SECCOMP_IOCTL_NOTIF_RECV SECCOMP_IOWR(0, struct seccomp_notif)
+ #define SECCOMP_IOCTL_NOTIF_SEND SECCOMP_IOWR(1, \
+ struct seccomp_notif_resp)
+-#define SECCOMP_IOCTL_NOTIF_ID_VALID SECCOMP_IOR(2, __u64)
++#define SECCOMP_IOCTL_NOTIF_ID_VALID SECCOMP_IOW(2, __u64)
+
+ struct seccomp_notif {
+ __u64 id;
^ permalink raw reply related [flat|nested] 30+ messages in thread
* [gentoo-commits] proj/linux-patches:5.8 commit in: /
@ 2020-08-19 14:58 Mike Pagano
0 siblings, 0 replies; 30+ messages in thread
From: Mike Pagano @ 2020-08-19 14:58 UTC (permalink / raw
To: gentoo-commits
commit: 5566efea02bca361490723df10d8044143364ba6
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Aug 19 14:57:48 2020 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Aug 19 14:57:48 2020 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=5566efea
Remove redundant patch. See bug #738002
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 ----
2910_TVP5150-Fix-build-issue-by-selecting-REGMAP-I2C.patch | 10 ----------
2 files changed, 14 deletions(-)
diff --git a/0000_README b/0000_README
index 2409f92..6e28c94 100644
--- a/0000_README
+++ b/0000_README
@@ -67,10 +67,6 @@ Patch: 2900_tmp513-Fix-build-issue-by-selecting-CONFIG_REG.patch
From: https://bugs.gentoo.org/710790
Desc: tmp513 requies REGMAP_I2C to build. Select it by default in Kconfig. See bug #710790. Thanks to Phil Stracchino
-Patch: 2910_TVP5150-Fix-build-issue-by-selecting-REGMAP-I2C.patch
-From: https://bugs.gentoo.org/721096
-Desc: VIDEO_TVP5150 requies REGMAP_I2C to build. Select it by default in Kconfig. See bug #721096. Thanks to Max Steel
-
Patch: 2920_sign-file-patch-for-libressl.patch
From: https://bugs.gentoo.org/717166
Desc: sign-file: full functionality with modern LibreSSL
diff --git a/2910_TVP5150-Fix-build-issue-by-selecting-REGMAP-I2C.patch b/2910_TVP5150-Fix-build-issue-by-selecting-REGMAP-I2C.patch
deleted file mode 100644
index 1bc058e..0000000
--- a/2910_TVP5150-Fix-build-issue-by-selecting-REGMAP-I2C.patch
+++ /dev/null
@@ -1,10 +0,0 @@
---- a/drivers/media/i2c/Kconfig 2020-05-13 12:38:05.102903309 -0400
-+++ b/drivers/media/i2c/Kconfig 2020-05-13 12:38:51.283171977 -0400
-@@ -378,6 +378,7 @@ config VIDEO_TVP514X
- config VIDEO_TVP5150
- tristate "Texas Instruments TVP5150 video decoder"
- depends on VIDEO_V4L2 && I2C
-+ select REGMAP_I2C
- select V4L2_FWNODE
- help
- Support for the Texas Instruments TVP5150 video decoder.
* [gentoo-commits] proj/linux-patches:5.8 commit in: /
@ 2020-08-21 11:41 Alice Ferrazzi
0 siblings, 0 replies; 30+ messages in thread
From: Alice Ferrazzi @ 2020-08-21 11:41 UTC (permalink / raw
To: gentoo-commits
commit: bf645074ab68cde774b9613eb74462942e461c33
Author: Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
AuthorDate: Fri Aug 21 11:41:11 2020 +0000
Commit: Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
CommitDate: Fri Aug 21 11:41:17 2020 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=bf645074
Linux patch 5.8.3
Signed-off-by: Alice Ferrazzi <alicef <AT> gentoo.org>
0000_README | 4 +
1002_linux-5.8.3.patch | 8143 ++++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 8147 insertions(+)
diff --git a/0000_README b/0000_README
index 6e28c94..bacfc9f 100644
--- a/0000_README
+++ b/0000_README
@@ -51,6 +51,10 @@ Patch: 1001_linux-5.8.2.patch
From: http://www.kernel.org
Desc: Linux 5.8.2
+Patch: 1002_linux-5.8.3.patch
+From: http://www.kernel.org
+Desc: Linux 5.8.3
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1002_linux-5.8.3.patch b/1002_linux-5.8.3.patch
new file mode 100644
index 0000000..f212dd8
--- /dev/null
+++ b/1002_linux-5.8.3.patch
@@ -0,0 +1,8143 @@
+diff --git a/Documentation/admin-guide/hw-vuln/multihit.rst b/Documentation/admin-guide/hw-vuln/multihit.rst
+index ba9988d8bce50..140e4cec38c33 100644
+--- a/Documentation/admin-guide/hw-vuln/multihit.rst
++++ b/Documentation/admin-guide/hw-vuln/multihit.rst
+@@ -80,6 +80,10 @@ The possible values in this file are:
+ - The processor is not vulnerable.
+ * - KVM: Mitigation: Split huge pages
+ - Software changes mitigate this issue.
++ * - KVM: Mitigation: VMX unsupported
++ - KVM is not vulnerable because Virtual Machine Extensions (VMX) is not supported.
++ * - KVM: Mitigation: VMX disabled
++ - KVM is not vulnerable because Virtual Machine Extensions (VMX) is disabled.
+ * - KVM: Vulnerable
+ - The processor is vulnerable, but no mitigation enabled
+
+diff --git a/Documentation/devicetree/bindings/iio/multiplexer/io-channel-mux.txt b/Documentation/devicetree/bindings/iio/multiplexer/io-channel-mux.txt
+index c82794002595f..89647d7143879 100644
+--- a/Documentation/devicetree/bindings/iio/multiplexer/io-channel-mux.txt
++++ b/Documentation/devicetree/bindings/iio/multiplexer/io-channel-mux.txt
+@@ -21,7 +21,7 @@ controller state. The mux controller state is described in
+
+ Example:
+ mux: mux-controller {
+- compatible = "mux-gpio";
++ compatible = "gpio-mux";
+ #mux-control-cells = <0>;
+
+ mux-gpios = <&pioA 0 GPIO_ACTIVE_HIGH>,
+diff --git a/Makefile b/Makefile
+index 6940f82a15cc1..6001ed2b14c3a 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 8
+-SUBLEVEL = 2
++SUBLEVEL = 3
+ EXTRAVERSION =
+ NAME = Kleptomaniac Octopus
+
+diff --git a/arch/arm64/boot/dts/qcom/sc7180-idp.dts b/arch/arm64/boot/dts/qcom/sc7180-idp.dts
+index 4e9149d82d09e..17624d6440df5 100644
+--- a/arch/arm64/boot/dts/qcom/sc7180-idp.dts
++++ b/arch/arm64/boot/dts/qcom/sc7180-idp.dts
+@@ -312,7 +312,7 @@
+ &remoteproc_mpss {
+ status = "okay";
+ compatible = "qcom,sc7180-mss-pil";
+- iommus = <&apps_smmu 0x460 0x1>, <&apps_smmu 0x444 0x3>;
++ iommus = <&apps_smmu 0x461 0x0>, <&apps_smmu 0x444 0x3>;
+ memory-region = <&mba_mem &mpss_mem>;
+ };
+
+diff --git a/arch/arm64/boot/dts/qcom/sdm845-cheza.dtsi b/arch/arm64/boot/dts/qcom/sdm845-cheza.dtsi
+index 70466cc4b4055..64fc1bfd66fad 100644
+--- a/arch/arm64/boot/dts/qcom/sdm845-cheza.dtsi
++++ b/arch/arm64/boot/dts/qcom/sdm845-cheza.dtsi
+@@ -634,7 +634,7 @@ ap_ts_i2c: &i2c14 {
+ };
+
+ &mss_pil {
+- iommus = <&apps_smmu 0x780 0x1>,
++ iommus = <&apps_smmu 0x781 0x0>,
+ <&apps_smmu 0x724 0x3>;
+ };
+
+diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c
+index 4d7879484cecc..581602413a130 100644
+--- a/arch/arm64/kernel/perf_event.c
++++ b/arch/arm64/kernel/perf_event.c
+@@ -155,7 +155,7 @@ armv8pmu_events_sysfs_show(struct device *dev,
+
+ pmu_attr = container_of(attr, struct perf_pmu_events_attr, attr);
+
+- return sprintf(page, "event=0x%03llx\n", pmu_attr->id);
++ return sprintf(page, "event=0x%04llx\n", pmu_attr->id);
+ }
+
+ #define ARMV8_EVENT_ATTR(name, config) \
+@@ -244,10 +244,13 @@ armv8pmu_event_attr_is_visible(struct kobject *kobj,
+ test_bit(pmu_attr->id, cpu_pmu->pmceid_bitmap))
+ return attr->mode;
+
+- pmu_attr->id -= ARMV8_PMUV3_EXT_COMMON_EVENT_BASE;
+- if (pmu_attr->id < ARMV8_PMUV3_MAX_COMMON_EVENTS &&
+- test_bit(pmu_attr->id, cpu_pmu->pmceid_ext_bitmap))
+- return attr->mode;
++ if (pmu_attr->id >= ARMV8_PMUV3_EXT_COMMON_EVENT_BASE) {
++ u64 id = pmu_attr->id - ARMV8_PMUV3_EXT_COMMON_EVENT_BASE;
++
++ if (id < ARMV8_PMUV3_MAX_COMMON_EVENTS &&
++ test_bit(id, cpu_pmu->pmceid_ext_bitmap))
++ return attr->mode;
++ }
+
+ return 0;
+ }
+diff --git a/arch/mips/Kconfig b/arch/mips/Kconfig
+index 6fee1a133e9d6..a7e40bb1e5bc6 100644
+--- a/arch/mips/Kconfig
++++ b/arch/mips/Kconfig
+@@ -678,6 +678,7 @@ config SGI_IP27
+ select SYS_SUPPORTS_NUMA
+ select SYS_SUPPORTS_SMP
+ select MIPS_L1_CACHE_SHIFT_7
++ select NUMA
+ help
+ This are the SGI Origin 200, Origin 2000 and Onyx 2 Graphics
+ workstations. To compile a Linux kernel that runs on these, say Y
+diff --git a/arch/mips/boot/dts/ingenic/qi_lb60.dts b/arch/mips/boot/dts/ingenic/qi_lb60.dts
+index 7a371d9c5a33f..eda37fb516f0e 100644
+--- a/arch/mips/boot/dts/ingenic/qi_lb60.dts
++++ b/arch/mips/boot/dts/ingenic/qi_lb60.dts
+@@ -69,7 +69,7 @@
+ "Speaker", "OUTL",
+ "Speaker", "OUTR",
+ "INL", "LOUT",
+- "INL", "ROUT";
++ "INR", "ROUT";
+
+ simple-audio-card,aux-devs = <&>;
+
+diff --git a/arch/mips/kernel/topology.c b/arch/mips/kernel/topology.c
+index cd3e1f82e1a5d..08ad6371fbe08 100644
+--- a/arch/mips/kernel/topology.c
++++ b/arch/mips/kernel/topology.c
+@@ -20,7 +20,7 @@ static int __init topology_init(void)
+ for_each_present_cpu(i) {
+ struct cpu *c = &per_cpu(cpu_devices, i);
+
+- c->hotpluggable = 1;
++ c->hotpluggable = !!i;
+ ret = register_cpu(c, i);
+ if (ret)
+ printk(KERN_WARNING "topology_init: register_cpu %d "
+diff --git a/arch/openrisc/kernel/stacktrace.c b/arch/openrisc/kernel/stacktrace.c
+index 43f140a28bc72..54d38809e22cb 100644
+--- a/arch/openrisc/kernel/stacktrace.c
++++ b/arch/openrisc/kernel/stacktrace.c
+@@ -13,6 +13,7 @@
+ #include <linux/export.h>
+ #include <linux/sched.h>
+ #include <linux/sched/debug.h>
++#include <linux/sched/task_stack.h>
+ #include <linux/stacktrace.h>
+
+ #include <asm/processor.h>
+@@ -68,12 +69,25 @@ void save_stack_trace_tsk(struct task_struct *tsk, struct stack_trace *trace)
+ {
+ unsigned long *sp = NULL;
+
++ if (!try_get_task_stack(tsk))
++ return;
++
+ if (tsk == current)
+ sp = (unsigned long *) &sp;
+- else
+- sp = (unsigned long *) KSTK_ESP(tsk);
++ else {
++ unsigned long ksp;
++
++ /* Locate stack from kernel context */
++ ksp = task_thread_info(tsk)->ksp;
++ ksp += STACK_FRAME_OVERHEAD; /* redzone */
++ ksp += sizeof(struct pt_regs);
++
++ sp = (unsigned long *) ksp;
++ }
+
+ unwind_stack(trace, sp, save_stack_address_nosched);
++
++ put_task_stack(tsk);
+ }
+ EXPORT_SYMBOL_GPL(save_stack_trace_tsk);
+
+diff --git a/arch/powerpc/include/asm/percpu.h b/arch/powerpc/include/asm/percpu.h
+index dce863a7635cd..8e5b7d0b851c6 100644
+--- a/arch/powerpc/include/asm/percpu.h
++++ b/arch/powerpc/include/asm/percpu.h
+@@ -10,8 +10,6 @@
+
+ #ifdef CONFIG_SMP
+
+-#include <asm/paca.h>
+-
+ #define __my_cpu_offset local_paca->data_offset
+
+ #endif /* CONFIG_SMP */
+@@ -19,4 +17,6 @@
+
+ #include <asm-generic/percpu.h>
+
++#include <asm/paca.h>
++
+ #endif /* _ASM_POWERPC_PERCPU_H_ */
+diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c
+index 641fc5f3d7dd9..3ebb1792e6367 100644
+--- a/arch/powerpc/mm/fault.c
++++ b/arch/powerpc/mm/fault.c
+@@ -267,6 +267,9 @@ static bool bad_kernel_fault(struct pt_regs *regs, unsigned long error_code,
+ return false;
+ }
+
++// This comes from 64-bit struct rt_sigframe + __SIGNAL_FRAMESIZE
++#define SIGFRAME_MAX_SIZE (4096 + 128)
++
+ static bool bad_stack_expansion(struct pt_regs *regs, unsigned long address,
+ struct vm_area_struct *vma, unsigned int flags,
+ bool *must_retry)
+@@ -274,7 +277,7 @@ static bool bad_stack_expansion(struct pt_regs *regs, unsigned long address,
+ /*
+ * N.B. The POWER/Open ABI allows programs to access up to
+ * 288 bytes below the stack pointer.
+- * The kernel signal delivery code writes up to about 1.5kB
++ * The kernel signal delivery code writes a bit over 4KB
+ * below the stack pointer (r1) before decrementing it.
+ * The exec code can write slightly over 640kB to the stack
+ * before setting the user r1. Thus we allow the stack to
+@@ -299,7 +302,7 @@ static bool bad_stack_expansion(struct pt_regs *regs, unsigned long address,
+ * between the last mapped region and the stack will
+ * expand the stack rather than segfaulting.
+ */
+- if (address + 2048 >= uregs->gpr[1])
++ if (address + SIGFRAME_MAX_SIZE >= uregs->gpr[1])
+ return false;
+
+ if ((flags & FAULT_FLAG_WRITE) && (flags & FAULT_FLAG_USER) &&
+diff --git a/arch/powerpc/mm/ptdump/hashpagetable.c b/arch/powerpc/mm/ptdump/hashpagetable.c
+index a2c33efc7ce8d..5b8bd34cd3a16 100644
+--- a/arch/powerpc/mm/ptdump/hashpagetable.c
++++ b/arch/powerpc/mm/ptdump/hashpagetable.c
+@@ -258,7 +258,7 @@ static int pseries_find(unsigned long ea, int psize, bool primary, u64 *v, u64 *
+ for (i = 0; i < HPTES_PER_GROUP; i += 4, hpte_group += 4) {
+ lpar_rc = plpar_pte_read_4(0, hpte_group, (void *)ptes);
+
+- if (lpar_rc != H_SUCCESS)
++ if (lpar_rc)
+ continue;
+ for (j = 0; j < 4; j++) {
+ if (HPTE_V_COMPARE(ptes[j].v, want_v) &&
+diff --git a/arch/powerpc/platforms/pseries/hotplug-memory.c b/arch/powerpc/platforms/pseries/hotplug-memory.c
+index 5ace2f9a277e9..8b748690dac22 100644
+--- a/arch/powerpc/platforms/pseries/hotplug-memory.c
++++ b/arch/powerpc/platforms/pseries/hotplug-memory.c
+@@ -27,7 +27,7 @@ static bool rtas_hp_event;
+ unsigned long pseries_memory_block_size(void)
+ {
+ struct device_node *np;
+- unsigned int memblock_size = MIN_MEMORY_BLOCK_SIZE;
++ u64 memblock_size = MIN_MEMORY_BLOCK_SIZE;
+ struct resource r;
+
+ np = of_find_node_by_path("/ibm,dynamic-reconfiguration-memory");
+diff --git a/arch/s390/Kconfig b/arch/s390/Kconfig
+index c7d7ede6300c5..4907a5149a8a3 100644
+--- a/arch/s390/Kconfig
++++ b/arch/s390/Kconfig
+@@ -769,6 +769,7 @@ config VFIO_AP
+ def_tristate n
+ prompt "VFIO support for AP devices"
+ depends on S390_AP_IOMMU && VFIO_MDEV_DEVICE && KVM
++ depends on ZCRYPT
+ help
+ This driver grants access to Adjunct Processor (AP) devices
+ via the VFIO mediated device interface.
+diff --git a/arch/s390/lib/test_unwind.c b/arch/s390/lib/test_unwind.c
+index 32b7a30b2485d..b0b12b46bc572 100644
+--- a/arch/s390/lib/test_unwind.c
++++ b/arch/s390/lib/test_unwind.c
+@@ -63,6 +63,7 @@ static noinline int test_unwind(struct task_struct *task, struct pt_regs *regs,
+ break;
+ if (state.reliable && !addr) {
+ pr_err("unwind state reliable but addr is 0\n");
++ kfree(bt);
+ return -EINVAL;
+ }
+ sprint_symbol(sym, addr);
+diff --git a/arch/sh/boards/mach-landisk/setup.c b/arch/sh/boards/mach-landisk/setup.c
+index 16b4d8b0bb850..2c44b94f82fb2 100644
+--- a/arch/sh/boards/mach-landisk/setup.c
++++ b/arch/sh/boards/mach-landisk/setup.c
+@@ -82,6 +82,9 @@ device_initcall(landisk_devices_setup);
+
+ static void __init landisk_setup(char **cmdline_p)
+ {
++ /* I/O port identity mapping */
++ __set_io_port_base(0);
++
+ /* LED ON */
+ __raw_writeb(__raw_readb(PA_LED) | 0x03, PA_LED);
+
+diff --git a/arch/sh/mm/fault.c b/arch/sh/mm/fault.c
+index fbe1f2fe9a8c8..acd1c75994983 100644
+--- a/arch/sh/mm/fault.c
++++ b/arch/sh/mm/fault.c
+@@ -208,7 +208,6 @@ show_fault_oops(struct pt_regs *regs, unsigned long address)
+ if (!oops_may_print())
+ return;
+
+- printk(KERN_ALERT "PC:");
+ pr_alert("BUG: unable to handle kernel %s at %08lx\n",
+ address < PAGE_SIZE ? "NULL pointer dereference"
+ : "paging request",
+diff --git a/arch/x86/events/rapl.c b/arch/x86/events/rapl.c
+index 0f2bf59f43541..51ff9a3618c95 100644
+--- a/arch/x86/events/rapl.c
++++ b/arch/x86/events/rapl.c
+@@ -665,7 +665,7 @@ static const struct attribute_group *rapl_attr_update[] = {
+ &rapl_events_pkg_group,
+ &rapl_events_ram_group,
+ &rapl_events_gpu_group,
+- &rapl_events_gpu_group,
++ &rapl_events_psys_group,
+ NULL,
+ };
+
+diff --git a/arch/x86/kernel/apic/vector.c b/arch/x86/kernel/apic/vector.c
+index 7649da2478d8a..dae32d948bf25 100644
+--- a/arch/x86/kernel/apic/vector.c
++++ b/arch/x86/kernel/apic/vector.c
+@@ -560,6 +560,10 @@ static int x86_vector_alloc_irqs(struct irq_domain *domain, unsigned int virq,
+ * as that can corrupt the affinity move state.
+ */
+ irqd_set_handle_enforce_irqctx(irqd);
++
++ /* Don't invoke affinity setter on deactivated interrupts */
++ irqd_set_affinity_on_activate(irqd);
++
+ /*
+ * Legacy vectors are already assigned when the IOAPIC
+ * takes them over. They stay on the same vector. This is
+diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
+index 0b71970d2d3d2..b0802d45abd30 100644
+--- a/arch/x86/kernel/cpu/bugs.c
++++ b/arch/x86/kernel/cpu/bugs.c
+@@ -31,6 +31,7 @@
+ #include <asm/intel-family.h>
+ #include <asm/e820/api.h>
+ #include <asm/hypervisor.h>
++#include <asm/tlbflush.h>
+
+ #include "cpu.h"
+
+@@ -1556,7 +1557,12 @@ static ssize_t l1tf_show_state(char *buf)
+
+ static ssize_t itlb_multihit_show_state(char *buf)
+ {
+- if (itlb_multihit_kvm_mitigation)
++ if (!boot_cpu_has(X86_FEATURE_MSR_IA32_FEAT_CTL) ||
++ !boot_cpu_has(X86_FEATURE_VMX))
++ return sprintf(buf, "KVM: Mitigation: VMX unsupported\n");
++ else if (!(cr4_read_shadow() & X86_CR4_VMXE))
++ return sprintf(buf, "KVM: Mitigation: VMX disabled\n");
++ else if (itlb_multihit_kvm_mitigation)
+ return sprintf(buf, "KVM: Mitigation: Split huge pages\n");
+ else
+ return sprintf(buf, "KVM: Vulnerable\n");
+diff --git a/arch/x86/kernel/tsc_msr.c b/arch/x86/kernel/tsc_msr.c
+index 4fec6f3a1858b..a654a9b4b77c0 100644
+--- a/arch/x86/kernel/tsc_msr.c
++++ b/arch/x86/kernel/tsc_msr.c
+@@ -133,10 +133,15 @@ static const struct freq_desc freq_desc_ann = {
+ .mask = 0x0f,
+ };
+
+-/* 24 MHz crystal? : 24 * 13 / 4 = 78 MHz */
++/*
++ * 24 MHz crystal? : 24 * 13 / 4 = 78 MHz
++ * Frequency step for Lightning Mountain SoC is fixed to 78 MHz,
++ * so all the frequency entries are 78000.
++ */
+ static const struct freq_desc freq_desc_lgm = {
+ .use_msr_plat = true,
+- .freqs = { 78000, 78000, 78000, 78000, 78000, 78000, 78000, 78000 },
++ .freqs = { 78000, 78000, 78000, 78000, 78000, 78000, 78000, 78000,
++ 78000, 78000, 78000, 78000, 78000, 78000, 78000, 78000 },
+ .mask = 0x0f,
+ };
+
+diff --git a/arch/xtensa/include/asm/thread_info.h b/arch/xtensa/include/asm/thread_info.h
+index f092cc3f4e66d..956d4d47c6cd1 100644
+--- a/arch/xtensa/include/asm/thread_info.h
++++ b/arch/xtensa/include/asm/thread_info.h
+@@ -55,6 +55,10 @@ struct thread_info {
+ mm_segment_t addr_limit; /* thread address space */
+
+ unsigned long cpenable;
++#if XCHAL_HAVE_EXCLUSIVE
++ /* result of the most recent exclusive store */
++ unsigned long atomctl8;
++#endif
+
+ /* Allocate storage for extra user states and coprocessor states. */
+ #if XTENSA_HAVE_COPROCESSORS
+diff --git a/arch/xtensa/kernel/asm-offsets.c b/arch/xtensa/kernel/asm-offsets.c
+index 33a257b33723a..dc5c83cad9be8 100644
+--- a/arch/xtensa/kernel/asm-offsets.c
++++ b/arch/xtensa/kernel/asm-offsets.c
+@@ -93,6 +93,9 @@ int main(void)
+ DEFINE(THREAD_RA, offsetof (struct task_struct, thread.ra));
+ DEFINE(THREAD_SP, offsetof (struct task_struct, thread.sp));
+ DEFINE(THREAD_CPENABLE, offsetof (struct thread_info, cpenable));
++#if XCHAL_HAVE_EXCLUSIVE
++ DEFINE(THREAD_ATOMCTL8, offsetof (struct thread_info, atomctl8));
++#endif
+ #if XTENSA_HAVE_COPROCESSORS
+ DEFINE(THREAD_XTREGS_CP0, offsetof(struct thread_info, xtregs_cp.cp0));
+ DEFINE(THREAD_XTREGS_CP1, offsetof(struct thread_info, xtregs_cp.cp1));
+diff --git a/arch/xtensa/kernel/entry.S b/arch/xtensa/kernel/entry.S
+index 98515c24d9b28..703cf6205efec 100644
+--- a/arch/xtensa/kernel/entry.S
++++ b/arch/xtensa/kernel/entry.S
+@@ -374,6 +374,11 @@ common_exception:
+ s32i a2, a1, PT_LCOUNT
+ #endif
+
++#if XCHAL_HAVE_EXCLUSIVE
++ /* Clear exclusive access monitor set by interrupted code */
++ clrex
++#endif
++
+ /* It is now save to restore the EXC_TABLE_FIXUP variable. */
+
+ rsr a2, exccause
+@@ -2020,6 +2025,12 @@ ENTRY(_switch_to)
+ s32i a3, a4, THREAD_CPENABLE
+ #endif
+
++#if XCHAL_HAVE_EXCLUSIVE
++ l32i a3, a5, THREAD_ATOMCTL8
++ getex a3
++ s32i a3, a4, THREAD_ATOMCTL8
++#endif
++
+ /* Flush register file. */
+
+ spill_registers_kernel
+diff --git a/arch/xtensa/kernel/perf_event.c b/arch/xtensa/kernel/perf_event.c
+index 99fcd63ce597f..a0d05c8598d0f 100644
+--- a/arch/xtensa/kernel/perf_event.c
++++ b/arch/xtensa/kernel/perf_event.c
+@@ -399,7 +399,7 @@ static struct pmu xtensa_pmu = {
+ .read = xtensa_pmu_read,
+ };
+
+-static int xtensa_pmu_setup(int cpu)
++static int xtensa_pmu_setup(unsigned int cpu)
+ {
+ unsigned i;
+
+diff --git a/crypto/af_alg.c b/crypto/af_alg.c
+index 28fc323e3fe30..5882ed46f1adb 100644
+--- a/crypto/af_alg.c
++++ b/crypto/af_alg.c
+@@ -635,6 +635,7 @@ void af_alg_pull_tsgl(struct sock *sk, size_t used, struct scatterlist *dst,
+
+ if (!ctx->used)
+ ctx->merge = 0;
++ ctx->init = ctx->more;
+ }
+ EXPORT_SYMBOL_GPL(af_alg_pull_tsgl);
+
+@@ -734,9 +735,10 @@ EXPORT_SYMBOL_GPL(af_alg_wmem_wakeup);
+ *
+ * @sk socket of connection to user space
+ * @flags If MSG_DONTWAIT is set, then only report if function would sleep
++ * @min Set to minimum request size if partial requests are allowed.
+ * @return 0 when writable memory is available, < 0 upon error
+ */
+-int af_alg_wait_for_data(struct sock *sk, unsigned flags)
++int af_alg_wait_for_data(struct sock *sk, unsigned flags, unsigned min)
+ {
+ DEFINE_WAIT_FUNC(wait, woken_wake_function);
+ struct alg_sock *ask = alg_sk(sk);
+@@ -754,7 +756,9 @@ int af_alg_wait_for_data(struct sock *sk, unsigned flags)
+ if (signal_pending(current))
+ break;
+ timeout = MAX_SCHEDULE_TIMEOUT;
+- if (sk_wait_event(sk, &timeout, (ctx->used || !ctx->more),
++ if (sk_wait_event(sk, &timeout,
++ ctx->init && (!ctx->more ||
++ (min && ctx->used >= min)),
+ &wait)) {
+ err = 0;
+ break;
+@@ -843,10 +847,11 @@ int af_alg_sendmsg(struct socket *sock, struct msghdr *msg, size_t size,
+ }
+
+ lock_sock(sk);
+- if (!ctx->more && ctx->used) {
++ if (ctx->init && (init || !ctx->more)) {
+ err = -EINVAL;
+ goto unlock;
+ }
++ ctx->init = true;
+
+ if (init) {
+ ctx->enc = enc;
+diff --git a/crypto/algif_aead.c b/crypto/algif_aead.c
+index 0ae000a61c7f5..43c6aa784858b 100644
+--- a/crypto/algif_aead.c
++++ b/crypto/algif_aead.c
+@@ -106,8 +106,8 @@ static int _aead_recvmsg(struct socket *sock, struct msghdr *msg,
+ size_t usedpages = 0; /* [in] RX bufs to be used from user */
+ size_t processed = 0; /* [in] TX bufs to be consumed */
+
+- if (!ctx->used) {
+- err = af_alg_wait_for_data(sk, flags);
++ if (!ctx->init || ctx->more) {
++ err = af_alg_wait_for_data(sk, flags, 0);
+ if (err)
+ return err;
+ }
+@@ -558,12 +558,6 @@ static int aead_accept_parent_nokey(void *private, struct sock *sk)
+
+ INIT_LIST_HEAD(&ctx->tsgl_list);
+ ctx->len = len;
+- ctx->used = 0;
+- atomic_set(&ctx->rcvused, 0);
+- ctx->more = 0;
+- ctx->merge = 0;
+- ctx->enc = 0;
+- ctx->aead_assoclen = 0;
+ crypto_init_wait(&ctx->wait);
+
+ ask->private = ctx;
+diff --git a/crypto/algif_skcipher.c b/crypto/algif_skcipher.c
+index ec5567c87a6df..81c4022285a7c 100644
+--- a/crypto/algif_skcipher.c
++++ b/crypto/algif_skcipher.c
+@@ -61,8 +61,8 @@ static int _skcipher_recvmsg(struct socket *sock, struct msghdr *msg,
+ int err = 0;
+ size_t len = 0;
+
+- if (!ctx->used) {
+- err = af_alg_wait_for_data(sk, flags);
++ if (!ctx->init || (ctx->more && ctx->used < bs)) {
++ err = af_alg_wait_for_data(sk, flags, bs);
+ if (err)
+ return err;
+ }
+@@ -333,6 +333,7 @@ static int skcipher_accept_parent_nokey(void *private, struct sock *sk)
+ ctx = sock_kmalloc(sk, len, GFP_KERNEL);
+ if (!ctx)
+ return -ENOMEM;
++ memset(ctx, 0, len);
+
+ ctx->iv = sock_kmalloc(sk, crypto_skcipher_ivsize(tfm),
+ GFP_KERNEL);
+@@ -340,16 +341,10 @@ static int skcipher_accept_parent_nokey(void *private, struct sock *sk)
+ sock_kfree_s(sk, ctx, len);
+ return -ENOMEM;
+ }
+-
+ memset(ctx->iv, 0, crypto_skcipher_ivsize(tfm));
+
+ INIT_LIST_HEAD(&ctx->tsgl_list);
+ ctx->len = len;
+- ctx->used = 0;
+- atomic_set(&ctx->rcvused, 0);
+- ctx->more = 0;
+- ctx->merge = 0;
+- ctx->enc = 0;
+ crypto_init_wait(&ctx->wait);
+
+ ask->private = ctx;
+diff --git a/drivers/acpi/nfit/core.c b/drivers/acpi/nfit/core.c
+index 7c138a4edc03e..1f72ce1a782b5 100644
+--- a/drivers/acpi/nfit/core.c
++++ b/drivers/acpi/nfit/core.c
+@@ -1823,6 +1823,7 @@ static void populate_shutdown_status(struct nfit_mem *nfit_mem)
+ static int acpi_nfit_add_dimm(struct acpi_nfit_desc *acpi_desc,
+ struct nfit_mem *nfit_mem, u32 device_handle)
+ {
++ struct nvdimm_bus_descriptor *nd_desc = &acpi_desc->nd_desc;
+ struct acpi_device *adev, *adev_dimm;
+ struct device *dev = acpi_desc->dev;
+ unsigned long dsm_mask, label_mask;
+@@ -1834,6 +1835,7 @@ static int acpi_nfit_add_dimm(struct acpi_nfit_desc *acpi_desc,
+ /* nfit test assumes 1:1 relationship between commands and dsms */
+ nfit_mem->dsm_mask = acpi_desc->dimm_cmd_force_en;
+ nfit_mem->family = NVDIMM_FAMILY_INTEL;
++ set_bit(NVDIMM_FAMILY_INTEL, &nd_desc->dimm_family_mask);
+
+ if (dcr->valid_fields & ACPI_NFIT_CONTROL_MFG_INFO_VALID)
+ sprintf(nfit_mem->id, "%04x-%02x-%04x-%08x",
+@@ -1886,10 +1888,13 @@ static int acpi_nfit_add_dimm(struct acpi_nfit_desc *acpi_desc,
+ * Note, that checking for function0 (bit0) tells us if any commands
+ * are reachable through this GUID.
+ */
++ clear_bit(NVDIMM_FAMILY_INTEL, &nd_desc->dimm_family_mask);
+ for (i = 0; i <= NVDIMM_FAMILY_MAX; i++)
+- if (acpi_check_dsm(adev_dimm->handle, to_nfit_uuid(i), 1, 1))
++ if (acpi_check_dsm(adev_dimm->handle, to_nfit_uuid(i), 1, 1)) {
++ set_bit(i, &nd_desc->dimm_family_mask);
+ if (family < 0 || i == default_dsm_family)
+ family = i;
++ }
+
+ /* limit the supported commands to those that are publicly documented */
+ nfit_mem->family = family;
+@@ -2153,6 +2158,9 @@ static void acpi_nfit_init_dsms(struct acpi_nfit_desc *acpi_desc)
+
+ nd_desc->cmd_mask = acpi_desc->bus_cmd_force_en;
+ nd_desc->bus_dsm_mask = acpi_desc->bus_nfit_cmd_force_en;
++ set_bit(ND_CMD_CALL, &nd_desc->cmd_mask);
++ set_bit(NVDIMM_BUS_FAMILY_NFIT, &nd_desc->bus_family_mask);
++
+ adev = to_acpi_dev(acpi_desc);
+ if (!adev)
+ return;
+@@ -2160,7 +2168,6 @@ static void acpi_nfit_init_dsms(struct acpi_nfit_desc *acpi_desc)
+ for (i = ND_CMD_ARS_CAP; i <= ND_CMD_CLEAR_ERROR; i++)
+ if (acpi_check_dsm(adev->handle, guid, 1, 1ULL << i))
+ set_bit(i, &nd_desc->cmd_mask);
+- set_bit(ND_CMD_CALL, &nd_desc->cmd_mask);
+
+ dsm_mask =
+ (1 << ND_CMD_ARS_CAP) |
+diff --git a/drivers/acpi/nfit/nfit.h b/drivers/acpi/nfit/nfit.h
+index f5525f8bb7708..5c5e7ebba8dc6 100644
+--- a/drivers/acpi/nfit/nfit.h
++++ b/drivers/acpi/nfit/nfit.h
+@@ -33,7 +33,6 @@
+ | ACPI_NFIT_MEM_RESTORE_FAILED | ACPI_NFIT_MEM_FLUSH_FAILED \
+ | ACPI_NFIT_MEM_NOT_ARMED | ACPI_NFIT_MEM_MAP_FAILED)
+
+-#define NVDIMM_FAMILY_MAX NVDIMM_FAMILY_HYPERV
+ #define NVDIMM_CMD_MAX 31
+
+ #define NVDIMM_STANDARD_CMDMASK \
+diff --git a/drivers/base/dd.c b/drivers/base/dd.c
+index e8628716ea345..18e81d65d32c4 100644
+--- a/drivers/base/dd.c
++++ b/drivers/base/dd.c
+@@ -844,7 +844,9 @@ static int __device_attach(struct device *dev, bool allow_async)
+ int ret = 0;
+
+ device_lock(dev);
+- if (dev->driver) {
++ if (dev->p->dead) {
++ goto out_unlock;
++ } else if (dev->driver) {
+ if (device_is_bound(dev)) {
+ ret = 1;
+ goto out_unlock;
+diff --git a/drivers/clk/Kconfig b/drivers/clk/Kconfig
+index 326f91b2dda9f..5f952e111ab5a 100644
+--- a/drivers/clk/Kconfig
++++ b/drivers/clk/Kconfig
+@@ -50,7 +50,7 @@ source "drivers/clk/versatile/Kconfig"
+ config CLK_HSDK
+ bool "PLL Driver for HSDK platform"
+ depends on OF || COMPILE_TEST
+- depends on IOMEM
++ depends on HAS_IOMEM
+ help
+ This driver supports the HSDK core, system, ddr, tunnel and hdmi PLLs
+ control.
+diff --git a/drivers/clk/actions/owl-s500.c b/drivers/clk/actions/owl-s500.c
+index e2007ac4d235d..0eb83a0b70bcc 100644
+--- a/drivers/clk/actions/owl-s500.c
++++ b/drivers/clk/actions/owl-s500.c
+@@ -183,7 +183,7 @@ static OWL_GATE(timer_clk, "timer_clk", "hosc", CMU_DEVCLKEN1, 27, 0, 0);
+ static OWL_GATE(hdmi_clk, "hdmi_clk", "hosc", CMU_DEVCLKEN1, 3, 0, 0);
+
+ /* divider clocks */
+-static OWL_DIVIDER(h_clk, "h_clk", "ahbprevdiv_clk", CMU_BUSCLK1, 12, 2, NULL, 0, 0);
++static OWL_DIVIDER(h_clk, "h_clk", "ahbprediv_clk", CMU_BUSCLK1, 12, 2, NULL, 0, 0);
+ static OWL_DIVIDER(rmii_ref_clk, "rmii_ref_clk", "ethernet_pll_clk", CMU_ETHERNETPLL, 1, 1, rmii_ref_div_table, 0, 0);
+
+ /* factor clocks */
+diff --git a/drivers/clk/bcm/clk-bcm2835.c b/drivers/clk/bcm/clk-bcm2835.c
+index 6bb7efa12037b..011802f1a6df9 100644
+--- a/drivers/clk/bcm/clk-bcm2835.c
++++ b/drivers/clk/bcm/clk-bcm2835.c
+@@ -314,6 +314,7 @@ struct bcm2835_cprman {
+ struct device *dev;
+ void __iomem *regs;
+ spinlock_t regs_lock; /* spinlock for all clocks */
++ unsigned int soc;
+
+ /*
+ * Real names of cprman clock parents looked up through
+@@ -525,6 +526,20 @@ static int bcm2835_pll_is_on(struct clk_hw *hw)
+ A2W_PLL_CTRL_PRST_DISABLE;
+ }
+
++static u32 bcm2835_pll_get_prediv_mask(struct bcm2835_cprman *cprman,
++ const struct bcm2835_pll_data *data)
++{
++ /*
++ * On BCM2711 there isn't a pre-divisor available in the PLL feedback
++ * loop. Bits 13:14 of ANA1 (PLLA,PLLB,PLLC,PLLD) have been re-purposed
++ * for to for VCO RANGE bits.
++ */
++ if (cprman->soc & SOC_BCM2711)
++ return 0;
++
++ return data->ana->fb_prediv_mask;
++}
++
+ static void bcm2835_pll_choose_ndiv_and_fdiv(unsigned long rate,
+ unsigned long parent_rate,
+ u32 *ndiv, u32 *fdiv)
+@@ -582,7 +597,7 @@ static unsigned long bcm2835_pll_get_rate(struct clk_hw *hw,
+ ndiv = (a2wctrl & A2W_PLL_CTRL_NDIV_MASK) >> A2W_PLL_CTRL_NDIV_SHIFT;
+ pdiv = (a2wctrl & A2W_PLL_CTRL_PDIV_MASK) >> A2W_PLL_CTRL_PDIV_SHIFT;
+ using_prediv = cprman_read(cprman, data->ana_reg_base + 4) &
+- data->ana->fb_prediv_mask;
++ bcm2835_pll_get_prediv_mask(cprman, data);
+
+ if (using_prediv) {
+ ndiv *= 2;
+@@ -665,6 +680,7 @@ static int bcm2835_pll_set_rate(struct clk_hw *hw,
+ struct bcm2835_pll *pll = container_of(hw, struct bcm2835_pll, hw);
+ struct bcm2835_cprman *cprman = pll->cprman;
+ const struct bcm2835_pll_data *data = pll->data;
++ u32 prediv_mask = bcm2835_pll_get_prediv_mask(cprman, data);
+ bool was_using_prediv, use_fb_prediv, do_ana_setup_first;
+ u32 ndiv, fdiv, a2w_ctl;
+ u32 ana[4];
+@@ -682,7 +698,7 @@ static int bcm2835_pll_set_rate(struct clk_hw *hw,
+ for (i = 3; i >= 0; i--)
+ ana[i] = cprman_read(cprman, data->ana_reg_base + i * 4);
+
+- was_using_prediv = ana[1] & data->ana->fb_prediv_mask;
++ was_using_prediv = ana[1] & prediv_mask;
+
+ ana[0] &= ~data->ana->mask0;
+ ana[0] |= data->ana->set0;
+@@ -692,10 +708,10 @@ static int bcm2835_pll_set_rate(struct clk_hw *hw,
+ ana[3] |= data->ana->set3;
+
+ if (was_using_prediv && !use_fb_prediv) {
+- ana[1] &= ~data->ana->fb_prediv_mask;
++ ana[1] &= ~prediv_mask;
+ do_ana_setup_first = true;
+ } else if (!was_using_prediv && use_fb_prediv) {
+- ana[1] |= data->ana->fb_prediv_mask;
++ ana[1] |= prediv_mask;
+ do_ana_setup_first = false;
+ } else {
+ do_ana_setup_first = true;
+@@ -2238,6 +2254,7 @@ static int bcm2835_clk_probe(struct platform_device *pdev)
+ platform_set_drvdata(pdev, cprman);
+
+ cprman->onecell.num = asize;
++ cprman->soc = pdata->soc;
+ hws = cprman->onecell.hws;
+
+ for (i = 0; i < asize; i++) {
+diff --git a/drivers/clk/qcom/clk-alpha-pll.c b/drivers/clk/qcom/clk-alpha-pll.c
+index 9b2dfa08acb2a..1325139173c95 100644
+--- a/drivers/clk/qcom/clk-alpha-pll.c
++++ b/drivers/clk/qcom/clk-alpha-pll.c
+@@ -56,7 +56,6 @@
+ #define PLL_STATUS(p) ((p)->offset + (p)->regs[PLL_OFF_STATUS])
+ #define PLL_OPMODE(p) ((p)->offset + (p)->regs[PLL_OFF_OPMODE])
+ #define PLL_FRAC(p) ((p)->offset + (p)->regs[PLL_OFF_FRAC])
+-#define PLL_CAL_VAL(p) ((p)->offset + (p)->regs[PLL_OFF_CAL_VAL])
+
+ const u8 clk_alpha_pll_regs[][PLL_OFF_MAX_REGS] = {
+ [CLK_ALPHA_PLL_TYPE_DEFAULT] = {
+@@ -115,7 +114,6 @@ const u8 clk_alpha_pll_regs[][PLL_OFF_MAX_REGS] = {
+ [PLL_OFF_STATUS] = 0x30,
+ [PLL_OFF_OPMODE] = 0x38,
+ [PLL_OFF_ALPHA_VAL] = 0x40,
+- [PLL_OFF_CAL_VAL] = 0x44,
+ },
+ [CLK_ALPHA_PLL_TYPE_LUCID] = {
+ [PLL_OFF_L_VAL] = 0x04,
+diff --git a/drivers/clk/qcom/gcc-sdm660.c b/drivers/clk/qcom/gcc-sdm660.c
+index bf5730832ef3d..c6fb57cd576f5 100644
+--- a/drivers/clk/qcom/gcc-sdm660.c
++++ b/drivers/clk/qcom/gcc-sdm660.c
+@@ -1715,6 +1715,9 @@ static struct clk_branch gcc_mss_cfg_ahb_clk = {
+
+ static struct clk_branch gcc_mss_mnoc_bimc_axi_clk = {
+ .halt_reg = 0x8a004,
++ .halt_check = BRANCH_HALT,
++ .hwcg_reg = 0x8a004,
++ .hwcg_bit = 1,
+ .clkr = {
+ .enable_reg = 0x8a004,
+ .enable_mask = BIT(0),
+diff --git a/drivers/clk/qcom/gcc-sm8150.c b/drivers/clk/qcom/gcc-sm8150.c
+index 72524cf110487..55e9d6d75a0cd 100644
+--- a/drivers/clk/qcom/gcc-sm8150.c
++++ b/drivers/clk/qcom/gcc-sm8150.c
+@@ -1617,6 +1617,7 @@ static struct clk_branch gcc_gpu_cfg_ahb_clk = {
+ };
+
+ static struct clk_branch gcc_gpu_gpll0_clk_src = {
++ .halt_check = BRANCH_HALT_SKIP,
+ .clkr = {
+ .enable_reg = 0x52004,
+ .enable_mask = BIT(15),
+@@ -1632,13 +1633,14 @@ static struct clk_branch gcc_gpu_gpll0_clk_src = {
+ };
+
+ static struct clk_branch gcc_gpu_gpll0_div_clk_src = {
++ .halt_check = BRANCH_HALT_SKIP,
+ .clkr = {
+ .enable_reg = 0x52004,
+ .enable_mask = BIT(16),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_gpu_gpll0_div_clk_src",
+ .parent_hws = (const struct clk_hw *[]){
+- &gcc_gpu_gpll0_clk_src.clkr.hw },
++ &gpll0_out_even.clkr.hw },
+ .num_parents = 1,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_branch2_ops,
+@@ -1729,6 +1731,7 @@ static struct clk_branch gcc_npu_cfg_ahb_clk = {
+ };
+
+ static struct clk_branch gcc_npu_gpll0_clk_src = {
++ .halt_check = BRANCH_HALT_SKIP,
+ .clkr = {
+ .enable_reg = 0x52004,
+ .enable_mask = BIT(18),
+@@ -1744,13 +1747,14 @@ static struct clk_branch gcc_npu_gpll0_clk_src = {
+ };
+
+ static struct clk_branch gcc_npu_gpll0_div_clk_src = {
++ .halt_check = BRANCH_HALT_SKIP,
+ .clkr = {
+ .enable_reg = 0x52004,
+ .enable_mask = BIT(19),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_npu_gpll0_div_clk_src",
+ .parent_hws = (const struct clk_hw *[]){
+- &gcc_npu_gpll0_clk_src.clkr.hw },
++ &gpll0_out_even.clkr.hw },
+ .num_parents = 1,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_branch2_ops,
+diff --git a/drivers/clk/sirf/clk-atlas6.c b/drivers/clk/sirf/clk-atlas6.c
+index c84d5bab7ac28..b95483bb6a5ec 100644
+--- a/drivers/clk/sirf/clk-atlas6.c
++++ b/drivers/clk/sirf/clk-atlas6.c
+@@ -135,7 +135,7 @@ static void __init atlas6_clk_init(struct device_node *np)
+
+ for (i = pll1; i < maxclk; i++) {
+ atlas6_clks[i] = clk_register(NULL, atlas6_clk_hw_array[i]);
+- BUG_ON(!atlas6_clks[i]);
++ BUG_ON(IS_ERR(atlas6_clks[i]));
+ }
+ clk_register_clkdev(atlas6_clks[cpu], NULL, "cpu");
+ clk_register_clkdev(atlas6_clks[io], NULL, "io");
+diff --git a/drivers/crypto/caam/caamalg.c b/drivers/crypto/caam/caamalg.c
+index bf90a4fcabd1f..8149ac4d6ef22 100644
+--- a/drivers/crypto/caam/caamalg.c
++++ b/drivers/crypto/caam/caamalg.c
+@@ -810,12 +810,6 @@ static int ctr_skcipher_setkey(struct crypto_skcipher *skcipher,
+ return skcipher_setkey(skcipher, key, keylen, ctx1_iv_off);
+ }
+
+-static int arc4_skcipher_setkey(struct crypto_skcipher *skcipher,
+- const u8 *key, unsigned int keylen)
+-{
+- return skcipher_setkey(skcipher, key, keylen, 0);
+-}
+-
+ static int des_skcipher_setkey(struct crypto_skcipher *skcipher,
+ const u8 *key, unsigned int keylen)
+ {
+@@ -1967,21 +1961,6 @@ static struct caam_skcipher_alg driver_algs[] = {
+ },
+ .caam.class1_alg_type = OP_ALG_ALGSEL_3DES | OP_ALG_AAI_ECB,
+ },
+- {
+- .skcipher = {
+- .base = {
+- .cra_name = "ecb(arc4)",
+- .cra_driver_name = "ecb-arc4-caam",
+- .cra_blocksize = ARC4_BLOCK_SIZE,
+- },
+- .setkey = arc4_skcipher_setkey,
+- .encrypt = skcipher_encrypt,
+- .decrypt = skcipher_decrypt,
+- .min_keysize = ARC4_MIN_KEY_SIZE,
+- .max_keysize = ARC4_MAX_KEY_SIZE,
+- },
+- .caam.class1_alg_type = OP_ALG_ALGSEL_ARC4 | OP_ALG_AAI_ECB,
+- },
+ };
+
+ static struct caam_aead_alg driver_aeads[] = {
+@@ -3457,7 +3436,6 @@ int caam_algapi_init(struct device *ctrldev)
+ struct caam_drv_private *priv = dev_get_drvdata(ctrldev);
+ int i = 0, err = 0;
+ u32 aes_vid, aes_inst, des_inst, md_vid, md_inst, ccha_inst, ptha_inst;
+- u32 arc4_inst;
+ unsigned int md_limit = SHA512_DIGEST_SIZE;
+ bool registered = false, gcm_support;
+
+@@ -3477,8 +3455,6 @@ int caam_algapi_init(struct device *ctrldev)
+ CHA_ID_LS_DES_SHIFT;
+ aes_inst = cha_inst & CHA_ID_LS_AES_MASK;
+ md_inst = (cha_inst & CHA_ID_LS_MD_MASK) >> CHA_ID_LS_MD_SHIFT;
+- arc4_inst = (cha_inst & CHA_ID_LS_ARC4_MASK) >>
+- CHA_ID_LS_ARC4_SHIFT;
+ ccha_inst = 0;
+ ptha_inst = 0;
+
+@@ -3499,7 +3475,6 @@ int caam_algapi_init(struct device *ctrldev)
+ md_inst = mdha & CHA_VER_NUM_MASK;
+ ccha_inst = rd_reg32(&priv->ctrl->vreg.ccha) & CHA_VER_NUM_MASK;
+ ptha_inst = rd_reg32(&priv->ctrl->vreg.ptha) & CHA_VER_NUM_MASK;
+- arc4_inst = rd_reg32(&priv->ctrl->vreg.afha) & CHA_VER_NUM_MASK;
+
+ gcm_support = aesa & CHA_VER_MISC_AES_GCM;
+ }
+@@ -3522,10 +3497,6 @@ int caam_algapi_init(struct device *ctrldev)
+ if (!aes_inst && (alg_sel == OP_ALG_ALGSEL_AES))
+ continue;
+
+- /* Skip ARC4 algorithms if not supported by device */
+- if (!arc4_inst && alg_sel == OP_ALG_ALGSEL_ARC4)
+- continue;
+-
+ /*
+ * Check support for AES modes not available
+ * on LP devices.
+diff --git a/drivers/crypto/caam/compat.h b/drivers/crypto/caam/compat.h
+index 60e2a54c19f11..c3c22a8de4c00 100644
+--- a/drivers/crypto/caam/compat.h
++++ b/drivers/crypto/caam/compat.h
+@@ -43,7 +43,6 @@
+ #include <crypto/akcipher.h>
+ #include <crypto/scatterwalk.h>
+ #include <crypto/skcipher.h>
+-#include <crypto/arc4.h>
+ #include <crypto/internal/skcipher.h>
+ #include <crypto/internal/hash.h>
+ #include <crypto/internal/rsa.h>
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
+index f87b225437fc3..bd5061fbe031e 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
+@@ -973,7 +973,7 @@ static ssize_t amdgpu_debugfs_gpr_read(struct file *f, char __user *buf,
+
+ r = pm_runtime_get_sync(adev->ddev->dev);
+ if (r < 0)
+- return r;
++ goto err;
+
+ r = amdgpu_virt_enable_access_debugfs(adev);
+ if (r < 0)
+@@ -1003,7 +1003,7 @@ static ssize_t amdgpu_debugfs_gpr_read(struct file *f, char __user *buf,
+ value = data[result >> 2];
+ r = put_user(value, (uint32_t *)buf);
+ if (r) {
+- result = r;
++ amdgpu_virt_disable_access_debugfs(adev);
+ goto err;
+ }
+
+@@ -1012,11 +1012,14 @@ static ssize_t amdgpu_debugfs_gpr_read(struct file *f, char __user *buf,
+ size -= 4;
+ }
+
+-err:
+- pm_runtime_put_autosuspend(adev->ddev->dev);
+ kfree(data);
+ amdgpu_virt_disable_access_debugfs(adev);
+ return result;
++
++err:
++ pm_runtime_put_autosuspend(adev->ddev->dev);
++ kfree(data);
++ return r;
+ }
+
+ /**
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index 710edc70e37ec..195d621145ba5 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -8686,6 +8686,29 @@ static int amdgpu_dm_atomic_check(struct drm_device *dev,
+ if (ret)
+ goto fail;
+
++ /* Check connector changes */
++ for_each_oldnew_connector_in_state(state, connector, old_con_state, new_con_state, i) {
++ struct dm_connector_state *dm_old_con_state = to_dm_connector_state(old_con_state);
++ struct dm_connector_state *dm_new_con_state = to_dm_connector_state(new_con_state);
++
++ /* Skip connectors that are disabled or part of modeset already. */
++ if (!old_con_state->crtc && !new_con_state->crtc)
++ continue;
++
++ if (!new_con_state->crtc)
++ continue;
++
++ new_crtc_state = drm_atomic_get_crtc_state(state, new_con_state->crtc);
++ if (IS_ERR(new_crtc_state)) {
++ ret = PTR_ERR(new_crtc_state);
++ goto fail;
++ }
++
++ if (dm_old_con_state->abm_level !=
++ dm_new_con_state->abm_level)
++ new_crtc_state->connectors_changed = true;
++ }
++
+ #if defined(CONFIG_DRM_AMD_DC_DCN)
+ if (!compute_mst_dsc_configs_for_state(state, dm_state->context))
+ goto fail;
+diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn10/rv1_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn10/rv1_clk_mgr.c
+index 3fab9296918ab..e133edc587d31 100644
+--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn10/rv1_clk_mgr.c
++++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn10/rv1_clk_mgr.c
+@@ -85,12 +85,77 @@ static int rv1_determine_dppclk_threshold(struct clk_mgr_internal *clk_mgr, stru
+ return disp_clk_threshold;
+ }
+
+-static void ramp_up_dispclk_with_dpp(struct clk_mgr_internal *clk_mgr, struct dc *dc, struct dc_clocks *new_clocks)
++static void ramp_up_dispclk_with_dpp(
++ struct clk_mgr_internal *clk_mgr,
++ struct dc *dc,
++ struct dc_clocks *new_clocks,
++ bool safe_to_lower)
+ {
+ int i;
+ int dispclk_to_dpp_threshold = rv1_determine_dppclk_threshold(clk_mgr, new_clocks);
+ bool request_dpp_div = new_clocks->dispclk_khz > new_clocks->dppclk_khz;
+
++ /* this function is to change dispclk, dppclk and dprefclk according to
++ * bandwidth requirement. Its call stack is rv1_update_clocks -->
++ * update_clocks --> dcn10_prepare_bandwidth / dcn10_optimize_bandwidth
++ * --> prepare_bandwidth / optimize_bandwidth. before change dcn hw,
++ * prepare_bandwidth will be called first to allow enough clock,
++ * watermark for change, after end of dcn hw change, optimize_bandwidth
++ * is executed to lower clock to save power for new dcn hw settings.
++ *
++ * below is sequence of commit_planes_for_stream:
++ *
++ * step 1: prepare_bandwidth - raise clock to have enough bandwidth
++ * step 2: lock_doublebuffer_enable
++ * step 3: pipe_control_lock(true) - make dchubp register change will
++ * not take effect right way
++ * step 4: apply_ctx_for_surface - program dchubp
++ * step 5: pipe_control_lock(false) - dchubp register change take effect
++ * step 6: optimize_bandwidth --> dc_post_update_surfaces_to_stream
++ * for full_date, optimize clock to save power
++ *
++ * at end of step 1, dcn clocks (dprefclk, dispclk, dppclk) may be
++ * changed for new dchubp configuration. but real dcn hub dchubps are
++ * still running with old configuration until end of step 5. this need
++ * clocks settings at step 1 should not less than that before step 1.
++ * this is checked by two conditions: 1. if (should_set_clock(safe_to_lower
++ * , new_clocks->dispclk_khz, clk_mgr_base->clks.dispclk_khz) ||
++ * new_clocks->dispclk_khz == clk_mgr_base->clks.dispclk_khz)
++ * 2. request_dpp_div = new_clocks->dispclk_khz > new_clocks->dppclk_khz
++ *
++ * the second condition is based on new dchubp configuration. dppclk
++ * for new dchubp may be different from dppclk before step 1.
++ * for example, before step 1, dchubps are as below:
++ * pipe 0: recout=(0,40,1920,980) viewport=(0,0,1920,979)
++ * pipe 1: recout=(0,0,1920,1080) viewport=(0,0,1920,1080)
++ * for dppclk for pipe0 need dppclk = dispclk
++ *
++ * new dchubp pipe split configuration:
++ * pipe 0: recout=(0,0,960,1080) viewport=(0,0,960,1080)
++ * pipe 1: recout=(960,0,960,1080) viewport=(960,0,960,1080)
++ * dppclk only needs dppclk = dispclk /2.
++ *
++ * dispclk, dppclk are not lock by otg master lock. they take effect
++ * after step 1. during this transition, dispclk are the same, but
++ * dppclk is changed to half of previous clock for old dchubp
++ * configuration between step 1 and step 6. This may cause p-state
++ * warning intermittently.
++ *
++ * for new_clocks->dispclk_khz == clk_mgr_base->clks.dispclk_khz, we
++ * need make sure dppclk are not changed to less between step 1 and 6.
++ * for new_clocks->dispclk_khz > clk_mgr_base->clks.dispclk_khz,
++ * new display clock is raised, but we do not know ratio of
++ * new_clocks->dispclk_khz and clk_mgr_base->clks.dispclk_khz,
++ * new_clocks->dispclk_khz /2 does not guarantee equal or higher than
++ * old dppclk. we could ignore power saving different between
++ * dppclk = displck and dppclk = dispclk / 2 between step 1 and step 6.
++ * as long as safe_to_lower = false, set dpclk = dispclk to simplify
++ * condition check.
++ * todo: review this change for other asic.
++ **/
++ if (!safe_to_lower)
++ request_dpp_div = false;
++
+ /* set disp clk to dpp clk threshold */
+
+ clk_mgr->funcs->set_dispclk(clk_mgr, dispclk_to_dpp_threshold);
+@@ -209,7 +274,7 @@ static void rv1_update_clocks(struct clk_mgr *clk_mgr_base,
+ /* program dispclk on = as a w/a for sleep resume clock ramping issues */
+ if (should_set_clock(safe_to_lower, new_clocks->dispclk_khz, clk_mgr_base->clks.dispclk_khz)
+ || new_clocks->dispclk_khz == clk_mgr_base->clks.dispclk_khz) {
+- ramp_up_dispclk_with_dpp(clk_mgr, dc, new_clocks);
++ ramp_up_dispclk_with_dpp(clk_mgr, dc, new_clocks, safe_to_lower);
+ clk_mgr_base->clks.dispclk_khz = new_clocks->dispclk_khz;
+ send_request_to_lower = true;
+ }
+diff --git a/drivers/gpu/drm/amd/powerplay/smumgr/ci_smumgr.c b/drivers/gpu/drm/amd/powerplay/smumgr/ci_smumgr.c
+index 56923a96b4502..ad54f4500af1f 100644
+--- a/drivers/gpu/drm/amd/powerplay/smumgr/ci_smumgr.c
++++ b/drivers/gpu/drm/amd/powerplay/smumgr/ci_smumgr.c
+@@ -2725,7 +2725,10 @@ static int ci_initialize_mc_reg_table(struct pp_hwmgr *hwmgr)
+
+ static bool ci_is_dpm_running(struct pp_hwmgr *hwmgr)
+ {
+- return ci_is_smc_ram_running(hwmgr);
++ return (1 == PHM_READ_INDIRECT_FIELD(hwmgr->device,
++ CGS_IND_REG__SMC, FEATURE_STATUS,
++ VOLTAGE_CONTROLLER_ON))
++ ? true : false;
+ }
+
+ static int ci_smu_init(struct pp_hwmgr *hwmgr)
+diff --git a/drivers/gpu/drm/drm_dp_mst_topology.c b/drivers/gpu/drm/drm_dp_mst_topology.c
+index 1e26b89628f98..ffbd754a53825 100644
+--- a/drivers/gpu/drm/drm_dp_mst_topology.c
++++ b/drivers/gpu/drm/drm_dp_mst_topology.c
+@@ -88,8 +88,8 @@ static int drm_dp_send_enum_path_resources(struct drm_dp_mst_topology_mgr *mgr,
+ static bool drm_dp_validate_guid(struct drm_dp_mst_topology_mgr *mgr,
+ u8 *guid);
+
+-static int drm_dp_mst_register_i2c_bus(struct drm_dp_aux *aux);
+-static void drm_dp_mst_unregister_i2c_bus(struct drm_dp_aux *aux);
++static int drm_dp_mst_register_i2c_bus(struct drm_dp_mst_port *port);
++static void drm_dp_mst_unregister_i2c_bus(struct drm_dp_mst_port *port);
+ static void drm_dp_mst_kick_tx(struct drm_dp_mst_topology_mgr *mgr);
+
+ #define DBG_PREFIX "[dp_mst]"
+@@ -1197,7 +1197,8 @@ static int drm_dp_mst_wait_tx_reply(struct drm_dp_mst_branch *mstb,
+
+ /* remove from q */
+ if (txmsg->state == DRM_DP_SIDEBAND_TX_QUEUED ||
+- txmsg->state == DRM_DP_SIDEBAND_TX_START_SEND)
++ txmsg->state == DRM_DP_SIDEBAND_TX_START_SEND ||
++ txmsg->state == DRM_DP_SIDEBAND_TX_SENT)
+ list_del(&txmsg->next);
+ }
+ out:
+@@ -1966,7 +1967,7 @@ drm_dp_port_set_pdt(struct drm_dp_mst_port *port, u8 new_pdt,
+ }
+
+ /* remove i2c over sideband */
+- drm_dp_mst_unregister_i2c_bus(&port->aux);
++ drm_dp_mst_unregister_i2c_bus(port);
+ } else {
+ mutex_lock(&mgr->lock);
+ drm_dp_mst_topology_put_mstb(port->mstb);
+@@ -1981,7 +1982,7 @@ drm_dp_port_set_pdt(struct drm_dp_mst_port *port, u8 new_pdt,
+ if (port->pdt != DP_PEER_DEVICE_NONE) {
+ if (drm_dp_mst_is_end_device(port->pdt, port->mcs)) {
+ /* add i2c over sideband */
+- ret = drm_dp_mst_register_i2c_bus(&port->aux);
++ ret = drm_dp_mst_register_i2c_bus(port);
+ } else {
+ lct = drm_dp_calculate_rad(port, rad);
+ mstb = drm_dp_add_mst_branch_device(lct, rad);
+@@ -4261,11 +4262,11 @@ bool drm_dp_mst_allocate_vcpi(struct drm_dp_mst_topology_mgr *mgr,
+ {
+ int ret;
+
+- port = drm_dp_mst_topology_get_port_validated(mgr, port);
+- if (!port)
++ if (slots < 0)
+ return false;
+
+- if (slots < 0)
++ port = drm_dp_mst_topology_get_port_validated(mgr, port);
++ if (!port)
+ return false;
+
+ if (port->vcpi.vcpi > 0) {
+@@ -4281,6 +4282,7 @@ bool drm_dp_mst_allocate_vcpi(struct drm_dp_mst_topology_mgr *mgr,
+ if (ret) {
+ DRM_DEBUG_KMS("failed to init vcpi slots=%d max=63 ret=%d\n",
+ DIV_ROUND_UP(pbn, mgr->pbn_div), ret);
++ drm_dp_mst_topology_put_port(port);
+ goto out;
+ }
+ DRM_DEBUG_KMS("initing vcpi for pbn=%d slots=%d\n",
+@@ -4641,12 +4643,13 @@ static void drm_dp_tx_work(struct work_struct *work)
+ static inline void
+ drm_dp_delayed_destroy_port(struct drm_dp_mst_port *port)
+ {
++ drm_dp_port_set_pdt(port, DP_PEER_DEVICE_NONE, port->mcs);
++
+ if (port->connector) {
+ drm_connector_unregister(port->connector);
+ drm_connector_put(port->connector);
+ }
+
+- drm_dp_port_set_pdt(port, DP_PEER_DEVICE_NONE, port->mcs);
+ drm_dp_mst_put_port_malloc(port);
+ }
+
+@@ -5346,22 +5349,26 @@ static const struct i2c_algorithm drm_dp_mst_i2c_algo = {
+
+ /**
+ * drm_dp_mst_register_i2c_bus() - register an I2C adapter for I2C-over-AUX
+- * @aux: DisplayPort AUX channel
++ * @port: The port to add the I2C bus on
+ *
+ * Returns 0 on success or a negative error code on failure.
+ */
+-static int drm_dp_mst_register_i2c_bus(struct drm_dp_aux *aux)
++static int drm_dp_mst_register_i2c_bus(struct drm_dp_mst_port *port)
+ {
++ struct drm_dp_aux *aux = &port->aux;
++ struct device *parent_dev = port->mgr->dev->dev;
++
+ aux->ddc.algo = &drm_dp_mst_i2c_algo;
+ aux->ddc.algo_data = aux;
+ aux->ddc.retries = 3;
+
+ aux->ddc.class = I2C_CLASS_DDC;
+ aux->ddc.owner = THIS_MODULE;
+- aux->ddc.dev.parent = aux->dev;
+- aux->ddc.dev.of_node = aux->dev->of_node;
++ /* FIXME: set the kdev of the port's connector as parent */
++ aux->ddc.dev.parent = parent_dev;
++ aux->ddc.dev.of_node = parent_dev->of_node;
+
+- strlcpy(aux->ddc.name, aux->name ? aux->name : dev_name(aux->dev),
++ strlcpy(aux->ddc.name, aux->name ? aux->name : dev_name(parent_dev),
+ sizeof(aux->ddc.name));
+
+ return i2c_add_adapter(&aux->ddc);
+@@ -5369,11 +5376,11 @@ static int drm_dp_mst_register_i2c_bus(struct drm_dp_aux *aux)
+
+ /**
+ * drm_dp_mst_unregister_i2c_bus() - unregister an I2C-over-AUX adapter
+- * @aux: DisplayPort AUX channel
++ * @port: The port to remove the I2C bus from
+ */
+-static void drm_dp_mst_unregister_i2c_bus(struct drm_dp_aux *aux)
++static void drm_dp_mst_unregister_i2c_bus(struct drm_dp_mst_port *port)
+ {
+- i2c_del_adapter(&aux->ddc);
++ i2c_del_adapter(&port->aux.ddc);
+ }
+
+ /**
+diff --git a/drivers/gpu/drm/drm_panel_orientation_quirks.c b/drivers/gpu/drm/drm_panel_orientation_quirks.c
+index d00ea384dcbfe..58f5dc2f6dd52 100644
+--- a/drivers/gpu/drm/drm_panel_orientation_quirks.c
++++ b/drivers/gpu/drm/drm_panel_orientation_quirks.c
+@@ -121,6 +121,12 @@ static const struct dmi_system_id orientation_data[] = {
+ DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "T101HA"),
+ },
+ .driver_data = (void *)&lcd800x1280_rightside_up,
++ }, { /* Asus T103HAF */
++ .matches = {
++ DMI_EXACT_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
++ DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "T103HAF"),
++ },
++ .driver_data = (void *)&lcd800x1280_rightside_up,
+ }, { /* GPD MicroPC (generic strings, also match on bios date) */
+ .matches = {
+ DMI_EXACT_MATCH(DMI_SYS_VENDOR, "Default string"),
+diff --git a/drivers/gpu/drm/i915/gt/intel_gt.c b/drivers/gpu/drm/i915/gt/intel_gt.c
+index f069551e412f3..ebc29b6ee86cb 100644
+--- a/drivers/gpu/drm/i915/gt/intel_gt.c
++++ b/drivers/gpu/drm/i915/gt/intel_gt.c
+@@ -616,6 +616,11 @@ void intel_gt_driver_unregister(struct intel_gt *gt)
+ void intel_gt_driver_release(struct intel_gt *gt)
+ {
+ struct i915_address_space *vm;
++ intel_wakeref_t wakeref;
++
++ /* Scrub all HW state upon release */
++ with_intel_runtime_pm(gt->uncore->rpm, wakeref)
++ __intel_gt_reset(gt, ALL_ENGINES);
+
+ vm = fetch_and_zero(>->vm);
+ if (vm) /* FIXME being called twice on error paths :( */
+diff --git a/drivers/gpu/drm/imx/imx-ldb.c b/drivers/gpu/drm/imx/imx-ldb.c
+index 1823af9936c98..447a110787a6f 100644
+--- a/drivers/gpu/drm/imx/imx-ldb.c
++++ b/drivers/gpu/drm/imx/imx-ldb.c
+@@ -304,18 +304,19 @@ static void imx_ldb_encoder_disable(struct drm_encoder *encoder)
+ {
+ struct imx_ldb_channel *imx_ldb_ch = enc_to_imx_ldb_ch(encoder);
+ struct imx_ldb *ldb = imx_ldb_ch->ldb;
++ int dual = ldb->ldb_ctrl & LDB_SPLIT_MODE_EN;
+ int mux, ret;
+
+ drm_panel_disable(imx_ldb_ch->panel);
+
+- if (imx_ldb_ch == &ldb->channel[0])
++ if (imx_ldb_ch == &ldb->channel[0] || dual)
+ ldb->ldb_ctrl &= ~LDB_CH0_MODE_EN_MASK;
+- else if (imx_ldb_ch == &ldb->channel[1])
++ if (imx_ldb_ch == &ldb->channel[1] || dual)
+ ldb->ldb_ctrl &= ~LDB_CH1_MODE_EN_MASK;
+
+ regmap_write(ldb->regmap, IOMUXC_GPR2, ldb->ldb_ctrl);
+
+- if (ldb->ldb_ctrl & LDB_SPLIT_MODE_EN) {
++ if (dual) {
+ clk_disable_unprepare(ldb->clk[0]);
+ clk_disable_unprepare(ldb->clk[1]);
+ }
+diff --git a/drivers/gpu/drm/ingenic/ingenic-drm.c b/drivers/gpu/drm/ingenic/ingenic-drm.c
+index 55b49a31729bf..9764c99ebddf4 100644
+--- a/drivers/gpu/drm/ingenic/ingenic-drm.c
++++ b/drivers/gpu/drm/ingenic/ingenic-drm.c
+@@ -386,7 +386,7 @@ static void ingenic_drm_plane_atomic_update(struct drm_plane *plane,
+ addr = drm_fb_cma_get_gem_addr(state->fb, state, 0);
+ width = state->src_w >> 16;
+ height = state->src_h >> 16;
+- cpp = state->fb->format->cpp[plane->index];
++ cpp = state->fb->format->cpp[0];
+
+ priv->dma_hwdesc->addr = addr;
+ priv->dma_hwdesc->cmd = width * height * cpp / 4;
+diff --git a/drivers/gpu/drm/omapdrm/dss/dispc.c b/drivers/gpu/drm/omapdrm/dss/dispc.c
+index 6639ee9b05d3d..48593932bddf5 100644
+--- a/drivers/gpu/drm/omapdrm/dss/dispc.c
++++ b/drivers/gpu/drm/omapdrm/dss/dispc.c
+@@ -4915,6 +4915,7 @@ static int dispc_runtime_resume(struct device *dev)
+ static const struct dev_pm_ops dispc_pm_ops = {
+ .runtime_suspend = dispc_runtime_suspend,
+ .runtime_resume = dispc_runtime_resume,
++ SET_LATE_SYSTEM_SLEEP_PM_OPS(pm_runtime_force_suspend, pm_runtime_force_resume)
+ };
+
+ struct platform_driver omap_dispchw_driver = {
+diff --git a/drivers/gpu/drm/omapdrm/dss/dsi.c b/drivers/gpu/drm/omapdrm/dss/dsi.c
+index 79ddfbfd1b588..eeccf40bae416 100644
+--- a/drivers/gpu/drm/omapdrm/dss/dsi.c
++++ b/drivers/gpu/drm/omapdrm/dss/dsi.c
+@@ -5467,6 +5467,7 @@ static int dsi_runtime_resume(struct device *dev)
+ static const struct dev_pm_ops dsi_pm_ops = {
+ .runtime_suspend = dsi_runtime_suspend,
+ .runtime_resume = dsi_runtime_resume,
++ SET_LATE_SYSTEM_SLEEP_PM_OPS(pm_runtime_force_suspend, pm_runtime_force_resume)
+ };
+
+ struct platform_driver omap_dsihw_driver = {
+diff --git a/drivers/gpu/drm/omapdrm/dss/dss.c b/drivers/gpu/drm/omapdrm/dss/dss.c
+index 4d5739fa4a5d8..6ccbc29c4ce4b 100644
+--- a/drivers/gpu/drm/omapdrm/dss/dss.c
++++ b/drivers/gpu/drm/omapdrm/dss/dss.c
+@@ -1614,6 +1614,7 @@ static int dss_runtime_resume(struct device *dev)
+ static const struct dev_pm_ops dss_pm_ops = {
+ .runtime_suspend = dss_runtime_suspend,
+ .runtime_resume = dss_runtime_resume,
++ SET_LATE_SYSTEM_SLEEP_PM_OPS(pm_runtime_force_suspend, pm_runtime_force_resume)
+ };
+
+ struct platform_driver omap_dsshw_driver = {
+diff --git a/drivers/gpu/drm/omapdrm/dss/venc.c b/drivers/gpu/drm/omapdrm/dss/venc.c
+index 9701843ccf09d..01ee6c50b6631 100644
+--- a/drivers/gpu/drm/omapdrm/dss/venc.c
++++ b/drivers/gpu/drm/omapdrm/dss/venc.c
+@@ -902,6 +902,7 @@ static int venc_runtime_resume(struct device *dev)
+ static const struct dev_pm_ops venc_pm_ops = {
+ .runtime_suspend = venc_runtime_suspend,
+ .runtime_resume = venc_runtime_resume,
++ SET_LATE_SYSTEM_SLEEP_PM_OPS(pm_runtime_force_suspend, pm_runtime_force_resume)
+ };
+
+ static const struct of_device_id venc_of_match[] = {
+diff --git a/drivers/gpu/drm/panfrost/panfrost_gem.c b/drivers/gpu/drm/panfrost/panfrost_gem.c
+index 17b654e1eb942..556181ea4a073 100644
+--- a/drivers/gpu/drm/panfrost/panfrost_gem.c
++++ b/drivers/gpu/drm/panfrost/panfrost_gem.c
+@@ -46,7 +46,7 @@ static void panfrost_gem_free_object(struct drm_gem_object *obj)
+ sg_free_table(&bo->sgts[i]);
+ }
+ }
+- kfree(bo->sgts);
++ kvfree(bo->sgts);
+ }
+
+ drm_gem_shmem_free_object(obj);
+diff --git a/drivers/gpu/drm/panfrost/panfrost_mmu.c b/drivers/gpu/drm/panfrost/panfrost_mmu.c
+index ed28aeba6d59a..3c8ae7411c800 100644
+--- a/drivers/gpu/drm/panfrost/panfrost_mmu.c
++++ b/drivers/gpu/drm/panfrost/panfrost_mmu.c
+@@ -486,7 +486,7 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
+ pages = kvmalloc_array(bo->base.base.size >> PAGE_SHIFT,
+ sizeof(struct page *), GFP_KERNEL | __GFP_ZERO);
+ if (!pages) {
+- kfree(bo->sgts);
++ kvfree(bo->sgts);
+ bo->sgts = NULL;
+ mutex_unlock(&bo->base.pages_lock);
+ ret = -ENOMEM;
+diff --git a/drivers/gpu/drm/tidss/tidss_kms.c b/drivers/gpu/drm/tidss/tidss_kms.c
+index 4b99e9fa84a5b..c0240f7e0b198 100644
+--- a/drivers/gpu/drm/tidss/tidss_kms.c
++++ b/drivers/gpu/drm/tidss/tidss_kms.c
+@@ -154,7 +154,7 @@ static int tidss_dispc_modeset_init(struct tidss_device *tidss)
+ break;
+ case DISPC_VP_DPI:
+ enc_type = DRM_MODE_ENCODER_DPI;
+- conn_type = DRM_MODE_CONNECTOR_LVDS;
++ conn_type = DRM_MODE_CONNECTOR_DPI;
+ break;
+ default:
+ WARN_ON(1);
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c b/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c
+index 04d66592f6050..b7a9cee69ea72 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c
+@@ -2578,7 +2578,7 @@ int vmw_kms_fbdev_init_data(struct vmw_private *dev_priv,
+ ++i;
+ }
+
+- if (i != unit) {
++ if (&con->head == &dev_priv->dev->mode_config.connector_list) {
+ DRM_ERROR("Could not find initial display unit.\n");
+ ret = -EINVAL;
+ goto out_unlock;
+@@ -2602,13 +2602,13 @@ int vmw_kms_fbdev_init_data(struct vmw_private *dev_priv,
+ break;
+ }
+
+- if (mode->type & DRM_MODE_TYPE_PREFERRED)
+- *p_mode = mode;
+- else {
++ if (&mode->head == &con->modes) {
+ WARN_ONCE(true, "Could not find initial preferred mode.\n");
+ *p_mode = list_first_entry(&con->modes,
+ struct drm_display_mode,
+ head);
++ } else {
++ *p_mode = mode;
+ }
+
+ out_unlock:
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_ldu.c b/drivers/gpu/drm/vmwgfx/vmwgfx_ldu.c
+index 16dafff5cab19..009f1742bed51 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_ldu.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_ldu.c
+@@ -81,7 +81,7 @@ static int vmw_ldu_commit_list(struct vmw_private *dev_priv)
+ struct vmw_legacy_display_unit *entry;
+ struct drm_framebuffer *fb = NULL;
+ struct drm_crtc *crtc = NULL;
+- int i = 0;
++ int i;
+
+ /* If there is no display topology the host just assumes
+ * that the guest will set the same layout as the host.
+@@ -92,12 +92,11 @@ static int vmw_ldu_commit_list(struct vmw_private *dev_priv)
+ crtc = &entry->base.crtc;
+ w = max(w, crtc->x + crtc->mode.hdisplay);
+ h = max(h, crtc->y + crtc->mode.vdisplay);
+- i++;
+ }
+
+ if (crtc == NULL)
+ return 0;
+- fb = entry->base.crtc.primary->state->fb;
++ fb = crtc->primary->state->fb;
+
+ return vmw_kms_write_svga(dev_priv, w, h, fb->pitches[0],
+ fb->format->cpp[0] * 8,
+diff --git a/drivers/gpu/ipu-v3/ipu-image-convert.c b/drivers/gpu/ipu-v3/ipu-image-convert.c
+index eeca50d9a1ee4..aa1d4b6d278f7 100644
+--- a/drivers/gpu/ipu-v3/ipu-image-convert.c
++++ b/drivers/gpu/ipu-v3/ipu-image-convert.c
+@@ -137,6 +137,17 @@ struct ipu_image_convert_ctx;
+ struct ipu_image_convert_chan;
+ struct ipu_image_convert_priv;
+
++enum eof_irq_mask {
++ EOF_IRQ_IN = BIT(0),
++ EOF_IRQ_ROT_IN = BIT(1),
++ EOF_IRQ_OUT = BIT(2),
++ EOF_IRQ_ROT_OUT = BIT(3),
++};
++
++#define EOF_IRQ_COMPLETE (EOF_IRQ_IN | EOF_IRQ_OUT)
++#define EOF_IRQ_ROT_COMPLETE (EOF_IRQ_IN | EOF_IRQ_OUT | \
++ EOF_IRQ_ROT_IN | EOF_IRQ_ROT_OUT)
++
+ struct ipu_image_convert_ctx {
+ struct ipu_image_convert_chan *chan;
+
+@@ -173,6 +184,9 @@ struct ipu_image_convert_ctx {
+ /* where to place converted tile in dest image */
+ unsigned int out_tile_map[MAX_TILES];
+
++ /* mask of completed EOF irqs at every tile conversion */
++ enum eof_irq_mask eof_mask;
++
+ struct list_head list;
+ };
+
+@@ -189,6 +203,8 @@ struct ipu_image_convert_chan {
+ struct ipuv3_channel *rotation_out_chan;
+
+ /* the IPU end-of-frame irqs */
++ int in_eof_irq;
++ int rot_in_eof_irq;
+ int out_eof_irq;
+ int rot_out_eof_irq;
+
+@@ -1380,6 +1396,9 @@ static int convert_start(struct ipu_image_convert_run *run, unsigned int tile)
+ dev_dbg(priv->ipu->dev, "%s: task %u: starting ctx %p run %p tile %u -> %u\n",
+ __func__, chan->ic_task, ctx, run, tile, dst_tile);
+
++ /* clear EOF irq mask */
++ ctx->eof_mask = 0;
++
+ if (ipu_rot_mode_is_irt(ctx->rot_mode)) {
+ /* swap width/height for resizer */
+ dest_width = d_image->tile[dst_tile].height;
+@@ -1615,7 +1634,7 @@ static bool ic_settings_changed(struct ipu_image_convert_ctx *ctx)
+ }
+
+ /* hold irqlock when calling */
+-static irqreturn_t do_irq(struct ipu_image_convert_run *run)
++static irqreturn_t do_tile_complete(struct ipu_image_convert_run *run)
+ {
+ struct ipu_image_convert_ctx *ctx = run->ctx;
+ struct ipu_image_convert_chan *chan = ctx->chan;
+@@ -1700,6 +1719,7 @@ static irqreturn_t do_irq(struct ipu_image_convert_run *run)
+ ctx->cur_buf_num ^= 1;
+ }
+
++ ctx->eof_mask = 0; /* clear EOF irq mask for next tile */
+ ctx->next_tile++;
+ return IRQ_HANDLED;
+ done:
+@@ -1709,13 +1729,15 @@ done:
+ return IRQ_WAKE_THREAD;
+ }
+
+-static irqreturn_t norotate_irq(int irq, void *data)
++static irqreturn_t eof_irq(int irq, void *data)
+ {
+ struct ipu_image_convert_chan *chan = data;
++ struct ipu_image_convert_priv *priv = chan->priv;
+ struct ipu_image_convert_ctx *ctx;
+ struct ipu_image_convert_run *run;
++ irqreturn_t ret = IRQ_HANDLED;
++ bool tile_complete = false;
+ unsigned long flags;
+- irqreturn_t ret;
+
+ spin_lock_irqsave(&chan->irqlock, flags);
+
+@@ -1728,46 +1750,33 @@ static irqreturn_t norotate_irq(int irq, void *data)
+
+ ctx = run->ctx;
+
+- if (ipu_rot_mode_is_irt(ctx->rot_mode)) {
+- /* this is a rotation operation, just ignore */
+- spin_unlock_irqrestore(&chan->irqlock, flags);
+- return IRQ_HANDLED;
+- }
+-
+- ret = do_irq(run);
+-out:
+- spin_unlock_irqrestore(&chan->irqlock, flags);
+- return ret;
+-}
+-
+-static irqreturn_t rotate_irq(int irq, void *data)
+-{
+- struct ipu_image_convert_chan *chan = data;
+- struct ipu_image_convert_priv *priv = chan->priv;
+- struct ipu_image_convert_ctx *ctx;
+- struct ipu_image_convert_run *run;
+- unsigned long flags;
+- irqreturn_t ret;
+-
+- spin_lock_irqsave(&chan->irqlock, flags);
+-
+- /* get current run and its context */
+- run = chan->current_run;
+- if (!run) {
++ if (irq == chan->in_eof_irq) {
++ ctx->eof_mask |= EOF_IRQ_IN;
++ } else if (irq == chan->out_eof_irq) {
++ ctx->eof_mask |= EOF_IRQ_OUT;
++ } else if (irq == chan->rot_in_eof_irq ||
++ irq == chan->rot_out_eof_irq) {
++ if (!ipu_rot_mode_is_irt(ctx->rot_mode)) {
++ /* this was NOT a rotation op, shouldn't happen */
++ dev_err(priv->ipu->dev,
++ "Unexpected rotation interrupt\n");
++ goto out;
++ }
++ ctx->eof_mask |= (irq == chan->rot_in_eof_irq) ?
++ EOF_IRQ_ROT_IN : EOF_IRQ_ROT_OUT;
++ } else {
++ dev_err(priv->ipu->dev, "Received unknown irq %d\n", irq);
+ ret = IRQ_NONE;
+ goto out;
+ }
+
+- ctx = run->ctx;
+-
+- if (!ipu_rot_mode_is_irt(ctx->rot_mode)) {
+- /* this was NOT a rotation operation, shouldn't happen */
+- dev_err(priv->ipu->dev, "Unexpected rotation interrupt\n");
+- spin_unlock_irqrestore(&chan->irqlock, flags);
+- return IRQ_HANDLED;
+- }
++ if (ipu_rot_mode_is_irt(ctx->rot_mode))
++ tile_complete = (ctx->eof_mask == EOF_IRQ_ROT_COMPLETE);
++ else
++ tile_complete = (ctx->eof_mask == EOF_IRQ_COMPLETE);
+
+- ret = do_irq(run);
++ if (tile_complete)
++ ret = do_tile_complete(run);
+ out:
+ spin_unlock_irqrestore(&chan->irqlock, flags);
+ return ret;
+@@ -1801,6 +1810,10 @@ static void force_abort(struct ipu_image_convert_ctx *ctx)
+
+ static void release_ipu_resources(struct ipu_image_convert_chan *chan)
+ {
++ if (chan->in_eof_irq >= 0)
++ free_irq(chan->in_eof_irq, chan);
++ if (chan->rot_in_eof_irq >= 0)
++ free_irq(chan->rot_in_eof_irq, chan);
+ if (chan->out_eof_irq >= 0)
+ free_irq(chan->out_eof_irq, chan);
+ if (chan->rot_out_eof_irq >= 0)
+@@ -1819,7 +1832,27 @@ static void release_ipu_resources(struct ipu_image_convert_chan *chan)
+
+ chan->in_chan = chan->out_chan = chan->rotation_in_chan =
+ chan->rotation_out_chan = NULL;
+- chan->out_eof_irq = chan->rot_out_eof_irq = -1;
++ chan->in_eof_irq = -1;
++ chan->rot_in_eof_irq = -1;
++ chan->out_eof_irq = -1;
++ chan->rot_out_eof_irq = -1;
++}
++
++static int get_eof_irq(struct ipu_image_convert_chan *chan,
++ struct ipuv3_channel *channel)
++{
++ struct ipu_image_convert_priv *priv = chan->priv;
++ int ret, irq;
++
++ irq = ipu_idmac_channel_irq(priv->ipu, channel, IPU_IRQ_EOF);
++
++ ret = request_threaded_irq(irq, eof_irq, do_bh, 0, "ipu-ic", chan);
++ if (ret < 0) {
++ dev_err(priv->ipu->dev, "could not acquire irq %d\n", irq);
++ return ret;
++ }
++
++ return irq;
+ }
+
+ static int get_ipu_resources(struct ipu_image_convert_chan *chan)
+@@ -1855,31 +1888,33 @@ static int get_ipu_resources(struct ipu_image_convert_chan *chan)
+ }
+
+ /* acquire the EOF interrupts */
+- chan->out_eof_irq = ipu_idmac_channel_irq(priv->ipu,
+- chan->out_chan,
+- IPU_IRQ_EOF);
++ ret = get_eof_irq(chan, chan->in_chan);
++ if (ret < 0) {
++ chan->in_eof_irq = -1;
++ goto err;
++ }
++ chan->in_eof_irq = ret;
+
+- ret = request_threaded_irq(chan->out_eof_irq, norotate_irq, do_bh,
+- 0, "ipu-ic", chan);
++ ret = get_eof_irq(chan, chan->rotation_in_chan);
+ if (ret < 0) {
+- dev_err(priv->ipu->dev, "could not acquire irq %d\n",
+- chan->out_eof_irq);
+- chan->out_eof_irq = -1;
++ chan->rot_in_eof_irq = -1;
+ goto err;
+ }
++ chan->rot_in_eof_irq = ret;
+
+- chan->rot_out_eof_irq = ipu_idmac_channel_irq(priv->ipu,
+- chan->rotation_out_chan,
+- IPU_IRQ_EOF);
++ ret = get_eof_irq(chan, chan->out_chan);
++ if (ret < 0) {
++ chan->out_eof_irq = -1;
++ goto err;
++ }
++ chan->out_eof_irq = ret;
+
+- ret = request_threaded_irq(chan->rot_out_eof_irq, rotate_irq, do_bh,
+- 0, "ipu-ic", chan);
++ ret = get_eof_irq(chan, chan->rotation_out_chan);
+ if (ret < 0) {
+- dev_err(priv->ipu->dev, "could not acquire irq %d\n",
+- chan->rot_out_eof_irq);
+ chan->rot_out_eof_irq = -1;
+ goto err;
+ }
++ chan->rot_out_eof_irq = ret;
+
+ return 0;
+ err:
+@@ -2458,6 +2493,8 @@ int ipu_image_convert_init(struct ipu_soc *ipu, struct device *dev)
+ chan->ic_task = i;
+ chan->priv = priv;
+ chan->dma_ch = &image_convert_dma_chan[i];
++ chan->in_eof_irq = -1;
++ chan->rot_in_eof_irq = -1;
+ chan->out_eof_irq = -1;
+ chan->rot_out_eof_irq = -1;
+
+diff --git a/drivers/i2c/busses/i2c-bcm-iproc.c b/drivers/i2c/busses/i2c-bcm-iproc.c
+index 8a3c98866fb7e..688e928188214 100644
+--- a/drivers/i2c/busses/i2c-bcm-iproc.c
++++ b/drivers/i2c/busses/i2c-bcm-iproc.c
+@@ -1078,7 +1078,7 @@ static int bcm_iproc_i2c_unreg_slave(struct i2c_client *slave)
+ if (!iproc_i2c->slave)
+ return -EINVAL;
+
+- iproc_i2c->slave = NULL;
++ disable_irq(iproc_i2c->irq);
+
+ /* disable all slave interrupts */
+ tmp = iproc_i2c_rd_reg(iproc_i2c, IE_OFFSET);
+@@ -1091,6 +1091,17 @@ static int bcm_iproc_i2c_unreg_slave(struct i2c_client *slave)
+ tmp &= ~BIT(S_CFG_EN_NIC_SMB_ADDR3_SHIFT);
+ iproc_i2c_wr_reg(iproc_i2c, S_CFG_SMBUS_ADDR_OFFSET, tmp);
+
++ /* flush TX/RX FIFOs */
++ tmp = (BIT(S_FIFO_RX_FLUSH_SHIFT) | BIT(S_FIFO_TX_FLUSH_SHIFT));
++ iproc_i2c_wr_reg(iproc_i2c, S_FIFO_CTRL_OFFSET, tmp);
++
++ /* clear all pending slave interrupts */
++ iproc_i2c_wr_reg(iproc_i2c, IS_OFFSET, ISR_MASK_SLAVE);
++
++ iproc_i2c->slave = NULL;
++
++ enable_irq(iproc_i2c->irq);
++
+ return 0;
+ }
+
+diff --git a/drivers/i2c/busses/i2c-rcar.c b/drivers/i2c/busses/i2c-rcar.c
+index 2e3e1bb750134..9e883474db8ce 100644
+--- a/drivers/i2c/busses/i2c-rcar.c
++++ b/drivers/i2c/busses/i2c-rcar.c
+@@ -583,13 +583,14 @@ static bool rcar_i2c_slave_irq(struct rcar_i2c_priv *priv)
+ rcar_i2c_write(priv, ICSIER, SDR | SSR | SAR);
+ }
+
+- rcar_i2c_write(priv, ICSSR, ~SAR & 0xff);
++ /* Clear SSR, too, because of old STOPs to other clients than us */
++ rcar_i2c_write(priv, ICSSR, ~(SAR | SSR) & 0xff);
+ }
+
+ /* master sent stop */
+ if (ssr_filtered & SSR) {
+ i2c_slave_event(priv->slave, I2C_SLAVE_STOP, &value);
+- rcar_i2c_write(priv, ICSIER, SAR | SSR);
++ rcar_i2c_write(priv, ICSIER, SAR);
+ rcar_i2c_write(priv, ICSSR, ~SSR & 0xff);
+ }
+
+@@ -853,7 +854,7 @@ static int rcar_reg_slave(struct i2c_client *slave)
+ priv->slave = slave;
+ rcar_i2c_write(priv, ICSAR, slave->addr);
+ rcar_i2c_write(priv, ICSSR, 0);
+- rcar_i2c_write(priv, ICSIER, SAR | SSR);
++ rcar_i2c_write(priv, ICSIER, SAR);
+ rcar_i2c_write(priv, ICSCR, SIE | SDBS);
+
+ return 0;
+@@ -865,12 +866,14 @@ static int rcar_unreg_slave(struct i2c_client *slave)
+
+ WARN_ON(!priv->slave);
+
+- /* disable irqs and ensure none is running before clearing ptr */
++ /* ensure no irq is running before clearing ptr */
++ disable_irq(priv->irq);
+ rcar_i2c_write(priv, ICSIER, 0);
+- rcar_i2c_write(priv, ICSCR, 0);
++ rcar_i2c_write(priv, ICSSR, 0);
++ enable_irq(priv->irq);
++ rcar_i2c_write(priv, ICSCR, SDBS);
+ rcar_i2c_write(priv, ICSAR, 0); /* Gen2: must be 0 if not using slave */
+
+- synchronize_irq(priv->irq);
+ priv->slave = NULL;
+
+ pm_runtime_put(rcar_i2c_priv_to_dev(priv));
+diff --git a/drivers/iio/dac/ad5592r-base.c b/drivers/iio/dac/ad5592r-base.c
+index 410e90e5f75fb..5226c258856b2 100644
+--- a/drivers/iio/dac/ad5592r-base.c
++++ b/drivers/iio/dac/ad5592r-base.c
+@@ -413,7 +413,7 @@ static int ad5592r_read_raw(struct iio_dev *iio_dev,
+ s64 tmp = *val * (3767897513LL / 25LL);
+ *val = div_s64_rem(tmp, 1000000000LL, val2);
+
+- ret = IIO_VAL_INT_PLUS_MICRO;
++ return IIO_VAL_INT_PLUS_MICRO;
+ } else {
+ int mult;
+
+@@ -444,7 +444,7 @@ static int ad5592r_read_raw(struct iio_dev *iio_dev,
+ ret = IIO_VAL_INT;
+ break;
+ default:
+- ret = -EINVAL;
++ return -EINVAL;
+ }
+
+ unlock:
+diff --git a/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx.h b/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx.h
+index b56df409ed0fa..529970195b398 100644
+--- a/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx.h
++++ b/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx.h
+@@ -436,8 +436,7 @@ int st_lsm6dsx_update_watermark(struct st_lsm6dsx_sensor *sensor,
+ u16 watermark);
+ int st_lsm6dsx_update_fifo(struct st_lsm6dsx_sensor *sensor, bool enable);
+ int st_lsm6dsx_flush_fifo(struct st_lsm6dsx_hw *hw);
+-int st_lsm6dsx_set_fifo_mode(struct st_lsm6dsx_hw *hw,
+- enum st_lsm6dsx_fifo_mode fifo_mode);
++int st_lsm6dsx_resume_fifo(struct st_lsm6dsx_hw *hw);
+ int st_lsm6dsx_read_fifo(struct st_lsm6dsx_hw *hw);
+ int st_lsm6dsx_read_tagged_fifo(struct st_lsm6dsx_hw *hw);
+ int st_lsm6dsx_check_odr(struct st_lsm6dsx_sensor *sensor, u32 odr, u8 *val);
+diff --git a/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_buffer.c b/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_buffer.c
+index afd00daeefb2d..7de10bd636ea0 100644
+--- a/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_buffer.c
++++ b/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_buffer.c
+@@ -184,8 +184,8 @@ static int st_lsm6dsx_update_decimators(struct st_lsm6dsx_hw *hw)
+ return err;
+ }
+
+-int st_lsm6dsx_set_fifo_mode(struct st_lsm6dsx_hw *hw,
+- enum st_lsm6dsx_fifo_mode fifo_mode)
++static int st_lsm6dsx_set_fifo_mode(struct st_lsm6dsx_hw *hw,
++ enum st_lsm6dsx_fifo_mode fifo_mode)
+ {
+ unsigned int data;
+
+@@ -302,6 +302,18 @@ static int st_lsm6dsx_reset_hw_ts(struct st_lsm6dsx_hw *hw)
+ return 0;
+ }
+
++int st_lsm6dsx_resume_fifo(struct st_lsm6dsx_hw *hw)
++{
++ int err;
++
++ /* reset hw ts counter */
++ err = st_lsm6dsx_reset_hw_ts(hw);
++ if (err < 0)
++ return err;
++
++ return st_lsm6dsx_set_fifo_mode(hw, ST_LSM6DSX_FIFO_CONT);
++}
++
+ /*
+ * Set max bulk read to ST_LSM6DSX_MAX_WORD_LEN/ST_LSM6DSX_MAX_TAGGED_WORD_LEN
+ * in order to avoid a kmalloc for each bus access
+@@ -675,12 +687,7 @@ int st_lsm6dsx_update_fifo(struct st_lsm6dsx_sensor *sensor, bool enable)
+ goto out;
+
+ if (fifo_mask) {
+- /* reset hw ts counter */
+- err = st_lsm6dsx_reset_hw_ts(hw);
+- if (err < 0)
+- goto out;
+-
+- err = st_lsm6dsx_set_fifo_mode(hw, ST_LSM6DSX_FIFO_CONT);
++ err = st_lsm6dsx_resume_fifo(hw);
+ if (err < 0)
+ goto out;
+ }
+diff --git a/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_core.c b/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_core.c
+index 0b776cb91928b..b3a08e3e23592 100644
+--- a/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_core.c
++++ b/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_core.c
+@@ -2458,7 +2458,7 @@ static int __maybe_unused st_lsm6dsx_resume(struct device *dev)
+ }
+
+ if (hw->fifo_mask)
+- err = st_lsm6dsx_set_fifo_mode(hw, ST_LSM6DSX_FIFO_CONT);
++ err = st_lsm6dsx_resume_fifo(hw);
+
+ return err;
+ }
+diff --git a/drivers/infiniband/core/counters.c b/drivers/infiniband/core/counters.c
+index 738d1faf4bba5..417ebf4d8ba9b 100644
+--- a/drivers/infiniband/core/counters.c
++++ b/drivers/infiniband/core/counters.c
+@@ -288,7 +288,7 @@ int rdma_counter_bind_qp_auto(struct ib_qp *qp, u8 port)
+ struct rdma_counter *counter;
+ int ret;
+
+- if (!qp->res.valid)
++ if (!qp->res.valid || rdma_is_kernel_res(&qp->res))
+ return 0;
+
+ if (!rdma_is_port_valid(dev, port))
+@@ -483,7 +483,7 @@ int rdma_counter_bind_qpn(struct ib_device *dev, u8 port,
+ goto err;
+ }
+
+- if (counter->res.task != qp->res.task) {
++ if (rdma_is_kernel_res(&counter->res) != rdma_is_kernel_res(&qp->res)) {
+ ret = -EINVAL;
+ goto err_task;
+ }
+diff --git a/drivers/infiniband/core/uverbs_cmd.c b/drivers/infiniband/core/uverbs_cmd.c
+index b48b3f6e632d4..557644dcc9237 100644
+--- a/drivers/infiniband/core/uverbs_cmd.c
++++ b/drivers/infiniband/core/uverbs_cmd.c
+@@ -770,6 +770,7 @@ static int ib_uverbs_reg_mr(struct uverbs_attr_bundle *attrs)
+ mr->uobject = uobj;
+ atomic_inc(&pd->usecnt);
+ mr->res.type = RDMA_RESTRACK_MR;
++ mr->iova = cmd.hca_va;
+ rdma_restrack_uadd(&mr->res);
+
+ uobj->object = mr;
+@@ -861,6 +862,9 @@ static int ib_uverbs_rereg_mr(struct uverbs_attr_bundle *attrs)
+ atomic_dec(&old_pd->usecnt);
+ }
+
++ if (cmd.flags & IB_MR_REREG_TRANS)
++ mr->iova = cmd.hca_va;
++
+ memset(&resp, 0, sizeof(resp));
+ resp.lkey = mr->lkey;
+ resp.rkey = mr->rkey;
+diff --git a/drivers/infiniband/hw/cxgb4/mem.c b/drivers/infiniband/hw/cxgb4/mem.c
+index 962dc97a8ff2b..1e4f4e5255980 100644
+--- a/drivers/infiniband/hw/cxgb4/mem.c
++++ b/drivers/infiniband/hw/cxgb4/mem.c
+@@ -399,7 +399,6 @@ static int finish_mem_reg(struct c4iw_mr *mhp, u32 stag)
+ mmid = stag >> 8;
+ mhp->ibmr.rkey = mhp->ibmr.lkey = stag;
+ mhp->ibmr.length = mhp->attr.len;
+- mhp->ibmr.iova = mhp->attr.va_fbo;
+ mhp->ibmr.page_size = 1U << (mhp->attr.page_size + 12);
+ pr_debug("mmid 0x%x mhp %p\n", mmid, mhp);
+ return xa_insert_irq(&mhp->rhp->mrs, mmid, mhp, GFP_KERNEL);
+diff --git a/drivers/infiniband/hw/mlx4/mr.c b/drivers/infiniband/hw/mlx4/mr.c
+index 7e0b205c05eb3..d7c78f841d2f5 100644
+--- a/drivers/infiniband/hw/mlx4/mr.c
++++ b/drivers/infiniband/hw/mlx4/mr.c
+@@ -439,7 +439,6 @@ struct ib_mr *mlx4_ib_reg_user_mr(struct ib_pd *pd, u64 start, u64 length,
+
+ mr->ibmr.rkey = mr->ibmr.lkey = mr->mmr.key;
+ mr->ibmr.length = length;
+- mr->ibmr.iova = virt_addr;
+ mr->ibmr.page_size = 1U << shift;
+
+ return &mr->ibmr;
+diff --git a/drivers/infiniband/ulp/ipoib/ipoib.h b/drivers/infiniband/ulp/ipoib/ipoib.h
+index 9a3379c49541f..9ce6a36fe48ed 100644
+--- a/drivers/infiniband/ulp/ipoib/ipoib.h
++++ b/drivers/infiniband/ulp/ipoib/ipoib.h
+@@ -515,7 +515,7 @@ void ipoib_ib_dev_cleanup(struct net_device *dev);
+
+ int ipoib_ib_dev_open_default(struct net_device *dev);
+ int ipoib_ib_dev_open(struct net_device *dev);
+-int ipoib_ib_dev_stop(struct net_device *dev);
++void ipoib_ib_dev_stop(struct net_device *dev);
+ void ipoib_ib_dev_up(struct net_device *dev);
+ void ipoib_ib_dev_down(struct net_device *dev);
+ int ipoib_ib_dev_stop_default(struct net_device *dev);
+diff --git a/drivers/infiniband/ulp/ipoib/ipoib_ib.c b/drivers/infiniband/ulp/ipoib/ipoib_ib.c
+index da3c5315bbb51..494f413dc3c6c 100644
+--- a/drivers/infiniband/ulp/ipoib/ipoib_ib.c
++++ b/drivers/infiniband/ulp/ipoib/ipoib_ib.c
+@@ -670,13 +670,12 @@ int ipoib_send(struct net_device *dev, struct sk_buff *skb,
+ return rc;
+ }
+
+-static void __ipoib_reap_ah(struct net_device *dev)
++static void ipoib_reap_dead_ahs(struct ipoib_dev_priv *priv)
+ {
+- struct ipoib_dev_priv *priv = ipoib_priv(dev);
+ struct ipoib_ah *ah, *tah;
+ unsigned long flags;
+
+- netif_tx_lock_bh(dev);
++ netif_tx_lock_bh(priv->dev);
+ spin_lock_irqsave(&priv->lock, flags);
+
+ list_for_each_entry_safe(ah, tah, &priv->dead_ahs, list)
+@@ -687,37 +686,37 @@ static void __ipoib_reap_ah(struct net_device *dev)
+ }
+
+ spin_unlock_irqrestore(&priv->lock, flags);
+- netif_tx_unlock_bh(dev);
++ netif_tx_unlock_bh(priv->dev);
+ }
+
+ void ipoib_reap_ah(struct work_struct *work)
+ {
+ struct ipoib_dev_priv *priv =
+ container_of(work, struct ipoib_dev_priv, ah_reap_task.work);
+- struct net_device *dev = priv->dev;
+
+- __ipoib_reap_ah(dev);
++ ipoib_reap_dead_ahs(priv);
+
+ if (!test_bit(IPOIB_STOP_REAPER, &priv->flags))
+ queue_delayed_work(priv->wq, &priv->ah_reap_task,
+ round_jiffies_relative(HZ));
+ }
+
+-static void ipoib_flush_ah(struct net_device *dev)
++static void ipoib_start_ah_reaper(struct ipoib_dev_priv *priv)
+ {
+- struct ipoib_dev_priv *priv = ipoib_priv(dev);
+-
+- cancel_delayed_work(&priv->ah_reap_task);
+- flush_workqueue(priv->wq);
+- ipoib_reap_ah(&priv->ah_reap_task.work);
++ clear_bit(IPOIB_STOP_REAPER, &priv->flags);
++ queue_delayed_work(priv->wq, &priv->ah_reap_task,
++ round_jiffies_relative(HZ));
+ }
+
+-static void ipoib_stop_ah(struct net_device *dev)
++static void ipoib_stop_ah_reaper(struct ipoib_dev_priv *priv)
+ {
+- struct ipoib_dev_priv *priv = ipoib_priv(dev);
+-
+ set_bit(IPOIB_STOP_REAPER, &priv->flags);
+- ipoib_flush_ah(dev);
++ cancel_delayed_work(&priv->ah_reap_task);
++ /*
++ * After ipoib_stop_ah_reaper() we always go through
++ * ipoib_reap_dead_ahs() which ensures the work is really stopped and
++ * does a final flush out of the dead_ah's list
++ */
+ }
+
+ static int recvs_pending(struct net_device *dev)
+@@ -846,18 +845,6 @@ timeout:
+ return 0;
+ }
+
+-int ipoib_ib_dev_stop(struct net_device *dev)
+-{
+- struct ipoib_dev_priv *priv = ipoib_priv(dev);
+-
+- priv->rn_ops->ndo_stop(dev);
+-
+- clear_bit(IPOIB_FLAG_INITIALIZED, &priv->flags);
+- ipoib_flush_ah(dev);
+-
+- return 0;
+-}
+-
+ int ipoib_ib_dev_open_default(struct net_device *dev)
+ {
+ struct ipoib_dev_priv *priv = ipoib_priv(dev);
+@@ -901,10 +888,7 @@ int ipoib_ib_dev_open(struct net_device *dev)
+ return -1;
+ }
+
+- clear_bit(IPOIB_STOP_REAPER, &priv->flags);
+- queue_delayed_work(priv->wq, &priv->ah_reap_task,
+- round_jiffies_relative(HZ));
+-
++ ipoib_start_ah_reaper(priv);
+ if (priv->rn_ops->ndo_open(dev)) {
+ pr_warn("%s: Failed to open dev\n", dev->name);
+ goto dev_stop;
+@@ -915,13 +899,20 @@ int ipoib_ib_dev_open(struct net_device *dev)
+ return 0;
+
+ dev_stop:
+- set_bit(IPOIB_STOP_REAPER, &priv->flags);
+- cancel_delayed_work(&priv->ah_reap_task);
+- set_bit(IPOIB_FLAG_INITIALIZED, &priv->flags);
+- ipoib_ib_dev_stop(dev);
++ ipoib_stop_ah_reaper(priv);
+ return -1;
+ }
+
++void ipoib_ib_dev_stop(struct net_device *dev)
++{
++ struct ipoib_dev_priv *priv = ipoib_priv(dev);
++
++ priv->rn_ops->ndo_stop(dev);
++
++ clear_bit(IPOIB_FLAG_INITIALIZED, &priv->flags);
++ ipoib_stop_ah_reaper(priv);
++}
++
+ void ipoib_pkey_dev_check_presence(struct net_device *dev)
+ {
+ struct ipoib_dev_priv *priv = ipoib_priv(dev);
+@@ -1232,7 +1223,7 @@ static void __ipoib_ib_dev_flush(struct ipoib_dev_priv *priv,
+ ipoib_mcast_dev_flush(dev);
+ if (oper_up)
+ set_bit(IPOIB_FLAG_OPER_UP, &priv->flags);
+- ipoib_flush_ah(dev);
++ ipoib_reap_dead_ahs(priv);
+ }
+
+ if (level >= IPOIB_FLUSH_NORMAL)
+@@ -1307,7 +1298,7 @@ void ipoib_ib_dev_cleanup(struct net_device *dev)
+ * the neighbor garbage collection is stopped and reaped.
+ * That should all be done now, so make a final ah flush.
+ */
+- ipoib_stop_ah(dev);
++ ipoib_reap_dead_ahs(priv);
+
+ clear_bit(IPOIB_PKEY_ASSIGNED, &priv->flags);
+
+diff --git a/drivers/infiniband/ulp/ipoib/ipoib_main.c b/drivers/infiniband/ulp/ipoib/ipoib_main.c
+index 3cfb682b91b0a..ef60e8e4ae67b 100644
+--- a/drivers/infiniband/ulp/ipoib/ipoib_main.c
++++ b/drivers/infiniband/ulp/ipoib/ipoib_main.c
+@@ -1976,6 +1976,8 @@ static void ipoib_ndo_uninit(struct net_device *dev)
+
+ /* no more works over the priv->wq */
+ if (priv->wq) {
++ /* See ipoib_mcast_carrier_on_task() */
++ WARN_ON(test_bit(IPOIB_FLAG_OPER_UP, &priv->flags));
+ flush_workqueue(priv->wq);
+ destroy_workqueue(priv->wq);
+ priv->wq = NULL;
+diff --git a/drivers/input/mouse/sentelic.c b/drivers/input/mouse/sentelic.c
+index e99d9bf1a267d..e78c4c7eda34d 100644
+--- a/drivers/input/mouse/sentelic.c
++++ b/drivers/input/mouse/sentelic.c
+@@ -441,7 +441,7 @@ static ssize_t fsp_attr_set_setreg(struct psmouse *psmouse, void *data,
+
+ fsp_reg_write_enable(psmouse, false);
+
+- return count;
++ return retval;
+ }
+
+ PSMOUSE_DEFINE_WO_ATTR(setreg, S_IWUSR, NULL, fsp_attr_set_setreg);
+diff --git a/drivers/iommu/intel/dmar.c b/drivers/iommu/intel/dmar.c
+index 16f47041f1bf5..ec23a2f0b5f8d 100644
+--- a/drivers/iommu/intel/dmar.c
++++ b/drivers/iommu/intel/dmar.c
+@@ -1459,9 +1459,26 @@ void qi_flush_dev_iotlb_pasid(struct intel_iommu *iommu, u16 sid, u16 pfsid,
+ * Max Invs Pending (MIP) is set to 0 for now until we have DIT in
+ * ECAP.
+ */
+- desc.qw1 |= addr & ~mask;
+- if (size_order)
++ if (addr & GENMASK_ULL(size_order + VTD_PAGE_SHIFT, 0))
++ pr_warn_ratelimited("Invalidate non-aligned address %llx, order %d\n",
++ addr, size_order);
++
++ /* Take page address */
++ desc.qw1 = QI_DEV_EIOTLB_ADDR(addr);
++
++ if (size_order) {
++ /*
++ * Existing 0s in address below size_order may be the least
++ * significant bit, we must set them to 1s to avoid having
++ * smaller size than desired.
++ */
++ desc.qw1 |= GENMASK_ULL(size_order + VTD_PAGE_SHIFT - 1,
++ VTD_PAGE_SHIFT);
++ /* Clear size_order bit to indicate size */
++ desc.qw1 &= ~mask;
++ /* Set the S bit to indicate flushing more than 1 page */
+ desc.qw1 |= QI_DEV_EIOTLB_SIZE;
++ }
+
+ qi_submit_sync(iommu, &desc, 1, 0);
+ }
+diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
+index a459eac967545..04e82f1756010 100644
+--- a/drivers/iommu/intel/iommu.c
++++ b/drivers/iommu/intel/iommu.c
+@@ -2565,7 +2565,7 @@ static struct dmar_domain *dmar_insert_one_dev_info(struct intel_iommu *iommu,
+ }
+
+ if (info->ats_supported && ecap_prs(iommu->ecap) &&
+- pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_PRI))
++ pci_pri_supported(pdev))
+ info->pri_supported = 1;
+ }
+ }
+@@ -5452,13 +5452,12 @@ intel_iommu_sva_invalidate(struct iommu_domain *domain, struct device *dev,
+
+ switch (BIT(cache_type)) {
+ case IOMMU_CACHE_INV_TYPE_IOTLB:
++ /* HW will ignore LSB bits based on address mask */
+ if (inv_info->granularity == IOMMU_INV_GRANU_ADDR &&
+ size &&
+ (inv_info->addr_info.addr & ((BIT(VTD_PAGE_SHIFT + size)) - 1))) {
+- pr_err_ratelimited("Address out of range, 0x%llx, size order %llu\n",
++ pr_err_ratelimited("User address not aligned, 0x%llx, size order %llu\n",
+ inv_info->addr_info.addr, size);
+- ret = -ERANGE;
+- goto out_unlock;
+ }
+
+ /*
+diff --git a/drivers/iommu/intel/svm.c b/drivers/iommu/intel/svm.c
+index 6c87c807a0abb..d386853121a26 100644
+--- a/drivers/iommu/intel/svm.c
++++ b/drivers/iommu/intel/svm.c
+@@ -277,20 +277,16 @@ int intel_svm_bind_gpasid(struct iommu_domain *domain, struct device *dev,
+ goto out;
+ }
+
++ /*
++ * Do not allow multiple bindings of the same device-PASID since
++ * there is only one SL page tables per PASID. We may revisit
++ * once sharing PGD across domains are supported.
++ */
+ for_each_svm_dev(sdev, svm, dev) {
+- /*
+- * For devices with aux domains, we should allow
+- * multiple bind calls with the same PASID and pdev.
+- */
+- if (iommu_dev_feature_enabled(dev,
+- IOMMU_DEV_FEAT_AUX)) {
+- sdev->users++;
+- } else {
+- dev_warn_ratelimited(dev,
+- "Already bound with PASID %u\n",
+- svm->pasid);
+- ret = -EBUSY;
+- }
++ dev_warn_ratelimited(dev,
++ "Already bound with PASID %u\n",
++ svm->pasid);
++ ret = -EBUSY;
+ goto out;
+ }
+ } else {
+diff --git a/drivers/iommu/omap-iommu-debug.c b/drivers/iommu/omap-iommu-debug.c
+index 8e19bfa94121e..a99afb5d9011c 100644
+--- a/drivers/iommu/omap-iommu-debug.c
++++ b/drivers/iommu/omap-iommu-debug.c
+@@ -98,8 +98,11 @@ static ssize_t debug_read_regs(struct file *file, char __user *userbuf,
+ mutex_lock(&iommu_debug_lock);
+
+ bytes = omap_iommu_dump_ctx(obj, p, count);
++ if (bytes < 0)
++ goto err;
+ bytes = simple_read_from_buffer(userbuf, count, ppos, buf, bytes);
+
++err:
+ mutex_unlock(&iommu_debug_lock);
+ kfree(buf);
+
+diff --git a/drivers/irqchip/irq-gic-v3-its.c b/drivers/irqchip/irq-gic-v3-its.c
+index da44bfa48bc25..95f097448f971 100644
+--- a/drivers/irqchip/irq-gic-v3-its.c
++++ b/drivers/irqchip/irq-gic-v3-its.c
+@@ -3523,6 +3523,7 @@ static int its_irq_domain_alloc(struct irq_domain *domain, unsigned int virq,
+ msi_alloc_info_t *info = args;
+ struct its_device *its_dev = info->scratchpad[0].ptr;
+ struct its_node *its = its_dev->its;
++ struct irq_data *irqd;
+ irq_hw_number_t hwirq;
+ int err;
+ int i;
+@@ -3542,7 +3543,9 @@ static int its_irq_domain_alloc(struct irq_domain *domain, unsigned int virq,
+
+ irq_domain_set_hwirq_and_chip(domain, virq + i,
+ hwirq + i, &its_irq_chip, its_dev);
+- irqd_set_single_target(irq_desc_get_irq_data(irq_to_desc(virq + i)));
++ irqd = irq_get_irq_data(virq + i);
++ irqd_set_single_target(irqd);
++ irqd_set_affinity_on_activate(irqd);
+ pr_debug("ID:%d pID:%d vID:%d\n",
+ (int)(hwirq + i - its_dev->event_map.lpi_base),
+ (int)(hwirq + i), virq + i);
+@@ -4087,18 +4090,22 @@ static void its_vpe_4_1_deschedule(struct its_vpe *vpe,
+ static void its_vpe_4_1_invall(struct its_vpe *vpe)
+ {
+ void __iomem *rdbase;
++ unsigned long flags;
+ u64 val;
++ int cpu;
+
+ val = GICR_INVALLR_V;
+ val |= FIELD_PREP(GICR_INVALLR_VPEID, vpe->vpe_id);
+
+ /* Target the redistributor this vPE is currently known on */
+- raw_spin_lock(&gic_data_rdist_cpu(vpe->col_idx)->rd_lock);
+- rdbase = per_cpu_ptr(gic_rdists->rdist, vpe->col_idx)->rd_base;
++ cpu = vpe_to_cpuid_lock(vpe, &flags);
++ raw_spin_lock(&gic_data_rdist_cpu(cpu)->rd_lock);
++ rdbase = per_cpu_ptr(gic_rdists->rdist, cpu)->rd_base;
+ gic_write_lpir(val, rdbase + GICR_INVALLR);
+
+ wait_for_syncr(rdbase);
+- raw_spin_unlock(&gic_data_rdist_cpu(vpe->col_idx)->rd_lock);
++ raw_spin_unlock(&gic_data_rdist_cpu(cpu)->rd_lock);
++ vpe_to_cpuid_unlock(vpe, flags);
+ }
+
+ static int its_vpe_4_1_set_vcpu_affinity(struct irq_data *d, void *vcpu_info)
+diff --git a/drivers/irqchip/irq-loongson-liointc.c b/drivers/irqchip/irq-loongson-liointc.c
+index 6ef86a334c62d..9ed1bc4736634 100644
+--- a/drivers/irqchip/irq-loongson-liointc.c
++++ b/drivers/irqchip/irq-loongson-liointc.c
+@@ -60,7 +60,7 @@ static void liointc_chained_handle_irq(struct irq_desc *desc)
+ if (!pending) {
+ /* Always blame LPC IRQ if we have that bug */
+ if (handler->priv->has_lpc_irq_errata &&
+- (handler->parent_int_map & ~gc->mask_cache &
++ (handler->parent_int_map & gc->mask_cache &
+ BIT(LIOINTC_ERRATA_IRQ)))
+ pending = BIT(LIOINTC_ERRATA_IRQ);
+ else
+@@ -132,11 +132,11 @@ static void liointc_resume(struct irq_chip_generic *gc)
+ irq_gc_lock_irqsave(gc, flags);
+ /* Disable all at first */
+ writel(0xffffffff, gc->reg_base + LIOINTC_REG_INTC_DISABLE);
+- /* Revert map cache */
++ /* Restore map cache */
+ for (i = 0; i < LIOINTC_CHIP_IRQ; i++)
+ writeb(priv->map_cache[i], gc->reg_base + i);
+- /* Revert mask cache */
+- writel(~gc->mask_cache, gc->reg_base + LIOINTC_REG_INTC_ENABLE);
++ /* Restore mask cache */
++ writel(gc->mask_cache, gc->reg_base + LIOINTC_REG_INTC_ENABLE);
+ irq_gc_unlock_irqrestore(gc, flags);
+ }
+
+@@ -244,7 +244,7 @@ int __init liointc_of_init(struct device_node *node,
+ ct->chip.irq_mask_ack = irq_gc_mask_disable_reg;
+ ct->chip.irq_set_type = liointc_set_type;
+
+- gc->mask_cache = 0xffffffff;
++ gc->mask_cache = 0;
+ priv->gc = gc;
+
+ for (i = 0; i < LIOINTC_NUM_PARENT; i++) {
+diff --git a/drivers/md/bcache/bcache.h b/drivers/md/bcache/bcache.h
+index 221e0191b6870..80e3c4813fb05 100644
+--- a/drivers/md/bcache/bcache.h
++++ b/drivers/md/bcache/bcache.h
+@@ -264,7 +264,7 @@ struct bcache_device {
+ #define BCACHE_DEV_UNLINK_DONE 2
+ #define BCACHE_DEV_WB_RUNNING 3
+ #define BCACHE_DEV_RATE_DW_RUNNING 4
+- unsigned int nr_stripes;
++ int nr_stripes;
+ unsigned int stripe_size;
+ atomic_t *stripe_sectors_dirty;
+ unsigned long *full_dirty_stripes;
+diff --git a/drivers/md/bcache/bset.c b/drivers/md/bcache/bset.c
+index 4995fcaefe297..67a2c47f4201a 100644
+--- a/drivers/md/bcache/bset.c
++++ b/drivers/md/bcache/bset.c
+@@ -322,7 +322,7 @@ int bch_btree_keys_alloc(struct btree_keys *b,
+
+ b->page_order = page_order;
+
+- t->data = (void *) __get_free_pages(gfp, b->page_order);
++ t->data = (void *) __get_free_pages(__GFP_COMP|gfp, b->page_order);
+ if (!t->data)
+ goto err;
+
+diff --git a/drivers/md/bcache/btree.c b/drivers/md/bcache/btree.c
+index 6548a601edf0e..dd116c83de80c 100644
+--- a/drivers/md/bcache/btree.c
++++ b/drivers/md/bcache/btree.c
+@@ -785,7 +785,7 @@ int bch_btree_cache_alloc(struct cache_set *c)
+ mutex_init(&c->verify_lock);
+
+ c->verify_ondisk = (void *)
+- __get_free_pages(GFP_KERNEL, ilog2(bucket_pages(c)));
++ __get_free_pages(GFP_KERNEL|__GFP_COMP, ilog2(bucket_pages(c)));
+
+ c->verify_data = mca_bucket_alloc(c, &ZERO_KEY, GFP_KERNEL);
+
+diff --git a/drivers/md/bcache/journal.c b/drivers/md/bcache/journal.c
+index 90aac4e2333f5..d8586b6ccb76a 100644
+--- a/drivers/md/bcache/journal.c
++++ b/drivers/md/bcache/journal.c
+@@ -999,8 +999,8 @@ int bch_journal_alloc(struct cache_set *c)
+ j->w[1].c = c;
+
+ if (!(init_fifo(&j->pin, JOURNAL_PIN, GFP_KERNEL)) ||
+- !(j->w[0].data = (void *) __get_free_pages(GFP_KERNEL, JSET_BITS)) ||
+- !(j->w[1].data = (void *) __get_free_pages(GFP_KERNEL, JSET_BITS)))
++ !(j->w[0].data = (void *) __get_free_pages(GFP_KERNEL|__GFP_COMP, JSET_BITS)) ||
++ !(j->w[1].data = (void *) __get_free_pages(GFP_KERNEL|__GFP_COMP, JSET_BITS)))
+ return -ENOMEM;
+
+ return 0;
+diff --git a/drivers/md/bcache/request.c b/drivers/md/bcache/request.c
+index 7acf024e99f35..9cc044293acd9 100644
+--- a/drivers/md/bcache/request.c
++++ b/drivers/md/bcache/request.c
+@@ -668,7 +668,9 @@ static void backing_request_endio(struct bio *bio)
+ static void bio_complete(struct search *s)
+ {
+ if (s->orig_bio) {
+- bio_end_io_acct(s->orig_bio, s->start_time);
++ /* Count on bcache device */
++ disk_end_io_acct(s->d->disk, bio_op(s->orig_bio), s->start_time);
++
+ trace_bcache_request_end(s->d, s->orig_bio);
+ s->orig_bio->bi_status = s->iop.status;
+ bio_endio(s->orig_bio);
+@@ -728,8 +730,8 @@ static inline struct search *search_alloc(struct bio *bio,
+ s->recoverable = 1;
+ s->write = op_is_write(bio_op(bio));
+ s->read_dirty_data = 0;
+- s->start_time = bio_start_io_acct(bio);
+-
++ /* Count on the bcache device */
++ s->start_time = disk_start_io_acct(d->disk, bio_sectors(bio), bio_op(bio));
+ s->iop.c = d->c;
+ s->iop.bio = NULL;
+ s->iop.inode = d->id;
+@@ -1080,7 +1082,8 @@ static void detached_dev_end_io(struct bio *bio)
+ bio->bi_end_io = ddip->bi_end_io;
+ bio->bi_private = ddip->bi_private;
+
+- bio_end_io_acct(bio, ddip->start_time);
++ /* Count on the bcache device */
++ disk_end_io_acct(ddip->d->disk, bio_op(bio), ddip->start_time);
+
+ if (bio->bi_status) {
+ struct cached_dev *dc = container_of(ddip->d,
+@@ -1105,7 +1108,8 @@ static void detached_dev_do_request(struct bcache_device *d, struct bio *bio)
+ */
+ ddip = kzalloc(sizeof(struct detached_dev_io_private), GFP_NOIO);
+ ddip->d = d;
+- ddip->start_time = bio_start_io_acct(bio);
++ /* Count on the bcache device */
++ ddip->start_time = disk_start_io_acct(d->disk, bio_sectors(bio), bio_op(bio));
+ ddip->bi_end_io = bio->bi_end_io;
+ ddip->bi_private = bio->bi_private;
+ bio->bi_end_io = detached_dev_end_io;
+diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
+index 445bb84ee27f8..e15d078230311 100644
+--- a/drivers/md/bcache/super.c
++++ b/drivers/md/bcache/super.c
+@@ -826,19 +826,19 @@ static int bcache_device_init(struct bcache_device *d, unsigned int block_size,
+ struct request_queue *q;
+ const size_t max_stripes = min_t(size_t, INT_MAX,
+ SIZE_MAX / sizeof(atomic_t));
+- size_t n;
++ uint64_t n;
+ int idx;
+
+ if (!d->stripe_size)
+ d->stripe_size = 1 << 31;
+
+- d->nr_stripes = DIV_ROUND_UP_ULL(sectors, d->stripe_size);
+-
+- if (!d->nr_stripes || d->nr_stripes > max_stripes) {
+- pr_err("nr_stripes too large or invalid: %u (start sector beyond end of disk?)\n",
+- (unsigned int)d->nr_stripes);
++ n = DIV_ROUND_UP_ULL(sectors, d->stripe_size);
++ if (!n || n > max_stripes) {
++ pr_err("nr_stripes too large or invalid: %llu (start sector beyond end of disk?)\n",
++ n);
+ return -ENOMEM;
+ }
++ d->nr_stripes = n;
+
+ n = d->nr_stripes * sizeof(atomic_t);
+ d->stripe_sectors_dirty = kvzalloc(n, GFP_KERNEL);
+@@ -1776,7 +1776,7 @@ void bch_cache_set_unregister(struct cache_set *c)
+ }
+
+ #define alloc_bucket_pages(gfp, c) \
+- ((void *) __get_free_pages(__GFP_ZERO|gfp, ilog2(bucket_pages(c))))
++ ((void *) __get_free_pages(__GFP_ZERO|__GFP_COMP|gfp, ilog2(bucket_pages(c))))
+
+ struct cache_set *bch_cache_set_alloc(struct cache_sb *sb)
+ {
+diff --git a/drivers/md/bcache/writeback.c b/drivers/md/bcache/writeback.c
+index 1cf1e5016cb9d..ab101ad55459a 100644
+--- a/drivers/md/bcache/writeback.c
++++ b/drivers/md/bcache/writeback.c
+@@ -523,15 +523,19 @@ void bcache_dev_sectors_dirty_add(struct cache_set *c, unsigned int inode,
+ uint64_t offset, int nr_sectors)
+ {
+ struct bcache_device *d = c->devices[inode];
+- unsigned int stripe_offset, stripe, sectors_dirty;
++ unsigned int stripe_offset, sectors_dirty;
++ int stripe;
+
+ if (!d)
+ return;
+
++ stripe = offset_to_stripe(d, offset);
++ if (stripe < 0)
++ return;
++
+ if (UUID_FLASH_ONLY(&c->uuids[inode]))
+ atomic_long_add(nr_sectors, &c->flash_dev_dirty_sectors);
+
+- stripe = offset_to_stripe(d, offset);
+ stripe_offset = offset & (d->stripe_size - 1);
+
+ while (nr_sectors) {
+@@ -571,12 +575,12 @@ static bool dirty_pred(struct keybuf *buf, struct bkey *k)
+ static void refill_full_stripes(struct cached_dev *dc)
+ {
+ struct keybuf *buf = &dc->writeback_keys;
+- unsigned int start_stripe, stripe, next_stripe;
++ unsigned int start_stripe, next_stripe;
++ int stripe;
+ bool wrapped = false;
+
+ stripe = offset_to_stripe(&dc->disk, KEY_OFFSET(&buf->last_scanned));
+-
+- if (stripe >= dc->disk.nr_stripes)
++ if (stripe < 0)
+ stripe = 0;
+
+ start_stripe = stripe;
+diff --git a/drivers/md/bcache/writeback.h b/drivers/md/bcache/writeback.h
+index b029843ce5b6f..3f1230e22de01 100644
+--- a/drivers/md/bcache/writeback.h
++++ b/drivers/md/bcache/writeback.h
+@@ -52,10 +52,22 @@ static inline uint64_t bcache_dev_sectors_dirty(struct bcache_device *d)
+ return ret;
+ }
+
+-static inline unsigned int offset_to_stripe(struct bcache_device *d,
++static inline int offset_to_stripe(struct bcache_device *d,
+ uint64_t offset)
+ {
+ do_div(offset, d->stripe_size);
++
++ /* d->nr_stripes is in range [1, INT_MAX] */
++ if (unlikely(offset >= d->nr_stripes)) {
++ pr_err("Invalid stripe %llu (>= nr_stripes %d).\n",
++ offset, d->nr_stripes);
++ return -EINVAL;
++ }
++
++ /*
++ * Here offset is definitly smaller than INT_MAX,
++ * return it as int will never overflow.
++ */
+ return offset;
+ }
+
+@@ -63,7 +75,10 @@ static inline bool bcache_dev_stripe_dirty(struct cached_dev *dc,
+ uint64_t offset,
+ unsigned int nr_sectors)
+ {
+- unsigned int stripe = offset_to_stripe(&dc->disk, offset);
++ int stripe = offset_to_stripe(&dc->disk, offset);
++
++ if (stripe < 0)
++ return false;
+
+ while (1) {
+ if (atomic_read(dc->disk.stripe_sectors_dirty + stripe))
+diff --git a/drivers/md/dm-ebs-target.c b/drivers/md/dm-ebs-target.c
+index 44451276f1281..cb85610527c2c 100644
+--- a/drivers/md/dm-ebs-target.c
++++ b/drivers/md/dm-ebs-target.c
+@@ -363,7 +363,7 @@ static int ebs_map(struct dm_target *ti, struct bio *bio)
+ bio_set_dev(bio, ec->dev->bdev);
+ bio->bi_iter.bi_sector = ec->start + dm_target_offset(ti, bio->bi_iter.bi_sector);
+
+- if (unlikely(bio->bi_opf & REQ_OP_FLUSH))
++ if (unlikely(bio_op(bio) == REQ_OP_FLUSH))
+ return DM_MAPIO_REMAPPED;
+ /*
+ * Only queue for bufio processing in case of partial or overlapping buffers
+diff --git a/drivers/md/dm-rq.c b/drivers/md/dm-rq.c
+index 85e0daabad49c..20745e2e34b94 100644
+--- a/drivers/md/dm-rq.c
++++ b/drivers/md/dm-rq.c
+@@ -70,9 +70,6 @@ void dm_start_queue(struct request_queue *q)
+
+ void dm_stop_queue(struct request_queue *q)
+ {
+- if (blk_mq_queue_stopped(q))
+- return;
+-
+ blk_mq_quiesce_queue(q);
+ }
+
+diff --git a/drivers/md/dm.c b/drivers/md/dm.c
+index 5b9de2f71bb07..88b391ff9bea7 100644
+--- a/drivers/md/dm.c
++++ b/drivers/md/dm.c
+@@ -504,7 +504,8 @@ static int dm_blk_report_zones(struct gendisk *disk, sector_t sector,
+ }
+
+ args.tgt = tgt;
+- ret = tgt->type->report_zones(tgt, &args, nr_zones);
++ ret = tgt->type->report_zones(tgt, &args,
++ nr_zones - args.zone_idx);
+ if (ret < 0)
+ goto out;
+ } while (args.zone_idx < nr_zones &&
+diff --git a/drivers/md/md-cluster.c b/drivers/md/md-cluster.c
+index 73fd50e779754..d50737ec40394 100644
+--- a/drivers/md/md-cluster.c
++++ b/drivers/md/md-cluster.c
+@@ -1139,6 +1139,7 @@ static int resize_bitmaps(struct mddev *mddev, sector_t newsize, sector_t oldsiz
+ bitmap = get_bitmap_from_slot(mddev, i);
+ if (IS_ERR(bitmap)) {
+ pr_err("can't get bitmap from slot %d\n", i);
++ bitmap = NULL;
+ goto out;
+ }
+ counts = &bitmap->counts;
+diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
+index ab8067f9ce8c6..43eedf7adc79c 100644
+--- a/drivers/md/raid5.c
++++ b/drivers/md/raid5.c
+@@ -3607,6 +3607,7 @@ static int need_this_block(struct stripe_head *sh, struct stripe_head_state *s,
+ * is missing/faulty, then we need to read everything we can.
+ */
+ if (sh->raid_conf->level != 6 &&
++ sh->raid_conf->rmw_level != PARITY_DISABLE_RMW &&
+ sh->sector < sh->raid_conf->mddev->recovery_cp)
+ /* reconstruct-write isn't being forced */
+ return 0;
+@@ -4842,7 +4843,7 @@ static void handle_stripe(struct stripe_head *sh)
+ * or to load a block that is being partially written.
+ */
+ if (s.to_read || s.non_overwrite
+- || (conf->level == 6 && s.to_write && s.failed)
++ || (s.to_write && s.failed)
+ || (s.syncing && (s.uptodate + s.compute < disks))
+ || s.replacing
+ || s.expanding)
+diff --git a/drivers/media/platform/qcom/venus/pm_helpers.c b/drivers/media/platform/qcom/venus/pm_helpers.c
+index abf93158857b9..531e7a41658f7 100644
+--- a/drivers/media/platform/qcom/venus/pm_helpers.c
++++ b/drivers/media/platform/qcom/venus/pm_helpers.c
+@@ -496,6 +496,10 @@ min_loaded_core(struct venus_inst *inst, u32 *min_coreid, u32 *min_load)
+ list_for_each_entry(inst_pos, &core->instances, list) {
+ if (inst_pos == inst)
+ continue;
++
++ if (inst_pos->state != INST_START)
++ continue;
++
+ vpp_freq = inst_pos->clk_data.codec_freq_data->vpp_freq;
+ coreid = inst_pos->clk_data.core_id;
+
+diff --git a/drivers/media/platform/rockchip/rga/rga-hw.c b/drivers/media/platform/rockchip/rga/rga-hw.c
+index 4be6dcf292fff..aaa96f256356b 100644
+--- a/drivers/media/platform/rockchip/rga/rga-hw.c
++++ b/drivers/media/platform/rockchip/rga/rga-hw.c
+@@ -200,22 +200,25 @@ static void rga_cmd_set_trans_info(struct rga_ctx *ctx)
+ dst_info.data.format = ctx->out.fmt->hw_format;
+ dst_info.data.swap = ctx->out.fmt->color_swap;
+
+- if (ctx->in.fmt->hw_format >= RGA_COLOR_FMT_YUV422SP) {
+- if (ctx->out.fmt->hw_format < RGA_COLOR_FMT_YUV422SP) {
+- switch (ctx->in.colorspace) {
+- case V4L2_COLORSPACE_REC709:
+- src_info.data.csc_mode =
+- RGA_SRC_CSC_MODE_BT709_R0;
+- break;
+- default:
+- src_info.data.csc_mode =
+- RGA_SRC_CSC_MODE_BT601_R0;
+- break;
+- }
++ /*
++ * CSC mode must only be set when the colorspace families differ between
++ * input and output. It must remain unset (zeroed) if both are the same.
++ */
++
++ if (RGA_COLOR_FMT_IS_YUV(ctx->in.fmt->hw_format) &&
++ RGA_COLOR_FMT_IS_RGB(ctx->out.fmt->hw_format)) {
++ switch (ctx->in.colorspace) {
++ case V4L2_COLORSPACE_REC709:
++ src_info.data.csc_mode = RGA_SRC_CSC_MODE_BT709_R0;
++ break;
++ default:
++ src_info.data.csc_mode = RGA_SRC_CSC_MODE_BT601_R0;
++ break;
+ }
+ }
+
+- if (ctx->out.fmt->hw_format >= RGA_COLOR_FMT_YUV422SP) {
++ if (RGA_COLOR_FMT_IS_RGB(ctx->in.fmt->hw_format) &&
++ RGA_COLOR_FMT_IS_YUV(ctx->out.fmt->hw_format)) {
+ switch (ctx->out.colorspace) {
+ case V4L2_COLORSPACE_REC709:
+ dst_info.data.csc_mode = RGA_SRC_CSC_MODE_BT709_R0;
+diff --git a/drivers/media/platform/rockchip/rga/rga-hw.h b/drivers/media/platform/rockchip/rga/rga-hw.h
+index 96cb0314dfa70..e8917e5630a48 100644
+--- a/drivers/media/platform/rockchip/rga/rga-hw.h
++++ b/drivers/media/platform/rockchip/rga/rga-hw.h
+@@ -95,6 +95,11 @@
+ #define RGA_COLOR_FMT_CP_8BPP 15
+ #define RGA_COLOR_FMT_MASK 15
+
++#define RGA_COLOR_FMT_IS_YUV(fmt) \
++ (((fmt) >= RGA_COLOR_FMT_YUV422SP) && ((fmt) < RGA_COLOR_FMT_CP_1BPP))
++#define RGA_COLOR_FMT_IS_RGB(fmt) \
++ ((fmt) < RGA_COLOR_FMT_YUV422SP)
++
+ #define RGA_COLOR_NONE_SWAP 0
+ #define RGA_COLOR_RB_SWAP 1
+ #define RGA_COLOR_ALPHA_SWAP 2
+diff --git a/drivers/media/platform/vsp1/vsp1_dl.c b/drivers/media/platform/vsp1/vsp1_dl.c
+index d7b43037e500a..e07b135613eb5 100644
+--- a/drivers/media/platform/vsp1/vsp1_dl.c
++++ b/drivers/media/platform/vsp1/vsp1_dl.c
+@@ -431,6 +431,8 @@ vsp1_dl_cmd_pool_create(struct vsp1_device *vsp1, enum vsp1_extcmd_type type,
+ if (!pool)
+ return NULL;
+
++ pool->vsp1 = vsp1;
++
+ spin_lock_init(&pool->lock);
+ INIT_LIST_HEAD(&pool->free);
+
+diff --git a/drivers/mfd/arizona-core.c b/drivers/mfd/arizona-core.c
+index f73cf76d1373d..a5e443110fc3d 100644
+--- a/drivers/mfd/arizona-core.c
++++ b/drivers/mfd/arizona-core.c
+@@ -1426,6 +1426,15 @@ err_irq:
+ arizona_irq_exit(arizona);
+ err_pm:
+ pm_runtime_disable(arizona->dev);
++
++ switch (arizona->pdata.clk32k_src) {
++ case ARIZONA_32KZ_MCLK1:
++ case ARIZONA_32KZ_MCLK2:
++ arizona_clk32k_disable(arizona);
++ break;
++ default:
++ break;
++ }
+ err_reset:
+ arizona_enable_reset(arizona);
+ regulator_disable(arizona->dcvdd);
+@@ -1448,6 +1457,15 @@ int arizona_dev_exit(struct arizona *arizona)
+ regulator_disable(arizona->dcvdd);
+ regulator_put(arizona->dcvdd);
+
++ switch (arizona->pdata.clk32k_src) {
++ case ARIZONA_32KZ_MCLK1:
++ case ARIZONA_32KZ_MCLK2:
++ arizona_clk32k_disable(arizona);
++ break;
++ default:
++ break;
++ }
++
+ mfd_remove_devices(arizona->dev);
+ arizona_free_irq(arizona, ARIZONA_IRQ_UNDERCLOCKED, arizona);
+ arizona_free_irq(arizona, ARIZONA_IRQ_OVERCLOCKED, arizona);
+diff --git a/drivers/mfd/dln2.c b/drivers/mfd/dln2.c
+index 39276fa626d2b..83e676a096dc1 100644
+--- a/drivers/mfd/dln2.c
++++ b/drivers/mfd/dln2.c
+@@ -287,7 +287,11 @@ static void dln2_rx(struct urb *urb)
+ len = urb->actual_length - sizeof(struct dln2_header);
+
+ if (handle == DLN2_HANDLE_EVENT) {
++ unsigned long flags;
++
++ spin_lock_irqsave(&dln2->event_cb_lock, flags);
+ dln2_run_event_callbacks(dln2, id, echo, data, len);
++ spin_unlock_irqrestore(&dln2->event_cb_lock, flags);
+ } else {
+ /* URB will be re-submitted in _dln2_transfer (free_rx_slot) */
+ if (dln2_transfer_complete(dln2, urb, handle, echo))
+diff --git a/drivers/mmc/host/renesas_sdhi_internal_dmac.c b/drivers/mmc/host/renesas_sdhi_internal_dmac.c
+index 47ac53e912411..201b8ed37f2e0 100644
+--- a/drivers/mmc/host/renesas_sdhi_internal_dmac.c
++++ b/drivers/mmc/host/renesas_sdhi_internal_dmac.c
+@@ -229,15 +229,12 @@ static void renesas_sdhi_internal_dmac_issue_tasklet_fn(unsigned long arg)
+ DTRAN_CTRL_DM_START);
+ }
+
+-static void renesas_sdhi_internal_dmac_complete_tasklet_fn(unsigned long arg)
++static bool renesas_sdhi_internal_dmac_complete(struct tmio_mmc_host *host)
+ {
+- struct tmio_mmc_host *host = (struct tmio_mmc_host *)arg;
+ enum dma_data_direction dir;
+
+- spin_lock_irq(&host->lock);
+-
+ if (!host->data)
+- goto out;
++ return false;
+
+ if (host->data->flags & MMC_DATA_READ)
+ dir = DMA_FROM_DEVICE;
+@@ -250,6 +247,17 @@ static void renesas_sdhi_internal_dmac_complete_tasklet_fn(unsigned long arg)
+ if (dir == DMA_FROM_DEVICE)
+ clear_bit(SDHI_INTERNAL_DMAC_RX_IN_USE, &global_flags);
+
++ return true;
++}
++
++static void renesas_sdhi_internal_dmac_complete_tasklet_fn(unsigned long arg)
++{
++ struct tmio_mmc_host *host = (struct tmio_mmc_host *)arg;
++
++ spin_lock_irq(&host->lock);
++ if (!renesas_sdhi_internal_dmac_complete(host))
++ goto out;
++
+ tmio_mmc_do_data_irq(host);
+ out:
+ spin_unlock_irq(&host->lock);
+diff --git a/drivers/mtd/nand/raw/brcmnand/brcmnand.c b/drivers/mtd/nand/raw/brcmnand/brcmnand.c
+index ac934a715a194..a4033d32a7103 100644
+--- a/drivers/mtd/nand/raw/brcmnand/brcmnand.c
++++ b/drivers/mtd/nand/raw/brcmnand/brcmnand.c
+@@ -1918,6 +1918,22 @@ static int brcmnand_edu_trans(struct brcmnand_host *host, u64 addr, u32 *buf,
+ edu_writel(ctrl, EDU_STOP, 0); /* force stop */
+ edu_readl(ctrl, EDU_STOP);
+
++ if (!ret && edu_cmd == EDU_CMD_READ) {
++ u64 err_addr = 0;
++
++ /*
++ * check for ECC errors here; subpage ECC errors are
++ * retained in the ECC error address register
++ */
++ err_addr = brcmnand_get_uncorrecc_addr(ctrl);
++ if (!err_addr) {
++ err_addr = brcmnand_get_correcc_addr(ctrl);
++ if (err_addr)
++ ret = -EUCLEAN;
++ } else
++ ret = -EBADMSG;
++ }
++
+ return ret;
+ }
+
+@@ -2124,6 +2140,7 @@ static int brcmnand_read(struct mtd_info *mtd, struct nand_chip *chip,
+ u64 err_addr = 0;
+ int err;
+ bool retry = true;
++ bool edu_err = false;
+
+ dev_dbg(ctrl->dev, "read %llx -> %p\n", (unsigned long long)addr, buf);
+
+@@ -2141,6 +2158,10 @@ try_dmaread:
+ else
+ return -EIO;
+ }
++
++ if (has_edu(ctrl) && err_addr)
++ edu_err = true;
++
+ } else {
+ if (oob)
+ memset(oob, 0x99, mtd->oobsize);
+@@ -2188,6 +2209,11 @@ try_dmaread:
+ if (mtd_is_bitflip(err)) {
+ unsigned int corrected = brcmnand_count_corrected(ctrl);
+
++ /* in case of an EDU correctable error we read again using PIO */
++ if (edu_err)
++ err = brcmnand_read_by_pio(mtd, chip, addr, trans, buf,
++ oob, &err_addr);
++
+ dev_dbg(ctrl->dev, "corrected error at 0x%llx\n",
+ (unsigned long long)err_addr);
+ mtd->ecc_stats.corrected += corrected;
+diff --git a/drivers/mtd/nand/raw/fsl_upm.c b/drivers/mtd/nand/raw/fsl_upm.c
+index 627deb26db512..76d1032cd35e8 100644
+--- a/drivers/mtd/nand/raw/fsl_upm.c
++++ b/drivers/mtd/nand/raw/fsl_upm.c
+@@ -62,7 +62,6 @@ static int fun_chip_ready(struct nand_chip *chip)
+ static void fun_wait_rnb(struct fsl_upm_nand *fun)
+ {
+ if (fun->rnb_gpio[fun->mchip_number] >= 0) {
+- struct mtd_info *mtd = nand_to_mtd(&fun->chip);
+ int cnt = 1000000;
+
+ while (--cnt && !fun_chip_ready(&fun->chip))
+diff --git a/drivers/mtd/ubi/fastmap-wl.c b/drivers/mtd/ubi/fastmap-wl.c
+index 83afc00e365a5..28f55f9cf7153 100644
+--- a/drivers/mtd/ubi/fastmap-wl.c
++++ b/drivers/mtd/ubi/fastmap-wl.c
+@@ -381,6 +381,11 @@ static void ubi_fastmap_close(struct ubi_device *ubi)
+ ubi->fm_anchor = NULL;
+ }
+
++ if (ubi->fm_next_anchor) {
++ return_unused_peb(ubi, ubi->fm_next_anchor);
++ ubi->fm_next_anchor = NULL;
++ }
++
+ if (ubi->fm) {
+ for (i = 0; i < ubi->fm->used_blocks; i++)
+ kfree(ubi->fm->e[i]);
+diff --git a/drivers/mtd/ubi/wl.c b/drivers/mtd/ubi/wl.c
+index 27636063ed1bb..42cac572f82dc 100644
+--- a/drivers/mtd/ubi/wl.c
++++ b/drivers/mtd/ubi/wl.c
+@@ -1086,7 +1086,8 @@ static int __erase_worker(struct ubi_device *ubi, struct ubi_work *wl_wrk)
+ if (!err) {
+ spin_lock(&ubi->wl_lock);
+
+- if (!ubi->fm_next_anchor && e->pnum < UBI_FM_MAX_START) {
++ if (!ubi->fm_disabled && !ubi->fm_next_anchor &&
++ e->pnum < UBI_FM_MAX_START) {
+ /* Abort anchor production, if needed it will be
+ * enabled again in the wear leveling started below.
+ */
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/common.h b/drivers/net/ethernet/marvell/octeontx2/af/common.h
+index cd33c2e6ca5fc..f48eb66ed021b 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/common.h
++++ b/drivers/net/ethernet/marvell/octeontx2/af/common.h
+@@ -43,7 +43,7 @@ struct qmem {
+ void *base;
+ dma_addr_t iova;
+ int alloc_sz;
+- u8 entry_sz;
++ u16 entry_sz;
+ u8 align;
+ u32 qsize;
+ };
+diff --git a/drivers/net/ethernet/qualcomm/emac/emac.c b/drivers/net/ethernet/qualcomm/emac/emac.c
+index 20b1b43a0e393..1166b98d8bb2c 100644
+--- a/drivers/net/ethernet/qualcomm/emac/emac.c
++++ b/drivers/net/ethernet/qualcomm/emac/emac.c
+@@ -474,13 +474,24 @@ static int emac_clks_phase1_init(struct platform_device *pdev,
+
+ ret = clk_prepare_enable(adpt->clk[EMAC_CLK_CFG_AHB]);
+ if (ret)
+- return ret;
++ goto disable_clk_axi;
+
+ ret = clk_set_rate(adpt->clk[EMAC_CLK_HIGH_SPEED], 19200000);
+ if (ret)
+- return ret;
++ goto disable_clk_cfg_ahb;
++
++ ret = clk_prepare_enable(adpt->clk[EMAC_CLK_HIGH_SPEED]);
++ if (ret)
++ goto disable_clk_cfg_ahb;
+
+- return clk_prepare_enable(adpt->clk[EMAC_CLK_HIGH_SPEED]);
++ return 0;
++
++disable_clk_cfg_ahb:
++ clk_disable_unprepare(adpt->clk[EMAC_CLK_CFG_AHB]);
++disable_clk_axi:
++ clk_disable_unprepare(adpt->clk[EMAC_CLK_AXI]);
++
++ return ret;
+ }
+
+ /* Enable clocks; needs emac_clks_phase1_init to be called before */
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-ipq806x.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-ipq806x.c
+index 02102c781a8cf..bf3250e0e59ca 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-ipq806x.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-ipq806x.c
+@@ -351,6 +351,7 @@ static int ipq806x_gmac_probe(struct platform_device *pdev)
+ plat_dat->has_gmac = true;
+ plat_dat->bsp_priv = gmac;
+ plat_dat->fix_mac_speed = ipq806x_gmac_fix_mac_speed;
++ plat_dat->multicast_filter_bins = 0;
+
+ err = stmmac_dvr_probe(&pdev->dev, plat_dat, &stmmac_res);
+ if (err)
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac1000_core.c b/drivers/net/ethernet/stmicro/stmmac/dwmac1000_core.c
+index efc6ec1b8027c..fc8759f146c7c 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac1000_core.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac1000_core.c
+@@ -164,6 +164,9 @@ static void dwmac1000_set_filter(struct mac_device_info *hw,
+ value = GMAC_FRAME_FILTER_PR | GMAC_FRAME_FILTER_PCF;
+ } else if (dev->flags & IFF_ALLMULTI) {
+ value = GMAC_FRAME_FILTER_PM; /* pass all multi */
++ } else if (!netdev_mc_empty(dev) && (mcbitslog2 == 0)) {
++ /* Fall back to all multicast if we've no filter */
++ value = GMAC_FRAME_FILTER_PM;
+ } else if (!netdev_mc_empty(dev)) {
+ struct netdev_hw_addr *ha;
+
+diff --git a/drivers/net/wireless/realtek/rtw88/pci.c b/drivers/net/wireless/realtek/rtw88/pci.c
+index 8228db9a5fc86..3413973bc4750 100644
+--- a/drivers/net/wireless/realtek/rtw88/pci.c
++++ b/drivers/net/wireless/realtek/rtw88/pci.c
+@@ -14,8 +14,11 @@
+ #include "debug.h"
+
+ static bool rtw_disable_msi;
++static bool rtw_pci_disable_aspm;
+ module_param_named(disable_msi, rtw_disable_msi, bool, 0644);
++module_param_named(disable_aspm, rtw_pci_disable_aspm, bool, 0644);
+ MODULE_PARM_DESC(disable_msi, "Set Y to disable MSI interrupt support");
++MODULE_PARM_DESC(disable_aspm, "Set Y to disable PCI ASPM support");
+
+ static u32 rtw_pci_tx_queue_idx_addr[] = {
+ [RTW_TX_QUEUE_BK] = RTK_PCI_TXBD_IDX_BKQ,
+@@ -1200,6 +1203,9 @@ static void rtw_pci_clkreq_set(struct rtw_dev *rtwdev, bool enable)
+ u8 value;
+ int ret;
+
++ if (rtw_pci_disable_aspm)
++ return;
++
+ ret = rtw_dbi_read8(rtwdev, RTK_PCIE_LINK_CFG, &value);
+ if (ret) {
+ rtw_err(rtwdev, "failed to read CLKREQ_L1, ret=%d", ret);
+@@ -1219,6 +1225,9 @@ static void rtw_pci_aspm_set(struct rtw_dev *rtwdev, bool enable)
+ u8 value;
+ int ret;
+
++ if (rtw_pci_disable_aspm)
++ return;
++
+ ret = rtw_dbi_read8(rtwdev, RTK_PCIE_LINK_CFG, &value);
+ if (ret) {
+ rtw_err(rtwdev, "failed to read ASPM, ret=%d", ret);
+diff --git a/drivers/nvdimm/bus.c b/drivers/nvdimm/bus.c
+index 09087c38fabdc..955265656b96c 100644
+--- a/drivers/nvdimm/bus.c
++++ b/drivers/nvdimm/bus.c
+@@ -1037,9 +1037,25 @@ static int __nd_ioctl(struct nvdimm_bus *nvdimm_bus, struct nvdimm *nvdimm,
+ dimm_name = "bus";
+ }
+
++ /* Validate command family support against bus declared support */
+ if (cmd == ND_CMD_CALL) {
++ unsigned long *mask;
++
+ if (copy_from_user(&pkg, p, sizeof(pkg)))
+ return -EFAULT;
++
++ if (nvdimm) {
++ if (pkg.nd_family > NVDIMM_FAMILY_MAX)
++ return -EINVAL;
++ mask = &nd_desc->dimm_family_mask;
++ } else {
++ if (pkg.nd_family > NVDIMM_BUS_FAMILY_MAX)
++ return -EINVAL;
++ mask = &nd_desc->bus_family_mask;
++ }
++
++ if (!test_bit(pkg.nd_family, mask))
++ return -EINVAL;
+ }
+
+ if (!desc ||
+diff --git a/drivers/nvdimm/security.c b/drivers/nvdimm/security.c
+index 4cef69bd3c1bd..4b80150e4afa7 100644
+--- a/drivers/nvdimm/security.c
++++ b/drivers/nvdimm/security.c
+@@ -450,14 +450,19 @@ void __nvdimm_security_overwrite_query(struct nvdimm *nvdimm)
+ else
+ dev_dbg(&nvdimm->dev, "overwrite completed\n");
+
+- if (nvdimm->sec.overwrite_state)
+- sysfs_notify_dirent(nvdimm->sec.overwrite_state);
++ /*
++ * Mark the overwrite work done and update dimm security flags,
++ * then send a sysfs event notification to wake up userspace
++ * poll threads to pick up the changed state.
++ */
+ nvdimm->sec.overwrite_tmo = 0;
+ clear_bit(NDD_SECURITY_OVERWRITE, &nvdimm->flags);
+ clear_bit(NDD_WORK_PENDING, &nvdimm->flags);
+- put_device(&nvdimm->dev);
+ nvdimm->sec.flags = nvdimm_security_flags(nvdimm, NVDIMM_USER);
+- nvdimm->sec.flags = nvdimm_security_flags(nvdimm, NVDIMM_MASTER);
++ nvdimm->sec.ext_flags = nvdimm_security_flags(nvdimm, NVDIMM_MASTER);
++ if (nvdimm->sec.overwrite_state)
++ sysfs_notify_dirent(nvdimm->sec.overwrite_state);
++ put_device(&nvdimm->dev);
+ }
+
+ void nvdimm_security_overwrite_query(struct work_struct *work)
+diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
+index 4ee2330c603e7..f38548e6d55ec 100644
+--- a/drivers/nvme/host/core.c
++++ b/drivers/nvme/host/core.c
+@@ -362,6 +362,16 @@ bool nvme_change_ctrl_state(struct nvme_ctrl *ctrl,
+ break;
+ }
+ break;
++ case NVME_CTRL_DELETING_NOIO:
++ switch (old_state) {
++ case NVME_CTRL_DELETING:
++ case NVME_CTRL_DEAD:
++ changed = true;
++ /* FALLTHRU */
++ default:
++ break;
++ }
++ break;
+ case NVME_CTRL_DEAD:
+ switch (old_state) {
+ case NVME_CTRL_DELETING:
+@@ -399,6 +409,7 @@ static bool nvme_state_terminal(struct nvme_ctrl *ctrl)
+ case NVME_CTRL_CONNECTING:
+ return false;
+ case NVME_CTRL_DELETING:
++ case NVME_CTRL_DELETING_NOIO:
+ case NVME_CTRL_DEAD:
+ return true;
+ default:
+@@ -3344,6 +3355,7 @@ static ssize_t nvme_sysfs_show_state(struct device *dev,
+ [NVME_CTRL_RESETTING] = "resetting",
+ [NVME_CTRL_CONNECTING] = "connecting",
+ [NVME_CTRL_DELETING] = "deleting",
++ [NVME_CTRL_DELETING_NOIO]= "deleting (no IO)",
+ [NVME_CTRL_DEAD] = "dead",
+ };
+
+@@ -3911,6 +3923,9 @@ void nvme_remove_namespaces(struct nvme_ctrl *ctrl)
+ if (ctrl->state == NVME_CTRL_DEAD)
+ nvme_kill_queues(ctrl);
+
++ /* this is a no-op when called from the controller reset handler */
++ nvme_change_ctrl_state(ctrl, NVME_CTRL_DELETING_NOIO);
++
+ down_write(&ctrl->namespaces_rwsem);
+ list_splice_init(&ctrl->namespaces, &ns_list);
+ up_write(&ctrl->namespaces_rwsem);
+diff --git a/drivers/nvme/host/fabrics.c b/drivers/nvme/host/fabrics.c
+index 2a6c8190eeb76..4ec4829d62334 100644
+--- a/drivers/nvme/host/fabrics.c
++++ b/drivers/nvme/host/fabrics.c
+@@ -547,7 +547,7 @@ static struct nvmf_transport_ops *nvmf_lookup_transport(
+ blk_status_t nvmf_fail_nonready_command(struct nvme_ctrl *ctrl,
+ struct request *rq)
+ {
+- if (ctrl->state != NVME_CTRL_DELETING &&
++ if (ctrl->state != NVME_CTRL_DELETING_NOIO &&
+ ctrl->state != NVME_CTRL_DEAD &&
+ !blk_noretry_request(rq) && !(rq->cmd_flags & REQ_NVME_MPATH))
+ return BLK_STS_RESOURCE;
+diff --git a/drivers/nvme/host/fabrics.h b/drivers/nvme/host/fabrics.h
+index a0ec40ab62eeb..a9c1e3b4585ec 100644
+--- a/drivers/nvme/host/fabrics.h
++++ b/drivers/nvme/host/fabrics.h
+@@ -182,7 +182,8 @@ bool nvmf_ip_options_match(struct nvme_ctrl *ctrl,
+ static inline bool nvmf_check_ready(struct nvme_ctrl *ctrl, struct request *rq,
+ bool queue_live)
+ {
+- if (likely(ctrl->state == NVME_CTRL_LIVE))
++ if (likely(ctrl->state == NVME_CTRL_LIVE ||
++ ctrl->state == NVME_CTRL_DELETING))
+ return true;
+ return __nvmf_check_ready(ctrl, rq, queue_live);
+ }
+diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c
+index e999a8c4b7e87..549f5b0fb0b4b 100644
+--- a/drivers/nvme/host/fc.c
++++ b/drivers/nvme/host/fc.c
+@@ -825,6 +825,7 @@ nvme_fc_ctrl_connectivity_loss(struct nvme_fc_ctrl *ctrl)
+ break;
+
+ case NVME_CTRL_DELETING:
++ case NVME_CTRL_DELETING_NOIO:
+ default:
+ /* no action to take - let it delete */
+ break;
+diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
+index 57d51148e71b6..2672953233434 100644
+--- a/drivers/nvme/host/multipath.c
++++ b/drivers/nvme/host/multipath.c
+@@ -167,9 +167,18 @@ void nvme_mpath_clear_ctrl_paths(struct nvme_ctrl *ctrl)
+
+ static bool nvme_path_is_disabled(struct nvme_ns *ns)
+ {
+- return ns->ctrl->state != NVME_CTRL_LIVE ||
+- test_bit(NVME_NS_ANA_PENDING, &ns->flags) ||
+- test_bit(NVME_NS_REMOVING, &ns->flags);
++ /*
++ * We don't treat NVME_CTRL_DELETING as a disabled path as I/O should
++ * still be able to complete assuming that the controller is connected.
++ * Otherwise it will fail immediately and return to the requeue list.
++ */
++ if (ns->ctrl->state != NVME_CTRL_LIVE &&
++ ns->ctrl->state != NVME_CTRL_DELETING)
++ return true;
++ if (test_bit(NVME_NS_ANA_PENDING, &ns->flags) ||
++ test_bit(NVME_NS_REMOVING, &ns->flags))
++ return true;
++ return false;
+ }
+
+ static struct nvme_ns *__nvme_find_path(struct nvme_ns_head *head, int node)
+@@ -574,6 +583,9 @@ static void nvme_ana_work(struct work_struct *work)
+ {
+ struct nvme_ctrl *ctrl = container_of(work, struct nvme_ctrl, ana_work);
+
++ if (ctrl->state != NVME_CTRL_LIVE)
++ return;
++
+ nvme_read_ana_log(ctrl);
+ }
+
+diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
+index 09ffc3246f60e..e268f1d7e1a0f 100644
+--- a/drivers/nvme/host/nvme.h
++++ b/drivers/nvme/host/nvme.h
+@@ -186,6 +186,7 @@ enum nvme_ctrl_state {
+ NVME_CTRL_RESETTING,
+ NVME_CTRL_CONNECTING,
+ NVME_CTRL_DELETING,
++ NVME_CTRL_DELETING_NOIO,
+ NVME_CTRL_DEAD,
+ };
+
+diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
+index af0cfd25ed7a4..876859cd14e86 100644
+--- a/drivers/nvme/host/rdma.c
++++ b/drivers/nvme/host/rdma.c
+@@ -1082,11 +1082,12 @@ static int nvme_rdma_setup_ctrl(struct nvme_rdma_ctrl *ctrl, bool new)
+ changed = nvme_change_ctrl_state(&ctrl->ctrl, NVME_CTRL_LIVE);
+ if (!changed) {
+ /*
+- * state change failure is ok if we're in DELETING state,
++ * state change failure is ok if we started ctrl delete,
+ * unless we're during creation of a new controller to
+ * avoid races with teardown flow.
+ */
+- WARN_ON_ONCE(ctrl->ctrl.state != NVME_CTRL_DELETING);
++ WARN_ON_ONCE(ctrl->ctrl.state != NVME_CTRL_DELETING &&
++ ctrl->ctrl.state != NVME_CTRL_DELETING_NOIO);
+ WARN_ON_ONCE(new);
+ ret = -EINVAL;
+ goto destroy_io;
+@@ -1139,8 +1140,9 @@ static void nvme_rdma_error_recovery_work(struct work_struct *work)
+ blk_mq_unquiesce_queue(ctrl->ctrl.admin_q);
+
+ if (!nvme_change_ctrl_state(&ctrl->ctrl, NVME_CTRL_CONNECTING)) {
+- /* state change failure is ok if we're in DELETING state */
+- WARN_ON_ONCE(ctrl->ctrl.state != NVME_CTRL_DELETING);
++ /* state change failure is ok if we started ctrl delete */
++ WARN_ON_ONCE(ctrl->ctrl.state != NVME_CTRL_DELETING &&
++ ctrl->ctrl.state != NVME_CTRL_DELETING_NOIO);
+ return;
+ }
+
+diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
+index 83bb329d4113a..a6d2e3330a584 100644
+--- a/drivers/nvme/host/tcp.c
++++ b/drivers/nvme/host/tcp.c
+@@ -1929,11 +1929,12 @@ static int nvme_tcp_setup_ctrl(struct nvme_ctrl *ctrl, bool new)
+
+ if (!nvme_change_ctrl_state(ctrl, NVME_CTRL_LIVE)) {
+ /*
+- * state change failure is ok if we're in DELETING state,
++ * state change failure is ok if we started ctrl delete,
+ * unless we're during creation of a new controller to
+ * avoid races with teardown flow.
+ */
+- WARN_ON_ONCE(ctrl->state != NVME_CTRL_DELETING);
++ WARN_ON_ONCE(ctrl->state != NVME_CTRL_DELETING &&
++ ctrl->state != NVME_CTRL_DELETING_NOIO);
+ WARN_ON_ONCE(new);
+ ret = -EINVAL;
+ goto destroy_io;
+@@ -1989,8 +1990,9 @@ static void nvme_tcp_error_recovery_work(struct work_struct *work)
+ blk_mq_unquiesce_queue(ctrl->admin_q);
+
+ if (!nvme_change_ctrl_state(ctrl, NVME_CTRL_CONNECTING)) {
+- /* state change failure is ok if we're in DELETING state */
+- WARN_ON_ONCE(ctrl->state != NVME_CTRL_DELETING);
++ /* state change failure is ok if we started ctrl delete */
++ WARN_ON_ONCE(ctrl->state != NVME_CTRL_DELETING &&
++ ctrl->state != NVME_CTRL_DELETING_NOIO);
+ return;
+ }
+
+@@ -2025,8 +2027,9 @@ static void nvme_reset_ctrl_work(struct work_struct *work)
+ nvme_tcp_teardown_ctrl(ctrl, false);
+
+ if (!nvme_change_ctrl_state(ctrl, NVME_CTRL_CONNECTING)) {
+- /* state change failure is ok if we're in DELETING state */
+- WARN_ON_ONCE(ctrl->state != NVME_CTRL_DELETING);
++ /* state change failure is ok if we started ctrl delete */
++ WARN_ON_ONCE(ctrl->state != NVME_CTRL_DELETING &&
++ ctrl->state != NVME_CTRL_DELETING_NOIO);
+ return;
+ }
+
+diff --git a/drivers/pci/ats.c b/drivers/pci/ats.c
+index b761c1f72f672..647e097530a89 100644
+--- a/drivers/pci/ats.c
++++ b/drivers/pci/ats.c
+@@ -325,6 +325,21 @@ int pci_prg_resp_pasid_required(struct pci_dev *pdev)
+
+ return pdev->pasid_required;
+ }
++
++/**
++ * pci_pri_supported - Check if PRI is supported.
++ * @pdev: PCI device structure
++ *
++ * Returns true if PRI capability is present, false otherwise.
++ */
++bool pci_pri_supported(struct pci_dev *pdev)
++{
++ /* VFs share the PF PRI */
++ if (pci_physfn(pdev)->pri_cap)
++ return true;
++ return false;
++}
++EXPORT_SYMBOL_GPL(pci_pri_supported);
+ #endif /* CONFIG_PCI_PRI */
+
+ #ifdef CONFIG_PCI_PASID
+diff --git a/drivers/pci/bus.c b/drivers/pci/bus.c
+index 8e40b3e6da77d..3cef835b375fd 100644
+--- a/drivers/pci/bus.c
++++ b/drivers/pci/bus.c
+@@ -322,12 +322,8 @@ void pci_bus_add_device(struct pci_dev *dev)
+
+ dev->match_driver = true;
+ retval = device_attach(&dev->dev);
+- if (retval < 0 && retval != -EPROBE_DEFER) {
++ if (retval < 0 && retval != -EPROBE_DEFER)
+ pci_warn(dev, "device attach failed (%d)\n", retval);
+- pci_proc_detach_device(dev);
+- pci_remove_sysfs_dev_files(dev);
+- return;
+- }
+
+ pci_dev_assign_added(dev, true);
+ }
+diff --git a/drivers/pci/controller/dwc/pcie-qcom.c b/drivers/pci/controller/dwc/pcie-qcom.c
+index 138e1a2d21ccd..5dd1740855770 100644
+--- a/drivers/pci/controller/dwc/pcie-qcom.c
++++ b/drivers/pci/controller/dwc/pcie-qcom.c
+@@ -45,7 +45,13 @@
+ #define PCIE_CAP_CPL_TIMEOUT_DISABLE 0x10
+
+ #define PCIE20_PARF_PHY_CTRL 0x40
++#define PHY_CTRL_PHY_TX0_TERM_OFFSET_MASK GENMASK(20, 16)
++#define PHY_CTRL_PHY_TX0_TERM_OFFSET(x) ((x) << 16)
++
+ #define PCIE20_PARF_PHY_REFCLK 0x4C
++#define PHY_REFCLK_SSP_EN BIT(16)
++#define PHY_REFCLK_USE_PAD BIT(12)
++
+ #define PCIE20_PARF_DBI_BASE_ADDR 0x168
+ #define PCIE20_PARF_SLV_ADDR_SPACE_SIZE 0x16C
+ #define PCIE20_PARF_MHI_CLOCK_RESET_CTRL 0x174
+@@ -77,6 +83,18 @@
+ #define DBI_RO_WR_EN 1
+
+ #define PERST_DELAY_US 1000
++/* PARF registers */
++#define PCIE20_PARF_PCS_DEEMPH 0x34
++#define PCS_DEEMPH_TX_DEEMPH_GEN1(x) ((x) << 16)
++#define PCS_DEEMPH_TX_DEEMPH_GEN2_3_5DB(x) ((x) << 8)
++#define PCS_DEEMPH_TX_DEEMPH_GEN2_6DB(x) ((x) << 0)
++
++#define PCIE20_PARF_PCS_SWING 0x38
++#define PCS_SWING_TX_SWING_FULL(x) ((x) << 8)
++#define PCS_SWING_TX_SWING_LOW(x) ((x) << 0)
++
++#define PCIE20_PARF_CONFIG_BITS 0x50
++#define PHY_RX0_EQ(x) ((x) << 24)
+
+ #define PCIE20_v3_PARF_SLV_ADDR_SPACE_SIZE 0x358
+ #define SLV_ADDR_SPACE_SZ 0x10000000
+@@ -286,6 +304,7 @@ static int qcom_pcie_init_2_1_0(struct qcom_pcie *pcie)
+ struct qcom_pcie_resources_2_1_0 *res = &pcie->res.v2_1_0;
+ struct dw_pcie *pci = pcie->pci;
+ struct device *dev = pci->dev;
++ struct device_node *node = dev->of_node;
+ u32 val;
+ int ret;
+
+@@ -330,9 +349,29 @@ static int qcom_pcie_init_2_1_0(struct qcom_pcie *pcie)
+ val &= ~BIT(0);
+ writel(val, pcie->parf + PCIE20_PARF_PHY_CTRL);
+
++ if (of_device_is_compatible(node, "qcom,pcie-ipq8064")) {
++ writel(PCS_DEEMPH_TX_DEEMPH_GEN1(24) |
++ PCS_DEEMPH_TX_DEEMPH_GEN2_3_5DB(24) |
++ PCS_DEEMPH_TX_DEEMPH_GEN2_6DB(34),
++ pcie->parf + PCIE20_PARF_PCS_DEEMPH);
++ writel(PCS_SWING_TX_SWING_FULL(120) |
++ PCS_SWING_TX_SWING_LOW(120),
++ pcie->parf + PCIE20_PARF_PCS_SWING);
++ writel(PHY_RX0_EQ(4), pcie->parf + PCIE20_PARF_CONFIG_BITS);
++ }
++
++ if (of_device_is_compatible(node, "qcom,pcie-ipq8064")) {
++ /* set TX termination offset */
++ val = readl(pcie->parf + PCIE20_PARF_PHY_CTRL);
++ val &= ~PHY_CTRL_PHY_TX0_TERM_OFFSET_MASK;
++ val |= PHY_CTRL_PHY_TX0_TERM_OFFSET(7);
++ writel(val, pcie->parf + PCIE20_PARF_PHY_CTRL);
++ }
++
+ /* enable external reference clock */
+ val = readl(pcie->parf + PCIE20_PARF_PHY_REFCLK);
+- val |= BIT(16);
++ val &= ~PHY_REFCLK_USE_PAD;
++ val |= PHY_REFCLK_SSP_EN;
+ writel(val, pcie->parf + PCIE20_PARF_PHY_REFCLK);
+
+ ret = reset_control_deassert(res->phy_reset);
+diff --git a/drivers/pci/hotplug/acpiphp_glue.c b/drivers/pci/hotplug/acpiphp_glue.c
+index b4c92cee13f8a..3365c93abf0e2 100644
+--- a/drivers/pci/hotplug/acpiphp_glue.c
++++ b/drivers/pci/hotplug/acpiphp_glue.c
+@@ -122,13 +122,21 @@ static struct acpiphp_context *acpiphp_grab_context(struct acpi_device *adev)
+ struct acpiphp_context *context;
+
+ acpi_lock_hp_context();
++
+ context = acpiphp_get_context(adev);
+- if (!context || context->func.parent->is_going_away) {
+- acpi_unlock_hp_context();
+- return NULL;
++ if (!context)
++ goto unlock;
++
++ if (context->func.parent->is_going_away) {
++ acpiphp_put_context(context);
++ context = NULL;
++ goto unlock;
+ }
++
+ get_bridge(context->func.parent);
+ acpiphp_put_context(context);
++
++unlock:
+ acpi_unlock_hp_context();
+ return context;
+ }
+diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
+index d442219cd2708..cc6e1a382118e 100644
+--- a/drivers/pci/quirks.c
++++ b/drivers/pci/quirks.c
+@@ -5207,7 +5207,8 @@ DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_SERVERWORKS, 0x0422, quirk_no_ext_tags);
+ */
+ static void quirk_amd_harvest_no_ats(struct pci_dev *pdev)
+ {
+- if (pdev->device == 0x7340 && pdev->revision != 0xc5)
++ if ((pdev->device == 0x7312 && pdev->revision != 0x00) ||
++ (pdev->device == 0x7340 && pdev->revision != 0xc5))
+ return;
+
+ pci_info(pdev, "disabling ATS\n");
+@@ -5218,6 +5219,8 @@ static void quirk_amd_harvest_no_ats(struct pci_dev *pdev)
+ DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x98e4, quirk_amd_harvest_no_ats);
+ /* AMD Iceland dGPU */
+ DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x6900, quirk_amd_harvest_no_ats);
++/* AMD Navi10 dGPU */
++DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x7312, quirk_amd_harvest_no_ats);
+ /* AMD Navi14 dGPU */
+ DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x7340, quirk_amd_harvest_no_ats);
+ #endif /* CONFIG_PCI_ATS */
+diff --git a/drivers/pinctrl/pinctrl-ingenic.c b/drivers/pinctrl/pinctrl-ingenic.c
+index 6a8d44504f940..367211998ab00 100644
+--- a/drivers/pinctrl/pinctrl-ingenic.c
++++ b/drivers/pinctrl/pinctrl-ingenic.c
+@@ -1810,9 +1810,9 @@ static void ingenic_gpio_irq_ack(struct irq_data *irqd)
+ */
+ high = ingenic_gpio_get_value(jzgc, irq);
+ if (high)
+- irq_set_type(jzgc, irq, IRQ_TYPE_EDGE_FALLING);
++ irq_set_type(jzgc, irq, IRQ_TYPE_LEVEL_LOW);
+ else
+- irq_set_type(jzgc, irq, IRQ_TYPE_EDGE_RISING);
++ irq_set_type(jzgc, irq, IRQ_TYPE_LEVEL_HIGH);
+ }
+
+ if (jzgc->jzpc->info->version >= ID_JZ4760)
+@@ -1848,7 +1848,7 @@ static int ingenic_gpio_irq_set_type(struct irq_data *irqd, unsigned int type)
+ */
+ bool high = ingenic_gpio_get_value(jzgc, irqd->hwirq);
+
+- type = high ? IRQ_TYPE_EDGE_FALLING : IRQ_TYPE_EDGE_RISING;
++ type = high ? IRQ_TYPE_LEVEL_LOW : IRQ_TYPE_LEVEL_HIGH;
+ }
+
+ irq_set_type(jzgc, irqd->hwirq, type);
+@@ -1955,7 +1955,8 @@ static int ingenic_gpio_get_direction(struct gpio_chip *gc, unsigned int offset)
+ unsigned int pin = gc->base + offset;
+
+ if (jzpc->info->version >= ID_JZ4760) {
+- if (ingenic_get_pin_config(jzpc, pin, JZ4760_GPIO_PAT1))
++ if (ingenic_get_pin_config(jzpc, pin, JZ4760_GPIO_INT) ||
++ ingenic_get_pin_config(jzpc, pin, JZ4760_GPIO_PAT1))
+ return GPIO_LINE_DIRECTION_IN;
+ return GPIO_LINE_DIRECTION_OUT;
+ }
+diff --git a/drivers/platform/chrome/cros_ec_ishtp.c b/drivers/platform/chrome/cros_ec_ishtp.c
+index ed794a7ddba9b..81364029af367 100644
+--- a/drivers/platform/chrome/cros_ec_ishtp.c
++++ b/drivers/platform/chrome/cros_ec_ishtp.c
+@@ -681,8 +681,10 @@ static int cros_ec_ishtp_probe(struct ishtp_cl_device *cl_device)
+
+ /* Register croc_ec_dev mfd */
+ rv = cros_ec_dev_init(client_data);
+- if (rv)
++ if (rv) {
++ down_write(&init_lock);
+ goto end_cros_ec_dev_init_error;
++ }
+
+ return 0;
+
+diff --git a/drivers/pwm/pwm-bcm-iproc.c b/drivers/pwm/pwm-bcm-iproc.c
+index 1f829edd8ee70..d392a828fc493 100644
+--- a/drivers/pwm/pwm-bcm-iproc.c
++++ b/drivers/pwm/pwm-bcm-iproc.c
+@@ -85,8 +85,6 @@ static void iproc_pwmc_get_state(struct pwm_chip *chip, struct pwm_device *pwm,
+ u64 tmp, multi, rate;
+ u32 value, prescale;
+
+- rate = clk_get_rate(ip->clk);
+-
+ value = readl(ip->base + IPROC_PWM_CTRL_OFFSET);
+
+ if (value & BIT(IPROC_PWM_CTRL_EN_SHIFT(pwm->hwpwm)))
+@@ -99,6 +97,13 @@ static void iproc_pwmc_get_state(struct pwm_chip *chip, struct pwm_device *pwm,
+ else
+ state->polarity = PWM_POLARITY_INVERSED;
+
++ rate = clk_get_rate(ip->clk);
++ if (rate == 0) {
++ state->period = 0;
++ state->duty_cycle = 0;
++ return;
++ }
++
+ value = readl(ip->base + IPROC_PWM_PRESCALE_OFFSET);
+ prescale = value >> IPROC_PWM_PRESCALE_SHIFT(pwm->hwpwm);
+ prescale &= IPROC_PWM_PRESCALE_MAX;
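The pwm-bcm-iproc hunk moves the `clk_get_rate()` call after the enable/polarity reads and bails out when the rate is zero, since a zero rate would otherwise be used as a divisor. A standalone C sketch of the guard (struct and function names are hypothetical, not the driver's):

```c
#include <assert.h>
#include <stdint.h>

struct pwm_state_sketch {
	uint64_t period;     /* ns */
	uint64_t duty_cycle; /* ns */
};

/*
 * If the clock framework reports a zero rate, report a zeroed state
 * instead of dividing by zero while converting cycles to nanoseconds.
 */
static void get_state_sketch(uint64_t clk_rate, uint64_t period_cycles,
			     struct pwm_state_sketch *st)
{
	if (clk_rate == 0) {
		st->period = 0;
		st->duty_cycle = 0;
		return;
	}
	/* period_ns = cycles * NSEC_PER_SEC / rate */
	st->period = period_cycles * 1000000000ULL / clk_rate;
	st->duty_cycle = st->period / 2; /* illustrative value only */
}
```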
+diff --git a/drivers/remoteproc/qcom_q6v5.c b/drivers/remoteproc/qcom_q6v5.c
+index 111a442c993c4..fd6fd36268d93 100644
+--- a/drivers/remoteproc/qcom_q6v5.c
++++ b/drivers/remoteproc/qcom_q6v5.c
+@@ -153,6 +153,8 @@ int qcom_q6v5_request_stop(struct qcom_q6v5 *q6v5)
+ {
+ int ret;
+
++ q6v5->running = false;
++
+ qcom_smem_state_update_bits(q6v5->state,
+ BIT(q6v5->stop_bit), BIT(q6v5->stop_bit));
+
+diff --git a/drivers/remoteproc/qcom_q6v5_mss.c b/drivers/remoteproc/qcom_q6v5_mss.c
+index feb70283b6a21..a6770e5e32daa 100644
+--- a/drivers/remoteproc/qcom_q6v5_mss.c
++++ b/drivers/remoteproc/qcom_q6v5_mss.c
+@@ -407,6 +407,12 @@ static int q6v5_load(struct rproc *rproc, const struct firmware *fw)
+ {
+ struct q6v5 *qproc = rproc->priv;
+
++ /* MBA is restricted to a maximum size of 1M */
++ if (fw->size > qproc->mba_size || fw->size > SZ_1M) {
++ dev_err(qproc->dev, "MBA firmware load failed\n");
++ return -EINVAL;
++ }
++
+ memcpy(qproc->mba_region, fw->data, fw->size);
+
+ return 0;
+@@ -1138,15 +1144,14 @@ static int q6v5_mpss_load(struct q6v5 *qproc)
+ } else if (phdr->p_filesz) {
+ /* Replace "xxx.xxx" with "xxx.bxx" */
+ sprintf(fw_name + fw_name_len - 3, "b%02d", i);
+- ret = request_firmware(&seg_fw, fw_name, qproc->dev);
++ ret = request_firmware_into_buf(&seg_fw, fw_name, qproc->dev,
++ ptr, phdr->p_filesz);
+ if (ret) {
+ dev_err(qproc->dev, "failed to load %s\n", fw_name);
+ iounmap(ptr);
+ goto release_firmware;
+ }
+
+- memcpy(ptr, seg_fw->data, seg_fw->size);
+-
+ release_firmware(seg_fw);
+ }
+
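The qcom_q6v5_mss hunk above validates the MBA firmware size against both the mapped region and the 1M architectural limit before `memcpy()`, closing a potential overflow when a too-large image is supplied. The bounds-check-before-copy pattern, sketched as a small self-contained C function (names and the explicit `-22` errno value are illustrative):

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>
#include <string.h>

#define SZ_1M_SKETCH (1024 * 1024)

/*
 * Reject firmware larger than either the destination region or the
 * 1M limit before copying; only then is memcpy() safe.
 */
static int load_mba_sketch(uint8_t *region, size_t region_size,
			   const uint8_t *fw, size_t fw_size)
{
	if (fw_size > region_size || fw_size > SZ_1M_SKETCH)
		return -22; /* -EINVAL */
	memcpy(region, fw, fw_size);
	return 0;
}
```

The second hunk in the same file applies the same idea to MPSS segments by using `request_firmware_into_buf()` with an explicit `phdr->p_filesz` bound instead of loading to a temporary buffer and copying unchecked.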
+diff --git a/drivers/rtc/rtc-cpcap.c b/drivers/rtc/rtc-cpcap.c
+index a603f1f211250..800667d73a6fb 100644
+--- a/drivers/rtc/rtc-cpcap.c
++++ b/drivers/rtc/rtc-cpcap.c
+@@ -261,7 +261,7 @@ static int cpcap_rtc_probe(struct platform_device *pdev)
+ return PTR_ERR(rtc->rtc_dev);
+
+ rtc->rtc_dev->ops = &cpcap_rtc_ops;
+- rtc->rtc_dev->range_max = (1 << 14) * SECS_PER_DAY - 1;
++ rtc->rtc_dev->range_max = (timeu64_t) (DAY_MASK + 1) * SECS_PER_DAY - 1;
+
+ err = cpcap_get_vendor(dev, rtc->regmap, &rtc->vendor);
+ if (err)
+diff --git a/drivers/rtc/rtc-pl031.c b/drivers/rtc/rtc-pl031.c
+index 40d7450a1ce49..c6b89273feba8 100644
+--- a/drivers/rtc/rtc-pl031.c
++++ b/drivers/rtc/rtc-pl031.c
+@@ -275,6 +275,7 @@ static int pl031_set_alarm(struct device *dev, struct rtc_wkalrm *alarm)
+ struct pl031_local *ldata = dev_get_drvdata(dev);
+
+ writel(rtc_tm_to_time64(&alarm->time), ldata->base + RTC_MR);
++ pl031_alarm_irq_enable(dev, alarm->enabled);
+
+ return 0;
+ }
+diff --git a/drivers/scsi/lpfc/lpfc_nvmet.c b/drivers/scsi/lpfc/lpfc_nvmet.c
+index 88760416a8cbd..fcd9d4c2f1ee0 100644
+--- a/drivers/scsi/lpfc/lpfc_nvmet.c
++++ b/drivers/scsi/lpfc/lpfc_nvmet.c
+@@ -2112,7 +2112,7 @@ lpfc_nvmet_destroy_targetport(struct lpfc_hba *phba)
+ }
+ tgtp->tport_unreg_cmp = &tport_unreg_cmp;
+ nvmet_fc_unregister_targetport(phba->targetport);
+- if (!wait_for_completion_timeout(tgtp->tport_unreg_cmp,
++ if (!wait_for_completion_timeout(&tport_unreg_cmp,
+ msecs_to_jiffies(LPFC_NVMET_WAIT_TMO)))
+ lpfc_printf_log(phba, KERN_ERR, LOG_NVME,
+ "6179 Unreg targetport x%px timeout "
+diff --git a/drivers/staging/media/rkisp1/rkisp1-common.h b/drivers/staging/media/rkisp1/rkisp1-common.h
+index 0c4fe503adc90..12bd9d05050db 100644
+--- a/drivers/staging/media/rkisp1/rkisp1-common.h
++++ b/drivers/staging/media/rkisp1/rkisp1-common.h
+@@ -22,6 +22,9 @@
+ #include "rkisp1-regs.h"
+ #include "uapi/rkisp1-config.h"
+
++#define RKISP1_ISP_SD_SRC BIT(0)
++#define RKISP1_ISP_SD_SINK BIT(1)
++
+ #define RKISP1_ISP_MAX_WIDTH 4032
+ #define RKISP1_ISP_MAX_HEIGHT 3024
+ #define RKISP1_ISP_MIN_WIDTH 32
+diff --git a/drivers/staging/media/rkisp1/rkisp1-isp.c b/drivers/staging/media/rkisp1/rkisp1-isp.c
+index dc2b59a0160a8..b21a67aea433c 100644
+--- a/drivers/staging/media/rkisp1/rkisp1-isp.c
++++ b/drivers/staging/media/rkisp1/rkisp1-isp.c
+@@ -23,10 +23,6 @@
+
+ #define RKISP1_ISP_DEV_NAME RKISP1_DRIVER_NAME "_isp"
+
+-#define RKISP1_DIR_SRC BIT(0)
+-#define RKISP1_DIR_SINK BIT(1)
+-#define RKISP1_DIR_SINK_SRC (RKISP1_DIR_SINK | RKISP1_DIR_SRC)
+-
+ /*
+ * NOTE: MIPI controller and input MUX are also configured in this file.
+ * This is because ISP Subdev describes not only ISP submodule (input size,
+@@ -62,119 +58,119 @@ static const struct rkisp1_isp_mbus_info rkisp1_isp_formats[] = {
+ {
+ .mbus_code = MEDIA_BUS_FMT_YUYV8_2X8,
+ .pixel_enc = V4L2_PIXEL_ENC_YUV,
+- .direction = RKISP1_DIR_SRC,
++ .direction = RKISP1_ISP_SD_SRC,
+ }, {
+ .mbus_code = MEDIA_BUS_FMT_SRGGB10_1X10,
+ .pixel_enc = V4L2_PIXEL_ENC_BAYER,
+ .mipi_dt = RKISP1_CIF_CSI2_DT_RAW10,
+ .bayer_pat = RKISP1_RAW_RGGB,
+ .bus_width = 10,
+- .direction = RKISP1_DIR_SINK_SRC,
++ .direction = RKISP1_ISP_SD_SINK | RKISP1_ISP_SD_SRC,
+ }, {
+ .mbus_code = MEDIA_BUS_FMT_SBGGR10_1X10,
+ .pixel_enc = V4L2_PIXEL_ENC_BAYER,
+ .mipi_dt = RKISP1_CIF_CSI2_DT_RAW10,
+ .bayer_pat = RKISP1_RAW_BGGR,
+ .bus_width = 10,
+- .direction = RKISP1_DIR_SINK_SRC,
++ .direction = RKISP1_ISP_SD_SINK | RKISP1_ISP_SD_SRC,
+ }, {
+ .mbus_code = MEDIA_BUS_FMT_SGBRG10_1X10,
+ .pixel_enc = V4L2_PIXEL_ENC_BAYER,
+ .mipi_dt = RKISP1_CIF_CSI2_DT_RAW10,
+ .bayer_pat = RKISP1_RAW_GBRG,
+ .bus_width = 10,
+- .direction = RKISP1_DIR_SINK_SRC,
++ .direction = RKISP1_ISP_SD_SINK | RKISP1_ISP_SD_SRC,
+ }, {
+ .mbus_code = MEDIA_BUS_FMT_SGRBG10_1X10,
+ .pixel_enc = V4L2_PIXEL_ENC_BAYER,
+ .mipi_dt = RKISP1_CIF_CSI2_DT_RAW10,
+ .bayer_pat = RKISP1_RAW_GRBG,
+ .bus_width = 10,
+- .direction = RKISP1_DIR_SINK_SRC,
++ .direction = RKISP1_ISP_SD_SINK | RKISP1_ISP_SD_SRC,
+ }, {
+ .mbus_code = MEDIA_BUS_FMT_SRGGB12_1X12,
+ .pixel_enc = V4L2_PIXEL_ENC_BAYER,
+ .mipi_dt = RKISP1_CIF_CSI2_DT_RAW12,
+ .bayer_pat = RKISP1_RAW_RGGB,
+ .bus_width = 12,
+- .direction = RKISP1_DIR_SINK_SRC,
++ .direction = RKISP1_ISP_SD_SINK | RKISP1_ISP_SD_SRC,
+ }, {
+ .mbus_code = MEDIA_BUS_FMT_SBGGR12_1X12,
+ .pixel_enc = V4L2_PIXEL_ENC_BAYER,
+ .mipi_dt = RKISP1_CIF_CSI2_DT_RAW12,
+ .bayer_pat = RKISP1_RAW_BGGR,
+ .bus_width = 12,
+- .direction = RKISP1_DIR_SINK_SRC,
++ .direction = RKISP1_ISP_SD_SINK | RKISP1_ISP_SD_SRC,
+ }, {
+ .mbus_code = MEDIA_BUS_FMT_SGBRG12_1X12,
+ .pixel_enc = V4L2_PIXEL_ENC_BAYER,
+ .mipi_dt = RKISP1_CIF_CSI2_DT_RAW12,
+ .bayer_pat = RKISP1_RAW_GBRG,
+ .bus_width = 12,
+- .direction = RKISP1_DIR_SINK_SRC,
++ .direction = RKISP1_ISP_SD_SINK | RKISP1_ISP_SD_SRC,
+ }, {
+ .mbus_code = MEDIA_BUS_FMT_SGRBG12_1X12,
+ .pixel_enc = V4L2_PIXEL_ENC_BAYER,
+ .mipi_dt = RKISP1_CIF_CSI2_DT_RAW12,
+ .bayer_pat = RKISP1_RAW_GRBG,
+ .bus_width = 12,
+- .direction = RKISP1_DIR_SINK_SRC,
++ .direction = RKISP1_ISP_SD_SINK | RKISP1_ISP_SD_SRC,
+ }, {
+ .mbus_code = MEDIA_BUS_FMT_SRGGB8_1X8,
+ .pixel_enc = V4L2_PIXEL_ENC_BAYER,
+ .mipi_dt = RKISP1_CIF_CSI2_DT_RAW8,
+ .bayer_pat = RKISP1_RAW_RGGB,
+ .bus_width = 8,
+- .direction = RKISP1_DIR_SINK_SRC,
++ .direction = RKISP1_ISP_SD_SINK | RKISP1_ISP_SD_SRC,
+ }, {
+ .mbus_code = MEDIA_BUS_FMT_SBGGR8_1X8,
+ .pixel_enc = V4L2_PIXEL_ENC_BAYER,
+ .mipi_dt = RKISP1_CIF_CSI2_DT_RAW8,
+ .bayer_pat = RKISP1_RAW_BGGR,
+ .bus_width = 8,
+- .direction = RKISP1_DIR_SINK_SRC,
++ .direction = RKISP1_ISP_SD_SINK | RKISP1_ISP_SD_SRC,
+ }, {
+ .mbus_code = MEDIA_BUS_FMT_SGBRG8_1X8,
+ .pixel_enc = V4L2_PIXEL_ENC_BAYER,
+ .mipi_dt = RKISP1_CIF_CSI2_DT_RAW8,
+ .bayer_pat = RKISP1_RAW_GBRG,
+ .bus_width = 8,
+- .direction = RKISP1_DIR_SINK_SRC,
++ .direction = RKISP1_ISP_SD_SINK | RKISP1_ISP_SD_SRC,
+ }, {
+ .mbus_code = MEDIA_BUS_FMT_SGRBG8_1X8,
+ .pixel_enc = V4L2_PIXEL_ENC_BAYER,
+ .mipi_dt = RKISP1_CIF_CSI2_DT_RAW8,
+ .bayer_pat = RKISP1_RAW_GRBG,
+ .bus_width = 8,
+- .direction = RKISP1_DIR_SINK_SRC,
++ .direction = RKISP1_ISP_SD_SINK | RKISP1_ISP_SD_SRC,
+ }, {
+ .mbus_code = MEDIA_BUS_FMT_YUYV8_1X16,
+ .pixel_enc = V4L2_PIXEL_ENC_YUV,
+ .mipi_dt = RKISP1_CIF_CSI2_DT_YUV422_8b,
+ .yuv_seq = RKISP1_CIF_ISP_ACQ_PROP_YCBYCR,
+ .bus_width = 16,
+- .direction = RKISP1_DIR_SINK,
++ .direction = RKISP1_ISP_SD_SINK,
+ }, {
+ .mbus_code = MEDIA_BUS_FMT_YVYU8_1X16,
+ .pixel_enc = V4L2_PIXEL_ENC_YUV,
+ .mipi_dt = RKISP1_CIF_CSI2_DT_YUV422_8b,
+ .yuv_seq = RKISP1_CIF_ISP_ACQ_PROP_YCRYCB,
+ .bus_width = 16,
+- .direction = RKISP1_DIR_SINK,
++ .direction = RKISP1_ISP_SD_SINK,
+ }, {
+ .mbus_code = MEDIA_BUS_FMT_UYVY8_1X16,
+ .pixel_enc = V4L2_PIXEL_ENC_YUV,
+ .mipi_dt = RKISP1_CIF_CSI2_DT_YUV422_8b,
+ .yuv_seq = RKISP1_CIF_ISP_ACQ_PROP_CBYCRY,
+ .bus_width = 16,
+- .direction = RKISP1_DIR_SINK,
++ .direction = RKISP1_ISP_SD_SINK,
+ }, {
+ .mbus_code = MEDIA_BUS_FMT_VYUY8_1X16,
+ .pixel_enc = V4L2_PIXEL_ENC_YUV,
+ .mipi_dt = RKISP1_CIF_CSI2_DT_YUV422_8b,
+ .yuv_seq = RKISP1_CIF_ISP_ACQ_PROP_CRYCBY,
+ .bus_width = 16,
+- .direction = RKISP1_DIR_SINK,
++ .direction = RKISP1_ISP_SD_SINK,
+ },
+ };
+
+@@ -574,9 +570,9 @@ static int rkisp1_isp_enum_mbus_code(struct v4l2_subdev *sd,
+ int pos = 0;
+
+ if (code->pad == RKISP1_ISP_PAD_SINK_VIDEO) {
+- dir = RKISP1_DIR_SINK;
++ dir = RKISP1_ISP_SD_SINK;
+ } else if (code->pad == RKISP1_ISP_PAD_SOURCE_VIDEO) {
+- dir = RKISP1_DIR_SRC;
++ dir = RKISP1_ISP_SD_SRC;
+ } else {
+ if (code->index > 0)
+ return -EINVAL;
+@@ -661,7 +657,7 @@ static void rkisp1_isp_set_src_fmt(struct rkisp1_isp *isp,
+
+ src_fmt->code = format->code;
+ mbus_info = rkisp1_isp_mbus_info_get(src_fmt->code);
+- if (!mbus_info || !(mbus_info->direction & RKISP1_DIR_SRC)) {
++ if (!mbus_info || !(mbus_info->direction & RKISP1_ISP_SD_SRC)) {
+ src_fmt->code = RKISP1_DEF_SRC_PAD_FMT;
+ mbus_info = rkisp1_isp_mbus_info_get(src_fmt->code);
+ }
+@@ -745,7 +741,7 @@ static void rkisp1_isp_set_sink_fmt(struct rkisp1_isp *isp,
+ which);
+ sink_fmt->code = format->code;
+ mbus_info = rkisp1_isp_mbus_info_get(sink_fmt->code);
+- if (!mbus_info || !(mbus_info->direction & RKISP1_DIR_SINK)) {
++ if (!mbus_info || !(mbus_info->direction & RKISP1_ISP_SD_SINK)) {
+ sink_fmt->code = RKISP1_DEF_SINK_PAD_FMT;
+ mbus_info = rkisp1_isp_mbus_info_get(sink_fmt->code);
+ }
+diff --git a/drivers/staging/media/rkisp1/rkisp1-resizer.c b/drivers/staging/media/rkisp1/rkisp1-resizer.c
+index e188944941b58..a2b35961bc8b7 100644
+--- a/drivers/staging/media/rkisp1/rkisp1-resizer.c
++++ b/drivers/staging/media/rkisp1/rkisp1-resizer.c
+@@ -542,7 +542,7 @@ static void rkisp1_rsz_set_sink_fmt(struct rkisp1_resizer *rsz,
+ which);
+ sink_fmt->code = format->code;
+ mbus_info = rkisp1_isp_mbus_info_get(sink_fmt->code);
+- if (!mbus_info) {
++ if (!mbus_info || !(mbus_info->direction & RKISP1_ISP_SD_SRC)) {
+ sink_fmt->code = RKISP1_DEF_FMT;
+ mbus_info = rkisp1_isp_mbus_info_get(sink_fmt->code);
+ }
+diff --git a/drivers/usb/serial/ftdi_sio.c b/drivers/usb/serial/ftdi_sio.c
+index 9ad44a96dfe3a..33f1cca7eaa61 100644
+--- a/drivers/usb/serial/ftdi_sio.c
++++ b/drivers/usb/serial/ftdi_sio.c
+@@ -2480,12 +2480,11 @@ static int ftdi_prepare_write_buffer(struct usb_serial_port *port,
+ #define FTDI_RS_ERR_MASK (FTDI_RS_BI | FTDI_RS_PE | FTDI_RS_FE | FTDI_RS_OE)
+
+ static int ftdi_process_packet(struct usb_serial_port *port,
+- struct ftdi_private *priv, char *packet, int len)
++ struct ftdi_private *priv, unsigned char *buf, int len)
+ {
++ unsigned char status;
+ int i;
+- char status;
+ char flag;
+- char *ch;
+
+ if (len < 2) {
+ dev_dbg(&port->dev, "malformed packet\n");
+@@ -2495,7 +2494,7 @@ static int ftdi_process_packet(struct usb_serial_port *port,
+ /* Compare new line status to the old one, signal if different/
+ N.B. packet may be processed more than once, but differences
+ are only processed once. */
+- status = packet[0] & FTDI_STATUS_B0_MASK;
++ status = buf[0] & FTDI_STATUS_B0_MASK;
+ if (status != priv->prev_status) {
+ char diff_status = status ^ priv->prev_status;
+
+@@ -2521,13 +2520,12 @@ static int ftdi_process_packet(struct usb_serial_port *port,
+ }
+
+ /* save if the transmitter is empty or not */
+- if (packet[1] & FTDI_RS_TEMT)
++ if (buf[1] & FTDI_RS_TEMT)
+ priv->transmit_empty = 1;
+ else
+ priv->transmit_empty = 0;
+
+- len -= 2;
+- if (!len)
++ if (len == 2)
+ return 0; /* status only */
+
+ /*
+@@ -2535,40 +2533,41 @@ static int ftdi_process_packet(struct usb_serial_port *port,
+ * data payload to avoid over-reporting.
+ */
+ flag = TTY_NORMAL;
+- if (packet[1] & FTDI_RS_ERR_MASK) {
++ if (buf[1] & FTDI_RS_ERR_MASK) {
+ /* Break takes precedence over parity, which takes precedence
+ * over framing errors */
+- if (packet[1] & FTDI_RS_BI) {
++ if (buf[1] & FTDI_RS_BI) {
+ flag = TTY_BREAK;
+ port->icount.brk++;
+ usb_serial_handle_break(port);
+- } else if (packet[1] & FTDI_RS_PE) {
++ } else if (buf[1] & FTDI_RS_PE) {
+ flag = TTY_PARITY;
+ port->icount.parity++;
+- } else if (packet[1] & FTDI_RS_FE) {
++ } else if (buf[1] & FTDI_RS_FE) {
+ flag = TTY_FRAME;
+ port->icount.frame++;
+ }
+ /* Overrun is special, not associated with a char */
+- if (packet[1] & FTDI_RS_OE) {
++ if (buf[1] & FTDI_RS_OE) {
+ port->icount.overrun++;
+ tty_insert_flip_char(&port->port, 0, TTY_OVERRUN);
+ }
+ }
+
+- port->icount.rx += len;
+- ch = packet + 2;
++ port->icount.rx += len - 2;
+
+ if (port->port.console && port->sysrq) {
+- for (i = 0; i < len; i++, ch++) {
+- if (!usb_serial_handle_sysrq_char(port, *ch))
+- tty_insert_flip_char(&port->port, *ch, flag);
++ for (i = 2; i < len; i++) {
++ if (usb_serial_handle_sysrq_char(port, buf[i]))
++ continue;
++ tty_insert_flip_char(&port->port, buf[i], flag);
+ }
+ } else {
+- tty_insert_flip_string_fixed_flag(&port->port, ch, flag, len);
++ tty_insert_flip_string_fixed_flag(&port->port, buf + 2, flag,
++ len - 2);
+ }
+
+- return len;
++ return len - 2;
+ }
+
+ static void ftdi_process_read_urb(struct urb *urb)
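The ftdi_sio rework above changes the packet buffer to `unsigned char` (avoiding sign extension when testing high status bits such as `FTDI_RS_OE`) and indexes the payload from offset 2 instead of adjusting `len`, so bytes 0-1 stay addressable as status. A minimal sketch of the fixed length accounting (function name is hypothetical):

```c
#include <assert.h>
#include <stddef.h>

/*
 * FTDI bulk-in packets carry two status bytes followed by the data
 * payload. Packets shorter than 2 bytes are malformed; exactly 2
 * bytes means status only, no data. Using unsigned char for the
 * buffer keeps bit tests on status bytes well-defined.
 */
static int payload_len_sketch(const unsigned char *buf, size_t len)
{
	(void)buf;
	if (len < 2)
		return -1;         /* malformed packet */
	return (int)(len - 2);     /* status-only packets report 0 */
}
```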
+diff --git a/drivers/vdpa/vdpa_sim/vdpa_sim.c b/drivers/vdpa/vdpa_sim/vdpa_sim.c
+index 8ac6f341dcc16..67956db75013f 100644
+--- a/drivers/vdpa/vdpa_sim/vdpa_sim.c
++++ b/drivers/vdpa/vdpa_sim/vdpa_sim.c
+@@ -331,6 +331,7 @@ static struct vdpasim *vdpasim_create(void)
+
+ INIT_WORK(&vdpasim->work, vdpasim_work);
+ spin_lock_init(&vdpasim->lock);
++ spin_lock_init(&vdpasim->iommu_lock);
+
+ dev = &vdpasim->vdpa.dev;
+ dev->coherent_dma_mask = DMA_BIT_MASK(64);
+@@ -521,7 +522,7 @@ static void vdpasim_get_config(struct vdpa_device *vdpa, unsigned int offset,
+ struct vdpasim *vdpasim = vdpa_to_sim(vdpa);
+
+ if (offset + len < sizeof(struct virtio_net_config))
+- memcpy(buf, &vdpasim->config + offset, len);
++ memcpy(buf, (u8 *)&vdpasim->config + offset, len);
+ }
+
+ static void vdpasim_set_config(struct vdpa_device *vdpa, unsigned int offset,
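The one-character vdpa_sim fix above is a classic pointer-arithmetic bug: `&vdpasim->config + offset` advances by whole `struct virtio_net_config` strides, while the intent was a byte offset. Casting to `u8 *` first makes the arithmetic byte-granular. Sketched with a hypothetical config struct:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

struct cfg_sketch {
	uint8_t  mac[6];
	uint16_t status;
};

/*
 * Copy `len` bytes starting `offset` bytes into the config space.
 * The cast to (const uint8_t *) is the fix: without it, `cfg + offset`
 * would step in sizeof(struct cfg_sketch) units.
 */
static void get_config_sketch(const struct cfg_sketch *cfg,
			      unsigned int offset, void *buf,
			      unsigned int len)
{
	if (offset + len < sizeof(*cfg)) /* mirrors the driver's check */
		memcpy(buf, (const uint8_t *)cfg + offset, len);
}
```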
+diff --git a/drivers/watchdog/f71808e_wdt.c b/drivers/watchdog/f71808e_wdt.c
+index a3c44d75d80eb..26bf366aebc23 100644
+--- a/drivers/watchdog/f71808e_wdt.c
++++ b/drivers/watchdog/f71808e_wdt.c
+@@ -690,9 +690,9 @@ static int __init watchdog_init(int sioaddr)
+ * into the module have been registered yet.
+ */
+ watchdog.sioaddr = sioaddr;
+- watchdog.ident.options = WDIOC_SETTIMEOUT
+- | WDIOF_MAGICCLOSE
+- | WDIOF_KEEPALIVEPING;
++ watchdog.ident.options = WDIOF_MAGICCLOSE
++ | WDIOF_KEEPALIVEPING
++ | WDIOF_CARDRESET;
+
+ snprintf(watchdog.ident.identity,
+ sizeof(watchdog.ident.identity), "%s watchdog",
+@@ -706,6 +706,13 @@ static int __init watchdog_init(int sioaddr)
+ wdt_conf = superio_inb(sioaddr, F71808FG_REG_WDT_CONF);
+ watchdog.caused_reboot = wdt_conf & BIT(F71808FG_FLAG_WDTMOUT_STS);
+
++ /*
++ * We don't want WDTMOUT_STS to stick around till regular reboot.
++ * Write 1 to the bit to clear it to zero.
++ */
++ superio_outb(sioaddr, F71808FG_REG_WDT_CONF,
++ wdt_conf | BIT(F71808FG_FLAG_WDTMOUT_STS));
++
+ superio_exit(sioaddr);
+
+ err = watchdog_set_timeout(timeout);
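The f71808e_wdt hunk clears `WDTMOUT_STS` by writing the set bit back, i.e. the register behaves as write-1-to-clear (W1C): a sticky status flag that would otherwise survive until a regular reboot. A software model of that semantic, purely for illustration (the real clearing happens in hardware via `superio_outb()`):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Model of a W1C status register: bits written as 1 are cleared,
 * bits written as 0 are left untouched. This is how the driver
 * clears the stale "watchdog caused reboot" flag at init.
 */
static uint8_t w1c_write(uint8_t reg, uint8_t written)
{
	return reg & ~written;
}
```

So `superio_outb(sioaddr, REG, wdt_conf | BIT(STS))` in the patch clears only the status bit while preserving every other configuration bit that was just read.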
+diff --git a/drivers/watchdog/rti_wdt.c b/drivers/watchdog/rti_wdt.c
+index d456dd72d99a0..c904496fff65e 100644
+--- a/drivers/watchdog/rti_wdt.c
++++ b/drivers/watchdog/rti_wdt.c
+@@ -211,6 +211,7 @@ static int rti_wdt_probe(struct platform_device *pdev)
+
+ err_iomap:
+ pm_runtime_put_sync(&pdev->dev);
++ pm_runtime_disable(&pdev->dev);
+
+ return ret;
+ }
+@@ -221,6 +222,7 @@ static int rti_wdt_remove(struct platform_device *pdev)
+
+ watchdog_unregister_device(&wdt->wdd);
+ pm_runtime_put(&pdev->dev);
++ pm_runtime_disable(&pdev->dev);
+
+ return 0;
+ }
+diff --git a/drivers/watchdog/watchdog_dev.c b/drivers/watchdog/watchdog_dev.c
+index 7e4cd34a8c20e..b535f5fa279b9 100644
+--- a/drivers/watchdog/watchdog_dev.c
++++ b/drivers/watchdog/watchdog_dev.c
+@@ -994,6 +994,15 @@ static int watchdog_cdev_register(struct watchdog_device *wdd)
+ if (IS_ERR_OR_NULL(watchdog_kworker))
+ return -ENODEV;
+
++ device_initialize(&wd_data->dev);
++ wd_data->dev.devt = MKDEV(MAJOR(watchdog_devt), wdd->id);
++ wd_data->dev.class = &watchdog_class;
++ wd_data->dev.parent = wdd->parent;
++ wd_data->dev.groups = wdd->groups;
++ wd_data->dev.release = watchdog_core_data_release;
++ dev_set_drvdata(&wd_data->dev, wdd);
++ dev_set_name(&wd_data->dev, "watchdog%d", wdd->id);
++
+ kthread_init_work(&wd_data->work, watchdog_ping_work);
+ hrtimer_init(&wd_data->timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL_HARD);
+ wd_data->timer.function = watchdog_timer_expired;
+@@ -1014,15 +1023,6 @@ static int watchdog_cdev_register(struct watchdog_device *wdd)
+ }
+ }
+
+- device_initialize(&wd_data->dev);
+- wd_data->dev.devt = MKDEV(MAJOR(watchdog_devt), wdd->id);
+- wd_data->dev.class = &watchdog_class;
+- wd_data->dev.parent = wdd->parent;
+- wd_data->dev.groups = wdd->groups;
+- wd_data->dev.release = watchdog_core_data_release;
+- dev_set_drvdata(&wd_data->dev, wdd);
+- dev_set_name(&wd_data->dev, "watchdog%d", wdd->id);
+-
+ /* Fill in the data structures */
+ cdev_init(&wd_data->cdev, &watchdog_fops);
+
+diff --git a/fs/btrfs/backref.c b/fs/btrfs/backref.c
+index ea10f7bc99abf..ea1c28ccb44ff 100644
+--- a/fs/btrfs/backref.c
++++ b/fs/btrfs/backref.c
+@@ -2303,7 +2303,7 @@ struct btrfs_backref_iter *btrfs_backref_iter_alloc(
+ return NULL;
+
+ ret->path = btrfs_alloc_path();
+- if (!ret) {
++ if (!ret->path) {
+ kfree(ret);
+ return NULL;
+ }
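The btrfs backref hunk fixes a copy-paste null check: after `btrfs_alloc_path()`, the code tested the already-validated container `ret` instead of the freshly assigned `ret->path`, so an allocation failure went undetected. The corrected pattern, as a standalone C sketch (struct and sizes are hypothetical):

```c
#include <assert.h>
#include <stdlib.h>

struct iter_sketch {
	void *path;
};

/*
 * Two-stage allocation: validate each pointer right after it is
 * assigned. The bug was `if (!ret)` at the second check, which is
 * always false once the first check has passed.
 */
static struct iter_sketch *iter_alloc_sketch(void)
{
	struct iter_sketch *ret = calloc(1, sizeof(*ret));

	if (!ret)
		return NULL;
	ret->path = malloc(64);
	if (!ret->path) {	/* was: if (!ret) */
		free(ret);
		return NULL;
	}
	return ret;
}
```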
+diff --git a/fs/btrfs/ctree.h b/fs/btrfs/ctree.h
+index 7c8efa0c3ee65..6fdb3392a06d5 100644
+--- a/fs/btrfs/ctree.h
++++ b/fs/btrfs/ctree.h
+@@ -1059,8 +1059,10 @@ struct btrfs_root {
+ wait_queue_head_t log_writer_wait;
+ wait_queue_head_t log_commit_wait[2];
+ struct list_head log_ctxs[2];
++ /* Used only for log trees of subvolumes, not for the log root tree */
+ atomic_t log_writers;
+ atomic_t log_commit[2];
++ /* Used only for log trees of subvolumes, not for the log root tree */
+ atomic_t log_batch;
+ int log_transid;
+ /* No matter the commit succeeds or not*/
+@@ -3196,7 +3198,7 @@ do { \
+ /* Report first abort since mount */ \
+ if (!test_and_set_bit(BTRFS_FS_STATE_TRANS_ABORTED, \
+ &((trans)->fs_info->fs_state))) { \
+- if ((errno) != -EIO) { \
++ if ((errno) != -EIO && (errno) != -EROFS) { \
+ WARN(1, KERN_DEBUG \
+ "BTRFS: Transaction aborted (error %d)\n", \
+ (errno)); \
+diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
+index b1a148058773e..66618a1794ea7 100644
+--- a/fs/btrfs/disk-io.c
++++ b/fs/btrfs/disk-io.c
+@@ -1395,7 +1395,12 @@ alloc_fail:
+ goto out;
+ }
+
+-static int btrfs_init_fs_root(struct btrfs_root *root)
++/*
++ * Initialize subvolume root in-memory structure
++ *
++ * @anon_dev: anonymous device to attach to the root, if zero, allocate new
++ */
++static int btrfs_init_fs_root(struct btrfs_root *root, dev_t anon_dev)
+ {
+ int ret;
+ unsigned int nofs_flag;
+@@ -1428,9 +1433,20 @@ static int btrfs_init_fs_root(struct btrfs_root *root)
+ spin_lock_init(&root->ino_cache_lock);
+ init_waitqueue_head(&root->ino_cache_wait);
+
+- ret = get_anon_bdev(&root->anon_dev);
+- if (ret)
+- goto fail;
++ /*
++ * Don't assign anonymous block device to roots that are not exposed to
++ * userspace, the id pool is limited to 1M
++ */
++ if (is_fstree(root->root_key.objectid) &&
++ btrfs_root_refs(&root->root_item) > 0) {
++ if (!anon_dev) {
++ ret = get_anon_bdev(&root->anon_dev);
++ if (ret)
++ goto fail;
++ } else {
++ root->anon_dev = anon_dev;
++ }
++ }
+
+ mutex_lock(&root->objectid_mutex);
+ ret = btrfs_find_highest_objectid(root,
+@@ -1534,8 +1550,27 @@ void btrfs_free_fs_info(struct btrfs_fs_info *fs_info)
+ }
+
+
+-struct btrfs_root *btrfs_get_fs_root(struct btrfs_fs_info *fs_info,
+- u64 objectid, bool check_ref)
++/*
++ * Get an in-memory reference of a root structure.
++ *
++ * For essential trees like root/extent tree, we grab it from fs_info directly.
++ * For subvolume trees, we check the cached filesystem roots first. If not
++ * found, then read it from disk and add it to cached fs roots.
++ *
++ * Caller should release the root by calling btrfs_put_root() after the usage.
++ *
++ * NOTE: Reloc and log trees can't be read by this function as they share the
++ * same root objectid.
++ *
++ * @objectid: root id
++ * @anon_dev: preallocated anonymous block device number for new roots,
++ * pass 0 for new allocation.
++ * @check_ref: whether to check root item references, If true, return -ENOENT
++ * for orphan roots
++ */
++static struct btrfs_root *btrfs_get_root_ref(struct btrfs_fs_info *fs_info,
++ u64 objectid, dev_t anon_dev,
++ bool check_ref)
+ {
+ struct btrfs_root *root;
+ struct btrfs_path *path;
+@@ -1564,6 +1599,8 @@ struct btrfs_root *btrfs_get_fs_root(struct btrfs_fs_info *fs_info,
+ again:
+ root = btrfs_lookup_fs_root(fs_info, objectid);
+ if (root) {
++ /* Shouldn't get preallocated anon_dev for cached roots */
++ ASSERT(!anon_dev);
+ if (check_ref && btrfs_root_refs(&root->root_item) == 0) {
+ btrfs_put_root(root);
+ return ERR_PTR(-ENOENT);
+@@ -1583,7 +1620,7 @@ again:
+ goto fail;
+ }
+
+- ret = btrfs_init_fs_root(root);
++ ret = btrfs_init_fs_root(root, anon_dev);
+ if (ret)
+ goto fail;
+
+@@ -1616,6 +1653,33 @@ fail:
+ return ERR_PTR(ret);
+ }
+
++/*
++ * Get in-memory reference of a root structure
++ *
++ * @objectid: tree objectid
++ * @check_ref: if set, verify that the tree exists and the item has at least
++ * one reference
++ */
++struct btrfs_root *btrfs_get_fs_root(struct btrfs_fs_info *fs_info,
++ u64 objectid, bool check_ref)
++{
++ return btrfs_get_root_ref(fs_info, objectid, 0, check_ref);
++}
++
++/*
++ * Get in-memory reference of a root structure, created as new, optionally pass
++ * the anonymous block device id
++ *
++ * @objectid: tree objectid
++ * @anon_dev: if zero, allocate a new anonymous block device or use the
++ * parameter value
++ */
++struct btrfs_root *btrfs_get_new_fs_root(struct btrfs_fs_info *fs_info,
++ u64 objectid, dev_t anon_dev)
++{
++ return btrfs_get_root_ref(fs_info, objectid, anon_dev, true);
++}
++
+ static int btrfs_congested_fn(void *congested_data, int bdi_bits)
+ {
+ struct btrfs_fs_info *info = (struct btrfs_fs_info *)congested_data;
+diff --git a/fs/btrfs/disk-io.h b/fs/btrfs/disk-io.h
+index bf43245406c4d..00dc39d47ed34 100644
+--- a/fs/btrfs/disk-io.h
++++ b/fs/btrfs/disk-io.h
+@@ -67,6 +67,8 @@ void btrfs_free_fs_roots(struct btrfs_fs_info *fs_info);
+
+ struct btrfs_root *btrfs_get_fs_root(struct btrfs_fs_info *fs_info,
+ u64 objectid, bool check_ref);
++struct btrfs_root *btrfs_get_new_fs_root(struct btrfs_fs_info *fs_info,
++ u64 objectid, dev_t anon_dev);
+
+ void btrfs_free_fs_info(struct btrfs_fs_info *fs_info);
+ int btrfs_cleanup_fs_roots(struct btrfs_fs_info *fs_info);
+diff --git a/fs/btrfs/extent-io-tree.h b/fs/btrfs/extent-io-tree.h
+index b6561455b3c42..8bbb734f3f514 100644
+--- a/fs/btrfs/extent-io-tree.h
++++ b/fs/btrfs/extent-io-tree.h
+@@ -34,6 +34,8 @@ struct io_failure_record;
+ */
+ #define CHUNK_ALLOCATED EXTENT_DIRTY
+ #define CHUNK_TRIMMED EXTENT_DEFRAG
++#define CHUNK_STATE_MASK (CHUNK_ALLOCATED | \
++ CHUNK_TRIMMED)
+
+ enum {
+ IO_TREE_FS_PINNED_EXTENTS,
+diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
+index 96223813b6186..de6fe176fdfb3 100644
+--- a/fs/btrfs/extent-tree.c
++++ b/fs/btrfs/extent-tree.c
+@@ -33,6 +33,7 @@
+ #include "delalloc-space.h"
+ #include "block-group.h"
+ #include "discard.h"
++#include "rcu-string.h"
+
+ #undef SCRAMBLE_DELAYED_REFS
+
+@@ -5298,7 +5299,14 @@ int btrfs_drop_snapshot(struct btrfs_root *root, int update_ref, int for_reloc)
+ goto out;
+ }
+
+- trans = btrfs_start_transaction(tree_root, 0);
++ /*
++ * Use join to avoid potential EINTR from transaction start. See
++ * wait_reserve_ticket and the whole reservation callchain.
++ */
++ if (for_reloc)
++ trans = btrfs_join_transaction(tree_root);
++ else
++ trans = btrfs_start_transaction(tree_root, 0);
+ if (IS_ERR(trans)) {
+ err = PTR_ERR(trans);
+ goto out_free;
+@@ -5661,6 +5669,19 @@ static int btrfs_trim_free_extents(struct btrfs_device *device, u64 *trimmed)
+ &start, &end,
+ CHUNK_TRIMMED | CHUNK_ALLOCATED);
+
++ /* Check if there are any CHUNK_* bits left */
++ if (start > device->total_bytes) {
++ WARN_ON(IS_ENABLED(CONFIG_BTRFS_DEBUG));
++ btrfs_warn_in_rcu(fs_info,
++"ignoring attempt to trim beyond device size: offset %llu length %llu device %s device size %llu",
++ start, end - start + 1,
++ rcu_str_deref(device->name),
++ device->total_bytes);
++ mutex_unlock(&fs_info->chunk_mutex);
++ ret = 0;
++ break;
++ }
++
+ /* Ensure we skip the reserved area in the first 1M */
+ start = max_t(u64, start, SZ_1M);
+
+diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
+index eeaee346f5a95..8ba8788461ae5 100644
+--- a/fs/btrfs/extent_io.c
++++ b/fs/btrfs/extent_io.c
+@@ -4127,7 +4127,7 @@ retry:
+ if (!test_bit(BTRFS_FS_STATE_ERROR, &fs_info->fs_state)) {
+ ret = flush_write_bio(&epd);
+ } else {
+- ret = -EUCLEAN;
++ ret = -EROFS;
+ end_write_bio(&epd, ret);
+ }
+ return ret;
+@@ -4502,15 +4502,25 @@ int try_release_extent_mapping(struct page *page, gfp_t mask)
+ free_extent_map(em);
+ break;
+ }
+- if (!test_range_bit(tree, em->start,
+- extent_map_end(em) - 1,
+- EXTENT_LOCKED, 0, NULL)) {
++ if (test_range_bit(tree, em->start,
++ extent_map_end(em) - 1,
++ EXTENT_LOCKED, 0, NULL))
++ goto next;
++ /*
++ * If it's not in the list of modified extents, used
++ * by a fast fsync, we can remove it. If it's being
++ * logged we can safely remove it since fsync took an
++ * extra reference on the em.
++ */
++ if (list_empty(&em->list) ||
++ test_bit(EXTENT_FLAG_LOGGING, &em->flags)) {
+ set_bit(BTRFS_INODE_NEEDS_FULL_SYNC,
+ &btrfs_inode->runtime_flags);
+ remove_extent_mapping(map, em);
+ /* once for the rb tree */
+ free_extent_map(em);
+ }
++next:
+ start = extent_map_end(em);
+ write_unlock(&map->lock);
+
+diff --git a/fs/btrfs/free-space-cache.c b/fs/btrfs/free-space-cache.c
+index 55955bd424d70..6f7b6bca6dc5b 100644
+--- a/fs/btrfs/free-space-cache.c
++++ b/fs/btrfs/free-space-cache.c
+@@ -2281,7 +2281,7 @@ out:
+ static bool try_merge_free_space(struct btrfs_free_space_ctl *ctl,
+ struct btrfs_free_space *info, bool update_stat)
+ {
+- struct btrfs_free_space *left_info;
++ struct btrfs_free_space *left_info = NULL;
+ struct btrfs_free_space *right_info;
+ bool merged = false;
+ u64 offset = info->offset;
+@@ -2297,7 +2297,7 @@ static bool try_merge_free_space(struct btrfs_free_space_ctl *ctl,
+ if (right_info && rb_prev(&right_info->offset_index))
+ left_info = rb_entry(rb_prev(&right_info->offset_index),
+ struct btrfs_free_space, offset_index);
+- else
++ else if (!right_info)
+ left_info = tree_search_offset(ctl, offset - 1, 0, 0);
+
+ /* See try_merge_free_space() comment. */
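The free-space-cache hunk fixes two related problems: `left_info` was read uninitialized when a right neighbor existed without a predecessor, and the fallback tree lookup ran even when a right neighbor had been found. Initializing to NULL and gating the fallback on `!right_info` closes both. A compact sketch of the control flow (types and names are illustrative):

```c
#include <assert.h>
#include <stddef.h>

struct node_sketch {
	int dummy;
};

/*
 * Pick the left neighbor: prefer the predecessor of the right
 * neighbor; fall back to a direct lookup only when there is no right
 * neighbor at all. `left` must start as NULL so the
 * "right exists but has no predecessor" case yields no left neighbor
 * instead of garbage.
 */
static struct node_sketch *pick_left_sketch(struct node_sketch *right,
					    struct node_sketch *right_prev,
					    struct node_sketch *fallback)
{
	struct node_sketch *left = NULL;	/* was uninitialized */

	if (right && right_prev)
		left = right_prev;
	else if (!right)			/* was an unconditional else */
		left = fallback;
	return left;
}
```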
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index 3f77ec5de8ec7..7ba1218b1630e 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -650,12 +650,18 @@ cont:
+ page_error_op |
+ PAGE_END_WRITEBACK);
+
+- for (i = 0; i < nr_pages; i++) {
+- WARN_ON(pages[i]->mapping);
+- put_page(pages[i]);
++ /*
++ * Ensure we only free the compressed pages if we have
++ * them allocated, as we can still reach here with
++ * inode_need_compress() == false.
++ */
++ if (pages) {
++ for (i = 0; i < nr_pages; i++) {
++ WARN_ON(pages[i]->mapping);
++ put_page(pages[i]);
++ }
++ kfree(pages);
+ }
+- kfree(pages);
+-
+ return 0;
+ }
+ }
+@@ -4041,6 +4047,8 @@ int btrfs_delete_subvolume(struct inode *dir, struct dentry *dentry)
+ }
+ }
+
++ free_anon_bdev(dest->anon_dev);
++ dest->anon_dev = 0;
+ out_end_trans:
+ trans->block_rsv = NULL;
+ trans->bytes_reserved = 0;
+@@ -6625,7 +6633,7 @@ struct extent_map *btrfs_get_extent(struct btrfs_inode *inode,
+ extent_type == BTRFS_FILE_EXTENT_PREALLOC) {
+ /* Only regular file could have regular/prealloc extent */
+ if (!S_ISREG(inode->vfs_inode.i_mode)) {
+- ret = -EUCLEAN;
++ err = -EUCLEAN;
+ btrfs_crit(fs_info,
+ "regular/prealloc extent found for non-regular inode %llu",
+ btrfs_ino(inode));
+diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
+index e8f7c5f008944..1448bc43561c2 100644
+--- a/fs/btrfs/ioctl.c
++++ b/fs/btrfs/ioctl.c
+@@ -164,8 +164,11 @@ static int btrfs_ioctl_getflags(struct file *file, void __user *arg)
+ return 0;
+ }
+
+-/* Check if @flags are a supported and valid set of FS_*_FL flags */
+-static int check_fsflags(unsigned int flags)
++/*
++ * Check if @flags are a supported and valid set of FS_*_FL flags and that
++ * the old and new flags are not conflicting
++ */
++static int check_fsflags(unsigned int old_flags, unsigned int flags)
+ {
+ if (flags & ~(FS_IMMUTABLE_FL | FS_APPEND_FL | \
+ FS_NOATIME_FL | FS_NODUMP_FL | \
+@@ -174,9 +177,19 @@ static int check_fsflags(unsigned int flags)
+ FS_NOCOW_FL))
+ return -EOPNOTSUPP;
+
++ /* COMPR and NOCOMP on new/old are valid */
+ if ((flags & FS_NOCOMP_FL) && (flags & FS_COMPR_FL))
+ return -EINVAL;
+
++ if ((flags & FS_COMPR_FL) && (flags & FS_NOCOW_FL))
++ return -EINVAL;
++
++ /* NOCOW and compression options are mutually exclusive */
++ if ((old_flags & FS_NOCOW_FL) && (flags & (FS_COMPR_FL | FS_NOCOMP_FL)))
++ return -EINVAL;
++ if ((flags & FS_NOCOW_FL) && (old_flags & (FS_COMPR_FL | FS_NOCOMP_FL)))
++ return -EINVAL;
++
+ return 0;
+ }
+
+@@ -190,7 +203,7 @@ static int btrfs_ioctl_setflags(struct file *file, void __user *arg)
+ unsigned int fsflags, old_fsflags;
+ int ret;
+ const char *comp = NULL;
+- u32 binode_flags = binode->flags;
++ u32 binode_flags;
+
+ if (!inode_owner_or_capable(inode))
+ return -EPERM;
+@@ -201,22 +214,23 @@ static int btrfs_ioctl_setflags(struct file *file, void __user *arg)
+ if (copy_from_user(&fsflags, arg, sizeof(fsflags)))
+ return -EFAULT;
+
+- ret = check_fsflags(fsflags);
+- if (ret)
+- return ret;
+-
+ ret = mnt_want_write_file(file);
+ if (ret)
+ return ret;
+
+ inode_lock(inode);
+-
+ fsflags = btrfs_mask_fsflags_for_type(inode, fsflags);
+ old_fsflags = btrfs_inode_flags_to_fsflags(binode->flags);
++
+ ret = vfs_ioc_setflags_prepare(inode, old_fsflags, fsflags);
+ if (ret)
+ goto out_unlock;
+
++ ret = check_fsflags(old_fsflags, fsflags);
++ if (ret)
++ goto out_unlock;
++
++ binode_flags = binode->flags;
+ if (fsflags & FS_SYNC_FL)
+ binode_flags |= BTRFS_INODE_SYNC;
+ else
+@@ -566,6 +580,7 @@ static noinline int create_subvol(struct inode *dir,
+ struct inode *inode;
+ int ret;
+ int err;
++ dev_t anon_dev = 0;
+ u64 objectid;
+ u64 new_dirid = BTRFS_FIRST_FREE_OBJECTID;
+ u64 index = 0;
+@@ -578,6 +593,10 @@ static noinline int create_subvol(struct inode *dir,
+ if (ret)
+ goto fail_free;
+
++ ret = get_anon_bdev(&anon_dev);
++ if (ret < 0)
++ goto fail_free;
++
+ /*
+ * Don't create subvolume whose level is not zero. Or qgroup will be
+ * screwed up since it assumes subvolume qgroup's level to be 0.
+@@ -660,12 +679,15 @@ static noinline int create_subvol(struct inode *dir,
+ goto fail;
+
+ key.offset = (u64)-1;
+- new_root = btrfs_get_fs_root(fs_info, objectid, true);
++ new_root = btrfs_get_new_fs_root(fs_info, objectid, anon_dev);
+ if (IS_ERR(new_root)) {
++ free_anon_bdev(anon_dev);
+ ret = PTR_ERR(new_root);
+ btrfs_abort_transaction(trans, ret);
+ goto fail;
+ }
++ /* Freeing will be done in btrfs_put_root() of new_root */
++ anon_dev = 0;
+
+ btrfs_record_root_in_trans(trans, new_root);
+
+@@ -735,6 +757,8 @@ fail:
+ return ret;
+
+ fail_free:
++ if (anon_dev)
++ free_anon_bdev(anon_dev);
+ kfree(root_item);
+ return ret;
+ }
+@@ -762,6 +786,9 @@ static int create_snapshot(struct btrfs_root *root, struct inode *dir,
+ if (!pending_snapshot)
+ return -ENOMEM;
+
++ ret = get_anon_bdev(&pending_snapshot->anon_dev);
++ if (ret < 0)
++ goto free_pending;
+ pending_snapshot->root_item = kzalloc(sizeof(struct btrfs_root_item),
+ GFP_KERNEL);
+ pending_snapshot->path = btrfs_alloc_path();
+@@ -823,10 +850,16 @@ static int create_snapshot(struct btrfs_root *root, struct inode *dir,
+
+ d_instantiate(dentry, inode);
+ ret = 0;
++ pending_snapshot->anon_dev = 0;
+ fail:
++ /* Prevent double freeing of anon_dev */
++ if (ret && pending_snapshot->snap)
++ pending_snapshot->snap->anon_dev = 0;
+ btrfs_put_root(pending_snapshot->snap);
+ btrfs_subvolume_release_metadata(fs_info, &pending_snapshot->block_rsv);
+ free_pending:
++ if (pending_snapshot->anon_dev)
++ free_anon_bdev(pending_snapshot->anon_dev);
+ kfree(pending_snapshot->root_item);
+ btrfs_free_path(pending_snapshot->path);
+ kfree(pending_snapshot);
+@@ -3198,11 +3231,15 @@ static long btrfs_ioctl_fs_info(struct btrfs_fs_info *fs_info,
+ struct btrfs_ioctl_fs_info_args *fi_args;
+ struct btrfs_device *device;
+ struct btrfs_fs_devices *fs_devices = fs_info->fs_devices;
++ u64 flags_in;
+ int ret = 0;
+
+- fi_args = kzalloc(sizeof(*fi_args), GFP_KERNEL);
+- if (!fi_args)
+- return -ENOMEM;
++ fi_args = memdup_user(arg, sizeof(*fi_args));
++ if (IS_ERR(fi_args))
++ return PTR_ERR(fi_args);
++
++ flags_in = fi_args->flags;
++ memset(fi_args, 0, sizeof(*fi_args));
+
+ rcu_read_lock();
+ fi_args->num_devices = fs_devices->num_devices;
+@@ -3218,6 +3255,12 @@ static long btrfs_ioctl_fs_info(struct btrfs_fs_info *fs_info,
+ fi_args->sectorsize = fs_info->sectorsize;
+ fi_args->clone_alignment = fs_info->sectorsize;
+
++ if (flags_in & BTRFS_FS_INFO_FLAG_CSUM_INFO) {
++ fi_args->csum_type = btrfs_super_csum_type(fs_info->super_copy);
++ fi_args->csum_size = btrfs_super_csum_size(fs_info->super_copy);
++ fi_args->flags |= BTRFS_FS_INFO_FLAG_CSUM_INFO;
++ }
++
+ if (copy_to_user(arg, fi_args, sizeof(*fi_args)))
+ ret = -EFAULT;
+
+diff --git a/fs/btrfs/ref-verify.c b/fs/btrfs/ref-verify.c
+index af92525dbb168..7f03dbe5b609d 100644
+--- a/fs/btrfs/ref-verify.c
++++ b/fs/btrfs/ref-verify.c
+@@ -286,6 +286,8 @@ static struct block_entry *add_block_entry(struct btrfs_fs_info *fs_info,
+ exist_re = insert_root_entry(&exist->roots, re);
+ if (exist_re)
+ kfree(re);
++ } else {
++ kfree(re);
+ }
+ kfree(be);
+ return exist;
+diff --git a/fs/btrfs/relocation.c b/fs/btrfs/relocation.c
+index 3bbae80c752fc..5740ed51a1e8e 100644
+--- a/fs/btrfs/relocation.c
++++ b/fs/btrfs/relocation.c
+@@ -1686,12 +1686,20 @@ static noinline_for_stack int merge_reloc_root(struct reloc_control *rc,
+ btrfs_unlock_up_safe(path, 0);
+ }
+
+- min_reserved = fs_info->nodesize * (BTRFS_MAX_LEVEL - 1) * 2;
++ /*
++ * In merge_reloc_root(), we modify the upper level pointer to swap the
++ * tree blocks between reloc tree and subvolume tree. Thus for tree
++ * block COW, we COW at most from level 1 to root level for each tree.
++ *
++ * Thus the needed metadata size is at most root_level * nodesize,
++ * and * 2 since we have two trees to COW.
++ */
++ min_reserved = fs_info->nodesize * btrfs_root_level(root_item) * 2;
+ memset(&next_key, 0, sizeof(next_key));
+
+ while (1) {
+ ret = btrfs_block_rsv_refill(root, rc->block_rsv, min_reserved,
+- BTRFS_RESERVE_FLUSH_ALL);
++ BTRFS_RESERVE_FLUSH_LIMIT);
+ if (ret) {
+ err = ret;
+ goto out;
+diff --git a/fs/btrfs/scrub.c b/fs/btrfs/scrub.c
+index 016a025e36c74..5f5b21e389dbc 100644
+--- a/fs/btrfs/scrub.c
++++ b/fs/btrfs/scrub.c
+@@ -3758,7 +3758,7 @@ static noinline_for_stack int scrub_supers(struct scrub_ctx *sctx,
+ struct btrfs_fs_info *fs_info = sctx->fs_info;
+
+ if (test_bit(BTRFS_FS_STATE_ERROR, &fs_info->fs_state))
+- return -EIO;
++ return -EROFS;
+
+ /* Seed devices of a new filesystem has their own generation. */
+ if (scrub_dev->fs_devices != fs_info->fs_devices)
+diff --git a/fs/btrfs/super.c b/fs/btrfs/super.c
+index c3826ae883f0e..56cd2cf571588 100644
+--- a/fs/btrfs/super.c
++++ b/fs/btrfs/super.c
+@@ -449,6 +449,7 @@ int btrfs_parse_options(struct btrfs_fs_info *info, char *options,
+ char *compress_type;
+ bool compress_force = false;
+ enum btrfs_compression_type saved_compress_type;
++ int saved_compress_level;
+ bool saved_compress_force;
+ int no_compress = 0;
+
+@@ -531,6 +532,7 @@ int btrfs_parse_options(struct btrfs_fs_info *info, char *options,
+ info->compress_type : BTRFS_COMPRESS_NONE;
+ saved_compress_force =
+ btrfs_test_opt(info, FORCE_COMPRESS);
++ saved_compress_level = info->compress_level;
+ if (token == Opt_compress ||
+ token == Opt_compress_force ||
+ strncmp(args[0].from, "zlib", 4) == 0) {
+@@ -575,6 +577,8 @@ int btrfs_parse_options(struct btrfs_fs_info *info, char *options,
+ no_compress = 0;
+ } else if (strncmp(args[0].from, "no", 2) == 0) {
+ compress_type = "no";
++ info->compress_level = 0;
++ info->compress_type = 0;
+ btrfs_clear_opt(info->mount_opt, COMPRESS);
+ btrfs_clear_opt(info->mount_opt, FORCE_COMPRESS);
+ compress_force = false;
+@@ -595,11 +599,11 @@ int btrfs_parse_options(struct btrfs_fs_info *info, char *options,
+ */
+ btrfs_clear_opt(info->mount_opt, FORCE_COMPRESS);
+ }
+- if ((btrfs_test_opt(info, COMPRESS) &&
+- (info->compress_type != saved_compress_type ||
+- compress_force != saved_compress_force)) ||
+- (!btrfs_test_opt(info, COMPRESS) &&
+- no_compress == 1)) {
++ if (no_compress == 1) {
++ btrfs_info(info, "use no compression");
++ } else if ((info->compress_type != saved_compress_type) ||
++ (compress_force != saved_compress_force) ||
++ (info->compress_level != saved_compress_level)) {
+ btrfs_info(info, "%s %s compression, level %d",
+ (compress_force) ? "force" : "use",
+ compress_type, info->compress_level);
+@@ -1312,6 +1316,7 @@ static int btrfs_show_options(struct seq_file *seq, struct dentry *dentry)
+ {
+ struct btrfs_fs_info *info = btrfs_sb(dentry->d_sb);
+ const char *compress_type;
++ const char *subvol_name;
+
+ if (btrfs_test_opt(info, DEGRADED))
+ seq_puts(seq, ",degraded");
+@@ -1398,8 +1403,13 @@ static int btrfs_show_options(struct seq_file *seq, struct dentry *dentry)
+ seq_puts(seq, ",ref_verify");
+ seq_printf(seq, ",subvolid=%llu",
+ BTRFS_I(d_inode(dentry))->root->root_key.objectid);
+- seq_puts(seq, ",subvol=");
+- seq_dentry(seq, dentry, " \t\n\\");
++ subvol_name = btrfs_get_subvol_name_from_objectid(info,
++ BTRFS_I(d_inode(dentry))->root->root_key.objectid);
++ if (!IS_ERR(subvol_name)) {
++ seq_puts(seq, ",subvol=");
++ seq_escape(seq, subvol_name, " \t\n\\");
++ kfree(subvol_name);
++ }
+ return 0;
+ }
+
+@@ -1887,6 +1897,12 @@ static int btrfs_remount(struct super_block *sb, int *flags, char *data)
+ set_bit(BTRFS_FS_OPEN, &fs_info->flags);
+ }
+ out:
++ /*
++ * We need to set SB_I_VERSION here otherwise it'll get cleared by VFS,
++ * since the absence of the flag means it can be toggled off by remount.
++ */
++ *flags |= SB_I_VERSION;
++
+ wake_up_process(fs_info->transaction_kthread);
+ btrfs_remount_cleanup(fs_info, old_opts);
+ return 0;
+@@ -2296,9 +2312,7 @@ static int btrfs_unfreeze(struct super_block *sb)
+ static int btrfs_show_devname(struct seq_file *m, struct dentry *root)
+ {
+ struct btrfs_fs_info *fs_info = btrfs_sb(root->d_sb);
+- struct btrfs_fs_devices *cur_devices;
+ struct btrfs_device *dev, *first_dev = NULL;
+- struct list_head *head;
+
+ /*
+ * Lightweight locking of the devices. We should not need
+@@ -2308,18 +2322,13 @@ static int btrfs_show_devname(struct seq_file *m, struct dentry *root)
+ * least until the rcu_read_unlock.
+ */
+ rcu_read_lock();
+- cur_devices = fs_info->fs_devices;
+- while (cur_devices) {
+- head = &cur_devices->devices;
+- list_for_each_entry_rcu(dev, head, dev_list) {
+- if (test_bit(BTRFS_DEV_STATE_MISSING, &dev->dev_state))
+- continue;
+- if (!dev->name)
+- continue;
+- if (!first_dev || dev->devid < first_dev->devid)
+- first_dev = dev;
+- }
+- cur_devices = cur_devices->seed;
++ list_for_each_entry_rcu(dev, &fs_info->fs_devices->devices, dev_list) {
++ if (test_bit(BTRFS_DEV_STATE_MISSING, &dev->dev_state))
++ continue;
++ if (!dev->name)
++ continue;
++ if (!first_dev || dev->devid < first_dev->devid)
++ first_dev = dev;
+ }
+
+ if (first_dev)
+diff --git a/fs/btrfs/sysfs.c b/fs/btrfs/sysfs.c
+index a39bff64ff24e..abc4a8fd6df65 100644
+--- a/fs/btrfs/sysfs.c
++++ b/fs/btrfs/sysfs.c
+@@ -1273,7 +1273,9 @@ int btrfs_sysfs_add_devices_dir(struct btrfs_fs_devices *fs_devices,
+ {
+ int error = 0;
+ struct btrfs_device *dev;
++ unsigned int nofs_flag;
+
++ nofs_flag = memalloc_nofs_save();
+ list_for_each_entry(dev, &fs_devices->devices, dev_list) {
+
+ if (one_device && one_device != dev)
+@@ -1301,6 +1303,7 @@ int btrfs_sysfs_add_devices_dir(struct btrfs_fs_devices *fs_devices,
+ break;
+ }
+ }
++ memalloc_nofs_restore(nofs_flag);
+
+ return error;
+ }
+diff --git a/fs/btrfs/transaction.c b/fs/btrfs/transaction.c
+index b359d4b17658b..2710f8ddb95fb 100644
+--- a/fs/btrfs/transaction.c
++++ b/fs/btrfs/transaction.c
+@@ -937,7 +937,10 @@ static int __btrfs_end_transaction(struct btrfs_trans_handle *trans,
+ if (TRANS_ABORTED(trans) ||
+ test_bit(BTRFS_FS_STATE_ERROR, &info->fs_state)) {
+ wake_up_process(info->transaction_kthread);
+- err = -EIO;
++ if (TRANS_ABORTED(trans))
++ err = trans->aborted;
++ else
++ err = -EROFS;
+ }
+
+ kmem_cache_free(btrfs_trans_handle_cachep, trans);
+@@ -1630,7 +1633,7 @@ static noinline int create_pending_snapshot(struct btrfs_trans_handle *trans,
+ }
+
+ key.offset = (u64)-1;
+- pending->snap = btrfs_get_fs_root(fs_info, objectid, true);
++ pending->snap = btrfs_get_new_fs_root(fs_info, objectid, pending->anon_dev);
+ if (IS_ERR(pending->snap)) {
+ ret = PTR_ERR(pending->snap);
+ btrfs_abort_transaction(trans, ret);
+diff --git a/fs/btrfs/transaction.h b/fs/btrfs/transaction.h
+index bf102e64bfb25..a122a712f5cc0 100644
+--- a/fs/btrfs/transaction.h
++++ b/fs/btrfs/transaction.h
+@@ -151,6 +151,8 @@ struct btrfs_pending_snapshot {
+ struct btrfs_block_rsv block_rsv;
+ /* extra metadata reservation for relocation */
+ int error;
++ /* Preallocated anonymous block device number */
++ dev_t anon_dev;
+ bool readonly;
+ struct list_head list;
+ };
+diff --git a/fs/btrfs/tree-log.c b/fs/btrfs/tree-log.c
+index cd5348f352ddc..d22ff1e0963c6 100644
+--- a/fs/btrfs/tree-log.c
++++ b/fs/btrfs/tree-log.c
+@@ -3116,29 +3116,17 @@ int btrfs_sync_log(struct btrfs_trans_handle *trans,
+ btrfs_init_log_ctx(&root_log_ctx, NULL);
+
+ mutex_lock(&log_root_tree->log_mutex);
+- atomic_inc(&log_root_tree->log_batch);
+- atomic_inc(&log_root_tree->log_writers);
+
+ index2 = log_root_tree->log_transid % 2;
+ list_add_tail(&root_log_ctx.list, &log_root_tree->log_ctxs[index2]);
+ root_log_ctx.log_transid = log_root_tree->log_transid;
+
+- mutex_unlock(&log_root_tree->log_mutex);
+-
+- mutex_lock(&log_root_tree->log_mutex);
+-
+ /*
+ * Now we are safe to update the log_root_tree because we're under the
+ * log_mutex, and we're a current writer so we're holding the commit
+ * open until we drop the log_mutex.
+ */
+ ret = update_log_root(trans, log, &new_root_item);
+-
+- if (atomic_dec_and_test(&log_root_tree->log_writers)) {
+- /* atomic_dec_and_test implies a barrier */
+- cond_wake_up_nomb(&log_root_tree->log_writer_wait);
+- }
+-
+ if (ret) {
+ if (!list_empty(&root_log_ctx.list))
+ list_del_init(&root_log_ctx.list);
+@@ -3184,8 +3172,6 @@ int btrfs_sync_log(struct btrfs_trans_handle *trans,
+ root_log_ctx.log_transid - 1);
+ }
+
+- wait_for_writer(log_root_tree);
+-
+ /*
+ * now that we've moved on to the tree of log tree roots,
+ * check the full commit flag again
+@@ -4041,11 +4027,8 @@ static noinline int copy_items(struct btrfs_trans_handle *trans,
+ fs_info->csum_root,
+ ds + cs, ds + cs + cl - 1,
+ &ordered_sums, 0);
+- if (ret) {
+- btrfs_release_path(dst_path);
+- kfree(ins_data);
+- return ret;
+- }
++ if (ret)
++ break;
+ }
+ }
+ }
+@@ -4058,7 +4041,6 @@ static noinline int copy_items(struct btrfs_trans_handle *trans,
+ * we have to do this after the loop above to avoid changing the
+ * log tree while trying to change the log tree.
+ */
+- ret = 0;
+ while (!list_empty(&ordered_sums)) {
+ struct btrfs_ordered_sum *sums = list_entry(ordered_sums.next,
+ struct btrfs_ordered_sum,
+@@ -5123,14 +5105,13 @@ static int btrfs_log_inode(struct btrfs_trans_handle *trans,
+ const loff_t end,
+ struct btrfs_log_ctx *ctx)
+ {
+- struct btrfs_fs_info *fs_info = root->fs_info;
+ struct btrfs_path *path;
+ struct btrfs_path *dst_path;
+ struct btrfs_key min_key;
+ struct btrfs_key max_key;
+ struct btrfs_root *log = root->log_root;
+ int err = 0;
+- int ret;
++ int ret = 0;
+ bool fast_search = false;
+ u64 ino = btrfs_ino(inode);
+ struct extent_map_tree *em_tree = &inode->extent_tree;
+@@ -5166,15 +5147,19 @@ static int btrfs_log_inode(struct btrfs_trans_handle *trans,
+ max_key.offset = (u64)-1;
+
+ /*
+- * Only run delayed items if we are a dir or a new file.
+- * Otherwise commit the delayed inode only, which is needed in
+- * order for the log replay code to mark inodes for link count
+- * fixup (create temporary BTRFS_TREE_LOG_FIXUP_OBJECTID items).
++ * Only run delayed items if we are a directory. We want to make sure
++ * all directory indexes hit the fs/subvolume tree so we can find them
++ * and figure out which index ranges have to be logged.
++ *
++ * Otherwise commit the delayed inode only if the full sync flag is set,
++ * as we want to make sure an up to date version is in the subvolume
++ * tree so copy_inode_items_to_log() / copy_items() can find it and copy
++ * it to the log tree. For a non full sync, we always log the inode item
++ * based on the in-memory struct btrfs_inode which is always up to date.
+ */
+- if (S_ISDIR(inode->vfs_inode.i_mode) ||
+- inode->generation > fs_info->last_trans_committed)
++ if (S_ISDIR(inode->vfs_inode.i_mode))
+ ret = btrfs_commit_inode_delayed_items(trans, inode);
+- else
++ else if (test_bit(BTRFS_INODE_NEEDS_FULL_SYNC, &inode->runtime_flags))
+ ret = btrfs_commit_inode_delayed_inode(inode);
+
+ if (ret) {
+diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
+index f403fb1e6d379..0fecf1e4d8f66 100644
+--- a/fs/btrfs/volumes.c
++++ b/fs/btrfs/volumes.c
+@@ -245,7 +245,9 @@ static int __btrfs_map_block(struct btrfs_fs_info *fs_info,
+ *
+ * global::fs_devs - add, remove, updates to the global list
+ *
+- * does not protect: manipulation of the fs_devices::devices list!
++ * does not protect: manipulation of the fs_devices::devices list in general
++ * but in mount context it could be used to exclude list modifications by eg.
++ * scan ioctl
+ *
+ * btrfs_device::name - renames (write side), read is RCU
+ *
+@@ -258,6 +260,9 @@ static int __btrfs_map_block(struct btrfs_fs_info *fs_info,
+ * may be used to exclude some operations from running concurrently without any
+ * modifications to the list (see write_all_supers)
+ *
++ * Is not required at mount and close times, because our device list is
++ * protected by the uuid_mutex at that point.
++ *
+ * balance_mutex
+ * -------------
+ * protects balance structures (status, state) and context accessed from
+@@ -602,6 +607,11 @@ static int btrfs_free_stale_devices(const char *path,
+ return ret;
+ }
+
++/*
++ * This is only used on mount, and we are protected from competing things
++ * messing with our fs_devices by the uuid_mutex, thus we do not need the
++ * fs_devices->device_list_mutex here.
++ */
+ static int btrfs_open_one_device(struct btrfs_fs_devices *fs_devices,
+ struct btrfs_device *device, fmode_t flags,
+ void *holder)
+@@ -1229,8 +1239,14 @@ int btrfs_open_devices(struct btrfs_fs_devices *fs_devices,
+ int ret;
+
+ lockdep_assert_held(&uuid_mutex);
++ /*
++ * The device_list_mutex cannot be taken here in case opening the
++ * underlying device takes further locks like bd_mutex.
++ *
++ * We also don't need the lock here as this is called during mount and
++ * exclusion is provided by uuid_mutex
++ */
+
+- mutex_lock(&fs_devices->device_list_mutex);
+ if (fs_devices->opened) {
+ fs_devices->opened++;
+ ret = 0;
+@@ -1238,7 +1254,6 @@ int btrfs_open_devices(struct btrfs_fs_devices *fs_devices,
+ list_sort(NULL, &fs_devices->devices, devid_cmp);
+ ret = open_fs_devices(fs_devices, flags, holder);
+ }
+- mutex_unlock(&fs_devices->device_list_mutex);
+
+ return ret;
+ }
+@@ -3231,7 +3246,7 @@ static int del_balance_item(struct btrfs_fs_info *fs_info)
+ if (!path)
+ return -ENOMEM;
+
+- trans = btrfs_start_transaction(root, 0);
++ trans = btrfs_start_transaction_fallback_global_rsv(root, 0);
+ if (IS_ERR(trans)) {
+ btrfs_free_path(path);
+ return PTR_ERR(trans);
+@@ -4135,7 +4150,22 @@ int btrfs_balance(struct btrfs_fs_info *fs_info,
+ mutex_lock(&fs_info->balance_mutex);
+ if (ret == -ECANCELED && atomic_read(&fs_info->balance_pause_req))
+ btrfs_info(fs_info, "balance: paused");
+- else if (ret == -ECANCELED && atomic_read(&fs_info->balance_cancel_req))
++ /*
++ * Balance can be canceled by:
++ *
++ * - Regular cancel request
++ * Then ret == -ECANCELED and balance_cancel_req > 0
++ *
++ * - Fatal signal to "btrfs" process
++ * Either the signal caught by wait_reserve_ticket() and callers
++ * got -EINTR, or caught by btrfs_should_cancel_balance() and
++ * got -ECANCELED.
++ * Either way, in this case balance_cancel_req = 0, and
++ * ret == -EINTR or ret == -ECANCELED.
++ *
++ * So here we only check the return value to catch canceled balance.
++ */
++ else if (ret == -ECANCELED || ret == -EINTR)
+ btrfs_info(fs_info, "balance: canceled");
+ else
+ btrfs_info(fs_info, "balance: ended with status: %d", ret);
+@@ -4690,6 +4720,10 @@ again:
+ }
+
+ mutex_lock(&fs_info->chunk_mutex);
++ /* Clear all state bits beyond the shrunk device size */
++ clear_extent_bits(&device->alloc_state, new_size, (u64)-1,
++ CHUNK_STATE_MASK);
++
+ btrfs_device_set_disk_total_bytes(device, new_size);
+ if (list_empty(&device->post_commit_list))
+ list_add_tail(&device->post_commit_list,
+@@ -7049,7 +7083,6 @@ int btrfs_read_chunk_tree(struct btrfs_fs_info *fs_info)
+ * otherwise we don't need it.
+ */
+ mutex_lock(&uuid_mutex);
+- mutex_lock(&fs_info->chunk_mutex);
+
+ /*
+ * It is possible for mount and umount to race in such a way that
+@@ -7094,7 +7127,9 @@ int btrfs_read_chunk_tree(struct btrfs_fs_info *fs_info)
+ } else if (found_key.type == BTRFS_CHUNK_ITEM_KEY) {
+ struct btrfs_chunk *chunk;
+ chunk = btrfs_item_ptr(leaf, slot, struct btrfs_chunk);
++ mutex_lock(&fs_info->chunk_mutex);
+ ret = read_one_chunk(&found_key, leaf, chunk);
++ mutex_unlock(&fs_info->chunk_mutex);
+ if (ret)
+ goto error;
+ }
+@@ -7124,7 +7159,6 @@ int btrfs_read_chunk_tree(struct btrfs_fs_info *fs_info)
+ }
+ ret = 0;
+ error:
+- mutex_unlock(&fs_info->chunk_mutex);
+ mutex_unlock(&uuid_mutex);
+
+ btrfs_free_path(path);
+diff --git a/fs/ceph/dir.c b/fs/ceph/dir.c
+index 39f5311404b08..060bdcc5ce32c 100644
+--- a/fs/ceph/dir.c
++++ b/fs/ceph/dir.c
+@@ -930,6 +930,10 @@ static int ceph_symlink(struct inode *dir, struct dentry *dentry,
+ req->r_num_caps = 2;
+ req->r_dentry_drop = CEPH_CAP_FILE_SHARED | CEPH_CAP_AUTH_EXCL;
+ req->r_dentry_unless = CEPH_CAP_FILE_EXCL;
++ if (as_ctx.pagelist) {
++ req->r_pagelist = as_ctx.pagelist;
++ as_ctx.pagelist = NULL;
++ }
+ err = ceph_mdsc_do_request(mdsc, dir, req);
+ if (!err && !req->r_reply_info.head->is_dentry)
+ err = ceph_handle_notrace_create(dir, dentry);
+diff --git a/fs/ceph/mds_client.c b/fs/ceph/mds_client.c
+index a50497142e598..dea971f9d89ee 100644
+--- a/fs/ceph/mds_client.c
++++ b/fs/ceph/mds_client.c
+@@ -3279,8 +3279,10 @@ static void handle_session(struct ceph_mds_session *session,
+ goto bad;
+ /* version >= 3, feature bits */
+ ceph_decode_32_safe(&p, end, len, bad);
+- ceph_decode_64_safe(&p, end, features, bad);
+- p += len - sizeof(features);
++ if (len) {
++ ceph_decode_64_safe(&p, end, features, bad);
++ p += len - sizeof(features);
++ }
+ }
+
+ mutex_lock(&mdsc->mutex);
+diff --git a/fs/cifs/smb2inode.c b/fs/cifs/smb2inode.c
+index b9db73687eaaf..eba01d0908dd9 100644
+--- a/fs/cifs/smb2inode.c
++++ b/fs/cifs/smb2inode.c
+@@ -115,6 +115,7 @@ smb2_compound_op(const unsigned int xid, struct cifs_tcon *tcon,
+ vars->oparms.fid = &fid;
+ vars->oparms.reconnect = false;
+ vars->oparms.mode = mode;
++ vars->oparms.cifs_sb = cifs_sb;
+
+ rqst[num_rqst].rq_iov = &vars->open_iov[0];
+ rqst[num_rqst].rq_nvec = SMB2_CREATE_IOV_SIZE;
+diff --git a/fs/cifs/smb2misc.c b/fs/cifs/smb2misc.c
+index 157992864ce7e..d88e2683626e7 100644
+--- a/fs/cifs/smb2misc.c
++++ b/fs/cifs/smb2misc.c
+@@ -508,15 +508,31 @@ cifs_ses_oplock_break(struct work_struct *work)
+ kfree(lw);
+ }
+
++static void
++smb2_queue_pending_open_break(struct tcon_link *tlink, __u8 *lease_key,
++ __le32 new_lease_state)
++{
++ struct smb2_lease_break_work *lw;
++
++ lw = kmalloc(sizeof(struct smb2_lease_break_work), GFP_KERNEL);
++ if (!lw) {
++ cifs_put_tlink(tlink);
++ return;
++ }
++
++ INIT_WORK(&lw->lease_break, cifs_ses_oplock_break);
++ lw->tlink = tlink;
++ lw->lease_state = new_lease_state;
++ memcpy(lw->lease_key, lease_key, SMB2_LEASE_KEY_SIZE);
++ queue_work(cifsiod_wq, &lw->lease_break);
++}
++
+ static bool
+-smb2_tcon_has_lease(struct cifs_tcon *tcon, struct smb2_lease_break *rsp,
+- struct smb2_lease_break_work *lw)
++smb2_tcon_has_lease(struct cifs_tcon *tcon, struct smb2_lease_break *rsp)
+ {
+- bool found;
+ __u8 lease_state;
+ struct list_head *tmp;
+ struct cifsFileInfo *cfile;
+- struct cifs_pending_open *open;
+ struct cifsInodeInfo *cinode;
+ int ack_req = le32_to_cpu(rsp->Flags &
+ SMB2_NOTIFY_BREAK_LEASE_FLAG_ACK_REQUIRED);
+@@ -546,22 +562,29 @@ smb2_tcon_has_lease(struct cifs_tcon *tcon, struct smb2_lease_break *rsp,
+ cfile->oplock_level = lease_state;
+
+ cifs_queue_oplock_break(cfile);
+- kfree(lw);
+ return true;
+ }
+
+- found = false;
++ return false;
++}
++
++static struct cifs_pending_open *
++smb2_tcon_find_pending_open_lease(struct cifs_tcon *tcon,
++ struct smb2_lease_break *rsp)
++{
++ __u8 lease_state = le32_to_cpu(rsp->NewLeaseState);
++ int ack_req = le32_to_cpu(rsp->Flags &
++ SMB2_NOTIFY_BREAK_LEASE_FLAG_ACK_REQUIRED);
++ struct cifs_pending_open *open;
++ struct cifs_pending_open *found = NULL;
++
+ list_for_each_entry(open, &tcon->pending_opens, olist) {
+ if (memcmp(open->lease_key, rsp->LeaseKey,
+ SMB2_LEASE_KEY_SIZE))
+ continue;
+
+ if (!found && ack_req) {
+- found = true;
+- memcpy(lw->lease_key, open->lease_key,
+- SMB2_LEASE_KEY_SIZE);
+- lw->tlink = cifs_get_tlink(open->tlink);
+- queue_work(cifsiod_wq, &lw->lease_break);
++ found = open;
+ }
+
+ cifs_dbg(FYI, "found in the pending open list\n");
+@@ -582,14 +605,7 @@ smb2_is_valid_lease_break(char *buffer)
+ struct TCP_Server_Info *server;
+ struct cifs_ses *ses;
+ struct cifs_tcon *tcon;
+- struct smb2_lease_break_work *lw;
+-
+- lw = kmalloc(sizeof(struct smb2_lease_break_work), GFP_KERNEL);
+- if (!lw)
+- return false;
+-
+- INIT_WORK(&lw->lease_break, cifs_ses_oplock_break);
+- lw->lease_state = rsp->NewLeaseState;
++ struct cifs_pending_open *open;
+
+ cifs_dbg(FYI, "Checking for lease break\n");
+
+@@ -607,11 +623,27 @@ smb2_is_valid_lease_break(char *buffer)
+ spin_lock(&tcon->open_file_lock);
+ cifs_stats_inc(
+ &tcon->stats.cifs_stats.num_oplock_brks);
+- if (smb2_tcon_has_lease(tcon, rsp, lw)) {
++ if (smb2_tcon_has_lease(tcon, rsp)) {
+ spin_unlock(&tcon->open_file_lock);
+ spin_unlock(&cifs_tcp_ses_lock);
+ return true;
+ }
++ open = smb2_tcon_find_pending_open_lease(tcon,
++ rsp);
++ if (open) {
++ __u8 lease_key[SMB2_LEASE_KEY_SIZE];
++ struct tcon_link *tlink;
++
++ tlink = cifs_get_tlink(open->tlink);
++ memcpy(lease_key, open->lease_key,
++ SMB2_LEASE_KEY_SIZE);
++ spin_unlock(&tcon->open_file_lock);
++ spin_unlock(&cifs_tcp_ses_lock);
++ smb2_queue_pending_open_break(tlink,
++ lease_key,
++ rsp->NewLeaseState);
++ return true;
++ }
+ spin_unlock(&tcon->open_file_lock);
+
+ if (tcon->crfid.is_valid &&
+@@ -629,7 +661,6 @@ smb2_is_valid_lease_break(char *buffer)
+ }
+ }
+ spin_unlock(&cifs_tcp_ses_lock);
+- kfree(lw);
+ cifs_dbg(FYI, "Can not process lease break - no lease matched\n");
+ return false;
+ }
+diff --git a/fs/cifs/smb2pdu.c b/fs/cifs/smb2pdu.c
+index 2f4cdd290c464..4926887640048 100644
+--- a/fs/cifs/smb2pdu.c
++++ b/fs/cifs/smb2pdu.c
+@@ -1387,6 +1387,8 @@ SMB2_auth_kerberos(struct SMB2_sess_data *sess_data)
+ spnego_key = cifs_get_spnego_key(ses);
+ if (IS_ERR(spnego_key)) {
+ rc = PTR_ERR(spnego_key);
++ if (rc == -ENOKEY)
++ cifs_dbg(VFS, "Verify user has a krb5 ticket and keyutils is installed\n");
+ spnego_key = NULL;
+ goto out;
+ }
+diff --git a/fs/ext2/ialloc.c b/fs/ext2/ialloc.c
+index fda7d3f5b4be5..432c3febea6df 100644
+--- a/fs/ext2/ialloc.c
++++ b/fs/ext2/ialloc.c
+@@ -80,6 +80,7 @@ static void ext2_release_inode(struct super_block *sb, int group, int dir)
+ if (dir)
+ le16_add_cpu(&desc->bg_used_dirs_count, -1);
+ spin_unlock(sb_bgl_lock(EXT2_SB(sb), group));
++ percpu_counter_inc(&EXT2_SB(sb)->s_freeinodes_counter);
+ if (dir)
+ percpu_counter_dec(&EXT2_SB(sb)->s_dirs_counter);
+ mark_buffer_dirty(bh);
+@@ -528,7 +529,7 @@ got:
+ goto fail;
+ }
+
+- percpu_counter_add(&sbi->s_freeinodes_counter, -1);
++ percpu_counter_dec(&sbi->s_freeinodes_counter);
+ if (S_ISDIR(mode))
+ percpu_counter_inc(&sbi->s_dirs_counter);
+
+diff --git a/fs/f2fs/compress.c b/fs/f2fs/compress.c
+index 1e02a8c106b0a..f6fbe61b1251e 100644
+--- a/fs/f2fs/compress.c
++++ b/fs/f2fs/compress.c
+@@ -1353,6 +1353,8 @@ int f2fs_write_multi_pages(struct compress_ctx *cc,
+ err = f2fs_write_compressed_pages(cc, submitted,
+ wbc, io_type);
+ cops->destroy_compress_ctx(cc);
++ kfree(cc->cpages);
++ cc->cpages = NULL;
+ if (!err)
+ return 0;
+ f2fs_bug_on(F2FS_I_SB(cc->inode), err != -EAGAIN);
+diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
+index 326c63879ddc8..6e9017e6a8197 100644
+--- a/fs/f2fs/data.c
++++ b/fs/f2fs/data.c
+@@ -3432,6 +3432,10 @@ static int f2fs_write_end(struct file *file,
+ if (f2fs_compressed_file(inode) && fsdata) {
+ f2fs_compress_write_end(inode, fsdata, page->index, copied);
+ f2fs_update_time(F2FS_I_SB(inode), REQ_TIME);
++
++ if (pos + copied > i_size_read(inode) &&
++ !f2fs_verity_in_progress(inode))
++ f2fs_i_size_write(inode, pos + copied);
+ return copied;
+ }
+ #endif
+diff --git a/fs/gfs2/bmap.c b/fs/gfs2/bmap.c
+index 6306eaae378b2..6d2ea788d0a17 100644
+--- a/fs/gfs2/bmap.c
++++ b/fs/gfs2/bmap.c
+@@ -1351,9 +1351,15 @@ int gfs2_extent_map(struct inode *inode, u64 lblock, int *new, u64 *dblock, unsi
+ return ret;
+ }
+
++/*
++ * NOTE: Never call gfs2_block_zero_range with an open transaction because it
++ * uses iomap write to perform its actions, which begin their own transactions
++ * (iomap_begin, page_prepare, etc.)
++ */
+ static int gfs2_block_zero_range(struct inode *inode, loff_t from,
+ unsigned int length)
+ {
++ BUG_ON(current->journal_info);
+ return iomap_zero_range(inode, from, length, NULL, &gfs2_iomap_ops);
+ }
+
+@@ -1414,6 +1420,16 @@ static int trunc_start(struct inode *inode, u64 newsize)
+ u64 oldsize = inode->i_size;
+ int error;
+
++ if (!gfs2_is_stuffed(ip)) {
++ unsigned int blocksize = i_blocksize(inode);
++ unsigned int offs = newsize & (blocksize - 1);
++ if (offs) {
++ error = gfs2_block_zero_range(inode, newsize,
++ blocksize - offs);
++ if (error)
++ return error;
++ }
++ }
+ if (journaled)
+ error = gfs2_trans_begin(sdp, RES_DINODE + RES_JDATA, GFS2_JTRUNC_REVOKES);
+ else
+@@ -1427,19 +1443,10 @@ static int trunc_start(struct inode *inode, u64 newsize)
+
+ gfs2_trans_add_meta(ip->i_gl, dibh);
+
+- if (gfs2_is_stuffed(ip)) {
++ if (gfs2_is_stuffed(ip))
+ gfs2_buffer_clear_tail(dibh, sizeof(struct gfs2_dinode) + newsize);
+- } else {
+- unsigned int blocksize = i_blocksize(inode);
+- unsigned int offs = newsize & (blocksize - 1);
+- if (offs) {
+- error = gfs2_block_zero_range(inode, newsize,
+- blocksize - offs);
+- if (error)
+- goto out;
+- }
++ else
+ ip->i_diskflags |= GFS2_DIF_TRUNC_IN_PROG;
+- }
+
+ i_size_write(inode, newsize);
+ ip->i_inode.i_mtime = ip->i_inode.i_ctime = current_time(&ip->i_inode);
+@@ -2448,25 +2455,7 @@ int __gfs2_punch_hole(struct file *file, loff_t offset, loff_t length)
+ loff_t start, end;
+ int error;
+
+- start = round_down(offset, blocksize);
+- end = round_up(offset + length, blocksize) - 1;
+- error = filemap_write_and_wait_range(inode->i_mapping, start, end);
+- if (error)
+- return error;
+-
+- if (gfs2_is_jdata(ip))
+- error = gfs2_trans_begin(sdp, RES_DINODE + 2 * RES_JDATA,
+- GFS2_JTRUNC_REVOKES);
+- else
+- error = gfs2_trans_begin(sdp, RES_DINODE, 0);
+- if (error)
+- return error;
+-
+- if (gfs2_is_stuffed(ip)) {
+- error = stuffed_zero_range(inode, offset, length);
+- if (error)
+- goto out;
+- } else {
++ if (!gfs2_is_stuffed(ip)) {
+ unsigned int start_off, end_len;
+
+ start_off = offset & (blocksize - 1);
+@@ -2489,6 +2478,26 @@ int __gfs2_punch_hole(struct file *file, loff_t offset, loff_t length)
+ }
+ }
+
++ start = round_down(offset, blocksize);
++ end = round_up(offset + length, blocksize) - 1;
++ error = filemap_write_and_wait_range(inode->i_mapping, start, end);
++ if (error)
++ return error;
++
++ if (gfs2_is_jdata(ip))
++ error = gfs2_trans_begin(sdp, RES_DINODE + 2 * RES_JDATA,
++ GFS2_JTRUNC_REVOKES);
++ else
++ error = gfs2_trans_begin(sdp, RES_DINODE, 0);
++ if (error)
++ return error;
++
++ if (gfs2_is_stuffed(ip)) {
++ error = stuffed_zero_range(inode, offset, length);
++ if (error)
++ goto out;
++ }
++
+ if (gfs2_is_jdata(ip)) {
+ BUG_ON(!current->journal_info);
+ gfs2_journaled_truncate_range(inode, offset, length);
+diff --git a/fs/gfs2/glock.c b/fs/gfs2/glock.c
+index 8545024a1401f..f92876f4f37a1 100644
+--- a/fs/gfs2/glock.c
++++ b/fs/gfs2/glock.c
+@@ -790,9 +790,11 @@ static void gfs2_glock_poke(struct gfs2_glock *gl)
+ struct gfs2_holder gh;
+ int error;
+
+- error = gfs2_glock_nq_init(gl, LM_ST_SHARED, flags, &gh);
++ gfs2_holder_init(gl, LM_ST_SHARED, flags, &gh);
++ error = gfs2_glock_nq(&gh);
+ if (!error)
+ gfs2_glock_dq(&gh);
++ gfs2_holder_uninit(&gh);
+ }
+
+ static bool gfs2_try_evict(struct gfs2_glock *gl)
+diff --git a/fs/minix/inode.c b/fs/minix/inode.c
+index 0dd929346f3f3..7b09a9158e401 100644
+--- a/fs/minix/inode.c
++++ b/fs/minix/inode.c
+@@ -150,8 +150,10 @@ static int minix_remount (struct super_block * sb, int * flags, char * data)
+ return 0;
+ }
+
+-static bool minix_check_superblock(struct minix_sb_info *sbi)
++static bool minix_check_superblock(struct super_block *sb)
+ {
++ struct minix_sb_info *sbi = minix_sb(sb);
++
+ if (sbi->s_imap_blocks == 0 || sbi->s_zmap_blocks == 0)
+ return false;
+
+@@ -161,7 +163,7 @@ static bool minix_check_superblock(struct minix_sb_info *sbi)
+ * of indirect blocks which places the limit well above U32_MAX.
+ */
+ if (sbi->s_version == MINIX_V1 &&
+- sbi->s_max_size > (7 + 512 + 512*512) * BLOCK_SIZE)
++ sb->s_maxbytes > (7 + 512 + 512*512) * BLOCK_SIZE)
+ return false;
+
+ return true;
+@@ -202,7 +204,7 @@ static int minix_fill_super(struct super_block *s, void *data, int silent)
+ sbi->s_zmap_blocks = ms->s_zmap_blocks;
+ sbi->s_firstdatazone = ms->s_firstdatazone;
+ sbi->s_log_zone_size = ms->s_log_zone_size;
+- sbi->s_max_size = ms->s_max_size;
++ s->s_maxbytes = ms->s_max_size;
+ s->s_magic = ms->s_magic;
+ if (s->s_magic == MINIX_SUPER_MAGIC) {
+ sbi->s_version = MINIX_V1;
+@@ -233,7 +235,7 @@ static int minix_fill_super(struct super_block *s, void *data, int silent)
+ sbi->s_zmap_blocks = m3s->s_zmap_blocks;
+ sbi->s_firstdatazone = m3s->s_firstdatazone;
+ sbi->s_log_zone_size = m3s->s_log_zone_size;
+- sbi->s_max_size = m3s->s_max_size;
++ s->s_maxbytes = m3s->s_max_size;
+ sbi->s_ninodes = m3s->s_ninodes;
+ sbi->s_nzones = m3s->s_zones;
+ sbi->s_dirsize = 64;
+@@ -245,7 +247,7 @@ static int minix_fill_super(struct super_block *s, void *data, int silent)
+ } else
+ goto out_no_fs;
+
+- if (!minix_check_superblock(sbi))
++ if (!minix_check_superblock(s))
+ goto out_illegal_sb;
+
+ /*
+diff --git a/fs/minix/itree_v1.c b/fs/minix/itree_v1.c
+index 046cc96ee7adb..1fed906042aa8 100644
+--- a/fs/minix/itree_v1.c
++++ b/fs/minix/itree_v1.c
+@@ -29,12 +29,12 @@ static int block_to_path(struct inode * inode, long block, int offsets[DEPTH])
+ if (block < 0) {
+ printk("MINIX-fs: block_to_path: block %ld < 0 on dev %pg\n",
+ block, inode->i_sb->s_bdev);
+- } else if (block >= (minix_sb(inode->i_sb)->s_max_size/BLOCK_SIZE)) {
+- if (printk_ratelimit())
+- printk("MINIX-fs: block_to_path: "
+- "block %ld too big on dev %pg\n",
+- block, inode->i_sb->s_bdev);
+- } else if (block < 7) {
++ return 0;
++ }
++ if ((u64)block * BLOCK_SIZE >= inode->i_sb->s_maxbytes)
++ return 0;
++
++ if (block < 7) {
+ offsets[n++] = block;
+ } else if ((block -= 7) < 512) {
+ offsets[n++] = 7;
+diff --git a/fs/minix/itree_v2.c b/fs/minix/itree_v2.c
+index f7fc7eccccccd..9d00f31a2d9d1 100644
+--- a/fs/minix/itree_v2.c
++++ b/fs/minix/itree_v2.c
+@@ -32,13 +32,12 @@ static int block_to_path(struct inode * inode, long block, int offsets[DEPTH])
+ if (block < 0) {
+ printk("MINIX-fs: block_to_path: block %ld < 0 on dev %pg\n",
+ block, sb->s_bdev);
+- } else if ((u64)block * (u64)sb->s_blocksize >=
+- minix_sb(sb)->s_max_size) {
+- if (printk_ratelimit())
+- printk("MINIX-fs: block_to_path: "
+- "block %ld too big on dev %pg\n",
+- block, sb->s_bdev);
+- } else if (block < DIRCOUNT) {
++ return 0;
++ }
++ if ((u64)block * (u64)sb->s_blocksize >= sb->s_maxbytes)
++ return 0;
++
++ if (block < DIRCOUNT) {
+ offsets[n++] = block;
+ } else if ((block -= DIRCOUNT) < INDIRCOUNT(sb)) {
+ offsets[n++] = DIRCOUNT;
+diff --git a/fs/minix/minix.h b/fs/minix/minix.h
+index df081e8afcc3c..168d45d3de73e 100644
+--- a/fs/minix/minix.h
++++ b/fs/minix/minix.h
+@@ -32,7 +32,6 @@ struct minix_sb_info {
+ unsigned long s_zmap_blocks;
+ unsigned long s_firstdatazone;
+ unsigned long s_log_zone_size;
+- unsigned long s_max_size;
+ int s_dirsize;
+ int s_namelen;
+ struct buffer_head ** s_imap;
+diff --git a/fs/nfs/file.c b/fs/nfs/file.c
+index f96367a2463e3..63940a7a70be1 100644
+--- a/fs/nfs/file.c
++++ b/fs/nfs/file.c
+@@ -140,6 +140,7 @@ static int
+ nfs_file_flush(struct file *file, fl_owner_t id)
+ {
+ struct inode *inode = file_inode(file);
++ errseq_t since;
+
+ dprintk("NFS: flush(%pD2)\n", file);
+
+@@ -148,7 +149,9 @@ nfs_file_flush(struct file *file, fl_owner_t id)
+ return 0;
+
+ /* Flush writes to the server and return any errors */
+- return nfs_wb_all(inode);
++ since = filemap_sample_wb_err(file->f_mapping);
++ nfs_wb_all(inode);
++ return filemap_check_wb_err(file->f_mapping, since);
+ }
+
+ ssize_t
+@@ -587,12 +590,14 @@ static const struct vm_operations_struct nfs_file_vm_ops = {
+ .page_mkwrite = nfs_vm_page_mkwrite,
+ };
+
+-static int nfs_need_check_write(struct file *filp, struct inode *inode)
++static int nfs_need_check_write(struct file *filp, struct inode *inode,
++ int error)
+ {
+ struct nfs_open_context *ctx;
+
+ ctx = nfs_file_open_context(filp);
+- if (nfs_ctx_key_to_expire(ctx, inode))
++ if (nfs_error_is_fatal_on_server(error) ||
++ nfs_ctx_key_to_expire(ctx, inode))
+ return 1;
+ return 0;
+ }
+@@ -603,6 +608,8 @@ ssize_t nfs_file_write(struct kiocb *iocb, struct iov_iter *from)
+ struct inode *inode = file_inode(file);
+ unsigned long written = 0;
+ ssize_t result;
++ errseq_t since;
++ int error;
+
+ result = nfs_key_timeout_notify(file, inode);
+ if (result)
+@@ -627,6 +634,7 @@ ssize_t nfs_file_write(struct kiocb *iocb, struct iov_iter *from)
+ if (iocb->ki_pos > i_size_read(inode))
+ nfs_revalidate_mapping(inode, file->f_mapping);
+
++ since = filemap_sample_wb_err(file->f_mapping);
+ nfs_start_io_write(inode);
+ result = generic_write_checks(iocb, from);
+ if (result > 0) {
+@@ -645,7 +653,8 @@ ssize_t nfs_file_write(struct kiocb *iocb, struct iov_iter *from)
+ goto out;
+
+ /* Return error values */
+- if (nfs_need_check_write(file, inode)) {
++ error = filemap_check_wb_err(file->f_mapping, since);
++ if (nfs_need_check_write(file, inode, error)) {
+ int err = nfs_wb_all(inode);
+ if (err < 0)
+ result = err;
+diff --git a/fs/nfs/flexfilelayout/flexfilelayout.c b/fs/nfs/flexfilelayout/flexfilelayout.c
+index de03e440b7eef..048272d60a165 100644
+--- a/fs/nfs/flexfilelayout/flexfilelayout.c
++++ b/fs/nfs/flexfilelayout/flexfilelayout.c
+@@ -790,6 +790,19 @@ ff_layout_choose_best_ds_for_read(struct pnfs_layout_segment *lseg,
+ return ff_layout_choose_any_ds_for_read(lseg, start_idx, best_idx);
+ }
+
++static struct nfs4_pnfs_ds *
++ff_layout_get_ds_for_read(struct nfs_pageio_descriptor *pgio, int *best_idx)
++{
++ struct pnfs_layout_segment *lseg = pgio->pg_lseg;
++ struct nfs4_pnfs_ds *ds;
++
++ ds = ff_layout_choose_best_ds_for_read(lseg, pgio->pg_mirror_idx,
++ best_idx);
++ if (ds || !pgio->pg_mirror_idx)
++ return ds;
++ return ff_layout_choose_best_ds_for_read(lseg, 0, best_idx);
++}
++
+ static void
+ ff_layout_pg_get_read(struct nfs_pageio_descriptor *pgio,
+ struct nfs_page *req,
+@@ -840,7 +853,7 @@ retry:
+ goto out_nolseg;
+ }
+
+- ds = ff_layout_choose_best_ds_for_read(pgio->pg_lseg, 0, &ds_idx);
++ ds = ff_layout_get_ds_for_read(pgio, &ds_idx);
+ if (!ds) {
+ if (!ff_layout_no_fallback_to_mds(pgio->pg_lseg))
+ goto out_mds;
+@@ -1028,11 +1041,24 @@ static void ff_layout_reset_write(struct nfs_pgio_header *hdr, bool retry_pnfs)
+ }
+ }
+
++static void ff_layout_resend_pnfs_read(struct nfs_pgio_header *hdr)
++{
++ u32 idx = hdr->pgio_mirror_idx + 1;
++ int new_idx = 0;
++
++ if (ff_layout_choose_any_ds_for_read(hdr->lseg, idx + 1, &new_idx))
++ ff_layout_send_layouterror(hdr->lseg);
++ else
++ pnfs_error_mark_layout_for_return(hdr->inode, hdr->lseg);
++ pnfs_read_resend_pnfs(hdr, new_idx);
++}
++
+ static void ff_layout_reset_read(struct nfs_pgio_header *hdr)
+ {
+ struct rpc_task *task = &hdr->task;
+
+ pnfs_layoutcommit_inode(hdr->inode, false);
++ pnfs_error_mark_layout_for_return(hdr->inode, hdr->lseg);
+
+ if (!test_and_set_bit(NFS_IOHDR_REDO, &hdr->flags)) {
+ dprintk("%s Reset task %5u for i/o through MDS "
+@@ -1234,6 +1260,12 @@ static void ff_layout_io_track_ds_error(struct pnfs_layout_segment *lseg,
+ break;
+ case NFS4ERR_NXIO:
+ ff_layout_mark_ds_unreachable(lseg, idx);
++ /*
++ * Don't return the layout if this is a read and we still
++ * have layouts to try
++ */
++ if (opnum == OP_READ)
++ break;
+ /* Fallthrough */
+ default:
+ pnfs_error_mark_layout_for_return(lseg->pls_layout->plh_inode,
+@@ -1247,7 +1279,6 @@ static void ff_layout_io_track_ds_error(struct pnfs_layout_segment *lseg,
+ static int ff_layout_read_done_cb(struct rpc_task *task,
+ struct nfs_pgio_header *hdr)
+ {
+- int new_idx = hdr->pgio_mirror_idx;
+ int err;
+
+ if (task->tk_status < 0) {
+@@ -1267,10 +1298,6 @@ static int ff_layout_read_done_cb(struct rpc_task *task,
+ clear_bit(NFS_IOHDR_RESEND_MDS, &hdr->flags);
+ switch (err) {
+ case -NFS4ERR_RESET_TO_PNFS:
+- if (ff_layout_choose_best_ds_for_read(hdr->lseg,
+- hdr->pgio_mirror_idx + 1,
+- &new_idx))
+- goto out_layouterror;
+ set_bit(NFS_IOHDR_RESEND_PNFS, &hdr->flags);
+ return task->tk_status;
+ case -NFS4ERR_RESET_TO_MDS:
+@@ -1281,10 +1308,6 @@ static int ff_layout_read_done_cb(struct rpc_task *task,
+ }
+
+ return 0;
+-out_layouterror:
+- ff_layout_read_record_layoutstats_done(task, hdr);
+- ff_layout_send_layouterror(hdr->lseg);
+- hdr->pgio_mirror_idx = new_idx;
+ out_eagain:
+ rpc_restart_call_prepare(task);
+ return -EAGAIN;
+@@ -1411,10 +1434,9 @@ static void ff_layout_read_release(void *data)
+ struct nfs_pgio_header *hdr = data;
+
+ ff_layout_read_record_layoutstats_done(&hdr->task, hdr);
+- if (test_bit(NFS_IOHDR_RESEND_PNFS, &hdr->flags)) {
+- ff_layout_send_layouterror(hdr->lseg);
+- pnfs_read_resend_pnfs(hdr);
+- } else if (test_bit(NFS_IOHDR_RESEND_MDS, &hdr->flags))
++ if (test_bit(NFS_IOHDR_RESEND_PNFS, &hdr->flags))
++ ff_layout_resend_pnfs_read(hdr);
++ else if (test_bit(NFS_IOHDR_RESEND_MDS, &hdr->flags))
+ ff_layout_reset_read(hdr);
+ pnfs_generic_rw_release(data);
+ }
+diff --git a/fs/nfs/nfs4file.c b/fs/nfs/nfs4file.c
+index 8e5d6223ddd35..a339707654673 100644
+--- a/fs/nfs/nfs4file.c
++++ b/fs/nfs/nfs4file.c
+@@ -110,6 +110,7 @@ static int
+ nfs4_file_flush(struct file *file, fl_owner_t id)
+ {
+ struct inode *inode = file_inode(file);
++ errseq_t since;
+
+ dprintk("NFS: flush(%pD2)\n", file);
+
+@@ -125,7 +126,9 @@ nfs4_file_flush(struct file *file, fl_owner_t id)
+ return filemap_fdatawrite(file->f_mapping);
+
+ /* Flush writes to the server and return any errors */
+- return nfs_wb_all(inode);
++ since = filemap_sample_wb_err(file->f_mapping);
++ nfs_wb_all(inode);
++ return filemap_check_wb_err(file->f_mapping, since);
+ }
+
+ #ifdef CONFIG_NFS_V4_2
+diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
+index 2e2dac29a9e91..45e0585e0667c 100644
+--- a/fs/nfs/nfs4proc.c
++++ b/fs/nfs/nfs4proc.c
+@@ -5845,8 +5845,6 @@ static int _nfs4_get_security_label(struct inode *inode, void *buf,
+ return ret;
+ if (!(fattr.valid & NFS_ATTR_FATTR_V4_SECURITY_LABEL))
+ return -ENOENT;
+- if (buflen < label.len)
+- return -ERANGE;
+ return 0;
+ }
+
+diff --git a/fs/nfs/nfs4xdr.c b/fs/nfs/nfs4xdr.c
+index 47817ef0aadb1..4e0d8a3b89b67 100644
+--- a/fs/nfs/nfs4xdr.c
++++ b/fs/nfs/nfs4xdr.c
+@@ -4166,7 +4166,11 @@ static int decode_attr_security_label(struct xdr_stream *xdr, uint32_t *bitmap,
+ return -EIO;
+ if (len < NFS4_MAXLABELLEN) {
+ if (label) {
+- memcpy(label->label, p, len);
++ if (label->len) {
++ if (label->len < len)
++ return -ERANGE;
++ memcpy(label->label, p, len);
++ }
+ label->len = len;
+ label->pi = pi;
+ label->lfs = lfs;
+diff --git a/fs/nfs/pnfs.c b/fs/nfs/pnfs.c
+index d61dac48dff50..75e988caf3cd7 100644
+--- a/fs/nfs/pnfs.c
++++ b/fs/nfs/pnfs.c
+@@ -2939,7 +2939,8 @@ pnfs_try_to_read_data(struct nfs_pgio_header *hdr,
+ }
+
+ /* Resend all requests through pnfs. */
+-void pnfs_read_resend_pnfs(struct nfs_pgio_header *hdr)
++void pnfs_read_resend_pnfs(struct nfs_pgio_header *hdr,
++ unsigned int mirror_idx)
+ {
+ struct nfs_pageio_descriptor pgio;
+
+@@ -2950,6 +2951,7 @@ void pnfs_read_resend_pnfs(struct nfs_pgio_header *hdr)
+
+ nfs_pageio_init_read(&pgio, hdr->inode, false,
+ hdr->completion_ops);
++ pgio.pg_mirror_idx = mirror_idx;
+ hdr->task.tk_status = nfs_pageio_resend(&pgio, hdr);
+ }
+ }
+diff --git a/fs/nfs/pnfs.h b/fs/nfs/pnfs.h
+index 8e0ada581b92e..2661c44c62db4 100644
+--- a/fs/nfs/pnfs.h
++++ b/fs/nfs/pnfs.h
+@@ -311,7 +311,7 @@ int _pnfs_return_layout(struct inode *);
+ int pnfs_commit_and_return_layout(struct inode *);
+ void pnfs_ld_write_done(struct nfs_pgio_header *);
+ void pnfs_ld_read_done(struct nfs_pgio_header *);
+-void pnfs_read_resend_pnfs(struct nfs_pgio_header *);
++void pnfs_read_resend_pnfs(struct nfs_pgio_header *, unsigned int mirror_idx);
+ struct pnfs_layout_segment *pnfs_update_layout(struct inode *ino,
+ struct nfs_open_context *ctx,
+ loff_t pos,
+diff --git a/fs/ocfs2/ocfs2.h b/fs/ocfs2/ocfs2.h
+index 2dd71d626196d..7993d527edae9 100644
+--- a/fs/ocfs2/ocfs2.h
++++ b/fs/ocfs2/ocfs2.h
+@@ -327,8 +327,8 @@ struct ocfs2_super
+ spinlock_t osb_lock;
+ u32 s_next_generation;
+ unsigned long osb_flags;
+- s16 s_inode_steal_slot;
+- s16 s_meta_steal_slot;
++ u16 s_inode_steal_slot;
++ u16 s_meta_steal_slot;
+ atomic_t s_num_inodes_stolen;
+ atomic_t s_num_meta_stolen;
+
+diff --git a/fs/ocfs2/suballoc.c b/fs/ocfs2/suballoc.c
+index 45745cc3408a5..8c8cf7f4eb34e 100644
+--- a/fs/ocfs2/suballoc.c
++++ b/fs/ocfs2/suballoc.c
+@@ -879,9 +879,9 @@ static void __ocfs2_set_steal_slot(struct ocfs2_super *osb, int slot, int type)
+ {
+ spin_lock(&osb->osb_lock);
+ if (type == INODE_ALLOC_SYSTEM_INODE)
+- osb->s_inode_steal_slot = slot;
++ osb->s_inode_steal_slot = (u16)slot;
+ else if (type == EXTENT_ALLOC_SYSTEM_INODE)
+- osb->s_meta_steal_slot = slot;
++ osb->s_meta_steal_slot = (u16)slot;
+ spin_unlock(&osb->osb_lock);
+ }
+
+diff --git a/fs/ocfs2/super.c b/fs/ocfs2/super.c
+index 71ea9ce71a6b8..1d91dd1e8711c 100644
+--- a/fs/ocfs2/super.c
++++ b/fs/ocfs2/super.c
+@@ -78,7 +78,7 @@ struct mount_options
+ unsigned long commit_interval;
+ unsigned long mount_opt;
+ unsigned int atime_quantum;
+- signed short slot;
++ unsigned short slot;
+ int localalloc_opt;
+ unsigned int resv_level;
+ int dir_resv_level;
+@@ -1349,7 +1349,7 @@ static int ocfs2_parse_options(struct super_block *sb,
+ goto bail;
+ }
+ if (option)
+- mopt->slot = (s16)option;
++ mopt->slot = (u16)option;
+ break;
+ case Opt_commit:
+ if (match_int(&args[0], &option)) {
+diff --git a/fs/ubifs/journal.c b/fs/ubifs/journal.c
+index e5ec1afe1c668..2cf05f87565c2 100644
+--- a/fs/ubifs/journal.c
++++ b/fs/ubifs/journal.c
+@@ -539,7 +539,7 @@ int ubifs_jnl_update(struct ubifs_info *c, const struct inode *dir,
+ const struct fscrypt_name *nm, const struct inode *inode,
+ int deletion, int xent)
+ {
+- int err, dlen, ilen, len, lnum, ino_offs, dent_offs;
++ int err, dlen, ilen, len, lnum, ino_offs, dent_offs, orphan_added = 0;
+ int aligned_dlen, aligned_ilen, sync = IS_DIRSYNC(dir);
+ int last_reference = !!(deletion && inode->i_nlink == 0);
+ struct ubifs_inode *ui = ubifs_inode(inode);
+@@ -630,6 +630,7 @@ int ubifs_jnl_update(struct ubifs_info *c, const struct inode *dir,
+ goto out_finish;
+ }
+ ui->del_cmtno = c->cmt_no;
++ orphan_added = 1;
+ }
+
+ err = write_head(c, BASEHD, dent, len, &lnum, &dent_offs, sync);
+@@ -702,7 +703,7 @@ out_release:
+ kfree(dent);
+ out_ro:
+ ubifs_ro_mode(c, err);
+- if (last_reference)
++ if (orphan_added)
+ ubifs_delete_orphan(c, inode->i_ino);
+ finish_reservation(c);
+ return err;
+@@ -1218,7 +1219,7 @@ int ubifs_jnl_rename(struct ubifs_info *c, const struct inode *old_dir,
+ void *p;
+ union ubifs_key key;
+ struct ubifs_dent_node *dent, *dent2;
+- int err, dlen1, dlen2, ilen, lnum, offs, len;
++ int err, dlen1, dlen2, ilen, lnum, offs, len, orphan_added = 0;
+ int aligned_dlen1, aligned_dlen2, plen = UBIFS_INO_NODE_SZ;
+ int last_reference = !!(new_inode && new_inode->i_nlink == 0);
+ int move = (old_dir != new_dir);
+@@ -1334,6 +1335,7 @@ int ubifs_jnl_rename(struct ubifs_info *c, const struct inode *old_dir,
+ goto out_finish;
+ }
+ new_ui->del_cmtno = c->cmt_no;
++ orphan_added = 1;
+ }
+
+ err = write_head(c, BASEHD, dent, len, &lnum, &offs, sync);
+@@ -1415,7 +1417,7 @@ out_release:
+ release_head(c, BASEHD);
+ out_ro:
+ ubifs_ro_mode(c, err);
+- if (last_reference)
++ if (orphan_added)
+ ubifs_delete_orphan(c, new_inode->i_ino);
+ out_finish:
+ finish_reservation(c);
+diff --git a/fs/ufs/super.c b/fs/ufs/super.c
+index 1da0be667409b..e3b69fb280e8c 100644
+--- a/fs/ufs/super.c
++++ b/fs/ufs/super.c
+@@ -101,7 +101,7 @@ static struct inode *ufs_nfs_get_inode(struct super_block *sb, u64 ino, u32 gene
+ struct ufs_sb_private_info *uspi = UFS_SB(sb)->s_uspi;
+ struct inode *inode;
+
+- if (ino < UFS_ROOTINO || ino > uspi->s_ncg * uspi->s_ipg)
++ if (ino < UFS_ROOTINO || ino > (u64)uspi->s_ncg * uspi->s_ipg)
+ return ERR_PTR(-ESTALE);
+
+ inode = ufs_iget(sb, ino);
+diff --git a/include/crypto/if_alg.h b/include/crypto/if_alg.h
+index 088c1ded27148..ee6412314f8f3 100644
+--- a/include/crypto/if_alg.h
++++ b/include/crypto/if_alg.h
+@@ -135,6 +135,7 @@ struct af_alg_async_req {
+ * SG?
+ * @enc: Cryptographic operation to be performed when
+ * recvmsg is invoked.
++ * @init: True if metadata has been sent.
+ * @len: Length of memory allocated for this data structure.
+ */
+ struct af_alg_ctx {
+@@ -151,6 +152,7 @@ struct af_alg_ctx {
+ bool more;
+ bool merge;
+ bool enc;
++ bool init;
+
+ unsigned int len;
+ };
+@@ -226,7 +228,7 @@ unsigned int af_alg_count_tsgl(struct sock *sk, size_t bytes, size_t offset);
+ void af_alg_pull_tsgl(struct sock *sk, size_t used, struct scatterlist *dst,
+ size_t dst_offset);
+ void af_alg_wmem_wakeup(struct sock *sk);
+-int af_alg_wait_for_data(struct sock *sk, unsigned flags);
++int af_alg_wait_for_data(struct sock *sk, unsigned flags, unsigned min);
+ int af_alg_sendmsg(struct socket *sock, struct msghdr *msg, size_t size,
+ unsigned int ivsize);
+ ssize_t af_alg_sendpage(struct socket *sock, struct page *page,
+diff --git a/include/linux/fs.h b/include/linux/fs.h
+index f5abba86107d8..2dab217c6047f 100644
+--- a/include/linux/fs.h
++++ b/include/linux/fs.h
+@@ -549,6 +549,16 @@ static inline void i_mmap_unlock_read(struct address_space *mapping)
+ up_read(&mapping->i_mmap_rwsem);
+ }
+
++static inline void i_mmap_assert_locked(struct address_space *mapping)
++{
++ lockdep_assert_held(&mapping->i_mmap_rwsem);
++}
++
++static inline void i_mmap_assert_write_locked(struct address_space *mapping)
++{
++ lockdep_assert_held_write(&mapping->i_mmap_rwsem);
++}
++
+ /*
+ * Might pages of this file be mapped into userspace?
+ */
+diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
+index 50650d0d01b9e..a520bf26e5d8e 100644
+--- a/include/linux/hugetlb.h
++++ b/include/linux/hugetlb.h
+@@ -164,7 +164,8 @@ pte_t *huge_pte_alloc(struct mm_struct *mm,
+ unsigned long addr, unsigned long sz);
+ pte_t *huge_pte_offset(struct mm_struct *mm,
+ unsigned long addr, unsigned long sz);
+-int huge_pmd_unshare(struct mm_struct *mm, unsigned long *addr, pte_t *ptep);
++int huge_pmd_unshare(struct mm_struct *mm, struct vm_area_struct *vma,
++ unsigned long *addr, pte_t *ptep);
+ void adjust_range_if_pmd_sharing_possible(struct vm_area_struct *vma,
+ unsigned long *start, unsigned long *end);
+ struct page *follow_huge_addr(struct mm_struct *mm, unsigned long address,
+@@ -203,8 +204,9 @@ static inline struct address_space *hugetlb_page_mapping_lock_write(
+ return NULL;
+ }
+
+-static inline int huge_pmd_unshare(struct mm_struct *mm, unsigned long *addr,
+- pte_t *ptep)
++static inline int huge_pmd_unshare(struct mm_struct *mm,
++ struct vm_area_struct *vma,
++ unsigned long *addr, pte_t *ptep)
+ {
+ return 0;
+ }
+diff --git a/include/linux/intel-iommu.h b/include/linux/intel-iommu.h
+index 04bd9279c3fb3..711bdca975be3 100644
+--- a/include/linux/intel-iommu.h
++++ b/include/linux/intel-iommu.h
+@@ -381,8 +381,8 @@ enum {
+
+ #define QI_DEV_EIOTLB_ADDR(a) ((u64)(a) & VTD_PAGE_MASK)
+ #define QI_DEV_EIOTLB_SIZE (((u64)1) << 11)
+-#define QI_DEV_EIOTLB_GLOB(g) ((u64)g)
+-#define QI_DEV_EIOTLB_PASID(p) (((u64)p) << 32)
++#define QI_DEV_EIOTLB_GLOB(g) ((u64)(g) & 0x1)
++#define QI_DEV_EIOTLB_PASID(p) ((u64)((p) & 0xfffff) << 32)
+ #define QI_DEV_EIOTLB_SID(sid) ((u64)((sid) & 0xffff) << 16)
+ #define QI_DEV_EIOTLB_QDEP(qd) ((u64)((qd) & 0x1f) << 4)
+ #define QI_DEV_EIOTLB_PFSID(pfsid) (((u64)(pfsid & 0xf) << 12) | \
+diff --git a/include/linux/irq.h b/include/linux/irq.h
+index 8d5bc2c237d74..1b7f4dfee35b3 100644
+--- a/include/linux/irq.h
++++ b/include/linux/irq.h
+@@ -213,6 +213,8 @@ struct irq_data {
+ * required
+ * IRQD_HANDLE_ENFORCE_IRQCTX - Enforce that handle_irq_*() is only invoked
+ * from actual interrupt context.
++ * IRQD_AFFINITY_ON_ACTIVATE - Affinity is set on activation. Don't call
++ * irq_chip::irq_set_affinity() when deactivated.
+ */
+ enum {
+ IRQD_TRIGGER_MASK = 0xf,
+@@ -237,6 +239,7 @@ enum {
+ IRQD_CAN_RESERVE = (1 << 26),
+ IRQD_MSI_NOMASK_QUIRK = (1 << 27),
+ IRQD_HANDLE_ENFORCE_IRQCTX = (1 << 28),
++ IRQD_AFFINITY_ON_ACTIVATE = (1 << 29),
+ };
+
+ #define __irqd_to_state(d) ACCESS_PRIVATE((d)->common, state_use_accessors)
+@@ -421,6 +424,16 @@ static inline bool irqd_msi_nomask_quirk(struct irq_data *d)
+ return __irqd_to_state(d) & IRQD_MSI_NOMASK_QUIRK;
+ }
+
++static inline void irqd_set_affinity_on_activate(struct irq_data *d)
++{
++ __irqd_to_state(d) |= IRQD_AFFINITY_ON_ACTIVATE;
++}
++
++static inline bool irqd_affinity_on_activate(struct irq_data *d)
++{
++ return __irqd_to_state(d) & IRQD_AFFINITY_ON_ACTIVATE;
++}
++
+ #undef __irqd_to_state
+
+ static inline irq_hw_number_t irqd_to_hwirq(struct irq_data *d)
+diff --git a/include/linux/libnvdimm.h b/include/linux/libnvdimm.h
+index 18da4059be09a..bd39a2cf7972b 100644
+--- a/include/linux/libnvdimm.h
++++ b/include/linux/libnvdimm.h
+@@ -78,6 +78,8 @@ struct nvdimm_bus_descriptor {
+ const struct attribute_group **attr_groups;
+ unsigned long bus_dsm_mask;
+ unsigned long cmd_mask;
++ unsigned long dimm_family_mask;
++ unsigned long bus_family_mask;
+ struct module *module;
+ char *provider_name;
+ struct device_node *of_node;
+diff --git a/include/linux/pci-ats.h b/include/linux/pci-ats.h
+index f75c307f346de..df54cd5b15db0 100644
+--- a/include/linux/pci-ats.h
++++ b/include/linux/pci-ats.h
+@@ -28,6 +28,10 @@ int pci_enable_pri(struct pci_dev *pdev, u32 reqs);
+ void pci_disable_pri(struct pci_dev *pdev);
+ int pci_reset_pri(struct pci_dev *pdev);
+ int pci_prg_resp_pasid_required(struct pci_dev *pdev);
++bool pci_pri_supported(struct pci_dev *pdev);
++#else
++static inline bool pci_pri_supported(struct pci_dev *pdev)
++{ return false; }
+ #endif /* CONFIG_PCI_PRI */
+
+ #ifdef CONFIG_PCI_PASID
+diff --git a/include/net/sock.h b/include/net/sock.h
+index 1183507df95bf..d05a2c3ed3a6b 100644
+--- a/include/net/sock.h
++++ b/include/net/sock.h
+@@ -891,6 +891,8 @@ static inline int sk_memalloc_socks(void)
+ {
+ return static_branch_unlikely(&memalloc_socks_key);
+ }
++
++void __receive_sock(struct file *file);
+ #else
+
+ static inline int sk_memalloc_socks(void)
+@@ -898,6 +900,8 @@ static inline int sk_memalloc_socks(void)
+ return 0;
+ }
+
++static inline void __receive_sock(struct file *file)
++{ }
+ #endif
+
+ static inline gfp_t sk_gfp_mask(const struct sock *sk, gfp_t gfp_mask)
+diff --git a/include/uapi/linux/btrfs.h b/include/uapi/linux/btrfs.h
+index e6b6cb0f8bc6a..24f6848ad78ec 100644
+--- a/include/uapi/linux/btrfs.h
++++ b/include/uapi/linux/btrfs.h
+@@ -243,6 +243,13 @@ struct btrfs_ioctl_dev_info_args {
+ __u8 path[BTRFS_DEVICE_PATH_NAME_MAX]; /* out */
+ };
+
++/*
++ * Retrieve information about the filesystem
++ */
++
++/* Request information about checksum type and size */
++#define BTRFS_FS_INFO_FLAG_CSUM_INFO (1 << 0)
++
+ struct btrfs_ioctl_fs_info_args {
+ __u64 max_id; /* out */
+ __u64 num_devices; /* out */
+@@ -250,8 +257,11 @@ struct btrfs_ioctl_fs_info_args {
+ __u32 nodesize; /* out */
+ __u32 sectorsize; /* out */
+ __u32 clone_alignment; /* out */
+- __u32 reserved32;
+- __u64 reserved[122]; /* pad to 1k */
++ /* See BTRFS_FS_INFO_FLAG_* */
++ __u16 csum_type; /* out */
++ __u16 csum_size; /* out */
++ __u64 flags; /* in/out */
++ __u8 reserved[968]; /* pad to 1k */
+ };
+
+ /*
+diff --git a/include/uapi/linux/ndctl.h b/include/uapi/linux/ndctl.h
+index 0e09dc5cec192..e9468b9332bd5 100644
+--- a/include/uapi/linux/ndctl.h
++++ b/include/uapi/linux/ndctl.h
+@@ -245,6 +245,10 @@ struct nd_cmd_pkg {
+ #define NVDIMM_FAMILY_MSFT 3
+ #define NVDIMM_FAMILY_HYPERV 4
+ #define NVDIMM_FAMILY_PAPR 5
++#define NVDIMM_FAMILY_MAX NVDIMM_FAMILY_PAPR
++
++#define NVDIMM_BUS_FAMILY_NFIT 0
++#define NVDIMM_BUS_FAMILY_MAX NVDIMM_BUS_FAMILY_NFIT
+
+ #define ND_IOCTL_CALL _IOWR(ND_IOCTL, ND_CMD_CALL,\
+ struct nd_cmd_pkg)
+diff --git a/init/main.c b/init/main.c
+index 0ead83e86b5aa..883ded3638e59 100644
+--- a/init/main.c
++++ b/init/main.c
+@@ -387,8 +387,6 @@ static int __init bootconfig_params(char *param, char *val,
+ {
+ if (strcmp(param, "bootconfig") == 0) {
+ bootconfig_found = true;
+- } else if (strcmp(param, "--") == 0) {
+- initargs_found = true;
+ }
+ return 0;
+ }
+@@ -399,19 +397,23 @@ static void __init setup_boot_config(const char *cmdline)
+ const char *msg;
+ int pos;
+ u32 size, csum;
+- char *data, *copy;
++ char *data, *copy, *err;
+ int ret;
+
+ /* Cut out the bootconfig data even if we have no bootconfig option */
+ data = get_boot_config_from_initrd(&size, &csum);
+
+ strlcpy(tmp_cmdline, boot_command_line, COMMAND_LINE_SIZE);
+- parse_args("bootconfig", tmp_cmdline, NULL, 0, 0, 0, NULL,
+- bootconfig_params);
++ err = parse_args("bootconfig", tmp_cmdline, NULL, 0, 0, 0, NULL,
++ bootconfig_params);
+
+- if (!bootconfig_found)
++ if (IS_ERR(err) || !bootconfig_found)
+ return;
+
++ /* parse_args() stops at '--' and returns an address */
++ if (err)
++ initargs_found = true;
++
+ if (!data) {
+ pr_err("'bootconfig' found on command line, but no bootconfig found\n");
+ return;
+diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c
+index 2a9fec53e1591..e68a8f9931065 100644
+--- a/kernel/irq/manage.c
++++ b/kernel/irq/manage.c
+@@ -320,12 +320,16 @@ static bool irq_set_affinity_deactivated(struct irq_data *data,
+ struct irq_desc *desc = irq_data_to_desc(data);
+
+ /*
++ * Handle irq chips which can handle affinity only in activated
++ * state correctly
++ *
+ * If the interrupt is not yet activated, just store the affinity
+ * mask and do not call the chip driver at all. On activation the
+ * driver has to make sure anyway that the interrupt is in a
+ * useable state so startup works.
+ */
+- if (!IS_ENABLED(CONFIG_IRQ_DOMAIN_HIERARCHY) || irqd_is_activated(data))
++ if (!IS_ENABLED(CONFIG_IRQ_DOMAIN_HIERARCHY) ||
++ irqd_is_activated(data) || !irqd_affinity_on_activate(data))
+ return false;
+
+ cpumask_copy(desc->irq_common_data.affinity, mask);
+@@ -2731,8 +2735,10 @@ int irq_set_irqchip_state(unsigned int irq, enum irqchip_irq_state which,
+
+ do {
+ chip = irq_data_get_irq_chip(data);
+- if (WARN_ON_ONCE(!chip))
+- return -ENODEV;
++ if (WARN_ON_ONCE(!chip)) {
++ err = -ENODEV;
++ goto out_unlock;
++ }
+ if (chip->irq_set_irqchip_state)
+ break;
+ #ifdef CONFIG_IRQ_DOMAIN_HIERARCHY
+@@ -2745,6 +2751,7 @@ int irq_set_irqchip_state(unsigned int irq, enum irqchip_irq_state which,
+ if (data)
+ err = chip->irq_set_irqchip_state(data, which, val);
+
++out_unlock:
+ irq_put_desc_busunlock(desc, flags);
+ return err;
+ }
+diff --git a/kernel/irq/pm.c b/kernel/irq/pm.c
+index 8f557fa1f4fe4..c6c7e187ae748 100644
+--- a/kernel/irq/pm.c
++++ b/kernel/irq/pm.c
+@@ -185,14 +185,18 @@ void rearm_wake_irq(unsigned int irq)
+ unsigned long flags;
+ struct irq_desc *desc = irq_get_desc_buslock(irq, &flags, IRQ_GET_DESC_CHECK_GLOBAL);
+
+- if (!desc || !(desc->istate & IRQS_SUSPENDED) ||
+- !irqd_is_wakeup_set(&desc->irq_data))
++ if (!desc)
+ return;
+
++ if (!(desc->istate & IRQS_SUSPENDED) ||
++ !irqd_is_wakeup_set(&desc->irq_data))
++ goto unlock;
++
+ desc->istate &= ~IRQS_SUSPENDED;
+ irqd_set(&desc->irq_data, IRQD_WAKEUP_ARMED);
+ __enable_irq(desc);
+
++unlock:
+ irq_put_desc_busunlock(desc, flags);
+ }
+
+diff --git a/kernel/kprobes.c b/kernel/kprobes.c
+index 2e97febeef77d..72af5d37e9ff1 100644
+--- a/kernel/kprobes.c
++++ b/kernel/kprobes.c
+@@ -1079,9 +1079,20 @@ static int disarm_kprobe_ftrace(struct kprobe *p)
+ ipmodify ? &kprobe_ipmodify_enabled : &kprobe_ftrace_enabled);
+ }
+ #else /* !CONFIG_KPROBES_ON_FTRACE */
+-#define prepare_kprobe(p) arch_prepare_kprobe(p)
+-#define arm_kprobe_ftrace(p) (-ENODEV)
+-#define disarm_kprobe_ftrace(p) (-ENODEV)
++static inline int prepare_kprobe(struct kprobe *p)
++{
++ return arch_prepare_kprobe(p);
++}
++
++static inline int arm_kprobe_ftrace(struct kprobe *p)
++{
++ return -ENODEV;
++}
++
++static inline int disarm_kprobe_ftrace(struct kprobe *p)
++{
++ return -ENODEV;
++}
+ #endif
+
+ /* Arm a kprobe with text_mutex */
+@@ -2113,6 +2124,13 @@ static void kill_kprobe(struct kprobe *p)
+ * the original probed function (which will be freed soon) any more.
+ */
+ arch_remove_kprobe(p);
++
++ /*
++ * The module is going away. We should disarm the kprobe which
++ * is using ftrace.
++ */
++ if (kprobe_ftrace(p))
++ disarm_kprobe_ftrace(p);
+ }
+
+ /* Disable one kprobe */
+diff --git a/kernel/kthread.c b/kernel/kthread.c
+index 132f84a5fde3f..f481ab35de2f9 100644
+--- a/kernel/kthread.c
++++ b/kernel/kthread.c
+@@ -1239,13 +1239,16 @@ void kthread_use_mm(struct mm_struct *mm)
+ WARN_ON_ONCE(tsk->mm);
+
+ task_lock(tsk);
++ /* Hold off tlb flush IPIs while switching mm's */
++ local_irq_disable();
+ active_mm = tsk->active_mm;
+ if (active_mm != mm) {
+ mmgrab(mm);
+ tsk->active_mm = mm;
+ }
+ tsk->mm = mm;
+- switch_mm(active_mm, mm, tsk);
++ switch_mm_irqs_off(active_mm, mm, tsk);
++ local_irq_enable();
+ task_unlock(tsk);
+ #ifdef finish_arch_post_lock_switch
+ finish_arch_post_lock_switch();
+@@ -1274,9 +1277,11 @@ void kthread_unuse_mm(struct mm_struct *mm)
+
+ task_lock(tsk);
+ sync_mm_rss(mm);
++ local_irq_disable();
+ tsk->mm = NULL;
+ /* active_mm is still 'mm' */
+ enter_lazy_tlb(mm, tsk);
++ local_irq_enable();
+ task_unlock(tsk);
+ }
+ EXPORT_SYMBOL_GPL(kthread_unuse_mm);
+diff --git a/kernel/module.c b/kernel/module.c
+index aa183c9ac0a25..08c46084d8cca 100644
+--- a/kernel/module.c
++++ b/kernel/module.c
+@@ -1520,18 +1520,34 @@ struct module_sect_attrs {
+ struct module_sect_attr attrs[];
+ };
+
++#define MODULE_SECT_READ_SIZE (3 /* "0x", "\n" */ + (BITS_PER_LONG / 4))
+ static ssize_t module_sect_read(struct file *file, struct kobject *kobj,
+ struct bin_attribute *battr,
+ char *buf, loff_t pos, size_t count)
+ {
+ struct module_sect_attr *sattr =
+ container_of(battr, struct module_sect_attr, battr);
++ char bounce[MODULE_SECT_READ_SIZE + 1];
++ size_t wrote;
+
+ if (pos != 0)
+ return -EINVAL;
+
+- return sprintf(buf, "0x%px\n",
+- kallsyms_show_value(file->f_cred) ? (void *)sattr->address : NULL);
++ /*
++ * Since we're a binary read handler, we must account for the
++ * trailing NUL byte that sprintf will write: if "buf" is
++ * too small to hold the NUL, or the NUL is exactly the last
++ * byte, the read will look like it got truncated by one byte.
++ * Since there is no way to ask sprintf nicely to not write
++ * the NUL, we have to use a bounce buffer.
++ */
++ wrote = scnprintf(bounce, sizeof(bounce), "0x%px\n",
++ kallsyms_show_value(file->f_cred)
++ ? (void *)sattr->address : NULL);
++ count = min(count, wrote);
++ memcpy(buf, bounce, count);
++
++ return count;
+ }
+
+ static void free_sect_attrs(struct module_sect_attrs *sect_attrs)
+@@ -1580,7 +1596,7 @@ static void add_sect_attrs(struct module *mod, const struct load_info *info)
+ goto out;
+ sect_attrs->nsections++;
+ sattr->battr.read = module_sect_read;
+- sattr->battr.size = 3 /* "0x", "\n" */ + (BITS_PER_LONG / 4);
++ sattr->battr.size = MODULE_SECT_READ_SIZE;
+ sattr->battr.attr.mode = 0400;
+ *(gattr++) = &(sattr++)->battr;
+ }
+diff --git a/kernel/pid.c b/kernel/pid.c
+index f1496b7571621..ee58530d1acad 100644
+--- a/kernel/pid.c
++++ b/kernel/pid.c
+@@ -42,6 +42,7 @@
+ #include <linux/sched/signal.h>
+ #include <linux/sched/task.h>
+ #include <linux/idr.h>
++#include <net/sock.h>
+
+ struct pid init_struct_pid = {
+ .count = REFCOUNT_INIT(1),
+@@ -642,10 +643,12 @@ static int pidfd_getfd(struct pid *pid, int fd)
+ }
+
+ ret = get_unused_fd_flags(O_CLOEXEC);
+- if (ret < 0)
++ if (ret < 0) {
+ fput(file);
+- else
++ } else {
++ __receive_sock(file);
+ fd_install(ret, file);
++ }
+
+ return ret;
+ }
+diff --git a/kernel/sched/core.c b/kernel/sched/core.c
+index c3cbdc436e2e4..f788cd61df212 100644
+--- a/kernel/sched/core.c
++++ b/kernel/sched/core.c
+@@ -794,6 +794,26 @@ unsigned int sysctl_sched_uclamp_util_max = SCHED_CAPACITY_SCALE;
+ /* All clamps are required to be less or equal than these values */
+ static struct uclamp_se uclamp_default[UCLAMP_CNT];
+
++/*
++ * This static key is used to reduce the uclamp overhead in the fast path. It
++ * primarily disables the call to uclamp_rq_{inc, dec}() in
++ * enqueue/dequeue_task().
++ *
++ * This allows users to continue to enable uclamp in their kernel config with
++ * minimum uclamp overhead in the fast path.
++ *
++ * As soon as userspace modifies any of the uclamp knobs, the static key is
++ * enabled, since we have actual users that make use of uclamp
++ * functionality.
++ *
++ * The knobs that would enable this static key are:
++ *
++ * * A task modifying its uclamp value with sched_setattr().
++ * * An admin modifying the sysctl_sched_uclamp_{min, max} via procfs.
++ * * An admin modifying the cgroup cpu.uclamp.{min, max}
++ */
++DEFINE_STATIC_KEY_FALSE(sched_uclamp_used);
++
+ /* Integer rounded range for each bucket */
+ #define UCLAMP_BUCKET_DELTA DIV_ROUND_CLOSEST(SCHED_CAPACITY_SCALE, UCLAMP_BUCKETS)
+
+@@ -990,10 +1010,38 @@ static inline void uclamp_rq_dec_id(struct rq *rq, struct task_struct *p,
+
+ lockdep_assert_held(&rq->lock);
+
++ /*
++ * If sched_uclamp_used was enabled after task @p was enqueued,
++ * we could end up with an unbalanced call to uclamp_rq_dec_id().
++ *
++ * In this case the uc_se->active flag should be false since no uclamp
++ * accounting was performed at enqueue time and we can just return
++ * here.
++ *
++ * We also need to be careful of the following enqueue/dequeue
++ * ordering problem:
++ *
++ * enqueue(taskA)
++ * // sched_uclamp_used gets enabled
++ * enqueue(taskB)
++ * dequeue(taskA)
++ * // Must not decrement bucket->tasks here
++ * dequeue(taskB)
++ *
++ * where we could end up with stale data in uc_se and
++ * bucket[uc_se->bucket_id].
++ *
++ * This check eliminates the possibility of such a race.
++ */
++ if (unlikely(!uc_se->active))
++ return;
++
+ bucket = &uc_rq->bucket[uc_se->bucket_id];
++
+ SCHED_WARN_ON(!bucket->tasks);
+ if (likely(bucket->tasks))
+ bucket->tasks--;
++
+ uc_se->active = false;
+
+ /*
+@@ -1021,6 +1069,15 @@ static inline void uclamp_rq_inc(struct rq *rq, struct task_struct *p)
+ {
+ enum uclamp_id clamp_id;
+
++ /*
++ * Avoid any overhead until uclamp is actually used by userspace.
++ *
++ * The condition is constructed such that a NOP is generated when
++ * sched_uclamp_used is disabled.
++ */
++ if (!static_branch_unlikely(&sched_uclamp_used))
++ return;
++
+ if (unlikely(!p->sched_class->uclamp_enabled))
+ return;
+
+@@ -1036,6 +1093,15 @@ static inline void uclamp_rq_dec(struct rq *rq, struct task_struct *p)
+ {
+ enum uclamp_id clamp_id;
+
++ /*
++ * Avoid any overhead until uclamp is actually used by userspace.
++ *
++ * The condition is constructed such that a NOP is generated when
++ * sched_uclamp_used is disabled.
++ */
++ if (!static_branch_unlikely(&sched_uclamp_used))
++ return;
++
+ if (unlikely(!p->sched_class->uclamp_enabled))
+ return;
+
+@@ -1144,8 +1210,10 @@ int sysctl_sched_uclamp_handler(struct ctl_table *table, int write,
+ update_root_tg = true;
+ }
+
+- if (update_root_tg)
++ if (update_root_tg) {
++ static_branch_enable(&sched_uclamp_used);
+ uclamp_update_root_tg();
++ }
+
+ /*
+ * We update all RUNNABLE tasks only when task groups are in use.
+@@ -1180,6 +1248,15 @@ static int uclamp_validate(struct task_struct *p,
+ if (upper_bound > SCHED_CAPACITY_SCALE)
+ return -EINVAL;
+
++ /*
++ * We have valid uclamp attributes; make sure uclamp is enabled.
++ *
++ * We need to do that here, because enabling static branches is a
++ * blocking operation which obviously cannot be done while holding
++ * scheduler locks.
++ */
++ static_branch_enable(&sched_uclamp_used);
++
+ return 0;
+ }
+
+@@ -7442,6 +7519,8 @@ static ssize_t cpu_uclamp_write(struct kernfs_open_file *of, char *buf,
+ if (req.ret)
+ return req.ret;
+
++ static_branch_enable(&sched_uclamp_used);
++
+ mutex_lock(&uclamp_mutex);
+ rcu_read_lock();
+
+diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
+index 7fbaee24c824f..dc6835bc64907 100644
+--- a/kernel/sched/cpufreq_schedutil.c
++++ b/kernel/sched/cpufreq_schedutil.c
+@@ -210,7 +210,7 @@ unsigned long schedutil_cpu_util(int cpu, unsigned long util_cfs,
+ unsigned long dl_util, util, irq;
+ struct rq *rq = cpu_rq(cpu);
+
+- if (!IS_BUILTIN(CONFIG_UCLAMP_TASK) &&
++ if (!uclamp_is_used() &&
+ type == FREQUENCY_UTIL && rt_rq_is_runnable(&rq->rt)) {
+ return max;
+ }
+diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
+index 877fb08eb1b04..c82857e2e288a 100644
+--- a/kernel/sched/sched.h
++++ b/kernel/sched/sched.h
+@@ -862,6 +862,8 @@ struct uclamp_rq {
+ unsigned int value;
+ struct uclamp_bucket bucket[UCLAMP_BUCKETS];
+ };
++
++DECLARE_STATIC_KEY_FALSE(sched_uclamp_used);
+ #endif /* CONFIG_UCLAMP_TASK */
+
+ /*
+@@ -2349,12 +2351,35 @@ static inline void cpufreq_update_util(struct rq *rq, unsigned int flags) {}
+ #ifdef CONFIG_UCLAMP_TASK
+ unsigned long uclamp_eff_value(struct task_struct *p, enum uclamp_id clamp_id);
+
++/**
++ * uclamp_rq_util_with - clamp @util with @rq and @p effective uclamp values.
++ * @rq: The rq to clamp against. Must not be NULL.
++ * @util: The util value to clamp.
++ * @p: The task to clamp against. Can be NULL if you want to clamp
++ * against @rq only.
++ *
++ * Clamps the passed @util to the max(@rq, @p) effective uclamp values.
++ *
++ * If the sched_uclamp_used static key is disabled, just return the util
++ * without any clamping since uclamp aggregation at the rq level in the fast
++ * path is disabled, rendering this operation a NOP.
++ *
++ * Use uclamp_eff_value() if you don't care about uclamp values at rq level. It
++ * will return the correct effective uclamp value of the task even if the
++ * static key is disabled.
++ */
+ static __always_inline
+ unsigned long uclamp_rq_util_with(struct rq *rq, unsigned long util,
+ struct task_struct *p)
+ {
+- unsigned long min_util = READ_ONCE(rq->uclamp[UCLAMP_MIN].value);
+- unsigned long max_util = READ_ONCE(rq->uclamp[UCLAMP_MAX].value);
++ unsigned long min_util;
++ unsigned long max_util;
++
++ if (!static_branch_likely(&sched_uclamp_used))
++ return util;
++
++ min_util = READ_ONCE(rq->uclamp[UCLAMP_MIN].value);
++ max_util = READ_ONCE(rq->uclamp[UCLAMP_MAX].value);
+
+ if (p) {
+ min_util = max(min_util, uclamp_eff_value(p, UCLAMP_MIN));
+@@ -2371,6 +2396,19 @@ unsigned long uclamp_rq_util_with(struct rq *rq, unsigned long util,
+
+ return clamp(util, min_util, max_util);
+ }
++
++/*
++ * When uclamp is compiled in, the aggregation at rq level is 'turned off'
++ * by default in the fast path and only gets turned on once userspace performs
++ * an operation that requires it.
++ *
++ * Returns true if userspace opted in to use uclamp, in which case the
++ * aggregation at rq level is active.
++ */
++static inline bool uclamp_is_used(void)
++{
++ return static_branch_likely(&sched_uclamp_used);
++}
+ #else /* CONFIG_UCLAMP_TASK */
+ static inline
+ unsigned long uclamp_rq_util_with(struct rq *rq, unsigned long util,
+@@ -2378,6 +2416,11 @@ unsigned long uclamp_rq_util_with(struct rq *rq, unsigned long util,
+ {
+ return util;
+ }
++
++static inline bool uclamp_is_used(void)
++{
++ return false;
++}
+ #endif /* CONFIG_UCLAMP_TASK */
+
+ #ifdef arch_scale_freq_capacity
+diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
+index 7d879fae3777f..b5cb5be3ca6f6 100644
+--- a/kernel/trace/ftrace.c
++++ b/kernel/trace/ftrace.c
+@@ -6187,8 +6187,11 @@ static int referenced_filters(struct dyn_ftrace *rec)
+ int cnt = 0;
+
+ for (ops = ftrace_ops_list; ops != &ftrace_list_end; ops = ops->next) {
+- if (ops_references_rec(ops, rec))
+- cnt++;
++ if (ops_references_rec(ops, rec)) {
++ cnt++;
++ if (ops->flags & FTRACE_OPS_FL_SAVE_REGS)
++ rec->flags |= FTRACE_FL_REGS;
++ }
+ }
+
+ return cnt;
+@@ -6367,8 +6370,8 @@ void ftrace_module_enable(struct module *mod)
+ if (ftrace_start_up)
+ cnt += referenced_filters(rec);
+
+- /* This clears FTRACE_FL_DISABLED */
+- rec->flags = cnt;
++ rec->flags &= ~FTRACE_FL_DISABLED;
++ rec->flags += cnt;
+
+ if (ftrace_start_up && cnt) {
+ int failed = __ftrace_replace_code(rec, 1);
+@@ -6966,12 +6969,12 @@ void ftrace_pid_follow_fork(struct trace_array *tr, bool enable)
+ if (enable) {
+ register_trace_sched_process_fork(ftrace_pid_follow_sched_process_fork,
+ tr);
+- register_trace_sched_process_exit(ftrace_pid_follow_sched_process_exit,
++ register_trace_sched_process_free(ftrace_pid_follow_sched_process_exit,
+ tr);
+ } else {
+ unregister_trace_sched_process_fork(ftrace_pid_follow_sched_process_fork,
+ tr);
+- unregister_trace_sched_process_exit(ftrace_pid_follow_sched_process_exit,
++ unregister_trace_sched_process_free(ftrace_pid_follow_sched_process_exit,
+ tr);
+ }
+ }
+diff --git a/kernel/trace/trace_events.c b/kernel/trace/trace_events.c
+index f6f55682d3e2d..a85effb2373bf 100644
+--- a/kernel/trace/trace_events.c
++++ b/kernel/trace/trace_events.c
+@@ -538,12 +538,12 @@ void trace_event_follow_fork(struct trace_array *tr, bool enable)
+ if (enable) {
+ register_trace_prio_sched_process_fork(event_filter_pid_sched_process_fork,
+ tr, INT_MIN);
+- register_trace_prio_sched_process_exit(event_filter_pid_sched_process_exit,
++ register_trace_prio_sched_process_free(event_filter_pid_sched_process_exit,
+ tr, INT_MAX);
+ } else {
+ unregister_trace_sched_process_fork(event_filter_pid_sched_process_fork,
+ tr);
+- unregister_trace_sched_process_exit(event_filter_pid_sched_process_exit,
++ unregister_trace_sched_process_free(event_filter_pid_sched_process_exit,
+ tr);
+ }
+ }
+diff --git a/kernel/trace/trace_hwlat.c b/kernel/trace/trace_hwlat.c
+index e2be7bb7ef7e2..17e1e49e5b936 100644
+--- a/kernel/trace/trace_hwlat.c
++++ b/kernel/trace/trace_hwlat.c
+@@ -283,6 +283,7 @@ static bool disable_migrate;
+ static void move_to_next_cpu(void)
+ {
+ struct cpumask *current_mask = &save_cpumask;
++ struct trace_array *tr = hwlat_trace;
+ int next_cpu;
+
+ if (disable_migrate)
+@@ -296,7 +297,7 @@ static void move_to_next_cpu(void)
+ goto disable;
+
+ get_online_cpus();
+- cpumask_and(current_mask, cpu_online_mask, tracing_buffer_mask);
++ cpumask_and(current_mask, cpu_online_mask, tr->tracing_cpumask);
+ next_cpu = cpumask_next(smp_processor_id(), current_mask);
+ put_online_cpus();
+
+@@ -373,7 +374,7 @@ static int start_kthread(struct trace_array *tr)
+ /* Just pick the first CPU on first iteration */
+ current_mask = &save_cpumask;
+ get_online_cpus();
+- cpumask_and(current_mask, cpu_online_mask, tracing_buffer_mask);
++ cpumask_and(current_mask, cpu_online_mask, tr->tracing_cpumask);
+ put_online_cpus();
+ next_cpu = cpumask_first(current_mask);
+
+diff --git a/lib/devres.c b/lib/devres.c
+index 6ef51f159c54b..ca0d28727ccef 100644
+--- a/lib/devres.c
++++ b/lib/devres.c
+@@ -119,6 +119,7 @@ __devm_ioremap_resource(struct device *dev, const struct resource *res,
+ {
+ resource_size_t size;
+ void __iomem *dest_ptr;
++ char *pretty_name;
+
+ BUG_ON(!dev);
+
+@@ -129,7 +130,15 @@ __devm_ioremap_resource(struct device *dev, const struct resource *res,
+
+ size = resource_size(res);
+
+- if (!devm_request_mem_region(dev, res->start, size, dev_name(dev))) {
++ if (res->name)
++ pretty_name = devm_kasprintf(dev, GFP_KERNEL, "%s %s",
++ dev_name(dev), res->name);
++ else
++ pretty_name = devm_kstrdup(dev, dev_name(dev), GFP_KERNEL);
++ if (!pretty_name)
++ return IOMEM_ERR_PTR(-ENOMEM);
++
++ if (!devm_request_mem_region(dev, res->start, size, pretty_name)) {
+ dev_err(dev, "can't request region for resource %pR\n", res);
+ return IOMEM_ERR_PTR(-EBUSY);
+ }
+diff --git a/lib/test_kmod.c b/lib/test_kmod.c
+index e651c37d56dbd..eab52770070d6 100644
+--- a/lib/test_kmod.c
++++ b/lib/test_kmod.c
+@@ -745,7 +745,7 @@ static int trigger_config_run_type(struct kmod_test_device *test_dev,
+ break;
+ case TEST_KMOD_FS_TYPE:
+ kfree_const(config->test_fs);
+- config->test_driver = NULL;
++ config->test_fs = NULL;
+ copied = config_copy_test_fs(config, test_str,
+ strlen(test_str));
+ break;
+diff --git a/lib/test_lockup.c b/lib/test_lockup.c
+index bd7c7ff39f6be..e7202763a1688 100644
+--- a/lib/test_lockup.c
++++ b/lib/test_lockup.c
+@@ -512,8 +512,8 @@ static int __init test_lockup_init(void)
+ if (test_file_path[0]) {
+ test_file = filp_open(test_file_path, O_RDONLY, 0);
+ if (IS_ERR(test_file)) {
+- pr_err("cannot find file_path\n");
+- return -EINVAL;
++ pr_err("failed to open %s: %ld\n", test_file_path, PTR_ERR(test_file));
++ return PTR_ERR(test_file);
+ }
+ test_inode = file_inode(test_file);
+ } else if (test_lock_inode ||
+diff --git a/mm/cma.c b/mm/cma.c
+index 26ecff8188817..0963c0f9c5022 100644
+--- a/mm/cma.c
++++ b/mm/cma.c
+@@ -93,17 +93,15 @@ static void cma_clear_bitmap(struct cma *cma, unsigned long pfn,
+ mutex_unlock(&cma->lock);
+ }
+
+-static int __init cma_activate_area(struct cma *cma)
++static void __init cma_activate_area(struct cma *cma)
+ {
+ unsigned long base_pfn = cma->base_pfn, pfn = base_pfn;
+ unsigned i = cma->count >> pageblock_order;
+ struct zone *zone;
+
+ cma->bitmap = bitmap_zalloc(cma_bitmap_maxno(cma), GFP_KERNEL);
+- if (!cma->bitmap) {
+- cma->count = 0;
+- return -ENOMEM;
+- }
++ if (!cma->bitmap)
++ goto out_error;
+
+ WARN_ON_ONCE(!pfn_valid(pfn));
+ zone = page_zone(pfn_to_page(pfn));
+@@ -133,25 +131,22 @@ static int __init cma_activate_area(struct cma *cma)
+ spin_lock_init(&cma->mem_head_lock);
+ #endif
+
+- return 0;
++ return;
+
+ not_in_zone:
+- pr_err("CMA area %s could not be activated\n", cma->name);
+ bitmap_free(cma->bitmap);
++out_error:
+ cma->count = 0;
+- return -EINVAL;
++ pr_err("CMA area %s could not be activated\n", cma->name);
++ return;
+ }
+
+ static int __init cma_init_reserved_areas(void)
+ {
+ int i;
+
+- for (i = 0; i < cma_area_count; i++) {
+- int ret = cma_activate_area(&cma_areas[i]);
+-
+- if (ret)
+- return ret;
+- }
++ for (i = 0; i < cma_area_count; i++)
++ cma_activate_area(&cma_areas[i]);
+
+ return 0;
+ }
+diff --git a/mm/hugetlb.c b/mm/hugetlb.c
+index 590111ea6975d..7952c6cb6f08c 100644
+--- a/mm/hugetlb.c
++++ b/mm/hugetlb.c
+@@ -3952,7 +3952,7 @@ void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
+ continue;
+
+ ptl = huge_pte_lock(h, mm, ptep);
+- if (huge_pmd_unshare(mm, &address, ptep)) {
++ if (huge_pmd_unshare(mm, vma, &address, ptep)) {
+ spin_unlock(ptl);
+ /*
+ * We just unmapped a page of PMDs by clearing a PUD.
+@@ -4539,10 +4539,6 @@ vm_fault_t hugetlb_fault(struct mm_struct *mm, struct vm_area_struct *vma,
+ } else if (unlikely(is_hugetlb_entry_hwpoisoned(entry)))
+ return VM_FAULT_HWPOISON_LARGE |
+ VM_FAULT_SET_HINDEX(hstate_index(h));
+- } else {
+- ptep = huge_pte_alloc(mm, haddr, huge_page_size(h));
+- if (!ptep)
+- return VM_FAULT_OOM;
+ }
+
+ /*
+@@ -5019,7 +5015,7 @@ unsigned long hugetlb_change_protection(struct vm_area_struct *vma,
+ if (!ptep)
+ continue;
+ ptl = huge_pte_lock(h, mm, ptep);
+- if (huge_pmd_unshare(mm, &address, ptep)) {
++ if (huge_pmd_unshare(mm, vma, &address, ptep)) {
+ pages++;
+ spin_unlock(ptl);
+ shared_pmd = true;
+@@ -5313,25 +5309,21 @@ static bool vma_shareable(struct vm_area_struct *vma, unsigned long addr)
+ void adjust_range_if_pmd_sharing_possible(struct vm_area_struct *vma,
+ unsigned long *start, unsigned long *end)
+ {
+- unsigned long check_addr;
++ unsigned long a_start, a_end;
+
+ if (!(vma->vm_flags & VM_MAYSHARE))
+ return;
+
+- for (check_addr = *start; check_addr < *end; check_addr += PUD_SIZE) {
+- unsigned long a_start = check_addr & PUD_MASK;
+- unsigned long a_end = a_start + PUD_SIZE;
++ /* Extend the range to be PUD aligned for a worst case scenario */
++ a_start = ALIGN_DOWN(*start, PUD_SIZE);
++ a_end = ALIGN(*end, PUD_SIZE);
+
+- /*
+- * If sharing is possible, adjust start/end if necessary.
+- */
+- if (range_in_vma(vma, a_start, a_end)) {
+- if (a_start < *start)
+- *start = a_start;
+- if (a_end > *end)
+- *end = a_end;
+- }
+- }
++ /*
++ * Intersect the range with the vma range, since pmd sharing doesn't
++ * occur across vmas
++ */
++ *start = max(vma->vm_start, a_start);
++ *end = min(vma->vm_end, a_end);
+ }
+
+ /*
+@@ -5404,12 +5396,14 @@ out:
+ * returns: 1 successfully unmapped a shared pte page
+ * 0 the underlying pte page is not shared, or it is the last user
+ */
+-int huge_pmd_unshare(struct mm_struct *mm, unsigned long *addr, pte_t *ptep)
++int huge_pmd_unshare(struct mm_struct *mm, struct vm_area_struct *vma,
++ unsigned long *addr, pte_t *ptep)
+ {
+ pgd_t *pgd = pgd_offset(mm, *addr);
+ p4d_t *p4d = p4d_offset(pgd, *addr);
+ pud_t *pud = pud_offset(p4d, *addr);
+
++ i_mmap_assert_write_locked(vma->vm_file->f_mapping);
+ BUG_ON(page_count(virt_to_page(ptep)) == 0);
+ if (page_count(virt_to_page(ptep)) == 1)
+ return 0;
+@@ -5427,7 +5421,8 @@ pte_t *huge_pmd_share(struct mm_struct *mm, unsigned long addr, pud_t *pud)
+ return NULL;
+ }
+
+-int huge_pmd_unshare(struct mm_struct *mm, unsigned long *addr, pte_t *ptep)
++int huge_pmd_unshare(struct mm_struct *mm, struct vm_area_struct *vma,
++ unsigned long *addr, pte_t *ptep)
+ {
+ return 0;
+ }
+diff --git a/mm/khugepaged.c b/mm/khugepaged.c
+index 700f5160f3e4d..ac04b332a373a 100644
+--- a/mm/khugepaged.c
++++ b/mm/khugepaged.c
+@@ -1412,7 +1412,7 @@ void collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr)
+ {
+ unsigned long haddr = addr & HPAGE_PMD_MASK;
+ struct vm_area_struct *vma = find_vma(mm, haddr);
+- struct page *hpage = NULL;
++ struct page *hpage;
+ pte_t *start_pte, *pte;
+ pmd_t *pmd, _pmd;
+ spinlock_t *ptl;
+@@ -1432,9 +1432,17 @@ void collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr)
+ if (!hugepage_vma_check(vma, vma->vm_flags | VM_HUGEPAGE))
+ return;
+
++ hpage = find_lock_page(vma->vm_file->f_mapping,
++ linear_page_index(vma, haddr));
++ if (!hpage)
++ return;
++
++ if (!PageHead(hpage))
++ goto drop_hpage;
++
+ pmd = mm_find_pmd(mm, haddr);
+ if (!pmd)
+- return;
++ goto drop_hpage;
+
+ start_pte = pte_offset_map_lock(mm, pmd, haddr, &ptl);
+
+@@ -1453,30 +1461,11 @@ void collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr)
+
+ page = vm_normal_page(vma, addr, *pte);
+
+- if (!page || !PageCompound(page))
+- goto abort;
+-
+- if (!hpage) {
+- hpage = compound_head(page);
+- /*
+- * The mapping of the THP should not change.
+- *
+- * Note that uprobe, debugger, or MAP_PRIVATE may
+- * change the page table, but the new page will
+- * not pass PageCompound() check.
+- */
+- if (WARN_ON(hpage->mapping != vma->vm_file->f_mapping))
+- goto abort;
+- }
+-
+ /*
+- * Confirm the page maps to the correct subpage.
+- *
+- * Note that uprobe, debugger, or MAP_PRIVATE may change
+- * the page table, but the new page will not pass
+- * PageCompound() check.
++ * Note that uprobe, debugger, or MAP_PRIVATE may change the
++ * page table, but the new page will not be a subpage of hpage.
+ */
+- if (WARN_ON(hpage + i != page))
++ if (hpage + i != page)
+ goto abort;
+ count++;
+ }
+@@ -1495,21 +1484,26 @@ void collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr)
+ pte_unmap_unlock(start_pte, ptl);
+
+ /* step 3: set proper refcount and mm_counters. */
+- if (hpage) {
++ if (count) {
+ page_ref_sub(hpage, count);
+ add_mm_counter(vma->vm_mm, mm_counter_file(hpage), -count);
+ }
+
+ /* step 4: collapse pmd */
+ ptl = pmd_lock(vma->vm_mm, pmd);
+- _pmd = pmdp_collapse_flush(vma, addr, pmd);
++ _pmd = pmdp_collapse_flush(vma, haddr, pmd);
+ spin_unlock(ptl);
+ mm_dec_nr_ptes(mm);
+ pte_free(mm, pmd_pgtable(_pmd));
++
++drop_hpage:
++ unlock_page(hpage);
++ put_page(hpage);
+ return;
+
+ abort:
+ pte_unmap_unlock(start_pte, ptl);
++ goto drop_hpage;
+ }
+
+ static int khugepaged_collapse_pte_mapped_thps(struct mm_slot *mm_slot)
+@@ -1538,6 +1532,7 @@ out:
+ static void retract_page_tables(struct address_space *mapping, pgoff_t pgoff)
+ {
+ struct vm_area_struct *vma;
++ struct mm_struct *mm;
+ unsigned long addr;
+ pmd_t *pmd, _pmd;
+
+@@ -1566,7 +1561,8 @@ static void retract_page_tables(struct address_space *mapping, pgoff_t pgoff)
+ continue;
+ if (vma->vm_end < addr + HPAGE_PMD_SIZE)
+ continue;
+- pmd = mm_find_pmd(vma->vm_mm, addr);
++ mm = vma->vm_mm;
++ pmd = mm_find_pmd(mm, addr);
+ if (!pmd)
+ continue;
+ /*
+@@ -1576,17 +1572,19 @@ static void retract_page_tables(struct address_space *mapping, pgoff_t pgoff)
+ * mmap_lock while holding page lock. Fault path does it in
+ * reverse order. Trylock is a way to avoid deadlock.
+ */
+- if (mmap_write_trylock(vma->vm_mm)) {
+- spinlock_t *ptl = pmd_lock(vma->vm_mm, pmd);
+- /* assume page table is clear */
+- _pmd = pmdp_collapse_flush(vma, addr, pmd);
+- spin_unlock(ptl);
+- mmap_write_unlock(vma->vm_mm);
+- mm_dec_nr_ptes(vma->vm_mm);
+- pte_free(vma->vm_mm, pmd_pgtable(_pmd));
++ if (mmap_write_trylock(mm)) {
++ if (!khugepaged_test_exit(mm)) {
++ spinlock_t *ptl = pmd_lock(mm, pmd);
++ /* assume page table is clear */
++ _pmd = pmdp_collapse_flush(vma, addr, pmd);
++ spin_unlock(ptl);
++ mm_dec_nr_ptes(mm);
++ pte_free(mm, pmd_pgtable(_pmd));
++ }
++ mmap_write_unlock(mm);
+ } else {
+ /* Try again later */
+- khugepaged_add_pte_mapped_thp(vma->vm_mm, addr);
++ khugepaged_add_pte_mapped_thp(mm, addr);
+ }
+ }
+ i_mmap_unlock_write(mapping);
+diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
+index da374cd3d45b3..76c75a599da3f 100644
+--- a/mm/memory_hotplug.c
++++ b/mm/memory_hotplug.c
+@@ -1742,7 +1742,7 @@ static int __ref try_remove_memory(int nid, u64 start, u64 size)
+ */
+ rc = walk_memory_blocks(start, size, NULL, check_memblock_offlined_cb);
+ if (rc)
+- goto done;
++ return rc;
+
+ /* remove memmap entry */
+ firmware_map_remove(start, start + size, "System RAM");
+@@ -1766,9 +1766,8 @@ static int __ref try_remove_memory(int nid, u64 start, u64 size)
+
+ try_offline_node(nid);
+
+-done:
+ mem_hotplug_done();
+- return rc;
++ return 0;
+ }
+
+ /**
+diff --git a/mm/page_counter.c b/mm/page_counter.c
+index c56db2d5e1592..b4663844c9b37 100644
+--- a/mm/page_counter.c
++++ b/mm/page_counter.c
+@@ -72,7 +72,7 @@ void page_counter_charge(struct page_counter *counter, unsigned long nr_pages)
+ long new;
+
+ new = atomic_long_add_return(nr_pages, &c->usage);
+- propagate_protected_usage(counter, new);
++ propagate_protected_usage(c, new);
+ /*
+ * This is indeed racy, but we can live with some
+ * inaccuracy in the watermark.
+@@ -116,7 +116,7 @@ bool page_counter_try_charge(struct page_counter *counter,
+ new = atomic_long_add_return(nr_pages, &c->usage);
+ if (new > c->max) {
+ atomic_long_sub(nr_pages, &c->usage);
+- propagate_protected_usage(counter, new);
++ propagate_protected_usage(c, new);
+ /*
+ * This is racy, but we can live with some
+ * inaccuracy in the failcnt.
+@@ -125,7 +125,7 @@ bool page_counter_try_charge(struct page_counter *counter,
+ *fail = c;
+ goto failed;
+ }
+- propagate_protected_usage(counter, new);
++ propagate_protected_usage(c, new);
+ /*
+ * Just like with failcnt, we can live with some
+ * inaccuracy in the watermark.
+diff --git a/mm/rmap.c b/mm/rmap.c
+index 5fe2dedce1fc1..6cce9ef06753b 100644
+--- a/mm/rmap.c
++++ b/mm/rmap.c
+@@ -1469,7 +1469,7 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
+ * do this outside rmap routines.
+ */
+ VM_BUG_ON(!(flags & TTU_RMAP_LOCKED));
+- if (huge_pmd_unshare(mm, &address, pvmw.pte)) {
++ if (huge_pmd_unshare(mm, vma, &address, pvmw.pte)) {
+ /*
+ * huge_pmd_unshare unmapped an entire PMD
+ * page. There is no way of knowing exactly
+diff --git a/mm/shuffle.c b/mm/shuffle.c
+index 44406d9977c77..dd13ab851b3ee 100644
+--- a/mm/shuffle.c
++++ b/mm/shuffle.c
+@@ -58,25 +58,25 @@ module_param_call(shuffle, shuffle_store, shuffle_show, &shuffle_param, 0400);
+ * For two pages to be swapped in the shuffle, they must be free (on a
+ * 'free_area' lru), have the same order, and have the same migratetype.
+ */
+-static struct page * __meminit shuffle_valid_page(unsigned long pfn, int order)
++static struct page * __meminit shuffle_valid_page(struct zone *zone,
++ unsigned long pfn, int order)
+ {
+- struct page *page;
++ struct page *page = pfn_to_online_page(pfn);
+
+ /*
+ * Given we're dealing with randomly selected pfns in a zone we
+ * need to ask questions like...
+ */
+
+- /* ...is the pfn even in the memmap? */
+- if (!pfn_valid_within(pfn))
++ /* ... is the page managed by the buddy? */
++ if (!page)
+ return NULL;
+
+- /* ...is the pfn in a present section or a hole? */
+- if (!pfn_in_present_section(pfn))
++ /* ... is the page assigned to the same zone? */
++ if (page_zone(page) != zone)
+ return NULL;
+
+ /* ...is the page free and currently on a free_area list? */
+- page = pfn_to_page(pfn);
+ if (!PageBuddy(page))
+ return NULL;
+
+@@ -123,7 +123,7 @@ void __meminit __shuffle_zone(struct zone *z)
+ * page_j randomly selected in the span @zone_start_pfn to
+ * @spanned_pages.
+ */
+- page_i = shuffle_valid_page(i, order);
++ page_i = shuffle_valid_page(z, i, order);
+ if (!page_i)
+ continue;
+
+@@ -137,7 +137,7 @@ void __meminit __shuffle_zone(struct zone *z)
+ j = z->zone_start_pfn +
+ ALIGN_DOWN(get_random_long() % z->spanned_pages,
+ order_pages);
+- page_j = shuffle_valid_page(j, order);
++ page_j = shuffle_valid_page(z, j, order);
+ if (page_j && page_j != page_i)
+ break;
+ }
+diff --git a/net/appletalk/atalk_proc.c b/net/appletalk/atalk_proc.c
+index 550c6ca007cc2..9c1241292d1d2 100644
+--- a/net/appletalk/atalk_proc.c
++++ b/net/appletalk/atalk_proc.c
+@@ -229,6 +229,8 @@ int __init atalk_proc_init(void)
+ sizeof(struct aarp_iter_state), NULL))
+ goto out;
+
++ return 0;
++
+ out:
+ remove_proc_subtree("atalk", init_net.proc_net);
+ return -ENOMEM;
+diff --git a/net/compat.c b/net/compat.c
+index 434838bef5f80..7dc670c8eac50 100644
+--- a/net/compat.c
++++ b/net/compat.c
+@@ -309,6 +309,7 @@ void scm_detach_fds_compat(struct msghdr *kmsg, struct scm_cookie *scm)
+ break;
+ }
+ /* Bump the usage count and install the file. */
++ __receive_sock(fp[i]);
+ fd_install(new_fd, get_file(fp[i]));
+ }
+
+diff --git a/net/core/sock.c b/net/core/sock.c
+index a14a8cb6ccca6..78f8736be9c50 100644
+--- a/net/core/sock.c
++++ b/net/core/sock.c
+@@ -2842,6 +2842,27 @@ int sock_no_mmap(struct file *file, struct socket *sock, struct vm_area_struct *
+ }
+ EXPORT_SYMBOL(sock_no_mmap);
+
++/*
++ * When a file is received (via SCM_RIGHTS, etc), we must bump the
++ * various sock-based usage counts.
++ */
++void __receive_sock(struct file *file)
++{
++ struct socket *sock;
++ int error;
++
++ /*
++ * The resulting value of "error" is ignored here since we only
++ * need to take action when the file is a socket and testing
++ * "sock" for NULL is sufficient.
++ */
++ sock = sock_from_file(file, &error);
++ if (sock) {
++ sock_update_netprioidx(&sock->sk->sk_cgrp_data);
++ sock_update_classid(&sock->sk->sk_cgrp_data);
++ }
++}
++
+ ssize_t sock_no_sendpage(struct socket *sock, struct page *page, int offset, size_t size, int flags)
+ {
+ ssize_t res;
+diff --git a/net/mac80211/sta_info.c b/net/mac80211/sta_info.c
+index af4cc5fb678ed..05e966f1609e2 100644
+--- a/net/mac80211/sta_info.c
++++ b/net/mac80211/sta_info.c
+@@ -1050,7 +1050,7 @@ static void __sta_info_destroy_part2(struct sta_info *sta)
+ might_sleep();
+ lockdep_assert_held(&local->sta_mtx);
+
+- while (sta->sta_state == IEEE80211_STA_AUTHORIZED) {
++ if (sta->sta_state == IEEE80211_STA_AUTHORIZED) {
+ ret = sta_info_move_state(sta, IEEE80211_STA_ASSOC);
+ WARN_ON_ONCE(ret);
+ }
+diff --git a/scripts/recordmcount.c b/scripts/recordmcount.c
+index e59022b3f1254..b9c2ee7ab43fa 100644
+--- a/scripts/recordmcount.c
++++ b/scripts/recordmcount.c
+@@ -42,6 +42,8 @@
+ #define R_ARM_THM_CALL 10
+ #define R_ARM_CALL 28
+
++#define R_AARCH64_CALL26 283
++
+ static int fd_map; /* File descriptor for file being modified. */
+ static int mmap_failed; /* Boolean flag. */
+ static char gpfx; /* prefix for global symbol name (sometimes '_') */
+diff --git a/security/integrity/ima/ima_policy.c b/security/integrity/ima/ima_policy.c
+index 3e3e568c81309..a59bf2f5b2d4f 100644
+--- a/security/integrity/ima/ima_policy.c
++++ b/security/integrity/ima/ima_policy.c
+@@ -1035,6 +1035,11 @@ static bool ima_validate_rule(struct ima_rule_entry *entry)
+ return false;
+ }
+
++ /* Ensure that combinations of flags are compatible with each other */
++ if (entry->flags & IMA_CHECK_BLACKLIST &&
++ !(entry->flags & IMA_MODSIG_ALLOWED))
++ return false;
++
+ return true;
+ }
+
+@@ -1371,9 +1376,17 @@ static int ima_parse_rule(char *rule, struct ima_rule_entry *entry)
+ result = -EINVAL;
+ break;
+ case Opt_appraise_flag:
++ if (entry->action != APPRAISE) {
++ result = -EINVAL;
++ break;
++ }
++
+ ima_log_string(ab, "appraise_flag", args[0].from);
+- if (strstr(args[0].from, "blacklist"))
++ if (IS_ENABLED(CONFIG_IMA_APPRAISE_MODSIG) &&
++ strstr(args[0].from, "blacklist"))
+ entry->flags |= IMA_CHECK_BLACKLIST;
++ else
++ result = -EINVAL;
+ break;
+ case Opt_permit_directio:
+ entry->flags |= IMA_PERMIT_DIRECTIO;
+diff --git a/sound/pci/echoaudio/echoaudio.c b/sound/pci/echoaudio/echoaudio.c
+index 0941a7a17623a..456219a665a79 100644
+--- a/sound/pci/echoaudio/echoaudio.c
++++ b/sound/pci/echoaudio/echoaudio.c
+@@ -2158,7 +2158,6 @@ static int snd_echo_resume(struct device *dev)
+ if (err < 0) {
+ kfree(commpage_bak);
+ dev_err(dev, "resume init_hw err=%d\n", err);
+- snd_echo_free(chip);
+ return err;
+ }
+
+@@ -2185,7 +2184,6 @@ static int snd_echo_resume(struct device *dev)
+ if (request_irq(pci->irq, snd_echo_interrupt, IRQF_SHARED,
+ KBUILD_MODNAME, chip)) {
+ dev_err(chip->card->dev, "cannot grab irq\n");
+- snd_echo_free(chip);
+ return -EBUSY;
+ }
+ chip->irq = pci->irq;
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 00d155b98c1d1..8626e59f1e6a9 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -4171,8 +4171,6 @@ static void alc269_fixup_hp_gpio_led(struct hda_codec *codec,
+ static void alc285_fixup_hp_gpio_led(struct hda_codec *codec,
+ const struct hda_fixup *fix, int action)
+ {
+- struct alc_spec *spec = codec->spec;
+-
+ alc_fixup_hp_gpio_led(codec, action, 0x04, 0x01);
+ }
+
+diff --git a/tools/build/Makefile.feature b/tools/build/Makefile.feature
+index cb152370fdefd..e7818b44b48ee 100644
+--- a/tools/build/Makefile.feature
++++ b/tools/build/Makefile.feature
+@@ -8,7 +8,7 @@ endif
+
+ feature_check = $(eval $(feature_check_code))
+ define feature_check_code
+- feature-$(1) := $(shell $(MAKE) OUTPUT=$(OUTPUT_FEATURES) CFLAGS="$(EXTRA_CFLAGS) $(FEATURE_CHECK_CFLAGS-$(1))" CXXFLAGS="$(EXTRA_CXXFLAGS) $(FEATURE_CHECK_CXXFLAGS-$(1))" LDFLAGS="$(LDFLAGS) $(FEATURE_CHECK_LDFLAGS-$(1))" -C $(feature_dir) $(OUTPUT_FEATURES)test-$1.bin >/dev/null 2>/dev/null && echo 1 || echo 0)
++ feature-$(1) := $(shell $(MAKE) OUTPUT=$(OUTPUT_FEATURES) CC="$(CC)" CXX="$(CXX)" CFLAGS="$(EXTRA_CFLAGS) $(FEATURE_CHECK_CFLAGS-$(1))" CXXFLAGS="$(EXTRA_CXXFLAGS) $(FEATURE_CHECK_CXXFLAGS-$(1))" LDFLAGS="$(LDFLAGS) $(FEATURE_CHECK_LDFLAGS-$(1))" -C $(feature_dir) $(OUTPUT_FEATURES)test-$1.bin >/dev/null 2>/dev/null && echo 1 || echo 0)
+ endef
+
+ feature_set = $(eval $(feature_set_code))
+diff --git a/tools/build/feature/Makefile b/tools/build/feature/Makefile
+index b1f0321180f5c..93b590d81209c 100644
+--- a/tools/build/feature/Makefile
++++ b/tools/build/feature/Makefile
+@@ -74,8 +74,6 @@ FILES= \
+
+ FILES := $(addprefix $(OUTPUT),$(FILES))
+
+-CC ?= $(CROSS_COMPILE)gcc
+-CXX ?= $(CROSS_COMPILE)g++
+ PKG_CONFIG ?= $(CROSS_COMPILE)pkg-config
+ LLVM_CONFIG ?= llvm-config
+ CLANG ?= clang
+diff --git a/tools/perf/bench/mem-functions.c b/tools/perf/bench/mem-functions.c
+index 9235b76501be8..19d45c377ac18 100644
+--- a/tools/perf/bench/mem-functions.c
++++ b/tools/perf/bench/mem-functions.c
+@@ -223,12 +223,8 @@ static int bench_mem_common(int argc, const char **argv, struct bench_mem_info *
+ return 0;
+ }
+
+-static u64 do_memcpy_cycles(const struct function *r, size_t size, void *src, void *dst)
++static void memcpy_prefault(memcpy_t fn, size_t size, void *src, void *dst)
+ {
+- u64 cycle_start = 0ULL, cycle_end = 0ULL;
+- memcpy_t fn = r->fn.memcpy;
+- int i;
+-
+ /* Make sure to always prefault zero pages even if MMAP_THRESH is crossed: */
+ memset(src, 0, size);
+
+@@ -237,6 +233,15 @@ static u64 do_memcpy_cycles(const struct function *r, size_t size, void *src, vo
+ * to not measure page fault overhead:
+ */
+ fn(dst, src, size);
++}
++
++static u64 do_memcpy_cycles(const struct function *r, size_t size, void *src, void *dst)
++{
++ u64 cycle_start = 0ULL, cycle_end = 0ULL;
++ memcpy_t fn = r->fn.memcpy;
++ int i;
++
++ memcpy_prefault(fn, size, src, dst);
+
+ cycle_start = get_cycles();
+ for (i = 0; i < nr_loops; ++i)
+@@ -252,11 +257,7 @@ static double do_memcpy_gettimeofday(const struct function *r, size_t size, void
+ memcpy_t fn = r->fn.memcpy;
+ int i;
+
+- /*
+- * We prefault the freshly allocated memory range here,
+- * to not measure page fault overhead:
+- */
+- fn(dst, src, size);
++ memcpy_prefault(fn, size, src, dst);
+
+ BUG_ON(gettimeofday(&tv_start, NULL));
+ for (i = 0; i < nr_loops; ++i)
+diff --git a/tools/perf/builtin-record.c b/tools/perf/builtin-record.c
+index a37e7910e9e90..23ea934f30b34 100644
+--- a/tools/perf/builtin-record.c
++++ b/tools/perf/builtin-record.c
+@@ -1489,7 +1489,7 @@ static int record__setup_sb_evlist(struct record *rec)
+ evlist__set_cb(rec->sb_evlist, record__process_signal_event, rec);
+ rec->thread_id = pthread_self();
+ }
+-
++#ifdef HAVE_LIBBPF_SUPPORT
+ if (!opts->no_bpf_event) {
+ if (rec->sb_evlist == NULL) {
+ rec->sb_evlist = evlist__new();
+@@ -1505,7 +1505,7 @@ static int record__setup_sb_evlist(struct record *rec)
+ return -1;
+ }
+ }
+-
++#endif
+ if (perf_evlist__start_sb_thread(rec->sb_evlist, &rec->opts.target)) {
+ pr_debug("Couldn't start the BPF side band thread:\nBPF programs starting from now on won't be annotatable\n");
+ opts->no_bpf_event = true;
+diff --git a/tools/perf/tests/parse-events.c b/tools/perf/tests/parse-events.c
+index 895188b63f963..6a2ec6ec0d0ef 100644
+--- a/tools/perf/tests/parse-events.c
++++ b/tools/perf/tests/parse-events.c
+@@ -631,6 +631,34 @@ static int test__checkterms_simple(struct list_head *terms)
+ TEST_ASSERT_VAL("wrong val", term->val.num == 1);
+ TEST_ASSERT_VAL("wrong config", !strcmp(term->config, "umask"));
+
++ /*
++ * read
++ *
++ * The perf_pmu__test_parse_init injects 'read' term into
++ * perf_pmu_events_list, so 'read' is evaluated as read term
++ * and not as raw event with 'ead' hex value.
++ */
++ term = list_entry(term->list.next, struct parse_events_term, list);
++ TEST_ASSERT_VAL("wrong type term",
++ term->type_term == PARSE_EVENTS__TERM_TYPE_USER);
++ TEST_ASSERT_VAL("wrong type val",
++ term->type_val == PARSE_EVENTS__TERM_TYPE_NUM);
++ TEST_ASSERT_VAL("wrong val", term->val.num == 1);
++ TEST_ASSERT_VAL("wrong config", !strcmp(term->config, "read"));
++
++ /*
++ * r0xead
++ *
++ * To be still able to pass 'ead' value with 'r' syntax,
++ * we added support to parse 'r0xHEX' event.
++ */
++ term = list_entry(term->list.next, struct parse_events_term, list);
++ TEST_ASSERT_VAL("wrong type term",
++ term->type_term == PARSE_EVENTS__TERM_TYPE_CONFIG);
++ TEST_ASSERT_VAL("wrong type val",
++ term->type_val == PARSE_EVENTS__TERM_TYPE_NUM);
++ TEST_ASSERT_VAL("wrong val", term->val.num == 0xead);
++ TEST_ASSERT_VAL("wrong config", !term->config);
+ return 0;
+ }
+
+@@ -1776,7 +1804,7 @@ struct terms_test {
+
+ static struct terms_test test__terms[] = {
+ [0] = {
+- .str = "config=10,config1,config2=3,umask=1",
++ .str = "config=10,config1,config2=3,umask=1,read,r0xead",
+ .check = test__checkterms_simple,
+ },
+ };
+@@ -1836,6 +1864,13 @@ static int test_term(struct terms_test *t)
+
+ INIT_LIST_HEAD(&terms);
+
++ /*
++ * The perf_pmu__test_parse_init prepares perf_pmu_events_list
++ * which gets freed in parse_events_terms.
++ */
++ if (perf_pmu__test_parse_init())
++ return -1;
++
+ ret = parse_events_terms(&terms, t->str);
+ if (ret) {
+ pr_debug("failed to parse terms '%s', err %d\n",
+diff --git a/tools/perf/util/evsel.c b/tools/perf/util/evsel.c
+index ef802f6d40c17..6a79cfdf96cb6 100644
+--- a/tools/perf/util/evsel.c
++++ b/tools/perf/util/evsel.c
+@@ -1014,12 +1014,14 @@ void evsel__config(struct evsel *evsel, struct record_opts *opts,
+ if (callchain && callchain->enabled && !evsel->no_aux_samples)
+ evsel__config_callchain(evsel, opts, callchain);
+
+- if (opts->sample_intr_regs && !evsel->no_aux_samples) {
++ if (opts->sample_intr_regs && !evsel->no_aux_samples &&
++ !evsel__is_dummy_event(evsel)) {
+ attr->sample_regs_intr = opts->sample_intr_regs;
+ evsel__set_sample_bit(evsel, REGS_INTR);
+ }
+
+- if (opts->sample_user_regs && !evsel->no_aux_samples) {
++ if (opts->sample_user_regs && !evsel->no_aux_samples &&
++ !evsel__is_dummy_event(evsel)) {
+ attr->sample_regs_user |= opts->sample_user_regs;
+ evsel__set_sample_bit(evsel, REGS_USER);
+ }
+diff --git a/tools/perf/util/intel-pt-decoder/intel-pt-decoder.c b/tools/perf/util/intel-pt-decoder/intel-pt-decoder.c
+index f8ccfd6be0eee..7ffcbd6fcd1ae 100644
+--- a/tools/perf/util/intel-pt-decoder/intel-pt-decoder.c
++++ b/tools/perf/util/intel-pt-decoder/intel-pt-decoder.c
+@@ -1164,6 +1164,7 @@ static int intel_pt_walk_fup(struct intel_pt_decoder *decoder)
+ return 0;
+ if (err == -EAGAIN ||
+ intel_pt_fup_with_nlip(decoder, &intel_pt_insn, ip, err)) {
++ decoder->pkt_state = INTEL_PT_STATE_IN_SYNC;
+ if (intel_pt_fup_event(decoder))
+ return 0;
+ return -EAGAIN;
+@@ -1942,17 +1943,13 @@ next:
+ }
+ if (decoder->set_fup_mwait)
+ no_tip = true;
++ if (no_tip)
++ decoder->pkt_state = INTEL_PT_STATE_FUP_NO_TIP;
++ else
++ decoder->pkt_state = INTEL_PT_STATE_FUP;
+ err = intel_pt_walk_fup(decoder);
+- if (err != -EAGAIN) {
+- if (err)
+- return err;
+- if (no_tip)
+- decoder->pkt_state =
+- INTEL_PT_STATE_FUP_NO_TIP;
+- else
+- decoder->pkt_state = INTEL_PT_STATE_FUP;
+- return 0;
+- }
++ if (err != -EAGAIN)
++ return err;
+ if (no_tip) {
+ no_tip = false;
+ break;
+@@ -1980,8 +1977,10 @@ next:
+ * possibility of another CBR change that gets caught up
+ * in the PSB+.
+ */
+- if (decoder->cbr != decoder->cbr_seen)
++ if (decoder->cbr != decoder->cbr_seen) {
++ decoder->state.type = 0;
+ return 0;
++ }
+ break;
+
+ case INTEL_PT_PIP:
+@@ -2022,8 +2021,10 @@ next:
+
+ case INTEL_PT_CBR:
+ intel_pt_calc_cbr(decoder);
+- if (decoder->cbr != decoder->cbr_seen)
++ if (decoder->cbr != decoder->cbr_seen) {
++ decoder->state.type = 0;
+ return 0;
++ }
+ break;
+
+ case INTEL_PT_MODE_EXEC:
+@@ -2599,15 +2600,11 @@ const struct intel_pt_state *intel_pt_decode(struct intel_pt_decoder *decoder)
+ err = intel_pt_walk_tip(decoder);
+ break;
+ case INTEL_PT_STATE_FUP:
+- decoder->pkt_state = INTEL_PT_STATE_IN_SYNC;
+ err = intel_pt_walk_fup(decoder);
+ if (err == -EAGAIN)
+ err = intel_pt_walk_fup_tip(decoder);
+- else if (!err)
+- decoder->pkt_state = INTEL_PT_STATE_FUP;
+ break;
+ case INTEL_PT_STATE_FUP_NO_TIP:
+- decoder->pkt_state = INTEL_PT_STATE_IN_SYNC;
+ err = intel_pt_walk_fup(decoder);
+ if (err == -EAGAIN)
+ err = intel_pt_walk_trace(decoder);
+diff --git a/tools/perf/util/parse-events.c b/tools/perf/util/parse-events.c
+index 3decbb203846a..4476de0e678aa 100644
+--- a/tools/perf/util/parse-events.c
++++ b/tools/perf/util/parse-events.c
+@@ -2017,6 +2017,32 @@ err:
+ perf_pmu__parse_cleanup();
+ }
+
++/*
++ * This function injects special term in
++ * perf_pmu_events_list so the test code
++ * can check on this functionality.
++ */
++int perf_pmu__test_parse_init(void)
++{
++ struct perf_pmu_event_symbol *list;
++
++ list = malloc(sizeof(*list) * 1);
++ if (!list)
++ return -ENOMEM;
++
++ list->type = PMU_EVENT_SYMBOL;
++ list->symbol = strdup("read");
++
++ if (!list->symbol) {
++ free(list);
++ return -ENOMEM;
++ }
++
++ perf_pmu_events_list = list;
++ perf_pmu_events_list_num = 1;
++ return 0;
++}
++
+ enum perf_pmu_event_symbol_type
+ perf_pmu__parse_check(const char *name)
+ {
+@@ -2078,6 +2104,8 @@ int parse_events_terms(struct list_head *terms, const char *str)
+ int ret;
+
+ ret = parse_events__scanner(str, &parse_state);
++ perf_pmu__parse_cleanup();
++
+ if (!ret) {
+ list_splice(parse_state.terms, terms);
+ zfree(&parse_state.terms);
+diff --git a/tools/perf/util/parse-events.h b/tools/perf/util/parse-events.h
+index 1fe23a2f9b36e..0b8cdb7270f04 100644
+--- a/tools/perf/util/parse-events.h
++++ b/tools/perf/util/parse-events.h
+@@ -253,4 +253,6 @@ static inline bool is_sdt_event(char *str __maybe_unused)
+ }
+ #endif /* HAVE_LIBELF_SUPPORT */
+
++int perf_pmu__test_parse_init(void);
++
+ #endif /* __PERF_PARSE_EVENTS_H */
+diff --git a/tools/perf/util/parse-events.l b/tools/perf/util/parse-events.l
+index 002802e17059e..7332d16cb4fc7 100644
+--- a/tools/perf/util/parse-events.l
++++ b/tools/perf/util/parse-events.l
+@@ -41,14 +41,6 @@ static int value(yyscan_t scanner, int base)
+ return __value(yylval, text, base, PE_VALUE);
+ }
+
+-static int raw(yyscan_t scanner)
+-{
+- YYSTYPE *yylval = parse_events_get_lval(scanner);
+- char *text = parse_events_get_text(scanner);
+-
+- return __value(yylval, text + 1, 16, PE_RAW);
+-}
+-
+ static int str(yyscan_t scanner, int token)
+ {
+ YYSTYPE *yylval = parse_events_get_lval(scanner);
+@@ -72,6 +64,17 @@ static int str(yyscan_t scanner, int token)
+ return token;
+ }
+
++static int raw(yyscan_t scanner)
++{
++ YYSTYPE *yylval = parse_events_get_lval(scanner);
++ char *text = parse_events_get_text(scanner);
++
++ if (perf_pmu__parse_check(text) == PMU_EVENT_SYMBOL)
++ return str(scanner, PE_NAME);
++
++ return __value(yylval, text + 1, 16, PE_RAW);
++}
++
+ static bool isbpf_suffix(char *text)
+ {
+ int len = strlen(text);
+diff --git a/tools/perf/util/probe-finder.c b/tools/perf/util/probe-finder.c
+index 55924255c5355..659024342e9ac 100644
+--- a/tools/perf/util/probe-finder.c
++++ b/tools/perf/util/probe-finder.c
+@@ -1408,6 +1408,9 @@ static int fill_empty_trace_arg(struct perf_probe_event *pev,
+ char *type;
+ int i, j, ret;
+
++ if (!ntevs)
++ return -ENOENT;
++
+ for (i = 0; i < pev->nargs; i++) {
+ type = NULL;
+ for (j = 0; j < ntevs; j++) {
+@@ -1464,7 +1467,7 @@ int debuginfo__find_trace_events(struct debuginfo *dbg,
+ if (ret >= 0 && tf.pf.skip_empty_arg)
+ ret = fill_empty_trace_arg(pev, tf.tevs, tf.ntevs);
+
+- if (ret < 0) {
++ if (ret < 0 || tf.ntevs == 0) {
+ for (i = 0; i < tf.ntevs; i++)
+ clear_probe_trace_event(&tf.tevs[i]);
+ zfree(tevs);
+diff --git a/tools/testing/selftests/bpf/Makefile b/tools/testing/selftests/bpf/Makefile
+index 22aaec74ea0ab..4f322d5388757 100644
+--- a/tools/testing/selftests/bpf/Makefile
++++ b/tools/testing/selftests/bpf/Makefile
+@@ -102,7 +102,7 @@ endif
+ OVERRIDE_TARGETS := 1
+ override define CLEAN
+ $(call msg,CLEAN)
+- $(RM) -r $(TEST_GEN_PROGS) $(TEST_GEN_PROGS_EXTENDED) $(TEST_GEN_FILES) $(EXTRA_CLEAN)
++ $(Q)$(RM) -r $(TEST_GEN_PROGS) $(TEST_GEN_PROGS_EXTENDED) $(TEST_GEN_FILES) $(EXTRA_CLEAN)
+ endef
+
+ include ../lib.mk
+@@ -122,17 +122,21 @@ $(notdir $(TEST_GEN_PROGS) \
+ $(TEST_GEN_PROGS_EXTENDED) \
+ $(TEST_CUSTOM_PROGS)): %: $(OUTPUT)/% ;
+
++$(OUTPUT)/%.o: %.c
++ $(call msg,CC,,$@)
++ $(Q)$(CC) $(CFLAGS) -c $(filter %.c,$^) $(LDLIBS) -o $@
++
+ $(OUTPUT)/%:%.c
+ $(call msg,BINARY,,$@)
+- $(LINK.c) $^ $(LDLIBS) -o $@
++ $(Q)$(LINK.c) $^ $(LDLIBS) -o $@
+
+ $(OUTPUT)/urandom_read: urandom_read.c
+ $(call msg,BINARY,,$@)
+- $(CC) $(LDFLAGS) -o $@ $< $(LDLIBS) -Wl,--build-id
++ $(Q)$(CC) $(LDFLAGS) -o $@ $< $(LDLIBS) -Wl,--build-id
+
+ $(OUTPUT)/test_stub.o: test_stub.c $(BPFOBJ)
+ $(call msg,CC,,$@)
+- $(CC) -c $(CFLAGS) -o $@ $<
++ $(Q)$(CC) -c $(CFLAGS) -o $@ $<
+
+ VMLINUX_BTF_PATHS := $(if $(O),$(O)/vmlinux) \
+ $(if $(KBUILD_OUTPUT),$(KBUILD_OUTPUT)/vmlinux) \
+@@ -141,7 +145,9 @@ VMLINUX_BTF_PATHS := $(if $(O),$(O)/vmlinux) \
+ /boot/vmlinux-$(shell uname -r)
+ VMLINUX_BTF := $(abspath $(firstword $(wildcard $(VMLINUX_BTF_PATHS))))
+
+-$(OUTPUT)/runqslower: $(BPFOBJ)
++DEFAULT_BPFTOOL := $(SCRATCH_DIR)/sbin/bpftool
++
++$(OUTPUT)/runqslower: $(BPFOBJ) | $(DEFAULT_BPFTOOL)
+ $(Q)$(MAKE) $(submake_extras) -C $(TOOLSDIR)/bpf/runqslower \
+ OUTPUT=$(SCRATCH_DIR)/ VMLINUX_BTF=$(VMLINUX_BTF) \
+ BPFOBJ=$(BPFOBJ) BPF_INCLUDE=$(INCLUDE_DIR) && \
+@@ -163,7 +169,6 @@ $(OUTPUT)/test_netcnt: cgroup_helpers.c
+ $(OUTPUT)/test_sock_fields: cgroup_helpers.c
+ $(OUTPUT)/test_sysctl: cgroup_helpers.c
+
+-DEFAULT_BPFTOOL := $(SCRATCH_DIR)/sbin/bpftool
+ BPFTOOL ?= $(DEFAULT_BPFTOOL)
+ $(DEFAULT_BPFTOOL): $(wildcard $(BPFTOOLDIR)/*.[ch] $(BPFTOOLDIR)/Makefile) \
+ $(BPFOBJ) | $(BUILD_DIR)/bpftool
+@@ -179,11 +184,11 @@ $(BPFOBJ): $(wildcard $(BPFDIR)/*.[ch] $(BPFDIR)/Makefile) \
+
+ $(BUILD_DIR)/libbpf $(BUILD_DIR)/bpftool $(INCLUDE_DIR):
+ $(call msg,MKDIR,,$@)
+- mkdir -p $@
++ $(Q)mkdir -p $@
+
+ $(INCLUDE_DIR)/vmlinux.h: $(VMLINUX_BTF) | $(BPFTOOL) $(INCLUDE_DIR)
+ $(call msg,GEN,,$@)
+- $(BPFTOOL) btf dump file $(VMLINUX_BTF) format c > $@
++ $(Q)$(BPFTOOL) btf dump file $(VMLINUX_BTF) format c > $@
+
+ # Get Clang's default includes on this system, as opposed to those seen by
+ # '-target bpf'. This fixes "missing" files on some architectures/distros,
+@@ -221,28 +226,28 @@ $(OUTPUT)/flow_dissector_load.o: flow_dissector_load.h
+ # $4 - LDFLAGS
+ define CLANG_BPF_BUILD_RULE
+ $(call msg,CLNG-LLC,$(TRUNNER_BINARY),$2)
+- ($(CLANG) $3 -O2 -target bpf -emit-llvm \
++ $(Q)($(CLANG) $3 -O2 -target bpf -emit-llvm \
+ -c $1 -o - || echo "BPF obj compilation failed") | \
+ $(LLC) -mattr=dwarfris -march=bpf -mcpu=v3 $4 -filetype=obj -o $2
+ endef
+ # Similar to CLANG_BPF_BUILD_RULE, but with disabled alu32
+ define CLANG_NOALU32_BPF_BUILD_RULE
+ $(call msg,CLNG-LLC,$(TRUNNER_BINARY),$2)
+- ($(CLANG) $3 -O2 -target bpf -emit-llvm \
++ $(Q)($(CLANG) $3 -O2 -target bpf -emit-llvm \
+ -c $1 -o - || echo "BPF obj compilation failed") | \
+ $(LLC) -march=bpf -mcpu=v2 $4 -filetype=obj -o $2
+ endef
+ # Similar to CLANG_BPF_BUILD_RULE, but using native Clang and bpf LLC
+ define CLANG_NATIVE_BPF_BUILD_RULE
+ $(call msg,CLNG-BPF,$(TRUNNER_BINARY),$2)
+- ($(CLANG) $3 -O2 -emit-llvm \
++ $(Q)($(CLANG) $3 -O2 -emit-llvm \
+ -c $1 -o - || echo "BPF obj compilation failed") | \
+ $(LLC) -march=bpf -mcpu=v3 $4 -filetype=obj -o $2
+ endef
+ # Build BPF object using GCC
+ define GCC_BPF_BUILD_RULE
+ $(call msg,GCC-BPF,$(TRUNNER_BINARY),$2)
+- $(BPF_GCC) $3 $4 -O2 -c $1 -o $2
++ $(Q)$(BPF_GCC) $3 $4 -O2 -c $1 -o $2
+ endef
+
+ SKEL_BLACKLIST := btf__% test_pinning_invalid.c test_sk_assign.c
+@@ -284,7 +289,7 @@ ifeq ($($(TRUNNER_OUTPUT)-dir),)
+ $(TRUNNER_OUTPUT)-dir := y
+ $(TRUNNER_OUTPUT):
+ $$(call msg,MKDIR,,$$@)
+- mkdir -p $$@
++ $(Q)mkdir -p $$@
+ endif
+
+ # ensure we set up BPF objects generation rule just once for a given
+@@ -304,7 +309,7 @@ $(TRUNNER_BPF_SKELS): $(TRUNNER_OUTPUT)/%.skel.h: \
+ $(TRUNNER_OUTPUT)/%.o \
+ | $(BPFTOOL) $(TRUNNER_OUTPUT)
+ $$(call msg,GEN-SKEL,$(TRUNNER_BINARY),$$@)
+- $$(BPFTOOL) gen skeleton $$< > $$@
++ $(Q)$$(BPFTOOL) gen skeleton $$< > $$@
+ endif
+
+ # ensure we set up tests.h header generation rule just once
+@@ -328,7 +333,7 @@ $(TRUNNER_TEST_OBJS): $(TRUNNER_OUTPUT)/%.test.o: \
+ $(TRUNNER_BPF_SKELS) \
+ $$(BPFOBJ) | $(TRUNNER_OUTPUT)
+ $$(call msg,TEST-OBJ,$(TRUNNER_BINARY),$$@)
+- cd $$(@D) && $$(CC) -I. $$(CFLAGS) -c $(CURDIR)/$$< $$(LDLIBS) -o $$(@F)
++ $(Q)cd $$(@D) && $$(CC) -I. $$(CFLAGS) -c $(CURDIR)/$$< $$(LDLIBS) -o $$(@F)
+
+ $(TRUNNER_EXTRA_OBJS): $(TRUNNER_OUTPUT)/%.o: \
+ %.c \
+@@ -336,20 +341,20 @@ $(TRUNNER_EXTRA_OBJS): $(TRUNNER_OUTPUT)/%.o: \
+ $(TRUNNER_TESTS_HDR) \
+ $$(BPFOBJ) | $(TRUNNER_OUTPUT)
+ $$(call msg,EXT-OBJ,$(TRUNNER_BINARY),$$@)
+- $$(CC) $$(CFLAGS) -c $$< $$(LDLIBS) -o $$@
++ $(Q)$$(CC) $$(CFLAGS) -c $$< $$(LDLIBS) -o $$@
+
+ # only copy extra resources if in flavored build
+ $(TRUNNER_BINARY)-extras: $(TRUNNER_EXTRA_FILES) | $(TRUNNER_OUTPUT)
+ ifneq ($2,)
+ $$(call msg,EXT-COPY,$(TRUNNER_BINARY),$(TRUNNER_EXTRA_FILES))
+- cp -a $$^ $(TRUNNER_OUTPUT)/
++ $(Q)cp -a $$^ $(TRUNNER_OUTPUT)/
+ endif
+
+ $(OUTPUT)/$(TRUNNER_BINARY): $(TRUNNER_TEST_OBJS) \
+ $(TRUNNER_EXTRA_OBJS) $$(BPFOBJ) \
+ | $(TRUNNER_BINARY)-extras
+ $$(call msg,BINARY,,$$@)
+- $$(CC) $$(CFLAGS) $$(filter %.a %.o,$$^) $$(LDLIBS) -o $$@
++ $(Q)$$(CC) $$(CFLAGS) $$(filter %.a %.o,$$^) $$(LDLIBS) -o $$@
+
+ endef
+
+@@ -402,17 +407,17 @@ verifier/tests.h: verifier/*.c
+ ) > verifier/tests.h)
+ $(OUTPUT)/test_verifier: test_verifier.c verifier/tests.h $(BPFOBJ) | $(OUTPUT)
+ $(call msg,BINARY,,$@)
+- $(CC) $(CFLAGS) $(filter %.a %.o %.c,$^) $(LDLIBS) -o $@
++ $(Q)$(CC) $(CFLAGS) $(filter %.a %.o %.c,$^) $(LDLIBS) -o $@
+
+ # Make sure we are able to include and link libbpf against c++.
+ $(OUTPUT)/test_cpp: test_cpp.cpp $(OUTPUT)/test_core_extern.skel.h $(BPFOBJ)
+ $(call msg,CXX,,$@)
+- $(CXX) $(CFLAGS) $^ $(LDLIBS) -o $@
++ $(Q)$(CXX) $(CFLAGS) $^ $(LDLIBS) -o $@
+
+ # Benchmark runner
+ $(OUTPUT)/bench_%.o: benchs/bench_%.c bench.h
+ $(call msg,CC,,$@)
+- $(CC) $(CFLAGS) -c $(filter %.c,$^) $(LDLIBS) -o $@
++ $(Q)$(CC) $(CFLAGS) -c $(filter %.c,$^) $(LDLIBS) -o $@
+ $(OUTPUT)/bench_rename.o: $(OUTPUT)/test_overhead.skel.h
+ $(OUTPUT)/bench_trigger.o: $(OUTPUT)/trigger_bench.skel.h
+ $(OUTPUT)/bench_ringbufs.o: $(OUTPUT)/ringbuf_bench.skel.h \
+@@ -425,7 +430,7 @@ $(OUTPUT)/bench: $(OUTPUT)/bench.o $(OUTPUT)/testing_helpers.o \
+ $(OUTPUT)/bench_trigger.o \
+ $(OUTPUT)/bench_ringbufs.o
+ $(call msg,BINARY,,$@)
+- $(CC) $(LDFLAGS) -o $@ $(filter %.a %.o,$^) $(LDLIBS)
++ $(Q)$(CC) $(LDFLAGS) -o $@ $(filter %.a %.o,$^) $(LDLIBS)
+
+ EXTRA_CLEAN := $(TEST_CUSTOM_PROGS) $(SCRATCH_DIR) \
+ prog_tests/tests.h map_tests/tests.h verifier/tests.h \
+diff --git a/tools/testing/selftests/bpf/test_progs.c b/tools/testing/selftests/bpf/test_progs.c
+index 54fa5fa688ce9..d498b6aa63a42 100644
+--- a/tools/testing/selftests/bpf/test_progs.c
++++ b/tools/testing/selftests/bpf/test_progs.c
+@@ -12,6 +12,9 @@
+ #include <string.h>
+ #include <execinfo.h> /* backtrace */
+
++#define EXIT_NO_TEST 2
++#define EXIT_ERR_SETUP_INFRA 3
++
+ /* defined in test_progs.h */
+ struct test_env env = {};
+
+@@ -111,13 +114,31 @@ static void reset_affinity() {
+ if (err < 0) {
+ stdio_restore();
+ fprintf(stderr, "Failed to reset process affinity: %d!\n", err);
+- exit(-1);
++ exit(EXIT_ERR_SETUP_INFRA);
+ }
+ err = pthread_setaffinity_np(pthread_self(), sizeof(cpuset), &cpuset);
+ if (err < 0) {
+ stdio_restore();
+ fprintf(stderr, "Failed to reset thread affinity: %d!\n", err);
+- exit(-1);
++ exit(EXIT_ERR_SETUP_INFRA);
++ }
++}
++
++static void save_netns(void)
++{
++ env.saved_netns_fd = open("/proc/self/ns/net", O_RDONLY);
++ if (env.saved_netns_fd == -1) {
++ perror("open(/proc/self/ns/net)");
++ exit(EXIT_ERR_SETUP_INFRA);
++ }
++}
++
++static void restore_netns(void)
++{
++ if (setns(env.saved_netns_fd, CLONE_NEWNET) == -1) {
++ stdio_restore();
++ perror("setns(CLONE_NEWNS)");
++ exit(EXIT_ERR_SETUP_INFRA);
+ }
+ }
+
+@@ -138,8 +159,6 @@ void test__end_subtest()
+ test->test_num, test->subtest_num,
+ test->subtest_name, sub_error_cnt ? "FAIL" : "OK");
+
+- reset_affinity();
+-
+ free(test->subtest_name);
+ test->subtest_name = NULL;
+ }
+@@ -643,6 +662,7 @@ int main(int argc, char **argv)
+ return -1;
+ }
+
++ save_netns();
+ stdio_hijack();
+ for (i = 0; i < prog_test_cnt; i++) {
+ struct prog_test_def *test = &prog_test_defs[i];
+@@ -673,6 +693,7 @@ int main(int argc, char **argv)
+ test->error_cnt ? "FAIL" : "OK");
+
+ reset_affinity();
++ restore_netns();
+ if (test->need_cgroup_cleanup)
+ cleanup_cgroup_environment();
+ }
+@@ -686,6 +707,10 @@ int main(int argc, char **argv)
+ free_str_set(&env.subtest_selector.blacklist);
+ free_str_set(&env.subtest_selector.whitelist);
+ free(env.subtest_selector.num_set);
++ close(env.saved_netns_fd);
++
++ if (env.succ_cnt + env.fail_cnt + env.skip_cnt == 0)
++ return EXIT_NO_TEST;
+
+ return env.fail_cnt ? EXIT_FAILURE : EXIT_SUCCESS;
+ }
+diff --git a/tools/testing/selftests/bpf/test_progs.h b/tools/testing/selftests/bpf/test_progs.h
+index f4503c926acad..b809246039181 100644
+--- a/tools/testing/selftests/bpf/test_progs.h
++++ b/tools/testing/selftests/bpf/test_progs.h
+@@ -78,6 +78,8 @@ struct test_env {
+ int sub_succ_cnt; /* successful sub-tests */
+ int fail_cnt; /* total failed tests + sub-tests */
+ int skip_cnt; /* skipped tests */
++
++ int saved_netns_fd;
+ };
+
+ extern struct test_env env;
+diff --git a/tools/testing/selftests/powerpc/ptrace/ptrace-pkey.c b/tools/testing/selftests/powerpc/ptrace/ptrace-pkey.c
+index bdbbbe8431e03..3694613f418f6 100644
+--- a/tools/testing/selftests/powerpc/ptrace/ptrace-pkey.c
++++ b/tools/testing/selftests/powerpc/ptrace/ptrace-pkey.c
+@@ -44,7 +44,7 @@ struct shared_info {
+ unsigned long amr2;
+
+ /* AMR value that ptrace should refuse to write to the child. */
+- unsigned long amr3;
++ unsigned long invalid_amr;
+
+ /* IAMR value the parent expects to read from the child. */
+ unsigned long expected_iamr;
+@@ -57,8 +57,8 @@ struct shared_info {
+ * (even though they're valid ones) because userspace doesn't have
+ * access to those registers.
+ */
+- unsigned long new_iamr;
+- unsigned long new_uamor;
++ unsigned long invalid_iamr;
++ unsigned long invalid_uamor;
+ };
+
+ static int sys_pkey_alloc(unsigned long flags, unsigned long init_access_rights)
+@@ -66,11 +66,6 @@ static int sys_pkey_alloc(unsigned long flags, unsigned long init_access_rights)
+ return syscall(__NR_pkey_alloc, flags, init_access_rights);
+ }
+
+-static int sys_pkey_free(int pkey)
+-{
+- return syscall(__NR_pkey_free, pkey);
+-}
+-
+ static int child(struct shared_info *info)
+ {
+ unsigned long reg;
+@@ -100,28 +95,32 @@ static int child(struct shared_info *info)
+
+ info->amr1 |= 3ul << pkeyshift(pkey1);
+ info->amr2 |= 3ul << pkeyshift(pkey2);
+- info->amr3 |= info->amr2 | 3ul << pkeyshift(pkey3);
++ /*
++	 * Invalid AMR value where we try to force-write
++	 * things which are denied by a UAMOR setting.
++ */
++ info->invalid_amr = info->amr2 | (~0x0UL & ~info->expected_uamor);
+
++ /*
++ * if PKEY_DISABLE_EXECUTE succeeded we should update the expected_iamr
++ */
+ if (disable_execute)
+ info->expected_iamr |= 1ul << pkeyshift(pkey1);
+ else
+ info->expected_iamr &= ~(1ul << pkeyshift(pkey1));
+
+- info->expected_iamr &= ~(1ul << pkeyshift(pkey2) | 1ul << pkeyshift(pkey3));
+-
+- info->expected_uamor |= 3ul << pkeyshift(pkey1) |
+- 3ul << pkeyshift(pkey2);
+- info->new_iamr |= 1ul << pkeyshift(pkey1) | 1ul << pkeyshift(pkey2);
+- info->new_uamor |= 3ul << pkeyshift(pkey1);
++ /*
++	 * We allocated pkey2 and pkey3 above. Clear the IAMR bits.
++ */
++ info->expected_iamr &= ~(1ul << pkeyshift(pkey2));
++ info->expected_iamr &= ~(1ul << pkeyshift(pkey3));
+
+ /*
+- * We won't use pkey3. We just want a plausible but invalid key to test
+- * whether ptrace will let us write to AMR bits we are not supposed to.
+- *
+- * This also tests whether the kernel restores the UAMOR permissions
+- * after a key is freed.
++ * Create an IAMR value different from expected value.
++ * Kernel will reject an IAMR and UAMOR change.
+ */
+- sys_pkey_free(pkey3);
++ info->invalid_iamr = info->expected_iamr | (1ul << pkeyshift(pkey1) | 1ul << pkeyshift(pkey2));
++ info->invalid_uamor = info->expected_uamor & ~(0x3ul << pkeyshift(pkey1));
+
+ printf("%-30s AMR: %016lx pkey1: %d pkey2: %d pkey3: %d\n",
+ user_write, info->amr1, pkey1, pkey2, pkey3);
+@@ -196,9 +195,9 @@ static int parent(struct shared_info *info, pid_t pid)
+ PARENT_SKIP_IF_UNSUPPORTED(ret, &info->child_sync);
+ PARENT_FAIL_IF(ret, &info->child_sync);
+
+- info->amr1 = info->amr2 = info->amr3 = regs[0];
+- info->expected_iamr = info->new_iamr = regs[1];
+- info->expected_uamor = info->new_uamor = regs[2];
++ info->amr1 = info->amr2 = regs[0];
++ info->expected_iamr = regs[1];
++ info->expected_uamor = regs[2];
+
+ /* Wake up child so that it can set itself up. */
+ ret = prod_child(&info->child_sync);
+@@ -234,10 +233,10 @@ static int parent(struct shared_info *info, pid_t pid)
+ return ret;
+
+ /* Write invalid AMR value in child. */
+- ret = ptrace_write_regs(pid, NT_PPC_PKEY, &info->amr3, 1);
++ ret = ptrace_write_regs(pid, NT_PPC_PKEY, &info->invalid_amr, 1);
+ PARENT_FAIL_IF(ret, &info->child_sync);
+
+- printf("%-30s AMR: %016lx\n", ptrace_write_running, info->amr3);
++ printf("%-30s AMR: %016lx\n", ptrace_write_running, info->invalid_amr);
+
+ /* Wake up child so that it can verify it didn't change. */
+ ret = prod_child(&info->child_sync);
+@@ -249,7 +248,7 @@ static int parent(struct shared_info *info, pid_t pid)
+
+ /* Try to write to IAMR. */
+ regs[0] = info->amr1;
+- regs[1] = info->new_iamr;
++ regs[1] = info->invalid_iamr;
+ ret = ptrace_write_regs(pid, NT_PPC_PKEY, regs, 2);
+ PARENT_FAIL_IF(!ret, &info->child_sync);
+
+@@ -257,7 +256,7 @@ static int parent(struct shared_info *info, pid_t pid)
+ ptrace_write_running, regs[0], regs[1]);
+
+ /* Try to write to IAMR and UAMOR. */
+- regs[2] = info->new_uamor;
++ regs[2] = info->invalid_uamor;
+ ret = ptrace_write_regs(pid, NT_PPC_PKEY, regs, 3);
+ PARENT_FAIL_IF(!ret, &info->child_sync);
+
+diff --git a/tools/testing/selftests/seccomp/seccomp_bpf.c b/tools/testing/selftests/seccomp/seccomp_bpf.c
+index ccf276e138829..592fd1c3d1abb 100644
+--- a/tools/testing/selftests/seccomp/seccomp_bpf.c
++++ b/tools/testing/selftests/seccomp/seccomp_bpf.c
+@@ -3258,6 +3258,11 @@ TEST(user_notification_with_tsync)
+ int ret;
+ unsigned int flags;
+
++ ret = prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0);
++ ASSERT_EQ(0, ret) {
++ TH_LOG("Kernel does not support PR_SET_NO_NEW_PRIVS!");
++ }
++
+ /* these were exclusive */
+ flags = SECCOMP_FILTER_FLAG_NEW_LISTENER |
+ SECCOMP_FILTER_FLAG_TSYNC;
* [gentoo-commits] proj/linux-patches:5.8 commit in: /
@ 2020-08-26 11:18 Mike Pagano
From: Mike Pagano @ 2020-08-26 11:18 UTC (permalink / raw
To: gentoo-commits
commit: 40de4f8c83267312ae7a05f4ef6c57d178753cf4
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Aug 26 11:18:00 2020 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Aug 26 11:18:00 2020 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=40de4f8c
Linux patch 5.8.4
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1003_linux-5.8.4.patch | 5194 ++++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 5198 insertions(+)
diff --git a/0000_README b/0000_README
index bacfc9f..17d6b16 100644
--- a/0000_README
+++ b/0000_README
@@ -55,6 +55,10 @@ Patch: 1002_linux-5.8.3.patch
From: http://www.kernel.org
Desc: Linux 5.8.3
+Patch: 1003_linux-5.8.4.patch
+From: http://www.kernel.org
+Desc: Linux 5.8.4
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1003_linux-5.8.4.patch b/1003_linux-5.8.4.patch
new file mode 100644
index 0000000..fc30996
--- /dev/null
+++ b/1003_linux-5.8.4.patch
@@ -0,0 +1,5194 @@
+diff --git a/Makefile b/Makefile
+index 6001ed2b14c3a..9a7a416f2d84e 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 8
+-SUBLEVEL = 3
++SUBLEVEL = 4
+ EXTRAVERSION =
+ NAME = Kleptomaniac Octopus
+
+diff --git a/arch/alpha/include/asm/io.h b/arch/alpha/include/asm/io.h
+index a4d0c19f1e796..640e1a2f57b42 100644
+--- a/arch/alpha/include/asm/io.h
++++ b/arch/alpha/include/asm/io.h
+@@ -489,10 +489,10 @@ extern inline void writeq(u64 b, volatile void __iomem *addr)
+ }
+ #endif
+
+-#define ioread16be(p) be16_to_cpu(ioread16(p))
+-#define ioread32be(p) be32_to_cpu(ioread32(p))
+-#define iowrite16be(v,p) iowrite16(cpu_to_be16(v), (p))
+-#define iowrite32be(v,p) iowrite32(cpu_to_be32(v), (p))
++#define ioread16be(p) swab16(ioread16(p))
++#define ioread32be(p) swab32(ioread32(p))
++#define iowrite16be(v,p) iowrite16(swab16(v), (p))
++#define iowrite32be(v,p) iowrite32(swab32(v), (p))
+
+ #define inb_p inb
+ #define inw_p inw
+diff --git a/arch/arm64/Makefile b/arch/arm64/Makefile
+index 70f5905954dde..91e377770a6b8 100644
+--- a/arch/arm64/Makefile
++++ b/arch/arm64/Makefile
+@@ -158,6 +158,7 @@ zinstall install:
+ PHONY += vdso_install
+ vdso_install:
+ $(Q)$(MAKE) $(build)=arch/arm64/kernel/vdso $@
++ $(Q)$(MAKE) $(build)=arch/arm64/kernel/vdso32 $@
+
+ # We use MRPROPER_FILES and CLEAN_FILES now
+ archclean:
+diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
+index e21d4a01372fe..759d62343e1d0 100644
+--- a/arch/arm64/include/asm/kvm_host.h
++++ b/arch/arm64/include/asm/kvm_host.h
+@@ -443,7 +443,7 @@ int __kvm_arm_vcpu_set_events(struct kvm_vcpu *vcpu,
+
+ #define KVM_ARCH_WANT_MMU_NOTIFIER
+ int kvm_unmap_hva_range(struct kvm *kvm,
+- unsigned long start, unsigned long end);
++ unsigned long start, unsigned long end, unsigned flags);
+ int kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte);
+ int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end);
+ int kvm_test_age_hva(struct kvm *kvm, unsigned long hva);
+diff --git a/arch/arm64/kernel/vdso32/Makefile b/arch/arm64/kernel/vdso32/Makefile
+index 5139a5f192568..d6adb4677c25f 100644
+--- a/arch/arm64/kernel/vdso32/Makefile
++++ b/arch/arm64/kernel/vdso32/Makefile
+@@ -208,7 +208,7 @@ quiet_cmd_vdsosym = VDSOSYM $@
+ cmd_vdsosym = $(NM) $< | $(gen-vdsosym) | LC_ALL=C sort > $@
+
+ # Install commands for the unstripped file
+-quiet_cmd_vdso_install = INSTALL $@
++quiet_cmd_vdso_install = INSTALL32 $@
+ cmd_vdso_install = cp $(obj)/$@.dbg $(MODLIB)/vdso/vdso32.so
+
+ vdso.so: $(obj)/vdso.so.dbg
+diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
+index 31058e6e7c2a3..bd47f06739d6c 100644
+--- a/arch/arm64/kvm/mmu.c
++++ b/arch/arm64/kvm/mmu.c
+@@ -365,7 +365,8 @@ static void unmap_stage2_p4ds(struct kvm *kvm, pgd_t *pgd,
+ * destroying the VM), otherwise another faulting VCPU may come in and mess
+ * with things behind our backs.
+ */
+-static void unmap_stage2_range(struct kvm *kvm, phys_addr_t start, u64 size)
++static void __unmap_stage2_range(struct kvm *kvm, phys_addr_t start, u64 size,
++ bool may_block)
+ {
+ pgd_t *pgd;
+ phys_addr_t addr = start, end = start + size;
+@@ -390,11 +391,16 @@ static void unmap_stage2_range(struct kvm *kvm, phys_addr_t start, u64 size)
+ * If the range is too large, release the kvm->mmu_lock
+ * to prevent starvation and lockup detector warnings.
+ */
+- if (next != end)
++ if (may_block && next != end)
+ cond_resched_lock(&kvm->mmu_lock);
+ } while (pgd++, addr = next, addr != end);
+ }
+
++static void unmap_stage2_range(struct kvm *kvm, phys_addr_t start, u64 size)
++{
++ __unmap_stage2_range(kvm, start, size, true);
++}
++
+ static void stage2_flush_ptes(struct kvm *kvm, pmd_t *pmd,
+ phys_addr_t addr, phys_addr_t end)
+ {
+@@ -2198,18 +2204,21 @@ static int handle_hva_to_gpa(struct kvm *kvm,
+
+ static int kvm_unmap_hva_handler(struct kvm *kvm, gpa_t gpa, u64 size, void *data)
+ {
+- unmap_stage2_range(kvm, gpa, size);
++ unsigned flags = *(unsigned *)data;
++ bool may_block = flags & MMU_NOTIFIER_RANGE_BLOCKABLE;
++
++ __unmap_stage2_range(kvm, gpa, size, may_block);
+ return 0;
+ }
+
+ int kvm_unmap_hva_range(struct kvm *kvm,
+- unsigned long start, unsigned long end)
++ unsigned long start, unsigned long end, unsigned flags)
+ {
+ if (!kvm->arch.pgd)
+ return 0;
+
+ trace_kvm_unmap_hva_range(start, end);
+- handle_hva_to_gpa(kvm, start, end, &kvm_unmap_hva_handler, NULL);
++ handle_hva_to_gpa(kvm, start, end, &kvm_unmap_hva_handler, &flags);
+ return 0;
+ }
+
+diff --git a/arch/ia64/include/asm/pgtable.h b/arch/ia64/include/asm/pgtable.h
+index 10850897a91c4..779b6972aa84b 100644
+--- a/arch/ia64/include/asm/pgtable.h
++++ b/arch/ia64/include/asm/pgtable.h
+@@ -366,6 +366,15 @@ pgd_index (unsigned long address)
+ }
+ #define pgd_index pgd_index
+
++/*
++ * In the kernel's mapped region we know everything is in region number 5, so
++ * as an optimisation its PGD already points to the area for that region.
++ * However, this also means that we cannot use pgd_index() and we must
++ * never add the region here.
++ */
++#define pgd_offset_k(addr) \
++ (init_mm.pgd + (((addr) >> PGDIR_SHIFT) & (PTRS_PER_PGD - 1)))
++
+ /* Look up a pgd entry in the gate area. On IA-64, the gate-area
+ resides in the kernel-mapped segment, hence we use pgd_offset_k()
+ here. */
+diff --git a/arch/m68k/include/asm/m53xxacr.h b/arch/m68k/include/asm/m53xxacr.h
+index 9138a624c5c81..692f90e7fecc1 100644
+--- a/arch/m68k/include/asm/m53xxacr.h
++++ b/arch/m68k/include/asm/m53xxacr.h
+@@ -89,9 +89,9 @@
+ * coherency though in all cases. And for copyback caches we will need
+ * to push cached data as well.
+ */
+-#define CACHE_INIT CACR_CINVA
+-#define CACHE_INVALIDATE CACR_CINVA
+-#define CACHE_INVALIDATED CACR_CINVA
++#define CACHE_INIT (CACHE_MODE + CACR_CINVA - CACR_EC)
++#define CACHE_INVALIDATE (CACHE_MODE + CACR_CINVA)
++#define CACHE_INVALIDATED (CACHE_MODE + CACR_CINVA)
+
+ #define ACR0_MODE ((CONFIG_RAMBASE & 0xff000000) + \
+ (0x000f0000) + \
+diff --git a/arch/mips/include/asm/kvm_host.h b/arch/mips/include/asm/kvm_host.h
+index 363e7a89d1738..ef1d25d49ec87 100644
+--- a/arch/mips/include/asm/kvm_host.h
++++ b/arch/mips/include/asm/kvm_host.h
+@@ -981,7 +981,7 @@ enum kvm_mips_fault_result kvm_trap_emul_gva_fault(struct kvm_vcpu *vcpu,
+
+ #define KVM_ARCH_WANT_MMU_NOTIFIER
+ int kvm_unmap_hva_range(struct kvm *kvm,
+- unsigned long start, unsigned long end);
++ unsigned long start, unsigned long end, unsigned flags);
+ int kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte);
+ int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end);
+ int kvm_test_age_hva(struct kvm *kvm, unsigned long hva);
+diff --git a/arch/mips/kernel/setup.c b/arch/mips/kernel/setup.c
+index 7b537fa2035df..588b21245e00b 100644
+--- a/arch/mips/kernel/setup.c
++++ b/arch/mips/kernel/setup.c
+@@ -497,7 +497,7 @@ static void __init mips_parse_crashkernel(void)
+ if (ret != 0 || crash_size <= 0)
+ return;
+
+- if (!memblock_find_in_range(crash_base, crash_base + crash_size, crash_size, 0)) {
++ if (!memblock_find_in_range(crash_base, crash_base + crash_size, crash_size, 1)) {
+ pr_warn("Invalid memory region reserved for crash kernel\n");
+ return;
+ }
+diff --git a/arch/mips/kvm/mmu.c b/arch/mips/kvm/mmu.c
+index 49bd160f4d85c..0783ac9b32405 100644
+--- a/arch/mips/kvm/mmu.c
++++ b/arch/mips/kvm/mmu.c
+@@ -518,7 +518,8 @@ static int kvm_unmap_hva_handler(struct kvm *kvm, gfn_t gfn, gfn_t gfn_end,
+ return 1;
+ }
+
+-int kvm_unmap_hva_range(struct kvm *kvm, unsigned long start, unsigned long end)
++int kvm_unmap_hva_range(struct kvm *kvm, unsigned long start, unsigned long end,
++ unsigned flags)
+ {
+ handle_hva_to_gpa(kvm, start, end, &kvm_unmap_hva_handler, NULL);
+
+diff --git a/arch/powerpc/include/asm/fixmap.h b/arch/powerpc/include/asm/fixmap.h
+index 925cf89cbf4ba..6bfc87915d5db 100644
+--- a/arch/powerpc/include/asm/fixmap.h
++++ b/arch/powerpc/include/asm/fixmap.h
+@@ -52,7 +52,7 @@ enum fixed_addresses {
+ FIX_HOLE,
+ /* reserve the top 128K for early debugging purposes */
+ FIX_EARLY_DEBUG_TOP = FIX_HOLE,
+- FIX_EARLY_DEBUG_BASE = FIX_EARLY_DEBUG_TOP+(ALIGN(SZ_128, PAGE_SIZE)/PAGE_SIZE)-1,
++ FIX_EARLY_DEBUG_BASE = FIX_EARLY_DEBUG_TOP+(ALIGN(SZ_128K, PAGE_SIZE)/PAGE_SIZE)-1,
+ #ifdef CONFIG_HIGHMEM
+ FIX_KMAP_BEGIN, /* reserved pte's for temporary kernel mappings */
+ FIX_KMAP_END = FIX_KMAP_BEGIN+(KM_TYPE_NR*NR_CPUS)-1,
+diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h
+index 7e2d061d04451..bccf0ba2da2ef 100644
+--- a/arch/powerpc/include/asm/kvm_host.h
++++ b/arch/powerpc/include/asm/kvm_host.h
+@@ -58,7 +58,8 @@
+ #define KVM_ARCH_WANT_MMU_NOTIFIER
+
+ extern int kvm_unmap_hva_range(struct kvm *kvm,
+- unsigned long start, unsigned long end);
++ unsigned long start, unsigned long end,
++ unsigned flags);
+ extern int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end);
+ extern int kvm_test_age_hva(struct kvm *kvm, unsigned long hva);
+ extern int kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte);
+diff --git a/arch/powerpc/kernel/setup-common.c b/arch/powerpc/kernel/setup-common.c
+index 9d3faac53295e..5ed658ae121ab 100644
+--- a/arch/powerpc/kernel/setup-common.c
++++ b/arch/powerpc/kernel/setup-common.c
+@@ -311,6 +311,7 @@ static int show_cpuinfo(struct seq_file *m, void *v)
+ min = pvr & 0xFF;
+ break;
+ case 0x004e: /* POWER9 bits 12-15 give chip type */
++ case 0x0080: /* POWER10 bit 12 gives SMT8/4 */
+ maj = (pvr >> 8) & 0x0F;
+ min = pvr & 0xFF;
+ break;
+diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c
+index 41fedec69ac35..49db50d1db04c 100644
+--- a/arch/powerpc/kvm/book3s.c
++++ b/arch/powerpc/kvm/book3s.c
+@@ -834,7 +834,8 @@ void kvmppc_core_commit_memory_region(struct kvm *kvm,
+ kvm->arch.kvm_ops->commit_memory_region(kvm, mem, old, new, change);
+ }
+
+-int kvm_unmap_hva_range(struct kvm *kvm, unsigned long start, unsigned long end)
++int kvm_unmap_hva_range(struct kvm *kvm, unsigned long start, unsigned long end,
++ unsigned flags)
+ {
+ return kvm->arch.kvm_ops->unmap_hva_range(kvm, start, end);
+ }
+diff --git a/arch/powerpc/kvm/e500_mmu_host.c b/arch/powerpc/kvm/e500_mmu_host.c
+index d6c1069e9954a..ed0c9c43d0cf1 100644
+--- a/arch/powerpc/kvm/e500_mmu_host.c
++++ b/arch/powerpc/kvm/e500_mmu_host.c
+@@ -734,7 +734,8 @@ static int kvm_unmap_hva(struct kvm *kvm, unsigned long hva)
+ return 0;
+ }
+
+-int kvm_unmap_hva_range(struct kvm *kvm, unsigned long start, unsigned long end)
++int kvm_unmap_hva_range(struct kvm *kvm, unsigned long start, unsigned long end,
++ unsigned flags)
+ {
+ /* kvm_unmap_hva flushes everything anyways */
+ kvm_unmap_hva(kvm, start);
+diff --git a/arch/powerpc/platforms/pseries/hotplug-cpu.c b/arch/powerpc/platforms/pseries/hotplug-cpu.c
+index 6d4ee03d476a9..ec04fc7f5a641 100644
+--- a/arch/powerpc/platforms/pseries/hotplug-cpu.c
++++ b/arch/powerpc/platforms/pseries/hotplug-cpu.c
+@@ -107,22 +107,28 @@ static int pseries_cpu_disable(void)
+ */
+ static void pseries_cpu_die(unsigned int cpu)
+ {
+- int tries;
+ int cpu_status = 1;
+ unsigned int pcpu = get_hard_smp_processor_id(cpu);
++ unsigned long timeout = jiffies + msecs_to_jiffies(120000);
+
+- for (tries = 0; tries < 25; tries++) {
++ while (true) {
+ cpu_status = smp_query_cpu_stopped(pcpu);
+ if (cpu_status == QCSS_STOPPED ||
+ cpu_status == QCSS_HARDWARE_ERROR)
+ break;
+- cpu_relax();
+
++ if (time_after(jiffies, timeout)) {
++ pr_warn("CPU %i (hwid %i) didn't die after 120 seconds\n",
++ cpu, pcpu);
++ timeout = jiffies + msecs_to_jiffies(120000);
++ }
++
++ cond_resched();
+ }
+
+- if (cpu_status != 0) {
+- printk("Querying DEAD? cpu %i (%i) shows %i\n",
+- cpu, pcpu, cpu_status);
++ if (cpu_status == QCSS_HARDWARE_ERROR) {
++ pr_warn("CPU %i (hwid %i) reported error while dying\n",
++ cpu, pcpu);
+ }
+
+ /* Isolation and deallocation are definitely done by
+diff --git a/arch/powerpc/platforms/pseries/ras.c b/arch/powerpc/platforms/pseries/ras.c
+index f3736fcd98fcb..13c86a292c6d7 100644
+--- a/arch/powerpc/platforms/pseries/ras.c
++++ b/arch/powerpc/platforms/pseries/ras.c
+@@ -184,7 +184,6 @@ static void handle_system_shutdown(char event_modifier)
+ case EPOW_SHUTDOWN_ON_UPS:
+ pr_emerg("Loss of system power detected. System is running on"
+ " UPS/battery. Check RTAS error log for details\n");
+- orderly_poweroff(true);
+ break;
+
+ case EPOW_SHUTDOWN_LOSS_OF_CRITICAL_FUNCTIONS:
+diff --git a/arch/riscv/kernel/vmlinux.lds.S b/arch/riscv/kernel/vmlinux.lds.S
+index e6f8016b366ab..f3586e31ed1ec 100644
+--- a/arch/riscv/kernel/vmlinux.lds.S
++++ b/arch/riscv/kernel/vmlinux.lds.S
+@@ -22,6 +22,7 @@ SECTIONS
+ /* Beginning of code and text segment */
+ . = LOAD_OFFSET;
+ _start = .;
++ _stext = .;
+ HEAD_TEXT_SECTION
+ . = ALIGN(PAGE_SIZE);
+
+@@ -54,7 +55,6 @@ SECTIONS
+ . = ALIGN(SECTION_ALIGN);
+ .text : {
+ _text = .;
+- _stext = .;
+ TEXT_TEXT
+ SCHED_TEXT
+ CPUIDLE_TEXT
+diff --git a/arch/s390/kernel/ptrace.c b/arch/s390/kernel/ptrace.c
+index 3cc15c0662983..2924f236d89c6 100644
+--- a/arch/s390/kernel/ptrace.c
++++ b/arch/s390/kernel/ptrace.c
+@@ -1310,7 +1310,6 @@ static bool is_ri_cb_valid(struct runtime_instr_cb *cb)
+ cb->pc == 1 &&
+ cb->qc == 0 &&
+ cb->reserved2 == 0 &&
+- cb->key == PAGE_DEFAULT_KEY &&
+ cb->reserved3 == 0 &&
+ cb->reserved4 == 0 &&
+ cb->reserved5 == 0 &&
+@@ -1374,7 +1373,11 @@ static int s390_runtime_instr_set(struct task_struct *target,
+ kfree(data);
+ return -EINVAL;
+ }
+-
++ /*
++ * Override access key in any case, since user space should
++ * not be able to set it, nor should it care about it.
++ */
++ ri_cb.key = PAGE_DEFAULT_KEY >> 4;
+ preempt_disable();
+ if (!target->thread.ri_cb)
+ target->thread.ri_cb = data;
+diff --git a/arch/s390/kernel/runtime_instr.c b/arch/s390/kernel/runtime_instr.c
+index 125c7f6e87150..1788a5454b6fc 100644
+--- a/arch/s390/kernel/runtime_instr.c
++++ b/arch/s390/kernel/runtime_instr.c
+@@ -57,7 +57,7 @@ static void init_runtime_instr_cb(struct runtime_instr_cb *cb)
+ cb->k = 1;
+ cb->ps = 1;
+ cb->pc = 1;
+- cb->key = PAGE_DEFAULT_KEY;
++ cb->key = PAGE_DEFAULT_KEY >> 4;
+ cb->v = 1;
+ }
+
+diff --git a/arch/s390/pci/pci.c b/arch/s390/pci/pci.c
+index 3902c9f6f2d63..4b62d6b550246 100644
+--- a/arch/s390/pci/pci.c
++++ b/arch/s390/pci/pci.c
+@@ -672,6 +672,19 @@ int zpci_disable_device(struct zpci_dev *zdev)
+ }
+ EXPORT_SYMBOL_GPL(zpci_disable_device);
+
++void zpci_remove_device(struct zpci_dev *zdev)
++{
++ struct zpci_bus *zbus = zdev->zbus;
++ struct pci_dev *pdev;
++
++ pdev = pci_get_slot(zbus->bus, zdev->devfn);
++ if (pdev) {
++ if (pdev->is_virtfn)
++ return zpci_remove_virtfn(pdev, zdev->vfn);
++ pci_stop_and_remove_bus_device_locked(pdev);
++ }
++}
++
+ int zpci_create_device(struct zpci_dev *zdev)
+ {
+ int rc;
+@@ -716,13 +729,8 @@ void zpci_release_device(struct kref *kref)
+ {
+ struct zpci_dev *zdev = container_of(kref, struct zpci_dev, kref);
+
+- if (zdev->zbus->bus) {
+- struct pci_dev *pdev;
+-
+- pdev = pci_get_slot(zdev->zbus->bus, zdev->devfn);
+- if (pdev)
+- pci_stop_and_remove_bus_device_locked(pdev);
+- }
++ if (zdev->zbus->bus)
++ zpci_remove_device(zdev);
+
+ switch (zdev->state) {
+ case ZPCI_FN_STATE_ONLINE:
+diff --git a/arch/s390/pci/pci_bus.c b/arch/s390/pci/pci_bus.c
+index 642a993846889..5967f30141563 100644
+--- a/arch/s390/pci/pci_bus.c
++++ b/arch/s390/pci/pci_bus.c
+@@ -132,13 +132,14 @@ static int zpci_bus_link_virtfn(struct pci_dev *pdev,
+ {
+ int rc;
+
+- virtfn->physfn = pci_dev_get(pdev);
+ rc = pci_iov_sysfs_link(pdev, virtfn, vfid);
+- if (rc) {
+- pci_dev_put(pdev);
+- virtfn->physfn = NULL;
++ if (rc)
+ return rc;
+- }
++
++ virtfn->is_virtfn = 1;
++ virtfn->multifunction = 0;
++ virtfn->physfn = pci_dev_get(pdev);
++
+ return 0;
+ }
+
+@@ -151,9 +152,9 @@ static int zpci_bus_setup_virtfn(struct zpci_bus *zbus,
+ int vfid = vfn - 1; /* Linux' vfid's start at 0 vfn at 1*/
+ int rc = 0;
+
+- virtfn->is_virtfn = 1;
+- virtfn->multifunction = 0;
+- WARN_ON(vfid < 0);
++ if (!zbus->multifunction)
++ return 0;
++
+ /* If the parent PF for the given VF is also configured in the
+ * instance, it must be on the same zbus.
+ * We can then identify the parent PF by checking what
+@@ -165,11 +166,17 @@ static int zpci_bus_setup_virtfn(struct zpci_bus *zbus,
+ zdev = zbus->function[i];
+ if (zdev && zdev->is_physfn) {
+ pdev = pci_get_slot(zbus->bus, zdev->devfn);
++ if (!pdev)
++ continue;
+ cand_devfn = pci_iov_virtfn_devfn(pdev, vfid);
+ if (cand_devfn == virtfn->devfn) {
+ rc = zpci_bus_link_virtfn(pdev, virtfn, vfid);
++ /* balance pci_get_slot() */
++ pci_dev_put(pdev);
+ break;
+ }
++ /* balance pci_get_slot() */
++ pci_dev_put(pdev);
+ }
+ }
+ return rc;
+@@ -178,12 +185,23 @@ static int zpci_bus_setup_virtfn(struct zpci_bus *zbus,
+ static inline int zpci_bus_setup_virtfn(struct zpci_bus *zbus,
+ struct pci_dev *virtfn, int vfn)
+ {
+- virtfn->is_virtfn = 1;
+- virtfn->multifunction = 0;
+ return 0;
+ }
+ #endif
+
++void pcibios_bus_add_device(struct pci_dev *pdev)
++{
++ struct zpci_dev *zdev = to_zpci(pdev);
++
++ /*
++ * With pdev->no_vf_scan the common PCI probing code does not
++ * perform PF/VF linking.
++ */
++ if (zdev->vfn)
++ zpci_bus_setup_virtfn(zdev->zbus, pdev, zdev->vfn);
++
++}
++
+ static int zpci_bus_add_device(struct zpci_bus *zbus, struct zpci_dev *zdev)
+ {
+ struct pci_bus *bus;
+@@ -214,20 +232,10 @@ static int zpci_bus_add_device(struct zpci_bus *zbus, struct zpci_dev *zdev)
+ }
+
+ pdev = pci_scan_single_device(bus, zdev->devfn);
+- if (pdev) {
+- if (!zdev->is_physfn) {
+- rc = zpci_bus_setup_virtfn(zbus, pdev, zdev->vfn);
+- if (rc)
+- goto failed_with_pdev;
+- }
++ if (pdev)
+ pci_bus_add_device(pdev);
+- }
+- return 0;
+
+-failed_with_pdev:
+- pci_stop_and_remove_bus_device(pdev);
+- pci_dev_put(pdev);
+- return rc;
++ return 0;
+ }
+
+ static void zpci_bus_add_devices(struct zpci_bus *zbus)
+diff --git a/arch/s390/pci/pci_bus.h b/arch/s390/pci/pci_bus.h
+index 89be3c354b7bc..4972433df4581 100644
+--- a/arch/s390/pci/pci_bus.h
++++ b/arch/s390/pci/pci_bus.h
+@@ -29,3 +29,16 @@ static inline struct zpci_dev *get_zdev_by_bus(struct pci_bus *bus,
+
+ return (devfn >= ZPCI_FUNCTIONS_PER_BUS) ? NULL : zbus->function[devfn];
+ }
++
++#ifdef CONFIG_PCI_IOV
++static inline void zpci_remove_virtfn(struct pci_dev *pdev, int vfn)
++{
++
++ pci_lock_rescan_remove();
++ /* Linux' vfid's start at 0 vfn at 1 */
++ pci_iov_remove_virtfn(pdev->physfn, vfn - 1);
++ pci_unlock_rescan_remove();
++}
++#else /* CONFIG_PCI_IOV */
++static inline void zpci_remove_virtfn(struct pci_dev *pdev, int vfn) {}
++#endif /* CONFIG_PCI_IOV */
+diff --git a/arch/s390/pci/pci_event.c b/arch/s390/pci/pci_event.c
+index fdebd286f4023..9a3a291cad432 100644
+--- a/arch/s390/pci/pci_event.c
++++ b/arch/s390/pci/pci_event.c
+@@ -92,6 +92,9 @@ static void __zpci_event_availability(struct zpci_ccdf_avail *ccdf)
+ ret = clp_add_pci_device(ccdf->fid, ccdf->fh, 1);
+ break;
+ }
++ /* the configuration request may be stale */
++ if (zdev->state != ZPCI_FN_STATE_STANDBY)
++ break;
+ zdev->fh = ccdf->fh;
+ zdev->state = ZPCI_FN_STATE_CONFIGURED;
+ ret = zpci_enable_device(zdev);
+@@ -118,7 +121,7 @@ static void __zpci_event_availability(struct zpci_ccdf_avail *ccdf)
+ if (!zdev)
+ break;
+ if (pdev)
+- pci_stop_and_remove_bus_device_locked(pdev);
++ zpci_remove_device(zdev);
+
+ ret = zpci_disable_device(zdev);
+ if (ret)
+@@ -137,7 +140,7 @@ static void __zpci_event_availability(struct zpci_ccdf_avail *ccdf)
+ /* Give the driver a hint that the function is
+ * already unusable. */
+ pdev->error_state = pci_channel_io_perm_failure;
+- pci_stop_and_remove_bus_device_locked(pdev);
++ zpci_remove_device(zdev);
+ }
+
+ zdev->state = ZPCI_FN_STATE_STANDBY;
+diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
+index be5363b215409..c6908a3d551e1 100644
+--- a/arch/x86/include/asm/kvm_host.h
++++ b/arch/x86/include/asm/kvm_host.h
+@@ -1641,7 +1641,8 @@ asmlinkage void kvm_spurious_fault(void);
+ _ASM_EXTABLE(666b, 667b)
+
+ #define KVM_ARCH_WANT_MMU_NOTIFIER
+-int kvm_unmap_hva_range(struct kvm *kvm, unsigned long start, unsigned long end);
++int kvm_unmap_hva_range(struct kvm *kvm, unsigned long start, unsigned long end,
++ unsigned flags);
+ int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end);
+ int kvm_test_age_hva(struct kvm *kvm, unsigned long hva);
+ int kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte);
+diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
+index 6d6a0ae7800c6..9516a958e7801 100644
+--- a/arch/x86/kvm/mmu/mmu.c
++++ b/arch/x86/kvm/mmu/mmu.c
+@@ -1971,7 +1971,8 @@ static int kvm_handle_hva(struct kvm *kvm, unsigned long hva,
+ return kvm_handle_hva_range(kvm, hva, hva + 1, data, handler);
+ }
+
+-int kvm_unmap_hva_range(struct kvm *kvm, unsigned long start, unsigned long end)
++int kvm_unmap_hva_range(struct kvm *kvm, unsigned long start, unsigned long end,
++ unsigned flags)
+ {
+ return kvm_handle_hva_range(kvm, start, end, 0, kvm_unmap_rmapp);
+ }
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index 4fe976c2495ea..f7304132d5907 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -967,7 +967,7 @@ int kvm_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
+ {
+ unsigned long old_cr4 = kvm_read_cr4(vcpu);
+ unsigned long pdptr_bits = X86_CR4_PGE | X86_CR4_PSE | X86_CR4_PAE |
+- X86_CR4_SMEP | X86_CR4_SMAP | X86_CR4_PKE;
++ X86_CR4_SMEP;
+
+ if (kvm_valid_cr4(vcpu, cr4))
+ return 1;
+diff --git a/arch/x86/pci/xen.c b/arch/x86/pci/xen.c
+index e3f1ca3160684..db34fee931388 100644
+--- a/arch/x86/pci/xen.c
++++ b/arch/x86/pci/xen.c
+@@ -26,6 +26,7 @@
+ #include <asm/xen/pci.h>
+ #include <asm/xen/cpuid.h>
+ #include <asm/apic.h>
++#include <asm/acpi.h>
+ #include <asm/i8259.h>
+
+ static int xen_pcifront_enable_irq(struct pci_dev *dev)
+diff --git a/arch/x86/platform/efi/efi_64.c b/arch/x86/platform/efi/efi_64.c
+index 8e364c4c67683..7caa658373563 100644
+--- a/arch/x86/platform/efi/efi_64.c
++++ b/arch/x86/platform/efi/efi_64.c
+@@ -268,6 +268,8 @@ int __init efi_setup_page_tables(unsigned long pa_memmap, unsigned num_pages)
+ npages = (__end_rodata - __start_rodata) >> PAGE_SHIFT;
+ rodata = __pa(__start_rodata);
+ pfn = rodata >> PAGE_SHIFT;
++
++ pf = _PAGE_NX | _PAGE_ENC;
+ if (kernel_map_pages_in_pgd(pgd, pfn, rodata, npages, pf)) {
+ pr_err("Failed to map kernel rodata 1:1\n");
+ return 1;
+diff --git a/drivers/cpufreq/intel_pstate.c b/drivers/cpufreq/intel_pstate.c
+index 7e0f7880b21a6..c7540ad28995b 100644
+--- a/drivers/cpufreq/intel_pstate.c
++++ b/drivers/cpufreq/intel_pstate.c
+@@ -1572,6 +1572,7 @@ static void intel_pstate_get_cpu_pstates(struct cpudata *cpu)
+
+ intel_pstate_get_hwp_max(cpu->cpu, &phy_max, ¤t_max);
+ cpu->pstate.turbo_freq = phy_max * cpu->pstate.scaling;
++ cpu->pstate.turbo_pstate = phy_max;
+ } else {
+ cpu->pstate.turbo_freq = cpu->pstate.turbo_pstate * cpu->pstate.scaling;
+ }
+diff --git a/drivers/edac/i7core_edac.c b/drivers/edac/i7core_edac.c
+index 5860ca41185cf..2acd9f9284a26 100644
+--- a/drivers/edac/i7core_edac.c
++++ b/drivers/edac/i7core_edac.c
+@@ -1710,9 +1710,9 @@ static void i7core_mce_output_error(struct mem_ctl_info *mci,
+ if (uncorrected_error) {
+ core_err_cnt = 1;
+ if (ripv)
+- tp_event = HW_EVENT_ERR_FATAL;
+- else
+ tp_event = HW_EVENT_ERR_UNCORRECTED;
++ else
++ tp_event = HW_EVENT_ERR_FATAL;
+ } else {
+ tp_event = HW_EVENT_ERR_CORRECTED;
+ }
+diff --git a/drivers/edac/pnd2_edac.c b/drivers/edac/pnd2_edac.c
+index c1f2e6deb021a..4b44ea6b03adf 100644
+--- a/drivers/edac/pnd2_edac.c
++++ b/drivers/edac/pnd2_edac.c
+@@ -1155,7 +1155,7 @@ static void pnd2_mce_output_error(struct mem_ctl_info *mci, const struct mce *m,
+ u32 optypenum = GET_BITFIELD(m->status, 4, 6);
+ int rc;
+
+- tp_event = uc_err ? (ripv ? HW_EVENT_ERR_FATAL : HW_EVENT_ERR_UNCORRECTED) :
++ tp_event = uc_err ? (ripv ? HW_EVENT_ERR_UNCORRECTED : HW_EVENT_ERR_FATAL) :
+ HW_EVENT_ERR_CORRECTED;
+
+ /*
+diff --git a/drivers/edac/sb_edac.c b/drivers/edac/sb_edac.c
+index d414698ca3242..c5ab634cb6a49 100644
+--- a/drivers/edac/sb_edac.c
++++ b/drivers/edac/sb_edac.c
+@@ -2982,9 +2982,9 @@ static void sbridge_mce_output_error(struct mem_ctl_info *mci,
+ if (uncorrected_error) {
+ core_err_cnt = 1;
+ if (ripv) {
+- tp_event = HW_EVENT_ERR_FATAL;
+- } else {
+ tp_event = HW_EVENT_ERR_UNCORRECTED;
++ } else {
++ tp_event = HW_EVENT_ERR_FATAL;
+ }
+ } else {
+ tp_event = HW_EVENT_ERR_CORRECTED;
+diff --git a/drivers/edac/skx_common.c b/drivers/edac/skx_common.c
+index 6d8d6dc626bfe..2b4ce8e5ac2fa 100644
+--- a/drivers/edac/skx_common.c
++++ b/drivers/edac/skx_common.c
+@@ -493,9 +493,9 @@ static void skx_mce_output_error(struct mem_ctl_info *mci,
+ if (uncorrected_error) {
+ core_err_cnt = 1;
+ if (ripv) {
+- tp_event = HW_EVENT_ERR_FATAL;
+- } else {
+ tp_event = HW_EVENT_ERR_UNCORRECTED;
++ } else {
++ tp_event = HW_EVENT_ERR_FATAL;
+ }
+ } else {
+ tp_event = HW_EVENT_ERR_CORRECTED;
+diff --git a/drivers/firmware/efi/efi.c b/drivers/firmware/efi/efi.c
+index fdd1db025dbfd..3aa07c3b51369 100644
+--- a/drivers/firmware/efi/efi.c
++++ b/drivers/firmware/efi/efi.c
+@@ -381,6 +381,7 @@ static int __init efisubsys_init(void)
+ efi_kobj = kobject_create_and_add("efi", firmware_kobj);
+ if (!efi_kobj) {
+ pr_err("efi: Firmware registration failed.\n");
++ destroy_workqueue(efi_rts_wq);
+ return -ENOMEM;
+ }
+
+@@ -424,6 +425,7 @@ err_unregister:
+ generic_ops_unregister();
+ err_put:
+ kobject_put(efi_kobj);
++ destroy_workqueue(efi_rts_wq);
+ return error;
+ }
+
+diff --git a/drivers/firmware/efi/libstub/efi-stub-helper.c b/drivers/firmware/efi/libstub/efi-stub-helper.c
+index 6bca70bbb43d0..f735db55adc03 100644
+--- a/drivers/firmware/efi/libstub/efi-stub-helper.c
++++ b/drivers/firmware/efi/libstub/efi-stub-helper.c
+@@ -187,20 +187,28 @@ int efi_printk(const char *fmt, ...)
+ */
+ efi_status_t efi_parse_options(char const *cmdline)
+ {
+- size_t len = strlen(cmdline) + 1;
++ size_t len;
+ efi_status_t status;
+ char *str, *buf;
+
++ if (!cmdline)
++ return EFI_SUCCESS;
++
++ len = strnlen(cmdline, COMMAND_LINE_SIZE - 1) + 1;
+ status = efi_bs_call(allocate_pool, EFI_LOADER_DATA, len, (void **)&buf);
+ if (status != EFI_SUCCESS)
+ return status;
+
+- str = skip_spaces(memcpy(buf, cmdline, len));
++ memcpy(buf, cmdline, len - 1);
++ buf[len - 1] = '\0';
++ str = skip_spaces(buf);
+
+ while (*str) {
+ char *param, *val;
+
+ str = next_arg(str, ¶m, &val);
++ if (!val && !strcmp(param, "--"))
++ break;
+
+ if (!strcmp(param, "nokaslr")) {
+ efi_nokaslr = true;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c
+index d399e58931705..74459927f97f7 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c
+@@ -465,7 +465,7 @@ int amdgpu_vram_mgr_alloc_sgt(struct amdgpu_device *adev,
+ unsigned int pages;
+ int i, r;
+
+- *sgt = kmalloc(sizeof(*sg), GFP_KERNEL);
++ *sgt = kmalloc(sizeof(**sgt), GFP_KERNEL);
+ if (!*sgt)
+ return -ENOMEM;
+
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index 195d621145ba5..0a39a8558b294 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -2184,6 +2184,7 @@ void amdgpu_dm_update_connector_after_detect(
+
+ drm_connector_update_edid_property(connector,
+ aconnector->edid);
++ drm_add_edid_modes(connector, aconnector->edid);
+
+ if (aconnector->dc_link->aux_mode)
+ drm_dp_cec_set_edid(&aconnector->dm_dp_aux.aux,
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link.c b/drivers/gpu/drm/amd/display/dc/core/dc_link.c
+index 841cc051b7d01..31aa31c280ee6 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc_link.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc_link.c
+@@ -3265,12 +3265,11 @@ void core_link_disable_stream(struct pipe_ctx *pipe_ctx)
+ core_link_set_avmute(pipe_ctx, true);
+ }
+
++ dc->hwss.blank_stream(pipe_ctx);
+ #if defined(CONFIG_DRM_AMD_DC_HDCP)
+ update_psp_stream_config(pipe_ctx, true);
+ #endif
+
+- dc->hwss.blank_stream(pipe_ctx);
+-
+ if (pipe_ctx->stream->signal == SIGNAL_TYPE_DISPLAY_PORT_MST)
+ deallocate_mst_payload(pipe_ctx);
+
+@@ -3298,11 +3297,9 @@ void core_link_disable_stream(struct pipe_ctx *pipe_ctx)
+ write_i2c_redriver_setting(pipe_ctx, false);
+ }
+ }
+-
+- disable_link(pipe_ctx->stream->link, pipe_ctx->stream->signal);
+-
+ dc->hwss.disable_stream(pipe_ctx);
+
++ disable_link(pipe_ctx->stream->link, pipe_ctx->stream->signal);
+ if (pipe_ctx->stream->timing.flags.DSC) {
+ if (dc_is_dp_signal(pipe_ctx->stream->signal))
+ dp_set_dsc_enable(pipe_ctx, false);
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c b/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
+index 6124af571bff6..91cd884d6f257 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
+@@ -1102,10 +1102,6 @@ static inline enum link_training_result perform_link_training_int(
+ dpcd_pattern.v1_4.TRAINING_PATTERN_SET = DPCD_TRAINING_PATTERN_VIDEOIDLE;
+ dpcd_set_training_pattern(link, dpcd_pattern);
+
+- /* delay 5ms after notifying sink of idle pattern before switching output */
+- if (link->connector_signal != SIGNAL_TYPE_EDP)
+- msleep(5);
+-
+ /* 4. mainlink output idle pattern*/
+ dp_set_hw_test_pattern(link, DP_TEST_PATTERN_VIDEO_MODE, NULL, 0);
+
+@@ -1555,12 +1551,6 @@ bool perform_link_training_with_retries(
+ struct dc_link *link = stream->link;
+ enum dp_panel_mode panel_mode = dp_get_panel_mode(link);
+
+- /* We need to do this before the link training to ensure the idle pattern in SST
+- * mode will be sent right after the link training
+- */
+- link->link_enc->funcs->connect_dig_be_to_fe(link->link_enc,
+- pipe_ctx->stream_res.stream_enc->id, true);
+-
+ for (j = 0; j < attempts; ++j) {
+
+ dp_enable_link_phy(
+@@ -1577,6 +1567,12 @@ bool perform_link_training_with_retries(
+
+ dp_set_panel_mode(link, panel_mode);
+
++ /* We need to do this before the link training to ensure the idle pattern in SST
++ * mode will be sent right after the link training
++ */
++ link->link_enc->funcs->connect_dig_be_to_fe(link->link_enc,
++ pipe_ctx->stream_res.stream_enc->id, true);
++
+ if (link->aux_access_disabled) {
+ dc_link_dp_perform_link_training_skip_aux(link, link_setting);
+ return true;
+diff --git a/drivers/gpu/drm/amd/display/dc/dce/dce_panel_cntl.h b/drivers/gpu/drm/amd/display/dc/dce/dce_panel_cntl.h
+index 70ec691e14d2d..99c68ca9c7e00 100644
+--- a/drivers/gpu/drm/amd/display/dc/dce/dce_panel_cntl.h
++++ b/drivers/gpu/drm/amd/display/dc/dce/dce_panel_cntl.h
+@@ -49,7 +49,7 @@
+ #define DCN_PANEL_CNTL_REG_LIST()\
+ DCN_PANEL_CNTL_SR(PWRSEQ_CNTL, LVTMA), \
+ DCN_PANEL_CNTL_SR(PWRSEQ_STATE, LVTMA), \
+- DCE_PANEL_CNTL_SR(PWRSEQ_REF_DIV, LVTMA), \
++ DCN_PANEL_CNTL_SR(PWRSEQ_REF_DIV, LVTMA), \
+ SR(BL_PWM_CNTL), \
+ SR(BL_PWM_CNTL2), \
+ SR(BL_PWM_PERIOD_CNTL), \
+diff --git a/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c b/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c
+index 2af1d74d16ad8..b77e9dc160863 100644
+--- a/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c
++++ b/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c
+@@ -1069,17 +1069,8 @@ void dce110_blank_stream(struct pipe_ctx *pipe_ctx)
+ link->dc->hwss.set_abm_immediate_disable(pipe_ctx);
+ }
+
+- if (dc_is_dp_signal(pipe_ctx->stream->signal)) {
++ if (dc_is_dp_signal(pipe_ctx->stream->signal))
+ pipe_ctx->stream_res.stream_enc->funcs->dp_blank(pipe_ctx->stream_res.stream_enc);
+-
+- /*
+- * After output is idle pattern some sinks need time to recognize the stream
+- * has changed or they enter protection state and hang.
+- */
+- if (!dc_is_embedded_signal(pipe_ctx->stream->signal))
+- msleep(60);
+- }
+-
+ }
+
+
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c
+index da5333d165ace..ec63cb8533607 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c
+@@ -1386,8 +1386,8 @@ static void dcn20_update_dchubp_dpp(
+
+ /* Any updates are handled in dc interface, just need to apply existing for plane enable */
+ if ((pipe_ctx->update_flags.bits.enable || pipe_ctx->update_flags.bits.opp_changed ||
+- pipe_ctx->update_flags.bits.scaler || pipe_ctx->update_flags.bits.viewport)
+- && pipe_ctx->stream->cursor_attributes.address.quad_part != 0) {
++ pipe_ctx->update_flags.bits.scaler || viewport_changed == true) &&
++ pipe_ctx->stream->cursor_attributes.address.quad_part != 0) {
+ dc->hwss.set_cursor_position(pipe_ctx);
+ dc->hwss.set_cursor_attribute(pipe_ctx);
+
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c
+index cef1aa938ab54..2d9055eb3ce92 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c
+@@ -3097,7 +3097,7 @@ static bool dcn20_validate_bandwidth_internal(struct dc *dc, struct dc_state *co
+ int vlevel = 0;
+ int pipe_split_from[MAX_PIPES];
+ int pipe_cnt = 0;
+- display_e2e_pipe_params_st *pipes = kzalloc(dc->res_pool->pipe_count * sizeof(display_e2e_pipe_params_st), GFP_KERNEL);
++ display_e2e_pipe_params_st *pipes = kzalloc(dc->res_pool->pipe_count * sizeof(display_e2e_pipe_params_st), GFP_ATOMIC);
+ DC_LOGGER_INIT(dc->ctx->logger);
+
+ BW_VAL_TRACE_COUNT();
+diff --git a/drivers/gpu/drm/amd/display/include/fixed31_32.h b/drivers/gpu/drm/amd/display/include/fixed31_32.h
+index 89ef9f6860e5b..16df2a485dd0d 100644
+--- a/drivers/gpu/drm/amd/display/include/fixed31_32.h
++++ b/drivers/gpu/drm/amd/display/include/fixed31_32.h
+@@ -431,6 +431,9 @@ struct fixed31_32 dc_fixpt_log(struct fixed31_32 arg);
+ */
+ static inline struct fixed31_32 dc_fixpt_pow(struct fixed31_32 arg1, struct fixed31_32 arg2)
+ {
++ if (arg1.value == 0)
++ return arg2.value == 0 ? dc_fixpt_one : dc_fixpt_zero;
++
+ return dc_fixpt_exp(
+ dc_fixpt_mul(
+ dc_fixpt_log(arg1),
+diff --git a/drivers/gpu/drm/ast/ast_drv.c b/drivers/gpu/drm/ast/ast_drv.c
+index b7ba22dddcad9..83509106f3ba9 100644
+--- a/drivers/gpu/drm/ast/ast_drv.c
++++ b/drivers/gpu/drm/ast/ast_drv.c
+@@ -59,7 +59,6 @@ static struct drm_driver driver;
+ static const struct pci_device_id pciidlist[] = {
+ AST_VGA_DEVICE(PCI_CHIP_AST2000, NULL),
+ AST_VGA_DEVICE(PCI_CHIP_AST2100, NULL),
+- /* AST_VGA_DEVICE(PCI_CHIP_AST1180, NULL), - don't bind to 1180 for now */
+ {0, 0, 0},
+ };
+
+diff --git a/drivers/gpu/drm/ast/ast_drv.h b/drivers/gpu/drm/ast/ast_drv.h
+index 656d591b154b3..09f2659e29118 100644
+--- a/drivers/gpu/drm/ast/ast_drv.h
++++ b/drivers/gpu/drm/ast/ast_drv.h
+@@ -52,7 +52,6 @@
+
+ #define PCI_CHIP_AST2000 0x2000
+ #define PCI_CHIP_AST2100 0x2010
+-#define PCI_CHIP_AST1180 0x1180
+
+
+ enum ast_chip {
+@@ -64,7 +63,6 @@ enum ast_chip {
+ AST2300,
+ AST2400,
+ AST2500,
+- AST1180,
+ };
+
+ enum ast_tx_chip {
+diff --git a/drivers/gpu/drm/ast/ast_main.c b/drivers/gpu/drm/ast/ast_main.c
+index e5398e3dabe70..99c11b51f0207 100644
+--- a/drivers/gpu/drm/ast/ast_main.c
++++ b/drivers/gpu/drm/ast/ast_main.c
+@@ -142,50 +142,42 @@ static int ast_detect_chip(struct drm_device *dev, bool *need_post)
+ ast_detect_config_mode(dev, &scu_rev);
+
+ /* Identify chipset */
+- if (dev->pdev->device == PCI_CHIP_AST1180) {
+- ast->chip = AST1100;
+- DRM_INFO("AST 1180 detected\n");
+- } else {
+- if (dev->pdev->revision >= 0x40) {
+- ast->chip = AST2500;
+- DRM_INFO("AST 2500 detected\n");
+- } else if (dev->pdev->revision >= 0x30) {
+- ast->chip = AST2400;
+- DRM_INFO("AST 2400 detected\n");
+- } else if (dev->pdev->revision >= 0x20) {
+- ast->chip = AST2300;
+- DRM_INFO("AST 2300 detected\n");
+- } else if (dev->pdev->revision >= 0x10) {
+- switch (scu_rev & 0x0300) {
+- case 0x0200:
+- ast->chip = AST1100;
+- DRM_INFO("AST 1100 detected\n");
+- break;
+- case 0x0100:
+- ast->chip = AST2200;
+- DRM_INFO("AST 2200 detected\n");
+- break;
+- case 0x0000:
+- ast->chip = AST2150;
+- DRM_INFO("AST 2150 detected\n");
+- break;
+- default:
+- ast->chip = AST2100;
+- DRM_INFO("AST 2100 detected\n");
+- break;
+- }
+- ast->vga2_clone = false;
+- } else {
+- ast->chip = AST2000;
+- DRM_INFO("AST 2000 detected\n");
++ if (dev->pdev->revision >= 0x40) {
++ ast->chip = AST2500;
++ DRM_INFO("AST 2500 detected\n");
++ } else if (dev->pdev->revision >= 0x30) {
++ ast->chip = AST2400;
++ DRM_INFO("AST 2400 detected\n");
++ } else if (dev->pdev->revision >= 0x20) {
++ ast->chip = AST2300;
++ DRM_INFO("AST 2300 detected\n");
++ } else if (dev->pdev->revision >= 0x10) {
++ switch (scu_rev & 0x0300) {
++ case 0x0200:
++ ast->chip = AST1100;
++ DRM_INFO("AST 1100 detected\n");
++ break;
++ case 0x0100:
++ ast->chip = AST2200;
++ DRM_INFO("AST 2200 detected\n");
++ break;
++ case 0x0000:
++ ast->chip = AST2150;
++ DRM_INFO("AST 2150 detected\n");
++ break;
++ default:
++ ast->chip = AST2100;
++ DRM_INFO("AST 2100 detected\n");
++ break;
+ }
++ ast->vga2_clone = false;
++ } else {
++ ast->chip = AST2000;
++ DRM_INFO("AST 2000 detected\n");
+ }
+
+ /* Check if we support wide screen */
+ switch (ast->chip) {
+- case AST1180:
+- ast->support_wide_screen = true;
+- break;
+ case AST2000:
+ ast->support_wide_screen = false;
+ break;
+@@ -466,19 +458,17 @@ int ast_driver_load(struct drm_device *dev, unsigned long flags)
+
+ ast_detect_chip(dev, &need_post);
+
++ ret = ast_get_dram_info(dev);
++ if (ret)
++ goto out_free;
++ ast->vram_size = ast_get_vram_info(dev);
++ DRM_INFO("dram MCLK=%u Mhz type=%d bus_width=%d size=%08x\n",
++ ast->mclk, ast->dram_type,
++ ast->dram_bus_width, ast->vram_size);
++
+ if (need_post)
+ ast_post_gpu(dev);
+
+- if (ast->chip != AST1180) {
+- ret = ast_get_dram_info(dev);
+- if (ret)
+- goto out_free;
+- ast->vram_size = ast_get_vram_info(dev);
+- DRM_INFO("dram MCLK=%u Mhz type=%d bus_width=%d size=%08x\n",
+- ast->mclk, ast->dram_type,
+- ast->dram_bus_width, ast->vram_size);
+- }
+-
+ ret = ast_mm_init(ast);
+ if (ret)
+ goto out_free;
+@@ -496,8 +486,7 @@ int ast_driver_load(struct drm_device *dev, unsigned long flags)
+ ast->chip == AST2200 ||
+ ast->chip == AST2300 ||
+ ast->chip == AST2400 ||
+- ast->chip == AST2500 ||
+- ast->chip == AST1180) {
++ ast->chip == AST2500) {
+ dev->mode_config.max_width = 1920;
+ dev->mode_config.max_height = 2048;
+ } else {
+diff --git a/drivers/gpu/drm/ast/ast_mode.c b/drivers/gpu/drm/ast/ast_mode.c
+index 3a3a511670c9c..73fd76cec5120 100644
+--- a/drivers/gpu/drm/ast/ast_mode.c
++++ b/drivers/gpu/drm/ast/ast_mode.c
+@@ -769,9 +769,6 @@ static void ast_crtc_dpms(struct drm_crtc *crtc, int mode)
+ {
+ struct ast_private *ast = crtc->dev->dev_private;
+
+- if (ast->chip == AST1180)
+- return;
+-
+ /* TODO: Maybe control display signal generation with
+ * Sync Enable (bit CR17.7).
+ */
+@@ -793,16 +790,10 @@ static void ast_crtc_dpms(struct drm_crtc *crtc, int mode)
+ static int ast_crtc_helper_atomic_check(struct drm_crtc *crtc,
+ struct drm_crtc_state *state)
+ {
+- struct ast_private *ast = crtc->dev->dev_private;
+ struct ast_crtc_state *ast_state;
+ const struct drm_format_info *format;
+ bool succ;
+
+- if (ast->chip == AST1180) {
+- DRM_ERROR("AST 1180 modesetting not supported\n");
+- return -EINVAL;
+- }
+-
+ if (!state->enable)
+ return 0; /* no mode checks if CRTC is being disabled */
+
+@@ -1044,7 +1035,7 @@ static enum drm_mode_status ast_mode_valid(struct drm_connector *connector,
+
+ if ((ast->chip == AST2100) || (ast->chip == AST2200) ||
+ (ast->chip == AST2300) || (ast->chip == AST2400) ||
+- (ast->chip == AST2500) || (ast->chip == AST1180)) {
++ (ast->chip == AST2500)) {
+ if ((mode->hdisplay == 1920) && (mode->vdisplay == 1080))
+ return MODE_OK;
+
+diff --git a/drivers/gpu/drm/ast/ast_post.c b/drivers/gpu/drm/ast/ast_post.c
+index 2d1b186197432..af0c8ebb009a1 100644
+--- a/drivers/gpu/drm/ast/ast_post.c
++++ b/drivers/gpu/drm/ast/ast_post.c
+@@ -58,13 +58,9 @@ bool ast_is_vga_enabled(struct drm_device *dev)
+ struct ast_private *ast = dev->dev_private;
+ u8 ch;
+
+- if (ast->chip == AST1180) {
+- /* TODO 1180 */
+- } else {
+- ch = ast_io_read8(ast, AST_IO_VGA_ENABLE_PORT);
+- return !!(ch & 0x01);
+- }
+- return false;
++ ch = ast_io_read8(ast, AST_IO_VGA_ENABLE_PORT);
++
++ return !!(ch & 0x01);
+ }
+
+ static const u8 extreginfo[] = { 0x0f, 0x04, 0x1c, 0xff };
+diff --git a/drivers/gpu/drm/i915/i915_pmu.c b/drivers/gpu/drm/i915/i915_pmu.c
+index 962ded9ce73fd..9792220ddbe2e 100644
+--- a/drivers/gpu/drm/i915/i915_pmu.c
++++ b/drivers/gpu/drm/i915/i915_pmu.c
+@@ -441,8 +441,10 @@ static u64 count_interrupts(struct drm_i915_private *i915)
+
+ static void i915_pmu_event_destroy(struct perf_event *event)
+ {
+- WARN_ON(event->parent);
+- module_put(THIS_MODULE);
++ struct drm_i915_private *i915 =
++ container_of(event->pmu, typeof(*i915), pmu.base);
++
++ drm_WARN_ON(&i915->drm, event->parent);
+ }
+
+ static int
+@@ -534,10 +536,8 @@ static int i915_pmu_event_init(struct perf_event *event)
+ if (ret)
+ return ret;
+
+- if (!event->parent) {
+- __module_get(THIS_MODULE);
++ if (!event->parent)
+ event->destroy = i915_pmu_event_destroy;
+- }
+
+ return 0;
+ }
+@@ -1058,8 +1058,10 @@ static int i915_pmu_register_cpuhp_state(struct i915_pmu *pmu)
+
+ static void i915_pmu_unregister_cpuhp_state(struct i915_pmu *pmu)
+ {
+- WARN_ON(pmu->cpuhp.slot == CPUHP_INVALID);
+- WARN_ON(cpuhp_state_remove_instance(pmu->cpuhp.slot, &pmu->cpuhp.node));
++ struct drm_i915_private *i915 = container_of(pmu, typeof(*i915), pmu);
++
++ drm_WARN_ON(&i915->drm, pmu->cpuhp.slot == CPUHP_INVALID);
++ drm_WARN_ON(&i915->drm, cpuhp_state_remove_instance(pmu->cpuhp.slot, &pmu->cpuhp.node));
+ cpuhp_remove_multi_state(pmu->cpuhp.slot);
+ pmu->cpuhp.slot = CPUHP_INVALID;
+ }
+@@ -1121,6 +1123,7 @@ void i915_pmu_register(struct drm_i915_private *i915)
+ if (!pmu->base.attr_groups)
+ goto err_attr;
+
++ pmu->base.module = THIS_MODULE;
+ pmu->base.task_ctx_nr = perf_invalid_context;
+ pmu->base.event_init = i915_pmu_event_init;
+ pmu->base.add = i915_pmu_event_add;
+diff --git a/drivers/gpu/drm/panel/panel-simple.c b/drivers/gpu/drm/panel/panel-simple.c
+index 444b77490a42a..7debf2ca42522 100644
+--- a/drivers/gpu/drm/panel/panel-simple.c
++++ b/drivers/gpu/drm/panel/panel-simple.c
+@@ -1717,7 +1717,7 @@ static const struct drm_display_mode frida_frd350h54004_mode = {
+ .vsync_end = 240 + 2 + 6,
+ .vtotal = 240 + 2 + 6 + 2,
+ .vrefresh = 60,
+- .flags = DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC,
++ .flags = DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC,
+ };
+
+ static const struct panel_desc frida_frd350h54004 = {
+diff --git a/drivers/gpu/drm/ttm/ttm_bo_vm.c b/drivers/gpu/drm/ttm/ttm_bo_vm.c
+index fa03fab02076d..33526c5df0e8c 100644
+--- a/drivers/gpu/drm/ttm/ttm_bo_vm.c
++++ b/drivers/gpu/drm/ttm/ttm_bo_vm.c
+@@ -505,8 +505,10 @@ static int ttm_bo_vm_access_kmap(struct ttm_buffer_object *bo,
+ int ttm_bo_vm_access(struct vm_area_struct *vma, unsigned long addr,
+ void *buf, int len, int write)
+ {
+- unsigned long offset = (addr) - vma->vm_start;
+ struct ttm_buffer_object *bo = vma->vm_private_data;
++ unsigned long offset = (addr) - vma->vm_start +
++ ((vma->vm_pgoff - drm_vma_node_start(&bo->base.vma_node))
++ << PAGE_SHIFT);
+ int ret;
+
+ if (len < 1 || (offset + len) >> PAGE_SHIFT > bo->num_pages)
+diff --git a/drivers/gpu/drm/vgem/vgem_drv.c b/drivers/gpu/drm/vgem/vgem_drv.c
+index ec1a8ebb6f1bf..fa39d140adc6c 100644
+--- a/drivers/gpu/drm/vgem/vgem_drv.c
++++ b/drivers/gpu/drm/vgem/vgem_drv.c
+@@ -230,32 +230,6 @@ static int vgem_gem_dumb_create(struct drm_file *file, struct drm_device *dev,
+ return 0;
+ }
+
+-static int vgem_gem_dumb_map(struct drm_file *file, struct drm_device *dev,
+- uint32_t handle, uint64_t *offset)
+-{
+- struct drm_gem_object *obj;
+- int ret;
+-
+- obj = drm_gem_object_lookup(file, handle);
+- if (!obj)
+- return -ENOENT;
+-
+- if (!obj->filp) {
+- ret = -EINVAL;
+- goto unref;
+- }
+-
+- ret = drm_gem_create_mmap_offset(obj);
+- if (ret)
+- goto unref;
+-
+- *offset = drm_vma_node_offset_addr(&obj->vma_node);
+-unref:
+- drm_gem_object_put_unlocked(obj);
+-
+- return ret;
+-}
+-
+ static struct drm_ioctl_desc vgem_ioctls[] = {
+ DRM_IOCTL_DEF_DRV(VGEM_FENCE_ATTACH, vgem_fence_attach_ioctl, DRM_RENDER_ALLOW),
+ DRM_IOCTL_DEF_DRV(VGEM_FENCE_SIGNAL, vgem_fence_signal_ioctl, DRM_RENDER_ALLOW),
+@@ -446,7 +420,6 @@ static struct drm_driver vgem_driver = {
+ .fops = &vgem_driver_fops,
+
+ .dumb_create = vgem_gem_dumb_create,
+- .dumb_map_offset = vgem_gem_dumb_map,
+
+ .prime_handle_to_fd = drm_gem_prime_handle_to_fd,
+ .prime_fd_to_handle = drm_gem_prime_fd_to_handle,
+diff --git a/drivers/gpu/drm/virtio/virtgpu_ioctl.c b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
+index 5df722072ba0b..19c5bc01eb790 100644
+--- a/drivers/gpu/drm/virtio/virtgpu_ioctl.c
++++ b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
+@@ -179,6 +179,7 @@ static int virtio_gpu_execbuffer_ioctl(struct drm_device *dev, void *data,
+
+ virtio_gpu_cmd_submit(vgdev, buf, exbuf->size,
+ vfpriv->ctx_id, buflist, out_fence);
++ dma_fence_put(&out_fence->f);
+ virtio_gpu_notify(vgdev);
+ return 0;
+
+diff --git a/drivers/infiniband/hw/bnxt_re/main.c b/drivers/infiniband/hw/bnxt_re/main.c
+index b12fbc857f942..5c41e13496a02 100644
+--- a/drivers/infiniband/hw/bnxt_re/main.c
++++ b/drivers/infiniband/hw/bnxt_re/main.c
+@@ -811,7 +811,8 @@ static int bnxt_re_handle_qp_async_event(struct creq_qp_event *qp_event,
+ struct ib_event event;
+ unsigned int flags;
+
+- if (qp->qplib_qp.state == CMDQ_MODIFY_QP_NEW_STATE_ERR) {
++ if (qp->qplib_qp.state == CMDQ_MODIFY_QP_NEW_STATE_ERR &&
++ rdma_is_kernel_res(&qp->ib_qp.res)) {
+ flags = bnxt_re_lock_cqs(qp);
+ bnxt_qplib_add_flush_qp(&qp->qplib_qp);
+ bnxt_re_unlock_cqs(qp, flags);
+diff --git a/drivers/infiniband/hw/hfi1/tid_rdma.c b/drivers/infiniband/hw/hfi1/tid_rdma.c
+index facff133139a9..3ba299cfd0b51 100644
+--- a/drivers/infiniband/hw/hfi1/tid_rdma.c
++++ b/drivers/infiniband/hw/hfi1/tid_rdma.c
+@@ -3215,6 +3215,7 @@ bool hfi1_tid_rdma_wqe_interlock(struct rvt_qp *qp, struct rvt_swqe *wqe)
+ case IB_WR_ATOMIC_CMP_AND_SWP:
+ case IB_WR_ATOMIC_FETCH_AND_ADD:
+ case IB_WR_RDMA_WRITE:
++ case IB_WR_RDMA_WRITE_WITH_IMM:
+ switch (prev->wr.opcode) {
+ case IB_WR_TID_RDMA_WRITE:
+ req = wqe_to_tid_req(prev);
+diff --git a/drivers/infiniband/hw/hns/hns_roce_device.h b/drivers/infiniband/hw/hns/hns_roce_device.h
+index 479fa557993e7..c69453a62767c 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_device.h
++++ b/drivers/infiniband/hw/hns/hns_roce_device.h
+@@ -66,8 +66,6 @@
+ #define HNS_ROCE_CQE_WCMD_EMPTY_BIT 0x2
+ #define HNS_ROCE_MIN_CQE_CNT 16
+
+-#define HNS_ROCE_RESERVED_SGE 1
+-
+ #define HNS_ROCE_MAX_IRQ_NUM 128
+
+ #define HNS_ROCE_SGE_IN_WQE 2
+diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+index eb71b941d21b7..38a48ab3e1d02 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
++++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+@@ -629,7 +629,7 @@ static int hns_roce_v2_post_recv(struct ib_qp *ibqp,
+
+ wqe_idx = (hr_qp->rq.head + nreq) & (hr_qp->rq.wqe_cnt - 1);
+
+- if (unlikely(wr->num_sge >= hr_qp->rq.max_gs)) {
++ if (unlikely(wr->num_sge > hr_qp->rq.max_gs)) {
+ ibdev_err(ibdev, "rq:num_sge=%d >= qp->sq.max_gs=%d\n",
+ wr->num_sge, hr_qp->rq.max_gs);
+ ret = -EINVAL;
+@@ -649,7 +649,6 @@ static int hns_roce_v2_post_recv(struct ib_qp *ibqp,
+ if (wr->num_sge < hr_qp->rq.max_gs) {
+ dseg->lkey = cpu_to_le32(HNS_ROCE_INVALID_LKEY);
+ dseg->addr = 0;
+- dseg->len = cpu_to_le32(HNS_ROCE_INVALID_SGE_LENGTH);
+ }
+
+ /* rq support inline data */
+@@ -783,8 +782,8 @@ static int hns_roce_v2_post_srq_recv(struct ib_srq *ibsrq,
+ }
+
+ if (wr->num_sge < srq->max_gs) {
+- dseg[i].len = cpu_to_le32(HNS_ROCE_INVALID_SGE_LENGTH);
+- dseg[i].lkey = cpu_to_le32(HNS_ROCE_INVALID_LKEY);
++ dseg[i].len = 0;
++ dseg[i].lkey = cpu_to_le32(0x100);
+ dseg[i].addr = 0;
+ }
+
+@@ -5098,7 +5097,7 @@ static int hns_roce_v2_query_srq(struct ib_srq *ibsrq, struct ib_srq_attr *attr)
+
+ attr->srq_limit = limit_wl;
+ attr->max_wr = srq->wqe_cnt - 1;
+- attr->max_sge = srq->max_gs - HNS_ROCE_RESERVED_SGE;
++ attr->max_sge = srq->max_gs;
+
+ out:
+ hns_roce_free_cmd_mailbox(hr_dev, mailbox);
+diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.h b/drivers/infiniband/hw/hns/hns_roce_hw_v2.h
+index e6c385ced1872..4f840997c6c73 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.h
++++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.h
+@@ -92,9 +92,7 @@
+ #define HNS_ROCE_V2_CQC_TIMER_ENTRY_SZ PAGE_SIZE
+ #define HNS_ROCE_V2_PAGE_SIZE_SUPPORTED 0xFFFFF000
+ #define HNS_ROCE_V2_MAX_INNER_MTPT_NUM 2
+-#define HNS_ROCE_INVALID_LKEY 0x0
+-#define HNS_ROCE_INVALID_SGE_LENGTH 0x80000000
+-
++#define HNS_ROCE_INVALID_LKEY 0x100
+ #define HNS_ROCE_CMQ_TX_TIMEOUT 30000
+ #define HNS_ROCE_V2_UC_RC_SGE_NUM_IN_WQE 2
+ #define HNS_ROCE_V2_RSV_QPS 8
+diff --git a/drivers/infiniband/hw/hns/hns_roce_qp.c b/drivers/infiniband/hw/hns/hns_roce_qp.c
+index a0a47bd669759..4edea397b6b80 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_qp.c
++++ b/drivers/infiniband/hw/hns/hns_roce_qp.c
+@@ -386,8 +386,7 @@ static int set_rq_size(struct hns_roce_dev *hr_dev, struct ib_qp_cap *cap,
+ return -EINVAL;
+ }
+
+- hr_qp->rq.max_gs = roundup_pow_of_two(max(1U, cap->max_recv_sge) +
+- HNS_ROCE_RESERVED_SGE);
++ hr_qp->rq.max_gs = roundup_pow_of_two(max(1U, cap->max_recv_sge));
+
+ if (hr_dev->caps.max_rq_sg <= HNS_ROCE_SGE_IN_WQE)
+ hr_qp->rq.wqe_shift = ilog2(hr_dev->caps.max_rq_desc_sz);
+@@ -402,7 +401,7 @@ static int set_rq_size(struct hns_roce_dev *hr_dev, struct ib_qp_cap *cap,
+ hr_qp->rq_inl_buf.wqe_cnt = 0;
+
+ cap->max_recv_wr = cnt;
+- cap->max_recv_sge = hr_qp->rq.max_gs - HNS_ROCE_RESERVED_SGE;
++ cap->max_recv_sge = hr_qp->rq.max_gs;
+
+ return 0;
+ }
+diff --git a/drivers/infiniband/hw/hns/hns_roce_srq.c b/drivers/infiniband/hw/hns/hns_roce_srq.c
+index f40a000e94ee7..b9e2dbd372b66 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_srq.c
++++ b/drivers/infiniband/hw/hns/hns_roce_srq.c
+@@ -297,7 +297,7 @@ int hns_roce_create_srq(struct ib_srq *ib_srq,
+ spin_lock_init(&srq->lock);
+
+ srq->wqe_cnt = roundup_pow_of_two(init_attr->attr.max_wr + 1);
+- srq->max_gs = init_attr->attr.max_sge + HNS_ROCE_RESERVED_SGE;
++ srq->max_gs = init_attr->attr.max_sge;
+
+ if (udata) {
+ ret = ib_copy_from_udata(&ucmd, udata, sizeof(ucmd));
+diff --git a/drivers/input/mouse/psmouse-base.c b/drivers/input/mouse/psmouse-base.c
+index 527ae0b9a191e..0b4a3039f312f 100644
+--- a/drivers/input/mouse/psmouse-base.c
++++ b/drivers/input/mouse/psmouse-base.c
+@@ -2042,7 +2042,7 @@ static int psmouse_get_maxproto(char *buffer, const struct kernel_param *kp)
+ {
+ int type = *((unsigned int *)kp->arg);
+
+- return sprintf(buffer, "%s", psmouse_protocol_by_type(type)->name);
++ return sprintf(buffer, "%s\n", psmouse_protocol_by_type(type)->name);
+ }
+
+ static int __init psmouse_init(void)
+diff --git a/drivers/media/pci/ttpci/budget-core.c b/drivers/media/pci/ttpci/budget-core.c
+index fadbdeeb44955..293867b9e7961 100644
+--- a/drivers/media/pci/ttpci/budget-core.c
++++ b/drivers/media/pci/ttpci/budget-core.c
+@@ -369,20 +369,25 @@ static int budget_register(struct budget *budget)
+ ret = dvbdemux->dmx.add_frontend(&dvbdemux->dmx, &budget->hw_frontend);
+
+ if (ret < 0)
+- return ret;
++ goto err_release_dmx;
+
+ budget->mem_frontend.source = DMX_MEMORY_FE;
+ ret = dvbdemux->dmx.add_frontend(&dvbdemux->dmx, &budget->mem_frontend);
+ if (ret < 0)
+- return ret;
++ goto err_release_dmx;
+
+ ret = dvbdemux->dmx.connect_frontend(&dvbdemux->dmx, &budget->hw_frontend);
+ if (ret < 0)
+- return ret;
++ goto err_release_dmx;
+
+ dvb_net_init(&budget->dvb_adapter, &budget->dvb_net, &dvbdemux->dmx);
+
+ return 0;
++
++err_release_dmx:
++ dvb_dmxdev_release(&budget->dmxdev);
++ dvb_dmx_release(&budget->demux);
++ return ret;
+ }
+
+ static void budget_unregister(struct budget *budget)
+diff --git a/drivers/media/platform/coda/coda-jpeg.c b/drivers/media/platform/coda/coda-jpeg.c
+index 00d19859db500..b11cfbe166dd3 100644
+--- a/drivers/media/platform/coda/coda-jpeg.c
++++ b/drivers/media/platform/coda/coda-jpeg.c
+@@ -327,8 +327,11 @@ int coda_jpeg_decode_header(struct coda_ctx *ctx, struct vb2_buffer *vb)
+ "only 8-bit quantization tables supported\n");
+ continue;
+ }
+- if (!ctx->params.jpeg_qmat_tab[i])
++ if (!ctx->params.jpeg_qmat_tab[i]) {
+ ctx->params.jpeg_qmat_tab[i] = kmalloc(64, GFP_KERNEL);
++ if (!ctx->params.jpeg_qmat_tab[i])
++ return -ENOMEM;
++ }
+ memcpy(ctx->params.jpeg_qmat_tab[i],
+ quantization_tables[i].start, 64);
+ }
+diff --git a/drivers/media/platform/davinci/vpss.c b/drivers/media/platform/davinci/vpss.c
+index d38d2bbb6f0f8..7000f0bf0b353 100644
+--- a/drivers/media/platform/davinci/vpss.c
++++ b/drivers/media/platform/davinci/vpss.c
+@@ -505,19 +505,31 @@ static void vpss_exit(void)
+
+ static int __init vpss_init(void)
+ {
++ int ret;
++
+ if (!request_mem_region(VPSS_CLK_CTRL, 4, "vpss_clock_control"))
+ return -EBUSY;
+
+ oper_cfg.vpss_regs_base2 = ioremap(VPSS_CLK_CTRL, 4);
+ if (unlikely(!oper_cfg.vpss_regs_base2)) {
+- release_mem_region(VPSS_CLK_CTRL, 4);
+- return -ENOMEM;
++ ret = -ENOMEM;
++ goto err_ioremap;
+ }
+
+ writel(VPSS_CLK_CTRL_VENCCLKEN |
+- VPSS_CLK_CTRL_DACCLKEN, oper_cfg.vpss_regs_base2);
++ VPSS_CLK_CTRL_DACCLKEN, oper_cfg.vpss_regs_base2);
++
++ ret = platform_driver_register(&vpss_driver);
++ if (ret)
++ goto err_pd_register;
++
++ return 0;
+
+- return platform_driver_register(&vpss_driver);
++err_pd_register:
++ iounmap(oper_cfg.vpss_regs_base2);
++err_ioremap:
++ release_mem_region(VPSS_CLK_CTRL, 4);
++ return ret;
+ }
+ subsys_initcall(vpss_init);
+ module_exit(vpss_exit);
+diff --git a/drivers/media/platform/qcom/camss/camss.c b/drivers/media/platform/qcom/camss/camss.c
+index 3fdc9f964a3c6..2483641799dfb 100644
+--- a/drivers/media/platform/qcom/camss/camss.c
++++ b/drivers/media/platform/qcom/camss/camss.c
+@@ -504,7 +504,6 @@ static int camss_of_parse_ports(struct camss *camss)
+ return num_subdevs;
+
+ err_cleanup:
+- v4l2_async_notifier_cleanup(&camss->notifier);
+ of_node_put(node);
+ return ret;
+ }
+@@ -835,29 +834,38 @@ static int camss_probe(struct platform_device *pdev)
+ camss->csid_num = 4;
+ camss->vfe_num = 2;
+ } else {
+- return -EINVAL;
++ ret = -EINVAL;
++ goto err_free;
+ }
+
+ camss->csiphy = devm_kcalloc(dev, camss->csiphy_num,
+ sizeof(*camss->csiphy), GFP_KERNEL);
+- if (!camss->csiphy)
+- return -ENOMEM;
++ if (!camss->csiphy) {
++ ret = -ENOMEM;
++ goto err_free;
++ }
+
+ camss->csid = devm_kcalloc(dev, camss->csid_num, sizeof(*camss->csid),
+ GFP_KERNEL);
+- if (!camss->csid)
+- return -ENOMEM;
++ if (!camss->csid) {
++ ret = -ENOMEM;
++ goto err_free;
++ }
+
+ camss->vfe = devm_kcalloc(dev, camss->vfe_num, sizeof(*camss->vfe),
+ GFP_KERNEL);
+- if (!camss->vfe)
+- return -ENOMEM;
++ if (!camss->vfe) {
++ ret = -ENOMEM;
++ goto err_free;
++ }
+
+ v4l2_async_notifier_init(&camss->notifier);
+
+ num_subdevs = camss_of_parse_ports(camss);
+- if (num_subdevs < 0)
+- return num_subdevs;
++ if (num_subdevs < 0) {
++ ret = num_subdevs;
++ goto err_cleanup;
++ }
+
+ ret = camss_init_subdevices(camss);
+ if (ret < 0)
+@@ -936,6 +944,8 @@ err_register_entities:
+ v4l2_device_unregister(&camss->v4l2_dev);
+ err_cleanup:
+ v4l2_async_notifier_cleanup(&camss->notifier);
++err_free:
++ kfree(camss);
+
+ return ret;
+ }
+diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
+index f88cb097b022a..500aa3e19a4c7 100644
+--- a/drivers/net/bonding/bond_main.c
++++ b/drivers/net/bonding/bond_main.c
+@@ -2084,7 +2084,8 @@ static int bond_release_and_destroy(struct net_device *bond_dev,
+ int ret;
+
+ ret = __bond_release_one(bond_dev, slave_dev, false, true);
+- if (ret == 0 && !bond_has_slaves(bond)) {
++ if (ret == 0 && !bond_has_slaves(bond) &&
++ bond_dev->reg_state != NETREG_UNREGISTERING) {
+ bond_dev->priv_flags |= IFF_DISABLE_NETPOLL;
+ netdev_info(bond_dev, "Destroying bond\n");
+ bond_remove_proc_entry(bond);
+@@ -2824,6 +2825,9 @@ static int bond_ab_arp_inspect(struct bonding *bond)
+ if (bond_time_in_interval(bond, last_rx, 1)) {
+ bond_propose_link_state(slave, BOND_LINK_UP);
+ commit++;
++ } else if (slave->link == BOND_LINK_BACK) {
++ bond_propose_link_state(slave, BOND_LINK_FAIL);
++ commit++;
+ }
+ continue;
+ }
+@@ -2932,6 +2936,19 @@ static void bond_ab_arp_commit(struct bonding *bond)
+
+ continue;
+
++ case BOND_LINK_FAIL:
++ bond_set_slave_link_state(slave, BOND_LINK_FAIL,
++ BOND_SLAVE_NOTIFY_NOW);
++ bond_set_slave_inactive_flags(slave,
++ BOND_SLAVE_NOTIFY_NOW);
++
++ /* A slave has just been enslaved and has become
++ * the current active slave.
++ */
++ if (rtnl_dereference(bond->curr_active_slave))
++ RCU_INIT_POINTER(bond->current_arp_slave, NULL);
++ continue;
++
+ default:
+ slave_err(bond->dev, slave->dev,
+ "impossible: link_new_state %d on slave\n",
+@@ -2982,8 +2999,6 @@ static bool bond_ab_arp_probe(struct bonding *bond)
+ return should_notify_rtnl;
+ }
+
+- bond_set_slave_inactive_flags(curr_arp_slave, BOND_SLAVE_NOTIFY_LATER);
+-
+ bond_for_each_slave_rcu(bond, slave, iter) {
+ if (!found && !before && bond_slave_is_up(slave))
+ before = slave;
+@@ -4431,13 +4446,23 @@ static netdev_tx_t bond_start_xmit(struct sk_buff *skb, struct net_device *dev)
+ return ret;
+ }
+
++static u32 bond_mode_bcast_speed(struct slave *slave, u32 speed)
++{
++ if (speed == 0 || speed == SPEED_UNKNOWN)
++ speed = slave->speed;
++ else
++ speed = min(speed, slave->speed);
++
++ return speed;
++}
++
+ static int bond_ethtool_get_link_ksettings(struct net_device *bond_dev,
+ struct ethtool_link_ksettings *cmd)
+ {
+ struct bonding *bond = netdev_priv(bond_dev);
+- unsigned long speed = 0;
+ struct list_head *iter;
+ struct slave *slave;
++ u32 speed = 0;
+
+ cmd->base.duplex = DUPLEX_UNKNOWN;
+ cmd->base.port = PORT_OTHER;
+@@ -4449,8 +4474,13 @@ static int bond_ethtool_get_link_ksettings(struct net_device *bond_dev,
+ */
+ bond_for_each_slave(bond, slave, iter) {
+ if (bond_slave_can_tx(slave)) {
+- if (slave->speed != SPEED_UNKNOWN)
+- speed += slave->speed;
++ if (slave->speed != SPEED_UNKNOWN) {
++ if (BOND_MODE(bond) == BOND_MODE_BROADCAST)
++ speed = bond_mode_bcast_speed(slave,
++ speed);
++ else
++ speed += slave->speed;
++ }
+ if (cmd->base.duplex == DUPLEX_UNKNOWN &&
+ slave->duplex != DUPLEX_UNKNOWN)
+ cmd->base.duplex = slave->duplex;
+diff --git a/drivers/net/dsa/b53/b53_common.c b/drivers/net/dsa/b53/b53_common.c
+index 1df05841ab6b1..86869337223a8 100644
+--- a/drivers/net/dsa/b53/b53_common.c
++++ b/drivers/net/dsa/b53/b53_common.c
+@@ -1555,6 +1555,8 @@ static int b53_arl_op(struct b53_device *dev, int op, int port,
+ return ret;
+
+ switch (ret) {
++ case -ETIMEDOUT:
++ return ret;
+ case -ENOSPC:
+ dev_dbg(dev->dev, "{%pM,%.4d} no space left in ARL\n",
+ addr, vid);
+diff --git a/drivers/net/ethernet/amazon/ena/ena_netdev.c b/drivers/net/ethernet/amazon/ena/ena_netdev.c
+index dda4b8fc9525e..000f57198352d 100644
+--- a/drivers/net/ethernet/amazon/ena/ena_netdev.c
++++ b/drivers/net/ethernet/amazon/ena/ena_netdev.c
+@@ -2177,13 +2177,10 @@ static void ena_del_napi_in_range(struct ena_adapter *adapter,
+ int i;
+
+ for (i = first_index; i < first_index + count; i++) {
+- /* Check if napi was initialized before */
+- if (!ENA_IS_XDP_INDEX(adapter, i) ||
+- adapter->ena_napi[i].xdp_ring)
+- netif_napi_del(&adapter->ena_napi[i].napi);
+- else
+- WARN_ON(ENA_IS_XDP_INDEX(adapter, i) &&
+- adapter->ena_napi[i].xdp_ring);
++ netif_napi_del(&adapter->ena_napi[i].napi);
++
++ WARN_ON(!ENA_IS_XDP_INDEX(adapter, i) &&
++ adapter->ena_napi[i].xdp_ring);
+ }
+ }
+
+@@ -3523,16 +3520,14 @@ static void ena_fw_reset_device(struct work_struct *work)
+ {
+ struct ena_adapter *adapter =
+ container_of(work, struct ena_adapter, reset_task);
+- struct pci_dev *pdev = adapter->pdev;
+
+- if (unlikely(!test_bit(ENA_FLAG_TRIGGER_RESET, &adapter->flags))) {
+- dev_err(&pdev->dev,
+- "device reset schedule while reset bit is off\n");
+- return;
+- }
+ rtnl_lock();
+- ena_destroy_device(adapter, false);
+- ena_restore_device(adapter);
++
++ if (likely(test_bit(ENA_FLAG_TRIGGER_RESET, &adapter->flags))) {
++ ena_destroy_device(adapter, false);
++ ena_restore_device(adapter);
++ }
++
+ rtnl_unlock();
+ }
+
+@@ -4366,8 +4361,11 @@ static void __ena_shutoff(struct pci_dev *pdev, bool shutdown)
+ netdev->rx_cpu_rmap = NULL;
+ }
+ #endif /* CONFIG_RFS_ACCEL */
+- del_timer_sync(&adapter->timer_service);
+
++ /* Make sure timer and reset routine won't be called after
++ * freeing device resources.
++ */
++ del_timer_sync(&adapter->timer_service);
+ cancel_work_sync(&adapter->reset_task);
+
+ rtnl_lock(); /* lock released inside the below if-else block */
+diff --git a/drivers/net/ethernet/cortina/gemini.c b/drivers/net/ethernet/cortina/gemini.c
+index 66e67b24a887c..62e271aea4a50 100644
+--- a/drivers/net/ethernet/cortina/gemini.c
++++ b/drivers/net/ethernet/cortina/gemini.c
+@@ -2389,7 +2389,7 @@ static int gemini_ethernet_port_probe(struct platform_device *pdev)
+
+ dev_info(dev, "probe %s ID %d\n", dev_name(dev), id);
+
+- netdev = alloc_etherdev_mq(sizeof(*port), TX_QUEUE_NUM);
++ netdev = devm_alloc_etherdev_mqs(dev, sizeof(*port), TX_QUEUE_NUM, TX_QUEUE_NUM);
+ if (!netdev) {
+ dev_err(dev, "Can't allocate ethernet device #%d\n", id);
+ return -ENOMEM;
+@@ -2521,7 +2521,6 @@ static int gemini_ethernet_port_probe(struct platform_device *pdev)
+ }
+
+ port->netdev = NULL;
+- free_netdev(netdev);
+ return ret;
+ }
+
+@@ -2530,7 +2529,6 @@ static int gemini_ethernet_port_remove(struct platform_device *pdev)
+ struct gemini_ethernet_port *port = platform_get_drvdata(pdev);
+
+ gemini_port_remove(port);
+- free_netdev(port->netdev);
+ return 0;
+ }
+
+diff --git a/drivers/net/ethernet/freescale/fec_main.c b/drivers/net/ethernet/freescale/fec_main.c
+index cc7fbfc093548..534fcc71a2a53 100644
+--- a/drivers/net/ethernet/freescale/fec_main.c
++++ b/drivers/net/ethernet/freescale/fec_main.c
+@@ -3714,11 +3714,11 @@ failed_mii_init:
+ failed_irq:
+ failed_init:
+ fec_ptp_stop(pdev);
+- if (fep->reg_phy)
+- regulator_disable(fep->reg_phy);
+ failed_reset:
+ pm_runtime_put_noidle(&pdev->dev);
+ pm_runtime_disable(&pdev->dev);
++ if (fep->reg_phy)
++ regulator_disable(fep->reg_phy);
+ failed_regulator:
+ clk_disable_unprepare(fep->clk_ahb);
+ failed_clk_ahb:
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_adminq_cmd.h b/drivers/net/ethernet/intel/i40e/i40e_adminq_cmd.h
+index aa5f1c0aa7215..0921785a10795 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_adminq_cmd.h
++++ b/drivers/net/ethernet/intel/i40e/i40e_adminq_cmd.h
+@@ -1211,7 +1211,7 @@ struct i40e_aqc_set_vsi_promiscuous_modes {
+ #define I40E_AQC_SET_VSI_PROMISC_BROADCAST 0x04
+ #define I40E_AQC_SET_VSI_DEFAULT 0x08
+ #define I40E_AQC_SET_VSI_PROMISC_VLAN 0x10
+-#define I40E_AQC_SET_VSI_PROMISC_TX 0x8000
++#define I40E_AQC_SET_VSI_PROMISC_RX_ONLY 0x8000
+ __le16 seid;
+ #define I40E_AQC_VSI_PROM_CMD_SEID_MASK 0x3FF
+ __le16 vlan_tag;
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_common.c b/drivers/net/ethernet/intel/i40e/i40e_common.c
+index 45b90eb11adba..21e44c6cd5eac 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_common.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_common.c
+@@ -1969,6 +1969,21 @@ i40e_status i40e_aq_set_phy_debug(struct i40e_hw *hw, u8 cmd_flags,
+ return status;
+ }
+
++/**
++ * i40e_is_aq_api_ver_ge
++ * @aq: pointer to AdminQ info containing HW API version to compare
++ * @maj: API major value
++ * @min: API minor value
++ *
++ * Assert whether current HW API version is greater/equal than provided.
++ **/
++static bool i40e_is_aq_api_ver_ge(struct i40e_adminq_info *aq, u16 maj,
++ u16 min)
++{
++ return (aq->api_maj_ver > maj ||
++ (aq->api_maj_ver == maj && aq->api_min_ver >= min));
++}
++
+ /**
+ * i40e_aq_add_vsi
+ * @hw: pointer to the hw struct
+@@ -2094,18 +2109,16 @@ i40e_status i40e_aq_set_vsi_unicast_promiscuous(struct i40e_hw *hw,
+
+ if (set) {
+ flags |= I40E_AQC_SET_VSI_PROMISC_UNICAST;
+- if (rx_only_promisc &&
+- (((hw->aq.api_maj_ver == 1) && (hw->aq.api_min_ver >= 5)) ||
+- (hw->aq.api_maj_ver > 1)))
+- flags |= I40E_AQC_SET_VSI_PROMISC_TX;
++ if (rx_only_promisc && i40e_is_aq_api_ver_ge(&hw->aq, 1, 5))
++ flags |= I40E_AQC_SET_VSI_PROMISC_RX_ONLY;
+ }
+
+ cmd->promiscuous_flags = cpu_to_le16(flags);
+
+ cmd->valid_flags = cpu_to_le16(I40E_AQC_SET_VSI_PROMISC_UNICAST);
+- if (((hw->aq.api_maj_ver >= 1) && (hw->aq.api_min_ver >= 5)) ||
+- (hw->aq.api_maj_ver > 1))
+- cmd->valid_flags |= cpu_to_le16(I40E_AQC_SET_VSI_PROMISC_TX);
++ if (i40e_is_aq_api_ver_ge(&hw->aq, 1, 5))
++ cmd->valid_flags |=
++ cpu_to_le16(I40E_AQC_SET_VSI_PROMISC_RX_ONLY);
+
+ cmd->seid = cpu_to_le16(seid);
+ status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details);
+@@ -2202,11 +2215,17 @@ enum i40e_status_code i40e_aq_set_vsi_uc_promisc_on_vlan(struct i40e_hw *hw,
+ i40e_fill_default_direct_cmd_desc(&desc,
+ i40e_aqc_opc_set_vsi_promiscuous_modes);
+
+- if (enable)
++ if (enable) {
+ flags |= I40E_AQC_SET_VSI_PROMISC_UNICAST;
++ if (i40e_is_aq_api_ver_ge(&hw->aq, 1, 5))
++ flags |= I40E_AQC_SET_VSI_PROMISC_RX_ONLY;
++ }
+
+ cmd->promiscuous_flags = cpu_to_le16(flags);
+ cmd->valid_flags = cpu_to_le16(I40E_AQC_SET_VSI_PROMISC_UNICAST);
++ if (i40e_is_aq_api_ver_ge(&hw->aq, 1, 5))
++ cmd->valid_flags |=
++ cpu_to_le16(I40E_AQC_SET_VSI_PROMISC_RX_ONLY);
+ cmd->seid = cpu_to_le16(seid);
+ cmd->vlan_tag = cpu_to_le16(vid | I40E_AQC_SET_VSI_VLAN_VALID);
+
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c
+index 56ecd6c3f2362..6af6367e7cac2 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_main.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_main.c
+@@ -15352,6 +15352,9 @@ static void i40e_remove(struct pci_dev *pdev)
+ i40e_write_rx_ctl(hw, I40E_PFQF_HENA(0), 0);
+ i40e_write_rx_ctl(hw, I40E_PFQF_HENA(1), 0);
+
++ while (test_bit(__I40E_RESET_RECOVERY_PENDING, pf->state))
++ usleep_range(1000, 2000);
++
+ /* no more scheduling of any task */
+ set_bit(__I40E_SUSPENDED, pf->state);
+ set_bit(__I40E_DOWN, pf->state);
+diff --git a/drivers/net/ethernet/intel/igc/igc_main.c b/drivers/net/ethernet/intel/igc/igc_main.c
+index 6919c50e449a2..63259ecd41e5b 100644
+--- a/drivers/net/ethernet/intel/igc/igc_main.c
++++ b/drivers/net/ethernet/intel/igc/igc_main.c
+@@ -5158,6 +5158,8 @@ static int igc_probe(struct pci_dev *pdev,
+ device_set_wakeup_enable(&adapter->pdev->dev,
+ adapter->flags & IGC_FLAG_WOL_SUPPORTED);
+
++ igc_ptp_init(adapter);
++
+ /* reset the hardware with the new settings */
+ igc_reset(adapter);
+
+@@ -5174,9 +5176,6 @@ static int igc_probe(struct pci_dev *pdev,
+ /* carrier off reporting is important to ethtool even BEFORE open */
+ netif_carrier_off(netdev);
+
+- /* do hw tstamp init after resetting */
+- igc_ptp_init(adapter);
+-
+ /* Check if Media Autosense is enabled */
+ adapter->ei = *ei;
+
+diff --git a/drivers/net/ethernet/intel/igc/igc_ptp.c b/drivers/net/ethernet/intel/igc/igc_ptp.c
+index 0d746f8588c81..61e38853aa47d 100644
+--- a/drivers/net/ethernet/intel/igc/igc_ptp.c
++++ b/drivers/net/ethernet/intel/igc/igc_ptp.c
+@@ -608,8 +608,6 @@ void igc_ptp_init(struct igc_adapter *adapter)
+ adapter->tstamp_config.rx_filter = HWTSTAMP_FILTER_NONE;
+ adapter->tstamp_config.tx_type = HWTSTAMP_TX_OFF;
+
+- igc_ptp_reset(adapter);
+-
+ adapter->ptp_clock = ptp_clock_register(&adapter->ptp_caps,
+ &adapter->pdev->dev);
+ if (IS_ERR(adapter->ptp_clock)) {
+diff --git a/drivers/net/hyperv/netvsc_drv.c b/drivers/net/hyperv/netvsc_drv.c
+index 0d779bba1b019..6b81c04ab5e29 100644
+--- a/drivers/net/hyperv/netvsc_drv.c
++++ b/drivers/net/hyperv/netvsc_drv.c
+@@ -502,7 +502,7 @@ static int netvsc_vf_xmit(struct net_device *net, struct net_device *vf_netdev,
+ int rc;
+
+ skb->dev = vf_netdev;
+- skb->queue_mapping = qdisc_skb_cb(skb)->slave_dev_queue_mapping;
++ skb_record_rx_queue(skb, qdisc_skb_cb(skb)->slave_dev_queue_mapping);
+
+ rc = dev_queue_xmit(skb);
+ if (likely(rc == NET_XMIT_SUCCESS || rc == NET_XMIT_CN)) {
+diff --git a/drivers/net/ipvlan/ipvlan_main.c b/drivers/net/ipvlan/ipvlan_main.c
+index 15e87c097b0b3..5bca94c990061 100644
+--- a/drivers/net/ipvlan/ipvlan_main.c
++++ b/drivers/net/ipvlan/ipvlan_main.c
+@@ -106,12 +106,21 @@ static void ipvlan_port_destroy(struct net_device *dev)
+ kfree(port);
+ }
+
++#define IPVLAN_ALWAYS_ON_OFLOADS \
++ (NETIF_F_SG | NETIF_F_HW_CSUM | \
++ NETIF_F_GSO_ROBUST | NETIF_F_GSO_SOFTWARE | NETIF_F_GSO_ENCAP_ALL)
++
++#define IPVLAN_ALWAYS_ON \
++ (IPVLAN_ALWAYS_ON_OFLOADS | NETIF_F_LLTX | NETIF_F_VLAN_CHALLENGED)
++
+ #define IPVLAN_FEATURES \
+- (NETIF_F_SG | NETIF_F_CSUM_MASK | NETIF_F_HIGHDMA | NETIF_F_FRAGLIST | \
++ (NETIF_F_SG | NETIF_F_HW_CSUM | NETIF_F_HIGHDMA | NETIF_F_FRAGLIST | \
+ NETIF_F_GSO | NETIF_F_ALL_TSO | NETIF_F_GSO_ROBUST | \
+ NETIF_F_GRO | NETIF_F_RXCSUM | \
+ NETIF_F_HW_VLAN_CTAG_FILTER | NETIF_F_HW_VLAN_STAG_FILTER)
+
++ /* NETIF_F_GSO_ENCAP_ALL NETIF_F_GSO_SOFTWARE Newly added */
++
+ #define IPVLAN_STATE_MASK \
+ ((1<<__LINK_STATE_NOCARRIER) | (1<<__LINK_STATE_DORMANT))
+
+@@ -125,7 +134,9 @@ static int ipvlan_init(struct net_device *dev)
+ dev->state = (dev->state & ~IPVLAN_STATE_MASK) |
+ (phy_dev->state & IPVLAN_STATE_MASK);
+ dev->features = phy_dev->features & IPVLAN_FEATURES;
+- dev->features |= NETIF_F_LLTX | NETIF_F_VLAN_CHALLENGED;
++ dev->features |= IPVLAN_ALWAYS_ON;
++ dev->vlan_features = phy_dev->vlan_features & IPVLAN_FEATURES;
++ dev->vlan_features |= IPVLAN_ALWAYS_ON_OFLOADS;
+ dev->hw_enc_features |= dev->features;
+ dev->gso_max_size = phy_dev->gso_max_size;
+ dev->gso_max_segs = phy_dev->gso_max_segs;
+@@ -227,7 +238,14 @@ static netdev_features_t ipvlan_fix_features(struct net_device *dev,
+ {
+ struct ipvl_dev *ipvlan = netdev_priv(dev);
+
+- return features & (ipvlan->sfeatures | ~IPVLAN_FEATURES);
++ features |= NETIF_F_ALL_FOR_ALL;
++ features &= (ipvlan->sfeatures | ~IPVLAN_FEATURES);
++ features = netdev_increment_features(ipvlan->phy_dev->features,
++ features, features);
++ features |= IPVLAN_ALWAYS_ON;
++ features &= (IPVLAN_FEATURES | IPVLAN_ALWAYS_ON);
++
++ return features;
+ }
+
+ static void ipvlan_change_rx_flags(struct net_device *dev, int change)
+@@ -734,10 +752,9 @@ static int ipvlan_device_event(struct notifier_block *unused,
+
+ case NETDEV_FEAT_CHANGE:
+ list_for_each_entry(ipvlan, &port->ipvlans, pnode) {
+- ipvlan->dev->features = dev->features & IPVLAN_FEATURES;
+ ipvlan->dev->gso_max_size = dev->gso_max_size;
+ ipvlan->dev->gso_max_segs = dev->gso_max_segs;
+- netdev_features_change(ipvlan->dev);
++ netdev_update_features(ipvlan->dev);
+ }
+ break;
+
+diff --git a/drivers/of/address.c b/drivers/of/address.c
+index 8eea3f6e29a44..340d3051b1ce2 100644
+--- a/drivers/of/address.c
++++ b/drivers/of/address.c
+@@ -980,6 +980,11 @@ int of_dma_get_range(struct device_node *np, u64 *dma_addr, u64 *paddr, u64 *siz
+ /* Don't error out as we'd break some existing DTs */
+ continue;
+ }
++ if (range.cpu_addr == OF_BAD_ADDR) {
++ pr_err("translation of DMA address(%llx) to CPU address failed node(%pOF)\n",
++ range.bus_addr, node);
++ continue;
++ }
+ dma_offset = range.cpu_addr - range.bus_addr;
+
+ /* Take lower and upper limits */
+diff --git a/drivers/opp/core.c b/drivers/opp/core.c
+index dfbd3d10410ca..8c90f78717723 100644
+--- a/drivers/opp/core.c
++++ b/drivers/opp/core.c
+@@ -862,8 +862,10 @@ int dev_pm_opp_set_rate(struct device *dev, unsigned long target_freq)
+ * have OPP table for the device, while others don't and
+ * opp_set_rate() just needs to behave like clk_set_rate().
+ */
+- if (!_get_opp_count(opp_table))
+- return 0;
++ if (!_get_opp_count(opp_table)) {
++ ret = 0;
++ goto put_opp_table;
++ }
+
+ if (!opp_table->required_opp_tables && !opp_table->regulators &&
+ !opp_table->paths) {
+@@ -874,7 +876,7 @@ int dev_pm_opp_set_rate(struct device *dev, unsigned long target_freq)
+
+ ret = _set_opp_bw(opp_table, NULL, dev, true);
+ if (ret)
+- return ret;
++ goto put_opp_table;
+
+ if (opp_table->regulator_enabled) {
+ regulator_disable(opp_table->regulators[0]);
+@@ -901,10 +903,13 @@ int dev_pm_opp_set_rate(struct device *dev, unsigned long target_freq)
+
+ /* Return early if nothing to do */
+ if (old_freq == freq) {
+- dev_dbg(dev, "%s: old/new frequencies (%lu Hz) are same, nothing to do\n",
+- __func__, freq);
+- ret = 0;
+- goto put_opp_table;
++ if (!opp_table->required_opp_tables && !opp_table->regulators &&
++ !opp_table->paths) {
++ dev_dbg(dev, "%s: old/new frequencies (%lu Hz) are same, nothing to do\n",
++ __func__, freq);
++ ret = 0;
++ goto put_opp_table;
++ }
+ }
+
+ /*
+diff --git a/drivers/pci/hotplug/s390_pci_hpc.c b/drivers/pci/hotplug/s390_pci_hpc.c
+index b59f84918fe06..c9e790c74051f 100644
+--- a/drivers/pci/hotplug/s390_pci_hpc.c
++++ b/drivers/pci/hotplug/s390_pci_hpc.c
+@@ -83,21 +83,19 @@ static int disable_slot(struct hotplug_slot *hotplug_slot)
+ struct zpci_dev *zdev = container_of(hotplug_slot, struct zpci_dev,
+ hotplug_slot);
+ struct pci_dev *pdev;
+- struct zpci_bus *zbus = zdev->zbus;
+ int rc;
+
+ if (!zpci_fn_configured(zdev->state))
+ return -EIO;
+
+- pdev = pci_get_slot(zbus->bus, zdev->devfn);
+- if (pdev) {
+- if (pci_num_vf(pdev))
+- return -EBUSY;
+-
+- pci_stop_and_remove_bus_device_locked(pdev);
++ pdev = pci_get_slot(zdev->zbus->bus, zdev->devfn);
++ if (pdev && pci_num_vf(pdev)) {
+ pci_dev_put(pdev);
++ return -EBUSY;
+ }
+
++ zpci_remove_device(zdev);
++
+ rc = zpci_disable_device(zdev);
+ if (rc)
+ return rc;
+diff --git a/drivers/rtc/rtc-goldfish.c b/drivers/rtc/rtc-goldfish.c
+index 27797157fcb3f..6349d2cd36805 100644
+--- a/drivers/rtc/rtc-goldfish.c
++++ b/drivers/rtc/rtc-goldfish.c
+@@ -73,6 +73,7 @@ static int goldfish_rtc_set_alarm(struct device *dev,
+ rtc_alarm64 = rtc_tm_to_time64(&alrm->time) * NSEC_PER_SEC;
+ writel((rtc_alarm64 >> 32), base + TIMER_ALARM_HIGH);
+ writel(rtc_alarm64, base + TIMER_ALARM_LOW);
++ writel(1, base + TIMER_IRQ_ENABLED);
+ } else {
+ /*
+ * if this function was called with enabled=0
+diff --git a/drivers/s390/scsi/zfcp_fsf.c b/drivers/s390/scsi/zfcp_fsf.c
+index c795f22249d8f..140186fe1d1e0 100644
+--- a/drivers/s390/scsi/zfcp_fsf.c
++++ b/drivers/s390/scsi/zfcp_fsf.c
+@@ -434,7 +434,7 @@ static void zfcp_fsf_req_complete(struct zfcp_fsf_req *req)
+ return;
+ }
+
+- del_timer(&req->timer);
++ del_timer_sync(&req->timer);
+ zfcp_fsf_protstatus_eval(req);
+ zfcp_fsf_fsfstatus_eval(req);
+ req->handler(req);
+@@ -867,7 +867,7 @@ static int zfcp_fsf_req_send(struct zfcp_fsf_req *req)
+ req->qdio_req.qdio_outb_usage = atomic_read(&qdio->req_q_free);
+ req->issued = get_tod_clock();
+ if (zfcp_qdio_send(qdio, &req->qdio_req)) {
+- del_timer(&req->timer);
++ del_timer_sync(&req->timer);
+ /* lookup request again, list might have changed */
+ zfcp_reqlist_find_rm(adapter->req_list, req_id);
+ zfcp_erp_adapter_reopen(adapter, 0, "fsrs__1");
+diff --git a/drivers/scsi/libfc/fc_disc.c b/drivers/scsi/libfc/fc_disc.c
+index 2b865c6423e29..e00dc4693fcbd 100644
+--- a/drivers/scsi/libfc/fc_disc.c
++++ b/drivers/scsi/libfc/fc_disc.c
+@@ -581,8 +581,12 @@ static void fc_disc_gpn_id_resp(struct fc_seq *sp, struct fc_frame *fp,
+
+ if (PTR_ERR(fp) == -FC_EX_CLOSED)
+ goto out;
+- if (IS_ERR(fp))
+- goto redisc;
++ if (IS_ERR(fp)) {
++ mutex_lock(&disc->disc_mutex);
++ fc_disc_restart(disc);
++ mutex_unlock(&disc->disc_mutex);
++ goto out;
++ }
+
+ cp = fc_frame_payload_get(fp, sizeof(*cp));
+ if (!cp)
+@@ -609,7 +613,7 @@ static void fc_disc_gpn_id_resp(struct fc_seq *sp, struct fc_frame *fp,
+ new_rdata->disc_id = disc->disc_id;
+ fc_rport_login(new_rdata);
+ }
+- goto out;
++ goto free_fp;
+ }
+ rdata->disc_id = disc->disc_id;
+ mutex_unlock(&rdata->rp_mutex);
+@@ -626,6 +630,8 @@ redisc:
+ fc_disc_restart(disc);
+ mutex_unlock(&disc->disc_mutex);
+ }
++free_fp:
++ fc_frame_free(fp);
+ out:
+ kref_put(&rdata->kref, fc_rport_destroy);
+ if (!IS_ERR(fp))
+diff --git a/drivers/scsi/qla2xxx/qla_os.c b/drivers/scsi/qla2xxx/qla_os.c
+index e92fad99338cd..5c7c22d0fab4b 100644
+--- a/drivers/scsi/qla2xxx/qla_os.c
++++ b/drivers/scsi/qla2xxx/qla_os.c
+@@ -2829,10 +2829,6 @@ qla2x00_probe_one(struct pci_dev *pdev, const struct pci_device_id *id)
+ /* This may fail but that's ok */
+ pci_enable_pcie_error_reporting(pdev);
+
+- /* Turn off T10-DIF when FC-NVMe is enabled */
+- if (ql2xnvmeenable)
+- ql2xenabledif = 0;
+-
+ ha = kzalloc(sizeof(struct qla_hw_data), GFP_KERNEL);
+ if (!ha) {
+ ql_log_pci(ql_log_fatal, pdev, 0x0009,
+diff --git a/drivers/scsi/ufs/ti-j721e-ufs.c b/drivers/scsi/ufs/ti-j721e-ufs.c
+index 46bb905b4d6a9..eafe0db98d542 100644
+--- a/drivers/scsi/ufs/ti-j721e-ufs.c
++++ b/drivers/scsi/ufs/ti-j721e-ufs.c
+@@ -38,6 +38,7 @@ static int ti_j721e_ufs_probe(struct platform_device *pdev)
+ /* Select MPHY refclk frequency */
+ clk = devm_clk_get(dev, NULL);
+ if (IS_ERR(clk)) {
++ ret = PTR_ERR(clk);
+ dev_err(dev, "Cannot claim MPHY clock.\n");
+ goto clk_err;
+ }
+diff --git a/drivers/scsi/ufs/ufs_quirks.h b/drivers/scsi/ufs/ufs_quirks.h
+index e3175a63c676b..e80d5f26a4424 100644
+--- a/drivers/scsi/ufs/ufs_quirks.h
++++ b/drivers/scsi/ufs/ufs_quirks.h
+@@ -12,6 +12,7 @@
+ #define UFS_ANY_VENDOR 0xFFFF
+ #define UFS_ANY_MODEL "ANY_MODEL"
+
++#define UFS_VENDOR_MICRON 0x12C
+ #define UFS_VENDOR_TOSHIBA 0x198
+ #define UFS_VENDOR_SAMSUNG 0x1CE
+ #define UFS_VENDOR_SKHYNIX 0x1AD
+diff --git a/drivers/scsi/ufs/ufshcd-pci.c b/drivers/scsi/ufs/ufshcd-pci.c
+index 8f78a81514991..b220666774ce8 100644
+--- a/drivers/scsi/ufs/ufshcd-pci.c
++++ b/drivers/scsi/ufs/ufshcd-pci.c
+@@ -67,11 +67,23 @@ static int ufs_intel_link_startup_notify(struct ufs_hba *hba,
+ return err;
+ }
+
++static int ufs_intel_ehl_init(struct ufs_hba *hba)
++{
++ hba->quirks |= UFSHCD_QUIRK_BROKEN_AUTO_HIBERN8;
++ return 0;
++}
++
+ static struct ufs_hba_variant_ops ufs_intel_cnl_hba_vops = {
+ .name = "intel-pci",
+ .link_startup_notify = ufs_intel_link_startup_notify,
+ };
+
++static struct ufs_hba_variant_ops ufs_intel_ehl_hba_vops = {
++ .name = "intel-pci",
++ .init = ufs_intel_ehl_init,
++ .link_startup_notify = ufs_intel_link_startup_notify,
++};
++
+ #ifdef CONFIG_PM_SLEEP
+ /**
+ * ufshcd_pci_suspend - suspend power management function
+@@ -200,8 +212,8 @@ static const struct dev_pm_ops ufshcd_pci_pm_ops = {
+ static const struct pci_device_id ufshcd_pci_tbl[] = {
+ { PCI_VENDOR_ID_SAMSUNG, 0xC00C, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0 },
+ { PCI_VDEVICE(INTEL, 0x9DFA), (kernel_ulong_t)&ufs_intel_cnl_hba_vops },
+- { PCI_VDEVICE(INTEL, 0x4B41), (kernel_ulong_t)&ufs_intel_cnl_hba_vops },
+- { PCI_VDEVICE(INTEL, 0x4B43), (kernel_ulong_t)&ufs_intel_cnl_hba_vops },
++ { PCI_VDEVICE(INTEL, 0x4B41), (kernel_ulong_t)&ufs_intel_ehl_hba_vops },
++ { PCI_VDEVICE(INTEL, 0x4B43), (kernel_ulong_t)&ufs_intel_ehl_hba_vops },
+ { } /* terminate list */
+ };
+
+diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c
+index e412e43d23821..136b863bc1d45 100644
+--- a/drivers/scsi/ufs/ufshcd.c
++++ b/drivers/scsi/ufs/ufshcd.c
+@@ -216,6 +216,8 @@ ufs_get_desired_pm_lvl_for_dev_link_state(enum ufs_dev_pwr_mode dev_state,
+
+ static struct ufs_dev_fix ufs_fixups[] = {
+ /* UFS cards deviations table */
++ UFS_FIX(UFS_VENDOR_MICRON, UFS_ANY_MODEL,
++ UFS_DEVICE_QUIRK_DELAY_BEFORE_LPM),
+ UFS_FIX(UFS_VENDOR_SAMSUNG, UFS_ANY_MODEL,
+ UFS_DEVICE_QUIRK_DELAY_BEFORE_LPM),
+ UFS_FIX(UFS_VENDOR_SAMSUNG, UFS_ANY_MODEL,
+@@ -672,7 +674,11 @@ static inline int ufshcd_get_tr_ocs(struct ufshcd_lrb *lrbp)
+ */
+ static inline void ufshcd_utrl_clear(struct ufs_hba *hba, u32 pos)
+ {
+- ufshcd_writel(hba, ~(1 << pos), REG_UTP_TRANSFER_REQ_LIST_CLEAR);
++ if (hba->quirks & UFSHCI_QUIRK_BROKEN_REQ_LIST_CLR)
++ ufshcd_writel(hba, (1 << pos), REG_UTP_TRANSFER_REQ_LIST_CLEAR);
++ else
++ ufshcd_writel(hba, ~(1 << pos),
++ REG_UTP_TRANSFER_REQ_LIST_CLEAR);
+ }
+
+ /**
+@@ -682,7 +688,10 @@ static inline void ufshcd_utrl_clear(struct ufs_hba *hba, u32 pos)
+ */
+ static inline void ufshcd_utmrl_clear(struct ufs_hba *hba, u32 pos)
+ {
+- ufshcd_writel(hba, ~(1 << pos), REG_UTP_TASK_REQ_LIST_CLEAR);
++ if (hba->quirks & UFSHCI_QUIRK_BROKEN_REQ_LIST_CLR)
++ ufshcd_writel(hba, (1 << pos), REG_UTP_TASK_REQ_LIST_CLEAR);
++ else
++ ufshcd_writel(hba, ~(1 << pos), REG_UTP_TASK_REQ_LIST_CLEAR);
+ }
+
+ /**
+@@ -2166,8 +2175,14 @@ static int ufshcd_map_sg(struct ufs_hba *hba, struct ufshcd_lrb *lrbp)
+ return sg_segments;
+
+ if (sg_segments) {
+- lrbp->utr_descriptor_ptr->prd_table_length =
+- cpu_to_le16((u16)sg_segments);
++
++ if (hba->quirks & UFSHCD_QUIRK_PRDT_BYTE_GRAN)
++ lrbp->utr_descriptor_ptr->prd_table_length =
++ cpu_to_le16((sg_segments *
++ sizeof(struct ufshcd_sg_entry)));
++ else
++ lrbp->utr_descriptor_ptr->prd_table_length =
++ cpu_to_le16((u16) (sg_segments));
+
+ prd_table = (struct ufshcd_sg_entry *)lrbp->ucd_prdt_ptr;
+
+@@ -3514,11 +3529,21 @@ static void ufshcd_host_memory_configure(struct ufs_hba *hba)
+ cpu_to_le32(upper_32_bits(cmd_desc_element_addr));
+
+ /* Response upiu and prdt offset should be in double words */
+- utrdlp[i].response_upiu_offset =
+- cpu_to_le16(response_offset >> 2);
+- utrdlp[i].prd_table_offset = cpu_to_le16(prdt_offset >> 2);
+- utrdlp[i].response_upiu_length =
+- cpu_to_le16(ALIGNED_UPIU_SIZE >> 2);
++ if (hba->quirks & UFSHCD_QUIRK_PRDT_BYTE_GRAN) {
++ utrdlp[i].response_upiu_offset =
++ cpu_to_le16(response_offset);
++ utrdlp[i].prd_table_offset =
++ cpu_to_le16(prdt_offset);
++ utrdlp[i].response_upiu_length =
++ cpu_to_le16(ALIGNED_UPIU_SIZE);
++ } else {
++ utrdlp[i].response_upiu_offset =
++ cpu_to_le16(response_offset >> 2);
++ utrdlp[i].prd_table_offset =
++ cpu_to_le16(prdt_offset >> 2);
++ utrdlp[i].response_upiu_length =
++ cpu_to_le16(ALIGNED_UPIU_SIZE >> 2);
++ }
+
+ ufshcd_init_lrb(hba, &hba->lrb[i], i);
+ }
+@@ -3548,6 +3573,52 @@ static int ufshcd_dme_link_startup(struct ufs_hba *hba)
+ "dme-link-startup: error code %d\n", ret);
+ return ret;
+ }
++/**
++ * ufshcd_dme_reset - UIC command for DME_RESET
++ * @hba: per adapter instance
++ *
++ * DME_RESET command is issued in order to reset UniPro stack.
++ * This function now deals with cold reset.
++ *
++ * Returns 0 on success, non-zero value on failure
++ */
++static int ufshcd_dme_reset(struct ufs_hba *hba)
++{
++ struct uic_command uic_cmd = {0};
++ int ret;
++
++ uic_cmd.command = UIC_CMD_DME_RESET;
++
++ ret = ufshcd_send_uic_cmd(hba, &uic_cmd);
++ if (ret)
++ dev_err(hba->dev,
++ "dme-reset: error code %d\n", ret);
++
++ return ret;
++}
++
++/**
++ * ufshcd_dme_enable - UIC command for DME_ENABLE
++ * @hba: per adapter instance
++ *
++ * DME_ENABLE command is issued in order to enable UniPro stack.
++ *
++ * Returns 0 on success, non-zero value on failure
++ */
++static int ufshcd_dme_enable(struct ufs_hba *hba)
++{
++ struct uic_command uic_cmd = {0};
++ int ret;
++
++ uic_cmd.command = UIC_CMD_DME_ENABLE;
++
++ ret = ufshcd_send_uic_cmd(hba, &uic_cmd);
++ if (ret)
++ dev_err(hba->dev,
++ "dme-enable: error code %d\n", ret);
++
++ return ret;
++}
+
+ static inline void ufshcd_add_delay_before_dme_cmd(struct ufs_hba *hba)
+ {
+@@ -4272,7 +4343,7 @@ static inline void ufshcd_hba_stop(struct ufs_hba *hba)
+ }
+
+ /**
+- * ufshcd_hba_enable - initialize the controller
++ * ufshcd_hba_execute_hce - initialize the controller
+ * @hba: per adapter instance
+ *
+ * The controller resets itself and controller firmware initialization
+@@ -4281,7 +4352,7 @@ static inline void ufshcd_hba_stop(struct ufs_hba *hba)
+ *
+ * Returns 0 on success, non-zero value on failure
+ */
+-int ufshcd_hba_enable(struct ufs_hba *hba)
++static int ufshcd_hba_execute_hce(struct ufs_hba *hba)
+ {
+ int retry;
+
+@@ -4329,6 +4400,32 @@ int ufshcd_hba_enable(struct ufs_hba *hba)
+
+ return 0;
+ }
++
++int ufshcd_hba_enable(struct ufs_hba *hba)
++{
++ int ret;
++
++ if (hba->quirks & UFSHCI_QUIRK_BROKEN_HCE) {
++ ufshcd_set_link_off(hba);
++ ufshcd_vops_hce_enable_notify(hba, PRE_CHANGE);
++
++ /* enable UIC related interrupts */
++ ufshcd_enable_intr(hba, UFSHCD_UIC_MASK);
++ ret = ufshcd_dme_reset(hba);
++ if (!ret) {
++ ret = ufshcd_dme_enable(hba);
++ if (!ret)
++ ufshcd_vops_hce_enable_notify(hba, POST_CHANGE);
++ if (ret)
++ dev_err(hba->dev,
++ "Host controller enable failed with non-hce\n");
++ }
++ } else {
++ ret = ufshcd_hba_execute_hce(hba);
++ }
++
++ return ret;
++}
+ EXPORT_SYMBOL_GPL(ufshcd_hba_enable);
+
+ static int ufshcd_disable_tx_lcc(struct ufs_hba *hba, bool peer)
+@@ -4727,6 +4824,12 @@ ufshcd_transfer_rsp_status(struct ufs_hba *hba, struct ufshcd_lrb *lrbp)
+ /* overall command status of utrd */
+ ocs = ufshcd_get_tr_ocs(lrbp);
+
++ if (hba->quirks & UFSHCD_QUIRK_BROKEN_OCS_FATAL_ERROR) {
++ if (be32_to_cpu(lrbp->ucd_rsp_ptr->header.dword_1) &
++ MASK_RSP_UPIU_RESULT)
++ ocs = OCS_SUCCESS;
++ }
++
+ switch (ocs) {
+ case OCS_SUCCESS:
+ result = ufshcd_get_req_rsp(lrbp->ucd_rsp_ptr);
+@@ -4905,7 +5008,8 @@ static irqreturn_t ufshcd_transfer_req_compl(struct ufs_hba *hba)
+ * false interrupt if device completes another request after resetting
+ * aggregation and before reading the DB.
+ */
+- if (ufshcd_is_intr_aggr_allowed(hba))
++ if (ufshcd_is_intr_aggr_allowed(hba) &&
++ !(hba->quirks & UFSHCI_QUIRK_SKIP_RESET_INTR_AGGR))
+ ufshcd_reset_intr_aggr(hba);
+
+ tr_doorbell = ufshcd_readl(hba, REG_UTP_TRANSFER_REQ_DOOR_BELL);
+@@ -5909,7 +6013,7 @@ static irqreturn_t ufshcd_intr(int irq, void *__hba)
+ intr_status = ufshcd_readl(hba, REG_INTERRUPT_STATUS);
+ } while (intr_status && --retries);
+
+- if (retval == IRQ_NONE) {
++ if (enabled_intr_status && retval == IRQ_NONE) {
+ dev_err(hba->dev, "%s: Unhandled interrupt 0x%08x\n",
+ __func__, intr_status);
+ ufshcd_dump_regs(hba, 0, UFSHCI_REG_SPACE_SIZE, "host_regs: ");
+diff --git a/drivers/scsi/ufs/ufshcd.h b/drivers/scsi/ufs/ufshcd.h
+index 16187be98a94c..4bf98c2295372 100644
+--- a/drivers/scsi/ufs/ufshcd.h
++++ b/drivers/scsi/ufs/ufshcd.h
+@@ -520,6 +520,41 @@ enum ufshcd_quirks {
+ * ops (get_ufs_hci_version) to get the correct version.
+ */
+ UFSHCD_QUIRK_BROKEN_UFS_HCI_VERSION = 1 << 5,
++
++ /*
++ * Clear handling for transfer/task request list is just opposite.
++ */
++ UFSHCI_QUIRK_BROKEN_REQ_LIST_CLR = 1 << 6,
++
++ /*
++ * This quirk needs to be enabled if host controller doesn't allow
++ * that the interrupt aggregation timer and counter are reset by s/w.
++ */
++ UFSHCI_QUIRK_SKIP_RESET_INTR_AGGR = 1 << 7,
++
++ /*
++ * This quirk needs to be enabled if host controller cannot be
++ * enabled via HCE register.
++ */
++ UFSHCI_QUIRK_BROKEN_HCE = 1 << 8,
++
++ /*
++ * This quirk needs to be enabled if the host controller regards
++ * resolution of the values of PRDTO and PRDTL in UTRD as byte.
++ */
++ UFSHCD_QUIRK_PRDT_BYTE_GRAN = 1 << 9,
++
++ /*
++ * This quirk needs to be enabled if the host controller reports
++ * OCS FATAL ERROR with device error through sense data
++ */
++ UFSHCD_QUIRK_BROKEN_OCS_FATAL_ERROR = 1 << 10,
++
++ /*
++ * This quirk needs to be enabled if the host controller has
++ * auto-hibernate capability but it doesn't work.
++ */
++ UFSHCD_QUIRK_BROKEN_AUTO_HIBERN8 = 1 << 11,
+ };
+
+ enum ufshcd_caps {
+@@ -786,7 +821,8 @@ return true;
+
+ static inline bool ufshcd_is_auto_hibern8_supported(struct ufs_hba *hba)
+ {
+- return (hba->capabilities & MASK_AUTO_HIBERN8_SUPPORT);
++ return (hba->capabilities & MASK_AUTO_HIBERN8_SUPPORT) &&
++ !(hba->quirks & UFSHCD_QUIRK_BROKEN_AUTO_HIBERN8);
+ }
+
+ static inline bool ufshcd_is_auto_hibern8_enabled(struct ufs_hba *hba)
+diff --git a/drivers/spi/Kconfig b/drivers/spi/Kconfig
+index 8f1f8fca79e37..8eb053803429c 100644
+--- a/drivers/spi/Kconfig
++++ b/drivers/spi/Kconfig
+@@ -999,4 +999,7 @@ config SPI_SLAVE_SYSTEM_CONTROL
+
+ endif # SPI_SLAVE
+
++config SPI_DYNAMIC
++ def_bool ACPI || OF_DYNAMIC || SPI_SLAVE
++
+ endif # SPI
+diff --git a/drivers/spi/spi-stm32.c b/drivers/spi/spi-stm32.c
+index 4c643dfc7fbbc..9672cda2f8031 100644
+--- a/drivers/spi/spi-stm32.c
++++ b/drivers/spi/spi-stm32.c
+@@ -13,6 +13,7 @@
+ #include <linux/iopoll.h>
+ #include <linux/module.h>
+ #include <linux/of_platform.h>
++#include <linux/pinctrl/consumer.h>
+ #include <linux/pm_runtime.h>
+ #include <linux/reset.h>
+ #include <linux/spi/spi.h>
+@@ -1996,6 +1997,8 @@ static int stm32_spi_remove(struct platform_device *pdev)
+
+ pm_runtime_disable(&pdev->dev);
+
++ pinctrl_pm_select_sleep_state(&pdev->dev);
++
+ return 0;
+ }
+
+@@ -2007,13 +2010,18 @@ static int stm32_spi_runtime_suspend(struct device *dev)
+
+ clk_disable_unprepare(spi->clk);
+
+- return 0;
++ return pinctrl_pm_select_sleep_state(dev);
+ }
+
+ static int stm32_spi_runtime_resume(struct device *dev)
+ {
+ struct spi_master *master = dev_get_drvdata(dev);
+ struct stm32_spi *spi = spi_master_get_devdata(master);
++ int ret;
++
++ ret = pinctrl_pm_select_default_state(dev);
++ if (ret)
++ return ret;
+
+ return clk_prepare_enable(spi->clk);
+ }
+@@ -2043,10 +2051,23 @@ static int stm32_spi_resume(struct device *dev)
+ return ret;
+
+ ret = spi_master_resume(master);
+- if (ret)
++ if (ret) {
+ clk_disable_unprepare(spi->clk);
++ return ret;
++ }
+
+- return ret;
++ ret = pm_runtime_get_sync(dev);
++ if (ret) {
++ dev_err(dev, "Unable to power device:%d\n", ret);
++ return ret;
++ }
++
++ spi->cfg->config(spi);
++
++ pm_runtime_mark_last_busy(dev);
++ pm_runtime_put_autosuspend(dev);
++
++ return 0;
+ }
+ #endif
+
+diff --git a/drivers/spi/spi.c b/drivers/spi/spi.c
+index 8158e281f3540..5c5a95792c0d3 100644
+--- a/drivers/spi/spi.c
++++ b/drivers/spi/spi.c
+@@ -475,6 +475,12 @@ static LIST_HEAD(spi_controller_list);
+ */
+ static DEFINE_MUTEX(board_lock);
+
++/*
++ * Prevents addition of devices with same chip select and
++ * addition of devices below an unregistering controller.
++ */
++static DEFINE_MUTEX(spi_add_lock);
++
+ /**
+ * spi_alloc_device - Allocate a new SPI device
+ * @ctlr: Controller to which device is connected
+@@ -554,7 +560,6 @@ static int spi_dev_check(struct device *dev, void *data)
+ */
+ int spi_add_device(struct spi_device *spi)
+ {
+- static DEFINE_MUTEX(spi_add_lock);
+ struct spi_controller *ctlr = spi->controller;
+ struct device *dev = ctlr->dev.parent;
+ int status;
+@@ -582,6 +587,13 @@ int spi_add_device(struct spi_device *spi)
+ goto done;
+ }
+
++ /* Controller may unregister concurrently */
++ if (IS_ENABLED(CONFIG_SPI_DYNAMIC) &&
++ !device_is_registered(&ctlr->dev)) {
++ status = -ENODEV;
++ goto done;
++ }
++
+ /* Descriptors take precedence */
+ if (ctlr->cs_gpiods)
+ spi->cs_gpiod = ctlr->cs_gpiods[spi->chip_select];
+@@ -2764,6 +2776,10 @@ void spi_unregister_controller(struct spi_controller *ctlr)
+ struct spi_controller *found;
+ int id = ctlr->bus_num;
+
++ /* Prevent addition of new devices, unregister existing ones */
++ if (IS_ENABLED(CONFIG_SPI_DYNAMIC))
++ mutex_lock(&spi_add_lock);
++
+ device_for_each_child(&ctlr->dev, NULL, __unregister);
+
+ /* First make sure that this controller was ever added */
+@@ -2784,6 +2800,9 @@ void spi_unregister_controller(struct spi_controller *ctlr)
+ if (found == ctlr)
+ idr_remove(&spi_master_idr, id);
+ mutex_unlock(&board_lock);
++
++ if (IS_ENABLED(CONFIG_SPI_DYNAMIC))
++ mutex_unlock(&spi_add_lock);
+ }
+ EXPORT_SYMBOL_GPL(spi_unregister_controller);
+
+diff --git a/drivers/target/target_core_user.c b/drivers/target/target_core_user.c
+index 560bfec933bc3..63cca0e1e9123 100644
+--- a/drivers/target/target_core_user.c
++++ b/drivers/target/target_core_user.c
+@@ -601,7 +601,7 @@ static inline void tcmu_flush_dcache_range(void *vaddr, size_t size)
+ size = round_up(size+offset, PAGE_SIZE);
+
+ while (size) {
+- flush_dcache_page(virt_to_page(start));
++ flush_dcache_page(vmalloc_to_page(start));
+ start += PAGE_SIZE;
+ size -= PAGE_SIZE;
+ }
+diff --git a/drivers/vfio/pci/vfio_pci_private.h b/drivers/vfio/pci/vfio_pci_private.h
+index 86a02aff8735f..61ca8ab165dc1 100644
+--- a/drivers/vfio/pci/vfio_pci_private.h
++++ b/drivers/vfio/pci/vfio_pci_private.h
+@@ -33,12 +33,14 @@
+
+ struct vfio_pci_ioeventfd {
+ struct list_head next;
++ struct vfio_pci_device *vdev;
+ struct virqfd *virqfd;
+ void __iomem *addr;
+ uint64_t data;
+ loff_t pos;
+ int bar;
+ int count;
++ bool test_mem;
+ };
+
+ struct vfio_pci_irq_ctx {
+diff --git a/drivers/vfio/pci/vfio_pci_rdwr.c b/drivers/vfio/pci/vfio_pci_rdwr.c
+index 916b184df3a5b..9e353c484ace2 100644
+--- a/drivers/vfio/pci/vfio_pci_rdwr.c
++++ b/drivers/vfio/pci/vfio_pci_rdwr.c
+@@ -37,17 +37,70 @@
+ #define vfio_ioread8 ioread8
+ #define vfio_iowrite8 iowrite8
+
++#define VFIO_IOWRITE(size) \
++static int vfio_pci_iowrite##size(struct vfio_pci_device *vdev, \
++ bool test_mem, u##size val, void __iomem *io) \
++{ \
++ if (test_mem) { \
++ down_read(&vdev->memory_lock); \
++ if (!__vfio_pci_memory_enabled(vdev)) { \
++ up_read(&vdev->memory_lock); \
++ return -EIO; \
++ } \
++ } \
++ \
++ vfio_iowrite##size(val, io); \
++ \
++ if (test_mem) \
++ up_read(&vdev->memory_lock); \
++ \
++ return 0; \
++}
++
++VFIO_IOWRITE(8)
++VFIO_IOWRITE(16)
++VFIO_IOWRITE(32)
++#ifdef iowrite64
++VFIO_IOWRITE(64)
++#endif
++
++#define VFIO_IOREAD(size) \
++static int vfio_pci_ioread##size(struct vfio_pci_device *vdev, \
++ bool test_mem, u##size *val, void __iomem *io) \
++{ \
++ if (test_mem) { \
++ down_read(&vdev->memory_lock); \
++ if (!__vfio_pci_memory_enabled(vdev)) { \
++ up_read(&vdev->memory_lock); \
++ return -EIO; \
++ } \
++ } \
++ \
++ *val = vfio_ioread##size(io); \
++ \
++ if (test_mem) \
++ up_read(&vdev->memory_lock); \
++ \
++ return 0; \
++}
++
++VFIO_IOREAD(8)
++VFIO_IOREAD(16)
++VFIO_IOREAD(32)
++
+ /*
+ * Read or write from an __iomem region (MMIO or I/O port) with an excluded
+ * range which is inaccessible. The excluded range drops writes and fills
+ * reads with -1. This is intended for handling MSI-X vector tables and
+ * leftover space for ROM BARs.
+ */
+-static ssize_t do_io_rw(void __iomem *io, char __user *buf,
++static ssize_t do_io_rw(struct vfio_pci_device *vdev, bool test_mem,
++ void __iomem *io, char __user *buf,
+ loff_t off, size_t count, size_t x_start,
+ size_t x_end, bool iswrite)
+ {
+ ssize_t done = 0;
++ int ret;
+
+ while (count) {
+ size_t fillable, filled;
+@@ -66,9 +119,15 @@ static ssize_t do_io_rw(void __iomem *io, char __user *buf,
+ if (copy_from_user(&val, buf, 4))
+ return -EFAULT;
+
+- vfio_iowrite32(val, io + off);
++ ret = vfio_pci_iowrite32(vdev, test_mem,
++ val, io + off);
++ if (ret)
++ return ret;
+ } else {
+- val = vfio_ioread32(io + off);
++ ret = vfio_pci_ioread32(vdev, test_mem,
++ &val, io + off);
++ if (ret)
++ return ret;
+
+ if (copy_to_user(buf, &val, 4))
+ return -EFAULT;
+@@ -82,9 +141,15 @@ static ssize_t do_io_rw(void __iomem *io, char __user *buf,
+ if (copy_from_user(&val, buf, 2))
+ return -EFAULT;
+
+- vfio_iowrite16(val, io + off);
++ ret = vfio_pci_iowrite16(vdev, test_mem,
++ val, io + off);
++ if (ret)
++ return ret;
+ } else {
+- val = vfio_ioread16(io + off);
++ ret = vfio_pci_ioread16(vdev, test_mem,
++ &val, io + off);
++ if (ret)
++ return ret;
+
+ if (copy_to_user(buf, &val, 2))
+ return -EFAULT;
+@@ -98,9 +163,15 @@ static ssize_t do_io_rw(void __iomem *io, char __user *buf,
+ if (copy_from_user(&val, buf, 1))
+ return -EFAULT;
+
+- vfio_iowrite8(val, io + off);
++ ret = vfio_pci_iowrite8(vdev, test_mem,
++ val, io + off);
++ if (ret)
++ return ret;
+ } else {
+- val = vfio_ioread8(io + off);
++ ret = vfio_pci_ioread8(vdev, test_mem,
++ &val, io + off);
++ if (ret)
++ return ret;
+
+ if (copy_to_user(buf, &val, 1))
+ return -EFAULT;
+@@ -178,14 +249,6 @@ ssize_t vfio_pci_bar_rw(struct vfio_pci_device *vdev, char __user *buf,
+
+ count = min(count, (size_t)(end - pos));
+
+- if (res->flags & IORESOURCE_MEM) {
+- down_read(&vdev->memory_lock);
+- if (!__vfio_pci_memory_enabled(vdev)) {
+- up_read(&vdev->memory_lock);
+- return -EIO;
+- }
+- }
+-
+ if (bar == PCI_ROM_RESOURCE) {
+ /*
+ * The ROM can fill less space than the BAR, so we start the
+@@ -213,7 +276,8 @@ ssize_t vfio_pci_bar_rw(struct vfio_pci_device *vdev, char __user *buf,
+ x_end = vdev->msix_offset + vdev->msix_size;
+ }
+
+- done = do_io_rw(io, buf, pos, count, x_start, x_end, iswrite);
++ done = do_io_rw(vdev, res->flags & IORESOURCE_MEM, io, buf, pos,
++ count, x_start, x_end, iswrite);
+
+ if (done >= 0)
+ *ppos += done;
+@@ -221,9 +285,6 @@ ssize_t vfio_pci_bar_rw(struct vfio_pci_device *vdev, char __user *buf,
+ if (bar == PCI_ROM_RESOURCE)
+ pci_unmap_rom(pdev, io);
+ out:
+- if (res->flags & IORESOURCE_MEM)
+- up_read(&vdev->memory_lock);
+-
+ return done;
+ }
+
+@@ -278,7 +339,12 @@ ssize_t vfio_pci_vga_rw(struct vfio_pci_device *vdev, char __user *buf,
+ return ret;
+ }
+
+- done = do_io_rw(iomem, buf, off, count, 0, 0, iswrite);
++ /*
++ * VGA MMIO is a legacy, non-BAR resource that hopefully allows
++ * probing, so we don't currently worry about access in relation
++ * to the memory enable bit in the command register.
++ */
++ done = do_io_rw(vdev, false, iomem, buf, off, count, 0, 0, iswrite);
+
+ vga_put(vdev->pdev, rsrc);
+
+@@ -296,17 +362,21 @@ static int vfio_pci_ioeventfd_handler(void *opaque, void *unused)
+
+ switch (ioeventfd->count) {
+ case 1:
+- vfio_iowrite8(ioeventfd->data, ioeventfd->addr);
++ vfio_pci_iowrite8(ioeventfd->vdev, ioeventfd->test_mem,
++ ioeventfd->data, ioeventfd->addr);
+ break;
+ case 2:
+- vfio_iowrite16(ioeventfd->data, ioeventfd->addr);
++ vfio_pci_iowrite16(ioeventfd->vdev, ioeventfd->test_mem,
++ ioeventfd->data, ioeventfd->addr);
+ break;
+ case 4:
+- vfio_iowrite32(ioeventfd->data, ioeventfd->addr);
++ vfio_pci_iowrite32(ioeventfd->vdev, ioeventfd->test_mem,
++ ioeventfd->data, ioeventfd->addr);
+ break;
+ #ifdef iowrite64
+ case 8:
+- vfio_iowrite64(ioeventfd->data, ioeventfd->addr);
++ vfio_pci_iowrite64(ioeventfd->vdev, ioeventfd->test_mem,
++ ioeventfd->data, ioeventfd->addr);
+ break;
+ #endif
+ }
+@@ -378,11 +448,13 @@ long vfio_pci_ioeventfd(struct vfio_pci_device *vdev, loff_t offset,
+ goto out_unlock;
+ }
+
++ ioeventfd->vdev = vdev;
+ ioeventfd->addr = vdev->barmap[bar] + pos;
+ ioeventfd->data = data;
+ ioeventfd->pos = pos;
+ ioeventfd->bar = bar;
+ ioeventfd->count = count;
++ ioeventfd->test_mem = vdev->pdev->resource[bar].flags & IORESOURCE_MEM;
+
+ ret = vfio_virqfd_enable(ioeventfd, vfio_pci_ioeventfd_handler,
+ NULL, NULL, &ioeventfd->virqfd, fd);
+diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
+index 5e556ac9102a5..f48f0db908a46 100644
+--- a/drivers/vfio/vfio_iommu_type1.c
++++ b/drivers/vfio/vfio_iommu_type1.c
+@@ -1422,13 +1422,16 @@ static int vfio_bus_type(struct device *dev, void *data)
+ static int vfio_iommu_replay(struct vfio_iommu *iommu,
+ struct vfio_domain *domain)
+ {
+- struct vfio_domain *d;
++ struct vfio_domain *d = NULL;
+ struct rb_node *n;
+ unsigned long limit = rlimit(RLIMIT_MEMLOCK) >> PAGE_SHIFT;
+ int ret;
+
+ /* Arbitrarily pick the first domain in the list for lookups */
+- d = list_first_entry(&iommu->domain_list, struct vfio_domain, next);
++ if (!list_empty(&iommu->domain_list))
++ d = list_first_entry(&iommu->domain_list,
++ struct vfio_domain, next);
++
+ n = rb_first(&iommu->dma_list);
+
+ for (; n; n = rb_next(n)) {
+@@ -1446,6 +1449,11 @@ static int vfio_iommu_replay(struct vfio_iommu *iommu,
+ phys_addr_t p;
+ dma_addr_t i;
+
++ if (WARN_ON(!d)) { /* mapped w/o a domain?! */
++ ret = -EINVAL;
++ goto unwind;
++ }
++
+ phys = iommu_iova_to_phys(d->domain, iova);
+
+ if (WARN_ON(!phys)) {
+@@ -1475,7 +1483,7 @@ static int vfio_iommu_replay(struct vfio_iommu *iommu,
+ if (npage <= 0) {
+ WARN_ON(!npage);
+ ret = (int)npage;
+- return ret;
++ goto unwind;
+ }
+
+ phys = pfn << PAGE_SHIFT;
+@@ -1484,14 +1492,67 @@ static int vfio_iommu_replay(struct vfio_iommu *iommu,
+
+ ret = iommu_map(domain->domain, iova, phys,
+ size, dma->prot | domain->prot);
+- if (ret)
+- return ret;
++ if (ret) {
++ if (!dma->iommu_mapped)
++ vfio_unpin_pages_remote(dma, iova,
++ phys >> PAGE_SHIFT,
++ size >> PAGE_SHIFT,
++ true);
++ goto unwind;
++ }
+
+ iova += size;
+ }
++ }
++
++ /* All dmas are now mapped, defer to second tree walk for unwind */
++ for (n = rb_first(&iommu->dma_list); n; n = rb_next(n)) {
++ struct vfio_dma *dma = rb_entry(n, struct vfio_dma, node);
++
+ dma->iommu_mapped = true;
+ }
++
+ return 0;
++
++unwind:
++ for (; n; n = rb_prev(n)) {
++ struct vfio_dma *dma = rb_entry(n, struct vfio_dma, node);
++ dma_addr_t iova;
++
++ if (dma->iommu_mapped) {
++ iommu_unmap(domain->domain, dma->iova, dma->size);
++ continue;
++ }
++
++ iova = dma->iova;
++ while (iova < dma->iova + dma->size) {
++ phys_addr_t phys, p;
++ size_t size;
++ dma_addr_t i;
++
++ phys = iommu_iova_to_phys(domain->domain, iova);
++ if (!phys) {
++ iova += PAGE_SIZE;
++ continue;
++ }
++
++ size = PAGE_SIZE;
++ p = phys + size;
++ i = iova + size;
++ while (i < dma->iova + dma->size &&
++ p == iommu_iova_to_phys(domain->domain, i)) {
++ size += PAGE_SIZE;
++ p += PAGE_SIZE;
++ i += PAGE_SIZE;
++ }
++
++ iommu_unmap(domain->domain, iova, size);
++ vfio_unpin_pages_remote(dma, iova, phys >> PAGE_SHIFT,
++ size >> PAGE_SHIFT, true);
++ }
++ }
++
++ return ret;
+ }
+
+ /*
+diff --git a/drivers/video/fbdev/efifb.c b/drivers/video/fbdev/efifb.c
+index 65491ae74808d..e57c00824965c 100644
+--- a/drivers/video/fbdev/efifb.c
++++ b/drivers/video/fbdev/efifb.c
+@@ -453,7 +453,7 @@ static int efifb_probe(struct platform_device *dev)
+ info->apertures->ranges[0].base = efifb_fix.smem_start;
+ info->apertures->ranges[0].size = size_remap;
+
+- if (efi_enabled(EFI_BOOT) &&
++ if (efi_enabled(EFI_MEMMAP) &&
+ !efi_mem_desc_lookup(efifb_fix.smem_start, &md)) {
+ if ((efifb_fix.smem_start + efifb_fix.smem_len) >
+ (md.phys_addr + (md.num_pages << EFI_PAGE_SHIFT))) {
+diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
+index 58b96baa8d488..4f7c73e6052f6 100644
+--- a/drivers/virtio/virtio_ring.c
++++ b/drivers/virtio/virtio_ring.c
+@@ -1960,6 +1960,9 @@ bool virtqueue_poll(struct virtqueue *_vq, unsigned last_used_idx)
+ {
+ struct vring_virtqueue *vq = to_vvq(_vq);
+
++ if (unlikely(vq->broken))
++ return false;
++
+ virtio_mb(vq->weak_barriers);
+ return vq->packed_ring ? virtqueue_poll_packed(_vq, last_used_idx) :
+ virtqueue_poll_split(_vq, last_used_idx);
+diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
+index b6d27762c6f8c..5fbadd07819bd 100644
+--- a/drivers/xen/swiotlb-xen.c
++++ b/drivers/xen/swiotlb-xen.c
+@@ -335,6 +335,7 @@ xen_swiotlb_free_coherent(struct device *hwdev, size_t size, void *vaddr,
+ int order = get_order(size);
+ phys_addr_t phys;
+ u64 dma_mask = DMA_BIT_MASK(32);
++ struct page *page;
+
+ if (hwdev && hwdev->coherent_dma_mask)
+ dma_mask = hwdev->coherent_dma_mask;
+@@ -346,9 +347,14 @@ xen_swiotlb_free_coherent(struct device *hwdev, size_t size, void *vaddr,
+ /* Convert the size to actually allocated. */
+ size = 1UL << (order + XEN_PAGE_SHIFT);
+
++ if (is_vmalloc_addr(vaddr))
++ page = vmalloc_to_page(vaddr);
++ else
++ page = virt_to_page(vaddr);
++
+ if (!WARN_ON((dev_addr + size - 1 > dma_mask) ||
+ range_straddles_page_boundary(phys, size)) &&
+- TestClearPageXenRemapped(virt_to_page(vaddr)))
++ TestClearPageXenRemapped(page))
+ xen_destroy_contiguous_region(phys, order);
+
+ xen_free_coherent_pages(hwdev, size, vaddr, (dma_addr_t)phys, attrs);
+diff --git a/fs/afs/dynroot.c b/fs/afs/dynroot.c
+index b79879aacc02e..7b784af604fd9 100644
+--- a/fs/afs/dynroot.c
++++ b/fs/afs/dynroot.c
+@@ -382,15 +382,17 @@ void afs_dynroot_depopulate(struct super_block *sb)
+ net->dynroot_sb = NULL;
+ mutex_unlock(&net->proc_cells_lock);
+
+- inode_lock(root->d_inode);
+-
+- /* Remove all the pins for dirs created for manually added cells */
+- list_for_each_entry_safe(subdir, tmp, &root->d_subdirs, d_child) {
+- if (subdir->d_fsdata) {
+- subdir->d_fsdata = NULL;
+- dput(subdir);
++ if (root) {
++ inode_lock(root->d_inode);
++
++ /* Remove all the pins for dirs created for manually added cells */
++ list_for_each_entry_safe(subdir, tmp, &root->d_subdirs, d_child) {
++ if (subdir->d_fsdata) {
++ subdir->d_fsdata = NULL;
++ dput(subdir);
++ }
+ }
+- }
+
+- inode_unlock(root->d_inode);
++ inode_unlock(root->d_inode);
++ }
+ }
+diff --git a/fs/afs/fs_operation.c b/fs/afs/fs_operation.c
+index 24fd163c6323e..97cab12b0a6c2 100644
+--- a/fs/afs/fs_operation.c
++++ b/fs/afs/fs_operation.c
+@@ -235,6 +235,7 @@ int afs_put_operation(struct afs_operation *op)
+ afs_end_cursor(&op->ac);
+ afs_put_serverlist(op->net, op->server_list);
+ afs_put_volume(op->net, op->volume, afs_volume_trace_put_put_op);
++ key_put(op->key);
+ kfree(op);
+ return ret;
+ }
+diff --git a/fs/ceph/mds_client.c b/fs/ceph/mds_client.c
+index dea971f9d89ee..946f9a92658ab 100644
+--- a/fs/ceph/mds_client.c
++++ b/fs/ceph/mds_client.c
+@@ -4361,7 +4361,6 @@ int ceph_mdsc_init(struct ceph_fs_client *fsc)
+ goto err_mdsc;
+ }
+
+- fsc->mdsc = mdsc;
+ init_completion(&mdsc->safe_umount_waiters);
+ init_waitqueue_head(&mdsc->session_close_wq);
+ INIT_LIST_HEAD(&mdsc->waiting_for_map);
+@@ -4416,6 +4415,8 @@ int ceph_mdsc_init(struct ceph_fs_client *fsc)
+
+ strscpy(mdsc->nodename, utsname()->nodename,
+ sizeof(mdsc->nodename));
++
++ fsc->mdsc = mdsc;
+ return 0;
+
+ err_mdsmap:
+diff --git a/fs/eventpoll.c b/fs/eventpoll.c
+index 12eebcdea9c8a..e0decff22ae27 100644
+--- a/fs/eventpoll.c
++++ b/fs/eventpoll.c
+@@ -1994,9 +1994,11 @@ static int ep_loop_check_proc(void *priv, void *cookie, int call_nests)
+ * not already there, and calling reverse_path_check()
+ * during ep_insert().
+ */
+- if (list_empty(&epi->ffd.file->f_tfile_llink))
++ if (list_empty(&epi->ffd.file->f_tfile_llink)) {
++ get_file(epi->ffd.file);
+ list_add(&epi->ffd.file->f_tfile_llink,
+ &tfile_check_list);
++ }
+ }
+ }
+ mutex_unlock(&ep->mtx);
+@@ -2040,6 +2042,7 @@ static void clear_tfile_check_list(void)
+ file = list_first_entry(&tfile_check_list, struct file,
+ f_tfile_llink);
+ list_del_init(&file->f_tfile_llink);
++ fput(file);
+ }
+ INIT_LIST_HEAD(&tfile_check_list);
+ }
+@@ -2200,25 +2203,22 @@ int do_epoll_ctl(int epfd, int op, int fd, struct epoll_event *epds,
+ full_check = 1;
+ if (is_file_epoll(tf.file)) {
+ error = -ELOOP;
+- if (ep_loop_check(ep, tf.file) != 0) {
+- clear_tfile_check_list();
++ if (ep_loop_check(ep, tf.file) != 0)
+ goto error_tgt_fput;
+- }
+- } else
++ } else {
++ get_file(tf.file);
+ list_add(&tf.file->f_tfile_llink,
+ &tfile_check_list);
++ }
+ error = epoll_mutex_lock(&ep->mtx, 0, nonblock);
+- if (error) {
+-out_del:
+- list_del(&tf.file->f_tfile_llink);
++ if (error)
+ goto error_tgt_fput;
+- }
+ if (is_file_epoll(tf.file)) {
+ tep = tf.file->private_data;
+ error = epoll_mutex_lock(&tep->mtx, 1, nonblock);
+ if (error) {
+ mutex_unlock(&ep->mtx);
+- goto out_del;
++ goto error_tgt_fput;
+ }
+ }
+ }
+@@ -2239,8 +2239,6 @@ out_del:
+ error = ep_insert(ep, epds, tf.file, fd, full_check);
+ } else
+ error = -EEXIST;
+- if (full_check)
+- clear_tfile_check_list();
+ break;
+ case EPOLL_CTL_DEL:
+ if (epi)
+@@ -2263,8 +2261,10 @@ out_del:
+ mutex_unlock(&ep->mtx);
+
+ error_tgt_fput:
+- if (full_check)
++ if (full_check) {
++ clear_tfile_check_list();
+ mutex_unlock(&epmutex);
++ }
+
+ fdput(tf);
+ error_fput:
+diff --git a/fs/ext4/block_validity.c b/fs/ext4/block_validity.c
+index 16e9b2fda03ae..e830a9d4e10d3 100644
+--- a/fs/ext4/block_validity.c
++++ b/fs/ext4/block_validity.c
+@@ -24,6 +24,7 @@ struct ext4_system_zone {
+ struct rb_node node;
+ ext4_fsblk_t start_blk;
+ unsigned int count;
++ u32 ino;
+ };
+
+ static struct kmem_cache *ext4_system_zone_cachep;
+@@ -45,7 +46,8 @@ void ext4_exit_system_zone(void)
+ static inline int can_merge(struct ext4_system_zone *entry1,
+ struct ext4_system_zone *entry2)
+ {
+- if ((entry1->start_blk + entry1->count) == entry2->start_blk)
++ if ((entry1->start_blk + entry1->count) == entry2->start_blk &&
++ entry1->ino == entry2->ino)
+ return 1;
+ return 0;
+ }
+@@ -66,9 +68,9 @@ static void release_system_zone(struct ext4_system_blocks *system_blks)
+ */
+ static int add_system_zone(struct ext4_system_blocks *system_blks,
+ ext4_fsblk_t start_blk,
+- unsigned int count)
++ unsigned int count, u32 ino)
+ {
+- struct ext4_system_zone *new_entry = NULL, *entry;
++ struct ext4_system_zone *new_entry, *entry;
+ struct rb_node **n = &system_blks->root.rb_node, *node;
+ struct rb_node *parent = NULL, *new_node = NULL;
+
+@@ -79,30 +81,21 @@ static int add_system_zone(struct ext4_system_blocks *system_blks,
+ n = &(*n)->rb_left;
+ else if (start_blk >= (entry->start_blk + entry->count))
+ n = &(*n)->rb_right;
+- else {
+- if (start_blk + count > (entry->start_blk +
+- entry->count))
+- entry->count = (start_blk + count -
+- entry->start_blk);
+- new_node = *n;
+- new_entry = rb_entry(new_node, struct ext4_system_zone,
+- node);
+- break;
+- }
++ else /* Unexpected overlap of system zones. */
++ return -EFSCORRUPTED;
+ }
+
+- if (!new_entry) {
+- new_entry = kmem_cache_alloc(ext4_system_zone_cachep,
+- GFP_KERNEL);
+- if (!new_entry)
+- return -ENOMEM;
+- new_entry->start_blk = start_blk;
+- new_entry->count = count;
+- new_node = &new_entry->node;
+-
+- rb_link_node(new_node, parent, n);
+- rb_insert_color(new_node, &system_blks->root);
+- }
++ new_entry = kmem_cache_alloc(ext4_system_zone_cachep,
++ GFP_KERNEL);
++ if (!new_entry)
++ return -ENOMEM;
++ new_entry->start_blk = start_blk;
++ new_entry->count = count;
++ new_entry->ino = ino;
++ new_node = &new_entry->node;
++
++ rb_link_node(new_node, parent, n);
++ rb_insert_color(new_node, &system_blks->root);
+
+ /* Can we merge to the left? */
+ node = rb_prev(new_node);
+@@ -159,7 +152,7 @@ static void debug_print_tree(struct ext4_sb_info *sbi)
+ static int ext4_data_block_valid_rcu(struct ext4_sb_info *sbi,
+ struct ext4_system_blocks *system_blks,
+ ext4_fsblk_t start_blk,
+- unsigned int count)
++ unsigned int count, ino_t ino)
+ {
+ struct ext4_system_zone *entry;
+ struct rb_node *n;
+@@ -180,7 +173,7 @@ static int ext4_data_block_valid_rcu(struct ext4_sb_info *sbi,
+ else if (start_blk >= (entry->start_blk + entry->count))
+ n = n->rb_right;
+ else
+- return 0;
++ return entry->ino == ino;
+ }
+ return 1;
+ }
+@@ -214,19 +207,18 @@ static int ext4_protect_reserved_inode(struct super_block *sb,
+ if (n == 0) {
+ i++;
+ } else {
+- if (!ext4_data_block_valid_rcu(sbi, system_blks,
+- map.m_pblk, n)) {
+- err = -EFSCORRUPTED;
+- __ext4_error(sb, __func__, __LINE__, -err,
+- map.m_pblk, "blocks %llu-%llu "
+- "from inode %u overlap system zone",
+- map.m_pblk,
+- map.m_pblk + map.m_len - 1, ino);
++ err = add_system_zone(system_blks, map.m_pblk, n, ino);
++ if (err < 0) {
++ if (err == -EFSCORRUPTED) {
++ __ext4_error(sb, __func__, __LINE__,
++ -err, map.m_pblk,
++ "blocks %llu-%llu from inode %u overlap system zone",
++ map.m_pblk,
++ map.m_pblk + map.m_len - 1,
++ ino);
++ }
+ break;
+ }
+- err = add_system_zone(system_blks, map.m_pblk, n);
+- if (err < 0)
+- break;
+ i += n;
+ }
+ }
+@@ -280,19 +272,19 @@ int ext4_setup_system_zone(struct super_block *sb)
+ ((i < 5) || ((i % flex_size) == 0)))
+ add_system_zone(system_blks,
+ ext4_group_first_block_no(sb, i),
+- ext4_bg_num_gdb(sb, i) + 1);
++ ext4_bg_num_gdb(sb, i) + 1, 0);
+ gdp = ext4_get_group_desc(sb, i, NULL);
+ ret = add_system_zone(system_blks,
+- ext4_block_bitmap(sb, gdp), 1);
++ ext4_block_bitmap(sb, gdp), 1, 0);
+ if (ret)
+ goto err;
+ ret = add_system_zone(system_blks,
+- ext4_inode_bitmap(sb, gdp), 1);
++ ext4_inode_bitmap(sb, gdp), 1, 0);
+ if (ret)
+ goto err;
+ ret = add_system_zone(system_blks,
+ ext4_inode_table(sb, gdp),
+- sbi->s_itb_per_group);
++ sbi->s_itb_per_group, 0);
+ if (ret)
+ goto err;
+ }
+@@ -341,7 +333,7 @@ void ext4_release_system_zone(struct super_block *sb)
+ call_rcu(&system_blks->rcu, ext4_destroy_system_zone);
+ }
+
+-int ext4_data_block_valid(struct ext4_sb_info *sbi, ext4_fsblk_t start_blk,
++int ext4_inode_block_valid(struct inode *inode, ext4_fsblk_t start_blk,
+ unsigned int count)
+ {
+ struct ext4_system_blocks *system_blks;
+@@ -353,9 +345,9 @@ int ext4_data_block_valid(struct ext4_sb_info *sbi, ext4_fsblk_t start_blk,
+ * mount option.
+ */
+ rcu_read_lock();
+- system_blks = rcu_dereference(sbi->system_blks);
+- ret = ext4_data_block_valid_rcu(sbi, system_blks, start_blk,
+- count);
++ system_blks = rcu_dereference(EXT4_SB(inode->i_sb)->system_blks);
++ ret = ext4_data_block_valid_rcu(EXT4_SB(inode->i_sb), system_blks,
++ start_blk, count, inode->i_ino);
+ rcu_read_unlock();
+ return ret;
+ }
+@@ -374,8 +366,7 @@ int ext4_check_blockref(const char *function, unsigned int line,
+ while (bref < p+max) {
+ blk = le32_to_cpu(*bref++);
+ if (blk &&
+- unlikely(!ext4_data_block_valid(EXT4_SB(inode->i_sb),
+- blk, 1))) {
++ unlikely(!ext4_inode_block_valid(inode, blk, 1))) {
+ ext4_error_inode(inode, function, line, blk,
+ "invalid block");
+ return -EFSCORRUPTED;
+diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h
+index 42f5060f3cdf1..42815304902b8 100644
+--- a/fs/ext4/ext4.h
++++ b/fs/ext4/ext4.h
+@@ -3363,9 +3363,9 @@ extern void ext4_release_system_zone(struct super_block *sb);
+ extern int ext4_setup_system_zone(struct super_block *sb);
+ extern int __init ext4_init_system_zone(void);
+ extern void ext4_exit_system_zone(void);
+-extern int ext4_data_block_valid(struct ext4_sb_info *sbi,
+- ext4_fsblk_t start_blk,
+- unsigned int count);
++extern int ext4_inode_block_valid(struct inode *inode,
++ ext4_fsblk_t start_blk,
++ unsigned int count);
+ extern int ext4_check_blockref(const char *, unsigned int,
+ struct inode *, __le32 *, unsigned int);
+
+diff --git a/fs/ext4/extents.c b/fs/ext4/extents.c
+index 221f240eae604..d75054570e44c 100644
+--- a/fs/ext4/extents.c
++++ b/fs/ext4/extents.c
+@@ -340,7 +340,7 @@ static int ext4_valid_extent(struct inode *inode, struct ext4_extent *ext)
+ */
+ if (lblock + len <= lblock)
+ return 0;
+- return ext4_data_block_valid(EXT4_SB(inode->i_sb), block, len);
++ return ext4_inode_block_valid(inode, block, len);
+ }
+
+ static int ext4_valid_extent_idx(struct inode *inode,
+@@ -348,7 +348,7 @@ static int ext4_valid_extent_idx(struct inode *inode,
+ {
+ ext4_fsblk_t block = ext4_idx_pblock(ext_idx);
+
+- return ext4_data_block_valid(EXT4_SB(inode->i_sb), block, 1);
++ return ext4_inode_block_valid(inode, block, 1);
+ }
+
+ static int ext4_valid_extent_entries(struct inode *inode,
+@@ -507,14 +507,10 @@ __read_extent_tree_block(const char *function, unsigned int line,
+ }
+ if (buffer_verified(bh) && !(flags & EXT4_EX_FORCE_CACHE))
+ return bh;
+- if (!ext4_has_feature_journal(inode->i_sb) ||
+- (inode->i_ino !=
+- le32_to_cpu(EXT4_SB(inode->i_sb)->s_es->s_journal_inum))) {
+- err = __ext4_ext_check(function, line, inode,
+- ext_block_hdr(bh), depth, pblk);
+- if (err)
+- goto errout;
+- }
++ err = __ext4_ext_check(function, line, inode,
++ ext_block_hdr(bh), depth, pblk);
++ if (err)
++ goto errout;
+ set_buffer_verified(bh);
+ /*
+ * If this is a leaf block, cache all of its entries
+diff --git a/fs/ext4/file.c b/fs/ext4/file.c
+index 2a01e31a032c4..8f742b53f1d40 100644
+--- a/fs/ext4/file.c
++++ b/fs/ext4/file.c
+@@ -428,6 +428,10 @@ restart:
+ */
+ if (*ilock_shared && (!IS_NOSEC(inode) || *extend ||
+ !ext4_overwrite_io(inode, offset, count))) {
++ if (iocb->ki_flags & IOCB_NOWAIT) {
++ ret = -EAGAIN;
++ goto out;
++ }
+ inode_unlock_shared(inode);
+ *ilock_shared = false;
+ inode_lock(inode);
+diff --git a/fs/ext4/indirect.c b/fs/ext4/indirect.c
+index be2b66eb65f7a..4026418257121 100644
+--- a/fs/ext4/indirect.c
++++ b/fs/ext4/indirect.c
+@@ -858,8 +858,7 @@ static int ext4_clear_blocks(handle_t *handle, struct inode *inode,
+ else if (ext4_should_journal_data(inode))
+ flags |= EXT4_FREE_BLOCKS_FORGET;
+
+- if (!ext4_data_block_valid(EXT4_SB(inode->i_sb), block_to_free,
+- count)) {
++ if (!ext4_inode_block_valid(inode, block_to_free, count)) {
+ EXT4_ERROR_INODE(inode, "attempt to clear invalid "
+ "blocks %llu len %lu",
+ (unsigned long long) block_to_free, count);
+@@ -1004,8 +1003,7 @@ static void ext4_free_branches(handle_t *handle, struct inode *inode,
+ if (!nr)
+ continue; /* A hole */
+
+- if (!ext4_data_block_valid(EXT4_SB(inode->i_sb),
+- nr, 1)) {
++ if (!ext4_inode_block_valid(inode, nr, 1)) {
+ EXT4_ERROR_INODE(inode,
+ "invalid indirect mapped "
+ "block %lu (level %d)",
+diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
+index 10dd470876b30..92573f8540ab7 100644
+--- a/fs/ext4/inode.c
++++ b/fs/ext4/inode.c
+@@ -394,8 +394,7 @@ static int __check_block_validity(struct inode *inode, const char *func,
+ (inode->i_ino ==
+ le32_to_cpu(EXT4_SB(inode->i_sb)->s_es->s_journal_inum)))
+ return 0;
+- if (!ext4_data_block_valid(EXT4_SB(inode->i_sb), map->m_pblk,
+- map->m_len)) {
++ if (!ext4_inode_block_valid(inode, map->m_pblk, map->m_len)) {
+ ext4_error_inode(inode, func, line, map->m_pblk,
+ "lblock %lu mapped to illegal pblock %llu "
+ "(length %d)", (unsigned long) map->m_lblk,
+@@ -4760,7 +4759,7 @@ struct inode *__ext4_iget(struct super_block *sb, unsigned long ino,
+
+ ret = 0;
+ if (ei->i_file_acl &&
+- !ext4_data_block_valid(EXT4_SB(sb), ei->i_file_acl, 1)) {
++ !ext4_inode_block_valid(inode, ei->i_file_acl, 1)) {
+ ext4_error_inode(inode, function, line, 0,
+ "iget: bad extended attribute block %llu",
+ ei->i_file_acl);
+diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c
+index c0a331e2feb02..38719c156573c 100644
+--- a/fs/ext4/mballoc.c
++++ b/fs/ext4/mballoc.c
+@@ -3090,7 +3090,7 @@ ext4_mb_mark_diskspace_used(struct ext4_allocation_context *ac,
+ block = ext4_grp_offs_to_block(sb, &ac->ac_b_ex);
+
+ len = EXT4_C2B(sbi, ac->ac_b_ex.fe_len);
+- if (!ext4_data_block_valid(sbi, block, len)) {
++ if (!ext4_inode_block_valid(ac->ac_inode, block, len)) {
+ ext4_error(sb, "Allocating blocks %llu-%llu which overlap "
+ "fs metadata", block, block+len);
+ /* File system mounted not to panic on error
+@@ -4915,7 +4915,7 @@ void ext4_free_blocks(handle_t *handle, struct inode *inode,
+
+ sbi = EXT4_SB(sb);
+ if (!(flags & EXT4_FREE_BLOCKS_VALIDATED) &&
+- !ext4_data_block_valid(sbi, block, count)) {
++ !ext4_inode_block_valid(inode, block, count)) {
+ ext4_error(sb, "Freeing blocks not in datazone - "
+ "block = %llu, count = %lu", block, count);
+ goto error_return;
+diff --git a/fs/ext4/namei.c b/fs/ext4/namei.c
+index 56738b538ddf4..a91a5bb8c3a2b 100644
+--- a/fs/ext4/namei.c
++++ b/fs/ext4/namei.c
+@@ -1396,8 +1396,8 @@ int ext4_search_dir(struct buffer_head *bh, char *search_buf, int buf_size,
+ ext4_match(dir, fname, de)) {
+ /* found a match - just to be sure, do
+ * a full check */
+- if (ext4_check_dir_entry(dir, NULL, de, bh, bh->b_data,
+- bh->b_size, offset))
++ if (ext4_check_dir_entry(dir, NULL, de, bh, search_buf,
++ buf_size, offset))
+ return -1;
+ *res_dir = de;
+ return 1;
+@@ -1858,7 +1858,7 @@ static struct ext4_dir_entry_2 *do_split(handle_t *handle, struct inode *dir,
+ blocksize, hinfo, map);
+ map -= count;
+ dx_sort_map(map, count);
+- /* Split the existing block in the middle, size-wise */
++ /* Ensure that neither split block is over half full */
+ size = 0;
+ move = 0;
+ for (i = count-1; i >= 0; i--) {
+@@ -1868,8 +1868,18 @@ static struct ext4_dir_entry_2 *do_split(handle_t *handle, struct inode *dir,
+ size += map[i].size;
+ move++;
+ }
+- /* map index at which we will split */
+- split = count - move;
++ /*
++ * map index at which we will split
++ *
++ * If the sum of active entries didn't exceed half the block size, just
++ * split it in half by count; each resulting block will have at least
++ * half the space free.
++ */
++ if (i > 0)
++ split = count - move;
++ else
++ split = count/2;
++
+ hash2 = map[split].hash;
+ continued = hash2 == map[split - 1].hash;
+ dxtrace(printk(KERN_INFO "Split block %lu at %x, %i/%i\n",
+@@ -2472,7 +2482,7 @@ int ext4_generic_delete_entry(handle_t *handle,
+ de = (struct ext4_dir_entry_2 *)entry_buf;
+ while (i < buf_size - csum_size) {
+ if (ext4_check_dir_entry(dir, NULL, de, bh,
+- bh->b_data, bh->b_size, i))
++ entry_buf, buf_size, i))
+ return -EFSCORRUPTED;
+ if (de == de_del) {
+ if (pde)
+diff --git a/fs/f2fs/compress.c b/fs/f2fs/compress.c
+index f6fbe61b1251e..2390f7943f6c8 100644
+--- a/fs/f2fs/compress.c
++++ b/fs/f2fs/compress.c
+@@ -1310,6 +1310,12 @@ retry_write:
+ congestion_wait(BLK_RW_ASYNC,
+ DEFAULT_IO_TIMEOUT);
+ lock_page(cc->rpages[i]);
++
++ if (!PageDirty(cc->rpages[i])) {
++ unlock_page(cc->rpages[i]);
++ continue;
++ }
++
+ clear_page_dirty_for_io(cc->rpages[i]);
+ goto retry_write;
+ }
+diff --git a/fs/f2fs/node.c b/fs/f2fs/node.c
+index 03e24df1c84f5..e61ce7fb0958b 100644
+--- a/fs/f2fs/node.c
++++ b/fs/f2fs/node.c
+@@ -1924,8 +1924,12 @@ continue_unlock:
+ goto continue_unlock;
+ }
+
+- /* flush inline_data, if it's async context. */
+- if (do_balance && is_inline_node(page)) {
++ /* flush inline_data/inode, if it's async context. */
++ if (!do_balance)
++ goto write_node;
++
++ /* flush inline_data */
++ if (is_inline_node(page)) {
+ clear_inline_node(page);
+ unlock_page(page);
+ flush_inline_data(sbi, ino_of_node(page));
+@@ -1938,7 +1942,7 @@ continue_unlock:
+ if (flush_dirty_inode(page))
+ goto lock_node;
+ }
+-
++write_node:
+ f2fs_wait_on_page_writeback(page, NODE, true, true);
+
+ if (!clear_page_dirty_for_io(page))
+diff --git a/fs/fat/fatent.c b/fs/fat/fatent.c
+index bbfe18c074179..f7e3304b78029 100644
+--- a/fs/fat/fatent.c
++++ b/fs/fat/fatent.c
+@@ -657,6 +657,9 @@ static void fat_ra_init(struct super_block *sb, struct fatent_ra *ra,
+ unsigned long ra_pages = sb->s_bdi->ra_pages;
+ unsigned int reada_blocks;
+
++ if (fatent->entry >= ent_limit)
++ return;
++
+ if (ra_pages > sb->s_bdi->io_pages)
+ ra_pages = rounddown(ra_pages, sb->s_bdi->io_pages);
+ reada_blocks = ra_pages << (PAGE_SHIFT - sb->s_blocksize_bits + 1);
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+index f926d94867f7b..dd8ad87540ef7 100644
+--- a/fs/io_uring.c
++++ b/fs/io_uring.c
+@@ -7609,6 +7609,33 @@ static bool io_timeout_remove_link(struct io_ring_ctx *ctx,
+ return found;
+ }
+
++static bool io_cancel_link_cb(struct io_wq_work *work, void *data)
++{
++ return io_match_link(container_of(work, struct io_kiocb, work), data);
++}
++
++static void io_attempt_cancel(struct io_ring_ctx *ctx, struct io_kiocb *req)
++{
++ enum io_wq_cancel cret;
++
++ /* cancel this particular work, if it's running */
++ cret = io_wq_cancel_work(ctx->io_wq, &req->work);
++ if (cret != IO_WQ_CANCEL_NOTFOUND)
++ return;
++
++ /* find links that hold this pending, cancel those */
++ cret = io_wq_cancel_cb(ctx->io_wq, io_cancel_link_cb, req, true);
++ if (cret != IO_WQ_CANCEL_NOTFOUND)
++ return;
++
++ /* if we have a poll link holding this pending, cancel that */
++ if (io_poll_remove_link(ctx, req))
++ return;
++
++ /* final option, timeout link is holding this req pending */
++ io_timeout_remove_link(ctx, req);
++}
++
+ static void io_uring_cancel_files(struct io_ring_ctx *ctx,
+ struct files_struct *files)
+ {
+@@ -7665,10 +7692,8 @@ static void io_uring_cancel_files(struct io_ring_ctx *ctx,
+ continue;
+ }
+ } else {
+- io_wq_cancel_work(ctx->io_wq, &cancel_req->work);
+- /* could be a link, check and remove if it is */
+- if (!io_poll_remove_link(ctx, cancel_req))
+- io_timeout_remove_link(ctx, cancel_req);
++ /* cancel this request, or head link requests */
++ io_attempt_cancel(ctx, cancel_req);
+ io_put_req(cancel_req);
+ }
+
+diff --git a/fs/jbd2/journal.c b/fs/jbd2/journal.c
+index e4944436e733d..5493a0da23ddd 100644
+--- a/fs/jbd2/journal.c
++++ b/fs/jbd2/journal.c
+@@ -1367,8 +1367,10 @@ static int jbd2_write_superblock(journal_t *journal, int write_flags)
+ int ret;
+
+ /* Buffer got discarded which means block device got invalidated */
+- if (!buffer_mapped(bh))
++ if (!buffer_mapped(bh)) {
++ unlock_buffer(bh);
+ return -EIO;
++ }
+
+ trace_jbd2_write_superblock(journal, write_flags);
+ if (!(journal->j_flags & JBD2_BARRIER))
+diff --git a/fs/jffs2/dir.c b/fs/jffs2/dir.c
+index f20cff1194bb6..776493713153f 100644
+--- a/fs/jffs2/dir.c
++++ b/fs/jffs2/dir.c
+@@ -590,10 +590,14 @@ static int jffs2_rmdir (struct inode *dir_i, struct dentry *dentry)
+ int ret;
+ uint32_t now = JFFS2_NOW();
+
++ mutex_lock(&f->sem);
+ for (fd = f->dents ; fd; fd = fd->next) {
+- if (fd->ino)
++ if (fd->ino) {
++ mutex_unlock(&f->sem);
+ return -ENOTEMPTY;
++ }
+ }
++ mutex_unlock(&f->sem);
+
+ ret = jffs2_do_unlink(c, dir_f, dentry->d_name.name,
+ dentry->d_name.len, f, now);
+diff --git a/fs/romfs/storage.c b/fs/romfs/storage.c
+index 6b2b4362089e6..b57b3ffcbc327 100644
+--- a/fs/romfs/storage.c
++++ b/fs/romfs/storage.c
+@@ -217,10 +217,8 @@ int romfs_dev_read(struct super_block *sb, unsigned long pos,
+ size_t limit;
+
+ limit = romfs_maxsize(sb);
+- if (pos >= limit)
++ if (pos >= limit || buflen > limit - pos)
+ return -EIO;
+- if (buflen > limit - pos)
+- buflen = limit - pos;
+
+ #ifdef CONFIG_ROMFS_ON_MTD
+ if (sb->s_mtd)
+diff --git a/fs/signalfd.c b/fs/signalfd.c
+index 44b6845b071c3..5b78719be4455 100644
+--- a/fs/signalfd.c
++++ b/fs/signalfd.c
+@@ -314,9 +314,10 @@ SYSCALL_DEFINE4(signalfd4, int, ufd, sigset_t __user *, user_mask,
+ {
+ sigset_t mask;
+
+- if (sizemask != sizeof(sigset_t) ||
+- copy_from_user(&mask, user_mask, sizeof(mask)))
++ if (sizemask != sizeof(sigset_t))
+ return -EINVAL;
++ if (copy_from_user(&mask, user_mask, sizeof(mask)))
++ return -EFAULT;
+ return do_signalfd4(ufd, &mask, flags);
+ }
+
+@@ -325,9 +326,10 @@ SYSCALL_DEFINE3(signalfd, int, ufd, sigset_t __user *, user_mask,
+ {
+ sigset_t mask;
+
+- if (sizemask != sizeof(sigset_t) ||
+- copy_from_user(&mask, user_mask, sizeof(mask)))
++ if (sizemask != sizeof(sigset_t))
+ return -EINVAL;
++ if (copy_from_user(&mask, user_mask, sizeof(mask)))
++ return -EFAULT;
+ return do_signalfd4(ufd, &mask, 0);
+ }
+
+diff --git a/fs/squashfs/block.c b/fs/squashfs/block.c
+index 76bb1c846845e..8a19773b5a0b7 100644
+--- a/fs/squashfs/block.c
++++ b/fs/squashfs/block.c
+@@ -87,7 +87,11 @@ static int squashfs_bio_read(struct super_block *sb, u64 index, int length,
+ int error, i;
+ struct bio *bio;
+
+- bio = bio_alloc(GFP_NOIO, page_count);
++ if (page_count <= BIO_MAX_PAGES)
++ bio = bio_alloc(GFP_NOIO, page_count);
++ else
++ bio = bio_kmalloc(GFP_NOIO, page_count);
++
+ if (!bio)
+ return -ENOMEM;
+
+diff --git a/fs/xfs/xfs_sysfs.h b/fs/xfs/xfs_sysfs.h
+index e9f810fc67317..43585850f1546 100644
+--- a/fs/xfs/xfs_sysfs.h
++++ b/fs/xfs/xfs_sysfs.h
+@@ -32,9 +32,11 @@ xfs_sysfs_init(
+ struct xfs_kobj *parent_kobj,
+ const char *name)
+ {
++ struct kobject *parent;
++
++ parent = parent_kobj ? &parent_kobj->kobject : NULL;
+ init_completion(&kobj->complete);
+- return kobject_init_and_add(&kobj->kobject, ktype,
+- &parent_kobj->kobject, "%s", name);
++ return kobject_init_and_add(&kobj->kobject, ktype, parent, "%s", name);
+ }
+
+ static inline void
+diff --git a/fs/xfs/xfs_trans_dquot.c b/fs/xfs/xfs_trans_dquot.c
+index c0f73b82c0551..ed0ce8b301b40 100644
+--- a/fs/xfs/xfs_trans_dquot.c
++++ b/fs/xfs/xfs_trans_dquot.c
+@@ -647,7 +647,7 @@ xfs_trans_dqresv(
+ }
+ }
+ if (ninos > 0) {
+- total_count = be64_to_cpu(dqp->q_core.d_icount) + ninos;
++ total_count = dqp->q_res_icount + ninos;
+ timer = be32_to_cpu(dqp->q_core.d_itimer);
+ warns = be16_to_cpu(dqp->q_core.d_iwarns);
+ warnlimit = defq->iwarnlimit;
+diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
+index 56c1e8eb7bb0a..8075f6ae185a1 100644
+--- a/include/linux/pgtable.h
++++ b/include/linux/pgtable.h
+@@ -117,7 +117,9 @@ static inline pgd_t *pgd_offset_pgd(pgd_t *pgd, unsigned long address)
+ * a shortcut which implies the use of the kernel's pgd, instead
+ * of a process's
+ */
++#ifndef pgd_offset_k
+ #define pgd_offset_k(address) pgd_offset(&init_mm, (address))
++#endif
+
+ /*
+ * In many cases it is known that a virtual address is mapped at PMD or PTE
+diff --git a/include/linux/sched/user.h b/include/linux/sched/user.h
+index 917d88edb7b9d..a8ec3b6093fcb 100644
+--- a/include/linux/sched/user.h
++++ b/include/linux/sched/user.h
+@@ -36,6 +36,9 @@ struct user_struct {
+ defined(CONFIG_NET) || defined(CONFIG_IO_URING)
+ atomic_long_t locked_vm;
+ #endif
++#ifdef CONFIG_WATCH_QUEUE
++ atomic_t nr_watches; /* The number of watches this user currently has */
++#endif
+
+ /* Miscellaneous per-user rate limit */
+ struct ratelimit_state ratelimit;
+diff --git a/kernel/bpf/task_iter.c b/kernel/bpf/task_iter.c
+index ac7869a389990..a4a0fb4f94cc1 100644
+--- a/kernel/bpf/task_iter.c
++++ b/kernel/bpf/task_iter.c
+@@ -177,10 +177,11 @@ again:
+ f = fcheck_files(curr_files, curr_fd);
+ if (!f)
+ continue;
++ if (!get_file_rcu(f))
++ continue;
+
+ /* set info->fd */
+ info->fd = curr_fd;
+- get_file(f);
+ rcu_read_unlock();
+ return f;
+ }
+diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
+index 5f8b0c52fd2ef..661333c2893d5 100644
+--- a/kernel/events/uprobes.c
++++ b/kernel/events/uprobes.c
+@@ -205,7 +205,7 @@ static int __replace_page(struct vm_area_struct *vma, unsigned long addr,
+ try_to_free_swap(old_page);
+ page_vma_mapped_walk_done(&pvmw);
+
+- if (vma->vm_flags & VM_LOCKED)
++ if ((vma->vm_flags & VM_LOCKED) && !PageCompound(old_page))
+ munlock_vma_page(old_page);
+ put_page(old_page);
+
+diff --git a/kernel/relay.c b/kernel/relay.c
+index 72fe443ea78f0..fb4e0c530c080 100644
+--- a/kernel/relay.c
++++ b/kernel/relay.c
+@@ -197,6 +197,7 @@ free_buf:
+ static void relay_destroy_channel(struct kref *kref)
+ {
+ struct rchan *chan = container_of(kref, struct rchan, kref);
++ free_percpu(chan->buf);
+ kfree(chan);
+ }
+
+diff --git a/kernel/watch_queue.c b/kernel/watch_queue.c
+index f74020f6bd9d5..0ef8f65bd2d71 100644
+--- a/kernel/watch_queue.c
++++ b/kernel/watch_queue.c
+@@ -393,6 +393,7 @@ static void free_watch(struct rcu_head *rcu)
+ struct watch *watch = container_of(rcu, struct watch, rcu);
+
+ put_watch_queue(rcu_access_pointer(watch->queue));
++ atomic_dec(&watch->cred->user->nr_watches);
+ put_cred(watch->cred);
+ }
+
+@@ -452,6 +453,13 @@ int add_watch_to_object(struct watch *watch, struct watch_list *wlist)
+ watch->cred = get_current_cred();
+ rcu_assign_pointer(watch->watch_list, wlist);
+
++ if (atomic_inc_return(&watch->cred->user->nr_watches) >
++ task_rlimit(current, RLIMIT_NOFILE)) {
++ atomic_dec(&watch->cred->user->nr_watches);
++ put_cred(watch->cred);
++ return -EAGAIN;
++ }
++
+ spin_lock_bh(&wqueue->lock);
+ kref_get(&wqueue->usage);
+ kref_get(&watch->usage);
+diff --git a/mm/khugepaged.c b/mm/khugepaged.c
+index ac04b332a373a..1d6a9b0b6a9fd 100644
+--- a/mm/khugepaged.c
++++ b/mm/khugepaged.c
+@@ -466,7 +466,7 @@ int __khugepaged_enter(struct mm_struct *mm)
+ return -ENOMEM;
+
+ /* __khugepaged_exit() must not run from under us */
+- VM_BUG_ON_MM(khugepaged_test_exit(mm), mm);
++ VM_BUG_ON_MM(atomic_read(&mm->mm_users) == 0, mm);
+ if (unlikely(test_and_set_bit(MMF_VM_HUGEPAGE, &mm->flags))) {
+ free_mm_slot(mm_slot);
+ return 0;
+diff --git a/mm/memory.c b/mm/memory.c
+index 3ecad55103adb..a279c1a26af7e 100644
+--- a/mm/memory.c
++++ b/mm/memory.c
+@@ -4248,6 +4248,9 @@ static vm_fault_t handle_pte_fault(struct vm_fault *vmf)
+ vmf->flags & FAULT_FLAG_WRITE)) {
+ update_mmu_cache(vmf->vma, vmf->address, vmf->pte);
+ } else {
++ /* Skip spurious TLB flush for retried page fault */
++ if (vmf->flags & FAULT_FLAG_TRIED)
++ goto unlock;
+ /*
+ * This is needed only for protection faults but the arch code
+ * is not yet telling us if this is a protection fault or not.
+diff --git a/mm/page_alloc.c b/mm/page_alloc.c
+index e028b87ce2942..d809242f671f0 100644
+--- a/mm/page_alloc.c
++++ b/mm/page_alloc.c
+@@ -1306,6 +1306,11 @@ static void free_pcppages_bulk(struct zone *zone, int count,
+ struct page *page, *tmp;
+ LIST_HEAD(head);
+
++ /*
++ * Ensure a proper count is passed, which otherwise would get stuck in
++ * the below while (list_empty(list)) loop.
++ */
++ count = min(pcp->count, count);
+ while (count) {
+ struct list_head *list;
+
+@@ -7881,7 +7886,7 @@ int __meminit init_per_zone_wmark_min(void)
+
+ return 0;
+ }
+-core_initcall(init_per_zone_wmark_min)
++postcore_initcall(init_per_zone_wmark_min)
+
+ /*
+ * min_free_kbytes_sysctl_handler - just a wrapper around proc_dointvec() so
+diff --git a/mm/vmalloc.c b/mm/vmalloc.c
+index 5a2b55c8dd9a7..128d20d2d6cb6 100644
+--- a/mm/vmalloc.c
++++ b/mm/vmalloc.c
+@@ -102,6 +102,8 @@ static void vunmap_pmd_range(pud_t *pud, unsigned long addr, unsigned long end,
+ if (pmd_none_or_clear_bad(pmd))
+ continue;
+ vunmap_pte_range(pmd, addr, next, mask);
++
++ cond_resched();
+ } while (pmd++, addr = next, addr != end);
+ }
+
+diff --git a/net/can/j1939/socket.c b/net/can/j1939/socket.c
+index f7587428febdd..bf9fd6ee88fe0 100644
+--- a/net/can/j1939/socket.c
++++ b/net/can/j1939/socket.c
+@@ -398,6 +398,7 @@ static int j1939_sk_init(struct sock *sk)
+ spin_lock_init(&jsk->sk_session_queue_lock);
+ INIT_LIST_HEAD(&jsk->sk_session_queue);
+ sk->sk_destruct = j1939_sk_sock_destruct;
++ sk->sk_protocol = CAN_J1939;
+
+ return 0;
+ }
+@@ -466,6 +467,14 @@ static int j1939_sk_bind(struct socket *sock, struct sockaddr *uaddr, int len)
+ goto out_release_sock;
+ }
+
++ if (!ndev->ml_priv) {
++ netdev_warn_once(ndev,
++ "No CAN mid layer private allocated, please fix your driver and use alloc_candev()!\n");
++ dev_put(ndev);
++ ret = -ENODEV;
++ goto out_release_sock;
++ }
++
+ priv = j1939_netdev_start(ndev);
+ dev_put(ndev);
+ if (IS_ERR(priv)) {
+@@ -553,6 +562,11 @@ static int j1939_sk_connect(struct socket *sock, struct sockaddr *uaddr,
+ static void j1939_sk_sock2sockaddr_can(struct sockaddr_can *addr,
+ const struct j1939_sock *jsk, int peer)
+ {
++ /* There are two holes (2 bytes and 3 bytes) to clear to avoid
++ * leaking kernel information to user space.
++ */
++ memset(addr, 0, J1939_MIN_NAMELEN);
++
+ addr->can_family = AF_CAN;
+ addr->can_ifindex = jsk->ifindex;
+ addr->can_addr.j1939.pgn = jsk->addr.pgn;
+diff --git a/net/can/j1939/transport.c b/net/can/j1939/transport.c
+index 9f99af5b0b11e..dbd215cbc53d8 100644
+--- a/net/can/j1939/transport.c
++++ b/net/can/j1939/transport.c
+@@ -352,17 +352,16 @@ void j1939_session_skb_queue(struct j1939_session *session,
+ skb_queue_tail(&session->skb_queue, skb);
+ }
+
+-static struct sk_buff *j1939_session_skb_find(struct j1939_session *session)
++static struct
++sk_buff *j1939_session_skb_find_by_offset(struct j1939_session *session,
++ unsigned int offset_start)
+ {
+ struct j1939_priv *priv = session->priv;
++ struct j1939_sk_buff_cb *do_skcb;
+ struct sk_buff *skb = NULL;
+ struct sk_buff *do_skb;
+- struct j1939_sk_buff_cb *do_skcb;
+- unsigned int offset_start;
+ unsigned long flags;
+
+- offset_start = session->pkt.dpo * 7;
+-
+ spin_lock_irqsave(&session->skb_queue.lock, flags);
+ skb_queue_walk(&session->skb_queue, do_skb) {
+ do_skcb = j1939_skb_to_cb(do_skb);
+@@ -382,6 +381,14 @@ static struct sk_buff *j1939_session_skb_find(struct j1939_session *session)
+ return skb;
+ }
+
++static struct sk_buff *j1939_session_skb_find(struct j1939_session *session)
++{
++ unsigned int offset_start;
++
++ offset_start = session->pkt.dpo * 7;
++ return j1939_session_skb_find_by_offset(session, offset_start);
++}
++
+ /* see if we are receiver
+ * returns 0 for broadcasts, although we will receive them
+ */
+@@ -716,10 +723,12 @@ static int j1939_session_tx_rts(struct j1939_session *session)
+ return ret;
+
+ session->last_txcmd = dat[0];
+- if (dat[0] == J1939_TP_CMD_BAM)
++ if (dat[0] == J1939_TP_CMD_BAM) {
+ j1939_tp_schedule_txtimer(session, 50);
+-
+- j1939_tp_set_rxtimeout(session, 1250);
++ j1939_tp_set_rxtimeout(session, 250);
++ } else {
++ j1939_tp_set_rxtimeout(session, 1250);
++ }
+
+ netdev_dbg(session->priv->ndev, "%s: 0x%p\n", __func__, session);
+
+@@ -766,7 +775,7 @@ static int j1939_session_tx_dat(struct j1939_session *session)
+ int ret = 0;
+ u8 dat[8];
+
+- se_skb = j1939_session_skb_find(session);
++ se_skb = j1939_session_skb_find_by_offset(session, session->pkt.tx * 7);
+ if (!se_skb)
+ return -ENOBUFS;
+
+@@ -787,6 +796,18 @@ static int j1939_session_tx_dat(struct j1939_session *session)
+ if (len > 7)
+ len = 7;
+
++ if (offset + len > se_skb->len) {
++ netdev_err_once(priv->ndev,
++ "%s: 0x%p: requested data outside of queued buffer: offset %i, len %i, pkt.tx: %i\n",
++ __func__, session, skcb->offset, se_skb->len , session->pkt.tx);
++ return -EOVERFLOW;
++ }
++
++ if (!len) {
++ ret = -ENOBUFS;
++ break;
++ }
++
+ memcpy(&dat[1], &tpdat[offset], len);
+ ret = j1939_tp_tx_dat(session, dat, len + 1);
+ if (ret < 0) {
+@@ -1055,9 +1076,9 @@ static void __j1939_session_cancel(struct j1939_session *session,
+ lockdep_assert_held(&session->priv->active_session_list_lock);
+
+ session->err = j1939_xtp_abort_to_errno(priv, err);
++ session->state = J1939_SESSION_WAITING_ABORT;
+ /* do not send aborts on incoming broadcasts */
+ if (!j1939_cb_is_broadcast(&session->skcb)) {
+- session->state = J1939_SESSION_WAITING_ABORT;
+ j1939_xtp_tx_abort(priv, &session->skcb,
+ !session->transmission,
+ err, session->skcb.addr.pgn);
+@@ -1120,6 +1141,9 @@ static enum hrtimer_restart j1939_tp_txtimer(struct hrtimer *hrtimer)
+ * cleanup including propagation of the error to user space.
+ */
+ break;
++ case -EOVERFLOW:
++ j1939_session_cancel(session, J1939_XTP_ABORT_ECTS_TOO_BIG);
++ break;
+ case 0:
+ session->tx_retry = 0;
+ break;
+@@ -1651,8 +1675,12 @@ static void j1939_xtp_rx_rts(struct j1939_priv *priv, struct sk_buff *skb,
+ return;
+ }
+ session = j1939_xtp_rx_rts_session_new(priv, skb);
+- if (!session)
++ if (!session) {
++ if (cmd == J1939_TP_CMD_BAM && j1939_sk_recv_match(priv, skcb))
++ netdev_info(priv->ndev, "%s: failed to create TP BAM session\n",
++ __func__);
+ return;
++ }
+ } else {
+ if (j1939_xtp_rx_rts_session_active(session, skb)) {
+ j1939_session_put(session);
+@@ -1661,11 +1689,15 @@ static void j1939_xtp_rx_rts(struct j1939_priv *priv, struct sk_buff *skb,
+ }
+ session->last_cmd = cmd;
+
+- j1939_tp_set_rxtimeout(session, 1250);
+-
+- if (cmd != J1939_TP_CMD_BAM && !session->transmission) {
+- j1939_session_txtimer_cancel(session);
+- j1939_tp_schedule_txtimer(session, 0);
++ if (cmd == J1939_TP_CMD_BAM) {
++ if (!session->transmission)
++ j1939_tp_set_rxtimeout(session, 750);
++ } else {
++ if (!session->transmission) {
++ j1939_session_txtimer_cancel(session);
++ j1939_tp_schedule_txtimer(session, 0);
++ }
++ j1939_tp_set_rxtimeout(session, 1250);
+ }
+
+ j1939_session_put(session);
+@@ -1716,6 +1748,7 @@ static void j1939_xtp_rx_dat_one(struct j1939_session *session,
+ int offset;
+ int nbytes;
+ bool final = false;
++ bool remain = false;
+ bool do_cts_eoma = false;
+ int packet;
+
+@@ -1750,7 +1783,8 @@ static void j1939_xtp_rx_dat_one(struct j1939_session *session,
+ __func__, session);
+ goto out_session_cancel;
+ }
+- se_skb = j1939_session_skb_find(session);
++
++ se_skb = j1939_session_skb_find_by_offset(session, packet * 7);
+ if (!se_skb) {
+ netdev_warn(priv->ndev, "%s: 0x%p: no skb found\n", __func__,
+ session);
+@@ -1777,6 +1811,8 @@ static void j1939_xtp_rx_dat_one(struct j1939_session *session,
+ j1939_cb_is_broadcast(&session->skcb)) {
+ if (session->pkt.rx >= session->pkt.total)
+ final = true;
++ else
++ remain = true;
+ } else {
+ /* never final, an EOMA must follow */
+ if (session->pkt.rx >= session->pkt.last)
+@@ -1784,7 +1820,11 @@ static void j1939_xtp_rx_dat_one(struct j1939_session *session,
+ }
+
+ if (final) {
++ j1939_session_timers_cancel(session);
+ j1939_session_completed(session);
++ } else if (remain) {
++ if (!session->transmission)
++ j1939_tp_set_rxtimeout(session, 750);
+ } else if (do_cts_eoma) {
+ j1939_tp_set_rxtimeout(session, 1250);
+ if (!session->transmission)
+@@ -1829,6 +1869,13 @@ static void j1939_xtp_rx_dat(struct j1939_priv *priv, struct sk_buff *skb)
+ else
+ j1939_xtp_rx_dat_one(session, skb);
+ }
++
++ if (j1939_cb_is_broadcast(skcb)) {
++ session = j1939_session_get_by_addr(priv, &skcb->addr, false,
++ false);
++ if (session)
++ j1939_xtp_rx_dat_one(session, skb);
++ }
+ }
+
+ /* j1939 main intf */
+@@ -1920,7 +1967,7 @@ static void j1939_tp_cmd_recv(struct j1939_priv *priv, struct sk_buff *skb)
+ if (j1939_tp_im_transmitter(skcb))
+ j1939_xtp_rx_rts(priv, skb, true);
+
+- if (j1939_tp_im_receiver(skcb))
++ if (j1939_tp_im_receiver(skcb) || j1939_cb_is_broadcast(skcb))
+ j1939_xtp_rx_rts(priv, skb, false);
+
+ break;
+@@ -1984,7 +2031,7 @@ int j1939_tp_recv(struct j1939_priv *priv, struct sk_buff *skb)
+ {
+ struct j1939_sk_buff_cb *skcb = j1939_skb_to_cb(skb);
+
+- if (!j1939_tp_im_involved_anydir(skcb))
++ if (!j1939_tp_im_involved_anydir(skcb) && !j1939_cb_is_broadcast(skcb))
+ return 0;
+
+ switch (skcb->addr.pgn) {
+@@ -2017,6 +2064,10 @@ void j1939_simple_recv(struct j1939_priv *priv, struct sk_buff *skb)
+ if (!skb->sk)
+ return;
+
++ if (skb->sk->sk_family != AF_CAN ||
++ skb->sk->sk_protocol != CAN_J1939)
++ return;
++
+ j1939_session_list_lock(priv);
+ session = j1939_session_get_simple(priv, skb);
+ j1939_session_list_unlock(priv);
+diff --git a/net/core/filter.c b/net/core/filter.c
+index 82e1b5b061675..a69e79327c29e 100644
+--- a/net/core/filter.c
++++ b/net/core/filter.c
+@@ -8249,15 +8249,31 @@ static u32 sock_ops_convert_ctx_access(enum bpf_access_type type,
+ /* Helper macro for adding read access to tcp_sock or sock fields. */
+ #define SOCK_OPS_GET_FIELD(BPF_FIELD, OBJ_FIELD, OBJ) \
+ do { \
++ int fullsock_reg = si->dst_reg, reg = BPF_REG_9, jmp = 2; \
+ BUILD_BUG_ON(sizeof_field(OBJ, OBJ_FIELD) > \
+ sizeof_field(struct bpf_sock_ops, BPF_FIELD)); \
++ if (si->dst_reg == reg || si->src_reg == reg) \
++ reg--; \
++ if (si->dst_reg == reg || si->src_reg == reg) \
++ reg--; \
++ if (si->dst_reg == si->src_reg) { \
++ *insn++ = BPF_STX_MEM(BPF_DW, si->src_reg, reg, \
++ offsetof(struct bpf_sock_ops_kern, \
++ temp)); \
++ fullsock_reg = reg; \
++ jmp += 2; \
++ } \
+ *insn++ = BPF_LDX_MEM(BPF_FIELD_SIZEOF( \
+ struct bpf_sock_ops_kern, \
+ is_fullsock), \
+- si->dst_reg, si->src_reg, \
++ fullsock_reg, si->src_reg, \
+ offsetof(struct bpf_sock_ops_kern, \
+ is_fullsock)); \
+- *insn++ = BPF_JMP_IMM(BPF_JEQ, si->dst_reg, 0, 2); \
++ *insn++ = BPF_JMP_IMM(BPF_JEQ, fullsock_reg, 0, jmp); \
++ if (si->dst_reg == si->src_reg) \
++ *insn++ = BPF_LDX_MEM(BPF_DW, reg, si->src_reg, \
++ offsetof(struct bpf_sock_ops_kern, \
++ temp)); \
+ *insn++ = BPF_LDX_MEM(BPF_FIELD_SIZEOF( \
+ struct bpf_sock_ops_kern, sk),\
+ si->dst_reg, si->src_reg, \
+@@ -8266,6 +8282,49 @@ static u32 sock_ops_convert_ctx_access(enum bpf_access_type type,
+ OBJ_FIELD), \
+ si->dst_reg, si->dst_reg, \
+ offsetof(OBJ, OBJ_FIELD)); \
++ if (si->dst_reg == si->src_reg) { \
++ *insn++ = BPF_JMP_A(1); \
++ *insn++ = BPF_LDX_MEM(BPF_DW, reg, si->src_reg, \
++ offsetof(struct bpf_sock_ops_kern, \
++ temp)); \
++ } \
++ } while (0)
++
++#define SOCK_OPS_GET_SK() \
++ do { \
++ int fullsock_reg = si->dst_reg, reg = BPF_REG_9, jmp = 1; \
++ if (si->dst_reg == reg || si->src_reg == reg) \
++ reg--; \
++ if (si->dst_reg == reg || si->src_reg == reg) \
++ reg--; \
++ if (si->dst_reg == si->src_reg) { \
++ *insn++ = BPF_STX_MEM(BPF_DW, si->src_reg, reg, \
++ offsetof(struct bpf_sock_ops_kern, \
++ temp)); \
++ fullsock_reg = reg; \
++ jmp += 2; \
++ } \
++ *insn++ = BPF_LDX_MEM(BPF_FIELD_SIZEOF( \
++ struct bpf_sock_ops_kern, \
++ is_fullsock), \
++ fullsock_reg, si->src_reg, \
++ offsetof(struct bpf_sock_ops_kern, \
++ is_fullsock)); \
++ *insn++ = BPF_JMP_IMM(BPF_JEQ, fullsock_reg, 0, jmp); \
++ if (si->dst_reg == si->src_reg) \
++ *insn++ = BPF_LDX_MEM(BPF_DW, reg, si->src_reg, \
++ offsetof(struct bpf_sock_ops_kern, \
++ temp)); \
++ *insn++ = BPF_LDX_MEM(BPF_FIELD_SIZEOF( \
++ struct bpf_sock_ops_kern, sk),\
++ si->dst_reg, si->src_reg, \
++ offsetof(struct bpf_sock_ops_kern, sk));\
++ if (si->dst_reg == si->src_reg) { \
++ *insn++ = BPF_JMP_A(1); \
++ *insn++ = BPF_LDX_MEM(BPF_DW, reg, si->src_reg, \
++ offsetof(struct bpf_sock_ops_kern, \
++ temp)); \
++ } \
+ } while (0)
+
+ #define SOCK_OPS_GET_TCP_SOCK_FIELD(FIELD) \
+@@ -8552,17 +8611,7 @@ static u32 sock_ops_convert_ctx_access(enum bpf_access_type type,
+ SOCK_OPS_GET_TCP_SOCK_FIELD(bytes_acked);
+ break;
+ case offsetof(struct bpf_sock_ops, sk):
+- *insn++ = BPF_LDX_MEM(BPF_FIELD_SIZEOF(
+- struct bpf_sock_ops_kern,
+- is_fullsock),
+- si->dst_reg, si->src_reg,
+- offsetof(struct bpf_sock_ops_kern,
+- is_fullsock));
+- *insn++ = BPF_JMP_IMM(BPF_JEQ, si->dst_reg, 0, 1);
+- *insn++ = BPF_LDX_MEM(BPF_FIELD_SIZEOF(
+- struct bpf_sock_ops_kern, sk),
+- si->dst_reg, si->src_reg,
+- offsetof(struct bpf_sock_ops_kern, sk));
++ SOCK_OPS_GET_SK();
+ break;
+ }
+ return insn - insn_buf;
+diff --git a/net/netfilter/nft_exthdr.c b/net/netfilter/nft_exthdr.c
+index 07782836fad6e..3c48cdc8935df 100644
+--- a/net/netfilter/nft_exthdr.c
++++ b/net/netfilter/nft_exthdr.c
+@@ -44,7 +44,7 @@ static void nft_exthdr_ipv6_eval(const struct nft_expr *expr,
+
+ err = ipv6_find_hdr(pkt->skb, &offset, priv->type, NULL, NULL);
+ if (priv->flags & NFT_EXTHDR_F_PRESENT) {
+- *dest = (err >= 0);
++ nft_reg_store8(dest, err >= 0);
+ return;
+ } else if (err < 0) {
+ goto err;
+@@ -141,7 +141,7 @@ static void nft_exthdr_ipv4_eval(const struct nft_expr *expr,
+
+ err = ipv4_find_option(nft_net(pkt), skb, &offset, priv->type);
+ if (priv->flags & NFT_EXTHDR_F_PRESENT) {
+- *dest = (err >= 0);
++ nft_reg_store8(dest, err >= 0);
+ return;
+ } else if (err < 0) {
+ goto err;
+diff --git a/net/sunrpc/xprtrdma/svc_rdma_recvfrom.c b/net/sunrpc/xprtrdma/svc_rdma_recvfrom.c
+index e426fedb9524f..ac16d83f2d26c 100644
+--- a/net/sunrpc/xprtrdma/svc_rdma_recvfrom.c
++++ b/net/sunrpc/xprtrdma/svc_rdma_recvfrom.c
+@@ -265,6 +265,8 @@ static int svc_rdma_post_recv(struct svcxprt_rdma *rdma)
+ {
+ struct svc_rdma_recv_ctxt *ctxt;
+
++ if (test_bit(XPT_CLOSE, &rdma->sc_xprt.xpt_flags))
++ return 0;
+ ctxt = svc_rdma_recv_ctxt_get(rdma);
+ if (!ctxt)
+ return -ENOMEM;
+diff --git a/scripts/kconfig/qconf.cc b/scripts/kconfig/qconf.cc
+index 23d1cb01a41ae..5ceb93010a973 100644
+--- a/scripts/kconfig/qconf.cc
++++ b/scripts/kconfig/qconf.cc
+@@ -864,40 +864,40 @@ void ConfigList::focusInEvent(QFocusEvent *e)
+
+ void ConfigList::contextMenuEvent(QContextMenuEvent *e)
+ {
+- if (e->y() <= header()->geometry().bottom()) {
+- if (!headerPopup) {
+- QAction *action;
+-
+- headerPopup = new QMenu(this);
+- action = new QAction("Show Name", this);
+- action->setCheckable(true);
+- connect(action, SIGNAL(toggled(bool)),
+- parent(), SLOT(setShowName(bool)));
+- connect(parent(), SIGNAL(showNameChanged(bool)),
+- action, SLOT(setOn(bool)));
+- action->setChecked(showName);
+- headerPopup->addAction(action);
+- action = new QAction("Show Range", this);
+- action->setCheckable(true);
+- connect(action, SIGNAL(toggled(bool)),
+- parent(), SLOT(setShowRange(bool)));
+- connect(parent(), SIGNAL(showRangeChanged(bool)),
+- action, SLOT(setOn(bool)));
+- action->setChecked(showRange);
+- headerPopup->addAction(action);
+- action = new QAction("Show Data", this);
+- action->setCheckable(true);
+- connect(action, SIGNAL(toggled(bool)),
+- parent(), SLOT(setShowData(bool)));
+- connect(parent(), SIGNAL(showDataChanged(bool)),
+- action, SLOT(setOn(bool)));
+- action->setChecked(showData);
+- headerPopup->addAction(action);
+- }
+- headerPopup->exec(e->globalPos());
+- e->accept();
+- } else
+- e->ignore();
++ if (!headerPopup) {
++ QAction *action;
++
++ headerPopup = new QMenu(this);
++ action = new QAction("Show Name", this);
++ action->setCheckable(true);
++ connect(action, SIGNAL(toggled(bool)),
++ parent(), SLOT(setShowName(bool)));
++ connect(parent(), SIGNAL(showNameChanged(bool)),
++ action, SLOT(setChecked(bool)));
++ action->setChecked(showName);
++ headerPopup->addAction(action);
++
++ action = new QAction("Show Range", this);
++ action->setCheckable(true);
++ connect(action, SIGNAL(toggled(bool)),
++ parent(), SLOT(setShowRange(bool)));
++ connect(parent(), SIGNAL(showRangeChanged(bool)),
++ action, SLOT(setChecked(bool)));
++ action->setChecked(showRange);
++ headerPopup->addAction(action);
++
++ action = new QAction("Show Data", this);
++ action->setCheckable(true);
++ connect(action, SIGNAL(toggled(bool)),
++ parent(), SLOT(setShowData(bool)));
++ connect(parent(), SIGNAL(showDataChanged(bool)),
++ action, SLOT(setChecked(bool)));
++ action->setChecked(showData);
++ headerPopup->addAction(action);
++ }
++
++ headerPopup->exec(e->globalPos());
++ e->accept();
+ }
+
+ ConfigView*ConfigView::viewList;
+@@ -1228,7 +1228,6 @@ void ConfigInfoView::clicked(const QUrl &url)
+ struct menu *m = NULL;
+
+ if (count < 1) {
+- qInfo() << "Clicked link is empty";
+ delete[] data;
+ return;
+ }
+@@ -1241,7 +1240,6 @@ void ConfigInfoView::clicked(const QUrl &url)
+ strcat(data, "$");
+ result = sym_re_search(data);
+ if (!result) {
+- qInfo() << "Clicked symbol is invalid:" << data;
+ delete[] data;
+ return;
+ }
+@@ -1275,7 +1273,7 @@ QMenu* ConfigInfoView::createStandardContextMenu(const QPoint & pos)
+
+ action->setCheckable(true);
+ connect(action, SIGNAL(toggled(bool)), SLOT(setShowDebug(bool)));
+- connect(this, SIGNAL(showDebugChanged(bool)), action, SLOT(setOn(bool)));
++ connect(this, SIGNAL(showDebugChanged(bool)), action, SLOT(setChecked(bool)));
+ action->setChecked(showDebug());
+ popup->addSeparator();
+ popup->addAction(action);
+diff --git a/sound/hda/hdac_bus.c b/sound/hda/hdac_bus.c
+index 09ddab5f5caeb..9766f6af87430 100644
+--- a/sound/hda/hdac_bus.c
++++ b/sound/hda/hdac_bus.c
+@@ -46,6 +46,18 @@ int snd_hdac_bus_init(struct hdac_bus *bus, struct device *dev,
+ INIT_LIST_HEAD(&bus->hlink_list);
+ init_waitqueue_head(&bus->rirb_wq);
+ bus->irq = -1;
++
++ /*
++ * Default value of '8' is as per the HD audio specification (Rev 1.0a).
++ * Following relation is used to derive STRIPE control value.
++ * For sample rate <= 48K:
++ * { ((num_channels * bits_per_sample) / number of SDOs) >= 8 }
++ * For sample rate > 48K:
++ * { ((num_channels * bits_per_sample * rate/48000) /
++ * number of SDOs) >= 8 }
++ */
++ bus->sdo_limit = 8;
++
+ return 0;
+ }
+ EXPORT_SYMBOL_GPL(snd_hdac_bus_init);
+diff --git a/sound/hda/hdac_controller.c b/sound/hda/hdac_controller.c
+index 011b17cc1efa2..b98449fd92f3b 100644
+--- a/sound/hda/hdac_controller.c
++++ b/sound/hda/hdac_controller.c
+@@ -529,17 +529,6 @@ bool snd_hdac_bus_init_chip(struct hdac_bus *bus, bool full_reset)
+
+ bus->chip_init = true;
+
+- /*
+- * Default value of '8' is as per the HD audio specification (Rev 1.0a).
+- * Following relation is used to derive STRIPE control value.
+- * For sample rate <= 48K:
+- * { ((num_channels * bits_per_sample) / number of SDOs) >= 8 }
+- * For sample rate > 48K:
+- * { ((num_channels * bits_per_sample * rate/48000) /
+- * number of SDOs) >= 8 }
+- */
+- bus->sdo_limit = 8;
+-
+ return true;
+ }
+ EXPORT_SYMBOL_GPL(snd_hdac_bus_init_chip);
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 8626e59f1e6a9..b10d005786d07 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -7696,6 +7696,8 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x144d, 0xc109, "Samsung Ativ book 9 (NP900X3G)", ALC269_FIXUP_INV_DMIC),
+ SND_PCI_QUIRK(0x144d, 0xc169, "Samsung Notebook 9 Pen (NP930SBE-K01US)", ALC298_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET),
+ SND_PCI_QUIRK(0x144d, 0xc176, "Samsung Notebook 9 Pro (NP930MBE-K04US)", ALC298_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET),
++ SND_PCI_QUIRK(0x144d, 0xc189, "Samsung Galaxy Flex Book (NT950QCG-X716)", ALC298_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET),
++ SND_PCI_QUIRK(0x144d, 0xc18a, "Samsung Galaxy Book Ion (NT950XCJ-X716A)", ALC298_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET),
+ SND_PCI_QUIRK(0x144d, 0xc740, "Samsung Ativ book 8 (NP870Z5G)", ALC269_FIXUP_ATIV_BOOK_8),
+ SND_PCI_QUIRK(0x144d, 0xc812, "Samsung Notebook Pen S (NT950SBE-X58)", ALC298_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET),
+ SND_PCI_QUIRK(0x1458, 0xfa53, "Gigabyte BXBT-2807", ALC283_FIXUP_HEADSET_MIC),
+diff --git a/sound/soc/amd/renoir/acp3x-pdm-dma.c b/sound/soc/amd/renoir/acp3x-pdm-dma.c
+index 623dfd3ea7051..7b14d9a81b97a 100644
+--- a/sound/soc/amd/renoir/acp3x-pdm-dma.c
++++ b/sound/soc/amd/renoir/acp3x-pdm-dma.c
+@@ -314,40 +314,30 @@ static int acp_pdm_dma_close(struct snd_soc_component *component,
+ return 0;
+ }
+
+-static int acp_pdm_dai_hw_params(struct snd_pcm_substream *substream,
+- struct snd_pcm_hw_params *params,
+- struct snd_soc_dai *dai)
++static int acp_pdm_dai_trigger(struct snd_pcm_substream *substream,
++ int cmd, struct snd_soc_dai *dai)
+ {
+ struct pdm_stream_instance *rtd;
++ int ret;
++ bool pdm_status;
+ unsigned int ch_mask;
+
+ rtd = substream->runtime->private_data;
+- switch (params_channels(params)) {
++ ret = 0;
++ switch (substream->runtime->channels) {
+ case TWO_CH:
+ ch_mask = 0x00;
+ break;
+ default:
+ return -EINVAL;
+ }
+- rn_writel(ch_mask, rtd->acp_base + ACP_WOV_PDM_NO_OF_CHANNELS);
+- rn_writel(PDM_DECIMATION_FACTOR, rtd->acp_base +
+- ACP_WOV_PDM_DECIMATION_FACTOR);
+- return 0;
+-}
+-
+-static int acp_pdm_dai_trigger(struct snd_pcm_substream *substream,
+- int cmd, struct snd_soc_dai *dai)
+-{
+- struct pdm_stream_instance *rtd;
+- int ret;
+- bool pdm_status;
+-
+- rtd = substream->runtime->private_data;
+- ret = 0;
+ switch (cmd) {
+ case SNDRV_PCM_TRIGGER_START:
+ case SNDRV_PCM_TRIGGER_RESUME:
+ case SNDRV_PCM_TRIGGER_PAUSE_RELEASE:
++ rn_writel(ch_mask, rtd->acp_base + ACP_WOV_PDM_NO_OF_CHANNELS);
++ rn_writel(PDM_DECIMATION_FACTOR, rtd->acp_base +
++ ACP_WOV_PDM_DECIMATION_FACTOR);
+ rtd->bytescount = acp_pdm_get_byte_count(rtd,
+ substream->stream);
+ pdm_status = check_pdm_dma_status(rtd->acp_base);
+@@ -369,7 +359,6 @@ static int acp_pdm_dai_trigger(struct snd_pcm_substream *substream,
+ }
+
+ static struct snd_soc_dai_ops acp_pdm_dai_ops = {
+- .hw_params = acp_pdm_dai_hw_params,
+ .trigger = acp_pdm_dai_trigger,
+ };
+
+diff --git a/sound/soc/codecs/msm8916-wcd-analog.c b/sound/soc/codecs/msm8916-wcd-analog.c
+index 85bc7ae4d2671..26cf372ccda6f 100644
+--- a/sound/soc/codecs/msm8916-wcd-analog.c
++++ b/sound/soc/codecs/msm8916-wcd-analog.c
+@@ -19,8 +19,8 @@
+
+ #define CDC_D_REVISION1 (0xf000)
+ #define CDC_D_PERPH_SUBTYPE (0xf005)
+-#define CDC_D_INT_EN_SET (0x015)
+-#define CDC_D_INT_EN_CLR (0x016)
++#define CDC_D_INT_EN_SET (0xf015)
++#define CDC_D_INT_EN_CLR (0xf016)
+ #define MBHC_SWITCH_INT BIT(7)
+ #define MBHC_MIC_ELECTRICAL_INS_REM_DET BIT(6)
+ #define MBHC_BUTTON_PRESS_DET BIT(5)
+diff --git a/sound/soc/intel/atom/sst-mfld-platform-pcm.c b/sound/soc/intel/atom/sst-mfld-platform-pcm.c
+index 8817eaae6bb7a..b520e3aeaf3de 100644
+--- a/sound/soc/intel/atom/sst-mfld-platform-pcm.c
++++ b/sound/soc/intel/atom/sst-mfld-platform-pcm.c
+@@ -331,7 +331,7 @@ static int sst_media_open(struct snd_pcm_substream *substream,
+
+ ret_val = power_up_sst(stream);
+ if (ret_val < 0)
+- return ret_val;
++ goto out_power_up;
+
+ /* Make sure, that the period size is always even */
+ snd_pcm_hw_constraint_step(substream->runtime, 0,
+@@ -340,8 +340,9 @@ static int sst_media_open(struct snd_pcm_substream *substream,
+ return snd_pcm_hw_constraint_integer(runtime,
+ SNDRV_PCM_HW_PARAM_PERIODS);
+ out_ops:
+- kfree(stream);
+ mutex_unlock(&sst_lock);
++out_power_up:
++ kfree(stream);
+ return ret_val;
+ }
+
+diff --git a/sound/soc/qcom/qdsp6/q6afe-dai.c b/sound/soc/qcom/qdsp6/q6afe-dai.c
+index 2a5302f1db98a..0168af8492727 100644
+--- a/sound/soc/qcom/qdsp6/q6afe-dai.c
++++ b/sound/soc/qcom/qdsp6/q6afe-dai.c
+@@ -1150,206 +1150,206 @@ static int q6afe_of_xlate_dai_name(struct snd_soc_component *component,
+ }
+
+ static const struct snd_soc_dapm_widget q6afe_dai_widgets[] = {
+- SND_SOC_DAPM_AIF_IN("HDMI_RX", NULL, 0, 0, 0, 0),
+- SND_SOC_DAPM_AIF_IN("SLIMBUS_0_RX", NULL, 0, 0, 0, 0),
+- SND_SOC_DAPM_AIF_IN("SLIMBUS_1_RX", NULL, 0, 0, 0, 0),
+- SND_SOC_DAPM_AIF_IN("SLIMBUS_2_RX", NULL, 0, 0, 0, 0),
+- SND_SOC_DAPM_AIF_IN("SLIMBUS_3_RX", NULL, 0, 0, 0, 0),
+- SND_SOC_DAPM_AIF_IN("SLIMBUS_4_RX", NULL, 0, 0, 0, 0),
+- SND_SOC_DAPM_AIF_IN("SLIMBUS_5_RX", NULL, 0, 0, 0, 0),
+- SND_SOC_DAPM_AIF_IN("SLIMBUS_6_RX", NULL, 0, 0, 0, 0),
+- SND_SOC_DAPM_AIF_OUT("SLIMBUS_0_TX", NULL, 0, 0, 0, 0),
+- SND_SOC_DAPM_AIF_OUT("SLIMBUS_1_TX", NULL, 0, 0, 0, 0),
+- SND_SOC_DAPM_AIF_OUT("SLIMBUS_2_TX", NULL, 0, 0, 0, 0),
+- SND_SOC_DAPM_AIF_OUT("SLIMBUS_3_TX", NULL, 0, 0, 0, 0),
+- SND_SOC_DAPM_AIF_OUT("SLIMBUS_4_TX", NULL, 0, 0, 0, 0),
+- SND_SOC_DAPM_AIF_OUT("SLIMBUS_5_TX", NULL, 0, 0, 0, 0),
+- SND_SOC_DAPM_AIF_OUT("SLIMBUS_6_TX", NULL, 0, 0, 0, 0),
++ SND_SOC_DAPM_AIF_IN("HDMI_RX", NULL, 0, SND_SOC_NOPM, 0, 0),
++ SND_SOC_DAPM_AIF_IN("SLIMBUS_0_RX", NULL, 0, SND_SOC_NOPM, 0, 0),
++ SND_SOC_DAPM_AIF_IN("SLIMBUS_1_RX", NULL, 0, SND_SOC_NOPM, 0, 0),
++ SND_SOC_DAPM_AIF_IN("SLIMBUS_2_RX", NULL, 0, SND_SOC_NOPM, 0, 0),
++ SND_SOC_DAPM_AIF_IN("SLIMBUS_3_RX", NULL, 0, SND_SOC_NOPM, 0, 0),
++ SND_SOC_DAPM_AIF_IN("SLIMBUS_4_RX", NULL, 0, SND_SOC_NOPM, 0, 0),
++ SND_SOC_DAPM_AIF_IN("SLIMBUS_5_RX", NULL, 0, SND_SOC_NOPM, 0, 0),
++ SND_SOC_DAPM_AIF_IN("SLIMBUS_6_RX", NULL, 0, SND_SOC_NOPM, 0, 0),
++ SND_SOC_DAPM_AIF_OUT("SLIMBUS_0_TX", NULL, 0, SND_SOC_NOPM, 0, 0),
++ SND_SOC_DAPM_AIF_OUT("SLIMBUS_1_TX", NULL, 0, SND_SOC_NOPM, 0, 0),
++ SND_SOC_DAPM_AIF_OUT("SLIMBUS_2_TX", NULL, 0, SND_SOC_NOPM, 0, 0),
++ SND_SOC_DAPM_AIF_OUT("SLIMBUS_3_TX", NULL, 0, SND_SOC_NOPM, 0, 0),
++ SND_SOC_DAPM_AIF_OUT("SLIMBUS_4_TX", NULL, 0, SND_SOC_NOPM, 0, 0),
++ SND_SOC_DAPM_AIF_OUT("SLIMBUS_5_TX", NULL, 0, SND_SOC_NOPM, 0, 0),
++ SND_SOC_DAPM_AIF_OUT("SLIMBUS_6_TX", NULL, 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_IN("QUAT_MI2S_RX", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_OUT("QUAT_MI2S_TX", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_IN("TERT_MI2S_RX", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_OUT("TERT_MI2S_TX", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_IN("SEC_MI2S_RX", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_OUT("SEC_MI2S_TX", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_IN("SEC_MI2S_RX_SD1",
+ "Secondary MI2S Playback SD1",
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_IN("PRI_MI2S_RX", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_OUT("PRI_MI2S_TX", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+
+ SND_SOC_DAPM_AIF_IN("PRIMARY_TDM_RX_0", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_IN("PRIMARY_TDM_RX_1", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_IN("PRIMARY_TDM_RX_2", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_IN("PRIMARY_TDM_RX_3", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_IN("PRIMARY_TDM_RX_4", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_IN("PRIMARY_TDM_RX_5", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_IN("PRIMARY_TDM_RX_6", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_IN("PRIMARY_TDM_RX_7", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_OUT("PRIMARY_TDM_TX_0", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_OUT("PRIMARY_TDM_TX_1", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_OUT("PRIMARY_TDM_TX_2", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_OUT("PRIMARY_TDM_TX_3", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_OUT("PRIMARY_TDM_TX_4", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_OUT("PRIMARY_TDM_TX_5", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_OUT("PRIMARY_TDM_TX_6", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_OUT("PRIMARY_TDM_TX_7", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+
+ SND_SOC_DAPM_AIF_IN("SEC_TDM_RX_0", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_IN("SEC_TDM_RX_1", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_IN("SEC_TDM_RX_2", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_IN("SEC_TDM_RX_3", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_IN("SEC_TDM_RX_4", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_IN("SEC_TDM_RX_5", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_IN("SEC_TDM_RX_6", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_IN("SEC_TDM_RX_7", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_OUT("SEC_TDM_TX_0", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_OUT("SEC_TDM_TX_1", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_OUT("SEC_TDM_TX_2", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_OUT("SEC_TDM_TX_3", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_OUT("SEC_TDM_TX_4", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_OUT("SEC_TDM_TX_5", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_OUT("SEC_TDM_TX_6", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_OUT("SEC_TDM_TX_7", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+
+ SND_SOC_DAPM_AIF_IN("TERT_TDM_RX_0", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_IN("TERT_TDM_RX_1", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_IN("TERT_TDM_RX_2", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_IN("TERT_TDM_RX_3", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_IN("TERT_TDM_RX_4", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_IN("TERT_TDM_RX_5", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_IN("TERT_TDM_RX_6", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_IN("TERT_TDM_RX_7", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_OUT("TERT_TDM_TX_0", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_OUT("TERT_TDM_TX_1", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_OUT("TERT_TDM_TX_2", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_OUT("TERT_TDM_TX_3", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_OUT("TERT_TDM_TX_4", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_OUT("TERT_TDM_TX_5", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_OUT("TERT_TDM_TX_6", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_OUT("TERT_TDM_TX_7", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+
+ SND_SOC_DAPM_AIF_IN("QUAT_TDM_RX_0", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_IN("QUAT_TDM_RX_1", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_IN("QUAT_TDM_RX_2", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_IN("QUAT_TDM_RX_3", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_IN("QUAT_TDM_RX_4", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_IN("QUAT_TDM_RX_5", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_IN("QUAT_TDM_RX_6", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_IN("QUAT_TDM_RX_7", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_OUT("QUAT_TDM_TX_0", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_OUT("QUAT_TDM_TX_1", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_OUT("QUAT_TDM_TX_2", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_OUT("QUAT_TDM_TX_3", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_OUT("QUAT_TDM_TX_4", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_OUT("QUAT_TDM_TX_5", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_OUT("QUAT_TDM_TX_6", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_OUT("QUAT_TDM_TX_7", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+
+ SND_SOC_DAPM_AIF_IN("QUIN_TDM_RX_0", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_IN("QUIN_TDM_RX_1", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_IN("QUIN_TDM_RX_2", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_IN("QUIN_TDM_RX_3", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_IN("QUIN_TDM_RX_4", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_IN("QUIN_TDM_RX_5", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_IN("QUIN_TDM_RX_6", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_IN("QUIN_TDM_RX_7", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_OUT("QUIN_TDM_TX_0", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_OUT("QUIN_TDM_TX_1", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_OUT("QUIN_TDM_TX_2", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_OUT("QUIN_TDM_TX_3", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_OUT("QUIN_TDM_TX_4", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_OUT("QUIN_TDM_TX_5", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_OUT("QUIN_TDM_TX_6", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_OUT("QUIN_TDM_TX_7", NULL,
+- 0, 0, 0, 0),
+- SND_SOC_DAPM_AIF_OUT("DISPLAY_PORT_RX", "NULL", 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
++ SND_SOC_DAPM_AIF_OUT("DISPLAY_PORT_RX", "NULL", 0, SND_SOC_NOPM, 0, 0),
+ };
+
+ static const struct snd_soc_component_driver q6afe_dai_component = {
+diff --git a/sound/soc/qcom/qdsp6/q6routing.c b/sound/soc/qcom/qdsp6/q6routing.c
+index 46e50612b92c1..750e6a30444eb 100644
+--- a/sound/soc/qcom/qdsp6/q6routing.c
++++ b/sound/soc/qcom/qdsp6/q6routing.c
+@@ -973,6 +973,20 @@ static int msm_routing_probe(struct snd_soc_component *c)
+ return 0;
+ }
+
++static unsigned int q6routing_reg_read(struct snd_soc_component *component,
++ unsigned int reg)
++{
++ /* default value */
++ return 0;
++}
++
++static int q6routing_reg_write(struct snd_soc_component *component,
++ unsigned int reg, unsigned int val)
++{
++ /* dummy */
++ return 0;
++}
++
+ static const struct snd_soc_component_driver msm_soc_routing_component = {
+ .probe = msm_routing_probe,
+ .name = DRV_NAME,
+@@ -981,6 +995,8 @@ static const struct snd_soc_component_driver msm_soc_routing_component = {
+ .num_dapm_widgets = ARRAY_SIZE(msm_qdsp6_widgets),
+ .dapm_routes = intercon,
+ .num_dapm_routes = ARRAY_SIZE(intercon),
++ .read = q6routing_reg_read,
++ .write = q6routing_reg_write,
+ };
+
+ static int q6pcm_routing_probe(struct platform_device *pdev)
+diff --git a/tools/bpf/bpftool/gen.c b/tools/bpf/bpftool/gen.c
+index 540ffde0b03a3..0be1330b4c1ba 100644
+--- a/tools/bpf/bpftool/gen.c
++++ b/tools/bpf/bpftool/gen.c
+@@ -400,7 +400,7 @@ static int do_skeleton(int argc, char **argv)
+ { \n\
+ struct %1$s *obj; \n\
+ \n\
+- obj = (typeof(obj))calloc(1, sizeof(*obj)); \n\
++ obj = (struct %1$s *)calloc(1, sizeof(*obj)); \n\
+ if (!obj) \n\
+ return NULL; \n\
+ if (%1$s__create_skeleton(obj)) \n\
+@@ -464,7 +464,7 @@ static int do_skeleton(int argc, char **argv)
+ { \n\
+ struct bpf_object_skeleton *s; \n\
+ \n\
+- s = (typeof(s))calloc(1, sizeof(*s)); \n\
++ s = (struct bpf_object_skeleton *)calloc(1, sizeof(*s));\n\
+ if (!s) \n\
+ return -1; \n\
+ obj->skeleton = s; \n\
+@@ -482,7 +482,7 @@ static int do_skeleton(int argc, char **argv)
+ /* maps */ \n\
+ s->map_cnt = %zu; \n\
+ s->map_skel_sz = sizeof(*s->maps); \n\
+- s->maps = (typeof(s->maps))calloc(s->map_cnt, s->map_skel_sz);\n\
++ s->maps = (struct bpf_map_skeleton *)calloc(s->map_cnt, s->map_skel_sz);\n\
+ if (!s->maps) \n\
+ goto err; \n\
+ ",
+@@ -518,7 +518,7 @@ static int do_skeleton(int argc, char **argv)
+ /* programs */ \n\
+ s->prog_cnt = %zu; \n\
+ s->prog_skel_sz = sizeof(*s->progs); \n\
+- s->progs = (typeof(s->progs))calloc(s->prog_cnt, s->prog_skel_sz);\n\
++ s->progs = (struct bpf_prog_skeleton *)calloc(s->prog_cnt, s->prog_skel_sz);\n\
+ if (!s->progs) \n\
+ goto err; \n\
+ ",
+diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
+index 11e4725b8b1c0..e7642a6e39f9e 100644
+--- a/tools/lib/bpf/libbpf.c
++++ b/tools/lib/bpf/libbpf.c
+@@ -5025,7 +5025,8 @@ static int bpf_object__collect_st_ops_relos(struct bpf_object *obj,
+ static int bpf_object__collect_map_relos(struct bpf_object *obj,
+ GElf_Shdr *shdr, Elf_Data *data)
+ {
+- int i, j, nrels, new_sz, ptr_sz = sizeof(void *);
++ const int bpf_ptr_sz = 8, host_ptr_sz = sizeof(void *);
++ int i, j, nrels, new_sz;
+ const struct btf_var_secinfo *vi = NULL;
+ const struct btf_type *sec, *var, *def;
+ const struct btf_member *member;
+@@ -5074,7 +5075,7 @@ static int bpf_object__collect_map_relos(struct bpf_object *obj,
+
+ vi = btf_var_secinfos(sec) + map->btf_var_idx;
+ if (vi->offset <= rel.r_offset &&
+- rel.r_offset + sizeof(void *) <= vi->offset + vi->size)
++ rel.r_offset + bpf_ptr_sz <= vi->offset + vi->size)
+ break;
+ }
+ if (j == obj->nr_maps) {
+@@ -5110,17 +5111,20 @@ static int bpf_object__collect_map_relos(struct bpf_object *obj,
+ return -EINVAL;
+
+ moff = rel.r_offset - vi->offset - moff;
+- if (moff % ptr_sz)
++ /* here we use BPF pointer size, which is always 64 bit, as we
++ * are parsing ELF that was built for BPF target
++ */
++ if (moff % bpf_ptr_sz)
+ return -EINVAL;
+- moff /= ptr_sz;
++ moff /= bpf_ptr_sz;
+ if (moff >= map->init_slots_sz) {
+ new_sz = moff + 1;
+- tmp = realloc(map->init_slots, new_sz * ptr_sz);
++ tmp = realloc(map->init_slots, new_sz * host_ptr_sz);
+ if (!tmp)
+ return -ENOMEM;
+ map->init_slots = tmp;
+ memset(map->init_slots + map->init_slots_sz, 0,
+- (new_sz - map->init_slots_sz) * ptr_sz);
++ (new_sz - map->init_slots_sz) * host_ptr_sz);
+ map->init_slots_sz = new_sz;
+ }
+ map->init_slots[moff] = targ_map;
+diff --git a/tools/testing/selftests/bpf/.gitignore b/tools/testing/selftests/bpf/.gitignore
+index 1bb204cee853f..9a0946ddb705a 100644
+--- a/tools/testing/selftests/bpf/.gitignore
++++ b/tools/testing/selftests/bpf/.gitignore
+@@ -6,7 +6,6 @@ test_lpm_map
+ test_tag
+ FEATURE-DUMP.libbpf
+ fixdep
+-test_align
+ test_dev_cgroup
+ /test_progs*
+ test_tcpbpf_user
+diff --git a/tools/testing/selftests/bpf/Makefile b/tools/testing/selftests/bpf/Makefile
+index 4f322d5388757..50965cc7bf098 100644
+--- a/tools/testing/selftests/bpf/Makefile
++++ b/tools/testing/selftests/bpf/Makefile
+@@ -32,7 +32,7 @@ LDLIBS += -lcap -lelf -lz -lrt -lpthread
+
+ # Order correspond to 'make run_tests' order
+ TEST_GEN_PROGS = test_verifier test_tag test_maps test_lru_map test_lpm_map test_progs \
+- test_align test_verifier_log test_dev_cgroup test_tcpbpf_user \
++ test_verifier_log test_dev_cgroup test_tcpbpf_user \
+ test_sock test_btf test_sockmap get_cgroup_id_user test_socket_cookie \
+ test_cgroup_storage \
+ test_netcnt test_tcpnotify_user test_sock_fields test_sysctl \
+diff --git a/tools/testing/selftests/cgroup/cgroup_util.c b/tools/testing/selftests/cgroup/cgroup_util.c
+index 8a637ca7d73a4..05853b0b88318 100644
+--- a/tools/testing/selftests/cgroup/cgroup_util.c
++++ b/tools/testing/selftests/cgroup/cgroup_util.c
+@@ -106,7 +106,7 @@ int cg_read_strcmp(const char *cgroup, const char *control,
+
+ /* Handle the case of comparing against empty string */
+ if (!expected)
+- size = 32;
++ return -1;
+ else
+ size = strlen(expected) + 1;
+
+diff --git a/tools/testing/selftests/kvm/x86_64/debug_regs.c b/tools/testing/selftests/kvm/x86_64/debug_regs.c
+index 8162c58a1234e..b8d14f9db5f9e 100644
+--- a/tools/testing/selftests/kvm/x86_64/debug_regs.c
++++ b/tools/testing/selftests/kvm/x86_64/debug_regs.c
+@@ -40,11 +40,11 @@ static void guest_code(void)
+
+ /* Single step test, covers 2 basic instructions and 2 emulated */
+ asm volatile("ss_start: "
+- "xor %%rax,%%rax\n\t"
++ "xor %%eax,%%eax\n\t"
+ "cpuid\n\t"
+ "movl $0x1a0,%%ecx\n\t"
+ "rdmsr\n\t"
+- : : : "rax", "ecx");
++ : : : "eax", "ebx", "ecx", "edx");
+
+ /* DR6.BD test */
+ asm volatile("bd_start: mov %%dr0, %%rax" : : : "rax");
+diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
+index 0a68c9d3d3ab1..9e925675a8868 100644
+--- a/virt/kvm/kvm_main.c
++++ b/virt/kvm/kvm_main.c
+@@ -427,7 +427,8 @@ static int kvm_mmu_notifier_invalidate_range_start(struct mmu_notifier *mn,
+ * count is also read inside the mmu_lock critical section.
+ */
+ kvm->mmu_notifier_count++;
+- need_tlb_flush = kvm_unmap_hva_range(kvm, range->start, range->end);
++ need_tlb_flush = kvm_unmap_hva_range(kvm, range->start, range->end,
++ range->flags);
+ need_tlb_flush |= kvm->tlbs_dirty;
+ /* we've to flush the tlb before the pages can be freed */
+ if (need_tlb_flush)
* [gentoo-commits] proj/linux-patches:5.8 commit in: /
@ 2020-08-27 13:22 Mike Pagano
From: Mike Pagano @ 2020-08-27 13:22 UTC
To: gentoo-commits
commit: 1d996290ed3f1ffdc4ea2d9b4c4d2cf19ccc77d3
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Aug 27 13:22:06 2020 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Aug 27 13:22:06 2020 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=1d996290
Linux patch 5.8.5
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1004_linux-5.8.5.patch | 397 +++++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 401 insertions(+)
diff --git a/0000_README b/0000_README
index 17d6b16..4ed3bb4 100644
--- a/0000_README
+++ b/0000_README
@@ -59,6 +59,10 @@ Patch: 1003_linux-5.8.4.patch
From: http://www.kernel.org
Desc: Linux 5.8.4
+Patch: 1004_linux-5.8.5.patch
+From: http://www.kernel.org
+Desc: Linux 5.8.5
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1004_linux-5.8.5.patch b/1004_linux-5.8.5.patch
new file mode 100644
index 0000000..68d90ad
--- /dev/null
+++ b/1004_linux-5.8.5.patch
@@ -0,0 +1,397 @@
+diff --git a/Makefile b/Makefile
+index 9a7a416f2d84e..f47073a3b4740 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 8
+-SUBLEVEL = 4
++SUBLEVEL = 5
+ EXTRAVERSION =
+ NAME = Kleptomaniac Octopus
+
+diff --git a/drivers/net/ethernet/amazon/ena/ena_netdev.c b/drivers/net/ethernet/amazon/ena/ena_netdev.c
+index 000f57198352d..9f2c697ba0ac8 100644
+--- a/drivers/net/ethernet/amazon/ena/ena_netdev.c
++++ b/drivers/net/ethernet/amazon/ena/ena_netdev.c
+@@ -3609,7 +3609,7 @@ static int check_missing_comp_in_tx_queue(struct ena_adapter *adapter,
+ }
+
+ u64_stats_update_begin(&tx_ring->syncp);
+- tx_ring->tx_stats.missed_tx = missed_tx;
++ tx_ring->tx_stats.missed_tx += missed_tx;
+ u64_stats_update_end(&tx_ring->syncp);
+
+ return rc;
+@@ -4537,6 +4537,9 @@ static void ena_keep_alive_wd(void *adapter_data,
+ tx_drops = ((u64)desc->tx_drops_high << 32) | desc->tx_drops_low;
+
+ u64_stats_update_begin(&adapter->syncp);
++ /* These stats are accumulated by the device, so the counters indicate
++ * all drops since last reset.
++ */
+ adapter->dev_stats.rx_drops = rx_drops;
+ adapter->dev_stats.tx_drops = tx_drops;
+ u64_stats_update_end(&adapter->syncp);
+diff --git a/fs/binfmt_flat.c b/fs/binfmt_flat.c
+index f2f9086ebe983..b9c658e0548eb 100644
+--- a/fs/binfmt_flat.c
++++ b/fs/binfmt_flat.c
+@@ -576,7 +576,7 @@ static int load_flat_file(struct linux_binprm *bprm,
+ goto err;
+ }
+
+- len = data_len + extra;
++ len = data_len + extra + MAX_SHARED_LIBS * sizeof(unsigned long);
+ len = PAGE_ALIGN(len);
+ realdatastart = vm_mmap(NULL, 0, len,
+ PROT_READ|PROT_WRITE|PROT_EXEC, MAP_PRIVATE, 0);
+@@ -590,7 +590,9 @@ static int load_flat_file(struct linux_binprm *bprm,
+ vm_munmap(textpos, text_len);
+ goto err;
+ }
+- datapos = ALIGN(realdatastart, FLAT_DATA_ALIGN);
++ datapos = ALIGN(realdatastart +
++ MAX_SHARED_LIBS * sizeof(unsigned long),
++ FLAT_DATA_ALIGN);
+
+ pr_debug("Allocated data+bss+stack (%u bytes): %lx\n",
+ data_len + bss_len + stack_len, datapos);
+@@ -620,7 +622,7 @@ static int load_flat_file(struct linux_binprm *bprm,
+ memp_size = len;
+ } else {
+
+- len = text_len + data_len + extra;
++ len = text_len + data_len + extra + MAX_SHARED_LIBS * sizeof(u32);
+ len = PAGE_ALIGN(len);
+ textpos = vm_mmap(NULL, 0, len,
+ PROT_READ | PROT_EXEC | PROT_WRITE, MAP_PRIVATE, 0);
+@@ -635,7 +637,9 @@ static int load_flat_file(struct linux_binprm *bprm,
+ }
+
+ realdatastart = textpos + ntohl(hdr->data_start);
+- datapos = ALIGN(realdatastart, FLAT_DATA_ALIGN);
++ datapos = ALIGN(realdatastart +
++ MAX_SHARED_LIBS * sizeof(u32),
++ FLAT_DATA_ALIGN);
+
+ reloc = (__be32 __user *)
+ (datapos + (ntohl(hdr->reloc_start) - text_len));
+@@ -652,9 +656,8 @@ static int load_flat_file(struct linux_binprm *bprm,
+ (text_len + full_data
+ - sizeof(struct flat_hdr)),
+ 0);
+- if (datapos != realdatastart)
+- memmove((void *)datapos, (void *)realdatastart,
+- full_data);
++ memmove((void *) datapos, (void *) realdatastart,
++ full_data);
+ #else
+ /*
+ * This is used on MMU systems mainly for testing.
+@@ -710,7 +713,8 @@ static int load_flat_file(struct linux_binprm *bprm,
+ if (IS_ERR_VALUE(result)) {
+ ret = result;
+ pr_err("Unable to read code+data+bss, errno %d\n", ret);
+- vm_munmap(textpos, text_len + data_len + extra);
++ vm_munmap(textpos, text_len + data_len + extra +
++ MAX_SHARED_LIBS * sizeof(u32));
+ goto err;
+ }
+ }
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+index dd8ad87540ef7..26978630378e0 100644
+--- a/fs/io_uring.c
++++ b/fs/io_uring.c
+@@ -4363,7 +4363,8 @@ static int io_sq_thread_acquire_mm(struct io_ring_ctx *ctx,
+ struct io_kiocb *req)
+ {
+ if (io_op_defs[req->opcode].needs_mm && !current->mm) {
+- if (unlikely(!mmget_not_zero(ctx->sqo_mm)))
++ if (unlikely(!(ctx->flags & IORING_SETUP_SQPOLL) ||
++ !mmget_not_zero(ctx->sqo_mm)))
+ return -EFAULT;
+ kthread_use_mm(ctx->sqo_mm);
+ }
+diff --git a/net/core/skbuff.c b/net/core/skbuff.c
+index b8afefe6f6b69..7afe52bd038ba 100644
+--- a/net/core/skbuff.c
++++ b/net/core/skbuff.c
+@@ -5419,8 +5419,8 @@ struct sk_buff *skb_vlan_untag(struct sk_buff *skb)
+ skb = skb_share_check(skb, GFP_ATOMIC);
+ if (unlikely(!skb))
+ goto err_free;
+-
+- if (unlikely(!pskb_may_pull(skb, VLAN_HLEN)))
++ /* We may access the two bytes after vlan_hdr in vlan_set_encap_proto(). */
++ if (unlikely(!pskb_may_pull(skb, VLAN_HLEN + sizeof(unsigned short))))
+ goto err_free;
+
+ vhdr = (struct vlan_hdr *)skb->data;
+diff --git a/net/ethtool/features.c b/net/ethtool/features.c
+index 4e632dc987d85..495635f152ba6 100644
+--- a/net/ethtool/features.c
++++ b/net/ethtool/features.c
+@@ -224,7 +224,9 @@ int ethnl_set_features(struct sk_buff *skb, struct genl_info *info)
+ DECLARE_BITMAP(wanted_diff_mask, NETDEV_FEATURE_COUNT);
+ DECLARE_BITMAP(active_diff_mask, NETDEV_FEATURE_COUNT);
+ DECLARE_BITMAP(old_active, NETDEV_FEATURE_COUNT);
++ DECLARE_BITMAP(old_wanted, NETDEV_FEATURE_COUNT);
+ DECLARE_BITMAP(new_active, NETDEV_FEATURE_COUNT);
++ DECLARE_BITMAP(new_wanted, NETDEV_FEATURE_COUNT);
+ DECLARE_BITMAP(req_wanted, NETDEV_FEATURE_COUNT);
+ DECLARE_BITMAP(req_mask, NETDEV_FEATURE_COUNT);
+ struct nlattr *tb[ETHTOOL_A_FEATURES_MAX + 1];
+@@ -250,6 +252,7 @@ int ethnl_set_features(struct sk_buff *skb, struct genl_info *info)
+
+ rtnl_lock();
+ ethnl_features_to_bitmap(old_active, dev->features);
++ ethnl_features_to_bitmap(old_wanted, dev->wanted_features);
+ ret = ethnl_parse_bitset(req_wanted, req_mask, NETDEV_FEATURE_COUNT,
+ tb[ETHTOOL_A_FEATURES_WANTED],
+ netdev_features_strings, info->extack);
+@@ -261,17 +264,15 @@ int ethnl_set_features(struct sk_buff *skb, struct genl_info *info)
+ goto out_rtnl;
+ }
+
+- /* set req_wanted bits not in req_mask from old_active */
++ /* set req_wanted bits not in req_mask from old_wanted */
+ bitmap_and(req_wanted, req_wanted, req_mask, NETDEV_FEATURE_COUNT);
+- bitmap_andnot(new_active, old_active, req_mask, NETDEV_FEATURE_COUNT);
+- bitmap_or(req_wanted, new_active, req_wanted, NETDEV_FEATURE_COUNT);
+- if (bitmap_equal(req_wanted, old_active, NETDEV_FEATURE_COUNT)) {
+- ret = 0;
+- goto out_rtnl;
++ bitmap_andnot(new_wanted, old_wanted, req_mask, NETDEV_FEATURE_COUNT);
++ bitmap_or(req_wanted, new_wanted, req_wanted, NETDEV_FEATURE_COUNT);
++ if (!bitmap_equal(req_wanted, old_wanted, NETDEV_FEATURE_COUNT)) {
++ dev->wanted_features &= ~dev->hw_features;
++ dev->wanted_features |= ethnl_bitmap_to_features(req_wanted) & dev->hw_features;
++ __netdev_update_features(dev);
+ }
+-
+- dev->wanted_features = ethnl_bitmap_to_features(req_wanted);
+- __netdev_update_features(dev);
+ ethnl_features_to_bitmap(new_active, dev->features);
+ mod = !bitmap_equal(old_active, new_active, NETDEV_FEATURE_COUNT);
+
+diff --git a/net/ipv4/nexthop.c b/net/ipv4/nexthop.c
+index cc8049b100b24..134e923822750 100644
+--- a/net/ipv4/nexthop.c
++++ b/net/ipv4/nexthop.c
+@@ -446,7 +446,7 @@ static int nh_check_attr_group(struct net *net, struct nlattr *tb[],
+ unsigned int i, j;
+ u8 nhg_fdb = 0;
+
+- if (len & (sizeof(struct nexthop_grp) - 1)) {
++ if (!len || len & (sizeof(struct nexthop_grp) - 1)) {
+ NL_SET_ERR_MSG(extack,
+ "Invalid length for nexthop group attribute");
+ return -EINVAL;
+@@ -1187,6 +1187,9 @@ static struct nexthop *nexthop_create_group(struct net *net,
+ struct nexthop *nh;
+ int i;
+
++ if (WARN_ON(!num_nh))
++ return ERR_PTR(-EINVAL);
++
+ nh = nexthop_alloc();
+ if (!nh)
+ return ERR_PTR(-ENOMEM);
+diff --git a/net/ipv6/ip6_tunnel.c b/net/ipv6/ip6_tunnel.c
+index a18c378ca5f46..d8f0102cec94a 100644
+--- a/net/ipv6/ip6_tunnel.c
++++ b/net/ipv6/ip6_tunnel.c
+@@ -913,7 +913,15 @@ int ip6_tnl_rcv(struct ip6_tnl *t, struct sk_buff *skb,
+ struct metadata_dst *tun_dst,
+ bool log_ecn_err)
+ {
+- return __ip6_tnl_rcv(t, skb, tpi, tun_dst, ip6ip6_dscp_ecn_decapsulate,
++ int (*dscp_ecn_decapsulate)(const struct ip6_tnl *t,
++ const struct ipv6hdr *ipv6h,
++ struct sk_buff *skb);
++
++ dscp_ecn_decapsulate = ip6ip6_dscp_ecn_decapsulate;
++ if (tpi->proto == htons(ETH_P_IP))
++ dscp_ecn_decapsulate = ip4ip6_dscp_ecn_decapsulate;
++
++ return __ip6_tnl_rcv(t, skb, tpi, tun_dst, dscp_ecn_decapsulate,
+ log_ecn_err);
+ }
+ EXPORT_SYMBOL(ip6_tnl_rcv);
+diff --git a/net/netlink/policy.c b/net/netlink/policy.c
+index f6491853c7971..2b3e26f7496f5 100644
+--- a/net/netlink/policy.c
++++ b/net/netlink/policy.c
+@@ -51,6 +51,9 @@ static int add_policy(struct nl_policy_dump **statep,
+ if (!state)
+ return -ENOMEM;
+
++ memset(&state->policies[state->n_alloc], 0,
++ flex_array_size(state, policies, n_alloc - state->n_alloc));
++
+ state->policies[state->n_alloc].policy = policy;
+ state->policies[state->n_alloc].maxtype = maxtype;
+ state->n_alloc = n_alloc;
+diff --git a/net/qrtr/qrtr.c b/net/qrtr/qrtr.c
+index 300a104b9a0fb..85ab4559f0577 100644
+--- a/net/qrtr/qrtr.c
++++ b/net/qrtr/qrtr.c
+@@ -692,23 +692,25 @@ static void qrtr_port_remove(struct qrtr_sock *ipc)
+ */
+ static int qrtr_port_assign(struct qrtr_sock *ipc, int *port)
+ {
++ u32 min_port;
+ int rc;
+
+ mutex_lock(&qrtr_port_lock);
+ if (!*port) {
+- rc = idr_alloc(&qrtr_ports, ipc,
+- QRTR_MIN_EPH_SOCKET, QRTR_MAX_EPH_SOCKET + 1,
+- GFP_ATOMIC);
+- if (rc >= 0)
+- *port = rc;
++ min_port = QRTR_MIN_EPH_SOCKET;
++ rc = idr_alloc_u32(&qrtr_ports, ipc, &min_port, QRTR_MAX_EPH_SOCKET, GFP_ATOMIC);
++ if (!rc)
++ *port = min_port;
+ } else if (*port < QRTR_MIN_EPH_SOCKET && !capable(CAP_NET_ADMIN)) {
+ rc = -EACCES;
+ } else if (*port == QRTR_PORT_CTRL) {
+- rc = idr_alloc(&qrtr_ports, ipc, 0, 1, GFP_ATOMIC);
++ min_port = 0;
++ rc = idr_alloc_u32(&qrtr_ports, ipc, &min_port, 0, GFP_ATOMIC);
+ } else {
+- rc = idr_alloc(&qrtr_ports, ipc, *port, *port + 1, GFP_ATOMIC);
+- if (rc >= 0)
+- *port = rc;
++ min_port = *port;
++ rc = idr_alloc_u32(&qrtr_ports, ipc, &min_port, *port, GFP_ATOMIC);
++ if (!rc)
++ *port = min_port;
+ }
+ mutex_unlock(&qrtr_port_lock);
+
+diff --git a/net/sched/act_ct.c b/net/sched/act_ct.c
+index 6ed1652d1e265..41d8440deaf14 100644
+--- a/net/sched/act_ct.c
++++ b/net/sched/act_ct.c
+@@ -704,7 +704,7 @@ static int tcf_ct_handle_fragments(struct net *net, struct sk_buff *skb,
+ err = ip_defrag(net, skb, user);
+ local_bh_enable();
+ if (err && err != -EINPROGRESS)
+- goto out_free;
++ return err;
+
+ if (!err)
+ *defrag = true;
+diff --git a/net/sctp/stream.c b/net/sctp/stream.c
+index bda2536dd740f..6dc95dcc0ff4f 100644
+--- a/net/sctp/stream.c
++++ b/net/sctp/stream.c
+@@ -88,12 +88,13 @@ static int sctp_stream_alloc_out(struct sctp_stream *stream, __u16 outcnt,
+ int ret;
+
+ if (outcnt <= stream->outcnt)
+- return 0;
++ goto out;
+
+ ret = genradix_prealloc(&stream->out, outcnt, gfp);
+ if (ret)
+ return ret;
+
++out:
+ stream->outcnt = outcnt;
+ return 0;
+ }
+@@ -104,12 +105,13 @@ static int sctp_stream_alloc_in(struct sctp_stream *stream, __u16 incnt,
+ int ret;
+
+ if (incnt <= stream->incnt)
+- return 0;
++ goto out;
+
+ ret = genradix_prealloc(&stream->in, incnt, gfp);
+ if (ret)
+ return ret;
+
++out:
+ stream->incnt = incnt;
+ return 0;
+ }
+diff --git a/net/smc/smc_diag.c b/net/smc/smc_diag.c
+index e1f64f4ba2361..da9ba6d1679b7 100644
+--- a/net/smc/smc_diag.c
++++ b/net/smc/smc_diag.c
+@@ -170,13 +170,15 @@ static int __smc_diag_dump(struct sock *sk, struct sk_buff *skb,
+ (req->diag_ext & (1 << (SMC_DIAG_DMBINFO - 1))) &&
+ !list_empty(&smc->conn.lgr->list)) {
+ struct smc_connection *conn = &smc->conn;
+- struct smcd_diag_dmbinfo dinfo = {
+- .linkid = *((u32 *)conn->lgr->id),
+- .peer_gid = conn->lgr->peer_gid,
+- .my_gid = conn->lgr->smcd->local_gid,
+- .token = conn->rmb_desc->token,
+- .peer_token = conn->peer_token
+- };
++ struct smcd_diag_dmbinfo dinfo;
++
++ memset(&dinfo, 0, sizeof(dinfo));
++
++ dinfo.linkid = *((u32 *)conn->lgr->id);
++ dinfo.peer_gid = conn->lgr->peer_gid;
++ dinfo.my_gid = conn->lgr->smcd->local_gid;
++ dinfo.token = conn->rmb_desc->token;
++ dinfo.peer_token = conn->peer_token;
+
+ if (nla_put(skb, SMC_DIAG_DMBINFO, sizeof(dinfo), &dinfo) < 0)
+ goto errout;
+diff --git a/net/tipc/crypto.c b/net/tipc/crypto.c
+index c8c47fc726536..d6426b6cc9c5a 100644
+--- a/net/tipc/crypto.c
++++ b/net/tipc/crypto.c
+@@ -757,10 +757,12 @@ static void tipc_aead_encrypt_done(struct crypto_async_request *base, int err)
+ switch (err) {
+ case 0:
+ this_cpu_inc(tx->stats->stat[STAT_ASYNC_OK]);
++ rcu_read_lock();
+ if (likely(test_bit(0, &b->up)))
+ b->media->send_msg(net, skb, b, &tx_ctx->dst);
+ else
+ kfree_skb(skb);
++ rcu_read_unlock();
+ break;
+ case -EINPROGRESS:
+ return;
+diff --git a/net/tipc/netlink_compat.c b/net/tipc/netlink_compat.c
+index 217516357ef26..90e3c70a91ad0 100644
+--- a/net/tipc/netlink_compat.c
++++ b/net/tipc/netlink_compat.c
+@@ -275,8 +275,9 @@ err_out:
+ static int tipc_nl_compat_dumpit(struct tipc_nl_compat_cmd_dump *cmd,
+ struct tipc_nl_compat_msg *msg)
+ {
+- int err;
++ struct nlmsghdr *nlh;
+ struct sk_buff *arg;
++ int err;
+
+ if (msg->req_type && (!msg->req_size ||
+ !TLV_CHECK_TYPE(msg->req, msg->req_type)))
+@@ -305,6 +306,15 @@ static int tipc_nl_compat_dumpit(struct tipc_nl_compat_cmd_dump *cmd,
+ return -ENOMEM;
+ }
+
++ nlh = nlmsg_put(arg, 0, 0, tipc_genl_family.id, 0, NLM_F_MULTI);
++ if (!nlh) {
++ kfree_skb(arg);
++ kfree_skb(msg->rep);
++ msg->rep = NULL;
++ return -EMSGSIZE;
++ }
++ nlmsg_end(arg, nlh);
++
+ err = __tipc_nl_compat_dumpit(cmd, msg, arg);
+ if (err) {
+ kfree_skb(msg->rep);
* [gentoo-commits] proj/linux-patches:5.8 commit in: /
@ 2020-09-03 12:37 Mike Pagano
From: Mike Pagano @ 2020-09-03 12:37 UTC
To: gentoo-commits
commit: d74b195b5da2b0ed808dfa000ab302de53abfcbc
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Sep 3 11:48:16 2020 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Sep 3 11:48:16 2020 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=d74b195b
Linux patch 5.7.6
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1005_linux-5.7.6.patch | 20870 +++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 20874 insertions(+)
diff --git a/0000_README b/0000_README
index 4ed3bb4..ba2f389 100644
--- a/0000_README
+++ b/0000_README
@@ -63,6 +63,10 @@ Patch: 1004_linux-5.8.5.patch
From: http://www.kernel.org
Desc: Linux 5.8.5
+Patch: 1005_linux-5.8.6.patch
+From: http://www.kernel.org
+Desc: Linux 5.8.6
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1005_linux-5.7.6.patch b/1005_linux-5.7.6.patch
new file mode 100644
index 0000000..9939e08
--- /dev/null
+++ b/1005_linux-5.7.6.patch
@@ -0,0 +1,20870 @@
+diff --git a/Makefile b/Makefile
+index c48d489f82bc..f928cd1dfdc1 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 7
+-SUBLEVEL = 5
++SUBLEVEL = 6
+ EXTRAVERSION =
+ NAME = Kleptomaniac Octopus
+
+diff --git a/arch/arm/boot/dts/aspeed-bmc-facebook-tiogapass.dts b/arch/arm/boot/dts/aspeed-bmc-facebook-tiogapass.dts
+index 5d7cbd9164d4..669980c690f9 100644
+--- a/arch/arm/boot/dts/aspeed-bmc-facebook-tiogapass.dts
++++ b/arch/arm/boot/dts/aspeed-bmc-facebook-tiogapass.dts
+@@ -112,13 +112,13 @@
+ &kcs2 {
+ // BMC KCS channel 2
+ status = "okay";
+- kcs_addr = <0xca8>;
++ aspeed,lpc-io-reg = <0xca8>;
+ };
+
+ &kcs3 {
+ // BMC KCS channel 3
+ status = "okay";
+- kcs_addr = <0xca2>;
++ aspeed,lpc-io-reg = <0xca2>;
+ };
+
+ &mac0 {
+diff --git a/arch/arm/boot/dts/aspeed-g5.dtsi b/arch/arm/boot/dts/aspeed-g5.dtsi
+index f12ec04d3cbc..bc92d3db7b78 100644
+--- a/arch/arm/boot/dts/aspeed-g5.dtsi
++++ b/arch/arm/boot/dts/aspeed-g5.dtsi
+@@ -426,22 +426,22 @@
+ #size-cells = <1>;
+ ranges = <0x0 0x0 0x80>;
+
+- kcs1: kcs1@0 {
+- compatible = "aspeed,ast2500-kcs-bmc";
++ kcs1: kcs@24 {
++ compatible = "aspeed,ast2500-kcs-bmc-v2";
++ reg = <0x24 0x1>, <0x30 0x1>, <0x3c 0x1>;
+ interrupts = <8>;
+- kcs_chan = <1>;
+ status = "disabled";
+ };
+- kcs2: kcs2@0 {
+- compatible = "aspeed,ast2500-kcs-bmc";
++ kcs2: kcs@28 {
++ compatible = "aspeed,ast2500-kcs-bmc-v2";
++ reg = <0x28 0x1>, <0x34 0x1>, <0x40 0x1>;
+ interrupts = <8>;
+- kcs_chan = <2>;
+ status = "disabled";
+ };
+- kcs3: kcs3@0 {
+- compatible = "aspeed,ast2500-kcs-bmc";
++ kcs3: kcs@2c {
++ compatible = "aspeed,ast2500-kcs-bmc-v2";
++ reg = <0x2c 0x1>, <0x38 0x1>, <0x44 0x1>;
+ interrupts = <8>;
+- kcs_chan = <3>;
+ status = "disabled";
+ };
+ };
+@@ -455,10 +455,10 @@
+ #size-cells = <1>;
+ ranges = <0x0 0x80 0x1e0>;
+
+- kcs4: kcs4@0 {
+- compatible = "aspeed,ast2500-kcs-bmc";
++ kcs4: kcs@94 {
++ compatible = "aspeed,ast2500-kcs-bmc-v2";
++ reg = <0x94 0x1>, <0x98 0x1>, <0x9c 0x1>;
+ interrupts = <8>;
+- kcs_chan = <4>;
+ status = "disabled";
+ };
+
+diff --git a/arch/arm/boot/dts/aspeed-g6.dtsi b/arch/arm/boot/dts/aspeed-g6.dtsi
+index 0a29b3b57a9d..a2d2ac720a51 100644
+--- a/arch/arm/boot/dts/aspeed-g6.dtsi
++++ b/arch/arm/boot/dts/aspeed-g6.dtsi
+@@ -65,6 +65,7 @@
+ <GIC_PPI 10 (GIC_CPU_MASK_SIMPLE(2) | IRQ_TYPE_LEVEL_LOW)>;
+ clocks = <&syscon ASPEED_CLK_HPLL>;
+ arm,cpu-registers-not-fw-configured;
++ always-on;
+ };
+
+ ahb {
+@@ -368,6 +369,7 @@
+ <&gic GIC_SPI 23 IRQ_TYPE_LEVEL_HIGH>;
+ clocks = <&syscon ASPEED_CLK_APB1>;
+ clock-names = "PCLK";
++ status = "disabled";
+ };
+
+ uart1: serial@1e783000 {
+@@ -433,22 +435,23 @@
+ #size-cells = <1>;
+ ranges = <0x0 0x0 0x80>;
+
+- kcs1: kcs1@0 {
+- compatible = "aspeed,ast2600-kcs-bmc";
++ kcs1: kcs@24 {
++ compatible = "aspeed,ast2500-kcs-bmc-v2";
++ reg = <0x24 0x1>, <0x30 0x1>, <0x3c 0x1>;
+ interrupts = <GIC_SPI 138 IRQ_TYPE_LEVEL_HIGH>;
+ kcs_chan = <1>;
+ status = "disabled";
+ };
+- kcs2: kcs2@0 {
+- compatible = "aspeed,ast2600-kcs-bmc";
++ kcs2: kcs@28 {
++ compatible = "aspeed,ast2500-kcs-bmc-v2";
++ reg = <0x28 0x1>, <0x34 0x1>, <0x40 0x1>;
+ interrupts = <GIC_SPI 139 IRQ_TYPE_LEVEL_HIGH>;
+- kcs_chan = <2>;
+ status = "disabled";
+ };
+- kcs3: kcs3@0 {
+- compatible = "aspeed,ast2600-kcs-bmc";
++ kcs3: kcs@2c {
++ compatible = "aspeed,ast2500-kcs-bmc-v2";
++ reg = <0x2c 0x1>, <0x38 0x1>, <0x44 0x1>;
+ interrupts = <GIC_SPI 140 IRQ_TYPE_LEVEL_HIGH>;
+- kcs_chan = <3>;
+ status = "disabled";
+ };
+ };
+@@ -462,10 +465,10 @@
+ #size-cells = <1>;
+ ranges = <0x0 0x80 0x1e0>;
+
+- kcs4: kcs4@0 {
+- compatible = "aspeed,ast2600-kcs-bmc";
++ kcs4: kcs@94 {
++ compatible = "aspeed,ast2500-kcs-bmc-v2";
++ reg = <0x94 0x1>, <0x98 0x1>, <0x9c 0x1>;
+ interrupts = <GIC_SPI 141 IRQ_TYPE_LEVEL_HIGH>;
+- kcs_chan = <4>;
+ status = "disabled";
+ };
+
+diff --git a/arch/arm/boot/dts/bcm2835-common.dtsi b/arch/arm/boot/dts/bcm2835-common.dtsi
+index 2b1d9d4c0cde..4119271c979d 100644
+--- a/arch/arm/boot/dts/bcm2835-common.dtsi
++++ b/arch/arm/boot/dts/bcm2835-common.dtsi
+@@ -130,7 +130,6 @@
+ compatible = "brcm,bcm2835-v3d";
+ reg = <0x7ec00000 0x1000>;
+ interrupts = <1 10>;
+- power-domains = <&pm BCM2835_POWER_DOMAIN_GRAFX_V3D>;
+ };
+
+ vc4: gpu {
+diff --git a/arch/arm/boot/dts/bcm2835-rpi-common.dtsi b/arch/arm/boot/dts/bcm2835-rpi-common.dtsi
+new file mode 100644
+index 000000000000..8a55b6cded59
+--- /dev/null
++++ b/arch/arm/boot/dts/bcm2835-rpi-common.dtsi
+@@ -0,0 +1,12 @@
++// SPDX-License-Identifier: GPL-2.0
++/*
++ * This include file covers the common peripherals and configuration between
++ * bcm2835, bcm2836 and bcm2837 implementations that interact with RPi's
++ * firmware interface.
++ */
++
++#include <dt-bindings/power/raspberrypi-power.h>
++
++&v3d {
++ power-domains = <&power RPI_POWER_DOMAIN_V3D>;
++};
+diff --git a/arch/arm/boot/dts/bcm2835.dtsi b/arch/arm/boot/dts/bcm2835.dtsi
+index 53bf4579cc22..0549686134ea 100644
+--- a/arch/arm/boot/dts/bcm2835.dtsi
++++ b/arch/arm/boot/dts/bcm2835.dtsi
+@@ -1,6 +1,7 @@
+ // SPDX-License-Identifier: GPL-2.0
+ #include "bcm283x.dtsi"
+ #include "bcm2835-common.dtsi"
++#include "bcm2835-rpi-common.dtsi"
+
+ / {
+ compatible = "brcm,bcm2835";
+diff --git a/arch/arm/boot/dts/bcm2836.dtsi b/arch/arm/boot/dts/bcm2836.dtsi
+index 82d6c4662ae4..b390006aef79 100644
+--- a/arch/arm/boot/dts/bcm2836.dtsi
++++ b/arch/arm/boot/dts/bcm2836.dtsi
+@@ -1,6 +1,7 @@
+ // SPDX-License-Identifier: GPL-2.0
+ #include "bcm283x.dtsi"
+ #include "bcm2835-common.dtsi"
++#include "bcm2835-rpi-common.dtsi"
+
+ / {
+ compatible = "brcm,bcm2836";
+diff --git a/arch/arm/boot/dts/bcm2837.dtsi b/arch/arm/boot/dts/bcm2837.dtsi
+index 9e95fee78e19..0199ec98cd61 100644
+--- a/arch/arm/boot/dts/bcm2837.dtsi
++++ b/arch/arm/boot/dts/bcm2837.dtsi
+@@ -1,5 +1,6 @@
+ #include "bcm283x.dtsi"
+ #include "bcm2835-common.dtsi"
++#include "bcm2835-rpi-common.dtsi"
+
+ / {
+ compatible = "brcm,bcm2837";
+diff --git a/arch/arm/boot/dts/r8a7743.dtsi b/arch/arm/boot/dts/r8a7743.dtsi
+index e8b340bb99bc..fff123753b85 100644
+--- a/arch/arm/boot/dts/r8a7743.dtsi
++++ b/arch/arm/boot/dts/r8a7743.dtsi
+@@ -338,7 +338,7 @@
+ #thermal-sensor-cells = <0>;
+ };
+
+- ipmmu_sy0: mmu@e6280000 {
++ ipmmu_sy0: iommu@e6280000 {
+ compatible = "renesas,ipmmu-r8a7743",
+ "renesas,ipmmu-vmsa";
+ reg = <0 0xe6280000 0 0x1000>;
+@@ -348,7 +348,7 @@
+ status = "disabled";
+ };
+
+- ipmmu_sy1: mmu@e6290000 {
++ ipmmu_sy1: iommu@e6290000 {
+ compatible = "renesas,ipmmu-r8a7743",
+ "renesas,ipmmu-vmsa";
+ reg = <0 0xe6290000 0 0x1000>;
+@@ -357,7 +357,7 @@
+ status = "disabled";
+ };
+
+- ipmmu_ds: mmu@e6740000 {
++ ipmmu_ds: iommu@e6740000 {
+ compatible = "renesas,ipmmu-r8a7743",
+ "renesas,ipmmu-vmsa";
+ reg = <0 0xe6740000 0 0x1000>;
+@@ -367,7 +367,7 @@
+ status = "disabled";
+ };
+
+- ipmmu_mp: mmu@ec680000 {
++ ipmmu_mp: iommu@ec680000 {
+ compatible = "renesas,ipmmu-r8a7743",
+ "renesas,ipmmu-vmsa";
+ reg = <0 0xec680000 0 0x1000>;
+@@ -376,7 +376,7 @@
+ status = "disabled";
+ };
+
+- ipmmu_mx: mmu@fe951000 {
++ ipmmu_mx: iommu@fe951000 {
+ compatible = "renesas,ipmmu-r8a7743",
+ "renesas,ipmmu-vmsa";
+ reg = <0 0xfe951000 0 0x1000>;
+@@ -386,7 +386,7 @@
+ status = "disabled";
+ };
+
+- ipmmu_gp: mmu@e62a0000 {
++ ipmmu_gp: iommu@e62a0000 {
+ compatible = "renesas,ipmmu-r8a7743",
+ "renesas,ipmmu-vmsa";
+ reg = <0 0xe62a0000 0 0x1000>;
+diff --git a/arch/arm/boot/dts/r8a7744.dtsi b/arch/arm/boot/dts/r8a7744.dtsi
+index def840b8b2d3..5050ac19041d 100644
+--- a/arch/arm/boot/dts/r8a7744.dtsi
++++ b/arch/arm/boot/dts/r8a7744.dtsi
+@@ -338,7 +338,7 @@
+ #thermal-sensor-cells = <0>;
+ };
+
+- ipmmu_sy0: mmu@e6280000 {
++ ipmmu_sy0: iommu@e6280000 {
+ compatible = "renesas,ipmmu-r8a7744",
+ "renesas,ipmmu-vmsa";
+ reg = <0 0xe6280000 0 0x1000>;
+@@ -348,7 +348,7 @@
+ status = "disabled";
+ };
+
+- ipmmu_sy1: mmu@e6290000 {
++ ipmmu_sy1: iommu@e6290000 {
+ compatible = "renesas,ipmmu-r8a7744",
+ "renesas,ipmmu-vmsa";
+ reg = <0 0xe6290000 0 0x1000>;
+@@ -357,7 +357,7 @@
+ status = "disabled";
+ };
+
+- ipmmu_ds: mmu@e6740000 {
++ ipmmu_ds: iommu@e6740000 {
+ compatible = "renesas,ipmmu-r8a7744",
+ "renesas,ipmmu-vmsa";
+ reg = <0 0xe6740000 0 0x1000>;
+@@ -367,7 +367,7 @@
+ status = "disabled";
+ };
+
+- ipmmu_mp: mmu@ec680000 {
++ ipmmu_mp: iommu@ec680000 {
+ compatible = "renesas,ipmmu-r8a7744",
+ "renesas,ipmmu-vmsa";
+ reg = <0 0xec680000 0 0x1000>;
+@@ -376,7 +376,7 @@
+ status = "disabled";
+ };
+
+- ipmmu_mx: mmu@fe951000 {
++ ipmmu_mx: iommu@fe951000 {
+ compatible = "renesas,ipmmu-r8a7744",
+ "renesas,ipmmu-vmsa";
+ reg = <0 0xfe951000 0 0x1000>;
+@@ -386,7 +386,7 @@
+ status = "disabled";
+ };
+
+- ipmmu_gp: mmu@e62a0000 {
++ ipmmu_gp: iommu@e62a0000 {
+ compatible = "renesas,ipmmu-r8a7744",
+ "renesas,ipmmu-vmsa";
+ reg = <0 0xe62a0000 0 0x1000>;
+diff --git a/arch/arm/boot/dts/r8a7745.dtsi b/arch/arm/boot/dts/r8a7745.dtsi
+index 7ab58d8bb740..b0d1fc24e97e 100644
+--- a/arch/arm/boot/dts/r8a7745.dtsi
++++ b/arch/arm/boot/dts/r8a7745.dtsi
+@@ -302,7 +302,7 @@
+ resets = <&cpg 407>;
+ };
+
+- ipmmu_sy0: mmu@e6280000 {
++ ipmmu_sy0: iommu@e6280000 {
+ compatible = "renesas,ipmmu-r8a7745",
+ "renesas,ipmmu-vmsa";
+ reg = <0 0xe6280000 0 0x1000>;
+@@ -312,7 +312,7 @@
+ status = "disabled";
+ };
+
+- ipmmu_sy1: mmu@e6290000 {
++ ipmmu_sy1: iommu@e6290000 {
+ compatible = "renesas,ipmmu-r8a7745",
+ "renesas,ipmmu-vmsa";
+ reg = <0 0xe6290000 0 0x1000>;
+@@ -321,7 +321,7 @@
+ status = "disabled";
+ };
+
+- ipmmu_ds: mmu@e6740000 {
++ ipmmu_ds: iommu@e6740000 {
+ compatible = "renesas,ipmmu-r8a7745",
+ "renesas,ipmmu-vmsa";
+ reg = <0 0xe6740000 0 0x1000>;
+@@ -331,7 +331,7 @@
+ status = "disabled";
+ };
+
+- ipmmu_mp: mmu@ec680000 {
++ ipmmu_mp: iommu@ec680000 {
+ compatible = "renesas,ipmmu-r8a7745",
+ "renesas,ipmmu-vmsa";
+ reg = <0 0xec680000 0 0x1000>;
+@@ -340,7 +340,7 @@
+ status = "disabled";
+ };
+
+- ipmmu_mx: mmu@fe951000 {
++ ipmmu_mx: iommu@fe951000 {
+ compatible = "renesas,ipmmu-r8a7745",
+ "renesas,ipmmu-vmsa";
+ reg = <0 0xfe951000 0 0x1000>;
+@@ -350,7 +350,7 @@
+ status = "disabled";
+ };
+
+- ipmmu_gp: mmu@e62a0000 {
++ ipmmu_gp: iommu@e62a0000 {
+ compatible = "renesas,ipmmu-r8a7745",
+ "renesas,ipmmu-vmsa";
+ reg = <0 0xe62a0000 0 0x1000>;
+diff --git a/arch/arm/boot/dts/r8a7790.dtsi b/arch/arm/boot/dts/r8a7790.dtsi
+index e5ef9fd4284a..166d5566229d 100644
+--- a/arch/arm/boot/dts/r8a7790.dtsi
++++ b/arch/arm/boot/dts/r8a7790.dtsi
+@@ -427,7 +427,7 @@
+ #thermal-sensor-cells = <0>;
+ };
+
+- ipmmu_sy0: mmu@e6280000 {
++ ipmmu_sy0: iommu@e6280000 {
+ compatible = "renesas,ipmmu-r8a7790",
+ "renesas,ipmmu-vmsa";
+ reg = <0 0xe6280000 0 0x1000>;
+@@ -437,7 +437,7 @@
+ status = "disabled";
+ };
+
+- ipmmu_sy1: mmu@e6290000 {
++ ipmmu_sy1: iommu@e6290000 {
+ compatible = "renesas,ipmmu-r8a7790",
+ "renesas,ipmmu-vmsa";
+ reg = <0 0xe6290000 0 0x1000>;
+@@ -446,7 +446,7 @@
+ status = "disabled";
+ };
+
+- ipmmu_ds: mmu@e6740000 {
++ ipmmu_ds: iommu@e6740000 {
+ compatible = "renesas,ipmmu-r8a7790",
+ "renesas,ipmmu-vmsa";
+ reg = <0 0xe6740000 0 0x1000>;
+@@ -456,7 +456,7 @@
+ status = "disabled";
+ };
+
+- ipmmu_mp: mmu@ec680000 {
++ ipmmu_mp: iommu@ec680000 {
+ compatible = "renesas,ipmmu-r8a7790",
+ "renesas,ipmmu-vmsa";
+ reg = <0 0xec680000 0 0x1000>;
+@@ -465,7 +465,7 @@
+ status = "disabled";
+ };
+
+- ipmmu_mx: mmu@fe951000 {
++ ipmmu_mx: iommu@fe951000 {
+ compatible = "renesas,ipmmu-r8a7790",
+ "renesas,ipmmu-vmsa";
+ reg = <0 0xfe951000 0 0x1000>;
+@@ -475,7 +475,7 @@
+ status = "disabled";
+ };
+
+- ipmmu_rt: mmu@ffc80000 {
++ ipmmu_rt: iommu@ffc80000 {
+ compatible = "renesas,ipmmu-r8a7790",
+ "renesas,ipmmu-vmsa";
+ reg = <0 0xffc80000 0 0x1000>;
+diff --git a/arch/arm/boot/dts/r8a7791.dtsi b/arch/arm/boot/dts/r8a7791.dtsi
+index 6e5bd86731cd..09e47cc17765 100644
+--- a/arch/arm/boot/dts/r8a7791.dtsi
++++ b/arch/arm/boot/dts/r8a7791.dtsi
+@@ -350,7 +350,7 @@
+ #thermal-sensor-cells = <0>;
+ };
+
+- ipmmu_sy0: mmu@e6280000 {
++ ipmmu_sy0: iommu@e6280000 {
+ compatible = "renesas,ipmmu-r8a7791",
+ "renesas,ipmmu-vmsa";
+ reg = <0 0xe6280000 0 0x1000>;
+@@ -360,7 +360,7 @@
+ status = "disabled";
+ };
+
+- ipmmu_sy1: mmu@e6290000 {
++ ipmmu_sy1: iommu@e6290000 {
+ compatible = "renesas,ipmmu-r8a7791",
+ "renesas,ipmmu-vmsa";
+ reg = <0 0xe6290000 0 0x1000>;
+@@ -369,7 +369,7 @@
+ status = "disabled";
+ };
+
+- ipmmu_ds: mmu@e6740000 {
++ ipmmu_ds: iommu@e6740000 {
+ compatible = "renesas,ipmmu-r8a7791",
+ "renesas,ipmmu-vmsa";
+ reg = <0 0xe6740000 0 0x1000>;
+@@ -379,7 +379,7 @@
+ status = "disabled";
+ };
+
+- ipmmu_mp: mmu@ec680000 {
++ ipmmu_mp: iommu@ec680000 {
+ compatible = "renesas,ipmmu-r8a7791",
+ "renesas,ipmmu-vmsa";
+ reg = <0 0xec680000 0 0x1000>;
+@@ -388,7 +388,7 @@
+ status = "disabled";
+ };
+
+- ipmmu_mx: mmu@fe951000 {
++ ipmmu_mx: iommu@fe951000 {
+ compatible = "renesas,ipmmu-r8a7791",
+ "renesas,ipmmu-vmsa";
+ reg = <0 0xfe951000 0 0x1000>;
+@@ -398,7 +398,7 @@
+ status = "disabled";
+ };
+
+- ipmmu_rt: mmu@ffc80000 {
++ ipmmu_rt: iommu@ffc80000 {
+ compatible = "renesas,ipmmu-r8a7791",
+ "renesas,ipmmu-vmsa";
+ reg = <0 0xffc80000 0 0x1000>;
+@@ -407,7 +407,7 @@
+ status = "disabled";
+ };
+
+- ipmmu_gp: mmu@e62a0000 {
++ ipmmu_gp: iommu@e62a0000 {
+ compatible = "renesas,ipmmu-r8a7791",
+ "renesas,ipmmu-vmsa";
+ reg = <0 0xe62a0000 0 0x1000>;
+diff --git a/arch/arm/boot/dts/r8a7793.dtsi b/arch/arm/boot/dts/r8a7793.dtsi
+index dadbda16161b..1b62a7e06b42 100644
+--- a/arch/arm/boot/dts/r8a7793.dtsi
++++ b/arch/arm/boot/dts/r8a7793.dtsi
+@@ -336,7 +336,7 @@
+ #thermal-sensor-cells = <0>;
+ };
+
+- ipmmu_sy0: mmu@e6280000 {
++ ipmmu_sy0: iommu@e6280000 {
+ compatible = "renesas,ipmmu-r8a7793",
+ "renesas,ipmmu-vmsa";
+ reg = <0 0xe6280000 0 0x1000>;
+@@ -346,7 +346,7 @@
+ status = "disabled";
+ };
+
+- ipmmu_sy1: mmu@e6290000 {
++ ipmmu_sy1: iommu@e6290000 {
+ compatible = "renesas,ipmmu-r8a7793",
+ "renesas,ipmmu-vmsa";
+ reg = <0 0xe6290000 0 0x1000>;
+@@ -355,7 +355,7 @@
+ status = "disabled";
+ };
+
+- ipmmu_ds: mmu@e6740000 {
++ ipmmu_ds: iommu@e6740000 {
+ compatible = "renesas,ipmmu-r8a7793",
+ "renesas,ipmmu-vmsa";
+ reg = <0 0xe6740000 0 0x1000>;
+@@ -365,7 +365,7 @@
+ status = "disabled";
+ };
+
+- ipmmu_mp: mmu@ec680000 {
++ ipmmu_mp: iommu@ec680000 {
+ compatible = "renesas,ipmmu-r8a7793",
+ "renesas,ipmmu-vmsa";
+ reg = <0 0xec680000 0 0x1000>;
+@@ -374,7 +374,7 @@
+ status = "disabled";
+ };
+
+- ipmmu_mx: mmu@fe951000 {
++ ipmmu_mx: iommu@fe951000 {
+ compatible = "renesas,ipmmu-r8a7793",
+ "renesas,ipmmu-vmsa";
+ reg = <0 0xfe951000 0 0x1000>;
+@@ -384,7 +384,7 @@
+ status = "disabled";
+ };
+
+- ipmmu_rt: mmu@ffc80000 {
++ ipmmu_rt: iommu@ffc80000 {
+ compatible = "renesas,ipmmu-r8a7793",
+ "renesas,ipmmu-vmsa";
+ reg = <0 0xffc80000 0 0x1000>;
+@@ -393,7 +393,7 @@
+ status = "disabled";
+ };
+
+- ipmmu_gp: mmu@e62a0000 {
++ ipmmu_gp: iommu@e62a0000 {
+ compatible = "renesas,ipmmu-r8a7793",
+ "renesas,ipmmu-vmsa";
+ reg = <0 0xe62a0000 0 0x1000>;
+diff --git a/arch/arm/boot/dts/r8a7794.dtsi b/arch/arm/boot/dts/r8a7794.dtsi
+index 2c9e7a1ebfec..8d7f8798628a 100644
+--- a/arch/arm/boot/dts/r8a7794.dtsi
++++ b/arch/arm/boot/dts/r8a7794.dtsi
+@@ -290,7 +290,7 @@
+ resets = <&cpg 407>;
+ };
+
+- ipmmu_sy0: mmu@e6280000 {
++ ipmmu_sy0: iommu@e6280000 {
+ compatible = "renesas,ipmmu-r8a7794",
+ "renesas,ipmmu-vmsa";
+ reg = <0 0xe6280000 0 0x1000>;
+@@ -300,7 +300,7 @@
+ status = "disabled";
+ };
+
+- ipmmu_sy1: mmu@e6290000 {
++ ipmmu_sy1: iommu@e6290000 {
+ compatible = "renesas,ipmmu-r8a7794",
+ "renesas,ipmmu-vmsa";
+ reg = <0 0xe6290000 0 0x1000>;
+@@ -309,7 +309,7 @@
+ status = "disabled";
+ };
+
+- ipmmu_ds: mmu@e6740000 {
++ ipmmu_ds: iommu@e6740000 {
+ compatible = "renesas,ipmmu-r8a7794",
+ "renesas,ipmmu-vmsa";
+ reg = <0 0xe6740000 0 0x1000>;
+@@ -319,7 +319,7 @@
+ status = "disabled";
+ };
+
+- ipmmu_mp: mmu@ec680000 {
++ ipmmu_mp: iommu@ec680000 {
+ compatible = "renesas,ipmmu-r8a7794",
+ "renesas,ipmmu-vmsa";
+ reg = <0 0xec680000 0 0x1000>;
+@@ -328,7 +328,7 @@
+ status = "disabled";
+ };
+
+- ipmmu_mx: mmu@fe951000 {
++ ipmmu_mx: iommu@fe951000 {
+ compatible = "renesas,ipmmu-r8a7794",
+ "renesas,ipmmu-vmsa";
+ reg = <0 0xfe951000 0 0x1000>;
+@@ -338,7 +338,7 @@
+ status = "disabled";
+ };
+
+- ipmmu_gp: mmu@e62a0000 {
++ ipmmu_gp: iommu@e62a0000 {
+ compatible = "renesas,ipmmu-r8a7794",
+ "renesas,ipmmu-vmsa";
+ reg = <0 0xe62a0000 0 0x1000>;
+diff --git a/arch/arm/boot/dts/stm32mp157a-avenger96.dts b/arch/arm/boot/dts/stm32mp157a-avenger96.dts
+index 425175f7d83c..081037b510bc 100644
+--- a/arch/arm/boot/dts/stm32mp157a-avenger96.dts
++++ b/arch/arm/boot/dts/stm32mp157a-avenger96.dts
+@@ -92,6 +92,9 @@
+ #address-cells = <1>;
+ #size-cells = <0>;
+ compatible = "snps,dwmac-mdio";
++ reset-gpios = <&gpioz 2 GPIO_ACTIVE_LOW>;
++ reset-delay-us = <1000>;
++
+ phy0: ethernet-phy@7 {
+ reg = <7>;
+ };
+diff --git a/arch/arm/boot/dts/sun8i-h2-plus-bananapi-m2-zero.dts b/arch/arm/boot/dts/sun8i-h2-plus-bananapi-m2-zero.dts
+index d277d043031b..4c6704e4c57e 100644
+--- a/arch/arm/boot/dts/sun8i-h2-plus-bananapi-m2-zero.dts
++++ b/arch/arm/boot/dts/sun8i-h2-plus-bananapi-m2-zero.dts
+@@ -31,7 +31,7 @@
+
+ pwr_led {
+ label = "bananapi-m2-zero:red:pwr";
+- gpios = <&r_pio 0 10 GPIO_ACTIVE_HIGH>; /* PL10 */
++ gpios = <&r_pio 0 10 GPIO_ACTIVE_LOW>; /* PL10 */
+ default-state = "on";
+ };
+ };
+diff --git a/arch/arm/boot/dts/vexpress-v2m-rs1.dtsi b/arch/arm/boot/dts/vexpress-v2m-rs1.dtsi
+index 5c183483ec3b..8010cdcdb37a 100644
+--- a/arch/arm/boot/dts/vexpress-v2m-rs1.dtsi
++++ b/arch/arm/boot/dts/vexpress-v2m-rs1.dtsi
+@@ -31,7 +31,7 @@
+ #interrupt-cells = <1>;
+ ranges;
+
+- nor_flash: flash@0,00000000 {
++ nor_flash: flash@0 {
+ compatible = "arm,vexpress-flash", "cfi-flash";
+ reg = <0 0x00000000 0x04000000>,
+ <4 0x00000000 0x04000000>;
+@@ -41,13 +41,13 @@
+ };
+ };
+
+- psram@1,00000000 {
++ psram@100000000 {
+ compatible = "arm,vexpress-psram", "mtd-ram";
+ reg = <1 0x00000000 0x02000000>;
+ bank-width = <4>;
+ };
+
+- ethernet@2,02000000 {
++ ethernet@202000000 {
+ compatible = "smsc,lan9118", "smsc,lan9115";
+ reg = <2 0x02000000 0x10000>;
+ interrupts = <15>;
+@@ -59,14 +59,14 @@
+ vddvario-supply = <&v2m_fixed_3v3>;
+ };
+
+- usb@2,03000000 {
++ usb@203000000 {
+ compatible = "nxp,usb-isp1761";
+ reg = <2 0x03000000 0x20000>;
+ interrupts = <16>;
+ port1-otg;
+ };
+
+- iofpga@3,00000000 {
++ iofpga@300000000 {
+ compatible = "simple-bus";
+ #address-cells = <1>;
+ #size-cells = <1>;
+diff --git a/arch/arm/mach-davinci/board-dm644x-evm.c b/arch/arm/mach-davinci/board-dm644x-evm.c
+index 3461d12bbfc0..a5d3708fedf6 100644
+--- a/arch/arm/mach-davinci/board-dm644x-evm.c
++++ b/arch/arm/mach-davinci/board-dm644x-evm.c
+@@ -655,19 +655,6 @@ static struct i2c_board_info __initdata i2c_info[] = {
+ },
+ };
+
+-/* Fixed regulator support */
+-static struct regulator_consumer_supply fixed_supplies_3_3v[] = {
+- /* Baseboard 3.3V: 5V -> TPS54310PWP -> 3.3V */
+- REGULATOR_SUPPLY("AVDD", "1-001b"),
+- REGULATOR_SUPPLY("DRVDD", "1-001b"),
+-};
+-
+-static struct regulator_consumer_supply fixed_supplies_1_8v[] = {
+- /* Baseboard 1.8V: 5V -> TPS54310PWP -> 1.8V */
+- REGULATOR_SUPPLY("IOVDD", "1-001b"),
+- REGULATOR_SUPPLY("DVDD", "1-001b"),
+-};
+-
+ #define DM644X_I2C_SDA_PIN GPIO_TO_PIN(2, 12)
+ #define DM644X_I2C_SCL_PIN GPIO_TO_PIN(2, 11)
+
+@@ -700,6 +687,19 @@ static void __init evm_init_i2c(void)
+ }
+ #endif
+
++/* Fixed regulator support */
++static struct regulator_consumer_supply fixed_supplies_3_3v[] = {
++ /* Baseboard 3.3V: 5V -> TPS54310PWP -> 3.3V */
++ REGULATOR_SUPPLY("AVDD", "1-001b"),
++ REGULATOR_SUPPLY("DRVDD", "1-001b"),
++};
++
++static struct regulator_consumer_supply fixed_supplies_1_8v[] = {
++ /* Baseboard 1.8V: 5V -> TPS54310PWP -> 1.8V */
++ REGULATOR_SUPPLY("IOVDD", "1-001b"),
++ REGULATOR_SUPPLY("DVDD", "1-001b"),
++};
++
+ #define VENC_STD_ALL (V4L2_STD_NTSC | V4L2_STD_PAL)
+
+ /* venc standard timings */
+diff --git a/arch/arm/mach-integrator/Kconfig b/arch/arm/mach-integrator/Kconfig
+index 982eabc36163..2406cab73835 100644
+--- a/arch/arm/mach-integrator/Kconfig
++++ b/arch/arm/mach-integrator/Kconfig
+@@ -4,6 +4,8 @@ menuconfig ARCH_INTEGRATOR
+ depends on ARCH_MULTI_V4T || ARCH_MULTI_V5 || ARCH_MULTI_V6
+ select ARM_AMBA
+ select COMMON_CLK_VERSATILE
++ select CMA
++ select DMA_CMA
+ select HAVE_TCM
+ select ICST
+ select MFD_SYSCON
+@@ -35,14 +37,13 @@ config INTEGRATOR_IMPD1
+ select ARM_VIC
+ select GPIO_PL061
+ select GPIOLIB
++ select REGULATOR
++ select REGULATOR_FIXED_VOLTAGE
+ help
+ The IM-PD1 is an add-on logic module for the Integrator which
+ allows ARM(R) Ltd PrimeCells to be developed and evaluated.
+ The IM-PD1 can be found on the Integrator/PP2 platform.
+
+- To compile this driver as a module, choose M here: the
+- module will be called impd1.
+-
+ config INTEGRATOR_CM7TDMI
+ bool "Integrator/CM7TDMI core module"
+ depends on ARCH_INTEGRATOR_AP
+diff --git a/arch/arm64/Kconfig.platforms b/arch/arm64/Kconfig.platforms
+index 55d70cfe0f9e..3c7e310fd8bf 100644
+--- a/arch/arm64/Kconfig.platforms
++++ b/arch/arm64/Kconfig.platforms
+@@ -248,7 +248,7 @@ config ARCH_TEGRA
+ This enables support for the NVIDIA Tegra SoC family.
+
+ config ARCH_SPRD
+- tristate "Spreadtrum SoC platform"
++ bool "Spreadtrum SoC platform"
+ help
+ Support for Spreadtrum ARM based SoCs
+
+diff --git a/arch/arm64/boot/dts/amlogic/meson-axg.dtsi b/arch/arm64/boot/dts/amlogic/meson-axg.dtsi
+index aace3d32a3df..8e6281c685fa 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-axg.dtsi
++++ b/arch/arm64/boot/dts/amlogic/meson-axg.dtsi
+@@ -1735,18 +1735,18 @@
+ };
+
+ sram: sram@fffc0000 {
+- compatible = "amlogic,meson-axg-sram", "mmio-sram";
++ compatible = "mmio-sram";
+ reg = <0x0 0xfffc0000 0x0 0x20000>;
+ #address-cells = <1>;
+ #size-cells = <1>;
+ ranges = <0 0x0 0xfffc0000 0x20000>;
+
+- cpu_scp_lpri: scp-shmem@13000 {
++ cpu_scp_lpri: scp-sram@13000 {
+ compatible = "amlogic,meson-axg-scp-shmem";
+ reg = <0x13000 0x400>;
+ };
+
+- cpu_scp_hpri: scp-shmem@13400 {
++ cpu_scp_hpri: scp-sram@13400 {
+ compatible = "amlogic,meson-axg-scp-shmem";
+ reg = <0x13400 0x400>;
+ };
+diff --git a/arch/arm64/boot/dts/amlogic/meson-g12b-ugoos-am6.dts b/arch/arm64/boot/dts/amlogic/meson-g12b-ugoos-am6.dts
+index 06c5430eb92d..fdaacfd96b97 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-g12b-ugoos-am6.dts
++++ b/arch/arm64/boot/dts/amlogic/meson-g12b-ugoos-am6.dts
+@@ -14,7 +14,7 @@
+ #include <dt-bindings/sound/meson-g12a-tohdmitx.h>
+
+ / {
+- compatible = "ugoos,am6", "amlogic,g12b";
++ compatible = "ugoos,am6", "amlogic,s922x", "amlogic,g12b";
+ model = "Ugoos AM6";
+
+ aliases {
+diff --git a/arch/arm64/boot/dts/amlogic/meson-gx-libretech-pc.dtsi b/arch/arm64/boot/dts/amlogic/meson-gx-libretech-pc.dtsi
+index 248b018c83d5..b1da36fdeac6 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-gx-libretech-pc.dtsi
++++ b/arch/arm64/boot/dts/amlogic/meson-gx-libretech-pc.dtsi
+@@ -96,14 +96,14 @@
+ leds {
+ compatible = "gpio-leds";
+
+- green {
++ led-green {
+ color = <LED_COLOR_ID_GREEN>;
+ function = LED_FUNCTION_DISK_ACTIVITY;
+ gpios = <&gpio_ao GPIOAO_9 GPIO_ACTIVE_HIGH>;
+ linux,default-trigger = "disk-activity";
+ };
+
+- blue {
++ led-blue {
+ color = <LED_COLOR_ID_BLUE>;
+ function = LED_FUNCTION_STATUS;
+ gpios = <&gpio GPIODV_28 GPIO_ACTIVE_HIGH>;
+diff --git a/arch/arm64/boot/dts/amlogic/meson-gx.dtsi b/arch/arm64/boot/dts/amlogic/meson-gx.dtsi
+index 03f79fe045b7..e2bb68ec8502 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-gx.dtsi
++++ b/arch/arm64/boot/dts/amlogic/meson-gx.dtsi
+@@ -398,20 +398,20 @@
+ };
+
+ sram: sram@c8000000 {
+- compatible = "amlogic,meson-gx-sram", "amlogic,meson-gxbb-sram", "mmio-sram";
++ compatible = "mmio-sram";
+ reg = <0x0 0xc8000000 0x0 0x14000>;
+
+ #address-cells = <1>;
+ #size-cells = <1>;
+ ranges = <0 0x0 0xc8000000 0x14000>;
+
+- cpu_scp_lpri: scp-shmem@0 {
+- compatible = "amlogic,meson-gx-scp-shmem", "amlogic,meson-gxbb-scp-shmem";
++ cpu_scp_lpri: scp-sram@0 {
++ compatible = "amlogic,meson-gxbb-scp-shmem";
+ reg = <0x13000 0x400>;
+ };
+
+- cpu_scp_hpri: scp-shmem@200 {
+- compatible = "amlogic,meson-gx-scp-shmem", "amlogic,meson-gxbb-scp-shmem";
++ cpu_scp_hpri: scp-sram@200 {
++ compatible = "amlogic,meson-gxbb-scp-shmem";
+ reg = <0x13400 0x400>;
+ };
+ };
+diff --git a/arch/arm64/boot/dts/amlogic/meson-gxbb-kii-pro.dts b/arch/arm64/boot/dts/amlogic/meson-gxbb-kii-pro.dts
+index 6c9cc45fb417..e8394a8269ee 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-gxbb-kii-pro.dts
++++ b/arch/arm64/boot/dts/amlogic/meson-gxbb-kii-pro.dts
+@@ -11,7 +11,7 @@
+ #include <dt-bindings/input/input.h>
+ #include <dt-bindings/leds/common.h>
+ / {
+- compatible = "videostrong,kii-pro", "amlogic,p201", "amlogic,s905", "amlogic,meson-gxbb";
++ compatible = "videostrong,kii-pro", "amlogic,meson-gxbb";
+ model = "Videostrong KII Pro";
+
+ leds {
+diff --git a/arch/arm64/boot/dts/amlogic/meson-gxbb-nanopi-k2.dts b/arch/arm64/boot/dts/amlogic/meson-gxbb-nanopi-k2.dts
+index d6ca684e0e61..7be3e354093b 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-gxbb-nanopi-k2.dts
++++ b/arch/arm64/boot/dts/amlogic/meson-gxbb-nanopi-k2.dts
+@@ -29,7 +29,7 @@
+ leds {
+ compatible = "gpio-leds";
+
+- stat {
++ led-stat {
+ label = "nanopi-k2:blue:stat";
+ gpios = <&gpio_ao GPIOAO_13 GPIO_ACTIVE_HIGH>;
+ default-state = "on";
+diff --git a/arch/arm64/boot/dts/amlogic/meson-gxbb-nexbox-a95x.dts b/arch/arm64/boot/dts/amlogic/meson-gxbb-nexbox-a95x.dts
+index 65ec7dea828c..67d901ed2fa3 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-gxbb-nexbox-a95x.dts
++++ b/arch/arm64/boot/dts/amlogic/meson-gxbb-nexbox-a95x.dts
+@@ -31,7 +31,7 @@
+
+ leds {
+ compatible = "gpio-leds";
+- blue {
++ led-blue {
+ label = "a95x:system-status";
+ gpios = <&gpio_ao GPIOAO_13 GPIO_ACTIVE_LOW>;
+ linux,default-trigger = "heartbeat";
+diff --git a/arch/arm64/boot/dts/amlogic/meson-gxbb-odroidc2.dts b/arch/arm64/boot/dts/amlogic/meson-gxbb-odroidc2.dts
+index b46ef985bb44..70fcfb7b0683 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-gxbb-odroidc2.dts
++++ b/arch/arm64/boot/dts/amlogic/meson-gxbb-odroidc2.dts
+@@ -49,7 +49,7 @@
+
+ leds {
+ compatible = "gpio-leds";
+- blue {
++ led-blue {
+ label = "c2:blue:alive";
+ gpios = <&gpio_ao GPIOAO_13 GPIO_ACTIVE_LOW>;
+ linux,default-trigger = "heartbeat";
+diff --git a/arch/arm64/boot/dts/amlogic/meson-gxbb-vega-s95.dtsi b/arch/arm64/boot/dts/amlogic/meson-gxbb-vega-s95.dtsi
+index 45cb83625951..222ee8069cfa 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-gxbb-vega-s95.dtsi
++++ b/arch/arm64/boot/dts/amlogic/meson-gxbb-vega-s95.dtsi
+@@ -20,7 +20,7 @@
+ leds {
+ compatible = "gpio-leds";
+
+- blue {
++ led-blue {
+ label = "vega-s95:blue:on";
+ gpios = <&gpio_ao GPIOAO_13 GPIO_ACTIVE_HIGH>;
+ default-state = "on";
+diff --git a/arch/arm64/boot/dts/amlogic/meson-gxbb-wetek-play2.dts b/arch/arm64/boot/dts/amlogic/meson-gxbb-wetek-play2.dts
+index 1d32d1f6d032..2ab8a3d10079 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-gxbb-wetek-play2.dts
++++ b/arch/arm64/boot/dts/amlogic/meson-gxbb-wetek-play2.dts
+@@ -14,13 +14,13 @@
+ model = "WeTek Play 2";
+
+ leds {
+- wifi {
++ led-wifi {
+ label = "wetek-play:wifi-status";
+ gpios = <&gpio GPIODV_26 GPIO_ACTIVE_HIGH>;
+ default-state = "off";
+ };
+
+- ethernet {
++ led-ethernet {
+ label = "wetek-play:ethernet-status";
+ gpios = <&gpio GPIODV_27 GPIO_ACTIVE_HIGH>;
+ default-state = "off";
+diff --git a/arch/arm64/boot/dts/amlogic/meson-gxbb-wetek.dtsi b/arch/arm64/boot/dts/amlogic/meson-gxbb-wetek.dtsi
+index dee51cf95223..d6133af09d64 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-gxbb-wetek.dtsi
++++ b/arch/arm64/boot/dts/amlogic/meson-gxbb-wetek.dtsi
+@@ -25,7 +25,7 @@
+ leds {
+ compatible = "gpio-leds";
+
+- system {
++ led-system {
+ label = "wetek-play:system-status";
+ gpios = <&gpio_ao GPIOAO_13 GPIO_ACTIVE_HIGH>;
+ default-state = "on";
+diff --git a/arch/arm64/boot/dts/amlogic/meson-gxl-s905x-libretech-cc.dts b/arch/arm64/boot/dts/amlogic/meson-gxl-s905x-libretech-cc.dts
+index e8348b2728db..a4a71c13891b 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-gxl-s905x-libretech-cc.dts
++++ b/arch/arm64/boot/dts/amlogic/meson-gxl-s905x-libretech-cc.dts
+@@ -54,14 +54,14 @@
+ leds {
+ compatible = "gpio-leds";
+
+- system {
++ led-system {
+ label = "librecomputer:system-status";
+ gpios = <&gpio GPIODV_24 GPIO_ACTIVE_HIGH>;
+ default-state = "on";
+ panic-indicator;
+ };
+
+- blue {
++ led-blue {
+ label = "librecomputer:blue";
+ gpios = <&gpio_ao GPIOAO_2 GPIO_ACTIVE_HIGH>;
+ linux,default-trigger = "heartbeat";
+diff --git a/arch/arm64/boot/dts/amlogic/meson-gxm-rbox-pro.dts b/arch/arm64/boot/dts/amlogic/meson-gxm-rbox-pro.dts
+index 420a88e9a195..c89c9f846fb1 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-gxm-rbox-pro.dts
++++ b/arch/arm64/boot/dts/amlogic/meson-gxm-rbox-pro.dts
+@@ -36,13 +36,13 @@
+ leds {
+ compatible = "gpio-leds";
+
+- blue {
++ led-blue {
+ label = "rbox-pro:blue:on";
+ gpios = <&gpio_ao GPIOAO_9 GPIO_ACTIVE_HIGH>;
+ default-state = "on";
+ };
+
+- red {
++ led-red {
+ label = "rbox-pro:red:standby";
+ gpios = <&gpio GPIODV_28 GPIO_ACTIVE_HIGH>;
+ default-state = "off";
+diff --git a/arch/arm64/boot/dts/amlogic/meson-khadas-vim3.dtsi b/arch/arm64/boot/dts/amlogic/meson-khadas-vim3.dtsi
+index 094ecf2222bb..1ef1e3672b96 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-khadas-vim3.dtsi
++++ b/arch/arm64/boot/dts/amlogic/meson-khadas-vim3.dtsi
+@@ -39,13 +39,13 @@
+ leds {
+ compatible = "gpio-leds";
+
+- white {
++ led-white {
+ label = "vim3:white:sys";
+ gpios = <&gpio_ao GPIOAO_4 GPIO_ACTIVE_LOW>;
+ linux,default-trigger = "heartbeat";
+ };
+
+- red {
++ led-red {
+ label = "vim3:red";
+ gpios = <&gpio_expander 5 GPIO_ACTIVE_LOW>;
+ };
+diff --git a/arch/arm64/boot/dts/amlogic/meson-sm1-sei610.dts b/arch/arm64/boot/dts/amlogic/meson-sm1-sei610.dts
+index dfb2438851c0..5ab139a34c01 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-sm1-sei610.dts
++++ b/arch/arm64/boot/dts/amlogic/meson-sm1-sei610.dts
+@@ -104,7 +104,7 @@
+ leds {
+ compatible = "gpio-leds";
+
+- bluetooth {
++ led-bluetooth {
+ label = "sei610:blue:bt";
+ gpios = <&gpio GPIOC_7 (GPIO_ACTIVE_LOW | GPIO_OPEN_DRAIN)>;
+ default-state = "off";
+diff --git a/arch/arm64/boot/dts/arm/foundation-v8-gicv2.dtsi b/arch/arm64/boot/dts/arm/foundation-v8-gicv2.dtsi
+index 15fe81738e94..dfb23dfc0b0f 100644
+--- a/arch/arm64/boot/dts/arm/foundation-v8-gicv2.dtsi
++++ b/arch/arm64/boot/dts/arm/foundation-v8-gicv2.dtsi
+@@ -8,7 +8,7 @@
+ gic: interrupt-controller@2c001000 {
+ compatible = "arm,cortex-a15-gic", "arm,cortex-a9-gic";
+ #interrupt-cells = <3>;
+- #address-cells = <2>;
++ #address-cells = <1>;
+ interrupt-controller;
+ reg = <0x0 0x2c001000 0 0x1000>,
+ <0x0 0x2c002000 0 0x2000>,
+diff --git a/arch/arm64/boot/dts/arm/foundation-v8-gicv3.dtsi b/arch/arm64/boot/dts/arm/foundation-v8-gicv3.dtsi
+index f2c75c756039..906f51935b36 100644
+--- a/arch/arm64/boot/dts/arm/foundation-v8-gicv3.dtsi
++++ b/arch/arm64/boot/dts/arm/foundation-v8-gicv3.dtsi
+@@ -8,9 +8,9 @@
+ gic: interrupt-controller@2f000000 {
+ compatible = "arm,gic-v3";
+ #interrupt-cells = <3>;
+- #address-cells = <2>;
+- #size-cells = <2>;
+- ranges;
++ #address-cells = <1>;
++ #size-cells = <1>;
++ ranges = <0x0 0x0 0x2f000000 0x100000>;
+ interrupt-controller;
+ reg = <0x0 0x2f000000 0x0 0x10000>,
+ <0x0 0x2f100000 0x0 0x200000>,
+@@ -22,7 +22,7 @@
+ its: its@2f020000 {
+ compatible = "arm,gic-v3-its";
+ msi-controller;
+- reg = <0x0 0x2f020000 0x0 0x20000>;
++ reg = <0x20000 0x20000>;
+ };
+ };
+ };
+diff --git a/arch/arm64/boot/dts/arm/foundation-v8.dtsi b/arch/arm64/boot/dts/arm/foundation-v8.dtsi
+index 12f039fa3dad..e2da63f78298 100644
+--- a/arch/arm64/boot/dts/arm/foundation-v8.dtsi
++++ b/arch/arm64/boot/dts/arm/foundation-v8.dtsi
+@@ -107,51 +107,51 @@
+
+ #interrupt-cells = <1>;
+ interrupt-map-mask = <0 0 63>;
+- interrupt-map = <0 0 0 &gic 0 0 GIC_SPI 0 IRQ_TYPE_LEVEL_HIGH>,
+- <0 0 1 &gic 0 0 GIC_SPI 1 IRQ_TYPE_LEVEL_HIGH>,
+- <0 0 2 &gic 0 0 GIC_SPI 2 IRQ_TYPE_LEVEL_HIGH>,
+- <0 0 3 &gic 0 0 GIC_SPI 3 IRQ_TYPE_LEVEL_HIGH>,
+- <0 0 4 &gic 0 0 GIC_SPI 4 IRQ_TYPE_LEVEL_HIGH>,
+- <0 0 5 &gic 0 0 GIC_SPI 5 IRQ_TYPE_LEVEL_HIGH>,
+- <0 0 6 &gic 0 0 GIC_SPI 6 IRQ_TYPE_LEVEL_HIGH>,
+- <0 0 7 &gic 0 0 GIC_SPI 7 IRQ_TYPE_LEVEL_HIGH>,
+- <0 0 8 &gic 0 0 GIC_SPI 8 IRQ_TYPE_LEVEL_HIGH>,
+- <0 0 9 &gic 0 0 GIC_SPI 9 IRQ_TYPE_LEVEL_HIGH>,
+- <0 0 10 &gic 0 0 GIC_SPI 10 IRQ_TYPE_LEVEL_HIGH>,
+- <0 0 11 &gic 0 0 GIC_SPI 11 IRQ_TYPE_LEVEL_HIGH>,
+- <0 0 12 &gic 0 0 GIC_SPI 12 IRQ_TYPE_LEVEL_HIGH>,
+- <0 0 13 &gic 0 0 GIC_SPI 13 IRQ_TYPE_LEVEL_HIGH>,
+- <0 0 14 &gic 0 0 GIC_SPI 14 IRQ_TYPE_LEVEL_HIGH>,
+- <0 0 15 &gic 0 0 GIC_SPI 15 IRQ_TYPE_LEVEL_HIGH>,
+- <0 0 16 &gic 0 0 GIC_SPI 16 IRQ_TYPE_LEVEL_HIGH>,
+- <0 0 17 &gic 0 0 GIC_SPI 17 IRQ_TYPE_LEVEL_HIGH>,
+- <0 0 18 &gic 0 0 GIC_SPI 18 IRQ_TYPE_LEVEL_HIGH>,
+- <0 0 19 &gic 0 0 GIC_SPI 19 IRQ_TYPE_LEVEL_HIGH>,
+- <0 0 20 &gic 0 0 GIC_SPI 20 IRQ_TYPE_LEVEL_HIGH>,
+- <0 0 21 &gic 0 0 GIC_SPI 21 IRQ_TYPE_LEVEL_HIGH>,
+- <0 0 22 &gic 0 0 GIC_SPI 22 IRQ_TYPE_LEVEL_HIGH>,
+- <0 0 23 &gic 0 0 GIC_SPI 23 IRQ_TYPE_LEVEL_HIGH>,
+- <0 0 24 &gic 0 0 GIC_SPI 24 IRQ_TYPE_LEVEL_HIGH>,
+- <0 0 25 &gic 0 0 GIC_SPI 25 IRQ_TYPE_LEVEL_HIGH>,
+- <0 0 26 &gic 0 0 GIC_SPI 26 IRQ_TYPE_LEVEL_HIGH>,
+- <0 0 27 &gic 0 0 GIC_SPI 27 IRQ_TYPE_LEVEL_HIGH>,
+- <0 0 28 &gic 0 0 GIC_SPI 28 IRQ_TYPE_LEVEL_HIGH>,
+- <0 0 29 &gic 0 0 GIC_SPI 29 IRQ_TYPE_LEVEL_HIGH>,
+- <0 0 30 &gic 0 0 GIC_SPI 30 IRQ_TYPE_LEVEL_HIGH>,
+- <0 0 31 &gic 0 0 GIC_SPI 31 IRQ_TYPE_LEVEL_HIGH>,
+- <0 0 32 &gic 0 0 GIC_SPI 32 IRQ_TYPE_LEVEL_HIGH>,
+- <0 0 33 &gic 0 0 GIC_SPI 33 IRQ_TYPE_LEVEL_HIGH>,
+- <0 0 34 &gic 0 0 GIC_SPI 34 IRQ_TYPE_LEVEL_HIGH>,
+- <0 0 35 &gic 0 0 GIC_SPI 35 IRQ_TYPE_LEVEL_HIGH>,
+- <0 0 36 &gic 0 0 GIC_SPI 36 IRQ_TYPE_LEVEL_HIGH>,
+- <0 0 37 &gic 0 0 GIC_SPI 37 IRQ_TYPE_LEVEL_HIGH>,
+- <0 0 38 &gic 0 0 GIC_SPI 38 IRQ_TYPE_LEVEL_HIGH>,
+- <0 0 39 &gic 0 0 GIC_SPI 39 IRQ_TYPE_LEVEL_HIGH>,
+- <0 0 40 &gic 0 0 GIC_SPI 40 IRQ_TYPE_LEVEL_HIGH>,
+- <0 0 41 &gic 0 0 GIC_SPI 41 IRQ_TYPE_LEVEL_HIGH>,
+- <0 0 42 &gic 0 0 GIC_SPI 42 IRQ_TYPE_LEVEL_HIGH>;
+-
+- ethernet@2,02000000 {
++ interrupt-map = <0 0 0 &gic 0 GIC_SPI 0 IRQ_TYPE_LEVEL_HIGH>,
++ <0 0 1 &gic 0 GIC_SPI 1 IRQ_TYPE_LEVEL_HIGH>,
++ <0 0 2 &gic 0 GIC_SPI 2 IRQ_TYPE_LEVEL_HIGH>,
++ <0 0 3 &gic 0 GIC_SPI 3 IRQ_TYPE_LEVEL_HIGH>,
++ <0 0 4 &gic 0 GIC_SPI 4 IRQ_TYPE_LEVEL_HIGH>,
++ <0 0 5 &gic 0 GIC_SPI 5 IRQ_TYPE_LEVEL_HIGH>,
++ <0 0 6 &gic 0 GIC_SPI 6 IRQ_TYPE_LEVEL_HIGH>,
++ <0 0 7 &gic 0 GIC_SPI 7 IRQ_TYPE_LEVEL_HIGH>,
++ <0 0 8 &gic 0 GIC_SPI 8 IRQ_TYPE_LEVEL_HIGH>,
++ <0 0 9 &gic 0 GIC_SPI 9 IRQ_TYPE_LEVEL_HIGH>,
++ <0 0 10 &gic 0 GIC_SPI 10 IRQ_TYPE_LEVEL_HIGH>,
++ <0 0 11 &gic 0 GIC_SPI 11 IRQ_TYPE_LEVEL_HIGH>,
++ <0 0 12 &gic 0 GIC_SPI 12 IRQ_TYPE_LEVEL_HIGH>,
++ <0 0 13 &gic 0 GIC_SPI 13 IRQ_TYPE_LEVEL_HIGH>,
++ <0 0 14 &gic 0 GIC_SPI 14 IRQ_TYPE_LEVEL_HIGH>,
++ <0 0 15 &gic 0 GIC_SPI 15 IRQ_TYPE_LEVEL_HIGH>,
++ <0 0 16 &gic 0 GIC_SPI 16 IRQ_TYPE_LEVEL_HIGH>,
++ <0 0 17 &gic 0 GIC_SPI 17 IRQ_TYPE_LEVEL_HIGH>,
++ <0 0 18 &gic 0 GIC_SPI 18 IRQ_TYPE_LEVEL_HIGH>,
++ <0 0 19 &gic 0 GIC_SPI 19 IRQ_TYPE_LEVEL_HIGH>,
++ <0 0 20 &gic 0 GIC_SPI 20 IRQ_TYPE_LEVEL_HIGH>,
++ <0 0 21 &gic 0 GIC_SPI 21 IRQ_TYPE_LEVEL_HIGH>,
++ <0 0 22 &gic 0 GIC_SPI 22 IRQ_TYPE_LEVEL_HIGH>,
++ <0 0 23 &gic 0 GIC_SPI 23 IRQ_TYPE_LEVEL_HIGH>,
++ <0 0 24 &gic 0 GIC_SPI 24 IRQ_TYPE_LEVEL_HIGH>,
++ <0 0 25 &gic 0 GIC_SPI 25 IRQ_TYPE_LEVEL_HIGH>,
++ <0 0 26 &gic 0 GIC_SPI 26 IRQ_TYPE_LEVEL_HIGH>,
++ <0 0 27 &gic 0 GIC_SPI 27 IRQ_TYPE_LEVEL_HIGH>,
++ <0 0 28 &gic 0 GIC_SPI 28 IRQ_TYPE_LEVEL_HIGH>,
++ <0 0 29 &gic 0 GIC_SPI 29 IRQ_TYPE_LEVEL_HIGH>,
++ <0 0 30 &gic 0 GIC_SPI 30 IRQ_TYPE_LEVEL_HIGH>,
++ <0 0 31 &gic 0 GIC_SPI 31 IRQ_TYPE_LEVEL_HIGH>,
++ <0 0 32 &gic 0 GIC_SPI 32 IRQ_TYPE_LEVEL_HIGH>,
++ <0 0 33 &gic 0 GIC_SPI 33 IRQ_TYPE_LEVEL_HIGH>,
++ <0 0 34 &gic 0 GIC_SPI 34 IRQ_TYPE_LEVEL_HIGH>,
++ <0 0 35 &gic 0 GIC_SPI 35 IRQ_TYPE_LEVEL_HIGH>,
++ <0 0 36 &gic 0 GIC_SPI 36 IRQ_TYPE_LEVEL_HIGH>,
++ <0 0 37 &gic 0 GIC_SPI 37 IRQ_TYPE_LEVEL_HIGH>,
++ <0 0 38 &gic 0 GIC_SPI 38 IRQ_TYPE_LEVEL_HIGH>,
++ <0 0 39 &gic 0 GIC_SPI 39 IRQ_TYPE_LEVEL_HIGH>,
++ <0 0 40 &gic 0 GIC_SPI 40 IRQ_TYPE_LEVEL_HIGH>,
++ <0 0 41 &gic 0 GIC_SPI 41 IRQ_TYPE_LEVEL_HIGH>,
++ <0 0 42 &gic 0 GIC_SPI 42 IRQ_TYPE_LEVEL_HIGH>;
++
++ ethernet@202000000 {
+ compatible = "smsc,lan91c111";
+ reg = <2 0x02000000 0x10000>;
+ interrupts = <15>;
+@@ -178,7 +178,7 @@
+ clock-output-names = "v2m:refclk32khz";
+ };
+
+- iofpga@3,00000000 {
++ iofpga@300000000 {
+ compatible = "simple-bus";
+ #address-cells = <1>;
+ #size-cells = <1>;
+diff --git a/arch/arm64/boot/dts/arm/juno-base.dtsi b/arch/arm64/boot/dts/arm/juno-base.dtsi
+index f5889281545f..59b6ac0b828a 100644
+--- a/arch/arm64/boot/dts/arm/juno-base.dtsi
++++ b/arch/arm64/boot/dts/arm/juno-base.dtsi
+@@ -74,35 +74,35 @@
+ <0x0 0x2c02f000 0 0x2000>,
+ <0x0 0x2c04f000 0 0x2000>,
+ <0x0 0x2c06f000 0 0x2000>;
+- #address-cells = <2>;
++ #address-cells = <1>;
+ #interrupt-cells = <3>;
+- #size-cells = <2>;
++ #size-cells = <1>;
+ interrupt-controller;
+ interrupts = <GIC_PPI 9 (GIC_CPU_MASK_SIMPLE(6) | IRQ_TYPE_LEVEL_HIGH)>;
+- ranges = <0 0 0 0x2c1c0000 0 0x40000>;
++ ranges = <0 0 0x2c1c0000 0x40000>;
+
+ v2m_0: v2m@0 {
+ compatible = "arm,gic-v2m-frame";
+ msi-controller;
+- reg = <0 0 0 0x10000>;
++ reg = <0 0x10000>;
+ };
+
+ v2m@10000 {
+ compatible = "arm,gic-v2m-frame";
+ msi-controller;
+- reg = <0 0x10000 0 0x10000>;
++ reg = <0x10000 0x10000>;
+ };
+
+ v2m@20000 {
+ compatible = "arm,gic-v2m-frame";
+ msi-controller;
+- reg = <0 0x20000 0 0x10000>;
++ reg = <0x20000 0x10000>;
+ };
+
+ v2m@30000 {
+ compatible = "arm,gic-v2m-frame";
+ msi-controller;
+- reg = <0 0x30000 0 0x10000>;
++ reg = <0x30000 0x10000>;
+ };
+ };
+
+@@ -546,10 +546,10 @@
+ <0x42000000 0x40 0x00000000 0x40 0x00000000 0x1 0x00000000>;
+ #interrupt-cells = <1>;
+ interrupt-map-mask = <0 0 0 7>;
+- interrupt-map = <0 0 0 1 &gic 0 0 GIC_SPI 136 IRQ_TYPE_LEVEL_HIGH>,
+- <0 0 0 2 &gic 0 0 GIC_SPI 137 IRQ_TYPE_LEVEL_HIGH>,
+- <0 0 0 3 &gic 0 0 GIC_SPI 138 IRQ_TYPE_LEVEL_HIGH>,
+- <0 0 0 4 &gic 0 0 GIC_SPI 139 IRQ_TYPE_LEVEL_HIGH>;
++ interrupt-map = <0 0 0 1 &gic 0 GIC_SPI 136 IRQ_TYPE_LEVEL_HIGH>,
++ <0 0 0 2 &gic 0 GIC_SPI 137 IRQ_TYPE_LEVEL_HIGH>,
++ <0 0 0 3 &gic 0 GIC_SPI 138 IRQ_TYPE_LEVEL_HIGH>,
++ <0 0 0 4 &gic 0 GIC_SPI 139 IRQ_TYPE_LEVEL_HIGH>;
+ msi-parent = <&v2m_0>;
+ status = "disabled";
+ iommu-map-mask = <0x0>; /* RC has no means to output PCI RID */
+@@ -813,19 +813,19 @@
+
+ #interrupt-cells = <1>;
+ interrupt-map-mask = <0 0 15>;
+- interrupt-map = <0 0 0 &gic 0 0 GIC_SPI 68 IRQ_TYPE_LEVEL_HIGH>,
+- <0 0 1 &gic 0 0 GIC_SPI 69 IRQ_TYPE_LEVEL_HIGH>,
+- <0 0 2 &gic 0 0 GIC_SPI 70 IRQ_TYPE_LEVEL_HIGH>,
+- <0 0 3 &gic 0 0 GIC_SPI 160 IRQ_TYPE_LEVEL_HIGH>,
+- <0 0 4 &gic 0 0 GIC_SPI 161 IRQ_TYPE_LEVEL_HIGH>,
+- <0 0 5 &gic 0 0 GIC_SPI 162 IRQ_TYPE_LEVEL_HIGH>,
+- <0 0 6 &gic 0 0 GIC_SPI 163 IRQ_TYPE_LEVEL_HIGH>,
+- <0 0 7 &gic 0 0 GIC_SPI 164 IRQ_TYPE_LEVEL_HIGH>,
+- <0 0 8 &gic 0 0 GIC_SPI 165 IRQ_TYPE_LEVEL_HIGH>,
+- <0 0 9 &gic 0 0 GIC_SPI 166 IRQ_TYPE_LEVEL_HIGH>,
+- <0 0 10 &gic 0 0 GIC_SPI 167 IRQ_TYPE_LEVEL_HIGH>,
+- <0 0 11 &gic 0 0 GIC_SPI 168 IRQ_TYPE_LEVEL_HIGH>,
+- <0 0 12 &gic 0 0 GIC_SPI 169 IRQ_TYPE_LEVEL_HIGH>;
++ interrupt-map = <0 0 0 &gic 0 GIC_SPI 68 IRQ_TYPE_LEVEL_HIGH>,
++ <0 0 1 &gic 0 GIC_SPI 69 IRQ_TYPE_LEVEL_HIGH>,
++ <0 0 2 &gic 0 GIC_SPI 70 IRQ_TYPE_LEVEL_HIGH>,
++ <0 0 3 &gic 0 GIC_SPI 160 IRQ_TYPE_LEVEL_HIGH>,
++ <0 0 4 &gic 0 GIC_SPI 161 IRQ_TYPE_LEVEL_HIGH>,
++ <0 0 5 &gic 0 GIC_SPI 162 IRQ_TYPE_LEVEL_HIGH>,
++ <0 0 6 &gic 0 GIC_SPI 163 IRQ_TYPE_LEVEL_HIGH>,
++ <0 0 7 &gic 0 GIC_SPI 164 IRQ_TYPE_LEVEL_HIGH>,
++ <0 0 8 &gic 0 GIC_SPI 165 IRQ_TYPE_LEVEL_HIGH>,
++ <0 0 9 &gic 0 GIC_SPI 166 IRQ_TYPE_LEVEL_HIGH>,
++ <0 0 10 &gic 0 GIC_SPI 167 IRQ_TYPE_LEVEL_HIGH>,
++ <0 0 11 &gic 0 GIC_SPI 168 IRQ_TYPE_LEVEL_HIGH>,
++ <0 0 12 &gic 0 GIC_SPI 169 IRQ_TYPE_LEVEL_HIGH>;
+ };
+
+ site2: tlx@60000000 {
+@@ -835,6 +835,6 @@
+ ranges = <0 0 0x60000000 0x10000000>;
+ #interrupt-cells = <1>;
+ interrupt-map-mask = <0 0>;
+- interrupt-map = <0 0 &gic 0 0 GIC_SPI 168 IRQ_TYPE_LEVEL_HIGH>;
++ interrupt-map = <0 0 &gic 0 GIC_SPI 168 IRQ_TYPE_LEVEL_HIGH>;
+ };
+ };
+diff --git a/arch/arm64/boot/dts/arm/juno-motherboard.dtsi b/arch/arm64/boot/dts/arm/juno-motherboard.dtsi
+index e3983ded3c3c..d5cefddde08c 100644
+--- a/arch/arm64/boot/dts/arm/juno-motherboard.dtsi
++++ b/arch/arm64/boot/dts/arm/juno-motherboard.dtsi
+@@ -103,7 +103,7 @@
+ };
+ };
+
+- flash@0,00000000 {
++ flash@0 {
+ /* 2 * 32MiB NOR Flash memory mounted on CS0 */
+ compatible = "arm,vexpress-flash", "cfi-flash";
+ reg = <0 0x00000000 0x04000000>;
+@@ -120,7 +120,7 @@
+ };
+ };
+
+- ethernet@2,00000000 {
++ ethernet@200000000 {
+ compatible = "smsc,lan9118", "smsc,lan9115";
+ reg = <2 0x00000000 0x10000>;
+ interrupts = <3>;
+@@ -133,7 +133,7 @@
+ vddvario-supply = <&mb_fixed_3v3>;
+ };
+
+- iofpga@3,00000000 {
++ iofpga@300000000 {
+ compatible = "simple-bus";
+ #address-cells = <1>;
+ #size-cells = <1>;
+diff --git a/arch/arm64/boot/dts/arm/rtsm_ve-motherboard-rs2.dtsi b/arch/arm64/boot/dts/arm/rtsm_ve-motherboard-rs2.dtsi
+index 60703b5763c6..350cbf17e8b4 100644
+--- a/arch/arm64/boot/dts/arm/rtsm_ve-motherboard-rs2.dtsi
++++ b/arch/arm64/boot/dts/arm/rtsm_ve-motherboard-rs2.dtsi
+@@ -9,7 +9,7 @@
+ motherboard {
+ arm,v2m-memory-map = "rs2";
+
+- iofpga@3,00000000 {
++ iofpga@300000000 {
+ virtio-p9@140000 {
+ compatible = "virtio,mmio";
+ reg = <0x140000 0x200>;
+diff --git a/arch/arm64/boot/dts/arm/rtsm_ve-motherboard.dtsi b/arch/arm64/boot/dts/arm/rtsm_ve-motherboard.dtsi
+index e333c8d2d0e4..d1bfa62ca073 100644
+--- a/arch/arm64/boot/dts/arm/rtsm_ve-motherboard.dtsi
++++ b/arch/arm64/boot/dts/arm/rtsm_ve-motherboard.dtsi
+@@ -17,14 +17,14 @@
+ #interrupt-cells = <1>;
+ ranges;
+
+- flash@0,00000000 {
++ flash@0 {
+ compatible = "arm,vexpress-flash", "cfi-flash";
+ reg = <0 0x00000000 0x04000000>,
+ <4 0x00000000 0x04000000>;
+ bank-width = <4>;
+ };
+
+- ethernet@2,02000000 {
++ ethernet@202000000 {
+ compatible = "smsc,lan91c111";
+ reg = <2 0x02000000 0x10000>;
+ interrupts = <15>;
+@@ -51,7 +51,7 @@
+ clock-output-names = "v2m:refclk32khz";
+ };
+
+- iofpga@3,00000000 {
++ iofpga@300000000 {
+ compatible = "simple-bus";
+ #address-cells = <1>;
+ #size-cells = <1>;
+diff --git a/arch/arm64/boot/dts/marvell/armada-3720-db.dts b/arch/arm64/boot/dts/marvell/armada-3720-db.dts
+index f2cc00594d64..3e5789f37206 100644
+--- a/arch/arm64/boot/dts/marvell/armada-3720-db.dts
++++ b/arch/arm64/boot/dts/marvell/armada-3720-db.dts
+@@ -128,6 +128,9 @@
+
+ /* CON15(V2.0)/CON17(V1.4) : PCIe / CON15(V2.0)/CON12(V1.4) :mini-PCIe */
+ &pcie0 {
++ pinctrl-names = "default";
++ pinctrl-0 = <&pcie_reset_pins &pcie_clkreq_pins>;
++ reset-gpios = <&gpiosb 3 GPIO_ACTIVE_LOW>;
+ status = "okay";
+ };
+
+diff --git a/arch/arm64/boot/dts/marvell/armada-3720-espressobin.dtsi b/arch/arm64/boot/dts/marvell/armada-3720-espressobin.dtsi
+index 42e992f9c8a5..c92ad664cb0e 100644
+--- a/arch/arm64/boot/dts/marvell/armada-3720-espressobin.dtsi
++++ b/arch/arm64/boot/dts/marvell/armada-3720-espressobin.dtsi
+@@ -47,6 +47,7 @@
+ phys = <&comphy1 0>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&pcie_reset_pins &pcie_clkreq_pins>;
++ reset-gpios = <&gpiosb 3 GPIO_ACTIVE_LOW>;
+ };
+
+ /* J6 */
+diff --git a/arch/arm64/boot/dts/marvell/armada-3720-turris-mox.dts b/arch/arm64/boot/dts/marvell/armada-3720-turris-mox.dts
+index bb42d1e6a4e9..1452c821f8c0 100644
+--- a/arch/arm64/boot/dts/marvell/armada-3720-turris-mox.dts
++++ b/arch/arm64/boot/dts/marvell/armada-3720-turris-mox.dts
+@@ -95,7 +95,7 @@
+ };
+
+ sfp: sfp {
+- compatible = "sff,sfp+";
++ compatible = "sff,sfp";
+ i2c-bus = <&i2c0>;
+ los-gpio = <&moxtet_sfp 0 GPIO_ACTIVE_HIGH>;
+ tx-fault-gpio = <&moxtet_sfp 1 GPIO_ACTIVE_HIGH>;
+@@ -128,10 +128,6 @@
+ };
+ };
+
+-&pcie_reset_pins {
+- function = "gpio";
+-};
+-
+ &pcie0 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&pcie_reset_pins &pcie_clkreq_pins>;
+@@ -179,6 +175,8 @@
+ marvell,pad-type = "sd";
+ vqmmc-supply = <&vsdio_reg>;
+ mmc-pwrseq = <&sdhci1_pwrseq>;
++ /* forbid SDR104 for FCC purposes */
++ sdhci-caps-mask = <0x2 0x0>;
+ status = "okay";
+ };
+
+diff --git a/arch/arm64/boot/dts/marvell/armada-37xx.dtsi b/arch/arm64/boot/dts/marvell/armada-37xx.dtsi
+index 000c135e39b7..7909c146eabf 100644
+--- a/arch/arm64/boot/dts/marvell/armada-37xx.dtsi
++++ b/arch/arm64/boot/dts/marvell/armada-37xx.dtsi
+@@ -317,7 +317,7 @@
+
+ pcie_reset_pins: pcie-reset-pins {
+ groups = "pcie1";
+- function = "pcie";
++ function = "gpio";
+ };
+
+ pcie_clkreq_pins: pcie-clkreq-pins {
+diff --git a/arch/arm64/boot/dts/mediatek/mt8173.dtsi b/arch/arm64/boot/dts/mediatek/mt8173.dtsi
+index d819e44d94a8..6ad1053afd27 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8173.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt8173.dtsi
+@@ -242,21 +242,21 @@
+ cpu_on = <0x84000003>;
+ };
+
+- clk26m: oscillator@0 {
++ clk26m: oscillator0 {
+ compatible = "fixed-clock";
+ #clock-cells = <0>;
+ clock-frequency = <26000000>;
+ clock-output-names = "clk26m";
+ };
+
+- clk32k: oscillator@1 {
++ clk32k: oscillator1 {
+ compatible = "fixed-clock";
+ #clock-cells = <0>;
+ clock-frequency = <32000>;
+ clock-output-names = "clk32k";
+ };
+
+- cpum_ck: oscillator@2 {
++ cpum_ck: oscillator2 {
+ compatible = "fixed-clock";
+ #clock-cells = <0>;
+ clock-frequency = <0>;
+@@ -272,19 +272,19 @@
+ sustainable-power = <1500>; /* milliwatts */
+
+ trips {
+- threshold: trip-point@0 {
++ threshold: trip-point0 {
+ temperature = <68000>;
+ hysteresis = <2000>;
+ type = "passive";
+ };
+
+- target: trip-point@1 {
++ target: trip-point1 {
+ temperature = <85000>;
+ hysteresis = <2000>;
+ type = "passive";
+ };
+
+- cpu_crit: cpu_crit@0 {
++ cpu_crit: cpu_crit0 {
+ temperature = <115000>;
+ hysteresis = <2000>;
+ type = "critical";
+@@ -292,13 +292,13 @@
+ };
+
+ cooling-maps {
+- map@0 {
++ map0 {
+ trip = <&target>;
+ cooling-device = <&cpu0 0 0>,
+ <&cpu1 0 0>;
+ contribution = <3072>;
+ };
+- map@1 {
++ map1 {
+ trip = <&target>;
+ cooling-device = <&cpu2 0 0>,
+ <&cpu3 0 0>;
+@@ -312,7 +312,7 @@
+ #address-cells = <2>;
+ #size-cells = <2>;
+ ranges;
+- vpu_dma_reserved: vpu_dma_mem_region {
++ vpu_dma_reserved: vpu_dma_mem_region@b7000000 {
+ compatible = "shared-dma-pool";
+ reg = <0 0xb7000000 0 0x500000>;
+ alignment = <0x1000>;
+@@ -365,7 +365,7 @@
+ reg = <0 0x10005000 0 0x1000>;
+ };
+
+- pio: pinctrl@10005000 {
++ pio: pinctrl@1000b000 {
+ compatible = "mediatek,mt8173-pinctrl";
+ reg = <0 0x1000b000 0 0x1000>;
+ mediatek,pctl-regmap = <&syscfg_pctl_a>;
+@@ -572,7 +572,7 @@
+ status = "disabled";
+ };
+
+- gic: interrupt-controller@10220000 {
++ gic: interrupt-controller@10221000 {
+ compatible = "arm,gic-400";
+ #interrupt-cells = <3>;
+ interrupt-parent = <&gic>;
+diff --git a/arch/arm64/boot/dts/nvidia/tegra194-p2888.dtsi b/arch/arm64/boot/dts/nvidia/tegra194-p2888.dtsi
+index 623f7d7d216b..8e3136dfdd62 100644
+--- a/arch/arm64/boot/dts/nvidia/tegra194-p2888.dtsi
++++ b/arch/arm64/boot/dts/nvidia/tegra194-p2888.dtsi
+@@ -33,7 +33,7 @@
+
+ phy-reset-gpios = <&gpio TEGRA194_MAIN_GPIO(G, 5) GPIO_ACTIVE_LOW>;
+ phy-handle = <&phy>;
+- phy-mode = "rgmii";
++ phy-mode = "rgmii-id";
+
+ mdio {
+ #address-cells = <1>;
+diff --git a/arch/arm64/boot/dts/nvidia/tegra194.dtsi b/arch/arm64/boot/dts/nvidia/tegra194.dtsi
+index f4ede86e32b4..3c928360f4ed 100644
+--- a/arch/arm64/boot/dts/nvidia/tegra194.dtsi
++++ b/arch/arm64/boot/dts/nvidia/tegra194.dtsi
+@@ -1387,7 +1387,7 @@
+
+ bus-range = <0x0 0xff>;
+ ranges = <0x81000000 0x0 0x30100000 0x0 0x30100000 0x0 0x00100000 /* downstream I/O (1MB) */
+- 0xc2000000 0x12 0x00000000 0x12 0x00000000 0x0 0x30000000 /* prefetchable memory (768MB) */
++ 0xc3000000 0x12 0x00000000 0x12 0x00000000 0x0 0x30000000 /* prefetchable memory (768MB) */
+ 0x82000000 0x0 0x40000000 0x12 0x30000000 0x0 0x10000000>; /* non-prefetchable memory (256MB) */
+ };
+
+@@ -1432,7 +1432,7 @@
+
+ bus-range = <0x0 0xff>;
+ ranges = <0x81000000 0x0 0x32100000 0x0 0x32100000 0x0 0x00100000 /* downstream I/O (1MB) */
+- 0xc2000000 0x12 0x40000000 0x12 0x40000000 0x0 0x30000000 /* prefetchable memory (768MB) */
++ 0xc3000000 0x12 0x40000000 0x12 0x40000000 0x0 0x30000000 /* prefetchable memory (768MB) */
+ 0x82000000 0x0 0x40000000 0x12 0x70000000 0x0 0x10000000>; /* non-prefetchable memory (256MB) */
+ };
+
+@@ -1477,7 +1477,7 @@
+
+ bus-range = <0x0 0xff>;
+ ranges = <0x81000000 0x0 0x34100000 0x0 0x34100000 0x0 0x00100000 /* downstream I/O (1MB) */
+- 0xc2000000 0x12 0x80000000 0x12 0x80000000 0x0 0x30000000 /* prefetchable memory (768MB) */
++ 0xc3000000 0x12 0x80000000 0x12 0x80000000 0x0 0x30000000 /* prefetchable memory (768MB) */
+ 0x82000000 0x0 0x40000000 0x12 0xb0000000 0x0 0x10000000>; /* non-prefetchable memory (256MB) */
+ };
+
+@@ -1522,7 +1522,7 @@
+
+ bus-range = <0x0 0xff>;
+ ranges = <0x81000000 0x0 0x36100000 0x0 0x36100000 0x0 0x00100000 /* downstream I/O (1MB) */
+- 0xc2000000 0x14 0x00000000 0x14 0x00000000 0x3 0x40000000 /* prefetchable memory (13GB) */
++ 0xc3000000 0x14 0x00000000 0x14 0x00000000 0x3 0x40000000 /* prefetchable memory (13GB) */
+ 0x82000000 0x0 0x40000000 0x17 0x40000000 0x0 0xc0000000>; /* non-prefetchable memory (3GB) */
+ };
+
+@@ -1567,7 +1567,7 @@
+
+ bus-range = <0x0 0xff>;
+ ranges = <0x81000000 0x0 0x38100000 0x0 0x38100000 0x0 0x00100000 /* downstream I/O (1MB) */
+- 0xc2000000 0x18 0x00000000 0x18 0x00000000 0x3 0x40000000 /* prefetchable memory (13GB) */
++ 0xc3000000 0x18 0x00000000 0x18 0x00000000 0x3 0x40000000 /* prefetchable memory (13GB) */
+ 0x82000000 0x0 0x40000000 0x1b 0x40000000 0x0 0xc0000000>; /* non-prefetchable memory (3GB) */
+ };
+
+@@ -1616,7 +1616,7 @@
+
+ bus-range = <0x0 0xff>;
+ ranges = <0x81000000 0x0 0x3a100000 0x0 0x3a100000 0x0 0x00100000 /* downstream I/O (1MB) */
+- 0xc2000000 0x1c 0x00000000 0x1c 0x00000000 0x3 0x40000000 /* prefetchable memory (13GB) */
++ 0xc3000000 0x1c 0x00000000 0x1c 0x00000000 0x3 0x40000000 /* prefetchable memory (13GB) */
+ 0x82000000 0x0 0x40000000 0x1f 0x40000000 0x0 0xc0000000>; /* non-prefetchable memory (3GB) */
+ };
+
+diff --git a/arch/arm64/boot/dts/qcom/apq8096-db820c.dtsi b/arch/arm64/boot/dts/qcom/apq8096-db820c.dtsi
+index c4abbccf2bed..eaa1eb70b455 100644
+--- a/arch/arm64/boot/dts/qcom/apq8096-db820c.dtsi
++++ b/arch/arm64/boot/dts/qcom/apq8096-db820c.dtsi
+@@ -117,16 +117,6 @@
+ regulator-max-microvolt = <3700000>;
+ };
+
+- vreg_s8a_l3a_input: vreg-s8a-l3a-input {
+- compatible = "regulator-fixed";
+- regulator-name = "vreg_s8a_l3a_input";
+- regulator-always-on;
+- regulator-boot-on;
+-
+- regulator-min-microvolt = <0>;
+- regulator-max-microvolt = <0>;
+- };
+-
+ wlan_en: wlan-en-1-8v {
+ pinctrl-names = "default";
+ pinctrl-0 = <&wlan_en_gpios>;
+@@ -705,14 +695,14 @@
+ vdd_s11-supply = <&vph_pwr>;
+ vdd_s12-supply = <&vph_pwr>;
+ vdd_l2_l26_l28-supply = <&vreg_s3a_1p3>;
+- vdd_l3_l11-supply = <&vreg_s8a_l3a_input>;
++ vdd_l3_l11-supply = <&vreg_s3a_1p3>;
+ vdd_l4_l27_l31-supply = <&vreg_s3a_1p3>;
+ vdd_l5_l7-supply = <&vreg_s5a_2p15>;
+ vdd_l6_l12_l32-supply = <&vreg_s5a_2p15>;
+ vdd_l8_l16_l30-supply = <&vph_pwr>;
+ vdd_l14_l15-supply = <&vreg_s5a_2p15>;
+ vdd_l25-supply = <&vreg_s3a_1p3>;
+- vdd_lvs1_2-supply = <&vreg_s4a_1p8>;
++ vdd_lvs1_lvs2-supply = <&vreg_s4a_1p8>;
+
+ vreg_s3a_1p3: s3 {
+ regulator-name = "vreg_s3a_1p3";
+diff --git a/arch/arm64/boot/dts/qcom/msm8916.dtsi b/arch/arm64/boot/dts/qcom/msm8916.dtsi
+index a88a15f2352b..5548d7b5096c 100644
+--- a/arch/arm64/boot/dts/qcom/msm8916.dtsi
++++ b/arch/arm64/boot/dts/qcom/msm8916.dtsi
+@@ -261,7 +261,7 @@
+ thermal-sensors = <&tsens 4>;
+
+ trips {
+- cpu2_3_alert0: trip-point@0 {
++ cpu2_3_alert0: trip-point0 {
+ temperature = <75000>;
+ hysteresis = <2000>;
+ type = "passive";
+@@ -291,7 +291,7 @@
+ thermal-sensors = <&tsens 2>;
+
+ trips {
+- gpu_alert0: trip-point@0 {
++ gpu_alert0: trip-point0 {
+ temperature = <75000>;
+ hysteresis = <2000>;
+ type = "passive";
+@@ -311,7 +311,7 @@
+ thermal-sensors = <&tsens 1>;
+
+ trips {
+- cam_alert0: trip-point@0 {
++ cam_alert0: trip-point0 {
+ temperature = <75000>;
+ hysteresis = <2000>;
+ type = "hot";
+@@ -326,7 +326,7 @@
+ thermal-sensors = <&tsens 0>;
+
+ trips {
+- modem_alert0: trip-point@0 {
++ modem_alert0: trip-point0 {
+ temperature = <85000>;
+ hysteresis = <2000>;
+ type = "hot";
+diff --git a/arch/arm64/boot/dts/qcom/msm8996.dtsi b/arch/arm64/boot/dts/qcom/msm8996.dtsi
+index 98634d5c4440..d22c364b520a 100644
+--- a/arch/arm64/boot/dts/qcom/msm8996.dtsi
++++ b/arch/arm64/boot/dts/qcom/msm8996.dtsi
+@@ -989,16 +989,16 @@
+ "csi_clk_mux",
+ "vfe0",
+ "vfe1";
+- interrupts = <GIC_SPI 78 0>,
+- <GIC_SPI 79 0>,
+- <GIC_SPI 80 0>,
+- <GIC_SPI 296 0>,
+- <GIC_SPI 297 0>,
+- <GIC_SPI 298 0>,
+- <GIC_SPI 299 0>,
+- <GIC_SPI 309 0>,
+- <GIC_SPI 314 0>,
+- <GIC_SPI 315 0>;
++ interrupts = <GIC_SPI 78 IRQ_TYPE_EDGE_RISING>,
++ <GIC_SPI 79 IRQ_TYPE_EDGE_RISING>,
++ <GIC_SPI 80 IRQ_TYPE_EDGE_RISING>,
++ <GIC_SPI 296 IRQ_TYPE_EDGE_RISING>,
++ <GIC_SPI 297 IRQ_TYPE_EDGE_RISING>,
++ <GIC_SPI 298 IRQ_TYPE_EDGE_RISING>,
++ <GIC_SPI 299 IRQ_TYPE_EDGE_RISING>,
++ <GIC_SPI 309 IRQ_TYPE_EDGE_RISING>,
++ <GIC_SPI 314 IRQ_TYPE_EDGE_RISING>,
++ <GIC_SPI 315 IRQ_TYPE_EDGE_RISING>;
+ interrupt-names = "csiphy0",
+ "csiphy1",
+ "csiphy2",
+diff --git a/arch/arm64/boot/dts/qcom/pm8150.dtsi b/arch/arm64/boot/dts/qcom/pm8150.dtsi
+index b6e304748a57..c0b197458665 100644
+--- a/arch/arm64/boot/dts/qcom/pm8150.dtsi
++++ b/arch/arm64/boot/dts/qcom/pm8150.dtsi
+@@ -73,18 +73,8 @@
+ reg = <0xc000>;
+ gpio-controller;
+ #gpio-cells = <2>;
+- interrupts = <0x0 0xc0 0x0 IRQ_TYPE_NONE>,
+- <0x0 0xc1 0x0 IRQ_TYPE_NONE>,
+- <0x0 0xc2 0x0 IRQ_TYPE_NONE>,
+- <0x0 0xc3 0x0 IRQ_TYPE_NONE>,
+- <0x0 0xc4 0x0 IRQ_TYPE_NONE>,
+- <0x0 0xc5 0x0 IRQ_TYPE_NONE>,
+- <0x0 0xc6 0x0 IRQ_TYPE_NONE>,
+- <0x0 0xc7 0x0 IRQ_TYPE_NONE>,
+- <0x0 0xc8 0x0 IRQ_TYPE_NONE>,
+- <0x0 0xc9 0x0 IRQ_TYPE_NONE>,
+- <0x0 0xca 0x0 IRQ_TYPE_NONE>,
+- <0x0 0xcb 0x0 IRQ_TYPE_NONE>;
++ interrupt-controller;
++ #interrupt-cells = <2>;
+ };
+ };
+
+diff --git a/arch/arm64/boot/dts/qcom/pm8150b.dtsi b/arch/arm64/boot/dts/qcom/pm8150b.dtsi
+index 322379d5c31f..40b5d75a4a1d 100644
+--- a/arch/arm64/boot/dts/qcom/pm8150b.dtsi
++++ b/arch/arm64/boot/dts/qcom/pm8150b.dtsi
+@@ -62,18 +62,8 @@
+ reg = <0xc000>;
+ gpio-controller;
+ #gpio-cells = <2>;
+- interrupts = <0x2 0xc0 0x0 IRQ_TYPE_NONE>,
+- <0x2 0xc1 0x0 IRQ_TYPE_NONE>,
+- <0x2 0xc2 0x0 IRQ_TYPE_NONE>,
+- <0x2 0xc3 0x0 IRQ_TYPE_NONE>,
+- <0x2 0xc4 0x0 IRQ_TYPE_NONE>,
+- <0x2 0xc5 0x0 IRQ_TYPE_NONE>,
+- <0x2 0xc6 0x0 IRQ_TYPE_NONE>,
+- <0x2 0xc7 0x0 IRQ_TYPE_NONE>,
+- <0x2 0xc8 0x0 IRQ_TYPE_NONE>,
+- <0x2 0xc9 0x0 IRQ_TYPE_NONE>,
+- <0x2 0xca 0x0 IRQ_TYPE_NONE>,
+- <0x2 0xcb 0x0 IRQ_TYPE_NONE>;
++ interrupt-controller;
++ #interrupt-cells = <2>;
+ };
+ };
+
+diff --git a/arch/arm64/boot/dts/qcom/pm8150l.dtsi b/arch/arm64/boot/dts/qcom/pm8150l.dtsi
+index eb0e9a090e42..cf05e0685d10 100644
+--- a/arch/arm64/boot/dts/qcom/pm8150l.dtsi
++++ b/arch/arm64/boot/dts/qcom/pm8150l.dtsi
+@@ -56,18 +56,8 @@
+ reg = <0xc000>;
+ gpio-controller;
+ #gpio-cells = <2>;
+- interrupts = <0x4 0xc0 0x0 IRQ_TYPE_NONE>,
+- <0x4 0xc1 0x0 IRQ_TYPE_NONE>,
+- <0x4 0xc2 0x0 IRQ_TYPE_NONE>,
+- <0x4 0xc3 0x0 IRQ_TYPE_NONE>,
+- <0x4 0xc4 0x0 IRQ_TYPE_NONE>,
+- <0x4 0xc5 0x0 IRQ_TYPE_NONE>,
+- <0x4 0xc6 0x0 IRQ_TYPE_NONE>,
+- <0x4 0xc7 0x0 IRQ_TYPE_NONE>,
+- <0x4 0xc8 0x0 IRQ_TYPE_NONE>,
+- <0x4 0xc9 0x0 IRQ_TYPE_NONE>,
+- <0x4 0xca 0x0 IRQ_TYPE_NONE>,
+- <0x4 0xcb 0x0 IRQ_TYPE_NONE>;
++ interrupt-controller;
++ #interrupt-cells = <2>;
+ };
+ };
+
+diff --git a/arch/arm64/boot/dts/qcom/sc7180.dtsi b/arch/arm64/boot/dts/qcom/sc7180.dtsi
+index 998f101ad623..eea92b314fc6 100644
+--- a/arch/arm64/boot/dts/qcom/sc7180.dtsi
++++ b/arch/arm64/boot/dts/qcom/sc7180.dtsi
+@@ -1657,8 +1657,7 @@
+ pdc: interrupt-controller@b220000 {
+ compatible = "qcom,sc7180-pdc", "qcom,pdc";
+ reg = <0 0x0b220000 0 0x30000>;
+- qcom,pdc-ranges = <0 480 15>, <17 497 98>,
+- <119 634 4>, <124 639 1>;
++ qcom,pdc-ranges = <0 480 94>, <94 609 31>, <125 63 1>;
+ #interrupt-cells = <2>;
+ interrupt-parent = <&intc>;
+ interrupt-controller;
+diff --git a/arch/arm64/boot/dts/qcom/sdm850-lenovo-yoga-c630.dts b/arch/arm64/boot/dts/qcom/sdm850-lenovo-yoga-c630.dts
+index 51a670ad15b2..4b9860a2c8eb 100644
+--- a/arch/arm64/boot/dts/qcom/sdm850-lenovo-yoga-c630.dts
++++ b/arch/arm64/boot/dts/qcom/sdm850-lenovo-yoga-c630.dts
+@@ -577,3 +577,14 @@
+ };
+ };
+ };
++
++&wifi {
++ status = "okay";
++
++ vdd-0.8-cx-mx-supply = <&vreg_l5a_0p8>;
++ vdd-1.8-xo-supply = <&vreg_l7a_1p8>;
++ vdd-1.3-rfa-supply = <&vreg_l17a_1p3>;
++ vdd-3.3-ch0-supply = <&vreg_l25a_3p3>;
++
++ qcom,snoc-host-cap-8bit-quirk;
++};
+diff --git a/arch/arm64/boot/dts/qcom/sm8250.dtsi b/arch/arm64/boot/dts/qcom/sm8250.dtsi
+index 891d83b2afea..2a7eaefd221d 100644
+--- a/arch/arm64/boot/dts/qcom/sm8250.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm8250.dtsi
+@@ -314,8 +314,8 @@
+ };
+
+ pdc: interrupt-controller@b220000 {
+- compatible = "qcom,sm8250-pdc";
+- reg = <0x0b220000 0x30000>, <0x17c000f0 0x60>;
++ compatible = "qcom,sm8250-pdc", "qcom,pdc";
++ reg = <0 0x0b220000 0 0x30000>, <0 0x17c000f0 0 0x60>;
+ qcom,pdc-ranges = <0 480 94>, <94 609 31>,
+ <125 63 1>, <126 716 12>;
+ #interrupt-cells = <2>;
+diff --git a/arch/arm64/boot/dts/realtek/rtd1293-ds418j.dts b/arch/arm64/boot/dts/realtek/rtd1293-ds418j.dts
+index b2dd583146b4..b2e44c6c2d22 100644
+--- a/arch/arm64/boot/dts/realtek/rtd1293-ds418j.dts
++++ b/arch/arm64/boot/dts/realtek/rtd1293-ds418j.dts
+@@ -1,6 +1,6 @@
+ // SPDX-License-Identifier: (GPL-2.0-or-later OR BSD-2-Clause)
+ /*
+- * Copyright (c) 2017 Andreas Färber
++ * Copyright (c) 2017-2019 Andreas Färber
+ */
+
+ /dts-v1/;
+@@ -11,9 +11,9 @@
+ compatible = "synology,ds418j", "realtek,rtd1293";
+ model = "Synology DiskStation DS418j";
+
+- memory@0 {
++ memory@1f000 {
+ device_type = "memory";
+- reg = <0x0 0x40000000>;
++ reg = <0x1f000 0x3ffe1000>; /* boot ROM to 1 GiB */
+ };
+
+ aliases {
+diff --git a/arch/arm64/boot/dts/realtek/rtd1293.dtsi b/arch/arm64/boot/dts/realtek/rtd1293.dtsi
+index bd4e22723f7b..2d92b56ac94d 100644
+--- a/arch/arm64/boot/dts/realtek/rtd1293.dtsi
++++ b/arch/arm64/boot/dts/realtek/rtd1293.dtsi
+@@ -36,16 +36,20 @@
+ timer {
+ compatible = "arm,armv8-timer";
+ interrupts = <GIC_PPI 13
+- (GIC_CPU_MASK_RAW(0xf) | IRQ_TYPE_LEVEL_LOW)>,
++ (GIC_CPU_MASK_SIMPLE(2) | IRQ_TYPE_LEVEL_LOW)>,
+ <GIC_PPI 14
+- (GIC_CPU_MASK_RAW(0xf) | IRQ_TYPE_LEVEL_LOW)>,
++ (GIC_CPU_MASK_SIMPLE(2) | IRQ_TYPE_LEVEL_LOW)>,
+ <GIC_PPI 11
+- (GIC_CPU_MASK_RAW(0xf) | IRQ_TYPE_LEVEL_LOW)>,
++ (GIC_CPU_MASK_SIMPLE(2) | IRQ_TYPE_LEVEL_LOW)>,
+ <GIC_PPI 10
+- (GIC_CPU_MASK_RAW(0xf) | IRQ_TYPE_LEVEL_LOW)>;
++ (GIC_CPU_MASK_SIMPLE(2) | IRQ_TYPE_LEVEL_LOW)>;
+ };
+ };
+
+ &arm_pmu {
+ interrupt-affinity = <&cpu0>, <&cpu1>;
+ };
++
++&gic {
++ interrupts = <GIC_PPI 9 (GIC_CPU_MASK_SIMPLE(2) | IRQ_TYPE_LEVEL_LOW)>;
++};
+diff --git a/arch/arm64/boot/dts/realtek/rtd1295-mele-v9.dts b/arch/arm64/boot/dts/realtek/rtd1295-mele-v9.dts
+index bd584e99fff9..cf4a57c012a8 100644
+--- a/arch/arm64/boot/dts/realtek/rtd1295-mele-v9.dts
++++ b/arch/arm64/boot/dts/realtek/rtd1295-mele-v9.dts
+@@ -1,5 +1,5 @@
+ /*
+- * Copyright (c) 2017 Andreas Färber
++ * Copyright (c) 2017-2019 Andreas Färber
+ *
+ * SPDX-License-Identifier: (GPL-2.0+ OR MIT)
+ */
+@@ -12,9 +12,9 @@
+ compatible = "mele,v9", "realtek,rtd1295";
+ model = "MeLE V9";
+
+- memory@0 {
++ memory@1f000 {
+ device_type = "memory";
+- reg = <0x0 0x80000000>;
++ reg = <0x1f000 0x7ffe1000>; /* boot ROM to 2 GiB */
+ };
+
+ aliases {
+diff --git a/arch/arm64/boot/dts/realtek/rtd1295-probox2-ava.dts b/arch/arm64/boot/dts/realtek/rtd1295-probox2-ava.dts
+index 8e2b0e75298a..14161c3f304d 100644
+--- a/arch/arm64/boot/dts/realtek/rtd1295-probox2-ava.dts
++++ b/arch/arm64/boot/dts/realtek/rtd1295-probox2-ava.dts
+@@ -1,5 +1,5 @@
+ /*
+- * Copyright (c) 2017 Andreas Färber
++ * Copyright (c) 2017-2019 Andreas Färber
+ *
+ * SPDX-License-Identifier: (GPL-2.0+ OR MIT)
+ */
+@@ -12,9 +12,9 @@
+ compatible = "probox2,ava", "realtek,rtd1295";
+ model = "PROBOX2 AVA";
+
+- memory@0 {
++ memory@1f000 {
+ device_type = "memory";
+- reg = <0x0 0x80000000>;
++ reg = <0x1f000 0x7ffe1000>; /* boot ROM to 2 GiB */
+ };
+
+ aliases {
+diff --git a/arch/arm64/boot/dts/realtek/rtd1295-zidoo-x9s.dts b/arch/arm64/boot/dts/realtek/rtd1295-zidoo-x9s.dts
+index e98e508b9514..4beb37bb9522 100644
+--- a/arch/arm64/boot/dts/realtek/rtd1295-zidoo-x9s.dts
++++ b/arch/arm64/boot/dts/realtek/rtd1295-zidoo-x9s.dts
+@@ -11,9 +11,9 @@
+ compatible = "zidoo,x9s", "realtek,rtd1295";
+ model = "Zidoo X9S";
+
+- memory@0 {
++ memory@1f000 {
+ device_type = "memory";
+- reg = <0x0 0x80000000>;
++ reg = <0x1f000 0x7ffe1000>; /* boot ROM to 2 GiB */
+ };
+
+ aliases {
+diff --git a/arch/arm64/boot/dts/realtek/rtd1295.dtsi b/arch/arm64/boot/dts/realtek/rtd1295.dtsi
+index 93f0e1d97721..1402abe80ea1 100644
+--- a/arch/arm64/boot/dts/realtek/rtd1295.dtsi
++++ b/arch/arm64/boot/dts/realtek/rtd1295.dtsi
+@@ -2,7 +2,7 @@
+ /*
+ * Realtek RTD1295 SoC
+ *
+- * Copyright (c) 2016-2017 Andreas Färber
++ * Copyright (c) 2016-2019 Andreas Färber
+ */
+
+ #include "rtd129x.dtsi"
+@@ -47,27 +47,16 @@
+ };
+ };
+
+- reserved-memory {
+- #address-cells = <1>;
+- #size-cells = <1>;
+- ranges;
+-
+- tee@10100000 {
+- reg = <0x10100000 0xf00000>;
+- no-map;
+- };
+- };
+-
+ timer {
+ compatible = "arm,armv8-timer";
+ interrupts = <GIC_PPI 13
+- (GIC_CPU_MASK_RAW(0xf) | IRQ_TYPE_LEVEL_LOW)>,
++ (GIC_CPU_MASK_SIMPLE(4) | IRQ_TYPE_LEVEL_LOW)>,
+ <GIC_PPI 14
+- (GIC_CPU_MASK_RAW(0xf) | IRQ_TYPE_LEVEL_LOW)>,
++ (GIC_CPU_MASK_SIMPLE(4) | IRQ_TYPE_LEVEL_LOW)>,
+ <GIC_PPI 11
+- (GIC_CPU_MASK_RAW(0xf) | IRQ_TYPE_LEVEL_LOW)>,
++ (GIC_CPU_MASK_SIMPLE(4) | IRQ_TYPE_LEVEL_LOW)>,
+ <GIC_PPI 10
+- (GIC_CPU_MASK_RAW(0xf) | IRQ_TYPE_LEVEL_LOW)>;
++ (GIC_CPU_MASK_SIMPLE(4) | IRQ_TYPE_LEVEL_LOW)>;
+ };
+ };
+
+diff --git a/arch/arm64/boot/dts/realtek/rtd1296-ds418.dts b/arch/arm64/boot/dts/realtek/rtd1296-ds418.dts
+index 5a051a52bf88..cc706d13da8b 100644
+--- a/arch/arm64/boot/dts/realtek/rtd1296-ds418.dts
++++ b/arch/arm64/boot/dts/realtek/rtd1296-ds418.dts
+@@ -11,9 +11,9 @@
+ compatible = "synology,ds418", "realtek,rtd1296";
+ model = "Synology DiskStation DS418";
+
+- memory@0 {
++ memory@1f000 {
+ device_type = "memory";
+- reg = <0x0 0x80000000>;
++ reg = <0x1f000 0x7ffe1000>; /* boot ROM to 2 GiB */
+ };
+
+ aliases {
+diff --git a/arch/arm64/boot/dts/realtek/rtd1296.dtsi b/arch/arm64/boot/dts/realtek/rtd1296.dtsi
+index 0f9e59cac086..fb864a139c97 100644
+--- a/arch/arm64/boot/dts/realtek/rtd1296.dtsi
++++ b/arch/arm64/boot/dts/realtek/rtd1296.dtsi
+@@ -50,13 +50,13 @@
+ timer {
+ compatible = "arm,armv8-timer";
+ interrupts = <GIC_PPI 13
+- (GIC_CPU_MASK_RAW(0xf) | IRQ_TYPE_LEVEL_LOW)>,
++ (GIC_CPU_MASK_SIMPLE(4) | IRQ_TYPE_LEVEL_LOW)>,
+ <GIC_PPI 14
+- (GIC_CPU_MASK_RAW(0xf) | IRQ_TYPE_LEVEL_LOW)>,
++ (GIC_CPU_MASK_SIMPLE(4) | IRQ_TYPE_LEVEL_LOW)>,
+ <GIC_PPI 11
+- (GIC_CPU_MASK_RAW(0xf) | IRQ_TYPE_LEVEL_LOW)>,
++ (GIC_CPU_MASK_SIMPLE(4) | IRQ_TYPE_LEVEL_LOW)>,
+ <GIC_PPI 10
+- (GIC_CPU_MASK_RAW(0xf) | IRQ_TYPE_LEVEL_LOW)>;
++ (GIC_CPU_MASK_SIMPLE(4) | IRQ_TYPE_LEVEL_LOW)>;
+ };
+ };
+
+diff --git a/arch/arm64/boot/dts/realtek/rtd129x.dtsi b/arch/arm64/boot/dts/realtek/rtd129x.dtsi
+index 4433114476f5..b63d0c03597a 100644
+--- a/arch/arm64/boot/dts/realtek/rtd129x.dtsi
++++ b/arch/arm64/boot/dts/realtek/rtd129x.dtsi
+@@ -2,14 +2,12 @@
+ /*
+ * Realtek RTD1293/RTD1295/RTD1296 SoC
+ *
+- * Copyright (c) 2016-2017 Andreas Färber
++ * Copyright (c) 2016-2019 Andreas Färber
+ */
+
+-/memreserve/ 0x0000000000000000 0x0000000000030000;
+-/memreserve/ 0x000000000001f000 0x0000000000001000;
+-/memreserve/ 0x0000000000030000 0x00000000000d0000;
++/memreserve/ 0x0000000000000000 0x000000000001f000;
++/memreserve/ 0x000000000001f000 0x00000000000e1000;
+ /memreserve/ 0x0000000001b00000 0x00000000004be000;
+-/memreserve/ 0x0000000001ffe000 0x0000000000004000;
+
+ #include <dt-bindings/interrupt-controller/arm-gic.h>
+ #include <dt-bindings/reset/realtek,rtd1295.h>
+@@ -19,6 +17,25 @@
+ #address-cells = <1>;
+ #size-cells = <1>;
+
++ reserved-memory {
++ #address-cells = <1>;
++ #size-cells = <1>;
++ ranges;
++
++ rpc_comm: rpc@1f000 {
++ reg = <0x1f000 0x1000>;
++ };
++
++ rpc_ringbuf: rpc@1ffe000 {
++ reg = <0x1ffe000 0x4000>;
++ };
++
++ tee: tee@10100000 {
++ reg = <0x10100000 0xf00000>;
++ no-map;
++ };
++ };
++
+ arm_pmu: arm-pmu {
+ compatible = "arm,cortex-a53-pmu";
+ interrupts = <GIC_SPI 48 IRQ_TYPE_LEVEL_HIGH>;
+@@ -35,8 +52,9 @@
+ compatible = "simple-bus";
+ #address-cells = <1>;
+ #size-cells = <1>;
+- /* Exclude up to 2 GiB of RAM */
+- ranges = <0x80000000 0x80000000 0x80000000>;
++ ranges = <0x00000000 0x00000000 0x0001f000>, /* boot ROM */
++ /* Exclude up to 2 GiB of RAM */
++ <0x80000000 0x80000000 0x80000000>;
+
+ reset1: reset-controller@98000000 {
+ compatible = "snps,dw-low-reset";
+diff --git a/arch/arm64/boot/dts/renesas/r8a774a1.dtsi b/arch/arm64/boot/dts/renesas/r8a774a1.dtsi
+index 79023433a740..a603d947970e 100644
+--- a/arch/arm64/boot/dts/renesas/r8a774a1.dtsi
++++ b/arch/arm64/boot/dts/renesas/r8a774a1.dtsi
+@@ -1000,7 +1000,7 @@
+ <&ipmmu_ds1 30>, <&ipmmu_ds1 31>;
+ };
+
+- ipmmu_ds0: mmu@e6740000 {
++ ipmmu_ds0: iommu@e6740000 {
+ compatible = "renesas,ipmmu-r8a774a1";
+ reg = <0 0xe6740000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 0>;
+@@ -1008,7 +1008,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_ds1: mmu@e7740000 {
++ ipmmu_ds1: iommu@e7740000 {
+ compatible = "renesas,ipmmu-r8a774a1";
+ reg = <0 0xe7740000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 1>;
+@@ -1016,7 +1016,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_hc: mmu@e6570000 {
++ ipmmu_hc: iommu@e6570000 {
+ compatible = "renesas,ipmmu-r8a774a1";
+ reg = <0 0xe6570000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 2>;
+@@ -1024,7 +1024,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_mm: mmu@e67b0000 {
++ ipmmu_mm: iommu@e67b0000 {
+ compatible = "renesas,ipmmu-r8a774a1";
+ reg = <0 0xe67b0000 0 0x1000>;
+ interrupts = <GIC_SPI 196 IRQ_TYPE_LEVEL_HIGH>,
+@@ -1033,7 +1033,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_mp: mmu@ec670000 {
++ ipmmu_mp: iommu@ec670000 {
+ compatible = "renesas,ipmmu-r8a774a1";
+ reg = <0 0xec670000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 4>;
+@@ -1041,7 +1041,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_pv0: mmu@fd800000 {
++ ipmmu_pv0: iommu@fd800000 {
+ compatible = "renesas,ipmmu-r8a774a1";
+ reg = <0 0xfd800000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 5>;
+@@ -1049,7 +1049,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_pv1: mmu@fd950000 {
++ ipmmu_pv1: iommu@fd950000 {
+ compatible = "renesas,ipmmu-r8a774a1";
+ reg = <0 0xfd950000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 6>;
+@@ -1057,7 +1057,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_vc0: mmu@fe6b0000 {
++ ipmmu_vc0: iommu@fe6b0000 {
+ compatible = "renesas,ipmmu-r8a774a1";
+ reg = <0 0xfe6b0000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 8>;
+@@ -1065,7 +1065,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_vi0: mmu@febd0000 {
++ ipmmu_vi0: iommu@febd0000 {
+ compatible = "renesas,ipmmu-r8a774a1";
+ reg = <0 0xfebd0000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 9>;
+diff --git a/arch/arm64/boot/dts/renesas/r8a774b1.dtsi b/arch/arm64/boot/dts/renesas/r8a774b1.dtsi
+index 3137f735974b..1e51855c7cd3 100644
+--- a/arch/arm64/boot/dts/renesas/r8a774b1.dtsi
++++ b/arch/arm64/boot/dts/renesas/r8a774b1.dtsi
+@@ -874,7 +874,7 @@
+ <&ipmmu_ds1 30>, <&ipmmu_ds1 31>;
+ };
+
+- ipmmu_ds0: mmu@e6740000 {
++ ipmmu_ds0: iommu@e6740000 {
+ compatible = "renesas,ipmmu-r8a774b1";
+ reg = <0 0xe6740000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 0>;
+@@ -882,7 +882,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_ds1: mmu@e7740000 {
++ ipmmu_ds1: iommu@e7740000 {
+ compatible = "renesas,ipmmu-r8a774b1";
+ reg = <0 0xe7740000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 1>;
+@@ -890,7 +890,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_hc: mmu@e6570000 {
++ ipmmu_hc: iommu@e6570000 {
+ compatible = "renesas,ipmmu-r8a774b1";
+ reg = <0 0xe6570000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 2>;
+@@ -898,7 +898,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_mm: mmu@e67b0000 {
++ ipmmu_mm: iommu@e67b0000 {
+ compatible = "renesas,ipmmu-r8a774b1";
+ reg = <0 0xe67b0000 0 0x1000>;
+ interrupts = <GIC_SPI 196 IRQ_TYPE_LEVEL_HIGH>,
+@@ -907,7 +907,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_mp: mmu@ec670000 {
++ ipmmu_mp: iommu@ec670000 {
+ compatible = "renesas,ipmmu-r8a774b1";
+ reg = <0 0xec670000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 4>;
+@@ -915,7 +915,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_pv0: mmu@fd800000 {
++ ipmmu_pv0: iommu@fd800000 {
+ compatible = "renesas,ipmmu-r8a774b1";
+ reg = <0 0xfd800000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 6>;
+@@ -923,7 +923,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_vc0: mmu@fe6b0000 {
++ ipmmu_vc0: iommu@fe6b0000 {
+ compatible = "renesas,ipmmu-r8a774b1";
+ reg = <0 0xfe6b0000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 12>;
+@@ -931,7 +931,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_vi0: mmu@febd0000 {
++ ipmmu_vi0: iommu@febd0000 {
+ compatible = "renesas,ipmmu-r8a774b1";
+ reg = <0 0xfebd0000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 14>;
+@@ -939,7 +939,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_vp0: mmu@fe990000 {
++ ipmmu_vp0: iommu@fe990000 {
+ compatible = "renesas,ipmmu-r8a774b1";
+ reg = <0 0xfe990000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 16>;
+diff --git a/arch/arm64/boot/dts/renesas/r8a774c0.dtsi b/arch/arm64/boot/dts/renesas/r8a774c0.dtsi
+index 22785cbddff5..5c72a7efbb03 100644
+--- a/arch/arm64/boot/dts/renesas/r8a774c0.dtsi
++++ b/arch/arm64/boot/dts/renesas/r8a774c0.dtsi
+@@ -847,7 +847,7 @@
+ <&ipmmu_ds1 30>, <&ipmmu_ds1 31>;
+ };
+
+- ipmmu_ds0: mmu@e6740000 {
++ ipmmu_ds0: iommu@e6740000 {
+ compatible = "renesas,ipmmu-r8a774c0";
+ reg = <0 0xe6740000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 0>;
+@@ -855,7 +855,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_ds1: mmu@e7740000 {
++ ipmmu_ds1: iommu@e7740000 {
+ compatible = "renesas,ipmmu-r8a774c0";
+ reg = <0 0xe7740000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 1>;
+@@ -863,7 +863,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_hc: mmu@e6570000 {
++ ipmmu_hc: iommu@e6570000 {
+ compatible = "renesas,ipmmu-r8a774c0";
+ reg = <0 0xe6570000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 2>;
+@@ -871,7 +871,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_mm: mmu@e67b0000 {
++ ipmmu_mm: iommu@e67b0000 {
+ compatible = "renesas,ipmmu-r8a774c0";
+ reg = <0 0xe67b0000 0 0x1000>;
+ interrupts = <GIC_SPI 196 IRQ_TYPE_LEVEL_HIGH>,
+@@ -880,7 +880,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_mp: mmu@ec670000 {
++ ipmmu_mp: iommu@ec670000 {
+ compatible = "renesas,ipmmu-r8a774c0";
+ reg = <0 0xec670000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 4>;
+@@ -888,7 +888,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_pv0: mmu@fd800000 {
++ ipmmu_pv0: iommu@fd800000 {
+ compatible = "renesas,ipmmu-r8a774c0";
+ reg = <0 0xfd800000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 6>;
+@@ -896,7 +896,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_vc0: mmu@fe6b0000 {
++ ipmmu_vc0: iommu@fe6b0000 {
+ compatible = "renesas,ipmmu-r8a774c0";
+ reg = <0 0xfe6b0000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 12>;
+@@ -904,7 +904,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_vi0: mmu@febd0000 {
++ ipmmu_vi0: iommu@febd0000 {
+ compatible = "renesas,ipmmu-r8a774c0";
+ reg = <0 0xfebd0000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 14>;
+@@ -912,7 +912,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_vp0: mmu@fe990000 {
++ ipmmu_vp0: iommu@fe990000 {
+ compatible = "renesas,ipmmu-r8a774c0";
+ reg = <0 0xfe990000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 16>;
+diff --git a/arch/arm64/boot/dts/renesas/r8a77950.dtsi b/arch/arm64/boot/dts/renesas/r8a77950.dtsi
+index 3975eecd50c4..d716c4386ae9 100644
+--- a/arch/arm64/boot/dts/renesas/r8a77950.dtsi
++++ b/arch/arm64/boot/dts/renesas/r8a77950.dtsi
+@@ -77,7 +77,7 @@
+ /delete-node/ dma-controller@e6460000;
+ /delete-node/ dma-controller@e6470000;
+
+- ipmmu_mp1: mmu@ec680000 {
++ ipmmu_mp1: iommu@ec680000 {
+ compatible = "renesas,ipmmu-r8a7795";
+ reg = <0 0xec680000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 5>;
+@@ -85,7 +85,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_sy: mmu@e7730000 {
++ ipmmu_sy: iommu@e7730000 {
+ compatible = "renesas,ipmmu-r8a7795";
+ reg = <0 0xe7730000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 8>;
+@@ -93,11 +93,11 @@
+ #iommu-cells = <1>;
+ };
+
+- /delete-node/ mmu@fd950000;
+- /delete-node/ mmu@fd960000;
+- /delete-node/ mmu@fd970000;
+- /delete-node/ mmu@febe0000;
+- /delete-node/ mmu@fe980000;
++ /delete-node/ iommu@fd950000;
++ /delete-node/ iommu@fd960000;
++ /delete-node/ iommu@fd970000;
++ /delete-node/ iommu@febe0000;
++ /delete-node/ iommu@fe980000;
+
+ xhci1: usb@ee040000 {
+ compatible = "renesas,xhci-r8a7795", "renesas,rcar-gen3-xhci";
+diff --git a/arch/arm64/boot/dts/renesas/r8a77951.dtsi b/arch/arm64/boot/dts/renesas/r8a77951.dtsi
+index 52229546454c..61d67d9714ab 100644
+--- a/arch/arm64/boot/dts/renesas/r8a77951.dtsi
++++ b/arch/arm64/boot/dts/renesas/r8a77951.dtsi
+@@ -1073,7 +1073,7 @@
+ <&ipmmu_ds1 30>, <&ipmmu_ds1 31>;
+ };
+
+- ipmmu_ds0: mmu@e6740000 {
++ ipmmu_ds0: iommu@e6740000 {
+ compatible = "renesas,ipmmu-r8a7795";
+ reg = <0 0xe6740000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 0>;
+@@ -1081,7 +1081,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_ds1: mmu@e7740000 {
++ ipmmu_ds1: iommu@e7740000 {
+ compatible = "renesas,ipmmu-r8a7795";
+ reg = <0 0xe7740000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 1>;
+@@ -1089,7 +1089,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_hc: mmu@e6570000 {
++ ipmmu_hc: iommu@e6570000 {
+ compatible = "renesas,ipmmu-r8a7795";
+ reg = <0 0xe6570000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 2>;
+@@ -1097,7 +1097,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_ir: mmu@ff8b0000 {
++ ipmmu_ir: iommu@ff8b0000 {
+ compatible = "renesas,ipmmu-r8a7795";
+ reg = <0 0xff8b0000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 3>;
+@@ -1105,7 +1105,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_mm: mmu@e67b0000 {
++ ipmmu_mm: iommu@e67b0000 {
+ compatible = "renesas,ipmmu-r8a7795";
+ reg = <0 0xe67b0000 0 0x1000>;
+ interrupts = <GIC_SPI 196 IRQ_TYPE_LEVEL_HIGH>,
+@@ -1114,7 +1114,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_mp0: mmu@ec670000 {
++ ipmmu_mp0: iommu@ec670000 {
+ compatible = "renesas,ipmmu-r8a7795";
+ reg = <0 0xec670000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 4>;
+@@ -1122,7 +1122,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_pv0: mmu@fd800000 {
++ ipmmu_pv0: iommu@fd800000 {
+ compatible = "renesas,ipmmu-r8a7795";
+ reg = <0 0xfd800000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 6>;
+@@ -1130,7 +1130,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_pv1: mmu@fd950000 {
++ ipmmu_pv1: iommu@fd950000 {
+ compatible = "renesas,ipmmu-r8a7795";
+ reg = <0 0xfd950000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 7>;
+@@ -1138,7 +1138,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_pv2: mmu@fd960000 {
++ ipmmu_pv2: iommu@fd960000 {
+ compatible = "renesas,ipmmu-r8a7795";
+ reg = <0 0xfd960000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 8>;
+@@ -1146,7 +1146,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_pv3: mmu@fd970000 {
++ ipmmu_pv3: iommu@fd970000 {
+ compatible = "renesas,ipmmu-r8a7795";
+ reg = <0 0xfd970000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 9>;
+@@ -1154,7 +1154,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_rt: mmu@ffc80000 {
++ ipmmu_rt: iommu@ffc80000 {
+ compatible = "renesas,ipmmu-r8a7795";
+ reg = <0 0xffc80000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 10>;
+@@ -1162,7 +1162,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_vc0: mmu@fe6b0000 {
++ ipmmu_vc0: iommu@fe6b0000 {
+ compatible = "renesas,ipmmu-r8a7795";
+ reg = <0 0xfe6b0000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 12>;
+@@ -1170,7 +1170,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_vc1: mmu@fe6f0000 {
++ ipmmu_vc1: iommu@fe6f0000 {
+ compatible = "renesas,ipmmu-r8a7795";
+ reg = <0 0xfe6f0000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 13>;
+@@ -1178,7 +1178,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_vi0: mmu@febd0000 {
++ ipmmu_vi0: iommu@febd0000 {
+ compatible = "renesas,ipmmu-r8a7795";
+ reg = <0 0xfebd0000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 14>;
+@@ -1186,7 +1186,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_vi1: mmu@febe0000 {
++ ipmmu_vi1: iommu@febe0000 {
+ compatible = "renesas,ipmmu-r8a7795";
+ reg = <0 0xfebe0000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 15>;
+@@ -1194,7 +1194,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_vp0: mmu@fe990000 {
++ ipmmu_vp0: iommu@fe990000 {
+ compatible = "renesas,ipmmu-r8a7795";
+ reg = <0 0xfe990000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 16>;
+@@ -1202,7 +1202,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_vp1: mmu@fe980000 {
++ ipmmu_vp1: iommu@fe980000 {
+ compatible = "renesas,ipmmu-r8a7795";
+ reg = <0 0xfe980000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 17>;
+diff --git a/arch/arm64/boot/dts/renesas/r8a77960.dtsi b/arch/arm64/boot/dts/renesas/r8a77960.dtsi
+index 31282367d3ac..33bf62acffbb 100644
+--- a/arch/arm64/boot/dts/renesas/r8a77960.dtsi
++++ b/arch/arm64/boot/dts/renesas/r8a77960.dtsi
+@@ -997,7 +997,7 @@
+ <&ipmmu_ds1 30>, <&ipmmu_ds1 31>;
+ };
+
+- ipmmu_ds0: mmu@e6740000 {
++ ipmmu_ds0: iommu@e6740000 {
+ compatible = "renesas,ipmmu-r8a7796";
+ reg = <0 0xe6740000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 0>;
+@@ -1005,7 +1005,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_ds1: mmu@e7740000 {
++ ipmmu_ds1: iommu@e7740000 {
+ compatible = "renesas,ipmmu-r8a7796";
+ reg = <0 0xe7740000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 1>;
+@@ -1013,7 +1013,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_hc: mmu@e6570000 {
++ ipmmu_hc: iommu@e6570000 {
+ compatible = "renesas,ipmmu-r8a7796";
+ reg = <0 0xe6570000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 2>;
+@@ -1021,7 +1021,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_ir: mmu@ff8b0000 {
++ ipmmu_ir: iommu@ff8b0000 {
+ compatible = "renesas,ipmmu-r8a7796";
+ reg = <0 0xff8b0000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 3>;
+@@ -1029,7 +1029,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_mm: mmu@e67b0000 {
++ ipmmu_mm: iommu@e67b0000 {
+ compatible = "renesas,ipmmu-r8a7796";
+ reg = <0 0xe67b0000 0 0x1000>;
+ interrupts = <GIC_SPI 196 IRQ_TYPE_LEVEL_HIGH>,
+@@ -1038,7 +1038,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_mp: mmu@ec670000 {
++ ipmmu_mp: iommu@ec670000 {
+ compatible = "renesas,ipmmu-r8a7796";
+ reg = <0 0xec670000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 4>;
+@@ -1046,7 +1046,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_pv0: mmu@fd800000 {
++ ipmmu_pv0: iommu@fd800000 {
+ compatible = "renesas,ipmmu-r8a7796";
+ reg = <0 0xfd800000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 5>;
+@@ -1054,7 +1054,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_pv1: mmu@fd950000 {
++ ipmmu_pv1: iommu@fd950000 {
+ compatible = "renesas,ipmmu-r8a7796";
+ reg = <0 0xfd950000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 6>;
+@@ -1062,7 +1062,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_rt: mmu@ffc80000 {
++ ipmmu_rt: iommu@ffc80000 {
+ compatible = "renesas,ipmmu-r8a7796";
+ reg = <0 0xffc80000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 7>;
+@@ -1070,7 +1070,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_vc0: mmu@fe6b0000 {
++ ipmmu_vc0: iommu@fe6b0000 {
+ compatible = "renesas,ipmmu-r8a7796";
+ reg = <0 0xfe6b0000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 8>;
+@@ -1078,7 +1078,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_vi0: mmu@febd0000 {
++ ipmmu_vi0: iommu@febd0000 {
+ compatible = "renesas,ipmmu-r8a7796";
+ reg = <0 0xfebd0000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 9>;
+diff --git a/arch/arm64/boot/dts/renesas/r8a77965.dtsi b/arch/arm64/boot/dts/renesas/r8a77965.dtsi
+index d82dd4e67b62..6f7ab39fd282 100644
+--- a/arch/arm64/boot/dts/renesas/r8a77965.dtsi
++++ b/arch/arm64/boot/dts/renesas/r8a77965.dtsi
+@@ -867,7 +867,7 @@
+ <&ipmmu_ds1 30>, <&ipmmu_ds1 31>;
+ };
+
+- ipmmu_ds0: mmu@e6740000 {
++ ipmmu_ds0: iommu@e6740000 {
+ compatible = "renesas,ipmmu-r8a77965";
+ reg = <0 0xe6740000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 0>;
+@@ -875,7 +875,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_ds1: mmu@e7740000 {
++ ipmmu_ds1: iommu@e7740000 {
+ compatible = "renesas,ipmmu-r8a77965";
+ reg = <0 0xe7740000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 1>;
+@@ -883,7 +883,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_hc: mmu@e6570000 {
++ ipmmu_hc: iommu@e6570000 {
+ compatible = "renesas,ipmmu-r8a77965";
+ reg = <0 0xe6570000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 2>;
+@@ -891,7 +891,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_mm: mmu@e67b0000 {
++ ipmmu_mm: iommu@e67b0000 {
+ compatible = "renesas,ipmmu-r8a77965";
+ reg = <0 0xe67b0000 0 0x1000>;
+ interrupts = <GIC_SPI 196 IRQ_TYPE_LEVEL_HIGH>,
+@@ -900,7 +900,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_mp: mmu@ec670000 {
++ ipmmu_mp: iommu@ec670000 {
+ compatible = "renesas,ipmmu-r8a77965";
+ reg = <0 0xec670000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 4>;
+@@ -908,7 +908,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_pv0: mmu@fd800000 {
++ ipmmu_pv0: iommu@fd800000 {
+ compatible = "renesas,ipmmu-r8a77965";
+ reg = <0 0xfd800000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 6>;
+@@ -916,7 +916,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_rt: mmu@ffc80000 {
++ ipmmu_rt: iommu@ffc80000 {
+ compatible = "renesas,ipmmu-r8a77965";
+ reg = <0 0xffc80000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 10>;
+@@ -924,7 +924,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_vc0: mmu@fe6b0000 {
++ ipmmu_vc0: iommu@fe6b0000 {
+ compatible = "renesas,ipmmu-r8a77965";
+ reg = <0 0xfe6b0000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 12>;
+@@ -932,7 +932,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_vi0: mmu@febd0000 {
++ ipmmu_vi0: iommu@febd0000 {
+ compatible = "renesas,ipmmu-r8a77965";
+ reg = <0 0xfebd0000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 14>;
+@@ -940,7 +940,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_vp0: mmu@fe990000 {
++ ipmmu_vp0: iommu@fe990000 {
+ compatible = "renesas,ipmmu-r8a77965";
+ reg = <0 0xfe990000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 16>;
+diff --git a/arch/arm64/boot/dts/renesas/r8a77970.dtsi b/arch/arm64/boot/dts/renesas/r8a77970.dtsi
+index a009c0ebc8b4..bd95ecb1b40d 100644
+--- a/arch/arm64/boot/dts/renesas/r8a77970.dtsi
++++ b/arch/arm64/boot/dts/renesas/r8a77970.dtsi
+@@ -985,7 +985,7 @@
+ <&ipmmu_ds1 22>, <&ipmmu_ds1 23>;
+ };
+
+- ipmmu_ds1: mmu@e7740000 {
++ ipmmu_ds1: iommu@e7740000 {
+ compatible = "renesas,ipmmu-r8a77970";
+ reg = <0 0xe7740000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 0>;
+@@ -993,7 +993,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_ir: mmu@ff8b0000 {
++ ipmmu_ir: iommu@ff8b0000 {
+ compatible = "renesas,ipmmu-r8a77970";
+ reg = <0 0xff8b0000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 3>;
+@@ -1001,7 +1001,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_mm: mmu@e67b0000 {
++ ipmmu_mm: iommu@e67b0000 {
+ compatible = "renesas,ipmmu-r8a77970";
+ reg = <0 0xe67b0000 0 0x1000>;
+ interrupts = <GIC_SPI 196 IRQ_TYPE_LEVEL_HIGH>,
+@@ -1010,7 +1010,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_rt: mmu@ffc80000 {
++ ipmmu_rt: iommu@ffc80000 {
+ compatible = "renesas,ipmmu-r8a77970";
+ reg = <0 0xffc80000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 7>;
+@@ -1018,7 +1018,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_vi0: mmu@febd0000 {
++ ipmmu_vi0: iommu@febd0000 {
+ compatible = "renesas,ipmmu-r8a77970";
+ reg = <0 0xfebd0000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 9>;
+diff --git a/arch/arm64/boot/dts/renesas/r8a77980.dtsi b/arch/arm64/boot/dts/renesas/r8a77980.dtsi
+index d672b320bc14..387e6d99f2f3 100644
+--- a/arch/arm64/boot/dts/renesas/r8a77980.dtsi
++++ b/arch/arm64/boot/dts/renesas/r8a77980.dtsi
+@@ -1266,7 +1266,7 @@
+ status = "disabled";
+ };
+
+- ipmmu_ds1: mmu@e7740000 {
++ ipmmu_ds1: iommu@e7740000 {
+ compatible = "renesas,ipmmu-r8a77980";
+ reg = <0 0xe7740000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 0>;
+@@ -1274,7 +1274,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_ir: mmu@ff8b0000 {
++ ipmmu_ir: iommu@ff8b0000 {
+ compatible = "renesas,ipmmu-r8a77980";
+ reg = <0 0xff8b0000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 3>;
+@@ -1282,7 +1282,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_mm: mmu@e67b0000 {
++ ipmmu_mm: iommu@e67b0000 {
+ compatible = "renesas,ipmmu-r8a77980";
+ reg = <0 0xe67b0000 0 0x1000>;
+ interrupts = <GIC_SPI 196 IRQ_TYPE_LEVEL_HIGH>,
+@@ -1291,7 +1291,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_rt: mmu@ffc80000 {
++ ipmmu_rt: iommu@ffc80000 {
+ compatible = "renesas,ipmmu-r8a77980";
+ reg = <0 0xffc80000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 10>;
+@@ -1299,7 +1299,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_vc0: mmu@fe990000 {
++ ipmmu_vc0: iommu@fe990000 {
+ compatible = "renesas,ipmmu-r8a77980";
+ reg = <0 0xfe990000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 12>;
+@@ -1307,7 +1307,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_vi0: mmu@febd0000 {
++ ipmmu_vi0: iommu@febd0000 {
+ compatible = "renesas,ipmmu-r8a77980";
+ reg = <0 0xfebd0000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 14>;
+@@ -1315,7 +1315,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_vip0: mmu@e7b00000 {
++ ipmmu_vip0: iommu@e7b00000 {
+ compatible = "renesas,ipmmu-r8a77980";
+ reg = <0 0xe7b00000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 4>;
+@@ -1323,7 +1323,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_vip1: mmu@e7960000 {
++ ipmmu_vip1: iommu@e7960000 {
+ compatible = "renesas,ipmmu-r8a77980";
+ reg = <0 0xe7960000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 11>;
+diff --git a/arch/arm64/boot/dts/renesas/r8a77990.dtsi b/arch/arm64/boot/dts/renesas/r8a77990.dtsi
+index 1543f18e834f..cd11f24744d4 100644
+--- a/arch/arm64/boot/dts/renesas/r8a77990.dtsi
++++ b/arch/arm64/boot/dts/renesas/r8a77990.dtsi
+@@ -817,7 +817,7 @@
+ <&ipmmu_ds1 30>, <&ipmmu_ds1 31>;
+ };
+
+- ipmmu_ds0: mmu@e6740000 {
++ ipmmu_ds0: iommu@e6740000 {
+ compatible = "renesas,ipmmu-r8a77990";
+ reg = <0 0xe6740000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 0>;
+@@ -825,7 +825,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_ds1: mmu@e7740000 {
++ ipmmu_ds1: iommu@e7740000 {
+ compatible = "renesas,ipmmu-r8a77990";
+ reg = <0 0xe7740000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 1>;
+@@ -833,7 +833,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_hc: mmu@e6570000 {
++ ipmmu_hc: iommu@e6570000 {
+ compatible = "renesas,ipmmu-r8a77990";
+ reg = <0 0xe6570000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 2>;
+@@ -841,7 +841,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_mm: mmu@e67b0000 {
++ ipmmu_mm: iommu@e67b0000 {
+ compatible = "renesas,ipmmu-r8a77990";
+ reg = <0 0xe67b0000 0 0x1000>;
+ interrupts = <GIC_SPI 196 IRQ_TYPE_LEVEL_HIGH>,
+@@ -850,7 +850,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_mp: mmu@ec670000 {
++ ipmmu_mp: iommu@ec670000 {
+ compatible = "renesas,ipmmu-r8a77990";
+ reg = <0 0xec670000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 4>;
+@@ -858,7 +858,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_pv0: mmu@fd800000 {
++ ipmmu_pv0: iommu@fd800000 {
+ compatible = "renesas,ipmmu-r8a77990";
+ reg = <0 0xfd800000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 6>;
+@@ -866,7 +866,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_rt: mmu@ffc80000 {
++ ipmmu_rt: iommu@ffc80000 {
+ compatible = "renesas,ipmmu-r8a77990";
+ reg = <0 0xffc80000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 10>;
+@@ -874,7 +874,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_vc0: mmu@fe6b0000 {
++ ipmmu_vc0: iommu@fe6b0000 {
+ compatible = "renesas,ipmmu-r8a77990";
+ reg = <0 0xfe6b0000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 12>;
+@@ -882,7 +882,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_vi0: mmu@febd0000 {
++ ipmmu_vi0: iommu@febd0000 {
+ compatible = "renesas,ipmmu-r8a77990";
+ reg = <0 0xfebd0000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 14>;
+@@ -890,7 +890,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_vp0: mmu@fe990000 {
++ ipmmu_vp0: iommu@fe990000 {
+ compatible = "renesas,ipmmu-r8a77990";
+ reg = <0 0xfe990000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 16>;
+diff --git a/arch/arm64/boot/dts/renesas/r8a77995.dtsi b/arch/arm64/boot/dts/renesas/r8a77995.dtsi
+index e8d2290fe79d..e5617ec0f49c 100644
+--- a/arch/arm64/boot/dts/renesas/r8a77995.dtsi
++++ b/arch/arm64/boot/dts/renesas/r8a77995.dtsi
+@@ -507,7 +507,7 @@
+ <&ipmmu_ds1 22>, <&ipmmu_ds1 23>;
+ };
+
+- ipmmu_ds0: mmu@e6740000 {
++ ipmmu_ds0: iommu@e6740000 {
+ compatible = "renesas,ipmmu-r8a77995";
+ reg = <0 0xe6740000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 0>;
+@@ -515,7 +515,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_ds1: mmu@e7740000 {
++ ipmmu_ds1: iommu@e7740000 {
+ compatible = "renesas,ipmmu-r8a77995";
+ reg = <0 0xe7740000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 1>;
+@@ -523,7 +523,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_hc: mmu@e6570000 {
++ ipmmu_hc: iommu@e6570000 {
+ compatible = "renesas,ipmmu-r8a77995";
+ reg = <0 0xe6570000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 2>;
+@@ -531,7 +531,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_mm: mmu@e67b0000 {
++ ipmmu_mm: iommu@e67b0000 {
+ compatible = "renesas,ipmmu-r8a77995";
+ reg = <0 0xe67b0000 0 0x1000>;
+ interrupts = <GIC_SPI 196 IRQ_TYPE_LEVEL_HIGH>,
+@@ -540,7 +540,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_mp: mmu@ec670000 {
++ ipmmu_mp: iommu@ec670000 {
+ compatible = "renesas,ipmmu-r8a77995";
+ reg = <0 0xec670000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 4>;
+@@ -548,7 +548,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_pv0: mmu@fd800000 {
++ ipmmu_pv0: iommu@fd800000 {
+ compatible = "renesas,ipmmu-r8a77995";
+ reg = <0 0xfd800000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 6>;
+@@ -556,7 +556,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_rt: mmu@ffc80000 {
++ ipmmu_rt: iommu@ffc80000 {
+ compatible = "renesas,ipmmu-r8a77995";
+ reg = <0 0xffc80000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 10>;
+@@ -564,7 +564,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_vc0: mmu@fe6b0000 {
++ ipmmu_vc0: iommu@fe6b0000 {
+ compatible = "renesas,ipmmu-r8a77995";
+ reg = <0 0xfe6b0000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 12>;
+@@ -572,7 +572,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_vi0: mmu@febd0000 {
++ ipmmu_vi0: iommu@febd0000 {
+ compatible = "renesas,ipmmu-r8a77995";
+ reg = <0 0xfebd0000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 14>;
+@@ -580,7 +580,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_vp0: mmu@fe990000 {
++ ipmmu_vp0: iommu@fe990000 {
+ compatible = "renesas,ipmmu-r8a77995";
+ reg = <0 0xfe990000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 16>;
+diff --git a/arch/arm64/kernel/ftrace.c b/arch/arm64/kernel/ftrace.c
+index 8618faa82e6d..86a5cf9bc19a 100644
+--- a/arch/arm64/kernel/ftrace.c
++++ b/arch/arm64/kernel/ftrace.c
+@@ -69,7 +69,8 @@ static struct plt_entry *get_ftrace_plt(struct module *mod, unsigned long addr)
+
+ if (addr == FTRACE_ADDR)
+ return &plt[FTRACE_PLT_IDX];
+- if (addr == FTRACE_REGS_ADDR && IS_ENABLED(CONFIG_FTRACE_WITH_REGS))
++ if (addr == FTRACE_REGS_ADDR &&
++ IS_ENABLED(CONFIG_DYNAMIC_FTRACE_WITH_REGS))
+ return &plt[FTRACE_REGS_PLT_IDX];
+ #endif
+ return NULL;
+diff --git a/arch/arm64/kernel/hw_breakpoint.c b/arch/arm64/kernel/hw_breakpoint.c
+index 0b727edf4104..af234a1e08b7 100644
+--- a/arch/arm64/kernel/hw_breakpoint.c
++++ b/arch/arm64/kernel/hw_breakpoint.c
+@@ -730,6 +730,27 @@ static u64 get_distance_from_watchpoint(unsigned long addr, u64 val,
+ return 0;
+ }
+
++static int watchpoint_report(struct perf_event *wp, unsigned long addr,
++ struct pt_regs *regs)
++{
++ int step = is_default_overflow_handler(wp);
++ struct arch_hw_breakpoint *info = counter_arch_bp(wp);
++
++ info->trigger = addr;
++
++ /*
++ * If we triggered a user watchpoint from a uaccess routine, then
++ * handle the stepping ourselves since userspace really can't help
++ * us with this.
++ */
++ if (!user_mode(regs) && info->ctrl.privilege == AARCH64_BREAKPOINT_EL0)
++ step = 1;
++ else
++ perf_bp_event(wp, regs);
++
++ return step;
++}
++
+ static int watchpoint_handler(unsigned long addr, unsigned int esr,
+ struct pt_regs *regs)
+ {
+@@ -739,7 +760,6 @@ static int watchpoint_handler(unsigned long addr, unsigned int esr,
+ u64 val;
+ struct perf_event *wp, **slots;
+ struct debug_info *debug_info;
+- struct arch_hw_breakpoint *info;
+ struct arch_hw_breakpoint_ctrl ctrl;
+
+ slots = this_cpu_ptr(wp_on_reg);
+@@ -777,25 +797,13 @@ static int watchpoint_handler(unsigned long addr, unsigned int esr,
+ if (dist != 0)
+ continue;
+
+- info = counter_arch_bp(wp);
+- info->trigger = addr;
+- perf_bp_event(wp, regs);
+-
+- /* Do we need to handle the stepping? */
+- if (is_default_overflow_handler(wp))
+- step = 1;
++ step = watchpoint_report(wp, addr, regs);
+ }
+- if (min_dist > 0 && min_dist != -1) {
+- /* No exact match found. */
+- wp = slots[closest_match];
+- info = counter_arch_bp(wp);
+- info->trigger = addr;
+- perf_bp_event(wp, regs);
+
+- /* Do we need to handle the stepping? */
+- if (is_default_overflow_handler(wp))
+- step = 1;
+- }
++ /* No exact match found? */
++ if (min_dist > 0 && min_dist != -1)
++ step = watchpoint_report(slots[closest_match], addr, regs);
++
+ rcu_read_unlock();
+
+ if (!step)
+diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
+index e42727e3568e..3f9010167468 100644
+--- a/arch/arm64/mm/init.c
++++ b/arch/arm64/mm/init.c
+@@ -458,11 +458,6 @@ void __init arm64_memblock_init(void)
+ high_memory = __va(memblock_end_of_DRAM() - 1) + 1;
+
+ dma_contiguous_reserve(arm64_dma32_phys_limit);
+-
+-#ifdef CONFIG_ARM64_4K_PAGES
+- hugetlb_cma_reserve(PUD_SHIFT - PAGE_SHIFT);
+-#endif
+-
+ }
+
+ void __init bootmem_init(void)
+@@ -478,6 +473,16 @@ void __init bootmem_init(void)
+ min_low_pfn = min;
+
+ arm64_numa_init();
++
++ /*
++ * must be done after arm64_numa_init() which calls numa_init() to
++ * initialize node_online_map that gets used in hugetlb_cma_reserve()
++ * while allocating required CMA size across online nodes.
++ */
++#ifdef CONFIG_ARM64_4K_PAGES
++ hugetlb_cma_reserve(PUD_SHIFT - PAGE_SHIFT);
++#endif
++
+ /*
+ * Sparsemem tries to allocate bootmem in memory_present(), so must be
+ * done after the fixed reservations.
+diff --git a/arch/m68k/coldfire/pci.c b/arch/m68k/coldfire/pci.c
+index 62b0eb6cf69a..84eab0f5e00a 100644
+--- a/arch/m68k/coldfire/pci.c
++++ b/arch/m68k/coldfire/pci.c
+@@ -216,8 +216,10 @@ static int __init mcf_pci_init(void)
+
+ /* Keep a virtual mapping to IO/config space active */
+ iospace = (unsigned long) ioremap(PCI_IO_PA, PCI_IO_SIZE);
+- if (iospace == 0)
++ if (iospace == 0) {
++ pci_free_host_bridge(bridge);
+ return -ENODEV;
++ }
+ pr_info("Coldfire: PCI IO/config window mapped to 0x%x\n",
+ (u32) iospace);
+
+diff --git a/arch/openrisc/kernel/entry.S b/arch/openrisc/kernel/entry.S
+index e4a78571f883..c6481cfc5220 100644
+--- a/arch/openrisc/kernel/entry.S
++++ b/arch/openrisc/kernel/entry.S
+@@ -1166,13 +1166,13 @@ ENTRY(__sys_clone)
+ l.movhi r29,hi(sys_clone)
+ l.ori r29,r29,lo(sys_clone)
+ l.j _fork_save_extra_regs_and_call
+- l.addi r7,r1,0
++ l.nop
+
+ ENTRY(__sys_fork)
+ l.movhi r29,hi(sys_fork)
+ l.ori r29,r29,lo(sys_fork)
+ l.j _fork_save_extra_regs_and_call
+- l.addi r3,r1,0
++ l.nop
+
+ ENTRY(sys_rt_sigreturn)
+ l.jal _sys_rt_sigreturn
+diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
+index 62aca9efbbbe..310957b988e3 100644
+--- a/arch/powerpc/Kconfig
++++ b/arch/powerpc/Kconfig
+@@ -773,6 +773,7 @@ config THREAD_SHIFT
+ range 13 15
+ default "15" if PPC_256K_PAGES
+ default "14" if PPC64
++ default "14" if KASAN
+ default "13"
+ help
+ Used to define the stack size. The default is almost always what you
+diff --git a/arch/powerpc/configs/adder875_defconfig b/arch/powerpc/configs/adder875_defconfig
+index f55e23cb176c..5326bc739279 100644
+--- a/arch/powerpc/configs/adder875_defconfig
++++ b/arch/powerpc/configs/adder875_defconfig
+@@ -10,7 +10,6 @@ CONFIG_EXPERT=y
+ # CONFIG_BLK_DEV_BSG is not set
+ CONFIG_PARTITION_ADVANCED=y
+ CONFIG_PPC_ADDER875=y
+-CONFIG_8xx_COPYBACK=y
+ CONFIG_GEN_RTC=y
+ CONFIG_HZ_1000=y
+ # CONFIG_SECCOMP is not set
+diff --git a/arch/powerpc/configs/ep88xc_defconfig b/arch/powerpc/configs/ep88xc_defconfig
+index 0e2e5e81a359..f5c3e72da719 100644
+--- a/arch/powerpc/configs/ep88xc_defconfig
++++ b/arch/powerpc/configs/ep88xc_defconfig
+@@ -12,7 +12,6 @@ CONFIG_EXPERT=y
+ # CONFIG_BLK_DEV_BSG is not set
+ CONFIG_PARTITION_ADVANCED=y
+ CONFIG_PPC_EP88XC=y
+-CONFIG_8xx_COPYBACK=y
+ CONFIG_GEN_RTC=y
+ CONFIG_HZ_100=y
+ # CONFIG_SECCOMP is not set
+diff --git a/arch/powerpc/configs/mpc866_ads_defconfig b/arch/powerpc/configs/mpc866_ads_defconfig
+index 5320735395e7..5c56d36cdfc5 100644
+--- a/arch/powerpc/configs/mpc866_ads_defconfig
++++ b/arch/powerpc/configs/mpc866_ads_defconfig
+@@ -12,7 +12,6 @@ CONFIG_EXPERT=y
+ # CONFIG_BLK_DEV_BSG is not set
+ CONFIG_PARTITION_ADVANCED=y
+ CONFIG_MPC86XADS=y
+-CONFIG_8xx_COPYBACK=y
+ CONFIG_GEN_RTC=y
+ CONFIG_HZ_1000=y
+ CONFIG_MATH_EMULATION=y
+diff --git a/arch/powerpc/configs/mpc885_ads_defconfig b/arch/powerpc/configs/mpc885_ads_defconfig
+index 82a008c04eae..949ff9ccda5e 100644
+--- a/arch/powerpc/configs/mpc885_ads_defconfig
++++ b/arch/powerpc/configs/mpc885_ads_defconfig
+@@ -11,7 +11,6 @@ CONFIG_EXPERT=y
+ # CONFIG_VM_EVENT_COUNTERS is not set
+ # CONFIG_BLK_DEV_BSG is not set
+ CONFIG_PARTITION_ADVANCED=y
+-CONFIG_8xx_COPYBACK=y
+ CONFIG_GEN_RTC=y
+ CONFIG_HZ_100=y
+ # CONFIG_SECCOMP is not set
+diff --git a/arch/powerpc/configs/tqm8xx_defconfig b/arch/powerpc/configs/tqm8xx_defconfig
+index eda8bfb2d0a3..77857d513022 100644
+--- a/arch/powerpc/configs/tqm8xx_defconfig
++++ b/arch/powerpc/configs/tqm8xx_defconfig
+@@ -15,7 +15,6 @@ CONFIG_MODULE_SRCVERSION_ALL=y
+ # CONFIG_BLK_DEV_BSG is not set
+ CONFIG_PARTITION_ADVANCED=y
+ CONFIG_TQM8XX=y
+-CONFIG_8xx_COPYBACK=y
+ # CONFIG_8xx_CPU15 is not set
+ CONFIG_GEN_RTC=y
+ CONFIG_HZ_100=y
+diff --git a/arch/powerpc/include/asm/book3s/64/kup-radix.h b/arch/powerpc/include/asm/book3s/64/kup-radix.h
+index 3bcef989a35d..101d60f16d46 100644
+--- a/arch/powerpc/include/asm/book3s/64/kup-radix.h
++++ b/arch/powerpc/include/asm/book3s/64/kup-radix.h
+@@ -16,7 +16,9 @@
+ #ifdef CONFIG_PPC_KUAP
+ BEGIN_MMU_FTR_SECTION_NESTED(67)
+ ld \gpr, STACK_REGS_KUAP(r1)
++ isync
+ mtspr SPRN_AMR, \gpr
++ /* No isync required, see kuap_restore_amr() */
+ END_MMU_FTR_SECTION_NESTED_IFSET(MMU_FTR_RADIX_KUAP, 67)
+ #endif
+ .endm
+@@ -62,8 +64,15 @@
+
+ static inline void kuap_restore_amr(struct pt_regs *regs)
+ {
+- if (mmu_has_feature(MMU_FTR_RADIX_KUAP))
++ if (mmu_has_feature(MMU_FTR_RADIX_KUAP)) {
++ isync();
+ mtspr(SPRN_AMR, regs->kuap);
++ /*
++ * No isync required here because we are about to RFI back to
++ * previous context before any user accesses would be made,
++ * which is a CSI.
++ */
++ }
+ }
+
+ static inline void kuap_check_amr(void)
+diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h
+index 368b136517e0..2838b98bc6df 100644
+--- a/arch/powerpc/include/asm/book3s/64/pgtable.h
++++ b/arch/powerpc/include/asm/book3s/64/pgtable.h
+@@ -998,10 +998,25 @@ extern struct page *pgd_page(pgd_t pgd);
+ #define pud_page_vaddr(pud) __va(pud_val(pud) & ~PUD_MASKED_BITS)
+ #define pgd_page_vaddr(pgd) __va(pgd_val(pgd) & ~PGD_MASKED_BITS)
+
+-#define pgd_index(address) (((address) >> (PGDIR_SHIFT)) & (PTRS_PER_PGD - 1))
+-#define pud_index(address) (((address) >> (PUD_SHIFT)) & (PTRS_PER_PUD - 1))
+-#define pmd_index(address) (((address) >> (PMD_SHIFT)) & (PTRS_PER_PMD - 1))
+-#define pte_index(address) (((address) >> (PAGE_SHIFT)) & (PTRS_PER_PTE - 1))
++static inline unsigned long pgd_index(unsigned long address)
++{
++ return (address >> PGDIR_SHIFT) & (PTRS_PER_PGD - 1);
++}
++
++static inline unsigned long pud_index(unsigned long address)
++{
++ return (address >> PUD_SHIFT) & (PTRS_PER_PUD - 1);
++}
++
++static inline unsigned long pmd_index(unsigned long address)
++{
++ return (address >> PMD_SHIFT) & (PTRS_PER_PMD - 1);
++}
++
++static inline unsigned long pte_index(unsigned long address)
++{
++ return (address >> PAGE_SHIFT) & (PTRS_PER_PTE - 1);
++}
+
+ /*
+ * Find an entry in a page-table-directory. We combine the address region
+diff --git a/arch/powerpc/include/asm/nohash/32/mmu-8xx.h b/arch/powerpc/include/asm/nohash/32/mmu-8xx.h
+index 76af5b0cb16e..26b7cee34dfe 100644
+--- a/arch/powerpc/include/asm/nohash/32/mmu-8xx.h
++++ b/arch/powerpc/include/asm/nohash/32/mmu-8xx.h
+@@ -19,7 +19,6 @@
+ #define MI_RSV4I 0x08000000 /* Reserve 4 TLB entries */
+ #define MI_PPCS 0x02000000 /* Use MI_RPN prob/priv state */
+ #define MI_IDXMASK 0x00001f00 /* TLB index to be loaded */
+-#define MI_RESETVAL 0x00000000 /* Value of register at reset */
+
+ /* These are the Ks and Kp from the PowerPC books. For proper operation,
+ * Ks = 0, Kp = 1.
+@@ -95,7 +94,6 @@
+ #define MD_TWAM 0x04000000 /* Use 4K page hardware assist */
+ #define MD_PPCS 0x02000000 /* Use MI_RPN prob/priv state */
+ #define MD_IDXMASK 0x00001f00 /* TLB index to be loaded */
+-#define MD_RESETVAL 0x04000000 /* Value of register at reset */
+
+ #define SPRN_M_CASID 793 /* Address space ID (context) to match */
+ #define MC_ASIDMASK 0x0000000f /* Bits used for ASID value */
+diff --git a/arch/powerpc/include/asm/processor.h b/arch/powerpc/include/asm/processor.h
+index eedcbfb9a6ff..c220cb9eccad 100644
+--- a/arch/powerpc/include/asm/processor.h
++++ b/arch/powerpc/include/asm/processor.h
+@@ -301,7 +301,6 @@ struct thread_struct {
+ #else
+ #define INIT_THREAD { \
+ .ksp = INIT_SP, \
+- .regs = (struct pt_regs *)INIT_SP - 1, /* XXX bogus, I think */ \
+ .addr_limit = KERNEL_DS, \
+ .fpexc_mode = 0, \
+ .fscr = FSCR_TAR | FSCR_EBB \
+diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S
+index ebeebab74b56..d9ddce40bed8 100644
+--- a/arch/powerpc/kernel/exceptions-64s.S
++++ b/arch/powerpc/kernel/exceptions-64s.S
+@@ -270,7 +270,7 @@ BEGIN_FTR_SECTION
+ END_FTR_SECTION_IFSET(CPU_FTR_CFAR)
+ .endif
+
+- ld r10,PACA_EXGEN+EX_CTR(r13)
++ ld r10,IAREA+EX_CTR(r13)
+ mtctr r10
+ BEGIN_FTR_SECTION
+ ld r10,IAREA+EX_PPR(r13)
+@@ -298,7 +298,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_HAS_PPR)
+
+ .if IKVM_SKIP
+ 89: mtocrf 0x80,r9
+- ld r10,PACA_EXGEN+EX_CTR(r13)
++ ld r10,IAREA+EX_CTR(r13)
+ mtctr r10
+ ld r9,IAREA+EX_R9(r13)
+ ld r10,IAREA+EX_R10(r13)
+@@ -1117,11 +1117,30 @@ END_FTR_SECTION_IFSET(CPU_FTR_HVMODE)
+ li r10,MSR_RI
+ mtmsrd r10,1
+
++ /*
++ * Set IRQS_ALL_DISABLED and save PACAIRQHAPPENED (see
++ * system_reset_common)
++ */
++ li r10,IRQS_ALL_DISABLED
++ stb r10,PACAIRQSOFTMASK(r13)
++ lbz r10,PACAIRQHAPPENED(r13)
++ std r10,RESULT(r1)
++ ori r10,r10,PACA_IRQ_HARD_DIS
++ stb r10,PACAIRQHAPPENED(r13)
++
+ addi r3,r1,STACK_FRAME_OVERHEAD
+ bl machine_check_early
+ std r3,RESULT(r1) /* Save result */
+ ld r12,_MSR(r1)
+
++ /*
++ * Restore soft mask settings.
++ */
++ ld r10,RESULT(r1)
++ stb r10,PACAIRQHAPPENED(r13)
++ ld r10,SOFTE(r1)
++ stb r10,PACAIRQSOFTMASK(r13)
++
+ #ifdef CONFIG_PPC_P7_NAP
+ /*
+ * Check if thread was in power saving mode. We come here when any
+@@ -1225,17 +1244,19 @@ EXC_COMMON_BEGIN(machine_check_idle_common)
+ bl machine_check_queue_event
+
+ /*
+- * We have not used any non-volatile GPRs here, and as a rule
+- * most exception code including machine check does not.
+- * Therefore PACA_NAPSTATELOST does not need to be set. Idle
+- * wakeup will restore volatile registers.
++ * GPR-loss wakeups are relatively straightforward, because the
++ * idle sleep code has saved all non-volatile registers on its
++ * own stack, and r1 in PACAR1.
+ *
+- * Load the original SRR1 into r3 for pnv_powersave_wakeup_mce.
++ * For no-loss wakeups the r1 and lr registers used by the
++ * early machine check handler have to be restored first. r2 is
++ * the kernel TOC, so no need to restore it.
+ *
+ * Then decrement MCE nesting after finishing with the stack.
+ */
+ ld r3,_MSR(r1)
+ ld r4,_LINK(r1)
++ ld r1,GPR1(r1)
+
+ lhz r11,PACA_IN_MCE(r13)
+ subi r11,r11,1
+@@ -1244,7 +1265,7 @@ EXC_COMMON_BEGIN(machine_check_idle_common)
+ mtlr r4
+ rlwinm r10,r3,47-31,30,31
+ cmpwi cr1,r10,2
+- bltlr cr1 /* no state loss, return to idle caller */
++ bltlr cr1 /* no state loss, return to idle caller with r3=SRR1 */
+ b idle_return_gpr_loss
+ #endif
+
+diff --git a/arch/powerpc/kernel/head_64.S b/arch/powerpc/kernel/head_64.S
+index ddfbd02140d9..0e05a9a47a4b 100644
+--- a/arch/powerpc/kernel/head_64.S
++++ b/arch/powerpc/kernel/head_64.S
+@@ -947,15 +947,8 @@ start_here_multiplatform:
+ std r0,0(r4)
+ #endif
+
+- /* The following gets the stack set up with the regs */
+- /* pointing to the real addr of the kernel stack. This is */
+- /* all done to support the C function call below which sets */
+- /* up the htab. This is done because we have relocated the */
+- /* kernel but are still running in real mode. */
+-
+- LOAD_REG_ADDR(r3,init_thread_union)
+-
+ /* set up a stack pointer */
++ LOAD_REG_ADDR(r3,init_thread_union)
+ LOAD_REG_IMMEDIATE(r1,THREAD_SIZE)
+ add r1,r3,r1
+ li r0,0
+diff --git a/arch/powerpc/kernel/head_8xx.S b/arch/powerpc/kernel/head_8xx.S
+index 073a651787df..905205c79a25 100644
+--- a/arch/powerpc/kernel/head_8xx.S
++++ b/arch/powerpc/kernel/head_8xx.S
+@@ -779,10 +779,7 @@ start_here:
+ initial_mmu:
+ li r8, 0
+ mtspr SPRN_MI_CTR, r8 /* remove PINNED ITLB entries */
+- lis r10, MD_RESETVAL@h
+-#ifndef CONFIG_8xx_COPYBACK
+- oris r10, r10, MD_WTDEF@h
+-#endif
++ lis r10, MD_TWAM@h
+ mtspr SPRN_MD_CTR, r10 /* remove PINNED DTLB entries */
+
+ tlbia /* Invalidate all TLB entries */
+@@ -857,17 +854,7 @@ initial_mmu:
+ mtspr SPRN_DC_CST, r8
+ lis r8, IDC_ENABLE@h
+ mtspr SPRN_IC_CST, r8
+-#ifdef CONFIG_8xx_COPYBACK
+- mtspr SPRN_DC_CST, r8
+-#else
+- /* For a debug option, I left this here to easily enable
+- * the write through cache mode
+- */
+- lis r8, DC_SFWT@h
+ mtspr SPRN_DC_CST, r8
+- lis r8, IDC_ENABLE@h
+- mtspr SPRN_DC_CST, r8
+-#endif
+ /* Disable debug mode entry on breakpoints */
+ mfspr r8, SPRN_DER
+ #ifdef CONFIG_PERF_EVENTS
+diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
+index 9c21288f8645..774476be591b 100644
+--- a/arch/powerpc/kernel/process.c
++++ b/arch/powerpc/kernel/process.c
+@@ -1241,29 +1241,31 @@ struct task_struct *__switch_to(struct task_struct *prev,
+ static void show_instructions(struct pt_regs *regs)
+ {
+ int i;
++ unsigned long nip = regs->nip;
+ unsigned long pc = regs->nip - (NR_INSN_TO_PRINT * 3 / 4 * sizeof(int));
+
+ printk("Instruction dump:");
+
++ /*
++ * If we were executing with the MMU off for instructions, adjust pc
++ * rather than printing XXXXXXXX.
++ */
++ if (!IS_ENABLED(CONFIG_BOOKE) && !(regs->msr & MSR_IR)) {
++ pc = (unsigned long)phys_to_virt(pc);
++ nip = (unsigned long)phys_to_virt(regs->nip);
++ }
++
+ for (i = 0; i < NR_INSN_TO_PRINT; i++) {
+ int instr;
+
+ if (!(i % 8))
+ pr_cont("\n");
+
+-#if !defined(CONFIG_BOOKE)
+- /* If executing with the IMMU off, adjust pc rather
+- * than print XXXXXXXX.
+- */
+- if (!(regs->msr & MSR_IR))
+- pc = (unsigned long)phys_to_virt(pc);
+-#endif
+-
+ if (!__kernel_text_address(pc) ||
+ probe_kernel_address((const void *)pc, instr)) {
+ pr_cont("XXXXXXXX ");
+ } else {
+- if (regs->nip == pc)
++ if (nip == pc)
+ pr_cont("<%08x> ", instr);
+ else
+ pr_cont("%08x ", instr);
+diff --git a/arch/powerpc/kexec/core.c b/arch/powerpc/kexec/core.c
+index 078fe3d76feb..56da5eb2b923 100644
+--- a/arch/powerpc/kexec/core.c
++++ b/arch/powerpc/kexec/core.c
+@@ -115,11 +115,12 @@ void machine_kexec(struct kimage *image)
+
+ void __init reserve_crashkernel(void)
+ {
+- unsigned long long crash_size, crash_base;
++ unsigned long long crash_size, crash_base, total_mem_sz;
+ int ret;
+
++ total_mem_sz = memory_limit ? memory_limit : memblock_phys_mem_size();
+ /* use common parsing */
+- ret = parse_crashkernel(boot_command_line, memblock_phys_mem_size(),
++ ret = parse_crashkernel(boot_command_line, total_mem_sz,
+ &crash_size, &crash_base);
+ if (ret == 0 && crash_size > 0) {
+ crashk_res.start = crash_base;
+@@ -178,6 +179,7 @@ void __init reserve_crashkernel(void)
+ /* Crash kernel trumps memory limit */
+ if (memory_limit && memory_limit <= crashk_res.end) {
+ memory_limit = crashk_res.end + 1;
++ total_mem_sz = memory_limit;
+ printk("Adjusted memory limit for crashkernel, now 0x%llx\n",
+ memory_limit);
+ }
+@@ -186,7 +188,7 @@ void __init reserve_crashkernel(void)
+ "for crashkernel (System RAM: %ldMB)\n",
+ (unsigned long)(crash_size >> 20),
+ (unsigned long)(crashk_res.start >> 20),
+- (unsigned long)(memblock_phys_mem_size() >> 20));
++ (unsigned long)(total_mem_sz >> 20));
+
+ if (!memblock_is_region_memory(crashk_res.start, crash_size) ||
+ memblock_reserve(crashk_res.start, crash_size)) {
+diff --git a/arch/powerpc/kvm/book3s_64_mmu_radix.c b/arch/powerpc/kvm/book3s_64_mmu_radix.c
+index aa12cd4078b3..bc6c1aa3d0e9 100644
+--- a/arch/powerpc/kvm/book3s_64_mmu_radix.c
++++ b/arch/powerpc/kvm/book3s_64_mmu_radix.c
+@@ -353,7 +353,13 @@ static struct kmem_cache *kvm_pmd_cache;
+
+ static pte_t *kvmppc_pte_alloc(void)
+ {
+- return kmem_cache_alloc(kvm_pte_cache, GFP_KERNEL);
++ pte_t *pte;
++
++ pte = kmem_cache_alloc(kvm_pte_cache, GFP_KERNEL);
++ /* pmd_populate() will only reference _pa(pte). */
++ kmemleak_ignore(pte);
++
++ return pte;
+ }
+
+ static void kvmppc_pte_free(pte_t *ptep)
+@@ -363,7 +369,13 @@ static void kvmppc_pte_free(pte_t *ptep)
+
+ static pmd_t *kvmppc_pmd_alloc(void)
+ {
+- return kmem_cache_alloc(kvm_pmd_cache, GFP_KERNEL);
++ pmd_t *pmd;
++
++ pmd = kmem_cache_alloc(kvm_pmd_cache, GFP_KERNEL);
++ /* pud_populate() will only reference _pa(pmd). */
++ kmemleak_ignore(pmd);
++
++ return pmd;
+ }
+
+ static void kvmppc_pmd_free(pmd_t *pmdp)
+diff --git a/arch/powerpc/kvm/book3s_64_vio.c b/arch/powerpc/kvm/book3s_64_vio.c
+index 50555ad1db93..1a529df0ab44 100644
+--- a/arch/powerpc/kvm/book3s_64_vio.c
++++ b/arch/powerpc/kvm/book3s_64_vio.c
+@@ -73,6 +73,7 @@ extern void kvm_spapr_tce_release_iommu_group(struct kvm *kvm,
+ struct kvmppc_spapr_tce_iommu_table *stit, *tmp;
+ struct iommu_table_group *table_group = NULL;
+
++ rcu_read_lock();
+ list_for_each_entry_rcu(stt, &kvm->arch.spapr_tce_tables, list) {
+
+ table_group = iommu_group_get_iommudata(grp);
+@@ -87,7 +88,9 @@ extern void kvm_spapr_tce_release_iommu_group(struct kvm *kvm,
+ kref_put(&stit->kref, kvm_spapr_tce_liobn_put);
+ }
+ }
++ cond_resched_rcu();
+ }
++ rcu_read_unlock();
+ }
+
+ extern long kvm_spapr_tce_attach_iommu_group(struct kvm *kvm, int tablefd,
+@@ -105,12 +108,14 @@ extern long kvm_spapr_tce_attach_iommu_group(struct kvm *kvm, int tablefd,
+ if (!f.file)
+ return -EBADF;
+
++ rcu_read_lock();
+ list_for_each_entry_rcu(stt, &kvm->arch.spapr_tce_tables, list) {
+ if (stt == f.file->private_data) {
+ found = true;
+ break;
+ }
+ }
++ rcu_read_unlock();
+
+ fdput(f);
+
+@@ -143,6 +148,7 @@ extern long kvm_spapr_tce_attach_iommu_group(struct kvm *kvm, int tablefd,
+ if (!tbl)
+ return -EINVAL;
+
++ rcu_read_lock();
+ list_for_each_entry_rcu(stit, &stt->iommu_tables, next) {
+ if (tbl != stit->tbl)
+ continue;
+@@ -150,14 +156,17 @@ extern long kvm_spapr_tce_attach_iommu_group(struct kvm *kvm, int tablefd,
+ if (!kref_get_unless_zero(&stit->kref)) {
+ /* stit is being destroyed */
+ iommu_tce_table_put(tbl);
++ rcu_read_unlock();
+ return -ENOTTY;
+ }
+ /*
+ * The table is already known to this KVM, we just increased
+ * its KVM reference counter and can return.
+ */
++ rcu_read_unlock();
+ return 0;
+ }
++ rcu_read_unlock();
+
+ stit = kzalloc(sizeof(*stit), GFP_KERNEL);
+ if (!stit) {
+@@ -365,18 +374,19 @@ static long kvmppc_tce_validate(struct kvmppc_spapr_tce_table *stt,
+ if (kvmppc_tce_to_ua(stt->kvm, tce, &ua))
+ return H_TOO_HARD;
+
++ rcu_read_lock();
+ list_for_each_entry_rcu(stit, &stt->iommu_tables, next) {
+ unsigned long hpa = 0;
+ struct mm_iommu_table_group_mem_t *mem;
+ long shift = stit->tbl->it_page_shift;
+
+ mem = mm_iommu_lookup(stt->kvm->mm, ua, 1ULL << shift);
+- if (!mem)
+- return H_TOO_HARD;
+-
+- if (mm_iommu_ua_to_hpa(mem, ua, shift, &hpa))
++ if (!mem || mm_iommu_ua_to_hpa(mem, ua, shift, &hpa)) {
++ rcu_read_unlock();
+ return H_TOO_HARD;
++ }
+ }
++ rcu_read_unlock();
+
+ return H_SUCCESS;
+ }
+diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
+index 93493f0cbfe8..ee581cde4878 100644
+--- a/arch/powerpc/kvm/book3s_hv.c
++++ b/arch/powerpc/kvm/book3s_hv.c
+@@ -1099,9 +1099,14 @@ int kvmppc_pseries_do_hcall(struct kvm_vcpu *vcpu)
+ ret = kvmppc_h_svm_init_done(vcpu->kvm);
+ break;
+ case H_SVM_INIT_ABORT:
+- ret = H_UNSUPPORTED;
+- if (kvmppc_get_srr1(vcpu) & MSR_S)
+- ret = kvmppc_h_svm_init_abort(vcpu->kvm);
++ /*
++ * Even if that call is made by the Ultravisor, the SSR1 value
++ * is the guest context one, with the secure bit clear as it has
++ * not yet been secured. So we can't check it here.
++ * Instead the kvm->arch.secure_guest flag is checked inside
++ * kvmppc_h_svm_init_abort().
++ */
++ ret = kvmppc_h_svm_init_abort(vcpu->kvm);
+ break;
+
+ default:
+diff --git a/arch/powerpc/mm/book3s32/mmu.c b/arch/powerpc/mm/book3s32/mmu.c
+index 39ba53ca5bb5..a9b2cbc74797 100644
+--- a/arch/powerpc/mm/book3s32/mmu.c
++++ b/arch/powerpc/mm/book3s32/mmu.c
+@@ -187,6 +187,7 @@ void mmu_mark_initmem_nx(void)
+ int i;
+ unsigned long base = (unsigned long)_stext - PAGE_OFFSET;
+ unsigned long top = (unsigned long)_etext - PAGE_OFFSET;
++ unsigned long border = (unsigned long)__init_begin - PAGE_OFFSET;
+ unsigned long size;
+
+ if (IS_ENABLED(CONFIG_PPC_BOOK3S_601))
+@@ -201,9 +202,10 @@ void mmu_mark_initmem_nx(void)
+ size = block_size(base, top);
+ size = max(size, 128UL << 10);
+ if ((top - base) > size) {
+- if (strict_kernel_rwx_enabled())
+- pr_warn("Kernel _etext not properly aligned\n");
+ size <<= 1;
++ if (strict_kernel_rwx_enabled() && base + size > border)
++ pr_warn("Some RW data is getting mapped X. "
++ "Adjust CONFIG_DATA_SHIFT to avoid that.\n");
+ }
+ setibat(i++, PAGE_OFFSET + base, base, size, PAGE_KERNEL_TEXT);
+ base += size;
+diff --git a/arch/powerpc/mm/book3s64/radix_tlb.c b/arch/powerpc/mm/book3s64/radix_tlb.c
+index 758ade2c2b6e..b5cc9b23cf02 100644
+--- a/arch/powerpc/mm/book3s64/radix_tlb.c
++++ b/arch/powerpc/mm/book3s64/radix_tlb.c
+@@ -884,9 +884,7 @@ is_local:
+ if (IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE)) {
+ hstart = (start + PMD_SIZE - 1) & PMD_MASK;
+ hend = end & PMD_MASK;
+- if (hstart == hend)
+- hflush = false;
+- else
++ if (hstart < hend)
+ hflush = true;
+ }
+
+diff --git a/arch/powerpc/mm/kasan/kasan_init_32.c b/arch/powerpc/mm/kasan/kasan_init_32.c
+index 59e49c0e8154..b7c287adfd59 100644
+--- a/arch/powerpc/mm/kasan/kasan_init_32.c
++++ b/arch/powerpc/mm/kasan/kasan_init_32.c
+@@ -76,15 +76,14 @@ static int __init kasan_init_region(void *start, size_t size)
+ return ret;
+
+ block = memblock_alloc(k_end - k_start, PAGE_SIZE);
++ if (!block)
++ return -ENOMEM;
+
+ for (k_cur = k_start & PAGE_MASK; k_cur < k_end; k_cur += PAGE_SIZE) {
+ pmd_t *pmd = pmd_ptr_k(k_cur);
+ void *va = block + k_cur - k_start;
+ pte_t pte = pfn_pte(PHYS_PFN(__pa(va)), PAGE_KERNEL);
+
+- if (!va)
+- return -ENOMEM;
+-
+ __set_pte_at(&init_mm, k_cur, pte_offset_kernel(pmd, k_cur), pte, 0);
+ }
+ flush_tlb_kernel_range(k_start, k_end);
+diff --git a/arch/powerpc/mm/ptdump/shared.c b/arch/powerpc/mm/ptdump/shared.c
+index f7ed2f187cb0..784f8df17f73 100644
+--- a/arch/powerpc/mm/ptdump/shared.c
++++ b/arch/powerpc/mm/ptdump/shared.c
+@@ -30,6 +30,11 @@ static const struct flag_info flag_array[] = {
+ .val = _PAGE_PRESENT,
+ .set = "present",
+ .clear = " ",
++ }, {
++ .mask = _PAGE_COHERENT,
++ .val = _PAGE_COHERENT,
++ .set = "coherent",
++ .clear = " ",
+ }, {
+ .mask = _PAGE_GUARDED,
+ .val = _PAGE_GUARDED,
+diff --git a/arch/powerpc/perf/hv-24x7.c b/arch/powerpc/perf/hv-24x7.c
+index 573e0b309c0c..48e8f4b17b91 100644
+--- a/arch/powerpc/perf/hv-24x7.c
++++ b/arch/powerpc/perf/hv-24x7.c
+@@ -1400,16 +1400,6 @@ static void h_24x7_event_read(struct perf_event *event)
+ h24x7hw = &get_cpu_var(hv_24x7_hw);
+ h24x7hw->events[i] = event;
+ put_cpu_var(h24x7hw);
+- /*
+- * Clear the event count so we can compute the _change_
+- * in the 24x7 raw counter value at the end of the txn.
+- *
+- * Note that we could alternatively read the 24x7 value
+- * now and save its value in event->hw.prev_count. But
+- * that would require issuing a hcall, which would then
+- * defeat the purpose of using the txn interface.
+- */
+- local64_set(&event->count, 0);
+ }
+
+ put_cpu_var(hv_24x7_reqb);
+diff --git a/arch/powerpc/platforms/4xx/pci.c b/arch/powerpc/platforms/4xx/pci.c
+index e6e2adcc7b64..c13d64c3b019 100644
+--- a/arch/powerpc/platforms/4xx/pci.c
++++ b/arch/powerpc/platforms/4xx/pci.c
+@@ -1242,7 +1242,7 @@ static void __init ppc460sx_pciex_check_link(struct ppc4xx_pciex_port *port)
+ if (mbase == NULL) {
+ printk(KERN_ERR "%pOF: Can't map internal config space !",
+ port->node);
+- goto done;
++ return;
+ }
+
+ while (attempt && (0 == (in_le32(mbase + PECFG_460SX_DLLSTA)
+@@ -1252,9 +1252,7 @@ static void __init ppc460sx_pciex_check_link(struct ppc4xx_pciex_port *port)
+ }
+ if (attempt)
+ port->link = 1;
+-done:
+ iounmap(mbase);
+-
+ }
+
+ static struct ppc4xx_pciex_hwops ppc460sx_pcie_hwops __initdata = {
+diff --git a/arch/powerpc/platforms/8xx/Kconfig b/arch/powerpc/platforms/8xx/Kconfig
+index e0fe670f06f6..b37de62d7e7f 100644
+--- a/arch/powerpc/platforms/8xx/Kconfig
++++ b/arch/powerpc/platforms/8xx/Kconfig
+@@ -98,15 +98,6 @@ menu "MPC8xx CPM Options"
+ # 8xx specific questions.
+ comment "Generic MPC8xx Options"
+
+-config 8xx_COPYBACK
+- bool "Copy-Back Data Cache (else Writethrough)"
+- help
+- Saying Y here will cause the cache on an MPC8xx processor to be used
+- in Copy-Back mode. If you say N here, it is used in Writethrough
+- mode.
+-
+- If in doubt, say Y here.
+-
+ config 8xx_GPIO
+ bool "GPIO API Support"
+ select GPIOLIB
+diff --git a/arch/powerpc/platforms/powernv/opal.c b/arch/powerpc/platforms/powernv/opal.c
+index 2b3dfd0b6cdd..d95954ad4c0a 100644
+--- a/arch/powerpc/platforms/powernv/opal.c
++++ b/arch/powerpc/platforms/powernv/opal.c
+@@ -811,6 +811,10 @@ static int opal_add_one_export(struct kobject *parent, const char *export_name,
+ goto out;
+
+ attr = kzalloc(sizeof(*attr), GFP_KERNEL);
++ if (!attr) {
++ rc = -ENOMEM;
++ goto out;
++ }
+ name = kstrdup(export_name, GFP_KERNEL);
+ if (!name) {
+ rc = -ENOMEM;
+diff --git a/arch/powerpc/platforms/ps3/mm.c b/arch/powerpc/platforms/ps3/mm.c
+index 423be34f0f5f..f42fe4e86ce5 100644
+--- a/arch/powerpc/platforms/ps3/mm.c
++++ b/arch/powerpc/platforms/ps3/mm.c
+@@ -200,13 +200,14 @@ void ps3_mm_vas_destroy(void)
+ {
+ int result;
+
+- DBG("%s:%d: map.vas_id = %llu\n", __func__, __LINE__, map.vas_id);
+-
+ if (map.vas_id) {
+ result = lv1_select_virtual_address_space(0);
+- BUG_ON(result);
+- result = lv1_destruct_virtual_address_space(map.vas_id);
+- BUG_ON(result);
++ result += lv1_destruct_virtual_address_space(map.vas_id);
++
++ if (result) {
++ lv1_panic(0);
++ }
++
+ map.vas_id = 0;
+ }
+ }
+@@ -304,19 +305,20 @@ static void ps3_mm_region_destroy(struct mem_region *r)
+ int result;
+
+ if (!r->destroy) {
+- pr_info("%s:%d: Not destroying high region: %llxh %llxh\n",
+- __func__, __LINE__, r->base, r->size);
+ return;
+ }
+
+- DBG("%s:%d: r->base = %llxh\n", __func__, __LINE__, r->base);
+-
+ if (r->base) {
+ result = lv1_release_memory(r->base);
+- BUG_ON(result);
++
++ if (result) {
++ lv1_panic(0);
++ }
++
+ r->size = r->base = r->offset = 0;
+ map.total = map.rm.size;
+ }
++
+ ps3_mm_set_repository_highmem(NULL);
+ }
+
+diff --git a/arch/powerpc/platforms/pseries/ras.c b/arch/powerpc/platforms/pseries/ras.c
+index 1d1da639b8b7..16ba5c542e55 100644
+--- a/arch/powerpc/platforms/pseries/ras.c
++++ b/arch/powerpc/platforms/pseries/ras.c
+@@ -395,10 +395,11 @@ static irqreturn_t ras_error_interrupt(int irq, void *dev_id)
+ /*
+ * Some versions of FWNMI place the buffer inside the 4kB page starting at
+ * 0x7000. Other versions place it inside the rtas buffer. We check both.
++ * Minimum size of the buffer is 16 bytes.
+ */
+ #define VALID_FWNMI_BUFFER(A) \
+- ((((A) >= 0x7000) && ((A) < 0x7ff0)) || \
+- (((A) >= rtas.base) && ((A) < (rtas.base + rtas.size - 16))))
++ ((((A) >= 0x7000) && ((A) <= 0x8000 - 16)) || \
++ (((A) >= rtas.base) && ((A) <= (rtas.base + rtas.size - 16))))
+
+ static inline struct rtas_error_log *fwnmi_get_errlog(void)
+ {
+diff --git a/arch/s390/Kconfig b/arch/s390/Kconfig
+index 2167bce993ff..ae01be202204 100644
+--- a/arch/s390/Kconfig
++++ b/arch/s390/Kconfig
+@@ -462,6 +462,7 @@ config NUMA
+
+ config NODES_SHIFT
+ int
++ depends on NEED_MULTIPLE_NODES
+ default "1"
+
+ config SCHED_SMT
+diff --git a/arch/s390/include/asm/syscall.h b/arch/s390/include/asm/syscall.h
+index f073292e9fdb..d9d5de0f67ff 100644
+--- a/arch/s390/include/asm/syscall.h
++++ b/arch/s390/include/asm/syscall.h
+@@ -33,7 +33,17 @@ static inline void syscall_rollback(struct task_struct *task,
+ static inline long syscall_get_error(struct task_struct *task,
+ struct pt_regs *regs)
+ {
+- return IS_ERR_VALUE(regs->gprs[2]) ? regs->gprs[2] : 0;
++ unsigned long error = regs->gprs[2];
++#ifdef CONFIG_COMPAT
++ if (test_tsk_thread_flag(task, TIF_31BIT)) {
++ /*
++ * Sign-extend the value so (int)-EFOO becomes (long)-EFOO
++ * and will match correctly in comparisons.
++ */
++ error = (long)(int)error;
++ }
++#endif
++ return IS_ERR_VALUE(error) ? error : 0;
+ }
+
+ static inline long syscall_get_return_value(struct task_struct *task,
+diff --git a/arch/sh/include/asm/io.h b/arch/sh/include/asm/io.h
+index 39c9ead489e5..b42228906eaf 100644
+--- a/arch/sh/include/asm/io.h
++++ b/arch/sh/include/asm/io.h
+@@ -328,7 +328,7 @@ __ioremap_mode(phys_addr_t offset, unsigned long size, pgprot_t prot)
+ #else
+ #define __ioremap(offset, size, prot) ((void __iomem *)(offset))
+ #define __ioremap_mode(offset, size, prot) ((void __iomem *)(offset))
+-#define iounmap(addr) do { } while (0)
++static inline void iounmap(void __iomem *addr) {}
+ #endif /* CONFIG_MMU */
+
+ static inline void __iomem *ioremap(phys_addr_t offset, unsigned long size)
+diff --git a/arch/sparc/mm/srmmu.c b/arch/sparc/mm/srmmu.c
+index a8c2f2615fc6..ecc9e8786d57 100644
+--- a/arch/sparc/mm/srmmu.c
++++ b/arch/sparc/mm/srmmu.c
+@@ -383,7 +383,6 @@ pgtable_t pte_alloc_one(struct mm_struct *mm)
+ return NULL;
+ page = pfn_to_page(__nocache_pa(pte) >> PAGE_SHIFT);
+ if (!pgtable_pte_page_ctor(page)) {
+- __free_page(page);
+ return NULL;
+ }
+ return page;
+diff --git a/arch/um/drivers/Makefile b/arch/um/drivers/Makefile
+index a290821e355c..2a249f619467 100644
+--- a/arch/um/drivers/Makefile
++++ b/arch/um/drivers/Makefile
+@@ -18,9 +18,9 @@ ubd-objs := ubd_kern.o ubd_user.o
+ port-objs := port_kern.o port_user.o
+ harddog-objs := harddog_kern.o harddog_user.o
+
+-LDFLAGS_pcap.o := -r $(shell $(CC) $(KBUILD_CFLAGS) -print-file-name=libpcap.a)
++LDFLAGS_pcap.o = $(shell $(CC) $(KBUILD_CFLAGS) -print-file-name=libpcap.a)
+
+-LDFLAGS_vde.o := -r $(shell $(CC) $(CFLAGS) -print-file-name=libvdeplug.a)
++LDFLAGS_vde.o = $(shell $(CC) $(CFLAGS) -print-file-name=libvdeplug.a)
+
+ targets := pcap_kern.o pcap_user.o vde_kern.o vde_user.o
+
+diff --git a/arch/unicore32/lib/Makefile b/arch/unicore32/lib/Makefile
+index 098981a01841..5af06645b8f0 100644
+--- a/arch/unicore32/lib/Makefile
++++ b/arch/unicore32/lib/Makefile
+@@ -10,12 +10,12 @@ lib-y += strncpy_from_user.o strnlen_user.o
+ lib-y += clear_user.o copy_page.o
+ lib-y += copy_from_user.o copy_to_user.o
+
+-GNU_LIBC_A := $(shell $(CC) $(KBUILD_CFLAGS) -print-file-name=libc.a)
++GNU_LIBC_A = $(shell $(CC) $(KBUILD_CFLAGS) -print-file-name=libc.a)
+ GNU_LIBC_A_OBJS := memchr.o memcpy.o memmove.o memset.o
+ GNU_LIBC_A_OBJS += strchr.o strrchr.o
+ GNU_LIBC_A_OBJS += rawmemchr.o # needed by strrchr.o
+
+-GNU_LIBGCC_A := $(shell $(CC) $(KBUILD_CFLAGS) -print-file-name=libgcc.a)
++GNU_LIBGCC_A = $(shell $(CC) $(KBUILD_CFLAGS) -print-file-name=libgcc.a)
+ GNU_LIBGCC_A_OBJS := _ashldi3.o _ashrdi3.o _lshrdi3.o
+ GNU_LIBGCC_A_OBJS += _divsi3.o _modsi3.o _ucmpdi2.o _umodsi3.o _udivsi3.o
+
+diff --git a/arch/x86/kernel/apic/apic.c b/arch/x86/kernel/apic/apic.c
+index e53dda210cd7..21d2f1de1057 100644
+--- a/arch/x86/kernel/apic/apic.c
++++ b/arch/x86/kernel/apic/apic.c
+@@ -2093,7 +2093,7 @@ void __init init_apic_mappings(void)
+ unsigned int new_apicid;
+
+ if (apic_validate_deadline_timer())
+- pr_debug("TSC deadline timer available\n");
++ pr_info("TSC deadline timer available\n");
+
+ if (x2apic_mode) {
+ boot_cpu_physical_apicid = read_apic_id();
+diff --git a/arch/x86/kernel/cpu/mce/dev-mcelog.c b/arch/x86/kernel/cpu/mce/dev-mcelog.c
+index d089567a9ce8..bcb379b2fd42 100644
+--- a/arch/x86/kernel/cpu/mce/dev-mcelog.c
++++ b/arch/x86/kernel/cpu/mce/dev-mcelog.c
+@@ -343,7 +343,7 @@ static __init int dev_mcelog_init_device(void)
+ if (!mcelog)
+ return -ENOMEM;
+
+- strncpy(mcelog->signature, MCE_LOG_SIGNATURE, sizeof(mcelog->signature));
++ memcpy(mcelog->signature, MCE_LOG_SIGNATURE, sizeof(mcelog->signature));
+ mcelog->len = mce_log_len;
+ mcelog->recordlen = sizeof(struct mce);
+
+diff --git a/arch/x86/kernel/idt.c b/arch/x86/kernel/idt.c
+index 87ef69a72c52..7bb4c3cbf4dc 100644
+--- a/arch/x86/kernel/idt.c
++++ b/arch/x86/kernel/idt.c
+@@ -318,7 +318,11 @@ void __init idt_setup_apic_and_irq_gates(void)
+
+ #ifdef CONFIG_X86_LOCAL_APIC
+ for_each_clear_bit_from(i, system_vectors, NR_VECTORS) {
+- set_bit(i, system_vectors);
++ /*
++ * Don't set the non assigned system vectors in the
++ * system_vectors bitmap. Otherwise they show up in
++ * /proc/interrupts.
++ */
+ entry = spurious_entries_start + 8 * (i - FIRST_SYSTEM_VECTOR);
+ set_intr_gate(i, entry);
+ }
+diff --git a/arch/x86/kernel/kprobes/core.c b/arch/x86/kernel/kprobes/core.c
+index 4d7022a740ab..a12adbe1559d 100644
+--- a/arch/x86/kernel/kprobes/core.c
++++ b/arch/x86/kernel/kprobes/core.c
+@@ -753,16 +753,11 @@ asm(
+ NOKPROBE_SYMBOL(kretprobe_trampoline);
+ STACK_FRAME_NON_STANDARD(kretprobe_trampoline);
+
+-static struct kprobe kretprobe_kprobe = {
+- .addr = (void *)kretprobe_trampoline,
+-};
+-
+ /*
+ * Called from kretprobe_trampoline
+ */
+ __used __visible void *trampoline_handler(struct pt_regs *regs)
+ {
+- struct kprobe_ctlblk *kcb;
+ struct kretprobe_instance *ri = NULL;
+ struct hlist_head *head, empty_rp;
+ struct hlist_node *tmp;
+@@ -772,16 +767,12 @@ __used __visible void *trampoline_handler(struct pt_regs *regs)
+ void *frame_pointer;
+ bool skipped = false;
+
+- preempt_disable();
+-
+ /*
+ * Set a dummy kprobe for avoiding kretprobe recursion.
+ * Since kretprobe never run in kprobe handler, kprobe must not
+ * be running at this point.
+ */
+- kcb = get_kprobe_ctlblk();
+- __this_cpu_write(current_kprobe, &kretprobe_kprobe);
+- kcb->kprobe_status = KPROBE_HIT_ACTIVE;
++ kprobe_busy_begin();
+
+ INIT_HLIST_HEAD(&empty_rp);
+ kretprobe_hash_lock(current, &head, &flags);
+@@ -857,7 +848,7 @@ __used __visible void *trampoline_handler(struct pt_regs *regs)
+ __this_cpu_write(current_kprobe, &ri->rp->kp);
+ ri->ret_addr = correct_ret_addr;
+ ri->rp->handler(ri, regs);
+- __this_cpu_write(current_kprobe, &kretprobe_kprobe);
++ __this_cpu_write(current_kprobe, &kprobe_busy);
+ }
+
+ recycle_rp_inst(ri, &empty_rp);
+@@ -873,8 +864,7 @@ __used __visible void *trampoline_handler(struct pt_regs *regs)
+
+ kretprobe_hash_unlock(current, &flags);
+
+- __this_cpu_write(current_kprobe, NULL);
+- preempt_enable();
++ kprobe_busy_end();
+
+ hlist_for_each_entry_safe(ri, tmp, &empty_rp, hlist) {
+ hlist_del(&ri->hlist);
+diff --git a/arch/x86/purgatory/Makefile b/arch/x86/purgatory/Makefile
+index fb4ee5444379..9733d1cc791d 100644
+--- a/arch/x86/purgatory/Makefile
++++ b/arch/x86/purgatory/Makefile
+@@ -17,7 +17,10 @@ CFLAGS_sha256.o := -D__DISABLE_EXPORTS
+ LDFLAGS_purgatory.ro := -e purgatory_start -r --no-undefined -nostdlib -z nodefaultlib
+ targets += purgatory.ro
+
++# Sanitizer, etc. runtimes are unavailable and cannot be linked here.
++GCOV_PROFILE := n
+ KASAN_SANITIZE := n
++UBSAN_SANITIZE := n
+ KCOV_INSTRUMENT := n
+
+ # These are adjustments to the compiler flags used for objects that
+@@ -25,7 +28,7 @@ KCOV_INSTRUMENT := n
+
+ PURGATORY_CFLAGS_REMOVE := -mcmodel=kernel
+ PURGATORY_CFLAGS := -mcmodel=large -ffreestanding -fno-zero-initialized-in-bss
+-PURGATORY_CFLAGS += $(DISABLE_STACKLEAK_PLUGIN)
++PURGATORY_CFLAGS += $(DISABLE_STACKLEAK_PLUGIN) -DDISABLE_BRANCH_PROFILING
+
+ # Default KBUILD_CFLAGS can have -pg option set when FTRACE is enabled. That
+ # in turn leaves some undefined symbols like __fentry__ in purgatory and not
+diff --git a/crypto/algboss.c b/crypto/algboss.c
+index 535f1f87e6c1..5ebccbd6b74e 100644
+--- a/crypto/algboss.c
++++ b/crypto/algboss.c
+@@ -178,8 +178,6 @@ static int cryptomgr_schedule_probe(struct crypto_larval *larval)
+ if (IS_ERR(thread))
+ goto err_put_larval;
+
+- wait_for_completion_interruptible(&larval->completion);
+-
+ return NOTIFY_STOP;
+
+ err_put_larval:
+diff --git a/crypto/algif_skcipher.c b/crypto/algif_skcipher.c
+index e2c8ab408bed..4c3bdffe0c3a 100644
+--- a/crypto/algif_skcipher.c
++++ b/crypto/algif_skcipher.c
+@@ -74,14 +74,10 @@ static int _skcipher_recvmsg(struct socket *sock, struct msghdr *msg,
+ return PTR_ERR(areq);
+
+ /* convert iovecs of output buffers into RX SGL */
+- err = af_alg_get_rsgl(sk, msg, flags, areq, -1, &len);
++ err = af_alg_get_rsgl(sk, msg, flags, areq, ctx->used, &len);
+ if (err)
+ goto free;
+
+- /* Process only as much RX buffers for which we have TX data */
+- if (len > ctx->used)
+- len = ctx->used;
+-
+ /*
+ * If more buffers are to be expected to be processed, process only
+ * full block size buffers.
+diff --git a/drivers/ata/libata-core.c b/drivers/ata/libata-core.c
+index beca5f91bb4c..e74c8fe2a5fd 100644
+--- a/drivers/ata/libata-core.c
++++ b/drivers/ata/libata-core.c
+@@ -42,7 +42,6 @@
+ #include <linux/workqueue.h>
+ #include <linux/scatterlist.h>
+ #include <linux/io.h>
+-#include <linux/async.h>
+ #include <linux/log2.h>
+ #include <linux/slab.h>
+ #include <linux/glob.h>
+@@ -5778,7 +5777,7 @@ int ata_host_register(struct ata_host *host, struct scsi_host_template *sht)
+ /* perform each probe asynchronously */
+ for (i = 0; i < host->n_ports; i++) {
+ struct ata_port *ap = host->ports[i];
+- async_schedule(async_port_probe, ap);
++ ap->cookie = async_schedule(async_port_probe, ap);
+ }
+
+ return 0;
+@@ -5920,11 +5919,11 @@ void ata_host_detach(struct ata_host *host)
+ {
+ int i;
+
+- /* Ensure ata_port probe has completed */
+- async_synchronize_full();
+-
+- for (i = 0; i < host->n_ports; i++)
++ for (i = 0; i < host->n_ports; i++) {
++ /* Ensure ata_port probe has completed */
++ async_synchronize_cookie(host->ports[i]->cookie + 1);
+ ata_port_detach(host->ports[i]);
++ }
+
+ /* the host is dead now, dissociate ACPI */
+ ata_acpi_dissociate(host);
+diff --git a/drivers/base/platform.c b/drivers/base/platform.c
+index b27d0f6c18c9..f5d485166fd3 100644
+--- a/drivers/base/platform.c
++++ b/drivers/base/platform.c
+@@ -851,6 +851,8 @@ int __init_or_module __platform_driver_probe(struct platform_driver *drv,
+ /* temporary section violation during probe() */
+ drv->probe = probe;
+ retval = code = __platform_driver_register(drv, module);
++ if (retval)
++ return retval;
+
+ /*
+ * Fixup that section violation, being paranoid about code scanning
+diff --git a/drivers/block/ps3disk.c b/drivers/block/ps3disk.c
+index c5c6487a19d5..7b55811c2a81 100644
+--- a/drivers/block/ps3disk.c
++++ b/drivers/block/ps3disk.c
+@@ -454,7 +454,6 @@ static int ps3disk_probe(struct ps3_system_bus_device *_dev)
+ queue->queuedata = dev;
+
+ blk_queue_max_hw_sectors(queue, dev->bounce_size >> 9);
+- blk_queue_segment_boundary(queue, -1UL);
+ blk_queue_dma_alignment(queue, dev->blk_size-1);
+ blk_queue_logical_block_size(queue, dev->blk_size);
+
+diff --git a/drivers/bus/mhi/core/main.c b/drivers/bus/mhi/core/main.c
+index 97e06cc586e4..8be3d0fb0614 100644
+--- a/drivers/bus/mhi/core/main.c
++++ b/drivers/bus/mhi/core/main.c
+@@ -513,7 +513,10 @@ static int parse_xfer_event(struct mhi_controller *mhi_cntrl,
+ mhi_cntrl->unmap_single(mhi_cntrl, buf_info);
+
+ result.buf_addr = buf_info->cb_buf;
+- result.bytes_xferd = xfer_len;
++
++ /* truncate to buf len if xfer_len is larger */
++ result.bytes_xferd =
++ min_t(u16, xfer_len, buf_info->len);
+ mhi_del_ring_element(mhi_cntrl, buf_ring);
+ mhi_del_ring_element(mhi_cntrl, tre_ring);
+ local_rp = tre_ring->rp;
+@@ -597,7 +600,9 @@ static int parse_rsc_event(struct mhi_controller *mhi_cntrl,
+
+ result.transaction_status = (ev_code == MHI_EV_CC_OVERFLOW) ?
+ -EOVERFLOW : 0;
+- result.bytes_xferd = xfer_len;
++
++ /* truncate to buf len if xfer_len is larger */
++ result.bytes_xferd = min_t(u16, xfer_len, buf_info->len);
+ result.buf_addr = buf_info->cb_buf;
+ result.dir = mhi_chan->dir;
+
+diff --git a/drivers/char/ipmi/ipmi_msghandler.c b/drivers/char/ipmi/ipmi_msghandler.c
+index c48d8f086382..9afd220cd824 100644
+--- a/drivers/char/ipmi/ipmi_msghandler.c
++++ b/drivers/char/ipmi/ipmi_msghandler.c
+@@ -33,6 +33,7 @@
+ #include <linux/workqueue.h>
+ #include <linux/uuid.h>
+ #include <linux/nospec.h>
++#include <linux/vmalloc.h>
+
+ #define IPMI_DRIVER_VERSION "39.2"
+
+@@ -1153,7 +1154,7 @@ static void free_user_work(struct work_struct *work)
+ remove_work);
+
+ cleanup_srcu_struct(&user->release_barrier);
+- kfree(user);
++ vfree(user);
+ }
+
+ int ipmi_create_user(unsigned int if_num,
+@@ -1185,7 +1186,7 @@ int ipmi_create_user(unsigned int if_num,
+ if (rv)
+ return rv;
+
+- new_user = kmalloc(sizeof(*new_user), GFP_KERNEL);
++ new_user = vzalloc(sizeof(*new_user));
+ if (!new_user)
+ return -ENOMEM;
+
+@@ -1232,7 +1233,7 @@ int ipmi_create_user(unsigned int if_num,
+
+ out_kfree:
+ srcu_read_unlock(&ipmi_interfaces_srcu, index);
+- kfree(new_user);
++ vfree(new_user);
+ return rv;
+ }
+ EXPORT_SYMBOL(ipmi_create_user);
+diff --git a/drivers/char/mem.c b/drivers/char/mem.c
+index 43dd0891ca1e..31cae88a730b 100644
+--- a/drivers/char/mem.c
++++ b/drivers/char/mem.c
+@@ -31,11 +31,15 @@
+ #include <linux/uio.h>
+ #include <linux/uaccess.h>
+ #include <linux/security.h>
++#include <linux/pseudo_fs.h>
++#include <uapi/linux/magic.h>
++#include <linux/mount.h>
+
+ #ifdef CONFIG_IA64
+ # include <linux/efi.h>
+ #endif
+
++#define DEVMEM_MINOR 1
+ #define DEVPORT_MINOR 4
+
+ static inline unsigned long size_inside_page(unsigned long start,
+@@ -805,12 +809,64 @@ static loff_t memory_lseek(struct file *file, loff_t offset, int orig)
+ return ret;
+ }
+
++static struct inode *devmem_inode;
++
++#ifdef CONFIG_IO_STRICT_DEVMEM
++void revoke_devmem(struct resource *res)
++{
++ struct inode *inode = READ_ONCE(devmem_inode);
++
++ /*
++ * Check that the initialization has completed. Losing the race
++ * is ok because it means drivers are claiming resources before
++ * the fs_initcall level of init and prevent /dev/mem from
++ * establishing mappings.
++ */
++ if (!inode)
++ return;
++
++ /*
++ * The expectation is that the driver has successfully marked
++ * the resource busy by this point, so devmem_is_allowed()
++ * should start returning false, however for performance this
++ * does not iterate the entire resource range.
++ */
++ if (devmem_is_allowed(PHYS_PFN(res->start)) &&
++ devmem_is_allowed(PHYS_PFN(res->end))) {
++ /*
++ * *cringe* iomem=relaxed says "go ahead, what's the
++ * worst that can happen?"
++ */
++ return;
++ }
++
++ unmap_mapping_range(inode->i_mapping, res->start, resource_size(res), 1);
++}
++#endif
++
+ static int open_port(struct inode *inode, struct file *filp)
+ {
++ int rc;
++
+ if (!capable(CAP_SYS_RAWIO))
+ return -EPERM;
+
+- return security_locked_down(LOCKDOWN_DEV_MEM);
++ rc = security_locked_down(LOCKDOWN_DEV_MEM);
++ if (rc)
++ return rc;
++
++ if (iminor(inode) != DEVMEM_MINOR)
++ return 0;
++
++ /*
++ * Use a unified address space to have a single point to manage
++ * revocations when drivers want to take over a /dev/mem mapped
++ * range.
++ */
++ inode->i_mapping = devmem_inode->i_mapping;
++ filp->f_mapping = inode->i_mapping;
++
++ return 0;
+ }
+
+ #define zero_lseek null_lseek
+@@ -885,7 +941,7 @@ static const struct memdev {
+ fmode_t fmode;
+ } devlist[] = {
+ #ifdef CONFIG_DEVMEM
+- [1] = { "mem", 0, &mem_fops, FMODE_UNSIGNED_OFFSET },
++ [DEVMEM_MINOR] = { "mem", 0, &mem_fops, FMODE_UNSIGNED_OFFSET },
+ #endif
+ #ifdef CONFIG_DEVKMEM
+ [2] = { "kmem", 0, &kmem_fops, FMODE_UNSIGNED_OFFSET },
+@@ -939,6 +995,45 @@ static char *mem_devnode(struct device *dev, umode_t *mode)
+
+ static struct class *mem_class;
+
++static int devmem_fs_init_fs_context(struct fs_context *fc)
++{
++ return init_pseudo(fc, DEVMEM_MAGIC) ? 0 : -ENOMEM;
++}
++
++static struct file_system_type devmem_fs_type = {
++ .name = "devmem",
++ .owner = THIS_MODULE,
++ .init_fs_context = devmem_fs_init_fs_context,
++ .kill_sb = kill_anon_super,
++};
++
++static int devmem_init_inode(void)
++{
++ static struct vfsmount *devmem_vfs_mount;
++ static int devmem_fs_cnt;
++ struct inode *inode;
++ int rc;
++
++ rc = simple_pin_fs(&devmem_fs_type, &devmem_vfs_mount, &devmem_fs_cnt);
++ if (rc < 0) {
++ pr_err("Cannot mount /dev/mem pseudo filesystem: %d\n", rc);
++ return rc;
++ }
++
++ inode = alloc_anon_inode(devmem_vfs_mount->mnt_sb);
++ if (IS_ERR(inode)) {
++ rc = PTR_ERR(inode);
++ pr_err("Cannot allocate inode for /dev/mem: %d\n", rc);
++ simple_release_fs(&devmem_vfs_mount, &devmem_fs_cnt);
++ return rc;
++ }
++
++ /* publish /dev/mem initialized */
++ WRITE_ONCE(devmem_inode, inode);
++
++ return 0;
++}
++
+ static int __init chr_dev_init(void)
+ {
+ int minor;
+@@ -960,6 +1055,8 @@ static int __init chr_dev_init(void)
+ */
+ if ((minor == DEVPORT_MINOR) && !arch_has_dev_port())
+ continue;
++ if ((minor == DEVMEM_MINOR) && devmem_init_inode() != 0)
++ continue;
+
+ device_create(mem_class, NULL, MKDEV(MEM_MAJOR, minor),
+ NULL, devlist[minor].name);
+diff --git a/drivers/clk/Makefile b/drivers/clk/Makefile
+index f4169cc2fd31..60e811d3f226 100644
+--- a/drivers/clk/Makefile
++++ b/drivers/clk/Makefile
+@@ -105,7 +105,7 @@ obj-$(CONFIG_CLK_SIFIVE) += sifive/
+ obj-$(CONFIG_ARCH_SIRF) += sirf/
+ obj-$(CONFIG_ARCH_SOCFPGA) += socfpga/
+ obj-$(CONFIG_PLAT_SPEAR) += spear/
+-obj-$(CONFIG_ARCH_SPRD) += sprd/
++obj-y += sprd/
+ obj-$(CONFIG_ARCH_STI) += st/
+ obj-$(CONFIG_ARCH_STRATIX10) += socfpga/
+ obj-$(CONFIG_ARCH_SUNXI) += sunxi/
+diff --git a/drivers/clk/bcm/clk-bcm2835.c b/drivers/clk/bcm/clk-bcm2835.c
+index ded13ccf768e..7c845c293af0 100644
+--- a/drivers/clk/bcm/clk-bcm2835.c
++++ b/drivers/clk/bcm/clk-bcm2835.c
+@@ -1448,13 +1448,13 @@ static struct clk_hw *bcm2835_register_clock(struct bcm2835_cprman *cprman,
+ return &clock->hw;
+ }
+
+-static struct clk *bcm2835_register_gate(struct bcm2835_cprman *cprman,
++static struct clk_hw *bcm2835_register_gate(struct bcm2835_cprman *cprman,
+ const struct bcm2835_gate_data *data)
+ {
+- return clk_register_gate(cprman->dev, data->name, data->parent,
+- CLK_IGNORE_UNUSED | CLK_SET_RATE_GATE,
+- cprman->regs + data->ctl_reg,
+- CM_GATE_BIT, 0, &cprman->regs_lock);
++ return clk_hw_register_gate(cprman->dev, data->name, data->parent,
++ CLK_IGNORE_UNUSED | CLK_SET_RATE_GATE,
++ cprman->regs + data->ctl_reg,
++ CM_GATE_BIT, 0, &cprman->regs_lock);
+ }
+
+ typedef struct clk_hw *(*bcm2835_clk_register)(struct bcm2835_cprman *cprman,
+diff --git a/drivers/clk/clk-ast2600.c b/drivers/clk/clk-ast2600.c
+index 392d01705b97..99afc949925f 100644
+--- a/drivers/clk/clk-ast2600.c
++++ b/drivers/clk/clk-ast2600.c
+@@ -642,14 +642,22 @@ static const u32 ast2600_a0_axi_ahb_div_table[] = {
+ 2, 2, 3, 5,
+ };
+
+-static const u32 ast2600_a1_axi_ahb_div_table[] = {
+- 4, 6, 2, 4,
++static const u32 ast2600_a1_axi_ahb_div0_tbl[] = {
++ 3, 2, 3, 4,
++};
++
++static const u32 ast2600_a1_axi_ahb_div1_tbl[] = {
++ 3, 4, 6, 8,
++};
++
++static const u32 ast2600_a1_axi_ahb200_tbl[] = {
++ 3, 4, 3, 4, 2, 2, 2, 2,
+ };
+
+ static void __init aspeed_g6_cc(struct regmap *map)
+ {
+ struct clk_hw *hw;
+- u32 val, div, chip_id, axi_div, ahb_div;
++ u32 val, div, divbits, chip_id, axi_div, ahb_div;
+
+ clk_hw_register_fixed_rate(NULL, "clkin", NULL, 0, 25000000);
+
+@@ -679,11 +687,22 @@ static void __init aspeed_g6_cc(struct regmap *map)
+ else
+ axi_div = 2;
+
++ divbits = (val >> 11) & 0x3;
+ regmap_read(map, ASPEED_G6_SILICON_REV, &chip_id);
+- if (chip_id & BIT(16))
+- ahb_div = ast2600_a1_axi_ahb_div_table[(val >> 11) & 0x3];
+- else
++ if (chip_id & BIT(16)) {
++ if (!divbits) {
++ ahb_div = ast2600_a1_axi_ahb200_tbl[(val >> 8) & 0x3];
++ if (val & BIT(16))
++ ahb_div *= 2;
++ } else {
++ if (val & BIT(16))
++ ahb_div = ast2600_a1_axi_ahb_div1_tbl[divbits];
++ else
++ ahb_div = ast2600_a1_axi_ahb_div0_tbl[divbits];
++ }
++ } else {
+ ahb_div = ast2600_a0_axi_ahb_div_table[(val >> 11) & 0x3];
++ }
+
+ hw = clk_hw_register_fixed_factor(NULL, "ahb", "hpll", 0, 1, axi_div * ahb_div);
+ aspeed_g6_clk_data->hws[ASPEED_CLK_AHB] = hw;
+diff --git a/drivers/clk/meson/meson8b.c b/drivers/clk/meson/meson8b.c
+index 34a70c4b4899..11f6b868cf2b 100644
+--- a/drivers/clk/meson/meson8b.c
++++ b/drivers/clk/meson/meson8b.c
+@@ -1077,7 +1077,7 @@ static struct clk_regmap meson8b_vid_pll_in_sel = {
+ * Meson8m2: vid2_pll
+ */
+ .parent_hws = (const struct clk_hw *[]) {
+- &meson8b_hdmi_pll_dco.hw
++ &meson8b_hdmi_pll_lvds_out.hw
+ },
+ .num_parents = 1,
+ .flags = CLK_SET_RATE_PARENT,
+@@ -1213,7 +1213,7 @@ static struct clk_regmap meson8b_vclk_in_en = {
+
+ static struct clk_regmap meson8b_vclk_div1_gate = {
+ .data = &(struct clk_regmap_gate_data){
+- .offset = HHI_VID_CLK_DIV,
++ .offset = HHI_VID_CLK_CNTL,
+ .bit_idx = 0,
+ },
+ .hw.init = &(struct clk_init_data){
+@@ -1243,7 +1243,7 @@ static struct clk_fixed_factor meson8b_vclk_div2_div = {
+
+ static struct clk_regmap meson8b_vclk_div2_div_gate = {
+ .data = &(struct clk_regmap_gate_data){
+- .offset = HHI_VID_CLK_DIV,
++ .offset = HHI_VID_CLK_CNTL,
+ .bit_idx = 1,
+ },
+ .hw.init = &(struct clk_init_data){
+@@ -1273,7 +1273,7 @@ static struct clk_fixed_factor meson8b_vclk_div4_div = {
+
+ static struct clk_regmap meson8b_vclk_div4_div_gate = {
+ .data = &(struct clk_regmap_gate_data){
+- .offset = HHI_VID_CLK_DIV,
++ .offset = HHI_VID_CLK_CNTL,
+ .bit_idx = 2,
+ },
+ .hw.init = &(struct clk_init_data){
+@@ -1303,7 +1303,7 @@ static struct clk_fixed_factor meson8b_vclk_div6_div = {
+
+ static struct clk_regmap meson8b_vclk_div6_div_gate = {
+ .data = &(struct clk_regmap_gate_data){
+- .offset = HHI_VID_CLK_DIV,
++ .offset = HHI_VID_CLK_CNTL,
+ .bit_idx = 3,
+ },
+ .hw.init = &(struct clk_init_data){
+@@ -1333,7 +1333,7 @@ static struct clk_fixed_factor meson8b_vclk_div12_div = {
+
+ static struct clk_regmap meson8b_vclk_div12_div_gate = {
+ .data = &(struct clk_regmap_gate_data){
+- .offset = HHI_VID_CLK_DIV,
++ .offset = HHI_VID_CLK_CNTL,
+ .bit_idx = 4,
+ },
+ .hw.init = &(struct clk_init_data){
+@@ -1918,6 +1918,13 @@ static struct clk_regmap meson8b_mali = {
+ },
+ };
+
++static const struct reg_sequence meson8m2_gp_pll_init_regs[] = {
++ { .reg = HHI_GP_PLL_CNTL2, .def = 0x59c88000 },
++ { .reg = HHI_GP_PLL_CNTL3, .def = 0xca463823 },
++ { .reg = HHI_GP_PLL_CNTL4, .def = 0x0286a027 },
++ { .reg = HHI_GP_PLL_CNTL5, .def = 0x00003000 },
++};
++
+ static const struct pll_params_table meson8m2_gp_pll_params_table[] = {
+ PLL_PARAMS(182, 3),
+ { /* sentinel */ },
+@@ -1951,6 +1958,8 @@ static struct clk_regmap meson8m2_gp_pll_dco = {
+ .width = 1,
+ },
+ .table = meson8m2_gp_pll_params_table,
++ .init_regs = meson8m2_gp_pll_init_regs,
++ .init_count = ARRAY_SIZE(meson8m2_gp_pll_init_regs),
+ },
+ .hw.init = &(struct clk_init_data){
+ .name = "gp_pll_dco",
+@@ -3506,54 +3515,87 @@ static struct clk_regmap *const meson8b_clk_regmaps[] = {
+ static const struct meson8b_clk_reset_line {
+ u32 reg;
+ u8 bit_idx;
++ bool active_low;
+ } meson8b_clk_reset_bits[] = {
+ [CLKC_RESET_L2_CACHE_SOFT_RESET] = {
+- .reg = HHI_SYS_CPU_CLK_CNTL0, .bit_idx = 30
++ .reg = HHI_SYS_CPU_CLK_CNTL0,
++ .bit_idx = 30,
++ .active_low = false,
+ },
+ [CLKC_RESET_AXI_64_TO_128_BRIDGE_A5_SOFT_RESET] = {
+- .reg = HHI_SYS_CPU_CLK_CNTL0, .bit_idx = 29
++ .reg = HHI_SYS_CPU_CLK_CNTL0,
++ .bit_idx = 29,
++ .active_low = false,
+ },
+ [CLKC_RESET_SCU_SOFT_RESET] = {
+- .reg = HHI_SYS_CPU_CLK_CNTL0, .bit_idx = 28
++ .reg = HHI_SYS_CPU_CLK_CNTL0,
++ .bit_idx = 28,
++ .active_low = false,
+ },
+ [CLKC_RESET_CPU3_SOFT_RESET] = {
+- .reg = HHI_SYS_CPU_CLK_CNTL0, .bit_idx = 27
++ .reg = HHI_SYS_CPU_CLK_CNTL0,
++ .bit_idx = 27,
++ .active_low = false,
+ },
+ [CLKC_RESET_CPU2_SOFT_RESET] = {
+- .reg = HHI_SYS_CPU_CLK_CNTL0, .bit_idx = 26
++ .reg = HHI_SYS_CPU_CLK_CNTL0,
++ .bit_idx = 26,
++ .active_low = false,
+ },
+ [CLKC_RESET_CPU1_SOFT_RESET] = {
+- .reg = HHI_SYS_CPU_CLK_CNTL0, .bit_idx = 25
++ .reg = HHI_SYS_CPU_CLK_CNTL0,
++ .bit_idx = 25,
++ .active_low = false,
+ },
+ [CLKC_RESET_CPU0_SOFT_RESET] = {
+- .reg = HHI_SYS_CPU_CLK_CNTL0, .bit_idx = 24
++ .reg = HHI_SYS_CPU_CLK_CNTL0,
++ .bit_idx = 24,
++ .active_low = false,
+ },
+ [CLKC_RESET_A5_GLOBAL_RESET] = {
+- .reg = HHI_SYS_CPU_CLK_CNTL0, .bit_idx = 18
++ .reg = HHI_SYS_CPU_CLK_CNTL0,
++ .bit_idx = 18,
++ .active_low = false,
+ },
+ [CLKC_RESET_A5_AXI_SOFT_RESET] = {
+- .reg = HHI_SYS_CPU_CLK_CNTL0, .bit_idx = 17
++ .reg = HHI_SYS_CPU_CLK_CNTL0,
++ .bit_idx = 17,
++ .active_low = false,
+ },
+ [CLKC_RESET_A5_ABP_SOFT_RESET] = {
+- .reg = HHI_SYS_CPU_CLK_CNTL0, .bit_idx = 16
++ .reg = HHI_SYS_CPU_CLK_CNTL0,
++ .bit_idx = 16,
++ .active_low = false,
+ },
+ [CLKC_RESET_AXI_64_TO_128_BRIDGE_MMC_SOFT_RESET] = {
+- .reg = HHI_SYS_CPU_CLK_CNTL1, .bit_idx = 30
++ .reg = HHI_SYS_CPU_CLK_CNTL1,
++ .bit_idx = 30,
++ .active_low = false,
+ },
+ [CLKC_RESET_VID_CLK_CNTL_SOFT_RESET] = {
+- .reg = HHI_VID_CLK_CNTL, .bit_idx = 15
++ .reg = HHI_VID_CLK_CNTL,
++ .bit_idx = 15,
++ .active_low = false,
+ },
+ [CLKC_RESET_VID_DIVIDER_CNTL_SOFT_RESET_POST] = {
+- .reg = HHI_VID_DIVIDER_CNTL, .bit_idx = 7
++ .reg = HHI_VID_DIVIDER_CNTL,
++ .bit_idx = 7,
++ .active_low = false,
+ },
+ [CLKC_RESET_VID_DIVIDER_CNTL_SOFT_RESET_PRE] = {
+- .reg = HHI_VID_DIVIDER_CNTL, .bit_idx = 3
++ .reg = HHI_VID_DIVIDER_CNTL,
++ .bit_idx = 3,
++ .active_low = false,
+ },
+ [CLKC_RESET_VID_DIVIDER_CNTL_RESET_N_POST] = {
+- .reg = HHI_VID_DIVIDER_CNTL, .bit_idx = 1
++ .reg = HHI_VID_DIVIDER_CNTL,
++ .bit_idx = 1,
++ .active_low = true,
+ },
+ [CLKC_RESET_VID_DIVIDER_CNTL_RESET_N_PRE] = {
+- .reg = HHI_VID_DIVIDER_CNTL, .bit_idx = 0
++ .reg = HHI_VID_DIVIDER_CNTL,
++ .bit_idx = 0,
++ .active_low = true,
+ },
+ };
+
+@@ -3562,22 +3604,22 @@ static int meson8b_clk_reset_update(struct reset_controller_dev *rcdev,
+ {
+ struct meson8b_clk_reset *meson8b_clk_reset =
+ container_of(rcdev, struct meson8b_clk_reset, reset);
+- unsigned long flags;
+ const struct meson8b_clk_reset_line *reset;
++ unsigned int value = 0;
++ unsigned long flags;
+
+ if (id >= ARRAY_SIZE(meson8b_clk_reset_bits))
+ return -EINVAL;
+
+ reset = &meson8b_clk_reset_bits[id];
+
++ if (assert != reset->active_low)
++ value = BIT(reset->bit_idx);
++
+ spin_lock_irqsave(&meson_clk_lock, flags);
+
+- if (assert)
+- regmap_update_bits(meson8b_clk_reset->regmap, reset->reg,
+- BIT(reset->bit_idx), BIT(reset->bit_idx));
+- else
+- regmap_update_bits(meson8b_clk_reset->regmap, reset->reg,
+- BIT(reset->bit_idx), 0);
++ regmap_update_bits(meson8b_clk_reset->regmap, reset->reg,
++ BIT(reset->bit_idx), value);
+
+ spin_unlock_irqrestore(&meson_clk_lock, flags);
+
+diff --git a/drivers/clk/meson/meson8b.h b/drivers/clk/meson/meson8b.h
+index c889fbeec30f..c91fb07fcb65 100644
+--- a/drivers/clk/meson/meson8b.h
++++ b/drivers/clk/meson/meson8b.h
+@@ -20,6 +20,10 @@
+ * [0] http://dn.odroid.com/S805/Datasheet/S805_Datasheet%20V0.8%2020150126.pdf
+ */
+ #define HHI_GP_PLL_CNTL 0x40 /* 0x10 offset in data sheet */
++#define HHI_GP_PLL_CNTL2 0x44 /* 0x11 offset in data sheet */
++#define HHI_GP_PLL_CNTL3 0x48 /* 0x12 offset in data sheet */
++#define HHI_GP_PLL_CNTL4 0x4C /* 0x13 offset in data sheet */
++#define HHI_GP_PLL_CNTL5 0x50 /* 0x14 offset in data sheet */
+ #define HHI_VIID_CLK_DIV 0x128 /* 0x4a offset in data sheet */
+ #define HHI_VIID_CLK_CNTL 0x12c /* 0x4b offset in data sheet */
+ #define HHI_GCLK_MPEG0 0x140 /* 0x50 offset in data sheet */
+diff --git a/drivers/clk/qcom/gcc-msm8916.c b/drivers/clk/qcom/gcc-msm8916.c
+index 4e329a7baf2b..17e4a5a2a9fd 100644
+--- a/drivers/clk/qcom/gcc-msm8916.c
++++ b/drivers/clk/qcom/gcc-msm8916.c
+@@ -260,7 +260,7 @@ static struct clk_pll gpll0 = {
+ .l_reg = 0x21004,
+ .m_reg = 0x21008,
+ .n_reg = 0x2100c,
+- .config_reg = 0x21014,
++ .config_reg = 0x21010,
+ .mode_reg = 0x21000,
+ .status_reg = 0x2101c,
+ .status_bit = 17,
+@@ -287,7 +287,7 @@ static struct clk_pll gpll1 = {
+ .l_reg = 0x20004,
+ .m_reg = 0x20008,
+ .n_reg = 0x2000c,
+- .config_reg = 0x20014,
++ .config_reg = 0x20010,
+ .mode_reg = 0x20000,
+ .status_reg = 0x2001c,
+ .status_bit = 17,
+@@ -314,7 +314,7 @@ static struct clk_pll gpll2 = {
+ .l_reg = 0x4a004,
+ .m_reg = 0x4a008,
+ .n_reg = 0x4a00c,
+- .config_reg = 0x4a014,
++ .config_reg = 0x4a010,
+ .mode_reg = 0x4a000,
+ .status_reg = 0x4a01c,
+ .status_bit = 17,
+@@ -341,7 +341,7 @@ static struct clk_pll bimc_pll = {
+ .l_reg = 0x23004,
+ .m_reg = 0x23008,
+ .n_reg = 0x2300c,
+- .config_reg = 0x23014,
++ .config_reg = 0x23010,
+ .mode_reg = 0x23000,
+ .status_reg = 0x2301c,
+ .status_bit = 17,
+diff --git a/drivers/clk/renesas/renesas-cpg-mssr.c b/drivers/clk/renesas/renesas-cpg-mssr.c
+index a2663fbbd7a5..d6a53c99b114 100644
+--- a/drivers/clk/renesas/renesas-cpg-mssr.c
++++ b/drivers/clk/renesas/renesas-cpg-mssr.c
+@@ -812,7 +812,8 @@ static int cpg_mssr_suspend_noirq(struct device *dev)
+ /* Save module registers with bits under our control */
+ for (reg = 0; reg < ARRAY_SIZE(priv->smstpcr_saved); reg++) {
+ if (priv->smstpcr_saved[reg].mask)
+- priv->smstpcr_saved[reg].val =
++ priv->smstpcr_saved[reg].val = priv->stbyctrl ?
++ readb(priv->base + STBCR(reg)) :
+ readl(priv->base + SMSTPCR(reg));
+ }
+
+@@ -872,8 +873,9 @@ static int cpg_mssr_resume_noirq(struct device *dev)
+ }
+
+ if (!i)
+- dev_warn(dev, "Failed to enable SMSTP %p[0x%x]\n",
+- priv->base + SMSTPCR(reg), oldval & mask);
++ dev_warn(dev, "Failed to enable %s%u[0x%x]\n",
++ priv->stbyctrl ? "STB" : "SMSTP", reg,
++ oldval & mask);
+ }
+
+ return 0;
+diff --git a/drivers/clk/samsung/clk-exynos5420.c b/drivers/clk/samsung/clk-exynos5420.c
+index c9e5a1fb6653..edb2363c735a 100644
+--- a/drivers/clk/samsung/clk-exynos5420.c
++++ b/drivers/clk/samsung/clk-exynos5420.c
+@@ -540,7 +540,7 @@ static const struct samsung_div_clock exynos5800_div_clks[] __initconst = {
+
+ static const struct samsung_gate_clock exynos5800_gate_clks[] __initconst = {
+ GATE(CLK_ACLK550_CAM, "aclk550_cam", "mout_user_aclk550_cam",
+- GATE_BUS_TOP, 24, 0, 0),
++ GATE_BUS_TOP, 24, CLK_IS_CRITICAL, 0),
+ GATE(CLK_ACLK432_SCALER, "aclk432_scaler", "mout_user_aclk432_scaler",
+ GATE_BUS_TOP, 27, CLK_IS_CRITICAL, 0),
+ };
+@@ -943,25 +943,25 @@ static const struct samsung_gate_clock exynos5x_gate_clks[] __initconst = {
+ GATE(0, "aclk300_jpeg", "mout_user_aclk300_jpeg",
+ GATE_BUS_TOP, 4, CLK_IGNORE_UNUSED, 0),
+ GATE(0, "aclk333_432_isp0", "mout_user_aclk333_432_isp0",
+- GATE_BUS_TOP, 5, 0, 0),
++ GATE_BUS_TOP, 5, CLK_IS_CRITICAL, 0),
+ GATE(0, "aclk300_gscl", "mout_user_aclk300_gscl",
+ GATE_BUS_TOP, 6, CLK_IS_CRITICAL, 0),
+ GATE(0, "aclk333_432_gscl", "mout_user_aclk333_432_gscl",
+ GATE_BUS_TOP, 7, CLK_IGNORE_UNUSED, 0),
+ GATE(0, "aclk333_432_isp", "mout_user_aclk333_432_isp",
+- GATE_BUS_TOP, 8, 0, 0),
++ GATE_BUS_TOP, 8, CLK_IS_CRITICAL, 0),
+ GATE(CLK_PCLK66_GPIO, "pclk66_gpio", "mout_user_pclk66_gpio",
+ GATE_BUS_TOP, 9, CLK_IGNORE_UNUSED, 0),
+ GATE(0, "aclk66_psgen", "mout_user_aclk66_psgen",
+ GATE_BUS_TOP, 10, CLK_IGNORE_UNUSED, 0),
+ GATE(0, "aclk266_isp", "mout_user_aclk266_isp",
+- GATE_BUS_TOP, 13, 0, 0),
++ GATE_BUS_TOP, 13, CLK_IS_CRITICAL, 0),
+ GATE(0, "aclk166", "mout_user_aclk166",
+ GATE_BUS_TOP, 14, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK333, "aclk333", "mout_user_aclk333",
+ GATE_BUS_TOP, 15, CLK_IS_CRITICAL, 0),
+ GATE(0, "aclk400_isp", "mout_user_aclk400_isp",
+- GATE_BUS_TOP, 16, 0, 0),
++ GATE_BUS_TOP, 16, CLK_IS_CRITICAL, 0),
+ GATE(0, "aclk400_mscl", "mout_user_aclk400_mscl",
+ GATE_BUS_TOP, 17, CLK_IS_CRITICAL, 0),
+ GATE(0, "aclk200_disp1", "mout_user_aclk200_disp1",
+@@ -1161,8 +1161,10 @@ static const struct samsung_gate_clock exynos5x_gate_clks[] __initconst = {
+ GATE_IP_GSCL1, 3, 0, 0),
+ GATE(CLK_SMMU_FIMCL1, "smmu_fimcl1", "dout_gscl_blk_333",
+ GATE_IP_GSCL1, 4, 0, 0),
+- GATE(CLK_GSCL_WA, "gscl_wa", "sclk_gscl_wa", GATE_IP_GSCL1, 12, 0, 0),
+- GATE(CLK_GSCL_WB, "gscl_wb", "sclk_gscl_wb", GATE_IP_GSCL1, 13, 0, 0),
++ GATE(CLK_GSCL_WA, "gscl_wa", "sclk_gscl_wa", GATE_IP_GSCL1, 12,
++ CLK_IS_CRITICAL, 0),
++ GATE(CLK_GSCL_WB, "gscl_wb", "sclk_gscl_wb", GATE_IP_GSCL1, 13,
++ CLK_IS_CRITICAL, 0),
+ GATE(CLK_SMMU_FIMCL3, "smmu_fimcl3,", "dout_gscl_blk_333",
+ GATE_IP_GSCL1, 16, 0, 0),
+ GATE(CLK_FIMC_LITE3, "fimc_lite3", "aclk333_432_gscl",
+diff --git a/drivers/clk/samsung/clk-exynos5433.c b/drivers/clk/samsung/clk-exynos5433.c
+index 4b1aa9382ad2..6f29ecd0442e 100644
+--- a/drivers/clk/samsung/clk-exynos5433.c
++++ b/drivers/clk/samsung/clk-exynos5433.c
+@@ -1706,7 +1706,8 @@ static const struct samsung_gate_clock peric_gate_clks[] __initconst = {
+ GATE(CLK_SCLK_PCM1, "sclk_pcm1", "sclk_pcm1_peric",
+ ENABLE_SCLK_PERIC, 7, CLK_SET_RATE_PARENT, 0),
+ GATE(CLK_SCLK_I2S1, "sclk_i2s1", "sclk_i2s1_peric",
+- ENABLE_SCLK_PERIC, 6, CLK_SET_RATE_PARENT, 0),
++ ENABLE_SCLK_PERIC, 6,
++ CLK_SET_RATE_PARENT | CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_SCLK_SPI2, "sclk_spi2", "sclk_spi2_peric", ENABLE_SCLK_PERIC,
+ 5, CLK_SET_RATE_PARENT, 0),
+ GATE(CLK_SCLK_SPI1, "sclk_spi1", "sclk_spi1_peric", ENABLE_SCLK_PERIC,
+diff --git a/drivers/clk/sprd/pll.c b/drivers/clk/sprd/pll.c
+index 15791484388f..13a322b2535a 100644
+--- a/drivers/clk/sprd/pll.c
++++ b/drivers/clk/sprd/pll.c
+@@ -106,7 +106,7 @@ static unsigned long _sprd_pll_recalc_rate(const struct sprd_pll *pll,
+
+ cfg = kcalloc(regs_num, sizeof(*cfg), GFP_KERNEL);
+ if (!cfg)
+- return -ENOMEM;
++ return parent_rate;
+
+ for (i = 0; i < regs_num; i++)
+ cfg[i] = sprd_pll_read(pll, i);
+diff --git a/drivers/clk/st/clk-flexgen.c b/drivers/clk/st/clk-flexgen.c
+index 4413b6e04a8e..55873d4b7603 100644
+--- a/drivers/clk/st/clk-flexgen.c
++++ b/drivers/clk/st/clk-flexgen.c
+@@ -375,6 +375,7 @@ static void __init st_of_flexgen_setup(struct device_node *np)
+ break;
+ }
+
++ flex_flags &= ~CLK_IS_CRITICAL;
+ of_clk_detect_critical(np, i, &flex_flags);
+
+ /*
+diff --git a/drivers/clk/sunxi/clk-sunxi.c b/drivers/clk/sunxi/clk-sunxi.c
+index 27201fd26e44..e1aa1fbac48a 100644
+--- a/drivers/clk/sunxi/clk-sunxi.c
++++ b/drivers/clk/sunxi/clk-sunxi.c
+@@ -90,7 +90,7 @@ static void sun6i_a31_get_pll1_factors(struct factors_request *req)
+ * Round down the frequency to the closest multiple of either
+ * 6 or 16
+ */
+- u32 round_freq_6 = round_down(freq_mhz, 6);
++ u32 round_freq_6 = rounddown(freq_mhz, 6);
+ u32 round_freq_16 = round_down(freq_mhz, 16);
+
+ if (round_freq_6 > round_freq_16)
+diff --git a/drivers/clk/ti/composite.c b/drivers/clk/ti/composite.c
+index 6a89936ba03a..eaa43575cfa5 100644
+--- a/drivers/clk/ti/composite.c
++++ b/drivers/clk/ti/composite.c
+@@ -196,6 +196,7 @@ cleanup:
+ if (!cclk->comp_clks[i])
+ continue;
+ list_del(&cclk->comp_clks[i]->link);
++ kfree(cclk->comp_clks[i]->parent_names);
+ kfree(cclk->comp_clks[i]);
+ }
+
+diff --git a/drivers/clk/zynqmp/clkc.c b/drivers/clk/zynqmp/clkc.c
+index 10e89f23880b..b66c3a62233a 100644
+--- a/drivers/clk/zynqmp/clkc.c
++++ b/drivers/clk/zynqmp/clkc.c
+@@ -558,7 +558,7 @@ static struct clk_hw *zynqmp_register_clk_topology(int clk_id, char *clk_name,
+ {
+ int j;
+ u32 num_nodes, clk_dev_id;
+- char *clk_out = NULL;
++ char *clk_out[MAX_NODES];
+ struct clock_topology *nodes;
+ struct clk_hw *hw = NULL;
+
+@@ -572,16 +572,16 @@ static struct clk_hw *zynqmp_register_clk_topology(int clk_id, char *clk_name,
+ * Intermediate clock names are postfixed with type of clock.
+ */
+ if (j != (num_nodes - 1)) {
+- clk_out = kasprintf(GFP_KERNEL, "%s%s", clk_name,
++ clk_out[j] = kasprintf(GFP_KERNEL, "%s%s", clk_name,
+ clk_type_postfix[nodes[j].type]);
+ } else {
+- clk_out = kasprintf(GFP_KERNEL, "%s", clk_name);
++ clk_out[j] = kasprintf(GFP_KERNEL, "%s", clk_name);
+ }
+
+ if (!clk_topology[nodes[j].type])
+ continue;
+
+- hw = (*clk_topology[nodes[j].type])(clk_out, clk_dev_id,
++ hw = (*clk_topology[nodes[j].type])(clk_out[j], clk_dev_id,
+ parent_names,
+ num_parents,
+ &nodes[j]);
+@@ -590,9 +590,12 @@ static struct clk_hw *zynqmp_register_clk_topology(int clk_id, char *clk_name,
+ __func__, clk_dev_id, clk_name,
+ PTR_ERR(hw));
+
+- parent_names[0] = clk_out;
++ parent_names[0] = clk_out[j];
+ }
+- kfree(clk_out);
++
++ for (j = 0; j < num_nodes; j++)
++ kfree(clk_out[j]);
++
+ return hw;
+ }
+
+diff --git a/drivers/clk/zynqmp/divider.c b/drivers/clk/zynqmp/divider.c
+index 4be2cc76aa2e..9bc4f9409aea 100644
+--- a/drivers/clk/zynqmp/divider.c
++++ b/drivers/clk/zynqmp/divider.c
+@@ -111,23 +111,30 @@ static unsigned long zynqmp_clk_divider_recalc_rate(struct clk_hw *hw,
+
+ static void zynqmp_get_divider2_val(struct clk_hw *hw,
+ unsigned long rate,
+- unsigned long parent_rate,
+ struct zynqmp_clk_divider *divider,
+ int *bestdiv)
+ {
+ int div1;
+ int div2;
+ long error = LONG_MAX;
+- struct clk_hw *parent_hw = clk_hw_get_parent(hw);
+- struct zynqmp_clk_divider *pdivider = to_zynqmp_clk_divider(parent_hw);
++ unsigned long div1_prate;
++ struct clk_hw *div1_parent_hw;
++ struct clk_hw *div2_parent_hw = clk_hw_get_parent(hw);
++ struct zynqmp_clk_divider *pdivider =
++ to_zynqmp_clk_divider(div2_parent_hw);
+
+ if (!pdivider)
+ return;
+
++ div1_parent_hw = clk_hw_get_parent(div2_parent_hw);
++ if (!div1_parent_hw)
++ return;
++
++ div1_prate = clk_hw_get_rate(div1_parent_hw);
+ *bestdiv = 1;
+ for (div1 = 1; div1 <= pdivider->max_div;) {
+ for (div2 = 1; div2 <= divider->max_div;) {
+- long new_error = ((parent_rate / div1) / div2) - rate;
++ long new_error = ((div1_prate / div1) / div2) - rate;
+
+ if (abs(new_error) < abs(error)) {
+ *bestdiv = div2;
+@@ -192,7 +199,7 @@ static long zynqmp_clk_divider_round_rate(struct clk_hw *hw,
+ */
+ if (div_type == TYPE_DIV2 &&
+ (clk_hw_get_flags(hw) & CLK_SET_RATE_PARENT)) {
+- zynqmp_get_divider2_val(hw, rate, *prate, divider, &bestdiv);
++ zynqmp_get_divider2_val(hw, rate, divider, &bestdiv);
+ }
+
+ if ((clk_hw_get_flags(hw) & CLK_SET_RATE_PARENT) && divider->is_frac)
+diff --git a/drivers/crypto/hisilicon/sgl.c b/drivers/crypto/hisilicon/sgl.c
+index 0e8c7e324fb4..725a739800b0 100644
+--- a/drivers/crypto/hisilicon/sgl.c
++++ b/drivers/crypto/hisilicon/sgl.c
+@@ -66,7 +66,8 @@ struct hisi_acc_sgl_pool *hisi_acc_create_sgl_pool(struct device *dev,
+
+ sgl_size = sizeof(struct acc_hw_sge) * sge_nr +
+ sizeof(struct hisi_acc_hw_sgl);
+- block_size = PAGE_SIZE * (1 << (MAX_ORDER - 1));
++ block_size = 1 << (PAGE_SHIFT + MAX_ORDER <= 32 ?
++ PAGE_SHIFT + MAX_ORDER - 1 : 31);
+ sgl_num_per_block = block_size / sgl_size;
+ block_num = count / sgl_num_per_block;
+ remain_sgl = count % sgl_num_per_block;
+diff --git a/drivers/crypto/marvell/octeontx/otx_cptvf_algs.c b/drivers/crypto/marvell/octeontx/otx_cptvf_algs.c
+index 06202bcffb33..a370c99ecf4c 100644
+--- a/drivers/crypto/marvell/octeontx/otx_cptvf_algs.c
++++ b/drivers/crypto/marvell/octeontx/otx_cptvf_algs.c
+@@ -118,6 +118,9 @@ static void otx_cpt_aead_callback(int status, void *arg1, void *arg2)
+ struct otx_cpt_req_info *cpt_req;
+ struct pci_dev *pdev;
+
++ if (!cpt_info)
++ goto complete;
++
+ cpt_req = cpt_info->req;
+ if (!status) {
+ /*
+@@ -129,10 +132,10 @@ static void otx_cpt_aead_callback(int status, void *arg1, void *arg2)
+ !cpt_req->is_enc)
+ status = validate_hmac_cipher_null(cpt_req);
+ }
+- if (cpt_info) {
+- pdev = cpt_info->pdev;
+- do_request_cleanup(pdev, cpt_info);
+- }
++ pdev = cpt_info->pdev;
++ do_request_cleanup(pdev, cpt_info);
++
++complete:
+ if (areq)
+ areq->complete(areq, status);
+ }
+diff --git a/drivers/crypto/omap-sham.c b/drivers/crypto/omap-sham.c
+index e4072cd38585..a82a3596dca3 100644
+--- a/drivers/crypto/omap-sham.c
++++ b/drivers/crypto/omap-sham.c
+@@ -169,8 +169,6 @@ struct omap_sham_hmac_ctx {
+ };
+
+ struct omap_sham_ctx {
+- struct omap_sham_dev *dd;
+-
+ unsigned long flags;
+
+ /* fallback stuff */
+@@ -751,8 +749,15 @@ static int omap_sham_align_sgs(struct scatterlist *sg,
+ int offset = rctx->offset;
+ int bufcnt = rctx->bufcnt;
+
+- if (!sg || !sg->length || !nbytes)
++ if (!sg || !sg->length || !nbytes) {
++ if (bufcnt) {
++ sg_init_table(rctx->sgl, 1);
++ sg_set_buf(rctx->sgl, rctx->dd->xmit_buf, bufcnt);
++ rctx->sg = rctx->sgl;
++ }
++
+ return 0;
++ }
+
+ new_len = nbytes;
+
+@@ -896,7 +901,7 @@ static int omap_sham_prepare_request(struct ahash_request *req, bool update)
+ if (hash_later < 0)
+ hash_later = 0;
+
+- if (hash_later) {
++ if (hash_later && hash_later <= rctx->buflen) {
+ scatterwalk_map_and_copy(rctx->buffer,
+ req->src,
+ req->nbytes - hash_later,
+@@ -926,27 +931,35 @@ static int omap_sham_update_dma_stop(struct omap_sham_dev *dd)
+ return 0;
+ }
+
++struct omap_sham_dev *omap_sham_find_dev(struct omap_sham_reqctx *ctx)
++{
++ struct omap_sham_dev *dd;
++
++ if (ctx->dd)
++ return ctx->dd;
++
++ spin_lock_bh(&sham.lock);
++ dd = list_first_entry(&sham.dev_list, struct omap_sham_dev, list);
++ list_move_tail(&dd->list, &sham.dev_list);
++ ctx->dd = dd;
++ spin_unlock_bh(&sham.lock);
++
++ return dd;
++}
++
+ static int omap_sham_init(struct ahash_request *req)
+ {
+ struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
+ struct omap_sham_ctx *tctx = crypto_ahash_ctx(tfm);
+ struct omap_sham_reqctx *ctx = ahash_request_ctx(req);
+- struct omap_sham_dev *dd = NULL, *tmp;
++ struct omap_sham_dev *dd;
+ int bs = 0;
+
+- spin_lock_bh(&sham.lock);
+- if (!tctx->dd) {
+- list_for_each_entry(tmp, &sham.dev_list, list) {
+- dd = tmp;
+- break;
+- }
+- tctx->dd = dd;
+- } else {
+- dd = tctx->dd;
+- }
+- spin_unlock_bh(&sham.lock);
++ ctx->dd = NULL;
+
+- ctx->dd = dd;
++ dd = omap_sham_find_dev(ctx);
++ if (!dd)
++ return -ENODEV;
+
+ ctx->flags = 0;
+
+@@ -1216,8 +1229,7 @@ err1:
+ static int omap_sham_enqueue(struct ahash_request *req, unsigned int op)
+ {
+ struct omap_sham_reqctx *ctx = ahash_request_ctx(req);
+- struct omap_sham_ctx *tctx = crypto_tfm_ctx(req->base.tfm);
+- struct omap_sham_dev *dd = tctx->dd;
++ struct omap_sham_dev *dd = ctx->dd;
+
+ ctx->op = op;
+
+@@ -1227,7 +1239,7 @@ static int omap_sham_enqueue(struct ahash_request *req, unsigned int op)
+ static int omap_sham_update(struct ahash_request *req)
+ {
+ struct omap_sham_reqctx *ctx = ahash_request_ctx(req);
+- struct omap_sham_dev *dd = ctx->dd;
++ struct omap_sham_dev *dd = omap_sham_find_dev(ctx);
+
+ if (!req->nbytes)
+ return 0;
+@@ -1331,21 +1343,8 @@ static int omap_sham_setkey(struct crypto_ahash *tfm, const u8 *key,
+ struct omap_sham_hmac_ctx *bctx = tctx->base;
+ int bs = crypto_shash_blocksize(bctx->shash);
+ int ds = crypto_shash_digestsize(bctx->shash);
+- struct omap_sham_dev *dd = NULL, *tmp;
+ int err, i;
+
+- spin_lock_bh(&sham.lock);
+- if (!tctx->dd) {
+- list_for_each_entry(tmp, &sham.dev_list, list) {
+- dd = tmp;
+- break;
+- }
+- tctx->dd = dd;
+- } else {
+- dd = tctx->dd;
+- }
+- spin_unlock_bh(&sham.lock);
+-
+ err = crypto_shash_setkey(tctx->fallback, key, keylen);
+ if (err)
+ return err;
+@@ -1363,7 +1362,7 @@ static int omap_sham_setkey(struct crypto_ahash *tfm, const u8 *key,
+
+ memset(bctx->ipad + keylen, 0, bs - keylen);
+
+- if (!test_bit(FLAGS_AUTO_XOR, &dd->flags)) {
++ if (!test_bit(FLAGS_AUTO_XOR, &sham.flags)) {
+ memcpy(bctx->opad, bctx->ipad, bs);
+
+ for (i = 0; i < bs; i++) {
+@@ -2167,6 +2166,7 @@ static int omap_sham_probe(struct platform_device *pdev)
+ }
+
+ dd->flags |= dd->pdata->flags;
++ sham.flags |= dd->pdata->flags;
+
+ pm_runtime_use_autosuspend(dev);
+ pm_runtime_set_autosuspend_delay(dev, DEFAULT_AUTOSUSPEND_DELAY);
+@@ -2194,6 +2194,9 @@ static int omap_sham_probe(struct platform_device *pdev)
+ spin_unlock(&sham.lock);
+
+ for (i = 0; i < dd->pdata->algs_info_size; i++) {
++ if (dd->pdata->algs_info[i].registered)
++ break;
++
+ for (j = 0; j < dd->pdata->algs_info[i].size; j++) {
+ struct ahash_alg *alg;
+
+@@ -2245,9 +2248,11 @@ static int omap_sham_remove(struct platform_device *pdev)
+ list_del(&dd->list);
+ spin_unlock(&sham.lock);
+ for (i = dd->pdata->algs_info_size - 1; i >= 0; i--)
+- for (j = dd->pdata->algs_info[i].registered - 1; j >= 0; j--)
++ for (j = dd->pdata->algs_info[i].registered - 1; j >= 0; j--) {
+ crypto_unregister_ahash(
+ &dd->pdata->algs_info[i].algs_list[j]);
++ dd->pdata->algs_info[i].registered--;
++ }
+ tasklet_kill(&dd->done_task);
+ pm_runtime_disable(&pdev->dev);
+
+diff --git a/drivers/extcon/extcon-adc-jack.c b/drivers/extcon/extcon-adc-jack.c
+index ad02dc6747a4..0317b614b680 100644
+--- a/drivers/extcon/extcon-adc-jack.c
++++ b/drivers/extcon/extcon-adc-jack.c
+@@ -124,7 +124,7 @@ static int adc_jack_probe(struct platform_device *pdev)
+ for (i = 0; data->adc_conditions[i].id != EXTCON_NONE; i++);
+ data->num_conditions = i;
+
+- data->chan = iio_channel_get(&pdev->dev, pdata->consumer_channel);
++ data->chan = devm_iio_channel_get(&pdev->dev, pdata->consumer_channel);
+ if (IS_ERR(data->chan))
+ return PTR_ERR(data->chan);
+
+@@ -164,7 +164,6 @@ static int adc_jack_remove(struct platform_device *pdev)
+
+ free_irq(data->irq, data);
+ cancel_work_sync(&data->handler.work);
+- iio_channel_release(data->chan);
+
+ return 0;
+ }
+diff --git a/drivers/firmware/imx/imx-scu.c b/drivers/firmware/imx/imx-scu.c
+index b3da2e193ad2..176ddd151375 100644
+--- a/drivers/firmware/imx/imx-scu.c
++++ b/drivers/firmware/imx/imx-scu.c
+@@ -314,6 +314,7 @@ static int imx_scu_probe(struct platform_device *pdev)
+ if (ret != -EPROBE_DEFER)
+ dev_err(dev, "Failed to request mbox chan %s ret %d\n",
+ chan_name, ret);
++ kfree(chan_name);
+ return ret;
+ }
+
+diff --git a/drivers/firmware/qcom_scm.c b/drivers/firmware/qcom_scm.c
+index 059bb0fbae9e..4701487573f7 100644
+--- a/drivers/firmware/qcom_scm.c
++++ b/drivers/firmware/qcom_scm.c
+@@ -6,7 +6,6 @@
+ #include <linux/init.h>
+ #include <linux/cpumask.h>
+ #include <linux/export.h>
+-#include <linux/dma-direct.h>
+ #include <linux/dma-mapping.h>
+ #include <linux/module.h>
+ #include <linux/types.h>
+@@ -806,8 +805,7 @@ int qcom_scm_assign_mem(phys_addr_t mem_addr, size_t mem_sz,
+ struct qcom_scm_mem_map_info *mem_to_map;
+ phys_addr_t mem_to_map_phys;
+ phys_addr_t dest_phys;
+- phys_addr_t ptr_phys;
+- dma_addr_t ptr_dma;
++ dma_addr_t ptr_phys;
+ size_t mem_to_map_sz;
+ size_t dest_sz;
+ size_t src_sz;
+@@ -824,10 +822,9 @@ int qcom_scm_assign_mem(phys_addr_t mem_addr, size_t mem_sz,
+ ptr_sz = ALIGN(src_sz, SZ_64) + ALIGN(mem_to_map_sz, SZ_64) +
+ ALIGN(dest_sz, SZ_64);
+
+- ptr = dma_alloc_coherent(__scm->dev, ptr_sz, &ptr_dma, GFP_KERNEL);
++ ptr = dma_alloc_coherent(__scm->dev, ptr_sz, &ptr_phys, GFP_KERNEL);
+ if (!ptr)
+ return -ENOMEM;
+- ptr_phys = dma_to_phys(__scm->dev, ptr_dma);
+
+ /* Fill source vmid detail */
+ src = ptr;
+@@ -855,7 +852,7 @@ int qcom_scm_assign_mem(phys_addr_t mem_addr, size_t mem_sz,
+
+ ret = __qcom_scm_assign_mem(__scm->dev, mem_to_map_phys, mem_to_map_sz,
+ ptr_phys, src_sz, dest_phys, dest_sz);
+- dma_free_coherent(__scm->dev, ptr_sz, ptr, ptr_dma);
++ dma_free_coherent(__scm->dev, ptr_sz, ptr, ptr_phys);
+ if (ret) {
+ dev_err(__scm->dev,
+ "Assign memory protection call failed %d\n", ret);
+diff --git a/drivers/fpga/dfl-afu-dma-region.c b/drivers/fpga/dfl-afu-dma-region.c
+index 62f924489db5..5942343a5d6e 100644
+--- a/drivers/fpga/dfl-afu-dma-region.c
++++ b/drivers/fpga/dfl-afu-dma-region.c
+@@ -61,10 +61,10 @@ static int afu_dma_pin_pages(struct dfl_feature_platform_data *pdata,
+ region->pages);
+ if (pinned < 0) {
+ ret = pinned;
+- goto put_pages;
++ goto free_pages;
+ } else if (pinned != npages) {
+ ret = -EFAULT;
+- goto free_pages;
++ goto put_pages;
+ }
+
+ dev_dbg(dev, "%d pages pinned\n", pinned);
+diff --git a/drivers/gpio/gpio-dwapb.c b/drivers/gpio/gpio-dwapb.c
+index 92e127e74813..ed6061b5cca1 100644
+--- a/drivers/gpio/gpio-dwapb.c
++++ b/drivers/gpio/gpio-dwapb.c
+@@ -49,7 +49,9 @@
+ #define GPIO_EXT_PORTC 0x58
+ #define GPIO_EXT_PORTD 0x5c
+
++#define DWAPB_DRIVER_NAME "gpio-dwapb"
+ #define DWAPB_MAX_PORTS 4
++
+ #define GPIO_EXT_PORT_STRIDE 0x04 /* register stride 32 bits */
+ #define GPIO_SWPORT_DR_STRIDE 0x0c /* register stride 3*32 bits */
+ #define GPIO_SWPORT_DDR_STRIDE 0x0c /* register stride 3*32 bits */
+@@ -398,7 +400,7 @@ static void dwapb_configure_irqs(struct dwapb_gpio *gpio,
+ return;
+
+ err = irq_alloc_domain_generic_chips(gpio->domain, ngpio, 2,
+- "gpio-dwapb", handle_level_irq,
++ DWAPB_DRIVER_NAME, handle_level_irq,
+ IRQ_NOREQUEST, 0,
+ IRQ_GC_INIT_NESTED_LOCK);
+ if (err) {
+@@ -455,7 +457,7 @@ static void dwapb_configure_irqs(struct dwapb_gpio *gpio,
+ */
+ err = devm_request_irq(gpio->dev, pp->irq[0],
+ dwapb_irq_handler_mfd,
+- IRQF_SHARED, "gpio-dwapb-mfd", gpio);
++ IRQF_SHARED, DWAPB_DRIVER_NAME, gpio);
+ if (err) {
+ dev_err(gpio->dev, "error requesting IRQ\n");
+ irq_domain_remove(gpio->domain);
+@@ -533,26 +535,33 @@ static int dwapb_gpio_add_port(struct dwapb_gpio *gpio,
+ dwapb_configure_irqs(gpio, port, pp);
+
+ err = gpiochip_add_data(&port->gc, port);
+- if (err)
++ if (err) {
+ dev_err(gpio->dev, "failed to register gpiochip for port%d\n",
+ port->idx);
+- else
+- port->is_registered = true;
++ return err;
++ }
+
+ /* Add GPIO-signaled ACPI event support */
+- if (pp->has_irq)
+- acpi_gpiochip_request_interrupts(&port->gc);
++ acpi_gpiochip_request_interrupts(&port->gc);
+
+- return err;
++ port->is_registered = true;
++
++ return 0;
+ }
+
+ static void dwapb_gpio_unregister(struct dwapb_gpio *gpio)
+ {
+ unsigned int m;
+
+- for (m = 0; m < gpio->nr_ports; ++m)
+- if (gpio->ports[m].is_registered)
+- gpiochip_remove(&gpio->ports[m].gc);
++ for (m = 0; m < gpio->nr_ports; ++m) {
++ struct dwapb_gpio_port *port = &gpio->ports[m];
++
++ if (!port->is_registered)
++ continue;
++
++ acpi_gpiochip_free_interrupts(&port->gc);
++ gpiochip_remove(&port->gc);
++ }
+ }
+
+ static struct dwapb_platform_data *
+@@ -836,7 +845,7 @@ static SIMPLE_DEV_PM_OPS(dwapb_gpio_pm_ops, dwapb_gpio_suspend,
+
+ static struct platform_driver dwapb_gpio_driver = {
+ .driver = {
+- .name = "gpio-dwapb",
++ .name = DWAPB_DRIVER_NAME,
+ .pm = &dwapb_gpio_pm_ops,
+ .of_match_table = of_match_ptr(dwapb_of_match),
+ .acpi_match_table = ACPI_PTR(dwapb_acpi_match),
+@@ -850,3 +859,4 @@ module_platform_driver(dwapb_gpio_driver);
+ MODULE_LICENSE("GPL");
+ MODULE_AUTHOR("Jamie Iles");
+ MODULE_DESCRIPTION("Synopsys DesignWare APB GPIO driver");
++MODULE_ALIAS("platform:" DWAPB_DRIVER_NAME);
+diff --git a/drivers/gpio/gpio-mlxbf2.c b/drivers/gpio/gpio-mlxbf2.c
+index da570e63589d..cc0dd8593a4b 100644
+--- a/drivers/gpio/gpio-mlxbf2.c
++++ b/drivers/gpio/gpio-mlxbf2.c
+@@ -110,8 +110,8 @@ static int mlxbf2_gpio_get_lock_res(struct platform_device *pdev)
+ }
+
+ yu_arm_gpio_lock_param.io = devm_ioremap(dev, res->start, size);
+- if (IS_ERR(yu_arm_gpio_lock_param.io))
+- ret = PTR_ERR(yu_arm_gpio_lock_param.io);
++ if (!yu_arm_gpio_lock_param.io)
++ ret = -ENOMEM;
+
+ exit:
+ mutex_unlock(yu_arm_gpio_lock_param.lock);
+diff --git a/drivers/gpio/gpio-pca953x.c b/drivers/gpio/gpio-pca953x.c
+index 4269ea9a817e..01011a780688 100644
+--- a/drivers/gpio/gpio-pca953x.c
++++ b/drivers/gpio/gpio-pca953x.c
+@@ -307,8 +307,22 @@ static const struct regmap_config pca953x_i2c_regmap = {
+ .volatile_reg = pca953x_volatile_register,
+
+ .cache_type = REGCACHE_RBTREE,
+- /* REVISIT: should be 0x7f but some 24 bit chips use REG_ADDR_AI */
+- .max_register = 0xff,
++ .max_register = 0x7f,
++};
++
++static const struct regmap_config pca953x_ai_i2c_regmap = {
++ .reg_bits = 8,
++ .val_bits = 8,
++
++ .read_flag_mask = REG_ADDR_AI,
++ .write_flag_mask = REG_ADDR_AI,
++
++ .readable_reg = pca953x_readable_register,
++ .writeable_reg = pca953x_writeable_register,
++ .volatile_reg = pca953x_volatile_register,
++
++ .cache_type = REGCACHE_RBTREE,
++ .max_register = 0x7f,
+ };
+
+ static u8 pca953x_recalc_addr(struct pca953x_chip *chip, int reg, int off,
+@@ -319,18 +333,6 @@ static u8 pca953x_recalc_addr(struct pca953x_chip *chip, int reg, int off,
+ int pinctrl = (reg & PCAL_PINCTRL_MASK) << 1;
+ u8 regaddr = pinctrl | addr | (off / BANK_SZ);
+
+- /* Single byte read doesn't need AI bit set. */
+- if (!addrinc)
+- return regaddr;
+-
+- /* Chips with 24 and more GPIOs always support Auto Increment */
+- if (write && NBANK(chip) > 2)
+- regaddr |= REG_ADDR_AI;
+-
+- /* PCA9575 needs address-increment on multi-byte writes */
+- if (PCA_CHIP_TYPE(chip->driver_data) == PCA957X_TYPE)
+- regaddr |= REG_ADDR_AI;
+-
+ return regaddr;
+ }
+
+@@ -863,6 +865,7 @@ static int pca953x_probe(struct i2c_client *client,
+ int ret;
+ u32 invert = 0;
+ struct regulator *reg;
++ const struct regmap_config *regmap_config;
+
+ chip = devm_kzalloc(&client->dev, sizeof(*chip), GFP_KERNEL);
+ if (chip == NULL)
+@@ -925,7 +928,17 @@ static int pca953x_probe(struct i2c_client *client,
+
+ i2c_set_clientdata(client, chip);
+
+- chip->regmap = devm_regmap_init_i2c(client, &pca953x_i2c_regmap);
++ pca953x_setup_gpio(chip, chip->driver_data & PCA_GPIO_MASK);
++
++ if (NBANK(chip) > 2 || PCA_CHIP_TYPE(chip->driver_data) == PCA957X_TYPE) {
++ dev_info(&client->dev, "using AI\n");
++ regmap_config = &pca953x_ai_i2c_regmap;
++ } else {
++ dev_info(&client->dev, "using no AI\n");
++ regmap_config = &pca953x_i2c_regmap;
++ }
++
++ chip->regmap = devm_regmap_init_i2c(client, regmap_config);
+ if (IS_ERR(chip->regmap)) {
+ ret = PTR_ERR(chip->regmap);
+ goto err_exit;
+@@ -956,7 +969,6 @@ static int pca953x_probe(struct i2c_client *client,
+ /* initialize cached registers from their original values.
+ * we can't share this chip with another i2c master.
+ */
+- pca953x_setup_gpio(chip, chip->driver_data & PCA_GPIO_MASK);
+
+ if (PCA_CHIP_TYPE(chip->driver_data) == PCA953X_TYPE) {
+ chip->regs = &pca953x_regs;
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_priv.h b/drivers/gpu/drm/amd/amdkfd/kfd_priv.h
+index c24cad3c64ed..f7cfb8180b71 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_priv.h
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_priv.h
+@@ -40,6 +40,7 @@
+ #include <drm/drm_file.h>
+ #include <drm/drm_drv.h>
+ #include <drm/drm_device.h>
++#include <drm/drm_ioctl.h>
+ #include <kgd_kfd_interface.h>
+ #include <linux/swap.h>
+
+@@ -1053,7 +1054,7 @@ static inline int kfd_devcgroup_check_permission(struct kfd_dev *kfd)
+ #if defined(CONFIG_CGROUP_DEVICE) || defined(CONFIG_CGROUP_BPF)
+ struct drm_device *ddev = kfd->ddev;
+
+- return devcgroup_check_permission(DEVCG_DEV_CHAR, ddev->driver->major,
++ return devcgroup_check_permission(DEVCG_DEV_CHAR, DRM_MAJOR,
+ ddev->render->index,
+ DEVCG_ACC_WRITE | DEVCG_ACC_READ);
+ #else
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index 7fc15b82fe48..f9f02e08054b 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -1334,7 +1334,7 @@ static int dm_late_init(void *handle)
+ unsigned int linear_lut[16];
+ int i;
+ struct dmcu *dmcu = adev->dm.dc->res_pool->dmcu;
+- bool ret = false;
++ bool ret;
+
+ for (i = 0; i < 16; i++)
+ linear_lut[i] = 0xFFFF * i / 15;
+@@ -1350,13 +1350,10 @@ static int dm_late_init(void *handle)
+ */
+ params.min_abm_backlight = 0x28F;
+
+- /* todo will enable for navi10 */
+- if (adev->asic_type <= CHIP_RAVEN) {
+- ret = dmcu_load_iram(dmcu, params);
++ ret = dmcu_load_iram(dmcu, params);
+
+- if (!ret)
+- return -EINVAL;
+- }
++ if (!ret)
++ return -EINVAL;
+
+ return detect_mst_link_for_all_connectors(adev->ddev);
+ }
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc.c b/drivers/gpu/drm/amd/display/dc/core/dc.c
+index 47431ca6986d..4acaf4be8a81 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc.c
+@@ -1011,9 +1011,17 @@ static void program_timing_sync(
+ }
+ }
+
+- /* set first pipe with plane as master */
++ /* set first unblanked pipe as master */
+ for (j = 0; j < group_size; j++) {
+- if (pipe_set[j]->plane_state) {
++ bool is_blanked;
++
++ if (pipe_set[j]->stream_res.opp->funcs->dpg_is_blanked)
++ is_blanked =
++ pipe_set[j]->stream_res.opp->funcs->dpg_is_blanked(pipe_set[j]->stream_res.opp);
++ else
++ is_blanked =
++ pipe_set[j]->stream_res.tg->funcs->is_blanked(pipe_set[j]->stream_res.tg);
++ if (!is_blanked) {
+ if (j == 0)
+ break;
+
+@@ -1034,9 +1042,17 @@ static void program_timing_sync(
+ status->timing_sync_info.master = false;
+
+ }
+- /* remove any other pipes with plane as they have already been synced */
++ /* remove any other unblanked pipes as they have already been synced */
+ for (j = j + 1; j < group_size; j++) {
+- if (pipe_set[j]->plane_state) {
++ bool is_blanked;
++
++ if (pipe_set[j]->stream_res.opp->funcs->dpg_is_blanked)
++ is_blanked =
++ pipe_set[j]->stream_res.opp->funcs->dpg_is_blanked(pipe_set[j]->stream_res.opp);
++ else
++ is_blanked =
++ pipe_set[j]->stream_res.tg->funcs->is_blanked(pipe_set[j]->stream_res.tg);
++ if (!is_blanked) {
+ group_size--;
+ pipe_set[j] = pipe_set[group_size];
+ j--;
+@@ -2517,6 +2533,12 @@ void dc_commit_updates_for_stream(struct dc *dc,
+
+ copy_stream_update_to_stream(dc, context, stream, stream_update);
+
++ if (!dc->res_pool->funcs->validate_bandwidth(dc, context, false)) {
++ DC_ERROR("Mode validation failed for stream update!\n");
++ dc_release_state(context);
++ return;
++ }
++
+ commit_planes_for_stream(
+ dc,
+ srf_updates,
+diff --git a/drivers/gpu/drm/amd/display/modules/color/color_gamma.c b/drivers/gpu/drm/amd/display/modules/color/color_gamma.c
+index cac09d500fda..e89694eb90b4 100644
+--- a/drivers/gpu/drm/amd/display/modules/color/color_gamma.c
++++ b/drivers/gpu/drm/amd/display/modules/color/color_gamma.c
+@@ -843,7 +843,7 @@ static bool build_regamma(struct pwl_float_data_ex *rgb_regamma,
+ pow_buffer_ptr = -1; // reset back to no optimize
+ ret = true;
+ release:
+- kfree(coeff);
++ kvfree(coeff);
+ return ret;
+ }
+
+diff --git a/drivers/gpu/drm/amd/powerplay/smumgr/ci_smumgr.c b/drivers/gpu/drm/amd/powerplay/smumgr/ci_smumgr.c
+index 868e2d5f6e62..7c3e903230ca 100644
+--- a/drivers/gpu/drm/amd/powerplay/smumgr/ci_smumgr.c
++++ b/drivers/gpu/drm/amd/powerplay/smumgr/ci_smumgr.c
+@@ -239,7 +239,7 @@ static void ci_initialize_power_tune_defaults(struct pp_hwmgr *hwmgr)
+
+ switch (dev_id) {
+ case 0x67BA:
+- case 0x66B1:
++ case 0x67B1:
+ smu_data->power_tune_defaults = &defaults_hawaii_pro;
+ break;
+ case 0x67B8:
+diff --git a/drivers/gpu/drm/ast/ast_mode.c b/drivers/gpu/drm/ast/ast_mode.c
+index 7a9f20a2fd30..e7ba0b6f46d8 100644
+--- a/drivers/gpu/drm/ast/ast_mode.c
++++ b/drivers/gpu/drm/ast/ast_mode.c
+@@ -226,6 +226,7 @@ static void ast_set_vbios_color_reg(struct ast_private *ast,
+ case 3:
+ case 4:
+ color_index = TrueCModeIndex;
++ break;
+ default:
+ return;
+ }
+@@ -801,6 +802,9 @@ static int ast_crtc_helper_atomic_check(struct drm_crtc *crtc,
+ return -EINVAL;
+ }
+
++ if (!state->enable)
++ return 0; /* no mode checks if CRTC is being disabled */
++
+ ast_state = to_ast_crtc_state(state);
+
+ format = ast_state->format;
+diff --git a/drivers/gpu/drm/drm_connector.c b/drivers/gpu/drm/drm_connector.c
+index 644f0ad10671..ac9fd96c4c66 100644
+--- a/drivers/gpu/drm/drm_connector.c
++++ b/drivers/gpu/drm/drm_connector.c
+@@ -27,6 +27,7 @@
+ #include <drm/drm_print.h>
+ #include <drm/drm_drv.h>
+ #include <drm/drm_file.h>
++#include <drm/drm_sysfs.h>
+
+ #include <linux/uaccess.h>
+
+@@ -523,6 +524,10 @@ int drm_connector_register(struct drm_connector *connector)
+ drm_mode_object_register(connector->dev, &connector->base);
+
+ connector->registration_state = DRM_CONNECTOR_REGISTERED;
++
++ /* Let userspace know we have a new connector */
++ drm_sysfs_hotplug_event(connector->dev);
++
+ goto unlock;
+
+ err_debugfs:
+diff --git a/drivers/gpu/drm/drm_dp_mst_topology.c b/drivers/gpu/drm/drm_dp_mst_topology.c
+index 9d89ebf3a749..abb1f358ec6d 100644
+--- a/drivers/gpu/drm/drm_dp_mst_topology.c
++++ b/drivers/gpu/drm/drm_dp_mst_topology.c
+@@ -27,6 +27,7 @@
+ #include <linux/kernel.h>
+ #include <linux/sched.h>
+ #include <linux/seq_file.h>
++#include <linux/iopoll.h>
+
+ #if IS_ENABLED(CONFIG_DRM_DEBUG_DP_MST_TOPOLOGY_REFS)
+ #include <linux/stacktrace.h>
+@@ -4448,6 +4449,17 @@ fail:
+ return ret;
+ }
+
++static int do_get_act_status(struct drm_dp_aux *aux)
++{
++ int ret;
++ u8 status;
++
++ ret = drm_dp_dpcd_readb(aux, DP_PAYLOAD_TABLE_UPDATE_STATUS, &status);
++ if (ret < 0)
++ return ret;
++
++ return status;
++}
+
+ /**
+ * drm_dp_check_act_status() - Check ACT handled status.
+@@ -4457,33 +4469,29 @@ fail:
+ */
+ int drm_dp_check_act_status(struct drm_dp_mst_topology_mgr *mgr)
+ {
+- u8 status;
+- int ret;
+- int count = 0;
+-
+- do {
+- ret = drm_dp_dpcd_readb(mgr->aux, DP_PAYLOAD_TABLE_UPDATE_STATUS, &status);
+-
+- if (ret < 0) {
+- DRM_DEBUG_KMS("failed to read payload table status %d\n", ret);
+- goto fail;
+- }
+-
+- if (status & DP_PAYLOAD_ACT_HANDLED)
+- break;
+- count++;
+- udelay(100);
+-
+- } while (count < 30);
+-
+- if (!(status & DP_PAYLOAD_ACT_HANDLED)) {
+- DRM_DEBUG_KMS("failed to get ACT bit %d after %d retries\n", status, count);
+- ret = -EINVAL;
+- goto fail;
++ /*
++ * There doesn't seem to be any recommended retry count or timeout in
++ * the MST specification. Since some hubs have been observed to take
++ * over 1 second to update their payload allocations under certain
++ * conditions, we use a rather large timeout value.
++ */
++ const int timeout_ms = 3000;
++ int ret, status;
++
++ ret = readx_poll_timeout(do_get_act_status, mgr->aux, status,
++ status & DP_PAYLOAD_ACT_HANDLED || status < 0,
++ 200, timeout_ms * USEC_PER_MSEC);
++ if (ret < 0 && status >= 0) {
++ DRM_DEBUG_KMS("Failed to get ACT after %dms, last status: %02x\n",
++ timeout_ms, status);
++ return -EINVAL;
++ } else if (status < 0) {
++ DRM_DEBUG_KMS("Failed to read payload table status: %d\n",
++ status);
++ return status;
+ }
++
+ return 0;
+-fail:
+- return ret;
+ }
+ EXPORT_SYMBOL(drm_dp_check_act_status);
+
+diff --git a/drivers/gpu/drm/drm_encoder_slave.c b/drivers/gpu/drm/drm_encoder_slave.c
+index cf804389f5ec..d50a7884e69e 100644
+--- a/drivers/gpu/drm/drm_encoder_slave.c
++++ b/drivers/gpu/drm/drm_encoder_slave.c
+@@ -84,7 +84,7 @@ int drm_i2c_encoder_init(struct drm_device *dev,
+
+ err = encoder_drv->encoder_init(client, dev, encoder);
+ if (err)
+- goto fail_unregister;
++ goto fail_module_put;
+
+ if (info->platform_data)
+ encoder->slave_funcs->set_config(&encoder->base,
+@@ -92,9 +92,10 @@ int drm_i2c_encoder_init(struct drm_device *dev,
+
+ return 0;
+
++fail_module_put:
++ module_put(module);
+ fail_unregister:
+ i2c_unregister_device(client);
+- module_put(module);
+ fail:
+ return err;
+ }
+diff --git a/drivers/gpu/drm/drm_sysfs.c b/drivers/gpu/drm/drm_sysfs.c
+index 939f0032aab1..f0336c804639 100644
+--- a/drivers/gpu/drm/drm_sysfs.c
++++ b/drivers/gpu/drm/drm_sysfs.c
+@@ -291,9 +291,6 @@ int drm_sysfs_connector_add(struct drm_connector *connector)
+ return PTR_ERR(connector->kdev);
+ }
+
+- /* Let userspace know we have a new connector */
+- drm_sysfs_hotplug_event(dev);
+-
+ if (connector->ddc)
+ return sysfs_create_link(&connector->kdev->kobj,
+ &connector->ddc->dev.kobj, "ddc");
+diff --git a/drivers/gpu/drm/i915/display/intel_ddi.c b/drivers/gpu/drm/i915/display/intel_ddi.c
+index 52db7852827b..647412da733e 100644
+--- a/drivers/gpu/drm/i915/display/intel_ddi.c
++++ b/drivers/gpu/drm/i915/display/intel_ddi.c
+@@ -2866,7 +2866,7 @@ icl_program_mg_dp_mode(struct intel_digital_port *intel_dig_port,
+ ln1 = intel_de_read(dev_priv, MG_DP_MODE(1, tc_port));
+ }
+
+- ln0 &= ~(MG_DP_MODE_CFG_DP_X1_MODE | MG_DP_MODE_CFG_DP_X1_MODE);
++ ln0 &= ~(MG_DP_MODE_CFG_DP_X1_MODE | MG_DP_MODE_CFG_DP_X2_MODE);
+ ln1 &= ~(MG_DP_MODE_CFG_DP_X1_MODE | MG_DP_MODE_CFG_DP_X2_MODE);
+
+ /* DPPATC */
+diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
+index a2fafd4499f2..5e228d202e4d 100644
+--- a/drivers/gpu/drm/i915/display/intel_dp.c
++++ b/drivers/gpu/drm/i915/display/intel_dp.c
+@@ -1343,8 +1343,7 @@ intel_dp_aux_xfer(struct intel_dp *intel_dp,
+ bool is_tc_port = intel_phy_is_tc(i915, phy);
+ i915_reg_t ch_ctl, ch_data[5];
+ u32 aux_clock_divider;
+- enum intel_display_power_domain aux_domain =
+- intel_aux_power_domain(intel_dig_port);
++ enum intel_display_power_domain aux_domain;
+ intel_wakeref_t aux_wakeref;
+ intel_wakeref_t pps_wakeref;
+ int i, ret, recv_bytes;
+@@ -1359,6 +1358,8 @@ intel_dp_aux_xfer(struct intel_dp *intel_dp,
+ if (is_tc_port)
+ intel_tc_port_lock(intel_dig_port);
+
++ aux_domain = intel_aux_power_domain(intel_dig_port);
++
+ aux_wakeref = intel_display_power_get(i915, aux_domain);
+ pps_wakeref = pps_lock(intel_dp);
+
+diff --git a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
+index 5d5d7eef3f43..7aff3514d97a 100644
+--- a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
++++ b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
+@@ -39,7 +39,6 @@ static int shmem_get_pages(struct drm_i915_gem_object *obj)
+ unsigned long last_pfn = 0; /* suppress gcc warning */
+ unsigned int max_segment = i915_sg_segment_size();
+ unsigned int sg_page_sizes;
+- struct pagevec pvec;
+ gfp_t noreclaim;
+ int ret;
+
+@@ -192,13 +191,17 @@ err_sg:
+ sg_mark_end(sg);
+ err_pages:
+ mapping_clear_unevictable(mapping);
+- pagevec_init(&pvec);
+- for_each_sgt_page(page, sgt_iter, st) {
+- if (!pagevec_add(&pvec, page))
++ if (sg != st->sgl) {
++ struct pagevec pvec;
++
++ pagevec_init(&pvec);
++ for_each_sgt_page(page, sgt_iter, st) {
++ if (!pagevec_add(&pvec, page))
++ check_release_pagevec(&pvec);
++ }
++ if (pagevec_count(&pvec))
+ check_release_pagevec(&pvec);
+ }
+- if (pagevec_count(&pvec))
+- check_release_pagevec(&pvec);
+ sg_free_table(st);
+ kfree(st);
+
+diff --git a/drivers/gpu/drm/i915/gt/intel_engine_cs.c b/drivers/gpu/drm/i915/gt/intel_engine_cs.c
+index 883a9b7fe88d..55b9165e7533 100644
+--- a/drivers/gpu/drm/i915/gt/intel_engine_cs.c
++++ b/drivers/gpu/drm/i915/gt/intel_engine_cs.c
+@@ -639,7 +639,7 @@ static int engine_setup_common(struct intel_engine_cs *engine)
+ struct measure_breadcrumb {
+ struct i915_request rq;
+ struct intel_ring ring;
+- u32 cs[1024];
++ u32 cs[2048];
+ };
+
+ static int measure_breadcrumb_dw(struct intel_context *ce)
+@@ -661,6 +661,8 @@ static int measure_breadcrumb_dw(struct intel_context *ce)
+
+ frame->ring.vaddr = frame->cs;
+ frame->ring.size = sizeof(frame->cs);
++ frame->ring.wrap =
++ BITS_PER_TYPE(frame->ring.size) - ilog2(frame->ring.size);
+ frame->ring.effective_size = frame->ring.size;
+ intel_ring_update_space(&frame->ring);
+ frame->rq.ring = &frame->ring;
+diff --git a/drivers/gpu/drm/i915/gt/intel_lrc.c b/drivers/gpu/drm/i915/gt/intel_lrc.c
+index 2dfaddb8811e..ba82193b4e31 100644
+--- a/drivers/gpu/drm/i915/gt/intel_lrc.c
++++ b/drivers/gpu/drm/i915/gt/intel_lrc.c
+@@ -972,6 +972,13 @@ __unwind_incomplete_requests(struct intel_engine_cs *engine)
+ list_move(&rq->sched.link, pl);
+ set_bit(I915_FENCE_FLAG_PQUEUE, &rq->fence.flags);
+
++ /* Check in case we rollback so far we wrap [size/2] */
++ if (intel_ring_direction(rq->ring,
++ intel_ring_wrap(rq->ring,
++ rq->tail),
++ rq->ring->tail) > 0)
++ rq->context->lrc.desc |= CTX_DESC_FORCE_RESTORE;
++
+ active = rq;
+ } else {
+ struct intel_engine_cs *owner = rq->context->engine;
+@@ -1383,8 +1390,9 @@ static u64 execlists_update_context(struct i915_request *rq)
+ * HW has a tendency to ignore us rewinding the TAIL to the end of
+ * an earlier request.
+ */
++ GEM_BUG_ON(ce->lrc_reg_state[CTX_RING_TAIL] != rq->ring->tail);
++ prev = rq->ring->tail;
+ tail = intel_ring_set_tail(rq->ring, rq->tail);
+- prev = ce->lrc_reg_state[CTX_RING_TAIL];
+ if (unlikely(intel_ring_direction(rq->ring, tail, prev) <= 0))
+ desc |= CTX_DESC_FORCE_RESTORE;
+ ce->lrc_reg_state[CTX_RING_TAIL] = tail;
+@@ -4213,6 +4221,14 @@ static int gen12_emit_flush_render(struct i915_request *request,
+ return 0;
+ }
+
++static void assert_request_valid(struct i915_request *rq)
++{
++ struct intel_ring *ring __maybe_unused = rq->ring;
++
++ /* Can we unwind this request without appearing to go forwards? */
++ GEM_BUG_ON(intel_ring_direction(ring, rq->wa_tail, rq->head) <= 0);
++}
++
+ /*
+ * Reserve space for 2 NOOPs at the end of each request to be
+ * used as a workaround for not being allowed to do lite
+@@ -4225,6 +4241,9 @@ static u32 *gen8_emit_wa_tail(struct i915_request *request, u32 *cs)
+ *cs++ = MI_NOOP;
+ request->wa_tail = intel_ring_offset(request, cs);
+
++ /* Check that entire request is less than half the ring */
++ assert_request_valid(request);
++
+ return cs;
+ }
+
+diff --git a/drivers/gpu/drm/i915/gt/intel_ring.c b/drivers/gpu/drm/i915/gt/intel_ring.c
+index 8cda1b7e17ba..bdb324167ef3 100644
+--- a/drivers/gpu/drm/i915/gt/intel_ring.c
++++ b/drivers/gpu/drm/i915/gt/intel_ring.c
+@@ -315,3 +315,7 @@ int intel_ring_cacheline_align(struct i915_request *rq)
+ GEM_BUG_ON(rq->ring->emit & (CACHELINE_BYTES - 1));
+ return 0;
+ }
++
++#if IS_ENABLED(CONFIG_DRM_I915_SELFTEST)
++#include "selftest_ring.c"
++#endif
+diff --git a/drivers/gpu/drm/i915/gt/intel_workarounds.c b/drivers/gpu/drm/i915/gt/intel_workarounds.c
+index 5176ad1a3976..bb100872cd07 100644
+--- a/drivers/gpu/drm/i915/gt/intel_workarounds.c
++++ b/drivers/gpu/drm/i915/gt/intel_workarounds.c
+@@ -178,6 +178,12 @@ wa_write_or(struct i915_wa_list *wal, i915_reg_t reg, u32 set)
+ wa_write_masked_or(wal, reg, set, set);
+ }
+
++static void
++wa_write_clr(struct i915_wa_list *wal, i915_reg_t reg, u32 clr)
++{
++ wa_write_masked_or(wal, reg, clr, 0);
++}
++
+ static void
+ wa_masked_en(struct i915_wa_list *wal, i915_reg_t reg, u32 val)
+ {
+@@ -697,6 +703,227 @@ int intel_engine_emit_ctx_wa(struct i915_request *rq)
+ return 0;
+ }
+
++static void
++gen4_gt_workarounds_init(struct drm_i915_private *i915,
++ struct i915_wa_list *wal)
++{
++ /* WaDisable_RenderCache_OperationalFlush:gen4,ilk */
++ wa_masked_dis(wal, CACHE_MODE_0, RC_OP_FLUSH_ENABLE);
++}
++
++static void
++g4x_gt_workarounds_init(struct drm_i915_private *i915, struct i915_wa_list *wal)
++{
++ gen4_gt_workarounds_init(i915, wal);
++
++ /* WaDisableRenderCachePipelinedFlush:g4x,ilk */
++ wa_masked_en(wal, CACHE_MODE_0, CM0_PIPELINED_RENDER_FLUSH_DISABLE);
++}
++
++static void
++ilk_gt_workarounds_init(struct drm_i915_private *i915, struct i915_wa_list *wal)
++{
++ g4x_gt_workarounds_init(i915, wal);
++
++ wa_masked_en(wal, _3D_CHICKEN2, _3D_CHICKEN2_WM_READ_PIPELINED);
++}
++
++static void
++snb_gt_workarounds_init(struct drm_i915_private *i915, struct i915_wa_list *wal)
++{
++ /* WaDisableHiZPlanesWhenMSAAEnabled:snb */
++ wa_masked_en(wal,
++ _3D_CHICKEN,
++ _3D_CHICKEN_HIZ_PLANE_DISABLE_MSAA_4X_SNB);
++
++ /* WaDisable_RenderCache_OperationalFlush:snb */
++ wa_masked_dis(wal, CACHE_MODE_0, RC_OP_FLUSH_ENABLE);
++
++ /*
++ * BSpec recommends 8x4 when MSAA is used,
++ * however in practice 16x4 seems fastest.
++ *
++ * Note that PS/WM thread counts depend on the WIZ hashing
++ * disable bit, which we don't touch here, but it's good
++ * to keep in mind (see 3DSTATE_PS and 3DSTATE_WM).
++ */
++ wa_add(wal,
++ GEN6_GT_MODE, 0,
++ _MASKED_FIELD(GEN6_WIZ_HASHING_MASK, GEN6_WIZ_HASHING_16x4),
++ GEN6_WIZ_HASHING_16x4);
++
++ wa_masked_dis(wal, CACHE_MODE_0, CM0_STC_EVICT_DISABLE_LRA_SNB);
++
++ wa_masked_en(wal,
++ _3D_CHICKEN3,
++ /* WaStripsFansDisableFastClipPerformanceFix:snb */
++ _3D_CHICKEN3_SF_DISABLE_FASTCLIP_CULL |
++ /*
++ * Bspec says:
++ * "This bit must be set if 3DSTATE_CLIP clip mode is set
++ * to normal and 3DSTATE_SF number of SF output attributes
++ * is more than 16."
++ */
++ _3D_CHICKEN3_SF_DISABLE_PIPELINED_ATTR_FETCH);
++}
++
++static void
++ivb_gt_workarounds_init(struct drm_i915_private *i915, struct i915_wa_list *wal)
++{
++ /* WaDisableEarlyCull:ivb */
++ wa_masked_en(wal, _3D_CHICKEN3, _3D_CHICKEN_SF_DISABLE_OBJEND_CULL);
++
++ /* WaDisablePSDDualDispatchEnable:ivb */
++ if (IS_IVB_GT1(i915))
++ wa_masked_en(wal,
++ GEN7_HALF_SLICE_CHICKEN1,
++ GEN7_PSD_SINGLE_PORT_DISPATCH_ENABLE);
++
++ /* WaDisable_RenderCache_OperationalFlush:ivb */
++ wa_masked_dis(wal, CACHE_MODE_0_GEN7, RC_OP_FLUSH_ENABLE);
++
++ /* Apply the WaDisableRHWOOptimizationForRenderHang:ivb workaround. */
++ wa_masked_dis(wal,
++ GEN7_COMMON_SLICE_CHICKEN1,
++ GEN7_CSC1_RHWO_OPT_DISABLE_IN_RCC);
++
++ /* WaApplyL3ControlAndL3ChickenMode:ivb */
++ wa_write(wal, GEN7_L3CNTLREG1, GEN7_WA_FOR_GEN7_L3_CONTROL);
++ wa_write(wal, GEN7_L3_CHICKEN_MODE_REGISTER, GEN7_WA_L3_CHICKEN_MODE);
++
++ /* WaForceL3Serialization:ivb */
++ wa_write_clr(wal, GEN7_L3SQCREG4, L3SQ_URB_READ_CAM_MATCH_DISABLE);
++
++ /*
++ * WaVSThreadDispatchOverride:ivb,vlv
++ *
++ * This actually overrides the dispatch
++ * mode for all thread types.
++ */
++ wa_write_masked_or(wal, GEN7_FF_THREAD_MODE,
++ GEN7_FF_SCHED_MASK,
++ GEN7_FF_TS_SCHED_HW |
++ GEN7_FF_VS_SCHED_HW |
++ GEN7_FF_DS_SCHED_HW);
++
++ if (0) { /* causes HiZ corruption on ivb:gt1 */
++ /* enable HiZ Raw Stall Optimization */
++ wa_masked_dis(wal, CACHE_MODE_0_GEN7, HIZ_RAW_STALL_OPT_DISABLE);
++ }
++
++ /* WaDisable4x2SubspanOptimization:ivb */
++ wa_masked_en(wal, CACHE_MODE_1, PIXEL_SUBSPAN_COLLECT_OPT_DISABLE);
++
++ /*
++ * BSpec recommends 8x4 when MSAA is used,
++ * however in practice 16x4 seems fastest.
++ *
++ * Note that PS/WM thread counts depend on the WIZ hashing
++ * disable bit, which we don't touch here, but it's good
++ * to keep in mind (see 3DSTATE_PS and 3DSTATE_WM).
++ */
++ wa_add(wal, GEN7_GT_MODE, 0,
++ _MASKED_FIELD(GEN6_WIZ_HASHING_MASK, GEN6_WIZ_HASHING_16x4),
++ GEN6_WIZ_HASHING_16x4);
++}
++
++static void
++vlv_gt_workarounds_init(struct drm_i915_private *i915, struct i915_wa_list *wal)
++{
++ /* WaDisableEarlyCull:vlv */
++ wa_masked_en(wal, _3D_CHICKEN3, _3D_CHICKEN_SF_DISABLE_OBJEND_CULL);
++
++ /* WaPsdDispatchEnable:vlv */
++ /* WaDisablePSDDualDispatchEnable:vlv */
++ wa_masked_en(wal,
++ GEN7_HALF_SLICE_CHICKEN1,
++ GEN7_MAX_PS_THREAD_DEP |
++ GEN7_PSD_SINGLE_PORT_DISPATCH_ENABLE);
++
++ /* WaDisable_RenderCache_OperationalFlush:vlv */
++ wa_masked_dis(wal, CACHE_MODE_0_GEN7, RC_OP_FLUSH_ENABLE);
++
++ /* WaForceL3Serialization:vlv */
++ wa_write_clr(wal, GEN7_L3SQCREG4, L3SQ_URB_READ_CAM_MATCH_DISABLE);
++
++ /*
++ * WaVSThreadDispatchOverride:ivb,vlv
++ *
++ * This actually overrides the dispatch
++ * mode for all thread types.
++ */
++ wa_write_masked_or(wal,
++ GEN7_FF_THREAD_MODE,
++ GEN7_FF_SCHED_MASK,
++ GEN7_FF_TS_SCHED_HW |
++ GEN7_FF_VS_SCHED_HW |
++ GEN7_FF_DS_SCHED_HW);
++
++ /*
++ * BSpec says this must be set, even though
++ * WaDisable4x2SubspanOptimization isn't listed for VLV.
++ */
++ wa_masked_en(wal, CACHE_MODE_1, PIXEL_SUBSPAN_COLLECT_OPT_DISABLE);
++
++ /*
++ * BSpec recommends 8x4 when MSAA is used,
++ * however in practice 16x4 seems fastest.
++ *
++ * Note that PS/WM thread counts depend on the WIZ hashing
++ * disable bit, which we don't touch here, but it's good
++ * to keep in mind (see 3DSTATE_PS and 3DSTATE_WM).
++ */
++ wa_add(wal, GEN7_GT_MODE, 0,
++ _MASKED_FIELD(GEN6_WIZ_HASHING_MASK, GEN6_WIZ_HASHING_16x4),
++ GEN6_WIZ_HASHING_16x4);
++
++ /*
++ * WaIncreaseL3CreditsForVLVB0:vlv
++ * This is the hardware default actually.
++ */
++ wa_write(wal, GEN7_L3SQCREG1, VLV_B0_WA_L3SQCREG1_VALUE);
++}
++
++static void
++hsw_gt_workarounds_init(struct drm_i915_private *i915, struct i915_wa_list *wal)
++{
++ /* L3 caching of data atomics doesn't work -- disable it. */
++ wa_write(wal, HSW_SCRATCH1, HSW_SCRATCH1_L3_DATA_ATOMICS_DISABLE);
++
++ wa_add(wal,
++ HSW_ROW_CHICKEN3, 0,
++ _MASKED_BIT_ENABLE(HSW_ROW_CHICKEN3_L3_GLOBAL_ATOMICS_DISABLE),
++ 0 /* XXX does this reg exist? */);
++
++ /* WaVSRefCountFullforceMissDisable:hsw */
++ wa_write_clr(wal, GEN7_FF_THREAD_MODE, GEN7_FF_VS_REF_CNT_FFME);
++
++ wa_masked_dis(wal,
++ CACHE_MODE_0_GEN7,
++ /* WaDisable_RenderCache_OperationalFlush:hsw */
++ RC_OP_FLUSH_ENABLE |
++ /* enable HiZ Raw Stall Optimization */
++ HIZ_RAW_STALL_OPT_DISABLE);
++
++ /* WaDisable4x2SubspanOptimization:hsw */
++ wa_masked_en(wal, CACHE_MODE_1, PIXEL_SUBSPAN_COLLECT_OPT_DISABLE);
++
++ /*
++ * BSpec recommends 8x4 when MSAA is used,
++ * however in practice 16x4 seems fastest.
++ *
++ * Note that PS/WM thread counts depend on the WIZ hashing
++ * disable bit, which we don't touch here, but it's good
++ * to keep in mind (see 3DSTATE_PS and 3DSTATE_WM).
++ */
++ wa_add(wal, GEN7_GT_MODE, 0,
++ _MASKED_FIELD(GEN6_WIZ_HASHING_MASK, GEN6_WIZ_HASHING_16x4),
++ GEN6_WIZ_HASHING_16x4);
++
++ /* WaSampleCChickenBitEnable:hsw */
++ wa_masked_en(wal, HALF_SLICE_CHICKEN3, HSW_SAMPLE_C_PERFORMANCE);
++}
++
+ static void
+ gen9_gt_workarounds_init(struct drm_i915_private *i915, struct i915_wa_list *wal)
+ {
+@@ -974,6 +1201,20 @@ gt_init_workarounds(struct drm_i915_private *i915, struct i915_wa_list *wal)
+ bxt_gt_workarounds_init(i915, wal);
+ else if (IS_SKYLAKE(i915))
+ skl_gt_workarounds_init(i915, wal);
++ else if (IS_HASWELL(i915))
++ hsw_gt_workarounds_init(i915, wal);
++ else if (IS_VALLEYVIEW(i915))
++ vlv_gt_workarounds_init(i915, wal);
++ else if (IS_IVYBRIDGE(i915))
++ ivb_gt_workarounds_init(i915, wal);
++ else if (IS_GEN(i915, 6))
++ snb_gt_workarounds_init(i915, wal);
++ else if (IS_GEN(i915, 5))
++ ilk_gt_workarounds_init(i915, wal);
++ else if (IS_G4X(i915))
++ g4x_gt_workarounds_init(i915, wal);
++ else if (IS_GEN(i915, 4))
++ gen4_gt_workarounds_init(i915, wal);
+ else if (INTEL_GEN(i915) <= 8)
+ return;
+ else
+@@ -1379,12 +1620,6 @@ rcs_engine_wa_init(struct intel_engine_cs *engine, struct i915_wa_list *wal)
+ GEN7_FF_THREAD_MODE,
+ GEN12_FF_TESSELATION_DOP_GATE_DISABLE);
+
+- /*
+- * Wa_1409085225:tgl
+- * Wa_14010229206:tgl
+- */
+- wa_masked_en(wal, GEN9_ROW_CHICKEN4, GEN12_DISABLE_TDL_PUSH);
+-
+ /* Wa_1408615072:tgl */
+ wa_write_or(wal, UNSLICE_UNIT_LEVEL_CLKGATE2,
+ VSUNIT_CLKGATE_DIS_TGL);
+@@ -1402,6 +1637,12 @@ rcs_engine_wa_init(struct intel_engine_cs *engine, struct i915_wa_list *wal)
+ wa_masked_en(wal,
+ GEN9_CS_DEBUG_MODE1,
+ FF_DOP_CLOCK_GATE_DISABLE);
++
++ /*
++ * Wa_1409085225:tgl
++ * Wa_14010229206:tgl
++ */
++ wa_masked_en(wal, GEN9_ROW_CHICKEN4, GEN12_DISABLE_TDL_PUSH);
+ }
+
+ if (IS_GEN(i915, 11)) {
+diff --git a/drivers/gpu/drm/i915/gt/selftest_mocs.c b/drivers/gpu/drm/i915/gt/selftest_mocs.c
+index 8831ffee2061..63f87d8608c3 100644
+--- a/drivers/gpu/drm/i915/gt/selftest_mocs.c
++++ b/drivers/gpu/drm/i915/gt/selftest_mocs.c
+@@ -18,6 +18,20 @@ struct live_mocs {
+ void *vaddr;
+ };
+
++static struct intel_context *mocs_context_create(struct intel_engine_cs *engine)
++{
++ struct intel_context *ce;
++
++ ce = intel_context_create(engine);
++ if (IS_ERR(ce))
++ return ce;
++
++ /* We build large requests to read the registers from the ring */
++ ce->ring = __intel_context_ring_size(SZ_16K);
++
++ return ce;
++}
++
+ static int request_add_sync(struct i915_request *rq, int err)
+ {
+ i915_request_get(rq);
+@@ -301,7 +315,7 @@ static int live_mocs_clean(void *arg)
+ for_each_engine(engine, gt, id) {
+ struct intel_context *ce;
+
+- ce = intel_context_create(engine);
++ ce = mocs_context_create(engine);
+ if (IS_ERR(ce)) {
+ err = PTR_ERR(ce);
+ break;
+@@ -395,7 +409,7 @@ static int live_mocs_reset(void *arg)
+ for_each_engine(engine, gt, id) {
+ struct intel_context *ce;
+
+- ce = intel_context_create(engine);
++ ce = mocs_context_create(engine);
+ if (IS_ERR(ce)) {
+ err = PTR_ERR(ce);
+ break;
+diff --git a/drivers/gpu/drm/i915/gt/selftest_ring.c b/drivers/gpu/drm/i915/gt/selftest_ring.c
+new file mode 100644
+index 000000000000..2a8c534dc125
+--- /dev/null
++++ b/drivers/gpu/drm/i915/gt/selftest_ring.c
+@@ -0,0 +1,110 @@
++// SPDX-License-Identifier: GPL-2.0
++/*
++ * Copyright © 2020 Intel Corporation
++ */
++
++static struct intel_ring *mock_ring(unsigned long sz)
++{
++ struct intel_ring *ring;
++
++ ring = kzalloc(sizeof(*ring) + sz, GFP_KERNEL);
++ if (!ring)
++ return NULL;
++
++ kref_init(&ring->ref);
++ ring->size = sz;
++ ring->wrap = BITS_PER_TYPE(ring->size) - ilog2(sz);
++ ring->effective_size = sz;
++ ring->vaddr = (void *)(ring + 1);
++ atomic_set(&ring->pin_count, 1);
++
++ intel_ring_update_space(ring);
++
++ return ring;
++}
++
++static void mock_ring_free(struct intel_ring *ring)
++{
++ kfree(ring);
++}
++
++static int check_ring_direction(struct intel_ring *ring,
++ u32 next, u32 prev,
++ int expected)
++{
++ int result;
++
++ result = intel_ring_direction(ring, next, prev);
++ if (result < 0)
++ result = -1;
++ else if (result > 0)
++ result = 1;
++
++ if (result != expected) {
++ pr_err("intel_ring_direction(%u, %u):%d != %d\n",
++ next, prev, result, expected);
++ return -EINVAL;
++ }
++
++ return 0;
++}
++
++static int check_ring_step(struct intel_ring *ring, u32 x, u32 step)
++{
++ u32 prev = x, next = intel_ring_wrap(ring, x + step);
++ int err = 0;
++
++ err |= check_ring_direction(ring, next, next, 0);
++ err |= check_ring_direction(ring, prev, prev, 0);
++ err |= check_ring_direction(ring, next, prev, 1);
++ err |= check_ring_direction(ring, prev, next, -1);
++
++ return err;
++}
++
++static int check_ring_offset(struct intel_ring *ring, u32 x, u32 step)
++{
++ int err = 0;
++
++ err |= check_ring_step(ring, x, step);
++ err |= check_ring_step(ring, intel_ring_wrap(ring, x + 1), step);
++ err |= check_ring_step(ring, intel_ring_wrap(ring, x - 1), step);
++
++ return err;
++}
++
++static int igt_ring_direction(void *dummy)
++{
++ struct intel_ring *ring;
++ unsigned int half = 2048;
++ int step, err = 0;
++
++ ring = mock_ring(2 * half);
++ if (!ring)
++ return -ENOMEM;
++
++ GEM_BUG_ON(ring->size != 2 * half);
++
++ /* Precision of wrap detection is limited to ring->size / 2 */
++ for (step = 1; step < half; step <<= 1) {
++ err |= check_ring_offset(ring, 0, step);
++ err |= check_ring_offset(ring, half, step);
++ }
++ err |= check_ring_step(ring, 0, half - 64);
++
++ /* And check unwrapped handling for good measure */
++ err |= check_ring_offset(ring, 0, 2 * half + 64);
++ err |= check_ring_offset(ring, 3 * half, 1);
++
++ mock_ring_free(ring);
++ return err;
++}
++
++int intel_ring_mock_selftests(void)
++{
++ static const struct i915_subtest tests[] = {
++ SUBTEST(igt_ring_direction),
++ };
++
++ return i915_subtests(tests, NULL);
++}
+diff --git a/drivers/gpu/drm/i915/i915_cmd_parser.c b/drivers/gpu/drm/i915/i915_cmd_parser.c
+index 189b573d02be..372354d33f55 100644
+--- a/drivers/gpu/drm/i915/i915_cmd_parser.c
++++ b/drivers/gpu/drm/i915/i915_cmd_parser.c
+@@ -572,6 +572,9 @@ struct drm_i915_reg_descriptor {
+ #define REG32(_reg, ...) \
+ { .addr = (_reg), __VA_ARGS__ }
+
++#define REG32_IDX(_reg, idx) \
++ { .addr = _reg(idx) }
++
+ /*
+ * Convenience macro for adding 64-bit registers.
+ *
+@@ -669,6 +672,7 @@ static const struct drm_i915_reg_descriptor gen9_blt_regs[] = {
+ REG64_IDX(RING_TIMESTAMP, BSD_RING_BASE),
+ REG32(BCS_SWCTRL),
+ REG64_IDX(RING_TIMESTAMP, BLT_RING_BASE),
++ REG32_IDX(RING_CTX_TIMESTAMP, BLT_RING_BASE),
+ REG64_IDX(BCS_GPR, 0),
+ REG64_IDX(BCS_GPR, 1),
+ REG64_IDX(BCS_GPR, 2),
+diff --git a/drivers/gpu/drm/i915/i915_irq.c b/drivers/gpu/drm/i915/i915_irq.c
+index 8a2b83807ffc..bd042725a678 100644
+--- a/drivers/gpu/drm/i915/i915_irq.c
++++ b/drivers/gpu/drm/i915/i915_irq.c
+@@ -3092,6 +3092,7 @@ static void gen11_hpd_irq_setup(struct drm_i915_private *dev_priv)
+
+ val = I915_READ(GEN11_DE_HPD_IMR);
+ val &= ~hotplug_irqs;
++ val |= ~enabled_irqs & hotplug_irqs;
+ I915_WRITE(GEN11_DE_HPD_IMR, val);
+ POSTING_READ(GEN11_DE_HPD_IMR);
+
+diff --git a/drivers/gpu/drm/i915/i915_reg.h b/drivers/gpu/drm/i915/i915_reg.h
+index 6e12000c4b6b..a41be9357d15 100644
+--- a/drivers/gpu/drm/i915/i915_reg.h
++++ b/drivers/gpu/drm/i915/i915_reg.h
+@@ -7819,7 +7819,7 @@ enum {
+
+ /* GEN7 chicken */
+ #define GEN7_COMMON_SLICE_CHICKEN1 _MMIO(0x7010)
+- #define GEN7_CSC1_RHWO_OPT_DISABLE_IN_RCC ((1 << 10) | (1 << 26))
++ #define GEN7_CSC1_RHWO_OPT_DISABLE_IN_RCC (1 << 10)
+ #define GEN9_RHWO_OPTIMIZATION_DISABLE (1 << 14)
+
+ #define COMMON_SLICE_CHICKEN2 _MMIO(0x7014)
+diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
+index a52986a9e7a6..20c1683fda24 100644
+--- a/drivers/gpu/drm/i915/intel_pm.c
++++ b/drivers/gpu/drm/i915/intel_pm.c
+@@ -6593,16 +6593,6 @@ static void ilk_init_clock_gating(struct drm_i915_private *dev_priv)
+ I915_WRITE(ILK_DISPLAY_CHICKEN2,
+ I915_READ(ILK_DISPLAY_CHICKEN2) |
+ ILK_ELPIN_409_SELECT);
+- I915_WRITE(_3D_CHICKEN2,
+- _3D_CHICKEN2_WM_READ_PIPELINED << 16 |
+- _3D_CHICKEN2_WM_READ_PIPELINED);
+-
+- /* WaDisableRenderCachePipelinedFlush:ilk */
+- I915_WRITE(CACHE_MODE_0,
+- _MASKED_BIT_ENABLE(CM0_PIPELINED_RENDER_FLUSH_DISABLE));
+-
+- /* WaDisable_RenderCache_OperationalFlush:ilk */
+- I915_WRITE(CACHE_MODE_0, _MASKED_BIT_DISABLE(RC_OP_FLUSH_ENABLE));
+
+ g4x_disable_trickle_feed(dev_priv);
+
+@@ -6665,27 +6655,6 @@ static void gen6_init_clock_gating(struct drm_i915_private *dev_priv)
+ I915_READ(ILK_DISPLAY_CHICKEN2) |
+ ILK_ELPIN_409_SELECT);
+
+- /* WaDisableHiZPlanesWhenMSAAEnabled:snb */
+- I915_WRITE(_3D_CHICKEN,
+- _MASKED_BIT_ENABLE(_3D_CHICKEN_HIZ_PLANE_DISABLE_MSAA_4X_SNB));
+-
+- /* WaDisable_RenderCache_OperationalFlush:snb */
+- I915_WRITE(CACHE_MODE_0, _MASKED_BIT_DISABLE(RC_OP_FLUSH_ENABLE));
+-
+- /*
+- * BSpec recoomends 8x4 when MSAA is used,
+- * however in practice 16x4 seems fastest.
+- *
+- * Note that PS/WM thread counts depend on the WIZ hashing
+- * disable bit, which we don't touch here, but it's good
+- * to keep in mind (see 3DSTATE_PS and 3DSTATE_WM).
+- */
+- I915_WRITE(GEN6_GT_MODE,
+- _MASKED_FIELD(GEN6_WIZ_HASHING_MASK, GEN6_WIZ_HASHING_16x4));
+-
+- I915_WRITE(CACHE_MODE_0,
+- _MASKED_BIT_DISABLE(CM0_STC_EVICT_DISABLE_LRA_SNB));
+-
+ I915_WRITE(GEN6_UCGCTL1,
+ I915_READ(GEN6_UCGCTL1) |
+ GEN6_BLBUNIT_CLOCK_GATE_DISABLE |
+@@ -6708,18 +6677,6 @@ static void gen6_init_clock_gating(struct drm_i915_private *dev_priv)
+ GEN6_RCPBUNIT_CLOCK_GATE_DISABLE |
+ GEN6_RCCUNIT_CLOCK_GATE_DISABLE);
+
+- /* WaStripsFansDisableFastClipPerformanceFix:snb */
+- I915_WRITE(_3D_CHICKEN3,
+- _MASKED_BIT_ENABLE(_3D_CHICKEN3_SF_DISABLE_FASTCLIP_CULL));
+-
+- /*
+- * Bspec says:
+- * "This bit must be set if 3DSTATE_CLIP clip mode is set to normal and
+- * 3DSTATE_SF number of SF output attributes is more than 16."
+- */
+- I915_WRITE(_3D_CHICKEN3,
+- _MASKED_BIT_ENABLE(_3D_CHICKEN3_SF_DISABLE_PIPELINED_ATTR_FETCH));
+-
+ /*
+ * According to the spec the following bits should be
+ * set in order to enable memory self-refresh and fbc:
+@@ -6749,24 +6706,6 @@ static void gen6_init_clock_gating(struct drm_i915_private *dev_priv)
+ gen6_check_mch_setup(dev_priv);
+ }
+
+-static void gen7_setup_fixed_func_scheduler(struct drm_i915_private *dev_priv)
+-{
+- u32 reg = I915_READ(GEN7_FF_THREAD_MODE);
+-
+- /*
+- * WaVSThreadDispatchOverride:ivb,vlv
+- *
+- * This actually overrides the dispatch
+- * mode for all thread types.
+- */
+- reg &= ~GEN7_FF_SCHED_MASK;
+- reg |= GEN7_FF_TS_SCHED_HW;
+- reg |= GEN7_FF_VS_SCHED_HW;
+- reg |= GEN7_FF_DS_SCHED_HW;
+-
+- I915_WRITE(GEN7_FF_THREAD_MODE, reg);
+-}
+-
+ static void lpt_init_clock_gating(struct drm_i915_private *dev_priv)
+ {
+ /*
+@@ -6992,45 +6931,10 @@ static void bdw_init_clock_gating(struct drm_i915_private *dev_priv)
+
+ static void hsw_init_clock_gating(struct drm_i915_private *dev_priv)
+ {
+- /* L3 caching of data atomics doesn't work -- disable it. */
+- I915_WRITE(HSW_SCRATCH1, HSW_SCRATCH1_L3_DATA_ATOMICS_DISABLE);
+- I915_WRITE(HSW_ROW_CHICKEN3,
+- _MASKED_BIT_ENABLE(HSW_ROW_CHICKEN3_L3_GLOBAL_ATOMICS_DISABLE));
+-
+ /* This is required by WaCatErrorRejectionIssue:hsw */
+ I915_WRITE(GEN7_SQ_CHICKEN_MBCUNIT_CONFIG,
+- I915_READ(GEN7_SQ_CHICKEN_MBCUNIT_CONFIG) |
+- GEN7_SQ_CHICKEN_MBCUNIT_SQINTMOB);
+-
+- /* WaVSRefCountFullforceMissDisable:hsw */
+- I915_WRITE(GEN7_FF_THREAD_MODE,
+- I915_READ(GEN7_FF_THREAD_MODE) & ~GEN7_FF_VS_REF_CNT_FFME);
+-
+- /* WaDisable_RenderCache_OperationalFlush:hsw */
+- I915_WRITE(CACHE_MODE_0_GEN7, _MASKED_BIT_DISABLE(RC_OP_FLUSH_ENABLE));
+-
+- /* enable HiZ Raw Stall Optimization */
+- I915_WRITE(CACHE_MODE_0_GEN7,
+- _MASKED_BIT_DISABLE(HIZ_RAW_STALL_OPT_DISABLE));
+-
+- /* WaDisable4x2SubspanOptimization:hsw */
+- I915_WRITE(CACHE_MODE_1,
+- _MASKED_BIT_ENABLE(PIXEL_SUBSPAN_COLLECT_OPT_DISABLE));
+-
+- /*
+- * BSpec recommends 8x4 when MSAA is used,
+- * however in practice 16x4 seems fastest.
+- *
+- * Note that PS/WM thread counts depend on the WIZ hashing
+- * disable bit, which we don't touch here, but it's good
+- * to keep in mind (see 3DSTATE_PS and 3DSTATE_WM).
+- */
+- I915_WRITE(GEN7_GT_MODE,
+- _MASKED_FIELD(GEN6_WIZ_HASHING_MASK, GEN6_WIZ_HASHING_16x4));
+-
+- /* WaSampleCChickenBitEnable:hsw */
+- I915_WRITE(HALF_SLICE_CHICKEN3,
+- _MASKED_BIT_ENABLE(HSW_SAMPLE_C_PERFORMANCE));
++ I915_READ(GEN7_SQ_CHICKEN_MBCUNIT_CONFIG) |
++ GEN7_SQ_CHICKEN_MBCUNIT_SQINTMOB);
+
+ /* WaSwitchSolVfFArbitrationPriority:hsw */
+ I915_WRITE(GAM_ECOCHK, I915_READ(GAM_ECOCHK) | HSW_ECOCHK_ARB_PRIO_SOL);
+@@ -7044,32 +6948,11 @@ static void ivb_init_clock_gating(struct drm_i915_private *dev_priv)
+
+ I915_WRITE(ILK_DSPCLK_GATE_D, ILK_VRHUNIT_CLOCK_GATE_DISABLE);
+
+- /* WaDisableEarlyCull:ivb */
+- I915_WRITE(_3D_CHICKEN3,
+- _MASKED_BIT_ENABLE(_3D_CHICKEN_SF_DISABLE_OBJEND_CULL));
+-
+ /* WaDisableBackToBackFlipFix:ivb */
+ I915_WRITE(IVB_CHICKEN3,
+ CHICKEN3_DGMG_REQ_OUT_FIX_DISABLE |
+ CHICKEN3_DGMG_DONE_FIX_DISABLE);
+
+- /* WaDisablePSDDualDispatchEnable:ivb */
+- if (IS_IVB_GT1(dev_priv))
+- I915_WRITE(GEN7_HALF_SLICE_CHICKEN1,
+- _MASKED_BIT_ENABLE(GEN7_PSD_SINGLE_PORT_DISPATCH_ENABLE));
+-
+- /* WaDisable_RenderCache_OperationalFlush:ivb */
+- I915_WRITE(CACHE_MODE_0_GEN7, _MASKED_BIT_DISABLE(RC_OP_FLUSH_ENABLE));
+-
+- /* Apply the WaDisableRHWOOptimizationForRenderHang:ivb workaround. */
+- I915_WRITE(GEN7_COMMON_SLICE_CHICKEN1,
+- GEN7_CSC1_RHWO_OPT_DISABLE_IN_RCC);
+-
+- /* WaApplyL3ControlAndL3ChickenMode:ivb */
+- I915_WRITE(GEN7_L3CNTLREG1,
+- GEN7_WA_FOR_GEN7_L3_CONTROL);
+- I915_WRITE(GEN7_L3_CHICKEN_MODE_REGISTER,
+- GEN7_WA_L3_CHICKEN_MODE);
+ if (IS_IVB_GT1(dev_priv))
+ I915_WRITE(GEN7_ROW_CHICKEN2,
+ _MASKED_BIT_ENABLE(DOP_CLOCK_GATING_DISABLE));
+@@ -7081,10 +6964,6 @@ static void ivb_init_clock_gating(struct drm_i915_private *dev_priv)
+ _MASKED_BIT_ENABLE(DOP_CLOCK_GATING_DISABLE));
+ }
+
+- /* WaForceL3Serialization:ivb */
+- I915_WRITE(GEN7_L3SQCREG4, I915_READ(GEN7_L3SQCREG4) &
+- ~L3SQ_URB_READ_CAM_MATCH_DISABLE);
+-
+ /*
+ * According to the spec, bit 13 (RCZUNIT) must be set on IVB.
+ * This implements the WaDisableRCZUnitClockGating:ivb workaround.
+@@ -7099,29 +6978,6 @@ static void ivb_init_clock_gating(struct drm_i915_private *dev_priv)
+
+ g4x_disable_trickle_feed(dev_priv);
+
+- gen7_setup_fixed_func_scheduler(dev_priv);
+-
+- if (0) { /* causes HiZ corruption on ivb:gt1 */
+- /* enable HiZ Raw Stall Optimization */
+- I915_WRITE(CACHE_MODE_0_GEN7,
+- _MASKED_BIT_DISABLE(HIZ_RAW_STALL_OPT_DISABLE));
+- }
+-
+- /* WaDisable4x2SubspanOptimization:ivb */
+- I915_WRITE(CACHE_MODE_1,
+- _MASKED_BIT_ENABLE(PIXEL_SUBSPAN_COLLECT_OPT_DISABLE));
+-
+- /*
+- * BSpec recommends 8x4 when MSAA is used,
+- * however in practice 16x4 seems fastest.
+- *
+- * Note that PS/WM thread counts depend on the WIZ hashing
+- * disable bit, which we don't touch here, but it's good
+- * to keep in mind (see 3DSTATE_PS and 3DSTATE_WM).
+- */
+- I915_WRITE(GEN7_GT_MODE,
+- _MASKED_FIELD(GEN6_WIZ_HASHING_MASK, GEN6_WIZ_HASHING_16x4));
+-
+ snpcr = I915_READ(GEN6_MBCUNIT_SNPCR);
+ snpcr &= ~GEN6_MBC_SNPCR_MASK;
+ snpcr |= GEN6_MBC_SNPCR_MED;
+@@ -7135,28 +6991,11 @@ static void ivb_init_clock_gating(struct drm_i915_private *dev_priv)
+
+ static void vlv_init_clock_gating(struct drm_i915_private *dev_priv)
+ {
+- /* WaDisableEarlyCull:vlv */
+- I915_WRITE(_3D_CHICKEN3,
+- _MASKED_BIT_ENABLE(_3D_CHICKEN_SF_DISABLE_OBJEND_CULL));
+-
+ /* WaDisableBackToBackFlipFix:vlv */
+ I915_WRITE(IVB_CHICKEN3,
+ CHICKEN3_DGMG_REQ_OUT_FIX_DISABLE |
+ CHICKEN3_DGMG_DONE_FIX_DISABLE);
+
+- /* WaPsdDispatchEnable:vlv */
+- /* WaDisablePSDDualDispatchEnable:vlv */
+- I915_WRITE(GEN7_HALF_SLICE_CHICKEN1,
+- _MASKED_BIT_ENABLE(GEN7_MAX_PS_THREAD_DEP |
+- GEN7_PSD_SINGLE_PORT_DISPATCH_ENABLE));
+-
+- /* WaDisable_RenderCache_OperationalFlush:vlv */
+- I915_WRITE(CACHE_MODE_0_GEN7, _MASKED_BIT_DISABLE(RC_OP_FLUSH_ENABLE));
+-
+- /* WaForceL3Serialization:vlv */
+- I915_WRITE(GEN7_L3SQCREG4, I915_READ(GEN7_L3SQCREG4) &
+- ~L3SQ_URB_READ_CAM_MATCH_DISABLE);
+-
+ /* WaDisableDopClockGating:vlv */
+ I915_WRITE(GEN7_ROW_CHICKEN2,
+ _MASKED_BIT_ENABLE(DOP_CLOCK_GATING_DISABLE));
+@@ -7166,8 +7005,6 @@ static void vlv_init_clock_gating(struct drm_i915_private *dev_priv)
+ I915_READ(GEN7_SQ_CHICKEN_MBCUNIT_CONFIG) |
+ GEN7_SQ_CHICKEN_MBCUNIT_SQINTMOB);
+
+- gen7_setup_fixed_func_scheduler(dev_priv);
+-
+ /*
+ * According to the spec, bit 13 (RCZUNIT) must be set on IVB.
+ * This implements the WaDisableRCZUnitClockGating:vlv workaround.
+@@ -7181,30 +7018,6 @@ static void vlv_init_clock_gating(struct drm_i915_private *dev_priv)
+ I915_WRITE(GEN7_UCGCTL4,
+ I915_READ(GEN7_UCGCTL4) | GEN7_L3BANK2X_CLOCK_GATE_DISABLE);
+
+- /*
+- * BSpec says this must be set, even though
+- * WaDisable4x2SubspanOptimization isn't listed for VLV.
+- */
+- I915_WRITE(CACHE_MODE_1,
+- _MASKED_BIT_ENABLE(PIXEL_SUBSPAN_COLLECT_OPT_DISABLE));
+-
+- /*
+- * BSpec recommends 8x4 when MSAA is used,
+- * however in practice 16x4 seems fastest.
+- *
+- * Note that PS/WM thread counts depend on the WIZ hashing
+- * disable bit, which we don't touch here, but it's good
+- * to keep in mind (see 3DSTATE_PS and 3DSTATE_WM).
+- */
+- I915_WRITE(GEN7_GT_MODE,
+- _MASKED_FIELD(GEN6_WIZ_HASHING_MASK, GEN6_WIZ_HASHING_16x4));
+-
+- /*
+- * WaIncreaseL3CreditsForVLVB0:vlv
+- * This is the hardware default actually.
+- */
+- I915_WRITE(GEN7_L3SQCREG1, VLV_B0_WA_L3SQCREG1_VALUE);
+-
+ /*
+ * WaDisableVLVClockGating_VBIIssue:vlv
+ * Disable clock gating on th GCFG unit to prevent a delay
+@@ -7257,13 +7070,6 @@ static void g4x_init_clock_gating(struct drm_i915_private *dev_priv)
+ dspclk_gate |= DSSUNIT_CLOCK_GATE_DISABLE;
+ I915_WRITE(DSPCLK_GATE_D, dspclk_gate);
+
+- /* WaDisableRenderCachePipelinedFlush */
+- I915_WRITE(CACHE_MODE_0,
+- _MASKED_BIT_ENABLE(CM0_PIPELINED_RENDER_FLUSH_DISABLE));
+-
+- /* WaDisable_RenderCache_OperationalFlush:g4x */
+- I915_WRITE(CACHE_MODE_0, _MASKED_BIT_DISABLE(RC_OP_FLUSH_ENABLE));
+-
+ g4x_disable_trickle_feed(dev_priv);
+ }
+
+@@ -7279,11 +7085,6 @@ static void i965gm_init_clock_gating(struct drm_i915_private *dev_priv)
+ intel_uncore_write(uncore,
+ MI_ARB_STATE,
+ _MASKED_BIT_ENABLE(MI_ARB_DISPLAY_TRICKLE_FEED_DISABLE));
+-
+- /* WaDisable_RenderCache_OperationalFlush:gen4 */
+- intel_uncore_write(uncore,
+- CACHE_MODE_0,
+- _MASKED_BIT_DISABLE(RC_OP_FLUSH_ENABLE));
+ }
+
+ static void i965g_init_clock_gating(struct drm_i915_private *dev_priv)
+@@ -7296,9 +7097,6 @@ static void i965g_init_clock_gating(struct drm_i915_private *dev_priv)
+ I915_WRITE(RENCLK_GATE_D2, 0);
+ I915_WRITE(MI_ARB_STATE,
+ _MASKED_BIT_ENABLE(MI_ARB_DISPLAY_TRICKLE_FEED_DISABLE));
+-
+- /* WaDisable_RenderCache_OperationalFlush:gen4 */
+- I915_WRITE(CACHE_MODE_0, _MASKED_BIT_DISABLE(RC_OP_FLUSH_ENABLE));
+ }
+
+ static void gen3_init_clock_gating(struct drm_i915_private *dev_priv)
+diff --git a/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h b/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h
+index 5b39bab4da1d..86baed226b53 100644
+--- a/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h
++++ b/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h
+@@ -20,6 +20,7 @@ selftest(fence, i915_sw_fence_mock_selftests)
+ selftest(scatterlist, scatterlist_mock_selftests)
+ selftest(syncmap, i915_syncmap_mock_selftests)
+ selftest(uncore, intel_uncore_mock_selftests)
++selftest(ring, intel_ring_mock_selftests)
+ selftest(engine, intel_engine_cs_mock_selftests)
+ selftest(timelines, intel_timeline_mock_selftests)
+ selftest(requests, i915_request_mock_selftests)
+diff --git a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
+index 724024a2243a..662d02289533 100644
+--- a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
++++ b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
+@@ -1404,6 +1404,10 @@ static unsigned long a5xx_gpu_busy(struct msm_gpu *gpu)
+ {
+ u64 busy_cycles, busy_time;
+
++ /* Only read the gpu busy if the hardware is already active */
++ if (pm_runtime_get_if_in_use(&gpu->pdev->dev) == 0)
++ return 0;
++
+ busy_cycles = gpu_read64(gpu, REG_A5XX_RBBM_PERFCTR_RBBM_0_LO,
+ REG_A5XX_RBBM_PERFCTR_RBBM_0_HI);
+
+@@ -1412,6 +1416,8 @@ static unsigned long a5xx_gpu_busy(struct msm_gpu *gpu)
+
+ gpu->devfreq.busy_cycles = busy_cycles;
+
++ pm_runtime_put(&gpu->pdev->dev);
++
+ if (WARN_ON(busy_time > ~0LU))
+ return ~0LU;
+
+diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
+index c4e71abbdd53..34607a98cc7c 100644
+--- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
++++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
+@@ -108,6 +108,13 @@ static void __a6xx_gmu_set_freq(struct a6xx_gmu *gmu, int index)
+ struct msm_gpu *gpu = &adreno_gpu->base;
+ int ret;
+
++ /*
++ * This can get called from devfreq while the hardware is idle. Don't
++ * bring up the power if it isn't already active
++ */
++ if (pm_runtime_get_if_in_use(gmu->dev) == 0)
++ return;
++
+ gmu_write(gmu, REG_A6XX_GMU_DCVS_ACK_OPTION, 0);
+
+ gmu_write(gmu, REG_A6XX_GMU_DCVS_PERF_SETTING,
+@@ -134,6 +141,7 @@ static void __a6xx_gmu_set_freq(struct a6xx_gmu *gmu, int index)
+ * for now leave it at max so that the performance is nominal.
+ */
+ icc_set_bw(gpu->icc_path, 0, MBps_to_icc(7216));
++ pm_runtime_put(gmu->dev);
+ }
+
+ void a6xx_gmu_set_freq(struct msm_gpu *gpu, unsigned long freq)
+diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+index 68af24150de5..2c09d2c21773 100644
+--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
++++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+@@ -810,6 +810,11 @@ static unsigned long a6xx_gpu_busy(struct msm_gpu *gpu)
+ struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu);
+ u64 busy_cycles, busy_time;
+
++
++ /* Only read the gpu busy if the hardware is already active */
++ if (pm_runtime_get_if_in_use(a6xx_gpu->gmu.dev) == 0)
++ return 0;
++
+ busy_cycles = gmu_read64(&a6xx_gpu->gmu,
+ REG_A6XX_GMU_CX_GMU_POWER_COUNTER_XOCLK_0_L,
+ REG_A6XX_GMU_CX_GMU_POWER_COUNTER_XOCLK_0_H);
+@@ -819,6 +824,8 @@ static unsigned long a6xx_gpu_busy(struct msm_gpu *gpu)
+
+ gpu->devfreq.busy_cycles = busy_cycles;
+
++ pm_runtime_put(a6xx_gpu->gmu.dev);
++
+ if (WARN_ON(busy_time > ~0LU))
+ return ~0LU;
+
+diff --git a/drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c b/drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c
+index 47b989834af1..c23a2fa13fb9 100644
+--- a/drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c
++++ b/drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c
+@@ -943,7 +943,8 @@ static int mdp5_init(struct platform_device *pdev, struct drm_device *dev)
+
+ return 0;
+ fail:
+- mdp5_destroy(pdev);
++ if (mdp5_kms)
++ mdp5_destroy(pdev);
+ return ret;
+ }
+
+diff --git a/drivers/gpu/drm/msm/msm_rd.c b/drivers/gpu/drm/msm/msm_rd.c
+index 732f65df5c4f..fea30e7aa9e8 100644
+--- a/drivers/gpu/drm/msm/msm_rd.c
++++ b/drivers/gpu/drm/msm/msm_rd.c
+@@ -29,8 +29,6 @@
+ * or shader programs (if not emitted inline in cmdstream).
+ */
+
+-#ifdef CONFIG_DEBUG_FS
+-
+ #include <linux/circ_buf.h>
+ #include <linux/debugfs.h>
+ #include <linux/kfifo.h>
+@@ -47,6 +45,8 @@ bool rd_full = false;
+ MODULE_PARM_DESC(rd_full, "If true, $debugfs/.../rd will snapshot all buffer contents");
+ module_param_named(rd_full, rd_full, bool, 0600);
+
++#ifdef CONFIG_DEBUG_FS
++
+ enum rd_sect_type {
+ RD_NONE,
+ RD_TEST, /* ascii text */
+diff --git a/drivers/gpu/drm/nouveau/dispnv50/disp.c b/drivers/gpu/drm/nouveau/dispnv50/disp.c
+index 6be9df1820c5..2625ed84fc44 100644
+--- a/drivers/gpu/drm/nouveau/dispnv50/disp.c
++++ b/drivers/gpu/drm/nouveau/dispnv50/disp.c
+@@ -482,15 +482,16 @@ nv50_dac_create(struct drm_connector *connector, struct dcb_output *dcbe)
+ * audio component binding for ELD notification
+ */
+ static void
+-nv50_audio_component_eld_notify(struct drm_audio_component *acomp, int port)
++nv50_audio_component_eld_notify(struct drm_audio_component *acomp, int port,
++ int dev_id)
+ {
+ if (acomp && acomp->audio_ops && acomp->audio_ops->pin_eld_notify)
+ acomp->audio_ops->pin_eld_notify(acomp->audio_ops->audio_ptr,
+- port, -1);
++ port, dev_id);
+ }
+
+ static int
+-nv50_audio_component_get_eld(struct device *kdev, int port, int pipe,
++nv50_audio_component_get_eld(struct device *kdev, int port, int dev_id,
+ bool *enabled, unsigned char *buf, int max_bytes)
+ {
+ struct drm_device *drm_dev = dev_get_drvdata(kdev);
+@@ -506,7 +507,8 @@ nv50_audio_component_get_eld(struct device *kdev, int port, int pipe,
+ nv_encoder = nouveau_encoder(encoder);
+ nv_connector = nouveau_encoder_connector_get(nv_encoder);
+ nv_crtc = nouveau_crtc(encoder->crtc);
+- if (!nv_connector || !nv_crtc || nv_crtc->index != port)
++ if (!nv_connector || !nv_crtc || nv_encoder->or != port ||
++ nv_crtc->index != dev_id)
+ continue;
+ *enabled = drm_detect_monitor_audio(nv_connector->edid);
+ if (*enabled) {
+@@ -600,7 +602,8 @@ nv50_audio_disable(struct drm_encoder *encoder, struct nouveau_crtc *nv_crtc)
+
+ nvif_mthd(&disp->disp->object, 0, &args, sizeof(args));
+
+- nv50_audio_component_eld_notify(drm->audio.component, nv_crtc->index);
++ nv50_audio_component_eld_notify(drm->audio.component, nv_encoder->or,
++ nv_crtc->index);
+ }
+
+ static void
+@@ -634,7 +637,8 @@ nv50_audio_enable(struct drm_encoder *encoder, struct drm_display_mode *mode)
+ nvif_mthd(&disp->disp->object, 0, &args,
+ sizeof(args.base) + drm_eld_size(args.data));
+
+- nv50_audio_component_eld_notify(drm->audio.component, nv_crtc->index);
++ nv50_audio_component_eld_notify(drm->audio.component, nv_encoder->or,
++ nv_crtc->index);
+ }
+
+ /******************************************************************************
+diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/disp/hdmigm200.c b/drivers/gpu/drm/nouveau/nvkm/engine/disp/hdmigm200.c
+index 9b16a08eb4d9..bf6d41fb0c9f 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/engine/disp/hdmigm200.c
++++ b/drivers/gpu/drm/nouveau/nvkm/engine/disp/hdmigm200.c
+@@ -27,10 +27,10 @@ void
+ gm200_hdmi_scdc(struct nvkm_ior *ior, int head, u8 scdc)
+ {
+ struct nvkm_device *device = ior->disp->engine.subdev.device;
+- const u32 hoff = head * 0x800;
++ const u32 soff = nv50_ior_base(ior);
+ const u32 ctrl = scdc & 0x3;
+
+- nvkm_mask(device, 0x61c5bc + hoff, 0x00000003, ctrl);
++ nvkm_mask(device, 0x61c5bc + soff, 0x00000003, ctrl);
+
+ ior->tmds.high_speed = !!(scdc & 0x2);
+ }
+diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/gr/gk20a.c b/drivers/gpu/drm/nouveau/nvkm/engine/gr/gk20a.c
+index 4209b24a46d7..bf6b65257852 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/engine/gr/gk20a.c
++++ b/drivers/gpu/drm/nouveau/nvkm/engine/gr/gk20a.c
+@@ -341,7 +341,7 @@ gk20a_gr_load(struct gf100_gr *gr, int ver, const struct gf100_gr_fwif *fwif)
+
+ static const struct gf100_gr_fwif
+ gk20a_gr_fwif[] = {
+- { -1, gk20a_gr_load, &gk20a_gr },
++ { 0, gk20a_gr_load, &gk20a_gr },
+ {}
+ };
+
+diff --git a/drivers/gpu/drm/qxl/qxl_kms.c b/drivers/gpu/drm/qxl/qxl_kms.c
+index 70b20ee4741a..41ef6a9ca8cc 100644
+--- a/drivers/gpu/drm/qxl/qxl_kms.c
++++ b/drivers/gpu/drm/qxl/qxl_kms.c
+@@ -218,7 +218,7 @@ int qxl_device_init(struct qxl_device *qdev,
+ &(qdev->ram_header->cursor_ring_hdr),
+ sizeof(struct qxl_command),
+ QXL_CURSOR_RING_SIZE,
+- qdev->io_base + QXL_IO_NOTIFY_CMD,
++ qdev->io_base + QXL_IO_NOTIFY_CURSOR,
+ false,
+ &qdev->cursor_event);
+
+diff --git a/drivers/gpu/drm/sun4i/sun4i_hdmi.h b/drivers/gpu/drm/sun4i/sun4i_hdmi.h
+index 7ad3f06c127e..00ca35f07ba5 100644
+--- a/drivers/gpu/drm/sun4i/sun4i_hdmi.h
++++ b/drivers/gpu/drm/sun4i/sun4i_hdmi.h
+@@ -148,7 +148,7 @@
+ #define SUN4I_HDMI_DDC_CMD_IMPLICIT_WRITE 3
+
+ #define SUN4I_HDMI_DDC_CLK_REG 0x528
+-#define SUN4I_HDMI_DDC_CLK_M(m) (((m) & 0x7) << 3)
++#define SUN4I_HDMI_DDC_CLK_M(m) (((m) & 0xf) << 3)
+ #define SUN4I_HDMI_DDC_CLK_N(n) ((n) & 0x7)
+
+ #define SUN4I_HDMI_DDC_LINE_CTRL_REG 0x540
+diff --git a/drivers/gpu/drm/sun4i/sun4i_hdmi_ddc_clk.c b/drivers/gpu/drm/sun4i/sun4i_hdmi_ddc_clk.c
+index 2ff780114106..12430b9d4e93 100644
+--- a/drivers/gpu/drm/sun4i/sun4i_hdmi_ddc_clk.c
++++ b/drivers/gpu/drm/sun4i/sun4i_hdmi_ddc_clk.c
+@@ -33,7 +33,7 @@ static unsigned long sun4i_ddc_calc_divider(unsigned long rate,
+ unsigned long best_rate = 0;
+ u8 best_m = 0, best_n = 0, _m, _n;
+
+- for (_m = 0; _m < 8; _m++) {
++ for (_m = 0; _m < 16; _m++) {
+ for (_n = 0; _n < 8; _n++) {
+ unsigned long tmp_rate;
+
+diff --git a/drivers/hid/hid-ids.h b/drivers/hid/hid-ids.h
+index 1c71a1aa76b2..f03f1cc913ce 100644
+--- a/drivers/hid/hid-ids.h
++++ b/drivers/hid/hid-ids.h
+@@ -1157,6 +1157,9 @@
+ #define USB_DEVICE_ID_TPV_OPTICAL_TOUCHSCREEN_8882 0x8882
+ #define USB_DEVICE_ID_TPV_OPTICAL_TOUCHSCREEN_8883 0x8883
+
++#define USB_VENDOR_ID_TRUST 0x145f
++#define USB_DEVICE_ID_TRUST_PANORA_TABLET 0x0212
++
+ #define USB_VENDOR_ID_TURBOX 0x062a
+ #define USB_DEVICE_ID_TURBOX_KEYBOARD 0x0201
+ #define USB_DEVICE_ID_ASUS_MD_5110 0x5110
+diff --git a/drivers/hid/hid-quirks.c b/drivers/hid/hid-quirks.c
+index e4cb543de0cd..ca8b5c261c7c 100644
+--- a/drivers/hid/hid-quirks.c
++++ b/drivers/hid/hid-quirks.c
+@@ -168,6 +168,7 @@ static const struct hid_device_id hid_quirks[] = {
+ { HID_USB_DEVICE(USB_VENDOR_ID_TOUCHPACK, USB_DEVICE_ID_TOUCHPACK_RTS), HID_QUIRK_MULTI_INPUT },
+ { HID_USB_DEVICE(USB_VENDOR_ID_TPV, USB_DEVICE_ID_TPV_OPTICAL_TOUCHSCREEN_8882), HID_QUIRK_NOGET },
+ { HID_USB_DEVICE(USB_VENDOR_ID_TPV, USB_DEVICE_ID_TPV_OPTICAL_TOUCHSCREEN_8883), HID_QUIRK_NOGET },
++ { HID_USB_DEVICE(USB_VENDOR_ID_TRUST, USB_DEVICE_ID_TRUST_PANORA_TABLET), HID_QUIRK_MULTI_INPUT | HID_QUIRK_HIDINPUT_FORCE },
+ { HID_USB_DEVICE(USB_VENDOR_ID_TURBOX, USB_DEVICE_ID_TURBOX_KEYBOARD), HID_QUIRK_NOGET },
+ { HID_USB_DEVICE(USB_VENDOR_ID_UCLOGIC, USB_DEVICE_ID_UCLOGIC_TABLET_KNA5), HID_QUIRK_MULTI_INPUT },
+ { HID_USB_DEVICE(USB_VENDOR_ID_UCLOGIC, USB_DEVICE_ID_UCLOGIC_TABLET_TWA60), HID_QUIRK_MULTI_INPUT },
+diff --git a/drivers/hid/intel-ish-hid/ishtp-fw-loader.c b/drivers/hid/intel-ish-hid/ishtp-fw-loader.c
+index aa2dbed30fc3..6cf59fd26ad7 100644
+--- a/drivers/hid/intel-ish-hid/ishtp-fw-loader.c
++++ b/drivers/hid/intel-ish-hid/ishtp-fw-loader.c
+@@ -480,6 +480,7 @@ static int ish_query_loader_prop(struct ishtp_cl_data *client_data,
+ sizeof(ldr_xfer_query_resp));
+ if (rv < 0) {
+ client_data->flag_retry = true;
++ *fw_info = (struct shim_fw_info){};
+ return rv;
+ }
+
+@@ -489,6 +490,7 @@ static int ish_query_loader_prop(struct ishtp_cl_data *client_data,
+ "data size %d is not equal to size of loader_xfer_query_response %zu\n",
+ rv, sizeof(struct loader_xfer_query_response));
+ client_data->flag_retry = true;
++ *fw_info = (struct shim_fw_info){};
+ return -EMSGSIZE;
+ }
+
+diff --git a/drivers/hwtracing/coresight/coresight-etm4x.c b/drivers/hwtracing/coresight/coresight-etm4x.c
+index a90d757f7043..a6d6c7a3abcb 100644
+--- a/drivers/hwtracing/coresight/coresight-etm4x.c
++++ b/drivers/hwtracing/coresight/coresight-etm4x.c
+@@ -1527,6 +1527,7 @@ static int etm4_probe(struct amba_device *adev, const struct amba_id *id)
+ return 0;
+
+ err_arch_supported:
++ etmdrvdata[drvdata->cpu] = NULL;
+ if (--etm4_count == 0) {
+ etm4_cpu_pm_unregister();
+
+diff --git a/drivers/hwtracing/coresight/coresight-platform.c b/drivers/hwtracing/coresight/coresight-platform.c
+index 43418a2126ff..471f34e40c74 100644
+--- a/drivers/hwtracing/coresight/coresight-platform.c
++++ b/drivers/hwtracing/coresight/coresight-platform.c
+@@ -87,6 +87,7 @@ static void of_coresight_get_ports_legacy(const struct device_node *node,
+ int *nr_inport, int *nr_outport)
+ {
+ struct device_node *ep = NULL;
++ struct of_endpoint endpoint;
+ int in = 0, out = 0;
+
+ do {
+@@ -94,10 +95,16 @@ static void of_coresight_get_ports_legacy(const struct device_node *node,
+ if (!ep)
+ break;
+
+- if (of_coresight_legacy_ep_is_input(ep))
+- in++;
+- else
+- out++;
++ if (of_graph_parse_endpoint(ep, &endpoint))
++ continue;
++
++ if (of_coresight_legacy_ep_is_input(ep)) {
++ in = (endpoint.port + 1 > in) ?
++ endpoint.port + 1 : in;
++ } else {
++ out = (endpoint.port + 1) > out ?
++ endpoint.port + 1 : out;
++ }
+
+ } while (ep);
+
+@@ -137,9 +144,16 @@ of_coresight_count_ports(struct device_node *port_parent)
+ {
+ int i = 0;
+ struct device_node *ep = NULL;
++ struct of_endpoint endpoint;
++
++ while ((ep = of_graph_get_next_endpoint(port_parent, ep))) {
++ /* Defer error handling to parsing */
++ if (of_graph_parse_endpoint(ep, &endpoint))
++ continue;
++ if (endpoint.port + 1 > i)
++ i = endpoint.port + 1;
++ }
+
+- while ((ep = of_graph_get_next_endpoint(port_parent, ep)))
+- i++;
+ return i;
+ }
+
+@@ -191,14 +205,12 @@ static int of_coresight_get_cpu(struct device *dev)
+ * Parses the local port, remote device name and the remote port.
+ *
+ * Returns :
+- * 1 - If the parsing is successful and a connection record
+- * was created for an output connection.
+ * 0 - If the parsing completed without any fatal errors.
+ * -Errno - Fatal error, abort the scanning.
+ */
+ static int of_coresight_parse_endpoint(struct device *dev,
+ struct device_node *ep,
+- struct coresight_connection *conn)
++ struct coresight_platform_data *pdata)
+ {
+ int ret = 0;
+ struct of_endpoint endpoint, rendpoint;
+@@ -206,6 +218,7 @@ static int of_coresight_parse_endpoint(struct device *dev,
+ struct device_node *rep = NULL;
+ struct device *rdev = NULL;
+ struct fwnode_handle *rdev_fwnode;
++ struct coresight_connection *conn;
+
+ do {
+ /* Parse the local port details */
+@@ -232,6 +245,13 @@ static int of_coresight_parse_endpoint(struct device *dev,
+ break;
+ }
+
++ conn = &pdata->conns[endpoint.port];
++ if (conn->child_fwnode) {
++ dev_warn(dev, "Duplicate output port %d\n",
++ endpoint.port);
++ ret = -EINVAL;
++ break;
++ }
+ conn->outport = endpoint.port;
+ /*
+ * Hold the refcount to the target device. This could be
+@@ -244,7 +264,6 @@ static int of_coresight_parse_endpoint(struct device *dev,
+ conn->child_fwnode = fwnode_handle_get(rdev_fwnode);
+ conn->child_port = rendpoint.port;
+ /* Connection record updated */
+- ret = 1;
+ } while (0);
+
+ of_node_put(rparent);
+@@ -258,7 +277,6 @@ static int of_get_coresight_platform_data(struct device *dev,
+ struct coresight_platform_data *pdata)
+ {
+ int ret = 0;
+- struct coresight_connection *conn;
+ struct device_node *ep = NULL;
+ const struct device_node *parent = NULL;
+ bool legacy_binding = false;
+@@ -287,8 +305,6 @@ static int of_get_coresight_platform_data(struct device *dev,
+ dev_warn_once(dev, "Uses obsolete Coresight DT bindings\n");
+ }
+
+- conn = pdata->conns;
+-
+ /* Iterate through each output port to discover topology */
+ while ((ep = of_graph_get_next_endpoint(parent, ep))) {
+ /*
+@@ -300,15 +316,9 @@ static int of_get_coresight_platform_data(struct device *dev,
+ if (legacy_binding && of_coresight_legacy_ep_is_input(ep))
+ continue;
+
+- ret = of_coresight_parse_endpoint(dev, ep, conn);
+- switch (ret) {
+- case 1:
+- conn++; /* Fall through */
+- case 0:
+- break;
+- default:
++ ret = of_coresight_parse_endpoint(dev, ep, pdata);
++ if (ret)
+ return ret;
+- }
+ }
+
+ return 0;
+@@ -647,6 +657,16 @@ static int acpi_coresight_parse_link(struct acpi_device *adev,
+ * coresight_remove_match().
+ */
+ conn->child_fwnode = fwnode_handle_get(&r_adev->fwnode);
++ } else if (dir == ACPI_CORESIGHT_LINK_SLAVE) {
++ /*
++ * We are only interested in the port number
++ * for the input ports at this component.
++ * Store the port number in child_port.
++ */
++ conn->child_port = fields[0].integer.value;
++ } else {
++ /* Invalid direction */
++ return -EINVAL;
+ }
+
+ return dir;
+@@ -692,10 +712,20 @@ static int acpi_coresight_parse_graph(struct acpi_device *adev,
+ return dir;
+
+ if (dir == ACPI_CORESIGHT_LINK_MASTER) {
+- pdata->nr_outport++;
++ if (ptr->outport > pdata->nr_outport)
++ pdata->nr_outport = ptr->outport;
+ ptr++;
+ } else {
+- pdata->nr_inport++;
++ WARN_ON(pdata->nr_inport == ptr->child_port);
++ /*
++ * We do not track input port connections for a device.
++ * However we need the highest port number described,
++ * which can be recorded now and reuse this connection
++ * record for an output connection. Hence, do not move
++ * the ptr for input connections
++ */
++ if (ptr->child_port > pdata->nr_inport)
++ pdata->nr_inport = ptr->child_port;
+ }
+ }
+
+@@ -704,8 +734,13 @@ static int acpi_coresight_parse_graph(struct acpi_device *adev,
+ return rc;
+
+ /* Copy the connection information to the final location */
+- for (i = 0; i < pdata->nr_outport; i++)
+- pdata->conns[i] = conns[i];
++ for (i = 0; conns + i < ptr; i++) {
++ int port = conns[i].outport;
++
++ /* Duplicate output port */
++ WARN_ON(pdata->conns[port].child_fwnode);
++ pdata->conns[port] = conns[i];
++ }
+
+ devm_kfree(&adev->dev, conns);
+ return 0;
+diff --git a/drivers/hwtracing/coresight/coresight-tmc-etf.c b/drivers/hwtracing/coresight/coresight-tmc-etf.c
+index d0cc3985b72a..36cce2bfb744 100644
+--- a/drivers/hwtracing/coresight/coresight-tmc-etf.c
++++ b/drivers/hwtracing/coresight/coresight-tmc-etf.c
+@@ -596,13 +596,6 @@ int tmc_read_prepare_etb(struct tmc_drvdata *drvdata)
+ goto out;
+ }
+
+- /* There is no point in reading a TMC in HW FIFO mode */
+- mode = readl_relaxed(drvdata->base + TMC_MODE);
+- if (mode != TMC_MODE_CIRCULAR_BUFFER) {
+- ret = -EINVAL;
+- goto out;
+- }
+-
+ /* Don't interfere if operated from Perf */
+ if (drvdata->mode == CS_MODE_PERF) {
+ ret = -EINVAL;
+@@ -616,8 +609,15 @@ int tmc_read_prepare_etb(struct tmc_drvdata *drvdata)
+ }
+
+ /* Disable the TMC if need be */
+- if (drvdata->mode == CS_MODE_SYSFS)
++ if (drvdata->mode == CS_MODE_SYSFS) {
++ /* There is no point in reading a TMC in HW FIFO mode */
++ mode = readl_relaxed(drvdata->base + TMC_MODE);
++ if (mode != TMC_MODE_CIRCULAR_BUFFER) {
++ ret = -EINVAL;
++ goto out;
++ }
+ __tmc_etb_disable_hw(drvdata);
++ }
+
+ drvdata->reading = true;
+ out:
+diff --git a/drivers/hwtracing/coresight/coresight.c b/drivers/hwtracing/coresight/coresight.c
+index c71553c09f8e..8f5e62f02444 100644
+--- a/drivers/hwtracing/coresight/coresight.c
++++ b/drivers/hwtracing/coresight/coresight.c
+@@ -1053,6 +1053,9 @@ static int coresight_orphan_match(struct device *dev, void *data)
+ for (i = 0; i < i_csdev->pdata->nr_outport; i++) {
+ conn = &i_csdev->pdata->conns[i];
+
++ /* Skip the port if FW doesn't describe it */
++ if (!conn->child_fwnode)
++ continue;
+ /* We have found at least one orphan connection */
+ if (conn->child_dev == NULL) {
+ /* Does it match this newly added device? */
+@@ -1091,6 +1094,8 @@ static void coresight_fixup_device_conns(struct coresight_device *csdev)
+ for (i = 0; i < csdev->pdata->nr_outport; i++) {
+ struct coresight_connection *conn = &csdev->pdata->conns[i];
+
++ if (!conn->child_fwnode)
++ continue;
+ conn->child_dev =
+ coresight_find_csdev_by_fwnode(conn->child_fwnode);
+ if (!conn->child_dev)
+@@ -1118,7 +1123,7 @@ static int coresight_remove_match(struct device *dev, void *data)
+ for (i = 0; i < iterator->pdata->nr_outport; i++) {
+ conn = &iterator->pdata->conns[i];
+
+- if (conn->child_dev == NULL)
++ if (conn->child_dev == NULL || conn->child_fwnode == NULL)
+ continue;
+
+ if (csdev->dev.fwnode == conn->child_fwnode) {
+diff --git a/drivers/i2c/busses/i2c-icy.c b/drivers/i2c/busses/i2c-icy.c
+index 271470f4d8a9..66c9923fc766 100644
+--- a/drivers/i2c/busses/i2c-icy.c
++++ b/drivers/i2c/busses/i2c-icy.c
+@@ -43,6 +43,7 @@
+ #include <linux/i2c.h>
+ #include <linux/i2c-algo-pcf.h>
+
++#include <asm/amigahw.h>
+ #include <asm/amigaints.h>
+ #include <linux/zorro.h>
+
+diff --git a/drivers/i2c/busses/i2c-piix4.c b/drivers/i2c/busses/i2c-piix4.c
+index 30ded6422e7b..69740a4ff1db 100644
+--- a/drivers/i2c/busses/i2c-piix4.c
++++ b/drivers/i2c/busses/i2c-piix4.c
+@@ -977,7 +977,8 @@ static int piix4_probe(struct pci_dev *dev, const struct pci_device_id *id)
+ }
+
+ if (dev->vendor == PCI_VENDOR_ID_AMD &&
+- dev->device == PCI_DEVICE_ID_AMD_HUDSON2_SMBUS) {
++ (dev->device == PCI_DEVICE_ID_AMD_HUDSON2_SMBUS ||
++ dev->device == PCI_DEVICE_ID_AMD_KERNCZ_SMBUS)) {
+ retval = piix4_setup_sb800(dev, id, 1);
+ }
+
+diff --git a/drivers/i2c/busses/i2c-pxa.c b/drivers/i2c/busses/i2c-pxa.c
+index 466e4f681d7a..f537a37ac1d5 100644
+--- a/drivers/i2c/busses/i2c-pxa.c
++++ b/drivers/i2c/busses/i2c-pxa.c
+@@ -311,11 +311,10 @@ static void i2c_pxa_scream_blue_murder(struct pxa_i2c *i2c, const char *why)
+ dev_err(dev, "IBMR: %08x IDBR: %08x ICR: %08x ISR: %08x\n",
+ readl(_IBMR(i2c)), readl(_IDBR(i2c)), readl(_ICR(i2c)),
+ readl(_ISR(i2c)));
+- dev_dbg(dev, "log: ");
++ dev_err(dev, "log:");
+ for (i = 0; i < i2c->irqlogidx; i++)
+- pr_debug("[%08x:%08x] ", i2c->isrlog[i], i2c->icrlog[i]);
+-
+- pr_debug("\n");
++ pr_cont(" [%03x:%05x]", i2c->isrlog[i], i2c->icrlog[i]);
++ pr_cont("\n");
+ }
+
+ #else /* ifdef DEBUG */
+@@ -747,11 +746,9 @@ static inline void i2c_pxa_stop_message(struct pxa_i2c *i2c)
+ {
+ u32 icr;
+
+- /*
+- * Clear the STOP and ACK flags
+- */
++ /* Clear the START, STOP, ACK, TB and MA flags */
+ icr = readl(_ICR(i2c));
+- icr &= ~(ICR_STOP | ICR_ACKNAK);
++ icr &= ~(ICR_START | ICR_STOP | ICR_ACKNAK | ICR_TB | ICR_MA);
+ writel(icr, _ICR(i2c));
+ }
+
+diff --git a/drivers/iio/buffer/industrialio-buffer-dmaengine.c b/drivers/iio/buffer/industrialio-buffer-dmaengine.c
+index b129693af0fd..94da3b1ca3a2 100644
+--- a/drivers/iio/buffer/industrialio-buffer-dmaengine.c
++++ b/drivers/iio/buffer/industrialio-buffer-dmaengine.c
+@@ -134,7 +134,7 @@ static ssize_t iio_dmaengine_buffer_get_length_align(struct device *dev,
+ struct dmaengine_buffer *dmaengine_buffer =
+ iio_buffer_to_dmaengine_buffer(indio_dev->buffer);
+
+- return sprintf(buf, "%u\n", dmaengine_buffer->align);
++ return sprintf(buf, "%zu\n", dmaengine_buffer->align);
+ }
+
+ static IIO_DEVICE_ATTR(length_align_bytes, 0444,
+diff --git a/drivers/iio/light/gp2ap002.c b/drivers/iio/light/gp2ap002.c
+index b7ef16b28280..7a2679bdc987 100644
+--- a/drivers/iio/light/gp2ap002.c
++++ b/drivers/iio/light/gp2ap002.c
+@@ -158,6 +158,9 @@ static irqreturn_t gp2ap002_prox_irq(int irq, void *d)
+ int val;
+ int ret;
+
++ if (!gp2ap002->enabled)
++ goto err_retrig;
++
+ ret = regmap_read(gp2ap002->map, GP2AP002_PROX, &val);
+ if (ret) {
+ dev_err(gp2ap002->dev, "error reading proximity\n");
+@@ -247,6 +250,8 @@ static int gp2ap002_read_raw(struct iio_dev *indio_dev,
+ struct gp2ap002 *gp2ap002 = iio_priv(indio_dev);
+ int ret;
+
++ pm_runtime_get_sync(gp2ap002->dev);
++
+ switch (mask) {
+ case IIO_CHAN_INFO_RAW:
+ switch (chan->type) {
+@@ -255,13 +260,21 @@ static int gp2ap002_read_raw(struct iio_dev *indio_dev,
+ if (ret < 0)
+ return ret;
+ *val = ret;
+- return IIO_VAL_INT;
++ ret = IIO_VAL_INT;
++ goto out;
+ default:
+- return -EINVAL;
++ ret = -EINVAL;
++ goto out;
+ }
+ default:
+- return -EINVAL;
++ ret = -EINVAL;
+ }
++
++out:
++ pm_runtime_mark_last_busy(gp2ap002->dev);
++ pm_runtime_put_autosuspend(gp2ap002->dev);
++
++ return ret;
+ }
+
+ static int gp2ap002_init(struct gp2ap002 *gp2ap002)
+diff --git a/drivers/iio/pressure/bmp280-core.c b/drivers/iio/pressure/bmp280-core.c
+index 29c209cc1108..973264a088f9 100644
+--- a/drivers/iio/pressure/bmp280-core.c
++++ b/drivers/iio/pressure/bmp280-core.c
+@@ -271,6 +271,8 @@ static u32 bmp280_compensate_humidity(struct bmp280_data *data,
+ + (s32)2097152) * calib->H2 + 8192) >> 14);
+ var -= ((((var >> 15) * (var >> 15)) >> 7) * (s32)calib->H1) >> 4;
+
++ var = clamp_val(var, 0, 419430400);
++
+ return var >> 12;
+ };
+
+@@ -713,7 +715,7 @@ static int bmp180_measure(struct bmp280_data *data, u8 ctrl_meas)
+ unsigned int ctrl;
+
+ if (data->use_eoc)
+- init_completion(&data->done);
++ reinit_completion(&data->done);
+
+ ret = regmap_write(data->regmap, BMP280_REG_CTRL_MEAS, ctrl_meas);
+ if (ret)
+@@ -969,6 +971,9 @@ static int bmp085_fetch_eoc_irq(struct device *dev,
+ "trying to enforce it\n");
+ irq_trig = IRQF_TRIGGER_RISING;
+ }
++
++ init_completion(&data->done);
++
+ ret = devm_request_threaded_irq(dev,
+ irq,
+ bmp085_eoc_irq,
+diff --git a/drivers/infiniband/core/cm.c b/drivers/infiniband/core/cm.c
+index 17f14e0eafe4..1c2bf18cda9f 100644
+--- a/drivers/infiniband/core/cm.c
++++ b/drivers/infiniband/core/cm.c
+@@ -1076,7 +1076,9 @@ retest:
+ case IB_CM_REP_SENT:
+ case IB_CM_MRA_REP_RCVD:
+ ib_cancel_mad(cm_id_priv->av.port->mad_agent, cm_id_priv->msg);
+- /* Fall through */
++ cm_send_rej_locked(cm_id_priv, IB_CM_REJ_CONSUMER_DEFINED, NULL,
++ 0, NULL, 0);
++ goto retest;
+ case IB_CM_MRA_REQ_SENT:
+ case IB_CM_REP_RCVD:
+ case IB_CM_MRA_REP_SENT:
+diff --git a/drivers/infiniband/core/cma_configfs.c b/drivers/infiniband/core/cma_configfs.c
+index c672a4978bfd..3c1e2ca564fe 100644
+--- a/drivers/infiniband/core/cma_configfs.c
++++ b/drivers/infiniband/core/cma_configfs.c
+@@ -322,8 +322,21 @@ fail:
+ return ERR_PTR(err);
+ }
+
++static void drop_cma_dev(struct config_group *cgroup, struct config_item *item)
++{
++ struct config_group *group =
++ container_of(item, struct config_group, cg_item);
++ struct cma_dev_group *cma_dev_group =
++ container_of(group, struct cma_dev_group, device_group);
++
++ configfs_remove_default_groups(&cma_dev_group->ports_group);
++ configfs_remove_default_groups(&cma_dev_group->device_group);
++ config_item_put(item);
++}
++
+ static struct configfs_group_operations cma_subsys_group_ops = {
+ .make_group = make_cma_dev,
++ .drop_item = drop_cma_dev,
+ };
+
+ static const struct config_item_type cma_subsys_type = {
+diff --git a/drivers/infiniband/core/sysfs.c b/drivers/infiniband/core/sysfs.c
+index 087682e6969e..defe9cd4c5ee 100644
+--- a/drivers/infiniband/core/sysfs.c
++++ b/drivers/infiniband/core/sysfs.c
+@@ -1058,8 +1058,7 @@ static int add_port(struct ib_core_device *coredev, int port_num)
+ coredev->ports_kobj,
+ "%d", port_num);
+ if (ret) {
+- kfree(p);
+- return ret;
++ goto err_put;
+ }
+
+ p->gid_attr_group = kzalloc(sizeof(*p->gid_attr_group), GFP_KERNEL);
+@@ -1072,8 +1071,7 @@ static int add_port(struct ib_core_device *coredev, int port_num)
+ ret = kobject_init_and_add(&p->gid_attr_group->kobj, &gid_attr_type,
+ &p->kobj, "gid_attrs");
+ if (ret) {
+- kfree(p->gid_attr_group);
+- goto err_put;
++ goto err_put_gid_attrs;
+ }
+
+ if (device->ops.process_mad && is_full_dev) {
+@@ -1404,8 +1402,10 @@ int ib_port_register_module_stat(struct ib_device *device, u8 port_num,
+
+ ret = kobject_init_and_add(kobj, ktype, &port->kobj, "%s",
+ name);
+- if (ret)
++ if (ret) {
++ kobject_put(kobj);
+ return ret;
++ }
+ }
+
+ return 0;
+diff --git a/drivers/infiniband/core/uverbs_cmd.c b/drivers/infiniband/core/uverbs_cmd.c
+index 060b4ebbd2ba..d6e9cc94dd90 100644
+--- a/drivers/infiniband/core/uverbs_cmd.c
++++ b/drivers/infiniband/core/uverbs_cmd.c
+@@ -2959,6 +2959,7 @@ static int ib_uverbs_ex_create_wq(struct uverbs_attr_bundle *attrs)
+ wq_init_attr.event_handler = ib_uverbs_wq_event_handler;
+ wq_init_attr.create_flags = cmd.create_flags;
+ INIT_LIST_HEAD(&obj->uevent.event_list);
++ obj->uevent.uobject.user_handle = cmd.user_handle;
+
+ wq = pd->device->ops.create_wq(pd, &wq_init_attr, &attrs->driver_udata);
+ if (IS_ERR(wq)) {
+@@ -2976,8 +2977,6 @@ static int ib_uverbs_ex_create_wq(struct uverbs_attr_bundle *attrs)
+ atomic_set(&wq->usecnt, 0);
+ atomic_inc(&pd->usecnt);
+ atomic_inc(&cq->usecnt);
+- wq->uobject = obj;
+- obj->uevent.uobject.object = wq;
+
+ memset(&resp, 0, sizeof(resp));
+ resp.wq_handle = obj->uevent.uobject.id;
+diff --git a/drivers/infiniband/hw/cxgb4/device.c b/drivers/infiniband/hw/cxgb4/device.c
+index 599340c1f0b8..541dbcf22d0e 100644
+--- a/drivers/infiniband/hw/cxgb4/device.c
++++ b/drivers/infiniband/hw/cxgb4/device.c
+@@ -953,6 +953,7 @@ void c4iw_dealloc(struct uld_ctx *ctx)
+ static void c4iw_remove(struct uld_ctx *ctx)
+ {
+ pr_debug("c4iw_dev %p\n", ctx->dev);
++ debugfs_remove_recursive(ctx->dev->debugfs_root);
+ c4iw_unregister_device(ctx->dev);
+ c4iw_dealloc(ctx);
+ }
+diff --git a/drivers/infiniband/hw/efa/efa_com_cmd.c b/drivers/infiniband/hw/efa/efa_com_cmd.c
+index eea5574a62e8..69f842c92ff6 100644
+--- a/drivers/infiniband/hw/efa/efa_com_cmd.c
++++ b/drivers/infiniband/hw/efa/efa_com_cmd.c
+@@ -388,7 +388,7 @@ static int efa_com_get_feature_ex(struct efa_com_dev *edev,
+
+ if (control_buff_size)
+ EFA_SET(&get_cmd.aq_common_descriptor.flags,
+- EFA_ADMIN_AQ_COMMON_DESC_CTRL_DATA_INDIRECT, 1);
++ EFA_ADMIN_AQ_COMMON_DESC_CTRL_DATA, 1);
+
+ efa_com_set_dma_addr(control_buf_dma_addr,
+ &get_cmd.control_buffer.address.mem_addr_high,
+@@ -540,7 +540,7 @@ static int efa_com_set_feature_ex(struct efa_com_dev *edev,
+ if (control_buff_size) {
+ set_cmd->aq_common_descriptor.flags = 0;
+ EFA_SET(&set_cmd->aq_common_descriptor.flags,
+- EFA_ADMIN_AQ_COMMON_DESC_CTRL_DATA_INDIRECT, 1);
++ EFA_ADMIN_AQ_COMMON_DESC_CTRL_DATA, 1);
+ efa_com_set_dma_addr(control_buf_dma_addr,
+ &set_cmd->control_buffer.address.mem_addr_high,
+ &set_cmd->control_buffer.address.mem_addr_low);
+diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+index c3316672b70e..f9fa80ae5560 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
++++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+@@ -1349,34 +1349,26 @@ static int hns_roce_query_pf_resource(struct hns_roce_dev *hr_dev)
+ static int hns_roce_query_pf_timer_resource(struct hns_roce_dev *hr_dev)
+ {
+ struct hns_roce_pf_timer_res_a *req_a;
+- struct hns_roce_cmq_desc desc[2];
+- int ret, i;
++ struct hns_roce_cmq_desc desc;
++ int ret;
+
+- for (i = 0; i < 2; i++) {
+- hns_roce_cmq_setup_basic_desc(&desc[i],
+- HNS_ROCE_OPC_QUERY_PF_TIMER_RES,
+- true);
++ hns_roce_cmq_setup_basic_desc(&desc, HNS_ROCE_OPC_QUERY_PF_TIMER_RES,
++ true);
+
+- if (i == 0)
+- desc[i].flag |= cpu_to_le16(HNS_ROCE_CMD_FLAG_NEXT);
+- else
+- desc[i].flag &= ~cpu_to_le16(HNS_ROCE_CMD_FLAG_NEXT);
+- }
+-
+- ret = hns_roce_cmq_send(hr_dev, desc, 2);
++ ret = hns_roce_cmq_send(hr_dev, &desc, 1);
+ if (ret)
+ return ret;
+
+- req_a = (struct hns_roce_pf_timer_res_a *)desc[0].data;
++ req_a = (struct hns_roce_pf_timer_res_a *)desc.data;
+
+ hr_dev->caps.qpc_timer_bt_num =
+- roce_get_field(req_a->qpc_timer_bt_idx_num,
+- PF_RES_DATA_1_PF_QPC_TIMER_BT_NUM_M,
+- PF_RES_DATA_1_PF_QPC_TIMER_BT_NUM_S);
++ roce_get_field(req_a->qpc_timer_bt_idx_num,
++ PF_RES_DATA_1_PF_QPC_TIMER_BT_NUM_M,
++ PF_RES_DATA_1_PF_QPC_TIMER_BT_NUM_S);
+ hr_dev->caps.cqc_timer_bt_num =
+- roce_get_field(req_a->cqc_timer_bt_idx_num,
+- PF_RES_DATA_2_PF_CQC_TIMER_BT_NUM_M,
+- PF_RES_DATA_2_PF_CQC_TIMER_BT_NUM_S);
++ roce_get_field(req_a->cqc_timer_bt_idx_num,
++ PF_RES_DATA_2_PF_CQC_TIMER_BT_NUM_M,
++ PF_RES_DATA_2_PF_CQC_TIMER_BT_NUM_S);
+
+ return 0;
+ }
+@@ -4639,7 +4631,7 @@ static int hns_roce_v2_query_qp(struct ib_qp *ibqp, struct ib_qp_attr *qp_attr,
+ qp_attr->path_mig_state = IB_MIG_ARMED;
+ qp_attr->ah_attr.type = RDMA_AH_ATTR_TYPE_ROCE;
+ if (hr_qp->ibqp.qp_type == IB_QPT_UD)
+- qp_attr->qkey = V2_QKEY_VAL;
++ qp_attr->qkey = le32_to_cpu(context.qkey_xrcd);
+
+ qp_attr->rq_psn = roce_get_field(context.byte_108_rx_reqepsn,
+ V2_QPC_BYTE_108_RX_REQ_EPSN_M,
+diff --git a/drivers/infiniband/hw/mlx5/devx.c b/drivers/infiniband/hw/mlx5/devx.c
+index 46e1ab771f10..ed10e2f32aab 100644
+--- a/drivers/infiniband/hw/mlx5/devx.c
++++ b/drivers/infiniband/hw/mlx5/devx.c
+@@ -494,6 +494,10 @@ static u64 devx_get_obj_id(const void *in)
+ obj_id = get_enc_obj_id(MLX5_CMD_OP_CREATE_QP,
+ MLX5_GET(rst2init_qp_in, in, qpn));
+ break;
++ case MLX5_CMD_OP_INIT2INIT_QP:
++ obj_id = get_enc_obj_id(MLX5_CMD_OP_CREATE_QP,
++ MLX5_GET(init2init_qp_in, in, qpn));
++ break;
+ case MLX5_CMD_OP_INIT2RTR_QP:
+ obj_id = get_enc_obj_id(MLX5_CMD_OP_CREATE_QP,
+ MLX5_GET(init2rtr_qp_in, in, qpn));
+@@ -819,6 +823,7 @@ static bool devx_is_obj_modify_cmd(const void *in)
+ case MLX5_CMD_OP_SET_L2_TABLE_ENTRY:
+ case MLX5_CMD_OP_RST2INIT_QP:
+ case MLX5_CMD_OP_INIT2RTR_QP:
++ case MLX5_CMD_OP_INIT2INIT_QP:
+ case MLX5_CMD_OP_RTR2RTS_QP:
+ case MLX5_CMD_OP_RTS2RTS_QP:
+ case MLX5_CMD_OP_SQERR2RTS_QP:
+diff --git a/drivers/infiniband/hw/mlx5/srq.c b/drivers/infiniband/hw/mlx5/srq.c
+index b1a8a9175040..6d1ff13d2283 100644
+--- a/drivers/infiniband/hw/mlx5/srq.c
++++ b/drivers/infiniband/hw/mlx5/srq.c
+@@ -310,12 +310,18 @@ int mlx5_ib_create_srq(struct ib_srq *ib_srq,
+ srq->msrq.event = mlx5_ib_srq_event;
+ srq->ibsrq.ext.xrc.srq_num = srq->msrq.srqn;
+
+- if (udata)
+- if (ib_copy_to_udata(udata, &srq->msrq.srqn, sizeof(__u32))) {
++ if (udata) {
++ struct mlx5_ib_create_srq_resp resp = {
++ .srqn = srq->msrq.srqn,
++ };
++
++ if (ib_copy_to_udata(udata, &resp, min(udata->outlen,
++ sizeof(resp)))) {
+ mlx5_ib_dbg(dev, "copy to user failed\n");
+ err = -EFAULT;
+ goto err_core;
+ }
++ }
+
+ init_attr->attr.max_wr = srq->msrq.max - 1;
+
+diff --git a/drivers/infiniband/ulp/srpt/ib_srpt.c b/drivers/infiniband/ulp/srpt/ib_srpt.c
+index 98552749d71c..fcf982c60db6 100644
+--- a/drivers/infiniband/ulp/srpt/ib_srpt.c
++++ b/drivers/infiniband/ulp/srpt/ib_srpt.c
+@@ -610,6 +610,11 @@ static int srpt_refresh_port(struct srpt_port *sport)
+ dev_name(&sport->sdev->device->dev), sport->port,
+ PTR_ERR(sport->mad_agent));
+ sport->mad_agent = NULL;
++ memset(&port_modify, 0, sizeof(port_modify));
++ port_modify.clr_port_cap_mask = IB_PORT_DEVICE_MGMT_SUP;
++ ib_modify_port(sport->sdev->device, sport->port, 0,
++ &port_modify);
++
+ }
+ }
+
+@@ -633,9 +638,8 @@ static void srpt_unregister_mad_agent(struct srpt_device *sdev)
+ for (i = 1; i <= sdev->device->phys_port_cnt; i++) {
+ sport = &sdev->port[i - 1];
+ WARN_ON(sport->port != i);
+- if (ib_modify_port(sdev->device, i, 0, &port_modify) < 0)
+- pr_err("disabling MAD processing failed.\n");
+ if (sport->mad_agent) {
++ ib_modify_port(sdev->device, i, 0, &port_modify);
+ ib_unregister_mad_agent(sport->mad_agent);
+ sport->mad_agent = NULL;
+ }
+diff --git a/drivers/input/serio/i8042-ppcio.h b/drivers/input/serio/i8042-ppcio.h
+deleted file mode 100644
+index 391f94d9e47d..000000000000
+--- a/drivers/input/serio/i8042-ppcio.h
++++ /dev/null
+@@ -1,57 +0,0 @@
+-/* SPDX-License-Identifier: GPL-2.0-only */
+-#ifndef _I8042_PPCIO_H
+-#define _I8042_PPCIO_H
+-
+-
+-#if defined(CONFIG_WALNUT)
+-
+-#define I8042_KBD_IRQ 25
+-#define I8042_AUX_IRQ 26
+-
+-#define I8042_KBD_PHYS_DESC "walnutps2/serio0"
+-#define I8042_AUX_PHYS_DESC "walnutps2/serio1"
+-#define I8042_MUX_PHYS_DESC "walnutps2/serio%d"
+-
+-extern void *kb_cs;
+-extern void *kb_data;
+-
+-#define I8042_COMMAND_REG (*(int *)kb_cs)
+-#define I8042_DATA_REG (*(int *)kb_data)
+-
+-static inline int i8042_read_data(void)
+-{
+- return readb(kb_data);
+-}
+-
+-static inline int i8042_read_status(void)
+-{
+- return readb(kb_cs);
+-}
+-
+-static inline void i8042_write_data(int val)
+-{
+- writeb(val, kb_data);
+-}
+-
+-static inline void i8042_write_command(int val)
+-{
+- writeb(val, kb_cs);
+-}
+-
+-static inline int i8042_platform_init(void)
+-{
+- i8042_reset = I8042_RESET_ALWAYS;
+- return 0;
+-}
+-
+-static inline void i8042_platform_exit(void)
+-{
+-}
+-
+-#else
+-
+-#include "i8042-io.h"
+-
+-#endif
+-
+-#endif /* _I8042_PPCIO_H */
+diff --git a/drivers/input/serio/i8042.h b/drivers/input/serio/i8042.h
+index 38dc27ad3c18..eb376700dfff 100644
+--- a/drivers/input/serio/i8042.h
++++ b/drivers/input/serio/i8042.h
+@@ -17,8 +17,6 @@
+ #include "i8042-ip22io.h"
+ #elif defined(CONFIG_SNI_RM)
+ #include "i8042-snirm.h"
+-#elif defined(CONFIG_PPC)
+-#include "i8042-ppcio.h"
+ #elif defined(CONFIG_SPARC)
+ #include "i8042-sparcio.h"
+ #elif defined(CONFIG_X86) || defined(CONFIG_IA64)
+diff --git a/drivers/input/touchscreen/edt-ft5x06.c b/drivers/input/touchscreen/edt-ft5x06.c
+index d2587724c52a..9b8450794a8a 100644
+--- a/drivers/input/touchscreen/edt-ft5x06.c
++++ b/drivers/input/touchscreen/edt-ft5x06.c
+@@ -938,19 +938,25 @@ static void edt_ft5x06_ts_get_defaults(struct device *dev,
+
+ error = device_property_read_u32(dev, "offset", &val);
+ if (!error) {
+- edt_ft5x06_register_write(tsdata, reg_addr->reg_offset, val);
++ if (reg_addr->reg_offset != NO_REGISTER)
++ edt_ft5x06_register_write(tsdata,
++ reg_addr->reg_offset, val);
+ tsdata->offset = val;
+ }
+
+ error = device_property_read_u32(dev, "offset-x", &val);
+ if (!error) {
+- edt_ft5x06_register_write(tsdata, reg_addr->reg_offset_x, val);
++ if (reg_addr->reg_offset_x != NO_REGISTER)
++ edt_ft5x06_register_write(tsdata,
++ reg_addr->reg_offset_x, val);
+ tsdata->offset_x = val;
+ }
+
+ error = device_property_read_u32(dev, "offset-y", &val);
+ if (!error) {
+- edt_ft5x06_register_write(tsdata, reg_addr->reg_offset_y, val);
++ if (reg_addr->reg_offset_y != NO_REGISTER)
++ edt_ft5x06_register_write(tsdata,
++ reg_addr->reg_offset_y, val);
+ tsdata->offset_y = val;
+ }
+ }
+diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c
+index 82508730feb7..af21d24a09e8 100644
+--- a/drivers/iommu/arm-smmu-v3.c
++++ b/drivers/iommu/arm-smmu-v3.c
+@@ -171,6 +171,8 @@
+ #define ARM_SMMU_PRIQ_IRQ_CFG1 0xd8
+ #define ARM_SMMU_PRIQ_IRQ_CFG2 0xdc
+
++#define ARM_SMMU_REG_SZ 0xe00
++
+ /* Common MSI config fields */
+ #define MSI_CFG0_ADDR_MASK GENMASK_ULL(51, 2)
+ #define MSI_CFG2_SH GENMASK(5, 4)
+@@ -628,6 +630,7 @@ struct arm_smmu_strtab_cfg {
+ struct arm_smmu_device {
+ struct device *dev;
+ void __iomem *base;
++ void __iomem *page1;
+
+ #define ARM_SMMU_FEAT_2_LVL_STRTAB (1 << 0)
+ #define ARM_SMMU_FEAT_2_LVL_CDTAB (1 << 1)
+@@ -733,9 +736,8 @@ static struct arm_smmu_option_prop arm_smmu_options[] = {
+ static inline void __iomem *arm_smmu_page1_fixup(unsigned long offset,
+ struct arm_smmu_device *smmu)
+ {
+- if ((offset > SZ_64K) &&
+- (smmu->options & ARM_SMMU_OPT_PAGE0_REGS_ONLY))
+- offset -= SZ_64K;
++ if (offset > SZ_64K)
++ return smmu->page1 + offset - SZ_64K;
+
+ return smmu->base + offset;
+ }
+@@ -4021,6 +4023,18 @@ err_reset_pci_ops: __maybe_unused;
+ return err;
+ }
+
++static void __iomem *arm_smmu_ioremap(struct device *dev, resource_size_t start,
++ resource_size_t size)
++{
++ struct resource res = {
++ .flags = IORESOURCE_MEM,
++ .start = start,
++ .end = start + size - 1,
++ };
++
++ return devm_ioremap_resource(dev, &res);
++}
++
+ static int arm_smmu_device_probe(struct platform_device *pdev)
+ {
+ int irq, ret;
+@@ -4056,10 +4070,23 @@ static int arm_smmu_device_probe(struct platform_device *pdev)
+ }
+ ioaddr = res->start;
+
+- smmu->base = devm_ioremap_resource(dev, res);
++ /*
++ * Don't map the IMPLEMENTATION DEFINED regions, since they may contain
++ * the PMCG registers which are reserved by the PMU driver.
++ */
++ smmu->base = arm_smmu_ioremap(dev, ioaddr, ARM_SMMU_REG_SZ);
+ if (IS_ERR(smmu->base))
+ return PTR_ERR(smmu->base);
+
++ if (arm_smmu_resource_size(smmu) > SZ_64K) {
++ smmu->page1 = arm_smmu_ioremap(dev, ioaddr + SZ_64K,
++ ARM_SMMU_REG_SZ);
++ if (IS_ERR(smmu->page1))
++ return PTR_ERR(smmu->page1);
++ } else {
++ smmu->page1 = smmu->base;
++ }
++
+ /* Interrupt lines */
+
+ irq = platform_get_irq_byname_optional(pdev, "combined");
+diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
+index 11ed871dd255..fde7aba49b74 100644
+--- a/drivers/iommu/intel-iommu.c
++++ b/drivers/iommu/intel-iommu.c
+@@ -2518,9 +2518,6 @@ struct dmar_domain *find_domain(struct device *dev)
+ if (unlikely(attach_deferred(dev) || iommu_dummy(dev)))
+ return NULL;
+
+- if (dev_is_pci(dev))
+- dev = &pci_real_dma_dev(to_pci_dev(dev))->dev;
+-
+ /* No lock here, assumes no domain exit in normal case */
+ info = dev->archdata.iommu;
+ if (likely(info))
+diff --git a/drivers/mailbox/imx-mailbox.c b/drivers/mailbox/imx-mailbox.c
+index 7906624a731c..478308fb82cc 100644
+--- a/drivers/mailbox/imx-mailbox.c
++++ b/drivers/mailbox/imx-mailbox.c
+@@ -66,6 +66,8 @@ struct imx_mu_priv {
+ struct clk *clk;
+ int irq;
+
++ u32 xcr;
++
+ bool side_b;
+ };
+
+@@ -374,7 +376,7 @@ static struct mbox_chan *imx_mu_scu_xlate(struct mbox_controller *mbox,
+ break;
+ default:
+ dev_err(mbox->dev, "Invalid chan type: %d\n", type);
+- return NULL;
++ return ERR_PTR(-EINVAL);
+ }
+
+ if (chan >= mbox->num_chans) {
+@@ -558,12 +560,45 @@ static const struct of_device_id imx_mu_dt_ids[] = {
+ };
+ MODULE_DEVICE_TABLE(of, imx_mu_dt_ids);
+
++static int imx_mu_suspend_noirq(struct device *dev)
++{
++ struct imx_mu_priv *priv = dev_get_drvdata(dev);
++
++ priv->xcr = imx_mu_read(priv, priv->dcfg->xCR);
++
++ return 0;
++}
++
++static int imx_mu_resume_noirq(struct device *dev)
++{
++ struct imx_mu_priv *priv = dev_get_drvdata(dev);
++
++ /*
++ * ONLY restore MU when context lost, the TIE could
++ * be set during noirq resume as there is MU data
++ * communication going on, and restore the saved
++ * value will overwrite the TIE and cause MU data
++ * send failed, may lead to system freeze. This issue
++ * is observed by testing freeze mode suspend.
++ */
++ if (!imx_mu_read(priv, priv->dcfg->xCR))
++ imx_mu_write(priv, priv->xcr, priv->dcfg->xCR);
++
++ return 0;
++}
++
++static const struct dev_pm_ops imx_mu_pm_ops = {
++ SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(imx_mu_suspend_noirq,
++ imx_mu_resume_noirq)
++};
++
+ static struct platform_driver imx_mu_driver = {
+ .probe = imx_mu_probe,
+ .remove = imx_mu_remove,
+ .driver = {
+ .name = "imx_mu",
+ .of_match_table = imx_mu_dt_ids,
++ .pm = &imx_mu_pm_ops,
+ },
+ };
+ module_platform_driver(imx_mu_driver);
+diff --git a/drivers/mailbox/zynqmp-ipi-mailbox.c b/drivers/mailbox/zynqmp-ipi-mailbox.c
+index 86887c9a349a..f9cc674ba9b7 100644
+--- a/drivers/mailbox/zynqmp-ipi-mailbox.c
++++ b/drivers/mailbox/zynqmp-ipi-mailbox.c
+@@ -504,10 +504,9 @@ static int zynqmp_ipi_mbox_probe(struct zynqmp_ipi_mbox *ipi_mbox,
+ mchan->req_buf_size = resource_size(&res);
+ mchan->req_buf = devm_ioremap(mdev, res.start,
+ mchan->req_buf_size);
+- if (IS_ERR(mchan->req_buf)) {
++ if (!mchan->req_buf) {
+ dev_err(mdev, "Unable to map IPI buffer I/O memory\n");
+- ret = PTR_ERR(mchan->req_buf);
+- return ret;
++ return -ENOMEM;
+ }
+ } else if (ret != -ENODEV) {
+ dev_err(mdev, "Unmatched resource %s, %d.\n", name, ret);
+@@ -520,10 +519,9 @@ static int zynqmp_ipi_mbox_probe(struct zynqmp_ipi_mbox *ipi_mbox,
+ mchan->resp_buf_size = resource_size(&res);
+ mchan->resp_buf = devm_ioremap(mdev, res.start,
+ mchan->resp_buf_size);
+- if (IS_ERR(mchan->resp_buf)) {
++ if (!mchan->resp_buf) {
+ dev_err(mdev, "Unable to map IPI buffer I/O memory\n");
+- ret = PTR_ERR(mchan->resp_buf);
+- return ret;
++ return -ENOMEM;
+ }
+ } else if (ret != -ENODEV) {
+ dev_err(mdev, "Unmatched resource %s.\n", name);
+@@ -543,10 +541,9 @@ static int zynqmp_ipi_mbox_probe(struct zynqmp_ipi_mbox *ipi_mbox,
+ mchan->req_buf_size = resource_size(&res);
+ mchan->req_buf = devm_ioremap(mdev, res.start,
+ mchan->req_buf_size);
+- if (IS_ERR(mchan->req_buf)) {
++ if (!mchan->req_buf) {
+ dev_err(mdev, "Unable to map IPI buffer I/O memory\n");
+- ret = PTR_ERR(mchan->req_buf);
+- return ret;
++ return -ENOMEM;
+ }
+ } else if (ret != -ENODEV) {
+ dev_err(mdev, "Unmatched resource %s.\n", name);
+@@ -559,10 +556,9 @@ static int zynqmp_ipi_mbox_probe(struct zynqmp_ipi_mbox *ipi_mbox,
+ mchan->resp_buf_size = resource_size(&res);
+ mchan->resp_buf = devm_ioremap(mdev, res.start,
+ mchan->resp_buf_size);
+- if (IS_ERR(mchan->resp_buf)) {
++ if (!mchan->resp_buf) {
+ dev_err(mdev, "Unable to map IPI buffer I/O memory\n");
+- ret = PTR_ERR(mchan->resp_buf);
+- return ret;
++ return -ENOMEM;
+ }
+ } else if (ret != -ENODEV) {
+ dev_err(mdev, "Unmatched resource %s.\n", name);
+diff --git a/drivers/md/bcache/btree.c b/drivers/md/bcache/btree.c
+index 72856e5f23a3..fd1f288fd801 100644
+--- a/drivers/md/bcache/btree.c
++++ b/drivers/md/bcache/btree.c
+@@ -1389,7 +1389,7 @@ static int btree_gc_coalesce(struct btree *b, struct btree_op *op,
+ if (__set_blocks(n1, n1->keys + n2->keys,
+ block_bytes(b->c)) >
+ btree_blocks(new_nodes[i]))
+- goto out_nocoalesce;
++ goto out_unlock_nocoalesce;
+
+ keys = n2->keys;
+ /* Take the key of the node we're getting rid of */
+@@ -1418,7 +1418,7 @@ static int btree_gc_coalesce(struct btree *b, struct btree_op *op,
+
+ if (__bch_keylist_realloc(&keylist,
+ bkey_u64s(&new_nodes[i]->key)))
+- goto out_nocoalesce;
++ goto out_unlock_nocoalesce;
+
+ bch_btree_node_write(new_nodes[i], &cl);
+ bch_keylist_add(&keylist, &new_nodes[i]->key);
+@@ -1464,6 +1464,10 @@ static int btree_gc_coalesce(struct btree *b, struct btree_op *op,
+ /* Invalidated our iterator */
+ return -EINTR;
+
++out_unlock_nocoalesce:
++ for (i = 0; i < nodes; i++)
++ mutex_unlock(&new_nodes[i]->write_lock);
++
+ out_nocoalesce:
+ closure_sync(&cl);
+
+diff --git a/drivers/md/dm-mpath.c b/drivers/md/dm-mpath.c
+index 3e500098132f..e0c800cf87a9 100644
+--- a/drivers/md/dm-mpath.c
++++ b/drivers/md/dm-mpath.c
+@@ -1918,7 +1918,7 @@ static int multipath_prepare_ioctl(struct dm_target *ti,
+ int r;
+
+ current_pgpath = READ_ONCE(m->current_pgpath);
+- if (!current_pgpath)
++ if (!current_pgpath || !test_bit(MPATHF_QUEUE_IO, &m->flags))
+ current_pgpath = choose_pgpath(m, 0);
+
+ if (current_pgpath) {
+diff --git a/drivers/md/dm-zoned-metadata.c b/drivers/md/dm-zoned-metadata.c
+index 369de15c4e80..61b7d7b7e5a6 100644
+--- a/drivers/md/dm-zoned-metadata.c
++++ b/drivers/md/dm-zoned-metadata.c
+@@ -1554,7 +1554,7 @@ static struct dm_zone *dmz_get_rnd_zone_for_reclaim(struct dmz_metadata *zmd)
+ return dzone;
+ }
+
+- return ERR_PTR(-EBUSY);
++ return NULL;
+ }
+
+ /*
+@@ -1574,7 +1574,7 @@ static struct dm_zone *dmz_get_seq_zone_for_reclaim(struct dmz_metadata *zmd)
+ return zone;
+ }
+
+- return ERR_PTR(-EBUSY);
++ return NULL;
+ }
+
+ /*
+diff --git a/drivers/md/dm-zoned-reclaim.c b/drivers/md/dm-zoned-reclaim.c
+index e7ace908a9b7..d50817320e8e 100644
+--- a/drivers/md/dm-zoned-reclaim.c
++++ b/drivers/md/dm-zoned-reclaim.c
+@@ -349,8 +349,8 @@ static int dmz_do_reclaim(struct dmz_reclaim *zrc)
+
+ /* Get a data zone */
+ dzone = dmz_get_zone_for_reclaim(zmd);
+- if (IS_ERR(dzone))
+- return PTR_ERR(dzone);
++ if (!dzone)
++ return -EBUSY;
+
+ start = jiffies;
+
+diff --git a/drivers/media/platform/s5p-mfc/s5p_mfc.c b/drivers/media/platform/s5p-mfc/s5p_mfc.c
+index 5c2a23b953a4..eba2b9f040df 100644
+--- a/drivers/media/platform/s5p-mfc/s5p_mfc.c
++++ b/drivers/media/platform/s5p-mfc/s5p_mfc.c
+@@ -1089,6 +1089,10 @@ static struct device *s5p_mfc_alloc_memdev(struct device *dev,
+ child->coherent_dma_mask = dev->coherent_dma_mask;
+ child->dma_mask = dev->dma_mask;
+ child->release = s5p_mfc_memdev_release;
++ child->dma_parms = devm_kzalloc(dev, sizeof(*child->dma_parms),
++ GFP_KERNEL);
++ if (!child->dma_parms)
++ goto err;
+
+ /*
+ * The memdevs are not proper OF platform devices, so in order for them
+@@ -1104,7 +1108,7 @@ static struct device *s5p_mfc_alloc_memdev(struct device *dev,
+ return child;
+ device_del(child);
+ }
+-
++err:
+ put_device(child);
+ return NULL;
+ }
+diff --git a/drivers/media/v4l2-core/v4l2-ctrls.c b/drivers/media/v4l2-core/v4l2-ctrls.c
+index 452edd06d67d..99fd377f9b81 100644
+--- a/drivers/media/v4l2-core/v4l2-ctrls.c
++++ b/drivers/media/v4l2-core/v4l2-ctrls.c
+@@ -1825,7 +1825,7 @@ static int std_validate_compound(const struct v4l2_ctrl *ctrl, u32 idx,
+ sizeof(p_hevc_pps->row_height_minus1));
+
+ p_hevc_pps->flags &=
+- ~V4L2_HEVC_PPS_FLAG_PPS_LOOP_FILTER_ACROSS_SLICES_ENABLED;
++ ~V4L2_HEVC_PPS_FLAG_LOOP_FILTER_ACROSS_TILES_ENABLED;
+ }
+
+ if (p_hevc_pps->flags &
+diff --git a/drivers/mfd/stmfx.c b/drivers/mfd/stmfx.c
+index 857991cb3cbb..711979afd90a 100644
+--- a/drivers/mfd/stmfx.c
++++ b/drivers/mfd/stmfx.c
+@@ -287,14 +287,21 @@ static int stmfx_irq_init(struct i2c_client *client)
+
+ ret = regmap_write(stmfx->map, STMFX_REG_IRQ_OUT_PIN, irqoutpin);
+ if (ret)
+- return ret;
++ goto irq_exit;
+
+ ret = devm_request_threaded_irq(stmfx->dev, client->irq,
+ NULL, stmfx_irq_handler,
+ irqtrigger | IRQF_ONESHOT,
+ "stmfx", stmfx);
+ if (ret)
+- stmfx_irq_exit(client);
++ goto irq_exit;
++
++ stmfx->irq = client->irq;
++
++ return 0;
++
++irq_exit:
++ stmfx_irq_exit(client);
+
+ return ret;
+ }
+@@ -481,6 +488,8 @@ static int stmfx_suspend(struct device *dev)
+ if (ret)
+ return ret;
+
++ disable_irq(stmfx->irq);
++
+ if (stmfx->vdd)
+ return regulator_disable(stmfx->vdd);
+
+@@ -501,6 +510,13 @@ static int stmfx_resume(struct device *dev)
+ }
+ }
+
++ /* Reset STMFX - supply has been stopped during suspend */
++ ret = stmfx_chip_reset(stmfx);
++ if (ret) {
++ dev_err(stmfx->dev, "Failed to reset chip: %d\n", ret);
++ return ret;
++ }
++
+ ret = regmap_raw_write(stmfx->map, STMFX_REG_SYS_CTRL,
+ &stmfx->bkp_sysctrl, sizeof(stmfx->bkp_sysctrl));
+ if (ret)
+@@ -517,6 +533,8 @@ static int stmfx_resume(struct device *dev)
+ if (ret)
+ return ret;
+
++ enable_irq(stmfx->irq);
++
+ return 0;
+ }
+ #endif
+diff --git a/drivers/mfd/wcd934x.c b/drivers/mfd/wcd934x.c
+index 90341f3c6810..da910302d51a 100644
+--- a/drivers/mfd/wcd934x.c
++++ b/drivers/mfd/wcd934x.c
+@@ -280,7 +280,6 @@ static void wcd934x_slim_remove(struct slim_device *sdev)
+
+ regulator_bulk_disable(WCD934X_MAX_SUPPLY, ddata->supplies);
+ mfd_remove_devices(&sdev->dev);
+- kfree(ddata);
+ }
+
+ static const struct slim_device_id wcd934x_slim_id[] = {
+diff --git a/drivers/mfd/wm8994-core.c b/drivers/mfd/wm8994-core.c
+index 1e9fe7d92597..737dede4a95c 100644
+--- a/drivers/mfd/wm8994-core.c
++++ b/drivers/mfd/wm8994-core.c
+@@ -690,3 +690,4 @@ module_i2c_driver(wm8994_i2c_driver);
+ MODULE_DESCRIPTION("Core support for the WM8994 audio CODEC");
+ MODULE_LICENSE("GPL");
+ MODULE_AUTHOR("Mark Brown <broonie@opensource.wolfsonmicro.com>");
++MODULE_SOFTDEP("pre: wm8994_regulator");
+diff --git a/drivers/misc/fastrpc.c b/drivers/misc/fastrpc.c
+index e3e085e33d46..7939c55daceb 100644
+--- a/drivers/misc/fastrpc.c
++++ b/drivers/misc/fastrpc.c
+@@ -904,6 +904,7 @@ static int fastrpc_invoke_send(struct fastrpc_session_ctx *sctx,
+ struct fastrpc_channel_ctx *cctx;
+ struct fastrpc_user *fl = ctx->fl;
+ struct fastrpc_msg *msg = &ctx->msg;
++ int ret;
+
+ cctx = fl->cctx;
+ msg->pid = fl->tgid;
+@@ -919,7 +920,13 @@ static int fastrpc_invoke_send(struct fastrpc_session_ctx *sctx,
+ msg->size = roundup(ctx->msg_sz, PAGE_SIZE);
+ fastrpc_context_get(ctx);
+
+- return rpmsg_send(cctx->rpdev->ept, (void *)msg, sizeof(*msg));
++ ret = rpmsg_send(cctx->rpdev->ept, (void *)msg, sizeof(*msg));
++
++ if (ret)
++ fastrpc_context_put(ctx);
++
++ return ret;
++
+ }
+
+ static int fastrpc_internal_invoke(struct fastrpc_user *fl, u32 kernel,
+@@ -1613,8 +1620,10 @@ static int fastrpc_rpmsg_probe(struct rpmsg_device *rpdev)
+ domains[domain_id]);
+ data->miscdev.fops = &fastrpc_fops;
+ err = misc_register(&data->miscdev);
+- if (err)
++ if (err) {
++ kfree(data);
+ return err;
++ }
+
+ kref_init(&data->refcount);
+
+diff --git a/drivers/misc/habanalabs/device.c b/drivers/misc/habanalabs/device.c
+index aef4de36b7aa..6d9c298e02c7 100644
+--- a/drivers/misc/habanalabs/device.c
++++ b/drivers/misc/habanalabs/device.c
+@@ -718,7 +718,7 @@ disable_device:
+ return rc;
+ }
+
+-static void device_kill_open_processes(struct hl_device *hdev)
++static int device_kill_open_processes(struct hl_device *hdev)
+ {
+ u16 pending_total, pending_cnt;
+ struct hl_fpriv *hpriv;
+@@ -771,9 +771,7 @@ static void device_kill_open_processes(struct hl_device *hdev)
+ ssleep(1);
+ }
+
+- if (!list_empty(&hdev->fpriv_list))
+- dev_crit(hdev->dev,
+- "Going to hard reset with open user contexts\n");
++ return list_empty(&hdev->fpriv_list) ? 0 : -EBUSY;
+ }
+
+ static void device_hard_reset_pending(struct work_struct *work)
+@@ -894,7 +892,12 @@ again:
+ * process can't really exit until all its CSs are done, which
+ * is what we do in cs rollback
+ */
+- device_kill_open_processes(hdev);
++ rc = device_kill_open_processes(hdev);
++ if (rc) {
++ dev_crit(hdev->dev,
++ "Failed to kill all open processes, stopping hard reset\n");
++ goto out_err;
++ }
+
+ /* Flush the Event queue workers to make sure no other thread is
+ * reading or writing to registers during the reset
+@@ -1375,7 +1378,9 @@ void hl_device_fini(struct hl_device *hdev)
+ * can't really exit until all its CSs are done, which is what we
+ * do in cs rollback
+ */
+- device_kill_open_processes(hdev);
++ rc = device_kill_open_processes(hdev);
++ if (rc)
++ dev_crit(hdev->dev, "Failed to kill all open processes\n");
+
+ hl_cb_pool_fini(hdev);
+
+diff --git a/drivers/misc/habanalabs/habanalabs.h b/drivers/misc/habanalabs/habanalabs.h
+index 31ebcf9458fe..a6dd8e6ca594 100644
+--- a/drivers/misc/habanalabs/habanalabs.h
++++ b/drivers/misc/habanalabs/habanalabs.h
+@@ -23,7 +23,7 @@
+
+ #define HL_MMAP_CB_MASK (0x8000000000000000ull >> PAGE_SHIFT)
+
+-#define HL_PENDING_RESET_PER_SEC 5
++#define HL_PENDING_RESET_PER_SEC 30
+
+ #define HL_DEVICE_TIMEOUT_USEC 1000000 /* 1 s */
+
+diff --git a/drivers/misc/xilinx_sdfec.c b/drivers/misc/xilinx_sdfec.c
+index 71bbaa56bdb5..e2766aad9e14 100644
+--- a/drivers/misc/xilinx_sdfec.c
++++ b/drivers/misc/xilinx_sdfec.c
+@@ -602,10 +602,10 @@ static int xsdfec_table_write(struct xsdfec_dev *xsdfec, u32 offset,
+ const u32 depth)
+ {
+ u32 reg = 0;
+- u32 res;
+- u32 n, i;
++ int res, i, nr_pages;
++ u32 n;
+ u32 *addr = NULL;
+- struct page *page[MAX_NUM_PAGES];
++ struct page *pages[MAX_NUM_PAGES];
+
+ /*
+ * Writes that go beyond the length of
+@@ -622,15 +622,22 @@ static int xsdfec_table_write(struct xsdfec_dev *xsdfec, u32 offset,
+ if ((len * XSDFEC_REG_WIDTH_JUMP) % PAGE_SIZE)
+ n += 1;
+
+- res = get_user_pages_fast((unsigned long)src_ptr, n, 0, page);
+- if (res < n) {
+- for (i = 0; i < res; i++)
+- put_page(page[i]);
++ if (WARN_ON_ONCE(n > INT_MAX))
++ return -EINVAL;
++
++ nr_pages = n;
++
++ res = get_user_pages_fast((unsigned long)src_ptr, nr_pages, 0, pages);
++ if (res < nr_pages) {
++ if (res > 0) {
++ for (i = 0; i < res; i++)
++ put_page(pages[i]);
++ }
+ return -EINVAL;
+ }
+
+- for (i = 0; i < n; i++) {
+- addr = kmap(page[i]);
++ for (i = 0; i < nr_pages; i++) {
++ addr = kmap(pages[i]);
+ do {
+ xsdfec_regwrite(xsdfec,
+ base_addr + ((offset + reg) *
+@@ -639,7 +646,7 @@ static int xsdfec_table_write(struct xsdfec_dev *xsdfec, u32 offset,
+ reg++;
+ } while ((reg < len) &&
+ ((reg * XSDFEC_REG_WIDTH_JUMP) % PAGE_SIZE));
+- put_page(page[i]);
++ put_page(pages[i]);
+ }
+ return reg;
+ }
+diff --git a/drivers/net/bareudp.c b/drivers/net/bareudp.c
+index efd1a1d1f35e..5d3c691a1c66 100644
+--- a/drivers/net/bareudp.c
++++ b/drivers/net/bareudp.c
+@@ -552,6 +552,8 @@ static int bareudp_validate(struct nlattr *tb[], struct nlattr *data[],
+ static int bareudp2info(struct nlattr *data[], struct bareudp_conf *conf,
+ struct netlink_ext_ack *extack)
+ {
++ memset(conf, 0, sizeof(*conf));
++
+ if (!data[IFLA_BAREUDP_PORT]) {
+ NL_SET_ERR_MSG(extack, "port not specified");
+ return -EINVAL;
+diff --git a/drivers/net/dsa/lantiq_gswip.c b/drivers/net/dsa/lantiq_gswip.c
+index cf6fa8fede33..521ebc072903 100644
+--- a/drivers/net/dsa/lantiq_gswip.c
++++ b/drivers/net/dsa/lantiq_gswip.c
+@@ -1452,7 +1452,8 @@ static void gswip_phylink_validate(struct dsa_switch *ds, int port,
+
+ unsupported:
+ bitmap_zero(supported, __ETHTOOL_LINK_MODE_MASK_NBITS);
+- dev_err(ds->dev, "Unsupported interface: %d\n", state->interface);
++ dev_err(ds->dev, "Unsupported interface '%s' for port %d\n",
++ phy_modes(state->interface), port);
+ return;
+ }
+
+diff --git a/drivers/net/dsa/sja1105/sja1105_ptp.c b/drivers/net/dsa/sja1105/sja1105_ptp.c
+index bc0e47c1dbb9..177134596458 100644
+--- a/drivers/net/dsa/sja1105/sja1105_ptp.c
++++ b/drivers/net/dsa/sja1105/sja1105_ptp.c
+@@ -891,16 +891,16 @@ void sja1105_ptp_txtstamp_skb(struct dsa_switch *ds, int port,
+
+ mutex_lock(&ptp_data->lock);
+
+- rc = sja1105_ptpclkval_read(priv, &ticks, NULL);
++ rc = sja1105_ptpegr_ts_poll(ds, port, &ts);
+ if (rc < 0) {
+- dev_err(ds->dev, "Failed to read PTP clock: %d\n", rc);
++ dev_err(ds->dev, "timed out polling for tstamp\n");
+ kfree_skb(skb);
+ goto out;
+ }
+
+- rc = sja1105_ptpegr_ts_poll(ds, port, &ts);
++ rc = sja1105_ptpclkval_read(priv, &ticks, NULL);
+ if (rc < 0) {
+- dev_err(ds->dev, "timed out polling for tstamp\n");
++ dev_err(ds->dev, "Failed to read PTP clock: %d\n", rc);
+ kfree_skb(skb);
+ goto out;
+ }
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+index 58e0d9a781e9..19c4a0a5727a 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+@@ -10014,7 +10014,7 @@ static void bnxt_timer(struct timer_list *t)
+ struct bnxt *bp = from_timer(bp, t, timer);
+ struct net_device *dev = bp->dev;
+
+- if (!netif_running(dev))
++ if (!netif_running(dev) || !test_bit(BNXT_STATE_OPEN, &bp->state))
+ return;
+
+ if (atomic_read(&bp->intr_sem) != 0)
+@@ -12097,19 +12097,9 @@ static int bnxt_resume(struct device *device)
+ goto resume_exit;
+ }
+
+- if (bnxt_hwrm_queue_qportcfg(bp)) {
+- rc = -ENODEV;
++ rc = bnxt_hwrm_func_qcaps(bp);
++ if (rc)
+ goto resume_exit;
+- }
+-
+- if (bp->hwrm_spec_code >= 0x10803) {
+- if (bnxt_alloc_ctx_mem(bp)) {
+- rc = -ENODEV;
+- goto resume_exit;
+- }
+- }
+- if (BNXT_NEW_RM(bp))
+- bnxt_hwrm_func_resc_qcaps(bp, false);
+
+ if (bnxt_hwrm_func_drv_rgtr(bp, NULL, 0, false)) {
+ rc = -ENODEV;
+@@ -12125,6 +12115,8 @@ static int bnxt_resume(struct device *device)
+
+ resume_exit:
+ bnxt_ulp_start(bp, rc);
++ if (!rc)
++ bnxt_reenable_sriov(bp);
+ rtnl_unlock();
+ return rc;
+ }
+@@ -12168,6 +12160,9 @@ static pci_ers_result_t bnxt_io_error_detected(struct pci_dev *pdev,
+ bnxt_close(netdev);
+
+ pci_disable_device(pdev);
++ bnxt_free_ctx_mem(bp);
++ kfree(bp->ctx);
++ bp->ctx = NULL;
+ rtnl_unlock();
+
+ /* Request a slot slot reset. */
+@@ -12201,12 +12196,16 @@ static pci_ers_result_t bnxt_io_slot_reset(struct pci_dev *pdev)
+ pci_set_master(pdev);
+
+ err = bnxt_hwrm_func_reset(bp);
+- if (!err && netif_running(netdev))
+- err = bnxt_open(netdev);
+-
+- if (!err)
+- result = PCI_ERS_RESULT_RECOVERED;
++ if (!err) {
++ err = bnxt_hwrm_func_qcaps(bp);
++ if (!err && netif_running(netdev))
++ err = bnxt_open(netdev);
++ }
+ bnxt_ulp_start(bp, err);
++ if (!err) {
++ bnxt_reenable_sriov(bp);
++ result = PCI_ERS_RESULT_RECOVERED;
++ }
+ }
+
+ if (result != PCI_ERS_RESULT_RECOVERED) {
+diff --git a/drivers/net/ethernet/cavium/octeon/octeon_mgmt.c b/drivers/net/ethernet/cavium/octeon/octeon_mgmt.c
+index 9d868403d86c..cbaa1924afbe 100644
+--- a/drivers/net/ethernet/cavium/octeon/octeon_mgmt.c
++++ b/drivers/net/ethernet/cavium/octeon/octeon_mgmt.c
+@@ -234,6 +234,11 @@ static void octeon_mgmt_rx_fill_ring(struct net_device *netdev)
+
+ /* Put it in the ring. */
+ p->rx_ring[p->rx_next_fill] = re.d64;
++ /* Make sure there is no reorder of filling the ring and ringing
++ * the bell
++ */
++ wmb();
++
+ dma_sync_single_for_device(p->dev, p->rx_ring_handle,
+ ring_size_to_bytes(OCTEON_MGMT_RX_RING_SIZE),
+ DMA_BIDIRECTIONAL);
+diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c
+index 197dc5b2c090..1b4d04e4474b 100644
+--- a/drivers/net/ethernet/ibm/ibmvnic.c
++++ b/drivers/net/ethernet/ibm/ibmvnic.c
+@@ -5184,6 +5184,9 @@ static int ibmvnic_remove(struct vio_dev *dev)
+ adapter->state = VNIC_REMOVING;
+ spin_unlock_irqrestore(&adapter->state_lock, flags);
+
++ flush_work(&adapter->ibmvnic_reset);
++ flush_delayed_work(&adapter->ibmvnic_delayed_reset);
++
+ rtnl_lock();
+ unregister_netdevice(netdev);
+
+diff --git a/drivers/net/ethernet/intel/e1000e/netdev.c b/drivers/net/ethernet/intel/e1000e/netdev.c
+index df3d50e759de..5e388d4a97a1 100644
+--- a/drivers/net/ethernet/intel/e1000e/netdev.c
++++ b/drivers/net/ethernet/intel/e1000e/netdev.c
+@@ -6518,11 +6518,17 @@ static int __e1000_shutdown(struct pci_dev *pdev, bool runtime)
+ struct net_device *netdev = pci_get_drvdata(pdev);
+ struct e1000_adapter *adapter = netdev_priv(netdev);
+ struct e1000_hw *hw = &adapter->hw;
+- u32 ctrl, ctrl_ext, rctl, status;
+- /* Runtime suspend should only enable wakeup for link changes */
+- u32 wufc = runtime ? E1000_WUFC_LNKC : adapter->wol;
++ u32 ctrl, ctrl_ext, rctl, status, wufc;
+ int retval = 0;
+
++ /* Runtime suspend should only enable wakeup for link changes */
++ if (runtime)
++ wufc = E1000_WUFC_LNKC;
++ else if (device_may_wakeup(&pdev->dev))
++ wufc = adapter->wol;
++ else
++ wufc = 0;
++
+ status = er32(STATUS);
+ if (status & E1000_STATUS_LU)
+ wufc &= ~E1000_WUFC_LNKC;
+@@ -6579,7 +6585,7 @@ static int __e1000_shutdown(struct pci_dev *pdev, bool runtime)
+ if (adapter->hw.phy.type == e1000_phy_igp_3) {
+ e1000e_igp3_phy_powerdown_workaround_ich8lan(&adapter->hw);
+ } else if (hw->mac.type >= e1000_pch_lpt) {
+- if (!(wufc & (E1000_WUFC_EX | E1000_WUFC_MC | E1000_WUFC_BC)))
++ if (wufc && !(wufc & (E1000_WUFC_EX | E1000_WUFC_MC | E1000_WUFC_BC)))
+ /* ULP does not support wake from unicast, multicast
+ * or broadcast.
+ */
+diff --git a/drivers/net/ethernet/intel/iavf/iavf.h b/drivers/net/ethernet/intel/iavf/iavf.h
+index bcd11b4b29df..2d4ce6fdba1a 100644
+--- a/drivers/net/ethernet/intel/iavf/iavf.h
++++ b/drivers/net/ethernet/intel/iavf/iavf.h
+@@ -87,6 +87,10 @@ struct iavf_vsi {
+ #define IAVF_HLUT_ARRAY_SIZE ((IAVF_VFQF_HLUT_MAX_INDEX + 1) * 4)
+ #define IAVF_MBPS_DIVISOR 125000 /* divisor to convert to Mbps */
+
++#define IAVF_VIRTCHNL_VF_RESOURCE_SIZE (sizeof(struct virtchnl_vf_resource) + \
++ (IAVF_MAX_VF_VSI * \
++ sizeof(struct virtchnl_vsi_resource)))
++
+ /* MAX_MSIX_Q_VECTORS of these are allocated,
+ * but we only use one per queue-specific vector.
+ */
+@@ -306,6 +310,14 @@ struct iavf_adapter {
+ bool netdev_registered;
+ bool link_up;
+ enum virtchnl_link_speed link_speed;
++ /* This is only populated if the VIRTCHNL_VF_CAP_ADV_LINK_SPEED is set
++ * in vf_res->vf_cap_flags. Use ADV_LINK_SUPPORT macro to determine if
++ * this field is valid. This field should be used going forward and the
++ * enum virtchnl_link_speed above should be considered the legacy way of
++ * storing/communicating link speeds.
++ */
++ u32 link_speed_mbps;
++
+ enum virtchnl_ops current_op;
+ #define CLIENT_ALLOWED(_a) ((_a)->vf_res ? \
+ (_a)->vf_res->vf_cap_flags & \
+@@ -322,6 +334,8 @@ struct iavf_adapter {
+ VIRTCHNL_VF_OFFLOAD_RSS_PF)))
+ #define VLAN_ALLOWED(_a) ((_a)->vf_res->vf_cap_flags & \
+ VIRTCHNL_VF_OFFLOAD_VLAN)
++#define ADV_LINK_SUPPORT(_a) ((_a)->vf_res->vf_cap_flags & \
++ VIRTCHNL_VF_CAP_ADV_LINK_SPEED)
+ struct virtchnl_vf_resource *vf_res; /* incl. all VSIs */
+ struct virtchnl_vsi_resource *vsi_res; /* our LAN VSI */
+ struct virtchnl_version_info pf_version;
+diff --git a/drivers/net/ethernet/intel/iavf/iavf_ethtool.c b/drivers/net/ethernet/intel/iavf/iavf_ethtool.c
+index 2c39d46b6138..40a3fc7c5ea5 100644
+--- a/drivers/net/ethernet/intel/iavf/iavf_ethtool.c
++++ b/drivers/net/ethernet/intel/iavf/iavf_ethtool.c
+@@ -278,7 +278,18 @@ static int iavf_get_link_ksettings(struct net_device *netdev,
+ ethtool_link_ksettings_zero_link_mode(cmd, supported);
+ cmd->base.autoneg = AUTONEG_DISABLE;
+ cmd->base.port = PORT_NONE;
+- /* Set speed and duplex */
++ cmd->base.duplex = DUPLEX_FULL;
++
++ if (ADV_LINK_SUPPORT(adapter)) {
++ if (adapter->link_speed_mbps &&
++ adapter->link_speed_mbps < U32_MAX)
++ cmd->base.speed = adapter->link_speed_mbps;
++ else
++ cmd->base.speed = SPEED_UNKNOWN;
++
++ return 0;
++ }
++
+ switch (adapter->link_speed) {
+ case IAVF_LINK_SPEED_40GB:
+ cmd->base.speed = SPEED_40000;
+@@ -306,7 +317,6 @@ static int iavf_get_link_ksettings(struct net_device *netdev,
+ default:
+ break;
+ }
+- cmd->base.duplex = DUPLEX_FULL;
+
+ return 0;
+ }
+diff --git a/drivers/net/ethernet/intel/iavf/iavf_main.c b/drivers/net/ethernet/intel/iavf/iavf_main.c
+index 2050649848ba..a21ae74bcd1b 100644
+--- a/drivers/net/ethernet/intel/iavf/iavf_main.c
++++ b/drivers/net/ethernet/intel/iavf/iavf_main.c
+@@ -1756,17 +1756,17 @@ static int iavf_init_get_resources(struct iavf_adapter *adapter)
+ struct net_device *netdev = adapter->netdev;
+ struct pci_dev *pdev = adapter->pdev;
+ struct iavf_hw *hw = &adapter->hw;
+- int err = 0, bufsz;
++ int err;
+
+ WARN_ON(adapter->state != __IAVF_INIT_GET_RESOURCES);
+ /* aq msg sent, awaiting reply */
+ if (!adapter->vf_res) {
+- bufsz = sizeof(struct virtchnl_vf_resource) +
+- (IAVF_MAX_VF_VSI *
+- sizeof(struct virtchnl_vsi_resource));
+- adapter->vf_res = kzalloc(bufsz, GFP_KERNEL);
+- if (!adapter->vf_res)
++ adapter->vf_res = kzalloc(IAVF_VIRTCHNL_VF_RESOURCE_SIZE,
++ GFP_KERNEL);
++ if (!adapter->vf_res) {
++ err = -ENOMEM;
+ goto err;
++ }
+ }
+ err = iavf_get_vf_config(adapter);
+ if (err == IAVF_ERR_ADMIN_QUEUE_NO_WORK) {
+@@ -2036,7 +2036,7 @@ static void iavf_disable_vf(struct iavf_adapter *adapter)
+ iavf_reset_interrupt_capability(adapter);
+ iavf_free_queues(adapter);
+ iavf_free_q_vectors(adapter);
+- kfree(adapter->vf_res);
++ memset(adapter->vf_res, 0, IAVF_VIRTCHNL_VF_RESOURCE_SIZE);
+ iavf_shutdown_adminq(&adapter->hw);
+ adapter->netdev->flags &= ~IFF_UP;
+ clear_bit(__IAVF_IN_CRITICAL_TASK, &adapter->crit_section);
+@@ -2487,6 +2487,16 @@ static int iavf_validate_tx_bandwidth(struct iavf_adapter *adapter,
+ {
+ int speed = 0, ret = 0;
+
++ if (ADV_LINK_SUPPORT(adapter)) {
++ if (adapter->link_speed_mbps < U32_MAX) {
++ speed = adapter->link_speed_mbps;
++ goto validate_bw;
++ } else {
++ dev_err(&adapter->pdev->dev, "Unknown link speed\n");
++ return -EINVAL;
++ }
++ }
++
+ switch (adapter->link_speed) {
+ case IAVF_LINK_SPEED_40GB:
+ speed = 40000;
+@@ -2510,6 +2520,7 @@ static int iavf_validate_tx_bandwidth(struct iavf_adapter *adapter,
+ break;
+ }
+
++validate_bw:
+ if (max_tx_rate > speed) {
+ dev_err(&adapter->pdev->dev,
+ "Invalid tx rate specified\n");
+diff --git a/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c b/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c
+index d58374c2c33d..ca79bec4ebd9 100644
+--- a/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c
++++ b/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c
+@@ -139,7 +139,8 @@ int iavf_send_vf_config_msg(struct iavf_adapter *adapter)
+ VIRTCHNL_VF_OFFLOAD_ENCAP |
+ VIRTCHNL_VF_OFFLOAD_ENCAP_CSUM |
+ VIRTCHNL_VF_OFFLOAD_REQ_QUEUES |
+- VIRTCHNL_VF_OFFLOAD_ADQ;
++ VIRTCHNL_VF_OFFLOAD_ADQ |
++ VIRTCHNL_VF_CAP_ADV_LINK_SPEED;
+
+ adapter->current_op = VIRTCHNL_OP_GET_VF_RESOURCES;
+ adapter->aq_required &= ~IAVF_FLAG_AQ_GET_CONFIG;
+@@ -891,6 +892,8 @@ void iavf_disable_vlan_stripping(struct iavf_adapter *adapter)
+ iavf_send_pf_msg(adapter, VIRTCHNL_OP_DISABLE_VLAN_STRIPPING, NULL, 0);
+ }
+
++#define IAVF_MAX_SPEED_STRLEN 13
++
+ /**
+ * iavf_print_link_message - print link up or down
+ * @adapter: adapter structure
+@@ -900,37 +903,99 @@ void iavf_disable_vlan_stripping(struct iavf_adapter *adapter)
+ static void iavf_print_link_message(struct iavf_adapter *adapter)
+ {
+ struct net_device *netdev = adapter->netdev;
+- char *speed = "Unknown ";
++ int link_speed_mbps;
++ char *speed;
+
+ if (!adapter->link_up) {
+ netdev_info(netdev, "NIC Link is Down\n");
+ return;
+ }
+
++ speed = kcalloc(1, IAVF_MAX_SPEED_STRLEN, GFP_KERNEL);
++ if (!speed)
++ return;
++
++ if (ADV_LINK_SUPPORT(adapter)) {
++ link_speed_mbps = adapter->link_speed_mbps;
++ goto print_link_msg;
++ }
++
+ switch (adapter->link_speed) {
+ case IAVF_LINK_SPEED_40GB:
+- speed = "40 G";
++ link_speed_mbps = SPEED_40000;
+ break;
+ case IAVF_LINK_SPEED_25GB:
+- speed = "25 G";
++ link_speed_mbps = SPEED_25000;
+ break;
+ case IAVF_LINK_SPEED_20GB:
+- speed = "20 G";
++ link_speed_mbps = SPEED_20000;
+ break;
+ case IAVF_LINK_SPEED_10GB:
+- speed = "10 G";
++ link_speed_mbps = SPEED_10000;
+ break;
+ case IAVF_LINK_SPEED_1GB:
+- speed = "1000 M";
++ link_speed_mbps = SPEED_1000;
+ break;
+ case IAVF_LINK_SPEED_100MB:
+- speed = "100 M";
++ link_speed_mbps = SPEED_100;
+ break;
+ default:
++ link_speed_mbps = SPEED_UNKNOWN;
+ break;
+ }
+
+- netdev_info(netdev, "NIC Link is Up %sbps Full Duplex\n", speed);
++print_link_msg:
++ if (link_speed_mbps > SPEED_1000) {
++ if (link_speed_mbps == SPEED_2500)
++ snprintf(speed, IAVF_MAX_SPEED_STRLEN, "2.5 Gbps");
++ else
++ /* convert to Gbps inline */
++ snprintf(speed, IAVF_MAX_SPEED_STRLEN, "%d %s",
++ link_speed_mbps / 1000, "Gbps");
++ } else if (link_speed_mbps == SPEED_UNKNOWN) {
++ snprintf(speed, IAVF_MAX_SPEED_STRLEN, "%s", "Unknown Mbps");
++ } else {
++ snprintf(speed, IAVF_MAX_SPEED_STRLEN, "%u %s",
++ link_speed_mbps, "Mbps");
++ }
++
++ netdev_info(netdev, "NIC Link is Up Speed is %s Full Duplex\n", speed);
++ kfree(speed);
++}
++
++/**
++ * iavf_get_vpe_link_status
++ * @adapter: adapter structure
++ * @vpe: virtchnl_pf_event structure
++ *
++ * Helper function for determining the link status
++ **/
++static bool
++iavf_get_vpe_link_status(struct iavf_adapter *adapter,
++ struct virtchnl_pf_event *vpe)
++{
++ if (ADV_LINK_SUPPORT(adapter))
++ return vpe->event_data.link_event_adv.link_status;
++ else
++ return vpe->event_data.link_event.link_status;
++}
++
++/**
++ * iavf_set_adapter_link_speed_from_vpe
++ * @adapter: adapter structure for which we are setting the link speed
++ * @vpe: virtchnl_pf_event structure that contains the link speed we are setting
++ *
++ * Helper function for setting iavf_adapter link speed
++ **/
++static void
++iavf_set_adapter_link_speed_from_vpe(struct iavf_adapter *adapter,
++ struct virtchnl_pf_event *vpe)
++{
++ if (ADV_LINK_SUPPORT(adapter))
++ adapter->link_speed_mbps =
++ vpe->event_data.link_event_adv.link_speed;
++ else
++ adapter->link_speed = vpe->event_data.link_event.link_speed;
+ }
+
+ /**
+@@ -1160,12 +1225,11 @@ void iavf_virtchnl_completion(struct iavf_adapter *adapter,
+ if (v_opcode == VIRTCHNL_OP_EVENT) {
+ struct virtchnl_pf_event *vpe =
+ (struct virtchnl_pf_event *)msg;
+- bool link_up = vpe->event_data.link_event.link_status;
++ bool link_up = iavf_get_vpe_link_status(adapter, vpe);
+
+ switch (vpe->event) {
+ case VIRTCHNL_EVENT_LINK_CHANGE:
+- adapter->link_speed =
+- vpe->event_data.link_event.link_speed;
++ iavf_set_adapter_link_speed_from_vpe(adapter, vpe);
+
+ /* we've already got the right link status, bail */
+ if (adapter->link_up == link_up)
+diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
+index 2b5dad2ec650..b7b553602ea9 100644
+--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
++++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
+@@ -5983,8 +5983,8 @@ static int mvpp2_remove(struct platform_device *pdev)
+ {
+ struct mvpp2 *priv = platform_get_drvdata(pdev);
+ struct fwnode_handle *fwnode = pdev->dev.fwnode;
++ int i = 0, poolnum = MVPP2_BM_POOLS_NUM;
+ struct fwnode_handle *port_fwnode;
+- int i = 0;
+
+ mvpp2_dbgfs_cleanup(priv);
+
+@@ -5998,7 +5998,10 @@ static int mvpp2_remove(struct platform_device *pdev)
+
+ destroy_workqueue(priv->stats_queue);
+
+- for (i = 0; i < MVPP2_BM_POOLS_NUM; i++) {
++ if (priv->percpu_pools)
++ poolnum = mvpp2_get_nrxqs(priv) * 2;
++
++ for (i = 0; i < poolnum; i++) {
+ struct mvpp2_bm_pool *bm_pool = &priv->bm_pools[i];
+
+ mvpp2_bm_pool_destroy(&pdev->dev, priv, bm_pool);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_send.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_send.c
+index 18719acb7e54..eff8bb64899d 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_send.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_send.c
+@@ -181,7 +181,7 @@ static struct mlx5dr_qp *dr_create_rc_qp(struct mlx5_core_dev *mdev,
+ in, pas));
+
+ err = mlx5_core_create_qp(mdev, &dr_qp->mqp, in, inlen);
+- kfree(in);
++ kvfree(in);
+
+ if (err) {
+ mlx5_core_warn(mdev, " Can't create QP\n");
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum.c
+index 6b39978acd07..3e4199246a18 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum.c
+@@ -990,8 +990,10 @@ int __mlxsw_sp_port_headroom_set(struct mlxsw_sp_port *mlxsw_sp_port, int mtu,
+
+ lossy = !(pfc || pause_en);
+ thres_cells = mlxsw_sp_pg_buf_threshold_get(mlxsw_sp, mtu);
++ mlxsw_sp_port_headroom_8x_adjust(mlxsw_sp_port, &thres_cells);
+ delay_cells = mlxsw_sp_pg_buf_delay_get(mlxsw_sp, mtu, delay,
+ pfc, pause_en);
++ mlxsw_sp_port_headroom_8x_adjust(mlxsw_sp_port, &delay_cells);
+ total_cells = thres_cells + delay_cells;
+
+ taken_headroom_cells += total_cells;
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum.h b/drivers/net/ethernet/mellanox/mlxsw/spectrum.h
+index ca56e72cb4b7..e28ecb84b816 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum.h
++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum.h
+@@ -395,6 +395,19 @@ mlxsw_sp_port_vlan_find_by_vid(const struct mlxsw_sp_port *mlxsw_sp_port,
+ return NULL;
+ }
+
++static inline void
++mlxsw_sp_port_headroom_8x_adjust(const struct mlxsw_sp_port *mlxsw_sp_port,
++ u16 *p_size)
++{
++ /* Ports with eight lanes use two headroom buffers between which the
++ * configured headroom size is split. Therefore, multiply the calculated
++ * headroom size by two.
++ */
++ if (mlxsw_sp_port->mapping.width != 8)
++ return;
++ *p_size *= 2;
++}
++
+ enum mlxsw_sp_flood_type {
+ MLXSW_SP_FLOOD_TYPE_UC,
+ MLXSW_SP_FLOOD_TYPE_BC,
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_buffers.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_buffers.c
+index 968f0902e4fe..19bf0768ed78 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_buffers.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_buffers.c
+@@ -312,6 +312,7 @@ static int mlxsw_sp_port_pb_init(struct mlxsw_sp_port *mlxsw_sp_port)
+
+ if (i == MLXSW_SP_PB_UNUSED)
+ continue;
++ mlxsw_sp_port_headroom_8x_adjust(mlxsw_sp_port, &size);
+ mlxsw_reg_pbmc_lossy_buffer_pack(pbmc_pl, i, size);
+ }
+ mlxsw_reg_pbmc_lossy_buffer_pack(pbmc_pl,
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_span.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_span.c
+index 9fb2e9d93929..7c5032f9c8ff 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_span.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_span.c
+@@ -776,6 +776,7 @@ mlxsw_sp_span_port_buffsize_update(struct mlxsw_sp_port *mlxsw_sp_port, u16 mtu)
+ speed = 0;
+
+ buffsize = mlxsw_sp_span_buffsize_get(mlxsw_sp, speed, mtu);
++ mlxsw_sp_port_headroom_8x_adjust(mlxsw_sp_port, (u16 *) &buffsize);
+ mlxsw_reg_sbib_pack(sbib_pl, mlxsw_sp_port->local_port, buffsize);
+ return mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(sbib), sbib_pl);
+ }
+diff --git a/drivers/net/geneve.c b/drivers/net/geneve.c
+index 6b461be1820b..75266580b586 100644
+--- a/drivers/net/geneve.c
++++ b/drivers/net/geneve.c
+@@ -987,9 +987,10 @@ static netdev_tx_t geneve_xmit(struct sk_buff *skb, struct net_device *dev)
+ if (geneve->collect_md) {
+ info = skb_tunnel_info(skb);
+ if (unlikely(!info || !(info->mode & IP_TUNNEL_INFO_TX))) {
+- err = -EINVAL;
+ netdev_dbg(dev, "no tunnel metadata\n");
+- goto tx_error;
++ dev_kfree_skb(skb);
++ dev->stats.tx_dropped++;
++ return NETDEV_TX_OK;
+ }
+ } else {
+ info = &geneve->info;
+@@ -1006,7 +1007,7 @@ static netdev_tx_t geneve_xmit(struct sk_buff *skb, struct net_device *dev)
+
+ if (likely(!err))
+ return NETDEV_TX_OK;
+-tx_error:
++
+ dev_kfree_skb(skb);
+
+ if (err == -ELOOP)
+diff --git a/drivers/net/hamradio/yam.c b/drivers/net/hamradio/yam.c
+index 71cdef9fb56b..5ab53e9942f3 100644
+--- a/drivers/net/hamradio/yam.c
++++ b/drivers/net/hamradio/yam.c
+@@ -1133,6 +1133,7 @@ static int __init yam_init_driver(void)
+ err = register_netdev(dev);
+ if (err) {
+ printk(KERN_WARNING "yam: cannot register net device %s\n", dev->name);
++ free_netdev(dev);
+ goto error;
+ }
+ yam_devs[i] = dev;
+diff --git a/drivers/net/ipa/ipa_endpoint.c b/drivers/net/ipa/ipa_endpoint.c
+index a21534f1462f..1d823ac0f6d6 100644
+--- a/drivers/net/ipa/ipa_endpoint.c
++++ b/drivers/net/ipa/ipa_endpoint.c
+@@ -669,10 +669,12 @@ static void ipa_endpoint_init_seq(struct ipa_endpoint *endpoint)
+ u32 seq_type = endpoint->seq_type;
+ u32 val = 0;
+
++ /* Sequencer type is made up of four nibbles */
+ val |= u32_encode_bits(seq_type & 0xf, HPS_SEQ_TYPE_FMASK);
+ val |= u32_encode_bits((seq_type >> 4) & 0xf, DPS_SEQ_TYPE_FMASK);
+- /* HPS_REP_SEQ_TYPE is 0 */
+- /* DPS_REP_SEQ_TYPE is 0 */
++ /* The second two apply to replicated packets */
++ val |= u32_encode_bits((seq_type >> 8) & 0xf, HPS_REP_SEQ_TYPE_FMASK);
++ val |= u32_encode_bits((seq_type >> 12) & 0xf, DPS_REP_SEQ_TYPE_FMASK);
+
+ iowrite32(val, endpoint->ipa->reg_virt + offset);
+ }
+diff --git a/drivers/net/ipa/ipa_reg.h b/drivers/net/ipa/ipa_reg.h
+index 3b8106aa277a..0a688d8c1d7c 100644
+--- a/drivers/net/ipa/ipa_reg.h
++++ b/drivers/net/ipa/ipa_reg.h
+@@ -455,6 +455,8 @@ enum ipa_mode {
+ * second packet processing pass + no decipher + microcontroller
+ * @IPA_SEQ_DMA_DEC: DMA + cipher/decipher
+ * @IPA_SEQ_DMA_COMP_DECOMP: DMA + compression/decompression
++ * @IPA_SEQ_PKT_PROCESS_NO_DEC_NO_UCP_DMAP:
++ * packet processing + no decipher + no uCP + HPS REP DMA parser
+ * @IPA_SEQ_INVALID: invalid sequencer type
+ *
+ * The values defined here are broken into 4-bit nibbles that are written
+diff --git a/drivers/net/phy/dp83867.c b/drivers/net/phy/dp83867.c
+index b55e3c0403ed..ddac79960ea7 100644
+--- a/drivers/net/phy/dp83867.c
++++ b/drivers/net/phy/dp83867.c
+@@ -488,7 +488,7 @@ static int dp83867_verify_rgmii_cfg(struct phy_device *phydev)
+ return 0;
+ }
+
+-#ifdef CONFIG_OF_MDIO
++#if IS_ENABLED(CONFIG_OF_MDIO)
+ static int dp83867_of_init(struct phy_device *phydev)
+ {
+ struct dp83867_private *dp83867 = phydev->priv;
+diff --git a/drivers/net/phy/marvell.c b/drivers/net/phy/marvell.c
+index 7fc8e10c5f33..a435f7352cfb 100644
+--- a/drivers/net/phy/marvell.c
++++ b/drivers/net/phy/marvell.c
+@@ -337,7 +337,7 @@ static int m88e1101_config_aneg(struct phy_device *phydev)
+ return marvell_config_aneg(phydev);
+ }
+
+-#ifdef CONFIG_OF_MDIO
++#if IS_ENABLED(CONFIG_OF_MDIO)
+ /* Set and/or override some configuration registers based on the
+ * marvell,reg-init property stored in the of_node for the phydev.
+ *
+diff --git a/drivers/net/phy/mdio_bus.c b/drivers/net/phy/mdio_bus.c
+index 7a4eb3f2cb74..a1a4dee2a033 100644
+--- a/drivers/net/phy/mdio_bus.c
++++ b/drivers/net/phy/mdio_bus.c
+@@ -757,6 +757,7 @@ EXPORT_SYMBOL(mdiobus_scan);
+
+ static void mdiobus_stats_acct(struct mdio_bus_stats *stats, bool op, int ret)
+ {
++ preempt_disable();
+ u64_stats_update_begin(&stats->syncp);
+
+ u64_stats_inc(&stats->transfers);
+@@ -771,6 +772,7 @@ static void mdiobus_stats_acct(struct mdio_bus_stats *stats, bool op, int ret)
+ u64_stats_inc(&stats->writes);
+ out:
+ u64_stats_update_end(&stats->syncp);
++ preempt_enable();
+ }
+
+ /**
+diff --git a/drivers/net/phy/mscc/mscc.h b/drivers/net/phy/mscc/mscc.h
+index 414e3b31bb1f..132f9bf49198 100644
+--- a/drivers/net/phy/mscc/mscc.h
++++ b/drivers/net/phy/mscc/mscc.h
+@@ -375,7 +375,7 @@ struct vsc8531_private {
+ #endif
+ };
+
+-#ifdef CONFIG_OF_MDIO
++#if IS_ENABLED(CONFIG_OF_MDIO)
+ struct vsc8531_edge_rate_table {
+ u32 vddmac;
+ u32 slowdown[8];
+diff --git a/drivers/net/phy/mscc/mscc_main.c b/drivers/net/phy/mscc/mscc_main.c
+index c8aa6d905d8e..485a4f8a6a9a 100644
+--- a/drivers/net/phy/mscc/mscc_main.c
++++ b/drivers/net/phy/mscc/mscc_main.c
+@@ -98,7 +98,7 @@ static const struct vsc85xx_hw_stat vsc8584_hw_stats[] = {
+ },
+ };
+
+-#ifdef CONFIG_OF_MDIO
++#if IS_ENABLED(CONFIG_OF_MDIO)
+ static const struct vsc8531_edge_rate_table edge_table[] = {
+ {MSCC_VDDMAC_3300, { 0, 2, 4, 7, 10, 17, 29, 53} },
+ {MSCC_VDDMAC_2500, { 0, 3, 6, 10, 14, 23, 37, 63} },
+@@ -382,7 +382,7 @@ out_unlock:
+ mutex_unlock(&phydev->lock);
+ }
+
+-#ifdef CONFIG_OF_MDIO
++#if IS_ENABLED(CONFIG_OF_MDIO)
+ static int vsc85xx_edge_rate_magic_get(struct phy_device *phydev)
+ {
+ u32 vdd, sd;
+diff --git a/drivers/ntb/core.c b/drivers/ntb/core.c
+index 2581ab724c34..f8f75a504a58 100644
+--- a/drivers/ntb/core.c
++++ b/drivers/ntb/core.c
+@@ -214,10 +214,8 @@ int ntb_default_port_number(struct ntb_dev *ntb)
+ case NTB_TOPO_B2B_DSD:
+ return NTB_PORT_SEC_DSD;
+ default:
+- break;
++ return 0;
+ }
+-
+- return -EINVAL;
+ }
+ EXPORT_SYMBOL(ntb_default_port_number);
+
+@@ -240,10 +238,8 @@ int ntb_default_peer_port_number(struct ntb_dev *ntb, int pidx)
+ case NTB_TOPO_B2B_DSD:
+ return NTB_PORT_PRI_USD;
+ default:
+- break;
++ return 0;
+ }
+-
+- return -EINVAL;
+ }
+ EXPORT_SYMBOL(ntb_default_peer_port_number);
+
+@@ -315,4 +311,3 @@ static void __exit ntb_driver_exit(void)
+ bus_unregister(&ntb_bus);
+ }
+ module_exit(ntb_driver_exit);
+-
+diff --git a/drivers/ntb/test/ntb_perf.c b/drivers/ntb/test/ntb_perf.c
+index 972f6d984f6d..528751803419 100644
+--- a/drivers/ntb/test/ntb_perf.c
++++ b/drivers/ntb/test/ntb_perf.c
+@@ -159,6 +159,8 @@ struct perf_peer {
+ /* NTB connection setup service */
+ struct work_struct service;
+ unsigned long sts;
++
++ struct completion init_comp;
+ };
+ #define to_peer_service(__work) \
+ container_of(__work, struct perf_peer, service)
+@@ -547,6 +549,7 @@ static int perf_setup_outbuf(struct perf_peer *peer)
+
+ /* Initialization is finally done */
+ set_bit(PERF_STS_DONE, &peer->sts);
++ complete_all(&peer->init_comp);
+
+ return 0;
+ }
+@@ -557,7 +560,7 @@ static void perf_free_inbuf(struct perf_peer *peer)
+ return;
+
+ (void)ntb_mw_clear_trans(peer->perf->ntb, peer->pidx, peer->gidx);
+- dma_free_coherent(&peer->perf->ntb->dev, peer->inbuf_size,
++ dma_free_coherent(&peer->perf->ntb->pdev->dev, peer->inbuf_size,
+ peer->inbuf, peer->inbuf_xlat);
+ peer->inbuf = NULL;
+ }
+@@ -586,8 +589,9 @@ static int perf_setup_inbuf(struct perf_peer *peer)
+
+ perf_free_inbuf(peer);
+
+- peer->inbuf = dma_alloc_coherent(&perf->ntb->dev, peer->inbuf_size,
+- &peer->inbuf_xlat, GFP_KERNEL);
++ peer->inbuf = dma_alloc_coherent(&perf->ntb->pdev->dev,
++ peer->inbuf_size, &peer->inbuf_xlat,
++ GFP_KERNEL);
+ if (!peer->inbuf) {
+ dev_err(&perf->ntb->dev, "Failed to alloc inbuf of %pa\n",
+ &peer->inbuf_size);
+@@ -637,6 +641,7 @@ static void perf_service_work(struct work_struct *work)
+ perf_setup_outbuf(peer);
+
+ if (test_and_clear_bit(PERF_CMD_CLEAR, &peer->sts)) {
++ init_completion(&peer->init_comp);
+ clear_bit(PERF_STS_DONE, &peer->sts);
+ if (test_bit(0, &peer->perf->busy_flag) &&
+ peer == peer->perf->test_peer) {
+@@ -653,7 +658,7 @@ static int perf_init_service(struct perf_ctx *perf)
+ {
+ u64 mask;
+
+- if (ntb_peer_mw_count(perf->ntb) < perf->pcnt + 1) {
++ if (ntb_peer_mw_count(perf->ntb) < perf->pcnt) {
+ dev_err(&perf->ntb->dev, "Not enough memory windows\n");
+ return -EINVAL;
+ }
+@@ -1083,8 +1088,9 @@ static int perf_submit_test(struct perf_peer *peer)
+ struct perf_thread *pthr;
+ int tidx, ret;
+
+- if (!test_bit(PERF_STS_DONE, &peer->sts))
+- return -ENOLINK;
++ ret = wait_for_completion_interruptible(&peer->init_comp);
++ if (ret < 0)
++ return ret;
+
+ if (test_and_set_bit_lock(0, &perf->busy_flag))
+ return -EBUSY;
+@@ -1455,10 +1461,21 @@ static int perf_init_peers(struct perf_ctx *perf)
+ peer->gidx = pidx;
+ }
+ INIT_WORK(&peer->service, perf_service_work);
++ init_completion(&peer->init_comp);
+ }
+ if (perf->gidx == -1)
+ perf->gidx = pidx;
+
++ /*
++ * Hardware with only two ports may not have unique port
++ * numbers. In this case, the gidxs should all be zero.
++ */
++ if (perf->pcnt == 1 && ntb_port_number(perf->ntb) == 0 &&
++ ntb_peer_port_number(perf->ntb, 0) == 0) {
++ perf->gidx = 0;
++ perf->peers[0].gidx = 0;
++ }
++
+ for (pidx = 0; pidx < perf->pcnt; pidx++) {
+ ret = perf_setup_peer_mw(&perf->peers[pidx]);
+ if (ret)
+@@ -1554,4 +1571,3 @@ static void __exit perf_exit(void)
+ destroy_workqueue(perf_wq);
+ }
+ module_exit(perf_exit);
+-
+diff --git a/drivers/ntb/test/ntb_pingpong.c b/drivers/ntb/test/ntb_pingpong.c
+index 04dd46647db3..2164e8492772 100644
+--- a/drivers/ntb/test/ntb_pingpong.c
++++ b/drivers/ntb/test/ntb_pingpong.c
+@@ -121,15 +121,14 @@ static int pp_find_next_peer(struct pp_ctx *pp)
+ link = ntb_link_is_up(pp->ntb, NULL, NULL);
+
+ /* Find next available peer */
+- if (link & pp->nmask) {
++ if (link & pp->nmask)
+ pidx = __ffs64(link & pp->nmask);
+- out_db = BIT_ULL(pidx + 1);
+- } else if (link & pp->pmask) {
++ else if (link & pp->pmask)
+ pidx = __ffs64(link & pp->pmask);
+- out_db = BIT_ULL(pidx);
+- } else {
++ else
+ return -ENODEV;
+- }
++
++ out_db = BIT_ULL(ntb_peer_port_number(pp->ntb, pidx));
+
+ spin_lock(&pp->lock);
+ pp->out_pidx = pidx;
+@@ -303,7 +302,7 @@ static void pp_init_flds(struct pp_ctx *pp)
+ break;
+ }
+
+- pp->in_db = BIT_ULL(pidx);
++ pp->in_db = BIT_ULL(lport);
+ pp->pmask = GENMASK_ULL(pidx, 0) >> 1;
+ pp->nmask = GENMASK_ULL(pcnt - 1, pidx);
+
+@@ -432,4 +431,3 @@ static void __exit pp_exit(void)
+ debugfs_remove_recursive(pp_dbgfs_topdir);
+ }
+ module_exit(pp_exit);
+-
+diff --git a/drivers/ntb/test/ntb_tool.c b/drivers/ntb/test/ntb_tool.c
+index 69da758fe64c..b7bf3f863d79 100644
+--- a/drivers/ntb/test/ntb_tool.c
++++ b/drivers/ntb/test/ntb_tool.c
+@@ -504,7 +504,7 @@ static ssize_t tool_peer_link_read(struct file *filep, char __user *ubuf,
+ buf[1] = '\n';
+ buf[2] = '\0';
+
+- return simple_read_from_buffer(ubuf, size, offp, buf, 3);
++ return simple_read_from_buffer(ubuf, size, offp, buf, 2);
+ }
+
+ static TOOL_FOPS_RDWR(tool_peer_link_fops,
+@@ -590,7 +590,7 @@ static int tool_setup_mw(struct tool_ctx *tc, int pidx, int widx,
+ inmw->size = min_t(resource_size_t, req_size, size);
+ inmw->size = round_up(inmw->size, addr_align);
+ inmw->size = round_up(inmw->size, size_align);
+- inmw->mm_base = dma_alloc_coherent(&tc->ntb->dev, inmw->size,
++ inmw->mm_base = dma_alloc_coherent(&tc->ntb->pdev->dev, inmw->size,
+ &inmw->dma_base, GFP_KERNEL);
+ if (!inmw->mm_base)
+ return -ENOMEM;
+@@ -612,7 +612,7 @@ static int tool_setup_mw(struct tool_ctx *tc, int pidx, int widx,
+ return 0;
+
+ err_free_dma:
+- dma_free_coherent(&tc->ntb->dev, inmw->size, inmw->mm_base,
++ dma_free_coherent(&tc->ntb->pdev->dev, inmw->size, inmw->mm_base,
+ inmw->dma_base);
+ inmw->mm_base = NULL;
+ inmw->dma_base = 0;
+@@ -629,7 +629,7 @@ static void tool_free_mw(struct tool_ctx *tc, int pidx, int widx)
+
+ if (inmw->mm_base != NULL) {
+ ntb_mw_clear_trans(tc->ntb, pidx, widx);
+- dma_free_coherent(&tc->ntb->dev, inmw->size,
++ dma_free_coherent(&tc->ntb->pdev->dev, inmw->size,
+ inmw->mm_base, inmw->dma_base);
+ }
+
+@@ -1690,4 +1690,3 @@ static void __exit tool_exit(void)
+ debugfs_remove_recursive(tool_dbgfs_topdir);
+ }
+ module_exit(tool_exit);
+-
+diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c
+index 5ef4a84c442a..564e3f220ac7 100644
+--- a/drivers/nvme/host/fc.c
++++ b/drivers/nvme/host/fc.c
+@@ -2300,10 +2300,11 @@ nvme_fc_start_fcp_op(struct nvme_fc_ctrl *ctrl, struct nvme_fc_queue *queue,
+ opstate = atomic_xchg(&op->state, FCPOP_STATE_COMPLETE);
+ __nvme_fc_fcpop_chk_teardowns(ctrl, op, opstate);
+
+- if (!(op->flags & FCOP_FLAGS_AEN))
++ if (!(op->flags & FCOP_FLAGS_AEN)) {
+ nvme_fc_unmap_data(ctrl, op->rq, op);
++ nvme_cleanup_cmd(op->rq);
++ }
+
+- nvme_cleanup_cmd(op->rq);
+ nvme_fc_ctrl_put(ctrl);
+
+ if (ctrl->rport->remoteport.port_state == FC_OBJSTATE_ONLINE &&
+diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
+index 076bdd90c922..4ad629eb3bc6 100644
+--- a/drivers/nvme/host/pci.c
++++ b/drivers/nvme/host/pci.c
+@@ -2958,9 +2958,15 @@ static int nvme_suspend(struct device *dev)
+ * the PCI bus layer to put it into D3 in order to take the PCIe link
+ * down, so as to allow the platform to achieve its minimum low-power
+ * state (which may not be possible if the link is up).
++ *
++ * If a host memory buffer is enabled, shut down the device as the NVMe
++ * specification allows the device to access the host memory buffer in
++ * host DRAM from all power states, but hosts will fail access to DRAM
++ * during S3.
+ */
+ if (pm_suspend_via_firmware() || !ctrl->npss ||
+ !pcie_aspm_enabled(pdev) ||
++ ndev->nr_host_mem_descs ||
+ (ndev->ctrl.quirks & NVME_QUIRK_SIMPLE_SUSPEND))
+ return nvme_disable_prepare_reset(ndev, true);
+
+diff --git a/drivers/nvmem/core.c b/drivers/nvmem/core.c
+index 05c6ae4b0b97..a8300202a7fb 100644
+--- a/drivers/nvmem/core.c
++++ b/drivers/nvmem/core.c
+@@ -66,6 +66,30 @@ static LIST_HEAD(nvmem_lookup_list);
+
+ static BLOCKING_NOTIFIER_HEAD(nvmem_notifier);
+
++static int nvmem_reg_read(struct nvmem_device *nvmem, unsigned int offset,
++ void *val, size_t bytes)
++{
++ if (nvmem->reg_read)
++ return nvmem->reg_read(nvmem->priv, offset, val, bytes);
++
++ return -EINVAL;
++}
++
++static int nvmem_reg_write(struct nvmem_device *nvmem, unsigned int offset,
++ void *val, size_t bytes)
++{
++ int ret;
++
++ if (nvmem->reg_write) {
++ gpiod_set_value_cansleep(nvmem->wp_gpio, 0);
++ ret = nvmem->reg_write(nvmem->priv, offset, val, bytes);
++ gpiod_set_value_cansleep(nvmem->wp_gpio, 1);
++ return ret;
++ }
++
++ return -EINVAL;
++}
++
+ #ifdef CONFIG_NVMEM_SYSFS
+ static const char * const nvmem_type_str[] = {
+ [NVMEM_TYPE_UNKNOWN] = "Unknown",
+@@ -122,7 +146,7 @@ static ssize_t bin_attr_nvmem_read(struct file *filp, struct kobject *kobj,
+ if (!nvmem->reg_read)
+ return -EPERM;
+
+- rc = nvmem->reg_read(nvmem->priv, pos, buf, count);
++ rc = nvmem_reg_read(nvmem, pos, buf, count);
+
+ if (rc)
+ return rc;
+@@ -159,7 +183,7 @@ static ssize_t bin_attr_nvmem_write(struct file *filp, struct kobject *kobj,
+ if (!nvmem->reg_write)
+ return -EPERM;
+
+- rc = nvmem->reg_write(nvmem->priv, pos, buf, count);
++ rc = nvmem_reg_write(nvmem, pos, buf, count);
+
+ if (rc)
+ return rc;
+@@ -311,30 +335,6 @@ static void nvmem_sysfs_remove_compat(struct nvmem_device *nvmem,
+
+ #endif /* CONFIG_NVMEM_SYSFS */
+
+-static int nvmem_reg_read(struct nvmem_device *nvmem, unsigned int offset,
+- void *val, size_t bytes)
+-{
+- if (nvmem->reg_read)
+- return nvmem->reg_read(nvmem->priv, offset, val, bytes);
+-
+- return -EINVAL;
+-}
+-
+-static int nvmem_reg_write(struct nvmem_device *nvmem, unsigned int offset,
+- void *val, size_t bytes)
+-{
+- int ret;
+-
+- if (nvmem->reg_write) {
+- gpiod_set_value_cansleep(nvmem->wp_gpio, 0);
+- ret = nvmem->reg_write(nvmem->priv, offset, val, bytes);
+- gpiod_set_value_cansleep(nvmem->wp_gpio, 1);
+- return ret;
+- }
+-
+- return -EINVAL;
+-}
+-
+ static void nvmem_release(struct device *dev)
+ {
+ struct nvmem_device *nvmem = to_nvmem_device(dev);
+diff --git a/drivers/of/kobj.c b/drivers/of/kobj.c
+index c72eef988041..a32e60b024b8 100644
+--- a/drivers/of/kobj.c
++++ b/drivers/of/kobj.c
+@@ -134,8 +134,6 @@ int __of_attach_node_sysfs(struct device_node *np)
+ if (!name)
+ return -ENOMEM;
+
+- of_node_get(np);
+-
+ rc = kobject_add(&np->kobj, parent, "%s", name);
+ kfree(name);
+ if (rc)
+@@ -144,6 +142,7 @@ int __of_attach_node_sysfs(struct device_node *np)
+ for_each_property_of_node(np, pp)
+ __of_add_property_sysfs(np, pp);
+
++ of_node_get(np);
+ return 0;
+ }
+
+diff --git a/drivers/of/property.c b/drivers/of/property.c
+index b4916dcc9e72..6dc542af5a70 100644
+--- a/drivers/of/property.c
++++ b/drivers/of/property.c
+@@ -1045,8 +1045,20 @@ static int of_link_to_phandle(struct device *dev, struct device_node *sup_np,
+ * Find the device node that contains the supplier phandle. It may be
+ * @sup_np or it may be an ancestor of @sup_np.
+ */
+- while (sup_np && !of_find_property(sup_np, "compatible", NULL))
++ while (sup_np) {
++
++ /* Don't allow linking to a disabled supplier */
++ if (!of_device_is_available(sup_np)) {
++ of_node_put(sup_np);
++ sup_np = NULL;
++ }
++
++ if (of_find_property(sup_np, "compatible", NULL))
++ break;
++
+ sup_np = of_get_next_parent(sup_np);
++ }
++
+ if (!sup_np) {
+ dev_dbg(dev, "Not linking to %pOFP - No device\n", tmp_np);
+ return -ENODEV;
+@@ -1296,7 +1308,7 @@ static int of_link_to_suppliers(struct device *dev,
+ if (of_link_property(dev, con_np, p->name))
+ ret = -ENODEV;
+
+- for_each_child_of_node(con_np, child)
++ for_each_available_child_of_node(con_np, child)
+ if (of_link_to_suppliers(dev, child) && !ret)
+ ret = -EAGAIN;
+
+diff --git a/drivers/pci/controller/dwc/pci-dra7xx.c b/drivers/pci/controller/dwc/pci-dra7xx.c
+index 3b0e58f2de58..6184ebc9392d 100644
+--- a/drivers/pci/controller/dwc/pci-dra7xx.c
++++ b/drivers/pci/controller/dwc/pci-dra7xx.c
+@@ -840,7 +840,6 @@ static int __init dra7xx_pcie_probe(struct platform_device *pdev)
+ struct phy **phy;
+ struct device_link **link;
+ void __iomem *base;
+- struct resource *res;
+ struct dw_pcie *pci;
+ struct dra7xx_pcie *dra7xx;
+ struct device *dev = &pdev->dev;
+@@ -877,10 +876,9 @@ static int __init dra7xx_pcie_probe(struct platform_device *pdev)
+ return irq;
+ }
+
+- res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "ti_conf");
+- base = devm_ioremap(dev, res->start, resource_size(res));
+- if (!base)
+- return -ENOMEM;
++ base = devm_platform_ioremap_resource_byname(pdev, "ti_conf");
++ if (IS_ERR(base))
++ return PTR_ERR(base);
+
+ phy_count = of_property_count_strings(np, "phy-names");
+ if (phy_count < 0) {
+diff --git a/drivers/pci/controller/dwc/pci-meson.c b/drivers/pci/controller/dwc/pci-meson.c
+index 3715dceca1bf..ca59ba9e0ecd 100644
+--- a/drivers/pci/controller/dwc/pci-meson.c
++++ b/drivers/pci/controller/dwc/pci-meson.c
+@@ -289,11 +289,11 @@ static void meson_pcie_init_dw(struct meson_pcie *mp)
+ meson_cfg_writel(mp, val, PCIE_CFG0);
+
+ val = meson_elb_readl(mp, PCIE_PORT_LINK_CTRL_OFF);
+- val &= ~LINK_CAPABLE_MASK;
++ val &= ~(LINK_CAPABLE_MASK | FAST_LINK_MODE);
+ meson_elb_writel(mp, val, PCIE_PORT_LINK_CTRL_OFF);
+
+ val = meson_elb_readl(mp, PCIE_PORT_LINK_CTRL_OFF);
+- val |= LINK_CAPABLE_X1 | FAST_LINK_MODE;
++ val |= LINK_CAPABLE_X1;
+ meson_elb_writel(mp, val, PCIE_PORT_LINK_CTRL_OFF);
+
+ val = meson_elb_readl(mp, PCIE_GEN2_CTRL_OFF);
+diff --git a/drivers/pci/controller/dwc/pcie-designware-host.c b/drivers/pci/controller/dwc/pcie-designware-host.c
+index 395feb8ca051..3c43311bb95c 100644
+--- a/drivers/pci/controller/dwc/pcie-designware-host.c
++++ b/drivers/pci/controller/dwc/pcie-designware-host.c
+@@ -264,6 +264,8 @@ int dw_pcie_allocate_domains(struct pcie_port *pp)
+ return -ENOMEM;
+ }
+
++ irq_domain_update_bus_token(pp->irq_domain, DOMAIN_BUS_NEXUS);
++
+ pp->msi_domain = pci_msi_create_irq_domain(fwnode,
+ &dw_pcie_msi_domain_info,
+ pp->irq_domain);
+diff --git a/drivers/pci/controller/pci-aardvark.c b/drivers/pci/controller/pci-aardvark.c
+index 2a20b649f40c..2ecc79c03ade 100644
+--- a/drivers/pci/controller/pci-aardvark.c
++++ b/drivers/pci/controller/pci-aardvark.c
+@@ -9,6 +9,7 @@
+ */
+
+ #include <linux/delay.h>
++#include <linux/gpio.h>
+ #include <linux/interrupt.h>
+ #include <linux/irq.h>
+ #include <linux/irqdomain.h>
+@@ -18,6 +19,7 @@
+ #include <linux/platform_device.h>
+ #include <linux/msi.h>
+ #include <linux/of_address.h>
++#include <linux/of_gpio.h>
+ #include <linux/of_pci.h>
+
+ #include "../pci.h"
+@@ -40,6 +42,7 @@
+ #define PCIE_CORE_LINK_CTRL_STAT_REG 0xd0
+ #define PCIE_CORE_LINK_L0S_ENTRY BIT(0)
+ #define PCIE_CORE_LINK_TRAINING BIT(5)
++#define PCIE_CORE_LINK_SPEED_SHIFT 16
+ #define PCIE_CORE_LINK_WIDTH_SHIFT 20
+ #define PCIE_CORE_ERR_CAPCTL_REG 0x118
+ #define PCIE_CORE_ERR_CAPCTL_ECRC_CHK_TX BIT(5)
+@@ -201,7 +204,9 @@ struct advk_pcie {
+ struct mutex msi_used_lock;
+ u16 msi_msg;
+ int root_bus_nr;
++ int link_gen;
+ struct pci_bridge_emul bridge;
++ struct gpio_desc *reset_gpio;
+ };
+
+ static inline void advk_writel(struct advk_pcie *pcie, u32 val, u64 reg)
+@@ -225,20 +230,16 @@ static int advk_pcie_link_up(struct advk_pcie *pcie)
+
+ static int advk_pcie_wait_for_link(struct advk_pcie *pcie)
+ {
+- struct device *dev = &pcie->pdev->dev;
+ int retries;
+
+ /* check if the link is up or not */
+ for (retries = 0; retries < LINK_WAIT_MAX_RETRIES; retries++) {
+- if (advk_pcie_link_up(pcie)) {
+- dev_info(dev, "link up\n");
++ if (advk_pcie_link_up(pcie))
+ return 0;
+- }
+
+ usleep_range(LINK_WAIT_USLEEP_MIN, LINK_WAIT_USLEEP_MAX);
+ }
+
+- dev_err(dev, "link never came up\n");
+ return -ETIMEDOUT;
+ }
+
+@@ -253,10 +254,110 @@ static void advk_pcie_wait_for_retrain(struct advk_pcie *pcie)
+ }
+ }
+
++static int advk_pcie_train_at_gen(struct advk_pcie *pcie, int gen)
++{
++ int ret, neg_gen;
++ u32 reg;
++
++ /* Setup link speed */
++ reg = advk_readl(pcie, PCIE_CORE_CTRL0_REG);
++ reg &= ~PCIE_GEN_SEL_MSK;
++ if (gen == 3)
++ reg |= SPEED_GEN_3;
++ else if (gen == 2)
++ reg |= SPEED_GEN_2;
++ else
++ reg |= SPEED_GEN_1;
++ advk_writel(pcie, reg, PCIE_CORE_CTRL0_REG);
++
++ /*
++ * Enable link training. This is not needed in every call to this
++ * function, just once suffices, but it does not break anything either.
++ */
++ reg = advk_readl(pcie, PCIE_CORE_CTRL0_REG);
++ reg |= LINK_TRAINING_EN;
++ advk_writel(pcie, reg, PCIE_CORE_CTRL0_REG);
++
++ /*
++ * Start link training immediately after enabling it.
++ * This solves problems for some buggy cards.
++ */
++ reg = advk_readl(pcie, PCIE_CORE_LINK_CTRL_STAT_REG);
++ reg |= PCIE_CORE_LINK_TRAINING;
++ advk_writel(pcie, reg, PCIE_CORE_LINK_CTRL_STAT_REG);
++
++ ret = advk_pcie_wait_for_link(pcie);
++ if (ret)
++ return ret;
++
++ reg = advk_readl(pcie, PCIE_CORE_LINK_CTRL_STAT_REG);
++ neg_gen = (reg >> PCIE_CORE_LINK_SPEED_SHIFT) & 0xf;
++
++ return neg_gen;
++}
++
++static void advk_pcie_train_link(struct advk_pcie *pcie)
++{
++ struct device *dev = &pcie->pdev->dev;
++ int neg_gen = -1, gen;
++
++ /*
++ * Try link training at link gen specified by device tree property
++ * 'max-link-speed'. If this fails, iteratively train at lower gen.
++ */
++ for (gen = pcie->link_gen; gen > 0; --gen) {
++ neg_gen = advk_pcie_train_at_gen(pcie, gen);
++ if (neg_gen > 0)
++ break;
++ }
++
++ if (neg_gen < 0)
++ goto err;
++
++ /*
++ * After successful training if negotiated gen is lower than requested,
++ * train again on negotiated gen. This solves some stability issues for
++ * some buggy gen1 cards.
++ */
++ if (neg_gen < gen) {
++ gen = neg_gen;
++ neg_gen = advk_pcie_train_at_gen(pcie, gen);
++ }
++
++ if (neg_gen == gen) {
++ dev_info(dev, "link up at gen %i\n", gen);
++ return;
++ }
++
++err:
++ dev_err(dev, "link never came up\n");
++}
++
++static void advk_pcie_issue_perst(struct advk_pcie *pcie)
++{
++ u32 reg;
++
++ if (!pcie->reset_gpio)
++ return;
++
++ /* PERST does not work for some cards when link training is enabled */
++ reg = advk_readl(pcie, PCIE_CORE_CTRL0_REG);
++ reg &= ~LINK_TRAINING_EN;
++ advk_writel(pcie, reg, PCIE_CORE_CTRL0_REG);
++
++ /* 10ms delay is needed for some cards */
++ dev_info(&pcie->pdev->dev, "issuing PERST via reset GPIO for 10ms\n");
++ gpiod_set_value_cansleep(pcie->reset_gpio, 1);
++ usleep_range(10000, 11000);
++ gpiod_set_value_cansleep(pcie->reset_gpio, 0);
++}
++
+ static void advk_pcie_setup_hw(struct advk_pcie *pcie)
+ {
+ u32 reg;
+
++ advk_pcie_issue_perst(pcie);
++
+ /* Set to Direct mode */
+ reg = advk_readl(pcie, CTRL_CONFIG_REG);
+ reg &= ~(CTRL_MODE_MASK << CTRL_MODE_SHIFT);
+@@ -288,23 +389,12 @@ static void advk_pcie_setup_hw(struct advk_pcie *pcie)
+ PCIE_CORE_CTRL2_TD_ENABLE;
+ advk_writel(pcie, reg, PCIE_CORE_CTRL2_REG);
+
+- /* Set GEN2 */
+- reg = advk_readl(pcie, PCIE_CORE_CTRL0_REG);
+- reg &= ~PCIE_GEN_SEL_MSK;
+- reg |= SPEED_GEN_2;
+- advk_writel(pcie, reg, PCIE_CORE_CTRL0_REG);
+-
+ /* Set lane X1 */
+ reg = advk_readl(pcie, PCIE_CORE_CTRL0_REG);
+ reg &= ~LANE_CNT_MSK;
+ reg |= LANE_COUNT_1;
+ advk_writel(pcie, reg, PCIE_CORE_CTRL0_REG);
+
+- /* Enable link training */
+- reg = advk_readl(pcie, PCIE_CORE_CTRL0_REG);
+- reg |= LINK_TRAINING_EN;
+- advk_writel(pcie, reg, PCIE_CORE_CTRL0_REG);
+-
+ /* Enable MSI */
+ reg = advk_readl(pcie, PCIE_CORE_CTRL2_REG);
+ reg |= PCIE_CORE_CTRL2_MSI_ENABLE;
+@@ -340,22 +430,14 @@ static void advk_pcie_setup_hw(struct advk_pcie *pcie)
+
+ /*
+ * PERST# signal could have been asserted by pinctrl subsystem before
+- * probe() callback has been called, making the endpoint going into
++ * probe() callback has been called or issued explicitly by reset gpio
++ * function advk_pcie_issue_perst(), making the endpoint going into
+ * fundamental reset. As required by PCI Express spec a delay for at
+ * least 100ms after such a reset before link training is needed.
+ */
+ msleep(PCI_PM_D3COLD_WAIT);
+
+- /* Start link training */
+- reg = advk_readl(pcie, PCIE_CORE_LINK_CTRL_STAT_REG);
+- reg |= PCIE_CORE_LINK_TRAINING;
+- advk_writel(pcie, reg, PCIE_CORE_LINK_CTRL_STAT_REG);
+-
+- advk_pcie_wait_for_link(pcie);
+-
+- reg = PCIE_CORE_LINK_L0S_ENTRY |
+- (1 << PCIE_CORE_LINK_WIDTH_SHIFT);
+- advk_writel(pcie, reg, PCIE_CORE_LINK_CTRL_STAT_REG);
++ advk_pcie_train_link(pcie);
+
+ reg = advk_readl(pcie, PCIE_CORE_CMD_STATUS_REG);
+ reg |= PCIE_CORE_CMD_MEM_ACCESS_EN |
+@@ -989,6 +1071,28 @@ static int advk_pcie_probe(struct platform_device *pdev)
+ }
+ pcie->root_bus_nr = bus->start;
+
++ pcie->reset_gpio = devm_gpiod_get_from_of_node(dev, dev->of_node,
++ "reset-gpios", 0,
++ GPIOD_OUT_LOW,
++ "pcie1-reset");
++ ret = PTR_ERR_OR_ZERO(pcie->reset_gpio);
++ if (ret) {
++ if (ret == -ENOENT) {
++ pcie->reset_gpio = NULL;
++ } else {
++ if (ret != -EPROBE_DEFER)
++ dev_err(dev, "Failed to get reset-gpio: %i\n",
++ ret);
++ return ret;
++ }
++ }
++
++ ret = of_pci_get_max_link_speed(dev->of_node);
++ if (ret <= 0 || ret > 3)
++ pcie->link_gen = 3;
++ else
++ pcie->link_gen = ret;
++
+ advk_pcie_setup_hw(pcie);
+
+ advk_sw_pci_bridge_init(pcie);
+diff --git a/drivers/pci/controller/pci-v3-semi.c b/drivers/pci/controller/pci-v3-semi.c
+index bd05221f5a22..ddcb4571a79b 100644
+--- a/drivers/pci/controller/pci-v3-semi.c
++++ b/drivers/pci/controller/pci-v3-semi.c
+@@ -720,7 +720,7 @@ static int v3_pci_probe(struct platform_device *pdev)
+ int irq;
+ int ret;
+
+- host = pci_alloc_host_bridge(sizeof(*v3));
++ host = devm_pci_alloc_host_bridge(dev, sizeof(*v3));
+ if (!host)
+ return -ENOMEM;
+
+diff --git a/drivers/pci/controller/pcie-brcmstb.c b/drivers/pci/controller/pcie-brcmstb.c
+index 6d79d14527a6..2297910bf6e4 100644
+--- a/drivers/pci/controller/pcie-brcmstb.c
++++ b/drivers/pci/controller/pcie-brcmstb.c
+@@ -54,11 +54,11 @@
+
+ #define PCIE_MISC_CPU_2_PCIE_MEM_WIN0_LO 0x400c
+ #define PCIE_MEM_WIN0_LO(win) \
+- PCIE_MISC_CPU_2_PCIE_MEM_WIN0_LO + ((win) * 4)
++ PCIE_MISC_CPU_2_PCIE_MEM_WIN0_LO + ((win) * 8)
+
+ #define PCIE_MISC_CPU_2_PCIE_MEM_WIN0_HI 0x4010
+ #define PCIE_MEM_WIN0_HI(win) \
+- PCIE_MISC_CPU_2_PCIE_MEM_WIN0_HI + ((win) * 4)
++ PCIE_MISC_CPU_2_PCIE_MEM_WIN0_HI + ((win) * 8)
+
+ #define PCIE_MISC_RC_BAR1_CONFIG_LO 0x402c
+ #define PCIE_MISC_RC_BAR1_CONFIG_LO_SIZE_MASK 0x1f
+@@ -697,6 +697,7 @@ static int brcm_pcie_setup(struct brcm_pcie *pcie)
+
+ /* Reset the bridge */
+ brcm_pcie_bridge_sw_init_set(pcie, 1);
++ brcm_pcie_perst_set(pcie, 1);
+
+ usleep_range(100, 200);
+
+diff --git a/drivers/pci/controller/pcie-rcar.c b/drivers/pci/controller/pcie-rcar.c
+index 759c6542c5c8..1bae6a4abaae 100644
+--- a/drivers/pci/controller/pcie-rcar.c
++++ b/drivers/pci/controller/pcie-rcar.c
+@@ -333,11 +333,12 @@ static struct pci_ops rcar_pcie_ops = {
+ };
+
+ static void rcar_pcie_setup_window(int win, struct rcar_pcie *pcie,
+- struct resource *res)
++ struct resource_entry *window)
+ {
+ /* Setup PCIe address space mappings for each resource */
+ resource_size_t size;
+ resource_size_t res_start;
++ struct resource *res = window->res;
+ u32 mask;
+
+ rcar_pci_write_reg(pcie, 0x00000000, PCIEPTCTLR(win));
+@@ -351,9 +352,9 @@ static void rcar_pcie_setup_window(int win, struct rcar_pcie *pcie,
+ rcar_pci_write_reg(pcie, mask << 7, PCIEPAMR(win));
+
+ if (res->flags & IORESOURCE_IO)
+- res_start = pci_pio_to_address(res->start);
++ res_start = pci_pio_to_address(res->start) - window->offset;
+ else
+- res_start = res->start;
++ res_start = res->start - window->offset;
+
+ rcar_pci_write_reg(pcie, upper_32_bits(res_start), PCIEPAUR(win));
+ rcar_pci_write_reg(pcie, lower_32_bits(res_start) & ~0x7F,
+@@ -382,7 +383,7 @@ static int rcar_pcie_setup(struct list_head *resource, struct rcar_pcie *pci)
+ switch (resource_type(res)) {
+ case IORESOURCE_IO:
+ case IORESOURCE_MEM:
+- rcar_pcie_setup_window(i, pci, res);
++ rcar_pcie_setup_window(i, pci, win);
+ i++;
+ break;
+ case IORESOURCE_BUS:
+diff --git a/drivers/pci/controller/vmd.c b/drivers/pci/controller/vmd.c
+index dac91d60701d..e386d4eac407 100644
+--- a/drivers/pci/controller/vmd.c
++++ b/drivers/pci/controller/vmd.c
+@@ -445,9 +445,11 @@ static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features)
+ if (!membar2)
+ return -ENOMEM;
+ offset[0] = vmd->dev->resource[VMD_MEMBAR1].start -
+- readq(membar2 + MB2_SHADOW_OFFSET);
++ (readq(membar2 + MB2_SHADOW_OFFSET) &
++ PCI_BASE_ADDRESS_MEM_MASK);
+ offset[1] = vmd->dev->resource[VMD_MEMBAR2].start -
+- readq(membar2 + MB2_SHADOW_OFFSET + 8);
++ (readq(membar2 + MB2_SHADOW_OFFSET + 8) &
++ PCI_BASE_ADDRESS_MEM_MASK);
+ pci_iounmap(vmd->dev, membar2);
+ }
+ }
+diff --git a/drivers/pci/endpoint/functions/pci-epf-test.c b/drivers/pci/endpoint/functions/pci-epf-test.c
+index 60330f3e3751..c89a9561439f 100644
+--- a/drivers/pci/endpoint/functions/pci-epf-test.c
++++ b/drivers/pci/endpoint/functions/pci-epf-test.c
+@@ -187,6 +187,9 @@ static int pci_epf_test_init_dma_chan(struct pci_epf_test *epf_test)
+ */
+ static void pci_epf_test_clean_dma_chan(struct pci_epf_test *epf_test)
+ {
++ if (!epf_test->dma_supported)
++ return;
++
+ dma_release_channel(epf_test->dma_chan);
+ epf_test->dma_chan = NULL;
+ }
+diff --git a/drivers/pci/pci-bridge-emul.c b/drivers/pci/pci-bridge-emul.c
+index 4f4f54bc732e..faa414655f33 100644
+--- a/drivers/pci/pci-bridge-emul.c
++++ b/drivers/pci/pci-bridge-emul.c
+@@ -185,8 +185,8 @@ static const struct pci_bridge_reg_behavior pcie_cap_regs_behavior[] = {
+ * RO, the rest is reserved
+ */
+ .w1c = GENMASK(19, 16),
+- .ro = GENMASK(20, 19),
+- .rsvd = GENMASK(31, 21),
++ .ro = GENMASK(21, 20),
++ .rsvd = GENMASK(31, 22),
+ },
+
+ [PCI_EXP_LNKCAP / 4] = {
+@@ -226,7 +226,7 @@ static const struct pci_bridge_reg_behavior pcie_cap_regs_behavior[] = {
+ PCI_EXP_SLTSTA_CC | PCI_EXP_SLTSTA_DLLSC) << 16,
+ .ro = (PCI_EXP_SLTSTA_MRLSS | PCI_EXP_SLTSTA_PDS |
+ PCI_EXP_SLTSTA_EIS) << 16,
+- .rsvd = GENMASK(15, 12) | (GENMASK(15, 9) << 16),
++ .rsvd = GENMASK(15, 13) | (GENMASK(15, 9) << 16),
+ },
+
+ [PCI_EXP_RTCTL / 4] = {
+diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
+index 6d3234f75692..809f2584e338 100644
+--- a/drivers/pci/pci.c
++++ b/drivers/pci/pci.c
+@@ -4660,7 +4660,8 @@ static int pci_pm_reset(struct pci_dev *dev, int probe)
+ * pcie_wait_for_link_delay - Wait until link is active or inactive
+ * @pdev: Bridge device
+ * @active: waiting for active or inactive?
+- * @delay: Delay to wait after link has become active (in ms)
++ * @delay: Delay to wait after link has become active (in ms). Specify %0
++ * for no delay.
+ *
+ * Use this to wait till link becomes active or inactive.
+ */
+@@ -4701,7 +4702,7 @@ static bool pcie_wait_for_link_delay(struct pci_dev *pdev, bool active,
+ msleep(10);
+ timeout -= 10;
+ }
+- if (active && ret)
++ if (active && ret && delay)
+ msleep(delay);
+ else if (ret != active)
+ pci_info(pdev, "Data Link Layer Link Active not %s in 1000 msec\n",
+@@ -4822,17 +4823,28 @@ void pci_bridge_wait_for_secondary_bus(struct pci_dev *dev)
+ if (!pcie_downstream_port(dev))
+ return;
+
+- if (pcie_get_speed_cap(dev) <= PCIE_SPEED_5_0GT) {
+- pci_dbg(dev, "waiting %d ms for downstream link\n", delay);
+- msleep(delay);
+- } else {
+- pci_dbg(dev, "waiting %d ms for downstream link, after activation\n",
+- delay);
+- if (!pcie_wait_for_link_delay(dev, true, delay)) {
++ /*
++ * Per PCIe r5.0, sec 6.6.1, for downstream ports that support
++ * speeds > 5 GT/s, we must wait for link training to complete
++ * before the mandatory delay.
++ *
++ * We can only tell when link training completes via DLL Link
++ * Active, which is required for downstream ports that support
++ * speeds > 5 GT/s (sec 7.5.3.6). Unfortunately some common
++ * devices do not implement Link Active reporting even when it's
++ * required, so we'll check for that directly instead of checking
++ * the supported link speed. We assume devices without Link Active
++ * reporting can train in 100 ms regardless of speed.
++ */
++ if (dev->link_active_reporting) {
++ pci_dbg(dev, "waiting for link to train\n");
++ if (!pcie_wait_for_link_delay(dev, true, 0)) {
+ /* Did not train, no need to wait any further */
+ return;
+ }
+ }
++ pci_dbg(child, "waiting %d ms to become accessible\n", delay);
++ msleep(delay);
+
+ if (!pci_device_is_present(child)) {
+ pci_dbg(child, "waiting additional %d ms to become accessible\n", delay);
+diff --git a/drivers/pci/pcie/aspm.c b/drivers/pci/pcie/aspm.c
+index 2378ed692534..b17e5ffd31b1 100644
+--- a/drivers/pci/pcie/aspm.c
++++ b/drivers/pci/pcie/aspm.c
+@@ -628,16 +628,6 @@ static void pcie_aspm_cap_init(struct pcie_link_state *link, int blacklist)
+
+ /* Setup initial capable state. Will be updated later */
+ link->aspm_capable = link->aspm_support;
+- /*
+- * If the downstream component has pci bridge function, don't
+- * do ASPM for now.
+- */
+- list_for_each_entry(child, &linkbus->devices, bus_list) {
+- if (pci_pcie_type(child) == PCI_EXP_TYPE_PCI_BRIDGE) {
+- link->aspm_disable = ASPM_STATE_ALL;
+- break;
+- }
+- }
+
+ /* Get and check endpoint acceptable latencies */
+ list_for_each_entry(child, &linkbus->devices, bus_list) {
+diff --git a/drivers/pci/pcie/ptm.c b/drivers/pci/pcie/ptm.c
+index 9361f3aa26ab..357a454cafa0 100644
+--- a/drivers/pci/pcie/ptm.c
++++ b/drivers/pci/pcie/ptm.c
+@@ -39,10 +39,6 @@ void pci_ptm_init(struct pci_dev *dev)
+ if (!pci_is_pcie(dev))
+ return;
+
+- pos = pci_find_ext_capability(dev, PCI_EXT_CAP_ID_PTM);
+- if (!pos)
+- return;
+-
+ /*
+ * Enable PTM only on interior devices (root ports, switch ports,
+ * etc.) on the assumption that it causes no link traffic until an
+@@ -52,6 +48,23 @@ void pci_ptm_init(struct pci_dev *dev)
+ pci_pcie_type(dev) == PCI_EXP_TYPE_RC_END))
+ return;
+
++ /*
++ * Switch Downstream Ports are not permitted to have a PTM
++ * capability; their PTM behavior is controlled by the Upstream
++ * Port (PCIe r5.0, sec 7.9.16).
++ */
++ ups = pci_upstream_bridge(dev);
++ if (pci_pcie_type(dev) == PCI_EXP_TYPE_DOWNSTREAM &&
++ ups && ups->ptm_enabled) {
++ dev->ptm_granularity = ups->ptm_granularity;
++ dev->ptm_enabled = 1;
++ return;
++ }
++
++ pos = pci_find_ext_capability(dev, PCI_EXT_CAP_ID_PTM);
++ if (!pos)
++ return;
++
+ pci_read_config_dword(dev, pos + PCI_PTM_CAP, &cap);
+ local_clock = (cap & PCI_PTM_GRANULARITY_MASK) >> 8;
+
+@@ -61,7 +74,6 @@ void pci_ptm_init(struct pci_dev *dev)
+ * the spec recommendation (PCIe r3.1, sec 7.32.3), select the
+ * furthest upstream Time Source as the PTM Root.
+ */
+- ups = pci_upstream_bridge(dev);
+ if (ups && ups->ptm_enabled) {
+ ctrl = PCI_PTM_CTRL_ENABLE;
+ if (ups->ptm_granularity == 0)
+diff --git a/drivers/pci/probe.c b/drivers/pci/probe.c
+index c7e3a8267521..b59a4b0f5f16 100644
+--- a/drivers/pci/probe.c
++++ b/drivers/pci/probe.c
+@@ -909,9 +909,10 @@ static int pci_register_host_bridge(struct pci_host_bridge *bridge)
+ goto free;
+
+ err = device_register(&bridge->dev);
+- if (err)
++ if (err) {
+ put_device(&bridge->dev);
+-
++ goto free;
++ }
+ bus->bridge = get_device(&bridge->dev);
+ device_enable_async_suspend(bus->bridge);
+ pci_set_bus_of_node(bus);
+diff --git a/drivers/pci/setup-res.c b/drivers/pci/setup-res.c
+index d8ca40a97693..d21fa04fa44d 100644
+--- a/drivers/pci/setup-res.c
++++ b/drivers/pci/setup-res.c
+@@ -439,10 +439,11 @@ int pci_resize_resource(struct pci_dev *dev, int resno, int size)
+ res->end = res->start + pci_rebar_size_to_bytes(size) - 1;
+
+ /* Check if the new config works by trying to assign everything. */
+- ret = pci_reassign_bridge_resources(dev->bus->self, res->flags);
+- if (ret)
+- goto error_resize;
+-
++ if (dev->bus->self) {
++ ret = pci_reassign_bridge_resources(dev->bus->self, res->flags);
++ if (ret)
++ goto error_resize;
++ }
+ return 0;
+
+ error_resize:
+diff --git a/drivers/perf/hisilicon/hisi_uncore_l3c_pmu.c b/drivers/perf/hisilicon/hisi_uncore_l3c_pmu.c
+index 1151e99b241c..479de4be99eb 100644
+--- a/drivers/perf/hisilicon/hisi_uncore_l3c_pmu.c
++++ b/drivers/perf/hisilicon/hisi_uncore_l3c_pmu.c
+@@ -35,7 +35,7 @@
+ /* L3C has 8-counters */
+ #define L3C_NR_COUNTERS 0x8
+
+-#define L3C_PERF_CTRL_EN 0x20000
++#define L3C_PERF_CTRL_EN 0x10000
+ #define L3C_EVTYPE_NONE 0xff
+
+ /*
+diff --git a/drivers/phy/broadcom/phy-bcm-sr-usb.c b/drivers/phy/broadcom/phy-bcm-sr-usb.c
+index fe6c58910e4c..7c7862b4f41f 100644
+--- a/drivers/phy/broadcom/phy-bcm-sr-usb.c
++++ b/drivers/phy/broadcom/phy-bcm-sr-usb.c
+@@ -16,8 +16,6 @@ enum bcm_usb_phy_version {
+ };
+
+ enum bcm_usb_phy_reg {
+- PLL_NDIV_FRAC,
+- PLL_NDIV_INT,
+ PLL_CTRL,
+ PHY_CTRL,
+ PHY_PLL_CTRL,
+@@ -31,18 +29,11 @@ static const u8 bcm_usb_combo_phy_ss[] = {
+ };
+
+ static const u8 bcm_usb_combo_phy_hs[] = {
+- [PLL_NDIV_FRAC] = 0x04,
+- [PLL_NDIV_INT] = 0x08,
+ [PLL_CTRL] = 0x0c,
+ [PHY_CTRL] = 0x10,
+ };
+
+-#define HSPLL_NDIV_INT_VAL 0x13
+-#define HSPLL_NDIV_FRAC_VAL 0x1005
+-
+ static const u8 bcm_usb_hs_phy[] = {
+- [PLL_NDIV_FRAC] = 0x0,
+- [PLL_NDIV_INT] = 0x4,
+ [PLL_CTRL] = 0x8,
+ [PHY_CTRL] = 0xc,
+ };
+@@ -52,7 +43,6 @@ enum pll_ctrl_bits {
+ SSPLL_SUSPEND_EN,
+ PLL_SEQ_START,
+ PLL_LOCK,
+- PLL_PDIV,
+ };
+
+ static const u8 u3pll_ctrl[] = {
+@@ -66,29 +56,17 @@ static const u8 u3pll_ctrl[] = {
+ #define HSPLL_PDIV_VAL 0x1
+
+ static const u8 u2pll_ctrl[] = {
+- [PLL_PDIV] = 1,
+ [PLL_RESETB] = 5,
+ [PLL_LOCK] = 6,
+ };
+
+ enum bcm_usb_phy_ctrl_bits {
+ CORERDY,
+- AFE_LDO_PWRDWNB,
+- AFE_PLL_PWRDWNB,
+- AFE_BG_PWRDWNB,
+- PHY_ISO,
+ PHY_RESETB,
+ PHY_PCTL,
+ };
+
+ #define PHY_PCTL_MASK 0xffff
+-/*
+- * 0x0806 of PCTL_VAL has below bits set
+- * BIT-8 : refclk divider 1
+- * BIT-3:2: device mode; mode is not effect
+- * BIT-1: soft reset active low
+- */
+-#define HSPHY_PCTL_VAL 0x0806
+ #define SSPHY_PCTL_VAL 0x0006
+
+ static const u8 u3phy_ctrl[] = {
+@@ -98,10 +76,6 @@ static const u8 u3phy_ctrl[] = {
+
+ static const u8 u2phy_ctrl[] = {
+ [CORERDY] = 0,
+- [AFE_LDO_PWRDWNB] = 1,
+- [AFE_PLL_PWRDWNB] = 2,
+- [AFE_BG_PWRDWNB] = 3,
+- [PHY_ISO] = 4,
+ [PHY_RESETB] = 5,
+ [PHY_PCTL] = 6,
+ };
+@@ -186,38 +160,13 @@ static int bcm_usb_hs_phy_init(struct bcm_usb_phy_cfg *phy_cfg)
+ int ret = 0;
+ void __iomem *regs = phy_cfg->regs;
+ const u8 *offset;
+- u32 rd_data;
+
+ offset = phy_cfg->offset;
+
+- writel(HSPLL_NDIV_INT_VAL, regs + offset[PLL_NDIV_INT]);
+- writel(HSPLL_NDIV_FRAC_VAL, regs + offset[PLL_NDIV_FRAC]);
+-
+- rd_data = readl(regs + offset[PLL_CTRL]);
+- rd_data &= ~(HSPLL_PDIV_MASK << u2pll_ctrl[PLL_PDIV]);
+- rd_data |= (HSPLL_PDIV_VAL << u2pll_ctrl[PLL_PDIV]);
+- writel(rd_data, regs + offset[PLL_CTRL]);
+-
+- /* Set Core Ready high */
+- bcm_usb_reg32_setbits(regs + offset[PHY_CTRL],
+- BIT(u2phy_ctrl[CORERDY]));
+-
+- /* Maximum timeout for Core Ready done */
+- msleep(30);
+-
++ bcm_usb_reg32_clrbits(regs + offset[PLL_CTRL],
++ BIT(u2pll_ctrl[PLL_RESETB]));
+ bcm_usb_reg32_setbits(regs + offset[PLL_CTRL],
+ BIT(u2pll_ctrl[PLL_RESETB]));
+- bcm_usb_reg32_setbits(regs + offset[PHY_CTRL],
+- BIT(u2phy_ctrl[PHY_RESETB]));
+-
+-
+- rd_data = readl(regs + offset[PHY_CTRL]);
+- rd_data &= ~(PHY_PCTL_MASK << u2phy_ctrl[PHY_PCTL]);
+- rd_data |= (HSPHY_PCTL_VAL << u2phy_ctrl[PHY_PCTL]);
+- writel(rd_data, regs + offset[PHY_CTRL]);
+-
+- /* Maximum timeout for PLL reset done */
+- msleep(30);
+
+ ret = bcm_usb_pll_lock_check(regs + offset[PLL_CTRL],
+ BIT(u2pll_ctrl[PLL_LOCK]));
+diff --git a/drivers/phy/cadence/phy-cadence-sierra.c b/drivers/phy/cadence/phy-cadence-sierra.c
+index a5c08e5bd2bf..faed652b73f7 100644
+--- a/drivers/phy/cadence/phy-cadence-sierra.c
++++ b/drivers/phy/cadence/phy-cadence-sierra.c
+@@ -685,10 +685,10 @@ static struct cdns_reg_pairs cdns_usb_cmn_regs_ext_ssc[] = {
+ static struct cdns_reg_pairs cdns_usb_ln_regs_ext_ssc[] = {
+ {0xFE0A, SIERRA_DET_STANDEC_A_PREG},
+ {0x000F, SIERRA_DET_STANDEC_B_PREG},
+- {0x00A5, SIERRA_DET_STANDEC_C_PREG},
++ {0x55A5, SIERRA_DET_STANDEC_C_PREG},
+ {0x69ad, SIERRA_DET_STANDEC_D_PREG},
+ {0x0241, SIERRA_DET_STANDEC_E_PREG},
+- {0x0010, SIERRA_PSM_LANECAL_DLY_A1_RESETS_PREG},
++ {0x0110, SIERRA_PSM_LANECAL_DLY_A1_RESETS_PREG},
+ {0x0014, SIERRA_PSM_A0IN_TMR_PREG},
+ {0xCF00, SIERRA_PSM_DIAG_PREG},
+ {0x001F, SIERRA_PSC_TX_A0_PREG},
+@@ -696,7 +696,7 @@ static struct cdns_reg_pairs cdns_usb_ln_regs_ext_ssc[] = {
+ {0x0003, SIERRA_PSC_TX_A2_PREG},
+ {0x0003, SIERRA_PSC_TX_A3_PREG},
+ {0x0FFF, SIERRA_PSC_RX_A0_PREG},
+- {0x0619, SIERRA_PSC_RX_A1_PREG},
++ {0x0003, SIERRA_PSC_RX_A1_PREG},
+ {0x0003, SIERRA_PSC_RX_A2_PREG},
+ {0x0001, SIERRA_PSC_RX_A3_PREG},
+ {0x0001, SIERRA_PLLCTRL_SUBRATE_PREG},
+@@ -705,19 +705,19 @@ static struct cdns_reg_pairs cdns_usb_ln_regs_ext_ssc[] = {
+ {0x00CA, SIERRA_CLKPATH_BIASTRIM_PREG},
+ {0x2512, SIERRA_DFE_BIASTRIM_PREG},
+ {0x0000, SIERRA_DRVCTRL_ATTEN_PREG},
+- {0x873E, SIERRA_CLKPATHCTRL_TMR_PREG},
+- {0x03CF, SIERRA_RX_CREQ_FLTR_A_MODE1_PREG},
+- {0x01CE, SIERRA_RX_CREQ_FLTR_A_MODE0_PREG},
++ {0x823E, SIERRA_CLKPATHCTRL_TMR_PREG},
++ {0x078F, SIERRA_RX_CREQ_FLTR_A_MODE1_PREG},
++ {0x078F, SIERRA_RX_CREQ_FLTR_A_MODE0_PREG},
+ {0x7B3C, SIERRA_CREQ_CCLKDET_MODE01_PREG},
+- {0x033F, SIERRA_RX_CTLE_MAINTENANCE_PREG},
++ {0x023C, SIERRA_RX_CTLE_MAINTENANCE_PREG},
+ {0x3232, SIERRA_CREQ_FSMCLK_SEL_PREG},
+ {0x0000, SIERRA_CREQ_EQ_CTRL_PREG},
+- {0x8000, SIERRA_CREQ_SPARE_PREG},
++ {0x0000, SIERRA_CREQ_SPARE_PREG},
+ {0xCC44, SIERRA_CREQ_EQ_OPEN_EYE_THRESH_PREG},
+- {0x8453, SIERRA_CTLELUT_CTRL_PREG},
+- {0x4110, SIERRA_DFE_ECMP_RATESEL_PREG},
+- {0x4110, SIERRA_DFE_SMP_RATESEL_PREG},
+- {0x0002, SIERRA_DEQ_PHALIGN_CTRL},
++ {0x8452, SIERRA_CTLELUT_CTRL_PREG},
++ {0x4121, SIERRA_DFE_ECMP_RATESEL_PREG},
++ {0x4121, SIERRA_DFE_SMP_RATESEL_PREG},
++ {0x0003, SIERRA_DEQ_PHALIGN_CTRL},
+ {0x3200, SIERRA_DEQ_CONCUR_CTRL1_PREG},
+ {0x5064, SIERRA_DEQ_CONCUR_CTRL2_PREG},
+ {0x0030, SIERRA_DEQ_EPIPWR_CTRL2_PREG},
+@@ -725,7 +725,7 @@ static struct cdns_reg_pairs cdns_usb_ln_regs_ext_ssc[] = {
+ {0x5A5A, SIERRA_DEQ_ERRCMP_CTRL_PREG},
+ {0x02F5, SIERRA_DEQ_OFFSET_CTRL_PREG},
+ {0x02F5, SIERRA_DEQ_GAIN_CTRL_PREG},
+- {0x9A8A, SIERRA_DEQ_VGATUNE_CTRL_PREG},
++ {0x9999, SIERRA_DEQ_VGATUNE_CTRL_PREG},
+ {0x0014, SIERRA_DEQ_GLUT0},
+ {0x0014, SIERRA_DEQ_GLUT1},
+ {0x0014, SIERRA_DEQ_GLUT2},
+@@ -772,6 +772,7 @@ static struct cdns_reg_pairs cdns_usb_ln_regs_ext_ssc[] = {
+ {0x000F, SIERRA_LFPSFILT_NS_PREG},
+ {0x0009, SIERRA_LFPSFILT_RD_PREG},
+ {0x0001, SIERRA_LFPSFILT_MP_PREG},
++ {0x6013, SIERRA_SIGDET_SUPPORT_PREG},
+ {0x8013, SIERRA_SDFILT_H2L_A_PREG},
+ {0x8009, SIERRA_SDFILT_L2H_PREG},
+ {0x0024, SIERRA_RXBUFFER_CTLECTRL_PREG},
+diff --git a/drivers/phy/ti/phy-j721e-wiz.c b/drivers/phy/ti/phy-j721e-wiz.c
+index 7b51045df783..c8e4ff341cef 100644
+--- a/drivers/phy/ti/phy-j721e-wiz.c
++++ b/drivers/phy/ti/phy-j721e-wiz.c
+@@ -794,8 +794,10 @@ static int wiz_probe(struct platform_device *pdev)
+ }
+
+ base = devm_ioremap(dev, res.start, resource_size(&res));
+- if (!base)
++ if (!base) {
++ ret = -ENOMEM;
+ goto err_addr_to_resource;
++ }
+
+ regmap = devm_regmap_init_mmio(dev, base, &wiz_regmap_config);
+ if (IS_ERR(regmap)) {
+@@ -812,6 +814,7 @@ static int wiz_probe(struct platform_device *pdev)
+
+ if (num_lanes > WIZ_MAX_LANES) {
+ dev_err(dev, "Cannot support %d lanes\n", num_lanes);
++ ret = -ENODEV;
+ goto err_addr_to_resource;
+ }
+
+@@ -897,6 +900,7 @@ static int wiz_probe(struct platform_device *pdev)
+ serdes_pdev = of_platform_device_create(child_node, NULL, dev);
+ if (!serdes_pdev) {
+ dev_WARN(dev, "Unable to create SERDES platform device\n");
++ ret = -ENOMEM;
+ goto err_pdev_create;
+ }
+ wiz->serdes_pdev = serdes_pdev;
+diff --git a/drivers/pinctrl/bcm/pinctrl-bcm281xx.c b/drivers/pinctrl/bcm/pinctrl-bcm281xx.c
+index f690fc5cd688..71e666178300 100644
+--- a/drivers/pinctrl/bcm/pinctrl-bcm281xx.c
++++ b/drivers/pinctrl/bcm/pinctrl-bcm281xx.c
+@@ -1406,7 +1406,7 @@ static int __init bcm281xx_pinctrl_probe(struct platform_device *pdev)
+ pdata->reg_base = devm_platform_ioremap_resource(pdev, 0);
+ if (IS_ERR(pdata->reg_base)) {
+ dev_err(&pdev->dev, "Failed to ioremap MEM resource\n");
+- return -ENODEV;
++ return PTR_ERR(pdata->reg_base);
+ }
+
+ /* Initialize the dynamic part of pinctrl_desc */
+diff --git a/drivers/pinctrl/freescale/pinctrl-imx.c b/drivers/pinctrl/freescale/pinctrl-imx.c
+index 9f42036c5fbb..1f81569c7ae3 100644
+--- a/drivers/pinctrl/freescale/pinctrl-imx.c
++++ b/drivers/pinctrl/freescale/pinctrl-imx.c
+@@ -774,16 +774,6 @@ static int imx_pinctrl_probe_dt(struct platform_device *pdev,
+ return 0;
+ }
+
+-/*
+- * imx_free_resources() - free memory used by this driver
+- * @info: info driver instance
+- */
+-static void imx_free_resources(struct imx_pinctrl *ipctl)
+-{
+- if (ipctl->pctl)
+- pinctrl_unregister(ipctl->pctl);
+-}
+-
+ int imx_pinctrl_probe(struct platform_device *pdev,
+ const struct imx_pinctrl_soc_info *info)
+ {
+@@ -874,23 +864,18 @@ int imx_pinctrl_probe(struct platform_device *pdev,
+ &ipctl->pctl);
+ if (ret) {
+ dev_err(&pdev->dev, "could not register IMX pinctrl driver\n");
+- goto free;
++ return ret;
+ }
+
+ ret = imx_pinctrl_probe_dt(pdev, ipctl);
+ if (ret) {
+ dev_err(&pdev->dev, "fail to probe dt properties\n");
+- goto free;
++ return ret;
+ }
+
+ dev_info(&pdev->dev, "initialized IMX pinctrl driver\n");
+
+ return pinctrl_enable(ipctl->pctl);
+-
+-free:
+- imx_free_resources(ipctl);
+-
+- return ret;
+ }
+
+ static int __maybe_unused imx_pinctrl_suspend(struct device *dev)
+diff --git a/drivers/pinctrl/freescale/pinctrl-imx1-core.c b/drivers/pinctrl/freescale/pinctrl-imx1-core.c
+index c00d0022d311..421f7d1886e5 100644
+--- a/drivers/pinctrl/freescale/pinctrl-imx1-core.c
++++ b/drivers/pinctrl/freescale/pinctrl-imx1-core.c
+@@ -638,7 +638,6 @@ int imx1_pinctrl_core_probe(struct platform_device *pdev,
+
+ ret = of_platform_populate(pdev->dev.of_node, NULL, NULL, &pdev->dev);
+ if (ret) {
+- pinctrl_unregister(ipctl->pctl);
+ dev_err(&pdev->dev, "Failed to populate subdevices\n");
+ return ret;
+ }
+diff --git a/drivers/pinctrl/pinctrl-at91-pio4.c b/drivers/pinctrl/pinctrl-at91-pio4.c
+index 694912409fd9..54222ccddfb1 100644
+--- a/drivers/pinctrl/pinctrl-at91-pio4.c
++++ b/drivers/pinctrl/pinctrl-at91-pio4.c
+@@ -1019,7 +1019,7 @@ static int atmel_pinctrl_probe(struct platform_device *pdev)
+
+ atmel_pioctrl->reg_base = devm_platform_ioremap_resource(pdev, 0);
+ if (IS_ERR(atmel_pioctrl->reg_base))
+- return -EINVAL;
++ return PTR_ERR(atmel_pioctrl->reg_base);
+
+ atmel_pioctrl->clk = devm_clk_get(dev, NULL);
+ if (IS_ERR(atmel_pioctrl->clk)) {
+diff --git a/drivers/pinctrl/pinctrl-ocelot.c b/drivers/pinctrl/pinctrl-ocelot.c
+index ed8eac6c1494..b1bf46ec207f 100644
+--- a/drivers/pinctrl/pinctrl-ocelot.c
++++ b/drivers/pinctrl/pinctrl-ocelot.c
+@@ -714,11 +714,12 @@ static void ocelot_irq_handler(struct irq_desc *desc)
+ struct irq_chip *parent_chip = irq_desc_get_chip(desc);
+ struct gpio_chip *chip = irq_desc_get_handler_data(desc);
+ struct ocelot_pinctrl *info = gpiochip_get_data(chip);
++ unsigned int id_reg = OCELOT_GPIO_INTR_IDENT * info->stride;
+ unsigned int reg = 0, irq, i;
+ unsigned long irqs;
+
+ for (i = 0; i < info->stride; i++) {
+-	regmap_read(info->map, OCELOT_GPIO_INTR_IDENT + 4 * i, &reg);
++		regmap_read(info->map, id_reg + 4 * i, &reg);
+ if (!reg)
+ continue;
+
+@@ -751,21 +752,21 @@ static int ocelot_gpiochip_register(struct platform_device *pdev,
+ gc->of_node = info->dev->of_node;
+ gc->label = "ocelot-gpio";
+
+- irq = irq_of_parse_and_map(pdev->dev.of_node, 0);
+- if (irq <= 0)
+- return irq;
+-
+- girq = &gc->irq;
+- girq->chip = &ocelot_irqchip;
+- girq->parent_handler = ocelot_irq_handler;
+- girq->num_parents = 1;
+- girq->parents = devm_kcalloc(&pdev->dev, 1, sizeof(*girq->parents),
+- GFP_KERNEL);
+- if (!girq->parents)
+- return -ENOMEM;
+- girq->parents[0] = irq;
+- girq->default_type = IRQ_TYPE_NONE;
+- girq->handler = handle_edge_irq;
++ irq = irq_of_parse_and_map(gc->of_node, 0);
++ if (irq) {
++ girq = &gc->irq;
++ girq->chip = &ocelot_irqchip;
++ girq->parent_handler = ocelot_irq_handler;
++ girq->num_parents = 1;
++ girq->parents = devm_kcalloc(&pdev->dev, 1,
++ sizeof(*girq->parents),
++ GFP_KERNEL);
++ if (!girq->parents)
++ return -ENOMEM;
++ girq->parents[0] = irq;
++ girq->default_type = IRQ_TYPE_NONE;
++ girq->handler = handle_edge_irq;
++ }
+
+ ret = devm_gpiochip_add_data(&pdev->dev, gc, info);
+ if (ret)
+diff --git a/drivers/pinctrl/pinctrl-rockchip.c b/drivers/pinctrl/pinctrl-rockchip.c
+index 098951346339..d7869b636889 100644
+--- a/drivers/pinctrl/pinctrl-rockchip.c
++++ b/drivers/pinctrl/pinctrl-rockchip.c
+@@ -508,8 +508,8 @@ static int rockchip_dt_node_to_map(struct pinctrl_dev *pctldev,
+ }
+
+ map_num += grp->npins;
+- new_map = devm_kcalloc(pctldev->dev, map_num, sizeof(*new_map),
+- GFP_KERNEL);
++
++ new_map = kcalloc(map_num, sizeof(*new_map), GFP_KERNEL);
+ if (!new_map)
+ return -ENOMEM;
+
+@@ -519,7 +519,7 @@ static int rockchip_dt_node_to_map(struct pinctrl_dev *pctldev,
+ /* create mux map */
+ parent = of_get_parent(np);
+ if (!parent) {
+- devm_kfree(pctldev->dev, new_map);
++ kfree(new_map);
+ return -EINVAL;
+ }
+ new_map[0].type = PIN_MAP_TYPE_MUX_GROUP;
+@@ -546,6 +546,7 @@ static int rockchip_dt_node_to_map(struct pinctrl_dev *pctldev,
+ static void rockchip_dt_free_map(struct pinctrl_dev *pctldev,
+ struct pinctrl_map *map, unsigned num_maps)
+ {
++ kfree(map);
+ }
+
+ static const struct pinctrl_ops rockchip_pctrl_ops = {
+diff --git a/drivers/pinctrl/pinctrl-rza1.c b/drivers/pinctrl/pinctrl-rza1.c
+index da2d8365c690..ff4a7fb518bb 100644
+--- a/drivers/pinctrl/pinctrl-rza1.c
++++ b/drivers/pinctrl/pinctrl-rza1.c
+@@ -418,7 +418,7 @@ static const struct rza1_bidir_entry rza1l_bidir_entries[RZA1_NPORTS] = {
+ };
+
+ static const struct rza1_swio_entry rza1l_swio_entries[] = {
+- [0] = { ARRAY_SIZE(rza1h_swio_pins), rza1h_swio_pins },
++ [0] = { ARRAY_SIZE(rza1l_swio_pins), rza1l_swio_pins },
+ };
+
+ /* RZ/A1L (r7s72102x) pinmux flags table */
+diff --git a/drivers/pinctrl/qcom/pinctrl-ipq6018.c b/drivers/pinctrl/qcom/pinctrl-ipq6018.c
+index 38c33a778cb8..ec50a3b4bd16 100644
+--- a/drivers/pinctrl/qcom/pinctrl-ipq6018.c
++++ b/drivers/pinctrl/qcom/pinctrl-ipq6018.c
+@@ -367,7 +367,8 @@ static const char * const wci20_groups[] = {
+
+ static const char * const qpic_pad_groups[] = {
+ "gpio0", "gpio1", "gpio2", "gpio3", "gpio4", "gpio9", "gpio10",
+- "gpio11", "gpio17",
++ "gpio11", "gpio17", "gpio15", "gpio12", "gpio13", "gpio14", "gpio5",
++ "gpio6", "gpio7", "gpio8",
+ };
+
+ static const char * const burn0_groups[] = {
+diff --git a/drivers/pinctrl/sirf/pinctrl-sirf.c b/drivers/pinctrl/sirf/pinctrl-sirf.c
+index 1ebcb957c654..63a287d5795f 100644
+--- a/drivers/pinctrl/sirf/pinctrl-sirf.c
++++ b/drivers/pinctrl/sirf/pinctrl-sirf.c
+@@ -794,13 +794,17 @@ static int sirfsoc_gpio_probe(struct device_node *np)
+ return -ENODEV;
+
+ sgpio = devm_kzalloc(&pdev->dev, sizeof(*sgpio), GFP_KERNEL);
+- if (!sgpio)
+- return -ENOMEM;
++ if (!sgpio) {
++ err = -ENOMEM;
++ goto out_put_device;
++ }
+ spin_lock_init(&sgpio->lock);
+
+ regs = of_iomap(np, 0);
+- if (!regs)
+- return -ENOMEM;
++ if (!regs) {
++ err = -ENOMEM;
++ goto out_put_device;
++ }
+
+ sgpio->chip.gc.request = sirfsoc_gpio_request;
+ sgpio->chip.gc.free = sirfsoc_gpio_free;
+@@ -824,8 +828,10 @@ static int sirfsoc_gpio_probe(struct device_node *np)
+ girq->parents = devm_kcalloc(&pdev->dev, SIRFSOC_GPIO_NO_OF_BANKS,
+ sizeof(*girq->parents),
+ GFP_KERNEL);
+- if (!girq->parents)
+- return -ENOMEM;
++ if (!girq->parents) {
++ err = -ENOMEM;
++ goto out_put_device;
++ }
+ for (i = 0; i < SIRFSOC_GPIO_NO_OF_BANKS; i++) {
+ bank = &sgpio->sgpio_bank[i];
+ spin_lock_init(&bank->lock);
+@@ -868,6 +874,8 @@ out_no_range:
+ gpiochip_remove(&sgpio->chip.gc);
+ out:
+ iounmap(regs);
++out_put_device:
++ put_device(&pdev->dev);
+ return err;
+ }
+
+diff --git a/drivers/power/supply/Kconfig b/drivers/power/supply/Kconfig
+index f3424fdce341..d37ec0d03237 100644
+--- a/drivers/power/supply/Kconfig
++++ b/drivers/power/supply/Kconfig
+@@ -577,7 +577,7 @@ config CHARGER_BQ24257
+ tristate "TI BQ24250/24251/24257 battery charger driver"
+ depends on I2C
+ depends on GPIOLIB || COMPILE_TEST
+- depends on REGMAP_I2C
++ select REGMAP_I2C
+ help
+ Say Y to enable support for the TI BQ24250, BQ24251, and BQ24257 battery
+ chargers.
+diff --git a/drivers/power/supply/lp8788-charger.c b/drivers/power/supply/lp8788-charger.c
+index 84a206f42a8e..e7931ffb7151 100644
+--- a/drivers/power/supply/lp8788-charger.c
++++ b/drivers/power/supply/lp8788-charger.c
+@@ -572,27 +572,14 @@ static void lp8788_setup_adc_channel(struct device *dev,
+ return;
+
+ /* ADC channel for battery voltage */
+- chan = iio_channel_get(dev, pdata->adc_vbatt);
++ chan = devm_iio_channel_get(dev, pdata->adc_vbatt);
+ pchg->chan[LP8788_VBATT] = IS_ERR(chan) ? NULL : chan;
+
+ /* ADC channel for battery temperature */
+- chan = iio_channel_get(dev, pdata->adc_batt_temp);
++ chan = devm_iio_channel_get(dev, pdata->adc_batt_temp);
+ pchg->chan[LP8788_BATT_TEMP] = IS_ERR(chan) ? NULL : chan;
+ }
+
+-static void lp8788_release_adc_channel(struct lp8788_charger *pchg)
+-{
+- int i;
+-
+- for (i = 0; i < LP8788_NUM_CHG_ADC; i++) {
+- if (!pchg->chan[i])
+- continue;
+-
+- iio_channel_release(pchg->chan[i]);
+- pchg->chan[i] = NULL;
+- }
+-}
+-
+ static ssize_t lp8788_show_charger_status(struct device *dev,
+ struct device_attribute *attr, char *buf)
+ {
+@@ -735,7 +722,6 @@ static int lp8788_charger_remove(struct platform_device *pdev)
+ flush_work(&pchg->charger_work);
+ lp8788_irq_unregister(pdev, pchg);
+ lp8788_psy_unregister(pchg);
+- lp8788_release_adc_channel(pchg);
+
+ return 0;
+ }
+diff --git a/drivers/power/supply/smb347-charger.c b/drivers/power/supply/smb347-charger.c
+index c1d124b8be0c..d102921b3ab2 100644
+--- a/drivers/power/supply/smb347-charger.c
++++ b/drivers/power/supply/smb347-charger.c
+@@ -1138,6 +1138,7 @@ static bool smb347_volatile_reg(struct device *dev, unsigned int reg)
+ switch (reg) {
+ case IRQSTAT_A:
+ case IRQSTAT_C:
++ case IRQSTAT_D:
+ case IRQSTAT_E:
+ case IRQSTAT_F:
+ case STAT_A:
+diff --git a/drivers/pwm/core.c b/drivers/pwm/core.c
+index 9973c442b455..6b3cbc0490c6 100644
+--- a/drivers/pwm/core.c
++++ b/drivers/pwm/core.c
+@@ -121,7 +121,7 @@ static int pwm_device_request(struct pwm_device *pwm, const char *label)
+ pwm->chip->ops->get_state(pwm->chip, pwm, &pwm->state);
+ trace_pwm_get(pwm, &pwm->state);
+
+- if (IS_ENABLED(PWM_DEBUG))
++ if (IS_ENABLED(CONFIG_PWM_DEBUG))
+ pwm->last = pwm->state;
+ }
+
+diff --git a/drivers/pwm/pwm-img.c b/drivers/pwm/pwm-img.c
+index c9e57bd109fb..599a0f66a384 100644
+--- a/drivers/pwm/pwm-img.c
++++ b/drivers/pwm/pwm-img.c
+@@ -129,8 +129,10 @@ static int img_pwm_config(struct pwm_chip *chip, struct pwm_device *pwm,
+ duty = DIV_ROUND_UP(timebase * duty_ns, period_ns);
+
+ ret = pm_runtime_get_sync(chip->dev);
+- if (ret < 0)
++ if (ret < 0) {
++ pm_runtime_put_autosuspend(chip->dev);
+ return ret;
++ }
+
+ val = img_pwm_readl(pwm_chip, PWM_CTRL_CFG);
+ val &= ~(PWM_CTRL_CFG_DIV_MASK << PWM_CTRL_CFG_DIV_SHIFT(pwm->hwpwm));
+@@ -331,8 +333,10 @@ static int img_pwm_remove(struct platform_device *pdev)
+ int ret;
+
+ ret = pm_runtime_get_sync(&pdev->dev);
+- if (ret < 0)
++ if (ret < 0) {
++ pm_runtime_put(&pdev->dev);
+ return ret;
++ }
+
+ for (i = 0; i < pwm_chip->chip.npwm; i++) {
+ val = img_pwm_readl(pwm_chip, PWM_CTRL_CFG);
+diff --git a/drivers/pwm/pwm-imx27.c b/drivers/pwm/pwm-imx27.c
+index a6e40d4c485f..732a6f3701e8 100644
+--- a/drivers/pwm/pwm-imx27.c
++++ b/drivers/pwm/pwm-imx27.c
+@@ -150,13 +150,12 @@ static void pwm_imx27_get_state(struct pwm_chip *chip,
+
+ prescaler = MX3_PWMCR_PRESCALER_GET(val);
+ pwm_clk = clk_get_rate(imx->clk_per);
+- pwm_clk = DIV_ROUND_CLOSEST_ULL(pwm_clk, prescaler);
+ val = readl(imx->mmio_base + MX3_PWMPR);
+ period = val >= MX3_PWMPR_MAX ? MX3_PWMPR_MAX : val;
+
+ /* PWMOUT (Hz) = PWMCLK / (PWMPR + 2) */
+- tmp = NSEC_PER_SEC * (u64)(period + 2);
+- state->period = DIV_ROUND_CLOSEST_ULL(tmp, pwm_clk);
++ tmp = NSEC_PER_SEC * (u64)(period + 2) * prescaler;
++ state->period = DIV_ROUND_UP_ULL(tmp, pwm_clk);
+
+ /*
+ * PWMSAR can be read only if PWM is enabled. If the PWM is disabled,
+@@ -167,8 +166,8 @@ static void pwm_imx27_get_state(struct pwm_chip *chip,
+ else
+ val = imx->duty_cycle;
+
+- tmp = NSEC_PER_SEC * (u64)(val);
+- state->duty_cycle = DIV_ROUND_CLOSEST_ULL(tmp, pwm_clk);
++ tmp = NSEC_PER_SEC * (u64)(val) * prescaler;
++ state->duty_cycle = DIV_ROUND_UP_ULL(tmp, pwm_clk);
+
+ pwm_imx27_clk_disable_unprepare(imx);
+ }
+@@ -220,22 +219,23 @@ static int pwm_imx27_apply(struct pwm_chip *chip, struct pwm_device *pwm,
+ struct pwm_imx27_chip *imx = to_pwm_imx27_chip(chip);
+ struct pwm_state cstate;
+ unsigned long long c;
++ unsigned long long clkrate;
+ int ret;
+ u32 cr;
+
+ pwm_get_state(pwm, &cstate);
+
+- c = clk_get_rate(imx->clk_per);
+- c *= state->period;
++ clkrate = clk_get_rate(imx->clk_per);
++ c = clkrate * state->period;
+
+- do_div(c, 1000000000);
++ do_div(c, NSEC_PER_SEC);
+ period_cycles = c;
+
+ prescale = period_cycles / 0x10000 + 1;
+
+ period_cycles /= prescale;
+- c = (unsigned long long)period_cycles * state->duty_cycle;
+- do_div(c, state->period);
++ c = clkrate * state->duty_cycle;
++ do_div(c, NSEC_PER_SEC * prescale);
+ duty_cycles = c;
+
+ /*
+diff --git a/drivers/remoteproc/mtk_scp.c b/drivers/remoteproc/mtk_scp.c
+index 2bead57c9cf9..ac13e7b046a6 100644
+--- a/drivers/remoteproc/mtk_scp.c
++++ b/drivers/remoteproc/mtk_scp.c
+@@ -132,8 +132,8 @@ static int scp_ipi_init(struct mtk_scp *scp)
+ (struct mtk_share_obj __iomem *)(scp->sram_base + recv_offset);
+ scp->send_buf =
+ (struct mtk_share_obj __iomem *)(scp->sram_base + send_offset);
+- memset_io(scp->recv_buf, 0, sizeof(scp->recv_buf));
+- memset_io(scp->send_buf, 0, sizeof(scp->send_buf));
++ memset_io(scp->recv_buf, 0, sizeof(*scp->recv_buf));
++ memset_io(scp->send_buf, 0, sizeof(*scp->send_buf));
+
+ return 0;
+ }
+diff --git a/drivers/remoteproc/qcom_q6v5_mss.c b/drivers/remoteproc/qcom_q6v5_mss.c
+index 5475d4f808a8..629abcee2c1d 100644
+--- a/drivers/remoteproc/qcom_q6v5_mss.c
++++ b/drivers/remoteproc/qcom_q6v5_mss.c
+@@ -69,13 +69,9 @@
+ #define AXI_HALTREQ_REG 0x0
+ #define AXI_HALTACK_REG 0x4
+ #define AXI_IDLE_REG 0x8
+-#define NAV_AXI_HALTREQ_BIT BIT(0)
+-#define NAV_AXI_HALTACK_BIT BIT(1)
+-#define NAV_AXI_IDLE_BIT BIT(2)
+ #define AXI_GATING_VALID_OVERRIDE BIT(0)
+
+ #define HALT_ACK_TIMEOUT_US 100000
+-#define NAV_HALT_ACK_TIMEOUT_US 200
+
+ /* QDSP6SS_RESET */
+ #define Q6SS_STOP_CORE BIT(0)
+@@ -143,7 +139,7 @@ struct rproc_hexagon_res {
+ int version;
+ bool need_mem_protection;
+ bool has_alt_reset;
+- bool has_halt_nav;
++ bool has_spare_reg;
+ };
+
+ struct q6v5 {
+@@ -154,13 +150,11 @@ struct q6v5 {
+ void __iomem *rmb_base;
+
+ struct regmap *halt_map;
+- struct regmap *halt_nav_map;
+ struct regmap *conn_map;
+
+ u32 halt_q6;
+ u32 halt_modem;
+ u32 halt_nc;
+- u32 halt_nav;
+ u32 conn_box;
+
+ struct reset_control *mss_restart;
+@@ -206,7 +200,7 @@ struct q6v5 {
+ struct qcom_sysmon *sysmon;
+ bool need_mem_protection;
+ bool has_alt_reset;
+- bool has_halt_nav;
++ bool has_spare_reg;
+ int mpss_perm;
+ int mba_perm;
+ const char *hexagon_mdt_image;
+@@ -427,21 +421,19 @@ static int q6v5_reset_assert(struct q6v5 *qproc)
+ reset_control_assert(qproc->pdc_reset);
+ ret = reset_control_reset(qproc->mss_restart);
+ reset_control_deassert(qproc->pdc_reset);
+- } else if (qproc->has_halt_nav) {
++ } else if (qproc->has_spare_reg) {
+ /*
+ * When the AXI pipeline is being reset with the Q6 modem partly
+ * operational there is possibility of AXI valid signal to
+ * glitch, leading to spurious transactions and Q6 hangs. A work
+ * around is employed by asserting the AXI_GATING_VALID_OVERRIDE
+- * BIT before triggering Q6 MSS reset. Both the HALTREQ and
+- * AXI_GATING_VALID_OVERRIDE are withdrawn post MSS assert
+- * followed by a MSS deassert, while holding the PDC reset.
++ * BIT before triggering Q6 MSS reset. AXI_GATING_VALID_OVERRIDE
++ * is withdrawn post MSS assert followed by a MSS deassert,
++ * while holding the PDC reset.
+ */
+ reset_control_assert(qproc->pdc_reset);
+ regmap_update_bits(qproc->conn_map, qproc->conn_box,
+ AXI_GATING_VALID_OVERRIDE, 1);
+- regmap_update_bits(qproc->halt_nav_map, qproc->halt_nav,
+- NAV_AXI_HALTREQ_BIT, 0);
+ reset_control_assert(qproc->mss_restart);
+ reset_control_deassert(qproc->pdc_reset);
+ regmap_update_bits(qproc->conn_map, qproc->conn_box,
+@@ -464,7 +456,7 @@ static int q6v5_reset_deassert(struct q6v5 *qproc)
+ ret = reset_control_reset(qproc->mss_restart);
+ writel(0, qproc->rmb_base + RMB_MBA_ALT_RESET);
+ reset_control_deassert(qproc->pdc_reset);
+- } else if (qproc->has_halt_nav) {
++ } else if (qproc->has_spare_reg) {
+ ret = reset_control_reset(qproc->mss_restart);
+ } else {
+ ret = reset_control_deassert(qproc->mss_restart);
+@@ -761,32 +753,6 @@ static void q6v5proc_halt_axi_port(struct q6v5 *qproc,
+ regmap_write(halt_map, offset + AXI_HALTREQ_REG, 0);
+ }
+
+-static void q6v5proc_halt_nav_axi_port(struct q6v5 *qproc,
+- struct regmap *halt_map,
+- u32 offset)
+-{
+- unsigned int val;
+- int ret;
+-
+- /* Check if we're already idle */
+- ret = regmap_read(halt_map, offset, &val);
+- if (!ret && (val & NAV_AXI_IDLE_BIT))
+- return;
+-
+- /* Assert halt request */
+- regmap_update_bits(halt_map, offset, NAV_AXI_HALTREQ_BIT,
+- NAV_AXI_HALTREQ_BIT);
+-
+- /* Wait for halt ack*/
+- regmap_read_poll_timeout(halt_map, offset, val,
+- (val & NAV_AXI_HALTACK_BIT),
+- 5, NAV_HALT_ACK_TIMEOUT_US);
+-
+- ret = regmap_read(halt_map, offset, &val);
+- if (ret || !(val & NAV_AXI_IDLE_BIT))
+- dev_err(qproc->dev, "port failed halt\n");
+-}
+-
+ static int q6v5_mpss_init_image(struct q6v5 *qproc, const struct firmware *fw)
+ {
+ unsigned long dma_attrs = DMA_ATTR_FORCE_CONTIGUOUS;
+@@ -951,9 +917,6 @@ static int q6v5_mba_load(struct q6v5 *qproc)
+ halt_axi_ports:
+ q6v5proc_halt_axi_port(qproc, qproc->halt_map, qproc->halt_q6);
+ q6v5proc_halt_axi_port(qproc, qproc->halt_map, qproc->halt_modem);
+- if (qproc->has_halt_nav)
+- q6v5proc_halt_nav_axi_port(qproc, qproc->halt_nav_map,
+- qproc->halt_nav);
+ q6v5proc_halt_axi_port(qproc, qproc->halt_map, qproc->halt_nc);
+
+ reclaim_mba:
+@@ -1001,9 +964,6 @@ static void q6v5_mba_reclaim(struct q6v5 *qproc)
+
+ q6v5proc_halt_axi_port(qproc, qproc->halt_map, qproc->halt_q6);
+ q6v5proc_halt_axi_port(qproc, qproc->halt_map, qproc->halt_modem);
+- if (qproc->has_halt_nav)
+- q6v5proc_halt_nav_axi_port(qproc, qproc->halt_nav_map,
+- qproc->halt_nav);
+ q6v5proc_halt_axi_port(qproc, qproc->halt_map, qproc->halt_nc);
+ if (qproc->version == MSS_MSM8996) {
+ /*
+@@ -1156,7 +1116,13 @@ static int q6v5_mpss_load(struct q6v5 *qproc)
+ goto release_firmware;
+ }
+
+- ptr = qproc->mpss_region + offset;
++ ptr = ioremap_wc(qproc->mpss_phys + offset, phdr->p_memsz);
++ if (!ptr) {
++ dev_err(qproc->dev,
++ "unable to map memory region: %pa+%zx-%x\n",
++ &qproc->mpss_phys, offset, phdr->p_memsz);
++ goto release_firmware;
++ }
+
+ if (phdr->p_filesz && phdr->p_offset < fw->size) {
+ /* Firmware is large enough to be non-split */
+@@ -1165,6 +1131,7 @@ static int q6v5_mpss_load(struct q6v5 *qproc)
+ "failed to load segment %d from truncated file %s\n",
+ i, fw_name);
+ ret = -EINVAL;
++ iounmap(ptr);
+ goto release_firmware;
+ }
+
+@@ -1175,6 +1142,7 @@ static int q6v5_mpss_load(struct q6v5 *qproc)
+ ret = request_firmware(&seg_fw, fw_name, qproc->dev);
+ if (ret) {
+ dev_err(qproc->dev, "failed to load %s\n", fw_name);
++ iounmap(ptr);
+ goto release_firmware;
+ }
+
+@@ -1187,6 +1155,7 @@ static int q6v5_mpss_load(struct q6v5 *qproc)
+ memset(ptr + phdr->p_filesz, 0,
+ phdr->p_memsz - phdr->p_filesz);
+ }
++ iounmap(ptr);
+ size += phdr->p_memsz;
+
+ code_length = readl(qproc->rmb_base + RMB_PMI_CODE_LENGTH_REG);
+@@ -1236,7 +1205,8 @@ static void qcom_q6v5_dump_segment(struct rproc *rproc,
+ int ret = 0;
+ struct q6v5 *qproc = rproc->priv;
+ unsigned long mask = BIT((unsigned long)segment->priv);
+- void *ptr = rproc_da_to_va(rproc, segment->da, segment->size);
++ int offset = segment->da - qproc->mpss_reloc;
++ void *ptr = NULL;
+
+ /* Unlock mba before copying segments */
+ if (!qproc->dump_mba_loaded) {
+@@ -1250,10 +1220,15 @@ static void qcom_q6v5_dump_segment(struct rproc *rproc,
+ }
+ }
+
+- if (!ptr || ret)
+- memset(dest, 0xff, segment->size);
+- else
++ if (!ret)
++ ptr = ioremap_wc(qproc->mpss_phys + offset, segment->size);
++
++ if (ptr) {
+ memcpy(dest, ptr, segment->size);
++ iounmap(ptr);
++ } else {
++ memset(dest, 0xff, segment->size);
++ }
+
+ qproc->dump_segment_mask |= mask;
+
+@@ -1432,36 +1407,12 @@ static int q6v5_init_mem(struct q6v5 *qproc, struct platform_device *pdev)
+ qproc->halt_modem = args.args[1];
+ qproc->halt_nc = args.args[2];
+
+- if (qproc->has_halt_nav) {
+- struct platform_device *nav_pdev;
+-
++ if (qproc->has_spare_reg) {
+ ret = of_parse_phandle_with_fixed_args(pdev->dev.of_node,
+- "qcom,halt-nav-regs",
++ "qcom,spare-regs",
+ 1, 0, &args);
+ if (ret < 0) {
+- dev_err(&pdev->dev, "failed to parse halt-nav-regs\n");
+- return -EINVAL;
+- }
+-
+- nav_pdev = of_find_device_by_node(args.np);
+- of_node_put(args.np);
+- if (!nav_pdev) {
+- dev_err(&pdev->dev, "failed to get mss clock device\n");
+- return -EPROBE_DEFER;
+- }
+-
+- qproc->halt_nav_map = dev_get_regmap(&nav_pdev->dev, NULL);
+- if (!qproc->halt_nav_map) {
+- dev_err(&pdev->dev, "failed to get map from device\n");
+- return -EINVAL;
+- }
+- qproc->halt_nav = args.args[0];
+-
+- ret = of_parse_phandle_with_fixed_args(pdev->dev.of_node,
+- "qcom,halt-nav-regs",
+- 1, 1, &args);
+- if (ret < 0) {
+- dev_err(&pdev->dev, "failed to parse halt-nav-regs\n");
++ dev_err(&pdev->dev, "failed to parse spare-regs\n");
+ return -EINVAL;
+ }
+
+@@ -1547,7 +1498,7 @@ static int q6v5_init_reset(struct q6v5 *qproc)
+ return PTR_ERR(qproc->mss_restart);
+ }
+
+- if (qproc->has_alt_reset || qproc->has_halt_nav) {
++ if (qproc->has_alt_reset || qproc->has_spare_reg) {
+ qproc->pdc_reset = devm_reset_control_get_exclusive(qproc->dev,
+ "pdc_reset");
+ if (IS_ERR(qproc->pdc_reset)) {
+@@ -1595,12 +1546,6 @@ static int q6v5_alloc_memory_region(struct q6v5 *qproc)
+
+ qproc->mpss_phys = qproc->mpss_reloc = r.start;
+ qproc->mpss_size = resource_size(&r);
+- qproc->mpss_region = devm_ioremap_wc(qproc->dev, qproc->mpss_phys, qproc->mpss_size);
+- if (!qproc->mpss_region) {
+- dev_err(qproc->dev, "unable to map memory region: %pa+%zx\n",
+- &r.start, qproc->mpss_size);
+- return -EBUSY;
+- }
+
+ return 0;
+ }
+@@ -1679,7 +1624,7 @@ static int q6v5_probe(struct platform_device *pdev)
+
+ platform_set_drvdata(pdev, qproc);
+
+- qproc->has_halt_nav = desc->has_halt_nav;
++ qproc->has_spare_reg = desc->has_spare_reg;
+ ret = q6v5_init_mem(qproc, pdev);
+ if (ret)
+ goto free_rproc;
+@@ -1828,8 +1773,6 @@ static const struct rproc_hexagon_res sc7180_mss = {
+ .active_clk_names = (char*[]){
+ "mnoc_axi",
+ "nav",
+- "mss_nav",
+- "mss_crypto",
+ NULL
+ },
+ .active_pd_names = (char*[]){
+@@ -1844,7 +1787,7 @@ static const struct rproc_hexagon_res sc7180_mss = {
+ },
+ .need_mem_protection = true,
+ .has_alt_reset = false,
+- .has_halt_nav = true,
++ .has_spare_reg = true,
+ .version = MSS_SC7180,
+ };
+
+@@ -1879,7 +1822,7 @@ static const struct rproc_hexagon_res sdm845_mss = {
+ },
+ .need_mem_protection = true,
+ .has_alt_reset = true,
+- .has_halt_nav = false,
++ .has_spare_reg = false,
+ .version = MSS_SDM845,
+ };
+
+@@ -1906,7 +1849,7 @@ static const struct rproc_hexagon_res msm8998_mss = {
+ },
+ .need_mem_protection = true,
+ .has_alt_reset = false,
+- .has_halt_nav = false,
++ .has_spare_reg = false,
+ .version = MSS_MSM8998,
+ };
+
+@@ -1936,7 +1879,7 @@ static const struct rproc_hexagon_res msm8996_mss = {
+ },
+ .need_mem_protection = true,
+ .has_alt_reset = false,
+- .has_halt_nav = false,
++ .has_spare_reg = false,
+ .version = MSS_MSM8996,
+ };
+
+@@ -1969,7 +1912,7 @@ static const struct rproc_hexagon_res msm8916_mss = {
+ },
+ .need_mem_protection = false,
+ .has_alt_reset = false,
+- .has_halt_nav = false,
++ .has_spare_reg = false,
+ .version = MSS_MSM8916,
+ };
+
+@@ -2010,7 +1953,7 @@ static const struct rproc_hexagon_res msm8974_mss = {
+ },
+ .need_mem_protection = false,
+ .has_alt_reset = false,
+- .has_halt_nav = false,
++ .has_spare_reg = false,
+ .version = MSS_MSM8974,
+ };
+
+diff --git a/drivers/remoteproc/remoteproc_core.c b/drivers/remoteproc/remoteproc_core.c
+index be15aace9b3c..8f79cfd2e467 100644
+--- a/drivers/remoteproc/remoteproc_core.c
++++ b/drivers/remoteproc/remoteproc_core.c
+@@ -2053,6 +2053,7 @@ struct rproc *rproc_alloc(struct device *dev, const char *name,
+ rproc->dev.type = &rproc_type;
+ rproc->dev.class = &rproc_class;
+ rproc->dev.driver_data = rproc;
++ idr_init(&rproc->notifyids);
+
+ /* Assign a unique device index and name */
+ rproc->index = ida_simple_get(&rproc_dev_index, 0, 0, GFP_KERNEL);
+@@ -2078,8 +2079,6 @@ struct rproc *rproc_alloc(struct device *dev, const char *name,
+
+ mutex_init(&rproc->lock);
+
+- idr_init(&rproc->notifyids);
+-
+ INIT_LIST_HEAD(&rproc->carveouts);
+ INIT_LIST_HEAD(&rproc->mappings);
+ INIT_LIST_HEAD(&rproc->traces);
+diff --git a/drivers/rtc/rtc-mc13xxx.c b/drivers/rtc/rtc-mc13xxx.c
+index afce2c0b4bd6..d6802e6191cb 100644
+--- a/drivers/rtc/rtc-mc13xxx.c
++++ b/drivers/rtc/rtc-mc13xxx.c
+@@ -308,8 +308,10 @@ static int __init mc13xxx_rtc_probe(struct platform_device *pdev)
+ mc13xxx_unlock(mc13xxx);
+
+ ret = rtc_register_device(priv->rtc);
+- if (ret)
++ if (ret) {
++ mc13xxx_lock(mc13xxx);
+ goto err_irq_request;
++ }
+
+ return 0;
+
+diff --git a/drivers/rtc/rtc-rc5t619.c b/drivers/rtc/rtc-rc5t619.c
+index 24e386ecbc7e..dd1a20977478 100644
+--- a/drivers/rtc/rtc-rc5t619.c
++++ b/drivers/rtc/rtc-rc5t619.c
+@@ -356,10 +356,8 @@ static int rc5t619_rtc_probe(struct platform_device *pdev)
+ int err;
+
+ rtc = devm_kzalloc(dev, sizeof(*rtc), GFP_KERNEL);
+- if (IS_ERR(rtc)) {
+- err = PTR_ERR(rtc);
++ if (!rtc)
+ return -ENOMEM;
+- }
+
+ rtc->rn5t618 = rn5t618;
+
+diff --git a/drivers/rtc/rtc-rv3028.c b/drivers/rtc/rtc-rv3028.c
+index a0ddc86c975a..ec84db0b3d7a 100644
+--- a/drivers/rtc/rtc-rv3028.c
++++ b/drivers/rtc/rtc-rv3028.c
+@@ -755,6 +755,8 @@ static int rv3028_probe(struct i2c_client *client)
+ return -ENOMEM;
+
+ rv3028->regmap = devm_regmap_init_i2c(client, &regmap_config);
++ if (IS_ERR(rv3028->regmap))
++ return PTR_ERR(rv3028->regmap);
+
+ i2c_set_clientdata(client, rv3028);
+
+diff --git a/drivers/s390/cio/qdio.h b/drivers/s390/cio/qdio.h
+index b8453b594679..a2afd7bc100b 100644
+--- a/drivers/s390/cio/qdio.h
++++ b/drivers/s390/cio/qdio.h
+@@ -364,7 +364,6 @@ static inline int multicast_outbound(struct qdio_q *q)
+ extern u64 last_ai_time;
+
+ /* prototypes for thin interrupt */
+-void qdio_setup_thinint(struct qdio_irq *irq_ptr);
+ int qdio_establish_thinint(struct qdio_irq *irq_ptr);
+ void qdio_shutdown_thinint(struct qdio_irq *irq_ptr);
+ void tiqdio_add_device(struct qdio_irq *irq_ptr);
+@@ -389,6 +388,7 @@ int qdio_setup_get_ssqd(struct qdio_irq *irq_ptr,
+ struct subchannel_id *schid,
+ struct qdio_ssqd_desc *data);
+ int qdio_setup_irq(struct qdio_irq *irq_ptr, struct qdio_initialize *init_data);
++void qdio_shutdown_irq(struct qdio_irq *irq);
+ void qdio_print_subchannel_info(struct qdio_irq *irq_ptr);
+ void qdio_release_memory(struct qdio_irq *irq_ptr);
+ int qdio_setup_init(void);
+diff --git a/drivers/s390/cio/qdio_main.c b/drivers/s390/cio/qdio_main.c
+index bcc3ab14e72d..80cc811bd2e0 100644
+--- a/drivers/s390/cio/qdio_main.c
++++ b/drivers/s390/cio/qdio_main.c
+@@ -1154,35 +1154,27 @@ int qdio_shutdown(struct ccw_device *cdev, int how)
+
+ /* cleanup subchannel */
+ spin_lock_irq(get_ccwdev_lock(cdev));
+-
++ qdio_set_state(irq_ptr, QDIO_IRQ_STATE_CLEANUP);
+ if (how & QDIO_FLAG_CLEANUP_USING_CLEAR)
+ rc = ccw_device_clear(cdev, QDIO_DOING_CLEANUP);
+ else
+ /* default behaviour is halt */
+ rc = ccw_device_halt(cdev, QDIO_DOING_CLEANUP);
++ spin_unlock_irq(get_ccwdev_lock(cdev));
+ if (rc) {
+ DBF_ERROR("%4x SHUTD ERR", irq_ptr->schid.sch_no);
+ DBF_ERROR("rc:%4d", rc);
+ goto no_cleanup;
+ }
+
+- qdio_set_state(irq_ptr, QDIO_IRQ_STATE_CLEANUP);
+- spin_unlock_irq(get_ccwdev_lock(cdev));
+ wait_event_interruptible_timeout(cdev->private->wait_q,
+ irq_ptr->state == QDIO_IRQ_STATE_INACTIVE ||
+ irq_ptr->state == QDIO_IRQ_STATE_ERR,
+ 10 * HZ);
+- spin_lock_irq(get_ccwdev_lock(cdev));
+
+ no_cleanup:
+ qdio_shutdown_thinint(irq_ptr);
+-
+- /* restore interrupt handler */
+- if ((void *)cdev->handler == (void *)qdio_int_handler) {
+- cdev->handler = irq_ptr->orig_handler;
+- cdev->private->intparm = 0;
+- }
+- spin_unlock_irq(get_ccwdev_lock(cdev));
++ qdio_shutdown_irq(irq_ptr);
+
+ qdio_set_state(irq_ptr, QDIO_IRQ_STATE_INACTIVE);
+ mutex_unlock(&irq_ptr->setup_mutex);
+@@ -1352,8 +1344,8 @@ int qdio_establish(struct ccw_device *cdev,
+
+ rc = qdio_establish_thinint(irq_ptr);
+ if (rc) {
++ qdio_shutdown_irq(irq_ptr);
+ mutex_unlock(&irq_ptr->setup_mutex);
+- qdio_shutdown(cdev, QDIO_FLAG_CLEANUP_USING_CLEAR);
+ return rc;
+ }
+
+@@ -1371,8 +1363,9 @@ int qdio_establish(struct ccw_device *cdev,
+ if (rc) {
+ DBF_ERROR("%4x est IO ERR", irq_ptr->schid.sch_no);
+ DBF_ERROR("rc:%4x", rc);
++ qdio_shutdown_thinint(irq_ptr);
++ qdio_shutdown_irq(irq_ptr);
+ mutex_unlock(&irq_ptr->setup_mutex);
+- qdio_shutdown(cdev, QDIO_FLAG_CLEANUP_USING_CLEAR);
+ return rc;
+ }
+
+diff --git a/drivers/s390/cio/qdio_setup.c b/drivers/s390/cio/qdio_setup.c
+index 3083edd61f0c..8edfa0982221 100644
+--- a/drivers/s390/cio/qdio_setup.c
++++ b/drivers/s390/cio/qdio_setup.c
+@@ -480,7 +480,6 @@ int qdio_setup_irq(struct qdio_irq *irq_ptr, struct qdio_initialize *init_data)
+ }
+
+ setup_qib(irq_ptr, init_data);
+- qdio_setup_thinint(irq_ptr);
+ set_impl_params(irq_ptr, init_data->qib_param_field_format,
+ init_data->qib_param_field,
+ init_data->input_slib_elements,
+@@ -491,6 +490,12 @@ int qdio_setup_irq(struct qdio_irq *irq_ptr, struct qdio_initialize *init_data)
+
+ /* qdr, qib, sls, slsbs, slibs, sbales are filled now */
+
++ /* set our IRQ handler */
++ spin_lock_irq(get_ccwdev_lock(cdev));
++ irq_ptr->orig_handler = cdev->handler;
++ cdev->handler = qdio_int_handler;
++ spin_unlock_irq(get_ccwdev_lock(cdev));
++
+ /* get qdio commands */
+ ciw = ccw_device_get_ciw(cdev, CIW_TYPE_EQUEUE);
+ if (!ciw) {
+@@ -506,12 +511,18 @@ int qdio_setup_irq(struct qdio_irq *irq_ptr, struct qdio_initialize *init_data)
+ }
+ irq_ptr->aqueue = *ciw;
+
+- /* set new interrupt handler */
++ return 0;
++}
++
++void qdio_shutdown_irq(struct qdio_irq *irq)
++{
++ struct ccw_device *cdev = irq->cdev;
++
++ /* restore IRQ handler */
+ spin_lock_irq(get_ccwdev_lock(cdev));
+- irq_ptr->orig_handler = cdev->handler;
+- cdev->handler = qdio_int_handler;
++ cdev->handler = irq->orig_handler;
++ cdev->private->intparm = 0;
+ spin_unlock_irq(get_ccwdev_lock(cdev));
+- return 0;
+ }
+
+ void qdio_print_subchannel_info(struct qdio_irq *irq_ptr)
+diff --git a/drivers/s390/cio/qdio_thinint.c b/drivers/s390/cio/qdio_thinint.c
+index ae50373617cd..0faa0ad21732 100644
+--- a/drivers/s390/cio/qdio_thinint.c
++++ b/drivers/s390/cio/qdio_thinint.c
+@@ -227,17 +227,19 @@ int __init tiqdio_register_thinints(void)
+
+ int qdio_establish_thinint(struct qdio_irq *irq_ptr)
+ {
++ int rc;
++
+ if (!is_thinint_irq(irq_ptr))
+ return 0;
+- return set_subchannel_ind(irq_ptr, 0);
+-}
+
+-void qdio_setup_thinint(struct qdio_irq *irq_ptr)
+-{
+- if (!is_thinint_irq(irq_ptr))
+- return;
+ irq_ptr->dsci = get_indicator();
+ DBF_HEX(&irq_ptr->dsci, sizeof(void *));
++
++ rc = set_subchannel_ind(irq_ptr, 0);
++ if (rc)
++ put_indicator(irq_ptr->dsci);
++
++ return rc;
+ }
+
+ void qdio_shutdown_thinint(struct qdio_irq *irq_ptr)
+diff --git a/drivers/scsi/arm/acornscsi.c b/drivers/scsi/arm/acornscsi.c
+index ddb52e7ba622..9a912fd0f70b 100644
+--- a/drivers/scsi/arm/acornscsi.c
++++ b/drivers/scsi/arm/acornscsi.c
+@@ -2911,8 +2911,10 @@ static int acornscsi_probe(struct expansion_card *ec, const struct ecard_id *id)
+
+ ashost->base = ecardm_iomap(ec, ECARD_RES_MEMC, 0, 0);
+ ashost->fast = ecardm_iomap(ec, ECARD_RES_IOCFAST, 0, 0);
+- if (!ashost->base || !ashost->fast)
++ if (!ashost->base || !ashost->fast) {
++ ret = -ENOMEM;
+ goto out_put;
++ }
+
+ host->irq = ec->irq;
+ ashost->host = host;
+diff --git a/drivers/scsi/cxgbi/cxgb3i/cxgb3i.c b/drivers/scsi/cxgbi/cxgb3i/cxgb3i.c
+index 524cdbcd29aa..ec7d01f6e2d5 100644
+--- a/drivers/scsi/cxgbi/cxgb3i/cxgb3i.c
++++ b/drivers/scsi/cxgbi/cxgb3i/cxgb3i.c
+@@ -959,6 +959,7 @@ static int init_act_open(struct cxgbi_sock *csk)
+ struct net_device *ndev = cdev->ports[csk->port_id];
+ struct cxgbi_hba *chba = cdev->hbas[csk->port_id];
+ struct sk_buff *skb = NULL;
++ int ret;
+
+ log_debug(1 << CXGBI_DBG_TOE | 1 << CXGBI_DBG_SOCK,
+ "csk 0x%p,%u,0x%lx.\n", csk, csk->state, csk->flags);
+@@ -979,16 +980,16 @@ static int init_act_open(struct cxgbi_sock *csk)
+ csk->atid = cxgb3_alloc_atid(t3dev, &t3_client, csk);
+ if (csk->atid < 0) {
+ pr_err("NO atid available.\n");
+- return -EINVAL;
++ ret = -EINVAL;
++ goto put_sock;
+ }
+ cxgbi_sock_set_flag(csk, CTPF_HAS_ATID);
+ cxgbi_sock_get(csk);
+
+ skb = alloc_wr(sizeof(struct cpl_act_open_req), 0, GFP_KERNEL);
+ if (!skb) {
+- cxgb3_free_atid(t3dev, csk->atid);
+- cxgbi_sock_put(csk);
+- return -ENOMEM;
++ ret = -ENOMEM;
++ goto free_atid;
+ }
+ skb->sk = (struct sock *)csk;
+ set_arp_failure_handler(skb, act_open_arp_failure);
+@@ -1010,6 +1011,15 @@ static int init_act_open(struct cxgbi_sock *csk)
+ cxgbi_sock_set_state(csk, CTP_ACTIVE_OPEN);
+ send_act_open_req(csk, skb, csk->l2t);
+ return 0;
++
++free_atid:
++ cxgb3_free_atid(t3dev, csk->atid);
++put_sock:
++ cxgbi_sock_put(csk);
++ l2t_release(t3dev, csk->l2t);
++ csk->l2t = NULL;
++
++ return ret;
+ }
+
+ cxgb3_cpl_handler_func cxgb3i_cpl_handlers[NUM_CPL_CMDS] = {
+diff --git a/drivers/scsi/hisi_sas/hisi_sas_main.c b/drivers/scsi/hisi_sas/hisi_sas_main.c
+index 9a6deb21fe4d..11caa4b0d797 100644
+--- a/drivers/scsi/hisi_sas/hisi_sas_main.c
++++ b/drivers/scsi/hisi_sas/hisi_sas_main.c
+@@ -898,8 +898,11 @@ void hisi_sas_phy_oob_ready(struct hisi_hba *hisi_hba, int phy_no)
+ struct hisi_sas_phy *phy = &hisi_hba->phy[phy_no];
+ struct device *dev = hisi_hba->dev;
+
++ dev_dbg(dev, "phy%d OOB ready\n", phy_no);
++ if (phy->phy_attached)
++ return;
++
+ if (!timer_pending(&phy->timer)) {
+- dev_dbg(dev, "phy%d OOB ready\n", phy_no);
+ phy->timer.expires = jiffies + HISI_SAS_WAIT_PHYUP_TIMEOUT * HZ;
+ add_timer(&phy->timer);
+ }
+diff --git a/drivers/scsi/ibmvscsi/ibmvscsi.c b/drivers/scsi/ibmvscsi/ibmvscsi.c
+index 59f0f1030c54..c5711c659b51 100644
+--- a/drivers/scsi/ibmvscsi/ibmvscsi.c
++++ b/drivers/scsi/ibmvscsi/ibmvscsi.c
+@@ -415,6 +415,8 @@ static int ibmvscsi_reenable_crq_queue(struct crq_queue *queue,
+ int rc = 0;
+ struct vio_dev *vdev = to_vio_dev(hostdata->dev);
+
++ set_adapter_info(hostdata);
++
+ /* Re-enable the CRQ */
+ do {
+ if (rc)
+diff --git a/drivers/scsi/iscsi_boot_sysfs.c b/drivers/scsi/iscsi_boot_sysfs.c
+index e4857b728033..a64abe38db2d 100644
+--- a/drivers/scsi/iscsi_boot_sysfs.c
++++ b/drivers/scsi/iscsi_boot_sysfs.c
+@@ -352,7 +352,7 @@ iscsi_boot_create_kobj(struct iscsi_boot_kset *boot_kset,
+ boot_kobj->kobj.kset = boot_kset->kset;
+ if (kobject_init_and_add(&boot_kobj->kobj, &iscsi_boot_ktype,
+ NULL, name, index)) {
+- kfree(boot_kobj);
++ kobject_put(&boot_kobj->kobj);
+ return NULL;
+ }
+ boot_kobj->data = data;
+diff --git a/drivers/scsi/lpfc/lpfc_els.c b/drivers/scsi/lpfc/lpfc_els.c
+index 80d1e661b0d4..35fbcb4d52eb 100644
+--- a/drivers/scsi/lpfc/lpfc_els.c
++++ b/drivers/scsi/lpfc/lpfc_els.c
+@@ -8514,6 +8514,8 @@ lpfc_els_unsol_buffer(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
+ spin_lock_irq(shost->host_lock);
+ if (ndlp->nlp_flag & NLP_IN_DEV_LOSS) {
+ spin_unlock_irq(shost->host_lock);
++ if (newnode)
++ lpfc_nlp_put(ndlp);
+ goto dropit;
+ }
+ spin_unlock_irq(shost->host_lock);
+diff --git a/drivers/scsi/mpt3sas/mpt3sas_base.c b/drivers/scsi/mpt3sas/mpt3sas_base.c
+index 663782bb790d..39d233262039 100644
+--- a/drivers/scsi/mpt3sas/mpt3sas_base.c
++++ b/drivers/scsi/mpt3sas/mpt3sas_base.c
+@@ -4915,7 +4915,9 @@ _base_release_memory_pools(struct MPT3SAS_ADAPTER *ioc)
+ }
+
+ kfree(ioc->hpr_lookup);
++ ioc->hpr_lookup = NULL;
+ kfree(ioc->internal_lookup);
++ ioc->internal_lookup = NULL;
+ if (ioc->chain_lookup) {
+ for (i = 0; i < ioc->scsiio_depth; i++) {
+ for (j = ioc->chains_per_prp_buffer;
+diff --git a/drivers/scsi/qedf/qedf.h b/drivers/scsi/qedf/qedf.h
+index f3f399fe10c8..0da4e16fb23a 100644
+--- a/drivers/scsi/qedf/qedf.h
++++ b/drivers/scsi/qedf/qedf.h
+@@ -355,6 +355,7 @@ struct qedf_ctx {
+ #define QEDF_GRCDUMP_CAPTURE 4
+ #define QEDF_IN_RECOVERY 5
+ #define QEDF_DBG_STOP_IO 6
++#define QEDF_PROBING 8
+ unsigned long flags; /* Miscellaneous state flags */
+ int fipvlan_retries;
+ u8 num_queues;
+diff --git a/drivers/scsi/qedf/qedf_main.c b/drivers/scsi/qedf/qedf_main.c
+index 5b19f5175c5c..3a7d03472922 100644
+--- a/drivers/scsi/qedf/qedf_main.c
++++ b/drivers/scsi/qedf/qedf_main.c
+@@ -3153,7 +3153,7 @@ static int __qedf_probe(struct pci_dev *pdev, int mode)
+ {
+ int rc = -EINVAL;
+ struct fc_lport *lport;
+- struct qedf_ctx *qedf;
++ struct qedf_ctx *qedf = NULL;
+ struct Scsi_Host *host;
+ bool is_vf = false;
+ struct qed_ll2_params params;
+@@ -3183,6 +3183,7 @@ static int __qedf_probe(struct pci_dev *pdev, int mode)
+
+ /* Initialize qedf_ctx */
+ qedf = lport_priv(lport);
++ set_bit(QEDF_PROBING, &qedf->flags);
+ qedf->lport = lport;
+ qedf->ctlr.lp = lport;
+ qedf->pdev = pdev;
+@@ -3206,9 +3207,12 @@ static int __qedf_probe(struct pci_dev *pdev, int mode)
+ } else {
+ /* Init pointers during recovery */
+ qedf = pci_get_drvdata(pdev);
++ set_bit(QEDF_PROBING, &qedf->flags);
+ lport = qedf->lport;
+ }
+
++ QEDF_INFO(&qedf->dbg_ctx, QEDF_LOG_DISC, "Probe started.\n");
++
+ host = lport->host;
+
+ /* Allocate mempool for qedf_io_work structs */
+@@ -3513,6 +3517,10 @@ static int __qedf_probe(struct pci_dev *pdev, int mode)
+ else
+ fc_fabric_login(lport);
+
++ QEDF_INFO(&qedf->dbg_ctx, QEDF_LOG_DISC, "Probe done.\n");
++
++ clear_bit(QEDF_PROBING, &qedf->flags);
++
+ /* All good */
+ return 0;
+
+@@ -3538,6 +3546,11 @@ err2:
+ err1:
+ scsi_host_put(lport->host);
+ err0:
++ if (qedf) {
++ QEDF_INFO(&qedf->dbg_ctx, QEDF_LOG_DISC, "Probe done.\n");
++
++ clear_bit(QEDF_PROBING, &qedf->flags);
++ }
+ return rc;
+ }
+
+@@ -3687,11 +3700,25 @@ void qedf_get_protocol_tlv_data(void *dev, void *data)
+ {
+ struct qedf_ctx *qedf = dev;
+ struct qed_mfw_tlv_fcoe *fcoe = data;
+- struct fc_lport *lport = qedf->lport;
+- struct Scsi_Host *host = lport->host;
+- struct fc_host_attrs *fc_host = shost_to_fc_host(host);
++ struct fc_lport *lport;
++ struct Scsi_Host *host;
++ struct fc_host_attrs *fc_host;
+ struct fc_host_statistics *hst;
+
++ if (!qedf) {
++ QEDF_ERR(NULL, "qedf is null.\n");
++ return;
++ }
++
++ if (test_bit(QEDF_PROBING, &qedf->flags)) {
++ QEDF_ERR(&qedf->dbg_ctx, "Function is still probing.\n");
++ return;
++ }
++
++ lport = qedf->lport;
++ host = lport->host;
++ fc_host = shost_to_fc_host(host);
++
+ /* Force a refresh of the fc_host stats including offload stats */
+ hst = qedf_fc_get_host_stats(host);
+
+diff --git a/drivers/scsi/qedi/qedi_iscsi.c b/drivers/scsi/qedi/qedi_iscsi.c
+index 1f4a5fb00a05..366c65b295a5 100644
+--- a/drivers/scsi/qedi/qedi_iscsi.c
++++ b/drivers/scsi/qedi/qedi_iscsi.c
+@@ -1001,7 +1001,8 @@ static void qedi_ep_disconnect(struct iscsi_endpoint *ep)
+ if (qedi_ep->state == EP_STATE_OFLDCONN_START)
+ goto ep_exit_recover;
+
+- flush_work(&qedi_ep->offload_work);
++ if (qedi_ep->state != EP_STATE_OFLDCONN_NONE)
++ flush_work(&qedi_ep->offload_work);
+
+ if (qedi_ep->conn) {
+ qedi_conn = qedi_ep->conn;
+@@ -1218,6 +1219,10 @@ static int qedi_set_path(struct Scsi_Host *shost, struct iscsi_path *path_data)
+ }
+
+ iscsi_cid = (u32)path_data->handle;
++ if (iscsi_cid >= qedi->max_active_conns) {
++ ret = -EINVAL;
++ goto set_path_exit;
++ }
+ qedi_ep = qedi->ep_tbl[iscsi_cid];
+ QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_INFO,
+ "iscsi_cid=0x%x, qedi_ep=%p\n", iscsi_cid, qedi_ep);
+diff --git a/drivers/scsi/qla2xxx/qla_os.c b/drivers/scsi/qla2xxx/qla_os.c
+index 1d9a4866f9a7..9179bb4caed8 100644
+--- a/drivers/scsi/qla2xxx/qla_os.c
++++ b/drivers/scsi/qla2xxx/qla_os.c
+@@ -6871,6 +6871,7 @@ qla2x00_do_dpc(void *data)
+
+ if (do_reset && !(test_and_set_bit(ABORT_ISP_ACTIVE,
+ &base_vha->dpc_flags))) {
++ base_vha->flags.online = 1;
+ ql_dbg(ql_dbg_dpc, base_vha, 0x4007,
+ "ISP abort scheduled.\n");
+ if (ha->isp_ops->abort_isp(base_vha)) {
+diff --git a/drivers/scsi/qla2xxx/tcm_qla2xxx.c b/drivers/scsi/qla2xxx/tcm_qla2xxx.c
+index 1f0a185b2a95..bf00ae16b487 100644
+--- a/drivers/scsi/qla2xxx/tcm_qla2xxx.c
++++ b/drivers/scsi/qla2xxx/tcm_qla2xxx.c
+@@ -949,6 +949,7 @@ static ssize_t tcm_qla2xxx_tpg_enable_store(struct config_item *item,
+
+ atomic_set(&tpg->lport_tpg_enabled, 0);
+ qlt_stop_phase1(vha->vha_tgt.qla_tgt);
++ qlt_stop_phase2(vha->vha_tgt.qla_tgt);
+ }
+
+ return count;
+@@ -1111,6 +1112,7 @@ static ssize_t tcm_qla2xxx_npiv_tpg_enable_store(struct config_item *item,
+
+ atomic_set(&tpg->lport_tpg_enabled, 0);
+ qlt_stop_phase1(vha->vha_tgt.qla_tgt);
++ qlt_stop_phase2(vha->vha_tgt.qla_tgt);
+ }
+
+ return count;
+diff --git a/drivers/scsi/scsi_error.c b/drivers/scsi/scsi_error.c
+index 978be1602f71..927b1e641842 100644
+--- a/drivers/scsi/scsi_error.c
++++ b/drivers/scsi/scsi_error.c
+@@ -1412,6 +1412,7 @@ static int scsi_eh_stu(struct Scsi_Host *shost,
+ sdev_printk(KERN_INFO, sdev,
+ "%s: skip START_UNIT, past eh deadline\n",
+ current->comm));
++ scsi_device_put(sdev);
+ break;
+ }
+ stu_scmd = NULL;
+@@ -1478,6 +1479,7 @@ static int scsi_eh_bus_device_reset(struct Scsi_Host *shost,
+ sdev_printk(KERN_INFO, sdev,
+ "%s: skip BDR, past eh deadline\n",
+ current->comm));
++ scsi_device_put(sdev);
+ break;
+ }
+ bdr_scmd = NULL;
+diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
+index 06c260f6cdae..b8b4366f1200 100644
+--- a/drivers/scsi/scsi_lib.c
++++ b/drivers/scsi/scsi_lib.c
+@@ -548,7 +548,7 @@ static void scsi_uninit_cmd(struct scsi_cmnd *cmd)
+ }
+ }
+
+-static void scsi_mq_free_sgtables(struct scsi_cmnd *cmd)
++static void scsi_free_sgtables(struct scsi_cmnd *cmd)
+ {
+ if (cmd->sdb.table.nents)
+ sg_free_table_chained(&cmd->sdb.table,
+@@ -560,7 +560,7 @@ static void scsi_mq_free_sgtables(struct scsi_cmnd *cmd)
+
+ static void scsi_mq_uninit_cmd(struct scsi_cmnd *cmd)
+ {
+- scsi_mq_free_sgtables(cmd);
++ scsi_free_sgtables(cmd);
+ scsi_uninit_cmd(cmd);
+ }
+
+@@ -1059,7 +1059,7 @@ blk_status_t scsi_init_io(struct scsi_cmnd *cmd)
+
+ return BLK_STS_OK;
+ out_free_sgtables:
+- scsi_mq_free_sgtables(cmd);
++ scsi_free_sgtables(cmd);
+ return ret;
+ }
+ EXPORT_SYMBOL(scsi_init_io);
+@@ -1190,6 +1190,7 @@ static blk_status_t scsi_setup_cmnd(struct scsi_device *sdev,
+ struct request *req)
+ {
+ struct scsi_cmnd *cmd = blk_mq_rq_to_pdu(req);
++ blk_status_t ret;
+
+ if (!blk_rq_bytes(req))
+ cmd->sc_data_direction = DMA_NONE;
+@@ -1199,9 +1200,14 @@ static blk_status_t scsi_setup_cmnd(struct scsi_device *sdev,
+ cmd->sc_data_direction = DMA_FROM_DEVICE;
+
+ if (blk_rq_is_scsi(req))
+- return scsi_setup_scsi_cmnd(sdev, req);
++ ret = scsi_setup_scsi_cmnd(sdev, req);
+ else
+- return scsi_setup_fs_cmnd(sdev, req);
++ ret = scsi_setup_fs_cmnd(sdev, req);
++
++ if (ret != BLK_STS_OK)
++ scsi_free_sgtables(cmd);
++
++ return ret;
+ }
+
+ static blk_status_t
+@@ -2859,8 +2865,10 @@ scsi_host_unblock(struct Scsi_Host *shost, int new_state)
+
+ shost_for_each_device(sdev, shost) {
+ ret = scsi_internal_device_unblock(sdev, new_state);
+- if (ret)
++ if (ret) {
++ scsi_device_put(sdev);
+ break;
++ }
+ }
+ return ret;
+ }
+diff --git a/drivers/scsi/scsi_transport_iscsi.c b/drivers/scsi/scsi_transport_iscsi.c
+index b2a803c51288..ea6d498fa923 100644
+--- a/drivers/scsi/scsi_transport_iscsi.c
++++ b/drivers/scsi/scsi_transport_iscsi.c
+@@ -1616,6 +1616,12 @@ static DECLARE_TRANSPORT_CLASS(iscsi_connection_class,
+ static struct sock *nls;
+ static DEFINE_MUTEX(rx_queue_mutex);
+
++/*
++ * conn_mutex protects the {start,bind,stop,destroy}_conn from racing
++ * against the kernel stop_connection recovery mechanism
++ */
++static DEFINE_MUTEX(conn_mutex);
++
+ static LIST_HEAD(sesslist);
+ static LIST_HEAD(sessdestroylist);
+ static DEFINE_SPINLOCK(sesslock);
+@@ -2445,6 +2451,32 @@ int iscsi_offload_mesg(struct Scsi_Host *shost,
+ }
+ EXPORT_SYMBOL_GPL(iscsi_offload_mesg);
+
++/*
++ * This can be called without the rx_queue_mutex, if invoked by the kernel
++ * stop work. But, in that case, it is guaranteed not to race with
++ * iscsi_destroy by conn_mutex.
++ */
++static void iscsi_if_stop_conn(struct iscsi_cls_conn *conn, int flag)
++{
++ /*
++ * It is important that this path doesn't rely on
++ * rx_queue_mutex, otherwise, a thread doing allocation on a
++ * start_session/start_connection could sleep waiting on a
++ * writeback to a failed iscsi device, that cannot be recovered
++ * because the lock is held. If we don't hold it here, the
++ * kernel stop_conn_work_fn has a chance to stop the broken
++ * session and resolve the allocation.
++ *
++ * Still, the user invoked .stop_conn() needs to be serialized
++ * with stop_conn_work_fn by a private mutex. Not pretty, but
++ * it works.
++ */
++ mutex_lock(&conn_mutex);
++ conn->transport->stop_conn(conn, flag);
++ mutex_unlock(&conn_mutex);
++
++}
++
+ static void stop_conn_work_fn(struct work_struct *work)
+ {
+ struct iscsi_cls_conn *conn, *tmp;
+@@ -2463,30 +2495,17 @@ static void stop_conn_work_fn(struct work_struct *work)
+ uint32_t sid = iscsi_conn_get_sid(conn);
+ struct iscsi_cls_session *session;
+
+- mutex_lock(&rx_queue_mutex);
+-
+ session = iscsi_session_lookup(sid);
+ if (session) {
+ if (system_state != SYSTEM_RUNNING) {
+ session->recovery_tmo = 0;
+- conn->transport->stop_conn(conn,
+- STOP_CONN_TERM);
++ iscsi_if_stop_conn(conn, STOP_CONN_TERM);
+ } else {
+- conn->transport->stop_conn(conn,
+- STOP_CONN_RECOVER);
++ iscsi_if_stop_conn(conn, STOP_CONN_RECOVER);
+ }
+ }
+
+ list_del_init(&conn->conn_list_err);
+-
+- mutex_unlock(&rx_queue_mutex);
+-
+- /* we don't want to hold rx_queue_mutex for too long,
+- * for instance if many conns failed at the same time,
+- * since this stall other iscsi maintenance operations.
+- * Give other users a chance to proceed.
+- */
+- cond_resched();
+ }
+ }
+
+@@ -2846,8 +2865,11 @@ iscsi_if_destroy_conn(struct iscsi_transport *transport, struct iscsi_uevent *ev
+ spin_unlock_irqrestore(&connlock, flags);
+
+ ISCSI_DBG_TRANS_CONN(conn, "Destroying transport conn\n");
++
++ mutex_lock(&conn_mutex);
+ if (transport->destroy_conn)
+ transport->destroy_conn(conn);
++ mutex_unlock(&conn_mutex);
+
+ return 0;
+ }
+@@ -3689,9 +3711,12 @@ iscsi_if_recv_msg(struct sk_buff *skb, struct nlmsghdr *nlh, uint32_t *group)
+ break;
+ }
+
++ mutex_lock(&conn_mutex);
+ ev->r.retcode = transport->bind_conn(session, conn,
+ ev->u.b_conn.transport_eph,
+ ev->u.b_conn.is_leading);
++ mutex_unlock(&conn_mutex);
++
+ if (ev->r.retcode || !transport->ep_connect)
+ break;
+
+@@ -3713,9 +3738,11 @@ iscsi_if_recv_msg(struct sk_buff *skb, struct nlmsghdr *nlh, uint32_t *group)
+ case ISCSI_UEVENT_START_CONN:
+ conn = iscsi_conn_lookup(ev->u.start_conn.sid, ev->u.start_conn.cid);
+ if (conn) {
++ mutex_lock(&conn_mutex);
+ ev->r.retcode = transport->start_conn(conn);
+ if (!ev->r.retcode)
+ conn->state = ISCSI_CONN_UP;
++ mutex_unlock(&conn_mutex);
+ }
+ else
+ err = -EINVAL;
+@@ -3723,17 +3750,20 @@ iscsi_if_recv_msg(struct sk_buff *skb, struct nlmsghdr *nlh, uint32_t *group)
+ case ISCSI_UEVENT_STOP_CONN:
+ conn = iscsi_conn_lookup(ev->u.stop_conn.sid, ev->u.stop_conn.cid);
+ if (conn)
+- transport->stop_conn(conn, ev->u.stop_conn.flag);
++ iscsi_if_stop_conn(conn, ev->u.stop_conn.flag);
+ else
+ err = -EINVAL;
+ break;
+ case ISCSI_UEVENT_SEND_PDU:
+ conn = iscsi_conn_lookup(ev->u.send_pdu.sid, ev->u.send_pdu.cid);
+- if (conn)
++ if (conn) {
++ mutex_lock(&conn_mutex);
+ ev->r.retcode = transport->send_pdu(conn,
+ (struct iscsi_hdr*)((char*)ev + sizeof(*ev)),
+ (char*)ev + sizeof(*ev) + ev->u.send_pdu.hdr_size,
+ ev->u.send_pdu.data_size);
++ mutex_unlock(&conn_mutex);
++ }
+ else
+ err = -EINVAL;
+ break;
+diff --git a/drivers/scsi/sr.c b/drivers/scsi/sr.c
+index d2fe3fa470f9..1e13c6a0f0ca 100644
+--- a/drivers/scsi/sr.c
++++ b/drivers/scsi/sr.c
+@@ -797,7 +797,7 @@ static int sr_probe(struct device *dev)
+ cd->cdi.disk = disk;
+
+ if (register_cdrom(&cd->cdi))
+- goto fail_put;
++ goto fail_minor;
+
+ /*
+ * Initialize block layer runtime PM stuffs before the
+@@ -815,8 +815,13 @@ static int sr_probe(struct device *dev)
+
+ return 0;
+
++fail_minor:
++ spin_lock(&sr_index_lock);
++ clear_bit(minor, sr_index_bits);
++ spin_unlock(&sr_index_lock);
+ fail_put:
+ put_disk(disk);
++ mutex_destroy(&cd->lock);
+ fail_free:
+ kfree(cd);
+ fail:
+diff --git a/drivers/scsi/ufs/ti-j721e-ufs.c b/drivers/scsi/ufs/ti-j721e-ufs.c
+index 5216d228cdd9..46bb905b4d6a 100644
+--- a/drivers/scsi/ufs/ti-j721e-ufs.c
++++ b/drivers/scsi/ufs/ti-j721e-ufs.c
+@@ -32,14 +32,14 @@ static int ti_j721e_ufs_probe(struct platform_device *pdev)
+ ret = pm_runtime_get_sync(dev);
+ if (ret < 0) {
+ pm_runtime_put_noidle(dev);
+- return ret;
++ goto disable_pm;
+ }
+
+ /* Select MPHY refclk frequency */
+ clk = devm_clk_get(dev, NULL);
+ if (IS_ERR(clk)) {
+ dev_err(dev, "Cannot claim MPHY clock.\n");
+- return PTR_ERR(clk);
++ goto clk_err;
+ }
+ clk_rate = clk_get_rate(clk);
+ if (clk_rate == 26000000)
+@@ -54,16 +54,23 @@ static int ti_j721e_ufs_probe(struct platform_device *pdev)
+ dev);
+ if (ret) {
+ dev_err(dev, "failed to populate child nodes %d\n", ret);
+- pm_runtime_put_sync(dev);
++ goto clk_err;
+ }
+
+ return ret;
++
++clk_err:
++ pm_runtime_put_sync(dev);
++disable_pm:
++ pm_runtime_disable(dev);
++ return ret;
+ }
+
+ static int ti_j721e_ufs_remove(struct platform_device *pdev)
+ {
+ of_platform_depopulate(&pdev->dev);
+ pm_runtime_put_sync(&pdev->dev);
++ pm_runtime_disable(&pdev->dev);
+
+ return 0;
+ }
+diff --git a/drivers/scsi/ufs/ufs-qcom.c b/drivers/scsi/ufs/ufs-qcom.c
+index 19aa5c44e0da..f938867301a0 100644
+--- a/drivers/scsi/ufs/ufs-qcom.c
++++ b/drivers/scsi/ufs/ufs-qcom.c
+@@ -1658,11 +1658,11 @@ static void ufs_qcom_dump_dbg_regs(struct ufs_hba *hba)
+
+ /* sleep a bit intermittently as we are dumping too much data */
+ ufs_qcom_print_hw_debug_reg_all(hba, NULL, ufs_qcom_dump_regs_wrapper);
+- usleep_range(1000, 1100);
++ udelay(1000);
+ ufs_qcom_testbus_read(hba);
+- usleep_range(1000, 1100);
++ udelay(1000);
+ ufs_qcom_print_unipro_testbus(hba);
+- usleep_range(1000, 1100);
++ udelay(1000);
+ }
+
+ /**
+diff --git a/drivers/scsi/ufs/ufs_bsg.c b/drivers/scsi/ufs/ufs_bsg.c
+index 53dd87628cbe..516a7f573942 100644
+--- a/drivers/scsi/ufs/ufs_bsg.c
++++ b/drivers/scsi/ufs/ufs_bsg.c
+@@ -106,8 +106,10 @@ static int ufs_bsg_request(struct bsg_job *job)
+ desc_op = bsg_request->upiu_req.qr.opcode;
+ ret = ufs_bsg_alloc_desc_buffer(hba, job, &desc_buff,
+ &desc_len, desc_op);
+- if (ret)
++ if (ret) {
++ pm_runtime_put_sync(hba->dev);
+ goto out;
++ }
+
+ /* fall through */
+ case UPIU_TRANSACTION_NOP_OUT:
+diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c
+index 698e8d20b4ba..52740b60d786 100644
+--- a/drivers/scsi/ufs/ufshcd.c
++++ b/drivers/scsi/ufs/ufshcd.c
+@@ -5098,7 +5098,6 @@ static int ufshcd_bkops_ctrl(struct ufs_hba *hba,
+ err = ufshcd_enable_auto_bkops(hba);
+ else
+ err = ufshcd_disable_auto_bkops(hba);
+- hba->urgent_bkops_lvl = curr_status;
+ out:
+ return err;
+ }
+diff --git a/drivers/slimbus/qcom-ngd-ctrl.c b/drivers/slimbus/qcom-ngd-ctrl.c
+index fc2575fef51b..7426b5884218 100644
+--- a/drivers/slimbus/qcom-ngd-ctrl.c
++++ b/drivers/slimbus/qcom-ngd-ctrl.c
+@@ -1361,7 +1361,6 @@ static int of_qcom_slim_ngd_register(struct device *parent,
+ ngd->pdev->driver_override = QCOM_SLIM_NGD_DRV_NAME;
+ ngd->pdev->dev.of_node = node;
+ ctrl->ngd = ngd;
+- platform_set_drvdata(ngd->pdev, ctrl);
+
+ platform_device_add(ngd->pdev);
+ ngd->base = ctrl->base + ngd->id * data->offset +
+@@ -1376,12 +1375,13 @@ static int of_qcom_slim_ngd_register(struct device *parent,
+
+ static int qcom_slim_ngd_probe(struct platform_device *pdev)
+ {
+- struct qcom_slim_ngd_ctrl *ctrl = platform_get_drvdata(pdev);
+ struct device *dev = &pdev->dev;
++ struct qcom_slim_ngd_ctrl *ctrl = dev_get_drvdata(dev->parent);
+ int ret;
+
+ ctrl->ctrl.dev = dev;
+
++ platform_set_drvdata(pdev, ctrl);
+ pm_runtime_use_autosuspend(dev);
+ pm_runtime_set_autosuspend_delay(dev, QCOM_SLIM_NGD_AUTOSUSPEND);
+ pm_runtime_set_suspended(dev);
+diff --git a/drivers/soundwire/slave.c b/drivers/soundwire/slave.c
+index aace57fae7f8..4bacdb187eab 100644
+--- a/drivers/soundwire/slave.c
++++ b/drivers/soundwire/slave.c
+@@ -68,6 +68,8 @@ static int sdw_slave_add(struct sdw_bus *bus,
+ list_del(&slave->node);
+ mutex_unlock(&bus->bus_lock);
+ put_device(&slave->dev);
++
++ return ret;
+ }
+ sdw_slave_debugfs_init(slave);
+
+diff --git a/drivers/staging/gasket/gasket_sysfs.c b/drivers/staging/gasket/gasket_sysfs.c
+index 5f0e089573a2..af26bc9f184a 100644
+--- a/drivers/staging/gasket/gasket_sysfs.c
++++ b/drivers/staging/gasket/gasket_sysfs.c
+@@ -339,6 +339,7 @@ void gasket_sysfs_put_attr(struct device *device,
+
+ dev_err(device, "Unable to put unknown attribute: %s\n",
+ attr->attr.attr.name);
++ put_mapping(mapping);
+ }
+ EXPORT_SYMBOL(gasket_sysfs_put_attr);
+
+@@ -372,6 +373,7 @@ ssize_t gasket_sysfs_register_store(struct device *device,
+ gasket_dev = mapping->gasket_dev;
+ if (!gasket_dev) {
+ dev_err(device, "Device driver may have been removed\n");
++ put_mapping(mapping);
+ return 0;
+ }
+
+diff --git a/drivers/staging/greybus/light.c b/drivers/staging/greybus/light.c
+index d6ba25f21d80..d2672b65c3f4 100644
+--- a/drivers/staging/greybus/light.c
++++ b/drivers/staging/greybus/light.c
+@@ -1026,7 +1026,8 @@ static int gb_lights_light_config(struct gb_lights *glights, u8 id)
+
+ light->channels_count = conf.channel_count;
+ light->name = kstrndup(conf.name, NAMES_MAX, GFP_KERNEL);
+-
++ if (!light->name)
++ return -ENOMEM;
+ light->channels = kcalloc(light->channels_count,
+ sizeof(struct gb_channel), GFP_KERNEL);
+ if (!light->channels)
+diff --git a/drivers/staging/mt7621-dts/mt7621.dtsi b/drivers/staging/mt7621-dts/mt7621.dtsi
+index 9e5cf68731bb..82aa93634eda 100644
+--- a/drivers/staging/mt7621-dts/mt7621.dtsi
++++ b/drivers/staging/mt7621-dts/mt7621.dtsi
+@@ -523,11 +523,10 @@
+ 0x01000000 0 0x00000000 0x1e160000 0 0x00010000 /* io space */
+ >;
+
+- #interrupt-cells = <1>;
+- interrupt-map-mask = <0xF0000 0 0 1>;
+- interrupt-map = <0x10000 0 0 1 &gic GIC_SHARED 4 IRQ_TYPE_LEVEL_HIGH>,
+- <0x20000 0 0 1 &gic GIC_SHARED 24 IRQ_TYPE_LEVEL_HIGH>,
+- <0x30000 0 0 1 &gic GIC_SHARED 25 IRQ_TYPE_LEVEL_HIGH>;
++ interrupt-parent = <&gic>;
++ interrupts = <GIC_SHARED 4 IRQ_TYPE_LEVEL_HIGH
++ GIC_SHARED 24 IRQ_TYPE_LEVEL_HIGH
++ GIC_SHARED 25 IRQ_TYPE_LEVEL_HIGH>;
+
+ status = "disabled";
+
+diff --git a/drivers/staging/mt7621-pci/pci-mt7621.c b/drivers/staging/mt7621-pci/pci-mt7621.c
+index b9d460a9c041..36207243a71b 100644
+--- a/drivers/staging/mt7621-pci/pci-mt7621.c
++++ b/drivers/staging/mt7621-pci/pci-mt7621.c
+@@ -97,6 +97,7 @@
+ * @pcie_rst: pointer to port reset control
+ * @gpio_rst: gpio reset
+ * @slot: port slot
++ * @irq: GIC irq
+ * @enabled: indicates if port is enabled
+ */
+ struct mt7621_pcie_port {
+@@ -107,6 +108,7 @@ struct mt7621_pcie_port {
+ struct reset_control *pcie_rst;
+ struct gpio_desc *gpio_rst;
+ u32 slot;
++ int irq;
+ bool enabled;
+ };
+
+@@ -120,6 +122,7 @@ struct mt7621_pcie_port {
+ * @dev: Pointer to PCIe device
+ * @io_map_base: virtual memory base address for io
+ * @ports: pointer to PCIe port information
++ * @irq_map: irq mapping info according pcie link status
+ * @resets_inverted: depends on chip revision
+ * reset lines are inverted.
+ */
+@@ -135,6 +138,7 @@ struct mt7621_pcie {
+ } offset;
+ unsigned long io_map_base;
+ struct list_head ports;
++ int irq_map[PCIE_P2P_MAX];
+ bool resets_inverted;
+ };
+
+@@ -279,6 +283,16 @@ static void setup_cm_memory_region(struct mt7621_pcie *pcie)
+ }
+ }
+
++static int mt7621_map_irq(const struct pci_dev *pdev, u8 slot, u8 pin)
++{
++ struct mt7621_pcie *pcie = pdev->bus->sysdata;
++ struct device *dev = pcie->dev;
++ int irq = pcie->irq_map[slot];
++
++ dev_info(dev, "bus=%d slot=%d irq=%d\n", pdev->bus->number, slot, irq);
++ return irq;
++}
++
+ static int mt7621_pci_parse_request_of_pci_ranges(struct mt7621_pcie *pcie)
+ {
+ struct device *dev = pcie->dev;
+@@ -330,6 +344,7 @@ static int mt7621_pcie_parse_port(struct mt7621_pcie *pcie,
+ {
+ struct mt7621_pcie_port *port;
+ struct device *dev = pcie->dev;
++ struct platform_device *pdev = to_platform_device(dev);
+ struct device_node *pnode = dev->of_node;
+ struct resource regs;
+ char name[10];
+@@ -371,6 +386,12 @@ static int mt7621_pcie_parse_port(struct mt7621_pcie *pcie,
+ port->slot = slot;
+ port->pcie = pcie;
+
++ port->irq = platform_get_irq(pdev, slot);
++ if (port->irq < 0) {
++ dev_err(dev, "Failed to get IRQ for PCIe%d\n", slot);
++ return -ENXIO;
++ }
++
+ INIT_LIST_HEAD(&port->list);
+ list_add_tail(&port->list, &pcie->ports);
+
+@@ -585,13 +606,15 @@ static int mt7621_pcie_init_virtual_bridges(struct mt7621_pcie *pcie)
+ {
+ u32 pcie_link_status = 0;
+ u32 n;
+- int i;
++ int i = 0;
+ u32 p2p_br_devnum[PCIE_P2P_MAX];
++ int irqs[PCIE_P2P_MAX];
+ struct mt7621_pcie_port *port;
+
+ list_for_each_entry(port, &pcie->ports, list) {
+ u32 slot = port->slot;
+
++ irqs[i++] = port->irq;
+ if (port->enabled)
+ pcie_link_status |= BIT(slot);
+ }
+@@ -614,6 +637,15 @@ static int mt7621_pcie_init_virtual_bridges(struct mt7621_pcie *pcie)
+ (p2p_br_devnum[1] << PCIE_P2P_BR_DEVNUM1_SHIFT) |
+ (p2p_br_devnum[2] << PCIE_P2P_BR_DEVNUM2_SHIFT));
+
++ /* Assign IRQs */
++ n = 0;
++ for (i = 0; i < PCIE_P2P_MAX; i++)
++ if (pcie_link_status & BIT(i))
++ pcie->irq_map[n++] = irqs[i];
++
++ for (i = n; i < PCIE_P2P_MAX; i++)
++ pcie->irq_map[i] = -1;
++
+ return 0;
+ }
+
+@@ -638,7 +670,7 @@ static int mt7621_pcie_register_host(struct pci_host_bridge *host,
+ host->busnr = pcie->busn.start;
+ host->dev.parent = pcie->dev;
+ host->ops = &mt7621_pci_ops;
+- host->map_irq = of_irq_parse_and_map_pci;
++ host->map_irq = mt7621_map_irq;
+ host->swizzle_irq = pci_common_swizzle;
+ host->sysdata = pcie;
+
+diff --git a/drivers/staging/sm750fb/sm750.c b/drivers/staging/sm750fb/sm750.c
+index 59568d18ce23..5b72aa81d94c 100644
+--- a/drivers/staging/sm750fb/sm750.c
++++ b/drivers/staging/sm750fb/sm750.c
+@@ -898,6 +898,7 @@ static int lynxfb_set_fbinfo(struct fb_info *info, int index)
+ fix->visual = FB_VISUAL_PSEUDOCOLOR;
+ break;
+ case 16:
++ case 24:
+ case 32:
+ fix->visual = FB_VISUAL_TRUECOLOR;
+ break;
+diff --git a/drivers/staging/wfx/bus_sdio.c b/drivers/staging/wfx/bus_sdio.c
+index dedc3ff58d3e..c2e4bd1e3b0a 100644
+--- a/drivers/staging/wfx/bus_sdio.c
++++ b/drivers/staging/wfx/bus_sdio.c
+@@ -156,7 +156,13 @@ static const struct hwbus_ops wfx_sdio_hwbus_ops = {
+ .align_size = wfx_sdio_align_size,
+ };
+
+-static const struct of_device_id wfx_sdio_of_match[];
++static const struct of_device_id wfx_sdio_of_match[] = {
++ { .compatible = "silabs,wfx-sdio" },
++ { .compatible = "silabs,wf200" },
++ { },
++};
++MODULE_DEVICE_TABLE(of, wfx_sdio_of_match);
++
+ static int wfx_sdio_probe(struct sdio_func *func,
+ const struct sdio_device_id *id)
+ {
+@@ -248,15 +254,6 @@ static const struct sdio_device_id wfx_sdio_ids[] = {
+ };
+ MODULE_DEVICE_TABLE(sdio, wfx_sdio_ids);
+
+-#ifdef CONFIG_OF
+-static const struct of_device_id wfx_sdio_of_match[] = {
+- { .compatible = "silabs,wfx-sdio" },
+- { .compatible = "silabs,wf200" },
+- { },
+-};
+-MODULE_DEVICE_TABLE(of, wfx_sdio_of_match);
+-#endif
+-
+ struct sdio_driver wfx_sdio_driver = {
+ .name = "wfx-sdio",
+ .id_table = wfx_sdio_ids,
+@@ -264,6 +261,6 @@ struct sdio_driver wfx_sdio_driver = {
+ .remove = wfx_sdio_remove,
+ .drv = {
+ .owner = THIS_MODULE,
+- .of_match_table = of_match_ptr(wfx_sdio_of_match),
++ .of_match_table = wfx_sdio_of_match,
+ }
+ };
+diff --git a/drivers/staging/wfx/debug.c b/drivers/staging/wfx/debug.c
+index 1164aba118a1..a73b5bbb578e 100644
+--- a/drivers/staging/wfx/debug.c
++++ b/drivers/staging/wfx/debug.c
+@@ -142,7 +142,7 @@ static int wfx_rx_stats_show(struct seq_file *seq, void *v)
+ mutex_lock(&wdev->rx_stats_lock);
+ seq_printf(seq, "Timestamp: %dus\n", st->date);
+ seq_printf(seq, "Low power clock: frequency %uHz, external %s\n",
+- st->pwr_clk_freq,
++ le32_to_cpu(st->pwr_clk_freq),
+ st->is_ext_pwr_clk ? "yes" : "no");
+ seq_printf(seq,
+ "Num. of frames: %d, PER (x10e4): %d, Throughput: %dKbps/s\n",
+@@ -152,9 +152,12 @@ static int wfx_rx_stats_show(struct seq_file *seq, void *v)
+ for (i = 0; i < ARRAY_SIZE(channel_names); i++) {
+ if (channel_names[i])
+ seq_printf(seq, "%5s %8d %8d %8d %8d %8d\n",
+- channel_names[i], st->nb_rx_by_rate[i],
+- st->per[i], st->rssi[i] / 100,
+- st->snr[i] / 100, st->cfo[i]);
++ channel_names[i],
++ le32_to_cpu(st->nb_rx_by_rate[i]),
++ le16_to_cpu(st->per[i]),
++ (s16)le16_to_cpu(st->rssi[i]) / 100,
++ (s16)le16_to_cpu(st->snr[i]) / 100,
++ (s16)le16_to_cpu(st->cfo[i]));
+ }
+ mutex_unlock(&wdev->rx_stats_lock);
+
+diff --git a/drivers/staging/wfx/hif_tx.c b/drivers/staging/wfx/hif_tx.c
+index 77bca43aca42..20b3045d7667 100644
+--- a/drivers/staging/wfx/hif_tx.c
++++ b/drivers/staging/wfx/hif_tx.c
+@@ -268,7 +268,7 @@ int hif_scan(struct wfx_vif *wvif, struct cfg80211_scan_request *req,
+ tmo_chan_bg = le32_to_cpu(body->max_channel_time) * USEC_PER_TU;
+ tmo_chan_fg = 512 * USEC_PER_TU + body->probe_delay;
+ tmo_chan_fg *= body->num_of_probe_requests;
+- tmo = chan_num * max(tmo_chan_bg, tmo_chan_fg);
++ tmo = chan_num * max(tmo_chan_bg, tmo_chan_fg) + 512 * USEC_PER_TU;
+
+ wfx_fill_header(hif, wvif->id, HIF_REQ_ID_START_SCAN, buf_len);
+ ret = wfx_cmd_send(wvif->wdev, hif, NULL, 0, false);
+diff --git a/drivers/staging/wfx/queue.c b/drivers/staging/wfx/queue.c
+index 39d9127ce4b9..8ae23681e29b 100644
+--- a/drivers/staging/wfx/queue.c
++++ b/drivers/staging/wfx/queue.c
+@@ -35,6 +35,7 @@ void wfx_tx_flush(struct wfx_dev *wdev)
+ if (wdev->chip_frozen)
+ return;
+
++ wfx_tx_lock(wdev);
+ mutex_lock(&wdev->hif_cmd.lock);
+ ret = wait_event_timeout(wdev->hif.tx_buffers_empty,
+ !wdev->hif.tx_buffers_used,
+@@ -47,6 +48,7 @@ void wfx_tx_flush(struct wfx_dev *wdev)
+ wdev->chip_frozen = 1;
+ }
+ mutex_unlock(&wdev->hif_cmd.lock);
++ wfx_tx_unlock(wdev);
+ }
+
+ void wfx_tx_lock_flush(struct wfx_dev *wdev)
+diff --git a/drivers/staging/wfx/sta.c b/drivers/staging/wfx/sta.c
+index 9d430346a58b..b4cd7cb1ce56 100644
+--- a/drivers/staging/wfx/sta.c
++++ b/drivers/staging/wfx/sta.c
+@@ -520,7 +520,9 @@ static void wfx_do_join(struct wfx_vif *wvif)
+ ssidie = ieee80211_bss_get_ie(bss, WLAN_EID_SSID);
+ if (ssidie) {
+ ssidlen = ssidie[1];
+- memcpy(ssid, &ssidie[2], ssidie[1]);
++ if (ssidlen > IEEE80211_MAX_SSID_LEN)
++ ssidlen = IEEE80211_MAX_SSID_LEN;
++ memcpy(ssid, &ssidie[2], ssidlen);
+ }
+ rcu_read_unlock();
+
+@@ -1047,7 +1049,6 @@ int wfx_add_interface(struct ieee80211_hw *hw, struct ieee80211_vif *vif)
+ init_completion(&wvif->scan_complete);
+ INIT_WORK(&wvif->scan_work, wfx_hw_scan_work);
+
+- INIT_WORK(&wvif->tx_policy_upload_work, wfx_tx_policy_upload_work);
+ mutex_unlock(&wdev->conf_mutex);
+
+ hif_set_macaddr(wvif, vif->addr);
+diff --git a/drivers/staging/wfx/sta.h b/drivers/staging/wfx/sta.h
+index cf99a8a74a81..ace845f9ed14 100644
+--- a/drivers/staging/wfx/sta.h
++++ b/drivers/staging/wfx/sta.h
+@@ -37,7 +37,7 @@ struct wfx_grp_addr_table {
+ struct wfx_sta_priv {
+ int link_id;
+ int vif_id;
+- u8 buffered[IEEE80211_NUM_TIDS];
++ int buffered[IEEE80211_NUM_TIDS];
+ // Ensure atomicity of "buffered" and calls to ieee80211_sta_set_buffered()
+ spinlock_t lock;
+ };
+diff --git a/drivers/staging/wilc1000/hif.c b/drivers/staging/wilc1000/hif.c
+index 6c7de2f8d3f2..d025a3093015 100644
+--- a/drivers/staging/wilc1000/hif.c
++++ b/drivers/staging/wilc1000/hif.c
+@@ -11,6 +11,8 @@
+
+ #define WILC_FALSE_FRMWR_CHANNEL 100
+
++#define WILC_SCAN_WID_LIST_SIZE 6
++
+ struct wilc_rcvd_mac_info {
+ u8 status;
+ };
+@@ -151,7 +153,7 @@ int wilc_scan(struct wilc_vif *vif, u8 scan_source, u8 scan_type,
+ void *user_arg, struct cfg80211_scan_request *request)
+ {
+ int result = 0;
+- struct wid wid_list[5];
++ struct wid wid_list[WILC_SCAN_WID_LIST_SIZE];
+ u32 index = 0;
+ u32 i, scan_timeout;
+ u8 *buffer;
+diff --git a/drivers/target/loopback/tcm_loop.c b/drivers/target/loopback/tcm_loop.c
+index 3305b47fdf53..16d5a4e117a2 100644
+--- a/drivers/target/loopback/tcm_loop.c
++++ b/drivers/target/loopback/tcm_loop.c
+@@ -545,32 +545,15 @@ static int tcm_loop_write_pending(struct se_cmd *se_cmd)
+ return 0;
+ }
+
+-static int tcm_loop_queue_data_in(struct se_cmd *se_cmd)
++static int tcm_loop_queue_data_or_status(const char *func,
++ struct se_cmd *se_cmd, u8 scsi_status)
+ {
+ struct tcm_loop_cmd *tl_cmd = container_of(se_cmd,
+ struct tcm_loop_cmd, tl_se_cmd);
+ struct scsi_cmnd *sc = tl_cmd->sc;
+
+ pr_debug("%s() called for scsi_cmnd: %p cdb: 0x%02x\n",
+- __func__, sc, sc->cmnd[0]);
+-
+- sc->result = SAM_STAT_GOOD;
+- set_host_byte(sc, DID_OK);
+- if ((se_cmd->se_cmd_flags & SCF_OVERFLOW_BIT) ||
+- (se_cmd->se_cmd_flags & SCF_UNDERFLOW_BIT))
+- scsi_set_resid(sc, se_cmd->residual_count);
+- sc->scsi_done(sc);
+- return 0;
+-}
+-
+-static int tcm_loop_queue_status(struct se_cmd *se_cmd)
+-{
+- struct tcm_loop_cmd *tl_cmd = container_of(se_cmd,
+- struct tcm_loop_cmd, tl_se_cmd);
+- struct scsi_cmnd *sc = tl_cmd->sc;
+-
+- pr_debug("%s() called for scsi_cmnd: %p cdb: 0x%02x\n",
+- __func__, sc, sc->cmnd[0]);
++ func, sc, sc->cmnd[0]);
+
+ if (se_cmd->sense_buffer &&
+ ((se_cmd->se_cmd_flags & SCF_TRANSPORT_TASK_SENSE) ||
+@@ -581,7 +564,7 @@ static int tcm_loop_queue_status(struct se_cmd *se_cmd)
+ sc->result = SAM_STAT_CHECK_CONDITION;
+ set_driver_byte(sc, DRIVER_SENSE);
+ } else
+- sc->result = se_cmd->scsi_status;
++ sc->result = scsi_status;
+
+ set_host_byte(sc, DID_OK);
+ if ((se_cmd->se_cmd_flags & SCF_OVERFLOW_BIT) ||
+@@ -591,6 +574,17 @@ static int tcm_loop_queue_status(struct se_cmd *se_cmd)
+ return 0;
+ }
+
++static int tcm_loop_queue_data_in(struct se_cmd *se_cmd)
++{
++ return tcm_loop_queue_data_or_status(__func__, se_cmd, SAM_STAT_GOOD);
++}
++
++static int tcm_loop_queue_status(struct se_cmd *se_cmd)
++{
++ return tcm_loop_queue_data_or_status(__func__,
++ se_cmd, se_cmd->scsi_status);
++}
++
+ static void tcm_loop_queue_tm_rsp(struct se_cmd *se_cmd)
+ {
+ struct tcm_loop_cmd *tl_cmd = container_of(se_cmd,
+diff --git a/drivers/target/target_core_user.c b/drivers/target/target_core_user.c
+index f769bb1e3735..b63a1e0c4aa6 100644
+--- a/drivers/target/target_core_user.c
++++ b/drivers/target/target_core_user.c
+@@ -882,41 +882,24 @@ static inline size_t tcmu_cmd_get_cmd_size(struct tcmu_cmd *tcmu_cmd,
+ return command_size;
+ }
+
+-static int tcmu_setup_cmd_timer(struct tcmu_cmd *tcmu_cmd, unsigned int tmo,
+- struct timer_list *timer)
++static void tcmu_setup_cmd_timer(struct tcmu_cmd *tcmu_cmd, unsigned int tmo,
++ struct timer_list *timer)
+ {
+- struct tcmu_dev *udev = tcmu_cmd->tcmu_dev;
+- int cmd_id;
+-
+- if (tcmu_cmd->cmd_id)
+- goto setup_timer;
+-
+- cmd_id = idr_alloc(&udev->commands, tcmu_cmd, 1, USHRT_MAX, GFP_NOWAIT);
+- if (cmd_id < 0) {
+- pr_err("tcmu: Could not allocate cmd id.\n");
+- return cmd_id;
+- }
+- tcmu_cmd->cmd_id = cmd_id;
+-
+- pr_debug("allocated cmd %u for dev %s tmo %lu\n", tcmu_cmd->cmd_id,
+- udev->name, tmo / MSEC_PER_SEC);
+-
+-setup_timer:
+ if (!tmo)
+- return 0;
++ return;
+
+ tcmu_cmd->deadline = round_jiffies_up(jiffies + msecs_to_jiffies(tmo));
+ if (!timer_pending(timer))
+ mod_timer(timer, tcmu_cmd->deadline);
+
+- return 0;
++ pr_debug("Timeout set up for cmd %p, dev = %s, tmo = %lu\n", tcmu_cmd,
++ tcmu_cmd->tcmu_dev->name, tmo / MSEC_PER_SEC);
+ }
+
+ static int add_to_qfull_queue(struct tcmu_cmd *tcmu_cmd)
+ {
+ struct tcmu_dev *udev = tcmu_cmd->tcmu_dev;
+ unsigned int tmo;
+- int ret;
+
+ /*
+ * For backwards compat if qfull_time_out is not set use
+@@ -931,13 +914,11 @@ static int add_to_qfull_queue(struct tcmu_cmd *tcmu_cmd)
+ else
+ tmo = TCMU_TIME_OUT;
+
+- ret = tcmu_setup_cmd_timer(tcmu_cmd, tmo, &udev->qfull_timer);
+- if (ret)
+- return ret;
++ tcmu_setup_cmd_timer(tcmu_cmd, tmo, &udev->qfull_timer);
+
+ list_add_tail(&tcmu_cmd->queue_entry, &udev->qfull_queue);
+- pr_debug("adding cmd %u on dev %s to ring space wait queue\n",
+- tcmu_cmd->cmd_id, udev->name);
++ pr_debug("adding cmd %p on dev %s to ring space wait queue\n",
++ tcmu_cmd, udev->name);
+ return 0;
+ }
+
+@@ -959,7 +940,7 @@ static int queue_cmd_ring(struct tcmu_cmd *tcmu_cmd, sense_reason_t *scsi_err)
+ struct tcmu_mailbox *mb;
+ struct tcmu_cmd_entry *entry;
+ struct iovec *iov;
+- int iov_cnt, ret;
++ int iov_cnt, cmd_id;
+ uint32_t cmd_head;
+ uint64_t cdb_off;
+ bool copy_to_data_area;
+@@ -1060,14 +1041,21 @@ static int queue_cmd_ring(struct tcmu_cmd *tcmu_cmd, sense_reason_t *scsi_err)
+ }
+ entry->req.iov_bidi_cnt = iov_cnt;
+
+- ret = tcmu_setup_cmd_timer(tcmu_cmd, udev->cmd_time_out,
+- &udev->cmd_timer);
+- if (ret) {
+- tcmu_cmd_free_data(tcmu_cmd, tcmu_cmd->dbi_cnt);
++ cmd_id = idr_alloc(&udev->commands, tcmu_cmd, 1, USHRT_MAX, GFP_NOWAIT);
++ if (cmd_id < 0) {
++ pr_err("tcmu: Could not allocate cmd id.\n");
+
++ tcmu_cmd_free_data(tcmu_cmd, tcmu_cmd->dbi_cnt);
+ *scsi_err = TCM_OUT_OF_RESOURCES;
+ return -1;
+ }
++ tcmu_cmd->cmd_id = cmd_id;
++
++ pr_debug("allocated cmd id %u for cmd %p dev %s\n", tcmu_cmd->cmd_id,
++ tcmu_cmd, udev->name);
++
++ tcmu_setup_cmd_timer(tcmu_cmd, udev->cmd_time_out, &udev->cmd_timer);
++
+ entry->hdr.cmd_id = tcmu_cmd->cmd_id;
+
+ /*
+@@ -1279,50 +1267,39 @@ static unsigned int tcmu_handle_completions(struct tcmu_dev *udev)
+ return handled;
+ }
+
+-static int tcmu_check_expired_cmd(int id, void *p, void *data)
++static void tcmu_check_expired_ring_cmd(struct tcmu_cmd *cmd)
+ {
+- struct tcmu_cmd *cmd = p;
+- struct tcmu_dev *udev = cmd->tcmu_dev;
+- u8 scsi_status;
+ struct se_cmd *se_cmd;
+- bool is_running;
+-
+- if (test_bit(TCMU_CMD_BIT_EXPIRED, &cmd->flags))
+- return 0;
+
+ if (!time_after(jiffies, cmd->deadline))
+- return 0;
++ return;
+
+- is_running = test_bit(TCMU_CMD_BIT_INFLIGHT, &cmd->flags);
++ set_bit(TCMU_CMD_BIT_EXPIRED, &cmd->flags);
++ list_del_init(&cmd->queue_entry);
+ se_cmd = cmd->se_cmd;
++ cmd->se_cmd = NULL;
+
+- if (is_running) {
+- /*
+- * If cmd_time_out is disabled but qfull is set deadline
+- * will only reflect the qfull timeout. Ignore it.
+- */
+- if (!udev->cmd_time_out)
+- return 0;
++ pr_debug("Timing out inflight cmd %u on dev %s.\n",
++ cmd->cmd_id, cmd->tcmu_dev->name);
+
+- set_bit(TCMU_CMD_BIT_EXPIRED, &cmd->flags);
+- /*
+- * target_complete_cmd will translate this to LUN COMM FAILURE
+- */
+- scsi_status = SAM_STAT_CHECK_CONDITION;
+- list_del_init(&cmd->queue_entry);
+- cmd->se_cmd = NULL;
+- } else {
+- list_del_init(&cmd->queue_entry);
+- idr_remove(&udev->commands, id);
+- tcmu_free_cmd(cmd);
+- scsi_status = SAM_STAT_TASK_SET_FULL;
+- }
++ target_complete_cmd(se_cmd, SAM_STAT_CHECK_CONDITION);
++}
+
+- pr_debug("Timing out cmd %u on dev %s that is %s.\n",
+- id, udev->name, is_running ? "inflight" : "queued");
++static void tcmu_check_expired_queue_cmd(struct tcmu_cmd *cmd)
++{
++ struct se_cmd *se_cmd;
+
+- target_complete_cmd(se_cmd, scsi_status);
+- return 0;
++ if (!time_after(jiffies, cmd->deadline))
++ return;
++
++ pr_debug("Timing out queued cmd %p on dev %s.\n",
++ cmd, cmd->tcmu_dev->name);
++
++ list_del_init(&cmd->queue_entry);
++ se_cmd = cmd->se_cmd;
++ tcmu_free_cmd(cmd);
++
++ target_complete_cmd(se_cmd, SAM_STAT_TASK_SET_FULL);
+ }
+
+ static void tcmu_device_timedout(struct tcmu_dev *udev)
+@@ -1407,16 +1384,15 @@ static struct se_device *tcmu_alloc_device(struct se_hba *hba, const char *name)
+ return &udev->se_dev;
+ }
+
+-static bool run_qfull_queue(struct tcmu_dev *udev, bool fail)
++static void run_qfull_queue(struct tcmu_dev *udev, bool fail)
+ {
+ struct tcmu_cmd *tcmu_cmd, *tmp_cmd;
+ LIST_HEAD(cmds);
+- bool drained = true;
+ sense_reason_t scsi_ret;
+ int ret;
+
+ if (list_empty(&udev->qfull_queue))
+- return true;
++ return;
+
+ pr_debug("running %s's cmdr queue forcefail %d\n", udev->name, fail);
+
+@@ -1425,11 +1401,10 @@ static bool run_qfull_queue(struct tcmu_dev *udev, bool fail)
+ list_for_each_entry_safe(tcmu_cmd, tmp_cmd, &cmds, queue_entry) {
+ list_del_init(&tcmu_cmd->queue_entry);
+
+- pr_debug("removing cmd %u on dev %s from queue\n",
+- tcmu_cmd->cmd_id, udev->name);
++ pr_debug("removing cmd %p on dev %s from queue\n",
++ tcmu_cmd, udev->name);
+
+ if (fail) {
+- idr_remove(&udev->commands, tcmu_cmd->cmd_id);
+ /*
+ * We were not able to even start the command, so
+ * fail with busy to allow a retry in case runner
+@@ -1444,10 +1419,8 @@ static bool run_qfull_queue(struct tcmu_dev *udev, bool fail)
+
+ ret = queue_cmd_ring(tcmu_cmd, &scsi_ret);
+ if (ret < 0) {
+- pr_debug("cmd %u on dev %s failed with %u\n",
+- tcmu_cmd->cmd_id, udev->name, scsi_ret);
+-
+- idr_remove(&udev->commands, tcmu_cmd->cmd_id);
++ pr_debug("cmd %p on dev %s failed with %u\n",
++ tcmu_cmd, udev->name, scsi_ret);
+ /*
+ * Ignore scsi_ret for now. target_complete_cmd
+ * drops it.
+@@ -1462,13 +1435,11 @@ static bool run_qfull_queue(struct tcmu_dev *udev, bool fail)
+ * the queue
+ */
+ list_splice_tail(&cmds, &udev->qfull_queue);
+- drained = false;
+ break;
+ }
+ }
+
+ tcmu_set_next_deadline(&udev->qfull_queue, &udev->qfull_timer);
+- return drained;
+ }
+
+ static int tcmu_irqcontrol(struct uio_info *info, s32 irq_on)
+@@ -1652,6 +1623,8 @@ static void tcmu_dev_kref_release(struct kref *kref)
+ if (tcmu_check_and_free_pending_cmd(cmd) != 0)
+ all_expired = false;
+ }
++ if (!list_empty(&udev->qfull_queue))
++ all_expired = false;
+ idr_destroy(&udev->commands);
+ WARN_ON(!all_expired);
+
+@@ -2037,9 +2010,6 @@ static void tcmu_reset_ring(struct tcmu_dev *udev, u8 err_level)
+ mutex_lock(&udev->cmdr_lock);
+
+ idr_for_each_entry(&udev->commands, cmd, i) {
+- if (!test_bit(TCMU_CMD_BIT_INFLIGHT, &cmd->flags))
+- continue;
+-
+ pr_debug("removing cmd %u on dev %s from ring (is expired %d)\n",
+ cmd->cmd_id, udev->name,
+ test_bit(TCMU_CMD_BIT_EXPIRED, &cmd->flags));
+@@ -2077,6 +2047,8 @@ static void tcmu_reset_ring(struct tcmu_dev *udev, u8 err_level)
+
+ del_timer(&udev->cmd_timer);
+
++ run_qfull_queue(udev, false);
++
+ mutex_unlock(&udev->cmdr_lock);
+ }
+
+@@ -2698,6 +2670,7 @@ static void find_free_blocks(void)
+ static void check_timedout_devices(void)
+ {
+ struct tcmu_dev *udev, *tmp_dev;
++ struct tcmu_cmd *cmd, *tmp_cmd;
+ LIST_HEAD(devs);
+
+ spin_lock_bh(&timed_out_udevs_lock);
+@@ -2708,9 +2681,24 @@ static void check_timedout_devices(void)
+ spin_unlock_bh(&timed_out_udevs_lock);
+
+ mutex_lock(&udev->cmdr_lock);
+- idr_for_each(&udev->commands, tcmu_check_expired_cmd, NULL);
+
+- tcmu_set_next_deadline(&udev->inflight_queue, &udev->cmd_timer);
++ /*
++ * If cmd_time_out is disabled but qfull is set deadline
++ * will only reflect the qfull timeout. Ignore it.
++ */
++ if (udev->cmd_time_out) {
++ list_for_each_entry_safe(cmd, tmp_cmd,
++ &udev->inflight_queue,
++ queue_entry) {
++ tcmu_check_expired_ring_cmd(cmd);
++ }
++ tcmu_set_next_deadline(&udev->inflight_queue,
++ &udev->cmd_timer);
++ }
++ list_for_each_entry_safe(cmd, tmp_cmd, &udev->qfull_queue,
++ queue_entry) {
++ tcmu_check_expired_queue_cmd(cmd);
++ }
+ tcmu_set_next_deadline(&udev->qfull_queue, &udev->qfull_timer);
+
+ mutex_unlock(&udev->cmdr_lock);
+diff --git a/drivers/thermal/ti-soc-thermal/ti-thermal-common.c b/drivers/thermal/ti-soc-thermal/ti-thermal-common.c
+index d3e959d01606..85776db4bf34 100644
+--- a/drivers/thermal/ti-soc-thermal/ti-thermal-common.c
++++ b/drivers/thermal/ti-soc-thermal/ti-thermal-common.c
+@@ -169,7 +169,7 @@ int ti_thermal_expose_sensor(struct ti_bandgap *bgp, int id,
+
+ data = ti_bandgap_get_sensor_data(bgp, id);
+
+- if (!data || IS_ERR(data))
++	if (IS_ERR_OR_NULL(data))
+ data = ti_thermal_build_data(bgp, id);
+
+ if (!data)
+@@ -196,7 +196,7 @@ int ti_thermal_remove_sensor(struct ti_bandgap *bgp, int id)
+
+ data = ti_bandgap_get_sensor_data(bgp, id);
+
+- if (data && data->ti_thermal) {
++ if (!IS_ERR_OR_NULL(data) && data->ti_thermal) {
+ if (data->our_zone)
+ thermal_zone_device_unregister(data->ti_thermal);
+ }
+@@ -262,7 +262,7 @@ int ti_thermal_unregister_cpu_cooling(struct ti_bandgap *bgp, int id)
+
+ data = ti_bandgap_get_sensor_data(bgp, id);
+
+- if (data) {
++ if (!IS_ERR_OR_NULL(data)) {
+ cpufreq_cooling_unregister(data->cool_dev);
+ if (data->policy)
+ cpufreq_cpu_put(data->policy);
+diff --git a/drivers/tty/hvc/hvc_console.c b/drivers/tty/hvc/hvc_console.c
+index cdcc64ea2554..f8e43a6faea9 100644
+--- a/drivers/tty/hvc/hvc_console.c
++++ b/drivers/tty/hvc/hvc_console.c
+@@ -75,6 +75,8 @@ static LIST_HEAD(hvc_structs);
+ */
+ static DEFINE_MUTEX(hvc_structs_mutex);
+
++/* Mutex to serialize hvc_open */
++static DEFINE_MUTEX(hvc_open_mutex);
+ /*
+ * This value is used to assign a tty->index value to a hvc_struct based
+ * upon order of exposure via hvc_probe(), when we can not match it to
+@@ -346,16 +348,24 @@ static int hvc_install(struct tty_driver *driver, struct tty_struct *tty)
+ */
+ static int hvc_open(struct tty_struct *tty, struct file * filp)
+ {
+- struct hvc_struct *hp = tty->driver_data;
++ struct hvc_struct *hp;
+ unsigned long flags;
+ int rc = 0;
+
++ mutex_lock(&hvc_open_mutex);
++
++ hp = tty->driver_data;
++ if (!hp) {
++ rc = -EIO;
++ goto out;
++ }
++
+ spin_lock_irqsave(&hp->port.lock, flags);
+ /* Check and then increment for fast path open. */
+ if (hp->port.count++ > 0) {
+ spin_unlock_irqrestore(&hp->port.lock, flags);
+ hvc_kick();
+- return 0;
++ goto out;
+ } /* else count == 0 */
+ spin_unlock_irqrestore(&hp->port.lock, flags);
+
+@@ -383,6 +393,8 @@ static int hvc_open(struct tty_struct *tty, struct file * filp)
+ /* Force wakeup of the polling thread */
+ hvc_kick();
+
++out:
++ mutex_unlock(&hvc_open_mutex);
+ return rc;
+ }
+
+diff --git a/drivers/tty/n_gsm.c b/drivers/tty/n_gsm.c
+index d77ed82a4840..f189579db7c4 100644
+--- a/drivers/tty/n_gsm.c
++++ b/drivers/tty/n_gsm.c
+@@ -673,11 +673,10 @@ static struct gsm_msg *gsm_data_alloc(struct gsm_mux *gsm, u8 addr, int len,
+ * FIXME: lock against link layer control transmissions
+ */
+
+-static void gsm_data_kick(struct gsm_mux *gsm)
++static void gsm_data_kick(struct gsm_mux *gsm, struct gsm_dlci *dlci)
+ {
+ struct gsm_msg *msg, *nmsg;
+ int len;
+- int skip_sof = 0;
+
+ list_for_each_entry_safe(msg, nmsg, &gsm->tx_list, list) {
+ if (gsm->constipated && msg->addr)
+@@ -699,18 +698,23 @@ static void gsm_data_kick(struct gsm_mux *gsm)
+ print_hex_dump_bytes("gsm_data_kick: ",
+ DUMP_PREFIX_OFFSET,
+ gsm->txframe, len);
+-
+- if (gsm->output(gsm, gsm->txframe + skip_sof,
+- len - skip_sof) < 0)
++ if (gsm->output(gsm, gsm->txframe, len) < 0)
+ break;
+ /* FIXME: Can eliminate one SOF in many more cases */
+ gsm->tx_bytes -= msg->len;
+- /* For a burst of frames skip the extra SOF within the
+- burst */
+- skip_sof = 1;
+
+ list_del(&msg->list);
+ kfree(msg);
++
++ if (dlci) {
++ tty_port_tty_wakeup(&dlci->port);
++ } else {
++ int i = 0;
++
++ for (i = 0; i < NUM_DLCI; i++)
++ if (gsm->dlci[i])
++ tty_port_tty_wakeup(&gsm->dlci[i]->port);
++ }
+ }
+ }
+
+@@ -762,7 +766,7 @@ static void __gsm_data_queue(struct gsm_dlci *dlci, struct gsm_msg *msg)
+ /* Add to the actual output queue */
+ list_add_tail(&msg->list, &gsm->tx_list);
+ gsm->tx_bytes += msg->len;
+- gsm_data_kick(gsm);
++ gsm_data_kick(gsm, dlci);
+ }
+
+ /**
+@@ -1223,7 +1227,7 @@ static void gsm_control_message(struct gsm_mux *gsm, unsigned int command,
+ gsm_control_reply(gsm, CMD_FCON, NULL, 0);
+ /* Kick the link in case it is idling */
+ spin_lock_irqsave(&gsm->tx_lock, flags);
+- gsm_data_kick(gsm);
++ gsm_data_kick(gsm, NULL);
+ spin_unlock_irqrestore(&gsm->tx_lock, flags);
+ break;
+ case CMD_FCOFF:
+@@ -2545,7 +2549,7 @@ static void gsmld_write_wakeup(struct tty_struct *tty)
+ /* Queue poll */
+ clear_bit(TTY_DO_WRITE_WAKEUP, &tty->flags);
+ spin_lock_irqsave(&gsm->tx_lock, flags);
+- gsm_data_kick(gsm);
++ gsm_data_kick(gsm, NULL);
+ if (gsm->tx_bytes < TX_THRESH_LO) {
+ gsm_dlci_data_sweep(gsm);
+ }
+diff --git a/drivers/tty/serial/8250/8250_port.c b/drivers/tty/serial/8250/8250_port.c
+index f77bf820b7a3..4d83c85a7389 100644
+--- a/drivers/tty/serial/8250/8250_port.c
++++ b/drivers/tty/serial/8250/8250_port.c
+@@ -2615,6 +2615,8 @@ static unsigned int serial8250_get_baud_rate(struct uart_port *port,
+ struct ktermios *termios,
+ struct ktermios *old)
+ {
++ unsigned int tolerance = port->uartclk / 100;
++
+ /*
+ * Ask the core to calculate the divisor for us.
+ * Allow 1% tolerance at the upper limit so uart clks marginally
+@@ -2623,7 +2625,7 @@ static unsigned int serial8250_get_baud_rate(struct uart_port *port,
+ */
+ return uart_get_baud_rate(port, termios, old,
+ port->uartclk / 16 / UART_DIV_MAX,
+- port->uartclk);
++ (port->uartclk + tolerance) / 16);
+ }
+
+ void
+diff --git a/drivers/usb/cdns3/cdns3-ti.c b/drivers/usb/cdns3/cdns3-ti.c
+index 5685ba11480b..e701ab56b0a7 100644
+--- a/drivers/usb/cdns3/cdns3-ti.c
++++ b/drivers/usb/cdns3/cdns3-ti.c
+@@ -138,7 +138,7 @@ static int cdns_ti_probe(struct platform_device *pdev)
+ error = pm_runtime_get_sync(dev);
+ if (error < 0) {
+ dev_err(dev, "pm_runtime_get_sync failed: %d\n", error);
+- goto err_get;
++ goto err;
+ }
+
+ /* assert RESET */
+@@ -185,7 +185,6 @@ static int cdns_ti_probe(struct platform_device *pdev)
+
+ err:
+ pm_runtime_put_sync(data->dev);
+-err_get:
+ pm_runtime_disable(data->dev);
+
+ return error;
+diff --git a/drivers/usb/class/usblp.c b/drivers/usb/class/usblp.c
+index 0d8e3f3804a3..084c48c5848f 100644
+--- a/drivers/usb/class/usblp.c
++++ b/drivers/usb/class/usblp.c
+@@ -468,7 +468,8 @@ static int usblp_release(struct inode *inode, struct file *file)
+ usb_autopm_put_interface(usblp->intf);
+
+ if (!usblp->present) /* finish cleanup from disconnect */
+- usblp_cleanup(usblp);
++ usblp_cleanup(usblp); /* any URBs must be dead */
++
+ mutex_unlock(&usblp_mutex);
+ return 0;
+ }
+@@ -1375,9 +1376,11 @@ static void usblp_disconnect(struct usb_interface *intf)
+
+ usblp_unlink_urbs(usblp);
+ mutex_unlock(&usblp->mut);
++ usb_poison_anchored_urbs(&usblp->urbs);
+
+ if (!usblp->used)
+ usblp_cleanup(usblp);
++
+ mutex_unlock(&usblp_mutex);
+ }
+
+diff --git a/drivers/usb/dwc2/core_intr.c b/drivers/usb/dwc2/core_intr.c
+index 876ff31261d5..55f1d14fc414 100644
+--- a/drivers/usb/dwc2/core_intr.c
++++ b/drivers/usb/dwc2/core_intr.c
+@@ -416,10 +416,13 @@ static void dwc2_handle_wakeup_detected_intr(struct dwc2_hsotg *hsotg)
+ if (ret && (ret != -ENOTSUPP))
+ dev_err(hsotg->dev, "exit power_down failed\n");
+
++ /* Change to L0 state */
++ hsotg->lx_state = DWC2_L0;
+ call_gadget(hsotg, resume);
++ } else {
++ /* Change to L0 state */
++ hsotg->lx_state = DWC2_L0;
+ }
+- /* Change to L0 state */
+- hsotg->lx_state = DWC2_L0;
+ } else {
+ if (hsotg->params.power_down)
+ return;
+diff --git a/drivers/usb/dwc3/dwc3-meson-g12a.c b/drivers/usb/dwc3/dwc3-meson-g12a.c
+index b81d085bc534..eabb3bb6fcaa 100644
+--- a/drivers/usb/dwc3/dwc3-meson-g12a.c
++++ b/drivers/usb/dwc3/dwc3-meson-g12a.c
+@@ -505,7 +505,7 @@ static int dwc3_meson_g12a_probe(struct platform_device *pdev)
+ if (IS_ERR(priv->reset)) {
+ ret = PTR_ERR(priv->reset);
+ dev_err(dev, "failed to get device reset, err=%d\n", ret);
+- return ret;
++ goto err_disable_clks;
+ }
+
+ ret = reset_control_reset(priv->reset);
+@@ -525,7 +525,9 @@ static int dwc3_meson_g12a_probe(struct platform_device *pdev)
+ /* Get dr_mode */
+ priv->otg_mode = usb_get_dr_mode(dev);
+
+- dwc3_meson_g12a_usb_init(priv);
++ ret = dwc3_meson_g12a_usb_init(priv);
++ if (ret)
++ goto err_disable_clks;
+
+ /* Init PHYs */
+ for (i = 0 ; i < PHY_COUNT ; ++i) {
+diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c
+index 585cb3deea7a..de3b92680935 100644
+--- a/drivers/usb/dwc3/gadget.c
++++ b/drivers/usb/dwc3/gadget.c
+@@ -1220,6 +1220,8 @@ static void dwc3_prepare_trbs(struct dwc3_ep *dep)
+ }
+ }
+
++static void dwc3_gadget_ep_cleanup_cancelled_requests(struct dwc3_ep *dep);
++
+ static int __dwc3_gadget_kick_transfer(struct dwc3_ep *dep)
+ {
+ struct dwc3_gadget_ep_cmd_params params;
+@@ -1259,14 +1261,20 @@ static int __dwc3_gadget_kick_transfer(struct dwc3_ep *dep)
+
+ ret = dwc3_send_gadget_ep_cmd(dep, cmd, ¶ms);
+ if (ret < 0) {
+- /*
+- * FIXME we need to iterate over the list of requests
+- * here and stop, unmap, free and del each of the linked
+- * requests instead of what we do now.
+- */
+- if (req->trb)
+- memset(req->trb, 0, sizeof(struct dwc3_trb));
+- dwc3_gadget_del_and_unmap_request(dep, req, ret);
++ struct dwc3_request *tmp;
++
++ if (ret == -EAGAIN)
++ return ret;
++
++ dwc3_stop_active_transfer(dep, true, true);
++
++ list_for_each_entry_safe(req, tmp, &dep->started_list, list)
++ dwc3_gadget_move_cancelled_request(req);
++
++ /* If ep isn't started, then there's no end transfer pending */
++ if (!(dep->flags & DWC3_EP_END_TRANSFER_PENDING))
++ dwc3_gadget_ep_cleanup_cancelled_requests(dep);
++
+ return ret;
+ }
+
+@@ -1508,6 +1516,10 @@ static void dwc3_gadget_ep_skip_trbs(struct dwc3_ep *dep, struct dwc3_request *r
+ {
+ int i;
+
++ /* If req->trb is not set, then the request has not started */
++ if (!req->trb)
++ return;
++
+ /*
+ * If request was already started, this means we had to
+ * stop the transfer. With that we also need to ignore
+@@ -1598,6 +1610,8 @@ int __dwc3_gadget_ep_set_halt(struct dwc3_ep *dep, int value, int protocol)
+ {
+ struct dwc3_gadget_ep_cmd_params params;
+ struct dwc3 *dwc = dep->dwc;
++ struct dwc3_request *req;
++ struct dwc3_request *tmp;
+ int ret;
+
+ if (usb_endpoint_xfer_isoc(dep->endpoint.desc)) {
+@@ -1634,13 +1648,37 @@ int __dwc3_gadget_ep_set_halt(struct dwc3_ep *dep, int value, int protocol)
+ else
+ dep->flags |= DWC3_EP_STALL;
+ } else {
++ /*
++ * Don't issue CLEAR_STALL command to control endpoints. The
++ * controller automatically clears the STALL when it receives
++ * the SETUP token.
++ */
++ if (dep->number <= 1) {
++ dep->flags &= ~(DWC3_EP_STALL | DWC3_EP_WEDGE);
++ return 0;
++ }
+
+ ret = dwc3_send_clear_stall_ep_cmd(dep);
+- if (ret)
++ if (ret) {
+ dev_err(dwc->dev, "failed to clear STALL on %s\n",
+ dep->name);
+- else
+- dep->flags &= ~(DWC3_EP_STALL | DWC3_EP_WEDGE);
++ return ret;
++ }
++
++ dep->flags &= ~(DWC3_EP_STALL | DWC3_EP_WEDGE);
++
++ dwc3_stop_active_transfer(dep, true, true);
++
++ list_for_each_entry_safe(req, tmp, &dep->started_list, list)
++ dwc3_gadget_move_cancelled_request(req);
++
++ list_for_each_entry_safe(req, tmp, &dep->pending_list, list)
++ dwc3_gadget_move_cancelled_request(req);
++
++ if (!(dep->flags & DWC3_EP_END_TRANSFER_PENDING)) {
++ dep->flags &= ~DWC3_EP_DELAY_START;
++ dwc3_gadget_ep_cleanup_cancelled_requests(dep);
++ }
+ }
+
+ return ret;
+diff --git a/drivers/usb/gadget/composite.c b/drivers/usb/gadget/composite.c
+index cb4950cf1cdc..5c1eb96a5c57 100644
+--- a/drivers/usb/gadget/composite.c
++++ b/drivers/usb/gadget/composite.c
+@@ -96,40 +96,43 @@ function_descriptors(struct usb_function *f,
+ }
+
+ /**
+- * next_ep_desc() - advance to the next EP descriptor
++ * next_desc() - advance to the next desc_type descriptor
+ * @t: currect pointer within descriptor array
++ * @desc_type: descriptor type
+ *
+- * Return: next EP descriptor or NULL
++ * Return: next desc_type descriptor or NULL
+ *
+- * Iterate over @t until either EP descriptor found or
++ * Iterate over @t until either desc_type descriptor found or
+ * NULL (that indicates end of list) encountered
+ */
+ static struct usb_descriptor_header**
+-next_ep_desc(struct usb_descriptor_header **t)
++next_desc(struct usb_descriptor_header **t, u8 desc_type)
+ {
+ for (; *t; t++) {
+- if ((*t)->bDescriptorType == USB_DT_ENDPOINT)
++ if ((*t)->bDescriptorType == desc_type)
+ return t;
+ }
+ return NULL;
+ }
+
+ /*
+- * for_each_ep_desc()- iterate over endpoint descriptors in the
+- * descriptors list
+- * @start: pointer within descriptor array.
+- * @ep_desc: endpoint descriptor to use as the loop cursor
++ * for_each_desc() - iterate over desc_type descriptors in the
++ * descriptors list
++ * @start: pointer within descriptor array.
++ * @iter_desc: desc_type descriptor to use as the loop cursor
++ * @desc_type: wanted descriptr type
+ */
+-#define for_each_ep_desc(start, ep_desc) \
+- for (ep_desc = next_ep_desc(start); \
+- ep_desc; ep_desc = next_ep_desc(ep_desc+1))
++#define for_each_desc(start, iter_desc, desc_type) \
++ for (iter_desc = next_desc(start, desc_type); \
++ iter_desc; iter_desc = next_desc(iter_desc + 1, desc_type))
+
+ /**
+- * config_ep_by_speed() - configures the given endpoint
++ * config_ep_by_speed_and_alt() - configures the given endpoint
+ * according to gadget speed.
+ * @g: pointer to the gadget
+ * @f: usb function
+ * @_ep: the endpoint to configure
++ * @alt: alternate setting number
+ *
+ * Return: error code, 0 on success
+ *
+@@ -142,11 +145,13 @@ next_ep_desc(struct usb_descriptor_header **t)
+ * Note: the supplied function should hold all the descriptors
+ * for supported speeds
+ */
+-int config_ep_by_speed(struct usb_gadget *g,
+- struct usb_function *f,
+- struct usb_ep *_ep)
++int config_ep_by_speed_and_alt(struct usb_gadget *g,
++ struct usb_function *f,
++ struct usb_ep *_ep,
++ u8 alt)
+ {
+ struct usb_endpoint_descriptor *chosen_desc = NULL;
++ struct usb_interface_descriptor *int_desc = NULL;
+ struct usb_descriptor_header **speed_desc = NULL;
+
+ struct usb_ss_ep_comp_descriptor *comp_desc = NULL;
+@@ -182,8 +187,21 @@ int config_ep_by_speed(struct usb_gadget *g,
+ default:
+ speed_desc = f->fs_descriptors;
+ }
++
++ /* find correct alternate setting descriptor */
++ for_each_desc(speed_desc, d_spd, USB_DT_INTERFACE) {
++ int_desc = (struct usb_interface_descriptor *)*d_spd;
++
++ if (int_desc->bAlternateSetting == alt) {
++ speed_desc = d_spd;
++ goto intf_found;
++ }
++ }
++ return -EIO;
++
++intf_found:
+ /* find descriptors */
+- for_each_ep_desc(speed_desc, d_spd) {
++ for_each_desc(speed_desc, d_spd, USB_DT_ENDPOINT) {
+ chosen_desc = (struct usb_endpoint_descriptor *)*d_spd;
+ if (chosen_desc->bEndpointAddress == _ep->address)
+ goto ep_found;
+@@ -237,6 +255,32 @@ ep_found:
+ }
+ return 0;
+ }
++EXPORT_SYMBOL_GPL(config_ep_by_speed_and_alt);
++
++/**
++ * config_ep_by_speed() - configures the given endpoint
++ * according to gadget speed.
++ * @g: pointer to the gadget
++ * @f: usb function
++ * @_ep: the endpoint to configure
++ *
++ * Return: error code, 0 on success
++ *
++ * This function chooses the right descriptors for a given
++ * endpoint according to gadget speed and saves it in the
++ * endpoint desc field. If the endpoint already has a descriptor
++ * assigned to it - overwrites it with currently corresponding
++ * descriptor. The endpoint maxpacket field is updated according
++ * to the chosen descriptor.
++ * Note: the supplied function should hold all the descriptors
++ * for supported speeds
++ */
++int config_ep_by_speed(struct usb_gadget *g,
++ struct usb_function *f,
++ struct usb_ep *_ep)
++{
++ return config_ep_by_speed_and_alt(g, f, _ep, 0);
++}
+ EXPORT_SYMBOL_GPL(config_ep_by_speed);
+
+ /**
+diff --git a/drivers/usb/gadget/udc/core.c b/drivers/usb/gadget/udc/core.c
+index 9b11046480fe..2e28dde8376f 100644
+--- a/drivers/usb/gadget/udc/core.c
++++ b/drivers/usb/gadget/udc/core.c
+@@ -1297,6 +1297,8 @@ static void usb_gadget_remove_driver(struct usb_udc *udc)
+ kobject_uevent(&udc->dev.kobj, KOBJ_CHANGE);
+
+ usb_gadget_disconnect(udc->gadget);
++ if (udc->gadget->irq)
++ synchronize_irq(udc->gadget->irq);
+ udc->driver->unbind(udc->gadget);
+ usb_gadget_udc_stop(udc);
+
+diff --git a/drivers/usb/gadget/udc/lpc32xx_udc.c b/drivers/usb/gadget/udc/lpc32xx_udc.c
+index cb997b82c008..465d0b7c6522 100644
+--- a/drivers/usb/gadget/udc/lpc32xx_udc.c
++++ b/drivers/usb/gadget/udc/lpc32xx_udc.c
+@@ -1614,17 +1614,17 @@ static int lpc32xx_ep_enable(struct usb_ep *_ep,
+ const struct usb_endpoint_descriptor *desc)
+ {
+ struct lpc32xx_ep *ep = container_of(_ep, struct lpc32xx_ep, ep);
+- struct lpc32xx_udc *udc = ep->udc;
++ struct lpc32xx_udc *udc;
+ u16 maxpacket;
+ u32 tmp;
+ unsigned long flags;
+
+ /* Verify EP data */
+ if ((!_ep) || (!ep) || (!desc) ||
+- (desc->bDescriptorType != USB_DT_ENDPOINT)) {
+- dev_dbg(udc->dev, "bad ep or descriptor\n");
++ (desc->bDescriptorType != USB_DT_ENDPOINT))
+ return -EINVAL;
+- }
++
++ udc = ep->udc;
+ maxpacket = usb_endpoint_maxp(desc);
+ if ((maxpacket == 0) || (maxpacket > ep->maxpacket)) {
+ dev_dbg(udc->dev, "bad ep descriptor's packet size\n");
+@@ -1872,7 +1872,7 @@ static int lpc32xx_ep_dequeue(struct usb_ep *_ep, struct usb_request *_req)
+ static int lpc32xx_ep_set_halt(struct usb_ep *_ep, int value)
+ {
+ struct lpc32xx_ep *ep = container_of(_ep, struct lpc32xx_ep, ep);
+- struct lpc32xx_udc *udc = ep->udc;
++ struct lpc32xx_udc *udc;
+ unsigned long flags;
+
+ if ((!ep) || (ep->hwep_num <= 1))
+@@ -1882,6 +1882,7 @@ static int lpc32xx_ep_set_halt(struct usb_ep *_ep, int value)
+ if (ep->is_in)
+ return -EAGAIN;
+
++ udc = ep->udc;
+ spin_lock_irqsave(&udc->lock, flags);
+
+ if (value == 1) {
+diff --git a/drivers/usb/gadget/udc/m66592-udc.c b/drivers/usb/gadget/udc/m66592-udc.c
+index 75d16a8902e6..931e6362a13d 100644
+--- a/drivers/usb/gadget/udc/m66592-udc.c
++++ b/drivers/usb/gadget/udc/m66592-udc.c
+@@ -1667,7 +1667,7 @@ static int m66592_probe(struct platform_device *pdev)
+
+ err_add_udc:
+ m66592_free_request(&m66592->ep[0].ep, m66592->ep0_req);
+-
++ m66592->ep0_req = NULL;
+ clean_up3:
+ if (m66592->pdata->on_chip) {
+ clk_disable(m66592->clk);
+diff --git a/drivers/usb/gadget/udc/s3c2410_udc.c b/drivers/usb/gadget/udc/s3c2410_udc.c
+index 0507a2ca0f55..80002d97b59d 100644
+--- a/drivers/usb/gadget/udc/s3c2410_udc.c
++++ b/drivers/usb/gadget/udc/s3c2410_udc.c
+@@ -251,10 +251,6 @@ static void s3c2410_udc_done(struct s3c2410_ep *ep,
+ static void s3c2410_udc_nuke(struct s3c2410_udc *udc,
+ struct s3c2410_ep *ep, int status)
+ {
+- /* Sanity check */
+- if (&ep->queue == NULL)
+- return;
+-
+ while (!list_empty(&ep->queue)) {
+ struct s3c2410_request *req;
+ req = list_entry(ep->queue.next, struct s3c2410_request,
+diff --git a/drivers/usb/host/ehci-mxc.c b/drivers/usb/host/ehci-mxc.c
+index c9f91e6c72b6..7f65c86047dd 100644
+--- a/drivers/usb/host/ehci-mxc.c
++++ b/drivers/usb/host/ehci-mxc.c
+@@ -50,6 +50,8 @@ static int ehci_mxc_drv_probe(struct platform_device *pdev)
+ }
+
+ irq = platform_get_irq(pdev, 0);
++ if (irq < 0)
++ return irq;
+
+ hcd = usb_create_hcd(&ehci_mxc_hc_driver, dev, dev_name(dev));
+ if (!hcd)
+diff --git a/drivers/usb/host/ehci-platform.c b/drivers/usb/host/ehci-platform.c
+index e4fc3f66d43b..e9a49007cce4 100644
+--- a/drivers/usb/host/ehci-platform.c
++++ b/drivers/usb/host/ehci-platform.c
+@@ -455,6 +455,10 @@ static int ehci_platform_resume(struct device *dev)
+
+ ehci_resume(hcd, priv->reset_on_resume);
+
++ pm_runtime_disable(dev);
++ pm_runtime_set_active(dev);
++ pm_runtime_enable(dev);
++
+ if (priv->quirk_poll)
+ quirk_poll_init(priv);
+
+diff --git a/drivers/usb/host/ohci-platform.c b/drivers/usb/host/ohci-platform.c
+index 7addfc2cbadc..4a8456f12a73 100644
+--- a/drivers/usb/host/ohci-platform.c
++++ b/drivers/usb/host/ohci-platform.c
+@@ -299,6 +299,11 @@ static int ohci_platform_resume(struct device *dev)
+ }
+
+ ohci_resume(hcd, false);
++
++ pm_runtime_disable(dev);
++ pm_runtime_set_active(dev);
++ pm_runtime_enable(dev);
++
+ return 0;
+ }
+ #endif /* CONFIG_PM_SLEEP */
+diff --git a/drivers/usb/host/ohci-sm501.c b/drivers/usb/host/ohci-sm501.c
+index c158cda9e4b9..cff965240327 100644
+--- a/drivers/usb/host/ohci-sm501.c
++++ b/drivers/usb/host/ohci-sm501.c
+@@ -157,9 +157,10 @@ static int ohci_hcd_sm501_drv_probe(struct platform_device *pdev)
+ * the call to usb_hcd_setup_local_mem() below does just that.
+ */
+
+- if (usb_hcd_setup_local_mem(hcd, mem->start,
+- mem->start - mem->parent->start,
+- resource_size(mem)) < 0)
++ retval = usb_hcd_setup_local_mem(hcd, mem->start,
++ mem->start - mem->parent->start,
++ resource_size(mem));
++ if (retval < 0)
+ goto err5;
+ retval = usb_add_hcd(hcd, irq, IRQF_SHARED);
+ if (retval)
+diff --git a/drivers/usb/host/xhci-plat.c b/drivers/usb/host/xhci-plat.c
+index ea460b9682d5..ca82e2c61ddc 100644
+--- a/drivers/usb/host/xhci-plat.c
++++ b/drivers/usb/host/xhci-plat.c
+@@ -409,7 +409,15 @@ static int __maybe_unused xhci_plat_resume(struct device *dev)
+ if (ret)
+ return ret;
+
+- return xhci_resume(xhci, 0);
++ ret = xhci_resume(xhci, 0);
++ if (ret)
++ return ret;
++
++ pm_runtime_disable(dev);
++ pm_runtime_set_active(dev);
++ pm_runtime_enable(dev);
++
++ return 0;
+ }
+
+ static int __maybe_unused xhci_plat_runtime_suspend(struct device *dev)
+diff --git a/drivers/usb/roles/class.c b/drivers/usb/roles/class.c
+index 5b17709821df..27d92af29635 100644
+--- a/drivers/usb/roles/class.c
++++ b/drivers/usb/roles/class.c
+@@ -49,8 +49,10 @@ int usb_role_switch_set_role(struct usb_role_switch *sw, enum usb_role role)
+ mutex_lock(&sw->lock);
+
+ ret = sw->set(sw, role);
+- if (!ret)
++ if (!ret) {
+ sw->role = role;
++ kobject_uevent(&sw->dev.kobj, KOBJ_CHANGE);
++ }
+
+ mutex_unlock(&sw->lock);
+
+diff --git a/drivers/vfio/mdev/mdev_sysfs.c b/drivers/vfio/mdev/mdev_sysfs.c
+index 8ad14e5c02bf..917fd84c1c6f 100644
+--- a/drivers/vfio/mdev/mdev_sysfs.c
++++ b/drivers/vfio/mdev/mdev_sysfs.c
+@@ -110,7 +110,7 @@ static struct mdev_type *add_mdev_supported_type(struct mdev_parent *parent,
+ "%s-%s", dev_driver_string(parent->dev),
+ group->name);
+ if (ret) {
+- kfree(type);
++ kobject_put(&type->kobj);
+ return ERR_PTR(ret);
+ }
+
+diff --git a/drivers/vfio/pci/vfio_pci_config.c b/drivers/vfio/pci/vfio_pci_config.c
+index 90c0b80f8acf..814bcbe0dd4e 100644
+--- a/drivers/vfio/pci/vfio_pci_config.c
++++ b/drivers/vfio/pci/vfio_pci_config.c
+@@ -1462,7 +1462,12 @@ static int vfio_cap_init(struct vfio_pci_device *vdev)
+ if (ret)
+ return ret;
+
+- if (cap <= PCI_CAP_ID_MAX) {
++ /*
++ * ID 0 is a NULL capability, conflicting with our fake
++ * PCI_CAP_ID_BASIC. As it has no content, consider it
++ * hidden for now.
++ */
++ if (cap && cap <= PCI_CAP_ID_MAX) {
+ len = pci_cap_length[cap];
+ if (len == 0xFF) { /* Variable length */
+ len = vfio_cap_len(vdev, cap, pos);
+@@ -1728,8 +1733,11 @@ void vfio_config_free(struct vfio_pci_device *vdev)
+ vdev->vconfig = NULL;
+ kfree(vdev->pci_config_map);
+ vdev->pci_config_map = NULL;
+- kfree(vdev->msi_perm);
+- vdev->msi_perm = NULL;
++ if (vdev->msi_perm) {
++ free_perm_bits(vdev->msi_perm);
++ kfree(vdev->msi_perm);
++ vdev->msi_perm = NULL;
++ }
+ }
+
+ /*
+diff --git a/drivers/vhost/scsi.c b/drivers/vhost/scsi.c
+index c39952243fd3..8b104f76f324 100644
+--- a/drivers/vhost/scsi.c
++++ b/drivers/vhost/scsi.c
+@@ -2280,6 +2280,7 @@ static struct configfs_attribute *vhost_scsi_wwn_attrs[] = {
+ static const struct target_core_fabric_ops vhost_scsi_ops = {
+ .module = THIS_MODULE,
+ .fabric_name = "vhost",
++ .max_data_sg_nents = VHOST_SCSI_PREALLOC_SGLS,
+ .tpg_get_wwn = vhost_scsi_get_fabric_wwn,
+ .tpg_get_tag = vhost_scsi_get_tpgt,
+ .tpg_check_demo_mode = vhost_scsi_check_true,
+diff --git a/drivers/video/backlight/lp855x_bl.c b/drivers/video/backlight/lp855x_bl.c
+index f68920131a4a..e94932c69f54 100644
+--- a/drivers/video/backlight/lp855x_bl.c
++++ b/drivers/video/backlight/lp855x_bl.c
+@@ -456,7 +456,7 @@ static int lp855x_probe(struct i2c_client *cl, const struct i2c_device_id *id)
+ ret = regulator_enable(lp->enable);
+ if (ret < 0) {
+ dev_err(lp->dev, "failed to enable vddio: %d\n", ret);
+- return ret;
++ goto disable_supply;
+ }
+
+ /*
+@@ -471,24 +471,34 @@ static int lp855x_probe(struct i2c_client *cl, const struct i2c_device_id *id)
+ ret = lp855x_configure(lp);
+ if (ret) {
+ dev_err(lp->dev, "device config err: %d", ret);
+- return ret;
++ goto disable_vddio;
+ }
+
+ ret = lp855x_backlight_register(lp);
+ if (ret) {
+ dev_err(lp->dev,
+ "failed to register backlight. err: %d\n", ret);
+- return ret;
++ goto disable_vddio;
+ }
+
+ ret = sysfs_create_group(&lp->dev->kobj, &lp855x_attr_group);
+ if (ret) {
+ dev_err(lp->dev, "failed to register sysfs. err: %d\n", ret);
+- return ret;
++ goto disable_vddio;
+ }
+
+ backlight_update_status(lp->bl);
++
+ return 0;
++
++disable_vddio:
++ if (lp->enable)
++ regulator_disable(lp->enable);
++disable_supply:
++ if (lp->supply)
++ regulator_disable(lp->supply);
++
++ return ret;
+ }
+
+ static int lp855x_remove(struct i2c_client *cl)
+@@ -497,6 +507,8 @@ static int lp855x_remove(struct i2c_client *cl)
+
+ lp->bl->props.brightness = 0;
+ backlight_update_status(lp->bl);
++ if (lp->enable)
++ regulator_disable(lp->enable);
+ if (lp->supply)
+ regulator_disable(lp->supply);
+ sysfs_remove_group(&lp->dev->kobj, &lp855x_attr_group);
+diff --git a/drivers/watchdog/da9062_wdt.c b/drivers/watchdog/da9062_wdt.c
+index 0ad15d55071c..18dec438d518 100644
+--- a/drivers/watchdog/da9062_wdt.c
++++ b/drivers/watchdog/da9062_wdt.c
+@@ -58,11 +58,6 @@ static int da9062_wdt_update_timeout_register(struct da9062_watchdog *wdt,
+ unsigned int regval)
+ {
+ struct da9062 *chip = wdt->hw;
+- int ret;
+-
+- ret = da9062_reset_watchdog_timer(wdt);
+- if (ret)
+- return ret;
+
+ regmap_update_bits(chip->regmap,
+ DA9062AA_CONTROL_D,
+diff --git a/drivers/xen/cpu_hotplug.c b/drivers/xen/cpu_hotplug.c
+index ec975decb5de..b96b11e2b571 100644
+--- a/drivers/xen/cpu_hotplug.c
++++ b/drivers/xen/cpu_hotplug.c
+@@ -93,10 +93,8 @@ static int setup_cpu_watcher(struct notifier_block *notifier,
+ (void)register_xenbus_watch(&cpu_watch);
+
+ for_each_possible_cpu(cpu) {
+- if (vcpu_online(cpu) == 0) {
+- device_offline(get_cpu_device(cpu));
+- set_cpu_present(cpu, false);
+- }
++ if (vcpu_online(cpu) == 0)
++ disable_hotplug_cpu(cpu);
+ }
+
+ return NOTIFY_DONE;
+@@ -119,5 +117,5 @@ static int __init setup_vcpu_hotplug_event(void)
+ return 0;
+ }
+
+-arch_initcall(setup_vcpu_hotplug_event);
++late_initcall(setup_vcpu_hotplug_event);
+
+diff --git a/fs/afs/cmservice.c b/fs/afs/cmservice.c
+index 380ad5ace7cf..3a9b8b1f5f2b 100644
+--- a/fs/afs/cmservice.c
++++ b/fs/afs/cmservice.c
+@@ -305,8 +305,7 @@ static int afs_deliver_cb_callback(struct afs_call *call)
+ call->count = ntohl(call->tmp);
+ _debug("FID count: %u", call->count);
+ if (call->count > AFSCBMAX)
+- return afs_protocol_error(call, -EBADMSG,
+- afs_eproto_cb_fid_count);
++ return afs_protocol_error(call, afs_eproto_cb_fid_count);
+
+ call->buffer = kmalloc(array3_size(call->count, 3, 4),
+ GFP_KERNEL);
+@@ -351,8 +350,7 @@ static int afs_deliver_cb_callback(struct afs_call *call)
+ call->count2 = ntohl(call->tmp);
+ _debug("CB count: %u", call->count2);
+ if (call->count2 != call->count && call->count2 != 0)
+- return afs_protocol_error(call, -EBADMSG,
+- afs_eproto_cb_count);
++ return afs_protocol_error(call, afs_eproto_cb_count);
+ call->iter = &call->def_iter;
+ iov_iter_discard(&call->def_iter, READ, call->count2 * 3 * 4);
+ call->unmarshall++;
+@@ -672,8 +670,7 @@ static int afs_deliver_yfs_cb_callback(struct afs_call *call)
+ call->count = ntohl(call->tmp);
+ _debug("FID count: %u", call->count);
+ if (call->count > YFSCBMAX)
+- return afs_protocol_error(call, -EBADMSG,
+- afs_eproto_cb_fid_count);
++ return afs_protocol_error(call, afs_eproto_cb_fid_count);
+
+ size = array_size(call->count, sizeof(struct yfs_xdr_YFSFid));
+ call->buffer = kmalloc(size, GFP_KERNEL);
+diff --git a/fs/afs/dir.c b/fs/afs/dir.c
+index d1e1caa23c8b..3c486340b220 100644
+--- a/fs/afs/dir.c
++++ b/fs/afs/dir.c
+@@ -658,7 +658,8 @@ static struct inode *afs_do_lookup(struct inode *dir, struct dentry *dentry,
+
+ cookie->ctx.actor = afs_lookup_filldir;
+ cookie->name = dentry->d_name;
+- cookie->nr_fids = 1; /* slot 0 is saved for the fid we actually want */
++ cookie->nr_fids = 2; /* slot 0 is saved for the fid we actually want
++ * and slot 1 for the directory */
+
+ read_seqlock_excl(&dvnode->cb_lock);
+ dcbi = rcu_dereference_protected(dvnode->cb_interest,
+@@ -709,7 +710,11 @@ static struct inode *afs_do_lookup(struct inode *dir, struct dentry *dentry,
+ if (!cookie->inodes)
+ goto out_s;
+
+- for (i = 1; i < cookie->nr_fids; i++) {
++ cookie->fids[1] = dvnode->fid;
++ cookie->statuses[1].cb_break = afs_calc_vnode_cb_break(dvnode);
++ cookie->inodes[1] = igrab(&dvnode->vfs_inode);
++
++ for (i = 2; i < cookie->nr_fids; i++) {
+ scb = &cookie->statuses[i];
+
+ /* Find any inodes that already exist and get their
+diff --git a/fs/afs/fsclient.c b/fs/afs/fsclient.c
+index d2b3798c1932..7bca0c13d0c4 100644
+--- a/fs/afs/fsclient.c
++++ b/fs/afs/fsclient.c
+@@ -56,16 +56,15 @@ static void xdr_dump_bad(const __be32 *bp)
+ /*
+ * decode an AFSFetchStatus block
+ */
+-static int xdr_decode_AFSFetchStatus(const __be32 **_bp,
+- struct afs_call *call,
+- struct afs_status_cb *scb)
++static void xdr_decode_AFSFetchStatus(const __be32 **_bp,
++ struct afs_call *call,
++ struct afs_status_cb *scb)
+ {
+ const struct afs_xdr_AFSFetchStatus *xdr = (const void *)*_bp;
+ struct afs_file_status *status = &scb->status;
+ bool inline_error = (call->operation_ID == afs_FS_InlineBulkStatus);
+ u64 data_version, size;
+ u32 type, abort_code;
+- int ret;
+
+ abort_code = ntohl(xdr->abort_code);
+
+@@ -79,7 +78,7 @@ static int xdr_decode_AFSFetchStatus(const __be32 **_bp,
+ */
+ status->abort_code = abort_code;
+ scb->have_error = true;
+- goto good;
++ goto advance;
+ }
+
+ pr_warn("Unknown AFSFetchStatus version %u\n", ntohl(xdr->if_version));
+@@ -89,7 +88,7 @@ static int xdr_decode_AFSFetchStatus(const __be32 **_bp,
+ if (abort_code != 0 && inline_error) {
+ status->abort_code = abort_code;
+ scb->have_error = true;
+- goto good;
++ goto advance;
+ }
+
+ type = ntohl(xdr->type);
+@@ -125,15 +124,13 @@ static int xdr_decode_AFSFetchStatus(const __be32 **_bp,
+ data_version |= (u64)ntohl(xdr->data_version_hi) << 32;
+ status->data_version = data_version;
+ scb->have_status = true;
+-good:
+- ret = 0;
+ advance:
+ *_bp = (const void *)*_bp + sizeof(*xdr);
+- return ret;
++ return;
+
+ bad:
+ xdr_dump_bad(*_bp);
+- ret = afs_protocol_error(call, -EBADMSG, afs_eproto_bad_status);
++ afs_protocol_error(call, afs_eproto_bad_status);
+ goto advance;
+ }
+
+@@ -254,9 +251,7 @@ static int afs_deliver_fs_fetch_status_vnode(struct afs_call *call)
+
+ /* unmarshall the reply once we've received all of it */
+ bp = call->buffer;
+- ret = xdr_decode_AFSFetchStatus(&bp, call, call->out_scb);
+- if (ret < 0)
+- return ret;
++ xdr_decode_AFSFetchStatus(&bp, call, call->out_scb);
+ xdr_decode_AFSCallBack(&bp, call, call->out_scb);
+ xdr_decode_AFSVolSync(&bp, call->out_volsync);
+
+@@ -419,9 +414,7 @@ static int afs_deliver_fs_fetch_data(struct afs_call *call)
+ return ret;
+
+ bp = call->buffer;
+- ret = xdr_decode_AFSFetchStatus(&bp, call, call->out_scb);
+- if (ret < 0)
+- return ret;
++ xdr_decode_AFSFetchStatus(&bp, call, call->out_scb);
+ xdr_decode_AFSCallBack(&bp, call, call->out_scb);
+ xdr_decode_AFSVolSync(&bp, call->out_volsync);
+
+@@ -577,12 +570,8 @@ static int afs_deliver_fs_create_vnode(struct afs_call *call)
+ /* unmarshall the reply once we've received all of it */
+ bp = call->buffer;
+ xdr_decode_AFSFid(&bp, call->out_fid);
+- ret = xdr_decode_AFSFetchStatus(&bp, call, call->out_scb);
+- if (ret < 0)
+- return ret;
+- ret = xdr_decode_AFSFetchStatus(&bp, call, call->out_dir_scb);
+- if (ret < 0)
+- return ret;
++ xdr_decode_AFSFetchStatus(&bp, call, call->out_scb);
++ xdr_decode_AFSFetchStatus(&bp, call, call->out_dir_scb);
+ xdr_decode_AFSCallBack(&bp, call, call->out_scb);
+ xdr_decode_AFSVolSync(&bp, call->out_volsync);
+
+@@ -691,9 +680,7 @@ static int afs_deliver_fs_dir_status_and_vol(struct afs_call *call)
+
+ /* unmarshall the reply once we've received all of it */
+ bp = call->buffer;
+- ret = xdr_decode_AFSFetchStatus(&bp, call, call->out_dir_scb);
+- if (ret < 0)
+- return ret;
++ xdr_decode_AFSFetchStatus(&bp, call, call->out_dir_scb);
+ xdr_decode_AFSVolSync(&bp, call->out_volsync);
+
+ _leave(" = 0 [done]");
+@@ -784,12 +771,8 @@ static int afs_deliver_fs_link(struct afs_call *call)
+
+ /* unmarshall the reply once we've received all of it */
+ bp = call->buffer;
+- ret = xdr_decode_AFSFetchStatus(&bp, call, call->out_scb);
+- if (ret < 0)
+- return ret;
+- ret = xdr_decode_AFSFetchStatus(&bp, call, call->out_dir_scb);
+- if (ret < 0)
+- return ret;
++ xdr_decode_AFSFetchStatus(&bp, call, call->out_scb);
++ xdr_decode_AFSFetchStatus(&bp, call, call->out_dir_scb);
+ xdr_decode_AFSVolSync(&bp, call->out_volsync);
+
+ _leave(" = 0 [done]");
+@@ -878,12 +861,8 @@ static int afs_deliver_fs_symlink(struct afs_call *call)
+ /* unmarshall the reply once we've received all of it */
+ bp = call->buffer;
+ xdr_decode_AFSFid(&bp, call->out_fid);
+- ret = xdr_decode_AFSFetchStatus(&bp, call, call->out_scb);
+- if (ret < 0)
+- return ret;
+- ret = xdr_decode_AFSFetchStatus(&bp, call, call->out_dir_scb);
+- if (ret < 0)
+- return ret;
++ xdr_decode_AFSFetchStatus(&bp, call, call->out_scb);
++ xdr_decode_AFSFetchStatus(&bp, call, call->out_dir_scb);
+ xdr_decode_AFSVolSync(&bp, call->out_volsync);
+
+ _leave(" = 0 [done]");
+@@ -986,16 +965,12 @@ static int afs_deliver_fs_rename(struct afs_call *call)
+ if (ret < 0)
+ return ret;
+
++ bp = call->buffer;
+ /* If the two dirs are the same, we have two copies of the same status
+ * report, so we just decode it twice.
+ */
+- bp = call->buffer;
+- ret = xdr_decode_AFSFetchStatus(&bp, call, call->out_dir_scb);
+- if (ret < 0)
+- return ret;
+- ret = xdr_decode_AFSFetchStatus(&bp, call, call->out_scb);
+- if (ret < 0)
+- return ret;
++ xdr_decode_AFSFetchStatus(&bp, call, call->out_dir_scb);
++ xdr_decode_AFSFetchStatus(&bp, call, call->out_scb);
+ xdr_decode_AFSVolSync(&bp, call->out_volsync);
+
+ _leave(" = 0 [done]");
+@@ -1103,9 +1078,7 @@ static int afs_deliver_fs_store_data(struct afs_call *call)
+
+ /* unmarshall the reply once we've received all of it */
+ bp = call->buffer;
+- ret = xdr_decode_AFSFetchStatus(&bp, call, call->out_scb);
+- if (ret < 0)
+- return ret;
++ xdr_decode_AFSFetchStatus(&bp, call, call->out_scb);
+ xdr_decode_AFSVolSync(&bp, call->out_volsync);
+
+ _leave(" = 0 [done]");
+@@ -1283,9 +1256,7 @@ static int afs_deliver_fs_store_status(struct afs_call *call)
+
+ /* unmarshall the reply once we've received all of it */
+ bp = call->buffer;
+- ret = xdr_decode_AFSFetchStatus(&bp, call, call->out_scb);
+- if (ret < 0)
+- return ret;
++ xdr_decode_AFSFetchStatus(&bp, call, call->out_scb);
+ xdr_decode_AFSVolSync(&bp, call->out_volsync);
+
+ _leave(" = 0 [done]");
+@@ -1499,8 +1470,7 @@ static int afs_deliver_fs_get_volume_status(struct afs_call *call)
+ call->count = ntohl(call->tmp);
+ _debug("volname length: %u", call->count);
+ if (call->count >= AFSNAMEMAX)
+- return afs_protocol_error(call, -EBADMSG,
+- afs_eproto_volname_len);
++ return afs_protocol_error(call, afs_eproto_volname_len);
+ size = (call->count + 3) & ~3; /* It's padded */
+ afs_extract_to_buf(call, size);
+ call->unmarshall++;
+@@ -1529,8 +1499,7 @@ static int afs_deliver_fs_get_volume_status(struct afs_call *call)
+ call->count = ntohl(call->tmp);
+ _debug("offline msg length: %u", call->count);
+ if (call->count >= AFSNAMEMAX)
+- return afs_protocol_error(call, -EBADMSG,
+- afs_eproto_offline_msg_len);
++ return afs_protocol_error(call, afs_eproto_offline_msg_len);
+ size = (call->count + 3) & ~3; /* It's padded */
+ afs_extract_to_buf(call, size);
+ call->unmarshall++;
+@@ -1560,8 +1529,7 @@ static int afs_deliver_fs_get_volume_status(struct afs_call *call)
+ call->count = ntohl(call->tmp);
+ _debug("motd length: %u", call->count);
+ if (call->count >= AFSNAMEMAX)
+- return afs_protocol_error(call, -EBADMSG,
+- afs_eproto_motd_len);
++ return afs_protocol_error(call, afs_eproto_motd_len);
+ size = (call->count + 3) & ~3; /* It's padded */
+ afs_extract_to_buf(call, size);
+ call->unmarshall++;
+@@ -1954,9 +1922,7 @@ static int afs_deliver_fs_fetch_status(struct afs_call *call)
+
+ /* unmarshall the reply once we've received all of it */
+ bp = call->buffer;
+- ret = xdr_decode_AFSFetchStatus(&bp, call, call->out_scb);
+- if (ret < 0)
+- return ret;
++ xdr_decode_AFSFetchStatus(&bp, call, call->out_scb);
+ xdr_decode_AFSCallBack(&bp, call, call->out_scb);
+ xdr_decode_AFSVolSync(&bp, call->out_volsync);
+
+@@ -2045,8 +2011,7 @@ static int afs_deliver_fs_inline_bulk_status(struct afs_call *call)
+ tmp = ntohl(call->tmp);
+ _debug("status count: %u/%u", tmp, call->count2);
+ if (tmp != call->count2)
+- return afs_protocol_error(call, -EBADMSG,
+- afs_eproto_ibulkst_count);
++ return afs_protocol_error(call, afs_eproto_ibulkst_count);
+
+ call->count = 0;
+ call->unmarshall++;
+@@ -2062,10 +2027,7 @@ static int afs_deliver_fs_inline_bulk_status(struct afs_call *call)
+
+ bp = call->buffer;
+ scb = &call->out_scb[call->count];
+- ret = xdr_decode_AFSFetchStatus(&bp, call, scb);
+- if (ret < 0)
+- return ret;
+-
++ xdr_decode_AFSFetchStatus(&bp, call, scb);
+ call->count++;
+ if (call->count < call->count2)
+ goto more_counts;
+@@ -2085,8 +2047,7 @@ static int afs_deliver_fs_inline_bulk_status(struct afs_call *call)
+ tmp = ntohl(call->tmp);
+ _debug("CB count: %u", tmp);
+ if (tmp != call->count2)
+- return afs_protocol_error(call, -EBADMSG,
+- afs_eproto_ibulkst_cb_count);
++ return afs_protocol_error(call, afs_eproto_ibulkst_cb_count);
+ call->count = 0;
+ call->unmarshall++;
+ more_cbs:
+@@ -2243,9 +2204,7 @@ static int afs_deliver_fs_fetch_acl(struct afs_call *call)
+ return ret;
+
+ bp = call->buffer;
+- ret = xdr_decode_AFSFetchStatus(&bp, call, call->out_scb);
+- if (ret < 0)
+- return ret;
++ xdr_decode_AFSFetchStatus(&bp, call, call->out_scb);
+ xdr_decode_AFSVolSync(&bp, call->out_volsync);
+
+ call->unmarshall++;
+@@ -2326,9 +2285,7 @@ static int afs_deliver_fs_file_status_and_vol(struct afs_call *call)
+ return ret;
+
+ bp = call->buffer;
+- ret = xdr_decode_AFSFetchStatus(&bp, call, call->out_scb);
+- if (ret < 0)
+- return ret;
++ xdr_decode_AFSFetchStatus(&bp, call, call->out_scb);
+ xdr_decode_AFSVolSync(&bp, call->out_volsync);
+
+ _leave(" = 0 [done]");
+diff --git a/fs/afs/inode.c b/fs/afs/inode.c
+index 281470fe1183..d7b65fad6679 100644
+--- a/fs/afs/inode.c
++++ b/fs/afs/inode.c
+@@ -130,7 +130,7 @@ static int afs_inode_init_from_status(struct afs_vnode *vnode, struct key *key,
+ default:
+ dump_vnode(vnode, parent_vnode);
+ write_sequnlock(&vnode->cb_lock);
+- return afs_protocol_error(NULL, -EBADMSG, afs_eproto_file_type);
++ return afs_protocol_error(NULL, afs_eproto_file_type);
+ }
+
+ afs_set_i_size(vnode, status->size);
+@@ -170,6 +170,7 @@ static void afs_apply_status(struct afs_fs_cursor *fc,
+ struct timespec64 t;
+ umode_t mode;
+ bool data_changed = false;
++ bool change_size = false;
+
+ BUG_ON(test_bit(AFS_VNODE_UNSET, &vnode->flags));
+
+@@ -179,7 +180,7 @@ static void afs_apply_status(struct afs_fs_cursor *fc,
+ vnode->fid.vnode,
+ vnode->fid.unique,
+ status->type, vnode->status.type);
+- afs_protocol_error(NULL, -EBADMSG, afs_eproto_bad_status);
++ afs_protocol_error(NULL, afs_eproto_bad_status);
+ return;
+ }
+
+@@ -225,6 +226,7 @@ static void afs_apply_status(struct afs_fs_cursor *fc,
+ } else {
+ set_bit(AFS_VNODE_ZAP_DATA, &vnode->flags);
+ }
++ change_size = true;
+ } else if (vnode->status.type == AFS_FTYPE_DIR) {
+ /* Expected directory change is handled elsewhere so
+ * that we can locally edit the directory and save on a
+@@ -232,11 +234,19 @@ static void afs_apply_status(struct afs_fs_cursor *fc,
+ */
+ if (test_bit(AFS_VNODE_DIR_VALID, &vnode->flags))
+ data_changed = false;
++ change_size = true;
+ }
+
+ if (data_changed) {
+ inode_set_iversion_raw(&vnode->vfs_inode, status->data_version);
+- afs_set_i_size(vnode, status->size);
++
++ /* Only update the size if the data version jumped. If the
++ * file is being modified locally, then we might have our own
++ * idea of what the size should be that's not the same as
++ * what's on the server.
++ */
++ if (change_size)
++ afs_set_i_size(vnode, status->size);
+ }
+ }
+
+diff --git a/fs/afs/internal.h b/fs/afs/internal.h
+index 80255513e230..98e0cebd5e5e 100644
+--- a/fs/afs/internal.h
++++ b/fs/afs/internal.h
+@@ -161,6 +161,7 @@ struct afs_call {
+ bool upgrade; /* T to request service upgrade */
+ bool have_reply_time; /* T if have got reply_time */
+ bool intr; /* T if interruptible */
++ bool unmarshalling_error; /* T if an unmarshalling error occurred */
+ u16 service_id; /* Actual service ID (after upgrade) */
+ unsigned int debug_id; /* Trace ID */
+ u32 operation_ID; /* operation ID for an incoming call */
+@@ -1128,7 +1129,7 @@ extern void afs_flat_call_destructor(struct afs_call *);
+ extern void afs_send_empty_reply(struct afs_call *);
+ extern void afs_send_simple_reply(struct afs_call *, const void *, size_t);
+ extern int afs_extract_data(struct afs_call *, bool);
+-extern int afs_protocol_error(struct afs_call *, int, enum afs_eproto_cause);
++extern int afs_protocol_error(struct afs_call *, enum afs_eproto_cause);
+
+ static inline void afs_set_fc_call(struct afs_call *call, struct afs_fs_cursor *fc)
+ {
+diff --git a/fs/afs/misc.c b/fs/afs/misc.c
+index 52b19e9c1535..5334f1bd2bca 100644
+--- a/fs/afs/misc.c
++++ b/fs/afs/misc.c
+@@ -83,6 +83,7 @@ int afs_abort_to_error(u32 abort_code)
+ case UAENOLCK: return -ENOLCK;
+ case UAENOTEMPTY: return -ENOTEMPTY;
+ case UAELOOP: return -ELOOP;
++ case UAEOVERFLOW: return -EOVERFLOW;
+ case UAENOMEDIUM: return -ENOMEDIUM;
+ case UAEDQUOT: return -EDQUOT;
+
+diff --git a/fs/afs/proc.c b/fs/afs/proc.c
+index 468e1713bce1..6f34c84a0fd0 100644
+--- a/fs/afs/proc.c
++++ b/fs/afs/proc.c
+@@ -563,6 +563,7 @@ void afs_put_sysnames(struct afs_sysnames *sysnames)
+ if (sysnames->subs[i] != afs_init_sysname &&
+ sysnames->subs[i] != sysnames->blank)
+ kfree(sysnames->subs[i]);
++ kfree(sysnames);
+ }
+ }
+
+diff --git a/fs/afs/rxrpc.c b/fs/afs/rxrpc.c
+index 1ecc67da6c1a..e3c2655616dc 100644
+--- a/fs/afs/rxrpc.c
++++ b/fs/afs/rxrpc.c
+@@ -540,6 +540,8 @@ static void afs_deliver_to_call(struct afs_call *call)
+
+ ret = call->type->deliver(call);
+ state = READ_ONCE(call->state);
++ if (ret == 0 && call->unmarshalling_error)
++ ret = -EBADMSG;
+ switch (ret) {
+ case 0:
+ afs_queue_call_work(call);
+@@ -959,9 +961,11 @@ int afs_extract_data(struct afs_call *call, bool want_more)
+ /*
+ * Log protocol error production.
+ */
+-noinline int afs_protocol_error(struct afs_call *call, int error,
++noinline int afs_protocol_error(struct afs_call *call,
+ enum afs_eproto_cause cause)
+ {
+- trace_afs_protocol_error(call, error, cause);
+- return error;
++ trace_afs_protocol_error(call, cause);
++ if (call)
++ call->unmarshalling_error = true;
++ return -EBADMSG;
+ }
+diff --git a/fs/afs/vlclient.c b/fs/afs/vlclient.c
+index 516e9a3bb5b4..e64b002c3bb3 100644
+--- a/fs/afs/vlclient.c
++++ b/fs/afs/vlclient.c
+@@ -447,8 +447,7 @@ static int afs_deliver_yfsvl_get_endpoints(struct afs_call *call)
+ call->count2 = ntohl(*bp); /* Type or next count */
+
+ if (call->count > YFS_MAXENDPOINTS)
+- return afs_protocol_error(call, -EBADMSG,
+- afs_eproto_yvl_fsendpt_num);
++ return afs_protocol_error(call, afs_eproto_yvl_fsendpt_num);
+
+ alist = afs_alloc_addrlist(call->count, FS_SERVICE, AFS_FS_PORT);
+ if (!alist)
+@@ -468,8 +467,7 @@ static int afs_deliver_yfsvl_get_endpoints(struct afs_call *call)
+ size = sizeof(__be32) * (1 + 4 + 1);
+ break;
+ default:
+- return afs_protocol_error(call, -EBADMSG,
+- afs_eproto_yvl_fsendpt_type);
++ return afs_protocol_error(call, afs_eproto_yvl_fsendpt_type);
+ }
+
+ size += sizeof(__be32);
+@@ -487,21 +485,20 @@ static int afs_deliver_yfsvl_get_endpoints(struct afs_call *call)
+ switch (call->count2) {
+ case YFS_ENDPOINT_IPV4:
+ if (ntohl(bp[0]) != sizeof(__be32) * 2)
+- return afs_protocol_error(call, -EBADMSG,
+- afs_eproto_yvl_fsendpt4_len);
++ return afs_protocol_error(
++ call, afs_eproto_yvl_fsendpt4_len);
+ afs_merge_fs_addr4(alist, bp[1], ntohl(bp[2]));
+ bp += 3;
+ break;
+ case YFS_ENDPOINT_IPV6:
+ if (ntohl(bp[0]) != sizeof(__be32) * 5)
+- return afs_protocol_error(call, -EBADMSG,
+- afs_eproto_yvl_fsendpt6_len);
++ return afs_protocol_error(
++ call, afs_eproto_yvl_fsendpt6_len);
+ afs_merge_fs_addr6(alist, bp + 1, ntohl(bp[5]));
+ bp += 6;
+ break;
+ default:
+- return afs_protocol_error(call, -EBADMSG,
+- afs_eproto_yvl_fsendpt_type);
++ return afs_protocol_error(call, afs_eproto_yvl_fsendpt_type);
+ }
+
+ /* Got either the type of the next entry or the count of
+@@ -519,8 +516,7 @@ static int afs_deliver_yfsvl_get_endpoints(struct afs_call *call)
+ if (!call->count)
+ goto end;
+ if (call->count > YFS_MAXENDPOINTS)
+- return afs_protocol_error(call, -EBADMSG,
+- afs_eproto_yvl_vlendpt_type);
++ return afs_protocol_error(call, afs_eproto_yvl_vlendpt_type);
+
+ afs_extract_to_buf(call, 1 * sizeof(__be32));
+ call->unmarshall = 3;
+@@ -547,8 +543,7 @@ static int afs_deliver_yfsvl_get_endpoints(struct afs_call *call)
+ size = sizeof(__be32) * (1 + 4 + 1);
+ break;
+ default:
+- return afs_protocol_error(call, -EBADMSG,
+- afs_eproto_yvl_vlendpt_type);
++ return afs_protocol_error(call, afs_eproto_yvl_vlendpt_type);
+ }
+
+ if (call->count > 1)
+@@ -566,19 +561,18 @@ static int afs_deliver_yfsvl_get_endpoints(struct afs_call *call)
+ switch (call->count2) {
+ case YFS_ENDPOINT_IPV4:
+ if (ntohl(bp[0]) != sizeof(__be32) * 2)
+- return afs_protocol_error(call, -EBADMSG,
+- afs_eproto_yvl_vlendpt4_len);
++ return afs_protocol_error(
++ call, afs_eproto_yvl_vlendpt4_len);
+ bp += 3;
+ break;
+ case YFS_ENDPOINT_IPV6:
+ if (ntohl(bp[0]) != sizeof(__be32) * 5)
+- return afs_protocol_error(call, -EBADMSG,
+- afs_eproto_yvl_vlendpt6_len);
++ return afs_protocol_error(
++ call, afs_eproto_yvl_vlendpt6_len);
+ bp += 6;
+ break;
+ default:
+- return afs_protocol_error(call, -EBADMSG,
+- afs_eproto_yvl_vlendpt_type);
++ return afs_protocol_error(call, afs_eproto_yvl_vlendpt_type);
+ }
+
+ /* Got either the type of the next entry or the count of
+diff --git a/fs/afs/write.c b/fs/afs/write.c
+index cb76566763db..96b042af6248 100644
+--- a/fs/afs/write.c
++++ b/fs/afs/write.c
+@@ -194,11 +194,11 @@ int afs_write_end(struct file *file, struct address_space *mapping,
+
+ i_size = i_size_read(&vnode->vfs_inode);
+ if (maybe_i_size > i_size) {
+- spin_lock(&vnode->wb_lock);
++ write_seqlock(&vnode->cb_lock);
+ i_size = i_size_read(&vnode->vfs_inode);
+ if (maybe_i_size > i_size)
+ i_size_write(&vnode->vfs_inode, maybe_i_size);
+- spin_unlock(&vnode->wb_lock);
++ write_sequnlock(&vnode->cb_lock);
+ }
+
+ if (!PageUptodate(page)) {
+@@ -811,6 +811,7 @@ vm_fault_t afs_page_mkwrite(struct vm_fault *vmf)
+ vmf->page->index, priv);
+ SetPagePrivate(vmf->page);
+ set_page_private(vmf->page, priv);
++ file_update_time(file);
+
+ sb_end_pagefault(inode->i_sb);
+ return VM_FAULT_LOCKED;
+diff --git a/fs/afs/yfsclient.c b/fs/afs/yfsclient.c
+index fe413e7a5cf4..bf74c679c02b 100644
+--- a/fs/afs/yfsclient.c
++++ b/fs/afs/yfsclient.c
+@@ -179,21 +179,20 @@ static void xdr_dump_bad(const __be32 *bp)
+ /*
+ * Decode a YFSFetchStatus block
+ */
+-static int xdr_decode_YFSFetchStatus(const __be32 **_bp,
+- struct afs_call *call,
+- struct afs_status_cb *scb)
++static void xdr_decode_YFSFetchStatus(const __be32 **_bp,
++ struct afs_call *call,
++ struct afs_status_cb *scb)
+ {
+ const struct yfs_xdr_YFSFetchStatus *xdr = (const void *)*_bp;
+ struct afs_file_status *status = &scb->status;
+ u32 type;
+- int ret;
+
+ status->abort_code = ntohl(xdr->abort_code);
+ if (status->abort_code != 0) {
+ if (status->abort_code == VNOVNODE)
+ status->nlink = 0;
+ scb->have_error = true;
+- goto good;
++ goto advance;
+ }
+
+ type = ntohl(xdr->type);
+@@ -221,15 +220,13 @@ static int xdr_decode_YFSFetchStatus(const __be32 **_bp,
+ status->size = xdr_to_u64(xdr->size);
+ status->data_version = xdr_to_u64(xdr->data_version);
+ scb->have_status = true;
+-good:
+- ret = 0;
+ advance:
+ *_bp += xdr_size(xdr);
+- return ret;
++ return;
+
+ bad:
+ xdr_dump_bad(*_bp);
+- ret = afs_protocol_error(call, -EBADMSG, afs_eproto_bad_status);
++ afs_protocol_error(call, afs_eproto_bad_status);
+ goto advance;
+ }
+
+@@ -348,9 +345,7 @@ static int yfs_deliver_fs_status_cb_and_volsync(struct afs_call *call)
+
+ /* unmarshall the reply once we've received all of it */
+ bp = call->buffer;
+- ret = xdr_decode_YFSFetchStatus(&bp, call, call->out_scb);
+- if (ret < 0)
+- return ret;
++ xdr_decode_YFSFetchStatus(&bp, call, call->out_scb);
+ xdr_decode_YFSCallBack(&bp, call, call->out_scb);
+ xdr_decode_YFSVolSync(&bp, call->out_volsync);
+
+@@ -372,9 +367,7 @@ static int yfs_deliver_status_and_volsync(struct afs_call *call)
+ return ret;
+
+ bp = call->buffer;
+- ret = xdr_decode_YFSFetchStatus(&bp, call, call->out_scb);
+- if (ret < 0)
+- return ret;
++ xdr_decode_YFSFetchStatus(&bp, call, call->out_scb);
+ xdr_decode_YFSVolSync(&bp, call->out_volsync);
+
+ _leave(" = 0 [done]");
+@@ -534,9 +527,7 @@ static int yfs_deliver_fs_fetch_data64(struct afs_call *call)
+ return ret;
+
+ bp = call->buffer;
+- ret = xdr_decode_YFSFetchStatus(&bp, call, call->out_scb);
+- if (ret < 0)
+- return ret;
++ xdr_decode_YFSFetchStatus(&bp, call, call->out_scb);
+ xdr_decode_YFSCallBack(&bp, call, call->out_scb);
+ xdr_decode_YFSVolSync(&bp, call->out_volsync);
+
+@@ -644,12 +635,8 @@ static int yfs_deliver_fs_create_vnode(struct afs_call *call)
+ /* unmarshall the reply once we've received all of it */
+ bp = call->buffer;
+ xdr_decode_YFSFid(&bp, call->out_fid);
+- ret = xdr_decode_YFSFetchStatus(&bp, call, call->out_scb);
+- if (ret < 0)
+- return ret;
+- ret = xdr_decode_YFSFetchStatus(&bp, call, call->out_dir_scb);
+- if (ret < 0)
+- return ret;
++ xdr_decode_YFSFetchStatus(&bp, call, call->out_scb);
++ xdr_decode_YFSFetchStatus(&bp, call, call->out_dir_scb);
+ xdr_decode_YFSCallBack(&bp, call, call->out_scb);
+ xdr_decode_YFSVolSync(&bp, call->out_volsync);
+
+@@ -802,14 +789,9 @@ static int yfs_deliver_fs_remove_file2(struct afs_call *call)
+ return ret;
+
+ bp = call->buffer;
+- ret = xdr_decode_YFSFetchStatus(&bp, call, call->out_dir_scb);
+- if (ret < 0)
+- return ret;
+-
++ xdr_decode_YFSFetchStatus(&bp, call, call->out_dir_scb);
+ xdr_decode_YFSFid(&bp, &fid);
+- ret = xdr_decode_YFSFetchStatus(&bp, call, call->out_scb);
+- if (ret < 0)
+- return ret;
++ xdr_decode_YFSFetchStatus(&bp, call, call->out_scb);
+ /* Was deleted if vnode->status.abort_code == VNOVNODE. */
+
+ xdr_decode_YFSVolSync(&bp, call->out_volsync);
+@@ -889,10 +871,7 @@ static int yfs_deliver_fs_remove(struct afs_call *call)
+ return ret;
+
+ bp = call->buffer;
+- ret = xdr_decode_YFSFetchStatus(&bp, call, call->out_dir_scb);
+- if (ret < 0)
+- return ret;
+-
++ xdr_decode_YFSFetchStatus(&bp, call, call->out_dir_scb);
+ xdr_decode_YFSVolSync(&bp, call->out_volsync);
+ return 0;
+ }
+@@ -974,12 +953,8 @@ static int yfs_deliver_fs_link(struct afs_call *call)
+ return ret;
+
+ bp = call->buffer;
+- ret = xdr_decode_YFSFetchStatus(&bp, call, call->out_scb);
+- if (ret < 0)
+- return ret;
+- ret = xdr_decode_YFSFetchStatus(&bp, call, call->out_dir_scb);
+- if (ret < 0)
+- return ret;
++ xdr_decode_YFSFetchStatus(&bp, call, call->out_scb);
++ xdr_decode_YFSFetchStatus(&bp, call, call->out_dir_scb);
+ xdr_decode_YFSVolSync(&bp, call->out_volsync);
+ _leave(" = 0 [done]");
+ return 0;
+@@ -1061,12 +1036,8 @@ static int yfs_deliver_fs_symlink(struct afs_call *call)
+ /* unmarshall the reply once we've received all of it */
+ bp = call->buffer;
+ xdr_decode_YFSFid(&bp, call->out_fid);
+- ret = xdr_decode_YFSFetchStatus(&bp, call, call->out_scb);
+- if (ret < 0)
+- return ret;
+- ret = xdr_decode_YFSFetchStatus(&bp, call, call->out_dir_scb);
+- if (ret < 0)
+- return ret;
++ xdr_decode_YFSFetchStatus(&bp, call, call->out_scb);
++ xdr_decode_YFSFetchStatus(&bp, call, call->out_dir_scb);
+ xdr_decode_YFSVolSync(&bp, call->out_volsync);
+
+ _leave(" = 0 [done]");
+@@ -1154,13 +1125,11 @@ static int yfs_deliver_fs_rename(struct afs_call *call)
+ return ret;
+
+ bp = call->buffer;
+- ret = xdr_decode_YFSFetchStatus(&bp, call, call->out_dir_scb);
+- if (ret < 0)
+- return ret;
+- ret = xdr_decode_YFSFetchStatus(&bp, call, call->out_scb);
+- if (ret < 0)
+- return ret;
+-
++ /* If the two dirs are the same, we have two copies of the same status
++ * report, so we just decode it twice.
++ */
++ xdr_decode_YFSFetchStatus(&bp, call, call->out_dir_scb);
++ xdr_decode_YFSFetchStatus(&bp, call, call->out_scb);
+ xdr_decode_YFSVolSync(&bp, call->out_volsync);
+ _leave(" = 0 [done]");
+ return 0;
+@@ -1457,8 +1426,7 @@ static int yfs_deliver_fs_get_volume_status(struct afs_call *call)
+ call->count = ntohl(call->tmp);
+ _debug("volname length: %u", call->count);
+ if (call->count >= AFSNAMEMAX)
+- return afs_protocol_error(call, -EBADMSG,
+- afs_eproto_volname_len);
++ return afs_protocol_error(call, afs_eproto_volname_len);
+ size = (call->count + 3) & ~3; /* It's padded */
+ afs_extract_to_buf(call, size);
+ call->unmarshall++;
+@@ -1487,8 +1455,7 @@ static int yfs_deliver_fs_get_volume_status(struct afs_call *call)
+ call->count = ntohl(call->tmp);
+ _debug("offline msg length: %u", call->count);
+ if (call->count >= AFSNAMEMAX)
+- return afs_protocol_error(call, -EBADMSG,
+- afs_eproto_offline_msg_len);
++ return afs_protocol_error(call, afs_eproto_offline_msg_len);
+ size = (call->count + 3) & ~3; /* It's padded */
+ afs_extract_to_buf(call, size);
+ call->unmarshall++;
+@@ -1518,8 +1485,7 @@ static int yfs_deliver_fs_get_volume_status(struct afs_call *call)
+ call->count = ntohl(call->tmp);
+ _debug("motd length: %u", call->count);
+ if (call->count >= AFSNAMEMAX)
+- return afs_protocol_error(call, -EBADMSG,
+- afs_eproto_motd_len);
++ return afs_protocol_error(call, afs_eproto_motd_len);
+ size = (call->count + 3) & ~3; /* It's padded */
+ afs_extract_to_buf(call, size);
+ call->unmarshall++;
+@@ -1828,8 +1794,7 @@ static int yfs_deliver_fs_inline_bulk_status(struct afs_call *call)
+ tmp = ntohl(call->tmp);
+ _debug("status count: %u/%u", tmp, call->count2);
+ if (tmp != call->count2)
+- return afs_protocol_error(call, -EBADMSG,
+- afs_eproto_ibulkst_count);
++ return afs_protocol_error(call, afs_eproto_ibulkst_count);
+
+ call->count = 0;
+ call->unmarshall++;
+@@ -1845,9 +1810,7 @@ static int yfs_deliver_fs_inline_bulk_status(struct afs_call *call)
+
+ bp = call->buffer;
+ scb = &call->out_scb[call->count];
+- ret = xdr_decode_YFSFetchStatus(&bp, call, scb);
+- if (ret < 0)
+- return ret;
++ xdr_decode_YFSFetchStatus(&bp, call, scb);
+
+ call->count++;
+ if (call->count < call->count2)
+@@ -1868,8 +1831,7 @@ static int yfs_deliver_fs_inline_bulk_status(struct afs_call *call)
+ tmp = ntohl(call->tmp);
+ _debug("CB count: %u", tmp);
+ if (tmp != call->count2)
+- return afs_protocol_error(call, -EBADMSG,
+- afs_eproto_ibulkst_cb_count);
++ return afs_protocol_error(call, afs_eproto_ibulkst_cb_count);
+ call->count = 0;
+ call->unmarshall++;
+ more_cbs:
+@@ -2067,9 +2029,7 @@ static int yfs_deliver_fs_fetch_opaque_acl(struct afs_call *call)
+ bp = call->buffer;
+ yacl->inherit_flag = ntohl(*bp++);
+ yacl->num_cleaned = ntohl(*bp++);
+- ret = xdr_decode_YFSFetchStatus(&bp, call, call->out_scb);
+- if (ret < 0)
+- return ret;
++ xdr_decode_YFSFetchStatus(&bp, call, call->out_scb);
+ xdr_decode_YFSVolSync(&bp, call->out_volsync);
+
+ call->unmarshall++;
+diff --git a/fs/block_dev.c b/fs/block_dev.c
+index 93672c3f1c78..313aae95818e 100644
+--- a/fs/block_dev.c
++++ b/fs/block_dev.c
+@@ -1583,10 +1583,8 @@ static int __blkdev_get(struct block_device *bdev, fmode_t mode, int for_part)
+ */
+ if (!for_part) {
+ ret = devcgroup_inode_permission(bdev->bd_inode, perm);
+- if (ret != 0) {
+- bdput(bdev);
++ if (ret != 0)
+ return ret;
+- }
+ }
+
+ restart:
+@@ -1655,8 +1653,10 @@ static int __blkdev_get(struct block_device *bdev, fmode_t mode, int for_part)
+ goto out_clear;
+ BUG_ON(for_part);
+ ret = __blkdev_get(whole, mode, 1);
+- if (ret)
++ if (ret) {
++ bdput(whole);
+ goto out_clear;
++ }
+ bdev->bd_contains = whole;
+ bdev->bd_part = disk_get_part(disk, partno);
+ if (!(disk->flags & GENHD_FL_UP) ||
+@@ -1706,7 +1706,6 @@ static int __blkdev_get(struct block_device *bdev, fmode_t mode, int for_part)
+ disk_unblock_events(disk);
+ put_disk_and_module(disk);
+ out:
+- bdput(bdev);
+
+ return ret;
+ }
+@@ -1773,6 +1772,9 @@ int blkdev_get(struct block_device *bdev, fmode_t mode, void *holder)
+ bdput(whole);
+ }
+
++ if (res)
++ bdput(bdev);
++
+ return res;
+ }
+ EXPORT_SYMBOL(blkdev_get);
+diff --git a/fs/ceph/export.c b/fs/ceph/export.c
+index 79dc06881e78..e088843a7734 100644
+--- a/fs/ceph/export.c
++++ b/fs/ceph/export.c
+@@ -172,9 +172,16 @@ struct inode *ceph_lookup_inode(struct super_block *sb, u64 ino)
+ static struct dentry *__fh_to_dentry(struct super_block *sb, u64 ino)
+ {
+ struct inode *inode = __lookup_inode(sb, ino);
++ int err;
++
+ if (IS_ERR(inode))
+ return ERR_CAST(inode);
+- if (inode->i_nlink == 0) {
++ /* We need LINK caps to reliably check i_nlink */
++ err = ceph_do_getattr(inode, CEPH_CAP_LINK_SHARED, false);
++ if (err)
++ return ERR_PTR(err);
++ /* -ESTALE if inode as been unlinked and no file is open */
++ if ((inode->i_nlink == 0) && (atomic_read(&inode->i_count) == 1)) {
+ iput(inode);
+ return ERR_PTR(-ESTALE);
+ }
+diff --git a/fs/cifs/connect.c b/fs/cifs/connect.c
+index 28268ed461b8..47b9fbb70bf5 100644
+--- a/fs/cifs/connect.c
++++ b/fs/cifs/connect.c
+@@ -572,26 +572,26 @@ cifs_reconnect(struct TCP_Server_Info *server)
+ try_to_freeze();
+
+ mutex_lock(&server->srv_mutex);
++#ifdef CONFIG_CIFS_DFS_UPCALL
+ /*
+ * Set up next DFS target server (if any) for reconnect. If DFS
+ * feature is disabled, then we will retry last server we
+ * connected to before.
+ */
++ reconn_inval_dfs_target(server, cifs_sb, &tgt_list, &tgt_it);
++#endif
++ rc = reconn_set_ipaddr(server);
++ if (rc) {
++ cifs_dbg(FYI, "%s: failed to resolve hostname: %d\n",
++ __func__, rc);
++ }
++
+ if (cifs_rdma_enabled(server))
+ rc = smbd_reconnect(server);
+ else
+ rc = generic_ip_connect(server);
+ if (rc) {
+ cifs_dbg(FYI, "reconnect error %d\n", rc);
+-#ifdef CONFIG_CIFS_DFS_UPCALL
+- reconn_inval_dfs_target(server, cifs_sb, &tgt_list,
+- &tgt_it);
+-#endif
+- rc = reconn_set_ipaddr(server);
+- if (rc) {
+- cifs_dbg(FYI, "%s: failed to resolve hostname: %d\n",
+- __func__, rc);
+- }
+ mutex_unlock(&server->srv_mutex);
+ msleep(3000);
+ } else {
+diff --git a/fs/dlm/dlm_internal.h b/fs/dlm/dlm_internal.h
+index 416d9de35679..4311d01b02a8 100644
+--- a/fs/dlm/dlm_internal.h
++++ b/fs/dlm/dlm_internal.h
+@@ -97,7 +97,6 @@ do { \
+ __LINE__, __FILE__, #x, jiffies); \
+ {do} \
+ printk("\n"); \
+- BUG(); \
+ panic("DLM: Record message above and reboot.\n"); \
+ } \
+ }
+diff --git a/fs/ext4/acl.c b/fs/ext4/acl.c
+index 8c7bbf3e566d..470be69f19aa 100644
+--- a/fs/ext4/acl.c
++++ b/fs/ext4/acl.c
+@@ -256,7 +256,7 @@ retry:
+ if (!error && update_mode) {
+ inode->i_mode = mode;
+ inode->i_ctime = current_time(inode);
+- ext4_mark_inode_dirty(handle, inode);
++ error = ext4_mark_inode_dirty(handle, inode);
+ }
+ out_stop:
+ ext4_journal_stop(handle);
+diff --git a/fs/ext4/dir.c b/fs/ext4/dir.c
+index c654205f648d..1d82336b1cd4 100644
+--- a/fs/ext4/dir.c
++++ b/fs/ext4/dir.c
+@@ -675,6 +675,7 @@ static int ext4_d_compare(const struct dentry *dentry, unsigned int len,
+ struct qstr qstr = {.name = str, .len = len };
+ const struct dentry *parent = READ_ONCE(dentry->d_parent);
+ const struct inode *inode = READ_ONCE(parent->d_inode);
++ char strbuf[DNAME_INLINE_LEN];
+
+ if (!inode || !IS_CASEFOLDED(inode) ||
+ !EXT4_SB(inode->i_sb)->s_encoding) {
+@@ -683,6 +684,21 @@ static int ext4_d_compare(const struct dentry *dentry, unsigned int len,
+ return memcmp(str, name->name, len);
+ }
+
++ /*
++ * If the dentry name is stored in-line, then it may be concurrently
++ * modified by a rename. If this happens, the VFS will eventually retry
++ * the lookup, so it doesn't matter what ->d_compare() returns.
++ * However, it's unsafe to call utf8_strncasecmp() with an unstable
++ * string. Therefore, we have to copy the name into a temporary buffer.
++ */
++ if (len <= DNAME_INLINE_LEN - 1) {
++ memcpy(strbuf, str, len);
++ strbuf[len] = 0;
++ qstr.name = strbuf;
++ /* prevent compiler from optimizing out the temporary buffer */
++ barrier();
++ }
++
+ return ext4_ci_compare(inode, name, &qstr, false);
+ }
+
+diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h
+index ad2dbf6e4924..51a85b50033a 100644
+--- a/fs/ext4/ext4.h
++++ b/fs/ext4/ext4.h
+@@ -3354,7 +3354,7 @@ struct ext4_extent;
+ */
+ #define EXT_MAX_BLOCKS 0xffffffff
+
+-extern int ext4_ext_tree_init(handle_t *handle, struct inode *);
++extern void ext4_ext_tree_init(handle_t *handle, struct inode *inode);
+ extern int ext4_ext_index_trans_blocks(struct inode *inode, int extents);
+ extern int ext4_ext_map_blocks(handle_t *handle, struct inode *inode,
+ struct ext4_map_blocks *map, int flags);
+diff --git a/fs/ext4/ext4_jbd2.h b/fs/ext4/ext4_jbd2.h
+index 4b9002f0e84c..3bacf76d2609 100644
+--- a/fs/ext4/ext4_jbd2.h
++++ b/fs/ext4/ext4_jbd2.h
+@@ -222,7 +222,10 @@ ext4_mark_iloc_dirty(handle_t *handle,
+ int ext4_reserve_inode_write(handle_t *handle, struct inode *inode,
+ struct ext4_iloc *iloc);
+
+-int ext4_mark_inode_dirty(handle_t *handle, struct inode *inode);
++#define ext4_mark_inode_dirty(__h, __i) \
++ __ext4_mark_inode_dirty((__h), (__i), __func__, __LINE__)
++int __ext4_mark_inode_dirty(handle_t *handle, struct inode *inode,
++ const char *func, unsigned int line);
+
+ int ext4_expand_extra_isize(struct inode *inode,
+ unsigned int new_extra_isize,
+diff --git a/fs/ext4/extents.c b/fs/ext4/extents.c
+index 2b4b94542e34..d5453072eb63 100644
+--- a/fs/ext4/extents.c
++++ b/fs/ext4/extents.c
+@@ -816,7 +816,7 @@ ext4_ext_binsearch(struct inode *inode,
+
+ }
+
+-int ext4_ext_tree_init(handle_t *handle, struct inode *inode)
++void ext4_ext_tree_init(handle_t *handle, struct inode *inode)
+ {
+ struct ext4_extent_header *eh;
+
+@@ -826,7 +826,6 @@ int ext4_ext_tree_init(handle_t *handle, struct inode *inode)
+ eh->eh_magic = EXT4_EXT_MAGIC;
+ eh->eh_max = cpu_to_le16(ext4_ext_space_root(inode, 0));
+ ext4_mark_inode_dirty(handle, inode);
+- return 0;
+ }
+
+ struct ext4_ext_path *
+@@ -1319,7 +1318,7 @@ static int ext4_ext_grow_indepth(handle_t *handle, struct inode *inode,
+ ext4_idx_pblock(EXT_FIRST_INDEX(neh)));
+
+ le16_add_cpu(&neh->eh_depth, 1);
+- ext4_mark_inode_dirty(handle, inode);
++ err = ext4_mark_inode_dirty(handle, inode);
+ out:
+ brelse(bh);
+
+@@ -2828,7 +2827,7 @@ again:
+ * in use to avoid freeing it when removing blocks.
+ */
+ if (sbi->s_cluster_ratio > 1) {
+- pblk = ext4_ext_pblock(ex) + end - ee_block + 2;
++ pblk = ext4_ext_pblock(ex) + end - ee_block + 1;
+ partial.pclu = EXT4_B2C(sbi, pblk);
+ partial.state = nofree;
+ }
+@@ -4363,7 +4362,7 @@ static int ext4_alloc_file_blocks(struct file *file, ext4_lblk_t offset,
+ struct inode *inode = file_inode(file);
+ handle_t *handle;
+ int ret = 0;
+- int ret2 = 0;
++ int ret2 = 0, ret3 = 0;
+ int retries = 0;
+ int depth = 0;
+ struct ext4_map_blocks map;
+@@ -4423,10 +4422,11 @@ retry:
+ if (ext4_update_inode_size(inode, epos) & 0x1)
+ inode->i_mtime = inode->i_ctime;
+ }
+- ext4_mark_inode_dirty(handle, inode);
++ ret2 = ext4_mark_inode_dirty(handle, inode);
+ ext4_update_inode_fsync_trans(handle, inode, 1);
+- ret2 = ext4_journal_stop(handle);
+- if (ret2)
++ ret3 = ext4_journal_stop(handle);
++ ret2 = ret3 ? ret3 : ret2;
++ if (unlikely(ret2))
+ break;
+ }
+ if (ret == -ENOSPC &&
+@@ -4577,7 +4577,9 @@ static long ext4_zero_range(struct file *file, loff_t offset,
+ inode->i_mtime = inode->i_ctime = current_time(inode);
+ if (new_size)
+ ext4_update_inode_size(inode, new_size);
+- ext4_mark_inode_dirty(handle, inode);
++ ret = ext4_mark_inode_dirty(handle, inode);
++ if (unlikely(ret))
++ goto out_handle;
+
+ /* Zero out partial block at the edges of the range */
+ ret = ext4_zero_partial_blocks(handle, inode, offset, len);
+@@ -4587,6 +4589,7 @@ static long ext4_zero_range(struct file *file, loff_t offset,
+ if (file->f_flags & O_SYNC)
+ ext4_handle_sync(handle);
+
++out_handle:
+ ext4_journal_stop(handle);
+ out_mutex:
+ inode_unlock(inode);
+@@ -4700,8 +4703,7 @@ int ext4_convert_unwritten_extents(handle_t *handle, struct inode *inode,
+ loff_t offset, ssize_t len)
+ {
+ unsigned int max_blocks;
+- int ret = 0;
+- int ret2 = 0;
++ int ret = 0, ret2 = 0, ret3 = 0;
+ struct ext4_map_blocks map;
+ unsigned int blkbits = inode->i_blkbits;
+ unsigned int credits = 0;
+@@ -4734,9 +4736,13 @@ int ext4_convert_unwritten_extents(handle_t *handle, struct inode *inode,
+ "ext4_ext_map_blocks returned %d",
+ inode->i_ino, map.m_lblk,
+ map.m_len, ret);
+- ext4_mark_inode_dirty(handle, inode);
+- if (credits)
+- ret2 = ext4_journal_stop(handle);
++ ret2 = ext4_mark_inode_dirty(handle, inode);
++ if (credits) {
++ ret3 = ext4_journal_stop(handle);
++ if (unlikely(ret3))
++ ret2 = ret3;
++ }
++
+ if (ret <= 0 || ret2)
+ break;
+ }
+@@ -5304,7 +5310,7 @@ static int ext4_collapse_range(struct inode *inode, loff_t offset, loff_t len)
+ if (IS_SYNC(inode))
+ ext4_handle_sync(handle);
+ inode->i_mtime = inode->i_ctime = current_time(inode);
+- ext4_mark_inode_dirty(handle, inode);
++ ret = ext4_mark_inode_dirty(handle, inode);
+ ext4_update_inode_fsync_trans(handle, inode, 1);
+
+ out_stop:
+diff --git a/fs/ext4/file.c b/fs/ext4/file.c
+index 0d624250a62b..2a01e31a032c 100644
+--- a/fs/ext4/file.c
++++ b/fs/ext4/file.c
+@@ -287,6 +287,7 @@ static ssize_t ext4_handle_inode_extension(struct inode *inode, loff_t offset,
+ bool truncate = false;
+ u8 blkbits = inode->i_blkbits;
+ ext4_lblk_t written_blk, end_blk;
++ int ret;
+
+ /*
+ * Note that EXT4_I(inode)->i_disksize can get extended up to
+@@ -327,8 +328,14 @@ static ssize_t ext4_handle_inode_extension(struct inode *inode, loff_t offset,
+ goto truncate;
+ }
+
+- if (ext4_update_inode_size(inode, offset + written))
+- ext4_mark_inode_dirty(handle, inode);
++ if (ext4_update_inode_size(inode, offset + written)) {
++ ret = ext4_mark_inode_dirty(handle, inode);
++ if (unlikely(ret)) {
++ written = ret;
++ ext4_journal_stop(handle);
++ goto truncate;
++ }
++ }
+
+ /*
+ * We may need to truncate allocated but not written blocks beyond EOF.
+@@ -495,6 +502,12 @@ static ssize_t ext4_dio_write_iter(struct kiocb *iocb, struct iov_iter *from)
+ if (ret <= 0)
+ return ret;
+
++ /* if we're going to block and IOCB_NOWAIT is set, return -EAGAIN */
++ if ((iocb->ki_flags & IOCB_NOWAIT) && (unaligned_io || extend)) {
++ ret = -EAGAIN;
++ goto out;
++ }
++
+ offset = iocb->ki_pos;
+ count = ret;
+
+diff --git a/fs/ext4/indirect.c b/fs/ext4/indirect.c
+index 107f0043f67f..be2b66eb65f7 100644
+--- a/fs/ext4/indirect.c
++++ b/fs/ext4/indirect.c
+@@ -467,7 +467,9 @@ static int ext4_splice_branch(handle_t *handle,
+ /*
+ * OK, we spliced it into the inode itself on a direct block.
+ */
+- ext4_mark_inode_dirty(handle, ar->inode);
++ err = ext4_mark_inode_dirty(handle, ar->inode);
++ if (unlikely(err))
++ goto err_out;
+ jbd_debug(5, "splicing direct\n");
+ }
+ return err;
+diff --git a/fs/ext4/inline.c b/fs/ext4/inline.c
+index f35e289e17aa..c3a1ad2db122 100644
+--- a/fs/ext4/inline.c
++++ b/fs/ext4/inline.c
+@@ -1260,7 +1260,7 @@ out:
+ int ext4_try_add_inline_entry(handle_t *handle, struct ext4_filename *fname,
+ struct inode *dir, struct inode *inode)
+ {
+- int ret, inline_size, no_expand;
++ int ret, ret2, inline_size, no_expand;
+ void *inline_start;
+ struct ext4_iloc iloc;
+
+@@ -1314,7 +1314,9 @@ int ext4_try_add_inline_entry(handle_t *handle, struct ext4_filename *fname,
+
+ out:
+ ext4_write_unlock_xattr(dir, &no_expand);
+- ext4_mark_inode_dirty(handle, dir);
++ ret2 = ext4_mark_inode_dirty(handle, dir);
++ if (unlikely(ret2 && !ret))
++ ret = ret2;
+ brelse(iloc.bh);
+ return ret;
+ }
+diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
+index 2a4aae6acdcb..87430d276bcc 100644
+--- a/fs/ext4/inode.c
++++ b/fs/ext4/inode.c
+@@ -1296,7 +1296,7 @@ static int ext4_write_end(struct file *file,
+ * filesystems.
+ */
+ if (i_size_changed || inline_data)
+- ext4_mark_inode_dirty(handle, inode);
++ ret = ext4_mark_inode_dirty(handle, inode);
+
+ if (pos + len > inode->i_size && !verity && ext4_can_truncate(inode))
+ /* if we have allocated more blocks and copied
+@@ -3077,7 +3077,7 @@ static int ext4_da_write_end(struct file *file,
+ * new_i_size is less that inode->i_size
+ * bu greater than i_disksize.(hint delalloc)
+ */
+- ext4_mark_inode_dirty(handle, inode);
++ ret = ext4_mark_inode_dirty(handle, inode);
+ }
+ }
+
+@@ -3094,7 +3094,7 @@ static int ext4_da_write_end(struct file *file,
+ if (ret2 < 0)
+ ret = ret2;
+ ret2 = ext4_journal_stop(handle);
+- if (!ret)
++ if (unlikely(ret2 && !ret))
+ ret = ret2;
+
+ return ret ? ret : copied;
+@@ -3886,6 +3886,8 @@ int ext4_update_disksize_before_punch(struct inode *inode, loff_t offset,
+ loff_t len)
+ {
+ handle_t *handle;
++ int ret;
++
+ loff_t size = i_size_read(inode);
+
+ WARN_ON(!inode_is_locked(inode));
+@@ -3899,10 +3901,10 @@ int ext4_update_disksize_before_punch(struct inode *inode, loff_t offset,
+ if (IS_ERR(handle))
+ return PTR_ERR(handle);
+ ext4_update_i_disksize(inode, size);
+- ext4_mark_inode_dirty(handle, inode);
++ ret = ext4_mark_inode_dirty(handle, inode);
+ ext4_journal_stop(handle);
+
+- return 0;
++ return ret;
+ }
+
+ static void ext4_wait_dax_page(struct ext4_inode_info *ei)
+@@ -3954,7 +3956,7 @@ int ext4_punch_hole(struct inode *inode, loff_t offset, loff_t length)
+ loff_t first_block_offset, last_block_offset;
+ handle_t *handle;
+ unsigned int credits;
+- int ret = 0;
++ int ret = 0, ret2 = 0;
+
+ trace_ext4_punch_hole(inode, offset, length, 0);
+
+@@ -4077,7 +4079,9 @@ int ext4_punch_hole(struct inode *inode, loff_t offset, loff_t length)
+ ext4_handle_sync(handle);
+
+ inode->i_mtime = inode->i_ctime = current_time(inode);
+- ext4_mark_inode_dirty(handle, inode);
++ ret2 = ext4_mark_inode_dirty(handle, inode);
++ if (unlikely(ret2))
++ ret = ret2;
+ if (ret >= 0)
+ ext4_update_inode_fsync_trans(handle, inode, 1);
+ out_stop:
+@@ -4146,7 +4150,7 @@ int ext4_truncate(struct inode *inode)
+ {
+ struct ext4_inode_info *ei = EXT4_I(inode);
+ unsigned int credits;
+- int err = 0;
++ int err = 0, err2;
+ handle_t *handle;
+ struct address_space *mapping = inode->i_mapping;
+
+@@ -4234,7 +4238,9 @@ out_stop:
+ ext4_orphan_del(handle, inode);
+
+ inode->i_mtime = inode->i_ctime = current_time(inode);
+- ext4_mark_inode_dirty(handle, inode);
++ err2 = ext4_mark_inode_dirty(handle, inode);
++ if (unlikely(err2 && !err))
++ err = err2;
+ ext4_journal_stop(handle);
+
+ trace_ext4_truncate_exit(inode);
+@@ -5292,6 +5298,8 @@ int ext4_setattr(struct dentry *dentry, struct iattr *attr)
+ inode->i_gid = attr->ia_gid;
+ error = ext4_mark_inode_dirty(handle, inode);
+ ext4_journal_stop(handle);
++ if (unlikely(error))
++ return error;
+ }
+
+ if (attr->ia_valid & ATTR_SIZE) {
+@@ -5777,7 +5785,8 @@ out_unlock:
+ * Whenever the user wants stuff synced (sys_sync, sys_msync, sys_fsync)
+ * we start and wait on commits.
+ */
+-int ext4_mark_inode_dirty(handle_t *handle, struct inode *inode)
++int __ext4_mark_inode_dirty(handle_t *handle, struct inode *inode,
++ const char *func, unsigned int line)
+ {
+ struct ext4_iloc iloc;
+ struct ext4_sb_info *sbi = EXT4_SB(inode->i_sb);
+@@ -5787,13 +5796,18 @@ int ext4_mark_inode_dirty(handle_t *handle, struct inode *inode)
+ trace_ext4_mark_inode_dirty(inode, _RET_IP_);
+ err = ext4_reserve_inode_write(handle, inode, &iloc);
+ if (err)
+- return err;
++ goto out;
+
+ if (EXT4_I(inode)->i_extra_isize < sbi->s_want_extra_isize)
+ ext4_try_to_expand_extra_isize(inode, sbi->s_want_extra_isize,
+ iloc, handle);
+
+- return ext4_mark_iloc_dirty(handle, inode, &iloc);
++ err = ext4_mark_iloc_dirty(handle, inode, &iloc);
++out:
++ if (unlikely(err))
++ ext4_error_inode_err(inode, func, line, 0, err,
++ "mark_inode_dirty error");
++ return err;
+ }
+
+ /*
+diff --git a/fs/ext4/migrate.c b/fs/ext4/migrate.c
+index fb6520f37135..c5e3fc998211 100644
+--- a/fs/ext4/migrate.c
++++ b/fs/ext4/migrate.c
+@@ -287,7 +287,7 @@ static int free_ind_block(handle_t *handle, struct inode *inode, __le32 *i_data)
+ static int ext4_ext_swap_inode_data(handle_t *handle, struct inode *inode,
+ struct inode *tmp_inode)
+ {
+- int retval;
++ int retval, retval2 = 0;
+ __le32 i_data[3];
+ struct ext4_inode_info *ei = EXT4_I(inode);
+ struct ext4_inode_info *tmp_ei = EXT4_I(tmp_inode);
+@@ -342,7 +342,9 @@ static int ext4_ext_swap_inode_data(handle_t *handle, struct inode *inode,
+ * i_blocks when freeing the indirect meta-data blocks
+ */
+ retval = free_ind_block(handle, inode, i_data);
+- ext4_mark_inode_dirty(handle, inode);
++ retval2 = ext4_mark_inode_dirty(handle, inode);
++ if (unlikely(retval2 && !retval))
++ retval = retval2;
+
+ err_out:
+ return retval;
+@@ -601,7 +603,7 @@ int ext4_ind_migrate(struct inode *inode)
+ ext4_lblk_t start, end;
+ ext4_fsblk_t blk;
+ handle_t *handle;
+- int ret;
++ int ret, ret2 = 0;
+
+ if (!ext4_has_feature_extents(inode->i_sb) ||
+ (!ext4_test_inode_flag(inode, EXT4_INODE_EXTENTS)))
+@@ -655,7 +657,9 @@ int ext4_ind_migrate(struct inode *inode)
+ memset(ei->i_data, 0, sizeof(ei->i_data));
+ for (i = start; i <= end; i++)
+ ei->i_data[i] = cpu_to_le32(blk++);
+- ext4_mark_inode_dirty(handle, inode);
++ ret2 = ext4_mark_inode_dirty(handle, inode);
++ if (unlikely(ret2 && !ret))
++ ret = ret2;
+ errout:
+ ext4_journal_stop(handle);
+ up_write(&EXT4_I(inode)->i_data_sem);
+diff --git a/fs/ext4/namei.c b/fs/ext4/namei.c
+index a8aca4772aaa..56738b538ddf 100644
+--- a/fs/ext4/namei.c
++++ b/fs/ext4/namei.c
+@@ -1993,7 +1993,7 @@ static int add_dirent_to_buf(handle_t *handle, struct ext4_filename *fname,
+ {
+ unsigned int blocksize = dir->i_sb->s_blocksize;
+ int csum_size = 0;
+- int err;
++ int err, err2;
+
+ if (ext4_has_metadata_csum(inode->i_sb))
+ csum_size = sizeof(struct ext4_dir_entry_tail);
+@@ -2028,12 +2028,12 @@ static int add_dirent_to_buf(handle_t *handle, struct ext4_filename *fname,
+ dir->i_mtime = dir->i_ctime = current_time(dir);
+ ext4_update_dx_flag(dir);
+ inode_inc_iversion(dir);
+- ext4_mark_inode_dirty(handle, dir);
++ err2 = ext4_mark_inode_dirty(handle, dir);
+ BUFFER_TRACE(bh, "call ext4_handle_dirty_metadata");
+ err = ext4_handle_dirty_dirblock(handle, dir, bh);
+ if (err)
+ ext4_std_error(dir->i_sb, err);
+- return 0;
++ return err ? err : err2;
+ }
+
+ /*
+@@ -2223,7 +2223,9 @@ static int ext4_add_entry(handle_t *handle, struct dentry *dentry,
+ }
+ ext4_clear_inode_flag(dir, EXT4_INODE_INDEX);
+ dx_fallback++;
+- ext4_mark_inode_dirty(handle, dir);
++ retval = ext4_mark_inode_dirty(handle, dir);
++ if (unlikely(retval))
++ goto out;
+ }
+ blocks = dir->i_size >> sb->s_blocksize_bits;
+ for (block = 0; block < blocks; block++) {
+@@ -2576,12 +2578,12 @@ static int ext4_add_nondir(handle_t *handle,
+ struct inode *inode = *inodep;
+ int err = ext4_add_entry(handle, dentry, inode);
+ if (!err) {
+- ext4_mark_inode_dirty(handle, inode);
++ err = ext4_mark_inode_dirty(handle, inode);
+ if (IS_DIRSYNC(dir))
+ ext4_handle_sync(handle);
+ d_instantiate_new(dentry, inode);
+ *inodep = NULL;
+- return 0;
++ return err;
+ }
+ drop_nlink(inode);
+ ext4_orphan_add(handle, inode);
+@@ -2775,7 +2777,7 @@ static int ext4_mkdir(struct inode *dir, struct dentry *dentry, umode_t mode)
+ {
+ handle_t *handle;
+ struct inode *inode;
+- int err, credits, retries = 0;
++ int err, err2 = 0, credits, retries = 0;
+
+ if (EXT4_DIR_LINK_MAX(dir))
+ return -EMLINK;
+@@ -2808,7 +2810,9 @@ out_clear_inode:
+ clear_nlink(inode);
+ ext4_orphan_add(handle, inode);
+ unlock_new_inode(inode);
+- ext4_mark_inode_dirty(handle, inode);
++ err2 = ext4_mark_inode_dirty(handle, inode);
++ if (unlikely(err2))
++ err = err2;
+ ext4_journal_stop(handle);
+ iput(inode);
+ goto out_retry;
+@@ -3148,10 +3152,12 @@ static int ext4_rmdir(struct inode *dir, struct dentry *dentry)
+ inode->i_size = 0;
+ ext4_orphan_add(handle, inode);
+ inode->i_ctime = dir->i_ctime = dir->i_mtime = current_time(inode);
+- ext4_mark_inode_dirty(handle, inode);
++ retval = ext4_mark_inode_dirty(handle, inode);
++ if (retval)
++ goto end_rmdir;
+ ext4_dec_count(handle, dir);
+ ext4_update_dx_flag(dir);
+- ext4_mark_inode_dirty(handle, dir);
++ retval = ext4_mark_inode_dirty(handle, dir);
+
+ #ifdef CONFIG_UNICODE
+ /* VFS negative dentries are incompatible with Encoding and
+@@ -3221,7 +3227,9 @@ static int ext4_unlink(struct inode *dir, struct dentry *dentry)
+ goto end_unlink;
+ dir->i_ctime = dir->i_mtime = current_time(dir);
+ ext4_update_dx_flag(dir);
+- ext4_mark_inode_dirty(handle, dir);
++ retval = ext4_mark_inode_dirty(handle, dir);
++ if (retval)
++ goto end_unlink;
+ if (inode->i_nlink == 0)
+ ext4_warning_inode(inode, "Deleting file '%.*s' with no links",
+ dentry->d_name.len, dentry->d_name.name);
+@@ -3230,7 +3238,7 @@ static int ext4_unlink(struct inode *dir, struct dentry *dentry)
+ if (!inode->i_nlink)
+ ext4_orphan_add(handle, inode);
+ inode->i_ctime = current_time(inode);
+- ext4_mark_inode_dirty(handle, inode);
++ retval = ext4_mark_inode_dirty(handle, inode);
+
+ #ifdef CONFIG_UNICODE
+ /* VFS negative dentries are incompatible with Encoding and
+@@ -3419,7 +3427,7 @@ retry:
+
+ err = ext4_add_entry(handle, dentry, inode);
+ if (!err) {
+- ext4_mark_inode_dirty(handle, inode);
++ err = ext4_mark_inode_dirty(handle, inode);
+ /* this can happen only for tmpfile being
+ * linked the first time
+ */
+@@ -3531,7 +3539,7 @@ static int ext4_rename_dir_finish(handle_t *handle, struct ext4_renament *ent,
+ static int ext4_setent(handle_t *handle, struct ext4_renament *ent,
+ unsigned ino, unsigned file_type)
+ {
+- int retval;
++ int retval, retval2;
+
+ BUFFER_TRACE(ent->bh, "get write access");
+ retval = ext4_journal_get_write_access(handle, ent->bh);
+@@ -3543,19 +3551,19 @@ static int ext4_setent(handle_t *handle, struct ext4_renament *ent,
+ inode_inc_iversion(ent->dir);
+ ent->dir->i_ctime = ent->dir->i_mtime =
+ current_time(ent->dir);
+- ext4_mark_inode_dirty(handle, ent->dir);
++ retval = ext4_mark_inode_dirty(handle, ent->dir);
+ BUFFER_TRACE(ent->bh, "call ext4_handle_dirty_metadata");
+ if (!ent->inlined) {
+- retval = ext4_handle_dirty_dirblock(handle, ent->dir, ent->bh);
+- if (unlikely(retval)) {
+- ext4_std_error(ent->dir->i_sb, retval);
+- return retval;
++ retval2 = ext4_handle_dirty_dirblock(handle, ent->dir, ent->bh);
++ if (unlikely(retval2)) {
++ ext4_std_error(ent->dir->i_sb, retval2);
++ return retval2;
+ }
+ }
+ brelse(ent->bh);
+ ent->bh = NULL;
+
+- return 0;
++ return retval;
+ }
+
+ static int ext4_find_delete_entry(handle_t *handle, struct inode *dir,
+@@ -3790,7 +3798,9 @@ static int ext4_rename(struct inode *old_dir, struct dentry *old_dentry,
+ EXT4_FT_CHRDEV);
+ if (retval)
+ goto end_rename;
+- ext4_mark_inode_dirty(handle, whiteout);
++ retval = ext4_mark_inode_dirty(handle, whiteout);
++ if (unlikely(retval))
++ goto end_rename;
+ }
+ if (!new.bh) {
+ retval = ext4_add_entry(handle, new.dentry, old.inode);
+@@ -3811,7 +3821,9 @@ static int ext4_rename(struct inode *old_dir, struct dentry *old_dentry,
+ * rename.
+ */
+ old.inode->i_ctime = current_time(old.inode);
+- ext4_mark_inode_dirty(handle, old.inode);
++ retval = ext4_mark_inode_dirty(handle, old.inode);
++ if (unlikely(retval))
++ goto end_rename;
+
+ if (!whiteout) {
+ /*
+@@ -3840,12 +3852,18 @@ static int ext4_rename(struct inode *old_dir, struct dentry *old_dentry,
+ } else {
+ ext4_inc_count(handle, new.dir);
+ ext4_update_dx_flag(new.dir);
+- ext4_mark_inode_dirty(handle, new.dir);
++ retval = ext4_mark_inode_dirty(handle, new.dir);
++ if (unlikely(retval))
++ goto end_rename;
+ }
+ }
+- ext4_mark_inode_dirty(handle, old.dir);
++ retval = ext4_mark_inode_dirty(handle, old.dir);
++ if (unlikely(retval))
++ goto end_rename;
+ if (new.inode) {
+- ext4_mark_inode_dirty(handle, new.inode);
++ retval = ext4_mark_inode_dirty(handle, new.inode);
++ if (unlikely(retval))
++ goto end_rename;
+ if (!new.inode->i_nlink)
+ ext4_orphan_add(handle, new.inode);
+ }
+@@ -3979,8 +3997,12 @@ static int ext4_cross_rename(struct inode *old_dir, struct dentry *old_dentry,
+ ctime = current_time(old.inode);
+ old.inode->i_ctime = ctime;
+ new.inode->i_ctime = ctime;
+- ext4_mark_inode_dirty(handle, old.inode);
+- ext4_mark_inode_dirty(handle, new.inode);
++ retval = ext4_mark_inode_dirty(handle, old.inode);
++ if (unlikely(retval))
++ goto end_rename;
++ retval = ext4_mark_inode_dirty(handle, new.inode);
++ if (unlikely(retval))
++ goto end_rename;
+
+ if (old.dir_bh) {
+ retval = ext4_rename_dir_finish(handle, &old, new.dir->i_ino);
+diff --git a/fs/ext4/super.c b/fs/ext4/super.c
+index bf5fcb477f66..7318ca71b69e 100644
+--- a/fs/ext4/super.c
++++ b/fs/ext4/super.c
+@@ -522,9 +522,6 @@ static void ext4_handle_error(struct super_block *sb)
+ smp_wmb();
+ sb->s_flags |= SB_RDONLY;
+ } else if (test_opt(sb, ERRORS_PANIC)) {
+- if (EXT4_SB(sb)->s_journal &&
+- !(EXT4_SB(sb)->s_journal->j_flags & JBD2_REC_ERR))
+- return;
+ panic("EXT4-fs (device %s): panic forced after error\n",
+ sb->s_id);
+ }
+@@ -725,23 +722,20 @@ void __ext4_abort(struct super_block *sb, const char *function,
+ va_end(args);
+
+ if (sb_rdonly(sb) == 0) {
+- ext4_msg(sb, KERN_CRIT, "Remounting filesystem read-only");
+ EXT4_SB(sb)->s_mount_flags |= EXT4_MF_FS_ABORTED;
++ if (EXT4_SB(sb)->s_journal)
++ jbd2_journal_abort(EXT4_SB(sb)->s_journal, -EIO);
++
++ ext4_msg(sb, KERN_CRIT, "Remounting filesystem read-only");
+ /*
+ * Make sure updated value of ->s_mount_flags will be visible
+ * before ->s_flags update
+ */
+ smp_wmb();
+ sb->s_flags |= SB_RDONLY;
+- if (EXT4_SB(sb)->s_journal)
+- jbd2_journal_abort(EXT4_SB(sb)->s_journal, -EIO);
+ }
+- if (test_opt(sb, ERRORS_PANIC) && !system_going_down()) {
+- if (EXT4_SB(sb)->s_journal &&
+- !(EXT4_SB(sb)->s_journal->j_flags & JBD2_REC_ERR))
+- return;
++ if (test_opt(sb, ERRORS_PANIC) && !system_going_down())
+ panic("EXT4-fs panic from previous error\n");
+- }
+ }
+
+ void __ext4_msg(struct super_block *sb,
+@@ -2086,6 +2080,16 @@ static int handle_mount_opt(struct super_block *sb, char *opt, int token,
+ #endif
+ } else if (token == Opt_dax) {
+ #ifdef CONFIG_FS_DAX
++ if (is_remount && test_opt(sb, DAX)) {
++ ext4_msg(sb, KERN_ERR, "can't mount with "
++ "both data=journal and dax");
++ return -1;
++ }
++ if (is_remount && !(sbi->s_mount_opt & EXT4_MOUNT_DAX)) {
++ ext4_msg(sb, KERN_ERR, "can't change "
++ "dax mount option while remounting");
++ return -1;
++ }
+ ext4_msg(sb, KERN_WARNING,
+ "DAX enabled. Warning: EXPERIMENTAL, use at your own risk");
+ sbi->s_mount_opt |= m->mount_opt;
+@@ -2344,6 +2348,7 @@ static int ext4_setup_super(struct super_block *sb, struct ext4_super_block *es,
+ ext4_msg(sb, KERN_ERR, "revision level too high, "
+ "forcing read-only mode");
+ err = -EROFS;
++ goto done;
+ }
+ if (read_only)
+ goto done;
+@@ -5412,12 +5417,6 @@ static int ext4_remount(struct super_block *sb, int *flags, char *data)
+ err = -EINVAL;
+ goto restore_opts;
+ }
+- if (test_opt(sb, DAX)) {
+- ext4_msg(sb, KERN_ERR, "can't mount with "
+- "both data=journal and dax");
+- err = -EINVAL;
+- goto restore_opts;
+- }
+ } else if (test_opt(sb, DATA_FLAGS) == EXT4_MOUNT_ORDERED_DATA) {
+ if (test_opt(sb, JOURNAL_ASYNC_COMMIT)) {
+ ext4_msg(sb, KERN_ERR, "can't mount with "
+@@ -5433,12 +5432,6 @@ static int ext4_remount(struct super_block *sb, int *flags, char *data)
+ goto restore_opts;
+ }
+
+- if ((sbi->s_mount_opt ^ old_opts.s_mount_opt) & EXT4_MOUNT_DAX) {
+- ext4_msg(sb, KERN_WARNING, "warning: refusing change of "
+- "dax flag with busy inodes while remounting");
+- sbi->s_mount_opt ^= EXT4_MOUNT_DAX;
+- }
+-
+ if (sbi->s_mount_flags & EXT4_MF_FS_ABORTED)
+ ext4_abort(sb, EXT4_ERR_ESHUTDOWN, "Abort forced by user");
+
+@@ -5885,7 +5878,7 @@ static int ext4_quota_on(struct super_block *sb, int type, int format_id,
+ EXT4_I(inode)->i_flags |= EXT4_NOATIME_FL | EXT4_IMMUTABLE_FL;
+ inode_set_flags(inode, S_NOATIME | S_IMMUTABLE,
+ S_NOATIME | S_IMMUTABLE);
+- ext4_mark_inode_dirty(handle, inode);
++ err = ext4_mark_inode_dirty(handle, inode);
+ ext4_journal_stop(handle);
+ unlock_inode:
+ inode_unlock(inode);
+@@ -5987,12 +5980,14 @@ static int ext4_quota_off(struct super_block *sb, int type)
+ * this is not a hard failure and quotas are already disabled.
+ */
+ handle = ext4_journal_start(inode, EXT4_HT_QUOTA, 1);
+- if (IS_ERR(handle))
++ if (IS_ERR(handle)) {
++ err = PTR_ERR(handle);
+ goto out_unlock;
++ }
+ EXT4_I(inode)->i_flags &= ~(EXT4_NOATIME_FL | EXT4_IMMUTABLE_FL);
+ inode_set_flags(inode, 0, S_NOATIME | S_IMMUTABLE);
+ inode->i_mtime = inode->i_ctime = current_time(inode);
+- ext4_mark_inode_dirty(handle, inode);
++ err = ext4_mark_inode_dirty(handle, inode);
+ ext4_journal_stop(handle);
+ out_unlock:
+ inode_unlock(inode);
+@@ -6050,7 +6045,7 @@ static ssize_t ext4_quota_write(struct super_block *sb, int type,
+ {
+ struct inode *inode = sb_dqopt(sb)->files[type];
+ ext4_lblk_t blk = off >> EXT4_BLOCK_SIZE_BITS(sb);
+- int err, offset = off & (sb->s_blocksize - 1);
++ int err = 0, err2 = 0, offset = off & (sb->s_blocksize - 1);
+ int retries = 0;
+ struct buffer_head *bh;
+ handle_t *handle = journal_current_handle();
+@@ -6098,9 +6093,11 @@ out:
+ if (inode->i_size < off + len) {
+ i_size_write(inode, off + len);
+ EXT4_I(inode)->i_disksize = inode->i_size;
+- ext4_mark_inode_dirty(handle, inode);
++ err2 = ext4_mark_inode_dirty(handle, inode);
++ if (unlikely(err2 && !err))
++ err = err2;
+ }
+- return len;
++ return err ? err : len;
+ }
+ #endif
+
+diff --git a/fs/ext4/xattr.c b/fs/ext4/xattr.c
+index 01ba66373e97..9b29a40738ac 100644
+--- a/fs/ext4/xattr.c
++++ b/fs/ext4/xattr.c
+@@ -1327,7 +1327,7 @@ static int ext4_xattr_inode_write(handle_t *handle, struct inode *ea_inode,
+ int blocksize = ea_inode->i_sb->s_blocksize;
+ int max_blocks = (bufsize + blocksize - 1) >> ea_inode->i_blkbits;
+ int csize, wsize = 0;
+- int ret = 0;
++ int ret = 0, ret2 = 0;
+ int retries = 0;
+
+ retry:
+@@ -1385,7 +1385,9 @@ retry:
+ ext4_update_i_disksize(ea_inode, wsize);
+ inode_unlock(ea_inode);
+
+- ext4_mark_inode_dirty(handle, ea_inode);
++ ret2 = ext4_mark_inode_dirty(handle, ea_inode);
++ if (unlikely(ret2 && !ret))
++ ret = ret2;
+
+ out:
+ brelse(bh);
+diff --git a/fs/f2fs/checkpoint.c b/fs/f2fs/checkpoint.c
+index 852890b72d6a..448b3dc6f925 100644
+--- a/fs/f2fs/checkpoint.c
++++ b/fs/f2fs/checkpoint.c
+@@ -889,8 +889,8 @@ int f2fs_get_valid_checkpoint(struct f2fs_sb_info *sbi)
+ int i;
+ int err;
+
+- sbi->ckpt = f2fs_kzalloc(sbi, array_size(blk_size, cp_blks),
+- GFP_KERNEL);
++ sbi->ckpt = f2fs_kvzalloc(sbi, array_size(blk_size, cp_blks),
++ GFP_KERNEL);
+ if (!sbi->ckpt)
+ return -ENOMEM;
+ /*
+diff --git a/fs/f2fs/compress.c b/fs/f2fs/compress.c
+index df7b2d15eacd..a5b2e72174bb 100644
+--- a/fs/f2fs/compress.c
++++ b/fs/f2fs/compress.c
+@@ -236,7 +236,12 @@ static int lz4_init_compress_ctx(struct compress_ctx *cc)
+ if (!cc->private)
+ return -ENOMEM;
+
+- cc->clen = LZ4_compressBound(PAGE_SIZE << cc->log_cluster_size);
++ /*
++ * we do not change cc->clen to LZ4_compressBound(inputsize) to
++ * adapt worst compress case, because lz4 compressor can handle
++ * output budget properly.
++ */
++ cc->clen = cc->rlen - PAGE_SIZE - COMPRESS_HEADER_SIZE;
+ return 0;
+ }
+
+@@ -252,11 +257,9 @@ static int lz4_compress_pages(struct compress_ctx *cc)
+
+ len = LZ4_compress_default(cc->rbuf, cc->cbuf->cdata, cc->rlen,
+ cc->clen, cc->private);
+- if (!len) {
+- printk_ratelimited("%sF2FS-fs (%s): lz4 compress failed\n",
+- KERN_ERR, F2FS_I_SB(cc->inode)->sb->s_id);
+- return -EIO;
+- }
++ if (!len)
++ return -EAGAIN;
++
+ cc->clen = len;
+ return 0;
+ }
+@@ -366,6 +369,13 @@ static int zstd_compress_pages(struct compress_ctx *cc)
+ return -EIO;
+ }
+
++ /*
++ * there is compressed data remained in intermediate buffer due to
++ * no more space in cbuf.cdata
++ */
++ if (ret)
++ return -EAGAIN;
++
+ cc->clen = outbuf.pos;
+ return 0;
+ }
+diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
+index cdf2f626bea7..10491ae1cb85 100644
+--- a/fs/f2fs/data.c
++++ b/fs/f2fs/data.c
+@@ -2130,16 +2130,16 @@ submit_and_realloc:
+ page->index, for_write);
+ if (IS_ERR(bio)) {
+ ret = PTR_ERR(bio);
+- bio = NULL;
+ dic->failed = true;
+ if (refcount_sub_and_test(dic->nr_cpages - i,
+- &dic->ref))
++ &dic->ref)) {
+ f2fs_decompress_end_io(dic->rpages,
+ cc->cluster_size, true,
+ false);
+- f2fs_free_dic(dic);
++ f2fs_free_dic(dic);
++ }
+ f2fs_put_dnode(&dn);
+- *bio_ret = bio;
++ *bio_ret = NULL;
+ return ret;
+ }
+ }
+diff --git a/fs/f2fs/dir.c b/fs/f2fs/dir.c
+index 44bfc464df78..54e90dbb09e7 100644
+--- a/fs/f2fs/dir.c
++++ b/fs/f2fs/dir.c
+@@ -107,36 +107,28 @@ static struct f2fs_dir_entry *find_in_block(struct inode *dir,
+ /*
+ * Test whether a case-insensitive directory entry matches the filename
+ * being searched for.
+- *
+- * Returns: 0 if the directory entry matches, more than 0 if it
+- * doesn't match or less than zero on error.
+ */
+-int f2fs_ci_compare(const struct inode *parent, const struct qstr *name,
+- const struct qstr *entry, bool quick)
++static bool f2fs_match_ci_name(const struct inode *dir, const struct qstr *name,
++ const struct qstr *entry, bool quick)
+ {
+- const struct f2fs_sb_info *sbi = F2FS_SB(parent->i_sb);
++ const struct f2fs_sb_info *sbi = F2FS_SB(dir->i_sb);
+ const struct unicode_map *um = sbi->s_encoding;
+- int ret;
++ int res;
+
+ if (quick)
+- ret = utf8_strncasecmp_folded(um, name, entry);
++ res = utf8_strncasecmp_folded(um, name, entry);
+ else
+- ret = utf8_strncasecmp(um, name, entry);
+-
+- if (ret < 0) {
+- /* Handle invalid character sequence as either an error
+- * or as an opaque byte sequence.
++ res = utf8_strncasecmp(um, name, entry);
++ if (res < 0) {
++ /*
++ * In strict mode, ignore invalid names. In non-strict mode,
++ * fall back to treating them as opaque byte sequences.
+ */
+- if (f2fs_has_strict_mode(sbi))
+- return -EINVAL;
+-
+- if (name->len != entry->len)
+- return 1;
+-
+- return !!memcmp(name->name, entry->name, name->len);
++ if (f2fs_has_strict_mode(sbi) || name->len != entry->len)
++ return false;
++ return !memcmp(name->name, entry->name, name->len);
+ }
+-
+- return ret;
++ return res == 0;
+ }
+
+ static void f2fs_fname_setup_ci_filename(struct inode *dir,
+@@ -188,10 +180,10 @@ static inline bool f2fs_match_name(struct f2fs_dentry_ptr *d,
+ if (cf_str->name) {
+ struct qstr cf = {.name = cf_str->name,
+ .len = cf_str->len};
+- return !f2fs_ci_compare(parent, &cf, &entry, true);
++ return f2fs_match_ci_name(parent, &cf, &entry, true);
+ }
+- return !f2fs_ci_compare(parent, fname->usr_fname, &entry,
+- false);
++ return f2fs_match_ci_name(parent, fname->usr_fname, &entry,
++ false);
+ }
+ #endif
+ if (fscrypt_match_name(fname, d->filename[bit_pos],
+@@ -1080,17 +1072,41 @@ const struct file_operations f2fs_dir_operations = {
+ static int f2fs_d_compare(const struct dentry *dentry, unsigned int len,
+ const char *str, const struct qstr *name)
+ {
+- struct qstr qstr = {.name = str, .len = len };
+ const struct dentry *parent = READ_ONCE(dentry->d_parent);
+- const struct inode *inode = READ_ONCE(parent->d_inode);
++ const struct inode *dir = READ_ONCE(parent->d_inode);
++ const struct f2fs_sb_info *sbi = F2FS_SB(dentry->d_sb);
++ struct qstr entry = QSTR_INIT(str, len);
++ char strbuf[DNAME_INLINE_LEN];
++ int res;
++
++ if (!dir || !IS_CASEFOLDED(dir))
++ goto fallback;
+
+- if (!inode || !IS_CASEFOLDED(inode)) {
+- if (len != name->len)
+- return -1;
+- return memcmp(str, name->name, len);
++ /*
++ * If the dentry name is stored in-line, then it may be concurrently
++ * modified by a rename. If this happens, the VFS will eventually retry
++ * the lookup, so it doesn't matter what ->d_compare() returns.
++ * However, it's unsafe to call utf8_strncasecmp() with an unstable
++ * string. Therefore, we have to copy the name into a temporary buffer.
++ */
++ if (len <= DNAME_INLINE_LEN - 1) {
++ memcpy(strbuf, str, len);
++ strbuf[len] = 0;
++ entry.name = strbuf;
++ /* prevent compiler from optimizing out the temporary buffer */
++ barrier();
+ }
+
+- return f2fs_ci_compare(inode, name, &qstr, false);
++ res = utf8_strncasecmp(sbi->s_encoding, name, &entry);
++ if (res >= 0)
++ return res;
++
++ if (f2fs_has_strict_mode(sbi))
++ return -EINVAL;
++fallback:
++ if (len != name->len)
++ return 1;
++ return !!memcmp(str, name->name, len);
+ }
+
+ static int f2fs_d_hash(const struct dentry *dentry, struct qstr *str)
+diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
+index 7c5dd7f666a0..5a0f95dfbac2 100644
+--- a/fs/f2fs/f2fs.h
++++ b/fs/f2fs/f2fs.h
+@@ -2936,18 +2936,12 @@ static inline bool f2fs_may_extent_tree(struct inode *inode)
+ static inline void *f2fs_kmalloc(struct f2fs_sb_info *sbi,
+ size_t size, gfp_t flags)
+ {
+- void *ret;
+-
+ if (time_to_inject(sbi, FAULT_KMALLOC)) {
+ f2fs_show_injection_info(sbi, FAULT_KMALLOC);
+ return NULL;
+ }
+
+- ret = kmalloc(size, flags);
+- if (ret)
+- return ret;
+-
+- return kvmalloc(size, flags);
++ return kmalloc(size, flags);
+ }
+
+ static inline void *f2fs_kzalloc(struct f2fs_sb_info *sbi,
+@@ -3107,11 +3101,6 @@ int f2fs_update_extension_list(struct f2fs_sb_info *sbi, const char *name,
+ bool hot, bool set);
+ struct dentry *f2fs_get_parent(struct dentry *child);
+
+-extern int f2fs_ci_compare(const struct inode *parent,
+- const struct qstr *name,
+- const struct qstr *entry,
+- bool quick);
+-
+ /*
+ * dir.c
+ */
+@@ -3656,7 +3645,7 @@ static inline int f2fs_build_stats(struct f2fs_sb_info *sbi) { return 0; }
+ static inline void f2fs_destroy_stats(struct f2fs_sb_info *sbi) { }
+ static inline void __init f2fs_create_root_stats(void) { }
+ static inline void f2fs_destroy_root_stats(void) { }
+-static inline void update_sit_info(struct f2fs_sb_info *sbi) {}
++static inline void f2fs_update_sit_info(struct f2fs_sb_info *sbi) {}
+ #endif
+
+ extern const struct file_operations f2fs_dir_operations;
+diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c
+index 6ab8f621a3c5..30b35915fa3a 100644
+--- a/fs/f2fs/file.c
++++ b/fs/f2fs/file.c
+@@ -2219,8 +2219,15 @@ static int f2fs_ioc_shutdown(struct file *filp, unsigned long arg)
+
+ if (in != F2FS_GOING_DOWN_FULLSYNC) {
+ ret = mnt_want_write_file(filp);
+- if (ret)
++ if (ret) {
++ if (ret == -EROFS) {
++ ret = 0;
++ f2fs_stop_checkpoint(sbi, false);
++ set_sbi_flag(sbi, SBI_IS_SHUTDOWN);
++ trace_f2fs_shutdown(sbi, in, ret);
++ }
+ return ret;
++ }
+ }
+
+ switch (in) {
+diff --git a/fs/f2fs/node.c b/fs/f2fs/node.c
+index ecbd6bd14a49..daf531e69b67 100644
+--- a/fs/f2fs/node.c
++++ b/fs/f2fs/node.c
+@@ -2928,7 +2928,7 @@ static int __get_nat_bitmaps(struct f2fs_sb_info *sbi)
+ return 0;
+
+ nm_i->nat_bits_blocks = F2FS_BLK_ALIGN((nat_bits_bytes << 1) + 8);
+- nm_i->nat_bits = f2fs_kzalloc(sbi,
++ nm_i->nat_bits = f2fs_kvzalloc(sbi,
+ nm_i->nat_bits_blocks << F2FS_BLKSIZE_BITS, GFP_KERNEL);
+ if (!nm_i->nat_bits)
+ return -ENOMEM;
+@@ -3061,9 +3061,9 @@ static int init_free_nid_cache(struct f2fs_sb_info *sbi)
+ int i;
+
+ nm_i->free_nid_bitmap =
+- f2fs_kzalloc(sbi, array_size(sizeof(unsigned char *),
+- nm_i->nat_blocks),
+- GFP_KERNEL);
++ f2fs_kvzalloc(sbi, array_size(sizeof(unsigned char *),
++ nm_i->nat_blocks),
++ GFP_KERNEL);
+ if (!nm_i->free_nid_bitmap)
+ return -ENOMEM;
+
+diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
+index 56ccb8323e21..4696c9cb47a5 100644
+--- a/fs/f2fs/super.c
++++ b/fs/f2fs/super.c
+@@ -1303,7 +1303,8 @@ static int f2fs_statfs_project(struct super_block *sb,
+ limit >>= sb->s_blocksize_bits;
+
+ if (limit && buf->f_blocks > limit) {
+- curblock = dquot->dq_dqb.dqb_curspace >> sb->s_blocksize_bits;
++ curblock = (dquot->dq_dqb.dqb_curspace +
++ dquot->dq_dqb.dqb_rsvspace) >> sb->s_blocksize_bits;
+ buf->f_blocks = limit;
+ buf->f_bfree = buf->f_bavail =
+ (buf->f_blocks > curblock) ?
+@@ -3038,7 +3039,7 @@ static int init_blkz_info(struct f2fs_sb_info *sbi, int devi)
+ if (nr_sectors & (bdev_zone_sectors(bdev) - 1))
+ FDEV(devi).nr_blkz++;
+
+- FDEV(devi).blkz_seq = f2fs_kzalloc(sbi,
++ FDEV(devi).blkz_seq = f2fs_kvzalloc(sbi,
+ BITS_TO_LONGS(FDEV(devi).nr_blkz)
+ * sizeof(unsigned long),
+ GFP_KERNEL);
+diff --git a/fs/fuse/dev.c b/fs/fuse/dev.c
+index 97eec7522bf2..5c155437a455 100644
+--- a/fs/fuse/dev.c
++++ b/fs/fuse/dev.c
+@@ -1977,8 +1977,9 @@ static ssize_t fuse_dev_splice_write(struct pipe_inode_info *pipe,
+ struct pipe_buffer *ibuf;
+ struct pipe_buffer *obuf;
+
+- BUG_ON(nbuf >= pipe->ring_size);
+- BUG_ON(tail == head);
++ if (WARN_ON(nbuf >= count || tail == head))
++ goto out_free;
++
+ ibuf = &pipe->bufs[tail & mask];
+ obuf = &bufs[nbuf];
+
+diff --git a/fs/fuse/file.c b/fs/fuse/file.c
+index 9d67b830fb7a..e3afceecaa6b 100644
+--- a/fs/fuse/file.c
++++ b/fs/fuse/file.c
+@@ -712,6 +712,7 @@ static ssize_t fuse_async_req_send(struct fuse_conn *fc,
+ spin_unlock(&io->lock);
+
+ ia->ap.args.end = fuse_aio_complete_req;
++ ia->ap.args.may_block = io->should_dirty;
+ err = fuse_simple_background(fc, &ia->ap.args, GFP_KERNEL);
+ if (err)
+ fuse_aio_complete_req(fc, &ia->ap.args, err);
+@@ -3279,13 +3280,11 @@ static ssize_t __fuse_copy_file_range(struct file *file_in, loff_t pos_in,
+ if (file_inode(file_in)->i_sb != file_inode(file_out)->i_sb)
+ return -EXDEV;
+
+- if (fc->writeback_cache) {
+- inode_lock(inode_in);
+- err = fuse_writeback_range(inode_in, pos_in, pos_in + len);
+- inode_unlock(inode_in);
+- if (err)
+- return err;
+- }
++ inode_lock(inode_in);
++ err = fuse_writeback_range(inode_in, pos_in, pos_in + len - 1);
++ inode_unlock(inode_in);
++ if (err)
++ return err;
+
+ inode_lock(inode_out);
+
+@@ -3293,11 +3292,27 @@ static ssize_t __fuse_copy_file_range(struct file *file_in, loff_t pos_in,
+ if (err)
+ goto out;
+
+- if (fc->writeback_cache) {
+- err = fuse_writeback_range(inode_out, pos_out, pos_out + len);
+- if (err)
+- goto out;
+- }
++ /*
++ * Write out dirty pages in the destination file before sending the COPY
++ * request to userspace. After the request is completed, truncate off
++ * pages (including partial ones) from the cache that have been copied,
++ * since these contain stale data at that point.
++ *
++ * This should be mostly correct, but if the COPY writes to partial
++ * pages (at the start or end) and the parts not covered by the COPY are
++ * written through a memory map after calling fuse_writeback_range(),
++ * then these partial page modifications will be lost on truncation.
++ *
++ * It is unlikely that someone would rely on such mixed style
++ * modifications. Yet this does give less guarantees than if the
++ * copying was performed with write(2).
++ *
++ * To fix this a i_mmap_sem style lock could be used to prevent new
++ * faults while the copy is ongoing.
++ */
++ err = fuse_writeback_range(inode_out, pos_out, pos_out + len - 1);
++ if (err)
++ goto out;
+
+ if (is_unstable)
+ set_bit(FUSE_I_SIZE_UNSTABLE, &fi_out->state);
+@@ -3318,6 +3333,10 @@ static ssize_t __fuse_copy_file_range(struct file *file_in, loff_t pos_in,
+ if (err)
+ goto out;
+
++ truncate_inode_pages_range(inode_out->i_mapping,
++ ALIGN_DOWN(pos_out, PAGE_SIZE),
++ ALIGN(pos_out + outarg.size, PAGE_SIZE) - 1);
++
+ if (fc->writeback_cache) {
+ fuse_write_update_size(inode_out, pos_out + outarg.size);
+ file_update_time(file_out);
+diff --git a/fs/fuse/fuse_i.h b/fs/fuse/fuse_i.h
+index ca344bf71404..d7cde216fc87 100644
+--- a/fs/fuse/fuse_i.h
++++ b/fs/fuse/fuse_i.h
+@@ -249,6 +249,7 @@ struct fuse_args {
+ bool out_argvar:1;
+ bool page_zeroing:1;
+ bool page_replace:1;
++ bool may_block:1;
+ struct fuse_in_arg in_args[3];
+ struct fuse_arg out_args[2];
+ void (*end)(struct fuse_conn *fc, struct fuse_args *args, int error);
+diff --git a/fs/fuse/virtio_fs.c b/fs/fuse/virtio_fs.c
+index bade74768903..0c6ef5d3c6ab 100644
+--- a/fs/fuse/virtio_fs.c
++++ b/fs/fuse/virtio_fs.c
+@@ -60,6 +60,12 @@ struct virtio_fs_forget {
+ struct virtio_fs_forget_req req;
+ };
+
++struct virtio_fs_req_work {
++ struct fuse_req *req;
++ struct virtio_fs_vq *fsvq;
++ struct work_struct done_work;
++};
++
+ static int virtio_fs_enqueue_req(struct virtio_fs_vq *fsvq,
+ struct fuse_req *req, bool in_flight);
+
+@@ -485,19 +491,67 @@ static void copy_args_from_argbuf(struct fuse_args *args, struct fuse_req *req)
+ }
+
+ /* Work function for request completion */
++static void virtio_fs_request_complete(struct fuse_req *req,
++ struct virtio_fs_vq *fsvq)
++{
++ struct fuse_pqueue *fpq = &fsvq->fud->pq;
++ struct fuse_conn *fc = fsvq->fud->fc;
++ struct fuse_args *args;
++ struct fuse_args_pages *ap;
++ unsigned int len, i, thislen;
++ struct page *page;
++
++ /*
++ * TODO verify that server properly follows FUSE protocol
++ * (oh.uniq, oh.len)
++ */
++ args = req->args;
++ copy_args_from_argbuf(args, req);
++
++ if (args->out_pages && args->page_zeroing) {
++ len = args->out_args[args->out_numargs - 1].size;
++ ap = container_of(args, typeof(*ap), args);
++ for (i = 0; i < ap->num_pages; i++) {
++ thislen = ap->descs[i].length;
++ if (len < thislen) {
++ WARN_ON(ap->descs[i].offset);
++ page = ap->pages[i];
++ zero_user_segment(page, len, thislen);
++ len = 0;
++ } else {
++ len -= thislen;
++ }
++ }
++ }
++
++ spin_lock(&fpq->lock);
++ clear_bit(FR_SENT, &req->flags);
++ spin_unlock(&fpq->lock);
++
++ fuse_request_end(fc, req);
++ spin_lock(&fsvq->lock);
++ dec_in_flight_req(fsvq);
++ spin_unlock(&fsvq->lock);
++}
++
++static void virtio_fs_complete_req_work(struct work_struct *work)
++{
++ struct virtio_fs_req_work *w =
++ container_of(work, typeof(*w), done_work);
++
++ virtio_fs_request_complete(w->req, w->fsvq);
++ kfree(w);
++}
++
+ static void virtio_fs_requests_done_work(struct work_struct *work)
+ {
+ struct virtio_fs_vq *fsvq = container_of(work, struct virtio_fs_vq,
+ done_work);
+ struct fuse_pqueue *fpq = &fsvq->fud->pq;
+- struct fuse_conn *fc = fsvq->fud->fc;
+ struct virtqueue *vq = fsvq->vq;
+ struct fuse_req *req;
+- struct fuse_args_pages *ap;
+ struct fuse_req *next;
+- struct fuse_args *args;
+- unsigned int len, i, thislen;
+- struct page *page;
++ unsigned int len;
+ LIST_HEAD(reqs);
+
+ /* Collect completed requests off the virtqueue */
+@@ -515,38 +569,20 @@ static void virtio_fs_requests_done_work(struct work_struct *work)
+
+ /* End requests */
+ list_for_each_entry_safe(req, next, &reqs, list) {
+- /*
+- * TODO verify that server properly follows FUSE protocol
+- * (oh.uniq, oh.len)
+- */
+- args = req->args;
+- copy_args_from_argbuf(args, req);
+-
+- if (args->out_pages && args->page_zeroing) {
+- len = args->out_args[args->out_numargs - 1].size;
+- ap = container_of(args, typeof(*ap), args);
+- for (i = 0; i < ap->num_pages; i++) {
+- thislen = ap->descs[i].length;
+- if (len < thislen) {
+- WARN_ON(ap->descs[i].offset);
+- page = ap->pages[i];
+- zero_user_segment(page, len, thislen);
+- len = 0;
+- } else {
+- len -= thislen;
+- }
+- }
+- }
+-
+- spin_lock(&fpq->lock);
+- clear_bit(FR_SENT, &req->flags);
+ list_del_init(&req->list);
+- spin_unlock(&fpq->lock);
+
+- fuse_request_end(fc, req);
+- spin_lock(&fsvq->lock);
+- dec_in_flight_req(fsvq);
+- spin_unlock(&fsvq->lock);
++ /* blocking async request completes in a worker context */
++ if (req->args->may_block) {
++ struct virtio_fs_req_work *w;
++
++ w = kzalloc(sizeof(*w), GFP_NOFS | __GFP_NOFAIL);
++ INIT_WORK(&w->done_work, virtio_fs_complete_req_work);
++ w->fsvq = fsvq;
++ w->req = req;
++ schedule_work(&w->done_work);
++ } else {
++ virtio_fs_request_complete(req, fsvq);
++ }
+ }
+ }
+
+diff --git a/fs/gfs2/log.c b/fs/gfs2/log.c
+index 0644e58c6191..b7a5221bea7d 100644
+--- a/fs/gfs2/log.c
++++ b/fs/gfs2/log.c
+@@ -1003,8 +1003,10 @@ out:
+ * @new: New transaction to be merged
+ */
+
+-static void gfs2_merge_trans(struct gfs2_trans *old, struct gfs2_trans *new)
++static void gfs2_merge_trans(struct gfs2_sbd *sdp, struct gfs2_trans *new)
+ {
++ struct gfs2_trans *old = sdp->sd_log_tr;
++
+ WARN_ON_ONCE(!test_bit(TR_ATTACHED, &old->tr_flags));
+
+ old->tr_num_buf_new += new->tr_num_buf_new;
+@@ -1016,6 +1018,11 @@ static void gfs2_merge_trans(struct gfs2_trans *old, struct gfs2_trans *new)
+
+ list_splice_tail_init(&new->tr_databuf, &old->tr_databuf);
+ list_splice_tail_init(&new->tr_buf, &old->tr_buf);
++
++ spin_lock(&sdp->sd_ail_lock);
++ list_splice_tail_init(&new->tr_ail1_list, &old->tr_ail1_list);
++ list_splice_tail_init(&new->tr_ail2_list, &old->tr_ail2_list);
++ spin_unlock(&sdp->sd_ail_lock);
+ }
+
+ static void log_refund(struct gfs2_sbd *sdp, struct gfs2_trans *tr)
+@@ -1027,7 +1034,7 @@ static void log_refund(struct gfs2_sbd *sdp, struct gfs2_trans *tr)
+ gfs2_log_lock(sdp);
+
+ if (sdp->sd_log_tr) {
+- gfs2_merge_trans(sdp->sd_log_tr, tr);
++ gfs2_merge_trans(sdp, tr);
+ } else if (tr->tr_num_buf_new || tr->tr_num_databuf_new) {
+ gfs2_assert_withdraw(sdp, test_bit(TR_ALLOCED, &tr->tr_flags));
+ sdp->sd_log_tr = tr;
+diff --git a/fs/gfs2/ops_fstype.c b/fs/gfs2/ops_fstype.c
+index e2b69ffcc6a8..094f5fe7c009 100644
+--- a/fs/gfs2/ops_fstype.c
++++ b/fs/gfs2/ops_fstype.c
+@@ -880,7 +880,7 @@ fail:
+ }
+
+ static const match_table_t nolock_tokens = {
+- { Opt_jid, "jid=%d\n", },
++ { Opt_jid, "jid=%d", },
+ { Opt_err, NULL },
+ };
+
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+index 2698e9b08490..1829be7f63a3 100644
+--- a/fs/io_uring.c
++++ b/fs/io_uring.c
+@@ -513,7 +513,6 @@ enum {
+ REQ_F_INFLIGHT_BIT,
+ REQ_F_CUR_POS_BIT,
+ REQ_F_NOWAIT_BIT,
+- REQ_F_IOPOLL_COMPLETED_BIT,
+ REQ_F_LINK_TIMEOUT_BIT,
+ REQ_F_TIMEOUT_BIT,
+ REQ_F_ISREG_BIT,
+@@ -556,8 +555,6 @@ enum {
+ REQ_F_CUR_POS = BIT(REQ_F_CUR_POS_BIT),
+ /* must not punt to workers */
+ REQ_F_NOWAIT = BIT(REQ_F_NOWAIT_BIT),
+- /* polled IO has completed */
+- REQ_F_IOPOLL_COMPLETED = BIT(REQ_F_IOPOLL_COMPLETED_BIT),
+ /* has linked timeout */
+ REQ_F_LINK_TIMEOUT = BIT(REQ_F_LINK_TIMEOUT_BIT),
+ /* timeout request */
+@@ -618,6 +615,8 @@ struct io_kiocb {
+ int cflags;
+ bool needs_fixed_file;
+ u8 opcode;
++ /* polled IO has completed */
++ u8 iopoll_completed;
+
+ u16 buf_index;
+
+@@ -1691,6 +1690,18 @@ static int io_put_kbuf(struct io_kiocb *req)
+ return cflags;
+ }
+
++static void io_iopoll_queue(struct list_head *again)
++{
++ struct io_kiocb *req;
++
++ do {
++ req = list_first_entry(again, struct io_kiocb, list);
++ list_del(&req->list);
++ refcount_inc(&req->refs);
++ io_queue_async_work(req);
++ } while (!list_empty(again));
++}
++
+ /*
+ * Find and free completed poll iocbs
+ */
+@@ -1699,12 +1710,21 @@ static void io_iopoll_complete(struct io_ring_ctx *ctx, unsigned int *nr_events,
+ {
+ struct req_batch rb;
+ struct io_kiocb *req;
++ LIST_HEAD(again);
++
++ /* order with ->result store in io_complete_rw_iopoll() */
++ smp_rmb();
+
+ rb.to_free = rb.need_iter = 0;
+ while (!list_empty(done)) {
+ int cflags = 0;
+
+ req = list_first_entry(done, struct io_kiocb, list);
++ if (READ_ONCE(req->result) == -EAGAIN) {
++ req->iopoll_completed = 0;
++ list_move_tail(&req->list, &again);
++ continue;
++ }
+ list_del(&req->list);
+
+ if (req->flags & REQ_F_BUFFER_SELECTED)
+@@ -1722,18 +1742,9 @@ static void io_iopoll_complete(struct io_ring_ctx *ctx, unsigned int *nr_events,
+ if (ctx->flags & IORING_SETUP_SQPOLL)
+ io_cqring_ev_posted(ctx);
+ io_free_req_many(ctx, &rb);
+-}
+
+-static void io_iopoll_queue(struct list_head *again)
+-{
+- struct io_kiocb *req;
+-
+- do {
+- req = list_first_entry(again, struct io_kiocb, list);
+- list_del(&req->list);
+- refcount_inc(&req->refs);
+- io_queue_async_work(req);
+- } while (!list_empty(again));
++ if (!list_empty(&again))
++ io_iopoll_queue(&again);
+ }
+
+ static int io_do_iopoll(struct io_ring_ctx *ctx, unsigned int *nr_events,
+@@ -1741,7 +1752,6 @@ static int io_do_iopoll(struct io_ring_ctx *ctx, unsigned int *nr_events,
+ {
+ struct io_kiocb *req, *tmp;
+ LIST_HEAD(done);
+- LIST_HEAD(again);
+ bool spin;
+ int ret;
+
+@@ -1760,20 +1770,13 @@ static int io_do_iopoll(struct io_ring_ctx *ctx, unsigned int *nr_events,
+ * If we find a request that requires polling, break out
+ * and complete those lists first, if we have entries there.
+ */
+- if (req->flags & REQ_F_IOPOLL_COMPLETED) {
++ if (READ_ONCE(req->iopoll_completed)) {
+ list_move_tail(&req->list, &done);
+ continue;
+ }
+ if (!list_empty(&done))
+ break;
+
+- if (req->result == -EAGAIN) {
+- list_move_tail(&req->list, &again);
+- continue;
+- }
+- if (!list_empty(&again))
+- break;
+-
+ ret = kiocb->ki_filp->f_op->iopoll(kiocb, spin);
+ if (ret < 0)
+ break;
+@@ -1786,9 +1789,6 @@ static int io_do_iopoll(struct io_ring_ctx *ctx, unsigned int *nr_events,
+ if (!list_empty(&done))
+ io_iopoll_complete(ctx, nr_events, &done);
+
+- if (!list_empty(&again))
+- io_iopoll_queue(&again);
+-
+ return ret;
+ }
+
+@@ -1937,11 +1937,15 @@ static void io_complete_rw_iopoll(struct kiocb *kiocb, long res, long res2)
+ if (kiocb->ki_flags & IOCB_WRITE)
+ kiocb_end_write(req);
+
+- if (res != req->result)
++ if (res != -EAGAIN && res != req->result)
+ req_set_fail_links(req);
+- req->result = res;
+- if (res != -EAGAIN)
+- req->flags |= REQ_F_IOPOLL_COMPLETED;
++
++ WRITE_ONCE(req->result, res);
++ /* order with io_poll_complete() checking ->result */
++ if (res != -EAGAIN) {
++ smp_wmb();
++ WRITE_ONCE(req->iopoll_completed, 1);
++ }
+ }
+
+ /*
+@@ -1974,7 +1978,7 @@ static void io_iopoll_req_issued(struct io_kiocb *req)
+ * For fast devices, IO may have already completed. If it has, add
+ * it to the front so we find it first.
+ */
+- if (req->flags & REQ_F_IOPOLL_COMPLETED)
++ if (READ_ONCE(req->iopoll_completed))
+ list_add(&req->list, &ctx->poll_list);
+ else
+ list_add_tail(&req->list, &ctx->poll_list);
+@@ -2098,6 +2102,7 @@ static int io_prep_rw(struct io_kiocb *req, const struct io_uring_sqe *sqe,
+ kiocb->ki_flags |= IOCB_HIPRI;
+ kiocb->ki_complete = io_complete_rw_iopoll;
+ req->result = 0;
++ req->iopoll_completed = 0;
+ } else {
+ if (kiocb->ki_flags & IOCB_HIPRI)
+ return -EINVAL;
+@@ -2609,8 +2614,8 @@ copy_iov:
+ }
+ }
+ out_free:
+- kfree(iovec);
+- req->flags &= ~REQ_F_NEED_CLEANUP;
++ if (!(req->flags & REQ_F_NEED_CLEANUP))
++ kfree(iovec);
+ return ret;
+ }
+
+@@ -2732,8 +2737,8 @@ copy_iov:
+ }
+ }
+ out_free:
+- req->flags &= ~REQ_F_NEED_CLEANUP;
+- kfree(iovec);
++ if (!(req->flags & REQ_F_NEED_CLEANUP))
++ kfree(iovec);
+ return ret;
+ }
+
+@@ -4297,6 +4302,28 @@ static void io_async_queue_proc(struct file *file, struct wait_queue_head *head,
+ __io_queue_proc(&pt->req->apoll->poll, pt, head);
+ }
+
++static void io_sq_thread_drop_mm(struct io_ring_ctx *ctx)
++{
++ struct mm_struct *mm = current->mm;
++
++ if (mm) {
++ unuse_mm(mm);
++ mmput(mm);
++ }
++}
++
++static int io_sq_thread_acquire_mm(struct io_ring_ctx *ctx,
++ struct io_kiocb *req)
++{
++ if (io_op_defs[req->opcode].needs_mm && !current->mm) {
++ if (unlikely(!mmget_not_zero(ctx->sqo_mm)))
++ return -EFAULT;
++ use_mm(ctx->sqo_mm);
++ }
++
++ return 0;
++}
++
+ static void io_async_task_func(struct callback_head *cb)
+ {
+ struct io_kiocb *req = container_of(cb, struct io_kiocb, task_work);
+@@ -4328,12 +4355,17 @@ static void io_async_task_func(struct callback_head *cb)
+ if (canceled) {
+ kfree(apoll);
+ io_cqring_ev_posted(ctx);
++end_req:
+ req_set_fail_links(req);
+ io_double_put_req(req);
+ return;
+ }
+
+ __set_current_state(TASK_RUNNING);
++ if (io_sq_thread_acquire_mm(ctx, req)) {
++ io_cqring_add_event(req, -EFAULT);
++ goto end_req;
++ }
+ mutex_lock(&ctx->uring_lock);
+ __io_queue_sqe(req, NULL);
+ mutex_unlock(&ctx->uring_lock);
+@@ -5892,11 +5924,8 @@ static int io_init_req(struct io_ring_ctx *ctx, struct io_kiocb *req,
+ if (unlikely(req->opcode >= IORING_OP_LAST))
+ return -EINVAL;
+
+- if (io_op_defs[req->opcode].needs_mm && !current->mm) {
+- if (unlikely(!mmget_not_zero(ctx->sqo_mm)))
+- return -EFAULT;
+- use_mm(ctx->sqo_mm);
+- }
++ if (unlikely(io_sq_thread_acquire_mm(ctx, req)))
++ return -EFAULT;
+
+ sqe_flags = READ_ONCE(sqe->flags);
+ /* enforce forwards compatibility on users */
+@@ -6006,16 +6035,6 @@ fail_req:
+ return submitted;
+ }
+
+-static inline void io_sq_thread_drop_mm(struct io_ring_ctx *ctx)
+-{
+- struct mm_struct *mm = current->mm;
+-
+- if (mm) {
+- unuse_mm(mm);
+- mmput(mm);
+- }
+-}
+-
+ static int io_sq_thread(void *data)
+ {
+ struct io_ring_ctx *ctx = data;
+@@ -7385,7 +7404,17 @@ static void io_ring_exit_work(struct work_struct *work)
+ if (ctx->rings)
+ io_cqring_overflow_flush(ctx, true);
+
+- wait_for_completion(&ctx->completions[0]);
++ /*
++ * If we're doing polled IO and end up having requests being
++ * submitted async (out-of-line), then completions can come in while
++ * we're waiting for refs to drop. We need to reap these manually,
++ * as nobody else will be looking for them.
++ */
++ while (!wait_for_completion_timeout(&ctx->completions[0], HZ/20)) {
++ io_iopoll_reap_events(ctx);
++ if (ctx->rings)
++ io_cqring_overflow_flush(ctx, true);
++ }
+ io_ring_ctx_free(ctx);
+ }
+
+diff --git a/fs/jbd2/journal.c b/fs/jbd2/journal.c
+index a49d0e670ddf..e4944436e733 100644
+--- a/fs/jbd2/journal.c
++++ b/fs/jbd2/journal.c
+@@ -1140,6 +1140,7 @@ static journal_t *journal_init_common(struct block_device *bdev,
+ init_waitqueue_head(&journal->j_wait_commit);
+ init_waitqueue_head(&journal->j_wait_updates);
+ init_waitqueue_head(&journal->j_wait_reserved);
++ mutex_init(&journal->j_abort_mutex);
+ mutex_init(&journal->j_barrier);
+ mutex_init(&journal->j_checkpoint_mutex);
+ spin_lock_init(&journal->j_revoke_lock);
+@@ -1402,7 +1403,8 @@ static int jbd2_write_superblock(journal_t *journal, int write_flags)
+ printk(KERN_ERR "JBD2: Error %d detected when updating "
+ "journal superblock for %s.\n", ret,
+ journal->j_devname);
+- jbd2_journal_abort(journal, ret);
++ if (!is_journal_aborted(journal))
++ jbd2_journal_abort(journal, ret);
+ }
+
+ return ret;
+@@ -2153,6 +2155,13 @@ void jbd2_journal_abort(journal_t *journal, int errno)
+ {
+ transaction_t *transaction;
+
++ /*
++ * Lock the whole aborting procedure until everything is done; this
++ * avoids races with the filesystem's error handling flow (e.g.
++ * ext4_abort()) and ensures the panic happens only after the error
++ * info has been written into the journal's superblock.
++ */
++ mutex_lock(&journal->j_abort_mutex);
+ /*
+ * ESHUTDOWN always takes precedence because a file system check
+ * caused by any other journal abort error is not required after
+@@ -2167,6 +2176,7 @@ void jbd2_journal_abort(journal_t *journal, int errno)
+ journal->j_errno = errno;
+ jbd2_journal_update_sb_errno(journal);
+ }
++ mutex_unlock(&journal->j_abort_mutex);
+ return;
+ }
+
+@@ -2188,10 +2198,7 @@ void jbd2_journal_abort(journal_t *journal, int errno)
+ * layer could realise that a filesystem check is needed.
+ */
+ jbd2_journal_update_sb_errno(journal);
+-
+- write_lock(&journal->j_state_lock);
+- journal->j_flags |= JBD2_REC_ERR;
+- write_unlock(&journal->j_state_lock);
++ mutex_unlock(&journal->j_abort_mutex);
+ }
+
+ /**
+diff --git a/fs/nfs/direct.c b/fs/nfs/direct.c
+index a57e7c72c7f4..d49b1d197908 100644
+--- a/fs/nfs/direct.c
++++ b/fs/nfs/direct.c
+@@ -731,6 +731,8 @@ static void nfs_direct_write_completion(struct nfs_pgio_header *hdr)
+ nfs_list_remove_request(req);
+ if (request_commit) {
+ kref_get(&req->wb_kref);
++ memcpy(&req->wb_verf, &hdr->verf.verifier,
++ sizeof(req->wb_verf));
+ nfs_mark_request_commit(req, hdr->lseg, &cinfo,
+ hdr->ds_commit_idx);
+ }
+diff --git a/fs/nfs/inode.c b/fs/nfs/inode.c
+index b9d0921cb4fe..0bf1f835de01 100644
+--- a/fs/nfs/inode.c
++++ b/fs/nfs/inode.c
+@@ -833,6 +833,8 @@ int nfs_getattr(const struct path *path, struct kstat *stat,
+ do_update |= cache_validity & NFS_INO_INVALID_ATIME;
+ if (request_mask & (STATX_CTIME|STATX_MTIME))
+ do_update |= cache_validity & NFS_INO_REVAL_PAGECACHE;
++ if (request_mask & STATX_BLOCKS)
++ do_update |= cache_validity & NFS_INO_INVALID_BLOCKS;
+ if (do_update) {
+ /* Update the attribute cache */
+ if (!(server->flags & NFS_MOUNT_NOAC))
+@@ -1764,7 +1766,8 @@ out_noforce:
+ status = nfs_post_op_update_inode_locked(inode, fattr,
+ NFS_INO_INVALID_CHANGE
+ | NFS_INO_INVALID_CTIME
+- | NFS_INO_INVALID_MTIME);
++ | NFS_INO_INVALID_MTIME
++ | NFS_INO_INVALID_BLOCKS);
+ return status;
+ }
+
+@@ -1871,7 +1874,8 @@ static int nfs_update_inode(struct inode *inode, struct nfs_fattr *fattr)
+ nfsi->cache_validity &= ~(NFS_INO_INVALID_ATTR
+ | NFS_INO_INVALID_ATIME
+ | NFS_INO_REVAL_FORCED
+- | NFS_INO_REVAL_PAGECACHE);
++ | NFS_INO_REVAL_PAGECACHE
++ | NFS_INO_INVALID_BLOCKS);
+
+ /* Do atomic weak cache consistency updates */
+ nfs_wcc_update_inode(inode, fattr);
+@@ -2033,8 +2037,12 @@ static int nfs_update_inode(struct inode *inode, struct nfs_fattr *fattr)
+ inode->i_blocks = nfs_calc_block_size(fattr->du.nfs3.used);
+ } else if (fattr->valid & NFS_ATTR_FATTR_BLOCKS_USED)
+ inode->i_blocks = fattr->du.nfs2.blocks;
+- else
++ else {
++ nfsi->cache_validity |= save_cache_validity &
++ (NFS_INO_INVALID_BLOCKS
++ | NFS_INO_REVAL_FORCED);
+ cache_revalidated = false;
++ }
+
+ /* Update attrtimeo value if we're out of the unstable period */
+ if (attr_changed) {
+diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
+index 9056f3dd380e..e32717fd1169 100644
+--- a/fs/nfs/nfs4proc.c
++++ b/fs/nfs/nfs4proc.c
+@@ -7909,7 +7909,7 @@ nfs4_bind_one_conn_to_session_done(struct rpc_task *task, void *calldata)
+ }
+
+ static const struct rpc_call_ops nfs4_bind_one_conn_to_session_ops = {
+- .rpc_call_done = &nfs4_bind_one_conn_to_session_done,
++ .rpc_call_done = nfs4_bind_one_conn_to_session_done,
+ };
+
+ /*
+diff --git a/fs/nfsd/cache.h b/fs/nfsd/cache.h
+index 10ec5ecdf117..65c331f75e9c 100644
+--- a/fs/nfsd/cache.h
++++ b/fs/nfsd/cache.h
+@@ -78,6 +78,8 @@ enum {
+ /* Checksum this amount of the request */
+ #define RC_CSUMLEN (256U)
+
++int nfsd_drc_slab_create(void);
++void nfsd_drc_slab_free(void);
+ int nfsd_reply_cache_init(struct nfsd_net *);
+ void nfsd_reply_cache_shutdown(struct nfsd_net *);
+ int nfsd_cache_lookup(struct svc_rqst *);
+diff --git a/fs/nfsd/netns.h b/fs/nfsd/netns.h
+index 09aa545825bd..9217cb64bf0e 100644
+--- a/fs/nfsd/netns.h
++++ b/fs/nfsd/netns.h
+@@ -139,7 +139,6 @@ struct nfsd_net {
+ * Duplicate reply cache
+ */
+ struct nfsd_drc_bucket *drc_hashtbl;
+- struct kmem_cache *drc_slab;
+
+ /* max number of entries allowed in the cache */
+ unsigned int max_drc_entries;
+diff --git a/fs/nfsd/nfs4callback.c b/fs/nfsd/nfs4callback.c
+index 5cf91322de0f..07e0c6f6322f 100644
+--- a/fs/nfsd/nfs4callback.c
++++ b/fs/nfsd/nfs4callback.c
+@@ -1301,6 +1301,8 @@ static void nfsd4_process_cb_update(struct nfsd4_callback *cb)
+ err = setup_callback_client(clp, &conn, ses);
+ if (err) {
+ nfsd4_mark_cb_down(clp, err);
++ if (c)
++ svc_xprt_put(c->cn_xprt);
+ return;
+ }
+ }
+diff --git a/fs/nfsd/nfscache.c b/fs/nfsd/nfscache.c
+index 96352ab7bd81..4a258065188e 100644
+--- a/fs/nfsd/nfscache.c
++++ b/fs/nfsd/nfscache.c
+@@ -36,6 +36,8 @@ struct nfsd_drc_bucket {
+ spinlock_t cache_lock;
+ };
+
++static struct kmem_cache *drc_slab;
++
+ static int nfsd_cache_append(struct svc_rqst *rqstp, struct kvec *vec);
+ static unsigned long nfsd_reply_cache_count(struct shrinker *shrink,
+ struct shrink_control *sc);
+@@ -95,7 +97,7 @@ nfsd_reply_cache_alloc(struct svc_rqst *rqstp, __wsum csum,
+ {
+ struct svc_cacherep *rp;
+
+- rp = kmem_cache_alloc(nn->drc_slab, GFP_KERNEL);
++ rp = kmem_cache_alloc(drc_slab, GFP_KERNEL);
+ if (rp) {
+ rp->c_state = RC_UNUSED;
+ rp->c_type = RC_NOCACHE;
+@@ -129,7 +131,7 @@ nfsd_reply_cache_free_locked(struct nfsd_drc_bucket *b, struct svc_cacherep *rp,
+ atomic_dec(&nn->num_drc_entries);
+ nn->drc_mem_usage -= sizeof(*rp);
+ }
+- kmem_cache_free(nn->drc_slab, rp);
++ kmem_cache_free(drc_slab, rp);
+ }
+
+ static void
+@@ -141,6 +143,18 @@ nfsd_reply_cache_free(struct nfsd_drc_bucket *b, struct svc_cacherep *rp,
+ spin_unlock(&b->cache_lock);
+ }
+
++int nfsd_drc_slab_create(void)
++{
++ drc_slab = kmem_cache_create("nfsd_drc",
++ sizeof(struct svc_cacherep), 0, 0, NULL);
++ return drc_slab ? 0: -ENOMEM;
++}
++
++void nfsd_drc_slab_free(void)
++{
++ kmem_cache_destroy(drc_slab);
++}
++
+ int nfsd_reply_cache_init(struct nfsd_net *nn)
+ {
+ unsigned int hashsize;
+@@ -159,18 +173,13 @@ int nfsd_reply_cache_init(struct nfsd_net *nn)
+ if (status)
+ goto out_nomem;
+
+- nn->drc_slab = kmem_cache_create("nfsd_drc",
+- sizeof(struct svc_cacherep), 0, 0, NULL);
+- if (!nn->drc_slab)
+- goto out_shrinker;
+-
+ nn->drc_hashtbl = kcalloc(hashsize,
+ sizeof(*nn->drc_hashtbl), GFP_KERNEL);
+ if (!nn->drc_hashtbl) {
+ nn->drc_hashtbl = vzalloc(array_size(hashsize,
+ sizeof(*nn->drc_hashtbl)));
+ if (!nn->drc_hashtbl)
+- goto out_slab;
++ goto out_shrinker;
+ }
+
+ for (i = 0; i < hashsize; i++) {
+@@ -180,8 +189,6 @@ int nfsd_reply_cache_init(struct nfsd_net *nn)
+ nn->drc_hashsize = hashsize;
+
+ return 0;
+-out_slab:
+- kmem_cache_destroy(nn->drc_slab);
+ out_shrinker:
+ unregister_shrinker(&nn->nfsd_reply_cache_shrinker);
+ out_nomem:
+@@ -209,8 +216,6 @@ void nfsd_reply_cache_shutdown(struct nfsd_net *nn)
+ nn->drc_hashtbl = NULL;
+ nn->drc_hashsize = 0;
+
+- kmem_cache_destroy(nn->drc_slab);
+- nn->drc_slab = NULL;
+ }
+
+ /*
+@@ -464,8 +469,7 @@ found_entry:
+ rtn = RC_REPLY;
+ break;
+ default:
+- printk(KERN_WARNING "nfsd: bad repcache type %d\n", rp->c_type);
+- nfsd_reply_cache_free_locked(b, rp, nn);
++ WARN_ONCE(1, "nfsd: bad repcache type %d\n", rp->c_type);
+ }
+
+ goto out;
+diff --git a/fs/nfsd/nfsctl.c b/fs/nfsd/nfsctl.c
+index 3bb2db947d29..71687d99b090 100644
+--- a/fs/nfsd/nfsctl.c
++++ b/fs/nfsd/nfsctl.c
+@@ -1533,6 +1533,9 @@ static int __init init_nfsd(void)
+ goto out_free_slabs;
+ nfsd_fault_inject_init(); /* nfsd fault injection controls */
+ nfsd_stat_init(); /* Statistics */
++ retval = nfsd_drc_slab_create();
++ if (retval)
++ goto out_free_stat;
+ nfsd_lockd_init(); /* lockd->nfsd callbacks */
+ retval = create_proc_exports_entry();
+ if (retval)
+@@ -1546,6 +1549,8 @@ out_free_all:
+ remove_proc_entry("fs/nfs", NULL);
+ out_free_lockd:
+ nfsd_lockd_shutdown();
++ nfsd_drc_slab_free();
++out_free_stat:
+ nfsd_stat_shutdown();
+ nfsd_fault_inject_cleanup();
+ nfsd4_exit_pnfs();
+@@ -1560,6 +1565,7 @@ out_unregister_pernet:
+
+ static void __exit exit_nfsd(void)
+ {
++ nfsd_drc_slab_free();
+ remove_proc_entry("fs/nfs/exports", NULL);
+ remove_proc_entry("fs/nfs", NULL);
+ nfsd_stat_shutdown();
+diff --git a/fs/proc/bootconfig.c b/fs/proc/bootconfig.c
+index 9955d75c0585..ad31ec4ad627 100644
+--- a/fs/proc/bootconfig.c
++++ b/fs/proc/bootconfig.c
+@@ -26,8 +26,9 @@ static int boot_config_proc_show(struct seq_file *m, void *v)
+ static int __init copy_xbc_key_value_list(char *dst, size_t size)
+ {
+ struct xbc_node *leaf, *vnode;
+- const char *val;
+ char *key, *end = dst + size;
++ const char *val;
++ char q;
+ int ret = 0;
+
+ key = kzalloc(XBC_KEYLEN_MAX, GFP_KERNEL);
+@@ -41,16 +42,20 @@ static int __init copy_xbc_key_value_list(char *dst, size_t size)
+ break;
+ dst += ret;
+ vnode = xbc_node_get_child(leaf);
+- if (vnode && xbc_node_is_array(vnode)) {
++ if (vnode) {
+ xbc_array_for_each_value(vnode, val) {
+- ret = snprintf(dst, rest(dst, end), "\"%s\"%s",
+- val, vnode->next ? ", " : "\n");
++ if (strchr(val, '"'))
++ q = '\'';
++ else
++ q = '"';
++ ret = snprintf(dst, rest(dst, end), "%c%s%c%s",
++ q, val, q, vnode->next ? ", " : "\n");
+ if (ret < 0)
+ goto out;
+ dst += ret;
+ }
+ } else {
+- ret = snprintf(dst, rest(dst, end), "\"%s\"\n", val);
++ ret = snprintf(dst, rest(dst, end), "\"\"\n");
+ if (ret < 0)
+ break;
+ dst += ret;
+diff --git a/fs/xfs/xfs_inode.c b/fs/xfs/xfs_inode.c
+index d1772786af29..8845faa8161a 100644
+--- a/fs/xfs/xfs_inode.c
++++ b/fs/xfs/xfs_inode.c
+@@ -2639,8 +2639,10 @@ xfs_ifree_cluster(
+ error = xfs_trans_get_buf(tp, mp->m_ddev_targp, blkno,
+ mp->m_bsize * igeo->blocks_per_cluster,
+ XBF_UNMAPPED, &bp);
+- if (error)
++ if (error) {
++ xfs_perag_put(pag);
+ return error;
++ }
+
+ /*
+ * This buffer may not have been correctly initialised as we
+diff --git a/include/linux/bitops.h b/include/linux/bitops.h
+index 9acf654f0b19..99f2ac30b1d9 100644
+--- a/include/linux/bitops.h
++++ b/include/linux/bitops.h
+@@ -72,7 +72,7 @@ static inline int get_bitmask_order(unsigned int count)
+
+ static __always_inline unsigned long hweight_long(unsigned long w)
+ {
+- return sizeof(w) == 4 ? hweight32(w) : hweight64(w);
++ return sizeof(w) == 4 ? hweight32(w) : hweight64((__u64)w);
+ }
+
+ /**
+diff --git a/include/linux/coresight.h b/include/linux/coresight.h
+index 193cc9dbf448..09f0565a5de3 100644
+--- a/include/linux/coresight.h
++++ b/include/linux/coresight.h
+@@ -100,10 +100,12 @@ union coresight_dev_subtype {
+ };
+
+ /**
+- * struct coresight_platform_data - data harvested from the DT specification
+- * @nr_inport: number of input ports for this component.
+- * @nr_outport: number of output ports for this component.
+- * @conns: Array of nr_outport connections from this component
++ * struct coresight_platform_data - data harvested from the firmware
++ * specification.
++ *
++ * @nr_inport: Number of elements for the input connections.
++ * @nr_outport: Number of elements for the output connections.
++ * @conns: Sparse array of nr_outport connections from this component.
+ */
+ struct coresight_platform_data {
+ int nr_inport;
+diff --git a/include/linux/ioport.h b/include/linux/ioport.h
+index a9b9170b5dd2..6c3eca90cbc4 100644
+--- a/include/linux/ioport.h
++++ b/include/linux/ioport.h
+@@ -301,5 +301,11 @@ struct resource *devm_request_free_mem_region(struct device *dev,
+ struct resource *request_free_mem_region(struct resource *base,
+ unsigned long size, const char *name);
+
++#ifdef CONFIG_IO_STRICT_DEVMEM
++void revoke_devmem(struct resource *res);
++#else
++static inline void revoke_devmem(struct resource *res) { };
++#endif
++
+ #endif /* __ASSEMBLY__ */
+ #endif /* _LINUX_IOPORT_H */
+diff --git a/include/linux/jbd2.h b/include/linux/jbd2.h
+index f613d8529863..d56128df2aff 100644
+--- a/include/linux/jbd2.h
++++ b/include/linux/jbd2.h
+@@ -765,6 +765,11 @@ struct journal_s
+ */
+ int j_errno;
+
++ /**
++ * @j_abort_mutex: Lock the whole aborting procedure.
++ */
++ struct mutex j_abort_mutex;
++
+ /**
+ * @j_sb_buffer: The first part of the superblock buffer.
+ */
+@@ -1247,7 +1252,6 @@ JBD2_FEATURE_INCOMPAT_FUNCS(csum3, CSUM_V3)
+ #define JBD2_ABORT_ON_SYNCDATA_ERR 0x040 /* Abort the journal on file
+ * data write error in ordered
+ * mode */
+-#define JBD2_REC_ERR 0x080 /* The errno in the sb has been recorded */
+
+ /*
+ * Function declarations for the journaling transaction and buffer
+diff --git a/include/linux/kprobes.h b/include/linux/kprobes.h
+index 04bdaf01112c..645fd401c856 100644
+--- a/include/linux/kprobes.h
++++ b/include/linux/kprobes.h
+@@ -350,6 +350,10 @@ static inline struct kprobe_ctlblk *get_kprobe_ctlblk(void)
+ return this_cpu_ptr(&kprobe_ctlblk);
+ }
+
++extern struct kprobe kprobe_busy;
++void kprobe_busy_begin(void);
++void kprobe_busy_end(void);
++
+ kprobe_opcode_t *kprobe_lookup_name(const char *name, unsigned int offset);
+ int register_kprobe(struct kprobe *p);
+ void unregister_kprobe(struct kprobe *p);
+diff --git a/include/linux/libata.h b/include/linux/libata.h
+index cffa4714bfa8..ae6dfc107ea8 100644
+--- a/include/linux/libata.h
++++ b/include/linux/libata.h
+@@ -22,6 +22,7 @@
+ #include <linux/acpi.h>
+ #include <linux/cdrom.h>
+ #include <linux/sched.h>
++#include <linux/async.h>
+
+ /*
+ * Define if arch has non-standard setup. This is a _PCI_ standard
+@@ -872,6 +873,8 @@ struct ata_port {
+ struct timer_list fastdrain_timer;
+ unsigned long fastdrain_cnt;
+
++ async_cookie_t cookie;
++
+ int em_message_type;
+ void *private_data;
+
+diff --git a/include/linux/mfd/stmfx.h b/include/linux/mfd/stmfx.h
+index 3c67983678ec..744dce63946e 100644
+--- a/include/linux/mfd/stmfx.h
++++ b/include/linux/mfd/stmfx.h
+@@ -109,6 +109,7 @@ struct stmfx {
+ struct device *dev;
+ struct regmap *map;
+ struct regulator *vdd;
++ int irq;
+ struct irq_domain *irq_domain;
+ struct mutex lock; /* IRQ bus lock */
+ u8 irq_src;
+diff --git a/include/linux/nfs_fs.h b/include/linux/nfs_fs.h
+index 73eda45f1cfd..6ee9119acc5d 100644
+--- a/include/linux/nfs_fs.h
++++ b/include/linux/nfs_fs.h
+@@ -230,6 +230,7 @@ struct nfs4_copy_state {
+ #define NFS_INO_INVALID_OTHER BIT(12) /* other attrs are invalid */
+ #define NFS_INO_DATA_INVAL_DEFER \
+ BIT(13) /* Deferred cache invalidation */
++#define NFS_INO_INVALID_BLOCKS BIT(14) /* cached blocks are invalid */
+
+ #define NFS_INO_INVALID_ATTR (NFS_INO_INVALID_CHANGE \
+ | NFS_INO_INVALID_CTIME \
+diff --git a/include/linux/usb/composite.h b/include/linux/usb/composite.h
+index 8675e145ea8b..2040696d75b6 100644
+--- a/include/linux/usb/composite.h
++++ b/include/linux/usb/composite.h
+@@ -249,6 +249,9 @@ int usb_function_activate(struct usb_function *);
+
+ int usb_interface_id(struct usb_configuration *, struct usb_function *);
+
++int config_ep_by_speed_and_alt(struct usb_gadget *g, struct usb_function *f,
++ struct usb_ep *_ep, u8 alt);
++
+ int config_ep_by_speed(struct usb_gadget *g, struct usb_function *f,
+ struct usb_ep *_ep);
+
+diff --git a/include/linux/usb/gadget.h b/include/linux/usb/gadget.h
+index 9411c08a5c7e..73a6113322c6 100644
+--- a/include/linux/usb/gadget.h
++++ b/include/linux/usb/gadget.h
+@@ -373,6 +373,7 @@ struct usb_gadget_ops {
+ * @connected: True if gadget is connected.
+ * @lpm_capable: If the gadget max_speed is FULL or HIGH, this flag
+ * indicates that it supports LPM as per the LPM ECN & errata.
++ * @irq: the interrupt number for the device controller.
+ *
+ * Gadgets have a mostly-portable "gadget driver" implementing device
+ * functions, handling all usb configurations and interfaces. Gadget
+@@ -427,6 +428,7 @@ struct usb_gadget {
+ unsigned deactivated:1;
+ unsigned connected:1;
+ unsigned lpm_capable:1;
++ int irq;
+ };
+ #define work_to_gadget(w) (container_of((w), struct usb_gadget, work))
+
+diff --git a/include/sound/soc.h b/include/sound/soc.h
+index 946f88a6c63d..8e480efeda2a 100644
+--- a/include/sound/soc.h
++++ b/include/sound/soc.h
+@@ -790,9 +790,6 @@ struct snd_soc_dai_link {
+ const struct snd_soc_pcm_stream *params;
+ unsigned int num_params;
+
+- struct snd_soc_dapm_widget *playback_widget;
+- struct snd_soc_dapm_widget *capture_widget;
+-
+ unsigned int dai_fmt; /* format to set on init */
+
+ enum snd_soc_dpcm_trigger trigger[2]; /* trigger type for DPCM */
+@@ -1156,6 +1153,9 @@ struct snd_soc_pcm_runtime {
+ struct snd_soc_dai **cpu_dais;
+ unsigned int num_cpus;
+
++ struct snd_soc_dapm_widget *playback_widget;
++ struct snd_soc_dapm_widget *capture_widget;
++
+ struct delayed_work delayed_work;
+ void (*close_delayed_work_func)(struct snd_soc_pcm_runtime *rtd);
+ #ifdef CONFIG_DEBUG_FS
+@@ -1177,7 +1177,7 @@ struct snd_soc_pcm_runtime {
+ #define asoc_rtd_to_codec(rtd, n) (rtd)->dais[n + (rtd)->num_cpus]
+
+ #define for_each_rtd_components(rtd, i, component) \
+- for ((i) = 0; \
++ for ((i) = 0, component = NULL; \
+ ((i) < rtd->num_components) && ((component) = rtd->components[i]);\
+ (i)++)
+ #define for_each_rtd_cpu_dais(rtd, i, dai) \
+diff --git a/include/trace/events/afs.h b/include/trace/events/afs.h
+index c612cabbc378..93eddd32bd74 100644
+--- a/include/trace/events/afs.h
++++ b/include/trace/events/afs.h
+@@ -988,24 +988,22 @@ TRACE_EVENT(afs_edit_dir,
+ );
+
+ TRACE_EVENT(afs_protocol_error,
+- TP_PROTO(struct afs_call *call, int error, enum afs_eproto_cause cause),
++ TP_PROTO(struct afs_call *call, enum afs_eproto_cause cause),
+
+- TP_ARGS(call, error, cause),
++ TP_ARGS(call, cause),
+
+ TP_STRUCT__entry(
+ __field(unsigned int, call )
+- __field(int, error )
+ __field(enum afs_eproto_cause, cause )
+ ),
+
+ TP_fast_assign(
+ __entry->call = call ? call->debug_id : 0;
+- __entry->error = error;
+ __entry->cause = cause;
+ ),
+
+- TP_printk("c=%08x r=%d %s",
+- __entry->call, __entry->error,
++ TP_printk("c=%08x %s",
++ __entry->call,
+ __print_symbolic(__entry->cause, afs_eproto_causes))
+ );
+
+diff --git a/include/uapi/linux/magic.h b/include/uapi/linux/magic.h
+index d78064007b17..f3956fc11de6 100644
+--- a/include/uapi/linux/magic.h
++++ b/include/uapi/linux/magic.h
+@@ -94,6 +94,7 @@
+ #define BALLOON_KVM_MAGIC 0x13661366
+ #define ZSMALLOC_MAGIC 0x58295829
+ #define DMA_BUF_MAGIC 0x444d4142 /* "DMAB" */
++#define DEVMEM_MAGIC 0x454d444d /* "DMEM" */
+ #define Z3FOLD_MAGIC 0x33
+ #define PPC_CMM_MAGIC 0xc7571590
+
+diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
+index 5e52765161f9..c8acc8f37583 100644
+--- a/kernel/bpf/syscall.c
++++ b/kernel/bpf/syscall.c
+@@ -2924,6 +2924,7 @@ static struct bpf_insn *bpf_insn_prepare_dump(const struct bpf_prog *prog)
+ struct bpf_insn *insns;
+ u32 off, type;
+ u64 imm;
++ u8 code;
+ int i;
+
+ insns = kmemdup(prog->insnsi, bpf_prog_insn_size(prog),
+@@ -2932,21 +2933,27 @@ static struct bpf_insn *bpf_insn_prepare_dump(const struct bpf_prog *prog)
+ return insns;
+
+ for (i = 0; i < prog->len; i++) {
+- if (insns[i].code == (BPF_JMP | BPF_TAIL_CALL)) {
++ code = insns[i].code;
++
++ if (code == (BPF_JMP | BPF_TAIL_CALL)) {
+ insns[i].code = BPF_JMP | BPF_CALL;
+ insns[i].imm = BPF_FUNC_tail_call;
+ /* fall-through */
+ }
+- if (insns[i].code == (BPF_JMP | BPF_CALL) ||
+- insns[i].code == (BPF_JMP | BPF_CALL_ARGS)) {
+- if (insns[i].code == (BPF_JMP | BPF_CALL_ARGS))
++ if (code == (BPF_JMP | BPF_CALL) ||
++ code == (BPF_JMP | BPF_CALL_ARGS)) {
++ if (code == (BPF_JMP | BPF_CALL_ARGS))
+ insns[i].code = BPF_JMP | BPF_CALL;
+ if (!bpf_dump_raw_ok())
+ insns[i].imm = 0;
+ continue;
+ }
++ if (BPF_CLASS(code) == BPF_LDX && BPF_MODE(code) == BPF_PROBE_MEM) {
++ insns[i].code = BPF_LDX | BPF_SIZE(code) | BPF_MEM;
++ continue;
++ }
+
+- if (insns[i].code != (BPF_LD | BPF_IMM | BPF_DW))
++ if (code != (BPF_LD | BPF_IMM | BPF_DW))
+ continue;
+
+ imm = ((u64)insns[i + 1].imm << 32) | (u32)insns[i].imm;
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index efe14cf24bc6..739d9ba3ba6b 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -7366,7 +7366,7 @@ static int check_btf_func(struct bpf_verifier_env *env,
+ const struct btf *btf;
+ void __user *urecord;
+ u32 prev_offset = 0;
+- int ret = 0;
++ int ret = -ENOMEM;
+
+ nfuncs = attr->func_info_cnt;
+ if (!nfuncs)
+diff --git a/kernel/kprobes.c b/kernel/kprobes.c
+index 2625c241ac00..195ecb955fcc 100644
+--- a/kernel/kprobes.c
++++ b/kernel/kprobes.c
+@@ -586,11 +586,12 @@ static void kprobe_optimizer(struct work_struct *work)
+ mutex_unlock(&module_mutex);
+ mutex_unlock(&text_mutex);
+ cpus_read_unlock();
+- mutex_unlock(&kprobe_mutex);
+
+ /* Step 5: Kick optimizer again if needed */
+ if (!list_empty(&optimizing_list) || !list_empty(&unoptimizing_list))
+ kick_kprobe_optimizer();
++
++ mutex_unlock(&kprobe_mutex);
+ }
+
+ /* Wait for completing optimization and unoptimization */
+@@ -1236,6 +1237,26 @@ __releases(hlist_lock)
+ }
+ NOKPROBE_SYMBOL(kretprobe_table_unlock);
+
++struct kprobe kprobe_busy = {
++ .addr = (void *) get_kprobe,
++};
++
++void kprobe_busy_begin(void)
++{
++ struct kprobe_ctlblk *kcb;
++
++ preempt_disable();
++ __this_cpu_write(current_kprobe, &kprobe_busy);
++ kcb = get_kprobe_ctlblk();
++ kcb->kprobe_status = KPROBE_HIT_ACTIVE;
++}
++
++void kprobe_busy_end(void)
++{
++ __this_cpu_write(current_kprobe, NULL);
++ preempt_enable();
++}
++
+ /*
+ * This function is called from finish_task_switch when task tk becomes dead,
+ * so that we can recycle any function-return probe instances associated
+@@ -1253,6 +1274,8 @@ void kprobe_flush_task(struct task_struct *tk)
+ /* Early boot. kretprobe_table_locks not yet initialized. */
+ return;
+
++ kprobe_busy_begin();
++
+ INIT_HLIST_HEAD(&empty_rp);
+ hash = hash_ptr(tk, KPROBE_HASH_BITS);
+ head = &kretprobe_inst_table[hash];
+@@ -1266,6 +1289,8 @@ void kprobe_flush_task(struct task_struct *tk)
+ hlist_del(&ri->hlist);
+ kfree(ri);
+ }
++
++ kprobe_busy_end();
+ }
+ NOKPROBE_SYMBOL(kprobe_flush_task);
+
+diff --git a/kernel/resource.c b/kernel/resource.c
+index 76036a41143b..841737bbda9e 100644
+--- a/kernel/resource.c
++++ b/kernel/resource.c
+@@ -1126,6 +1126,7 @@ struct resource * __request_region(struct resource *parent,
+ {
+ DECLARE_WAITQUEUE(wait, current);
+ struct resource *res = alloc_resource(GFP_KERNEL);
++ struct resource *orig_parent = parent;
+
+ if (!res)
+ return NULL;
+@@ -1176,6 +1177,10 @@ struct resource * __request_region(struct resource *parent,
+ break;
+ }
+ write_unlock(&resource_lock);
++
++ if (res && orig_parent == &iomem_resource)
++ revoke_devmem(res);
++
+ return res;
+ }
+ EXPORT_SYMBOL(__request_region);
+diff --git a/kernel/trace/blktrace.c b/kernel/trace/blktrace.c
+index ca39dc3230cb..35610a4be4a9 100644
+--- a/kernel/trace/blktrace.c
++++ b/kernel/trace/blktrace.c
+@@ -995,8 +995,10 @@ static void blk_add_trace_split(void *ignore,
+
+ __blk_add_trace(bt, bio->bi_iter.bi_sector,
+ bio->bi_iter.bi_size, bio_op(bio), bio->bi_opf,
+- BLK_TA_SPLIT, bio->bi_status, sizeof(rpdu),
+- &rpdu, blk_trace_bio_get_cgid(q, bio));
++ BLK_TA_SPLIT,
++ blk_status_to_errno(bio->bi_status),
++ sizeof(rpdu), &rpdu,
++ blk_trace_bio_get_cgid(q, bio));
+ }
+ rcu_read_unlock();
+ }
+@@ -1033,7 +1035,8 @@ static void blk_add_trace_bio_remap(void *ignore,
+ r.sector_from = cpu_to_be64(from);
+
+ __blk_add_trace(bt, bio->bi_iter.bi_sector, bio->bi_iter.bi_size,
+- bio_op(bio), bio->bi_opf, BLK_TA_REMAP, bio->bi_status,
++ bio_op(bio), bio->bi_opf, BLK_TA_REMAP,
++ blk_status_to_errno(bio->bi_status),
+ sizeof(r), &r, blk_trace_bio_get_cgid(q, bio));
+ rcu_read_unlock();
+ }
+@@ -1253,21 +1256,10 @@ static inline __u16 t_error(const struct trace_entry *ent)
+
+ static __u64 get_pdu_int(const struct trace_entry *ent, bool has_cg)
+ {
+- const __u64 *val = pdu_start(ent, has_cg);
++ const __be64 *val = pdu_start(ent, has_cg);
+ return be64_to_cpu(*val);
+ }
+
+-static void get_pdu_remap(const struct trace_entry *ent,
+- struct blk_io_trace_remap *r, bool has_cg)
+-{
+- const struct blk_io_trace_remap *__r = pdu_start(ent, has_cg);
+- __u64 sector_from = __r->sector_from;
+-
+- r->device_from = be32_to_cpu(__r->device_from);
+- r->device_to = be32_to_cpu(__r->device_to);
+- r->sector_from = be64_to_cpu(sector_from);
+-}
+-
+ typedef void (blk_log_action_t) (struct trace_iterator *iter, const char *act,
+ bool has_cg);
+
+@@ -1407,13 +1399,13 @@ static void blk_log_with_error(struct trace_seq *s,
+
+ static void blk_log_remap(struct trace_seq *s, const struct trace_entry *ent, bool has_cg)
+ {
+- struct blk_io_trace_remap r = { .device_from = 0, };
++ const struct blk_io_trace_remap *__r = pdu_start(ent, has_cg);
+
+- get_pdu_remap(ent, &r, has_cg);
+ trace_seq_printf(s, "%llu + %u <- (%d,%d) %llu\n",
+ t_sector(ent), t_sec(ent),
+- MAJOR(r.device_from), MINOR(r.device_from),
+- (unsigned long long)r.sector_from);
++ MAJOR(be32_to_cpu(__r->device_from)),
++ MINOR(be32_to_cpu(__r->device_from)),
++ be64_to_cpu(__r->sector_from));
+ }
+
+ static void blk_log_plug(struct trace_seq *s, const struct trace_entry *ent, bool has_cg)
+diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
+index 4eb1d004d5f2..7fb2f4c1bc49 100644
+--- a/kernel/trace/trace.h
++++ b/kernel/trace/trace.h
+@@ -61,6 +61,9 @@ enum trace_type {
+ #undef __field_desc
+ #define __field_desc(type, container, item)
+
++#undef __field_packed
++#define __field_packed(type, container, item)
++
+ #undef __array
+ #define __array(type, item, size) type item[size];
+
+diff --git a/kernel/trace/trace_entries.h b/kernel/trace/trace_entries.h
+index a523da0dae0a..18c4a58aff79 100644
+--- a/kernel/trace/trace_entries.h
++++ b/kernel/trace/trace_entries.h
+@@ -78,8 +78,8 @@ FTRACE_ENTRY_PACKED(funcgraph_entry, ftrace_graph_ent_entry,
+
+ F_STRUCT(
+ __field_struct( struct ftrace_graph_ent, graph_ent )
+- __field_desc( unsigned long, graph_ent, func )
+- __field_desc( int, graph_ent, depth )
++ __field_packed( unsigned long, graph_ent, func )
++ __field_packed( int, graph_ent, depth )
+ ),
+
+ F_printk("--> %ps (%d)", (void *)__entry->func, __entry->depth)
+@@ -92,11 +92,11 @@ FTRACE_ENTRY_PACKED(funcgraph_exit, ftrace_graph_ret_entry,
+
+ F_STRUCT(
+ __field_struct( struct ftrace_graph_ret, ret )
+- __field_desc( unsigned long, ret, func )
+- __field_desc( unsigned long, ret, overrun )
+- __field_desc( unsigned long long, ret, calltime)
+- __field_desc( unsigned long long, ret, rettime )
+- __field_desc( int, ret, depth )
++ __field_packed( unsigned long, ret, func )
++ __field_packed( unsigned long, ret, overrun )
++ __field_packed( unsigned long long, ret, calltime)
++ __field_packed( unsigned long long, ret, rettime )
++ __field_packed( int, ret, depth )
+ ),
+
+ F_printk("<-- %ps (%d) (start: %llx end: %llx) over: %d",
+diff --git a/kernel/trace/trace_export.c b/kernel/trace/trace_export.c
+index 77ce5a3b6773..70d3d0a09053 100644
+--- a/kernel/trace/trace_export.c
++++ b/kernel/trace/trace_export.c
+@@ -45,6 +45,9 @@ static int ftrace_event_register(struct trace_event_call *call,
+ #undef __field_desc
+ #define __field_desc(type, container, item) type item;
+
++#undef __field_packed
++#define __field_packed(type, container, item) type item;
++
+ #undef __array
+ #define __array(type, item, size) type item[size];
+
+@@ -85,6 +88,13 @@ static void __always_unused ____ftrace_check_##name(void) \
+ .size = sizeof(_type), .align = __alignof__(_type), \
+ is_signed_type(_type), .filter_type = _filter_type },
+
++
++#undef __field_ext_packed
++#define __field_ext_packed(_type, _item, _filter_type) { \
++ .type = #_type, .name = #_item, \
++ .size = sizeof(_type), .align = 1, \
++ is_signed_type(_type), .filter_type = _filter_type },
++
+ #undef __field
+ #define __field(_type, _item) __field_ext(_type, _item, FILTER_OTHER)
+
+@@ -94,6 +104,9 @@ static void __always_unused ____ftrace_check_##name(void) \
+ #undef __field_desc
+ #define __field_desc(_type, _container, _item) __field_ext(_type, _item, FILTER_OTHER)
+
++#undef __field_packed
++#define __field_packed(_type, _container, _item) __field_ext_packed(_type, _item, FILTER_OTHER)
++
+ #undef __array
+ #define __array(_type, _item, _len) { \
+ .type = #_type"["__stringify(_len)"]", .name = #_item, \
+@@ -129,6 +142,9 @@ static struct trace_event_fields ftrace_event_fields_##name[] = { \
+ #undef __field_desc
+ #define __field_desc(type, container, item)
+
++#undef __field_packed
++#define __field_packed(type, container, item)
++
+ #undef __array
+ #define __array(type, item, len)
+
+diff --git a/kernel/trace/trace_kprobe.c b/kernel/trace/trace_kprobe.c
+index 35989383ae11..8eeb95e04bf5 100644
+--- a/kernel/trace/trace_kprobe.c
++++ b/kernel/trace/trace_kprobe.c
+@@ -1629,7 +1629,7 @@ int bpf_get_kprobe_info(const struct perf_event *event, u32 *fd_type,
+ if (perf_type_tracepoint)
+ tk = find_trace_kprobe(pevent, group);
+ else
+- tk = event->tp_event->data;
++ tk = trace_kprobe_primary_from_call(event->tp_event);
+ if (!tk)
+ return -EINVAL;
+
+diff --git a/kernel/trace/trace_probe.c b/kernel/trace/trace_probe.c
+index ab8b6436d53f..f98d6d94cbbf 100644
+--- a/kernel/trace/trace_probe.c
++++ b/kernel/trace/trace_probe.c
+@@ -639,8 +639,8 @@ static int traceprobe_parse_probe_arg_body(char *arg, ssize_t *size,
+ ret = -EINVAL;
+ goto fail;
+ }
+- if ((code->op == FETCH_OP_IMM || code->op == FETCH_OP_COMM) ||
+- parg->count) {
++ if ((code->op == FETCH_OP_IMM || code->op == FETCH_OP_COMM ||
++ code->op == FETCH_OP_DATA) || parg->count) {
+ /*
+ * IMM, DATA and COMM is pointing actual address, those
+ * must be kept, and if parg->count != 0, this is an
+diff --git a/kernel/trace/trace_uprobe.c b/kernel/trace/trace_uprobe.c
+index 2a8e8e9c1c75..fdd47f99b18f 100644
+--- a/kernel/trace/trace_uprobe.c
++++ b/kernel/trace/trace_uprobe.c
+@@ -1412,7 +1412,7 @@ int bpf_get_uprobe_info(const struct perf_event *event, u32 *fd_type,
+ if (perf_type_tracepoint)
+ tu = find_probe_event(pevent, group);
+ else
+- tu = event->tp_event->data;
++ tu = trace_uprobe_primary_from_call(event->tp_event);
+ if (!tu)
+ return -EINVAL;
+
+diff --git a/lib/zlib_inflate/inffast.c b/lib/zlib_inflate/inffast.c
+index 2c13ecc5bb2c..ed1f3df27260 100644
+--- a/lib/zlib_inflate/inffast.c
++++ b/lib/zlib_inflate/inffast.c
+@@ -10,17 +10,6 @@
+
+ #ifndef ASMINF
+
+-/* Allow machine dependent optimization for post-increment or pre-increment.
+- Based on testing to date,
+- Pre-increment preferred for:
+- - PowerPC G3 (Adler)
+- - MIPS R5000 (Randers-Pehrson)
+- Post-increment preferred for:
+- - none
+- No measurable difference:
+- - Pentium III (Anderson)
+- - M68060 (Nikl)
+- */
+ union uu {
+ unsigned short us;
+ unsigned char b[2];
+@@ -38,16 +27,6 @@ get_unaligned16(const unsigned short *p)
+ return mm.us;
+ }
+
+-#ifdef POSTINC
+-# define OFF 0
+-# define PUP(a) *(a)++
+-# define UP_UNALIGNED(a) get_unaligned16((a)++)
+-#else
+-# define OFF 1
+-# define PUP(a) *++(a)
+-# define UP_UNALIGNED(a) get_unaligned16(++(a))
+-#endif
+-
+ /*
+ Decode literal, length, and distance codes and write out the resulting
+ literal and match bytes until either not enough input or output is
+@@ -115,9 +94,9 @@ void inflate_fast(z_streamp strm, unsigned start)
+
+ /* copy state to local variables */
+ state = (struct inflate_state *)strm->state;
+- in = strm->next_in - OFF;
++ in = strm->next_in;
+ last = in + (strm->avail_in - 5);
+- out = strm->next_out - OFF;
++ out = strm->next_out;
+ beg = out - (start - strm->avail_out);
+ end = out + (strm->avail_out - 257);
+ #ifdef INFLATE_STRICT
+@@ -138,9 +117,9 @@ void inflate_fast(z_streamp strm, unsigned start)
+ input data or output space */
+ do {
+ if (bits < 15) {
+- hold += (unsigned long)(PUP(in)) << bits;
++ hold += (unsigned long)(*in++) << bits;
+ bits += 8;
+- hold += (unsigned long)(PUP(in)) << bits;
++ hold += (unsigned long)(*in++) << bits;
+ bits += 8;
+ }
+ this = lcode[hold & lmask];
+@@ -150,14 +129,14 @@ void inflate_fast(z_streamp strm, unsigned start)
+ bits -= op;
+ op = (unsigned)(this.op);
+ if (op == 0) { /* literal */
+- PUP(out) = (unsigned char)(this.val);
++ *out++ = (unsigned char)(this.val);
+ }
+ else if (op & 16) { /* length base */
+ len = (unsigned)(this.val);
+ op &= 15; /* number of extra bits */
+ if (op) {
+ if (bits < op) {
+- hold += (unsigned long)(PUP(in)) << bits;
++ hold += (unsigned long)(*in++) << bits;
+ bits += 8;
+ }
+ len += (unsigned)hold & ((1U << op) - 1);
+@@ -165,9 +144,9 @@ void inflate_fast(z_streamp strm, unsigned start)
+ bits -= op;
+ }
+ if (bits < 15) {
+- hold += (unsigned long)(PUP(in)) << bits;
++ hold += (unsigned long)(*in++) << bits;
+ bits += 8;
+- hold += (unsigned long)(PUP(in)) << bits;
++ hold += (unsigned long)(*in++) << bits;
+ bits += 8;
+ }
+ this = dcode[hold & dmask];
+@@ -180,10 +159,10 @@ void inflate_fast(z_streamp strm, unsigned start)
+ dist = (unsigned)(this.val);
+ op &= 15; /* number of extra bits */
+ if (bits < op) {
+- hold += (unsigned long)(PUP(in)) << bits;
++ hold += (unsigned long)(*in++) << bits;
+ bits += 8;
+ if (bits < op) {
+- hold += (unsigned long)(PUP(in)) << bits;
++ hold += (unsigned long)(*in++) << bits;
+ bits += 8;
+ }
+ }
+@@ -205,13 +184,13 @@ void inflate_fast(z_streamp strm, unsigned start)
+ state->mode = BAD;
+ break;
+ }
+- from = window - OFF;
++ from = window;
+ if (write == 0) { /* very common case */
+ from += wsize - op;
+ if (op < len) { /* some from window */
+ len -= op;
+ do {
+- PUP(out) = PUP(from);
++ *out++ = *from++;
+ } while (--op);
+ from = out - dist; /* rest from output */
+ }
+@@ -222,14 +201,14 @@ void inflate_fast(z_streamp strm, unsigned start)
+ if (op < len) { /* some from end of window */
+ len -= op;
+ do {
+- PUP(out) = PUP(from);
++ *out++ = *from++;
+ } while (--op);
+- from = window - OFF;
++ from = window;
+ if (write < len) { /* some from start of window */
+ op = write;
+ len -= op;
+ do {
+- PUP(out) = PUP(from);
++ *out++ = *from++;
+ } while (--op);
+ from = out - dist; /* rest from output */
+ }
+@@ -240,21 +219,21 @@ void inflate_fast(z_streamp strm, unsigned start)
+ if (op < len) { /* some from window */
+ len -= op;
+ do {
+- PUP(out) = PUP(from);
++ *out++ = *from++;
+ } while (--op);
+ from = out - dist; /* rest from output */
+ }
+ }
+ while (len > 2) {
+- PUP(out) = PUP(from);
+- PUP(out) = PUP(from);
+- PUP(out) = PUP(from);
++ *out++ = *from++;
++ *out++ = *from++;
++ *out++ = *from++;
+ len -= 3;
+ }
+ if (len) {
+- PUP(out) = PUP(from);
++ *out++ = *from++;
+ if (len > 1)
+- PUP(out) = PUP(from);
++ *out++ = *from++;
+ }
+ }
+ else {
+@@ -264,29 +243,29 @@ void inflate_fast(z_streamp strm, unsigned start)
+ from = out - dist; /* copy direct from output */
+ /* minimum length is three */
+ /* Align out addr */
+- if (!((long)(out - 1 + OFF) & 1)) {
+- PUP(out) = PUP(from);
++ if (!((long)(out - 1) & 1)) {
++ *out++ = *from++;
+ len--;
+ }
+- sout = (unsigned short *)(out - OFF);
++ sout = (unsigned short *)(out);
+ if (dist > 2) {
+ unsigned short *sfrom;
+
+- sfrom = (unsigned short *)(from - OFF);
++ sfrom = (unsigned short *)(from);
+ loops = len >> 1;
+ do
+ #ifdef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
+- PUP(sout) = PUP(sfrom);
++ *sout++ = *sfrom++;
+ #else
+- PUP(sout) = UP_UNALIGNED(sfrom);
++ *sout++ = get_unaligned16(sfrom++);
+ #endif
+ while (--loops);
+- out = (unsigned char *)sout + OFF;
+- from = (unsigned char *)sfrom + OFF;
++ out = (unsigned char *)sout;
++ from = (unsigned char *)sfrom;
+ } else { /* dist == 1 or dist == 2 */
+ unsigned short pat16;
+
+- pat16 = *(sout-1+OFF);
++ pat16 = *(sout-1);
+ if (dist == 1) {
+ union uu mm;
+ /* copy one char pattern to both bytes */
+@@ -296,12 +275,12 @@ void inflate_fast(z_streamp strm, unsigned start)
+ }
+ loops = len >> 1;
+ do
+- PUP(sout) = pat16;
++ *sout++ = pat16;
+ while (--loops);
+- out = (unsigned char *)sout + OFF;
++ out = (unsigned char *)sout;
+ }
+ if (len & 1)
+- PUP(out) = PUP(from);
++ *out++ = *from++;
+ }
+ }
+ else if ((op & 64) == 0) { /* 2nd level distance code */
+@@ -336,8 +315,8 @@ void inflate_fast(z_streamp strm, unsigned start)
+ hold &= (1U << bits) - 1;
+
+ /* update state and return */
+- strm->next_in = in + OFF;
+- strm->next_out = out + OFF;
++ strm->next_in = in;
++ strm->next_out = out;
+ strm->avail_in = (unsigned)(in < last ? 5 + (last - in) : 5 - (in - last));
+ strm->avail_out = (unsigned)(out < end ?
+ 257 + (end - out) : 257 - (out - end));
+diff --git a/net/core/dev.c b/net/core/dev.c
+index 2d8aceee4284..93a279ab4e97 100644
+--- a/net/core/dev.c
++++ b/net/core/dev.c
+@@ -79,6 +79,7 @@
+ #include <linux/sched.h>
+ #include <linux/sched/mm.h>
+ #include <linux/mutex.h>
++#include <linux/rwsem.h>
+ #include <linux/string.h>
+ #include <linux/mm.h>
+ #include <linux/socket.h>
+@@ -194,7 +195,7 @@ static DEFINE_SPINLOCK(napi_hash_lock);
+ static unsigned int napi_gen_id = NR_CPUS;
+ static DEFINE_READ_MOSTLY_HASHTABLE(napi_hash, 8);
+
+-static seqcount_t devnet_rename_seq;
++static DECLARE_RWSEM(devnet_rename_sem);
+
+ static inline void dev_base_seq_inc(struct net *net)
+ {
+@@ -930,33 +931,28 @@ EXPORT_SYMBOL(dev_get_by_napi_id);
+ * @net: network namespace
+ * @name: a pointer to the buffer where the name will be stored.
+ * @ifindex: the ifindex of the interface to get the name from.
+- *
+- * The use of raw_seqcount_begin() and cond_resched() before
+- * retrying is required as we want to give the writers a chance
+- * to complete when CONFIG_PREEMPTION is not set.
+ */
+ int netdev_get_name(struct net *net, char *name, int ifindex)
+ {
+ struct net_device *dev;
+- unsigned int seq;
++ int ret;
+
+-retry:
+- seq = raw_seqcount_begin(&devnet_rename_seq);
++ down_read(&devnet_rename_sem);
+ rcu_read_lock();
++
+ dev = dev_get_by_index_rcu(net, ifindex);
+ if (!dev) {
+- rcu_read_unlock();
+- return -ENODEV;
++ ret = -ENODEV;
++ goto out;
+ }
+
+ strcpy(name, dev->name);
+- rcu_read_unlock();
+- if (read_seqcount_retry(&devnet_rename_seq, seq)) {
+- cond_resched();
+- goto retry;
+- }
+
+- return 0;
++ ret = 0;
++out:
++ rcu_read_unlock();
++ up_read(&devnet_rename_sem);
++ return ret;
+ }
+
+ /**
+@@ -1228,10 +1224,10 @@ int dev_change_name(struct net_device *dev, const char *newname)
+ likely(!(dev->priv_flags & IFF_LIVE_RENAME_OK)))
+ return -EBUSY;
+
+- write_seqcount_begin(&devnet_rename_seq);
++ down_write(&devnet_rename_sem);
+
+ if (strncmp(newname, dev->name, IFNAMSIZ) == 0) {
+- write_seqcount_end(&devnet_rename_seq);
++ up_write(&devnet_rename_sem);
+ return 0;
+ }
+
+@@ -1239,7 +1235,7 @@ int dev_change_name(struct net_device *dev, const char *newname)
+
+ err = dev_get_valid_name(net, dev, newname);
+ if (err < 0) {
+- write_seqcount_end(&devnet_rename_seq);
++ up_write(&devnet_rename_sem);
+ return err;
+ }
+
+@@ -1254,11 +1250,11 @@ rollback:
+ if (ret) {
+ memcpy(dev->name, oldname, IFNAMSIZ);
+ dev->name_assign_type = old_assign_type;
+- write_seqcount_end(&devnet_rename_seq);
++ up_write(&devnet_rename_sem);
+ return ret;
+ }
+
+- write_seqcount_end(&devnet_rename_seq);
++ up_write(&devnet_rename_sem);
+
+ netdev_adjacent_rename_links(dev, oldname);
+
+@@ -1279,7 +1275,7 @@ rollback:
+ /* err >= 0 after dev_alloc_name() or stores the first errno */
+ if (err >= 0) {
+ err = ret;
+- write_seqcount_begin(&devnet_rename_seq);
++ down_write(&devnet_rename_sem);
+ memcpy(dev->name, oldname, IFNAMSIZ);
+ memcpy(oldname, newname, IFNAMSIZ);
+ dev->name_assign_type = old_assign_type;
+diff --git a/net/core/filter.c b/net/core/filter.c
+index 11b97c31bca5..9512a9772d69 100644
+--- a/net/core/filter.c
++++ b/net/core/filter.c
+@@ -1766,25 +1766,27 @@ BPF_CALL_5(bpf_skb_load_bytes_relative, const struct sk_buff *, skb,
+ u32, offset, void *, to, u32, len, u32, start_header)
+ {
+ u8 *end = skb_tail_pointer(skb);
+- u8 *net = skb_network_header(skb);
+- u8 *mac = skb_mac_header(skb);
+- u8 *ptr;
++ u8 *start, *ptr;
+
+- if (unlikely(offset > 0xffff || len > (end - mac)))
++ if (unlikely(offset > 0xffff))
+ goto err_clear;
+
+ switch (start_header) {
+ case BPF_HDR_START_MAC:
+- ptr = mac + offset;
++ if (unlikely(!skb_mac_header_was_set(skb)))
++ goto err_clear;
++ start = skb_mac_header(skb);
+ break;
+ case BPF_HDR_START_NET:
+- ptr = net + offset;
++ start = skb_network_header(skb);
+ break;
+ default:
+ goto err_clear;
+ }
+
+- if (likely(ptr >= mac && ptr + len <= end)) {
++ ptr = start + offset;
++
++ if (likely(ptr + len <= end)) {
+ memcpy(to, ptr, len);
+ return 0;
+ }
+diff --git a/net/core/sock_map.c b/net/core/sock_map.c
+index b08dfae10f88..591457fcbd02 100644
+--- a/net/core/sock_map.c
++++ b/net/core/sock_map.c
+@@ -417,10 +417,7 @@ static int sock_map_get_next_key(struct bpf_map *map, void *key, void *next)
+ return 0;
+ }
+
+-static bool sock_map_redirect_allowed(const struct sock *sk)
+-{
+- return sk->sk_state != TCP_LISTEN;
+-}
++static bool sock_map_redirect_allowed(const struct sock *sk);
+
+ static int sock_map_update_common(struct bpf_map *map, u32 idx,
+ struct sock *sk, u64 flags)
+@@ -501,6 +498,11 @@ static bool sk_is_udp(const struct sock *sk)
+ sk->sk_protocol == IPPROTO_UDP;
+ }
+
++static bool sock_map_redirect_allowed(const struct sock *sk)
++{
++ return sk_is_tcp(sk) && sk->sk_state != TCP_LISTEN;
++}
++
+ static bool sock_map_sk_is_suitable(const struct sock *sk)
+ {
+ return sk_is_tcp(sk) || sk_is_udp(sk);
+@@ -982,11 +984,15 @@ static struct bpf_map *sock_hash_alloc(union bpf_attr *attr)
+ err = -EINVAL;
+ goto free_htab;
+ }
++ err = bpf_map_charge_init(&htab->map.memory, cost);
++ if (err)
++ goto free_htab;
+
+ htab->buckets = bpf_map_area_alloc(htab->buckets_num *
+ sizeof(struct bpf_htab_bucket),
+ htab->map.numa_node);
+ if (!htab->buckets) {
++ bpf_map_charge_finish(&htab->map.memory);
+ err = -ENOMEM;
+ goto free_htab;
+ }
+@@ -1006,6 +1012,7 @@ static void sock_hash_free(struct bpf_map *map)
+ {
+ struct bpf_htab *htab = container_of(map, struct bpf_htab, map);
+ struct bpf_htab_bucket *bucket;
++ struct hlist_head unlink_list;
+ struct bpf_htab_elem *elem;
+ struct hlist_node *node;
+ int i;
+@@ -1017,13 +1024,32 @@ static void sock_hash_free(struct bpf_map *map)
+ synchronize_rcu();
+ for (i = 0; i < htab->buckets_num; i++) {
+ bucket = sock_hash_select_bucket(htab, i);
+- hlist_for_each_entry_safe(elem, node, &bucket->head, node) {
+- hlist_del_rcu(&elem->node);
++
++ /* We are racing with sock_hash_delete_from_link to
++ * enter the spin-lock critical section. Every socket on
++ * the list is still linked to sockhash. Since link
++ * exists, psock exists and holds a ref to socket. That
++ * lets us to grab a socket ref too.
++ */
++ raw_spin_lock_bh(&bucket->lock);
++ hlist_for_each_entry(elem, &bucket->head, node)
++ sock_hold(elem->sk);
++ hlist_move_list(&bucket->head, &unlink_list);
++ raw_spin_unlock_bh(&bucket->lock);
++
++ /* Process removed entries out of atomic context to
++ * block for socket lock before deleting the psock's
++ * link to sockhash.
++ */
++ hlist_for_each_entry_safe(elem, node, &unlink_list, node) {
++ hlist_del(&elem->node);
+ lock_sock(elem->sk);
+ rcu_read_lock();
+ sock_map_unref(elem->sk, elem);
+ rcu_read_unlock();
+ release_sock(elem->sk);
++ sock_put(elem->sk);
++ sock_hash_free_elem(htab, elem);
+ }
+ }
+
+diff --git a/net/ipv4/tcp_bpf.c b/net/ipv4/tcp_bpf.c
+index 629aaa9a1eb9..7aa68f4aae6c 100644
+--- a/net/ipv4/tcp_bpf.c
++++ b/net/ipv4/tcp_bpf.c
+@@ -64,6 +64,9 @@ int __tcp_bpf_recvmsg(struct sock *sk, struct sk_psock *psock,
+ } while (i != msg_rx->sg.end);
+
+ if (unlikely(peek)) {
++ if (msg_rx == list_last_entry(&psock->ingress_msg,
++ struct sk_msg, list))
++ break;
+ msg_rx = list_next_entry(msg_rx, list);
+ continue;
+ }
+@@ -242,6 +245,9 @@ static int tcp_bpf_wait_data(struct sock *sk, struct sk_psock *psock,
+ DEFINE_WAIT_FUNC(wait, woken_wake_function);
+ int ret = 0;
+
++ if (sk->sk_shutdown & RCV_SHUTDOWN)
++ return 1;
++
+ if (!timeo)
+ return ret;
+
+diff --git a/net/netfilter/nft_set_pipapo.c b/net/netfilter/nft_set_pipapo.c
+index 8b5acc6910fd..8c04388296b0 100644
+--- a/net/netfilter/nft_set_pipapo.c
++++ b/net/netfilter/nft_set_pipapo.c
+@@ -1242,7 +1242,9 @@ static int nft_pipapo_insert(const struct net *net, const struct nft_set *set,
+ end += NFT_PIPAPO_GROUPS_PADDED_SIZE(f);
+ }
+
+- if (!*this_cpu_ptr(m->scratch) || bsize_max > m->bsize_max) {
++ if (!*get_cpu_ptr(m->scratch) || bsize_max > m->bsize_max) {
++ put_cpu_ptr(m->scratch);
++
+ err = pipapo_realloc_scratch(m, bsize_max);
+ if (err)
+ return err;
+@@ -1250,6 +1252,8 @@ static int nft_pipapo_insert(const struct net *net, const struct nft_set *set,
+ this_cpu_write(nft_pipapo_scratch_index, false);
+
+ m->bsize_max = bsize_max;
++ } else {
++ put_cpu_ptr(m->scratch);
+ }
+
+ *ext2 = &e->ext;
+diff --git a/net/netfilter/nft_set_rbtree.c b/net/netfilter/nft_set_rbtree.c
+index 62f416bc0579..b6aad3fc46c3 100644
+--- a/net/netfilter/nft_set_rbtree.c
++++ b/net/netfilter/nft_set_rbtree.c
+@@ -271,12 +271,14 @@ static int __nft_rbtree_insert(const struct net *net, const struct nft_set *set,
+
+ if (nft_rbtree_interval_start(new)) {
+ if (nft_rbtree_interval_end(rbe) &&
+- nft_set_elem_active(&rbe->ext, genmask))
++ nft_set_elem_active(&rbe->ext, genmask) &&
++ !nft_set_elem_expired(&rbe->ext))
+ overlap = false;
+ } else {
+ overlap = nft_rbtree_interval_end(rbe) &&
+ nft_set_elem_active(&rbe->ext,
+- genmask);
++ genmask) &&
++ !nft_set_elem_expired(&rbe->ext);
+ }
+ } else if (d > 0) {
+ p = &parent->rb_right;
+@@ -284,9 +286,11 @@ static int __nft_rbtree_insert(const struct net *net, const struct nft_set *set,
+ if (nft_rbtree_interval_end(new)) {
+ overlap = nft_rbtree_interval_end(rbe) &&
+ nft_set_elem_active(&rbe->ext,
+- genmask);
++ genmask) &&
++ !nft_set_elem_expired(&rbe->ext);
+ } else if (nft_rbtree_interval_end(rbe) &&
+- nft_set_elem_active(&rbe->ext, genmask)) {
++ nft_set_elem_active(&rbe->ext, genmask) &&
++ !nft_set_elem_expired(&rbe->ext)) {
+ overlap = true;
+ }
+ } else {
+@@ -294,15 +298,18 @@ static int __nft_rbtree_insert(const struct net *net, const struct nft_set *set,
+ nft_rbtree_interval_start(new)) {
+ p = &parent->rb_left;
+
+- if (nft_set_elem_active(&rbe->ext, genmask))
++ if (nft_set_elem_active(&rbe->ext, genmask) &&
++ !nft_set_elem_expired(&rbe->ext))
+ overlap = false;
+ } else if (nft_rbtree_interval_start(rbe) &&
+ nft_rbtree_interval_end(new)) {
+ p = &parent->rb_right;
+
+- if (nft_set_elem_active(&rbe->ext, genmask))
++ if (nft_set_elem_active(&rbe->ext, genmask) &&
++ !nft_set_elem_expired(&rbe->ext))
+ overlap = false;
+- } else if (nft_set_elem_active(&rbe->ext, genmask)) {
++ } else if (nft_set_elem_active(&rbe->ext, genmask) &&
++ !nft_set_elem_expired(&rbe->ext)) {
+ *ext = &rbe->ext;
+ return -EEXIST;
+ } else {
+diff --git a/net/rxrpc/proc.c b/net/rxrpc/proc.c
+index 8b179e3c802a..543afd9bd664 100644
+--- a/net/rxrpc/proc.c
++++ b/net/rxrpc/proc.c
+@@ -68,7 +68,7 @@ static int rxrpc_call_seq_show(struct seq_file *seq, void *v)
+ "Proto Local "
+ " Remote "
+ " SvID ConnID CallID End Use State Abort "
+- " UserID TxSeq TW RxSeq RW RxSerial RxTimo\n");
++ " DebugId TxSeq TW RxSeq RW RxSerial RxTimo\n");
+ return 0;
+ }
+
+@@ -100,7 +100,7 @@ static int rxrpc_call_seq_show(struct seq_file *seq, void *v)
+ rx_hard_ack = READ_ONCE(call->rx_hard_ack);
+ seq_printf(seq,
+ "UDP %-47.47s %-47.47s %4x %08x %08x %s %3u"
+- " %-8.8s %08x %lx %08x %02x %08x %02x %08x %06lx\n",
++ " %-8.8s %08x %08x %08x %02x %08x %02x %08x %06lx\n",
+ lbuff,
+ rbuff,
+ call->service_id,
+@@ -110,7 +110,7 @@ static int rxrpc_call_seq_show(struct seq_file *seq, void *v)
+ atomic_read(&call->usage),
+ rxrpc_call_states[call->state],
+ call->abort_code,
+- call->user_call_ID,
++ call->debug_id,
+ tx_hard_ack, READ_ONCE(call->tx_top) - tx_hard_ack,
+ rx_hard_ack, READ_ONCE(call->rx_top) - rx_hard_ack,
+ call->rx_serial,
+diff --git a/net/sunrpc/addr.c b/net/sunrpc/addr.c
+index 8b4d72b1a066..010dcb876f9d 100644
+--- a/net/sunrpc/addr.c
++++ b/net/sunrpc/addr.c
+@@ -82,11 +82,11 @@ static size_t rpc_ntop6(const struct sockaddr *sap,
+
+ rc = snprintf(scopebuf, sizeof(scopebuf), "%c%u",
+ IPV6_SCOPE_DELIMITER, sin6->sin6_scope_id);
+- if (unlikely((size_t)rc > sizeof(scopebuf)))
++ if (unlikely((size_t)rc >= sizeof(scopebuf)))
+ return 0;
+
+ len += rc;
+- if (unlikely(len > buflen))
++ if (unlikely(len >= buflen))
+ return 0;
+
+ strcat(buf, scopebuf);
+diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
+index c350108aa38d..a4676107fad0 100644
+--- a/net/xdp/xsk.c
++++ b/net/xdp/xsk.c
+@@ -397,10 +397,8 @@ static int xsk_generic_xmit(struct sock *sk)
+
+ len = desc.len;
+ skb = sock_alloc_send_skb(sk, len, 1, &err);
+- if (unlikely(!skb)) {
+- err = -EAGAIN;
++ if (unlikely(!skb))
+ goto out;
+- }
+
+ skb_put(skb, len);
+ addr = desc.addr;
+diff --git a/samples/ftrace/sample-trace-array.c b/samples/ftrace/sample-trace-array.c
+index d523450d73eb..6aba02a31c96 100644
+--- a/samples/ftrace/sample-trace-array.c
++++ b/samples/ftrace/sample-trace-array.c
+@@ -6,6 +6,7 @@
+ #include <linux/timer.h>
+ #include <linux/err.h>
+ #include <linux/jiffies.h>
++#include <linux/workqueue.h>
+
+ /*
+ * Any file that uses trace points, must include the header.
+@@ -20,6 +21,16 @@ struct trace_array *tr;
+ static void mytimer_handler(struct timer_list *unused);
+ static struct task_struct *simple_tsk;
+
++static void trace_work_fn(struct work_struct *work)
++{
++ /*
++ * Disable tracing for event "sample_event".
++ */
++ trace_array_set_clr_event(tr, "sample-subsystem", "sample_event",
++ false);
++}
++static DECLARE_WORK(trace_work, trace_work_fn);
++
+ /*
+ * mytimer: Timer setup to disable tracing for event "sample_event". This
+ * timer is only for the purposes of the sample module to demonstrate access of
+@@ -29,11 +40,7 @@ static DEFINE_TIMER(mytimer, mytimer_handler);
+
+ static void mytimer_handler(struct timer_list *unused)
+ {
+- /*
+- * Disable tracing for event "sample_event".
+- */
+- trace_array_set_clr_event(tr, "sample-subsystem", "sample_event",
+- false);
++ schedule_work(&trace_work);
+ }
+
+ static void simple_thread_func(int count)
+@@ -76,6 +83,7 @@ static int simple_thread(void *arg)
+ simple_thread_func(count++);
+
+ del_timer(&mytimer);
++ cancel_work_sync(&trace_work);
+
+ /*
+ * trace_array_put() decrements the reference counter associated with
+@@ -107,8 +115,12 @@ static int __init sample_trace_array_init(void)
+ trace_printk_init_buffers();
+
+ simple_tsk = kthread_run(simple_thread, NULL, "sample-instance");
+- if (IS_ERR(simple_tsk))
++ if (IS_ERR(simple_tsk)) {
++ trace_array_put(tr);
++ trace_array_destroy(tr);
+ return -1;
++ }
++
+ return 0;
+ }
+
+diff --git a/scripts/Makefile.modpost b/scripts/Makefile.modpost
+index 957eed6a17a5..33aaa572f686 100644
+--- a/scripts/Makefile.modpost
++++ b/scripts/Makefile.modpost
+@@ -66,7 +66,7 @@ __modpost:
+
+ else
+
+-MODPOST += $(subst -i,-n,$(filter -i,$(MAKEFLAGS))) -s -T - \
++MODPOST += -s -T - \
+ $(if $(KBUILD_NSDEPS),-d $(MODULES_NSDEPS))
+
+ ifeq ($(KBUILD_EXTMOD),)
+@@ -82,6 +82,11 @@ include $(if $(wildcard $(KBUILD_EXTMOD)/Kbuild), \
+ $(KBUILD_EXTMOD)/Kbuild, $(KBUILD_EXTMOD)/Makefile)
+ endif
+
++# 'make -i -k' ignores compile errors, and builds as many modules as possible.
++ifneq ($(findstring i,$(filter-out --%,$(MAKEFLAGS))),)
++MODPOST += -n
++endif
++
+ # find all modules listed in modules.order
+ modules := $(sort $(shell cat $(MODORDER)))
+
+diff --git a/scripts/headers_install.sh b/scripts/headers_install.sh
+index a07668a5c36b..94a833597a88 100755
+--- a/scripts/headers_install.sh
++++ b/scripts/headers_install.sh
+@@ -64,7 +64,7 @@ configs=$(sed -e '
+ d
+ ' $OUTFILE)
+
+-# The entries in the following list are not warned.
++# The entries in the following list do not result in an error.
+ # Please do not add a new entry. This list is only for existing ones.
+ # The list will be reduced gradually, and deleted eventually. (hopefully)
+ #
+@@ -98,18 +98,19 @@ include/uapi/linux/raw.h:CONFIG_MAX_RAW_DEVS
+
+ for c in $configs
+ do
+- warn=1
++ leak_error=1
+
+ for ignore in $config_leak_ignores
+ do
+ if echo "$INFILE:$c" | grep -q "$ignore$"; then
+- warn=
++ leak_error=
+ break
+ fi
+ done
+
+- if [ "$warn" = 1 ]; then
+- echo "warning: $INFILE: leak $c to user-space" >&2
++ if [ "$leak_error" = 1 ]; then
++ echo "error: $INFILE: leak $c to user-space" >&2
++ exit 1
+ fi
+ done
+
+diff --git a/scripts/mksysmap b/scripts/mksysmap
+index a35acc0d0b82..9aa23d15862a 100755
+--- a/scripts/mksysmap
++++ b/scripts/mksysmap
+@@ -41,4 +41,4 @@
+ # so we just ignore them to let readprofile continue to work.
+ # (At least sparc64 has __crc_ in the middle).
+
+-$NM -n $1 | grep -v '\( [aNUw] \)\|\(__crc_\)\|\( \$[adt]\)\|\( .L\)' > $2
++$NM -n $1 | grep -v '\( [aNUw] \)\|\(__crc_\)\|\( \$[adt]\)\|\( \.L\)' > $2
+diff --git a/security/apparmor/domain.c b/security/apparmor/domain.c
+index a84ef030fbd7..4cfa58c07778 100644
+--- a/security/apparmor/domain.c
++++ b/security/apparmor/domain.c
+@@ -929,7 +929,8 @@ int apparmor_bprm_set_creds(struct linux_binprm *bprm)
+ * aways results in a further reduction of permissions.
+ */
+ if ((bprm->unsafe & LSM_UNSAFE_NO_NEW_PRIVS) &&
+- !unconfined(label) && !aa_label_is_subset(new, ctx->nnp)) {
++ !unconfined(label) &&
++ !aa_label_is_unconfined_subset(new, ctx->nnp)) {
+ error = -EPERM;
+ info = "no new privs";
+ goto audit;
+@@ -1207,7 +1208,7 @@ int aa_change_hat(const char *hats[], int count, u64 token, int flags)
+ * reduce restrictions.
+ */
+ if (task_no_new_privs(current) && !unconfined(label) &&
+- !aa_label_is_subset(new, ctx->nnp)) {
++ !aa_label_is_unconfined_subset(new, ctx->nnp)) {
+ /* not an apparmor denial per se, so don't log it */
+ AA_DEBUG("no_new_privs - change_hat denied");
+ error = -EPERM;
+@@ -1228,7 +1229,7 @@ int aa_change_hat(const char *hats[], int count, u64 token, int flags)
+ * reduce restrictions.
+ */
+ if (task_no_new_privs(current) && !unconfined(label) &&
+- !aa_label_is_subset(previous, ctx->nnp)) {
++ !aa_label_is_unconfined_subset(previous, ctx->nnp)) {
+ /* not an apparmor denial per se, so don't log it */
+ AA_DEBUG("no_new_privs - change_hat denied");
+ error = -EPERM;
+@@ -1423,7 +1424,7 @@ check:
+ * reduce restrictions.
+ */
+ if (task_no_new_privs(current) && !unconfined(label) &&
+- !aa_label_is_subset(new, ctx->nnp)) {
++ !aa_label_is_unconfined_subset(new, ctx->nnp)) {
+ /* not an apparmor denial per se, so don't log it */
+ AA_DEBUG("no_new_privs - change_hat denied");
+ error = -EPERM;
+diff --git a/security/apparmor/include/label.h b/security/apparmor/include/label.h
+index 47942c4ba7ca..255764ab06e2 100644
+--- a/security/apparmor/include/label.h
++++ b/security/apparmor/include/label.h
+@@ -281,6 +281,7 @@ bool aa_label_init(struct aa_label *label, int size, gfp_t gfp);
+ struct aa_label *aa_label_alloc(int size, struct aa_proxy *proxy, gfp_t gfp);
+
+ bool aa_label_is_subset(struct aa_label *set, struct aa_label *sub);
++bool aa_label_is_unconfined_subset(struct aa_label *set, struct aa_label *sub);
+ struct aa_profile *__aa_label_next_not_in_set(struct label_it *I,
+ struct aa_label *set,
+ struct aa_label *sub);
+diff --git a/security/apparmor/label.c b/security/apparmor/label.c
+index 470693239e64..5f324d63ceaa 100644
+--- a/security/apparmor/label.c
++++ b/security/apparmor/label.c
+@@ -550,6 +550,39 @@ bool aa_label_is_subset(struct aa_label *set, struct aa_label *sub)
+ return __aa_label_next_not_in_set(&i, set, sub) == NULL;
+ }
+
++/**
++ * aa_label_is_unconfined_subset - test if @sub is a subset of @set
++ * @set: label to test against
++ * @sub: label to test if is subset of @set
++ *
++ * This checks for subset but taking into account unconfined. IF
++ * @sub contains an unconfined profile that does not have a matching
++ * unconfined in @set then this will not cause the test to fail.
++ * Conversely we don't care about an unconfined in @set that is not in
++ * @sub
++ *
++ * Returns: true if @sub is special_subset of @set
++ * else false
++ */
++bool aa_label_is_unconfined_subset(struct aa_label *set, struct aa_label *sub)
++{
++ struct label_it i = { };
++ struct aa_profile *p;
++
++ AA_BUG(!set);
++ AA_BUG(!sub);
++
++ if (sub == set)
++ return true;
++
++ do {
++ p = __aa_label_next_not_in_set(&i, set, sub);
++ if (p && !profile_unconfined(p))
++ break;
++ } while (p);
++
++ return p == NULL;
++}
+
+
+ /**
+@@ -1531,13 +1564,13 @@ static const char *label_modename(struct aa_ns *ns, struct aa_label *label,
+
+ label_for_each(i, label, profile) {
+ if (aa_ns_visible(ns, profile->ns, flags & FLAG_VIEW_SUBNS)) {
+- if (profile->mode == APPARMOR_UNCONFINED)
++ count++;
++ if (profile == profile->ns->unconfined)
+ /* special case unconfined so stacks with
+ * unconfined don't report as mixed. ie.
+ * profile_foo//&:ns1:unconfined (mixed)
+ */
+ continue;
+- count++;
+ if (mode == -1)
+ mode = profile->mode;
+ else if (mode != profile->mode)
+diff --git a/security/apparmor/lsm.c b/security/apparmor/lsm.c
+index b621ad74f54a..66a8504c8bea 100644
+--- a/security/apparmor/lsm.c
++++ b/security/apparmor/lsm.c
+@@ -804,7 +804,12 @@ static void apparmor_sk_clone_security(const struct sock *sk,
+ struct aa_sk_ctx *ctx = SK_CTX(sk);
+ struct aa_sk_ctx *new = SK_CTX(newsk);
+
++ if (new->label)
++ aa_put_label(new->label);
+ new->label = aa_get_label(ctx->label);
++
++ if (new->peer)
++ aa_put_label(new->peer);
+ new->peer = aa_get_label(ctx->peer);
+ }
+
+diff --git a/security/selinux/ss/conditional.c b/security/selinux/ss/conditional.c
+index da94a1b4bfda..0cc7cdd58465 100644
+--- a/security/selinux/ss/conditional.c
++++ b/security/selinux/ss/conditional.c
+@@ -27,6 +27,9 @@ static int cond_evaluate_expr(struct policydb *p, struct cond_expr *expr)
+ int s[COND_EXPR_MAXDEPTH];
+ int sp = -1;
+
++ if (expr->len == 0)
++ return -1;
++
+ for (i = 0; i < expr->len; i++) {
+ struct cond_expr_node *node = &expr->nodes[i];
+
+@@ -392,27 +395,19 @@ static int cond_read_node(struct policydb *p, struct cond_node *node, void *fp)
+
+ rc = next_entry(buf, fp, sizeof(u32) * 2);
+ if (rc)
+- goto err;
++ return rc;
+
+ expr->expr_type = le32_to_cpu(buf[0]);
+ expr->bool = le32_to_cpu(buf[1]);
+
+- if (!expr_node_isvalid(p, expr)) {
+- rc = -EINVAL;
+- goto err;
+- }
++ if (!expr_node_isvalid(p, expr))
++ return -EINVAL;
+ }
+
+ rc = cond_read_av_list(p, fp, &node->true_list, NULL);
+ if (rc)
+- goto err;
+- rc = cond_read_av_list(p, fp, &node->false_list, &node->true_list);
+- if (rc)
+- goto err;
+- return 0;
+-err:
+- cond_node_destroy(node);
+- return rc;
++ return rc;
++ return cond_read_av_list(p, fp, &node->false_list, &node->true_list);
+ }
+
+ int cond_read_list(struct policydb *p, void *fp)
+diff --git a/security/selinux/ss/services.c b/security/selinux/ss/services.c
+index 8ad34fd031d1..77e591fce919 100644
+--- a/security/selinux/ss/services.c
++++ b/security/selinux/ss/services.c
+@@ -2923,8 +2923,12 @@ err:
+ if (*names) {
+ for (i = 0; i < *len; i++)
+ kfree((*names)[i]);
++ kfree(*names);
+ }
+ kfree(*values);
++ *len = 0;
++ *names = NULL;
++ *values = NULL;
+ goto out;
+ }
+
+diff --git a/sound/firewire/amdtp-am824.c b/sound/firewire/amdtp-am824.c
+index 67d735e9a6a4..fea92e148790 100644
+--- a/sound/firewire/amdtp-am824.c
++++ b/sound/firewire/amdtp-am824.c
+@@ -82,7 +82,8 @@ int amdtp_am824_set_parameters(struct amdtp_stream *s, unsigned int rate,
+ if (err < 0)
+ return err;
+
+- s->ctx_data.rx.fdf = AMDTP_FDF_AM824 | s->sfc;
++ if (s->direction == AMDTP_OUT_STREAM)
++ s->ctx_data.rx.fdf = AMDTP_FDF_AM824 | s->sfc;
+
+ p->pcm_channels = pcm_channels;
+ p->midi_ports = midi_ports;
+diff --git a/sound/isa/wavefront/wavefront_synth.c b/sound/isa/wavefront/wavefront_synth.c
+index c5b1d5900eed..d6420d224d09 100644
+--- a/sound/isa/wavefront/wavefront_synth.c
++++ b/sound/isa/wavefront/wavefront_synth.c
+@@ -1171,7 +1171,10 @@ wavefront_send_alias (snd_wavefront_t *dev, wavefront_patch_info *header)
+ "alias for %d\n",
+ header->number,
+ header->hdr.a.OriginalSample);
+-
++
++ if (header->number >= WF_MAX_SAMPLE)
++ return -EINVAL;
++
+ munge_int32 (header->number, &alias_hdr[0], 2);
+ munge_int32 (header->hdr.a.OriginalSample, &alias_hdr[2], 2);
+ munge_int32 (*((unsigned int *)&header->hdr.a.sampleStartOffset),
+@@ -1202,6 +1205,9 @@ wavefront_send_multisample (snd_wavefront_t *dev, wavefront_patch_info *header)
+ int num_samples;
+ unsigned char *msample_hdr;
+
++ if (header->number >= WF_MAX_SAMPLE)
++ return -EINVAL;
++
+ msample_hdr = kmalloc(WF_MSAMPLE_BYTES, GFP_KERNEL);
+ if (! msample_hdr)
+ return -ENOMEM;
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 2c4575909441..e057ecb5a904 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -81,6 +81,7 @@ struct alc_spec {
+
+ /* mute LED for HP laptops, see alc269_fixup_mic_mute_hook() */
+ int mute_led_polarity;
++ int micmute_led_polarity;
+ hda_nid_t mute_led_nid;
+ hda_nid_t cap_mute_led_nid;
+
+@@ -4080,11 +4081,9 @@ static void alc269_fixup_hp_mute_led_mic3(struct hda_codec *codec,
+
+ /* update LED status via GPIO */
+ static void alc_update_gpio_led(struct hda_codec *codec, unsigned int mask,
+- bool enabled)
++ int polarity, bool enabled)
+ {
+- struct alc_spec *spec = codec->spec;
+-
+- if (spec->mute_led_polarity)
++ if (polarity)
+ enabled = !enabled;
+ alc_update_gpio_data(codec, mask, !enabled); /* muted -> LED on */
+ }
+@@ -4095,7 +4094,8 @@ static void alc_fixup_gpio_mute_hook(void *private_data, int enabled)
+ struct hda_codec *codec = private_data;
+ struct alc_spec *spec = codec->spec;
+
+- alc_update_gpio_led(codec, spec->gpio_mute_led_mask, enabled);
++ alc_update_gpio_led(codec, spec->gpio_mute_led_mask,
++ spec->mute_led_polarity, enabled);
+ }
+
+ /* turn on/off mic-mute LED via GPIO per capture hook */
+@@ -4104,6 +4104,7 @@ static void alc_gpio_micmute_update(struct hda_codec *codec)
+ struct alc_spec *spec = codec->spec;
+
+ alc_update_gpio_led(codec, spec->gpio_mic_led_mask,
++ spec->micmute_led_polarity,
+ spec->gen.micmute_led.led_value);
+ }
+
+@@ -5808,7 +5809,8 @@ static void alc280_hp_gpio4_automute_hook(struct hda_codec *codec,
+
+ snd_hda_gen_hp_automute(codec, jack);
+ /* mute_led_polarity is set to 0, so we pass inverted value here */
+- alc_update_gpio_led(codec, 0x10, !spec->gen.hp_jack_present);
++ alc_update_gpio_led(codec, 0x10, spec->mute_led_polarity,
++ !spec->gen.hp_jack_present);
+ }
+
+ /* Manage GPIOs for HP EliteBook Folio 9480m.
+diff --git a/sound/soc/codecs/Kconfig b/sound/soc/codecs/Kconfig
+index e60e0b6a689c..8a66f23a7b05 100644
+--- a/sound/soc/codecs/Kconfig
++++ b/sound/soc/codecs/Kconfig
+@@ -1136,10 +1136,13 @@ config SND_SOC_RT5677_SPI
+ config SND_SOC_RT5682
+ tristate
+ depends on I2C || SOUNDWIRE
++ depends on SOUNDWIRE || !SOUNDWIRE
++ depends on I2C || !I2C
+
+ config SND_SOC_RT5682_SDW
+ tristate "Realtek RT5682 Codec - SDW"
+ depends on SOUNDWIRE
++ depends on I2C || !I2C
+ select SND_SOC_RT5682
+ select REGMAP_SOUNDWIRE
+
+@@ -1620,19 +1623,19 @@ config SND_SOC_WM9090
+
+ config SND_SOC_WM9705
+ tristate
+- depends on SND_SOC_AC97_BUS
++ depends on SND_SOC_AC97_BUS || AC97_BUS_NEW
+ select REGMAP_AC97
+ select AC97_BUS_COMPAT if AC97_BUS_NEW
+
+ config SND_SOC_WM9712
+ tristate
+- depends on SND_SOC_AC97_BUS
++ depends on SND_SOC_AC97_BUS || AC97_BUS_NEW
+ select REGMAP_AC97
+ select AC97_BUS_COMPAT if AC97_BUS_NEW
+
+ config SND_SOC_WM9713
+ tristate
+- depends on SND_SOC_AC97_BUS
++ depends on SND_SOC_AC97_BUS || AC97_BUS_NEW
+ select REGMAP_AC97
+ select AC97_BUS_COMPAT if AC97_BUS_NEW
+
+diff --git a/sound/soc/codecs/max98373.c b/sound/soc/codecs/max98373.c
+index cae1def8902d..96718e3a1ad0 100644
+--- a/sound/soc/codecs/max98373.c
++++ b/sound/soc/codecs/max98373.c
+@@ -850,8 +850,8 @@ static int max98373_resume(struct device *dev)
+ {
+ struct max98373_priv *max98373 = dev_get_drvdata(dev);
+
+- max98373_reset(max98373, dev);
+ regcache_cache_only(max98373->regmap, false);
++ max98373_reset(max98373, dev);
+ regcache_sync(max98373->regmap);
+ return 0;
+ }
+diff --git a/sound/soc/codecs/rt1308-sdw.c b/sound/soc/codecs/rt1308-sdw.c
+index a5a7e46de246..a7f45191364d 100644
+--- a/sound/soc/codecs/rt1308-sdw.c
++++ b/sound/soc/codecs/rt1308-sdw.c
+@@ -482,6 +482,9 @@ static int rt1308_set_sdw_stream(struct snd_soc_dai *dai, void *sdw_stream,
+ {
+ struct sdw_stream_data *stream;
+
++ if (!sdw_stream)
++ return 0;
++
+ stream = kzalloc(sizeof(*stream), GFP_KERNEL);
+ if (!stream)
+ return -ENOMEM;
+diff --git a/sound/soc/codecs/rt5645.c b/sound/soc/codecs/rt5645.c
+index 6ba1849a77b0..e2e1d5b03b38 100644
+--- a/sound/soc/codecs/rt5645.c
++++ b/sound/soc/codecs/rt5645.c
+@@ -3625,6 +3625,12 @@ static const struct rt5645_platform_data asus_t100ha_platform_data = {
+ .inv_jd1_1 = true,
+ };
+
++static const struct rt5645_platform_data asus_t101ha_platform_data = {
++ .dmic1_data_pin = RT5645_DMIC_DATA_IN2N,
++ .dmic2_data_pin = RT5645_DMIC2_DISABLE,
++ .jd_mode = 3,
++};
++
+ static const struct rt5645_platform_data lenovo_ideapad_miix_310_pdata = {
+ .jd_mode = 3,
+ .in2_diff = true,
+@@ -3708,6 +3714,14 @@ static const struct dmi_system_id dmi_platform_data[] = {
+ },
+ .driver_data = (void *)&asus_t100ha_platform_data,
+ },
++ {
++ .ident = "ASUS T101HA",
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
++ DMI_MATCH(DMI_PRODUCT_NAME, "T101HA"),
++ },
++ .driver_data = (void *)&asus_t101ha_platform_data,
++ },
+ {
+ .ident = "MINIX Z83-4",
+ .matches = {
+diff --git a/sound/soc/codecs/rt5682.c b/sound/soc/codecs/rt5682.c
+index d36f560ad7a8..c4892af14850 100644
+--- a/sound/soc/codecs/rt5682.c
++++ b/sound/soc/codecs/rt5682.c
+@@ -2958,6 +2958,9 @@ static int rt5682_set_sdw_stream(struct snd_soc_dai *dai, void *sdw_stream,
+ {
+ struct sdw_stream_data *stream;
+
++ if (!sdw_stream)
++ return 0;
++
+ stream = kzalloc(sizeof(*stream), GFP_KERNEL);
+ if (!stream)
+ return -ENOMEM;
+diff --git a/sound/soc/codecs/rt700.c b/sound/soc/codecs/rt700.c
+index ff68f0e4f629..687ac2153666 100644
+--- a/sound/soc/codecs/rt700.c
++++ b/sound/soc/codecs/rt700.c
+@@ -860,6 +860,9 @@ static int rt700_set_sdw_stream(struct snd_soc_dai *dai, void *sdw_stream,
+ {
+ struct sdw_stream_data *stream;
+
++ if (!sdw_stream)
++ return 0;
++
+ stream = kzalloc(sizeof(*stream), GFP_KERNEL);
+ if (!stream)
+ return -ENOMEM;
+diff --git a/sound/soc/codecs/rt711.c b/sound/soc/codecs/rt711.c
+index 2daed7692a3b..65b59dbfb43c 100644
+--- a/sound/soc/codecs/rt711.c
++++ b/sound/soc/codecs/rt711.c
+@@ -906,6 +906,9 @@ static int rt711_set_sdw_stream(struct snd_soc_dai *dai, void *sdw_stream,
+ {
+ struct sdw_stream_data *stream;
+
++ if (!sdw_stream)
++ return 0;
++
+ stream = kzalloc(sizeof(*stream), GFP_KERNEL);
+ if (!stream)
+ return -ENOMEM;
+diff --git a/sound/soc/codecs/rt715.c b/sound/soc/codecs/rt715.c
+index 2cbc57b16b13..099c8bd20006 100644
+--- a/sound/soc/codecs/rt715.c
++++ b/sound/soc/codecs/rt715.c
+@@ -530,6 +530,9 @@ static int rt715_set_sdw_stream(struct snd_soc_dai *dai, void *sdw_stream,
+
+ struct sdw_stream_data *stream;
+
++ if (!sdw_stream)
++ return 0;
++
+ stream = kzalloc(sizeof(*stream), GFP_KERNEL);
+ if (!stream)
+ return -ENOMEM;
+diff --git a/sound/soc/fsl/fsl_asrc_dma.c b/sound/soc/fsl/fsl_asrc_dma.c
+index e7178817d7a7..1ee10eafe3e6 100644
+--- a/sound/soc/fsl/fsl_asrc_dma.c
++++ b/sound/soc/fsl/fsl_asrc_dma.c
+@@ -252,6 +252,7 @@ static int fsl_asrc_dma_hw_params(struct snd_soc_component *component,
+ ret = dmaengine_slave_config(pair->dma_chan[dir], &config_be);
+ if (ret) {
+ dev_err(dev, "failed to config DMA channel for Back-End\n");
++ dma_release_channel(pair->dma_chan[dir]);
+ return ret;
+ }
+
+diff --git a/sound/soc/fsl/fsl_esai.c b/sound/soc/fsl/fsl_esai.c
+index c7a49d03463a..84290be778f0 100644
+--- a/sound/soc/fsl/fsl_esai.c
++++ b/sound/soc/fsl/fsl_esai.c
+@@ -87,6 +87,10 @@ static irqreturn_t esai_isr(int irq, void *devid)
+ if ((saisr & (ESAI_SAISR_TUE | ESAI_SAISR_ROE)) &&
+ esai_priv->reset_at_xrun) {
+ dev_dbg(&pdev->dev, "reset module for xrun\n");
++ regmap_update_bits(esai_priv->regmap, REG_ESAI_TCR,
++ ESAI_xCR_xEIE_MASK, 0);
++ regmap_update_bits(esai_priv->regmap, REG_ESAI_RCR,
++ ESAI_xCR_xEIE_MASK, 0);
+ tasklet_schedule(&esai_priv->task);
+ }
+
+diff --git a/sound/soc/img/img-i2s-in.c b/sound/soc/img/img-i2s-in.c
+index a495d1050d49..e30b66b94bf6 100644
+--- a/sound/soc/img/img-i2s-in.c
++++ b/sound/soc/img/img-i2s-in.c
+@@ -482,6 +482,7 @@ static int img_i2s_in_probe(struct platform_device *pdev)
+ if (IS_ERR(rst)) {
+ if (PTR_ERR(rst) == -EPROBE_DEFER) {
+ ret = -EPROBE_DEFER;
++ pm_runtime_put(&pdev->dev);
+ goto err_suspend;
+ }
+
+diff --git a/sound/soc/intel/boards/bytcr_rt5640.c b/sound/soc/intel/boards/bytcr_rt5640.c
+index 08f4ae964b02..5c1a5e2aff6f 100644
+--- a/sound/soc/intel/boards/bytcr_rt5640.c
++++ b/sound/soc/intel/boards/bytcr_rt5640.c
+@@ -742,6 +742,30 @@ static const struct dmi_system_id byt_rt5640_quirk_table[] = {
+ BYT_RT5640_SSP0_AIF1 |
+ BYT_RT5640_MCLK_EN),
+ },
++ { /* Toshiba Encore WT8-A */
++ .matches = {
++ DMI_EXACT_MATCH(DMI_SYS_VENDOR, "TOSHIBA"),
++ DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "TOSHIBA WT8-A"),
++ },
++ .driver_data = (void *)(BYT_RT5640_DMIC1_MAP |
++ BYT_RT5640_JD_SRC_JD2_IN4N |
++ BYT_RT5640_OVCD_TH_2000UA |
++ BYT_RT5640_OVCD_SF_0P75 |
++ BYT_RT5640_JD_NOT_INV |
++ BYT_RT5640_MCLK_EN),
++ },
++ { /* Toshiba Encore WT10-A */
++ .matches = {
++ DMI_EXACT_MATCH(DMI_SYS_VENDOR, "TOSHIBA"),
++ DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "TOSHIBA WT10-A-103"),
++ },
++ .driver_data = (void *)(BYT_RT5640_DMIC1_MAP |
++ BYT_RT5640_JD_SRC_JD1_IN4P |
++ BYT_RT5640_OVCD_TH_2000UA |
++ BYT_RT5640_OVCD_SF_0P75 |
++ BYT_RT5640_SSP0_AIF2 |
++ BYT_RT5640_MCLK_EN),
++ },
+ { /* Catch-all for generic Insyde tablets, must be last */
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "Insyde"),
+diff --git a/sound/soc/meson/axg-fifo.c b/sound/soc/meson/axg-fifo.c
+index 2e9b56b29d31..b2e867113226 100644
+--- a/sound/soc/meson/axg-fifo.c
++++ b/sound/soc/meson/axg-fifo.c
+@@ -249,7 +249,7 @@ int axg_fifo_pcm_open(struct snd_soc_component *component,
+ /* Enable pclk to access registers and clock the fifo ip */
+ ret = clk_prepare_enable(fifo->pclk);
+ if (ret)
+- return ret;
++ goto free_irq;
+
+ /* Setup status2 so it reports the memory pointer */
+ regmap_update_bits(fifo->map, FIFO_CTRL1,
+@@ -269,8 +269,14 @@ int axg_fifo_pcm_open(struct snd_soc_component *component,
+ /* Take memory arbitror out of reset */
+ ret = reset_control_deassert(fifo->arb);
+ if (ret)
+- clk_disable_unprepare(fifo->pclk);
++ goto free_clk;
++
++ return 0;
+
++free_clk:
++ clk_disable_unprepare(fifo->pclk);
++free_irq:
++ free_irq(fifo->irq, ss);
+ return ret;
+ }
+ EXPORT_SYMBOL_GPL(axg_fifo_pcm_open);
+diff --git a/sound/soc/meson/meson-card-utils.c b/sound/soc/meson/meson-card-utils.c
+index 2ca8c98e204f..5a4a91c88734 100644
+--- a/sound/soc/meson/meson-card-utils.c
++++ b/sound/soc/meson/meson-card-utils.c
+@@ -49,19 +49,26 @@ int meson_card_reallocate_links(struct snd_soc_card *card,
+ links = krealloc(priv->card.dai_link,
+ num_links * sizeof(*priv->card.dai_link),
+ GFP_KERNEL | __GFP_ZERO);
++ if (!links)
++ goto err_links;
++
+ ldata = krealloc(priv->link_data,
+ num_links * sizeof(*priv->link_data),
+ GFP_KERNEL | __GFP_ZERO);
+-
+- if (!links || !ldata) {
+- dev_err(priv->card.dev, "failed to allocate links\n");
+- return -ENOMEM;
+- }
++ if (!ldata)
++ goto err_ldata;
+
+ priv->card.dai_link = links;
+ priv->link_data = ldata;
+ priv->card.num_links = num_links;
+ return 0;
++
++err_ldata:
++ kfree(links);
++err_links:
++ dev_err(priv->card.dev, "failed to allocate links\n");
++ return -ENOMEM;
++
+ }
+ EXPORT_SYMBOL_GPL(meson_card_reallocate_links);
+
+diff --git a/sound/soc/qcom/qdsp6/q6asm-dai.c b/sound/soc/qcom/qdsp6/q6asm-dai.c
+index 125af00bba53..4640804aab7f 100644
+--- a/sound/soc/qcom/qdsp6/q6asm-dai.c
++++ b/sound/soc/qcom/qdsp6/q6asm-dai.c
+@@ -176,7 +176,7 @@ static const struct snd_compr_codec_caps q6asm_compr_caps = {
+ };
+
+ static void event_handler(uint32_t opcode, uint32_t token,
+- uint32_t *payload, void *priv)
++ void *payload, void *priv)
+ {
+ struct q6asm_dai_rtd *prtd = priv;
+ struct snd_pcm_substream *substream = prtd->substream;
+@@ -490,7 +490,7 @@ static int q6asm_dai_hw_params(struct snd_soc_component *component,
+ }
+
+ static void compress_event_handler(uint32_t opcode, uint32_t token,
+- uint32_t *payload, void *priv)
++ void *payload, void *priv)
+ {
+ struct q6asm_dai_rtd *prtd = priv;
+ struct snd_compr_stream *substream = prtd->cstream;
+diff --git a/sound/soc/sh/rcar/gen.c b/sound/soc/sh/rcar/gen.c
+index af19010b9d88..8bd49c8a9517 100644
+--- a/sound/soc/sh/rcar/gen.c
++++ b/sound/soc/sh/rcar/gen.c
+@@ -224,6 +224,14 @@ static int rsnd_gen2_probe(struct rsnd_priv *priv)
+ RSND_GEN_S_REG(SSI_SYS_STATUS5, 0x884),
+ RSND_GEN_S_REG(SSI_SYS_STATUS6, 0x888),
+ RSND_GEN_S_REG(SSI_SYS_STATUS7, 0x88c),
++ RSND_GEN_S_REG(SSI_SYS_INT_ENABLE0, 0x850),
++ RSND_GEN_S_REG(SSI_SYS_INT_ENABLE1, 0x854),
++ RSND_GEN_S_REG(SSI_SYS_INT_ENABLE2, 0x858),
++ RSND_GEN_S_REG(SSI_SYS_INT_ENABLE3, 0x85c),
++ RSND_GEN_S_REG(SSI_SYS_INT_ENABLE4, 0x890),
++ RSND_GEN_S_REG(SSI_SYS_INT_ENABLE5, 0x894),
++ RSND_GEN_S_REG(SSI_SYS_INT_ENABLE6, 0x898),
++ RSND_GEN_S_REG(SSI_SYS_INT_ENABLE7, 0x89c),
+ RSND_GEN_S_REG(HDMI0_SEL, 0x9e0),
+ RSND_GEN_S_REG(HDMI1_SEL, 0x9e4),
+
+diff --git a/sound/soc/sh/rcar/rsnd.h b/sound/soc/sh/rcar/rsnd.h
+index ea6cbaa9743e..d47608ff5fac 100644
+--- a/sound/soc/sh/rcar/rsnd.h
++++ b/sound/soc/sh/rcar/rsnd.h
+@@ -189,6 +189,14 @@ enum rsnd_reg {
+ SSI_SYS_STATUS5,
+ SSI_SYS_STATUS6,
+ SSI_SYS_STATUS7,
++ SSI_SYS_INT_ENABLE0,
++ SSI_SYS_INT_ENABLE1,
++ SSI_SYS_INT_ENABLE2,
++ SSI_SYS_INT_ENABLE3,
++ SSI_SYS_INT_ENABLE4,
++ SSI_SYS_INT_ENABLE5,
++ SSI_SYS_INT_ENABLE6,
++ SSI_SYS_INT_ENABLE7,
+ HDMI0_SEL,
+ HDMI1_SEL,
+ SSI9_BUSIF0_MODE,
+@@ -237,6 +245,7 @@ enum rsnd_reg {
+ #define SSI9_BUSIF_ADINR(i) (SSI9_BUSIF0_ADINR + (i))
+ #define SSI9_BUSIF_DALIGN(i) (SSI9_BUSIF0_DALIGN + (i))
+ #define SSI_SYS_STATUS(i) (SSI_SYS_STATUS0 + (i))
++#define SSI_SYS_INT_ENABLE(i) (SSI_SYS_INT_ENABLE0 + (i))
+
+
+ struct rsnd_priv;
+diff --git a/sound/soc/sh/rcar/ssi.c b/sound/soc/sh/rcar/ssi.c
+index 4a7d3413917f..47d5ddb526f2 100644
+--- a/sound/soc/sh/rcar/ssi.c
++++ b/sound/soc/sh/rcar/ssi.c
+@@ -372,6 +372,9 @@ static void rsnd_ssi_config_init(struct rsnd_mod *mod,
+ u32 wsr = ssi->wsr;
+ int width;
+ int is_tdm, is_tdm_split;
++ int id = rsnd_mod_id(mod);
++ int i;
++ u32 sys_int_enable = 0;
+
+ is_tdm = rsnd_runtime_is_tdm(io);
+ is_tdm_split = rsnd_runtime_is_tdm_split(io);
+@@ -447,6 +450,38 @@ static void rsnd_ssi_config_init(struct rsnd_mod *mod,
+ cr_mode = DIEN; /* PIO : enable Data interrupt */
+ }
+
++ /* enable busif buffer over/under run interrupt. */
++ if (is_tdm || is_tdm_split) {
++ switch (id) {
++ case 0:
++ case 1:
++ case 2:
++ case 3:
++ case 4:
++ for (i = 0; i < 4; i++) {
++ sys_int_enable = rsnd_mod_read(mod,
++ SSI_SYS_INT_ENABLE(i * 2));
++ sys_int_enable |= 0xf << (id * 4);
++ rsnd_mod_write(mod,
++ SSI_SYS_INT_ENABLE(i * 2),
++ sys_int_enable);
++ }
++
++ break;
++ case 9:
++ for (i = 0; i < 4; i++) {
++ sys_int_enable = rsnd_mod_read(mod,
++ SSI_SYS_INT_ENABLE((i * 2) + 1));
++ sys_int_enable |= 0xf << 4;
++ rsnd_mod_write(mod,
++ SSI_SYS_INT_ENABLE((i * 2) + 1),
++ sys_int_enable);
++ }
++
++ break;
++ }
++ }
++
+ init_end:
+ ssi->cr_own = cr_own;
+ ssi->cr_mode = cr_mode;
+@@ -496,6 +531,13 @@ static int rsnd_ssi_quit(struct rsnd_mod *mod,
+ {
+ struct rsnd_ssi *ssi = rsnd_mod_to_ssi(mod);
+ struct device *dev = rsnd_priv_to_dev(priv);
++ int is_tdm, is_tdm_split;
++ int id = rsnd_mod_id(mod);
++ int i;
++ u32 sys_int_enable = 0;
++
++ is_tdm = rsnd_runtime_is_tdm(io);
++ is_tdm_split = rsnd_runtime_is_tdm_split(io);
+
+ if (!rsnd_ssi_is_run_mods(mod, io))
+ return 0;
+@@ -517,6 +559,38 @@ static int rsnd_ssi_quit(struct rsnd_mod *mod,
+ ssi->wsr = 0;
+ }
+
++ /* disable busif buffer over/under run interrupt. */
++ if (is_tdm || is_tdm_split) {
++ switch (id) {
++ case 0:
++ case 1:
++ case 2:
++ case 3:
++ case 4:
++ for (i = 0; i < 4; i++) {
++ sys_int_enable = rsnd_mod_read(mod,
++ SSI_SYS_INT_ENABLE(i * 2));
++ sys_int_enable &= ~(0xf << (id * 4));
++ rsnd_mod_write(mod,
++ SSI_SYS_INT_ENABLE(i * 2),
++ sys_int_enable);
++ }
++
++ break;
++ case 9:
++ for (i = 0; i < 4; i++) {
++ sys_int_enable = rsnd_mod_read(mod,
++ SSI_SYS_INT_ENABLE((i * 2) + 1));
++ sys_int_enable &= ~(0xf << 4);
++ rsnd_mod_write(mod,
++ SSI_SYS_INT_ENABLE((i * 2) + 1),
++ sys_int_enable);
++ }
++
++ break;
++ }
++ }
++
+ return 0;
+ }
+
+@@ -622,6 +696,11 @@ static int rsnd_ssi_irq(struct rsnd_mod *mod,
+ int enable)
+ {
+ u32 val = 0;
++ int is_tdm, is_tdm_split;
++ int id = rsnd_mod_id(mod);
++
++ is_tdm = rsnd_runtime_is_tdm(io);
++ is_tdm_split = rsnd_runtime_is_tdm_split(io);
+
+ if (rsnd_is_gen1(priv))
+ return 0;
+@@ -635,6 +714,19 @@ static int rsnd_ssi_irq(struct rsnd_mod *mod,
+ if (enable)
+ val = rsnd_ssi_is_dma_mode(mod) ? 0x0e000000 : 0x0f000000;
+
++ if (is_tdm || is_tdm_split) {
++ switch (id) {
++ case 0:
++ case 1:
++ case 2:
++ case 3:
++ case 4:
++ case 9:
++ val |= 0x0000ff00;
++ break;
++ }
++ }
++
+ rsnd_mod_write(mod, SSI_INT_ENABLE, val);
+
+ return 0;
+@@ -651,6 +743,12 @@ static void __rsnd_ssi_interrupt(struct rsnd_mod *mod,
+ u32 status;
+ bool elapsed = false;
+ bool stop = false;
++ int id = rsnd_mod_id(mod);
++ int i;
++ int is_tdm, is_tdm_split;
++
++ is_tdm = rsnd_runtime_is_tdm(io);
++ is_tdm_split = rsnd_runtime_is_tdm_split(io);
+
+ spin_lock(&priv->lock);
+
+@@ -672,6 +770,53 @@ static void __rsnd_ssi_interrupt(struct rsnd_mod *mod,
+ stop = true;
+ }
+
++ status = 0;
++
++ if (is_tdm || is_tdm_split) {
++ switch (id) {
++ case 0:
++ case 1:
++ case 2:
++ case 3:
++ case 4:
++ for (i = 0; i < 4; i++) {
++ status = rsnd_mod_read(mod,
++ SSI_SYS_STATUS(i * 2));
++ status &= 0xf << (id * 4);
++
++ if (status) {
++ rsnd_dbg_irq_status(dev,
++ "%s err status : 0x%08x\n",
++ rsnd_mod_name(mod), status);
++ rsnd_mod_write(mod,
++ SSI_SYS_STATUS(i * 2),
++ 0xf << (id * 4));
++ stop = true;
++ break;
++ }
++ }
++ break;
++ case 9:
++ for (i = 0; i < 4; i++) {
++ status = rsnd_mod_read(mod,
++ SSI_SYS_STATUS((i * 2) + 1));
++ status &= 0xf << 4;
++
++ if (status) {
++ rsnd_dbg_irq_status(dev,
++ "%s err status : 0x%08x\n",
++ rsnd_mod_name(mod), status);
++ rsnd_mod_write(mod,
++ SSI_SYS_STATUS((i * 2) + 1),
++ 0xf << 4);
++ stop = true;
++ break;
++ }
++ }
++ break;
++ }
++ }
++
+ rsnd_ssi_status_clear(mod);
+ rsnd_ssi_interrupt_out:
+ spin_unlock(&priv->lock);
+diff --git a/sound/soc/soc-core.c b/sound/soc/soc-core.c
+index 843b8b1c89d4..e5433e8fcf19 100644
+--- a/sound/soc/soc-core.c
++++ b/sound/soc/soc-core.c
+@@ -1720,9 +1720,25 @@ match:
+ dai_link->platforms->name = component->name;
+
+ /* convert non BE into BE */
+- dai_link->no_pcm = 1;
+- dai_link->dpcm_playback = 1;
+- dai_link->dpcm_capture = 1;
++ if (!dai_link->no_pcm) {
++ dai_link->no_pcm = 1;
++
++ if (dai_link->dpcm_playback)
++ dev_warn(card->dev,
++ "invalid configuration, dailink %s has flags no_pcm=0 and dpcm_playback=1\n",
++ dai_link->name);
++ if (dai_link->dpcm_capture)
++ dev_warn(card->dev,
++ "invalid configuration, dailink %s has flags no_pcm=0 and dpcm_capture=1\n",
++ dai_link->name);
++
++ /* convert normal link into DPCM one */
++ if (!(dai_link->dpcm_playback ||
++ dai_link->dpcm_capture)) {
++ dai_link->dpcm_playback = !dai_link->capture_only;
++ dai_link->dpcm_capture = !dai_link->playback_only;
++ }
++ }
+
+ /* override any BE fixups */
+ dai_link->be_hw_params_fixup =
+diff --git a/sound/soc/soc-dapm.c b/sound/soc/soc-dapm.c
+index e2632841b321..c0aa64ff8e32 100644
+--- a/sound/soc/soc-dapm.c
++++ b/sound/soc/soc-dapm.c
+@@ -4340,16 +4340,16 @@ static void dapm_connect_dai_pair(struct snd_soc_card *card,
+ codec = codec_dai->playback_widget;
+
+ if (playback_cpu && codec) {
+- if (dai_link->params && !dai_link->playback_widget) {
++ if (dai_link->params && !rtd->playback_widget) {
+ substream = streams[SNDRV_PCM_STREAM_PLAYBACK].substream;
+ dai = snd_soc_dapm_new_dai(card, substream, "playback");
+ if (IS_ERR(dai))
+ goto capture;
+- dai_link->playback_widget = dai;
++ rtd->playback_widget = dai;
+ }
+
+ dapm_connect_dai_routes(&card->dapm, cpu_dai, playback_cpu,
+- dai_link->playback_widget,
++ rtd->playback_widget,
+ codec_dai, codec);
+ }
+
+@@ -4358,16 +4358,16 @@ capture:
+ codec = codec_dai->capture_widget;
+
+ if (codec && capture_cpu) {
+- if (dai_link->params && !dai_link->capture_widget) {
++ if (dai_link->params && !rtd->capture_widget) {
+ substream = streams[SNDRV_PCM_STREAM_CAPTURE].substream;
+ dai = snd_soc_dapm_new_dai(card, substream, "capture");
+ if (IS_ERR(dai))
+ return;
+- dai_link->capture_widget = dai;
++ rtd->capture_widget = dai;
+ }
+
+ dapm_connect_dai_routes(&card->dapm, codec_dai, codec,
+- dai_link->capture_widget,
++ rtd->capture_widget,
+ cpu_dai, capture_cpu);
+ }
+ }
+diff --git a/sound/soc/soc-pcm.c b/sound/soc/soc-pcm.c
+index 1f302de44052..39ce61c5b874 100644
+--- a/sound/soc/soc-pcm.c
++++ b/sound/soc/soc-pcm.c
+@@ -2908,20 +2908,44 @@ int soc_new_pcm(struct snd_soc_pcm_runtime *rtd, int num)
+ struct snd_pcm *pcm;
+ char new_name[64];
+ int ret = 0, playback = 0, capture = 0;
++ int stream;
+ int i;
+
++ if (rtd->dai_link->dynamic && rtd->num_cpus > 1) {
++ dev_err(rtd->dev,
++ "DPCM doesn't support Multi CPU for Front-Ends yet\n");
++ return -EINVAL;
++ }
++
+ if (rtd->dai_link->dynamic || rtd->dai_link->no_pcm) {
+- cpu_dai = asoc_rtd_to_cpu(rtd, 0);
+- if (rtd->num_cpus > 1) {
+- dev_err(rtd->dev,
+- "DPCM doesn't support Multi CPU yet\n");
+- return -EINVAL;
++ if (rtd->dai_link->dpcm_playback) {
++ stream = SNDRV_PCM_STREAM_PLAYBACK;
++
++ for_each_rtd_cpu_dais(rtd, i, cpu_dai)
++ if (!snd_soc_dai_stream_valid(cpu_dai,
++ stream)) {
++ dev_err(rtd->card->dev,
++ "CPU DAI %s for rtd %s does not support playback\n",
++ cpu_dai->name,
++ rtd->dai_link->stream_name);
++ return -EINVAL;
++ }
++ playback = 1;
++ }
++ if (rtd->dai_link->dpcm_capture) {
++ stream = SNDRV_PCM_STREAM_CAPTURE;
++
++ for_each_rtd_cpu_dais(rtd, i, cpu_dai)
++ if (!snd_soc_dai_stream_valid(cpu_dai,
++ stream)) {
++ dev_err(rtd->card->dev,
++ "CPU DAI %s for rtd %s does not support capture\n",
++ cpu_dai->name,
++ rtd->dai_link->stream_name);
++ return -EINVAL;
++ }
++ capture = 1;
+ }
+-
+- playback = rtd->dai_link->dpcm_playback &&
+- snd_soc_dai_stream_valid(cpu_dai, SNDRV_PCM_STREAM_PLAYBACK);
+- capture = rtd->dai_link->dpcm_capture &&
+- snd_soc_dai_stream_valid(cpu_dai, SNDRV_PCM_STREAM_CAPTURE);
+ } else {
+ /* Adapt stream for codec2codec links */
+ int cpu_capture = rtd->dai_link->params ?
+diff --git a/sound/soc/sof/control.c b/sound/soc/sof/control.c
+index dfc412e2d956..6d63768d42aa 100644
+--- a/sound/soc/sof/control.c
++++ b/sound/soc/sof/control.c
+@@ -19,8 +19,8 @@ static void update_mute_led(struct snd_sof_control *scontrol,
+ struct snd_kcontrol *kcontrol,
+ struct snd_ctl_elem_value *ucontrol)
+ {
+- unsigned int temp = 0;
+- unsigned int mask;
++ int temp = 0;
++ int mask;
+ int i;
+
+ mask = 1U << snd_ctl_get_ioffidx(kcontrol, &ucontrol->id);
+diff --git a/sound/soc/sof/core.c b/sound/soc/sof/core.c
+index 91acfae7935c..74b438216216 100644
+--- a/sound/soc/sof/core.c
++++ b/sound/soc/sof/core.c
+@@ -176,6 +176,7 @@ static int sof_probe_continue(struct snd_sof_dev *sdev)
+ /* init the IPC */
+ sdev->ipc = snd_sof_ipc_init(sdev);
+ if (!sdev->ipc) {
++ ret = -ENOMEM;
+ dev_err(sdev->dev, "error: failed to init DSP IPC %d\n", ret);
+ goto ipc_err;
+ }
+diff --git a/sound/soc/sof/imx/Kconfig b/sound/soc/sof/imx/Kconfig
+index bae4f7bf5f75..812749064ca8 100644
+--- a/sound/soc/sof/imx/Kconfig
++++ b/sound/soc/sof/imx/Kconfig
+@@ -14,7 +14,7 @@ if SND_SOC_SOF_IMX_TOPLEVEL
+ config SND_SOC_SOF_IMX8_SUPPORT
+ bool "SOF support for i.MX8"
+ depends on IMX_SCU
+- depends on IMX_DSP
++ select IMX_DSP
+ help
+ This adds support for Sound Open Firmware for NXP i.MX8 platforms
+ Say Y if you have such a device.
+diff --git a/sound/soc/sof/intel/hda-codec.c b/sound/soc/sof/intel/hda-codec.c
+index 3041fbbb010a..ea021db697b8 100644
+--- a/sound/soc/sof/intel/hda-codec.c
++++ b/sound/soc/sof/intel/hda-codec.c
+@@ -24,19 +24,44 @@
+ #define IDISP_VID_INTEL 0x80860000
+
+ /* load the legacy HDA codec driver */
+-static int hda_codec_load_module(struct hda_codec *codec)
++static int request_codec_module(struct hda_codec *codec)
+ {
+ #ifdef MODULE
+ char alias[MODULE_NAME_LEN];
+- const char *module = alias;
++ const char *mod = NULL;
+
+- snd_hdac_codec_modalias(&codec->core, alias, sizeof(alias));
+- dev_dbg(&codec->core.dev, "loading codec module: %s\n", module);
+- request_module(module);
++ switch (codec->probe_id) {
++ case HDA_CODEC_ID_GENERIC:
++#if IS_MODULE(CONFIG_SND_HDA_GENERIC)
++ mod = "snd-hda-codec-generic";
+ #endif
++ break;
++ default:
++ snd_hdac_codec_modalias(&codec->core, alias, sizeof(alias));
++ mod = alias;
++ break;
++ }
++
++ if (mod) {
++ dev_dbg(&codec->core.dev, "loading codec module: %s\n", mod);
++ request_module(mod);
++ }
++#endif /* MODULE */
+ return device_attach(hda_codec_dev(codec));
+ }
+
++static int hda_codec_load_module(struct hda_codec *codec)
++{
++ int ret = request_codec_module(codec);
++
++ if (ret <= 0) {
++ codec->probe_id = HDA_CODEC_ID_GENERIC;
++ ret = request_codec_module(codec);
++ }
++
++ return ret;
++}
++
+ /* enable controller wake up event for all codecs with jack connectors */
+ void hda_codec_jack_wake_enable(struct snd_sof_dev *sdev)
+ {
+@@ -78,6 +103,13 @@ void hda_codec_jack_check(struct snd_sof_dev *sdev) {}
+ EXPORT_SYMBOL_NS(hda_codec_jack_wake_enable, SND_SOC_SOF_HDA_AUDIO_CODEC);
+ EXPORT_SYMBOL_NS(hda_codec_jack_check, SND_SOC_SOF_HDA_AUDIO_CODEC);
+
++#if IS_ENABLED(CONFIG_SND_HDA_GENERIC)
++#define is_generic_config(bus) \
++ ((bus)->modelname && !strcmp((bus)->modelname, "generic"))
++#else
++#define is_generic_config(x) 0
++#endif
++
+ /* probe individual codec */
+ static int hda_codec_probe(struct snd_sof_dev *sdev, int address,
+ bool hda_codec_use_common_hdmi)
+@@ -87,6 +119,7 @@ static int hda_codec_probe(struct snd_sof_dev *sdev, int address,
+ #endif
+ struct hda_bus *hbus = sof_to_hbus(sdev);
+ struct hdac_device *hdev;
++ struct hda_codec *codec;
+ u32 hda_cmd = (address << 28) | (AC_NODE_ROOT << 20) |
+ (AC_VERB_PARAMETERS << 8) | AC_PAR_VENDOR_ID;
+ u32 resp = -1;
+@@ -108,6 +141,7 @@ static int hda_codec_probe(struct snd_sof_dev *sdev, int address,
+
+ hda_priv->codec.bus = hbus;
+ hdev = &hda_priv->codec.core;
++ codec = &hda_priv->codec;
+
+ ret = snd_hdac_ext_bus_device_init(&hbus->core, address, hdev);
+ if (ret < 0)
+@@ -122,6 +156,11 @@ static int hda_codec_probe(struct snd_sof_dev *sdev, int address,
+ hda_priv->need_display_power = true;
+ }
+
++ if (is_generic_config(hbus))
++ codec->probe_id = HDA_CODEC_ID_GENERIC;
++ else
++ codec->probe_id = 0;
++
+ /*
+ * if common HDMI codec driver is not used, codec load
+ * is skipped here and hdac_hdmi is used instead
+@@ -129,7 +168,7 @@ static int hda_codec_probe(struct snd_sof_dev *sdev, int address,
+ if (hda_codec_use_common_hdmi ||
+ (resp & 0xFFFF0000) != IDISP_VID_INTEL) {
+ hdev->type = HDA_DEV_LEGACY;
+- ret = hda_codec_load_module(&hda_priv->codec);
++ ret = hda_codec_load_module(codec);
+ /*
+ * handle ret==0 (no driver bound) as an error, but pass
+ * other return codes without modification
+diff --git a/sound/soc/sof/nocodec.c b/sound/soc/sof/nocodec.c
+index 2233146386cc..71cf5f9db79d 100644
+--- a/sound/soc/sof/nocodec.c
++++ b/sound/soc/sof/nocodec.c
+@@ -52,8 +52,10 @@ static int sof_nocodec_bes_setup(struct device *dev,
+ links[i].platforms->name = dev_name(dev);
+ links[i].codecs->dai_name = "snd-soc-dummy-dai";
+ links[i].codecs->name = "snd-soc-dummy";
+- links[i].dpcm_playback = 1;
+- links[i].dpcm_capture = 1;
++ if (ops->drv[i].playback.channels_min)
++ links[i].dpcm_playback = 1;
++ if (ops->drv[i].capture.channels_min)
++ links[i].dpcm_capture = 1;
+ }
+
+ card->dai_link = links;
+diff --git a/sound/soc/sof/pm.c b/sound/soc/sof/pm.c
+index c410822d9920..01d83ddc16ba 100644
+--- a/sound/soc/sof/pm.c
++++ b/sound/soc/sof/pm.c
+@@ -90,7 +90,10 @@ static int sof_resume(struct device *dev, bool runtime_resume)
+ int ret;
+
+ /* do nothing if dsp resume callbacks are not set */
+- if (!sof_ops(sdev)->resume || !sof_ops(sdev)->runtime_resume)
++ if (!runtime_resume && !sof_ops(sdev)->resume)
++ return 0;
++
++ if (runtime_resume && !sof_ops(sdev)->runtime_resume)
+ return 0;
+
+ /* DSP was never successfully started, nothing to resume */
+@@ -175,7 +178,10 @@ static int sof_suspend(struct device *dev, bool runtime_suspend)
+ int ret;
+
+ /* do nothing if dsp suspend callback is not set */
+- if (!sof_ops(sdev)->suspend)
++ if (!runtime_suspend && !sof_ops(sdev)->suspend)
++ return 0;
++
++ if (runtime_suspend && !sof_ops(sdev)->runtime_suspend)
+ return 0;
+
+ if (sdev->fw_state != SOF_FW_BOOT_COMPLETE)
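The sof_suspend/sof_resume change above splits the callback check so the system and runtime paths each gate only on their own callback. A minimal sketch of the revised dispatch (Python, with a hypothetical ops dict standing in for sof_ops(sdev)):

```python
def sof_suspend(ops, runtime_suspend):
    """Mirror of the patched check: each path tests only its own callback."""
    # do nothing if the relevant dsp suspend callback is not set
    if not runtime_suspend and ops.get("suspend") is None:
        return 0
    if runtime_suspend and ops.get("runtime_suspend") is None:
        return 0
    cb = ops["runtime_suspend"] if runtime_suspend else ops["suspend"]
    return cb()

# Before the patch, a driver providing only a runtime callback skipped the
# runtime path too; now system suspend is a clean no-op while runtime
# suspend still invokes its callback.
runtime_only = {"runtime_suspend": lambda: 0}
sof_suspend(runtime_only, True)    # invokes the runtime callback
sof_suspend(runtime_only, False)   # returns 0 without touching the DSP
```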
+diff --git a/sound/soc/sof/sof-audio.h b/sound/soc/sof/sof-audio.h
+index bf65f31af858..875a5fc13297 100644
+--- a/sound/soc/sof/sof-audio.h
++++ b/sound/soc/sof/sof-audio.h
+@@ -56,7 +56,7 @@ struct snd_sof_pcm {
+ struct snd_sof_led_control {
+ unsigned int use_led;
+ unsigned int direction;
+- unsigned int led_value;
++ int led_value;
+ };
+
+ /* ALSA SOF Kcontrol device */
+diff --git a/sound/soc/sof/topology.c b/sound/soc/sof/topology.c
+index fe8ba3e05e08..ab2b69de1d4d 100644
+--- a/sound/soc/sof/topology.c
++++ b/sound/soc/sof/topology.c
+@@ -1203,6 +1203,8 @@ static int sof_control_load(struct snd_soc_component *scomp, int index,
+ return ret;
+ }
+
++ scontrol->led_ctl.led_value = -1;
++
+ dobj->private = scontrol;
+ list_add(&scontrol->list, &sdev->kcontrol_list);
+ return ret;
+diff --git a/sound/soc/tegra/tegra_wm8903.c b/sound/soc/tegra/tegra_wm8903.c
+index 9b5651502f12..3aca354f9e08 100644
+--- a/sound/soc/tegra/tegra_wm8903.c
++++ b/sound/soc/tegra/tegra_wm8903.c
+@@ -177,6 +177,7 @@ static int tegra_wm8903_init(struct snd_soc_pcm_runtime *rtd)
+ struct snd_soc_component *component = codec_dai->component;
+ struct snd_soc_card *card = rtd->card;
+ struct tegra_wm8903 *machine = snd_soc_card_get_drvdata(card);
++ int shrt = 0;
+
+ if (gpio_is_valid(machine->gpio_hp_det)) {
+ tegra_wm8903_hp_jack_gpio.gpio = machine->gpio_hp_det;
+@@ -189,12 +190,15 @@ static int tegra_wm8903_init(struct snd_soc_pcm_runtime *rtd)
+ &tegra_wm8903_hp_jack_gpio);
+ }
+
++ if (of_property_read_bool(card->dev->of_node, "nvidia,headset"))
++ shrt = SND_JACK_MICROPHONE;
++
+ snd_soc_card_jack_new(rtd->card, "Mic Jack", SND_JACK_MICROPHONE,
+ &tegra_wm8903_mic_jack,
+ tegra_wm8903_mic_jack_pins,
+ ARRAY_SIZE(tegra_wm8903_mic_jack_pins));
+ wm8903_mic_detect(component, &tegra_wm8903_mic_jack, SND_JACK_MICROPHONE,
+- 0);
++ shrt);
+
+ snd_soc_dapm_force_enable_pin(&card->dapm, "MICBIAS");
+
+diff --git a/sound/soc/ti/davinci-mcasp.c b/sound/soc/ti/davinci-mcasp.c
+index 734ffe925c4d..7a7db743dc5b 100644
+--- a/sound/soc/ti/davinci-mcasp.c
++++ b/sound/soc/ti/davinci-mcasp.c
+@@ -1896,8 +1896,10 @@ static int davinci_mcasp_get_dma_type(struct davinci_mcasp *mcasp)
+ PTR_ERR(chan));
+ return PTR_ERR(chan);
+ }
+- if (WARN_ON(!chan->device || !chan->device->dev))
++ if (WARN_ON(!chan->device || !chan->device->dev)) {
++ dma_release_channel(chan);
+ return -EINVAL;
++ }
+
+ if (chan->device->dev->of_node)
+ ret = of_property_read_string(chan->device->dev->of_node,
+diff --git a/sound/soc/ti/omap-mcbsp.c b/sound/soc/ti/omap-mcbsp.c
+index 3d41ca2238d4..4f33ddb7b441 100644
+--- a/sound/soc/ti/omap-mcbsp.c
++++ b/sound/soc/ti/omap-mcbsp.c
+@@ -686,7 +686,7 @@ static int omap_mcbsp_init(struct platform_device *pdev)
+ mcbsp->dma_data[1].addr = omap_mcbsp_dma_reg_params(mcbsp,
+ SNDRV_PCM_STREAM_CAPTURE);
+
+- mcbsp->fclk = clk_get(&pdev->dev, "fck");
++ mcbsp->fclk = devm_clk_get(&pdev->dev, "fck");
+ if (IS_ERR(mcbsp->fclk)) {
+ ret = PTR_ERR(mcbsp->fclk);
+ dev_err(mcbsp->dev, "unable to get fck: %d\n", ret);
+@@ -711,7 +711,7 @@ static int omap_mcbsp_init(struct platform_device *pdev)
+ if (ret) {
+ dev_err(mcbsp->dev,
+ "Unable to create additional controls\n");
+- goto err_thres;
++ return ret;
+ }
+ }
+
+@@ -724,8 +724,6 @@ static int omap_mcbsp_init(struct platform_device *pdev)
+ err_st:
+ if (mcbsp->pdata->buffer_size)
+ sysfs_remove_group(&mcbsp->dev->kobj, &additional_attr_group);
+-err_thres:
+- clk_put(mcbsp->fclk);
+ return ret;
+ }
+
+@@ -1442,8 +1440,6 @@ static int asoc_mcbsp_remove(struct platform_device *pdev)
+
+ omap_mcbsp_st_cleanup(pdev);
+
+- clk_put(mcbsp->fclk);
+-
+ return 0;
+ }
+
+diff --git a/sound/soc/ux500/mop500.c b/sound/soc/ux500/mop500.c
+index 2873e8e6f02b..cdae1190b930 100644
+--- a/sound/soc/ux500/mop500.c
++++ b/sound/soc/ux500/mop500.c
+@@ -63,10 +63,11 @@ static void mop500_of_node_put(void)
+ {
+ int i;
+
+- for (i = 0; i < 2; i++) {
++ for (i = 0; i < 2; i++)
+ of_node_put(mop500_dai_links[i].cpus->of_node);
+- of_node_put(mop500_dai_links[i].codecs->of_node);
+- }
++
++ /* Both links use the same codec, which is refcounted only once */
++ of_node_put(mop500_dai_links[0].codecs->of_node);
+ }
+
+ static int mop500_of_probe(struct platform_device *pdev,
+@@ -81,7 +82,9 @@ static int mop500_of_probe(struct platform_device *pdev,
+
+ if (!(msp_np[0] && msp_np[1] && codec_np)) {
+ dev_err(&pdev->dev, "Phandle missing or invalid\n");
+- mop500_of_node_put();
++ for (i = 0; i < 2; i++)
++ of_node_put(msp_np[i]);
++ of_node_put(codec_np);
+ return -EINVAL;
+ }
+
+diff --git a/sound/usb/card.h b/sound/usb/card.h
+index 395403a2d33f..d6219fba9699 100644
+--- a/sound/usb/card.h
++++ b/sound/usb/card.h
+@@ -84,6 +84,10 @@ struct snd_usb_endpoint {
+ dma_addr_t sync_dma; /* DMA address of syncbuf */
+
+ unsigned int pipe; /* the data i/o pipe */
++ unsigned int framesize[2]; /* small/large frame sizes in samples */
++ unsigned int sample_rem; /* remainder from division fs/fps */
++ unsigned int sample_accum; /* sample accumulator */
++ unsigned int fps; /* frames per second */
+ unsigned int freqn; /* nominal sampling rate in fs/fps in Q16.16 format */
+ unsigned int freqm; /* momentary sampling rate in fs/fps in Q16.16 format */
+ int freqshift; /* how much to shift the feedback value to get Q16.16 */
+@@ -104,6 +108,7 @@ struct snd_usb_endpoint {
+ int iface, altsetting;
+ int skip_packets; /* quirks for devices to ignore the first n packets
+ in a stream */
++ bool is_implicit_feedback; /* This endpoint is used as implicit feedback */
+
+ spinlock_t lock;
+ struct list_head list;
+diff --git a/sound/usb/endpoint.c b/sound/usb/endpoint.c
+index 4a9a2f6ef5a4..9bea7d3f99f8 100644
+--- a/sound/usb/endpoint.c
++++ b/sound/usb/endpoint.c
+@@ -124,12 +124,12 @@ int snd_usb_endpoint_implicit_feedback_sink(struct snd_usb_endpoint *ep)
+
+ /*
+ * For streaming based on information derived from sync endpoints,
+- * prepare_outbound_urb_sizes() will call next_packet_size() to
++ * prepare_outbound_urb_sizes() will call slave_next_packet_size() to
+ * determine the number of samples to be sent in the next packet.
+ *
+- * For implicit feedback, next_packet_size() is unused.
++ * For implicit feedback, slave_next_packet_size() is unused.
+ */
+-int snd_usb_endpoint_next_packet_size(struct snd_usb_endpoint *ep)
++int snd_usb_endpoint_slave_next_packet_size(struct snd_usb_endpoint *ep)
+ {
+ unsigned long flags;
+ int ret;
+@@ -146,6 +146,29 @@ int snd_usb_endpoint_next_packet_size(struct snd_usb_endpoint *ep)
+ return ret;
+ }
+
++/*
++ * For adaptive and synchronous endpoints, prepare_outbound_urb_sizes()
++ * will call next_packet_size() to determine the number of samples to be
++ * sent in the next packet.
++ */
++int snd_usb_endpoint_next_packet_size(struct snd_usb_endpoint *ep)
++{
++ int ret;
++
++ if (ep->fill_max)
++ return ep->maxframesize;
++
++ ep->sample_accum += ep->sample_rem;
++ if (ep->sample_accum >= ep->fps) {
++ ep->sample_accum -= ep->fps;
++ ret = ep->framesize[1];
++ } else {
++ ret = ep->framesize[0];
++ }
++
++ return ret;
++}
++
+ static void retire_outbound_urb(struct snd_usb_endpoint *ep,
+ struct snd_urb_ctx *urb_ctx)
+ {
+@@ -190,6 +213,8 @@ static void prepare_silent_urb(struct snd_usb_endpoint *ep,
+
+ if (ctx->packet_size[i])
+ counts = ctx->packet_size[i];
++ else if (ep->sync_master)
++ counts = snd_usb_endpoint_slave_next_packet_size(ep);
+ else
+ counts = snd_usb_endpoint_next_packet_size(ep);
+
+@@ -321,17 +346,17 @@ static void queue_pending_output_urbs(struct snd_usb_endpoint *ep)
+ ep->next_packet_read_pos %= MAX_URBS;
+
+ /* take URB out of FIFO */
+- if (!list_empty(&ep->ready_playback_urbs))
++ if (!list_empty(&ep->ready_playback_urbs)) {
+ ctx = list_first_entry(&ep->ready_playback_urbs,
+ struct snd_urb_ctx, ready_list);
++ list_del_init(&ctx->ready_list);
++ }
+ }
+ spin_unlock_irqrestore(&ep->lock, flags);
+
+ if (ctx == NULL)
+ return;
+
+- list_del_init(&ctx->ready_list);
+-
+ /* copy over the length information */
+ for (i = 0; i < packet->packets; i++)
+ ctx->packet_size[i] = packet->packet_size[i];
+@@ -497,6 +522,8 @@ struct snd_usb_endpoint *snd_usb_add_endpoint(struct snd_usb_audio *chip,
+
+ list_add_tail(&ep->list, &chip->ep_list);
+
++ ep->is_implicit_feedback = 0;
++
+ __exit_unlock:
+ mutex_unlock(&chip->mutex);
+
+@@ -596,6 +623,178 @@ static void release_urbs(struct snd_usb_endpoint *ep, int force)
+ ep->nurbs = 0;
+ }
+
++/*
++ * Check data endpoint for format differences
++ */
++static bool check_ep_params(struct snd_usb_endpoint *ep,
++ snd_pcm_format_t pcm_format,
++ unsigned int channels,
++ unsigned int period_bytes,
++ unsigned int frames_per_period,
++ unsigned int periods_per_buffer,
++ struct audioformat *fmt,
++ struct snd_usb_endpoint *sync_ep)
++{
++ unsigned int maxsize, minsize, packs_per_ms, max_packs_per_urb;
++ unsigned int max_packs_per_period, urbs_per_period, urb_packs;
++ unsigned int max_urbs;
++ int frame_bits = snd_pcm_format_physical_width(pcm_format) * channels;
++ int tx_length_quirk = (ep->chip->tx_length_quirk &&
++ usb_pipeout(ep->pipe));
++ bool ret = 1;
++
++ if (pcm_format == SNDRV_PCM_FORMAT_DSD_U16_LE && fmt->dsd_dop) {
++ /*
++ * When operating in DSD DOP mode, the size of a sample frame
++ * in hardware differs from the actual physical format width
++ * because we need to make room for the DOP markers.
++ */
++ frame_bits += channels << 3;
++ }
++
++ ret = ret && (ep->datainterval == fmt->datainterval);
++ ret = ret && (ep->stride == frame_bits >> 3);
++
++ switch (pcm_format) {
++ case SNDRV_PCM_FORMAT_U8:
++ ret = ret && (ep->silence_value == 0x80);
++ break;
++ case SNDRV_PCM_FORMAT_DSD_U8:
++ case SNDRV_PCM_FORMAT_DSD_U16_LE:
++ case SNDRV_PCM_FORMAT_DSD_U32_LE:
++ case SNDRV_PCM_FORMAT_DSD_U16_BE:
++ case SNDRV_PCM_FORMAT_DSD_U32_BE:
++ ret = ret && (ep->silence_value == 0x69);
++ break;
++ default:
++ ret = ret && (ep->silence_value == 0);
++ }
++
++ /* assume max. frequency is 50% higher than nominal */
++ ret = ret && (ep->freqmax == ep->freqn + (ep->freqn >> 1));
++ /* Round up freqmax to nearest integer in order to calculate maximum
++ * packet size, which must represent a whole number of frames.
++ * This is accomplished by adding 0x0.ffff before converting the
++ * Q16.16 format into integer.
++ * In order to accurately calculate the maximum packet size when
++ * the data interval is more than 1 (i.e. ep->datainterval > 0),
++ * multiply by the data interval prior to rounding. For instance,
++ * a freqmax of 41 kHz will result in a max packet size of 6 (5.125)
++ * frames with a data interval of 1, but 11 (10.25) frames with a
++ * data interval of 2.
++ * (ep->freqmax << ep->datainterval overflows at 8.192 MHz for the
++ * maximum datainterval value of 3, at USB full speed, higher for
++ * USB high speed, noting that ep->freqmax is in units of
++ * frames per packet in Q16.16 format.)
++ */
++ maxsize = (((ep->freqmax << ep->datainterval) + 0xffff) >> 16) *
++ (frame_bits >> 3);
++ if (tx_length_quirk)
++ maxsize += sizeof(__le32); /* Space for length descriptor */
++ /* but wMaxPacketSize might reduce this */
++ if (ep->maxpacksize && ep->maxpacksize < maxsize) {
++ /* whatever fits into a max. size packet */
++ unsigned int data_maxsize = maxsize = ep->maxpacksize;
++
++ if (tx_length_quirk)
++ /* Need to remove the length descriptor to calc freq */
++ data_maxsize -= sizeof(__le32);
++ ret = ret && (ep->freqmax == (data_maxsize / (frame_bits >> 3))
++ << (16 - ep->datainterval));
++ }
++
++ if (ep->fill_max)
++ ret = ret && (ep->curpacksize == ep->maxpacksize);
++ else
++ ret = ret && (ep->curpacksize == maxsize);
++
++ if (snd_usb_get_speed(ep->chip->dev) != USB_SPEED_FULL) {
++ packs_per_ms = 8 >> ep->datainterval;
++ max_packs_per_urb = MAX_PACKS_HS;
++ } else {
++ packs_per_ms = 1;
++ max_packs_per_urb = MAX_PACKS;
++ }
++ if (sync_ep && !snd_usb_endpoint_implicit_feedback_sink(ep))
++ max_packs_per_urb = min(max_packs_per_urb,
++ 1U << sync_ep->syncinterval);
++ max_packs_per_urb = max(1u, max_packs_per_urb >> ep->datainterval);
++
++ /*
++ * Capture endpoints need to use small URBs because there's no way
++ * to tell in advance where the next period will end, and we don't
++ * want the next URB to complete much after the period ends.
++ *
++ * Playback endpoints with implicit sync must use the same parameters
++ * as their corresponding capture endpoint.
++ */
++ if (usb_pipein(ep->pipe) ||
++ snd_usb_endpoint_implicit_feedback_sink(ep)) {
++
++ urb_packs = packs_per_ms;
++ /*
++ * Wireless devices can poll at a max rate of once per 4ms.
++ * For dataintervals less than 5, increase the packet count to
++ * allow the host controller to use bursting to fill in the
++ * gaps.
++ */
++ if (snd_usb_get_speed(ep->chip->dev) == USB_SPEED_WIRELESS) {
++ int interval = ep->datainterval;
++
++ while (interval < 5) {
++ urb_packs <<= 1;
++ ++interval;
++ }
++ }
++ /* make capture URBs <= 1 ms and smaller than a period */
++ urb_packs = min(max_packs_per_urb, urb_packs);
++ while (urb_packs > 1 && urb_packs * maxsize >= period_bytes)
++ urb_packs >>= 1;
++ ret = ret && (ep->nurbs == MAX_URBS);
++
++ /*
++ * Playback endpoints without implicit sync are adjusted so that
++ * a period fits as evenly as possible in the smallest number of
++ * URBs. The total number of URBs is adjusted to the size of the
++ * ALSA buffer, subject to the MAX_URBS and MAX_QUEUE limits.
++ */
++ } else {
++ /* determine how small a packet can be */
++ minsize = (ep->freqn >> (16 - ep->datainterval)) *
++ (frame_bits >> 3);
++ /* with sync from device, assume it can be 12% lower */
++ if (sync_ep)
++ minsize -= minsize >> 3;
++ minsize = max(minsize, 1u);
++
++ /* how many packets will contain an entire ALSA period? */
++ max_packs_per_period = DIV_ROUND_UP(period_bytes, minsize);
++
++ /* how many URBs will contain a period? */
++ urbs_per_period = DIV_ROUND_UP(max_packs_per_period,
++ max_packs_per_urb);
++ /* how many packets are needed in each URB? */
++ urb_packs = DIV_ROUND_UP(max_packs_per_period, urbs_per_period);
++
++ /* limit the number of frames in a single URB */
++ ret = ret && (ep->max_urb_frames ==
++ DIV_ROUND_UP(frames_per_period, urbs_per_period));
++
++ /* try to use enough URBs to contain an entire ALSA buffer */
++ max_urbs = min((unsigned) MAX_URBS,
++ MAX_QUEUE * packs_per_ms / urb_packs);
++ ret = ret && (ep->nurbs == min(max_urbs,
++ urbs_per_period * periods_per_buffer));
++ }
++
++ ret = ret && (ep->datainterval == fmt->datainterval);
++ ret = ret && (ep->maxpacksize == fmt->maxpacksize);
++ ret = ret &&
++ (ep->fill_max == !!(fmt->attributes & UAC_EP_CS_ATTR_FILL_MAX));
++
++ return ret;
++}
++
+ /*
+ * configure a data endpoint
+ */
+@@ -861,10 +1060,23 @@ int snd_usb_endpoint_set_params(struct snd_usb_endpoint *ep,
+ int err;
+
+ if (ep->use_count != 0) {
+- usb_audio_warn(ep->chip,
+- "Unable to change format on ep #%x: already in use\n",
+- ep->ep_num);
+- return -EBUSY;
++ bool check = ep->is_implicit_feedback &&
++ check_ep_params(ep, pcm_format,
++ channels, period_bytes,
++ period_frames, buffer_periods,
++ fmt, sync_ep);
++
++ if (!check) {
++ usb_audio_warn(ep->chip,
++ "Unable to change format on ep #%x: already in use\n",
++ ep->ep_num);
++ return -EBUSY;
++ }
++
++ usb_audio_dbg(ep->chip,
++ "Ep #%x already in use as implicit feedback but format not changed\n",
++ ep->ep_num);
++ return 0;
+ }
+
+ /* release old buffers, if any */
+@@ -874,10 +1086,17 @@ int snd_usb_endpoint_set_params(struct snd_usb_endpoint *ep,
+ ep->maxpacksize = fmt->maxpacksize;
+ ep->fill_max = !!(fmt->attributes & UAC_EP_CS_ATTR_FILL_MAX);
+
+- if (snd_usb_get_speed(ep->chip->dev) == USB_SPEED_FULL)
++ if (snd_usb_get_speed(ep->chip->dev) == USB_SPEED_FULL) {
+ ep->freqn = get_usb_full_speed_rate(rate);
+- else
++ ep->fps = 1000;
++ } else {
+ ep->freqn = get_usb_high_speed_rate(rate);
++ ep->fps = 8000;
++ }
++
++ ep->sample_rem = rate % ep->fps;
++ ep->framesize[0] = rate / ep->fps;
++ ep->framesize[1] = (rate + (ep->fps - 1)) / ep->fps;
+
+ /* calculate the frequency in 16.16 format */
+ ep->freqm = ep->freqn;
+@@ -936,6 +1155,7 @@ int snd_usb_endpoint_start(struct snd_usb_endpoint *ep)
+ ep->active_mask = 0;
+ ep->unlink_mask = 0;
+ ep->phase = 0;
++ ep->sample_accum = 0;
+
+ snd_usb_endpoint_start_quirk(ep);
+
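The new framesize[]/sample_rem/sample_accum fields added to snd_usb_endpoint implement a Bresenham-style split of the sample rate across packets for adaptive and synchronous endpoints. A standalone sketch of snd_usb_endpoint_next_packet_size (Python; the fill_max shortcut is omitted):

```python
class Endpoint:
    def __init__(self, rate, fps):
        # fps: 1000 for USB full speed, 8000 for high speed
        self.fps = fps
        self.sample_rem = rate % fps                 # remainder from rate/fps
        self.framesize = (rate // fps,               # small packet, in samples
                          (rate + fps - 1) // fps)   # large packet
        self.sample_accum = 0                        # reset on endpoint start

    def next_packet_size(self):
        # accumulate the remainder; emit the large size each time it wraps
        self.sample_accum += self.sample_rem
        if self.sample_accum >= self.fps:
            self.sample_accum -= self.fps
            return self.framesize[1]
        return self.framesize[0]

ep = Endpoint(44100, 8000)   # 44.1 kHz over USB high speed
total = sum(ep.next_packet_size() for _ in range(8000))
# one second of packets carries exactly 44100 samples
```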
+diff --git a/sound/usb/endpoint.h b/sound/usb/endpoint.h
+index 63a39d4fa8d8..d23fa0a8c11b 100644
+--- a/sound/usb/endpoint.h
++++ b/sound/usb/endpoint.h
+@@ -28,6 +28,7 @@ void snd_usb_endpoint_release(struct snd_usb_endpoint *ep);
+ void snd_usb_endpoint_free(struct snd_usb_endpoint *ep);
+
+ int snd_usb_endpoint_implicit_feedback_sink(struct snd_usb_endpoint *ep);
++int snd_usb_endpoint_slave_next_packet_size(struct snd_usb_endpoint *ep);
+ int snd_usb_endpoint_next_packet_size(struct snd_usb_endpoint *ep);
+
+ void snd_usb_handle_sync_urb(struct snd_usb_endpoint *ep,
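The Q16.16 rounding rule spelled out in the long check_ep_params comment can be checked numerically. The sketch below reproduces the comment's 41 kHz example; note that the comment's "data interval of 1" corresponds to ep->datainterval == 0, since the field is a log2 exponent:

```python
def max_packet_frames(freqmax_q16, datainterval):
    # multiply by the data interval first, then add 0x0.ffff so that
    # truncating the Q16.16 value rounds up to whole frames
    return ((freqmax_q16 << datainterval) + 0xffff) >> 16

# 41 kHz at USB high speed: 41000 / 8000 = 5.125 frames per packet
freqmax = (41000 << 16) // 8000
assert max_packet_frames(freqmax, 0) == 6    # 5.125 -> 6
assert max_packet_frames(freqmax, 1) == 11   # 10.25 -> 11
```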
+diff --git a/sound/usb/mixer_quirks.c b/sound/usb/mixer_quirks.c
+index a5f65a9a0254..aad2683ff793 100644
+--- a/sound/usb/mixer_quirks.c
++++ b/sound/usb/mixer_quirks.c
+@@ -2185,6 +2185,421 @@ static int snd_rme_controls_create(struct usb_mixer_interface *mixer)
+ return 0;
+ }
+
++/*
++ * RME Babyface Pro (FS)
++ *
++ * These devices expose a couple of DSP functions via requests to EP0.
++ * Switches are available via control registers, while routing is controlled
++ * by controlling the volume on each possible crossing point.
++ * Volume control is linear, from -inf (dec. 0) to +6dB (dec. 65536) with
++ * 0dB being at dec. 32768.
++ */
++enum {
++ SND_BBFPRO_CTL_REG1 = 0,
++ SND_BBFPRO_CTL_REG2
++};
++
++#define SND_BBFPRO_CTL_REG_MASK 1
++#define SND_BBFPRO_CTL_IDX_MASK 0xff
++#define SND_BBFPRO_CTL_IDX_SHIFT 1
++#define SND_BBFPRO_CTL_VAL_MASK 1
++#define SND_BBFPRO_CTL_VAL_SHIFT 9
++#define SND_BBFPRO_CTL_REG1_CLK_MASTER 0
++#define SND_BBFPRO_CTL_REG1_CLK_OPTICAL 1
++#define SND_BBFPRO_CTL_REG1_SPDIF_PRO 7
++#define SND_BBFPRO_CTL_REG1_SPDIF_EMPH 8
++#define SND_BBFPRO_CTL_REG1_SPDIF_OPTICAL 10
++#define SND_BBFPRO_CTL_REG2_48V_AN1 0
++#define SND_BBFPRO_CTL_REG2_48V_AN2 1
++#define SND_BBFPRO_CTL_REG2_SENS_IN3 2
++#define SND_BBFPRO_CTL_REG2_SENS_IN4 3
++#define SND_BBFPRO_CTL_REG2_PAD_AN1 4
++#define SND_BBFPRO_CTL_REG2_PAD_AN2 5
++
++#define SND_BBFPRO_MIXER_IDX_MASK 0x1ff
++#define SND_BBFPRO_MIXER_VAL_MASK 0x3ffff
++#define SND_BBFPRO_MIXER_VAL_SHIFT 9
++#define SND_BBFPRO_MIXER_VAL_MIN 0 // -inf
++#define SND_BBFPRO_MIXER_VAL_MAX 65536 // +6dB
++
++#define SND_BBFPRO_USBREQ_CTL_REG1 0x10
++#define SND_BBFPRO_USBREQ_CTL_REG2 0x17
++#define SND_BBFPRO_USBREQ_MIXER 0x12
++
++static int snd_bbfpro_ctl_update(struct usb_mixer_interface *mixer, u8 reg,
++ u8 index, u8 value)
++{
++ int err;
++ u16 usb_req, usb_idx, usb_val;
++ struct snd_usb_audio *chip = mixer->chip;
++
++ err = snd_usb_lock_shutdown(chip);
++ if (err < 0)
++ return err;
++
++ if (reg == SND_BBFPRO_CTL_REG1) {
++ usb_req = SND_BBFPRO_USBREQ_CTL_REG1;
++ if (index == SND_BBFPRO_CTL_REG1_CLK_OPTICAL) {
++ usb_idx = 3;
++ usb_val = value ? 3 : 0;
++ } else {
++ usb_idx = 1 << index;
++ usb_val = value ? usb_idx : 0;
++ }
++ } else {
++ usb_req = SND_BBFPRO_USBREQ_CTL_REG2;
++ usb_idx = 1 << index;
++ usb_val = value ? usb_idx : 0;
++ }
++
++ err = snd_usb_ctl_msg(chip->dev,
++ usb_sndctrlpipe(chip->dev, 0), usb_req,
++ USB_DIR_OUT | USB_TYPE_VENDOR | USB_RECIP_DEVICE,
++ usb_val, usb_idx, 0, 0);
++
++ snd_usb_unlock_shutdown(chip);
++ return err;
++}
++
++static int snd_bbfpro_ctl_get(struct snd_kcontrol *kcontrol,
++ struct snd_ctl_elem_value *ucontrol)
++{
++ u8 reg, idx, val;
++ int pv;
++
++ pv = kcontrol->private_value;
++ reg = pv & SND_BBFPRO_CTL_REG_MASK;
++ idx = (pv >> SND_BBFPRO_CTL_IDX_SHIFT) & SND_BBFPRO_CTL_IDX_MASK;
++ val = kcontrol->private_value >> SND_BBFPRO_CTL_VAL_SHIFT;
++
++ if ((reg == SND_BBFPRO_CTL_REG1 &&
++ idx == SND_BBFPRO_CTL_REG1_CLK_OPTICAL) ||
++ (reg == SND_BBFPRO_CTL_REG2 &&
++ (idx == SND_BBFPRO_CTL_REG2_SENS_IN3 ||
++ idx == SND_BBFPRO_CTL_REG2_SENS_IN4))) {
++ ucontrol->value.enumerated.item[0] = val;
++ } else {
++ ucontrol->value.integer.value[0] = val;
++ }
++ return 0;
++}
++
++static int snd_bbfpro_ctl_info(struct snd_kcontrol *kcontrol,
++ struct snd_ctl_elem_info *uinfo)
++{
++ u8 reg, idx;
++ int pv;
++
++ pv = kcontrol->private_value;
++ reg = pv & SND_BBFPRO_CTL_REG_MASK;
++ idx = (pv >> SND_BBFPRO_CTL_IDX_SHIFT) & SND_BBFPRO_CTL_IDX_MASK;
++
++ if (reg == SND_BBFPRO_CTL_REG1 &&
++ idx == SND_BBFPRO_CTL_REG1_CLK_OPTICAL) {
++ static const char * const texts[2] = {
++ "AutoSync",
++ "Internal"
++ };
++ return snd_ctl_enum_info(uinfo, 1, 2, texts);
++ } else if (reg == SND_BBFPRO_CTL_REG2 &&
++ (idx == SND_BBFPRO_CTL_REG2_SENS_IN3 ||
++ idx == SND_BBFPRO_CTL_REG2_SENS_IN4)) {
++ static const char * const texts[2] = {
++ "-10dBV",
++ "+4dBu"
++ };
++ return snd_ctl_enum_info(uinfo, 1, 2, texts);
++ }
++
++ uinfo->count = 1;
++ uinfo->value.integer.min = 0;
++ uinfo->value.integer.max = 1;
++ uinfo->type = SNDRV_CTL_ELEM_TYPE_BOOLEAN;
++ return 0;
++}
++
++static int snd_bbfpro_ctl_put(struct snd_kcontrol *kcontrol,
++ struct snd_ctl_elem_value *ucontrol)
++{
++ int err;
++ u8 reg, idx;
++ int old_value, pv, val;
++
++ struct usb_mixer_elem_list *list = snd_kcontrol_chip(kcontrol);
++ struct usb_mixer_interface *mixer = list->mixer;
++
++ pv = kcontrol->private_value;
++ reg = pv & SND_BBFPRO_CTL_REG_MASK;
++ idx = (pv >> SND_BBFPRO_CTL_IDX_SHIFT) & SND_BBFPRO_CTL_IDX_MASK;
++ old_value = (pv >> SND_BBFPRO_CTL_VAL_SHIFT) & SND_BBFPRO_CTL_VAL_MASK;
++
++ if ((reg == SND_BBFPRO_CTL_REG1 &&
++ idx == SND_BBFPRO_CTL_REG1_CLK_OPTICAL) ||
++ (reg == SND_BBFPRO_CTL_REG2 &&
++ (idx == SND_BBFPRO_CTL_REG2_SENS_IN3 ||
++ idx == SND_BBFPRO_CTL_REG2_SENS_IN4))) {
++ val = ucontrol->value.enumerated.item[0];
++ } else {
++ val = ucontrol->value.integer.value[0];
++ }
++
++ if (val > 1)
++ return -EINVAL;
++
++ if (val == old_value)
++ return 0;
++
++ kcontrol->private_value = reg
++ | ((idx & SND_BBFPRO_CTL_IDX_MASK) << SND_BBFPRO_CTL_IDX_SHIFT)
++ | ((val & SND_BBFPRO_CTL_VAL_MASK) << SND_BBFPRO_CTL_VAL_SHIFT);
++
++ err = snd_bbfpro_ctl_update(mixer, reg, idx, val);
++ return err < 0 ? err : 1;
++}
++
++static int snd_bbfpro_ctl_resume(struct usb_mixer_elem_list *list)
++{
++ u8 reg, idx;
++ int value, pv;
++
++ pv = list->kctl->private_value;
++ reg = pv & SND_BBFPRO_CTL_REG_MASK;
++ idx = (pv >> SND_BBFPRO_CTL_IDX_SHIFT) & SND_BBFPRO_CTL_IDX_MASK;
++ value = (pv >> SND_BBFPRO_CTL_VAL_SHIFT) & SND_BBFPRO_CTL_VAL_MASK;
++
++ return snd_bbfpro_ctl_update(list->mixer, reg, idx, value);
++}
++
++static int snd_bbfpro_vol_update(struct usb_mixer_interface *mixer, u16 index,
++ u32 value)
++{
++ struct snd_usb_audio *chip = mixer->chip;
++ int err;
++ u16 idx;
++ u16 usb_idx, usb_val;
++ u32 v;
++
++ err = snd_usb_lock_shutdown(chip);
++ if (err < 0)
++ return err;
++
++ idx = index & SND_BBFPRO_MIXER_IDX_MASK;
++ // 18 bit linear volume, split so 2 bits end up in index.
++ v = value & SND_BBFPRO_MIXER_VAL_MASK;
++ usb_idx = idx | (v & 0x3) << 14;
++ usb_val = (v >> 2) & 0xffff;
++
++ err = snd_usb_ctl_msg(chip->dev,
++ usb_sndctrlpipe(chip->dev, 0),
++ SND_BBFPRO_USBREQ_MIXER,
++ USB_DIR_OUT | USB_TYPE_VENDOR |
++ USB_RECIP_DEVICE,
++ usb_val, usb_idx, 0, 0);
++
++ snd_usb_unlock_shutdown(chip);
++ return err;
++}
++
++static int snd_bbfpro_vol_get(struct snd_kcontrol *kcontrol,
++ struct snd_ctl_elem_value *ucontrol)
++{
++ ucontrol->value.integer.value[0] =
++ kcontrol->private_value >> SND_BBFPRO_MIXER_VAL_SHIFT;
++ return 0;
++}
++
++static int snd_bbfpro_vol_info(struct snd_kcontrol *kcontrol,
++ struct snd_ctl_elem_info *uinfo)
++{
++ uinfo->type = SNDRV_CTL_ELEM_TYPE_INTEGER;
++ uinfo->count = 1;
++ uinfo->value.integer.min = SND_BBFPRO_MIXER_VAL_MIN;
++ uinfo->value.integer.max = SND_BBFPRO_MIXER_VAL_MAX;
++ return 0;
++}
++
++static int snd_bbfpro_vol_put(struct snd_kcontrol *kcontrol,
++ struct snd_ctl_elem_value *ucontrol)
++{
++ int err;
++ u16 idx;
++ u32 new_val, old_value, uvalue;
++ struct usb_mixer_elem_list *list = snd_kcontrol_chip(kcontrol);
++ struct usb_mixer_interface *mixer = list->mixer;
++
++ uvalue = ucontrol->value.integer.value[0];
++ idx = kcontrol->private_value & SND_BBFPRO_MIXER_IDX_MASK;
++ old_value = kcontrol->private_value >> SND_BBFPRO_MIXER_VAL_SHIFT;
++
++ if (uvalue > SND_BBFPRO_MIXER_VAL_MAX)
++ return -EINVAL;
++
++ if (uvalue == old_value)
++ return 0;
++
++ new_val = uvalue & SND_BBFPRO_MIXER_VAL_MASK;
++
++ kcontrol->private_value = idx
++ | (new_val << SND_BBFPRO_MIXER_VAL_SHIFT);
++
++ err = snd_bbfpro_vol_update(mixer, idx, new_val);
++ return err < 0 ? err : 1;
++}
++
++static int snd_bbfpro_vol_resume(struct usb_mixer_elem_list *list)
++{
++ int pv = list->kctl->private_value;
++ u16 idx = pv & SND_BBFPRO_MIXER_IDX_MASK;
++ u32 val = (pv >> SND_BBFPRO_MIXER_VAL_SHIFT)
++ & SND_BBFPRO_MIXER_VAL_MASK;
++ return snd_bbfpro_vol_update(list->mixer, idx, val);
++}
++
++// Predefined elements
++static const struct snd_kcontrol_new snd_bbfpro_ctl_control = {
++ .iface = SNDRV_CTL_ELEM_IFACE_MIXER,
++ .access = SNDRV_CTL_ELEM_ACCESS_READWRITE,
++ .index = 0,
++ .info = snd_bbfpro_ctl_info,
++ .get = snd_bbfpro_ctl_get,
++ .put = snd_bbfpro_ctl_put
++};
++
++static const struct snd_kcontrol_new snd_bbfpro_vol_control = {
++ .iface = SNDRV_CTL_ELEM_IFACE_MIXER,
++ .access = SNDRV_CTL_ELEM_ACCESS_READWRITE,
++ .index = 0,
++ .info = snd_bbfpro_vol_info,
++ .get = snd_bbfpro_vol_get,
++ .put = snd_bbfpro_vol_put
++};
++
++static int snd_bbfpro_ctl_add(struct usb_mixer_interface *mixer, u8 reg,
++ u8 index, char *name)
++{
++ struct snd_kcontrol_new knew = snd_bbfpro_ctl_control;
++
++ knew.name = name;
++ knew.private_value = (reg & SND_BBFPRO_CTL_REG_MASK)
++ | ((index & SND_BBFPRO_CTL_IDX_MASK)
++ << SND_BBFPRO_CTL_IDX_SHIFT);
++
++ return add_single_ctl_with_resume(mixer, 0, snd_bbfpro_ctl_resume,
++ &knew, NULL);
++}
++
++static int snd_bbfpro_vol_add(struct usb_mixer_interface *mixer, u16 index,
++ char *name)
++{
++ struct snd_kcontrol_new knew = snd_bbfpro_vol_control;
++
++ knew.name = name;
++ knew.private_value = index & SND_BBFPRO_MIXER_IDX_MASK;
++
++ return add_single_ctl_with_resume(mixer, 0, snd_bbfpro_vol_resume,
++ &knew, NULL);
++}
++
++static int snd_bbfpro_controls_create(struct usb_mixer_interface *mixer)
++{
++ int err, i, o;
++ char name[48];
++
++ static const char * const input[] = {
++ "AN1", "AN2", "IN3", "IN4", "AS1", "AS2", "ADAT3",
++ "ADAT4", "ADAT5", "ADAT6", "ADAT7", "ADAT8"};
++
++ static const char * const output[] = {
++ "AN1", "AN2", "PH3", "PH4", "AS1", "AS2", "ADAT3", "ADAT4",
++ "ADAT5", "ADAT6", "ADAT7", "ADAT8"};
++
++ for (o = 0 ; o < 12 ; ++o) {
++ for (i = 0 ; i < 12 ; ++i) {
++ // Line routing
++ snprintf(name, sizeof(name),
++ "%s-%s-%s Playback Volume",
++ (i < 2 ? "Mic" : "Line"),
++ input[i], output[o]);
++ err = snd_bbfpro_vol_add(mixer, (26 * o + i), name);
++ if (err < 0)
++ return err;
++
++ // PCM routing... yes, it is output remapping
++ snprintf(name, sizeof(name),
++ "PCM-%s-%s Playback Volume",
++ output[i], output[o]);
++ err = snd_bbfpro_vol_add(mixer, (26 * o + 12 + i),
++ name);
++ if (err < 0)
++ return err;
++ }
++ }
++
++ // Control Reg 1
++ err = snd_bbfpro_ctl_add(mixer, SND_BBFPRO_CTL_REG1,
++ SND_BBFPRO_CTL_REG1_CLK_OPTICAL,
++ "Sample Clock Source");
++ if (err < 0)
++ return err;
++
++ err = snd_bbfpro_ctl_add(mixer, SND_BBFPRO_CTL_REG1,
++ SND_BBFPRO_CTL_REG1_SPDIF_PRO,
++ "IEC958 Pro Mask");
++ if (err < 0)
++ return err;
++
++ err = snd_bbfpro_ctl_add(mixer, SND_BBFPRO_CTL_REG1,
++ SND_BBFPRO_CTL_REG1_SPDIF_EMPH,
++ "IEC958 Emphasis");
++ if (err < 0)
++ return err;
++
++ err = snd_bbfpro_ctl_add(mixer, SND_BBFPRO_CTL_REG1,
++ SND_BBFPRO_CTL_REG1_SPDIF_OPTICAL,
++ "IEC958 Switch");
++ if (err < 0)
++ return err;
++
++ // Control Reg 2
++ err = snd_bbfpro_ctl_add(mixer, SND_BBFPRO_CTL_REG2,
++ SND_BBFPRO_CTL_REG2_48V_AN1,
++ "Mic-AN1 48V");
++ if (err < 0)
++ return err;
++
++ err = snd_bbfpro_ctl_add(mixer, SND_BBFPRO_CTL_REG2,
++ SND_BBFPRO_CTL_REG2_48V_AN2,
++ "Mic-AN2 48V");
++ if (err < 0)
++ return err;
++
++ err = snd_bbfpro_ctl_add(mixer, SND_BBFPRO_CTL_REG2,
++ SND_BBFPRO_CTL_REG2_SENS_IN3,
++ "Line-IN3 Sens.");
++ if (err < 0)
++ return err;
++
++ err = snd_bbfpro_ctl_add(mixer, SND_BBFPRO_CTL_REG2,
++ SND_BBFPRO_CTL_REG2_SENS_IN4,
++ "Line-IN4 Sens.");
++ if (err < 0)
++ return err;
++
++ err = snd_bbfpro_ctl_add(mixer, SND_BBFPRO_CTL_REG2,
++ SND_BBFPRO_CTL_REG2_PAD_AN1,
++ "Mic-AN1 PAD");
++ if (err < 0)
++ return err;
++
++ err = snd_bbfpro_ctl_add(mixer, SND_BBFPRO_CTL_REG2,
++ SND_BBFPRO_CTL_REG2_PAD_AN2,
++ "Mic-AN2 PAD");
++ if (err < 0)
++ return err;
++
++ return 0;
++}
++
+ int snd_usb_mixer_apply_create_quirk(struct usb_mixer_interface *mixer)
+ {
+ int err = 0;
+@@ -2286,6 +2701,9 @@ int snd_usb_mixer_apply_create_quirk(struct usb_mixer_interface *mixer)
+ case USB_ID(0x0194f, 0x010c): /* Presonus Studio 1810c */
+ err = snd_sc1810_init_mixer(mixer);
+ break;
++ case USB_ID(0x2a39, 0x3fb0): /* RME Babyface Pro FS */
++ err = snd_bbfpro_controls_create(mixer);
++ break;
+ }
+
+ return err;
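snd_bbfpro_vol_update splits the 18-bit linear volume across the vendor control request: the low two bits travel in wIndex bits 14..15, the high sixteen in wValue. A round-trip sketch (Python; the decode half is not part of the patch and is shown only to demonstrate that the split is lossless):

```python
IDX_MASK = 0x1ff        # SND_BBFPRO_MIXER_IDX_MASK (mixer node, bits 0..8)
VAL_MASK = 0x3ffff      # SND_BBFPRO_MIXER_VAL_MASK (18-bit volume)

def encode(index, value):
    idx = index & IDX_MASK
    v = value & VAL_MASK
    usb_idx = idx | (v & 0x3) << 14   # low 2 volume bits in bits 14..15
    usb_val = (v >> 2) & 0xffff       # high 16 volume bits
    return usb_idx, usb_val

def decode(usb_idx, usb_val):
    # hypothetical inverse, for verification only
    return usb_idx & IDX_MASK, ((usb_idx >> 14) & 0x3) | (usb_val << 2)

for v in (0, 32768, 65536):           # -inf, 0 dB, +6 dB
    assert decode(*encode(300, v)) == (300, v)
```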
+diff --git a/sound/usb/pcm.c b/sound/usb/pcm.c
+index a4e4064f9aee..d61c2f1095b5 100644
+--- a/sound/usb/pcm.c
++++ b/sound/usb/pcm.c
+@@ -404,6 +404,8 @@ add_sync_ep:
+ if (!subs->sync_endpoint)
+ return -EINVAL;
+
++ subs->sync_endpoint->is_implicit_feedback = 1;
++
+ subs->data_endpoint->sync_master = subs->sync_endpoint;
+
+ return 1;
+@@ -502,12 +504,15 @@ static int set_sync_endpoint(struct snd_usb_substream *subs,
+ implicit_fb ?
+ SND_USB_ENDPOINT_TYPE_DATA :
+ SND_USB_ENDPOINT_TYPE_SYNC);
++
+ if (!subs->sync_endpoint) {
+ if (is_playback && attr == USB_ENDPOINT_SYNC_NONE)
+ return 0;
+ return -EINVAL;
+ }
+
++ subs->sync_endpoint->is_implicit_feedback = implicit_fb;
++
+ subs->data_endpoint->sync_master = subs->sync_endpoint;
+
+ return 0;
+@@ -1579,6 +1584,8 @@ static void prepare_playback_urb(struct snd_usb_substream *subs,
+ for (i = 0; i < ctx->packets; i++) {
+ if (ctx->packet_size[i])
+ counts = ctx->packet_size[i];
++ else if (ep->sync_master)
++ counts = snd_usb_endpoint_slave_next_packet_size(ep);
+ else
+ counts = snd_usb_endpoint_next_packet_size(ep);
+
+diff --git a/tools/bootconfig/main.c b/tools/bootconfig/main.c
+index 0efaf45f7367..e0878f5f74b1 100644
+--- a/tools/bootconfig/main.c
++++ b/tools/bootconfig/main.c
+@@ -14,13 +14,18 @@
+ #include <linux/kernel.h>
+ #include <linux/bootconfig.h>
+
+-static int xbc_show_array(struct xbc_node *node)
++static int xbc_show_value(struct xbc_node *node)
+ {
+ const char *val;
++ char q;
+ int i = 0;
+
+ xbc_array_for_each_value(node, val) {
+- printf("\"%s\"%s", val, node->next ? ", " : ";\n");
++ if (strchr(val, '"'))
++ q = '\'';
++ else
++ q = '"';
++ printf("%c%s%c%s", q, val, q, node->next ? ", " : ";\n");
+ i++;
+ }
+ return i;
+@@ -48,10 +53,7 @@ static void xbc_show_compact_tree(void)
+ continue;
+ } else if (cnode && xbc_node_is_value(cnode)) {
+ printf("%s = ", xbc_node_get_data(node));
+- if (cnode->next)
+- xbc_show_array(cnode);
+- else
+- printf("\"%s\";\n", xbc_node_get_data(cnode));
++ xbc_show_value(cnode);
+ } else {
+ printf("%s;\n", xbc_node_get_data(node));
+ }
+@@ -205,11 +207,13 @@ int show_xbc(const char *path)
+ }
+
+ ret = load_xbc_from_initrd(fd, &buf);
+- if (ret < 0)
++ if (ret < 0) {
+ pr_err("Failed to load a boot config from initrd: %d\n", ret);
+- else
+- xbc_show_compact_tree();
+-
++ goto out;
++ }
++ xbc_show_compact_tree();
++ ret = 0;
++out:
+ close(fd);
+ free(buf);
+
+diff --git a/tools/bpf/bpftool/gen.c b/tools/bpf/bpftool/gen.c
+index f8113b3646f5..f5960b48c861 100644
+--- a/tools/bpf/bpftool/gen.c
++++ b/tools/bpf/bpftool/gen.c
+@@ -225,6 +225,7 @@ static int codegen(const char *template, ...)
+ } else {
+ p_err("unrecognized character at pos %td in template '%s'",
+ src - template - 1, template);
++ free(s);
+ return -EINVAL;
+ }
+ }
+@@ -235,6 +236,7 @@ static int codegen(const char *template, ...)
+ if (*src != '\t') {
+ p_err("not enough tabs at pos %td in template '%s'",
+ src - template - 1, template);
++ free(s);
+ return -EINVAL;
+ }
+ }
+diff --git a/tools/lib/bpf/btf_dump.c b/tools/lib/bpf/btf_dump.c
+index 0c28ee82834b..653dbbe2e366 100644
+--- a/tools/lib/bpf/btf_dump.c
++++ b/tools/lib/bpf/btf_dump.c
+@@ -1137,6 +1137,20 @@ static void btf_dump_emit_mods(struct btf_dump *d, struct id_stack *decl_stack)
+ }
+ }
+
++static void btf_dump_drop_mods(struct btf_dump *d, struct id_stack *decl_stack)
++{
++ const struct btf_type *t;
++ __u32 id;
++
++ while (decl_stack->cnt) {
++ id = decl_stack->ids[decl_stack->cnt - 1];
++ t = btf__type_by_id(d->btf, id);
++ if (!btf_is_mod(t))
++ return;
++ decl_stack->cnt--;
++ }
++}
++
+ static void btf_dump_emit_name(const struct btf_dump *d,
+ const char *name, bool last_was_ptr)
+ {
+@@ -1235,14 +1249,7 @@ static void btf_dump_emit_type_chain(struct btf_dump *d,
+ * a const/volatile modifier for array, so we are
+ * going to silently skip them here.
+ */
+- while (decls->cnt) {
+- next_id = decls->ids[decls->cnt - 1];
+- next_t = btf__type_by_id(d->btf, next_id);
+- if (btf_is_mod(next_t))
+- decls->cnt--;
+- else
+- break;
+- }
++ btf_dump_drop_mods(d, decls);
+
+ if (decls->cnt == 0) {
+ btf_dump_emit_name(d, fname, last_was_ptr);
+@@ -1270,7 +1277,15 @@ static void btf_dump_emit_type_chain(struct btf_dump *d,
+ __u16 vlen = btf_vlen(t);
+ int i;
+
+- btf_dump_emit_mods(d, decls);
++ /*
++ * GCC emits extra volatile qualifier for
++ * __attribute__((noreturn)) function pointers. Clang
++ * doesn't do it. It's a GCC quirk for backwards
++ * compatibility with code written for GCC <2.5. So,
++ * similarly to extra qualifiers for array, just drop
++ * them, instead of handling them.
++ */
++ btf_dump_drop_mods(d, decls);
+ if (decls->cnt) {
+ btf_dump_printf(d, " (");
+ btf_dump_emit_type_chain(d, decls, fname, lvl);
+diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
+index 0c5b4fb553fb..c417cff2cdaf 100644
+--- a/tools/lib/bpf/libbpf.c
++++ b/tools/lib/bpf/libbpf.c
+@@ -3455,10 +3455,6 @@ bpf_object__populate_internal_map(struct bpf_object *obj, struct bpf_map *map)
+ char *cp, errmsg[STRERR_BUFSIZE];
+ int err, zero = 0;
+
+- /* kernel already zero-initializes .bss map. */
+- if (map_type == LIBBPF_MAP_BSS)
+- return 0;
+-
+ err = bpf_map_update_elem(map->fd, &zero, map->mmaped, 0);
+ if (err) {
+ err = -errno;
+diff --git a/tools/perf/builtin-report.c b/tools/perf/builtin-report.c
+index 26d8fc27e427..fc7855262162 100644
+--- a/tools/perf/builtin-report.c
++++ b/tools/perf/builtin-report.c
+@@ -476,8 +476,7 @@ static size_t hists__fprintf_nr_sample_events(struct hists *hists, struct report
+ if (rep->time_str)
+ ret += fprintf(fp, " (time slices: %s)", rep->time_str);
+
+- if (symbol_conf.show_ref_callgraph &&
+- strstr(evname, "call-graph=no")) {
++ if (symbol_conf.show_ref_callgraph && evname && strstr(evname, "call-graph=no")) {
+ ret += fprintf(fp, ", show reference callgraph");
+ }
+
+diff --git a/tools/perf/util/parse-events.y b/tools/perf/util/parse-events.y
+index 94f8bcd83582..9a41247c602b 100644
+--- a/tools/perf/util/parse-events.y
++++ b/tools/perf/util/parse-events.y
+@@ -348,7 +348,7 @@ PE_PMU_EVENT_PRE '-' PE_PMU_EVENT_SUF sep_dc
+ struct list_head *list;
+ char pmu_name[128];
+
+- snprintf(&pmu_name, 128, "%s-%s", $1, $3);
++ snprintf(pmu_name, sizeof(pmu_name), "%s-%s", $1, $3);
+ free($1);
+ free($3);
+ if (parse_events_multi_pmu_add(_parse_state, pmu_name, &list) < 0)
+diff --git a/tools/perf/util/probe-event.c b/tools/perf/util/probe-event.c
+index a08f373d3305..df713a5d1e26 100644
+--- a/tools/perf/util/probe-event.c
++++ b/tools/perf/util/probe-event.c
+@@ -1575,7 +1575,7 @@ static int parse_perf_probe_arg(char *str, struct perf_probe_arg *arg)
+ }
+
+ tmp = strchr(str, '@');
+- if (tmp && tmp != str && strcmp(tmp + 1, "user")) { /* user attr */
++ if (tmp && tmp != str && !strcmp(tmp + 1, "user")) { /* user attr */
+ if (!user_access_is_supported()) {
+ semantic_error("ftrace does not support user access\n");
+ return -EINVAL;
+@@ -1995,7 +1995,10 @@ static int __synthesize_probe_trace_arg_ref(struct probe_trace_arg_ref *ref,
+ if (depth < 0)
+ return depth;
+ }
+- err = strbuf_addf(buf, "%+ld(", ref->offset);
++ if (ref->user_access)
++ err = strbuf_addf(buf, "%s%ld(", "+u", ref->offset);
++ else
++ err = strbuf_addf(buf, "%+ld(", ref->offset);
+ return (err < 0) ? err : depth;
+ }
+
+diff --git a/tools/perf/util/probe-file.c b/tools/perf/util/probe-file.c
+index 8c852948513e..064b63a6a3f3 100644
+--- a/tools/perf/util/probe-file.c
++++ b/tools/perf/util/probe-file.c
+@@ -1044,7 +1044,7 @@ static struct {
+ DEFINE_TYPE(FTRACE_README_PROBE_TYPE_X, "*type: * x8/16/32/64,*"),
+ DEFINE_TYPE(FTRACE_README_KRETPROBE_OFFSET, "*place (kretprobe): *"),
+ DEFINE_TYPE(FTRACE_README_UPROBE_REF_CTR, "*ref_ctr_offset*"),
+- DEFINE_TYPE(FTRACE_README_USER_ACCESS, "*[u]<offset>*"),
++ DEFINE_TYPE(FTRACE_README_USER_ACCESS, "*u]<offset>*"),
+ DEFINE_TYPE(FTRACE_README_MULTIPROBE_EVENT, "*Create/append/*"),
+ DEFINE_TYPE(FTRACE_README_IMMEDIATE_VALUE, "*\\imm-value,*"),
+ };
+diff --git a/tools/perf/util/stat-display.c b/tools/perf/util/stat-display.c
+index 9e757d18d713..cf393c3eea23 100644
+--- a/tools/perf/util/stat-display.c
++++ b/tools/perf/util/stat-display.c
+@@ -671,7 +671,7 @@ static void print_aggr(struct perf_stat_config *config,
+ int s;
+ bool first;
+
+- if (!(config->aggr_map || config->aggr_get_id))
++ if (!config->aggr_map || !config->aggr_get_id)
+ return;
+
+ aggr_update_shadow(config, evlist);
+@@ -1172,7 +1172,7 @@ static void print_percore(struct perf_stat_config *config,
+ int s;
+ bool first = true;
+
+- if (!(config->aggr_map || config->aggr_get_id))
++ if (!config->aggr_map || !config->aggr_get_id)
+ return;
+
+ if (config->percore_show_thread)
+diff --git a/tools/testing/selftests/bpf/prog_tests/skeleton.c b/tools/testing/selftests/bpf/prog_tests/skeleton.c
+index 9264a2736018..fa153cf67b1b 100644
+--- a/tools/testing/selftests/bpf/prog_tests/skeleton.c
++++ b/tools/testing/selftests/bpf/prog_tests/skeleton.c
+@@ -15,6 +15,8 @@ void test_skeleton(void)
+ int duration = 0, err;
+ struct test_skeleton* skel;
+ struct test_skeleton__bss *bss;
++ struct test_skeleton__data *data;
++ struct test_skeleton__rodata *rodata;
+ struct test_skeleton__kconfig *kcfg;
+
+ skel = test_skeleton__open();
+@@ -24,13 +26,45 @@ void test_skeleton(void)
+ if (CHECK(skel->kconfig, "skel_kconfig", "kconfig is mmaped()!\n"))
+ goto cleanup;
+
++ bss = skel->bss;
++ data = skel->data;
++ rodata = skel->rodata;
++
++ /* validate values are pre-initialized correctly */
++ CHECK(data->in1 != -1, "in1", "got %d != exp %d\n", data->in1, -1);
++ CHECK(data->out1 != -1, "out1", "got %d != exp %d\n", data->out1, -1);
++ CHECK(data->in2 != -1, "in2", "got %lld != exp %lld\n", data->in2, -1LL);
++ CHECK(data->out2 != -1, "out2", "got %lld != exp %lld\n", data->out2, -1LL);
++
++ CHECK(bss->in3 != 0, "in3", "got %d != exp %d\n", bss->in3, 0);
++ CHECK(bss->out3 != 0, "out3", "got %d != exp %d\n", bss->out3, 0);
++ CHECK(bss->in4 != 0, "in4", "got %lld != exp %lld\n", bss->in4, 0LL);
++ CHECK(bss->out4 != 0, "out4", "got %lld != exp %lld\n", bss->out4, 0LL);
++
++ CHECK(rodata->in6 != 0, "in6", "got %d != exp %d\n", rodata->in6, 0);
++ CHECK(bss->out6 != 0, "out6", "got %d != exp %d\n", bss->out6, 0);
++
++ /* validate we can pre-setup global variables, even in .bss */
++ data->in1 = 10;
++ data->in2 = 11;
++ bss->in3 = 12;
++ bss->in4 = 13;
++ rodata->in6 = 14;
++
+ err = test_skeleton__load(skel);
+ if (CHECK(err, "skel_load", "failed to load skeleton: %d\n", err))
+ goto cleanup;
+
+- bss = skel->bss;
+- bss->in1 = 1;
+- bss->in2 = 2;
++ /* validate pre-setup values are still there */
++ CHECK(data->in1 != 10, "in1", "got %d != exp %d\n", data->in1, 10);
++ CHECK(data->in2 != 11, "in2", "got %lld != exp %lld\n", data->in2, 11LL);
++ CHECK(bss->in3 != 12, "in3", "got %d != exp %d\n", bss->in3, 12);
++ CHECK(bss->in4 != 13, "in4", "got %lld != exp %lld\n", bss->in4, 13LL);
++ CHECK(rodata->in6 != 14, "in6", "got %d != exp %d\n", rodata->in6, 14);
++
++ /* now set new values and attach to get them into outX variables */
++ data->in1 = 1;
++ data->in2 = 2;
+ bss->in3 = 3;
+ bss->in4 = 4;
+ bss->in5.a = 5;
+@@ -44,14 +78,15 @@ void test_skeleton(void)
+ /* trigger tracepoint */
+ usleep(1);
+
+- CHECK(bss->out1 != 1, "res1", "got %d != exp %d\n", bss->out1, 1);
+- CHECK(bss->out2 != 2, "res2", "got %lld != exp %d\n", bss->out2, 2);
++ CHECK(data->out1 != 1, "res1", "got %d != exp %d\n", data->out1, 1);
++ CHECK(data->out2 != 2, "res2", "got %lld != exp %d\n", data->out2, 2);
+ CHECK(bss->out3 != 3, "res3", "got %d != exp %d\n", (int)bss->out3, 3);
+ CHECK(bss->out4 != 4, "res4", "got %lld != exp %d\n", bss->out4, 4);
+ CHECK(bss->handler_out5.a != 5, "res5", "got %d != exp %d\n",
+ bss->handler_out5.a, 5);
+ CHECK(bss->handler_out5.b != 6, "res6", "got %lld != exp %d\n",
+ bss->handler_out5.b, 6);
++ CHECK(bss->out6 != 14, "res7", "got %d != exp %d\n", bss->out6, 14);
+
+ CHECK(bss->bpf_syscall != kcfg->CONFIG_BPF_SYSCALL, "ext1",
+ "got %d != exp %d\n", bss->bpf_syscall, kcfg->CONFIG_BPF_SYSCALL);
+diff --git a/tools/testing/selftests/bpf/progs/test_skeleton.c b/tools/testing/selftests/bpf/progs/test_skeleton.c
+index de03a90f78ca..77ae86f44db5 100644
+--- a/tools/testing/selftests/bpf/progs/test_skeleton.c
++++ b/tools/testing/selftests/bpf/progs/test_skeleton.c
+@@ -10,16 +10,26 @@ struct s {
+ long long b;
+ } __attribute__((packed));
+
+-int in1 = 0;
+-long long in2 = 0;
++/* .data section */
++int in1 = -1;
++long long in2 = -1;
++
++/* .bss section */
+ char in3 = '\0';
+ long long in4 __attribute__((aligned(64))) = 0;
+ struct s in5 = {};
+
+-long long out2 = 0;
++/* .rodata section */
++const volatile int in6 = 0;
++
++/* .data section */
++int out1 = -1;
++long long out2 = -1;
++
++/* .bss section */
+ char out3 = 0;
+ long long out4 = 0;
+-int out1 = 0;
++int out6 = 0;
+
+ extern bool CONFIG_BPF_SYSCALL __kconfig;
+ extern int LINUX_KERNEL_VERSION __kconfig;
+@@ -36,6 +46,7 @@ int handler(const void *ctx)
+ out3 = in3;
+ out4 = in4;
+ out5 = in5;
++ out6 = in6;
+
+ bpf_syscall = CONFIG_BPF_SYSCALL;
+ kern_ver = LINUX_KERNEL_VERSION;
+diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
+index 42f4f49f2a48..2c85b9dd86f5 100644
+--- a/tools/testing/selftests/kvm/Makefile
++++ b/tools/testing/selftests/kvm/Makefile
+@@ -80,7 +80,11 @@ LIBKVM += $(LIBKVM_$(UNAME_M))
+ INSTALL_HDR_PATH = $(top_srcdir)/usr
+ LINUX_HDR_PATH = $(INSTALL_HDR_PATH)/include/
+ LINUX_TOOL_INCLUDE = $(top_srcdir)/tools/include
++ifeq ($(ARCH),x86_64)
++LINUX_TOOL_ARCH_INCLUDE = $(top_srcdir)/tools/arch/x86/include
++else
+ LINUX_TOOL_ARCH_INCLUDE = $(top_srcdir)/tools/arch/$(ARCH)/include
++endif
+ CFLAGS += -Wall -Wstrict-prototypes -Wuninitialized -O2 -g -std=gnu99 \
+ -fno-stack-protector -fno-PIE -I$(LINUX_TOOL_INCLUDE) \
+ -I$(LINUX_TOOL_ARCH_INCLUDE) -I$(LINUX_HDR_PATH) -Iinclude \
+diff --git a/tools/testing/selftests/net/timestamping.c b/tools/testing/selftests/net/timestamping.c
+index aca3491174a1..f4bb4fef0f39 100644
+--- a/tools/testing/selftests/net/timestamping.c
++++ b/tools/testing/selftests/net/timestamping.c
+@@ -313,10 +313,16 @@ int main(int argc, char **argv)
+ int val;
+ socklen_t len;
+ struct timeval next;
++ size_t if_len;
+
+ if (argc < 2)
+ usage(0);
+ interface = argv[1];
++ if_len = strlen(interface);
++ if (if_len >= IFNAMSIZ) {
++ printf("interface name exceeds IFNAMSIZ\n");
++ exit(1);
++ }
+
+ for (i = 2; i < argc; i++) {
+ if (!strcasecmp(argv[i], "SO_TIMESTAMP"))
+@@ -350,12 +356,12 @@ int main(int argc, char **argv)
+ bail("socket");
+
+ memset(&device, 0, sizeof(device));
+- strncpy(device.ifr_name, interface, sizeof(device.ifr_name));
++ memcpy(device.ifr_name, interface, if_len + 1);
+ if (ioctl(sock, SIOCGIFADDR, &device) < 0)
+ bail("getting interface IP address");
+
+ memset(&hwtstamp, 0, sizeof(hwtstamp));
+- strncpy(hwtstamp.ifr_name, interface, sizeof(hwtstamp.ifr_name));
++ memcpy(hwtstamp.ifr_name, interface, if_len + 1);
+ hwtstamp.ifr_data = (void *)&hwconfig;
+ memset(&hwconfig, 0, sizeof(hwconfig));
+ hwconfig.tx_type =
+diff --git a/tools/testing/selftests/ntb/ntb_test.sh b/tools/testing/selftests/ntb/ntb_test.sh
+index 9c60337317c6..020137b61407 100755
+--- a/tools/testing/selftests/ntb/ntb_test.sh
++++ b/tools/testing/selftests/ntb/ntb_test.sh
+@@ -241,7 +241,7 @@ function get_files_count()
+ split_remote $LOC
+
+ if [[ "$REMOTE" == "" ]]; then
+- echo $(ls -1 "$LOC"/${NAME}* 2>/dev/null | wc -l)
++ echo $(ls -1 "$VPATH"/${NAME}* 2>/dev/null | wc -l)
+ else
+ echo $(ssh "$REMOTE" "ls -1 \"$VPATH\"/${NAME}* | \
+ wc -l" 2> /dev/null)
+diff --git a/tools/testing/selftests/timens/clock_nanosleep.c b/tools/testing/selftests/timens/clock_nanosleep.c
+index 8e7b7c72ef65..72d41b955fb2 100644
+--- a/tools/testing/selftests/timens/clock_nanosleep.c
++++ b/tools/testing/selftests/timens/clock_nanosleep.c
+@@ -119,7 +119,7 @@ int main(int argc, char *argv[])
+
+ ksft_set_plan(4);
+
+- check_config_posix_timers();
++ check_supported_timers();
+
+ if (unshare_timens())
+ return 1;
+diff --git a/tools/testing/selftests/timens/timens.c b/tools/testing/selftests/timens/timens.c
+index 098be7c83be3..52b6a1185f52 100644
+--- a/tools/testing/selftests/timens/timens.c
++++ b/tools/testing/selftests/timens/timens.c
+@@ -155,7 +155,7 @@ int main(int argc, char *argv[])
+
+ nscheck();
+
+- check_config_posix_timers();
++ check_supported_timers();
+
+ ksft_set_plan(ARRAY_SIZE(clocks) * 2);
+
+diff --git a/tools/testing/selftests/timens/timens.h b/tools/testing/selftests/timens/timens.h
+index e09e7e39bc52..d4fc52d47146 100644
+--- a/tools/testing/selftests/timens/timens.h
++++ b/tools/testing/selftests/timens/timens.h
+@@ -14,15 +14,26 @@
+ #endif
+
+ static int config_posix_timers = true;
++static int config_alarm_timers = true;
+
+-static inline void check_config_posix_timers(void)
++static inline void check_supported_timers(void)
+ {
++ struct timespec ts;
++
+ if (timer_create(-1, 0, 0) == -1 && errno == ENOSYS)
+ config_posix_timers = false;
++
++ if (clock_gettime(CLOCK_BOOTTIME_ALARM, &ts) == -1 && errno == EINVAL)
++ config_alarm_timers = false;
+ }
+
+ static inline bool check_skip(int clockid)
+ {
++ if (!config_alarm_timers && clockid == CLOCK_BOOTTIME_ALARM) {
++ ksft_test_result_skip("CLOCK_BOOTTIME_ALARM isn't supported\n");
++ return true;
++ }
++
+ if (config_posix_timers)
+ return false;
+
+diff --git a/tools/testing/selftests/timens/timer.c b/tools/testing/selftests/timens/timer.c
+index 96dba11ebe44..5e7f0051bd7b 100644
+--- a/tools/testing/selftests/timens/timer.c
++++ b/tools/testing/selftests/timens/timer.c
+@@ -22,6 +22,9 @@ int run_test(int clockid, struct timespec now)
+ timer_t fd;
+ int i;
+
++ if (check_skip(clockid))
++ return 0;
++
+ for (i = 0; i < 2; i++) {
+ struct sigevent sevp = {.sigev_notify = SIGEV_NONE};
+ int flags = 0;
+@@ -74,6 +77,8 @@ int main(int argc, char *argv[])
+
+ nscheck();
+
++ check_supported_timers();
++
+ ksft_set_plan(3);
+
+ clock_gettime(CLOCK_MONOTONIC, &mtime_now);
+diff --git a/tools/testing/selftests/timens/timerfd.c b/tools/testing/selftests/timens/timerfd.c
+index eff1ec5ff215..9edd43d6b2c1 100644
+--- a/tools/testing/selftests/timens/timerfd.c
++++ b/tools/testing/selftests/timens/timerfd.c
+@@ -28,6 +28,9 @@ int run_test(int clockid, struct timespec now)
+ long long elapsed;
+ int fd, i;
+
++ if (check_skip(clockid))
++ return 0;
++
+ if (tclock_gettime(clockid, &now))
+ return pr_perror("clock_gettime(%d)", clockid);
+
+@@ -81,6 +84,8 @@ int main(int argc, char *argv[])
+
+ nscheck();
+
++ check_supported_timers();
++
+ ksft_set_plan(3);
+
+ clock_gettime(CLOCK_MONOTONIC, &mtime_now);
+diff --git a/tools/testing/selftests/x86/protection_keys.c b/tools/testing/selftests/x86/protection_keys.c
+index 480995bceefa..47191af46617 100644
+--- a/tools/testing/selftests/x86/protection_keys.c
++++ b/tools/testing/selftests/x86/protection_keys.c
+@@ -24,6 +24,7 @@
+ #define _GNU_SOURCE
+ #include <errno.h>
+ #include <linux/futex.h>
++#include <time.h>
+ #include <sys/time.h>
+ #include <sys/syscall.h>
+ #include <string.h>
+@@ -612,10 +613,10 @@ int alloc_random_pkey(void)
+ int nr_alloced = 0;
+ int random_index;
+ memset(alloced_pkeys, 0, sizeof(alloced_pkeys));
++ srand((unsigned int)time(NULL));
+
+ /* allocate every possible key and make a note of which ones we got */
+ max_nr_pkey_allocs = NR_PKEYS;
+- max_nr_pkey_allocs = 1;
+ for (i = 0; i < max_nr_pkey_allocs; i++) {
+ int new_pkey = alloc_pkey();
+ if (new_pkey < 0)
* [gentoo-commits] proj/linux-patches:5.8 commit in: /
@ 2020-09-03 12:52 Mike Pagano
0 siblings, 0 replies; 30+ messages in thread
From: Mike Pagano @ 2020-09-03 12:52 UTC (permalink / raw
To: gentoo-commits
commit: 29791b6c4491908c9c633785b7b2c4438caea4fb
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Sep 3 12:52:34 2020 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Sep 3 12:52:34 2020 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=29791b6c
Remove incorrect patch, add correct one
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
1005_linux-5.7.6.patch | 20870 -----------------------------------------------
1005_linux-5.8.6.patch | 11789 ++++++++++++++++++++++++++
2 files changed, 11789 insertions(+), 20870 deletions(-)
diff --git a/1005_linux-5.7.6.patch b/1005_linux-5.7.6.patch
deleted file mode 100644
index 9939e08..0000000
--- a/1005_linux-5.7.6.patch
+++ /dev/null
@@ -1,20870 +0,0 @@
-diff --git a/Makefile b/Makefile
-index c48d489f82bc..f928cd1dfdc1 100644
---- a/Makefile
-+++ b/Makefile
-@@ -1,7 +1,7 @@
- # SPDX-License-Identifier: GPL-2.0
- VERSION = 5
- PATCHLEVEL = 7
--SUBLEVEL = 5
-+SUBLEVEL = 6
- EXTRAVERSION =
- NAME = Kleptomaniac Octopus
-
-diff --git a/arch/arm/boot/dts/aspeed-bmc-facebook-tiogapass.dts b/arch/arm/boot/dts/aspeed-bmc-facebook-tiogapass.dts
-index 5d7cbd9164d4..669980c690f9 100644
---- a/arch/arm/boot/dts/aspeed-bmc-facebook-tiogapass.dts
-+++ b/arch/arm/boot/dts/aspeed-bmc-facebook-tiogapass.dts
-@@ -112,13 +112,13 @@
- &kcs2 {
- // BMC KCS channel 2
- status = "okay";
-- kcs_addr = <0xca8>;
-+ aspeed,lpc-io-reg = <0xca8>;
- };
-
- &kcs3 {
- // BMC KCS channel 3
- status = "okay";
-- kcs_addr = <0xca2>;
-+ aspeed,lpc-io-reg = <0xca2>;
- };
-
- &mac0 {
-diff --git a/arch/arm/boot/dts/aspeed-g5.dtsi b/arch/arm/boot/dts/aspeed-g5.dtsi
-index f12ec04d3cbc..bc92d3db7b78 100644
---- a/arch/arm/boot/dts/aspeed-g5.dtsi
-+++ b/arch/arm/boot/dts/aspeed-g5.dtsi
-@@ -426,22 +426,22 @@
- #size-cells = <1>;
- ranges = <0x0 0x0 0x80>;
-
-- kcs1: kcs1@0 {
-- compatible = "aspeed,ast2500-kcs-bmc";
-+ kcs1: kcs@24 {
-+ compatible = "aspeed,ast2500-kcs-bmc-v2";
-+ reg = <0x24 0x1>, <0x30 0x1>, <0x3c 0x1>;
- interrupts = <8>;
-- kcs_chan = <1>;
- status = "disabled";
- };
-- kcs2: kcs2@0 {
-- compatible = "aspeed,ast2500-kcs-bmc";
-+ kcs2: kcs@28 {
-+ compatible = "aspeed,ast2500-kcs-bmc-v2";
-+ reg = <0x28 0x1>, <0x34 0x1>, <0x40 0x1>;
- interrupts = <8>;
-- kcs_chan = <2>;
- status = "disabled";
- };
-- kcs3: kcs3@0 {
-- compatible = "aspeed,ast2500-kcs-bmc";
-+ kcs3: kcs@2c {
-+ compatible = "aspeed,ast2500-kcs-bmc-v2";
-+ reg = <0x2c 0x1>, <0x38 0x1>, <0x44 0x1>;
- interrupts = <8>;
-- kcs_chan = <3>;
- status = "disabled";
- };
- };
-@@ -455,10 +455,10 @@
- #size-cells = <1>;
- ranges = <0x0 0x80 0x1e0>;
-
-- kcs4: kcs4@0 {
-- compatible = "aspeed,ast2500-kcs-bmc";
-+ kcs4: kcs@94 {
-+ compatible = "aspeed,ast2500-kcs-bmc-v2";
-+ reg = <0x94 0x1>, <0x98 0x1>, <0x9c 0x1>;
- interrupts = <8>;
-- kcs_chan = <4>;
- status = "disabled";
- };
-
-diff --git a/arch/arm/boot/dts/aspeed-g6.dtsi b/arch/arm/boot/dts/aspeed-g6.dtsi
-index 0a29b3b57a9d..a2d2ac720a51 100644
---- a/arch/arm/boot/dts/aspeed-g6.dtsi
-+++ b/arch/arm/boot/dts/aspeed-g6.dtsi
-@@ -65,6 +65,7 @@
- <GIC_PPI 10 (GIC_CPU_MASK_SIMPLE(2) | IRQ_TYPE_LEVEL_LOW)>;
- clocks = <&syscon ASPEED_CLK_HPLL>;
- arm,cpu-registers-not-fw-configured;
-+ always-on;
- };
-
- ahb {
-@@ -368,6 +369,7 @@
- <&gic GIC_SPI 23 IRQ_TYPE_LEVEL_HIGH>;
- clocks = <&syscon ASPEED_CLK_APB1>;
- clock-names = "PCLK";
-+ status = "disabled";
- };
-
- uart1: serial@1e783000 {
-@@ -433,22 +435,23 @@
- #size-cells = <1>;
- ranges = <0x0 0x0 0x80>;
-
-- kcs1: kcs1@0 {
-- compatible = "aspeed,ast2600-kcs-bmc";
-+ kcs1: kcs@24 {
-+ compatible = "aspeed,ast2500-kcs-bmc-v2";
-+ reg = <0x24 0x1>, <0x30 0x1>, <0x3c 0x1>;
- interrupts = <GIC_SPI 138 IRQ_TYPE_LEVEL_HIGH>;
- kcs_chan = <1>;
- status = "disabled";
- };
-- kcs2: kcs2@0 {
-- compatible = "aspeed,ast2600-kcs-bmc";
-+ kcs2: kcs@28 {
-+ compatible = "aspeed,ast2500-kcs-bmc-v2";
-+ reg = <0x28 0x1>, <0x34 0x1>, <0x40 0x1>;
- interrupts = <GIC_SPI 139 IRQ_TYPE_LEVEL_HIGH>;
-- kcs_chan = <2>;
- status = "disabled";
- };
-- kcs3: kcs3@0 {
-- compatible = "aspeed,ast2600-kcs-bmc";
-+ kcs3: kcs@2c {
-+ compatible = "aspeed,ast2500-kcs-bmc-v2";
-+ reg = <0x2c 0x1>, <0x38 0x1>, <0x44 0x1>;
- interrupts = <GIC_SPI 140 IRQ_TYPE_LEVEL_HIGH>;
-- kcs_chan = <3>;
- status = "disabled";
- };
- };
-@@ -462,10 +465,10 @@
- #size-cells = <1>;
- ranges = <0x0 0x80 0x1e0>;
-
-- kcs4: kcs4@0 {
-- compatible = "aspeed,ast2600-kcs-bmc";
-+ kcs4: kcs@94 {
-+ compatible = "aspeed,ast2500-kcs-bmc-v2";
-+ reg = <0x94 0x1>, <0x98 0x1>, <0x9c 0x1>;
- interrupts = <GIC_SPI 141 IRQ_TYPE_LEVEL_HIGH>;
-- kcs_chan = <4>;
- status = "disabled";
- };
-
-diff --git a/arch/arm/boot/dts/bcm2835-common.dtsi b/arch/arm/boot/dts/bcm2835-common.dtsi
-index 2b1d9d4c0cde..4119271c979d 100644
---- a/arch/arm/boot/dts/bcm2835-common.dtsi
-+++ b/arch/arm/boot/dts/bcm2835-common.dtsi
-@@ -130,7 +130,6 @@
- compatible = "brcm,bcm2835-v3d";
- reg = <0x7ec00000 0x1000>;
- interrupts = <1 10>;
-- power-domains = <&pm BCM2835_POWER_DOMAIN_GRAFX_V3D>;
- };
-
- vc4: gpu {
-diff --git a/arch/arm/boot/dts/bcm2835-rpi-common.dtsi b/arch/arm/boot/dts/bcm2835-rpi-common.dtsi
-new file mode 100644
-index 000000000000..8a55b6cded59
---- /dev/null
-+++ b/arch/arm/boot/dts/bcm2835-rpi-common.dtsi
-@@ -0,0 +1,12 @@
-+// SPDX-License-Identifier: GPL-2.0
-+/*
-+ * This include file covers the common peripherals and configuration between
-+ * bcm2835, bcm2836 and bcm2837 implementations that interact with RPi's
-+ * firmware interface.
-+ */
-+
-+#include <dt-bindings/power/raspberrypi-power.h>
-+
-+&v3d {
-+ power-domains = <&power RPI_POWER_DOMAIN_V3D>;
-+};
-diff --git a/arch/arm/boot/dts/bcm2835.dtsi b/arch/arm/boot/dts/bcm2835.dtsi
-index 53bf4579cc22..0549686134ea 100644
---- a/arch/arm/boot/dts/bcm2835.dtsi
-+++ b/arch/arm/boot/dts/bcm2835.dtsi
-@@ -1,6 +1,7 @@
- // SPDX-License-Identifier: GPL-2.0
- #include "bcm283x.dtsi"
- #include "bcm2835-common.dtsi"
-+#include "bcm2835-rpi-common.dtsi"
-
- / {
- compatible = "brcm,bcm2835";
-diff --git a/arch/arm/boot/dts/bcm2836.dtsi b/arch/arm/boot/dts/bcm2836.dtsi
-index 82d6c4662ae4..b390006aef79 100644
---- a/arch/arm/boot/dts/bcm2836.dtsi
-+++ b/arch/arm/boot/dts/bcm2836.dtsi
-@@ -1,6 +1,7 @@
- // SPDX-License-Identifier: GPL-2.0
- #include "bcm283x.dtsi"
- #include "bcm2835-common.dtsi"
-+#include "bcm2835-rpi-common.dtsi"
-
- / {
- compatible = "brcm,bcm2836";
-diff --git a/arch/arm/boot/dts/bcm2837.dtsi b/arch/arm/boot/dts/bcm2837.dtsi
-index 9e95fee78e19..0199ec98cd61 100644
---- a/arch/arm/boot/dts/bcm2837.dtsi
-+++ b/arch/arm/boot/dts/bcm2837.dtsi
-@@ -1,5 +1,6 @@
- #include "bcm283x.dtsi"
- #include "bcm2835-common.dtsi"
-+#include "bcm2835-rpi-common.dtsi"
-
- / {
- compatible = "brcm,bcm2837";
-diff --git a/arch/arm/boot/dts/r8a7743.dtsi b/arch/arm/boot/dts/r8a7743.dtsi
-index e8b340bb99bc..fff123753b85 100644
---- a/arch/arm/boot/dts/r8a7743.dtsi
-+++ b/arch/arm/boot/dts/r8a7743.dtsi
-@@ -338,7 +338,7 @@
- #thermal-sensor-cells = <0>;
- };
-
-- ipmmu_sy0: mmu@e6280000 {
-+ ipmmu_sy0: iommu@e6280000 {
- compatible = "renesas,ipmmu-r8a7743",
- "renesas,ipmmu-vmsa";
- reg = <0 0xe6280000 0 0x1000>;
-@@ -348,7 +348,7 @@
- status = "disabled";
- };
-
-- ipmmu_sy1: mmu@e6290000 {
-+ ipmmu_sy1: iommu@e6290000 {
- compatible = "renesas,ipmmu-r8a7743",
- "renesas,ipmmu-vmsa";
- reg = <0 0xe6290000 0 0x1000>;
-@@ -357,7 +357,7 @@
- status = "disabled";
- };
-
-- ipmmu_ds: mmu@e6740000 {
-+ ipmmu_ds: iommu@e6740000 {
- compatible = "renesas,ipmmu-r8a7743",
- "renesas,ipmmu-vmsa";
- reg = <0 0xe6740000 0 0x1000>;
-@@ -367,7 +367,7 @@
- status = "disabled";
- };
-
-- ipmmu_mp: mmu@ec680000 {
-+ ipmmu_mp: iommu@ec680000 {
- compatible = "renesas,ipmmu-r8a7743",
- "renesas,ipmmu-vmsa";
- reg = <0 0xec680000 0 0x1000>;
-@@ -376,7 +376,7 @@
- status = "disabled";
- };
-
-- ipmmu_mx: mmu@fe951000 {
-+ ipmmu_mx: iommu@fe951000 {
- compatible = "renesas,ipmmu-r8a7743",
- "renesas,ipmmu-vmsa";
- reg = <0 0xfe951000 0 0x1000>;
-@@ -386,7 +386,7 @@
- status = "disabled";
- };
-
-- ipmmu_gp: mmu@e62a0000 {
-+ ipmmu_gp: iommu@e62a0000 {
- compatible = "renesas,ipmmu-r8a7743",
- "renesas,ipmmu-vmsa";
- reg = <0 0xe62a0000 0 0x1000>;
-diff --git a/arch/arm/boot/dts/r8a7744.dtsi b/arch/arm/boot/dts/r8a7744.dtsi
-index def840b8b2d3..5050ac19041d 100644
---- a/arch/arm/boot/dts/r8a7744.dtsi
-+++ b/arch/arm/boot/dts/r8a7744.dtsi
-@@ -338,7 +338,7 @@
- #thermal-sensor-cells = <0>;
- };
-
-- ipmmu_sy0: mmu@e6280000 {
-+ ipmmu_sy0: iommu@e6280000 {
- compatible = "renesas,ipmmu-r8a7744",
- "renesas,ipmmu-vmsa";
- reg = <0 0xe6280000 0 0x1000>;
-@@ -348,7 +348,7 @@
- status = "disabled";
- };
-
-- ipmmu_sy1: mmu@e6290000 {
-+ ipmmu_sy1: iommu@e6290000 {
- compatible = "renesas,ipmmu-r8a7744",
- "renesas,ipmmu-vmsa";
- reg = <0 0xe6290000 0 0x1000>;
-@@ -357,7 +357,7 @@
- status = "disabled";
- };
-
-- ipmmu_ds: mmu@e6740000 {
-+ ipmmu_ds: iommu@e6740000 {
- compatible = "renesas,ipmmu-r8a7744",
- "renesas,ipmmu-vmsa";
- reg = <0 0xe6740000 0 0x1000>;
-@@ -367,7 +367,7 @@
- status = "disabled";
- };
-
-- ipmmu_mp: mmu@ec680000 {
-+ ipmmu_mp: iommu@ec680000 {
- compatible = "renesas,ipmmu-r8a7744",
- "renesas,ipmmu-vmsa";
- reg = <0 0xec680000 0 0x1000>;
-@@ -376,7 +376,7 @@
- status = "disabled";
- };
-
-- ipmmu_mx: mmu@fe951000 {
-+ ipmmu_mx: iommu@fe951000 {
- compatible = "renesas,ipmmu-r8a7744",
- "renesas,ipmmu-vmsa";
- reg = <0 0xfe951000 0 0x1000>;
-@@ -386,7 +386,7 @@
- status = "disabled";
- };
-
-- ipmmu_gp: mmu@e62a0000 {
-+ ipmmu_gp: iommu@e62a0000 {
- compatible = "renesas,ipmmu-r8a7744",
- "renesas,ipmmu-vmsa";
- reg = <0 0xe62a0000 0 0x1000>;
-diff --git a/arch/arm/boot/dts/r8a7745.dtsi b/arch/arm/boot/dts/r8a7745.dtsi
-index 7ab58d8bb740..b0d1fc24e97e 100644
---- a/arch/arm/boot/dts/r8a7745.dtsi
-+++ b/arch/arm/boot/dts/r8a7745.dtsi
-@@ -302,7 +302,7 @@
- resets = <&cpg 407>;
- };
-
-- ipmmu_sy0: mmu@e6280000 {
-+ ipmmu_sy0: iommu@e6280000 {
- compatible = "renesas,ipmmu-r8a7745",
- "renesas,ipmmu-vmsa";
- reg = <0 0xe6280000 0 0x1000>;
-@@ -312,7 +312,7 @@
- status = "disabled";
- };
-
-- ipmmu_sy1: mmu@e6290000 {
-+ ipmmu_sy1: iommu@e6290000 {
- compatible = "renesas,ipmmu-r8a7745",
- "renesas,ipmmu-vmsa";
- reg = <0 0xe6290000 0 0x1000>;
-@@ -321,7 +321,7 @@
- status = "disabled";
- };
-
-- ipmmu_ds: mmu@e6740000 {
-+ ipmmu_ds: iommu@e6740000 {
- compatible = "renesas,ipmmu-r8a7745",
- "renesas,ipmmu-vmsa";
- reg = <0 0xe6740000 0 0x1000>;
-@@ -331,7 +331,7 @@
- status = "disabled";
- };
-
-- ipmmu_mp: mmu@ec680000 {
-+ ipmmu_mp: iommu@ec680000 {
- compatible = "renesas,ipmmu-r8a7745",
- "renesas,ipmmu-vmsa";
- reg = <0 0xec680000 0 0x1000>;
-@@ -340,7 +340,7 @@
- status = "disabled";
- };
-
-- ipmmu_mx: mmu@fe951000 {
-+ ipmmu_mx: iommu@fe951000 {
- compatible = "renesas,ipmmu-r8a7745",
- "renesas,ipmmu-vmsa";
- reg = <0 0xfe951000 0 0x1000>;
-@@ -350,7 +350,7 @@
- status = "disabled";
- };
-
-- ipmmu_gp: mmu@e62a0000 {
-+ ipmmu_gp: iommu@e62a0000 {
- compatible = "renesas,ipmmu-r8a7745",
- "renesas,ipmmu-vmsa";
- reg = <0 0xe62a0000 0 0x1000>;
-diff --git a/arch/arm/boot/dts/r8a7790.dtsi b/arch/arm/boot/dts/r8a7790.dtsi
-index e5ef9fd4284a..166d5566229d 100644
---- a/arch/arm/boot/dts/r8a7790.dtsi
-+++ b/arch/arm/boot/dts/r8a7790.dtsi
-@@ -427,7 +427,7 @@
- #thermal-sensor-cells = <0>;
- };
-
-- ipmmu_sy0: mmu@e6280000 {
-+ ipmmu_sy0: iommu@e6280000 {
- compatible = "renesas,ipmmu-r8a7790",
- "renesas,ipmmu-vmsa";
- reg = <0 0xe6280000 0 0x1000>;
-@@ -437,7 +437,7 @@
- status = "disabled";
- };
-
-- ipmmu_sy1: mmu@e6290000 {
-+ ipmmu_sy1: iommu@e6290000 {
- compatible = "renesas,ipmmu-r8a7790",
- "renesas,ipmmu-vmsa";
- reg = <0 0xe6290000 0 0x1000>;
-@@ -446,7 +446,7 @@
- status = "disabled";
- };
-
-- ipmmu_ds: mmu@e6740000 {
-+ ipmmu_ds: iommu@e6740000 {
- compatible = "renesas,ipmmu-r8a7790",
- "renesas,ipmmu-vmsa";
- reg = <0 0xe6740000 0 0x1000>;
-@@ -456,7 +456,7 @@
- status = "disabled";
- };
-
-- ipmmu_mp: mmu@ec680000 {
-+ ipmmu_mp: iommu@ec680000 {
- compatible = "renesas,ipmmu-r8a7790",
- "renesas,ipmmu-vmsa";
- reg = <0 0xec680000 0 0x1000>;
-@@ -465,7 +465,7 @@
- status = "disabled";
- };
-
-- ipmmu_mx: mmu@fe951000 {
-+ ipmmu_mx: iommu@fe951000 {
- compatible = "renesas,ipmmu-r8a7790",
- "renesas,ipmmu-vmsa";
- reg = <0 0xfe951000 0 0x1000>;
-@@ -475,7 +475,7 @@
- status = "disabled";
- };
-
-- ipmmu_rt: mmu@ffc80000 {
-+ ipmmu_rt: iommu@ffc80000 {
- compatible = "renesas,ipmmu-r8a7790",
- "renesas,ipmmu-vmsa";
- reg = <0 0xffc80000 0 0x1000>;
-diff --git a/arch/arm/boot/dts/r8a7791.dtsi b/arch/arm/boot/dts/r8a7791.dtsi
-index 6e5bd86731cd..09e47cc17765 100644
---- a/arch/arm/boot/dts/r8a7791.dtsi
-+++ b/arch/arm/boot/dts/r8a7791.dtsi
-@@ -350,7 +350,7 @@
- #thermal-sensor-cells = <0>;
- };
-
-- ipmmu_sy0: mmu@e6280000 {
-+ ipmmu_sy0: iommu@e6280000 {
- compatible = "renesas,ipmmu-r8a7791",
- "renesas,ipmmu-vmsa";
- reg = <0 0xe6280000 0 0x1000>;
-@@ -360,7 +360,7 @@
- status = "disabled";
- };
-
-- ipmmu_sy1: mmu@e6290000 {
-+ ipmmu_sy1: iommu@e6290000 {
- compatible = "renesas,ipmmu-r8a7791",
- "renesas,ipmmu-vmsa";
- reg = <0 0xe6290000 0 0x1000>;
-@@ -369,7 +369,7 @@
- status = "disabled";
- };
-
-- ipmmu_ds: mmu@e6740000 {
-+ ipmmu_ds: iommu@e6740000 {
- compatible = "renesas,ipmmu-r8a7791",
- "renesas,ipmmu-vmsa";
- reg = <0 0xe6740000 0 0x1000>;
-@@ -379,7 +379,7 @@
- status = "disabled";
- };
-
-- ipmmu_mp: mmu@ec680000 {
-+ ipmmu_mp: iommu@ec680000 {
- compatible = "renesas,ipmmu-r8a7791",
- "renesas,ipmmu-vmsa";
- reg = <0 0xec680000 0 0x1000>;
-@@ -388,7 +388,7 @@
- status = "disabled";
- };
-
-- ipmmu_mx: mmu@fe951000 {
-+ ipmmu_mx: iommu@fe951000 {
- compatible = "renesas,ipmmu-r8a7791",
- "renesas,ipmmu-vmsa";
- reg = <0 0xfe951000 0 0x1000>;
-@@ -398,7 +398,7 @@
- status = "disabled";
- };
-
-- ipmmu_rt: mmu@ffc80000 {
-+ ipmmu_rt: iommu@ffc80000 {
- compatible = "renesas,ipmmu-r8a7791",
- "renesas,ipmmu-vmsa";
- reg = <0 0xffc80000 0 0x1000>;
-@@ -407,7 +407,7 @@
- status = "disabled";
- };
-
-- ipmmu_gp: mmu@e62a0000 {
-+ ipmmu_gp: iommu@e62a0000 {
- compatible = "renesas,ipmmu-r8a7791",
- "renesas,ipmmu-vmsa";
- reg = <0 0xe62a0000 0 0x1000>;
-diff --git a/arch/arm/boot/dts/r8a7793.dtsi b/arch/arm/boot/dts/r8a7793.dtsi
-index dadbda16161b..1b62a7e06b42 100644
---- a/arch/arm/boot/dts/r8a7793.dtsi
-+++ b/arch/arm/boot/dts/r8a7793.dtsi
-@@ -336,7 +336,7 @@
- #thermal-sensor-cells = <0>;
- };
-
-- ipmmu_sy0: mmu@e6280000 {
-+ ipmmu_sy0: iommu@e6280000 {
- compatible = "renesas,ipmmu-r8a7793",
- "renesas,ipmmu-vmsa";
- reg = <0 0xe6280000 0 0x1000>;
-@@ -346,7 +346,7 @@
- status = "disabled";
- };
-
-- ipmmu_sy1: mmu@e6290000 {
-+ ipmmu_sy1: iommu@e6290000 {
- compatible = "renesas,ipmmu-r8a7793",
- "renesas,ipmmu-vmsa";
- reg = <0 0xe6290000 0 0x1000>;
-@@ -355,7 +355,7 @@
- status = "disabled";
- };
-
-- ipmmu_ds: mmu@e6740000 {
-+ ipmmu_ds: iommu@e6740000 {
- compatible = "renesas,ipmmu-r8a7793",
- "renesas,ipmmu-vmsa";
- reg = <0 0xe6740000 0 0x1000>;
-@@ -365,7 +365,7 @@
- status = "disabled";
- };
-
-- ipmmu_mp: mmu@ec680000 {
-+ ipmmu_mp: iommu@ec680000 {
- compatible = "renesas,ipmmu-r8a7793",
- "renesas,ipmmu-vmsa";
- reg = <0 0xec680000 0 0x1000>;
-@@ -374,7 +374,7 @@
- status = "disabled";
- };
-
-- ipmmu_mx: mmu@fe951000 {
-+ ipmmu_mx: iommu@fe951000 {
- compatible = "renesas,ipmmu-r8a7793",
- "renesas,ipmmu-vmsa";
- reg = <0 0xfe951000 0 0x1000>;
-@@ -384,7 +384,7 @@
- status = "disabled";
- };
-
-- ipmmu_rt: mmu@ffc80000 {
-+ ipmmu_rt: iommu@ffc80000 {
- compatible = "renesas,ipmmu-r8a7793",
- "renesas,ipmmu-vmsa";
- reg = <0 0xffc80000 0 0x1000>;
-@@ -393,7 +393,7 @@
- status = "disabled";
- };
-
-- ipmmu_gp: mmu@e62a0000 {
-+ ipmmu_gp: iommu@e62a0000 {
- compatible = "renesas,ipmmu-r8a7793",
- "renesas,ipmmu-vmsa";
- reg = <0 0xe62a0000 0 0x1000>;
-diff --git a/arch/arm/boot/dts/r8a7794.dtsi b/arch/arm/boot/dts/r8a7794.dtsi
-index 2c9e7a1ebfec..8d7f8798628a 100644
---- a/arch/arm/boot/dts/r8a7794.dtsi
-+++ b/arch/arm/boot/dts/r8a7794.dtsi
-@@ -290,7 +290,7 @@
- resets = <&cpg 407>;
- };
-
-- ipmmu_sy0: mmu@e6280000 {
-+ ipmmu_sy0: iommu@e6280000 {
- compatible = "renesas,ipmmu-r8a7794",
- "renesas,ipmmu-vmsa";
- reg = <0 0xe6280000 0 0x1000>;
-@@ -300,7 +300,7 @@
- status = "disabled";
- };
-
-- ipmmu_sy1: mmu@e6290000 {
-+ ipmmu_sy1: iommu@e6290000 {
- compatible = "renesas,ipmmu-r8a7794",
- "renesas,ipmmu-vmsa";
- reg = <0 0xe6290000 0 0x1000>;
-@@ -309,7 +309,7 @@
- status = "disabled";
- };
-
-- ipmmu_ds: mmu@e6740000 {
-+ ipmmu_ds: iommu@e6740000 {
- compatible = "renesas,ipmmu-r8a7794",
- "renesas,ipmmu-vmsa";
- reg = <0 0xe6740000 0 0x1000>;
-@@ -319,7 +319,7 @@
- status = "disabled";
- };
-
-- ipmmu_mp: mmu@ec680000 {
-+ ipmmu_mp: iommu@ec680000 {
- compatible = "renesas,ipmmu-r8a7794",
- "renesas,ipmmu-vmsa";
- reg = <0 0xec680000 0 0x1000>;
-@@ -328,7 +328,7 @@
- status = "disabled";
- };
-
-- ipmmu_mx: mmu@fe951000 {
-+ ipmmu_mx: iommu@fe951000 {
- compatible = "renesas,ipmmu-r8a7794",
- "renesas,ipmmu-vmsa";
- reg = <0 0xfe951000 0 0x1000>;
-@@ -338,7 +338,7 @@
- status = "disabled";
- };
-
-- ipmmu_gp: mmu@e62a0000 {
-+ ipmmu_gp: iommu@e62a0000 {
- compatible = "renesas,ipmmu-r8a7794",
- "renesas,ipmmu-vmsa";
- reg = <0 0xe62a0000 0 0x1000>;
-diff --git a/arch/arm/boot/dts/stm32mp157a-avenger96.dts b/arch/arm/boot/dts/stm32mp157a-avenger96.dts
-index 425175f7d83c..081037b510bc 100644
---- a/arch/arm/boot/dts/stm32mp157a-avenger96.dts
-+++ b/arch/arm/boot/dts/stm32mp157a-avenger96.dts
-@@ -92,6 +92,9 @@
- #address-cells = <1>;
- #size-cells = <0>;
- compatible = "snps,dwmac-mdio";
-+ reset-gpios = <&gpioz 2 GPIO_ACTIVE_LOW>;
-+ reset-delay-us = <1000>;
-+
- phy0: ethernet-phy@7 {
- reg = <7>;
- };
-diff --git a/arch/arm/boot/dts/sun8i-h2-plus-bananapi-m2-zero.dts b/arch/arm/boot/dts/sun8i-h2-plus-bananapi-m2-zero.dts
-index d277d043031b..4c6704e4c57e 100644
---- a/arch/arm/boot/dts/sun8i-h2-plus-bananapi-m2-zero.dts
-+++ b/arch/arm/boot/dts/sun8i-h2-plus-bananapi-m2-zero.dts
-@@ -31,7 +31,7 @@
-
- pwr_led {
- label = "bananapi-m2-zero:red:pwr";
-- gpios = <&r_pio 0 10 GPIO_ACTIVE_HIGH>; /* PL10 */
-+ gpios = <&r_pio 0 10 GPIO_ACTIVE_LOW>; /* PL10 */
- default-state = "on";
- };
- };
-diff --git a/arch/arm/boot/dts/vexpress-v2m-rs1.dtsi b/arch/arm/boot/dts/vexpress-v2m-rs1.dtsi
-index 5c183483ec3b..8010cdcdb37a 100644
---- a/arch/arm/boot/dts/vexpress-v2m-rs1.dtsi
-+++ b/arch/arm/boot/dts/vexpress-v2m-rs1.dtsi
-@@ -31,7 +31,7 @@
- #interrupt-cells = <1>;
- ranges;
-
-- nor_flash: flash@0,00000000 {
-+ nor_flash: flash@0 {
- compatible = "arm,vexpress-flash", "cfi-flash";
- reg = <0 0x00000000 0x04000000>,
- <4 0x00000000 0x04000000>;
-@@ -41,13 +41,13 @@
- };
- };
-
-- psram@1,00000000 {
-+ psram@100000000 {
- compatible = "arm,vexpress-psram", "mtd-ram";
- reg = <1 0x00000000 0x02000000>;
- bank-width = <4>;
- };
-
-- ethernet@2,02000000 {
-+ ethernet@202000000 {
- compatible = "smsc,lan9118", "smsc,lan9115";
- reg = <2 0x02000000 0x10000>;
- interrupts = <15>;
-@@ -59,14 +59,14 @@
- vddvario-supply = <&v2m_fixed_3v3>;
- };
-
-- usb@2,03000000 {
-+ usb@203000000 {
- compatible = "nxp,usb-isp1761";
- reg = <2 0x03000000 0x20000>;
- interrupts = <16>;
- port1-otg;
- };
-
-- iofpga@3,00000000 {
-+ iofpga@300000000 {
- compatible = "simple-bus";
- #address-cells = <1>;
- #size-cells = <1>;
-diff --git a/arch/arm/mach-davinci/board-dm644x-evm.c b/arch/arm/mach-davinci/board-dm644x-evm.c
-index 3461d12bbfc0..a5d3708fedf6 100644
---- a/arch/arm/mach-davinci/board-dm644x-evm.c
-+++ b/arch/arm/mach-davinci/board-dm644x-evm.c
-@@ -655,19 +655,6 @@ static struct i2c_board_info __initdata i2c_info[] = {
- },
- };
-
--/* Fixed regulator support */
--static struct regulator_consumer_supply fixed_supplies_3_3v[] = {
-- /* Baseboard 3.3V: 5V -> TPS54310PWP -> 3.3V */
-- REGULATOR_SUPPLY("AVDD", "1-001b"),
-- REGULATOR_SUPPLY("DRVDD", "1-001b"),
--};
--
--static struct regulator_consumer_supply fixed_supplies_1_8v[] = {
-- /* Baseboard 1.8V: 5V -> TPS54310PWP -> 1.8V */
-- REGULATOR_SUPPLY("IOVDD", "1-001b"),
-- REGULATOR_SUPPLY("DVDD", "1-001b"),
--};
--
- #define DM644X_I2C_SDA_PIN GPIO_TO_PIN(2, 12)
- #define DM644X_I2C_SCL_PIN GPIO_TO_PIN(2, 11)
-
-@@ -700,6 +687,19 @@ static void __init evm_init_i2c(void)
- }
- #endif
-
-+/* Fixed regulator support */
-+static struct regulator_consumer_supply fixed_supplies_3_3v[] = {
-+ /* Baseboard 3.3V: 5V -> TPS54310PWP -> 3.3V */
-+ REGULATOR_SUPPLY("AVDD", "1-001b"),
-+ REGULATOR_SUPPLY("DRVDD", "1-001b"),
-+};
-+
-+static struct regulator_consumer_supply fixed_supplies_1_8v[] = {
-+ /* Baseboard 1.8V: 5V -> TPS54310PWP -> 1.8V */
-+ REGULATOR_SUPPLY("IOVDD", "1-001b"),
-+ REGULATOR_SUPPLY("DVDD", "1-001b"),
-+};
-+
- #define VENC_STD_ALL (V4L2_STD_NTSC | V4L2_STD_PAL)
-
- /* venc standard timings */
-diff --git a/arch/arm/mach-integrator/Kconfig b/arch/arm/mach-integrator/Kconfig
-index 982eabc36163..2406cab73835 100644
---- a/arch/arm/mach-integrator/Kconfig
-+++ b/arch/arm/mach-integrator/Kconfig
-@@ -4,6 +4,8 @@ menuconfig ARCH_INTEGRATOR
- depends on ARCH_MULTI_V4T || ARCH_MULTI_V5 || ARCH_MULTI_V6
- select ARM_AMBA
- select COMMON_CLK_VERSATILE
-+ select CMA
-+ select DMA_CMA
- select HAVE_TCM
- select ICST
- select MFD_SYSCON
-@@ -35,14 +37,13 @@ config INTEGRATOR_IMPD1
- select ARM_VIC
- select GPIO_PL061
- select GPIOLIB
-+ select REGULATOR
-+ select REGULATOR_FIXED_VOLTAGE
- help
- The IM-PD1 is an add-on logic module for the Integrator which
- allows ARM(R) Ltd PrimeCells to be developed and evaluated.
- The IM-PD1 can be found on the Integrator/PP2 platform.
-
-- To compile this driver as a module, choose M here: the
-- module will be called impd1.
--
- config INTEGRATOR_CM7TDMI
- bool "Integrator/CM7TDMI core module"
- depends on ARCH_INTEGRATOR_AP
-diff --git a/arch/arm64/Kconfig.platforms b/arch/arm64/Kconfig.platforms
-index 55d70cfe0f9e..3c7e310fd8bf 100644
---- a/arch/arm64/Kconfig.platforms
-+++ b/arch/arm64/Kconfig.platforms
-@@ -248,7 +248,7 @@ config ARCH_TEGRA
- This enables support for the NVIDIA Tegra SoC family.
-
- config ARCH_SPRD
-- tristate "Spreadtrum SoC platform"
-+ bool "Spreadtrum SoC platform"
- help
- Support for Spreadtrum ARM based SoCs
-
-diff --git a/arch/arm64/boot/dts/amlogic/meson-axg.dtsi b/arch/arm64/boot/dts/amlogic/meson-axg.dtsi
-index aace3d32a3df..8e6281c685fa 100644
---- a/arch/arm64/boot/dts/amlogic/meson-axg.dtsi
-+++ b/arch/arm64/boot/dts/amlogic/meson-axg.dtsi
-@@ -1735,18 +1735,18 @@
- };
-
- sram: sram@fffc0000 {
-- compatible = "amlogic,meson-axg-sram", "mmio-sram";
-+ compatible = "mmio-sram";
- reg = <0x0 0xfffc0000 0x0 0x20000>;
- #address-cells = <1>;
- #size-cells = <1>;
- ranges = <0 0x0 0xfffc0000 0x20000>;
-
-- cpu_scp_lpri: scp-shmem@13000 {
-+ cpu_scp_lpri: scp-sram@13000 {
- compatible = "amlogic,meson-axg-scp-shmem";
- reg = <0x13000 0x400>;
- };
-
-- cpu_scp_hpri: scp-shmem@13400 {
-+ cpu_scp_hpri: scp-sram@13400 {
- compatible = "amlogic,meson-axg-scp-shmem";
- reg = <0x13400 0x400>;
- };
-diff --git a/arch/arm64/boot/dts/amlogic/meson-g12b-ugoos-am6.dts b/arch/arm64/boot/dts/amlogic/meson-g12b-ugoos-am6.dts
-index 06c5430eb92d..fdaacfd96b97 100644
---- a/arch/arm64/boot/dts/amlogic/meson-g12b-ugoos-am6.dts
-+++ b/arch/arm64/boot/dts/amlogic/meson-g12b-ugoos-am6.dts
-@@ -14,7 +14,7 @@
- #include <dt-bindings/sound/meson-g12a-tohdmitx.h>
-
- / {
-- compatible = "ugoos,am6", "amlogic,g12b";
-+ compatible = "ugoos,am6", "amlogic,s922x", "amlogic,g12b";
- model = "Ugoos AM6";
-
- aliases {
-diff --git a/arch/arm64/boot/dts/amlogic/meson-gx-libretech-pc.dtsi b/arch/arm64/boot/dts/amlogic/meson-gx-libretech-pc.dtsi
-index 248b018c83d5..b1da36fdeac6 100644
---- a/arch/arm64/boot/dts/amlogic/meson-gx-libretech-pc.dtsi
-+++ b/arch/arm64/boot/dts/amlogic/meson-gx-libretech-pc.dtsi
-@@ -96,14 +96,14 @@
- leds {
- compatible = "gpio-leds";
-
-- green {
-+ led-green {
- color = <LED_COLOR_ID_GREEN>;
- function = LED_FUNCTION_DISK_ACTIVITY;
- gpios = <&gpio_ao GPIOAO_9 GPIO_ACTIVE_HIGH>;
- linux,default-trigger = "disk-activity";
- };
-
-- blue {
-+ led-blue {
- color = <LED_COLOR_ID_BLUE>;
- function = LED_FUNCTION_STATUS;
- gpios = <&gpio GPIODV_28 GPIO_ACTIVE_HIGH>;
-diff --git a/arch/arm64/boot/dts/amlogic/meson-gx.dtsi b/arch/arm64/boot/dts/amlogic/meson-gx.dtsi
-index 03f79fe045b7..e2bb68ec8502 100644
---- a/arch/arm64/boot/dts/amlogic/meson-gx.dtsi
-+++ b/arch/arm64/boot/dts/amlogic/meson-gx.dtsi
-@@ -398,20 +398,20 @@
- };
-
- sram: sram@c8000000 {
-- compatible = "amlogic,meson-gx-sram", "amlogic,meson-gxbb-sram", "mmio-sram";
-+ compatible = "mmio-sram";
- reg = <0x0 0xc8000000 0x0 0x14000>;
-
- #address-cells = <1>;
- #size-cells = <1>;
- ranges = <0 0x0 0xc8000000 0x14000>;
-
-- cpu_scp_lpri: scp-shmem@0 {
-- compatible = "amlogic,meson-gx-scp-shmem", "amlogic,meson-gxbb-scp-shmem";
-+ cpu_scp_lpri: scp-sram@0 {
-+ compatible = "amlogic,meson-gxbb-scp-shmem";
- reg = <0x13000 0x400>;
- };
-
-- cpu_scp_hpri: scp-shmem@200 {
-- compatible = "amlogic,meson-gx-scp-shmem", "amlogic,meson-gxbb-scp-shmem";
-+ cpu_scp_hpri: scp-sram@200 {
-+ compatible = "amlogic,meson-gxbb-scp-shmem";
- reg = <0x13400 0x400>;
- };
- };
-diff --git a/arch/arm64/boot/dts/amlogic/meson-gxbb-kii-pro.dts b/arch/arm64/boot/dts/amlogic/meson-gxbb-kii-pro.dts
-index 6c9cc45fb417..e8394a8269ee 100644
---- a/arch/arm64/boot/dts/amlogic/meson-gxbb-kii-pro.dts
-+++ b/arch/arm64/boot/dts/amlogic/meson-gxbb-kii-pro.dts
-@@ -11,7 +11,7 @@
- #include <dt-bindings/input/input.h>
- #include <dt-bindings/leds/common.h>
- / {
-- compatible = "videostrong,kii-pro", "amlogic,p201", "amlogic,s905", "amlogic,meson-gxbb";
-+ compatible = "videostrong,kii-pro", "amlogic,meson-gxbb";
- model = "Videostrong KII Pro";
-
- leds {
-diff --git a/arch/arm64/boot/dts/amlogic/meson-gxbb-nanopi-k2.dts b/arch/arm64/boot/dts/amlogic/meson-gxbb-nanopi-k2.dts
-index d6ca684e0e61..7be3e354093b 100644
---- a/arch/arm64/boot/dts/amlogic/meson-gxbb-nanopi-k2.dts
-+++ b/arch/arm64/boot/dts/amlogic/meson-gxbb-nanopi-k2.dts
-@@ -29,7 +29,7 @@
- leds {
- compatible = "gpio-leds";
-
-- stat {
-+ led-stat {
- label = "nanopi-k2:blue:stat";
- gpios = <&gpio_ao GPIOAO_13 GPIO_ACTIVE_HIGH>;
- default-state = "on";
-diff --git a/arch/arm64/boot/dts/amlogic/meson-gxbb-nexbox-a95x.dts b/arch/arm64/boot/dts/amlogic/meson-gxbb-nexbox-a95x.dts
-index 65ec7dea828c..67d901ed2fa3 100644
---- a/arch/arm64/boot/dts/amlogic/meson-gxbb-nexbox-a95x.dts
-+++ b/arch/arm64/boot/dts/amlogic/meson-gxbb-nexbox-a95x.dts
-@@ -31,7 +31,7 @@
-
- leds {
- compatible = "gpio-leds";
-- blue {
-+ led-blue {
- label = "a95x:system-status";
- gpios = <&gpio_ao GPIOAO_13 GPIO_ACTIVE_LOW>;
- linux,default-trigger = "heartbeat";
-diff --git a/arch/arm64/boot/dts/amlogic/meson-gxbb-odroidc2.dts b/arch/arm64/boot/dts/amlogic/meson-gxbb-odroidc2.dts
-index b46ef985bb44..70fcfb7b0683 100644
---- a/arch/arm64/boot/dts/amlogic/meson-gxbb-odroidc2.dts
-+++ b/arch/arm64/boot/dts/amlogic/meson-gxbb-odroidc2.dts
-@@ -49,7 +49,7 @@
-
- leds {
- compatible = "gpio-leds";
-- blue {
-+ led-blue {
- label = "c2:blue:alive";
- gpios = <&gpio_ao GPIOAO_13 GPIO_ACTIVE_LOW>;
- linux,default-trigger = "heartbeat";
-diff --git a/arch/arm64/boot/dts/amlogic/meson-gxbb-vega-s95.dtsi b/arch/arm64/boot/dts/amlogic/meson-gxbb-vega-s95.dtsi
-index 45cb83625951..222ee8069cfa 100644
---- a/arch/arm64/boot/dts/amlogic/meson-gxbb-vega-s95.dtsi
-+++ b/arch/arm64/boot/dts/amlogic/meson-gxbb-vega-s95.dtsi
-@@ -20,7 +20,7 @@
- leds {
- compatible = "gpio-leds";
-
-- blue {
-+ led-blue {
- label = "vega-s95:blue:on";
- gpios = <&gpio_ao GPIOAO_13 GPIO_ACTIVE_HIGH>;
- default-state = "on";
-diff --git a/arch/arm64/boot/dts/amlogic/meson-gxbb-wetek-play2.dts b/arch/arm64/boot/dts/amlogic/meson-gxbb-wetek-play2.dts
-index 1d32d1f6d032..2ab8a3d10079 100644
---- a/arch/arm64/boot/dts/amlogic/meson-gxbb-wetek-play2.dts
-+++ b/arch/arm64/boot/dts/amlogic/meson-gxbb-wetek-play2.dts
-@@ -14,13 +14,13 @@
- model = "WeTek Play 2";
-
- leds {
-- wifi {
-+ led-wifi {
- label = "wetek-play:wifi-status";
- gpios = <&gpio GPIODV_26 GPIO_ACTIVE_HIGH>;
- default-state = "off";
- };
-
-- ethernet {
-+ led-ethernet {
- label = "wetek-play:ethernet-status";
- gpios = <&gpio GPIODV_27 GPIO_ACTIVE_HIGH>;
- default-state = "off";
-diff --git a/arch/arm64/boot/dts/amlogic/meson-gxbb-wetek.dtsi b/arch/arm64/boot/dts/amlogic/meson-gxbb-wetek.dtsi
-index dee51cf95223..d6133af09d64 100644
---- a/arch/arm64/boot/dts/amlogic/meson-gxbb-wetek.dtsi
-+++ b/arch/arm64/boot/dts/amlogic/meson-gxbb-wetek.dtsi
-@@ -25,7 +25,7 @@
- leds {
- compatible = "gpio-leds";
-
-- system {
-+ led-system {
- label = "wetek-play:system-status";
- gpios = <&gpio_ao GPIOAO_13 GPIO_ACTIVE_HIGH>;
- default-state = "on";
-diff --git a/arch/arm64/boot/dts/amlogic/meson-gxl-s905x-libretech-cc.dts b/arch/arm64/boot/dts/amlogic/meson-gxl-s905x-libretech-cc.dts
-index e8348b2728db..a4a71c13891b 100644
---- a/arch/arm64/boot/dts/amlogic/meson-gxl-s905x-libretech-cc.dts
-+++ b/arch/arm64/boot/dts/amlogic/meson-gxl-s905x-libretech-cc.dts
-@@ -54,14 +54,14 @@
- leds {
- compatible = "gpio-leds";
-
-- system {
-+ led-system {
- label = "librecomputer:system-status";
- gpios = <&gpio GPIODV_24 GPIO_ACTIVE_HIGH>;
- default-state = "on";
- panic-indicator;
- };
-
-- blue {
-+ led-blue {
- label = "librecomputer:blue";
- gpios = <&gpio_ao GPIOAO_2 GPIO_ACTIVE_HIGH>;
- linux,default-trigger = "heartbeat";
-diff --git a/arch/arm64/boot/dts/amlogic/meson-gxm-rbox-pro.dts b/arch/arm64/boot/dts/amlogic/meson-gxm-rbox-pro.dts
-index 420a88e9a195..c89c9f846fb1 100644
---- a/arch/arm64/boot/dts/amlogic/meson-gxm-rbox-pro.dts
-+++ b/arch/arm64/boot/dts/amlogic/meson-gxm-rbox-pro.dts
-@@ -36,13 +36,13 @@
- leds {
- compatible = "gpio-leds";
-
-- blue {
-+ led-blue {
- label = "rbox-pro:blue:on";
- gpios = <&gpio_ao GPIOAO_9 GPIO_ACTIVE_HIGH>;
- default-state = "on";
- };
-
-- red {
-+ led-red {
- label = "rbox-pro:red:standby";
- gpios = <&gpio GPIODV_28 GPIO_ACTIVE_HIGH>;
- default-state = "off";
-diff --git a/arch/arm64/boot/dts/amlogic/meson-khadas-vim3.dtsi b/arch/arm64/boot/dts/amlogic/meson-khadas-vim3.dtsi
-index 094ecf2222bb..1ef1e3672b96 100644
---- a/arch/arm64/boot/dts/amlogic/meson-khadas-vim3.dtsi
-+++ b/arch/arm64/boot/dts/amlogic/meson-khadas-vim3.dtsi
-@@ -39,13 +39,13 @@
- leds {
- compatible = "gpio-leds";
-
-- white {
-+ led-white {
- label = "vim3:white:sys";
- gpios = <&gpio_ao GPIOAO_4 GPIO_ACTIVE_LOW>;
- linux,default-trigger = "heartbeat";
- };
-
-- red {
-+ led-red {
- label = "vim3:red";
- gpios = <&gpio_expander 5 GPIO_ACTIVE_LOW>;
- };
-diff --git a/arch/arm64/boot/dts/amlogic/meson-sm1-sei610.dts b/arch/arm64/boot/dts/amlogic/meson-sm1-sei610.dts
-index dfb2438851c0..5ab139a34c01 100644
---- a/arch/arm64/boot/dts/amlogic/meson-sm1-sei610.dts
-+++ b/arch/arm64/boot/dts/amlogic/meson-sm1-sei610.dts
-@@ -104,7 +104,7 @@
- leds {
- compatible = "gpio-leds";
-
-- bluetooth {
-+ led-bluetooth {
- label = "sei610:blue:bt";
- gpios = <&gpio GPIOC_7 (GPIO_ACTIVE_LOW | GPIO_OPEN_DRAIN)>;
- default-state = "off";
-diff --git a/arch/arm64/boot/dts/arm/foundation-v8-gicv2.dtsi b/arch/arm64/boot/dts/arm/foundation-v8-gicv2.dtsi
-index 15fe81738e94..dfb23dfc0b0f 100644
---- a/arch/arm64/boot/dts/arm/foundation-v8-gicv2.dtsi
-+++ b/arch/arm64/boot/dts/arm/foundation-v8-gicv2.dtsi
-@@ -8,7 +8,7 @@
- gic: interrupt-controller@2c001000 {
- compatible = "arm,cortex-a15-gic", "arm,cortex-a9-gic";
- #interrupt-cells = <3>;
-- #address-cells = <2>;
-+ #address-cells = <1>;
- interrupt-controller;
- reg = <0x0 0x2c001000 0 0x1000>,
- <0x0 0x2c002000 0 0x2000>,
-diff --git a/arch/arm64/boot/dts/arm/foundation-v8-gicv3.dtsi b/arch/arm64/boot/dts/arm/foundation-v8-gicv3.dtsi
-index f2c75c756039..906f51935b36 100644
---- a/arch/arm64/boot/dts/arm/foundation-v8-gicv3.dtsi
-+++ b/arch/arm64/boot/dts/arm/foundation-v8-gicv3.dtsi
-@@ -8,9 +8,9 @@
- gic: interrupt-controller@2f000000 {
- compatible = "arm,gic-v3";
- #interrupt-cells = <3>;
-- #address-cells = <2>;
-- #size-cells = <2>;
-- ranges;
-+ #address-cells = <1>;
-+ #size-cells = <1>;
-+ ranges = <0x0 0x0 0x2f000000 0x100000>;
- interrupt-controller;
- reg = <0x0 0x2f000000 0x0 0x10000>,
- <0x0 0x2f100000 0x0 0x200000>,
-@@ -22,7 +22,7 @@
- its: its@2f020000 {
- compatible = "arm,gic-v3-its";
- msi-controller;
-- reg = <0x0 0x2f020000 0x0 0x20000>;
-+ reg = <0x20000 0x20000>;
- };
- };
- };
-diff --git a/arch/arm64/boot/dts/arm/foundation-v8.dtsi b/arch/arm64/boot/dts/arm/foundation-v8.dtsi
-index 12f039fa3dad..e2da63f78298 100644
---- a/arch/arm64/boot/dts/arm/foundation-v8.dtsi
-+++ b/arch/arm64/boot/dts/arm/foundation-v8.dtsi
-@@ -107,51 +107,51 @@
-
- #interrupt-cells = <1>;
- interrupt-map-mask = <0 0 63>;
-- interrupt-map = <0 0 0 &gic 0 0 GIC_SPI 0 IRQ_TYPE_LEVEL_HIGH>,
-- <0 0 1 &gic 0 0 GIC_SPI 1 IRQ_TYPE_LEVEL_HIGH>,
-- <0 0 2 &gic 0 0 GIC_SPI 2 IRQ_TYPE_LEVEL_HIGH>,
-- <0 0 3 &gic 0 0 GIC_SPI 3 IRQ_TYPE_LEVEL_HIGH>,
-- <0 0 4 &gic 0 0 GIC_SPI 4 IRQ_TYPE_LEVEL_HIGH>,
-- <0 0 5 &gic 0 0 GIC_SPI 5 IRQ_TYPE_LEVEL_HIGH>,
-- <0 0 6 &gic 0 0 GIC_SPI 6 IRQ_TYPE_LEVEL_HIGH>,
-- <0 0 7 &gic 0 0 GIC_SPI 7 IRQ_TYPE_LEVEL_HIGH>,
-- <0 0 8 &gic 0 0 GIC_SPI 8 IRQ_TYPE_LEVEL_HIGH>,
-- <0 0 9 &gic 0 0 GIC_SPI 9 IRQ_TYPE_LEVEL_HIGH>,
-- <0 0 10 &gic 0 0 GIC_SPI 10 IRQ_TYPE_LEVEL_HIGH>,
-- <0 0 11 &gic 0 0 GIC_SPI 11 IRQ_TYPE_LEVEL_HIGH>,
-- <0 0 12 &gic 0 0 GIC_SPI 12 IRQ_TYPE_LEVEL_HIGH>,
-- <0 0 13 &gic 0 0 GIC_SPI 13 IRQ_TYPE_LEVEL_HIGH>,
-- <0 0 14 &gic 0 0 GIC_SPI 14 IRQ_TYPE_LEVEL_HIGH>,
-- <0 0 15 &gic 0 0 GIC_SPI 15 IRQ_TYPE_LEVEL_HIGH>,
-- <0 0 16 &gic 0 0 GIC_SPI 16 IRQ_TYPE_LEVEL_HIGH>,
-- <0 0 17 &gic 0 0 GIC_SPI 17 IRQ_TYPE_LEVEL_HIGH>,
-- <0 0 18 &gic 0 0 GIC_SPI 18 IRQ_TYPE_LEVEL_HIGH>,
-- <0 0 19 &gic 0 0 GIC_SPI 19 IRQ_TYPE_LEVEL_HIGH>,
-- <0 0 20 &gic 0 0 GIC_SPI 20 IRQ_TYPE_LEVEL_HIGH>,
-- <0 0 21 &gic 0 0 GIC_SPI 21 IRQ_TYPE_LEVEL_HIGH>,
-- <0 0 22 &gic 0 0 GIC_SPI 22 IRQ_TYPE_LEVEL_HIGH>,
-- <0 0 23 &gic 0 0 GIC_SPI 23 IRQ_TYPE_LEVEL_HIGH>,
-- <0 0 24 &gic 0 0 GIC_SPI 24 IRQ_TYPE_LEVEL_HIGH>,
-- <0 0 25 &gic 0 0 GIC_SPI 25 IRQ_TYPE_LEVEL_HIGH>,
-- <0 0 26 &gic 0 0 GIC_SPI 26 IRQ_TYPE_LEVEL_HIGH>,
-- <0 0 27 &gic 0 0 GIC_SPI 27 IRQ_TYPE_LEVEL_HIGH>,
-- <0 0 28 &gic 0 0 GIC_SPI 28 IRQ_TYPE_LEVEL_HIGH>,
-- <0 0 29 &gic 0 0 GIC_SPI 29 IRQ_TYPE_LEVEL_HIGH>,
-- <0 0 30 &gic 0 0 GIC_SPI 30 IRQ_TYPE_LEVEL_HIGH>,
-- <0 0 31 &gic 0 0 GIC_SPI 31 IRQ_TYPE_LEVEL_HIGH>,
-- <0 0 32 &gic 0 0 GIC_SPI 32 IRQ_TYPE_LEVEL_HIGH>,
-- <0 0 33 &gic 0 0 GIC_SPI 33 IRQ_TYPE_LEVEL_HIGH>,
-- <0 0 34 &gic 0 0 GIC_SPI 34 IRQ_TYPE_LEVEL_HIGH>,
-- <0 0 35 &gic 0 0 GIC_SPI 35 IRQ_TYPE_LEVEL_HIGH>,
-- <0 0 36 &gic 0 0 GIC_SPI 36 IRQ_TYPE_LEVEL_HIGH>,
-- <0 0 37 &gic 0 0 GIC_SPI 37 IRQ_TYPE_LEVEL_HIGH>,
-- <0 0 38 &gic 0 0 GIC_SPI 38 IRQ_TYPE_LEVEL_HIGH>,
-- <0 0 39 &gic 0 0 GIC_SPI 39 IRQ_TYPE_LEVEL_HIGH>,
-- <0 0 40 &gic 0 0 GIC_SPI 40 IRQ_TYPE_LEVEL_HIGH>,
-- <0 0 41 &gic 0 0 GIC_SPI 41 IRQ_TYPE_LEVEL_HIGH>,
-- <0 0 42 &gic 0 0 GIC_SPI 42 IRQ_TYPE_LEVEL_HIGH>;
--
-- ethernet@2,02000000 {
-+ interrupt-map = <0 0 0 &gic 0 GIC_SPI 0 IRQ_TYPE_LEVEL_HIGH>,
-+ <0 0 1 &gic 0 GIC_SPI 1 IRQ_TYPE_LEVEL_HIGH>,
-+ <0 0 2 &gic 0 GIC_SPI 2 IRQ_TYPE_LEVEL_HIGH>,
-+ <0 0 3 &gic 0 GIC_SPI 3 IRQ_TYPE_LEVEL_HIGH>,
-+ <0 0 4 &gic 0 GIC_SPI 4 IRQ_TYPE_LEVEL_HIGH>,
-+ <0 0 5 &gic 0 GIC_SPI 5 IRQ_TYPE_LEVEL_HIGH>,
-+ <0 0 6 &gic 0 GIC_SPI 6 IRQ_TYPE_LEVEL_HIGH>,
-+ <0 0 7 &gic 0 GIC_SPI 7 IRQ_TYPE_LEVEL_HIGH>,
-+ <0 0 8 &gic 0 GIC_SPI 8 IRQ_TYPE_LEVEL_HIGH>,
-+ <0 0 9 &gic 0 GIC_SPI 9 IRQ_TYPE_LEVEL_HIGH>,
-+ <0 0 10 &gic 0 GIC_SPI 10 IRQ_TYPE_LEVEL_HIGH>,
-+ <0 0 11 &gic 0 GIC_SPI 11 IRQ_TYPE_LEVEL_HIGH>,
-+ <0 0 12 &gic 0 GIC_SPI 12 IRQ_TYPE_LEVEL_HIGH>,
-+ <0 0 13 &gic 0 GIC_SPI 13 IRQ_TYPE_LEVEL_HIGH>,
-+ <0 0 14 &gic 0 GIC_SPI 14 IRQ_TYPE_LEVEL_HIGH>,
-+ <0 0 15 &gic 0 GIC_SPI 15 IRQ_TYPE_LEVEL_HIGH>,
-+ <0 0 16 &gic 0 GIC_SPI 16 IRQ_TYPE_LEVEL_HIGH>,
-+ <0 0 17 &gic 0 GIC_SPI 17 IRQ_TYPE_LEVEL_HIGH>,
-+ <0 0 18 &gic 0 GIC_SPI 18 IRQ_TYPE_LEVEL_HIGH>,
-+ <0 0 19 &gic 0 GIC_SPI 19 IRQ_TYPE_LEVEL_HIGH>,
-+ <0 0 20 &gic 0 GIC_SPI 20 IRQ_TYPE_LEVEL_HIGH>,
-+ <0 0 21 &gic 0 GIC_SPI 21 IRQ_TYPE_LEVEL_HIGH>,
-+ <0 0 22 &gic 0 GIC_SPI 22 IRQ_TYPE_LEVEL_HIGH>,
-+ <0 0 23 &gic 0 GIC_SPI 23 IRQ_TYPE_LEVEL_HIGH>,
-+ <0 0 24 &gic 0 GIC_SPI 24 IRQ_TYPE_LEVEL_HIGH>,
-+ <0 0 25 &gic 0 GIC_SPI 25 IRQ_TYPE_LEVEL_HIGH>,
-+ <0 0 26 &gic 0 GIC_SPI 26 IRQ_TYPE_LEVEL_HIGH>,
-+ <0 0 27 &gic 0 GIC_SPI 27 IRQ_TYPE_LEVEL_HIGH>,
-+ <0 0 28 &gic 0 GIC_SPI 28 IRQ_TYPE_LEVEL_HIGH>,
-+ <0 0 29 &gic 0 GIC_SPI 29 IRQ_TYPE_LEVEL_HIGH>,
-+ <0 0 30 &gic 0 GIC_SPI 30 IRQ_TYPE_LEVEL_HIGH>,
-+ <0 0 31 &gic 0 GIC_SPI 31 IRQ_TYPE_LEVEL_HIGH>,
-+ <0 0 32 &gic 0 GIC_SPI 32 IRQ_TYPE_LEVEL_HIGH>,
-+ <0 0 33 &gic 0 GIC_SPI 33 IRQ_TYPE_LEVEL_HIGH>,
-+ <0 0 34 &gic 0 GIC_SPI 34 IRQ_TYPE_LEVEL_HIGH>,
-+ <0 0 35 &gic 0 GIC_SPI 35 IRQ_TYPE_LEVEL_HIGH>,
-+ <0 0 36 &gic 0 GIC_SPI 36 IRQ_TYPE_LEVEL_HIGH>,
-+ <0 0 37 &gic 0 GIC_SPI 37 IRQ_TYPE_LEVEL_HIGH>,
-+ <0 0 38 &gic 0 GIC_SPI 38 IRQ_TYPE_LEVEL_HIGH>,
-+ <0 0 39 &gic 0 GIC_SPI 39 IRQ_TYPE_LEVEL_HIGH>,
-+ <0 0 40 &gic 0 GIC_SPI 40 IRQ_TYPE_LEVEL_HIGH>,
-+ <0 0 41 &gic 0 GIC_SPI 41 IRQ_TYPE_LEVEL_HIGH>,
-+ <0 0 42 &gic 0 GIC_SPI 42 IRQ_TYPE_LEVEL_HIGH>;
-+
-+ ethernet@202000000 {
- compatible = "smsc,lan91c111";
- reg = <2 0x02000000 0x10000>;
- interrupts = <15>;
-@@ -178,7 +178,7 @@
- clock-output-names = "v2m:refclk32khz";
- };
-
-- iofpga@3,00000000 {
-+ iofpga@300000000 {
- compatible = "simple-bus";
- #address-cells = <1>;
- #size-cells = <1>;
-diff --git a/arch/arm64/boot/dts/arm/juno-base.dtsi b/arch/arm64/boot/dts/arm/juno-base.dtsi
-index f5889281545f..59b6ac0b828a 100644
---- a/arch/arm64/boot/dts/arm/juno-base.dtsi
-+++ b/arch/arm64/boot/dts/arm/juno-base.dtsi
-@@ -74,35 +74,35 @@
- <0x0 0x2c02f000 0 0x2000>,
- <0x0 0x2c04f000 0 0x2000>,
- <0x0 0x2c06f000 0 0x2000>;
-- #address-cells = <2>;
-+ #address-cells = <1>;
- #interrupt-cells = <3>;
-- #size-cells = <2>;
-+ #size-cells = <1>;
- interrupt-controller;
- interrupts = <GIC_PPI 9 (GIC_CPU_MASK_SIMPLE(6) | IRQ_TYPE_LEVEL_HIGH)>;
-- ranges = <0 0 0 0x2c1c0000 0 0x40000>;
-+ ranges = <0 0 0x2c1c0000 0x40000>;
-
- v2m_0: v2m@0 {
- compatible = "arm,gic-v2m-frame";
- msi-controller;
-- reg = <0 0 0 0x10000>;
-+ reg = <0 0x10000>;
- };
-
- v2m@10000 {
- compatible = "arm,gic-v2m-frame";
- msi-controller;
-- reg = <0 0x10000 0 0x10000>;
-+ reg = <0x10000 0x10000>;
- };
-
- v2m@20000 {
- compatible = "arm,gic-v2m-frame";
- msi-controller;
-- reg = <0 0x20000 0 0x10000>;
-+ reg = <0x20000 0x10000>;
- };
-
- v2m@30000 {
- compatible = "arm,gic-v2m-frame";
- msi-controller;
-- reg = <0 0x30000 0 0x10000>;
-+ reg = <0x30000 0x10000>;
- };
- };
-
-@@ -546,10 +546,10 @@
- <0x42000000 0x40 0x00000000 0x40 0x00000000 0x1 0x00000000>;
- #interrupt-cells = <1>;
- interrupt-map-mask = <0 0 0 7>;
-- interrupt-map = <0 0 0 1 &gic 0 0 GIC_SPI 136 IRQ_TYPE_LEVEL_HIGH>,
-- <0 0 0 2 &gic 0 0 GIC_SPI 137 IRQ_TYPE_LEVEL_HIGH>,
-- <0 0 0 3 &gic 0 0 GIC_SPI 138 IRQ_TYPE_LEVEL_HIGH>,
-- <0 0 0 4 &gic 0 0 GIC_SPI 139 IRQ_TYPE_LEVEL_HIGH>;
-+ interrupt-map = <0 0 0 1 &gic 0 GIC_SPI 136 IRQ_TYPE_LEVEL_HIGH>,
-+ <0 0 0 2 &gic 0 GIC_SPI 137 IRQ_TYPE_LEVEL_HIGH>,
-+ <0 0 0 3 &gic 0 GIC_SPI 138 IRQ_TYPE_LEVEL_HIGH>,
-+ <0 0 0 4 &gic 0 GIC_SPI 139 IRQ_TYPE_LEVEL_HIGH>;
- msi-parent = <&v2m_0>;
- status = "disabled";
- iommu-map-mask = <0x0>; /* RC has no means to output PCI RID */
-@@ -813,19 +813,19 @@
-
- #interrupt-cells = <1>;
- interrupt-map-mask = <0 0 15>;
-- interrupt-map = <0 0 0 &gic 0 0 GIC_SPI 68 IRQ_TYPE_LEVEL_HIGH>,
-- <0 0 1 &gic 0 0 GIC_SPI 69 IRQ_TYPE_LEVEL_HIGH>,
-- <0 0 2 &gic 0 0 GIC_SPI 70 IRQ_TYPE_LEVEL_HIGH>,
-- <0 0 3 &gic 0 0 GIC_SPI 160 IRQ_TYPE_LEVEL_HIGH>,
-- <0 0 4 &gic 0 0 GIC_SPI 161 IRQ_TYPE_LEVEL_HIGH>,
-- <0 0 5 &gic 0 0 GIC_SPI 162 IRQ_TYPE_LEVEL_HIGH>,
-- <0 0 6 &gic 0 0 GIC_SPI 163 IRQ_TYPE_LEVEL_HIGH>,
-- <0 0 7 &gic 0 0 GIC_SPI 164 IRQ_TYPE_LEVEL_HIGH>,
-- <0 0 8 &gic 0 0 GIC_SPI 165 IRQ_TYPE_LEVEL_HIGH>,
-- <0 0 9 &gic 0 0 GIC_SPI 166 IRQ_TYPE_LEVEL_HIGH>,
-- <0 0 10 &gic 0 0 GIC_SPI 167 IRQ_TYPE_LEVEL_HIGH>,
-- <0 0 11 &gic 0 0 GIC_SPI 168 IRQ_TYPE_LEVEL_HIGH>,
-- <0 0 12 &gic 0 0 GIC_SPI 169 IRQ_TYPE_LEVEL_HIGH>;
-+ interrupt-map = <0 0 0 &gic 0 GIC_SPI 68 IRQ_TYPE_LEVEL_HIGH>,
-+ <0 0 1 &gic 0 GIC_SPI 69 IRQ_TYPE_LEVEL_HIGH>,
-+ <0 0 2 &gic 0 GIC_SPI 70 IRQ_TYPE_LEVEL_HIGH>,
-+ <0 0 3 &gic 0 GIC_SPI 160 IRQ_TYPE_LEVEL_HIGH>,
-+ <0 0 4 &gic 0 GIC_SPI 161 IRQ_TYPE_LEVEL_HIGH>,
-+ <0 0 5 &gic 0 GIC_SPI 162 IRQ_TYPE_LEVEL_HIGH>,
-+ <0 0 6 &gic 0 GIC_SPI 163 IRQ_TYPE_LEVEL_HIGH>,
-+ <0 0 7 &gic 0 GIC_SPI 164 IRQ_TYPE_LEVEL_HIGH>,
-+ <0 0 8 &gic 0 GIC_SPI 165 IRQ_TYPE_LEVEL_HIGH>,
-+ <0 0 9 &gic 0 GIC_SPI 166 IRQ_TYPE_LEVEL_HIGH>,
-+ <0 0 10 &gic 0 GIC_SPI 167 IRQ_TYPE_LEVEL_HIGH>,
-+ <0 0 11 &gic 0 GIC_SPI 168 IRQ_TYPE_LEVEL_HIGH>,
-+ <0 0 12 &gic 0 GIC_SPI 169 IRQ_TYPE_LEVEL_HIGH>;
- };
-
- site2: tlx@60000000 {
-@@ -835,6 +835,6 @@
- ranges = <0 0 0x60000000 0x10000000>;
- #interrupt-cells = <1>;
- interrupt-map-mask = <0 0>;
-- interrupt-map = <0 0 &gic 0 0 GIC_SPI 168 IRQ_TYPE_LEVEL_HIGH>;
-+ interrupt-map = <0 0 &gic 0 GIC_SPI 168 IRQ_TYPE_LEVEL_HIGH>;
- };
- };
-diff --git a/arch/arm64/boot/dts/arm/juno-motherboard.dtsi b/arch/arm64/boot/dts/arm/juno-motherboard.dtsi
-index e3983ded3c3c..d5cefddde08c 100644
---- a/arch/arm64/boot/dts/arm/juno-motherboard.dtsi
-+++ b/arch/arm64/boot/dts/arm/juno-motherboard.dtsi
-@@ -103,7 +103,7 @@
- };
- };
-
-- flash@0,00000000 {
-+ flash@0 {
- /* 2 * 32MiB NOR Flash memory mounted on CS0 */
- compatible = "arm,vexpress-flash", "cfi-flash";
- reg = <0 0x00000000 0x04000000>;
-@@ -120,7 +120,7 @@
- };
- };
-
-- ethernet@2,00000000 {
-+ ethernet@200000000 {
- compatible = "smsc,lan9118", "smsc,lan9115";
- reg = <2 0x00000000 0x10000>;
- interrupts = <3>;
-@@ -133,7 +133,7 @@
- vddvario-supply = <&mb_fixed_3v3>;
- };
-
-- iofpga@3,00000000 {
-+ iofpga@300000000 {
- compatible = "simple-bus";
- #address-cells = <1>;
- #size-cells = <1>;
-diff --git a/arch/arm64/boot/dts/arm/rtsm_ve-motherboard-rs2.dtsi b/arch/arm64/boot/dts/arm/rtsm_ve-motherboard-rs2.dtsi
-index 60703b5763c6..350cbf17e8b4 100644
---- a/arch/arm64/boot/dts/arm/rtsm_ve-motherboard-rs2.dtsi
-+++ b/arch/arm64/boot/dts/arm/rtsm_ve-motherboard-rs2.dtsi
-@@ -9,7 +9,7 @@
- motherboard {
- arm,v2m-memory-map = "rs2";
-
-- iofpga@3,00000000 {
-+ iofpga@300000000 {
- virtio-p9@140000 {
- compatible = "virtio,mmio";
- reg = <0x140000 0x200>;
-diff --git a/arch/arm64/boot/dts/arm/rtsm_ve-motherboard.dtsi b/arch/arm64/boot/dts/arm/rtsm_ve-motherboard.dtsi
-index e333c8d2d0e4..d1bfa62ca073 100644
---- a/arch/arm64/boot/dts/arm/rtsm_ve-motherboard.dtsi
-+++ b/arch/arm64/boot/dts/arm/rtsm_ve-motherboard.dtsi
-@@ -17,14 +17,14 @@
- #interrupt-cells = <1>;
- ranges;
-
-- flash@0,00000000 {
-+ flash@0 {
- compatible = "arm,vexpress-flash", "cfi-flash";
- reg = <0 0x00000000 0x04000000>,
- <4 0x00000000 0x04000000>;
- bank-width = <4>;
- };
-
-- ethernet@2,02000000 {
-+ ethernet@202000000 {
- compatible = "smsc,lan91c111";
- reg = <2 0x02000000 0x10000>;
- interrupts = <15>;
-@@ -51,7 +51,7 @@
- clock-output-names = "v2m:refclk32khz";
- };
-
-- iofpga@3,00000000 {
-+ iofpga@300000000 {
- compatible = "simple-bus";
- #address-cells = <1>;
- #size-cells = <1>;
-diff --git a/arch/arm64/boot/dts/marvell/armada-3720-db.dts b/arch/arm64/boot/dts/marvell/armada-3720-db.dts
-index f2cc00594d64..3e5789f37206 100644
---- a/arch/arm64/boot/dts/marvell/armada-3720-db.dts
-+++ b/arch/arm64/boot/dts/marvell/armada-3720-db.dts
-@@ -128,6 +128,9 @@
-
- /* CON15(V2.0)/CON17(V1.4) : PCIe / CON15(V2.0)/CON12(V1.4) :mini-PCIe */
- &pcie0 {
-+ pinctrl-names = "default";
-+ pinctrl-0 = <&pcie_reset_pins &pcie_clkreq_pins>;
-+ reset-gpios = <&gpiosb 3 GPIO_ACTIVE_LOW>;
- status = "okay";
- };
-
-diff --git a/arch/arm64/boot/dts/marvell/armada-3720-espressobin.dtsi b/arch/arm64/boot/dts/marvell/armada-3720-espressobin.dtsi
-index 42e992f9c8a5..c92ad664cb0e 100644
---- a/arch/arm64/boot/dts/marvell/armada-3720-espressobin.dtsi
-+++ b/arch/arm64/boot/dts/marvell/armada-3720-espressobin.dtsi
-@@ -47,6 +47,7 @@
- phys = <&comphy1 0>;
- pinctrl-names = "default";
- pinctrl-0 = <&pcie_reset_pins &pcie_clkreq_pins>;
-+ reset-gpios = <&gpiosb 3 GPIO_ACTIVE_LOW>;
- };
-
- /* J6 */
-diff --git a/arch/arm64/boot/dts/marvell/armada-3720-turris-mox.dts b/arch/arm64/boot/dts/marvell/armada-3720-turris-mox.dts
-index bb42d1e6a4e9..1452c821f8c0 100644
---- a/arch/arm64/boot/dts/marvell/armada-3720-turris-mox.dts
-+++ b/arch/arm64/boot/dts/marvell/armada-3720-turris-mox.dts
-@@ -95,7 +95,7 @@
- };
-
- sfp: sfp {
-- compatible = "sff,sfp+";
-+ compatible = "sff,sfp";
- i2c-bus = <&i2c0>;
- los-gpio = <&moxtet_sfp 0 GPIO_ACTIVE_HIGH>;
- tx-fault-gpio = <&moxtet_sfp 1 GPIO_ACTIVE_HIGH>;
-@@ -128,10 +128,6 @@
- };
- };
-
--&pcie_reset_pins {
-- function = "gpio";
--};
--
- &pcie0 {
- pinctrl-names = "default";
- pinctrl-0 = <&pcie_reset_pins &pcie_clkreq_pins>;
-@@ -179,6 +175,8 @@
- marvell,pad-type = "sd";
- vqmmc-supply = <&vsdio_reg>;
- mmc-pwrseq = <&sdhci1_pwrseq>;
-+ /* forbid SDR104 for FCC purposes */
-+ sdhci-caps-mask = <0x2 0x0>;
- status = "okay";
- };
-
-diff --git a/arch/arm64/boot/dts/marvell/armada-37xx.dtsi b/arch/arm64/boot/dts/marvell/armada-37xx.dtsi
-index 000c135e39b7..7909c146eabf 100644
---- a/arch/arm64/boot/dts/marvell/armada-37xx.dtsi
-+++ b/arch/arm64/boot/dts/marvell/armada-37xx.dtsi
-@@ -317,7 +317,7 @@
-
- pcie_reset_pins: pcie-reset-pins {
- groups = "pcie1";
-- function = "pcie";
-+ function = "gpio";
- };
-
- pcie_clkreq_pins: pcie-clkreq-pins {
-diff --git a/arch/arm64/boot/dts/mediatek/mt8173.dtsi b/arch/arm64/boot/dts/mediatek/mt8173.dtsi
-index d819e44d94a8..6ad1053afd27 100644
---- a/arch/arm64/boot/dts/mediatek/mt8173.dtsi
-+++ b/arch/arm64/boot/dts/mediatek/mt8173.dtsi
-@@ -242,21 +242,21 @@
- cpu_on = <0x84000003>;
- };
-
-- clk26m: oscillator@0 {
-+ clk26m: oscillator0 {
- compatible = "fixed-clock";
- #clock-cells = <0>;
- clock-frequency = <26000000>;
- clock-output-names = "clk26m";
- };
-
-- clk32k: oscillator@1 {
-+ clk32k: oscillator1 {
- compatible = "fixed-clock";
- #clock-cells = <0>;
- clock-frequency = <32000>;
- clock-output-names = "clk32k";
- };
-
-- cpum_ck: oscillator@2 {
-+ cpum_ck: oscillator2 {
- compatible = "fixed-clock";
- #clock-cells = <0>;
- clock-frequency = <0>;
-@@ -272,19 +272,19 @@
- sustainable-power = <1500>; /* milliwatts */
-
- trips {
-- threshold: trip-point@0 {
-+ threshold: trip-point0 {
- temperature = <68000>;
- hysteresis = <2000>;
- type = "passive";
- };
-
-- target: trip-point@1 {
-+ target: trip-point1 {
- temperature = <85000>;
- hysteresis = <2000>;
- type = "passive";
- };
-
-- cpu_crit: cpu_crit@0 {
-+ cpu_crit: cpu_crit0 {
- temperature = <115000>;
- hysteresis = <2000>;
- type = "critical";
-@@ -292,13 +292,13 @@
- };
-
- cooling-maps {
-- map@0 {
-+ map0 {
- trip = <&target>;
- cooling-device = <&cpu0 0 0>,
- <&cpu1 0 0>;
- contribution = <3072>;
- };
-- map@1 {
-+ map1 {
- trip = <&target>;
- cooling-device = <&cpu2 0 0>,
- <&cpu3 0 0>;
-@@ -312,7 +312,7 @@
- #address-cells = <2>;
- #size-cells = <2>;
- ranges;
-- vpu_dma_reserved: vpu_dma_mem_region {
-+ vpu_dma_reserved: vpu_dma_mem_region@b7000000 {
- compatible = "shared-dma-pool";
- reg = <0 0xb7000000 0 0x500000>;
- alignment = <0x1000>;
-@@ -365,7 +365,7 @@
- reg = <0 0x10005000 0 0x1000>;
- };
-
-- pio: pinctrl@10005000 {
-+ pio: pinctrl@1000b000 {
- compatible = "mediatek,mt8173-pinctrl";
- reg = <0 0x1000b000 0 0x1000>;
- mediatek,pctl-regmap = <&syscfg_pctl_a>;
-@@ -572,7 +572,7 @@
- status = "disabled";
- };
-
-- gic: interrupt-controller@10220000 {
-+ gic: interrupt-controller@10221000 {
- compatible = "arm,gic-400";
- #interrupt-cells = <3>;
- interrupt-parent = <&gic>;
-diff --git a/arch/arm64/boot/dts/nvidia/tegra194-p2888.dtsi b/arch/arm64/boot/dts/nvidia/tegra194-p2888.dtsi
-index 623f7d7d216b..8e3136dfdd62 100644
---- a/arch/arm64/boot/dts/nvidia/tegra194-p2888.dtsi
-+++ b/arch/arm64/boot/dts/nvidia/tegra194-p2888.dtsi
-@@ -33,7 +33,7 @@
-
- phy-reset-gpios = <&gpio TEGRA194_MAIN_GPIO(G, 5) GPIO_ACTIVE_LOW>;
- phy-handle = <&phy>;
-- phy-mode = "rgmii";
-+ phy-mode = "rgmii-id";
-
- mdio {
- #address-cells = <1>;
-diff --git a/arch/arm64/boot/dts/nvidia/tegra194.dtsi b/arch/arm64/boot/dts/nvidia/tegra194.dtsi
-index f4ede86e32b4..3c928360f4ed 100644
---- a/arch/arm64/boot/dts/nvidia/tegra194.dtsi
-+++ b/arch/arm64/boot/dts/nvidia/tegra194.dtsi
-@@ -1387,7 +1387,7 @@
-
- bus-range = <0x0 0xff>;
- ranges = <0x81000000 0x0 0x30100000 0x0 0x30100000 0x0 0x00100000 /* downstream I/O (1MB) */
-- 0xc2000000 0x12 0x00000000 0x12 0x00000000 0x0 0x30000000 /* prefetchable memory (768MB) */
-+ 0xc3000000 0x12 0x00000000 0x12 0x00000000 0x0 0x30000000 /* prefetchable memory (768MB) */
- 0x82000000 0x0 0x40000000 0x12 0x30000000 0x0 0x10000000>; /* non-prefetchable memory (256MB) */
- };
-
-@@ -1432,7 +1432,7 @@
-
- bus-range = <0x0 0xff>;
- ranges = <0x81000000 0x0 0x32100000 0x0 0x32100000 0x0 0x00100000 /* downstream I/O (1MB) */
-- 0xc2000000 0x12 0x40000000 0x12 0x40000000 0x0 0x30000000 /* prefetchable memory (768MB) */
-+ 0xc3000000 0x12 0x40000000 0x12 0x40000000 0x0 0x30000000 /* prefetchable memory (768MB) */
- 0x82000000 0x0 0x40000000 0x12 0x70000000 0x0 0x10000000>; /* non-prefetchable memory (256MB) */
- };
-
-@@ -1477,7 +1477,7 @@
-
- bus-range = <0x0 0xff>;
- ranges = <0x81000000 0x0 0x34100000 0x0 0x34100000 0x0 0x00100000 /* downstream I/O (1MB) */
-- 0xc2000000 0x12 0x80000000 0x12 0x80000000 0x0 0x30000000 /* prefetchable memory (768MB) */
-+ 0xc3000000 0x12 0x80000000 0x12 0x80000000 0x0 0x30000000 /* prefetchable memory (768MB) */
- 0x82000000 0x0 0x40000000 0x12 0xb0000000 0x0 0x10000000>; /* non-prefetchable memory (256MB) */
- };
-
-@@ -1522,7 +1522,7 @@
-
- bus-range = <0x0 0xff>;
- ranges = <0x81000000 0x0 0x36100000 0x0 0x36100000 0x0 0x00100000 /* downstream I/O (1MB) */
-- 0xc2000000 0x14 0x00000000 0x14 0x00000000 0x3 0x40000000 /* prefetchable memory (13GB) */
-+ 0xc3000000 0x14 0x00000000 0x14 0x00000000 0x3 0x40000000 /* prefetchable memory (13GB) */
- 0x82000000 0x0 0x40000000 0x17 0x40000000 0x0 0xc0000000>; /* non-prefetchable memory (3GB) */
- };
-
-@@ -1567,7 +1567,7 @@
-
- bus-range = <0x0 0xff>;
- ranges = <0x81000000 0x0 0x38100000 0x0 0x38100000 0x0 0x00100000 /* downstream I/O (1MB) */
-- 0xc2000000 0x18 0x00000000 0x18 0x00000000 0x3 0x40000000 /* prefetchable memory (13GB) */
-+ 0xc3000000 0x18 0x00000000 0x18 0x00000000 0x3 0x40000000 /* prefetchable memory (13GB) */
- 0x82000000 0x0 0x40000000 0x1b 0x40000000 0x0 0xc0000000>; /* non-prefetchable memory (3GB) */
- };
-
-@@ -1616,7 +1616,7 @@
-
- bus-range = <0x0 0xff>;
- ranges = <0x81000000 0x0 0x3a100000 0x0 0x3a100000 0x0 0x00100000 /* downstream I/O (1MB) */
-- 0xc2000000 0x1c 0x00000000 0x1c 0x00000000 0x3 0x40000000 /* prefetchable memory (13GB) */
-+ 0xc3000000 0x1c 0x00000000 0x1c 0x00000000 0x3 0x40000000 /* prefetchable memory (13GB) */
- 0x82000000 0x0 0x40000000 0x1f 0x40000000 0x0 0xc0000000>; /* non-prefetchable memory (3GB) */
- };
-
-diff --git a/arch/arm64/boot/dts/qcom/apq8096-db820c.dtsi b/arch/arm64/boot/dts/qcom/apq8096-db820c.dtsi
-index c4abbccf2bed..eaa1eb70b455 100644
---- a/arch/arm64/boot/dts/qcom/apq8096-db820c.dtsi
-+++ b/arch/arm64/boot/dts/qcom/apq8096-db820c.dtsi
-@@ -117,16 +117,6 @@
- regulator-max-microvolt = <3700000>;
- };
-
-- vreg_s8a_l3a_input: vreg-s8a-l3a-input {
-- compatible = "regulator-fixed";
-- regulator-name = "vreg_s8a_l3a_input";
-- regulator-always-on;
-- regulator-boot-on;
--
-- regulator-min-microvolt = <0>;
-- regulator-max-microvolt = <0>;
-- };
--
- wlan_en: wlan-en-1-8v {
- pinctrl-names = "default";
- pinctrl-0 = <&wlan_en_gpios>;
-@@ -705,14 +695,14 @@
- vdd_s11-supply = <&vph_pwr>;
- vdd_s12-supply = <&vph_pwr>;
- vdd_l2_l26_l28-supply = <&vreg_s3a_1p3>;
-- vdd_l3_l11-supply = <&vreg_s8a_l3a_input>;
-+ vdd_l3_l11-supply = <&vreg_s3a_1p3>;
- vdd_l4_l27_l31-supply = <&vreg_s3a_1p3>;
- vdd_l5_l7-supply = <&vreg_s5a_2p15>;
- vdd_l6_l12_l32-supply = <&vreg_s5a_2p15>;
- vdd_l8_l16_l30-supply = <&vph_pwr>;
- vdd_l14_l15-supply = <&vreg_s5a_2p15>;
- vdd_l25-supply = <&vreg_s3a_1p3>;
-- vdd_lvs1_2-supply = <&vreg_s4a_1p8>;
-+ vdd_lvs1_lvs2-supply = <&vreg_s4a_1p8>;
-
- vreg_s3a_1p3: s3 {
- regulator-name = "vreg_s3a_1p3";
-diff --git a/arch/arm64/boot/dts/qcom/msm8916.dtsi b/arch/arm64/boot/dts/qcom/msm8916.dtsi
-index a88a15f2352b..5548d7b5096c 100644
---- a/arch/arm64/boot/dts/qcom/msm8916.dtsi
-+++ b/arch/arm64/boot/dts/qcom/msm8916.dtsi
-@@ -261,7 +261,7 @@
- thermal-sensors = <&tsens 4>;
-
- trips {
-- cpu2_3_alert0: trip-point@0 {
-+ cpu2_3_alert0: trip-point0 {
- temperature = <75000>;
- hysteresis = <2000>;
- type = "passive";
-@@ -291,7 +291,7 @@
- thermal-sensors = <&tsens 2>;
-
- trips {
-- gpu_alert0: trip-point@0 {
-+ gpu_alert0: trip-point0 {
- temperature = <75000>;
- hysteresis = <2000>;
- type = "passive";
-@@ -311,7 +311,7 @@
- thermal-sensors = <&tsens 1>;
-
- trips {
-- cam_alert0: trip-point@0 {
-+ cam_alert0: trip-point0 {
- temperature = <75000>;
- hysteresis = <2000>;
- type = "hot";
-@@ -326,7 +326,7 @@
- thermal-sensors = <&tsens 0>;
-
- trips {
-- modem_alert0: trip-point@0 {
-+ modem_alert0: trip-point0 {
- temperature = <85000>;
- hysteresis = <2000>;
- type = "hot";
-diff --git a/arch/arm64/boot/dts/qcom/msm8996.dtsi b/arch/arm64/boot/dts/qcom/msm8996.dtsi
-index 98634d5c4440..d22c364b520a 100644
---- a/arch/arm64/boot/dts/qcom/msm8996.dtsi
-+++ b/arch/arm64/boot/dts/qcom/msm8996.dtsi
-@@ -989,16 +989,16 @@
- "csi_clk_mux",
- "vfe0",
- "vfe1";
-- interrupts = <GIC_SPI 78 0>,
-- <GIC_SPI 79 0>,
-- <GIC_SPI 80 0>,
-- <GIC_SPI 296 0>,
-- <GIC_SPI 297 0>,
-- <GIC_SPI 298 0>,
-- <GIC_SPI 299 0>,
-- <GIC_SPI 309 0>,
-- <GIC_SPI 314 0>,
-- <GIC_SPI 315 0>;
-+ interrupts = <GIC_SPI 78 IRQ_TYPE_EDGE_RISING>,
-+ <GIC_SPI 79 IRQ_TYPE_EDGE_RISING>,
-+ <GIC_SPI 80 IRQ_TYPE_EDGE_RISING>,
-+ <GIC_SPI 296 IRQ_TYPE_EDGE_RISING>,
-+ <GIC_SPI 297 IRQ_TYPE_EDGE_RISING>,
-+ <GIC_SPI 298 IRQ_TYPE_EDGE_RISING>,
-+ <GIC_SPI 299 IRQ_TYPE_EDGE_RISING>,
-+ <GIC_SPI 309 IRQ_TYPE_EDGE_RISING>,
-+ <GIC_SPI 314 IRQ_TYPE_EDGE_RISING>,
-+ <GIC_SPI 315 IRQ_TYPE_EDGE_RISING>;
- interrupt-names = "csiphy0",
- "csiphy1",
- "csiphy2",
-diff --git a/arch/arm64/boot/dts/qcom/pm8150.dtsi b/arch/arm64/boot/dts/qcom/pm8150.dtsi
-index b6e304748a57..c0b197458665 100644
---- a/arch/arm64/boot/dts/qcom/pm8150.dtsi
-+++ b/arch/arm64/boot/dts/qcom/pm8150.dtsi
-@@ -73,18 +73,8 @@
- reg = <0xc000>;
- gpio-controller;
- #gpio-cells = <2>;
-- interrupts = <0x0 0xc0 0x0 IRQ_TYPE_NONE>,
-- <0x0 0xc1 0x0 IRQ_TYPE_NONE>,
-- <0x0 0xc2 0x0 IRQ_TYPE_NONE>,
-- <0x0 0xc3 0x0 IRQ_TYPE_NONE>,
-- <0x0 0xc4 0x0 IRQ_TYPE_NONE>,
-- <0x0 0xc5 0x0 IRQ_TYPE_NONE>,
-- <0x0 0xc6 0x0 IRQ_TYPE_NONE>,
-- <0x0 0xc7 0x0 IRQ_TYPE_NONE>,
-- <0x0 0xc8 0x0 IRQ_TYPE_NONE>,
-- <0x0 0xc9 0x0 IRQ_TYPE_NONE>,
-- <0x0 0xca 0x0 IRQ_TYPE_NONE>,
-- <0x0 0xcb 0x0 IRQ_TYPE_NONE>;
-+ interrupt-controller;
-+ #interrupt-cells = <2>;
- };
- };
-
-diff --git a/arch/arm64/boot/dts/qcom/pm8150b.dtsi b/arch/arm64/boot/dts/qcom/pm8150b.dtsi
-index 322379d5c31f..40b5d75a4a1d 100644
---- a/arch/arm64/boot/dts/qcom/pm8150b.dtsi
-+++ b/arch/arm64/boot/dts/qcom/pm8150b.dtsi
-@@ -62,18 +62,8 @@
- reg = <0xc000>;
- gpio-controller;
- #gpio-cells = <2>;
-- interrupts = <0x2 0xc0 0x0 IRQ_TYPE_NONE>,
-- <0x2 0xc1 0x0 IRQ_TYPE_NONE>,
-- <0x2 0xc2 0x0 IRQ_TYPE_NONE>,
-- <0x2 0xc3 0x0 IRQ_TYPE_NONE>,
-- <0x2 0xc4 0x0 IRQ_TYPE_NONE>,
-- <0x2 0xc5 0x0 IRQ_TYPE_NONE>,
-- <0x2 0xc6 0x0 IRQ_TYPE_NONE>,
-- <0x2 0xc7 0x0 IRQ_TYPE_NONE>,
-- <0x2 0xc8 0x0 IRQ_TYPE_NONE>,
-- <0x2 0xc9 0x0 IRQ_TYPE_NONE>,
-- <0x2 0xca 0x0 IRQ_TYPE_NONE>,
-- <0x2 0xcb 0x0 IRQ_TYPE_NONE>;
-+ interrupt-controller;
-+ #interrupt-cells = <2>;
- };
- };
-
-diff --git a/arch/arm64/boot/dts/qcom/pm8150l.dtsi b/arch/arm64/boot/dts/qcom/pm8150l.dtsi
-index eb0e9a090e42..cf05e0685d10 100644
---- a/arch/arm64/boot/dts/qcom/pm8150l.dtsi
-+++ b/arch/arm64/boot/dts/qcom/pm8150l.dtsi
-@@ -56,18 +56,8 @@
- reg = <0xc000>;
- gpio-controller;
- #gpio-cells = <2>;
-- interrupts = <0x4 0xc0 0x0 IRQ_TYPE_NONE>,
-- <0x4 0xc1 0x0 IRQ_TYPE_NONE>,
-- <0x4 0xc2 0x0 IRQ_TYPE_NONE>,
-- <0x4 0xc3 0x0 IRQ_TYPE_NONE>,
-- <0x4 0xc4 0x0 IRQ_TYPE_NONE>,
-- <0x4 0xc5 0x0 IRQ_TYPE_NONE>,
-- <0x4 0xc6 0x0 IRQ_TYPE_NONE>,
-- <0x4 0xc7 0x0 IRQ_TYPE_NONE>,
-- <0x4 0xc8 0x0 IRQ_TYPE_NONE>,
-- <0x4 0xc9 0x0 IRQ_TYPE_NONE>,
-- <0x4 0xca 0x0 IRQ_TYPE_NONE>,
-- <0x4 0xcb 0x0 IRQ_TYPE_NONE>;
-+ interrupt-controller;
-+ #interrupt-cells = <2>;
- };
- };
-
-diff --git a/arch/arm64/boot/dts/qcom/sc7180.dtsi b/arch/arm64/boot/dts/qcom/sc7180.dtsi
-index 998f101ad623..eea92b314fc6 100644
---- a/arch/arm64/boot/dts/qcom/sc7180.dtsi
-+++ b/arch/arm64/boot/dts/qcom/sc7180.dtsi
-@@ -1657,8 +1657,7 @@
- pdc: interrupt-controller@b220000 {
- compatible = "qcom,sc7180-pdc", "qcom,pdc";
- reg = <0 0x0b220000 0 0x30000>;
-- qcom,pdc-ranges = <0 480 15>, <17 497 98>,
-- <119 634 4>, <124 639 1>;
-+ qcom,pdc-ranges = <0 480 94>, <94 609 31>, <125 63 1>;
- #interrupt-cells = <2>;
- interrupt-parent = <&intc>;
- interrupt-controller;
-diff --git a/arch/arm64/boot/dts/qcom/sdm850-lenovo-yoga-c630.dts b/arch/arm64/boot/dts/qcom/sdm850-lenovo-yoga-c630.dts
-index 51a670ad15b2..4b9860a2c8eb 100644
---- a/arch/arm64/boot/dts/qcom/sdm850-lenovo-yoga-c630.dts
-+++ b/arch/arm64/boot/dts/qcom/sdm850-lenovo-yoga-c630.dts
-@@ -577,3 +577,14 @@
- };
- };
- };
-+
-+&wifi {
-+ status = "okay";
-+
-+ vdd-0.8-cx-mx-supply = <&vreg_l5a_0p8>;
-+ vdd-1.8-xo-supply = <&vreg_l7a_1p8>;
-+ vdd-1.3-rfa-supply = <&vreg_l17a_1p3>;
-+ vdd-3.3-ch0-supply = <&vreg_l25a_3p3>;
-+
-+ qcom,snoc-host-cap-8bit-quirk;
-+};
-diff --git a/arch/arm64/boot/dts/qcom/sm8250.dtsi b/arch/arm64/boot/dts/qcom/sm8250.dtsi
-index 891d83b2afea..2a7eaefd221d 100644
---- a/arch/arm64/boot/dts/qcom/sm8250.dtsi
-+++ b/arch/arm64/boot/dts/qcom/sm8250.dtsi
-@@ -314,8 +314,8 @@
- };
-
- pdc: interrupt-controller@b220000 {
-- compatible = "qcom,sm8250-pdc";
-- reg = <0x0b220000 0x30000>, <0x17c000f0 0x60>;
-+ compatible = "qcom,sm8250-pdc", "qcom,pdc";
-+ reg = <0 0x0b220000 0 0x30000>, <0 0x17c000f0 0 0x60>;
- qcom,pdc-ranges = <0 480 94>, <94 609 31>,
- <125 63 1>, <126 716 12>;
- #interrupt-cells = <2>;
-diff --git a/arch/arm64/boot/dts/realtek/rtd1293-ds418j.dts b/arch/arm64/boot/dts/realtek/rtd1293-ds418j.dts
-index b2dd583146b4..b2e44c6c2d22 100644
---- a/arch/arm64/boot/dts/realtek/rtd1293-ds418j.dts
-+++ b/arch/arm64/boot/dts/realtek/rtd1293-ds418j.dts
-@@ -1,6 +1,6 @@
- // SPDX-License-Identifier: (GPL-2.0-or-later OR BSD-2-Clause)
- /*
-- * Copyright (c) 2017 Andreas Färber
-+ * Copyright (c) 2017-2019 Andreas Färber
- */
-
- /dts-v1/;
-@@ -11,9 +11,9 @@
- compatible = "synology,ds418j", "realtek,rtd1293";
- model = "Synology DiskStation DS418j";
-
-- memory@0 {
-+ memory@1f000 {
- device_type = "memory";
-- reg = <0x0 0x40000000>;
-+ reg = <0x1f000 0x3ffe1000>; /* boot ROM to 1 GiB */
- };
-
- aliases {
-diff --git a/arch/arm64/boot/dts/realtek/rtd1293.dtsi b/arch/arm64/boot/dts/realtek/rtd1293.dtsi
-index bd4e22723f7b..2d92b56ac94d 100644
---- a/arch/arm64/boot/dts/realtek/rtd1293.dtsi
-+++ b/arch/arm64/boot/dts/realtek/rtd1293.dtsi
-@@ -36,16 +36,20 @@
- timer {
- compatible = "arm,armv8-timer";
- interrupts = <GIC_PPI 13
-- (GIC_CPU_MASK_RAW(0xf) | IRQ_TYPE_LEVEL_LOW)>,
-+ (GIC_CPU_MASK_SIMPLE(2) | IRQ_TYPE_LEVEL_LOW)>,
- <GIC_PPI 14
-- (GIC_CPU_MASK_RAW(0xf) | IRQ_TYPE_LEVEL_LOW)>,
-+ (GIC_CPU_MASK_SIMPLE(2) | IRQ_TYPE_LEVEL_LOW)>,
- <GIC_PPI 11
-- (GIC_CPU_MASK_RAW(0xf) | IRQ_TYPE_LEVEL_LOW)>,
-+ (GIC_CPU_MASK_SIMPLE(2) | IRQ_TYPE_LEVEL_LOW)>,
- <GIC_PPI 10
-- (GIC_CPU_MASK_RAW(0xf) | IRQ_TYPE_LEVEL_LOW)>;
-+ (GIC_CPU_MASK_SIMPLE(2) | IRQ_TYPE_LEVEL_LOW)>;
- };
- };
-
- &arm_pmu {
- interrupt-affinity = <&cpu0>, <&cpu1>;
- };
-+
-+&gic {
-+ interrupts = <GIC_PPI 9 (GIC_CPU_MASK_SIMPLE(2) | IRQ_TYPE_LEVEL_LOW)>;
-+};
-diff --git a/arch/arm64/boot/dts/realtek/rtd1295-mele-v9.dts b/arch/arm64/boot/dts/realtek/rtd1295-mele-v9.dts
-index bd584e99fff9..cf4a57c012a8 100644
---- a/arch/arm64/boot/dts/realtek/rtd1295-mele-v9.dts
-+++ b/arch/arm64/boot/dts/realtek/rtd1295-mele-v9.dts
-@@ -1,5 +1,5 @@
- /*
-- * Copyright (c) 2017 Andreas Färber
-+ * Copyright (c) 2017-2019 Andreas Färber
- *
- * SPDX-License-Identifier: (GPL-2.0+ OR MIT)
- */
-@@ -12,9 +12,9 @@
- compatible = "mele,v9", "realtek,rtd1295";
- model = "MeLE V9";
-
-- memory@0 {
-+ memory@1f000 {
- device_type = "memory";
-- reg = <0x0 0x80000000>;
-+ reg = <0x1f000 0x7ffe1000>; /* boot ROM to 2 GiB */
- };
-
- aliases {
-diff --git a/arch/arm64/boot/dts/realtek/rtd1295-probox2-ava.dts b/arch/arm64/boot/dts/realtek/rtd1295-probox2-ava.dts
-index 8e2b0e75298a..14161c3f304d 100644
---- a/arch/arm64/boot/dts/realtek/rtd1295-probox2-ava.dts
-+++ b/arch/arm64/boot/dts/realtek/rtd1295-probox2-ava.dts
-@@ -1,5 +1,5 @@
- /*
-- * Copyright (c) 2017 Andreas Färber
-+ * Copyright (c) 2017-2019 Andreas Färber
- *
- * SPDX-License-Identifier: (GPL-2.0+ OR MIT)
- */
-@@ -12,9 +12,9 @@
- compatible = "probox2,ava", "realtek,rtd1295";
- model = "PROBOX2 AVA";
-
-- memory@0 {
-+ memory@1f000 {
- device_type = "memory";
-- reg = <0x0 0x80000000>;
-+ reg = <0x1f000 0x7ffe1000>; /* boot ROM to 2 GiB */
- };
-
- aliases {
-diff --git a/arch/arm64/boot/dts/realtek/rtd1295-zidoo-x9s.dts b/arch/arm64/boot/dts/realtek/rtd1295-zidoo-x9s.dts
-index e98e508b9514..4beb37bb9522 100644
---- a/arch/arm64/boot/dts/realtek/rtd1295-zidoo-x9s.dts
-+++ b/arch/arm64/boot/dts/realtek/rtd1295-zidoo-x9s.dts
-@@ -11,9 +11,9 @@
- compatible = "zidoo,x9s", "realtek,rtd1295";
- model = "Zidoo X9S";
-
-- memory@0 {
-+ memory@1f000 {
- device_type = "memory";
-- reg = <0x0 0x80000000>;
-+ reg = <0x1f000 0x7ffe1000>; /* boot ROM to 2 GiB */
- };
-
- aliases {
-diff --git a/arch/arm64/boot/dts/realtek/rtd1295.dtsi b/arch/arm64/boot/dts/realtek/rtd1295.dtsi
-index 93f0e1d97721..1402abe80ea1 100644
---- a/arch/arm64/boot/dts/realtek/rtd1295.dtsi
-+++ b/arch/arm64/boot/dts/realtek/rtd1295.dtsi
-@@ -2,7 +2,7 @@
- /*
- * Realtek RTD1295 SoC
- *
-- * Copyright (c) 2016-2017 Andreas Färber
-+ * Copyright (c) 2016-2019 Andreas Färber
- */
-
- #include "rtd129x.dtsi"
-@@ -47,27 +47,16 @@
- };
- };
-
-- reserved-memory {
-- #address-cells = <1>;
-- #size-cells = <1>;
-- ranges;
--
-- tee@10100000 {
-- reg = <0x10100000 0xf00000>;
-- no-map;
-- };
-- };
--
- timer {
- compatible = "arm,armv8-timer";
- interrupts = <GIC_PPI 13
-- (GIC_CPU_MASK_RAW(0xf) | IRQ_TYPE_LEVEL_LOW)>,
-+ (GIC_CPU_MASK_SIMPLE(4) | IRQ_TYPE_LEVEL_LOW)>,
- <GIC_PPI 14
-- (GIC_CPU_MASK_RAW(0xf) | IRQ_TYPE_LEVEL_LOW)>,
-+ (GIC_CPU_MASK_SIMPLE(4) | IRQ_TYPE_LEVEL_LOW)>,
- <GIC_PPI 11
-- (GIC_CPU_MASK_RAW(0xf) | IRQ_TYPE_LEVEL_LOW)>,
-+ (GIC_CPU_MASK_SIMPLE(4) | IRQ_TYPE_LEVEL_LOW)>,
- <GIC_PPI 10
-- (GIC_CPU_MASK_RAW(0xf) | IRQ_TYPE_LEVEL_LOW)>;
-+ (GIC_CPU_MASK_SIMPLE(4) | IRQ_TYPE_LEVEL_LOW)>;
- };
- };
-
-diff --git a/arch/arm64/boot/dts/realtek/rtd1296-ds418.dts b/arch/arm64/boot/dts/realtek/rtd1296-ds418.dts
-index 5a051a52bf88..cc706d13da8b 100644
---- a/arch/arm64/boot/dts/realtek/rtd1296-ds418.dts
-+++ b/arch/arm64/boot/dts/realtek/rtd1296-ds418.dts
-@@ -11,9 +11,9 @@
- compatible = "synology,ds418", "realtek,rtd1296";
- model = "Synology DiskStation DS418";
-
-- memory@0 {
-+ memory@1f000 {
- device_type = "memory";
-- reg = <0x0 0x80000000>;
-+ reg = <0x1f000 0x7ffe1000>; /* boot ROM to 2 GiB */
- };
-
- aliases {
-diff --git a/arch/arm64/boot/dts/realtek/rtd1296.dtsi b/arch/arm64/boot/dts/realtek/rtd1296.dtsi
-index 0f9e59cac086..fb864a139c97 100644
---- a/arch/arm64/boot/dts/realtek/rtd1296.dtsi
-+++ b/arch/arm64/boot/dts/realtek/rtd1296.dtsi
-@@ -50,13 +50,13 @@
- timer {
- compatible = "arm,armv8-timer";
- interrupts = <GIC_PPI 13
-- (GIC_CPU_MASK_RAW(0xf) | IRQ_TYPE_LEVEL_LOW)>,
-+ (GIC_CPU_MASK_SIMPLE(4) | IRQ_TYPE_LEVEL_LOW)>,
- <GIC_PPI 14
-- (GIC_CPU_MASK_RAW(0xf) | IRQ_TYPE_LEVEL_LOW)>,
-+ (GIC_CPU_MASK_SIMPLE(4) | IRQ_TYPE_LEVEL_LOW)>,
- <GIC_PPI 11
-- (GIC_CPU_MASK_RAW(0xf) | IRQ_TYPE_LEVEL_LOW)>,
-+ (GIC_CPU_MASK_SIMPLE(4) | IRQ_TYPE_LEVEL_LOW)>,
- <GIC_PPI 10
-- (GIC_CPU_MASK_RAW(0xf) | IRQ_TYPE_LEVEL_LOW)>;
-+ (GIC_CPU_MASK_SIMPLE(4) | IRQ_TYPE_LEVEL_LOW)>;
- };
- };
-
-diff --git a/arch/arm64/boot/dts/realtek/rtd129x.dtsi b/arch/arm64/boot/dts/realtek/rtd129x.dtsi
-index 4433114476f5..b63d0c03597a 100644
---- a/arch/arm64/boot/dts/realtek/rtd129x.dtsi
-+++ b/arch/arm64/boot/dts/realtek/rtd129x.dtsi
-@@ -2,14 +2,12 @@
- /*
- * Realtek RTD1293/RTD1295/RTD1296 SoC
- *
-- * Copyright (c) 2016-2017 Andreas Färber
-+ * Copyright (c) 2016-2019 Andreas Färber
- */
-
--/memreserve/ 0x0000000000000000 0x0000000000030000;
--/memreserve/ 0x000000000001f000 0x0000000000001000;
--/memreserve/ 0x0000000000030000 0x00000000000d0000;
-+/memreserve/ 0x0000000000000000 0x000000000001f000;
-+/memreserve/ 0x000000000001f000 0x00000000000e1000;
- /memreserve/ 0x0000000001b00000 0x00000000004be000;
--/memreserve/ 0x0000000001ffe000 0x0000000000004000;
-
- #include <dt-bindings/interrupt-controller/arm-gic.h>
- #include <dt-bindings/reset/realtek,rtd1295.h>
-@@ -19,6 +17,25 @@
- #address-cells = <1>;
- #size-cells = <1>;
-
-+ reserved-memory {
-+ #address-cells = <1>;
-+ #size-cells = <1>;
-+ ranges;
-+
-+ rpc_comm: rpc@1f000 {
-+ reg = <0x1f000 0x1000>;
-+ };
-+
-+ rpc_ringbuf: rpc@1ffe000 {
-+ reg = <0x1ffe000 0x4000>;
-+ };
-+
-+ tee: tee@10100000 {
-+ reg = <0x10100000 0xf00000>;
-+ no-map;
-+ };
-+ };
-+
- arm_pmu: arm-pmu {
- compatible = "arm,cortex-a53-pmu";
- interrupts = <GIC_SPI 48 IRQ_TYPE_LEVEL_HIGH>;
-@@ -35,8 +52,9 @@
- compatible = "simple-bus";
- #address-cells = <1>;
- #size-cells = <1>;
-- /* Exclude up to 2 GiB of RAM */
-- ranges = <0x80000000 0x80000000 0x80000000>;
-+ ranges = <0x00000000 0x00000000 0x0001f000>, /* boot ROM */
-+ /* Exclude up to 2 GiB of RAM */
-+ <0x80000000 0x80000000 0x80000000>;
-
- reset1: reset-controller@98000000 {
- compatible = "snps,dw-low-reset";
-diff --git a/arch/arm64/boot/dts/renesas/r8a774a1.dtsi b/arch/arm64/boot/dts/renesas/r8a774a1.dtsi
-index 79023433a740..a603d947970e 100644
---- a/arch/arm64/boot/dts/renesas/r8a774a1.dtsi
-+++ b/arch/arm64/boot/dts/renesas/r8a774a1.dtsi
-@@ -1000,7 +1000,7 @@
- <&ipmmu_ds1 30>, <&ipmmu_ds1 31>;
- };
-
-- ipmmu_ds0: mmu@e6740000 {
-+ ipmmu_ds0: iommu@e6740000 {
- compatible = "renesas,ipmmu-r8a774a1";
- reg = <0 0xe6740000 0 0x1000>;
- renesas,ipmmu-main = <&ipmmu_mm 0>;
-@@ -1008,7 +1008,7 @@
- #iommu-cells = <1>;
- };
-
-- ipmmu_ds1: mmu@e7740000 {
-+ ipmmu_ds1: iommu@e7740000 {
- compatible = "renesas,ipmmu-r8a774a1";
- reg = <0 0xe7740000 0 0x1000>;
- renesas,ipmmu-main = <&ipmmu_mm 1>;
-@@ -1016,7 +1016,7 @@
- #iommu-cells = <1>;
- };
-
-- ipmmu_hc: mmu@e6570000 {
-+ ipmmu_hc: iommu@e6570000 {
- compatible = "renesas,ipmmu-r8a774a1";
- reg = <0 0xe6570000 0 0x1000>;
- renesas,ipmmu-main = <&ipmmu_mm 2>;
-@@ -1024,7 +1024,7 @@
- #iommu-cells = <1>;
- };
-
-- ipmmu_mm: mmu@e67b0000 {
-+ ipmmu_mm: iommu@e67b0000 {
- compatible = "renesas,ipmmu-r8a774a1";
- reg = <0 0xe67b0000 0 0x1000>;
- interrupts = <GIC_SPI 196 IRQ_TYPE_LEVEL_HIGH>,
-@@ -1033,7 +1033,7 @@
- #iommu-cells = <1>;
- };
-
-- ipmmu_mp: mmu@ec670000 {
-+ ipmmu_mp: iommu@ec670000 {
- compatible = "renesas,ipmmu-r8a774a1";
- reg = <0 0xec670000 0 0x1000>;
- renesas,ipmmu-main = <&ipmmu_mm 4>;
-@@ -1041,7 +1041,7 @@
- #iommu-cells = <1>;
- };
-
-- ipmmu_pv0: mmu@fd800000 {
-+ ipmmu_pv0: iommu@fd800000 {
- compatible = "renesas,ipmmu-r8a774a1";
- reg = <0 0xfd800000 0 0x1000>;
- renesas,ipmmu-main = <&ipmmu_mm 5>;
-@@ -1049,7 +1049,7 @@
- #iommu-cells = <1>;
- };
-
-- ipmmu_pv1: mmu@fd950000 {
-+ ipmmu_pv1: iommu@fd950000 {
- compatible = "renesas,ipmmu-r8a774a1";
- reg = <0 0xfd950000 0 0x1000>;
- renesas,ipmmu-main = <&ipmmu_mm 6>;
-@@ -1057,7 +1057,7 @@
- #iommu-cells = <1>;
- };
-
-- ipmmu_vc0: mmu@fe6b0000 {
-+ ipmmu_vc0: iommu@fe6b0000 {
- compatible = "renesas,ipmmu-r8a774a1";
- reg = <0 0xfe6b0000 0 0x1000>;
- renesas,ipmmu-main = <&ipmmu_mm 8>;
-@@ -1065,7 +1065,7 @@
- #iommu-cells = <1>;
- };
-
-- ipmmu_vi0: mmu@febd0000 {
-+ ipmmu_vi0: iommu@febd0000 {
- compatible = "renesas,ipmmu-r8a774a1";
- reg = <0 0xfebd0000 0 0x1000>;
- renesas,ipmmu-main = <&ipmmu_mm 9>;
-diff --git a/arch/arm64/boot/dts/renesas/r8a774b1.dtsi b/arch/arm64/boot/dts/renesas/r8a774b1.dtsi
-index 3137f735974b..1e51855c7cd3 100644
---- a/arch/arm64/boot/dts/renesas/r8a774b1.dtsi
-+++ b/arch/arm64/boot/dts/renesas/r8a774b1.dtsi
-@@ -874,7 +874,7 @@
- <&ipmmu_ds1 30>, <&ipmmu_ds1 31>;
- };
-
-- ipmmu_ds0: mmu@e6740000 {
-+ ipmmu_ds0: iommu@e6740000 {
- compatible = "renesas,ipmmu-r8a774b1";
- reg = <0 0xe6740000 0 0x1000>;
- renesas,ipmmu-main = <&ipmmu_mm 0>;
-@@ -882,7 +882,7 @@
- #iommu-cells = <1>;
- };
-
-- ipmmu_ds1: mmu@e7740000 {
-+ ipmmu_ds1: iommu@e7740000 {
- compatible = "renesas,ipmmu-r8a774b1";
- reg = <0 0xe7740000 0 0x1000>;
- renesas,ipmmu-main = <&ipmmu_mm 1>;
-@@ -890,7 +890,7 @@
- #iommu-cells = <1>;
- };
-
-- ipmmu_hc: mmu@e6570000 {
-+ ipmmu_hc: iommu@e6570000 {
- compatible = "renesas,ipmmu-r8a774b1";
- reg = <0 0xe6570000 0 0x1000>;
- renesas,ipmmu-main = <&ipmmu_mm 2>;
-@@ -898,7 +898,7 @@
- #iommu-cells = <1>;
- };
-
-- ipmmu_mm: mmu@e67b0000 {
-+ ipmmu_mm: iommu@e67b0000 {
- compatible = "renesas,ipmmu-r8a774b1";
- reg = <0 0xe67b0000 0 0x1000>;
- interrupts = <GIC_SPI 196 IRQ_TYPE_LEVEL_HIGH>,
-@@ -907,7 +907,7 @@
- #iommu-cells = <1>;
- };
-
-- ipmmu_mp: mmu@ec670000 {
-+ ipmmu_mp: iommu@ec670000 {
- compatible = "renesas,ipmmu-r8a774b1";
- reg = <0 0xec670000 0 0x1000>;
- renesas,ipmmu-main = <&ipmmu_mm 4>;
-@@ -915,7 +915,7 @@
- #iommu-cells = <1>;
- };
-
-- ipmmu_pv0: mmu@fd800000 {
-+ ipmmu_pv0: iommu@fd800000 {
- compatible = "renesas,ipmmu-r8a774b1";
- reg = <0 0xfd800000 0 0x1000>;
- renesas,ipmmu-main = <&ipmmu_mm 6>;
-@@ -923,7 +923,7 @@
- #iommu-cells = <1>;
- };
-
-- ipmmu_vc0: mmu@fe6b0000 {
-+ ipmmu_vc0: iommu@fe6b0000 {
- compatible = "renesas,ipmmu-r8a774b1";
- reg = <0 0xfe6b0000 0 0x1000>;
- renesas,ipmmu-main = <&ipmmu_mm 12>;
-@@ -931,7 +931,7 @@
- #iommu-cells = <1>;
- };
-
-- ipmmu_vi0: mmu@febd0000 {
-+ ipmmu_vi0: iommu@febd0000 {
- compatible = "renesas,ipmmu-r8a774b1";
- reg = <0 0xfebd0000 0 0x1000>;
- renesas,ipmmu-main = <&ipmmu_mm 14>;
-@@ -939,7 +939,7 @@
- #iommu-cells = <1>;
- };
-
-- ipmmu_vp0: mmu@fe990000 {
-+ ipmmu_vp0: iommu@fe990000 {
- compatible = "renesas,ipmmu-r8a774b1";
- reg = <0 0xfe990000 0 0x1000>;
- renesas,ipmmu-main = <&ipmmu_mm 16>;
-diff --git a/arch/arm64/boot/dts/renesas/r8a774c0.dtsi b/arch/arm64/boot/dts/renesas/r8a774c0.dtsi
-index 22785cbddff5..5c72a7efbb03 100644
---- a/arch/arm64/boot/dts/renesas/r8a774c0.dtsi
-+++ b/arch/arm64/boot/dts/renesas/r8a774c0.dtsi
-@@ -847,7 +847,7 @@
- <&ipmmu_ds1 30>, <&ipmmu_ds1 31>;
- };
-
-- ipmmu_ds0: mmu@e6740000 {
-+ ipmmu_ds0: iommu@e6740000 {
- compatible = "renesas,ipmmu-r8a774c0";
- reg = <0 0xe6740000 0 0x1000>;
- renesas,ipmmu-main = <&ipmmu_mm 0>;
-@@ -855,7 +855,7 @@
- #iommu-cells = <1>;
- };
-
-- ipmmu_ds1: mmu@e7740000 {
-+ ipmmu_ds1: iommu@e7740000 {
- compatible = "renesas,ipmmu-r8a774c0";
- reg = <0 0xe7740000 0 0x1000>;
- renesas,ipmmu-main = <&ipmmu_mm 1>;
-@@ -863,7 +863,7 @@
- #iommu-cells = <1>;
- };
-
-- ipmmu_hc: mmu@e6570000 {
-+ ipmmu_hc: iommu@e6570000 {
- compatible = "renesas,ipmmu-r8a774c0";
- reg = <0 0xe6570000 0 0x1000>;
- renesas,ipmmu-main = <&ipmmu_mm 2>;
-@@ -871,7 +871,7 @@
- #iommu-cells = <1>;
- };
-
-- ipmmu_mm: mmu@e67b0000 {
-+ ipmmu_mm: iommu@e67b0000 {
- compatible = "renesas,ipmmu-r8a774c0";
- reg = <0 0xe67b0000 0 0x1000>;
- interrupts = <GIC_SPI 196 IRQ_TYPE_LEVEL_HIGH>,
-@@ -880,7 +880,7 @@
- #iommu-cells = <1>;
- };
-
-- ipmmu_mp: mmu@ec670000 {
-+ ipmmu_mp: iommu@ec670000 {
- compatible = "renesas,ipmmu-r8a774c0";
- reg = <0 0xec670000 0 0x1000>;
- renesas,ipmmu-main = <&ipmmu_mm 4>;
-@@ -888,7 +888,7 @@
- #iommu-cells = <1>;
- };
-
-- ipmmu_pv0: mmu@fd800000 {
-+ ipmmu_pv0: iommu@fd800000 {
- compatible = "renesas,ipmmu-r8a774c0";
- reg = <0 0xfd800000 0 0x1000>;
- renesas,ipmmu-main = <&ipmmu_mm 6>;
-@@ -896,7 +896,7 @@
- #iommu-cells = <1>;
- };
-
-- ipmmu_vc0: mmu@fe6b0000 {
-+ ipmmu_vc0: iommu@fe6b0000 {
- compatible = "renesas,ipmmu-r8a774c0";
- reg = <0 0xfe6b0000 0 0x1000>;
- renesas,ipmmu-main = <&ipmmu_mm 12>;
-@@ -904,7 +904,7 @@
- #iommu-cells = <1>;
- };
-
-- ipmmu_vi0: mmu@febd0000 {
-+ ipmmu_vi0: iommu@febd0000 {
- compatible = "renesas,ipmmu-r8a774c0";
- reg = <0 0xfebd0000 0 0x1000>;
- renesas,ipmmu-main = <&ipmmu_mm 14>;
-@@ -912,7 +912,7 @@
- #iommu-cells = <1>;
- };
-
-- ipmmu_vp0: mmu@fe990000 {
-+ ipmmu_vp0: iommu@fe990000 {
- compatible = "renesas,ipmmu-r8a774c0";
- reg = <0 0xfe990000 0 0x1000>;
- renesas,ipmmu-main = <&ipmmu_mm 16>;
-diff --git a/arch/arm64/boot/dts/renesas/r8a77950.dtsi b/arch/arm64/boot/dts/renesas/r8a77950.dtsi
-index 3975eecd50c4..d716c4386ae9 100644
---- a/arch/arm64/boot/dts/renesas/r8a77950.dtsi
-+++ b/arch/arm64/boot/dts/renesas/r8a77950.dtsi
-@@ -77,7 +77,7 @@
- /delete-node/ dma-controller@e6460000;
- /delete-node/ dma-controller@e6470000;
-
-- ipmmu_mp1: mmu@ec680000 {
-+ ipmmu_mp1: iommu@ec680000 {
- compatible = "renesas,ipmmu-r8a7795";
- reg = <0 0xec680000 0 0x1000>;
- renesas,ipmmu-main = <&ipmmu_mm 5>;
-@@ -85,7 +85,7 @@
- #iommu-cells = <1>;
- };
-
-- ipmmu_sy: mmu@e7730000 {
-+ ipmmu_sy: iommu@e7730000 {
- compatible = "renesas,ipmmu-r8a7795";
- reg = <0 0xe7730000 0 0x1000>;
- renesas,ipmmu-main = <&ipmmu_mm 8>;
-@@ -93,11 +93,11 @@
- #iommu-cells = <1>;
- };
-
-- /delete-node/ mmu@fd950000;
-- /delete-node/ mmu@fd960000;
-- /delete-node/ mmu@fd970000;
-- /delete-node/ mmu@febe0000;
-- /delete-node/ mmu@fe980000;
-+ /delete-node/ iommu@fd950000;
-+ /delete-node/ iommu@fd960000;
-+ /delete-node/ iommu@fd970000;
-+ /delete-node/ iommu@febe0000;
-+ /delete-node/ iommu@fe980000;
-
- xhci1: usb@ee040000 {
- compatible = "renesas,xhci-r8a7795", "renesas,rcar-gen3-xhci";
-diff --git a/arch/arm64/boot/dts/renesas/r8a77951.dtsi b/arch/arm64/boot/dts/renesas/r8a77951.dtsi
-index 52229546454c..61d67d9714ab 100644
---- a/arch/arm64/boot/dts/renesas/r8a77951.dtsi
-+++ b/arch/arm64/boot/dts/renesas/r8a77951.dtsi
-@@ -1073,7 +1073,7 @@
- <&ipmmu_ds1 30>, <&ipmmu_ds1 31>;
- };
-
-- ipmmu_ds0: mmu@e6740000 {
-+ ipmmu_ds0: iommu@e6740000 {
- compatible = "renesas,ipmmu-r8a7795";
- reg = <0 0xe6740000 0 0x1000>;
- renesas,ipmmu-main = <&ipmmu_mm 0>;
-@@ -1081,7 +1081,7 @@
- #iommu-cells = <1>;
- };
-
-- ipmmu_ds1: mmu@e7740000 {
-+ ipmmu_ds1: iommu@e7740000 {
- compatible = "renesas,ipmmu-r8a7795";
- reg = <0 0xe7740000 0 0x1000>;
- renesas,ipmmu-main = <&ipmmu_mm 1>;
-@@ -1089,7 +1089,7 @@
- #iommu-cells = <1>;
- };
-
-- ipmmu_hc: mmu@e6570000 {
-+ ipmmu_hc: iommu@e6570000 {
- compatible = "renesas,ipmmu-r8a7795";
- reg = <0 0xe6570000 0 0x1000>;
- renesas,ipmmu-main = <&ipmmu_mm 2>;
-@@ -1097,7 +1097,7 @@
- #iommu-cells = <1>;
- };
-
-- ipmmu_ir: mmu@ff8b0000 {
-+ ipmmu_ir: iommu@ff8b0000 {
- compatible = "renesas,ipmmu-r8a7795";
- reg = <0 0xff8b0000 0 0x1000>;
- renesas,ipmmu-main = <&ipmmu_mm 3>;
-@@ -1105,7 +1105,7 @@
- #iommu-cells = <1>;
- };
-
-- ipmmu_mm: mmu@e67b0000 {
-+ ipmmu_mm: iommu@e67b0000 {
- compatible = "renesas,ipmmu-r8a7795";
- reg = <0 0xe67b0000 0 0x1000>;
- interrupts = <GIC_SPI 196 IRQ_TYPE_LEVEL_HIGH>,
-@@ -1114,7 +1114,7 @@
- #iommu-cells = <1>;
- };
-
-- ipmmu_mp0: mmu@ec670000 {
-+ ipmmu_mp0: iommu@ec670000 {
- compatible = "renesas,ipmmu-r8a7795";
- reg = <0 0xec670000 0 0x1000>;
- renesas,ipmmu-main = <&ipmmu_mm 4>;
-@@ -1122,7 +1122,7 @@
- #iommu-cells = <1>;
- };
-
-- ipmmu_pv0: mmu@fd800000 {
-+ ipmmu_pv0: iommu@fd800000 {
- compatible = "renesas,ipmmu-r8a7795";
- reg = <0 0xfd800000 0 0x1000>;
- renesas,ipmmu-main = <&ipmmu_mm 6>;
-@@ -1130,7 +1130,7 @@
- #iommu-cells = <1>;
- };
-
-- ipmmu_pv1: mmu@fd950000 {
-+ ipmmu_pv1: iommu@fd950000 {
- compatible = "renesas,ipmmu-r8a7795";
- reg = <0 0xfd950000 0 0x1000>;
- renesas,ipmmu-main = <&ipmmu_mm 7>;
-@@ -1138,7 +1138,7 @@
- #iommu-cells = <1>;
- };
-
-- ipmmu_pv2: mmu@fd960000 {
-+ ipmmu_pv2: iommu@fd960000 {
- compatible = "renesas,ipmmu-r8a7795";
- reg = <0 0xfd960000 0 0x1000>;
- renesas,ipmmu-main = <&ipmmu_mm 8>;
-@@ -1146,7 +1146,7 @@
- #iommu-cells = <1>;
- };
-
-- ipmmu_pv3: mmu@fd970000 {
-+ ipmmu_pv3: iommu@fd970000 {
- compatible = "renesas,ipmmu-r8a7795";
- reg = <0 0xfd970000 0 0x1000>;
- renesas,ipmmu-main = <&ipmmu_mm 9>;
-@@ -1154,7 +1154,7 @@
- #iommu-cells = <1>;
- };
-
-- ipmmu_rt: mmu@ffc80000 {
-+ ipmmu_rt: iommu@ffc80000 {
- compatible = "renesas,ipmmu-r8a7795";
- reg = <0 0xffc80000 0 0x1000>;
- renesas,ipmmu-main = <&ipmmu_mm 10>;
-@@ -1162,7 +1162,7 @@
- #iommu-cells = <1>;
- };
-
-- ipmmu_vc0: mmu@fe6b0000 {
-+ ipmmu_vc0: iommu@fe6b0000 {
- compatible = "renesas,ipmmu-r8a7795";
- reg = <0 0xfe6b0000 0 0x1000>;
- renesas,ipmmu-main = <&ipmmu_mm 12>;
-@@ -1170,7 +1170,7 @@
- #iommu-cells = <1>;
- };
-
-- ipmmu_vc1: mmu@fe6f0000 {
-+ ipmmu_vc1: iommu@fe6f0000 {
- compatible = "renesas,ipmmu-r8a7795";
- reg = <0 0xfe6f0000 0 0x1000>;
- renesas,ipmmu-main = <&ipmmu_mm 13>;
-@@ -1178,7 +1178,7 @@
- #iommu-cells = <1>;
- };
-
-- ipmmu_vi0: mmu@febd0000 {
-+ ipmmu_vi0: iommu@febd0000 {
- compatible = "renesas,ipmmu-r8a7795";
- reg = <0 0xfebd0000 0 0x1000>;
- renesas,ipmmu-main = <&ipmmu_mm 14>;
-@@ -1186,7 +1186,7 @@
- #iommu-cells = <1>;
- };
-
-- ipmmu_vi1: mmu@febe0000 {
-+ ipmmu_vi1: iommu@febe0000 {
- compatible = "renesas,ipmmu-r8a7795";
- reg = <0 0xfebe0000 0 0x1000>;
- renesas,ipmmu-main = <&ipmmu_mm 15>;
-@@ -1194,7 +1194,7 @@
- #iommu-cells = <1>;
- };
-
-- ipmmu_vp0: mmu@fe990000 {
-+ ipmmu_vp0: iommu@fe990000 {
- compatible = "renesas,ipmmu-r8a7795";
- reg = <0 0xfe990000 0 0x1000>;
- renesas,ipmmu-main = <&ipmmu_mm 16>;
-@@ -1202,7 +1202,7 @@
- #iommu-cells = <1>;
- };
-
-- ipmmu_vp1: mmu@fe980000 {
-+ ipmmu_vp1: iommu@fe980000 {
- compatible = "renesas,ipmmu-r8a7795";
- reg = <0 0xfe980000 0 0x1000>;
- renesas,ipmmu-main = <&ipmmu_mm 17>;
-diff --git a/arch/arm64/boot/dts/renesas/r8a77960.dtsi b/arch/arm64/boot/dts/renesas/r8a77960.dtsi
-index 31282367d3ac..33bf62acffbb 100644
---- a/arch/arm64/boot/dts/renesas/r8a77960.dtsi
-+++ b/arch/arm64/boot/dts/renesas/r8a77960.dtsi
-@@ -997,7 +997,7 @@
- <&ipmmu_ds1 30>, <&ipmmu_ds1 31>;
- };
-
-- ipmmu_ds0: mmu@e6740000 {
-+ ipmmu_ds0: iommu@e6740000 {
- compatible = "renesas,ipmmu-r8a7796";
- reg = <0 0xe6740000 0 0x1000>;
- renesas,ipmmu-main = <&ipmmu_mm 0>;
-@@ -1005,7 +1005,7 @@
- #iommu-cells = <1>;
- };
-
-- ipmmu_ds1: mmu@e7740000 {
-+ ipmmu_ds1: iommu@e7740000 {
- compatible = "renesas,ipmmu-r8a7796";
- reg = <0 0xe7740000 0 0x1000>;
- renesas,ipmmu-main = <&ipmmu_mm 1>;
-@@ -1013,7 +1013,7 @@
- #iommu-cells = <1>;
- };
-
-- ipmmu_hc: mmu@e6570000 {
-+ ipmmu_hc: iommu@e6570000 {
- compatible = "renesas,ipmmu-r8a7796";
- reg = <0 0xe6570000 0 0x1000>;
- renesas,ipmmu-main = <&ipmmu_mm 2>;
-@@ -1021,7 +1021,7 @@
- #iommu-cells = <1>;
- };
-
-- ipmmu_ir: mmu@ff8b0000 {
-+ ipmmu_ir: iommu@ff8b0000 {
- compatible = "renesas,ipmmu-r8a7796";
- reg = <0 0xff8b0000 0 0x1000>;
- renesas,ipmmu-main = <&ipmmu_mm 3>;
-@@ -1029,7 +1029,7 @@
- #iommu-cells = <1>;
- };
-
-- ipmmu_mm: mmu@e67b0000 {
-+ ipmmu_mm: iommu@e67b0000 {
- compatible = "renesas,ipmmu-r8a7796";
- reg = <0 0xe67b0000 0 0x1000>;
- interrupts = <GIC_SPI 196 IRQ_TYPE_LEVEL_HIGH>,
-@@ -1038,7 +1038,7 @@
- #iommu-cells = <1>;
- };
-
-- ipmmu_mp: mmu@ec670000 {
-+ ipmmu_mp: iommu@ec670000 {
- compatible = "renesas,ipmmu-r8a7796";
- reg = <0 0xec670000 0 0x1000>;
- renesas,ipmmu-main = <&ipmmu_mm 4>;
-@@ -1046,7 +1046,7 @@
- #iommu-cells = <1>;
- };
-
-- ipmmu_pv0: mmu@fd800000 {
-+ ipmmu_pv0: iommu@fd800000 {
- compatible = "renesas,ipmmu-r8a7796";
- reg = <0 0xfd800000 0 0x1000>;
- renesas,ipmmu-main = <&ipmmu_mm 5>;
-@@ -1054,7 +1054,7 @@
- #iommu-cells = <1>;
- };
-
-- ipmmu_pv1: mmu@fd950000 {
-+ ipmmu_pv1: iommu@fd950000 {
- compatible = "renesas,ipmmu-r8a7796";
- reg = <0 0xfd950000 0 0x1000>;
- renesas,ipmmu-main = <&ipmmu_mm 6>;
-@@ -1062,7 +1062,7 @@
- #iommu-cells = <1>;
- };
-
-- ipmmu_rt: mmu@ffc80000 {
-+ ipmmu_rt: iommu@ffc80000 {
- compatible = "renesas,ipmmu-r8a7796";
- reg = <0 0xffc80000 0 0x1000>;
- renesas,ipmmu-main = <&ipmmu_mm 7>;
-@@ -1070,7 +1070,7 @@
- #iommu-cells = <1>;
- };
-
-- ipmmu_vc0: mmu@fe6b0000 {
-+ ipmmu_vc0: iommu@fe6b0000 {
- compatible = "renesas,ipmmu-r8a7796";
- reg = <0 0xfe6b0000 0 0x1000>;
- renesas,ipmmu-main = <&ipmmu_mm 8>;
-@@ -1078,7 +1078,7 @@
- #iommu-cells = <1>;
- };
-
-- ipmmu_vi0: mmu@febd0000 {
-+ ipmmu_vi0: iommu@febd0000 {
- compatible = "renesas,ipmmu-r8a7796";
- reg = <0 0xfebd0000 0 0x1000>;
- renesas,ipmmu-main = <&ipmmu_mm 9>;
-diff --git a/arch/arm64/boot/dts/renesas/r8a77965.dtsi b/arch/arm64/boot/dts/renesas/r8a77965.dtsi
-index d82dd4e67b62..6f7ab39fd282 100644
---- a/arch/arm64/boot/dts/renesas/r8a77965.dtsi
-+++ b/arch/arm64/boot/dts/renesas/r8a77965.dtsi
-@@ -867,7 +867,7 @@
- <&ipmmu_ds1 30>, <&ipmmu_ds1 31>;
- };
-
-- ipmmu_ds0: mmu@e6740000 {
-+ ipmmu_ds0: iommu@e6740000 {
- compatible = "renesas,ipmmu-r8a77965";
- reg = <0 0xe6740000 0 0x1000>;
- renesas,ipmmu-main = <&ipmmu_mm 0>;
-@@ -875,7 +875,7 @@
- #iommu-cells = <1>;
- };
-
-- ipmmu_ds1: mmu@e7740000 {
-+ ipmmu_ds1: iommu@e7740000 {
- compatible = "renesas,ipmmu-r8a77965";
- reg = <0 0xe7740000 0 0x1000>;
- renesas,ipmmu-main = <&ipmmu_mm 1>;
-@@ -883,7 +883,7 @@
- #iommu-cells = <1>;
- };
-
-- ipmmu_hc: mmu@e6570000 {
-+ ipmmu_hc: iommu@e6570000 {
- compatible = "renesas,ipmmu-r8a77965";
- reg = <0 0xe6570000 0 0x1000>;
- renesas,ipmmu-main = <&ipmmu_mm 2>;
-@@ -891,7 +891,7 @@
- #iommu-cells = <1>;
- };
-
-- ipmmu_mm: mmu@e67b0000 {
-+ ipmmu_mm: iommu@e67b0000 {
- compatible = "renesas,ipmmu-r8a77965";
- reg = <0 0xe67b0000 0 0x1000>;
- interrupts = <GIC_SPI 196 IRQ_TYPE_LEVEL_HIGH>,
-@@ -900,7 +900,7 @@
- #iommu-cells = <1>;
- };
-
-- ipmmu_mp: mmu@ec670000 {
-+ ipmmu_mp: iommu@ec670000 {
- compatible = "renesas,ipmmu-r8a77965";
- reg = <0 0xec670000 0 0x1000>;
- renesas,ipmmu-main = <&ipmmu_mm 4>;
-@@ -908,7 +908,7 @@
- #iommu-cells = <1>;
- };
-
-- ipmmu_pv0: mmu@fd800000 {
-+ ipmmu_pv0: iommu@fd800000 {
- compatible = "renesas,ipmmu-r8a77965";
- reg = <0 0xfd800000 0 0x1000>;
- renesas,ipmmu-main = <&ipmmu_mm 6>;
-@@ -916,7 +916,7 @@
- #iommu-cells = <1>;
- };
-
-- ipmmu_rt: mmu@ffc80000 {
-+ ipmmu_rt: iommu@ffc80000 {
- compatible = "renesas,ipmmu-r8a77965";
- reg = <0 0xffc80000 0 0x1000>;
- renesas,ipmmu-main = <&ipmmu_mm 10>;
-@@ -924,7 +924,7 @@
- #iommu-cells = <1>;
- };
-
-- ipmmu_vc0: mmu@fe6b0000 {
-+ ipmmu_vc0: iommu@fe6b0000 {
- compatible = "renesas,ipmmu-r8a77965";
- reg = <0 0xfe6b0000 0 0x1000>;
- renesas,ipmmu-main = <&ipmmu_mm 12>;
-@@ -932,7 +932,7 @@
- #iommu-cells = <1>;
- };
-
-- ipmmu_vi0: mmu@febd0000 {
-+ ipmmu_vi0: iommu@febd0000 {
- compatible = "renesas,ipmmu-r8a77965";
- reg = <0 0xfebd0000 0 0x1000>;
- renesas,ipmmu-main = <&ipmmu_mm 14>;
-@@ -940,7 +940,7 @@
- #iommu-cells = <1>;
- };
-
-- ipmmu_vp0: mmu@fe990000 {
-+ ipmmu_vp0: iommu@fe990000 {
- compatible = "renesas,ipmmu-r8a77965";
- reg = <0 0xfe990000 0 0x1000>;
- renesas,ipmmu-main = <&ipmmu_mm 16>;
-diff --git a/arch/arm64/boot/dts/renesas/r8a77970.dtsi b/arch/arm64/boot/dts/renesas/r8a77970.dtsi
-index a009c0ebc8b4..bd95ecb1b40d 100644
---- a/arch/arm64/boot/dts/renesas/r8a77970.dtsi
-+++ b/arch/arm64/boot/dts/renesas/r8a77970.dtsi
-@@ -985,7 +985,7 @@
- <&ipmmu_ds1 22>, <&ipmmu_ds1 23>;
- };
-
-- ipmmu_ds1: mmu@e7740000 {
-+ ipmmu_ds1: iommu@e7740000 {
- compatible = "renesas,ipmmu-r8a77970";
- reg = <0 0xe7740000 0 0x1000>;
- renesas,ipmmu-main = <&ipmmu_mm 0>;
-@@ -993,7 +993,7 @@
- #iommu-cells = <1>;
- };
-
-- ipmmu_ir: mmu@ff8b0000 {
-+ ipmmu_ir: iommu@ff8b0000 {
- compatible = "renesas,ipmmu-r8a77970";
- reg = <0 0xff8b0000 0 0x1000>;
- renesas,ipmmu-main = <&ipmmu_mm 3>;
-@@ -1001,7 +1001,7 @@
- #iommu-cells = <1>;
- };
-
-- ipmmu_mm: mmu@e67b0000 {
-+ ipmmu_mm: iommu@e67b0000 {
- compatible = "renesas,ipmmu-r8a77970";
- reg = <0 0xe67b0000 0 0x1000>;
- interrupts = <GIC_SPI 196 IRQ_TYPE_LEVEL_HIGH>,
-@@ -1010,7 +1010,7 @@
- #iommu-cells = <1>;
- };
-
-- ipmmu_rt: mmu@ffc80000 {
-+ ipmmu_rt: iommu@ffc80000 {
- compatible = "renesas,ipmmu-r8a77970";
- reg = <0 0xffc80000 0 0x1000>;
- renesas,ipmmu-main = <&ipmmu_mm 7>;
-@@ -1018,7 +1018,7 @@
- #iommu-cells = <1>;
- };
-
-- ipmmu_vi0: mmu@febd0000 {
-+ ipmmu_vi0: iommu@febd0000 {
- compatible = "renesas,ipmmu-r8a77970";
- reg = <0 0xfebd0000 0 0x1000>;
- renesas,ipmmu-main = <&ipmmu_mm 9>;
-diff --git a/arch/arm64/boot/dts/renesas/r8a77980.dtsi b/arch/arm64/boot/dts/renesas/r8a77980.dtsi
-index d672b320bc14..387e6d99f2f3 100644
---- a/arch/arm64/boot/dts/renesas/r8a77980.dtsi
-+++ b/arch/arm64/boot/dts/renesas/r8a77980.dtsi
-@@ -1266,7 +1266,7 @@
- status = "disabled";
- };
-
-- ipmmu_ds1: mmu@e7740000 {
-+ ipmmu_ds1: iommu@e7740000 {
- compatible = "renesas,ipmmu-r8a77980";
- reg = <0 0xe7740000 0 0x1000>;
- renesas,ipmmu-main = <&ipmmu_mm 0>;
-@@ -1274,7 +1274,7 @@
- #iommu-cells = <1>;
- };
-
-- ipmmu_ir: mmu@ff8b0000 {
-+ ipmmu_ir: iommu@ff8b0000 {
- compatible = "renesas,ipmmu-r8a77980";
- reg = <0 0xff8b0000 0 0x1000>;
- renesas,ipmmu-main = <&ipmmu_mm 3>;
-@@ -1282,7 +1282,7 @@
- #iommu-cells = <1>;
- };
-
-- ipmmu_mm: mmu@e67b0000 {
-+ ipmmu_mm: iommu@e67b0000 {
- compatible = "renesas,ipmmu-r8a77980";
- reg = <0 0xe67b0000 0 0x1000>;
- interrupts = <GIC_SPI 196 IRQ_TYPE_LEVEL_HIGH>,
-@@ -1291,7 +1291,7 @@
- #iommu-cells = <1>;
- };
-
-- ipmmu_rt: mmu@ffc80000 {
-+ ipmmu_rt: iommu@ffc80000 {
- compatible = "renesas,ipmmu-r8a77980";
- reg = <0 0xffc80000 0 0x1000>;
- renesas,ipmmu-main = <&ipmmu_mm 10>;
-@@ -1299,7 +1299,7 @@
- #iommu-cells = <1>;
- };
-
-- ipmmu_vc0: mmu@fe990000 {
-+ ipmmu_vc0: iommu@fe990000 {
- compatible = "renesas,ipmmu-r8a77980";
- reg = <0 0xfe990000 0 0x1000>;
- renesas,ipmmu-main = <&ipmmu_mm 12>;
-@@ -1307,7 +1307,7 @@
- #iommu-cells = <1>;
- };
-
-- ipmmu_vi0: mmu@febd0000 {
-+ ipmmu_vi0: iommu@febd0000 {
- compatible = "renesas,ipmmu-r8a77980";
- reg = <0 0xfebd0000 0 0x1000>;
- renesas,ipmmu-main = <&ipmmu_mm 14>;
-@@ -1315,7 +1315,7 @@
- #iommu-cells = <1>;
- };
-
-- ipmmu_vip0: mmu@e7b00000 {
-+ ipmmu_vip0: iommu@e7b00000 {
- compatible = "renesas,ipmmu-r8a77980";
- reg = <0 0xe7b00000 0 0x1000>;
- renesas,ipmmu-main = <&ipmmu_mm 4>;
-@@ -1323,7 +1323,7 @@
- #iommu-cells = <1>;
- };
-
-- ipmmu_vip1: mmu@e7960000 {
-+ ipmmu_vip1: iommu@e7960000 {
- compatible = "renesas,ipmmu-r8a77980";
- reg = <0 0xe7960000 0 0x1000>;
- renesas,ipmmu-main = <&ipmmu_mm 11>;
-diff --git a/arch/arm64/boot/dts/renesas/r8a77990.dtsi b/arch/arm64/boot/dts/renesas/r8a77990.dtsi
-index 1543f18e834f..cd11f24744d4 100644
---- a/arch/arm64/boot/dts/renesas/r8a77990.dtsi
-+++ b/arch/arm64/boot/dts/renesas/r8a77990.dtsi
-@@ -817,7 +817,7 @@
- <&ipmmu_ds1 30>, <&ipmmu_ds1 31>;
- };
-
-- ipmmu_ds0: mmu@e6740000 {
-+ ipmmu_ds0: iommu@e6740000 {
- compatible = "renesas,ipmmu-r8a77990";
- reg = <0 0xe6740000 0 0x1000>;
- renesas,ipmmu-main = <&ipmmu_mm 0>;
-@@ -825,7 +825,7 @@
- #iommu-cells = <1>;
- };
-
-- ipmmu_ds1: mmu@e7740000 {
-+ ipmmu_ds1: iommu@e7740000 {
- compatible = "renesas,ipmmu-r8a77990";
- reg = <0 0xe7740000 0 0x1000>;
- renesas,ipmmu-main = <&ipmmu_mm 1>;
-@@ -833,7 +833,7 @@
- #iommu-cells = <1>;
- };
-
-- ipmmu_hc: mmu@e6570000 {
-+ ipmmu_hc: iommu@e6570000 {
- compatible = "renesas,ipmmu-r8a77990";
- reg = <0 0xe6570000 0 0x1000>;
- renesas,ipmmu-main = <&ipmmu_mm 2>;
-@@ -841,7 +841,7 @@
- #iommu-cells = <1>;
- };
-
-- ipmmu_mm: mmu@e67b0000 {
-+ ipmmu_mm: iommu@e67b0000 {
- compatible = "renesas,ipmmu-r8a77990";
- reg = <0 0xe67b0000 0 0x1000>;
- interrupts = <GIC_SPI 196 IRQ_TYPE_LEVEL_HIGH>,
-@@ -850,7 +850,7 @@
- #iommu-cells = <1>;
- };
-
-- ipmmu_mp: mmu@ec670000 {
-+ ipmmu_mp: iommu@ec670000 {
- compatible = "renesas,ipmmu-r8a77990";
- reg = <0 0xec670000 0 0x1000>;
- renesas,ipmmu-main = <&ipmmu_mm 4>;
-@@ -858,7 +858,7 @@
- #iommu-cells = <1>;
- };
-
-- ipmmu_pv0: mmu@fd800000 {
-+ ipmmu_pv0: iommu@fd800000 {
- compatible = "renesas,ipmmu-r8a77990";
- reg = <0 0xfd800000 0 0x1000>;
- renesas,ipmmu-main = <&ipmmu_mm 6>;
-@@ -866,7 +866,7 @@
- #iommu-cells = <1>;
- };
-
-- ipmmu_rt: mmu@ffc80000 {
-+ ipmmu_rt: iommu@ffc80000 {
- compatible = "renesas,ipmmu-r8a77990";
- reg = <0 0xffc80000 0 0x1000>;
- renesas,ipmmu-main = <&ipmmu_mm 10>;
-@@ -874,7 +874,7 @@
- #iommu-cells = <1>;
- };
-
-- ipmmu_vc0: mmu@fe6b0000 {
-+ ipmmu_vc0: iommu@fe6b0000 {
- compatible = "renesas,ipmmu-r8a77990";
- reg = <0 0xfe6b0000 0 0x1000>;
- renesas,ipmmu-main = <&ipmmu_mm 12>;
-@@ -882,7 +882,7 @@
- #iommu-cells = <1>;
- };
-
-- ipmmu_vi0: mmu@febd0000 {
-+ ipmmu_vi0: iommu@febd0000 {
- compatible = "renesas,ipmmu-r8a77990";
- reg = <0 0xfebd0000 0 0x1000>;
- renesas,ipmmu-main = <&ipmmu_mm 14>;
-@@ -890,7 +890,7 @@
- #iommu-cells = <1>;
- };
-
-- ipmmu_vp0: mmu@fe990000 {
-+ ipmmu_vp0: iommu@fe990000 {
- compatible = "renesas,ipmmu-r8a77990";
- reg = <0 0xfe990000 0 0x1000>;
- renesas,ipmmu-main = <&ipmmu_mm 16>;
-diff --git a/arch/arm64/boot/dts/renesas/r8a77995.dtsi b/arch/arm64/boot/dts/renesas/r8a77995.dtsi
-index e8d2290fe79d..e5617ec0f49c 100644
---- a/arch/arm64/boot/dts/renesas/r8a77995.dtsi
-+++ b/arch/arm64/boot/dts/renesas/r8a77995.dtsi
-@@ -507,7 +507,7 @@
- <&ipmmu_ds1 22>, <&ipmmu_ds1 23>;
- };
-
-- ipmmu_ds0: mmu@e6740000 {
-+ ipmmu_ds0: iommu@e6740000 {
- compatible = "renesas,ipmmu-r8a77995";
- reg = <0 0xe6740000 0 0x1000>;
- renesas,ipmmu-main = <&ipmmu_mm 0>;
-@@ -515,7 +515,7 @@
- #iommu-cells = <1>;
- };
-
-- ipmmu_ds1: mmu@e7740000 {
-+ ipmmu_ds1: iommu@e7740000 {
- compatible = "renesas,ipmmu-r8a77995";
- reg = <0 0xe7740000 0 0x1000>;
- renesas,ipmmu-main = <&ipmmu_mm 1>;
-@@ -523,7 +523,7 @@
- #iommu-cells = <1>;
- };
-
-- ipmmu_hc: mmu@e6570000 {
-+ ipmmu_hc: iommu@e6570000 {
- compatible = "renesas,ipmmu-r8a77995";
- reg = <0 0xe6570000 0 0x1000>;
- renesas,ipmmu-main = <&ipmmu_mm 2>;
-@@ -531,7 +531,7 @@
- #iommu-cells = <1>;
- };
-
-- ipmmu_mm: mmu@e67b0000 {
-+ ipmmu_mm: iommu@e67b0000 {
- compatible = "renesas,ipmmu-r8a77995";
- reg = <0 0xe67b0000 0 0x1000>;
- interrupts = <GIC_SPI 196 IRQ_TYPE_LEVEL_HIGH>,
-@@ -540,7 +540,7 @@
- #iommu-cells = <1>;
- };
-
-- ipmmu_mp: mmu@ec670000 {
-+ ipmmu_mp: iommu@ec670000 {
- compatible = "renesas,ipmmu-r8a77995";
- reg = <0 0xec670000 0 0x1000>;
- renesas,ipmmu-main = <&ipmmu_mm 4>;
-@@ -548,7 +548,7 @@
- #iommu-cells = <1>;
- };
-
-- ipmmu_pv0: mmu@fd800000 {
-+ ipmmu_pv0: iommu@fd800000 {
- compatible = "renesas,ipmmu-r8a77995";
- reg = <0 0xfd800000 0 0x1000>;
- renesas,ipmmu-main = <&ipmmu_mm 6>;
-@@ -556,7 +556,7 @@
- #iommu-cells = <1>;
- };
-
-- ipmmu_rt: mmu@ffc80000 {
-+ ipmmu_rt: iommu@ffc80000 {
- compatible = "renesas,ipmmu-r8a77995";
- reg = <0 0xffc80000 0 0x1000>;
- renesas,ipmmu-main = <&ipmmu_mm 10>;
-@@ -564,7 +564,7 @@
- #iommu-cells = <1>;
- };
-
-- ipmmu_vc0: mmu@fe6b0000 {
-+ ipmmu_vc0: iommu@fe6b0000 {
- compatible = "renesas,ipmmu-r8a77995";
- reg = <0 0xfe6b0000 0 0x1000>;
- renesas,ipmmu-main = <&ipmmu_mm 12>;
-@@ -572,7 +572,7 @@
- #iommu-cells = <1>;
- };
-
-- ipmmu_vi0: mmu@febd0000 {
-+ ipmmu_vi0: iommu@febd0000 {
- compatible = "renesas,ipmmu-r8a77995";
- reg = <0 0xfebd0000 0 0x1000>;
- renesas,ipmmu-main = <&ipmmu_mm 14>;
-@@ -580,7 +580,7 @@
- #iommu-cells = <1>;
- };
-
-- ipmmu_vp0: mmu@fe990000 {
-+ ipmmu_vp0: iommu@fe990000 {
- compatible = "renesas,ipmmu-r8a77995";
- reg = <0 0xfe990000 0 0x1000>;
- renesas,ipmmu-main = <&ipmmu_mm 16>;
-diff --git a/arch/arm64/kernel/ftrace.c b/arch/arm64/kernel/ftrace.c
-index 8618faa82e6d..86a5cf9bc19a 100644
---- a/arch/arm64/kernel/ftrace.c
-+++ b/arch/arm64/kernel/ftrace.c
-@@ -69,7 +69,8 @@ static struct plt_entry *get_ftrace_plt(struct module *mod, unsigned long addr)
-
- if (addr == FTRACE_ADDR)
- return &plt[FTRACE_PLT_IDX];
-- if (addr == FTRACE_REGS_ADDR && IS_ENABLED(CONFIG_FTRACE_WITH_REGS))
-+ if (addr == FTRACE_REGS_ADDR &&
-+ IS_ENABLED(CONFIG_DYNAMIC_FTRACE_WITH_REGS))
- return &plt[FTRACE_REGS_PLT_IDX];
- #endif
- return NULL;
-diff --git a/arch/arm64/kernel/hw_breakpoint.c b/arch/arm64/kernel/hw_breakpoint.c
-index 0b727edf4104..af234a1e08b7 100644
---- a/arch/arm64/kernel/hw_breakpoint.c
-+++ b/arch/arm64/kernel/hw_breakpoint.c
-@@ -730,6 +730,27 @@ static u64 get_distance_from_watchpoint(unsigned long addr, u64 val,
- return 0;
- }
-
-+static int watchpoint_report(struct perf_event *wp, unsigned long addr,
-+ struct pt_regs *regs)
-+{
-+ int step = is_default_overflow_handler(wp);
-+ struct arch_hw_breakpoint *info = counter_arch_bp(wp);
-+
-+ info->trigger = addr;
-+
-+ /*
-+ * If we triggered a user watchpoint from a uaccess routine, then
-+ * handle the stepping ourselves since userspace really can't help
-+ * us with this.
-+ */
-+ if (!user_mode(regs) && info->ctrl.privilege == AARCH64_BREAKPOINT_EL0)
-+ step = 1;
-+ else
-+ perf_bp_event(wp, regs);
-+
-+ return step;
-+}
-+
- static int watchpoint_handler(unsigned long addr, unsigned int esr,
- struct pt_regs *regs)
- {
-@@ -739,7 +760,6 @@ static int watchpoint_handler(unsigned long addr, unsigned int esr,
- u64 val;
- struct perf_event *wp, **slots;
- struct debug_info *debug_info;
-- struct arch_hw_breakpoint *info;
- struct arch_hw_breakpoint_ctrl ctrl;
-
- slots = this_cpu_ptr(wp_on_reg);
-@@ -777,25 +797,13 @@ static int watchpoint_handler(unsigned long addr, unsigned int esr,
- if (dist != 0)
- continue;
-
-- info = counter_arch_bp(wp);
-- info->trigger = addr;
-- perf_bp_event(wp, regs);
--
-- /* Do we need to handle the stepping? */
-- if (is_default_overflow_handler(wp))
-- step = 1;
-+ step = watchpoint_report(wp, addr, regs);
- }
-- if (min_dist > 0 && min_dist != -1) {
-- /* No exact match found. */
-- wp = slots[closest_match];
-- info = counter_arch_bp(wp);
-- info->trigger = addr;
-- perf_bp_event(wp, regs);
-
-- /* Do we need to handle the stepping? */
-- if (is_default_overflow_handler(wp))
-- step = 1;
-- }
-+ /* No exact match found? */
-+ if (min_dist > 0 && min_dist != -1)
-+ step = watchpoint_report(slots[closest_match], addr, regs);
-+
- rcu_read_unlock();
-
- if (!step)
-diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
-index e42727e3568e..3f9010167468 100644
---- a/arch/arm64/mm/init.c
-+++ b/arch/arm64/mm/init.c
-@@ -458,11 +458,6 @@ void __init arm64_memblock_init(void)
- high_memory = __va(memblock_end_of_DRAM() - 1) + 1;
-
- dma_contiguous_reserve(arm64_dma32_phys_limit);
--
--#ifdef CONFIG_ARM64_4K_PAGES
-- hugetlb_cma_reserve(PUD_SHIFT - PAGE_SHIFT);
--#endif
--
- }
-
- void __init bootmem_init(void)
-@@ -478,6 +473,16 @@ void __init bootmem_init(void)
- min_low_pfn = min;
-
- arm64_numa_init();
-+
-+ /*
-+ * must be done after arm64_numa_init() which calls numa_init() to
-+ * initialize node_online_map that gets used in hugetlb_cma_reserve()
-+ * while allocating required CMA size across online nodes.
-+ */
-+#ifdef CONFIG_ARM64_4K_PAGES
-+ hugetlb_cma_reserve(PUD_SHIFT - PAGE_SHIFT);
-+#endif
-+
- /*
- * Sparsemem tries to allocate bootmem in memory_present(), so must be
- * done after the fixed reservations.
-diff --git a/arch/m68k/coldfire/pci.c b/arch/m68k/coldfire/pci.c
-index 62b0eb6cf69a..84eab0f5e00a 100644
---- a/arch/m68k/coldfire/pci.c
-+++ b/arch/m68k/coldfire/pci.c
-@@ -216,8 +216,10 @@ static int __init mcf_pci_init(void)
-
- /* Keep a virtual mapping to IO/config space active */
- iospace = (unsigned long) ioremap(PCI_IO_PA, PCI_IO_SIZE);
-- if (iospace == 0)
-+ if (iospace == 0) {
-+ pci_free_host_bridge(bridge);
- return -ENODEV;
-+ }
- pr_info("Coldfire: PCI IO/config window mapped to 0x%x\n",
- (u32) iospace);
-
-diff --git a/arch/openrisc/kernel/entry.S b/arch/openrisc/kernel/entry.S
-index e4a78571f883..c6481cfc5220 100644
---- a/arch/openrisc/kernel/entry.S
-+++ b/arch/openrisc/kernel/entry.S
-@@ -1166,13 +1166,13 @@ ENTRY(__sys_clone)
- l.movhi r29,hi(sys_clone)
- l.ori r29,r29,lo(sys_clone)
- l.j _fork_save_extra_regs_and_call
-- l.addi r7,r1,0
-+ l.nop
-
- ENTRY(__sys_fork)
- l.movhi r29,hi(sys_fork)
- l.ori r29,r29,lo(sys_fork)
- l.j _fork_save_extra_regs_and_call
-- l.addi r3,r1,0
-+ l.nop
-
- ENTRY(sys_rt_sigreturn)
- l.jal _sys_rt_sigreturn
-diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
-index 62aca9efbbbe..310957b988e3 100644
---- a/arch/powerpc/Kconfig
-+++ b/arch/powerpc/Kconfig
-@@ -773,6 +773,7 @@ config THREAD_SHIFT
- range 13 15
- default "15" if PPC_256K_PAGES
- default "14" if PPC64
-+ default "14" if KASAN
- default "13"
- help
- Used to define the stack size. The default is almost always what you
-diff --git a/arch/powerpc/configs/adder875_defconfig b/arch/powerpc/configs/adder875_defconfig
-index f55e23cb176c..5326bc739279 100644
---- a/arch/powerpc/configs/adder875_defconfig
-+++ b/arch/powerpc/configs/adder875_defconfig
-@@ -10,7 +10,6 @@ CONFIG_EXPERT=y
- # CONFIG_BLK_DEV_BSG is not set
- CONFIG_PARTITION_ADVANCED=y
- CONFIG_PPC_ADDER875=y
--CONFIG_8xx_COPYBACK=y
- CONFIG_GEN_RTC=y
- CONFIG_HZ_1000=y
- # CONFIG_SECCOMP is not set
-diff --git a/arch/powerpc/configs/ep88xc_defconfig b/arch/powerpc/configs/ep88xc_defconfig
-index 0e2e5e81a359..f5c3e72da719 100644
---- a/arch/powerpc/configs/ep88xc_defconfig
-+++ b/arch/powerpc/configs/ep88xc_defconfig
-@@ -12,7 +12,6 @@ CONFIG_EXPERT=y
- # CONFIG_BLK_DEV_BSG is not set
- CONFIG_PARTITION_ADVANCED=y
- CONFIG_PPC_EP88XC=y
--CONFIG_8xx_COPYBACK=y
- CONFIG_GEN_RTC=y
- CONFIG_HZ_100=y
- # CONFIG_SECCOMP is not set
-diff --git a/arch/powerpc/configs/mpc866_ads_defconfig b/arch/powerpc/configs/mpc866_ads_defconfig
-index 5320735395e7..5c56d36cdfc5 100644
---- a/arch/powerpc/configs/mpc866_ads_defconfig
-+++ b/arch/powerpc/configs/mpc866_ads_defconfig
-@@ -12,7 +12,6 @@ CONFIG_EXPERT=y
- # CONFIG_BLK_DEV_BSG is not set
- CONFIG_PARTITION_ADVANCED=y
- CONFIG_MPC86XADS=y
--CONFIG_8xx_COPYBACK=y
- CONFIG_GEN_RTC=y
- CONFIG_HZ_1000=y
- CONFIG_MATH_EMULATION=y
-diff --git a/arch/powerpc/configs/mpc885_ads_defconfig b/arch/powerpc/configs/mpc885_ads_defconfig
-index 82a008c04eae..949ff9ccda5e 100644
---- a/arch/powerpc/configs/mpc885_ads_defconfig
-+++ b/arch/powerpc/configs/mpc885_ads_defconfig
-@@ -11,7 +11,6 @@ CONFIG_EXPERT=y
- # CONFIG_VM_EVENT_COUNTERS is not set
- # CONFIG_BLK_DEV_BSG is not set
- CONFIG_PARTITION_ADVANCED=y
--CONFIG_8xx_COPYBACK=y
- CONFIG_GEN_RTC=y
- CONFIG_HZ_100=y
- # CONFIG_SECCOMP is not set
-diff --git a/arch/powerpc/configs/tqm8xx_defconfig b/arch/powerpc/configs/tqm8xx_defconfig
-index eda8bfb2d0a3..77857d513022 100644
---- a/arch/powerpc/configs/tqm8xx_defconfig
-+++ b/arch/powerpc/configs/tqm8xx_defconfig
-@@ -15,7 +15,6 @@ CONFIG_MODULE_SRCVERSION_ALL=y
- # CONFIG_BLK_DEV_BSG is not set
- CONFIG_PARTITION_ADVANCED=y
- CONFIG_TQM8XX=y
--CONFIG_8xx_COPYBACK=y
- # CONFIG_8xx_CPU15 is not set
- CONFIG_GEN_RTC=y
- CONFIG_HZ_100=y
-diff --git a/arch/powerpc/include/asm/book3s/64/kup-radix.h b/arch/powerpc/include/asm/book3s/64/kup-radix.h
-index 3bcef989a35d..101d60f16d46 100644
---- a/arch/powerpc/include/asm/book3s/64/kup-radix.h
-+++ b/arch/powerpc/include/asm/book3s/64/kup-radix.h
-@@ -16,7 +16,9 @@
- #ifdef CONFIG_PPC_KUAP
- BEGIN_MMU_FTR_SECTION_NESTED(67)
- ld \gpr, STACK_REGS_KUAP(r1)
-+ isync
- mtspr SPRN_AMR, \gpr
-+ /* No isync required, see kuap_restore_amr() */
- END_MMU_FTR_SECTION_NESTED_IFSET(MMU_FTR_RADIX_KUAP, 67)
- #endif
- .endm
-@@ -62,8 +64,15 @@
-
- static inline void kuap_restore_amr(struct pt_regs *regs)
- {
-- if (mmu_has_feature(MMU_FTR_RADIX_KUAP))
-+ if (mmu_has_feature(MMU_FTR_RADIX_KUAP)) {
-+ isync();
- mtspr(SPRN_AMR, regs->kuap);
-+ /*
-+ * No isync required here because we are about to RFI back to
-+ * previous context before any user accesses would be made,
-+ * which is a CSI.
-+ */
-+ }
- }
-
- static inline void kuap_check_amr(void)
-diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h
-index 368b136517e0..2838b98bc6df 100644
---- a/arch/powerpc/include/asm/book3s/64/pgtable.h
-+++ b/arch/powerpc/include/asm/book3s/64/pgtable.h
-@@ -998,10 +998,25 @@ extern struct page *pgd_page(pgd_t pgd);
- #define pud_page_vaddr(pud) __va(pud_val(pud) & ~PUD_MASKED_BITS)
- #define pgd_page_vaddr(pgd) __va(pgd_val(pgd) & ~PGD_MASKED_BITS)
-
--#define pgd_index(address) (((address) >> (PGDIR_SHIFT)) & (PTRS_PER_PGD - 1))
--#define pud_index(address) (((address) >> (PUD_SHIFT)) & (PTRS_PER_PUD - 1))
--#define pmd_index(address) (((address) >> (PMD_SHIFT)) & (PTRS_PER_PMD - 1))
--#define pte_index(address) (((address) >> (PAGE_SHIFT)) & (PTRS_PER_PTE - 1))
-+static inline unsigned long pgd_index(unsigned long address)
-+{
-+ return (address >> PGDIR_SHIFT) & (PTRS_PER_PGD - 1);
-+}
-+
-+static inline unsigned long pud_index(unsigned long address)
-+{
-+ return (address >> PUD_SHIFT) & (PTRS_PER_PUD - 1);
-+}
-+
-+static inline unsigned long pmd_index(unsigned long address)
-+{
-+ return (address >> PMD_SHIFT) & (PTRS_PER_PMD - 1);
-+}
-+
-+static inline unsigned long pte_index(unsigned long address)
-+{
-+ return (address >> PAGE_SHIFT) & (PTRS_PER_PTE - 1);
-+}
-
- /*
- * Find an entry in a page-table-directory. We combine the address region
-diff --git a/arch/powerpc/include/asm/nohash/32/mmu-8xx.h b/arch/powerpc/include/asm/nohash/32/mmu-8xx.h
-index 76af5b0cb16e..26b7cee34dfe 100644
---- a/arch/powerpc/include/asm/nohash/32/mmu-8xx.h
-+++ b/arch/powerpc/include/asm/nohash/32/mmu-8xx.h
-@@ -19,7 +19,6 @@
- #define MI_RSV4I 0x08000000 /* Reserve 4 TLB entries */
- #define MI_PPCS 0x02000000 /* Use MI_RPN prob/priv state */
- #define MI_IDXMASK 0x00001f00 /* TLB index to be loaded */
--#define MI_RESETVAL 0x00000000 /* Value of register at reset */
-
- /* These are the Ks and Kp from the PowerPC books. For proper operation,
- * Ks = 0, Kp = 1.
-@@ -95,7 +94,6 @@
- #define MD_TWAM 0x04000000 /* Use 4K page hardware assist */
- #define MD_PPCS 0x02000000 /* Use MI_RPN prob/priv state */
- #define MD_IDXMASK 0x00001f00 /* TLB index to be loaded */
--#define MD_RESETVAL 0x04000000 /* Value of register at reset */
-
- #define SPRN_M_CASID 793 /* Address space ID (context) to match */
- #define MC_ASIDMASK 0x0000000f /* Bits used for ASID value */
-diff --git a/arch/powerpc/include/asm/processor.h b/arch/powerpc/include/asm/processor.h
-index eedcbfb9a6ff..c220cb9eccad 100644
---- a/arch/powerpc/include/asm/processor.h
-+++ b/arch/powerpc/include/asm/processor.h
-@@ -301,7 +301,6 @@ struct thread_struct {
- #else
- #define INIT_THREAD { \
- .ksp = INIT_SP, \
-- .regs = (struct pt_regs *)INIT_SP - 1, /* XXX bogus, I think */ \
- .addr_limit = KERNEL_DS, \
- .fpexc_mode = 0, \
- .fscr = FSCR_TAR | FSCR_EBB \
-diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S
-index ebeebab74b56..d9ddce40bed8 100644
---- a/arch/powerpc/kernel/exceptions-64s.S
-+++ b/arch/powerpc/kernel/exceptions-64s.S
-@@ -270,7 +270,7 @@ BEGIN_FTR_SECTION
- END_FTR_SECTION_IFSET(CPU_FTR_CFAR)
- .endif
-
-- ld r10,PACA_EXGEN+EX_CTR(r13)
-+ ld r10,IAREA+EX_CTR(r13)
- mtctr r10
- BEGIN_FTR_SECTION
- ld r10,IAREA+EX_PPR(r13)
-@@ -298,7 +298,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_HAS_PPR)
-
- .if IKVM_SKIP
- 89: mtocrf 0x80,r9
-- ld r10,PACA_EXGEN+EX_CTR(r13)
-+ ld r10,IAREA+EX_CTR(r13)
- mtctr r10
- ld r9,IAREA+EX_R9(r13)
- ld r10,IAREA+EX_R10(r13)
-@@ -1117,11 +1117,30 @@ END_FTR_SECTION_IFSET(CPU_FTR_HVMODE)
- li r10,MSR_RI
- mtmsrd r10,1
-
-+ /*
-+ * Set IRQS_ALL_DISABLED and save PACAIRQHAPPENED (see
-+ * system_reset_common)
-+ */
-+ li r10,IRQS_ALL_DISABLED
-+ stb r10,PACAIRQSOFTMASK(r13)
-+ lbz r10,PACAIRQHAPPENED(r13)
-+ std r10,RESULT(r1)
-+ ori r10,r10,PACA_IRQ_HARD_DIS
-+ stb r10,PACAIRQHAPPENED(r13)
-+
- addi r3,r1,STACK_FRAME_OVERHEAD
- bl machine_check_early
- std r3,RESULT(r1) /* Save result */
- ld r12,_MSR(r1)
-
-+ /*
-+ * Restore soft mask settings.
-+ */
-+ ld r10,RESULT(r1)
-+ stb r10,PACAIRQHAPPENED(r13)
-+ ld r10,SOFTE(r1)
-+ stb r10,PACAIRQSOFTMASK(r13)
-+
- #ifdef CONFIG_PPC_P7_NAP
- /*
- * Check if thread was in power saving mode. We come here when any
-@@ -1225,17 +1244,19 @@ EXC_COMMON_BEGIN(machine_check_idle_common)
- bl machine_check_queue_event
-
- /*
-- * We have not used any non-volatile GPRs here, and as a rule
-- * most exception code including machine check does not.
-- * Therefore PACA_NAPSTATELOST does not need to be set. Idle
-- * wakeup will restore volatile registers.
-+ * GPR-loss wakeups are relatively straightforward, because the
-+ * idle sleep code has saved all non-volatile registers on its
-+ * own stack, and r1 in PACAR1.
- *
-- * Load the original SRR1 into r3 for pnv_powersave_wakeup_mce.
-+ * For no-loss wakeups the r1 and lr registers used by the
-+ * early machine check handler have to be restored first. r2 is
-+ * the kernel TOC, so no need to restore it.
- *
- * Then decrement MCE nesting after finishing with the stack.
- */
- ld r3,_MSR(r1)
- ld r4,_LINK(r1)
-+ ld r1,GPR1(r1)
-
- lhz r11,PACA_IN_MCE(r13)
- subi r11,r11,1
-@@ -1244,7 +1265,7 @@ EXC_COMMON_BEGIN(machine_check_idle_common)
- mtlr r4
- rlwinm r10,r3,47-31,30,31
- cmpwi cr1,r10,2
-- bltlr cr1 /* no state loss, return to idle caller */
-+ bltlr cr1 /* no state loss, return to idle caller with r3=SRR1 */
- b idle_return_gpr_loss
- #endif
-
-diff --git a/arch/powerpc/kernel/head_64.S b/arch/powerpc/kernel/head_64.S
-index ddfbd02140d9..0e05a9a47a4b 100644
---- a/arch/powerpc/kernel/head_64.S
-+++ b/arch/powerpc/kernel/head_64.S
-@@ -947,15 +947,8 @@ start_here_multiplatform:
- std r0,0(r4)
- #endif
-
-- /* The following gets the stack set up with the regs */
-- /* pointing to the real addr of the kernel stack. This is */
-- /* all done to support the C function call below which sets */
-- /* up the htab. This is done because we have relocated the */
-- /* kernel but are still running in real mode. */
--
-- LOAD_REG_ADDR(r3,init_thread_union)
--
- /* set up a stack pointer */
-+ LOAD_REG_ADDR(r3,init_thread_union)
- LOAD_REG_IMMEDIATE(r1,THREAD_SIZE)
- add r1,r3,r1
- li r0,0
-diff --git a/arch/powerpc/kernel/head_8xx.S b/arch/powerpc/kernel/head_8xx.S
-index 073a651787df..905205c79a25 100644
---- a/arch/powerpc/kernel/head_8xx.S
-+++ b/arch/powerpc/kernel/head_8xx.S
-@@ -779,10 +779,7 @@ start_here:
- initial_mmu:
- li r8, 0
- mtspr SPRN_MI_CTR, r8 /* remove PINNED ITLB entries */
-- lis r10, MD_RESETVAL@h
--#ifndef CONFIG_8xx_COPYBACK
-- oris r10, r10, MD_WTDEF@h
--#endif
-+ lis r10, MD_TWAM@h
- mtspr SPRN_MD_CTR, r10 /* remove PINNED DTLB entries */
-
- tlbia /* Invalidate all TLB entries */
-@@ -857,17 +854,7 @@ initial_mmu:
- mtspr SPRN_DC_CST, r8
- lis r8, IDC_ENABLE@h
- mtspr SPRN_IC_CST, r8
--#ifdef CONFIG_8xx_COPYBACK
-- mtspr SPRN_DC_CST, r8
--#else
-- /* For a debug option, I left this here to easily enable
-- * the write through cache mode
-- */
-- lis r8, DC_SFWT@h
- mtspr SPRN_DC_CST, r8
-- lis r8, IDC_ENABLE@h
-- mtspr SPRN_DC_CST, r8
--#endif
- /* Disable debug mode entry on breakpoints */
- mfspr r8, SPRN_DER
- #ifdef CONFIG_PERF_EVENTS
-diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
-index 9c21288f8645..774476be591b 100644
---- a/arch/powerpc/kernel/process.c
-+++ b/arch/powerpc/kernel/process.c
-@@ -1241,29 +1241,31 @@ struct task_struct *__switch_to(struct task_struct *prev,
- static void show_instructions(struct pt_regs *regs)
- {
- int i;
-+ unsigned long nip = regs->nip;
- unsigned long pc = regs->nip - (NR_INSN_TO_PRINT * 3 / 4 * sizeof(int));
-
- printk("Instruction dump:");
-
-+ /*
-+ * If we were executing with the MMU off for instructions, adjust pc
-+ * rather than printing XXXXXXXX.
-+ */
-+ if (!IS_ENABLED(CONFIG_BOOKE) && !(regs->msr & MSR_IR)) {
-+ pc = (unsigned long)phys_to_virt(pc);
-+ nip = (unsigned long)phys_to_virt(regs->nip);
-+ }
-+
- for (i = 0; i < NR_INSN_TO_PRINT; i++) {
- int instr;
-
- if (!(i % 8))
- pr_cont("\n");
-
--#if !defined(CONFIG_BOOKE)
-- /* If executing with the IMMU off, adjust pc rather
-- * than print XXXXXXXX.
-- */
-- if (!(regs->msr & MSR_IR))
-- pc = (unsigned long)phys_to_virt(pc);
--#endif
--
- if (!__kernel_text_address(pc) ||
- probe_kernel_address((const void *)pc, instr)) {
- pr_cont("XXXXXXXX ");
- } else {
-- if (regs->nip == pc)
-+ if (nip == pc)
- pr_cont("<%08x> ", instr);
- else
- pr_cont("%08x ", instr);
-diff --git a/arch/powerpc/kexec/core.c b/arch/powerpc/kexec/core.c
-index 078fe3d76feb..56da5eb2b923 100644
---- a/arch/powerpc/kexec/core.c
-+++ b/arch/powerpc/kexec/core.c
-@@ -115,11 +115,12 @@ void machine_kexec(struct kimage *image)
-
- void __init reserve_crashkernel(void)
- {
-- unsigned long long crash_size, crash_base;
-+ unsigned long long crash_size, crash_base, total_mem_sz;
- int ret;
-
-+ total_mem_sz = memory_limit ? memory_limit : memblock_phys_mem_size();
- /* use common parsing */
-- ret = parse_crashkernel(boot_command_line, memblock_phys_mem_size(),
-+ ret = parse_crashkernel(boot_command_line, total_mem_sz,
- &crash_size, &crash_base);
- if (ret == 0 && crash_size > 0) {
- crashk_res.start = crash_base;
-@@ -178,6 +179,7 @@ void __init reserve_crashkernel(void)
- /* Crash kernel trumps memory limit */
- if (memory_limit && memory_limit <= crashk_res.end) {
- memory_limit = crashk_res.end + 1;
-+ total_mem_sz = memory_limit;
- printk("Adjusted memory limit for crashkernel, now 0x%llx\n",
- memory_limit);
- }
-@@ -186,7 +188,7 @@ void __init reserve_crashkernel(void)
- "for crashkernel (System RAM: %ldMB)\n",
- (unsigned long)(crash_size >> 20),
- (unsigned long)(crashk_res.start >> 20),
-- (unsigned long)(memblock_phys_mem_size() >> 20));
-+ (unsigned long)(total_mem_sz >> 20));
-
- if (!memblock_is_region_memory(crashk_res.start, crash_size) ||
- memblock_reserve(crashk_res.start, crash_size)) {
-diff --git a/arch/powerpc/kvm/book3s_64_mmu_radix.c b/arch/powerpc/kvm/book3s_64_mmu_radix.c
-index aa12cd4078b3..bc6c1aa3d0e9 100644
---- a/arch/powerpc/kvm/book3s_64_mmu_radix.c
-+++ b/arch/powerpc/kvm/book3s_64_mmu_radix.c
-@@ -353,7 +353,13 @@ static struct kmem_cache *kvm_pmd_cache;
-
- static pte_t *kvmppc_pte_alloc(void)
- {
-- return kmem_cache_alloc(kvm_pte_cache, GFP_KERNEL);
-+ pte_t *pte;
-+
-+ pte = kmem_cache_alloc(kvm_pte_cache, GFP_KERNEL);
-+ /* pmd_populate() will only reference _pa(pte). */
-+ kmemleak_ignore(pte);
-+
-+ return pte;
- }
-
- static void kvmppc_pte_free(pte_t *ptep)
-@@ -363,7 +369,13 @@ static void kvmppc_pte_free(pte_t *ptep)
-
- static pmd_t *kvmppc_pmd_alloc(void)
- {
-- return kmem_cache_alloc(kvm_pmd_cache, GFP_KERNEL);
-+ pmd_t *pmd;
-+
-+ pmd = kmem_cache_alloc(kvm_pmd_cache, GFP_KERNEL);
-+ /* pud_populate() will only reference _pa(pmd). */
-+ kmemleak_ignore(pmd);
-+
-+ return pmd;
- }
-
- static void kvmppc_pmd_free(pmd_t *pmdp)
-diff --git a/arch/powerpc/kvm/book3s_64_vio.c b/arch/powerpc/kvm/book3s_64_vio.c
-index 50555ad1db93..1a529df0ab44 100644
---- a/arch/powerpc/kvm/book3s_64_vio.c
-+++ b/arch/powerpc/kvm/book3s_64_vio.c
-@@ -73,6 +73,7 @@ extern void kvm_spapr_tce_release_iommu_group(struct kvm *kvm,
- struct kvmppc_spapr_tce_iommu_table *stit, *tmp;
- struct iommu_table_group *table_group = NULL;
-
-+ rcu_read_lock();
- list_for_each_entry_rcu(stt, &kvm->arch.spapr_tce_tables, list) {
-
- table_group = iommu_group_get_iommudata(grp);
-@@ -87,7 +88,9 @@ extern void kvm_spapr_tce_release_iommu_group(struct kvm *kvm,
- kref_put(&stit->kref, kvm_spapr_tce_liobn_put);
- }
- }
-+ cond_resched_rcu();
- }
-+ rcu_read_unlock();
- }
-
- extern long kvm_spapr_tce_attach_iommu_group(struct kvm *kvm, int tablefd,
-@@ -105,12 +108,14 @@ extern long kvm_spapr_tce_attach_iommu_group(struct kvm *kvm, int tablefd,
- if (!f.file)
- return -EBADF;
-
-+ rcu_read_lock();
- list_for_each_entry_rcu(stt, &kvm->arch.spapr_tce_tables, list) {
- if (stt == f.file->private_data) {
- found = true;
- break;
- }
- }
-+ rcu_read_unlock();
-
- fdput(f);
-
-@@ -143,6 +148,7 @@ extern long kvm_spapr_tce_attach_iommu_group(struct kvm *kvm, int tablefd,
- if (!tbl)
- return -EINVAL;
-
-+ rcu_read_lock();
- list_for_each_entry_rcu(stit, &stt->iommu_tables, next) {
- if (tbl != stit->tbl)
- continue;
-@@ -150,14 +156,17 @@ extern long kvm_spapr_tce_attach_iommu_group(struct kvm *kvm, int tablefd,
- if (!kref_get_unless_zero(&stit->kref)) {
- /* stit is being destroyed */
- iommu_tce_table_put(tbl);
-+ rcu_read_unlock();
- return -ENOTTY;
- }
- /*
- * The table is already known to this KVM, we just increased
- * its KVM reference counter and can return.
- */
-+ rcu_read_unlock();
- return 0;
- }
-+ rcu_read_unlock();
-
- stit = kzalloc(sizeof(*stit), GFP_KERNEL);
- if (!stit) {
-@@ -365,18 +374,19 @@ static long kvmppc_tce_validate(struct kvmppc_spapr_tce_table *stt,
- if (kvmppc_tce_to_ua(stt->kvm, tce, &ua))
- return H_TOO_HARD;
-
-+ rcu_read_lock();
- list_for_each_entry_rcu(stit, &stt->iommu_tables, next) {
- unsigned long hpa = 0;
- struct mm_iommu_table_group_mem_t *mem;
- long shift = stit->tbl->it_page_shift;
-
- mem = mm_iommu_lookup(stt->kvm->mm, ua, 1ULL << shift);
-- if (!mem)
-- return H_TOO_HARD;
--
-- if (mm_iommu_ua_to_hpa(mem, ua, shift, &hpa))
-+ if (!mem || mm_iommu_ua_to_hpa(mem, ua, shift, &hpa)) {
-+ rcu_read_unlock();
- return H_TOO_HARD;
-+ }
- }
-+ rcu_read_unlock();
-
- return H_SUCCESS;
- }
-diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
-index 93493f0cbfe8..ee581cde4878 100644
---- a/arch/powerpc/kvm/book3s_hv.c
-+++ b/arch/powerpc/kvm/book3s_hv.c
-@@ -1099,9 +1099,14 @@ int kvmppc_pseries_do_hcall(struct kvm_vcpu *vcpu)
- ret = kvmppc_h_svm_init_done(vcpu->kvm);
- break;
- case H_SVM_INIT_ABORT:
-- ret = H_UNSUPPORTED;
-- if (kvmppc_get_srr1(vcpu) & MSR_S)
-- ret = kvmppc_h_svm_init_abort(vcpu->kvm);
-+ /*
-+ * Even if that call is made by the Ultravisor, the SSR1 value
-+ * is the guest context one, with the secure bit clear as it has
-+ * not yet been secured. So we can't check it here.
-+ * Instead the kvm->arch.secure_guest flag is checked inside
-+ * kvmppc_h_svm_init_abort().
-+ */
-+ ret = kvmppc_h_svm_init_abort(vcpu->kvm);
- break;
-
- default:
-diff --git a/arch/powerpc/mm/book3s32/mmu.c b/arch/powerpc/mm/book3s32/mmu.c
-index 39ba53ca5bb5..a9b2cbc74797 100644
---- a/arch/powerpc/mm/book3s32/mmu.c
-+++ b/arch/powerpc/mm/book3s32/mmu.c
-@@ -187,6 +187,7 @@ void mmu_mark_initmem_nx(void)
- int i;
- unsigned long base = (unsigned long)_stext - PAGE_OFFSET;
- unsigned long top = (unsigned long)_etext - PAGE_OFFSET;
-+ unsigned long border = (unsigned long)__init_begin - PAGE_OFFSET;
- unsigned long size;
-
- if (IS_ENABLED(CONFIG_PPC_BOOK3S_601))
-@@ -201,9 +202,10 @@ void mmu_mark_initmem_nx(void)
- size = block_size(base, top);
- size = max(size, 128UL << 10);
- if ((top - base) > size) {
-- if (strict_kernel_rwx_enabled())
-- pr_warn("Kernel _etext not properly aligned\n");
- size <<= 1;
-+ if (strict_kernel_rwx_enabled() && base + size > border)
-+ pr_warn("Some RW data is getting mapped X. "
-+ "Adjust CONFIG_DATA_SHIFT to avoid that.\n");
- }
- setibat(i++, PAGE_OFFSET + base, base, size, PAGE_KERNEL_TEXT);
- base += size;
-diff --git a/arch/powerpc/mm/book3s64/radix_tlb.c b/arch/powerpc/mm/book3s64/radix_tlb.c
-index 758ade2c2b6e..b5cc9b23cf02 100644
---- a/arch/powerpc/mm/book3s64/radix_tlb.c
-+++ b/arch/powerpc/mm/book3s64/radix_tlb.c
-@@ -884,9 +884,7 @@ is_local:
- if (IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE)) {
- hstart = (start + PMD_SIZE - 1) & PMD_MASK;
- hend = end & PMD_MASK;
-- if (hstart == hend)
-- hflush = false;
-- else
-+ if (hstart < hend)
- hflush = true;
- }
-
-diff --git a/arch/powerpc/mm/kasan/kasan_init_32.c b/arch/powerpc/mm/kasan/kasan_init_32.c
-index 59e49c0e8154..b7c287adfd59 100644
---- a/arch/powerpc/mm/kasan/kasan_init_32.c
-+++ b/arch/powerpc/mm/kasan/kasan_init_32.c
-@@ -76,15 +76,14 @@ static int __init kasan_init_region(void *start, size_t size)
- return ret;
-
- block = memblock_alloc(k_end - k_start, PAGE_SIZE);
-+ if (!block)
-+ return -ENOMEM;
-
- for (k_cur = k_start & PAGE_MASK; k_cur < k_end; k_cur += PAGE_SIZE) {
- pmd_t *pmd = pmd_ptr_k(k_cur);
- void *va = block + k_cur - k_start;
- pte_t pte = pfn_pte(PHYS_PFN(__pa(va)), PAGE_KERNEL);
-
-- if (!va)
-- return -ENOMEM;
--
- __set_pte_at(&init_mm, k_cur, pte_offset_kernel(pmd, k_cur), pte, 0);
- }
- flush_tlb_kernel_range(k_start, k_end);
-diff --git a/arch/powerpc/mm/ptdump/shared.c b/arch/powerpc/mm/ptdump/shared.c
-index f7ed2f187cb0..784f8df17f73 100644
---- a/arch/powerpc/mm/ptdump/shared.c
-+++ b/arch/powerpc/mm/ptdump/shared.c
-@@ -30,6 +30,11 @@ static const struct flag_info flag_array[] = {
- .val = _PAGE_PRESENT,
- .set = "present",
- .clear = " ",
-+ }, {
-+ .mask = _PAGE_COHERENT,
-+ .val = _PAGE_COHERENT,
-+ .set = "coherent",
-+ .clear = " ",
- }, {
- .mask = _PAGE_GUARDED,
- .val = _PAGE_GUARDED,
-diff --git a/arch/powerpc/perf/hv-24x7.c b/arch/powerpc/perf/hv-24x7.c
-index 573e0b309c0c..48e8f4b17b91 100644
---- a/arch/powerpc/perf/hv-24x7.c
-+++ b/arch/powerpc/perf/hv-24x7.c
-@@ -1400,16 +1400,6 @@ static void h_24x7_event_read(struct perf_event *event)
- h24x7hw = &get_cpu_var(hv_24x7_hw);
- h24x7hw->events[i] = event;
- put_cpu_var(h24x7hw);
-- /*
-- * Clear the event count so we can compute the _change_
-- * in the 24x7 raw counter value at the end of the txn.
-- *
-- * Note that we could alternatively read the 24x7 value
-- * now and save its value in event->hw.prev_count. But
-- * that would require issuing a hcall, which would then
-- * defeat the purpose of using the txn interface.
-- */
-- local64_set(&event->count, 0);
- }
-
- put_cpu_var(hv_24x7_reqb);
-diff --git a/arch/powerpc/platforms/4xx/pci.c b/arch/powerpc/platforms/4xx/pci.c
-index e6e2adcc7b64..c13d64c3b019 100644
---- a/arch/powerpc/platforms/4xx/pci.c
-+++ b/arch/powerpc/platforms/4xx/pci.c
-@@ -1242,7 +1242,7 @@ static void __init ppc460sx_pciex_check_link(struct ppc4xx_pciex_port *port)
- if (mbase == NULL) {
- printk(KERN_ERR "%pOF: Can't map internal config space !",
- port->node);
-- goto done;
-+ return;
- }
-
- while (attempt && (0 == (in_le32(mbase + PECFG_460SX_DLLSTA)
-@@ -1252,9 +1252,7 @@ static void __init ppc460sx_pciex_check_link(struct ppc4xx_pciex_port *port)
- }
- if (attempt)
- port->link = 1;
--done:
- iounmap(mbase);
--
- }
-
- static struct ppc4xx_pciex_hwops ppc460sx_pcie_hwops __initdata = {
-diff --git a/arch/powerpc/platforms/8xx/Kconfig b/arch/powerpc/platforms/8xx/Kconfig
-index e0fe670f06f6..b37de62d7e7f 100644
---- a/arch/powerpc/platforms/8xx/Kconfig
-+++ b/arch/powerpc/platforms/8xx/Kconfig
-@@ -98,15 +98,6 @@ menu "MPC8xx CPM Options"
- # 8xx specific questions.
- comment "Generic MPC8xx Options"
-
--config 8xx_COPYBACK
-- bool "Copy-Back Data Cache (else Writethrough)"
-- help
-- Saying Y here will cause the cache on an MPC8xx processor to be used
-- in Copy-Back mode. If you say N here, it is used in Writethrough
-- mode.
--
-- If in doubt, say Y here.
--
- config 8xx_GPIO
- bool "GPIO API Support"
- select GPIOLIB
-diff --git a/arch/powerpc/platforms/powernv/opal.c b/arch/powerpc/platforms/powernv/opal.c
-index 2b3dfd0b6cdd..d95954ad4c0a 100644
---- a/arch/powerpc/platforms/powernv/opal.c
-+++ b/arch/powerpc/platforms/powernv/opal.c
-@@ -811,6 +811,10 @@ static int opal_add_one_export(struct kobject *parent, const char *export_name,
- goto out;
-
- attr = kzalloc(sizeof(*attr), GFP_KERNEL);
-+ if (!attr) {
-+ rc = -ENOMEM;
-+ goto out;
-+ }
- name = kstrdup(export_name, GFP_KERNEL);
- if (!name) {
- rc = -ENOMEM;
-diff --git a/arch/powerpc/platforms/ps3/mm.c b/arch/powerpc/platforms/ps3/mm.c
-index 423be34f0f5f..f42fe4e86ce5 100644
---- a/arch/powerpc/platforms/ps3/mm.c
-+++ b/arch/powerpc/platforms/ps3/mm.c
-@@ -200,13 +200,14 @@ void ps3_mm_vas_destroy(void)
- {
- int result;
-
-- DBG("%s:%d: map.vas_id = %llu\n", __func__, __LINE__, map.vas_id);
--
- if (map.vas_id) {
- result = lv1_select_virtual_address_space(0);
-- BUG_ON(result);
-- result = lv1_destruct_virtual_address_space(map.vas_id);
-- BUG_ON(result);
-+ result += lv1_destruct_virtual_address_space(map.vas_id);
-+
-+ if (result) {
-+ lv1_panic(0);
-+ }
-+
- map.vas_id = 0;
- }
- }
-@@ -304,19 +305,20 @@ static void ps3_mm_region_destroy(struct mem_region *r)
- int result;
-
- if (!r->destroy) {
-- pr_info("%s:%d: Not destroying high region: %llxh %llxh\n",
-- __func__, __LINE__, r->base, r->size);
- return;
- }
-
-- DBG("%s:%d: r->base = %llxh\n", __func__, __LINE__, r->base);
--
- if (r->base) {
- result = lv1_release_memory(r->base);
-- BUG_ON(result);
-+
-+ if (result) {
-+ lv1_panic(0);
-+ }
-+
- r->size = r->base = r->offset = 0;
- map.total = map.rm.size;
- }
-+
- ps3_mm_set_repository_highmem(NULL);
- }
-
-diff --git a/arch/powerpc/platforms/pseries/ras.c b/arch/powerpc/platforms/pseries/ras.c
-index 1d1da639b8b7..16ba5c542e55 100644
---- a/arch/powerpc/platforms/pseries/ras.c
-+++ b/arch/powerpc/platforms/pseries/ras.c
-@@ -395,10 +395,11 @@ static irqreturn_t ras_error_interrupt(int irq, void *dev_id)
- /*
- * Some versions of FWNMI place the buffer inside the 4kB page starting at
- * 0x7000. Other versions place it inside the rtas buffer. We check both.
-+ * Minimum size of the buffer is 16 bytes.
- */
- #define VALID_FWNMI_BUFFER(A) \
-- ((((A) >= 0x7000) && ((A) < 0x7ff0)) || \
-- (((A) >= rtas.base) && ((A) < (rtas.base + rtas.size - 16))))
-+ ((((A) >= 0x7000) && ((A) <= 0x8000 - 16)) || \
-+ (((A) >= rtas.base) && ((A) <= (rtas.base + rtas.size - 16))))
-
- static inline struct rtas_error_log *fwnmi_get_errlog(void)
- {
-diff --git a/arch/s390/Kconfig b/arch/s390/Kconfig
-index 2167bce993ff..ae01be202204 100644
---- a/arch/s390/Kconfig
-+++ b/arch/s390/Kconfig
-@@ -462,6 +462,7 @@ config NUMA
-
- config NODES_SHIFT
- int
-+ depends on NEED_MULTIPLE_NODES
- default "1"
-
- config SCHED_SMT
-diff --git a/arch/s390/include/asm/syscall.h b/arch/s390/include/asm/syscall.h
-index f073292e9fdb..d9d5de0f67ff 100644
---- a/arch/s390/include/asm/syscall.h
-+++ b/arch/s390/include/asm/syscall.h
-@@ -33,7 +33,17 @@ static inline void syscall_rollback(struct task_struct *task,
- static inline long syscall_get_error(struct task_struct *task,
- struct pt_regs *regs)
- {
-- return IS_ERR_VALUE(regs->gprs[2]) ? regs->gprs[2] : 0;
-+ unsigned long error = regs->gprs[2];
-+#ifdef CONFIG_COMPAT
-+ if (test_tsk_thread_flag(task, TIF_31BIT)) {
-+ /*
-+ * Sign-extend the value so (int)-EFOO becomes (long)-EFOO
-+ * and will match correctly in comparisons.
-+ */
-+ error = (long)(int)error;
-+ }
-+#endif
-+ return IS_ERR_VALUE(error) ? error : 0;
- }
-
- static inline long syscall_get_return_value(struct task_struct *task,
-diff --git a/arch/sh/include/asm/io.h b/arch/sh/include/asm/io.h
-index 39c9ead489e5..b42228906eaf 100644
---- a/arch/sh/include/asm/io.h
-+++ b/arch/sh/include/asm/io.h
-@@ -328,7 +328,7 @@ __ioremap_mode(phys_addr_t offset, unsigned long size, pgprot_t prot)
- #else
- #define __ioremap(offset, size, prot) ((void __iomem *)(offset))
- #define __ioremap_mode(offset, size, prot) ((void __iomem *)(offset))
--#define iounmap(addr) do { } while (0)
-+static inline void iounmap(void __iomem *addr) {}
- #endif /* CONFIG_MMU */
-
- static inline void __iomem *ioremap(phys_addr_t offset, unsigned long size)
-diff --git a/arch/sparc/mm/srmmu.c b/arch/sparc/mm/srmmu.c
-index a8c2f2615fc6..ecc9e8786d57 100644
---- a/arch/sparc/mm/srmmu.c
-+++ b/arch/sparc/mm/srmmu.c
-@@ -383,7 +383,6 @@ pgtable_t pte_alloc_one(struct mm_struct *mm)
- return NULL;
- page = pfn_to_page(__nocache_pa(pte) >> PAGE_SHIFT);
- if (!pgtable_pte_page_ctor(page)) {
-- __free_page(page);
- return NULL;
- }
- return page;
-diff --git a/arch/um/drivers/Makefile b/arch/um/drivers/Makefile
-index a290821e355c..2a249f619467 100644
---- a/arch/um/drivers/Makefile
-+++ b/arch/um/drivers/Makefile
-@@ -18,9 +18,9 @@ ubd-objs := ubd_kern.o ubd_user.o
- port-objs := port_kern.o port_user.o
- harddog-objs := harddog_kern.o harddog_user.o
-
--LDFLAGS_pcap.o := -r $(shell $(CC) $(KBUILD_CFLAGS) -print-file-name=libpcap.a)
-+LDFLAGS_pcap.o = $(shell $(CC) $(KBUILD_CFLAGS) -print-file-name=libpcap.a)
-
--LDFLAGS_vde.o := -r $(shell $(CC) $(CFLAGS) -print-file-name=libvdeplug.a)
-+LDFLAGS_vde.o = $(shell $(CC) $(CFLAGS) -print-file-name=libvdeplug.a)
-
- targets := pcap_kern.o pcap_user.o vde_kern.o vde_user.o
-
-diff --git a/arch/unicore32/lib/Makefile b/arch/unicore32/lib/Makefile
-index 098981a01841..5af06645b8f0 100644
---- a/arch/unicore32/lib/Makefile
-+++ b/arch/unicore32/lib/Makefile
-@@ -10,12 +10,12 @@ lib-y += strncpy_from_user.o strnlen_user.o
- lib-y += clear_user.o copy_page.o
- lib-y += copy_from_user.o copy_to_user.o
-
--GNU_LIBC_A := $(shell $(CC) $(KBUILD_CFLAGS) -print-file-name=libc.a)
-+GNU_LIBC_A = $(shell $(CC) $(KBUILD_CFLAGS) -print-file-name=libc.a)
- GNU_LIBC_A_OBJS := memchr.o memcpy.o memmove.o memset.o
- GNU_LIBC_A_OBJS += strchr.o strrchr.o
- GNU_LIBC_A_OBJS += rawmemchr.o # needed by strrchr.o
-
--GNU_LIBGCC_A := $(shell $(CC) $(KBUILD_CFLAGS) -print-file-name=libgcc.a)
-+GNU_LIBGCC_A = $(shell $(CC) $(KBUILD_CFLAGS) -print-file-name=libgcc.a)
- GNU_LIBGCC_A_OBJS := _ashldi3.o _ashrdi3.o _lshrdi3.o
- GNU_LIBGCC_A_OBJS += _divsi3.o _modsi3.o _ucmpdi2.o _umodsi3.o _udivsi3.o
-
-diff --git a/arch/x86/kernel/apic/apic.c b/arch/x86/kernel/apic/apic.c
-index e53dda210cd7..21d2f1de1057 100644
---- a/arch/x86/kernel/apic/apic.c
-+++ b/arch/x86/kernel/apic/apic.c
-@@ -2093,7 +2093,7 @@ void __init init_apic_mappings(void)
- unsigned int new_apicid;
-
- if (apic_validate_deadline_timer())
-- pr_debug("TSC deadline timer available\n");
-+ pr_info("TSC deadline timer available\n");
-
- if (x2apic_mode) {
- boot_cpu_physical_apicid = read_apic_id();
-diff --git a/arch/x86/kernel/cpu/mce/dev-mcelog.c b/arch/x86/kernel/cpu/mce/dev-mcelog.c
-index d089567a9ce8..bcb379b2fd42 100644
---- a/arch/x86/kernel/cpu/mce/dev-mcelog.c
-+++ b/arch/x86/kernel/cpu/mce/dev-mcelog.c
-@@ -343,7 +343,7 @@ static __init int dev_mcelog_init_device(void)
- if (!mcelog)
- return -ENOMEM;
-
-- strncpy(mcelog->signature, MCE_LOG_SIGNATURE, sizeof(mcelog->signature));
-+ memcpy(mcelog->signature, MCE_LOG_SIGNATURE, sizeof(mcelog->signature));
- mcelog->len = mce_log_len;
- mcelog->recordlen = sizeof(struct mce);
-
-diff --git a/arch/x86/kernel/idt.c b/arch/x86/kernel/idt.c
-index 87ef69a72c52..7bb4c3cbf4dc 100644
---- a/arch/x86/kernel/idt.c
-+++ b/arch/x86/kernel/idt.c
-@@ -318,7 +318,11 @@ void __init idt_setup_apic_and_irq_gates(void)
-
- #ifdef CONFIG_X86_LOCAL_APIC
- for_each_clear_bit_from(i, system_vectors, NR_VECTORS) {
-- set_bit(i, system_vectors);
-+ /*
-+ * Don't set the non assigned system vectors in the
-+ * system_vectors bitmap. Otherwise they show up in
-+ * /proc/interrupts.
-+ */
- entry = spurious_entries_start + 8 * (i - FIRST_SYSTEM_VECTOR);
- set_intr_gate(i, entry);
- }
-diff --git a/arch/x86/kernel/kprobes/core.c b/arch/x86/kernel/kprobes/core.c
-index 4d7022a740ab..a12adbe1559d 100644
---- a/arch/x86/kernel/kprobes/core.c
-+++ b/arch/x86/kernel/kprobes/core.c
-@@ -753,16 +753,11 @@ asm(
- NOKPROBE_SYMBOL(kretprobe_trampoline);
- STACK_FRAME_NON_STANDARD(kretprobe_trampoline);
-
--static struct kprobe kretprobe_kprobe = {
-- .addr = (void *)kretprobe_trampoline,
--};
--
- /*
- * Called from kretprobe_trampoline
- */
- __used __visible void *trampoline_handler(struct pt_regs *regs)
- {
-- struct kprobe_ctlblk *kcb;
- struct kretprobe_instance *ri = NULL;
- struct hlist_head *head, empty_rp;
- struct hlist_node *tmp;
-@@ -772,16 +767,12 @@ __used __visible void *trampoline_handler(struct pt_regs *regs)
- void *frame_pointer;
- bool skipped = false;
-
-- preempt_disable();
--
- /*
- * Set a dummy kprobe for avoiding kretprobe recursion.
- * Since kretprobe never run in kprobe handler, kprobe must not
- * be running at this point.
- */
-- kcb = get_kprobe_ctlblk();
-- __this_cpu_write(current_kprobe, &kretprobe_kprobe);
-- kcb->kprobe_status = KPROBE_HIT_ACTIVE;
-+ kprobe_busy_begin();
-
- INIT_HLIST_HEAD(&empty_rp);
- kretprobe_hash_lock(current, &head, &flags);
-@@ -857,7 +848,7 @@ __used __visible void *trampoline_handler(struct pt_regs *regs)
- __this_cpu_write(current_kprobe, &ri->rp->kp);
- ri->ret_addr = correct_ret_addr;
- ri->rp->handler(ri, regs);
-- __this_cpu_write(current_kprobe, &kretprobe_kprobe);
-+ __this_cpu_write(current_kprobe, &kprobe_busy);
- }
-
- recycle_rp_inst(ri, &empty_rp);
-@@ -873,8 +864,7 @@ __used __visible void *trampoline_handler(struct pt_regs *regs)
-
- kretprobe_hash_unlock(current, &flags);
-
-- __this_cpu_write(current_kprobe, NULL);
-- preempt_enable();
-+ kprobe_busy_end();
-
- hlist_for_each_entry_safe(ri, tmp, &empty_rp, hlist) {
- hlist_del(&ri->hlist);
-diff --git a/arch/x86/purgatory/Makefile b/arch/x86/purgatory/Makefile
-index fb4ee5444379..9733d1cc791d 100644
---- a/arch/x86/purgatory/Makefile
-+++ b/arch/x86/purgatory/Makefile
-@@ -17,7 +17,10 @@ CFLAGS_sha256.o := -D__DISABLE_EXPORTS
- LDFLAGS_purgatory.ro := -e purgatory_start -r --no-undefined -nostdlib -z nodefaultlib
- targets += purgatory.ro
-
-+# Sanitizer, etc. runtimes are unavailable and cannot be linked here.
-+GCOV_PROFILE := n
- KASAN_SANITIZE := n
-+UBSAN_SANITIZE := n
- KCOV_INSTRUMENT := n
-
- # These are adjustments to the compiler flags used for objects that
-@@ -25,7 +28,7 @@ KCOV_INSTRUMENT := n
-
- PURGATORY_CFLAGS_REMOVE := -mcmodel=kernel
- PURGATORY_CFLAGS := -mcmodel=large -ffreestanding -fno-zero-initialized-in-bss
--PURGATORY_CFLAGS += $(DISABLE_STACKLEAK_PLUGIN)
-+PURGATORY_CFLAGS += $(DISABLE_STACKLEAK_PLUGIN) -DDISABLE_BRANCH_PROFILING
-
- # Default KBUILD_CFLAGS can have -pg option set when FTRACE is enabled. That
- # in turn leaves some undefined symbols like __fentry__ in purgatory and not
-diff --git a/crypto/algboss.c b/crypto/algboss.c
-index 535f1f87e6c1..5ebccbd6b74e 100644
---- a/crypto/algboss.c
-+++ b/crypto/algboss.c
-@@ -178,8 +178,6 @@ static int cryptomgr_schedule_probe(struct crypto_larval *larval)
- if (IS_ERR(thread))
- goto err_put_larval;
-
-- wait_for_completion_interruptible(&larval->completion);
--
- return NOTIFY_STOP;
-
- err_put_larval:
-diff --git a/crypto/algif_skcipher.c b/crypto/algif_skcipher.c
-index e2c8ab408bed..4c3bdffe0c3a 100644
---- a/crypto/algif_skcipher.c
-+++ b/crypto/algif_skcipher.c
-@@ -74,14 +74,10 @@ static int _skcipher_recvmsg(struct socket *sock, struct msghdr *msg,
- return PTR_ERR(areq);
-
- /* convert iovecs of output buffers into RX SGL */
-- err = af_alg_get_rsgl(sk, msg, flags, areq, -1, &len);
-+ err = af_alg_get_rsgl(sk, msg, flags, areq, ctx->used, &len);
- if (err)
- goto free;
-
-- /* Process only as much RX buffers for which we have TX data */
-- if (len > ctx->used)
-- len = ctx->used;
--
- /*
- * If more buffers are to be expected to be processed, process only
- * full block size buffers.
-diff --git a/drivers/ata/libata-core.c b/drivers/ata/libata-core.c
-index beca5f91bb4c..e74c8fe2a5fd 100644
---- a/drivers/ata/libata-core.c
-+++ b/drivers/ata/libata-core.c
-@@ -42,7 +42,6 @@
- #include <linux/workqueue.h>
- #include <linux/scatterlist.h>
- #include <linux/io.h>
--#include <linux/async.h>
- #include <linux/log2.h>
- #include <linux/slab.h>
- #include <linux/glob.h>
-@@ -5778,7 +5777,7 @@ int ata_host_register(struct ata_host *host, struct scsi_host_template *sht)
- /* perform each probe asynchronously */
- for (i = 0; i < host->n_ports; i++) {
- struct ata_port *ap = host->ports[i];
-- async_schedule(async_port_probe, ap);
-+ ap->cookie = async_schedule(async_port_probe, ap);
- }
-
- return 0;
-@@ -5920,11 +5919,11 @@ void ata_host_detach(struct ata_host *host)
- {
- int i;
-
-- /* Ensure ata_port probe has completed */
-- async_synchronize_full();
--
-- for (i = 0; i < host->n_ports; i++)
-+ for (i = 0; i < host->n_ports; i++) {
-+ /* Ensure ata_port probe has completed */
-+ async_synchronize_cookie(host->ports[i]->cookie + 1);
- ata_port_detach(host->ports[i]);
-+ }
-
- /* the host is dead now, dissociate ACPI */
- ata_acpi_dissociate(host);
-diff --git a/drivers/base/platform.c b/drivers/base/platform.c
-index b27d0f6c18c9..f5d485166fd3 100644
---- a/drivers/base/platform.c
-+++ b/drivers/base/platform.c
-@@ -851,6 +851,8 @@ int __init_or_module __platform_driver_probe(struct platform_driver *drv,
- /* temporary section violation during probe() */
- drv->probe = probe;
- retval = code = __platform_driver_register(drv, module);
-+ if (retval)
-+ return retval;
-
- /*
- * Fixup that section violation, being paranoid about code scanning
-diff --git a/drivers/block/ps3disk.c b/drivers/block/ps3disk.c
-index c5c6487a19d5..7b55811c2a81 100644
---- a/drivers/block/ps3disk.c
-+++ b/drivers/block/ps3disk.c
-@@ -454,7 +454,6 @@ static int ps3disk_probe(struct ps3_system_bus_device *_dev)
- queue->queuedata = dev;
-
- blk_queue_max_hw_sectors(queue, dev->bounce_size >> 9);
-- blk_queue_segment_boundary(queue, -1UL);
- blk_queue_dma_alignment(queue, dev->blk_size-1);
- blk_queue_logical_block_size(queue, dev->blk_size);
-
-diff --git a/drivers/bus/mhi/core/main.c b/drivers/bus/mhi/core/main.c
-index 97e06cc586e4..8be3d0fb0614 100644
---- a/drivers/bus/mhi/core/main.c
-+++ b/drivers/bus/mhi/core/main.c
-@@ -513,7 +513,10 @@ static int parse_xfer_event(struct mhi_controller *mhi_cntrl,
- mhi_cntrl->unmap_single(mhi_cntrl, buf_info);
-
- result.buf_addr = buf_info->cb_buf;
-- result.bytes_xferd = xfer_len;
-+
-+ /* truncate to buf len if xfer_len is larger */
-+ result.bytes_xferd =
-+ min_t(u16, xfer_len, buf_info->len);
- mhi_del_ring_element(mhi_cntrl, buf_ring);
- mhi_del_ring_element(mhi_cntrl, tre_ring);
- local_rp = tre_ring->rp;
-@@ -597,7 +600,9 @@ static int parse_rsc_event(struct mhi_controller *mhi_cntrl,
-
- result.transaction_status = (ev_code == MHI_EV_CC_OVERFLOW) ?
- -EOVERFLOW : 0;
-- result.bytes_xferd = xfer_len;
-+
-+ /* truncate to buf len if xfer_len is larger */
-+ result.bytes_xferd = min_t(u16, xfer_len, buf_info->len);
- result.buf_addr = buf_info->cb_buf;
- result.dir = mhi_chan->dir;
-
-diff --git a/drivers/char/ipmi/ipmi_msghandler.c b/drivers/char/ipmi/ipmi_msghandler.c
-index c48d8f086382..9afd220cd824 100644
---- a/drivers/char/ipmi/ipmi_msghandler.c
-+++ b/drivers/char/ipmi/ipmi_msghandler.c
-@@ -33,6 +33,7 @@
- #include <linux/workqueue.h>
- #include <linux/uuid.h>
- #include <linux/nospec.h>
-+#include <linux/vmalloc.h>
-
- #define IPMI_DRIVER_VERSION "39.2"
-
-@@ -1153,7 +1154,7 @@ static void free_user_work(struct work_struct *work)
- remove_work);
-
- cleanup_srcu_struct(&user->release_barrier);
-- kfree(user);
-+ vfree(user);
- }
-
- int ipmi_create_user(unsigned int if_num,
-@@ -1185,7 +1186,7 @@ int ipmi_create_user(unsigned int if_num,
- if (rv)
- return rv;
-
-- new_user = kmalloc(sizeof(*new_user), GFP_KERNEL);
-+ new_user = vzalloc(sizeof(*new_user));
- if (!new_user)
- return -ENOMEM;
-
-@@ -1232,7 +1233,7 @@ int ipmi_create_user(unsigned int if_num,
-
- out_kfree:
- srcu_read_unlock(&ipmi_interfaces_srcu, index);
-- kfree(new_user);
-+ vfree(new_user);
- return rv;
- }
- EXPORT_SYMBOL(ipmi_create_user);
-diff --git a/drivers/char/mem.c b/drivers/char/mem.c
-index 43dd0891ca1e..31cae88a730b 100644
---- a/drivers/char/mem.c
-+++ b/drivers/char/mem.c
-@@ -31,11 +31,15 @@
- #include <linux/uio.h>
- #include <linux/uaccess.h>
- #include <linux/security.h>
-+#include <linux/pseudo_fs.h>
-+#include <uapi/linux/magic.h>
-+#include <linux/mount.h>
-
- #ifdef CONFIG_IA64
- # include <linux/efi.h>
- #endif
-
-+#define DEVMEM_MINOR 1
- #define DEVPORT_MINOR 4
-
- static inline unsigned long size_inside_page(unsigned long start,
-@@ -805,12 +809,64 @@ static loff_t memory_lseek(struct file *file, loff_t offset, int orig)
- return ret;
- }
-
-+static struct inode *devmem_inode;
-+
-+#ifdef CONFIG_IO_STRICT_DEVMEM
-+void revoke_devmem(struct resource *res)
-+{
-+ struct inode *inode = READ_ONCE(devmem_inode);
-+
-+ /*
-+ * Check that the initialization has completed. Losing the race
-+ * is ok because it means drivers are claiming resources before
-+ * the fs_initcall level of init and prevent /dev/mem from
-+ * establishing mappings.
-+ */
-+ if (!inode)
-+ return;
-+
-+ /*
-+ * The expectation is that the driver has successfully marked
-+ * the resource busy by this point, so devmem_is_allowed()
-+ * should start returning false, however for performance this
-+ * does not iterate the entire resource range.
-+ */
-+ if (devmem_is_allowed(PHYS_PFN(res->start)) &&
-+ devmem_is_allowed(PHYS_PFN(res->end))) {
-+ /*
-+ * *cringe* iomem=relaxed says "go ahead, what's the
-+ * worst that can happen?"
-+ */
-+ return;
-+ }
-+
-+ unmap_mapping_range(inode->i_mapping, res->start, resource_size(res), 1);
-+}
-+#endif
-+
- static int open_port(struct inode *inode, struct file *filp)
- {
-+ int rc;
-+
- if (!capable(CAP_SYS_RAWIO))
- return -EPERM;
-
-- return security_locked_down(LOCKDOWN_DEV_MEM);
-+ rc = security_locked_down(LOCKDOWN_DEV_MEM);
-+ if (rc)
-+ return rc;
-+
-+ if (iminor(inode) != DEVMEM_MINOR)
-+ return 0;
-+
-+ /*
-+ * Use a unified address space to have a single point to manage
-+ * revocations when drivers want to take over a /dev/mem mapped
-+ * range.
-+ */
-+ inode->i_mapping = devmem_inode->i_mapping;
-+ filp->f_mapping = inode->i_mapping;
-+
-+ return 0;
- }
-
- #define zero_lseek null_lseek
-@@ -885,7 +941,7 @@ static const struct memdev {
- fmode_t fmode;
- } devlist[] = {
- #ifdef CONFIG_DEVMEM
-- [1] = { "mem", 0, &mem_fops, FMODE_UNSIGNED_OFFSET },
-+ [DEVMEM_MINOR] = { "mem", 0, &mem_fops, FMODE_UNSIGNED_OFFSET },
- #endif
- #ifdef CONFIG_DEVKMEM
- [2] = { "kmem", 0, &kmem_fops, FMODE_UNSIGNED_OFFSET },
-@@ -939,6 +995,45 @@ static char *mem_devnode(struct device *dev, umode_t *mode)
-
- static struct class *mem_class;
-
-+static int devmem_fs_init_fs_context(struct fs_context *fc)
-+{
-+ return init_pseudo(fc, DEVMEM_MAGIC) ? 0 : -ENOMEM;
-+}
-+
-+static struct file_system_type devmem_fs_type = {
-+ .name = "devmem",
-+ .owner = THIS_MODULE,
-+ .init_fs_context = devmem_fs_init_fs_context,
-+ .kill_sb = kill_anon_super,
-+};
-+
-+static int devmem_init_inode(void)
-+{
-+ static struct vfsmount *devmem_vfs_mount;
-+ static int devmem_fs_cnt;
-+ struct inode *inode;
-+ int rc;
-+
-+ rc = simple_pin_fs(&devmem_fs_type, &devmem_vfs_mount, &devmem_fs_cnt);
-+ if (rc < 0) {
-+ pr_err("Cannot mount /dev/mem pseudo filesystem: %d\n", rc);
-+ return rc;
-+ }
-+
-+ inode = alloc_anon_inode(devmem_vfs_mount->mnt_sb);
-+ if (IS_ERR(inode)) {
-+ rc = PTR_ERR(inode);
-+ pr_err("Cannot allocate inode for /dev/mem: %d\n", rc);
-+ simple_release_fs(&devmem_vfs_mount, &devmem_fs_cnt);
-+ return rc;
-+ }
-+
-+ /* publish /dev/mem initialized */
-+ WRITE_ONCE(devmem_inode, inode);
-+
-+ return 0;
-+}
-+
- static int __init chr_dev_init(void)
- {
- int minor;
-@@ -960,6 +1055,8 @@ static int __init chr_dev_init(void)
- */
- if ((minor == DEVPORT_MINOR) && !arch_has_dev_port())
- continue;
-+ if ((minor == DEVMEM_MINOR) && devmem_init_inode() != 0)
-+ continue;
-
- device_create(mem_class, NULL, MKDEV(MEM_MAJOR, minor),
- NULL, devlist[minor].name);
-diff --git a/drivers/clk/Makefile b/drivers/clk/Makefile
-index f4169cc2fd31..60e811d3f226 100644
---- a/drivers/clk/Makefile
-+++ b/drivers/clk/Makefile
-@@ -105,7 +105,7 @@ obj-$(CONFIG_CLK_SIFIVE) += sifive/
- obj-$(CONFIG_ARCH_SIRF) += sirf/
- obj-$(CONFIG_ARCH_SOCFPGA) += socfpga/
- obj-$(CONFIG_PLAT_SPEAR) += spear/
--obj-$(CONFIG_ARCH_SPRD) += sprd/
-+obj-y += sprd/
- obj-$(CONFIG_ARCH_STI) += st/
- obj-$(CONFIG_ARCH_STRATIX10) += socfpga/
- obj-$(CONFIG_ARCH_SUNXI) += sunxi/
-diff --git a/drivers/clk/bcm/clk-bcm2835.c b/drivers/clk/bcm/clk-bcm2835.c
-index ded13ccf768e..7c845c293af0 100644
---- a/drivers/clk/bcm/clk-bcm2835.c
-+++ b/drivers/clk/bcm/clk-bcm2835.c
-@@ -1448,13 +1448,13 @@ static struct clk_hw *bcm2835_register_clock(struct bcm2835_cprman *cprman,
- return &clock->hw;
- }
-
--static struct clk *bcm2835_register_gate(struct bcm2835_cprman *cprman,
-+static struct clk_hw *bcm2835_register_gate(struct bcm2835_cprman *cprman,
- const struct bcm2835_gate_data *data)
- {
-- return clk_register_gate(cprman->dev, data->name, data->parent,
-- CLK_IGNORE_UNUSED | CLK_SET_RATE_GATE,
-- cprman->regs + data->ctl_reg,
-- CM_GATE_BIT, 0, &cprman->regs_lock);
-+ return clk_hw_register_gate(cprman->dev, data->name, data->parent,
-+ CLK_IGNORE_UNUSED | CLK_SET_RATE_GATE,
-+ cprman->regs + data->ctl_reg,
-+ CM_GATE_BIT, 0, &cprman->regs_lock);
- }
-
- typedef struct clk_hw *(*bcm2835_clk_register)(struct bcm2835_cprman *cprman,
-diff --git a/drivers/clk/clk-ast2600.c b/drivers/clk/clk-ast2600.c
-index 392d01705b97..99afc949925f 100644
---- a/drivers/clk/clk-ast2600.c
-+++ b/drivers/clk/clk-ast2600.c
-@@ -642,14 +642,22 @@ static const u32 ast2600_a0_axi_ahb_div_table[] = {
- 2, 2, 3, 5,
- };
-
--static const u32 ast2600_a1_axi_ahb_div_table[] = {
-- 4, 6, 2, 4,
-+static const u32 ast2600_a1_axi_ahb_div0_tbl[] = {
-+ 3, 2, 3, 4,
-+};
-+
-+static const u32 ast2600_a1_axi_ahb_div1_tbl[] = {
-+ 3, 4, 6, 8,
-+};
-+
-+static const u32 ast2600_a1_axi_ahb200_tbl[] = {
-+ 3, 4, 3, 4, 2, 2, 2, 2,
- };
-
- static void __init aspeed_g6_cc(struct regmap *map)
- {
- struct clk_hw *hw;
-- u32 val, div, chip_id, axi_div, ahb_div;
-+ u32 val, div, divbits, chip_id, axi_div, ahb_div;
-
- clk_hw_register_fixed_rate(NULL, "clkin", NULL, 0, 25000000);
-
-@@ -679,11 +687,22 @@ static void __init aspeed_g6_cc(struct regmap *map)
- else
- axi_div = 2;
-
-+ divbits = (val >> 11) & 0x3;
- regmap_read(map, ASPEED_G6_SILICON_REV, &chip_id);
-- if (chip_id & BIT(16))
-- ahb_div = ast2600_a1_axi_ahb_div_table[(val >> 11) & 0x3];
-- else
-+ if (chip_id & BIT(16)) {
-+ if (!divbits) {
-+ ahb_div = ast2600_a1_axi_ahb200_tbl[(val >> 8) & 0x3];
-+ if (val & BIT(16))
-+ ahb_div *= 2;
-+ } else {
-+ if (val & BIT(16))
-+ ahb_div = ast2600_a1_axi_ahb_div1_tbl[divbits];
-+ else
-+ ahb_div = ast2600_a1_axi_ahb_div0_tbl[divbits];
-+ }
-+ } else {
- ahb_div = ast2600_a0_axi_ahb_div_table[(val >> 11) & 0x3];
-+ }
-
- hw = clk_hw_register_fixed_factor(NULL, "ahb", "hpll", 0, 1, axi_div * ahb_div);
- aspeed_g6_clk_data->hws[ASPEED_CLK_AHB] = hw;
-diff --git a/drivers/clk/meson/meson8b.c b/drivers/clk/meson/meson8b.c
-index 34a70c4b4899..11f6b868cf2b 100644
---- a/drivers/clk/meson/meson8b.c
-+++ b/drivers/clk/meson/meson8b.c
-@@ -1077,7 +1077,7 @@ static struct clk_regmap meson8b_vid_pll_in_sel = {
- * Meson8m2: vid2_pll
- */
- .parent_hws = (const struct clk_hw *[]) {
-- &meson8b_hdmi_pll_dco.hw
-+ &meson8b_hdmi_pll_lvds_out.hw
- },
- .num_parents = 1,
- .flags = CLK_SET_RATE_PARENT,
-@@ -1213,7 +1213,7 @@ static struct clk_regmap meson8b_vclk_in_en = {
-
- static struct clk_regmap meson8b_vclk_div1_gate = {
- .data = &(struct clk_regmap_gate_data){
-- .offset = HHI_VID_CLK_DIV,
-+ .offset = HHI_VID_CLK_CNTL,
- .bit_idx = 0,
- },
- .hw.init = &(struct clk_init_data){
-@@ -1243,7 +1243,7 @@ static struct clk_fixed_factor meson8b_vclk_div2_div = {
-
- static struct clk_regmap meson8b_vclk_div2_div_gate = {
- .data = &(struct clk_regmap_gate_data){
-- .offset = HHI_VID_CLK_DIV,
-+ .offset = HHI_VID_CLK_CNTL,
- .bit_idx = 1,
- },
- .hw.init = &(struct clk_init_data){
-@@ -1273,7 +1273,7 @@ static struct clk_fixed_factor meson8b_vclk_div4_div = {
-
- static struct clk_regmap meson8b_vclk_div4_div_gate = {
- .data = &(struct clk_regmap_gate_data){
-- .offset = HHI_VID_CLK_DIV,
-+ .offset = HHI_VID_CLK_CNTL,
- .bit_idx = 2,
- },
- .hw.init = &(struct clk_init_data){
-@@ -1303,7 +1303,7 @@ static struct clk_fixed_factor meson8b_vclk_div6_div = {
-
- static struct clk_regmap meson8b_vclk_div6_div_gate = {
- .data = &(struct clk_regmap_gate_data){
-- .offset = HHI_VID_CLK_DIV,
-+ .offset = HHI_VID_CLK_CNTL,
- .bit_idx = 3,
- },
- .hw.init = &(struct clk_init_data){
-@@ -1333,7 +1333,7 @@ static struct clk_fixed_factor meson8b_vclk_div12_div = {
-
- static struct clk_regmap meson8b_vclk_div12_div_gate = {
- .data = &(struct clk_regmap_gate_data){
-- .offset = HHI_VID_CLK_DIV,
-+ .offset = HHI_VID_CLK_CNTL,
- .bit_idx = 4,
- },
- .hw.init = &(struct clk_init_data){
-@@ -1918,6 +1918,13 @@ static struct clk_regmap meson8b_mali = {
- },
- };
-
-+static const struct reg_sequence meson8m2_gp_pll_init_regs[] = {
-+ { .reg = HHI_GP_PLL_CNTL2, .def = 0x59c88000 },
-+ { .reg = HHI_GP_PLL_CNTL3, .def = 0xca463823 },
-+ { .reg = HHI_GP_PLL_CNTL4, .def = 0x0286a027 },
-+ { .reg = HHI_GP_PLL_CNTL5, .def = 0x00003000 },
-+};
-+
- static const struct pll_params_table meson8m2_gp_pll_params_table[] = {
- PLL_PARAMS(182, 3),
- { /* sentinel */ },
-@@ -1951,6 +1958,8 @@ static struct clk_regmap meson8m2_gp_pll_dco = {
- .width = 1,
- },
- .table = meson8m2_gp_pll_params_table,
-+ .init_regs = meson8m2_gp_pll_init_regs,
-+ .init_count = ARRAY_SIZE(meson8m2_gp_pll_init_regs),
- },
- .hw.init = &(struct clk_init_data){
- .name = "gp_pll_dco",
-@@ -3506,54 +3515,87 @@ static struct clk_regmap *const meson8b_clk_regmaps[] = {
- static const struct meson8b_clk_reset_line {
- u32 reg;
- u8 bit_idx;
-+ bool active_low;
- } meson8b_clk_reset_bits[] = {
- [CLKC_RESET_L2_CACHE_SOFT_RESET] = {
-- .reg = HHI_SYS_CPU_CLK_CNTL0, .bit_idx = 30
-+ .reg = HHI_SYS_CPU_CLK_CNTL0,
-+ .bit_idx = 30,
-+ .active_low = false,
- },
- [CLKC_RESET_AXI_64_TO_128_BRIDGE_A5_SOFT_RESET] = {
-- .reg = HHI_SYS_CPU_CLK_CNTL0, .bit_idx = 29
-+ .reg = HHI_SYS_CPU_CLK_CNTL0,
-+ .bit_idx = 29,
-+ .active_low = false,
- },
- [CLKC_RESET_SCU_SOFT_RESET] = {
-- .reg = HHI_SYS_CPU_CLK_CNTL0, .bit_idx = 28
-+ .reg = HHI_SYS_CPU_CLK_CNTL0,
-+ .bit_idx = 28,
-+ .active_low = false,
- },
- [CLKC_RESET_CPU3_SOFT_RESET] = {
-- .reg = HHI_SYS_CPU_CLK_CNTL0, .bit_idx = 27
-+ .reg = HHI_SYS_CPU_CLK_CNTL0,
-+ .bit_idx = 27,
-+ .active_low = false,
- },
- [CLKC_RESET_CPU2_SOFT_RESET] = {
-- .reg = HHI_SYS_CPU_CLK_CNTL0, .bit_idx = 26
-+ .reg = HHI_SYS_CPU_CLK_CNTL0,
-+ .bit_idx = 26,
-+ .active_low = false,
- },
- [CLKC_RESET_CPU1_SOFT_RESET] = {
-- .reg = HHI_SYS_CPU_CLK_CNTL0, .bit_idx = 25
-+ .reg = HHI_SYS_CPU_CLK_CNTL0,
-+ .bit_idx = 25,
-+ .active_low = false,
- },
- [CLKC_RESET_CPU0_SOFT_RESET] = {
-- .reg = HHI_SYS_CPU_CLK_CNTL0, .bit_idx = 24
-+ .reg = HHI_SYS_CPU_CLK_CNTL0,
-+ .bit_idx = 24,
-+ .active_low = false,
- },
- [CLKC_RESET_A5_GLOBAL_RESET] = {
-- .reg = HHI_SYS_CPU_CLK_CNTL0, .bit_idx = 18
-+ .reg = HHI_SYS_CPU_CLK_CNTL0,
-+ .bit_idx = 18,
-+ .active_low = false,
- },
- [CLKC_RESET_A5_AXI_SOFT_RESET] = {
-- .reg = HHI_SYS_CPU_CLK_CNTL0, .bit_idx = 17
-+ .reg = HHI_SYS_CPU_CLK_CNTL0,
-+ .bit_idx = 17,
-+ .active_low = false,
- },
- [CLKC_RESET_A5_ABP_SOFT_RESET] = {
-- .reg = HHI_SYS_CPU_CLK_CNTL0, .bit_idx = 16
-+ .reg = HHI_SYS_CPU_CLK_CNTL0,
-+ .bit_idx = 16,
-+ .active_low = false,
- },
- [CLKC_RESET_AXI_64_TO_128_BRIDGE_MMC_SOFT_RESET] = {
-- .reg = HHI_SYS_CPU_CLK_CNTL1, .bit_idx = 30
-+ .reg = HHI_SYS_CPU_CLK_CNTL1,
-+ .bit_idx = 30,
-+ .active_low = false,
- },
- [CLKC_RESET_VID_CLK_CNTL_SOFT_RESET] = {
-- .reg = HHI_VID_CLK_CNTL, .bit_idx = 15
-+ .reg = HHI_VID_CLK_CNTL,
-+ .bit_idx = 15,
-+ .active_low = false,
- },
- [CLKC_RESET_VID_DIVIDER_CNTL_SOFT_RESET_POST] = {
-- .reg = HHI_VID_DIVIDER_CNTL, .bit_idx = 7
-+ .reg = HHI_VID_DIVIDER_CNTL,
-+ .bit_idx = 7,
-+ .active_low = false,
- },
- [CLKC_RESET_VID_DIVIDER_CNTL_SOFT_RESET_PRE] = {
-- .reg = HHI_VID_DIVIDER_CNTL, .bit_idx = 3
-+ .reg = HHI_VID_DIVIDER_CNTL,
-+ .bit_idx = 3,
-+ .active_low = false,
- },
- [CLKC_RESET_VID_DIVIDER_CNTL_RESET_N_POST] = {
-- .reg = HHI_VID_DIVIDER_CNTL, .bit_idx = 1
-+ .reg = HHI_VID_DIVIDER_CNTL,
-+ .bit_idx = 1,
-+ .active_low = true,
- },
- [CLKC_RESET_VID_DIVIDER_CNTL_RESET_N_PRE] = {
-- .reg = HHI_VID_DIVIDER_CNTL, .bit_idx = 0
-+ .reg = HHI_VID_DIVIDER_CNTL,
-+ .bit_idx = 0,
-+ .active_low = true,
- },
- };
-
-@@ -3562,22 +3604,22 @@ static int meson8b_clk_reset_update(struct reset_controller_dev *rcdev,
- {
- struct meson8b_clk_reset *meson8b_clk_reset =
- container_of(rcdev, struct meson8b_clk_reset, reset);
-- unsigned long flags;
- const struct meson8b_clk_reset_line *reset;
-+ unsigned int value = 0;
-+ unsigned long flags;
-
- if (id >= ARRAY_SIZE(meson8b_clk_reset_bits))
- return -EINVAL;
-
- reset = &meson8b_clk_reset_bits[id];
-
-+ if (assert != reset->active_low)
-+ value = BIT(reset->bit_idx);
-+
- spin_lock_irqsave(&meson_clk_lock, flags);
-
-- if (assert)
-- regmap_update_bits(meson8b_clk_reset->regmap, reset->reg,
-- BIT(reset->bit_idx), BIT(reset->bit_idx));
-- else
-- regmap_update_bits(meson8b_clk_reset->regmap, reset->reg,
-- BIT(reset->bit_idx), 0);
-+ regmap_update_bits(meson8b_clk_reset->regmap, reset->reg,
-+ BIT(reset->bit_idx), value);
-
- spin_unlock_irqrestore(&meson_clk_lock, flags);
-
-diff --git a/drivers/clk/meson/meson8b.h b/drivers/clk/meson/meson8b.h
-index c889fbeec30f..c91fb07fcb65 100644
---- a/drivers/clk/meson/meson8b.h
-+++ b/drivers/clk/meson/meson8b.h
-@@ -20,6 +20,10 @@
- * [0] http://dn.odroid.com/S805/Datasheet/S805_Datasheet%20V0.8%2020150126.pdf
- */
- #define HHI_GP_PLL_CNTL 0x40 /* 0x10 offset in data sheet */
-+#define HHI_GP_PLL_CNTL2 0x44 /* 0x11 offset in data sheet */
-+#define HHI_GP_PLL_CNTL3 0x48 /* 0x12 offset in data sheet */
-+#define HHI_GP_PLL_CNTL4 0x4C /* 0x13 offset in data sheet */
-+#define HHI_GP_PLL_CNTL5 0x50 /* 0x14 offset in data sheet */
- #define HHI_VIID_CLK_DIV 0x128 /* 0x4a offset in data sheet */
- #define HHI_VIID_CLK_CNTL 0x12c /* 0x4b offset in data sheet */
- #define HHI_GCLK_MPEG0 0x140 /* 0x50 offset in data sheet */
-diff --git a/drivers/clk/qcom/gcc-msm8916.c b/drivers/clk/qcom/gcc-msm8916.c
-index 4e329a7baf2b..17e4a5a2a9fd 100644
---- a/drivers/clk/qcom/gcc-msm8916.c
-+++ b/drivers/clk/qcom/gcc-msm8916.c
-@@ -260,7 +260,7 @@ static struct clk_pll gpll0 = {
- .l_reg = 0x21004,
- .m_reg = 0x21008,
- .n_reg = 0x2100c,
-- .config_reg = 0x21014,
-+ .config_reg = 0x21010,
- .mode_reg = 0x21000,
- .status_reg = 0x2101c,
- .status_bit = 17,
-@@ -287,7 +287,7 @@ static struct clk_pll gpll1 = {
- .l_reg = 0x20004,
- .m_reg = 0x20008,
- .n_reg = 0x2000c,
-- .config_reg = 0x20014,
-+ .config_reg = 0x20010,
- .mode_reg = 0x20000,
- .status_reg = 0x2001c,
- .status_bit = 17,
-@@ -314,7 +314,7 @@ static struct clk_pll gpll2 = {
- .l_reg = 0x4a004,
- .m_reg = 0x4a008,
- .n_reg = 0x4a00c,
-- .config_reg = 0x4a014,
-+ .config_reg = 0x4a010,
- .mode_reg = 0x4a000,
- .status_reg = 0x4a01c,
- .status_bit = 17,
-@@ -341,7 +341,7 @@ static struct clk_pll bimc_pll = {
- .l_reg = 0x23004,
- .m_reg = 0x23008,
- .n_reg = 0x2300c,
-- .config_reg = 0x23014,
-+ .config_reg = 0x23010,
- .mode_reg = 0x23000,
- .status_reg = 0x2301c,
- .status_bit = 17,
-diff --git a/drivers/clk/renesas/renesas-cpg-mssr.c b/drivers/clk/renesas/renesas-cpg-mssr.c
-index a2663fbbd7a5..d6a53c99b114 100644
---- a/drivers/clk/renesas/renesas-cpg-mssr.c
-+++ b/drivers/clk/renesas/renesas-cpg-mssr.c
-@@ -812,7 +812,8 @@ static int cpg_mssr_suspend_noirq(struct device *dev)
- /* Save module registers with bits under our control */
- for (reg = 0; reg < ARRAY_SIZE(priv->smstpcr_saved); reg++) {
- if (priv->smstpcr_saved[reg].mask)
-- priv->smstpcr_saved[reg].val =
-+ priv->smstpcr_saved[reg].val = priv->stbyctrl ?
-+ readb(priv->base + STBCR(reg)) :
- readl(priv->base + SMSTPCR(reg));
- }
-
-@@ -872,8 +873,9 @@ static int cpg_mssr_resume_noirq(struct device *dev)
- }
-
- if (!i)
-- dev_warn(dev, "Failed to enable SMSTP %p[0x%x]\n",
-- priv->base + SMSTPCR(reg), oldval & mask);
-+ dev_warn(dev, "Failed to enable %s%u[0x%x]\n",
-+ priv->stbyctrl ? "STB" : "SMSTP", reg,
-+ oldval & mask);
- }
-
- return 0;
-diff --git a/drivers/clk/samsung/clk-exynos5420.c b/drivers/clk/samsung/clk-exynos5420.c
-index c9e5a1fb6653..edb2363c735a 100644
---- a/drivers/clk/samsung/clk-exynos5420.c
-+++ b/drivers/clk/samsung/clk-exynos5420.c
-@@ -540,7 +540,7 @@ static const struct samsung_div_clock exynos5800_div_clks[] __initconst = {
-
- static const struct samsung_gate_clock exynos5800_gate_clks[] __initconst = {
- GATE(CLK_ACLK550_CAM, "aclk550_cam", "mout_user_aclk550_cam",
-- GATE_BUS_TOP, 24, 0, 0),
-+ GATE_BUS_TOP, 24, CLK_IS_CRITICAL, 0),
- GATE(CLK_ACLK432_SCALER, "aclk432_scaler", "mout_user_aclk432_scaler",
- GATE_BUS_TOP, 27, CLK_IS_CRITICAL, 0),
- };
-@@ -943,25 +943,25 @@ static const struct samsung_gate_clock exynos5x_gate_clks[] __initconst = {
- GATE(0, "aclk300_jpeg", "mout_user_aclk300_jpeg",
- GATE_BUS_TOP, 4, CLK_IGNORE_UNUSED, 0),
- GATE(0, "aclk333_432_isp0", "mout_user_aclk333_432_isp0",
-- GATE_BUS_TOP, 5, 0, 0),
-+ GATE_BUS_TOP, 5, CLK_IS_CRITICAL, 0),
- GATE(0, "aclk300_gscl", "mout_user_aclk300_gscl",
- GATE_BUS_TOP, 6, CLK_IS_CRITICAL, 0),
- GATE(0, "aclk333_432_gscl", "mout_user_aclk333_432_gscl",
- GATE_BUS_TOP, 7, CLK_IGNORE_UNUSED, 0),
- GATE(0, "aclk333_432_isp", "mout_user_aclk333_432_isp",
-- GATE_BUS_TOP, 8, 0, 0),
-+ GATE_BUS_TOP, 8, CLK_IS_CRITICAL, 0),
- GATE(CLK_PCLK66_GPIO, "pclk66_gpio", "mout_user_pclk66_gpio",
- GATE_BUS_TOP, 9, CLK_IGNORE_UNUSED, 0),
- GATE(0, "aclk66_psgen", "mout_user_aclk66_psgen",
- GATE_BUS_TOP, 10, CLK_IGNORE_UNUSED, 0),
- GATE(0, "aclk266_isp", "mout_user_aclk266_isp",
-- GATE_BUS_TOP, 13, 0, 0),
-+ GATE_BUS_TOP, 13, CLK_IS_CRITICAL, 0),
- GATE(0, "aclk166", "mout_user_aclk166",
- GATE_BUS_TOP, 14, CLK_IGNORE_UNUSED, 0),
- GATE(CLK_ACLK333, "aclk333", "mout_user_aclk333",
- GATE_BUS_TOP, 15, CLK_IS_CRITICAL, 0),
- GATE(0, "aclk400_isp", "mout_user_aclk400_isp",
-- GATE_BUS_TOP, 16, 0, 0),
-+ GATE_BUS_TOP, 16, CLK_IS_CRITICAL, 0),
- GATE(0, "aclk400_mscl", "mout_user_aclk400_mscl",
- GATE_BUS_TOP, 17, CLK_IS_CRITICAL, 0),
- GATE(0, "aclk200_disp1", "mout_user_aclk200_disp1",
-@@ -1161,8 +1161,10 @@ static const struct samsung_gate_clock exynos5x_gate_clks[] __initconst = {
- GATE_IP_GSCL1, 3, 0, 0),
- GATE(CLK_SMMU_FIMCL1, "smmu_fimcl1", "dout_gscl_blk_333",
- GATE_IP_GSCL1, 4, 0, 0),
-- GATE(CLK_GSCL_WA, "gscl_wa", "sclk_gscl_wa", GATE_IP_GSCL1, 12, 0, 0),
-- GATE(CLK_GSCL_WB, "gscl_wb", "sclk_gscl_wb", GATE_IP_GSCL1, 13, 0, 0),
-+ GATE(CLK_GSCL_WA, "gscl_wa", "sclk_gscl_wa", GATE_IP_GSCL1, 12,
-+ CLK_IS_CRITICAL, 0),
-+ GATE(CLK_GSCL_WB, "gscl_wb", "sclk_gscl_wb", GATE_IP_GSCL1, 13,
-+ CLK_IS_CRITICAL, 0),
- GATE(CLK_SMMU_FIMCL3, "smmu_fimcl3,", "dout_gscl_blk_333",
- GATE_IP_GSCL1, 16, 0, 0),
- GATE(CLK_FIMC_LITE3, "fimc_lite3", "aclk333_432_gscl",
-diff --git a/drivers/clk/samsung/clk-exynos5433.c b/drivers/clk/samsung/clk-exynos5433.c
-index 4b1aa9382ad2..6f29ecd0442e 100644
---- a/drivers/clk/samsung/clk-exynos5433.c
-+++ b/drivers/clk/samsung/clk-exynos5433.c
-@@ -1706,7 +1706,8 @@ static const struct samsung_gate_clock peric_gate_clks[] __initconst = {
- GATE(CLK_SCLK_PCM1, "sclk_pcm1", "sclk_pcm1_peric",
- ENABLE_SCLK_PERIC, 7, CLK_SET_RATE_PARENT, 0),
- GATE(CLK_SCLK_I2S1, "sclk_i2s1", "sclk_i2s1_peric",
-- ENABLE_SCLK_PERIC, 6, CLK_SET_RATE_PARENT, 0),
-+ ENABLE_SCLK_PERIC, 6,
-+ CLK_SET_RATE_PARENT | CLK_IGNORE_UNUSED, 0),
- GATE(CLK_SCLK_SPI2, "sclk_spi2", "sclk_spi2_peric", ENABLE_SCLK_PERIC,
- 5, CLK_SET_RATE_PARENT, 0),
- GATE(CLK_SCLK_SPI1, "sclk_spi1", "sclk_spi1_peric", ENABLE_SCLK_PERIC,
-diff --git a/drivers/clk/sprd/pll.c b/drivers/clk/sprd/pll.c
-index 15791484388f..13a322b2535a 100644
---- a/drivers/clk/sprd/pll.c
-+++ b/drivers/clk/sprd/pll.c
-@@ -106,7 +106,7 @@ static unsigned long _sprd_pll_recalc_rate(const struct sprd_pll *pll,
-
- cfg = kcalloc(regs_num, sizeof(*cfg), GFP_KERNEL);
- if (!cfg)
-- return -ENOMEM;
-+ return parent_rate;
-
- for (i = 0; i < regs_num; i++)
- cfg[i] = sprd_pll_read(pll, i);
-diff --git a/drivers/clk/st/clk-flexgen.c b/drivers/clk/st/clk-flexgen.c
-index 4413b6e04a8e..55873d4b7603 100644
---- a/drivers/clk/st/clk-flexgen.c
-+++ b/drivers/clk/st/clk-flexgen.c
-@@ -375,6 +375,7 @@ static void __init st_of_flexgen_setup(struct device_node *np)
- break;
- }
-
-+ flex_flags &= ~CLK_IS_CRITICAL;
- of_clk_detect_critical(np, i, &flex_flags);
-
- /*
-diff --git a/drivers/clk/sunxi/clk-sunxi.c b/drivers/clk/sunxi/clk-sunxi.c
-index 27201fd26e44..e1aa1fbac48a 100644
---- a/drivers/clk/sunxi/clk-sunxi.c
-+++ b/drivers/clk/sunxi/clk-sunxi.c
-@@ -90,7 +90,7 @@ static void sun6i_a31_get_pll1_factors(struct factors_request *req)
- * Round down the frequency to the closest multiple of either
- * 6 or 16
- */
-- u32 round_freq_6 = round_down(freq_mhz, 6);
-+ u32 round_freq_6 = rounddown(freq_mhz, 6);
- u32 round_freq_16 = round_down(freq_mhz, 16);
-
- if (round_freq_6 > round_freq_16)
-diff --git a/drivers/clk/ti/composite.c b/drivers/clk/ti/composite.c
-index 6a89936ba03a..eaa43575cfa5 100644
---- a/drivers/clk/ti/composite.c
-+++ b/drivers/clk/ti/composite.c
-@@ -196,6 +196,7 @@ cleanup:
- if (!cclk->comp_clks[i])
- continue;
- list_del(&cclk->comp_clks[i]->link);
-+ kfree(cclk->comp_clks[i]->parent_names);
- kfree(cclk->comp_clks[i]);
- }
-
-diff --git a/drivers/clk/zynqmp/clkc.c b/drivers/clk/zynqmp/clkc.c
-index 10e89f23880b..b66c3a62233a 100644
---- a/drivers/clk/zynqmp/clkc.c
-+++ b/drivers/clk/zynqmp/clkc.c
-@@ -558,7 +558,7 @@ static struct clk_hw *zynqmp_register_clk_topology(int clk_id, char *clk_name,
- {
- int j;
- u32 num_nodes, clk_dev_id;
-- char *clk_out = NULL;
-+ char *clk_out[MAX_NODES];
- struct clock_topology *nodes;
- struct clk_hw *hw = NULL;
-
-@@ -572,16 +572,16 @@ static struct clk_hw *zynqmp_register_clk_topology(int clk_id, char *clk_name,
- * Intermediate clock names are postfixed with type of clock.
- */
- if (j != (num_nodes - 1)) {
-- clk_out = kasprintf(GFP_KERNEL, "%s%s", clk_name,
-+ clk_out[j] = kasprintf(GFP_KERNEL, "%s%s", clk_name,
- clk_type_postfix[nodes[j].type]);
- } else {
-- clk_out = kasprintf(GFP_KERNEL, "%s", clk_name);
-+ clk_out[j] = kasprintf(GFP_KERNEL, "%s", clk_name);
- }
-
- if (!clk_topology[nodes[j].type])
- continue;
-
-- hw = (*clk_topology[nodes[j].type])(clk_out, clk_dev_id,
-+ hw = (*clk_topology[nodes[j].type])(clk_out[j], clk_dev_id,
- parent_names,
- num_parents,
- &nodes[j]);
-@@ -590,9 +590,12 @@ static struct clk_hw *zynqmp_register_clk_topology(int clk_id, char *clk_name,
- __func__, clk_dev_id, clk_name,
- PTR_ERR(hw));
-
-- parent_names[0] = clk_out;
-+ parent_names[0] = clk_out[j];
- }
-- kfree(clk_out);
-+
-+ for (j = 0; j < num_nodes; j++)
-+ kfree(clk_out[j]);
-+
- return hw;
- }
-
-diff --git a/drivers/clk/zynqmp/divider.c b/drivers/clk/zynqmp/divider.c
-index 4be2cc76aa2e..9bc4f9409aea 100644
---- a/drivers/clk/zynqmp/divider.c
-+++ b/drivers/clk/zynqmp/divider.c
-@@ -111,23 +111,30 @@ static unsigned long zynqmp_clk_divider_recalc_rate(struct clk_hw *hw,
-
- static void zynqmp_get_divider2_val(struct clk_hw *hw,
- unsigned long rate,
-- unsigned long parent_rate,
- struct zynqmp_clk_divider *divider,
- int *bestdiv)
- {
- int div1;
- int div2;
- long error = LONG_MAX;
-- struct clk_hw *parent_hw = clk_hw_get_parent(hw);
-- struct zynqmp_clk_divider *pdivider = to_zynqmp_clk_divider(parent_hw);
-+ unsigned long div1_prate;
-+ struct clk_hw *div1_parent_hw;
-+ struct clk_hw *div2_parent_hw = clk_hw_get_parent(hw);
-+ struct zynqmp_clk_divider *pdivider =
-+ to_zynqmp_clk_divider(div2_parent_hw);
-
- if (!pdivider)
- return;
-
-+ div1_parent_hw = clk_hw_get_parent(div2_parent_hw);
-+ if (!div1_parent_hw)
-+ return;
-+
-+ div1_prate = clk_hw_get_rate(div1_parent_hw);
- *bestdiv = 1;
- for (div1 = 1; div1 <= pdivider->max_div;) {
- for (div2 = 1; div2 <= divider->max_div;) {
-- long new_error = ((parent_rate / div1) / div2) - rate;
-+ long new_error = ((div1_prate / div1) / div2) - rate;
-
- if (abs(new_error) < abs(error)) {
- *bestdiv = div2;
-@@ -192,7 +199,7 @@ static long zynqmp_clk_divider_round_rate(struct clk_hw *hw,
- */
- if (div_type == TYPE_DIV2 &&
- (clk_hw_get_flags(hw) & CLK_SET_RATE_PARENT)) {
-- zynqmp_get_divider2_val(hw, rate, *prate, divider, &bestdiv);
-+ zynqmp_get_divider2_val(hw, rate, divider, &bestdiv);
- }
-
- if ((clk_hw_get_flags(hw) & CLK_SET_RATE_PARENT) && divider->is_frac)
-diff --git a/drivers/crypto/hisilicon/sgl.c b/drivers/crypto/hisilicon/sgl.c
-index 0e8c7e324fb4..725a739800b0 100644
---- a/drivers/crypto/hisilicon/sgl.c
-+++ b/drivers/crypto/hisilicon/sgl.c
-@@ -66,7 +66,8 @@ struct hisi_acc_sgl_pool *hisi_acc_create_sgl_pool(struct device *dev,
-
- sgl_size = sizeof(struct acc_hw_sge) * sge_nr +
- sizeof(struct hisi_acc_hw_sgl);
-- block_size = PAGE_SIZE * (1 << (MAX_ORDER - 1));
-+ block_size = 1 << (PAGE_SHIFT + MAX_ORDER <= 32 ?
-+ PAGE_SHIFT + MAX_ORDER - 1 : 31);
- sgl_num_per_block = block_size / sgl_size;
- block_num = count / sgl_num_per_block;
- remain_sgl = count % sgl_num_per_block;
-diff --git a/drivers/crypto/marvell/octeontx/otx_cptvf_algs.c b/drivers/crypto/marvell/octeontx/otx_cptvf_algs.c
-index 06202bcffb33..a370c99ecf4c 100644
---- a/drivers/crypto/marvell/octeontx/otx_cptvf_algs.c
-+++ b/drivers/crypto/marvell/octeontx/otx_cptvf_algs.c
-@@ -118,6 +118,9 @@ static void otx_cpt_aead_callback(int status, void *arg1, void *arg2)
- struct otx_cpt_req_info *cpt_req;
- struct pci_dev *pdev;
-
-+ if (!cpt_info)
-+ goto complete;
-+
- cpt_req = cpt_info->req;
- if (!status) {
- /*
-@@ -129,10 +132,10 @@ static void otx_cpt_aead_callback(int status, void *arg1, void *arg2)
- !cpt_req->is_enc)
- status = validate_hmac_cipher_null(cpt_req);
- }
-- if (cpt_info) {
-- pdev = cpt_info->pdev;
-- do_request_cleanup(pdev, cpt_info);
-- }
-+ pdev = cpt_info->pdev;
-+ do_request_cleanup(pdev, cpt_info);
-+
-+complete:
- if (areq)
- areq->complete(areq, status);
- }
-diff --git a/drivers/crypto/omap-sham.c b/drivers/crypto/omap-sham.c
-index e4072cd38585..a82a3596dca3 100644
---- a/drivers/crypto/omap-sham.c
-+++ b/drivers/crypto/omap-sham.c
-@@ -169,8 +169,6 @@ struct omap_sham_hmac_ctx {
- };
-
- struct omap_sham_ctx {
-- struct omap_sham_dev *dd;
--
- unsigned long flags;
-
- /* fallback stuff */
-@@ -751,8 +749,15 @@ static int omap_sham_align_sgs(struct scatterlist *sg,
- int offset = rctx->offset;
- int bufcnt = rctx->bufcnt;
-
-- if (!sg || !sg->length || !nbytes)
-+ if (!sg || !sg->length || !nbytes) {
-+ if (bufcnt) {
-+ sg_init_table(rctx->sgl, 1);
-+ sg_set_buf(rctx->sgl, rctx->dd->xmit_buf, bufcnt);
-+ rctx->sg = rctx->sgl;
-+ }
-+
- return 0;
-+ }
-
- new_len = nbytes;
-
-@@ -896,7 +901,7 @@ static int omap_sham_prepare_request(struct ahash_request *req, bool update)
- if (hash_later < 0)
- hash_later = 0;
-
-- if (hash_later) {
-+ if (hash_later && hash_later <= rctx->buflen) {
- scatterwalk_map_and_copy(rctx->buffer,
- req->src,
- req->nbytes - hash_later,
-@@ -926,27 +931,35 @@ static int omap_sham_update_dma_stop(struct omap_sham_dev *dd)
- return 0;
- }
-
-+struct omap_sham_dev *omap_sham_find_dev(struct omap_sham_reqctx *ctx)
-+{
-+ struct omap_sham_dev *dd;
-+
-+ if (ctx->dd)
-+ return ctx->dd;
-+
-+ spin_lock_bh(&sham.lock);
-+ dd = list_first_entry(&sham.dev_list, struct omap_sham_dev, list);
-+ list_move_tail(&dd->list, &sham.dev_list);
-+ ctx->dd = dd;
-+ spin_unlock_bh(&sham.lock);
-+
-+ return dd;
-+}
-+
- static int omap_sham_init(struct ahash_request *req)
- {
- struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
- struct omap_sham_ctx *tctx = crypto_ahash_ctx(tfm);
- struct omap_sham_reqctx *ctx = ahash_request_ctx(req);
-- struct omap_sham_dev *dd = NULL, *tmp;
-+ struct omap_sham_dev *dd;
- int bs = 0;
-
-- spin_lock_bh(&sham.lock);
-- if (!tctx->dd) {
-- list_for_each_entry(tmp, &sham.dev_list, list) {
-- dd = tmp;
-- break;
-- }
-- tctx->dd = dd;
-- } else {
-- dd = tctx->dd;
-- }
-- spin_unlock_bh(&sham.lock);
-+ ctx->dd = NULL;
-
-- ctx->dd = dd;
-+ dd = omap_sham_find_dev(ctx);
-+ if (!dd)
-+ return -ENODEV;
-
- ctx->flags = 0;
-
-@@ -1216,8 +1229,7 @@ err1:
- static int omap_sham_enqueue(struct ahash_request *req, unsigned int op)
- {
- struct omap_sham_reqctx *ctx = ahash_request_ctx(req);
-- struct omap_sham_ctx *tctx = crypto_tfm_ctx(req->base.tfm);
-- struct omap_sham_dev *dd = tctx->dd;
-+ struct omap_sham_dev *dd = ctx->dd;
-
- ctx->op = op;
-
-@@ -1227,7 +1239,7 @@ static int omap_sham_enqueue(struct ahash_request *req, unsigned int op)
- static int omap_sham_update(struct ahash_request *req)
- {
- struct omap_sham_reqctx *ctx = ahash_request_ctx(req);
-- struct omap_sham_dev *dd = ctx->dd;
-+ struct omap_sham_dev *dd = omap_sham_find_dev(ctx);
-
- if (!req->nbytes)
- return 0;
-@@ -1331,21 +1343,8 @@ static int omap_sham_setkey(struct crypto_ahash *tfm, const u8 *key,
- struct omap_sham_hmac_ctx *bctx = tctx->base;
- int bs = crypto_shash_blocksize(bctx->shash);
- int ds = crypto_shash_digestsize(bctx->shash);
-- struct omap_sham_dev *dd = NULL, *tmp;
- int err, i;
-
-- spin_lock_bh(&sham.lock);
-- if (!tctx->dd) {
-- list_for_each_entry(tmp, &sham.dev_list, list) {
-- dd = tmp;
-- break;
-- }
-- tctx->dd = dd;
-- } else {
-- dd = tctx->dd;
-- }
-- spin_unlock_bh(&sham.lock);
--
- err = crypto_shash_setkey(tctx->fallback, key, keylen);
- if (err)
- return err;
-@@ -1363,7 +1362,7 @@ static int omap_sham_setkey(struct crypto_ahash *tfm, const u8 *key,
-
- memset(bctx->ipad + keylen, 0, bs - keylen);
-
-- if (!test_bit(FLAGS_AUTO_XOR, &dd->flags)) {
-+ if (!test_bit(FLAGS_AUTO_XOR, &sham.flags)) {
- memcpy(bctx->opad, bctx->ipad, bs);
-
- for (i = 0; i < bs; i++) {
-@@ -2167,6 +2166,7 @@ static int omap_sham_probe(struct platform_device *pdev)
- }
-
- dd->flags |= dd->pdata->flags;
-+ sham.flags |= dd->pdata->flags;
-
- pm_runtime_use_autosuspend(dev);
- pm_runtime_set_autosuspend_delay(dev, DEFAULT_AUTOSUSPEND_DELAY);
-@@ -2194,6 +2194,9 @@ static int omap_sham_probe(struct platform_device *pdev)
- spin_unlock(&sham.lock);
-
- for (i = 0; i < dd->pdata->algs_info_size; i++) {
-+ if (dd->pdata->algs_info[i].registered)
-+ break;
-+
- for (j = 0; j < dd->pdata->algs_info[i].size; j++) {
- struct ahash_alg *alg;
-
-@@ -2245,9 +2248,11 @@ static int omap_sham_remove(struct platform_device *pdev)
- list_del(&dd->list);
- spin_unlock(&sham.lock);
- for (i = dd->pdata->algs_info_size - 1; i >= 0; i--)
-- for (j = dd->pdata->algs_info[i].registered - 1; j >= 0; j--)
-+ for (j = dd->pdata->algs_info[i].registered - 1; j >= 0; j--) {
- crypto_unregister_ahash(
- &dd->pdata->algs_info[i].algs_list[j]);
-+ dd->pdata->algs_info[i].registered--;
-+ }
- tasklet_kill(&dd->done_task);
- pm_runtime_disable(&pdev->dev);
-
-diff --git a/drivers/extcon/extcon-adc-jack.c b/drivers/extcon/extcon-adc-jack.c
-index ad02dc6747a4..0317b614b680 100644
---- a/drivers/extcon/extcon-adc-jack.c
-+++ b/drivers/extcon/extcon-adc-jack.c
-@@ -124,7 +124,7 @@ static int adc_jack_probe(struct platform_device *pdev)
- for (i = 0; data->adc_conditions[i].id != EXTCON_NONE; i++);
- data->num_conditions = i;
-
-- data->chan = iio_channel_get(&pdev->dev, pdata->consumer_channel);
-+ data->chan = devm_iio_channel_get(&pdev->dev, pdata->consumer_channel);
- if (IS_ERR(data->chan))
- return PTR_ERR(data->chan);
-
-@@ -164,7 +164,6 @@ static int adc_jack_remove(struct platform_device *pdev)
-
- free_irq(data->irq, data);
- cancel_work_sync(&data->handler.work);
-- iio_channel_release(data->chan);
-
- return 0;
- }
-diff --git a/drivers/firmware/imx/imx-scu.c b/drivers/firmware/imx/imx-scu.c
-index b3da2e193ad2..176ddd151375 100644
---- a/drivers/firmware/imx/imx-scu.c
-+++ b/drivers/firmware/imx/imx-scu.c
-@@ -314,6 +314,7 @@ static int imx_scu_probe(struct platform_device *pdev)
- if (ret != -EPROBE_DEFER)
- dev_err(dev, "Failed to request mbox chan %s ret %d\n",
- chan_name, ret);
-+ kfree(chan_name);
- return ret;
- }
-
-diff --git a/drivers/firmware/qcom_scm.c b/drivers/firmware/qcom_scm.c
-index 059bb0fbae9e..4701487573f7 100644
---- a/drivers/firmware/qcom_scm.c
-+++ b/drivers/firmware/qcom_scm.c
-@@ -6,7 +6,6 @@
- #include <linux/init.h>
- #include <linux/cpumask.h>
- #include <linux/export.h>
--#include <linux/dma-direct.h>
- #include <linux/dma-mapping.h>
- #include <linux/module.h>
- #include <linux/types.h>
-@@ -806,8 +805,7 @@ int qcom_scm_assign_mem(phys_addr_t mem_addr, size_t mem_sz,
- struct qcom_scm_mem_map_info *mem_to_map;
- phys_addr_t mem_to_map_phys;
- phys_addr_t dest_phys;
-- phys_addr_t ptr_phys;
-- dma_addr_t ptr_dma;
-+ dma_addr_t ptr_phys;
- size_t mem_to_map_sz;
- size_t dest_sz;
- size_t src_sz;
-@@ -824,10 +822,9 @@ int qcom_scm_assign_mem(phys_addr_t mem_addr, size_t mem_sz,
- ptr_sz = ALIGN(src_sz, SZ_64) + ALIGN(mem_to_map_sz, SZ_64) +
- ALIGN(dest_sz, SZ_64);
-
-- ptr = dma_alloc_coherent(__scm->dev, ptr_sz, &ptr_dma, GFP_KERNEL);
-+ ptr = dma_alloc_coherent(__scm->dev, ptr_sz, &ptr_phys, GFP_KERNEL);
- if (!ptr)
- return -ENOMEM;
-- ptr_phys = dma_to_phys(__scm->dev, ptr_dma);
-
- /* Fill source vmid detail */
- src = ptr;
-@@ -855,7 +852,7 @@ int qcom_scm_assign_mem(phys_addr_t mem_addr, size_t mem_sz,
-
- ret = __qcom_scm_assign_mem(__scm->dev, mem_to_map_phys, mem_to_map_sz,
- ptr_phys, src_sz, dest_phys, dest_sz);
-- dma_free_coherent(__scm->dev, ptr_sz, ptr, ptr_dma);
-+ dma_free_coherent(__scm->dev, ptr_sz, ptr, ptr_phys);
- if (ret) {
- dev_err(__scm->dev,
- "Assign memory protection call failed %d\n", ret);
-diff --git a/drivers/fpga/dfl-afu-dma-region.c b/drivers/fpga/dfl-afu-dma-region.c
-index 62f924489db5..5942343a5d6e 100644
---- a/drivers/fpga/dfl-afu-dma-region.c
-+++ b/drivers/fpga/dfl-afu-dma-region.c
-@@ -61,10 +61,10 @@ static int afu_dma_pin_pages(struct dfl_feature_platform_data *pdata,
- region->pages);
- if (pinned < 0) {
- ret = pinned;
-- goto put_pages;
-+ goto free_pages;
- } else if (pinned != npages) {
- ret = -EFAULT;
-- goto free_pages;
-+ goto put_pages;
- }
-
- dev_dbg(dev, "%d pages pinned\n", pinned);
-diff --git a/drivers/gpio/gpio-dwapb.c b/drivers/gpio/gpio-dwapb.c
-index 92e127e74813..ed6061b5cca1 100644
---- a/drivers/gpio/gpio-dwapb.c
-+++ b/drivers/gpio/gpio-dwapb.c
-@@ -49,7 +49,9 @@
- #define GPIO_EXT_PORTC 0x58
- #define GPIO_EXT_PORTD 0x5c
-
-+#define DWAPB_DRIVER_NAME "gpio-dwapb"
- #define DWAPB_MAX_PORTS 4
-+
- #define GPIO_EXT_PORT_STRIDE 0x04 /* register stride 32 bits */
- #define GPIO_SWPORT_DR_STRIDE 0x0c /* register stride 3*32 bits */
- #define GPIO_SWPORT_DDR_STRIDE 0x0c /* register stride 3*32 bits */
-@@ -398,7 +400,7 @@ static void dwapb_configure_irqs(struct dwapb_gpio *gpio,
- return;
-
- err = irq_alloc_domain_generic_chips(gpio->domain, ngpio, 2,
-- "gpio-dwapb", handle_level_irq,
-+ DWAPB_DRIVER_NAME, handle_level_irq,
- IRQ_NOREQUEST, 0,
- IRQ_GC_INIT_NESTED_LOCK);
- if (err) {
-@@ -455,7 +457,7 @@ static void dwapb_configure_irqs(struct dwapb_gpio *gpio,
- */
- err = devm_request_irq(gpio->dev, pp->irq[0],
- dwapb_irq_handler_mfd,
-- IRQF_SHARED, "gpio-dwapb-mfd", gpio);
-+ IRQF_SHARED, DWAPB_DRIVER_NAME, gpio);
- if (err) {
- dev_err(gpio->dev, "error requesting IRQ\n");
- irq_domain_remove(gpio->domain);
-@@ -533,26 +535,33 @@ static int dwapb_gpio_add_port(struct dwapb_gpio *gpio,
- dwapb_configure_irqs(gpio, port, pp);
-
- err = gpiochip_add_data(&port->gc, port);
-- if (err)
-+ if (err) {
- dev_err(gpio->dev, "failed to register gpiochip for port%d\n",
- port->idx);
-- else
-- port->is_registered = true;
-+ return err;
-+ }
-
- /* Add GPIO-signaled ACPI event support */
-- if (pp->has_irq)
-- acpi_gpiochip_request_interrupts(&port->gc);
-+ acpi_gpiochip_request_interrupts(&port->gc);
-
-- return err;
-+ port->is_registered = true;
-+
-+ return 0;
- }
-
- static void dwapb_gpio_unregister(struct dwapb_gpio *gpio)
- {
- unsigned int m;
-
-- for (m = 0; m < gpio->nr_ports; ++m)
-- if (gpio->ports[m].is_registered)
-- gpiochip_remove(&gpio->ports[m].gc);
-+ for (m = 0; m < gpio->nr_ports; ++m) {
-+ struct dwapb_gpio_port *port = &gpio->ports[m];
-+
-+ if (!port->is_registered)
-+ continue;
-+
-+ acpi_gpiochip_free_interrupts(&port->gc);
-+ gpiochip_remove(&port->gc);
-+ }
- }
-
- static struct dwapb_platform_data *
-@@ -836,7 +845,7 @@ static SIMPLE_DEV_PM_OPS(dwapb_gpio_pm_ops, dwapb_gpio_suspend,
-
- static struct platform_driver dwapb_gpio_driver = {
- .driver = {
-- .name = "gpio-dwapb",
-+ .name = DWAPB_DRIVER_NAME,
- .pm = &dwapb_gpio_pm_ops,
- .of_match_table = of_match_ptr(dwapb_of_match),
- .acpi_match_table = ACPI_PTR(dwapb_acpi_match),
-@@ -850,3 +859,4 @@ module_platform_driver(dwapb_gpio_driver);
- MODULE_LICENSE("GPL");
- MODULE_AUTHOR("Jamie Iles");
- MODULE_DESCRIPTION("Synopsys DesignWare APB GPIO driver");
-+MODULE_ALIAS("platform:" DWAPB_DRIVER_NAME);
-diff --git a/drivers/gpio/gpio-mlxbf2.c b/drivers/gpio/gpio-mlxbf2.c
-index da570e63589d..cc0dd8593a4b 100644
---- a/drivers/gpio/gpio-mlxbf2.c
-+++ b/drivers/gpio/gpio-mlxbf2.c
-@@ -110,8 +110,8 @@ static int mlxbf2_gpio_get_lock_res(struct platform_device *pdev)
- }
-
- yu_arm_gpio_lock_param.io = devm_ioremap(dev, res->start, size);
-- if (IS_ERR(yu_arm_gpio_lock_param.io))
-- ret = PTR_ERR(yu_arm_gpio_lock_param.io);
-+ if (!yu_arm_gpio_lock_param.io)
-+ ret = -ENOMEM;
-
- exit:
- mutex_unlock(yu_arm_gpio_lock_param.lock);
-diff --git a/drivers/gpio/gpio-pca953x.c b/drivers/gpio/gpio-pca953x.c
-index 4269ea9a817e..01011a780688 100644
---- a/drivers/gpio/gpio-pca953x.c
-+++ b/drivers/gpio/gpio-pca953x.c
-@@ -307,8 +307,22 @@ static const struct regmap_config pca953x_i2c_regmap = {
- .volatile_reg = pca953x_volatile_register,
-
- .cache_type = REGCACHE_RBTREE,
-- /* REVISIT: should be 0x7f but some 24 bit chips use REG_ADDR_AI */
-- .max_register = 0xff,
-+ .max_register = 0x7f,
-+};
-+
-+static const struct regmap_config pca953x_ai_i2c_regmap = {
-+ .reg_bits = 8,
-+ .val_bits = 8,
-+
-+ .read_flag_mask = REG_ADDR_AI,
-+ .write_flag_mask = REG_ADDR_AI,
-+
-+ .readable_reg = pca953x_readable_register,
-+ .writeable_reg = pca953x_writeable_register,
-+ .volatile_reg = pca953x_volatile_register,
-+
-+ .cache_type = REGCACHE_RBTREE,
-+ .max_register = 0x7f,
- };
-
- static u8 pca953x_recalc_addr(struct pca953x_chip *chip, int reg, int off,
-@@ -319,18 +333,6 @@ static u8 pca953x_recalc_addr(struct pca953x_chip *chip, int reg, int off,
- int pinctrl = (reg & PCAL_PINCTRL_MASK) << 1;
- u8 regaddr = pinctrl | addr | (off / BANK_SZ);
-
-- /* Single byte read doesn't need AI bit set. */
-- if (!addrinc)
-- return regaddr;
--
-- /* Chips with 24 and more GPIOs always support Auto Increment */
-- if (write && NBANK(chip) > 2)
-- regaddr |= REG_ADDR_AI;
--
-- /* PCA9575 needs address-increment on multi-byte writes */
-- if (PCA_CHIP_TYPE(chip->driver_data) == PCA957X_TYPE)
-- regaddr |= REG_ADDR_AI;
--
- return regaddr;
- }
-
-@@ -863,6 +865,7 @@ static int pca953x_probe(struct i2c_client *client,
- int ret;
- u32 invert = 0;
- struct regulator *reg;
-+ const struct regmap_config *regmap_config;
-
- chip = devm_kzalloc(&client->dev, sizeof(*chip), GFP_KERNEL);
- if (chip == NULL)
-@@ -925,7 +928,17 @@ static int pca953x_probe(struct i2c_client *client,
-
- i2c_set_clientdata(client, chip);
-
-- chip->regmap = devm_regmap_init_i2c(client, &pca953x_i2c_regmap);
-+ pca953x_setup_gpio(chip, chip->driver_data & PCA_GPIO_MASK);
-+
-+ if (NBANK(chip) > 2 || PCA_CHIP_TYPE(chip->driver_data) == PCA957X_TYPE) {
-+ dev_info(&client->dev, "using AI\n");
-+ regmap_config = &pca953x_ai_i2c_regmap;
-+ } else {
-+ dev_info(&client->dev, "using no AI\n");
-+ regmap_config = &pca953x_i2c_regmap;
-+ }
-+
-+ chip->regmap = devm_regmap_init_i2c(client, regmap_config);
- if (IS_ERR(chip->regmap)) {
- ret = PTR_ERR(chip->regmap);
- goto err_exit;
-@@ -956,7 +969,6 @@ static int pca953x_probe(struct i2c_client *client,
- /* initialize cached registers from their original values.
- * we can't share this chip with another i2c master.
- */
-- pca953x_setup_gpio(chip, chip->driver_data & PCA_GPIO_MASK);
-
- if (PCA_CHIP_TYPE(chip->driver_data) == PCA953X_TYPE) {
- chip->regs = &pca953x_regs;
-diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_priv.h b/drivers/gpu/drm/amd/amdkfd/kfd_priv.h
-index c24cad3c64ed..f7cfb8180b71 100644
---- a/drivers/gpu/drm/amd/amdkfd/kfd_priv.h
-+++ b/drivers/gpu/drm/amd/amdkfd/kfd_priv.h
-@@ -40,6 +40,7 @@
- #include <drm/drm_file.h>
- #include <drm/drm_drv.h>
- #include <drm/drm_device.h>
-+#include <drm/drm_ioctl.h>
- #include <kgd_kfd_interface.h>
- #include <linux/swap.h>
-
-@@ -1053,7 +1054,7 @@ static inline int kfd_devcgroup_check_permission(struct kfd_dev *kfd)
- #if defined(CONFIG_CGROUP_DEVICE) || defined(CONFIG_CGROUP_BPF)
- struct drm_device *ddev = kfd->ddev;
-
-- return devcgroup_check_permission(DEVCG_DEV_CHAR, ddev->driver->major,
-+ return devcgroup_check_permission(DEVCG_DEV_CHAR, DRM_MAJOR,
- ddev->render->index,
- DEVCG_ACC_WRITE | DEVCG_ACC_READ);
- #else
-diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
-index 7fc15b82fe48..f9f02e08054b 100644
---- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
-+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
-@@ -1334,7 +1334,7 @@ static int dm_late_init(void *handle)
- unsigned int linear_lut[16];
- int i;
- struct dmcu *dmcu = adev->dm.dc->res_pool->dmcu;
-- bool ret = false;
-+ bool ret;
-
- for (i = 0; i < 16; i++)
- linear_lut[i] = 0xFFFF * i / 15;
-@@ -1350,13 +1350,10 @@ static int dm_late_init(void *handle)
- */
- params.min_abm_backlight = 0x28F;
-
-- /* todo will enable for navi10 */
-- if (adev->asic_type <= CHIP_RAVEN) {
-- ret = dmcu_load_iram(dmcu, params);
-+ ret = dmcu_load_iram(dmcu, params);
-
-- if (!ret)
-- return -EINVAL;
-- }
-+ if (!ret)
-+ return -EINVAL;
-
- return detect_mst_link_for_all_connectors(adev->ddev);
- }
-diff --git a/drivers/gpu/drm/amd/display/dc/core/dc.c b/drivers/gpu/drm/amd/display/dc/core/dc.c
-index 47431ca6986d..4acaf4be8a81 100644
---- a/drivers/gpu/drm/amd/display/dc/core/dc.c
-+++ b/drivers/gpu/drm/amd/display/dc/core/dc.c
-@@ -1011,9 +1011,17 @@ static void program_timing_sync(
- }
- }
-
-- /* set first pipe with plane as master */
-+ /* set first unblanked pipe as master */
- for (j = 0; j < group_size; j++) {
-- if (pipe_set[j]->plane_state) {
-+ bool is_blanked;
-+
-+ if (pipe_set[j]->stream_res.opp->funcs->dpg_is_blanked)
-+ is_blanked =
-+ pipe_set[j]->stream_res.opp->funcs->dpg_is_blanked(pipe_set[j]->stream_res.opp);
-+ else
-+ is_blanked =
-+ pipe_set[j]->stream_res.tg->funcs->is_blanked(pipe_set[j]->stream_res.tg);
-+ if (!is_blanked) {
- if (j == 0)
- break;
-
-@@ -1034,9 +1042,17 @@ static void program_timing_sync(
- status->timing_sync_info.master = false;
-
- }
-- /* remove any other pipes with plane as they have already been synced */
-+ /* remove any other unblanked pipes as they have already been synced */
- for (j = j + 1; j < group_size; j++) {
-- if (pipe_set[j]->plane_state) {
-+ bool is_blanked;
-+
-+ if (pipe_set[j]->stream_res.opp->funcs->dpg_is_blanked)
-+ is_blanked =
-+ pipe_set[j]->stream_res.opp->funcs->dpg_is_blanked(pipe_set[j]->stream_res.opp);
-+ else
-+ is_blanked =
-+ pipe_set[j]->stream_res.tg->funcs->is_blanked(pipe_set[j]->stream_res.tg);
-+ if (!is_blanked) {
- group_size--;
- pipe_set[j] = pipe_set[group_size];
- j--;
-@@ -2517,6 +2533,12 @@ void dc_commit_updates_for_stream(struct dc *dc,
-
- copy_stream_update_to_stream(dc, context, stream, stream_update);
-
-+ if (!dc->res_pool->funcs->validate_bandwidth(dc, context, false)) {
-+ DC_ERROR("Mode validation failed for stream update!\n");
-+ dc_release_state(context);
-+ return;
-+ }
-+
- commit_planes_for_stream(
- dc,
- srf_updates,
-diff --git a/drivers/gpu/drm/amd/display/modules/color/color_gamma.c b/drivers/gpu/drm/amd/display/modules/color/color_gamma.c
-index cac09d500fda..e89694eb90b4 100644
---- a/drivers/gpu/drm/amd/display/modules/color/color_gamma.c
-+++ b/drivers/gpu/drm/amd/display/modules/color/color_gamma.c
-@@ -843,7 +843,7 @@ static bool build_regamma(struct pwl_float_data_ex *rgb_regamma,
- pow_buffer_ptr = -1; // reset back to no optimize
- ret = true;
- release:
-- kfree(coeff);
-+ kvfree(coeff);
- return ret;
- }
-
-diff --git a/drivers/gpu/drm/amd/powerplay/smumgr/ci_smumgr.c b/drivers/gpu/drm/amd/powerplay/smumgr/ci_smumgr.c
-index 868e2d5f6e62..7c3e903230ca 100644
---- a/drivers/gpu/drm/amd/powerplay/smumgr/ci_smumgr.c
-+++ b/drivers/gpu/drm/amd/powerplay/smumgr/ci_smumgr.c
-@@ -239,7 +239,7 @@ static void ci_initialize_power_tune_defaults(struct pp_hwmgr *hwmgr)
-
- switch (dev_id) {
- case 0x67BA:
-- case 0x66B1:
-+ case 0x67B1:
- smu_data->power_tune_defaults = &defaults_hawaii_pro;
- break;
- case 0x67B8:
-diff --git a/drivers/gpu/drm/ast/ast_mode.c b/drivers/gpu/drm/ast/ast_mode.c
-index 7a9f20a2fd30..e7ba0b6f46d8 100644
---- a/drivers/gpu/drm/ast/ast_mode.c
-+++ b/drivers/gpu/drm/ast/ast_mode.c
-@@ -226,6 +226,7 @@ static void ast_set_vbios_color_reg(struct ast_private *ast,
- case 3:
- case 4:
- color_index = TrueCModeIndex;
-+ break;
- default:
- return;
- }
-@@ -801,6 +802,9 @@ static int ast_crtc_helper_atomic_check(struct drm_crtc *crtc,
- return -EINVAL;
- }
-
-+ if (!state->enable)
-+ return 0; /* no mode checks if CRTC is being disabled */
-+
- ast_state = to_ast_crtc_state(state);
-
- format = ast_state->format;
-diff --git a/drivers/gpu/drm/drm_connector.c b/drivers/gpu/drm/drm_connector.c
-index 644f0ad10671..ac9fd96c4c66 100644
---- a/drivers/gpu/drm/drm_connector.c
-+++ b/drivers/gpu/drm/drm_connector.c
-@@ -27,6 +27,7 @@
- #include <drm/drm_print.h>
- #include <drm/drm_drv.h>
- #include <drm/drm_file.h>
-+#include <drm/drm_sysfs.h>
-
- #include <linux/uaccess.h>
-
-@@ -523,6 +524,10 @@ int drm_connector_register(struct drm_connector *connector)
- drm_mode_object_register(connector->dev, &connector->base);
-
- connector->registration_state = DRM_CONNECTOR_REGISTERED;
-+
-+ /* Let userspace know we have a new connector */
-+ drm_sysfs_hotplug_event(connector->dev);
-+
- goto unlock;
-
- err_debugfs:
-diff --git a/drivers/gpu/drm/drm_dp_mst_topology.c b/drivers/gpu/drm/drm_dp_mst_topology.c
-index 9d89ebf3a749..abb1f358ec6d 100644
---- a/drivers/gpu/drm/drm_dp_mst_topology.c
-+++ b/drivers/gpu/drm/drm_dp_mst_topology.c
-@@ -27,6 +27,7 @@
- #include <linux/kernel.h>
- #include <linux/sched.h>
- #include <linux/seq_file.h>
-+#include <linux/iopoll.h>
-
- #if IS_ENABLED(CONFIG_DRM_DEBUG_DP_MST_TOPOLOGY_REFS)
- #include <linux/stacktrace.h>
-@@ -4448,6 +4449,17 @@ fail:
- return ret;
- }
-
-+static int do_get_act_status(struct drm_dp_aux *aux)
-+{
-+ int ret;
-+ u8 status;
-+
-+ ret = drm_dp_dpcd_readb(aux, DP_PAYLOAD_TABLE_UPDATE_STATUS, &status);
-+ if (ret < 0)
-+ return ret;
-+
-+ return status;
-+}
-
- /**
- * drm_dp_check_act_status() - Check ACT handled status.
-@@ -4457,33 +4469,29 @@ fail:
- */
- int drm_dp_check_act_status(struct drm_dp_mst_topology_mgr *mgr)
- {
-- u8 status;
-- int ret;
-- int count = 0;
--
-- do {
-- ret = drm_dp_dpcd_readb(mgr->aux, DP_PAYLOAD_TABLE_UPDATE_STATUS, &status);
--
-- if (ret < 0) {
-- DRM_DEBUG_KMS("failed to read payload table status %d\n", ret);
-- goto fail;
-- }
--
-- if (status & DP_PAYLOAD_ACT_HANDLED)
-- break;
-- count++;
-- udelay(100);
--
-- } while (count < 30);
--
-- if (!(status & DP_PAYLOAD_ACT_HANDLED)) {
-- DRM_DEBUG_KMS("failed to get ACT bit %d after %d retries\n", status, count);
-- ret = -EINVAL;
-- goto fail;
-+ /*
-+ * There doesn't seem to be any recommended retry count or timeout in
-+ * the MST specification. Since some hubs have been observed to take
-+ * over 1 second to update their payload allocations under certain
-+ * conditions, we use a rather large timeout value.
-+ */
-+ const int timeout_ms = 3000;
-+ int ret, status;
-+
-+ ret = readx_poll_timeout(do_get_act_status, mgr->aux, status,
-+ status & DP_PAYLOAD_ACT_HANDLED || status < 0,
-+ 200, timeout_ms * USEC_PER_MSEC);
-+ if (ret < 0 && status >= 0) {
-+ DRM_DEBUG_KMS("Failed to get ACT after %dms, last status: %02x\n",
-+ timeout_ms, status);
-+ return -EINVAL;
-+ } else if (status < 0) {
-+ DRM_DEBUG_KMS("Failed to read payload table status: %d\n",
-+ status);
-+ return status;
- }
-+
- return 0;
--fail:
-- return ret;
- }
- EXPORT_SYMBOL(drm_dp_check_act_status);
-
-diff --git a/drivers/gpu/drm/drm_encoder_slave.c b/drivers/gpu/drm/drm_encoder_slave.c
-index cf804389f5ec..d50a7884e69e 100644
---- a/drivers/gpu/drm/drm_encoder_slave.c
-+++ b/drivers/gpu/drm/drm_encoder_slave.c
-@@ -84,7 +84,7 @@ int drm_i2c_encoder_init(struct drm_device *dev,
-
- err = encoder_drv->encoder_init(client, dev, encoder);
- if (err)
-- goto fail_unregister;
-+ goto fail_module_put;
-
- if (info->platform_data)
- encoder->slave_funcs->set_config(&encoder->base,
-@@ -92,9 +92,10 @@ int drm_i2c_encoder_init(struct drm_device *dev,
-
- return 0;
-
-+fail_module_put:
-+ module_put(module);
- fail_unregister:
- i2c_unregister_device(client);
-- module_put(module);
- fail:
- return err;
- }
-diff --git a/drivers/gpu/drm/drm_sysfs.c b/drivers/gpu/drm/drm_sysfs.c
-index 939f0032aab1..f0336c804639 100644
---- a/drivers/gpu/drm/drm_sysfs.c
-+++ b/drivers/gpu/drm/drm_sysfs.c
-@@ -291,9 +291,6 @@ int drm_sysfs_connector_add(struct drm_connector *connector)
- return PTR_ERR(connector->kdev);
- }
-
-- /* Let userspace know we have a new connector */
-- drm_sysfs_hotplug_event(dev);
--
- if (connector->ddc)
- return sysfs_create_link(&connector->kdev->kobj,
- &connector->ddc->dev.kobj, "ddc");
-diff --git a/drivers/gpu/drm/i915/display/intel_ddi.c b/drivers/gpu/drm/i915/display/intel_ddi.c
-index 52db7852827b..647412da733e 100644
---- a/drivers/gpu/drm/i915/display/intel_ddi.c
-+++ b/drivers/gpu/drm/i915/display/intel_ddi.c
-@@ -2866,7 +2866,7 @@ icl_program_mg_dp_mode(struct intel_digital_port *intel_dig_port,
- ln1 = intel_de_read(dev_priv, MG_DP_MODE(1, tc_port));
- }
-
-- ln0 &= ~(MG_DP_MODE_CFG_DP_X1_MODE | MG_DP_MODE_CFG_DP_X1_MODE);
-+ ln0 &= ~(MG_DP_MODE_CFG_DP_X1_MODE | MG_DP_MODE_CFG_DP_X2_MODE);
- ln1 &= ~(MG_DP_MODE_CFG_DP_X1_MODE | MG_DP_MODE_CFG_DP_X2_MODE);
-
- /* DPPATC */
-diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
-index a2fafd4499f2..5e228d202e4d 100644
---- a/drivers/gpu/drm/i915/display/intel_dp.c
-+++ b/drivers/gpu/drm/i915/display/intel_dp.c
-@@ -1343,8 +1343,7 @@ intel_dp_aux_xfer(struct intel_dp *intel_dp,
- bool is_tc_port = intel_phy_is_tc(i915, phy);
- i915_reg_t ch_ctl, ch_data[5];
- u32 aux_clock_divider;
-- enum intel_display_power_domain aux_domain =
-- intel_aux_power_domain(intel_dig_port);
-+ enum intel_display_power_domain aux_domain;
- intel_wakeref_t aux_wakeref;
- intel_wakeref_t pps_wakeref;
- int i, ret, recv_bytes;
-@@ -1359,6 +1358,8 @@ intel_dp_aux_xfer(struct intel_dp *intel_dp,
- if (is_tc_port)
- intel_tc_port_lock(intel_dig_port);
-
-+ aux_domain = intel_aux_power_domain(intel_dig_port);
-+
- aux_wakeref = intel_display_power_get(i915, aux_domain);
- pps_wakeref = pps_lock(intel_dp);
-
-diff --git a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
-index 5d5d7eef3f43..7aff3514d97a 100644
---- a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
-+++ b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
-@@ -39,7 +39,6 @@ static int shmem_get_pages(struct drm_i915_gem_object *obj)
- unsigned long last_pfn = 0; /* suppress gcc warning */
- unsigned int max_segment = i915_sg_segment_size();
- unsigned int sg_page_sizes;
-- struct pagevec pvec;
- gfp_t noreclaim;
- int ret;
-
-@@ -192,13 +191,17 @@ err_sg:
- sg_mark_end(sg);
- err_pages:
- mapping_clear_unevictable(mapping);
-- pagevec_init(&pvec);
-- for_each_sgt_page(page, sgt_iter, st) {
-- if (!pagevec_add(&pvec, page))
-+ if (sg != st->sgl) {
-+ struct pagevec pvec;
-+
-+ pagevec_init(&pvec);
-+ for_each_sgt_page(page, sgt_iter, st) {
-+ if (!pagevec_add(&pvec, page))
-+ check_release_pagevec(&pvec);
-+ }
-+ if (pagevec_count(&pvec))
- check_release_pagevec(&pvec);
- }
-- if (pagevec_count(&pvec))
-- check_release_pagevec(&pvec);
- sg_free_table(st);
- kfree(st);
-
-diff --git a/drivers/gpu/drm/i915/gt/intel_engine_cs.c b/drivers/gpu/drm/i915/gt/intel_engine_cs.c
-index 883a9b7fe88d..55b9165e7533 100644
---- a/drivers/gpu/drm/i915/gt/intel_engine_cs.c
-+++ b/drivers/gpu/drm/i915/gt/intel_engine_cs.c
-@@ -639,7 +639,7 @@ static int engine_setup_common(struct intel_engine_cs *engine)
- struct measure_breadcrumb {
- struct i915_request rq;
- struct intel_ring ring;
-- u32 cs[1024];
-+ u32 cs[2048];
- };
-
- static int measure_breadcrumb_dw(struct intel_context *ce)
-@@ -661,6 +661,8 @@ static int measure_breadcrumb_dw(struct intel_context *ce)
-
- frame->ring.vaddr = frame->cs;
- frame->ring.size = sizeof(frame->cs);
-+ frame->ring.wrap =
-+ BITS_PER_TYPE(frame->ring.size) - ilog2(frame->ring.size);
- frame->ring.effective_size = frame->ring.size;
- intel_ring_update_space(&frame->ring);
- frame->rq.ring = &frame->ring;
-diff --git a/drivers/gpu/drm/i915/gt/intel_lrc.c b/drivers/gpu/drm/i915/gt/intel_lrc.c
-index 2dfaddb8811e..ba82193b4e31 100644
---- a/drivers/gpu/drm/i915/gt/intel_lrc.c
-+++ b/drivers/gpu/drm/i915/gt/intel_lrc.c
-@@ -972,6 +972,13 @@ __unwind_incomplete_requests(struct intel_engine_cs *engine)
- list_move(&rq->sched.link, pl);
- set_bit(I915_FENCE_FLAG_PQUEUE, &rq->fence.flags);
-
-+ /* Check in case we rollback so far we wrap [size/2] */
-+ if (intel_ring_direction(rq->ring,
-+ intel_ring_wrap(rq->ring,
-+ rq->tail),
-+ rq->ring->tail) > 0)
-+ rq->context->lrc.desc |= CTX_DESC_FORCE_RESTORE;
-+
- active = rq;
- } else {
- struct intel_engine_cs *owner = rq->context->engine;
-@@ -1383,8 +1390,9 @@ static u64 execlists_update_context(struct i915_request *rq)
- * HW has a tendency to ignore us rewinding the TAIL to the end of
- * an earlier request.
- */
-+ GEM_BUG_ON(ce->lrc_reg_state[CTX_RING_TAIL] != rq->ring->tail);
-+ prev = rq->ring->tail;
- tail = intel_ring_set_tail(rq->ring, rq->tail);
-- prev = ce->lrc_reg_state[CTX_RING_TAIL];
- if (unlikely(intel_ring_direction(rq->ring, tail, prev) <= 0))
- desc |= CTX_DESC_FORCE_RESTORE;
- ce->lrc_reg_state[CTX_RING_TAIL] = tail;
-@@ -4213,6 +4221,14 @@ static int gen12_emit_flush_render(struct i915_request *request,
- return 0;
- }
-
-+static void assert_request_valid(struct i915_request *rq)
-+{
-+ struct intel_ring *ring __maybe_unused = rq->ring;
-+
-+ /* Can we unwind this request without appearing to go forwards? */
-+ GEM_BUG_ON(intel_ring_direction(ring, rq->wa_tail, rq->head) <= 0);
-+}
-+
- /*
- * Reserve space for 2 NOOPs at the end of each request to be
- * used as a workaround for not being allowed to do lite
-@@ -4225,6 +4241,9 @@ static u32 *gen8_emit_wa_tail(struct i915_request *request, u32 *cs)
- *cs++ = MI_NOOP;
- request->wa_tail = intel_ring_offset(request, cs);
-
-+ /* Check that entire request is less than half the ring */
-+ assert_request_valid(request);
-+
- return cs;
- }
-
-diff --git a/drivers/gpu/drm/i915/gt/intel_ring.c b/drivers/gpu/drm/i915/gt/intel_ring.c
-index 8cda1b7e17ba..bdb324167ef3 100644
---- a/drivers/gpu/drm/i915/gt/intel_ring.c
-+++ b/drivers/gpu/drm/i915/gt/intel_ring.c
-@@ -315,3 +315,7 @@ int intel_ring_cacheline_align(struct i915_request *rq)
- GEM_BUG_ON(rq->ring->emit & (CACHELINE_BYTES - 1));
- return 0;
- }
-+
-+#if IS_ENABLED(CONFIG_DRM_I915_SELFTEST)
-+#include "selftest_ring.c"
-+#endif
-diff --git a/drivers/gpu/drm/i915/gt/intel_workarounds.c b/drivers/gpu/drm/i915/gt/intel_workarounds.c
-index 5176ad1a3976..bb100872cd07 100644
---- a/drivers/gpu/drm/i915/gt/intel_workarounds.c
-+++ b/drivers/gpu/drm/i915/gt/intel_workarounds.c
-@@ -178,6 +178,12 @@ wa_write_or(struct i915_wa_list *wal, i915_reg_t reg, u32 set)
- wa_write_masked_or(wal, reg, set, set);
- }
-
-+static void
-+wa_write_clr(struct i915_wa_list *wal, i915_reg_t reg, u32 clr)
-+{
-+ wa_write_masked_or(wal, reg, clr, 0);
-+}
-+
- static void
- wa_masked_en(struct i915_wa_list *wal, i915_reg_t reg, u32 val)
- {
-@@ -697,6 +703,227 @@ int intel_engine_emit_ctx_wa(struct i915_request *rq)
- return 0;
- }
-
-+static void
-+gen4_gt_workarounds_init(struct drm_i915_private *i915,
-+ struct i915_wa_list *wal)
-+{
-+ /* WaDisable_RenderCache_OperationalFlush:gen4,ilk */
-+ wa_masked_dis(wal, CACHE_MODE_0, RC_OP_FLUSH_ENABLE);
-+}
-+
-+static void
-+g4x_gt_workarounds_init(struct drm_i915_private *i915, struct i915_wa_list *wal)
-+{
-+ gen4_gt_workarounds_init(i915, wal);
-+
-+ /* WaDisableRenderCachePipelinedFlush:g4x,ilk */
-+ wa_masked_en(wal, CACHE_MODE_0, CM0_PIPELINED_RENDER_FLUSH_DISABLE);
-+}
-+
-+static void
-+ilk_gt_workarounds_init(struct drm_i915_private *i915, struct i915_wa_list *wal)
-+{
-+ g4x_gt_workarounds_init(i915, wal);
-+
-+ wa_masked_en(wal, _3D_CHICKEN2, _3D_CHICKEN2_WM_READ_PIPELINED);
-+}
-+
-+static void
-+snb_gt_workarounds_init(struct drm_i915_private *i915, struct i915_wa_list *wal)
-+{
-+ /* WaDisableHiZPlanesWhenMSAAEnabled:snb */
-+ wa_masked_en(wal,
-+ _3D_CHICKEN,
-+ _3D_CHICKEN_HIZ_PLANE_DISABLE_MSAA_4X_SNB);
-+
-+ /* WaDisable_RenderCache_OperationalFlush:snb */
-+ wa_masked_dis(wal, CACHE_MODE_0, RC_OP_FLUSH_ENABLE);
-+
-+ /*
-+ * BSpec recommends 8x4 when MSAA is used,
-+ * however in practice 16x4 seems fastest.
-+ *
-+ * Note that PS/WM thread counts depend on the WIZ hashing
-+ * disable bit, which we don't touch here, but it's good
-+ * to keep in mind (see 3DSTATE_PS and 3DSTATE_WM).
-+ */
-+ wa_add(wal,
-+ GEN6_GT_MODE, 0,
-+ _MASKED_FIELD(GEN6_WIZ_HASHING_MASK, GEN6_WIZ_HASHING_16x4),
-+ GEN6_WIZ_HASHING_16x4);
-+
-+ wa_masked_dis(wal, CACHE_MODE_0, CM0_STC_EVICT_DISABLE_LRA_SNB);
-+
-+ wa_masked_en(wal,
-+ _3D_CHICKEN3,
-+ /* WaStripsFansDisableFastClipPerformanceFix:snb */
-+ _3D_CHICKEN3_SF_DISABLE_FASTCLIP_CULL |
-+ /*
-+ * Bspec says:
-+ * "This bit must be set if 3DSTATE_CLIP clip mode is set
-+ * to normal and 3DSTATE_SF number of SF output attributes
-+ * is more than 16."
-+ */
-+ _3D_CHICKEN3_SF_DISABLE_PIPELINED_ATTR_FETCH);
-+}
-+
-+static void
-+ivb_gt_workarounds_init(struct drm_i915_private *i915, struct i915_wa_list *wal)
-+{
-+ /* WaDisableEarlyCull:ivb */
-+ wa_masked_en(wal, _3D_CHICKEN3, _3D_CHICKEN_SF_DISABLE_OBJEND_CULL);
-+
-+ /* WaDisablePSDDualDispatchEnable:ivb */
-+ if (IS_IVB_GT1(i915))
-+ wa_masked_en(wal,
-+ GEN7_HALF_SLICE_CHICKEN1,
-+ GEN7_PSD_SINGLE_PORT_DISPATCH_ENABLE);
-+
-+ /* WaDisable_RenderCache_OperationalFlush:ivb */
-+ wa_masked_dis(wal, CACHE_MODE_0_GEN7, RC_OP_FLUSH_ENABLE);
-+
-+ /* Apply the WaDisableRHWOOptimizationForRenderHang:ivb workaround. */
-+ wa_masked_dis(wal,
-+ GEN7_COMMON_SLICE_CHICKEN1,
-+ GEN7_CSC1_RHWO_OPT_DISABLE_IN_RCC);
-+
-+ /* WaApplyL3ControlAndL3ChickenMode:ivb */
-+ wa_write(wal, GEN7_L3CNTLREG1, GEN7_WA_FOR_GEN7_L3_CONTROL);
-+ wa_write(wal, GEN7_L3_CHICKEN_MODE_REGISTER, GEN7_WA_L3_CHICKEN_MODE);
-+
-+ /* WaForceL3Serialization:ivb */
-+ wa_write_clr(wal, GEN7_L3SQCREG4, L3SQ_URB_READ_CAM_MATCH_DISABLE);
-+
-+ /*
-+ * WaVSThreadDispatchOverride:ivb,vlv
-+ *
-+ * This actually overrides the dispatch
-+ * mode for all thread types.
-+ */
-+ wa_write_masked_or(wal, GEN7_FF_THREAD_MODE,
-+ GEN7_FF_SCHED_MASK,
-+ GEN7_FF_TS_SCHED_HW |
-+ GEN7_FF_VS_SCHED_HW |
-+ GEN7_FF_DS_SCHED_HW);
-+
-+ if (0) { /* causes HiZ corruption on ivb:gt1 */
-+ /* enable HiZ Raw Stall Optimization */
-+ wa_masked_dis(wal, CACHE_MODE_0_GEN7, HIZ_RAW_STALL_OPT_DISABLE);
-+ }
-+
-+ /* WaDisable4x2SubspanOptimization:ivb */
-+ wa_masked_en(wal, CACHE_MODE_1, PIXEL_SUBSPAN_COLLECT_OPT_DISABLE);
-+
-+ /*
-+ * BSpec recommends 8x4 when MSAA is used,
-+ * however in practice 16x4 seems fastest.
-+ *
-+ * Note that PS/WM thread counts depend on the WIZ hashing
-+ * disable bit, which we don't touch here, but it's good
-+ * to keep in mind (see 3DSTATE_PS and 3DSTATE_WM).
-+ */
-+ wa_add(wal, GEN7_GT_MODE, 0,
-+ _MASKED_FIELD(GEN6_WIZ_HASHING_MASK, GEN6_WIZ_HASHING_16x4),
-+ GEN6_WIZ_HASHING_16x4);
-+}
-+
-+static void
-+vlv_gt_workarounds_init(struct drm_i915_private *i915, struct i915_wa_list *wal)
-+{
-+ /* WaDisableEarlyCull:vlv */
-+ wa_masked_en(wal, _3D_CHICKEN3, _3D_CHICKEN_SF_DISABLE_OBJEND_CULL);
-+
-+ /* WaPsdDispatchEnable:vlv */
-+ /* WaDisablePSDDualDispatchEnable:vlv */
-+ wa_masked_en(wal,
-+ GEN7_HALF_SLICE_CHICKEN1,
-+ GEN7_MAX_PS_THREAD_DEP |
-+ GEN7_PSD_SINGLE_PORT_DISPATCH_ENABLE);
-+
-+ /* WaDisable_RenderCache_OperationalFlush:vlv */
-+ wa_masked_dis(wal, CACHE_MODE_0_GEN7, RC_OP_FLUSH_ENABLE);
-+
-+ /* WaForceL3Serialization:vlv */
-+ wa_write_clr(wal, GEN7_L3SQCREG4, L3SQ_URB_READ_CAM_MATCH_DISABLE);
-+
-+ /*
-+ * WaVSThreadDispatchOverride:ivb,vlv
-+ *
-+ * This actually overrides the dispatch
-+ * mode for all thread types.
-+ */
-+ wa_write_masked_or(wal,
-+ GEN7_FF_THREAD_MODE,
-+ GEN7_FF_SCHED_MASK,
-+ GEN7_FF_TS_SCHED_HW |
-+ GEN7_FF_VS_SCHED_HW |
-+ GEN7_FF_DS_SCHED_HW);
-+
-+ /*
-+ * BSpec says this must be set, even though
-+ * WaDisable4x2SubspanOptimization isn't listed for VLV.
-+ */
-+ wa_masked_en(wal, CACHE_MODE_1, PIXEL_SUBSPAN_COLLECT_OPT_DISABLE);
-+
-+ /*
-+ * BSpec recommends 8x4 when MSAA is used,
-+ * however in practice 16x4 seems fastest.
-+ *
-+ * Note that PS/WM thread counts depend on the WIZ hashing
-+ * disable bit, which we don't touch here, but it's good
-+ * to keep in mind (see 3DSTATE_PS and 3DSTATE_WM).
-+ */
-+ wa_add(wal, GEN7_GT_MODE, 0,
-+ _MASKED_FIELD(GEN6_WIZ_HASHING_MASK, GEN6_WIZ_HASHING_16x4),
-+ GEN6_WIZ_HASHING_16x4);
-+
-+ /*
-+ * WaIncreaseL3CreditsForVLVB0:vlv
-+ * This is the hardware default actually.
-+ */
-+ wa_write(wal, GEN7_L3SQCREG1, VLV_B0_WA_L3SQCREG1_VALUE);
-+}
-+
-+static void
-+hsw_gt_workarounds_init(struct drm_i915_private *i915, struct i915_wa_list *wal)
-+{
-+ /* L3 caching of data atomics doesn't work -- disable it. */
-+ wa_write(wal, HSW_SCRATCH1, HSW_SCRATCH1_L3_DATA_ATOMICS_DISABLE);
-+
-+ wa_add(wal,
-+ HSW_ROW_CHICKEN3, 0,
-+ _MASKED_BIT_ENABLE(HSW_ROW_CHICKEN3_L3_GLOBAL_ATOMICS_DISABLE),
-+ 0 /* XXX does this reg exist? */);
-+
-+ /* WaVSRefCountFullforceMissDisable:hsw */
-+ wa_write_clr(wal, GEN7_FF_THREAD_MODE, GEN7_FF_VS_REF_CNT_FFME);
-+
-+ wa_masked_dis(wal,
-+ CACHE_MODE_0_GEN7,
-+ /* WaDisable_RenderCache_OperationalFlush:hsw */
-+ RC_OP_FLUSH_ENABLE |
-+ /* enable HiZ Raw Stall Optimization */
-+ HIZ_RAW_STALL_OPT_DISABLE);
-+
-+ /* WaDisable4x2SubspanOptimization:hsw */
-+ wa_masked_en(wal, CACHE_MODE_1, PIXEL_SUBSPAN_COLLECT_OPT_DISABLE);
-+
-+ /*
-+ * BSpec recommends 8x4 when MSAA is used,
-+ * however in practice 16x4 seems fastest.
-+ *
-+ * Note that PS/WM thread counts depend on the WIZ hashing
-+ * disable bit, which we don't touch here, but it's good
-+ * to keep in mind (see 3DSTATE_PS and 3DSTATE_WM).
-+ */
-+ wa_add(wal, GEN7_GT_MODE, 0,
-+ _MASKED_FIELD(GEN6_WIZ_HASHING_MASK, GEN6_WIZ_HASHING_16x4),
-+ GEN6_WIZ_HASHING_16x4);
-+
-+ /* WaSampleCChickenBitEnable:hsw */
-+ wa_masked_en(wal, HALF_SLICE_CHICKEN3, HSW_SAMPLE_C_PERFORMANCE);
-+}
-+
- static void
- gen9_gt_workarounds_init(struct drm_i915_private *i915, struct i915_wa_list *wal)
- {
-@@ -974,6 +1201,20 @@ gt_init_workarounds(struct drm_i915_private *i915, struct i915_wa_list *wal)
- bxt_gt_workarounds_init(i915, wal);
- else if (IS_SKYLAKE(i915))
- skl_gt_workarounds_init(i915, wal);
-+ else if (IS_HASWELL(i915))
-+ hsw_gt_workarounds_init(i915, wal);
-+ else if (IS_VALLEYVIEW(i915))
-+ vlv_gt_workarounds_init(i915, wal);
-+ else if (IS_IVYBRIDGE(i915))
-+ ivb_gt_workarounds_init(i915, wal);
-+ else if (IS_GEN(i915, 6))
-+ snb_gt_workarounds_init(i915, wal);
-+ else if (IS_GEN(i915, 5))
-+ ilk_gt_workarounds_init(i915, wal);
-+ else if (IS_G4X(i915))
-+ g4x_gt_workarounds_init(i915, wal);
-+ else if (IS_GEN(i915, 4))
-+ gen4_gt_workarounds_init(i915, wal);
- else if (INTEL_GEN(i915) <= 8)
- return;
- else
-@@ -1379,12 +1620,6 @@ rcs_engine_wa_init(struct intel_engine_cs *engine, struct i915_wa_list *wal)
- GEN7_FF_THREAD_MODE,
- GEN12_FF_TESSELATION_DOP_GATE_DISABLE);
-
-- /*
-- * Wa_1409085225:tgl
-- * Wa_14010229206:tgl
-- */
-- wa_masked_en(wal, GEN9_ROW_CHICKEN4, GEN12_DISABLE_TDL_PUSH);
--
- /* Wa_1408615072:tgl */
- wa_write_or(wal, UNSLICE_UNIT_LEVEL_CLKGATE2,
- VSUNIT_CLKGATE_DIS_TGL);
-@@ -1402,6 +1637,12 @@ rcs_engine_wa_init(struct intel_engine_cs *engine, struct i915_wa_list *wal)
- wa_masked_en(wal,
- GEN9_CS_DEBUG_MODE1,
- FF_DOP_CLOCK_GATE_DISABLE);
-+
-+ /*
-+ * Wa_1409085225:tgl
-+ * Wa_14010229206:tgl
-+ */
-+ wa_masked_en(wal, GEN9_ROW_CHICKEN4, GEN12_DISABLE_TDL_PUSH);
- }
-
- if (IS_GEN(i915, 11)) {
-diff --git a/drivers/gpu/drm/i915/gt/selftest_mocs.c b/drivers/gpu/drm/i915/gt/selftest_mocs.c
-index 8831ffee2061..63f87d8608c3 100644
---- a/drivers/gpu/drm/i915/gt/selftest_mocs.c
-+++ b/drivers/gpu/drm/i915/gt/selftest_mocs.c
-@@ -18,6 +18,20 @@ struct live_mocs {
- void *vaddr;
- };
-
-+static struct intel_context *mocs_context_create(struct intel_engine_cs *engine)
-+{
-+ struct intel_context *ce;
-+
-+ ce = intel_context_create(engine);
-+ if (IS_ERR(ce))
-+ return ce;
-+
-+ /* We build large requests to read the registers from the ring */
-+ ce->ring = __intel_context_ring_size(SZ_16K);
-+
-+ return ce;
-+}
-+
- static int request_add_sync(struct i915_request *rq, int err)
- {
- i915_request_get(rq);
-@@ -301,7 +315,7 @@ static int live_mocs_clean(void *arg)
- for_each_engine(engine, gt, id) {
- struct intel_context *ce;
-
-- ce = intel_context_create(engine);
-+ ce = mocs_context_create(engine);
- if (IS_ERR(ce)) {
- err = PTR_ERR(ce);
- break;
-@@ -395,7 +409,7 @@ static int live_mocs_reset(void *arg)
- for_each_engine(engine, gt, id) {
- struct intel_context *ce;
-
-- ce = intel_context_create(engine);
-+ ce = mocs_context_create(engine);
- if (IS_ERR(ce)) {
- err = PTR_ERR(ce);
- break;
-diff --git a/drivers/gpu/drm/i915/gt/selftest_ring.c b/drivers/gpu/drm/i915/gt/selftest_ring.c
-new file mode 100644
-index 000000000000..2a8c534dc125
---- /dev/null
-+++ b/drivers/gpu/drm/i915/gt/selftest_ring.c
-@@ -0,0 +1,110 @@
-+// SPDX-License-Identifier: GPL-2.0
-+/*
-+ * Copyright © 2020 Intel Corporation
-+ */
-+
-+static struct intel_ring *mock_ring(unsigned long sz)
-+{
-+ struct intel_ring *ring;
-+
-+ ring = kzalloc(sizeof(*ring) + sz, GFP_KERNEL);
-+ if (!ring)
-+ return NULL;
-+
-+ kref_init(&ring->ref);
-+ ring->size = sz;
-+ ring->wrap = BITS_PER_TYPE(ring->size) - ilog2(sz);
-+ ring->effective_size = sz;
-+ ring->vaddr = (void *)(ring + 1);
-+ atomic_set(&ring->pin_count, 1);
-+
-+ intel_ring_update_space(ring);
-+
-+ return ring;
-+}
-+
-+static void mock_ring_free(struct intel_ring *ring)
-+{
-+ kfree(ring);
-+}
-+
-+static int check_ring_direction(struct intel_ring *ring,
-+ u32 next, u32 prev,
-+ int expected)
-+{
-+ int result;
-+
-+ result = intel_ring_direction(ring, next, prev);
-+ if (result < 0)
-+ result = -1;
-+ else if (result > 0)
-+ result = 1;
-+
-+ if (result != expected) {
-+ pr_err("intel_ring_direction(%u, %u):%d != %d\n",
-+ next, prev, result, expected);
-+ return -EINVAL;
-+ }
-+
-+ return 0;
-+}
-+
-+static int check_ring_step(struct intel_ring *ring, u32 x, u32 step)
-+{
-+ u32 prev = x, next = intel_ring_wrap(ring, x + step);
-+ int err = 0;
-+
-+ err |= check_ring_direction(ring, next, next, 0);
-+ err |= check_ring_direction(ring, prev, prev, 0);
-+ err |= check_ring_direction(ring, next, prev, 1);
-+ err |= check_ring_direction(ring, prev, next, -1);
-+
-+ return err;
-+}
-+
-+static int check_ring_offset(struct intel_ring *ring, u32 x, u32 step)
-+{
-+ int err = 0;
-+
-+ err |= check_ring_step(ring, x, step);
-+ err |= check_ring_step(ring, intel_ring_wrap(ring, x + 1), step);
-+ err |= check_ring_step(ring, intel_ring_wrap(ring, x - 1), step);
-+
-+ return err;
-+}
-+
-+static int igt_ring_direction(void *dummy)
-+{
-+ struct intel_ring *ring;
-+ unsigned int half = 2048;
-+ int step, err = 0;
-+
-+ ring = mock_ring(2 * half);
-+ if (!ring)
-+ return -ENOMEM;
-+
-+ GEM_BUG_ON(ring->size != 2 * half);
-+
-+ /* Precision of wrap detection is limited to ring->size / 2 */
-+ for (step = 1; step < half; step <<= 1) {
-+ err |= check_ring_offset(ring, 0, step);
-+ err |= check_ring_offset(ring, half, step);
-+ }
-+ err |= check_ring_step(ring, 0, half - 64);
-+
-+ /* And check unwrapped handling for good measure */
-+ err |= check_ring_offset(ring, 0, 2 * half + 64);
-+ err |= check_ring_offset(ring, 3 * half, 1);
-+
-+ mock_ring_free(ring);
-+ return err;
-+}
-+
-+int intel_ring_mock_selftests(void)
-+{
-+ static const struct i915_subtest tests[] = {
-+ SUBTEST(igt_ring_direction),
-+ };
-+
-+ return i915_subtests(tests, NULL);
-+}
-diff --git a/drivers/gpu/drm/i915/i915_cmd_parser.c b/drivers/gpu/drm/i915/i915_cmd_parser.c
-index 189b573d02be..372354d33f55 100644
---- a/drivers/gpu/drm/i915/i915_cmd_parser.c
-+++ b/drivers/gpu/drm/i915/i915_cmd_parser.c
-@@ -572,6 +572,9 @@ struct drm_i915_reg_descriptor {
- #define REG32(_reg, ...) \
- { .addr = (_reg), __VA_ARGS__ }
-
-+#define REG32_IDX(_reg, idx) \
-+ { .addr = _reg(idx) }
-+
- /*
- * Convenience macro for adding 64-bit registers.
- *
-@@ -669,6 +672,7 @@ static const struct drm_i915_reg_descriptor gen9_blt_regs[] = {
- REG64_IDX(RING_TIMESTAMP, BSD_RING_BASE),
- REG32(BCS_SWCTRL),
- REG64_IDX(RING_TIMESTAMP, BLT_RING_BASE),
-+ REG32_IDX(RING_CTX_TIMESTAMP, BLT_RING_BASE),
- REG64_IDX(BCS_GPR, 0),
- REG64_IDX(BCS_GPR, 1),
- REG64_IDX(BCS_GPR, 2),
-diff --git a/drivers/gpu/drm/i915/i915_irq.c b/drivers/gpu/drm/i915/i915_irq.c
-index 8a2b83807ffc..bd042725a678 100644
---- a/drivers/gpu/drm/i915/i915_irq.c
-+++ b/drivers/gpu/drm/i915/i915_irq.c
-@@ -3092,6 +3092,7 @@ static void gen11_hpd_irq_setup(struct drm_i915_private *dev_priv)
-
- val = I915_READ(GEN11_DE_HPD_IMR);
- val &= ~hotplug_irqs;
-+ val |= ~enabled_irqs & hotplug_irqs;
- I915_WRITE(GEN11_DE_HPD_IMR, val);
- POSTING_READ(GEN11_DE_HPD_IMR);
-
-diff --git a/drivers/gpu/drm/i915/i915_reg.h b/drivers/gpu/drm/i915/i915_reg.h
-index 6e12000c4b6b..a41be9357d15 100644
---- a/drivers/gpu/drm/i915/i915_reg.h
-+++ b/drivers/gpu/drm/i915/i915_reg.h
-@@ -7819,7 +7819,7 @@ enum {
-
- /* GEN7 chicken */
- #define GEN7_COMMON_SLICE_CHICKEN1 _MMIO(0x7010)
-- #define GEN7_CSC1_RHWO_OPT_DISABLE_IN_RCC ((1 << 10) | (1 << 26))
-+ #define GEN7_CSC1_RHWO_OPT_DISABLE_IN_RCC (1 << 10)
- #define GEN9_RHWO_OPTIMIZATION_DISABLE (1 << 14)
-
- #define COMMON_SLICE_CHICKEN2 _MMIO(0x7014)
-diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
-index a52986a9e7a6..20c1683fda24 100644
---- a/drivers/gpu/drm/i915/intel_pm.c
-+++ b/drivers/gpu/drm/i915/intel_pm.c
-@@ -6593,16 +6593,6 @@ static void ilk_init_clock_gating(struct drm_i915_private *dev_priv)
- I915_WRITE(ILK_DISPLAY_CHICKEN2,
- I915_READ(ILK_DISPLAY_CHICKEN2) |
- ILK_ELPIN_409_SELECT);
-- I915_WRITE(_3D_CHICKEN2,
-- _3D_CHICKEN2_WM_READ_PIPELINED << 16 |
-- _3D_CHICKEN2_WM_READ_PIPELINED);
--
-- /* WaDisableRenderCachePipelinedFlush:ilk */
-- I915_WRITE(CACHE_MODE_0,
-- _MASKED_BIT_ENABLE(CM0_PIPELINED_RENDER_FLUSH_DISABLE));
--
-- /* WaDisable_RenderCache_OperationalFlush:ilk */
-- I915_WRITE(CACHE_MODE_0, _MASKED_BIT_DISABLE(RC_OP_FLUSH_ENABLE));
-
- g4x_disable_trickle_feed(dev_priv);
-
-@@ -6665,27 +6655,6 @@ static void gen6_init_clock_gating(struct drm_i915_private *dev_priv)
- I915_READ(ILK_DISPLAY_CHICKEN2) |
- ILK_ELPIN_409_SELECT);
-
-- /* WaDisableHiZPlanesWhenMSAAEnabled:snb */
-- I915_WRITE(_3D_CHICKEN,
-- _MASKED_BIT_ENABLE(_3D_CHICKEN_HIZ_PLANE_DISABLE_MSAA_4X_SNB));
--
-- /* WaDisable_RenderCache_OperationalFlush:snb */
-- I915_WRITE(CACHE_MODE_0, _MASKED_BIT_DISABLE(RC_OP_FLUSH_ENABLE));
--
-- /*
-- * BSpec recoomends 8x4 when MSAA is used,
-- * however in practice 16x4 seems fastest.
-- *
-- * Note that PS/WM thread counts depend on the WIZ hashing
-- * disable bit, which we don't touch here, but it's good
-- * to keep in mind (see 3DSTATE_PS and 3DSTATE_WM).
-- */
-- I915_WRITE(GEN6_GT_MODE,
-- _MASKED_FIELD(GEN6_WIZ_HASHING_MASK, GEN6_WIZ_HASHING_16x4));
--
-- I915_WRITE(CACHE_MODE_0,
-- _MASKED_BIT_DISABLE(CM0_STC_EVICT_DISABLE_LRA_SNB));
--
- I915_WRITE(GEN6_UCGCTL1,
- I915_READ(GEN6_UCGCTL1) |
- GEN6_BLBUNIT_CLOCK_GATE_DISABLE |
-@@ -6708,18 +6677,6 @@ static void gen6_init_clock_gating(struct drm_i915_private *dev_priv)
- GEN6_RCPBUNIT_CLOCK_GATE_DISABLE |
- GEN6_RCCUNIT_CLOCK_GATE_DISABLE);
-
-- /* WaStripsFansDisableFastClipPerformanceFix:snb */
-- I915_WRITE(_3D_CHICKEN3,
-- _MASKED_BIT_ENABLE(_3D_CHICKEN3_SF_DISABLE_FASTCLIP_CULL));
--
-- /*
-- * Bspec says:
-- * "This bit must be set if 3DSTATE_CLIP clip mode is set to normal and
-- * 3DSTATE_SF number of SF output attributes is more than 16."
-- */
-- I915_WRITE(_3D_CHICKEN3,
-- _MASKED_BIT_ENABLE(_3D_CHICKEN3_SF_DISABLE_PIPELINED_ATTR_FETCH));
--
- /*
- * According to the spec the following bits should be
- * set in order to enable memory self-refresh and fbc:
-@@ -6749,24 +6706,6 @@ static void gen6_init_clock_gating(struct drm_i915_private *dev_priv)
- gen6_check_mch_setup(dev_priv);
- }
-
--static void gen7_setup_fixed_func_scheduler(struct drm_i915_private *dev_priv)
--{
-- u32 reg = I915_READ(GEN7_FF_THREAD_MODE);
--
-- /*
-- * WaVSThreadDispatchOverride:ivb,vlv
-- *
-- * This actually overrides the dispatch
-- * mode for all thread types.
-- */
-- reg &= ~GEN7_FF_SCHED_MASK;
-- reg |= GEN7_FF_TS_SCHED_HW;
-- reg |= GEN7_FF_VS_SCHED_HW;
-- reg |= GEN7_FF_DS_SCHED_HW;
--
-- I915_WRITE(GEN7_FF_THREAD_MODE, reg);
--}
--
- static void lpt_init_clock_gating(struct drm_i915_private *dev_priv)
- {
- /*
-@@ -6992,45 +6931,10 @@ static void bdw_init_clock_gating(struct drm_i915_private *dev_priv)
-
- static void hsw_init_clock_gating(struct drm_i915_private *dev_priv)
- {
-- /* L3 caching of data atomics doesn't work -- disable it. */
-- I915_WRITE(HSW_SCRATCH1, HSW_SCRATCH1_L3_DATA_ATOMICS_DISABLE);
-- I915_WRITE(HSW_ROW_CHICKEN3,
-- _MASKED_BIT_ENABLE(HSW_ROW_CHICKEN3_L3_GLOBAL_ATOMICS_DISABLE));
--
- /* This is required by WaCatErrorRejectionIssue:hsw */
- I915_WRITE(GEN7_SQ_CHICKEN_MBCUNIT_CONFIG,
-- I915_READ(GEN7_SQ_CHICKEN_MBCUNIT_CONFIG) |
-- GEN7_SQ_CHICKEN_MBCUNIT_SQINTMOB);
--
-- /* WaVSRefCountFullforceMissDisable:hsw */
-- I915_WRITE(GEN7_FF_THREAD_MODE,
-- I915_READ(GEN7_FF_THREAD_MODE) & ~GEN7_FF_VS_REF_CNT_FFME);
--
-- /* WaDisable_RenderCache_OperationalFlush:hsw */
-- I915_WRITE(CACHE_MODE_0_GEN7, _MASKED_BIT_DISABLE(RC_OP_FLUSH_ENABLE));
--
-- /* enable HiZ Raw Stall Optimization */
-- I915_WRITE(CACHE_MODE_0_GEN7,
-- _MASKED_BIT_DISABLE(HIZ_RAW_STALL_OPT_DISABLE));
--
-- /* WaDisable4x2SubspanOptimization:hsw */
-- I915_WRITE(CACHE_MODE_1,
-- _MASKED_BIT_ENABLE(PIXEL_SUBSPAN_COLLECT_OPT_DISABLE));
--
-- /*
-- * BSpec recommends 8x4 when MSAA is used,
-- * however in practice 16x4 seems fastest.
-- *
-- * Note that PS/WM thread counts depend on the WIZ hashing
-- * disable bit, which we don't touch here, but it's good
-- * to keep in mind (see 3DSTATE_PS and 3DSTATE_WM).
-- */
-- I915_WRITE(GEN7_GT_MODE,
-- _MASKED_FIELD(GEN6_WIZ_HASHING_MASK, GEN6_WIZ_HASHING_16x4));
--
-- /* WaSampleCChickenBitEnable:hsw */
-- I915_WRITE(HALF_SLICE_CHICKEN3,
-- _MASKED_BIT_ENABLE(HSW_SAMPLE_C_PERFORMANCE));
-+ I915_READ(GEN7_SQ_CHICKEN_MBCUNIT_CONFIG) |
-+ GEN7_SQ_CHICKEN_MBCUNIT_SQINTMOB);
-
- /* WaSwitchSolVfFArbitrationPriority:hsw */
- I915_WRITE(GAM_ECOCHK, I915_READ(GAM_ECOCHK) | HSW_ECOCHK_ARB_PRIO_SOL);
-@@ -7044,32 +6948,11 @@ static void ivb_init_clock_gating(struct drm_i915_private *dev_priv)
-
- I915_WRITE(ILK_DSPCLK_GATE_D, ILK_VRHUNIT_CLOCK_GATE_DISABLE);
-
-- /* WaDisableEarlyCull:ivb */
-- I915_WRITE(_3D_CHICKEN3,
-- _MASKED_BIT_ENABLE(_3D_CHICKEN_SF_DISABLE_OBJEND_CULL));
--
- /* WaDisableBackToBackFlipFix:ivb */
- I915_WRITE(IVB_CHICKEN3,
- CHICKEN3_DGMG_REQ_OUT_FIX_DISABLE |
- CHICKEN3_DGMG_DONE_FIX_DISABLE);
-
-- /* WaDisablePSDDualDispatchEnable:ivb */
-- if (IS_IVB_GT1(dev_priv))
-- I915_WRITE(GEN7_HALF_SLICE_CHICKEN1,
-- _MASKED_BIT_ENABLE(GEN7_PSD_SINGLE_PORT_DISPATCH_ENABLE));
--
-- /* WaDisable_RenderCache_OperationalFlush:ivb */
-- I915_WRITE(CACHE_MODE_0_GEN7, _MASKED_BIT_DISABLE(RC_OP_FLUSH_ENABLE));
--
-- /* Apply the WaDisableRHWOOptimizationForRenderHang:ivb workaround. */
-- I915_WRITE(GEN7_COMMON_SLICE_CHICKEN1,
-- GEN7_CSC1_RHWO_OPT_DISABLE_IN_RCC);
--
-- /* WaApplyL3ControlAndL3ChickenMode:ivb */
-- I915_WRITE(GEN7_L3CNTLREG1,
-- GEN7_WA_FOR_GEN7_L3_CONTROL);
-- I915_WRITE(GEN7_L3_CHICKEN_MODE_REGISTER,
-- GEN7_WA_L3_CHICKEN_MODE);
- if (IS_IVB_GT1(dev_priv))
- I915_WRITE(GEN7_ROW_CHICKEN2,
- _MASKED_BIT_ENABLE(DOP_CLOCK_GATING_DISABLE));
-@@ -7081,10 +6964,6 @@ static void ivb_init_clock_gating(struct drm_i915_private *dev_priv)
- _MASKED_BIT_ENABLE(DOP_CLOCK_GATING_DISABLE));
- }
-
-- /* WaForceL3Serialization:ivb */
-- I915_WRITE(GEN7_L3SQCREG4, I915_READ(GEN7_L3SQCREG4) &
-- ~L3SQ_URB_READ_CAM_MATCH_DISABLE);
--
- /*
- * According to the spec, bit 13 (RCZUNIT) must be set on IVB.
- * This implements the WaDisableRCZUnitClockGating:ivb workaround.
-@@ -7099,29 +6978,6 @@ static void ivb_init_clock_gating(struct drm_i915_private *dev_priv)
-
- g4x_disable_trickle_feed(dev_priv);
-
-- gen7_setup_fixed_func_scheduler(dev_priv);
--
-- if (0) { /* causes HiZ corruption on ivb:gt1 */
-- /* enable HiZ Raw Stall Optimization */
-- I915_WRITE(CACHE_MODE_0_GEN7,
-- _MASKED_BIT_DISABLE(HIZ_RAW_STALL_OPT_DISABLE));
-- }
--
-- /* WaDisable4x2SubspanOptimization:ivb */
-- I915_WRITE(CACHE_MODE_1,
-- _MASKED_BIT_ENABLE(PIXEL_SUBSPAN_COLLECT_OPT_DISABLE));
--
-- /*
-- * BSpec recommends 8x4 when MSAA is used,
-- * however in practice 16x4 seems fastest.
-- *
-- * Note that PS/WM thread counts depend on the WIZ hashing
-- * disable bit, which we don't touch here, but it's good
-- * to keep in mind (see 3DSTATE_PS and 3DSTATE_WM).
-- */
-- I915_WRITE(GEN7_GT_MODE,
-- _MASKED_FIELD(GEN6_WIZ_HASHING_MASK, GEN6_WIZ_HASHING_16x4));
--
- snpcr = I915_READ(GEN6_MBCUNIT_SNPCR);
- snpcr &= ~GEN6_MBC_SNPCR_MASK;
- snpcr |= GEN6_MBC_SNPCR_MED;
-@@ -7135,28 +6991,11 @@ static void ivb_init_clock_gating(struct drm_i915_private *dev_priv)
-
- static void vlv_init_clock_gating(struct drm_i915_private *dev_priv)
- {
-- /* WaDisableEarlyCull:vlv */
-- I915_WRITE(_3D_CHICKEN3,
-- _MASKED_BIT_ENABLE(_3D_CHICKEN_SF_DISABLE_OBJEND_CULL));
--
- /* WaDisableBackToBackFlipFix:vlv */
- I915_WRITE(IVB_CHICKEN3,
- CHICKEN3_DGMG_REQ_OUT_FIX_DISABLE |
- CHICKEN3_DGMG_DONE_FIX_DISABLE);
-
-- /* WaPsdDispatchEnable:vlv */
-- /* WaDisablePSDDualDispatchEnable:vlv */
-- I915_WRITE(GEN7_HALF_SLICE_CHICKEN1,
-- _MASKED_BIT_ENABLE(GEN7_MAX_PS_THREAD_DEP |
-- GEN7_PSD_SINGLE_PORT_DISPATCH_ENABLE));
--
-- /* WaDisable_RenderCache_OperationalFlush:vlv */
-- I915_WRITE(CACHE_MODE_0_GEN7, _MASKED_BIT_DISABLE(RC_OP_FLUSH_ENABLE));
--
-- /* WaForceL3Serialization:vlv */
-- I915_WRITE(GEN7_L3SQCREG4, I915_READ(GEN7_L3SQCREG4) &
-- ~L3SQ_URB_READ_CAM_MATCH_DISABLE);
--
- /* WaDisableDopClockGating:vlv */
- I915_WRITE(GEN7_ROW_CHICKEN2,
- _MASKED_BIT_ENABLE(DOP_CLOCK_GATING_DISABLE));
-@@ -7166,8 +7005,6 @@ static void vlv_init_clock_gating(struct drm_i915_private *dev_priv)
- I915_READ(GEN7_SQ_CHICKEN_MBCUNIT_CONFIG) |
- GEN7_SQ_CHICKEN_MBCUNIT_SQINTMOB);
-
-- gen7_setup_fixed_func_scheduler(dev_priv);
--
- /*
- * According to the spec, bit 13 (RCZUNIT) must be set on IVB.
- * This implements the WaDisableRCZUnitClockGating:vlv workaround.
-@@ -7181,30 +7018,6 @@ static void vlv_init_clock_gating(struct drm_i915_private *dev_priv)
- I915_WRITE(GEN7_UCGCTL4,
- I915_READ(GEN7_UCGCTL4) | GEN7_L3BANK2X_CLOCK_GATE_DISABLE);
-
-- /*
-- * BSpec says this must be set, even though
-- * WaDisable4x2SubspanOptimization isn't listed for VLV.
-- */
-- I915_WRITE(CACHE_MODE_1,
-- _MASKED_BIT_ENABLE(PIXEL_SUBSPAN_COLLECT_OPT_DISABLE));
--
-- /*
-- * BSpec recommends 8x4 when MSAA is used,
-- * however in practice 16x4 seems fastest.
-- *
-- * Note that PS/WM thread counts depend on the WIZ hashing
-- * disable bit, which we don't touch here, but it's good
-- * to keep in mind (see 3DSTATE_PS and 3DSTATE_WM).
-- */
-- I915_WRITE(GEN7_GT_MODE,
-- _MASKED_FIELD(GEN6_WIZ_HASHING_MASK, GEN6_WIZ_HASHING_16x4));
--
-- /*
-- * WaIncreaseL3CreditsForVLVB0:vlv
-- * This is the hardware default actually.
-- */
-- I915_WRITE(GEN7_L3SQCREG1, VLV_B0_WA_L3SQCREG1_VALUE);
--
- /*
- * WaDisableVLVClockGating_VBIIssue:vlv
- * Disable clock gating on th GCFG unit to prevent a delay
-@@ -7257,13 +7070,6 @@ static void g4x_init_clock_gating(struct drm_i915_private *dev_priv)
- dspclk_gate |= DSSUNIT_CLOCK_GATE_DISABLE;
- I915_WRITE(DSPCLK_GATE_D, dspclk_gate);
-
-- /* WaDisableRenderCachePipelinedFlush */
-- I915_WRITE(CACHE_MODE_0,
-- _MASKED_BIT_ENABLE(CM0_PIPELINED_RENDER_FLUSH_DISABLE));
--
-- /* WaDisable_RenderCache_OperationalFlush:g4x */
-- I915_WRITE(CACHE_MODE_0, _MASKED_BIT_DISABLE(RC_OP_FLUSH_ENABLE));
--
- g4x_disable_trickle_feed(dev_priv);
- }
-
-@@ -7279,11 +7085,6 @@ static void i965gm_init_clock_gating(struct drm_i915_private *dev_priv)
- intel_uncore_write(uncore,
- MI_ARB_STATE,
- _MASKED_BIT_ENABLE(MI_ARB_DISPLAY_TRICKLE_FEED_DISABLE));
--
-- /* WaDisable_RenderCache_OperationalFlush:gen4 */
-- intel_uncore_write(uncore,
-- CACHE_MODE_0,
-- _MASKED_BIT_DISABLE(RC_OP_FLUSH_ENABLE));
- }
-
- static void i965g_init_clock_gating(struct drm_i915_private *dev_priv)
-@@ -7296,9 +7097,6 @@ static void i965g_init_clock_gating(struct drm_i915_private *dev_priv)
- I915_WRITE(RENCLK_GATE_D2, 0);
- I915_WRITE(MI_ARB_STATE,
- _MASKED_BIT_ENABLE(MI_ARB_DISPLAY_TRICKLE_FEED_DISABLE));
--
-- /* WaDisable_RenderCache_OperationalFlush:gen4 */
-- I915_WRITE(CACHE_MODE_0, _MASKED_BIT_DISABLE(RC_OP_FLUSH_ENABLE));
- }
-
- static void gen3_init_clock_gating(struct drm_i915_private *dev_priv)
-diff --git a/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h b/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h
-index 5b39bab4da1d..86baed226b53 100644
---- a/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h
-+++ b/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h
-@@ -20,6 +20,7 @@ selftest(fence, i915_sw_fence_mock_selftests)
- selftest(scatterlist, scatterlist_mock_selftests)
- selftest(syncmap, i915_syncmap_mock_selftests)
- selftest(uncore, intel_uncore_mock_selftests)
-+selftest(ring, intel_ring_mock_selftests)
- selftest(engine, intel_engine_cs_mock_selftests)
- selftest(timelines, intel_timeline_mock_selftests)
- selftest(requests, i915_request_mock_selftests)
-diff --git a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
-index 724024a2243a..662d02289533 100644
---- a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
-+++ b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
-@@ -1404,6 +1404,10 @@ static unsigned long a5xx_gpu_busy(struct msm_gpu *gpu)
- {
- u64 busy_cycles, busy_time;
-
-+ /* Only read the gpu busy if the hardware is already active */
-+ if (pm_runtime_get_if_in_use(&gpu->pdev->dev) == 0)
-+ return 0;
-+
- busy_cycles = gpu_read64(gpu, REG_A5XX_RBBM_PERFCTR_RBBM_0_LO,
- REG_A5XX_RBBM_PERFCTR_RBBM_0_HI);
-
-@@ -1412,6 +1416,8 @@ static unsigned long a5xx_gpu_busy(struct msm_gpu *gpu)
-
- gpu->devfreq.busy_cycles = busy_cycles;
-
-+ pm_runtime_put(&gpu->pdev->dev);
-+
- if (WARN_ON(busy_time > ~0LU))
- return ~0LU;
-
-diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
-index c4e71abbdd53..34607a98cc7c 100644
---- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
-+++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
-@@ -108,6 +108,13 @@ static void __a6xx_gmu_set_freq(struct a6xx_gmu *gmu, int index)
- struct msm_gpu *gpu = &adreno_gpu->base;
- int ret;
-
-+ /*
-+ * This can get called from devfreq while the hardware is idle. Don't
-+ * bring up the power if it isn't already active
-+ */
-+ if (pm_runtime_get_if_in_use(gmu->dev) == 0)
-+ return;
-+
- gmu_write(gmu, REG_A6XX_GMU_DCVS_ACK_OPTION, 0);
-
- gmu_write(gmu, REG_A6XX_GMU_DCVS_PERF_SETTING,
-@@ -134,6 +141,7 @@ static void __a6xx_gmu_set_freq(struct a6xx_gmu *gmu, int index)
- * for now leave it at max so that the performance is nominal.
- */
- icc_set_bw(gpu->icc_path, 0, MBps_to_icc(7216));
-+ pm_runtime_put(gmu->dev);
- }
-
- void a6xx_gmu_set_freq(struct msm_gpu *gpu, unsigned long freq)
-diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
-index 68af24150de5..2c09d2c21773 100644
---- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
-+++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
-@@ -810,6 +810,11 @@ static unsigned long a6xx_gpu_busy(struct msm_gpu *gpu)
- struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu);
- u64 busy_cycles, busy_time;
-
-+
-+ /* Only read the gpu busy if the hardware is already active */
-+ if (pm_runtime_get_if_in_use(a6xx_gpu->gmu.dev) == 0)
-+ return 0;
-+
- busy_cycles = gmu_read64(&a6xx_gpu->gmu,
- REG_A6XX_GMU_CX_GMU_POWER_COUNTER_XOCLK_0_L,
- REG_A6XX_GMU_CX_GMU_POWER_COUNTER_XOCLK_0_H);
-@@ -819,6 +824,8 @@ static unsigned long a6xx_gpu_busy(struct msm_gpu *gpu)
-
- gpu->devfreq.busy_cycles = busy_cycles;
-
-+ pm_runtime_put(a6xx_gpu->gmu.dev);
-+
- if (WARN_ON(busy_time > ~0LU))
- return ~0LU;
-
-diff --git a/drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c b/drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c
-index 47b989834af1..c23a2fa13fb9 100644
---- a/drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c
-+++ b/drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c
-@@ -943,7 +943,8 @@ static int mdp5_init(struct platform_device *pdev, struct drm_device *dev)
-
- return 0;
- fail:
-- mdp5_destroy(pdev);
-+ if (mdp5_kms)
-+ mdp5_destroy(pdev);
- return ret;
- }
-
-diff --git a/drivers/gpu/drm/msm/msm_rd.c b/drivers/gpu/drm/msm/msm_rd.c
-index 732f65df5c4f..fea30e7aa9e8 100644
---- a/drivers/gpu/drm/msm/msm_rd.c
-+++ b/drivers/gpu/drm/msm/msm_rd.c
-@@ -29,8 +29,6 @@
- * or shader programs (if not emitted inline in cmdstream).
- */
-
--#ifdef CONFIG_DEBUG_FS
--
- #include <linux/circ_buf.h>
- #include <linux/debugfs.h>
- #include <linux/kfifo.h>
-@@ -47,6 +45,8 @@ bool rd_full = false;
- MODULE_PARM_DESC(rd_full, "If true, $debugfs/.../rd will snapshot all buffer contents");
- module_param_named(rd_full, rd_full, bool, 0600);
-
-+#ifdef CONFIG_DEBUG_FS
-+
- enum rd_sect_type {
- RD_NONE,
- RD_TEST, /* ascii text */
-diff --git a/drivers/gpu/drm/nouveau/dispnv50/disp.c b/drivers/gpu/drm/nouveau/dispnv50/disp.c
-index 6be9df1820c5..2625ed84fc44 100644
---- a/drivers/gpu/drm/nouveau/dispnv50/disp.c
-+++ b/drivers/gpu/drm/nouveau/dispnv50/disp.c
-@@ -482,15 +482,16 @@ nv50_dac_create(struct drm_connector *connector, struct dcb_output *dcbe)
- * audio component binding for ELD notification
- */
- static void
--nv50_audio_component_eld_notify(struct drm_audio_component *acomp, int port)
-+nv50_audio_component_eld_notify(struct drm_audio_component *acomp, int port,
-+ int dev_id)
- {
- if (acomp && acomp->audio_ops && acomp->audio_ops->pin_eld_notify)
- acomp->audio_ops->pin_eld_notify(acomp->audio_ops->audio_ptr,
-- port, -1);
-+ port, dev_id);
- }
-
- static int
--nv50_audio_component_get_eld(struct device *kdev, int port, int pipe,
-+nv50_audio_component_get_eld(struct device *kdev, int port, int dev_id,
- bool *enabled, unsigned char *buf, int max_bytes)
- {
- struct drm_device *drm_dev = dev_get_drvdata(kdev);
-@@ -506,7 +507,8 @@ nv50_audio_component_get_eld(struct device *kdev, int port, int pipe,
- nv_encoder = nouveau_encoder(encoder);
- nv_connector = nouveau_encoder_connector_get(nv_encoder);
- nv_crtc = nouveau_crtc(encoder->crtc);
-- if (!nv_connector || !nv_crtc || nv_crtc->index != port)
-+ if (!nv_connector || !nv_crtc || nv_encoder->or != port ||
-+ nv_crtc->index != dev_id)
- continue;
- *enabled = drm_detect_monitor_audio(nv_connector->edid);
- if (*enabled) {
-@@ -600,7 +602,8 @@ nv50_audio_disable(struct drm_encoder *encoder, struct nouveau_crtc *nv_crtc)
-
- nvif_mthd(&disp->disp->object, 0, &args, sizeof(args));
-
-- nv50_audio_component_eld_notify(drm->audio.component, nv_crtc->index);
-+ nv50_audio_component_eld_notify(drm->audio.component, nv_encoder->or,
-+ nv_crtc->index);
- }
-
- static void
-@@ -634,7 +637,8 @@ nv50_audio_enable(struct drm_encoder *encoder, struct drm_display_mode *mode)
- nvif_mthd(&disp->disp->object, 0, &args,
- sizeof(args.base) + drm_eld_size(args.data));
-
-- nv50_audio_component_eld_notify(drm->audio.component, nv_crtc->index);
-+ nv50_audio_component_eld_notify(drm->audio.component, nv_encoder->or,
-+ nv_crtc->index);
- }
-
- /******************************************************************************
-diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/disp/hdmigm200.c b/drivers/gpu/drm/nouveau/nvkm/engine/disp/hdmigm200.c
-index 9b16a08eb4d9..bf6d41fb0c9f 100644
---- a/drivers/gpu/drm/nouveau/nvkm/engine/disp/hdmigm200.c
-+++ b/drivers/gpu/drm/nouveau/nvkm/engine/disp/hdmigm200.c
-@@ -27,10 +27,10 @@ void
- gm200_hdmi_scdc(struct nvkm_ior *ior, int head, u8 scdc)
- {
- struct nvkm_device *device = ior->disp->engine.subdev.device;
-- const u32 hoff = head * 0x800;
-+ const u32 soff = nv50_ior_base(ior);
- const u32 ctrl = scdc & 0x3;
-
-- nvkm_mask(device, 0x61c5bc + hoff, 0x00000003, ctrl);
-+ nvkm_mask(device, 0x61c5bc + soff, 0x00000003, ctrl);
-
- ior->tmds.high_speed = !!(scdc & 0x2);
- }
-diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/gr/gk20a.c b/drivers/gpu/drm/nouveau/nvkm/engine/gr/gk20a.c
-index 4209b24a46d7..bf6b65257852 100644
---- a/drivers/gpu/drm/nouveau/nvkm/engine/gr/gk20a.c
-+++ b/drivers/gpu/drm/nouveau/nvkm/engine/gr/gk20a.c
-@@ -341,7 +341,7 @@ gk20a_gr_load(struct gf100_gr *gr, int ver, const struct gf100_gr_fwif *fwif)
-
- static const struct gf100_gr_fwif
- gk20a_gr_fwif[] = {
-- { -1, gk20a_gr_load, &gk20a_gr },
-+ { 0, gk20a_gr_load, &gk20a_gr },
- {}
- };
-
-diff --git a/drivers/gpu/drm/qxl/qxl_kms.c b/drivers/gpu/drm/qxl/qxl_kms.c
-index 70b20ee4741a..41ef6a9ca8cc 100644
---- a/drivers/gpu/drm/qxl/qxl_kms.c
-+++ b/drivers/gpu/drm/qxl/qxl_kms.c
-@@ -218,7 +218,7 @@ int qxl_device_init(struct qxl_device *qdev,
- &(qdev->ram_header->cursor_ring_hdr),
- sizeof(struct qxl_command),
- QXL_CURSOR_RING_SIZE,
-- qdev->io_base + QXL_IO_NOTIFY_CMD,
-+ qdev->io_base + QXL_IO_NOTIFY_CURSOR,
- false,
- &qdev->cursor_event);
-
-diff --git a/drivers/gpu/drm/sun4i/sun4i_hdmi.h b/drivers/gpu/drm/sun4i/sun4i_hdmi.h
-index 7ad3f06c127e..00ca35f07ba5 100644
---- a/drivers/gpu/drm/sun4i/sun4i_hdmi.h
-+++ b/drivers/gpu/drm/sun4i/sun4i_hdmi.h
-@@ -148,7 +148,7 @@
- #define SUN4I_HDMI_DDC_CMD_IMPLICIT_WRITE 3
-
- #define SUN4I_HDMI_DDC_CLK_REG 0x528
--#define SUN4I_HDMI_DDC_CLK_M(m) (((m) & 0x7) << 3)
-+#define SUN4I_HDMI_DDC_CLK_M(m) (((m) & 0xf) << 3)
- #define SUN4I_HDMI_DDC_CLK_N(n) ((n) & 0x7)
-
- #define SUN4I_HDMI_DDC_LINE_CTRL_REG 0x540
-diff --git a/drivers/gpu/drm/sun4i/sun4i_hdmi_ddc_clk.c b/drivers/gpu/drm/sun4i/sun4i_hdmi_ddc_clk.c
-index 2ff780114106..12430b9d4e93 100644
---- a/drivers/gpu/drm/sun4i/sun4i_hdmi_ddc_clk.c
-+++ b/drivers/gpu/drm/sun4i/sun4i_hdmi_ddc_clk.c
-@@ -33,7 +33,7 @@ static unsigned long sun4i_ddc_calc_divider(unsigned long rate,
- unsigned long best_rate = 0;
- u8 best_m = 0, best_n = 0, _m, _n;
-
-- for (_m = 0; _m < 8; _m++) {
-+ for (_m = 0; _m < 16; _m++) {
- for (_n = 0; _n < 8; _n++) {
- unsigned long tmp_rate;
-
-diff --git a/drivers/hid/hid-ids.h b/drivers/hid/hid-ids.h
-index 1c71a1aa76b2..f03f1cc913ce 100644
---- a/drivers/hid/hid-ids.h
-+++ b/drivers/hid/hid-ids.h
-@@ -1157,6 +1157,9 @@
- #define USB_DEVICE_ID_TPV_OPTICAL_TOUCHSCREEN_8882 0x8882
- #define USB_DEVICE_ID_TPV_OPTICAL_TOUCHSCREEN_8883 0x8883
-
-+#define USB_VENDOR_ID_TRUST 0x145f
-+#define USB_DEVICE_ID_TRUST_PANORA_TABLET 0x0212
-+
- #define USB_VENDOR_ID_TURBOX 0x062a
- #define USB_DEVICE_ID_TURBOX_KEYBOARD 0x0201
- #define USB_DEVICE_ID_ASUS_MD_5110 0x5110
-diff --git a/drivers/hid/hid-quirks.c b/drivers/hid/hid-quirks.c
-index e4cb543de0cd..ca8b5c261c7c 100644
---- a/drivers/hid/hid-quirks.c
-+++ b/drivers/hid/hid-quirks.c
-@@ -168,6 +168,7 @@ static const struct hid_device_id hid_quirks[] = {
- { HID_USB_DEVICE(USB_VENDOR_ID_TOUCHPACK, USB_DEVICE_ID_TOUCHPACK_RTS), HID_QUIRK_MULTI_INPUT },
- { HID_USB_DEVICE(USB_VENDOR_ID_TPV, USB_DEVICE_ID_TPV_OPTICAL_TOUCHSCREEN_8882), HID_QUIRK_NOGET },
- { HID_USB_DEVICE(USB_VENDOR_ID_TPV, USB_DEVICE_ID_TPV_OPTICAL_TOUCHSCREEN_8883), HID_QUIRK_NOGET },
-+ { HID_USB_DEVICE(USB_VENDOR_ID_TRUST, USB_DEVICE_ID_TRUST_PANORA_TABLET), HID_QUIRK_MULTI_INPUT | HID_QUIRK_HIDINPUT_FORCE },
- { HID_USB_DEVICE(USB_VENDOR_ID_TURBOX, USB_DEVICE_ID_TURBOX_KEYBOARD), HID_QUIRK_NOGET },
- { HID_USB_DEVICE(USB_VENDOR_ID_UCLOGIC, USB_DEVICE_ID_UCLOGIC_TABLET_KNA5), HID_QUIRK_MULTI_INPUT },
- { HID_USB_DEVICE(USB_VENDOR_ID_UCLOGIC, USB_DEVICE_ID_UCLOGIC_TABLET_TWA60), HID_QUIRK_MULTI_INPUT },
-diff --git a/drivers/hid/intel-ish-hid/ishtp-fw-loader.c b/drivers/hid/intel-ish-hid/ishtp-fw-loader.c
-index aa2dbed30fc3..6cf59fd26ad7 100644
---- a/drivers/hid/intel-ish-hid/ishtp-fw-loader.c
-+++ b/drivers/hid/intel-ish-hid/ishtp-fw-loader.c
-@@ -480,6 +480,7 @@ static int ish_query_loader_prop(struct ishtp_cl_data *client_data,
- sizeof(ldr_xfer_query_resp));
- if (rv < 0) {
- client_data->flag_retry = true;
-+ *fw_info = (struct shim_fw_info){};
- return rv;
- }
-
-@@ -489,6 +490,7 @@ static int ish_query_loader_prop(struct ishtp_cl_data *client_data,
- "data size %d is not equal to size of loader_xfer_query_response %zu\n",
- rv, sizeof(struct loader_xfer_query_response));
- client_data->flag_retry = true;
-+ *fw_info = (struct shim_fw_info){};
- return -EMSGSIZE;
- }
-
-diff --git a/drivers/hwtracing/coresight/coresight-etm4x.c b/drivers/hwtracing/coresight/coresight-etm4x.c
-index a90d757f7043..a6d6c7a3abcb 100644
---- a/drivers/hwtracing/coresight/coresight-etm4x.c
-+++ b/drivers/hwtracing/coresight/coresight-etm4x.c
-@@ -1527,6 +1527,7 @@ static int etm4_probe(struct amba_device *adev, const struct amba_id *id)
- return 0;
-
- err_arch_supported:
-+ etmdrvdata[drvdata->cpu] = NULL;
- if (--etm4_count == 0) {
- etm4_cpu_pm_unregister();
-
-diff --git a/drivers/hwtracing/coresight/coresight-platform.c b/drivers/hwtracing/coresight/coresight-platform.c
-index 43418a2126ff..471f34e40c74 100644
---- a/drivers/hwtracing/coresight/coresight-platform.c
-+++ b/drivers/hwtracing/coresight/coresight-platform.c
-@@ -87,6 +87,7 @@ static void of_coresight_get_ports_legacy(const struct device_node *node,
- int *nr_inport, int *nr_outport)
- {
- struct device_node *ep = NULL;
-+ struct of_endpoint endpoint;
- int in = 0, out = 0;
-
- do {
-@@ -94,10 +95,16 @@ static void of_coresight_get_ports_legacy(const struct device_node *node,
- if (!ep)
- break;
-
-- if (of_coresight_legacy_ep_is_input(ep))
-- in++;
-- else
-- out++;
-+ if (of_graph_parse_endpoint(ep, &endpoint))
-+ continue;
-+
-+ if (of_coresight_legacy_ep_is_input(ep)) {
-+ in = (endpoint.port + 1 > in) ?
-+ endpoint.port + 1 : in;
-+ } else {
-+ out = (endpoint.port + 1) > out ?
-+ endpoint.port + 1 : out;
-+ }
-
- } while (ep);
-
-@@ -137,9 +144,16 @@ of_coresight_count_ports(struct device_node *port_parent)
- {
- int i = 0;
- struct device_node *ep = NULL;
-+ struct of_endpoint endpoint;
-+
-+ while ((ep = of_graph_get_next_endpoint(port_parent, ep))) {
-+ /* Defer error handling to parsing */
-+ if (of_graph_parse_endpoint(ep, &endpoint))
-+ continue;
-+ if (endpoint.port + 1 > i)
-+ i = endpoint.port + 1;
-+ }
-
-- while ((ep = of_graph_get_next_endpoint(port_parent, ep)))
-- i++;
- return i;
- }
-
-@@ -191,14 +205,12 @@ static int of_coresight_get_cpu(struct device *dev)
- * Parses the local port, remote device name and the remote port.
- *
- * Returns :
-- * 1 - If the parsing is successful and a connection record
-- * was created for an output connection.
- * 0 - If the parsing completed without any fatal errors.
- * -Errno - Fatal error, abort the scanning.
- */
- static int of_coresight_parse_endpoint(struct device *dev,
- struct device_node *ep,
-- struct coresight_connection *conn)
-+ struct coresight_platform_data *pdata)
- {
- int ret = 0;
- struct of_endpoint endpoint, rendpoint;
-@@ -206,6 +218,7 @@ static int of_coresight_parse_endpoint(struct device *dev,
- struct device_node *rep = NULL;
- struct device *rdev = NULL;
- struct fwnode_handle *rdev_fwnode;
-+ struct coresight_connection *conn;
-
- do {
- /* Parse the local port details */
-@@ -232,6 +245,13 @@ static int of_coresight_parse_endpoint(struct device *dev,
- break;
- }
-
-+ conn = &pdata->conns[endpoint.port];
-+ if (conn->child_fwnode) {
-+ dev_warn(dev, "Duplicate output port %d\n",
-+ endpoint.port);
-+ ret = -EINVAL;
-+ break;
-+ }
- conn->outport = endpoint.port;
- /*
- * Hold the refcount to the target device. This could be
-@@ -244,7 +264,6 @@ static int of_coresight_parse_endpoint(struct device *dev,
- conn->child_fwnode = fwnode_handle_get(rdev_fwnode);
- conn->child_port = rendpoint.port;
- /* Connection record updated */
-- ret = 1;
- } while (0);
-
- of_node_put(rparent);
-@@ -258,7 +277,6 @@ static int of_get_coresight_platform_data(struct device *dev,
- struct coresight_platform_data *pdata)
- {
- int ret = 0;
-- struct coresight_connection *conn;
- struct device_node *ep = NULL;
- const struct device_node *parent = NULL;
- bool legacy_binding = false;
-@@ -287,8 +305,6 @@ static int of_get_coresight_platform_data(struct device *dev,
- dev_warn_once(dev, "Uses obsolete Coresight DT bindings\n");
- }
-
-- conn = pdata->conns;
--
- /* Iterate through each output port to discover topology */
- while ((ep = of_graph_get_next_endpoint(parent, ep))) {
- /*
-@@ -300,15 +316,9 @@ static int of_get_coresight_platform_data(struct device *dev,
- if (legacy_binding && of_coresight_legacy_ep_is_input(ep))
- continue;
-
-- ret = of_coresight_parse_endpoint(dev, ep, conn);
-- switch (ret) {
-- case 1:
-- conn++; /* Fall through */
-- case 0:
-- break;
-- default:
-+ ret = of_coresight_parse_endpoint(dev, ep, pdata);
-+ if (ret)
- return ret;
-- }
- }
-
- return 0;
-@@ -647,6 +657,16 @@ static int acpi_coresight_parse_link(struct acpi_device *adev,
- * coresight_remove_match().
- */
- conn->child_fwnode = fwnode_handle_get(&r_adev->fwnode);
-+ } else if (dir == ACPI_CORESIGHT_LINK_SLAVE) {
-+ /*
-+ * We are only interested in the port number
-+ * for the input ports at this component.
-+ * Store the port number in child_port.
-+ */
-+ conn->child_port = fields[0].integer.value;
-+ } else {
-+ /* Invalid direction */
-+ return -EINVAL;
- }
-
- return dir;
-@@ -692,10 +712,20 @@ static int acpi_coresight_parse_graph(struct acpi_device *adev,
- return dir;
-
- if (dir == ACPI_CORESIGHT_LINK_MASTER) {
-- pdata->nr_outport++;
-+ if (ptr->outport > pdata->nr_outport)
-+ pdata->nr_outport = ptr->outport;
- ptr++;
- } else {
-- pdata->nr_inport++;
-+ WARN_ON(pdata->nr_inport == ptr->child_port);
-+ /*
-+ * We do not track input port connections for a device.
-+ * However we need the highest port number described,
-+ * which can be recorded now and reuse this connection
-+ * record for an output connection. Hence, do not move
-+ * the ptr for input connections
-+ */
-+ if (ptr->child_port > pdata->nr_inport)
-+ pdata->nr_inport = ptr->child_port;
- }
- }
-
-@@ -704,8 +734,13 @@ static int acpi_coresight_parse_graph(struct acpi_device *adev,
- return rc;
-
- /* Copy the connection information to the final location */
-- for (i = 0; i < pdata->nr_outport; i++)
-- pdata->conns[i] = conns[i];
-+ for (i = 0; conns + i < ptr; i++) {
-+ int port = conns[i].outport;
-+
-+ /* Duplicate output port */
-+ WARN_ON(pdata->conns[port].child_fwnode);
-+ pdata->conns[port] = conns[i];
-+ }
-
- devm_kfree(&adev->dev, conns);
- return 0;
-diff --git a/drivers/hwtracing/coresight/coresight-tmc-etf.c b/drivers/hwtracing/coresight/coresight-tmc-etf.c
-index d0cc3985b72a..36cce2bfb744 100644
---- a/drivers/hwtracing/coresight/coresight-tmc-etf.c
-+++ b/drivers/hwtracing/coresight/coresight-tmc-etf.c
-@@ -596,13 +596,6 @@ int tmc_read_prepare_etb(struct tmc_drvdata *drvdata)
- goto out;
- }
-
-- /* There is no point in reading a TMC in HW FIFO mode */
-- mode = readl_relaxed(drvdata->base + TMC_MODE);
-- if (mode != TMC_MODE_CIRCULAR_BUFFER) {
-- ret = -EINVAL;
-- goto out;
-- }
--
- /* Don't interfere if operated from Perf */
- if (drvdata->mode == CS_MODE_PERF) {
- ret = -EINVAL;
-@@ -616,8 +609,15 @@ int tmc_read_prepare_etb(struct tmc_drvdata *drvdata)
- }
-
- /* Disable the TMC if need be */
-- if (drvdata->mode == CS_MODE_SYSFS)
-+ if (drvdata->mode == CS_MODE_SYSFS) {
-+ /* There is no point in reading a TMC in HW FIFO mode */
-+ mode = readl_relaxed(drvdata->base + TMC_MODE);
-+ if (mode != TMC_MODE_CIRCULAR_BUFFER) {
-+ ret = -EINVAL;
-+ goto out;
-+ }
- __tmc_etb_disable_hw(drvdata);
-+ }
-
- drvdata->reading = true;
- out:
-diff --git a/drivers/hwtracing/coresight/coresight.c b/drivers/hwtracing/coresight/coresight.c
-index c71553c09f8e..8f5e62f02444 100644
---- a/drivers/hwtracing/coresight/coresight.c
-+++ b/drivers/hwtracing/coresight/coresight.c
-@@ -1053,6 +1053,9 @@ static int coresight_orphan_match(struct device *dev, void *data)
- for (i = 0; i < i_csdev->pdata->nr_outport; i++) {
- conn = &i_csdev->pdata->conns[i];
-
-+ /* Skip the port if FW doesn't describe it */
-+ if (!conn->child_fwnode)
-+ continue;
- /* We have found at least one orphan connection */
- if (conn->child_dev == NULL) {
- /* Does it match this newly added device? */
-@@ -1091,6 +1094,8 @@ static void coresight_fixup_device_conns(struct coresight_device *csdev)
- for (i = 0; i < csdev->pdata->nr_outport; i++) {
- struct coresight_connection *conn = &csdev->pdata->conns[i];
-
-+ if (!conn->child_fwnode)
-+ continue;
- conn->child_dev =
- coresight_find_csdev_by_fwnode(conn->child_fwnode);
- if (!conn->child_dev)
-@@ -1118,7 +1123,7 @@ static int coresight_remove_match(struct device *dev, void *data)
- for (i = 0; i < iterator->pdata->nr_outport; i++) {
- conn = &iterator->pdata->conns[i];
-
-- if (conn->child_dev == NULL)
-+ if (conn->child_dev == NULL || conn->child_fwnode == NULL)
- continue;
-
- if (csdev->dev.fwnode == conn->child_fwnode) {
-diff --git a/drivers/i2c/busses/i2c-icy.c b/drivers/i2c/busses/i2c-icy.c
-index 271470f4d8a9..66c9923fc766 100644
---- a/drivers/i2c/busses/i2c-icy.c
-+++ b/drivers/i2c/busses/i2c-icy.c
-@@ -43,6 +43,7 @@
- #include <linux/i2c.h>
- #include <linux/i2c-algo-pcf.h>
-
-+#include <asm/amigahw.h>
- #include <asm/amigaints.h>
- #include <linux/zorro.h>
-
-diff --git a/drivers/i2c/busses/i2c-piix4.c b/drivers/i2c/busses/i2c-piix4.c
-index 30ded6422e7b..69740a4ff1db 100644
---- a/drivers/i2c/busses/i2c-piix4.c
-+++ b/drivers/i2c/busses/i2c-piix4.c
-@@ -977,7 +977,8 @@ static int piix4_probe(struct pci_dev *dev, const struct pci_device_id *id)
- }
-
- if (dev->vendor == PCI_VENDOR_ID_AMD &&
-- dev->device == PCI_DEVICE_ID_AMD_HUDSON2_SMBUS) {
-+ (dev->device == PCI_DEVICE_ID_AMD_HUDSON2_SMBUS ||
-+ dev->device == PCI_DEVICE_ID_AMD_KERNCZ_SMBUS)) {
- retval = piix4_setup_sb800(dev, id, 1);
- }
-
-diff --git a/drivers/i2c/busses/i2c-pxa.c b/drivers/i2c/busses/i2c-pxa.c
-index 466e4f681d7a..f537a37ac1d5 100644
---- a/drivers/i2c/busses/i2c-pxa.c
-+++ b/drivers/i2c/busses/i2c-pxa.c
-@@ -311,11 +311,10 @@ static void i2c_pxa_scream_blue_murder(struct pxa_i2c *i2c, const char *why)
- dev_err(dev, "IBMR: %08x IDBR: %08x ICR: %08x ISR: %08x\n",
- readl(_IBMR(i2c)), readl(_IDBR(i2c)), readl(_ICR(i2c)),
- readl(_ISR(i2c)));
-- dev_dbg(dev, "log: ");
-+ dev_err(dev, "log:");
- for (i = 0; i < i2c->irqlogidx; i++)
-- pr_debug("[%08x:%08x] ", i2c->isrlog[i], i2c->icrlog[i]);
--
-- pr_debug("\n");
-+ pr_cont(" [%03x:%05x]", i2c->isrlog[i], i2c->icrlog[i]);
-+ pr_cont("\n");
- }
-
- #else /* ifdef DEBUG */
-@@ -747,11 +746,9 @@ static inline void i2c_pxa_stop_message(struct pxa_i2c *i2c)
- {
- u32 icr;
-
-- /*
-- * Clear the STOP and ACK flags
-- */
-+ /* Clear the START, STOP, ACK, TB and MA flags */
- icr = readl(_ICR(i2c));
-- icr &= ~(ICR_STOP | ICR_ACKNAK);
-+ icr &= ~(ICR_START | ICR_STOP | ICR_ACKNAK | ICR_TB | ICR_MA);
- writel(icr, _ICR(i2c));
- }
-
-diff --git a/drivers/iio/buffer/industrialio-buffer-dmaengine.c b/drivers/iio/buffer/industrialio-buffer-dmaengine.c
-index b129693af0fd..94da3b1ca3a2 100644
---- a/drivers/iio/buffer/industrialio-buffer-dmaengine.c
-+++ b/drivers/iio/buffer/industrialio-buffer-dmaengine.c
-@@ -134,7 +134,7 @@ static ssize_t iio_dmaengine_buffer_get_length_align(struct device *dev,
- struct dmaengine_buffer *dmaengine_buffer =
- iio_buffer_to_dmaengine_buffer(indio_dev->buffer);
-
-- return sprintf(buf, "%u\n", dmaengine_buffer->align);
-+ return sprintf(buf, "%zu\n", dmaengine_buffer->align);
- }
-
- static IIO_DEVICE_ATTR(length_align_bytes, 0444,
-diff --git a/drivers/iio/light/gp2ap002.c b/drivers/iio/light/gp2ap002.c
-index b7ef16b28280..7a2679bdc987 100644
---- a/drivers/iio/light/gp2ap002.c
-+++ b/drivers/iio/light/gp2ap002.c
-@@ -158,6 +158,9 @@ static irqreturn_t gp2ap002_prox_irq(int irq, void *d)
- int val;
- int ret;
-
-+ if (!gp2ap002->enabled)
-+ goto err_retrig;
-+
- ret = regmap_read(gp2ap002->map, GP2AP002_PROX, &val);
- if (ret) {
- dev_err(gp2ap002->dev, "error reading proximity\n");
-@@ -247,6 +250,8 @@ static int gp2ap002_read_raw(struct iio_dev *indio_dev,
- struct gp2ap002 *gp2ap002 = iio_priv(indio_dev);
- int ret;
-
-+ pm_runtime_get_sync(gp2ap002->dev);
-+
- switch (mask) {
- case IIO_CHAN_INFO_RAW:
- switch (chan->type) {
-@@ -255,13 +260,21 @@ static int gp2ap002_read_raw(struct iio_dev *indio_dev,
- if (ret < 0)
- return ret;
- *val = ret;
-- return IIO_VAL_INT;
-+ ret = IIO_VAL_INT;
-+ goto out;
- default:
-- return -EINVAL;
-+ ret = -EINVAL;
-+ goto out;
- }
- default:
-- return -EINVAL;
-+ ret = -EINVAL;
- }
-+
-+out:
-+ pm_runtime_mark_last_busy(gp2ap002->dev);
-+ pm_runtime_put_autosuspend(gp2ap002->dev);
-+
-+ return ret;
- }
-
- static int gp2ap002_init(struct gp2ap002 *gp2ap002)
-diff --git a/drivers/iio/pressure/bmp280-core.c b/drivers/iio/pressure/bmp280-core.c
-index 29c209cc1108..973264a088f9 100644
---- a/drivers/iio/pressure/bmp280-core.c
-+++ b/drivers/iio/pressure/bmp280-core.c
-@@ -271,6 +271,8 @@ static u32 bmp280_compensate_humidity(struct bmp280_data *data,
- + (s32)2097152) * calib->H2 + 8192) >> 14);
- var -= ((((var >> 15) * (var >> 15)) >> 7) * (s32)calib->H1) >> 4;
-
-+ var = clamp_val(var, 0, 419430400);
-+
- return var >> 12;
- };
-
-@@ -713,7 +715,7 @@ static int bmp180_measure(struct bmp280_data *data, u8 ctrl_meas)
- unsigned int ctrl;
-
- if (data->use_eoc)
-- init_completion(&data->done);
-+ reinit_completion(&data->done);
-
- ret = regmap_write(data->regmap, BMP280_REG_CTRL_MEAS, ctrl_meas);
- if (ret)
-@@ -969,6 +971,9 @@ static int bmp085_fetch_eoc_irq(struct device *dev,
- "trying to enforce it\n");
- irq_trig = IRQF_TRIGGER_RISING;
- }
-+
-+ init_completion(&data->done);
-+
- ret = devm_request_threaded_irq(dev,
- irq,
- bmp085_eoc_irq,
-diff --git a/drivers/infiniband/core/cm.c b/drivers/infiniband/core/cm.c
-index 17f14e0eafe4..1c2bf18cda9f 100644
---- a/drivers/infiniband/core/cm.c
-+++ b/drivers/infiniband/core/cm.c
-@@ -1076,7 +1076,9 @@ retest:
- case IB_CM_REP_SENT:
- case IB_CM_MRA_REP_RCVD:
- ib_cancel_mad(cm_id_priv->av.port->mad_agent, cm_id_priv->msg);
-- /* Fall through */
-+ cm_send_rej_locked(cm_id_priv, IB_CM_REJ_CONSUMER_DEFINED, NULL,
-+ 0, NULL, 0);
-+ goto retest;
- case IB_CM_MRA_REQ_SENT:
- case IB_CM_REP_RCVD:
- case IB_CM_MRA_REP_SENT:
-diff --git a/drivers/infiniband/core/cma_configfs.c b/drivers/infiniband/core/cma_configfs.c
-index c672a4978bfd..3c1e2ca564fe 100644
---- a/drivers/infiniband/core/cma_configfs.c
-+++ b/drivers/infiniband/core/cma_configfs.c
-@@ -322,8 +322,21 @@ fail:
- return ERR_PTR(err);
- }
-
-+static void drop_cma_dev(struct config_group *cgroup, struct config_item *item)
-+{
-+ struct config_group *group =
-+ container_of(item, struct config_group, cg_item);
-+ struct cma_dev_group *cma_dev_group =
-+ container_of(group, struct cma_dev_group, device_group);
-+
-+ configfs_remove_default_groups(&cma_dev_group->ports_group);
-+ configfs_remove_default_groups(&cma_dev_group->device_group);
-+ config_item_put(item);
-+}
-+
- static struct configfs_group_operations cma_subsys_group_ops = {
- .make_group = make_cma_dev,
-+ .drop_item = drop_cma_dev,
- };
-
- static const struct config_item_type cma_subsys_type = {
-diff --git a/drivers/infiniband/core/sysfs.c b/drivers/infiniband/core/sysfs.c
-index 087682e6969e..defe9cd4c5ee 100644
---- a/drivers/infiniband/core/sysfs.c
-+++ b/drivers/infiniband/core/sysfs.c
-@@ -1058,8 +1058,7 @@ static int add_port(struct ib_core_device *coredev, int port_num)
- coredev->ports_kobj,
- "%d", port_num);
- if (ret) {
-- kfree(p);
-- return ret;
-+ goto err_put;
- }
-
- p->gid_attr_group = kzalloc(sizeof(*p->gid_attr_group), GFP_KERNEL);
-@@ -1072,8 +1071,7 @@ static int add_port(struct ib_core_device *coredev, int port_num)
- ret = kobject_init_and_add(&p->gid_attr_group->kobj, &gid_attr_type,
- &p->kobj, "gid_attrs");
- if (ret) {
-- kfree(p->gid_attr_group);
-- goto err_put;
-+ goto err_put_gid_attrs;
- }
-
- if (device->ops.process_mad && is_full_dev) {
-@@ -1404,8 +1402,10 @@ int ib_port_register_module_stat(struct ib_device *device, u8 port_num,
-
- ret = kobject_init_and_add(kobj, ktype, &port->kobj, "%s",
- name);
-- if (ret)
-+ if (ret) {
-+ kobject_put(kobj);
- return ret;
-+ }
- }
-
- return 0;
-diff --git a/drivers/infiniband/core/uverbs_cmd.c b/drivers/infiniband/core/uverbs_cmd.c
-index 060b4ebbd2ba..d6e9cc94dd90 100644
---- a/drivers/infiniband/core/uverbs_cmd.c
-+++ b/drivers/infiniband/core/uverbs_cmd.c
-@@ -2959,6 +2959,7 @@ static int ib_uverbs_ex_create_wq(struct uverbs_attr_bundle *attrs)
- wq_init_attr.event_handler = ib_uverbs_wq_event_handler;
- wq_init_attr.create_flags = cmd.create_flags;
- INIT_LIST_HEAD(&obj->uevent.event_list);
-+ obj->uevent.uobject.user_handle = cmd.user_handle;
-
- wq = pd->device->ops.create_wq(pd, &wq_init_attr, &attrs->driver_udata);
- if (IS_ERR(wq)) {
-@@ -2976,8 +2977,6 @@ static int ib_uverbs_ex_create_wq(struct uverbs_attr_bundle *attrs)
- atomic_set(&wq->usecnt, 0);
- atomic_inc(&pd->usecnt);
- atomic_inc(&cq->usecnt);
-- wq->uobject = obj;
-- obj->uevent.uobject.object = wq;
-
- memset(&resp, 0, sizeof(resp));
- resp.wq_handle = obj->uevent.uobject.id;
-diff --git a/drivers/infiniband/hw/cxgb4/device.c b/drivers/infiniband/hw/cxgb4/device.c
-index 599340c1f0b8..541dbcf22d0e 100644
---- a/drivers/infiniband/hw/cxgb4/device.c
-+++ b/drivers/infiniband/hw/cxgb4/device.c
-@@ -953,6 +953,7 @@ void c4iw_dealloc(struct uld_ctx *ctx)
- static void c4iw_remove(struct uld_ctx *ctx)
- {
- pr_debug("c4iw_dev %p\n", ctx->dev);
-+ debugfs_remove_recursive(ctx->dev->debugfs_root);
- c4iw_unregister_device(ctx->dev);
- c4iw_dealloc(ctx);
- }
-diff --git a/drivers/infiniband/hw/efa/efa_com_cmd.c b/drivers/infiniband/hw/efa/efa_com_cmd.c
-index eea5574a62e8..69f842c92ff6 100644
---- a/drivers/infiniband/hw/efa/efa_com_cmd.c
-+++ b/drivers/infiniband/hw/efa/efa_com_cmd.c
-@@ -388,7 +388,7 @@ static int efa_com_get_feature_ex(struct efa_com_dev *edev,
-
- if (control_buff_size)
- EFA_SET(&get_cmd.aq_common_descriptor.flags,
-- EFA_ADMIN_AQ_COMMON_DESC_CTRL_DATA_INDIRECT, 1);
-+ EFA_ADMIN_AQ_COMMON_DESC_CTRL_DATA, 1);
-
- efa_com_set_dma_addr(control_buf_dma_addr,
- &get_cmd.control_buffer.address.mem_addr_high,
-@@ -540,7 +540,7 @@ static int efa_com_set_feature_ex(struct efa_com_dev *edev,
- if (control_buff_size) {
- set_cmd->aq_common_descriptor.flags = 0;
- EFA_SET(&set_cmd->aq_common_descriptor.flags,
-- EFA_ADMIN_AQ_COMMON_DESC_CTRL_DATA_INDIRECT, 1);
-+ EFA_ADMIN_AQ_COMMON_DESC_CTRL_DATA, 1);
- efa_com_set_dma_addr(control_buf_dma_addr,
- &set_cmd->control_buffer.address.mem_addr_high,
- &set_cmd->control_buffer.address.mem_addr_low);
-diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
-index c3316672b70e..f9fa80ae5560 100644
---- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
-+++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
-@@ -1349,34 +1349,26 @@ static int hns_roce_query_pf_resource(struct hns_roce_dev *hr_dev)
- static int hns_roce_query_pf_timer_resource(struct hns_roce_dev *hr_dev)
- {
- struct hns_roce_pf_timer_res_a *req_a;
-- struct hns_roce_cmq_desc desc[2];
-- int ret, i;
-+ struct hns_roce_cmq_desc desc;
-+ int ret;
-
-- for (i = 0; i < 2; i++) {
-- hns_roce_cmq_setup_basic_desc(&desc[i],
-- HNS_ROCE_OPC_QUERY_PF_TIMER_RES,
-- true);
-+ hns_roce_cmq_setup_basic_desc(&desc, HNS_ROCE_OPC_QUERY_PF_TIMER_RES,
-+ true);
-
-- if (i == 0)
-- desc[i].flag |= cpu_to_le16(HNS_ROCE_CMD_FLAG_NEXT);
-- else
-- desc[i].flag &= ~cpu_to_le16(HNS_ROCE_CMD_FLAG_NEXT);
-- }
--
-- ret = hns_roce_cmq_send(hr_dev, desc, 2);
-+ ret = hns_roce_cmq_send(hr_dev, &desc, 1);
- if (ret)
- return ret;
-
-- req_a = (struct hns_roce_pf_timer_res_a *)desc[0].data;
-+ req_a = (struct hns_roce_pf_timer_res_a *)desc.data;
-
- hr_dev->caps.qpc_timer_bt_num =
-- roce_get_field(req_a->qpc_timer_bt_idx_num,
-- PF_RES_DATA_1_PF_QPC_TIMER_BT_NUM_M,
-- PF_RES_DATA_1_PF_QPC_TIMER_BT_NUM_S);
-+ roce_get_field(req_a->qpc_timer_bt_idx_num,
-+ PF_RES_DATA_1_PF_QPC_TIMER_BT_NUM_M,
-+ PF_RES_DATA_1_PF_QPC_TIMER_BT_NUM_S);
- hr_dev->caps.cqc_timer_bt_num =
-- roce_get_field(req_a->cqc_timer_bt_idx_num,
-- PF_RES_DATA_2_PF_CQC_TIMER_BT_NUM_M,
-- PF_RES_DATA_2_PF_CQC_TIMER_BT_NUM_S);
-+ roce_get_field(req_a->cqc_timer_bt_idx_num,
-+ PF_RES_DATA_2_PF_CQC_TIMER_BT_NUM_M,
-+ PF_RES_DATA_2_PF_CQC_TIMER_BT_NUM_S);
-
- return 0;
- }
-@@ -4639,7 +4631,7 @@ static int hns_roce_v2_query_qp(struct ib_qp *ibqp, struct ib_qp_attr *qp_attr,
- qp_attr->path_mig_state = IB_MIG_ARMED;
- qp_attr->ah_attr.type = RDMA_AH_ATTR_TYPE_ROCE;
- if (hr_qp->ibqp.qp_type == IB_QPT_UD)
-- qp_attr->qkey = V2_QKEY_VAL;
-+ qp_attr->qkey = le32_to_cpu(context.qkey_xrcd);
-
- qp_attr->rq_psn = roce_get_field(context.byte_108_rx_reqepsn,
- V2_QPC_BYTE_108_RX_REQ_EPSN_M,
-diff --git a/drivers/infiniband/hw/mlx5/devx.c b/drivers/infiniband/hw/mlx5/devx.c
-index 46e1ab771f10..ed10e2f32aab 100644
---- a/drivers/infiniband/hw/mlx5/devx.c
-+++ b/drivers/infiniband/hw/mlx5/devx.c
-@@ -494,6 +494,10 @@ static u64 devx_get_obj_id(const void *in)
- obj_id = get_enc_obj_id(MLX5_CMD_OP_CREATE_QP,
- MLX5_GET(rst2init_qp_in, in, qpn));
- break;
-+ case MLX5_CMD_OP_INIT2INIT_QP:
-+ obj_id = get_enc_obj_id(MLX5_CMD_OP_CREATE_QP,
-+ MLX5_GET(init2init_qp_in, in, qpn));
-+ break;
- case MLX5_CMD_OP_INIT2RTR_QP:
- obj_id = get_enc_obj_id(MLX5_CMD_OP_CREATE_QP,
- MLX5_GET(init2rtr_qp_in, in, qpn));
-@@ -819,6 +823,7 @@ static bool devx_is_obj_modify_cmd(const void *in)
- case MLX5_CMD_OP_SET_L2_TABLE_ENTRY:
- case MLX5_CMD_OP_RST2INIT_QP:
- case MLX5_CMD_OP_INIT2RTR_QP:
-+ case MLX5_CMD_OP_INIT2INIT_QP:
- case MLX5_CMD_OP_RTR2RTS_QP:
- case MLX5_CMD_OP_RTS2RTS_QP:
- case MLX5_CMD_OP_SQERR2RTS_QP:
-diff --git a/drivers/infiniband/hw/mlx5/srq.c b/drivers/infiniband/hw/mlx5/srq.c
-index b1a8a9175040..6d1ff13d2283 100644
---- a/drivers/infiniband/hw/mlx5/srq.c
-+++ b/drivers/infiniband/hw/mlx5/srq.c
-@@ -310,12 +310,18 @@ int mlx5_ib_create_srq(struct ib_srq *ib_srq,
- srq->msrq.event = mlx5_ib_srq_event;
- srq->ibsrq.ext.xrc.srq_num = srq->msrq.srqn;
-
-- if (udata)
-- if (ib_copy_to_udata(udata, &srq->msrq.srqn, sizeof(__u32))) {
-+ if (udata) {
-+ struct mlx5_ib_create_srq_resp resp = {
-+ .srqn = srq->msrq.srqn,
-+ };
-+
-+ if (ib_copy_to_udata(udata, &resp, min(udata->outlen,
-+ sizeof(resp)))) {
- mlx5_ib_dbg(dev, "copy to user failed\n");
- err = -EFAULT;
- goto err_core;
- }
-+ }
-
- init_attr->attr.max_wr = srq->msrq.max - 1;
-
-diff --git a/drivers/infiniband/ulp/srpt/ib_srpt.c b/drivers/infiniband/ulp/srpt/ib_srpt.c
-index 98552749d71c..fcf982c60db6 100644
---- a/drivers/infiniband/ulp/srpt/ib_srpt.c
-+++ b/drivers/infiniband/ulp/srpt/ib_srpt.c
-@@ -610,6 +610,11 @@ static int srpt_refresh_port(struct srpt_port *sport)
- dev_name(&sport->sdev->device->dev), sport->port,
- PTR_ERR(sport->mad_agent));
- sport->mad_agent = NULL;
-+ memset(&port_modify, 0, sizeof(port_modify));
-+ port_modify.clr_port_cap_mask = IB_PORT_DEVICE_MGMT_SUP;
-+ ib_modify_port(sport->sdev->device, sport->port, 0,
-+ &port_modify);
-+
- }
- }
-
-@@ -633,9 +638,8 @@ static void srpt_unregister_mad_agent(struct srpt_device *sdev)
- for (i = 1; i <= sdev->device->phys_port_cnt; i++) {
- sport = &sdev->port[i - 1];
- WARN_ON(sport->port != i);
-- if (ib_modify_port(sdev->device, i, 0, &port_modify) < 0)
-- pr_err("disabling MAD processing failed.\n");
- if (sport->mad_agent) {
-+ ib_modify_port(sdev->device, i, 0, &port_modify);
- ib_unregister_mad_agent(sport->mad_agent);
- sport->mad_agent = NULL;
- }
-diff --git a/drivers/input/serio/i8042-ppcio.h b/drivers/input/serio/i8042-ppcio.h
-deleted file mode 100644
-index 391f94d9e47d..000000000000
---- a/drivers/input/serio/i8042-ppcio.h
-+++ /dev/null
-@@ -1,57 +0,0 @@
--/* SPDX-License-Identifier: GPL-2.0-only */
--#ifndef _I8042_PPCIO_H
--#define _I8042_PPCIO_H
--
--
--#if defined(CONFIG_WALNUT)
--
--#define I8042_KBD_IRQ 25
--#define I8042_AUX_IRQ 26
--
--#define I8042_KBD_PHYS_DESC "walnutps2/serio0"
--#define I8042_AUX_PHYS_DESC "walnutps2/serio1"
--#define I8042_MUX_PHYS_DESC "walnutps2/serio%d"
--
--extern void *kb_cs;
--extern void *kb_data;
--
--#define I8042_COMMAND_REG (*(int *)kb_cs)
--#define I8042_DATA_REG (*(int *)kb_data)
--
--static inline int i8042_read_data(void)
--{
-- return readb(kb_data);
--}
--
--static inline int i8042_read_status(void)
--{
-- return readb(kb_cs);
--}
--
--static inline void i8042_write_data(int val)
--{
-- writeb(val, kb_data);
--}
--
--static inline void i8042_write_command(int val)
--{
-- writeb(val, kb_cs);
--}
--
--static inline int i8042_platform_init(void)
--{
-- i8042_reset = I8042_RESET_ALWAYS;
-- return 0;
--}
--
--static inline void i8042_platform_exit(void)
--{
--}
--
--#else
--
--#include "i8042-io.h"
--
--#endif
--
--#endif /* _I8042_PPCIO_H */
-diff --git a/drivers/input/serio/i8042.h b/drivers/input/serio/i8042.h
-index 38dc27ad3c18..eb376700dfff 100644
---- a/drivers/input/serio/i8042.h
-+++ b/drivers/input/serio/i8042.h
-@@ -17,8 +17,6 @@
- #include "i8042-ip22io.h"
- #elif defined(CONFIG_SNI_RM)
- #include "i8042-snirm.h"
--#elif defined(CONFIG_PPC)
--#include "i8042-ppcio.h"
- #elif defined(CONFIG_SPARC)
- #include "i8042-sparcio.h"
- #elif defined(CONFIG_X86) || defined(CONFIG_IA64)
-diff --git a/drivers/input/touchscreen/edt-ft5x06.c b/drivers/input/touchscreen/edt-ft5x06.c
-index d2587724c52a..9b8450794a8a 100644
---- a/drivers/input/touchscreen/edt-ft5x06.c
-+++ b/drivers/input/touchscreen/edt-ft5x06.c
-@@ -938,19 +938,25 @@ static void edt_ft5x06_ts_get_defaults(struct device *dev,
-
- error = device_property_read_u32(dev, "offset", &val);
- if (!error) {
-- edt_ft5x06_register_write(tsdata, reg_addr->reg_offset, val);
-+ if (reg_addr->reg_offset != NO_REGISTER)
-+ edt_ft5x06_register_write(tsdata,
-+ reg_addr->reg_offset, val);
- tsdata->offset = val;
- }
-
- error = device_property_read_u32(dev, "offset-x", &val);
- if (!error) {
-- edt_ft5x06_register_write(tsdata, reg_addr->reg_offset_x, val);
-+ if (reg_addr->reg_offset_x != NO_REGISTER)
-+ edt_ft5x06_register_write(tsdata,
-+ reg_addr->reg_offset_x, val);
- tsdata->offset_x = val;
- }
-
- error = device_property_read_u32(dev, "offset-y", &val);
- if (!error) {
-- edt_ft5x06_register_write(tsdata, reg_addr->reg_offset_y, val);
-+ if (reg_addr->reg_offset_y != NO_REGISTER)
-+ edt_ft5x06_register_write(tsdata,
-+ reg_addr->reg_offset_y, val);
- tsdata->offset_y = val;
- }
- }
-diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c
-index 82508730feb7..af21d24a09e8 100644
---- a/drivers/iommu/arm-smmu-v3.c
-+++ b/drivers/iommu/arm-smmu-v3.c
-@@ -171,6 +171,8 @@
- #define ARM_SMMU_PRIQ_IRQ_CFG1 0xd8
- #define ARM_SMMU_PRIQ_IRQ_CFG2 0xdc
-
-+#define ARM_SMMU_REG_SZ 0xe00
-+
- /* Common MSI config fields */
- #define MSI_CFG0_ADDR_MASK GENMASK_ULL(51, 2)
- #define MSI_CFG2_SH GENMASK(5, 4)
-@@ -628,6 +630,7 @@ struct arm_smmu_strtab_cfg {
- struct arm_smmu_device {
- struct device *dev;
- void __iomem *base;
-+ void __iomem *page1;
-
- #define ARM_SMMU_FEAT_2_LVL_STRTAB (1 << 0)
- #define ARM_SMMU_FEAT_2_LVL_CDTAB (1 << 1)
-@@ -733,9 +736,8 @@ static struct arm_smmu_option_prop arm_smmu_options[] = {
- static inline void __iomem *arm_smmu_page1_fixup(unsigned long offset,
- struct arm_smmu_device *smmu)
- {
-- if ((offset > SZ_64K) &&
-- (smmu->options & ARM_SMMU_OPT_PAGE0_REGS_ONLY))
-- offset -= SZ_64K;
-+ if (offset > SZ_64K)
-+ return smmu->page1 + offset - SZ_64K;
-
- return smmu->base + offset;
- }
-@@ -4021,6 +4023,18 @@ err_reset_pci_ops: __maybe_unused;
- return err;
- }
-
-+static void __iomem *arm_smmu_ioremap(struct device *dev, resource_size_t start,
-+ resource_size_t size)
-+{
-+ struct resource res = {
-+ .flags = IORESOURCE_MEM,
-+ .start = start,
-+ .end = start + size - 1,
-+ };
-+
-+ return devm_ioremap_resource(dev, &res);
-+}
-+
- static int arm_smmu_device_probe(struct platform_device *pdev)
- {
- int irq, ret;
-@@ -4056,10 +4070,23 @@ static int arm_smmu_device_probe(struct platform_device *pdev)
- }
- ioaddr = res->start;
-
-- smmu->base = devm_ioremap_resource(dev, res);
-+ /*
-+ * Don't map the IMPLEMENTATION DEFINED regions, since they may contain
-+ * the PMCG registers which are reserved by the PMU driver.
-+ */
-+ smmu->base = arm_smmu_ioremap(dev, ioaddr, ARM_SMMU_REG_SZ);
- if (IS_ERR(smmu->base))
- return PTR_ERR(smmu->base);
-
-+ if (arm_smmu_resource_size(smmu) > SZ_64K) {
-+ smmu->page1 = arm_smmu_ioremap(dev, ioaddr + SZ_64K,
-+ ARM_SMMU_REG_SZ);
-+ if (IS_ERR(smmu->page1))
-+ return PTR_ERR(smmu->page1);
-+ } else {
-+ smmu->page1 = smmu->base;
-+ }
-+
- /* Interrupt lines */
-
- irq = platform_get_irq_byname_optional(pdev, "combined");
-diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
-index 11ed871dd255..fde7aba49b74 100644
---- a/drivers/iommu/intel-iommu.c
-+++ b/drivers/iommu/intel-iommu.c
-@@ -2518,9 +2518,6 @@ struct dmar_domain *find_domain(struct device *dev)
- if (unlikely(attach_deferred(dev) || iommu_dummy(dev)))
- return NULL;
-
-- if (dev_is_pci(dev))
-- dev = &pci_real_dma_dev(to_pci_dev(dev))->dev;
--
- /* No lock here, assumes no domain exit in normal case */
- info = dev->archdata.iommu;
- if (likely(info))
-diff --git a/drivers/mailbox/imx-mailbox.c b/drivers/mailbox/imx-mailbox.c
-index 7906624a731c..478308fb82cc 100644
---- a/drivers/mailbox/imx-mailbox.c
-+++ b/drivers/mailbox/imx-mailbox.c
-@@ -66,6 +66,8 @@ struct imx_mu_priv {
- struct clk *clk;
- int irq;
-
-+ u32 xcr;
-+
- bool side_b;
- };
-
-@@ -374,7 +376,7 @@ static struct mbox_chan *imx_mu_scu_xlate(struct mbox_controller *mbox,
- break;
- default:
- dev_err(mbox->dev, "Invalid chan type: %d\n", type);
-- return NULL;
-+ return ERR_PTR(-EINVAL);
- }
-
- if (chan >= mbox->num_chans) {
-@@ -558,12 +560,45 @@ static const struct of_device_id imx_mu_dt_ids[] = {
- };
- MODULE_DEVICE_TABLE(of, imx_mu_dt_ids);
-
-+static int imx_mu_suspend_noirq(struct device *dev)
-+{
-+ struct imx_mu_priv *priv = dev_get_drvdata(dev);
-+
-+ priv->xcr = imx_mu_read(priv, priv->dcfg->xCR);
-+
-+ return 0;
-+}
-+
-+static int imx_mu_resume_noirq(struct device *dev)
-+{
-+ struct imx_mu_priv *priv = dev_get_drvdata(dev);
-+
-+ /*
-+ * ONLY restore MU when context lost, the TIE could
-+ * be set during noirq resume as there is MU data
-+ * communication going on, and restore the saved
-+ * value will overwrite the TIE and cause MU data
-+ * send failed, may lead to system freeze. This issue
-+ * is observed by testing freeze mode suspend.
-+ */
-+ if (!imx_mu_read(priv, priv->dcfg->xCR))
-+ imx_mu_write(priv, priv->xcr, priv->dcfg->xCR);
-+
-+ return 0;
-+}
-+
-+static const struct dev_pm_ops imx_mu_pm_ops = {
-+ SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(imx_mu_suspend_noirq,
-+ imx_mu_resume_noirq)
-+};
-+
- static struct platform_driver imx_mu_driver = {
- .probe = imx_mu_probe,
- .remove = imx_mu_remove,
- .driver = {
- .name = "imx_mu",
- .of_match_table = imx_mu_dt_ids,
-+ .pm = &imx_mu_pm_ops,
- },
- };
- module_platform_driver(imx_mu_driver);
-diff --git a/drivers/mailbox/zynqmp-ipi-mailbox.c b/drivers/mailbox/zynqmp-ipi-mailbox.c
-index 86887c9a349a..f9cc674ba9b7 100644
---- a/drivers/mailbox/zynqmp-ipi-mailbox.c
-+++ b/drivers/mailbox/zynqmp-ipi-mailbox.c
-@@ -504,10 +504,9 @@ static int zynqmp_ipi_mbox_probe(struct zynqmp_ipi_mbox *ipi_mbox,
- mchan->req_buf_size = resource_size(&res);
- mchan->req_buf = devm_ioremap(mdev, res.start,
- mchan->req_buf_size);
-- if (IS_ERR(mchan->req_buf)) {
-+ if (!mchan->req_buf) {
- dev_err(mdev, "Unable to map IPI buffer I/O memory\n");
-- ret = PTR_ERR(mchan->req_buf);
-- return ret;
-+ return -ENOMEM;
- }
- } else if (ret != -ENODEV) {
- dev_err(mdev, "Unmatched resource %s, %d.\n", name, ret);
-@@ -520,10 +519,9 @@ static int zynqmp_ipi_mbox_probe(struct zynqmp_ipi_mbox *ipi_mbox,
- mchan->resp_buf_size = resource_size(&res);
- mchan->resp_buf = devm_ioremap(mdev, res.start,
- mchan->resp_buf_size);
-- if (IS_ERR(mchan->resp_buf)) {
-+ if (!mchan->resp_buf) {
- dev_err(mdev, "Unable to map IPI buffer I/O memory\n");
-- ret = PTR_ERR(mchan->resp_buf);
-- return ret;
-+ return -ENOMEM;
- }
- } else if (ret != -ENODEV) {
- dev_err(mdev, "Unmatched resource %s.\n", name);
-@@ -543,10 +541,9 @@ static int zynqmp_ipi_mbox_probe(struct zynqmp_ipi_mbox *ipi_mbox,
- mchan->req_buf_size = resource_size(&res);
- mchan->req_buf = devm_ioremap(mdev, res.start,
- mchan->req_buf_size);
-- if (IS_ERR(mchan->req_buf)) {
-+ if (!mchan->req_buf) {
- dev_err(mdev, "Unable to map IPI buffer I/O memory\n");
-- ret = PTR_ERR(mchan->req_buf);
-- return ret;
-+ return -ENOMEM;
- }
- } else if (ret != -ENODEV) {
- dev_err(mdev, "Unmatched resource %s.\n", name);
-@@ -559,10 +556,9 @@ static int zynqmp_ipi_mbox_probe(struct zynqmp_ipi_mbox *ipi_mbox,
- mchan->resp_buf_size = resource_size(&res);
- mchan->resp_buf = devm_ioremap(mdev, res.start,
- mchan->resp_buf_size);
-- if (IS_ERR(mchan->resp_buf)) {
-+ if (!mchan->resp_buf) {
- dev_err(mdev, "Unable to map IPI buffer I/O memory\n");
-- ret = PTR_ERR(mchan->resp_buf);
-- return ret;
-+ return -ENOMEM;
- }
- } else if (ret != -ENODEV) {
- dev_err(mdev, "Unmatched resource %s.\n", name);
-diff --git a/drivers/md/bcache/btree.c b/drivers/md/bcache/btree.c
-index 72856e5f23a3..fd1f288fd801 100644
---- a/drivers/md/bcache/btree.c
-+++ b/drivers/md/bcache/btree.c
-@@ -1389,7 +1389,7 @@ static int btree_gc_coalesce(struct btree *b, struct btree_op *op,
- if (__set_blocks(n1, n1->keys + n2->keys,
- block_bytes(b->c)) >
- btree_blocks(new_nodes[i]))
-- goto out_nocoalesce;
-+ goto out_unlock_nocoalesce;
-
- keys = n2->keys;
- /* Take the key of the node we're getting rid of */
-@@ -1418,7 +1418,7 @@ static int btree_gc_coalesce(struct btree *b, struct btree_op *op,
-
- if (__bch_keylist_realloc(&keylist,
- bkey_u64s(&new_nodes[i]->key)))
-- goto out_nocoalesce;
-+ goto out_unlock_nocoalesce;
-
- bch_btree_node_write(new_nodes[i], &cl);
- bch_keylist_add(&keylist, &new_nodes[i]->key);
-@@ -1464,6 +1464,10 @@ static int btree_gc_coalesce(struct btree *b, struct btree_op *op,
- /* Invalidated our iterator */
- return -EINTR;
-
-+out_unlock_nocoalesce:
-+ for (i = 0; i < nodes; i++)
-+ mutex_unlock(&new_nodes[i]->write_lock);
-+
- out_nocoalesce:
- closure_sync(&cl);
-
-diff --git a/drivers/md/dm-mpath.c b/drivers/md/dm-mpath.c
-index 3e500098132f..e0c800cf87a9 100644
---- a/drivers/md/dm-mpath.c
-+++ b/drivers/md/dm-mpath.c
-@@ -1918,7 +1918,7 @@ static int multipath_prepare_ioctl(struct dm_target *ti,
- int r;
-
- current_pgpath = READ_ONCE(m->current_pgpath);
-- if (!current_pgpath)
-+ if (!current_pgpath || !test_bit(MPATHF_QUEUE_IO, &m->flags))
- current_pgpath = choose_pgpath(m, 0);
-
- if (current_pgpath) {
-diff --git a/drivers/md/dm-zoned-metadata.c b/drivers/md/dm-zoned-metadata.c
-index 369de15c4e80..61b7d7b7e5a6 100644
---- a/drivers/md/dm-zoned-metadata.c
-+++ b/drivers/md/dm-zoned-metadata.c
-@@ -1554,7 +1554,7 @@ static struct dm_zone *dmz_get_rnd_zone_for_reclaim(struct dmz_metadata *zmd)
- return dzone;
- }
-
-- return ERR_PTR(-EBUSY);
-+ return NULL;
- }
-
- /*
-@@ -1574,7 +1574,7 @@ static struct dm_zone *dmz_get_seq_zone_for_reclaim(struct dmz_metadata *zmd)
- return zone;
- }
-
-- return ERR_PTR(-EBUSY);
-+ return NULL;
- }
-
- /*
-diff --git a/drivers/md/dm-zoned-reclaim.c b/drivers/md/dm-zoned-reclaim.c
-index e7ace908a9b7..d50817320e8e 100644
---- a/drivers/md/dm-zoned-reclaim.c
-+++ b/drivers/md/dm-zoned-reclaim.c
-@@ -349,8 +349,8 @@ static int dmz_do_reclaim(struct dmz_reclaim *zrc)
-
- /* Get a data zone */
- dzone = dmz_get_zone_for_reclaim(zmd);
-- if (IS_ERR(dzone))
-- return PTR_ERR(dzone);
-+ if (!dzone)
-+ return -EBUSY;
-
- start = jiffies;
-
-diff --git a/drivers/media/platform/s5p-mfc/s5p_mfc.c b/drivers/media/platform/s5p-mfc/s5p_mfc.c
-index 5c2a23b953a4..eba2b9f040df 100644
---- a/drivers/media/platform/s5p-mfc/s5p_mfc.c
-+++ b/drivers/media/platform/s5p-mfc/s5p_mfc.c
-@@ -1089,6 +1089,10 @@ static struct device *s5p_mfc_alloc_memdev(struct device *dev,
- child->coherent_dma_mask = dev->coherent_dma_mask;
- child->dma_mask = dev->dma_mask;
- child->release = s5p_mfc_memdev_release;
-+ child->dma_parms = devm_kzalloc(dev, sizeof(*child->dma_parms),
-+ GFP_KERNEL);
-+ if (!child->dma_parms)
-+ goto err;
-
- /*
- * The memdevs are not proper OF platform devices, so in order for them
-@@ -1104,7 +1108,7 @@ static struct device *s5p_mfc_alloc_memdev(struct device *dev,
- return child;
- device_del(child);
- }
--
-+err:
- put_device(child);
- return NULL;
- }
-diff --git a/drivers/media/v4l2-core/v4l2-ctrls.c b/drivers/media/v4l2-core/v4l2-ctrls.c
-index 452edd06d67d..99fd377f9b81 100644
---- a/drivers/media/v4l2-core/v4l2-ctrls.c
-+++ b/drivers/media/v4l2-core/v4l2-ctrls.c
-@@ -1825,7 +1825,7 @@ static int std_validate_compound(const struct v4l2_ctrl *ctrl, u32 idx,
- sizeof(p_hevc_pps->row_height_minus1));
-
- p_hevc_pps->flags &=
-- ~V4L2_HEVC_PPS_FLAG_PPS_LOOP_FILTER_ACROSS_SLICES_ENABLED;
-+ ~V4L2_HEVC_PPS_FLAG_LOOP_FILTER_ACROSS_TILES_ENABLED;
- }
-
- if (p_hevc_pps->flags &
-diff --git a/drivers/mfd/stmfx.c b/drivers/mfd/stmfx.c
-index 857991cb3cbb..711979afd90a 100644
---- a/drivers/mfd/stmfx.c
-+++ b/drivers/mfd/stmfx.c
-@@ -287,14 +287,21 @@ static int stmfx_irq_init(struct i2c_client *client)
-
- ret = regmap_write(stmfx->map, STMFX_REG_IRQ_OUT_PIN, irqoutpin);
- if (ret)
-- return ret;
-+ goto irq_exit;
-
- ret = devm_request_threaded_irq(stmfx->dev, client->irq,
- NULL, stmfx_irq_handler,
- irqtrigger | IRQF_ONESHOT,
- "stmfx", stmfx);
- if (ret)
-- stmfx_irq_exit(client);
-+ goto irq_exit;
-+
-+ stmfx->irq = client->irq;
-+
-+ return 0;
-+
-+irq_exit:
-+ stmfx_irq_exit(client);
-
- return ret;
- }
-@@ -481,6 +488,8 @@ static int stmfx_suspend(struct device *dev)
- if (ret)
- return ret;
-
-+ disable_irq(stmfx->irq);
-+
- if (stmfx->vdd)
- return regulator_disable(stmfx->vdd);
-
-@@ -501,6 +510,13 @@ static int stmfx_resume(struct device *dev)
- }
- }
-
-+ /* Reset STMFX - supply has been stopped during suspend */
-+ ret = stmfx_chip_reset(stmfx);
-+ if (ret) {
-+ dev_err(stmfx->dev, "Failed to reset chip: %d\n", ret);
-+ return ret;
-+ }
-+
- ret = regmap_raw_write(stmfx->map, STMFX_REG_SYS_CTRL,
- &stmfx->bkp_sysctrl, sizeof(stmfx->bkp_sysctrl));
- if (ret)
-@@ -517,6 +533,8 @@ static int stmfx_resume(struct device *dev)
- if (ret)
- return ret;
-
-+ enable_irq(stmfx->irq);
-+
- return 0;
- }
- #endif
-diff --git a/drivers/mfd/wcd934x.c b/drivers/mfd/wcd934x.c
-index 90341f3c6810..da910302d51a 100644
---- a/drivers/mfd/wcd934x.c
-+++ b/drivers/mfd/wcd934x.c
-@@ -280,7 +280,6 @@ static void wcd934x_slim_remove(struct slim_device *sdev)
-
- regulator_bulk_disable(WCD934X_MAX_SUPPLY, ddata->supplies);
- mfd_remove_devices(&sdev->dev);
-- kfree(ddata);
- }
-
- static const struct slim_device_id wcd934x_slim_id[] = {
-diff --git a/drivers/mfd/wm8994-core.c b/drivers/mfd/wm8994-core.c
-index 1e9fe7d92597..737dede4a95c 100644
---- a/drivers/mfd/wm8994-core.c
-+++ b/drivers/mfd/wm8994-core.c
-@@ -690,3 +690,4 @@ module_i2c_driver(wm8994_i2c_driver);
- MODULE_DESCRIPTION("Core support for the WM8994 audio CODEC");
- MODULE_LICENSE("GPL");
- MODULE_AUTHOR("Mark Brown <broonie@opensource.wolfsonmicro.com>");
-+MODULE_SOFTDEP("pre: wm8994_regulator");
-diff --git a/drivers/misc/fastrpc.c b/drivers/misc/fastrpc.c
-index e3e085e33d46..7939c55daceb 100644
---- a/drivers/misc/fastrpc.c
-+++ b/drivers/misc/fastrpc.c
-@@ -904,6 +904,7 @@ static int fastrpc_invoke_send(struct fastrpc_session_ctx *sctx,
- struct fastrpc_channel_ctx *cctx;
- struct fastrpc_user *fl = ctx->fl;
- struct fastrpc_msg *msg = &ctx->msg;
-+ int ret;
-
- cctx = fl->cctx;
- msg->pid = fl->tgid;
-@@ -919,7 +920,13 @@ static int fastrpc_invoke_send(struct fastrpc_session_ctx *sctx,
- msg->size = roundup(ctx->msg_sz, PAGE_SIZE);
- fastrpc_context_get(ctx);
-
-- return rpmsg_send(cctx->rpdev->ept, (void *)msg, sizeof(*msg));
-+ ret = rpmsg_send(cctx->rpdev->ept, (void *)msg, sizeof(*msg));
-+
-+ if (ret)
-+ fastrpc_context_put(ctx);
-+
-+ return ret;
-+
- }
-
- static int fastrpc_internal_invoke(struct fastrpc_user *fl, u32 kernel,
-@@ -1613,8 +1620,10 @@ static int fastrpc_rpmsg_probe(struct rpmsg_device *rpdev)
- domains[domain_id]);
- data->miscdev.fops = &fastrpc_fops;
- err = misc_register(&data->miscdev);
-- if (err)
-+ if (err) {
-+ kfree(data);
- return err;
-+ }
-
- kref_init(&data->refcount);
-
-diff --git a/drivers/misc/habanalabs/device.c b/drivers/misc/habanalabs/device.c
-index aef4de36b7aa..6d9c298e02c7 100644
---- a/drivers/misc/habanalabs/device.c
-+++ b/drivers/misc/habanalabs/device.c
-@@ -718,7 +718,7 @@ disable_device:
- return rc;
- }
-
--static void device_kill_open_processes(struct hl_device *hdev)
-+static int device_kill_open_processes(struct hl_device *hdev)
- {
- u16 pending_total, pending_cnt;
- struct hl_fpriv *hpriv;
-@@ -771,9 +771,7 @@ static void device_kill_open_processes(struct hl_device *hdev)
- ssleep(1);
- }
-
-- if (!list_empty(&hdev->fpriv_list))
-- dev_crit(hdev->dev,
-- "Going to hard reset with open user contexts\n");
-+ return list_empty(&hdev->fpriv_list) ? 0 : -EBUSY;
- }
-
- static void device_hard_reset_pending(struct work_struct *work)
-@@ -894,7 +892,12 @@ again:
- * process can't really exit until all its CSs are done, which
- * is what we do in cs rollback
- */
-- device_kill_open_processes(hdev);
-+ rc = device_kill_open_processes(hdev);
-+ if (rc) {
-+ dev_crit(hdev->dev,
-+ "Failed to kill all open processes, stopping hard reset\n");
-+ goto out_err;
-+ }
-
- /* Flush the Event queue workers to make sure no other thread is
- * reading or writing to registers during the reset
-@@ -1375,7 +1378,9 @@ void hl_device_fini(struct hl_device *hdev)
- * can't really exit until all its CSs are done, which is what we
- * do in cs rollback
- */
-- device_kill_open_processes(hdev);
-+ rc = device_kill_open_processes(hdev);
-+ if (rc)
-+ dev_crit(hdev->dev, "Failed to kill all open processes\n");
-
- hl_cb_pool_fini(hdev);
-
-diff --git a/drivers/misc/habanalabs/habanalabs.h b/drivers/misc/habanalabs/habanalabs.h
-index 31ebcf9458fe..a6dd8e6ca594 100644
---- a/drivers/misc/habanalabs/habanalabs.h
-+++ b/drivers/misc/habanalabs/habanalabs.h
-@@ -23,7 +23,7 @@
-
- #define HL_MMAP_CB_MASK (0x8000000000000000ull >> PAGE_SHIFT)
-
--#define HL_PENDING_RESET_PER_SEC 5
-+#define HL_PENDING_RESET_PER_SEC 30
-
- #define HL_DEVICE_TIMEOUT_USEC 1000000 /* 1 s */
-
-diff --git a/drivers/misc/xilinx_sdfec.c b/drivers/misc/xilinx_sdfec.c
-index 71bbaa56bdb5..e2766aad9e14 100644
---- a/drivers/misc/xilinx_sdfec.c
-+++ b/drivers/misc/xilinx_sdfec.c
-@@ -602,10 +602,10 @@ static int xsdfec_table_write(struct xsdfec_dev *xsdfec, u32 offset,
- const u32 depth)
- {
- u32 reg = 0;
-- u32 res;
-- u32 n, i;
-+ int res, i, nr_pages;
-+ u32 n;
- u32 *addr = NULL;
-- struct page *page[MAX_NUM_PAGES];
-+ struct page *pages[MAX_NUM_PAGES];
-
- /*
- * Writes that go beyond the length of
-@@ -622,15 +622,22 @@ static int xsdfec_table_write(struct xsdfec_dev *xsdfec, u32 offset,
- if ((len * XSDFEC_REG_WIDTH_JUMP) % PAGE_SIZE)
- n += 1;
-
-- res = get_user_pages_fast((unsigned long)src_ptr, n, 0, page);
-- if (res < n) {
-- for (i = 0; i < res; i++)
-- put_page(page[i]);
-+ if (WARN_ON_ONCE(n > INT_MAX))
-+ return -EINVAL;
-+
-+ nr_pages = n;
-+
-+ res = get_user_pages_fast((unsigned long)src_ptr, nr_pages, 0, pages);
-+ if (res < nr_pages) {
-+ if (res > 0) {
-+ for (i = 0; i < res; i++)
-+ put_page(pages[i]);
-+ }
- return -EINVAL;
- }
-
-- for (i = 0; i < n; i++) {
-- addr = kmap(page[i]);
-+ for (i = 0; i < nr_pages; i++) {
-+ addr = kmap(pages[i]);
- do {
- xsdfec_regwrite(xsdfec,
- base_addr + ((offset + reg) *
-@@ -639,7 +646,7 @@ static int xsdfec_table_write(struct xsdfec_dev *xsdfec, u32 offset,
- reg++;
- } while ((reg < len) &&
- ((reg * XSDFEC_REG_WIDTH_JUMP) % PAGE_SIZE));
-- put_page(page[i]);
-+ put_page(pages[i]);
- }
- return reg;
- }
-diff --git a/drivers/net/bareudp.c b/drivers/net/bareudp.c
-index efd1a1d1f35e..5d3c691a1c66 100644
---- a/drivers/net/bareudp.c
-+++ b/drivers/net/bareudp.c
-@@ -552,6 +552,8 @@ static int bareudp_validate(struct nlattr *tb[], struct nlattr *data[],
- static int bareudp2info(struct nlattr *data[], struct bareudp_conf *conf,
- struct netlink_ext_ack *extack)
- {
-+ memset(conf, 0, sizeof(*conf));
-+
- if (!data[IFLA_BAREUDP_PORT]) {
- NL_SET_ERR_MSG(extack, "port not specified");
- return -EINVAL;
-diff --git a/drivers/net/dsa/lantiq_gswip.c b/drivers/net/dsa/lantiq_gswip.c
-index cf6fa8fede33..521ebc072903 100644
---- a/drivers/net/dsa/lantiq_gswip.c
-+++ b/drivers/net/dsa/lantiq_gswip.c
-@@ -1452,7 +1452,8 @@ static void gswip_phylink_validate(struct dsa_switch *ds, int port,
-
- unsupported:
- bitmap_zero(supported, __ETHTOOL_LINK_MODE_MASK_NBITS);
-- dev_err(ds->dev, "Unsupported interface: %d\n", state->interface);
-+ dev_err(ds->dev, "Unsupported interface '%s' for port %d\n",
-+ phy_modes(state->interface), port);
- return;
- }
-
-diff --git a/drivers/net/dsa/sja1105/sja1105_ptp.c b/drivers/net/dsa/sja1105/sja1105_ptp.c
-index bc0e47c1dbb9..177134596458 100644
---- a/drivers/net/dsa/sja1105/sja1105_ptp.c
-+++ b/drivers/net/dsa/sja1105/sja1105_ptp.c
-@@ -891,16 +891,16 @@ void sja1105_ptp_txtstamp_skb(struct dsa_switch *ds, int port,
-
- mutex_lock(&ptp_data->lock);
-
-- rc = sja1105_ptpclkval_read(priv, &ticks, NULL);
-+ rc = sja1105_ptpegr_ts_poll(ds, port, &ts);
- if (rc < 0) {
-- dev_err(ds->dev, "Failed to read PTP clock: %d\n", rc);
-+ dev_err(ds->dev, "timed out polling for tstamp\n");
- kfree_skb(skb);
- goto out;
- }
-
-- rc = sja1105_ptpegr_ts_poll(ds, port, &ts);
-+ rc = sja1105_ptpclkval_read(priv, &ticks, NULL);
- if (rc < 0) {
-- dev_err(ds->dev, "timed out polling for tstamp\n");
-+ dev_err(ds->dev, "Failed to read PTP clock: %d\n", rc);
- kfree_skb(skb);
- goto out;
- }
-diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
-index 58e0d9a781e9..19c4a0a5727a 100644
---- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
-+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
-@@ -10014,7 +10014,7 @@ static void bnxt_timer(struct timer_list *t)
- struct bnxt *bp = from_timer(bp, t, timer);
- struct net_device *dev = bp->dev;
-
-- if (!netif_running(dev))
-+ if (!netif_running(dev) || !test_bit(BNXT_STATE_OPEN, &bp->state))
- return;
-
- if (atomic_read(&bp->intr_sem) != 0)
-@@ -12097,19 +12097,9 @@ static int bnxt_resume(struct device *device)
- goto resume_exit;
- }
-
-- if (bnxt_hwrm_queue_qportcfg(bp)) {
-- rc = -ENODEV;
-+ rc = bnxt_hwrm_func_qcaps(bp);
-+ if (rc)
- goto resume_exit;
-- }
--
-- if (bp->hwrm_spec_code >= 0x10803) {
-- if (bnxt_alloc_ctx_mem(bp)) {
-- rc = -ENODEV;
-- goto resume_exit;
-- }
-- }
-- if (BNXT_NEW_RM(bp))
-- bnxt_hwrm_func_resc_qcaps(bp, false);
-
- if (bnxt_hwrm_func_drv_rgtr(bp, NULL, 0, false)) {
- rc = -ENODEV;
-@@ -12125,6 +12115,8 @@ static int bnxt_resume(struct device *device)
-
- resume_exit:
- bnxt_ulp_start(bp, rc);
-+ if (!rc)
-+ bnxt_reenable_sriov(bp);
- rtnl_unlock();
- return rc;
- }
-@@ -12168,6 +12160,9 @@ static pci_ers_result_t bnxt_io_error_detected(struct pci_dev *pdev,
- bnxt_close(netdev);
-
- pci_disable_device(pdev);
-+ bnxt_free_ctx_mem(bp);
-+ kfree(bp->ctx);
-+ bp->ctx = NULL;
- rtnl_unlock();
-
- /* Request a slot slot reset. */
-@@ -12201,12 +12196,16 @@ static pci_ers_result_t bnxt_io_slot_reset(struct pci_dev *pdev)
- pci_set_master(pdev);
-
- err = bnxt_hwrm_func_reset(bp);
-- if (!err && netif_running(netdev))
-- err = bnxt_open(netdev);
--
-- if (!err)
-- result = PCI_ERS_RESULT_RECOVERED;
-+ if (!err) {
-+ err = bnxt_hwrm_func_qcaps(bp);
-+ if (!err && netif_running(netdev))
-+ err = bnxt_open(netdev);
-+ }
- bnxt_ulp_start(bp, err);
-+ if (!err) {
-+ bnxt_reenable_sriov(bp);
-+ result = PCI_ERS_RESULT_RECOVERED;
-+ }
- }
-
- if (result != PCI_ERS_RESULT_RECOVERED) {
-diff --git a/drivers/net/ethernet/cavium/octeon/octeon_mgmt.c b/drivers/net/ethernet/cavium/octeon/octeon_mgmt.c
-index 9d868403d86c..cbaa1924afbe 100644
---- a/drivers/net/ethernet/cavium/octeon/octeon_mgmt.c
-+++ b/drivers/net/ethernet/cavium/octeon/octeon_mgmt.c
-@@ -234,6 +234,11 @@ static void octeon_mgmt_rx_fill_ring(struct net_device *netdev)
-
- /* Put it in the ring. */
- p->rx_ring[p->rx_next_fill] = re.d64;
-+ /* Make sure there is no reorder of filling the ring and ringing
-+ * the bell
-+ */
-+ wmb();
-+
- dma_sync_single_for_device(p->dev, p->rx_ring_handle,
- ring_size_to_bytes(OCTEON_MGMT_RX_RING_SIZE),
- DMA_BIDIRECTIONAL);
-diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c
-index 197dc5b2c090..1b4d04e4474b 100644
---- a/drivers/net/ethernet/ibm/ibmvnic.c
-+++ b/drivers/net/ethernet/ibm/ibmvnic.c
-@@ -5184,6 +5184,9 @@ static int ibmvnic_remove(struct vio_dev *dev)
- adapter->state = VNIC_REMOVING;
- spin_unlock_irqrestore(&adapter->state_lock, flags);
-
-+ flush_work(&adapter->ibmvnic_reset);
-+ flush_delayed_work(&adapter->ibmvnic_delayed_reset);
-+
- rtnl_lock();
- unregister_netdevice(netdev);
-
-diff --git a/drivers/net/ethernet/intel/e1000e/netdev.c b/drivers/net/ethernet/intel/e1000e/netdev.c
-index df3d50e759de..5e388d4a97a1 100644
---- a/drivers/net/ethernet/intel/e1000e/netdev.c
-+++ b/drivers/net/ethernet/intel/e1000e/netdev.c
-@@ -6518,11 +6518,17 @@ static int __e1000_shutdown(struct pci_dev *pdev, bool runtime)
- struct net_device *netdev = pci_get_drvdata(pdev);
- struct e1000_adapter *adapter = netdev_priv(netdev);
- struct e1000_hw *hw = &adapter->hw;
-- u32 ctrl, ctrl_ext, rctl, status;
-- /* Runtime suspend should only enable wakeup for link changes */
-- u32 wufc = runtime ? E1000_WUFC_LNKC : adapter->wol;
-+ u32 ctrl, ctrl_ext, rctl, status, wufc;
- int retval = 0;
-
-+ /* Runtime suspend should only enable wakeup for link changes */
-+ if (runtime)
-+ wufc = E1000_WUFC_LNKC;
-+ else if (device_may_wakeup(&pdev->dev))
-+ wufc = adapter->wol;
-+ else
-+ wufc = 0;
-+
- status = er32(STATUS);
- if (status & E1000_STATUS_LU)
- wufc &= ~E1000_WUFC_LNKC;
-@@ -6579,7 +6585,7 @@ static int __e1000_shutdown(struct pci_dev *pdev, bool runtime)
- if (adapter->hw.phy.type == e1000_phy_igp_3) {
- e1000e_igp3_phy_powerdown_workaround_ich8lan(&adapter->hw);
- } else if (hw->mac.type >= e1000_pch_lpt) {
-- if (!(wufc & (E1000_WUFC_EX | E1000_WUFC_MC | E1000_WUFC_BC)))
-+ if (wufc && !(wufc & (E1000_WUFC_EX | E1000_WUFC_MC | E1000_WUFC_BC)))
- /* ULP does not support wake from unicast, multicast
- * or broadcast.
- */
-diff --git a/drivers/net/ethernet/intel/iavf/iavf.h b/drivers/net/ethernet/intel/iavf/iavf.h
-index bcd11b4b29df..2d4ce6fdba1a 100644
---- a/drivers/net/ethernet/intel/iavf/iavf.h
-+++ b/drivers/net/ethernet/intel/iavf/iavf.h
-@@ -87,6 +87,10 @@ struct iavf_vsi {
- #define IAVF_HLUT_ARRAY_SIZE ((IAVF_VFQF_HLUT_MAX_INDEX + 1) * 4)
- #define IAVF_MBPS_DIVISOR 125000 /* divisor to convert to Mbps */
-
-+#define IAVF_VIRTCHNL_VF_RESOURCE_SIZE (sizeof(struct virtchnl_vf_resource) + \
-+ (IAVF_MAX_VF_VSI * \
-+ sizeof(struct virtchnl_vsi_resource)))
-+
- /* MAX_MSIX_Q_VECTORS of these are allocated,
- * but we only use one per queue-specific vector.
- */
-@@ -306,6 +310,14 @@ struct iavf_adapter {
- bool netdev_registered;
- bool link_up;
- enum virtchnl_link_speed link_speed;
-+ /* This is only populated if the VIRTCHNL_VF_CAP_ADV_LINK_SPEED is set
-+ * in vf_res->vf_cap_flags. Use ADV_LINK_SUPPORT macro to determine if
-+ * this field is valid. This field should be used going forward and the
-+ * enum virtchnl_link_speed above should be considered the legacy way of
-+ * storing/communicating link speeds.
-+ */
-+ u32 link_speed_mbps;
-+
- enum virtchnl_ops current_op;
- #define CLIENT_ALLOWED(_a) ((_a)->vf_res ? \
- (_a)->vf_res->vf_cap_flags & \
-@@ -322,6 +334,8 @@ struct iavf_adapter {
- VIRTCHNL_VF_OFFLOAD_RSS_PF)))
- #define VLAN_ALLOWED(_a) ((_a)->vf_res->vf_cap_flags & \
- VIRTCHNL_VF_OFFLOAD_VLAN)
-+#define ADV_LINK_SUPPORT(_a) ((_a)->vf_res->vf_cap_flags & \
-+ VIRTCHNL_VF_CAP_ADV_LINK_SPEED)
- struct virtchnl_vf_resource *vf_res; /* incl. all VSIs */
- struct virtchnl_vsi_resource *vsi_res; /* our LAN VSI */
- struct virtchnl_version_info pf_version;
-diff --git a/drivers/net/ethernet/intel/iavf/iavf_ethtool.c b/drivers/net/ethernet/intel/iavf/iavf_ethtool.c
-index 2c39d46b6138..40a3fc7c5ea5 100644
---- a/drivers/net/ethernet/intel/iavf/iavf_ethtool.c
-+++ b/drivers/net/ethernet/intel/iavf/iavf_ethtool.c
-@@ -278,7 +278,18 @@ static int iavf_get_link_ksettings(struct net_device *netdev,
- ethtool_link_ksettings_zero_link_mode(cmd, supported);
- cmd->base.autoneg = AUTONEG_DISABLE;
- cmd->base.port = PORT_NONE;
-- /* Set speed and duplex */
-+ cmd->base.duplex = DUPLEX_FULL;
-+
-+ if (ADV_LINK_SUPPORT(adapter)) {
-+ if (adapter->link_speed_mbps &&
-+ adapter->link_speed_mbps < U32_MAX)
-+ cmd->base.speed = adapter->link_speed_mbps;
-+ else
-+ cmd->base.speed = SPEED_UNKNOWN;
-+
-+ return 0;
-+ }
-+
- switch (adapter->link_speed) {
- case IAVF_LINK_SPEED_40GB:
- cmd->base.speed = SPEED_40000;
-@@ -306,7 +317,6 @@ static int iavf_get_link_ksettings(struct net_device *netdev,
- default:
- break;
- }
-- cmd->base.duplex = DUPLEX_FULL;
-
- return 0;
- }
-diff --git a/drivers/net/ethernet/intel/iavf/iavf_main.c b/drivers/net/ethernet/intel/iavf/iavf_main.c
-index 2050649848ba..a21ae74bcd1b 100644
---- a/drivers/net/ethernet/intel/iavf/iavf_main.c
-+++ b/drivers/net/ethernet/intel/iavf/iavf_main.c
-@@ -1756,17 +1756,17 @@ static int iavf_init_get_resources(struct iavf_adapter *adapter)
- struct net_device *netdev = adapter->netdev;
- struct pci_dev *pdev = adapter->pdev;
- struct iavf_hw *hw = &adapter->hw;
-- int err = 0, bufsz;
-+ int err;
-
- WARN_ON(adapter->state != __IAVF_INIT_GET_RESOURCES);
- /* aq msg sent, awaiting reply */
- if (!adapter->vf_res) {
-- bufsz = sizeof(struct virtchnl_vf_resource) +
-- (IAVF_MAX_VF_VSI *
-- sizeof(struct virtchnl_vsi_resource));
-- adapter->vf_res = kzalloc(bufsz, GFP_KERNEL);
-- if (!adapter->vf_res)
-+ adapter->vf_res = kzalloc(IAVF_VIRTCHNL_VF_RESOURCE_SIZE,
-+ GFP_KERNEL);
-+ if (!adapter->vf_res) {
-+ err = -ENOMEM;
- goto err;
-+ }
- }
- err = iavf_get_vf_config(adapter);
- if (err == IAVF_ERR_ADMIN_QUEUE_NO_WORK) {
-@@ -2036,7 +2036,7 @@ static void iavf_disable_vf(struct iavf_adapter *adapter)
- iavf_reset_interrupt_capability(adapter);
- iavf_free_queues(adapter);
- iavf_free_q_vectors(adapter);
-- kfree(adapter->vf_res);
-+ memset(adapter->vf_res, 0, IAVF_VIRTCHNL_VF_RESOURCE_SIZE);
- iavf_shutdown_adminq(&adapter->hw);
- adapter->netdev->flags &= ~IFF_UP;
- clear_bit(__IAVF_IN_CRITICAL_TASK, &adapter->crit_section);
-@@ -2487,6 +2487,16 @@ static int iavf_validate_tx_bandwidth(struct iavf_adapter *adapter,
- {
- int speed = 0, ret = 0;
-
-+ if (ADV_LINK_SUPPORT(adapter)) {
-+ if (adapter->link_speed_mbps < U32_MAX) {
-+ speed = adapter->link_speed_mbps;
-+ goto validate_bw;
-+ } else {
-+ dev_err(&adapter->pdev->dev, "Unknown link speed\n");
-+ return -EINVAL;
-+ }
-+ }
-+
- switch (adapter->link_speed) {
- case IAVF_LINK_SPEED_40GB:
- speed = 40000;
-@@ -2510,6 +2520,7 @@ static int iavf_validate_tx_bandwidth(struct iavf_adapter *adapter,
- break;
- }
-
-+validate_bw:
- if (max_tx_rate > speed) {
- dev_err(&adapter->pdev->dev,
- "Invalid tx rate specified\n");
-diff --git a/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c b/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c
-index d58374c2c33d..ca79bec4ebd9 100644
---- a/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c
-+++ b/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c
-@@ -139,7 +139,8 @@ int iavf_send_vf_config_msg(struct iavf_adapter *adapter)
- VIRTCHNL_VF_OFFLOAD_ENCAP |
- VIRTCHNL_VF_OFFLOAD_ENCAP_CSUM |
- VIRTCHNL_VF_OFFLOAD_REQ_QUEUES |
-- VIRTCHNL_VF_OFFLOAD_ADQ;
-+ VIRTCHNL_VF_OFFLOAD_ADQ |
-+ VIRTCHNL_VF_CAP_ADV_LINK_SPEED;
-
- adapter->current_op = VIRTCHNL_OP_GET_VF_RESOURCES;
- adapter->aq_required &= ~IAVF_FLAG_AQ_GET_CONFIG;
-@@ -891,6 +892,8 @@ void iavf_disable_vlan_stripping(struct iavf_adapter *adapter)
- iavf_send_pf_msg(adapter, VIRTCHNL_OP_DISABLE_VLAN_STRIPPING, NULL, 0);
- }
-
-+#define IAVF_MAX_SPEED_STRLEN 13
-+
- /**
- * iavf_print_link_message - print link up or down
- * @adapter: adapter structure
-@@ -900,37 +903,99 @@ void iavf_disable_vlan_stripping(struct iavf_adapter *adapter)
- static void iavf_print_link_message(struct iavf_adapter *adapter)
- {
- struct net_device *netdev = adapter->netdev;
-- char *speed = "Unknown ";
-+ int link_speed_mbps;
-+ char *speed;
-
- if (!adapter->link_up) {
- netdev_info(netdev, "NIC Link is Down\n");
- return;
- }
-
-+ speed = kcalloc(1, IAVF_MAX_SPEED_STRLEN, GFP_KERNEL);
-+ if (!speed)
-+ return;
-+
-+ if (ADV_LINK_SUPPORT(adapter)) {
-+ link_speed_mbps = adapter->link_speed_mbps;
-+ goto print_link_msg;
-+ }
-+
- switch (adapter->link_speed) {
- case IAVF_LINK_SPEED_40GB:
-- speed = "40 G";
-+ link_speed_mbps = SPEED_40000;
- break;
- case IAVF_LINK_SPEED_25GB:
-- speed = "25 G";
-+ link_speed_mbps = SPEED_25000;
- break;
- case IAVF_LINK_SPEED_20GB:
-- speed = "20 G";
-+ link_speed_mbps = SPEED_20000;
- break;
- case IAVF_LINK_SPEED_10GB:
-- speed = "10 G";
-+ link_speed_mbps = SPEED_10000;
- break;
- case IAVF_LINK_SPEED_1GB:
-- speed = "1000 M";
-+ link_speed_mbps = SPEED_1000;
- break;
- case IAVF_LINK_SPEED_100MB:
-- speed = "100 M";
-+ link_speed_mbps = SPEED_100;
- break;
- default:
-+ link_speed_mbps = SPEED_UNKNOWN;
- break;
- }
-
-- netdev_info(netdev, "NIC Link is Up %sbps Full Duplex\n", speed);
-+print_link_msg:
-+ if (link_speed_mbps > SPEED_1000) {
-+ if (link_speed_mbps == SPEED_2500)
-+ snprintf(speed, IAVF_MAX_SPEED_STRLEN, "2.5 Gbps");
-+ else
-+ /* convert to Gbps inline */
-+ snprintf(speed, IAVF_MAX_SPEED_STRLEN, "%d %s",
-+ link_speed_mbps / 1000, "Gbps");
-+ } else if (link_speed_mbps == SPEED_UNKNOWN) {
-+ snprintf(speed, IAVF_MAX_SPEED_STRLEN, "%s", "Unknown Mbps");
-+ } else {
-+ snprintf(speed, IAVF_MAX_SPEED_STRLEN, "%u %s",
-+ link_speed_mbps, "Mbps");
-+ }
-+
-+ netdev_info(netdev, "NIC Link is Up Speed is %s Full Duplex\n", speed);
-+ kfree(speed);
-+}
-+
-+/**
-+ * iavf_get_vpe_link_status
-+ * @adapter: adapter structure
-+ * @vpe: virtchnl_pf_event structure
-+ *
-+ * Helper function for determining the link status
-+ **/
-+static bool
-+iavf_get_vpe_link_status(struct iavf_adapter *adapter,
-+ struct virtchnl_pf_event *vpe)
-+{
-+ if (ADV_LINK_SUPPORT(adapter))
-+ return vpe->event_data.link_event_adv.link_status;
-+ else
-+ return vpe->event_data.link_event.link_status;
-+}
-+
-+/**
-+ * iavf_set_adapter_link_speed_from_vpe
-+ * @adapter: adapter structure for which we are setting the link speed
-+ * @vpe: virtchnl_pf_event structure that contains the link speed we are setting
-+ *
-+ * Helper function for setting iavf_adapter link speed
-+ **/
-+static void
-+iavf_set_adapter_link_speed_from_vpe(struct iavf_adapter *adapter,
-+ struct virtchnl_pf_event *vpe)
-+{
-+ if (ADV_LINK_SUPPORT(adapter))
-+ adapter->link_speed_mbps =
-+ vpe->event_data.link_event_adv.link_speed;
-+ else
-+ adapter->link_speed = vpe->event_data.link_event.link_speed;
- }
-
- /**
-@@ -1160,12 +1225,11 @@ void iavf_virtchnl_completion(struct iavf_adapter *adapter,
- if (v_opcode == VIRTCHNL_OP_EVENT) {
- struct virtchnl_pf_event *vpe =
- (struct virtchnl_pf_event *)msg;
-- bool link_up = vpe->event_data.link_event.link_status;
-+ bool link_up = iavf_get_vpe_link_status(adapter, vpe);
-
- switch (vpe->event) {
- case VIRTCHNL_EVENT_LINK_CHANGE:
-- adapter->link_speed =
-- vpe->event_data.link_event.link_speed;
-+ iavf_set_adapter_link_speed_from_vpe(adapter, vpe);
-
- /* we've already got the right link status, bail */
- if (adapter->link_up == link_up)
-diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
-index 2b5dad2ec650..b7b553602ea9 100644
---- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
-+++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
-@@ -5983,8 +5983,8 @@ static int mvpp2_remove(struct platform_device *pdev)
- {
- struct mvpp2 *priv = platform_get_drvdata(pdev);
- struct fwnode_handle *fwnode = pdev->dev.fwnode;
-+ int i = 0, poolnum = MVPP2_BM_POOLS_NUM;
- struct fwnode_handle *port_fwnode;
-- int i = 0;
-
- mvpp2_dbgfs_cleanup(priv);
-
-@@ -5998,7 +5998,10 @@ static int mvpp2_remove(struct platform_device *pdev)
-
- destroy_workqueue(priv->stats_queue);
-
-- for (i = 0; i < MVPP2_BM_POOLS_NUM; i++) {
-+ if (priv->percpu_pools)
-+ poolnum = mvpp2_get_nrxqs(priv) * 2;
-+
-+ for (i = 0; i < poolnum; i++) {
- struct mvpp2_bm_pool *bm_pool = &priv->bm_pools[i];
-
- mvpp2_bm_pool_destroy(&pdev->dev, priv, bm_pool);
-diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_send.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_send.c
-index 18719acb7e54..eff8bb64899d 100644
---- a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_send.c
-+++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_send.c
-@@ -181,7 +181,7 @@ static struct mlx5dr_qp *dr_create_rc_qp(struct mlx5_core_dev *mdev,
- in, pas));
-
- err = mlx5_core_create_qp(mdev, &dr_qp->mqp, in, inlen);
-- kfree(in);
-+ kvfree(in);
-
- if (err) {
- mlx5_core_warn(mdev, " Can't create QP\n");
-diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum.c
-index 6b39978acd07..3e4199246a18 100644
---- a/drivers/net/ethernet/mellanox/mlxsw/spectrum.c
-+++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum.c
-@@ -990,8 +990,10 @@ int __mlxsw_sp_port_headroom_set(struct mlxsw_sp_port *mlxsw_sp_port, int mtu,
-
- lossy = !(pfc || pause_en);
- thres_cells = mlxsw_sp_pg_buf_threshold_get(mlxsw_sp, mtu);
-+ mlxsw_sp_port_headroom_8x_adjust(mlxsw_sp_port, &thres_cells);
- delay_cells = mlxsw_sp_pg_buf_delay_get(mlxsw_sp, mtu, delay,
- pfc, pause_en);
-+ mlxsw_sp_port_headroom_8x_adjust(mlxsw_sp_port, &delay_cells);
- total_cells = thres_cells + delay_cells;
-
- taken_headroom_cells += total_cells;
-diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum.h b/drivers/net/ethernet/mellanox/mlxsw/spectrum.h
-index ca56e72cb4b7..e28ecb84b816 100644
---- a/drivers/net/ethernet/mellanox/mlxsw/spectrum.h
-+++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum.h
-@@ -395,6 +395,19 @@ mlxsw_sp_port_vlan_find_by_vid(const struct mlxsw_sp_port *mlxsw_sp_port,
- return NULL;
- }
-
-+static inline void
-+mlxsw_sp_port_headroom_8x_adjust(const struct mlxsw_sp_port *mlxsw_sp_port,
-+ u16 *p_size)
-+{
-+ /* Ports with eight lanes use two headroom buffers between which the
-+ * configured headroom size is split. Therefore, multiply the calculated
-+ * headroom size by two.
-+ */
-+ if (mlxsw_sp_port->mapping.width != 8)
-+ return;
-+ *p_size *= 2;
-+}
-+
- enum mlxsw_sp_flood_type {
- MLXSW_SP_FLOOD_TYPE_UC,
- MLXSW_SP_FLOOD_TYPE_BC,
-diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_buffers.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_buffers.c
-index 968f0902e4fe..19bf0768ed78 100644
---- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_buffers.c
-+++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_buffers.c
-@@ -312,6 +312,7 @@ static int mlxsw_sp_port_pb_init(struct mlxsw_sp_port *mlxsw_sp_port)
-
- if (i == MLXSW_SP_PB_UNUSED)
- continue;
-+ mlxsw_sp_port_headroom_8x_adjust(mlxsw_sp_port, &size);
- mlxsw_reg_pbmc_lossy_buffer_pack(pbmc_pl, i, size);
- }
- mlxsw_reg_pbmc_lossy_buffer_pack(pbmc_pl,
-diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_span.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_span.c
-index 9fb2e9d93929..7c5032f9c8ff 100644
---- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_span.c
-+++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_span.c
-@@ -776,6 +776,7 @@ mlxsw_sp_span_port_buffsize_update(struct mlxsw_sp_port *mlxsw_sp_port, u16 mtu)
- speed = 0;
-
- buffsize = mlxsw_sp_span_buffsize_get(mlxsw_sp, speed, mtu);
-+ mlxsw_sp_port_headroom_8x_adjust(mlxsw_sp_port, (u16 *) &buffsize);
- mlxsw_reg_sbib_pack(sbib_pl, mlxsw_sp_port->local_port, buffsize);
- return mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(sbib), sbib_pl);
- }
-diff --git a/drivers/net/geneve.c b/drivers/net/geneve.c
-index 6b461be1820b..75266580b586 100644
---- a/drivers/net/geneve.c
-+++ b/drivers/net/geneve.c
-@@ -987,9 +987,10 @@ static netdev_tx_t geneve_xmit(struct sk_buff *skb, struct net_device *dev)
- if (geneve->collect_md) {
- info = skb_tunnel_info(skb);
- if (unlikely(!info || !(info->mode & IP_TUNNEL_INFO_TX))) {
-- err = -EINVAL;
- netdev_dbg(dev, "no tunnel metadata\n");
-- goto tx_error;
-+ dev_kfree_skb(skb);
-+ dev->stats.tx_dropped++;
-+ return NETDEV_TX_OK;
- }
- } else {
- info = &geneve->info;
-@@ -1006,7 +1007,7 @@ static netdev_tx_t geneve_xmit(struct sk_buff *skb, struct net_device *dev)
-
- if (likely(!err))
- return NETDEV_TX_OK;
--tx_error:
-+
- dev_kfree_skb(skb);
-
- if (err == -ELOOP)
-diff --git a/drivers/net/hamradio/yam.c b/drivers/net/hamradio/yam.c
-index 71cdef9fb56b..5ab53e9942f3 100644
---- a/drivers/net/hamradio/yam.c
-+++ b/drivers/net/hamradio/yam.c
-@@ -1133,6 +1133,7 @@ static int __init yam_init_driver(void)
- err = register_netdev(dev);
- if (err) {
- printk(KERN_WARNING "yam: cannot register net device %s\n", dev->name);
-+ free_netdev(dev);
- goto error;
- }
- yam_devs[i] = dev;
-diff --git a/drivers/net/ipa/ipa_endpoint.c b/drivers/net/ipa/ipa_endpoint.c
-index a21534f1462f..1d823ac0f6d6 100644
---- a/drivers/net/ipa/ipa_endpoint.c
-+++ b/drivers/net/ipa/ipa_endpoint.c
-@@ -669,10 +669,12 @@ static void ipa_endpoint_init_seq(struct ipa_endpoint *endpoint)
- u32 seq_type = endpoint->seq_type;
- u32 val = 0;
-
-+ /* Sequencer type is made up of four nibbles */
- val |= u32_encode_bits(seq_type & 0xf, HPS_SEQ_TYPE_FMASK);
- val |= u32_encode_bits((seq_type >> 4) & 0xf, DPS_SEQ_TYPE_FMASK);
-- /* HPS_REP_SEQ_TYPE is 0 */
-- /* DPS_REP_SEQ_TYPE is 0 */
-+ /* The second two apply to replicated packets */
-+ val |= u32_encode_bits((seq_type >> 8) & 0xf, HPS_REP_SEQ_TYPE_FMASK);
-+ val |= u32_encode_bits((seq_type >> 12) & 0xf, DPS_REP_SEQ_TYPE_FMASK);
-
- iowrite32(val, endpoint->ipa->reg_virt + offset);
- }
-diff --git a/drivers/net/ipa/ipa_reg.h b/drivers/net/ipa/ipa_reg.h
-index 3b8106aa277a..0a688d8c1d7c 100644
---- a/drivers/net/ipa/ipa_reg.h
-+++ b/drivers/net/ipa/ipa_reg.h
-@@ -455,6 +455,8 @@ enum ipa_mode {
- * second packet processing pass + no decipher + microcontroller
- * @IPA_SEQ_DMA_DEC: DMA + cipher/decipher
- * @IPA_SEQ_DMA_COMP_DECOMP: DMA + compression/decompression
-+ * @IPA_SEQ_PKT_PROCESS_NO_DEC_NO_UCP_DMAP:
-+ * packet processing + no decipher + no uCP + HPS REP DMA parser
- * @IPA_SEQ_INVALID: invalid sequencer type
- *
- * The values defined here are broken into 4-bit nibbles that are written
-diff --git a/drivers/net/phy/dp83867.c b/drivers/net/phy/dp83867.c
-index b55e3c0403ed..ddac79960ea7 100644
---- a/drivers/net/phy/dp83867.c
-+++ b/drivers/net/phy/dp83867.c
-@@ -488,7 +488,7 @@ static int dp83867_verify_rgmii_cfg(struct phy_device *phydev)
- return 0;
- }
-
--#ifdef CONFIG_OF_MDIO
-+#if IS_ENABLED(CONFIG_OF_MDIO)
- static int dp83867_of_init(struct phy_device *phydev)
- {
- struct dp83867_private *dp83867 = phydev->priv;
-diff --git a/drivers/net/phy/marvell.c b/drivers/net/phy/marvell.c
-index 7fc8e10c5f33..a435f7352cfb 100644
---- a/drivers/net/phy/marvell.c
-+++ b/drivers/net/phy/marvell.c
-@@ -337,7 +337,7 @@ static int m88e1101_config_aneg(struct phy_device *phydev)
- return marvell_config_aneg(phydev);
- }
-
--#ifdef CONFIG_OF_MDIO
-+#if IS_ENABLED(CONFIG_OF_MDIO)
- /* Set and/or override some configuration registers based on the
- * marvell,reg-init property stored in the of_node for the phydev.
- *
-diff --git a/drivers/net/phy/mdio_bus.c b/drivers/net/phy/mdio_bus.c
-index 7a4eb3f2cb74..a1a4dee2a033 100644
---- a/drivers/net/phy/mdio_bus.c
-+++ b/drivers/net/phy/mdio_bus.c
-@@ -757,6 +757,7 @@ EXPORT_SYMBOL(mdiobus_scan);
-
- static void mdiobus_stats_acct(struct mdio_bus_stats *stats, bool op, int ret)
- {
-+ preempt_disable();
- u64_stats_update_begin(&stats->syncp);
-
- u64_stats_inc(&stats->transfers);
-@@ -771,6 +772,7 @@ static void mdiobus_stats_acct(struct mdio_bus_stats *stats, bool op, int ret)
- u64_stats_inc(&stats->writes);
- out:
- u64_stats_update_end(&stats->syncp);
-+ preempt_enable();
- }
-
- /**
-diff --git a/drivers/net/phy/mscc/mscc.h b/drivers/net/phy/mscc/mscc.h
-index 414e3b31bb1f..132f9bf49198 100644
---- a/drivers/net/phy/mscc/mscc.h
-+++ b/drivers/net/phy/mscc/mscc.h
-@@ -375,7 +375,7 @@ struct vsc8531_private {
- #endif
- };
-
--#ifdef CONFIG_OF_MDIO
-+#if IS_ENABLED(CONFIG_OF_MDIO)
- struct vsc8531_edge_rate_table {
- u32 vddmac;
- u32 slowdown[8];
-diff --git a/drivers/net/phy/mscc/mscc_main.c b/drivers/net/phy/mscc/mscc_main.c
-index c8aa6d905d8e..485a4f8a6a9a 100644
---- a/drivers/net/phy/mscc/mscc_main.c
-+++ b/drivers/net/phy/mscc/mscc_main.c
-@@ -98,7 +98,7 @@ static const struct vsc85xx_hw_stat vsc8584_hw_stats[] = {
- },
- };
-
--#ifdef CONFIG_OF_MDIO
-+#if IS_ENABLED(CONFIG_OF_MDIO)
- static const struct vsc8531_edge_rate_table edge_table[] = {
- {MSCC_VDDMAC_3300, { 0, 2, 4, 7, 10, 17, 29, 53} },
- {MSCC_VDDMAC_2500, { 0, 3, 6, 10, 14, 23, 37, 63} },
-@@ -382,7 +382,7 @@ out_unlock:
- mutex_unlock(&phydev->lock);
- }
-
--#ifdef CONFIG_OF_MDIO
-+#if IS_ENABLED(CONFIG_OF_MDIO)
- static int vsc85xx_edge_rate_magic_get(struct phy_device *phydev)
- {
- u32 vdd, sd;
-diff --git a/drivers/ntb/core.c b/drivers/ntb/core.c
-index 2581ab724c34..f8f75a504a58 100644
---- a/drivers/ntb/core.c
-+++ b/drivers/ntb/core.c
-@@ -214,10 +214,8 @@ int ntb_default_port_number(struct ntb_dev *ntb)
- case NTB_TOPO_B2B_DSD:
- return NTB_PORT_SEC_DSD;
- default:
-- break;
-+ return 0;
- }
--
-- return -EINVAL;
- }
- EXPORT_SYMBOL(ntb_default_port_number);
-
-@@ -240,10 +238,8 @@ int ntb_default_peer_port_number(struct ntb_dev *ntb, int pidx)
- case NTB_TOPO_B2B_DSD:
- return NTB_PORT_PRI_USD;
- default:
-- break;
-+ return 0;
- }
--
-- return -EINVAL;
- }
- EXPORT_SYMBOL(ntb_default_peer_port_number);
-
-@@ -315,4 +311,3 @@ static void __exit ntb_driver_exit(void)
- bus_unregister(&ntb_bus);
- }
- module_exit(ntb_driver_exit);
--
-diff --git a/drivers/ntb/test/ntb_perf.c b/drivers/ntb/test/ntb_perf.c
-index 972f6d984f6d..528751803419 100644
---- a/drivers/ntb/test/ntb_perf.c
-+++ b/drivers/ntb/test/ntb_perf.c
-@@ -159,6 +159,8 @@ struct perf_peer {
- /* NTB connection setup service */
- struct work_struct service;
- unsigned long sts;
-+
-+ struct completion init_comp;
- };
- #define to_peer_service(__work) \
- container_of(__work, struct perf_peer, service)
-@@ -547,6 +549,7 @@ static int perf_setup_outbuf(struct perf_peer *peer)
-
- /* Initialization is finally done */
- set_bit(PERF_STS_DONE, &peer->sts);
-+ complete_all(&peer->init_comp);
-
- return 0;
- }
-@@ -557,7 +560,7 @@ static void perf_free_inbuf(struct perf_peer *peer)
- return;
-
- (void)ntb_mw_clear_trans(peer->perf->ntb, peer->pidx, peer->gidx);
-- dma_free_coherent(&peer->perf->ntb->dev, peer->inbuf_size,
-+ dma_free_coherent(&peer->perf->ntb->pdev->dev, peer->inbuf_size,
- peer->inbuf, peer->inbuf_xlat);
- peer->inbuf = NULL;
- }
-@@ -586,8 +589,9 @@ static int perf_setup_inbuf(struct perf_peer *peer)
-
- perf_free_inbuf(peer);
-
-- peer->inbuf = dma_alloc_coherent(&perf->ntb->dev, peer->inbuf_size,
-- &peer->inbuf_xlat, GFP_KERNEL);
-+ peer->inbuf = dma_alloc_coherent(&perf->ntb->pdev->dev,
-+ peer->inbuf_size, &peer->inbuf_xlat,
-+ GFP_KERNEL);
- if (!peer->inbuf) {
- dev_err(&perf->ntb->dev, "Failed to alloc inbuf of %pa\n",
- &peer->inbuf_size);
-@@ -637,6 +641,7 @@ static void perf_service_work(struct work_struct *work)
- perf_setup_outbuf(peer);
-
- if (test_and_clear_bit(PERF_CMD_CLEAR, &peer->sts)) {
-+ init_completion(&peer->init_comp);
- clear_bit(PERF_STS_DONE, &peer->sts);
- if (test_bit(0, &peer->perf->busy_flag) &&
- peer == peer->perf->test_peer) {
-@@ -653,7 +658,7 @@ static int perf_init_service(struct perf_ctx *perf)
- {
- u64 mask;
-
-- if (ntb_peer_mw_count(perf->ntb) < perf->pcnt + 1) {
-+ if (ntb_peer_mw_count(perf->ntb) < perf->pcnt) {
- dev_err(&perf->ntb->dev, "Not enough memory windows\n");
- return -EINVAL;
- }
-@@ -1083,8 +1088,9 @@ static int perf_submit_test(struct perf_peer *peer)
- struct perf_thread *pthr;
- int tidx, ret;
-
-- if (!test_bit(PERF_STS_DONE, &peer->sts))
-- return -ENOLINK;
-+ ret = wait_for_completion_interruptible(&peer->init_comp);
-+ if (ret < 0)
-+ return ret;
-
- if (test_and_set_bit_lock(0, &perf->busy_flag))
- return -EBUSY;
-@@ -1455,10 +1461,21 @@ static int perf_init_peers(struct perf_ctx *perf)
- peer->gidx = pidx;
- }
- INIT_WORK(&peer->service, perf_service_work);
-+ init_completion(&peer->init_comp);
- }
- if (perf->gidx == -1)
- perf->gidx = pidx;
-
-+ /*
-+ * Hardware with only two ports may not have unique port
-+ * numbers. In this case, the gidxs should all be zero.
-+ */
-+ if (perf->pcnt == 1 && ntb_port_number(perf->ntb) == 0 &&
-+ ntb_peer_port_number(perf->ntb, 0) == 0) {
-+ perf->gidx = 0;
-+ perf->peers[0].gidx = 0;
-+ }
-+
- for (pidx = 0; pidx < perf->pcnt; pidx++) {
- ret = perf_setup_peer_mw(&perf->peers[pidx]);
- if (ret)
-@@ -1554,4 +1571,3 @@ static void __exit perf_exit(void)
- destroy_workqueue(perf_wq);
- }
- module_exit(perf_exit);
--
-diff --git a/drivers/ntb/test/ntb_pingpong.c b/drivers/ntb/test/ntb_pingpong.c
-index 04dd46647db3..2164e8492772 100644
---- a/drivers/ntb/test/ntb_pingpong.c
-+++ b/drivers/ntb/test/ntb_pingpong.c
-@@ -121,15 +121,14 @@ static int pp_find_next_peer(struct pp_ctx *pp)
- link = ntb_link_is_up(pp->ntb, NULL, NULL);
-
- /* Find next available peer */
-- if (link & pp->nmask) {
-+ if (link & pp->nmask)
- pidx = __ffs64(link & pp->nmask);
-- out_db = BIT_ULL(pidx + 1);
-- } else if (link & pp->pmask) {
-+ else if (link & pp->pmask)
- pidx = __ffs64(link & pp->pmask);
-- out_db = BIT_ULL(pidx);
-- } else {
-+ else
- return -ENODEV;
-- }
-+
-+ out_db = BIT_ULL(ntb_peer_port_number(pp->ntb, pidx));
-
- spin_lock(&pp->lock);
- pp->out_pidx = pidx;
-@@ -303,7 +302,7 @@ static void pp_init_flds(struct pp_ctx *pp)
- break;
- }
-
-- pp->in_db = BIT_ULL(pidx);
-+ pp->in_db = BIT_ULL(lport);
- pp->pmask = GENMASK_ULL(pidx, 0) >> 1;
- pp->nmask = GENMASK_ULL(pcnt - 1, pidx);
-
-@@ -432,4 +431,3 @@ static void __exit pp_exit(void)
- debugfs_remove_recursive(pp_dbgfs_topdir);
- }
- module_exit(pp_exit);
--
-diff --git a/drivers/ntb/test/ntb_tool.c b/drivers/ntb/test/ntb_tool.c
-index 69da758fe64c..b7bf3f863d79 100644
---- a/drivers/ntb/test/ntb_tool.c
-+++ b/drivers/ntb/test/ntb_tool.c
-@@ -504,7 +504,7 @@ static ssize_t tool_peer_link_read(struct file *filep, char __user *ubuf,
- buf[1] = '\n';
- buf[2] = '\0';
-
-- return simple_read_from_buffer(ubuf, size, offp, buf, 3);
-+ return simple_read_from_buffer(ubuf, size, offp, buf, 2);
- }
-
- static TOOL_FOPS_RDWR(tool_peer_link_fops,
-@@ -590,7 +590,7 @@ static int tool_setup_mw(struct tool_ctx *tc, int pidx, int widx,
- inmw->size = min_t(resource_size_t, req_size, size);
- inmw->size = round_up(inmw->size, addr_align);
- inmw->size = round_up(inmw->size, size_align);
-- inmw->mm_base = dma_alloc_coherent(&tc->ntb->dev, inmw->size,
-+ inmw->mm_base = dma_alloc_coherent(&tc->ntb->pdev->dev, inmw->size,
- &inmw->dma_base, GFP_KERNEL);
- if (!inmw->mm_base)
- return -ENOMEM;
-@@ -612,7 +612,7 @@ static int tool_setup_mw(struct tool_ctx *tc, int pidx, int widx,
- return 0;
-
- err_free_dma:
-- dma_free_coherent(&tc->ntb->dev, inmw->size, inmw->mm_base,
-+ dma_free_coherent(&tc->ntb->pdev->dev, inmw->size, inmw->mm_base,
- inmw->dma_base);
- inmw->mm_base = NULL;
- inmw->dma_base = 0;
-@@ -629,7 +629,7 @@ static void tool_free_mw(struct tool_ctx *tc, int pidx, int widx)
-
- if (inmw->mm_base != NULL) {
- ntb_mw_clear_trans(tc->ntb, pidx, widx);
-- dma_free_coherent(&tc->ntb->dev, inmw->size,
-+ dma_free_coherent(&tc->ntb->pdev->dev, inmw->size,
- inmw->mm_base, inmw->dma_base);
- }
-
-@@ -1690,4 +1690,3 @@ static void __exit tool_exit(void)
- debugfs_remove_recursive(tool_dbgfs_topdir);
- }
- module_exit(tool_exit);
--
-diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c
-index 5ef4a84c442a..564e3f220ac7 100644
---- a/drivers/nvme/host/fc.c
-+++ b/drivers/nvme/host/fc.c
-@@ -2300,10 +2300,11 @@ nvme_fc_start_fcp_op(struct nvme_fc_ctrl *ctrl, struct nvme_fc_queue *queue,
- opstate = atomic_xchg(&op->state, FCPOP_STATE_COMPLETE);
- __nvme_fc_fcpop_chk_teardowns(ctrl, op, opstate);
-
-- if (!(op->flags & FCOP_FLAGS_AEN))
-+ if (!(op->flags & FCOP_FLAGS_AEN)) {
- nvme_fc_unmap_data(ctrl, op->rq, op);
-+ nvme_cleanup_cmd(op->rq);
-+ }
-
-- nvme_cleanup_cmd(op->rq);
- nvme_fc_ctrl_put(ctrl);
-
- if (ctrl->rport->remoteport.port_state == FC_OBJSTATE_ONLINE &&
-diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
-index 076bdd90c922..4ad629eb3bc6 100644
---- a/drivers/nvme/host/pci.c
-+++ b/drivers/nvme/host/pci.c
-@@ -2958,9 +2958,15 @@ static int nvme_suspend(struct device *dev)
- * the PCI bus layer to put it into D3 in order to take the PCIe link
- * down, so as to allow the platform to achieve its minimum low-power
- * state (which may not be possible if the link is up).
-+ *
-+ * If a host memory buffer is enabled, shut down the device as the NVMe
-+ * specification allows the device to access the host memory buffer in
-+ * host DRAM from all power states, but hosts will fail access to DRAM
-+ * during S3.
- */
- if (pm_suspend_via_firmware() || !ctrl->npss ||
- !pcie_aspm_enabled(pdev) ||
-+ ndev->nr_host_mem_descs ||
- (ndev->ctrl.quirks & NVME_QUIRK_SIMPLE_SUSPEND))
- return nvme_disable_prepare_reset(ndev, true);
-
-diff --git a/drivers/nvmem/core.c b/drivers/nvmem/core.c
-index 05c6ae4b0b97..a8300202a7fb 100644
---- a/drivers/nvmem/core.c
-+++ b/drivers/nvmem/core.c
-@@ -66,6 +66,30 @@ static LIST_HEAD(nvmem_lookup_list);
-
- static BLOCKING_NOTIFIER_HEAD(nvmem_notifier);
-
-+static int nvmem_reg_read(struct nvmem_device *nvmem, unsigned int offset,
-+ void *val, size_t bytes)
-+{
-+ if (nvmem->reg_read)
-+ return nvmem->reg_read(nvmem->priv, offset, val, bytes);
-+
-+ return -EINVAL;
-+}
-+
-+static int nvmem_reg_write(struct nvmem_device *nvmem, unsigned int offset,
-+ void *val, size_t bytes)
-+{
-+ int ret;
-+
-+ if (nvmem->reg_write) {
-+ gpiod_set_value_cansleep(nvmem->wp_gpio, 0);
-+ ret = nvmem->reg_write(nvmem->priv, offset, val, bytes);
-+ gpiod_set_value_cansleep(nvmem->wp_gpio, 1);
-+ return ret;
-+ }
-+
-+ return -EINVAL;
-+}
-+
- #ifdef CONFIG_NVMEM_SYSFS
- static const char * const nvmem_type_str[] = {
- [NVMEM_TYPE_UNKNOWN] = "Unknown",
-@@ -122,7 +146,7 @@ static ssize_t bin_attr_nvmem_read(struct file *filp, struct kobject *kobj,
- if (!nvmem->reg_read)
- return -EPERM;
-
-- rc = nvmem->reg_read(nvmem->priv, pos, buf, count);
-+ rc = nvmem_reg_read(nvmem, pos, buf, count);
-
- if (rc)
- return rc;
-@@ -159,7 +183,7 @@ static ssize_t bin_attr_nvmem_write(struct file *filp, struct kobject *kobj,
- if (!nvmem->reg_write)
- return -EPERM;
-
-- rc = nvmem->reg_write(nvmem->priv, pos, buf, count);
-+ rc = nvmem_reg_write(nvmem, pos, buf, count);
-
- if (rc)
- return rc;
-@@ -311,30 +335,6 @@ static void nvmem_sysfs_remove_compat(struct nvmem_device *nvmem,
-
- #endif /* CONFIG_NVMEM_SYSFS */
-
--static int nvmem_reg_read(struct nvmem_device *nvmem, unsigned int offset,
-- void *val, size_t bytes)
--{
-- if (nvmem->reg_read)
-- return nvmem->reg_read(nvmem->priv, offset, val, bytes);
--
-- return -EINVAL;
--}
--
--static int nvmem_reg_write(struct nvmem_device *nvmem, unsigned int offset,
-- void *val, size_t bytes)
--{
-- int ret;
--
-- if (nvmem->reg_write) {
-- gpiod_set_value_cansleep(nvmem->wp_gpio, 0);
-- ret = nvmem->reg_write(nvmem->priv, offset, val, bytes);
-- gpiod_set_value_cansleep(nvmem->wp_gpio, 1);
-- return ret;
-- }
--
-- return -EINVAL;
--}
--
- static void nvmem_release(struct device *dev)
- {
- struct nvmem_device *nvmem = to_nvmem_device(dev);
-diff --git a/drivers/of/kobj.c b/drivers/of/kobj.c
-index c72eef988041..a32e60b024b8 100644
---- a/drivers/of/kobj.c
-+++ b/drivers/of/kobj.c
-@@ -134,8 +134,6 @@ int __of_attach_node_sysfs(struct device_node *np)
- if (!name)
- return -ENOMEM;
-
-- of_node_get(np);
--
- rc = kobject_add(&np->kobj, parent, "%s", name);
- kfree(name);
- if (rc)
-@@ -144,6 +142,7 @@ int __of_attach_node_sysfs(struct device_node *np)
- for_each_property_of_node(np, pp)
- __of_add_property_sysfs(np, pp);
-
-+ of_node_get(np);
- return 0;
- }
-
-diff --git a/drivers/of/property.c b/drivers/of/property.c
-index b4916dcc9e72..6dc542af5a70 100644
---- a/drivers/of/property.c
-+++ b/drivers/of/property.c
-@@ -1045,8 +1045,20 @@ static int of_link_to_phandle(struct device *dev, struct device_node *sup_np,
- * Find the device node that contains the supplier phandle. It may be
- * @sup_np or it may be an ancestor of @sup_np.
- */
-- while (sup_np && !of_find_property(sup_np, "compatible", NULL))
-+ while (sup_np) {
-+
-+ /* Don't allow linking to a disabled supplier */
-+ if (!of_device_is_available(sup_np)) {
-+ of_node_put(sup_np);
-+ sup_np = NULL;
-+ }
-+
-+ if (of_find_property(sup_np, "compatible", NULL))
-+ break;
-+
- sup_np = of_get_next_parent(sup_np);
-+ }
-+
- if (!sup_np) {
- dev_dbg(dev, "Not linking to %pOFP - No device\n", tmp_np);
- return -ENODEV;
-@@ -1296,7 +1308,7 @@ static int of_link_to_suppliers(struct device *dev,
- if (of_link_property(dev, con_np, p->name))
- ret = -ENODEV;
-
-- for_each_child_of_node(con_np, child)
-+ for_each_available_child_of_node(con_np, child)
- if (of_link_to_suppliers(dev, child) && !ret)
- ret = -EAGAIN;
-
-diff --git a/drivers/pci/controller/dwc/pci-dra7xx.c b/drivers/pci/controller/dwc/pci-dra7xx.c
-index 3b0e58f2de58..6184ebc9392d 100644
---- a/drivers/pci/controller/dwc/pci-dra7xx.c
-+++ b/drivers/pci/controller/dwc/pci-dra7xx.c
-@@ -840,7 +840,6 @@ static int __init dra7xx_pcie_probe(struct platform_device *pdev)
- struct phy **phy;
- struct device_link **link;
- void __iomem *base;
-- struct resource *res;
- struct dw_pcie *pci;
- struct dra7xx_pcie *dra7xx;
- struct device *dev = &pdev->dev;
-@@ -877,10 +876,9 @@ static int __init dra7xx_pcie_probe(struct platform_device *pdev)
- return irq;
- }
-
-- res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "ti_conf");
-- base = devm_ioremap(dev, res->start, resource_size(res));
-- if (!base)
-- return -ENOMEM;
-+ base = devm_platform_ioremap_resource_byname(pdev, "ti_conf");
-+ if (IS_ERR(base))
-+ return PTR_ERR(base);
-
- phy_count = of_property_count_strings(np, "phy-names");
- if (phy_count < 0) {
-diff --git a/drivers/pci/controller/dwc/pci-meson.c b/drivers/pci/controller/dwc/pci-meson.c
-index 3715dceca1bf..ca59ba9e0ecd 100644
---- a/drivers/pci/controller/dwc/pci-meson.c
-+++ b/drivers/pci/controller/dwc/pci-meson.c
-@@ -289,11 +289,11 @@ static void meson_pcie_init_dw(struct meson_pcie *mp)
- meson_cfg_writel(mp, val, PCIE_CFG0);
-
- val = meson_elb_readl(mp, PCIE_PORT_LINK_CTRL_OFF);
-- val &= ~LINK_CAPABLE_MASK;
-+ val &= ~(LINK_CAPABLE_MASK | FAST_LINK_MODE);
- meson_elb_writel(mp, val, PCIE_PORT_LINK_CTRL_OFF);
-
- val = meson_elb_readl(mp, PCIE_PORT_LINK_CTRL_OFF);
-- val |= LINK_CAPABLE_X1 | FAST_LINK_MODE;
-+ val |= LINK_CAPABLE_X1;
- meson_elb_writel(mp, val, PCIE_PORT_LINK_CTRL_OFF);
-
- val = meson_elb_readl(mp, PCIE_GEN2_CTRL_OFF);
-diff --git a/drivers/pci/controller/dwc/pcie-designware-host.c b/drivers/pci/controller/dwc/pcie-designware-host.c
-index 395feb8ca051..3c43311bb95c 100644
---- a/drivers/pci/controller/dwc/pcie-designware-host.c
-+++ b/drivers/pci/controller/dwc/pcie-designware-host.c
-@@ -264,6 +264,8 @@ int dw_pcie_allocate_domains(struct pcie_port *pp)
- return -ENOMEM;
- }
-
-+ irq_domain_update_bus_token(pp->irq_domain, DOMAIN_BUS_NEXUS);
-+
- pp->msi_domain = pci_msi_create_irq_domain(fwnode,
- &dw_pcie_msi_domain_info,
- pp->irq_domain);
-diff --git a/drivers/pci/controller/pci-aardvark.c b/drivers/pci/controller/pci-aardvark.c
-index 2a20b649f40c..2ecc79c03ade 100644
---- a/drivers/pci/controller/pci-aardvark.c
-+++ b/drivers/pci/controller/pci-aardvark.c
-@@ -9,6 +9,7 @@
- */
-
- #include <linux/delay.h>
-+#include <linux/gpio.h>
- #include <linux/interrupt.h>
- #include <linux/irq.h>
- #include <linux/irqdomain.h>
-@@ -18,6 +19,7 @@
- #include <linux/platform_device.h>
- #include <linux/msi.h>
- #include <linux/of_address.h>
-+#include <linux/of_gpio.h>
- #include <linux/of_pci.h>
-
- #include "../pci.h"
-@@ -40,6 +42,7 @@
- #define PCIE_CORE_LINK_CTRL_STAT_REG 0xd0
- #define PCIE_CORE_LINK_L0S_ENTRY BIT(0)
- #define PCIE_CORE_LINK_TRAINING BIT(5)
-+#define PCIE_CORE_LINK_SPEED_SHIFT 16
- #define PCIE_CORE_LINK_WIDTH_SHIFT 20
- #define PCIE_CORE_ERR_CAPCTL_REG 0x118
- #define PCIE_CORE_ERR_CAPCTL_ECRC_CHK_TX BIT(5)
-@@ -201,7 +204,9 @@ struct advk_pcie {
- struct mutex msi_used_lock;
- u16 msi_msg;
- int root_bus_nr;
-+ int link_gen;
- struct pci_bridge_emul bridge;
-+ struct gpio_desc *reset_gpio;
- };
-
- static inline void advk_writel(struct advk_pcie *pcie, u32 val, u64 reg)
-@@ -225,20 +230,16 @@ static int advk_pcie_link_up(struct advk_pcie *pcie)
-
- static int advk_pcie_wait_for_link(struct advk_pcie *pcie)
- {
-- struct device *dev = &pcie->pdev->dev;
- int retries;
-
- /* check if the link is up or not */
- for (retries = 0; retries < LINK_WAIT_MAX_RETRIES; retries++) {
-- if (advk_pcie_link_up(pcie)) {
-- dev_info(dev, "link up\n");
-+ if (advk_pcie_link_up(pcie))
- return 0;
-- }
-
- usleep_range(LINK_WAIT_USLEEP_MIN, LINK_WAIT_USLEEP_MAX);
- }
-
-- dev_err(dev, "link never came up\n");
- return -ETIMEDOUT;
- }
-
-@@ -253,10 +254,110 @@ static void advk_pcie_wait_for_retrain(struct advk_pcie *pcie)
- }
- }
-
-+static int advk_pcie_train_at_gen(struct advk_pcie *pcie, int gen)
-+{
-+ int ret, neg_gen;
-+ u32 reg;
-+
-+ /* Setup link speed */
-+ reg = advk_readl(pcie, PCIE_CORE_CTRL0_REG);
-+ reg &= ~PCIE_GEN_SEL_MSK;
-+ if (gen == 3)
-+ reg |= SPEED_GEN_3;
-+ else if (gen == 2)
-+ reg |= SPEED_GEN_2;
-+ else
-+ reg |= SPEED_GEN_1;
-+ advk_writel(pcie, reg, PCIE_CORE_CTRL0_REG);
-+
-+ /*
-+ * Enable link training. This is not needed in every call to this
-+ * function, just once suffices, but it does not break anything either.
-+ */
-+ reg = advk_readl(pcie, PCIE_CORE_CTRL0_REG);
-+ reg |= LINK_TRAINING_EN;
-+ advk_writel(pcie, reg, PCIE_CORE_CTRL0_REG);
-+
-+ /*
-+ * Start link training immediately after enabling it.
-+ * This solves problems for some buggy cards.
-+ */
-+ reg = advk_readl(pcie, PCIE_CORE_LINK_CTRL_STAT_REG);
-+ reg |= PCIE_CORE_LINK_TRAINING;
-+ advk_writel(pcie, reg, PCIE_CORE_LINK_CTRL_STAT_REG);
-+
-+ ret = advk_pcie_wait_for_link(pcie);
-+ if (ret)
-+ return ret;
-+
-+ reg = advk_readl(pcie, PCIE_CORE_LINK_CTRL_STAT_REG);
-+ neg_gen = (reg >> PCIE_CORE_LINK_SPEED_SHIFT) & 0xf;
-+
-+ return neg_gen;
-+}
-+
-+static void advk_pcie_train_link(struct advk_pcie *pcie)
-+{
-+ struct device *dev = &pcie->pdev->dev;
-+ int neg_gen = -1, gen;
-+
-+ /*
-+ * Try link training at link gen specified by device tree property
-+ * 'max-link-speed'. If this fails, iteratively train at lower gen.
-+ */
-+ for (gen = pcie->link_gen; gen > 0; --gen) {
-+ neg_gen = advk_pcie_train_at_gen(pcie, gen);
-+ if (neg_gen > 0)
-+ break;
-+ }
-+
-+ if (neg_gen < 0)
-+ goto err;
-+
-+ /*
-+ * After successful training if negotiated gen is lower than requested,
-+ * train again on negotiated gen. This solves some stability issues for
-+ * some buggy gen1 cards.
-+ */
-+ if (neg_gen < gen) {
-+ gen = neg_gen;
-+ neg_gen = advk_pcie_train_at_gen(pcie, gen);
-+ }
-+
-+ if (neg_gen == gen) {
-+ dev_info(dev, "link up at gen %i\n", gen);
-+ return;
-+ }
-+
-+err:
-+ dev_err(dev, "link never came up\n");
-+}
-+
-+static void advk_pcie_issue_perst(struct advk_pcie *pcie)
-+{
-+ u32 reg;
-+
-+ if (!pcie->reset_gpio)
-+ return;
-+
-+ /* PERST does not work for some cards when link training is enabled */
-+ reg = advk_readl(pcie, PCIE_CORE_CTRL0_REG);
-+ reg &= ~LINK_TRAINING_EN;
-+ advk_writel(pcie, reg, PCIE_CORE_CTRL0_REG);
-+
-+ /* 10ms delay is needed for some cards */
-+ dev_info(&pcie->pdev->dev, "issuing PERST via reset GPIO for 10ms\n");
-+ gpiod_set_value_cansleep(pcie->reset_gpio, 1);
-+ usleep_range(10000, 11000);
-+ gpiod_set_value_cansleep(pcie->reset_gpio, 0);
-+}
-+
- static void advk_pcie_setup_hw(struct advk_pcie *pcie)
- {
- u32 reg;
-
-+ advk_pcie_issue_perst(pcie);
-+
- /* Set to Direct mode */
- reg = advk_readl(pcie, CTRL_CONFIG_REG);
- reg &= ~(CTRL_MODE_MASK << CTRL_MODE_SHIFT);
-@@ -288,23 +389,12 @@ static void advk_pcie_setup_hw(struct advk_pcie *pcie)
- PCIE_CORE_CTRL2_TD_ENABLE;
- advk_writel(pcie, reg, PCIE_CORE_CTRL2_REG);
-
-- /* Set GEN2 */
-- reg = advk_readl(pcie, PCIE_CORE_CTRL0_REG);
-- reg &= ~PCIE_GEN_SEL_MSK;
-- reg |= SPEED_GEN_2;
-- advk_writel(pcie, reg, PCIE_CORE_CTRL0_REG);
--
- /* Set lane X1 */
- reg = advk_readl(pcie, PCIE_CORE_CTRL0_REG);
- reg &= ~LANE_CNT_MSK;
- reg |= LANE_COUNT_1;
- advk_writel(pcie, reg, PCIE_CORE_CTRL0_REG);
-
-- /* Enable link training */
-- reg = advk_readl(pcie, PCIE_CORE_CTRL0_REG);
-- reg |= LINK_TRAINING_EN;
-- advk_writel(pcie, reg, PCIE_CORE_CTRL0_REG);
--
- /* Enable MSI */
- reg = advk_readl(pcie, PCIE_CORE_CTRL2_REG);
- reg |= PCIE_CORE_CTRL2_MSI_ENABLE;
-@@ -340,22 +430,14 @@ static void advk_pcie_setup_hw(struct advk_pcie *pcie)
-
- /*
- * PERST# signal could have been asserted by pinctrl subsystem before
-- * probe() callback has been called, making the endpoint going into
-+ * probe() callback has been called or issued explicitly by reset gpio
-+ * function advk_pcie_issue_perst(), making the endpoint going into
- * fundamental reset. As required by PCI Express spec a delay for at
- * least 100ms after such a reset before link training is needed.
- */
- msleep(PCI_PM_D3COLD_WAIT);
-
-- /* Start link training */
-- reg = advk_readl(pcie, PCIE_CORE_LINK_CTRL_STAT_REG);
-- reg |= PCIE_CORE_LINK_TRAINING;
-- advk_writel(pcie, reg, PCIE_CORE_LINK_CTRL_STAT_REG);
--
-- advk_pcie_wait_for_link(pcie);
--
-- reg = PCIE_CORE_LINK_L0S_ENTRY |
-- (1 << PCIE_CORE_LINK_WIDTH_SHIFT);
-- advk_writel(pcie, reg, PCIE_CORE_LINK_CTRL_STAT_REG);
-+ advk_pcie_train_link(pcie);
-
- reg = advk_readl(pcie, PCIE_CORE_CMD_STATUS_REG);
- reg |= PCIE_CORE_CMD_MEM_ACCESS_EN |
-@@ -989,6 +1071,28 @@ static int advk_pcie_probe(struct platform_device *pdev)
- }
- pcie->root_bus_nr = bus->start;
-
-+ pcie->reset_gpio = devm_gpiod_get_from_of_node(dev, dev->of_node,
-+ "reset-gpios", 0,
-+ GPIOD_OUT_LOW,
-+ "pcie1-reset");
-+ ret = PTR_ERR_OR_ZERO(pcie->reset_gpio);
-+ if (ret) {
-+ if (ret == -ENOENT) {
-+ pcie->reset_gpio = NULL;
-+ } else {
-+ if (ret != -EPROBE_DEFER)
-+ dev_err(dev, "Failed to get reset-gpio: %i\n",
-+ ret);
-+ return ret;
-+ }
-+ }
-+
-+ ret = of_pci_get_max_link_speed(dev->of_node);
-+ if (ret <= 0 || ret > 3)
-+ pcie->link_gen = 3;
-+ else
-+ pcie->link_gen = ret;
-+
- advk_pcie_setup_hw(pcie);
-
- advk_sw_pci_bridge_init(pcie);
-diff --git a/drivers/pci/controller/pci-v3-semi.c b/drivers/pci/controller/pci-v3-semi.c
-index bd05221f5a22..ddcb4571a79b 100644
---- a/drivers/pci/controller/pci-v3-semi.c
-+++ b/drivers/pci/controller/pci-v3-semi.c
-@@ -720,7 +720,7 @@ static int v3_pci_probe(struct platform_device *pdev)
- int irq;
- int ret;
-
-- host = pci_alloc_host_bridge(sizeof(*v3));
-+ host = devm_pci_alloc_host_bridge(dev, sizeof(*v3));
- if (!host)
- return -ENOMEM;
-
-diff --git a/drivers/pci/controller/pcie-brcmstb.c b/drivers/pci/controller/pcie-brcmstb.c
-index 6d79d14527a6..2297910bf6e4 100644
---- a/drivers/pci/controller/pcie-brcmstb.c
-+++ b/drivers/pci/controller/pcie-brcmstb.c
-@@ -54,11 +54,11 @@
-
- #define PCIE_MISC_CPU_2_PCIE_MEM_WIN0_LO 0x400c
- #define PCIE_MEM_WIN0_LO(win) \
-- PCIE_MISC_CPU_2_PCIE_MEM_WIN0_LO + ((win) * 4)
-+ PCIE_MISC_CPU_2_PCIE_MEM_WIN0_LO + ((win) * 8)
-
- #define PCIE_MISC_CPU_2_PCIE_MEM_WIN0_HI 0x4010
- #define PCIE_MEM_WIN0_HI(win) \
-- PCIE_MISC_CPU_2_PCIE_MEM_WIN0_HI + ((win) * 4)
-+ PCIE_MISC_CPU_2_PCIE_MEM_WIN0_HI + ((win) * 8)
-
- #define PCIE_MISC_RC_BAR1_CONFIG_LO 0x402c
- #define PCIE_MISC_RC_BAR1_CONFIG_LO_SIZE_MASK 0x1f
-@@ -697,6 +697,7 @@ static int brcm_pcie_setup(struct brcm_pcie *pcie)
-
- /* Reset the bridge */
- brcm_pcie_bridge_sw_init_set(pcie, 1);
-+ brcm_pcie_perst_set(pcie, 1);
-
- usleep_range(100, 200);
-
-diff --git a/drivers/pci/controller/pcie-rcar.c b/drivers/pci/controller/pcie-rcar.c
-index 759c6542c5c8..1bae6a4abaae 100644
---- a/drivers/pci/controller/pcie-rcar.c
-+++ b/drivers/pci/controller/pcie-rcar.c
-@@ -333,11 +333,12 @@ static struct pci_ops rcar_pcie_ops = {
- };
-
- static void rcar_pcie_setup_window(int win, struct rcar_pcie *pcie,
-- struct resource *res)
-+ struct resource_entry *window)
- {
- /* Setup PCIe address space mappings for each resource */
- resource_size_t size;
- resource_size_t res_start;
-+ struct resource *res = window->res;
- u32 mask;
-
- rcar_pci_write_reg(pcie, 0x00000000, PCIEPTCTLR(win));
-@@ -351,9 +352,9 @@ static void rcar_pcie_setup_window(int win, struct rcar_pcie *pcie,
- rcar_pci_write_reg(pcie, mask << 7, PCIEPAMR(win));
-
- if (res->flags & IORESOURCE_IO)
-- res_start = pci_pio_to_address(res->start);
-+ res_start = pci_pio_to_address(res->start) - window->offset;
- else
-- res_start = res->start;
-+ res_start = res->start - window->offset;
-
- rcar_pci_write_reg(pcie, upper_32_bits(res_start), PCIEPAUR(win));
- rcar_pci_write_reg(pcie, lower_32_bits(res_start) & ~0x7F,
-@@ -382,7 +383,7 @@ static int rcar_pcie_setup(struct list_head *resource, struct rcar_pcie *pci)
- switch (resource_type(res)) {
- case IORESOURCE_IO:
- case IORESOURCE_MEM:
-- rcar_pcie_setup_window(i, pci, res);
-+ rcar_pcie_setup_window(i, pci, win);
- i++;
- break;
- case IORESOURCE_BUS:
-diff --git a/drivers/pci/controller/vmd.c b/drivers/pci/controller/vmd.c
-index dac91d60701d..e386d4eac407 100644
---- a/drivers/pci/controller/vmd.c
-+++ b/drivers/pci/controller/vmd.c
-@@ -445,9 +445,11 @@ static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features)
- if (!membar2)
- return -ENOMEM;
- offset[0] = vmd->dev->resource[VMD_MEMBAR1].start -
-- readq(membar2 + MB2_SHADOW_OFFSET);
-+ (readq(membar2 + MB2_SHADOW_OFFSET) &
-+ PCI_BASE_ADDRESS_MEM_MASK);
- offset[1] = vmd->dev->resource[VMD_MEMBAR2].start -
-- readq(membar2 + MB2_SHADOW_OFFSET + 8);
-+ (readq(membar2 + MB2_SHADOW_OFFSET + 8) &
-+ PCI_BASE_ADDRESS_MEM_MASK);
- pci_iounmap(vmd->dev, membar2);
- }
- }
-diff --git a/drivers/pci/endpoint/functions/pci-epf-test.c b/drivers/pci/endpoint/functions/pci-epf-test.c
-index 60330f3e3751..c89a9561439f 100644
---- a/drivers/pci/endpoint/functions/pci-epf-test.c
-+++ b/drivers/pci/endpoint/functions/pci-epf-test.c
-@@ -187,6 +187,9 @@ static int pci_epf_test_init_dma_chan(struct pci_epf_test *epf_test)
- */
- static void pci_epf_test_clean_dma_chan(struct pci_epf_test *epf_test)
- {
-+ if (!epf_test->dma_supported)
-+ return;
-+
- dma_release_channel(epf_test->dma_chan);
- epf_test->dma_chan = NULL;
- }
-diff --git a/drivers/pci/pci-bridge-emul.c b/drivers/pci/pci-bridge-emul.c
-index 4f4f54bc732e..faa414655f33 100644
---- a/drivers/pci/pci-bridge-emul.c
-+++ b/drivers/pci/pci-bridge-emul.c
-@@ -185,8 +185,8 @@ static const struct pci_bridge_reg_behavior pcie_cap_regs_behavior[] = {
- * RO, the rest is reserved
- */
- .w1c = GENMASK(19, 16),
-- .ro = GENMASK(20, 19),
-- .rsvd = GENMASK(31, 21),
-+ .ro = GENMASK(21, 20),
-+ .rsvd = GENMASK(31, 22),
- },
-
- [PCI_EXP_LNKCAP / 4] = {
-@@ -226,7 +226,7 @@ static const struct pci_bridge_reg_behavior pcie_cap_regs_behavior[] = {
- PCI_EXP_SLTSTA_CC | PCI_EXP_SLTSTA_DLLSC) << 16,
- .ro = (PCI_EXP_SLTSTA_MRLSS | PCI_EXP_SLTSTA_PDS |
- PCI_EXP_SLTSTA_EIS) << 16,
-- .rsvd = GENMASK(15, 12) | (GENMASK(15, 9) << 16),
-+ .rsvd = GENMASK(15, 13) | (GENMASK(15, 9) << 16),
- },
-
- [PCI_EXP_RTCTL / 4] = {
-diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
-index 6d3234f75692..809f2584e338 100644
---- a/drivers/pci/pci.c
-+++ b/drivers/pci/pci.c
-@@ -4660,7 +4660,8 @@ static int pci_pm_reset(struct pci_dev *dev, int probe)
- * pcie_wait_for_link_delay - Wait until link is active or inactive
- * @pdev: Bridge device
- * @active: waiting for active or inactive?
-- * @delay: Delay to wait after link has become active (in ms)
-+ * @delay: Delay to wait after link has become active (in ms). Specify %0
-+ * for no delay.
- *
- * Use this to wait till link becomes active or inactive.
- */
-@@ -4701,7 +4702,7 @@ static bool pcie_wait_for_link_delay(struct pci_dev *pdev, bool active,
- msleep(10);
- timeout -= 10;
- }
-- if (active && ret)
-+ if (active && ret && delay)
- msleep(delay);
- else if (ret != active)
- pci_info(pdev, "Data Link Layer Link Active not %s in 1000 msec\n",
-@@ -4822,17 +4823,28 @@ void pci_bridge_wait_for_secondary_bus(struct pci_dev *dev)
- if (!pcie_downstream_port(dev))
- return;
-
-- if (pcie_get_speed_cap(dev) <= PCIE_SPEED_5_0GT) {
-- pci_dbg(dev, "waiting %d ms for downstream link\n", delay);
-- msleep(delay);
-- } else {
-- pci_dbg(dev, "waiting %d ms for downstream link, after activation\n",
-- delay);
-- if (!pcie_wait_for_link_delay(dev, true, delay)) {
-+ /*
-+ * Per PCIe r5.0, sec 6.6.1, for downstream ports that support
-+ * speeds > 5 GT/s, we must wait for link training to complete
-+ * before the mandatory delay.
-+ *
-+ * We can only tell when link training completes via DLL Link
-+ * Active, which is required for downstream ports that support
-+ * speeds > 5 GT/s (sec 7.5.3.6). Unfortunately some common
-+ * devices do not implement Link Active reporting even when it's
-+ * required, so we'll check for that directly instead of checking
-+ * the supported link speed. We assume devices without Link Active
-+ * reporting can train in 100 ms regardless of speed.
-+ */
-+ if (dev->link_active_reporting) {
-+ pci_dbg(dev, "waiting for link to train\n");
-+ if (!pcie_wait_for_link_delay(dev, true, 0)) {
- /* Did not train, no need to wait any further */
- return;
- }
- }
-+ pci_dbg(child, "waiting %d ms to become accessible\n", delay);
-+ msleep(delay);
-
- if (!pci_device_is_present(child)) {
- pci_dbg(child, "waiting additional %d ms to become accessible\n", delay);
-diff --git a/drivers/pci/pcie/aspm.c b/drivers/pci/pcie/aspm.c
-index 2378ed692534..b17e5ffd31b1 100644
---- a/drivers/pci/pcie/aspm.c
-+++ b/drivers/pci/pcie/aspm.c
-@@ -628,16 +628,6 @@ static void pcie_aspm_cap_init(struct pcie_link_state *link, int blacklist)
-
- /* Setup initial capable state. Will be updated later */
- link->aspm_capable = link->aspm_support;
-- /*
-- * If the downstream component has pci bridge function, don't
-- * do ASPM for now.
-- */
-- list_for_each_entry(child, &linkbus->devices, bus_list) {
-- if (pci_pcie_type(child) == PCI_EXP_TYPE_PCI_BRIDGE) {
-- link->aspm_disable = ASPM_STATE_ALL;
-- break;
-- }
-- }
-
- /* Get and check endpoint acceptable latencies */
- list_for_each_entry(child, &linkbus->devices, bus_list) {
-diff --git a/drivers/pci/pcie/ptm.c b/drivers/pci/pcie/ptm.c
-index 9361f3aa26ab..357a454cafa0 100644
---- a/drivers/pci/pcie/ptm.c
-+++ b/drivers/pci/pcie/ptm.c
-@@ -39,10 +39,6 @@ void pci_ptm_init(struct pci_dev *dev)
- if (!pci_is_pcie(dev))
- return;
-
-- pos = pci_find_ext_capability(dev, PCI_EXT_CAP_ID_PTM);
-- if (!pos)
-- return;
--
- /*
- * Enable PTM only on interior devices (root ports, switch ports,
- * etc.) on the assumption that it causes no link traffic until an
-@@ -52,6 +48,23 @@ void pci_ptm_init(struct pci_dev *dev)
- pci_pcie_type(dev) == PCI_EXP_TYPE_RC_END))
- return;
-
-+ /*
-+ * Switch Downstream Ports are not permitted to have a PTM
-+ * capability; their PTM behavior is controlled by the Upstream
-+ * Port (PCIe r5.0, sec 7.9.16).
-+ */
-+ ups = pci_upstream_bridge(dev);
-+ if (pci_pcie_type(dev) == PCI_EXP_TYPE_DOWNSTREAM &&
-+ ups && ups->ptm_enabled) {
-+ dev->ptm_granularity = ups->ptm_granularity;
-+ dev->ptm_enabled = 1;
-+ return;
-+ }
-+
-+ pos = pci_find_ext_capability(dev, PCI_EXT_CAP_ID_PTM);
-+ if (!pos)
-+ return;
-+
- pci_read_config_dword(dev, pos + PCI_PTM_CAP, &cap);
- local_clock = (cap & PCI_PTM_GRANULARITY_MASK) >> 8;
-
-@@ -61,7 +74,6 @@ void pci_ptm_init(struct pci_dev *dev)
- * the spec recommendation (PCIe r3.1, sec 7.32.3), select the
- * furthest upstream Time Source as the PTM Root.
- */
-- ups = pci_upstream_bridge(dev);
- if (ups && ups->ptm_enabled) {
- ctrl = PCI_PTM_CTRL_ENABLE;
- if (ups->ptm_granularity == 0)
-diff --git a/drivers/pci/probe.c b/drivers/pci/probe.c
-index c7e3a8267521..b59a4b0f5f16 100644
---- a/drivers/pci/probe.c
-+++ b/drivers/pci/probe.c
-@@ -909,9 +909,10 @@ static int pci_register_host_bridge(struct pci_host_bridge *bridge)
- goto free;
-
- err = device_register(&bridge->dev);
-- if (err)
-+ if (err) {
- put_device(&bridge->dev);
--
-+ goto free;
-+ }
- bus->bridge = get_device(&bridge->dev);
- device_enable_async_suspend(bus->bridge);
- pci_set_bus_of_node(bus);
-diff --git a/drivers/pci/setup-res.c b/drivers/pci/setup-res.c
-index d8ca40a97693..d21fa04fa44d 100644
---- a/drivers/pci/setup-res.c
-+++ b/drivers/pci/setup-res.c
-@@ -439,10 +439,11 @@ int pci_resize_resource(struct pci_dev *dev, int resno, int size)
- res->end = res->start + pci_rebar_size_to_bytes(size) - 1;
-
- /* Check if the new config works by trying to assign everything. */
-- ret = pci_reassign_bridge_resources(dev->bus->self, res->flags);
-- if (ret)
-- goto error_resize;
--
-+ if (dev->bus->self) {
-+ ret = pci_reassign_bridge_resources(dev->bus->self, res->flags);
-+ if (ret)
-+ goto error_resize;
-+ }
- return 0;
-
- error_resize:
-diff --git a/drivers/perf/hisilicon/hisi_uncore_l3c_pmu.c b/drivers/perf/hisilicon/hisi_uncore_l3c_pmu.c
-index 1151e99b241c..479de4be99eb 100644
---- a/drivers/perf/hisilicon/hisi_uncore_l3c_pmu.c
-+++ b/drivers/perf/hisilicon/hisi_uncore_l3c_pmu.c
-@@ -35,7 +35,7 @@
- /* L3C has 8-counters */
- #define L3C_NR_COUNTERS 0x8
-
--#define L3C_PERF_CTRL_EN 0x20000
-+#define L3C_PERF_CTRL_EN 0x10000
- #define L3C_EVTYPE_NONE 0xff
-
- /*
-diff --git a/drivers/phy/broadcom/phy-bcm-sr-usb.c b/drivers/phy/broadcom/phy-bcm-sr-usb.c
-index fe6c58910e4c..7c7862b4f41f 100644
---- a/drivers/phy/broadcom/phy-bcm-sr-usb.c
-+++ b/drivers/phy/broadcom/phy-bcm-sr-usb.c
-@@ -16,8 +16,6 @@ enum bcm_usb_phy_version {
- };
-
- enum bcm_usb_phy_reg {
-- PLL_NDIV_FRAC,
-- PLL_NDIV_INT,
- PLL_CTRL,
- PHY_CTRL,
- PHY_PLL_CTRL,
-@@ -31,18 +29,11 @@ static const u8 bcm_usb_combo_phy_ss[] = {
- };
-
- static const u8 bcm_usb_combo_phy_hs[] = {
-- [PLL_NDIV_FRAC] = 0x04,
-- [PLL_NDIV_INT] = 0x08,
- [PLL_CTRL] = 0x0c,
- [PHY_CTRL] = 0x10,
- };
-
--#define HSPLL_NDIV_INT_VAL 0x13
--#define HSPLL_NDIV_FRAC_VAL 0x1005
--
- static const u8 bcm_usb_hs_phy[] = {
-- [PLL_NDIV_FRAC] = 0x0,
-- [PLL_NDIV_INT] = 0x4,
- [PLL_CTRL] = 0x8,
- [PHY_CTRL] = 0xc,
- };
-@@ -52,7 +43,6 @@ enum pll_ctrl_bits {
- SSPLL_SUSPEND_EN,
- PLL_SEQ_START,
- PLL_LOCK,
-- PLL_PDIV,
- };
-
- static const u8 u3pll_ctrl[] = {
-@@ -66,29 +56,17 @@ static const u8 u3pll_ctrl[] = {
- #define HSPLL_PDIV_VAL 0x1
-
- static const u8 u2pll_ctrl[] = {
-- [PLL_PDIV] = 1,
- [PLL_RESETB] = 5,
- [PLL_LOCK] = 6,
- };
-
- enum bcm_usb_phy_ctrl_bits {
- CORERDY,
-- AFE_LDO_PWRDWNB,
-- AFE_PLL_PWRDWNB,
-- AFE_BG_PWRDWNB,
-- PHY_ISO,
- PHY_RESETB,
- PHY_PCTL,
- };
-
- #define PHY_PCTL_MASK 0xffff
--/*
-- * 0x0806 of PCTL_VAL has below bits set
-- * BIT-8 : refclk divider 1
-- * BIT-3:2: device mode; mode is not effect
-- * BIT-1: soft reset active low
-- */
--#define HSPHY_PCTL_VAL 0x0806
- #define SSPHY_PCTL_VAL 0x0006
-
- static const u8 u3phy_ctrl[] = {
-@@ -98,10 +76,6 @@ static const u8 u3phy_ctrl[] = {
-
- static const u8 u2phy_ctrl[] = {
- [CORERDY] = 0,
-- [AFE_LDO_PWRDWNB] = 1,
-- [AFE_PLL_PWRDWNB] = 2,
-- [AFE_BG_PWRDWNB] = 3,
-- [PHY_ISO] = 4,
- [PHY_RESETB] = 5,
- [PHY_PCTL] = 6,
- };
-@@ -186,38 +160,13 @@ static int bcm_usb_hs_phy_init(struct bcm_usb_phy_cfg *phy_cfg)
- int ret = 0;
- void __iomem *regs = phy_cfg->regs;
- const u8 *offset;
-- u32 rd_data;
-
- offset = phy_cfg->offset;
-
-- writel(HSPLL_NDIV_INT_VAL, regs + offset[PLL_NDIV_INT]);
-- writel(HSPLL_NDIV_FRAC_VAL, regs + offset[PLL_NDIV_FRAC]);
--
-- rd_data = readl(regs + offset[PLL_CTRL]);
-- rd_data &= ~(HSPLL_PDIV_MASK << u2pll_ctrl[PLL_PDIV]);
-- rd_data |= (HSPLL_PDIV_VAL << u2pll_ctrl[PLL_PDIV]);
-- writel(rd_data, regs + offset[PLL_CTRL]);
--
-- /* Set Core Ready high */
-- bcm_usb_reg32_setbits(regs + offset[PHY_CTRL],
-- BIT(u2phy_ctrl[CORERDY]));
--
-- /* Maximum timeout for Core Ready done */
-- msleep(30);
--
-+ bcm_usb_reg32_clrbits(regs + offset[PLL_CTRL],
-+ BIT(u2pll_ctrl[PLL_RESETB]));
- bcm_usb_reg32_setbits(regs + offset[PLL_CTRL],
- BIT(u2pll_ctrl[PLL_RESETB]));
-- bcm_usb_reg32_setbits(regs + offset[PHY_CTRL],
-- BIT(u2phy_ctrl[PHY_RESETB]));
--
--
-- rd_data = readl(regs + offset[PHY_CTRL]);
-- rd_data &= ~(PHY_PCTL_MASK << u2phy_ctrl[PHY_PCTL]);
-- rd_data |= (HSPHY_PCTL_VAL << u2phy_ctrl[PHY_PCTL]);
-- writel(rd_data, regs + offset[PHY_CTRL]);
--
-- /* Maximum timeout for PLL reset done */
-- msleep(30);
-
- ret = bcm_usb_pll_lock_check(regs + offset[PLL_CTRL],
- BIT(u2pll_ctrl[PLL_LOCK]));
-diff --git a/drivers/phy/cadence/phy-cadence-sierra.c b/drivers/phy/cadence/phy-cadence-sierra.c
-index a5c08e5bd2bf..faed652b73f7 100644
---- a/drivers/phy/cadence/phy-cadence-sierra.c
-+++ b/drivers/phy/cadence/phy-cadence-sierra.c
-@@ -685,10 +685,10 @@ static struct cdns_reg_pairs cdns_usb_cmn_regs_ext_ssc[] = {
- static struct cdns_reg_pairs cdns_usb_ln_regs_ext_ssc[] = {
- {0xFE0A, SIERRA_DET_STANDEC_A_PREG},
- {0x000F, SIERRA_DET_STANDEC_B_PREG},
-- {0x00A5, SIERRA_DET_STANDEC_C_PREG},
-+ {0x55A5, SIERRA_DET_STANDEC_C_PREG},
- {0x69ad, SIERRA_DET_STANDEC_D_PREG},
- {0x0241, SIERRA_DET_STANDEC_E_PREG},
-- {0x0010, SIERRA_PSM_LANECAL_DLY_A1_RESETS_PREG},
-+ {0x0110, SIERRA_PSM_LANECAL_DLY_A1_RESETS_PREG},
- {0x0014, SIERRA_PSM_A0IN_TMR_PREG},
- {0xCF00, SIERRA_PSM_DIAG_PREG},
- {0x001F, SIERRA_PSC_TX_A0_PREG},
-@@ -696,7 +696,7 @@ static struct cdns_reg_pairs cdns_usb_ln_regs_ext_ssc[] = {
- {0x0003, SIERRA_PSC_TX_A2_PREG},
- {0x0003, SIERRA_PSC_TX_A3_PREG},
- {0x0FFF, SIERRA_PSC_RX_A0_PREG},
-- {0x0619, SIERRA_PSC_RX_A1_PREG},
-+ {0x0003, SIERRA_PSC_RX_A1_PREG},
- {0x0003, SIERRA_PSC_RX_A2_PREG},
- {0x0001, SIERRA_PSC_RX_A3_PREG},
- {0x0001, SIERRA_PLLCTRL_SUBRATE_PREG},
-@@ -705,19 +705,19 @@ static struct cdns_reg_pairs cdns_usb_ln_regs_ext_ssc[] = {
- {0x00CA, SIERRA_CLKPATH_BIASTRIM_PREG},
- {0x2512, SIERRA_DFE_BIASTRIM_PREG},
- {0x0000, SIERRA_DRVCTRL_ATTEN_PREG},
-- {0x873E, SIERRA_CLKPATHCTRL_TMR_PREG},
-- {0x03CF, SIERRA_RX_CREQ_FLTR_A_MODE1_PREG},
-- {0x01CE, SIERRA_RX_CREQ_FLTR_A_MODE0_PREG},
-+ {0x823E, SIERRA_CLKPATHCTRL_TMR_PREG},
-+ {0x078F, SIERRA_RX_CREQ_FLTR_A_MODE1_PREG},
-+ {0x078F, SIERRA_RX_CREQ_FLTR_A_MODE0_PREG},
- {0x7B3C, SIERRA_CREQ_CCLKDET_MODE01_PREG},
-- {0x033F, SIERRA_RX_CTLE_MAINTENANCE_PREG},
-+ {0x023C, SIERRA_RX_CTLE_MAINTENANCE_PREG},
- {0x3232, SIERRA_CREQ_FSMCLK_SEL_PREG},
- {0x0000, SIERRA_CREQ_EQ_CTRL_PREG},
-- {0x8000, SIERRA_CREQ_SPARE_PREG},
-+ {0x0000, SIERRA_CREQ_SPARE_PREG},
- {0xCC44, SIERRA_CREQ_EQ_OPEN_EYE_THRESH_PREG},
-- {0x8453, SIERRA_CTLELUT_CTRL_PREG},
-- {0x4110, SIERRA_DFE_ECMP_RATESEL_PREG},
-- {0x4110, SIERRA_DFE_SMP_RATESEL_PREG},
-- {0x0002, SIERRA_DEQ_PHALIGN_CTRL},
-+ {0x8452, SIERRA_CTLELUT_CTRL_PREG},
-+ {0x4121, SIERRA_DFE_ECMP_RATESEL_PREG},
-+ {0x4121, SIERRA_DFE_SMP_RATESEL_PREG},
-+ {0x0003, SIERRA_DEQ_PHALIGN_CTRL},
- {0x3200, SIERRA_DEQ_CONCUR_CTRL1_PREG},
- {0x5064, SIERRA_DEQ_CONCUR_CTRL2_PREG},
- {0x0030, SIERRA_DEQ_EPIPWR_CTRL2_PREG},
-@@ -725,7 +725,7 @@ static struct cdns_reg_pairs cdns_usb_ln_regs_ext_ssc[] = {
- {0x5A5A, SIERRA_DEQ_ERRCMP_CTRL_PREG},
- {0x02F5, SIERRA_DEQ_OFFSET_CTRL_PREG},
- {0x02F5, SIERRA_DEQ_GAIN_CTRL_PREG},
-- {0x9A8A, SIERRA_DEQ_VGATUNE_CTRL_PREG},
-+ {0x9999, SIERRA_DEQ_VGATUNE_CTRL_PREG},
- {0x0014, SIERRA_DEQ_GLUT0},
- {0x0014, SIERRA_DEQ_GLUT1},
- {0x0014, SIERRA_DEQ_GLUT2},
-@@ -772,6 +772,7 @@ static struct cdns_reg_pairs cdns_usb_ln_regs_ext_ssc[] = {
- {0x000F, SIERRA_LFPSFILT_NS_PREG},
- {0x0009, SIERRA_LFPSFILT_RD_PREG},
- {0x0001, SIERRA_LFPSFILT_MP_PREG},
-+ {0x6013, SIERRA_SIGDET_SUPPORT_PREG},
- {0x8013, SIERRA_SDFILT_H2L_A_PREG},
- {0x8009, SIERRA_SDFILT_L2H_PREG},
- {0x0024, SIERRA_RXBUFFER_CTLECTRL_PREG},
-diff --git a/drivers/phy/ti/phy-j721e-wiz.c b/drivers/phy/ti/phy-j721e-wiz.c
-index 7b51045df783..c8e4ff341cef 100644
---- a/drivers/phy/ti/phy-j721e-wiz.c
-+++ b/drivers/phy/ti/phy-j721e-wiz.c
-@@ -794,8 +794,10 @@ static int wiz_probe(struct platform_device *pdev)
- }
-
- base = devm_ioremap(dev, res.start, resource_size(&res));
-- if (!base)
-+ if (!base) {
-+ ret = -ENOMEM;
- goto err_addr_to_resource;
-+ }
-
- regmap = devm_regmap_init_mmio(dev, base, &wiz_regmap_config);
- if (IS_ERR(regmap)) {
-@@ -812,6 +814,7 @@ static int wiz_probe(struct platform_device *pdev)
-
- if (num_lanes > WIZ_MAX_LANES) {
- dev_err(dev, "Cannot support %d lanes\n", num_lanes);
-+ ret = -ENODEV;
- goto err_addr_to_resource;
- }
-
-@@ -897,6 +900,7 @@ static int wiz_probe(struct platform_device *pdev)
- serdes_pdev = of_platform_device_create(child_node, NULL, dev);
- if (!serdes_pdev) {
- dev_WARN(dev, "Unable to create SERDES platform device\n");
-+ ret = -ENOMEM;
- goto err_pdev_create;
- }
- wiz->serdes_pdev = serdes_pdev;
-diff --git a/drivers/pinctrl/bcm/pinctrl-bcm281xx.c b/drivers/pinctrl/bcm/pinctrl-bcm281xx.c
-index f690fc5cd688..71e666178300 100644
---- a/drivers/pinctrl/bcm/pinctrl-bcm281xx.c
-+++ b/drivers/pinctrl/bcm/pinctrl-bcm281xx.c
-@@ -1406,7 +1406,7 @@ static int __init bcm281xx_pinctrl_probe(struct platform_device *pdev)
- pdata->reg_base = devm_platform_ioremap_resource(pdev, 0);
- if (IS_ERR(pdata->reg_base)) {
- dev_err(&pdev->dev, "Failed to ioremap MEM resource\n");
-- return -ENODEV;
-+ return PTR_ERR(pdata->reg_base);
- }
-
- /* Initialize the dynamic part of pinctrl_desc */
-diff --git a/drivers/pinctrl/freescale/pinctrl-imx.c b/drivers/pinctrl/freescale/pinctrl-imx.c
-index 9f42036c5fbb..1f81569c7ae3 100644
---- a/drivers/pinctrl/freescale/pinctrl-imx.c
-+++ b/drivers/pinctrl/freescale/pinctrl-imx.c
-@@ -774,16 +774,6 @@ static int imx_pinctrl_probe_dt(struct platform_device *pdev,
- return 0;
- }
-
--/*
-- * imx_free_resources() - free memory used by this driver
-- * @info: info driver instance
-- */
--static void imx_free_resources(struct imx_pinctrl *ipctl)
--{
-- if (ipctl->pctl)
-- pinctrl_unregister(ipctl->pctl);
--}
--
- int imx_pinctrl_probe(struct platform_device *pdev,
- const struct imx_pinctrl_soc_info *info)
- {
-@@ -874,23 +864,18 @@ int imx_pinctrl_probe(struct platform_device *pdev,
- &ipctl->pctl);
- if (ret) {
- dev_err(&pdev->dev, "could not register IMX pinctrl driver\n");
-- goto free;
-+ return ret;
- }
-
- ret = imx_pinctrl_probe_dt(pdev, ipctl);
- if (ret) {
- dev_err(&pdev->dev, "fail to probe dt properties\n");
-- goto free;
-+ return ret;
- }
-
- dev_info(&pdev->dev, "initialized IMX pinctrl driver\n");
-
- return pinctrl_enable(ipctl->pctl);
--
--free:
-- imx_free_resources(ipctl);
--
-- return ret;
- }
-
- static int __maybe_unused imx_pinctrl_suspend(struct device *dev)
-diff --git a/drivers/pinctrl/freescale/pinctrl-imx1-core.c b/drivers/pinctrl/freescale/pinctrl-imx1-core.c
-index c00d0022d311..421f7d1886e5 100644
---- a/drivers/pinctrl/freescale/pinctrl-imx1-core.c
-+++ b/drivers/pinctrl/freescale/pinctrl-imx1-core.c
-@@ -638,7 +638,6 @@ int imx1_pinctrl_core_probe(struct platform_device *pdev,
-
- ret = of_platform_populate(pdev->dev.of_node, NULL, NULL, &pdev->dev);
- if (ret) {
-- pinctrl_unregister(ipctl->pctl);
- dev_err(&pdev->dev, "Failed to populate subdevices\n");
- return ret;
- }
-diff --git a/drivers/pinctrl/pinctrl-at91-pio4.c b/drivers/pinctrl/pinctrl-at91-pio4.c
-index 694912409fd9..54222ccddfb1 100644
---- a/drivers/pinctrl/pinctrl-at91-pio4.c
-+++ b/drivers/pinctrl/pinctrl-at91-pio4.c
-@@ -1019,7 +1019,7 @@ static int atmel_pinctrl_probe(struct platform_device *pdev)
-
- atmel_pioctrl->reg_base = devm_platform_ioremap_resource(pdev, 0);
- if (IS_ERR(atmel_pioctrl->reg_base))
-- return -EINVAL;
-+ return PTR_ERR(atmel_pioctrl->reg_base);
-
- atmel_pioctrl->clk = devm_clk_get(dev, NULL);
- if (IS_ERR(atmel_pioctrl->clk)) {
-diff --git a/drivers/pinctrl/pinctrl-ocelot.c b/drivers/pinctrl/pinctrl-ocelot.c
-index ed8eac6c1494..b1bf46ec207f 100644
---- a/drivers/pinctrl/pinctrl-ocelot.c
-+++ b/drivers/pinctrl/pinctrl-ocelot.c
-@@ -714,11 +714,12 @@ static void ocelot_irq_handler(struct irq_desc *desc)
- struct irq_chip *parent_chip = irq_desc_get_chip(desc);
- struct gpio_chip *chip = irq_desc_get_handler_data(desc);
- struct ocelot_pinctrl *info = gpiochip_get_data(chip);
-+ unsigned int id_reg = OCELOT_GPIO_INTR_IDENT * info->stride;
- unsigned int reg = 0, irq, i;
- unsigned long irqs;
-
- for (i = 0; i < info->stride; i++) {
-- regmap_read(info->map, OCELOT_GPIO_INTR_IDENT + 4 * i, &reg);
-+ regmap_read(info->map, id_reg + 4 * i, &reg);
- if (!reg)
- continue;
-
-@@ -751,21 +752,21 @@ static int ocelot_gpiochip_register(struct platform_device *pdev,
- gc->of_node = info->dev->of_node;
- gc->label = "ocelot-gpio";
-
-- irq = irq_of_parse_and_map(pdev->dev.of_node, 0);
-- if (irq <= 0)
-- return irq;
--
-- girq = &gc->irq;
-- girq->chip = &ocelot_irqchip;
-- girq->parent_handler = ocelot_irq_handler;
-- girq->num_parents = 1;
-- girq->parents = devm_kcalloc(&pdev->dev, 1, sizeof(*girq->parents),
-- GFP_KERNEL);
-- if (!girq->parents)
-- return -ENOMEM;
-- girq->parents[0] = irq;
-- girq->default_type = IRQ_TYPE_NONE;
-- girq->handler = handle_edge_irq;
-+ irq = irq_of_parse_and_map(gc->of_node, 0);
-+ if (irq) {
-+ girq = &gc->irq;
-+ girq->chip = &ocelot_irqchip;
-+ girq->parent_handler = ocelot_irq_handler;
-+ girq->num_parents = 1;
-+ girq->parents = devm_kcalloc(&pdev->dev, 1,
-+ sizeof(*girq->parents),
-+ GFP_KERNEL);
-+ if (!girq->parents)
-+ return -ENOMEM;
-+ girq->parents[0] = irq;
-+ girq->default_type = IRQ_TYPE_NONE;
-+ girq->handler = handle_edge_irq;
-+ }
-
- ret = devm_gpiochip_add_data(&pdev->dev, gc, info);
- if (ret)
-diff --git a/drivers/pinctrl/pinctrl-rockchip.c b/drivers/pinctrl/pinctrl-rockchip.c
-index 098951346339..d7869b636889 100644
---- a/drivers/pinctrl/pinctrl-rockchip.c
-+++ b/drivers/pinctrl/pinctrl-rockchip.c
-@@ -508,8 +508,8 @@ static int rockchip_dt_node_to_map(struct pinctrl_dev *pctldev,
- }
-
- map_num += grp->npins;
-- new_map = devm_kcalloc(pctldev->dev, map_num, sizeof(*new_map),
-- GFP_KERNEL);
-+
-+ new_map = kcalloc(map_num, sizeof(*new_map), GFP_KERNEL);
- if (!new_map)
- return -ENOMEM;
-
-@@ -519,7 +519,7 @@ static int rockchip_dt_node_to_map(struct pinctrl_dev *pctldev,
- /* create mux map */
- parent = of_get_parent(np);
- if (!parent) {
-- devm_kfree(pctldev->dev, new_map);
-+ kfree(new_map);
- return -EINVAL;
- }
- new_map[0].type = PIN_MAP_TYPE_MUX_GROUP;
-@@ -546,6 +546,7 @@ static int rockchip_dt_node_to_map(struct pinctrl_dev *pctldev,
- static void rockchip_dt_free_map(struct pinctrl_dev *pctldev,
- struct pinctrl_map *map, unsigned num_maps)
- {
-+ kfree(map);
- }
-
- static const struct pinctrl_ops rockchip_pctrl_ops = {
-diff --git a/drivers/pinctrl/pinctrl-rza1.c b/drivers/pinctrl/pinctrl-rza1.c
-index da2d8365c690..ff4a7fb518bb 100644
---- a/drivers/pinctrl/pinctrl-rza1.c
-+++ b/drivers/pinctrl/pinctrl-rza1.c
-@@ -418,7 +418,7 @@ static const struct rza1_bidir_entry rza1l_bidir_entries[RZA1_NPORTS] = {
- };
-
- static const struct rza1_swio_entry rza1l_swio_entries[] = {
-- [0] = { ARRAY_SIZE(rza1h_swio_pins), rza1h_swio_pins },
-+ [0] = { ARRAY_SIZE(rza1l_swio_pins), rza1l_swio_pins },
- };
-
- /* RZ/A1L (r7s72102x) pinmux flags table */
-diff --git a/drivers/pinctrl/qcom/pinctrl-ipq6018.c b/drivers/pinctrl/qcom/pinctrl-ipq6018.c
-index 38c33a778cb8..ec50a3b4bd16 100644
---- a/drivers/pinctrl/qcom/pinctrl-ipq6018.c
-+++ b/drivers/pinctrl/qcom/pinctrl-ipq6018.c
-@@ -367,7 +367,8 @@ static const char * const wci20_groups[] = {
-
- static const char * const qpic_pad_groups[] = {
- "gpio0", "gpio1", "gpio2", "gpio3", "gpio4", "gpio9", "gpio10",
-- "gpio11", "gpio17",
-+ "gpio11", "gpio17", "gpio15", "gpio12", "gpio13", "gpio14", "gpio5",
-+ "gpio6", "gpio7", "gpio8",
- };
-
- static const char * const burn0_groups[] = {
-diff --git a/drivers/pinctrl/sirf/pinctrl-sirf.c b/drivers/pinctrl/sirf/pinctrl-sirf.c
-index 1ebcb957c654..63a287d5795f 100644
---- a/drivers/pinctrl/sirf/pinctrl-sirf.c
-+++ b/drivers/pinctrl/sirf/pinctrl-sirf.c
-@@ -794,13 +794,17 @@ static int sirfsoc_gpio_probe(struct device_node *np)
- return -ENODEV;
-
- sgpio = devm_kzalloc(&pdev->dev, sizeof(*sgpio), GFP_KERNEL);
-- if (!sgpio)
-- return -ENOMEM;
-+ if (!sgpio) {
-+ err = -ENOMEM;
-+ goto out_put_device;
-+ }
- spin_lock_init(&sgpio->lock);
-
- regs = of_iomap(np, 0);
-- if (!regs)
-- return -ENOMEM;
-+ if (!regs) {
-+ err = -ENOMEM;
-+ goto out_put_device;
-+ }
-
- sgpio->chip.gc.request = sirfsoc_gpio_request;
- sgpio->chip.gc.free = sirfsoc_gpio_free;
-@@ -824,8 +828,10 @@ static int sirfsoc_gpio_probe(struct device_node *np)
- girq->parents = devm_kcalloc(&pdev->dev, SIRFSOC_GPIO_NO_OF_BANKS,
- sizeof(*girq->parents),
- GFP_KERNEL);
-- if (!girq->parents)
-- return -ENOMEM;
-+ if (!girq->parents) {
-+ err = -ENOMEM;
-+ goto out_put_device;
-+ }
- for (i = 0; i < SIRFSOC_GPIO_NO_OF_BANKS; i++) {
- bank = &sgpio->sgpio_bank[i];
- spin_lock_init(&bank->lock);
-@@ -868,6 +874,8 @@ out_no_range:
- gpiochip_remove(&sgpio->chip.gc);
- out:
- iounmap(regs);
-+out_put_device:
-+ put_device(&pdev->dev);
- return err;
- }
-
-diff --git a/drivers/power/supply/Kconfig b/drivers/power/supply/Kconfig
-index f3424fdce341..d37ec0d03237 100644
---- a/drivers/power/supply/Kconfig
-+++ b/drivers/power/supply/Kconfig
-@@ -577,7 +577,7 @@ config CHARGER_BQ24257
- tristate "TI BQ24250/24251/24257 battery charger driver"
- depends on I2C
- depends on GPIOLIB || COMPILE_TEST
-- depends on REGMAP_I2C
-+ select REGMAP_I2C
- help
- Say Y to enable support for the TI BQ24250, BQ24251, and BQ24257 battery
- chargers.
-diff --git a/drivers/power/supply/lp8788-charger.c b/drivers/power/supply/lp8788-charger.c
-index 84a206f42a8e..e7931ffb7151 100644
---- a/drivers/power/supply/lp8788-charger.c
-+++ b/drivers/power/supply/lp8788-charger.c
-@@ -572,27 +572,14 @@ static void lp8788_setup_adc_channel(struct device *dev,
- return;
-
- /* ADC channel for battery voltage */
-- chan = iio_channel_get(dev, pdata->adc_vbatt);
-+ chan = devm_iio_channel_get(dev, pdata->adc_vbatt);
- pchg->chan[LP8788_VBATT] = IS_ERR(chan) ? NULL : chan;
-
- /* ADC channel for battery temperature */
-- chan = iio_channel_get(dev, pdata->adc_batt_temp);
-+ chan = devm_iio_channel_get(dev, pdata->adc_batt_temp);
- pchg->chan[LP8788_BATT_TEMP] = IS_ERR(chan) ? NULL : chan;
- }
-
--static void lp8788_release_adc_channel(struct lp8788_charger *pchg)
--{
-- int i;
--
-- for (i = 0; i < LP8788_NUM_CHG_ADC; i++) {
-- if (!pchg->chan[i])
-- continue;
--
-- iio_channel_release(pchg->chan[i]);
-- pchg->chan[i] = NULL;
-- }
--}
--
- static ssize_t lp8788_show_charger_status(struct device *dev,
- struct device_attribute *attr, char *buf)
- {
-@@ -735,7 +722,6 @@ static int lp8788_charger_remove(struct platform_device *pdev)
- flush_work(&pchg->charger_work);
- lp8788_irq_unregister(pdev, pchg);
- lp8788_psy_unregister(pchg);
-- lp8788_release_adc_channel(pchg);
-
- return 0;
- }
-diff --git a/drivers/power/supply/smb347-charger.c b/drivers/power/supply/smb347-charger.c
-index c1d124b8be0c..d102921b3ab2 100644
---- a/drivers/power/supply/smb347-charger.c
-+++ b/drivers/power/supply/smb347-charger.c
-@@ -1138,6 +1138,7 @@ static bool smb347_volatile_reg(struct device *dev, unsigned int reg)
- switch (reg) {
- case IRQSTAT_A:
- case IRQSTAT_C:
-+ case IRQSTAT_D:
- case IRQSTAT_E:
- case IRQSTAT_F:
- case STAT_A:
-diff --git a/drivers/pwm/core.c b/drivers/pwm/core.c
-index 9973c442b455..6b3cbc0490c6 100644
---- a/drivers/pwm/core.c
-+++ b/drivers/pwm/core.c
-@@ -121,7 +121,7 @@ static int pwm_device_request(struct pwm_device *pwm, const char *label)
- pwm->chip->ops->get_state(pwm->chip, pwm, &pwm->state);
- trace_pwm_get(pwm, &pwm->state);
-
-- if (IS_ENABLED(PWM_DEBUG))
-+ if (IS_ENABLED(CONFIG_PWM_DEBUG))
- pwm->last = pwm->state;
- }
-
-diff --git a/drivers/pwm/pwm-img.c b/drivers/pwm/pwm-img.c
-index c9e57bd109fb..599a0f66a384 100644
---- a/drivers/pwm/pwm-img.c
-+++ b/drivers/pwm/pwm-img.c
-@@ -129,8 +129,10 @@ static int img_pwm_config(struct pwm_chip *chip, struct pwm_device *pwm,
- duty = DIV_ROUND_UP(timebase * duty_ns, period_ns);
-
- ret = pm_runtime_get_sync(chip->dev);
-- if (ret < 0)
-+ if (ret < 0) {
-+ pm_runtime_put_autosuspend(chip->dev);
- return ret;
-+ }
-
- val = img_pwm_readl(pwm_chip, PWM_CTRL_CFG);
- val &= ~(PWM_CTRL_CFG_DIV_MASK << PWM_CTRL_CFG_DIV_SHIFT(pwm->hwpwm));
-@@ -331,8 +333,10 @@ static int img_pwm_remove(struct platform_device *pdev)
- int ret;
-
- ret = pm_runtime_get_sync(&pdev->dev);
-- if (ret < 0)
-+ if (ret < 0) {
-+ pm_runtime_put(&pdev->dev);
- return ret;
-+ }
-
- for (i = 0; i < pwm_chip->chip.npwm; i++) {
- val = img_pwm_readl(pwm_chip, PWM_CTRL_CFG);
-diff --git a/drivers/pwm/pwm-imx27.c b/drivers/pwm/pwm-imx27.c
-index a6e40d4c485f..732a6f3701e8 100644
---- a/drivers/pwm/pwm-imx27.c
-+++ b/drivers/pwm/pwm-imx27.c
-@@ -150,13 +150,12 @@ static void pwm_imx27_get_state(struct pwm_chip *chip,
-
- prescaler = MX3_PWMCR_PRESCALER_GET(val);
- pwm_clk = clk_get_rate(imx->clk_per);
-- pwm_clk = DIV_ROUND_CLOSEST_ULL(pwm_clk, prescaler);
- val = readl(imx->mmio_base + MX3_PWMPR);
- period = val >= MX3_PWMPR_MAX ? MX3_PWMPR_MAX : val;
-
- /* PWMOUT (Hz) = PWMCLK / (PWMPR + 2) */
-- tmp = NSEC_PER_SEC * (u64)(period + 2);
-- state->period = DIV_ROUND_CLOSEST_ULL(tmp, pwm_clk);
-+ tmp = NSEC_PER_SEC * (u64)(period + 2) * prescaler;
-+ state->period = DIV_ROUND_UP_ULL(tmp, pwm_clk);
-
- /*
- * PWMSAR can be read only if PWM is enabled. If the PWM is disabled,
-@@ -167,8 +166,8 @@ static void pwm_imx27_get_state(struct pwm_chip *chip,
- else
- val = imx->duty_cycle;
-
-- tmp = NSEC_PER_SEC * (u64)(val);
-- state->duty_cycle = DIV_ROUND_CLOSEST_ULL(tmp, pwm_clk);
-+ tmp = NSEC_PER_SEC * (u64)(val) * prescaler;
-+ state->duty_cycle = DIV_ROUND_UP_ULL(tmp, pwm_clk);
-
- pwm_imx27_clk_disable_unprepare(imx);
- }
-@@ -220,22 +219,23 @@ static int pwm_imx27_apply(struct pwm_chip *chip, struct pwm_device *pwm,
- struct pwm_imx27_chip *imx = to_pwm_imx27_chip(chip);
- struct pwm_state cstate;
- unsigned long long c;
-+ unsigned long long clkrate;
- int ret;
- u32 cr;
-
- pwm_get_state(pwm, &cstate);
-
-- c = clk_get_rate(imx->clk_per);
-- c *= state->period;
-+ clkrate = clk_get_rate(imx->clk_per);
-+ c = clkrate * state->period;
-
-- do_div(c, 1000000000);
-+ do_div(c, NSEC_PER_SEC);
- period_cycles = c;
-
- prescale = period_cycles / 0x10000 + 1;
-
- period_cycles /= prescale;
-- c = (unsigned long long)period_cycles * state->duty_cycle;
-- do_div(c, state->period);
-+ c = clkrate * state->duty_cycle;
-+ do_div(c, NSEC_PER_SEC * prescale);
- duty_cycles = c;
-
- /*
-diff --git a/drivers/remoteproc/mtk_scp.c b/drivers/remoteproc/mtk_scp.c
-index 2bead57c9cf9..ac13e7b046a6 100644
---- a/drivers/remoteproc/mtk_scp.c
-+++ b/drivers/remoteproc/mtk_scp.c
-@@ -132,8 +132,8 @@ static int scp_ipi_init(struct mtk_scp *scp)
- (struct mtk_share_obj __iomem *)(scp->sram_base + recv_offset);
- scp->send_buf =
- (struct mtk_share_obj __iomem *)(scp->sram_base + send_offset);
-- memset_io(scp->recv_buf, 0, sizeof(scp->recv_buf));
-- memset_io(scp->send_buf, 0, sizeof(scp->send_buf));
-+ memset_io(scp->recv_buf, 0, sizeof(*scp->recv_buf));
-+ memset_io(scp->send_buf, 0, sizeof(*scp->send_buf));
-
- return 0;
- }
-diff --git a/drivers/remoteproc/qcom_q6v5_mss.c b/drivers/remoteproc/qcom_q6v5_mss.c
-index 5475d4f808a8..629abcee2c1d 100644
---- a/drivers/remoteproc/qcom_q6v5_mss.c
-+++ b/drivers/remoteproc/qcom_q6v5_mss.c
-@@ -69,13 +69,9 @@
- #define AXI_HALTREQ_REG 0x0
- #define AXI_HALTACK_REG 0x4
- #define AXI_IDLE_REG 0x8
--#define NAV_AXI_HALTREQ_BIT BIT(0)
--#define NAV_AXI_HALTACK_BIT BIT(1)
--#define NAV_AXI_IDLE_BIT BIT(2)
- #define AXI_GATING_VALID_OVERRIDE BIT(0)
-
- #define HALT_ACK_TIMEOUT_US 100000
--#define NAV_HALT_ACK_TIMEOUT_US 200
-
- /* QDSP6SS_RESET */
- #define Q6SS_STOP_CORE BIT(0)
-@@ -143,7 +139,7 @@ struct rproc_hexagon_res {
- int version;
- bool need_mem_protection;
- bool has_alt_reset;
-- bool has_halt_nav;
-+ bool has_spare_reg;
- };
-
- struct q6v5 {
-@@ -154,13 +150,11 @@ struct q6v5 {
- void __iomem *rmb_base;
-
- struct regmap *halt_map;
-- struct regmap *halt_nav_map;
- struct regmap *conn_map;
-
- u32 halt_q6;
- u32 halt_modem;
- u32 halt_nc;
-- u32 halt_nav;
- u32 conn_box;
-
- struct reset_control *mss_restart;
-@@ -206,7 +200,7 @@ struct q6v5 {
- struct qcom_sysmon *sysmon;
- bool need_mem_protection;
- bool has_alt_reset;
-- bool has_halt_nav;
-+ bool has_spare_reg;
- int mpss_perm;
- int mba_perm;
- const char *hexagon_mdt_image;
-@@ -427,21 +421,19 @@ static int q6v5_reset_assert(struct q6v5 *qproc)
- reset_control_assert(qproc->pdc_reset);
- ret = reset_control_reset(qproc->mss_restart);
- reset_control_deassert(qproc->pdc_reset);
-- } else if (qproc->has_halt_nav) {
-+ } else if (qproc->has_spare_reg) {
- /*
- * When the AXI pipeline is being reset with the Q6 modem partly
- * operational there is possibility of AXI valid signal to
- * glitch, leading to spurious transactions and Q6 hangs. A work
- * around is employed by asserting the AXI_GATING_VALID_OVERRIDE
-- * BIT before triggering Q6 MSS reset. Both the HALTREQ and
-- * AXI_GATING_VALID_OVERRIDE are withdrawn post MSS assert
-- * followed by a MSS deassert, while holding the PDC reset.
-+ * BIT before triggering Q6 MSS reset. AXI_GATING_VALID_OVERRIDE
-+ * is withdrawn post MSS assert followed by a MSS deassert,
-+ * while holding the PDC reset.
- */
- reset_control_assert(qproc->pdc_reset);
- regmap_update_bits(qproc->conn_map, qproc->conn_box,
- AXI_GATING_VALID_OVERRIDE, 1);
-- regmap_update_bits(qproc->halt_nav_map, qproc->halt_nav,
-- NAV_AXI_HALTREQ_BIT, 0);
- reset_control_assert(qproc->mss_restart);
- reset_control_deassert(qproc->pdc_reset);
- regmap_update_bits(qproc->conn_map, qproc->conn_box,
-@@ -464,7 +456,7 @@ static int q6v5_reset_deassert(struct q6v5 *qproc)
- ret = reset_control_reset(qproc->mss_restart);
- writel(0, qproc->rmb_base + RMB_MBA_ALT_RESET);
- reset_control_deassert(qproc->pdc_reset);
-- } else if (qproc->has_halt_nav) {
-+ } else if (qproc->has_spare_reg) {
- ret = reset_control_reset(qproc->mss_restart);
- } else {
- ret = reset_control_deassert(qproc->mss_restart);
-@@ -761,32 +753,6 @@ static void q6v5proc_halt_axi_port(struct q6v5 *qproc,
- regmap_write(halt_map, offset + AXI_HALTREQ_REG, 0);
- }
-
--static void q6v5proc_halt_nav_axi_port(struct q6v5 *qproc,
-- struct regmap *halt_map,
-- u32 offset)
--{
-- unsigned int val;
-- int ret;
--
-- /* Check if we're already idle */
-- ret = regmap_read(halt_map, offset, &val);
-- if (!ret && (val & NAV_AXI_IDLE_BIT))
-- return;
--
-- /* Assert halt request */
-- regmap_update_bits(halt_map, offset, NAV_AXI_HALTREQ_BIT,
-- NAV_AXI_HALTREQ_BIT);
--
-- /* Wait for halt ack*/
-- regmap_read_poll_timeout(halt_map, offset, val,
-- (val & NAV_AXI_HALTACK_BIT),
-- 5, NAV_HALT_ACK_TIMEOUT_US);
--
-- ret = regmap_read(halt_map, offset, &val);
-- if (ret || !(val & NAV_AXI_IDLE_BIT))
-- dev_err(qproc->dev, "port failed halt\n");
--}
--
- static int q6v5_mpss_init_image(struct q6v5 *qproc, const struct firmware *fw)
- {
- unsigned long dma_attrs = DMA_ATTR_FORCE_CONTIGUOUS;
-@@ -951,9 +917,6 @@ static int q6v5_mba_load(struct q6v5 *qproc)
- halt_axi_ports:
- q6v5proc_halt_axi_port(qproc, qproc->halt_map, qproc->halt_q6);
- q6v5proc_halt_axi_port(qproc, qproc->halt_map, qproc->halt_modem);
-- if (qproc->has_halt_nav)
-- q6v5proc_halt_nav_axi_port(qproc, qproc->halt_nav_map,
-- qproc->halt_nav);
- q6v5proc_halt_axi_port(qproc, qproc->halt_map, qproc->halt_nc);
-
- reclaim_mba:
-@@ -1001,9 +964,6 @@ static void q6v5_mba_reclaim(struct q6v5 *qproc)
-
- q6v5proc_halt_axi_port(qproc, qproc->halt_map, qproc->halt_q6);
- q6v5proc_halt_axi_port(qproc, qproc->halt_map, qproc->halt_modem);
-- if (qproc->has_halt_nav)
-- q6v5proc_halt_nav_axi_port(qproc, qproc->halt_nav_map,
-- qproc->halt_nav);
- q6v5proc_halt_axi_port(qproc, qproc->halt_map, qproc->halt_nc);
- if (qproc->version == MSS_MSM8996) {
- /*
-@@ -1156,7 +1116,13 @@ static int q6v5_mpss_load(struct q6v5 *qproc)
- goto release_firmware;
- }
-
-- ptr = qproc->mpss_region + offset;
-+ ptr = ioremap_wc(qproc->mpss_phys + offset, phdr->p_memsz);
-+ if (!ptr) {
-+ dev_err(qproc->dev,
-+ "unable to map memory region: %pa+%zx-%x\n",
-+ &qproc->mpss_phys, offset, phdr->p_memsz);
-+ goto release_firmware;
-+ }
-
- if (phdr->p_filesz && phdr->p_offset < fw->size) {
- /* Firmware is large enough to be non-split */
-@@ -1165,6 +1131,7 @@ static int q6v5_mpss_load(struct q6v5 *qproc)
- "failed to load segment %d from truncated file %s\n",
- i, fw_name);
- ret = -EINVAL;
-+ iounmap(ptr);
- goto release_firmware;
- }
-
-@@ -1175,6 +1142,7 @@ static int q6v5_mpss_load(struct q6v5 *qproc)
- ret = request_firmware(&seg_fw, fw_name, qproc->dev);
- if (ret) {
- dev_err(qproc->dev, "failed to load %s\n", fw_name);
-+ iounmap(ptr);
- goto release_firmware;
- }
-
-@@ -1187,6 +1155,7 @@ static int q6v5_mpss_load(struct q6v5 *qproc)
- memset(ptr + phdr->p_filesz, 0,
- phdr->p_memsz - phdr->p_filesz);
- }
-+ iounmap(ptr);
- size += phdr->p_memsz;
-
- code_length = readl(qproc->rmb_base + RMB_PMI_CODE_LENGTH_REG);
-@@ -1236,7 +1205,8 @@ static void qcom_q6v5_dump_segment(struct rproc *rproc,
- int ret = 0;
- struct q6v5 *qproc = rproc->priv;
- unsigned long mask = BIT((unsigned long)segment->priv);
-- void *ptr = rproc_da_to_va(rproc, segment->da, segment->size);
-+ int offset = segment->da - qproc->mpss_reloc;
-+ void *ptr = NULL;
-
- /* Unlock mba before copying segments */
- if (!qproc->dump_mba_loaded) {
-@@ -1250,10 +1220,15 @@ static void qcom_q6v5_dump_segment(struct rproc *rproc,
- }
- }
-
-- if (!ptr || ret)
-- memset(dest, 0xff, segment->size);
-- else
-+ if (!ret)
-+ ptr = ioremap_wc(qproc->mpss_phys + offset, segment->size);
-+
-+ if (ptr) {
- memcpy(dest, ptr, segment->size);
-+ iounmap(ptr);
-+ } else {
-+ memset(dest, 0xff, segment->size);
-+ }
-
- qproc->dump_segment_mask |= mask;
-
-@@ -1432,36 +1407,12 @@ static int q6v5_init_mem(struct q6v5 *qproc, struct platform_device *pdev)
- qproc->halt_modem = args.args[1];
- qproc->halt_nc = args.args[2];
-
-- if (qproc->has_halt_nav) {
-- struct platform_device *nav_pdev;
--
-+ if (qproc->has_spare_reg) {
- ret = of_parse_phandle_with_fixed_args(pdev->dev.of_node,
-- "qcom,halt-nav-regs",
-+ "qcom,spare-regs",
- 1, 0, &args);
- if (ret < 0) {
-- dev_err(&pdev->dev, "failed to parse halt-nav-regs\n");
-- return -EINVAL;
-- }
--
-- nav_pdev = of_find_device_by_node(args.np);
-- of_node_put(args.np);
-- if (!nav_pdev) {
-- dev_err(&pdev->dev, "failed to get mss clock device\n");
-- return -EPROBE_DEFER;
-- }
--
-- qproc->halt_nav_map = dev_get_regmap(&nav_pdev->dev, NULL);
-- if (!qproc->halt_nav_map) {
-- dev_err(&pdev->dev, "failed to get map from device\n");
-- return -EINVAL;
-- }
-- qproc->halt_nav = args.args[0];
--
-- ret = of_parse_phandle_with_fixed_args(pdev->dev.of_node,
-- "qcom,halt-nav-regs",
-- 1, 1, &args);
-- if (ret < 0) {
-- dev_err(&pdev->dev, "failed to parse halt-nav-regs\n");
-+ dev_err(&pdev->dev, "failed to parse spare-regs\n");
- return -EINVAL;
- }
-
-@@ -1547,7 +1498,7 @@ static int q6v5_init_reset(struct q6v5 *qproc)
- return PTR_ERR(qproc->mss_restart);
- }
-
-- if (qproc->has_alt_reset || qproc->has_halt_nav) {
-+ if (qproc->has_alt_reset || qproc->has_spare_reg) {
- qproc->pdc_reset = devm_reset_control_get_exclusive(qproc->dev,
- "pdc_reset");
- if (IS_ERR(qproc->pdc_reset)) {
-@@ -1595,12 +1546,6 @@ static int q6v5_alloc_memory_region(struct q6v5 *qproc)
-
- qproc->mpss_phys = qproc->mpss_reloc = r.start;
- qproc->mpss_size = resource_size(&r);
-- qproc->mpss_region = devm_ioremap_wc(qproc->dev, qproc->mpss_phys, qproc->mpss_size);
-- if (!qproc->mpss_region) {
-- dev_err(qproc->dev, "unable to map memory region: %pa+%zx\n",
-- &r.start, qproc->mpss_size);
-- return -EBUSY;
-- }
-
- return 0;
- }
-@@ -1679,7 +1624,7 @@ static int q6v5_probe(struct platform_device *pdev)
-
- platform_set_drvdata(pdev, qproc);
-
-- qproc->has_halt_nav = desc->has_halt_nav;
-+ qproc->has_spare_reg = desc->has_spare_reg;
- ret = q6v5_init_mem(qproc, pdev);
- if (ret)
- goto free_rproc;
-@@ -1828,8 +1773,6 @@ static const struct rproc_hexagon_res sc7180_mss = {
- .active_clk_names = (char*[]){
- "mnoc_axi",
- "nav",
-- "mss_nav",
-- "mss_crypto",
- NULL
- },
- .active_pd_names = (char*[]){
-@@ -1844,7 +1787,7 @@ static const struct rproc_hexagon_res sc7180_mss = {
- },
- .need_mem_protection = true,
- .has_alt_reset = false,
-- .has_halt_nav = true,
-+ .has_spare_reg = true,
- .version = MSS_SC7180,
- };
-
-@@ -1879,7 +1822,7 @@ static const struct rproc_hexagon_res sdm845_mss = {
- },
- .need_mem_protection = true,
- .has_alt_reset = true,
-- .has_halt_nav = false,
-+ .has_spare_reg = false,
- .version = MSS_SDM845,
- };
-
-@@ -1906,7 +1849,7 @@ static const struct rproc_hexagon_res msm8998_mss = {
- },
- .need_mem_protection = true,
- .has_alt_reset = false,
-- .has_halt_nav = false,
-+ .has_spare_reg = false,
- .version = MSS_MSM8998,
- };
-
-@@ -1936,7 +1879,7 @@ static const struct rproc_hexagon_res msm8996_mss = {
- },
- .need_mem_protection = true,
- .has_alt_reset = false,
-- .has_halt_nav = false,
-+ .has_spare_reg = false,
- .version = MSS_MSM8996,
- };
-
-@@ -1969,7 +1912,7 @@ static const struct rproc_hexagon_res msm8916_mss = {
- },
- .need_mem_protection = false,
- .has_alt_reset = false,
-- .has_halt_nav = false,
-+ .has_spare_reg = false,
- .version = MSS_MSM8916,
- };
-
-@@ -2010,7 +1953,7 @@ static const struct rproc_hexagon_res msm8974_mss = {
- },
- .need_mem_protection = false,
- .has_alt_reset = false,
-- .has_halt_nav = false,
-+ .has_spare_reg = false,
- .version = MSS_MSM8974,
- };
-
-diff --git a/drivers/remoteproc/remoteproc_core.c b/drivers/remoteproc/remoteproc_core.c
-index be15aace9b3c..8f79cfd2e467 100644
---- a/drivers/remoteproc/remoteproc_core.c
-+++ b/drivers/remoteproc/remoteproc_core.c
-@@ -2053,6 +2053,7 @@ struct rproc *rproc_alloc(struct device *dev, const char *name,
- rproc->dev.type = &rproc_type;
- rproc->dev.class = &rproc_class;
- rproc->dev.driver_data = rproc;
-+ idr_init(&rproc->notifyids);
-
- /* Assign a unique device index and name */
- rproc->index = ida_simple_get(&rproc_dev_index, 0, 0, GFP_KERNEL);
-@@ -2078,8 +2079,6 @@ struct rproc *rproc_alloc(struct device *dev, const char *name,
-
- mutex_init(&rproc->lock);
-
-- idr_init(&rproc->notifyids);
--
- INIT_LIST_HEAD(&rproc->carveouts);
- INIT_LIST_HEAD(&rproc->mappings);
- INIT_LIST_HEAD(&rproc->traces);
-diff --git a/drivers/rtc/rtc-mc13xxx.c b/drivers/rtc/rtc-mc13xxx.c
-index afce2c0b4bd6..d6802e6191cb 100644
---- a/drivers/rtc/rtc-mc13xxx.c
-+++ b/drivers/rtc/rtc-mc13xxx.c
-@@ -308,8 +308,10 @@ static int __init mc13xxx_rtc_probe(struct platform_device *pdev)
- mc13xxx_unlock(mc13xxx);
-
- ret = rtc_register_device(priv->rtc);
-- if (ret)
-+ if (ret) {
-+ mc13xxx_lock(mc13xxx);
- goto err_irq_request;
-+ }
-
- return 0;
-
-diff --git a/drivers/rtc/rtc-rc5t619.c b/drivers/rtc/rtc-rc5t619.c
-index 24e386ecbc7e..dd1a20977478 100644
---- a/drivers/rtc/rtc-rc5t619.c
-+++ b/drivers/rtc/rtc-rc5t619.c
-@@ -356,10 +356,8 @@ static int rc5t619_rtc_probe(struct platform_device *pdev)
- int err;
-
- rtc = devm_kzalloc(dev, sizeof(*rtc), GFP_KERNEL);
-- if (IS_ERR(rtc)) {
-- err = PTR_ERR(rtc);
-+ if (!rtc)
- return -ENOMEM;
-- }
-
- rtc->rn5t618 = rn5t618;
-
-diff --git a/drivers/rtc/rtc-rv3028.c b/drivers/rtc/rtc-rv3028.c
-index a0ddc86c975a..ec84db0b3d7a 100644
---- a/drivers/rtc/rtc-rv3028.c
-+++ b/drivers/rtc/rtc-rv3028.c
-@@ -755,6 +755,8 @@ static int rv3028_probe(struct i2c_client *client)
- return -ENOMEM;
-
-+ rv3028->regmap = devm_regmap_init_i2c(client, &regmap_config);
-+ if (IS_ERR(rv3028->regmap))
-+ return PTR_ERR(rv3028->regmap);
-
- i2c_set_clientdata(client, rv3028);
-
-diff --git a/drivers/s390/cio/qdio.h b/drivers/s390/cio/qdio.h
-index b8453b594679..a2afd7bc100b 100644
---- a/drivers/s390/cio/qdio.h
-+++ b/drivers/s390/cio/qdio.h
-@@ -364,7 +364,6 @@ static inline int multicast_outbound(struct qdio_q *q)
- extern u64 last_ai_time;
-
- /* prototypes for thin interrupt */
--void qdio_setup_thinint(struct qdio_irq *irq_ptr);
- int qdio_establish_thinint(struct qdio_irq *irq_ptr);
- void qdio_shutdown_thinint(struct qdio_irq *irq_ptr);
- void tiqdio_add_device(struct qdio_irq *irq_ptr);
-@@ -389,6 +388,7 @@ int qdio_setup_get_ssqd(struct qdio_irq *irq_ptr,
- struct subchannel_id *schid,
- struct qdio_ssqd_desc *data);
- int qdio_setup_irq(struct qdio_irq *irq_ptr, struct qdio_initialize *init_data);
-+void qdio_shutdown_irq(struct qdio_irq *irq);
- void qdio_print_subchannel_info(struct qdio_irq *irq_ptr);
- void qdio_release_memory(struct qdio_irq *irq_ptr);
- int qdio_setup_init(void);
-diff --git a/drivers/s390/cio/qdio_main.c b/drivers/s390/cio/qdio_main.c
-index bcc3ab14e72d..80cc811bd2e0 100644
---- a/drivers/s390/cio/qdio_main.c
-+++ b/drivers/s390/cio/qdio_main.c
-@@ -1154,35 +1154,27 @@ int qdio_shutdown(struct ccw_device *cdev, int how)
-
- /* cleanup subchannel */
- spin_lock_irq(get_ccwdev_lock(cdev));
--
-+ qdio_set_state(irq_ptr, QDIO_IRQ_STATE_CLEANUP);
- if (how & QDIO_FLAG_CLEANUP_USING_CLEAR)
- rc = ccw_device_clear(cdev, QDIO_DOING_CLEANUP);
- else
- /* default behaviour is halt */
- rc = ccw_device_halt(cdev, QDIO_DOING_CLEANUP);
-+ spin_unlock_irq(get_ccwdev_lock(cdev));
- if (rc) {
- DBF_ERROR("%4x SHUTD ERR", irq_ptr->schid.sch_no);
- DBF_ERROR("rc:%4d", rc);
- goto no_cleanup;
- }
-
-- qdio_set_state(irq_ptr, QDIO_IRQ_STATE_CLEANUP);
-- spin_unlock_irq(get_ccwdev_lock(cdev));
- wait_event_interruptible_timeout(cdev->private->wait_q,
- irq_ptr->state == QDIO_IRQ_STATE_INACTIVE ||
- irq_ptr->state == QDIO_IRQ_STATE_ERR,
- 10 * HZ);
-- spin_lock_irq(get_ccwdev_lock(cdev));
-
- no_cleanup:
- qdio_shutdown_thinint(irq_ptr);
--
-- /* restore interrupt handler */
-- if ((void *)cdev->handler == (void *)qdio_int_handler) {
-- cdev->handler = irq_ptr->orig_handler;
-- cdev->private->intparm = 0;
-- }
-- spin_unlock_irq(get_ccwdev_lock(cdev));
-+ qdio_shutdown_irq(irq_ptr);
-
- qdio_set_state(irq_ptr, QDIO_IRQ_STATE_INACTIVE);
- mutex_unlock(&irq_ptr->setup_mutex);
-@@ -1352,8 +1344,8 @@ int qdio_establish(struct ccw_device *cdev,
-
- rc = qdio_establish_thinint(irq_ptr);
- if (rc) {
-+ qdio_shutdown_irq(irq_ptr);
- mutex_unlock(&irq_ptr->setup_mutex);
-- qdio_shutdown(cdev, QDIO_FLAG_CLEANUP_USING_CLEAR);
- return rc;
- }
-
-@@ -1371,8 +1363,9 @@ int qdio_establish(struct ccw_device *cdev,
- if (rc) {
- DBF_ERROR("%4x est IO ERR", irq_ptr->schid.sch_no);
- DBF_ERROR("rc:%4x", rc);
-+ qdio_shutdown_thinint(irq_ptr);
-+ qdio_shutdown_irq(irq_ptr);
- mutex_unlock(&irq_ptr->setup_mutex);
-- qdio_shutdown(cdev, QDIO_FLAG_CLEANUP_USING_CLEAR);
- return rc;
- }
-
-diff --git a/drivers/s390/cio/qdio_setup.c b/drivers/s390/cio/qdio_setup.c
-index 3083edd61f0c..8edfa0982221 100644
---- a/drivers/s390/cio/qdio_setup.c
-+++ b/drivers/s390/cio/qdio_setup.c
-@@ -480,7 +480,6 @@ int qdio_setup_irq(struct qdio_irq *irq_ptr, struct qdio_initialize *init_data)
- }
-
- setup_qib(irq_ptr, init_data);
-- qdio_setup_thinint(irq_ptr);
- set_impl_params(irq_ptr, init_data->qib_param_field_format,
- init_data->qib_param_field,
- init_data->input_slib_elements,
-@@ -491,6 +490,12 @@ int qdio_setup_irq(struct qdio_irq *irq_ptr, struct qdio_initialize *init_data)
-
- /* qdr, qib, sls, slsbs, slibs, sbales are filled now */
-
-+ /* set our IRQ handler */
-+ spin_lock_irq(get_ccwdev_lock(cdev));
-+ irq_ptr->orig_handler = cdev->handler;
-+ cdev->handler = qdio_int_handler;
-+ spin_unlock_irq(get_ccwdev_lock(cdev));
-+
- /* get qdio commands */
- ciw = ccw_device_get_ciw(cdev, CIW_TYPE_EQUEUE);
- if (!ciw) {
-@@ -506,12 +511,18 @@ int qdio_setup_irq(struct qdio_irq *irq_ptr, struct qdio_initialize *init_data)
- }
- irq_ptr->aqueue = *ciw;
-
-- /* set new interrupt handler */
-+ return 0;
-+}
-+
-+void qdio_shutdown_irq(struct qdio_irq *irq)
-+{
-+ struct ccw_device *cdev = irq->cdev;
-+
-+ /* restore IRQ handler */
- spin_lock_irq(get_ccwdev_lock(cdev));
-- irq_ptr->orig_handler = cdev->handler;
-- cdev->handler = qdio_int_handler;
-+ cdev->handler = irq->orig_handler;
-+ cdev->private->intparm = 0;
- spin_unlock_irq(get_ccwdev_lock(cdev));
-- return 0;
- }
-
- void qdio_print_subchannel_info(struct qdio_irq *irq_ptr)
-diff --git a/drivers/s390/cio/qdio_thinint.c b/drivers/s390/cio/qdio_thinint.c
-index ae50373617cd..0faa0ad21732 100644
---- a/drivers/s390/cio/qdio_thinint.c
-+++ b/drivers/s390/cio/qdio_thinint.c
-@@ -227,17 +227,19 @@ int __init tiqdio_register_thinints(void)
-
- int qdio_establish_thinint(struct qdio_irq *irq_ptr)
- {
-+ int rc;
-+
- if (!is_thinint_irq(irq_ptr))
- return 0;
-- return set_subchannel_ind(irq_ptr, 0);
--}
-
--void qdio_setup_thinint(struct qdio_irq *irq_ptr)
--{
-- if (!is_thinint_irq(irq_ptr))
-- return;
- irq_ptr->dsci = get_indicator();
- DBF_HEX(&irq_ptr->dsci, sizeof(void *));
-+
-+ rc = set_subchannel_ind(irq_ptr, 0);
-+ if (rc)
-+ put_indicator(irq_ptr->dsci);
-+
-+ return rc;
- }
-
- void qdio_shutdown_thinint(struct qdio_irq *irq_ptr)
-diff --git a/drivers/scsi/arm/acornscsi.c b/drivers/scsi/arm/acornscsi.c
-index ddb52e7ba622..9a912fd0f70b 100644
---- a/drivers/scsi/arm/acornscsi.c
-+++ b/drivers/scsi/arm/acornscsi.c
-@@ -2911,8 +2911,10 @@ static int acornscsi_probe(struct expansion_card *ec, const struct ecard_id *id)
-
- ashost->base = ecardm_iomap(ec, ECARD_RES_MEMC, 0, 0);
- ashost->fast = ecardm_iomap(ec, ECARD_RES_IOCFAST, 0, 0);
-- if (!ashost->base || !ashost->fast)
-+ if (!ashost->base || !ashost->fast) {
-+ ret = -ENOMEM;
- goto out_put;
-+ }
-
- host->irq = ec->irq;
- ashost->host = host;
-diff --git a/drivers/scsi/cxgbi/cxgb3i/cxgb3i.c b/drivers/scsi/cxgbi/cxgb3i/cxgb3i.c
-index 524cdbcd29aa..ec7d01f6e2d5 100644
---- a/drivers/scsi/cxgbi/cxgb3i/cxgb3i.c
-+++ b/drivers/scsi/cxgbi/cxgb3i/cxgb3i.c
-@@ -959,6 +959,7 @@ static int init_act_open(struct cxgbi_sock *csk)
- struct net_device *ndev = cdev->ports[csk->port_id];
- struct cxgbi_hba *chba = cdev->hbas[csk->port_id];
- struct sk_buff *skb = NULL;
-+ int ret;
-
- log_debug(1 << CXGBI_DBG_TOE | 1 << CXGBI_DBG_SOCK,
- "csk 0x%p,%u,0x%lx.\n", csk, csk->state, csk->flags);
-@@ -979,16 +980,16 @@ static int init_act_open(struct cxgbi_sock *csk)
- csk->atid = cxgb3_alloc_atid(t3dev, &t3_client, csk);
- if (csk->atid < 0) {
- pr_err("NO atid available.\n");
-- return -EINVAL;
-+ ret = -EINVAL;
-+ goto put_sock;
- }
- cxgbi_sock_set_flag(csk, CTPF_HAS_ATID);
- cxgbi_sock_get(csk);
-
- skb = alloc_wr(sizeof(struct cpl_act_open_req), 0, GFP_KERNEL);
- if (!skb) {
-- cxgb3_free_atid(t3dev, csk->atid);
-- cxgbi_sock_put(csk);
-- return -ENOMEM;
-+ ret = -ENOMEM;
-+ goto free_atid;
- }
- skb->sk = (struct sock *)csk;
- set_arp_failure_handler(skb, act_open_arp_failure);
-@@ -1010,6 +1011,15 @@ static int init_act_open(struct cxgbi_sock *csk)
- cxgbi_sock_set_state(csk, CTP_ACTIVE_OPEN);
- send_act_open_req(csk, skb, csk->l2t);
- return 0;
-+
-+free_atid:
-+ cxgb3_free_atid(t3dev, csk->atid);
-+put_sock:
-+ cxgbi_sock_put(csk);
-+ l2t_release(t3dev, csk->l2t);
-+ csk->l2t = NULL;
-+
-+ return ret;
- }
-
- cxgb3_cpl_handler_func cxgb3i_cpl_handlers[NUM_CPL_CMDS] = {
-diff --git a/drivers/scsi/hisi_sas/hisi_sas_main.c b/drivers/scsi/hisi_sas/hisi_sas_main.c
-index 9a6deb21fe4d..11caa4b0d797 100644
---- a/drivers/scsi/hisi_sas/hisi_sas_main.c
-+++ b/drivers/scsi/hisi_sas/hisi_sas_main.c
-@@ -898,8 +898,11 @@ void hisi_sas_phy_oob_ready(struct hisi_hba *hisi_hba, int phy_no)
- struct hisi_sas_phy *phy = &hisi_hba->phy[phy_no];
- struct device *dev = hisi_hba->dev;
-
-+ dev_dbg(dev, "phy%d OOB ready\n", phy_no);
-+ if (phy->phy_attached)
-+ return;
-+
- if (!timer_pending(&phy->timer)) {
-- dev_dbg(dev, "phy%d OOB ready\n", phy_no);
- phy->timer.expires = jiffies + HISI_SAS_WAIT_PHYUP_TIMEOUT * HZ;
- add_timer(&phy->timer);
- }
-diff --git a/drivers/scsi/ibmvscsi/ibmvscsi.c b/drivers/scsi/ibmvscsi/ibmvscsi.c
-index 59f0f1030c54..c5711c659b51 100644
---- a/drivers/scsi/ibmvscsi/ibmvscsi.c
-+++ b/drivers/scsi/ibmvscsi/ibmvscsi.c
-@@ -415,6 +415,8 @@ static int ibmvscsi_reenable_crq_queue(struct crq_queue *queue,
- int rc = 0;
- struct vio_dev *vdev = to_vio_dev(hostdata->dev);
-
-+ set_adapter_info(hostdata);
-+
- /* Re-enable the CRQ */
- do {
- if (rc)
-diff --git a/drivers/scsi/iscsi_boot_sysfs.c b/drivers/scsi/iscsi_boot_sysfs.c
-index e4857b728033..a64abe38db2d 100644
---- a/drivers/scsi/iscsi_boot_sysfs.c
-+++ b/drivers/scsi/iscsi_boot_sysfs.c
-@@ -352,7 +352,7 @@ iscsi_boot_create_kobj(struct iscsi_boot_kset *boot_kset,
- boot_kobj->kobj.kset = boot_kset->kset;
- if (kobject_init_and_add(&boot_kobj->kobj, &iscsi_boot_ktype,
- NULL, name, index)) {
-- kfree(boot_kobj);
-+ kobject_put(&boot_kobj->kobj);
- return NULL;
- }
- boot_kobj->data = data;
-diff --git a/drivers/scsi/lpfc/lpfc_els.c b/drivers/scsi/lpfc/lpfc_els.c
-index 80d1e661b0d4..35fbcb4d52eb 100644
---- a/drivers/scsi/lpfc/lpfc_els.c
-+++ b/drivers/scsi/lpfc/lpfc_els.c
-@@ -8514,6 +8514,8 @@ lpfc_els_unsol_buffer(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
- spin_lock_irq(shost->host_lock);
- if (ndlp->nlp_flag & NLP_IN_DEV_LOSS) {
- spin_unlock_irq(shost->host_lock);
-+ if (newnode)
-+ lpfc_nlp_put(ndlp);
- goto dropit;
- }
- spin_unlock_irq(shost->host_lock);
-diff --git a/drivers/scsi/mpt3sas/mpt3sas_base.c b/drivers/scsi/mpt3sas/mpt3sas_base.c
-index 663782bb790d..39d233262039 100644
---- a/drivers/scsi/mpt3sas/mpt3sas_base.c
-+++ b/drivers/scsi/mpt3sas/mpt3sas_base.c
-@@ -4915,7 +4915,9 @@ _base_release_memory_pools(struct MPT3SAS_ADAPTER *ioc)
- }
-
- kfree(ioc->hpr_lookup);
-+ ioc->hpr_lookup = NULL;
- kfree(ioc->internal_lookup);
-+ ioc->internal_lookup = NULL;
- if (ioc->chain_lookup) {
- for (i = 0; i < ioc->scsiio_depth; i++) {
- for (j = ioc->chains_per_prp_buffer;
-diff --git a/drivers/scsi/qedf/qedf.h b/drivers/scsi/qedf/qedf.h
-index f3f399fe10c8..0da4e16fb23a 100644
---- a/drivers/scsi/qedf/qedf.h
-+++ b/drivers/scsi/qedf/qedf.h
-@@ -355,6 +355,7 @@ struct qedf_ctx {
- #define QEDF_GRCDUMP_CAPTURE 4
- #define QEDF_IN_RECOVERY 5
- #define QEDF_DBG_STOP_IO 6
-+#define QEDF_PROBING 8
- unsigned long flags; /* Miscellaneous state flags */
- int fipvlan_retries;
- u8 num_queues;
-diff --git a/drivers/scsi/qedf/qedf_main.c b/drivers/scsi/qedf/qedf_main.c
-index 5b19f5175c5c..3a7d03472922 100644
---- a/drivers/scsi/qedf/qedf_main.c
-+++ b/drivers/scsi/qedf/qedf_main.c
-@@ -3153,7 +3153,7 @@ static int __qedf_probe(struct pci_dev *pdev, int mode)
- {
- int rc = -EINVAL;
- struct fc_lport *lport;
-- struct qedf_ctx *qedf;
-+ struct qedf_ctx *qedf = NULL;
- struct Scsi_Host *host;
- bool is_vf = false;
- struct qed_ll2_params params;
-@@ -3183,6 +3183,7 @@ static int __qedf_probe(struct pci_dev *pdev, int mode)
-
- /* Initialize qedf_ctx */
- qedf = lport_priv(lport);
-+ set_bit(QEDF_PROBING, &qedf->flags);
- qedf->lport = lport;
- qedf->ctlr.lp = lport;
- qedf->pdev = pdev;
-@@ -3206,9 +3207,12 @@ static int __qedf_probe(struct pci_dev *pdev, int mode)
- } else {
- /* Init pointers during recovery */
- qedf = pci_get_drvdata(pdev);
-+ set_bit(QEDF_PROBING, &qedf->flags);
- lport = qedf->lport;
- }
-
-+ QEDF_INFO(&qedf->dbg_ctx, QEDF_LOG_DISC, "Probe started.\n");
-+
- host = lport->host;
-
- /* Allocate mempool for qedf_io_work structs */
-@@ -3513,6 +3517,10 @@ static int __qedf_probe(struct pci_dev *pdev, int mode)
- else
- fc_fabric_login(lport);
-
-+ QEDF_INFO(&qedf->dbg_ctx, QEDF_LOG_DISC, "Probe done.\n");
-+
-+ clear_bit(QEDF_PROBING, &qedf->flags);
-+
- /* All good */
- return 0;
-
-@@ -3538,6 +3546,11 @@ err2:
- err1:
- scsi_host_put(lport->host);
- err0:
-+ if (qedf) {
-+ QEDF_INFO(&qedf->dbg_ctx, QEDF_LOG_DISC, "Probe done.\n");
-+
-+ clear_bit(QEDF_PROBING, &qedf->flags);
-+ }
- return rc;
- }
-
-@@ -3687,11 +3700,25 @@ void qedf_get_protocol_tlv_data(void *dev, void *data)
- {
- struct qedf_ctx *qedf = dev;
- struct qed_mfw_tlv_fcoe *fcoe = data;
-- struct fc_lport *lport = qedf->lport;
-- struct Scsi_Host *host = lport->host;
-- struct fc_host_attrs *fc_host = shost_to_fc_host(host);
-+ struct fc_lport *lport;
-+ struct Scsi_Host *host;
-+ struct fc_host_attrs *fc_host;
- struct fc_host_statistics *hst;
-
-+ if (!qedf) {
-+ QEDF_ERR(NULL, "qedf is null.\n");
-+ return;
-+ }
-+
-+ if (test_bit(QEDF_PROBING, &qedf->flags)) {
-+ QEDF_ERR(&qedf->dbg_ctx, "Function is still probing.\n");
-+ return;
-+ }
-+
-+ lport = qedf->lport;
-+ host = lport->host;
-+ fc_host = shost_to_fc_host(host);
-+
- /* Force a refresh of the fc_host stats including offload stats */
- hst = qedf_fc_get_host_stats(host);
-
-diff --git a/drivers/scsi/qedi/qedi_iscsi.c b/drivers/scsi/qedi/qedi_iscsi.c
-index 1f4a5fb00a05..366c65b295a5 100644
---- a/drivers/scsi/qedi/qedi_iscsi.c
-+++ b/drivers/scsi/qedi/qedi_iscsi.c
-@@ -1001,7 +1001,8 @@ static void qedi_ep_disconnect(struct iscsi_endpoint *ep)
- if (qedi_ep->state == EP_STATE_OFLDCONN_START)
- goto ep_exit_recover;
-
-- flush_work(&qedi_ep->offload_work);
-+ if (qedi_ep->state != EP_STATE_OFLDCONN_NONE)
-+ flush_work(&qedi_ep->offload_work);
-
- if (qedi_ep->conn) {
- qedi_conn = qedi_ep->conn;
-@@ -1218,6 +1219,10 @@ static int qedi_set_path(struct Scsi_Host *shost, struct iscsi_path *path_data)
- }
-
- iscsi_cid = (u32)path_data->handle;
-+ if (iscsi_cid >= qedi->max_active_conns) {
-+ ret = -EINVAL;
-+ goto set_path_exit;
-+ }
- qedi_ep = qedi->ep_tbl[iscsi_cid];
- QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_INFO,
- "iscsi_cid=0x%x, qedi_ep=%p\n", iscsi_cid, qedi_ep);
-diff --git a/drivers/scsi/qla2xxx/qla_os.c b/drivers/scsi/qla2xxx/qla_os.c
-index 1d9a4866f9a7..9179bb4caed8 100644
---- a/drivers/scsi/qla2xxx/qla_os.c
-+++ b/drivers/scsi/qla2xxx/qla_os.c
-@@ -6871,6 +6871,7 @@ qla2x00_do_dpc(void *data)
-
- if (do_reset && !(test_and_set_bit(ABORT_ISP_ACTIVE,
- &base_vha->dpc_flags))) {
-+ base_vha->flags.online = 1;
- ql_dbg(ql_dbg_dpc, base_vha, 0x4007,
- "ISP abort scheduled.\n");
- if (ha->isp_ops->abort_isp(base_vha)) {
-diff --git a/drivers/scsi/qla2xxx/tcm_qla2xxx.c b/drivers/scsi/qla2xxx/tcm_qla2xxx.c
-index 1f0a185b2a95..bf00ae16b487 100644
---- a/drivers/scsi/qla2xxx/tcm_qla2xxx.c
-+++ b/drivers/scsi/qla2xxx/tcm_qla2xxx.c
-@@ -949,6 +949,7 @@ static ssize_t tcm_qla2xxx_tpg_enable_store(struct config_item *item,
-
- atomic_set(&tpg->lport_tpg_enabled, 0);
- qlt_stop_phase1(vha->vha_tgt.qla_tgt);
-+ qlt_stop_phase2(vha->vha_tgt.qla_tgt);
- }
-
- return count;
-@@ -1111,6 +1112,7 @@ static ssize_t tcm_qla2xxx_npiv_tpg_enable_store(struct config_item *item,
-
- atomic_set(&tpg->lport_tpg_enabled, 0);
- qlt_stop_phase1(vha->vha_tgt.qla_tgt);
-+ qlt_stop_phase2(vha->vha_tgt.qla_tgt);
- }
-
- return count;
-diff --git a/drivers/scsi/scsi_error.c b/drivers/scsi/scsi_error.c
-index 978be1602f71..927b1e641842 100644
---- a/drivers/scsi/scsi_error.c
-+++ b/drivers/scsi/scsi_error.c
-@@ -1412,6 +1412,7 @@ static int scsi_eh_stu(struct Scsi_Host *shost,
- sdev_printk(KERN_INFO, sdev,
- "%s: skip START_UNIT, past eh deadline\n",
- current->comm));
-+ scsi_device_put(sdev);
- break;
- }
- stu_scmd = NULL;
-@@ -1478,6 +1479,7 @@ static int scsi_eh_bus_device_reset(struct Scsi_Host *shost,
- sdev_printk(KERN_INFO, sdev,
- "%s: skip BDR, past eh deadline\n",
- current->comm));
-+ scsi_device_put(sdev);
- break;
- }
- bdr_scmd = NULL;
-diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
-index 06c260f6cdae..b8b4366f1200 100644
---- a/drivers/scsi/scsi_lib.c
-+++ b/drivers/scsi/scsi_lib.c
-@@ -548,7 +548,7 @@ static void scsi_uninit_cmd(struct scsi_cmnd *cmd)
- }
- }
-
--static void scsi_mq_free_sgtables(struct scsi_cmnd *cmd)
-+static void scsi_free_sgtables(struct scsi_cmnd *cmd)
- {
- if (cmd->sdb.table.nents)
- sg_free_table_chained(&cmd->sdb.table,
-@@ -560,7 +560,7 @@ static void scsi_mq_free_sgtables(struct scsi_cmnd *cmd)
-
- static void scsi_mq_uninit_cmd(struct scsi_cmnd *cmd)
- {
-- scsi_mq_free_sgtables(cmd);
-+ scsi_free_sgtables(cmd);
- scsi_uninit_cmd(cmd);
- }
-
-@@ -1059,7 +1059,7 @@ blk_status_t scsi_init_io(struct scsi_cmnd *cmd)
-
- return BLK_STS_OK;
- out_free_sgtables:
-- scsi_mq_free_sgtables(cmd);
-+ scsi_free_sgtables(cmd);
- return ret;
- }
- EXPORT_SYMBOL(scsi_init_io);
-@@ -1190,6 +1190,7 @@ static blk_status_t scsi_setup_cmnd(struct scsi_device *sdev,
- struct request *req)
- {
- struct scsi_cmnd *cmd = blk_mq_rq_to_pdu(req);
-+ blk_status_t ret;
-
- if (!blk_rq_bytes(req))
- cmd->sc_data_direction = DMA_NONE;
-@@ -1199,9 +1200,14 @@ static blk_status_t scsi_setup_cmnd(struct scsi_device *sdev,
- cmd->sc_data_direction = DMA_FROM_DEVICE;
-
- if (blk_rq_is_scsi(req))
-- return scsi_setup_scsi_cmnd(sdev, req);
-+ ret = scsi_setup_scsi_cmnd(sdev, req);
- else
-- return scsi_setup_fs_cmnd(sdev, req);
-+ ret = scsi_setup_fs_cmnd(sdev, req);
-+
-+ if (ret != BLK_STS_OK)
-+ scsi_free_sgtables(cmd);
-+
-+ return ret;
- }
-
- static blk_status_t
-@@ -2859,8 +2865,10 @@ scsi_host_unblock(struct Scsi_Host *shost, int new_state)
-
- shost_for_each_device(sdev, shost) {
- ret = scsi_internal_device_unblock(sdev, new_state);
-- if (ret)
-+ if (ret) {
-+ scsi_device_put(sdev);
- break;
-+ }
- }
- return ret;
- }
-diff --git a/drivers/scsi/scsi_transport_iscsi.c b/drivers/scsi/scsi_transport_iscsi.c
-index b2a803c51288..ea6d498fa923 100644
---- a/drivers/scsi/scsi_transport_iscsi.c
-+++ b/drivers/scsi/scsi_transport_iscsi.c
-@@ -1616,6 +1616,12 @@ static DECLARE_TRANSPORT_CLASS(iscsi_connection_class,
- static struct sock *nls;
- static DEFINE_MUTEX(rx_queue_mutex);
-
-+/*
-+ * conn_mutex protects the {start,bind,stop,destroy}_conn from racing
-+ * against the kernel stop_connection recovery mechanism
-+ */
-+static DEFINE_MUTEX(conn_mutex);
-+
- static LIST_HEAD(sesslist);
- static LIST_HEAD(sessdestroylist);
- static DEFINE_SPINLOCK(sesslock);
-@@ -2445,6 +2451,32 @@ int iscsi_offload_mesg(struct Scsi_Host *shost,
- }
- EXPORT_SYMBOL_GPL(iscsi_offload_mesg);
-
-+/*
-+ * This can be called without the rx_queue_mutex, if invoked by the kernel
-+ * stop work. But, in that case, it is guaranteed not to race with
-+ * iscsi_destroy by conn_mutex.
-+ */
-+static void iscsi_if_stop_conn(struct iscsi_cls_conn *conn, int flag)
-+{
-+ /*
-+ * It is important that this path doesn't rely on
-+ * rx_queue_mutex, otherwise, a thread doing allocation on a
-+ * start_session/start_connection could sleep waiting on a
-+ * writeback to a failed iscsi device, that cannot be recovered
-+ * because the lock is held. If we don't hold it here, the
-+ * kernel stop_conn_work_fn has a chance to stop the broken
-+ * session and resolve the allocation.
-+ *
-+ * Still, the user invoked .stop_conn() needs to be serialized
-+ * with stop_conn_work_fn by a private mutex. Not pretty, but
-+ * it works.
-+ */
-+ mutex_lock(&conn_mutex);
-+ conn->transport->stop_conn(conn, flag);
-+ mutex_unlock(&conn_mutex);
-+
-+}
-+
- static void stop_conn_work_fn(struct work_struct *work)
- {
- struct iscsi_cls_conn *conn, *tmp;
-@@ -2463,30 +2495,17 @@ static void stop_conn_work_fn(struct work_struct *work)
- uint32_t sid = iscsi_conn_get_sid(conn);
- struct iscsi_cls_session *session;
-
-- mutex_lock(&rx_queue_mutex);
--
- session = iscsi_session_lookup(sid);
- if (session) {
- if (system_state != SYSTEM_RUNNING) {
- session->recovery_tmo = 0;
-- conn->transport->stop_conn(conn,
-- STOP_CONN_TERM);
-+ iscsi_if_stop_conn(conn, STOP_CONN_TERM);
- } else {
-- conn->transport->stop_conn(conn,
-- STOP_CONN_RECOVER);
-+ iscsi_if_stop_conn(conn, STOP_CONN_RECOVER);
- }
- }
-
- list_del_init(&conn->conn_list_err);
--
-- mutex_unlock(&rx_queue_mutex);
--
-- /* we don't want to hold rx_queue_mutex for too long,
-- * for instance if many conns failed at the same time,
-- * since this stall other iscsi maintenance operations.
-- * Give other users a chance to proceed.
-- */
-- cond_resched();
- }
- }
-
-@@ -2846,8 +2865,11 @@ iscsi_if_destroy_conn(struct iscsi_transport *transport, struct iscsi_uevent *ev
- spin_unlock_irqrestore(&connlock, flags);
-
- ISCSI_DBG_TRANS_CONN(conn, "Destroying transport conn\n");
-+
-+ mutex_lock(&conn_mutex);
- if (transport->destroy_conn)
- transport->destroy_conn(conn);
-+ mutex_unlock(&conn_mutex);
-
- return 0;
- }
-@@ -3689,9 +3711,12 @@ iscsi_if_recv_msg(struct sk_buff *skb, struct nlmsghdr *nlh, uint32_t *group)
- break;
- }
-
-+ mutex_lock(&conn_mutex);
- ev->r.retcode = transport->bind_conn(session, conn,
- ev->u.b_conn.transport_eph,
- ev->u.b_conn.is_leading);
-+ mutex_unlock(&conn_mutex);
-+
- if (ev->r.retcode || !transport->ep_connect)
- break;
-
-@@ -3713,9 +3738,11 @@ iscsi_if_recv_msg(struct sk_buff *skb, struct nlmsghdr *nlh, uint32_t *group)
- case ISCSI_UEVENT_START_CONN:
- conn = iscsi_conn_lookup(ev->u.start_conn.sid, ev->u.start_conn.cid);
- if (conn) {
-+ mutex_lock(&conn_mutex);
- ev->r.retcode = transport->start_conn(conn);
- if (!ev->r.retcode)
- conn->state = ISCSI_CONN_UP;
-+ mutex_unlock(&conn_mutex);
- }
- else
- err = -EINVAL;
-@@ -3723,17 +3750,20 @@ iscsi_if_recv_msg(struct sk_buff *skb, struct nlmsghdr *nlh, uint32_t *group)
- case ISCSI_UEVENT_STOP_CONN:
- conn = iscsi_conn_lookup(ev->u.stop_conn.sid, ev->u.stop_conn.cid);
- if (conn)
-- transport->stop_conn(conn, ev->u.stop_conn.flag);
-+ iscsi_if_stop_conn(conn, ev->u.stop_conn.flag);
- else
- err = -EINVAL;
- break;
- case ISCSI_UEVENT_SEND_PDU:
- conn = iscsi_conn_lookup(ev->u.send_pdu.sid, ev->u.send_pdu.cid);
-- if (conn)
-+ if (conn) {
-+ mutex_lock(&conn_mutex);
- ev->r.retcode = transport->send_pdu(conn,
- (struct iscsi_hdr*)((char*)ev + sizeof(*ev)),
- (char*)ev + sizeof(*ev) + ev->u.send_pdu.hdr_size,
- ev->u.send_pdu.data_size);
-+ mutex_unlock(&conn_mutex);
-+ }
- else
- err = -EINVAL;
- break;
-diff --git a/drivers/scsi/sr.c b/drivers/scsi/sr.c
-index d2fe3fa470f9..1e13c6a0f0ca 100644
---- a/drivers/scsi/sr.c
-+++ b/drivers/scsi/sr.c
-@@ -797,7 +797,7 @@ static int sr_probe(struct device *dev)
- cd->cdi.disk = disk;
-
- if (register_cdrom(&cd->cdi))
-- goto fail_put;
-+ goto fail_minor;
-
- /*
- * Initialize block layer runtime PM stuffs before the
-@@ -815,8 +815,13 @@ static int sr_probe(struct device *dev)
-
- return 0;
-
-+fail_minor:
-+ spin_lock(&sr_index_lock);
-+ clear_bit(minor, sr_index_bits);
-+ spin_unlock(&sr_index_lock);
- fail_put:
- put_disk(disk);
-+ mutex_destroy(&cd->lock);
- fail_free:
- kfree(cd);
- fail:
-diff --git a/drivers/scsi/ufs/ti-j721e-ufs.c b/drivers/scsi/ufs/ti-j721e-ufs.c
-index 5216d228cdd9..46bb905b4d6a 100644
---- a/drivers/scsi/ufs/ti-j721e-ufs.c
-+++ b/drivers/scsi/ufs/ti-j721e-ufs.c
-@@ -32,14 +32,14 @@ static int ti_j721e_ufs_probe(struct platform_device *pdev)
- ret = pm_runtime_get_sync(dev);
- if (ret < 0) {
- pm_runtime_put_noidle(dev);
-- return ret;
-+ goto disable_pm;
- }
-
- /* Select MPHY refclk frequency */
- clk = devm_clk_get(dev, NULL);
- if (IS_ERR(clk)) {
- dev_err(dev, "Cannot claim MPHY clock.\n");
-- return PTR_ERR(clk);
-+ goto clk_err;
- }
- clk_rate = clk_get_rate(clk);
- if (clk_rate == 26000000)
-@@ -54,16 +54,23 @@ static int ti_j721e_ufs_probe(struct platform_device *pdev)
- dev);
- if (ret) {
- dev_err(dev, "failed to populate child nodes %d\n", ret);
-- pm_runtime_put_sync(dev);
-+ goto clk_err;
- }
-
- return ret;
-+
-+clk_err:
-+ pm_runtime_put_sync(dev);
-+disable_pm:
-+ pm_runtime_disable(dev);
-+ return ret;
- }
-
- static int ti_j721e_ufs_remove(struct platform_device *pdev)
- {
- of_platform_depopulate(&pdev->dev);
- pm_runtime_put_sync(&pdev->dev);
-+ pm_runtime_disable(&pdev->dev);
-
- return 0;
- }
-diff --git a/drivers/scsi/ufs/ufs-qcom.c b/drivers/scsi/ufs/ufs-qcom.c
-index 19aa5c44e0da..f938867301a0 100644
---- a/drivers/scsi/ufs/ufs-qcom.c
-+++ b/drivers/scsi/ufs/ufs-qcom.c
-@@ -1658,11 +1658,11 @@ static void ufs_qcom_dump_dbg_regs(struct ufs_hba *hba)
-
- /* sleep a bit intermittently as we are dumping too much data */
- ufs_qcom_print_hw_debug_reg_all(hba, NULL, ufs_qcom_dump_regs_wrapper);
-- usleep_range(1000, 1100);
-+ udelay(1000);
- ufs_qcom_testbus_read(hba);
-- usleep_range(1000, 1100);
-+ udelay(1000);
- ufs_qcom_print_unipro_testbus(hba);
-- usleep_range(1000, 1100);
-+ udelay(1000);
- }
-
- /**
-diff --git a/drivers/scsi/ufs/ufs_bsg.c b/drivers/scsi/ufs/ufs_bsg.c
-index 53dd87628cbe..516a7f573942 100644
---- a/drivers/scsi/ufs/ufs_bsg.c
-+++ b/drivers/scsi/ufs/ufs_bsg.c
-@@ -106,8 +106,10 @@ static int ufs_bsg_request(struct bsg_job *job)
- desc_op = bsg_request->upiu_req.qr.opcode;
- ret = ufs_bsg_alloc_desc_buffer(hba, job, &desc_buff,
- &desc_len, desc_op);
-- if (ret)
-+ if (ret) {
-+ pm_runtime_put_sync(hba->dev);
- goto out;
-+ }
-
- /* fall through */
- case UPIU_TRANSACTION_NOP_OUT:
-diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c
-index 698e8d20b4ba..52740b60d786 100644
---- a/drivers/scsi/ufs/ufshcd.c
-+++ b/drivers/scsi/ufs/ufshcd.c
-@@ -5098,7 +5098,6 @@ static int ufshcd_bkops_ctrl(struct ufs_hba *hba,
- err = ufshcd_enable_auto_bkops(hba);
- else
- err = ufshcd_disable_auto_bkops(hba);
-- hba->urgent_bkops_lvl = curr_status;
- out:
- return err;
- }
-diff --git a/drivers/slimbus/qcom-ngd-ctrl.c b/drivers/slimbus/qcom-ngd-ctrl.c
-index fc2575fef51b..7426b5884218 100644
---- a/drivers/slimbus/qcom-ngd-ctrl.c
-+++ b/drivers/slimbus/qcom-ngd-ctrl.c
-@@ -1361,7 +1361,6 @@ static int of_qcom_slim_ngd_register(struct device *parent,
- ngd->pdev->driver_override = QCOM_SLIM_NGD_DRV_NAME;
- ngd->pdev->dev.of_node = node;
- ctrl->ngd = ngd;
-- platform_set_drvdata(ngd->pdev, ctrl);
-
- platform_device_add(ngd->pdev);
- ngd->base = ctrl->base + ngd->id * data->offset +
-@@ -1376,12 +1375,13 @@ static int of_qcom_slim_ngd_register(struct device *parent,
-
- static int qcom_slim_ngd_probe(struct platform_device *pdev)
- {
-- struct qcom_slim_ngd_ctrl *ctrl = platform_get_drvdata(pdev);
- struct device *dev = &pdev->dev;
-+ struct qcom_slim_ngd_ctrl *ctrl = dev_get_drvdata(dev->parent);
- int ret;
-
- ctrl->ctrl.dev = dev;
-
-+ platform_set_drvdata(pdev, ctrl);
- pm_runtime_use_autosuspend(dev);
- pm_runtime_set_autosuspend_delay(dev, QCOM_SLIM_NGD_AUTOSUSPEND);
- pm_runtime_set_suspended(dev);
-diff --git a/drivers/soundwire/slave.c b/drivers/soundwire/slave.c
-index aace57fae7f8..4bacdb187eab 100644
---- a/drivers/soundwire/slave.c
-+++ b/drivers/soundwire/slave.c
-@@ -68,6 +68,8 @@ static int sdw_slave_add(struct sdw_bus *bus,
- list_del(&slave->node);
- mutex_unlock(&bus->bus_lock);
- put_device(&slave->dev);
-+
-+ return ret;
- }
- sdw_slave_debugfs_init(slave);
-
-diff --git a/drivers/staging/gasket/gasket_sysfs.c b/drivers/staging/gasket/gasket_sysfs.c
-index 5f0e089573a2..af26bc9f184a 100644
---- a/drivers/staging/gasket/gasket_sysfs.c
-+++ b/drivers/staging/gasket/gasket_sysfs.c
-@@ -339,6 +339,7 @@ void gasket_sysfs_put_attr(struct device *device,
-
- dev_err(device, "Unable to put unknown attribute: %s\n",
- attr->attr.attr.name);
-+ put_mapping(mapping);
- }
- EXPORT_SYMBOL(gasket_sysfs_put_attr);
-
-@@ -372,6 +373,7 @@ ssize_t gasket_sysfs_register_store(struct device *device,
- gasket_dev = mapping->gasket_dev;
- if (!gasket_dev) {
- dev_err(device, "Device driver may have been removed\n");
-+ put_mapping(mapping);
- return 0;
- }
-
-diff --git a/drivers/staging/greybus/light.c b/drivers/staging/greybus/light.c
-index d6ba25f21d80..d2672b65c3f4 100644
---- a/drivers/staging/greybus/light.c
-+++ b/drivers/staging/greybus/light.c
-@@ -1026,7 +1026,8 @@ static int gb_lights_light_config(struct gb_lights *glights, u8 id)
-
- light->channels_count = conf.channel_count;
- light->name = kstrndup(conf.name, NAMES_MAX, GFP_KERNEL);
--
-+ if (!light->name)
-+ return -ENOMEM;
- light->channels = kcalloc(light->channels_count,
- sizeof(struct gb_channel), GFP_KERNEL);
- if (!light->channels)
-diff --git a/drivers/staging/mt7621-dts/mt7621.dtsi b/drivers/staging/mt7621-dts/mt7621.dtsi
-index 9e5cf68731bb..82aa93634eda 100644
---- a/drivers/staging/mt7621-dts/mt7621.dtsi
-+++ b/drivers/staging/mt7621-dts/mt7621.dtsi
-@@ -523,11 +523,10 @@
- 0x01000000 0 0x00000000 0x1e160000 0 0x00010000 /* io space */
- >;
-
-- #interrupt-cells = <1>;
-- interrupt-map-mask = <0xF0000 0 0 1>;
-- interrupt-map = <0x10000 0 0 1 &gic GIC_SHARED 4 IRQ_TYPE_LEVEL_HIGH>,
-- <0x20000 0 0 1 &gic GIC_SHARED 24 IRQ_TYPE_LEVEL_HIGH>,
-- <0x30000 0 0 1 &gic GIC_SHARED 25 IRQ_TYPE_LEVEL_HIGH>;
-+ interrupt-parent = <&gic>;
-+ interrupts = <GIC_SHARED 4 IRQ_TYPE_LEVEL_HIGH
-+ GIC_SHARED 24 IRQ_TYPE_LEVEL_HIGH
-+ GIC_SHARED 25 IRQ_TYPE_LEVEL_HIGH>;
-
- status = "disabled";
-
-diff --git a/drivers/staging/mt7621-pci/pci-mt7621.c b/drivers/staging/mt7621-pci/pci-mt7621.c
-index b9d460a9c041..36207243a71b 100644
---- a/drivers/staging/mt7621-pci/pci-mt7621.c
-+++ b/drivers/staging/mt7621-pci/pci-mt7621.c
-@@ -97,6 +97,7 @@
- * @pcie_rst: pointer to port reset control
- * @gpio_rst: gpio reset
- * @slot: port slot
-+ * @irq: GIC irq
- * @enabled: indicates if port is enabled
- */
- struct mt7621_pcie_port {
-@@ -107,6 +108,7 @@ struct mt7621_pcie_port {
- struct reset_control *pcie_rst;
- struct gpio_desc *gpio_rst;
- u32 slot;
-+ int irq;
- bool enabled;
- };
-
-@@ -120,6 +122,7 @@ struct mt7621_pcie_port {
- * @dev: Pointer to PCIe device
- * @io_map_base: virtual memory base address for io
- * @ports: pointer to PCIe port information
-+ * @irq_map: irq mapping info according pcie link status
- * @resets_inverted: depends on chip revision
- * reset lines are inverted.
- */
-@@ -135,6 +138,7 @@ struct mt7621_pcie {
- } offset;
- unsigned long io_map_base;
- struct list_head ports;
-+ int irq_map[PCIE_P2P_MAX];
- bool resets_inverted;
- };
-
-@@ -279,6 +283,16 @@ static void setup_cm_memory_region(struct mt7621_pcie *pcie)
- }
- }
-
-+static int mt7621_map_irq(const struct pci_dev *pdev, u8 slot, u8 pin)
-+{
-+ struct mt7621_pcie *pcie = pdev->bus->sysdata;
-+ struct device *dev = pcie->dev;
-+ int irq = pcie->irq_map[slot];
-+
-+ dev_info(dev, "bus=%d slot=%d irq=%d\n", pdev->bus->number, slot, irq);
-+ return irq;
-+}
-+
- static int mt7621_pci_parse_request_of_pci_ranges(struct mt7621_pcie *pcie)
- {
- struct device *dev = pcie->dev;
-@@ -330,6 +344,7 @@ static int mt7621_pcie_parse_port(struct mt7621_pcie *pcie,
- {
- struct mt7621_pcie_port *port;
- struct device *dev = pcie->dev;
-+ struct platform_device *pdev = to_platform_device(dev);
- struct device_node *pnode = dev->of_node;
- struct resource regs;
- char name[10];
-@@ -371,6 +386,12 @@ static int mt7621_pcie_parse_port(struct mt7621_pcie *pcie,
- port->slot = slot;
- port->pcie = pcie;
-
-+ port->irq = platform_get_irq(pdev, slot);
-+ if (port->irq < 0) {
-+ dev_err(dev, "Failed to get IRQ for PCIe%d\n", slot);
-+ return -ENXIO;
-+ }
-+
- INIT_LIST_HEAD(&port->list);
- list_add_tail(&port->list, &pcie->ports);
-
-@@ -585,13 +606,15 @@ static int mt7621_pcie_init_virtual_bridges(struct mt7621_pcie *pcie)
- {
- u32 pcie_link_status = 0;
- u32 n;
-- int i;
-+ int i = 0;
- u32 p2p_br_devnum[PCIE_P2P_MAX];
-+ int irqs[PCIE_P2P_MAX];
- struct mt7621_pcie_port *port;
-
- list_for_each_entry(port, &pcie->ports, list) {
- u32 slot = port->slot;
-
-+ irqs[i++] = port->irq;
- if (port->enabled)
- pcie_link_status |= BIT(slot);
- }
-@@ -614,6 +637,15 @@ static int mt7621_pcie_init_virtual_bridges(struct mt7621_pcie *pcie)
- (p2p_br_devnum[1] << PCIE_P2P_BR_DEVNUM1_SHIFT) |
- (p2p_br_devnum[2] << PCIE_P2P_BR_DEVNUM2_SHIFT));
-
-+ /* Assign IRQs */
-+ n = 0;
-+ for (i = 0; i < PCIE_P2P_MAX; i++)
-+ if (pcie_link_status & BIT(i))
-+ pcie->irq_map[n++] = irqs[i];
-+
-+ for (i = n; i < PCIE_P2P_MAX; i++)
-+ pcie->irq_map[i] = -1;
-+
- return 0;
- }
-
-@@ -638,7 +670,7 @@ static int mt7621_pcie_register_host(struct pci_host_bridge *host,
- host->busnr = pcie->busn.start;
- host->dev.parent = pcie->dev;
- host->ops = &mt7621_pci_ops;
-- host->map_irq = of_irq_parse_and_map_pci;
-+ host->map_irq = mt7621_map_irq;
- host->swizzle_irq = pci_common_swizzle;
- host->sysdata = pcie;
-
-diff --git a/drivers/staging/sm750fb/sm750.c b/drivers/staging/sm750fb/sm750.c
-index 59568d18ce23..5b72aa81d94c 100644
---- a/drivers/staging/sm750fb/sm750.c
-+++ b/drivers/staging/sm750fb/sm750.c
-@@ -898,6 +898,7 @@ static int lynxfb_set_fbinfo(struct fb_info *info, int index)
- fix->visual = FB_VISUAL_PSEUDOCOLOR;
- break;
- case 16:
-+ case 24:
- case 32:
- fix->visual = FB_VISUAL_TRUECOLOR;
- break;
-diff --git a/drivers/staging/wfx/bus_sdio.c b/drivers/staging/wfx/bus_sdio.c
-index dedc3ff58d3e..c2e4bd1e3b0a 100644
---- a/drivers/staging/wfx/bus_sdio.c
-+++ b/drivers/staging/wfx/bus_sdio.c
-@@ -156,7 +156,13 @@ static const struct hwbus_ops wfx_sdio_hwbus_ops = {
- .align_size = wfx_sdio_align_size,
- };
-
--static const struct of_device_id wfx_sdio_of_match[];
-+static const struct of_device_id wfx_sdio_of_match[] = {
-+ { .compatible = "silabs,wfx-sdio" },
-+ { .compatible = "silabs,wf200" },
-+ { },
-+};
-+MODULE_DEVICE_TABLE(of, wfx_sdio_of_match);
-+
- static int wfx_sdio_probe(struct sdio_func *func,
- const struct sdio_device_id *id)
- {
-@@ -248,15 +254,6 @@ static const struct sdio_device_id wfx_sdio_ids[] = {
- };
- MODULE_DEVICE_TABLE(sdio, wfx_sdio_ids);
-
--#ifdef CONFIG_OF
--static const struct of_device_id wfx_sdio_of_match[] = {
-- { .compatible = "silabs,wfx-sdio" },
-- { .compatible = "silabs,wf200" },
-- { },
--};
--MODULE_DEVICE_TABLE(of, wfx_sdio_of_match);
--#endif
--
- struct sdio_driver wfx_sdio_driver = {
- .name = "wfx-sdio",
- .id_table = wfx_sdio_ids,
-@@ -264,6 +261,6 @@ struct sdio_driver wfx_sdio_driver = {
- .remove = wfx_sdio_remove,
- .drv = {
- .owner = THIS_MODULE,
-- .of_match_table = of_match_ptr(wfx_sdio_of_match),
-+ .of_match_table = wfx_sdio_of_match,
- }
- };
-diff --git a/drivers/staging/wfx/debug.c b/drivers/staging/wfx/debug.c
-index 1164aba118a1..a73b5bbb578e 100644
---- a/drivers/staging/wfx/debug.c
-+++ b/drivers/staging/wfx/debug.c
-@@ -142,7 +142,7 @@ static int wfx_rx_stats_show(struct seq_file *seq, void *v)
- mutex_lock(&wdev->rx_stats_lock);
- seq_printf(seq, "Timestamp: %dus\n", st->date);
- seq_printf(seq, "Low power clock: frequency %uHz, external %s\n",
-- st->pwr_clk_freq,
-+ le32_to_cpu(st->pwr_clk_freq),
- st->is_ext_pwr_clk ? "yes" : "no");
- seq_printf(seq,
- "Num. of frames: %d, PER (x10e4): %d, Throughput: %dKbps/s\n",
-@@ -152,9 +152,12 @@ static int wfx_rx_stats_show(struct seq_file *seq, void *v)
- for (i = 0; i < ARRAY_SIZE(channel_names); i++) {
- if (channel_names[i])
- seq_printf(seq, "%5s %8d %8d %8d %8d %8d\n",
-- channel_names[i], st->nb_rx_by_rate[i],
-- st->per[i], st->rssi[i] / 100,
-- st->snr[i] / 100, st->cfo[i]);
-+ channel_names[i],
-+ le32_to_cpu(st->nb_rx_by_rate[i]),
-+ le16_to_cpu(st->per[i]),
-+ (s16)le16_to_cpu(st->rssi[i]) / 100,
-+ (s16)le16_to_cpu(st->snr[i]) / 100,
-+ (s16)le16_to_cpu(st->cfo[i]));
- }
- mutex_unlock(&wdev->rx_stats_lock);
-
-diff --git a/drivers/staging/wfx/hif_tx.c b/drivers/staging/wfx/hif_tx.c
-index 77bca43aca42..20b3045d7667 100644
---- a/drivers/staging/wfx/hif_tx.c
-+++ b/drivers/staging/wfx/hif_tx.c
-@@ -268,7 +268,7 @@ int hif_scan(struct wfx_vif *wvif, struct cfg80211_scan_request *req,
- tmo_chan_bg = le32_to_cpu(body->max_channel_time) * USEC_PER_TU;
- tmo_chan_fg = 512 * USEC_PER_TU + body->probe_delay;
- tmo_chan_fg *= body->num_of_probe_requests;
-- tmo = chan_num * max(tmo_chan_bg, tmo_chan_fg);
-+ tmo = chan_num * max(tmo_chan_bg, tmo_chan_fg) + 512 * USEC_PER_TU;
-
- wfx_fill_header(hif, wvif->id, HIF_REQ_ID_START_SCAN, buf_len);
- ret = wfx_cmd_send(wvif->wdev, hif, NULL, 0, false);
-diff --git a/drivers/staging/wfx/queue.c b/drivers/staging/wfx/queue.c
-index 39d9127ce4b9..8ae23681e29b 100644
---- a/drivers/staging/wfx/queue.c
-+++ b/drivers/staging/wfx/queue.c
-@@ -35,6 +35,7 @@ void wfx_tx_flush(struct wfx_dev *wdev)
- if (wdev->chip_frozen)
- return;
-
-+ wfx_tx_lock(wdev);
- mutex_lock(&wdev->hif_cmd.lock);
- ret = wait_event_timeout(wdev->hif.tx_buffers_empty,
- !wdev->hif.tx_buffers_used,
-@@ -47,6 +48,7 @@ void wfx_tx_flush(struct wfx_dev *wdev)
- wdev->chip_frozen = 1;
- }
- mutex_unlock(&wdev->hif_cmd.lock);
-+ wfx_tx_unlock(wdev);
- }
-
- void wfx_tx_lock_flush(struct wfx_dev *wdev)
-diff --git a/drivers/staging/wfx/sta.c b/drivers/staging/wfx/sta.c
-index 9d430346a58b..b4cd7cb1ce56 100644
---- a/drivers/staging/wfx/sta.c
-+++ b/drivers/staging/wfx/sta.c
-@@ -520,7 +520,9 @@ static void wfx_do_join(struct wfx_vif *wvif)
- ssidie = ieee80211_bss_get_ie(bss, WLAN_EID_SSID);
- if (ssidie) {
- ssidlen = ssidie[1];
-- memcpy(ssid, &ssidie[2], ssidie[1]);
-+ if (ssidlen > IEEE80211_MAX_SSID_LEN)
-+ ssidlen = IEEE80211_MAX_SSID_LEN;
-+ memcpy(ssid, &ssidie[2], ssidlen);
- }
- rcu_read_unlock();
-
-@@ -1047,7 +1049,6 @@ int wfx_add_interface(struct ieee80211_hw *hw, struct ieee80211_vif *vif)
- init_completion(&wvif->scan_complete);
- INIT_WORK(&wvif->scan_work, wfx_hw_scan_work);
-
-- INIT_WORK(&wvif->tx_policy_upload_work, wfx_tx_policy_upload_work);
- mutex_unlock(&wdev->conf_mutex);
-
- hif_set_macaddr(wvif, vif->addr);
-diff --git a/drivers/staging/wfx/sta.h b/drivers/staging/wfx/sta.h
-index cf99a8a74a81..ace845f9ed14 100644
---- a/drivers/staging/wfx/sta.h
-+++ b/drivers/staging/wfx/sta.h
-@@ -37,7 +37,7 @@ struct wfx_grp_addr_table {
- struct wfx_sta_priv {
- int link_id;
- int vif_id;
-- u8 buffered[IEEE80211_NUM_TIDS];
-+ int buffered[IEEE80211_NUM_TIDS];
- // Ensure atomicity of "buffered" and calls to ieee80211_sta_set_buffered()
- spinlock_t lock;
- };
-diff --git a/drivers/staging/wilc1000/hif.c b/drivers/staging/wilc1000/hif.c
-index 6c7de2f8d3f2..d025a3093015 100644
---- a/drivers/staging/wilc1000/hif.c
-+++ b/drivers/staging/wilc1000/hif.c
-@@ -11,6 +11,8 @@
-
- #define WILC_FALSE_FRMWR_CHANNEL 100
-
-+#define WILC_SCAN_WID_LIST_SIZE 6
-+
- struct wilc_rcvd_mac_info {
- u8 status;
- };
-@@ -151,7 +153,7 @@ int wilc_scan(struct wilc_vif *vif, u8 scan_source, u8 scan_type,
- void *user_arg, struct cfg80211_scan_request *request)
- {
- int result = 0;
-- struct wid wid_list[5];
-+ struct wid wid_list[WILC_SCAN_WID_LIST_SIZE];
- u32 index = 0;
- u32 i, scan_timeout;
- u8 *buffer;
-diff --git a/drivers/target/loopback/tcm_loop.c b/drivers/target/loopback/tcm_loop.c
-index 3305b47fdf53..16d5a4e117a2 100644
---- a/drivers/target/loopback/tcm_loop.c
-+++ b/drivers/target/loopback/tcm_loop.c
-@@ -545,32 +545,15 @@ static int tcm_loop_write_pending(struct se_cmd *se_cmd)
- return 0;
- }
-
--static int tcm_loop_queue_data_in(struct se_cmd *se_cmd)
-+static int tcm_loop_queue_data_or_status(const char *func,
-+ struct se_cmd *se_cmd, u8 scsi_status)
- {
- struct tcm_loop_cmd *tl_cmd = container_of(se_cmd,
- struct tcm_loop_cmd, tl_se_cmd);
- struct scsi_cmnd *sc = tl_cmd->sc;
-
- pr_debug("%s() called for scsi_cmnd: %p cdb: 0x%02x\n",
-- __func__, sc, sc->cmnd[0]);
--
-- sc->result = SAM_STAT_GOOD;
-- set_host_byte(sc, DID_OK);
-- if ((se_cmd->se_cmd_flags & SCF_OVERFLOW_BIT) ||
-- (se_cmd->se_cmd_flags & SCF_UNDERFLOW_BIT))
-- scsi_set_resid(sc, se_cmd->residual_count);
-- sc->scsi_done(sc);
-- return 0;
--}
--
--static int tcm_loop_queue_status(struct se_cmd *se_cmd)
--{
-- struct tcm_loop_cmd *tl_cmd = container_of(se_cmd,
-- struct tcm_loop_cmd, tl_se_cmd);
-- struct scsi_cmnd *sc = tl_cmd->sc;
--
-- pr_debug("%s() called for scsi_cmnd: %p cdb: 0x%02x\n",
-- __func__, sc, sc->cmnd[0]);
-+ func, sc, sc->cmnd[0]);
-
- if (se_cmd->sense_buffer &&
- ((se_cmd->se_cmd_flags & SCF_TRANSPORT_TASK_SENSE) ||
-@@ -581,7 +564,7 @@ static int tcm_loop_queue_status(struct se_cmd *se_cmd)
- sc->result = SAM_STAT_CHECK_CONDITION;
- set_driver_byte(sc, DRIVER_SENSE);
- } else
-- sc->result = se_cmd->scsi_status;
-+ sc->result = scsi_status;
-
- set_host_byte(sc, DID_OK);
- if ((se_cmd->se_cmd_flags & SCF_OVERFLOW_BIT) ||
-@@ -591,6 +574,17 @@ static int tcm_loop_queue_status(struct se_cmd *se_cmd)
- return 0;
- }
-
-+static int tcm_loop_queue_data_in(struct se_cmd *se_cmd)
-+{
-+ return tcm_loop_queue_data_or_status(__func__, se_cmd, SAM_STAT_GOOD);
-+}
-+
-+static int tcm_loop_queue_status(struct se_cmd *se_cmd)
-+{
-+ return tcm_loop_queue_data_or_status(__func__,
-+ se_cmd, se_cmd->scsi_status);
-+}
-+
- static void tcm_loop_queue_tm_rsp(struct se_cmd *se_cmd)
- {
- struct tcm_loop_cmd *tl_cmd = container_of(se_cmd,
-diff --git a/drivers/target/target_core_user.c b/drivers/target/target_core_user.c
-index f769bb1e3735..b63a1e0c4aa6 100644
---- a/drivers/target/target_core_user.c
-+++ b/drivers/target/target_core_user.c
-@@ -882,41 +882,24 @@ static inline size_t tcmu_cmd_get_cmd_size(struct tcmu_cmd *tcmu_cmd,
- return command_size;
- }
-
--static int tcmu_setup_cmd_timer(struct tcmu_cmd *tcmu_cmd, unsigned int tmo,
-- struct timer_list *timer)
-+static void tcmu_setup_cmd_timer(struct tcmu_cmd *tcmu_cmd, unsigned int tmo,
-+ struct timer_list *timer)
- {
-- struct tcmu_dev *udev = tcmu_cmd->tcmu_dev;
-- int cmd_id;
--
-- if (tcmu_cmd->cmd_id)
-- goto setup_timer;
--
-- cmd_id = idr_alloc(&udev->commands, tcmu_cmd, 1, USHRT_MAX, GFP_NOWAIT);
-- if (cmd_id < 0) {
-- pr_err("tcmu: Could not allocate cmd id.\n");
-- return cmd_id;
-- }
-- tcmu_cmd->cmd_id = cmd_id;
--
-- pr_debug("allocated cmd %u for dev %s tmo %lu\n", tcmu_cmd->cmd_id,
-- udev->name, tmo / MSEC_PER_SEC);
--
--setup_timer:
- if (!tmo)
-- return 0;
-+ return;
-
- tcmu_cmd->deadline = round_jiffies_up(jiffies + msecs_to_jiffies(tmo));
- if (!timer_pending(timer))
- mod_timer(timer, tcmu_cmd->deadline);
-
-- return 0;
-+ pr_debug("Timeout set up for cmd %p, dev = %s, tmo = %lu\n", tcmu_cmd,
-+ tcmu_cmd->tcmu_dev->name, tmo / MSEC_PER_SEC);
- }
-
- static int add_to_qfull_queue(struct tcmu_cmd *tcmu_cmd)
- {
- struct tcmu_dev *udev = tcmu_cmd->tcmu_dev;
- unsigned int tmo;
-- int ret;
-
- /*
- * For backwards compat if qfull_time_out is not set use
-@@ -931,13 +914,11 @@ static int add_to_qfull_queue(struct tcmu_cmd *tcmu_cmd)
- else
- tmo = TCMU_TIME_OUT;
-
-- ret = tcmu_setup_cmd_timer(tcmu_cmd, tmo, &udev->qfull_timer);
-- if (ret)
-- return ret;
-+ tcmu_setup_cmd_timer(tcmu_cmd, tmo, &udev->qfull_timer);
-
- list_add_tail(&tcmu_cmd->queue_entry, &udev->qfull_queue);
-- pr_debug("adding cmd %u on dev %s to ring space wait queue\n",
-- tcmu_cmd->cmd_id, udev->name);
-+ pr_debug("adding cmd %p on dev %s to ring space wait queue\n",
-+ tcmu_cmd, udev->name);
- return 0;
- }
-
-@@ -959,7 +940,7 @@ static int queue_cmd_ring(struct tcmu_cmd *tcmu_cmd, sense_reason_t *scsi_err)
- struct tcmu_mailbox *mb;
- struct tcmu_cmd_entry *entry;
- struct iovec *iov;
-- int iov_cnt, ret;
-+ int iov_cnt, cmd_id;
- uint32_t cmd_head;
- uint64_t cdb_off;
- bool copy_to_data_area;
-@@ -1060,14 +1041,21 @@ static int queue_cmd_ring(struct tcmu_cmd *tcmu_cmd, sense_reason_t *scsi_err)
- }
- entry->req.iov_bidi_cnt = iov_cnt;
-
-- ret = tcmu_setup_cmd_timer(tcmu_cmd, udev->cmd_time_out,
-- &udev->cmd_timer);
-- if (ret) {
-- tcmu_cmd_free_data(tcmu_cmd, tcmu_cmd->dbi_cnt);
-+ cmd_id = idr_alloc(&udev->commands, tcmu_cmd, 1, USHRT_MAX, GFP_NOWAIT);
-+ if (cmd_id < 0) {
-+ pr_err("tcmu: Could not allocate cmd id.\n");
-
-+ tcmu_cmd_free_data(tcmu_cmd, tcmu_cmd->dbi_cnt);
- *scsi_err = TCM_OUT_OF_RESOURCES;
- return -1;
- }
-+ tcmu_cmd->cmd_id = cmd_id;
-+
-+ pr_debug("allocated cmd id %u for cmd %p dev %s\n", tcmu_cmd->cmd_id,
-+ tcmu_cmd, udev->name);
-+
-+ tcmu_setup_cmd_timer(tcmu_cmd, udev->cmd_time_out, &udev->cmd_timer);
-+
- entry->hdr.cmd_id = tcmu_cmd->cmd_id;
-
- /*
-@@ -1279,50 +1267,39 @@ static unsigned int tcmu_handle_completions(struct tcmu_dev *udev)
- return handled;
- }
-
--static int tcmu_check_expired_cmd(int id, void *p, void *data)
-+static void tcmu_check_expired_ring_cmd(struct tcmu_cmd *cmd)
- {
-- struct tcmu_cmd *cmd = p;
-- struct tcmu_dev *udev = cmd->tcmu_dev;
-- u8 scsi_status;
- struct se_cmd *se_cmd;
-- bool is_running;
--
-- if (test_bit(TCMU_CMD_BIT_EXPIRED, &cmd->flags))
-- return 0;
-
- if (!time_after(jiffies, cmd->deadline))
-- return 0;
-+ return;
-
-- is_running = test_bit(TCMU_CMD_BIT_INFLIGHT, &cmd->flags);
-+ set_bit(TCMU_CMD_BIT_EXPIRED, &cmd->flags);
-+ list_del_init(&cmd->queue_entry);
- se_cmd = cmd->se_cmd;
-+ cmd->se_cmd = NULL;
-
-- if (is_running) {
-- /*
-- * If cmd_time_out is disabled but qfull is set deadline
-- * will only reflect the qfull timeout. Ignore it.
-- */
-- if (!udev->cmd_time_out)
-- return 0;
-+ pr_debug("Timing out inflight cmd %u on dev %s.\n",
-+ cmd->cmd_id, cmd->tcmu_dev->name);
-
-- set_bit(TCMU_CMD_BIT_EXPIRED, &cmd->flags);
-- /*
-- * target_complete_cmd will translate this to LUN COMM FAILURE
-- */
-- scsi_status = SAM_STAT_CHECK_CONDITION;
-- list_del_init(&cmd->queue_entry);
-- cmd->se_cmd = NULL;
-- } else {
-- list_del_init(&cmd->queue_entry);
-- idr_remove(&udev->commands, id);
-- tcmu_free_cmd(cmd);
-- scsi_status = SAM_STAT_TASK_SET_FULL;
-- }
-+ target_complete_cmd(se_cmd, SAM_STAT_CHECK_CONDITION);
-+}
-
-- pr_debug("Timing out cmd %u on dev %s that is %s.\n",
-- id, udev->name, is_running ? "inflight" : "queued");
-+static void tcmu_check_expired_queue_cmd(struct tcmu_cmd *cmd)
-+{
-+ struct se_cmd *se_cmd;
-
-- target_complete_cmd(se_cmd, scsi_status);
-- return 0;
-+ if (!time_after(jiffies, cmd->deadline))
-+ return;
-+
-+ pr_debug("Timing out queued cmd %p on dev %s.\n",
-+ cmd, cmd->tcmu_dev->name);
-+
-+ list_del_init(&cmd->queue_entry);
-+ se_cmd = cmd->se_cmd;
-+ tcmu_free_cmd(cmd);
-+
-+ target_complete_cmd(se_cmd, SAM_STAT_TASK_SET_FULL);
- }
-
- static void tcmu_device_timedout(struct tcmu_dev *udev)
-@@ -1407,16 +1384,15 @@ static struct se_device *tcmu_alloc_device(struct se_hba *hba, const char *name)
- return &udev->se_dev;
- }
-
--static bool run_qfull_queue(struct tcmu_dev *udev, bool fail)
-+static void run_qfull_queue(struct tcmu_dev *udev, bool fail)
- {
- struct tcmu_cmd *tcmu_cmd, *tmp_cmd;
- LIST_HEAD(cmds);
-- bool drained = true;
- sense_reason_t scsi_ret;
- int ret;
-
- if (list_empty(&udev->qfull_queue))
-- return true;
-+ return;
-
- pr_debug("running %s's cmdr queue forcefail %d\n", udev->name, fail);
-
-@@ -1425,11 +1401,10 @@ static bool run_qfull_queue(struct tcmu_dev *udev, bool fail)
- list_for_each_entry_safe(tcmu_cmd, tmp_cmd, &cmds, queue_entry) {
- list_del_init(&tcmu_cmd->queue_entry);
-
-- pr_debug("removing cmd %u on dev %s from queue\n",
-- tcmu_cmd->cmd_id, udev->name);
-+ pr_debug("removing cmd %p on dev %s from queue\n",
-+ tcmu_cmd, udev->name);
-
- if (fail) {
-- idr_remove(&udev->commands, tcmu_cmd->cmd_id);
- /*
- * We were not able to even start the command, so
- * fail with busy to allow a retry in case runner
-@@ -1444,10 +1419,8 @@ static bool run_qfull_queue(struct tcmu_dev *udev, bool fail)
-
- ret = queue_cmd_ring(tcmu_cmd, &scsi_ret);
- if (ret < 0) {
-- pr_debug("cmd %u on dev %s failed with %u\n",
-- tcmu_cmd->cmd_id, udev->name, scsi_ret);
--
-- idr_remove(&udev->commands, tcmu_cmd->cmd_id);
-+ pr_debug("cmd %p on dev %s failed with %u\n",
-+ tcmu_cmd, udev->name, scsi_ret);
- /*
- * Ignore scsi_ret for now. target_complete_cmd
- * drops it.
-@@ -1462,13 +1435,11 @@ static bool run_qfull_queue(struct tcmu_dev *udev, bool fail)
- * the queue
- */
- list_splice_tail(&cmds, &udev->qfull_queue);
-- drained = false;
- break;
- }
- }
-
- tcmu_set_next_deadline(&udev->qfull_queue, &udev->qfull_timer);
-- return drained;
- }
-
- static int tcmu_irqcontrol(struct uio_info *info, s32 irq_on)
-@@ -1652,6 +1623,8 @@ static void tcmu_dev_kref_release(struct kref *kref)
- if (tcmu_check_and_free_pending_cmd(cmd) != 0)
- all_expired = false;
- }
-+ if (!list_empty(&udev->qfull_queue))
-+ all_expired = false;
- idr_destroy(&udev->commands);
- WARN_ON(!all_expired);
-
-@@ -2037,9 +2010,6 @@ static void tcmu_reset_ring(struct tcmu_dev *udev, u8 err_level)
- mutex_lock(&udev->cmdr_lock);
-
- idr_for_each_entry(&udev->commands, cmd, i) {
-- if (!test_bit(TCMU_CMD_BIT_INFLIGHT, &cmd->flags))
-- continue;
--
- pr_debug("removing cmd %u on dev %s from ring (is expired %d)\n",
- cmd->cmd_id, udev->name,
- test_bit(TCMU_CMD_BIT_EXPIRED, &cmd->flags));
-@@ -2077,6 +2047,8 @@ static void tcmu_reset_ring(struct tcmu_dev *udev, u8 err_level)
-
- del_timer(&udev->cmd_timer);
-
-+ run_qfull_queue(udev, false);
-+
- mutex_unlock(&udev->cmdr_lock);
- }
-
-@@ -2698,6 +2670,7 @@ static void find_free_blocks(void)
- static void check_timedout_devices(void)
- {
- struct tcmu_dev *udev, *tmp_dev;
-+ struct tcmu_cmd *cmd, *tmp_cmd;
- LIST_HEAD(devs);
-
- spin_lock_bh(&timed_out_udevs_lock);
-@@ -2708,9 +2681,24 @@ static void check_timedout_devices(void)
- spin_unlock_bh(&timed_out_udevs_lock);
-
- mutex_lock(&udev->cmdr_lock);
-- idr_for_each(&udev->commands, tcmu_check_expired_cmd, NULL);
-
-- tcmu_set_next_deadline(&udev->inflight_queue, &udev->cmd_timer);
-+ /*
-+ * If cmd_time_out is disabled but qfull is set deadline
-+ * will only reflect the qfull timeout. Ignore it.
-+ */
-+ if (udev->cmd_time_out) {
-+ list_for_each_entry_safe(cmd, tmp_cmd,
-+ &udev->inflight_queue,
-+ queue_entry) {
-+ tcmu_check_expired_ring_cmd(cmd);
-+ }
-+ tcmu_set_next_deadline(&udev->inflight_queue,
-+ &udev->cmd_timer);
-+ }
-+ list_for_each_entry_safe(cmd, tmp_cmd, &udev->qfull_queue,
-+ queue_entry) {
-+ tcmu_check_expired_queue_cmd(cmd);
-+ }
- tcmu_set_next_deadline(&udev->qfull_queue, &udev->qfull_timer);
-
- mutex_unlock(&udev->cmdr_lock);
-diff --git a/drivers/thermal/ti-soc-thermal/ti-thermal-common.c b/drivers/thermal/ti-soc-thermal/ti-thermal-common.c
-index d3e959d01606..85776db4bf34 100644
---- a/drivers/thermal/ti-soc-thermal/ti-thermal-common.c
-+++ b/drivers/thermal/ti-soc-thermal/ti-thermal-common.c
-@@ -169,7 +169,7 @@ int ti_thermal_expose_sensor(struct ti_bandgap *bgp, int id,
-
- data = ti_bandgap_get_sensor_data(bgp, id);
-
-- if (!data || IS_ERR(data))
-+ if (!IS_ERR_OR_NULL(data))
- data = ti_thermal_build_data(bgp, id);
-
- if (!data)
-@@ -196,7 +196,7 @@ int ti_thermal_remove_sensor(struct ti_bandgap *bgp, int id)
-
- data = ti_bandgap_get_sensor_data(bgp, id);
-
-- if (data && data->ti_thermal) {
-+ if (!IS_ERR_OR_NULL(data) && data->ti_thermal) {
- if (data->our_zone)
- thermal_zone_device_unregister(data->ti_thermal);
- }
-@@ -262,7 +262,7 @@ int ti_thermal_unregister_cpu_cooling(struct ti_bandgap *bgp, int id)
-
- data = ti_bandgap_get_sensor_data(bgp, id);
-
-- if (data) {
-+ if (!IS_ERR_OR_NULL(data)) {
- cpufreq_cooling_unregister(data->cool_dev);
- if (data->policy)
- cpufreq_cpu_put(data->policy);
-diff --git a/drivers/tty/hvc/hvc_console.c b/drivers/tty/hvc/hvc_console.c
-index cdcc64ea2554..f8e43a6faea9 100644
---- a/drivers/tty/hvc/hvc_console.c
-+++ b/drivers/tty/hvc/hvc_console.c
-@@ -75,6 +75,8 @@ static LIST_HEAD(hvc_structs);
- */
- static DEFINE_MUTEX(hvc_structs_mutex);
-
-+/* Mutex to serialize hvc_open */
-+static DEFINE_MUTEX(hvc_open_mutex);
- /*
- * This value is used to assign a tty->index value to a hvc_struct based
- * upon order of exposure via hvc_probe(), when we can not match it to
-@@ -346,16 +348,24 @@ static int hvc_install(struct tty_driver *driver, struct tty_struct *tty)
- */
- static int hvc_open(struct tty_struct *tty, struct file * filp)
- {
-- struct hvc_struct *hp = tty->driver_data;
-+ struct hvc_struct *hp;
- unsigned long flags;
- int rc = 0;
-
-+ mutex_lock(&hvc_open_mutex);
-+
-+ hp = tty->driver_data;
-+ if (!hp) {
-+ rc = -EIO;
-+ goto out;
-+ }
-+
- spin_lock_irqsave(&hp->port.lock, flags);
- /* Check and then increment for fast path open. */
- if (hp->port.count++ > 0) {
- spin_unlock_irqrestore(&hp->port.lock, flags);
- hvc_kick();
-- return 0;
-+ goto out;
- } /* else count == 0 */
- spin_unlock_irqrestore(&hp->port.lock, flags);
-
-@@ -383,6 +393,8 @@ static int hvc_open(struct tty_struct *tty, struct file * filp)
- /* Force wakeup of the polling thread */
- hvc_kick();
-
-+out:
-+ mutex_unlock(&hvc_open_mutex);
- return rc;
- }
-
-diff --git a/drivers/tty/n_gsm.c b/drivers/tty/n_gsm.c
-index d77ed82a4840..f189579db7c4 100644
---- a/drivers/tty/n_gsm.c
-+++ b/drivers/tty/n_gsm.c
-@@ -673,11 +673,10 @@ static struct gsm_msg *gsm_data_alloc(struct gsm_mux *gsm, u8 addr, int len,
- * FIXME: lock against link layer control transmissions
- */
-
--static void gsm_data_kick(struct gsm_mux *gsm)
-+static void gsm_data_kick(struct gsm_mux *gsm, struct gsm_dlci *dlci)
- {
- struct gsm_msg *msg, *nmsg;
- int len;
-- int skip_sof = 0;
-
- list_for_each_entry_safe(msg, nmsg, &gsm->tx_list, list) {
- if (gsm->constipated && msg->addr)
-@@ -699,18 +698,23 @@ static void gsm_data_kick(struct gsm_mux *gsm)
- print_hex_dump_bytes("gsm_data_kick: ",
- DUMP_PREFIX_OFFSET,
- gsm->txframe, len);
--
-- if (gsm->output(gsm, gsm->txframe + skip_sof,
-- len - skip_sof) < 0)
-+ if (gsm->output(gsm, gsm->txframe, len) < 0)
- break;
- /* FIXME: Can eliminate one SOF in many more cases */
- gsm->tx_bytes -= msg->len;
-- /* For a burst of frames skip the extra SOF within the
-- burst */
-- skip_sof = 1;
-
- list_del(&msg->list);
- kfree(msg);
-+
-+ if (dlci) {
-+ tty_port_tty_wakeup(&dlci->port);
-+ } else {
-+ int i = 0;
-+
-+ for (i = 0; i < NUM_DLCI; i++)
-+ if (gsm->dlci[i])
-+ tty_port_tty_wakeup(&gsm->dlci[i]->port);
-+ }
- }
- }
-
-@@ -762,7 +766,7 @@ static void __gsm_data_queue(struct gsm_dlci *dlci, struct gsm_msg *msg)
- /* Add to the actual output queue */
- list_add_tail(&msg->list, &gsm->tx_list);
- gsm->tx_bytes += msg->len;
-- gsm_data_kick(gsm);
-+ gsm_data_kick(gsm, dlci);
- }
-
- /**
-@@ -1223,7 +1227,7 @@ static void gsm_control_message(struct gsm_mux *gsm, unsigned int command,
- gsm_control_reply(gsm, CMD_FCON, NULL, 0);
- /* Kick the link in case it is idling */
- spin_lock_irqsave(&gsm->tx_lock, flags);
-- gsm_data_kick(gsm);
-+ gsm_data_kick(gsm, NULL);
- spin_unlock_irqrestore(&gsm->tx_lock, flags);
- break;
- case CMD_FCOFF:
-@@ -2545,7 +2549,7 @@ static void gsmld_write_wakeup(struct tty_struct *tty)
- /* Queue poll */
- clear_bit(TTY_DO_WRITE_WAKEUP, &tty->flags);
- spin_lock_irqsave(&gsm->tx_lock, flags);
-- gsm_data_kick(gsm);
-+ gsm_data_kick(gsm, NULL);
- if (gsm->tx_bytes < TX_THRESH_LO) {
- gsm_dlci_data_sweep(gsm);
- }
-diff --git a/drivers/tty/serial/8250/8250_port.c b/drivers/tty/serial/8250/8250_port.c
-index f77bf820b7a3..4d83c85a7389 100644
---- a/drivers/tty/serial/8250/8250_port.c
-+++ b/drivers/tty/serial/8250/8250_port.c
-@@ -2615,6 +2615,8 @@ static unsigned int serial8250_get_baud_rate(struct uart_port *port,
- struct ktermios *termios,
- struct ktermios *old)
- {
-+ unsigned int tolerance = port->uartclk / 100;
-+
- /*
- * Ask the core to calculate the divisor for us.
- * Allow 1% tolerance at the upper limit so uart clks marginally
-@@ -2623,7 +2625,7 @@ static unsigned int serial8250_get_baud_rate(struct uart_port *port,
- */
- return uart_get_baud_rate(port, termios, old,
- port->uartclk / 16 / UART_DIV_MAX,
-- port->uartclk);
-+ (port->uartclk + tolerance) / 16);
- }
-
- void
-diff --git a/drivers/usb/cdns3/cdns3-ti.c b/drivers/usb/cdns3/cdns3-ti.c
-index 5685ba11480b..e701ab56b0a7 100644
---- a/drivers/usb/cdns3/cdns3-ti.c
-+++ b/drivers/usb/cdns3/cdns3-ti.c
-@@ -138,7 +138,7 @@ static int cdns_ti_probe(struct platform_device *pdev)
- error = pm_runtime_get_sync(dev);
- if (error < 0) {
- dev_err(dev, "pm_runtime_get_sync failed: %d\n", error);
-- goto err_get;
-+ goto err;
- }
-
- /* assert RESET */
-@@ -185,7 +185,6 @@ static int cdns_ti_probe(struct platform_device *pdev)
-
- err:
- pm_runtime_put_sync(data->dev);
--err_get:
- pm_runtime_disable(data->dev);
-
- return error;
-diff --git a/drivers/usb/class/usblp.c b/drivers/usb/class/usblp.c
-index 0d8e3f3804a3..084c48c5848f 100644
---- a/drivers/usb/class/usblp.c
-+++ b/drivers/usb/class/usblp.c
-@@ -468,7 +468,8 @@ static int usblp_release(struct inode *inode, struct file *file)
- usb_autopm_put_interface(usblp->intf);
-
- if (!usblp->present) /* finish cleanup from disconnect */
-- usblp_cleanup(usblp);
-+ usblp_cleanup(usblp); /* any URBs must be dead */
-+
- mutex_unlock(&usblp_mutex);
- return 0;
- }
-@@ -1375,9 +1376,11 @@ static void usblp_disconnect(struct usb_interface *intf)
-
- usblp_unlink_urbs(usblp);
- mutex_unlock(&usblp->mut);
-+ usb_poison_anchored_urbs(&usblp->urbs);
-
- if (!usblp->used)
- usblp_cleanup(usblp);
-+
- mutex_unlock(&usblp_mutex);
- }
-
-diff --git a/drivers/usb/dwc2/core_intr.c b/drivers/usb/dwc2/core_intr.c
-index 876ff31261d5..55f1d14fc414 100644
---- a/drivers/usb/dwc2/core_intr.c
-+++ b/drivers/usb/dwc2/core_intr.c
-@@ -416,10 +416,13 @@ static void dwc2_handle_wakeup_detected_intr(struct dwc2_hsotg *hsotg)
- if (ret && (ret != -ENOTSUPP))
- dev_err(hsotg->dev, "exit power_down failed\n");
-
-+ /* Change to L0 state */
-+ hsotg->lx_state = DWC2_L0;
- call_gadget(hsotg, resume);
-+ } else {
-+ /* Change to L0 state */
-+ hsotg->lx_state = DWC2_L0;
- }
-- /* Change to L0 state */
-- hsotg->lx_state = DWC2_L0;
- } else {
- if (hsotg->params.power_down)
- return;
-diff --git a/drivers/usb/dwc3/dwc3-meson-g12a.c b/drivers/usb/dwc3/dwc3-meson-g12a.c
-index b81d085bc534..eabb3bb6fcaa 100644
---- a/drivers/usb/dwc3/dwc3-meson-g12a.c
-+++ b/drivers/usb/dwc3/dwc3-meson-g12a.c
-@@ -505,7 +505,7 @@ static int dwc3_meson_g12a_probe(struct platform_device *pdev)
- if (IS_ERR(priv->reset)) {
- ret = PTR_ERR(priv->reset);
- dev_err(dev, "failed to get device reset, err=%d\n", ret);
-- return ret;
-+ goto err_disable_clks;
- }
-
- ret = reset_control_reset(priv->reset);
-@@ -525,7 +525,9 @@ static int dwc3_meson_g12a_probe(struct platform_device *pdev)
- /* Get dr_mode */
- priv->otg_mode = usb_get_dr_mode(dev);
-
-- dwc3_meson_g12a_usb_init(priv);
-+ ret = dwc3_meson_g12a_usb_init(priv);
-+ if (ret)
-+ goto err_disable_clks;
-
- /* Init PHYs */
- for (i = 0 ; i < PHY_COUNT ; ++i) {
-diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c
-index 585cb3deea7a..de3b92680935 100644
---- a/drivers/usb/dwc3/gadget.c
-+++ b/drivers/usb/dwc3/gadget.c
-@@ -1220,6 +1220,8 @@ static void dwc3_prepare_trbs(struct dwc3_ep *dep)
- }
- }
-
-+static void dwc3_gadget_ep_cleanup_cancelled_requests(struct dwc3_ep *dep);
-+
- static int __dwc3_gadget_kick_transfer(struct dwc3_ep *dep)
- {
- struct dwc3_gadget_ep_cmd_params params;
-@@ -1259,14 +1261,20 @@ static int __dwc3_gadget_kick_transfer(struct dwc3_ep *dep)
-
- ret = dwc3_send_gadget_ep_cmd(dep, cmd, &params);
- if (ret < 0) {
-- /*
-- * FIXME we need to iterate over the list of requests
-- * here and stop, unmap, free and del each of the linked
-- * requests instead of what we do now.
-- */
-- if (req->trb)
-- memset(req->trb, 0, sizeof(struct dwc3_trb));
-- dwc3_gadget_del_and_unmap_request(dep, req, ret);
-+ struct dwc3_request *tmp;
-+
-+ if (ret == -EAGAIN)
-+ return ret;
-+
-+ dwc3_stop_active_transfer(dep, true, true);
-+
-+ list_for_each_entry_safe(req, tmp, &dep->started_list, list)
-+ dwc3_gadget_move_cancelled_request(req);
-+
-+ /* If ep isn't started, then there's no end transfer pending */
-+ if (!(dep->flags & DWC3_EP_END_TRANSFER_PENDING))
-+ dwc3_gadget_ep_cleanup_cancelled_requests(dep);
-+
- return ret;
- }
-
-@@ -1508,6 +1516,10 @@ static void dwc3_gadget_ep_skip_trbs(struct dwc3_ep *dep, struct dwc3_request *r
- {
- int i;
-
-+ /* If req->trb is not set, then the request has not started */
-+ if (!req->trb)
-+ return;
-+
- /*
- * If request was already started, this means we had to
- * stop the transfer. With that we also need to ignore
-@@ -1598,6 +1610,8 @@ int __dwc3_gadget_ep_set_halt(struct dwc3_ep *dep, int value, int protocol)
- {
- struct dwc3_gadget_ep_cmd_params params;
- struct dwc3 *dwc = dep->dwc;
-+ struct dwc3_request *req;
-+ struct dwc3_request *tmp;
- int ret;
-
- if (usb_endpoint_xfer_isoc(dep->endpoint.desc)) {
-@@ -1634,13 +1648,37 @@ int __dwc3_gadget_ep_set_halt(struct dwc3_ep *dep, int value, int protocol)
- else
- dep->flags |= DWC3_EP_STALL;
- } else {
-+ /*
-+ * Don't issue CLEAR_STALL command to control endpoints. The
-+ * controller automatically clears the STALL when it receives
-+ * the SETUP token.
-+ */
-+ if (dep->number <= 1) {
-+ dep->flags &= ~(DWC3_EP_STALL | DWC3_EP_WEDGE);
-+ return 0;
-+ }
-
- ret = dwc3_send_clear_stall_ep_cmd(dep);
-- if (ret)
-+ if (ret) {
- dev_err(dwc->dev, "failed to clear STALL on %s\n",
- dep->name);
-- else
-- dep->flags &= ~(DWC3_EP_STALL | DWC3_EP_WEDGE);
-+ return ret;
-+ }
-+
-+ dep->flags &= ~(DWC3_EP_STALL | DWC3_EP_WEDGE);
-+
-+ dwc3_stop_active_transfer(dep, true, true);
-+
-+ list_for_each_entry_safe(req, tmp, &dep->started_list, list)
-+ dwc3_gadget_move_cancelled_request(req);
-+
-+ list_for_each_entry_safe(req, tmp, &dep->pending_list, list)
-+ dwc3_gadget_move_cancelled_request(req);
-+
-+ if (!(dep->flags & DWC3_EP_END_TRANSFER_PENDING)) {
-+ dep->flags &= ~DWC3_EP_DELAY_START;
-+ dwc3_gadget_ep_cleanup_cancelled_requests(dep);
-+ }
- }
-
- return ret;
-diff --git a/drivers/usb/gadget/composite.c b/drivers/usb/gadget/composite.c
-index cb4950cf1cdc..5c1eb96a5c57 100644
---- a/drivers/usb/gadget/composite.c
-+++ b/drivers/usb/gadget/composite.c
-@@ -96,40 +96,43 @@ function_descriptors(struct usb_function *f,
- }
-
- /**
-- * next_ep_desc() - advance to the next EP descriptor
-+ * next_desc() - advance to the next desc_type descriptor
- * @t: currect pointer within descriptor array
-+ * @desc_type: descriptor type
- *
-- * Return: next EP descriptor or NULL
-+ * Return: next desc_type descriptor or NULL
- *
-- * Iterate over @t until either EP descriptor found or
-+ * Iterate over @t until either desc_type descriptor found or
- * NULL (that indicates end of list) encountered
- */
- static struct usb_descriptor_header**
--next_ep_desc(struct usb_descriptor_header **t)
-+next_desc(struct usb_descriptor_header **t, u8 desc_type)
- {
- for (; *t; t++) {
-- if ((*t)->bDescriptorType == USB_DT_ENDPOINT)
-+ if ((*t)->bDescriptorType == desc_type)
- return t;
- }
- return NULL;
- }
-
- /*
-- * for_each_ep_desc()- iterate over endpoint descriptors in the
-- * descriptors list
-- * @start: pointer within descriptor array.
-- * @ep_desc: endpoint descriptor to use as the loop cursor
-+ * for_each_desc() - iterate over desc_type descriptors in the
-+ * descriptors list
-+ * @start: pointer within descriptor array.
-+ * @iter_desc: desc_type descriptor to use as the loop cursor
-+ * @desc_type: wanted descriptr type
- */
--#define for_each_ep_desc(start, ep_desc) \
-- for (ep_desc = next_ep_desc(start); \
-- ep_desc; ep_desc = next_ep_desc(ep_desc+1))
-+#define for_each_desc(start, iter_desc, desc_type) \
-+ for (iter_desc = next_desc(start, desc_type); \
-+ iter_desc; iter_desc = next_desc(iter_desc + 1, desc_type))
-
- /**
-- * config_ep_by_speed() - configures the given endpoint
-+ * config_ep_by_speed_and_alt() - configures the given endpoint
- * according to gadget speed.
- * @g: pointer to the gadget
- * @f: usb function
- * @_ep: the endpoint to configure
-+ * @alt: alternate setting number
- *
- * Return: error code, 0 on success
- *
-@@ -142,11 +145,13 @@ next_ep_desc(struct usb_descriptor_header **t)
- * Note: the supplied function should hold all the descriptors
- * for supported speeds
- */
--int config_ep_by_speed(struct usb_gadget *g,
-- struct usb_function *f,
-- struct usb_ep *_ep)
-+int config_ep_by_speed_and_alt(struct usb_gadget *g,
-+ struct usb_function *f,
-+ struct usb_ep *_ep,
-+ u8 alt)
- {
- struct usb_endpoint_descriptor *chosen_desc = NULL;
-+ struct usb_interface_descriptor *int_desc = NULL;
- struct usb_descriptor_header **speed_desc = NULL;
-
- struct usb_ss_ep_comp_descriptor *comp_desc = NULL;
-@@ -182,8 +187,21 @@ int config_ep_by_speed(struct usb_gadget *g,
- default:
- speed_desc = f->fs_descriptors;
- }
-+
-+ /* find correct alternate setting descriptor */
-+ for_each_desc(speed_desc, d_spd, USB_DT_INTERFACE) {
-+ int_desc = (struct usb_interface_descriptor *)*d_spd;
-+
-+ if (int_desc->bAlternateSetting == alt) {
-+ speed_desc = d_spd;
-+ goto intf_found;
-+ }
-+ }
-+ return -EIO;
-+
-+intf_found:
- /* find descriptors */
-- for_each_ep_desc(speed_desc, d_spd) {
-+ for_each_desc(speed_desc, d_spd, USB_DT_ENDPOINT) {
- chosen_desc = (struct usb_endpoint_descriptor *)*d_spd;
- if (chosen_desc->bEndpointAddress == _ep->address)
- goto ep_found;
-@@ -237,6 +255,32 @@ ep_found:
- }
- return 0;
- }
-+EXPORT_SYMBOL_GPL(config_ep_by_speed_and_alt);
-+
-+/**
-+ * config_ep_by_speed() - configures the given endpoint
-+ * according to gadget speed.
-+ * @g: pointer to the gadget
-+ * @f: usb function
-+ * @_ep: the endpoint to configure
-+ *
-+ * Return: error code, 0 on success
-+ *
-+ * This function chooses the right descriptors for a given
-+ * endpoint according to gadget speed and saves it in the
-+ * endpoint desc field. If the endpoint already has a descriptor
-+ * assigned to it - overwrites it with currently corresponding
-+ * descriptor. The endpoint maxpacket field is updated according
-+ * to the chosen descriptor.
-+ * Note: the supplied function should hold all the descriptors
-+ * for supported speeds
-+ */
-+int config_ep_by_speed(struct usb_gadget *g,
-+ struct usb_function *f,
-+ struct usb_ep *_ep)
-+{
-+ return config_ep_by_speed_and_alt(g, f, _ep, 0);
-+}
- EXPORT_SYMBOL_GPL(config_ep_by_speed);
-
- /**
-diff --git a/drivers/usb/gadget/udc/core.c b/drivers/usb/gadget/udc/core.c
-index 9b11046480fe..2e28dde8376f 100644
---- a/drivers/usb/gadget/udc/core.c
-+++ b/drivers/usb/gadget/udc/core.c
-@@ -1297,6 +1297,8 @@ static void usb_gadget_remove_driver(struct usb_udc *udc)
- kobject_uevent(&udc->dev.kobj, KOBJ_CHANGE);
-
- usb_gadget_disconnect(udc->gadget);
-+ if (udc->gadget->irq)
-+ synchronize_irq(udc->gadget->irq);
- udc->driver->unbind(udc->gadget);
- usb_gadget_udc_stop(udc);
-
-diff --git a/drivers/usb/gadget/udc/lpc32xx_udc.c b/drivers/usb/gadget/udc/lpc32xx_udc.c
-index cb997b82c008..465d0b7c6522 100644
---- a/drivers/usb/gadget/udc/lpc32xx_udc.c
-+++ b/drivers/usb/gadget/udc/lpc32xx_udc.c
-@@ -1614,17 +1614,17 @@ static int lpc32xx_ep_enable(struct usb_ep *_ep,
- const struct usb_endpoint_descriptor *desc)
- {
- struct lpc32xx_ep *ep = container_of(_ep, struct lpc32xx_ep, ep);
-- struct lpc32xx_udc *udc = ep->udc;
-+ struct lpc32xx_udc *udc;
- u16 maxpacket;
- u32 tmp;
- unsigned long flags;
-
- /* Verify EP data */
- if ((!_ep) || (!ep) || (!desc) ||
-- (desc->bDescriptorType != USB_DT_ENDPOINT)) {
-- dev_dbg(udc->dev, "bad ep or descriptor\n");
-+ (desc->bDescriptorType != USB_DT_ENDPOINT))
- return -EINVAL;
-- }
-+
-+ udc = ep->udc;
- maxpacket = usb_endpoint_maxp(desc);
- if ((maxpacket == 0) || (maxpacket > ep->maxpacket)) {
- dev_dbg(udc->dev, "bad ep descriptor's packet size\n");
-@@ -1872,7 +1872,7 @@ static int lpc32xx_ep_dequeue(struct usb_ep *_ep, struct usb_request *_req)
- static int lpc32xx_ep_set_halt(struct usb_ep *_ep, int value)
- {
- struct lpc32xx_ep *ep = container_of(_ep, struct lpc32xx_ep, ep);
-- struct lpc32xx_udc *udc = ep->udc;
-+ struct lpc32xx_udc *udc;
- unsigned long flags;
-
- if ((!ep) || (ep->hwep_num <= 1))
-@@ -1882,6 +1882,7 @@ static int lpc32xx_ep_set_halt(struct usb_ep *_ep, int value)
- if (ep->is_in)
- return -EAGAIN;
-
-+ udc = ep->udc;
- spin_lock_irqsave(&udc->lock, flags);
-
- if (value == 1) {
-diff --git a/drivers/usb/gadget/udc/m66592-udc.c b/drivers/usb/gadget/udc/m66592-udc.c
-index 75d16a8902e6..931e6362a13d 100644
---- a/drivers/usb/gadget/udc/m66592-udc.c
-+++ b/drivers/usb/gadget/udc/m66592-udc.c
-@@ -1667,7 +1667,7 @@ static int m66592_probe(struct platform_device *pdev)
-
- err_add_udc:
- m66592_free_request(&m66592->ep[0].ep, m66592->ep0_req);
--
-+ m66592->ep0_req = NULL;
- clean_up3:
- if (m66592->pdata->on_chip) {
- clk_disable(m66592->clk);
-diff --git a/drivers/usb/gadget/udc/s3c2410_udc.c b/drivers/usb/gadget/udc/s3c2410_udc.c
-index 0507a2ca0f55..80002d97b59d 100644
---- a/drivers/usb/gadget/udc/s3c2410_udc.c
-+++ b/drivers/usb/gadget/udc/s3c2410_udc.c
-@@ -251,10 +251,6 @@ static void s3c2410_udc_done(struct s3c2410_ep *ep,
- static void s3c2410_udc_nuke(struct s3c2410_udc *udc,
- struct s3c2410_ep *ep, int status)
- {
-- /* Sanity check */
-- if (&ep->queue == NULL)
-- return;
--
- while (!list_empty(&ep->queue)) {
- struct s3c2410_request *req;
- req = list_entry(ep->queue.next, struct s3c2410_request,
-diff --git a/drivers/usb/host/ehci-mxc.c b/drivers/usb/host/ehci-mxc.c
-index c9f91e6c72b6..7f65c86047dd 100644
---- a/drivers/usb/host/ehci-mxc.c
-+++ b/drivers/usb/host/ehci-mxc.c
-@@ -50,6 +50,8 @@ static int ehci_mxc_drv_probe(struct platform_device *pdev)
- }
-
- irq = platform_get_irq(pdev, 0);
-+ if (irq < 0)
-+ return irq;
-
- hcd = usb_create_hcd(&ehci_mxc_hc_driver, dev, dev_name(dev));
- if (!hcd)
-diff --git a/drivers/usb/host/ehci-platform.c b/drivers/usb/host/ehci-platform.c
-index e4fc3f66d43b..e9a49007cce4 100644
---- a/drivers/usb/host/ehci-platform.c
-+++ b/drivers/usb/host/ehci-platform.c
-@@ -455,6 +455,10 @@ static int ehci_platform_resume(struct device *dev)
-
- ehci_resume(hcd, priv->reset_on_resume);
-
-+ pm_runtime_disable(dev);
-+ pm_runtime_set_active(dev);
-+ pm_runtime_enable(dev);
-+
- if (priv->quirk_poll)
- quirk_poll_init(priv);
-
-diff --git a/drivers/usb/host/ohci-platform.c b/drivers/usb/host/ohci-platform.c
-index 7addfc2cbadc..4a8456f12a73 100644
---- a/drivers/usb/host/ohci-platform.c
-+++ b/drivers/usb/host/ohci-platform.c
-@@ -299,6 +299,11 @@ static int ohci_platform_resume(struct device *dev)
- }
-
- ohci_resume(hcd, false);
-+
-+ pm_runtime_disable(dev);
-+ pm_runtime_set_active(dev);
-+ pm_runtime_enable(dev);
-+
- return 0;
- }
- #endif /* CONFIG_PM_SLEEP */
-diff --git a/drivers/usb/host/ohci-sm501.c b/drivers/usb/host/ohci-sm501.c
-index c158cda9e4b9..cff965240327 100644
---- a/drivers/usb/host/ohci-sm501.c
-+++ b/drivers/usb/host/ohci-sm501.c
-@@ -157,9 +157,10 @@ static int ohci_hcd_sm501_drv_probe(struct platform_device *pdev)
- * the call to usb_hcd_setup_local_mem() below does just that.
- */
-
-- if (usb_hcd_setup_local_mem(hcd, mem->start,
-- mem->start - mem->parent->start,
-- resource_size(mem)) < 0)
-+ retval = usb_hcd_setup_local_mem(hcd, mem->start,
-+ mem->start - mem->parent->start,
-+ resource_size(mem));
-+ if (retval < 0)
- goto err5;
- retval = usb_add_hcd(hcd, irq, IRQF_SHARED);
- if (retval)
-diff --git a/drivers/usb/host/xhci-plat.c b/drivers/usb/host/xhci-plat.c
-index ea460b9682d5..ca82e2c61ddc 100644
---- a/drivers/usb/host/xhci-plat.c
-+++ b/drivers/usb/host/xhci-plat.c
-@@ -409,7 +409,15 @@ static int __maybe_unused xhci_plat_resume(struct device *dev)
- if (ret)
- return ret;
-
-- return xhci_resume(xhci, 0);
-+ ret = xhci_resume(xhci, 0);
-+ if (ret)
-+ return ret;
-+
-+ pm_runtime_disable(dev);
-+ pm_runtime_set_active(dev);
-+ pm_runtime_enable(dev);
-+
-+ return 0;
- }
-
- static int __maybe_unused xhci_plat_runtime_suspend(struct device *dev)
-diff --git a/drivers/usb/roles/class.c b/drivers/usb/roles/class.c
-index 5b17709821df..27d92af29635 100644
---- a/drivers/usb/roles/class.c
-+++ b/drivers/usb/roles/class.c
-@@ -49,8 +49,10 @@ int usb_role_switch_set_role(struct usb_role_switch *sw, enum usb_role role)
- mutex_lock(&sw->lock);
-
- ret = sw->set(sw, role);
-- if (!ret)
-+ if (!ret) {
- sw->role = role;
-+ kobject_uevent(&sw->dev.kobj, KOBJ_CHANGE);
-+ }
-
- mutex_unlock(&sw->lock);
-
-diff --git a/drivers/vfio/mdev/mdev_sysfs.c b/drivers/vfio/mdev/mdev_sysfs.c
-index 8ad14e5c02bf..917fd84c1c6f 100644
---- a/drivers/vfio/mdev/mdev_sysfs.c
-+++ b/drivers/vfio/mdev/mdev_sysfs.c
-@@ -110,7 +110,7 @@ static struct mdev_type *add_mdev_supported_type(struct mdev_parent *parent,
- "%s-%s", dev_driver_string(parent->dev),
- group->name);
- if (ret) {
-- kfree(type);
-+ kobject_put(&type->kobj);
- return ERR_PTR(ret);
- }
-
-diff --git a/drivers/vfio/pci/vfio_pci_config.c b/drivers/vfio/pci/vfio_pci_config.c
-index 90c0b80f8acf..814bcbe0dd4e 100644
---- a/drivers/vfio/pci/vfio_pci_config.c
-+++ b/drivers/vfio/pci/vfio_pci_config.c
-@@ -1462,7 +1462,12 @@ static int vfio_cap_init(struct vfio_pci_device *vdev)
- if (ret)
- return ret;
-
-- if (cap <= PCI_CAP_ID_MAX) {
-+ /*
-+ * ID 0 is a NULL capability, conflicting with our fake
-+ * PCI_CAP_ID_BASIC. As it has no content, consider it
-+ * hidden for now.
-+ */
-+ if (cap && cap <= PCI_CAP_ID_MAX) {
- len = pci_cap_length[cap];
- if (len == 0xFF) { /* Variable length */
- len = vfio_cap_len(vdev, cap, pos);
-@@ -1728,8 +1733,11 @@ void vfio_config_free(struct vfio_pci_device *vdev)
- vdev->vconfig = NULL;
- kfree(vdev->pci_config_map);
- vdev->pci_config_map = NULL;
-- kfree(vdev->msi_perm);
-- vdev->msi_perm = NULL;
-+ if (vdev->msi_perm) {
-+ free_perm_bits(vdev->msi_perm);
-+ kfree(vdev->msi_perm);
-+ vdev->msi_perm = NULL;
-+ }
- }
-
- /*
-diff --git a/drivers/vhost/scsi.c b/drivers/vhost/scsi.c
-index c39952243fd3..8b104f76f324 100644
---- a/drivers/vhost/scsi.c
-+++ b/drivers/vhost/scsi.c
-@@ -2280,6 +2280,7 @@ static struct configfs_attribute *vhost_scsi_wwn_attrs[] = {
- static const struct target_core_fabric_ops vhost_scsi_ops = {
- .module = THIS_MODULE,
- .fabric_name = "vhost",
-+ .max_data_sg_nents = VHOST_SCSI_PREALLOC_SGLS,
- .tpg_get_wwn = vhost_scsi_get_fabric_wwn,
- .tpg_get_tag = vhost_scsi_get_tpgt,
- .tpg_check_demo_mode = vhost_scsi_check_true,
-diff --git a/drivers/video/backlight/lp855x_bl.c b/drivers/video/backlight/lp855x_bl.c
-index f68920131a4a..e94932c69f54 100644
---- a/drivers/video/backlight/lp855x_bl.c
-+++ b/drivers/video/backlight/lp855x_bl.c
-@@ -456,7 +456,7 @@ static int lp855x_probe(struct i2c_client *cl, const struct i2c_device_id *id)
- ret = regulator_enable(lp->enable);
- if (ret < 0) {
- dev_err(lp->dev, "failed to enable vddio: %d\n", ret);
-- return ret;
-+ goto disable_supply;
- }
-
- /*
-@@ -471,24 +471,34 @@ static int lp855x_probe(struct i2c_client *cl, const struct i2c_device_id *id)
- ret = lp855x_configure(lp);
- if (ret) {
- dev_err(lp->dev, "device config err: %d", ret);
-- return ret;
-+ goto disable_vddio;
- }
-
- ret = lp855x_backlight_register(lp);
- if (ret) {
- dev_err(lp->dev,
- "failed to register backlight. err: %d\n", ret);
-- return ret;
-+ goto disable_vddio;
- }
-
- ret = sysfs_create_group(&lp->dev->kobj, &lp855x_attr_group);
- if (ret) {
- dev_err(lp->dev, "failed to register sysfs. err: %d\n", ret);
-- return ret;
-+ goto disable_vddio;
- }
-
- backlight_update_status(lp->bl);
-+
- return 0;
-+
-+disable_vddio:
-+ if (lp->enable)
-+ regulator_disable(lp->enable);
-+disable_supply:
-+ if (lp->supply)
-+ regulator_disable(lp->supply);
-+
-+ return ret;
- }
-
- static int lp855x_remove(struct i2c_client *cl)
-@@ -497,6 +507,8 @@ static int lp855x_remove(struct i2c_client *cl)
-
- lp->bl->props.brightness = 0;
- backlight_update_status(lp->bl);
-+ if (lp->enable)
-+ regulator_disable(lp->enable);
- if (lp->supply)
- regulator_disable(lp->supply);
- sysfs_remove_group(&lp->dev->kobj, &lp855x_attr_group);
-diff --git a/drivers/watchdog/da9062_wdt.c b/drivers/watchdog/da9062_wdt.c
-index 0ad15d55071c..18dec438d518 100644
---- a/drivers/watchdog/da9062_wdt.c
-+++ b/drivers/watchdog/da9062_wdt.c
-@@ -58,11 +58,6 @@ static int da9062_wdt_update_timeout_register(struct da9062_watchdog *wdt,
- unsigned int regval)
- {
- struct da9062 *chip = wdt->hw;
-- int ret;
--
-- ret = da9062_reset_watchdog_timer(wdt);
-- if (ret)
-- return ret;
-
- regmap_update_bits(chip->regmap,
- DA9062AA_CONTROL_D,
-diff --git a/drivers/xen/cpu_hotplug.c b/drivers/xen/cpu_hotplug.c
-index ec975decb5de..b96b11e2b571 100644
---- a/drivers/xen/cpu_hotplug.c
-+++ b/drivers/xen/cpu_hotplug.c
-@@ -93,10 +93,8 @@ static int setup_cpu_watcher(struct notifier_block *notifier,
- (void)register_xenbus_watch(&cpu_watch);
-
- for_each_possible_cpu(cpu) {
-- if (vcpu_online(cpu) == 0) {
-- device_offline(get_cpu_device(cpu));
-- set_cpu_present(cpu, false);
-- }
-+ if (vcpu_online(cpu) == 0)
-+ disable_hotplug_cpu(cpu);
- }
-
- return NOTIFY_DONE;
-@@ -119,5 +117,5 @@ static int __init setup_vcpu_hotplug_event(void)
- return 0;
- }
-
--arch_initcall(setup_vcpu_hotplug_event);
-+late_initcall(setup_vcpu_hotplug_event);
-
-diff --git a/fs/afs/cmservice.c b/fs/afs/cmservice.c
-index 380ad5ace7cf..3a9b8b1f5f2b 100644
---- a/fs/afs/cmservice.c
-+++ b/fs/afs/cmservice.c
-@@ -305,8 +305,7 @@ static int afs_deliver_cb_callback(struct afs_call *call)
- call->count = ntohl(call->tmp);
- _debug("FID count: %u", call->count);
- if (call->count > AFSCBMAX)
-- return afs_protocol_error(call, -EBADMSG,
-- afs_eproto_cb_fid_count);
-+ return afs_protocol_error(call, afs_eproto_cb_fid_count);
-
- call->buffer = kmalloc(array3_size(call->count, 3, 4),
- GFP_KERNEL);
-@@ -351,8 +350,7 @@ static int afs_deliver_cb_callback(struct afs_call *call)
- call->count2 = ntohl(call->tmp);
- _debug("CB count: %u", call->count2);
- if (call->count2 != call->count && call->count2 != 0)
-- return afs_protocol_error(call, -EBADMSG,
-- afs_eproto_cb_count);
-+ return afs_protocol_error(call, afs_eproto_cb_count);
- call->iter = &call->def_iter;
- iov_iter_discard(&call->def_iter, READ, call->count2 * 3 * 4);
- call->unmarshall++;
-@@ -672,8 +670,7 @@ static int afs_deliver_yfs_cb_callback(struct afs_call *call)
- call->count = ntohl(call->tmp);
- _debug("FID count: %u", call->count);
- if (call->count > YFSCBMAX)
-- return afs_protocol_error(call, -EBADMSG,
-- afs_eproto_cb_fid_count);
-+ return afs_protocol_error(call, afs_eproto_cb_fid_count);
-
- size = array_size(call->count, sizeof(struct yfs_xdr_YFSFid));
- call->buffer = kmalloc(size, GFP_KERNEL);
-diff --git a/fs/afs/dir.c b/fs/afs/dir.c
-index d1e1caa23c8b..3c486340b220 100644
---- a/fs/afs/dir.c
-+++ b/fs/afs/dir.c
-@@ -658,7 +658,8 @@ static struct inode *afs_do_lookup(struct inode *dir, struct dentry *dentry,
-
- cookie->ctx.actor = afs_lookup_filldir;
- cookie->name = dentry->d_name;
-- cookie->nr_fids = 1; /* slot 0 is saved for the fid we actually want */
-+ cookie->nr_fids = 2; /* slot 0 is saved for the fid we actually want
-+ * and slot 1 for the directory */
-
- read_seqlock_excl(&dvnode->cb_lock);
- dcbi = rcu_dereference_protected(dvnode->cb_interest,
-@@ -709,7 +710,11 @@ static struct inode *afs_do_lookup(struct inode *dir, struct dentry *dentry,
- if (!cookie->inodes)
- goto out_s;
-
-- for (i = 1; i < cookie->nr_fids; i++) {
-+ cookie->fids[1] = dvnode->fid;
-+ cookie->statuses[1].cb_break = afs_calc_vnode_cb_break(dvnode);
-+ cookie->inodes[1] = igrab(&dvnode->vfs_inode);
-+
-+ for (i = 2; i < cookie->nr_fids; i++) {
- scb = &cookie->statuses[i];
-
- /* Find any inodes that already exist and get their
-diff --git a/fs/afs/fsclient.c b/fs/afs/fsclient.c
-index d2b3798c1932..7bca0c13d0c4 100644
---- a/fs/afs/fsclient.c
-+++ b/fs/afs/fsclient.c
-@@ -56,16 +56,15 @@ static void xdr_dump_bad(const __be32 *bp)
- /*
- * decode an AFSFetchStatus block
- */
--static int xdr_decode_AFSFetchStatus(const __be32 **_bp,
-- struct afs_call *call,
-- struct afs_status_cb *scb)
-+static void xdr_decode_AFSFetchStatus(const __be32 **_bp,
-+ struct afs_call *call,
-+ struct afs_status_cb *scb)
- {
- const struct afs_xdr_AFSFetchStatus *xdr = (const void *)*_bp;
- struct afs_file_status *status = &scb->status;
- bool inline_error = (call->operation_ID == afs_FS_InlineBulkStatus);
- u64 data_version, size;
- u32 type, abort_code;
-- int ret;
-
- abort_code = ntohl(xdr->abort_code);
-
-@@ -79,7 +78,7 @@ static int xdr_decode_AFSFetchStatus(const __be32 **_bp,
- */
- status->abort_code = abort_code;
- scb->have_error = true;
-- goto good;
-+ goto advance;
- }
-
- pr_warn("Unknown AFSFetchStatus version %u\n", ntohl(xdr->if_version));
-@@ -89,7 +88,7 @@ static int xdr_decode_AFSFetchStatus(const __be32 **_bp,
- if (abort_code != 0 && inline_error) {
- status->abort_code = abort_code;
- scb->have_error = true;
-- goto good;
-+ goto advance;
- }
-
- type = ntohl(xdr->type);
-@@ -125,15 +124,13 @@ static int xdr_decode_AFSFetchStatus(const __be32 **_bp,
- data_version |= (u64)ntohl(xdr->data_version_hi) << 32;
- status->data_version = data_version;
- scb->have_status = true;
--good:
-- ret = 0;
- advance:
- *_bp = (const void *)*_bp + sizeof(*xdr);
-- return ret;
-+ return;
-
- bad:
- xdr_dump_bad(*_bp);
-- ret = afs_protocol_error(call, -EBADMSG, afs_eproto_bad_status);
-+ afs_protocol_error(call, afs_eproto_bad_status);
- goto advance;
- }
-
-@@ -254,9 +251,7 @@ static int afs_deliver_fs_fetch_status_vnode(struct afs_call *call)
-
- /* unmarshall the reply once we've received all of it */
- bp = call->buffer;
-- ret = xdr_decode_AFSFetchStatus(&bp, call, call->out_scb);
-- if (ret < 0)
-- return ret;
-+ xdr_decode_AFSFetchStatus(&bp, call, call->out_scb);
- xdr_decode_AFSCallBack(&bp, call, call->out_scb);
- xdr_decode_AFSVolSync(&bp, call->out_volsync);
-
-@@ -419,9 +414,7 @@ static int afs_deliver_fs_fetch_data(struct afs_call *call)
- return ret;
-
- bp = call->buffer;
-- ret = xdr_decode_AFSFetchStatus(&bp, call, call->out_scb);
-- if (ret < 0)
-- return ret;
-+ xdr_decode_AFSFetchStatus(&bp, call, call->out_scb);
- xdr_decode_AFSCallBack(&bp, call, call->out_scb);
- xdr_decode_AFSVolSync(&bp, call->out_volsync);
-
-@@ -577,12 +570,8 @@ static int afs_deliver_fs_create_vnode(struct afs_call *call)
- /* unmarshall the reply once we've received all of it */
- bp = call->buffer;
- xdr_decode_AFSFid(&bp, call->out_fid);
-- ret = xdr_decode_AFSFetchStatus(&bp, call, call->out_scb);
-- if (ret < 0)
-- return ret;
-- ret = xdr_decode_AFSFetchStatus(&bp, call, call->out_dir_scb);
-- if (ret < 0)
-- return ret;
-+ xdr_decode_AFSFetchStatus(&bp, call, call->out_scb);
-+ xdr_decode_AFSFetchStatus(&bp, call, call->out_dir_scb);
- xdr_decode_AFSCallBack(&bp, call, call->out_scb);
- xdr_decode_AFSVolSync(&bp, call->out_volsync);
-
-@@ -691,9 +680,7 @@ static int afs_deliver_fs_dir_status_and_vol(struct afs_call *call)
-
- /* unmarshall the reply once we've received all of it */
- bp = call->buffer;
-- ret = xdr_decode_AFSFetchStatus(&bp, call, call->out_dir_scb);
-- if (ret < 0)
-- return ret;
-+ xdr_decode_AFSFetchStatus(&bp, call, call->out_dir_scb);
- xdr_decode_AFSVolSync(&bp, call->out_volsync);
-
- _leave(" = 0 [done]");
-@@ -784,12 +771,8 @@ static int afs_deliver_fs_link(struct afs_call *call)
-
- /* unmarshall the reply once we've received all of it */
- bp = call->buffer;
-- ret = xdr_decode_AFSFetchStatus(&bp, call, call->out_scb);
-- if (ret < 0)
-- return ret;
-- ret = xdr_decode_AFSFetchStatus(&bp, call, call->out_dir_scb);
-- if (ret < 0)
-- return ret;
-+ xdr_decode_AFSFetchStatus(&bp, call, call->out_scb);
-+ xdr_decode_AFSFetchStatus(&bp, call, call->out_dir_scb);
- xdr_decode_AFSVolSync(&bp, call->out_volsync);
-
- _leave(" = 0 [done]");
-@@ -878,12 +861,8 @@ static int afs_deliver_fs_symlink(struct afs_call *call)
- /* unmarshall the reply once we've received all of it */
- bp = call->buffer;
- xdr_decode_AFSFid(&bp, call->out_fid);
-- ret = xdr_decode_AFSFetchStatus(&bp, call, call->out_scb);
-- if (ret < 0)
-- return ret;
-- ret = xdr_decode_AFSFetchStatus(&bp, call, call->out_dir_scb);
-- if (ret < 0)
-- return ret;
-+ xdr_decode_AFSFetchStatus(&bp, call, call->out_scb);
-+ xdr_decode_AFSFetchStatus(&bp, call, call->out_dir_scb);
- xdr_decode_AFSVolSync(&bp, call->out_volsync);
-
- _leave(" = 0 [done]");
-@@ -986,16 +965,12 @@ static int afs_deliver_fs_rename(struct afs_call *call)
- if (ret < 0)
- return ret;
-
-+ bp = call->buffer;
- /* If the two dirs are the same, we have two copies of the same status
- * report, so we just decode it twice.
- */
-- bp = call->buffer;
-- ret = xdr_decode_AFSFetchStatus(&bp, call, call->out_dir_scb);
-- if (ret < 0)
-- return ret;
-- ret = xdr_decode_AFSFetchStatus(&bp, call, call->out_scb);
-- if (ret < 0)
-- return ret;
-+ xdr_decode_AFSFetchStatus(&bp, call, call->out_dir_scb);
-+ xdr_decode_AFSFetchStatus(&bp, call, call->out_scb);
- xdr_decode_AFSVolSync(&bp, call->out_volsync);
-
- _leave(" = 0 [done]");
-@@ -1103,9 +1078,7 @@ static int afs_deliver_fs_store_data(struct afs_call *call)
-
- /* unmarshall the reply once we've received all of it */
- bp = call->buffer;
-- ret = xdr_decode_AFSFetchStatus(&bp, call, call->out_scb);
-- if (ret < 0)
-- return ret;
-+ xdr_decode_AFSFetchStatus(&bp, call, call->out_scb);
- xdr_decode_AFSVolSync(&bp, call->out_volsync);
-
- _leave(" = 0 [done]");
-@@ -1283,9 +1256,7 @@ static int afs_deliver_fs_store_status(struct afs_call *call)
-
- /* unmarshall the reply once we've received all of it */
- bp = call->buffer;
-- ret = xdr_decode_AFSFetchStatus(&bp, call, call->out_scb);
-- if (ret < 0)
-- return ret;
-+ xdr_decode_AFSFetchStatus(&bp, call, call->out_scb);
- xdr_decode_AFSVolSync(&bp, call->out_volsync);
-
- _leave(" = 0 [done]");
-@@ -1499,8 +1470,7 @@ static int afs_deliver_fs_get_volume_status(struct afs_call *call)
- call->count = ntohl(call->tmp);
- _debug("volname length: %u", call->count);
- if (call->count >= AFSNAMEMAX)
-- return afs_protocol_error(call, -EBADMSG,
-- afs_eproto_volname_len);
-+ return afs_protocol_error(call, afs_eproto_volname_len);
- size = (call->count + 3) & ~3; /* It's padded */
- afs_extract_to_buf(call, size);
- call->unmarshall++;
-@@ -1529,8 +1499,7 @@ static int afs_deliver_fs_get_volume_status(struct afs_call *call)
- call->count = ntohl(call->tmp);
- _debug("offline msg length: %u", call->count);
- if (call->count >= AFSNAMEMAX)
-- return afs_protocol_error(call, -EBADMSG,
-- afs_eproto_offline_msg_len);
-+ return afs_protocol_error(call, afs_eproto_offline_msg_len);
- size = (call->count + 3) & ~3; /* It's padded */
- afs_extract_to_buf(call, size);
- call->unmarshall++;
-@@ -1560,8 +1529,7 @@ static int afs_deliver_fs_get_volume_status(struct afs_call *call)
- call->count = ntohl(call->tmp);
- _debug("motd length: %u", call->count);
- if (call->count >= AFSNAMEMAX)
-- return afs_protocol_error(call, -EBADMSG,
-- afs_eproto_motd_len);
-+ return afs_protocol_error(call, afs_eproto_motd_len);
- size = (call->count + 3) & ~3; /* It's padded */
- afs_extract_to_buf(call, size);
- call->unmarshall++;
-@@ -1954,9 +1922,7 @@ static int afs_deliver_fs_fetch_status(struct afs_call *call)
-
- /* unmarshall the reply once we've received all of it */
- bp = call->buffer;
-- ret = xdr_decode_AFSFetchStatus(&bp, call, call->out_scb);
-- if (ret < 0)
-- return ret;
-+ xdr_decode_AFSFetchStatus(&bp, call, call->out_scb);
- xdr_decode_AFSCallBack(&bp, call, call->out_scb);
- xdr_decode_AFSVolSync(&bp, call->out_volsync);
-
-@@ -2045,8 +2011,7 @@ static int afs_deliver_fs_inline_bulk_status(struct afs_call *call)
- tmp = ntohl(call->tmp);
- _debug("status count: %u/%u", tmp, call->count2);
- if (tmp != call->count2)
-- return afs_protocol_error(call, -EBADMSG,
-- afs_eproto_ibulkst_count);
-+ return afs_protocol_error(call, afs_eproto_ibulkst_count);
-
- call->count = 0;
- call->unmarshall++;
-@@ -2062,10 +2027,7 @@ static int afs_deliver_fs_inline_bulk_status(struct afs_call *call)
-
- bp = call->buffer;
- scb = &call->out_scb[call->count];
-- ret = xdr_decode_AFSFetchStatus(&bp, call, scb);
-- if (ret < 0)
-- return ret;
--
-+ xdr_decode_AFSFetchStatus(&bp, call, scb);
- call->count++;
- if (call->count < call->count2)
- goto more_counts;
-@@ -2085,8 +2047,7 @@ static int afs_deliver_fs_inline_bulk_status(struct afs_call *call)
- tmp = ntohl(call->tmp);
- _debug("CB count: %u", tmp);
- if (tmp != call->count2)
-- return afs_protocol_error(call, -EBADMSG,
-- afs_eproto_ibulkst_cb_count);
-+ return afs_protocol_error(call, afs_eproto_ibulkst_cb_count);
- call->count = 0;
- call->unmarshall++;
- more_cbs:
-@@ -2243,9 +2204,7 @@ static int afs_deliver_fs_fetch_acl(struct afs_call *call)
- return ret;
-
- bp = call->buffer;
-- ret = xdr_decode_AFSFetchStatus(&bp, call, call->out_scb);
-- if (ret < 0)
-- return ret;
-+ xdr_decode_AFSFetchStatus(&bp, call, call->out_scb);
- xdr_decode_AFSVolSync(&bp, call->out_volsync);
-
- call->unmarshall++;
-@@ -2326,9 +2285,7 @@ static int afs_deliver_fs_file_status_and_vol(struct afs_call *call)
- return ret;
-
- bp = call->buffer;
-- ret = xdr_decode_AFSFetchStatus(&bp, call, call->out_scb);
-- if (ret < 0)
-- return ret;
-+ xdr_decode_AFSFetchStatus(&bp, call, call->out_scb);
- xdr_decode_AFSVolSync(&bp, call->out_volsync);
-
- _leave(" = 0 [done]");
-diff --git a/fs/afs/inode.c b/fs/afs/inode.c
-index 281470fe1183..d7b65fad6679 100644
---- a/fs/afs/inode.c
-+++ b/fs/afs/inode.c
-@@ -130,7 +130,7 @@ static int afs_inode_init_from_status(struct afs_vnode *vnode, struct key *key,
- default:
- dump_vnode(vnode, parent_vnode);
- write_sequnlock(&vnode->cb_lock);
-- return afs_protocol_error(NULL, -EBADMSG, afs_eproto_file_type);
-+ return afs_protocol_error(NULL, afs_eproto_file_type);
- }
-
- afs_set_i_size(vnode, status->size);
-@@ -170,6 +170,7 @@ static void afs_apply_status(struct afs_fs_cursor *fc,
- struct timespec64 t;
- umode_t mode;
- bool data_changed = false;
-+ bool change_size = false;
-
- BUG_ON(test_bit(AFS_VNODE_UNSET, &vnode->flags));
-
-@@ -179,7 +180,7 @@ static void afs_apply_status(struct afs_fs_cursor *fc,
- vnode->fid.vnode,
- vnode->fid.unique,
- status->type, vnode->status.type);
-- afs_protocol_error(NULL, -EBADMSG, afs_eproto_bad_status);
-+ afs_protocol_error(NULL, afs_eproto_bad_status);
- return;
- }
-
-@@ -225,6 +226,7 @@ static void afs_apply_status(struct afs_fs_cursor *fc,
- } else {
- set_bit(AFS_VNODE_ZAP_DATA, &vnode->flags);
- }
-+ change_size = true;
- } else if (vnode->status.type == AFS_FTYPE_DIR) {
- /* Expected directory change is handled elsewhere so
- * that we can locally edit the directory and save on a
-@@ -232,11 +234,19 @@ static void afs_apply_status(struct afs_fs_cursor *fc,
- */
- if (test_bit(AFS_VNODE_DIR_VALID, &vnode->flags))
- data_changed = false;
-+ change_size = true;
- }
-
- if (data_changed) {
- inode_set_iversion_raw(&vnode->vfs_inode, status->data_version);
-- afs_set_i_size(vnode, status->size);
-+
-+ /* Only update the size if the data version jumped. If the
-+ * file is being modified locally, then we might have our own
-+ * idea of what the size should be that's not the same as
-+ * what's on the server.
-+ */
-+ if (change_size)
-+ afs_set_i_size(vnode, status->size);
- }
- }
-
-diff --git a/fs/afs/internal.h b/fs/afs/internal.h
-index 80255513e230..98e0cebd5e5e 100644
---- a/fs/afs/internal.h
-+++ b/fs/afs/internal.h
-@@ -161,6 +161,7 @@ struct afs_call {
- bool upgrade; /* T to request service upgrade */
- bool have_reply_time; /* T if have got reply_time */
- bool intr; /* T if interruptible */
-+ bool unmarshalling_error; /* T if an unmarshalling error occurred */
- u16 service_id; /* Actual service ID (after upgrade) */
- unsigned int debug_id; /* Trace ID */
- u32 operation_ID; /* operation ID for an incoming call */
-@@ -1128,7 +1129,7 @@ extern void afs_flat_call_destructor(struct afs_call *);
- extern void afs_send_empty_reply(struct afs_call *);
- extern void afs_send_simple_reply(struct afs_call *, const void *, size_t);
- extern int afs_extract_data(struct afs_call *, bool);
--extern int afs_protocol_error(struct afs_call *, int, enum afs_eproto_cause);
-+extern int afs_protocol_error(struct afs_call *, enum afs_eproto_cause);
-
- static inline void afs_set_fc_call(struct afs_call *call, struct afs_fs_cursor *fc)
- {
-diff --git a/fs/afs/misc.c b/fs/afs/misc.c
-index 52b19e9c1535..5334f1bd2bca 100644
---- a/fs/afs/misc.c
-+++ b/fs/afs/misc.c
-@@ -83,6 +83,7 @@ int afs_abort_to_error(u32 abort_code)
- case UAENOLCK: return -ENOLCK;
- case UAENOTEMPTY: return -ENOTEMPTY;
- case UAELOOP: return -ELOOP;
-+ case UAEOVERFLOW: return -EOVERFLOW;
- case UAENOMEDIUM: return -ENOMEDIUM;
- case UAEDQUOT: return -EDQUOT;
-
-diff --git a/fs/afs/proc.c b/fs/afs/proc.c
-index 468e1713bce1..6f34c84a0fd0 100644
---- a/fs/afs/proc.c
-+++ b/fs/afs/proc.c
-@@ -563,6 +563,7 @@ void afs_put_sysnames(struct afs_sysnames *sysnames)
- if (sysnames->subs[i] != afs_init_sysname &&
- sysnames->subs[i] != sysnames->blank)
- kfree(sysnames->subs[i]);
-+ kfree(sysnames);
- }
- }
-
-diff --git a/fs/afs/rxrpc.c b/fs/afs/rxrpc.c
-index 1ecc67da6c1a..e3c2655616dc 100644
---- a/fs/afs/rxrpc.c
-+++ b/fs/afs/rxrpc.c
-@@ -540,6 +540,8 @@ static void afs_deliver_to_call(struct afs_call *call)
-
- ret = call->type->deliver(call);
- state = READ_ONCE(call->state);
-+ if (ret == 0 && call->unmarshalling_error)
-+ ret = -EBADMSG;
- switch (ret) {
- case 0:
- afs_queue_call_work(call);
-@@ -959,9 +961,11 @@ int afs_extract_data(struct afs_call *call, bool want_more)
- /*
- * Log protocol error production.
- */
--noinline int afs_protocol_error(struct afs_call *call, int error,
-+noinline int afs_protocol_error(struct afs_call *call,
- enum afs_eproto_cause cause)
- {
-- trace_afs_protocol_error(call, error, cause);
-- return error;
-+ trace_afs_protocol_error(call, cause);
-+ if (call)
-+ call->unmarshalling_error = true;
-+ return -EBADMSG;
- }
-diff --git a/fs/afs/vlclient.c b/fs/afs/vlclient.c
-index 516e9a3bb5b4..e64b002c3bb3 100644
---- a/fs/afs/vlclient.c
-+++ b/fs/afs/vlclient.c
-@@ -447,8 +447,7 @@ static int afs_deliver_yfsvl_get_endpoints(struct afs_call *call)
- call->count2 = ntohl(*bp); /* Type or next count */
-
- if (call->count > YFS_MAXENDPOINTS)
-- return afs_protocol_error(call, -EBADMSG,
-- afs_eproto_yvl_fsendpt_num);
-+ return afs_protocol_error(call, afs_eproto_yvl_fsendpt_num);
-
- alist = afs_alloc_addrlist(call->count, FS_SERVICE, AFS_FS_PORT);
- if (!alist)
-@@ -468,8 +467,7 @@ static int afs_deliver_yfsvl_get_endpoints(struct afs_call *call)
- size = sizeof(__be32) * (1 + 4 + 1);
- break;
- default:
-- return afs_protocol_error(call, -EBADMSG,
-- afs_eproto_yvl_fsendpt_type);
-+ return afs_protocol_error(call, afs_eproto_yvl_fsendpt_type);
- }
-
- size += sizeof(__be32);
-@@ -487,21 +485,20 @@ static int afs_deliver_yfsvl_get_endpoints(struct afs_call *call)
- switch (call->count2) {
- case YFS_ENDPOINT_IPV4:
- if (ntohl(bp[0]) != sizeof(__be32) * 2)
-- return afs_protocol_error(call, -EBADMSG,
-- afs_eproto_yvl_fsendpt4_len);
-+ return afs_protocol_error(
-+ call, afs_eproto_yvl_fsendpt4_len);
- afs_merge_fs_addr4(alist, bp[1], ntohl(bp[2]));
- bp += 3;
- break;
- case YFS_ENDPOINT_IPV6:
- if (ntohl(bp[0]) != sizeof(__be32) * 5)
-- return afs_protocol_error(call, -EBADMSG,
-- afs_eproto_yvl_fsendpt6_len);
-+ return afs_protocol_error(
-+ call, afs_eproto_yvl_fsendpt6_len);
- afs_merge_fs_addr6(alist, bp + 1, ntohl(bp[5]));
- bp += 6;
- break;
- default:
-- return afs_protocol_error(call, -EBADMSG,
-- afs_eproto_yvl_fsendpt_type);
-+ return afs_protocol_error(call, afs_eproto_yvl_fsendpt_type);
- }
-
- /* Got either the type of the next entry or the count of
-@@ -519,8 +516,7 @@ static int afs_deliver_yfsvl_get_endpoints(struct afs_call *call)
- if (!call->count)
- goto end;
- if (call->count > YFS_MAXENDPOINTS)
-- return afs_protocol_error(call, -EBADMSG,
-- afs_eproto_yvl_vlendpt_type);
-+ return afs_protocol_error(call, afs_eproto_yvl_vlendpt_type);
-
- afs_extract_to_buf(call, 1 * sizeof(__be32));
- call->unmarshall = 3;
-@@ -547,8 +543,7 @@ static int afs_deliver_yfsvl_get_endpoints(struct afs_call *call)
- size = sizeof(__be32) * (1 + 4 + 1);
- break;
- default:
-- return afs_protocol_error(call, -EBADMSG,
-- afs_eproto_yvl_vlendpt_type);
-+ return afs_protocol_error(call, afs_eproto_yvl_vlendpt_type);
- }
-
- if (call->count > 1)
-@@ -566,19 +561,18 @@ static int afs_deliver_yfsvl_get_endpoints(struct afs_call *call)
- switch (call->count2) {
- case YFS_ENDPOINT_IPV4:
- if (ntohl(bp[0]) != sizeof(__be32) * 2)
-- return afs_protocol_error(call, -EBADMSG,
-- afs_eproto_yvl_vlendpt4_len);
-+ return afs_protocol_error(
-+ call, afs_eproto_yvl_vlendpt4_len);
- bp += 3;
- break;
- case YFS_ENDPOINT_IPV6:
- if (ntohl(bp[0]) != sizeof(__be32) * 5)
-- return afs_protocol_error(call, -EBADMSG,
-- afs_eproto_yvl_vlendpt6_len);
-+ return afs_protocol_error(
-+ call, afs_eproto_yvl_vlendpt6_len);
- bp += 6;
- break;
- default:
-- return afs_protocol_error(call, -EBADMSG,
-- afs_eproto_yvl_vlendpt_type);
-+ return afs_protocol_error(call, afs_eproto_yvl_vlendpt_type);
- }
-
- /* Got either the type of the next entry or the count of
-diff --git a/fs/afs/write.c b/fs/afs/write.c
-index cb76566763db..96b042af6248 100644
---- a/fs/afs/write.c
-+++ b/fs/afs/write.c
-@@ -194,11 +194,11 @@ int afs_write_end(struct file *file, struct address_space *mapping,
-
- i_size = i_size_read(&vnode->vfs_inode);
- if (maybe_i_size > i_size) {
-- spin_lock(&vnode->wb_lock);
-+ write_seqlock(&vnode->cb_lock);
- i_size = i_size_read(&vnode->vfs_inode);
- if (maybe_i_size > i_size)
- i_size_write(&vnode->vfs_inode, maybe_i_size);
-- spin_unlock(&vnode->wb_lock);
-+ write_sequnlock(&vnode->cb_lock);
- }
-
- if (!PageUptodate(page)) {
-@@ -811,6 +811,7 @@ vm_fault_t afs_page_mkwrite(struct vm_fault *vmf)
- vmf->page->index, priv);
- SetPagePrivate(vmf->page);
- set_page_private(vmf->page, priv);
-+ file_update_time(file);
-
- sb_end_pagefault(inode->i_sb);
- return VM_FAULT_LOCKED;
-diff --git a/fs/afs/yfsclient.c b/fs/afs/yfsclient.c
-index fe413e7a5cf4..bf74c679c02b 100644
---- a/fs/afs/yfsclient.c
-+++ b/fs/afs/yfsclient.c
-@@ -179,21 +179,20 @@ static void xdr_dump_bad(const __be32 *bp)
- /*
- * Decode a YFSFetchStatus block
- */
--static int xdr_decode_YFSFetchStatus(const __be32 **_bp,
-- struct afs_call *call,
-- struct afs_status_cb *scb)
-+static void xdr_decode_YFSFetchStatus(const __be32 **_bp,
-+ struct afs_call *call,
-+ struct afs_status_cb *scb)
- {
- const struct yfs_xdr_YFSFetchStatus *xdr = (const void *)*_bp;
- struct afs_file_status *status = &scb->status;
- u32 type;
-- int ret;
-
- status->abort_code = ntohl(xdr->abort_code);
- if (status->abort_code != 0) {
- if (status->abort_code == VNOVNODE)
- status->nlink = 0;
- scb->have_error = true;
-- goto good;
-+ goto advance;
- }
-
- type = ntohl(xdr->type);
-@@ -221,15 +220,13 @@ static int xdr_decode_YFSFetchStatus(const __be32 **_bp,
- status->size = xdr_to_u64(xdr->size);
- status->data_version = xdr_to_u64(xdr->data_version);
- scb->have_status = true;
--good:
-- ret = 0;
- advance:
- *_bp += xdr_size(xdr);
-- return ret;
-+ return;
-
- bad:
- xdr_dump_bad(*_bp);
-- ret = afs_protocol_error(call, -EBADMSG, afs_eproto_bad_status);
-+ afs_protocol_error(call, afs_eproto_bad_status);
- goto advance;
- }
-
-@@ -348,9 +345,7 @@ static int yfs_deliver_fs_status_cb_and_volsync(struct afs_call *call)
-
- /* unmarshall the reply once we've received all of it */
- bp = call->buffer;
-- ret = xdr_decode_YFSFetchStatus(&bp, call, call->out_scb);
-- if (ret < 0)
-- return ret;
-+ xdr_decode_YFSFetchStatus(&bp, call, call->out_scb);
- xdr_decode_YFSCallBack(&bp, call, call->out_scb);
- xdr_decode_YFSVolSync(&bp, call->out_volsync);
-
-@@ -372,9 +367,7 @@ static int yfs_deliver_status_and_volsync(struct afs_call *call)
- return ret;
-
- bp = call->buffer;
-- ret = xdr_decode_YFSFetchStatus(&bp, call, call->out_scb);
-- if (ret < 0)
-- return ret;
-+ xdr_decode_YFSFetchStatus(&bp, call, call->out_scb);
- xdr_decode_YFSVolSync(&bp, call->out_volsync);
-
- _leave(" = 0 [done]");
-@@ -534,9 +527,7 @@ static int yfs_deliver_fs_fetch_data64(struct afs_call *call)
- return ret;
-
- bp = call->buffer;
-- ret = xdr_decode_YFSFetchStatus(&bp, call, call->out_scb);
-- if (ret < 0)
-- return ret;
-+ xdr_decode_YFSFetchStatus(&bp, call, call->out_scb);
- xdr_decode_YFSCallBack(&bp, call, call->out_scb);
- xdr_decode_YFSVolSync(&bp, call->out_volsync);
-
-@@ -644,12 +635,8 @@ static int yfs_deliver_fs_create_vnode(struct afs_call *call)
- /* unmarshall the reply once we've received all of it */
- bp = call->buffer;
- xdr_decode_YFSFid(&bp, call->out_fid);
-- ret = xdr_decode_YFSFetchStatus(&bp, call, call->out_scb);
-- if (ret < 0)
-- return ret;
-- ret = xdr_decode_YFSFetchStatus(&bp, call, call->out_dir_scb);
-- if (ret < 0)
-- return ret;
-+ xdr_decode_YFSFetchStatus(&bp, call, call->out_scb);
-+ xdr_decode_YFSFetchStatus(&bp, call, call->out_dir_scb);
- xdr_decode_YFSCallBack(&bp, call, call->out_scb);
- xdr_decode_YFSVolSync(&bp, call->out_volsync);
-
-@@ -802,14 +789,9 @@ static int yfs_deliver_fs_remove_file2(struct afs_call *call)
- return ret;
-
- bp = call->buffer;
-- ret = xdr_decode_YFSFetchStatus(&bp, call, call->out_dir_scb);
-- if (ret < 0)
-- return ret;
--
-+ xdr_decode_YFSFetchStatus(&bp, call, call->out_dir_scb);
- xdr_decode_YFSFid(&bp, &fid);
-- ret = xdr_decode_YFSFetchStatus(&bp, call, call->out_scb);
-- if (ret < 0)
-- return ret;
-+ xdr_decode_YFSFetchStatus(&bp, call, call->out_scb);
- /* Was deleted if vnode->status.abort_code == VNOVNODE. */
-
- xdr_decode_YFSVolSync(&bp, call->out_volsync);
-@@ -889,10 +871,7 @@ static int yfs_deliver_fs_remove(struct afs_call *call)
- return ret;
-
- bp = call->buffer;
-- ret = xdr_decode_YFSFetchStatus(&bp, call, call->out_dir_scb);
-- if (ret < 0)
-- return ret;
--
-+ xdr_decode_YFSFetchStatus(&bp, call, call->out_dir_scb);
- xdr_decode_YFSVolSync(&bp, call->out_volsync);
- return 0;
- }
-@@ -974,12 +953,8 @@ static int yfs_deliver_fs_link(struct afs_call *call)
- return ret;
-
- bp = call->buffer;
-- ret = xdr_decode_YFSFetchStatus(&bp, call, call->out_scb);
-- if (ret < 0)
-- return ret;
-- ret = xdr_decode_YFSFetchStatus(&bp, call, call->out_dir_scb);
-- if (ret < 0)
-- return ret;
-+ xdr_decode_YFSFetchStatus(&bp, call, call->out_scb);
-+ xdr_decode_YFSFetchStatus(&bp, call, call->out_dir_scb);
- xdr_decode_YFSVolSync(&bp, call->out_volsync);
- _leave(" = 0 [done]");
- return 0;
-@@ -1061,12 +1036,8 @@ static int yfs_deliver_fs_symlink(struct afs_call *call)
- /* unmarshall the reply once we've received all of it */
- bp = call->buffer;
- xdr_decode_YFSFid(&bp, call->out_fid);
-- ret = xdr_decode_YFSFetchStatus(&bp, call, call->out_scb);
-- if (ret < 0)
-- return ret;
-- ret = xdr_decode_YFSFetchStatus(&bp, call, call->out_dir_scb);
-- if (ret < 0)
-- return ret;
-+ xdr_decode_YFSFetchStatus(&bp, call, call->out_scb);
-+ xdr_decode_YFSFetchStatus(&bp, call, call->out_dir_scb);
- xdr_decode_YFSVolSync(&bp, call->out_volsync);
-
- _leave(" = 0 [done]");
-@@ -1154,13 +1125,11 @@ static int yfs_deliver_fs_rename(struct afs_call *call)
- return ret;
-
- bp = call->buffer;
-- ret = xdr_decode_YFSFetchStatus(&bp, call, call->out_dir_scb);
-- if (ret < 0)
-- return ret;
-- ret = xdr_decode_YFSFetchStatus(&bp, call, call->out_scb);
-- if (ret < 0)
-- return ret;
--
-+ /* If the two dirs are the same, we have two copies of the same status
-+ * report, so we just decode it twice.
-+ */
-+ xdr_decode_YFSFetchStatus(&bp, call, call->out_dir_scb);
-+ xdr_decode_YFSFetchStatus(&bp, call, call->out_scb);
- xdr_decode_YFSVolSync(&bp, call->out_volsync);
- _leave(" = 0 [done]");
- return 0;
-@@ -1457,8 +1426,7 @@ static int yfs_deliver_fs_get_volume_status(struct afs_call *call)
- call->count = ntohl(call->tmp);
- _debug("volname length: %u", call->count);
- if (call->count >= AFSNAMEMAX)
-- return afs_protocol_error(call, -EBADMSG,
-- afs_eproto_volname_len);
-+ return afs_protocol_error(call, afs_eproto_volname_len);
- size = (call->count + 3) & ~3; /* It's padded */
- afs_extract_to_buf(call, size);
- call->unmarshall++;
-@@ -1487,8 +1455,7 @@ static int yfs_deliver_fs_get_volume_status(struct afs_call *call)
- call->count = ntohl(call->tmp);
- _debug("offline msg length: %u", call->count);
- if (call->count >= AFSNAMEMAX)
-- return afs_protocol_error(call, -EBADMSG,
-- afs_eproto_offline_msg_len);
-+ return afs_protocol_error(call, afs_eproto_offline_msg_len);
- size = (call->count + 3) & ~3; /* It's padded */
- afs_extract_to_buf(call, size);
- call->unmarshall++;
-@@ -1518,8 +1485,7 @@ static int yfs_deliver_fs_get_volume_status(struct afs_call *call)
- call->count = ntohl(call->tmp);
- _debug("motd length: %u", call->count);
- if (call->count >= AFSNAMEMAX)
-- return afs_protocol_error(call, -EBADMSG,
-- afs_eproto_motd_len);
-+ return afs_protocol_error(call, afs_eproto_motd_len);
- size = (call->count + 3) & ~3; /* It's padded */
- afs_extract_to_buf(call, size);
- call->unmarshall++;
-@@ -1828,8 +1794,7 @@ static int yfs_deliver_fs_inline_bulk_status(struct afs_call *call)
- tmp = ntohl(call->tmp);
- _debug("status count: %u/%u", tmp, call->count2);
- if (tmp != call->count2)
-- return afs_protocol_error(call, -EBADMSG,
-- afs_eproto_ibulkst_count);
-+ return afs_protocol_error(call, afs_eproto_ibulkst_count);
-
- call->count = 0;
- call->unmarshall++;
-@@ -1845,9 +1810,7 @@ static int yfs_deliver_fs_inline_bulk_status(struct afs_call *call)
-
- bp = call->buffer;
- scb = &call->out_scb[call->count];
-- ret = xdr_decode_YFSFetchStatus(&bp, call, scb);
-- if (ret < 0)
-- return ret;
-+ xdr_decode_YFSFetchStatus(&bp, call, scb);
-
- call->count++;
- if (call->count < call->count2)
-@@ -1868,8 +1831,7 @@ static int yfs_deliver_fs_inline_bulk_status(struct afs_call *call)
- tmp = ntohl(call->tmp);
- _debug("CB count: %u", tmp);
- if (tmp != call->count2)
-- return afs_protocol_error(call, -EBADMSG,
-- afs_eproto_ibulkst_cb_count);
-+ return afs_protocol_error(call, afs_eproto_ibulkst_cb_count);
- call->count = 0;
- call->unmarshall++;
- more_cbs:
-@@ -2067,9 +2029,7 @@ static int yfs_deliver_fs_fetch_opaque_acl(struct afs_call *call)
- bp = call->buffer;
- yacl->inherit_flag = ntohl(*bp++);
- yacl->num_cleaned = ntohl(*bp++);
-- ret = xdr_decode_YFSFetchStatus(&bp, call, call->out_scb);
-- if (ret < 0)
-- return ret;
-+ xdr_decode_YFSFetchStatus(&bp, call, call->out_scb);
- xdr_decode_YFSVolSync(&bp, call->out_volsync);
-
- call->unmarshall++;
-diff --git a/fs/block_dev.c b/fs/block_dev.c
-index 93672c3f1c78..313aae95818e 100644
---- a/fs/block_dev.c
-+++ b/fs/block_dev.c
-@@ -1583,10 +1583,8 @@ static int __blkdev_get(struct block_device *bdev, fmode_t mode, int for_part)
- */
- if (!for_part) {
- ret = devcgroup_inode_permission(bdev->bd_inode, perm);
-- if (ret != 0) {
-- bdput(bdev);
-+ if (ret != 0)
- return ret;
-- }
- }
-
- restart:
-@@ -1655,8 +1653,10 @@ static int __blkdev_get(struct block_device *bdev, fmode_t mode, int for_part)
- goto out_clear;
- BUG_ON(for_part);
- ret = __blkdev_get(whole, mode, 1);
-- if (ret)
-+ if (ret) {
-+ bdput(whole);
- goto out_clear;
-+ }
- bdev->bd_contains = whole;
- bdev->bd_part = disk_get_part(disk, partno);
- if (!(disk->flags & GENHD_FL_UP) ||
-@@ -1706,7 +1706,6 @@ static int __blkdev_get(struct block_device *bdev, fmode_t mode, int for_part)
- disk_unblock_events(disk);
- put_disk_and_module(disk);
- out:
-- bdput(bdev);
-
- return ret;
- }
-@@ -1773,6 +1772,9 @@ int blkdev_get(struct block_device *bdev, fmode_t mode, void *holder)
- bdput(whole);
- }
-
-+ if (res)
-+ bdput(bdev);
-+
- return res;
- }
- EXPORT_SYMBOL(blkdev_get);
-diff --git a/fs/ceph/export.c b/fs/ceph/export.c
-index 79dc06881e78..e088843a7734 100644
---- a/fs/ceph/export.c
-+++ b/fs/ceph/export.c
-@@ -172,9 +172,16 @@ struct inode *ceph_lookup_inode(struct super_block *sb, u64 ino)
- static struct dentry *__fh_to_dentry(struct super_block *sb, u64 ino)
- {
- struct inode *inode = __lookup_inode(sb, ino);
-+ int err;
-+
- if (IS_ERR(inode))
- return ERR_CAST(inode);
-- if (inode->i_nlink == 0) {
-+ /* We need LINK caps to reliably check i_nlink */
-+ err = ceph_do_getattr(inode, CEPH_CAP_LINK_SHARED, false);
-+ if (err)
-+ return ERR_PTR(err);
-+ /* -ESTALE if inode as been unlinked and no file is open */
-+ if ((inode->i_nlink == 0) && (atomic_read(&inode->i_count) == 1)) {
- iput(inode);
- return ERR_PTR(-ESTALE);
- }
-diff --git a/fs/cifs/connect.c b/fs/cifs/connect.c
-index 28268ed461b8..47b9fbb70bf5 100644
---- a/fs/cifs/connect.c
-+++ b/fs/cifs/connect.c
-@@ -572,26 +572,26 @@ cifs_reconnect(struct TCP_Server_Info *server)
- try_to_freeze();
-
- mutex_lock(&server->srv_mutex);
-+#ifdef CONFIG_CIFS_DFS_UPCALL
- /*
- * Set up next DFS target server (if any) for reconnect. If DFS
- * feature is disabled, then we will retry last server we
- * connected to before.
- */
-+ reconn_inval_dfs_target(server, cifs_sb, &tgt_list, &tgt_it);
-+#endif
-+ rc = reconn_set_ipaddr(server);
-+ if (rc) {
-+ cifs_dbg(FYI, "%s: failed to resolve hostname: %d\n",
-+ __func__, rc);
-+ }
-+
- if (cifs_rdma_enabled(server))
- rc = smbd_reconnect(server);
- else
- rc = generic_ip_connect(server);
- if (rc) {
- cifs_dbg(FYI, "reconnect error %d\n", rc);
--#ifdef CONFIG_CIFS_DFS_UPCALL
-- reconn_inval_dfs_target(server, cifs_sb, &tgt_list,
-- &tgt_it);
--#endif
-- rc = reconn_set_ipaddr(server);
-- if (rc) {
-- cifs_dbg(FYI, "%s: failed to resolve hostname: %d\n",
-- __func__, rc);
-- }
- mutex_unlock(&server->srv_mutex);
- msleep(3000);
- } else {
-diff --git a/fs/dlm/dlm_internal.h b/fs/dlm/dlm_internal.h
-index 416d9de35679..4311d01b02a8 100644
---- a/fs/dlm/dlm_internal.h
-+++ b/fs/dlm/dlm_internal.h
-@@ -97,7 +97,6 @@ do { \
- __LINE__, __FILE__, #x, jiffies); \
- {do} \
- printk("\n"); \
-- BUG(); \
- panic("DLM: Record message above and reboot.\n"); \
- } \
- }
-diff --git a/fs/ext4/acl.c b/fs/ext4/acl.c
-index 8c7bbf3e566d..470be69f19aa 100644
---- a/fs/ext4/acl.c
-+++ b/fs/ext4/acl.c
-@@ -256,7 +256,7 @@ retry:
- if (!error && update_mode) {
- inode->i_mode = mode;
- inode->i_ctime = current_time(inode);
-- ext4_mark_inode_dirty(handle, inode);
-+ error = ext4_mark_inode_dirty(handle, inode);
- }
- out_stop:
- ext4_journal_stop(handle);
-diff --git a/fs/ext4/dir.c b/fs/ext4/dir.c
-index c654205f648d..1d82336b1cd4 100644
---- a/fs/ext4/dir.c
-+++ b/fs/ext4/dir.c
-@@ -675,6 +675,7 @@ static int ext4_d_compare(const struct dentry *dentry, unsigned int len,
- struct qstr qstr = {.name = str, .len = len };
- const struct dentry *parent = READ_ONCE(dentry->d_parent);
- const struct inode *inode = READ_ONCE(parent->d_inode);
-+ char strbuf[DNAME_INLINE_LEN];
-
- if (!inode || !IS_CASEFOLDED(inode) ||
- !EXT4_SB(inode->i_sb)->s_encoding) {
-@@ -683,6 +684,21 @@ static int ext4_d_compare(const struct dentry *dentry, unsigned int len,
- return memcmp(str, name->name, len);
- }
-
-+ /*
-+ * If the dentry name is stored in-line, then it may be concurrently
-+ * modified by a rename. If this happens, the VFS will eventually retry
-+ * the lookup, so it doesn't matter what ->d_compare() returns.
-+ * However, it's unsafe to call utf8_strncasecmp() with an unstable
-+ * string. Therefore, we have to copy the name into a temporary buffer.
-+ */
-+ if (len <= DNAME_INLINE_LEN - 1) {
-+ memcpy(strbuf, str, len);
-+ strbuf[len] = 0;
-+ qstr.name = strbuf;
-+ /* prevent compiler from optimizing out the temporary buffer */
-+ barrier();
-+ }
-+
- return ext4_ci_compare(inode, name, &qstr, false);
- }
-
-diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h
-index ad2dbf6e4924..51a85b50033a 100644
---- a/fs/ext4/ext4.h
-+++ b/fs/ext4/ext4.h
-@@ -3354,7 +3354,7 @@ struct ext4_extent;
- */
- #define EXT_MAX_BLOCKS 0xffffffff
-
--extern int ext4_ext_tree_init(handle_t *handle, struct inode *);
-+extern void ext4_ext_tree_init(handle_t *handle, struct inode *inode);
- extern int ext4_ext_index_trans_blocks(struct inode *inode, int extents);
- extern int ext4_ext_map_blocks(handle_t *handle, struct inode *inode,
- struct ext4_map_blocks *map, int flags);
-diff --git a/fs/ext4/ext4_jbd2.h b/fs/ext4/ext4_jbd2.h
-index 4b9002f0e84c..3bacf76d2609 100644
---- a/fs/ext4/ext4_jbd2.h
-+++ b/fs/ext4/ext4_jbd2.h
-@@ -222,7 +222,10 @@ ext4_mark_iloc_dirty(handle_t *handle,
- int ext4_reserve_inode_write(handle_t *handle, struct inode *inode,
- struct ext4_iloc *iloc);
-
--int ext4_mark_inode_dirty(handle_t *handle, struct inode *inode);
-+#define ext4_mark_inode_dirty(__h, __i) \
-+ __ext4_mark_inode_dirty((__h), (__i), __func__, __LINE__)
-+int __ext4_mark_inode_dirty(handle_t *handle, struct inode *inode,
-+ const char *func, unsigned int line);
-
- int ext4_expand_extra_isize(struct inode *inode,
- unsigned int new_extra_isize,
-diff --git a/fs/ext4/extents.c b/fs/ext4/extents.c
-index 2b4b94542e34..d5453072eb63 100644
---- a/fs/ext4/extents.c
-+++ b/fs/ext4/extents.c
-@@ -816,7 +816,7 @@ ext4_ext_binsearch(struct inode *inode,
-
- }
-
--int ext4_ext_tree_init(handle_t *handle, struct inode *inode)
-+void ext4_ext_tree_init(handle_t *handle, struct inode *inode)
- {
- struct ext4_extent_header *eh;
-
-@@ -826,7 +826,6 @@ int ext4_ext_tree_init(handle_t *handle, struct inode *inode)
- eh->eh_magic = EXT4_EXT_MAGIC;
- eh->eh_max = cpu_to_le16(ext4_ext_space_root(inode, 0));
- ext4_mark_inode_dirty(handle, inode);
-- return 0;
- }
-
- struct ext4_ext_path *
-@@ -1319,7 +1318,7 @@ static int ext4_ext_grow_indepth(handle_t *handle, struct inode *inode,
- ext4_idx_pblock(EXT_FIRST_INDEX(neh)));
-
- le16_add_cpu(&neh->eh_depth, 1);
-- ext4_mark_inode_dirty(handle, inode);
-+ err = ext4_mark_inode_dirty(handle, inode);
- out:
- brelse(bh);
-
-@@ -2828,7 +2827,7 @@ again:
- * in use to avoid freeing it when removing blocks.
- */
- if (sbi->s_cluster_ratio > 1) {
-- pblk = ext4_ext_pblock(ex) + end - ee_block + 2;
-+ pblk = ext4_ext_pblock(ex) + end - ee_block + 1;
- partial.pclu = EXT4_B2C(sbi, pblk);
- partial.state = nofree;
- }
-@@ -4363,7 +4362,7 @@ static int ext4_alloc_file_blocks(struct file *file, ext4_lblk_t offset,
- struct inode *inode = file_inode(file);
- handle_t *handle;
- int ret = 0;
-- int ret2 = 0;
-+ int ret2 = 0, ret3 = 0;
- int retries = 0;
- int depth = 0;
- struct ext4_map_blocks map;
-@@ -4423,10 +4422,11 @@ retry:
- if (ext4_update_inode_size(inode, epos) & 0x1)
- inode->i_mtime = inode->i_ctime;
- }
-- ext4_mark_inode_dirty(handle, inode);
-+ ret2 = ext4_mark_inode_dirty(handle, inode);
- ext4_update_inode_fsync_trans(handle, inode, 1);
-- ret2 = ext4_journal_stop(handle);
-- if (ret2)
-+ ret3 = ext4_journal_stop(handle);
-+ ret2 = ret3 ? ret3 : ret2;
-+ if (unlikely(ret2))
- break;
- }
- if (ret == -ENOSPC &&
-@@ -4577,7 +4577,9 @@ static long ext4_zero_range(struct file *file, loff_t offset,
- inode->i_mtime = inode->i_ctime = current_time(inode);
- if (new_size)
- ext4_update_inode_size(inode, new_size);
-- ext4_mark_inode_dirty(handle, inode);
-+ ret = ext4_mark_inode_dirty(handle, inode);
-+ if (unlikely(ret))
-+ goto out_handle;
-
- /* Zero out partial block at the edges of the range */
- ret = ext4_zero_partial_blocks(handle, inode, offset, len);
-@@ -4587,6 +4589,7 @@ static long ext4_zero_range(struct file *file, loff_t offset,
- if (file->f_flags & O_SYNC)
- ext4_handle_sync(handle);
-
-+out_handle:
- ext4_journal_stop(handle);
- out_mutex:
- inode_unlock(inode);
-@@ -4700,8 +4703,7 @@ int ext4_convert_unwritten_extents(handle_t *handle, struct inode *inode,
- loff_t offset, ssize_t len)
- {
- unsigned int max_blocks;
-- int ret = 0;
-- int ret2 = 0;
-+ int ret = 0, ret2 = 0, ret3 = 0;
- struct ext4_map_blocks map;
- unsigned int blkbits = inode->i_blkbits;
- unsigned int credits = 0;
-@@ -4734,9 +4736,13 @@ int ext4_convert_unwritten_extents(handle_t *handle, struct inode *inode,
- "ext4_ext_map_blocks returned %d",
- inode->i_ino, map.m_lblk,
- map.m_len, ret);
-- ext4_mark_inode_dirty(handle, inode);
-- if (credits)
-- ret2 = ext4_journal_stop(handle);
-+ ret2 = ext4_mark_inode_dirty(handle, inode);
-+ if (credits) {
-+ ret3 = ext4_journal_stop(handle);
-+ if (unlikely(ret3))
-+ ret2 = ret3;
-+ }
-+
- if (ret <= 0 || ret2)
- break;
- }
-@@ -5304,7 +5310,7 @@ static int ext4_collapse_range(struct inode *inode, loff_t offset, loff_t len)
- if (IS_SYNC(inode))
- ext4_handle_sync(handle);
- inode->i_mtime = inode->i_ctime = current_time(inode);
-- ext4_mark_inode_dirty(handle, inode);
-+ ret = ext4_mark_inode_dirty(handle, inode);
- ext4_update_inode_fsync_trans(handle, inode, 1);
-
- out_stop:
-diff --git a/fs/ext4/file.c b/fs/ext4/file.c
-index 0d624250a62b..2a01e31a032c 100644
---- a/fs/ext4/file.c
-+++ b/fs/ext4/file.c
-@@ -287,6 +287,7 @@ static ssize_t ext4_handle_inode_extension(struct inode *inode, loff_t offset,
- bool truncate = false;
- u8 blkbits = inode->i_blkbits;
- ext4_lblk_t written_blk, end_blk;
-+ int ret;
-
- /*
- * Note that EXT4_I(inode)->i_disksize can get extended up to
-@@ -327,8 +328,14 @@ static ssize_t ext4_handle_inode_extension(struct inode *inode, loff_t offset,
- goto truncate;
- }
-
-- if (ext4_update_inode_size(inode, offset + written))
-- ext4_mark_inode_dirty(handle, inode);
-+ if (ext4_update_inode_size(inode, offset + written)) {
-+ ret = ext4_mark_inode_dirty(handle, inode);
-+ if (unlikely(ret)) {
-+ written = ret;
-+ ext4_journal_stop(handle);
-+ goto truncate;
-+ }
-+ }
-
- /*
- * We may need to truncate allocated but not written blocks beyond EOF.
-@@ -495,6 +502,12 @@ static ssize_t ext4_dio_write_iter(struct kiocb *iocb, struct iov_iter *from)
- if (ret <= 0)
- return ret;
-
-+ /* if we're going to block and IOCB_NOWAIT is set, return -EAGAIN */
-+ if ((iocb->ki_flags & IOCB_NOWAIT) && (unaligned_io || extend)) {
-+ ret = -EAGAIN;
-+ goto out;
-+ }
-+
- offset = iocb->ki_pos;
- count = ret;
-
-diff --git a/fs/ext4/indirect.c b/fs/ext4/indirect.c
-index 107f0043f67f..be2b66eb65f7 100644
---- a/fs/ext4/indirect.c
-+++ b/fs/ext4/indirect.c
-@@ -467,7 +467,9 @@ static int ext4_splice_branch(handle_t *handle,
- /*
- * OK, we spliced it into the inode itself on a direct block.
- */
-- ext4_mark_inode_dirty(handle, ar->inode);
-+ err = ext4_mark_inode_dirty(handle, ar->inode);
-+ if (unlikely(err))
-+ goto err_out;
- jbd_debug(5, "splicing direct\n");
- }
- return err;
-diff --git a/fs/ext4/inline.c b/fs/ext4/inline.c
-index f35e289e17aa..c3a1ad2db122 100644
---- a/fs/ext4/inline.c
-+++ b/fs/ext4/inline.c
-@@ -1260,7 +1260,7 @@ out:
- int ext4_try_add_inline_entry(handle_t *handle, struct ext4_filename *fname,
- struct inode *dir, struct inode *inode)
- {
-- int ret, inline_size, no_expand;
-+ int ret, ret2, inline_size, no_expand;
- void *inline_start;
- struct ext4_iloc iloc;
-
-@@ -1314,7 +1314,9 @@ int ext4_try_add_inline_entry(handle_t *handle, struct ext4_filename *fname,
-
- out:
- ext4_write_unlock_xattr(dir, &no_expand);
-- ext4_mark_inode_dirty(handle, dir);
-+ ret2 = ext4_mark_inode_dirty(handle, dir);
-+ if (unlikely(ret2 && !ret))
-+ ret = ret2;
- brelse(iloc.bh);
- return ret;
- }
-diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
-index 2a4aae6acdcb..87430d276bcc 100644
---- a/fs/ext4/inode.c
-+++ b/fs/ext4/inode.c
-@@ -1296,7 +1296,7 @@ static int ext4_write_end(struct file *file,
- * filesystems.
- */
- if (i_size_changed || inline_data)
-- ext4_mark_inode_dirty(handle, inode);
-+ ret = ext4_mark_inode_dirty(handle, inode);
-
- if (pos + len > inode->i_size && !verity && ext4_can_truncate(inode))
- /* if we have allocated more blocks and copied
-@@ -3077,7 +3077,7 @@ static int ext4_da_write_end(struct file *file,
- * new_i_size is less that inode->i_size
- * bu greater than i_disksize.(hint delalloc)
- */
-- ext4_mark_inode_dirty(handle, inode);
-+ ret = ext4_mark_inode_dirty(handle, inode);
- }
- }
-
-@@ -3094,7 +3094,7 @@ static int ext4_da_write_end(struct file *file,
- if (ret2 < 0)
- ret = ret2;
- ret2 = ext4_journal_stop(handle);
-- if (!ret)
-+ if (unlikely(ret2 && !ret))
- ret = ret2;
-
- return ret ? ret : copied;
-@@ -3886,6 +3886,8 @@ int ext4_update_disksize_before_punch(struct inode *inode, loff_t offset,
- loff_t len)
- {
- handle_t *handle;
-+ int ret;
-+
- loff_t size = i_size_read(inode);
-
- WARN_ON(!inode_is_locked(inode));
-@@ -3899,10 +3901,10 @@ int ext4_update_disksize_before_punch(struct inode *inode, loff_t offset,
- if (IS_ERR(handle))
- return PTR_ERR(handle);
- ext4_update_i_disksize(inode, size);
-- ext4_mark_inode_dirty(handle, inode);
-+ ret = ext4_mark_inode_dirty(handle, inode);
- ext4_journal_stop(handle);
-
-- return 0;
-+ return ret;
- }
-
- static void ext4_wait_dax_page(struct ext4_inode_info *ei)
-@@ -3954,7 +3956,7 @@ int ext4_punch_hole(struct inode *inode, loff_t offset, loff_t length)
- loff_t first_block_offset, last_block_offset;
- handle_t *handle;
- unsigned int credits;
-- int ret = 0;
-+ int ret = 0, ret2 = 0;
-
- trace_ext4_punch_hole(inode, offset, length, 0);
-
-@@ -4077,7 +4079,9 @@ int ext4_punch_hole(struct inode *inode, loff_t offset, loff_t length)
- ext4_handle_sync(handle);
-
- inode->i_mtime = inode->i_ctime = current_time(inode);
-- ext4_mark_inode_dirty(handle, inode);
-+ ret2 = ext4_mark_inode_dirty(handle, inode);
-+ if (unlikely(ret2))
-+ ret = ret2;
- if (ret >= 0)
- ext4_update_inode_fsync_trans(handle, inode, 1);
- out_stop:
-@@ -4146,7 +4150,7 @@ int ext4_truncate(struct inode *inode)
- {
- struct ext4_inode_info *ei = EXT4_I(inode);
- unsigned int credits;
-- int err = 0;
-+ int err = 0, err2;
- handle_t *handle;
- struct address_space *mapping = inode->i_mapping;
-
-@@ -4234,7 +4238,9 @@ out_stop:
- ext4_orphan_del(handle, inode);
-
- inode->i_mtime = inode->i_ctime = current_time(inode);
-- ext4_mark_inode_dirty(handle, inode);
-+ err2 = ext4_mark_inode_dirty(handle, inode);
-+ if (unlikely(err2 && !err))
-+ err = err2;
- ext4_journal_stop(handle);
-
- trace_ext4_truncate_exit(inode);
-@@ -5292,6 +5298,8 @@ int ext4_setattr(struct dentry *dentry, struct iattr *attr)
- inode->i_gid = attr->ia_gid;
- error = ext4_mark_inode_dirty(handle, inode);
- ext4_journal_stop(handle);
-+ if (unlikely(error))
-+ return error;
- }
-
- if (attr->ia_valid & ATTR_SIZE) {
-@@ -5777,7 +5785,8 @@ out_unlock:
- * Whenever the user wants stuff synced (sys_sync, sys_msync, sys_fsync)
- * we start and wait on commits.
- */
--int ext4_mark_inode_dirty(handle_t *handle, struct inode *inode)
-+int __ext4_mark_inode_dirty(handle_t *handle, struct inode *inode,
-+ const char *func, unsigned int line)
- {
- struct ext4_iloc iloc;
- struct ext4_sb_info *sbi = EXT4_SB(inode->i_sb);
-@@ -5787,13 +5796,18 @@ int ext4_mark_inode_dirty(handle_t *handle, struct inode *inode)
- trace_ext4_mark_inode_dirty(inode, _RET_IP_);
- err = ext4_reserve_inode_write(handle, inode, &iloc);
- if (err)
-- return err;
-+ goto out;
-
- if (EXT4_I(inode)->i_extra_isize < sbi->s_want_extra_isize)
- ext4_try_to_expand_extra_isize(inode, sbi->s_want_extra_isize,
- iloc, handle);
-
-- return ext4_mark_iloc_dirty(handle, inode, &iloc);
-+ err = ext4_mark_iloc_dirty(handle, inode, &iloc);
-+out:
-+ if (unlikely(err))
-+ ext4_error_inode_err(inode, func, line, 0, err,
-+ "mark_inode_dirty error");
-+ return err;
- }
-
- /*
-diff --git a/fs/ext4/migrate.c b/fs/ext4/migrate.c
-index fb6520f37135..c5e3fc998211 100644
---- a/fs/ext4/migrate.c
-+++ b/fs/ext4/migrate.c
-@@ -287,7 +287,7 @@ static int free_ind_block(handle_t *handle, struct inode *inode, __le32 *i_data)
- static int ext4_ext_swap_inode_data(handle_t *handle, struct inode *inode,
- struct inode *tmp_inode)
- {
-- int retval;
-+ int retval, retval2 = 0;
- __le32 i_data[3];
- struct ext4_inode_info *ei = EXT4_I(inode);
- struct ext4_inode_info *tmp_ei = EXT4_I(tmp_inode);
-@@ -342,7 +342,9 @@ static int ext4_ext_swap_inode_data(handle_t *handle, struct inode *inode,
- * i_blocks when freeing the indirect meta-data blocks
- */
- retval = free_ind_block(handle, inode, i_data);
-- ext4_mark_inode_dirty(handle, inode);
-+ retval2 = ext4_mark_inode_dirty(handle, inode);
-+ if (unlikely(retval2 && !retval))
-+ retval = retval2;
-
- err_out:
- return retval;
-@@ -601,7 +603,7 @@ int ext4_ind_migrate(struct inode *inode)
- ext4_lblk_t start, end;
- ext4_fsblk_t blk;
- handle_t *handle;
-- int ret;
-+ int ret, ret2 = 0;
-
- if (!ext4_has_feature_extents(inode->i_sb) ||
- (!ext4_test_inode_flag(inode, EXT4_INODE_EXTENTS)))
-@@ -655,7 +657,9 @@ int ext4_ind_migrate(struct inode *inode)
- memset(ei->i_data, 0, sizeof(ei->i_data));
- for (i = start; i <= end; i++)
- ei->i_data[i] = cpu_to_le32(blk++);
-- ext4_mark_inode_dirty(handle, inode);
-+ ret2 = ext4_mark_inode_dirty(handle, inode);
-+ if (unlikely(ret2 && !ret))
-+ ret = ret2;
- errout:
- ext4_journal_stop(handle);
- up_write(&EXT4_I(inode)->i_data_sem);
-diff --git a/fs/ext4/namei.c b/fs/ext4/namei.c
-index a8aca4772aaa..56738b538ddf 100644
---- a/fs/ext4/namei.c
-+++ b/fs/ext4/namei.c
-@@ -1993,7 +1993,7 @@ static int add_dirent_to_buf(handle_t *handle, struct ext4_filename *fname,
- {
- unsigned int blocksize = dir->i_sb->s_blocksize;
- int csum_size = 0;
-- int err;
-+ int err, err2;
-
- if (ext4_has_metadata_csum(inode->i_sb))
- csum_size = sizeof(struct ext4_dir_entry_tail);
-@@ -2028,12 +2028,12 @@ static int add_dirent_to_buf(handle_t *handle, struct ext4_filename *fname,
- dir->i_mtime = dir->i_ctime = current_time(dir);
- ext4_update_dx_flag(dir);
- inode_inc_iversion(dir);
-- ext4_mark_inode_dirty(handle, dir);
-+ err2 = ext4_mark_inode_dirty(handle, dir);
- BUFFER_TRACE(bh, "call ext4_handle_dirty_metadata");
- err = ext4_handle_dirty_dirblock(handle, dir, bh);
- if (err)
- ext4_std_error(dir->i_sb, err);
-- return 0;
-+ return err ? err : err2;
- }
-
- /*
-@@ -2223,7 +2223,9 @@ static int ext4_add_entry(handle_t *handle, struct dentry *dentry,
- }
- ext4_clear_inode_flag(dir, EXT4_INODE_INDEX);
- dx_fallback++;
-- ext4_mark_inode_dirty(handle, dir);
-+ retval = ext4_mark_inode_dirty(handle, dir);
-+ if (unlikely(retval))
-+ goto out;
- }
- blocks = dir->i_size >> sb->s_blocksize_bits;
- for (block = 0; block < blocks; block++) {
-@@ -2576,12 +2578,12 @@ static int ext4_add_nondir(handle_t *handle,
- struct inode *inode = *inodep;
- int err = ext4_add_entry(handle, dentry, inode);
- if (!err) {
-- ext4_mark_inode_dirty(handle, inode);
-+ err = ext4_mark_inode_dirty(handle, inode);
- if (IS_DIRSYNC(dir))
- ext4_handle_sync(handle);
- d_instantiate_new(dentry, inode);
- *inodep = NULL;
-- return 0;
-+ return err;
- }
- drop_nlink(inode);
- ext4_orphan_add(handle, inode);
-@@ -2775,7 +2777,7 @@ static int ext4_mkdir(struct inode *dir, struct dentry *dentry, umode_t mode)
- {
- handle_t *handle;
- struct inode *inode;
-- int err, credits, retries = 0;
-+ int err, err2 = 0, credits, retries = 0;
-
- if (EXT4_DIR_LINK_MAX(dir))
- return -EMLINK;
-@@ -2808,7 +2810,9 @@ out_clear_inode:
- clear_nlink(inode);
- ext4_orphan_add(handle, inode);
- unlock_new_inode(inode);
-- ext4_mark_inode_dirty(handle, inode);
-+ err2 = ext4_mark_inode_dirty(handle, inode);
-+ if (unlikely(err2))
-+ err = err2;
- ext4_journal_stop(handle);
- iput(inode);
- goto out_retry;
-@@ -3148,10 +3152,12 @@ static int ext4_rmdir(struct inode *dir, struct dentry *dentry)
- inode->i_size = 0;
- ext4_orphan_add(handle, inode);
- inode->i_ctime = dir->i_ctime = dir->i_mtime = current_time(inode);
-- ext4_mark_inode_dirty(handle, inode);
-+ retval = ext4_mark_inode_dirty(handle, inode);
-+ if (retval)
-+ goto end_rmdir;
- ext4_dec_count(handle, dir);
- ext4_update_dx_flag(dir);
-- ext4_mark_inode_dirty(handle, dir);
-+ retval = ext4_mark_inode_dirty(handle, dir);
-
- #ifdef CONFIG_UNICODE
- /* VFS negative dentries are incompatible with Encoding and
-@@ -3221,7 +3227,9 @@ static int ext4_unlink(struct inode *dir, struct dentry *dentry)
- goto end_unlink;
- dir->i_ctime = dir->i_mtime = current_time(dir);
- ext4_update_dx_flag(dir);
-- ext4_mark_inode_dirty(handle, dir);
-+ retval = ext4_mark_inode_dirty(handle, dir);
-+ if (retval)
-+ goto end_unlink;
- if (inode->i_nlink == 0)
- ext4_warning_inode(inode, "Deleting file '%.*s' with no links",
- dentry->d_name.len, dentry->d_name.name);
-@@ -3230,7 +3238,7 @@ static int ext4_unlink(struct inode *dir, struct dentry *dentry)
- if (!inode->i_nlink)
- ext4_orphan_add(handle, inode);
- inode->i_ctime = current_time(inode);
-- ext4_mark_inode_dirty(handle, inode);
-+ retval = ext4_mark_inode_dirty(handle, inode);
-
- #ifdef CONFIG_UNICODE
- /* VFS negative dentries are incompatible with Encoding and
-@@ -3419,7 +3427,7 @@ retry:
-
- err = ext4_add_entry(handle, dentry, inode);
- if (!err) {
-- ext4_mark_inode_dirty(handle, inode);
-+ err = ext4_mark_inode_dirty(handle, inode);
- /* this can happen only for tmpfile being
- * linked the first time
- */
-@@ -3531,7 +3539,7 @@ static int ext4_rename_dir_finish(handle_t *handle, struct ext4_renament *ent,
- static int ext4_setent(handle_t *handle, struct ext4_renament *ent,
- unsigned ino, unsigned file_type)
- {
-- int retval;
-+ int retval, retval2;
-
- BUFFER_TRACE(ent->bh, "get write access");
- retval = ext4_journal_get_write_access(handle, ent->bh);
-@@ -3543,19 +3551,19 @@ static int ext4_setent(handle_t *handle, struct ext4_renament *ent,
- inode_inc_iversion(ent->dir);
- ent->dir->i_ctime = ent->dir->i_mtime =
- current_time(ent->dir);
-- ext4_mark_inode_dirty(handle, ent->dir);
-+ retval = ext4_mark_inode_dirty(handle, ent->dir);
- BUFFER_TRACE(ent->bh, "call ext4_handle_dirty_metadata");
- if (!ent->inlined) {
-- retval = ext4_handle_dirty_dirblock(handle, ent->dir, ent->bh);
-- if (unlikely(retval)) {
-- ext4_std_error(ent->dir->i_sb, retval);
-- return retval;
-+ retval2 = ext4_handle_dirty_dirblock(handle, ent->dir, ent->bh);
-+ if (unlikely(retval2)) {
-+ ext4_std_error(ent->dir->i_sb, retval2);
-+ return retval2;
- }
- }
- brelse(ent->bh);
- ent->bh = NULL;
-
-- return 0;
-+ return retval;
- }
-
- static int ext4_find_delete_entry(handle_t *handle, struct inode *dir,
-@@ -3790,7 +3798,9 @@ static int ext4_rename(struct inode *old_dir, struct dentry *old_dentry,
- EXT4_FT_CHRDEV);
- if (retval)
- goto end_rename;
-- ext4_mark_inode_dirty(handle, whiteout);
-+ retval = ext4_mark_inode_dirty(handle, whiteout);
-+ if (unlikely(retval))
-+ goto end_rename;
- }
- if (!new.bh) {
- retval = ext4_add_entry(handle, new.dentry, old.inode);
-@@ -3811,7 +3821,9 @@ static int ext4_rename(struct inode *old_dir, struct dentry *old_dentry,
- * rename.
- */
- old.inode->i_ctime = current_time(old.inode);
-- ext4_mark_inode_dirty(handle, old.inode);
-+ retval = ext4_mark_inode_dirty(handle, old.inode);
-+ if (unlikely(retval))
-+ goto end_rename;
-
- if (!whiteout) {
- /*
-@@ -3840,12 +3852,18 @@ static int ext4_rename(struct inode *old_dir, struct dentry *old_dentry,
- } else {
- ext4_inc_count(handle, new.dir);
- ext4_update_dx_flag(new.dir);
-- ext4_mark_inode_dirty(handle, new.dir);
-+ retval = ext4_mark_inode_dirty(handle, new.dir);
-+ if (unlikely(retval))
-+ goto end_rename;
- }
- }
-- ext4_mark_inode_dirty(handle, old.dir);
-+ retval = ext4_mark_inode_dirty(handle, old.dir);
-+ if (unlikely(retval))
-+ goto end_rename;
- if (new.inode) {
-- ext4_mark_inode_dirty(handle, new.inode);
-+ retval = ext4_mark_inode_dirty(handle, new.inode);
-+ if (unlikely(retval))
-+ goto end_rename;
- if (!new.inode->i_nlink)
- ext4_orphan_add(handle, new.inode);
- }
-@@ -3979,8 +3997,12 @@ static int ext4_cross_rename(struct inode *old_dir, struct dentry *old_dentry,
- ctime = current_time(old.inode);
- old.inode->i_ctime = ctime;
- new.inode->i_ctime = ctime;
-- ext4_mark_inode_dirty(handle, old.inode);
-- ext4_mark_inode_dirty(handle, new.inode);
-+ retval = ext4_mark_inode_dirty(handle, old.inode);
-+ if (unlikely(retval))
-+ goto end_rename;
-+ retval = ext4_mark_inode_dirty(handle, new.inode);
-+ if (unlikely(retval))
-+ goto end_rename;
-
- if (old.dir_bh) {
- retval = ext4_rename_dir_finish(handle, &old, new.dir->i_ino);
-diff --git a/fs/ext4/super.c b/fs/ext4/super.c
-index bf5fcb477f66..7318ca71b69e 100644
---- a/fs/ext4/super.c
-+++ b/fs/ext4/super.c
-@@ -522,9 +522,6 @@ static void ext4_handle_error(struct super_block *sb)
- smp_wmb();
- sb->s_flags |= SB_RDONLY;
- } else if (test_opt(sb, ERRORS_PANIC)) {
-- if (EXT4_SB(sb)->s_journal &&
-- !(EXT4_SB(sb)->s_journal->j_flags & JBD2_REC_ERR))
-- return;
- panic("EXT4-fs (device %s): panic forced after error\n",
- sb->s_id);
- }
-@@ -725,23 +722,20 @@ void __ext4_abort(struct super_block *sb, const char *function,
- va_end(args);
-
- if (sb_rdonly(sb) == 0) {
-- ext4_msg(sb, KERN_CRIT, "Remounting filesystem read-only");
- EXT4_SB(sb)->s_mount_flags |= EXT4_MF_FS_ABORTED;
-+ if (EXT4_SB(sb)->s_journal)
-+ jbd2_journal_abort(EXT4_SB(sb)->s_journal, -EIO);
-+
-+ ext4_msg(sb, KERN_CRIT, "Remounting filesystem read-only");
- /*
- * Make sure updated value of ->s_mount_flags will be visible
- * before ->s_flags update
- */
- smp_wmb();
- sb->s_flags |= SB_RDONLY;
-- if (EXT4_SB(sb)->s_journal)
-- jbd2_journal_abort(EXT4_SB(sb)->s_journal, -EIO);
- }
-- if (test_opt(sb, ERRORS_PANIC) && !system_going_down()) {
-- if (EXT4_SB(sb)->s_journal &&
-- !(EXT4_SB(sb)->s_journal->j_flags & JBD2_REC_ERR))
-- return;
-+ if (test_opt(sb, ERRORS_PANIC) && !system_going_down())
- panic("EXT4-fs panic from previous error\n");
-- }
- }
-
- void __ext4_msg(struct super_block *sb,
-@@ -2086,6 +2080,16 @@ static int handle_mount_opt(struct super_block *sb, char *opt, int token,
- #endif
- } else if (token == Opt_dax) {
- #ifdef CONFIG_FS_DAX
-+ if (is_remount && test_opt(sb, DAX)) {
-+ ext4_msg(sb, KERN_ERR, "can't mount with "
-+ "both data=journal and dax");
-+ return -1;
-+ }
-+ if (is_remount && !(sbi->s_mount_opt & EXT4_MOUNT_DAX)) {
-+ ext4_msg(sb, KERN_ERR, "can't change "
-+ "dax mount option while remounting");
-+ return -1;
-+ }
- ext4_msg(sb, KERN_WARNING,
- "DAX enabled. Warning: EXPERIMENTAL, use at your own risk");
- sbi->s_mount_opt |= m->mount_opt;
-@@ -2344,6 +2348,7 @@ static int ext4_setup_super(struct super_block *sb, struct ext4_super_block *es,
- ext4_msg(sb, KERN_ERR, "revision level too high, "
- "forcing read-only mode");
- err = -EROFS;
-+ goto done;
- }
- if (read_only)
- goto done;
-@@ -5412,12 +5417,6 @@ static int ext4_remount(struct super_block *sb, int *flags, char *data)
- err = -EINVAL;
- goto restore_opts;
- }
-- if (test_opt(sb, DAX)) {
-- ext4_msg(sb, KERN_ERR, "can't mount with "
-- "both data=journal and dax");
-- err = -EINVAL;
-- goto restore_opts;
-- }
- } else if (test_opt(sb, DATA_FLAGS) == EXT4_MOUNT_ORDERED_DATA) {
- if (test_opt(sb, JOURNAL_ASYNC_COMMIT)) {
- ext4_msg(sb, KERN_ERR, "can't mount with "
-@@ -5433,12 +5432,6 @@ static int ext4_remount(struct super_block *sb, int *flags, char *data)
- goto restore_opts;
- }
-
-- if ((sbi->s_mount_opt ^ old_opts.s_mount_opt) & EXT4_MOUNT_DAX) {
-- ext4_msg(sb, KERN_WARNING, "warning: refusing change of "
-- "dax flag with busy inodes while remounting");
-- sbi->s_mount_opt ^= EXT4_MOUNT_DAX;
-- }
--
- if (sbi->s_mount_flags & EXT4_MF_FS_ABORTED)
- ext4_abort(sb, EXT4_ERR_ESHUTDOWN, "Abort forced by user");
-
-@@ -5885,7 +5878,7 @@ static int ext4_quota_on(struct super_block *sb, int type, int format_id,
- EXT4_I(inode)->i_flags |= EXT4_NOATIME_FL | EXT4_IMMUTABLE_FL;
- inode_set_flags(inode, S_NOATIME | S_IMMUTABLE,
- S_NOATIME | S_IMMUTABLE);
-- ext4_mark_inode_dirty(handle, inode);
-+ err = ext4_mark_inode_dirty(handle, inode);
- ext4_journal_stop(handle);
- unlock_inode:
- inode_unlock(inode);
-@@ -5987,12 +5980,14 @@ static int ext4_quota_off(struct super_block *sb, int type)
- * this is not a hard failure and quotas are already disabled.
- */
- handle = ext4_journal_start(inode, EXT4_HT_QUOTA, 1);
-- if (IS_ERR(handle))
-+ if (IS_ERR(handle)) {
-+ err = PTR_ERR(handle);
- goto out_unlock;
-+ }
- EXT4_I(inode)->i_flags &= ~(EXT4_NOATIME_FL | EXT4_IMMUTABLE_FL);
- inode_set_flags(inode, 0, S_NOATIME | S_IMMUTABLE);
- inode->i_mtime = inode->i_ctime = current_time(inode);
-- ext4_mark_inode_dirty(handle, inode);
-+ err = ext4_mark_inode_dirty(handle, inode);
- ext4_journal_stop(handle);
- out_unlock:
- inode_unlock(inode);
-@@ -6050,7 +6045,7 @@ static ssize_t ext4_quota_write(struct super_block *sb, int type,
- {
- struct inode *inode = sb_dqopt(sb)->files[type];
- ext4_lblk_t blk = off >> EXT4_BLOCK_SIZE_BITS(sb);
-- int err, offset = off & (sb->s_blocksize - 1);
-+ int err = 0, err2 = 0, offset = off & (sb->s_blocksize - 1);
- int retries = 0;
- struct buffer_head *bh;
- handle_t *handle = journal_current_handle();
-@@ -6098,9 +6093,11 @@ out:
- if (inode->i_size < off + len) {
- i_size_write(inode, off + len);
- EXT4_I(inode)->i_disksize = inode->i_size;
-- ext4_mark_inode_dirty(handle, inode);
-+ err2 = ext4_mark_inode_dirty(handle, inode);
-+ if (unlikely(err2 && !err))
-+ err = err2;
- }
-- return len;
-+ return err ? err : len;
- }
- #endif
-
-diff --git a/fs/ext4/xattr.c b/fs/ext4/xattr.c
-index 01ba66373e97..9b29a40738ac 100644
---- a/fs/ext4/xattr.c
-+++ b/fs/ext4/xattr.c
-@@ -1327,7 +1327,7 @@ static int ext4_xattr_inode_write(handle_t *handle, struct inode *ea_inode,
- int blocksize = ea_inode->i_sb->s_blocksize;
- int max_blocks = (bufsize + blocksize - 1) >> ea_inode->i_blkbits;
- int csize, wsize = 0;
-- int ret = 0;
-+ int ret = 0, ret2 = 0;
- int retries = 0;
-
- retry:
-@@ -1385,7 +1385,9 @@ retry:
- ext4_update_i_disksize(ea_inode, wsize);
- inode_unlock(ea_inode);
-
-- ext4_mark_inode_dirty(handle, ea_inode);
-+ ret2 = ext4_mark_inode_dirty(handle, ea_inode);
-+ if (unlikely(ret2 && !ret))
-+ ret = ret2;
-
- out:
- brelse(bh);
-diff --git a/fs/f2fs/checkpoint.c b/fs/f2fs/checkpoint.c
-index 852890b72d6a..448b3dc6f925 100644
---- a/fs/f2fs/checkpoint.c
-+++ b/fs/f2fs/checkpoint.c
-@@ -889,8 +889,8 @@ int f2fs_get_valid_checkpoint(struct f2fs_sb_info *sbi)
- int i;
- int err;
-
-- sbi->ckpt = f2fs_kzalloc(sbi, array_size(blk_size, cp_blks),
-- GFP_KERNEL);
-+ sbi->ckpt = f2fs_kvzalloc(sbi, array_size(blk_size, cp_blks),
-+ GFP_KERNEL);
- if (!sbi->ckpt)
- return -ENOMEM;
- /*
-diff --git a/fs/f2fs/compress.c b/fs/f2fs/compress.c
-index df7b2d15eacd..a5b2e72174bb 100644
---- a/fs/f2fs/compress.c
-+++ b/fs/f2fs/compress.c
-@@ -236,7 +236,12 @@ static int lz4_init_compress_ctx(struct compress_ctx *cc)
- if (!cc->private)
- return -ENOMEM;
-
-- cc->clen = LZ4_compressBound(PAGE_SIZE << cc->log_cluster_size);
-+ /*
-+ * we do not change cc->clen to LZ4_compressBound(inputsize) to
-+ * adapt worst compress case, because lz4 compressor can handle
-+ * output budget properly.
-+ */
-+ cc->clen = cc->rlen - PAGE_SIZE - COMPRESS_HEADER_SIZE;
- return 0;
- }
-
-@@ -252,11 +257,9 @@ static int lz4_compress_pages(struct compress_ctx *cc)
-
- len = LZ4_compress_default(cc->rbuf, cc->cbuf->cdata, cc->rlen,
- cc->clen, cc->private);
-- if (!len) {
-- printk_ratelimited("%sF2FS-fs (%s): lz4 compress failed\n",
-- KERN_ERR, F2FS_I_SB(cc->inode)->sb->s_id);
-- return -EIO;
-- }
-+ if (!len)
-+ return -EAGAIN;
-+
- cc->clen = len;
- return 0;
- }
-@@ -366,6 +369,13 @@ static int zstd_compress_pages(struct compress_ctx *cc)
- return -EIO;
- }
-
-+ /*
-+ * there is compressed data remained in intermediate buffer due to
-+ * no more space in cbuf.cdata
-+ */
-+ if (ret)
-+ return -EAGAIN;
-+
- cc->clen = outbuf.pos;
- return 0;
- }
-diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
-index cdf2f626bea7..10491ae1cb85 100644
---- a/fs/f2fs/data.c
-+++ b/fs/f2fs/data.c
-@@ -2130,16 +2130,16 @@ submit_and_realloc:
- page->index, for_write);
- if (IS_ERR(bio)) {
- ret = PTR_ERR(bio);
-- bio = NULL;
- dic->failed = true;
- if (refcount_sub_and_test(dic->nr_cpages - i,
-- &dic->ref))
-+ &dic->ref)) {
- f2fs_decompress_end_io(dic->rpages,
- cc->cluster_size, true,
- false);
-- f2fs_free_dic(dic);
-+ f2fs_free_dic(dic);
-+ }
- f2fs_put_dnode(&dn);
-- *bio_ret = bio;
-+ *bio_ret = NULL;
- return ret;
- }
- }
-diff --git a/fs/f2fs/dir.c b/fs/f2fs/dir.c
-index 44bfc464df78..54e90dbb09e7 100644
---- a/fs/f2fs/dir.c
-+++ b/fs/f2fs/dir.c
-@@ -107,36 +107,28 @@ static struct f2fs_dir_entry *find_in_block(struct inode *dir,
- /*
- * Test whether a case-insensitive directory entry matches the filename
- * being searched for.
-- *
-- * Returns: 0 if the directory entry matches, more than 0 if it
-- * doesn't match or less than zero on error.
- */
--int f2fs_ci_compare(const struct inode *parent, const struct qstr *name,
-- const struct qstr *entry, bool quick)
-+static bool f2fs_match_ci_name(const struct inode *dir, const struct qstr *name,
-+ const struct qstr *entry, bool quick)
- {
-- const struct f2fs_sb_info *sbi = F2FS_SB(parent->i_sb);
-+ const struct f2fs_sb_info *sbi = F2FS_SB(dir->i_sb);
- const struct unicode_map *um = sbi->s_encoding;
-- int ret;
-+ int res;
-
- if (quick)
-- ret = utf8_strncasecmp_folded(um, name, entry);
-+ res = utf8_strncasecmp_folded(um, name, entry);
- else
-- ret = utf8_strncasecmp(um, name, entry);
--
-- if (ret < 0) {
-- /* Handle invalid character sequence as either an error
-- * or as an opaque byte sequence.
-+ res = utf8_strncasecmp(um, name, entry);
-+ if (res < 0) {
-+ /*
-+ * In strict mode, ignore invalid names. In non-strict mode,
-+ * fall back to treating them as opaque byte sequences.
- */
-- if (f2fs_has_strict_mode(sbi))
-- return -EINVAL;
--
-- if (name->len != entry->len)
-- return 1;
--
-- return !!memcmp(name->name, entry->name, name->len);
-+ if (f2fs_has_strict_mode(sbi) || name->len != entry->len)
-+ return false;
-+ return !memcmp(name->name, entry->name, name->len);
- }
--
-- return ret;
-+ return res == 0;
- }
-
- static void f2fs_fname_setup_ci_filename(struct inode *dir,
-@@ -188,10 +180,10 @@ static inline bool f2fs_match_name(struct f2fs_dentry_ptr *d,
- if (cf_str->name) {
- struct qstr cf = {.name = cf_str->name,
- .len = cf_str->len};
-- return !f2fs_ci_compare(parent, &cf, &entry, true);
-+ return f2fs_match_ci_name(parent, &cf, &entry, true);
- }
-- return !f2fs_ci_compare(parent, fname->usr_fname, &entry,
-- false);
-+ return f2fs_match_ci_name(parent, fname->usr_fname, &entry,
-+ false);
- }
- #endif
- if (fscrypt_match_name(fname, d->filename[bit_pos],
-@@ -1080,17 +1072,41 @@ const struct file_operations f2fs_dir_operations = {
- static int f2fs_d_compare(const struct dentry *dentry, unsigned int len,
- const char *str, const struct qstr *name)
- {
-- struct qstr qstr = {.name = str, .len = len };
- const struct dentry *parent = READ_ONCE(dentry->d_parent);
-- const struct inode *inode = READ_ONCE(parent->d_inode);
-+ const struct inode *dir = READ_ONCE(parent->d_inode);
-+ const struct f2fs_sb_info *sbi = F2FS_SB(dentry->d_sb);
-+ struct qstr entry = QSTR_INIT(str, len);
-+ char strbuf[DNAME_INLINE_LEN];
-+ int res;
-+
-+ if (!dir || !IS_CASEFOLDED(dir))
-+ goto fallback;
-
-- if (!inode || !IS_CASEFOLDED(inode)) {
-- if (len != name->len)
-- return -1;
-- return memcmp(str, name->name, len);
-+ /*
-+ * If the dentry name is stored in-line, then it may be concurrently
-+ * modified by a rename. If this happens, the VFS will eventually retry
-+ * the lookup, so it doesn't matter what ->d_compare() returns.
-+ * However, it's unsafe to call utf8_strncasecmp() with an unstable
-+ * string. Therefore, we have to copy the name into a temporary buffer.
-+ */
-+ if (len <= DNAME_INLINE_LEN - 1) {
-+ memcpy(strbuf, str, len);
-+ strbuf[len] = 0;
-+ entry.name = strbuf;
-+ /* prevent compiler from optimizing out the temporary buffer */
-+ barrier();
- }
-
-- return f2fs_ci_compare(inode, name, &qstr, false);
-+ res = utf8_strncasecmp(sbi->s_encoding, name, &entry);
-+ if (res >= 0)
-+ return res;
-+
-+ if (f2fs_has_strict_mode(sbi))
-+ return -EINVAL;
-+fallback:
-+ if (len != name->len)
-+ return 1;
-+ return !!memcmp(str, name->name, len);
- }
-
- static int f2fs_d_hash(const struct dentry *dentry, struct qstr *str)
-diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
-index 7c5dd7f666a0..5a0f95dfbac2 100644
---- a/fs/f2fs/f2fs.h
-+++ b/fs/f2fs/f2fs.h
-@@ -2936,18 +2936,12 @@ static inline bool f2fs_may_extent_tree(struct inode *inode)
- static inline void *f2fs_kmalloc(struct f2fs_sb_info *sbi,
- size_t size, gfp_t flags)
- {
-- void *ret;
--
- if (time_to_inject(sbi, FAULT_KMALLOC)) {
- f2fs_show_injection_info(sbi, FAULT_KMALLOC);
- return NULL;
- }
-
-- ret = kmalloc(size, flags);
-- if (ret)
-- return ret;
--
-- return kvmalloc(size, flags);
-+ return kmalloc(size, flags);
- }
-
- static inline void *f2fs_kzalloc(struct f2fs_sb_info *sbi,
-@@ -3107,11 +3101,6 @@ int f2fs_update_extension_list(struct f2fs_sb_info *sbi, const char *name,
- bool hot, bool set);
- struct dentry *f2fs_get_parent(struct dentry *child);
-
--extern int f2fs_ci_compare(const struct inode *parent,
-- const struct qstr *name,
-- const struct qstr *entry,
-- bool quick);
--
- /*
- * dir.c
- */
-@@ -3656,7 +3645,7 @@ static inline int f2fs_build_stats(struct f2fs_sb_info *sbi) { return 0; }
- static inline void f2fs_destroy_stats(struct f2fs_sb_info *sbi) { }
- static inline void __init f2fs_create_root_stats(void) { }
- static inline void f2fs_destroy_root_stats(void) { }
--static inline void update_sit_info(struct f2fs_sb_info *sbi) {}
-+static inline void f2fs_update_sit_info(struct f2fs_sb_info *sbi) {}
- #endif
-
- extern const struct file_operations f2fs_dir_operations;
-diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c
-index 6ab8f621a3c5..30b35915fa3a 100644
---- a/fs/f2fs/file.c
-+++ b/fs/f2fs/file.c
-@@ -2219,8 +2219,15 @@ static int f2fs_ioc_shutdown(struct file *filp, unsigned long arg)
-
- if (in != F2FS_GOING_DOWN_FULLSYNC) {
- ret = mnt_want_write_file(filp);
-- if (ret)
-+ if (ret) {
-+ if (ret == -EROFS) {
-+ ret = 0;
-+ f2fs_stop_checkpoint(sbi, false);
-+ set_sbi_flag(sbi, SBI_IS_SHUTDOWN);
-+ trace_f2fs_shutdown(sbi, in, ret);
-+ }
- return ret;
-+ }
- }
-
- switch (in) {
-diff --git a/fs/f2fs/node.c b/fs/f2fs/node.c
-index ecbd6bd14a49..daf531e69b67 100644
---- a/fs/f2fs/node.c
-+++ b/fs/f2fs/node.c
-@@ -2928,7 +2928,7 @@ static int __get_nat_bitmaps(struct f2fs_sb_info *sbi)
- return 0;
-
- nm_i->nat_bits_blocks = F2FS_BLK_ALIGN((nat_bits_bytes << 1) + 8);
-- nm_i->nat_bits = f2fs_kzalloc(sbi,
-+ nm_i->nat_bits = f2fs_kvzalloc(sbi,
- nm_i->nat_bits_blocks << F2FS_BLKSIZE_BITS, GFP_KERNEL);
- if (!nm_i->nat_bits)
- return -ENOMEM;
-@@ -3061,9 +3061,9 @@ static int init_free_nid_cache(struct f2fs_sb_info *sbi)
- int i;
-
- nm_i->free_nid_bitmap =
-- f2fs_kzalloc(sbi, array_size(sizeof(unsigned char *),
-- nm_i->nat_blocks),
-- GFP_KERNEL);
-+ f2fs_kvzalloc(sbi, array_size(sizeof(unsigned char *),
-+ nm_i->nat_blocks),
-+ GFP_KERNEL);
- if (!nm_i->free_nid_bitmap)
- return -ENOMEM;
-
-diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
-index 56ccb8323e21..4696c9cb47a5 100644
---- a/fs/f2fs/super.c
-+++ b/fs/f2fs/super.c
-@@ -1303,7 +1303,8 @@ static int f2fs_statfs_project(struct super_block *sb,
- limit >>= sb->s_blocksize_bits;
-
- if (limit && buf->f_blocks > limit) {
-- curblock = dquot->dq_dqb.dqb_curspace >> sb->s_blocksize_bits;
-+ curblock = (dquot->dq_dqb.dqb_curspace +
-+ dquot->dq_dqb.dqb_rsvspace) >> sb->s_blocksize_bits;
- buf->f_blocks = limit;
- buf->f_bfree = buf->f_bavail =
- (buf->f_blocks > curblock) ?
-@@ -3038,7 +3039,7 @@ static int init_blkz_info(struct f2fs_sb_info *sbi, int devi)
- if (nr_sectors & (bdev_zone_sectors(bdev) - 1))
- FDEV(devi).nr_blkz++;
-
-- FDEV(devi).blkz_seq = f2fs_kzalloc(sbi,
-+ FDEV(devi).blkz_seq = f2fs_kvzalloc(sbi,
- BITS_TO_LONGS(FDEV(devi).nr_blkz)
- * sizeof(unsigned long),
- GFP_KERNEL);
-diff --git a/fs/fuse/dev.c b/fs/fuse/dev.c
-index 97eec7522bf2..5c155437a455 100644
---- a/fs/fuse/dev.c
-+++ b/fs/fuse/dev.c
-@@ -1977,8 +1977,9 @@ static ssize_t fuse_dev_splice_write(struct pipe_inode_info *pipe,
- struct pipe_buffer *ibuf;
- struct pipe_buffer *obuf;
-
-- BUG_ON(nbuf >= pipe->ring_size);
-- BUG_ON(tail == head);
-+ if (WARN_ON(nbuf >= count || tail == head))
-+ goto out_free;
-+
- ibuf = &pipe->bufs[tail & mask];
- obuf = &bufs[nbuf];
-
-diff --git a/fs/fuse/file.c b/fs/fuse/file.c
-index 9d67b830fb7a..e3afceecaa6b 100644
---- a/fs/fuse/file.c
-+++ b/fs/fuse/file.c
-@@ -712,6 +712,7 @@ static ssize_t fuse_async_req_send(struct fuse_conn *fc,
- spin_unlock(&io->lock);
-
- ia->ap.args.end = fuse_aio_complete_req;
-+ ia->ap.args.may_block = io->should_dirty;
- err = fuse_simple_background(fc, &ia->ap.args, GFP_KERNEL);
- if (err)
- fuse_aio_complete_req(fc, &ia->ap.args, err);
-@@ -3279,13 +3280,11 @@ static ssize_t __fuse_copy_file_range(struct file *file_in, loff_t pos_in,
- if (file_inode(file_in)->i_sb != file_inode(file_out)->i_sb)
- return -EXDEV;
-
-- if (fc->writeback_cache) {
-- inode_lock(inode_in);
-- err = fuse_writeback_range(inode_in, pos_in, pos_in + len);
-- inode_unlock(inode_in);
-- if (err)
-- return err;
-- }
-+ inode_lock(inode_in);
-+ err = fuse_writeback_range(inode_in, pos_in, pos_in + len - 1);
-+ inode_unlock(inode_in);
-+ if (err)
-+ return err;
-
- inode_lock(inode_out);
-
-@@ -3293,11 +3292,27 @@ static ssize_t __fuse_copy_file_range(struct file *file_in, loff_t pos_in,
- if (err)
- goto out;
-
-- if (fc->writeback_cache) {
-- err = fuse_writeback_range(inode_out, pos_out, pos_out + len);
-- if (err)
-- goto out;
-- }
-+ /*
-+ * Write out dirty pages in the destination file before sending the COPY
-+ * request to userspace. After the request is completed, truncate off
-+ * pages (including partial ones) from the cache that have been copied,
-+ * since these contain stale data at that point.
-+ *
-+ * This should be mostly correct, but if the COPY writes to partial
-+ * pages (at the start or end) and the parts not covered by the COPY are
-+ * written through a memory map after calling fuse_writeback_range(),
-+ * then these partial page modifications will be lost on truncation.
-+ *
-+ * It is unlikely that someone would rely on such mixed style
-+ * modifications. Yet this does give less guarantees than if the
-+ * copying was performed with write(2).
-+ *
-+ * To fix this a i_mmap_sem style lock could be used to prevent new
-+ * faults while the copy is ongoing.
-+ */
-+ err = fuse_writeback_range(inode_out, pos_out, pos_out + len - 1);
-+ if (err)
-+ goto out;
-
- if (is_unstable)
- set_bit(FUSE_I_SIZE_UNSTABLE, &fi_out->state);
-@@ -3318,6 +3333,10 @@ static ssize_t __fuse_copy_file_range(struct file *file_in, loff_t pos_in,
- if (err)
- goto out;
-
-+ truncate_inode_pages_range(inode_out->i_mapping,
-+ ALIGN_DOWN(pos_out, PAGE_SIZE),
-+ ALIGN(pos_out + outarg.size, PAGE_SIZE) - 1);
-+
- if (fc->writeback_cache) {
- fuse_write_update_size(inode_out, pos_out + outarg.size);
- file_update_time(file_out);
-diff --git a/fs/fuse/fuse_i.h b/fs/fuse/fuse_i.h
-index ca344bf71404..d7cde216fc87 100644
---- a/fs/fuse/fuse_i.h
-+++ b/fs/fuse/fuse_i.h
-@@ -249,6 +249,7 @@ struct fuse_args {
- bool out_argvar:1;
- bool page_zeroing:1;
- bool page_replace:1;
-+ bool may_block:1;
- struct fuse_in_arg in_args[3];
- struct fuse_arg out_args[2];
- void (*end)(struct fuse_conn *fc, struct fuse_args *args, int error);
-diff --git a/fs/fuse/virtio_fs.c b/fs/fuse/virtio_fs.c
-index bade74768903..0c6ef5d3c6ab 100644
---- a/fs/fuse/virtio_fs.c
-+++ b/fs/fuse/virtio_fs.c
-@@ -60,6 +60,12 @@ struct virtio_fs_forget {
- struct virtio_fs_forget_req req;
- };
-
-+struct virtio_fs_req_work {
-+ struct fuse_req *req;
-+ struct virtio_fs_vq *fsvq;
-+ struct work_struct done_work;
-+};
-+
- static int virtio_fs_enqueue_req(struct virtio_fs_vq *fsvq,
- struct fuse_req *req, bool in_flight);
-
-@@ -485,19 +491,67 @@ static void copy_args_from_argbuf(struct fuse_args *args, struct fuse_req *req)
- }
-
- /* Work function for request completion */
-+static void virtio_fs_request_complete(struct fuse_req *req,
-+ struct virtio_fs_vq *fsvq)
-+{
-+ struct fuse_pqueue *fpq = &fsvq->fud->pq;
-+ struct fuse_conn *fc = fsvq->fud->fc;
-+ struct fuse_args *args;
-+ struct fuse_args_pages *ap;
-+ unsigned int len, i, thislen;
-+ struct page *page;
-+
-+ /*
-+ * TODO verify that server properly follows FUSE protocol
-+ * (oh.uniq, oh.len)
-+ */
-+ args = req->args;
-+ copy_args_from_argbuf(args, req);
-+
-+ if (args->out_pages && args->page_zeroing) {
-+ len = args->out_args[args->out_numargs - 1].size;
-+ ap = container_of(args, typeof(*ap), args);
-+ for (i = 0; i < ap->num_pages; i++) {
-+ thislen = ap->descs[i].length;
-+ if (len < thislen) {
-+ WARN_ON(ap->descs[i].offset);
-+ page = ap->pages[i];
-+ zero_user_segment(page, len, thislen);
-+ len = 0;
-+ } else {
-+ len -= thislen;
-+ }
-+ }
-+ }
-+
-+ spin_lock(&fpq->lock);
-+ clear_bit(FR_SENT, &req->flags);
-+ spin_unlock(&fpq->lock);
-+
-+ fuse_request_end(fc, req);
-+ spin_lock(&fsvq->lock);
-+ dec_in_flight_req(fsvq);
-+ spin_unlock(&fsvq->lock);
-+}
-+
-+static void virtio_fs_complete_req_work(struct work_struct *work)
-+{
-+ struct virtio_fs_req_work *w =
-+ container_of(work, typeof(*w), done_work);
-+
-+ virtio_fs_request_complete(w->req, w->fsvq);
-+ kfree(w);
-+}
-+
- static void virtio_fs_requests_done_work(struct work_struct *work)
- {
- struct virtio_fs_vq *fsvq = container_of(work, struct virtio_fs_vq,
- done_work);
- struct fuse_pqueue *fpq = &fsvq->fud->pq;
-- struct fuse_conn *fc = fsvq->fud->fc;
- struct virtqueue *vq = fsvq->vq;
- struct fuse_req *req;
-- struct fuse_args_pages *ap;
- struct fuse_req *next;
-- struct fuse_args *args;
-- unsigned int len, i, thislen;
-- struct page *page;
-+ unsigned int len;
- LIST_HEAD(reqs);
-
- /* Collect completed requests off the virtqueue */
-@@ -515,38 +569,20 @@ static void virtio_fs_requests_done_work(struct work_struct *work)
-
- /* End requests */
- list_for_each_entry_safe(req, next, &reqs, list) {
-- /*
-- * TODO verify that server properly follows FUSE protocol
-- * (oh.uniq, oh.len)
-- */
-- args = req->args;
-- copy_args_from_argbuf(args, req);
--
-- if (args->out_pages && args->page_zeroing) {
-- len = args->out_args[args->out_numargs - 1].size;
-- ap = container_of(args, typeof(*ap), args);
-- for (i = 0; i < ap->num_pages; i++) {
-- thislen = ap->descs[i].length;
-- if (len < thislen) {
-- WARN_ON(ap->descs[i].offset);
-- page = ap->pages[i];
-- zero_user_segment(page, len, thislen);
-- len = 0;
-- } else {
-- len -= thislen;
-- }
-- }
-- }
--
-- spin_lock(&fpq->lock);
-- clear_bit(FR_SENT, &req->flags);
- list_del_init(&req->list);
-- spin_unlock(&fpq->lock);
-
-- fuse_request_end(fc, req);
-- spin_lock(&fsvq->lock);
-- dec_in_flight_req(fsvq);
-- spin_unlock(&fsvq->lock);
-+ /* blocking async request completes in a worker context */
-+ if (req->args->may_block) {
-+ struct virtio_fs_req_work *w;
-+
-+ w = kzalloc(sizeof(*w), GFP_NOFS | __GFP_NOFAIL);
-+ INIT_WORK(&w->done_work, virtio_fs_complete_req_work);
-+ w->fsvq = fsvq;
-+ w->req = req;
-+ schedule_work(&w->done_work);
-+ } else {
-+ virtio_fs_request_complete(req, fsvq);
-+ }
- }
- }
-
-diff --git a/fs/gfs2/log.c b/fs/gfs2/log.c
-index 0644e58c6191..b7a5221bea7d 100644
---- a/fs/gfs2/log.c
-+++ b/fs/gfs2/log.c
-@@ -1003,8 +1003,10 @@ out:
- * @new: New transaction to be merged
- */
-
--static void gfs2_merge_trans(struct gfs2_trans *old, struct gfs2_trans *new)
-+static void gfs2_merge_trans(struct gfs2_sbd *sdp, struct gfs2_trans *new)
- {
-+ struct gfs2_trans *old = sdp->sd_log_tr;
-+
- WARN_ON_ONCE(!test_bit(TR_ATTACHED, &old->tr_flags));
-
- old->tr_num_buf_new += new->tr_num_buf_new;
-@@ -1016,6 +1018,11 @@ static void gfs2_merge_trans(struct gfs2_trans *old, struct gfs2_trans *new)
-
- list_splice_tail_init(&new->tr_databuf, &old->tr_databuf);
- list_splice_tail_init(&new->tr_buf, &old->tr_buf);
-+
-+ spin_lock(&sdp->sd_ail_lock);
-+ list_splice_tail_init(&new->tr_ail1_list, &old->tr_ail1_list);
-+ list_splice_tail_init(&new->tr_ail2_list, &old->tr_ail2_list);
-+ spin_unlock(&sdp->sd_ail_lock);
- }
-
- static void log_refund(struct gfs2_sbd *sdp, struct gfs2_trans *tr)
-@@ -1027,7 +1034,7 @@ static void log_refund(struct gfs2_sbd *sdp, struct gfs2_trans *tr)
- gfs2_log_lock(sdp);
-
- if (sdp->sd_log_tr) {
-- gfs2_merge_trans(sdp->sd_log_tr, tr);
-+ gfs2_merge_trans(sdp, tr);
- } else if (tr->tr_num_buf_new || tr->tr_num_databuf_new) {
- gfs2_assert_withdraw(sdp, test_bit(TR_ALLOCED, &tr->tr_flags));
- sdp->sd_log_tr = tr;
-diff --git a/fs/gfs2/ops_fstype.c b/fs/gfs2/ops_fstype.c
-index e2b69ffcc6a8..094f5fe7c009 100644
---- a/fs/gfs2/ops_fstype.c
-+++ b/fs/gfs2/ops_fstype.c
-@@ -880,7 +880,7 @@ fail:
- }
-
- static const match_table_t nolock_tokens = {
-- { Opt_jid, "jid=%d\n", },
-+ { Opt_jid, "jid=%d", },
- { Opt_err, NULL },
- };
-
-diff --git a/fs/io_uring.c b/fs/io_uring.c
-index 2698e9b08490..1829be7f63a3 100644
---- a/fs/io_uring.c
-+++ b/fs/io_uring.c
-@@ -513,7 +513,6 @@ enum {
- REQ_F_INFLIGHT_BIT,
- REQ_F_CUR_POS_BIT,
- REQ_F_NOWAIT_BIT,
-- REQ_F_IOPOLL_COMPLETED_BIT,
- REQ_F_LINK_TIMEOUT_BIT,
- REQ_F_TIMEOUT_BIT,
- REQ_F_ISREG_BIT,
-@@ -556,8 +555,6 @@ enum {
- REQ_F_CUR_POS = BIT(REQ_F_CUR_POS_BIT),
- /* must not punt to workers */
- REQ_F_NOWAIT = BIT(REQ_F_NOWAIT_BIT),
-- /* polled IO has completed */
-- REQ_F_IOPOLL_COMPLETED = BIT(REQ_F_IOPOLL_COMPLETED_BIT),
- /* has linked timeout */
- REQ_F_LINK_TIMEOUT = BIT(REQ_F_LINK_TIMEOUT_BIT),
- /* timeout request */
-@@ -618,6 +615,8 @@ struct io_kiocb {
- int cflags;
- bool needs_fixed_file;
- u8 opcode;
-+ /* polled IO has completed */
-+ u8 iopoll_completed;
-
- u16 buf_index;
-
-@@ -1691,6 +1690,18 @@ static int io_put_kbuf(struct io_kiocb *req)
- return cflags;
- }
-
-+static void io_iopoll_queue(struct list_head *again)
-+{
-+ struct io_kiocb *req;
-+
-+ do {
-+ req = list_first_entry(again, struct io_kiocb, list);
-+ list_del(&req->list);
-+ refcount_inc(&req->refs);
-+ io_queue_async_work(req);
-+ } while (!list_empty(again));
-+}
-+
- /*
- * Find and free completed poll iocbs
- */
-@@ -1699,12 +1710,21 @@ static void io_iopoll_complete(struct io_ring_ctx *ctx, unsigned int *nr_events,
- {
- struct req_batch rb;
- struct io_kiocb *req;
-+ LIST_HEAD(again);
-+
-+ /* order with ->result store in io_complete_rw_iopoll() */
-+ smp_rmb();
-
- rb.to_free = rb.need_iter = 0;
- while (!list_empty(done)) {
- int cflags = 0;
-
- req = list_first_entry(done, struct io_kiocb, list);
-+ if (READ_ONCE(req->result) == -EAGAIN) {
-+ req->iopoll_completed = 0;
-+ list_move_tail(&req->list, &again);
-+ continue;
-+ }
- list_del(&req->list);
-
- if (req->flags & REQ_F_BUFFER_SELECTED)
-@@ -1722,18 +1742,9 @@ static void io_iopoll_complete(struct io_ring_ctx *ctx, unsigned int *nr_events,
- if (ctx->flags & IORING_SETUP_SQPOLL)
- io_cqring_ev_posted(ctx);
- io_free_req_many(ctx, &rb);
--}
-
--static void io_iopoll_queue(struct list_head *again)
--{
-- struct io_kiocb *req;
--
-- do {
-- req = list_first_entry(again, struct io_kiocb, list);
-- list_del(&req->list);
-- refcount_inc(&req->refs);
-- io_queue_async_work(req);
-- } while (!list_empty(again));
-+ if (!list_empty(&again))
-+ io_iopoll_queue(&again);
- }
-
- static int io_do_iopoll(struct io_ring_ctx *ctx, unsigned int *nr_events,
-@@ -1741,7 +1752,6 @@ static int io_do_iopoll(struct io_ring_ctx *ctx, unsigned int *nr_events,
- {
- struct io_kiocb *req, *tmp;
- LIST_HEAD(done);
-- LIST_HEAD(again);
- bool spin;
- int ret;
-
-@@ -1760,20 +1770,13 @@ static int io_do_iopoll(struct io_ring_ctx *ctx, unsigned int *nr_events,
- * If we find a request that requires polling, break out
- * and complete those lists first, if we have entries there.
- */
-- if (req->flags & REQ_F_IOPOLL_COMPLETED) {
-+ if (READ_ONCE(req->iopoll_completed)) {
- list_move_tail(&req->list, &done);
- continue;
- }
- if (!list_empty(&done))
- break;
-
-- if (req->result == -EAGAIN) {
-- list_move_tail(&req->list, &again);
-- continue;
-- }
-- if (!list_empty(&again))
-- break;
--
- ret = kiocb->ki_filp->f_op->iopoll(kiocb, spin);
- if (ret < 0)
- break;
-@@ -1786,9 +1789,6 @@ static int io_do_iopoll(struct io_ring_ctx *ctx, unsigned int *nr_events,
- if (!list_empty(&done))
- io_iopoll_complete(ctx, nr_events, &done);
-
-- if (!list_empty(&again))
-- io_iopoll_queue(&again);
--
- return ret;
- }
-
-@@ -1937,11 +1937,15 @@ static void io_complete_rw_iopoll(struct kiocb *kiocb, long res, long res2)
- if (kiocb->ki_flags & IOCB_WRITE)
- kiocb_end_write(req);
-
-- if (res != req->result)
-+ if (res != -EAGAIN && res != req->result)
- req_set_fail_links(req);
-- req->result = res;
-- if (res != -EAGAIN)
-- req->flags |= REQ_F_IOPOLL_COMPLETED;
-+
-+ WRITE_ONCE(req->result, res);
-+ /* order with io_poll_complete() checking ->result */
-+ if (res != -EAGAIN) {
-+ smp_wmb();
-+ WRITE_ONCE(req->iopoll_completed, 1);
-+ }
- }
-
- /*
-@@ -1974,7 +1978,7 @@ static void io_iopoll_req_issued(struct io_kiocb *req)
- * For fast devices, IO may have already completed. If it has, add
- * it to the front so we find it first.
- */
-- if (req->flags & REQ_F_IOPOLL_COMPLETED)
-+ if (READ_ONCE(req->iopoll_completed))
- list_add(&req->list, &ctx->poll_list);
- else
- list_add_tail(&req->list, &ctx->poll_list);
-@@ -2098,6 +2102,7 @@ static int io_prep_rw(struct io_kiocb *req, const struct io_uring_sqe *sqe,
- kiocb->ki_flags |= IOCB_HIPRI;
- kiocb->ki_complete = io_complete_rw_iopoll;
- req->result = 0;
-+ req->iopoll_completed = 0;
- } else {
- if (kiocb->ki_flags & IOCB_HIPRI)
- return -EINVAL;
-@@ -2609,8 +2614,8 @@ copy_iov:
- }
- }
- out_free:
-- kfree(iovec);
-- req->flags &= ~REQ_F_NEED_CLEANUP;
-+ if (!(req->flags & REQ_F_NEED_CLEANUP))
-+ kfree(iovec);
- return ret;
- }
-
-@@ -2732,8 +2737,8 @@ copy_iov:
- }
- }
- out_free:
-- req->flags &= ~REQ_F_NEED_CLEANUP;
-- kfree(iovec);
-+ if (!(req->flags & REQ_F_NEED_CLEANUP))
-+ kfree(iovec);
- return ret;
- }
-
-@@ -4297,6 +4302,28 @@ static void io_async_queue_proc(struct file *file, struct wait_queue_head *head,
- __io_queue_proc(&pt->req->apoll->poll, pt, head);
- }
-
-+static void io_sq_thread_drop_mm(struct io_ring_ctx *ctx)
-+{
-+ struct mm_struct *mm = current->mm;
-+
-+ if (mm) {
-+ unuse_mm(mm);
-+ mmput(mm);
-+ }
-+}
-+
-+static int io_sq_thread_acquire_mm(struct io_ring_ctx *ctx,
-+ struct io_kiocb *req)
-+{
-+ if (io_op_defs[req->opcode].needs_mm && !current->mm) {
-+ if (unlikely(!mmget_not_zero(ctx->sqo_mm)))
-+ return -EFAULT;
-+ use_mm(ctx->sqo_mm);
-+ }
-+
-+ return 0;
-+}
-+
- static void io_async_task_func(struct callback_head *cb)
- {
- struct io_kiocb *req = container_of(cb, struct io_kiocb, task_work);
-@@ -4328,12 +4355,17 @@ static void io_async_task_func(struct callback_head *cb)
- if (canceled) {
- kfree(apoll);
- io_cqring_ev_posted(ctx);
-+end_req:
- req_set_fail_links(req);
- io_double_put_req(req);
- return;
- }
-
- __set_current_state(TASK_RUNNING);
-+ if (io_sq_thread_acquire_mm(ctx, req)) {
-+ io_cqring_add_event(req, -EFAULT);
-+ goto end_req;
-+ }
- mutex_lock(&ctx->uring_lock);
- __io_queue_sqe(req, NULL);
- mutex_unlock(&ctx->uring_lock);
-@@ -5892,11 +5924,8 @@ static int io_init_req(struct io_ring_ctx *ctx, struct io_kiocb *req,
- if (unlikely(req->opcode >= IORING_OP_LAST))
- return -EINVAL;
-
-- if (io_op_defs[req->opcode].needs_mm && !current->mm) {
-- if (unlikely(!mmget_not_zero(ctx->sqo_mm)))
-- return -EFAULT;
-- use_mm(ctx->sqo_mm);
-- }
-+ if (unlikely(io_sq_thread_acquire_mm(ctx, req)))
-+ return -EFAULT;
-
- sqe_flags = READ_ONCE(sqe->flags);
- /* enforce forwards compatibility on users */
-@@ -6006,16 +6035,6 @@ fail_req:
- return submitted;
- }
-
--static inline void io_sq_thread_drop_mm(struct io_ring_ctx *ctx)
--{
-- struct mm_struct *mm = current->mm;
--
-- if (mm) {
-- unuse_mm(mm);
-- mmput(mm);
-- }
--}
--
- static int io_sq_thread(void *data)
- {
- struct io_ring_ctx *ctx = data;
-@@ -7385,7 +7404,17 @@ static void io_ring_exit_work(struct work_struct *work)
- if (ctx->rings)
- io_cqring_overflow_flush(ctx, true);
-
-- wait_for_completion(&ctx->completions[0]);
-+ /*
-+ * If we're doing polled IO and end up having requests being
-+ * submitted async (out-of-line), then completions can come in while
-+ * we're waiting for refs to drop. We need to reap these manually,
-+ * as nobody else will be looking for them.
-+ */
-+ while (!wait_for_completion_timeout(&ctx->completions[0], HZ/20)) {
-+ io_iopoll_reap_events(ctx);
-+ if (ctx->rings)
-+ io_cqring_overflow_flush(ctx, true);
-+ }
- io_ring_ctx_free(ctx);
- }
-
-diff --git a/fs/jbd2/journal.c b/fs/jbd2/journal.c
-index a49d0e670ddf..e4944436e733 100644
---- a/fs/jbd2/journal.c
-+++ b/fs/jbd2/journal.c
-@@ -1140,6 +1140,7 @@ static journal_t *journal_init_common(struct block_device *bdev,
- init_waitqueue_head(&journal->j_wait_commit);
- init_waitqueue_head(&journal->j_wait_updates);
- init_waitqueue_head(&journal->j_wait_reserved);
-+ mutex_init(&journal->j_abort_mutex);
- mutex_init(&journal->j_barrier);
- mutex_init(&journal->j_checkpoint_mutex);
- spin_lock_init(&journal->j_revoke_lock);
-@@ -1402,7 +1403,8 @@ static int jbd2_write_superblock(journal_t *journal, int write_flags)
- printk(KERN_ERR "JBD2: Error %d detected when updating "
- "journal superblock for %s.\n", ret,
- journal->j_devname);
-- jbd2_journal_abort(journal, ret);
-+ if (!is_journal_aborted(journal))
-+ jbd2_journal_abort(journal, ret);
- }
-
- return ret;
-@@ -2153,6 +2155,13 @@ void jbd2_journal_abort(journal_t *journal, int errno)
- {
- transaction_t *transaction;
-
-+ /*
-+ * Lock the aborting procedure until everything is done, this avoid
-+ * races between filesystem's error handling flow (e.g. ext4_abort()),
-+ * ensure panic after the error info is written into journal's
-+ * superblock.
-+ */
-+ mutex_lock(&journal->j_abort_mutex);
- /*
- * ESHUTDOWN always takes precedence because a file system check
- * caused by any other journal abort error is not required after
-@@ -2167,6 +2176,7 @@ void jbd2_journal_abort(journal_t *journal, int errno)
- journal->j_errno = errno;
- jbd2_journal_update_sb_errno(journal);
- }
-+ mutex_unlock(&journal->j_abort_mutex);
- return;
- }
-
-@@ -2188,10 +2198,7 @@ void jbd2_journal_abort(journal_t *journal, int errno)
- * layer could realise that a filesystem check is needed.
- */
- jbd2_journal_update_sb_errno(journal);
--
-- write_lock(&journal->j_state_lock);
-- journal->j_flags |= JBD2_REC_ERR;
-- write_unlock(&journal->j_state_lock);
-+ mutex_unlock(&journal->j_abort_mutex);
- }
-
- /**
-diff --git a/fs/nfs/direct.c b/fs/nfs/direct.c
-index a57e7c72c7f4..d49b1d197908 100644
---- a/fs/nfs/direct.c
-+++ b/fs/nfs/direct.c
-@@ -731,6 +731,8 @@ static void nfs_direct_write_completion(struct nfs_pgio_header *hdr)
- nfs_list_remove_request(req);
- if (request_commit) {
- kref_get(&req->wb_kref);
-+ memcpy(&req->wb_verf, &hdr->verf.verifier,
-+ sizeof(req->wb_verf));
- nfs_mark_request_commit(req, hdr->lseg, &cinfo,
- hdr->ds_commit_idx);
- }
-diff --git a/fs/nfs/inode.c b/fs/nfs/inode.c
-index b9d0921cb4fe..0bf1f835de01 100644
---- a/fs/nfs/inode.c
-+++ b/fs/nfs/inode.c
-@@ -833,6 +833,8 @@ int nfs_getattr(const struct path *path, struct kstat *stat,
- do_update |= cache_validity & NFS_INO_INVALID_ATIME;
- if (request_mask & (STATX_CTIME|STATX_MTIME))
- do_update |= cache_validity & NFS_INO_REVAL_PAGECACHE;
-+ if (request_mask & STATX_BLOCKS)
-+ do_update |= cache_validity & NFS_INO_INVALID_BLOCKS;
- if (do_update) {
- /* Update the attribute cache */
- if (!(server->flags & NFS_MOUNT_NOAC))
-@@ -1764,7 +1766,8 @@ out_noforce:
- status = nfs_post_op_update_inode_locked(inode, fattr,
- NFS_INO_INVALID_CHANGE
- | NFS_INO_INVALID_CTIME
-- | NFS_INO_INVALID_MTIME);
-+ | NFS_INO_INVALID_MTIME
-+ | NFS_INO_INVALID_BLOCKS);
- return status;
- }
-
-@@ -1871,7 +1874,8 @@ static int nfs_update_inode(struct inode *inode, struct nfs_fattr *fattr)
- nfsi->cache_validity &= ~(NFS_INO_INVALID_ATTR
- | NFS_INO_INVALID_ATIME
- | NFS_INO_REVAL_FORCED
-- | NFS_INO_REVAL_PAGECACHE);
-+ | NFS_INO_REVAL_PAGECACHE
-+ | NFS_INO_INVALID_BLOCKS);
-
- /* Do atomic weak cache consistency updates */
- nfs_wcc_update_inode(inode, fattr);
-@@ -2033,8 +2037,12 @@ static int nfs_update_inode(struct inode *inode, struct nfs_fattr *fattr)
- inode->i_blocks = nfs_calc_block_size(fattr->du.nfs3.used);
- } else if (fattr->valid & NFS_ATTR_FATTR_BLOCKS_USED)
- inode->i_blocks = fattr->du.nfs2.blocks;
-- else
-+ else {
-+ nfsi->cache_validity |= save_cache_validity &
-+ (NFS_INO_INVALID_BLOCKS
-+ | NFS_INO_REVAL_FORCED);
- cache_revalidated = false;
-+ }
-
- /* Update attrtimeo value if we're out of the unstable period */
- if (attr_changed) {
-diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
-index 9056f3dd380e..e32717fd1169 100644
---- a/fs/nfs/nfs4proc.c
-+++ b/fs/nfs/nfs4proc.c
-@@ -7909,7 +7909,7 @@ nfs4_bind_one_conn_to_session_done(struct rpc_task *task, void *calldata)
- }
-
- static const struct rpc_call_ops nfs4_bind_one_conn_to_session_ops = {
-- .rpc_call_done = &nfs4_bind_one_conn_to_session_done,
-+ .rpc_call_done = nfs4_bind_one_conn_to_session_done,
- };
-
- /*
-diff --git a/fs/nfsd/cache.h b/fs/nfsd/cache.h
-index 10ec5ecdf117..65c331f75e9c 100644
---- a/fs/nfsd/cache.h
-+++ b/fs/nfsd/cache.h
-@@ -78,6 +78,8 @@ enum {
- /* Checksum this amount of the request */
- #define RC_CSUMLEN (256U)
-
-+int nfsd_drc_slab_create(void);
-+void nfsd_drc_slab_free(void);
- int nfsd_reply_cache_init(struct nfsd_net *);
- void nfsd_reply_cache_shutdown(struct nfsd_net *);
- int nfsd_cache_lookup(struct svc_rqst *);
-diff --git a/fs/nfsd/netns.h b/fs/nfsd/netns.h
-index 09aa545825bd..9217cb64bf0e 100644
---- a/fs/nfsd/netns.h
-+++ b/fs/nfsd/netns.h
-@@ -139,7 +139,6 @@ struct nfsd_net {
- * Duplicate reply cache
- */
- struct nfsd_drc_bucket *drc_hashtbl;
-- struct kmem_cache *drc_slab;
-
- /* max number of entries allowed in the cache */
- unsigned int max_drc_entries;
-diff --git a/fs/nfsd/nfs4callback.c b/fs/nfsd/nfs4callback.c
-index 5cf91322de0f..07e0c6f6322f 100644
---- a/fs/nfsd/nfs4callback.c
-+++ b/fs/nfsd/nfs4callback.c
-@@ -1301,6 +1301,8 @@ static void nfsd4_process_cb_update(struct nfsd4_callback *cb)
- err = setup_callback_client(clp, &conn, ses);
- if (err) {
- nfsd4_mark_cb_down(clp, err);
-+ if (c)
-+ svc_xprt_put(c->cn_xprt);
- return;
- }
- }
-diff --git a/fs/nfsd/nfscache.c b/fs/nfsd/nfscache.c
-index 96352ab7bd81..4a258065188e 100644
---- a/fs/nfsd/nfscache.c
-+++ b/fs/nfsd/nfscache.c
-@@ -36,6 +36,8 @@ struct nfsd_drc_bucket {
- spinlock_t cache_lock;
- };
-
-+static struct kmem_cache *drc_slab;
-+
- static int nfsd_cache_append(struct svc_rqst *rqstp, struct kvec *vec);
- static unsigned long nfsd_reply_cache_count(struct shrinker *shrink,
- struct shrink_control *sc);
-@@ -95,7 +97,7 @@ nfsd_reply_cache_alloc(struct svc_rqst *rqstp, __wsum csum,
- {
- struct svc_cacherep *rp;
-
-- rp = kmem_cache_alloc(nn->drc_slab, GFP_KERNEL);
-+ rp = kmem_cache_alloc(drc_slab, GFP_KERNEL);
- if (rp) {
- rp->c_state = RC_UNUSED;
- rp->c_type = RC_NOCACHE;
-@@ -129,7 +131,7 @@ nfsd_reply_cache_free_locked(struct nfsd_drc_bucket *b, struct svc_cacherep *rp,
- atomic_dec(&nn->num_drc_entries);
- nn->drc_mem_usage -= sizeof(*rp);
- }
-- kmem_cache_free(nn->drc_slab, rp);
-+ kmem_cache_free(drc_slab, rp);
- }
-
- static void
-@@ -141,6 +143,18 @@ nfsd_reply_cache_free(struct nfsd_drc_bucket *b, struct svc_cacherep *rp,
- spin_unlock(&b->cache_lock);
- }
-
-+int nfsd_drc_slab_create(void)
-+{
-+ drc_slab = kmem_cache_create("nfsd_drc",
-+ sizeof(struct svc_cacherep), 0, 0, NULL);
-+ return drc_slab ? 0: -ENOMEM;
-+}
-+
-+void nfsd_drc_slab_free(void)
-+{
-+ kmem_cache_destroy(drc_slab);
-+}
-+
- int nfsd_reply_cache_init(struct nfsd_net *nn)
- {
- unsigned int hashsize;
-@@ -159,18 +173,13 @@ int nfsd_reply_cache_init(struct nfsd_net *nn)
- if (status)
- goto out_nomem;
-
-- nn->drc_slab = kmem_cache_create("nfsd_drc",
-- sizeof(struct svc_cacherep), 0, 0, NULL);
-- if (!nn->drc_slab)
-- goto out_shrinker;
--
- nn->drc_hashtbl = kcalloc(hashsize,
- sizeof(*nn->drc_hashtbl), GFP_KERNEL);
- if (!nn->drc_hashtbl) {
- nn->drc_hashtbl = vzalloc(array_size(hashsize,
- sizeof(*nn->drc_hashtbl)));
- if (!nn->drc_hashtbl)
-- goto out_slab;
-+ goto out_shrinker;
- }
-
- for (i = 0; i < hashsize; i++) {
-@@ -180,8 +189,6 @@ int nfsd_reply_cache_init(struct nfsd_net *nn)
- nn->drc_hashsize = hashsize;
-
- return 0;
--out_slab:
-- kmem_cache_destroy(nn->drc_slab);
- out_shrinker:
- unregister_shrinker(&nn->nfsd_reply_cache_shrinker);
- out_nomem:
-@@ -209,8 +216,6 @@ void nfsd_reply_cache_shutdown(struct nfsd_net *nn)
- nn->drc_hashtbl = NULL;
- nn->drc_hashsize = 0;
-
-- kmem_cache_destroy(nn->drc_slab);
-- nn->drc_slab = NULL;
- }
-
- /*
-@@ -464,8 +469,7 @@ found_entry:
- rtn = RC_REPLY;
- break;
- default:
-- printk(KERN_WARNING "nfsd: bad repcache type %d\n", rp->c_type);
-- nfsd_reply_cache_free_locked(b, rp, nn);
-+ WARN_ONCE(1, "nfsd: bad repcache type %d\n", rp->c_type);
- }
-
- goto out;
-diff --git a/fs/nfsd/nfsctl.c b/fs/nfsd/nfsctl.c
-index 3bb2db947d29..71687d99b090 100644
---- a/fs/nfsd/nfsctl.c
-+++ b/fs/nfsd/nfsctl.c
-@@ -1533,6 +1533,9 @@ static int __init init_nfsd(void)
- goto out_free_slabs;
- nfsd_fault_inject_init(); /* nfsd fault injection controls */
- nfsd_stat_init(); /* Statistics */
-+ retval = nfsd_drc_slab_create();
-+ if (retval)
-+ goto out_free_stat;
- nfsd_lockd_init(); /* lockd->nfsd callbacks */
- retval = create_proc_exports_entry();
- if (retval)
-@@ -1546,6 +1549,8 @@ out_free_all:
- remove_proc_entry("fs/nfs", NULL);
- out_free_lockd:
- nfsd_lockd_shutdown();
-+ nfsd_drc_slab_free();
-+out_free_stat:
- nfsd_stat_shutdown();
- nfsd_fault_inject_cleanup();
- nfsd4_exit_pnfs();
-@@ -1560,6 +1565,7 @@ out_unregister_pernet:
-
- static void __exit exit_nfsd(void)
- {
-+ nfsd_drc_slab_free();
- remove_proc_entry("fs/nfs/exports", NULL);
- remove_proc_entry("fs/nfs", NULL);
- nfsd_stat_shutdown();
-diff --git a/fs/proc/bootconfig.c b/fs/proc/bootconfig.c
-index 9955d75c0585..ad31ec4ad627 100644
---- a/fs/proc/bootconfig.c
-+++ b/fs/proc/bootconfig.c
-@@ -26,8 +26,9 @@ static int boot_config_proc_show(struct seq_file *m, void *v)
- static int __init copy_xbc_key_value_list(char *dst, size_t size)
- {
- struct xbc_node *leaf, *vnode;
-- const char *val;
- char *key, *end = dst + size;
-+ const char *val;
-+ char q;
- int ret = 0;
-
- key = kzalloc(XBC_KEYLEN_MAX, GFP_KERNEL);
-@@ -41,16 +42,20 @@ static int __init copy_xbc_key_value_list(char *dst, size_t size)
- break;
- dst += ret;
- vnode = xbc_node_get_child(leaf);
-- if (vnode && xbc_node_is_array(vnode)) {
-+ if (vnode) {
- xbc_array_for_each_value(vnode, val) {
-- ret = snprintf(dst, rest(dst, end), "\"%s\"%s",
-- val, vnode->next ? ", " : "\n");
-+ if (strchr(val, '"'))
-+ q = '\'';
-+ else
-+ q = '"';
-+ ret = snprintf(dst, rest(dst, end), "%c%s%c%s",
-+ q, val, q, vnode->next ? ", " : "\n");
- if (ret < 0)
- goto out;
- dst += ret;
- }
- } else {
-- ret = snprintf(dst, rest(dst, end), "\"%s\"\n", val);
-+ ret = snprintf(dst, rest(dst, end), "\"\"\n");
- if (ret < 0)
- break;
- dst += ret;
-diff --git a/fs/xfs/xfs_inode.c b/fs/xfs/xfs_inode.c
-index d1772786af29..8845faa8161a 100644
---- a/fs/xfs/xfs_inode.c
-+++ b/fs/xfs/xfs_inode.c
-@@ -2639,8 +2639,10 @@ xfs_ifree_cluster(
- error = xfs_trans_get_buf(tp, mp->m_ddev_targp, blkno,
- mp->m_bsize * igeo->blocks_per_cluster,
- XBF_UNMAPPED, &bp);
-- if (error)
-+ if (error) {
-+ xfs_perag_put(pag);
- return error;
-+ }
-
- /*
- * This buffer may not have been correctly initialised as we
-diff --git a/include/linux/bitops.h b/include/linux/bitops.h
-index 9acf654f0b19..99f2ac30b1d9 100644
---- a/include/linux/bitops.h
-+++ b/include/linux/bitops.h
-@@ -72,7 +72,7 @@ static inline int get_bitmask_order(unsigned int count)
-
- static __always_inline unsigned long hweight_long(unsigned long w)
- {
-- return sizeof(w) == 4 ? hweight32(w) : hweight64(w);
-+ return sizeof(w) == 4 ? hweight32(w) : hweight64((__u64)w);
- }
-
- /**
-diff --git a/include/linux/coresight.h b/include/linux/coresight.h
-index 193cc9dbf448..09f0565a5de3 100644
---- a/include/linux/coresight.h
-+++ b/include/linux/coresight.h
-@@ -100,10 +100,12 @@ union coresight_dev_subtype {
- };
-
- /**
-- * struct coresight_platform_data - data harvested from the DT specification
-- * @nr_inport: number of input ports for this component.
-- * @nr_outport: number of output ports for this component.
-- * @conns: Array of nr_outport connections from this component
-+ * struct coresight_platform_data - data harvested from the firmware
-+ * specification.
-+ *
-+ * @nr_inport: Number of elements for the input connections.
-+ * @nr_outport: Number of elements for the output connections.
-+ * @conns: Sparse array of nr_outport connections from this component.
- */
- struct coresight_platform_data {
- int nr_inport;
-diff --git a/include/linux/ioport.h b/include/linux/ioport.h
-index a9b9170b5dd2..6c3eca90cbc4 100644
---- a/include/linux/ioport.h
-+++ b/include/linux/ioport.h
-@@ -301,5 +301,11 @@ struct resource *devm_request_free_mem_region(struct device *dev,
- struct resource *request_free_mem_region(struct resource *base,
- unsigned long size, const char *name);
-
-+#ifdef CONFIG_IO_STRICT_DEVMEM
-+void revoke_devmem(struct resource *res);
-+#else
-+static inline void revoke_devmem(struct resource *res) { };
-+#endif
-+
- #endif /* __ASSEMBLY__ */
- #endif /* _LINUX_IOPORT_H */
-diff --git a/include/linux/jbd2.h b/include/linux/jbd2.h
-index f613d8529863..d56128df2aff 100644
---- a/include/linux/jbd2.h
-+++ b/include/linux/jbd2.h
-@@ -765,6 +765,11 @@ struct journal_s
- */
- int j_errno;
-
-+ /**
-+ * @j_abort_mutex: Lock the whole aborting procedure.
-+ */
-+ struct mutex j_abort_mutex;
-+
- /**
- * @j_sb_buffer: The first part of the superblock buffer.
- */
-@@ -1247,7 +1252,6 @@ JBD2_FEATURE_INCOMPAT_FUNCS(csum3, CSUM_V3)
- #define JBD2_ABORT_ON_SYNCDATA_ERR 0x040 /* Abort the journal on file
- * data write error in ordered
- * mode */
--#define JBD2_REC_ERR 0x080 /* The errno in the sb has been recorded */
-
- /*
- * Function declarations for the journaling transaction and buffer
-diff --git a/include/linux/kprobes.h b/include/linux/kprobes.h
-index 04bdaf01112c..645fd401c856 100644
---- a/include/linux/kprobes.h
-+++ b/include/linux/kprobes.h
-@@ -350,6 +350,10 @@ static inline struct kprobe_ctlblk *get_kprobe_ctlblk(void)
- return this_cpu_ptr(&kprobe_ctlblk);
- }
-
-+extern struct kprobe kprobe_busy;
-+void kprobe_busy_begin(void);
-+void kprobe_busy_end(void);
-+
- kprobe_opcode_t *kprobe_lookup_name(const char *name, unsigned int offset);
- int register_kprobe(struct kprobe *p);
- void unregister_kprobe(struct kprobe *p);
-diff --git a/include/linux/libata.h b/include/linux/libata.h
-index cffa4714bfa8..ae6dfc107ea8 100644
---- a/include/linux/libata.h
-+++ b/include/linux/libata.h
-@@ -22,6 +22,7 @@
- #include <linux/acpi.h>
- #include <linux/cdrom.h>
- #include <linux/sched.h>
-+#include <linux/async.h>
-
- /*
- * Define if arch has non-standard setup. This is a _PCI_ standard
-@@ -872,6 +873,8 @@ struct ata_port {
- struct timer_list fastdrain_timer;
- unsigned long fastdrain_cnt;
-
-+ async_cookie_t cookie;
-+
- int em_message_type;
- void *private_data;
-
-diff --git a/include/linux/mfd/stmfx.h b/include/linux/mfd/stmfx.h
-index 3c67983678ec..744dce63946e 100644
---- a/include/linux/mfd/stmfx.h
-+++ b/include/linux/mfd/stmfx.h
-@@ -109,6 +109,7 @@ struct stmfx {
- struct device *dev;
- struct regmap *map;
- struct regulator *vdd;
-+ int irq;
- struct irq_domain *irq_domain;
- struct mutex lock; /* IRQ bus lock */
- u8 irq_src;
-diff --git a/include/linux/nfs_fs.h b/include/linux/nfs_fs.h
-index 73eda45f1cfd..6ee9119acc5d 100644
---- a/include/linux/nfs_fs.h
-+++ b/include/linux/nfs_fs.h
-@@ -230,6 +230,7 @@ struct nfs4_copy_state {
- #define NFS_INO_INVALID_OTHER BIT(12) /* other attrs are invalid */
- #define NFS_INO_DATA_INVAL_DEFER \
- BIT(13) /* Deferred cache invalidation */
-+#define NFS_INO_INVALID_BLOCKS BIT(14) /* cached blocks are invalid */
-
- #define NFS_INO_INVALID_ATTR (NFS_INO_INVALID_CHANGE \
- | NFS_INO_INVALID_CTIME \
-diff --git a/include/linux/usb/composite.h b/include/linux/usb/composite.h
-index 8675e145ea8b..2040696d75b6 100644
---- a/include/linux/usb/composite.h
-+++ b/include/linux/usb/composite.h
-@@ -249,6 +249,9 @@ int usb_function_activate(struct usb_function *);
-
- int usb_interface_id(struct usb_configuration *, struct usb_function *);
-
-+int config_ep_by_speed_and_alt(struct usb_gadget *g, struct usb_function *f,
-+ struct usb_ep *_ep, u8 alt);
-+
- int config_ep_by_speed(struct usb_gadget *g, struct usb_function *f,
- struct usb_ep *_ep);
-
-diff --git a/include/linux/usb/gadget.h b/include/linux/usb/gadget.h
-index 9411c08a5c7e..73a6113322c6 100644
---- a/include/linux/usb/gadget.h
-+++ b/include/linux/usb/gadget.h
-@@ -373,6 +373,7 @@ struct usb_gadget_ops {
- * @connected: True if gadget is connected.
- * @lpm_capable: If the gadget max_speed is FULL or HIGH, this flag
- * indicates that it supports LPM as per the LPM ECN & errata.
-+ * @irq: the interrupt number for device controller.
- *
- * Gadgets have a mostly-portable "gadget driver" implementing device
- * functions, handling all usb configurations and interfaces. Gadget
-@@ -427,6 +428,7 @@ struct usb_gadget {
- unsigned deactivated:1;
- unsigned connected:1;
- unsigned lpm_capable:1;
-+ int irq;
- };
- #define work_to_gadget(w) (container_of((w), struct usb_gadget, work))
-
-diff --git a/include/sound/soc.h b/include/sound/soc.h
-index 946f88a6c63d..8e480efeda2a 100644
---- a/include/sound/soc.h
-+++ b/include/sound/soc.h
-@@ -790,9 +790,6 @@ struct snd_soc_dai_link {
- const struct snd_soc_pcm_stream *params;
- unsigned int num_params;
-
-- struct snd_soc_dapm_widget *playback_widget;
-- struct snd_soc_dapm_widget *capture_widget;
--
- unsigned int dai_fmt; /* format to set on init */
-
- enum snd_soc_dpcm_trigger trigger[2]; /* trigger type for DPCM */
-@@ -1156,6 +1153,9 @@ struct snd_soc_pcm_runtime {
- struct snd_soc_dai **cpu_dais;
- unsigned int num_cpus;
-
-+ struct snd_soc_dapm_widget *playback_widget;
-+ struct snd_soc_dapm_widget *capture_widget;
-+
- struct delayed_work delayed_work;
- void (*close_delayed_work_func)(struct snd_soc_pcm_runtime *rtd);
- #ifdef CONFIG_DEBUG_FS
-@@ -1177,7 +1177,7 @@ struct snd_soc_pcm_runtime {
- #define asoc_rtd_to_codec(rtd, n) (rtd)->dais[n + (rtd)->num_cpus]
-
- #define for_each_rtd_components(rtd, i, component) \
-- for ((i) = 0; \
-+ for ((i) = 0, component = NULL; \
- ((i) < rtd->num_components) && ((component) = rtd->components[i]);\
- (i)++)
- #define for_each_rtd_cpu_dais(rtd, i, dai) \
-diff --git a/include/trace/events/afs.h b/include/trace/events/afs.h
-index c612cabbc378..93eddd32bd74 100644
---- a/include/trace/events/afs.h
-+++ b/include/trace/events/afs.h
-@@ -988,24 +988,22 @@ TRACE_EVENT(afs_edit_dir,
- );
-
- TRACE_EVENT(afs_protocol_error,
-- TP_PROTO(struct afs_call *call, int error, enum afs_eproto_cause cause),
-+ TP_PROTO(struct afs_call *call, enum afs_eproto_cause cause),
-
-- TP_ARGS(call, error, cause),
-+ TP_ARGS(call, cause),
-
- TP_STRUCT__entry(
- __field(unsigned int, call )
-- __field(int, error )
- __field(enum afs_eproto_cause, cause )
- ),
-
- TP_fast_assign(
- __entry->call = call ? call->debug_id : 0;
-- __entry->error = error;
- __entry->cause = cause;
- ),
-
-- TP_printk("c=%08x r=%d %s",
-- __entry->call, __entry->error,
-+ TP_printk("c=%08x %s",
-+ __entry->call,
- __print_symbolic(__entry->cause, afs_eproto_causes))
- );
-
-diff --git a/include/uapi/linux/magic.h b/include/uapi/linux/magic.h
-index d78064007b17..f3956fc11de6 100644
---- a/include/uapi/linux/magic.h
-+++ b/include/uapi/linux/magic.h
-@@ -94,6 +94,7 @@
- #define BALLOON_KVM_MAGIC 0x13661366
- #define ZSMALLOC_MAGIC 0x58295829
- #define DMA_BUF_MAGIC 0x444d4142 /* "DMAB" */
-+#define DEVMEM_MAGIC 0x454d444d /* "DMEM" */
- #define Z3FOLD_MAGIC 0x33
- #define PPC_CMM_MAGIC 0xc7571590
-
-diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
-index 5e52765161f9..c8acc8f37583 100644
---- a/kernel/bpf/syscall.c
-+++ b/kernel/bpf/syscall.c
-@@ -2924,6 +2924,7 @@ static struct bpf_insn *bpf_insn_prepare_dump(const struct bpf_prog *prog)
- struct bpf_insn *insns;
- u32 off, type;
- u64 imm;
-+ u8 code;
- int i;
-
- insns = kmemdup(prog->insnsi, bpf_prog_insn_size(prog),
-@@ -2932,21 +2933,27 @@ static struct bpf_insn *bpf_insn_prepare_dump(const struct bpf_prog *prog)
- return insns;
-
- for (i = 0; i < prog->len; i++) {
-- if (insns[i].code == (BPF_JMP | BPF_TAIL_CALL)) {
-+ code = insns[i].code;
-+
-+ if (code == (BPF_JMP | BPF_TAIL_CALL)) {
- insns[i].code = BPF_JMP | BPF_CALL;
- insns[i].imm = BPF_FUNC_tail_call;
- /* fall-through */
- }
-- if (insns[i].code == (BPF_JMP | BPF_CALL) ||
-- insns[i].code == (BPF_JMP | BPF_CALL_ARGS)) {
-- if (insns[i].code == (BPF_JMP | BPF_CALL_ARGS))
-+ if (code == (BPF_JMP | BPF_CALL) ||
-+ code == (BPF_JMP | BPF_CALL_ARGS)) {
-+ if (code == (BPF_JMP | BPF_CALL_ARGS))
- insns[i].code = BPF_JMP | BPF_CALL;
- if (!bpf_dump_raw_ok())
- insns[i].imm = 0;
- continue;
- }
-+ if (BPF_CLASS(code) == BPF_LDX && BPF_MODE(code) == BPF_PROBE_MEM) {
-+ insns[i].code = BPF_LDX | BPF_SIZE(code) | BPF_MEM;
-+ continue;
-+ }
-
-- if (insns[i].code != (BPF_LD | BPF_IMM | BPF_DW))
-+ if (code != (BPF_LD | BPF_IMM | BPF_DW))
- continue;
-
- imm = ((u64)insns[i + 1].imm << 32) | (u32)insns[i].imm;
-diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
-index efe14cf24bc6..739d9ba3ba6b 100644
---- a/kernel/bpf/verifier.c
-+++ b/kernel/bpf/verifier.c
-@@ -7366,7 +7366,7 @@ static int check_btf_func(struct bpf_verifier_env *env,
- const struct btf *btf;
- void __user *urecord;
- u32 prev_offset = 0;
-- int ret = 0;
-+ int ret = -ENOMEM;
-
- nfuncs = attr->func_info_cnt;
- if (!nfuncs)
-diff --git a/kernel/kprobes.c b/kernel/kprobes.c
-index 2625c241ac00..195ecb955fcc 100644
---- a/kernel/kprobes.c
-+++ b/kernel/kprobes.c
-@@ -586,11 +586,12 @@ static void kprobe_optimizer(struct work_struct *work)
- mutex_unlock(&module_mutex);
- mutex_unlock(&text_mutex);
- cpus_read_unlock();
-- mutex_unlock(&kprobe_mutex);
-
- /* Step 5: Kick optimizer again if needed */
- if (!list_empty(&optimizing_list) || !list_empty(&unoptimizing_list))
- kick_kprobe_optimizer();
-+
-+ mutex_unlock(&kprobe_mutex);
- }
-
- /* Wait for completing optimization and unoptimization */
-@@ -1236,6 +1237,26 @@ __releases(hlist_lock)
- }
- NOKPROBE_SYMBOL(kretprobe_table_unlock);
-
-+struct kprobe kprobe_busy = {
-+ .addr = (void *) get_kprobe,
-+};
-+
-+void kprobe_busy_begin(void)
-+{
-+ struct kprobe_ctlblk *kcb;
-+
-+ preempt_disable();
-+ __this_cpu_write(current_kprobe, &kprobe_busy);
-+ kcb = get_kprobe_ctlblk();
-+ kcb->kprobe_status = KPROBE_HIT_ACTIVE;
-+}
-+
-+void kprobe_busy_end(void)
-+{
-+ __this_cpu_write(current_kprobe, NULL);
-+ preempt_enable();
-+}
-+
- /*
- * This function is called from finish_task_switch when task tk becomes dead,
- * so that we can recycle any function-return probe instances associated
-@@ -1253,6 +1274,8 @@ void kprobe_flush_task(struct task_struct *tk)
- /* Early boot. kretprobe_table_locks not yet initialized. */
- return;
-
-+ kprobe_busy_begin();
-+
- INIT_HLIST_HEAD(&empty_rp);
- hash = hash_ptr(tk, KPROBE_HASH_BITS);
- head = &kretprobe_inst_table[hash];
-@@ -1266,6 +1289,8 @@ void kprobe_flush_task(struct task_struct *tk)
- hlist_del(&ri->hlist);
- kfree(ri);
- }
-+
-+ kprobe_busy_end();
- }
- NOKPROBE_SYMBOL(kprobe_flush_task);
-
-diff --git a/kernel/resource.c b/kernel/resource.c
-index 76036a41143b..841737bbda9e 100644
---- a/kernel/resource.c
-+++ b/kernel/resource.c
-@@ -1126,6 +1126,7 @@ struct resource * __request_region(struct resource *parent,
- {
- DECLARE_WAITQUEUE(wait, current);
- struct resource *res = alloc_resource(GFP_KERNEL);
-+ struct resource *orig_parent = parent;
-
- if (!res)
- return NULL;
-@@ -1176,6 +1177,10 @@ struct resource * __request_region(struct resource *parent,
- break;
- }
- write_unlock(&resource_lock);
-+
-+ if (res && orig_parent == &iomem_resource)
-+ revoke_devmem(res);
-+
- return res;
- }
- EXPORT_SYMBOL(__request_region);
-diff --git a/kernel/trace/blktrace.c b/kernel/trace/blktrace.c
-index ca39dc3230cb..35610a4be4a9 100644
---- a/kernel/trace/blktrace.c
-+++ b/kernel/trace/blktrace.c
-@@ -995,8 +995,10 @@ static void blk_add_trace_split(void *ignore,
-
- __blk_add_trace(bt, bio->bi_iter.bi_sector,
- bio->bi_iter.bi_size, bio_op(bio), bio->bi_opf,
-- BLK_TA_SPLIT, bio->bi_status, sizeof(rpdu),
-- &rpdu, blk_trace_bio_get_cgid(q, bio));
-+ BLK_TA_SPLIT,
-+ blk_status_to_errno(bio->bi_status),
-+ sizeof(rpdu), &rpdu,
-+ blk_trace_bio_get_cgid(q, bio));
- }
- rcu_read_unlock();
- }
-@@ -1033,7 +1035,8 @@ static void blk_add_trace_bio_remap(void *ignore,
- r.sector_from = cpu_to_be64(from);
-
- __blk_add_trace(bt, bio->bi_iter.bi_sector, bio->bi_iter.bi_size,
-- bio_op(bio), bio->bi_opf, BLK_TA_REMAP, bio->bi_status,
-+ bio_op(bio), bio->bi_opf, BLK_TA_REMAP,
-+ blk_status_to_errno(bio->bi_status),
- sizeof(r), &r, blk_trace_bio_get_cgid(q, bio));
- rcu_read_unlock();
- }
-@@ -1253,21 +1256,10 @@ static inline __u16 t_error(const struct trace_entry *ent)
-
- static __u64 get_pdu_int(const struct trace_entry *ent, bool has_cg)
- {
-- const __u64 *val = pdu_start(ent, has_cg);
-+ const __be64 *val = pdu_start(ent, has_cg);
- return be64_to_cpu(*val);
- }
-
--static void get_pdu_remap(const struct trace_entry *ent,
-- struct blk_io_trace_remap *r, bool has_cg)
--{
-- const struct blk_io_trace_remap *__r = pdu_start(ent, has_cg);
-- __u64 sector_from = __r->sector_from;
--
-- r->device_from = be32_to_cpu(__r->device_from);
-- r->device_to = be32_to_cpu(__r->device_to);
-- r->sector_from = be64_to_cpu(sector_from);
--}
--
- typedef void (blk_log_action_t) (struct trace_iterator *iter, const char *act,
- bool has_cg);
-
-@@ -1407,13 +1399,13 @@ static void blk_log_with_error(struct trace_seq *s,
-
- static void blk_log_remap(struct trace_seq *s, const struct trace_entry *ent, bool has_cg)
- {
-- struct blk_io_trace_remap r = { .device_from = 0, };
-+ const struct blk_io_trace_remap *__r = pdu_start(ent, has_cg);
-
-- get_pdu_remap(ent, &r, has_cg);
- trace_seq_printf(s, "%llu + %u <- (%d,%d) %llu\n",
- t_sector(ent), t_sec(ent),
-- MAJOR(r.device_from), MINOR(r.device_from),
-- (unsigned long long)r.sector_from);
-+ MAJOR(be32_to_cpu(__r->device_from)),
-+ MINOR(be32_to_cpu(__r->device_from)),
-+ be64_to_cpu(__r->sector_from));
- }
-
- static void blk_log_plug(struct trace_seq *s, const struct trace_entry *ent, bool has_cg)
-diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
-index 4eb1d004d5f2..7fb2f4c1bc49 100644
---- a/kernel/trace/trace.h
-+++ b/kernel/trace/trace.h
-@@ -61,6 +61,9 @@ enum trace_type {
- #undef __field_desc
- #define __field_desc(type, container, item)
-
-+#undef __field_packed
-+#define __field_packed(type, container, item)
-+
- #undef __array
- #define __array(type, item, size) type item[size];
-
-diff --git a/kernel/trace/trace_entries.h b/kernel/trace/trace_entries.h
-index a523da0dae0a..18c4a58aff79 100644
---- a/kernel/trace/trace_entries.h
-+++ b/kernel/trace/trace_entries.h
-@@ -78,8 +78,8 @@ FTRACE_ENTRY_PACKED(funcgraph_entry, ftrace_graph_ent_entry,
-
- F_STRUCT(
- __field_struct( struct ftrace_graph_ent, graph_ent )
-- __field_desc( unsigned long, graph_ent, func )
-- __field_desc( int, graph_ent, depth )
-+ __field_packed( unsigned long, graph_ent, func )
-+ __field_packed( int, graph_ent, depth )
- ),
-
- F_printk("--> %ps (%d)", (void *)__entry->func, __entry->depth)
-@@ -92,11 +92,11 @@ FTRACE_ENTRY_PACKED(funcgraph_exit, ftrace_graph_ret_entry,
-
- F_STRUCT(
- __field_struct( struct ftrace_graph_ret, ret )
-- __field_desc( unsigned long, ret, func )
-- __field_desc( unsigned long, ret, overrun )
-- __field_desc( unsigned long long, ret, calltime)
-- __field_desc( unsigned long long, ret, rettime )
-- __field_desc( int, ret, depth )
-+ __field_packed( unsigned long, ret, func )
-+ __field_packed( unsigned long, ret, overrun )
-+ __field_packed( unsigned long long, ret, calltime)
-+ __field_packed( unsigned long long, ret, rettime )
-+ __field_packed( int, ret, depth )
- ),
-
- F_printk("<-- %ps (%d) (start: %llx end: %llx) over: %d",
-diff --git a/kernel/trace/trace_export.c b/kernel/trace/trace_export.c
-index 77ce5a3b6773..70d3d0a09053 100644
---- a/kernel/trace/trace_export.c
-+++ b/kernel/trace/trace_export.c
-@@ -45,6 +45,9 @@ static int ftrace_event_register(struct trace_event_call *call,
- #undef __field_desc
- #define __field_desc(type, container, item) type item;
-
-+#undef __field_packed
-+#define __field_packed(type, container, item) type item;
-+
- #undef __array
- #define __array(type, item, size) type item[size];
-
-@@ -85,6 +88,13 @@ static void __always_unused ____ftrace_check_##name(void) \
- .size = sizeof(_type), .align = __alignof__(_type), \
- is_signed_type(_type), .filter_type = _filter_type },
-
-+
-+#undef __field_ext_packed
-+#define __field_ext_packed(_type, _item, _filter_type) { \
-+ .type = #_type, .name = #_item, \
-+ .size = sizeof(_type), .align = 1, \
-+ is_signed_type(_type), .filter_type = _filter_type },
-+
- #undef __field
- #define __field(_type, _item) __field_ext(_type, _item, FILTER_OTHER)
-
-@@ -94,6 +104,9 @@ static void __always_unused ____ftrace_check_##name(void) \
- #undef __field_desc
- #define __field_desc(_type, _container, _item) __field_ext(_type, _item, FILTER_OTHER)
-
-+#undef __field_packed
-+#define __field_packed(_type, _container, _item) __field_ext_packed(_type, _item, FILTER_OTHER)
-+
- #undef __array
- #define __array(_type, _item, _len) { \
- .type = #_type"["__stringify(_len)"]", .name = #_item, \
-@@ -129,6 +142,9 @@ static struct trace_event_fields ftrace_event_fields_##name[] = { \
- #undef __field_desc
- #define __field_desc(type, container, item)
-
-+#undef __field_packed
-+#define __field_packed(type, container, item)
-+
- #undef __array
- #define __array(type, item, len)
-
-diff --git a/kernel/trace/trace_kprobe.c b/kernel/trace/trace_kprobe.c
-index 35989383ae11..8eeb95e04bf5 100644
---- a/kernel/trace/trace_kprobe.c
-+++ b/kernel/trace/trace_kprobe.c
-@@ -1629,7 +1629,7 @@ int bpf_get_kprobe_info(const struct perf_event *event, u32 *fd_type,
- if (perf_type_tracepoint)
- tk = find_trace_kprobe(pevent, group);
- else
-- tk = event->tp_event->data;
-+ tk = trace_kprobe_primary_from_call(event->tp_event);
- if (!tk)
- return -EINVAL;
-
-diff --git a/kernel/trace/trace_probe.c b/kernel/trace/trace_probe.c
-index ab8b6436d53f..f98d6d94cbbf 100644
---- a/kernel/trace/trace_probe.c
-+++ b/kernel/trace/trace_probe.c
-@@ -639,8 +639,8 @@ static int traceprobe_parse_probe_arg_body(char *arg, ssize_t *size,
- ret = -EINVAL;
- goto fail;
- }
-- if ((code->op == FETCH_OP_IMM || code->op == FETCH_OP_COMM) ||
-- parg->count) {
-+ if ((code->op == FETCH_OP_IMM || code->op == FETCH_OP_COMM ||
-+ code->op == FETCH_OP_DATA) || parg->count) {
- /*
- * IMM, DATA and COMM is pointing actual address, those
- * must be kept, and if parg->count != 0, this is an
-diff --git a/kernel/trace/trace_uprobe.c b/kernel/trace/trace_uprobe.c
-index 2a8e8e9c1c75..fdd47f99b18f 100644
---- a/kernel/trace/trace_uprobe.c
-+++ b/kernel/trace/trace_uprobe.c
-@@ -1412,7 +1412,7 @@ int bpf_get_uprobe_info(const struct perf_event *event, u32 *fd_type,
- if (perf_type_tracepoint)
- tu = find_probe_event(pevent, group);
- else
-- tu = event->tp_event->data;
-+ tu = trace_uprobe_primary_from_call(event->tp_event);
- if (!tu)
- return -EINVAL;
-
-diff --git a/lib/zlib_inflate/inffast.c b/lib/zlib_inflate/inffast.c
-index 2c13ecc5bb2c..ed1f3df27260 100644
---- a/lib/zlib_inflate/inffast.c
-+++ b/lib/zlib_inflate/inffast.c
-@@ -10,17 +10,6 @@
-
- #ifndef ASMINF
-
--/* Allow machine dependent optimization for post-increment or pre-increment.
-- Based on testing to date,
-- Pre-increment preferred for:
-- - PowerPC G3 (Adler)
-- - MIPS R5000 (Randers-Pehrson)
-- Post-increment preferred for:
-- - none
-- No measurable difference:
-- - Pentium III (Anderson)
-- - M68060 (Nikl)
-- */
- union uu {
- unsigned short us;
- unsigned char b[2];
-@@ -38,16 +27,6 @@ get_unaligned16(const unsigned short *p)
- return mm.us;
- }
-
--#ifdef POSTINC
--# define OFF 0
--# define PUP(a) *(a)++
--# define UP_UNALIGNED(a) get_unaligned16((a)++)
--#else
--# define OFF 1
--# define PUP(a) *++(a)
--# define UP_UNALIGNED(a) get_unaligned16(++(a))
--#endif
--
- /*
- Decode literal, length, and distance codes and write out the resulting
- literal and match bytes until either not enough input or output is
-@@ -115,9 +94,9 @@ void inflate_fast(z_streamp strm, unsigned start)
-
- /* copy state to local variables */
- state = (struct inflate_state *)strm->state;
-- in = strm->next_in - OFF;
-+ in = strm->next_in;
- last = in + (strm->avail_in - 5);
-- out = strm->next_out - OFF;
-+ out = strm->next_out;
- beg = out - (start - strm->avail_out);
- end = out + (strm->avail_out - 257);
- #ifdef INFLATE_STRICT
-@@ -138,9 +117,9 @@ void inflate_fast(z_streamp strm, unsigned start)
- input data or output space */
- do {
- if (bits < 15) {
-- hold += (unsigned long)(PUP(in)) << bits;
-+ hold += (unsigned long)(*in++) << bits;
- bits += 8;
-- hold += (unsigned long)(PUP(in)) << bits;
-+ hold += (unsigned long)(*in++) << bits;
- bits += 8;
- }
- this = lcode[hold & lmask];
-@@ -150,14 +129,14 @@ void inflate_fast(z_streamp strm, unsigned start)
- bits -= op;
- op = (unsigned)(this.op);
- if (op == 0) { /* literal */
-- PUP(out) = (unsigned char)(this.val);
-+ *out++ = (unsigned char)(this.val);
- }
- else if (op & 16) { /* length base */
- len = (unsigned)(this.val);
- op &= 15; /* number of extra bits */
- if (op) {
- if (bits < op) {
-- hold += (unsigned long)(PUP(in)) << bits;
-+ hold += (unsigned long)(*in++) << bits;
- bits += 8;
- }
- len += (unsigned)hold & ((1U << op) - 1);
-@@ -165,9 +144,9 @@ void inflate_fast(z_streamp strm, unsigned start)
- bits -= op;
- }
- if (bits < 15) {
-- hold += (unsigned long)(PUP(in)) << bits;
-+ hold += (unsigned long)(*in++) << bits;
- bits += 8;
-- hold += (unsigned long)(PUP(in)) << bits;
-+ hold += (unsigned long)(*in++) << bits;
- bits += 8;
- }
- this = dcode[hold & dmask];
-@@ -180,10 +159,10 @@ void inflate_fast(z_streamp strm, unsigned start)
- dist = (unsigned)(this.val);
- op &= 15; /* number of extra bits */
- if (bits < op) {
-- hold += (unsigned long)(PUP(in)) << bits;
-+ hold += (unsigned long)(*in++) << bits;
- bits += 8;
- if (bits < op) {
-- hold += (unsigned long)(PUP(in)) << bits;
-+ hold += (unsigned long)(*in++) << bits;
- bits += 8;
- }
- }
-@@ -205,13 +184,13 @@ void inflate_fast(z_streamp strm, unsigned start)
- state->mode = BAD;
- break;
- }
-- from = window - OFF;
-+ from = window;
- if (write == 0) { /* very common case */
- from += wsize - op;
- if (op < len) { /* some from window */
- len -= op;
- do {
-- PUP(out) = PUP(from);
-+ *out++ = *from++;
- } while (--op);
- from = out - dist; /* rest from output */
- }
-@@ -222,14 +201,14 @@ void inflate_fast(z_streamp strm, unsigned start)
- if (op < len) { /* some from end of window */
- len -= op;
- do {
-- PUP(out) = PUP(from);
-+ *out++ = *from++;
- } while (--op);
-- from = window - OFF;
-+ from = window;
- if (write < len) { /* some from start of window */
- op = write;
- len -= op;
- do {
-- PUP(out) = PUP(from);
-+ *out++ = *from++;
- } while (--op);
- from = out - dist; /* rest from output */
- }
-@@ -240,21 +219,21 @@ void inflate_fast(z_streamp strm, unsigned start)
- if (op < len) { /* some from window */
- len -= op;
- do {
-- PUP(out) = PUP(from);
-+ *out++ = *from++;
- } while (--op);
- from = out - dist; /* rest from output */
- }
- }
- while (len > 2) {
-- PUP(out) = PUP(from);
-- PUP(out) = PUP(from);
-- PUP(out) = PUP(from);
-+ *out++ = *from++;
-+ *out++ = *from++;
-+ *out++ = *from++;
- len -= 3;
- }
- if (len) {
-- PUP(out) = PUP(from);
-+ *out++ = *from++;
- if (len > 1)
-- PUP(out) = PUP(from);
-+ *out++ = *from++;
- }
- }
- else {
-@@ -264,29 +243,29 @@ void inflate_fast(z_streamp strm, unsigned start)
- from = out - dist; /* copy direct from output */
- /* minimum length is three */
- /* Align out addr */
-- if (!((long)(out - 1 + OFF) & 1)) {
-- PUP(out) = PUP(from);
-+ if (!((long)(out - 1) & 1)) {
-+ *out++ = *from++;
- len--;
- }
-- sout = (unsigned short *)(out - OFF);
-+ sout = (unsigned short *)(out);
- if (dist > 2) {
- unsigned short *sfrom;
-
-- sfrom = (unsigned short *)(from - OFF);
-+ sfrom = (unsigned short *)(from);
- loops = len >> 1;
- do
- #ifdef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
-- PUP(sout) = PUP(sfrom);
-+ *sout++ = *sfrom++;
- #else
-- PUP(sout) = UP_UNALIGNED(sfrom);
-+ *sout++ = get_unaligned16(sfrom++);
- #endif
- while (--loops);
-- out = (unsigned char *)sout + OFF;
-- from = (unsigned char *)sfrom + OFF;
-+ out = (unsigned char *)sout;
-+ from = (unsigned char *)sfrom;
- } else { /* dist == 1 or dist == 2 */
- unsigned short pat16;
-
-- pat16 = *(sout-1+OFF);
-+ pat16 = *(sout-1);
- if (dist == 1) {
- union uu mm;
- /* copy one char pattern to both bytes */
-@@ -296,12 +275,12 @@ void inflate_fast(z_streamp strm, unsigned start)
- }
- loops = len >> 1;
- do
-- PUP(sout) = pat16;
-+ *sout++ = pat16;
- while (--loops);
-- out = (unsigned char *)sout + OFF;
-+ out = (unsigned char *)sout;
- }
- if (len & 1)
-- PUP(out) = PUP(from);
-+ *out++ = *from++;
- }
- }
- else if ((op & 64) == 0) { /* 2nd level distance code */
-@@ -336,8 +315,8 @@ void inflate_fast(z_streamp strm, unsigned start)
- hold &= (1U << bits) - 1;
-
- /* update state and return */
-- strm->next_in = in + OFF;
-- strm->next_out = out + OFF;
-+ strm->next_in = in;
-+ strm->next_out = out;
- strm->avail_in = (unsigned)(in < last ? 5 + (last - in) : 5 - (in - last));
- strm->avail_out = (unsigned)(out < end ?
- 257 + (end - out) : 257 - (out - end));
-diff --git a/net/core/dev.c b/net/core/dev.c
-index 2d8aceee4284..93a279ab4e97 100644
---- a/net/core/dev.c
-+++ b/net/core/dev.c
-@@ -79,6 +79,7 @@
- #include <linux/sched.h>
- #include <linux/sched/mm.h>
- #include <linux/mutex.h>
-+#include <linux/rwsem.h>
- #include <linux/string.h>
- #include <linux/mm.h>
- #include <linux/socket.h>
-@@ -194,7 +195,7 @@ static DEFINE_SPINLOCK(napi_hash_lock);
- static unsigned int napi_gen_id = NR_CPUS;
- static DEFINE_READ_MOSTLY_HASHTABLE(napi_hash, 8);
-
--static seqcount_t devnet_rename_seq;
-+static DECLARE_RWSEM(devnet_rename_sem);
-
- static inline void dev_base_seq_inc(struct net *net)
- {
-@@ -930,33 +931,28 @@ EXPORT_SYMBOL(dev_get_by_napi_id);
- * @net: network namespace
- * @name: a pointer to the buffer where the name will be stored.
- * @ifindex: the ifindex of the interface to get the name from.
-- *
-- * The use of raw_seqcount_begin() and cond_resched() before
-- * retrying is required as we want to give the writers a chance
-- * to complete when CONFIG_PREEMPTION is not set.
- */
- int netdev_get_name(struct net *net, char *name, int ifindex)
- {
- struct net_device *dev;
-- unsigned int seq;
-+ int ret;
-
--retry:
-- seq = raw_seqcount_begin(&devnet_rename_seq);
-+ down_read(&devnet_rename_sem);
- rcu_read_lock();
-+
- dev = dev_get_by_index_rcu(net, ifindex);
- if (!dev) {
-- rcu_read_unlock();
-- return -ENODEV;
-+ ret = -ENODEV;
-+ goto out;
- }
-
- strcpy(name, dev->name);
-- rcu_read_unlock();
-- if (read_seqcount_retry(&devnet_rename_seq, seq)) {
-- cond_resched();
-- goto retry;
-- }
-
-- return 0;
-+ ret = 0;
-+out:
-+ rcu_read_unlock();
-+ up_read(&devnet_rename_sem);
-+ return ret;
- }
-
- /**
-@@ -1228,10 +1224,10 @@ int dev_change_name(struct net_device *dev, const char *newname)
- likely(!(dev->priv_flags & IFF_LIVE_RENAME_OK)))
- return -EBUSY;
-
-- write_seqcount_begin(&devnet_rename_seq);
-+ down_write(&devnet_rename_sem);
-
- if (strncmp(newname, dev->name, IFNAMSIZ) == 0) {
-- write_seqcount_end(&devnet_rename_seq);
-+ up_write(&devnet_rename_sem);
- return 0;
- }
-
-@@ -1239,7 +1235,7 @@ int dev_change_name(struct net_device *dev, const char *newname)
-
- err = dev_get_valid_name(net, dev, newname);
- if (err < 0) {
-- write_seqcount_end(&devnet_rename_seq);
-+ up_write(&devnet_rename_sem);
- return err;
- }
-
-@@ -1254,11 +1250,11 @@ rollback:
- if (ret) {
- memcpy(dev->name, oldname, IFNAMSIZ);
- dev->name_assign_type = old_assign_type;
-- write_seqcount_end(&devnet_rename_seq);
-+ up_write(&devnet_rename_sem);
- return ret;
- }
-
-- write_seqcount_end(&devnet_rename_seq);
-+ up_write(&devnet_rename_sem);
-
- netdev_adjacent_rename_links(dev, oldname);
-
-@@ -1279,7 +1275,7 @@ rollback:
- /* err >= 0 after dev_alloc_name() or stores the first errno */
- if (err >= 0) {
- err = ret;
-- write_seqcount_begin(&devnet_rename_seq);
-+ down_write(&devnet_rename_sem);
- memcpy(dev->name, oldname, IFNAMSIZ);
- memcpy(oldname, newname, IFNAMSIZ);
- dev->name_assign_type = old_assign_type;
-diff --git a/net/core/filter.c b/net/core/filter.c
-index 11b97c31bca5..9512a9772d69 100644
---- a/net/core/filter.c
-+++ b/net/core/filter.c
-@@ -1766,25 +1766,27 @@ BPF_CALL_5(bpf_skb_load_bytes_relative, const struct sk_buff *, skb,
- u32, offset, void *, to, u32, len, u32, start_header)
- {
- u8 *end = skb_tail_pointer(skb);
-- u8 *net = skb_network_header(skb);
-- u8 *mac = skb_mac_header(skb);
-- u8 *ptr;
-+ u8 *start, *ptr;
-
-- if (unlikely(offset > 0xffff || len > (end - mac)))
-+ if (unlikely(offset > 0xffff))
- goto err_clear;
-
- switch (start_header) {
- case BPF_HDR_START_MAC:
-- ptr = mac + offset;
-+ if (unlikely(!skb_mac_header_was_set(skb)))
-+ goto err_clear;
-+ start = skb_mac_header(skb);
- break;
- case BPF_HDR_START_NET:
-- ptr = net + offset;
-+ start = skb_network_header(skb);
- break;
- default:
- goto err_clear;
- }
-
-- if (likely(ptr >= mac && ptr + len <= end)) {
-+ ptr = start + offset;
-+
-+ if (likely(ptr + len <= end)) {
- memcpy(to, ptr, len);
- return 0;
- }
-diff --git a/net/core/sock_map.c b/net/core/sock_map.c
-index b08dfae10f88..591457fcbd02 100644
---- a/net/core/sock_map.c
-+++ b/net/core/sock_map.c
-@@ -417,10 +417,7 @@ static int sock_map_get_next_key(struct bpf_map *map, void *key, void *next)
- return 0;
- }
-
--static bool sock_map_redirect_allowed(const struct sock *sk)
--{
-- return sk->sk_state != TCP_LISTEN;
--}
-+static bool sock_map_redirect_allowed(const struct sock *sk);
-
- static int sock_map_update_common(struct bpf_map *map, u32 idx,
- struct sock *sk, u64 flags)
-@@ -501,6 +498,11 @@ static bool sk_is_udp(const struct sock *sk)
- sk->sk_protocol == IPPROTO_UDP;
- }
-
-+static bool sock_map_redirect_allowed(const struct sock *sk)
-+{
-+ return sk_is_tcp(sk) && sk->sk_state != TCP_LISTEN;
-+}
-+
- static bool sock_map_sk_is_suitable(const struct sock *sk)
- {
- return sk_is_tcp(sk) || sk_is_udp(sk);
-@@ -982,11 +984,15 @@ static struct bpf_map *sock_hash_alloc(union bpf_attr *attr)
- err = -EINVAL;
- goto free_htab;
- }
-+ err = bpf_map_charge_init(&htab->map.memory, cost);
-+ if (err)
-+ goto free_htab;
-
- htab->buckets = bpf_map_area_alloc(htab->buckets_num *
- sizeof(struct bpf_htab_bucket),
- htab->map.numa_node);
- if (!htab->buckets) {
-+ bpf_map_charge_finish(&htab->map.memory);
- err = -ENOMEM;
- goto free_htab;
- }
-@@ -1006,6 +1012,7 @@ static void sock_hash_free(struct bpf_map *map)
- {
- struct bpf_htab *htab = container_of(map, struct bpf_htab, map);
- struct bpf_htab_bucket *bucket;
-+ struct hlist_head unlink_list;
- struct bpf_htab_elem *elem;
- struct hlist_node *node;
- int i;
-@@ -1017,13 +1024,32 @@ static void sock_hash_free(struct bpf_map *map)
- synchronize_rcu();
- for (i = 0; i < htab->buckets_num; i++) {
- bucket = sock_hash_select_bucket(htab, i);
-- hlist_for_each_entry_safe(elem, node, &bucket->head, node) {
-- hlist_del_rcu(&elem->node);
-+
-+ /* We are racing with sock_hash_delete_from_link to
-+ * enter the spin-lock critical section. Every socket on
-+ * the list is still linked to sockhash. Since link
-+ * exists, psock exists and holds a ref to socket. That
-+ * lets us to grab a socket ref too.
-+ */
-+ raw_spin_lock_bh(&bucket->lock);
-+ hlist_for_each_entry(elem, &bucket->head, node)
-+ sock_hold(elem->sk);
-+ hlist_move_list(&bucket->head, &unlink_list);
-+ raw_spin_unlock_bh(&bucket->lock);
-+
-+ /* Process removed entries out of atomic context to
-+ * block for socket lock before deleting the psock's
-+ * link to sockhash.
-+ */
-+ hlist_for_each_entry_safe(elem, node, &unlink_list, node) {
-+ hlist_del(&elem->node);
- lock_sock(elem->sk);
- rcu_read_lock();
- sock_map_unref(elem->sk, elem);
- rcu_read_unlock();
- release_sock(elem->sk);
-+ sock_put(elem->sk);
-+ sock_hash_free_elem(htab, elem);
- }
- }
-
-diff --git a/net/ipv4/tcp_bpf.c b/net/ipv4/tcp_bpf.c
-index 629aaa9a1eb9..7aa68f4aae6c 100644
---- a/net/ipv4/tcp_bpf.c
-+++ b/net/ipv4/tcp_bpf.c
-@@ -64,6 +64,9 @@ int __tcp_bpf_recvmsg(struct sock *sk, struct sk_psock *psock,
- } while (i != msg_rx->sg.end);
-
- if (unlikely(peek)) {
-+ if (msg_rx == list_last_entry(&psock->ingress_msg,
-+ struct sk_msg, list))
-+ break;
- msg_rx = list_next_entry(msg_rx, list);
- continue;
- }
-@@ -242,6 +245,9 @@ static int tcp_bpf_wait_data(struct sock *sk, struct sk_psock *psock,
- DEFINE_WAIT_FUNC(wait, woken_wake_function);
- int ret = 0;
-
-+ if (sk->sk_shutdown & RCV_SHUTDOWN)
-+ return 1;
-+
- if (!timeo)
- return ret;
-
-diff --git a/net/netfilter/nft_set_pipapo.c b/net/netfilter/nft_set_pipapo.c
-index 8b5acc6910fd..8c04388296b0 100644
---- a/net/netfilter/nft_set_pipapo.c
-+++ b/net/netfilter/nft_set_pipapo.c
-@@ -1242,7 +1242,9 @@ static int nft_pipapo_insert(const struct net *net, const struct nft_set *set,
- end += NFT_PIPAPO_GROUPS_PADDED_SIZE(f);
- }
-
-- if (!*this_cpu_ptr(m->scratch) || bsize_max > m->bsize_max) {
-+ if (!*get_cpu_ptr(m->scratch) || bsize_max > m->bsize_max) {
-+ put_cpu_ptr(m->scratch);
-+
- err = pipapo_realloc_scratch(m, bsize_max);
- if (err)
- return err;
-@@ -1250,6 +1252,8 @@ static int nft_pipapo_insert(const struct net *net, const struct nft_set *set,
- this_cpu_write(nft_pipapo_scratch_index, false);
-
- m->bsize_max = bsize_max;
-+ } else {
-+ put_cpu_ptr(m->scratch);
- }
-
- *ext2 = &e->ext;
-diff --git a/net/netfilter/nft_set_rbtree.c b/net/netfilter/nft_set_rbtree.c
-index 62f416bc0579..b6aad3fc46c3 100644
---- a/net/netfilter/nft_set_rbtree.c
-+++ b/net/netfilter/nft_set_rbtree.c
-@@ -271,12 +271,14 @@ static int __nft_rbtree_insert(const struct net *net, const struct nft_set *set,
-
- if (nft_rbtree_interval_start(new)) {
- if (nft_rbtree_interval_end(rbe) &&
-- nft_set_elem_active(&rbe->ext, genmask))
-+ nft_set_elem_active(&rbe->ext, genmask) &&
-+ !nft_set_elem_expired(&rbe->ext))
- overlap = false;
- } else {
- overlap = nft_rbtree_interval_end(rbe) &&
- nft_set_elem_active(&rbe->ext,
-- genmask);
-+ genmask) &&
-+ !nft_set_elem_expired(&rbe->ext);
- }
- } else if (d > 0) {
- p = &parent->rb_right;
-@@ -284,9 +286,11 @@ static int __nft_rbtree_insert(const struct net *net, const struct nft_set *set,
- if (nft_rbtree_interval_end(new)) {
- overlap = nft_rbtree_interval_end(rbe) &&
- nft_set_elem_active(&rbe->ext,
-- genmask);
-+ genmask) &&
-+ !nft_set_elem_expired(&rbe->ext);
- } else if (nft_rbtree_interval_end(rbe) &&
-- nft_set_elem_active(&rbe->ext, genmask)) {
-+ nft_set_elem_active(&rbe->ext, genmask) &&
-+ !nft_set_elem_expired(&rbe->ext)) {
- overlap = true;
- }
- } else {
-@@ -294,15 +298,18 @@ static int __nft_rbtree_insert(const struct net *net, const struct nft_set *set,
- nft_rbtree_interval_start(new)) {
- p = &parent->rb_left;
-
-- if (nft_set_elem_active(&rbe->ext, genmask))
-+ if (nft_set_elem_active(&rbe->ext, genmask) &&
-+ !nft_set_elem_expired(&rbe->ext))
- overlap = false;
- } else if (nft_rbtree_interval_start(rbe) &&
- nft_rbtree_interval_end(new)) {
- p = &parent->rb_right;
-
-- if (nft_set_elem_active(&rbe->ext, genmask))
-+ if (nft_set_elem_active(&rbe->ext, genmask) &&
-+ !nft_set_elem_expired(&rbe->ext))
- overlap = false;
-- } else if (nft_set_elem_active(&rbe->ext, genmask)) {
-+ } else if (nft_set_elem_active(&rbe->ext, genmask) &&
-+ !nft_set_elem_expired(&rbe->ext)) {
- *ext = &rbe->ext;
- return -EEXIST;
- } else {
-diff --git a/net/rxrpc/proc.c b/net/rxrpc/proc.c
-index 8b179e3c802a..543afd9bd664 100644
---- a/net/rxrpc/proc.c
-+++ b/net/rxrpc/proc.c
-@@ -68,7 +68,7 @@ static int rxrpc_call_seq_show(struct seq_file *seq, void *v)
- "Proto Local "
- " Remote "
- " SvID ConnID CallID End Use State Abort "
-- " UserID TxSeq TW RxSeq RW RxSerial RxTimo\n");
-+ " DebugId TxSeq TW RxSeq RW RxSerial RxTimo\n");
- return 0;
- }
-
-@@ -100,7 +100,7 @@ static int rxrpc_call_seq_show(struct seq_file *seq, void *v)
- rx_hard_ack = READ_ONCE(call->rx_hard_ack);
- seq_printf(seq,
- "UDP %-47.47s %-47.47s %4x %08x %08x %s %3u"
-- " %-8.8s %08x %lx %08x %02x %08x %02x %08x %06lx\n",
-+ " %-8.8s %08x %08x %08x %02x %08x %02x %08x %06lx\n",
- lbuff,
- rbuff,
- call->service_id,
-@@ -110,7 +110,7 @@ static int rxrpc_call_seq_show(struct seq_file *seq, void *v)
- atomic_read(&call->usage),
- rxrpc_call_states[call->state],
- call->abort_code,
-- call->user_call_ID,
-+ call->debug_id,
- tx_hard_ack, READ_ONCE(call->tx_top) - tx_hard_ack,
- rx_hard_ack, READ_ONCE(call->rx_top) - rx_hard_ack,
- call->rx_serial,
-diff --git a/net/sunrpc/addr.c b/net/sunrpc/addr.c
-index 8b4d72b1a066..010dcb876f9d 100644
---- a/net/sunrpc/addr.c
-+++ b/net/sunrpc/addr.c
-@@ -82,11 +82,11 @@ static size_t rpc_ntop6(const struct sockaddr *sap,
-
- rc = snprintf(scopebuf, sizeof(scopebuf), "%c%u",
- IPV6_SCOPE_DELIMITER, sin6->sin6_scope_id);
-- if (unlikely((size_t)rc > sizeof(scopebuf)))
-+ if (unlikely((size_t)rc >= sizeof(scopebuf)))
- return 0;
-
- len += rc;
-- if (unlikely(len > buflen))
-+ if (unlikely(len >= buflen))
- return 0;
-
- strcat(buf, scopebuf);
-diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
-index c350108aa38d..a4676107fad0 100644
---- a/net/xdp/xsk.c
-+++ b/net/xdp/xsk.c
-@@ -397,10 +397,8 @@ static int xsk_generic_xmit(struct sock *sk)
-
- len = desc.len;
- skb = sock_alloc_send_skb(sk, len, 1, &err);
-- if (unlikely(!skb)) {
-- err = -EAGAIN;
-+ if (unlikely(!skb))
- goto out;
-- }
-
- skb_put(skb, len);
- addr = desc.addr;
-diff --git a/samples/ftrace/sample-trace-array.c b/samples/ftrace/sample-trace-array.c
-index d523450d73eb..6aba02a31c96 100644
---- a/samples/ftrace/sample-trace-array.c
-+++ b/samples/ftrace/sample-trace-array.c
-@@ -6,6 +6,7 @@
- #include <linux/timer.h>
- #include <linux/err.h>
- #include <linux/jiffies.h>
-+#include <linux/workqueue.h>
-
- /*
- * Any file that uses trace points, must include the header.
-@@ -20,6 +21,16 @@ struct trace_array *tr;
- static void mytimer_handler(struct timer_list *unused);
- static struct task_struct *simple_tsk;
-
-+static void trace_work_fn(struct work_struct *work)
-+{
-+ /*
-+ * Disable tracing for event "sample_event".
-+ */
-+ trace_array_set_clr_event(tr, "sample-subsystem", "sample_event",
-+ false);
-+}
-+static DECLARE_WORK(trace_work, trace_work_fn);
-+
- /*
- * mytimer: Timer setup to disable tracing for event "sample_event". This
- * timer is only for the purposes of the sample module to demonstrate access of
-@@ -29,11 +40,7 @@ static DEFINE_TIMER(mytimer, mytimer_handler);
-
- static void mytimer_handler(struct timer_list *unused)
- {
-- /*
-- * Disable tracing for event "sample_event".
-- */
-- trace_array_set_clr_event(tr, "sample-subsystem", "sample_event",
-- false);
-+ schedule_work(&trace_work);
- }
-
- static void simple_thread_func(int count)
-@@ -76,6 +83,7 @@ static int simple_thread(void *arg)
- simple_thread_func(count++);
-
- del_timer(&mytimer);
-+ cancel_work_sync(&trace_work);
-
- /*
- * trace_array_put() decrements the reference counter associated with
-@@ -107,8 +115,12 @@ static int __init sample_trace_array_init(void)
- trace_printk_init_buffers();
-
- simple_tsk = kthread_run(simple_thread, NULL, "sample-instance");
-- if (IS_ERR(simple_tsk))
-+ if (IS_ERR(simple_tsk)) {
-+ trace_array_put(tr);
-+ trace_array_destroy(tr);
- return -1;
-+ }
-+
- return 0;
- }
-
-diff --git a/scripts/Makefile.modpost b/scripts/Makefile.modpost
-index 957eed6a17a5..33aaa572f686 100644
---- a/scripts/Makefile.modpost
-+++ b/scripts/Makefile.modpost
-@@ -66,7 +66,7 @@ __modpost:
-
- else
-
--MODPOST += $(subst -i,-n,$(filter -i,$(MAKEFLAGS))) -s -T - \
-+MODPOST += -s -T - \
- $(if $(KBUILD_NSDEPS),-d $(MODULES_NSDEPS))
-
- ifeq ($(KBUILD_EXTMOD),)
-@@ -82,6 +82,11 @@ include $(if $(wildcard $(KBUILD_EXTMOD)/Kbuild), \
- $(KBUILD_EXTMOD)/Kbuild, $(KBUILD_EXTMOD)/Makefile)
- endif
-
-+# 'make -i -k' ignores compile errors, and builds as many modules as possible.
-+ifneq ($(findstring i,$(filter-out --%,$(MAKEFLAGS))),)
-+MODPOST += -n
-+endif
-+
- # find all modules listed in modules.order
- modules := $(sort $(shell cat $(MODORDER)))
-
-diff --git a/scripts/headers_install.sh b/scripts/headers_install.sh
-index a07668a5c36b..94a833597a88 100755
---- a/scripts/headers_install.sh
-+++ b/scripts/headers_install.sh
-@@ -64,7 +64,7 @@ configs=$(sed -e '
- d
- ' $OUTFILE)
-
--# The entries in the following list are not warned.
-+# The entries in the following list do not result in an error.
- # Please do not add a new entry. This list is only for existing ones.
- # The list will be reduced gradually, and deleted eventually. (hopefully)
- #
-@@ -98,18 +98,19 @@ include/uapi/linux/raw.h:CONFIG_MAX_RAW_DEVS
-
- for c in $configs
- do
-- warn=1
-+ leak_error=1
-
- for ignore in $config_leak_ignores
- do
- if echo "$INFILE:$c" | grep -q "$ignore$"; then
-- warn=
-+ leak_error=
- break
- fi
- done
-
-- if [ "$warn" = 1 ]; then
-- echo "warning: $INFILE: leak $c to user-space" >&2
-+ if [ "$leak_error" = 1 ]; then
-+ echo "error: $INFILE: leak $c to user-space" >&2
-+ exit 1
- fi
- done
-
-diff --git a/scripts/mksysmap b/scripts/mksysmap
-index a35acc0d0b82..9aa23d15862a 100755
---- a/scripts/mksysmap
-+++ b/scripts/mksysmap
-@@ -41,4 +41,4 @@
- # so we just ignore them to let readprofile continue to work.
- # (At least sparc64 has __crc_ in the middle).
-
--$NM -n $1 | grep -v '\( [aNUw] \)\|\(__crc_\)\|\( \$[adt]\)\|\( .L\)' > $2
-+$NM -n $1 | grep -v '\( [aNUw] \)\|\(__crc_\)\|\( \$[adt]\)\|\( \.L\)' > $2
-diff --git a/security/apparmor/domain.c b/security/apparmor/domain.c
-index a84ef030fbd7..4cfa58c07778 100644
---- a/security/apparmor/domain.c
-+++ b/security/apparmor/domain.c
-@@ -929,7 +929,8 @@ int apparmor_bprm_set_creds(struct linux_binprm *bprm)
- * aways results in a further reduction of permissions.
- */
- if ((bprm->unsafe & LSM_UNSAFE_NO_NEW_PRIVS) &&
-- !unconfined(label) && !aa_label_is_subset(new, ctx->nnp)) {
-+ !unconfined(label) &&
-+ !aa_label_is_unconfined_subset(new, ctx->nnp)) {
- error = -EPERM;
- info = "no new privs";
- goto audit;
-@@ -1207,7 +1208,7 @@ int aa_change_hat(const char *hats[], int count, u64 token, int flags)
- * reduce restrictions.
- */
- if (task_no_new_privs(current) && !unconfined(label) &&
-- !aa_label_is_subset(new, ctx->nnp)) {
-+ !aa_label_is_unconfined_subset(new, ctx->nnp)) {
- /* not an apparmor denial per se, so don't log it */
- AA_DEBUG("no_new_privs - change_hat denied");
- error = -EPERM;
-@@ -1228,7 +1229,7 @@ int aa_change_hat(const char *hats[], int count, u64 token, int flags)
- * reduce restrictions.
- */
- if (task_no_new_privs(current) && !unconfined(label) &&
-- !aa_label_is_subset(previous, ctx->nnp)) {
-+ !aa_label_is_unconfined_subset(previous, ctx->nnp)) {
- /* not an apparmor denial per se, so don't log it */
- AA_DEBUG("no_new_privs - change_hat denied");
- error = -EPERM;
-@@ -1423,7 +1424,7 @@ check:
- * reduce restrictions.
- */
- if (task_no_new_privs(current) && !unconfined(label) &&
-- !aa_label_is_subset(new, ctx->nnp)) {
-+ !aa_label_is_unconfined_subset(new, ctx->nnp)) {
- /* not an apparmor denial per se, so don't log it */
- AA_DEBUG("no_new_privs - change_hat denied");
- error = -EPERM;
-diff --git a/security/apparmor/include/label.h b/security/apparmor/include/label.h
-index 47942c4ba7ca..255764ab06e2 100644
---- a/security/apparmor/include/label.h
-+++ b/security/apparmor/include/label.h
-@@ -281,6 +281,7 @@ bool aa_label_init(struct aa_label *label, int size, gfp_t gfp);
- struct aa_label *aa_label_alloc(int size, struct aa_proxy *proxy, gfp_t gfp);
-
- bool aa_label_is_subset(struct aa_label *set, struct aa_label *sub);
-+bool aa_label_is_unconfined_subset(struct aa_label *set, struct aa_label *sub);
- struct aa_profile *__aa_label_next_not_in_set(struct label_it *I,
- struct aa_label *set,
- struct aa_label *sub);
-diff --git a/security/apparmor/label.c b/security/apparmor/label.c
-index 470693239e64..5f324d63ceaa 100644
---- a/security/apparmor/label.c
-+++ b/security/apparmor/label.c
-@@ -550,6 +550,39 @@ bool aa_label_is_subset(struct aa_label *set, struct aa_label *sub)
- return __aa_label_next_not_in_set(&i, set, sub) == NULL;
- }
-
-+/**
-+ * aa_label_is_unconfined_subset - test if @sub is a subset of @set
-+ * @set: label to test against
-+ * @sub: label to test if is subset of @set
-+ *
-+ * This checks for subset but taking into account unconfined. IF
-+ * @sub contains an unconfined profile that does not have a matching
-+ * unconfined in @set then this will not cause the test to fail.
-+ * Conversely we don't care about an unconfined in @set that is not in
-+ * @sub
-+ *
-+ * Returns: true if @sub is special_subset of @set
-+ * else false
-+ */
-+bool aa_label_is_unconfined_subset(struct aa_label *set, struct aa_label *sub)
-+{
-+ struct label_it i = { };
-+ struct aa_profile *p;
-+
-+ AA_BUG(!set);
-+ AA_BUG(!sub);
-+
-+ if (sub == set)
-+ return true;
-+
-+ do {
-+ p = __aa_label_next_not_in_set(&i, set, sub);
-+ if (p && !profile_unconfined(p))
-+ break;
-+ } while (p);
-+
-+ return p == NULL;
-+}
-
-
- /**
-@@ -1531,13 +1564,13 @@ static const char *label_modename(struct aa_ns *ns, struct aa_label *label,
-
- label_for_each(i, label, profile) {
- if (aa_ns_visible(ns, profile->ns, flags & FLAG_VIEW_SUBNS)) {
-- if (profile->mode == APPARMOR_UNCONFINED)
-+ count++;
-+ if (profile == profile->ns->unconfined)
- /* special case unconfined so stacks with
- * unconfined don't report as mixed. ie.
- * profile_foo//&:ns1:unconfined (mixed)
- */
- continue;
-- count++;
- if (mode == -1)
- mode = profile->mode;
- else if (mode != profile->mode)
-diff --git a/security/apparmor/lsm.c b/security/apparmor/lsm.c
-index b621ad74f54a..66a8504c8bea 100644
---- a/security/apparmor/lsm.c
-+++ b/security/apparmor/lsm.c
-@@ -804,7 +804,12 @@ static void apparmor_sk_clone_security(const struct sock *sk,
- struct aa_sk_ctx *ctx = SK_CTX(sk);
- struct aa_sk_ctx *new = SK_CTX(newsk);
-
-+ if (new->label)
-+ aa_put_label(new->label);
- new->label = aa_get_label(ctx->label);
-+
-+ if (new->peer)
-+ aa_put_label(new->peer);
- new->peer = aa_get_label(ctx->peer);
- }
-
-diff --git a/security/selinux/ss/conditional.c b/security/selinux/ss/conditional.c
-index da94a1b4bfda..0cc7cdd58465 100644
---- a/security/selinux/ss/conditional.c
-+++ b/security/selinux/ss/conditional.c
-@@ -27,6 +27,9 @@ static int cond_evaluate_expr(struct policydb *p, struct cond_expr *expr)
- int s[COND_EXPR_MAXDEPTH];
- int sp = -1;
-
-+ if (expr->len == 0)
-+ return -1;
-+
- for (i = 0; i < expr->len; i++) {
- struct cond_expr_node *node = &expr->nodes[i];
-
-@@ -392,27 +395,19 @@ static int cond_read_node(struct policydb *p, struct cond_node *node, void *fp)
-
- rc = next_entry(buf, fp, sizeof(u32) * 2);
- if (rc)
-- goto err;
-+ return rc;
-
- expr->expr_type = le32_to_cpu(buf[0]);
- expr->bool = le32_to_cpu(buf[1]);
-
-- if (!expr_node_isvalid(p, expr)) {
-- rc = -EINVAL;
-- goto err;
-- }
-+ if (!expr_node_isvalid(p, expr))
-+ return -EINVAL;
- }
-
- rc = cond_read_av_list(p, fp, &node->true_list, NULL);
- if (rc)
-- goto err;
-- rc = cond_read_av_list(p, fp, &node->false_list, &node->true_list);
-- if (rc)
-- goto err;
-- return 0;
--err:
-- cond_node_destroy(node);
-- return rc;
-+ return rc;
-+ return cond_read_av_list(p, fp, &node->false_list, &node->true_list);
- }
-
- int cond_read_list(struct policydb *p, void *fp)
-diff --git a/security/selinux/ss/services.c b/security/selinux/ss/services.c
-index 8ad34fd031d1..77e591fce919 100644
---- a/security/selinux/ss/services.c
-+++ b/security/selinux/ss/services.c
-@@ -2923,8 +2923,12 @@ err:
- if (*names) {
- for (i = 0; i < *len; i++)
- kfree((*names)[i]);
-+ kfree(*names);
- }
- kfree(*values);
-+ *len = 0;
-+ *names = NULL;
-+ *values = NULL;
- goto out;
- }
-
-diff --git a/sound/firewire/amdtp-am824.c b/sound/firewire/amdtp-am824.c
-index 67d735e9a6a4..fea92e148790 100644
---- a/sound/firewire/amdtp-am824.c
-+++ b/sound/firewire/amdtp-am824.c
-@@ -82,7 +82,8 @@ int amdtp_am824_set_parameters(struct amdtp_stream *s, unsigned int rate,
- if (err < 0)
- return err;
-
-- s->ctx_data.rx.fdf = AMDTP_FDF_AM824 | s->sfc;
-+ if (s->direction == AMDTP_OUT_STREAM)
-+ s->ctx_data.rx.fdf = AMDTP_FDF_AM824 | s->sfc;
-
- p->pcm_channels = pcm_channels;
- p->midi_ports = midi_ports;
-diff --git a/sound/isa/wavefront/wavefront_synth.c b/sound/isa/wavefront/wavefront_synth.c
-index c5b1d5900eed..d6420d224d09 100644
---- a/sound/isa/wavefront/wavefront_synth.c
-+++ b/sound/isa/wavefront/wavefront_synth.c
-@@ -1171,7 +1171,10 @@ wavefront_send_alias (snd_wavefront_t *dev, wavefront_patch_info *header)
- "alias for %d\n",
- header->number,
- header->hdr.a.OriginalSample);
--
-+
-+ if (header->number >= WF_MAX_SAMPLE)
-+ return -EINVAL;
-+
- munge_int32 (header->number, &alias_hdr[0], 2);
- munge_int32 (header->hdr.a.OriginalSample, &alias_hdr[2], 2);
- munge_int32 (*((unsigned int *)&header->hdr.a.sampleStartOffset),
-@@ -1202,6 +1205,9 @@ wavefront_send_multisample (snd_wavefront_t *dev, wavefront_patch_info *header)
- int num_samples;
- unsigned char *msample_hdr;
-
-+ if (header->number >= WF_MAX_SAMPLE)
-+ return -EINVAL;
-+
- msample_hdr = kmalloc(WF_MSAMPLE_BYTES, GFP_KERNEL);
- if (! msample_hdr)
- return -ENOMEM;
-diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
-index 2c4575909441..e057ecb5a904 100644
---- a/sound/pci/hda/patch_realtek.c
-+++ b/sound/pci/hda/patch_realtek.c
-@@ -81,6 +81,7 @@ struct alc_spec {
-
- /* mute LED for HP laptops, see alc269_fixup_mic_mute_hook() */
- int mute_led_polarity;
-+ int micmute_led_polarity;
- hda_nid_t mute_led_nid;
- hda_nid_t cap_mute_led_nid;
-
-@@ -4080,11 +4081,9 @@ static void alc269_fixup_hp_mute_led_mic3(struct hda_codec *codec,
-
- /* update LED status via GPIO */
- static void alc_update_gpio_led(struct hda_codec *codec, unsigned int mask,
-- bool enabled)
-+ int polarity, bool enabled)
- {
-- struct alc_spec *spec = codec->spec;
--
-- if (spec->mute_led_polarity)
-+ if (polarity)
- enabled = !enabled;
- alc_update_gpio_data(codec, mask, !enabled); /* muted -> LED on */
- }
-@@ -4095,7 +4094,8 @@ static void alc_fixup_gpio_mute_hook(void *private_data, int enabled)
- struct hda_codec *codec = private_data;
- struct alc_spec *spec = codec->spec;
-
-- alc_update_gpio_led(codec, spec->gpio_mute_led_mask, enabled);
-+ alc_update_gpio_led(codec, spec->gpio_mute_led_mask,
-+ spec->mute_led_polarity, enabled);
- }
-
- /* turn on/off mic-mute LED via GPIO per capture hook */
-@@ -4104,6 +4104,7 @@ static void alc_gpio_micmute_update(struct hda_codec *codec)
- struct alc_spec *spec = codec->spec;
-
- alc_update_gpio_led(codec, spec->gpio_mic_led_mask,
-+ spec->micmute_led_polarity,
- spec->gen.micmute_led.led_value);
- }
-
-@@ -5808,7 +5809,8 @@ static void alc280_hp_gpio4_automute_hook(struct hda_codec *codec,
-
- snd_hda_gen_hp_automute(codec, jack);
- /* mute_led_polarity is set to 0, so we pass inverted value here */
-- alc_update_gpio_led(codec, 0x10, !spec->gen.hp_jack_present);
-+ alc_update_gpio_led(codec, 0x10, spec->mute_led_polarity,
-+ !spec->gen.hp_jack_present);
- }
-
- /* Manage GPIOs for HP EliteBook Folio 9480m.
-diff --git a/sound/soc/codecs/Kconfig b/sound/soc/codecs/Kconfig
-index e60e0b6a689c..8a66f23a7b05 100644
---- a/sound/soc/codecs/Kconfig
-+++ b/sound/soc/codecs/Kconfig
-@@ -1136,10 +1136,13 @@ config SND_SOC_RT5677_SPI
- config SND_SOC_RT5682
- tristate
- depends on I2C || SOUNDWIRE
-+ depends on SOUNDWIRE || !SOUNDWIRE
-+ depends on I2C || !I2C
-
- config SND_SOC_RT5682_SDW
- tristate "Realtek RT5682 Codec - SDW"
- depends on SOUNDWIRE
-+ depends on I2C || !I2C
- select SND_SOC_RT5682
- select REGMAP_SOUNDWIRE
-
-@@ -1620,19 +1623,19 @@ config SND_SOC_WM9090
-
- config SND_SOC_WM9705
- tristate
-- depends on SND_SOC_AC97_BUS
-+ depends on SND_SOC_AC97_BUS || AC97_BUS_NEW
- select REGMAP_AC97
- select AC97_BUS_COMPAT if AC97_BUS_NEW
-
- config SND_SOC_WM9712
- tristate
-- depends on SND_SOC_AC97_BUS
-+ depends on SND_SOC_AC97_BUS || AC97_BUS_NEW
- select REGMAP_AC97
- select AC97_BUS_COMPAT if AC97_BUS_NEW
-
- config SND_SOC_WM9713
- tristate
-- depends on SND_SOC_AC97_BUS
-+ depends on SND_SOC_AC97_BUS || AC97_BUS_NEW
- select REGMAP_AC97
- select AC97_BUS_COMPAT if AC97_BUS_NEW
-
-diff --git a/sound/soc/codecs/max98373.c b/sound/soc/codecs/max98373.c
-index cae1def8902d..96718e3a1ad0 100644
---- a/sound/soc/codecs/max98373.c
-+++ b/sound/soc/codecs/max98373.c
-@@ -850,8 +850,8 @@ static int max98373_resume(struct device *dev)
- {
- struct max98373_priv *max98373 = dev_get_drvdata(dev);
-
-- max98373_reset(max98373, dev);
- regcache_cache_only(max98373->regmap, false);
-+ max98373_reset(max98373, dev);
- regcache_sync(max98373->regmap);
- return 0;
- }
-diff --git a/sound/soc/codecs/rt1308-sdw.c b/sound/soc/codecs/rt1308-sdw.c
-index a5a7e46de246..a7f45191364d 100644
---- a/sound/soc/codecs/rt1308-sdw.c
-+++ b/sound/soc/codecs/rt1308-sdw.c
-@@ -482,6 +482,9 @@ static int rt1308_set_sdw_stream(struct snd_soc_dai *dai, void *sdw_stream,
- {
- struct sdw_stream_data *stream;
-
-+ if (!sdw_stream)
-+ return 0;
-+
- stream = kzalloc(sizeof(*stream), GFP_KERNEL);
- if (!stream)
- return -ENOMEM;
-diff --git a/sound/soc/codecs/rt5645.c b/sound/soc/codecs/rt5645.c
-index 6ba1849a77b0..e2e1d5b03b38 100644
---- a/sound/soc/codecs/rt5645.c
-+++ b/sound/soc/codecs/rt5645.c
-@@ -3625,6 +3625,12 @@ static const struct rt5645_platform_data asus_t100ha_platform_data = {
- .inv_jd1_1 = true,
- };
-
-+static const struct rt5645_platform_data asus_t101ha_platform_data = {
-+ .dmic1_data_pin = RT5645_DMIC_DATA_IN2N,
-+ .dmic2_data_pin = RT5645_DMIC2_DISABLE,
-+ .jd_mode = 3,
-+};
-+
- static const struct rt5645_platform_data lenovo_ideapad_miix_310_pdata = {
- .jd_mode = 3,
- .in2_diff = true,
-@@ -3708,6 +3714,14 @@ static const struct dmi_system_id dmi_platform_data[] = {
- },
- .driver_data = (void *)&asus_t100ha_platform_data,
- },
-+ {
-+ .ident = "ASUS T101HA",
-+ .matches = {
-+ DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
-+ DMI_MATCH(DMI_PRODUCT_NAME, "T101HA"),
-+ },
-+ .driver_data = (void *)&asus_t101ha_platform_data,
-+ },
- {
- .ident = "MINIX Z83-4",
- .matches = {
-diff --git a/sound/soc/codecs/rt5682.c b/sound/soc/codecs/rt5682.c
-index d36f560ad7a8..c4892af14850 100644
---- a/sound/soc/codecs/rt5682.c
-+++ b/sound/soc/codecs/rt5682.c
-@@ -2958,6 +2958,9 @@ static int rt5682_set_sdw_stream(struct snd_soc_dai *dai, void *sdw_stream,
- {
- struct sdw_stream_data *stream;
-
-+ if (!sdw_stream)
-+ return 0;
-+
- stream = kzalloc(sizeof(*stream), GFP_KERNEL);
- if (!stream)
- return -ENOMEM;
-diff --git a/sound/soc/codecs/rt700.c b/sound/soc/codecs/rt700.c
-index ff68f0e4f629..687ac2153666 100644
---- a/sound/soc/codecs/rt700.c
-+++ b/sound/soc/codecs/rt700.c
-@@ -860,6 +860,9 @@ static int rt700_set_sdw_stream(struct snd_soc_dai *dai, void *sdw_stream,
- {
- struct sdw_stream_data *stream;
-
-+ if (!sdw_stream)
-+ return 0;
-+
- stream = kzalloc(sizeof(*stream), GFP_KERNEL);
- if (!stream)
- return -ENOMEM;
-diff --git a/sound/soc/codecs/rt711.c b/sound/soc/codecs/rt711.c
-index 2daed7692a3b..65b59dbfb43c 100644
---- a/sound/soc/codecs/rt711.c
-+++ b/sound/soc/codecs/rt711.c
-@@ -906,6 +906,9 @@ static int rt711_set_sdw_stream(struct snd_soc_dai *dai, void *sdw_stream,
- {
- struct sdw_stream_data *stream;
-
-+ if (!sdw_stream)
-+ return 0;
-+
- stream = kzalloc(sizeof(*stream), GFP_KERNEL);
- if (!stream)
- return -ENOMEM;
-diff --git a/sound/soc/codecs/rt715.c b/sound/soc/codecs/rt715.c
-index 2cbc57b16b13..099c8bd20006 100644
---- a/sound/soc/codecs/rt715.c
-+++ b/sound/soc/codecs/rt715.c
-@@ -530,6 +530,9 @@ static int rt715_set_sdw_stream(struct snd_soc_dai *dai, void *sdw_stream,
-
- struct sdw_stream_data *stream;
-
-+ if (!sdw_stream)
-+ return 0;
-+
- stream = kzalloc(sizeof(*stream), GFP_KERNEL);
- if (!stream)
- return -ENOMEM;
-diff --git a/sound/soc/fsl/fsl_asrc_dma.c b/sound/soc/fsl/fsl_asrc_dma.c
-index e7178817d7a7..1ee10eafe3e6 100644
---- a/sound/soc/fsl/fsl_asrc_dma.c
-+++ b/sound/soc/fsl/fsl_asrc_dma.c
-@@ -252,6 +252,7 @@ static int fsl_asrc_dma_hw_params(struct snd_soc_component *component,
- ret = dmaengine_slave_config(pair->dma_chan[dir], &config_be);
- if (ret) {
- dev_err(dev, "failed to config DMA channel for Back-End\n");
-+ dma_release_channel(pair->dma_chan[dir]);
- return ret;
- }
-
-diff --git a/sound/soc/fsl/fsl_esai.c b/sound/soc/fsl/fsl_esai.c
-index c7a49d03463a..84290be778f0 100644
---- a/sound/soc/fsl/fsl_esai.c
-+++ b/sound/soc/fsl/fsl_esai.c
-@@ -87,6 +87,10 @@ static irqreturn_t esai_isr(int irq, void *devid)
- if ((saisr & (ESAI_SAISR_TUE | ESAI_SAISR_ROE)) &&
- esai_priv->reset_at_xrun) {
- dev_dbg(&pdev->dev, "reset module for xrun\n");
-+ regmap_update_bits(esai_priv->regmap, REG_ESAI_TCR,
-+ ESAI_xCR_xEIE_MASK, 0);
-+ regmap_update_bits(esai_priv->regmap, REG_ESAI_RCR,
-+ ESAI_xCR_xEIE_MASK, 0);
- tasklet_schedule(&esai_priv->task);
- }
-
-diff --git a/sound/soc/img/img-i2s-in.c b/sound/soc/img/img-i2s-in.c
-index a495d1050d49..e30b66b94bf6 100644
---- a/sound/soc/img/img-i2s-in.c
-+++ b/sound/soc/img/img-i2s-in.c
-@@ -482,6 +482,7 @@ static int img_i2s_in_probe(struct platform_device *pdev)
- if (IS_ERR(rst)) {
- if (PTR_ERR(rst) == -EPROBE_DEFER) {
- ret = -EPROBE_DEFER;
-+ pm_runtime_put(&pdev->dev);
- goto err_suspend;
- }
-
-diff --git a/sound/soc/intel/boards/bytcr_rt5640.c b/sound/soc/intel/boards/bytcr_rt5640.c
-index 08f4ae964b02..5c1a5e2aff6f 100644
---- a/sound/soc/intel/boards/bytcr_rt5640.c
-+++ b/sound/soc/intel/boards/bytcr_rt5640.c
-@@ -742,6 +742,30 @@ static const struct dmi_system_id byt_rt5640_quirk_table[] = {
- BYT_RT5640_SSP0_AIF1 |
- BYT_RT5640_MCLK_EN),
- },
-+ { /* Toshiba Encore WT8-A */
-+ .matches = {
-+ DMI_EXACT_MATCH(DMI_SYS_VENDOR, "TOSHIBA"),
-+ DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "TOSHIBA WT8-A"),
-+ },
-+ .driver_data = (void *)(BYT_RT5640_DMIC1_MAP |
-+ BYT_RT5640_JD_SRC_JD2_IN4N |
-+ BYT_RT5640_OVCD_TH_2000UA |
-+ BYT_RT5640_OVCD_SF_0P75 |
-+ BYT_RT5640_JD_NOT_INV |
-+ BYT_RT5640_MCLK_EN),
-+ },
-+ { /* Toshiba Encore WT10-A */
-+ .matches = {
-+ DMI_EXACT_MATCH(DMI_SYS_VENDOR, "TOSHIBA"),
-+ DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "TOSHIBA WT10-A-103"),
-+ },
-+ .driver_data = (void *)(BYT_RT5640_DMIC1_MAP |
-+ BYT_RT5640_JD_SRC_JD1_IN4P |
-+ BYT_RT5640_OVCD_TH_2000UA |
-+ BYT_RT5640_OVCD_SF_0P75 |
-+ BYT_RT5640_SSP0_AIF2 |
-+ BYT_RT5640_MCLK_EN),
-+ },
- { /* Catch-all for generic Insyde tablets, must be last */
- .matches = {
- DMI_MATCH(DMI_SYS_VENDOR, "Insyde"),
-diff --git a/sound/soc/meson/axg-fifo.c b/sound/soc/meson/axg-fifo.c
-index 2e9b56b29d31..b2e867113226 100644
---- a/sound/soc/meson/axg-fifo.c
-+++ b/sound/soc/meson/axg-fifo.c
-@@ -249,7 +249,7 @@ int axg_fifo_pcm_open(struct snd_soc_component *component,
- /* Enable pclk to access registers and clock the fifo ip */
- ret = clk_prepare_enable(fifo->pclk);
- if (ret)
-- return ret;
-+ goto free_irq;
-
- /* Setup status2 so it reports the memory pointer */
- regmap_update_bits(fifo->map, FIFO_CTRL1,
-@@ -269,8 +269,14 @@ int axg_fifo_pcm_open(struct snd_soc_component *component,
- /* Take memory arbitror out of reset */
- ret = reset_control_deassert(fifo->arb);
- if (ret)
-- clk_disable_unprepare(fifo->pclk);
-+ goto free_clk;
-+
-+ return 0;
-
-+free_clk:
-+ clk_disable_unprepare(fifo->pclk);
-+free_irq:
-+ free_irq(fifo->irq, ss);
- return ret;
- }
- EXPORT_SYMBOL_GPL(axg_fifo_pcm_open);
-diff --git a/sound/soc/meson/meson-card-utils.c b/sound/soc/meson/meson-card-utils.c
-index 2ca8c98e204f..5a4a91c88734 100644
---- a/sound/soc/meson/meson-card-utils.c
-+++ b/sound/soc/meson/meson-card-utils.c
-@@ -49,19 +49,26 @@ int meson_card_reallocate_links(struct snd_soc_card *card,
- links = krealloc(priv->card.dai_link,
- num_links * sizeof(*priv->card.dai_link),
- GFP_KERNEL | __GFP_ZERO);
-+ if (!links)
-+ goto err_links;
-+
- ldata = krealloc(priv->link_data,
- num_links * sizeof(*priv->link_data),
- GFP_KERNEL | __GFP_ZERO);
--
-- if (!links || !ldata) {
-- dev_err(priv->card.dev, "failed to allocate links\n");
-- return -ENOMEM;
-- }
-+ if (!ldata)
-+ goto err_ldata;
-
- priv->card.dai_link = links;
- priv->link_data = ldata;
- priv->card.num_links = num_links;
- return 0;
-+
-+err_ldata:
-+ kfree(links);
-+err_links:
-+ dev_err(priv->card.dev, "failed to allocate links\n");
-+ return -ENOMEM;
-+
- }
- EXPORT_SYMBOL_GPL(meson_card_reallocate_links);
-
-diff --git a/sound/soc/qcom/qdsp6/q6asm-dai.c b/sound/soc/qcom/qdsp6/q6asm-dai.c
-index 125af00bba53..4640804aab7f 100644
---- a/sound/soc/qcom/qdsp6/q6asm-dai.c
-+++ b/sound/soc/qcom/qdsp6/q6asm-dai.c
-@@ -176,7 +176,7 @@ static const struct snd_compr_codec_caps q6asm_compr_caps = {
- };
-
- static void event_handler(uint32_t opcode, uint32_t token,
-- uint32_t *payload, void *priv)
-+ void *payload, void *priv)
- {
- struct q6asm_dai_rtd *prtd = priv;
- struct snd_pcm_substream *substream = prtd->substream;
-@@ -490,7 +490,7 @@ static int q6asm_dai_hw_params(struct snd_soc_component *component,
- }
-
- static void compress_event_handler(uint32_t opcode, uint32_t token,
-- uint32_t *payload, void *priv)
-+ void *payload, void *priv)
- {
- struct q6asm_dai_rtd *prtd = priv;
- struct snd_compr_stream *substream = prtd->cstream;
-diff --git a/sound/soc/sh/rcar/gen.c b/sound/soc/sh/rcar/gen.c
-index af19010b9d88..8bd49c8a9517 100644
---- a/sound/soc/sh/rcar/gen.c
-+++ b/sound/soc/sh/rcar/gen.c
-@@ -224,6 +224,14 @@ static int rsnd_gen2_probe(struct rsnd_priv *priv)
- RSND_GEN_S_REG(SSI_SYS_STATUS5, 0x884),
- RSND_GEN_S_REG(SSI_SYS_STATUS6, 0x888),
- RSND_GEN_S_REG(SSI_SYS_STATUS7, 0x88c),
-+ RSND_GEN_S_REG(SSI_SYS_INT_ENABLE0, 0x850),
-+ RSND_GEN_S_REG(SSI_SYS_INT_ENABLE1, 0x854),
-+ RSND_GEN_S_REG(SSI_SYS_INT_ENABLE2, 0x858),
-+ RSND_GEN_S_REG(SSI_SYS_INT_ENABLE3, 0x85c),
-+ RSND_GEN_S_REG(SSI_SYS_INT_ENABLE4, 0x890),
-+ RSND_GEN_S_REG(SSI_SYS_INT_ENABLE5, 0x894),
-+ RSND_GEN_S_REG(SSI_SYS_INT_ENABLE6, 0x898),
-+ RSND_GEN_S_REG(SSI_SYS_INT_ENABLE7, 0x89c),
- RSND_GEN_S_REG(HDMI0_SEL, 0x9e0),
- RSND_GEN_S_REG(HDMI1_SEL, 0x9e4),
-
-diff --git a/sound/soc/sh/rcar/rsnd.h b/sound/soc/sh/rcar/rsnd.h
-index ea6cbaa9743e..d47608ff5fac 100644
---- a/sound/soc/sh/rcar/rsnd.h
-+++ b/sound/soc/sh/rcar/rsnd.h
-@@ -189,6 +189,14 @@ enum rsnd_reg {
- SSI_SYS_STATUS5,
- SSI_SYS_STATUS6,
- SSI_SYS_STATUS7,
-+ SSI_SYS_INT_ENABLE0,
-+ SSI_SYS_INT_ENABLE1,
-+ SSI_SYS_INT_ENABLE2,
-+ SSI_SYS_INT_ENABLE3,
-+ SSI_SYS_INT_ENABLE4,
-+ SSI_SYS_INT_ENABLE5,
-+ SSI_SYS_INT_ENABLE6,
-+ SSI_SYS_INT_ENABLE7,
- HDMI0_SEL,
- HDMI1_SEL,
- SSI9_BUSIF0_MODE,
-@@ -237,6 +245,7 @@ enum rsnd_reg {
- #define SSI9_BUSIF_ADINR(i) (SSI9_BUSIF0_ADINR + (i))
- #define SSI9_BUSIF_DALIGN(i) (SSI9_BUSIF0_DALIGN + (i))
- #define SSI_SYS_STATUS(i) (SSI_SYS_STATUS0 + (i))
-+#define SSI_SYS_INT_ENABLE(i) (SSI_SYS_INT_ENABLE0 + (i))
-
-
- struct rsnd_priv;
-diff --git a/sound/soc/sh/rcar/ssi.c b/sound/soc/sh/rcar/ssi.c
-index 4a7d3413917f..47d5ddb526f2 100644
---- a/sound/soc/sh/rcar/ssi.c
-+++ b/sound/soc/sh/rcar/ssi.c
-@@ -372,6 +372,9 @@ static void rsnd_ssi_config_init(struct rsnd_mod *mod,
- u32 wsr = ssi->wsr;
- int width;
- int is_tdm, is_tdm_split;
-+ int id = rsnd_mod_id(mod);
-+ int i;
-+ u32 sys_int_enable = 0;
-
- is_tdm = rsnd_runtime_is_tdm(io);
- is_tdm_split = rsnd_runtime_is_tdm_split(io);
-@@ -447,6 +450,38 @@ static void rsnd_ssi_config_init(struct rsnd_mod *mod,
- cr_mode = DIEN; /* PIO : enable Data interrupt */
- }
-
-+ /* enable busif buffer over/under run interrupt. */
-+ if (is_tdm || is_tdm_split) {
-+ switch (id) {
-+ case 0:
-+ case 1:
-+ case 2:
-+ case 3:
-+ case 4:
-+ for (i = 0; i < 4; i++) {
-+ sys_int_enable = rsnd_mod_read(mod,
-+ SSI_SYS_INT_ENABLE(i * 2));
-+ sys_int_enable |= 0xf << (id * 4);
-+ rsnd_mod_write(mod,
-+ SSI_SYS_INT_ENABLE(i * 2),
-+ sys_int_enable);
-+ }
-+
-+ break;
-+ case 9:
-+ for (i = 0; i < 4; i++) {
-+ sys_int_enable = rsnd_mod_read(mod,
-+ SSI_SYS_INT_ENABLE((i * 2) + 1));
-+ sys_int_enable |= 0xf << 4;
-+ rsnd_mod_write(mod,
-+ SSI_SYS_INT_ENABLE((i * 2) + 1),
-+ sys_int_enable);
-+ }
-+
-+ break;
-+ }
-+ }
-+
- init_end:
- ssi->cr_own = cr_own;
- ssi->cr_mode = cr_mode;
-@@ -496,6 +531,13 @@ static int rsnd_ssi_quit(struct rsnd_mod *mod,
- {
- struct rsnd_ssi *ssi = rsnd_mod_to_ssi(mod);
- struct device *dev = rsnd_priv_to_dev(priv);
-+ int is_tdm, is_tdm_split;
-+ int id = rsnd_mod_id(mod);
-+ int i;
-+ u32 sys_int_enable = 0;
-+
-+ is_tdm = rsnd_runtime_is_tdm(io);
-+ is_tdm_split = rsnd_runtime_is_tdm_split(io);
-
- if (!rsnd_ssi_is_run_mods(mod, io))
- return 0;
-@@ -517,6 +559,38 @@ static int rsnd_ssi_quit(struct rsnd_mod *mod,
- ssi->wsr = 0;
- }
-
-+ /* disable busif buffer over/under run interrupt. */
-+ if (is_tdm || is_tdm_split) {
-+ switch (id) {
-+ case 0:
-+ case 1:
-+ case 2:
-+ case 3:
-+ case 4:
-+ for (i = 0; i < 4; i++) {
-+ sys_int_enable = rsnd_mod_read(mod,
-+ SSI_SYS_INT_ENABLE(i * 2));
-+ sys_int_enable &= ~(0xf << (id * 4));
-+ rsnd_mod_write(mod,
-+ SSI_SYS_INT_ENABLE(i * 2),
-+ sys_int_enable);
-+ }
-+
-+ break;
-+ case 9:
-+ for (i = 0; i < 4; i++) {
-+ sys_int_enable = rsnd_mod_read(mod,
-+ SSI_SYS_INT_ENABLE((i * 2) + 1));
-+ sys_int_enable &= ~(0xf << 4);
-+ rsnd_mod_write(mod,
-+ SSI_SYS_INT_ENABLE((i * 2) + 1),
-+ sys_int_enable);
-+ }
-+
-+ break;
-+ }
-+ }
-+
- return 0;
- }
-
-@@ -622,6 +696,11 @@ static int rsnd_ssi_irq(struct rsnd_mod *mod,
- int enable)
- {
- u32 val = 0;
-+ int is_tdm, is_tdm_split;
-+ int id = rsnd_mod_id(mod);
-+
-+ is_tdm = rsnd_runtime_is_tdm(io);
-+ is_tdm_split = rsnd_runtime_is_tdm_split(io);
-
- if (rsnd_is_gen1(priv))
- return 0;
-@@ -635,6 +714,19 @@ static int rsnd_ssi_irq(struct rsnd_mod *mod,
- if (enable)
- val = rsnd_ssi_is_dma_mode(mod) ? 0x0e000000 : 0x0f000000;
-
-+ if (is_tdm || is_tdm_split) {
-+ switch (id) {
-+ case 0:
-+ case 1:
-+ case 2:
-+ case 3:
-+ case 4:
-+ case 9:
-+ val |= 0x0000ff00;
-+ break;
-+ }
-+ }
-+
- rsnd_mod_write(mod, SSI_INT_ENABLE, val);
-
- return 0;
-@@ -651,6 +743,12 @@ static void __rsnd_ssi_interrupt(struct rsnd_mod *mod,
- u32 status;
- bool elapsed = false;
- bool stop = false;
-+ int id = rsnd_mod_id(mod);
-+ int i;
-+ int is_tdm, is_tdm_split;
-+
-+ is_tdm = rsnd_runtime_is_tdm(io);
-+ is_tdm_split = rsnd_runtime_is_tdm_split(io);
-
- spin_lock(&priv->lock);
-
-@@ -672,6 +770,53 @@ static void __rsnd_ssi_interrupt(struct rsnd_mod *mod,
- stop = true;
- }
-
-+ status = 0;
-+
-+ if (is_tdm || is_tdm_split) {
-+ switch (id) {
-+ case 0:
-+ case 1:
-+ case 2:
-+ case 3:
-+ case 4:
-+ for (i = 0; i < 4; i++) {
-+ status = rsnd_mod_read(mod,
-+ SSI_SYS_STATUS(i * 2));
-+ status &= 0xf << (id * 4);
-+
-+ if (status) {
-+ rsnd_dbg_irq_status(dev,
-+ "%s err status : 0x%08x\n",
-+ rsnd_mod_name(mod), status);
-+ rsnd_mod_write(mod,
-+ SSI_SYS_STATUS(i * 2),
-+ 0xf << (id * 4));
-+ stop = true;
-+ break;
-+ }
-+ }
-+ break;
-+ case 9:
-+ for (i = 0; i < 4; i++) {
-+ status = rsnd_mod_read(mod,
-+ SSI_SYS_STATUS((i * 2) + 1));
-+ status &= 0xf << 4;
-+
-+ if (status) {
-+ rsnd_dbg_irq_status(dev,
-+ "%s err status : 0x%08x\n",
-+ rsnd_mod_name(mod), status);
-+ rsnd_mod_write(mod,
-+ SSI_SYS_STATUS((i * 2) + 1),
-+ 0xf << 4);
-+ stop = true;
-+ break;
-+ }
-+ }
-+ break;
-+ }
-+ }
-+
- rsnd_ssi_status_clear(mod);
- rsnd_ssi_interrupt_out:
- spin_unlock(&priv->lock);
-diff --git a/sound/soc/soc-core.c b/sound/soc/soc-core.c
-index 843b8b1c89d4..e5433e8fcf19 100644
---- a/sound/soc/soc-core.c
-+++ b/sound/soc/soc-core.c
-@@ -1720,9 +1720,25 @@ match:
- dai_link->platforms->name = component->name;
-
- /* convert non BE into BE */
-- dai_link->no_pcm = 1;
-- dai_link->dpcm_playback = 1;
-- dai_link->dpcm_capture = 1;
-+ if (!dai_link->no_pcm) {
-+ dai_link->no_pcm = 1;
-+
-+ if (dai_link->dpcm_playback)
-+ dev_warn(card->dev,
-+ "invalid configuration, dailink %s has flags no_pcm=0 and dpcm_playback=1\n",
-+ dai_link->name);
-+ if (dai_link->dpcm_capture)
-+ dev_warn(card->dev,
-+ "invalid configuration, dailink %s has flags no_pcm=0 and dpcm_capture=1\n",
-+ dai_link->name);
-+
-+ /* convert normal link into DPCM one */
-+ if (!(dai_link->dpcm_playback ||
-+ dai_link->dpcm_capture)) {
-+ dai_link->dpcm_playback = !dai_link->capture_only;
-+ dai_link->dpcm_capture = !dai_link->playback_only;
-+ }
-+ }
-
- /* override any BE fixups */
- dai_link->be_hw_params_fixup =
-diff --git a/sound/soc/soc-dapm.c b/sound/soc/soc-dapm.c
-index e2632841b321..c0aa64ff8e32 100644
---- a/sound/soc/soc-dapm.c
-+++ b/sound/soc/soc-dapm.c
-@@ -4340,16 +4340,16 @@ static void dapm_connect_dai_pair(struct snd_soc_card *card,
- codec = codec_dai->playback_widget;
-
- if (playback_cpu && codec) {
-- if (dai_link->params && !dai_link->playback_widget) {
-+ if (dai_link->params && !rtd->playback_widget) {
- substream = streams[SNDRV_PCM_STREAM_PLAYBACK].substream;
- dai = snd_soc_dapm_new_dai(card, substream, "playback");
- if (IS_ERR(dai))
- goto capture;
-- dai_link->playback_widget = dai;
-+ rtd->playback_widget = dai;
- }
-
- dapm_connect_dai_routes(&card->dapm, cpu_dai, playback_cpu,
-- dai_link->playback_widget,
-+ rtd->playback_widget,
- codec_dai, codec);
- }
-
-@@ -4358,16 +4358,16 @@ capture:
- codec = codec_dai->capture_widget;
-
- if (codec && capture_cpu) {
-- if (dai_link->params && !dai_link->capture_widget) {
-+ if (dai_link->params && !rtd->capture_widget) {
- substream = streams[SNDRV_PCM_STREAM_CAPTURE].substream;
- dai = snd_soc_dapm_new_dai(card, substream, "capture");
- if (IS_ERR(dai))
- return;
-- dai_link->capture_widget = dai;
-+ rtd->capture_widget = dai;
- }
-
- dapm_connect_dai_routes(&card->dapm, codec_dai, codec,
-- dai_link->capture_widget,
-+ rtd->capture_widget,
- cpu_dai, capture_cpu);
- }
- }
-diff --git a/sound/soc/soc-pcm.c b/sound/soc/soc-pcm.c
-index 1f302de44052..39ce61c5b874 100644
---- a/sound/soc/soc-pcm.c
-+++ b/sound/soc/soc-pcm.c
-@@ -2908,20 +2908,44 @@ int soc_new_pcm(struct snd_soc_pcm_runtime *rtd, int num)
- struct snd_pcm *pcm;
- char new_name[64];
- int ret = 0, playback = 0, capture = 0;
-+ int stream;
- int i;
-
-+ if (rtd->dai_link->dynamic && rtd->num_cpus > 1) {
-+ dev_err(rtd->dev,
-+ "DPCM doesn't support Multi CPU for Front-Ends yet\n");
-+ return -EINVAL;
-+ }
-+
- if (rtd->dai_link->dynamic || rtd->dai_link->no_pcm) {
-- cpu_dai = asoc_rtd_to_cpu(rtd, 0);
-- if (rtd->num_cpus > 1) {
-- dev_err(rtd->dev,
-- "DPCM doesn't support Multi CPU yet\n");
-- return -EINVAL;
-+ if (rtd->dai_link->dpcm_playback) {
-+ stream = SNDRV_PCM_STREAM_PLAYBACK;
-+
-+ for_each_rtd_cpu_dais(rtd, i, cpu_dai)
-+ if (!snd_soc_dai_stream_valid(cpu_dai,
-+ stream)) {
-+ dev_err(rtd->card->dev,
-+ "CPU DAI %s for rtd %s does not support playback\n",
-+ cpu_dai->name,
-+ rtd->dai_link->stream_name);
-+ return -EINVAL;
-+ }
-+ playback = 1;
-+ }
-+ if (rtd->dai_link->dpcm_capture) {
-+ stream = SNDRV_PCM_STREAM_CAPTURE;
-+
-+ for_each_rtd_cpu_dais(rtd, i, cpu_dai)
-+ if (!snd_soc_dai_stream_valid(cpu_dai,
-+ stream)) {
-+ dev_err(rtd->card->dev,
-+ "CPU DAI %s for rtd %s does not support capture\n",
-+ cpu_dai->name,
-+ rtd->dai_link->stream_name);
-+ return -EINVAL;
-+ }
-+ capture = 1;
- }
--
-- playback = rtd->dai_link->dpcm_playback &&
-- snd_soc_dai_stream_valid(cpu_dai, SNDRV_PCM_STREAM_PLAYBACK);
-- capture = rtd->dai_link->dpcm_capture &&
-- snd_soc_dai_stream_valid(cpu_dai, SNDRV_PCM_STREAM_CAPTURE);
- } else {
- /* Adapt stream for codec2codec links */
- int cpu_capture = rtd->dai_link->params ?
-diff --git a/sound/soc/sof/control.c b/sound/soc/sof/control.c
-index dfc412e2d956..6d63768d42aa 100644
---- a/sound/soc/sof/control.c
-+++ b/sound/soc/sof/control.c
-@@ -19,8 +19,8 @@ static void update_mute_led(struct snd_sof_control *scontrol,
- struct snd_kcontrol *kcontrol,
- struct snd_ctl_elem_value *ucontrol)
- {
-- unsigned int temp = 0;
-- unsigned int mask;
-+ int temp = 0;
-+ int mask;
- int i;
-
- mask = 1U << snd_ctl_get_ioffidx(kcontrol, &ucontrol->id);
-diff --git a/sound/soc/sof/core.c b/sound/soc/sof/core.c
-index 91acfae7935c..74b438216216 100644
---- a/sound/soc/sof/core.c
-+++ b/sound/soc/sof/core.c
-@@ -176,6 +176,7 @@ static int sof_probe_continue(struct snd_sof_dev *sdev)
- /* init the IPC */
- sdev->ipc = snd_sof_ipc_init(sdev);
- if (!sdev->ipc) {
-+ ret = -ENOMEM;
- dev_err(sdev->dev, "error: failed to init DSP IPC %d\n", ret);
- goto ipc_err;
- }
-diff --git a/sound/soc/sof/imx/Kconfig b/sound/soc/sof/imx/Kconfig
-index bae4f7bf5f75..812749064ca8 100644
---- a/sound/soc/sof/imx/Kconfig
-+++ b/sound/soc/sof/imx/Kconfig
-@@ -14,7 +14,7 @@ if SND_SOC_SOF_IMX_TOPLEVEL
- config SND_SOC_SOF_IMX8_SUPPORT
- bool "SOF support for i.MX8"
- depends on IMX_SCU
-- depends on IMX_DSP
-+ select IMX_DSP
- help
- This adds support for Sound Open Firmware for NXP i.MX8 platforms
- Say Y if you have such a device.
-diff --git a/sound/soc/sof/intel/hda-codec.c b/sound/soc/sof/intel/hda-codec.c
-index 3041fbbb010a..ea021db697b8 100644
---- a/sound/soc/sof/intel/hda-codec.c
-+++ b/sound/soc/sof/intel/hda-codec.c
-@@ -24,19 +24,44 @@
- #define IDISP_VID_INTEL 0x80860000
-
- /* load the legacy HDA codec driver */
--static int hda_codec_load_module(struct hda_codec *codec)
-+static int request_codec_module(struct hda_codec *codec)
- {
- #ifdef MODULE
- char alias[MODULE_NAME_LEN];
-- const char *module = alias;
-+ const char *mod = NULL;
-
-- snd_hdac_codec_modalias(&codec->core, alias, sizeof(alias));
-- dev_dbg(&codec->core.dev, "loading codec module: %s\n", module);
-- request_module(module);
-+ switch (codec->probe_id) {
-+ case HDA_CODEC_ID_GENERIC:
-+#if IS_MODULE(CONFIG_SND_HDA_GENERIC)
-+ mod = "snd-hda-codec-generic";
- #endif
-+ break;
-+ default:
-+ snd_hdac_codec_modalias(&codec->core, alias, sizeof(alias));
-+ mod = alias;
-+ break;
-+ }
-+
-+ if (mod) {
-+ dev_dbg(&codec->core.dev, "loading codec module: %s\n", mod);
-+ request_module(mod);
-+ }
-+#endif /* MODULE */
- return device_attach(hda_codec_dev(codec));
- }
-
-+static int hda_codec_load_module(struct hda_codec *codec)
-+{
-+ int ret = request_codec_module(codec);
-+
-+ if (ret <= 0) {
-+ codec->probe_id = HDA_CODEC_ID_GENERIC;
-+ ret = request_codec_module(codec);
-+ }
-+
-+ return ret;
-+}
-+
- /* enable controller wake up event for all codecs with jack connectors */
- void hda_codec_jack_wake_enable(struct snd_sof_dev *sdev)
- {
-@@ -78,6 +103,13 @@ void hda_codec_jack_check(struct snd_sof_dev *sdev) {}
- EXPORT_SYMBOL_NS(hda_codec_jack_wake_enable, SND_SOC_SOF_HDA_AUDIO_CODEC);
- EXPORT_SYMBOL_NS(hda_codec_jack_check, SND_SOC_SOF_HDA_AUDIO_CODEC);
-
-+#if IS_ENABLED(CONFIG_SND_HDA_GENERIC)
-+#define is_generic_config(bus) \
-+ ((bus)->modelname && !strcmp((bus)->modelname, "generic"))
-+#else
-+#define is_generic_config(x) 0
-+#endif
-+
- /* probe individual codec */
- static int hda_codec_probe(struct snd_sof_dev *sdev, int address,
- bool hda_codec_use_common_hdmi)
-@@ -87,6 +119,7 @@ static int hda_codec_probe(struct snd_sof_dev *sdev, int address,
- #endif
- struct hda_bus *hbus = sof_to_hbus(sdev);
- struct hdac_device *hdev;
-+ struct hda_codec *codec;
- u32 hda_cmd = (address << 28) | (AC_NODE_ROOT << 20) |
- (AC_VERB_PARAMETERS << 8) | AC_PAR_VENDOR_ID;
- u32 resp = -1;
-@@ -108,6 +141,7 @@ static int hda_codec_probe(struct snd_sof_dev *sdev, int address,
-
- hda_priv->codec.bus = hbus;
- hdev = &hda_priv->codec.core;
-+ codec = &hda_priv->codec;
-
- ret = snd_hdac_ext_bus_device_init(&hbus->core, address, hdev);
- if (ret < 0)
-@@ -122,6 +156,11 @@ static int hda_codec_probe(struct snd_sof_dev *sdev, int address,
- hda_priv->need_display_power = true;
- }
-
-+ if (is_generic_config(hbus))
-+ codec->probe_id = HDA_CODEC_ID_GENERIC;
-+ else
-+ codec->probe_id = 0;
-+
- /*
- * if common HDMI codec driver is not used, codec load
- * is skipped here and hdac_hdmi is used instead
-@@ -129,7 +168,7 @@ static int hda_codec_probe(struct snd_sof_dev *sdev, int address,
- if (hda_codec_use_common_hdmi ||
- (resp & 0xFFFF0000) != IDISP_VID_INTEL) {
- hdev->type = HDA_DEV_LEGACY;
-- ret = hda_codec_load_module(&hda_priv->codec);
-+ ret = hda_codec_load_module(codec);
- /*
- * handle ret==0 (no driver bound) as an error, but pass
- * other return codes without modification
-diff --git a/sound/soc/sof/nocodec.c b/sound/soc/sof/nocodec.c
-index 2233146386cc..71cf5f9db79d 100644
---- a/sound/soc/sof/nocodec.c
-+++ b/sound/soc/sof/nocodec.c
-@@ -52,8 +52,10 @@ static int sof_nocodec_bes_setup(struct device *dev,
- links[i].platforms->name = dev_name(dev);
- links[i].codecs->dai_name = "snd-soc-dummy-dai";
- links[i].codecs->name = "snd-soc-dummy";
-- links[i].dpcm_playback = 1;
-- links[i].dpcm_capture = 1;
-+ if (ops->drv[i].playback.channels_min)
-+ links[i].dpcm_playback = 1;
-+ if (ops->drv[i].capture.channels_min)
-+ links[i].dpcm_capture = 1;
- }
-
- card->dai_link = links;
-diff --git a/sound/soc/sof/pm.c b/sound/soc/sof/pm.c
-index c410822d9920..01d83ddc16ba 100644
---- a/sound/soc/sof/pm.c
-+++ b/sound/soc/sof/pm.c
-@@ -90,7 +90,10 @@ static int sof_resume(struct device *dev, bool runtime_resume)
- int ret;
-
- /* do nothing if dsp resume callbacks are not set */
-- if (!sof_ops(sdev)->resume || !sof_ops(sdev)->runtime_resume)
-+ if (!runtime_resume && !sof_ops(sdev)->resume)
-+ return 0;
-+
-+ if (runtime_resume && !sof_ops(sdev)->runtime_resume)
- return 0;
-
- /* DSP was never successfully started, nothing to resume */
-@@ -175,7 +178,10 @@ static int sof_suspend(struct device *dev, bool runtime_suspend)
- int ret;
-
- /* do nothing if dsp suspend callback is not set */
-- if (!sof_ops(sdev)->suspend)
-+ if (!runtime_suspend && !sof_ops(sdev)->suspend)
-+ return 0;
-+
-+ if (runtime_suspend && !sof_ops(sdev)->runtime_suspend)
- return 0;
-
- if (sdev->fw_state != SOF_FW_BOOT_COMPLETE)
-diff --git a/sound/soc/sof/sof-audio.h b/sound/soc/sof/sof-audio.h
-index bf65f31af858..875a5fc13297 100644
---- a/sound/soc/sof/sof-audio.h
-+++ b/sound/soc/sof/sof-audio.h
-@@ -56,7 +56,7 @@ struct snd_sof_pcm {
- struct snd_sof_led_control {
- unsigned int use_led;
- unsigned int direction;
-- unsigned int led_value;
-+ int led_value;
- };
-
- /* ALSA SOF Kcontrol device */
-diff --git a/sound/soc/sof/topology.c b/sound/soc/sof/topology.c
-index fe8ba3e05e08..ab2b69de1d4d 100644
---- a/sound/soc/sof/topology.c
-+++ b/sound/soc/sof/topology.c
-@@ -1203,6 +1203,8 @@ static int sof_control_load(struct snd_soc_component *scomp, int index,
- return ret;
- }
-
-+ scontrol->led_ctl.led_value = -1;
-+
- dobj->private = scontrol;
- list_add(&scontrol->list, &sdev->kcontrol_list);
- return ret;
-diff --git a/sound/soc/tegra/tegra_wm8903.c b/sound/soc/tegra/tegra_wm8903.c
-index 9b5651502f12..3aca354f9e08 100644
---- a/sound/soc/tegra/tegra_wm8903.c
-+++ b/sound/soc/tegra/tegra_wm8903.c
-@@ -177,6 +177,7 @@ static int tegra_wm8903_init(struct snd_soc_pcm_runtime *rtd)
- struct snd_soc_component *component = codec_dai->component;
- struct snd_soc_card *card = rtd->card;
- struct tegra_wm8903 *machine = snd_soc_card_get_drvdata(card);
-+ int shrt = 0;
-
- if (gpio_is_valid(machine->gpio_hp_det)) {
- tegra_wm8903_hp_jack_gpio.gpio = machine->gpio_hp_det;
-@@ -189,12 +190,15 @@ static int tegra_wm8903_init(struct snd_soc_pcm_runtime *rtd)
- &tegra_wm8903_hp_jack_gpio);
- }
-
-+ if (of_property_read_bool(card->dev->of_node, "nvidia,headset"))
-+ shrt = SND_JACK_MICROPHONE;
-+
- snd_soc_card_jack_new(rtd->card, "Mic Jack", SND_JACK_MICROPHONE,
- &tegra_wm8903_mic_jack,
- tegra_wm8903_mic_jack_pins,
- ARRAY_SIZE(tegra_wm8903_mic_jack_pins));
- wm8903_mic_detect(component, &tegra_wm8903_mic_jack, SND_JACK_MICROPHONE,
-- 0);
-+ shrt);
-
- snd_soc_dapm_force_enable_pin(&card->dapm, "MICBIAS");
-
-diff --git a/sound/soc/ti/davinci-mcasp.c b/sound/soc/ti/davinci-mcasp.c
-index 734ffe925c4d..7a7db743dc5b 100644
---- a/sound/soc/ti/davinci-mcasp.c
-+++ b/sound/soc/ti/davinci-mcasp.c
-@@ -1896,8 +1896,10 @@ static int davinci_mcasp_get_dma_type(struct davinci_mcasp *mcasp)
- PTR_ERR(chan));
- return PTR_ERR(chan);
- }
-- if (WARN_ON(!chan->device || !chan->device->dev))
-+ if (WARN_ON(!chan->device || !chan->device->dev)) {
-+ dma_release_channel(chan);
- return -EINVAL;
-+ }
-
- if (chan->device->dev->of_node)
- ret = of_property_read_string(chan->device->dev->of_node,
-diff --git a/sound/soc/ti/omap-mcbsp.c b/sound/soc/ti/omap-mcbsp.c
-index 3d41ca2238d4..4f33ddb7b441 100644
---- a/sound/soc/ti/omap-mcbsp.c
-+++ b/sound/soc/ti/omap-mcbsp.c
-@@ -686,7 +686,7 @@ static int omap_mcbsp_init(struct platform_device *pdev)
- mcbsp->dma_data[1].addr = omap_mcbsp_dma_reg_params(mcbsp,
- SNDRV_PCM_STREAM_CAPTURE);
-
-- mcbsp->fclk = clk_get(&pdev->dev, "fck");
-+ mcbsp->fclk = devm_clk_get(&pdev->dev, "fck");
- if (IS_ERR(mcbsp->fclk)) {
- ret = PTR_ERR(mcbsp->fclk);
- dev_err(mcbsp->dev, "unable to get fck: %d\n", ret);
-@@ -711,7 +711,7 @@ static int omap_mcbsp_init(struct platform_device *pdev)
- if (ret) {
- dev_err(mcbsp->dev,
- "Unable to create additional controls\n");
-- goto err_thres;
-+ return ret;
- }
- }
-
-@@ -724,8 +724,6 @@ static int omap_mcbsp_init(struct platform_device *pdev)
- err_st:
- if (mcbsp->pdata->buffer_size)
- sysfs_remove_group(&mcbsp->dev->kobj, &additional_attr_group);
--err_thres:
-- clk_put(mcbsp->fclk);
- return ret;
- }
-
-@@ -1442,8 +1440,6 @@ static int asoc_mcbsp_remove(struct platform_device *pdev)
-
- omap_mcbsp_st_cleanup(pdev);
-
-- clk_put(mcbsp->fclk);
--
- return 0;
- }
-
-diff --git a/sound/soc/ux500/mop500.c b/sound/soc/ux500/mop500.c
-index 2873e8e6f02b..cdae1190b930 100644
---- a/sound/soc/ux500/mop500.c
-+++ b/sound/soc/ux500/mop500.c
-@@ -63,10 +63,11 @@ static void mop500_of_node_put(void)
- {
- int i;
-
-- for (i = 0; i < 2; i++) {
-+ for (i = 0; i < 2; i++)
- of_node_put(mop500_dai_links[i].cpus->of_node);
-- of_node_put(mop500_dai_links[i].codecs->of_node);
-- }
-+
-+ /* Both links use the same codec, which is refcounted only once */
-+ of_node_put(mop500_dai_links[0].codecs->of_node);
- }
-
- static int mop500_of_probe(struct platform_device *pdev,
-@@ -81,7 +82,9 @@ static int mop500_of_probe(struct platform_device *pdev,
-
- if (!(msp_np[0] && msp_np[1] && codec_np)) {
- dev_err(&pdev->dev, "Phandle missing or invalid\n");
-- mop500_of_node_put();
-+ for (i = 0; i < 2; i++)
-+ of_node_put(msp_np[i]);
-+ of_node_put(codec_np);
- return -EINVAL;
- }
-
-diff --git a/sound/usb/card.h b/sound/usb/card.h
-index 395403a2d33f..d6219fba9699 100644
---- a/sound/usb/card.h
-+++ b/sound/usb/card.h
-@@ -84,6 +84,10 @@ struct snd_usb_endpoint {
- dma_addr_t sync_dma; /* DMA address of syncbuf */
-
- unsigned int pipe; /* the data i/o pipe */
-+ unsigned int framesize[2]; /* small/large frame sizes in samples */
-+ unsigned int sample_rem; /* remainder from division fs/fps */
-+ unsigned int sample_accum; /* sample accumulator */
-+ unsigned int fps; /* frames per second */
- unsigned int freqn; /* nominal sampling rate in fs/fps in Q16.16 format */
- unsigned int freqm; /* momentary sampling rate in fs/fps in Q16.16 format */
- int freqshift; /* how much to shift the feedback value to get Q16.16 */
-@@ -104,6 +108,7 @@ struct snd_usb_endpoint {
- int iface, altsetting;
- int skip_packets; /* quirks for devices to ignore the first n packets
- in a stream */
-+ bool is_implicit_feedback; /* This endpoint is used as implicit feedback */
-
- spinlock_t lock;
- struct list_head list;
-diff --git a/sound/usb/endpoint.c b/sound/usb/endpoint.c
-index 4a9a2f6ef5a4..9bea7d3f99f8 100644
---- a/sound/usb/endpoint.c
-+++ b/sound/usb/endpoint.c
-@@ -124,12 +124,12 @@ int snd_usb_endpoint_implicit_feedback_sink(struct snd_usb_endpoint *ep)
-
- /*
- * For streaming based on information derived from sync endpoints,
-- * prepare_outbound_urb_sizes() will call next_packet_size() to
-+ * prepare_outbound_urb_sizes() will call slave_next_packet_size() to
- * determine the number of samples to be sent in the next packet.
- *
-- * For implicit feedback, next_packet_size() is unused.
-+ * For implicit feedback, slave_next_packet_size() is unused.
- */
--int snd_usb_endpoint_next_packet_size(struct snd_usb_endpoint *ep)
-+int snd_usb_endpoint_slave_next_packet_size(struct snd_usb_endpoint *ep)
- {
- unsigned long flags;
- int ret;
-@@ -146,6 +146,29 @@ int snd_usb_endpoint_next_packet_size(struct snd_usb_endpoint *ep)
- return ret;
- }
-
-+/*
-+ * For adaptive and synchronous endpoints, prepare_outbound_urb_sizes()
-+ * will call next_packet_size() to determine the number of samples to be
-+ * sent in the next packet.
-+ */
-+int snd_usb_endpoint_next_packet_size(struct snd_usb_endpoint *ep)
-+{
-+ int ret;
-+
-+ if (ep->fill_max)
-+ return ep->maxframesize;
-+
-+ ep->sample_accum += ep->sample_rem;
-+ if (ep->sample_accum >= ep->fps) {
-+ ep->sample_accum -= ep->fps;
-+ ret = ep->framesize[1];
-+ } else {
-+ ret = ep->framesize[0];
-+ }
-+
-+ return ret;
-+}
-+
- static void retire_outbound_urb(struct snd_usb_endpoint *ep,
- struct snd_urb_ctx *urb_ctx)
- {
-@@ -190,6 +213,8 @@ static void prepare_silent_urb(struct snd_usb_endpoint *ep,
-
- if (ctx->packet_size[i])
- counts = ctx->packet_size[i];
-+ else if (ep->sync_master)
-+ counts = snd_usb_endpoint_slave_next_packet_size(ep);
- else
- counts = snd_usb_endpoint_next_packet_size(ep);
-
-@@ -321,17 +346,17 @@ static void queue_pending_output_urbs(struct snd_usb_endpoint *ep)
- ep->next_packet_read_pos %= MAX_URBS;
-
- /* take URB out of FIFO */
-- if (!list_empty(&ep->ready_playback_urbs))
-+ if (!list_empty(&ep->ready_playback_urbs)) {
- ctx = list_first_entry(&ep->ready_playback_urbs,
- struct snd_urb_ctx, ready_list);
-+ list_del_init(&ctx->ready_list);
-+ }
- }
- spin_unlock_irqrestore(&ep->lock, flags);
-
- if (ctx == NULL)
- return;
-
-- list_del_init(&ctx->ready_list);
--
- /* copy over the length information */
- for (i = 0; i < packet->packets; i++)
- ctx->packet_size[i] = packet->packet_size[i];
-@@ -497,6 +522,8 @@ struct snd_usb_endpoint *snd_usb_add_endpoint(struct snd_usb_audio *chip,
-
- list_add_tail(&ep->list, &chip->ep_list);
-
-+ ep->is_implicit_feedback = 0;
-+
- __exit_unlock:
- mutex_unlock(&chip->mutex);
-
-@@ -596,6 +623,178 @@ static void release_urbs(struct snd_usb_endpoint *ep, int force)
- ep->nurbs = 0;
- }
-
-+/*
-+ * Check data endpoint for format differences
-+ */
-+static bool check_ep_params(struct snd_usb_endpoint *ep,
-+ snd_pcm_format_t pcm_format,
-+ unsigned int channels,
-+ unsigned int period_bytes,
-+ unsigned int frames_per_period,
-+ unsigned int periods_per_buffer,
-+ struct audioformat *fmt,
-+ struct snd_usb_endpoint *sync_ep)
-+{
-+ unsigned int maxsize, minsize, packs_per_ms, max_packs_per_urb;
-+ unsigned int max_packs_per_period, urbs_per_period, urb_packs;
-+ unsigned int max_urbs;
-+ int frame_bits = snd_pcm_format_physical_width(pcm_format) * channels;
-+ int tx_length_quirk = (ep->chip->tx_length_quirk &&
-+ usb_pipeout(ep->pipe));
-+ bool ret = 1;
-+
-+ if (pcm_format == SNDRV_PCM_FORMAT_DSD_U16_LE && fmt->dsd_dop) {
-+ /*
-+ * When operating in DSD DOP mode, the size of a sample frame
-+ * in hardware differs from the actual physical format width
-+ * because we need to make room for the DOP markers.
-+ */
-+ frame_bits += channels << 3;
-+ }
-+
-+ ret = ret && (ep->datainterval == fmt->datainterval);
-+ ret = ret && (ep->stride == frame_bits >> 3);
-+
-+ switch (pcm_format) {
-+ case SNDRV_PCM_FORMAT_U8:
-+ ret = ret && (ep->silence_value == 0x80);
-+ break;
-+ case SNDRV_PCM_FORMAT_DSD_U8:
-+ case SNDRV_PCM_FORMAT_DSD_U16_LE:
-+ case SNDRV_PCM_FORMAT_DSD_U32_LE:
-+ case SNDRV_PCM_FORMAT_DSD_U16_BE:
-+ case SNDRV_PCM_FORMAT_DSD_U32_BE:
-+ ret = ret && (ep->silence_value == 0x69);
-+ break;
-+ default:
-+ ret = ret && (ep->silence_value == 0);
-+ }
-+
-+ /* assume max. frequency is 50% higher than nominal */
-+ ret = ret && (ep->freqmax == ep->freqn + (ep->freqn >> 1));
-+ /* Round up freqmax to nearest integer in order to calculate maximum
-+ * packet size, which must represent a whole number of frames.
-+ * This is accomplished by adding 0x0.ffff before converting the
-+ * Q16.16 format into integer.
-+ * In order to accurately calculate the maximum packet size when
-+ * the data interval is more than 1 (i.e. ep->datainterval > 0),
-+ * multiply by the data interval prior to rounding. For instance,
-+ * a freqmax of 41 kHz will result in a max packet size of 6 (5.125)
-+ * frames with a data interval of 1, but 11 (10.25) frames with a
-+ * data interval of 2.
-+ * (ep->freqmax << ep->datainterval overflows at 8.192 MHz for the
-+ * maximum datainterval value of 3, at USB full speed, higher for
-+ * USB high speed, noting that ep->freqmax is in units of
-+ * frames per packet in Q16.16 format.)
-+ */
-+ maxsize = (((ep->freqmax << ep->datainterval) + 0xffff) >> 16) *
-+ (frame_bits >> 3);
-+ if (tx_length_quirk)
-+ maxsize += sizeof(__le32); /* Space for length descriptor */
-+ /* but wMaxPacketSize might reduce this */
-+ if (ep->maxpacksize && ep->maxpacksize < maxsize) {
-+ /* whatever fits into a max. size packet */
-+ unsigned int data_maxsize = maxsize = ep->maxpacksize;
-+
-+ if (tx_length_quirk)
-+ /* Need to remove the length descriptor to calc freq */
-+ data_maxsize -= sizeof(__le32);
-+ ret = ret && (ep->freqmax == (data_maxsize / (frame_bits >> 3))
-+ << (16 - ep->datainterval));
-+ }
-+
-+ if (ep->fill_max)
-+ ret = ret && (ep->curpacksize == ep->maxpacksize);
-+ else
-+ ret = ret && (ep->curpacksize == maxsize);
-+
-+ if (snd_usb_get_speed(ep->chip->dev) != USB_SPEED_FULL) {
-+ packs_per_ms = 8 >> ep->datainterval;
-+ max_packs_per_urb = MAX_PACKS_HS;
-+ } else {
-+ packs_per_ms = 1;
-+ max_packs_per_urb = MAX_PACKS;
-+ }
-+ if (sync_ep && !snd_usb_endpoint_implicit_feedback_sink(ep))
-+ max_packs_per_urb = min(max_packs_per_urb,
-+ 1U << sync_ep->syncinterval);
-+ max_packs_per_urb = max(1u, max_packs_per_urb >> ep->datainterval);
-+
-+ /*
-+ * Capture endpoints need to use small URBs because there's no way
-+ * to tell in advance where the next period will end, and we don't
-+ * want the next URB to complete much after the period ends.
-+ *
-+ * Playback endpoints with implicit sync much use the same parameters
-+ * as their corresponding capture endpoint.
-+ */
-+ if (usb_pipein(ep->pipe) ||
-+ snd_usb_endpoint_implicit_feedback_sink(ep)) {
-+
-+ urb_packs = packs_per_ms;
-+ /*
-+ * Wireless devices can poll at a max rate of once per 4ms.
-+ * For dataintervals less than 5, increase the packet count to
-+ * allow the host controller to use bursting to fill in the
-+ * gaps.
-+ */
-+ if (snd_usb_get_speed(ep->chip->dev) == USB_SPEED_WIRELESS) {
-+ int interval = ep->datainterval;
-+
-+ while (interval < 5) {
-+ urb_packs <<= 1;
-+ ++interval;
-+ }
-+ }
-+ /* make capture URBs <= 1 ms and smaller than a period */
-+ urb_packs = min(max_packs_per_urb, urb_packs);
-+ while (urb_packs > 1 && urb_packs * maxsize >= period_bytes)
-+ urb_packs >>= 1;
-+ ret = ret && (ep->nurbs == MAX_URBS);
-+
-+ /*
-+ * Playback endpoints without implicit sync are adjusted so that
-+ * a period fits as evenly as possible in the smallest number of
-+ * URBs. The total number of URBs is adjusted to the size of the
-+ * ALSA buffer, subject to the MAX_URBS and MAX_QUEUE limits.
-+ */
-+ } else {
-+ /* determine how small a packet can be */
-+ minsize = (ep->freqn >> (16 - ep->datainterval)) *
-+ (frame_bits >> 3);
-+ /* with sync from device, assume it can be 12% lower */
-+ if (sync_ep)
-+ minsize -= minsize >> 3;
-+ minsize = max(minsize, 1u);
-+
-+ /* how many packets will contain an entire ALSA period? */
-+ max_packs_per_period = DIV_ROUND_UP(period_bytes, minsize);
-+
-+ /* how many URBs will contain a period? */
-+ urbs_per_period = DIV_ROUND_UP(max_packs_per_period,
-+ max_packs_per_urb);
-+ /* how many packets are needed in each URB? */
-+ urb_packs = DIV_ROUND_UP(max_packs_per_period, urbs_per_period);
-+
-+ /* limit the number of frames in a single URB */
-+ ret = ret && (ep->max_urb_frames ==
-+ DIV_ROUND_UP(frames_per_period, urbs_per_period));
-+
-+ /* try to use enough URBs to contain an entire ALSA buffer */
-+ max_urbs = min((unsigned) MAX_URBS,
-+ MAX_QUEUE * packs_per_ms / urb_packs);
-+ ret = ret && (ep->nurbs == min(max_urbs,
-+ urbs_per_period * periods_per_buffer));
-+ }
-+
-+ ret = ret && (ep->datainterval == fmt->datainterval);
-+ ret = ret && (ep->maxpacksize == fmt->maxpacksize);
-+ ret = ret &&
-+ (ep->fill_max == !!(fmt->attributes & UAC_EP_CS_ATTR_FILL_MAX));
-+
-+ return ret;
-+}
-+
- /*
- * configure a data endpoint
- */
-@@ -861,10 +1060,23 @@ int snd_usb_endpoint_set_params(struct snd_usb_endpoint *ep,
- int err;
-
- if (ep->use_count != 0) {
-- usb_audio_warn(ep->chip,
-- "Unable to change format on ep #%x: already in use\n",
-- ep->ep_num);
-- return -EBUSY;
-+ bool check = ep->is_implicit_feedback &&
-+ check_ep_params(ep, pcm_format,
-+ channels, period_bytes,
-+ period_frames, buffer_periods,
-+ fmt, sync_ep);
-+
-+ if (!check) {
-+ usb_audio_warn(ep->chip,
-+ "Unable to change format on ep #%x: already in use\n",
-+ ep->ep_num);
-+ return -EBUSY;
-+ }
-+
-+ usb_audio_dbg(ep->chip,
-+ "Ep #%x already in use as implicit feedback but format not changed\n",
-+ ep->ep_num);
-+ return 0;
- }
-
- /* release old buffers, if any */
-@@ -874,10 +1086,17 @@ int snd_usb_endpoint_set_params(struct snd_usb_endpoint *ep,
- ep->maxpacksize = fmt->maxpacksize;
- ep->fill_max = !!(fmt->attributes & UAC_EP_CS_ATTR_FILL_MAX);
-
-- if (snd_usb_get_speed(ep->chip->dev) == USB_SPEED_FULL)
-+ if (snd_usb_get_speed(ep->chip->dev) == USB_SPEED_FULL) {
- ep->freqn = get_usb_full_speed_rate(rate);
-- else
-+ ep->fps = 1000;
-+ } else {
- ep->freqn = get_usb_high_speed_rate(rate);
-+ ep->fps = 8000;
-+ }
-+
-+ ep->sample_rem = rate % ep->fps;
-+ ep->framesize[0] = rate / ep->fps;
-+ ep->framesize[1] = (rate + (ep->fps - 1)) / ep->fps;
-
- /* calculate the frequency in 16.16 format */
- ep->freqm = ep->freqn;
-@@ -936,6 +1155,7 @@ int snd_usb_endpoint_start(struct snd_usb_endpoint *ep)
- ep->active_mask = 0;
- ep->unlink_mask = 0;
- ep->phase = 0;
-+ ep->sample_accum = 0;
-
- snd_usb_endpoint_start_quirk(ep);
-
-diff --git a/sound/usb/endpoint.h b/sound/usb/endpoint.h
-index 63a39d4fa8d8..d23fa0a8c11b 100644
---- a/sound/usb/endpoint.h
-+++ b/sound/usb/endpoint.h
-@@ -28,6 +28,7 @@ void snd_usb_endpoint_release(struct snd_usb_endpoint *ep);
- void snd_usb_endpoint_free(struct snd_usb_endpoint *ep);
-
- int snd_usb_endpoint_implicit_feedback_sink(struct snd_usb_endpoint *ep);
-+int snd_usb_endpoint_slave_next_packet_size(struct snd_usb_endpoint *ep);
- int snd_usb_endpoint_next_packet_size(struct snd_usb_endpoint *ep);
-
- void snd_usb_handle_sync_urb(struct snd_usb_endpoint *ep,
-diff --git a/sound/usb/mixer_quirks.c b/sound/usb/mixer_quirks.c
-index a5f65a9a0254..aad2683ff793 100644
---- a/sound/usb/mixer_quirks.c
-+++ b/sound/usb/mixer_quirks.c
-@@ -2185,6 +2185,421 @@ static int snd_rme_controls_create(struct usb_mixer_interface *mixer)
- return 0;
- }
-
-+/*
-+ * RME Babyface Pro (FS)
-+ *
-+ * These devices exposes a couple of DSP functions via request to EP0.
-+ * Switches are available via control registers, while routing is controlled
-+ * by controlling the volume on each possible crossing point.
-+ * Volume control is linear, from -inf (dec. 0) to +6dB (dec. 65536) with
-+ * 0dB being at dec. 32768.
-+ */
-+enum {
-+ SND_BBFPRO_CTL_REG1 = 0,
-+ SND_BBFPRO_CTL_REG2
-+};
-+
-+#define SND_BBFPRO_CTL_REG_MASK 1
-+#define SND_BBFPRO_CTL_IDX_MASK 0xff
-+#define SND_BBFPRO_CTL_IDX_SHIFT 1
-+#define SND_BBFPRO_CTL_VAL_MASK 1
-+#define SND_BBFPRO_CTL_VAL_SHIFT 9
-+#define SND_BBFPRO_CTL_REG1_CLK_MASTER 0
-+#define SND_BBFPRO_CTL_REG1_CLK_OPTICAL 1
-+#define SND_BBFPRO_CTL_REG1_SPDIF_PRO 7
-+#define SND_BBFPRO_CTL_REG1_SPDIF_EMPH 8
-+#define SND_BBFPRO_CTL_REG1_SPDIF_OPTICAL 10
-+#define SND_BBFPRO_CTL_REG2_48V_AN1 0
-+#define SND_BBFPRO_CTL_REG2_48V_AN2 1
-+#define SND_BBFPRO_CTL_REG2_SENS_IN3 2
-+#define SND_BBFPRO_CTL_REG2_SENS_IN4 3
-+#define SND_BBFPRO_CTL_REG2_PAD_AN1 4
-+#define SND_BBFPRO_CTL_REG2_PAD_AN2 5
-+
-+#define SND_BBFPRO_MIXER_IDX_MASK 0x1ff
-+#define SND_BBFPRO_MIXER_VAL_MASK 0x3ffff
-+#define SND_BBFPRO_MIXER_VAL_SHIFT 9
-+#define SND_BBFPRO_MIXER_VAL_MIN 0 // -inf
-+#define SND_BBFPRO_MIXER_VAL_MAX 65536 // +6dB
-+
-+#define SND_BBFPRO_USBREQ_CTL_REG1 0x10
-+#define SND_BBFPRO_USBREQ_CTL_REG2 0x17
-+#define SND_BBFPRO_USBREQ_MIXER 0x12
-+
-+static int snd_bbfpro_ctl_update(struct usb_mixer_interface *mixer, u8 reg,
-+ u8 index, u8 value)
-+{
-+ int err;
-+ u16 usb_req, usb_idx, usb_val;
-+ struct snd_usb_audio *chip = mixer->chip;
-+
-+ err = snd_usb_lock_shutdown(chip);
-+ if (err < 0)
-+ return err;
-+
-+ if (reg == SND_BBFPRO_CTL_REG1) {
-+ usb_req = SND_BBFPRO_USBREQ_CTL_REG1;
-+ if (index == SND_BBFPRO_CTL_REG1_CLK_OPTICAL) {
-+ usb_idx = 3;
-+ usb_val = value ? 3 : 0;
-+ } else {
-+ usb_idx = 1 << index;
-+ usb_val = value ? usb_idx : 0;
-+ }
-+ } else {
-+ usb_req = SND_BBFPRO_USBREQ_CTL_REG2;
-+ usb_idx = 1 << index;
-+ usb_val = value ? usb_idx : 0;
-+ }
-+
-+ err = snd_usb_ctl_msg(chip->dev,
-+ usb_sndctrlpipe(chip->dev, 0), usb_req,
-+ USB_DIR_OUT | USB_TYPE_VENDOR | USB_RECIP_DEVICE,
-+ usb_val, usb_idx, 0, 0);
-+
-+ snd_usb_unlock_shutdown(chip);
-+ return err;
-+}
-+
-+static int snd_bbfpro_ctl_get(struct snd_kcontrol *kcontrol,
-+ struct snd_ctl_elem_value *ucontrol)
-+{
-+ u8 reg, idx, val;
-+ int pv;
-+
-+ pv = kcontrol->private_value;
-+ reg = pv & SND_BBFPRO_CTL_REG_MASK;
-+ idx = (pv >> SND_BBFPRO_CTL_IDX_SHIFT) & SND_BBFPRO_CTL_IDX_MASK;
-+ val = kcontrol->private_value >> SND_BBFPRO_CTL_VAL_SHIFT;
-+
-+ if ((reg == SND_BBFPRO_CTL_REG1 &&
-+ idx == SND_BBFPRO_CTL_REG1_CLK_OPTICAL) ||
-+ (reg == SND_BBFPRO_CTL_REG2 &&
-+ (idx == SND_BBFPRO_CTL_REG2_SENS_IN3 ||
-+ idx == SND_BBFPRO_CTL_REG2_SENS_IN4))) {
-+ ucontrol->value.enumerated.item[0] = val;
-+ } else {
-+ ucontrol->value.integer.value[0] = val;
-+ }
-+ return 0;
-+}
-+
-+static int snd_bbfpro_ctl_info(struct snd_kcontrol *kcontrol,
-+ struct snd_ctl_elem_info *uinfo)
-+{
-+ u8 reg, idx;
-+ int pv;
-+
-+ pv = kcontrol->private_value;
-+ reg = pv & SND_BBFPRO_CTL_REG_MASK;
-+ idx = (pv >> SND_BBFPRO_CTL_IDX_SHIFT) & SND_BBFPRO_CTL_IDX_MASK;
-+
-+ if (reg == SND_BBFPRO_CTL_REG1 &&
-+ idx == SND_BBFPRO_CTL_REG1_CLK_OPTICAL) {
-+ static const char * const texts[2] = {
-+ "AutoSync",
-+ "Internal"
-+ };
-+ return snd_ctl_enum_info(uinfo, 1, 2, texts);
-+ } else if (reg == SND_BBFPRO_CTL_REG2 &&
-+ (idx == SND_BBFPRO_CTL_REG2_SENS_IN3 ||
-+ idx == SND_BBFPRO_CTL_REG2_SENS_IN4)) {
-+ static const char * const texts[2] = {
-+ "-10dBV",
-+ "+4dBu"
-+ };
-+ return snd_ctl_enum_info(uinfo, 1, 2, texts);
-+ }
-+
-+ uinfo->count = 1;
-+ uinfo->value.integer.min = 0;
-+ uinfo->value.integer.max = 1;
-+ uinfo->type = SNDRV_CTL_ELEM_TYPE_BOOLEAN;
-+ return 0;
-+}
-+
-+static int snd_bbfpro_ctl_put(struct snd_kcontrol *kcontrol,
-+ struct snd_ctl_elem_value *ucontrol)
-+{
-+ int err;
-+ u8 reg, idx;
-+ int old_value, pv, val;
-+
-+ struct usb_mixer_elem_list *list = snd_kcontrol_chip(kcontrol);
-+ struct usb_mixer_interface *mixer = list->mixer;
-+
-+ pv = kcontrol->private_value;
-+ reg = pv & SND_BBFPRO_CTL_REG_MASK;
-+ idx = (pv >> SND_BBFPRO_CTL_IDX_SHIFT) & SND_BBFPRO_CTL_IDX_MASK;
-+ old_value = (pv >> SND_BBFPRO_CTL_VAL_SHIFT) & SND_BBFPRO_CTL_VAL_MASK;
-+
-+ if ((reg == SND_BBFPRO_CTL_REG1 &&
-+ idx == SND_BBFPRO_CTL_REG1_CLK_OPTICAL) ||
-+ (reg == SND_BBFPRO_CTL_REG2 &&
-+ (idx == SND_BBFPRO_CTL_REG2_SENS_IN3 ||
-+ idx == SND_BBFPRO_CTL_REG2_SENS_IN4))) {
-+ val = ucontrol->value.enumerated.item[0];
-+ } else {
-+ val = ucontrol->value.integer.value[0];
-+ }
-+
-+ if (val > 1)
-+ return -EINVAL;
-+
-+ if (val == old_value)
-+ return 0;
-+
-+ kcontrol->private_value = reg
-+ | ((idx & SND_BBFPRO_CTL_IDX_MASK) << SND_BBFPRO_CTL_IDX_SHIFT)
-+ | ((val & SND_BBFPRO_CTL_VAL_MASK) << SND_BBFPRO_CTL_VAL_SHIFT);
-+
-+ err = snd_bbfpro_ctl_update(mixer, reg, idx, val);
-+ return err < 0 ? err : 1;
-+}
-+
-+static int snd_bbfpro_ctl_resume(struct usb_mixer_elem_list *list)
-+{
-+ u8 reg, idx;
-+ int value, pv;
-+
-+ pv = list->kctl->private_value;
-+ reg = pv & SND_BBFPRO_CTL_REG_MASK;
-+ idx = (pv >> SND_BBFPRO_CTL_IDX_SHIFT) & SND_BBFPRO_CTL_IDX_MASK;
-+ value = (pv >> SND_BBFPRO_CTL_VAL_SHIFT) & SND_BBFPRO_CTL_VAL_MASK;
-+
-+ return snd_bbfpro_ctl_update(list->mixer, reg, idx, value);
-+}
-+
-+static int snd_bbfpro_vol_update(struct usb_mixer_interface *mixer, u16 index,
-+ u32 value)
-+{
-+ struct snd_usb_audio *chip = mixer->chip;
-+ int err;
-+ u16 idx;
-+ u16 usb_idx, usb_val;
-+ u32 v;
-+
-+ err = snd_usb_lock_shutdown(chip);
-+ if (err < 0)
-+ return err;
-+
-+ idx = index & SND_BBFPRO_MIXER_IDX_MASK;
-+ // 18 bit linear volume, split so 2 bits end up in index.
-+ v = value & SND_BBFPRO_MIXER_VAL_MASK;
-+ usb_idx = idx | (v & 0x3) << 14;
-+ usb_val = (v >> 2) & 0xffff;
-+
-+ err = snd_usb_ctl_msg(chip->dev,
-+ usb_sndctrlpipe(chip->dev, 0),
-+ SND_BBFPRO_USBREQ_MIXER,
-+ USB_DIR_OUT | USB_TYPE_VENDOR |
-+ USB_RECIP_DEVICE,
-+ usb_val, usb_idx, 0, 0);
-+
-+ snd_usb_unlock_shutdown(chip);
-+ return err;
-+}
-+
-+static int snd_bbfpro_vol_get(struct snd_kcontrol *kcontrol,
-+ struct snd_ctl_elem_value *ucontrol)
-+{
-+ ucontrol->value.integer.value[0] =
-+ kcontrol->private_value >> SND_BBFPRO_MIXER_VAL_SHIFT;
-+ return 0;
-+}
-+
-+static int snd_bbfpro_vol_info(struct snd_kcontrol *kcontrol,
-+ struct snd_ctl_elem_info *uinfo)
-+{
-+ uinfo->type = SNDRV_CTL_ELEM_TYPE_INTEGER;
-+ uinfo->count = 1;
-+ uinfo->value.integer.min = SND_BBFPRO_MIXER_VAL_MIN;
-+ uinfo->value.integer.max = SND_BBFPRO_MIXER_VAL_MAX;
-+ return 0;
-+}
-+
-+static int snd_bbfpro_vol_put(struct snd_kcontrol *kcontrol,
-+ struct snd_ctl_elem_value *ucontrol)
-+{
-+ int err;
-+ u16 idx;
-+ u32 new_val, old_value, uvalue;
-+ struct usb_mixer_elem_list *list = snd_kcontrol_chip(kcontrol);
-+ struct usb_mixer_interface *mixer = list->mixer;
-+
-+ uvalue = ucontrol->value.integer.value[0];
-+ idx = kcontrol->private_value & SND_BBFPRO_MIXER_IDX_MASK;
-+ old_value = kcontrol->private_value >> SND_BBFPRO_MIXER_VAL_SHIFT;
-+
-+ if (uvalue > SND_BBFPRO_MIXER_VAL_MAX)
-+ return -EINVAL;
-+
-+ if (uvalue == old_value)
-+ return 0;
-+
-+ new_val = uvalue & SND_BBFPRO_MIXER_VAL_MASK;
-+
-+ kcontrol->private_value = idx
-+ | (new_val << SND_BBFPRO_MIXER_VAL_SHIFT);
-+
-+ err = snd_bbfpro_vol_update(mixer, idx, new_val);
-+ return err < 0 ? err : 1;
-+}
-+
-+static int snd_bbfpro_vol_resume(struct usb_mixer_elem_list *list)
-+{
-+ int pv = list->kctl->private_value;
-+ u16 idx = pv & SND_BBFPRO_MIXER_IDX_MASK;
-+ u32 val = (pv >> SND_BBFPRO_MIXER_VAL_SHIFT)
-+ & SND_BBFPRO_MIXER_VAL_MASK;
-+ return snd_bbfpro_vol_update(list->mixer, idx, val);
-+}
-+
-+// Predfine elements
-+static const struct snd_kcontrol_new snd_bbfpro_ctl_control = {
-+ .iface = SNDRV_CTL_ELEM_IFACE_MIXER,
-+ .access = SNDRV_CTL_ELEM_ACCESS_READWRITE,
-+ .index = 0,
-+ .info = snd_bbfpro_ctl_info,
-+ .get = snd_bbfpro_ctl_get,
-+ .put = snd_bbfpro_ctl_put
-+};
-+
-+static const struct snd_kcontrol_new snd_bbfpro_vol_control = {
-+ .iface = SNDRV_CTL_ELEM_IFACE_MIXER,
-+ .access = SNDRV_CTL_ELEM_ACCESS_READWRITE,
-+ .index = 0,
-+ .info = snd_bbfpro_vol_info,
-+ .get = snd_bbfpro_vol_get,
-+ .put = snd_bbfpro_vol_put
-+};
-+
-+static int snd_bbfpro_ctl_add(struct usb_mixer_interface *mixer, u8 reg,
-+ u8 index, char *name)
-+{
-+ struct snd_kcontrol_new knew = snd_bbfpro_ctl_control;
-+
-+ knew.name = name;
-+ knew.private_value = (reg & SND_BBFPRO_CTL_REG_MASK)
-+ | ((index & SND_BBFPRO_CTL_IDX_MASK)
-+ << SND_BBFPRO_CTL_IDX_SHIFT);
-+
-+ return add_single_ctl_with_resume(mixer, 0, snd_bbfpro_ctl_resume,
-+ &knew, NULL);
-+}
-+
-+static int snd_bbfpro_vol_add(struct usb_mixer_interface *mixer, u16 index,
-+ char *name)
-+{
-+ struct snd_kcontrol_new knew = snd_bbfpro_vol_control;
-+
-+ knew.name = name;
-+ knew.private_value = index & SND_BBFPRO_MIXER_IDX_MASK;
-+
-+ return add_single_ctl_with_resume(mixer, 0, snd_bbfpro_vol_resume,
-+ &knew, NULL);
-+}
-+
-+static int snd_bbfpro_controls_create(struct usb_mixer_interface *mixer)
-+{
-+ int err, i, o;
-+ char name[48];
-+
-+ static const char * const input[] = {
-+ "AN1", "AN2", "IN3", "IN4", "AS1", "AS2", "ADAT3",
-+ "ADAT4", "ADAT5", "ADAT6", "ADAT7", "ADAT8"};
-+
-+ static const char * const output[] = {
-+ "AN1", "AN2", "PH3", "PH4", "AS1", "AS2", "ADAT3", "ADAT4",
-+ "ADAT5", "ADAT6", "ADAT7", "ADAT8"};
-+
-+ for (o = 0 ; o < 12 ; ++o) {
-+ for (i = 0 ; i < 12 ; ++i) {
-+ // Line routing
-+ snprintf(name, sizeof(name),
-+ "%s-%s-%s Playback Volume",
-+ (i < 2 ? "Mic" : "Line"),
-+ input[i], output[o]);
-+ err = snd_bbfpro_vol_add(mixer, (26 * o + i), name);
-+ if (err < 0)
-+ return err;
-+
-+ // PCM routing... yes, it is output remapping
-+ snprintf(name, sizeof(name),
-+ "PCM-%s-%s Playback Volume",
-+ output[i], output[o]);
-+ err = snd_bbfpro_vol_add(mixer, (26 * o + 12 + i),
-+ name);
-+ if (err < 0)
-+ return err;
-+ }
-+ }
-+
-+ // Control Reg 1
-+ err = snd_bbfpro_ctl_add(mixer, SND_BBFPRO_CTL_REG1,
-+ SND_BBFPRO_CTL_REG1_CLK_OPTICAL,
-+ "Sample Clock Source");
-+ if (err < 0)
-+ return err;
-+
-+ err = snd_bbfpro_ctl_add(mixer, SND_BBFPRO_CTL_REG1,
-+ SND_BBFPRO_CTL_REG1_SPDIF_PRO,
-+ "IEC958 Pro Mask");
-+ if (err < 0)
-+ return err;
-+
-+ err = snd_bbfpro_ctl_add(mixer, SND_BBFPRO_CTL_REG1,
-+ SND_BBFPRO_CTL_REG1_SPDIF_EMPH,
-+ "IEC958 Emphasis");
-+ if (err < 0)
-+ return err;
-+
-+ err = snd_bbfpro_ctl_add(mixer, SND_BBFPRO_CTL_REG1,
-+ SND_BBFPRO_CTL_REG1_SPDIF_OPTICAL,
-+ "IEC958 Switch");
-+ if (err < 0)
-+ return err;
-+
-+ // Control Reg 2
-+ err = snd_bbfpro_ctl_add(mixer, SND_BBFPRO_CTL_REG2,
-+ SND_BBFPRO_CTL_REG2_48V_AN1,
-+ "Mic-AN1 48V");
-+ if (err < 0)
-+ return err;
-+
-+ err = snd_bbfpro_ctl_add(mixer, SND_BBFPRO_CTL_REG2,
-+ SND_BBFPRO_CTL_REG2_48V_AN2,
-+ "Mic-AN2 48V");
-+ if (err < 0)
-+ return err;
-+
-+ err = snd_bbfpro_ctl_add(mixer, SND_BBFPRO_CTL_REG2,
-+ SND_BBFPRO_CTL_REG2_SENS_IN3,
-+ "Line-IN3 Sens.");
-+ if (err < 0)
-+ return err;
-+
-+ err = snd_bbfpro_ctl_add(mixer, SND_BBFPRO_CTL_REG2,
-+ SND_BBFPRO_CTL_REG2_SENS_IN4,
-+ "Line-IN4 Sens.");
-+ if (err < 0)
-+ return err;
-+
-+ err = snd_bbfpro_ctl_add(mixer, SND_BBFPRO_CTL_REG2,
-+ SND_BBFPRO_CTL_REG2_PAD_AN1,
-+ "Mic-AN1 PAD");
-+ if (err < 0)
-+ return err;
-+
-+ err = snd_bbfpro_ctl_add(mixer, SND_BBFPRO_CTL_REG2,
-+ SND_BBFPRO_CTL_REG2_PAD_AN2,
-+ "Mic-AN2 PAD");
-+ if (err < 0)
-+ return err;
-+
-+ return 0;
-+}
-+
- int snd_usb_mixer_apply_create_quirk(struct usb_mixer_interface *mixer)
- {
- int err = 0;
-@@ -2286,6 +2701,9 @@ int snd_usb_mixer_apply_create_quirk(struct usb_mixer_interface *mixer)
- case USB_ID(0x0194f, 0x010c): /* Presonus Studio 1810c */
- err = snd_sc1810_init_mixer(mixer);
- break;
-+ case USB_ID(0x2a39, 0x3fb0): /* RME Babyface Pro FS */
-+ err = snd_bbfpro_controls_create(mixer);
-+ break;
- }
-
- return err;
-diff --git a/sound/usb/pcm.c b/sound/usb/pcm.c
-index a4e4064f9aee..d61c2f1095b5 100644
---- a/sound/usb/pcm.c
-+++ b/sound/usb/pcm.c
-@@ -404,6 +404,8 @@ add_sync_ep:
- if (!subs->sync_endpoint)
- return -EINVAL;
-
-+ subs->sync_endpoint->is_implicit_feedback = 1;
-+
- subs->data_endpoint->sync_master = subs->sync_endpoint;
-
- return 1;
-@@ -502,12 +504,15 @@ static int set_sync_endpoint(struct snd_usb_substream *subs,
- implicit_fb ?
- SND_USB_ENDPOINT_TYPE_DATA :
- SND_USB_ENDPOINT_TYPE_SYNC);
-+
- if (!subs->sync_endpoint) {
- if (is_playback && attr == USB_ENDPOINT_SYNC_NONE)
- return 0;
- return -EINVAL;
- }
-
-+ subs->sync_endpoint->is_implicit_feedback = implicit_fb;
-+
- subs->data_endpoint->sync_master = subs->sync_endpoint;
-
- return 0;
-@@ -1579,6 +1584,8 @@ static void prepare_playback_urb(struct snd_usb_substream *subs,
- for (i = 0; i < ctx->packets; i++) {
- if (ctx->packet_size[i])
- counts = ctx->packet_size[i];
-+ else if (ep->sync_master)
-+ counts = snd_usb_endpoint_slave_next_packet_size(ep);
- else
- counts = snd_usb_endpoint_next_packet_size(ep);
-
-diff --git a/tools/bootconfig/main.c b/tools/bootconfig/main.c
-index 0efaf45f7367..e0878f5f74b1 100644
---- a/tools/bootconfig/main.c
-+++ b/tools/bootconfig/main.c
-@@ -14,13 +14,18 @@
- #include <linux/kernel.h>
- #include <linux/bootconfig.h>
-
--static int xbc_show_array(struct xbc_node *node)
-+static int xbc_show_value(struct xbc_node *node)
- {
- const char *val;
-+ char q;
- int i = 0;
-
- xbc_array_for_each_value(node, val) {
-- printf("\"%s\"%s", val, node->next ? ", " : ";\n");
-+ if (strchr(val, '"'))
-+ q = '\'';
-+ else
-+ q = '"';
-+ printf("%c%s%c%s", q, val, q, node->next ? ", " : ";\n");
- i++;
- }
- return i;
-@@ -48,10 +53,7 @@ static void xbc_show_compact_tree(void)
- continue;
- } else if (cnode && xbc_node_is_value(cnode)) {
- printf("%s = ", xbc_node_get_data(node));
-- if (cnode->next)
-- xbc_show_array(cnode);
-- else
-- printf("\"%s\";\n", xbc_node_get_data(cnode));
-+ xbc_show_value(cnode);
- } else {
- printf("%s;\n", xbc_node_get_data(node));
- }
-@@ -205,11 +207,13 @@ int show_xbc(const char *path)
- }
-
- ret = load_xbc_from_initrd(fd, &buf);
-- if (ret < 0)
-+ if (ret < 0) {
- pr_err("Failed to load a boot config from initrd: %d\n", ret);
-- else
-- xbc_show_compact_tree();
--
-+ goto out;
-+ }
-+ xbc_show_compact_tree();
-+ ret = 0;
-+out:
- close(fd);
- free(buf);
-
-diff --git a/tools/bpf/bpftool/gen.c b/tools/bpf/bpftool/gen.c
-index f8113b3646f5..f5960b48c861 100644
---- a/tools/bpf/bpftool/gen.c
-+++ b/tools/bpf/bpftool/gen.c
-@@ -225,6 +225,7 @@ static int codegen(const char *template, ...)
- } else {
- p_err("unrecognized character at pos %td in template '%s'",
- src - template - 1, template);
-+ free(s);
- return -EINVAL;
- }
- }
-@@ -235,6 +236,7 @@ static int codegen(const char *template, ...)
- if (*src != '\t') {
- p_err("not enough tabs at pos %td in template '%s'",
- src - template - 1, template);
-+ free(s);
- return -EINVAL;
- }
- }
-diff --git a/tools/lib/bpf/btf_dump.c b/tools/lib/bpf/btf_dump.c
-index 0c28ee82834b..653dbbe2e366 100644
---- a/tools/lib/bpf/btf_dump.c
-+++ b/tools/lib/bpf/btf_dump.c
-@@ -1137,6 +1137,20 @@ static void btf_dump_emit_mods(struct btf_dump *d, struct id_stack *decl_stack)
- }
- }
-
-+static void btf_dump_drop_mods(struct btf_dump *d, struct id_stack *decl_stack)
-+{
-+ const struct btf_type *t;
-+ __u32 id;
-+
-+ while (decl_stack->cnt) {
-+ id = decl_stack->ids[decl_stack->cnt - 1];
-+ t = btf__type_by_id(d->btf, id);
-+ if (!btf_is_mod(t))
-+ return;
-+ decl_stack->cnt--;
-+ }
-+}
-+
- static void btf_dump_emit_name(const struct btf_dump *d,
- const char *name, bool last_was_ptr)
- {
-@@ -1235,14 +1249,7 @@ static void btf_dump_emit_type_chain(struct btf_dump *d,
- * a const/volatile modifier for array, so we are
- * going to silently skip them here.
- */
-- while (decls->cnt) {
-- next_id = decls->ids[decls->cnt - 1];
-- next_t = btf__type_by_id(d->btf, next_id);
-- if (btf_is_mod(next_t))
-- decls->cnt--;
-- else
-- break;
-- }
-+ btf_dump_drop_mods(d, decls);
-
- if (decls->cnt == 0) {
- btf_dump_emit_name(d, fname, last_was_ptr);
-@@ -1270,7 +1277,15 @@ static void btf_dump_emit_type_chain(struct btf_dump *d,
- __u16 vlen = btf_vlen(t);
- int i;
-
-- btf_dump_emit_mods(d, decls);
-+ /*
-+ * GCC emits extra volatile qualifier for
-+ * __attribute__((noreturn)) function pointers. Clang
-+ * doesn't do it. It's a GCC quirk for backwards
-+ * compatibility with code written for GCC <2.5. So,
-+ * similarly to extra qualifiers for array, just drop
-+ * them, instead of handling them.
-+ */
-+ btf_dump_drop_mods(d, decls);
- if (decls->cnt) {
- btf_dump_printf(d, " (");
- btf_dump_emit_type_chain(d, decls, fname, lvl);
-diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
-index 0c5b4fb553fb..c417cff2cdaf 100644
---- a/tools/lib/bpf/libbpf.c
-+++ b/tools/lib/bpf/libbpf.c
-@@ -3455,10 +3455,6 @@ bpf_object__populate_internal_map(struct bpf_object *obj, struct bpf_map *map)
- char *cp, errmsg[STRERR_BUFSIZE];
- int err, zero = 0;
-
-- /* kernel already zero-initializes .bss map. */
-- if (map_type == LIBBPF_MAP_BSS)
-- return 0;
--
- err = bpf_map_update_elem(map->fd, &zero, map->mmaped, 0);
- if (err) {
- err = -errno;
-diff --git a/tools/perf/builtin-report.c b/tools/perf/builtin-report.c
-index 26d8fc27e427..fc7855262162 100644
---- a/tools/perf/builtin-report.c
-+++ b/tools/perf/builtin-report.c
-@@ -476,8 +476,7 @@ static size_t hists__fprintf_nr_sample_events(struct hists *hists, struct report
- if (rep->time_str)
- ret += fprintf(fp, " (time slices: %s)", rep->time_str);
-
-- if (symbol_conf.show_ref_callgraph &&
-- strstr(evname, "call-graph=no")) {
-+ if (symbol_conf.show_ref_callgraph && evname && strstr(evname, "call-graph=no")) {
- ret += fprintf(fp, ", show reference callgraph");
- }
-
-diff --git a/tools/perf/util/parse-events.y b/tools/perf/util/parse-events.y
-index 94f8bcd83582..9a41247c602b 100644
---- a/tools/perf/util/parse-events.y
-+++ b/tools/perf/util/parse-events.y
-@@ -348,7 +348,7 @@ PE_PMU_EVENT_PRE '-' PE_PMU_EVENT_SUF sep_dc
- struct list_head *list;
- char pmu_name[128];
-
-- snprintf(&pmu_name, 128, "%s-%s", $1, $3);
-+ snprintf(pmu_name, sizeof(pmu_name), "%s-%s", $1, $3);
- free($1);
- free($3);
- if (parse_events_multi_pmu_add(_parse_state, pmu_name, &list) < 0)
-diff --git a/tools/perf/util/probe-event.c b/tools/perf/util/probe-event.c
-index a08f373d3305..df713a5d1e26 100644
---- a/tools/perf/util/probe-event.c
-+++ b/tools/perf/util/probe-event.c
-@@ -1575,7 +1575,7 @@ static int parse_perf_probe_arg(char *str, struct perf_probe_arg *arg)
- }
-
- tmp = strchr(str, '@');
-- if (tmp && tmp != str && strcmp(tmp + 1, "user")) { /* user attr */
-+ if (tmp && tmp != str && !strcmp(tmp + 1, "user")) { /* user attr */
- if (!user_access_is_supported()) {
- semantic_error("ftrace does not support user access\n");
- return -EINVAL;
-@@ -1995,7 +1995,10 @@ static int __synthesize_probe_trace_arg_ref(struct probe_trace_arg_ref *ref,
- if (depth < 0)
- return depth;
- }
-- err = strbuf_addf(buf, "%+ld(", ref->offset);
-+ if (ref->user_access)
-+ err = strbuf_addf(buf, "%s%ld(", "+u", ref->offset);
-+ else
-+ err = strbuf_addf(buf, "%+ld(", ref->offset);
- return (err < 0) ? err : depth;
- }
-
-diff --git a/tools/perf/util/probe-file.c b/tools/perf/util/probe-file.c
-index 8c852948513e..064b63a6a3f3 100644
---- a/tools/perf/util/probe-file.c
-+++ b/tools/perf/util/probe-file.c
-@@ -1044,7 +1044,7 @@ static struct {
- DEFINE_TYPE(FTRACE_README_PROBE_TYPE_X, "*type: * x8/16/32/64,*"),
- DEFINE_TYPE(FTRACE_README_KRETPROBE_OFFSET, "*place (kretprobe): *"),
- DEFINE_TYPE(FTRACE_README_UPROBE_REF_CTR, "*ref_ctr_offset*"),
-- DEFINE_TYPE(FTRACE_README_USER_ACCESS, "*[u]<offset>*"),
-+ DEFINE_TYPE(FTRACE_README_USER_ACCESS, "*u]<offset>*"),
- DEFINE_TYPE(FTRACE_README_MULTIPROBE_EVENT, "*Create/append/*"),
- DEFINE_TYPE(FTRACE_README_IMMEDIATE_VALUE, "*\\imm-value,*"),
- };
-diff --git a/tools/perf/util/stat-display.c b/tools/perf/util/stat-display.c
-index 9e757d18d713..cf393c3eea23 100644
---- a/tools/perf/util/stat-display.c
-+++ b/tools/perf/util/stat-display.c
-@@ -671,7 +671,7 @@ static void print_aggr(struct perf_stat_config *config,
- int s;
- bool first;
-
-- if (!(config->aggr_map || config->aggr_get_id))
-+ if (!config->aggr_map || !config->aggr_get_id)
- return;
-
- aggr_update_shadow(config, evlist);
-@@ -1172,7 +1172,7 @@ static void print_percore(struct perf_stat_config *config,
- int s;
- bool first = true;
-
-- if (!(config->aggr_map || config->aggr_get_id))
-+ if (!config->aggr_map || !config->aggr_get_id)
- return;
-
- if (config->percore_show_thread)
-diff --git a/tools/testing/selftests/bpf/prog_tests/skeleton.c b/tools/testing/selftests/bpf/prog_tests/skeleton.c
-index 9264a2736018..fa153cf67b1b 100644
---- a/tools/testing/selftests/bpf/prog_tests/skeleton.c
-+++ b/tools/testing/selftests/bpf/prog_tests/skeleton.c
-@@ -15,6 +15,8 @@ void test_skeleton(void)
- int duration = 0, err;
- struct test_skeleton* skel;
- struct test_skeleton__bss *bss;
-+ struct test_skeleton__data *data;
-+ struct test_skeleton__rodata *rodata;
- struct test_skeleton__kconfig *kcfg;
-
- skel = test_skeleton__open();
-@@ -24,13 +26,45 @@ void test_skeleton(void)
- if (CHECK(skel->kconfig, "skel_kconfig", "kconfig is mmaped()!\n"))
- goto cleanup;
-
-+ bss = skel->bss;
-+ data = skel->data;
-+ rodata = skel->rodata;
-+
-+ /* validate values are pre-initialized correctly */
-+ CHECK(data->in1 != -1, "in1", "got %d != exp %d\n", data->in1, -1);
-+ CHECK(data->out1 != -1, "out1", "got %d != exp %d\n", data->out1, -1);
-+ CHECK(data->in2 != -1, "in2", "got %lld != exp %lld\n", data->in2, -1LL);
-+ CHECK(data->out2 != -1, "out2", "got %lld != exp %lld\n", data->out2, -1LL);
-+
-+ CHECK(bss->in3 != 0, "in3", "got %d != exp %d\n", bss->in3, 0);
-+ CHECK(bss->out3 != 0, "out3", "got %d != exp %d\n", bss->out3, 0);
-+ CHECK(bss->in4 != 0, "in4", "got %lld != exp %lld\n", bss->in4, 0LL);
-+ CHECK(bss->out4 != 0, "out4", "got %lld != exp %lld\n", bss->out4, 0LL);
-+
-+ CHECK(rodata->in6 != 0, "in6", "got %d != exp %d\n", rodata->in6, 0);
-+ CHECK(bss->out6 != 0, "out6", "got %d != exp %d\n", bss->out6, 0);
-+
-+ /* validate we can pre-setup global variables, even in .bss */
-+ data->in1 = 10;
-+ data->in2 = 11;
-+ bss->in3 = 12;
-+ bss->in4 = 13;
-+ rodata->in6 = 14;
-+
- err = test_skeleton__load(skel);
- if (CHECK(err, "skel_load", "failed to load skeleton: %d\n", err))
- goto cleanup;
-
-- bss = skel->bss;
-- bss->in1 = 1;
-- bss->in2 = 2;
-+ /* validate pre-setup values are still there */
-+ CHECK(data->in1 != 10, "in1", "got %d != exp %d\n", data->in1, 10);
-+ CHECK(data->in2 != 11, "in2", "got %lld != exp %lld\n", data->in2, 11LL);
-+ CHECK(bss->in3 != 12, "in3", "got %d != exp %d\n", bss->in3, 12);
-+ CHECK(bss->in4 != 13, "in4", "got %lld != exp %lld\n", bss->in4, 13LL);
-+ CHECK(rodata->in6 != 14, "in6", "got %d != exp %d\n", rodata->in6, 14);
-+
-+ /* now set new values and attach to get them into outX variables */
-+ data->in1 = 1;
-+ data->in2 = 2;
- bss->in3 = 3;
- bss->in4 = 4;
- bss->in5.a = 5;
-@@ -44,14 +78,15 @@ void test_skeleton(void)
- /* trigger tracepoint */
- usleep(1);
-
-- CHECK(bss->out1 != 1, "res1", "got %d != exp %d\n", bss->out1, 1);
-- CHECK(bss->out2 != 2, "res2", "got %lld != exp %d\n", bss->out2, 2);
-+ CHECK(data->out1 != 1, "res1", "got %d != exp %d\n", data->out1, 1);
-+ CHECK(data->out2 != 2, "res2", "got %lld != exp %d\n", data->out2, 2);
- CHECK(bss->out3 != 3, "res3", "got %d != exp %d\n", (int)bss->out3, 3);
- CHECK(bss->out4 != 4, "res4", "got %lld != exp %d\n", bss->out4, 4);
- CHECK(bss->handler_out5.a != 5, "res5", "got %d != exp %d\n",
- bss->handler_out5.a, 5);
- CHECK(bss->handler_out5.b != 6, "res6", "got %lld != exp %d\n",
- bss->handler_out5.b, 6);
-+ CHECK(bss->out6 != 14, "res7", "got %d != exp %d\n", bss->out6, 14);
-
- CHECK(bss->bpf_syscall != kcfg->CONFIG_BPF_SYSCALL, "ext1",
- "got %d != exp %d\n", bss->bpf_syscall, kcfg->CONFIG_BPF_SYSCALL);
-diff --git a/tools/testing/selftests/bpf/progs/test_skeleton.c b/tools/testing/selftests/bpf/progs/test_skeleton.c
-index de03a90f78ca..77ae86f44db5 100644
---- a/tools/testing/selftests/bpf/progs/test_skeleton.c
-+++ b/tools/testing/selftests/bpf/progs/test_skeleton.c
-@@ -10,16 +10,26 @@ struct s {
- long long b;
- } __attribute__((packed));
-
--int in1 = 0;
--long long in2 = 0;
-+/* .data section */
-+int in1 = -1;
-+long long in2 = -1;
-+
-+/* .bss section */
- char in3 = '\0';
- long long in4 __attribute__((aligned(64))) = 0;
- struct s in5 = {};
-
--long long out2 = 0;
-+/* .rodata section */
-+const volatile int in6 = 0;
-+
-+/* .data section */
-+int out1 = -1;
-+long long out2 = -1;
-+
-+/* .bss section */
- char out3 = 0;
- long long out4 = 0;
--int out1 = 0;
-+int out6 = 0;
-
- extern bool CONFIG_BPF_SYSCALL __kconfig;
- extern int LINUX_KERNEL_VERSION __kconfig;
-@@ -36,6 +46,7 @@ int handler(const void *ctx)
- out3 = in3;
- out4 = in4;
- out5 = in5;
-+ out6 = in6;
-
- bpf_syscall = CONFIG_BPF_SYSCALL;
- kern_ver = LINUX_KERNEL_VERSION;
-diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
-index 42f4f49f2a48..2c85b9dd86f5 100644
---- a/tools/testing/selftests/kvm/Makefile
-+++ b/tools/testing/selftests/kvm/Makefile
-@@ -80,7 +80,11 @@ LIBKVM += $(LIBKVM_$(UNAME_M))
- INSTALL_HDR_PATH = $(top_srcdir)/usr
- LINUX_HDR_PATH = $(INSTALL_HDR_PATH)/include/
- LINUX_TOOL_INCLUDE = $(top_srcdir)/tools/include
-+ifeq ($(ARCH),x86_64)
-+LINUX_TOOL_ARCH_INCLUDE = $(top_srcdir)/tools/arch/x86/include
-+else
- LINUX_TOOL_ARCH_INCLUDE = $(top_srcdir)/tools/arch/$(ARCH)/include
-+endif
- CFLAGS += -Wall -Wstrict-prototypes -Wuninitialized -O2 -g -std=gnu99 \
- -fno-stack-protector -fno-PIE -I$(LINUX_TOOL_INCLUDE) \
- -I$(LINUX_TOOL_ARCH_INCLUDE) -I$(LINUX_HDR_PATH) -Iinclude \
-diff --git a/tools/testing/selftests/net/timestamping.c b/tools/testing/selftests/net/timestamping.c
-index aca3491174a1..f4bb4fef0f39 100644
---- a/tools/testing/selftests/net/timestamping.c
-+++ b/tools/testing/selftests/net/timestamping.c
-@@ -313,10 +313,16 @@ int main(int argc, char **argv)
- int val;
- socklen_t len;
- struct timeval next;
-+ size_t if_len;
-
- if (argc < 2)
- usage(0);
- interface = argv[1];
-+ if_len = strlen(interface);
-+ if (if_len >= IFNAMSIZ) {
-+ printf("interface name exceeds IFNAMSIZ\n");
-+ exit(1);
-+ }
-
- for (i = 2; i < argc; i++) {
- if (!strcasecmp(argv[i], "SO_TIMESTAMP"))
-@@ -350,12 +356,12 @@ int main(int argc, char **argv)
- bail("socket");
-
- memset(&device, 0, sizeof(device));
-- strncpy(device.ifr_name, interface, sizeof(device.ifr_name));
-+ memcpy(device.ifr_name, interface, if_len + 1);
- if (ioctl(sock, SIOCGIFADDR, &device) < 0)
- bail("getting interface IP address");
-
- memset(&hwtstamp, 0, sizeof(hwtstamp));
-- strncpy(hwtstamp.ifr_name, interface, sizeof(hwtstamp.ifr_name));
-+ memcpy(hwtstamp.ifr_name, interface, if_len + 1);
- hwtstamp.ifr_data = (void *)&hwconfig;
- memset(&hwconfig, 0, sizeof(hwconfig));
- hwconfig.tx_type =
-diff --git a/tools/testing/selftests/ntb/ntb_test.sh b/tools/testing/selftests/ntb/ntb_test.sh
-index 9c60337317c6..020137b61407 100755
---- a/tools/testing/selftests/ntb/ntb_test.sh
-+++ b/tools/testing/selftests/ntb/ntb_test.sh
-@@ -241,7 +241,7 @@ function get_files_count()
- split_remote $LOC
-
- if [[ "$REMOTE" == "" ]]; then
-- echo $(ls -1 "$LOC"/${NAME}* 2>/dev/null | wc -l)
-+ echo $(ls -1 "$VPATH"/${NAME}* 2>/dev/null | wc -l)
- else
- echo $(ssh "$REMOTE" "ls -1 \"$VPATH\"/${NAME}* | \
- wc -l" 2> /dev/null)
-diff --git a/tools/testing/selftests/timens/clock_nanosleep.c b/tools/testing/selftests/timens/clock_nanosleep.c
-index 8e7b7c72ef65..72d41b955fb2 100644
---- a/tools/testing/selftests/timens/clock_nanosleep.c
-+++ b/tools/testing/selftests/timens/clock_nanosleep.c
-@@ -119,7 +119,7 @@ int main(int argc, char *argv[])
-
- ksft_set_plan(4);
-
-- check_config_posix_timers();
-+ check_supported_timers();
-
- if (unshare_timens())
- return 1;
-diff --git a/tools/testing/selftests/timens/timens.c b/tools/testing/selftests/timens/timens.c
-index 098be7c83be3..52b6a1185f52 100644
---- a/tools/testing/selftests/timens/timens.c
-+++ b/tools/testing/selftests/timens/timens.c
-@@ -155,7 +155,7 @@ int main(int argc, char *argv[])
-
- nscheck();
-
-- check_config_posix_timers();
-+ check_supported_timers();
-
- ksft_set_plan(ARRAY_SIZE(clocks) * 2);
-
-diff --git a/tools/testing/selftests/timens/timens.h b/tools/testing/selftests/timens/timens.h
-index e09e7e39bc52..d4fc52d47146 100644
---- a/tools/testing/selftests/timens/timens.h
-+++ b/tools/testing/selftests/timens/timens.h
-@@ -14,15 +14,26 @@
- #endif
-
- static int config_posix_timers = true;
-+static int config_alarm_timers = true;
-
--static inline void check_config_posix_timers(void)
-+static inline void check_supported_timers(void)
- {
-+ struct timespec ts;
-+
- if (timer_create(-1, 0, 0) == -1 && errno == ENOSYS)
- config_posix_timers = false;
-+
-+ if (clock_gettime(CLOCK_BOOTTIME_ALARM, &ts) == -1 && errno == EINVAL)
-+ config_alarm_timers = false;
- }
-
- static inline bool check_skip(int clockid)
- {
-+ if (!config_alarm_timers && clockid == CLOCK_BOOTTIME_ALARM) {
-+ ksft_test_result_skip("CLOCK_BOOTTIME_ALARM isn't supported\n");
-+ return true;
-+ }
-+
- if (config_posix_timers)
- return false;
-
-diff --git a/tools/testing/selftests/timens/timer.c b/tools/testing/selftests/timens/timer.c
-index 96dba11ebe44..5e7f0051bd7b 100644
---- a/tools/testing/selftests/timens/timer.c
-+++ b/tools/testing/selftests/timens/timer.c
-@@ -22,6 +22,9 @@ int run_test(int clockid, struct timespec now)
- timer_t fd;
- int i;
-
-+ if (check_skip(clockid))
-+ return 0;
-+
- for (i = 0; i < 2; i++) {
- struct sigevent sevp = {.sigev_notify = SIGEV_NONE};
- int flags = 0;
-@@ -74,6 +77,8 @@ int main(int argc, char *argv[])
-
- nscheck();
-
-+ check_supported_timers();
-+
- ksft_set_plan(3);
-
- clock_gettime(CLOCK_MONOTONIC, &mtime_now);
-diff --git a/tools/testing/selftests/timens/timerfd.c b/tools/testing/selftests/timens/timerfd.c
-index eff1ec5ff215..9edd43d6b2c1 100644
---- a/tools/testing/selftests/timens/timerfd.c
-+++ b/tools/testing/selftests/timens/timerfd.c
-@@ -28,6 +28,9 @@ int run_test(int clockid, struct timespec now)
- long long elapsed;
- int fd, i;
-
-+ if (check_skip(clockid))
-+ return 0;
-+
- if (tclock_gettime(clockid, &now))
- return pr_perror("clock_gettime(%d)", clockid);
-
-@@ -81,6 +84,8 @@ int main(int argc, char *argv[])
-
- nscheck();
-
-+ check_supported_timers();
-+
- ksft_set_plan(3);
-
- clock_gettime(CLOCK_MONOTONIC, &mtime_now);
-diff --git a/tools/testing/selftests/x86/protection_keys.c b/tools/testing/selftests/x86/protection_keys.c
-index 480995bceefa..47191af46617 100644
---- a/tools/testing/selftests/x86/protection_keys.c
-+++ b/tools/testing/selftests/x86/protection_keys.c
-@@ -24,6 +24,7 @@
- #define _GNU_SOURCE
- #include <errno.h>
- #include <linux/futex.h>
-+#include <time.h>
- #include <sys/time.h>
- #include <sys/syscall.h>
- #include <string.h>
-@@ -612,10 +613,10 @@ int alloc_random_pkey(void)
- int nr_alloced = 0;
- int random_index;
- memset(alloced_pkeys, 0, sizeof(alloced_pkeys));
-+ srand((unsigned int)time(NULL));
-
- /* allocate every possible key and make a note of which ones we got */
- max_nr_pkey_allocs = NR_PKEYS;
-- max_nr_pkey_allocs = 1;
- for (i = 0; i < max_nr_pkey_allocs; i++) {
- int new_pkey = alloc_pkey();
- if (new_pkey < 0)
diff --git a/1005_linux-5.8.6.patch b/1005_linux-5.8.6.patch
new file mode 100644
index 0000000..842f070
--- /dev/null
+++ b/1005_linux-5.8.6.patch
@@ -0,0 +1,11789 @@
+diff --git a/Documentation/admin-guide/ext4.rst b/Documentation/admin-guide/ext4.rst
+index 9443fcef18760..f37d0743fd668 100644
+--- a/Documentation/admin-guide/ext4.rst
++++ b/Documentation/admin-guide/ext4.rst
+@@ -482,6 +482,9 @@ Files in /sys/fs/ext4/<devname>:
+ multiple of this tuning parameter if the stripe size is not set in the
+ ext4 superblock
+
++ mb_max_inode_prealloc
++ The maximum length of per-inode ext4_prealloc_space list.
++
+ mb_max_to_scan
+ The maximum number of extents the multiblock allocator will search to
+ find the best extent.
+diff --git a/Makefile b/Makefile
+index f47073a3b4740..5cf35650373b1 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 8
+-SUBLEVEL = 5
++SUBLEVEL = 6
+ EXTRAVERSION =
+ NAME = Kleptomaniac Octopus
+
+diff --git a/arch/arm/boot/dts/ls1021a.dtsi b/arch/arm/boot/dts/ls1021a.dtsi
+index 760a68c163c83..b2ff27af090ec 100644
+--- a/arch/arm/boot/dts/ls1021a.dtsi
++++ b/arch/arm/boot/dts/ls1021a.dtsi
+@@ -772,7 +772,7 @@
+ fsl,tmr-prsc = <2>;
+ fsl,tmr-add = <0xaaaaaaab>;
+ fsl,tmr-fiper1 = <999999995>;
+- fsl,tmr-fiper2 = <99990>;
++ fsl,tmr-fiper2 = <999999995>;
+ fsl,max-adj = <499999999>;
+ fsl,extts-fifo;
+ };
+diff --git a/arch/arm64/Makefile b/arch/arm64/Makefile
+index 91e377770a6b8..d5fe7c9e0be1d 100644
+--- a/arch/arm64/Makefile
++++ b/arch/arm64/Makefile
+@@ -158,7 +158,8 @@ zinstall install:
+ PHONY += vdso_install
+ vdso_install:
+ $(Q)$(MAKE) $(build)=arch/arm64/kernel/vdso $@
+- $(Q)$(MAKE) $(build)=arch/arm64/kernel/vdso32 $@
++ $(if $(CONFIG_COMPAT_VDSO), \
++ $(Q)$(MAKE) $(build)=arch/arm64/kernel/vdso32 $@)
+
+ # We use MRPROPER_FILES and CLEAN_FILES now
+ archclean:
+diff --git a/arch/arm64/boot/dts/qcom/msm8916-pins.dtsi b/arch/arm64/boot/dts/qcom/msm8916-pins.dtsi
+index 5785bf0a807ce..591f48a575353 100644
+--- a/arch/arm64/boot/dts/qcom/msm8916-pins.dtsi
++++ b/arch/arm64/boot/dts/qcom/msm8916-pins.dtsi
+@@ -569,7 +569,7 @@
+ pins = "gpio63", "gpio64", "gpio65", "gpio66",
+ "gpio67", "gpio68";
+ drive-strength = <2>;
+- bias-disable;
++ bias-pull-down;
+ };
+ };
+ };
+diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
+index 51c1d99189992..1da8e3dc44555 100644
+--- a/arch/arm64/include/asm/kvm_arm.h
++++ b/arch/arm64/include/asm/kvm_arm.h
+@@ -71,11 +71,12 @@
+ * IMO: Override CPSR.I and enable signaling with VI
+ * FMO: Override CPSR.F and enable signaling with VF
+ * SWIO: Turn set/way invalidates into set/way clean+invalidate
++ * PTW: Take a stage2 fault if a stage1 walk steps in device memory
+ */
+ #define HCR_GUEST_FLAGS (HCR_TSC | HCR_TSW | HCR_TWE | HCR_TWI | HCR_VM | \
+ HCR_BSU_IS | HCR_FB | HCR_TAC | \
+ HCR_AMO | HCR_SWIO | HCR_TIDCP | HCR_RW | HCR_TLOR | \
+- HCR_FMO | HCR_IMO)
++ HCR_FMO | HCR_IMO | HCR_PTW )
+ #define HCR_VIRT_EXCP_MASK (HCR_VSE | HCR_VI | HCR_VF)
+ #define HCR_HOST_NVHE_FLAGS (HCR_RW | HCR_API | HCR_APK)
+ #define HCR_HOST_VHE_FLAGS (HCR_RW | HCR_TGE | HCR_E2H)
+diff --git a/arch/arm64/include/asm/smp.h b/arch/arm64/include/asm/smp.h
+index a0c8a0b652593..0eadbf933e359 100644
+--- a/arch/arm64/include/asm/smp.h
++++ b/arch/arm64/include/asm/smp.h
+@@ -46,7 +46,12 @@ DECLARE_PER_CPU_READ_MOSTLY(int, cpu_number);
+ * Logical CPU mapping.
+ */
+ extern u64 __cpu_logical_map[NR_CPUS];
+-#define cpu_logical_map(cpu) __cpu_logical_map[cpu]
++extern u64 cpu_logical_map(int cpu);
++
++static inline void set_cpu_logical_map(int cpu, u64 hwid)
++{
++ __cpu_logical_map[cpu] = hwid;
++}
+
+ struct seq_file;
+
+diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
+index 79728bfb5351f..2c0b82db825ba 100644
+--- a/arch/arm64/kernel/cpu_errata.c
++++ b/arch/arm64/kernel/cpu_errata.c
+@@ -910,6 +910,8 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
+ .desc = "ARM erratum 1418040",
+ .capability = ARM64_WORKAROUND_1418040,
+ ERRATA_MIDR_RANGE_LIST(erratum_1418040_list),
++ .type = (ARM64_CPUCAP_SCOPE_LOCAL_CPU |
++ ARM64_CPUCAP_PERMITTED_FOR_LATE_CPU),
+ },
+ #endif
+ #ifdef CONFIG_ARM64_WORKAROUND_SPECULATIVE_AT
+diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
+index 35de8ba60e3d5..44445d471442d 100644
+--- a/arch/arm64/kernel/entry.S
++++ b/arch/arm64/kernel/entry.S
+@@ -169,19 +169,6 @@ alternative_cb_end
+ stp x28, x29, [sp, #16 * 14]
+
+ .if \el == 0
+- .if \regsize == 32
+- /*
+- * If we're returning from a 32-bit task on a system affected by
+- * 1418040 then re-enable userspace access to the virtual counter.
+- */
+-#ifdef CONFIG_ARM64_ERRATUM_1418040
+-alternative_if ARM64_WORKAROUND_1418040
+- mrs x0, cntkctl_el1
+- orr x0, x0, #2 // ARCH_TIMER_USR_VCT_ACCESS_EN
+- msr cntkctl_el1, x0
+-alternative_else_nop_endif
+-#endif
+- .endif
+ clear_gp_regs
+ mrs x21, sp_el0
+ ldr_this_cpu tsk, __entry_task, x20
+@@ -337,14 +324,6 @@ alternative_else_nop_endif
+ tst x22, #PSR_MODE32_BIT // native task?
+ b.eq 3f
+
+-#ifdef CONFIG_ARM64_ERRATUM_1418040
+-alternative_if ARM64_WORKAROUND_1418040
+- mrs x0, cntkctl_el1
+- bic x0, x0, #2 // ARCH_TIMER_USR_VCT_ACCESS_EN
+- msr cntkctl_el1, x0
+-alternative_else_nop_endif
+-#endif
+-
+ #ifdef CONFIG_ARM64_ERRATUM_845719
+ alternative_if ARM64_WORKAROUND_845719
+ #ifdef CONFIG_PID_IN_CONTEXTIDR
+diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
+index 6089638c7d43f..d8a10cf28f827 100644
+--- a/arch/arm64/kernel/process.c
++++ b/arch/arm64/kernel/process.c
+@@ -515,6 +515,39 @@ static void entry_task_switch(struct task_struct *next)
+ __this_cpu_write(__entry_task, next);
+ }
+
++/*
++ * ARM erratum 1418040 handling, affecting the 32bit view of CNTVCT.
++ * Assuming the virtual counter is enabled at the beginning of times:
++ *
++ * - disable access when switching from a 64bit task to a 32bit task
++ * - enable access when switching from a 32bit task to a 64bit task
++ */
++static void erratum_1418040_thread_switch(struct task_struct *prev,
++ struct task_struct *next)
++{
++ bool prev32, next32;
++ u64 val;
++
++ if (!(IS_ENABLED(CONFIG_ARM64_ERRATUM_1418040) &&
++ cpus_have_const_cap(ARM64_WORKAROUND_1418040)))
++ return;
++
++ prev32 = is_compat_thread(task_thread_info(prev));
++ next32 = is_compat_thread(task_thread_info(next));
++
++ if (prev32 == next32)
++ return;
++
++ val = read_sysreg(cntkctl_el1);
++
++ if (!next32)
++ val |= ARCH_TIMER_USR_VCT_ACCESS_EN;
++ else
++ val &= ~ARCH_TIMER_USR_VCT_ACCESS_EN;
++
++ write_sysreg(val, cntkctl_el1);
++}
++
+ /*
+ * Thread switching.
+ */
+@@ -530,6 +563,7 @@ __notrace_funcgraph struct task_struct *__switch_to(struct task_struct *prev,
+ entry_task_switch(next);
+ uao_thread_switch(next);
+ ssbs_thread_switch(next);
++ erratum_1418040_thread_switch(prev, next);
+
+ /*
+ * Complete any pending TLB or cache maintenance on this CPU in case
+diff --git a/arch/arm64/kernel/setup.c b/arch/arm64/kernel/setup.c
+index 93b3844cf4421..07b7940951e28 100644
+--- a/arch/arm64/kernel/setup.c
++++ b/arch/arm64/kernel/setup.c
+@@ -85,7 +85,7 @@ u64 __cacheline_aligned boot_args[4];
+ void __init smp_setup_processor_id(void)
+ {
+ u64 mpidr = read_cpuid_mpidr() & MPIDR_HWID_BITMASK;
+- cpu_logical_map(0) = mpidr;
++ set_cpu_logical_map(0, mpidr);
+
+ /*
+ * clear __my_cpu_offset on boot CPU to avoid hang caused by
+@@ -276,6 +276,12 @@ arch_initcall(reserve_memblock_reserved_regions);
+
+ u64 __cpu_logical_map[NR_CPUS] = { [0 ... NR_CPUS-1] = INVALID_HWID };
+
++u64 cpu_logical_map(int cpu)
++{
++ return __cpu_logical_map[cpu];
++}
++EXPORT_SYMBOL_GPL(cpu_logical_map);
++
+ void __init setup_arch(char **cmdline_p)
+ {
+ init_mm.start_code = (unsigned long) _text;
+diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
+index e43a8ff19f0f6..8cd6316a0d833 100644
+--- a/arch/arm64/kernel/smp.c
++++ b/arch/arm64/kernel/smp.c
+@@ -567,7 +567,7 @@ acpi_map_gic_cpu_interface(struct acpi_madt_generic_interrupt *processor)
+ return;
+
+ /* map the logical cpu id to cpu MPIDR */
+- cpu_logical_map(cpu_count) = hwid;
++ set_cpu_logical_map(cpu_count, hwid);
+
+ cpu_madt_gicc[cpu_count] = *processor;
+
+@@ -681,7 +681,7 @@ static void __init of_parse_and_init_cpus(void)
+ goto next;
+
+ pr_debug("cpu logical map 0x%llx\n", hwid);
+- cpu_logical_map(cpu_count) = hwid;
++ set_cpu_logical_map(cpu_count, hwid);
+
+ early_map_cpu_to_node(cpu_count, of_node_to_nid(dn));
+ next:
+@@ -722,7 +722,7 @@ void __init smp_init_cpus(void)
+ for (i = 1; i < nr_cpu_ids; i++) {
+ if (cpu_logical_map(i) != INVALID_HWID) {
+ if (smp_cpu_setup(i))
+- cpu_logical_map(i) = INVALID_HWID;
++ set_cpu_logical_map(i, INVALID_HWID);
+ }
+ }
+ }
+diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
+index db1c4487d95d1..9270b14157b55 100644
+--- a/arch/arm64/kvm/hyp/switch.c
++++ b/arch/arm64/kvm/hyp/switch.c
+@@ -897,7 +897,7 @@ static void __hyp_text __hyp_call_panic_nvhe(u64 spsr, u64 elr, u64 par,
+ * making sure it is a kernel address and not a PC-relative
+ * reference.
+ */
+- asm volatile("ldr %0, =__hyp_panic_string" : "=r" (str_va));
++ asm volatile("ldr %0, =%1" : "=r" (str_va) : "S" (__hyp_panic_string));
+
+ __hyp_do_panic(str_va,
+ spsr, elr,
+diff --git a/arch/mips/Kconfig b/arch/mips/Kconfig
+index a7e40bb1e5bc6..c43ad3b3cea4b 100644
+--- a/arch/mips/Kconfig
++++ b/arch/mips/Kconfig
+@@ -2203,6 +2203,7 @@ endchoice
+
+ config KVM_GUEST
+ bool "KVM Guest Kernel"
++ depends on CPU_MIPS32_R2
+ depends on BROKEN_ON_SMP
+ help
+ Select this option if building a guest kernel for KVM (Trap & Emulate)
+diff --git a/arch/mips/kvm/Kconfig b/arch/mips/kvm/Kconfig
+index 2bf02d849a3a8..032b3fca6cbba 100644
+--- a/arch/mips/kvm/Kconfig
++++ b/arch/mips/kvm/Kconfig
+@@ -37,10 +37,11 @@ choice
+
+ config KVM_MIPS_TE
+ bool "Trap & Emulate"
++ depends on CPU_MIPS32_R2
+ help
+ Use trap and emulate to virtualize 32-bit guests in user mode. This
+ does not require any special hardware Virtualization support beyond
+- standard MIPS32/64 r2 or later, but it does require the guest kernel
++ standard MIPS32 r2 or later, but it does require the guest kernel
+ to be configured with CONFIG_KVM_GUEST=y so that it resides in the
+ user address segment.
+
+diff --git a/arch/mips/vdso/genvdso.c b/arch/mips/vdso/genvdso.c
+index be57b832bbe0a..ccba50ec8a40e 100644
+--- a/arch/mips/vdso/genvdso.c
++++ b/arch/mips/vdso/genvdso.c
+@@ -122,6 +122,7 @@ static void *map_vdso(const char *path, size_t *_size)
+ if (fstat(fd, &stat) != 0) {
+ fprintf(stderr, "%s: Failed to stat '%s': %s\n", program_name,
+ path, strerror(errno));
++ close(fd);
+ return NULL;
+ }
+
+@@ -130,6 +131,7 @@ static void *map_vdso(const char *path, size_t *_size)
+ if (addr == MAP_FAILED) {
+ fprintf(stderr, "%s: Failed to map '%s': %s\n", program_name,
+ path, strerror(errno));
++ close(fd);
+ return NULL;
+ }
+
+@@ -139,6 +141,7 @@ static void *map_vdso(const char *path, size_t *_size)
+ if (memcmp(ehdr->e_ident, ELFMAG, SELFMAG) != 0) {
+ fprintf(stderr, "%s: '%s' is not an ELF file\n", program_name,
+ path);
++ close(fd);
+ return NULL;
+ }
+
+@@ -150,6 +153,7 @@ static void *map_vdso(const char *path, size_t *_size)
+ default:
+ fprintf(stderr, "%s: '%s' has invalid ELF class\n",
+ program_name, path);
++ close(fd);
+ return NULL;
+ }
+
+@@ -161,6 +165,7 @@ static void *map_vdso(const char *path, size_t *_size)
+ default:
+ fprintf(stderr, "%s: '%s' has invalid ELF data order\n",
+ program_name, path);
++ close(fd);
+ return NULL;
+ }
+
+@@ -168,15 +173,18 @@ static void *map_vdso(const char *path, size_t *_size)
+ fprintf(stderr,
+ "%s: '%s' has invalid ELF machine (expected EM_MIPS)\n",
+ program_name, path);
++ close(fd);
+ return NULL;
+ } else if (swap_uint16(ehdr->e_type) != ET_DYN) {
+ fprintf(stderr,
+ "%s: '%s' has invalid ELF type (expected ET_DYN)\n",
+ program_name, path);
++ close(fd);
+ return NULL;
+ }
+
+ *_size = stat.st_size;
++ close(fd);
+ return addr;
+ }
+
+@@ -293,10 +301,12 @@ int main(int argc, char **argv)
+ /* Calculate and write symbol offsets to <output file> */
+ if (!get_symbols(dbg_vdso_path, dbg_vdso)) {
+ unlink(out_path);
++ fclose(out_file);
+ return EXIT_FAILURE;
+ }
+
+ fprintf(out_file, "};\n");
++ fclose(out_file);
+
+ return EXIT_SUCCESS;
+ }
+diff --git a/arch/powerpc/perf/core-book3s.c b/arch/powerpc/perf/core-book3s.c
+index 01d70280d2872..c6f9d75283813 100644
+--- a/arch/powerpc/perf/core-book3s.c
++++ b/arch/powerpc/perf/core-book3s.c
+@@ -1517,9 +1517,16 @@ nocheck:
+ ret = 0;
+ out:
+ if (has_branch_stack(event)) {
+- power_pmu_bhrb_enable(event);
+- cpuhw->bhrb_filter = ppmu->bhrb_filter_map(
+- event->attr.branch_sample_type);
++ u64 bhrb_filter = -1;
++
++ if (ppmu->bhrb_filter_map)
++ bhrb_filter = ppmu->bhrb_filter_map(
++ event->attr.branch_sample_type);
++
++ if (bhrb_filter != -1) {
++ cpuhw->bhrb_filter = bhrb_filter;
++ power_pmu_bhrb_enable(event);
++ }
+ }
+
+ perf_pmu_enable(event->pmu);
+@@ -1841,7 +1848,6 @@ static int power_pmu_event_init(struct perf_event *event)
+ int n;
+ int err;
+ struct cpu_hw_events *cpuhw;
+- u64 bhrb_filter;
+
+ if (!ppmu)
+ return -ENOENT;
+@@ -1947,7 +1953,10 @@ static int power_pmu_event_init(struct perf_event *event)
+ err = power_check_constraints(cpuhw, events, cflags, n + 1);
+
+ if (has_branch_stack(event)) {
+- bhrb_filter = ppmu->bhrb_filter_map(
++ u64 bhrb_filter = -1;
++
++ if (ppmu->bhrb_filter_map)
++ bhrb_filter = ppmu->bhrb_filter_map(
+ event->attr.branch_sample_type);
+
+ if (bhrb_filter == -1) {
+@@ -2101,6 +2110,10 @@ static void record_and_restart(struct perf_event *event, unsigned long val,
+
+ if (perf_event_overflow(event, &data, regs))
+ power_pmu_stop(event, 0);
++ } else if (period) {
++ /* Account for interrupt in case of invalid SIAR */
++ if (perf_event_account_interrupt(event))
++ power_pmu_stop(event, 0);
+ }
+ }
+
+diff --git a/arch/powerpc/platforms/Kconfig.cputype b/arch/powerpc/platforms/Kconfig.cputype
+index 87737ec86d39a..1dc9d3c818726 100644
+--- a/arch/powerpc/platforms/Kconfig.cputype
++++ b/arch/powerpc/platforms/Kconfig.cputype
+@@ -36,7 +36,7 @@ config PPC_BOOK3S_6xx
+ select PPC_HAVE_PMU_SUPPORT
+ select PPC_HAVE_KUEP
+ select PPC_HAVE_KUAP
+- select HAVE_ARCH_VMAP_STACK
++ select HAVE_ARCH_VMAP_STACK if !ADB_PMU
+
+ config PPC_BOOK3S_601
+ bool "PowerPC 601"
+diff --git a/arch/powerpc/platforms/cell/Kconfig b/arch/powerpc/platforms/cell/Kconfig
+index 0f7c8241912b9..f2ff359041eec 100644
+--- a/arch/powerpc/platforms/cell/Kconfig
++++ b/arch/powerpc/platforms/cell/Kconfig
+@@ -44,6 +44,7 @@ config SPU_FS
+ tristate "SPU file system"
+ default m
+ depends on PPC_CELL
++ depends on COREDUMP
+ select SPU_BASE
+ help
+ The SPU file system is used to access Synergistic Processing
+diff --git a/arch/powerpc/sysdev/xive/native.c b/arch/powerpc/sysdev/xive/native.c
+index 71b881e554fcb..cb58ec7ce77ac 100644
+--- a/arch/powerpc/sysdev/xive/native.c
++++ b/arch/powerpc/sysdev/xive/native.c
+@@ -18,6 +18,7 @@
+ #include <linux/delay.h>
+ #include <linux/cpumask.h>
+ #include <linux/mm.h>
++#include <linux/kmemleak.h>
+
+ #include <asm/machdep.h>
+ #include <asm/prom.h>
+@@ -647,6 +648,7 @@ static bool xive_native_provision_pages(void)
+ pr_err("Failed to allocate provisioning page\n");
+ return false;
+ }
++ kmemleak_ignore(p);
+ opal_xive_donate_page(chip, __pa(p));
+ }
+ return true;
+diff --git a/arch/x86/kernel/apic/vector.c b/arch/x86/kernel/apic/vector.c
+index dae32d948bf25..f8a56b5dc29fe 100644
+--- a/arch/x86/kernel/apic/vector.c
++++ b/arch/x86/kernel/apic/vector.c
+@@ -161,6 +161,7 @@ static void apic_update_vector(struct irq_data *irqd, unsigned int newvec,
+ apicd->move_in_progress = true;
+ apicd->prev_vector = apicd->vector;
+ apicd->prev_cpu = apicd->cpu;
++ WARN_ON_ONCE(apicd->cpu == newcpu);
+ } else {
+ irq_matrix_free(vector_matrix, apicd->cpu, apicd->vector,
+ managed);
+@@ -910,7 +911,7 @@ void send_cleanup_vector(struct irq_cfg *cfg)
+ __send_cleanup_vector(apicd);
+ }
+
+-static void __irq_complete_move(struct irq_cfg *cfg, unsigned vector)
++void irq_complete_move(struct irq_cfg *cfg)
+ {
+ struct apic_chip_data *apicd;
+
+@@ -918,15 +919,16 @@ static void __irq_complete_move(struct irq_cfg *cfg, unsigned vector)
+ if (likely(!apicd->move_in_progress))
+ return;
+
+- if (vector == apicd->vector && apicd->cpu == smp_processor_id())
++ /*
++ * If the interrupt arrived on the new target CPU, cleanup the
++ * vector on the old target CPU. A vector check is not required
++ * because an interrupt can never move from one vector to another
++ * on the same CPU.
++ */
++ if (apicd->cpu == smp_processor_id())
+ __send_cleanup_vector(apicd);
+ }
+
+-void irq_complete_move(struct irq_cfg *cfg)
+-{
+- __irq_complete_move(cfg, ~get_irq_regs()->orig_ax);
+-}
+-
+ /*
+ * Called from fixup_irqs() with @desc->lock held and interrupts disabled.
+ */
+diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
+index 518ac6bf752e0..9fb6a8655ddf3 100644
+--- a/arch/x86/kernel/smpboot.c
++++ b/arch/x86/kernel/smpboot.c
+@@ -1604,14 +1604,28 @@ int native_cpu_disable(void)
+ if (ret)
+ return ret;
+
+- /*
+- * Disable the local APIC. Otherwise IPI broadcasts will reach
+- * it. It still responds normally to INIT, NMI, SMI, and SIPI
+- * messages.
+- */
+- apic_soft_disable();
+ cpu_disable_common();
+
++ /*
++ * Disable the local APIC. Otherwise IPI broadcasts will reach
++ * it. It still responds normally to INIT, NMI, SMI, and SIPI
++ * messages.
++ *
++ * Disabling the APIC must happen after cpu_disable_common()
++ * which invokes fixup_irqs().
++ *
++ * Disabling the APIC preserves already set bits in IRR, but
++ * an interrupt arriving after disabling the local APIC does not
++ * set the corresponding IRR bit.
++ *
++ * fixup_irqs() scans IRR for set bits so it can raise a not
++ * yet handled interrupt on the new destination CPU via an IPI
++ * but obviously it can't do so for IRR bits which are not set.
++ * IOW, interrupts arriving after disabling the local APIC will
++ * be lost.
++ */
++ apic_soft_disable();
++
+ return 0;
+ }
+
+diff --git a/block/bfq-cgroup.c b/block/bfq-cgroup.c
+index 68882b9b8f11f..b791e2041e49b 100644
+--- a/block/bfq-cgroup.c
++++ b/block/bfq-cgroup.c
+@@ -332,7 +332,7 @@ static void bfqg_put(struct bfq_group *bfqg)
+ kfree(bfqg);
+ }
+
+-void bfqg_and_blkg_get(struct bfq_group *bfqg)
++static void bfqg_and_blkg_get(struct bfq_group *bfqg)
+ {
+ /* see comments in bfq_bic_update_cgroup for why refcounting bfqg */
+ bfqg_get(bfqg);
+diff --git a/block/bfq-iosched.h b/block/bfq-iosched.h
+index cd224aaf9f52a..703895224562c 100644
+--- a/block/bfq-iosched.h
++++ b/block/bfq-iosched.h
+@@ -986,7 +986,6 @@ struct bfq_group *bfq_find_set_group(struct bfq_data *bfqd,
+ struct blkcg_gq *bfqg_to_blkg(struct bfq_group *bfqg);
+ struct bfq_group *bfqq_group(struct bfq_queue *bfqq);
+ struct bfq_group *bfq_create_group_hierarchy(struct bfq_data *bfqd, int node);
+-void bfqg_and_blkg_get(struct bfq_group *bfqg);
+ void bfqg_and_blkg_put(struct bfq_group *bfqg);
+
+ #ifdef CONFIG_BFQ_GROUP_IOSCHED
+diff --git a/block/bfq-wf2q.c b/block/bfq-wf2q.c
+index eb0e2a6daabe6..26776bdbdf360 100644
+--- a/block/bfq-wf2q.c
++++ b/block/bfq-wf2q.c
+@@ -533,9 +533,7 @@ static void bfq_get_entity(struct bfq_entity *entity)
+ bfqq->ref++;
+ bfq_log_bfqq(bfqq->bfqd, bfqq, "get_entity: %p %d",
+ bfqq, bfqq->ref);
+- } else
+- bfqg_and_blkg_get(container_of(entity, struct bfq_group,
+- entity));
++ }
+ }
+
+ /**
+@@ -649,14 +647,8 @@ static void bfq_forget_entity(struct bfq_service_tree *st,
+
+ entity->on_st_or_in_serv = false;
+ st->wsum -= entity->weight;
+- if (is_in_service)
+- return;
+-
+- if (bfqq)
++ if (bfqq && !is_in_service)
+ bfq_put_queue(bfqq);
+- else
+- bfqg_and_blkg_put(container_of(entity, struct bfq_group,
+- entity));
+ }
+
+ /**
+diff --git a/block/bio.c b/block/bio.c
+index a7366c02c9b57..b1883adc8f154 100644
+--- a/block/bio.c
++++ b/block/bio.c
+@@ -738,8 +738,8 @@ static inline bool page_is_mergeable(const struct bio_vec *bv,
+ struct page *page, unsigned int len, unsigned int off,
+ bool *same_page)
+ {
+- phys_addr_t vec_end_addr = page_to_phys(bv->bv_page) +
+- bv->bv_offset + bv->bv_len - 1;
++ size_t bv_end = bv->bv_offset + bv->bv_len;
++ phys_addr_t vec_end_addr = page_to_phys(bv->bv_page) + bv_end - 1;
+ phys_addr_t page_addr = page_to_phys(page);
+
+ if (vec_end_addr + 1 != page_addr + off)
+@@ -748,9 +748,9 @@ static inline bool page_is_mergeable(const struct bio_vec *bv,
+ return false;
+
+ *same_page = ((vec_end_addr & PAGE_MASK) == page_addr);
+- if (!*same_page && pfn_to_page(PFN_DOWN(vec_end_addr)) + 1 != page)
+- return false;
+- return true;
++ if (*same_page)
++ return true;
++ return (bv->bv_page + bv_end / PAGE_SIZE) == (page + off / PAGE_SIZE);
+ }
+
+ /*
+diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
+index 0ecc897b225c9..6e8f5e60b0982 100644
+--- a/block/blk-cgroup.c
++++ b/block/blk-cgroup.c
+@@ -1056,13 +1056,15 @@ int blkcg_init_queue(struct request_queue *q)
+ if (preloaded)
+ radix_tree_preload_end();
+
+- ret = blk_iolatency_init(q);
++ ret = blk_throtl_init(q);
+ if (ret)
+ goto err_destroy_all;
+
+- ret = blk_throtl_init(q);
+- if (ret)
++ ret = blk_iolatency_init(q);
++ if (ret) {
++ blk_throtl_exit(q);
+ goto err_destroy_all;
++ }
+ return 0;
+
+ err_destroy_all:
+diff --git a/block/blk-merge.c b/block/blk-merge.c
+index f0b0bae075a0c..75abba4d4591c 100644
+--- a/block/blk-merge.c
++++ b/block/blk-merge.c
+@@ -154,7 +154,7 @@ static inline unsigned get_max_io_size(struct request_queue *q,
+ if (max_sectors > start_offset)
+ return max_sectors - start_offset;
+
+- return sectors & (lbs - 1);
++ return sectors & ~(lbs - 1);
+ }
+
+ static inline unsigned get_max_segment_size(const struct request_queue *q,
+@@ -534,10 +534,17 @@ int __blk_rq_map_sg(struct request_queue *q, struct request *rq,
+ }
+ EXPORT_SYMBOL(__blk_rq_map_sg);
+
++static inline unsigned int blk_rq_get_max_segments(struct request *rq)
++{
++ if (req_op(rq) == REQ_OP_DISCARD)
++ return queue_max_discard_segments(rq->q);
++ return queue_max_segments(rq->q);
++}
++
+ static inline int ll_new_hw_segment(struct request *req, struct bio *bio,
+ unsigned int nr_phys_segs)
+ {
+- if (req->nr_phys_segments + nr_phys_segs > queue_max_segments(req->q))
++ if (req->nr_phys_segments + nr_phys_segs > blk_rq_get_max_segments(req))
+ goto no_merge;
+
+ if (blk_integrity_merge_bio(req->q, req, bio) == false)
+@@ -625,7 +632,7 @@ static int ll_merge_requests_fn(struct request_queue *q, struct request *req,
+ return 0;
+
+ total_phys_segments = req->nr_phys_segments + next->nr_phys_segments;
+- if (total_phys_segments > queue_max_segments(q))
++ if (total_phys_segments > blk_rq_get_max_segments(req))
+ return 0;
+
+ if (blk_integrity_merge_rq(q, req, next) == false)
+diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c
+index fdcc2c1dd1788..fd850d9e68a1a 100644
+--- a/block/blk-mq-sched.c
++++ b/block/blk-mq-sched.c
+@@ -77,6 +77,15 @@ void blk_mq_sched_restart(struct blk_mq_hw_ctx *hctx)
+ return;
+ clear_bit(BLK_MQ_S_SCHED_RESTART, &hctx->state);
+
++	/*
++	 * Order clearing SCHED_RESTART and list_empty_careful(&hctx->dispatch)
++	 * in blk_mq_run_hw_queue(). Its pair is the barrier in
++	 * blk_mq_dispatch_rq_list(). Otherwise the dispatch code might not
++	 * see SCHED_RESTART while a request newly added to hctx->dispatch
++	 * is missed by the check in blk_mq_run_hw_queue().
++	 */
++ smp_mb();
++
+ blk_mq_run_hw_queue(hctx, true);
+ }
+
+diff --git a/block/blk-mq.c b/block/blk-mq.c
+index 4e0d173beaa35..a366726094a89 100644
+--- a/block/blk-mq.c
++++ b/block/blk-mq.c
+@@ -1323,6 +1323,15 @@ bool blk_mq_dispatch_rq_list(struct request_queue *q, struct list_head *list,
+ list_splice_tail_init(list, &hctx->dispatch);
+ spin_unlock(&hctx->lock);
+
++	/*
++	 * Order adding requests to hctx->dispatch and checking
++	 * SCHED_RESTART flag. The pair of this smp_mb() is the one
++	 * in blk_mq_sched_restart(). This prevents the restart code
++	 * path from missing requests newly added to hctx->dispatch
++	 * while SCHED_RESTART is observed here.
++	 */
++ smp_mb();
++
+ /*
+ * If SCHED_RESTART was set by the caller of this function and
+ * it is no longer set that means that it was cleared by another
+@@ -1909,7 +1918,8 @@ insert:
+ if (bypass_insert)
+ return BLK_STS_RESOURCE;
+
+- blk_mq_request_bypass_insert(rq, false, run_queue);
++ blk_mq_sched_insert_request(rq, false, run_queue, false);
++
+ return BLK_STS_OK;
+ }
+
+diff --git a/crypto/af_alg.c b/crypto/af_alg.c
+index 5882ed46f1adb..e31cf43df2e09 100644
+--- a/crypto/af_alg.c
++++ b/crypto/af_alg.c
+@@ -16,6 +16,7 @@
+ #include <linux/module.h>
+ #include <linux/net.h>
+ #include <linux/rwsem.h>
++#include <linux/sched.h>
+ #include <linux/sched/signal.h>
+ #include <linux/security.h>
+
+@@ -847,9 +848,15 @@ int af_alg_sendmsg(struct socket *sock, struct msghdr *msg, size_t size,
+ }
+
+ lock_sock(sk);
+- if (ctx->init && (init || !ctx->more)) {
+- err = -EINVAL;
+- goto unlock;
++ if (ctx->init && !ctx->more) {
++ if (ctx->used) {
++ err = -EINVAL;
++ goto unlock;
++ }
++
++ pr_info_once(
++ "%s sent an empty control message without MSG_MORE.\n",
++ current->comm);
+ }
+ ctx->init = true;
+
+diff --git a/drivers/base/core.c b/drivers/base/core.c
+index 05d414e9e8a40..0799e1445f654 100644
+--- a/drivers/base/core.c
++++ b/drivers/base/core.c
+@@ -3988,9 +3988,9 @@ static inline bool fwnode_is_primary(struct fwnode_handle *fwnode)
+ */
+ void set_primary_fwnode(struct device *dev, struct fwnode_handle *fwnode)
+ {
+- if (fwnode) {
+- struct fwnode_handle *fn = dev->fwnode;
++ struct fwnode_handle *fn = dev->fwnode;
+
++ if (fwnode) {
+ if (fwnode_is_primary(fn))
+ fn = fn->secondary;
+
+@@ -4000,8 +4000,12 @@ void set_primary_fwnode(struct device *dev, struct fwnode_handle *fwnode)
+ }
+ dev->fwnode = fwnode;
+ } else {
+- dev->fwnode = fwnode_is_primary(dev->fwnode) ?
+- dev->fwnode->secondary : NULL;
++ if (fwnode_is_primary(fn)) {
++ dev->fwnode = fn->secondary;
++ fn->secondary = NULL;
++ } else {
++ dev->fwnode = NULL;
++ }
+ }
+ }
+ EXPORT_SYMBOL_GPL(set_primary_fwnode);
+diff --git a/drivers/base/power/main.c b/drivers/base/power/main.c
+index 9dd85bea40260..205a06752ca90 100644
+--- a/drivers/base/power/main.c
++++ b/drivers/base/power/main.c
+@@ -1606,13 +1606,17 @@ static int __device_suspend(struct device *dev, pm_message_t state, bool async)
+ }
+
+ /*
+- * If a device configured to wake up the system from sleep states
+- * has been suspended at run time and there's a resume request pending
+- * for it, this is equivalent to the device signaling wakeup, so the
+- * system suspend operation should be aborted.
++ * Wait for possible runtime PM transitions of the device in progress
++ * to complete and if there's a runtime resume request pending for it,
++ * resume it before proceeding with invoking the system-wide suspend
++ * callbacks for it.
++ *
++ * If the system-wide suspend callbacks below change the configuration
++ * of the device, they must disable runtime PM for it or otherwise
++ * ensure that its runtime-resume callbacks will not be confused by that
++ * change in case they are invoked going forward.
+ */
+- if (pm_runtime_barrier(dev) && device_may_wakeup(dev))
+- pm_wakeup_event(dev, 0);
++ pm_runtime_barrier(dev);
+
+ if (pm_wakeup_pending()) {
+ dev->power.direct_complete = false;
+diff --git a/drivers/block/loop.c b/drivers/block/loop.c
+index 776083963ee6c..84433922aed16 100644
+--- a/drivers/block/loop.c
++++ b/drivers/block/loop.c
+@@ -877,6 +877,7 @@ static void loop_config_discard(struct loop_device *lo)
+ struct file *file = lo->lo_backing_file;
+ struct inode *inode = file->f_mapping->host;
+ struct request_queue *q = lo->lo_queue;
++ u32 granularity, max_discard_sectors;
+
+ /*
+ * If the backing device is a block device, mirror its zeroing
+@@ -889,11 +890,10 @@ static void loop_config_discard(struct loop_device *lo)
+ struct request_queue *backingq;
+
+ backingq = bdev_get_queue(inode->i_bdev);
+- blk_queue_max_discard_sectors(q,
+- backingq->limits.max_write_zeroes_sectors);
+
+- blk_queue_max_write_zeroes_sectors(q,
+- backingq->limits.max_write_zeroes_sectors);
++ max_discard_sectors = backingq->limits.max_write_zeroes_sectors;
++ granularity = backingq->limits.discard_granularity ?:
++ queue_physical_block_size(backingq);
+
+ /*
+ * We use punch hole to reclaim the free space used by the
+@@ -902,23 +902,26 @@ static void loop_config_discard(struct loop_device *lo)
+ * useful information.
+ */
+ } else if (!file->f_op->fallocate || lo->lo_encrypt_key_size) {
+- q->limits.discard_granularity = 0;
+- q->limits.discard_alignment = 0;
+- blk_queue_max_discard_sectors(q, 0);
+- blk_queue_max_write_zeroes_sectors(q, 0);
++ max_discard_sectors = 0;
++ granularity = 0;
+
+ } else {
+- q->limits.discard_granularity = inode->i_sb->s_blocksize;
+- q->limits.discard_alignment = 0;
+-
+- blk_queue_max_discard_sectors(q, UINT_MAX >> 9);
+- blk_queue_max_write_zeroes_sectors(q, UINT_MAX >> 9);
++ max_discard_sectors = UINT_MAX >> 9;
++ granularity = inode->i_sb->s_blocksize;
+ }
+
+- if (q->limits.max_write_zeroes_sectors)
++ if (max_discard_sectors) {
++ q->limits.discard_granularity = granularity;
++ blk_queue_max_discard_sectors(q, max_discard_sectors);
++ blk_queue_max_write_zeroes_sectors(q, max_discard_sectors);
+ blk_queue_flag_set(QUEUE_FLAG_DISCARD, q);
+- else
++ } else {
++ q->limits.discard_granularity = 0;
++ blk_queue_max_discard_sectors(q, 0);
++ blk_queue_max_write_zeroes_sectors(q, 0);
+ blk_queue_flag_clear(QUEUE_FLAG_DISCARD, q);
++ }
++ q->limits.discard_alignment = 0;
+ }
+
+ static void loop_unprepare_queue(struct loop_device *lo)
+diff --git a/drivers/block/null_blk_main.c b/drivers/block/null_blk_main.c
+index 87b31f9ca362e..8cf13ea11cd2c 100644
+--- a/drivers/block/null_blk_main.c
++++ b/drivers/block/null_blk_main.c
+@@ -1139,7 +1139,7 @@ static int null_handle_rq(struct nullb_cmd *cmd)
+ len = bvec.bv_len;
+ err = null_transfer(nullb, bvec.bv_page, len, bvec.bv_offset,
+ op_is_write(req_op(rq)), sector,
+- req_op(rq) & REQ_FUA);
++ rq->cmd_flags & REQ_FUA);
+ if (err) {
+ spin_unlock_irq(&nullb->lock);
+ return err;
+diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
+index 980df853ee497..99991b6a6f0ed 100644
+--- a/drivers/block/virtio_blk.c
++++ b/drivers/block/virtio_blk.c
+@@ -126,16 +126,31 @@ static int virtblk_setup_discard_write_zeroes(struct request *req, bool unmap)
+ if (!range)
+ return -ENOMEM;
+
+- __rq_for_each_bio(bio, req) {
+- u64 sector = bio->bi_iter.bi_sector;
+- u32 num_sectors = bio->bi_iter.bi_size >> SECTOR_SHIFT;
+-
+- range[n].flags = cpu_to_le32(flags);
+- range[n].num_sectors = cpu_to_le32(num_sectors);
+- range[n].sector = cpu_to_le64(sector);
+- n++;
++ /*
++ * Single max discard segment means multi-range discard isn't
++ * supported, and block layer only runs contiguity merge like
++	 * normal RW request. So we can't rely on bio for retrieving
++ * each range info.
++ */
++ if (queue_max_discard_segments(req->q) == 1) {
++ range[0].flags = cpu_to_le32(flags);
++ range[0].num_sectors = cpu_to_le32(blk_rq_sectors(req));
++ range[0].sector = cpu_to_le64(blk_rq_pos(req));
++ n = 1;
++ } else {
++ __rq_for_each_bio(bio, req) {
++ u64 sector = bio->bi_iter.bi_sector;
++ u32 num_sectors = bio->bi_iter.bi_size >> SECTOR_SHIFT;
++
++ range[n].flags = cpu_to_le32(flags);
++ range[n].num_sectors = cpu_to_le32(num_sectors);
++ range[n].sector = cpu_to_le64(sector);
++ n++;
++ }
+ }
+
++ WARN_ON_ONCE(n != segments);
++
+ req->special_vec.bv_page = virt_to_page(range);
+ req->special_vec.bv_offset = offset_in_page(range);
+ req->special_vec.bv_len = sizeof(*range) * segments;
+diff --git a/drivers/cpufreq/intel_pstate.c b/drivers/cpufreq/intel_pstate.c
+index c7540ad28995b..8c730a47e0537 100644
+--- a/drivers/cpufreq/intel_pstate.c
++++ b/drivers/cpufreq/intel_pstate.c
+@@ -649,11 +649,12 @@ static int intel_pstate_set_energy_pref_index(struct cpudata *cpu_data,
+ mutex_lock(&intel_pstate_limits_lock);
+
+ if (boot_cpu_has(X86_FEATURE_HWP_EPP)) {
+- u64 value;
+-
+- ret = rdmsrl_on_cpu(cpu_data->cpu, MSR_HWP_REQUEST, &value);
+- if (ret)
+- goto return_pref;
++ /*
++ * Use the cached HWP Request MSR value, because the register
++ * itself may be updated by intel_pstate_hwp_boost_up() or
++ * intel_pstate_hwp_boost_down() at any time.
++ */
++ u64 value = READ_ONCE(cpu_data->hwp_req_cached);
+
+ value &= ~GENMASK_ULL(31, 24);
+
+@@ -661,13 +662,18 @@ static int intel_pstate_set_energy_pref_index(struct cpudata *cpu_data,
+ epp = epp_values[pref_index - 1];
+
+ value |= (u64)epp << 24;
++ /*
++ * The only other updater of hwp_req_cached in the active mode,
++ * intel_pstate_hwp_set(), is called under the same lock as this
++ * function, so it cannot run in parallel with the update below.
++ */
++ WRITE_ONCE(cpu_data->hwp_req_cached, value);
+ ret = wrmsrl_on_cpu(cpu_data->cpu, MSR_HWP_REQUEST, value);
+ } else {
+ if (epp == -EINVAL)
+ epp = (pref_index - 1) << 2;
+ ret = intel_pstate_set_epb(cpu_data->cpu, epp);
+ }
+-return_pref:
+ mutex_unlock(&intel_pstate_limits_lock);
+
+ return ret;
+diff --git a/drivers/devfreq/devfreq.c b/drivers/devfreq/devfreq.c
+index 46c84dce6544a..5f8d94e812c8f 100644
+--- a/drivers/devfreq/devfreq.c
++++ b/drivers/devfreq/devfreq.c
+@@ -1690,9 +1690,9 @@ static int devfreq_summary_show(struct seq_file *s, void *data)
+ #endif
+
+ mutex_lock(&devfreq->lock);
+- cur_freq = devfreq->previous_freq,
++ cur_freq = devfreq->previous_freq;
+ get_freq_range(devfreq, &min_freq, &max_freq);
+- polling_ms = devfreq->profile->polling_ms,
++ polling_ms = devfreq->profile->polling_ms;
+ mutex_unlock(&devfreq->lock);
+
+ seq_printf(s,
+diff --git a/drivers/dma/Kconfig b/drivers/dma/Kconfig
+index de41d7928bff2..984354ca877de 100644
+--- a/drivers/dma/Kconfig
++++ b/drivers/dma/Kconfig
+@@ -285,6 +285,7 @@ config INTEL_IDMA64
+ config INTEL_IDXD
+ tristate "Intel Data Accelerators support"
+ depends on PCI && X86_64
++ depends on PCI_MSI
+ select DMA_ENGINE
+ select SBITMAP
+ help
+diff --git a/drivers/edac/edac_mc.c b/drivers/edac/edac_mc.c
+index 5813e931f2f00..01ff71f7b6456 100644
+--- a/drivers/edac/edac_mc.c
++++ b/drivers/edac/edac_mc.c
+@@ -950,6 +950,8 @@ static void edac_ue_error(struct edac_raw_error_desc *e)
+ e->other_detail);
+ }
+
++ edac_inc_ue_error(e);
++
+ if (edac_mc_get_panic_on_ue()) {
+ panic("UE %s%son %s (%s page:0x%lx offset:0x%lx grain:%ld%s%s)\n",
+ e->msg,
+@@ -959,8 +961,6 @@ static void edac_ue_error(struct edac_raw_error_desc *e)
+ *e->other_detail ? " - " : "",
+ e->other_detail);
+ }
+-
+- edac_inc_ue_error(e);
+ }
+
+ static void edac_inc_csrow(struct edac_raw_error_desc *e, int row, int chan)
+diff --git a/drivers/edac/ie31200_edac.c b/drivers/edac/ie31200_edac.c
+index d68346a8e141a..ebe50996cc423 100644
+--- a/drivers/edac/ie31200_edac.c
++++ b/drivers/edac/ie31200_edac.c
+@@ -170,6 +170,8 @@
+ (n << (28 + (2 * skl) - PAGE_SHIFT))
+
+ static int nr_channels;
++static struct pci_dev *mci_pdev;
++static int ie31200_registered = 1;
+
+ struct ie31200_priv {
+ void __iomem *window;
+@@ -538,12 +540,16 @@ fail_free:
+ static int ie31200_init_one(struct pci_dev *pdev,
+ const struct pci_device_id *ent)
+ {
+- edac_dbg(0, "MC:\n");
++ int rc;
+
++ edac_dbg(0, "MC:\n");
+ if (pci_enable_device(pdev) < 0)
+ return -EIO;
++ rc = ie31200_probe1(pdev, ent->driver_data);
++ if (rc == 0 && !mci_pdev)
++ mci_pdev = pci_dev_get(pdev);
+
+- return ie31200_probe1(pdev, ent->driver_data);
++ return rc;
+ }
+
+ static void ie31200_remove_one(struct pci_dev *pdev)
+@@ -552,6 +558,8 @@ static void ie31200_remove_one(struct pci_dev *pdev)
+ struct ie31200_priv *priv;
+
+ edac_dbg(0, "\n");
++ pci_dev_put(mci_pdev);
++ mci_pdev = NULL;
+ mci = edac_mc_del_mc(&pdev->dev);
+ if (!mci)
+ return;
+@@ -593,17 +601,53 @@ static struct pci_driver ie31200_driver = {
+
+ static int __init ie31200_init(void)
+ {
++ int pci_rc, i;
++
+ edac_dbg(3, "MC:\n");
+ /* Ensure that the OPSTATE is set correctly for POLL or NMI */
+ opstate_init();
+
+- return pci_register_driver(&ie31200_driver);
++ pci_rc = pci_register_driver(&ie31200_driver);
++ if (pci_rc < 0)
++ goto fail0;
++
++ if (!mci_pdev) {
++ ie31200_registered = 0;
++ for (i = 0; ie31200_pci_tbl[i].vendor != 0; i++) {
++ mci_pdev = pci_get_device(ie31200_pci_tbl[i].vendor,
++ ie31200_pci_tbl[i].device,
++ NULL);
++ if (mci_pdev)
++ break;
++ }
++ if (!mci_pdev) {
++ edac_dbg(0, "ie31200 pci_get_device fail\n");
++ pci_rc = -ENODEV;
++ goto fail1;
++ }
++ pci_rc = ie31200_init_one(mci_pdev, &ie31200_pci_tbl[i]);
++ if (pci_rc < 0) {
++ edac_dbg(0, "ie31200 init fail\n");
++ pci_rc = -ENODEV;
++ goto fail1;
++ }
++ }
++ return 0;
++
++fail1:
++ pci_unregister_driver(&ie31200_driver);
++fail0:
++ pci_dev_put(mci_pdev);
++
++ return pci_rc;
+ }
+
+ static void __exit ie31200_exit(void)
+ {
+ edac_dbg(3, "MC:\n");
+ pci_unregister_driver(&ie31200_driver);
++ if (!ie31200_registered)
++ ie31200_remove_one(mci_pdev);
+ }
+
+ module_init(ie31200_init);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v9.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v9.c
+index c7fd0c47b2545..1102de76d8767 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v9.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v9.c
+@@ -195,19 +195,32 @@ static uint32_t get_sdma_rlc_reg_offset(struct amdgpu_device *adev,
+ unsigned int engine_id,
+ unsigned int queue_id)
+ {
+- uint32_t sdma_engine_reg_base[2] = {
+- SOC15_REG_OFFSET(SDMA0, 0,
+- mmSDMA0_RLC0_RB_CNTL) - mmSDMA0_RLC0_RB_CNTL,
+- SOC15_REG_OFFSET(SDMA1, 0,
+- mmSDMA1_RLC0_RB_CNTL) - mmSDMA1_RLC0_RB_CNTL
+- };
+- uint32_t retval = sdma_engine_reg_base[engine_id]
++ uint32_t sdma_engine_reg_base = 0;
++ uint32_t sdma_rlc_reg_offset;
++
++ switch (engine_id) {
++ default:
++ dev_warn(adev->dev,
++ "Invalid sdma engine id (%d), using engine id 0\n",
++ engine_id);
++ fallthrough;
++ case 0:
++ sdma_engine_reg_base = SOC15_REG_OFFSET(SDMA0, 0,
++ mmSDMA0_RLC0_RB_CNTL) - mmSDMA0_RLC0_RB_CNTL;
++ break;
++ case 1:
++ sdma_engine_reg_base = SOC15_REG_OFFSET(SDMA1, 0,
++ mmSDMA1_RLC0_RB_CNTL) - mmSDMA0_RLC0_RB_CNTL;
++ break;
++ }
++
++ sdma_rlc_reg_offset = sdma_engine_reg_base
+ + queue_id * (mmSDMA0_RLC1_RB_CNTL - mmSDMA0_RLC0_RB_CNTL);
+
+ pr_debug("RLC register offset for SDMA%d RLC%d: 0x%x\n", engine_id,
+- queue_id, retval);
++ queue_id, sdma_rlc_reg_offset);
+
+- return retval;
++ return sdma_rlc_reg_offset;
+ }
+
+ static inline struct v9_mqd *get_mqd(void *mqd)
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_connectors.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_connectors.c
+index f355d9a752d29..a1aec205435de 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_connectors.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_connectors.c
+@@ -716,8 +716,10 @@ amdgpu_connector_lvds_detect(struct drm_connector *connector, bool force)
+
+ if (!drm_kms_helper_is_poll_worker()) {
+ r = pm_runtime_get_sync(connector->dev->dev);
+- if (r < 0)
++ if (r < 0) {
++ pm_runtime_put_autosuspend(connector->dev->dev);
+ return connector_status_disconnected;
++ }
+ }
+
+ if (encoder) {
+@@ -854,8 +856,10 @@ amdgpu_connector_vga_detect(struct drm_connector *connector, bool force)
+
+ if (!drm_kms_helper_is_poll_worker()) {
+ r = pm_runtime_get_sync(connector->dev->dev);
+- if (r < 0)
++ if (r < 0) {
++ pm_runtime_put_autosuspend(connector->dev->dev);
+ return connector_status_disconnected;
++ }
+ }
+
+ encoder = amdgpu_connector_best_single_encoder(connector);
+@@ -977,8 +981,10 @@ amdgpu_connector_dvi_detect(struct drm_connector *connector, bool force)
+
+ if (!drm_kms_helper_is_poll_worker()) {
+ r = pm_runtime_get_sync(connector->dev->dev);
+- if (r < 0)
++ if (r < 0) {
++ pm_runtime_put_autosuspend(connector->dev->dev);
+ return connector_status_disconnected;
++ }
+ }
+
+ if (!force && amdgpu_connector_check_hpd_status_unchanged(connector)) {
+@@ -1328,8 +1334,10 @@ amdgpu_connector_dp_detect(struct drm_connector *connector, bool force)
+
+ if (!drm_kms_helper_is_poll_worker()) {
+ r = pm_runtime_get_sync(connector->dev->dev);
+- if (r < 0)
++ if (r < 0) {
++ pm_runtime_put_autosuspend(connector->dev->dev);
+ return connector_status_disconnected;
++ }
+ }
+
+ if (!force && amdgpu_connector_check_hpd_status_unchanged(connector)) {
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
+index f7143d927b6d8..5e51f0acf744f 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
+@@ -282,7 +282,7 @@ int amdgpu_display_crtc_set_config(struct drm_mode_set *set,
+
+ ret = pm_runtime_get_sync(dev->dev);
+ if (ret < 0)
+- return ret;
++ goto out;
+
+ ret = drm_crtc_helper_set_config(set, ctx);
+
+@@ -297,7 +297,7 @@ int amdgpu_display_crtc_set_config(struct drm_mode_set *set,
+ take the current one */
+ if (active && !adev->have_disp_power_ref) {
+ adev->have_disp_power_ref = true;
+- return ret;
++ goto out;
+ }
+ /* if we have no active crtcs, then drop the power ref
+ we got before */
+@@ -306,6 +306,7 @@ int amdgpu_display_crtc_set_config(struct drm_mode_set *set,
+ adev->have_disp_power_ref = false;
+ }
+
++out:
+ /* drop the power reference we got coming in here */
+ pm_runtime_put_autosuspend(dev->dev);
+ return ret;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+index 126e74758a342..d73924e35a57e 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+@@ -1373,11 +1373,12 @@ long amdgpu_drm_ioctl(struct file *filp,
+ dev = file_priv->minor->dev;
+ ret = pm_runtime_get_sync(dev->dev);
+ if (ret < 0)
+- return ret;
++ goto out;
+
+ ret = drm_ioctl(filp, cmd, arg);
+
+ pm_runtime_mark_last_busy(dev->dev);
++out:
+ pm_runtime_put_autosuspend(dev->dev);
+ return ret;
+ }
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
+index 3414e119f0cbf..f5a6ee7c2eaa3 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
+@@ -754,8 +754,10 @@ static int amdgpu_debugfs_gpu_recover(struct seq_file *m, void *data)
+ int r;
+
+ r = pm_runtime_get_sync(dev->dev);
+- if (r < 0)
++ if (r < 0) {
++ pm_runtime_put_autosuspend(dev->dev);
+ return 0;
++ }
+
+ seq_printf(m, "gpu recover\n");
+ amdgpu_device_gpu_recover(adev, NULL);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
+index 21292098bc023..0a3b7d9df8a56 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
+@@ -663,8 +663,12 @@ static int amdgpu_info_ioctl(struct drm_device *dev, void *data, struct drm_file
+ * in the bitfields */
+ if (se_num == AMDGPU_INFO_MMR_SE_INDEX_MASK)
+ se_num = 0xffffffff;
++ else if (se_num >= AMDGPU_GFX_MAX_SE)
++ return -EINVAL;
+ if (sh_num == AMDGPU_INFO_MMR_SH_INDEX_MASK)
+ sh_num = 0xffffffff;
++ else if (sh_num >= AMDGPU_GFX_MAX_SH_PER_SE)
++ return -EINVAL;
+
+ if (info->read_mmr_reg.count > 128)
+ return -EINVAL;
+@@ -992,7 +996,7 @@ int amdgpu_driver_open_kms(struct drm_device *dev, struct drm_file *file_priv)
+
+ r = pm_runtime_get_sync(dev->dev);
+ if (r < 0)
+- return r;
++ goto pm_put;
+
+ fpriv = kzalloc(sizeof(*fpriv), GFP_KERNEL);
+ if (unlikely(!fpriv)) {
+@@ -1043,6 +1047,7 @@ error_pasid:
+
+ out_suspend:
+ pm_runtime_mark_last_busy(dev->dev);
++pm_put:
+ pm_runtime_put_autosuspend(dev->dev);
+
+ return r;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c
+index 02e6f8c4dde08..459b81fc5aef4 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c
+@@ -167,8 +167,10 @@ static ssize_t amdgpu_get_power_dpm_state(struct device *dev,
+ return -EPERM;
+
+ ret = pm_runtime_get_sync(ddev->dev);
+- if (ret < 0)
++ if (ret < 0) {
++ pm_runtime_put_autosuspend(ddev->dev);
+ return ret;
++ }
+
+ if (is_support_sw_smu(adev)) {
+ if (adev->smu.ppt_funcs->get_current_power_state)
+@@ -212,8 +214,10 @@ static ssize_t amdgpu_set_power_dpm_state(struct device *dev,
+ return -EINVAL;
+
+ ret = pm_runtime_get_sync(ddev->dev);
+- if (ret < 0)
++ if (ret < 0) {
++ pm_runtime_put_autosuspend(ddev->dev);
+ return ret;
++ }
+
+ if (is_support_sw_smu(adev)) {
+ mutex_lock(&adev->pm.mutex);
+@@ -307,8 +311,10 @@ static ssize_t amdgpu_get_power_dpm_force_performance_level(struct device *dev,
+ return -EPERM;
+
+ ret = pm_runtime_get_sync(ddev->dev);
+- if (ret < 0)
++ if (ret < 0) {
++ pm_runtime_put_autosuspend(ddev->dev);
+ return ret;
++ }
+
+ if (is_support_sw_smu(adev))
+ level = smu_get_performance_level(&adev->smu);
+@@ -369,8 +375,10 @@ static ssize_t amdgpu_set_power_dpm_force_performance_level(struct device *dev,
+ }
+
+ ret = pm_runtime_get_sync(ddev->dev);
+- if (ret < 0)
++ if (ret < 0) {
++ pm_runtime_put_autosuspend(ddev->dev);
+ return ret;
++ }
+
+ if (is_support_sw_smu(adev))
+ current_level = smu_get_performance_level(&adev->smu);
+@@ -449,8 +457,10 @@ static ssize_t amdgpu_get_pp_num_states(struct device *dev,
+ return -EPERM;
+
+ ret = pm_runtime_get_sync(ddev->dev);
+- if (ret < 0)
++ if (ret < 0) {
++ pm_runtime_put_autosuspend(ddev->dev);
+ return ret;
++ }
+
+ if (is_support_sw_smu(adev)) {
+ ret = smu_get_power_num_states(&adev->smu, &data);
+@@ -491,8 +501,10 @@ static ssize_t amdgpu_get_pp_cur_state(struct device *dev,
+ return -EPERM;
+
+ ret = pm_runtime_get_sync(ddev->dev);
+- if (ret < 0)
++ if (ret < 0) {
++ pm_runtime_put_autosuspend(ddev->dev);
+ return ret;
++ }
+
+ if (is_support_sw_smu(adev)) {
+ pm = smu_get_current_power_state(smu);
+@@ -567,8 +579,10 @@ static ssize_t amdgpu_set_pp_force_state(struct device *dev,
+ state = data.states[idx];
+
+ ret = pm_runtime_get_sync(ddev->dev);
+- if (ret < 0)
++ if (ret < 0) {
++ pm_runtime_put_autosuspend(ddev->dev);
+ return ret;
++ }
+
+ /* only set user selected power states */
+ if (state != POWER_STATE_TYPE_INTERNAL_BOOT &&
+@@ -608,8 +622,10 @@ static ssize_t amdgpu_get_pp_table(struct device *dev,
+ return -EPERM;
+
+ ret = pm_runtime_get_sync(ddev->dev);
+- if (ret < 0)
++ if (ret < 0) {
++ pm_runtime_put_autosuspend(ddev->dev);
+ return ret;
++ }
+
+ if (is_support_sw_smu(adev)) {
+ size = smu_sys_get_pp_table(&adev->smu, (void **)&table);
+@@ -650,8 +666,10 @@ static ssize_t amdgpu_set_pp_table(struct device *dev,
+ return -EPERM;
+
+ ret = pm_runtime_get_sync(ddev->dev);
+- if (ret < 0)
++ if (ret < 0) {
++ pm_runtime_put_autosuspend(ddev->dev);
+ return ret;
++ }
+
+ if (is_support_sw_smu(adev)) {
+ ret = smu_sys_set_pp_table(&adev->smu, (void *)buf, count);
+@@ -790,8 +808,10 @@ static ssize_t amdgpu_set_pp_od_clk_voltage(struct device *dev,
+ }
+
+ ret = pm_runtime_get_sync(ddev->dev);
+- if (ret < 0)
++ if (ret < 0) {
++ pm_runtime_put_autosuspend(ddev->dev);
+ return ret;
++ }
+
+ if (is_support_sw_smu(adev)) {
+ ret = smu_od_edit_dpm_table(&adev->smu, type,
+@@ -847,8 +867,10 @@ static ssize_t amdgpu_get_pp_od_clk_voltage(struct device *dev,
+ return -EPERM;
+
+ ret = pm_runtime_get_sync(ddev->dev);
+- if (ret < 0)
++ if (ret < 0) {
++ pm_runtime_put_autosuspend(ddev->dev);
+ return ret;
++ }
+
+ if (is_support_sw_smu(adev)) {
+ size = smu_print_clk_levels(&adev->smu, SMU_OD_SCLK, buf);
+@@ -905,8 +927,10 @@ static ssize_t amdgpu_set_pp_features(struct device *dev,
+ pr_debug("featuremask = 0x%llx\n", featuremask);
+
+ ret = pm_runtime_get_sync(ddev->dev);
+- if (ret < 0)
++ if (ret < 0) {
++ pm_runtime_put_autosuspend(ddev->dev);
+ return ret;
++ }
+
+ if (is_support_sw_smu(adev)) {
+ ret = smu_sys_set_pp_feature_mask(&adev->smu, featuremask);
+@@ -942,8 +966,10 @@ static ssize_t amdgpu_get_pp_features(struct device *dev,
+ return -EPERM;
+
+ ret = pm_runtime_get_sync(ddev->dev);
+- if (ret < 0)
++ if (ret < 0) {
++ pm_runtime_put_autosuspend(ddev->dev);
+ return ret;
++ }
+
+ if (is_support_sw_smu(adev))
+ size = smu_sys_get_pp_feature_mask(&adev->smu, buf);
+@@ -1001,8 +1027,10 @@ static ssize_t amdgpu_get_pp_dpm_sclk(struct device *dev,
+ return -EPERM;
+
+ ret = pm_runtime_get_sync(ddev->dev);
+- if (ret < 0)
++ if (ret < 0) {
++ pm_runtime_put_autosuspend(ddev->dev);
+ return ret;
++ }
+
+ if (is_support_sw_smu(adev))
+ size = smu_print_clk_levels(&adev->smu, SMU_SCLK, buf);
+@@ -1071,8 +1099,10 @@ static ssize_t amdgpu_set_pp_dpm_sclk(struct device *dev,
+ return ret;
+
+ ret = pm_runtime_get_sync(ddev->dev);
+- if (ret < 0)
++ if (ret < 0) {
++ pm_runtime_put_autosuspend(ddev->dev);
+ return ret;
++ }
+
+ if (is_support_sw_smu(adev))
+ ret = smu_force_clk_levels(&adev->smu, SMU_SCLK, mask, true);
+@@ -1101,8 +1131,10 @@ static ssize_t amdgpu_get_pp_dpm_mclk(struct device *dev,
+ return -EPERM;
+
+ ret = pm_runtime_get_sync(ddev->dev);
+- if (ret < 0)
++ if (ret < 0) {
++ pm_runtime_put_autosuspend(ddev->dev);
+ return ret;
++ }
+
+ if (is_support_sw_smu(adev))
+ size = smu_print_clk_levels(&adev->smu, SMU_MCLK, buf);
+@@ -1135,8 +1167,10 @@ static ssize_t amdgpu_set_pp_dpm_mclk(struct device *dev,
+ return ret;
+
+ ret = pm_runtime_get_sync(ddev->dev);
+- if (ret < 0)
++ if (ret < 0) {
++ pm_runtime_put_autosuspend(ddev->dev);
+ return ret;
++ }
+
+ if (is_support_sw_smu(adev))
+ ret = smu_force_clk_levels(&adev->smu, SMU_MCLK, mask, true);
+@@ -1165,8 +1199,10 @@ static ssize_t amdgpu_get_pp_dpm_socclk(struct device *dev,
+ return -EPERM;
+
+ ret = pm_runtime_get_sync(ddev->dev);
+- if (ret < 0)
++ if (ret < 0) {
++ pm_runtime_put_autosuspend(ddev->dev);
+ return ret;
++ }
+
+ if (is_support_sw_smu(adev))
+ size = smu_print_clk_levels(&adev->smu, SMU_SOCCLK, buf);
+@@ -1199,8 +1235,10 @@ static ssize_t amdgpu_set_pp_dpm_socclk(struct device *dev,
+ return ret;
+
+ ret = pm_runtime_get_sync(ddev->dev);
+- if (ret < 0)
++ if (ret < 0) {
++ pm_runtime_put_autosuspend(ddev->dev);
+ return ret;
++ }
+
+ if (is_support_sw_smu(adev))
+ ret = smu_force_clk_levels(&adev->smu, SMU_SOCCLK, mask, true);
+@@ -1231,8 +1269,10 @@ static ssize_t amdgpu_get_pp_dpm_fclk(struct device *dev,
+ return -EPERM;
+
+ ret = pm_runtime_get_sync(ddev->dev);
+- if (ret < 0)
++ if (ret < 0) {
++ pm_runtime_put_autosuspend(ddev->dev);
+ return ret;
++ }
+
+ if (is_support_sw_smu(adev))
+ size = smu_print_clk_levels(&adev->smu, SMU_FCLK, buf);
+@@ -1265,8 +1305,10 @@ static ssize_t amdgpu_set_pp_dpm_fclk(struct device *dev,
+ return ret;
+
+ ret = pm_runtime_get_sync(ddev->dev);
+- if (ret < 0)
++ if (ret < 0) {
++ pm_runtime_put_autosuspend(ddev->dev);
+ return ret;
++ }
+
+ if (is_support_sw_smu(adev))
+ ret = smu_force_clk_levels(&adev->smu, SMU_FCLK, mask, true);
+@@ -1297,8 +1339,10 @@ static ssize_t amdgpu_get_pp_dpm_dcefclk(struct device *dev,
+ return -EPERM;
+
+ ret = pm_runtime_get_sync(ddev->dev);
+- if (ret < 0)
++ if (ret < 0) {
++ pm_runtime_put_autosuspend(ddev->dev);
+ return ret;
++ }
+
+ if (is_support_sw_smu(adev))
+ size = smu_print_clk_levels(&adev->smu, SMU_DCEFCLK, buf);
+@@ -1331,8 +1375,10 @@ static ssize_t amdgpu_set_pp_dpm_dcefclk(struct device *dev,
+ return ret;
+
+ ret = pm_runtime_get_sync(ddev->dev);
+- if (ret < 0)
++ if (ret < 0) {
++ pm_runtime_put_autosuspend(ddev->dev);
+ return ret;
++ }
+
+ if (is_support_sw_smu(adev))
+ ret = smu_force_clk_levels(&adev->smu, SMU_DCEFCLK, mask, true);
+@@ -1363,8 +1409,10 @@ static ssize_t amdgpu_get_pp_dpm_pcie(struct device *dev,
+ return -EPERM;
+
+ ret = pm_runtime_get_sync(ddev->dev);
+- if (ret < 0)
++ if (ret < 0) {
++ pm_runtime_put_autosuspend(ddev->dev);
+ return ret;
++ }
+
+ if (is_support_sw_smu(adev))
+ size = smu_print_clk_levels(&adev->smu, SMU_PCIE, buf);
+@@ -1397,8 +1445,10 @@ static ssize_t amdgpu_set_pp_dpm_pcie(struct device *dev,
+ return ret;
+
+ ret = pm_runtime_get_sync(ddev->dev);
+- if (ret < 0)
++ if (ret < 0) {
++ pm_runtime_put_autosuspend(ddev->dev);
+ return ret;
++ }
+
+ if (is_support_sw_smu(adev))
+ ret = smu_force_clk_levels(&adev->smu, SMU_PCIE, mask, true);
+@@ -1429,8 +1479,10 @@ static ssize_t amdgpu_get_pp_sclk_od(struct device *dev,
+ return -EPERM;
+
+ ret = pm_runtime_get_sync(ddev->dev);
+- if (ret < 0)
++ if (ret < 0) {
++ pm_runtime_put_autosuspend(ddev->dev);
+ return ret;
++ }
+
+ if (is_support_sw_smu(adev))
+ value = smu_get_od_percentage(&(adev->smu), SMU_OD_SCLK);
+@@ -1462,8 +1514,10 @@ static ssize_t amdgpu_set_pp_sclk_od(struct device *dev,
+ return -EINVAL;
+
+ ret = pm_runtime_get_sync(ddev->dev);
+- if (ret < 0)
++ if (ret < 0) {
++ pm_runtime_put_autosuspend(ddev->dev);
+ return ret;
++ }
+
+ if (is_support_sw_smu(adev)) {
+ value = smu_set_od_percentage(&(adev->smu), SMU_OD_SCLK, (uint32_t)value);
+@@ -1498,8 +1552,10 @@ static ssize_t amdgpu_get_pp_mclk_od(struct device *dev,
+ return -EPERM;
+
+ ret = pm_runtime_get_sync(ddev->dev);
+- if (ret < 0)
++ if (ret < 0) {
++ pm_runtime_put_autosuspend(ddev->dev);
+ return ret;
++ }
+
+ if (is_support_sw_smu(adev))
+ value = smu_get_od_percentage(&(adev->smu), SMU_OD_MCLK);
+@@ -1531,8 +1587,10 @@ static ssize_t amdgpu_set_pp_mclk_od(struct device *dev,
+ return -EINVAL;
+
+ ret = pm_runtime_get_sync(ddev->dev);
+- if (ret < 0)
++ if (ret < 0) {
++ pm_runtime_put_autosuspend(ddev->dev);
+ return ret;
++ }
+
+ if (is_support_sw_smu(adev)) {
+ value = smu_set_od_percentage(&(adev->smu), SMU_OD_MCLK, (uint32_t)value);
+@@ -1587,8 +1645,10 @@ static ssize_t amdgpu_get_pp_power_profile_mode(struct device *dev,
+ return -EPERM;
+
+ ret = pm_runtime_get_sync(ddev->dev);
+- if (ret < 0)
++ if (ret < 0) {
++ pm_runtime_put_autosuspend(ddev->dev);
+ return ret;
++ }
+
+ if (is_support_sw_smu(adev))
+ size = smu_get_power_profile_mode(&adev->smu, buf);
+@@ -1650,8 +1710,10 @@ static ssize_t amdgpu_set_pp_power_profile_mode(struct device *dev,
+ parameter[parameter_size] = profile_mode;
+
+ ret = pm_runtime_get_sync(ddev->dev);
+- if (ret < 0)
++ if (ret < 0) {
++ pm_runtime_put_autosuspend(ddev->dev);
+ return ret;
++ }
+
+ if (is_support_sw_smu(adev))
+ ret = smu_set_power_profile_mode(&adev->smu, parameter, parameter_size, true);
+@@ -1687,8 +1749,10 @@ static ssize_t amdgpu_get_gpu_busy_percent(struct device *dev,
+ return -EPERM;
+
+ r = pm_runtime_get_sync(ddev->dev);
+- if (r < 0)
++ if (r < 0) {
++ pm_runtime_put_autosuspend(ddev->dev);
+ return r;
++ }
+
+ /* read the IP busy sensor */
+ r = amdgpu_dpm_read_sensor(adev, AMDGPU_PP_SENSOR_GPU_LOAD,
+@@ -1723,8 +1787,10 @@ static ssize_t amdgpu_get_mem_busy_percent(struct device *dev,
+ return -EPERM;
+
+ r = pm_runtime_get_sync(ddev->dev);
+- if (r < 0)
++ if (r < 0) {
++ pm_runtime_put_autosuspend(ddev->dev);
+ return r;
++ }
+
+ /* read the IP busy sensor */
+ r = amdgpu_dpm_read_sensor(adev, AMDGPU_PP_SENSOR_MEM_LOAD,
+@@ -1770,8 +1836,10 @@ static ssize_t amdgpu_get_pcie_bw(struct device *dev,
+ return -ENODATA;
+
+ ret = pm_runtime_get_sync(ddev->dev);
+- if (ret < 0)
++ if (ret < 0) {
++ pm_runtime_put_autosuspend(ddev->dev);
+ return ret;
++ }
+
+ amdgpu_asic_get_pcie_usage(adev, &count0, &count1);
+
+@@ -2003,8 +2071,10 @@ static ssize_t amdgpu_hwmon_show_temp(struct device *dev,
+ return -EINVAL;
+
+ r = pm_runtime_get_sync(adev->ddev->dev);
+- if (r < 0)
++ if (r < 0) {
++ pm_runtime_put_autosuspend(adev->ddev->dev);
+ return r;
++ }
+
+ switch (channel) {
+ case PP_TEMP_JUNCTION:
+@@ -2134,8 +2204,10 @@ static ssize_t amdgpu_hwmon_get_pwm1_enable(struct device *dev,
+ return -EPERM;
+
+ ret = pm_runtime_get_sync(adev->ddev->dev);
+- if (ret < 0)
++ if (ret < 0) {
++ pm_runtime_put_autosuspend(adev->ddev->dev);
+ return ret;
++ }
+
+ if (is_support_sw_smu(adev)) {
+ pwm_mode = smu_get_fan_control_mode(&adev->smu);
+@@ -2172,8 +2244,10 @@ static ssize_t amdgpu_hwmon_set_pwm1_enable(struct device *dev,
+ return err;
+
+ ret = pm_runtime_get_sync(adev->ddev->dev);
+- if (ret < 0)
++ if (ret < 0) {
++ pm_runtime_put_autosuspend(adev->ddev->dev);
+ return ret;
++ }
+
+ if (is_support_sw_smu(adev)) {
+ smu_set_fan_control_mode(&adev->smu, value);
+@@ -2220,8 +2294,10 @@ static ssize_t amdgpu_hwmon_set_pwm1(struct device *dev,
+ return -EPERM;
+
+ err = pm_runtime_get_sync(adev->ddev->dev);
+- if (err < 0)
++ if (err < 0) {
++ pm_runtime_put_autosuspend(adev->ddev->dev);
+ return err;
++ }
+
+ if (is_support_sw_smu(adev))
+ pwm_mode = smu_get_fan_control_mode(&adev->smu);
+@@ -2272,8 +2348,10 @@ static ssize_t amdgpu_hwmon_get_pwm1(struct device *dev,
+ return -EPERM;
+
+ err = pm_runtime_get_sync(adev->ddev->dev);
+- if (err < 0)
++ if (err < 0) {
++ pm_runtime_put_autosuspend(adev->ddev->dev);
+ return err;
++ }
+
+ if (is_support_sw_smu(adev))
+ err = smu_get_fan_speed_percent(&adev->smu, &speed);
+@@ -2305,8 +2383,10 @@ static ssize_t amdgpu_hwmon_get_fan1_input(struct device *dev,
+ return -EPERM;
+
+ err = pm_runtime_get_sync(adev->ddev->dev);
+- if (err < 0)
++ if (err < 0) {
++ pm_runtime_put_autosuspend(adev->ddev->dev);
+ return err;
++ }
+
+ if (is_support_sw_smu(adev))
+ err = smu_get_fan_speed_rpm(&adev->smu, &speed);
+@@ -2337,8 +2417,10 @@ static ssize_t amdgpu_hwmon_get_fan1_min(struct device *dev,
+ return -EPERM;
+
+ r = pm_runtime_get_sync(adev->ddev->dev);
+- if (r < 0)
++ if (r < 0) {
++ pm_runtime_put_autosuspend(adev->ddev->dev);
+ return r;
++ }
+
+ r = amdgpu_dpm_read_sensor(adev, AMDGPU_PP_SENSOR_MIN_FAN_RPM,
+ (void *)&min_rpm, &size);
+@@ -2365,8 +2447,10 @@ static ssize_t amdgpu_hwmon_get_fan1_max(struct device *dev,
+ return -EPERM;
+
+ r = pm_runtime_get_sync(adev->ddev->dev);
+- if (r < 0)
++ if (r < 0) {
++ pm_runtime_put_autosuspend(adev->ddev->dev);
+ return r;
++ }
+
+ r = amdgpu_dpm_read_sensor(adev, AMDGPU_PP_SENSOR_MAX_FAN_RPM,
+ (void *)&max_rpm, &size);
+@@ -2392,8 +2476,10 @@ static ssize_t amdgpu_hwmon_get_fan1_target(struct device *dev,
+ return -EPERM;
+
+ err = pm_runtime_get_sync(adev->ddev->dev);
+- if (err < 0)
++ if (err < 0) {
++ pm_runtime_put_autosuspend(adev->ddev->dev);
+ return err;
++ }
+
+ if (is_support_sw_smu(adev))
+ err = smu_get_fan_speed_rpm(&adev->smu, &rpm);
+@@ -2424,8 +2510,10 @@ static ssize_t amdgpu_hwmon_set_fan1_target(struct device *dev,
+ return -EPERM;
+
+ err = pm_runtime_get_sync(adev->ddev->dev);
+- if (err < 0)
++ if (err < 0) {
++ pm_runtime_put_autosuspend(adev->ddev->dev);
+ return err;
++ }
+
+ if (is_support_sw_smu(adev))
+ pwm_mode = smu_get_fan_control_mode(&adev->smu);
+@@ -2473,8 +2561,10 @@ static ssize_t amdgpu_hwmon_get_fan1_enable(struct device *dev,
+ return -EPERM;
+
+ ret = pm_runtime_get_sync(adev->ddev->dev);
+- if (ret < 0)
++ if (ret < 0) {
++ pm_runtime_put_autosuspend(adev->ddev->dev);
+ return ret;
++ }
+
+ if (is_support_sw_smu(adev)) {
+ pwm_mode = smu_get_fan_control_mode(&adev->smu);
+@@ -2519,8 +2609,10 @@ static ssize_t amdgpu_hwmon_set_fan1_enable(struct device *dev,
+ return -EINVAL;
+
+ err = pm_runtime_get_sync(adev->ddev->dev);
+- if (err < 0)
++ if (err < 0) {
++ pm_runtime_put_autosuspend(adev->ddev->dev);
+ return err;
++ }
+
+ if (is_support_sw_smu(adev)) {
+ smu_set_fan_control_mode(&adev->smu, pwm_mode);
+@@ -2551,8 +2643,10 @@ static ssize_t amdgpu_hwmon_show_vddgfx(struct device *dev,
+ return -EPERM;
+
+ r = pm_runtime_get_sync(adev->ddev->dev);
+- if (r < 0)
++ if (r < 0) {
++ pm_runtime_put_autosuspend(adev->ddev->dev);
+ return r;
++ }
+
+ /* get the voltage */
+ r = amdgpu_dpm_read_sensor(adev, AMDGPU_PP_SENSOR_VDDGFX,
+@@ -2590,8 +2684,10 @@ static ssize_t amdgpu_hwmon_show_vddnb(struct device *dev,
+ return -EINVAL;
+
+ r = pm_runtime_get_sync(adev->ddev->dev);
+- if (r < 0)
++ if (r < 0) {
++ pm_runtime_put_autosuspend(adev->ddev->dev);
+ return r;
++ }
+
+ /* get the voltage */
+ r = amdgpu_dpm_read_sensor(adev, AMDGPU_PP_SENSOR_VDDNB,
+@@ -2626,8 +2722,10 @@ static ssize_t amdgpu_hwmon_show_power_avg(struct device *dev,
+ return -EPERM;
+
+ r = pm_runtime_get_sync(adev->ddev->dev);
+- if (r < 0)
++ if (r < 0) {
++ pm_runtime_put_autosuspend(adev->ddev->dev);
+ return r;
++ }
+
+ /* get the voltage */
+ r = amdgpu_dpm_read_sensor(adev, AMDGPU_PP_SENSOR_GPU_POWER,
+@@ -2665,8 +2763,10 @@ static ssize_t amdgpu_hwmon_show_power_cap_max(struct device *dev,
+ return -EPERM;
+
+ r = pm_runtime_get_sync(adev->ddev->dev);
+- if (r < 0)
++ if (r < 0) {
++ pm_runtime_put_autosuspend(adev->ddev->dev);
+ return r;
++ }
+
+ if (is_support_sw_smu(adev)) {
+ smu_get_power_limit(&adev->smu, &limit, true, true);
+@@ -2697,8 +2797,10 @@ static ssize_t amdgpu_hwmon_show_power_cap(struct device *dev,
+ return -EPERM;
+
+ r = pm_runtime_get_sync(adev->ddev->dev);
+- if (r < 0)
++ if (r < 0) {
++ pm_runtime_put_autosuspend(adev->ddev->dev);
+ return r;
++ }
+
+ if (is_support_sw_smu(adev)) {
+ smu_get_power_limit(&adev->smu, &limit, false, true);
+@@ -2740,8 +2842,10 @@ static ssize_t amdgpu_hwmon_set_power_cap(struct device *dev,
+
+
+ err = pm_runtime_get_sync(adev->ddev->dev);
+- if (err < 0)
++ if (err < 0) {
++ pm_runtime_put_autosuspend(adev->ddev->dev);
+ return err;
++ }
+
+ if (is_support_sw_smu(adev))
+ err = smu_set_power_limit(&adev->smu, value);
+@@ -2771,8 +2875,10 @@ static ssize_t amdgpu_hwmon_show_sclk(struct device *dev,
+ return -EPERM;
+
+ r = pm_runtime_get_sync(adev->ddev->dev);
+- if (r < 0)
++ if (r < 0) {
++ pm_runtime_put_autosuspend(adev->ddev->dev);
+ return r;
++ }
+
+ /* get the sclk */
+ r = amdgpu_dpm_read_sensor(adev, AMDGPU_PP_SENSOR_GFX_SCLK,
+@@ -2806,8 +2912,10 @@ static ssize_t amdgpu_hwmon_show_mclk(struct device *dev,
+ return -EPERM;
+
+ r = pm_runtime_get_sync(adev->ddev->dev);
+- if (r < 0)
++ if (r < 0) {
++ pm_runtime_put_autosuspend(adev->ddev->dev);
+ return r;
++ }
+
+ /* get the sclk */
+ r = amdgpu_dpm_read_sensor(adev, AMDGPU_PP_SENSOR_GFX_MCLK,
+@@ -3669,8 +3777,10 @@ static int amdgpu_debugfs_pm_info(struct seq_file *m, void *data)
+ return -EPERM;
+
+ r = pm_runtime_get_sync(dev->dev);
+- if (r < 0)
++ if (r < 0) {
++ pm_runtime_put_autosuspend(dev->dev);
+ return r;
++ }
+
+ amdgpu_device_ip_get_clockgating_state(adev, &flags);
+ seq_printf(m, "Clock Gating Flags Mask: 0x%x\n", flags);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
+index 50fe08bf2f727..3f47f35eedff1 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
+@@ -1240,7 +1240,6 @@ void amdgpu_ras_debugfs_remove(struct amdgpu_device *adev,
+ if (!obj || !obj->ent)
+ return;
+
+- debugfs_remove(obj->ent);
+ obj->ent = NULL;
+ put_obj(obj);
+ }
+@@ -1254,7 +1253,6 @@ static void amdgpu_ras_debugfs_remove_all(struct amdgpu_device *adev)
+ amdgpu_ras_debugfs_remove(adev, &obj->head);
+ }
+
+- debugfs_remove_recursive(con->dir);
+ con->dir = NULL;
+ }
+ /* debugfs end */
+@@ -1914,9 +1912,8 @@ int amdgpu_ras_init(struct amdgpu_device *adev)
+ amdgpu_ras_check_supported(adev, &con->hw_supported,
+ &con->supported);
+ if (!con->hw_supported) {
+- amdgpu_ras_set_context(adev, NULL);
+- kfree(con);
+- return 0;
++ r = 0;
++ goto err_out;
+ }
+
+ con->features = 0;
+@@ -1927,29 +1924,31 @@ int amdgpu_ras_init(struct amdgpu_device *adev)
+ if (adev->nbio.funcs->init_ras_controller_interrupt) {
+ r = adev->nbio.funcs->init_ras_controller_interrupt(adev);
+ if (r)
+- return r;
++ goto err_out;
+ }
+
+ if (adev->nbio.funcs->init_ras_err_event_athub_interrupt) {
+ r = adev->nbio.funcs->init_ras_err_event_athub_interrupt(adev);
+ if (r)
+- return r;
++ goto err_out;
+ }
+
+ amdgpu_ras_mask &= AMDGPU_RAS_BLOCK_MASK;
+
+- if (amdgpu_ras_fs_init(adev))
+- goto fs_out;
++ if (amdgpu_ras_fs_init(adev)) {
++ r = -EINVAL;
++ goto err_out;
++ }
+
+ dev_info(adev->dev, "RAS INFO: ras initialized successfully, "
+ "hardware ability[%x] ras_mask[%x]\n",
+ con->hw_supported, con->supported);
+ return 0;
+-fs_out:
++err_out:
+ amdgpu_ras_set_context(adev, NULL);
+ kfree(con);
+
+- return -EINVAL;
++ return r;
+ }
+
+ /* helper function to handle common stuff in ip late init phase */
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
+index fac77a86c04b2..2c7e6efeea2ff 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
+@@ -6854,10 +6854,8 @@ static void gfx_v10_0_update_medium_grain_clock_gating(struct amdgpu_device *ade
+ def = data = RREG32_SOC15(GC, 0, mmRLC_CGTT_MGCG_OVERRIDE);
+ data &= ~(RLC_CGTT_MGCG_OVERRIDE__GRBM_CGTT_SCLK_OVERRIDE_MASK |
+ RLC_CGTT_MGCG_OVERRIDE__GFXIP_MGCG_OVERRIDE_MASK |
+- RLC_CGTT_MGCG_OVERRIDE__GFXIP_MGLS_OVERRIDE_MASK);
+-
+- /* only for Vega10 & Raven1 */
+- data |= RLC_CGTT_MGCG_OVERRIDE__RLC_CGTT_SCLK_OVERRIDE_MASK;
++ RLC_CGTT_MGCG_OVERRIDE__GFXIP_MGLS_OVERRIDE_MASK |
++ RLC_CGTT_MGCG_OVERRIDE__ENABLE_CGTS_LEGACY_MASK);
+
+ if (def != data)
+ WREG32_SOC15(GC, 0, mmRLC_CGTT_MGCG_OVERRIDE, data);
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_process.c b/drivers/gpu/drm/amd/amdkfd/kfd_process.c
+index 0e0c42e9f6a31..6520a920cad4a 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_process.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_process.c
+@@ -1003,8 +1003,10 @@ struct kfd_process_device *kfd_bind_process_to_device(struct kfd_dev *dev,
+ */
+ if (!pdd->runtime_inuse) {
+ err = pm_runtime_get_sync(dev->ddev->dev);
+- if (err < 0)
++ if (err < 0) {
++ pm_runtime_put_autosuspend(dev->ddev->dev);
+ return ERR_PTR(err);
++ }
+ }
+
+ err = kfd_iommu_bind_process_to_device(pdd);
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_topology.c b/drivers/gpu/drm/amd/amdkfd/kfd_topology.c
+index bb77f7af2b6d9..dc3c4149f8600 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_topology.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_topology.c
+@@ -632,8 +632,10 @@ static int kfd_build_sysfs_node_entry(struct kfd_topology_device *dev,
+
+ ret = kobject_init_and_add(dev->kobj_node, &node_type,
+ sys_props.kobj_nodes, "%d", id);
+- if (ret < 0)
++ if (ret < 0) {
++ kobject_put(dev->kobj_node);
+ return ret;
++ }
+
+ dev->kobj_mem = kobject_create_and_add("mem_banks", dev->kobj_node);
+ if (!dev->kobj_mem)
+@@ -680,8 +682,10 @@ static int kfd_build_sysfs_node_entry(struct kfd_topology_device *dev,
+ return -ENOMEM;
+ ret = kobject_init_and_add(mem->kobj, &mem_type,
+ dev->kobj_mem, "%d", i);
+- if (ret < 0)
++ if (ret < 0) {
++ kobject_put(mem->kobj);
+ return ret;
++ }
+
+ mem->attr.name = "properties";
+ mem->attr.mode = KFD_SYSFS_FILE_MODE;
+@@ -699,8 +703,10 @@ static int kfd_build_sysfs_node_entry(struct kfd_topology_device *dev,
+ return -ENOMEM;
+ ret = kobject_init_and_add(cache->kobj, &cache_type,
+ dev->kobj_cache, "%d", i);
+- if (ret < 0)
++ if (ret < 0) {
++ kobject_put(cache->kobj);
+ return ret;
++ }
+
+ cache->attr.name = "properties";
+ cache->attr.mode = KFD_SYSFS_FILE_MODE;
+@@ -718,8 +724,10 @@ static int kfd_build_sysfs_node_entry(struct kfd_topology_device *dev,
+ return -ENOMEM;
+ ret = kobject_init_and_add(iolink->kobj, &iolink_type,
+ dev->kobj_iolink, "%d", i);
+- if (ret < 0)
++ if (ret < 0) {
++ kobject_put(iolink->kobj);
+ return ret;
++ }
+
+ iolink->attr.name = "properties";
+ iolink->attr.mode = KFD_SYSFS_FILE_MODE;
+@@ -798,8 +806,10 @@ static int kfd_topology_update_sysfs(void)
+ ret = kobject_init_and_add(sys_props.kobj_topology,
+ &sysprops_type, &kfd_device->kobj,
+ "topology");
+- if (ret < 0)
++ if (ret < 0) {
++ kobject_put(sys_props.kobj_topology);
+ return ret;
++ }
+
+ sys_props.kobj_nodes = kobject_create_and_add("nodes",
+ sys_props.kobj_topology);
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index 0a39a8558b294..666ebe04837af 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -2882,51 +2882,50 @@ static int set_backlight_via_aux(struct dc_link *link, uint32_t brightness)
+ return rc ? 0 : 1;
+ }
+
+-static u32 convert_brightness(const struct amdgpu_dm_backlight_caps *caps,
+- const uint32_t user_brightness)
++static int get_brightness_range(const struct amdgpu_dm_backlight_caps *caps,
++ unsigned *min, unsigned *max)
+ {
+- u32 min, max, conversion_pace;
+- u32 brightness = user_brightness;
+-
+ if (!caps)
+- goto out;
++ return 0;
+
+- if (!caps->aux_support) {
+- max = caps->max_input_signal;
+- min = caps->min_input_signal;
+- /*
+- * The brightness input is in the range 0-255
+- * It needs to be rescaled to be between the
+- * requested min and max input signal
+- * It also needs to be scaled up by 0x101 to
+- * match the DC interface which has a range of
+- * 0 to 0xffff
+- */
+- conversion_pace = 0x101;
+- brightness =
+- user_brightness
+- * conversion_pace
+- * (max - min)
+- / AMDGPU_MAX_BL_LEVEL
+- + min * conversion_pace;
++ if (caps->aux_support) {
++ // Firmware limits are in nits, DC API wants millinits.
++ *max = 1000 * caps->aux_max_input_signal;
++ *min = 1000 * caps->aux_min_input_signal;
+ } else {
+- /* TODO
+- * We are doing a linear interpolation here, which is OK but
+- * does not provide the optimal result. We probably want
+- * something close to the Perceptual Quantizer (PQ) curve.
+- */
+- max = caps->aux_max_input_signal;
+- min = caps->aux_min_input_signal;
+-
+- brightness = (AMDGPU_MAX_BL_LEVEL - user_brightness) * min
+- + user_brightness * max;
+- // Multiple the value by 1000 since we use millinits
+- brightness *= 1000;
+- brightness = DIV_ROUND_CLOSEST(brightness, AMDGPU_MAX_BL_LEVEL);
++ // Firmware limits are 8-bit, PWM control is 16-bit.
++ *max = 0x101 * caps->max_input_signal;
++ *min = 0x101 * caps->min_input_signal;
+ }
++ return 1;
++}
++
++static u32 convert_brightness_from_user(const struct amdgpu_dm_backlight_caps *caps,
++ uint32_t brightness)
++{
++ unsigned min, max;
+
+-out:
+- return brightness;
++ if (!get_brightness_range(caps, &min, &max))
++ return brightness;
++
++ // Rescale 0..255 to min..max
++ return min + DIV_ROUND_CLOSEST((max - min) * brightness,
++ AMDGPU_MAX_BL_LEVEL);
++}
++
++static u32 convert_brightness_to_user(const struct amdgpu_dm_backlight_caps *caps,
++ uint32_t brightness)
++{
++ unsigned min, max;
++
++ if (!get_brightness_range(caps, &min, &max))
++ return brightness;
++
++ if (brightness < min)
++ return 0;
++ // Rescale min..max to 0..255
++ return DIV_ROUND_CLOSEST(AMDGPU_MAX_BL_LEVEL * (brightness - min),
++ max - min);
+ }
+
+ static int amdgpu_dm_backlight_update_status(struct backlight_device *bd)
+@@ -2942,7 +2941,7 @@ static int amdgpu_dm_backlight_update_status(struct backlight_device *bd)
+
+ link = (struct dc_link *)dm->backlight_link;
+
+- brightness = convert_brightness(&caps, bd->props.brightness);
++ brightness = convert_brightness_from_user(&caps, bd->props.brightness);
+ // Change brightness based on AUX property
+ if (caps.aux_support)
+ return set_backlight_via_aux(link, brightness);
+@@ -2959,7 +2958,7 @@ static int amdgpu_dm_backlight_get_brightness(struct backlight_device *bd)
+
+ if (ret == DC_ERROR_UNEXPECTED)
+ return bd->props.brightness;
+- return ret;
++ return convert_brightness_to_user(&dm->backlight_caps, ret);
+ }
+
+ static const struct backlight_ops amdgpu_dm_backlight_ops = {
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_color.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_color.c
+index 4dfb6b55bb2ed..b321ff654df42 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_color.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_color.c
+@@ -195,10 +195,13 @@ static int __set_legacy_tf(struct dc_transfer_func *func,
+ bool has_rom)
+ {
+ struct dc_gamma *gamma = NULL;
++ struct calculate_buffer cal_buffer = {0};
+ bool res;
+
+ ASSERT(lut && lut_size == MAX_COLOR_LEGACY_LUT_ENTRIES);
+
++ cal_buffer.buffer_index = -1;
++
+ gamma = dc_create_gamma();
+ if (!gamma)
+ return -ENOMEM;
+@@ -208,7 +211,7 @@ static int __set_legacy_tf(struct dc_transfer_func *func,
+ __drm_lut_to_dc_gamma(lut, gamma, true);
+
+ res = mod_color_calculate_regamma_params(func, gamma, true, has_rom,
+- NULL);
++ NULL, &cal_buffer);
+
+ dc_gamma_release(&gamma);
+
+@@ -221,10 +224,13 @@ static int __set_output_tf(struct dc_transfer_func *func,
+ bool has_rom)
+ {
+ struct dc_gamma *gamma = NULL;
++ struct calculate_buffer cal_buffer = {0};
+ bool res;
+
+ ASSERT(lut && lut_size == MAX_COLOR_LUT_ENTRIES);
+
++ cal_buffer.buffer_index = -1;
++
+ gamma = dc_create_gamma();
+ if (!gamma)
+ return -ENOMEM;
+@@ -248,7 +254,7 @@ static int __set_output_tf(struct dc_transfer_func *func,
+ */
+ gamma->type = GAMMA_CS_TFM_1D;
+ res = mod_color_calculate_regamma_params(func, gamma, false,
+- has_rom, NULL);
++ has_rom, NULL, &cal_buffer);
+ }
+
+ dc_gamma_release(&gamma);
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_stream_encoder.c b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_stream_encoder.c
+index 07b2f9399671d..842abb4c475bc 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_stream_encoder.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_stream_encoder.c
+@@ -121,35 +121,35 @@ void enc1_update_generic_info_packet(
+ switch (packet_index) {
+ case 0:
+ REG_UPDATE(AFMT_VBI_PACKET_CONTROL1,
+- AFMT_GENERIC0_FRAME_UPDATE, 1);
++ AFMT_GENERIC0_IMMEDIATE_UPDATE, 1);
+ break;
+ case 1:
+ REG_UPDATE(AFMT_VBI_PACKET_CONTROL1,
+- AFMT_GENERIC1_FRAME_UPDATE, 1);
++ AFMT_GENERIC1_IMMEDIATE_UPDATE, 1);
+ break;
+ case 2:
+ REG_UPDATE(AFMT_VBI_PACKET_CONTROL1,
+- AFMT_GENERIC2_FRAME_UPDATE, 1);
++ AFMT_GENERIC2_IMMEDIATE_UPDATE, 1);
+ break;
+ case 3:
+ REG_UPDATE(AFMT_VBI_PACKET_CONTROL1,
+- AFMT_GENERIC3_FRAME_UPDATE, 1);
++ AFMT_GENERIC3_IMMEDIATE_UPDATE, 1);
+ break;
+ case 4:
+ REG_UPDATE(AFMT_VBI_PACKET_CONTROL1,
+- AFMT_GENERIC4_FRAME_UPDATE, 1);
++ AFMT_GENERIC4_IMMEDIATE_UPDATE, 1);
+ break;
+ case 5:
+ REG_UPDATE(AFMT_VBI_PACKET_CONTROL1,
+- AFMT_GENERIC5_FRAME_UPDATE, 1);
++ AFMT_GENERIC5_IMMEDIATE_UPDATE, 1);
+ break;
+ case 6:
+ REG_UPDATE(AFMT_VBI_PACKET_CONTROL1,
+- AFMT_GENERIC6_FRAME_UPDATE, 1);
++ AFMT_GENERIC6_IMMEDIATE_UPDATE, 1);
+ break;
+ case 7:
+ REG_UPDATE(AFMT_VBI_PACKET_CONTROL1,
+- AFMT_GENERIC7_FRAME_UPDATE, 1);
++ AFMT_GENERIC7_IMMEDIATE_UPDATE, 1);
+ break;
+ default:
+ break;
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_stream_encoder.h b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_stream_encoder.h
+index f9b9e221c698b..7507000a99ac4 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_stream_encoder.h
++++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_stream_encoder.h
+@@ -273,7 +273,14 @@ struct dcn10_stream_enc_registers {
+ SE_SF(DIG0_AFMT_VBI_PACKET_CONTROL1, AFMT_GENERIC2_FRAME_UPDATE, mask_sh),\
+ SE_SF(DIG0_AFMT_VBI_PACKET_CONTROL1, AFMT_GENERIC3_FRAME_UPDATE, mask_sh),\
+ SE_SF(DIG0_AFMT_VBI_PACKET_CONTROL1, AFMT_GENERIC4_FRAME_UPDATE, mask_sh),\
++ SE_SF(DIG0_AFMT_VBI_PACKET_CONTROL1, AFMT_GENERIC0_IMMEDIATE_UPDATE, mask_sh),\
++ SE_SF(DIG0_AFMT_VBI_PACKET_CONTROL1, AFMT_GENERIC1_IMMEDIATE_UPDATE, mask_sh),\
++ SE_SF(DIG0_AFMT_VBI_PACKET_CONTROL1, AFMT_GENERIC2_IMMEDIATE_UPDATE, mask_sh),\
++ SE_SF(DIG0_AFMT_VBI_PACKET_CONTROL1, AFMT_GENERIC3_IMMEDIATE_UPDATE, mask_sh),\
+ SE_SF(DIG0_AFMT_VBI_PACKET_CONTROL1, AFMT_GENERIC4_IMMEDIATE_UPDATE, mask_sh),\
++ SE_SF(DIG0_AFMT_VBI_PACKET_CONTROL1, AFMT_GENERIC5_IMMEDIATE_UPDATE, mask_sh),\
++ SE_SF(DIG0_AFMT_VBI_PACKET_CONTROL1, AFMT_GENERIC6_IMMEDIATE_UPDATE, mask_sh),\
++ SE_SF(DIG0_AFMT_VBI_PACKET_CONTROL1, AFMT_GENERIC7_IMMEDIATE_UPDATE, mask_sh),\
+ SE_SF(DIG0_AFMT_VBI_PACKET_CONTROL1, AFMT_GENERIC5_FRAME_UPDATE, mask_sh),\
+ SE_SF(DIG0_AFMT_VBI_PACKET_CONTROL1, AFMT_GENERIC6_FRAME_UPDATE, mask_sh),\
+ SE_SF(DIG0_AFMT_VBI_PACKET_CONTROL1, AFMT_GENERIC7_FRAME_UPDATE, mask_sh),\
+@@ -337,7 +344,14 @@ struct dcn10_stream_enc_registers {
+ type AFMT_GENERIC2_FRAME_UPDATE;\
+ type AFMT_GENERIC3_FRAME_UPDATE;\
+ type AFMT_GENERIC4_FRAME_UPDATE;\
++ type AFMT_GENERIC0_IMMEDIATE_UPDATE;\
++ type AFMT_GENERIC1_IMMEDIATE_UPDATE;\
++ type AFMT_GENERIC2_IMMEDIATE_UPDATE;\
++ type AFMT_GENERIC3_IMMEDIATE_UPDATE;\
+ type AFMT_GENERIC4_IMMEDIATE_UPDATE;\
++ type AFMT_GENERIC5_IMMEDIATE_UPDATE;\
++ type AFMT_GENERIC6_IMMEDIATE_UPDATE;\
++ type AFMT_GENERIC7_IMMEDIATE_UPDATE;\
+ type AFMT_GENERIC5_FRAME_UPDATE;\
+ type AFMT_GENERIC6_FRAME_UPDATE;\
+ type AFMT_GENERIC7_FRAME_UPDATE;\
+diff --git a/drivers/gpu/drm/amd/display/modules/color/Makefile b/drivers/gpu/drm/amd/display/modules/color/Makefile
+index 65c33a76951a4..e66c19a840c29 100644
+--- a/drivers/gpu/drm/amd/display/modules/color/Makefile
++++ b/drivers/gpu/drm/amd/display/modules/color/Makefile
+@@ -23,7 +23,7 @@
+ # Makefile for the color sub-module of DAL.
+ #
+
+-MOD_COLOR = color_gamma.o
++MOD_COLOR = color_gamma.o color_table.o
+
+ AMD_DAL_MOD_COLOR = $(addprefix $(AMDDALPATH)/modules/color/,$(MOD_COLOR))
+ #$(info ************ DAL COLOR MODULE MAKEFILE ************)
+diff --git a/drivers/gpu/drm/amd/display/modules/color/color_gamma.c b/drivers/gpu/drm/amd/display/modules/color/color_gamma.c
+index bcfe34ef8c28d..b8695660b480e 100644
+--- a/drivers/gpu/drm/amd/display/modules/color/color_gamma.c
++++ b/drivers/gpu/drm/amd/display/modules/color/color_gamma.c
+@@ -30,20 +30,10 @@
+ #include "opp.h"
+ #include "color_gamma.h"
+
+-#define NUM_PTS_IN_REGION 16
+-#define NUM_REGIONS 32
+-#define MAX_HW_POINTS (NUM_PTS_IN_REGION*NUM_REGIONS)
+-
+ static struct hw_x_point coordinates_x[MAX_HW_POINTS + 2];
+
+-static struct fixed31_32 pq_table[MAX_HW_POINTS + 2];
+-static struct fixed31_32 de_pq_table[MAX_HW_POINTS + 2];
+-
+ // these are helpers for calculations to reduce stack usage
+ // do not depend on these being preserved across calls
+-static struct fixed31_32 scratch_1;
+-static struct fixed31_32 scratch_2;
+-static struct translate_from_linear_space_args scratch_gamma_args;
+
+ /* Helper to optimize gamma calculation, only use in translate_from_linear, in
+ * particular the dc_fixpt_pow function which is very expensive
+@@ -56,9 +46,6 @@ static struct translate_from_linear_space_args scratch_gamma_args;
+ * just multiply with 2^gamma which can be computed once, and save the result so we
+ * recursively compute all the values.
+ */
+-static struct fixed31_32 pow_buffer[NUM_PTS_IN_REGION];
+-static struct fixed31_32 gamma_of_2; // 2^gamma
+-int pow_buffer_ptr = -1;
+ /*sRGB 709 2.2 2.4 P3*/
+ static const int32_t gamma_numerator01[] = { 31308, 180000, 0, 0, 0};
+ static const int32_t gamma_numerator02[] = { 12920, 4500, 0, 0, 0};
+@@ -66,9 +53,6 @@ static const int32_t gamma_numerator03[] = { 55, 99, 0, 0, 0};
+ static const int32_t gamma_numerator04[] = { 55, 99, 0, 0, 0};
+ static const int32_t gamma_numerator05[] = { 2400, 2200, 2200, 2400, 2600};
+
+-static bool pq_initialized; /* = false; */
+-static bool de_pq_initialized; /* = false; */
+-
+ /* one-time setup of X points */
+ void setup_x_points_distribution(void)
+ {
+@@ -250,6 +234,8 @@ void precompute_pq(void)
+ struct fixed31_32 scaling_factor =
+ dc_fixpt_from_fraction(80, 10000);
+
++ struct fixed31_32 *pq_table = mod_color_get_table(type_pq_table);
++
+ /* pow function has problems with arguments too small */
+ for (i = 0; i < 32; i++)
+ pq_table[i] = dc_fixpt_zero;
+@@ -269,7 +255,7 @@ void precompute_de_pq(void)
+ uint32_t begin_index, end_index;
+
+ struct fixed31_32 scaling_factor = dc_fixpt_from_int(125);
+-
++ struct fixed31_32 *de_pq_table = mod_color_get_table(type_de_pq_table);
+ /* X points is 2^-25 to 2^7
+ * De-gamma X is 2^-12 to 2^0 – we are skipping first -12-(-25) = 13 regions
+ */
+@@ -339,6 +325,9 @@ static struct fixed31_32 translate_from_linear_space(
+ {
+ const struct fixed31_32 one = dc_fixpt_from_int(1);
+
++ struct fixed31_32 scratch_1, scratch_2;
++ struct calculate_buffer *cal_buffer = args->cal_buffer;
++
+ if (dc_fixpt_le(one, args->arg))
+ return one;
+
+@@ -352,21 +341,21 @@ static struct fixed31_32 translate_from_linear_space(
+
+ return scratch_1;
+ } else if (dc_fixpt_le(args->a0, args->arg)) {
+- if (pow_buffer_ptr == 0) {
+- gamma_of_2 = dc_fixpt_pow(dc_fixpt_from_int(2),
++ if (cal_buffer->buffer_index == 0) {
++ cal_buffer->gamma_of_2 = dc_fixpt_pow(dc_fixpt_from_int(2),
+ dc_fixpt_recip(args->gamma));
+ }
+ scratch_1 = dc_fixpt_add(one, args->a3);
+- if (pow_buffer_ptr < 16)
++ if (cal_buffer->buffer_index < 16)
+ scratch_2 = dc_fixpt_pow(args->arg,
+ dc_fixpt_recip(args->gamma));
+ else
+- scratch_2 = dc_fixpt_mul(gamma_of_2,
+- pow_buffer[pow_buffer_ptr%16]);
++ scratch_2 = dc_fixpt_mul(cal_buffer->gamma_of_2,
++ cal_buffer->buffer[cal_buffer->buffer_index%16]);
+
+- if (pow_buffer_ptr != -1) {
+- pow_buffer[pow_buffer_ptr%16] = scratch_2;
+- pow_buffer_ptr++;
++ if (cal_buffer->buffer_index != -1) {
++ cal_buffer->buffer[cal_buffer->buffer_index%16] = scratch_2;
++ cal_buffer->buffer_index++;
+ }
+
+ scratch_1 = dc_fixpt_mul(scratch_1, scratch_2);
+@@ -413,15 +402,17 @@ static struct fixed31_32 translate_from_linear_space_long(
+ args->a1);
+ }
+
+-static struct fixed31_32 calculate_gamma22(struct fixed31_32 arg, bool use_eetf)
++static struct fixed31_32 calculate_gamma22(struct fixed31_32 arg, bool use_eetf, struct calculate_buffer *cal_buffer)
+ {
+ struct fixed31_32 gamma = dc_fixpt_from_fraction(22, 10);
++ struct translate_from_linear_space_args scratch_gamma_args;
+
+ scratch_gamma_args.arg = arg;
+ scratch_gamma_args.a0 = dc_fixpt_zero;
+ scratch_gamma_args.a1 = dc_fixpt_zero;
+ scratch_gamma_args.a2 = dc_fixpt_zero;
+ scratch_gamma_args.a3 = dc_fixpt_zero;
++ scratch_gamma_args.cal_buffer = cal_buffer;
+ scratch_gamma_args.gamma = gamma;
+
+ if (use_eetf)
+@@ -467,14 +458,18 @@ static struct fixed31_32 translate_to_linear_space(
+ static struct fixed31_32 translate_from_linear_space_ex(
+ struct fixed31_32 arg,
+ struct gamma_coefficients *coeff,
+- uint32_t color_index)
++ uint32_t color_index,
++ struct calculate_buffer *cal_buffer)
+ {
++ struct translate_from_linear_space_args scratch_gamma_args;
++
+ scratch_gamma_args.arg = arg;
+ scratch_gamma_args.a0 = coeff->a0[color_index];
+ scratch_gamma_args.a1 = coeff->a1[color_index];
+ scratch_gamma_args.a2 = coeff->a2[color_index];
+ scratch_gamma_args.a3 = coeff->a3[color_index];
+ scratch_gamma_args.gamma = coeff->user_gamma[color_index];
++ scratch_gamma_args.cal_buffer = cal_buffer;
+
+ return translate_from_linear_space(&scratch_gamma_args);
+ }
+@@ -742,10 +737,11 @@ static void build_pq(struct pwl_float_data_ex *rgb_regamma,
+ struct fixed31_32 output;
+ struct fixed31_32 scaling_factor =
+ dc_fixpt_from_fraction(sdr_white_level, 10000);
++ struct fixed31_32 *pq_table = mod_color_get_table(type_pq_table);
+
+- if (!pq_initialized && sdr_white_level == 80) {
++ if (!mod_color_is_table_init(type_pq_table) && sdr_white_level == 80) {
+ precompute_pq();
+- pq_initialized = true;
++ mod_color_set_table_init_state(type_pq_table, true);
+ }
+
+ /* TODO: start index is from segment 2^-24, skipping first segment
+@@ -787,12 +783,12 @@ static void build_de_pq(struct pwl_float_data_ex *de_pq,
+ {
+ uint32_t i;
+ struct fixed31_32 output;
+-
++ struct fixed31_32 *de_pq_table = mod_color_get_table(type_de_pq_table);
+ struct fixed31_32 scaling_factor = dc_fixpt_from_int(125);
+
+- if (!de_pq_initialized) {
++ if (!mod_color_is_table_init(type_de_pq_table)) {
+ precompute_de_pq();
+- de_pq_initialized = true;
++ mod_color_set_table_init_state(type_de_pq_table, true);
+ }
+
+
+@@ -811,7 +807,9 @@ static void build_de_pq(struct pwl_float_data_ex *de_pq,
+
+ static bool build_regamma(struct pwl_float_data_ex *rgb_regamma,
+ uint32_t hw_points_num,
+- const struct hw_x_point *coordinate_x, enum dc_transfer_func_predefined type)
++ const struct hw_x_point *coordinate_x,
++ enum dc_transfer_func_predefined type,
++ struct calculate_buffer *cal_buffer)
+ {
+ uint32_t i;
+ bool ret = false;
+@@ -827,20 +825,21 @@ static bool build_regamma(struct pwl_float_data_ex *rgb_regamma,
+ if (!build_coefficients(coeff, type))
+ goto release;
+
+- memset(pow_buffer, 0, NUM_PTS_IN_REGION * sizeof(struct fixed31_32));
+- pow_buffer_ptr = 0; // see variable definition for more info
++ memset(cal_buffer->buffer, 0, NUM_PTS_IN_REGION * sizeof(struct fixed31_32));
++ cal_buffer->buffer_index = 0; // see variable definition for more info
++
+ i = 0;
+ while (i <= hw_points_num) {
+ /*TODO use y vs r,g,b*/
+ rgb->r = translate_from_linear_space_ex(
+- coord_x->x, coeff, 0);
++ coord_x->x, coeff, 0, cal_buffer);
+ rgb->g = rgb->r;
+ rgb->b = rgb->r;
+ ++coord_x;
+ ++rgb;
+ ++i;
+ }
+- pow_buffer_ptr = -1; // reset back to no optimize
++ cal_buffer->buffer_index = -1;
+ ret = true;
+ release:
+ kvfree(coeff);
+@@ -932,7 +931,8 @@ static void hermite_spline_eetf(struct fixed31_32 input_x,
+ static bool build_freesync_hdr(struct pwl_float_data_ex *rgb_regamma,
+ uint32_t hw_points_num,
+ const struct hw_x_point *coordinate_x,
+- const struct freesync_hdr_tf_params *fs_params)
++ const struct freesync_hdr_tf_params *fs_params,
++ struct calculate_buffer *cal_buffer)
+ {
+ uint32_t i;
+ struct pwl_float_data_ex *rgb = rgb_regamma;
+@@ -969,7 +969,7 @@ static bool build_freesync_hdr(struct pwl_float_data_ex *rgb_regamma,
+ max_content = max_display;
+
+ if (!use_eetf)
+- pow_buffer_ptr = 0; // see var definition for more info
++ cal_buffer->buffer_index = 0; // see var definition for more info
+ rgb += 32; // first 32 points have problems with fixed point, too small
+ coord_x += 32;
+ for (i = 32; i <= hw_points_num; i++) {
+@@ -988,7 +988,7 @@ static bool build_freesync_hdr(struct pwl_float_data_ex *rgb_regamma,
+ if (dc_fixpt_lt(scaledX, dc_fixpt_zero))
+ output = dc_fixpt_zero;
+ else
+- output = calculate_gamma22(scaledX, use_eetf);
++ output = calculate_gamma22(scaledX, use_eetf, cal_buffer);
+
+ rgb->r = output;
+ rgb->g = output;
+@@ -1008,7 +1008,7 @@ static bool build_freesync_hdr(struct pwl_float_data_ex *rgb_regamma,
+ ++coord_x;
+ ++rgb;
+ }
+- pow_buffer_ptr = -1;
++ cal_buffer->buffer_index = -1;
+
+ return true;
+ }
+@@ -1606,7 +1606,7 @@ static void build_new_custom_resulted_curve(
+ }
+
+ static void apply_degamma_for_user_regamma(struct pwl_float_data_ex *rgb_regamma,
+- uint32_t hw_points_num)
++ uint32_t hw_points_num, struct calculate_buffer *cal_buffer)
+ {
+ uint32_t i;
+
+@@ -1619,7 +1619,7 @@ static void apply_degamma_for_user_regamma(struct pwl_float_data_ex *rgb_regamma
+ i = 0;
+ while (i != hw_points_num + 1) {
+ rgb->r = translate_from_linear_space_ex(
+- coord_x->x, &coeff, 0);
++ coord_x->x, &coeff, 0, cal_buffer);
+ rgb->g = rgb->r;
+ rgb->b = rgb->r;
+ ++coord_x;
+@@ -1674,7 +1674,8 @@ static bool map_regamma_hw_to_x_user(
+ #define _EXTRA_POINTS 3
+
+ bool calculate_user_regamma_coeff(struct dc_transfer_func *output_tf,
+- const struct regamma_lut *regamma)
++ const struct regamma_lut *regamma,
++ struct calculate_buffer *cal_buffer)
+ {
+ struct gamma_coefficients coeff;
+ const struct hw_x_point *coord_x = coordinates_x;
+@@ -1706,11 +1707,11 @@ bool calculate_user_regamma_coeff(struct dc_transfer_func *output_tf,
+ }
+ while (i != MAX_HW_POINTS + 1) {
+ output_tf->tf_pts.red[i] = translate_from_linear_space_ex(
+- coord_x->x, &coeff, 0);
++ coord_x->x, &coeff, 0, cal_buffer);
+ output_tf->tf_pts.green[i] = translate_from_linear_space_ex(
+- coord_x->x, &coeff, 1);
++ coord_x->x, &coeff, 1, cal_buffer);
+ output_tf->tf_pts.blue[i] = translate_from_linear_space_ex(
+- coord_x->x, &coeff, 2);
++ coord_x->x, &coeff, 2, cal_buffer);
+ ++coord_x;
+ ++i;
+ }
+@@ -1723,7 +1724,8 @@ bool calculate_user_regamma_coeff(struct dc_transfer_func *output_tf,
+ }
+
+ bool calculate_user_regamma_ramp(struct dc_transfer_func *output_tf,
+- const struct regamma_lut *regamma)
++ const struct regamma_lut *regamma,
++ struct calculate_buffer *cal_buffer)
+ {
+ struct dc_transfer_func_distributed_points *tf_pts = &output_tf->tf_pts;
+ struct dividers dividers;
+@@ -1756,7 +1758,7 @@ bool calculate_user_regamma_ramp(struct dc_transfer_func *output_tf,
+ scale_user_regamma_ramp(rgb_user, &regamma->ramp, dividers);
+
+ if (regamma->flags.bits.applyDegamma == 1) {
+- apply_degamma_for_user_regamma(rgb_regamma, MAX_HW_POINTS);
++ apply_degamma_for_user_regamma(rgb_regamma, MAX_HW_POINTS, cal_buffer);
+ copy_rgb_regamma_to_coordinates_x(coordinates_x,
+ MAX_HW_POINTS, rgb_regamma);
+ }
+@@ -1943,7 +1945,8 @@ static bool calculate_curve(enum dc_transfer_func_predefined trans,
+ struct dc_transfer_func_distributed_points *points,
+ struct pwl_float_data_ex *rgb_regamma,
+ const struct freesync_hdr_tf_params *fs_params,
+- uint32_t sdr_ref_white_level)
++ uint32_t sdr_ref_white_level,
++ struct calculate_buffer *cal_buffer)
+ {
+ uint32_t i;
+ bool ret = false;
+@@ -1979,7 +1982,8 @@ static bool calculate_curve(enum dc_transfer_func_predefined trans,
+ build_freesync_hdr(rgb_regamma,
+ MAX_HW_POINTS,
+ coordinates_x,
+- fs_params);
++ fs_params,
++ cal_buffer);
+
+ ret = true;
+ } else if (trans == TRANSFER_FUNCTION_HLG) {
+@@ -2008,7 +2012,8 @@ static bool calculate_curve(enum dc_transfer_func_predefined trans,
+ build_regamma(rgb_regamma,
+ MAX_HW_POINTS,
+ coordinates_x,
+- trans);
++ trans,
++ cal_buffer);
+
+ ret = true;
+ }
+@@ -2018,7 +2023,8 @@ static bool calculate_curve(enum dc_transfer_func_predefined trans,
+
+ bool mod_color_calculate_regamma_params(struct dc_transfer_func *output_tf,
+ const struct dc_gamma *ramp, bool mapUserRamp, bool canRomBeUsed,
+- const struct freesync_hdr_tf_params *fs_params)
++ const struct freesync_hdr_tf_params *fs_params,
++ struct calculate_buffer *cal_buffer)
+ {
+ struct dc_transfer_func_distributed_points *tf_pts = &output_tf->tf_pts;
+ struct dividers dividers;
+@@ -2090,7 +2096,8 @@ bool mod_color_calculate_regamma_params(struct dc_transfer_func *output_tf,
+ tf_pts,
+ rgb_regamma,
+ fs_params,
+- output_tf->sdr_ref_white_level);
++ output_tf->sdr_ref_white_level,
++ cal_buffer);
+
+ if (ret) {
+ map_regamma_hw_to_x_user(ramp, coeff, rgb_user,
+diff --git a/drivers/gpu/drm/amd/display/modules/color/color_gamma.h b/drivers/gpu/drm/amd/display/modules/color/color_gamma.h
+index 7f56226ba77a9..37ffbef6602b0 100644
+--- a/drivers/gpu/drm/amd/display/modules/color/color_gamma.h
++++ b/drivers/gpu/drm/amd/display/modules/color/color_gamma.h
+@@ -26,6 +26,8 @@
+ #ifndef COLOR_MOD_COLOR_GAMMA_H_
+ #define COLOR_MOD_COLOR_GAMMA_H_
+
++#include "color_table.h"
++
+ struct dc_transfer_func;
+ struct dc_gamma;
+ struct dc_transfer_func_distributed_points;
+@@ -83,6 +85,12 @@ struct freesync_hdr_tf_params {
+ unsigned int skip_tm; // skip tm
+ };
+
++struct calculate_buffer {
++ int buffer_index;
++ struct fixed31_32 buffer[NUM_PTS_IN_REGION];
++ struct fixed31_32 gamma_of_2;
++};
++
+ struct translate_from_linear_space_args {
+ struct fixed31_32 arg;
+ struct fixed31_32 a0;
+@@ -90,6 +98,7 @@ struct translate_from_linear_space_args {
+ struct fixed31_32 a2;
+ struct fixed31_32 a3;
+ struct fixed31_32 gamma;
++ struct calculate_buffer *cal_buffer;
+ };
+
+ void setup_x_points_distribution(void);
+@@ -99,7 +108,8 @@ void precompute_de_pq(void);
+
+ bool mod_color_calculate_regamma_params(struct dc_transfer_func *output_tf,
+ const struct dc_gamma *ramp, bool mapUserRamp, bool canRomBeUsed,
+- const struct freesync_hdr_tf_params *fs_params);
++ const struct freesync_hdr_tf_params *fs_params,
++ struct calculate_buffer *cal_buffer);
+
+ bool mod_color_calculate_degamma_params(struct dc_color_caps *dc_caps,
+ struct dc_transfer_func *output_tf,
+@@ -109,10 +119,12 @@ bool mod_color_calculate_degamma_curve(enum dc_transfer_func_predefined trans,
+ struct dc_transfer_func_distributed_points *points);
+
+ bool calculate_user_regamma_coeff(struct dc_transfer_func *output_tf,
+- const struct regamma_lut *regamma);
++ const struct regamma_lut *regamma,
++ struct calculate_buffer *cal_buffer);
+
+ bool calculate_user_regamma_ramp(struct dc_transfer_func *output_tf,
+- const struct regamma_lut *regamma);
++ const struct regamma_lut *regamma,
++ struct calculate_buffer *cal_buffer);
+
+
+ #endif /* COLOR_MOD_COLOR_GAMMA_H_ */
+diff --git a/drivers/gpu/drm/amd/display/modules/color/color_table.c b/drivers/gpu/drm/amd/display/modules/color/color_table.c
+new file mode 100644
+index 0000000000000..692e536e7d057
+--- /dev/null
++++ b/drivers/gpu/drm/amd/display/modules/color/color_table.c
+@@ -0,0 +1,48 @@
++/*
++ * Copyright (c) 2019 Advanced Micro Devices, Inc. (unpublished)
++ *
++ * All rights reserved. This notice is intended as a precaution against
++ * inadvertent publication and does not imply publication or any waiver
++ * of confidentiality. The year included in the foregoing notice is the
++ * year of creation of the work.
++ */
++
++#include "color_table.h"
++
++static struct fixed31_32 pq_table[MAX_HW_POINTS + 2];
++static struct fixed31_32 de_pq_table[MAX_HW_POINTS + 2];
++static bool pq_initialized;
++static bool de_pg_initialized;
++
++bool mod_color_is_table_init(enum table_type type)
++{
++ bool ret = false;
++
++ if (type == type_pq_table)
++ ret = pq_initialized;
++ if (type == type_de_pq_table)
++ ret = de_pg_initialized;
++
++ return ret;
++}
++
++struct fixed31_32 *mod_color_get_table(enum table_type type)
++{
++ struct fixed31_32 *table = NULL;
++
++ if (type == type_pq_table)
++ table = pq_table;
++ if (type == type_de_pq_table)
++ table = de_pq_table;
++
++ return table;
++}
++
++void mod_color_set_table_init_state(enum table_type type, bool state)
++{
++ if (type == type_pq_table)
++ pq_initialized = state;
++ if (type == type_de_pq_table)
++ de_pg_initialized = state;
++}
++
+diff --git a/drivers/gpu/drm/amd/display/modules/color/color_table.h b/drivers/gpu/drm/amd/display/modules/color/color_table.h
+new file mode 100644
+index 0000000000000..2621dd6194027
+--- /dev/null
++++ b/drivers/gpu/drm/amd/display/modules/color/color_table.h
+@@ -0,0 +1,47 @@
++/*
++ * Copyright 2016 Advanced Micro Devices, Inc.
++ *
++ * Permission is hereby granted, free of charge, to any person obtaining a
++ * copy of this software and associated documentation files (the "Software"),
++ * to deal in the Software without restriction, including without limitation
++ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
++ * and/or sell copies of the Software, and to permit persons to whom the
++ * Software is furnished to do so, subject to the following conditions:
++ *
++ * The above copyright notice and this permission notice shall be included in
++ * all copies or substantial portions of the Software.
++ *
++ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
++ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
++ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
++ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
++ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
++ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
++ * OTHER DEALINGS IN THE SOFTWARE.
++ *
++ * Authors: AMD
++ *
++ */
++
++
++#ifndef COLOR_MOD_COLOR_TABLE_H_
++#define COLOR_MOD_COLOR_TABLE_H_
++
++#include "dc_types.h"
++
++#define NUM_PTS_IN_REGION 16
++#define NUM_REGIONS 32
++#define MAX_HW_POINTS (NUM_PTS_IN_REGION*NUM_REGIONS)
++
++enum table_type {
++ type_pq_table,
++ type_de_pq_table
++};
++
++bool mod_color_is_table_init(enum table_type type);
++
++struct fixed31_32 *mod_color_get_table(enum table_type type);
++
++void mod_color_set_table_init_state(enum table_type type, bool state);
++
++#endif /* COLOR_MOD_COLOR_TABLE_H_ */
+diff --git a/drivers/gpu/drm/amd/display/modules/freesync/freesync.c b/drivers/gpu/drm/amd/display/modules/freesync/freesync.c
+index eb7421e83b865..23a7fa8447e24 100644
+--- a/drivers/gpu/drm/amd/display/modules/freesync/freesync.c
++++ b/drivers/gpu/drm/amd/display/modules/freesync/freesync.c
+@@ -324,22 +324,44 @@ static void apply_below_the_range(struct core_freesync *core_freesync,
+
+ /* Choose number of frames to insert based on how close it
+ * can get to the mid point of the variable range.
++ * - Delta for CEIL: delta_from_mid_point_in_us_1
++ * - Delta for FLOOR: delta_from_mid_point_in_us_2
+ */
+- if ((frame_time_in_us / mid_point_frames_ceil) > in_out_vrr->min_duration_in_us &&
+- (delta_from_mid_point_in_us_1 < delta_from_mid_point_in_us_2 ||
+- mid_point_frames_floor < 2)) {
++ if ((last_render_time_in_us / mid_point_frames_ceil) < in_out_vrr->min_duration_in_us) {
++ /* Check for out of range.
++ * If using CEIL produces a value that is out of range,
++ * then we are forced to use FLOOR.
++ */
++ frames_to_insert = mid_point_frames_floor;
++ } else if (mid_point_frames_floor < 2) {
++ /* Check if FLOOR would result in non-LFC. In this case
++ * choose to use CEIL
++ */
++ frames_to_insert = mid_point_frames_ceil;
++ } else if (delta_from_mid_point_in_us_1 < delta_from_mid_point_in_us_2) {
++ /* If choosing CEIL results in a frame duration that is
++ * closer to the mid point of the range.
++ * Choose CEIL
++ */
+ frames_to_insert = mid_point_frames_ceil;
+- delta_from_mid_point_delta_in_us = delta_from_mid_point_in_us_2 -
+- delta_from_mid_point_in_us_1;
+ } else {
++ /* If choosing FLOOR results in a frame duration that is
++ * closer to the mid point of the range.
++ * Choose FLOOR
++ */
+ frames_to_insert = mid_point_frames_floor;
+- delta_from_mid_point_delta_in_us = delta_from_mid_point_in_us_1 -
+- delta_from_mid_point_in_us_2;
+ }
+
+ /* Prefer current frame multiplier when BTR is enabled unless it drifts
+ * too far from the midpoint
+ */
++ if (delta_from_mid_point_in_us_1 < delta_from_mid_point_in_us_2) {
++ delta_from_mid_point_delta_in_us = delta_from_mid_point_in_us_2 -
++ delta_from_mid_point_in_us_1;
++ } else {
++ delta_from_mid_point_delta_in_us = delta_from_mid_point_in_us_1 -
++ delta_from_mid_point_in_us_2;
++ }
+ if (in_out_vrr->btr.frames_to_insert != 0 &&
+ delta_from_mid_point_delta_in_us < BTR_DRIFT_MARGIN) {
+ if (((last_render_time_in_us / in_out_vrr->btr.frames_to_insert) <
+diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/smu10_hwmgr.c b/drivers/gpu/drm/amd/powerplay/hwmgr/smu10_hwmgr.c
+index c9cfe90a29471..9ee8cf8267c88 100644
+--- a/drivers/gpu/drm/amd/powerplay/hwmgr/smu10_hwmgr.c
++++ b/drivers/gpu/drm/amd/powerplay/hwmgr/smu10_hwmgr.c
+@@ -204,8 +204,7 @@ static int smu10_set_min_deep_sleep_dcefclk(struct pp_hwmgr *hwmgr, uint32_t clo
+ {
+ struct smu10_hwmgr *smu10_data = (struct smu10_hwmgr *)(hwmgr->backend);
+
+- if (smu10_data->need_min_deep_sleep_dcefclk &&
+- smu10_data->deep_sleep_dcefclk != clock) {
++ if (clock && smu10_data->deep_sleep_dcefclk != clock) {
+ smu10_data->deep_sleep_dcefclk = clock;
+ smum_send_msg_to_smc_with_parameter(hwmgr,
+ PPSMC_MSG_SetMinDeepSleepDcefclk,
+@@ -219,8 +218,7 @@ static int smu10_set_hard_min_dcefclk_by_freq(struct pp_hwmgr *hwmgr, uint32_t c
+ {
+ struct smu10_hwmgr *smu10_data = (struct smu10_hwmgr *)(hwmgr->backend);
+
+- if (smu10_data->dcf_actual_hard_min_freq &&
+- smu10_data->dcf_actual_hard_min_freq != clock) {
++ if (clock && smu10_data->dcf_actual_hard_min_freq != clock) {
+ smu10_data->dcf_actual_hard_min_freq = clock;
+ smum_send_msg_to_smc_with_parameter(hwmgr,
+ PPSMC_MSG_SetHardMinDcefclkByFreq,
+@@ -234,8 +232,7 @@ static int smu10_set_hard_min_fclk_by_freq(struct pp_hwmgr *hwmgr, uint32_t cloc
+ {
+ struct smu10_hwmgr *smu10_data = (struct smu10_hwmgr *)(hwmgr->backend);
+
+- if (smu10_data->f_actual_hard_min_freq &&
+- smu10_data->f_actual_hard_min_freq != clock) {
++ if (clock && smu10_data->f_actual_hard_min_freq != clock) {
+ smu10_data->f_actual_hard_min_freq = clock;
+ smum_send_msg_to_smc_with_parameter(hwmgr,
+ PPSMC_MSG_SetHardMinFclkByFreq,
+diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/vega10_thermal.c b/drivers/gpu/drm/amd/powerplay/hwmgr/vega10_thermal.c
+index 7783c7fd7ccb0..eff87c8968380 100644
+--- a/drivers/gpu/drm/amd/powerplay/hwmgr/vega10_thermal.c
++++ b/drivers/gpu/drm/amd/powerplay/hwmgr/vega10_thermal.c
+@@ -363,17 +363,19 @@ int vega10_thermal_get_temperature(struct pp_hwmgr *hwmgr)
+ static int vega10_thermal_set_temperature_range(struct pp_hwmgr *hwmgr,
+ struct PP_TemperatureRange *range)
+ {
++ struct phm_ppt_v2_information *pp_table_info =
++ (struct phm_ppt_v2_information *)(hwmgr->pptable);
++ struct phm_tdp_table *tdp_table = pp_table_info->tdp_table;
+ struct amdgpu_device *adev = hwmgr->adev;
+- int low = VEGA10_THERMAL_MINIMUM_ALERT_TEMP *
+- PP_TEMPERATURE_UNITS_PER_CENTIGRADES;
+- int high = VEGA10_THERMAL_MAXIMUM_ALERT_TEMP *
+- PP_TEMPERATURE_UNITS_PER_CENTIGRADES;
++ int low = VEGA10_THERMAL_MINIMUM_ALERT_TEMP;
++ int high = VEGA10_THERMAL_MAXIMUM_ALERT_TEMP;
+ uint32_t val;
+
+- if (low < range->min)
+- low = range->min;
+- if (high > range->max)
+- high = range->max;
++ /* compare them in unit celsius degree */
++ if (low < range->min / PP_TEMPERATURE_UNITS_PER_CENTIGRADES)
++ low = range->min / PP_TEMPERATURE_UNITS_PER_CENTIGRADES;
++ if (high > tdp_table->usSoftwareShutdownTemp)
++ high = tdp_table->usSoftwareShutdownTemp;
+
+ if (low > high)
+ return -EINVAL;
+@@ -382,8 +384,8 @@ static int vega10_thermal_set_temperature_range(struct pp_hwmgr *hwmgr,
+
+ val = REG_SET_FIELD(val, THM_THERMAL_INT_CTRL, MAX_IH_CREDIT, 5);
+ val = REG_SET_FIELD(val, THM_THERMAL_INT_CTRL, THERM_IH_HW_ENA, 1);
+- val = REG_SET_FIELD(val, THM_THERMAL_INT_CTRL, DIG_THERM_INTH, (high / PP_TEMPERATURE_UNITS_PER_CENTIGRADES));
+- val = REG_SET_FIELD(val, THM_THERMAL_INT_CTRL, DIG_THERM_INTL, (low / PP_TEMPERATURE_UNITS_PER_CENTIGRADES));
++ val = REG_SET_FIELD(val, THM_THERMAL_INT_CTRL, DIG_THERM_INTH, high);
++ val = REG_SET_FIELD(val, THM_THERMAL_INT_CTRL, DIG_THERM_INTL, low);
+ val &= (~THM_THERMAL_INT_CTRL__THERM_TRIGGER_MASK_MASK) &
+ (~THM_THERMAL_INT_CTRL__THERM_INTH_MASK_MASK) &
+ (~THM_THERMAL_INT_CTRL__THERM_INTL_MASK_MASK);
+diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_thermal.c b/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_thermal.c
+index c85806a6f62e3..650623106ceba 100644
+--- a/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_thermal.c
++++ b/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_thermal.c
+@@ -170,17 +170,18 @@ int vega12_thermal_get_temperature(struct pp_hwmgr *hwmgr)
+ static int vega12_thermal_set_temperature_range(struct pp_hwmgr *hwmgr,
+ struct PP_TemperatureRange *range)
+ {
++ struct phm_ppt_v3_information *pptable_information =
++ (struct phm_ppt_v3_information *)hwmgr->pptable;
+ struct amdgpu_device *adev = hwmgr->adev;
+- int low = VEGA12_THERMAL_MINIMUM_ALERT_TEMP *
+- PP_TEMPERATURE_UNITS_PER_CENTIGRADES;
+- int high = VEGA12_THERMAL_MAXIMUM_ALERT_TEMP *
+- PP_TEMPERATURE_UNITS_PER_CENTIGRADES;
++ int low = VEGA12_THERMAL_MINIMUM_ALERT_TEMP;
++ int high = VEGA12_THERMAL_MAXIMUM_ALERT_TEMP;
+ uint32_t val;
+
+- if (low < range->min)
+- low = range->min;
+- if (high > range->max)
+- high = range->max;
++ /* compare them in unit celsius degree */
++ if (low < range->min / PP_TEMPERATURE_UNITS_PER_CENTIGRADES)
++ low = range->min / PP_TEMPERATURE_UNITS_PER_CENTIGRADES;
++ if (high > pptable_information->us_software_shutdown_temp)
++ high = pptable_information->us_software_shutdown_temp;
+
+ if (low > high)
+ return -EINVAL;
+@@ -189,8 +190,8 @@ static int vega12_thermal_set_temperature_range(struct pp_hwmgr *hwmgr,
+
+ val = REG_SET_FIELD(val, THM_THERMAL_INT_CTRL, MAX_IH_CREDIT, 5);
+ val = REG_SET_FIELD(val, THM_THERMAL_INT_CTRL, THERM_IH_HW_ENA, 1);
+- val = REG_SET_FIELD(val, THM_THERMAL_INT_CTRL, DIG_THERM_INTH, (high / PP_TEMPERATURE_UNITS_PER_CENTIGRADES));
+- val = REG_SET_FIELD(val, THM_THERMAL_INT_CTRL, DIG_THERM_INTL, (low / PP_TEMPERATURE_UNITS_PER_CENTIGRADES));
++ val = REG_SET_FIELD(val, THM_THERMAL_INT_CTRL, DIG_THERM_INTH, high);
++ val = REG_SET_FIELD(val, THM_THERMAL_INT_CTRL, DIG_THERM_INTL, low);
+ val = val & (~THM_THERMAL_INT_CTRL__THERM_TRIGGER_MASK_MASK);
+
+ WREG32_SOC15(THM, 0, mmTHM_THERMAL_INT_CTRL, val);
+diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/vega20_hwmgr.c b/drivers/gpu/drm/amd/powerplay/hwmgr/vega20_hwmgr.c
+index 9ff470f1b826c..9bd2874a122b4 100644
+--- a/drivers/gpu/drm/amd/powerplay/hwmgr/vega20_hwmgr.c
++++ b/drivers/gpu/drm/amd/powerplay/hwmgr/vega20_hwmgr.c
+@@ -979,10 +979,7 @@ static int vega20_disable_all_smu_features(struct pp_hwmgr *hwmgr)
+ {
+ struct vega20_hwmgr *data =
+ (struct vega20_hwmgr *)(hwmgr->backend);
+- uint64_t features_enabled;
+- int i;
+- bool enabled;
+- int ret = 0;
++ int i, ret = 0;
+
+ PP_ASSERT_WITH_CODE((ret = smum_send_msg_to_smc(hwmgr,
+ PPSMC_MSG_DisableAllSmuFeatures,
+@@ -990,17 +987,8 @@ static int vega20_disable_all_smu_features(struct pp_hwmgr *hwmgr)
+ "[DisableAllSMUFeatures] Failed to disable all smu features!",
+ return ret);
+
+- ret = vega20_get_enabled_smc_features(hwmgr, &features_enabled);
+- PP_ASSERT_WITH_CODE(!ret,
+- "[DisableAllSMUFeatures] Failed to get enabled smc features!",
+- return ret);
+-
+- for (i = 0; i < GNLD_FEATURES_MAX; i++) {
+- enabled = (features_enabled & data->smu_features[i].smu_feature_bitmap) ?
+- true : false;
+- data->smu_features[i].enabled = enabled;
+- data->smu_features[i].supported = enabled;
+- }
++ for (i = 0; i < GNLD_FEATURES_MAX; i++)
++ data->smu_features[i].enabled = 0;
+
+ return 0;
+ }
+@@ -1652,12 +1640,6 @@ static void vega20_init_powergate_state(struct pp_hwmgr *hwmgr)
+
+ data->uvd_power_gated = true;
+ data->vce_power_gated = true;
+-
+- if (data->smu_features[GNLD_DPM_UVD].enabled)
+- data->uvd_power_gated = false;
+-
+- if (data->smu_features[GNLD_DPM_VCE].enabled)
+- data->vce_power_gated = false;
+ }
+
+ static int vega20_enable_dpm_tasks(struct pp_hwmgr *hwmgr)
+@@ -3230,10 +3212,11 @@ static int vega20_get_ppfeature_status(struct pp_hwmgr *hwmgr, char *buf)
+
+ static int vega20_set_ppfeature_status(struct pp_hwmgr *hwmgr, uint64_t new_ppfeature_masks)
+ {
+- uint64_t features_enabled;
+- uint64_t features_to_enable;
+- uint64_t features_to_disable;
+- int ret = 0;
++ struct vega20_hwmgr *data =
++ (struct vega20_hwmgr *)(hwmgr->backend);
++ uint64_t features_enabled, features_to_enable, features_to_disable;
++ int i, ret = 0;
++ bool enabled;
+
+ if (new_ppfeature_masks >= (1ULL << GNLD_FEATURES_MAX))
+ return -EINVAL;
+@@ -3262,6 +3245,17 @@ static int vega20_set_ppfeature_status(struct pp_hwmgr *hwmgr, uint64_t new_ppfe
+ return ret;
+ }
+
++ /* Update the cached feature enablement state */
++ ret = vega20_get_enabled_smc_features(hwmgr, &features_enabled);
++ if (ret)
++ return ret;
++
++ for (i = 0; i < GNLD_FEATURES_MAX; i++) {
++ enabled = (features_enabled & data->smu_features[i].smu_feature_bitmap) ?
++ true : false;
++ data->smu_features[i].enabled = enabled;
++ }
++
+ return 0;
+ }
+
+diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/vega20_thermal.c b/drivers/gpu/drm/amd/powerplay/hwmgr/vega20_thermal.c
+index 7add2f60f49c4..364162ddaa9c6 100644
+--- a/drivers/gpu/drm/amd/powerplay/hwmgr/vega20_thermal.c
++++ b/drivers/gpu/drm/amd/powerplay/hwmgr/vega20_thermal.c
+@@ -240,17 +240,18 @@ int vega20_thermal_get_temperature(struct pp_hwmgr *hwmgr)
+ static int vega20_thermal_set_temperature_range(struct pp_hwmgr *hwmgr,
+ struct PP_TemperatureRange *range)
+ {
++ struct phm_ppt_v3_information *pptable_information =
++ (struct phm_ppt_v3_information *)hwmgr->pptable;
+ struct amdgpu_device *adev = hwmgr->adev;
+- int low = VEGA20_THERMAL_MINIMUM_ALERT_TEMP *
+- PP_TEMPERATURE_UNITS_PER_CENTIGRADES;
+- int high = VEGA20_THERMAL_MAXIMUM_ALERT_TEMP *
+- PP_TEMPERATURE_UNITS_PER_CENTIGRADES;
++ int low = VEGA20_THERMAL_MINIMUM_ALERT_TEMP;
++ int high = VEGA20_THERMAL_MAXIMUM_ALERT_TEMP;
+ uint32_t val;
+
+- if (low < range->min)
+- low = range->min;
+- if (high > range->max)
+- high = range->max;
++ /* compare them in unit celsius degree */
++ if (low < range->min / PP_TEMPERATURE_UNITS_PER_CENTIGRADES)
++ low = range->min / PP_TEMPERATURE_UNITS_PER_CENTIGRADES;
++ if (high > pptable_information->us_software_shutdown_temp)
++ high = pptable_information->us_software_shutdown_temp;
+
+ if (low > high)
+ return -EINVAL;
+@@ -259,8 +260,8 @@ static int vega20_thermal_set_temperature_range(struct pp_hwmgr *hwmgr,
+
+ val = CGS_REG_SET_FIELD(val, THM_THERMAL_INT_CTRL, MAX_IH_CREDIT, 5);
+ val = CGS_REG_SET_FIELD(val, THM_THERMAL_INT_CTRL, THERM_IH_HW_ENA, 1);
+- val = CGS_REG_SET_FIELD(val, THM_THERMAL_INT_CTRL, DIG_THERM_INTH, (high / PP_TEMPERATURE_UNITS_PER_CENTIGRADES));
+- val = CGS_REG_SET_FIELD(val, THM_THERMAL_INT_CTRL, DIG_THERM_INTL, (low / PP_TEMPERATURE_UNITS_PER_CENTIGRADES));
++ val = CGS_REG_SET_FIELD(val, THM_THERMAL_INT_CTRL, DIG_THERM_INTH, high);
++ val = CGS_REG_SET_FIELD(val, THM_THERMAL_INT_CTRL, DIG_THERM_INTL, low);
+ val = val & (~THM_THERMAL_INT_CTRL__THERM_TRIGGER_MASK_MASK);
+
+ WREG32_SOC15(THM, 0, mmTHM_THERMAL_INT_CTRL, val);
+diff --git a/drivers/gpu/drm/arm/display/komeda/komeda_crtc.c b/drivers/gpu/drm/arm/display/komeda/komeda_crtc.c
+index 56bd938961eee..f33418d6e1a08 100644
+--- a/drivers/gpu/drm/arm/display/komeda/komeda_crtc.c
++++ b/drivers/gpu/drm/arm/display/komeda/komeda_crtc.c
+@@ -492,10 +492,8 @@ static void komeda_crtc_reset(struct drm_crtc *crtc)
+ crtc->state = NULL;
+
+ state = kzalloc(sizeof(*state), GFP_KERNEL);
+- if (state) {
+- crtc->state = &state->base;
+- crtc->state->crtc = crtc;
+- }
++ if (state)
++ __drm_atomic_helper_crtc_reset(crtc, &state->base);
+ }
+
+ static struct drm_crtc_state *
+@@ -616,7 +614,6 @@ static int komeda_crtc_add(struct komeda_kms_dev *kms,
+ return err;
+
+ drm_crtc_helper_add(crtc, &komeda_crtc_helper_funcs);
+- drm_crtc_vblank_reset(crtc);
+
+ crtc->port = kcrtc->master->of_output_port;
+
+diff --git a/drivers/gpu/drm/arm/malidp_drv.c b/drivers/gpu/drm/arm/malidp_drv.c
+index def8c9ffafcaf..a2a10bfbccac4 100644
+--- a/drivers/gpu/drm/arm/malidp_drv.c
++++ b/drivers/gpu/drm/arm/malidp_drv.c
+@@ -870,7 +870,6 @@ static int malidp_bind(struct device *dev)
+ drm->irq_enabled = true;
+
+ ret = drm_vblank_init(drm, drm->mode_config.num_crtc);
+- drm_crtc_vblank_reset(&malidp->crtc);
+ if (ret < 0) {
+ DRM_ERROR("failed to initialise vblank\n");
+ goto vblank_fail;
+diff --git a/drivers/gpu/drm/atmel-hlcdc/atmel_hlcdc_crtc.c b/drivers/gpu/drm/atmel-hlcdc/atmel_hlcdc_crtc.c
+index 10985134ce0ba..ce246b96330b7 100644
+--- a/drivers/gpu/drm/atmel-hlcdc/atmel_hlcdc_crtc.c
++++ b/drivers/gpu/drm/atmel-hlcdc/atmel_hlcdc_crtc.c
+@@ -411,10 +411,8 @@ static void atmel_hlcdc_crtc_reset(struct drm_crtc *crtc)
+ }
+
+ state = kzalloc(sizeof(*state), GFP_KERNEL);
+- if (state) {
+- crtc->state = &state->base;
+- crtc->state->crtc = crtc;
+- }
++ if (state)
++ __drm_atomic_helper_crtc_reset(crtc, &state->base);
+ }
+
+ static struct drm_crtc_state *
+@@ -528,7 +526,6 @@ int atmel_hlcdc_crtc_create(struct drm_device *dev)
+ }
+
+ drm_crtc_helper_add(&crtc->base, &lcdc_crtc_helper_funcs);
+- drm_crtc_vblank_reset(&crtc->base);
+
+ drm_mode_crtc_set_gamma_size(&crtc->base, ATMEL_HLCDC_CLUT_SIZE);
+ drm_crtc_enable_color_mgmt(&crtc->base, 0, false,
+diff --git a/drivers/gpu/drm/drm_atomic_helper.c b/drivers/gpu/drm/drm_atomic_helper.c
+index 85d163f16801f..b78e142a5620c 100644
+--- a/drivers/gpu/drm/drm_atomic_helper.c
++++ b/drivers/gpu/drm/drm_atomic_helper.c
+@@ -34,6 +34,7 @@
+ #include <drm/drm_bridge.h>
+ #include <drm/drm_damage_helper.h>
+ #include <drm/drm_device.h>
++#include <drm/drm_drv.h>
+ #include <drm/drm_plane_helper.h>
+ #include <drm/drm_print.h>
+ #include <drm/drm_self_refresh_helper.h>
+@@ -3105,7 +3106,7 @@ void drm_atomic_helper_shutdown(struct drm_device *dev)
+ if (ret)
+ DRM_ERROR("Disabling all crtc's during unload failed with %i\n", ret);
+
+- DRM_MODESET_LOCK_ALL_END(ctx, ret);
++ DRM_MODESET_LOCK_ALL_END(dev, ctx, ret);
+ }
+ EXPORT_SYMBOL(drm_atomic_helper_shutdown);
+
+@@ -3245,7 +3246,7 @@ struct drm_atomic_state *drm_atomic_helper_suspend(struct drm_device *dev)
+ }
+
+ unlock:
+- DRM_MODESET_LOCK_ALL_END(ctx, err);
++ DRM_MODESET_LOCK_ALL_END(dev, ctx, err);
+ if (err)
+ return ERR_PTR(err);
+
+@@ -3326,7 +3327,7 @@ int drm_atomic_helper_resume(struct drm_device *dev,
+
+ err = drm_atomic_helper_commit_duplicated_state(state, &ctx);
+
+- DRM_MODESET_LOCK_ALL_END(ctx, err);
++ DRM_MODESET_LOCK_ALL_END(dev, ctx, err);
+ drm_atomic_state_put(state);
+
+ return err;
+diff --git a/drivers/gpu/drm/drm_atomic_state_helper.c b/drivers/gpu/drm/drm_atomic_state_helper.c
+index 8fce6a115dfe3..9ad74045158ec 100644
+--- a/drivers/gpu/drm/drm_atomic_state_helper.c
++++ b/drivers/gpu/drm/drm_atomic_state_helper.c
+@@ -32,6 +32,7 @@
+ #include <drm/drm_device.h>
+ #include <drm/drm_plane.h>
+ #include <drm/drm_print.h>
++#include <drm/drm_vblank.h>
+ #include <drm/drm_writeback.h>
+
+ #include <linux/slab.h>
+@@ -93,6 +94,9 @@ __drm_atomic_helper_crtc_reset(struct drm_crtc *crtc,
+ if (crtc_state)
+ __drm_atomic_helper_crtc_state_reset(crtc_state, crtc);
+
++ if (drm_dev_has_vblank(crtc->dev))
++ drm_crtc_vblank_reset(crtc);
++
+ crtc->state = crtc_state;
+ }
+ EXPORT_SYMBOL(__drm_atomic_helper_crtc_reset);
+diff --git a/drivers/gpu/drm/drm_color_mgmt.c b/drivers/gpu/drm/drm_color_mgmt.c
+index c93123ff7c218..138ff34b31db5 100644
+--- a/drivers/gpu/drm/drm_color_mgmt.c
++++ b/drivers/gpu/drm/drm_color_mgmt.c
+@@ -294,7 +294,7 @@ int drm_mode_gamma_set_ioctl(struct drm_device *dev,
+ crtc->gamma_size, &ctx);
+
+ out:
+- DRM_MODESET_LOCK_ALL_END(ctx, ret);
++ DRM_MODESET_LOCK_ALL_END(dev, ctx, ret);
+ return ret;
+
+ }
+diff --git a/drivers/gpu/drm/drm_crtc.c b/drivers/gpu/drm/drm_crtc.c
+index 4936e1080e417..eb1c33e5d0f49 100644
+--- a/drivers/gpu/drm/drm_crtc.c
++++ b/drivers/gpu/drm/drm_crtc.c
+@@ -561,7 +561,6 @@ int drm_mode_setcrtc(struct drm_device *dev, void *data,
+ if (crtc_req->mode_valid && !drm_lease_held(file_priv, plane->base.id))
+ return -EACCES;
+
+- mutex_lock(&crtc->dev->mode_config.mutex);
+ DRM_MODESET_LOCK_ALL_BEGIN(dev, ctx,
+ DRM_MODESET_ACQUIRE_INTERRUPTIBLE, ret);
+
+@@ -728,8 +727,7 @@ out:
+ fb = NULL;
+ mode = NULL;
+
+- DRM_MODESET_LOCK_ALL_END(ctx, ret);
+- mutex_unlock(&crtc->dev->mode_config.mutex);
++ DRM_MODESET_LOCK_ALL_END(dev, ctx, ret);
+
+ return ret;
+ }
+diff --git a/drivers/gpu/drm/drm_dp_mst_topology.c b/drivers/gpu/drm/drm_dp_mst_topology.c
+index ffbd754a53825..954cd69117826 100644
+--- a/drivers/gpu/drm/drm_dp_mst_topology.c
++++ b/drivers/gpu/drm/drm_dp_mst_topology.c
+@@ -4993,8 +4993,8 @@ int drm_dp_mst_add_affected_dsc_crtcs(struct drm_atomic_state *state, struct drm
+
+ crtc = conn_state->crtc;
+
+- if (WARN_ON(!crtc))
+- return -EINVAL;
++ if (!crtc)
++ continue;
+
+ if (!drm_dp_mst_dsc_aux_for_port(pos->port))
+ continue;
+diff --git a/drivers/gpu/drm/drm_mode_object.c b/drivers/gpu/drm/drm_mode_object.c
+index 901b078abf40c..db05f386a709e 100644
+--- a/drivers/gpu/drm/drm_mode_object.c
++++ b/drivers/gpu/drm/drm_mode_object.c
+@@ -428,7 +428,7 @@ int drm_mode_obj_get_properties_ioctl(struct drm_device *dev, void *data,
+ out_unref:
+ drm_mode_object_put(obj);
+ out:
+- DRM_MODESET_LOCK_ALL_END(ctx, ret);
++ DRM_MODESET_LOCK_ALL_END(dev, ctx, ret);
+ return ret;
+ }
+
+@@ -470,7 +470,7 @@ static int set_property_legacy(struct drm_mode_object *obj,
+ break;
+ }
+ drm_property_change_valid_put(prop, ref);
+- DRM_MODESET_LOCK_ALL_END(ctx, ret);
++ DRM_MODESET_LOCK_ALL_END(dev, ctx, ret);
+
+ return ret;
+ }
+diff --git a/drivers/gpu/drm/drm_plane.c b/drivers/gpu/drm/drm_plane.c
+index 4af173ced3277..fdbafc2b81998 100644
+--- a/drivers/gpu/drm/drm_plane.c
++++ b/drivers/gpu/drm/drm_plane.c
+@@ -791,7 +791,7 @@ static int setplane_internal(struct drm_plane *plane,
+ crtc_x, crtc_y, crtc_w, crtc_h,
+ src_x, src_y, src_w, src_h, &ctx);
+
+- DRM_MODESET_LOCK_ALL_END(ctx, ret);
++ DRM_MODESET_LOCK_ALL_END(plane->dev, ctx, ret);
+
+ return ret;
+ }
+diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gpu.c b/drivers/gpu/drm/etnaviv/etnaviv_gpu.c
+index 4a512b062df8f..bb9a37d3fcff6 100644
+--- a/drivers/gpu/drm/etnaviv/etnaviv_gpu.c
++++ b/drivers/gpu/drm/etnaviv/etnaviv_gpu.c
+@@ -337,9 +337,16 @@ static void etnaviv_hw_identify(struct etnaviv_gpu *gpu)
+
+ gpu->identity.model = gpu_read(gpu, VIVS_HI_CHIP_MODEL);
+ gpu->identity.revision = gpu_read(gpu, VIVS_HI_CHIP_REV);
+- gpu->identity.product_id = gpu_read(gpu, VIVS_HI_CHIP_PRODUCT_ID);
+ gpu->identity.customer_id = gpu_read(gpu, VIVS_HI_CHIP_CUSTOMER_ID);
+- gpu->identity.eco_id = gpu_read(gpu, VIVS_HI_CHIP_ECO_ID);
++
++ /*
++ * Reading these two registers on GC600 rev 0x19 result in a
++ * unhandled fault: external abort on non-linefetch
++ */
++ if (!etnaviv_is_model_rev(gpu, GC600, 0x19)) {
++ gpu->identity.product_id = gpu_read(gpu, VIVS_HI_CHIP_PRODUCT_ID);
++ gpu->identity.eco_id = gpu_read(gpu, VIVS_HI_CHIP_ECO_ID);
++ }
+
+ /*
+ * !!!! HACK ALERT !!!!
+diff --git a/drivers/gpu/drm/etnaviv/etnaviv_sched.c b/drivers/gpu/drm/etnaviv/etnaviv_sched.c
+index 4e3e95dce6d87..cd46c882269cc 100644
+--- a/drivers/gpu/drm/etnaviv/etnaviv_sched.c
++++ b/drivers/gpu/drm/etnaviv/etnaviv_sched.c
+@@ -89,12 +89,15 @@ static void etnaviv_sched_timedout_job(struct drm_sched_job *sched_job)
+ u32 dma_addr;
+ int change;
+
++ /* block scheduler */
++ drm_sched_stop(&gpu->sched, sched_job);
++
+ /*
+ * If the GPU managed to complete this jobs fence, the timout is
+ * spurious. Bail out.
+ */
+ if (dma_fence_is_signaled(submit->out_fence))
+- return;
++ goto out_no_timeout;
+
+ /*
+ * If the GPU is still making forward progress on the front-end (which
+@@ -105,12 +108,9 @@ static void etnaviv_sched_timedout_job(struct drm_sched_job *sched_job)
+ change = dma_addr - gpu->hangcheck_dma_addr;
+ if (change < 0 || change > 16) {
+ gpu->hangcheck_dma_addr = dma_addr;
+- return;
++ goto out_no_timeout;
+ }
+
+- /* block scheduler */
+- drm_sched_stop(&gpu->sched, sched_job);
+-
+ if(sched_job)
+ drm_sched_increase_karma(sched_job);
+
+@@ -120,6 +120,7 @@ static void etnaviv_sched_timedout_job(struct drm_sched_job *sched_job)
+
+ drm_sched_resubmit_jobs(&gpu->sched);
+
++out_no_timeout:
+ /* restart scheduler after GPU is usable again */
+ drm_sched_start(&gpu->sched, true);
+ }
+diff --git a/drivers/gpu/drm/i915/i915_cmd_parser.c b/drivers/gpu/drm/i915/i915_cmd_parser.c
+index 372354d33f552..5ac4a999f05a6 100644
+--- a/drivers/gpu/drm/i915/i915_cmd_parser.c
++++ b/drivers/gpu/drm/i915/i915_cmd_parser.c
+@@ -1204,6 +1204,12 @@ static u32 *copy_batch(struct drm_i915_gem_object *dst_obj,
+ return dst;
+ }
+
++static inline bool cmd_desc_is(const struct drm_i915_cmd_descriptor * const desc,
++ const u32 cmd)
++{
++ return desc->cmd.value == (cmd & desc->cmd.mask);
++}
++
+ static bool check_cmd(const struct intel_engine_cs *engine,
+ const struct drm_i915_cmd_descriptor *desc,
+ const u32 *cmd, u32 length)
+@@ -1242,19 +1248,19 @@ static bool check_cmd(const struct intel_engine_cs *engine,
+ * allowed mask/value pair given in the whitelist entry.
+ */
+ if (reg->mask) {
+- if (desc->cmd.value == MI_LOAD_REGISTER_MEM) {
++ if (cmd_desc_is(desc, MI_LOAD_REGISTER_MEM)) {
+ DRM_DEBUG("CMD: Rejected LRM to masked register 0x%08X\n",
+ reg_addr);
+ return false;
+ }
+
+- if (desc->cmd.value == MI_LOAD_REGISTER_REG) {
++ if (cmd_desc_is(desc, MI_LOAD_REGISTER_REG)) {
+ DRM_DEBUG("CMD: Rejected LRR to masked register 0x%08X\n",
+ reg_addr);
+ return false;
+ }
+
+- if (desc->cmd.value == MI_LOAD_REGISTER_IMM(1) &&
++ if (cmd_desc_is(desc, MI_LOAD_REGISTER_IMM(1)) &&
+ (offset + 2 > length ||
+ (cmd[offset + 1] & reg->mask) != reg->value)) {
+ DRM_DEBUG("CMD: Rejected LRI to masked register 0x%08X\n",
+@@ -1478,7 +1484,7 @@ int intel_engine_cmd_parser(struct intel_engine_cs *engine,
+ break;
+ }
+
+- if (desc->cmd.value == MI_BATCH_BUFFER_START) {
++ if (cmd_desc_is(desc, MI_BATCH_BUFFER_START)) {
+ ret = check_bbstart(cmd, offset, length, batch_length,
+ batch_addr, shadow_addr,
+ jump_whitelist);
+diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.c b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
+index 5db06b5909438..e7b39f3ca33dc 100644
+--- a/drivers/gpu/drm/msm/adreno/adreno_gpu.c
++++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
+@@ -396,7 +396,7 @@ int adreno_hw_init(struct msm_gpu *gpu)
+ ring->next = ring->start;
+
+ /* reset completed fence seqno: */
+- ring->memptrs->fence = ring->seqno;
++ ring->memptrs->fence = ring->fctx->completed_fence;
+ ring->memptrs->rptr = 0;
+ }
+
+diff --git a/drivers/gpu/drm/msm/disp/mdp5/mdp5_crtc.c b/drivers/gpu/drm/msm/disp/mdp5/mdp5_crtc.c
+index b5fed67c4651f..0c54b7bc19010 100644
+--- a/drivers/gpu/drm/msm/disp/mdp5/mdp5_crtc.c
++++ b/drivers/gpu/drm/msm/disp/mdp5/mdp5_crtc.c
+@@ -1117,8 +1117,6 @@ static void mdp5_crtc_reset(struct drm_crtc *crtc)
+ mdp5_crtc_destroy_state(crtc, crtc->state);
+
+ __drm_atomic_helper_crtc_reset(crtc, &mdp5_cstate->base);
+-
+- drm_crtc_vblank_reset(crtc);
+ }
+
+ static const struct drm_crtc_funcs mdp5_crtc_funcs = {
+diff --git a/drivers/gpu/drm/nouveau/dispnv50/disp.c b/drivers/gpu/drm/nouveau/dispnv50/disp.c
+index 800b7757252e3..d2c2d102e7329 100644
+--- a/drivers/gpu/drm/nouveau/dispnv50/disp.c
++++ b/drivers/gpu/drm/nouveau/dispnv50/disp.c
+@@ -2160,8 +2160,10 @@ nv50_disp_atomic_commit(struct drm_device *dev,
+ int ret, i;
+
+ ret = pm_runtime_get_sync(dev->dev);
+- if (ret < 0 && ret != -EACCES)
++ if (ret < 0 && ret != -EACCES) {
++ pm_runtime_put_autosuspend(dev->dev);
+ return ret;
++ }
+
+ ret = drm_atomic_helper_setup_commit(state, nonblock);
+ if (ret)
+diff --git a/drivers/gpu/drm/nouveau/nouveau_connector.c b/drivers/gpu/drm/nouveau/nouveau_connector.c
+index 1b383ae0248f3..ef8ddbe445812 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_connector.c
++++ b/drivers/gpu/drm/nouveau/nouveau_connector.c
+@@ -572,8 +572,10 @@ nouveau_connector_detect(struct drm_connector *connector, bool force)
+ pm_runtime_get_noresume(dev->dev);
+ } else {
+ ret = pm_runtime_get_sync(dev->dev);
+- if (ret < 0 && ret != -EACCES)
++ if (ret < 0 && ret != -EACCES) {
++ pm_runtime_put_autosuspend(dev->dev);
+ return conn_status;
++ }
+ }
+
+ nv_encoder = nouveau_connector_ddc_detect(connector);
+diff --git a/drivers/gpu/drm/nouveau/nouveau_fbcon.c b/drivers/gpu/drm/nouveau/nouveau_fbcon.c
+index d5c23d1c20d88..44e515bbbb444 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_fbcon.c
++++ b/drivers/gpu/drm/nouveau/nouveau_fbcon.c
+@@ -189,8 +189,10 @@ nouveau_fbcon_open(struct fb_info *info, int user)
+ struct nouveau_fbdev *fbcon = info->par;
+ struct nouveau_drm *drm = nouveau_drm(fbcon->helper.dev);
+ int ret = pm_runtime_get_sync(drm->dev->dev);
+- if (ret < 0 && ret != -EACCES)
++ if (ret < 0 && ret != -EACCES) {
++ pm_runtime_put(drm->dev->dev);
+ return ret;
++ }
+ return 0;
+ }
+
+diff --git a/drivers/gpu/drm/omapdrm/omap_crtc.c b/drivers/gpu/drm/omapdrm/omap_crtc.c
+index fce7e944a280b..6d40914675dad 100644
+--- a/drivers/gpu/drm/omapdrm/omap_crtc.c
++++ b/drivers/gpu/drm/omapdrm/omap_crtc.c
+@@ -697,14 +697,16 @@ static int omap_crtc_atomic_get_property(struct drm_crtc *crtc,
+
+ static void omap_crtc_reset(struct drm_crtc *crtc)
+ {
++ struct omap_crtc_state *state;
++
+ if (crtc->state)
+ __drm_atomic_helper_crtc_destroy_state(crtc->state);
+
+ kfree(crtc->state);
+- crtc->state = kzalloc(sizeof(struct omap_crtc_state), GFP_KERNEL);
+
+- if (crtc->state)
+- crtc->state->crtc = crtc;
++ state = kzalloc(sizeof(*state), GFP_KERNEL);
++ if (state)
++ __drm_atomic_helper_crtc_reset(crtc, &state->base);
+ }
+
+ static struct drm_crtc_state *
+diff --git a/drivers/gpu/drm/omapdrm/omap_drv.c b/drivers/gpu/drm/omapdrm/omap_drv.c
+index cdafd7ef1c320..cc4d754ff8c02 100644
+--- a/drivers/gpu/drm/omapdrm/omap_drv.c
++++ b/drivers/gpu/drm/omapdrm/omap_drv.c
+@@ -595,7 +595,6 @@ static int omapdrm_init(struct omap_drm_private *priv, struct device *dev)
+ {
+ const struct soc_device_attribute *soc;
+ struct drm_device *ddev;
+- unsigned int i;
+ int ret;
+
+ DBG("%s", dev_name(dev));
+@@ -642,9 +641,6 @@ static int omapdrm_init(struct omap_drm_private *priv, struct device *dev)
+ goto err_cleanup_modeset;
+ }
+
+- for (i = 0; i < priv->num_pipes; i++)
+- drm_crtc_vblank_off(priv->pipes[i].crtc);
+-
+ omap_fbdev_init(ddev);
+
+ drm_kms_helper_poll_init(ddev);
+diff --git a/drivers/gpu/drm/radeon/radeon_connectors.c b/drivers/gpu/drm/radeon/radeon_connectors.c
+index fe12d9d91d7a5..e308344344425 100644
+--- a/drivers/gpu/drm/radeon/radeon_connectors.c
++++ b/drivers/gpu/drm/radeon/radeon_connectors.c
+@@ -879,8 +879,10 @@ radeon_lvds_detect(struct drm_connector *connector, bool force)
+
+ if (!drm_kms_helper_is_poll_worker()) {
+ r = pm_runtime_get_sync(connector->dev->dev);
+- if (r < 0)
++ if (r < 0) {
++ pm_runtime_put_autosuspend(connector->dev->dev);
+ return connector_status_disconnected;
++ }
+ }
+
+ if (encoder) {
+@@ -1025,8 +1027,10 @@ radeon_vga_detect(struct drm_connector *connector, bool force)
+
+ if (!drm_kms_helper_is_poll_worker()) {
+ r = pm_runtime_get_sync(connector->dev->dev);
+- if (r < 0)
++ if (r < 0) {
++ pm_runtime_put_autosuspend(connector->dev->dev);
+ return connector_status_disconnected;
++ }
+ }
+
+ encoder = radeon_best_single_encoder(connector);
+@@ -1163,8 +1167,10 @@ radeon_tv_detect(struct drm_connector *connector, bool force)
+
+ if (!drm_kms_helper_is_poll_worker()) {
+ r = pm_runtime_get_sync(connector->dev->dev);
+- if (r < 0)
++ if (r < 0) {
++ pm_runtime_put_autosuspend(connector->dev->dev);
+ return connector_status_disconnected;
++ }
+ }
+
+ encoder = radeon_best_single_encoder(connector);
+@@ -1247,8 +1253,10 @@ radeon_dvi_detect(struct drm_connector *connector, bool force)
+
+ if (!drm_kms_helper_is_poll_worker()) {
+ r = pm_runtime_get_sync(connector->dev->dev);
+- if (r < 0)
++ if (r < 0) {
++ pm_runtime_put_autosuspend(connector->dev->dev);
+ return connector_status_disconnected;
++ }
+ }
+
+ if (radeon_connector->detected_hpd_without_ddc) {
+@@ -1657,8 +1665,10 @@ radeon_dp_detect(struct drm_connector *connector, bool force)
+
+ if (!drm_kms_helper_is_poll_worker()) {
+ r = pm_runtime_get_sync(connector->dev->dev);
+- if (r < 0)
++ if (r < 0) {
++ pm_runtime_put_autosuspend(connector->dev->dev);
+ return connector_status_disconnected;
++ }
+ }
+
+ if (!force && radeon_check_hpd_status_unchanged(connector)) {
+diff --git a/drivers/gpu/drm/rcar-du/rcar_du_crtc.c b/drivers/gpu/drm/rcar-du/rcar_du_crtc.c
+index d73e88ddecd0f..fe86a3e677571 100644
+--- a/drivers/gpu/drm/rcar-du/rcar_du_crtc.c
++++ b/drivers/gpu/drm/rcar-du/rcar_du_crtc.c
+@@ -975,8 +975,7 @@ static void rcar_du_crtc_reset(struct drm_crtc *crtc)
+ state->crc.source = VSP1_DU_CRC_NONE;
+ state->crc.index = 0;
+
+- crtc->state = &state->state;
+- crtc->state->crtc = crtc;
++ __drm_atomic_helper_crtc_reset(crtc, &state->state);
+ }
+
+ static int rcar_du_crtc_enable_vblank(struct drm_crtc *crtc)
+@@ -1271,9 +1270,6 @@ int rcar_du_crtc_create(struct rcar_du_group *rgrp, unsigned int swindex,
+
+ drm_crtc_helper_add(crtc, &crtc_helper_funcs);
+
+- /* Start with vertical blanking interrupt reporting disabled. */
+- drm_crtc_vblank_off(crtc);
+-
+ /* Register the interrupt handler. */
+ if (rcar_du_has(rcdu, RCAR_DU_FEATURE_CRTC_IRQ_CLOCK)) {
+ /* The IRQ's are associated with the CRTC (sw)index. */
+diff --git a/drivers/gpu/drm/tegra/dc.c b/drivers/gpu/drm/tegra/dc.c
+index 04d6848d19fcf..da8b9983b7de0 100644
+--- a/drivers/gpu/drm/tegra/dc.c
++++ b/drivers/gpu/drm/tegra/dc.c
+@@ -1169,7 +1169,6 @@ static void tegra_crtc_reset(struct drm_crtc *crtc)
+ tegra_crtc_atomic_destroy_state(crtc, crtc->state);
+
+ __drm_atomic_helper_crtc_reset(crtc, &state->base);
+- drm_crtc_vblank_reset(crtc);
+ }
+
+ static struct drm_crtc_state *
+diff --git a/drivers/gpu/drm/tidss/tidss_crtc.c b/drivers/gpu/drm/tidss/tidss_crtc.c
+index 89a226912de85..4d01c4af61cd0 100644
+--- a/drivers/gpu/drm/tidss/tidss_crtc.c
++++ b/drivers/gpu/drm/tidss/tidss_crtc.c
+@@ -352,8 +352,7 @@ static void tidss_crtc_reset(struct drm_crtc *crtc)
+ return;
+ }
+
+- crtc->state = &tcrtc->base;
+- crtc->state->crtc = crtc;
++ __drm_atomic_helper_crtc_reset(crtc, &tcrtc->base);
+ }
+
+ static struct drm_crtc_state *tidss_crtc_duplicate_state(struct drm_crtc *crtc)
+diff --git a/drivers/gpu/drm/tidss/tidss_kms.c b/drivers/gpu/drm/tidss/tidss_kms.c
+index c0240f7e0b198..eec359f61a06d 100644
+--- a/drivers/gpu/drm/tidss/tidss_kms.c
++++ b/drivers/gpu/drm/tidss/tidss_kms.c
+@@ -253,7 +253,6 @@ static int tidss_dispc_modeset_init(struct tidss_device *tidss)
+ int tidss_modeset_init(struct tidss_device *tidss)
+ {
+ struct drm_device *ddev = &tidss->ddev;
+- unsigned int i;
+ int ret;
+
+ dev_dbg(tidss->dev, "%s\n", __func__);
+@@ -278,10 +277,6 @@ int tidss_modeset_init(struct tidss_device *tidss)
+ if (ret)
+ return ret;
+
+- /* Start with vertical blanking interrupt reporting disabled. */
+- for (i = 0; i < tidss->num_crtcs; ++i)
+- drm_crtc_vblank_reset(tidss->crtcs[i]);
+-
+ drm_mode_config_reset(ddev);
+
+ dev_dbg(tidss->dev, "%s done\n", __func__);
+diff --git a/drivers/gpu/drm/virtio/virtgpu_object.c b/drivers/gpu/drm/virtio/virtgpu_object.c
+index 6ccbd01cd888c..703b5cd517519 100644
+--- a/drivers/gpu/drm/virtio/virtgpu_object.c
++++ b/drivers/gpu/drm/virtio/virtgpu_object.c
+@@ -79,6 +79,7 @@ void virtio_gpu_cleanup_object(struct virtio_gpu_object *bo)
+ }
+
+ sg_free_table(shmem->pages);
++ kfree(shmem->pages);
+ shmem->pages = NULL;
+ drm_gem_shmem_unpin(&bo->base.base);
+ }
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_ldu.c b/drivers/gpu/drm/vmwgfx/vmwgfx_ldu.c
+index 009f1742bed51..c4017c7a24db6 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_ldu.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_ldu.c
+@@ -387,8 +387,6 @@ static int vmw_ldu_init(struct vmw_private *dev_priv, unsigned unit)
+ ldu->base.is_implicit = true;
+
+ /* Initialize primary plane */
+- vmw_du_plane_reset(primary);
+-
+ ret = drm_universal_plane_init(dev, &ldu->base.primary,
+ 0, &vmw_ldu_plane_funcs,
+ vmw_primary_plane_formats,
+@@ -402,8 +400,6 @@ static int vmw_ldu_init(struct vmw_private *dev_priv, unsigned unit)
+ drm_plane_helper_add(primary, &vmw_ldu_primary_plane_helper_funcs);
+
+ /* Initialize cursor plane */
+- vmw_du_plane_reset(cursor);
+-
+ ret = drm_universal_plane_init(dev, &ldu->base.cursor,
+ 0, &vmw_ldu_cursor_funcs,
+ vmw_cursor_plane_formats,
+@@ -417,7 +413,6 @@ static int vmw_ldu_init(struct vmw_private *dev_priv, unsigned unit)
+
+ drm_plane_helper_add(cursor, &vmw_ldu_cursor_plane_helper_funcs);
+
+- vmw_du_connector_reset(connector);
+ ret = drm_connector_init(dev, connector, &vmw_legacy_connector_funcs,
+ DRM_MODE_CONNECTOR_VIRTUAL);
+ if (ret) {
+@@ -445,7 +440,6 @@ static int vmw_ldu_init(struct vmw_private *dev_priv, unsigned unit)
+ goto err_free_encoder;
+ }
+
+- vmw_du_crtc_reset(crtc);
+ ret = drm_crtc_init_with_planes(dev, crtc, &ldu->base.primary,
+ &ldu->base.cursor,
+ &vmw_legacy_crtc_funcs, NULL);
+@@ -520,6 +514,8 @@ int vmw_kms_ldu_init_display(struct vmw_private *dev_priv)
+
+ dev_priv->active_display_unit = vmw_du_legacy;
+
++ drm_mode_config_reset(dev);
++
+ DRM_INFO("Legacy Display Unit initialized\n");
+
+ return 0;
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_scrn.c b/drivers/gpu/drm/vmwgfx/vmwgfx_scrn.c
+index 32a22e4eddb1a..4bf0f5ec4fc2d 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_scrn.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_scrn.c
+@@ -859,8 +859,6 @@ static int vmw_sou_init(struct vmw_private *dev_priv, unsigned unit)
+ sou->base.is_implicit = false;
+
+ /* Initialize primary plane */
+- vmw_du_plane_reset(primary);
+-
+ ret = drm_universal_plane_init(dev, &sou->base.primary,
+ 0, &vmw_sou_plane_funcs,
+ vmw_primary_plane_formats,
+@@ -875,8 +873,6 @@ static int vmw_sou_init(struct vmw_private *dev_priv, unsigned unit)
+ drm_plane_enable_fb_damage_clips(primary);
+
+ /* Initialize cursor plane */
+- vmw_du_plane_reset(cursor);
+-
+ ret = drm_universal_plane_init(dev, &sou->base.cursor,
+ 0, &vmw_sou_cursor_funcs,
+ vmw_cursor_plane_formats,
+@@ -890,7 +886,6 @@ static int vmw_sou_init(struct vmw_private *dev_priv, unsigned unit)
+
+ drm_plane_helper_add(cursor, &vmw_sou_cursor_plane_helper_funcs);
+
+- vmw_du_connector_reset(connector);
+ ret = drm_connector_init(dev, connector, &vmw_sou_connector_funcs,
+ DRM_MODE_CONNECTOR_VIRTUAL);
+ if (ret) {
+@@ -918,8 +913,6 @@ static int vmw_sou_init(struct vmw_private *dev_priv, unsigned unit)
+ goto err_free_encoder;
+ }
+
+-
+- vmw_du_crtc_reset(crtc);
+ ret = drm_crtc_init_with_planes(dev, crtc, &sou->base.primary,
+ &sou->base.cursor,
+ &vmw_screen_object_crtc_funcs, NULL);
+@@ -973,6 +966,8 @@ int vmw_kms_sou_init_display(struct vmw_private *dev_priv)
+
+ dev_priv->active_display_unit = vmw_du_screen_object;
+
++ drm_mode_config_reset(dev);
++
+ DRM_INFO("Screen Objects Display Unit initialized\n");
+
+ return 0;
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_stdu.c b/drivers/gpu/drm/vmwgfx/vmwgfx_stdu.c
+index 16b3856296889..cf3aafd00837c 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_stdu.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_stdu.c
+@@ -1738,8 +1738,6 @@ static int vmw_stdu_init(struct vmw_private *dev_priv, unsigned unit)
+ stdu->base.is_implicit = false;
+
+ /* Initialize primary plane */
+- vmw_du_plane_reset(primary);
+-
+ ret = drm_universal_plane_init(dev, primary,
+ 0, &vmw_stdu_plane_funcs,
+ vmw_primary_plane_formats,
+@@ -1754,8 +1752,6 @@ static int vmw_stdu_init(struct vmw_private *dev_priv, unsigned unit)
+ drm_plane_enable_fb_damage_clips(primary);
+
+ /* Initialize cursor plane */
+- vmw_du_plane_reset(cursor);
+-
+ ret = drm_universal_plane_init(dev, cursor,
+ 0, &vmw_stdu_cursor_funcs,
+ vmw_cursor_plane_formats,
+@@ -1769,8 +1765,6 @@ static int vmw_stdu_init(struct vmw_private *dev_priv, unsigned unit)
+
+ drm_plane_helper_add(cursor, &vmw_stdu_cursor_plane_helper_funcs);
+
+- vmw_du_connector_reset(connector);
+-
+ ret = drm_connector_init(dev, connector, &vmw_stdu_connector_funcs,
+ DRM_MODE_CONNECTOR_VIRTUAL);
+ if (ret) {
+@@ -1798,7 +1792,6 @@ static int vmw_stdu_init(struct vmw_private *dev_priv, unsigned unit)
+ goto err_free_encoder;
+ }
+
+- vmw_du_crtc_reset(crtc);
+ ret = drm_crtc_init_with_planes(dev, crtc, &stdu->base.primary,
+ &stdu->base.cursor,
+ &vmw_stdu_crtc_funcs, NULL);
+@@ -1894,6 +1887,8 @@ int vmw_kms_stdu_init_display(struct vmw_private *dev_priv)
+ }
+ }
+
++ drm_mode_config_reset(dev);
++
+ DRM_INFO("Screen Target Display device initialized\n");
+
+ return 0;
+diff --git a/drivers/gpu/host1x/job.c b/drivers/gpu/host1x/job.c
+index a10643aa89aa5..2ac5a99406d98 100644
+--- a/drivers/gpu/host1x/job.c
++++ b/drivers/gpu/host1x/job.c
+@@ -102,6 +102,7 @@ static unsigned int pin_job(struct host1x *host, struct host1x_job *job)
+ {
+ struct host1x_client *client = job->client;
+ struct device *dev = client->dev;
++ struct host1x_job_gather *g;
+ struct iommu_domain *domain;
+ unsigned int i;
+ int err;
+@@ -184,7 +185,6 @@ static unsigned int pin_job(struct host1x *host, struct host1x_job *job)
+ }
+
+ for (i = 0; i < job->num_gathers; i++) {
+- struct host1x_job_gather *g = &job->gathers[i];
+ size_t gather_size = 0;
+ struct scatterlist *sg;
+ struct sg_table *sgt;
+@@ -194,6 +194,7 @@ static unsigned int pin_job(struct host1x *host, struct host1x_job *job)
+ dma_addr_t *phys;
+ unsigned int j;
+
++ g = &job->gathers[i];
+ g->bo = host1x_bo_get(g->bo);
+ if (!g->bo) {
+ err = -EINVAL;
+@@ -213,7 +214,7 @@ static unsigned int pin_job(struct host1x *host, struct host1x_job *job)
+ sgt = host1x_bo_pin(host->dev, g->bo, phys);
+ if (IS_ERR(sgt)) {
+ err = PTR_ERR(sgt);
+- goto unpin;
++ goto put;
+ }
+
+ if (!IS_ENABLED(CONFIG_TEGRA_HOST1X_FIREWALL) && host->domain) {
+@@ -226,7 +227,7 @@ static unsigned int pin_job(struct host1x *host, struct host1x_job *job)
+ host->iova_end >> shift, true);
+ if (!alloc) {
+ err = -ENOMEM;
+- goto unpin;
++ goto put;
+ }
+
+ err = iommu_map_sg(host->domain,
+@@ -235,7 +236,7 @@ static unsigned int pin_job(struct host1x *host, struct host1x_job *job)
+ if (err == 0) {
+ __free_iova(&host->iova, alloc);
+ err = -EINVAL;
+- goto unpin;
++ goto put;
+ }
+
+ job->unpins[job->num_unpins].size = gather_size;
+@@ -245,7 +246,7 @@ static unsigned int pin_job(struct host1x *host, struct host1x_job *job)
+ DMA_TO_DEVICE);
+ if (!err) {
+ err = -ENOMEM;
+- goto unpin;
++ goto put;
+ }
+
+ job->unpins[job->num_unpins].dir = DMA_TO_DEVICE;
+@@ -263,6 +264,8 @@ static unsigned int pin_job(struct host1x *host, struct host1x_job *job)
+
+ return 0;
+
++put:
++ host1x_bo_put(g->bo);
+ unpin:
+ host1x_job_unpin(job);
+ return err;
+diff --git a/drivers/hid/hid-ids.h b/drivers/hid/hid-ids.h
+index 6f370e020feb3..7cfa9785bfbb0 100644
+--- a/drivers/hid/hid-ids.h
++++ b/drivers/hid/hid-ids.h
+@@ -773,6 +773,7 @@
+ #define USB_DEVICE_ID_LOGITECH_G27_WHEEL 0xc29b
+ #define USB_DEVICE_ID_LOGITECH_WII_WHEEL 0xc29c
+ #define USB_DEVICE_ID_LOGITECH_ELITE_KBD 0xc30a
++#define USB_DEVICE_ID_LOGITECH_GROUP_AUDIO 0x0882
+ #define USB_DEVICE_ID_S510_RECEIVER 0xc50c
+ #define USB_DEVICE_ID_S510_RECEIVER_2 0xc517
+ #define USB_DEVICE_ID_LOGITECH_CORDLESS_DESKTOP_LX500 0xc512
+diff --git a/drivers/hid/hid-quirks.c b/drivers/hid/hid-quirks.c
+index 934fc0a798d4d..c242150d35a3a 100644
+--- a/drivers/hid/hid-quirks.c
++++ b/drivers/hid/hid-quirks.c
+@@ -179,6 +179,7 @@ static const struct hid_device_id hid_quirks[] = {
+ { HID_USB_DEVICE(USB_VENDOR_ID_WISEGROUP_LTD2, USB_DEVICE_ID_SMARTJOY_DUAL_PLUS), HID_QUIRK_NOGET | HID_QUIRK_MULTI_INPUT },
+ { HID_USB_DEVICE(USB_VENDOR_ID_WISEGROUP, USB_DEVICE_ID_QUAD_USB_JOYPAD), HID_QUIRK_NOGET | HID_QUIRK_MULTI_INPUT },
+ { HID_USB_DEVICE(USB_VENDOR_ID_XIN_MO, USB_DEVICE_ID_XIN_MO_DUAL_ARCADE), HID_QUIRK_MULTI_INPUT },
++ { HID_USB_DEVICE(USB_VENDOR_ID_LOGITECH, USB_DEVICE_ID_LOGITECH_GROUP_AUDIO), HID_QUIRK_NOGET },
+
+ { 0 }
+ };
+diff --git a/drivers/hid/i2c-hid/i2c-hid-core.c b/drivers/hid/i2c-hid/i2c-hid-core.c
+index 294c84e136d72..dbd04492825d4 100644
+--- a/drivers/hid/i2c-hid/i2c-hid-core.c
++++ b/drivers/hid/i2c-hid/i2c-hid-core.c
+@@ -420,6 +420,19 @@ static int i2c_hid_set_power(struct i2c_client *client, int power_state)
+ dev_err(&client->dev, "failed to change power setting.\n");
+
+ set_pwr_exit:
++
++ /*
++ * The HID over I2C specification states that if a DEVICE needs time
++ * after the PWR_ON request, it should utilise CLOCK stretching.
++ * However, it has been observered that the Windows driver provides a
++ * 1ms sleep between the PWR_ON and RESET requests.
++ * According to Goodix Windows even waits 60 ms after (other?)
++ * PWR_ON requests. Testing has confirmed that several devices
++ * will not work properly without a delay after a PWR_ON request.
++ */
++ if (!ret && power_state == I2C_HID_PWR_ON)
++ msleep(60);
++
+ return ret;
+ }
+
+@@ -441,15 +454,6 @@ static int i2c_hid_hwreset(struct i2c_client *client)
+ if (ret)
+ goto out_unlock;
+
+- /*
+- * The HID over I2C specification states that if a DEVICE needs time
+- * after the PWR_ON request, it should utilise CLOCK stretching.
+- * However, it has been observered that the Windows driver provides a
+- * 1ms sleep between the PWR_ON and RESET requests and that some devices
+- * rely on this.
+- */
+- usleep_range(1000, 5000);
+-
+ i2c_hid_dbg(ihid, "resetting...\n");
+
+ ret = i2c_hid_command(client, &hid_reset_cmd, NULL, 0);
+diff --git a/drivers/hid/usbhid/hiddev.c b/drivers/hid/usbhid/hiddev.c
+index 4140dea693e90..4f97e6c120595 100644
+--- a/drivers/hid/usbhid/hiddev.c
++++ b/drivers/hid/usbhid/hiddev.c
+@@ -519,12 +519,16 @@ static noinline int hiddev_ioctl_usage(struct hiddev *hiddev, unsigned int cmd,
+
+ switch (cmd) {
+ case HIDIOCGUSAGE:
++ if (uref->usage_index >= field->report_count)
++ goto inval;
+ uref->value = field->value[uref->usage_index];
+ if (copy_to_user(user_arg, uref, sizeof(*uref)))
+ goto fault;
+ goto goodreturn;
+
+ case HIDIOCSUSAGE:
++ if (uref->usage_index >= field->report_count)
++ goto inval;
+ field->value[uref->usage_index] = uref->value;
+ goto goodreturn;
+
+diff --git a/drivers/hwmon/gsc-hwmon.c b/drivers/hwmon/gsc-hwmon.c
+index 2137bc65829d3..35337922aa1bd 100644
+--- a/drivers/hwmon/gsc-hwmon.c
++++ b/drivers/hwmon/gsc-hwmon.c
+@@ -172,6 +172,7 @@ gsc_hwmon_read(struct device *dev, enum hwmon_sensor_types type, u32 attr,
+ case mode_temperature:
+ if (tmp > 0x8000)
+ tmp -= 0xffff;
++ tmp *= 100; /* convert to millidegrees celsius */
+ break;
+ case mode_voltage_raw:
+ tmp = clamp_val(tmp, 0, BIT(GSC_HWMON_RESOLUTION));
+diff --git a/drivers/hwmon/nct7904.c b/drivers/hwmon/nct7904.c
+index b0425694f7022..242ff8bee78dd 100644
+--- a/drivers/hwmon/nct7904.c
++++ b/drivers/hwmon/nct7904.c
+@@ -231,7 +231,7 @@ static int nct7904_read_fan(struct device *dev, u32 attr, int channel,
+ if (ret < 0)
+ return ret;
+ cnt = ((ret & 0xff00) >> 3) | (ret & 0x1f);
+- if (cnt == 0x1fff)
++ if (cnt == 0 || cnt == 0x1fff)
+ rpm = 0;
+ else
+ rpm = 1350000 / cnt;
+@@ -243,7 +243,7 @@ static int nct7904_read_fan(struct device *dev, u32 attr, int channel,
+ if (ret < 0)
+ return ret;
+ cnt = ((ret & 0xff00) >> 3) | (ret & 0x1f);
+- if (cnt == 0x1fff)
++ if (cnt == 0 || cnt == 0x1fff)
+ rpm = 0;
+ else
+ rpm = 1350000 / cnt;
+diff --git a/drivers/i2c/busses/i2c-i801.c b/drivers/i2c/busses/i2c-i801.c
+index fea644921a768..f206e28af5831 100644
+--- a/drivers/i2c/busses/i2c-i801.c
++++ b/drivers/i2c/busses/i2c-i801.c
+@@ -67,6 +67,7 @@
+ * Comet Lake-H (PCH) 0x06a3 32 hard yes yes yes
+ * Elkhart Lake (PCH) 0x4b23 32 hard yes yes yes
+ * Tiger Lake-LP (PCH) 0xa0a3 32 hard yes yes yes
++ * Tiger Lake-H (PCH) 0x43a3 32 hard yes yes yes
+ * Jasper Lake (SOC) 0x4da3 32 hard yes yes yes
+ * Comet Lake-V (PCH) 0xa3a3 32 hard yes yes yes
+ *
+@@ -221,6 +222,7 @@
+ #define PCI_DEVICE_ID_INTEL_GEMINILAKE_SMBUS 0x31d4
+ #define PCI_DEVICE_ID_INTEL_ICELAKE_LP_SMBUS 0x34a3
+ #define PCI_DEVICE_ID_INTEL_5_3400_SERIES_SMBUS 0x3b30
++#define PCI_DEVICE_ID_INTEL_TIGERLAKE_H_SMBUS 0x43a3
+ #define PCI_DEVICE_ID_INTEL_ELKHART_LAKE_SMBUS 0x4b23
+ #define PCI_DEVICE_ID_INTEL_JASPER_LAKE_SMBUS 0x4da3
+ #define PCI_DEVICE_ID_INTEL_BROXTON_SMBUS 0x5ad4
+@@ -1074,6 +1076,7 @@ static const struct pci_device_id i801_ids[] = {
+ { PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_COMETLAKE_V_SMBUS) },
+ { PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_ELKHART_LAKE_SMBUS) },
+ { PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_TIGERLAKE_LP_SMBUS) },
++ { PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_TIGERLAKE_H_SMBUS) },
+ { PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_JASPER_LAKE_SMBUS) },
+ { 0, }
+ };
+@@ -1748,6 +1751,7 @@ static int i801_probe(struct pci_dev *dev, const struct pci_device_id *id)
+ case PCI_DEVICE_ID_INTEL_COMETLAKE_H_SMBUS:
+ case PCI_DEVICE_ID_INTEL_ELKHART_LAKE_SMBUS:
+ case PCI_DEVICE_ID_INTEL_TIGERLAKE_LP_SMBUS:
++ case PCI_DEVICE_ID_INTEL_TIGERLAKE_H_SMBUS:
+ case PCI_DEVICE_ID_INTEL_JASPER_LAKE_SMBUS:
+ priv->features |= FEATURE_BLOCK_PROC;
+ priv->features |= FEATURE_I2C_BLOCK_READ;
+diff --git a/drivers/i2c/busses/i2c-rcar.c b/drivers/i2c/busses/i2c-rcar.c
+index 9e883474db8ce..c7c543483b08c 100644
+--- a/drivers/i2c/busses/i2c-rcar.c
++++ b/drivers/i2c/busses/i2c-rcar.c
+@@ -590,6 +590,7 @@ static bool rcar_i2c_slave_irq(struct rcar_i2c_priv *priv)
+ /* master sent stop */
+ if (ssr_filtered & SSR) {
+ i2c_slave_event(priv->slave, I2C_SLAVE_STOP, &value);
++ rcar_i2c_write(priv, ICSCR, SIE | SDBS); /* clear our NACK */
+ rcar_i2c_write(priv, ICSIER, SAR);
+ rcar_i2c_write(priv, ICSSR, ~SSR & 0xff);
+ }
+diff --git a/drivers/i2c/i2c-core-base.c b/drivers/i2c/i2c-core-base.c
+index 26f03a14a4781..4f09d4c318287 100644
+--- a/drivers/i2c/i2c-core-base.c
++++ b/drivers/i2c/i2c-core-base.c
+@@ -354,7 +354,7 @@ static int i2c_device_probe(struct device *dev)
+ * or ACPI ID table is supplied for the probing device.
+ */
+ if (!driver->id_table &&
+- !i2c_acpi_match_device(dev->driver->acpi_match_table, client) &&
++ !acpi_driver_match_device(dev, dev->driver) &&
+ !i2c_of_match_device(dev->driver->of_match_table, client)) {
+ status = -ENODEV;
+ goto put_sync_adapter;
+diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
+index 4959f5df21bd0..5141d49a046ba 100644
+--- a/drivers/iommu/dma-iommu.c
++++ b/drivers/iommu/dma-iommu.c
+@@ -1035,8 +1035,8 @@ static void *iommu_dma_alloc(struct device *dev, size_t size,
+
+ if (IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) &&
+ !gfpflags_allow_blocking(gfp) && !coherent)
+- cpu_addr = dma_alloc_from_pool(dev, PAGE_ALIGN(size), &page,
+- gfp);
++ page = dma_alloc_from_pool(dev, PAGE_ALIGN(size), &cpu_addr,
++ gfp, NULL);
+ else
+ cpu_addr = iommu_dma_alloc_pages(dev, size, &page, gfp, attrs);
+ if (!cpu_addr)
+diff --git a/drivers/iommu/iova.c b/drivers/iommu/iova.c
+index 49fc01f2a28d4..45a251da54537 100644
+--- a/drivers/iommu/iova.c
++++ b/drivers/iommu/iova.c
+@@ -811,7 +811,9 @@ iova_magazine_free_pfns(struct iova_magazine *mag, struct iova_domain *iovad)
+ for (i = 0 ; i < mag->size; ++i) {
+ struct iova *iova = private_find_iova(iovad, mag->pfns[i]);
+
+- BUG_ON(!iova);
++ if (WARN_ON(!iova))
++ continue;
++
+ private_free_iova(iovad, iova);
+ }
+
+diff --git a/drivers/irqchip/irq-stm32-exti.c b/drivers/irqchip/irq-stm32-exti.c
+index faa8482c8246d..4dd8a5532f893 100644
+--- a/drivers/irqchip/irq-stm32-exti.c
++++ b/drivers/irqchip/irq-stm32-exti.c
+@@ -431,6 +431,16 @@ static void stm32_irq_ack(struct irq_data *d)
+ irq_gc_unlock(gc);
+ }
+
++/* directly set the target bit without reading first. */
++static inline void stm32_exti_write_bit(struct irq_data *d, u32 reg)
++{
++ struct stm32_exti_chip_data *chip_data = irq_data_get_irq_chip_data(d);
++ void __iomem *base = chip_data->host_data->base;
++ u32 val = BIT(d->hwirq % IRQS_PER_BANK);
++
++ writel_relaxed(val, base + reg);
++}
++
+ static inline u32 stm32_exti_set_bit(struct irq_data *d, u32 reg)
+ {
+ struct stm32_exti_chip_data *chip_data = irq_data_get_irq_chip_data(d);
+@@ -464,9 +474,9 @@ static void stm32_exti_h_eoi(struct irq_data *d)
+
+ raw_spin_lock(&chip_data->rlock);
+
+- stm32_exti_set_bit(d, stm32_bank->rpr_ofst);
++ stm32_exti_write_bit(d, stm32_bank->rpr_ofst);
+ if (stm32_bank->fpr_ofst != UNDEF_REG)
+- stm32_exti_set_bit(d, stm32_bank->fpr_ofst);
++ stm32_exti_write_bit(d, stm32_bank->fpr_ofst);
+
+ raw_spin_unlock(&chip_data->rlock);
+
+diff --git a/drivers/media/cec/core/cec-api.c b/drivers/media/cec/core/cec-api.c
+index 17d1cb2e5f976..f922a2196b2b7 100644
+--- a/drivers/media/cec/core/cec-api.c
++++ b/drivers/media/cec/core/cec-api.c
+@@ -147,7 +147,13 @@ static long cec_adap_g_log_addrs(struct cec_adapter *adap,
+ struct cec_log_addrs log_addrs;
+
+ mutex_lock(&adap->lock);
+- log_addrs = adap->log_addrs;
++ /*
++ * We use memcpy here instead of assignment since there is a
++ * hole at the end of struct cec_log_addrs that an assignment
++ * might ignore. So when we do copy_to_user() we could leak
++ * one byte of memory.
++ */
++ memcpy(&log_addrs, &adap->log_addrs, sizeof(log_addrs));
+ if (!adap->is_configured)
+ memset(log_addrs.log_addr, CEC_LOG_ADDR_INVALID,
+ sizeof(log_addrs.log_addr));
+diff --git a/drivers/media/i2c/imx290.c b/drivers/media/i2c/imx290.c
+index f7678e5a5d879..157a0ed0a8856 100644
+--- a/drivers/media/i2c/imx290.c
++++ b/drivers/media/i2c/imx290.c
+@@ -628,7 +628,7 @@ static int imx290_power_on(struct device *dev)
+ }
+
+ usleep_range(1, 2);
+- gpiod_set_value_cansleep(imx290->rst_gpio, 1);
++ gpiod_set_value_cansleep(imx290->rst_gpio, 0);
+ usleep_range(30000, 31000);
+
+ return 0;
+@@ -641,7 +641,7 @@ static int imx290_power_off(struct device *dev)
+ struct imx290 *imx290 = to_imx290(sd);
+
+ clk_disable_unprepare(imx290->xclk);
+- gpiod_set_value_cansleep(imx290->rst_gpio, 0);
++ gpiod_set_value_cansleep(imx290->rst_gpio, 1);
+ regulator_bulk_disable(IMX290_NUM_SUPPLIES, imx290->supplies);
+
+ return 0;
+@@ -760,7 +760,8 @@ static int imx290_probe(struct i2c_client *client)
+ goto free_err;
+ }
+
+- imx290->rst_gpio = devm_gpiod_get_optional(dev, "reset", GPIOD_ASIS);
++ imx290->rst_gpio = devm_gpiod_get_optional(dev, "reset",
++ GPIOD_OUT_HIGH);
+ if (IS_ERR(imx290->rst_gpio)) {
+ dev_err(dev, "Cannot get reset gpio\n");
+ ret = PTR_ERR(imx290->rst_gpio);
+diff --git a/drivers/media/pci/ttpci/av7110.c b/drivers/media/pci/ttpci/av7110.c
+index d0cdee1c6eb0b..bf36b1e22b635 100644
+--- a/drivers/media/pci/ttpci/av7110.c
++++ b/drivers/media/pci/ttpci/av7110.c
+@@ -406,14 +406,15 @@ static void debiirq(unsigned long cookie)
+ case DATA_CI_GET:
+ {
+ u8 *data = av7110->debi_virt;
++ u8 data_0 = data[0];
+
+- if ((data[0] < 2) && data[2] == 0xff) {
++ if (data_0 < 2 && data[2] == 0xff) {
+ int flags = 0;
+ if (data[5] > 0)
+ flags |= CA_CI_MODULE_PRESENT;
+ if (data[5] > 5)
+ flags |= CA_CI_MODULE_READY;
+- av7110->ci_slot[data[0]].flags = flags;
++ av7110->ci_slot[data_0].flags = flags;
+ } else
+ ci_get_data(&av7110->ci_rbuffer,
+ av7110->debi_virt,
+diff --git a/drivers/media/platform/davinci/vpif_capture.c b/drivers/media/platform/davinci/vpif_capture.c
+index d9ec439faefa6..72a0e94e2e21a 100644
+--- a/drivers/media/platform/davinci/vpif_capture.c
++++ b/drivers/media/platform/davinci/vpif_capture.c
+@@ -1482,8 +1482,6 @@ probe_out:
+ /* Unregister video device */
+ video_unregister_device(&ch->video_dev);
+ }
+- kfree(vpif_obj.sd);
+- v4l2_device_unregister(&vpif_obj.v4l2_dev);
+
+ return err;
+ }
+diff --git a/drivers/mfd/intel-lpss-pci.c b/drivers/mfd/intel-lpss-pci.c
+index 046222684b8b2..9a58032f818ae 100644
+--- a/drivers/mfd/intel-lpss-pci.c
++++ b/drivers/mfd/intel-lpss-pci.c
+@@ -201,6 +201,9 @@ static const struct pci_device_id intel_lpss_pci_ids[] = {
+ { PCI_VDEVICE(INTEL, 0x1ac4), (kernel_ulong_t)&bxt_info },
+ { PCI_VDEVICE(INTEL, 0x1ac6), (kernel_ulong_t)&bxt_info },
+ { PCI_VDEVICE(INTEL, 0x1aee), (kernel_ulong_t)&bxt_uart_info },
++ /* EBG */
++ { PCI_VDEVICE(INTEL, 0x1bad), (kernel_ulong_t)&bxt_uart_info },
++ { PCI_VDEVICE(INTEL, 0x1bae), (kernel_ulong_t)&bxt_uart_info },
+ /* GLK */
+ { PCI_VDEVICE(INTEL, 0x31ac), (kernel_ulong_t)&glk_i2c_info },
+ { PCI_VDEVICE(INTEL, 0x31ae), (kernel_ulong_t)&glk_i2c_info },
+@@ -230,6 +233,22 @@ static const struct pci_device_id intel_lpss_pci_ids[] = {
+ { PCI_VDEVICE(INTEL, 0x34ea), (kernel_ulong_t)&bxt_i2c_info },
+ { PCI_VDEVICE(INTEL, 0x34eb), (kernel_ulong_t)&bxt_i2c_info },
+ { PCI_VDEVICE(INTEL, 0x34fb), (kernel_ulong_t)&spt_info },
++ /* TGL-H */
++ { PCI_VDEVICE(INTEL, 0x43a7), (kernel_ulong_t)&bxt_uart_info },
++ { PCI_VDEVICE(INTEL, 0x43a8), (kernel_ulong_t)&bxt_uart_info },
++ { PCI_VDEVICE(INTEL, 0x43a9), (kernel_ulong_t)&bxt_uart_info },
++ { PCI_VDEVICE(INTEL, 0x43aa), (kernel_ulong_t)&bxt_info },
++ { PCI_VDEVICE(INTEL, 0x43ab), (kernel_ulong_t)&bxt_info },
++ { PCI_VDEVICE(INTEL, 0x43ad), (kernel_ulong_t)&bxt_i2c_info },
++ { PCI_VDEVICE(INTEL, 0x43ae), (kernel_ulong_t)&bxt_i2c_info },
++ { PCI_VDEVICE(INTEL, 0x43d8), (kernel_ulong_t)&bxt_i2c_info },
++ { PCI_VDEVICE(INTEL, 0x43da), (kernel_ulong_t)&bxt_uart_info },
++ { PCI_VDEVICE(INTEL, 0x43e8), (kernel_ulong_t)&bxt_i2c_info },
++ { PCI_VDEVICE(INTEL, 0x43e9), (kernel_ulong_t)&bxt_i2c_info },
++ { PCI_VDEVICE(INTEL, 0x43ea), (kernel_ulong_t)&bxt_i2c_info },
++ { PCI_VDEVICE(INTEL, 0x43eb), (kernel_ulong_t)&bxt_i2c_info },
++ { PCI_VDEVICE(INTEL, 0x43fb), (kernel_ulong_t)&bxt_info },
++ { PCI_VDEVICE(INTEL, 0x43fd), (kernel_ulong_t)&bxt_info },
+ /* EHL */
+ { PCI_VDEVICE(INTEL, 0x4b28), (kernel_ulong_t)&bxt_uart_info },
+ { PCI_VDEVICE(INTEL, 0x4b29), (kernel_ulong_t)&bxt_uart_info },
+diff --git a/drivers/misc/habanalabs/debugfs.c b/drivers/misc/habanalabs/debugfs.c
+index 0bc036e01ee8d..6c2b9cf45e831 100644
+--- a/drivers/misc/habanalabs/debugfs.c
++++ b/drivers/misc/habanalabs/debugfs.c
+@@ -19,7 +19,7 @@
+ static struct dentry *hl_debug_root;
+
+ static int hl_debugfs_i2c_read(struct hl_device *hdev, u8 i2c_bus, u8 i2c_addr,
+- u8 i2c_reg, u32 *val)
++ u8 i2c_reg, long *val)
+ {
+ struct armcp_packet pkt;
+ int rc;
+@@ -36,7 +36,7 @@ static int hl_debugfs_i2c_read(struct hl_device *hdev, u8 i2c_bus, u8 i2c_addr,
+ pkt.i2c_reg = i2c_reg;
+
+ rc = hdev->asic_funcs->send_cpu_message(hdev, (u32 *) &pkt, sizeof(pkt),
+- 0, (long *) val);
++ 0, val);
+
+ if (rc)
+ dev_err(hdev->dev, "Failed to read from I2C, error %d\n", rc);
+@@ -827,7 +827,7 @@ static ssize_t hl_i2c_data_read(struct file *f, char __user *buf,
+ struct hl_dbg_device_entry *entry = file_inode(f)->i_private;
+ struct hl_device *hdev = entry->hdev;
+ char tmp_buf[32];
+- u32 val;
++ long val;
+ ssize_t rc;
+
+ if (*ppos)
+@@ -842,7 +842,7 @@ static ssize_t hl_i2c_data_read(struct file *f, char __user *buf,
+ return rc;
+ }
+
+- sprintf(tmp_buf, "0x%02x\n", val);
++ sprintf(tmp_buf, "0x%02lx\n", val);
+ rc = simple_read_from_buffer(buf, count, ppos, tmp_buf,
+ strlen(tmp_buf));
+
+diff --git a/drivers/mmc/host/sdhci-of-arasan.c b/drivers/mmc/host/sdhci-of-arasan.c
+index fb26e743e1fd4..d0a80bfb953b0 100644
+--- a/drivers/mmc/host/sdhci-of-arasan.c
++++ b/drivers/mmc/host/sdhci-of-arasan.c
+@@ -1025,7 +1025,6 @@ static void arasan_dt_read_clk_phase(struct device *dev,
+ static void arasan_dt_parse_clk_phases(struct device *dev,
+ struct sdhci_arasan_clk_data *clk_data)
+ {
+- int *iclk_phase, *oclk_phase;
+ u32 mio_bank = 0;
+ int i;
+
+@@ -1037,28 +1036,32 @@ static void arasan_dt_parse_clk_phases(struct device *dev,
+ clk_data->set_clk_delays = sdhci_arasan_set_clk_delays;
+
+ if (of_device_is_compatible(dev->of_node, "xlnx,zynqmp-8.9a")) {
+- iclk_phase = (int [MMC_TIMING_MMC_HS400 + 1]) ZYNQMP_ICLK_PHASE;
+- oclk_phase = (int [MMC_TIMING_MMC_HS400 + 1]) ZYNQMP_OCLK_PHASE;
++ u32 zynqmp_iclk_phase[MMC_TIMING_MMC_HS400 + 1] =
++ ZYNQMP_ICLK_PHASE;
++ u32 zynqmp_oclk_phase[MMC_TIMING_MMC_HS400 + 1] =
++ ZYNQMP_OCLK_PHASE;
+
+ of_property_read_u32(dev->of_node, "xlnx,mio-bank", &mio_bank);
+ if (mio_bank == 2) {
+- oclk_phase[MMC_TIMING_UHS_SDR104] = 90;
+- oclk_phase[MMC_TIMING_MMC_HS200] = 90;
++ zynqmp_oclk_phase[MMC_TIMING_UHS_SDR104] = 90;
++ zynqmp_oclk_phase[MMC_TIMING_MMC_HS200] = 90;
+ }
+
+ for (i = 0; i <= MMC_TIMING_MMC_HS400; i++) {
+- clk_data->clk_phase_in[i] = iclk_phase[i];
+- clk_data->clk_phase_out[i] = oclk_phase[i];
++ clk_data->clk_phase_in[i] = zynqmp_iclk_phase[i];
++ clk_data->clk_phase_out[i] = zynqmp_oclk_phase[i];
+ }
+ }
+
+ if (of_device_is_compatible(dev->of_node, "xlnx,versal-8.9a")) {
+- iclk_phase = (int [MMC_TIMING_MMC_HS400 + 1]) VERSAL_ICLK_PHASE;
+- oclk_phase = (int [MMC_TIMING_MMC_HS400 + 1]) VERSAL_OCLK_PHASE;
++ u32 versal_iclk_phase[MMC_TIMING_MMC_HS400 + 1] =
++ VERSAL_ICLK_PHASE;
++ u32 versal_oclk_phase[MMC_TIMING_MMC_HS400 + 1] =
++ VERSAL_OCLK_PHASE;
+
+ for (i = 0; i <= MMC_TIMING_MMC_HS400; i++) {
+- clk_data->clk_phase_in[i] = iclk_phase[i];
+- clk_data->clk_phase_out[i] = oclk_phase[i];
++ clk_data->clk_phase_in[i] = versal_iclk_phase[i];
++ clk_data->clk_phase_out[i] = versal_oclk_phase[i];
+ }
+ }
+
+diff --git a/drivers/net/ethernet/freescale/gianfar.c b/drivers/net/ethernet/freescale/gianfar.c
+index b513b8c5c3b5e..41dd3d0f34524 100644
+--- a/drivers/net/ethernet/freescale/gianfar.c
++++ b/drivers/net/ethernet/freescale/gianfar.c
+@@ -750,8 +750,10 @@ static int gfar_of_init(struct platform_device *ofdev, struct net_device **pdev)
+ continue;
+
+ err = gfar_parse_group(child, priv, model);
+- if (err)
++ if (err) {
++ of_node_put(child);
+ goto err_grp_init;
++ }
+ }
+ } else { /* SQ_SG_MODE */
+ err = gfar_parse_group(np, priv, model);
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_fcoe.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_fcoe.c
+index ec7a11d13fdc0..9e70b9a674409 100644
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_fcoe.c
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_fcoe.c
+@@ -192,7 +192,7 @@ static int ixgbe_fcoe_ddp_setup(struct net_device *netdev, u16 xid,
+ }
+
+ /* alloc the udl from per cpu ddp pool */
+- ddp->udl = dma_pool_alloc(ddp_pool->pool, GFP_KERNEL, &ddp->udp);
++ ddp->udl = dma_pool_alloc(ddp_pool->pool, GFP_ATOMIC, &ddp->udp);
+ if (!ddp->udl) {
+ e_err(drv, "failed allocated ddp context\n");
+ goto out_noddp_unmap;
+diff --git a/drivers/net/macvlan.c b/drivers/net/macvlan.c
+index 4942f6112e51f..5da04e9979894 100644
+--- a/drivers/net/macvlan.c
++++ b/drivers/net/macvlan.c
+@@ -1269,6 +1269,9 @@ static void macvlan_port_destroy(struct net_device *dev)
+ static int macvlan_validate(struct nlattr *tb[], struct nlattr *data[],
+ struct netlink_ext_ack *extack)
+ {
++ struct nlattr *nla, *head;
++ int rem, len;
++
+ if (tb[IFLA_ADDRESS]) {
+ if (nla_len(tb[IFLA_ADDRESS]) != ETH_ALEN)
+ return -EINVAL;
+@@ -1316,6 +1319,20 @@ static int macvlan_validate(struct nlattr *tb[], struct nlattr *data[],
+ return -EADDRNOTAVAIL;
+ }
+
++ if (data[IFLA_MACVLAN_MACADDR_DATA]) {
++ head = nla_data(data[IFLA_MACVLAN_MACADDR_DATA]);
++ len = nla_len(data[IFLA_MACVLAN_MACADDR_DATA]);
++
++ nla_for_each_attr(nla, head, len, rem) {
++ if (nla_type(nla) != IFLA_MACVLAN_MACADDR ||
++ nla_len(nla) != ETH_ALEN)
++ return -EINVAL;
++
++ if (!is_valid_ether_addr(nla_data(nla)))
++ return -EADDRNOTAVAIL;
++ }
++ }
++
+ if (data[IFLA_MACVLAN_MACADDR_COUNT])
+ return -EINVAL;
+
+@@ -1372,10 +1389,6 @@ static int macvlan_changelink_sources(struct macvlan_dev *vlan, u32 mode,
+ len = nla_len(data[IFLA_MACVLAN_MACADDR_DATA]);
+
+ nla_for_each_attr(nla, head, len, rem) {
+- if (nla_type(nla) != IFLA_MACVLAN_MACADDR ||
+- nla_len(nla) != ETH_ALEN)
+- continue;
+-
+ addr = nla_data(nla);
+ ret = macvlan_hash_add_source(vlan, addr);
+ if (ret)
+diff --git a/drivers/net/wan/hdlc.c b/drivers/net/wan/hdlc.c
+index dfc16770458d8..386ed2aa31fd9 100644
+--- a/drivers/net/wan/hdlc.c
++++ b/drivers/net/wan/hdlc.c
+@@ -230,6 +230,7 @@ static void hdlc_setup_dev(struct net_device *dev)
+ dev->max_mtu = HDLC_MAX_MTU;
+ dev->type = ARPHRD_RAWHDLC;
+ dev->hard_header_len = 16;
++ dev->needed_headroom = 0;
+ dev->addr_len = 0;
+ dev->header_ops = &hdlc_null_ops;
+ }
+diff --git a/drivers/net/wan/hdlc_x25.c b/drivers/net/wan/hdlc_x25.c
+index f70336bb6f524..f52b9fed05931 100644
+--- a/drivers/net/wan/hdlc_x25.c
++++ b/drivers/net/wan/hdlc_x25.c
+@@ -107,8 +107,14 @@ static netdev_tx_t x25_xmit(struct sk_buff *skb, struct net_device *dev)
+ {
+ int result;
+
++ /* There should be a pseudo header of 1 byte added by upper layers.
++ * Check to make sure it is there before reading it.
++ */
++ if (skb->len < 1) {
++ kfree_skb(skb);
++ return NETDEV_TX_OK;
++ }
+
+- /* X.25 to LAPB */
+ switch (skb->data[0]) {
+ case X25_IFACE_DATA: /* Data to be transmitted */
+ skb_pull(skb, 1);
+@@ -294,6 +300,15 @@ static int x25_ioctl(struct net_device *dev, struct ifreq *ifr)
+ return result;
+
+ memcpy(&state(hdlc)->settings, &new_settings, size);
++
++ /* There's no header_ops so hard_header_len should be 0. */
++ dev->hard_header_len = 0;
++ /* When transmitting data:
++ * first we'll remove a pseudo header of 1 byte,
++ * then we'll prepend an LAPB header of at most 3 bytes.
++ */
++ dev->needed_headroom = 3 - 1;
++
+ dev->type = ARPHRD_X25;
+ call_netdevice_notifiers(NETDEV_POST_TYPE_CHANGE, dev);
+ netif_dormant_off(dev);
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c
+index a757abd7a5999..f4db818cccae7 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c
+@@ -84,6 +84,8 @@
+
+ #define BRCMF_ND_INFO_TIMEOUT msecs_to_jiffies(2000)
+
++#define BRCMF_PS_MAX_TIMEOUT_MS 2000
++
+ #define BRCMF_ASSOC_PARAMS_FIXED_SIZE \
+ (sizeof(struct brcmf_assoc_params_le) - sizeof(u16))
+
+@@ -2941,6 +2943,12 @@ brcmf_cfg80211_set_power_mgmt(struct wiphy *wiphy, struct net_device *ndev,
+ else
+ bphy_err(drvr, "error (%d)\n", err);
+ }
++
++ err = brcmf_fil_iovar_int_set(ifp, "pm2_sleep_ret",
++ min_t(u32, timeout, BRCMF_PS_MAX_TIMEOUT_MS));
++ if (err)
++ bphy_err(drvr, "Unable to set pm timeout, (%d)\n", err);
++
+ done:
+ brcmf_dbg(TRACE, "Exit\n");
+ return err;
+diff --git a/drivers/net/wireless/realtek/rtlwifi/usb.c b/drivers/net/wireless/realtek/rtlwifi/usb.c
+index c66c6dc003783..bad06939a247c 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/usb.c
++++ b/drivers/net/wireless/realtek/rtlwifi/usb.c
+@@ -718,8 +718,11 @@ static int _rtl_usb_receive(struct ieee80211_hw *hw)
+
+ usb_anchor_urb(urb, &rtlusb->rx_submitted);
+ err = usb_submit_urb(urb, GFP_KERNEL);
+- if (err)
++ if (err) {
++ usb_unanchor_urb(urb);
++ usb_free_urb(urb);
+ goto err_out;
++ }
+ usb_free_urb(urb);
+ }
+ return 0;
+diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c
+index 549f5b0fb0b4b..1a2b6910509ca 100644
+--- a/drivers/nvme/host/fc.c
++++ b/drivers/nvme/host/fc.c
+@@ -2076,7 +2076,7 @@ __nvme_fc_init_request(struct nvme_fc_ctrl *ctrl,
+ if (fc_dma_mapping_error(ctrl->lport->dev, op->fcp_req.cmddma)) {
+ dev_err(ctrl->dev,
+ "FCP Op failed - cmdiu dma mapping failed.\n");
+- ret = EFAULT;
++ ret = -EFAULT;
+ goto out_on_error;
+ }
+
+@@ -2086,7 +2086,7 @@ __nvme_fc_init_request(struct nvme_fc_ctrl *ctrl,
+ if (fc_dma_mapping_error(ctrl->lport->dev, op->fcp_req.rspdma)) {
+ dev_err(ctrl->dev,
+ "FCP Op failed - rspiu dma mapping failed.\n");
+- ret = EFAULT;
++ ret = -EFAULT;
+ }
+
+ atomic_set(&op->state, FCPOP_STATE_IDLE);
+diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
+index 2672953233434..041a755f936a6 100644
+--- a/drivers/nvme/host/multipath.c
++++ b/drivers/nvme/host/multipath.c
+@@ -255,12 +255,17 @@ static struct nvme_ns *nvme_round_robin_path(struct nvme_ns_head *head,
+ fallback = ns;
+ }
+
+- /* No optimized path found, re-check the current path */
++ /*
++ * The loop above skips the current path for round-robin semantics.
++ * Fall back to the current path if either:
++ * - no other optimized path found and current is optimized,
++ * - no other usable path found and current is usable.
++ */
+ if (!nvme_path_is_disabled(old) &&
+- old->ana_state == NVME_ANA_OPTIMIZED) {
+- found = old;
+- goto out;
+- }
++ (old->ana_state == NVME_ANA_OPTIMIZED ||
++ (!fallback && old->ana_state == NVME_ANA_NONOPTIMIZED)))
++ return old;
++
+ if (!fallback)
+ return NULL;
+ found = fallback;
+diff --git a/drivers/nvme/target/configfs.c b/drivers/nvme/target/configfs.c
+index 419e0d4ce79b1..d84b935704a3d 100644
+--- a/drivers/nvme/target/configfs.c
++++ b/drivers/nvme/target/configfs.c
+@@ -1035,6 +1035,7 @@ static ssize_t nvmet_subsys_attr_model_store(struct config_item *item,
+ up_write(&nvmet_config_sem);
+
+ kfree_rcu(new_model, rcuhead);
++ kfree(new_model_number);
+
+ return count;
+ }
+diff --git a/drivers/pci/controller/dwc/pcie-qcom.c b/drivers/pci/controller/dwc/pcie-qcom.c
+index 5dd1740855770..f38e710de4789 100644
+--- a/drivers/pci/controller/dwc/pcie-qcom.c
++++ b/drivers/pci/controller/dwc/pcie-qcom.c
+@@ -106,11 +106,14 @@ struct qcom_pcie_resources_2_1_0 {
+ struct clk *iface_clk;
+ struct clk *core_clk;
+ struct clk *phy_clk;
++ struct clk *aux_clk;
++ struct clk *ref_clk;
+ struct reset_control *pci_reset;
+ struct reset_control *axi_reset;
+ struct reset_control *ahb_reset;
+ struct reset_control *por_reset;
+ struct reset_control *phy_reset;
++ struct reset_control *ext_reset;
+ struct regulator_bulk_data supplies[QCOM_PCIE_2_1_0_MAX_SUPPLY];
+ };
+
+@@ -264,6 +267,14 @@ static int qcom_pcie_get_resources_2_1_0(struct qcom_pcie *pcie)
+ if (IS_ERR(res->phy_clk))
+ return PTR_ERR(res->phy_clk);
+
++ res->aux_clk = devm_clk_get_optional(dev, "aux");
++ if (IS_ERR(res->aux_clk))
++ return PTR_ERR(res->aux_clk);
++
++ res->ref_clk = devm_clk_get_optional(dev, "ref");
++ if (IS_ERR(res->ref_clk))
++ return PTR_ERR(res->ref_clk);
++
+ res->pci_reset = devm_reset_control_get_exclusive(dev, "pci");
+ if (IS_ERR(res->pci_reset))
+ return PTR_ERR(res->pci_reset);
+@@ -280,6 +291,10 @@ static int qcom_pcie_get_resources_2_1_0(struct qcom_pcie *pcie)
+ if (IS_ERR(res->por_reset))
+ return PTR_ERR(res->por_reset);
+
++ res->ext_reset = devm_reset_control_get_optional_exclusive(dev, "ext");
++ if (IS_ERR(res->ext_reset))
++ return PTR_ERR(res->ext_reset);
++
+ res->phy_reset = devm_reset_control_get_exclusive(dev, "phy");
+ return PTR_ERR_OR_ZERO(res->phy_reset);
+ }
+@@ -288,14 +303,17 @@ static void qcom_pcie_deinit_2_1_0(struct qcom_pcie *pcie)
+ {
+ struct qcom_pcie_resources_2_1_0 *res = &pcie->res.v2_1_0;
+
++ clk_disable_unprepare(res->phy_clk);
+ reset_control_assert(res->pci_reset);
+ reset_control_assert(res->axi_reset);
+ reset_control_assert(res->ahb_reset);
+ reset_control_assert(res->por_reset);
+- reset_control_assert(res->pci_reset);
++ reset_control_assert(res->ext_reset);
++ reset_control_assert(res->phy_reset);
+ clk_disable_unprepare(res->iface_clk);
+ clk_disable_unprepare(res->core_clk);
+- clk_disable_unprepare(res->phy_clk);
++ clk_disable_unprepare(res->aux_clk);
++ clk_disable_unprepare(res->ref_clk);
+ regulator_bulk_disable(ARRAY_SIZE(res->supplies), res->supplies);
+ }
+
+@@ -326,24 +344,36 @@ static int qcom_pcie_init_2_1_0(struct qcom_pcie *pcie)
+ goto err_assert_ahb;
+ }
+
+- ret = clk_prepare_enable(res->phy_clk);
+- if (ret) {
+- dev_err(dev, "cannot prepare/enable phy clock\n");
+- goto err_clk_phy;
+- }
+-
+ ret = clk_prepare_enable(res->core_clk);
+ if (ret) {
+ dev_err(dev, "cannot prepare/enable core clock\n");
+ goto err_clk_core;
+ }
+
++ ret = clk_prepare_enable(res->aux_clk);
++ if (ret) {
++ dev_err(dev, "cannot prepare/enable aux clock\n");
++ goto err_clk_aux;
++ }
++
++ ret = clk_prepare_enable(res->ref_clk);
++ if (ret) {
++ dev_err(dev, "cannot prepare/enable ref clock\n");
++ goto err_clk_ref;
++ }
++
+ ret = reset_control_deassert(res->ahb_reset);
+ if (ret) {
+ dev_err(dev, "cannot deassert ahb reset\n");
+ goto err_deassert_ahb;
+ }
+
++ ret = reset_control_deassert(res->ext_reset);
++ if (ret) {
++ dev_err(dev, "cannot deassert ext reset\n");
++ goto err_deassert_ahb;
++ }
++
+ /* enable PCIe clocks and resets */
+ val = readl(pcie->parf + PCIE20_PARF_PHY_CTRL);
+ val &= ~BIT(0);
+@@ -398,6 +428,12 @@ static int qcom_pcie_init_2_1_0(struct qcom_pcie *pcie)
+ return ret;
+ }
+
++ ret = clk_prepare_enable(res->phy_clk);
++ if (ret) {
++ dev_err(dev, "cannot prepare/enable phy clock\n");
++ goto err_deassert_ahb;
++ }
++
+ /* wait for clock acquisition */
+ usleep_range(1000, 1500);
+
+@@ -411,10 +447,12 @@ static int qcom_pcie_init_2_1_0(struct qcom_pcie *pcie)
+ return 0;
+
+ err_deassert_ahb:
++ clk_disable_unprepare(res->ref_clk);
++err_clk_ref:
++ clk_disable_unprepare(res->aux_clk);
++err_clk_aux:
+ clk_disable_unprepare(res->core_clk);
+ err_clk_core:
+- clk_disable_unprepare(res->phy_clk);
+-err_clk_phy:
+ clk_disable_unprepare(res->iface_clk);
+ err_assert_ahb:
+ regulator_bulk_disable(ARRAY_SIZE(res->supplies), res->supplies);
+diff --git a/drivers/pci/slot.c b/drivers/pci/slot.c
+index cc386ef2fa122..3861505741e6d 100644
+--- a/drivers/pci/slot.c
++++ b/drivers/pci/slot.c
+@@ -268,13 +268,16 @@ placeholder:
+ slot_name = make_slot_name(name);
+ if (!slot_name) {
+ err = -ENOMEM;
++ kfree(slot);
+ goto err;
+ }
+
+ err = kobject_init_and_add(&slot->kobj, &pci_slot_ktype, NULL,
+ "%s", slot_name);
+- if (err)
++ if (err) {
++ kobject_put(&slot->kobj);
+ goto err;
++ }
+
+ INIT_LIST_HEAD(&slot->list);
+ list_add(&slot->list, &parent->slots);
+@@ -293,7 +296,6 @@ out:
+ mutex_unlock(&pci_slot_mutex);
+ return slot;
+ err:
+- kfree(slot);
+ slot = ERR_PTR(err);
+ goto out;
+ }
+diff --git a/drivers/pinctrl/mediatek/pinctrl-mtk-common-v2.c b/drivers/pinctrl/mediatek/pinctrl-mtk-common-v2.c
+index b77b18fe5adcf..2f3dfb56c3fa4 100644
+--- a/drivers/pinctrl/mediatek/pinctrl-mtk-common-v2.c
++++ b/drivers/pinctrl/mediatek/pinctrl-mtk-common-v2.c
+@@ -243,6 +243,29 @@ static int mtk_xt_find_eint_num(struct mtk_pinctrl *hw, unsigned long eint_n)
+ return EINT_NA;
+ }
+
++/*
++ * Virtual GPIO only used inside SOC and not being exported to outside SOC.
++ * Some modules use virtual GPIO as eint (e.g. pmif or usb).
++ * In MTK platform, external interrupt (EINT) and GPIO is 1-1 mapping
++ * and we can set GPIO as eint.
++ * But some modules use specific eint which doesn't have real GPIO pin.
++ * So we use virtual GPIO to map it.
++ */
++
++bool mtk_is_virt_gpio(struct mtk_pinctrl *hw, unsigned int gpio_n)
++{
++ const struct mtk_pin_desc *desc;
++ bool virt_gpio = false;
++
++ desc = (const struct mtk_pin_desc *)&hw->soc->pins[gpio_n];
++
++ if (desc->funcs && !desc->funcs[desc->eint.eint_m].name)
++ virt_gpio = true;
++
++ return virt_gpio;
++}
++EXPORT_SYMBOL_GPL(mtk_is_virt_gpio);
++
+ static int mtk_xt_get_gpio_n(void *data, unsigned long eint_n,
+ unsigned int *gpio_n,
+ struct gpio_chip **gpio_chip)
+@@ -295,6 +318,9 @@ static int mtk_xt_set_gpio_as_eint(void *data, unsigned long eint_n)
+ if (err)
+ return err;
+
++ if (mtk_is_virt_gpio(hw, gpio_n))
++ return 0;
++
+ desc = (const struct mtk_pin_desc *)&hw->soc->pins[gpio_n];
+
+ err = mtk_hw_set_value(hw, desc, PINCTRL_PIN_REG_MODE,
+diff --git a/drivers/pinctrl/mediatek/pinctrl-mtk-common-v2.h b/drivers/pinctrl/mediatek/pinctrl-mtk-common-v2.h
+index 27df087363960..bd079f4fb1d6f 100644
+--- a/drivers/pinctrl/mediatek/pinctrl-mtk-common-v2.h
++++ b/drivers/pinctrl/mediatek/pinctrl-mtk-common-v2.h
+@@ -315,4 +315,5 @@ int mtk_pinconf_adv_drive_set(struct mtk_pinctrl *hw,
+ int mtk_pinconf_adv_drive_get(struct mtk_pinctrl *hw,
+ const struct mtk_pin_desc *desc, u32 *val);
+
++bool mtk_is_virt_gpio(struct mtk_pinctrl *hw, unsigned int gpio_n);
+ #endif /* __PINCTRL_MTK_COMMON_V2_H */
+diff --git a/drivers/pinctrl/mediatek/pinctrl-paris.c b/drivers/pinctrl/mediatek/pinctrl-paris.c
+index 90a432bf9fedc..a23c18251965e 100644
+--- a/drivers/pinctrl/mediatek/pinctrl-paris.c
++++ b/drivers/pinctrl/mediatek/pinctrl-paris.c
+@@ -769,6 +769,13 @@ static int mtk_gpio_get_direction(struct gpio_chip *chip, unsigned int gpio)
+ if (gpio >= hw->soc->npins)
+ return -EINVAL;
+
++ /*
++ * "Virtual" GPIOs are always and only used for interrupts
++ * Since they are only used for interrupts, they are always inputs
++ */
++ if (mtk_is_virt_gpio(hw, gpio))
++ return 1;
++
+ desc = (const struct mtk_pin_desc *)&hw->soc->pins[gpio];
+
+ err = mtk_hw_get_value(hw, desc, PINCTRL_PIN_REG_DIR, &value);
+diff --git a/drivers/platform/chrome/cros_ec_sensorhub_ring.c b/drivers/platform/chrome/cros_ec_sensorhub_ring.c
+index 24e48d96ed766..b1c641c72f515 100644
+--- a/drivers/platform/chrome/cros_ec_sensorhub_ring.c
++++ b/drivers/platform/chrome/cros_ec_sensorhub_ring.c
+@@ -419,9 +419,7 @@ cros_ec_sensor_ring_process_event(struct cros_ec_sensorhub *sensorhub,
+ * Disable filtering since we might add more jitter
+ * if b is in a random point in time.
+ */
+- new_timestamp = fifo_timestamp -
+- fifo_info->timestamp * 1000 +
+- in->timestamp * 1000;
++ new_timestamp = c - b * 1000 + a * 1000;
+ /*
+ * The timestamp can be stale if we had to use the fifo
+ * info timestamp.
+diff --git a/drivers/s390/cio/css.c b/drivers/s390/cio/css.c
+index 94edbb33d0d1f..aca022239b333 100644
+--- a/drivers/s390/cio/css.c
++++ b/drivers/s390/cio/css.c
+@@ -677,6 +677,11 @@ static int slow_eval_known_fn(struct subchannel *sch, void *data)
+ rc = css_evaluate_known_subchannel(sch, 1);
+ if (rc == -EAGAIN)
+ css_schedule_eval(sch->schid);
++ /*
++ * The loop might take long time for platforms with lots of
++ * known devices. Allow scheduling here.
++ */
++ cond_resched();
+ }
+ return 0;
+ }
+diff --git a/drivers/scsi/fcoe/fcoe_ctlr.c b/drivers/scsi/fcoe/fcoe_ctlr.c
+index 1791a393795da..07a0dadc75bf5 100644
+--- a/drivers/scsi/fcoe/fcoe_ctlr.c
++++ b/drivers/scsi/fcoe/fcoe_ctlr.c
+@@ -255,9 +255,9 @@ static void fcoe_sysfs_fcf_del(struct fcoe_fcf *new)
+ WARN_ON(!fcf_dev);
+ new->fcf_dev = NULL;
+ fcoe_fcf_device_delete(fcf_dev);
+- kfree(new);
+ mutex_unlock(&cdev->lock);
+ }
++ kfree(new);
+ }
+
+ /**
+diff --git a/drivers/scsi/lpfc/lpfc_vport.c b/drivers/scsi/lpfc/lpfc_vport.c
+index b766463579800..d0296f7cf45fc 100644
+--- a/drivers/scsi/lpfc/lpfc_vport.c
++++ b/drivers/scsi/lpfc/lpfc_vport.c
+@@ -642,27 +642,16 @@ lpfc_vport_delete(struct fc_vport *fc_vport)
+ vport->port_state < LPFC_VPORT_READY)
+ return -EAGAIN;
+ }
++
+ /*
+- * This is a bit of a mess. We want to ensure the shost doesn't get
+- * torn down until we're done with the embedded lpfc_vport structure.
+- *
+- * Beyond holding a reference for this function, we also need a
+- * reference for outstanding I/O requests we schedule during delete
+- * processing. But once we scsi_remove_host() we can no longer obtain
+- * a reference through scsi_host_get().
+- *
+- * So we take two references here. We release one reference at the
+- * bottom of the function -- after delinking the vport. And we
+- * release the other at the completion of the unreg_vpi that get's
+- * initiated after we've disposed of all other resources associated
+- * with the port.
++ * Take early refcount for outstanding I/O requests we schedule during
++ * delete processing for unreg_vpi. Always keep this before
++ * scsi_remove_host() as we can no longer obtain a reference through
++ * scsi_host_get() after scsi_host_remove as shost is set to SHOST_DEL.
+ */
+ if (!scsi_host_get(shost))
+ return VPORT_INVAL;
+- if (!scsi_host_get(shost)) {
+- scsi_host_put(shost);
+- return VPORT_INVAL;
+- }
++
+ lpfc_free_sysfs_attr(vport);
+
+ lpfc_debugfs_terminate(vport);
+@@ -809,8 +798,9 @@ skip_logo:
+ if (!(vport->vpi_state & LPFC_VPI_REGISTERED) ||
+ lpfc_mbx_unreg_vpi(vport))
+ scsi_host_put(shost);
+- } else
++ } else {
+ scsi_host_put(shost);
++ }
+
+ lpfc_free_vpi(phba, vport->vpi);
+ vport->work_port_events = 0;
+diff --git a/drivers/scsi/qla2xxx/qla_gs.c b/drivers/scsi/qla2xxx/qla_gs.c
+index df670fba2ab8a..de9fd7f688d01 100644
+--- a/drivers/scsi/qla2xxx/qla_gs.c
++++ b/drivers/scsi/qla2xxx/qla_gs.c
+@@ -1505,11 +1505,11 @@ qla2x00_prep_ct_fdmi_req(struct ct_sns_pkt *p, uint16_t cmd,
+ static uint
+ qla25xx_fdmi_port_speed_capability(struct qla_hw_data *ha)
+ {
++ uint speeds = 0;
++
+ if (IS_CNA_CAPABLE(ha))
+ return FDMI_PORT_SPEED_10GB;
+ if (IS_QLA28XX(ha) || IS_QLA27XX(ha)) {
+- uint speeds = 0;
+-
+ if (ha->max_supported_speed == 2) {
+ if (ha->min_supported_speed <= 6)
+ speeds |= FDMI_PORT_SPEED_64GB;
+@@ -1536,9 +1536,16 @@ qla25xx_fdmi_port_speed_capability(struct qla_hw_data *ha)
+ }
+ return speeds;
+ }
+- if (IS_QLA2031(ha))
+- return FDMI_PORT_SPEED_16GB|FDMI_PORT_SPEED_8GB|
+- FDMI_PORT_SPEED_4GB;
++ if (IS_QLA2031(ha)) {
++ if ((ha->pdev->subsystem_vendor == 0x103C) &&
++ (ha->pdev->subsystem_device == 0x8002)) {
++ speeds = FDMI_PORT_SPEED_16GB;
++ } else {
++ speeds = FDMI_PORT_SPEED_16GB|FDMI_PORT_SPEED_8GB|
++ FDMI_PORT_SPEED_4GB;
++ }
++ return speeds;
++ }
+ if (IS_QLA25XX(ha))
+ return FDMI_PORT_SPEED_8GB|FDMI_PORT_SPEED_4GB|
+ FDMI_PORT_SPEED_2GB|FDMI_PORT_SPEED_1GB;
+@@ -3436,7 +3443,6 @@ void qla24xx_async_gnnft_done(scsi_qla_host_t *vha, srb_t *sp)
+ list_for_each_entry(fcport, &vha->vp_fcports, list) {
+ if ((fcport->flags & FCF_FABRIC_DEVICE) != 0) {
+ fcport->scan_state = QLA_FCPORT_SCAN;
+- fcport->logout_on_delete = 0;
+ }
+ }
+ goto login_logout;
+@@ -3532,10 +3538,22 @@ login_logout:
+ }
+
+ if (fcport->scan_state != QLA_FCPORT_FOUND) {
++ bool do_delete = false;
++
++ if (fcport->scan_needed &&
++ fcport->disc_state == DSC_LOGIN_PEND) {
++ /* Cable got disconnected after we sent
++ * a login. Do delete to prevent timeout.
++ */
++ fcport->logout_on_delete = 1;
++ do_delete = true;
++ }
++
+ fcport->scan_needed = 0;
+- if ((qla_dual_mode_enabled(vha) ||
+- qla_ini_mode_enabled(vha)) &&
+- atomic_read(&fcport->state) == FCS_ONLINE) {
++ if (((qla_dual_mode_enabled(vha) ||
++ qla_ini_mode_enabled(vha)) &&
++ atomic_read(&fcport->state) == FCS_ONLINE) ||
++ do_delete) {
+ if (fcport->loop_id != FC_NO_LOOP_ID) {
+ if (fcport->flags & FCF_FCP2_DEVICE)
+ fcport->logout_on_delete = 0;
+@@ -3736,6 +3754,18 @@ static void qla2x00_async_gpnft_gnnft_sp_done(srb_t *sp, int res)
+ unsigned long flags;
+ const char *name = sp->name;
+
++ if (res == QLA_OS_TIMER_EXPIRED) {
++ /* switch is ignoring all commands.
++ * This might be a zone disable behavior.
++ * This means we hit 64s timeout.
++ * 22s GPNFT + 44s Abort = 64s
++ */
++ ql_dbg(ql_dbg_disc, vha, 0xffff,
++ "%s: Switch Zone check please .\n",
++ name);
++ qla2x00_mark_all_devices_lost(vha);
++ }
++
+ /*
+ * We are in an Interrupt context, queue up this
+ * sp for GNNFT_DONE work. This will allow all
+diff --git a/drivers/scsi/qla2xxx/qla_mbx.c b/drivers/scsi/qla2xxx/qla_mbx.c
+index df31ee0d59b20..fdb2ce7acb912 100644
+--- a/drivers/scsi/qla2xxx/qla_mbx.c
++++ b/drivers/scsi/qla2xxx/qla_mbx.c
+@@ -333,14 +333,6 @@ qla2x00_mailbox_command(scsi_qla_host_t *vha, mbx_cmd_t *mcp)
+ if (time_after(jiffies, wait_time))
+ break;
+
+- /*
+- * Check if it's UNLOADING, cause we cannot poll in
+- * this case, or else a NULL pointer dereference
+- * is triggered.
+- */
+- if (unlikely(test_bit(UNLOADING, &base_vha->dpc_flags)))
+- return QLA_FUNCTION_TIMEOUT;
+-
+ /* Check for pending interrupts. */
+ qla2x00_poll(ha->rsp_q_map[0]);
+
+diff --git a/drivers/scsi/qla2xxx/qla_nvme.c b/drivers/scsi/qla2xxx/qla_nvme.c
+index fa695a4007f86..262dfd7635a48 100644
+--- a/drivers/scsi/qla2xxx/qla_nvme.c
++++ b/drivers/scsi/qla2xxx/qla_nvme.c
+@@ -536,6 +536,11 @@ static int qla_nvme_post_cmd(struct nvme_fc_local_port *lport,
+ struct nvme_private *priv = fd->private;
+ struct qla_nvme_rport *qla_rport = rport->private;
+
++ if (!priv) {
++ /* nvme association has been torn down */
++ return rval;
++ }
++
+ fcport = qla_rport->fcport;
+
+ if (!qpair || !fcport || (qpair && !qpair->fw_started) ||
+diff --git a/drivers/scsi/qla2xxx/qla_os.c b/drivers/scsi/qla2xxx/qla_os.c
+index 5c7c22d0fab4b..8b6803f4f2dc1 100644
+--- a/drivers/scsi/qla2xxx/qla_os.c
++++ b/drivers/scsi/qla2xxx/qla_os.c
+@@ -2017,6 +2017,11 @@ skip_pio:
+ /* Determine queue resources */
+ ha->max_req_queues = ha->max_rsp_queues = 1;
+ ha->msix_count = QLA_BASE_VECTORS;
++
++ /* Check if FW supports MQ or not */
++ if (!(ha->fw_attributes & BIT_6))
++ goto mqiobase_exit;
++
+ if (!ql2xmqsupport || !ql2xnvmeenable ||
+ (!IS_QLA25XX(ha) && !IS_QLA81XX(ha)))
+ goto mqiobase_exit;
+diff --git a/drivers/scsi/qla2xxx/qla_target.c b/drivers/scsi/qla2xxx/qla_target.c
+index fbb80a043b4fe..90289162dbd4c 100644
+--- a/drivers/scsi/qla2xxx/qla_target.c
++++ b/drivers/scsi/qla2xxx/qla_target.c
+@@ -1270,7 +1270,7 @@ void qlt_schedule_sess_for_deletion(struct fc_port *sess)
+
+ qla24xx_chk_fcp_state(sess);
+
+- ql_dbg(ql_dbg_tgt, sess->vha, 0xe001,
++ ql_dbg(ql_dbg_disc, sess->vha, 0xe001,
+ "Scheduling sess %p for deletion %8phC\n",
+ sess, sess->port_name);
+
+diff --git a/drivers/scsi/scsi_debug.c b/drivers/scsi/scsi_debug.c
+index b0d93bf79978f..25faad7f8e617 100644
+--- a/drivers/scsi/scsi_debug.c
++++ b/drivers/scsi/scsi_debug.c
+@@ -5486,9 +5486,11 @@ static int schedule_resp(struct scsi_cmnd *cmnd, struct sdebug_dev_info *devip,
+ u64 d = ktime_get_boottime_ns() - ns_from_boot;
+
+ if (kt <= d) { /* elapsed duration >= kt */
++ spin_lock_irqsave(&sqp->qc_lock, iflags);
+ sqcp->a_cmnd = NULL;
+ atomic_dec(&devip->num_in_q);
+ clear_bit(k, sqp->in_use_bm);
++ spin_unlock_irqrestore(&sqp->qc_lock, iflags);
+ if (new_sd_dp)
+ kfree(sd_dp);
+ /* call scsi_done() from this thread */
+diff --git a/drivers/scsi/scsi_transport_iscsi.c b/drivers/scsi/scsi_transport_iscsi.c
+index 7ae5024e78243..df07ecd94793a 100644
+--- a/drivers/scsi/scsi_transport_iscsi.c
++++ b/drivers/scsi/scsi_transport_iscsi.c
+@@ -3291,7 +3291,7 @@ static int iscsi_set_flashnode_param(struct iscsi_transport *transport,
+ pr_err("%s could not find host no %u\n",
+ __func__, ev->u.set_flashnode.host_no);
+ err = -ENODEV;
+- goto put_host;
++ goto exit_set_fnode;
+ }
+
+ idx = ev->u.set_flashnode.flashnode_idx;
+diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c
+index 136b863bc1d45..8bc8e4e62c045 100644
+--- a/drivers/scsi/ufs/ufshcd.c
++++ b/drivers/scsi/ufs/ufshcd.c
+@@ -1566,6 +1566,7 @@ unblock_reqs:
+ int ufshcd_hold(struct ufs_hba *hba, bool async)
+ {
+ int rc = 0;
++ bool flush_result;
+ unsigned long flags;
+
+ if (!ufshcd_is_clkgating_allowed(hba))
+@@ -1597,7 +1598,9 @@ start:
+ break;
+ }
+ spin_unlock_irqrestore(hba->host->host_lock, flags);
+- flush_work(&hba->clk_gating.ungate_work);
++ flush_result = flush_work(&hba->clk_gating.ungate_work);
++ if (hba->clk_gating.is_suspended && !flush_result)
++ goto out;
+ spin_lock_irqsave(hba->host->host_lock, flags);
+ goto start;
+ }
+@@ -5988,7 +5991,7 @@ static irqreturn_t ufshcd_sl_intr(struct ufs_hba *hba, u32 intr_status)
+ */
+ static irqreturn_t ufshcd_intr(int irq, void *__hba)
+ {
+- u32 intr_status, enabled_intr_status;
++ u32 intr_status, enabled_intr_status = 0;
+ irqreturn_t retval = IRQ_NONE;
+ struct ufs_hba *hba = __hba;
+ int retries = hba->nutrs;
+@@ -6002,7 +6005,7 @@ static irqreturn_t ufshcd_intr(int irq, void *__hba)
+ * read, make sure we handle them by checking the interrupt status
+ * again in a loop until we process all of the reqs before returning.
+ */
+- do {
++ while (intr_status && retries--) {
+ enabled_intr_status =
+ intr_status & ufshcd_readl(hba, REG_INTERRUPT_ENABLE);
+ if (intr_status)
+@@ -6011,7 +6014,7 @@ static irqreturn_t ufshcd_intr(int irq, void *__hba)
+ retval |= ufshcd_sl_intr(hba, enabled_intr_status);
+
+ intr_status = ufshcd_readl(hba, REG_INTERRUPT_STATUS);
+- } while (intr_status && --retries);
++ }
+
+ if (enabled_intr_status && retval == IRQ_NONE) {
+ dev_err(hba->dev, "%s: Unhandled interrupt 0x%08x\n",
+@@ -6538,7 +6541,7 @@ static int ufshcd_abort(struct scsi_cmnd *cmd)
+ /* command completed already */
+ dev_err(hba->dev, "%s: cmd at tag %d successfully cleared from DB.\n",
+ __func__, tag);
+- goto out;
++ goto cleanup;
+ } else {
+ dev_err(hba->dev,
+ "%s: no response from device. tag = %d, err %d\n",
+@@ -6572,6 +6575,7 @@ static int ufshcd_abort(struct scsi_cmnd *cmd)
+ goto out;
+ }
+
++cleanup:
+ scsi_dma_unmap(cmd);
+
+ spin_lock_irqsave(host->host_lock, flags);
+diff --git a/drivers/spi/spi-stm32.c b/drivers/spi/spi-stm32.c
+index 9672cda2f8031..d4b33b358a31e 100644
+--- a/drivers/spi/spi-stm32.c
++++ b/drivers/spi/spi-stm32.c
+@@ -442,7 +442,8 @@ static int stm32_spi_prepare_mbr(struct stm32_spi *spi, u32 speed_hz,
+ {
+ u32 div, mbrdiv;
+
+- div = DIV_ROUND_UP(spi->clk_rate, speed_hz);
++ /* Ensure spi->clk_rate is even */
++ div = DIV_ROUND_UP(spi->clk_rate & ~0x1, speed_hz);
+
+ /*
+ * SPI framework set xfer->speed_hz to master->max_speed_hz if
+@@ -468,20 +469,27 @@ static int stm32_spi_prepare_mbr(struct stm32_spi *spi, u32 speed_hz,
+ /**
+ * stm32h7_spi_prepare_fthlv - Determine FIFO threshold level
+ * @spi: pointer to the spi controller data structure
++ * @xfer_len: length of the message to be transferred
+ */
+-static u32 stm32h7_spi_prepare_fthlv(struct stm32_spi *spi)
++static u32 stm32h7_spi_prepare_fthlv(struct stm32_spi *spi, u32 xfer_len)
+ {
+- u32 fthlv, half_fifo;
++ u32 fthlv, half_fifo, packet;
+
+ /* data packet should not exceed 1/2 of fifo space */
+ half_fifo = (spi->fifo_size / 2);
+
++ /* data_packet should not exceed transfer length */
++ if (half_fifo > xfer_len)
++ packet = xfer_len;
++ else
++ packet = half_fifo;
++
+ if (spi->cur_bpw <= 8)
+- fthlv = half_fifo;
++ fthlv = packet;
+ else if (spi->cur_bpw <= 16)
+- fthlv = half_fifo / 2;
++ fthlv = packet / 2;
+ else
+- fthlv = half_fifo / 4;
++ fthlv = packet / 4;
+
+ /* align packet size with data registers access */
+ if (spi->cur_bpw > 8)
+@@ -489,6 +497,9 @@ static u32 stm32h7_spi_prepare_fthlv(struct stm32_spi *spi)
+ else
+ fthlv -= (fthlv % 4); /* multiple of 4 */
+
++ if (!fthlv)
++ fthlv = 1;
++
+ return fthlv;
+ }
+
+@@ -967,13 +978,13 @@ static irqreturn_t stm32h7_spi_irq_thread(int irq, void *dev_id)
+ if (!spi->cur_usedma && (spi->rx_buf && (spi->rx_len > 0)))
+ stm32h7_spi_read_rxfifo(spi, false);
+
+- writel_relaxed(mask, spi->base + STM32H7_SPI_IFCR);
++ writel_relaxed(sr & mask, spi->base + STM32H7_SPI_IFCR);
+
+ spin_unlock_irqrestore(&spi->lock, flags);
+
+ if (end) {
+- spi_finalize_current_transfer(master);
+ stm32h7_spi_disable(spi);
++ spi_finalize_current_transfer(master);
+ }
+
+ return IRQ_HANDLED;
+@@ -1394,7 +1405,7 @@ static void stm32h7_spi_set_bpw(struct stm32_spi *spi)
+ cfg1_setb |= (bpw << STM32H7_SPI_CFG1_DSIZE_SHIFT) &
+ STM32H7_SPI_CFG1_DSIZE;
+
+- spi->cur_fthlv = stm32h7_spi_prepare_fthlv(spi);
++ spi->cur_fthlv = stm32h7_spi_prepare_fthlv(spi, spi->cur_xferlen);
+ fthlv = spi->cur_fthlv - 1;
+
+ cfg1_clrb |= STM32H7_SPI_CFG1_FTHLV;
+@@ -1586,39 +1597,33 @@ static int stm32_spi_transfer_one_setup(struct stm32_spi *spi,
+ unsigned long flags;
+ unsigned int comm_type;
+ int nb_words, ret = 0;
++ int mbr;
+
+ spin_lock_irqsave(&spi->lock, flags);
+
+- if (spi->cur_bpw != transfer->bits_per_word) {
+- spi->cur_bpw = transfer->bits_per_word;
+- spi->cfg->set_bpw(spi);
+- }
++ spi->cur_xferlen = transfer->len;
+
+- if (spi->cur_speed != transfer->speed_hz) {
+- int mbr;
++ spi->cur_bpw = transfer->bits_per_word;
++ spi->cfg->set_bpw(spi);
+
+- /* Update spi->cur_speed with real clock speed */
+- mbr = stm32_spi_prepare_mbr(spi, transfer->speed_hz,
+- spi->cfg->baud_rate_div_min,
+- spi->cfg->baud_rate_div_max);
+- if (mbr < 0) {
+- ret = mbr;
+- goto out;
+- }
+-
+- transfer->speed_hz = spi->cur_speed;
+- stm32_spi_set_mbr(spi, mbr);
++ /* Update spi->cur_speed with real clock speed */
++ mbr = stm32_spi_prepare_mbr(spi, transfer->speed_hz,
++ spi->cfg->baud_rate_div_min,
++ spi->cfg->baud_rate_div_max);
++ if (mbr < 0) {
++ ret = mbr;
++ goto out;
+ }
+
+- comm_type = stm32_spi_communication_type(spi_dev, transfer);
+- if (spi->cur_comm != comm_type) {
+- ret = spi->cfg->set_mode(spi, comm_type);
++ transfer->speed_hz = spi->cur_speed;
++ stm32_spi_set_mbr(spi, mbr);
+
+- if (ret < 0)
+- goto out;
++ comm_type = stm32_spi_communication_type(spi_dev, transfer);
++ ret = spi->cfg->set_mode(spi, comm_type);
++ if (ret < 0)
++ goto out;
+
+- spi->cur_comm = comm_type;
+- }
++ spi->cur_comm = comm_type;
+
+ if (spi->cfg->set_data_idleness)
+ spi->cfg->set_data_idleness(spi, transfer->len);
+@@ -1636,8 +1641,6 @@ static int stm32_spi_transfer_one_setup(struct stm32_spi *spi,
+ goto out;
+ }
+
+- spi->cur_xferlen = transfer->len;
+-
+ dev_dbg(spi->dev, "transfer communication mode set to %d\n",
+ spi->cur_comm);
+ dev_dbg(spi->dev,
+diff --git a/drivers/staging/rts5208/rtsx.c b/drivers/staging/rts5208/rtsx.c
+index be0053c795b7a..937f4e732a75c 100644
+--- a/drivers/staging/rts5208/rtsx.c
++++ b/drivers/staging/rts5208/rtsx.c
+@@ -972,6 +972,7 @@ ioremap_fail:
+ kfree(dev->chip);
+ chip_alloc_fail:
+ dev_err(&pci->dev, "%s failed\n", __func__);
++ scsi_host_put(host);
+ scsi_host_alloc_fail:
+ pci_release_regions(pci);
+ return err;
+diff --git a/drivers/target/target_core_internal.h b/drivers/target/target_core_internal.h
+index 8533444159635..e7b3c6e5d5744 100644
+--- a/drivers/target/target_core_internal.h
++++ b/drivers/target/target_core_internal.h
+@@ -138,6 +138,7 @@ int init_se_kmem_caches(void);
+ void release_se_kmem_caches(void);
+ u32 scsi_get_new_index(scsi_index_t);
+ void transport_subsystem_check_init(void);
++void transport_uninit_session(struct se_session *);
+ unsigned char *transport_dump_cmd_direction(struct se_cmd *);
+ void transport_dump_dev_state(struct se_device *, char *, int *);
+ void transport_dump_dev_info(struct se_device *, struct se_lun *,
+diff --git a/drivers/target/target_core_transport.c b/drivers/target/target_core_transport.c
+index 90ecdd706a017..e6e1fa68de542 100644
+--- a/drivers/target/target_core_transport.c
++++ b/drivers/target/target_core_transport.c
+@@ -236,6 +236,11 @@ int transport_init_session(struct se_session *se_sess)
+ }
+ EXPORT_SYMBOL(transport_init_session);
+
++void transport_uninit_session(struct se_session *se_sess)
++{
++ percpu_ref_exit(&se_sess->cmd_count);
++}
++
+ /**
+ * transport_alloc_session - allocate a session object and initialize it
+ * @sup_prot_ops: bitmask that defines which T10-PI modes are supported.
+@@ -579,7 +584,7 @@ void transport_free_session(struct se_session *se_sess)
+ sbitmap_queue_free(&se_sess->sess_tag_pool);
+ kvfree(se_sess->sess_cmd_map);
+ }
+- percpu_ref_exit(&se_sess->cmd_count);
++ transport_uninit_session(se_sess);
+ kmem_cache_free(se_sess_cache, se_sess);
+ }
+ EXPORT_SYMBOL(transport_free_session);
+diff --git a/drivers/target/target_core_user.c b/drivers/target/target_core_user.c
+index 63cca0e1e9123..9ab960cc39b6f 100644
+--- a/drivers/target/target_core_user.c
++++ b/drivers/target/target_core_user.c
+@@ -1220,7 +1220,14 @@ static unsigned int tcmu_handle_completions(struct tcmu_dev *udev)
+
+ struct tcmu_cmd_entry *entry = (void *) mb + CMDR_OFF + udev->cmdr_last_cleaned;
+
+- tcmu_flush_dcache_range(entry, sizeof(*entry));
++ /*
++ * Flush max. up to end of cmd ring since current entry might
++ * be a padding that is shorter than sizeof(*entry)
++ */
++ size_t ring_left = head_to_end(udev->cmdr_last_cleaned,
++ udev->cmdr_size);
++ tcmu_flush_dcache_range(entry, ring_left < sizeof(*entry) ?
++ ring_left : sizeof(*entry));
+
+ if (tcmu_hdr_get_op(entry->hdr.len_op) == TCMU_OP_PAD) {
+ UPDATE_HEAD(udev->cmdr_last_cleaned,
+diff --git a/drivers/target/target_core_xcopy.c b/drivers/target/target_core_xcopy.c
+index 0d00ccbeb0503..44e15d7fb2f09 100644
+--- a/drivers/target/target_core_xcopy.c
++++ b/drivers/target/target_core_xcopy.c
+@@ -474,7 +474,7 @@ int target_xcopy_setup_pt(void)
+ memset(&xcopy_pt_sess, 0, sizeof(struct se_session));
+ ret = transport_init_session(&xcopy_pt_sess);
+ if (ret < 0)
+- return ret;
++ goto destroy_wq;
+
+ xcopy_pt_nacl.se_tpg = &xcopy_pt_tpg;
+ xcopy_pt_nacl.nacl_sess = &xcopy_pt_sess;
+@@ -483,12 +483,19 @@ int target_xcopy_setup_pt(void)
+ xcopy_pt_sess.se_node_acl = &xcopy_pt_nacl;
+
+ return 0;
++
++destroy_wq:
++ destroy_workqueue(xcopy_wq);
++ xcopy_wq = NULL;
++ return ret;
+ }
+
+ void target_xcopy_release_pt(void)
+ {
+- if (xcopy_wq)
++ if (xcopy_wq) {
+ destroy_workqueue(xcopy_wq);
++ transport_uninit_session(&xcopy_pt_sess);
++ }
+ }
+
+ /*
+diff --git a/drivers/tty/serial/8250/8250_exar.c b/drivers/tty/serial/8250/8250_exar.c
+index 04b9af7ed9415..2d0e7c7e408dc 100644
+--- a/drivers/tty/serial/8250/8250_exar.c
++++ b/drivers/tty/serial/8250/8250_exar.c
+@@ -744,6 +744,24 @@ static const struct exar8250_board pbn_exar_XR17V35x = {
+ .exit = pci_xr17v35x_exit,
+ };
+
++static const struct exar8250_board pbn_fastcom35x_2 = {
++ .num_ports = 2,
++ .setup = pci_xr17v35x_setup,
++ .exit = pci_xr17v35x_exit,
++};
++
++static const struct exar8250_board pbn_fastcom35x_4 = {
++ .num_ports = 4,
++ .setup = pci_xr17v35x_setup,
++ .exit = pci_xr17v35x_exit,
++};
++
++static const struct exar8250_board pbn_fastcom35x_8 = {
++ .num_ports = 8,
++ .setup = pci_xr17v35x_setup,
++ .exit = pci_xr17v35x_exit,
++};
++
+ static const struct exar8250_board pbn_exar_XR17V4358 = {
+ .num_ports = 12,
+ .setup = pci_xr17v35x_setup,
+@@ -811,9 +829,9 @@ static const struct pci_device_id exar_pci_tbl[] = {
+ EXAR_DEVICE(EXAR, XR17V358, pbn_exar_XR17V35x),
+ EXAR_DEVICE(EXAR, XR17V4358, pbn_exar_XR17V4358),
+ EXAR_DEVICE(EXAR, XR17V8358, pbn_exar_XR17V8358),
+- EXAR_DEVICE(COMMTECH, 4222PCIE, pbn_exar_XR17V35x),
+- EXAR_DEVICE(COMMTECH, 4224PCIE, pbn_exar_XR17V35x),
+- EXAR_DEVICE(COMMTECH, 4228PCIE, pbn_exar_XR17V35x),
++ EXAR_DEVICE(COMMTECH, 4222PCIE, pbn_fastcom35x_2),
++ EXAR_DEVICE(COMMTECH, 4224PCIE, pbn_fastcom35x_4),
++ EXAR_DEVICE(COMMTECH, 4228PCIE, pbn_fastcom35x_8),
+
+ EXAR_DEVICE(COMMTECH, 4222PCI335, pbn_fastcom335_2),
+ EXAR_DEVICE(COMMTECH, 4224PCI335, pbn_fastcom335_4),
+diff --git a/drivers/tty/serial/8250/8250_port.c b/drivers/tty/serial/8250/8250_port.c
+index 1632f7d25acca..63a6d13f70b80 100644
+--- a/drivers/tty/serial/8250/8250_port.c
++++ b/drivers/tty/serial/8250/8250_port.c
+@@ -2274,6 +2274,10 @@ int serial8250_do_startup(struct uart_port *port)
+
+ if (port->irq && !(up->port.flags & UPF_NO_THRE_TEST)) {
+ unsigned char iir1;
++
++ if (port->irqflags & IRQF_SHARED)
++ disable_irq_nosync(port->irq);
++
+ /*
+ * Test for UARTs that do not reassert THRE when the
+ * transmitter is idle and the interrupt has already
+@@ -2283,8 +2287,6 @@ int serial8250_do_startup(struct uart_port *port)
+ * allow register changes to become visible.
+ */
+ spin_lock_irqsave(&port->lock, flags);
+- if (up->port.irqflags & IRQF_SHARED)
+- disable_irq_nosync(port->irq);
+
+ wait_for_xmitr(up, UART_LSR_THRE);
+ serial_port_out_sync(port, UART_IER, UART_IER_THRI);
+@@ -2296,9 +2298,10 @@ int serial8250_do_startup(struct uart_port *port)
+ iir = serial_port_in(port, UART_IIR);
+ serial_port_out(port, UART_IER, 0);
+
++ spin_unlock_irqrestore(&port->lock, flags);
++
+ if (port->irqflags & IRQF_SHARED)
+ enable_irq(port->irq);
+- spin_unlock_irqrestore(&port->lock, flags);
+
+ /*
+ * If the interrupt is not reasserted, or we otherwise
+diff --git a/drivers/tty/serial/amba-pl011.c b/drivers/tty/serial/amba-pl011.c
+index 8efd7c2a34fe8..a8d1edcf252c7 100644
+--- a/drivers/tty/serial/amba-pl011.c
++++ b/drivers/tty/serial/amba-pl011.c
+@@ -2241,9 +2241,8 @@ pl011_console_write(struct console *co, const char *s, unsigned int count)
+ clk_disable(uap->clk);
+ }
+
+-static void __init
+-pl011_console_get_options(struct uart_amba_port *uap, int *baud,
+- int *parity, int *bits)
++static void pl011_console_get_options(struct uart_amba_port *uap, int *baud,
++ int *parity, int *bits)
+ {
+ if (pl011_read(uap, REG_CR) & UART01x_CR_UARTEN) {
+ unsigned int lcr_h, ibrd, fbrd;
+@@ -2276,7 +2275,7 @@ pl011_console_get_options(struct uart_amba_port *uap, int *baud,
+ }
+ }
+
+-static int __init pl011_console_setup(struct console *co, char *options)
++static int pl011_console_setup(struct console *co, char *options)
+ {
+ struct uart_amba_port *uap;
+ int baud = 38400;
+@@ -2344,8 +2343,8 @@ static int __init pl011_console_setup(struct console *co, char *options)
+ *
+ * Returns 0 if console matches; otherwise non-zero to use default matching
+ */
+-static int __init pl011_console_match(struct console *co, char *name, int idx,
+- char *options)
++static int pl011_console_match(struct console *co, char *name, int idx,
++ char *options)
+ {
+ unsigned char iotype;
+ resource_size_t addr;
+@@ -2616,7 +2615,7 @@ static int pl011_setup_port(struct device *dev, struct uart_amba_port *uap,
+
+ static int pl011_register_port(struct uart_amba_port *uap)
+ {
+- int ret;
++ int ret, i;
+
+ /* Ensure interrupts from this UART are masked and cleared */
+ pl011_write(0, uap, REG_IMSC);
+@@ -2627,6 +2626,9 @@ static int pl011_register_port(struct uart_amba_port *uap)
+ if (ret < 0) {
+ dev_err(uap->port.dev,
+ "Failed to register AMBA-PL011 driver\n");
++ for (i = 0; i < ARRAY_SIZE(amba_ports); i++)
++ if (amba_ports[i] == uap)
++ amba_ports[i] = NULL;
+ return ret;
+ }
+ }
+diff --git a/drivers/tty/serial/samsung_tty.c b/drivers/tty/serial/samsung_tty.c
+index d913d9b2762a6..815da3e78ad1a 100644
+--- a/drivers/tty/serial/samsung_tty.c
++++ b/drivers/tty/serial/samsung_tty.c
+@@ -1911,9 +1911,11 @@ static int s3c24xx_serial_init_port(struct s3c24xx_uart_port *ourport,
+ ourport->tx_irq = ret + 1;
+ }
+
+- ret = platform_get_irq(platdev, 1);
+- if (ret > 0)
+- ourport->tx_irq = ret;
++ if (!s3c24xx_serial_has_interrupt_mask(port)) {
++ ret = platform_get_irq(platdev, 1);
++ if (ret > 0)
++ ourport->tx_irq = ret;
++ }
+ /*
+ * DMA is currently supported only on DT platforms, if DMA properties
+ * are specified.
+diff --git a/drivers/tty/serial/stm32-usart.c b/drivers/tty/serial/stm32-usart.c
+index 8602ff3573218..b77b41c768fbf 100644
+--- a/drivers/tty/serial/stm32-usart.c
++++ b/drivers/tty/serial/stm32-usart.c
+@@ -962,7 +962,7 @@ static int stm32_init_port(struct stm32_port *stm32port,
+ return ret;
+
+ if (stm32port->info->cfg.has_wakeup) {
+- stm32port->wakeirq = platform_get_irq(pdev, 1);
++ stm32port->wakeirq = platform_get_irq_optional(pdev, 1);
+ if (stm32port->wakeirq <= 0 && stm32port->wakeirq != -ENXIO)
+ return stm32port->wakeirq ? : -ENODEV;
+ }
+diff --git a/drivers/tty/vt/vt.c b/drivers/tty/vt/vt.c
+index 42d8c67a481f0..c9ee8e9498d5a 100644
+--- a/drivers/tty/vt/vt.c
++++ b/drivers/tty/vt/vt.c
+@@ -1196,7 +1196,7 @@ static int vc_do_resize(struct tty_struct *tty, struct vc_data *vc,
+ unsigned int old_rows, old_row_size, first_copied_row;
+ unsigned int new_cols, new_rows, new_row_size, new_screen_size;
+ unsigned int user;
+- unsigned short *newscreen;
++ unsigned short *oldscreen, *newscreen;
+ struct uni_screen *new_uniscr = NULL;
+
+ WARN_CONSOLE_UNLOCKED();
+@@ -1294,10 +1294,11 @@ static int vc_do_resize(struct tty_struct *tty, struct vc_data *vc,
+ if (new_scr_end > new_origin)
+ scr_memsetw((void *)new_origin, vc->vc_video_erase_char,
+ new_scr_end - new_origin);
+- kfree(vc->vc_screenbuf);
++ oldscreen = vc->vc_screenbuf;
+ vc->vc_screenbuf = newscreen;
+ vc->vc_screenbuf_size = new_screen_size;
+ set_origin(vc);
++ kfree(oldscreen);
+
+ /* do part of a reset_terminal() */
+ vc->vc_top = 0;
+diff --git a/drivers/tty/vt/vt_ioctl.c b/drivers/tty/vt/vt_ioctl.c
+index daf61c28ba766..cbc85c995d92d 100644
+--- a/drivers/tty/vt/vt_ioctl.c
++++ b/drivers/tty/vt/vt_ioctl.c
+@@ -893,12 +893,22 @@ int vt_ioctl(struct tty_struct *tty,
+ console_lock();
+ vcp = vc_cons[i].d;
+ if (vcp) {
++ int ret;
++ int save_scan_lines = vcp->vc_scan_lines;
++ int save_font_height = vcp->vc_font.height;
++
+ if (v.v_vlin)
+ vcp->vc_scan_lines = v.v_vlin;
+ if (v.v_clin)
+ vcp->vc_font.height = v.v_clin;
+ vcp->vc_resize_user = 1;
+- vc_resize(vcp, v.v_cols, v.v_rows);
++ ret = vc_resize(vcp, v.v_cols, v.v_rows);
++ if (ret) {
++ vcp->vc_scan_lines = save_scan_lines;
++ vcp->vc_font.height = save_font_height;
++ console_unlock();
++ return ret;
++ }
+ }
+ console_unlock();
+ }
+diff --git a/drivers/usb/class/cdc-acm.c b/drivers/usb/class/cdc-acm.c
+index d5187b50fc828..7499ba118665a 100644
+--- a/drivers/usb/class/cdc-acm.c
++++ b/drivers/usb/class/cdc-acm.c
+@@ -378,21 +378,19 @@ static void acm_ctrl_irq(struct urb *urb)
+ if (current_size < expected_size) {
+ /* notification is transmitted fragmented, reassemble */
+ if (acm->nb_size < expected_size) {
+- if (acm->nb_size) {
+- kfree(acm->notification_buffer);
+- acm->nb_size = 0;
+- }
++ u8 *new_buffer;
+ alloc_size = roundup_pow_of_two(expected_size);
+- /*
+- * kmalloc ensures a valid notification_buffer after a
+- * use of kfree in case the previous allocation was too
+- * small. Final freeing is done on disconnect.
+- */
+- acm->notification_buffer =
+- kmalloc(alloc_size, GFP_ATOMIC);
+- if (!acm->notification_buffer)
++ /* Final freeing is done on disconnect. */
++ new_buffer = krealloc(acm->notification_buffer,
++ alloc_size, GFP_ATOMIC);
++ if (!new_buffer) {
++ acm->nb_index = 0;
+ goto exit;
++ }
++
++ acm->notification_buffer = new_buffer;
+ acm->nb_size = alloc_size;
++ dr = (struct usb_cdc_notification *)acm->notification_buffer;
+ }
+
+ copy_size = min(current_size,
+diff --git a/drivers/usb/core/driver.c b/drivers/usb/core/driver.c
+index f81606c6a35b0..7e73e989645bd 100644
+--- a/drivers/usb/core/driver.c
++++ b/drivers/usb/core/driver.c
+@@ -905,6 +905,35 @@ static int usb_uevent(struct device *dev, struct kobj_uevent_env *env)
+ return 0;
+ }
+
++static bool is_dev_usb_generic_driver(struct device *dev)
++{
++ struct usb_device_driver *udd = dev->driver ?
++ to_usb_device_driver(dev->driver) : NULL;
++
++ return udd == &usb_generic_driver;
++}
++
++static int __usb_bus_reprobe_drivers(struct device *dev, void *data)
++{
++ struct usb_device_driver *new_udriver = data;
++ struct usb_device *udev;
++ int ret;
++
++ if (!is_dev_usb_generic_driver(dev))
++ return 0;
++
++ udev = to_usb_device(dev);
++ if (usb_device_match_id(udev, new_udriver->id_table) == NULL &&
++ (!new_udriver->match || new_udriver->match(udev) != 0))
++ return 0;
++
++ ret = device_reprobe(dev);
++ if (ret && ret != -EPROBE_DEFER)
++ dev_err(dev, "Failed to reprobe device (error %d)\n", ret);
++
++ return 0;
++}
++
+ /**
+ * usb_register_device_driver - register a USB device (not interface) driver
+ * @new_udriver: USB operations for the device driver
+@@ -934,13 +963,20 @@ int usb_register_device_driver(struct usb_device_driver *new_udriver,
+
+ retval = driver_register(&new_udriver->drvwrap.driver);
+
+- if (!retval)
++ if (!retval) {
+ pr_info("%s: registered new device driver %s\n",
+ usbcore_name, new_udriver->name);
+- else
++ /*
++ * Check whether any device could be better served with
++ * this new driver
++ */
++ bus_for_each_dev(&usb_bus_type, NULL, new_udriver,
++ __usb_bus_reprobe_drivers);
++ } else {
+ printk(KERN_ERR "%s: error %d registering device "
+ " driver %s\n",
+ usbcore_name, retval, new_udriver->name);
++ }
+
+ return retval;
+ }
+diff --git a/drivers/usb/core/generic.c b/drivers/usb/core/generic.c
+index 4626227a6dd22..cd08a47144bd3 100644
+--- a/drivers/usb/core/generic.c
++++ b/drivers/usb/core/generic.c
+@@ -207,8 +207,9 @@ static int __check_usb_generic(struct device_driver *drv, void *data)
+ return 0;
+ if (!udrv->id_table)
+ return 0;
+-
+- return usb_device_match_id(udev, udrv->id_table) != NULL;
++ if (usb_device_match_id(udev, udrv->id_table) != NULL)
++ return 1;
++ return (udrv->match && udrv->match(udev));
+ }
+
+ static bool usb_generic_driver_match(struct usb_device *udev)
+diff --git a/drivers/usb/core/quirks.c b/drivers/usb/core/quirks.c
+index c96c50faccf72..2f068e525a374 100644
+--- a/drivers/usb/core/quirks.c
++++ b/drivers/usb/core/quirks.c
+@@ -370,6 +370,10 @@ static const struct usb_device_id usb_quirk_list[] = {
+ { USB_DEVICE(0x0926, 0x0202), .driver_info =
+ USB_QUIRK_ENDPOINT_BLACKLIST },
+
++ /* Sound Devices MixPre-D */
++ { USB_DEVICE(0x0926, 0x0208), .driver_info =
++ USB_QUIRK_ENDPOINT_BLACKLIST },
++
+ /* Keytouch QWERTY Panel keyboard */
+ { USB_DEVICE(0x0926, 0x3333), .driver_info =
+ USB_QUIRK_CONFIG_INTF_STRINGS },
+@@ -465,6 +469,8 @@ static const struct usb_device_id usb_quirk_list[] = {
+
+ { USB_DEVICE(0x2386, 0x3119), .driver_info = USB_QUIRK_NO_LPM },
+
++ { USB_DEVICE(0x2386, 0x350e), .driver_info = USB_QUIRK_NO_LPM },
++
+ /* DJI CineSSD */
+ { USB_DEVICE(0x2ca3, 0x0031), .driver_info = USB_QUIRK_NO_LPM },
+
+@@ -509,6 +515,7 @@ static const struct usb_device_id usb_amd_resume_quirk_list[] = {
+ */
+ static const struct usb_device_id usb_endpoint_blacklist[] = {
+ { USB_DEVICE_INTERFACE_NUMBER(0x0926, 0x0202, 1), .driver_info = 0x85 },
++ { USB_DEVICE_INTERFACE_NUMBER(0x0926, 0x0208, 1), .driver_info = 0x85 },
+ { }
+ };
+
+diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c
+index 80c3ef134e41d..1739c5ea93c82 100644
+--- a/drivers/usb/dwc3/gadget.c
++++ b/drivers/usb/dwc3/gadget.c
+@@ -1054,27 +1054,25 @@ static void __dwc3_prepare_one_trb(struct dwc3_ep *dep, struct dwc3_trb *trb,
+ * dwc3_prepare_one_trb - setup one TRB from one request
+ * @dep: endpoint for which this request is prepared
+ * @req: dwc3_request pointer
++ * @trb_length: buffer size of the TRB
+ * @chain: should this TRB be chained to the next?
+ * @node: only for isochronous endpoints. First TRB needs different type.
+ */
+ static void dwc3_prepare_one_trb(struct dwc3_ep *dep,
+- struct dwc3_request *req, unsigned chain, unsigned node)
++ struct dwc3_request *req, unsigned int trb_length,
++ unsigned chain, unsigned node)
+ {
+ struct dwc3_trb *trb;
+- unsigned int length;
+ dma_addr_t dma;
+ unsigned stream_id = req->request.stream_id;
+ unsigned short_not_ok = req->request.short_not_ok;
+ unsigned no_interrupt = req->request.no_interrupt;
+ unsigned is_last = req->request.is_last;
+
+- if (req->request.num_sgs > 0) {
+- length = sg_dma_len(req->start_sg);
++ if (req->request.num_sgs > 0)
+ dma = sg_dma_address(req->start_sg);
+- } else {
+- length = req->request.length;
++ else
+ dma = req->request.dma;
+- }
+
+ trb = &dep->trb_pool[dep->trb_enqueue];
+
+@@ -1086,7 +1084,7 @@ static void dwc3_prepare_one_trb(struct dwc3_ep *dep,
+
+ req->num_trbs++;
+
+- __dwc3_prepare_one_trb(dep, trb, dma, length, chain, node,
++ __dwc3_prepare_one_trb(dep, trb, dma, trb_length, chain, node,
+ stream_id, short_not_ok, no_interrupt, is_last);
+ }
+
+@@ -1096,16 +1094,27 @@ static void dwc3_prepare_one_trb_sg(struct dwc3_ep *dep,
+ struct scatterlist *sg = req->start_sg;
+ struct scatterlist *s;
+ int i;
+-
++ unsigned int length = req->request.length;
+ unsigned int remaining = req->request.num_mapped_sgs
+ - req->num_queued_sgs;
+
++ /*
++ * If we resume preparing the request, then get the remaining length of
++ * the request and resume where we left off.
++ */
++ for_each_sg(req->request.sg, s, req->num_queued_sgs, i)
++ length -= sg_dma_len(s);
++
+ for_each_sg(sg, s, remaining, i) {
+- unsigned int length = req->request.length;
+ unsigned int maxp = usb_endpoint_maxp(dep->endpoint.desc);
+ unsigned int rem = length % maxp;
++ unsigned int trb_length;
+ unsigned chain = true;
+
++ trb_length = min_t(unsigned int, length, sg_dma_len(s));
++
++ length -= trb_length;
++
+ /*
+ * IOMMU driver is coalescing the list of sgs which shares a
+ * page boundary into one and giving it to USB driver. With
+@@ -1113,7 +1122,7 @@ static void dwc3_prepare_one_trb_sg(struct dwc3_ep *dep,
+ * sgs passed. So mark the chain bit to false if it isthe last
+ * mapped sg.
+ */
+- if (i == remaining - 1)
++ if ((i == remaining - 1) || !length)
+ chain = false;
+
+ if (rem && usb_endpoint_dir_out(dep->endpoint.desc) && !chain) {
+@@ -1123,7 +1132,7 @@ static void dwc3_prepare_one_trb_sg(struct dwc3_ep *dep,
+ req->needs_extra_trb = true;
+
+ /* prepare normal TRB */
+- dwc3_prepare_one_trb(dep, req, true, i);
++ dwc3_prepare_one_trb(dep, req, trb_length, true, i);
+
+ /* Now prepare one extra TRB to align transfer size */
+ trb = &dep->trb_pool[dep->trb_enqueue];
+@@ -1134,8 +1143,39 @@ static void dwc3_prepare_one_trb_sg(struct dwc3_ep *dep,
+ req->request.short_not_ok,
+ req->request.no_interrupt,
+ req->request.is_last);
++ } else if (req->request.zero && req->request.length &&
++ !usb_endpoint_xfer_isoc(dep->endpoint.desc) &&
++ !rem && !chain) {
++ struct dwc3 *dwc = dep->dwc;
++ struct dwc3_trb *trb;
++
++ req->needs_extra_trb = true;
++
++ /* Prepare normal TRB */
++ dwc3_prepare_one_trb(dep, req, trb_length, true, i);
++
++ /* Prepare one extra TRB to handle ZLP */
++ trb = &dep->trb_pool[dep->trb_enqueue];
++ req->num_trbs++;
++ __dwc3_prepare_one_trb(dep, trb, dwc->bounce_addr, 0,
++ !req->direction, 1,
++ req->request.stream_id,
++ req->request.short_not_ok,
++ req->request.no_interrupt,
++ req->request.is_last);
++
++ /* Prepare one more TRB to handle MPS alignment */
++ if (!req->direction) {
++ trb = &dep->trb_pool[dep->trb_enqueue];
++ req->num_trbs++;
++ __dwc3_prepare_one_trb(dep, trb, dwc->bounce_addr, maxp,
++ false, 1, req->request.stream_id,
++ req->request.short_not_ok,
++ req->request.no_interrupt,
++ req->request.is_last);
++ }
+ } else {
+- dwc3_prepare_one_trb(dep, req, chain, i);
++ dwc3_prepare_one_trb(dep, req, trb_length, chain, i);
+ }
+
+ /*
+@@ -1150,6 +1190,16 @@ static void dwc3_prepare_one_trb_sg(struct dwc3_ep *dep,
+
+ req->num_queued_sgs++;
+
++ /*
++ * The number of pending SG entries may not correspond to the
++ * number of mapped SG entries. If all the data are queued, then
++ * don't include unused SG entries.
++ */
++ if (length == 0) {
++ req->num_pending_sgs -= req->request.num_mapped_sgs - req->num_queued_sgs;
++ break;
++ }
++
+ if (!dwc3_calc_trbs_left(dep))
+ break;
+ }
+@@ -1169,7 +1219,7 @@ static void dwc3_prepare_one_trb_linear(struct dwc3_ep *dep,
+ req->needs_extra_trb = true;
+
+ /* prepare normal TRB */
+- dwc3_prepare_one_trb(dep, req, true, 0);
++ dwc3_prepare_one_trb(dep, req, length, true, 0);
+
+ /* Now prepare one extra TRB to align transfer size */
+ trb = &dep->trb_pool[dep->trb_enqueue];
+@@ -1180,6 +1230,7 @@ static void dwc3_prepare_one_trb_linear(struct dwc3_ep *dep,
+ req->request.no_interrupt,
+ req->request.is_last);
+ } else if (req->request.zero && req->request.length &&
++ !usb_endpoint_xfer_isoc(dep->endpoint.desc) &&
+ (IS_ALIGNED(req->request.length, maxp))) {
+ struct dwc3 *dwc = dep->dwc;
+ struct dwc3_trb *trb;
+@@ -1187,18 +1238,29 @@ static void dwc3_prepare_one_trb_linear(struct dwc3_ep *dep,
+ req->needs_extra_trb = true;
+
+ /* prepare normal TRB */
+- dwc3_prepare_one_trb(dep, req, true, 0);
++ dwc3_prepare_one_trb(dep, req, length, true, 0);
+
+- /* Now prepare one extra TRB to handle ZLP */
++ /* Prepare one extra TRB to handle ZLP */
+ trb = &dep->trb_pool[dep->trb_enqueue];
+ req->num_trbs++;
+ __dwc3_prepare_one_trb(dep, trb, dwc->bounce_addr, 0,
+- false, 1, req->request.stream_id,
++ !req->direction, 1, req->request.stream_id,
+ req->request.short_not_ok,
+ req->request.no_interrupt,
+ req->request.is_last);
++
++ /* Prepare one more TRB to handle MPS alignment for OUT */
++ if (!req->direction) {
++ trb = &dep->trb_pool[dep->trb_enqueue];
++ req->num_trbs++;
++ __dwc3_prepare_one_trb(dep, trb, dwc->bounce_addr, maxp,
++ false, 1, req->request.stream_id,
++ req->request.short_not_ok,
++ req->request.no_interrupt,
++ req->request.is_last);
++ }
+ } else {
+- dwc3_prepare_one_trb(dep, req, false, 0);
++ dwc3_prepare_one_trb(dep, req, length, false, 0);
+ }
+ }
+
+@@ -2649,8 +2711,17 @@ static int dwc3_gadget_ep_cleanup_completed_request(struct dwc3_ep *dep,
+ status);
+
+ if (req->needs_extra_trb) {
++ unsigned int maxp = usb_endpoint_maxp(dep->endpoint.desc);
++
+ ret = dwc3_gadget_ep_reclaim_trb_linear(dep, req, event,
+ status);
++
++ /* Reclaim MPS padding TRB for ZLP */
++ if (!req->direction && req->request.zero && req->request.length &&
++ !usb_endpoint_xfer_isoc(dep->endpoint.desc) &&
++ (IS_ALIGNED(req->request.length, maxp)))
++ ret = dwc3_gadget_ep_reclaim_trb_linear(dep, req, event, status);
++
+ req->needs_extra_trb = false;
+ }
+
+diff --git a/drivers/usb/gadget/function/f_ncm.c b/drivers/usb/gadget/function/f_ncm.c
+index 1d900081b1f0c..b4206b0dede54 100644
+--- a/drivers/usb/gadget/function/f_ncm.c
++++ b/drivers/usb/gadget/function/f_ncm.c
+@@ -1181,12 +1181,15 @@ static int ncm_unwrap_ntb(struct gether *port,
+ int ndp_index;
+ unsigned dg_len, dg_len2;
+ unsigned ndp_len;
++ unsigned block_len;
+ struct sk_buff *skb2;
+ int ret = -EINVAL;
+- unsigned max_size = le32_to_cpu(ntb_parameters.dwNtbOutMaxSize);
++ unsigned ntb_max = le32_to_cpu(ntb_parameters.dwNtbOutMaxSize);
++ unsigned frame_max = le16_to_cpu(ecm_desc.wMaxSegmentSize);
+ const struct ndp_parser_opts *opts = ncm->parser_opts;
+ unsigned crc_len = ncm->is_crc ? sizeof(uint32_t) : 0;
+ int dgram_counter;
++ bool ndp_after_header;
+
+ /* dwSignature */
+ if (get_unaligned_le32(tmp) != opts->nth_sign) {
+@@ -1205,25 +1208,37 @@ static int ncm_unwrap_ntb(struct gether *port,
+ }
+ tmp++; /* skip wSequence */
+
++ block_len = get_ncm(&tmp, opts->block_length);
+ /* (d)wBlockLength */
+- if (get_ncm(&tmp, opts->block_length) > max_size) {
++ if (block_len > ntb_max) {
+ INFO(port->func.config->cdev, "OUT size exceeded\n");
+ goto err;
+ }
+
+ ndp_index = get_ncm(&tmp, opts->ndp_index);
++ ndp_after_header = false;
+
+ /* Run through all the NDP's in the NTB */
+ do {
+- /* NCM 3.2 */
+- if (((ndp_index % 4) != 0) &&
+- (ndp_index < opts->nth_size)) {
++ /*
++ * NCM 3.2
++ * dwNdpIndex
++ */
++ if (((ndp_index % 4) != 0) ||
++ (ndp_index < opts->nth_size) ||
++ (ndp_index > (block_len -
++ opts->ndp_size))) {
+ INFO(port->func.config->cdev, "Bad index: %#X\n",
+ ndp_index);
+ goto err;
+ }
++ if (ndp_index == opts->nth_size)
++ ndp_after_header = true;
+
+- /* walk through NDP */
++ /*
++ * walk through NDP
++ * dwSignature
++ */
+ tmp = (void *)(skb->data + ndp_index);
+ if (get_unaligned_le32(tmp) != ncm->ndp_sign) {
+ INFO(port->func.config->cdev, "Wrong NDP SIGN\n");
+@@ -1234,14 +1249,15 @@ static int ncm_unwrap_ntb(struct gether *port,
+ ndp_len = get_unaligned_le16(tmp++);
+ /*
+ * NCM 3.3.1
++ * wLength
+ * entry is 2 items
+ * item size is 16/32 bits, opts->dgram_item_len * 2 bytes
+ * minimal: struct usb_cdc_ncm_ndpX + normal entry + zero entry
+ * Each entry is a dgram index and a dgram length.
+ */
+ if ((ndp_len < opts->ndp_size
+- + 2 * 2 * (opts->dgram_item_len * 2))
+- || (ndp_len % opts->ndplen_align != 0)) {
++ + 2 * 2 * (opts->dgram_item_len * 2)) ||
++ (ndp_len % opts->ndplen_align != 0)) {
+ INFO(port->func.config->cdev, "Bad NDP length: %#X\n",
+ ndp_len);
+ goto err;
+@@ -1258,8 +1274,21 @@ static int ncm_unwrap_ntb(struct gether *port,
+
+ do {
+ index = index2;
++ /* wDatagramIndex[0] */
++ if ((index < opts->nth_size) ||
++ (index > block_len - opts->dpe_size)) {
++ INFO(port->func.config->cdev,
++ "Bad index: %#X\n", index);
++ goto err;
++ }
++
+ dg_len = dg_len2;
+- if (dg_len < 14 + crc_len) { /* ethernet hdr + crc */
++ /*
++ * wDatagramLength[0]
++ * ethernet hdr + crc or larger than max frame size
++ */
++ if ((dg_len < 14 + crc_len) ||
++ (dg_len > frame_max)) {
+ INFO(port->func.config->cdev,
+ "Bad dgram length: %#X\n", dg_len);
+ goto err;
+@@ -1283,6 +1312,37 @@ static int ncm_unwrap_ntb(struct gether *port,
+ index2 = get_ncm(&tmp, opts->dgram_item_len);
+ dg_len2 = get_ncm(&tmp, opts->dgram_item_len);
+
++ if (index2 == 0 || dg_len2 == 0)
++ break;
++
++ /* wDatagramIndex[1] */
++ if (ndp_after_header) {
++ if (index2 < opts->nth_size + opts->ndp_size) {
++ INFO(port->func.config->cdev,
++ "Bad index: %#X\n", index2);
++ goto err;
++ }
++ } else {
++ if (index2 < opts->nth_size + opts->dpe_size) {
++ INFO(port->func.config->cdev,
++ "Bad index: %#X\n", index2);
++ goto err;
++ }
++ }
++ if (index2 > block_len - opts->dpe_size) {
++ INFO(port->func.config->cdev,
++ "Bad index: %#X\n", index2);
++ goto err;
++ }
++
++ /* wDatagramLength[1] */
++ if ((dg_len2 < 14 + crc_len) ||
++ (dg_len2 > frame_max)) {
++ INFO(port->func.config->cdev,
++ "Bad dgram length: %#X\n", dg_len2);
++ goto err;
++ }
++
+ /*
+ * Copy the data into a new skb.
+ * This ensures the truesize is correct
+@@ -1299,9 +1359,6 @@ static int ncm_unwrap_ntb(struct gether *port,
+ ndp_len -= 2 * (opts->dgram_item_len * 2);
+
+ dgram_counter++;
+-
+- if (index2 == 0 || dg_len2 == 0)
+- break;
+ } while (ndp_len > 2 * (opts->dgram_item_len * 2));
+ } while (ndp_index);
+
+diff --git a/drivers/usb/gadget/function/f_tcm.c b/drivers/usb/gadget/function/f_tcm.c
+index eaf556ceac32b..0a45b4ef66a67 100644
+--- a/drivers/usb/gadget/function/f_tcm.c
++++ b/drivers/usb/gadget/function/f_tcm.c
+@@ -753,12 +753,13 @@ static int uasp_alloc_stream_res(struct f_uas *fu, struct uas_stream *stream)
+ goto err_sts;
+
+ return 0;
++
+ err_sts:
+- usb_ep_free_request(fu->ep_status, stream->req_status);
+- stream->req_status = NULL;
+-err_out:
+ usb_ep_free_request(fu->ep_out, stream->req_out);
+ stream->req_out = NULL;
++err_out:
++ usb_ep_free_request(fu->ep_in, stream->req_in);
++ stream->req_in = NULL;
+ out:
+ return -ENOMEM;
+ }
+diff --git a/drivers/usb/gadget/u_f.h b/drivers/usb/gadget/u_f.h
+index eaa13fd3dc7f3..e313c3b8dcb19 100644
+--- a/drivers/usb/gadget/u_f.h
++++ b/drivers/usb/gadget/u_f.h
+@@ -14,6 +14,7 @@
+ #define __U_F_H__
+
+ #include <linux/usb/gadget.h>
++#include <linux/overflow.h>
+
+ /* Variable Length Array Macros **********************************************/
+ #define vla_group(groupname) size_t groupname##__next = 0
+@@ -21,21 +22,36 @@
+
+ #define vla_item(groupname, type, name, n) \
+ size_t groupname##_##name##__offset = ({ \
+- size_t align_mask = __alignof__(type) - 1; \
+- size_t offset = (groupname##__next + align_mask) & ~align_mask;\
+- size_t size = (n) * sizeof(type); \
+- groupname##__next = offset + size; \
++ size_t offset = 0; \
++ if (groupname##__next != SIZE_MAX) { \
++ size_t align_mask = __alignof__(type) - 1; \
++ size_t size = array_size(n, sizeof(type)); \
++ offset = (groupname##__next + align_mask) & \
++ ~align_mask; \
++ if (check_add_overflow(offset, size, \
++ &groupname##__next)) { \
++ groupname##__next = SIZE_MAX; \
++ offset = 0; \
++ } \
++ } \
+ offset; \
+ })
+
+ #define vla_item_with_sz(groupname, type, name, n) \
+- size_t groupname##_##name##__sz = (n) * sizeof(type); \
+- size_t groupname##_##name##__offset = ({ \
+- size_t align_mask = __alignof__(type) - 1; \
+- size_t offset = (groupname##__next + align_mask) & ~align_mask;\
+- size_t size = groupname##_##name##__sz; \
+- groupname##__next = offset + size; \
+- offset; \
++ size_t groupname##_##name##__sz = array_size(n, sizeof(type)); \
++ size_t groupname##_##name##__offset = ({ \
++ size_t offset = 0; \
++ if (groupname##__next != SIZE_MAX) { \
++ size_t align_mask = __alignof__(type) - 1; \
++ offset = (groupname##__next + align_mask) & \
++ ~align_mask; \
++ if (check_add_overflow(offset, groupname##_##name##__sz,\
++ &groupname##__next)) { \
++ groupname##__next = SIZE_MAX; \
++ offset = 0; \
++ } \
++ } \
++ offset; \
+ })
+
+ #define vla_ptr(ptr, groupname, name) \
+diff --git a/drivers/usb/host/ohci-exynos.c b/drivers/usb/host/ohci-exynos.c
+index bd40e597f2566..5f5e8a64c8e2e 100644
+--- a/drivers/usb/host/ohci-exynos.c
++++ b/drivers/usb/host/ohci-exynos.c
+@@ -171,9 +171,8 @@ static int exynos_ohci_probe(struct platform_device *pdev)
+ hcd->rsrc_len = resource_size(res);
+
+ irq = platform_get_irq(pdev, 0);
+- if (!irq) {
+- dev_err(&pdev->dev, "Failed to get IRQ\n");
+- err = -ENODEV;
++ if (irq < 0) {
++ err = irq;
+ goto fail_io;
+ }
+
+diff --git a/drivers/usb/host/xhci-debugfs.c b/drivers/usb/host/xhci-debugfs.c
+index 76c3f29562d2b..448d7b11dec4c 100644
+--- a/drivers/usb/host/xhci-debugfs.c
++++ b/drivers/usb/host/xhci-debugfs.c
+@@ -273,7 +273,7 @@ static int xhci_slot_context_show(struct seq_file *s, void *unused)
+
+ static int xhci_endpoint_context_show(struct seq_file *s, void *unused)
+ {
+- int dci;
++ int ep_index;
+ dma_addr_t dma;
+ struct xhci_hcd *xhci;
+ struct xhci_ep_ctx *ep_ctx;
+@@ -282,9 +282,9 @@ static int xhci_endpoint_context_show(struct seq_file *s, void *unused)
+
+ xhci = hcd_to_xhci(bus_to_hcd(dev->udev->bus));
+
+- for (dci = 1; dci < 32; dci++) {
+- ep_ctx = xhci_get_ep_ctx(xhci, dev->out_ctx, dci);
+- dma = dev->out_ctx->dma + dci * CTX_SIZE(xhci->hcc_params);
++ for (ep_index = 0; ep_index < 31; ep_index++) {
++ ep_ctx = xhci_get_ep_ctx(xhci, dev->out_ctx, ep_index);
++ dma = dev->out_ctx->dma + (ep_index + 1) * CTX_SIZE(xhci->hcc_params);
+ seq_printf(s, "%pad: %s\n", &dma,
+ xhci_decode_ep_context(le32_to_cpu(ep_ctx->ep_info),
+ le32_to_cpu(ep_ctx->ep_info2),
+diff --git a/drivers/usb/host/xhci-hub.c b/drivers/usb/host/xhci-hub.c
+index f37316d2c8fa4..fa8f7935c2abe 100644
+--- a/drivers/usb/host/xhci-hub.c
++++ b/drivers/usb/host/xhci-hub.c
+@@ -740,15 +740,6 @@ static void xhci_hub_report_usb3_link_state(struct xhci_hcd *xhci,
+ {
+ u32 pls = status_reg & PORT_PLS_MASK;
+
+- /* resume state is a xHCI internal state.
+- * Do not report it to usb core, instead, pretend to be U3,
+- * thus usb core knows it's not ready for transfer
+- */
+- if (pls == XDEV_RESUME) {
+- *status |= USB_SS_PORT_LS_U3;
+- return;
+- }
+-
+ /* When the CAS bit is set then warm reset
+ * should be performed on port
+ */
+@@ -770,6 +761,16 @@ static void xhci_hub_report_usb3_link_state(struct xhci_hcd *xhci,
+ */
+ pls |= USB_PORT_STAT_CONNECTION;
+ } else {
++ /*
++ * Resume state is an xHCI internal state. Do not report it to
++ * usb core, instead, pretend to be U3, thus usb core knows
++ * it's not ready for transfer.
++ */
++ if (pls == XDEV_RESUME) {
++ *status |= USB_SS_PORT_LS_U3;
++ return;
++ }
++
+ /*
+ * If CAS bit isn't set but the Port is already at
+ * Compliance Mode, fake a connection so the USB core
+diff --git a/drivers/usb/host/xhci-pci-renesas.c b/drivers/usb/host/xhci-pci-renesas.c
+index 59b1965ad0a3f..f97ac9f52bf4d 100644
+--- a/drivers/usb/host/xhci-pci-renesas.c
++++ b/drivers/usb/host/xhci-pci-renesas.c
+@@ -50,20 +50,6 @@
+ #define RENESAS_RETRY 10000
+ #define RENESAS_DELAY 10
+
+-#define ROM_VALID_01 0x2013
+-#define ROM_VALID_02 0x2026
+-
+-static int renesas_verify_fw_version(struct pci_dev *pdev, u32 version)
+-{
+- switch (version) {
+- case ROM_VALID_01:
+- case ROM_VALID_02:
+- return 0;
+- }
+- dev_err(&pdev->dev, "FW has invalid version :%d\n", version);
+- return -EINVAL;
+-}
+-
+ static int renesas_fw_download_image(struct pci_dev *dev,
+ const u32 *fw, size_t step, bool rom)
+ {
+@@ -202,10 +188,7 @@ static int renesas_check_rom_state(struct pci_dev *pdev)
+
+ version &= RENESAS_FW_VERSION_FIELD;
+ version = version >> RENESAS_FW_VERSION_OFFSET;
+-
+- err = renesas_verify_fw_version(pdev, version);
+- if (err)
+- return err;
++ dev_dbg(&pdev->dev, "Found ROM version: %x\n", version);
+
+ /*
+ * Test if ROM is present and loaded, if so we can skip everything
+diff --git a/drivers/usb/host/xhci-tegra.c b/drivers/usb/host/xhci-tegra.c
+index ee6bf01775bba..545bdecc8f15e 100644
+--- a/drivers/usb/host/xhci-tegra.c
++++ b/drivers/usb/host/xhci-tegra.c
+@@ -1136,7 +1136,7 @@ static struct phy *tegra_xusb_get_phy(struct tegra_xusb *tegra, char *name,
+ unsigned int i, phy_count = 0;
+
+ for (i = 0; i < tegra->soc->num_types; i++) {
+- if (!strncmp(tegra->soc->phy_types[i].name, "usb2",
++ if (!strncmp(tegra->soc->phy_types[i].name, name,
+ strlen(name)))
+ return tegra->phys[phy_count+port];
+
+@@ -1258,6 +1258,8 @@ static int tegra_xusb_init_usb_phy(struct tegra_xusb *tegra)
+
+ INIT_WORK(&tegra->id_work, tegra_xhci_id_work);
+ tegra->id_nb.notifier_call = tegra_xhci_id_notify;
++ tegra->otg_usb2_port = -EINVAL;
++ tegra->otg_usb3_port = -EINVAL;
+
+ for (i = 0; i < tegra->num_usb_phys; i++) {
+ struct phy *phy = tegra_xusb_get_phy(tegra, "usb2", i);
+diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c
+index ed468eed299c5..113ab5d3cbfe5 100644
+--- a/drivers/usb/host/xhci.c
++++ b/drivers/usb/host/xhci.c
+@@ -3236,10 +3236,11 @@ static void xhci_endpoint_reset(struct usb_hcd *hcd,
+
+ wait_for_completion(cfg_cmd->completion);
+
+- ep->ep_state &= ~EP_SOFT_CLEAR_TOGGLE;
+ xhci_free_command(xhci, cfg_cmd);
+ cleanup:
+ xhci_free_command(xhci, stop_cmd);
++ if (ep->ep_state & EP_SOFT_CLEAR_TOGGLE)
++ ep->ep_state &= ~EP_SOFT_CLEAR_TOGGLE;
+ }
+
+ static int xhci_check_streams_endpoint(struct xhci_hcd *xhci,
+diff --git a/drivers/usb/misc/lvstest.c b/drivers/usb/misc/lvstest.c
+index 407fe7570f3bc..f8686139d6f39 100644
+--- a/drivers/usb/misc/lvstest.c
++++ b/drivers/usb/misc/lvstest.c
+@@ -426,7 +426,7 @@ static int lvs_rh_probe(struct usb_interface *intf,
+ USB_DT_SS_HUB_SIZE, USB_CTRL_GET_TIMEOUT);
+ if (ret < (USB_DT_HUB_NONVAR_SIZE + 2)) {
+ dev_err(&hdev->dev, "wrong root hub descriptor read %d\n", ret);
+- return ret;
++ return ret < 0 ? ret : -EINVAL;
+ }
+
+ /* submit urb to poll interrupt endpoint */
+diff --git a/drivers/usb/misc/sisusbvga/sisusb.c b/drivers/usb/misc/sisusbvga/sisusb.c
+index fc8a5da4a07c9..0734e6dd93862 100644
+--- a/drivers/usb/misc/sisusbvga/sisusb.c
++++ b/drivers/usb/misc/sisusbvga/sisusb.c
+@@ -761,7 +761,7 @@ static int sisusb_write_mem_bulk(struct sisusb_usb_data *sisusb, u32 addr,
+ u8 swap8, fromkern = kernbuffer ? 1 : 0;
+ u16 swap16;
+ u32 swap32, flag = (length >> 28) & 1;
+- char buf[4];
++ u8 buf[4];
+
+ /* if neither kernbuffer not userbuffer are given, assume
+ * data in obuf
+diff --git a/drivers/usb/misc/yurex.c b/drivers/usb/misc/yurex.c
+index be0505b8b5d4e..785080f790738 100644
+--- a/drivers/usb/misc/yurex.c
++++ b/drivers/usb/misc/yurex.c
+@@ -492,7 +492,7 @@ static ssize_t yurex_write(struct file *file, const char __user *user_buffer,
+ prepare_to_wait(&dev->waitq, &wait, TASK_INTERRUPTIBLE);
+ dev_dbg(&dev->interface->dev, "%s - submit %c\n", __func__,
+ dev->cntl_buffer[0]);
+- retval = usb_submit_urb(dev->cntl_urb, GFP_KERNEL);
++ retval = usb_submit_urb(dev->cntl_urb, GFP_ATOMIC);
+ if (retval >= 0)
+ timeout = schedule_timeout(YUREX_WRITE_TIMEOUT);
+ finish_wait(&dev->waitq, &wait);
+diff --git a/drivers/usb/storage/unusual_devs.h b/drivers/usb/storage/unusual_devs.h
+index b6a9a74516201..e5f9557690f9e 100644
+--- a/drivers/usb/storage/unusual_devs.h
++++ b/drivers/usb/storage/unusual_devs.h
+@@ -2328,7 +2328,7 @@ UNUSUAL_DEV( 0x357d, 0x7788, 0x0114, 0x0114,
+ "JMicron",
+ "USB to ATA/ATAPI Bridge",
+ USB_SC_DEVICE, USB_PR_DEVICE, NULL,
+- US_FL_BROKEN_FUA ),
++ US_FL_BROKEN_FUA | US_FL_IGNORE_UAS ),
+
+ /* Reported by Andrey Rahmatullin <wrar@altlinux.org> */
+ UNUSUAL_DEV( 0x4102, 0x1020, 0x0100, 0x0100,
+diff --git a/drivers/usb/storage/unusual_uas.h b/drivers/usb/storage/unusual_uas.h
+index 162b09d69f62f..711ab240058c7 100644
+--- a/drivers/usb/storage/unusual_uas.h
++++ b/drivers/usb/storage/unusual_uas.h
+@@ -28,6 +28,13 @@
+ * and don't forget to CC: the USB development list <linux-usb@vger.kernel.org>
+ */
+
++/* Reported-by: Till Dörges <doerges@pre-sense.de> */
++UNUSUAL_DEV(0x054c, 0x087d, 0x0000, 0x9999,
++ "Sony",
++ "PSZ-HA*",
++ USB_SC_DEVICE, USB_PR_DEVICE, NULL,
++ US_FL_NO_REPORT_OPCODES),
++
+ /* Reported-by: Julian Groß <julian.g@posteo.de> */
+ UNUSUAL_DEV(0x059f, 0x105f, 0x0000, 0x9999,
+ "LaCie",
+@@ -80,6 +87,13 @@ UNUSUAL_DEV(0x152d, 0x0578, 0x0000, 0x9999,
+ USB_SC_DEVICE, USB_PR_DEVICE, NULL,
+ US_FL_BROKEN_FUA),
+
++/* Reported-by: Thinh Nguyen <thinhn@synopsys.com> */
++UNUSUAL_DEV(0x154b, 0xf00d, 0x0000, 0x9999,
++ "PNY",
++ "Pro Elite SSD",
++ USB_SC_DEVICE, USB_PR_DEVICE, NULL,
++ US_FL_NO_ATA_1X),
++
+ /* Reported-by: Hans de Goede <hdegoede@redhat.com> */
+ UNUSUAL_DEV(0x2109, 0x0711, 0x0000, 0x9999,
+ "VIA",
+diff --git a/drivers/usb/typec/tcpm/tcpm.c b/drivers/usb/typec/tcpm/tcpm.c
+index 82b19ebd7838e..b2111fe6d140a 100644
+--- a/drivers/usb/typec/tcpm/tcpm.c
++++ b/drivers/usb/typec/tcpm/tcpm.c
+@@ -3321,13 +3321,31 @@ static void run_state_machine(struct tcpm_port *port)
+ tcpm_set_state(port, SNK_HARD_RESET_SINK_OFF, 0);
+ break;
+ case SRC_HARD_RESET_VBUS_OFF:
+- tcpm_set_vconn(port, true);
++ /*
++ * 7.1.5 Response to Hard Resets
++ * Hard Reset Signaling indicates a communication failure has occurred and the
++ * Source Shall stop driving VCONN, Shall remove Rp from the VCONN pin and Shall
++ * drive VBUS to vSafe0V as shown in Figure 7-9.
++ */
++ tcpm_set_vconn(port, false);
+ tcpm_set_vbus(port, false);
+ tcpm_set_roles(port, port->self_powered, TYPEC_SOURCE,
+ tcpm_data_role_for_source(port));
+- tcpm_set_state(port, SRC_HARD_RESET_VBUS_ON, PD_T_SRC_RECOVER);
++ /*
++ * If tcpc fails to notify vbus off, TCPM will wait for PD_T_SAFE_0V +
++ * PD_T_SRC_RECOVER before turning vbus back on.
++ * From Table 7-12 Sequence Description for a Source Initiated Hard Reset:
++ * 4. Policy Engine waits tPSHardReset after sending Hard Reset Signaling and then
++ * tells the Device Policy Manager to instruct the power supply to perform a
++ * Hard Reset. The transition to vSafe0V Shall occur within tSafe0V (t2).
++ * 5. After tSrcRecover the Source applies power to VBUS in an attempt to
++ * re-establish communication with the Sink and resume USB Default Operation.
++ * The transition to vSafe5V Shall occur within tSrcTurnOn(t4).
++ */
++ tcpm_set_state(port, SRC_HARD_RESET_VBUS_ON, PD_T_SAFE_0V + PD_T_SRC_RECOVER);
+ break;
+ case SRC_HARD_RESET_VBUS_ON:
++ tcpm_set_vconn(port, true);
+ tcpm_set_vbus(port, true);
+ port->tcpc->set_pd_rx(port->tcpc, true);
+ tcpm_set_attached_state(port, true);
+@@ -3887,7 +3905,11 @@ static void _tcpm_pd_vbus_off(struct tcpm_port *port)
+ tcpm_set_state(port, SNK_HARD_RESET_WAIT_VBUS, 0);
+ break;
+ case SRC_HARD_RESET_VBUS_OFF:
+- tcpm_set_state(port, SRC_HARD_RESET_VBUS_ON, 0);
++ /*
++ * After establishing the vSafe0V voltage condition on VBUS, the Source Shall wait
++ * tSrcRecover before re-applying VCONN and restoring VBUS to vSafe5V.
++ */
++ tcpm_set_state(port, SRC_HARD_RESET_VBUS_ON, PD_T_SRC_RECOVER);
+ break;
+ case HARD_RESET_SEND:
+ break;
+diff --git a/drivers/usb/typec/ucsi/displayport.c b/drivers/usb/typec/ucsi/displayport.c
+index 048381c058a5b..261131c9e37c6 100644
+--- a/drivers/usb/typec/ucsi/displayport.c
++++ b/drivers/usb/typec/ucsi/displayport.c
+@@ -288,8 +288,6 @@ struct typec_altmode *ucsi_register_displayport(struct ucsi_connector *con,
+ struct typec_altmode *alt;
+ struct ucsi_dp *dp;
+
+- mutex_lock(&con->lock);
+-
+ /* We can't rely on the firmware with the capabilities. */
+ desc->vdo |= DP_CAP_DP_SIGNALING | DP_CAP_RECEPTACLE;
+
+@@ -298,15 +296,12 @@ struct typec_altmode *ucsi_register_displayport(struct ucsi_connector *con,
+ desc->vdo |= all_assignments << 16;
+
+ alt = typec_port_register_altmode(con->port, desc);
+- if (IS_ERR(alt)) {
+- mutex_unlock(&con->lock);
++ if (IS_ERR(alt))
+ return alt;
+- }
+
+ dp = devm_kzalloc(&alt->dev, sizeof(*dp), GFP_KERNEL);
+ if (!dp) {
+ typec_unregister_altmode(alt);
+- mutex_unlock(&con->lock);
+ return ERR_PTR(-ENOMEM);
+ }
+
+@@ -319,7 +314,5 @@ struct typec_altmode *ucsi_register_displayport(struct ucsi_connector *con,
+ alt->ops = &ucsi_displayport_ops;
+ typec_altmode_set_drvdata(alt, dp);
+
+- mutex_unlock(&con->lock);
+-
+ return alt;
+ }
+diff --git a/drivers/usb/typec/ucsi/ucsi.c b/drivers/usb/typec/ucsi/ucsi.c
+index d0c63afaf345d..2999217c81090 100644
+--- a/drivers/usb/typec/ucsi/ucsi.c
++++ b/drivers/usb/typec/ucsi/ucsi.c
+@@ -146,40 +146,33 @@ static int ucsi_exec_command(struct ucsi *ucsi, u64 cmd)
+ return UCSI_CCI_LENGTH(cci);
+ }
+
+-static int ucsi_run_command(struct ucsi *ucsi, u64 command,
+- void *data, size_t size)
++int ucsi_send_command(struct ucsi *ucsi, u64 command,
++ void *data, size_t size)
+ {
+ u8 length;
+ int ret;
+
++ mutex_lock(&ucsi->ppm_lock);
++
+ ret = ucsi_exec_command(ucsi, command);
+ if (ret < 0)
+- return ret;
++ goto out;
+
+ length = ret;
+
+ if (data) {
+ ret = ucsi->ops->read(ucsi, UCSI_MESSAGE_IN, data, size);
+ if (ret)
+- return ret;
++ goto out;
+ }
+
+ ret = ucsi_acknowledge_command(ucsi);
+ if (ret)
+- return ret;
+-
+- return length;
+-}
+-
+-int ucsi_send_command(struct ucsi *ucsi, u64 command,
+- void *retval, size_t size)
+-{
+- int ret;
++ goto out;
+
+- mutex_lock(&ucsi->ppm_lock);
+- ret = ucsi_run_command(ucsi, command, retval, size);
++ ret = length;
++out:
+ mutex_unlock(&ucsi->ppm_lock);
+-
+ return ret;
+ }
+ EXPORT_SYMBOL_GPL(ucsi_send_command);
+@@ -205,7 +198,7 @@ void ucsi_altmode_update_active(struct ucsi_connector *con)
+ int i;
+
+ command = UCSI_GET_CURRENT_CAM | UCSI_CONNECTOR_NUMBER(con->num);
+- ret = ucsi_run_command(con->ucsi, command, &cur, sizeof(cur));
++ ret = ucsi_send_command(con->ucsi, command, &cur, sizeof(cur));
+ if (ret < 0) {
+ if (con->ucsi->version > 0x0100) {
+ dev_err(con->ucsi->dev,
+@@ -354,7 +347,7 @@ ucsi_register_altmodes_nvidia(struct ucsi_connector *con, u8 recipient)
+ command |= UCSI_GET_ALTMODE_RECIPIENT(recipient);
+ command |= UCSI_GET_ALTMODE_CONNECTOR_NUMBER(con->num);
+ command |= UCSI_GET_ALTMODE_OFFSET(i);
+- len = ucsi_run_command(con->ucsi, command, &alt, sizeof(alt));
++ len = ucsi_send_command(con->ucsi, command, &alt, sizeof(alt));
+ /*
+ * We are collecting all altmodes first and then registering.
+ * Some type-C device will return zero length data beyond last
+@@ -431,7 +424,7 @@ static int ucsi_register_altmodes(struct ucsi_connector *con, u8 recipient)
+ command |= UCSI_GET_ALTMODE_RECIPIENT(recipient);
+ command |= UCSI_GET_ALTMODE_CONNECTOR_NUMBER(con->num);
+ command |= UCSI_GET_ALTMODE_OFFSET(i);
+- len = ucsi_run_command(con->ucsi, command, alt, sizeof(alt));
++ len = ucsi_send_command(con->ucsi, command, alt, sizeof(alt));
+ if (len <= 0)
+ return len;
+
+@@ -502,7 +495,7 @@ static void ucsi_get_pdos(struct ucsi_connector *con, int is_partner)
+ command |= UCSI_GET_PDOS_PARTNER_PDO(is_partner);
+ command |= UCSI_GET_PDOS_NUM_PDOS(UCSI_MAX_PDOS - 1);
+ command |= UCSI_GET_PDOS_SRC_PDOS;
+- ret = ucsi_run_command(ucsi, command, con->src_pdos,
++ ret = ucsi_send_command(ucsi, command, con->src_pdos,
+ sizeof(con->src_pdos));
+ if (ret < 0) {
+ dev_err(ucsi->dev, "UCSI_GET_PDOS failed (%d)\n", ret);
+@@ -681,7 +674,7 @@ static void ucsi_handle_connector_change(struct work_struct *work)
+ */
+ command = UCSI_GET_CAM_SUPPORTED;
+ command |= UCSI_CONNECTOR_NUMBER(con->num);
+- ucsi_run_command(con->ucsi, command, NULL, 0);
++ ucsi_send_command(con->ucsi, command, NULL, 0);
+ }
+
+ if (con->status.change & UCSI_CONSTAT_PARTNER_CHANGE)
+@@ -736,20 +729,24 @@ static int ucsi_reset_ppm(struct ucsi *ucsi)
+ u32 cci;
+ int ret;
+
++ mutex_lock(&ucsi->ppm_lock);
++
+ ret = ucsi->ops->async_write(ucsi, UCSI_CONTROL, &command,
+ sizeof(command));
+ if (ret < 0)
+- return ret;
++ goto out;
+
+ tmo = jiffies + msecs_to_jiffies(UCSI_TIMEOUT_MS);
+
+ do {
+- if (time_is_before_jiffies(tmo))
+- return -ETIMEDOUT;
++ if (time_is_before_jiffies(tmo)) {
++ ret = -ETIMEDOUT;
++ goto out;
++ }
+
+ ret = ucsi->ops->read(ucsi, UCSI_CCI, &cci, sizeof(cci));
+ if (ret)
+- return ret;
++ goto out;
+
+ /* If the PPM is still doing something else, reset it again. */
+ if (cci & ~UCSI_CCI_RESET_COMPLETE) {
+@@ -757,13 +754,15 @@ static int ucsi_reset_ppm(struct ucsi *ucsi)
+ &command,
+ sizeof(command));
+ if (ret < 0)
+- return ret;
++ goto out;
+ }
+
+ msleep(20);
+ } while (!(cci & UCSI_CCI_RESET_COMPLETE));
+
+- return 0;
++out:
++ mutex_unlock(&ucsi->ppm_lock);
++ return ret;
+ }
+
+ static int ucsi_role_cmd(struct ucsi_connector *con, u64 command)
+@@ -775,9 +774,7 @@ static int ucsi_role_cmd(struct ucsi_connector *con, u64 command)
+ u64 c;
+
+ /* PPM most likely stopped responding. Resetting everything. */
+- mutex_lock(&con->ucsi->ppm_lock);
+ ucsi_reset_ppm(con->ucsi);
+- mutex_unlock(&con->ucsi->ppm_lock);
+
+ c = UCSI_SET_NOTIFICATION_ENABLE | con->ucsi->ntfy;
+ ucsi_send_command(con->ucsi, c, NULL, 0);
+@@ -901,12 +898,15 @@ static int ucsi_register_port(struct ucsi *ucsi, int index)
+ con->num = index + 1;
+ con->ucsi = ucsi;
+
++ /* Delay other interactions with the con until registration is complete */
++ mutex_lock(&con->lock);
++
+ /* Get connector capability */
+ command = UCSI_GET_CONNECTOR_CAPABILITY;
+ command |= UCSI_CONNECTOR_NUMBER(con->num);
+- ret = ucsi_run_command(ucsi, command, &con->cap, sizeof(con->cap));
++ ret = ucsi_send_command(ucsi, command, &con->cap, sizeof(con->cap));
+ if (ret < 0)
+- return ret;
++ goto out;
+
+ if (con->cap.op_mode & UCSI_CONCAP_OPMODE_DRP)
+ cap->data = TYPEC_PORT_DRD;
+@@ -938,27 +938,32 @@ static int ucsi_register_port(struct ucsi *ucsi, int index)
+
+ ret = ucsi_register_port_psy(con);
+ if (ret)
+- return ret;
++ goto out;
+
+ /* Register the connector */
+ con->port = typec_register_port(ucsi->dev, cap);
+- if (IS_ERR(con->port))
+- return PTR_ERR(con->port);
++ if (IS_ERR(con->port)) {
++ ret = PTR_ERR(con->port);
++ goto out;
++ }
+
+ /* Alternate modes */
+ ret = ucsi_register_altmodes(con, UCSI_RECIPIENT_CON);
+- if (ret)
++ if (ret) {
+ dev_err(ucsi->dev, "con%d: failed to register alt modes\n",
+ con->num);
++ goto out;
++ }
+
+ /* Get the status */
+ command = UCSI_GET_CONNECTOR_STATUS | UCSI_CONNECTOR_NUMBER(con->num);
+- ret = ucsi_run_command(ucsi, command, &con->status,
+- sizeof(con->status));
++ ret = ucsi_send_command(ucsi, command, &con->status, sizeof(con->status));
+ if (ret < 0) {
+ dev_err(ucsi->dev, "con%d: failed to get status\n", con->num);
+- return 0;
++ ret = 0;
++ goto out;
+ }
++ ret = 0; /* ucsi_send_command() returns length on success */
+
+ switch (UCSI_CONSTAT_PARTNER_TYPE(con->status.flags)) {
+ case UCSI_CONSTAT_PARTNER_TYPE_UFP:
+@@ -983,17 +988,21 @@ static int ucsi_register_port(struct ucsi *ucsi, int index)
+
+ if (con->partner) {
+ ret = ucsi_register_altmodes(con, UCSI_RECIPIENT_SOP);
+- if (ret)
++ if (ret) {
+ dev_err(ucsi->dev,
+ "con%d: failed to register alternate modes\n",
+ con->num);
+- else
++ ret = 0;
++ } else {
+ ucsi_altmode_update_active(con);
++ }
+ }
+
+ trace_ucsi_register_port(con->num, &con->status);
+
+- return 0;
++out:
++ mutex_unlock(&con->lock);
++ return ret;
+ }
+
+ /**
+@@ -1009,8 +1018,6 @@ int ucsi_init(struct ucsi *ucsi)
+ int ret;
+ int i;
+
+- mutex_lock(&ucsi->ppm_lock);
+-
+ /* Reset the PPM */
+ ret = ucsi_reset_ppm(ucsi);
+ if (ret) {
+@@ -1021,13 +1028,13 @@ int ucsi_init(struct ucsi *ucsi)
+ /* Enable basic notifications */
+ ucsi->ntfy = UCSI_ENABLE_NTFY_CMD_COMPLETE | UCSI_ENABLE_NTFY_ERROR;
+ command = UCSI_SET_NOTIFICATION_ENABLE | ucsi->ntfy;
+- ret = ucsi_run_command(ucsi, command, NULL, 0);
++ ret = ucsi_send_command(ucsi, command, NULL, 0);
+ if (ret < 0)
+ goto err_reset;
+
+ /* Get PPM capabilities */
+ command = UCSI_GET_CAPABILITY;
+- ret = ucsi_run_command(ucsi, command, &ucsi->cap, sizeof(ucsi->cap));
++ ret = ucsi_send_command(ucsi, command, &ucsi->cap, sizeof(ucsi->cap));
+ if (ret < 0)
+ goto err_reset;
+
+@@ -1054,12 +1061,10 @@ int ucsi_init(struct ucsi *ucsi)
+ /* Enable all notifications */
+ ucsi->ntfy = UCSI_ENABLE_NTFY_ALL;
+ command = UCSI_SET_NOTIFICATION_ENABLE | ucsi->ntfy;
+- ret = ucsi_run_command(ucsi, command, NULL, 0);
++ ret = ucsi_send_command(ucsi, command, NULL, 0);
+ if (ret < 0)
+ goto err_unregister;
+
+- mutex_unlock(&ucsi->ppm_lock);
+-
+ return 0;
+
+ err_unregister:
+@@ -1074,8 +1079,6 @@ err_unregister:
+ err_reset:
+ ucsi_reset_ppm(ucsi);
+ err:
+- mutex_unlock(&ucsi->ppm_lock);
+-
+ return ret;
+ }
+ EXPORT_SYMBOL_GPL(ucsi_init);
+diff --git a/drivers/usb/usbip/stub_dev.c b/drivers/usb/usbip/stub_dev.c
+index 2305d425e6c9a..9d7d642022d1f 100644
+--- a/drivers/usb/usbip/stub_dev.c
++++ b/drivers/usb/usbip/stub_dev.c
+@@ -461,6 +461,11 @@ static void stub_disconnect(struct usb_device *udev)
+ return;
+ }
+
++static bool usbip_match(struct usb_device *udev)
++{
++ return true;
++}
++
+ #ifdef CONFIG_PM
+
+ /* These functions need usb_port_suspend and usb_port_resume,
+@@ -486,6 +491,7 @@ struct usb_device_driver stub_driver = {
+ .name = "usbip-host",
+ .probe = stub_probe,
+ .disconnect = stub_disconnect,
++ .match = usbip_match,
+ #ifdef CONFIG_PM
+ .suspend = stub_suspend,
+ .resume = stub_resume,
+diff --git a/drivers/vdpa/ifcvf/ifcvf_base.h b/drivers/vdpa/ifcvf/ifcvf_base.h
+index f4554412e607f..29efa75cdfce5 100644
+--- a/drivers/vdpa/ifcvf/ifcvf_base.h
++++ b/drivers/vdpa/ifcvf/ifcvf_base.h
+@@ -84,7 +84,7 @@ struct ifcvf_hw {
+ void __iomem * const *base;
+ char config_msix_name[256];
+ struct vdpa_callback config_cb;
+-
++ unsigned int config_irq;
+ };
+
+ struct ifcvf_adapter {
+diff --git a/drivers/vdpa/ifcvf/ifcvf_main.c b/drivers/vdpa/ifcvf/ifcvf_main.c
+index f5a60c14b9799..7a6d899e541df 100644
+--- a/drivers/vdpa/ifcvf/ifcvf_main.c
++++ b/drivers/vdpa/ifcvf/ifcvf_main.c
+@@ -53,6 +53,7 @@ static void ifcvf_free_irq(struct ifcvf_adapter *adapter, int queues)
+ for (i = 0; i < queues; i++)
+ devm_free_irq(&pdev->dev, vf->vring[i].irq, &vf->vring[i]);
+
++ devm_free_irq(&pdev->dev, vf->config_irq, vf);
+ ifcvf_free_irq_vectors(pdev);
+ }
+
+@@ -72,10 +73,14 @@ static int ifcvf_request_irq(struct ifcvf_adapter *adapter)
+ snprintf(vf->config_msix_name, 256, "ifcvf[%s]-config\n",
+ pci_name(pdev));
+ vector = 0;
+- irq = pci_irq_vector(pdev, vector);
+- ret = devm_request_irq(&pdev->dev, irq,
++ vf->config_irq = pci_irq_vector(pdev, vector);
++ ret = devm_request_irq(&pdev->dev, vf->config_irq,
+ ifcvf_config_changed, 0,
+ vf->config_msix_name, vf);
++ if (ret) {
++ IFCVF_ERR(pdev, "Failed to request config irq\n");
++ return ret;
++ }
+
+ for (i = 0; i < IFCVF_MAX_QUEUE_PAIRS * 2; i++) {
+ snprintf(vf->vring[i].msix_name, 256, "ifcvf[%s]-%d\n",
+diff --git a/drivers/video/fbdev/controlfb.c b/drivers/video/fbdev/controlfb.c
+index 9c4f1be856eca..547abeb39f87a 100644
+--- a/drivers/video/fbdev/controlfb.c
++++ b/drivers/video/fbdev/controlfb.c
+@@ -49,6 +49,8 @@
+ #include <linux/cuda.h>
+ #ifdef CONFIG_PPC_PMAC
+ #include <asm/prom.h>
++#endif
++#ifdef CONFIG_BOOTX_TEXT
+ #include <asm/btext.h>
+ #endif
+
+diff --git a/drivers/video/fbdev/core/fbcon.c b/drivers/video/fbdev/core/fbcon.c
+index e2a490c5ae08f..fbf10e62bcde9 100644
+--- a/drivers/video/fbdev/core/fbcon.c
++++ b/drivers/video/fbdev/core/fbcon.c
+@@ -2191,6 +2191,9 @@ static void updatescrollmode(struct fbcon_display *p,
+ }
+ }
+
++#define PITCH(w) (((w) + 7) >> 3)
++#define CALC_FONTSZ(h, p, c) ((h) * (p) * (c)) /* size = height * pitch * charcount */
++
+ static int fbcon_resize(struct vc_data *vc, unsigned int width,
+ unsigned int height, unsigned int user)
+ {
+@@ -2200,6 +2203,24 @@ static int fbcon_resize(struct vc_data *vc, unsigned int width,
+ struct fb_var_screeninfo var = info->var;
+ int x_diff, y_diff, virt_w, virt_h, virt_fw, virt_fh;
+
++ if (ops->p && ops->p->userfont && FNTSIZE(vc->vc_font.data)) {
++ int size;
++ int pitch = PITCH(vc->vc_font.width);
++
++ /*
++ * If user font, ensure that a possible change to user font
++ * height or width will not allow a font data out-of-bounds access.
++ * NOTE: must use original charcount in calculation as font
++ * charcount can change and cannot be used to determine the
++ * font data allocated size.
++ */
++ if (pitch <= 0)
++ return -EINVAL;
++ size = CALC_FONTSZ(vc->vc_font.height, pitch, FNTCHARCNT(vc->vc_font.data));
++ if (size > FNTSIZE(vc->vc_font.data))
++ return -EINVAL;
++ }
++
+ virt_w = FBCON_SWAP(ops->rotate, width, height);
+ virt_h = FBCON_SWAP(ops->rotate, height, width);
+ virt_fw = FBCON_SWAP(ops->rotate, vc->vc_font.width,
+@@ -2652,7 +2673,7 @@ static int fbcon_set_font(struct vc_data *vc, struct console_font *font,
+ int size;
+ int i, csum;
+ u8 *new_data, *data = font->data;
+- int pitch = (font->width+7) >> 3;
++ int pitch = PITCH(font->width);
+
+ /* Is there a reason why fbconsole couldn't handle any charcount >256?
+ * If not this check should be changed to charcount < 256 */
+@@ -2668,7 +2689,7 @@ static int fbcon_set_font(struct vc_data *vc, struct console_font *font,
+ if (fbcon_invalid_charcount(info, charcount))
+ return -EINVAL;
+
+- size = h * pitch * charcount;
++ size = CALC_FONTSZ(h, pitch, charcount);
+
+ new_data = kmalloc(FONT_EXTRA_WORDS * sizeof(int) + size, GFP_USER);
+
+diff --git a/drivers/video/fbdev/core/fbmem.c b/drivers/video/fbdev/core/fbmem.c
+index 30e73ec4ad5c8..da7c88ffaa6a8 100644
+--- a/drivers/video/fbdev/core/fbmem.c
++++ b/drivers/video/fbdev/core/fbmem.c
+@@ -957,7 +957,6 @@ static int fb_check_caps(struct fb_info *info, struct fb_var_screeninfo *var,
+ int
+ fb_set_var(struct fb_info *info, struct fb_var_screeninfo *var)
+ {
+- int flags = info->flags;
+ int ret = 0;
+ u32 activate;
+ struct fb_var_screeninfo old_var;
+@@ -1052,9 +1051,6 @@ fb_set_var(struct fb_info *info, struct fb_var_screeninfo *var)
+ event.data = &mode;
+ fb_notifier_call_chain(FB_EVENT_MODE_CHANGE, &event);
+
+- if (flags & FBINFO_MISC_USEREVENT)
+- fbcon_update_vcs(info, activate & FB_ACTIVATE_ALL);
+-
+ return 0;
+ }
+ EXPORT_SYMBOL(fb_set_var);
+@@ -1105,9 +1101,9 @@ static long do_fb_ioctl(struct fb_info *info, unsigned int cmd,
+ return -EFAULT;
+ console_lock();
+ lock_fb_info(info);
+- info->flags |= FBINFO_MISC_USEREVENT;
+ ret = fb_set_var(info, &var);
+- info->flags &= ~FBINFO_MISC_USEREVENT;
++ if (!ret)
++ fbcon_update_vcs(info, var.activate & FB_ACTIVATE_ALL);
+ unlock_fb_info(info);
+ console_unlock();
+ if (!ret && copy_to_user(argp, &var, sizeof(var)))
+diff --git a/drivers/video/fbdev/core/fbsysfs.c b/drivers/video/fbdev/core/fbsysfs.c
+index d54c88f88991d..65dae05fff8e6 100644
+--- a/drivers/video/fbdev/core/fbsysfs.c
++++ b/drivers/video/fbdev/core/fbsysfs.c
+@@ -91,9 +91,9 @@ static int activate(struct fb_info *fb_info, struct fb_var_screeninfo *var)
+
+ var->activate |= FB_ACTIVATE_FORCE;
+ console_lock();
+- fb_info->flags |= FBINFO_MISC_USEREVENT;
+ err = fb_set_var(fb_info, var);
+- fb_info->flags &= ~FBINFO_MISC_USEREVENT;
++ if (!err)
++ fbcon_update_vcs(fb_info, var->activate & FB_ACTIVATE_ALL);
+ console_unlock();
+ if (err)
+ return err;
+diff --git a/drivers/video/fbdev/omap2/omapfb/dss/dispc.c b/drivers/video/fbdev/omap2/omapfb/dss/dispc.c
+index 4a16798b2ecd8..e2b572761bf61 100644
+--- a/drivers/video/fbdev/omap2/omapfb/dss/dispc.c
++++ b/drivers/video/fbdev/omap2/omapfb/dss/dispc.c
+@@ -520,8 +520,11 @@ int dispc_runtime_get(void)
+ DSSDBG("dispc_runtime_get\n");
+
+ r = pm_runtime_get_sync(&dispc.pdev->dev);
+- WARN_ON(r < 0);
+- return r < 0 ? r : 0;
++ if (WARN_ON(r < 0)) {
++ pm_runtime_put_sync(&dispc.pdev->dev);
++ return r;
++ }
++ return 0;
+ }
+ EXPORT_SYMBOL(dispc_runtime_get);
+
+diff --git a/drivers/video/fbdev/omap2/omapfb/dss/dsi.c b/drivers/video/fbdev/omap2/omapfb/dss/dsi.c
+index d620376216e1d..6f9c25fec9946 100644
+--- a/drivers/video/fbdev/omap2/omapfb/dss/dsi.c
++++ b/drivers/video/fbdev/omap2/omapfb/dss/dsi.c
+@@ -1137,8 +1137,11 @@ static int dsi_runtime_get(struct platform_device *dsidev)
+ DSSDBG("dsi_runtime_get\n");
+
+ r = pm_runtime_get_sync(&dsi->pdev->dev);
+- WARN_ON(r < 0);
+- return r < 0 ? r : 0;
++ if (WARN_ON(r < 0)) {
++ pm_runtime_put_sync(&dsi->pdev->dev);
++ return r;
++ }
++ return 0;
+ }
+
+ static void dsi_runtime_put(struct platform_device *dsidev)
+diff --git a/drivers/video/fbdev/omap2/omapfb/dss/dss.c b/drivers/video/fbdev/omap2/omapfb/dss/dss.c
+index bfc5c4c5a26ad..a6b1c1598040d 100644
+--- a/drivers/video/fbdev/omap2/omapfb/dss/dss.c
++++ b/drivers/video/fbdev/omap2/omapfb/dss/dss.c
+@@ -768,8 +768,11 @@ int dss_runtime_get(void)
+ DSSDBG("dss_runtime_get\n");
+
+ r = pm_runtime_get_sync(&dss.pdev->dev);
+- WARN_ON(r < 0);
+- return r < 0 ? r : 0;
++ if (WARN_ON(r < 0)) {
++ pm_runtime_put_sync(&dss.pdev->dev);
++ return r;
++ }
++ return 0;
+ }
+
+ void dss_runtime_put(void)
+diff --git a/drivers/video/fbdev/omap2/omapfb/dss/hdmi4.c b/drivers/video/fbdev/omap2/omapfb/dss/hdmi4.c
+index 7060ae56c062c..4804aab342981 100644
+--- a/drivers/video/fbdev/omap2/omapfb/dss/hdmi4.c
++++ b/drivers/video/fbdev/omap2/omapfb/dss/hdmi4.c
+@@ -39,9 +39,10 @@ static int hdmi_runtime_get(void)
+ DSSDBG("hdmi_runtime_get\n");
+
+ r = pm_runtime_get_sync(&hdmi.pdev->dev);
+- WARN_ON(r < 0);
+- if (r < 0)
++ if (WARN_ON(r < 0)) {
++ pm_runtime_put_sync(&hdmi.pdev->dev);
+ return r;
++ }
+
+ return 0;
+ }
+diff --git a/drivers/video/fbdev/omap2/omapfb/dss/hdmi5.c b/drivers/video/fbdev/omap2/omapfb/dss/hdmi5.c
+index ac49531e47327..a06b6f1355bdb 100644
+--- a/drivers/video/fbdev/omap2/omapfb/dss/hdmi5.c
++++ b/drivers/video/fbdev/omap2/omapfb/dss/hdmi5.c
+@@ -43,9 +43,10 @@ static int hdmi_runtime_get(void)
+ DSSDBG("hdmi_runtime_get\n");
+
+ r = pm_runtime_get_sync(&hdmi.pdev->dev);
+- WARN_ON(r < 0);
+- if (r < 0)
++ if (WARN_ON(r < 0)) {
++ pm_runtime_put_sync(&hdmi.pdev->dev);
+ return r;
++ }
+
+ return 0;
+ }
+diff --git a/drivers/video/fbdev/omap2/omapfb/dss/venc.c b/drivers/video/fbdev/omap2/omapfb/dss/venc.c
+index d5404d56c922f..0b0ad20afd630 100644
+--- a/drivers/video/fbdev/omap2/omapfb/dss/venc.c
++++ b/drivers/video/fbdev/omap2/omapfb/dss/venc.c
+@@ -348,8 +348,11 @@ static int venc_runtime_get(void)
+ DSSDBG("venc_runtime_get\n");
+
+ r = pm_runtime_get_sync(&venc.pdev->dev);
+- WARN_ON(r < 0);
+- return r < 0 ? r : 0;
++ if (WARN_ON(r < 0)) {
++ pm_runtime_put_sync(&venc.pdev->dev);
++ return r;
++ }
++ return 0;
+ }
+
+ static void venc_runtime_put(void)
+diff --git a/drivers/video/fbdev/ps3fb.c b/drivers/video/fbdev/ps3fb.c
+index 9df78fb772672..203c254f8f6cb 100644
+--- a/drivers/video/fbdev/ps3fb.c
++++ b/drivers/video/fbdev/ps3fb.c
+@@ -29,6 +29,7 @@
+ #include <linux/freezer.h>
+ #include <linux/uaccess.h>
+ #include <linux/fb.h>
++#include <linux/fbcon.h>
+ #include <linux/init.h>
+
+ #include <asm/cell-regs.h>
+@@ -824,12 +825,12 @@ static int ps3fb_ioctl(struct fb_info *info, unsigned int cmd,
+ var = info->var;
+ fb_videomode_to_var(&var, vmode);
+ console_lock();
+- info->flags |= FBINFO_MISC_USEREVENT;
+ /* Force, in case only special bits changed */
+ var.activate |= FB_ACTIVATE_FORCE;
+ par->new_mode_id = val;
+ retval = fb_set_var(info, &var);
+- info->flags &= ~FBINFO_MISC_USEREVENT;
++ if (!retval)
++ fbcon_update_vcs(info, var.activate & FB_ACTIVATE_ALL);
+ console_unlock();
+ }
+ break;
+diff --git a/drivers/xen/events/events_base.c b/drivers/xen/events/events_base.c
+index 140c7bf33a989..90b8f56fbadb1 100644
+--- a/drivers/xen/events/events_base.c
++++ b/drivers/xen/events/events_base.c
+@@ -156,7 +156,7 @@ int get_evtchn_to_irq(evtchn_port_t evtchn)
+ /* Get info for IRQ */
+ struct irq_info *info_for_irq(unsigned irq)
+ {
+- return irq_get_handler_data(irq);
++ return irq_get_chip_data(irq);
+ }
+
+ /* Constructors for packed IRQ information. */
+@@ -377,7 +377,7 @@ static void xen_irq_init(unsigned irq)
+ info->type = IRQT_UNBOUND;
+ info->refcnt = -1;
+
+- irq_set_handler_data(irq, info);
++ irq_set_chip_data(irq, info);
+
+ list_add_tail(&info->list, &xen_irq_list_head);
+ }
+@@ -426,14 +426,14 @@ static int __must_check xen_allocate_irq_gsi(unsigned gsi)
+
+ static void xen_free_irq(unsigned irq)
+ {
+- struct irq_info *info = irq_get_handler_data(irq);
++ struct irq_info *info = irq_get_chip_data(irq);
+
+ if (WARN_ON(!info))
+ return;
+
+ list_del(&info->list);
+
+- irq_set_handler_data(irq, NULL);
++ irq_set_chip_data(irq, NULL);
+
+ WARN_ON(info->refcnt > 0);
+
+@@ -603,7 +603,7 @@ EXPORT_SYMBOL_GPL(xen_irq_from_gsi);
+ static void __unbind_from_irq(unsigned int irq)
+ {
+ evtchn_port_t evtchn = evtchn_from_irq(irq);
+- struct irq_info *info = irq_get_handler_data(irq);
++ struct irq_info *info = irq_get_chip_data(irq);
+
+ if (info->refcnt > 0) {
+ info->refcnt--;
+@@ -1108,7 +1108,7 @@ int bind_ipi_to_irqhandler(enum ipi_vector ipi,
+
+ void unbind_from_irqhandler(unsigned int irq, void *dev_id)
+ {
+- struct irq_info *info = irq_get_handler_data(irq);
++ struct irq_info *info = irq_get_chip_data(irq);
+
+ if (WARN_ON(!info))
+ return;
+@@ -1142,7 +1142,7 @@ int evtchn_make_refcounted(evtchn_port_t evtchn)
+ if (irq == -1)
+ return -ENOENT;
+
+- info = irq_get_handler_data(irq);
++ info = irq_get_chip_data(irq);
+
+ if (!info)
+ return -ENOENT;
+@@ -1170,7 +1170,7 @@ int evtchn_get(evtchn_port_t evtchn)
+ if (irq == -1)
+ goto done;
+
+- info = irq_get_handler_data(irq);
++ info = irq_get_chip_data(irq);
+
+ if (!info)
+ goto done;
+diff --git a/fs/btrfs/ctree.h b/fs/btrfs/ctree.h
+index 6fdb3392a06d5..284d9afa900b3 100644
+--- a/fs/btrfs/ctree.h
++++ b/fs/btrfs/ctree.h
+@@ -2468,7 +2468,7 @@ int btrfs_pin_extent_for_log_replay(struct btrfs_trans_handle *trans,
+ u64 bytenr, u64 num_bytes);
+ int btrfs_exclude_logged_extents(struct extent_buffer *eb);
+ int btrfs_cross_ref_exist(struct btrfs_root *root,
+- u64 objectid, u64 offset, u64 bytenr);
++ u64 objectid, u64 offset, u64 bytenr, bool strict);
+ struct extent_buffer *btrfs_alloc_tree_block(struct btrfs_trans_handle *trans,
+ struct btrfs_root *root,
+ u64 parent, u64 root_objectid,
+@@ -2854,7 +2854,7 @@ struct extent_map *btrfs_get_extent_fiemap(struct btrfs_inode *inode,
+ u64 start, u64 len);
+ noinline int can_nocow_extent(struct inode *inode, u64 offset, u64 *len,
+ u64 *orig_start, u64 *orig_block_len,
+- u64 *ram_bytes);
++ u64 *ram_bytes, bool strict);
+
+ void __btrfs_del_delalloc_inode(struct btrfs_root *root,
+ struct btrfs_inode *inode);
+diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
+index 66618a1794ea7..983f4d58ae59b 100644
+--- a/fs/btrfs/disk-io.c
++++ b/fs/btrfs/disk-io.c
+@@ -4574,6 +4574,7 @@ static void btrfs_cleanup_bg_io(struct btrfs_block_group *cache)
+ cache->io_ctl.inode = NULL;
+ iput(inode);
+ }
++ ASSERT(cache->io_ctl.pages == NULL);
+ btrfs_put_block_group(cache);
+ }
+
+diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
+index de6fe176fdfb3..5871ef78edbac 100644
+--- a/fs/btrfs/extent-tree.c
++++ b/fs/btrfs/extent-tree.c
+@@ -2306,7 +2306,8 @@ static noinline int check_delayed_ref(struct btrfs_root *root,
+
+ static noinline int check_committed_ref(struct btrfs_root *root,
+ struct btrfs_path *path,
+- u64 objectid, u64 offset, u64 bytenr)
++ u64 objectid, u64 offset, u64 bytenr,
++ bool strict)
+ {
+ struct btrfs_fs_info *fs_info = root->fs_info;
+ struct btrfs_root *extent_root = fs_info->extent_root;
+@@ -2348,9 +2349,13 @@ static noinline int check_committed_ref(struct btrfs_root *root,
+ btrfs_extent_inline_ref_size(BTRFS_EXTENT_DATA_REF_KEY))
+ goto out;
+
+- /* If extent created before last snapshot => it's definitely shared */
+- if (btrfs_extent_generation(leaf, ei) <=
+- btrfs_root_last_snapshot(&root->root_item))
++ /*
++ * If extent created before last snapshot => it's shared unless the
++ * snapshot has been deleted. Use the heuristic if strict is false.
++ */
++ if (!strict &&
++ (btrfs_extent_generation(leaf, ei) <=
++ btrfs_root_last_snapshot(&root->root_item)))
+ goto out;
+
+ iref = (struct btrfs_extent_inline_ref *)(ei + 1);
+@@ -2375,7 +2380,7 @@ out:
+ }
+
+ int btrfs_cross_ref_exist(struct btrfs_root *root, u64 objectid, u64 offset,
+- u64 bytenr)
++ u64 bytenr, bool strict)
+ {
+ struct btrfs_path *path;
+ int ret;
+@@ -2386,7 +2391,7 @@ int btrfs_cross_ref_exist(struct btrfs_root *root, u64 objectid, u64 offset,
+
+ do {
+ ret = check_committed_ref(root, path, objectid,
+- offset, bytenr);
++ offset, bytenr, strict);
+ if (ret && ret != -ENOENT)
+ goto out;
+
+diff --git a/fs/btrfs/file.c b/fs/btrfs/file.c
+index 1523aa4eaff07..e485f0275e1a6 100644
+--- a/fs/btrfs/file.c
++++ b/fs/btrfs/file.c
+@@ -1568,7 +1568,7 @@ int btrfs_check_can_nocow(struct btrfs_inode *inode, loff_t pos,
+ }
+
+ ret = can_nocow_extent(&inode->vfs_inode, lockstart, &num_bytes,
+- NULL, NULL, NULL);
++ NULL, NULL, NULL, false);
+ if (ret <= 0) {
+ ret = 0;
+ if (!nowait)
+@@ -3176,14 +3176,14 @@ reserve_space:
+ if (ret < 0)
+ goto out;
+ space_reserved = true;
+- ret = btrfs_qgroup_reserve_data(inode, &data_reserved,
+- alloc_start, bytes_to_reserve);
+- if (ret)
+- goto out;
+ ret = btrfs_punch_hole_lock_range(inode, lockstart, lockend,
+ &cached_state);
+ if (ret)
+ goto out;
++ ret = btrfs_qgroup_reserve_data(inode, &data_reserved,
++ alloc_start, bytes_to_reserve);
++ if (ret)
++ goto out;
+ ret = btrfs_prealloc_file_range(inode, mode, alloc_start,
+ alloc_end - alloc_start,
+ i_blocksize(inode),
+diff --git a/fs/btrfs/free-space-cache.c b/fs/btrfs/free-space-cache.c
+index 6f7b6bca6dc5b..53cfcf017b8db 100644
+--- a/fs/btrfs/free-space-cache.c
++++ b/fs/btrfs/free-space-cache.c
+@@ -1186,7 +1186,6 @@ static int __btrfs_wait_cache_io(struct btrfs_root *root,
+ ret = update_cache_item(trans, root, inode, path, offset,
+ io_ctl->entries, io_ctl->bitmaps);
+ out:
+- io_ctl_free(io_ctl);
+ if (ret) {
+ invalidate_inode_pages2(inode->i_mapping);
+ BTRFS_I(inode)->generation = 0;
+@@ -1346,6 +1345,7 @@ static int __btrfs_write_out_cache(struct btrfs_root *root, struct inode *inode,
+ * them out later
+ */
+ io_ctl_drop_pages(io_ctl);
++ io_ctl_free(io_ctl);
+
+ unlock_extent_cached(&BTRFS_I(inode)->io_tree, 0,
+ i_size_read(inode) - 1, &cached_state);
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index 7ba1218b1630e..cb2a6893ec417 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -1611,7 +1611,7 @@ next_slot:
+ goto out_check;
+ ret = btrfs_cross_ref_exist(root, ino,
+ found_key.offset -
+- extent_offset, disk_bytenr);
++ extent_offset, disk_bytenr, false);
+ if (ret) {
+ /*
+ * ret could be -EIO if the above fails to read
+@@ -6957,7 +6957,7 @@ static struct extent_map *btrfs_new_extent_direct(struct inode *inode,
+ */
+ noinline int can_nocow_extent(struct inode *inode, u64 offset, u64 *len,
+ u64 *orig_start, u64 *orig_block_len,
+- u64 *ram_bytes)
++ u64 *ram_bytes, bool strict)
+ {
+ struct btrfs_fs_info *fs_info = btrfs_sb(inode->i_sb);
+ struct btrfs_path *path;
+@@ -7035,8 +7035,9 @@ noinline int can_nocow_extent(struct inode *inode, u64 offset, u64 *len,
+ * Do the same check as in btrfs_cross_ref_exist but without the
+ * unnecessary search.
+ */
+- if (btrfs_file_extent_generation(leaf, fi) <=
+- btrfs_root_last_snapshot(&root->root_item))
++ if (!strict &&
++ (btrfs_file_extent_generation(leaf, fi) <=
++ btrfs_root_last_snapshot(&root->root_item)))
+ goto out;
+
+ backref_offset = btrfs_file_extent_offset(leaf, fi);
+@@ -7072,7 +7073,8 @@ noinline int can_nocow_extent(struct inode *inode, u64 offset, u64 *len,
+ */
+
+ ret = btrfs_cross_ref_exist(root, btrfs_ino(BTRFS_I(inode)),
+- key.offset - backref_offset, disk_bytenr);
++ key.offset - backref_offset, disk_bytenr,
++ strict);
+ if (ret) {
+ ret = 0;
+ goto out;
+@@ -7293,7 +7295,7 @@ static int btrfs_get_blocks_direct_write(struct extent_map **map,
+ block_start = em->block_start + (start - em->start);
+
+ if (can_nocow_extent(inode, start, &len, &orig_start,
+- &orig_block_len, &ram_bytes) == 1 &&
++ &orig_block_len, &ram_bytes, false) == 1 &&
+ btrfs_inc_nocow_writers(fs_info, block_start)) {
+ struct extent_map *em2;
+
+@@ -8640,7 +8642,7 @@ void btrfs_destroy_inode(struct inode *inode)
+ btrfs_put_ordered_extent(ordered);
+ }
+ }
+- btrfs_qgroup_check_reserved_leak(inode);
++ btrfs_qgroup_check_reserved_leak(BTRFS_I(inode));
+ inode_tree_del(inode);
+ btrfs_drop_extent_cache(BTRFS_I(inode), 0, (u64)-1, 0);
+ btrfs_inode_clear_file_extent_range(BTRFS_I(inode), 0, (u64)-1);
+@@ -10103,7 +10105,7 @@ static int btrfs_swap_activate(struct swap_info_struct *sis, struct file *file,
+ free_extent_map(em);
+ em = NULL;
+
+- ret = can_nocow_extent(inode, start, &len, NULL, NULL, NULL);
++ ret = can_nocow_extent(inode, start, &len, NULL, NULL, NULL, true);
+ if (ret < 0) {
+ goto out;
+ } else if (ret) {
+diff --git a/fs/btrfs/qgroup.c b/fs/btrfs/qgroup.c
+index 5bd4089ad0e1a..574a669894774 100644
+--- a/fs/btrfs/qgroup.c
++++ b/fs/btrfs/qgroup.c
+@@ -3742,7 +3742,7 @@ void btrfs_qgroup_convert_reserved_meta(struct btrfs_root *root, int num_bytes)
+ * Check qgroup reserved space leaking, normally at destroy inode
+ * time
+ */
+-void btrfs_qgroup_check_reserved_leak(struct inode *inode)
++void btrfs_qgroup_check_reserved_leak(struct btrfs_inode *inode)
+ {
+ struct extent_changeset changeset;
+ struct ulist_node *unode;
+@@ -3750,19 +3750,19 @@ void btrfs_qgroup_check_reserved_leak(struct inode *inode)
+ int ret;
+
+ extent_changeset_init(&changeset);
+- ret = clear_record_extent_bits(&BTRFS_I(inode)->io_tree, 0, (u64)-1,
++ ret = clear_record_extent_bits(&inode->io_tree, 0, (u64)-1,
+ EXTENT_QGROUP_RESERVED, &changeset);
+
+ WARN_ON(ret < 0);
+ if (WARN_ON(changeset.bytes_changed)) {
+ ULIST_ITER_INIT(&iter);
+ while ((unode = ulist_next(&changeset.range_changed, &iter))) {
+- btrfs_warn(BTRFS_I(inode)->root->fs_info,
+- "leaking qgroup reserved space, ino: %lu, start: %llu, end: %llu",
+- inode->i_ino, unode->val, unode->aux);
++ btrfs_warn(inode->root->fs_info,
++ "leaking qgroup reserved space, ino: %llu, start: %llu, end: %llu",
++ btrfs_ino(inode), unode->val, unode->aux);
+ }
+- btrfs_qgroup_free_refroot(BTRFS_I(inode)->root->fs_info,
+- BTRFS_I(inode)->root->root_key.objectid,
++ btrfs_qgroup_free_refroot(inode->root->fs_info,
++ inode->root->root_key.objectid,
+ changeset.bytes_changed, BTRFS_QGROUP_RSV_DATA);
+
+ }
+diff --git a/fs/btrfs/qgroup.h b/fs/btrfs/qgroup.h
+index 1bc6544594690..406366f20cb0a 100644
+--- a/fs/btrfs/qgroup.h
++++ b/fs/btrfs/qgroup.h
+@@ -399,7 +399,7 @@ void btrfs_qgroup_free_meta_all_pertrans(struct btrfs_root *root);
+ */
+ void btrfs_qgroup_convert_reserved_meta(struct btrfs_root *root, int num_bytes);
+
+-void btrfs_qgroup_check_reserved_leak(struct inode *inode);
++void btrfs_qgroup_check_reserved_leak(struct btrfs_inode *inode);
+
+ /* btrfs_qgroup_swapped_blocks related functions */
+ void btrfs_qgroup_init_swapped_blocks(
+diff --git a/fs/btrfs/super.c b/fs/btrfs/super.c
+index 56cd2cf571588..9eb03b0e0dd43 100644
+--- a/fs/btrfs/super.c
++++ b/fs/btrfs/super.c
+@@ -558,6 +558,7 @@ int btrfs_parse_options(struct btrfs_fs_info *info, char *options,
+ } else if (strncmp(args[0].from, "lzo", 3) == 0) {
+ compress_type = "lzo";
+ info->compress_type = BTRFS_COMPRESS_LZO;
++ info->compress_level = 0;
+ btrfs_set_opt(info->mount_opt, COMPRESS);
+ btrfs_clear_opt(info->mount_opt, NODATACOW);
+ btrfs_clear_opt(info->mount_opt, NODATASUM);
+diff --git a/fs/btrfs/tree-log.c b/fs/btrfs/tree-log.c
+index d22ff1e0963c6..065439b4bdda5 100644
+--- a/fs/btrfs/tree-log.c
++++ b/fs/btrfs/tree-log.c
+@@ -3449,11 +3449,13 @@ fail:
+ btrfs_free_path(path);
+ out_unlock:
+ mutex_unlock(&dir->log_mutex);
+- if (ret == -ENOSPC) {
++ if (err == -ENOSPC) {
+ btrfs_set_log_full_commit(trans);
+- ret = 0;
+- } else if (ret < 0)
+- btrfs_abort_transaction(trans, ret);
++ err = 0;
++ } else if (err < 0 && err != -ENOENT) {
++ /* ENOENT can be returned if the entry hasn't been fsynced yet */
++ btrfs_abort_transaction(trans, err);
++ }
+
+ btrfs_end_log_trans(root);
+
+diff --git a/fs/buffer.c b/fs/buffer.c
+index 64fe82ec65ff1..75a8849abb5d2 100644
+--- a/fs/buffer.c
++++ b/fs/buffer.c
+@@ -3160,6 +3160,15 @@ int __sync_dirty_buffer(struct buffer_head *bh, int op_flags)
+ WARN_ON(atomic_read(&bh->b_count) < 1);
+ lock_buffer(bh);
+ if (test_clear_buffer_dirty(bh)) {
++ /*
++ * The bh should be mapped, but it might not be if the
++ * device was hot-removed. Not much we can do but fail the I/O.
++ */
++ if (!buffer_mapped(bh)) {
++ unlock_buffer(bh);
++ return -EIO;
++ }
++
+ get_bh(bh);
+ bh->b_end_io = end_buffer_write_sync;
+ ret = submit_bh(REQ_OP_WRITE, op_flags, bh);
+diff --git a/fs/ceph/file.c b/fs/ceph/file.c
+index 160644ddaeed7..d51c3f2fdca02 100644
+--- a/fs/ceph/file.c
++++ b/fs/ceph/file.c
+@@ -1538,6 +1538,7 @@ static ssize_t ceph_read_iter(struct kiocb *iocb, struct iov_iter *to)
+ struct inode *inode = file_inode(filp);
+ struct ceph_inode_info *ci = ceph_inode(inode);
+ struct page *pinned_page = NULL;
++ bool direct_lock = iocb->ki_flags & IOCB_DIRECT;
+ ssize_t ret;
+ int want, got = 0;
+ int retry_op = 0, read = 0;
+@@ -1546,7 +1547,7 @@ again:
+ dout("aio_read %p %llx.%llx %llu~%u trying to get caps on %p\n",
+ inode, ceph_vinop(inode), iocb->ki_pos, (unsigned)len, inode);
+
+- if (iocb->ki_flags & IOCB_DIRECT)
++ if (direct_lock)
+ ceph_start_io_direct(inode);
+ else
+ ceph_start_io_read(inode);
+@@ -1603,7 +1604,7 @@ again:
+ }
+ ceph_put_cap_refs(ci, got);
+
+- if (iocb->ki_flags & IOCB_DIRECT)
++ if (direct_lock)
+ ceph_end_io_direct(inode);
+ else
+ ceph_end_io_read(inode);
+diff --git a/fs/ceph/mds_client.c b/fs/ceph/mds_client.c
+index 946f9a92658ab..903b6a35b321b 100644
+--- a/fs/ceph/mds_client.c
++++ b/fs/ceph/mds_client.c
+@@ -4285,6 +4285,9 @@ static void delayed_work(struct work_struct *work)
+
+ dout("mdsc delayed_work\n");
+
++ if (mdsc->stopping)
++ return;
++
+ mutex_lock(&mdsc->mutex);
+ renew_interval = mdsc->mdsmap->m_session_timeout >> 2;
+ renew_caps = time_after_eq(jiffies, HZ*renew_interval +
+@@ -4660,7 +4663,16 @@ void ceph_mdsc_force_umount(struct ceph_mds_client *mdsc)
+ static void ceph_mdsc_stop(struct ceph_mds_client *mdsc)
+ {
+ dout("stop\n");
+- cancel_delayed_work_sync(&mdsc->delayed_work); /* cancel timer */
++ /*
++ * Make sure the delayed work stopped before releasing
++ * the resources.
++ *
++ * Because the cancel_delayed_work_sync() will only
++ * guarantee that the work finishes executing. But the
++ * delayed work will re-arm itself again after that.
++ */
++ flush_delayed_work(&mdsc->delayed_work);
++
+ if (mdsc->mdsmap)
+ ceph_mdsmap_destroy(mdsc->mdsmap);
+ kfree(mdsc->sessions);
+diff --git a/fs/ext4/block_validity.c b/fs/ext4/block_validity.c
+index e830a9d4e10d3..11aa37693e436 100644
+--- a/fs/ext4/block_validity.c
++++ b/fs/ext4/block_validity.c
+@@ -254,14 +254,6 @@ int ext4_setup_system_zone(struct super_block *sb)
+ int flex_size = ext4_flex_bg_size(sbi);
+ int ret;
+
+- if (!test_opt(sb, BLOCK_VALIDITY)) {
+- if (sbi->system_blks)
+- ext4_release_system_zone(sb);
+- return 0;
+- }
+- if (sbi->system_blks)
+- return 0;
+-
+ system_blks = kzalloc(sizeof(*system_blks), GFP_KERNEL);
+ if (!system_blks)
+ return -ENOMEM;
+diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h
+index 42815304902b8..ff46defc65683 100644
+--- a/fs/ext4/ext4.h
++++ b/fs/ext4/ext4.h
+@@ -1054,6 +1054,7 @@ struct ext4_inode_info {
+ struct timespec64 i_crtime;
+
+ /* mballoc */
++ atomic_t i_prealloc_active;
+ struct list_head i_prealloc_list;
+ spinlock_t i_prealloc_lock;
+
+@@ -1501,6 +1502,7 @@ struct ext4_sb_info {
+ unsigned int s_mb_stats;
+ unsigned int s_mb_order2_reqs;
+ unsigned int s_mb_group_prealloc;
++ unsigned int s_mb_max_inode_prealloc;
+ unsigned int s_max_dir_size_kb;
+ /* where last allocation was done - for stream allocation */
+ unsigned long s_mb_last_group;
+@@ -1585,6 +1587,9 @@ struct ext4_sb_info {
+ #ifdef CONFIG_EXT4_DEBUG
+ unsigned long s_simulate_fail;
+ #endif
++ /* Record the errseq of the backing block device */
++ errseq_t s_bdev_wb_err;
++ spinlock_t s_bdev_wb_lock;
+ };
+
+ static inline struct ext4_sb_info *EXT4_SB(struct super_block *sb)
+@@ -2651,7 +2656,7 @@ extern int ext4_mb_release(struct super_block *);
+ extern ext4_fsblk_t ext4_mb_new_blocks(handle_t *,
+ struct ext4_allocation_request *, int *);
+ extern int ext4_mb_reserve_blocks(struct super_block *, int);
+-extern void ext4_discard_preallocations(struct inode *);
++extern void ext4_discard_preallocations(struct inode *, unsigned int);
+ extern int __init ext4_init_mballoc(void);
+ extern void ext4_exit_mballoc(void);
+ extern void ext4_free_blocks(handle_t *handle, struct inode *inode,
+diff --git a/fs/ext4/ext4_jbd2.c b/fs/ext4/ext4_jbd2.c
+index 0c76cdd44d90d..760b9ee49dc00 100644
+--- a/fs/ext4/ext4_jbd2.c
++++ b/fs/ext4/ext4_jbd2.c
+@@ -195,6 +195,28 @@ static void ext4_journal_abort_handle(const char *caller, unsigned int line,
+ jbd2_journal_abort_handle(handle);
+ }
+
++static void ext4_check_bdev_write_error(struct super_block *sb)
++{
++ struct address_space *mapping = sb->s_bdev->bd_inode->i_mapping;
++ struct ext4_sb_info *sbi = EXT4_SB(sb);
++ int err;
++
++ /*
++ * If the block device has write error flag, it may have failed to
++ * async write out metadata buffers in the background. In this case,
++ * we could read old data from disk and write it out again, which
++ * may lead to on-disk filesystem inconsistency.
++ */
++ if (errseq_check(&mapping->wb_err, READ_ONCE(sbi->s_bdev_wb_err))) {
++ spin_lock(&sbi->s_bdev_wb_lock);
++ err = errseq_check_and_advance(&mapping->wb_err, &sbi->s_bdev_wb_err);
++ spin_unlock(&sbi->s_bdev_wb_lock);
++ if (err)
++ ext4_error_err(sb, -err,
++ "Error while async write back metadata");
++ }
++}
++
+ int __ext4_journal_get_write_access(const char *where, unsigned int line,
+ handle_t *handle, struct buffer_head *bh)
+ {
+@@ -202,6 +224,9 @@ int __ext4_journal_get_write_access(const char *where, unsigned int line,
+
+ might_sleep();
+
++ if (bh->b_bdev->bd_super)
++ ext4_check_bdev_write_error(bh->b_bdev->bd_super);
++
+ if (ext4_handle_valid(handle)) {
+ err = jbd2_journal_get_write_access(handle, bh);
+ if (err)
+diff --git a/fs/ext4/extents.c b/fs/ext4/extents.c
+index d75054570e44c..11a321dd11e7e 100644
+--- a/fs/ext4/extents.c
++++ b/fs/ext4/extents.c
+@@ -100,7 +100,7 @@ static int ext4_ext_trunc_restart_fn(struct inode *inode, int *dropped)
+ * i_mutex. So we can safely drop the i_data_sem here.
+ */
+ BUG_ON(EXT4_JOURNAL(inode) == NULL);
+- ext4_discard_preallocations(inode);
++ ext4_discard_preallocations(inode, 0);
+ up_write(&EXT4_I(inode)->i_data_sem);
+ *dropped = 1;
+ return 0;
+@@ -4268,7 +4268,7 @@ got_allocated_blocks:
+ * not a good idea to call discard here directly,
+ * but otherwise we'd need to call it every free().
+ */
+- ext4_discard_preallocations(inode);
++ ext4_discard_preallocations(inode, 0);
+ if (flags & EXT4_GET_BLOCKS_DELALLOC_RESERVE)
+ fb_flags = EXT4_FREE_BLOCKS_NO_QUOT_UPDATE;
+ ext4_free_blocks(handle, inode, NULL, newblock,
+@@ -5295,7 +5295,7 @@ static int ext4_collapse_range(struct inode *inode, loff_t offset, loff_t len)
+ }
+
+ down_write(&EXT4_I(inode)->i_data_sem);
+- ext4_discard_preallocations(inode);
++ ext4_discard_preallocations(inode, 0);
+
+ ret = ext4_es_remove_extent(inode, punch_start,
+ EXT_MAX_BLOCKS - punch_start);
+@@ -5309,7 +5309,7 @@ static int ext4_collapse_range(struct inode *inode, loff_t offset, loff_t len)
+ up_write(&EXT4_I(inode)->i_data_sem);
+ goto out_stop;
+ }
+- ext4_discard_preallocations(inode);
++ ext4_discard_preallocations(inode, 0);
+
+ ret = ext4_ext_shift_extents(inode, handle, punch_stop,
+ punch_stop - punch_start, SHIFT_LEFT);
+@@ -5441,7 +5441,7 @@ static int ext4_insert_range(struct inode *inode, loff_t offset, loff_t len)
+ goto out_stop;
+
+ down_write(&EXT4_I(inode)->i_data_sem);
+- ext4_discard_preallocations(inode);
++ ext4_discard_preallocations(inode, 0);
+
+ path = ext4_find_extent(inode, offset_lblk, NULL, 0);
+ if (IS_ERR(path)) {
+diff --git a/fs/ext4/file.c b/fs/ext4/file.c
+index 8f742b53f1d40..4ee9a4dc01a88 100644
+--- a/fs/ext4/file.c
++++ b/fs/ext4/file.c
+@@ -148,7 +148,7 @@ static int ext4_release_file(struct inode *inode, struct file *filp)
+ !EXT4_I(inode)->i_reserved_data_blocks)
+ {
+ down_write(&EXT4_I(inode)->i_data_sem);
+- ext4_discard_preallocations(inode);
++ ext4_discard_preallocations(inode, 0);
+ up_write(&EXT4_I(inode)->i_data_sem);
+ }
+ if (is_dx(inode) && filp->private_data)
+diff --git a/fs/ext4/indirect.c b/fs/ext4/indirect.c
+index 4026418257121..e8ca405673923 100644
+--- a/fs/ext4/indirect.c
++++ b/fs/ext4/indirect.c
+@@ -696,7 +696,7 @@ static int ext4_ind_trunc_restart_fn(handle_t *handle, struct inode *inode,
+ * i_mutex. So we can safely drop the i_data_sem here.
+ */
+ BUG_ON(EXT4_JOURNAL(inode) == NULL);
+- ext4_discard_preallocations(inode);
++ ext4_discard_preallocations(inode, 0);
+ up_write(&EXT4_I(inode)->i_data_sem);
+ *dropped = 1;
+ return 0;
+diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
+index 92573f8540ab7..9c0629ffb4261 100644
+--- a/fs/ext4/inode.c
++++ b/fs/ext4/inode.c
+@@ -383,7 +383,7 @@ void ext4_da_update_reserve_space(struct inode *inode,
+ */
+ if ((ei->i_reserved_data_blocks == 0) &&
+ !inode_is_open_for_write(inode))
+- ext4_discard_preallocations(inode);
++ ext4_discard_preallocations(inode, 0);
+ }
+
+ static int __check_block_validity(struct inode *inode, const char *func,
+@@ -4055,7 +4055,7 @@ int ext4_punch_hole(struct inode *inode, loff_t offset, loff_t length)
+ if (stop_block > first_block) {
+
+ down_write(&EXT4_I(inode)->i_data_sem);
+- ext4_discard_preallocations(inode);
++ ext4_discard_preallocations(inode, 0);
+
+ ret = ext4_es_remove_extent(inode, first_block,
+ stop_block - first_block);
+@@ -4210,7 +4210,7 @@ int ext4_truncate(struct inode *inode)
+
+ down_write(&EXT4_I(inode)->i_data_sem);
+
+- ext4_discard_preallocations(inode);
++ ext4_discard_preallocations(inode, 0);
+
+ if (ext4_test_inode_flag(inode, EXT4_INODE_EXTENTS))
+ err = ext4_ext_truncate(handle, inode);
+diff --git a/fs/ext4/ioctl.c b/fs/ext4/ioctl.c
+index 999cf6add39c6..a5fcc238c6693 100644
+--- a/fs/ext4/ioctl.c
++++ b/fs/ext4/ioctl.c
+@@ -202,7 +202,7 @@ static long swap_inode_boot_loader(struct super_block *sb,
+ reset_inode_seed(inode);
+ reset_inode_seed(inode_bl);
+
+- ext4_discard_preallocations(inode);
++ ext4_discard_preallocations(inode, 0);
+
+ err = ext4_mark_inode_dirty(handle, inode);
+ if (err < 0) {
+diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c
+index 38719c156573c..e88eff999bd15 100644
+--- a/fs/ext4/mballoc.c
++++ b/fs/ext4/mballoc.c
+@@ -2177,6 +2177,7 @@ static int ext4_mb_good_group_nolock(struct ext4_allocation_context *ac,
+ {
+ struct ext4_group_info *grp = ext4_get_group_info(ac->ac_sb, group);
+ struct super_block *sb = ac->ac_sb;
++ struct ext4_sb_info *sbi = EXT4_SB(sb);
+ bool should_lock = ac->ac_flags & EXT4_MB_STRICT_CHECK;
+ ext4_grpblk_t free;
+ int ret = 0;
+@@ -2195,7 +2196,25 @@ static int ext4_mb_good_group_nolock(struct ext4_allocation_context *ac,
+
+ /* We only do this if the grp has never been initialized */
+ if (unlikely(EXT4_MB_GRP_NEED_INIT(grp))) {
+- ret = ext4_mb_init_group(ac->ac_sb, group, GFP_NOFS);
++ struct ext4_group_desc *gdp =
++ ext4_get_group_desc(sb, group, NULL);
++ int ret;
++
++ /* cr=0/1 is a very optimistic search to find large
++ * good chunks almost for free. If buddy data is not
++ * ready, then this optimization makes no sense. But
++ * we never skip the first block group in a flex_bg,
++ * since this gets used for metadata block allocation,
++ * and we want to make sure we locate metadata blocks
++ * in the first block group in the flex_bg if possible.
++ */
++ if (cr < 2 &&
++ (!sbi->s_log_groups_per_flex ||
++ ((group & ((1 << sbi->s_log_groups_per_flex) - 1)) != 0)) &&
++ !(ext4_has_group_desc_csum(sb) &&
++ (gdp->bg_flags & cpu_to_le16(EXT4_BG_BLOCK_UNINIT))))
++ return 0;
++ ret = ext4_mb_init_group(sb, group, GFP_NOFS);
+ if (ret)
+ return ret;
+ }
+@@ -2736,6 +2755,7 @@ int ext4_mb_init(struct super_block *sb)
+ sbi->s_mb_stats = MB_DEFAULT_STATS;
+ sbi->s_mb_stream_request = MB_DEFAULT_STREAM_THRESHOLD;
+ sbi->s_mb_order2_reqs = MB_DEFAULT_ORDER2_REQS;
++ sbi->s_mb_max_inode_prealloc = MB_DEFAULT_MAX_INODE_PREALLOC;
+ /*
+ * The default group preallocation is 512, which for 4k block
+ * sizes translates to 2 megabytes. However for bigalloc file
+@@ -3674,6 +3694,26 @@ void ext4_mb_generate_from_pa(struct super_block *sb, void *bitmap,
+ mb_debug(sb, "preallocated %d for group %u\n", preallocated, group);
+ }
+
++static void ext4_mb_mark_pa_deleted(struct super_block *sb,
++ struct ext4_prealloc_space *pa)
++{
++ struct ext4_inode_info *ei;
++
++ if (pa->pa_deleted) {
++ ext4_warning(sb, "deleted pa, type:%d, pblk:%llu, lblk:%u, len:%d\n",
++ pa->pa_type, pa->pa_pstart, pa->pa_lstart,
++ pa->pa_len);
++ return;
++ }
++
++ pa->pa_deleted = 1;
++
++ if (pa->pa_type == MB_INODE_PA) {
++ ei = EXT4_I(pa->pa_inode);
++ atomic_dec(&ei->i_prealloc_active);
++ }
++}
++
+ static void ext4_mb_pa_callback(struct rcu_head *head)
+ {
+ struct ext4_prealloc_space *pa;
+@@ -3706,7 +3746,7 @@ static void ext4_mb_put_pa(struct ext4_allocation_context *ac,
+ return;
+ }
+
+- pa->pa_deleted = 1;
++ ext4_mb_mark_pa_deleted(sb, pa);
+ spin_unlock(&pa->pa_lock);
+
+ grp_blk = pa->pa_pstart;
+@@ -3830,6 +3870,7 @@ ext4_mb_new_inode_pa(struct ext4_allocation_context *ac)
+ spin_lock(pa->pa_obj_lock);
+ list_add_rcu(&pa->pa_inode_list, &ei->i_prealloc_list);
+ spin_unlock(pa->pa_obj_lock);
++ atomic_inc(&ei->i_prealloc_active);
+ }
+
+ /*
+@@ -4040,7 +4081,7 @@ repeat:
+ }
+
+ /* seems this one can be freed ... */
+- pa->pa_deleted = 1;
++ ext4_mb_mark_pa_deleted(sb, pa);
+
+ /* we can trust pa_free ... */
+ free += pa->pa_free;
+@@ -4103,7 +4144,7 @@ out_dbg:
+ *
+ * FIXME!! Make sure it is valid at all the call sites
+ */
+-void ext4_discard_preallocations(struct inode *inode)
++void ext4_discard_preallocations(struct inode *inode, unsigned int needed)
+ {
+ struct ext4_inode_info *ei = EXT4_I(inode);
+ struct super_block *sb = inode->i_sb;
+@@ -4121,15 +4162,19 @@ void ext4_discard_preallocations(struct inode *inode)
+
+ mb_debug(sb, "discard preallocation for inode %lu\n",
+ inode->i_ino);
+- trace_ext4_discard_preallocations(inode);
++ trace_ext4_discard_preallocations(inode,
++ atomic_read(&ei->i_prealloc_active), needed);
+
+ INIT_LIST_HEAD(&list);
+
++ if (needed == 0)
++ needed = UINT_MAX;
++
+ repeat:
+ /* first, collect all pa's in the inode */
+ spin_lock(&ei->i_prealloc_lock);
+- while (!list_empty(&ei->i_prealloc_list)) {
+- pa = list_entry(ei->i_prealloc_list.next,
++ while (!list_empty(&ei->i_prealloc_list) && needed) {
++ pa = list_entry(ei->i_prealloc_list.prev,
+ struct ext4_prealloc_space, pa_inode_list);
+ BUG_ON(pa->pa_obj_lock != &ei->i_prealloc_lock);
+ spin_lock(&pa->pa_lock);
+@@ -4146,10 +4191,11 @@ repeat:
+
+ }
+ if (pa->pa_deleted == 0) {
+- pa->pa_deleted = 1;
++ ext4_mb_mark_pa_deleted(sb, pa);
+ spin_unlock(&pa->pa_lock);
+ list_del_rcu(&pa->pa_inode_list);
+ list_add(&pa->u.pa_tmp_list, &list);
++ needed--;
+ continue;
+ }
+
+@@ -4450,7 +4496,7 @@ ext4_mb_discard_lg_preallocations(struct super_block *sb,
+ BUG_ON(pa->pa_type != MB_GROUP_PA);
+
+ /* seems this one can be freed ... */
+- pa->pa_deleted = 1;
++ ext4_mb_mark_pa_deleted(sb, pa);
+ spin_unlock(&pa->pa_lock);
+
+ list_del_rcu(&pa->pa_inode_list);
+@@ -4548,11 +4594,30 @@ static void ext4_mb_add_n_trim(struct ext4_allocation_context *ac)
+ return ;
+ }
+
++/*
++ * if per-inode prealloc list is too long, trim some PA
++ */
++static void ext4_mb_trim_inode_pa(struct inode *inode)
++{
++ struct ext4_inode_info *ei = EXT4_I(inode);
++ struct ext4_sb_info *sbi = EXT4_SB(inode->i_sb);
++ int count, delta;
++
++ count = atomic_read(&ei->i_prealloc_active);
++ delta = (sbi->s_mb_max_inode_prealloc >> 2) + 1;
++ if (count > sbi->s_mb_max_inode_prealloc + delta) {
++ count -= sbi->s_mb_max_inode_prealloc;
++ ext4_discard_preallocations(inode, count);
++ }
++}
++
+ /*
+ * release all resource we used in allocation
+ */
+ static int ext4_mb_release_context(struct ext4_allocation_context *ac)
+ {
++ struct inode *inode = ac->ac_inode;
++ struct ext4_inode_info *ei = EXT4_I(inode);
+ struct ext4_sb_info *sbi = EXT4_SB(ac->ac_sb);
+ struct ext4_prealloc_space *pa = ac->ac_pa;
+ if (pa) {
+@@ -4579,6 +4644,17 @@ static int ext4_mb_release_context(struct ext4_allocation_context *ac)
+ spin_unlock(pa->pa_obj_lock);
+ ext4_mb_add_n_trim(ac);
+ }
++
++ if (pa->pa_type == MB_INODE_PA) {
++ /*
++ * treat per-inode prealloc list as a lru list, then try
++ * to trim the least recently used PA.
++ */
++ spin_lock(pa->pa_obj_lock);
++ list_move(&pa->pa_inode_list, &ei->i_prealloc_list);
++ spin_unlock(pa->pa_obj_lock);
++ }
++
+ ext4_mb_put_pa(ac, ac->ac_sb, pa);
+ }
+ if (ac->ac_bitmap_page)
+@@ -4588,6 +4664,7 @@ static int ext4_mb_release_context(struct ext4_allocation_context *ac)
+ if (ac->ac_flags & EXT4_MB_HINT_GROUP_ALLOC)
+ mutex_unlock(&ac->ac_lg->lg_mutex);
+ ext4_mb_collect_stats(ac);
++ ext4_mb_trim_inode_pa(inode);
+ return 0;
+ }
+
+diff --git a/fs/ext4/mballoc.h b/fs/ext4/mballoc.h
+index 6b4d17c2935d6..e75b4749aa1c2 100644
+--- a/fs/ext4/mballoc.h
++++ b/fs/ext4/mballoc.h
+@@ -73,6 +73,10 @@
+ */
+ #define MB_DEFAULT_GROUP_PREALLOC 512
+
++/*
++ * maximum length of inode prealloc list
++ */
++#define MB_DEFAULT_MAX_INODE_PREALLOC 512
+
+ struct ext4_free_data {
+ /* this links the free block information from sb_info */
+diff --git a/fs/ext4/move_extent.c b/fs/ext4/move_extent.c
+index 1ed86fb6c3026..0d601b8228753 100644
+--- a/fs/ext4/move_extent.c
++++ b/fs/ext4/move_extent.c
+@@ -686,8 +686,8 @@ ext4_move_extents(struct file *o_filp, struct file *d_filp, __u64 orig_blk,
+
+ out:
+ if (*moved_len) {
+- ext4_discard_preallocations(orig_inode);
+- ext4_discard_preallocations(donor_inode);
++ ext4_discard_preallocations(orig_inode, 0);
++ ext4_discard_preallocations(donor_inode, 0);
+ }
+
+ ext4_ext_drop_refs(path);
+diff --git a/fs/ext4/super.c b/fs/ext4/super.c
+index 330957ed1f05c..0b38bf29c07e0 100644
+--- a/fs/ext4/super.c
++++ b/fs/ext4/super.c
+@@ -66,10 +66,10 @@ static int ext4_load_journal(struct super_block *, struct ext4_super_block *,
+ unsigned long journal_devnum);
+ static int ext4_show_options(struct seq_file *seq, struct dentry *root);
+ static int ext4_commit_super(struct super_block *sb, int sync);
+-static void ext4_mark_recovery_complete(struct super_block *sb,
++static int ext4_mark_recovery_complete(struct super_block *sb,
+ struct ext4_super_block *es);
+-static void ext4_clear_journal_err(struct super_block *sb,
+- struct ext4_super_block *es);
++static int ext4_clear_journal_err(struct super_block *sb,
++ struct ext4_super_block *es);
+ static int ext4_sync_fs(struct super_block *sb, int wait);
+ static int ext4_remount(struct super_block *sb, int *flags, char *data);
+ static int ext4_statfs(struct dentry *dentry, struct kstatfs *buf);
+@@ -1123,6 +1123,7 @@ static struct inode *ext4_alloc_inode(struct super_block *sb)
+ inode_set_iversion(&ei->vfs_inode, 1);
+ spin_lock_init(&ei->i_raw_lock);
+ INIT_LIST_HEAD(&ei->i_prealloc_list);
++ atomic_set(&ei->i_prealloc_active, 0);
+ spin_lock_init(&ei->i_prealloc_lock);
+ ext4_es_init_tree(&ei->i_es_tree);
+ rwlock_init(&ei->i_es_lock);
+@@ -1216,7 +1217,7 @@ void ext4_clear_inode(struct inode *inode)
+ {
+ invalidate_inode_buffers(inode);
+ clear_inode(inode);
+- ext4_discard_preallocations(inode);
++ ext4_discard_preallocations(inode, 0);
+ ext4_es_remove_extent(inode, 0, EXT_MAX_BLOCKS);
+ dquot_drop(inode);
+ if (EXT4_I(inode)->jinode) {
+@@ -4698,11 +4699,13 @@ no_journal:
+
+ ext4_set_resv_clusters(sb);
+
+- err = ext4_setup_system_zone(sb);
+- if (err) {
+- ext4_msg(sb, KERN_ERR, "failed to initialize system "
+- "zone (%d)", err);
+- goto failed_mount4a;
++ if (test_opt(sb, BLOCK_VALIDITY)) {
++ err = ext4_setup_system_zone(sb);
++ if (err) {
++ ext4_msg(sb, KERN_ERR, "failed to initialize system "
++ "zone (%d)", err);
++ goto failed_mount4a;
++ }
+ }
+
+ ext4_ext_init(sb);
+@@ -4765,12 +4768,23 @@ no_journal:
+ }
+ #endif /* CONFIG_QUOTA */
+
++ /*
++ * Save the original bdev mapping's wb_err value which could be
++ * used to detect the metadata async write error.
++ */
++ spin_lock_init(&sbi->s_bdev_wb_lock);
++ if (!sb_rdonly(sb))
++ errseq_check_and_advance(&sb->s_bdev->bd_inode->i_mapping->wb_err,
++ &sbi->s_bdev_wb_err);
++ sb->s_bdev->bd_super = sb;
+ EXT4_SB(sb)->s_mount_state |= EXT4_ORPHAN_FS;
+ ext4_orphan_cleanup(sb, es);
+ EXT4_SB(sb)->s_mount_state &= ~EXT4_ORPHAN_FS;
+ if (needs_recovery) {
+ ext4_msg(sb, KERN_INFO, "recovery complete");
+- ext4_mark_recovery_complete(sb, es);
++ err = ext4_mark_recovery_complete(sb, es);
++ if (err)
++ goto failed_mount8;
+ }
+ if (EXT4_SB(sb)->s_journal) {
+ if (test_opt(sb, DATA_FLAGS) == EXT4_MOUNT_JOURNAL_DATA)
+@@ -4813,10 +4827,8 @@ cantfind_ext4:
+ ext4_msg(sb, KERN_ERR, "VFS: Can't find ext4 filesystem");
+ goto failed_mount;
+
+-#ifdef CONFIG_QUOTA
+ failed_mount8:
+ ext4_unregister_sysfs(sb);
+-#endif
+ failed_mount7:
+ ext4_unregister_li_request(sb);
+ failed_mount6:
+@@ -4956,7 +4968,8 @@ static journal_t *ext4_get_journal(struct super_block *sb,
+ struct inode *journal_inode;
+ journal_t *journal;
+
+- BUG_ON(!ext4_has_feature_journal(sb));
++ if (WARN_ON_ONCE(!ext4_has_feature_journal(sb)))
++ return NULL;
+
+ journal_inode = ext4_get_journal_inode(sb, journal_inum);
+ if (!journal_inode)
+@@ -4986,7 +4999,8 @@ static journal_t *ext4_get_dev_journal(struct super_block *sb,
+ struct ext4_super_block *es;
+ struct block_device *bdev;
+
+- BUG_ON(!ext4_has_feature_journal(sb));
++ if (WARN_ON_ONCE(!ext4_has_feature_journal(sb)))
++ return NULL;
+
+ bdev = ext4_blkdev_get(j_dev, sb);
+ if (bdev == NULL)
+@@ -5077,8 +5091,10 @@ static int ext4_load_journal(struct super_block *sb,
+ dev_t journal_dev;
+ int err = 0;
+ int really_read_only;
++ int journal_dev_ro;
+
+- BUG_ON(!ext4_has_feature_journal(sb));
++ if (WARN_ON_ONCE(!ext4_has_feature_journal(sb)))
++ return -EFSCORRUPTED;
+
+ if (journal_devnum &&
+ journal_devnum != le32_to_cpu(es->s_journal_dev)) {
+@@ -5088,7 +5104,31 @@ static int ext4_load_journal(struct super_block *sb,
+ } else
+ journal_dev = new_decode_dev(le32_to_cpu(es->s_journal_dev));
+
+- really_read_only = bdev_read_only(sb->s_bdev);
++ if (journal_inum && journal_dev) {
++ ext4_msg(sb, KERN_ERR,
++ "filesystem has both journal inode and journal device!");
++ return -EINVAL;
++ }
++
++ if (journal_inum) {
++ journal = ext4_get_journal(sb, journal_inum);
++ if (!journal)
++ return -EINVAL;
++ } else {
++ journal = ext4_get_dev_journal(sb, journal_dev);
++ if (!journal)
++ return -EINVAL;
++ }
++
++ journal_dev_ro = bdev_read_only(journal->j_dev);
++ really_read_only = bdev_read_only(sb->s_bdev) | journal_dev_ro;
++
++ if (journal_dev_ro && !sb_rdonly(sb)) {
++ ext4_msg(sb, KERN_ERR,
++ "journal device read-only, try mounting with '-o ro'");
++ err = -EROFS;
++ goto err_out;
++ }
+
+ /*
+ * Are we loading a blank journal or performing recovery after a
+@@ -5103,27 +5143,14 @@ static int ext4_load_journal(struct super_block *sb,
+ ext4_msg(sb, KERN_ERR, "write access "
+ "unavailable, cannot proceed "
+ "(try mounting with noload)");
+- return -EROFS;
++ err = -EROFS;
++ goto err_out;
+ }
+ ext4_msg(sb, KERN_INFO, "write access will "
+ "be enabled during recovery");
+ }
+ }
+
+- if (journal_inum && journal_dev) {
+- ext4_msg(sb, KERN_ERR, "filesystem has both journal "
+- "and inode journals!");
+- return -EINVAL;
+- }
+-
+- if (journal_inum) {
+- if (!(journal = ext4_get_journal(sb, journal_inum)))
+- return -EINVAL;
+- } else {
+- if (!(journal = ext4_get_dev_journal(sb, journal_dev)))
+- return -EINVAL;
+- }
+-
+ if (!(journal->j_flags & JBD2_BARRIER))
+ ext4_msg(sb, KERN_INFO, "barriers disabled");
+
+@@ -5143,12 +5170,16 @@ static int ext4_load_journal(struct super_block *sb,
+
+ if (err) {
+ ext4_msg(sb, KERN_ERR, "error loading journal");
+- jbd2_journal_destroy(journal);
+- return err;
++ goto err_out;
+ }
+
+ EXT4_SB(sb)->s_journal = journal;
+- ext4_clear_journal_err(sb, es);
++ err = ext4_clear_journal_err(sb, es);
++ if (err) {
++ EXT4_SB(sb)->s_journal = NULL;
++ jbd2_journal_destroy(journal);
++ return err;
++ }
+
+ if (!really_read_only && journal_devnum &&
+ journal_devnum != le32_to_cpu(es->s_journal_dev)) {
+@@ -5159,6 +5190,10 @@ static int ext4_load_journal(struct super_block *sb,
+ }
+
+ return 0;
++
++err_out:
++ jbd2_journal_destroy(journal);
++ return err;
+ }
+
+ static int ext4_commit_super(struct super_block *sb, int sync)
+@@ -5170,13 +5205,6 @@ static int ext4_commit_super(struct super_block *sb, int sync)
+ if (!sbh || block_device_ejected(sb))
+ return error;
+
+- /*
+- * The superblock bh should be mapped, but it might not be if the
+- * device was hot-removed. Not much we can do but fail the I/O.
+- */
+- if (!buffer_mapped(sbh))
+- return error;
+-
+ /*
+ * If the file system is mounted read-only, don't update the
+ * superblock write time. This avoids updating the superblock
+@@ -5244,26 +5272,32 @@ static int ext4_commit_super(struct super_block *sb, int sync)
+ * remounting) the filesystem readonly, then we will end up with a
+ * consistent fs on disk. Record that fact.
+ */
+-static void ext4_mark_recovery_complete(struct super_block *sb,
+- struct ext4_super_block *es)
++static int ext4_mark_recovery_complete(struct super_block *sb,
++ struct ext4_super_block *es)
+ {
++ int err;
+ journal_t *journal = EXT4_SB(sb)->s_journal;
+
+ if (!ext4_has_feature_journal(sb)) {
+- BUG_ON(journal != NULL);
+- return;
++ if (journal != NULL) {
++ ext4_error(sb, "Journal got removed while the fs was "
++ "mounted!");
++ return -EFSCORRUPTED;
++ }
++ return 0;
+ }
+ jbd2_journal_lock_updates(journal);
+- if (jbd2_journal_flush(journal) < 0)
++ err = jbd2_journal_flush(journal);
++ if (err < 0)
+ goto out;
+
+ if (ext4_has_feature_journal_needs_recovery(sb) && sb_rdonly(sb)) {
+ ext4_clear_feature_journal_needs_recovery(sb);
+ ext4_commit_super(sb, 1);
+ }
+-
+ out:
+ jbd2_journal_unlock_updates(journal);
++ return err;
+ }
+
+ /*
+@@ -5271,14 +5305,17 @@ out:
+ * has recorded an error from a previous lifetime, move that error to the
+ * main filesystem now.
+ */
+-static void ext4_clear_journal_err(struct super_block *sb,
++static int ext4_clear_journal_err(struct super_block *sb,
+ struct ext4_super_block *es)
+ {
+ journal_t *journal;
+ int j_errno;
+ const char *errstr;
+
+- BUG_ON(!ext4_has_feature_journal(sb));
++ if (!ext4_has_feature_journal(sb)) {
++ ext4_error(sb, "Journal got removed while the fs was mounted!");
++ return -EFSCORRUPTED;
++ }
+
+ journal = EXT4_SB(sb)->s_journal;
+
+@@ -5303,6 +5340,7 @@ static void ext4_clear_journal_err(struct super_block *sb,
+ jbd2_journal_clear_err(journal);
+ jbd2_journal_update_sb_errno(journal);
+ }
++ return 0;
+ }
+
+ /*
+@@ -5445,7 +5483,7 @@ static int ext4_remount(struct super_block *sb, int *flags, char *data)
+ {
+ struct ext4_super_block *es;
+ struct ext4_sb_info *sbi = EXT4_SB(sb);
+- unsigned long old_sb_flags;
++ unsigned long old_sb_flags, vfs_flags;
+ struct ext4_mount_options old_opts;
+ int enable_quota = 0;
+ ext4_group_t g;
+@@ -5488,6 +5526,14 @@ static int ext4_remount(struct super_block *sb, int *flags, char *data)
+ if (sbi->s_journal && sbi->s_journal->j_task->io_context)
+ journal_ioprio = sbi->s_journal->j_task->io_context->ioprio;
+
++ /*
++ * Some options can be enabled by ext4 and/or by VFS mount flag
++ * either way we need to make sure it matches in both *flags and
++ * s_flags. Copy those selected flags from *flags to s_flags
++ */
++ vfs_flags = SB_LAZYTIME | SB_I_VERSION;
++ sb->s_flags = (sb->s_flags & ~vfs_flags) | (*flags & vfs_flags);
++
+ if (!parse_options(data, sb, NULL, &journal_ioprio, 1)) {
+ err = -EINVAL;
+ goto restore_opts;
+@@ -5541,9 +5587,6 @@ static int ext4_remount(struct super_block *sb, int *flags, char *data)
+ set_task_ioprio(sbi->s_journal->j_task, journal_ioprio);
+ }
+
+- if (*flags & SB_LAZYTIME)
+- sb->s_flags |= SB_LAZYTIME;
+-
+ if ((bool)(*flags & SB_RDONLY) != sb_rdonly(sb)) {
+ if (sbi->s_mount_flags & EXT4_MF_FS_ABORTED) {
+ err = -EROFS;
+@@ -5573,8 +5616,13 @@ static int ext4_remount(struct super_block *sb, int *flags, char *data)
+ (sbi->s_mount_state & EXT4_VALID_FS))
+ es->s_state = cpu_to_le16(sbi->s_mount_state);
+
+- if (sbi->s_journal)
++ if (sbi->s_journal) {
++ /*
++ * We let remount-ro finish even if marking fs
++ * as clean failed...
++ */
+ ext4_mark_recovery_complete(sb, es);
++ }
+ if (sbi->s_mmp_tsk)
+ kthread_stop(sbi->s_mmp_tsk);
+ } else {
+@@ -5616,14 +5664,25 @@ static int ext4_remount(struct super_block *sb, int *flags, char *data)
+ goto restore_opts;
+ }
+
++ /*
++ * Update the original bdev mapping's wb_err value
++ * which could be used to detect the metadata async
++ * write error.
++ */
++ errseq_check_and_advance(&sb->s_bdev->bd_inode->i_mapping->wb_err,
++ &sbi->s_bdev_wb_err);
++
+ /*
+ * Mounting a RDONLY partition read-write, so reread
+ * and store the current valid flag. (It may have
+ * been changed by e2fsck since we originally mounted
+ * the partition.)
+ */
+- if (sbi->s_journal)
+- ext4_clear_journal_err(sb, es);
++ if (sbi->s_journal) {
++ err = ext4_clear_journal_err(sb, es);
++ if (err)
++ goto restore_opts;
++ }
+ sbi->s_mount_state = le16_to_cpu(es->s_state);
+
+ err = ext4_setup_super(sb, es, 0);
+@@ -5653,7 +5712,17 @@ static int ext4_remount(struct super_block *sb, int *flags, char *data)
+ ext4_register_li_request(sb, first_not_zeroed);
+ }
+
+- ext4_setup_system_zone(sb);
++ /*
++ * Handle creation of system zone data early because it can fail.
++ * Releasing of existing data is done when we are sure remount will
++ * succeed.
++ */
++ if (test_opt(sb, BLOCK_VALIDITY) && !sbi->system_blks) {
++ err = ext4_setup_system_zone(sb);
++ if (err)
++ goto restore_opts;
++ }
++
+ if (sbi->s_journal == NULL && !(old_sb_flags & SB_RDONLY)) {
+ err = ext4_commit_super(sb, 1);
+ if (err)
+@@ -5674,8 +5743,16 @@ static int ext4_remount(struct super_block *sb, int *flags, char *data)
+ }
+ }
+ #endif
++ if (!test_opt(sb, BLOCK_VALIDITY) && sbi->system_blks)
++ ext4_release_system_zone(sb);
++
++ /*
++ * Some options can be enabled by ext4 and/or by VFS mount flag
++ * either way we need to make sure it matches in both *flags and
++ * s_flags. Copy those selected flags from s_flags to *flags
++ */
++ *flags = (*flags & ~vfs_flags) | (sb->s_flags & vfs_flags);
+
+- *flags = (*flags & ~SB_LAZYTIME) | (sb->s_flags & SB_LAZYTIME);
+ ext4_msg(sb, KERN_INFO, "re-mounted. Opts: %s", orig_data);
+ kfree(orig_data);
+ return 0;
+@@ -5689,6 +5766,8 @@ restore_opts:
+ sbi->s_commit_interval = old_opts.s_commit_interval;
+ sbi->s_min_batch_time = old_opts.s_min_batch_time;
+ sbi->s_max_batch_time = old_opts.s_max_batch_time;
++ if (!test_opt(sb, BLOCK_VALIDITY) && sbi->system_blks)
++ ext4_release_system_zone(sb);
+ #ifdef CONFIG_QUOTA
+ sbi->s_jquota_fmt = old_opts.s_jquota_fmt;
+ for (i = 0; i < EXT4_MAXQUOTAS; i++) {
+diff --git a/fs/ext4/sysfs.c b/fs/ext4/sysfs.c
+index 6c9fc9e21c138..92f04e9e94413 100644
+--- a/fs/ext4/sysfs.c
++++ b/fs/ext4/sysfs.c
+@@ -215,6 +215,7 @@ EXT4_RW_ATTR_SBI_UI(mb_min_to_scan, s_mb_min_to_scan);
+ EXT4_RW_ATTR_SBI_UI(mb_order2_req, s_mb_order2_reqs);
+ EXT4_RW_ATTR_SBI_UI(mb_stream_req, s_mb_stream_request);
+ EXT4_RW_ATTR_SBI_UI(mb_group_prealloc, s_mb_group_prealloc);
++EXT4_RW_ATTR_SBI_UI(mb_max_inode_prealloc, s_mb_max_inode_prealloc);
+ EXT4_RW_ATTR_SBI_UI(extent_max_zeroout_kb, s_extent_max_zeroout_kb);
+ EXT4_ATTR(trigger_fs_error, 0200, trigger_test_error);
+ EXT4_RW_ATTR_SBI_UI(err_ratelimit_interval_ms, s_err_ratelimit_state.interval);
+@@ -257,6 +258,7 @@ static struct attribute *ext4_attrs[] = {
+ ATTR_LIST(mb_order2_req),
+ ATTR_LIST(mb_stream_req),
+ ATTR_LIST(mb_group_prealloc),
++ ATTR_LIST(mb_max_inode_prealloc),
+ ATTR_LIST(max_writeback_mb_bump),
+ ATTR_LIST(extent_max_zeroout_kb),
+ ATTR_LIST(trigger_fs_error),
+diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
+index b35a50f4953c5..7d9afd54e9d8f 100644
+--- a/fs/f2fs/f2fs.h
++++ b/fs/f2fs/f2fs.h
+@@ -3287,7 +3287,7 @@ bool f2fs_alloc_nid(struct f2fs_sb_info *sbi, nid_t *nid);
+ void f2fs_alloc_nid_done(struct f2fs_sb_info *sbi, nid_t nid);
+ void f2fs_alloc_nid_failed(struct f2fs_sb_info *sbi, nid_t nid);
+ int f2fs_try_to_free_nids(struct f2fs_sb_info *sbi, int nr_shrink);
+-void f2fs_recover_inline_xattr(struct inode *inode, struct page *page);
++int f2fs_recover_inline_xattr(struct inode *inode, struct page *page);
+ int f2fs_recover_xattr_data(struct inode *inode, struct page *page);
+ int f2fs_recover_inode_page(struct f2fs_sb_info *sbi, struct page *page);
+ int f2fs_restore_node_summary(struct f2fs_sb_info *sbi,
+@@ -3750,7 +3750,7 @@ int f2fs_convert_inline_page(struct dnode_of_data *dn, struct page *page);
+ int f2fs_convert_inline_inode(struct inode *inode);
+ int f2fs_try_convert_inline_dir(struct inode *dir, struct dentry *dentry);
+ int f2fs_write_inline_data(struct inode *inode, struct page *page);
+-bool f2fs_recover_inline_data(struct inode *inode, struct page *npage);
++int f2fs_recover_inline_data(struct inode *inode, struct page *npage);
+ struct f2fs_dir_entry *f2fs_find_in_inline_dir(struct inode *dir,
+ const struct f2fs_filename *fname,
+ struct page **res_page);
+diff --git a/fs/f2fs/inline.c b/fs/f2fs/inline.c
+index dbade310dc792..cf2c347bd7a3d 100644
+--- a/fs/f2fs/inline.c
++++ b/fs/f2fs/inline.c
+@@ -253,7 +253,7 @@ int f2fs_write_inline_data(struct inode *inode, struct page *page)
+ return 0;
+ }
+
+-bool f2fs_recover_inline_data(struct inode *inode, struct page *npage)
++int f2fs_recover_inline_data(struct inode *inode, struct page *npage)
+ {
+ struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
+ struct f2fs_inode *ri = NULL;
+@@ -275,7 +275,8 @@ bool f2fs_recover_inline_data(struct inode *inode, struct page *npage)
+ ri && (ri->i_inline & F2FS_INLINE_DATA)) {
+ process_inline:
+ ipage = f2fs_get_node_page(sbi, inode->i_ino);
+- f2fs_bug_on(sbi, IS_ERR(ipage));
++ if (IS_ERR(ipage))
++ return PTR_ERR(ipage);
+
+ f2fs_wait_on_page_writeback(ipage, NODE, true, true);
+
+@@ -288,21 +289,25 @@ process_inline:
+
+ set_page_dirty(ipage);
+ f2fs_put_page(ipage, 1);
+- return true;
++ return 1;
+ }
+
+ if (f2fs_has_inline_data(inode)) {
+ ipage = f2fs_get_node_page(sbi, inode->i_ino);
+- f2fs_bug_on(sbi, IS_ERR(ipage));
++ if (IS_ERR(ipage))
++ return PTR_ERR(ipage);
+ f2fs_truncate_inline_inode(inode, ipage, 0);
+ clear_inode_flag(inode, FI_INLINE_DATA);
+ f2fs_put_page(ipage, 1);
+ } else if (ri && (ri->i_inline & F2FS_INLINE_DATA)) {
+- if (f2fs_truncate_blocks(inode, 0, false))
+- return false;
++ int ret;
++
++ ret = f2fs_truncate_blocks(inode, 0, false);
++ if (ret)
++ return ret;
+ goto process_inline;
+ }
+- return false;
++ return 0;
+ }
+
+ struct f2fs_dir_entry *f2fs_find_in_inline_dir(struct inode *dir,
+diff --git a/fs/f2fs/node.c b/fs/f2fs/node.c
+index e61ce7fb0958b..98736d0598b8d 100644
+--- a/fs/f2fs/node.c
++++ b/fs/f2fs/node.c
+@@ -2576,7 +2576,7 @@ int f2fs_try_to_free_nids(struct f2fs_sb_info *sbi, int nr_shrink)
+ return nr - nr_shrink;
+ }
+
+-void f2fs_recover_inline_xattr(struct inode *inode, struct page *page)
++int f2fs_recover_inline_xattr(struct inode *inode, struct page *page)
+ {
+ void *src_addr, *dst_addr;
+ size_t inline_size;
+@@ -2584,7 +2584,8 @@ void f2fs_recover_inline_xattr(struct inode *inode, struct page *page)
+ struct f2fs_inode *ri;
+
+ ipage = f2fs_get_node_page(F2FS_I_SB(inode), inode->i_ino);
+- f2fs_bug_on(F2FS_I_SB(inode), IS_ERR(ipage));
++ if (IS_ERR(ipage))
++ return PTR_ERR(ipage);
+
+ ri = F2FS_INODE(page);
+ if (ri->i_inline & F2FS_INLINE_XATTR) {
+@@ -2603,6 +2604,7 @@ void f2fs_recover_inline_xattr(struct inode *inode, struct page *page)
+ update_inode:
+ f2fs_update_inode(inode, ipage);
+ f2fs_put_page(ipage, 1);
++ return 0;
+ }
+
+ int f2fs_recover_xattr_data(struct inode *inode, struct page *page)
+diff --git a/fs/f2fs/recovery.c b/fs/f2fs/recovery.c
+index ae5310f02e7ff..2807251944668 100644
+--- a/fs/f2fs/recovery.c
++++ b/fs/f2fs/recovery.c
+@@ -544,7 +544,9 @@ static int do_recover_data(struct f2fs_sb_info *sbi, struct inode *inode,
+
+ /* step 1: recover xattr */
+ if (IS_INODE(page)) {
+- f2fs_recover_inline_xattr(inode, page);
++ err = f2fs_recover_inline_xattr(inode, page);
++ if (err)
++ goto out;
+ } else if (f2fs_has_xattr_block(ofs_of_node(page))) {
+ err = f2fs_recover_xattr_data(inode, page);
+ if (!err)
+@@ -553,8 +555,12 @@ static int do_recover_data(struct f2fs_sb_info *sbi, struct inode *inode,
+ }
+
+ /* step 2: recover inline data */
+- if (f2fs_recover_inline_data(inode, page))
++ err = f2fs_recover_inline_data(inode, page);
++ if (err) {
++ if (err == 1)
++ err = 0;
+ goto out;
++ }
+
+ /* step 3: recover data indices */
+ start = f2fs_start_bidx_of_node(ofs_of_node(page), inode);
+diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
+index 20e56b0fa46a9..0deb839da0a03 100644
+--- a/fs/f2fs/super.c
++++ b/fs/f2fs/super.c
+@@ -1173,6 +1173,9 @@ static void f2fs_put_super(struct super_block *sb)
+ int i;
+ bool dropped;
+
++ /* unregister procfs/sysfs entries in advance to avoid race case */
++ f2fs_unregister_sysfs(sbi);
++
+ f2fs_quota_off_umount(sb);
+
+ /* prevent remaining shrinker jobs */
+@@ -1238,8 +1241,6 @@ static void f2fs_put_super(struct super_block *sb)
+
+ kvfree(sbi->ckpt);
+
+- f2fs_unregister_sysfs(sbi);
+-
+ sb->s_fs_info = NULL;
+ if (sbi->s_chksum_driver)
+ crypto_free_shash(sbi->s_chksum_driver);
+diff --git a/fs/f2fs/sysfs.c b/fs/f2fs/sysfs.c
+index e877c59b9fdb4..c5e32ceb94827 100644
+--- a/fs/f2fs/sysfs.c
++++ b/fs/f2fs/sysfs.c
+@@ -223,6 +223,13 @@ static ssize_t avg_vblocks_show(struct f2fs_attr *a,
+ }
+ #endif
+
++static ssize_t main_blkaddr_show(struct f2fs_attr *a,
++ struct f2fs_sb_info *sbi, char *buf)
++{
++ return snprintf(buf, PAGE_SIZE, "%llu\n",
++ (unsigned long long)MAIN_BLKADDR(sbi));
++}
++
+ static ssize_t f2fs_sbi_show(struct f2fs_attr *a,
+ struct f2fs_sb_info *sbi, char *buf)
+ {
+@@ -522,7 +529,6 @@ F2FS_RW_ATTR(GC_THREAD, f2fs_gc_kthread, gc_no_gc_sleep_time, no_gc_sleep_time);
+ F2FS_RW_ATTR(F2FS_SBI, f2fs_sb_info, gc_idle, gc_mode);
+ F2FS_RW_ATTR(F2FS_SBI, f2fs_sb_info, gc_urgent, gc_mode);
+ F2FS_RW_ATTR(SM_INFO, f2fs_sm_info, reclaim_segments, rec_prefree_segments);
+-F2FS_RW_ATTR(SM_INFO, f2fs_sm_info, main_blkaddr, main_blkaddr);
+ F2FS_RW_ATTR(DCC_INFO, discard_cmd_control, max_small_discards, max_discards);
+ F2FS_RW_ATTR(DCC_INFO, discard_cmd_control, discard_granularity, discard_granularity);
+ F2FS_RW_ATTR(RESERVED_BLOCKS, f2fs_sb_info, reserved_blocks, reserved_blocks);
+@@ -565,6 +571,7 @@ F2FS_GENERAL_RO_ATTR(current_reserved_blocks);
+ F2FS_GENERAL_RO_ATTR(unusable);
+ F2FS_GENERAL_RO_ATTR(encoding);
+ F2FS_GENERAL_RO_ATTR(mounted_time_sec);
++F2FS_GENERAL_RO_ATTR(main_blkaddr);
+ #ifdef CONFIG_F2FS_STAT_FS
+ F2FS_STAT_ATTR(STAT_INFO, f2fs_stat_info, cp_foreground_calls, cp_count);
+ F2FS_STAT_ATTR(STAT_INFO, f2fs_stat_info, cp_background_calls, bg_cp_count);
+diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
+index a605c3dddabc7..ae17d64a3e189 100644
+--- a/fs/fs-writeback.c
++++ b/fs/fs-writeback.c
+@@ -42,7 +42,6 @@
+ struct wb_writeback_work {
+ long nr_pages;
+ struct super_block *sb;
+- unsigned long *older_than_this;
+ enum writeback_sync_modes sync_mode;
+ unsigned int tagged_writepages:1;
+ unsigned int for_kupdate:1;
+@@ -144,7 +143,9 @@ static void inode_io_list_del_locked(struct inode *inode,
+ struct bdi_writeback *wb)
+ {
+ assert_spin_locked(&wb->list_lock);
++ assert_spin_locked(&inode->i_lock);
+
++ inode->i_state &= ~I_SYNC_QUEUED;
+ list_del_init(&inode->i_io_list);
+ wb_io_lists_depopulated(wb);
+ }
+@@ -1122,7 +1123,9 @@ void inode_io_list_del(struct inode *inode)
+ struct bdi_writeback *wb;
+
+ wb = inode_to_wb_and_lock_list(inode);
++ spin_lock(&inode->i_lock);
+ inode_io_list_del_locked(inode, wb);
++ spin_unlock(&inode->i_lock);
+ spin_unlock(&wb->list_lock);
+ }
+ EXPORT_SYMBOL(inode_io_list_del);
+@@ -1172,8 +1175,10 @@ void sb_clear_inode_writeback(struct inode *inode)
+ * the case then the inode must have been redirtied while it was being written
+ * out and we don't reset its dirtied_when.
+ */
+-static void redirty_tail(struct inode *inode, struct bdi_writeback *wb)
++static void redirty_tail_locked(struct inode *inode, struct bdi_writeback *wb)
+ {
++ assert_spin_locked(&inode->i_lock);
++
+ if (!list_empty(&wb->b_dirty)) {
+ struct inode *tail;
+
+@@ -1182,6 +1187,14 @@ static void redirty_tail(struct inode *inode, struct bdi_writeback *wb)
+ inode->dirtied_when = jiffies;
+ }
+ inode_io_list_move_locked(inode, wb, &wb->b_dirty);
++ inode->i_state &= ~I_SYNC_QUEUED;
++}
++
++static void redirty_tail(struct inode *inode, struct bdi_writeback *wb)
++{
++ spin_lock(&inode->i_lock);
++ redirty_tail_locked(inode, wb);
++ spin_unlock(&inode->i_lock);
+ }
+
+ /*
+@@ -1220,16 +1233,13 @@ static bool inode_dirtied_after(struct inode *inode, unsigned long t)
+ #define EXPIRE_DIRTY_ATIME 0x0001
+
+ /*
+- * Move expired (dirtied before work->older_than_this) dirty inodes from
++ * Move expired (dirtied before dirtied_before) dirty inodes from
+ * @delaying_queue to @dispatch_queue.
+ */
+ static int move_expired_inodes(struct list_head *delaying_queue,
+ struct list_head *dispatch_queue,
+- int flags,
+- struct wb_writeback_work *work)
++ int flags, unsigned long dirtied_before)
+ {
+- unsigned long *older_than_this = NULL;
+- unsigned long expire_time;
+ LIST_HEAD(tmp);
+ struct list_head *pos, *node;
+ struct super_block *sb = NULL;
+@@ -1237,21 +1247,17 @@ static int move_expired_inodes(struct list_head *delaying_queue,
+ int do_sb_sort = 0;
+ int moved = 0;
+
+- if ((flags & EXPIRE_DIRTY_ATIME) == 0)
+- older_than_this = work->older_than_this;
+- else if (!work->for_sync) {
+- expire_time = jiffies - (dirtytime_expire_interval * HZ);
+- older_than_this = &expire_time;
+- }
+ while (!list_empty(delaying_queue)) {
+ inode = wb_inode(delaying_queue->prev);
+- if (older_than_this &&
+- inode_dirtied_after(inode, *older_than_this))
++ if (inode_dirtied_after(inode, dirtied_before))
+ break;
+ list_move(&inode->i_io_list, &tmp);
+ moved++;
++ spin_lock(&inode->i_lock);
+ if (flags & EXPIRE_DIRTY_ATIME)
+- set_bit(__I_DIRTY_TIME_EXPIRED, &inode->i_state);
++ inode->i_state |= I_DIRTY_TIME_EXPIRED;
++ inode->i_state |= I_SYNC_QUEUED;
++ spin_unlock(&inode->i_lock);
+ if (sb_is_blkdev_sb(inode->i_sb))
+ continue;
+ if (sb && sb != inode->i_sb)
+@@ -1289,18 +1295,22 @@ out:
+ * |
+ * +--> dequeue for IO
+ */
+-static void queue_io(struct bdi_writeback *wb, struct wb_writeback_work *work)
++static void queue_io(struct bdi_writeback *wb, struct wb_writeback_work *work,
++ unsigned long dirtied_before)
+ {
+ int moved;
++ unsigned long time_expire_jif = dirtied_before;
+
+ assert_spin_locked(&wb->list_lock);
+ list_splice_init(&wb->b_more_io, &wb->b_io);
+- moved = move_expired_inodes(&wb->b_dirty, &wb->b_io, 0, work);
++ moved = move_expired_inodes(&wb->b_dirty, &wb->b_io, 0, dirtied_before);
++ if (!work->for_sync)
++ time_expire_jif = jiffies - dirtytime_expire_interval * HZ;
+ moved += move_expired_inodes(&wb->b_dirty_time, &wb->b_io,
+- EXPIRE_DIRTY_ATIME, work);
++ EXPIRE_DIRTY_ATIME, time_expire_jif);
+ if (moved)
+ wb_io_lists_populated(wb);
+- trace_writeback_queue_io(wb, work, moved);
++ trace_writeback_queue_io(wb, work, dirtied_before, moved);
+ }
+
+ static int write_inode(struct inode *inode, struct writeback_control *wbc)
+@@ -1394,7 +1404,7 @@ static void requeue_inode(struct inode *inode, struct bdi_writeback *wb,
+ * writeback is not making progress due to locked
+ * buffers. Skip this inode for now.
+ */
+- redirty_tail(inode, wb);
++ redirty_tail_locked(inode, wb);
+ return;
+ }
+
+@@ -1414,7 +1424,7 @@ static void requeue_inode(struct inode *inode, struct bdi_writeback *wb,
+ * retrying writeback of the dirty page/inode
+ * that cannot be performed immediately.
+ */
+- redirty_tail(inode, wb);
++ redirty_tail_locked(inode, wb);
+ }
+ } else if (inode->i_state & I_DIRTY) {
+ /*
+@@ -1422,10 +1432,11 @@ static void requeue_inode(struct inode *inode, struct bdi_writeback *wb,
+ * such as delayed allocation during submission or metadata
+ * updates after data IO completion.
+ */
+- redirty_tail(inode, wb);
++ redirty_tail_locked(inode, wb);
+ } else if (inode->i_state & I_DIRTY_TIME) {
+ inode->dirtied_when = jiffies;
+ inode_io_list_move_locked(inode, wb, &wb->b_dirty_time);
++ inode->i_state &= ~I_SYNC_QUEUED;
+ } else {
+ /* The inode is clean. Remove from writeback lists. */
+ inode_io_list_del_locked(inode, wb);
+@@ -1669,8 +1680,8 @@ static long writeback_sb_inodes(struct super_block *sb,
+ */
+ spin_lock(&inode->i_lock);
+ if (inode->i_state & (I_NEW | I_FREEING | I_WILL_FREE)) {
++ redirty_tail_locked(inode, wb);
+ spin_unlock(&inode->i_lock);
+- redirty_tail(inode, wb);
+ continue;
+ }
+ if ((inode->i_state & I_SYNC) && wbc.sync_mode != WB_SYNC_ALL) {
+@@ -1811,7 +1822,7 @@ static long writeback_inodes_wb(struct bdi_writeback *wb, long nr_pages,
+ blk_start_plug(&plug);
+ spin_lock(&wb->list_lock);
+ if (list_empty(&wb->b_io))
+- queue_io(wb, &work);
++ queue_io(wb, &work, jiffies);
+ __writeback_inodes_wb(wb, &work);
+ spin_unlock(&wb->list_lock);
+ blk_finish_plug(&plug);
+@@ -1831,7 +1842,7 @@ static long writeback_inodes_wb(struct bdi_writeback *wb, long nr_pages,
+ * takes longer than a dirty_writeback_interval interval, then leave a
+ * one-second gap.
+ *
+- * older_than_this takes precedence over nr_to_write. So we'll only write back
++ * dirtied_before takes precedence over nr_to_write. So we'll only write back
+ * all dirty pages if they are all attached to "old" mappings.
+ */
+ static long wb_writeback(struct bdi_writeback *wb,
+@@ -1839,14 +1850,11 @@ static long wb_writeback(struct bdi_writeback *wb,
+ {
+ unsigned long wb_start = jiffies;
+ long nr_pages = work->nr_pages;
+- unsigned long oldest_jif;
++ unsigned long dirtied_before = jiffies;
+ struct inode *inode;
+ long progress;
+ struct blk_plug plug;
+
+- oldest_jif = jiffies;
+- work->older_than_this = &oldest_jif;
+-
+ blk_start_plug(&plug);
+ spin_lock(&wb->list_lock);
+ for (;;) {
+@@ -1880,14 +1888,14 @@ static long wb_writeback(struct bdi_writeback *wb,
+ * safe.
+ */
+ if (work->for_kupdate) {
+- oldest_jif = jiffies -
++ dirtied_before = jiffies -
+ msecs_to_jiffies(dirty_expire_interval * 10);
+ } else if (work->for_background)
+- oldest_jif = jiffies;
++ dirtied_before = jiffies;
+
+ trace_writeback_start(wb, work);
+ if (list_empty(&wb->b_io))
+- queue_io(wb, work);
++ queue_io(wb, work, dirtied_before);
+ if (work->sb)
+ progress = writeback_sb_inodes(work->sb, wb, work);
+ else
+@@ -2289,11 +2297,12 @@ void __mark_inode_dirty(struct inode *inode, int flags)
+ inode->i_state |= flags;
+
+ /*
+- * If the inode is being synced, just update its dirty state.
+- * The unlocker will place the inode on the appropriate
+- * superblock list, based upon its state.
++ * If the inode is queued for writeback by flush worker, just
++ * update its dirty state. Once the flush worker is done with
++ * the inode it will place it on the appropriate superblock
++ * list, based upon its state.
+ */
+- if (inode->i_state & I_SYNC)
++ if (inode->i_state & I_SYNC_QUEUED)
+ goto out_unlock_inode;
+
+ /*
+diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
+index ef5313f9c78fe..f936bcf02cce7 100644
+--- a/fs/hugetlbfs/inode.c
++++ b/fs/hugetlbfs/inode.c
+@@ -1364,6 +1364,12 @@ hugetlbfs_fill_super(struct super_block *sb, struct fs_context *fc)
+ sb->s_magic = HUGETLBFS_MAGIC;
+ sb->s_op = &hugetlbfs_ops;
+ sb->s_time_gran = 1;
++
++ /*
++ * Due to the special and limited functionality of hugetlbfs, it does
++ * not work well as a stacking filesystem.
++ */
++ sb->s_stack_depth = FILESYSTEM_MAX_STACK_DEPTH;
+ sb->s_root = d_make_root(hugetlbfs_get_root(sb, ctx));
+ if (!sb->s_root)
+ goto out_free;
+diff --git a/fs/io-wq.c b/fs/io-wq.c
+index 47c5f3aeb4600..cb9e5a444fba7 100644
+--- a/fs/io-wq.c
++++ b/fs/io-wq.c
+@@ -929,6 +929,24 @@ static bool io_wq_worker_cancel(struct io_worker *worker, void *data)
+ return match->nr_running && !match->cancel_all;
+ }
+
++static inline void io_wqe_remove_pending(struct io_wqe *wqe,
++ struct io_wq_work *work,
++ struct io_wq_work_node *prev)
++{
++ unsigned int hash = io_get_work_hash(work);
++ struct io_wq_work *prev_work = NULL;
++
++ if (io_wq_is_hashed(work) && work == wqe->hash_tail[hash]) {
++ if (prev)
++ prev_work = container_of(prev, struct io_wq_work, list);
++ if (prev_work && io_get_work_hash(prev_work) == hash)
++ wqe->hash_tail[hash] = prev_work;
++ else
++ wqe->hash_tail[hash] = NULL;
++ }
++ wq_list_del(&wqe->work_list, &work->list, prev);
++}
++
+ static void io_wqe_cancel_pending_work(struct io_wqe *wqe,
+ struct io_cb_cancel_data *match)
+ {
+@@ -942,8 +960,7 @@ retry:
+ work = container_of(node, struct io_wq_work, list);
+ if (!match->fn(work, match->data))
+ continue;
+-
+- wq_list_del(&wqe->work_list, node, prev);
++ io_wqe_remove_pending(wqe, work, prev);
+ spin_unlock_irqrestore(&wqe->lock, flags);
+ io_run_cancel(work, wqe);
+ match->nr_pending++;
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+index 26978630378e0..4115bfedf15dc 100644
+--- a/fs/io_uring.c
++++ b/fs/io_uring.c
+@@ -1810,6 +1810,7 @@ static void io_iopoll_complete(struct io_ring_ctx *ctx, unsigned int *nr_events,
+
+ req = list_first_entry(done, struct io_kiocb, list);
+ if (READ_ONCE(req->result) == -EAGAIN) {
++ req->result = 0;
+ req->iopoll_completed = 0;
+ list_move_tail(&req->list, &again);
+ continue;
+@@ -2517,6 +2518,11 @@ static ssize_t io_import_iovec(int rw, struct io_kiocb *req,
+ return import_iovec(rw, buf, sqe_len, UIO_FASTIOV, iovec, iter);
+ }
+
++static inline loff_t *io_kiocb_ppos(struct kiocb *kiocb)
++{
++ return kiocb->ki_filp->f_mode & FMODE_STREAM ? NULL : &kiocb->ki_pos;
++}
++
+ /*
+ * For files that don't have ->read_iter() and ->write_iter(), handle them
+ * by looping over ->read() or ->write() manually.
+@@ -2552,10 +2558,10 @@ static ssize_t loop_rw_iter(int rw, struct file *file, struct kiocb *kiocb,
+
+ if (rw == READ) {
+ nr = file->f_op->read(file, iovec.iov_base,
+- iovec.iov_len, &kiocb->ki_pos);
++ iovec.iov_len, io_kiocb_ppos(kiocb));
+ } else {
+ nr = file->f_op->write(file, iovec.iov_base,
+- iovec.iov_len, &kiocb->ki_pos);
++ iovec.iov_len, io_kiocb_ppos(kiocb));
+ }
+
+ if (iov_iter_is_bvec(iter))
+@@ -2680,7 +2686,7 @@ static int io_read(struct io_kiocb *req, bool force_nonblock)
+ goto copy_iov;
+
+ iov_count = iov_iter_count(&iter);
+- ret = rw_verify_area(READ, req->file, &kiocb->ki_pos, iov_count);
++ ret = rw_verify_area(READ, req->file, io_kiocb_ppos(kiocb), iov_count);
+ if (!ret) {
+ ssize_t ret2;
+
+@@ -2779,7 +2785,7 @@ static int io_write(struct io_kiocb *req, bool force_nonblock)
+ goto copy_iov;
+
+ iov_count = iov_iter_count(&iter);
+- ret = rw_verify_area(WRITE, req->file, &kiocb->ki_pos, iov_count);
++ ret = rw_verify_area(WRITE, req->file, io_kiocb_ppos(kiocb), iov_count);
+ if (!ret) {
+ ssize_t ret2;
+
+@@ -4113,7 +4119,8 @@ struct io_poll_table {
+ int error;
+ };
+
+-static int io_req_task_work_add(struct io_kiocb *req, struct callback_head *cb)
++static int io_req_task_work_add(struct io_kiocb *req, struct callback_head *cb,
++ bool twa_signal_ok)
+ {
+ struct task_struct *tsk = req->task;
+ struct io_ring_ctx *ctx = req->ctx;
+@@ -4126,7 +4133,7 @@ static int io_req_task_work_add(struct io_kiocb *req, struct callback_head *cb)
+ * will do the job.
+ */
+ notify = 0;
+- if (!(ctx->flags & IORING_SETUP_SQPOLL))
++ if (!(ctx->flags & IORING_SETUP_SQPOLL) && twa_signal_ok)
+ notify = TWA_SIGNAL;
+
+ ret = task_work_add(tsk, cb, notify);
+@@ -4140,6 +4147,7 @@ static int __io_async_wake(struct io_kiocb *req, struct io_poll_iocb *poll,
+ __poll_t mask, task_work_func_t func)
+ {
+ struct task_struct *tsk;
++ bool twa_signal_ok;
+ int ret;
+
+ /* for instances that support it check for an event match first: */
+@@ -4155,13 +4163,21 @@ static int __io_async_wake(struct io_kiocb *req, struct io_poll_iocb *poll,
+ init_task_work(&req->task_work, func);
+ percpu_ref_get(&req->ctx->refs);
+
++ /*
++ * If we using the signalfd wait_queue_head for this wakeup, then
++ * it's not safe to use TWA_SIGNAL as we could be recursing on the
++ * tsk->sighand->siglock on doing the wakeup. Should not be needed
++ * either, as the normal wakeup will suffice.
++ */
++ twa_signal_ok = (poll->head != &req->task->sighand->signalfd_wqh);
++
+ /*
+ * If this fails, then the task is exiting. When a task exits, the
+ * work gets canceled, so just cancel this request as well instead
+ * of executing it. We can't safely execute it anyway, as we may not
+ * have the needed state needed for it anyway.
+ */
+- ret = io_req_task_work_add(req, &req->task_work);
++ ret = io_req_task_work_add(req, &req->task_work, twa_signal_ok);
+ if (unlikely(ret)) {
+ WRITE_ONCE(poll->canceled, true);
+ tsk = io_wq_get_task(req->ctx->io_wq);
+@@ -4492,12 +4508,20 @@ static bool io_arm_poll_handler(struct io_kiocb *req)
+ struct async_poll *apoll;
+ struct io_poll_table ipt;
+ __poll_t mask, ret;
++ int rw;
+
+ if (!req->file || !file_can_poll(req->file))
+ return false;
+ if (req->flags & (REQ_F_MUST_PUNT | REQ_F_POLLED))
+ return false;
+- if (!def->pollin && !def->pollout)
++ if (def->pollin)
++ rw = READ;
++ else if (def->pollout)
++ rw = WRITE;
++ else
++ return false;
++ /* if we can't nonblock try, then no point in arming a poll handler */
++ if (!io_file_supports_async(req->file, rw))
+ return false;
+
+ apoll = kmalloc(sizeof(*apoll), GFP_ATOMIC);
+diff --git a/fs/jbd2/transaction.c b/fs/jbd2/transaction.c
+index e91aad3637a23..6250c9faa4cbe 100644
+--- a/fs/jbd2/transaction.c
++++ b/fs/jbd2/transaction.c
+@@ -2026,6 +2026,9 @@ static void __jbd2_journal_temp_unlink_buffer(struct journal_head *jh)
+ */
+ static void __jbd2_journal_unfile_buffer(struct journal_head *jh)
+ {
++ J_ASSERT_JH(jh, jh->b_transaction != NULL);
++ J_ASSERT_JH(jh, jh->b_next_transaction == NULL);
++
+ __jbd2_journal_temp_unlink_buffer(jh);
+ jh->b_transaction = NULL;
+ }
+@@ -2117,6 +2120,7 @@ int jbd2_journal_try_to_free_buffers(journal_t *journal,
+ {
+ struct buffer_head *head;
+ struct buffer_head *bh;
++ bool has_write_io_error = false;
+ int ret = 0;
+
+ J_ASSERT(PageLocked(page));
+@@ -2141,11 +2145,26 @@ int jbd2_journal_try_to_free_buffers(journal_t *journal,
+ jbd2_journal_put_journal_head(jh);
+ if (buffer_jbd(bh))
+ goto busy;
++
++ /*
++ * If we free a metadata buffer which has been failed to
++ * write out, the jbd2 checkpoint procedure will not detect
++ * this failure and may lead to filesystem inconsistency
++ * after cleanup journal tail.
++ */
++ if (buffer_write_io_error(bh)) {
++ pr_err("JBD2: Error while async write back metadata bh %llu.",
++ (unsigned long long)bh->b_blocknr);
++ has_write_io_error = true;
++ }
+ } while ((bh = bh->b_this_page) != head);
+
+ ret = try_to_free_buffers(page);
+
+ busy:
++ if (has_write_io_error)
++ jbd2_journal_abort(journal, -EIO);
++
+ return ret;
+ }
+
+@@ -2572,6 +2591,13 @@ bool __jbd2_journal_refile_buffer(struct journal_head *jh)
+
+ was_dirty = test_clear_buffer_jbddirty(bh);
+ __jbd2_journal_temp_unlink_buffer(jh);
++
++ /*
++ * b_transaction must be set, otherwise the new b_transaction won't
++ * be holding jh reference
++ */
++ J_ASSERT_JH(jh, jh->b_transaction != NULL);
++
+ /*
+ * We set b_transaction here because b_next_transaction will inherit
+ * our jh reference and thus __jbd2_journal_file_buffer() must not
+diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
+index c9056316a0b35..cea682ce8aa12 100644
+--- a/fs/nfsd/nfs4state.c
++++ b/fs/nfsd/nfs4state.c
+@@ -4597,6 +4597,8 @@ static bool nfsd_breaker_owns_lease(struct file_lock *fl)
+ if (!i_am_nfsd())
+ return NULL;
+ rqst = kthread_data(current);
++ if (!rqst->rq_lease_breaker)
++ return NULL;
+ clp = *(rqst->rq_lease_breaker);
+ return dl->dl_stid.sc_client == clp;
+ }
+diff --git a/fs/xfs/libxfs/xfs_trans_inode.c b/fs/xfs/libxfs/xfs_trans_inode.c
+index b5dfb66548422..4504d215cd590 100644
+--- a/fs/xfs/libxfs/xfs_trans_inode.c
++++ b/fs/xfs/libxfs/xfs_trans_inode.c
+@@ -36,6 +36,7 @@ xfs_trans_ijoin(
+
+ ASSERT(iip->ili_lock_flags == 0);
+ iip->ili_lock_flags = lock_flags;
++ ASSERT(!xfs_iflags_test(ip, XFS_ISTALE));
+
+ /*
+ * Get a log_item_desc to point at the new item.
+@@ -89,6 +90,7 @@ xfs_trans_log_inode(
+
+ ASSERT(ip->i_itemp != NULL);
+ ASSERT(xfs_isilocked(ip, XFS_ILOCK_EXCL));
++ ASSERT(!xfs_iflags_test(ip, XFS_ISTALE));
+
+ /*
+ * Don't bother with i_lock for the I_DIRTY_TIME check here, as races
+diff --git a/fs/xfs/xfs_icache.c b/fs/xfs/xfs_icache.c
+index 5daef654956cb..59dea8178ae3c 100644
+--- a/fs/xfs/xfs_icache.c
++++ b/fs/xfs/xfs_icache.c
+@@ -1141,7 +1141,7 @@ restart:
+ goto out_ifunlock;
+ xfs_iunpin_wait(ip);
+ }
+- if (xfs_iflags_test(ip, XFS_ISTALE) || xfs_inode_clean(ip)) {
++ if (xfs_inode_clean(ip)) {
+ xfs_ifunlock(ip);
+ goto reclaim;
+ }
+@@ -1228,6 +1228,7 @@ reclaim:
+ xfs_ilock(ip, XFS_ILOCK_EXCL);
+ xfs_qm_dqdetach(ip);
+ xfs_iunlock(ip, XFS_ILOCK_EXCL);
++ ASSERT(xfs_inode_clean(ip));
+
+ __xfs_inode_free(ip);
+ return error;
+diff --git a/fs/xfs/xfs_inode.c b/fs/xfs/xfs_inode.c
+index 9aea7d68d8ab9..6d70daf1c92a7 100644
+--- a/fs/xfs/xfs_inode.c
++++ b/fs/xfs/xfs_inode.c
+@@ -1740,10 +1740,31 @@ xfs_inactive_ifree(
+ return error;
+ }
+
++ /*
++ * We do not hold the inode locked across the entire rolling transaction
++ * here. We only need to hold it for the first transaction that
++ * xfs_ifree() builds, which may mark the inode XFS_ISTALE if the
++ * underlying cluster buffer is freed. Relogging an XFS_ISTALE inode
++ * here breaks the relationship between cluster buffer invalidation and
++ * stale inode invalidation on cluster buffer item journal commit
++ * completion, and can result in leaving dirty stale inodes hanging
++ * around in memory.
++ *
++ * We have no need for serialising this inode operation against other
++ * operations - we freed the inode and hence reallocation is required
++ * and that will serialise on reallocating the space the deferops need
++ * to free. Hence we can unlock the inode on the first commit of
++ * the transaction rather than roll it right through the deferops. This
++ * avoids relogging the XFS_ISTALE inode.
++ *
++ * We check that xfs_ifree() hasn't grown an internal transaction roll
++ * by asserting that the inode is still locked when it returns.
++ */
+ xfs_ilock(ip, XFS_ILOCK_EXCL);
+- xfs_trans_ijoin(tp, ip, 0);
++ xfs_trans_ijoin(tp, ip, XFS_ILOCK_EXCL);
+
+ error = xfs_ifree(tp, ip);
++ ASSERT(xfs_isilocked(ip, XFS_ILOCK_EXCL));
+ if (error) {
+ /*
+ * If we fail to free the inode, shut down. The cancel
+@@ -1756,7 +1777,6 @@ xfs_inactive_ifree(
+ xfs_force_shutdown(mp, SHUTDOWN_META_IO_ERROR);
+ }
+ xfs_trans_cancel(tp);
+- xfs_iunlock(ip, XFS_ILOCK_EXCL);
+ return error;
+ }
+
+@@ -1774,7 +1794,6 @@ xfs_inactive_ifree(
+ xfs_notice(mp, "%s: xfs_trans_commit returned error %d",
+ __func__, error);
+
+- xfs_iunlock(ip, XFS_ILOCK_EXCL);
+ return 0;
+ }
+
+diff --git a/include/drm/drm_modeset_lock.h b/include/drm/drm_modeset_lock.h
+index 4fc9a43ac45a8..aafd07388eb7b 100644
+--- a/include/drm/drm_modeset_lock.h
++++ b/include/drm/drm_modeset_lock.h
+@@ -164,6 +164,8 @@ int drm_modeset_lock_all_ctx(struct drm_device *dev,
+ * is 0, so no error checking is necessary
+ */
+ #define DRM_MODESET_LOCK_ALL_BEGIN(dev, ctx, flags, ret) \
++ if (!drm_drv_uses_atomic_modeset(dev)) \
++ mutex_lock(&dev->mode_config.mutex); \
+ drm_modeset_acquire_init(&ctx, flags); \
+ modeset_lock_retry: \
+ ret = drm_modeset_lock_all_ctx(dev, &ctx); \
+@@ -172,6 +174,7 @@ modeset_lock_retry: \
+
+ /**
+ * DRM_MODESET_LOCK_ALL_END - Helper to release and cleanup modeset locks
++ * @dev: drm device
+ * @ctx: local modeset acquire context, will be dereferenced
+ * @ret: local ret/err/etc variable to track error status
+ *
+@@ -188,7 +191,7 @@ modeset_lock_retry: \
+ * to that failure. In both of these cases the code between BEGIN/END will not
+ * be run, so the failure will reflect the inability to grab the locks.
+ */
+-#define DRM_MODESET_LOCK_ALL_END(ctx, ret) \
++#define DRM_MODESET_LOCK_ALL_END(dev, ctx, ret) \
+ modeset_lock_fail: \
+ if (ret == -EDEADLK) { \
+ ret = drm_modeset_backoff(&ctx); \
+@@ -196,6 +199,8 @@ modeset_lock_fail: \
+ goto modeset_lock_retry; \
+ } \
+ drm_modeset_drop_locks(&ctx); \
+- drm_modeset_acquire_fini(&ctx);
++ drm_modeset_acquire_fini(&ctx); \
++ if (!drm_drv_uses_atomic_modeset(dev)) \
++ mutex_unlock(&dev->mode_config.mutex);
+
+ #endif /* DRM_MODESET_LOCK_H_ */
+diff --git a/include/linux/dma-direct.h b/include/linux/dma-direct.h
+index ab2e20cba9514..ba22952c24e24 100644
+--- a/include/linux/dma-direct.h
++++ b/include/linux/dma-direct.h
+@@ -67,9 +67,6 @@ static inline bool dma_capable(struct device *dev, dma_addr_t addr, size_t size,
+ }
+
+ u64 dma_direct_get_required_mask(struct device *dev);
+-gfp_t dma_direct_optimal_gfp_mask(struct device *dev, u64 dma_mask,
+- u64 *phys_mask);
+-bool dma_coherent_ok(struct device *dev, phys_addr_t phys, size_t size);
+ void *dma_direct_alloc(struct device *dev, size_t size, dma_addr_t *dma_handle,
+ gfp_t gfp, unsigned long attrs);
+ void dma_direct_free(struct device *dev, size_t size, void *cpu_addr,
+diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h
+index a33ed3954ed46..0dc08701d7b7e 100644
+--- a/include/linux/dma-mapping.h
++++ b/include/linux/dma-mapping.h
+@@ -715,8 +715,9 @@ void *dma_common_pages_remap(struct page **pages, size_t size,
+ pgprot_t prot, const void *caller);
+ void dma_common_free_remap(void *cpu_addr, size_t size);
+
+-void *dma_alloc_from_pool(struct device *dev, size_t size,
+- struct page **ret_page, gfp_t flags);
++struct page *dma_alloc_from_pool(struct device *dev, size_t size,
++ void **cpu_addr, gfp_t flags,
++ bool (*phys_addr_ok)(struct device *, phys_addr_t, size_t));
+ bool dma_free_from_pool(struct device *dev, void *start, size_t size);
+
+ int
+diff --git a/include/linux/efi.h b/include/linux/efi.h
+index 05c47f857383e..73db1ae04cef8 100644
+--- a/include/linux/efi.h
++++ b/include/linux/efi.h
+@@ -606,7 +606,11 @@ extern void *efi_get_pal_addr (void);
+ extern void efi_map_pal_code (void);
+ extern void efi_memmap_walk (efi_freemem_callback_t callback, void *arg);
+ extern void efi_gettimeofday (struct timespec64 *ts);
++#ifdef CONFIG_EFI
+ extern void efi_enter_virtual_mode (void); /* switch EFI to virtual mode, if possible */
++#else
++static inline void efi_enter_virtual_mode (void) {}
++#endif
+ #ifdef CONFIG_X86
+ extern efi_status_t efi_query_variable_store(u32 attributes,
+ unsigned long size,
+diff --git a/include/linux/fb.h b/include/linux/fb.h
+index 3b4b2f0c6994d..b11eb02cad6d3 100644
+--- a/include/linux/fb.h
++++ b/include/linux/fb.h
+@@ -400,8 +400,6 @@ struct fb_tile_ops {
+ #define FBINFO_HWACCEL_YPAN 0x2000 /* optional */
+ #define FBINFO_HWACCEL_YWRAP 0x4000 /* optional */
+
+-#define FBINFO_MISC_USEREVENT 0x10000 /* event request
+- from userspace */
+ #define FBINFO_MISC_TILEBLITTING 0x20000 /* use tile blitting */
+
+ /* A driver may set this flag to indicate that it does want a set_par to be
+diff --git a/include/linux/fs.h b/include/linux/fs.h
+index 2dab217c6047f..ac1e89872db4f 100644
+--- a/include/linux/fs.h
++++ b/include/linux/fs.h
+@@ -2168,6 +2168,10 @@ static inline void kiocb_clone(struct kiocb *kiocb, struct kiocb *kiocb_src,
+ *
+ * I_DONTCACHE Evict inode as soon as it is not used anymore.
+ *
++ * I_SYNC_QUEUED Inode is queued in b_io or b_more_io writeback lists.
++ * Used to detect that mark_inode_dirty() should not move
++ * inode between dirty lists.
++ *
+ * Q: What is the difference between I_WILL_FREE and I_FREEING?
+ */
+ #define I_DIRTY_SYNC (1 << 0)
+@@ -2185,12 +2189,12 @@ static inline void kiocb_clone(struct kiocb *kiocb, struct kiocb *kiocb_src,
+ #define I_DIO_WAKEUP (1 << __I_DIO_WAKEUP)
+ #define I_LINKABLE (1 << 10)
+ #define I_DIRTY_TIME (1 << 11)
+-#define __I_DIRTY_TIME_EXPIRED 12
+-#define I_DIRTY_TIME_EXPIRED (1 << __I_DIRTY_TIME_EXPIRED)
++#define I_DIRTY_TIME_EXPIRED (1 << 12)
+ #define I_WB_SWITCH (1 << 13)
+ #define I_OVL_INUSE (1 << 14)
+ #define I_CREATING (1 << 15)
+ #define I_DONTCACHE (1 << 16)
++#define I_SYNC_QUEUED (1 << 17)
+
+ #define I_DIRTY_INODE (I_DIRTY_SYNC | I_DIRTY_DATASYNC)
+ #define I_DIRTY (I_DIRTY_INODE | I_DIRTY_PAGES)
+diff --git a/include/linux/netfilter_ipv6.h b/include/linux/netfilter_ipv6.h
+index aac42c28fe62d..9b67394471e1c 100644
+--- a/include/linux/netfilter_ipv6.h
++++ b/include/linux/netfilter_ipv6.h
+@@ -58,7 +58,6 @@ struct nf_ipv6_ops {
+ int (*output)(struct net *, struct sock *, struct sk_buff *));
+ int (*reroute)(struct sk_buff *skb, const struct nf_queue_entry *entry);
+ #if IS_MODULE(CONFIG_IPV6)
+- int (*br_defrag)(struct net *net, struct sk_buff *skb, u32 user);
+ int (*br_fragment)(struct net *net, struct sock *sk,
+ struct sk_buff *skb,
+ struct nf_bridge_frag_data *data,
+@@ -117,23 +116,6 @@ static inline int nf_ip6_route(struct net *net, struct dst_entry **dst,
+
+ #include <net/netfilter/ipv6/nf_defrag_ipv6.h>
+
+-static inline int nf_ipv6_br_defrag(struct net *net, struct sk_buff *skb,
+- u32 user)
+-{
+-#if IS_MODULE(CONFIG_IPV6)
+- const struct nf_ipv6_ops *v6_ops = nf_get_ipv6_ops();
+-
+- if (!v6_ops)
+- return 1;
+-
+- return v6_ops->br_defrag(net, skb, user);
+-#elif IS_BUILTIN(CONFIG_IPV6)
+- return nf_ct_frag6_gather(net, skb, user);
+-#else
+- return 1;
+-#endif
+-}
+-
+ int br_ip6_fragment(struct net *net, struct sock *sk, struct sk_buff *skb,
+ struct nf_bridge_frag_data *data,
+ int (*output)(struct net *, struct sock *sk,
+diff --git a/include/trace/events/ext4.h b/include/trace/events/ext4.h
+index cc41d692ae8ed..628db6a07fda0 100644
+--- a/include/trace/events/ext4.h
++++ b/include/trace/events/ext4.h
+@@ -746,24 +746,29 @@ TRACE_EVENT(ext4_mb_release_group_pa,
+ );
+
+ TRACE_EVENT(ext4_discard_preallocations,
+- TP_PROTO(struct inode *inode),
++ TP_PROTO(struct inode *inode, unsigned int len, unsigned int needed),
+
+- TP_ARGS(inode),
++ TP_ARGS(inode, len, needed),
+
+ TP_STRUCT__entry(
+- __field( dev_t, dev )
+- __field( ino_t, ino )
++ __field( dev_t, dev )
++ __field( ino_t, ino )
++ __field( unsigned int, len )
++ __field( unsigned int, needed )
+
+ ),
+
+ TP_fast_assign(
+ __entry->dev = inode->i_sb->s_dev;
+ __entry->ino = inode->i_ino;
++ __entry->len = len;
++ __entry->needed = needed;
+ ),
+
+- TP_printk("dev %d,%d ino %lu",
++ TP_printk("dev %d,%d ino %lu len: %u needed %u",
+ MAJOR(__entry->dev), MINOR(__entry->dev),
+- (unsigned long) __entry->ino)
++ (unsigned long) __entry->ino, __entry->len,
++ __entry->needed)
+ );
+
+ TRACE_EVENT(ext4_mb_discard_preallocations,
+diff --git a/include/trace/events/writeback.h b/include/trace/events/writeback.h
+index 10f5d1fa73476..7565dcd596973 100644
+--- a/include/trace/events/writeback.h
++++ b/include/trace/events/writeback.h
+@@ -498,8 +498,9 @@ DEFINE_WBC_EVENT(wbc_writepage);
+ TRACE_EVENT(writeback_queue_io,
+ TP_PROTO(struct bdi_writeback *wb,
+ struct wb_writeback_work *work,
++ unsigned long dirtied_before,
+ int moved),
+- TP_ARGS(wb, work, moved),
++ TP_ARGS(wb, work, dirtied_before, moved),
+ TP_STRUCT__entry(
+ __array(char, name, 32)
+ __field(unsigned long, older)
+@@ -509,19 +510,17 @@ TRACE_EVENT(writeback_queue_io,
+ __field(ino_t, cgroup_ino)
+ ),
+ TP_fast_assign(
+- unsigned long *older_than_this = work->older_than_this;
+ strscpy_pad(__entry->name, bdi_dev_name(wb->bdi), 32);
+- __entry->older = older_than_this ? *older_than_this : 0;
+- __entry->age = older_than_this ?
+- (jiffies - *older_than_this) * 1000 / HZ : -1;
++ __entry->older = dirtied_before;
++ __entry->age = (jiffies - dirtied_before) * 1000 / HZ;
+ __entry->moved = moved;
+ __entry->reason = work->reason;
+ __entry->cgroup_ino = __trace_wb_assign_cgroup(wb);
+ ),
+ TP_printk("bdi %s: older=%lu age=%ld enqueue=%d reason=%s cgroup_ino=%lu",
+ __entry->name,
+- __entry->older, /* older_than_this in jiffies */
+- __entry->age, /* older_than_this in relative milliseconds */
++ __entry->older, /* dirtied_before in jiffies */
++ __entry->age, /* dirtied_before in relative milliseconds */
+ __entry->moved,
+ __print_symbolic(__entry->reason, WB_WORK_REASON),
+ (unsigned long)__entry->cgroup_ino
+diff --git a/kernel/Makefile b/kernel/Makefile
+index f3218bc5ec69f..155b5380500ad 100644
+--- a/kernel/Makefile
++++ b/kernel/Makefile
+@@ -125,6 +125,7 @@ obj-$(CONFIG_WATCH_QUEUE) += watch_queue.o
+
+ obj-$(CONFIG_SYSCTL_KUNIT_TEST) += sysctl-test.o
+
++CFLAGS_stackleak.o += $(DISABLE_STACKLEAK_PLUGIN)
+ obj-$(CONFIG_GCC_PLUGIN_STACKLEAK) += stackleak.o
+ KASAN_SANITIZE_stackleak.o := n
+ KCSAN_SANITIZE_stackleak.o := n
+diff --git a/kernel/bpf/bpf_iter.c b/kernel/bpf/bpf_iter.c
+index dd612b80b9fea..3c18090cd73dc 100644
+--- a/kernel/bpf/bpf_iter.c
++++ b/kernel/bpf/bpf_iter.c
+@@ -64,6 +64,9 @@ static void bpf_iter_done_stop(struct seq_file *seq)
+ iter_priv->done_stop = true;
+ }
+
++/* maximum visited objects before bailing out */
++#define MAX_ITER_OBJECTS 1000000
++
+ /* bpf_seq_read, a customized and simpler version for bpf iterator.
+ * no_llseek is assumed for this file.
+ * The following are differences from seq_read():
+@@ -76,7 +79,7 @@ static ssize_t bpf_seq_read(struct file *file, char __user *buf, size_t size,
+ {
+ struct seq_file *seq = file->private_data;
+ size_t n, offs, copied = 0;
+- int err = 0;
++ int err = 0, num_objs = 0;
+ void *p;
+
+ mutex_lock(&seq->lock);
+@@ -132,6 +135,7 @@ static ssize_t bpf_seq_read(struct file *file, char __user *buf, size_t size,
+ while (1) {
+ loff_t pos = seq->index;
+
++ num_objs++;
+ offs = seq->count;
+ p = seq->op->next(seq, p, &seq->index);
+ if (pos == seq->index) {
+@@ -150,6 +154,15 @@ static ssize_t bpf_seq_read(struct file *file, char __user *buf, size_t size,
+ if (seq->count >= size)
+ break;
+
++ if (num_objs >= MAX_ITER_OBJECTS) {
++ if (offs == 0) {
++ err = -EAGAIN;
++ seq->op->stop(seq, p);
++ goto done;
++ }
++ break;
++ }
++
+ err = seq->op->show(seq, p);
+ if (err > 0) {
+ bpf_iter_dec_seq_num(seq);
+diff --git a/kernel/bpf/task_iter.c b/kernel/bpf/task_iter.c
+index a4a0fb4f94cc1..323def936be24 100644
+--- a/kernel/bpf/task_iter.c
++++ b/kernel/bpf/task_iter.c
+@@ -28,8 +28,9 @@ static struct task_struct *task_seq_get_next(struct pid_namespace *ns,
+
+ rcu_read_lock();
+ retry:
+- pid = idr_get_next(&ns->idr, tid);
++ pid = find_ge_pid(*tid, ns);
+ if (pid) {
++ *tid = pid_nr_ns(pid, ns);
+ task = get_pid_task(pid, PIDTYPE_PID);
+ if (!task) {
+ ++*tid;
+diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
+index 67f060b86a73f..f17aec9d01f0c 100644
+--- a/kernel/dma/direct.c
++++ b/kernel/dma/direct.c
+@@ -45,7 +45,7 @@ u64 dma_direct_get_required_mask(struct device *dev)
+ return (1ULL << (fls64(max_dma) - 1)) * 2 - 1;
+ }
+
+-gfp_t dma_direct_optimal_gfp_mask(struct device *dev, u64 dma_mask,
++static gfp_t dma_direct_optimal_gfp_mask(struct device *dev, u64 dma_mask,
+ u64 *phys_limit)
+ {
+ u64 dma_limit = min_not_zero(dma_mask, dev->bus_dma_limit);
+@@ -70,7 +70,7 @@ gfp_t dma_direct_optimal_gfp_mask(struct device *dev, u64 dma_mask,
+ return 0;
+ }
+
+-bool dma_coherent_ok(struct device *dev, phys_addr_t phys, size_t size)
++static bool dma_coherent_ok(struct device *dev, phys_addr_t phys, size_t size)
+ {
+ return phys_to_dma_direct(dev, phys) + size - 1 <=
+ min_not_zero(dev->coherent_dma_mask, dev->bus_dma_limit);
+@@ -163,8 +163,13 @@ void *dma_direct_alloc_pages(struct device *dev, size_t size,
+ size = PAGE_ALIGN(size);
+
+ if (dma_should_alloc_from_pool(dev, gfp, attrs)) {
+- ret = dma_alloc_from_pool(dev, size, &page, gfp);
+- if (!ret)
++ u64 phys_mask;
++
++ gfp |= dma_direct_optimal_gfp_mask(dev, dev->coherent_dma_mask,
++ &phys_mask);
++ page = dma_alloc_from_pool(dev, size, &ret, gfp,
++ dma_coherent_ok);
++ if (!page)
+ return NULL;
+ goto done;
+ }
+diff --git a/kernel/dma/pool.c b/kernel/dma/pool.c
+index 6bc74a2d51273..1281c0f0442bc 100644
+--- a/kernel/dma/pool.c
++++ b/kernel/dma/pool.c
+@@ -3,7 +3,9 @@
+ * Copyright (C) 2012 ARM Ltd.
+ * Copyright (C) 2020 Google LLC
+ */
++#include <linux/cma.h>
+ #include <linux/debugfs.h>
++#include <linux/dma-contiguous.h>
+ #include <linux/dma-direct.h>
+ #include <linux/dma-noncoherent.h>
+ #include <linux/init.h>
+@@ -55,11 +57,34 @@ static void dma_atomic_pool_size_add(gfp_t gfp, size_t size)
+ pool_size_kernel += size;
+ }
+
++static bool cma_in_zone(gfp_t gfp)
++{
++ unsigned long size;
++ phys_addr_t end;
++ struct cma *cma;
++
++ cma = dev_get_cma_area(NULL);
++ if (!cma)
++ return false;
++
++ size = cma_get_size(cma);
++ if (!size)
++ return false;
++
++ /* CMA can't cross zone boundaries, see cma_activate_area() */
++ end = cma_get_base(cma) + size - 1;
++ if (IS_ENABLED(CONFIG_ZONE_DMA) && (gfp & GFP_DMA))
++ return end <= DMA_BIT_MASK(zone_dma_bits);
++ if (IS_ENABLED(CONFIG_ZONE_DMA32) && (gfp & GFP_DMA32))
++ return end <= DMA_BIT_MASK(32);
++ return true;
++}
++
+ static int atomic_pool_expand(struct gen_pool *pool, size_t pool_size,
+ gfp_t gfp)
+ {
+ unsigned int order;
+- struct page *page;
++ struct page *page = NULL;
+ void *addr;
+ int ret = -ENOMEM;
+
+@@ -68,7 +93,11 @@ static int atomic_pool_expand(struct gen_pool *pool, size_t pool_size,
+
+ do {
+ pool_size = 1 << (PAGE_SHIFT + order);
+- page = alloc_pages(gfp, order);
++ if (cma_in_zone(gfp))
++ page = dma_alloc_from_contiguous(NULL, 1 << order,
++ order, false);
++ if (!page)
++ page = alloc_pages(gfp, order);
+ } while (!page && order-- > 0);
+ if (!page)
+ goto out;
+@@ -196,93 +225,75 @@ static int __init dma_atomic_pool_init(void)
+ }
+ postcore_initcall(dma_atomic_pool_init);
+
+-static inline struct gen_pool *dma_guess_pool_from_device(struct device *dev)
++static inline struct gen_pool *dma_guess_pool(struct gen_pool *prev, gfp_t gfp)
+ {
+- u64 phys_mask;
+- gfp_t gfp;
+-
+- gfp = dma_direct_optimal_gfp_mask(dev, dev->coherent_dma_mask,
+- &phys_mask);
+- if (IS_ENABLED(CONFIG_ZONE_DMA) && gfp == GFP_DMA)
++ if (prev == NULL) {
++ if (IS_ENABLED(CONFIG_ZONE_DMA32) && (gfp & GFP_DMA32))
++ return atomic_pool_dma32;
++ if (IS_ENABLED(CONFIG_ZONE_DMA) && (gfp & GFP_DMA))
++ return atomic_pool_dma;
++ return atomic_pool_kernel;
++ }
++ if (prev == atomic_pool_kernel)
++ return atomic_pool_dma32 ? atomic_pool_dma32 : atomic_pool_dma;
++ if (prev == atomic_pool_dma32)
+ return atomic_pool_dma;
+- if (IS_ENABLED(CONFIG_ZONE_DMA32) && gfp == GFP_DMA32)
+- return atomic_pool_dma32;
+- return atomic_pool_kernel;
++ return NULL;
+ }
+
+-static inline struct gen_pool *dma_get_safer_pool(struct gen_pool *bad_pool)
++static struct page *__dma_alloc_from_pool(struct device *dev, size_t size,
++ struct gen_pool *pool, void **cpu_addr,
++ bool (*phys_addr_ok)(struct device *, phys_addr_t, size_t))
+ {
+- if (bad_pool == atomic_pool_kernel)
+- return atomic_pool_dma32 ? : atomic_pool_dma;
++ unsigned long addr;
++ phys_addr_t phys;
+
+- if (bad_pool == atomic_pool_dma32)
+- return atomic_pool_dma;
++ addr = gen_pool_alloc(pool, size);
++ if (!addr)
++ return NULL;
+
+- return NULL;
+-}
++ phys = gen_pool_virt_to_phys(pool, addr);
++ if (phys_addr_ok && !phys_addr_ok(dev, phys, size)) {
++ gen_pool_free(pool, addr, size);
++ return NULL;
++ }
+
+-static inline struct gen_pool *dma_guess_pool(struct device *dev,
+- struct gen_pool *bad_pool)
+-{
+- if (bad_pool)
+- return dma_get_safer_pool(bad_pool);
++ if (gen_pool_avail(pool) < atomic_pool_size)
++ schedule_work(&atomic_pool_work);
+
+- return dma_guess_pool_from_device(dev);
++ *cpu_addr = (void *)addr;
++ memset(*cpu_addr, 0, size);
++ return pfn_to_page(__phys_to_pfn(phys));
+ }
+
+-void *dma_alloc_from_pool(struct device *dev, size_t size,
+- struct page **ret_page, gfp_t flags)
++struct page *dma_alloc_from_pool(struct device *dev, size_t size,
++ void **cpu_addr, gfp_t gfp,
++ bool (*phys_addr_ok)(struct device *, phys_addr_t, size_t))
+ {
+ struct gen_pool *pool = NULL;
+- unsigned long val = 0;
+- void *ptr = NULL;
+- phys_addr_t phys;
+-
+- while (1) {
+- pool = dma_guess_pool(dev, pool);
+- if (!pool) {
+- WARN(1, "Failed to get suitable pool for %s\n",
+- dev_name(dev));
+- break;
+- }
+-
+- val = gen_pool_alloc(pool, size);
+- if (!val)
+- continue;
+-
+- phys = gen_pool_virt_to_phys(pool, val);
+- if (dma_coherent_ok(dev, phys, size))
+- break;
+-
+- gen_pool_free(pool, val, size);
+- val = 0;
+- }
+-
+-
+- if (val) {
+- *ret_page = pfn_to_page(__phys_to_pfn(phys));
+- ptr = (void *)val;
+- memset(ptr, 0, size);
++ struct page *page;
+
+- if (gen_pool_avail(pool) < atomic_pool_size)
+- schedule_work(&atomic_pool_work);
++ while ((pool = dma_guess_pool(pool, gfp))) {
++ page = __dma_alloc_from_pool(dev, size, pool, cpu_addr,
++ phys_addr_ok);
++ if (page)
++ return page;
+ }
+
+- return ptr;
++ WARN(1, "Failed to get suitable pool for %s\n", dev_name(dev));
++ return NULL;
+ }
+
+ bool dma_free_from_pool(struct device *dev, void *start, size_t size)
+ {
+ struct gen_pool *pool = NULL;
+
+- while (1) {
+- pool = dma_guess_pool(dev, pool);
+- if (!pool)
+- return false;
+-
+- if (gen_pool_has_addr(pool, (unsigned long)start, size)) {
+- gen_pool_free(pool, (unsigned long)start, size);
+- return true;
+- }
++ while ((pool = dma_guess_pool(pool, 0))) {
++ if (!gen_pool_has_addr(pool, (unsigned long)start, size))
++ continue;
++ gen_pool_free(pool, (unsigned long)start, size);
++ return true;
+ }
++
++ return false;
+ }
+diff --git a/kernel/irq/matrix.c b/kernel/irq/matrix.c
+index 30cc217b86318..651a4ad6d711f 100644
+--- a/kernel/irq/matrix.c
++++ b/kernel/irq/matrix.c
+@@ -380,6 +380,13 @@ int irq_matrix_alloc(struct irq_matrix *m, const struct cpumask *msk,
+ unsigned int cpu, bit;
+ struct cpumap *cm;
+
++ /*
++ * Not required in theory, but matrix_find_best_cpu() uses
++ * for_each_cpu() which ignores the cpumask on UP .
++ */
++ if (cpumask_empty(msk))
++ return -EINVAL;
++
+ cpu = matrix_find_best_cpu(m, msk);
+ if (cpu == UINT_MAX)
+ return -ENOSPC;
+diff --git a/kernel/locking/lockdep_proc.c b/kernel/locking/lockdep_proc.c
+index 5525cd3ba0c83..02ef87f50df29 100644
+--- a/kernel/locking/lockdep_proc.c
++++ b/kernel/locking/lockdep_proc.c
+@@ -423,7 +423,7 @@ static void seq_lock_time(struct seq_file *m, struct lock_time *lt)
+ seq_time(m, lt->min);
+ seq_time(m, lt->max);
+ seq_time(m, lt->total);
+- seq_time(m, lt->nr ? div_s64(lt->total, lt->nr) : 0);
++ seq_time(m, lt->nr ? div64_u64(lt->total, lt->nr) : 0);
+ }
+
+ static void seq_stats(struct seq_file *m, struct lock_stat_data *data)
+diff --git a/kernel/trace/blktrace.c b/kernel/trace/blktrace.c
+index 588e8e3960197..1bd6563939e59 100644
+--- a/kernel/trace/blktrace.c
++++ b/kernel/trace/blktrace.c
+@@ -536,6 +536,18 @@ static int do_blk_trace_setup(struct request_queue *q, char *name, dev_t dev,
+ #endif
+ bt->dir = dir = debugfs_create_dir(buts->name, blk_debugfs_root);
+
++ /*
++ * As blktrace relies on debugfs for its interface the debugfs directory
++ * is required, contrary to the usual mantra of not checking for debugfs
++ * files or directories.
++ */
++ if (IS_ERR_OR_NULL(dir)) {
++ pr_warn("debugfs_dir not present for %s so skipping\n",
++ buts->name);
++ ret = -ENOENT;
++ goto err;
++ }
++
+ bt->dev = dev;
+ atomic_set(&bt->dropped, 0);
+ INIT_LIST_HEAD(&bt->running_list);
+diff --git a/mm/khugepaged.c b/mm/khugepaged.c
+index 1d6a9b0b6a9fd..dd592ea9a4a06 100644
+--- a/mm/khugepaged.c
++++ b/mm/khugepaged.c
+@@ -431,7 +431,7 @@ static void insert_to_mm_slots_hash(struct mm_struct *mm,
+
+ static inline int khugepaged_test_exit(struct mm_struct *mm)
+ {
+- return atomic_read(&mm->mm_users) == 0;
++ return atomic_read(&mm->mm_users) == 0 || !mmget_still_valid(mm);
+ }
+
+ static bool hugepage_vma_check(struct vm_area_struct *vma,
+@@ -1100,9 +1100,6 @@ static void collapse_huge_page(struct mm_struct *mm,
+ * handled by the anon_vma lock + PG_lock.
+ */
+ mmap_write_lock(mm);
+- result = SCAN_ANY_PROCESS;
+- if (!mmget_still_valid(mm))
+- goto out;
+ result = hugepage_vma_revalidate(mm, address, &vma);
+ if (result)
+ goto out;
+diff --git a/mm/page_counter.c b/mm/page_counter.c
+index b4663844c9b37..afe22ad335ccc 100644
+--- a/mm/page_counter.c
++++ b/mm/page_counter.c
+@@ -77,8 +77,8 @@ void page_counter_charge(struct page_counter *counter, unsigned long nr_pages)
+ * This is indeed racy, but we can live with some
+ * inaccuracy in the watermark.
+ */
+- if (new > c->watermark)
+- c->watermark = new;
++ if (new > READ_ONCE(c->watermark))
++ WRITE_ONCE(c->watermark, new);
+ }
+ }
+
+@@ -119,9 +119,10 @@ bool page_counter_try_charge(struct page_counter *counter,
+ propagate_protected_usage(c, new);
+ /*
+ * This is racy, but we can live with some
+- * inaccuracy in the failcnt.
++ * inaccuracy in the failcnt which is only used
++ * to report stats.
+ */
+- c->failcnt++;
++ data_race(c->failcnt++);
+ *fail = c;
+ goto failed;
+ }
+@@ -130,8 +131,8 @@ bool page_counter_try_charge(struct page_counter *counter,
+ * Just like with failcnt, we can live with some
+ * inaccuracy in the watermark.
+ */
+- if (new > c->watermark)
+- c->watermark = new;
++ if (new > READ_ONCE(c->watermark))
++ WRITE_ONCE(c->watermark, new);
+ }
+ return true;
+
+diff --git a/net/bridge/netfilter/nf_conntrack_bridge.c b/net/bridge/netfilter/nf_conntrack_bridge.c
+index 8096732223828..8d033a75a766e 100644
+--- a/net/bridge/netfilter/nf_conntrack_bridge.c
++++ b/net/bridge/netfilter/nf_conntrack_bridge.c
+@@ -168,6 +168,7 @@ static unsigned int nf_ct_br_defrag4(struct sk_buff *skb,
+ static unsigned int nf_ct_br_defrag6(struct sk_buff *skb,
+ const struct nf_hook_state *state)
+ {
++#if IS_ENABLED(CONFIG_NF_DEFRAG_IPV6)
+ u16 zone_id = NF_CT_DEFAULT_ZONE_ID;
+ enum ip_conntrack_info ctinfo;
+ struct br_input_skb_cb cb;
+@@ -180,14 +181,17 @@ static unsigned int nf_ct_br_defrag6(struct sk_buff *skb,
+
+ br_skb_cb_save(skb, &cb, sizeof(struct inet6_skb_parm));
+
+- err = nf_ipv6_br_defrag(state->net, skb,
+- IP_DEFRAG_CONNTRACK_BRIDGE_IN + zone_id);
++ err = nf_ct_frag6_gather(state->net, skb,
++ IP_DEFRAG_CONNTRACK_BRIDGE_IN + zone_id);
+ /* queued */
+ if (err == -EINPROGRESS)
+ return NF_STOLEN;
+
+ br_skb_cb_restore(skb, &cb, IP6CB(skb)->frag_max_size);
+ return err == 0 ? NF_ACCEPT : NF_DROP;
++#else
++ return NF_ACCEPT;
++#endif
+ }
+
+ static int nf_ct_br_ip_check(const struct sk_buff *skb)
+diff --git a/net/can/j1939/transport.c b/net/can/j1939/transport.c
+index dbd215cbc53d8..a8dd956b5e8e1 100644
+--- a/net/can/j1939/transport.c
++++ b/net/can/j1939/transport.c
+@@ -1803,7 +1803,20 @@ static void j1939_xtp_rx_dat_one(struct j1939_session *session,
+ }
+
+ tpdat = se_skb->data;
+- memcpy(&tpdat[offset], &dat[1], nbytes);
++ if (!session->transmission) {
++ memcpy(&tpdat[offset], &dat[1], nbytes);
++ } else {
++ int err;
++
++ err = memcmp(&tpdat[offset], &dat[1], nbytes);
++ if (err)
++ netdev_err_once(priv->ndev,
++ "%s: 0x%p: Data of RX-looped back packet (%*ph) doesn't match TX data (%*ph)!\n",
++ __func__, session,
++ nbytes, &dat[1],
++ nbytes, &tpdat[offset]);
++ }
++
+ if (packet == session->pkt.rx)
+ session->pkt.rx++;
+
+diff --git a/net/ipv6/netfilter.c b/net/ipv6/netfilter.c
+index 409e79b84a830..6d0e942d082d4 100644
+--- a/net/ipv6/netfilter.c
++++ b/net/ipv6/netfilter.c
+@@ -245,9 +245,6 @@ static const struct nf_ipv6_ops ipv6ops = {
+ .route_input = ip6_route_input,
+ .fragment = ip6_fragment,
+ .reroute = nf_ip6_reroute,
+-#if IS_MODULE(CONFIG_IPV6) && IS_ENABLED(CONFIG_NF_DEFRAG_IPV6)
+- .br_defrag = nf_ct_frag6_gather,
+-#endif
+ #if IS_MODULE(CONFIG_IPV6)
+ .br_fragment = br_ip6_fragment,
+ #endif
+diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
+index 88325b264737f..d31832d32e028 100644
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -2037,7 +2037,7 @@ static int nf_tables_updchain(struct nft_ctx *ctx, u8 genmask, u8 policy,
+
+ if (nla[NFTA_CHAIN_HOOK]) {
+ if (!nft_is_base_chain(chain))
+- return -EBUSY;
++ return -EEXIST;
+
+ err = nft_chain_parse_hook(ctx->net, nla, &hook, ctx->family,
+ false);
+@@ -2047,21 +2047,21 @@ static int nf_tables_updchain(struct nft_ctx *ctx, u8 genmask, u8 policy,
+ basechain = nft_base_chain(chain);
+ if (basechain->type != hook.type) {
+ nft_chain_release_hook(&hook);
+- return -EBUSY;
++ return -EEXIST;
+ }
+
+ if (ctx->family == NFPROTO_NETDEV) {
+ if (!nft_hook_list_equal(&basechain->hook_list,
+ &hook.list)) {
+ nft_chain_release_hook(&hook);
+- return -EBUSY;
++ return -EEXIST;
+ }
+ } else {
+ ops = &basechain->ops;
+ if (ops->hooknum != hook.num ||
+ ops->priority != hook.priority) {
+ nft_chain_release_hook(&hook);
+- return -EBUSY;
++ return -EEXIST;
+ }
+ }
+ nft_chain_release_hook(&hook);
+@@ -5160,10 +5160,8 @@ static int nft_add_set_elem(struct nft_ctx *ctx, struct nft_set *set,
+ if (nft_set_ext_exists(ext, NFT_SET_EXT_DATA) ^
+ nft_set_ext_exists(ext2, NFT_SET_EXT_DATA) ||
+ nft_set_ext_exists(ext, NFT_SET_EXT_OBJREF) ^
+- nft_set_ext_exists(ext2, NFT_SET_EXT_OBJREF)) {
+- err = -EBUSY;
++ nft_set_ext_exists(ext2, NFT_SET_EXT_OBJREF))
+ goto err_element_clash;
+- }
+ if ((nft_set_ext_exists(ext, NFT_SET_EXT_DATA) &&
+ nft_set_ext_exists(ext2, NFT_SET_EXT_DATA) &&
+ memcmp(nft_set_ext_data(ext),
+@@ -5171,7 +5169,7 @@ static int nft_add_set_elem(struct nft_ctx *ctx, struct nft_set *set,
+ (nft_set_ext_exists(ext, NFT_SET_EXT_OBJREF) &&
+ nft_set_ext_exists(ext2, NFT_SET_EXT_OBJREF) &&
+ *nft_set_ext_obj(ext) != *nft_set_ext_obj(ext2)))
+- err = -EBUSY;
++ goto err_element_clash;
+ else if (!(nlmsg_flags & NLM_F_EXCL))
+ err = 0;
+ } else if (err == -ENOTEMPTY) {
+@@ -6308,7 +6306,7 @@ static int nft_register_flowtable_net_hooks(struct net *net,
+ list_for_each_entry(hook2, &ft->hook_list, list) {
+ if (hook->ops.dev == hook2->ops.dev &&
+ hook->ops.pf == hook2->ops.pf) {
+- err = -EBUSY;
++ err = -EEXIST;
+ goto err_unregister_net_hooks;
+ }
+ }
+diff --git a/net/openvswitch/datapath.c b/net/openvswitch/datapath.c
+index 94b024534987a..03b81aa99975b 100644
+--- a/net/openvswitch/datapath.c
++++ b/net/openvswitch/datapath.c
+@@ -1736,6 +1736,7 @@ err:
+ /* Called with ovs_mutex. */
+ static void __dp_destroy(struct datapath *dp)
+ {
++ struct flow_table *table = &dp->table;
+ int i;
+
+ for (i = 0; i < DP_VPORT_HASH_BUCKETS; i++) {
+@@ -1754,7 +1755,14 @@ static void __dp_destroy(struct datapath *dp)
+ */
+ ovs_dp_detach_port(ovs_vport_ovsl(dp, OVSP_LOCAL));
+
+- /* RCU destroy the flow table */
++ /* Flush sw_flow in the tables. RCU cb only releases resource
++ * such as dp, ports and tables. That may avoid some issues
++ * such as RCU usage warning.
++ */
++ table_instance_flow_flush(table, ovsl_dereference(table->ti),
++ ovsl_dereference(table->ufid_ti));
++
++ /* RCU destroy the ports, meters and flow tables. */
+ call_rcu(&dp->rcu, destroy_dp_rcu);
+ }
+
+diff --git a/net/openvswitch/flow_table.c b/net/openvswitch/flow_table.c
+index 2398d72383005..f198bbb0c517a 100644
+--- a/net/openvswitch/flow_table.c
++++ b/net/openvswitch/flow_table.c
+@@ -345,19 +345,15 @@ static void table_instance_flow_free(struct flow_table *table,
+ flow_mask_remove(table, flow->mask);
+ }
+
+-static void table_instance_destroy(struct flow_table *table,
+- struct table_instance *ti,
+- struct table_instance *ufid_ti,
+- bool deferred)
++/* Must be called with OVS mutex held. */
++void table_instance_flow_flush(struct flow_table *table,
++ struct table_instance *ti,
++ struct table_instance *ufid_ti)
+ {
+ int i;
+
+- if (!ti)
+- return;
+-
+- BUG_ON(!ufid_ti);
+ if (ti->keep_flows)
+- goto skip_flows;
++ return;
+
+ for (i = 0; i < ti->n_buckets; i++) {
+ struct sw_flow *flow;
+@@ -369,18 +365,16 @@ static void table_instance_destroy(struct flow_table *table,
+
+ table_instance_flow_free(table, ti, ufid_ti,
+ flow, false);
+- ovs_flow_free(flow, deferred);
++ ovs_flow_free(flow, true);
+ }
+ }
++}
+
+-skip_flows:
+- if (deferred) {
+- call_rcu(&ti->rcu, flow_tbl_destroy_rcu_cb);
+- call_rcu(&ufid_ti->rcu, flow_tbl_destroy_rcu_cb);
+- } else {
+- __table_instance_destroy(ti);
+- __table_instance_destroy(ufid_ti);
+- }
++static void table_instance_destroy(struct table_instance *ti,
++ struct table_instance *ufid_ti)
++{
++ call_rcu(&ti->rcu, flow_tbl_destroy_rcu_cb);
++ call_rcu(&ufid_ti->rcu, flow_tbl_destroy_rcu_cb);
+ }
+
+ /* No need for locking this function is called from RCU callback or
+@@ -393,7 +387,7 @@ void ovs_flow_tbl_destroy(struct flow_table *table)
+
+ free_percpu(table->mask_cache);
+ kfree_rcu(rcu_dereference_raw(table->mask_array), rcu);
+- table_instance_destroy(table, ti, ufid_ti, false);
++ table_instance_destroy(ti, ufid_ti);
+ }
+
+ struct sw_flow *ovs_flow_tbl_dump_next(struct table_instance *ti,
+@@ -511,7 +505,8 @@ int ovs_flow_tbl_flush(struct flow_table *flow_table)
+ flow_table->count = 0;
+ flow_table->ufid_count = 0;
+
+- table_instance_destroy(flow_table, old_ti, old_ufid_ti, true);
++ table_instance_flow_flush(flow_table, old_ti, old_ufid_ti);
++ table_instance_destroy(old_ti, old_ufid_ti);
+ return 0;
+
+ err_free_ti:
+diff --git a/net/openvswitch/flow_table.h b/net/openvswitch/flow_table.h
+index 8a5cea6ae1116..8ea8fc9573776 100644
+--- a/net/openvswitch/flow_table.h
++++ b/net/openvswitch/flow_table.h
+@@ -86,4 +86,7 @@ bool ovs_flow_cmp(const struct sw_flow *, const struct sw_flow_match *);
+
+ void ovs_flow_mask_key(struct sw_flow_key *dst, const struct sw_flow_key *src,
+ bool full, const struct sw_flow_mask *mask);
++void table_instance_flow_flush(struct flow_table *table,
++ struct table_instance *ti,
++ struct table_instance *ufid_ti);
+ #endif /* flow_table.h */
+diff --git a/sound/pci/cs46xx/cs46xx_lib.c b/sound/pci/cs46xx/cs46xx_lib.c
+index a080d63a9b456..4490dd7469d99 100644
+--- a/sound/pci/cs46xx/cs46xx_lib.c
++++ b/sound/pci/cs46xx/cs46xx_lib.c
+@@ -766,7 +766,7 @@ static void snd_cs46xx_set_capture_sample_rate(struct snd_cs46xx *chip, unsigned
+ rate = 48000 / 9;
+
+ /*
+- * We can not capture at at rate greater than the Input Rate (48000).
++ * We can not capture at a rate greater than the Input Rate (48000).
+ * Return an error if an attempt is made to stray outside that limit.
+ */
+ if (rate > 48000)
+diff --git a/sound/pci/cs46xx/dsp_spos_scb_lib.c b/sound/pci/cs46xx/dsp_spos_scb_lib.c
+index 6b536fc23ca62..1f90ca723f4df 100644
+--- a/sound/pci/cs46xx/dsp_spos_scb_lib.c
++++ b/sound/pci/cs46xx/dsp_spos_scb_lib.c
+@@ -1716,7 +1716,7 @@ int cs46xx_iec958_pre_open (struct snd_cs46xx *chip)
+ struct dsp_spos_instance * ins = chip->dsp_spos_instance;
+
+ if ( ins->spdif_status_out & DSP_SPDIF_STATUS_OUTPUT_ENABLED ) {
+- /* remove AsynchFGTxSCB and and PCMSerialInput_II */
++ /* remove AsynchFGTxSCB and PCMSerialInput_II */
+ cs46xx_dsp_disable_spdif_out (chip);
+
+ /* save state */
+diff --git a/sound/pci/hda/hda_codec.c b/sound/pci/hda/hda_codec.c
+index 803978d69e3c4..ea7f16dd1f73c 100644
+--- a/sound/pci/hda/hda_codec.c
++++ b/sound/pci/hda/hda_codec.c
+@@ -3427,7 +3427,7 @@ EXPORT_SYMBOL_GPL(snd_hda_set_power_save);
+ * @nid: NID to check / update
+ *
+ * Check whether the given NID is in the amp list. If it's in the list,
+- * check the current AMP status, and update the the power-status according
++ * check the current AMP status, and update the power-status according
+ * to the mute status.
+ *
+ * This function is supposed to be set or called from the check_power_status
+diff --git a/sound/pci/hda/hda_generic.c b/sound/pci/hda/hda_generic.c
+index f4e9d9445e18f..201a3b6b0b0f6 100644
+--- a/sound/pci/hda/hda_generic.c
++++ b/sound/pci/hda/hda_generic.c
+@@ -813,7 +813,7 @@ static void activate_amp_in(struct hda_codec *codec, struct nid_path *path,
+ }
+ }
+
+-/* sync power of each widget in the the given path */
++/* sync power of each widget in the given path */
+ static hda_nid_t path_power_update(struct hda_codec *codec,
+ struct nid_path *path,
+ bool allow_powerdown)
+diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c
+index 4c23b169ac67e..1a26940a3fd7c 100644
+--- a/sound/pci/hda/hda_intel.c
++++ b/sound/pci/hda/hda_intel.c
+@@ -2747,6 +2747,8 @@ static const struct pci_device_id azx_ids[] = {
+ .driver_data = AZX_DRIVER_GENERIC | AZX_DCAPS_PRESET_ATI_HDMI },
+ /* Zhaoxin */
+ { PCI_DEVICE(0x1d17, 0x3288), .driver_data = AZX_DRIVER_ZHAOXIN },
++ /* Loongson */
++ { PCI_DEVICE(0x0014, 0x7a07), .driver_data = AZX_DRIVER_GENERIC },
+ { 0, }
+ };
+ MODULE_DEVICE_TABLE(pci, azx_ids);
+diff --git a/sound/pci/hda/patch_hdmi.c b/sound/pci/hda/patch_hdmi.c
+index cd46247988e4d..f0c6d2907e396 100644
+--- a/sound/pci/hda/patch_hdmi.c
++++ b/sound/pci/hda/patch_hdmi.c
+@@ -160,6 +160,7 @@ struct hdmi_spec {
+
+ bool use_acomp_notifier; /* use eld_notify callback for hotplug */
+ bool acomp_registered; /* audio component registered in this driver */
++ bool force_connect; /* force connectivity */
+ struct drm_audio_component_audio_ops drm_audio_ops;
+ int (*port2pin)(struct hda_codec *, int); /* reverse port/pin mapping */
+
+@@ -1701,7 +1702,8 @@ static int hdmi_add_pin(struct hda_codec *codec, hda_nid_t pin_nid)
+ * all device entries on the same pin
+ */
+ config = snd_hda_codec_get_pincfg(codec, pin_nid);
+- if (get_defcfg_connect(config) == AC_JACK_PORT_NONE)
++ if (get_defcfg_connect(config) == AC_JACK_PORT_NONE &&
++ !spec->force_connect)
+ return 0;
+
+ /*
+@@ -1803,11 +1805,19 @@ static int hdmi_add_cvt(struct hda_codec *codec, hda_nid_t cvt_nid)
+ return 0;
+ }
+
++static const struct snd_pci_quirk force_connect_list[] = {
++ SND_PCI_QUIRK(0x103c, 0x870f, "HP", 1),
++ SND_PCI_QUIRK(0x103c, 0x871a, "HP", 1),
++ {}
++};
++
+ static int hdmi_parse_codec(struct hda_codec *codec)
+ {
++ struct hdmi_spec *spec = codec->spec;
+ hda_nid_t start_nid;
+ unsigned int caps;
+ int i, nodes;
++ const struct snd_pci_quirk *q;
+
+ nodes = snd_hda_get_sub_nodes(codec, codec->core.afg, &start_nid);
+ if (!start_nid || nodes < 0) {
+@@ -1815,6 +1825,11 @@ static int hdmi_parse_codec(struct hda_codec *codec)
+ return -EINVAL;
+ }
+
++ q = snd_pci_quirk_lookup(codec->bus->pci, force_connect_list);
++
++ if (q && q->value)
++ spec->force_connect = true;
++
+ /*
+ * hdmi_add_pin() assumes total amount of converters to
+ * be known, so first discover all converters
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index b10d005786d07..da23c2d4ca51e 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -6167,6 +6167,7 @@ enum {
+ ALC269_FIXUP_CZC_L101,
+ ALC269_FIXUP_LEMOTE_A1802,
+ ALC269_FIXUP_LEMOTE_A190X,
++ ALC256_FIXUP_INTEL_NUC8_RUGGED,
+ };
+
+ static const struct hda_fixup alc269_fixups[] = {
+@@ -7488,6 +7489,15 @@ static const struct hda_fixup alc269_fixups[] = {
+ },
+ .chain_id = ALC269_FIXUP_DMIC,
+ },
++ [ALC256_FIXUP_INTEL_NUC8_RUGGED] = {
++ .type = HDA_FIXUP_PINS,
++ .v.pins = (const struct hda_pintbl[]) {
++ { 0x1b, 0x01a1913c }, /* use as headset mic, without its own jack detect */
++ { }
++ },
++ .chained = true,
++ .chain_id = ALC269_FIXUP_HEADSET_MODE
++ },
+ };
+
+ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+@@ -7787,6 +7797,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x10ec, 0x118c, "Medion EE4254 MD62100", ALC256_FIXUP_MEDION_HEADSET_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1c06, 0x2013, "Lemote A1802", ALC269_FIXUP_LEMOTE_A1802),
+ SND_PCI_QUIRK(0x1c06, 0x2015, "Lemote A190X", ALC269_FIXUP_LEMOTE_A190X),
++ SND_PCI_QUIRK(0x8086, 0x2080, "Intel NUC 8 Rugged", ALC256_FIXUP_INTEL_NUC8_RUGGED),
+
+ #if 0
+ /* Below is a quirk table taken from the old code.
+@@ -7958,6 +7969,7 @@ static const struct hda_model_fixup alc269_fixup_models[] = {
+ {.id = ALC299_FIXUP_PREDATOR_SPK, .name = "predator-spk"},
+ {.id = ALC298_FIXUP_HUAWEI_MBX_STEREO, .name = "huawei-mbx-stereo"},
+ {.id = ALC256_FIXUP_MEDION_HEADSET_NO_PRESENCE, .name = "alc256-medion-headset"},
++ {.id = ALC298_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET, .name = "alc298-samsung-headphone"},
+ {}
+ };
+ #define ALC225_STANDARD_PINS \
+diff --git a/sound/pci/hda/patch_sigmatel.c b/sound/pci/hda/patch_sigmatel.c
+index a608d0486ae49..2bea11d62d3e9 100644
+--- a/sound/pci/hda/patch_sigmatel.c
++++ b/sound/pci/hda/patch_sigmatel.c
+@@ -832,7 +832,7 @@ static int stac_auto_create_beep_ctls(struct hda_codec *codec,
+ static const struct snd_kcontrol_new beep_vol_ctl =
+ HDA_CODEC_VOLUME(NULL, 0, 0, 0);
+
+- /* check for mute support for the the amp */
++ /* check for mute support for the amp */
+ if ((caps & AC_AMPCAP_MUTE) >> AC_AMPCAP_MUTE_SHIFT) {
+ const struct snd_kcontrol_new *temp;
+ if (spec->anabeep_nid == nid)
+diff --git a/sound/pci/ice1712/prodigy192.c b/sound/pci/ice1712/prodigy192.c
+index 8df14f63b10df..096ec76f53046 100644
+--- a/sound/pci/ice1712/prodigy192.c
++++ b/sound/pci/ice1712/prodigy192.c
+@@ -32,7 +32,7 @@
+ * Experimentally I found out that only a combination of
+ * OCKS0=1, OCKS1=1 (128fs, 64fs output) and ice1724 -
+ * VT1724_MT_I2S_MCLK_128X=0 (256fs input) yields correct
+- * sampling rate. That means the the FPGA doubles the
++ * sampling rate. That means that the FPGA doubles the
+ * MCK01 rate.
+ *
+ * Copyright (c) 2003 Takashi Iwai <tiwai@suse.de>
+diff --git a/sound/pci/oxygen/xonar_dg.c b/sound/pci/oxygen/xonar_dg.c
+index c3f8721624cd4..b90421a1d909a 100644
+--- a/sound/pci/oxygen/xonar_dg.c
++++ b/sound/pci/oxygen/xonar_dg.c
+@@ -29,7 +29,7 @@
+ * GPIO 4 <- headphone detect
+ * GPIO 5 -> enable ADC analog circuit for the left channel
+ * GPIO 6 -> enable ADC analog circuit for the right channel
+- * GPIO 7 -> switch green rear output jack between CS4245 and and the first
++ * GPIO 7 -> switch green rear output jack between CS4245 and the first
+ * channel of CS4361 (mechanical relay)
+ * GPIO 8 -> enable output to speakers
+ *
+diff --git a/sound/soc/codecs/wm8958-dsp2.c b/sound/soc/codecs/wm8958-dsp2.c
+index ca42445b649d4..b471892d84778 100644
+--- a/sound/soc/codecs/wm8958-dsp2.c
++++ b/sound/soc/codecs/wm8958-dsp2.c
+@@ -412,8 +412,12 @@ int wm8958_aif_ev(struct snd_soc_dapm_widget *w,
+ struct snd_kcontrol *kcontrol, int event)
+ {
+ struct snd_soc_component *component = snd_soc_dapm_to_component(w->dapm);
++ struct wm8994 *control = dev_get_drvdata(component->dev->parent);
+ int i;
+
++ if (control->type != WM8958)
++ return 0;
++
+ switch (event) {
+ case SND_SOC_DAPM_POST_PMU:
+ case SND_SOC_DAPM_PRE_PMU:
+diff --git a/sound/soc/img/img-i2s-in.c b/sound/soc/img/img-i2s-in.c
+index e30b66b94bf67..0843235d73c91 100644
+--- a/sound/soc/img/img-i2s-in.c
++++ b/sound/soc/img/img-i2s-in.c
+@@ -343,8 +343,10 @@ static int img_i2s_in_set_fmt(struct snd_soc_dai *dai, unsigned int fmt)
+ chan_control_mask = IMG_I2S_IN_CH_CTL_CLK_TRANS_MASK;
+
+ ret = pm_runtime_get_sync(i2s->dev);
+- if (ret < 0)
++ if (ret < 0) {
++ pm_runtime_put_noidle(i2s->dev);
+ return ret;
++ }
+
+ for (i = 0; i < i2s->active_channels; i++)
+ img_i2s_in_ch_disable(i2s, i);
+diff --git a/sound/soc/img/img-parallel-out.c b/sound/soc/img/img-parallel-out.c
+index 5ddbe3a31c2e9..4da49a42e8547 100644
+--- a/sound/soc/img/img-parallel-out.c
++++ b/sound/soc/img/img-parallel-out.c
+@@ -163,8 +163,10 @@ static int img_prl_out_set_fmt(struct snd_soc_dai *dai, unsigned int fmt)
+ }
+
+ ret = pm_runtime_get_sync(prl->dev);
+- if (ret < 0)
++ if (ret < 0) {
++ pm_runtime_put_noidle(prl->dev);
+ return ret;
++ }
+
+ reg = img_prl_out_readl(prl, IMG_PRL_OUT_CTL);
+ reg = (reg & ~IMG_PRL_OUT_CTL_EDGE_MASK) | control_set;
+diff --git a/sound/soc/intel/boards/skl_hda_dsp_common.h b/sound/soc/intel/boards/skl_hda_dsp_common.h
+index 507750ef67f30..4b0b3959182e5 100644
+--- a/sound/soc/intel/boards/skl_hda_dsp_common.h
++++ b/sound/soc/intel/boards/skl_hda_dsp_common.h
+@@ -33,6 +33,7 @@ struct skl_hda_private {
+ int dai_index;
+ const char *platform_name;
+ bool common_hdmi_codec_drv;
++ bool idisp_codec;
+ };
+
+ extern struct snd_soc_dai_link skl_hda_be_dai_links[HDA_DSP_MAX_BE_DAI_LINKS];
+diff --git a/sound/soc/intel/boards/skl_hda_dsp_generic.c b/sound/soc/intel/boards/skl_hda_dsp_generic.c
+index 79c8947f840b9..ca4900036ead9 100644
+--- a/sound/soc/intel/boards/skl_hda_dsp_generic.c
++++ b/sound/soc/intel/boards/skl_hda_dsp_generic.c
+@@ -79,6 +79,9 @@ skl_hda_add_dai_link(struct snd_soc_card *card, struct snd_soc_dai_link *link)
+ link->platforms->name = ctx->platform_name;
+ link->nonatomic = 1;
+
++ if (!ctx->idisp_codec)
++ return 0;
++
+ if (strstr(link->name, "HDMI")) {
+ ret = skl_hda_hdmi_add_pcm(card, ctx->pcm_count);
+
+@@ -118,19 +121,20 @@ static char hda_soc_components[30];
+ static int skl_hda_fill_card_info(struct snd_soc_acpi_mach_params *mach_params)
+ {
+ struct snd_soc_card *card = &hda_soc_card;
++ struct skl_hda_private *ctx = snd_soc_card_get_drvdata(card);
+ struct snd_soc_dai_link *dai_link;
+- u32 codec_count, codec_mask, idisp_mask;
++ u32 codec_count, codec_mask;
+ int i, num_links, num_route;
+
+ codec_mask = mach_params->codec_mask;
+ codec_count = hweight_long(codec_mask);
+- idisp_mask = codec_mask & IDISP_CODEC_MASK;
++ ctx->idisp_codec = !!(codec_mask & IDISP_CODEC_MASK);
+
+ if (!codec_count || codec_count > 2 ||
+- (codec_count == 2 && !idisp_mask))
++ (codec_count == 2 && !ctx->idisp_codec))
+ return -EINVAL;
+
+- if (codec_mask == idisp_mask) {
++ if (codec_mask == IDISP_CODEC_MASK) {
+ /* topology with iDisp as the only HDA codec */
+ num_links = IDISP_DAI_COUNT + DMIC_DAI_COUNT;
+ num_route = IDISP_ROUTE_COUNT;
+@@ -152,7 +156,7 @@ static int skl_hda_fill_card_info(struct snd_soc_acpi_mach_params *mach_params)
+ num_route = ARRAY_SIZE(skl_hda_map);
+ card->dapm_widgets = skl_hda_widgets;
+ card->num_dapm_widgets = ARRAY_SIZE(skl_hda_widgets);
+- if (!idisp_mask) {
++ if (!ctx->idisp_codec) {
+ for (i = 0; i < IDISP_DAI_COUNT; i++) {
+ skl_hda_be_dai_links[i].codecs = dummy_codec;
+ skl_hda_be_dai_links[i].num_codecs =
+@@ -211,6 +215,8 @@ static int skl_hda_audio_probe(struct platform_device *pdev)
+ if (!mach)
+ return -EINVAL;
+
++ snd_soc_card_set_drvdata(&hda_soc_card, ctx);
++
+ ret = skl_hda_fill_card_info(&mach->mach_params);
+ if (ret < 0) {
+ dev_err(&pdev->dev, "Unsupported HDAudio/iDisp configuration found\n");
+@@ -223,7 +229,6 @@ static int skl_hda_audio_probe(struct platform_device *pdev)
+ ctx->common_hdmi_codec_drv = mach->mach_params.common_hdmi_codec_drv;
+
+ hda_soc_card.dev = &pdev->dev;
+- snd_soc_card_set_drvdata(&hda_soc_card, ctx);
+
+ if (mach->mach_params.dmic_num > 0) {
+ snprintf(hda_soc_components, sizeof(hda_soc_components),
+diff --git a/sound/soc/intel/boards/sof_sdw.c b/sound/soc/intel/boards/sof_sdw.c
+index 1bfd9613449e9..95a119a2d354e 100644
+--- a/sound/soc/intel/boards/sof_sdw.c
++++ b/sound/soc/intel/boards/sof_sdw.c
+@@ -184,6 +184,7 @@ static struct sof_sdw_codec_info codec_info_list[] = {
+ .direction = {true, true},
+ .dai_name = "rt711-aif1",
+ .init = sof_sdw_rt711_init,
++ .exit = sof_sdw_rt711_exit,
+ },
+ {
+ .id = 0x1308,
+diff --git a/sound/soc/intel/boards/sof_sdw_common.h b/sound/soc/intel/boards/sof_sdw_common.h
+index 69b363b8a6869..fdd2385049e1e 100644
+--- a/sound/soc/intel/boards/sof_sdw_common.h
++++ b/sound/soc/intel/boards/sof_sdw_common.h
+@@ -84,6 +84,7 @@ int sof_sdw_rt711_init(const struct snd_soc_acpi_link_adr *link,
+ struct snd_soc_dai_link *dai_links,
+ struct sof_sdw_codec_info *info,
+ bool playback);
++int sof_sdw_rt711_exit(struct device *dev, struct snd_soc_dai_link *dai_link);
+
+ /* RT700 support */
+ int sof_sdw_rt700_init(const struct snd_soc_acpi_link_adr *link,
+diff --git a/sound/soc/intel/boards/sof_sdw_rt711.c b/sound/soc/intel/boards/sof_sdw_rt711.c
+index d4d75c8dc6b78..0cb9f1c1f8676 100644
+--- a/sound/soc/intel/boards/sof_sdw_rt711.c
++++ b/sound/soc/intel/boards/sof_sdw_rt711.c
+@@ -133,6 +133,21 @@ static int rt711_rtd_init(struct snd_soc_pcm_runtime *rtd)
+ return ret;
+ }
+
++int sof_sdw_rt711_exit(struct device *dev, struct snd_soc_dai_link *dai_link)
++{
++ struct device *sdw_dev;
++
++ sdw_dev = bus_find_device_by_name(&sdw_bus_type, NULL,
++ dai_link->codecs[0].name);
++ if (!sdw_dev)
++ return -EINVAL;
++
++ device_remove_properties(sdw_dev);
++ put_device(sdw_dev);
++
++ return 0;
++}
++
+ int sof_sdw_rt711_init(const struct snd_soc_acpi_link_adr *link,
+ struct snd_soc_dai_link *dai_links,
+ struct sof_sdw_codec_info *info,
+diff --git a/sound/soc/tegra/tegra30_ahub.c b/sound/soc/tegra/tegra30_ahub.c
+index 635eacbd28d47..156e3b9d613c6 100644
+--- a/sound/soc/tegra/tegra30_ahub.c
++++ b/sound/soc/tegra/tegra30_ahub.c
+@@ -643,8 +643,10 @@ static int tegra30_ahub_resume(struct device *dev)
+ int ret;
+
+ ret = pm_runtime_get_sync(dev);
+- if (ret < 0)
++ if (ret < 0) {
++ pm_runtime_put(dev);
+ return ret;
++ }
+ ret = regcache_sync(ahub->regmap_ahub);
+ ret |= regcache_sync(ahub->regmap_apbif);
+ pm_runtime_put(dev);
+diff --git a/sound/soc/tegra/tegra30_i2s.c b/sound/soc/tegra/tegra30_i2s.c
+index d59882ec48f16..db5a8587bfa4c 100644
+--- a/sound/soc/tegra/tegra30_i2s.c
++++ b/sound/soc/tegra/tegra30_i2s.c
+@@ -567,8 +567,10 @@ static int tegra30_i2s_resume(struct device *dev)
+ int ret;
+
+ ret = pm_runtime_get_sync(dev);
+- if (ret < 0)
++ if (ret < 0) {
++ pm_runtime_put(dev);
+ return ret;
++ }
+ ret = regcache_sync(i2s->regmap);
+ pm_runtime_put(dev);
+
+diff --git a/sound/usb/mixer.c b/sound/usb/mixer.c
+index eab0fd4fd7c33..e0b7174c10430 100644
+--- a/sound/usb/mixer.c
++++ b/sound/usb/mixer.c
+@@ -2367,7 +2367,7 @@ static int build_audio_procunit(struct mixer_build *state, int unitid,
+ int num_ins;
+ struct usb_mixer_elem_info *cval;
+ struct snd_kcontrol *kctl;
+- int i, err, nameid, type, len;
++ int i, err, nameid, type, len, val;
+ const struct procunit_info *info;
+ const struct procunit_value_info *valinfo;
+ const struct usbmix_name_map *map;
+@@ -2470,6 +2470,12 @@ static int build_audio_procunit(struct mixer_build *state, int unitid,
+ break;
+ }
+
++ err = get_cur_ctl_value(cval, cval->control << 8, &val);
++ if (err < 0) {
++ usb_mixer_elem_info_free(cval);
++ return -EINVAL;
++ }
++
+ kctl = snd_ctl_new1(&mixer_procunit_ctl, cval);
+ if (!kctl) {
+ usb_mixer_elem_info_free(cval);
+diff --git a/sound/usb/quirks-table.h b/sound/usb/quirks-table.h
+index a53eb67ad4bd8..366faaa4ba82c 100644
+--- a/sound/usb/quirks-table.h
++++ b/sound/usb/quirks-table.h
+@@ -2678,6 +2678,10 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ .ifnum = QUIRK_ANY_INTERFACE,
+ .type = QUIRK_COMPOSITE,
+ .data = (const struct snd_usb_audio_quirk[]) {
++ {
++ .ifnum = 0,
++ .type = QUIRK_AUDIO_STANDARD_MIXER,
++ },
+ {
+ .ifnum = 0,
+ .type = QUIRK_AUDIO_FIXED_ENDPOINT,
+@@ -2690,6 +2694,32 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ .attributes = UAC_EP_CS_ATTR_SAMPLE_RATE,
+ .endpoint = 0x01,
+ .ep_attr = USB_ENDPOINT_XFER_ISOC,
++ .datainterval = 1,
++ .maxpacksize = 0x024c,
++ .rates = SNDRV_PCM_RATE_44100 |
++ SNDRV_PCM_RATE_48000,
++ .rate_min = 44100,
++ .rate_max = 48000,
++ .nr_rates = 2,
++ .rate_table = (unsigned int[]) {
++ 44100, 48000
++ }
++ }
++ },
++ {
++ .ifnum = 0,
++ .type = QUIRK_AUDIO_FIXED_ENDPOINT,
++ .data = &(const struct audioformat) {
++ .formats = SNDRV_PCM_FMTBIT_S24_3LE,
++ .channels = 2,
++ .iface = 0,
++ .altsetting = 1,
++ .altset_idx = 1,
++ .attributes = 0,
++ .endpoint = 0x82,
++ .ep_attr = USB_ENDPOINT_XFER_ISOC,
++ .datainterval = 1,
++ .maxpacksize = 0x0126,
+ .rates = SNDRV_PCM_RATE_44100 |
+ SNDRV_PCM_RATE_48000,
+ .rate_min = 44100,
+@@ -3697,8 +3727,8 @@ ALC1220_VB_DESKTOP(0x26ce, 0x0a01), /* Asrock TRX40 Creator */
+ * they pretend to be 96kHz mono as a workaround for stereo being broken
+ * by that...
+ *
+- * They also have swapped L-R channels, but that's for userspace to deal
+- * with.
++ * They also have an issue with initial stream alignment that causes the
++ * channels to be swapped and out of phase, which is dealt with in quirks.c.
+ */
+ {
+ .match_flags = USB_DEVICE_ID_MATCH_DEVICE |
+diff --git a/tools/bpf/bpftool/btf_dumper.c b/tools/bpf/bpftool/btf_dumper.c
+index ede162f83eea0..0e9310727281a 100644
+--- a/tools/bpf/bpftool/btf_dumper.c
++++ b/tools/bpf/bpftool/btf_dumper.c
+@@ -67,7 +67,7 @@ static int dump_prog_id_as_func_ptr(const struct btf_dumper *d,
+ if (!info->btf_id || !info->nr_func_info ||
+ btf__get_from_id(info->btf_id, &prog_btf))
+ goto print;
+- finfo = (struct bpf_func_info *)info->func_info;
++ finfo = u64_to_ptr(info->func_info);
+ func_type = btf__type_by_id(prog_btf, finfo->type_id);
+ if (!func_type || !btf_is_func(func_type))
+ goto print;
+diff --git a/tools/bpf/bpftool/link.c b/tools/bpf/bpftool/link.c
+index fca57ee8fafe4..dea691c83afca 100644
+--- a/tools/bpf/bpftool/link.c
++++ b/tools/bpf/bpftool/link.c
+@@ -101,7 +101,7 @@ static int show_link_close_json(int fd, struct bpf_link_info *info)
+ switch (info->type) {
+ case BPF_LINK_TYPE_RAW_TRACEPOINT:
+ jsonw_string_field(json_wtr, "tp_name",
+- (const char *)info->raw_tracepoint.tp_name);
++ u64_to_ptr(info->raw_tracepoint.tp_name));
+ break;
+ case BPF_LINK_TYPE_TRACING:
+ err = get_prog_info(info->prog_id, &prog_info);
+@@ -177,7 +177,7 @@ static int show_link_close_plain(int fd, struct bpf_link_info *info)
+ switch (info->type) {
+ case BPF_LINK_TYPE_RAW_TRACEPOINT:
+ printf("\n\ttp '%s' ",
+- (const char *)info->raw_tracepoint.tp_name);
++ (const char *)u64_to_ptr(info->raw_tracepoint.tp_name));
+ break;
+ case BPF_LINK_TYPE_TRACING:
+ err = get_prog_info(info->prog_id, &prog_info);
+diff --git a/tools/bpf/bpftool/main.h b/tools/bpf/bpftool/main.h
+index 5cdf0bc049bd9..5917484c2e027 100644
+--- a/tools/bpf/bpftool/main.h
++++ b/tools/bpf/bpftool/main.h
+@@ -21,7 +21,15 @@
+ /* Make sure we do not use kernel-only integer typedefs */
+ #pragma GCC poison u8 u16 u32 u64 s8 s16 s32 s64
+
+-#define ptr_to_u64(ptr) ((__u64)(unsigned long)(ptr))
++static inline __u64 ptr_to_u64(const void *ptr)
++{
++ return (__u64)(unsigned long)ptr;
++}
++
++static inline void *u64_to_ptr(__u64 ptr)
++{
++ return (void *)(unsigned long)ptr;
++}
+
+ #define NEXT_ARG() ({ argc--; argv++; if (argc < 0) usage(); })
+ #define NEXT_ARGP() ({ (*argc)--; (*argv)++; if (*argc < 0) usage(); })
+diff --git a/tools/bpf/bpftool/prog.c b/tools/bpf/bpftool/prog.c
+index a5eff83496f2d..2c6f7e160b248 100644
+--- a/tools/bpf/bpftool/prog.c
++++ b/tools/bpf/bpftool/prog.c
+@@ -537,14 +537,14 @@ prog_dump(struct bpf_prog_info *info, enum dump_mode mode,
+ p_info("no instructions returned");
+ return -1;
+ }
+- buf = (unsigned char *)(info->jited_prog_insns);
++ buf = u64_to_ptr(info->jited_prog_insns);
+ member_len = info->jited_prog_len;
+ } else { /* DUMP_XLATED */
+ if (info->xlated_prog_len == 0 || !info->xlated_prog_insns) {
+ p_err("error retrieving insn dump: kernel.kptr_restrict set?");
+ return -1;
+ }
+- buf = (unsigned char *)info->xlated_prog_insns;
++ buf = u64_to_ptr(info->xlated_prog_insns);
+ member_len = info->xlated_prog_len;
+ }
+
+@@ -553,7 +553,7 @@ prog_dump(struct bpf_prog_info *info, enum dump_mode mode,
+ return -1;
+ }
+
+- func_info = (void *)info->func_info;
++ func_info = u64_to_ptr(info->func_info);
+
+ if (info->nr_line_info) {
+ prog_linfo = bpf_prog_linfo__new(info);
+@@ -571,7 +571,7 @@ prog_dump(struct bpf_prog_info *info, enum dump_mode mode,
+
+ n = write(fd, buf, member_len);
+ close(fd);
+- if (n != member_len) {
++ if (n != (ssize_t)member_len) {
+ p_err("error writing output file: %s",
+ n < 0 ? strerror(errno) : "short write");
+ return -1;
+@@ -601,13 +601,13 @@ prog_dump(struct bpf_prog_info *info, enum dump_mode mode,
+ __u32 i;
+ if (info->nr_jited_ksyms) {
+ kernel_syms_load(&dd);
+- ksyms = (__u64 *) info->jited_ksyms;
++ ksyms = u64_to_ptr(info->jited_ksyms);
+ }
+
+ if (json_output)
+ jsonw_start_array(json_wtr);
+
+- lens = (__u32 *) info->jited_func_lens;
++ lens = u64_to_ptr(info->jited_func_lens);
+ for (i = 0; i < info->nr_jited_func_lens; i++) {
+ if (ksyms) {
+ sym = kernel_syms_search(&dd, ksyms[i]);
+@@ -668,7 +668,7 @@ prog_dump(struct bpf_prog_info *info, enum dump_mode mode,
+ } else {
+ kernel_syms_load(&dd);
+ dd.nr_jited_ksyms = info->nr_jited_ksyms;
+- dd.jited_ksyms = (__u64 *) info->jited_ksyms;
++ dd.jited_ksyms = u64_to_ptr(info->jited_ksyms);
+ dd.btf = btf;
+ dd.func_info = func_info;
+ dd.finfo_rec_size = info->func_info_rec_size;
+@@ -1790,7 +1790,7 @@ static char *profile_target_name(int tgt_fd)
+ goto out;
+ }
+
+- func_info = (struct bpf_func_info *)(info_linear->info.func_info);
++ func_info = u64_to_ptr(info_linear->info.func_info);
+ t = btf__type_by_id(btf, func_info[0].type_id);
+ if (!t) {
+ p_err("btf %d doesn't have type %d",
+diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
+index e7642a6e39f9e..3ac0094706b81 100644
+--- a/tools/lib/bpf/libbpf.c
++++ b/tools/lib/bpf/libbpf.c
+@@ -2237,7 +2237,7 @@ static int bpf_object__init_user_btf_maps(struct bpf_object *obj, bool strict,
+ data = elf_getdata(scn, NULL);
+ if (!scn || !data) {
+ pr_warn("failed to get Elf_Data from map section %d (%s)\n",
+- obj->efile.maps_shndx, MAPS_ELF_SEC);
++ obj->efile.btf_maps_shndx, MAPS_ELF_SEC);
+ return -EINVAL;
+ }
+
+@@ -3319,10 +3319,11 @@ bpf_object__probe_global_data(struct bpf_object *obj)
+
+ map = bpf_create_map_xattr(&map_attr);
+ if (map < 0) {
+- cp = libbpf_strerror_r(errno, errmsg, sizeof(errmsg));
++ ret = -errno;
++ cp = libbpf_strerror_r(ret, errmsg, sizeof(errmsg));
+ pr_warn("Error in %s():%s(%d). Couldn't create simple array map.\n",
+- __func__, cp, errno);
+- return -errno;
++ __func__, cp, -ret);
++ return ret;
+ }
+
+ insns[0].imm = map;
+@@ -5779,9 +5780,10 @@ int bpf_program__pin_instance(struct bpf_program *prog, const char *path,
+ }
+
+ if (bpf_obj_pin(prog->instances.fds[instance], path)) {
+- cp = libbpf_strerror_r(errno, errmsg, sizeof(errmsg));
++ err = -errno;
++ cp = libbpf_strerror_r(err, errmsg, sizeof(errmsg));
+ pr_warn("failed to pin program: %s\n", cp);
+- return -errno;
++ return err;
+ }
+ pr_debug("pinned program '%s'\n", path);
+
+diff --git a/tools/testing/selftests/bpf/prog_tests/bpf_obj_id.c b/tools/testing/selftests/bpf/prog_tests/bpf_obj_id.c
+index 7afa4160416f6..284d5921c3458 100644
+--- a/tools/testing/selftests/bpf/prog_tests/bpf_obj_id.c
++++ b/tools/testing/selftests/bpf/prog_tests/bpf_obj_id.c
+@@ -159,15 +159,15 @@ void test_bpf_obj_id(void)
+ /* Check getting link info */
+ info_len = sizeof(struct bpf_link_info) * 2;
+ bzero(&link_infos[i], info_len);
+- link_infos[i].raw_tracepoint.tp_name = (__u64)&tp_name;
++ link_infos[i].raw_tracepoint.tp_name = ptr_to_u64(&tp_name);
+ link_infos[i].raw_tracepoint.tp_name_len = sizeof(tp_name);
+ err = bpf_obj_get_info_by_fd(bpf_link__fd(links[i]),
+ &link_infos[i], &info_len);
+ if (CHECK(err ||
+ link_infos[i].type != BPF_LINK_TYPE_RAW_TRACEPOINT ||
+ link_infos[i].prog_id != prog_infos[i].id ||
+- link_infos[i].raw_tracepoint.tp_name != (__u64)&tp_name ||
+- strcmp((char *)link_infos[i].raw_tracepoint.tp_name,
++ link_infos[i].raw_tracepoint.tp_name != ptr_to_u64(&tp_name) ||
++ strcmp(u64_to_ptr(link_infos[i].raw_tracepoint.tp_name),
+ "sys_enter") ||
+ info_len != sizeof(struct bpf_link_info),
+ "get-link-info(fd)",
+@@ -178,7 +178,7 @@ void test_bpf_obj_id(void)
+ link_infos[i].type, BPF_LINK_TYPE_RAW_TRACEPOINT,
+ link_infos[i].id,
+ link_infos[i].prog_id, prog_infos[i].id,
+- (char *)link_infos[i].raw_tracepoint.tp_name,
++ (const char *)u64_to_ptr(link_infos[i].raw_tracepoint.tp_name),
+ "sys_enter"))
+ goto done;
+
+diff --git a/tools/testing/selftests/bpf/prog_tests/btf_dump.c b/tools/testing/selftests/bpf/prog_tests/btf_dump.c
+index cb33a7ee4e04f..39fb81d9daeb5 100644
+--- a/tools/testing/selftests/bpf/prog_tests/btf_dump.c
++++ b/tools/testing/selftests/bpf/prog_tests/btf_dump.c
+@@ -12,15 +12,16 @@ void btf_dump_printf(void *ctx, const char *fmt, va_list args)
+ static struct btf_dump_test_case {
+ const char *name;
+ const char *file;
++ bool known_ptr_sz;
+ struct btf_dump_opts opts;
+ } btf_dump_test_cases[] = {
+- {"btf_dump: syntax", "btf_dump_test_case_syntax", {}},
+- {"btf_dump: ordering", "btf_dump_test_case_ordering", {}},
+- {"btf_dump: padding", "btf_dump_test_case_padding", {}},
+- {"btf_dump: packing", "btf_dump_test_case_packing", {}},
+- {"btf_dump: bitfields", "btf_dump_test_case_bitfields", {}},
+- {"btf_dump: multidim", "btf_dump_test_case_multidim", {}},
+- {"btf_dump: namespacing", "btf_dump_test_case_namespacing", {}},
++ {"btf_dump: syntax", "btf_dump_test_case_syntax", true, {}},
++ {"btf_dump: ordering", "btf_dump_test_case_ordering", false, {}},
++ {"btf_dump: padding", "btf_dump_test_case_padding", true, {}},
++ {"btf_dump: packing", "btf_dump_test_case_packing", true, {}},
++ {"btf_dump: bitfields", "btf_dump_test_case_bitfields", true, {}},
++ {"btf_dump: multidim", "btf_dump_test_case_multidim", false, {}},
++ {"btf_dump: namespacing", "btf_dump_test_case_namespacing", false, {}},
+ };
+
+ static int btf_dump_all_types(const struct btf *btf,
+@@ -62,6 +63,18 @@ static int test_btf_dump_case(int n, struct btf_dump_test_case *t)
+ goto done;
+ }
+
++ /* tests with t->known_ptr_sz have no "long" or "unsigned long" type,
++ * so it's impossible to determine correct pointer size; but if they
++ * do, it should be 8 regardless of host architecture, becaues BPF
++ * target is always 64-bit
++ */
++ if (!t->known_ptr_sz) {
++ btf__set_pointer_size(btf, 8);
++ } else {
++ CHECK(btf__pointer_size(btf) != 8, "ptr_sz", "exp %d, got %zu\n",
++ 8, btf__pointer_size(btf));
++ }
++
+ snprintf(out_file, sizeof(out_file), "/tmp/%s.output.XXXXXX", t->file);
+ fd = mkstemp(out_file);
+ if (CHECK(fd < 0, "create_tmp", "failed to create file: %d\n", fd)) {
+diff --git a/tools/testing/selftests/bpf/prog_tests/core_extern.c b/tools/testing/selftests/bpf/prog_tests/core_extern.c
+index b093787e94489..1931a158510e0 100644
+--- a/tools/testing/selftests/bpf/prog_tests/core_extern.c
++++ b/tools/testing/selftests/bpf/prog_tests/core_extern.c
+@@ -159,8 +159,8 @@ void test_core_extern(void)
+ exp = (uint64_t *)&t->data;
+ for (j = 0; j < n; j++) {
+ CHECK(got[j] != exp[j], "check_res",
+- "result #%d: expected %lx, but got %lx\n",
+- j, exp[j], got[j]);
++ "result #%d: expected %llx, but got %llx\n",
++ j, (__u64)exp[j], (__u64)got[j]);
+ }
+ cleanup:
+ test_core_extern__destroy(skel);
+diff --git a/tools/testing/selftests/bpf/prog_tests/core_reloc.c b/tools/testing/selftests/bpf/prog_tests/core_reloc.c
+index 084ed26a7d78c..a54eafc5e4b31 100644
+--- a/tools/testing/selftests/bpf/prog_tests/core_reloc.c
++++ b/tools/testing/selftests/bpf/prog_tests/core_reloc.c
+@@ -237,8 +237,8 @@
+ .union_sz = sizeof(((type *)0)->union_field), \
+ .arr_sz = sizeof(((type *)0)->arr_field), \
+ .arr_elem_sz = sizeof(((type *)0)->arr_field[0]), \
+- .ptr_sz = sizeof(((type *)0)->ptr_field), \
+- .enum_sz = sizeof(((type *)0)->enum_field), \
++ .ptr_sz = 8, /* always 8-byte pointer for BPF */ \
++ .enum_sz = sizeof(((type *)0)->enum_field), \
+ }
+
+ #define SIZE_CASE(name) { \
+@@ -432,20 +432,20 @@ static struct core_reloc_test_case test_cases[] = {
+ .sb4 = -1,
+ .sb20 = -0x17654321,
+ .u32 = 0xBEEF,
+- .s32 = -0x3FEDCBA987654321,
++ .s32 = -0x3FEDCBA987654321LL,
+ }),
+ BITFIELDS_CASE(bitfields___bitfield_vs_int, {
+- .ub1 = 0xFEDCBA9876543210,
++ .ub1 = 0xFEDCBA9876543210LL,
+ .ub2 = 0xA6,
+- .ub7 = -0x7EDCBA987654321,
+- .sb4 = -0x6123456789ABCDE,
+- .sb20 = 0xD00D,
++ .ub7 = -0x7EDCBA987654321LL,
++ .sb4 = -0x6123456789ABCDELL,
++ .sb20 = 0xD00DLL,
+ .u32 = -0x76543,
+- .s32 = 0x0ADEADBEEFBADB0B,
++ .s32 = 0x0ADEADBEEFBADB0BLL,
+ }),
+ BITFIELDS_CASE(bitfields___just_big_enough, {
+- .ub1 = 0xF,
+- .ub2 = 0x0812345678FEDCBA,
++ .ub1 = 0xFLL,
++ .ub2 = 0x0812345678FEDCBALL,
+ }),
+ BITFIELDS_ERR_CASE(bitfields___err_too_big_bitfield),
+
+diff --git a/tools/testing/selftests/bpf/prog_tests/fexit_bpf2bpf.c b/tools/testing/selftests/bpf/prog_tests/fexit_bpf2bpf.c
+index a895bfed55db0..197d0d217b56b 100644
+--- a/tools/testing/selftests/bpf/prog_tests/fexit_bpf2bpf.c
++++ b/tools/testing/selftests/bpf/prog_tests/fexit_bpf2bpf.c
+@@ -16,7 +16,7 @@ static void test_fexit_bpf2bpf_common(const char *obj_file,
+ __u32 duration = 0, retval;
+ struct bpf_map *data_map;
+ const int zero = 0;
+- u64 *result = NULL;
++ __u64 *result = NULL;
+
+ err = bpf_prog_load(target_obj_file, BPF_PROG_TYPE_UNSPEC,
+ &pkt_obj, &pkt_fd);
+@@ -29,7 +29,7 @@ static void test_fexit_bpf2bpf_common(const char *obj_file,
+
+ link = calloc(sizeof(struct bpf_link *), prog_cnt);
+ prog = calloc(sizeof(struct bpf_program *), prog_cnt);
+- result = malloc((prog_cnt + 32 /* spare */) * sizeof(u64));
++ result = malloc((prog_cnt + 32 /* spare */) * sizeof(__u64));
+ if (CHECK(!link || !prog || !result, "alloc_memory",
+ "failed to alloc memory"))
+ goto close_prog;
+@@ -72,7 +72,7 @@ static void test_fexit_bpf2bpf_common(const char *obj_file,
+ goto close_prog;
+
+ for (i = 0; i < prog_cnt; i++)
+- if (CHECK(result[i] != 1, "result", "fexit_bpf2bpf failed err %ld\n",
++ if (CHECK(result[i] != 1, "result", "fexit_bpf2bpf failed err %llu\n",
+ result[i]))
+ goto close_prog;
+
+diff --git a/tools/testing/selftests/bpf/prog_tests/flow_dissector.c b/tools/testing/selftests/bpf/prog_tests/flow_dissector.c
+index f11f187990e95..cd6dc80edf18e 100644
+--- a/tools/testing/selftests/bpf/prog_tests/flow_dissector.c
++++ b/tools/testing/selftests/bpf/prog_tests/flow_dissector.c
+@@ -591,7 +591,7 @@ void test_flow_dissector(void)
+ CHECK_ATTR(tattr.data_size_out != sizeof(flow_keys) ||
+ err || tattr.retval != 1,
+ tests[i].name,
+- "err %d errno %d retval %d duration %d size %u/%lu\n",
++ "err %d errno %d retval %d duration %d size %u/%zu\n",
+ err, errno, tattr.retval, tattr.duration,
+ tattr.data_size_out, sizeof(flow_keys));
+ CHECK_FLOW_KEYS(tests[i].name, flow_keys, tests[i].keys);
+diff --git a/tools/testing/selftests/bpf/prog_tests/global_data.c b/tools/testing/selftests/bpf/prog_tests/global_data.c
+index e3cb62b0a110e..9efa7e50eab27 100644
+--- a/tools/testing/selftests/bpf/prog_tests/global_data.c
++++ b/tools/testing/selftests/bpf/prog_tests/global_data.c
+@@ -5,7 +5,7 @@
+ static void test_global_data_number(struct bpf_object *obj, __u32 duration)
+ {
+ int i, err, map_fd;
+- uint64_t num;
++ __u64 num;
+
+ map_fd = bpf_find_map(__func__, obj, "result_number");
+ if (CHECK_FAIL(map_fd < 0))
+@@ -14,7 +14,7 @@ static void test_global_data_number(struct bpf_object *obj, __u32 duration)
+ struct {
+ char *name;
+ uint32_t key;
+- uint64_t num;
++ __u64 num;
+ } tests[] = {
+ { "relocate .bss reference", 0, 0 },
+ { "relocate .data reference", 1, 42 },
+@@ -32,7 +32,7 @@ static void test_global_data_number(struct bpf_object *obj, __u32 duration)
+ for (i = 0; i < sizeof(tests) / sizeof(tests[0]); i++) {
+ err = bpf_map_lookup_elem(map_fd, &tests[i].key, &num);
+ CHECK(err || num != tests[i].num, tests[i].name,
+- "err %d result %lx expected %lx\n",
++ "err %d result %llx expected %llx\n",
+ err, num, tests[i].num);
+ }
+ }
+diff --git a/tools/testing/selftests/bpf/prog_tests/mmap.c b/tools/testing/selftests/bpf/prog_tests/mmap.c
+index 43d0b5578f461..9c3c5c0f068fb 100644
+--- a/tools/testing/selftests/bpf/prog_tests/mmap.c
++++ b/tools/testing/selftests/bpf/prog_tests/mmap.c
+@@ -21,7 +21,7 @@ void test_mmap(void)
+ const long page_size = sysconf(_SC_PAGE_SIZE);
+ int err, duration = 0, i, data_map_fd, data_map_id, tmp_fd, rdmap_fd;
+ struct bpf_map *data_map, *bss_map;
+- void *bss_mmaped = NULL, *map_mmaped = NULL, *tmp1, *tmp2;
++ void *bss_mmaped = NULL, *map_mmaped = NULL, *tmp0, *tmp1, *tmp2;
+ struct test_mmap__bss *bss_data;
+ struct bpf_map_info map_info;
+ __u32 map_info_sz = sizeof(map_info);
+@@ -183,16 +183,23 @@ void test_mmap(void)
+
+ /* check some more advanced mmap() manipulations */
+
++ tmp0 = mmap(NULL, 4 * page_size, PROT_READ, MAP_SHARED | MAP_ANONYMOUS,
++ -1, 0);
++ if (CHECK(tmp0 == MAP_FAILED, "adv_mmap0", "errno %d\n", errno))
++ goto cleanup;
++
+ /* map all but last page: pages 1-3 mapped */
+- tmp1 = mmap(NULL, 3 * page_size, PROT_READ, MAP_SHARED,
++ tmp1 = mmap(tmp0, 3 * page_size, PROT_READ, MAP_SHARED | MAP_FIXED,
+ data_map_fd, 0);
+- if (CHECK(tmp1 == MAP_FAILED, "adv_mmap1", "errno %d\n", errno))
++ if (CHECK(tmp0 != tmp1, "adv_mmap1", "tmp0: %p, tmp1: %p\n", tmp0, tmp1)) {
++ munmap(tmp0, 4 * page_size);
+ goto cleanup;
++ }
+
+ /* unmap second page: pages 1, 3 mapped */
+ err = munmap(tmp1 + page_size, page_size);
+ if (CHECK(err, "adv_mmap2", "errno %d\n", errno)) {
+- munmap(tmp1, map_sz);
++ munmap(tmp1, 4 * page_size);
+ goto cleanup;
+ }
+
+@@ -201,7 +208,7 @@ void test_mmap(void)
+ MAP_SHARED | MAP_FIXED, data_map_fd, 0);
+ if (CHECK(tmp2 == MAP_FAILED, "adv_mmap3", "errno %d\n", errno)) {
+ munmap(tmp1, page_size);
+- munmap(tmp1 + 2*page_size, page_size);
++ munmap(tmp1 + 2*page_size, 2 * page_size);
+ goto cleanup;
+ }
+ CHECK(tmp1 + page_size != tmp2, "adv_mmap4",
+@@ -211,7 +218,7 @@ void test_mmap(void)
+ tmp2 = mmap(tmp1, 4 * page_size, PROT_READ, MAP_SHARED | MAP_FIXED,
+ data_map_fd, 0);
+ if (CHECK(tmp2 == MAP_FAILED, "adv_mmap5", "errno %d\n", errno)) {
+- munmap(tmp1, 3 * page_size); /* unmap page 1 */
++ munmap(tmp1, 4 * page_size); /* unmap page 1 */
+ goto cleanup;
+ }
+ CHECK(tmp1 != tmp2, "adv_mmap6", "tmp1: %p, tmp2: %p\n", tmp1, tmp2);
+diff --git a/tools/testing/selftests/bpf/prog_tests/prog_run_xattr.c b/tools/testing/selftests/bpf/prog_tests/prog_run_xattr.c
+index dde2b7ae7bc9e..935a294f049a2 100644
+--- a/tools/testing/selftests/bpf/prog_tests/prog_run_xattr.c
++++ b/tools/testing/selftests/bpf/prog_tests/prog_run_xattr.c
+@@ -28,7 +28,7 @@ void test_prog_run_xattr(void)
+ "err %d errno %d retval %d\n", err, errno, tattr.retval);
+
+ CHECK_ATTR(tattr.data_size_out != sizeof(pkt_v4), "data_size_out",
+- "incorrect output size, want %lu have %u\n",
++ "incorrect output size, want %zu have %u\n",
+ sizeof(pkt_v4), tattr.data_size_out);
+
+ CHECK_ATTR(buf[5] != 0, "overflow",
+diff --git a/tools/testing/selftests/bpf/prog_tests/skb_ctx.c b/tools/testing/selftests/bpf/prog_tests/skb_ctx.c
+index 7021b92af3134..c61b2b69710a9 100644
+--- a/tools/testing/selftests/bpf/prog_tests/skb_ctx.c
++++ b/tools/testing/selftests/bpf/prog_tests/skb_ctx.c
+@@ -80,7 +80,7 @@ void test_skb_ctx(void)
+
+ CHECK_ATTR(tattr.ctx_size_out != sizeof(skb),
+ "ctx_size_out",
+- "incorrect output size, want %lu have %u\n",
++ "incorrect output size, want %zu have %u\n",
+ sizeof(skb), tattr.ctx_size_out);
+
+ for (i = 0; i < 5; i++)
+diff --git a/tools/testing/selftests/bpf/prog_tests/test_global_funcs.c b/tools/testing/selftests/bpf/prog_tests/test_global_funcs.c
+index 25b068591e9a4..193002b14d7f6 100644
+--- a/tools/testing/selftests/bpf/prog_tests/test_global_funcs.c
++++ b/tools/testing/selftests/bpf/prog_tests/test_global_funcs.c
+@@ -19,7 +19,7 @@ static int libbpf_debug_print(enum libbpf_print_level level,
+ log_buf = va_arg(args, char *);
+ if (!log_buf)
+ goto out;
+- if (strstr(log_buf, err_str) == 0)
++ if (err_str && strstr(log_buf, err_str) == 0)
+ found = true;
+ out:
+ printf(format, log_buf);
+diff --git a/tools/testing/selftests/bpf/progs/core_reloc_types.h b/tools/testing/selftests/bpf/progs/core_reloc_types.h
+index 34d84717c9464..69139ed662164 100644
+--- a/tools/testing/selftests/bpf/progs/core_reloc_types.h
++++ b/tools/testing/selftests/bpf/progs/core_reloc_types.h
+@@ -1,5 +1,10 @@
+ #include <stdint.h>
+ #include <stdbool.h>
++
++void preserce_ptr_sz_fn(long x) {}
++
++#define __bpf_aligned __attribute__((aligned(8)))
++
+ /*
+ * KERNEL
+ */
+@@ -444,51 +449,51 @@ struct core_reloc_primitives {
+ char a;
+ int b;
+ enum core_reloc_primitives_enum c;
+- void *d;
+- int (*f)(const char *);
++ void *d __bpf_aligned;
++ int (*f)(const char *) __bpf_aligned;
+ };
+
+ struct core_reloc_primitives___diff_enum_def {
+ char a;
+ int b;
+- void *d;
+- int (*f)(const char *);
++ void *d __bpf_aligned;
++ int (*f)(const char *) __bpf_aligned;
+ enum {
+ X = 100,
+ Y = 200,
+- } c; /* inline enum def with differing set of values */
++ } c __bpf_aligned; /* inline enum def with differing set of values */
+ };
+
+ struct core_reloc_primitives___diff_func_proto {
+- void (*f)(int); /* incompatible function prototype */
+- void *d;
+- enum core_reloc_primitives_enum c;
++ void (*f)(int) __bpf_aligned; /* incompatible function prototype */
++ void *d __bpf_aligned;
++ enum core_reloc_primitives_enum c __bpf_aligned;
+ int b;
+ char a;
+ };
+
+ struct core_reloc_primitives___diff_ptr_type {
+- const char * const d; /* different pointee type + modifiers */
+- char a;
++ const char * const d __bpf_aligned; /* different pointee type + modifiers */
++ char a __bpf_aligned;
+ int b;
+ enum core_reloc_primitives_enum c;
+- int (*f)(const char *);
++ int (*f)(const char *) __bpf_aligned;
+ };
+
+ struct core_reloc_primitives___err_non_enum {
+ char a[1];
+ int b;
+ int c; /* int instead of enum */
+- void *d;
+- int (*f)(const char *);
++ void *d __bpf_aligned;
++ int (*f)(const char *) __bpf_aligned;
+ };
+
+ struct core_reloc_primitives___err_non_int {
+ char a[1];
+- int *b; /* ptr instead of int */
+- enum core_reloc_primitives_enum c;
+- void *d;
+- int (*f)(const char *);
++ int *b __bpf_aligned; /* ptr instead of int */
++ enum core_reloc_primitives_enum c __bpf_aligned;
++ void *d __bpf_aligned;
++ int (*f)(const char *) __bpf_aligned;
+ };
+
+ struct core_reloc_primitives___err_non_ptr {
+@@ -496,7 +501,7 @@ struct core_reloc_primitives___err_non_ptr {
+ int b;
+ enum core_reloc_primitives_enum c;
+ int d; /* int instead of ptr */
+- int (*f)(const char *);
++ int (*f)(const char *) __bpf_aligned;
+ };
+
+ /*
+@@ -507,7 +512,7 @@ struct core_reloc_mods_output {
+ };
+
+ typedef const int int_t;
+-typedef const char *char_ptr_t;
++typedef const char *char_ptr_t __bpf_aligned;
+ typedef const int arr_t[7];
+
+ struct core_reloc_mods_substruct {
+@@ -523,9 +528,9 @@ typedef struct {
+ struct core_reloc_mods {
+ int a;
+ int_t b;
+- char *c;
++ char *c __bpf_aligned;
+ char_ptr_t d;
+- int e[3];
++ int e[3] __bpf_aligned;
+ arr_t f;
+ struct core_reloc_mods_substruct g;
+ core_reloc_mods_substruct_t h;
+@@ -535,9 +540,9 @@ struct core_reloc_mods {
+ struct core_reloc_mods___mod_swap {
+ int b;
+ int_t a;
+- char *d;
++ char *d __bpf_aligned;
+ char_ptr_t c;
+- int f[3];
++ int f[3] __bpf_aligned;
+ arr_t e;
+ struct {
+ int y;
+@@ -555,7 +560,7 @@ typedef arr1_t arr2_t;
+ typedef arr2_t arr3_t;
+ typedef arr3_t arr4_t;
+
+-typedef const char * const volatile fancy_char_ptr_t;
++typedef const char * const volatile fancy_char_ptr_t __bpf_aligned;
+
+ typedef core_reloc_mods_substruct_t core_reloc_mods_substruct_tt;
+
+@@ -567,7 +572,7 @@ struct core_reloc_mods___typedefs {
+ arr4_t e;
+ fancy_char_ptr_t d;
+ fancy_char_ptr_t c;
+- int3_t b;
++ int3_t b __bpf_aligned;
+ int3_t a;
+ };
+
+@@ -739,19 +744,19 @@ struct core_reloc_bitfields___bit_sz_change {
+ int8_t sb4: 1; /* 4 -> 1 */
+ int32_t sb20: 30; /* 20 -> 30 */
+ /* non-bitfields */
+- uint16_t u32; /* 32 -> 16 */
+- int64_t s32; /* 32 -> 64 */
++ uint16_t u32; /* 32 -> 16 */
++ int64_t s32 __bpf_aligned; /* 32 -> 64 */
+ };
+
+ /* turn bitfield into non-bitfield and vice versa */
+ struct core_reloc_bitfields___bitfield_vs_int {
+ uint64_t ub1; /* 3 -> 64 non-bitfield */
+ uint8_t ub2; /* 20 -> 8 non-bitfield */
+- int64_t ub7; /* 7 -> 64 non-bitfield signed */
+- int64_t sb4; /* 4 -> 64 non-bitfield signed */
+- uint64_t sb20; /* 20 -> 16 non-bitfield unsigned */
+- int32_t u32: 20; /* 32 non-bitfield -> 20 bitfield */
+- uint64_t s32: 60; /* 32 non-bitfield -> 60 bitfield */
++ int64_t ub7 __bpf_aligned; /* 7 -> 64 non-bitfield signed */
++ int64_t sb4 __bpf_aligned; /* 4 -> 64 non-bitfield signed */
++ uint64_t sb20 __bpf_aligned; /* 20 -> 16 non-bitfield unsigned */
++ int32_t u32: 20; /* 32 non-bitfield -> 20 bitfield */
++ uint64_t s32: 60 __bpf_aligned; /* 32 non-bitfield -> 60 bitfield */
+ };
+
+ struct core_reloc_bitfields___just_big_enough {
+diff --git a/tools/testing/selftests/bpf/test_btf.c b/tools/testing/selftests/bpf/test_btf.c
+index 305fae8f80a98..c75fc6447186a 100644
+--- a/tools/testing/selftests/bpf/test_btf.c
++++ b/tools/testing/selftests/bpf/test_btf.c
+@@ -3883,7 +3883,7 @@ static int test_big_btf_info(unsigned int test_num)
+ info_garbage.garbage = 0;
+ err = bpf_obj_get_info_by_fd(btf_fd, info, &info_len);
+ if (CHECK(err || info_len != sizeof(*info),
+- "err:%d errno:%d info_len:%u sizeof(*info):%lu",
++ "err:%d errno:%d info_len:%u sizeof(*info):%zu",
+ err, errno, info_len, sizeof(*info))) {
+ err = -1;
+ goto done;
+@@ -4094,7 +4094,7 @@ static int do_test_get_info(unsigned int test_num)
+ if (CHECK(err || !info.id || info_len != sizeof(info) ||
+ info.btf_size != raw_btf_size ||
+ (ret = memcmp(raw_btf, user_btf, expected_nbytes)),
+- "err:%d errno:%d info.id:%u info_len:%u sizeof(info):%lu raw_btf_size:%u info.btf_size:%u expected_nbytes:%u memcmp:%d",
++ "err:%d errno:%d info.id:%u info_len:%u sizeof(info):%zu raw_btf_size:%u info.btf_size:%u expected_nbytes:%u memcmp:%d",
+ err, errno, info.id, info_len, sizeof(info),
+ raw_btf_size, info.btf_size, expected_nbytes, ret)) {
+ err = -1;
+@@ -4730,7 +4730,7 @@ ssize_t get_pprint_expected_line(enum pprint_mapv_kind_t mapv_kind,
+
+ nexpected_line = snprintf(expected_line, line_size,
+ "%s%u: {%u,0,%d,0x%x,0x%x,0x%x,"
+- "{%lu|[%u,%u,%u,%u,%u,%u,%u,%u]},%s,"
++ "{%llu|[%u,%u,%u,%u,%u,%u,%u,%u]},%s,"
+ "%u,0x%x,[[%d,%d],[%d,%d]]}\n",
+ percpu_map ? "\tcpu" : "",
+ percpu_map ? cpu : next_key,
+@@ -4738,7 +4738,7 @@ ssize_t get_pprint_expected_line(enum pprint_mapv_kind_t mapv_kind,
+ v->unused_bits2a,
+ v->bits28,
+ v->unused_bits2b,
+- v->ui64,
++ (__u64)v->ui64,
+ v->ui8a[0], v->ui8a[1],
+ v->ui8a[2], v->ui8a[3],
+ v->ui8a[4], v->ui8a[5],
+diff --git a/tools/testing/selftests/bpf/test_progs.h b/tools/testing/selftests/bpf/test_progs.h
+index b809246039181..b5670350e3263 100644
+--- a/tools/testing/selftests/bpf/test_progs.h
++++ b/tools/testing/selftests/bpf/test_progs.h
+@@ -133,6 +133,11 @@ static inline __u64 ptr_to_u64(const void *ptr)
+ return (__u64) (unsigned long) ptr;
+ }
+
++static inline void *u64_to_ptr(__u64 ptr)
++{
++ return (void *) (unsigned long) ptr;
++}
++
+ int bpf_find_map(const char *test, struct bpf_object *obj, const char *name);
+ int compare_map_keys(int map1_fd, int map2_fd);
+ int compare_stack_ips(int smap_fd, int amap_fd, int stack_trace_len);
+diff --git a/tools/testing/selftests/net/icmp_redirect.sh b/tools/testing/selftests/net/icmp_redirect.sh
+index 18c5de53558af..bf361f30d6ef9 100755
+--- a/tools/testing/selftests/net/icmp_redirect.sh
++++ b/tools/testing/selftests/net/icmp_redirect.sh
+@@ -180,6 +180,8 @@ setup()
+ ;;
+ r[12]) ip netns exec $ns sysctl -q -w net.ipv4.ip_forward=1
+ ip netns exec $ns sysctl -q -w net.ipv4.conf.all.send_redirects=1
++ ip netns exec $ns sysctl -q -w net.ipv4.conf.default.rp_filter=0
++ ip netns exec $ns sysctl -q -w net.ipv4.conf.all.rp_filter=0
+
+ ip netns exec $ns sysctl -q -w net.ipv6.conf.all.forwarding=1
+ ip netns exec $ns sysctl -q -w net.ipv6.route.mtu_expires=10
+diff --git a/tools/testing/selftests/powerpc/pmu/ebb/back_to_back_ebbs_test.c b/tools/testing/selftests/powerpc/pmu/ebb/back_to_back_ebbs_test.c
+index a2d7b0e3dca97..a26ac122c759f 100644
+--- a/tools/testing/selftests/powerpc/pmu/ebb/back_to_back_ebbs_test.c
++++ b/tools/testing/selftests/powerpc/pmu/ebb/back_to_back_ebbs_test.c
+@@ -91,8 +91,6 @@ int back_to_back_ebbs(void)
+ ebb_global_disable();
+ ebb_freeze_pmcs();
+
+- count_pmc(1, sample_period);
+-
+ dump_ebb_state();
+
+ event_close(&event);
+diff --git a/tools/testing/selftests/powerpc/pmu/ebb/cycles_test.c b/tools/testing/selftests/powerpc/pmu/ebb/cycles_test.c
+index bc893813483ee..bb9f587fa76e8 100644
+--- a/tools/testing/selftests/powerpc/pmu/ebb/cycles_test.c
++++ b/tools/testing/selftests/powerpc/pmu/ebb/cycles_test.c
+@@ -42,8 +42,6 @@ int cycles(void)
+ ebb_global_disable();
+ ebb_freeze_pmcs();
+
+- count_pmc(1, sample_period);
+-
+ dump_ebb_state();
+
+ event_close(&event);
+diff --git a/tools/testing/selftests/powerpc/pmu/ebb/cycles_with_freeze_test.c b/tools/testing/selftests/powerpc/pmu/ebb/cycles_with_freeze_test.c
+index dcd351d203289..9ae795ce314e6 100644
+--- a/tools/testing/selftests/powerpc/pmu/ebb/cycles_with_freeze_test.c
++++ b/tools/testing/selftests/powerpc/pmu/ebb/cycles_with_freeze_test.c
+@@ -99,8 +99,6 @@ int cycles_with_freeze(void)
+ ebb_global_disable();
+ ebb_freeze_pmcs();
+
+- count_pmc(1, sample_period);
+-
+ dump_ebb_state();
+
+ printf("EBBs while frozen %d\n", ebbs_while_frozen);
+diff --git a/tools/testing/selftests/powerpc/pmu/ebb/cycles_with_mmcr2_test.c b/tools/testing/selftests/powerpc/pmu/ebb/cycles_with_mmcr2_test.c
+index 94c99c12c0f23..4b45a2e70f62b 100644
+--- a/tools/testing/selftests/powerpc/pmu/ebb/cycles_with_mmcr2_test.c
++++ b/tools/testing/selftests/powerpc/pmu/ebb/cycles_with_mmcr2_test.c
+@@ -71,8 +71,6 @@ int cycles_with_mmcr2(void)
+ ebb_global_disable();
+ ebb_freeze_pmcs();
+
+- count_pmc(1, sample_period);
+-
+ dump_ebb_state();
+
+ event_close(&event);
+diff --git a/tools/testing/selftests/powerpc/pmu/ebb/ebb.c b/tools/testing/selftests/powerpc/pmu/ebb/ebb.c
+index dfbc5c3ad52d7..21537d6eb6b7d 100644
+--- a/tools/testing/selftests/powerpc/pmu/ebb/ebb.c
++++ b/tools/testing/selftests/powerpc/pmu/ebb/ebb.c
+@@ -396,8 +396,6 @@ int ebb_child(union pipe read_pipe, union pipe write_pipe)
+ ebb_global_disable();
+ ebb_freeze_pmcs();
+
+- count_pmc(1, sample_period);
+-
+ dump_ebb_state();
+
+ event_close(&event);
+diff --git a/tools/testing/selftests/powerpc/pmu/ebb/ebb_on_willing_child_test.c b/tools/testing/selftests/powerpc/pmu/ebb/ebb_on_willing_child_test.c
+index ca2f7d729155b..b208bf6ad58d3 100644
+--- a/tools/testing/selftests/powerpc/pmu/ebb/ebb_on_willing_child_test.c
++++ b/tools/testing/selftests/powerpc/pmu/ebb/ebb_on_willing_child_test.c
+@@ -38,8 +38,6 @@ static int victim_child(union pipe read_pipe, union pipe write_pipe)
+ ebb_global_disable();
+ ebb_freeze_pmcs();
+
+- count_pmc(1, sample_period);
+-
+ dump_ebb_state();
+
+ FAIL_IF(ebb_state.stats.ebb_count == 0);
+diff --git a/tools/testing/selftests/powerpc/pmu/ebb/lost_exception_test.c b/tools/testing/selftests/powerpc/pmu/ebb/lost_exception_test.c
+index ac3e6e182614a..ba2681a12cc7b 100644
+--- a/tools/testing/selftests/powerpc/pmu/ebb/lost_exception_test.c
++++ b/tools/testing/selftests/powerpc/pmu/ebb/lost_exception_test.c
+@@ -75,7 +75,6 @@ static int test_body(void)
+ ebb_freeze_pmcs();
+ ebb_global_disable();
+
+- count_pmc(4, sample_period);
+ mtspr(SPRN_PMC4, 0xdead);
+
+ dump_summary_ebb_state();
+diff --git a/tools/testing/selftests/powerpc/pmu/ebb/multi_counter_test.c b/tools/testing/selftests/powerpc/pmu/ebb/multi_counter_test.c
+index b8242e9d97d2d..791d37ba327b5 100644
+--- a/tools/testing/selftests/powerpc/pmu/ebb/multi_counter_test.c
++++ b/tools/testing/selftests/powerpc/pmu/ebb/multi_counter_test.c
+@@ -70,13 +70,6 @@ int multi_counter(void)
+ ebb_global_disable();
+ ebb_freeze_pmcs();
+
+- count_pmc(1, sample_period);
+- count_pmc(2, sample_period);
+- count_pmc(3, sample_period);
+- count_pmc(4, sample_period);
+- count_pmc(5, sample_period);
+- count_pmc(6, sample_period);
+-
+ dump_ebb_state();
+
+ for (i = 0; i < 6; i++)
+diff --git a/tools/testing/selftests/powerpc/pmu/ebb/multi_ebb_procs_test.c b/tools/testing/selftests/powerpc/pmu/ebb/multi_ebb_procs_test.c
+index a05c0e18ded63..9b0f70d597020 100644
+--- a/tools/testing/selftests/powerpc/pmu/ebb/multi_ebb_procs_test.c
++++ b/tools/testing/selftests/powerpc/pmu/ebb/multi_ebb_procs_test.c
+@@ -61,8 +61,6 @@ static int cycles_child(void)
+ ebb_global_disable();
+ ebb_freeze_pmcs();
+
+- count_pmc(1, sample_period);
+-
+ dump_summary_ebb_state();
+
+ event_close(&event);
+diff --git a/tools/testing/selftests/powerpc/pmu/ebb/pmae_handling_test.c b/tools/testing/selftests/powerpc/pmu/ebb/pmae_handling_test.c
+index 153ebc92234fd..2904c741e04e5 100644
+--- a/tools/testing/selftests/powerpc/pmu/ebb/pmae_handling_test.c
++++ b/tools/testing/selftests/powerpc/pmu/ebb/pmae_handling_test.c
+@@ -82,8 +82,6 @@ static int test_body(void)
+ ebb_global_disable();
+ ebb_freeze_pmcs();
+
+- count_pmc(1, sample_period);
+-
+ dump_ebb_state();
+
+ if (mmcr0_mismatch)
+diff --git a/tools/testing/selftests/powerpc/pmu/ebb/pmc56_overflow_test.c b/tools/testing/selftests/powerpc/pmu/ebb/pmc56_overflow_test.c
+index eadad75ed7e6f..b29f8ba22d1e6 100644
+--- a/tools/testing/selftests/powerpc/pmu/ebb/pmc56_overflow_test.c
++++ b/tools/testing/selftests/powerpc/pmu/ebb/pmc56_overflow_test.c
+@@ -76,8 +76,6 @@ int pmc56_overflow(void)
+ ebb_global_disable();
+ ebb_freeze_pmcs();
+
+- count_pmc(2, sample_period);
+-
+ dump_ebb_state();
+
+ printf("PMC5/6 overflow %d\n", pmc56_overflowed);
* [gentoo-commits] proj/linux-patches:5.8 commit in: /
@ 2020-09-05 10:48 Mike Pagano
From: Mike Pagano @ 2020-09-05 10:48 UTC (permalink / raw
To: gentoo-commits
commit: 1dd46bf348e30b3b618337b4d4cdc91feebf1b9a
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sat Sep 5 10:48:26 2020 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sat Sep 5 10:48:26 2020 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=1dd46bf3
Linux patch 5.8.7
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1006_linux-5.8.7.patch | 939 +++++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 943 insertions(+)
diff --git a/0000_README b/0000_README
index ba2f389..62e43d7 100644
--- a/0000_README
+++ b/0000_README
@@ -67,6 +67,10 @@ Patch: 1005_linux-5.8.6.patch
From: http://www.kernel.org
Desc: Linux 5.8.6
+Patch: 1006_linux-5.8.7.patch
+From: http://www.kernel.org
+Desc: Linux 5.8.7
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1006_linux-5.8.7.patch b/1006_linux-5.8.7.patch
new file mode 100644
index 0000000..d773b7c
--- /dev/null
+++ b/1006_linux-5.8.7.patch
@@ -0,0 +1,939 @@
+diff --git a/Documentation/devicetree/bindings/mmc/nvidia,tegra20-sdhci.txt b/Documentation/devicetree/bindings/mmc/nvidia,tegra20-sdhci.txt
+index 2cf3affa1be70..96c0b1440c9c5 100644
+--- a/Documentation/devicetree/bindings/mmc/nvidia,tegra20-sdhci.txt
++++ b/Documentation/devicetree/bindings/mmc/nvidia,tegra20-sdhci.txt
+@@ -15,8 +15,15 @@ Required properties:
+ - "nvidia,tegra210-sdhci": for Tegra210
+ - "nvidia,tegra186-sdhci": for Tegra186
+ - "nvidia,tegra194-sdhci": for Tegra194
+-- clocks : Must contain one entry, for the module clock.
+- See ../clocks/clock-bindings.txt for details.
++- clocks: For Tegra210, Tegra186 and Tegra194 must contain two entries.
++ One for the module clock and one for the timeout clock.
++ For all other Tegra devices, must contain a single entry for
++ the module clock. See ../clocks/clock-bindings.txt for details.
++- clock-names: For Tegra210, Tegra186 and Tegra194 must contain the
++ strings 'sdhci' and 'tmclk' to represent the module and
++ the timeout clocks, respectively.
++ For all other Tegra devices must contain the string 'sdhci'
++ to represent the module clock.
+ - resets : Must contain an entry for each entry in reset-names.
+ See ../reset/reset.txt for details.
+ - reset-names : Must include the following entries:
+@@ -99,7 +106,7 @@ Optional properties for Tegra210, Tegra186 and Tegra194:
+
+ Example:
+ sdhci@700b0000 {
+- compatible = "nvidia,tegra210-sdhci", "nvidia,tegra124-sdhci";
++ compatible = "nvidia,tegra124-sdhci";
+ reg = <0x0 0x700b0000 0x0 0x200>;
+ interrupts = <GIC_SPI 14 IRQ_TYPE_LEVEL_HIGH>;
+ clocks = <&tegra_car TEGRA210_CLK_SDMMC1>;
+@@ -115,3 +122,22 @@ sdhci@700b0000 {
+ nvidia,pad-autocal-pull-down-offset-1v8 = <0x7b>;
+ status = "disabled";
+ };
++
++sdhci@700b0000 {
++ compatible = "nvidia,tegra210-sdhci";
++ reg = <0x0 0x700b0000 0x0 0x200>;
++ interrupts = <GIC_SPI 14 IRQ_TYPE_LEVEL_HIGH>;
++ clocks = <&tegra_car TEGRA210_CLK_SDMMC1>,
++ <&tegra_car TEGRA210_CLK_SDMMC_LEGACY>;
++ clock-names = "sdhci", "tmclk";
++ resets = <&tegra_car 14>;
++ reset-names = "sdhci";
++ pinctrl-names = "sdmmc-3v3", "sdmmc-1v8";
++ pinctrl-0 = <&sdmmc1_3v3>;
++ pinctrl-1 = <&sdmmc1_1v8>;
++ nvidia,pad-autocal-pull-up-offset-3v3 = <0x00>;
++ nvidia,pad-autocal-pull-down-offset-3v3 = <0x7d>;
++ nvidia,pad-autocal-pull-up-offset-1v8 = <0x7b>;
++ nvidia,pad-autocal-pull-down-offset-1v8 = <0x7b>;
++ status = "disabled";
++};
+diff --git a/Makefile b/Makefile
+index 5cf35650373b1..5081bd85af29f 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 8
+-SUBLEVEL = 6
++SUBLEVEL = 7
+ EXTRAVERSION =
+ NAME = Kleptomaniac Octopus
+
+diff --git a/arch/arm64/boot/dts/nvidia/tegra186.dtsi b/arch/arm64/boot/dts/nvidia/tegra186.dtsi
+index 58100fb9cd8b5..93236febd327f 100644
+--- a/arch/arm64/boot/dts/nvidia/tegra186.dtsi
++++ b/arch/arm64/boot/dts/nvidia/tegra186.dtsi
+@@ -331,8 +331,9 @@
+ compatible = "nvidia,tegra186-sdhci";
+ reg = <0x0 0x03400000 0x0 0x10000>;
+ interrupts = <GIC_SPI 62 IRQ_TYPE_LEVEL_HIGH>;
+- clocks = <&bpmp TEGRA186_CLK_SDMMC1>;
+- clock-names = "sdhci";
++ clocks = <&bpmp TEGRA186_CLK_SDMMC1>,
++ <&bpmp TEGRA186_CLK_SDMMC_LEGACY_TM>;
++ clock-names = "sdhci", "tmclk";
+ resets = <&bpmp TEGRA186_RESET_SDMMC1>;
+ reset-names = "sdhci";
+ iommus = <&smmu TEGRA186_SID_SDMMC1>;
+@@ -357,8 +358,9 @@
+ compatible = "nvidia,tegra186-sdhci";
+ reg = <0x0 0x03420000 0x0 0x10000>;
+ interrupts = <GIC_SPI 63 IRQ_TYPE_LEVEL_HIGH>;
+- clocks = <&bpmp TEGRA186_CLK_SDMMC2>;
+- clock-names = "sdhci";
++ clocks = <&bpmp TEGRA186_CLK_SDMMC2>,
++ <&bpmp TEGRA186_CLK_SDMMC_LEGACY_TM>;
++ clock-names = "sdhci", "tmclk";
+ resets = <&bpmp TEGRA186_RESET_SDMMC2>;
+ reset-names = "sdhci";
+ iommus = <&smmu TEGRA186_SID_SDMMC2>;
+@@ -378,8 +380,9 @@
+ compatible = "nvidia,tegra186-sdhci";
+ reg = <0x0 0x03440000 0x0 0x10000>;
+ interrupts = <GIC_SPI 64 IRQ_TYPE_LEVEL_HIGH>;
+- clocks = <&bpmp TEGRA186_CLK_SDMMC3>;
+- clock-names = "sdhci";
++ clocks = <&bpmp TEGRA186_CLK_SDMMC3>,
++ <&bpmp TEGRA186_CLK_SDMMC_LEGACY_TM>;
++ clock-names = "sdhci", "tmclk";
+ resets = <&bpmp TEGRA186_RESET_SDMMC3>;
+ reset-names = "sdhci";
+ iommus = <&smmu TEGRA186_SID_SDMMC3>;
+@@ -401,8 +404,9 @@
+ compatible = "nvidia,tegra186-sdhci";
+ reg = <0x0 0x03460000 0x0 0x10000>;
+ interrupts = <GIC_SPI 65 IRQ_TYPE_LEVEL_HIGH>;
+- clocks = <&bpmp TEGRA186_CLK_SDMMC4>;
+- clock-names = "sdhci";
++ clocks = <&bpmp TEGRA186_CLK_SDMMC4>,
++ <&bpmp TEGRA186_CLK_SDMMC_LEGACY_TM>;
++ clock-names = "sdhci", "tmclk";
+ assigned-clocks = <&bpmp TEGRA186_CLK_SDMMC4>,
+ <&bpmp TEGRA186_CLK_PLLC4_VCO>;
+ assigned-clock-parents = <&bpmp TEGRA186_CLK_PLLC4_VCO>;
+diff --git a/arch/arm64/boot/dts/nvidia/tegra194.dtsi b/arch/arm64/boot/dts/nvidia/tegra194.dtsi
+index 4bc187a4eacdb..980a8500b4b27 100644
+--- a/arch/arm64/boot/dts/nvidia/tegra194.dtsi
++++ b/arch/arm64/boot/dts/nvidia/tegra194.dtsi
+@@ -453,8 +453,9 @@
+ compatible = "nvidia,tegra194-sdhci", "nvidia,tegra186-sdhci";
+ reg = <0x03400000 0x10000>;
+ interrupts = <GIC_SPI 62 IRQ_TYPE_LEVEL_HIGH>;
+- clocks = <&bpmp TEGRA194_CLK_SDMMC1>;
+- clock-names = "sdhci";
++ clocks = <&bpmp TEGRA194_CLK_SDMMC1>,
++ <&bpmp TEGRA194_CLK_SDMMC_LEGACY_TM>;
++ clock-names = "sdhci", "tmclk";
+ resets = <&bpmp TEGRA194_RESET_SDMMC1>;
+ reset-names = "sdhci";
+ nvidia,pad-autocal-pull-up-offset-3v3-timeout =
+@@ -475,8 +476,9 @@
+ compatible = "nvidia,tegra194-sdhci", "nvidia,tegra186-sdhci";
+ reg = <0x03440000 0x10000>;
+ interrupts = <GIC_SPI 64 IRQ_TYPE_LEVEL_HIGH>;
+- clocks = <&bpmp TEGRA194_CLK_SDMMC3>;
+- clock-names = "sdhci";
++ clocks = <&bpmp TEGRA194_CLK_SDMMC3>,
++ <&bpmp TEGRA194_CLK_SDMMC_LEGACY_TM>;
++ clock-names = "sdhci", "tmclk";
+ resets = <&bpmp TEGRA194_RESET_SDMMC3>;
+ reset-names = "sdhci";
+ nvidia,pad-autocal-pull-up-offset-1v8 = <0x00>;
+@@ -498,8 +500,9 @@
+ compatible = "nvidia,tegra194-sdhci", "nvidia,tegra186-sdhci";
+ reg = <0x03460000 0x10000>;
+ interrupts = <GIC_SPI 65 IRQ_TYPE_LEVEL_HIGH>;
+- clocks = <&bpmp TEGRA194_CLK_SDMMC4>;
+- clock-names = "sdhci";
++ clocks = <&bpmp TEGRA194_CLK_SDMMC4>,
++ <&bpmp TEGRA194_CLK_SDMMC_LEGACY_TM>;
++ clock-names = "sdhci", "tmclk";
+ assigned-clocks = <&bpmp TEGRA194_CLK_SDMMC4>,
+ <&bpmp TEGRA194_CLK_PLLC4>;
+ assigned-clock-parents =
+diff --git a/arch/arm64/boot/dts/nvidia/tegra210.dtsi b/arch/arm64/boot/dts/nvidia/tegra210.dtsi
+index 08655081f72d1..04f3a2d4990de 100644
+--- a/arch/arm64/boot/dts/nvidia/tegra210.dtsi
++++ b/arch/arm64/boot/dts/nvidia/tegra210.dtsi
+@@ -1180,8 +1180,9 @@
+ compatible = "nvidia,tegra210-sdhci", "nvidia,tegra124-sdhci";
+ reg = <0x0 0x700b0000 0x0 0x200>;
+ interrupts = <GIC_SPI 14 IRQ_TYPE_LEVEL_HIGH>;
+- clocks = <&tegra_car TEGRA210_CLK_SDMMC1>;
+- clock-names = "sdhci";
++ clocks = <&tegra_car TEGRA210_CLK_SDMMC1>,
++ <&tegra_car TEGRA210_CLK_SDMMC_LEGACY>;
++ clock-names = "sdhci", "tmclk";
+ resets = <&tegra_car 14>;
+ reset-names = "sdhci";
+ pinctrl-names = "sdmmc-3v3", "sdmmc-1v8",
+@@ -1208,8 +1209,9 @@
+ compatible = "nvidia,tegra210-sdhci", "nvidia,tegra124-sdhci";
+ reg = <0x0 0x700b0200 0x0 0x200>;
+ interrupts = <GIC_SPI 15 IRQ_TYPE_LEVEL_HIGH>;
+- clocks = <&tegra_car TEGRA210_CLK_SDMMC2>;
+- clock-names = "sdhci";
++ clocks = <&tegra_car TEGRA210_CLK_SDMMC2>,
++ <&tegra_car TEGRA210_CLK_SDMMC_LEGACY>;
++ clock-names = "sdhci", "tmclk";
+ resets = <&tegra_car 9>;
+ reset-names = "sdhci";
+ pinctrl-names = "sdmmc-1v8-drv";
+@@ -1225,8 +1227,9 @@
+ compatible = "nvidia,tegra210-sdhci", "nvidia,tegra124-sdhci";
+ reg = <0x0 0x700b0400 0x0 0x200>;
+ interrupts = <GIC_SPI 19 IRQ_TYPE_LEVEL_HIGH>;
+- clocks = <&tegra_car TEGRA210_CLK_SDMMC3>;
+- clock-names = "sdhci";
++ clocks = <&tegra_car TEGRA210_CLK_SDMMC3>,
++ <&tegra_car TEGRA210_CLK_SDMMC_LEGACY>;
++ clock-names = "sdhci", "tmclk";
+ resets = <&tegra_car 69>;
+ reset-names = "sdhci";
+ pinctrl-names = "sdmmc-3v3", "sdmmc-1v8",
+@@ -1248,8 +1251,9 @@
+ compatible = "nvidia,tegra210-sdhci", "nvidia,tegra124-sdhci";
+ reg = <0x0 0x700b0600 0x0 0x200>;
+ interrupts = <GIC_SPI 31 IRQ_TYPE_LEVEL_HIGH>;
+- clocks = <&tegra_car TEGRA210_CLK_SDMMC4>;
+- clock-names = "sdhci";
++ clocks = <&tegra_car TEGRA210_CLK_SDMMC4>,
++ <&tegra_car TEGRA210_CLK_SDMMC_LEGACY>;
++ clock-names = "sdhci", "tmclk";
+ resets = <&tegra_car 15>;
+ reset-names = "sdhci";
+ pinctrl-names = "sdmmc-3v3-drv", "sdmmc-1v8-drv";
+diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
+index 352aaebf41980..2eff49d81be2d 100644
+--- a/arch/arm64/include/asm/kvm_asm.h
++++ b/arch/arm64/include/asm/kvm_asm.h
+@@ -121,6 +121,34 @@ extern char __smccc_workaround_1_smc[__SMCCC_WORKAROUND_1_SMC_SZ];
+ *__hyp_this_cpu_ptr(sym); \
+ })
+
++#define __KVM_EXTABLE(from, to) \
++ " .pushsection __kvm_ex_table, \"a\"\n" \
++ " .align 3\n" \
++ " .long (" #from " - .), (" #to " - .)\n" \
++ " .popsection\n"
++
++
++#define __kvm_at(at_op, addr) \
++( { \
++ int __kvm_at_err = 0; \
++ u64 spsr, elr; \
++ asm volatile( \
++ " mrs %1, spsr_el2\n" \
++ " mrs %2, elr_el2\n" \
++ "1: at "at_op", %3\n" \
++ " isb\n" \
++ " b 9f\n" \
++ "2: msr spsr_el2, %1\n" \
++ " msr elr_el2, %2\n" \
++ " mov %w0, %4\n" \
++ "9:\n" \
++ __KVM_EXTABLE(1b, 2b) \
++ : "+r" (__kvm_at_err), "=&r" (spsr), "=&r" (elr) \
++ : "r" (addr), "i" (-EFAULT)); \
++ __kvm_at_err; \
++} )
++
++
+ #else /* __ASSEMBLY__ */
+
+ .macro hyp_adr_this_cpu reg, sym, tmp
+@@ -146,6 +174,21 @@ extern char __smccc_workaround_1_smc[__SMCCC_WORKAROUND_1_SMC_SZ];
+ kern_hyp_va \vcpu
+ .endm
+
++/*
++ * KVM extable for unexpected exceptions.
++ * In the same format _asm_extable, but output to a different section so that
++ * it can be mapped to EL2. The KVM version is not sorted. The caller must
++ * ensure:
++ * x18 has the hypervisor value to allow any Shadow-Call-Stack instrumented
++ * code to write to it, and that SPSR_EL2 and ELR_EL2 are restored by the fixup.
++ */
++.macro _kvm_extable, from, to
++ .pushsection __kvm_ex_table, "a"
++ .align 3
++ .long (\from - .), (\to - .)
++ .popsection
++.endm
++
+ #endif
+
+ #endif /* __ARM_KVM_ASM_H__ */
+diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
+index 5423ffe0a9876..1417a9042d135 100644
+--- a/arch/arm64/kernel/vmlinux.lds.S
++++ b/arch/arm64/kernel/vmlinux.lds.S
+@@ -21,6 +21,13 @@ ENTRY(_text)
+
+ jiffies = jiffies_64;
+
++
++#define HYPERVISOR_EXTABLE \
++ . = ALIGN(SZ_8); \
++ __start___kvm_ex_table = .; \
++ *(__kvm_ex_table) \
++ __stop___kvm_ex_table = .;
++
+ #define HYPERVISOR_TEXT \
+ /* \
+ * Align to 4 KB so that \
+@@ -36,6 +43,7 @@ jiffies = jiffies_64;
+ __hyp_idmap_text_end = .; \
+ __hyp_text_start = .; \
+ *(.hyp.text) \
++ HYPERVISOR_EXTABLE \
+ __hyp_text_end = .;
+
+ #define IDMAP_TEXT \
+diff --git a/arch/arm64/kvm/hyp/entry.S b/arch/arm64/kvm/hyp/entry.S
+index 90186cf6473e0..c2e6da3564082 100644
+--- a/arch/arm64/kvm/hyp/entry.S
++++ b/arch/arm64/kvm/hyp/entry.S
+@@ -198,20 +198,23 @@ alternative_endif
+ // This is our single instruction exception window. A pending
+ // SError is guaranteed to occur at the earliest when we unmask
+ // it, and at the latest just after the ISB.
+- .global abort_guest_exit_start
+ abort_guest_exit_start:
+
+ isb
+
+- .global abort_guest_exit_end
+ abort_guest_exit_end:
+
+ msr daifset, #4 // Mask aborts
++ ret
++
++ _kvm_extable abort_guest_exit_start, 9997f
++ _kvm_extable abort_guest_exit_end, 9997f
++9997:
++ msr daifset, #4 // Mask aborts
++ mov x0, #(1 << ARM_EXIT_WITH_SERROR_BIT)
+
+- // If the exception took place, restore the EL1 exception
+- // context so that we can report some information.
+- // Merge the exception code with the SError pending bit.
+- tbz x0, #ARM_EXIT_WITH_SERROR_BIT, 1f
++ // restore the EL1 exception context so that we can report some
++ // information. Merge the exception code with the SError pending bit.
+ msr elr_el2, x2
+ msr esr_el2, x3
+ msr spsr_el2, x4
+diff --git a/arch/arm64/kvm/hyp/hyp-entry.S b/arch/arm64/kvm/hyp/hyp-entry.S
+index 9c5cfb04170ee..741f7cbaeb799 100644
+--- a/arch/arm64/kvm/hyp/hyp-entry.S
++++ b/arch/arm64/kvm/hyp/hyp-entry.S
+@@ -15,6 +15,30 @@
+ #include <asm/kvm_mmu.h>
+ #include <asm/mmu.h>
+
++.macro save_caller_saved_regs_vect
++ /* x0 and x1 were saved in the vector entry */
++ stp x2, x3, [sp, #-16]!
++ stp x4, x5, [sp, #-16]!
++ stp x6, x7, [sp, #-16]!
++ stp x8, x9, [sp, #-16]!
++ stp x10, x11, [sp, #-16]!
++ stp x12, x13, [sp, #-16]!
++ stp x14, x15, [sp, #-16]!
++ stp x16, x17, [sp, #-16]!
++.endm
++
++.macro restore_caller_saved_regs_vect
++ ldp x16, x17, [sp], #16
++ ldp x14, x15, [sp], #16
++ ldp x12, x13, [sp], #16
++ ldp x10, x11, [sp], #16
++ ldp x8, x9, [sp], #16
++ ldp x6, x7, [sp], #16
++ ldp x4, x5, [sp], #16
++ ldp x2, x3, [sp], #16
++ ldp x0, x1, [sp], #16
++.endm
++
+ .text
+ .pushsection .hyp.text, "ax"
+
+@@ -142,13 +166,19 @@ el1_error:
+ b __guest_exit
+
+ el2_sync:
+- /* Check for illegal exception return, otherwise panic */
++ /* Check for illegal exception return */
+ mrs x0, spsr_el2
++ tbnz x0, #20, 1f
+
+- /* if this was something else, then panic! */
+- tst x0, #PSR_IL_BIT
+- b.eq __hyp_panic
++ save_caller_saved_regs_vect
++ stp x29, x30, [sp, #-16]!
++ bl kvm_unexpected_el2_exception
++ ldp x29, x30, [sp], #16
++ restore_caller_saved_regs_vect
+
++ eret
++
++1:
+ /* Let's attempt a recovery from the illegal exception return */
+ get_vcpu_ptr x1, x0
+ mov x0, #ARM_EXCEPTION_IL
+@@ -156,27 +186,14 @@ el2_sync:
+
+
+ el2_error:
+- ldp x0, x1, [sp], #16
++ save_caller_saved_regs_vect
++ stp x29, x30, [sp, #-16]!
++
++ bl kvm_unexpected_el2_exception
++
++ ldp x29, x30, [sp], #16
++ restore_caller_saved_regs_vect
+
+- /*
+- * Only two possibilities:
+- * 1) Either we come from the exit path, having just unmasked
+- * PSTATE.A: change the return code to an EL2 fault, and
+- * carry on, as we're already in a sane state to handle it.
+- * 2) Or we come from anywhere else, and that's a bug: we panic.
+- *
+- * For (1), x0 contains the original return code and x1 doesn't
+- * contain anything meaningful at that stage. We can reuse them
+- * as temp registers.
+- * For (2), who cares?
+- */
+- mrs x0, elr_el2
+- adr x1, abort_guest_exit_start
+- cmp x0, x1
+- adr x1, abort_guest_exit_end
+- ccmp x0, x1, #4, ne
+- b.ne __hyp_panic
+- mov x0, #(1 << ARM_EXIT_WITH_SERROR_BIT)
+ eret
+ sb
+
+diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
+index 9270b14157b55..ba225e09aaf15 100644
+--- a/arch/arm64/kvm/hyp/switch.c
++++ b/arch/arm64/kvm/hyp/switch.c
+@@ -14,6 +14,7 @@
+
+ #include <asm/barrier.h>
+ #include <asm/cpufeature.h>
++#include <asm/extable.h>
+ #include <asm/kprobes.h>
+ #include <asm/kvm_asm.h>
+ #include <asm/kvm_emulate.h>
+@@ -24,6 +25,9 @@
+ #include <asm/processor.h>
+ #include <asm/thread_info.h>
+
++extern struct exception_table_entry __start___kvm_ex_table;
++extern struct exception_table_entry __stop___kvm_ex_table;
++
+ /* Check whether the FP regs were dirtied while in the host-side run loop: */
+ static bool __hyp_text update_fp_enabled(struct kvm_vcpu *vcpu)
+ {
+@@ -299,10 +303,10 @@ static bool __hyp_text __translate_far_to_hpfar(u64 far, u64 *hpfar)
+ * saved the guest context yet, and we may return early...
+ */
+ par = read_sysreg(par_el1);
+- asm volatile("at s1e1r, %0" : : "r" (far));
+- isb();
+-
+- tmp = read_sysreg(par_el1);
++ if (!__kvm_at("s1e1r", far))
++ tmp = read_sysreg(par_el1);
++ else
++ tmp = SYS_PAR_EL1_F; /* back to the guest */
+ write_sysreg(par, par_el1);
+
+ if (unlikely(tmp & SYS_PAR_EL1_F))
+@@ -934,3 +938,30 @@ void __hyp_text __noreturn hyp_panic(struct kvm_cpu_context *host_ctxt)
+
+ unreachable();
+ }
++
++asmlinkage void __hyp_text kvm_unexpected_el2_exception(void)
++{
++ unsigned long addr, fixup;
++ struct kvm_cpu_context *host_ctxt;
++ struct exception_table_entry *entry, *end;
++ unsigned long elr_el2 = read_sysreg(elr_el2);
++
++ entry = hyp_symbol_addr(__start___kvm_ex_table);
++ end = hyp_symbol_addr(__stop___kvm_ex_table);
++ host_ctxt = &__hyp_this_cpu_ptr(kvm_host_data)->host_ctxt;
++
++ while (entry < end) {
++ addr = (unsigned long)&entry->insn + entry->insn;
++ fixup = (unsigned long)&entry->fixup + entry->fixup;
++
++ if (addr != elr_el2) {
++ entry++;
++ continue;
++ }
++
++ write_sysreg(fixup, elr_el2);
++ return;
++ }
++
++ hyp_panic(host_ctxt);
++}
+diff --git a/drivers/hid/hid-core.c b/drivers/hid/hid-core.c
+index 359616e3efbbb..d2ecc9c452554 100644
+--- a/drivers/hid/hid-core.c
++++ b/drivers/hid/hid-core.c
+@@ -1597,6 +1597,17 @@ static void hid_output_field(const struct hid_device *hid,
+ }
+ }
+
++/*
++ * Compute the size of a report.
++ */
++static size_t hid_compute_report_size(struct hid_report *report)
++{
++ if (report->size)
++ return ((report->size - 1) >> 3) + 1;
++
++ return 0;
++}
++
+ /*
+ * Create a report. 'data' has to be allocated using
+ * hid_alloc_report_buf() so that it has proper size.
+@@ -1609,7 +1620,7 @@ void hid_output_report(struct hid_report *report, __u8 *data)
+ if (report->id > 0)
+ *data++ = report->id;
+
+- memset(data, 0, ((report->size - 1) >> 3) + 1);
++ memset(data, 0, hid_compute_report_size(report));
+ for (n = 0; n < report->maxfield; n++)
+ hid_output_field(report->device, report->field[n], data);
+ }
+@@ -1739,7 +1750,7 @@ int hid_report_raw_event(struct hid_device *hid, int type, u8 *data, u32 size,
+ csize--;
+ }
+
+- rsize = ((report->size - 1) >> 3) + 1;
++ rsize = hid_compute_report_size(report);
+
+ if (report_enum->numbered && rsize >= HID_MAX_BUFFER_SIZE)
+ rsize = HID_MAX_BUFFER_SIZE - 1;
+diff --git a/drivers/hid/hid-input.c b/drivers/hid/hid-input.c
+index e8641ce677e47..e3d475f4baf66 100644
+--- a/drivers/hid/hid-input.c
++++ b/drivers/hid/hid-input.c
+@@ -1132,6 +1132,10 @@ static void hidinput_configure_usage(struct hid_input *hidinput, struct hid_fiel
+ }
+
+ mapped:
++ /* Mapping failed, bail out */
++ if (!bit)
++ return;
++
+ if (device->driver->input_mapped &&
+ device->driver->input_mapped(device, hidinput, field, usage,
+ &bit, &max) < 0) {
+diff --git a/drivers/hid/hid-multitouch.c b/drivers/hid/hid-multitouch.c
+index 3f94b4954225b..e3152155c4b85 100644
+--- a/drivers/hid/hid-multitouch.c
++++ b/drivers/hid/hid-multitouch.c
+@@ -856,6 +856,8 @@ static int mt_touch_input_mapping(struct hid_device *hdev, struct hid_input *hi,
+ code = BTN_0 + ((usage->hid - 1) & HID_USAGE);
+
+ hid_map_usage(hi, usage, bit, max, EV_KEY, code);
++ if (!*bit)
++ return -1;
+ input_set_capability(hi->input, EV_KEY, code);
+ return 1;
+
+diff --git a/drivers/media/v4l2-core/v4l2-ioctl.c b/drivers/media/v4l2-core/v4l2-ioctl.c
+index 2322f08a98be6..5e057f798a15b 100644
+--- a/drivers/media/v4l2-core/v4l2-ioctl.c
++++ b/drivers/media/v4l2-core/v4l2-ioctl.c
+@@ -3186,14 +3186,16 @@ static int video_put_user(void __user *arg, void *parg, unsigned int cmd)
+ #ifdef CONFIG_COMPAT_32BIT_TIME
+ case VIDIOC_DQEVENT_TIME32: {
+ struct v4l2_event *ev = parg;
+- struct v4l2_event_time32 ev32 = {
+- .type = ev->type,
+- .pending = ev->pending,
+- .sequence = ev->sequence,
+- .timestamp.tv_sec = ev->timestamp.tv_sec,
+- .timestamp.tv_nsec = ev->timestamp.tv_nsec,
+- .id = ev->id,
+- };
++ struct v4l2_event_time32 ev32;
++
++ memset(&ev32, 0, sizeof(ev32));
++
++ ev32.type = ev->type;
++ ev32.pending = ev->pending;
++ ev32.sequence = ev->sequence;
++ ev32.timestamp.tv_sec = ev->timestamp.tv_sec;
++ ev32.timestamp.tv_nsec = ev->timestamp.tv_nsec;
++ ev32.id = ev->id;
+
+ memcpy(&ev32.u, &ev->u, sizeof(ev->u));
+ memcpy(&ev32.reserved, &ev->reserved, sizeof(ev->reserved));
+@@ -3207,21 +3209,23 @@ static int video_put_user(void __user *arg, void *parg, unsigned int cmd)
+ case VIDIOC_DQBUF_TIME32:
+ case VIDIOC_PREPARE_BUF_TIME32: {
+ struct v4l2_buffer *vb = parg;
+- struct v4l2_buffer_time32 vb32 = {
+- .index = vb->index,
+- .type = vb->type,
+- .bytesused = vb->bytesused,
+- .flags = vb->flags,
+- .field = vb->field,
+- .timestamp.tv_sec = vb->timestamp.tv_sec,
+- .timestamp.tv_usec = vb->timestamp.tv_usec,
+- .timecode = vb->timecode,
+- .sequence = vb->sequence,
+- .memory = vb->memory,
+- .m.userptr = vb->m.userptr,
+- .length = vb->length,
+- .request_fd = vb->request_fd,
+- };
++ struct v4l2_buffer_time32 vb32;
++
++ memset(&vb32, 0, sizeof(vb32));
++
++ vb32.index = vb->index;
++ vb32.type = vb->type;
++ vb32.bytesused = vb->bytesused;
++ vb32.flags = vb->flags;
++ vb32.field = vb->field;
++ vb32.timestamp.tv_sec = vb->timestamp.tv_sec;
++ vb32.timestamp.tv_usec = vb->timestamp.tv_usec;
++ vb32.timecode = vb->timecode;
++ vb32.sequence = vb->sequence;
++ vb32.memory = vb->memory;
++ vb32.m.userptr = vb->m.userptr;
++ vb32.length = vb->length;
++ vb32.request_fd = vb->request_fd;
+
+ if (copy_to_user(arg, &vb32, sizeof(vb32)))
+ return -EFAULT;
+diff --git a/drivers/mmc/host/sdhci-tegra.c b/drivers/mmc/host/sdhci-tegra.c
+index 3a372ab3d12e8..db1a8d1c96b36 100644
+--- a/drivers/mmc/host/sdhci-tegra.c
++++ b/drivers/mmc/host/sdhci-tegra.c
+@@ -1409,7 +1409,6 @@ static const struct sdhci_ops tegra210_sdhci_ops = {
+
+ static const struct sdhci_pltfm_data sdhci_tegra210_pdata = {
+ .quirks = SDHCI_QUIRK_BROKEN_TIMEOUT_VAL |
+- SDHCI_QUIRK_DATA_TIMEOUT_USES_SDCLK |
+ SDHCI_QUIRK_SINGLE_POWER_WRITE |
+ SDHCI_QUIRK_NO_HISPD_BIT |
+ SDHCI_QUIRK_BROKEN_ADMA_ZEROLEN_DESC |
+@@ -1447,7 +1446,6 @@ static const struct sdhci_ops tegra186_sdhci_ops = {
+
+ static const struct sdhci_pltfm_data sdhci_tegra186_pdata = {
+ .quirks = SDHCI_QUIRK_BROKEN_TIMEOUT_VAL |
+- SDHCI_QUIRK_DATA_TIMEOUT_USES_SDCLK |
+ SDHCI_QUIRK_SINGLE_POWER_WRITE |
+ SDHCI_QUIRK_NO_HISPD_BIT |
+ SDHCI_QUIRK_BROKEN_ADMA_ZEROLEN_DESC |
+diff --git a/drivers/target/target_core_user.c b/drivers/target/target_core_user.c
+index 9ab960cc39b6f..0209bc23e631e 100644
+--- a/drivers/target/target_core_user.c
++++ b/drivers/target/target_core_user.c
+@@ -676,8 +676,10 @@ static void scatter_data_area(struct tcmu_dev *udev,
+ from = kmap_atomic(sg_page(sg)) + sg->offset;
+ while (sg_remaining > 0) {
+ if (block_remaining == 0) {
+- if (to)
++ if (to) {
++ flush_dcache_page(page);
+ kunmap_atomic(to);
++ }
+
+ block_remaining = DATA_BLOCK_SIZE;
+ dbi = tcmu_cmd_get_dbi(tcmu_cmd);
+@@ -722,7 +724,6 @@ static void scatter_data_area(struct tcmu_dev *udev,
+ memcpy(to + offset,
+ from + sg->length - sg_remaining,
+ copy_bytes);
+- tcmu_flush_dcache_range(to, copy_bytes);
+ }
+
+ sg_remaining -= copy_bytes;
+@@ -731,8 +732,10 @@ static void scatter_data_area(struct tcmu_dev *udev,
+ kunmap_atomic(from - sg->offset);
+ }
+
+- if (to)
++ if (to) {
++ flush_dcache_page(page);
+ kunmap_atomic(to);
++ }
+ }
+
+ static void gather_data_area(struct tcmu_dev *udev, struct tcmu_cmd *cmd,
+@@ -778,13 +781,13 @@ static void gather_data_area(struct tcmu_dev *udev, struct tcmu_cmd *cmd,
+ dbi = tcmu_cmd_get_dbi(cmd);
+ page = tcmu_get_block_page(udev, dbi);
+ from = kmap_atomic(page);
++ flush_dcache_page(page);
+ }
+ copy_bytes = min_t(size_t, sg_remaining,
+ block_remaining);
+ if (read_len < copy_bytes)
+ copy_bytes = read_len;
+ offset = DATA_BLOCK_SIZE - block_remaining;
+- tcmu_flush_dcache_range(from, copy_bytes);
+ memcpy(to + sg->length - sg_remaining, from + offset,
+ copy_bytes);
+
+diff --git a/include/linux/hid.h b/include/linux/hid.h
+index 875f71132b142..c7044a14200ea 100644
+--- a/include/linux/hid.h
++++ b/include/linux/hid.h
+@@ -959,34 +959,49 @@ static inline void hid_device_io_stop(struct hid_device *hid) {
+ * @max: maximal valid usage->code to consider later (out parameter)
+ * @type: input event type (EV_KEY, EV_REL, ...)
+ * @c: code which corresponds to this usage and type
++ *
++ * The value pointed to by @bit will be set to NULL if either @type is
++ * an unhandled event type, or if @c is out of range for @type. This
++ * can be used as an error condition.
+ */
+ static inline void hid_map_usage(struct hid_input *hidinput,
+ struct hid_usage *usage, unsigned long **bit, int *max,
+- __u8 type, __u16 c)
++ __u8 type, unsigned int c)
+ {
+ struct input_dev *input = hidinput->input;
+-
+- usage->type = type;
+- usage->code = c;
++ unsigned long *bmap = NULL;
++ unsigned int limit = 0;
+
+ switch (type) {
+ case EV_ABS:
+- *bit = input->absbit;
+- *max = ABS_MAX;
++ bmap = input->absbit;
++ limit = ABS_MAX;
+ break;
+ case EV_REL:
+- *bit = input->relbit;
+- *max = REL_MAX;
++ bmap = input->relbit;
++ limit = REL_MAX;
+ break;
+ case EV_KEY:
+- *bit = input->keybit;
+- *max = KEY_MAX;
++ bmap = input->keybit;
++ limit = KEY_MAX;
+ break;
+ case EV_LED:
+- *bit = input->ledbit;
+- *max = LED_MAX;
++ bmap = input->ledbit;
++ limit = LED_MAX;
+ break;
+ }
++
++ if (unlikely(c > limit || !bmap)) {
++ pr_warn_ratelimited("%s: Invalid code %d type %d\n",
++ input->name, c, type);
++ *bit = NULL;
++ return;
++ }
++
++ usage->type = type;
++ usage->code = c;
++ *max = limit;
++ *bit = bmap;
+ }
+
+ /**
+@@ -1000,7 +1015,8 @@ static inline void hid_map_usage_clear(struct hid_input *hidinput,
+ __u8 type, __u16 c)
+ {
+ hid_map_usage(hidinput, usage, bit, max, type, c);
+- clear_bit(c, *bit);
++ if (*bit)
++ clear_bit(usage->code, *bit);
+ }
+
+ /**
+diff --git a/mm/gup.c b/mm/gup.c
+index 6f47697f8fb0b..0d8d76f10ac61 100644
+--- a/mm/gup.c
++++ b/mm/gup.c
+@@ -843,7 +843,7 @@ static int get_gate_page(struct mm_struct *mm, unsigned long address,
+ goto unmap;
+ *page = pte_page(*pte);
+ }
+- if (unlikely(!try_get_page(*page))) {
++ if (unlikely(!try_grab_page(*page, gup_flags))) {
+ ret = -ENOMEM;
+ goto unmap;
+ }
+diff --git a/net/netfilter/nft_set_rbtree.c b/net/netfilter/nft_set_rbtree.c
+index b6aad3fc46c35..b85ce6f0c0a6f 100644
+--- a/net/netfilter/nft_set_rbtree.c
++++ b/net/netfilter/nft_set_rbtree.c
+@@ -238,21 +238,27 @@ static int __nft_rbtree_insert(const struct net *net, const struct nft_set *set,
+ *
+ * b1. _ _ __>| !_ _ __| (insert end before existing start)
+ * b2. _ _ ___| !_ _ _>| (insert end after existing start)
+- * b3. _ _ ___! >|_ _ __| (insert start after existing end)
++ * b3. _ _ ___! >|_ _ __| (insert start after existing end, as a leaf)
++ * '--' no nodes falling in this range
++ * b4. >|_ _ ! (insert start before existing start)
+ *
+ * Case a3. resolves to b3.:
+ * - if the inserted start element is the leftmost, because the '0'
+ * element in the tree serves as end element
+- * - otherwise, if an existing end is found. Note that end elements are
+- * always inserted after corresponding start elements.
++ * - otherwise, if an existing end is found immediately to the left. If
++ * there are existing nodes in between, we need to further descend the
++ * tree before we can conclude the new start isn't causing an overlap
++ *
++ * or to b4., which, preceded by a3., means we already traversed one or
++ * more existing intervals entirely, from the right.
+ *
+ * For a new, rightmost pair of elements, we'll hit cases b3. and b2.,
+ * in that order.
+ *
+ * The flag is also cleared in two special cases:
+ *
+- * b4. |__ _ _!|<_ _ _ (insert start right before existing end)
+- * b5. |__ _ >|!__ _ _ (insert end right after existing start)
++ * b5. |__ _ _!|<_ _ _ (insert start right before existing end)
++ * b6. |__ _ >|!__ _ _ (insert end right after existing start)
+ *
+ * which always happen as last step and imply that no further
+ * overlapping is possible.
+@@ -272,7 +278,7 @@ static int __nft_rbtree_insert(const struct net *net, const struct nft_set *set,
+ if (nft_rbtree_interval_start(new)) {
+ if (nft_rbtree_interval_end(rbe) &&
+ nft_set_elem_active(&rbe->ext, genmask) &&
+- !nft_set_elem_expired(&rbe->ext))
++ !nft_set_elem_expired(&rbe->ext) && !*p)
+ overlap = false;
+ } else {
+ overlap = nft_rbtree_interval_end(rbe) &&
+@@ -288,10 +294,9 @@ static int __nft_rbtree_insert(const struct net *net, const struct nft_set *set,
+ nft_set_elem_active(&rbe->ext,
+ genmask) &&
+ !nft_set_elem_expired(&rbe->ext);
+- } else if (nft_rbtree_interval_end(rbe) &&
+- nft_set_elem_active(&rbe->ext, genmask) &&
++ } else if (nft_set_elem_active(&rbe->ext, genmask) &&
+ !nft_set_elem_expired(&rbe->ext)) {
+- overlap = true;
++ overlap = nft_rbtree_interval_end(rbe);
+ }
+ } else {
+ if (nft_rbtree_interval_end(rbe) &&
+diff --git a/net/wireless/nl80211.c b/net/wireless/nl80211.c
+index 7fbca0854265a..279c87a2a523b 100644
+--- a/net/wireless/nl80211.c
++++ b/net/wireless/nl80211.c
+@@ -6010,7 +6010,7 @@ static int nl80211_set_station(struct sk_buff *skb, struct genl_info *info)
+
+ if (info->attrs[NL80211_ATTR_HE_6GHZ_CAPABILITY])
+ params.he_6ghz_capa =
+- nla_data(info->attrs[NL80211_ATTR_HE_CAPABILITY]);
++ nla_data(info->attrs[NL80211_ATTR_HE_6GHZ_CAPABILITY]);
+
+ if (info->attrs[NL80211_ATTR_AIRTIME_WEIGHT])
+ params.airtime_weight =
+diff --git a/tools/perf/Documentation/perf-record.txt b/tools/perf/Documentation/perf-record.txt
+index fa8a5fcd27aba..950e4ad213747 100644
+--- a/tools/perf/Documentation/perf-record.txt
++++ b/tools/perf/Documentation/perf-record.txt
+@@ -33,6 +33,10 @@ OPTIONS
+ - a raw PMU event (eventsel+umask) in the form of rNNN where NNN is a
+ hexadecimal event descriptor.
+
++ - a symbolic or raw PMU event followed by an optional colon
++ and a list of event modifiers, e.g., cpu-cycles:p. See the
++ linkperf:perf-list[1] man page for details on event modifiers.
++
+ - a symbolically formed PMU event like 'pmu/param1=0x3,param2/' where
+ 'param1', 'param2', etc are defined as formats for the PMU in
+ /sys/bus/event_source/devices/<pmu>/format/*.
+diff --git a/tools/perf/Documentation/perf-stat.txt b/tools/perf/Documentation/perf-stat.txt
+index b029ee728a0bc..c8209467076b1 100644
+--- a/tools/perf/Documentation/perf-stat.txt
++++ b/tools/perf/Documentation/perf-stat.txt
+@@ -39,6 +39,10 @@ report::
+ - a raw PMU event (eventsel+umask) in the form of rNNN where NNN is a
+ hexadecimal event descriptor.
+
++ - a symbolic or raw PMU event followed by an optional colon
++ and a list of event modifiers, e.g., cpu-cycles:p. See the
++ linkperf:perf-list[1] man page for details on event modifiers.
++
+ - a symbolically formed event like 'pmu/param1=0x3,param2/' where
+ param1 and param2 are defined as formats for the PMU in
+ /sys/bus/event_source/devices/<pmu>/format/*
+diff --git a/tools/testing/selftests/x86/test_vsyscall.c b/tools/testing/selftests/x86/test_vsyscall.c
+index c41f24b517f40..65c141ebfbbde 100644
+--- a/tools/testing/selftests/x86/test_vsyscall.c
++++ b/tools/testing/selftests/x86/test_vsyscall.c
+@@ -462,6 +462,17 @@ static int test_vsys_x(void)
+ return 0;
+ }
+
++/*
++ * Debuggers expect ptrace() to be able to peek at the vsyscall page.
++ * Use process_vm_readv() as a proxy for ptrace() to test this. We
++ * want it to work in the vsyscall=emulate case and to fail in the
++ * vsyscall=xonly case.
++ *
++ * It's worth noting that this ABI is a bit nutty. write(2) can't
++ * read from the vsyscall page on any kernel version or mode. The
++ * fact that ptrace() ever worked was a nice courtesy of old kernels,
++ * but the code to support it is fairly gross.
++ */
+ static int test_process_vm_readv(void)
+ {
+ #ifdef __x86_64__
+@@ -477,8 +488,12 @@ static int test_process_vm_readv(void)
+ remote.iov_len = 4096;
+ ret = process_vm_readv(getpid(), &local, 1, &remote, 1, 0);
+ if (ret != 4096) {
+- printf("[OK]\tprocess_vm_readv() failed (ret = %d, errno = %d)\n", ret, errno);
+- return 0;
++ /*
++ * We expect process_vm_readv() to work if and only if the
++ * vsyscall page is readable.
++ */
++ printf("[%s]\tprocess_vm_readv() failed (ret = %d, errno = %d)\n", vsyscall_map_r ? "FAIL" : "OK", ret, errno);
++ return vsyscall_map_r ? 1 : 0;
+ }
+
+ if (vsyscall_map_r) {
+@@ -488,6 +503,9 @@ static int test_process_vm_readv(void)
+ printf("[FAIL]\tIt worked but returned incorrect data\n");
+ return 1;
+ }
++ } else {
++ printf("[FAIL]\tprocess_rm_readv() succeeded, but it should have failed in this configuration\n");
++ return 1;
+ }
+ #endif
+
* [gentoo-commits] proj/linux-patches:5.8 commit in: /
@ 2020-09-09 18:02 Mike Pagano
From: Mike Pagano @ 2020-09-09 18:02 UTC
To: gentoo-commits
commit: 9823217538d25d8a5408a78372823d3649ae9eed
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Sep 9 18:02:12 2020 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Sep 9 18:02:12 2020 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=98232175
Linux patch 5.8.8
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1007_linux-5.8.8.patch | 7178 ++++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 7182 insertions(+)
diff --git a/0000_README b/0000_README
index 62e43d7..93860e0 100644
--- a/0000_README
+++ b/0000_README
@@ -71,6 +71,10 @@ Patch: 1006_linux-5.8.7.patch
From: http://www.kernel.org
Desc: Linux 5.8.7
+Patch: 1007_linux-5.8.8.patch
+From: http://www.kernel.org
+Desc: Linux 5.8.8
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1007_linux-5.8.8.patch b/1007_linux-5.8.8.patch
new file mode 100644
index 0000000..1dd4fdb
--- /dev/null
+++ b/1007_linux-5.8.8.patch
@@ -0,0 +1,7178 @@
+diff --git a/Documentation/devicetree/bindings/mmc/mtk-sd.txt b/Documentation/devicetree/bindings/mmc/mtk-sd.txt
+index 8a532f4453f26..09aecec47003a 100644
+--- a/Documentation/devicetree/bindings/mmc/mtk-sd.txt
++++ b/Documentation/devicetree/bindings/mmc/mtk-sd.txt
+@@ -49,6 +49,8 @@ Optional properties:
+ error caused by stop clock(fifo full)
+ Valid range = [0:0x7]. if not present, default value is 0.
+ applied to compatible "mediatek,mt2701-mmc".
++- resets: Phandle and reset specifier pair to softreset line of MSDC IP.
++- reset-names: Should be "hrst".
+
+ Examples:
+ mmc0: mmc@11230000 {
+diff --git a/Documentation/filesystems/affs.rst b/Documentation/filesystems/affs.rst
+index 7f1a40dce6d3d..5776cbd5fa532 100644
+--- a/Documentation/filesystems/affs.rst
++++ b/Documentation/filesystems/affs.rst
+@@ -110,13 +110,15 @@ The Amiga protection flags RWEDRWEDHSPARWED are handled as follows:
+
+ - R maps to r for user, group and others. On directories, R implies x.
+
+- - If both W and D are allowed, w will be set.
++ - W maps to w.
+
+ - E maps to x.
+
+- - H and P are always retained and ignored under Linux.
++ - D is ignored.
+
+- - A is always reset when a file is written to.
++ - H, S and P are always retained and ignored under Linux.
++
++ - A is cleared when a file is written to.
+
+ User id and group id will be used unless set[gu]id are given as mount
+ options. Since most of the Amiga file systems are single user systems
+@@ -128,11 +130,13 @@ Linux -> Amiga:
+
+ The Linux rwxrwxrwx file mode is handled as follows:
+
+- - r permission will set R for user, group and others.
++ - r permission will allow R for user, group and others.
++
++ - w permission will allow W for user, group and others.
+
+- - w permission will set W and D for user, group and others.
++ - x permission of the user will allow E for plain files.
+
+- - x permission of the user will set E for plain files.
++ - D will be allowed for user, group and others.
+
+ - All other flags (suid, sgid, ...) are ignored and will
+ not be retained.
+diff --git a/Makefile b/Makefile
+index 5081bd85af29f..dba4d8f2f7862 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 8
+-SUBLEVEL = 7
++SUBLEVEL = 8
+ EXTRAVERSION =
+ NAME = Kleptomaniac Octopus
+
+diff --git a/arch/arc/kernel/perf_event.c b/arch/arc/kernel/perf_event.c
+index 661fd842ea97d..79849f37e782c 100644
+--- a/arch/arc/kernel/perf_event.c
++++ b/arch/arc/kernel/perf_event.c
+@@ -562,7 +562,7 @@ static int arc_pmu_device_probe(struct platform_device *pdev)
+ {
+ struct arc_reg_pct_build pct_bcr;
+ struct arc_reg_cc_build cc_bcr;
+- int i, has_interrupts;
++ int i, has_interrupts, irq;
+ int counter_size; /* in bits */
+
+ union cc_name {
+@@ -637,13 +637,7 @@ static int arc_pmu_device_probe(struct platform_device *pdev)
+ .attr_groups = arc_pmu->attr_groups,
+ };
+
+- if (has_interrupts) {
+- int irq = platform_get_irq(pdev, 0);
+-
+- if (irq < 0) {
+- pr_err("Cannot get IRQ number for the platform\n");
+- return -ENODEV;
+- }
++ if (has_interrupts && (irq = platform_get_irq(pdev, 0) >= 0)) {
+
+ arc_pmu->irq = irq;
+
+@@ -652,9 +646,9 @@ static int arc_pmu_device_probe(struct platform_device *pdev)
+ this_cpu_ptr(&arc_pmu_cpu));
+
+ on_each_cpu(arc_cpu_pmu_irq_init, &irq, 1);
+-
+- } else
++ } else {
+ arc_pmu->pmu.capabilities |= PERF_PMU_CAP_NO_INTERRUPT;
++ }
+
+ /*
+ * perf parser doesn't really like '-' symbol in events name, so let's
+diff --git a/arch/arc/mm/init.c b/arch/arc/mm/init.c
+index e7bdc2ac1c87c..8fcb9e25aa648 100644
+--- a/arch/arc/mm/init.c
++++ b/arch/arc/mm/init.c
+@@ -27,8 +27,8 @@ static unsigned long low_mem_sz;
+
+ #ifdef CONFIG_HIGHMEM
+ static unsigned long min_high_pfn, max_high_pfn;
+-static u64 high_mem_start;
+-static u64 high_mem_sz;
++static phys_addr_t high_mem_start;
++static phys_addr_t high_mem_sz;
+ #endif
+
+ #ifdef CONFIG_DISCONTIGMEM
+@@ -70,6 +70,7 @@ void __init early_init_dt_add_memory_arch(u64 base, u64 size)
+ high_mem_sz = size;
+ in_use = 1;
+ memblock_add_node(base, size, 1);
++ memblock_reserve(base, size);
+ #endif
+ }
+
+@@ -158,7 +159,7 @@ void __init setup_arch_memory(void)
+ min_high_pfn = PFN_DOWN(high_mem_start);
+ max_high_pfn = PFN_DOWN(high_mem_start + high_mem_sz);
+
+- max_zone_pfn[ZONE_HIGHMEM] = max_high_pfn;
++ max_zone_pfn[ZONE_HIGHMEM] = min_low_pfn;
+
+ high_memory = (void *)(min_high_pfn << PAGE_SHIFT);
+ kmap_init();
+@@ -167,22 +168,26 @@ void __init setup_arch_memory(void)
+ free_area_init(max_zone_pfn);
+ }
+
+-/*
+- * mem_init - initializes memory
+- *
+- * Frees up bootmem
+- * Calculates and displays memory available/used
+- */
+-void __init mem_init(void)
++static void __init highmem_init(void)
+ {
+ #ifdef CONFIG_HIGHMEM
+ unsigned long tmp;
+
+- reset_all_zones_managed_pages();
++ memblock_free(high_mem_start, high_mem_sz);
+ for (tmp = min_high_pfn; tmp < max_high_pfn; tmp++)
+ free_highmem_page(pfn_to_page(tmp));
+ #endif
++}
+
++/*
++ * mem_init - initializes memory
++ *
++ * Frees up bootmem
++ * Calculates and displays memory available/used
++ */
++void __init mem_init(void)
++{
+ memblock_free_all();
++ highmem_init();
+ mem_init_print_info(NULL);
+ }
+diff --git a/arch/arm64/boot/dts/mediatek/mt7622.dtsi b/arch/arm64/boot/dts/mediatek/mt7622.dtsi
+index 1a39e0ef776bb..5b9ec032ce8d8 100644
+--- a/arch/arm64/boot/dts/mediatek/mt7622.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt7622.dtsi
+@@ -686,6 +686,8 @@
+ clocks = <&pericfg CLK_PERI_MSDC30_0_PD>,
+ <&topckgen CLK_TOP_MSDC50_0_SEL>;
+ clock-names = "source", "hclk";
++ resets = <&pericfg MT7622_PERI_MSDC0_SW_RST>;
++ reset-names = "hrst";
+ status = "disabled";
+ };
+
+diff --git a/arch/mips/kernel/perf_event_mipsxx.c b/arch/mips/kernel/perf_event_mipsxx.c
+index efce5defcc5cf..011eb6bbf81a5 100644
+--- a/arch/mips/kernel/perf_event_mipsxx.c
++++ b/arch/mips/kernel/perf_event_mipsxx.c
+@@ -1898,8 +1898,8 @@ static const struct mips_perf_event *mipsxx_pmu_map_raw_event(u64 config)
+ (base_id >= 64 && base_id < 90) ||
+ (base_id >= 128 && base_id < 164) ||
+ (base_id >= 192 && base_id < 200) ||
+- (base_id >= 256 && base_id < 274) ||
+- (base_id >= 320 && base_id < 358) ||
++ (base_id >= 256 && base_id < 275) ||
++ (base_id >= 320 && base_id < 361) ||
+ (base_id >= 384 && base_id < 574))
+ break;
+
+diff --git a/arch/mips/kernel/smp-bmips.c b/arch/mips/kernel/smp-bmips.c
+index 2f513506a3d52..1dbfb5aadffd6 100644
+--- a/arch/mips/kernel/smp-bmips.c
++++ b/arch/mips/kernel/smp-bmips.c
+@@ -239,6 +239,8 @@ static int bmips_boot_secondary(int cpu, struct task_struct *idle)
+ */
+ static void bmips_init_secondary(void)
+ {
++ bmips_cpu_setup();
++
+ switch (current_cpu_type()) {
+ case CPU_BMIPS4350:
+ case CPU_BMIPS4380:
+diff --git a/arch/mips/kernel/traps.c b/arch/mips/kernel/traps.c
+index e664d8b43e72b..2e9d0637591c9 100644
+--- a/arch/mips/kernel/traps.c
++++ b/arch/mips/kernel/traps.c
+@@ -1286,6 +1286,18 @@ static int enable_restore_fp_context(int msa)
+ err = own_fpu_inatomic(1);
+ if (msa && !err) {
+ enable_msa();
++ /*
++ * with MSA enabled, userspace can see MSACSR
++ * and MSA regs, but the values in them are from
++ * other task before current task, restore them
++ * from saved fp/msa context
++ */
++ write_msa_csr(current->thread.fpu.msacsr);
++ /*
++ * own_fpu_inatomic(1) just restore low 64bit,
++ * fix the high 64bit
++ */
++ init_msa_upper();
+ set_thread_flag(TIF_USEDMSA);
+ set_thread_flag(TIF_MSA_CTX_LIVE);
+ }
+diff --git a/arch/mips/mm/c-r4k.c b/arch/mips/mm/c-r4k.c
+index 49569e5666d7a..cb32a00d286e1 100644
+--- a/arch/mips/mm/c-r4k.c
++++ b/arch/mips/mm/c-r4k.c
+@@ -1712,7 +1712,11 @@ static void setup_scache(void)
+ printk("MIPS secondary cache %ldkB, %s, linesize %d bytes.\n",
+ scache_size >> 10,
+ way_string[c->scache.ways], c->scache.linesz);
++
++ if (current_cpu_type() == CPU_BMIPS5000)
++ c->options |= MIPS_CPU_INCLUSIVE_CACHES;
+ }
++
+ #else
+ if (!(c->scache.flags & MIPS_CACHE_NOT_PRESENT))
+ panic("Dunno how to handle MIPS32 / MIPS64 second level cache");
+diff --git a/arch/mips/oprofile/op_model_mipsxx.c b/arch/mips/oprofile/op_model_mipsxx.c
+index 1493c49ca47a1..55d7b7fd18b6f 100644
+--- a/arch/mips/oprofile/op_model_mipsxx.c
++++ b/arch/mips/oprofile/op_model_mipsxx.c
+@@ -245,7 +245,6 @@ static int mipsxx_perfcount_handler(void)
+
+ switch (counters) {
+ #define HANDLE_COUNTER(n) \
+- fallthrough; \
+ case n + 1: \
+ control = r_c0_perfctrl ## n(); \
+ counter = r_c0_perfcntr ## n(); \
+@@ -256,8 +255,11 @@ static int mipsxx_perfcount_handler(void)
+ handled = IRQ_HANDLED; \
+ }
+ HANDLE_COUNTER(3)
++ fallthrough;
+ HANDLE_COUNTER(2)
++ fallthrough;
+ HANDLE_COUNTER(1)
++ fallthrough;
+ HANDLE_COUNTER(0)
+ }
+
+diff --git a/arch/mips/sni/a20r.c b/arch/mips/sni/a20r.c
+index 0ecffb65fd6d1..b09dc844985a8 100644
+--- a/arch/mips/sni/a20r.c
++++ b/arch/mips/sni/a20r.c
+@@ -222,8 +222,8 @@ void __init sni_a20r_irq_init(void)
+ irq_set_chip_and_handler(i, &a20r_irq_type, handle_level_irq);
+ sni_hwint = a20r_hwint;
+ change_c0_status(ST0_IM, IE_IRQ0);
+- if (request_irq(SNI_A20R_IRQ_BASE + 3, sni_isa_irq_handler, 0, "ISA",
+- NULL))
++ if (request_irq(SNI_A20R_IRQ_BASE + 3, sni_isa_irq_handler,
++ IRQF_SHARED, "ISA", sni_isa_irq_handler))
+ pr_err("Failed to register ISA interrupt\n");
+ }
+
+diff --git a/arch/s390/Kconfig b/arch/s390/Kconfig
+index 4907a5149a8a3..79e074ffad139 100644
+--- a/arch/s390/Kconfig
++++ b/arch/s390/Kconfig
+@@ -30,7 +30,7 @@ config GENERIC_BUG_RELATIVE_POINTERS
+ def_bool y
+
+ config GENERIC_LOCKBREAK
+- def_bool y if PREEMPTTION
++ def_bool y if PREEMPTION
+
+ config PGSTE
+ def_bool y if KVM
+diff --git a/arch/s390/include/asm/percpu.h b/arch/s390/include/asm/percpu.h
+index 50b4ce8cddfdc..918f0ba4f4d20 100644
+--- a/arch/s390/include/asm/percpu.h
++++ b/arch/s390/include/asm/percpu.h
+@@ -29,7 +29,7 @@
+ typedef typeof(pcp) pcp_op_T__; \
+ pcp_op_T__ old__, new__, prev__; \
+ pcp_op_T__ *ptr__; \
+- preempt_disable(); \
++ preempt_disable_notrace(); \
+ ptr__ = raw_cpu_ptr(&(pcp)); \
+ prev__ = *ptr__; \
+ do { \
+@@ -37,7 +37,7 @@
+ new__ = old__ op (val); \
+ prev__ = cmpxchg(ptr__, old__, new__); \
+ } while (prev__ != old__); \
+- preempt_enable(); \
++ preempt_enable_notrace(); \
+ new__; \
+ })
+
+@@ -68,7 +68,7 @@
+ typedef typeof(pcp) pcp_op_T__; \
+ pcp_op_T__ val__ = (val); \
+ pcp_op_T__ old__, *ptr__; \
+- preempt_disable(); \
++ preempt_disable_notrace(); \
+ ptr__ = raw_cpu_ptr(&(pcp)); \
+ if (__builtin_constant_p(val__) && \
+ ((szcast)val__ > -129) && ((szcast)val__ < 128)) { \
+@@ -84,7 +84,7 @@
+ : [val__] "d" (val__) \
+ : "cc"); \
+ } \
+- preempt_enable(); \
++ preempt_enable_notrace(); \
+ }
+
+ #define this_cpu_add_4(pcp, val) arch_this_cpu_add(pcp, val, "laa", "asi", int)
+@@ -95,14 +95,14 @@
+ typedef typeof(pcp) pcp_op_T__; \
+ pcp_op_T__ val__ = (val); \
+ pcp_op_T__ old__, *ptr__; \
+- preempt_disable(); \
++ preempt_disable_notrace(); \
+ ptr__ = raw_cpu_ptr(&(pcp)); \
+ asm volatile( \
+ op " %[old__],%[val__],%[ptr__]\n" \
+ : [old__] "=d" (old__), [ptr__] "+Q" (*ptr__) \
+ : [val__] "d" (val__) \
+ : "cc"); \
+- preempt_enable(); \
++ preempt_enable_notrace(); \
+ old__ + val__; \
+ })
+
+@@ -114,14 +114,14 @@
+ typedef typeof(pcp) pcp_op_T__; \
+ pcp_op_T__ val__ = (val); \
+ pcp_op_T__ old__, *ptr__; \
+- preempt_disable(); \
++ preempt_disable_notrace(); \
+ ptr__ = raw_cpu_ptr(&(pcp)); \
+ asm volatile( \
+ op " %[old__],%[val__],%[ptr__]\n" \
+ : [old__] "=d" (old__), [ptr__] "+Q" (*ptr__) \
+ : [val__] "d" (val__) \
+ : "cc"); \
+- preempt_enable(); \
++ preempt_enable_notrace(); \
+ }
+
+ #define this_cpu_and_4(pcp, val) arch_this_cpu_to_op(pcp, val, "lan")
+@@ -136,10 +136,10 @@
+ typedef typeof(pcp) pcp_op_T__; \
+ pcp_op_T__ ret__; \
+ pcp_op_T__ *ptr__; \
+- preempt_disable(); \
++ preempt_disable_notrace(); \
+ ptr__ = raw_cpu_ptr(&(pcp)); \
+ ret__ = cmpxchg(ptr__, oval, nval); \
+- preempt_enable(); \
++ preempt_enable_notrace(); \
+ ret__; \
+ })
+
+@@ -152,10 +152,10 @@
+ ({ \
+ typeof(pcp) *ptr__; \
+ typeof(pcp) ret__; \
+- preempt_disable(); \
++ preempt_disable_notrace(); \
+ ptr__ = raw_cpu_ptr(&(pcp)); \
+ ret__ = xchg(ptr__, nval); \
+- preempt_enable(); \
++ preempt_enable_notrace(); \
+ ret__; \
+ })
+
+@@ -171,11 +171,11 @@
+ typeof(pcp1) *p1__; \
+ typeof(pcp2) *p2__; \
+ int ret__; \
+- preempt_disable(); \
++ preempt_disable_notrace(); \
+ p1__ = raw_cpu_ptr(&(pcp1)); \
+ p2__ = raw_cpu_ptr(&(pcp2)); \
+ ret__ = __cmpxchg_double(p1__, p2__, o1__, o2__, n1__, n2__); \
+- preempt_enable(); \
++ preempt_enable_notrace(); \
+ ret__; \
+ })
+
+diff --git a/arch/x86/entry/common.c b/arch/x86/entry/common.c
+index f09288431f289..606c4e25ee934 100644
+--- a/arch/x86/entry/common.c
++++ b/arch/x86/entry/common.c
+@@ -55,8 +55,16 @@ static noinstr void check_user_regs(struct pt_regs *regs)
+ * state, not the interrupt state as imagined by Xen.
+ */
+ unsigned long flags = native_save_fl();
+- WARN_ON_ONCE(flags & (X86_EFLAGS_AC | X86_EFLAGS_DF |
+- X86_EFLAGS_NT));
++ unsigned long mask = X86_EFLAGS_DF | X86_EFLAGS_NT;
++
++ /*
++ * For !SMAP hardware we patch out CLAC on entry.
++ */
++ if (boot_cpu_has(X86_FEATURE_SMAP) ||
++ (IS_ENABLED(CONFIG_64_BIT) && boot_cpu_has(X86_FEATURE_XENPV)))
++ mask |= X86_EFLAGS_AC;
++
++ WARN_ON_ONCE(flags & mask);
+
+ /* We think we came from user mode. Make sure pt_regs agrees. */
+ WARN_ON_ONCE(!user_mode(regs));
+diff --git a/arch/x86/include/asm/ptrace.h b/arch/x86/include/asm/ptrace.h
+index 255b2dde2c1b7..9675d8b2c6666 100644
+--- a/arch/x86/include/asm/ptrace.h
++++ b/arch/x86/include/asm/ptrace.h
+@@ -322,8 +322,8 @@ static inline unsigned long regs_get_kernel_argument(struct pt_regs *regs,
+ static const unsigned int argument_offs[] = {
+ #ifdef __i386__
+ offsetof(struct pt_regs, ax),
+- offsetof(struct pt_regs, cx),
+ offsetof(struct pt_regs, dx),
++ offsetof(struct pt_regs, cx),
+ #define NR_REG_ARGUMENTS 3
+ #else
+ offsetof(struct pt_regs, di),
+diff --git a/arch/x86/include/asm/switch_to.h b/arch/x86/include/asm/switch_to.h
+index 9f69cc497f4b6..0e059b73437b4 100644
+--- a/arch/x86/include/asm/switch_to.h
++++ b/arch/x86/include/asm/switch_to.h
+@@ -12,6 +12,27 @@ struct task_struct *__switch_to_asm(struct task_struct *prev,
+ __visible struct task_struct *__switch_to(struct task_struct *prev,
+ struct task_struct *next);
+
++/* This runs runs on the previous thread's stack. */
++static inline void prepare_switch_to(struct task_struct *next)
++{
++#ifdef CONFIG_VMAP_STACK
++ /*
++ * If we switch to a stack that has a top-level paging entry
++ * that is not present in the current mm, the resulting #PF will
++ * will be promoted to a double-fault and we'll panic. Probe
++ * the new stack now so that vmalloc_fault can fix up the page
++ * tables if needed. This can only happen if we use a stack
++ * in vmap space.
++ *
++ * We assume that the stack is aligned so that it never spans
++ * more than one top-level paging entry.
++ *
++ * To minimize cache pollution, just follow the stack pointer.
++ */
++ READ_ONCE(*(unsigned char *)next->thread.sp);
++#endif
++}
++
+ asmlinkage void ret_from_fork(void);
+
+ /*
+@@ -46,6 +67,8 @@ struct fork_frame {
+
+ #define switch_to(prev, next, last) \
+ do { \
++ prepare_switch_to(next); \
++ \
+ ((last) = __switch_to_asm((prev), (next))); \
+ } while (0)
+
+diff --git a/arch/x86/kernel/setup_percpu.c b/arch/x86/kernel/setup_percpu.c
+index fd945ce78554e..e6d7894ad1279 100644
+--- a/arch/x86/kernel/setup_percpu.c
++++ b/arch/x86/kernel/setup_percpu.c
+@@ -287,9 +287,9 @@ void __init setup_per_cpu_areas(void)
+ /*
+ * Sync back kernel address range again. We already did this in
+ * setup_arch(), but percpu data also needs to be available in
+- * the smpboot asm and arch_sync_kernel_mappings() doesn't sync to
+- * swapper_pg_dir on 32-bit. The per-cpu mappings need to be available
+- * there too.
++ * the smpboot asm. We can't reliably pick up percpu mappings
++ * using vmalloc_fault(), because exception dispatch needs
++ * percpu data.
+ *
+ * FIXME: Can the later sync in setup_cpu_entry_areas() replace
+ * this call?
+diff --git a/arch/x86/kernel/traps.c b/arch/x86/kernel/traps.c
+index b7cb3e0716f7d..69cc823109740 100644
+--- a/arch/x86/kernel/traps.c
++++ b/arch/x86/kernel/traps.c
+@@ -733,20 +733,9 @@ static bool is_sysenter_singlestep(struct pt_regs *regs)
+ #endif
+ }
+
+-static __always_inline void debug_enter(unsigned long *dr6, unsigned long *dr7)
++static __always_inline unsigned long debug_read_clear_dr6(void)
+ {
+- /*
+- * Disable breakpoints during exception handling; recursive exceptions
+- * are exceedingly 'fun'.
+- *
+- * Since this function is NOKPROBE, and that also applies to
+- * HW_BREAKPOINT_X, we can't hit a breakpoint before this (XXX except a
+- * HW_BREAKPOINT_W on our stack)
+- *
+- * Entry text is excluded for HW_BP_X and cpu_entry_area, which
+- * includes the entry stack is excluded for everything.
+- */
+- *dr7 = local_db_save();
++ unsigned long dr6;
+
+ /*
+ * The Intel SDM says:
+@@ -759,15 +748,12 @@ static __always_inline void debug_enter(unsigned long *dr6, unsigned long *dr7)
+ *
+ * Keep it simple: clear DR6 immediately.
+ */
+- get_debugreg(*dr6, 6);
++ get_debugreg(dr6, 6);
+ set_debugreg(0, 6);
+ /* Filter out all the reserved bits which are preset to 1 */
+- *dr6 &= ~DR6_RESERVED;
+-}
++ dr6 &= ~DR6_RESERVED;
+
+-static __always_inline void debug_exit(unsigned long dr7)
+-{
+- local_db_restore(dr7);
++ return dr6;
+ }
+
+ /*
+@@ -867,6 +853,19 @@ out:
+ static __always_inline void exc_debug_kernel(struct pt_regs *regs,
+ unsigned long dr6)
+ {
++ /*
++ * Disable breakpoints during exception handling; recursive exceptions
++ * are exceedingly 'fun'.
++ *
++ * Since this function is NOKPROBE, and that also applies to
++ * HW_BREAKPOINT_X, we can't hit a breakpoint before this (XXX except a
++ * HW_BREAKPOINT_W on our stack)
++ *
++ * Entry text is excluded for HW_BP_X and cpu_entry_area, which
++ * includes the entry stack is excluded for everything.
++ */
++ unsigned long dr7 = local_db_save();
++
+ nmi_enter();
+ instrumentation_begin();
+ trace_hardirqs_off_finish();
+@@ -890,6 +889,8 @@ static __always_inline void exc_debug_kernel(struct pt_regs *regs,
+ trace_hardirqs_on_prepare();
+ instrumentation_end();
+ nmi_exit();
++
++ local_db_restore(dr7);
+ }
+
+ static __always_inline void exc_debug_user(struct pt_regs *regs,
+@@ -901,6 +902,15 @@ static __always_inline void exc_debug_user(struct pt_regs *regs,
+ */
+ WARN_ON_ONCE(!user_mode(regs));
+
++ /*
++ * NB: We can't easily clear DR7 here because
++ * idtentry_exit_to_usermode() can invoke ptrace, schedule, access
++ * user memory, etc. This means that a recursive #DB is possible. If
++ * this happens, that #DB will hit exc_debug_kernel() and clear DR7.
++ * Since we're not on the IST stack right now, everything will be
++ * fine.
++ */
++
+ idtentry_enter_user(regs);
+ instrumentation_begin();
+
+@@ -913,36 +923,24 @@ static __always_inline void exc_debug_user(struct pt_regs *regs,
+ /* IST stack entry */
+ DEFINE_IDTENTRY_DEBUG(exc_debug)
+ {
+- unsigned long dr6, dr7;
+-
+- debug_enter(&dr6, &dr7);
+- exc_debug_kernel(regs, dr6);
+- debug_exit(dr7);
++ exc_debug_kernel(regs, debug_read_clear_dr6());
+ }
+
+ /* User entry, runs on regular task stack */
+ DEFINE_IDTENTRY_DEBUG_USER(exc_debug)
+ {
+- unsigned long dr6, dr7;
+-
+- debug_enter(&dr6, &dr7);
+- exc_debug_user(regs, dr6);
+- debug_exit(dr7);
++ exc_debug_user(regs, debug_read_clear_dr6());
+ }
+ #else
+ /* 32 bit does not have separate entry points. */
+ DEFINE_IDTENTRY_RAW(exc_debug)
+ {
+- unsigned long dr6, dr7;
+-
+- debug_enter(&dr6, &dr7);
++ unsigned long dr6 = debug_read_clear_dr6();
+
+ if (user_mode(regs))
+ exc_debug_user(regs, dr6);
+ else
+ exc_debug_kernel(regs, dr6);
+-
+- debug_exit(dr7);
+ }
+ #endif
+
+diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
+index 1ead568c01012..370c314b8f44d 100644
+--- a/arch/x86/mm/fault.c
++++ b/arch/x86/mm/fault.c
+@@ -215,6 +215,44 @@ void arch_sync_kernel_mappings(unsigned long start, unsigned long end)
+ }
+ }
+
++/*
++ * 32-bit:
++ *
++ * Handle a fault on the vmalloc or module mapping area
++ */
++static noinline int vmalloc_fault(unsigned long address)
++{
++ unsigned long pgd_paddr;
++ pmd_t *pmd_k;
++ pte_t *pte_k;
++
++ /* Make sure we are in vmalloc area: */
++ if (!(address >= VMALLOC_START && address < VMALLOC_END))
++ return -1;
++
++ /*
++ * Synchronize this task's top level page-table
++ * with the 'reference' page table.
++ *
++ * Do _not_ use "current" here. We might be inside
++ * an interrupt in the middle of a task switch..
++ */
++ pgd_paddr = read_cr3_pa();
++ pmd_k = vmalloc_sync_one(__va(pgd_paddr), address);
++ if (!pmd_k)
++ return -1;
++
++ if (pmd_large(*pmd_k))
++ return 0;
++
++ pte_k = pte_offset_kernel(pmd_k, address);
++ if (!pte_present(*pte_k))
++ return -1;
++
++ return 0;
++}
++NOKPROBE_SYMBOL(vmalloc_fault);
++
+ /*
+ * Did it hit the DOS screen memory VA from vm86 mode?
+ */
+@@ -279,6 +317,79 @@ out:
+
+ #else /* CONFIG_X86_64: */
+
++/*
++ * 64-bit:
++ *
++ * Handle a fault on the vmalloc area
++ */
++static noinline int vmalloc_fault(unsigned long address)
++{
++ pgd_t *pgd, *pgd_k;
++ p4d_t *p4d, *p4d_k;
++ pud_t *pud;
++ pmd_t *pmd;
++ pte_t *pte;
++
++ /* Make sure we are in vmalloc area: */
++ if (!(address >= VMALLOC_START && address < VMALLOC_END))
++ return -1;
++
++ /*
++ * Copy kernel mappings over when needed. This can also
++ * happen within a race in page table update. In the later
++ * case just flush:
++ */
++ pgd = (pgd_t *)__va(read_cr3_pa()) + pgd_index(address);
++ pgd_k = pgd_offset_k(address);
++ if (pgd_none(*pgd_k))
++ return -1;
++
++ if (pgtable_l5_enabled()) {
++ if (pgd_none(*pgd)) {
++ set_pgd(pgd, *pgd_k);
++ arch_flush_lazy_mmu_mode();
++ } else {
++ BUG_ON(pgd_page_vaddr(*pgd) != pgd_page_vaddr(*pgd_k));
++ }
++ }
++
++ /* With 4-level paging, copying happens on the p4d level. */
++ p4d = p4d_offset(pgd, address);
++ p4d_k = p4d_offset(pgd_k, address);
++ if (p4d_none(*p4d_k))
++ return -1;
++
++ if (p4d_none(*p4d) && !pgtable_l5_enabled()) {
++ set_p4d(p4d, *p4d_k);
++ arch_flush_lazy_mmu_mode();
++ } else {
++ BUG_ON(p4d_pfn(*p4d) != p4d_pfn(*p4d_k));
++ }
++
++ BUILD_BUG_ON(CONFIG_PGTABLE_LEVELS < 4);
++
++ pud = pud_offset(p4d, address);
++ if (pud_none(*pud))
++ return -1;
++
++ if (pud_large(*pud))
++ return 0;
++
++ pmd = pmd_offset(pud, address);
++ if (pmd_none(*pmd))
++ return -1;
++
++ if (pmd_large(*pmd))
++ return 0;
++
++ pte = pte_offset_kernel(pmd, address);
++ if (!pte_present(*pte))
++ return -1;
++
++ return 0;
++}
++NOKPROBE_SYMBOL(vmalloc_fault);
++
+ #ifdef CONFIG_CPU_SUP_AMD
+ static const char errata93_warning[] =
+ KERN_ERR
+@@ -1111,6 +1222,29 @@ do_kern_addr_fault(struct pt_regs *regs, unsigned long hw_error_code,
+ */
+ WARN_ON_ONCE(hw_error_code & X86_PF_PK);
+
++ /*
++ * We can fault-in kernel-space virtual memory on-demand. The
++ * 'reference' page table is init_mm.pgd.
++ *
++ * NOTE! We MUST NOT take any locks for this case. We may
++ * be in an interrupt or a critical region, and should
++ * only copy the information from the master page table,
++ * nothing more.
++ *
++ * Before doing this on-demand faulting, ensure that the
++ * fault is not any of the following:
++ * 1. A fault on a PTE with a reserved bit set.
++ * 2. A fault caused by a user-mode access. (Do not demand-
++ * fault kernel memory due to user-mode accesses).
++ * 3. A fault caused by a page-level protection violation.
++ * (A demand fault would be on a non-present page which
++ * would have X86_PF_PROT==0).
++ */
++ if (!(hw_error_code & (X86_PF_RSVD | X86_PF_USER | X86_PF_PROT))) {
++ if (vmalloc_fault(address) >= 0)
++ return;
++ }
++
+ /* Was the fault spurious, caused by lazy TLB invalidation? */
+ if (spurious_kernel_fault(hw_error_code, address))
+ return;
+diff --git a/arch/x86/mm/numa_emulation.c b/arch/x86/mm/numa_emulation.c
+index c5174b4e318b4..683cd12f47938 100644
+--- a/arch/x86/mm/numa_emulation.c
++++ b/arch/x86/mm/numa_emulation.c
+@@ -321,7 +321,7 @@ static int __init split_nodes_size_interleave(struct numa_meminfo *ei,
+ u64 addr, u64 max_addr, u64 size)
+ {
+ return split_nodes_size_interleave_uniform(ei, pi, addr, max_addr, size,
+- 0, NULL, NUMA_NO_NODE);
++ 0, NULL, 0);
+ }
+
+ static int __init setup_emu2phys_nid(int *dfl_phys_nid)
+diff --git a/arch/x86/mm/pti.c b/arch/x86/mm/pti.c
+index a8a924b3c3358..0b0d1cdce2e73 100644
+--- a/arch/x86/mm/pti.c
++++ b/arch/x86/mm/pti.c
+@@ -447,7 +447,13 @@ static void __init pti_clone_user_shared(void)
+ * the sp1 and sp2 slots.
+ *
+ * This is done for all possible CPUs during boot to ensure
+- * that it's propagated to all mms.
++ * that it's propagated to all mms. If we were to add one of
++ * these mappings during CPU hotplug, we would need to take
++ * some measure to make sure that every mm that subsequently
++ * ran on that CPU would have the relevant PGD entry in its
++ * pagetables. The usual vmalloc_fault() mechanism would not
++ * work for page faults taken in entry_SYSCALL_64 before RSP
++ * is set up.
+ */
+
+ unsigned long va = (unsigned long)&per_cpu(cpu_tss_rw, cpu);
+diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
+index 1a3569b43aa5b..cf81902e6992f 100644
+--- a/arch/x86/mm/tlb.c
++++ b/arch/x86/mm/tlb.c
+@@ -317,6 +317,34 @@ void switch_mm(struct mm_struct *prev, struct mm_struct *next,
+ local_irq_restore(flags);
+ }
+
++static void sync_current_stack_to_mm(struct mm_struct *mm)
++{
++ unsigned long sp = current_stack_pointer;
++ pgd_t *pgd = pgd_offset(mm, sp);
++
++ if (pgtable_l5_enabled()) {
++ if (unlikely(pgd_none(*pgd))) {
++ pgd_t *pgd_ref = pgd_offset_k(sp);
++
++ set_pgd(pgd, *pgd_ref);
++ }
++ } else {
++ /*
++ * "pgd" is faked. The top level entries are "p4d"s, so sync
++ * the p4d. This compiles to approximately the same code as
++ * the 5-level case.
++ */
++ p4d_t *p4d = p4d_offset(pgd, sp);
++
++ if (unlikely(p4d_none(*p4d))) {
++ pgd_t *pgd_ref = pgd_offset_k(sp);
++ p4d_t *p4d_ref = p4d_offset(pgd_ref, sp);
++
++ set_p4d(p4d, *p4d_ref);
++ }
++ }
++}
++
+ static inline unsigned long mm_mangle_tif_spec_ib(struct task_struct *next)
+ {
+ unsigned long next_tif = task_thread_info(next)->flags;
+@@ -525,6 +553,15 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
+ */
+ cond_ibpb(tsk);
+
++ if (IS_ENABLED(CONFIG_VMAP_STACK)) {
++ /*
++ * If our current stack is in vmalloc space and isn't
++ * mapped in the new pgd, we'll double-fault. Forcibly
++ * map it.
++ */
++ sync_current_stack_to_mm(next);
++ }
++
+ /*
+ * Stop remote flushes for the previous mm.
+ * Skip kernel threads; we never send init_mm TLB flushing IPIs,
+diff --git a/block/blk-core.c b/block/blk-core.c
+index 03252af8c82c8..619a3dcd3f5e7 100644
+--- a/block/blk-core.c
++++ b/block/blk-core.c
+@@ -526,6 +526,7 @@ struct request_queue *__blk_alloc_queue(int node_id)
+ goto fail_stats;
+
+ q->backing_dev_info->ra_pages = VM_READAHEAD_PAGES;
++ q->backing_dev_info->io_pages = VM_READAHEAD_PAGES;
+ q->backing_dev_info->capabilities = BDI_CAP_CGROUP_WRITEBACK;
+ q->node = node_id;
+
+diff --git a/block/blk-iocost.c b/block/blk-iocost.c
+index 86ba6fd254e1d..27c05e3caf75a 100644
+--- a/block/blk-iocost.c
++++ b/block/blk-iocost.c
+@@ -2094,14 +2094,15 @@ static void ioc_pd_free(struct blkg_policy_data *pd)
+ {
+ struct ioc_gq *iocg = pd_to_iocg(pd);
+ struct ioc *ioc = iocg->ioc;
++ unsigned long flags;
+
+ if (ioc) {
+- spin_lock(&ioc->lock);
++ spin_lock_irqsave(&ioc->lock, flags);
+ if (!list_empty(&iocg->active_list)) {
+ propagate_active_weight(iocg, 0, 0);
+ list_del_init(&iocg->active_list);
+ }
+- spin_unlock(&ioc->lock);
++ spin_unlock_irqrestore(&ioc->lock, flags);
+
+ hrtimer_cancel(&iocg->waitq_timer);
+ hrtimer_cancel(&iocg->delay_timer);
+diff --git a/block/blk-stat.c b/block/blk-stat.c
+index 7da302ff88d0d..ae3dd1fb8e61d 100644
+--- a/block/blk-stat.c
++++ b/block/blk-stat.c
+@@ -137,6 +137,7 @@ void blk_stat_add_callback(struct request_queue *q,
+ struct blk_stat_callback *cb)
+ {
+ unsigned int bucket;
++ unsigned long flags;
+ int cpu;
+
+ for_each_possible_cpu(cpu) {
+@@ -147,20 +148,22 @@ void blk_stat_add_callback(struct request_queue *q,
+ blk_rq_stat_init(&cpu_stat[bucket]);
+ }
+
+- spin_lock(&q->stats->lock);
++ spin_lock_irqsave(&q->stats->lock, flags);
+ list_add_tail_rcu(&cb->list, &q->stats->callbacks);
+ blk_queue_flag_set(QUEUE_FLAG_STATS, q);
+- spin_unlock(&q->stats->lock);
++ spin_unlock_irqrestore(&q->stats->lock, flags);
+ }
+
+ void blk_stat_remove_callback(struct request_queue *q,
+ struct blk_stat_callback *cb)
+ {
+- spin_lock(&q->stats->lock);
++ unsigned long flags;
++
++ spin_lock_irqsave(&q->stats->lock, flags);
+ list_del_rcu(&cb->list);
+ if (list_empty(&q->stats->callbacks) && !q->stats->enable_accounting)
+ blk_queue_flag_clear(QUEUE_FLAG_STATS, q);
+- spin_unlock(&q->stats->lock);
++ spin_unlock_irqrestore(&q->stats->lock, flags);
+
+ del_timer_sync(&cb->timer);
+ }
+@@ -183,10 +186,12 @@ void blk_stat_free_callback(struct blk_stat_callback *cb)
+
+ void blk_stat_enable_accounting(struct request_queue *q)
+ {
+- spin_lock(&q->stats->lock);
++ unsigned long flags;
++
++ spin_lock_irqsave(&q->stats->lock, flags);
+ q->stats->enable_accounting = true;
+ blk_queue_flag_set(QUEUE_FLAG_STATS, q);
+- spin_unlock(&q->stats->lock);
++ spin_unlock_irqrestore(&q->stats->lock, flags);
+ }
+ EXPORT_SYMBOL_GPL(blk_stat_enable_accounting);
+
+diff --git a/block/partitions/core.c b/block/partitions/core.c
+index 78951e33b2d7c..534e11285a8d4 100644
+--- a/block/partitions/core.c
++++ b/block/partitions/core.c
+@@ -524,19 +524,20 @@ int bdev_add_partition(struct block_device *bdev, int partno,
+ int bdev_del_partition(struct block_device *bdev, int partno)
+ {
+ struct block_device *bdevp;
+- struct hd_struct *part;
+- int ret = 0;
+-
+- part = disk_get_part(bdev->bd_disk, partno);
+- if (!part)
+- return -ENXIO;
++ struct hd_struct *part = NULL;
++ int ret;
+
+- ret = -ENOMEM;
+- bdevp = bdget(part_devt(part));
++ bdevp = bdget_disk(bdev->bd_disk, partno);
+ if (!bdevp)
+- goto out_put_part;
++ return -ENOMEM;
+
+ mutex_lock(&bdevp->bd_mutex);
++ mutex_lock_nested(&bdev->bd_mutex, 1);
++
++ ret = -ENXIO;
++ part = disk_get_part(bdev->bd_disk, partno);
++ if (!part)
++ goto out_unlock;
+
+ ret = -EBUSY;
+ if (bdevp->bd_openers)
+@@ -545,16 +546,14 @@ int bdev_del_partition(struct block_device *bdev, int partno)
+ sync_blockdev(bdevp);
+ invalidate_bdev(bdevp);
+
+- mutex_lock_nested(&bdev->bd_mutex, 1);
+ delete_partition(bdev->bd_disk, part);
+- mutex_unlock(&bdev->bd_mutex);
+-
+ ret = 0;
+ out_unlock:
++ mutex_unlock(&bdev->bd_mutex);
+ mutex_unlock(&bdevp->bd_mutex);
+ bdput(bdevp);
+-out_put_part:
+- disk_put_part(part);
++ if (part)
++ disk_put_part(part);
+ return ret;
+ }
+
+diff --git a/drivers/ata/libata-core.c b/drivers/ata/libata-core.c
+index b1cd4d97bc2a7..1be73d29119ab 100644
+--- a/drivers/ata/libata-core.c
++++ b/drivers/ata/libata-core.c
+@@ -3868,9 +3868,8 @@ static const struct ata_blacklist_entry ata_device_blacklist [] = {
+ /* https://bugzilla.kernel.org/show_bug.cgi?id=15573 */
+ { "C300-CTFDDAC128MAG", "0001", ATA_HORKAGE_NONCQ, },
+
+- /* Some Sandisk SSDs lock up hard with NCQ enabled. Reported on
+- SD7SN6S256G and SD8SN8U256G */
+- { "SanDisk SD[78]SN*G", NULL, ATA_HORKAGE_NONCQ, },
++ /* Sandisk SD7/8/9s lock up hard on large trims */
++ { "SanDisk SD[789]*", NULL, ATA_HORKAGE_MAX_TRIM_128M, },
+
+ /* devices which puke on READ_NATIVE_MAX */
+ { "HDS724040KLSA80", "KFAOA20N", ATA_HORKAGE_BROKEN_HPA, },
+diff --git a/drivers/ata/libata-scsi.c b/drivers/ata/libata-scsi.c
+index 46336084b1a90..cc7bedafb3923 100644
+--- a/drivers/ata/libata-scsi.c
++++ b/drivers/ata/libata-scsi.c
+@@ -2080,6 +2080,7 @@ static unsigned int ata_scsiop_inq_89(struct ata_scsi_args *args, u8 *rbuf)
+
+ static unsigned int ata_scsiop_inq_b0(struct ata_scsi_args *args, u8 *rbuf)
+ {
++ struct ata_device *dev = args->dev;
+ u16 min_io_sectors;
+
+ rbuf[1] = 0xb0;
+@@ -2105,7 +2106,12 @@ static unsigned int ata_scsiop_inq_b0(struct ata_scsi_args *args, u8 *rbuf)
+ * with the unmap bit set.
+ */
+ if (ata_id_has_trim(args->id)) {
+- put_unaligned_be64(65535 * ATA_MAX_TRIM_RNUM, &rbuf[36]);
++ u64 max_blocks = 65535 * ATA_MAX_TRIM_RNUM;
++
++ if (dev->horkage & ATA_HORKAGE_MAX_TRIM_128M)
++ max_blocks = 128 << (20 - SECTOR_SHIFT);
++
++ put_unaligned_be64(max_blocks, &rbuf[36]);
+ put_unaligned_be32(1, &rbuf[28]);
+ }
+
+diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
+index ce7e9f223b20b..bc9dc1f847e19 100644
+--- a/drivers/block/nbd.c
++++ b/drivers/block/nbd.c
+@@ -1360,6 +1360,8 @@ static void nbd_set_cmd_timeout(struct nbd_device *nbd, u64 timeout)
+ nbd->tag_set.timeout = timeout * HZ;
+ if (timeout)
+ blk_queue_rq_timeout(nbd->disk->queue, timeout * HZ);
++ else
++ blk_queue_rq_timeout(nbd->disk->queue, 30 * HZ);
+ }
+
+ /* Must be called with config_lock held */
+diff --git a/drivers/cpuidle/cpuidle.c b/drivers/cpuidle/cpuidle.c
+index 87197319ab069..2fe4f3cdf54d7 100644
+--- a/drivers/cpuidle/cpuidle.c
++++ b/drivers/cpuidle/cpuidle.c
+@@ -153,7 +153,8 @@ static void enter_s2idle_proper(struct cpuidle_driver *drv,
+ */
+ stop_critical_timings();
+ drv->states[index].enter_s2idle(dev, drv, index);
+- WARN_ON(!irqs_disabled());
++ if (WARN_ON_ONCE(!irqs_disabled()))
++ local_irq_disable();
+ /*
+ * timekeeping_resume() that will be called by tick_unfreeze() for the
+ * first CPU executing it calls functions containing RCU read-side
+diff --git a/drivers/dma/at_hdmac.c b/drivers/dma/at_hdmac.c
+index 73a20780744bf..626819b33a325 100644
+--- a/drivers/dma/at_hdmac.c
++++ b/drivers/dma/at_hdmac.c
+@@ -1650,13 +1650,17 @@ static struct dma_chan *at_dma_xlate(struct of_phandle_args *dma_spec,
+ return NULL;
+
+ dmac_pdev = of_find_device_by_node(dma_spec->np);
++ if (!dmac_pdev)
++ return NULL;
+
+ dma_cap_zero(mask);
+ dma_cap_set(DMA_SLAVE, mask);
+
+ atslave = kmalloc(sizeof(*atslave), GFP_KERNEL);
+- if (!atslave)
++ if (!atslave) {
++ put_device(&dmac_pdev->dev);
+ return NULL;
++ }
+
+ atslave->cfg = ATC_DST_H2SEL_HW | ATC_SRC_H2SEL_HW;
+ /*
+@@ -1685,8 +1689,11 @@ static struct dma_chan *at_dma_xlate(struct of_phandle_args *dma_spec,
+ atslave->dma_dev = &dmac_pdev->dev;
+
+ chan = dma_request_channel(mask, at_dma_filter, atslave);
+- if (!chan)
++ if (!chan) {
++ put_device(&dmac_pdev->dev);
++ kfree(atslave);
+ return NULL;
++ }
+
+ atchan = to_at_dma_chan(chan);
+ atchan->per_if = dma_spec->args[0] & 0xff;
+diff --git a/drivers/dma/dw-edma/dw-edma-core.c b/drivers/dma/dw-edma/dw-edma-core.c
+index ed430ad9b3dd8..b971505b87152 100644
+--- a/drivers/dma/dw-edma/dw-edma-core.c
++++ b/drivers/dma/dw-edma/dw-edma-core.c
+@@ -405,7 +405,7 @@ dw_edma_device_transfer(struct dw_edma_transfer *xfer)
+ if (xfer->cyclic) {
+ burst->dar = xfer->xfer.cyclic.paddr;
+ } else {
+- burst->dar = sg_dma_address(sg);
++ burst->dar = dst_addr;
+ /* Unlike the typical assumption by other
+ * drivers/IPs the peripheral memory isn't
+ * a FIFO memory, in this case, it's a
+@@ -413,14 +413,13 @@ dw_edma_device_transfer(struct dw_edma_transfer *xfer)
+ * and destination addresses are increased
+ * by the same portion (data length)
+ */
+- src_addr += sg_dma_len(sg);
+ }
+ } else {
+ burst->dar = dst_addr;
+ if (xfer->cyclic) {
+ burst->sar = xfer->xfer.cyclic.paddr;
+ } else {
+- burst->sar = sg_dma_address(sg);
++ burst->sar = src_addr;
+ /* Unlike the typical assumption by other
+ * drivers/IPs the peripheral memory isn't
+ * a FIFO memory, in this case, it's a
+@@ -428,12 +427,14 @@ dw_edma_device_transfer(struct dw_edma_transfer *xfer)
+ * and destination addresses are increased
+ * by the same portion (data length)
+ */
+- dst_addr += sg_dma_len(sg);
+ }
+ }
+
+- if (!xfer->cyclic)
++ if (!xfer->cyclic) {
++ src_addr += sg_dma_len(sg);
++ dst_addr += sg_dma_len(sg);
+ sg = sg_next(sg);
++ }
+ }
+
+ return vchan_tx_prep(&chan->vc, &desc->vd, xfer->flags);
+diff --git a/drivers/dma/fsldma.h b/drivers/dma/fsldma.h
+index 56f18ae992332..308bed0a560ac 100644
+--- a/drivers/dma/fsldma.h
++++ b/drivers/dma/fsldma.h
+@@ -205,10 +205,10 @@ struct fsldma_chan {
+ #else
+ static u64 fsl_ioread64(const u64 __iomem *addr)
+ {
+- u32 fsl_addr = lower_32_bits(addr);
+- u64 fsl_addr_hi = (u64)in_le32((u32 *)(fsl_addr + 1)) << 32;
++ u32 val_lo = in_le32((u32 __iomem *)addr);
++ u32 val_hi = in_le32((u32 __iomem *)addr + 1);
+
+- return fsl_addr_hi | in_le32((u32 *)fsl_addr);
++ return ((u64)val_hi << 32) + val_lo;
+ }
+
+ static void fsl_iowrite64(u64 val, u64 __iomem *addr)
+@@ -219,10 +219,10 @@ static void fsl_iowrite64(u64 val, u64 __iomem *addr)
+
+ static u64 fsl_ioread64be(const u64 __iomem *addr)
+ {
+- u32 fsl_addr = lower_32_bits(addr);
+- u64 fsl_addr_hi = (u64)in_be32((u32 *)fsl_addr) << 32;
++ u32 val_hi = in_be32((u32 __iomem *)addr);
++ u32 val_lo = in_be32((u32 __iomem *)addr + 1);
+
+- return fsl_addr_hi | in_be32((u32 *)(fsl_addr + 1));
++ return ((u64)val_hi << 32) + val_lo;
+ }
+
+ static void fsl_iowrite64be(u64 val, u64 __iomem *addr)
+diff --git a/drivers/dma/of-dma.c b/drivers/dma/of-dma.c
+index b2c2b5e8093cf..0db816eb8080d 100644
+--- a/drivers/dma/of-dma.c
++++ b/drivers/dma/of-dma.c
+@@ -71,12 +71,12 @@ static struct dma_chan *of_dma_router_xlate(struct of_phandle_args *dma_spec,
+ return NULL;
+
+ chan = ofdma_target->of_dma_xlate(&dma_spec_target, ofdma_target);
+- if (chan) {
+- chan->router = ofdma->dma_router;
+- chan->route_data = route_data;
+- } else {
++ if (IS_ERR_OR_NULL(chan)) {
+ ofdma->dma_router->route_free(ofdma->dma_router->dev,
+ route_data);
++ } else {
++ chan->router = ofdma->dma_router;
++ chan->route_data = route_data;
+ }
+
+ /*
+diff --git a/drivers/dma/pl330.c b/drivers/dma/pl330.c
+index 88b884cbb7c1b..9d8a235a5b884 100644
+--- a/drivers/dma/pl330.c
++++ b/drivers/dma/pl330.c
+@@ -2788,6 +2788,7 @@ pl330_prep_dma_memcpy(struct dma_chan *chan, dma_addr_t dst,
+ while (burst != (1 << desc->rqcfg.brst_size))
+ desc->rqcfg.brst_size++;
+
++ desc->rqcfg.brst_len = get_burst_len(desc, len);
+ /*
+ * If burst size is smaller than bus width then make sure we only
+ * transfer one at a time to avoid a burst stradling an MFIFO entry.
+@@ -2795,7 +2796,6 @@ pl330_prep_dma_memcpy(struct dma_chan *chan, dma_addr_t dst,
+ if (desc->rqcfg.brst_size * 8 < pl330->pcfg.data_bus_width)
+ desc->rqcfg.brst_len = 1;
+
+- desc->rqcfg.brst_len = get_burst_len(desc, len);
+ desc->bytes_requested = len;
+
+ desc->txd.flags = flags;
+diff --git a/drivers/dma/ti/k3-udma.c b/drivers/dma/ti/k3-udma.c
+index 6c879a7343604..3e488d963f246 100644
+--- a/drivers/dma/ti/k3-udma.c
++++ b/drivers/dma/ti/k3-udma.c
+@@ -2109,9 +2109,9 @@ udma_prep_slave_sg_tr(struct udma_chan *uc, struct scatterlist *sgl,
+ return NULL;
+ }
+
+- cppi5_tr_init(&tr_req[i].flags, CPPI5_TR_TYPE1, false, false,
+- CPPI5_TR_EVENT_SIZE_COMPLETION, 0);
+- cppi5_tr_csf_set(&tr_req[i].flags, CPPI5_TR_CSF_SUPR_EVT);
++ cppi5_tr_init(&tr_req[tr_idx].flags, CPPI5_TR_TYPE1, false,
++ false, CPPI5_TR_EVENT_SIZE_COMPLETION, 0);
++ cppi5_tr_csf_set(&tr_req[tr_idx].flags, CPPI5_TR_CSF_SUPR_EVT);
+
+ tr_req[tr_idx].addr = sg_addr;
+ tr_req[tr_idx].icnt0 = tr0_cnt0;
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index 666ebe04837af..3f7eced92c0c8 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -2822,12 +2822,18 @@ static int amdgpu_dm_mode_config_init(struct amdgpu_device *adev)
+ &dm_atomic_state_funcs);
+
+ r = amdgpu_display_modeset_create_props(adev);
+- if (r)
++ if (r) {
++ dc_release_state(state->context);
++ kfree(state);
+ return r;
++ }
+
+ r = amdgpu_dm_audio_init(adev);
+- if (r)
++ if (r) {
++ dc_release_state(state->context);
++ kfree(state);
+ return r;
++ }
+
+ return 0;
+ }
+@@ -2844,6 +2850,8 @@ static void amdgpu_dm_update_backlight_caps(struct amdgpu_display_manager *dm)
+ #if defined(CONFIG_ACPI)
+ struct amdgpu_dm_backlight_caps caps;
+
++ memset(&caps, 0, sizeof(caps));
++
+ if (dm->backlight_caps.caps_valid)
+ return;
+
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
+index e5ecc5affa1eb..5098fc98cc255 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
+@@ -67,7 +67,7 @@ static ssize_t dm_dp_aux_transfer(struct drm_dp_aux *aux,
+ result = dc_link_aux_transfer_raw(TO_DM_AUX(aux)->ddc_service, &payload,
+ &operation_result);
+
+- if (payload.write)
++ if (payload.write && result >= 0)
+ result = msg->size;
+
+ if (result < 0)
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link.c b/drivers/gpu/drm/amd/display/dc/core/dc_link.c
+index 31aa31c280ee6..885beb0bcc199 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc_link.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc_link.c
+@@ -767,6 +767,7 @@ static bool detect_dp(struct dc_link *link,
+ sink_caps->signal = dp_passive_dongle_detection(link->ddc,
+ sink_caps,
+ audio_support);
++ link->dpcd_caps.dongle_type = sink_caps->dongle_type;
+ }
+
+ return true;
+@@ -3265,10 +3266,10 @@ void core_link_disable_stream(struct pipe_ctx *pipe_ctx)
+ core_link_set_avmute(pipe_ctx, true);
+ }
+
+- dc->hwss.blank_stream(pipe_ctx);
+ #if defined(CONFIG_DRM_AMD_DC_HDCP)
+ update_psp_stream_config(pipe_ctx, true);
+ #endif
++ dc->hwss.blank_stream(pipe_ctx);
+
+ if (pipe_ctx->stream->signal == SIGNAL_TYPE_DISPLAY_PORT_MST)
+ deallocate_mst_payload(pipe_ctx);
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c b/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
+index 91cd884d6f257..7728fd71d1f3a 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
+@@ -4346,9 +4346,9 @@ bool dc_link_get_backlight_level_nits(struct dc_link *link,
+ link->connector_signal != SIGNAL_TYPE_DISPLAY_PORT))
+ return false;
+
+- if (!core_link_read_dpcd(link, DP_SOURCE_BACKLIGHT_CURRENT_PEAK,
++ if (core_link_read_dpcd(link, DP_SOURCE_BACKLIGHT_CURRENT_PEAK,
+ dpcd_backlight_get.raw,
+- sizeof(union dpcd_source_backlight_get)))
++ sizeof(union dpcd_source_backlight_get)) != DC_OK)
+ return false;
+
+ *backlight_millinits_avg =
+@@ -4387,9 +4387,9 @@ bool dc_link_read_default_bl_aux(struct dc_link *link, uint32_t *backlight_milli
+ link->connector_signal != SIGNAL_TYPE_DISPLAY_PORT))
+ return false;
+
+- if (!core_link_read_dpcd(link, DP_SOURCE_BACKLIGHT_LEVEL,
++ if (core_link_read_dpcd(link, DP_SOURCE_BACKLIGHT_LEVEL,
+ (uint8_t *) backlight_millinits,
+- sizeof(uint32_t)))
++ sizeof(uint32_t)) != DC_OK)
+ return false;
+
+ return true;
+diff --git a/drivers/gpu/drm/amd/display/dc/dc_stream.h b/drivers/gpu/drm/amd/display/dc/dc_stream.h
+index 49aad691e687e..ccac2315a903a 100644
+--- a/drivers/gpu/drm/amd/display/dc/dc_stream.h
++++ b/drivers/gpu/drm/amd/display/dc/dc_stream.h
+@@ -222,7 +222,7 @@ struct dc_stream_state {
+ union stream_update_flags update_flags;
+ };
+
+-#define ABM_LEVEL_IMMEDIATE_DISABLE 0xFFFFFFFF
++#define ABM_LEVEL_IMMEDIATE_DISABLE 255
+
+ struct dc_stream_update {
+ struct dc_stream_state *stream;
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_resource.c b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_resource.c
+index 17d5cb422025e..8939541ad7afc 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_resource.c
+@@ -1213,6 +1213,7 @@ static enum dc_status dcn10_validate_global(struct dc *dc, struct dc_state *cont
+ bool video_large = false;
+ bool desktop_large = false;
+ bool dcc_disabled = false;
++ bool mpo_enabled = false;
+
+ for (i = 0; i < context->stream_count; i++) {
+ if (context->stream_status[i].plane_count == 0)
+@@ -1221,6 +1222,9 @@ static enum dc_status dcn10_validate_global(struct dc *dc, struct dc_state *cont
+ if (context->stream_status[i].plane_count > 2)
+ return DC_FAIL_UNSUPPORTED_1;
+
++ if (context->stream_status[i].plane_count > 1)
++ mpo_enabled = true;
++
+ for (j = 0; j < context->stream_status[i].plane_count; j++) {
+ struct dc_plane_state *plane =
+ context->stream_status[i].plane_states[j];
+@@ -1244,6 +1248,10 @@ static enum dc_status dcn10_validate_global(struct dc *dc, struct dc_state *cont
+ }
+ }
+
++ /* Disable MPO in multi-display configurations. */
++ if (context->stream_count > 1 && mpo_enabled)
++ return DC_FAIL_UNSUPPORTED_1;
++
+ /*
+ * Workaround: On DCN10 there is UMC issue that causes underflow when
+ * playing 4k video on 4k desktop with video downscaled and single channel
+diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/vega10_thermal.c b/drivers/gpu/drm/amd/powerplay/hwmgr/vega10_thermal.c
+index eff87c8968380..0e7ae58180347 100644
+--- a/drivers/gpu/drm/amd/powerplay/hwmgr/vega10_thermal.c
++++ b/drivers/gpu/drm/amd/powerplay/hwmgr/vega10_thermal.c
+@@ -374,8 +374,18 @@ static int vega10_thermal_set_temperature_range(struct pp_hwmgr *hwmgr,
+ /* compare them in unit celsius degree */
+ if (low < range->min / PP_TEMPERATURE_UNITS_PER_CENTIGRADES)
+ low = range->min / PP_TEMPERATURE_UNITS_PER_CENTIGRADES;
+- if (high > tdp_table->usSoftwareShutdownTemp)
+- high = tdp_table->usSoftwareShutdownTemp;
++
++ /*
++ * As a common sense, usSoftwareShutdownTemp should be bigger
++ * than ThotspotLimit. For any invalid usSoftwareShutdownTemp,
++ * we will just use the max possible setting VEGA10_THERMAL_MAXIMUM_ALERT_TEMP
++ * to avoid false alarms.
++ */
++ if ((tdp_table->usSoftwareShutdownTemp >
++ range->hotspot_crit_max / PP_TEMPERATURE_UNITS_PER_CENTIGRADES)) {
++ if (high > tdp_table->usSoftwareShutdownTemp)
++ high = tdp_table->usSoftwareShutdownTemp;
++ }
+
+ if (low > high)
+ return -EINVAL;
+diff --git a/drivers/gpu/drm/amd/powerplay/smumgr/ci_smumgr.c b/drivers/gpu/drm/amd/powerplay/smumgr/ci_smumgr.c
+index ad54f4500af1f..63016c14b9428 100644
+--- a/drivers/gpu/drm/amd/powerplay/smumgr/ci_smumgr.c
++++ b/drivers/gpu/drm/amd/powerplay/smumgr/ci_smumgr.c
+@@ -37,6 +37,7 @@
+ #include "cgs_common.h"
+ #include "atombios.h"
+ #include "pppcielanes.h"
++#include "smu7_smumgr.h"
+
+ #include "smu/smu_7_0_1_d.h"
+ #include "smu/smu_7_0_1_sh_mask.h"
+@@ -2948,6 +2949,7 @@ const struct pp_smumgr_func ci_smu_funcs = {
+ .request_smu_load_specific_fw = NULL,
+ .send_msg_to_smc = ci_send_msg_to_smc,
+ .send_msg_to_smc_with_parameter = ci_send_msg_to_smc_with_parameter,
++ .get_argument = smu7_get_argument,
+ .download_pptable_settings = NULL,
+ .upload_pptable_settings = NULL,
+ .get_offsetof = ci_get_offsetof,
+diff --git a/drivers/gpu/drm/i915/display/intel_hdcp.c b/drivers/gpu/drm/i915/display/intel_hdcp.c
+index 2cbc4619b4ce6..525658fd201fd 100644
+--- a/drivers/gpu/drm/i915/display/intel_hdcp.c
++++ b/drivers/gpu/drm/i915/display/intel_hdcp.c
+@@ -336,8 +336,10 @@ int intel_hdcp_validate_v_prime(struct intel_connector *connector,
+
+ /* Fill up the empty slots in sha_text and write it out */
+ sha_empty = sizeof(sha_text) - sha_leftovers;
+- for (j = 0; j < sha_empty; j++)
+- sha_text |= ksv[j] << ((sizeof(sha_text) - j - 1) * 8);
++ for (j = 0; j < sha_empty; j++) {
++ u8 off = ((sizeof(sha_text) - j - 1 - sha_leftovers) * 8);
++ sha_text |= ksv[j] << off;
++ }
+
+ ret = intel_write_sha_text(dev_priv, sha_text);
+ if (ret < 0)
+@@ -435,7 +437,7 @@ int intel_hdcp_validate_v_prime(struct intel_connector *connector,
+ /* Write 32 bits of text */
+ intel_de_write(dev_priv, HDCP_REP_CTL,
+ rep_ctl | HDCP_SHA1_TEXT_32);
+- sha_text |= bstatus[0] << 24 | bstatus[1] << 16;
++ sha_text |= bstatus[0] << 8 | bstatus[1];
+ ret = intel_write_sha_text(dev_priv, sha_text);
+ if (ret < 0)
+ return ret;
+@@ -450,17 +452,29 @@ int intel_hdcp_validate_v_prime(struct intel_connector *connector,
+ return ret;
+ sha_idx += sizeof(sha_text);
+ }
++
++ /*
++ * Terminate the SHA-1 stream by hand. For the other leftover
++ * cases this is appended by the hardware.
++ */
++ intel_de_write(dev_priv, HDCP_REP_CTL,
++ rep_ctl | HDCP_SHA1_TEXT_32);
++ sha_text = DRM_HDCP_SHA1_TERMINATOR << 24;
++ ret = intel_write_sha_text(dev_priv, sha_text);
++ if (ret < 0)
++ return ret;
++ sha_idx += sizeof(sha_text);
+ } else if (sha_leftovers == 3) {
+- /* Write 32 bits of text */
++ /* Write 32 bits of text (filled from LSB) */
+ intel_de_write(dev_priv, HDCP_REP_CTL,
+ rep_ctl | HDCP_SHA1_TEXT_32);
+- sha_text |= bstatus[0] << 24;
++ sha_text |= bstatus[0];
+ ret = intel_write_sha_text(dev_priv, sha_text);
+ if (ret < 0)
+ return ret;
+ sha_idx += sizeof(sha_text);
+
+- /* Write 8 bits of text, 24 bits of M0 */
++ /* Write 8 bits of text (filled from LSB), 24 bits of M0 */
+ intel_de_write(dev_priv, HDCP_REP_CTL,
+ rep_ctl | HDCP_SHA1_TEXT_8);
+ ret = intel_write_sha_text(dev_priv, bstatus[1]);
+diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
+index 1d330204c465c..2dd1cf1ffbe25 100644
+--- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
++++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
+@@ -207,6 +207,16 @@ static int a6xx_gmu_start(struct a6xx_gmu *gmu)
+ {
+ int ret;
+ u32 val;
++ u32 mask, reset_val;
++
++ val = gmu_read(gmu, REG_A6XX_GMU_CM3_DTCM_START + 0xff8);
++ if (val <= 0x20010004) {
++ mask = 0xffffffff;
++ reset_val = 0xbabeface;
++ } else {
++ mask = 0x1ff;
++ reset_val = 0x100;
++ }
+
+ gmu_write(gmu, REG_A6XX_GMU_CM3_SYSRESET, 1);
+
+@@ -218,7 +228,7 @@ static int a6xx_gmu_start(struct a6xx_gmu *gmu)
+ gmu_write(gmu, REG_A6XX_GMU_CM3_SYSRESET, 0);
+
+ ret = gmu_poll_timeout(gmu, REG_A6XX_GMU_CM3_FW_INIT_RESULT, val,
+- val == 0xbabeface, 100, 10000);
++ (val & mask) == reset_val, 100, 10000);
+
+ if (ret)
+ DRM_DEV_ERROR(gmu->dev, "GMU firmware initialization timed out\n");
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c
+index 969d95aa873c4..1026e1e5bec10 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c
+@@ -827,7 +827,7 @@ static void dpu_crtc_enable(struct drm_crtc *crtc,
+ {
+ struct dpu_crtc *dpu_crtc;
+ struct drm_encoder *encoder;
+- bool request_bandwidth;
++ bool request_bandwidth = false;
+
+ if (!crtc) {
+ DPU_ERROR("invalid crtc\n");
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
+index 0946a86b37b28..c0cd936314e66 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
+@@ -586,7 +586,10 @@ static int dpu_encoder_virt_atomic_check(
+ dpu_kms = to_dpu_kms(priv->kms);
+ mode = &crtc_state->mode;
+ adj_mode = &crtc_state->adjusted_mode;
+- global_state = dpu_kms_get_existing_global_state(dpu_kms);
++ global_state = dpu_kms_get_global_state(crtc_state->state);
++ if (IS_ERR(global_state))
++ return PTR_ERR(global_state);
++
+ trace_dpu_enc_atomic_check(DRMID(drm_enc));
+
+ /*
+@@ -621,12 +624,15 @@ static int dpu_encoder_virt_atomic_check(
+ /* Reserve dynamic resources now. */
+ if (!ret) {
+ /*
+- * Avoid reserving resources when mode set is pending. Topology
+- * info may not be available to complete reservation.
++ * Release and Allocate resources on every modeset
++ * Dont allocate when active is false.
+ */
+ if (drm_atomic_crtc_needs_modeset(crtc_state)) {
+- ret = dpu_rm_reserve(&dpu_kms->rm, global_state,
+- drm_enc, crtc_state, topology);
++ dpu_rm_release(global_state, drm_enc);
++
++ if (!crtc_state->active_changed || crtc_state->active)
++ ret = dpu_rm_reserve(&dpu_kms->rm, global_state,
++ drm_enc, crtc_state, topology);
+ }
+ }
+
+@@ -1175,7 +1181,6 @@ static void dpu_encoder_virt_disable(struct drm_encoder *drm_enc)
+ struct dpu_encoder_virt *dpu_enc = NULL;
+ struct msm_drm_private *priv;
+ struct dpu_kms *dpu_kms;
+- struct dpu_global_state *global_state;
+ int i = 0;
+
+ if (!drm_enc) {
+@@ -1194,7 +1199,6 @@ static void dpu_encoder_virt_disable(struct drm_encoder *drm_enc)
+
+ priv = drm_enc->dev->dev_private;
+ dpu_kms = to_dpu_kms(priv->kms);
+- global_state = dpu_kms_get_existing_global_state(dpu_kms);
+
+ trace_dpu_enc_disable(DRMID(drm_enc));
+
+@@ -1224,8 +1228,6 @@ static void dpu_encoder_virt_disable(struct drm_encoder *drm_enc)
+
+ DPU_DEBUG_ENC(dpu_enc, "encoder disabled\n");
+
+- dpu_rm_release(global_state, drm_enc);
+-
+ mutex_unlock(&dpu_enc->enc_lock);
+ }
+
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c
+index 3b9c33e694bf4..994d23bad3870 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c
+@@ -866,9 +866,9 @@ static int dpu_plane_atomic_check(struct drm_plane *plane,
+ crtc_state = drm_atomic_get_new_crtc_state(state->state,
+ state->crtc);
+
+- min_scale = FRAC_16_16(1, pdpu->pipe_sblk->maxdwnscale);
++ min_scale = FRAC_16_16(1, pdpu->pipe_sblk->maxupscale);
+ ret = drm_atomic_helper_check_plane_state(state, crtc_state, min_scale,
+- pdpu->pipe_sblk->maxupscale << 16,
++ pdpu->pipe_sblk->maxdwnscale << 16,
+ true, true);
+ if (ret) {
+ DPU_DEBUG_PLANE(pdpu, "Check plane state failed (%d)\n", ret);
+diff --git a/drivers/gpu/drm/msm/msm_atomic.c b/drivers/gpu/drm/msm/msm_atomic.c
+index 5ccfad794c6a5..561bfa48841c3 100644
+--- a/drivers/gpu/drm/msm/msm_atomic.c
++++ b/drivers/gpu/drm/msm/msm_atomic.c
+@@ -27,6 +27,34 @@ int msm_atomic_prepare_fb(struct drm_plane *plane,
+ return msm_framebuffer_prepare(new_state->fb, kms->aspace);
+ }
+
++/*
++ * Helpers to control vblanks while we flush.. basically just to ensure
++ * that vblank accounting is switched on, so we get valid seqn/timestamp
++ * on pageflip events (if requested)
++ */
++
++static void vblank_get(struct msm_kms *kms, unsigned crtc_mask)
++{
++ struct drm_crtc *crtc;
++
++ for_each_crtc_mask(kms->dev, crtc, crtc_mask) {
++ if (!crtc->state->active)
++ continue;
++ drm_crtc_vblank_get(crtc);
++ }
++}
++
++static void vblank_put(struct msm_kms *kms, unsigned crtc_mask)
++{
++ struct drm_crtc *crtc;
++
++ for_each_crtc_mask(kms->dev, crtc, crtc_mask) {
++ if (!crtc->state->active)
++ continue;
++ drm_crtc_vblank_put(crtc);
++ }
++}
++
+ static void msm_atomic_async_commit(struct msm_kms *kms, int crtc_idx)
+ {
+ unsigned crtc_mask = BIT(crtc_idx);
+@@ -44,6 +72,8 @@ static void msm_atomic_async_commit(struct msm_kms *kms, int crtc_idx)
+
+ kms->funcs->enable_commit(kms);
+
++ vblank_get(kms, crtc_mask);
++
+ /*
+ * Flush hardware updates:
+ */
+@@ -58,6 +88,8 @@ static void msm_atomic_async_commit(struct msm_kms *kms, int crtc_idx)
+ kms->funcs->wait_flush(kms, crtc_mask);
+ trace_msm_atomic_wait_flush_finish(crtc_mask);
+
++ vblank_put(kms, crtc_mask);
++
+ mutex_lock(&kms->commit_lock);
+ kms->funcs->complete_commit(kms, crtc_mask);
+ mutex_unlock(&kms->commit_lock);
+@@ -221,6 +253,8 @@ void msm_atomic_commit_tail(struct drm_atomic_state *state)
+ */
+ kms->pending_crtc_mask &= ~crtc_mask;
+
++ vblank_get(kms, crtc_mask);
++
+ /*
+ * Flush hardware updates:
+ */
+@@ -235,6 +269,8 @@ void msm_atomic_commit_tail(struct drm_atomic_state *state)
+ kms->funcs->wait_flush(kms, crtc_mask);
+ trace_msm_atomic_wait_flush_finish(crtc_mask);
+
++ vblank_put(kms, crtc_mask);
++
+ mutex_lock(&kms->commit_lock);
+ kms->funcs->complete_commit(kms, crtc_mask);
+ mutex_unlock(&kms->commit_lock);
+diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c
+index f6ce40bf36998..b4d61af7a104e 100644
+--- a/drivers/gpu/drm/msm/msm_drv.c
++++ b/drivers/gpu/drm/msm/msm_drv.c
+@@ -1328,6 +1328,13 @@ static int msm_pdev_remove(struct platform_device *pdev)
+ return 0;
+ }
+
++static void msm_pdev_shutdown(struct platform_device *pdev)
++{
++ struct drm_device *drm = platform_get_drvdata(pdev);
++
++ drm_atomic_helper_shutdown(drm);
++}
++
+ static const struct of_device_id dt_match[] = {
+ { .compatible = "qcom,mdp4", .data = (void *)KMS_MDP4 },
+ { .compatible = "qcom,mdss", .data = (void *)KMS_MDP5 },
+@@ -1340,6 +1347,7 @@ MODULE_DEVICE_TABLE(of, dt_match);
+ static struct platform_driver msm_platform_driver = {
+ .probe = msm_pdev_probe,
+ .remove = msm_pdev_remove,
++ .shutdown = msm_pdev_shutdown,
+ .driver = {
+ .name = "msm",
+ .of_match_table = dt_match,
+diff --git a/drivers/gpu/drm/omapdrm/omap_crtc.c b/drivers/gpu/drm/omapdrm/omap_crtc.c
+index 6d40914675dad..328a4a74f534e 100644
+--- a/drivers/gpu/drm/omapdrm/omap_crtc.c
++++ b/drivers/gpu/drm/omapdrm/omap_crtc.c
+@@ -451,11 +451,12 @@ static void omap_crtc_atomic_enable(struct drm_crtc *crtc,
+ if (omap_state->manually_updated)
+ return;
+
+- spin_lock_irq(&crtc->dev->event_lock);
+ drm_crtc_vblank_on(crtc);
++
+ ret = drm_crtc_vblank_get(crtc);
+ WARN_ON(ret != 0);
+
++ spin_lock_irq(&crtc->dev->event_lock);
+ omap_crtc_arm_event(crtc);
+ spin_unlock_irq(&crtc->dev->event_lock);
+ }
+diff --git a/drivers/hid/hid-ids.h b/drivers/hid/hid-ids.h
+index 7cfa9785bfbb0..6ea3619842d8d 100644
+--- a/drivers/hid/hid-ids.h
++++ b/drivers/hid/hid-ids.h
+@@ -727,6 +727,9 @@
+ #define USB_DEVICE_ID_LENOVO_TPPRODOCK 0x6067
+ #define USB_DEVICE_ID_LENOVO_X1_COVER 0x6085
+ #define USB_DEVICE_ID_LENOVO_PIXART_USB_MOUSE_608D 0x608d
++#define USB_DEVICE_ID_LENOVO_PIXART_USB_MOUSE_6019 0x6019
++#define USB_DEVICE_ID_LENOVO_PIXART_USB_MOUSE_602E 0x602e
++#define USB_DEVICE_ID_LENOVO_PIXART_USB_MOUSE_6093 0x6093
+
+ #define USB_VENDOR_ID_LG 0x1fd2
+ #define USB_DEVICE_ID_LG_MULTITOUCH 0x0064
+diff --git a/drivers/hid/hid-quirks.c b/drivers/hid/hid-quirks.c
+index c242150d35a3a..a65aef6a322fb 100644
+--- a/drivers/hid/hid-quirks.c
++++ b/drivers/hid/hid-quirks.c
+@@ -105,6 +105,9 @@ static const struct hid_device_id hid_quirks[] = {
+ { HID_USB_DEVICE(USB_VENDOR_ID_KYE, USB_DEVICE_ID_KYE_EASYPEN_M406XE), HID_QUIRK_MULTI_INPUT },
+ { HID_USB_DEVICE(USB_VENDOR_ID_KYE, USB_DEVICE_ID_PIXART_USB_OPTICAL_MOUSE_ID2), HID_QUIRK_ALWAYS_POLL },
+ { HID_USB_DEVICE(USB_VENDOR_ID_LENOVO, USB_DEVICE_ID_LENOVO_PIXART_USB_MOUSE_608D), HID_QUIRK_ALWAYS_POLL },
++ { HID_USB_DEVICE(USB_VENDOR_ID_LENOVO, USB_DEVICE_ID_LENOVO_PIXART_USB_MOUSE_6019), HID_QUIRK_ALWAYS_POLL },
++ { HID_USB_DEVICE(USB_VENDOR_ID_LENOVO, USB_DEVICE_ID_LENOVO_PIXART_USB_MOUSE_602E), HID_QUIRK_ALWAYS_POLL },
++ { HID_USB_DEVICE(USB_VENDOR_ID_LENOVO, USB_DEVICE_ID_LENOVO_PIXART_USB_MOUSE_6093), HID_QUIRK_ALWAYS_POLL },
+ { HID_USB_DEVICE(USB_VENDOR_ID_LOGITECH, USB_DEVICE_ID_LOGITECH_C007), HID_QUIRK_ALWAYS_POLL },
+ { HID_USB_DEVICE(USB_VENDOR_ID_LOGITECH, USB_DEVICE_ID_LOGITECH_C077), HID_QUIRK_ALWAYS_POLL },
+ { HID_USB_DEVICE(USB_VENDOR_ID_LOGITECH, USB_DEVICE_ID_LOGITECH_KEYBOARD_G710_PLUS), HID_QUIRK_NOGET },
+diff --git a/drivers/hv/hv_util.c b/drivers/hv/hv_util.c
+index 92ee0fe4c919e..a4e8d96513c22 100644
+--- a/drivers/hv/hv_util.c
++++ b/drivers/hv/hv_util.c
+@@ -282,26 +282,52 @@ static struct {
+ spinlock_t lock;
+ } host_ts;
+
+-static struct timespec64 hv_get_adj_host_time(void)
++static inline u64 reftime_to_ns(u64 reftime)
+ {
+- struct timespec64 ts;
+- u64 newtime, reftime;
++ return (reftime - WLTIMEDELTA) * 100;
++}
++
++/*
++ * Hard coded threshold for host timesync delay: 600 seconds
++ */
++static const u64 HOST_TIMESYNC_DELAY_THRESH = 600 * (u64)NSEC_PER_SEC;
++
++static int hv_get_adj_host_time(struct timespec64 *ts)
++{
++ u64 newtime, reftime, timediff_adj;
+ unsigned long flags;
++ int ret = 0;
+
+ spin_lock_irqsave(&host_ts.lock, flags);
+ reftime = hv_read_reference_counter();
+- newtime = host_ts.host_time + (reftime - host_ts.ref_time);
+- ts = ns_to_timespec64((newtime - WLTIMEDELTA) * 100);
++
++ /*
++ * We need to let the caller know that last update from host
++ * is older than the max allowable threshold. clock_gettime()
++ * and PTP ioctl do not have a documented error that we could
++ * return for this specific case. Use ESTALE to report this.
++ */
++ timediff_adj = reftime - host_ts.ref_time;
++ if (timediff_adj * 100 > HOST_TIMESYNC_DELAY_THRESH) {
++ pr_warn_once("TIMESYNC IC: Stale time stamp, %llu nsecs old\n",
++ (timediff_adj * 100));
++ ret = -ESTALE;
++ }
++
++ newtime = host_ts.host_time + timediff_adj;
++ *ts = ns_to_timespec64(reftime_to_ns(newtime));
+ spin_unlock_irqrestore(&host_ts.lock, flags);
+
+- return ts;
++ return ret;
+ }
+
+ static void hv_set_host_time(struct work_struct *work)
+ {
+- struct timespec64 ts = hv_get_adj_host_time();
+
+- do_settimeofday64(&ts);
++ struct timespec64 ts;
++
++ if (!hv_get_adj_host_time(&ts))
++ do_settimeofday64(&ts);
+ }
+
+ /*
+@@ -361,10 +387,23 @@ static void timesync_onchannelcallback(void *context)
+ struct ictimesync_ref_data *refdata;
+ u8 *time_txf_buf = util_timesynch.recv_buffer;
+
+- vmbus_recvpacket(channel, time_txf_buf,
+- HV_HYP_PAGE_SIZE, &recvlen, &requestid);
++ /*
++ * Drain the ring buffer and use the last packet to update
++ * host_ts
++ */
++ while (1) {
++ int ret = vmbus_recvpacket(channel, time_txf_buf,
++ HV_HYP_PAGE_SIZE, &recvlen,
++ &requestid);
++ if (ret) {
++ pr_warn_once("TimeSync IC pkt recv failed (Err: %d)\n",
++ ret);
++ break;
++ }
++
++ if (!recvlen)
++ break;
+
+- if (recvlen > 0) {
+ icmsghdrp = (struct icmsg_hdr *)&time_txf_buf[
+ sizeof(struct vmbuspipe_hdr)];
+
+@@ -622,9 +661,7 @@ static int hv_ptp_adjtime(struct ptp_clock_info *ptp, s64 delta)
+
+ static int hv_ptp_gettime(struct ptp_clock_info *info, struct timespec64 *ts)
+ {
+- *ts = hv_get_adj_host_time();
+-
+- return 0;
++ return hv_get_adj_host_time(ts);
+ }
+
+ static struct ptp_clock_info ptp_hyperv_info = {
+diff --git a/drivers/hwmon/applesmc.c b/drivers/hwmon/applesmc.c
+index 3166184093157..a18887990f4a2 100644
+--- a/drivers/hwmon/applesmc.c
++++ b/drivers/hwmon/applesmc.c
+@@ -753,15 +753,18 @@ static ssize_t applesmc_light_show(struct device *dev,
+ }
+
+ ret = applesmc_read_key(LIGHT_SENSOR_LEFT_KEY, buffer, data_length);
++ if (ret)
++ goto out;
+ /* newer macbooks report a single 10-bit bigendian value */
+ if (data_length == 10) {
+ left = be16_to_cpu(*(__be16 *)(buffer + 6)) >> 2;
+ goto out;
+ }
+ left = buffer[2];
++
++ ret = applesmc_read_key(LIGHT_SENSOR_RIGHT_KEY, buffer, data_length);
+ if (ret)
+ goto out;
+- ret = applesmc_read_key(LIGHT_SENSOR_RIGHT_KEY, buffer, data_length);
+ right = buffer[2];
+
+ out:
+@@ -810,12 +813,11 @@ static ssize_t applesmc_show_fan_speed(struct device *dev,
+ to_index(attr));
+
+ ret = applesmc_read_key(newkey, buffer, 2);
+- speed = ((buffer[0] << 8 | buffer[1]) >> 2);
+-
+ if (ret)
+ return ret;
+- else
+- return snprintf(sysfsbuf, PAGE_SIZE, "%u\n", speed);
++
++ speed = ((buffer[0] << 8 | buffer[1]) >> 2);
++ return snprintf(sysfsbuf, PAGE_SIZE, "%u\n", speed);
+ }
+
+ static ssize_t applesmc_store_fan_speed(struct device *dev,
+@@ -851,12 +853,11 @@ static ssize_t applesmc_show_fan_manual(struct device *dev,
+ u8 buffer[2];
+
+ ret = applesmc_read_key(FANS_MANUAL, buffer, 2);
+- manual = ((buffer[0] << 8 | buffer[1]) >> to_index(attr)) & 0x01;
+-
+ if (ret)
+ return ret;
+- else
+- return snprintf(sysfsbuf, PAGE_SIZE, "%d\n", manual);
++
++ manual = ((buffer[0] << 8 | buffer[1]) >> to_index(attr)) & 0x01;
++ return snprintf(sysfsbuf, PAGE_SIZE, "%d\n", manual);
+ }
+
+ static ssize_t applesmc_store_fan_manual(struct device *dev,
+@@ -872,10 +873,11 @@ static ssize_t applesmc_store_fan_manual(struct device *dev,
+ return -EINVAL;
+
+ ret = applesmc_read_key(FANS_MANUAL, buffer, 2);
+- val = (buffer[0] << 8 | buffer[1]);
+ if (ret)
+ goto out;
+
++ val = (buffer[0] << 8 | buffer[1]);
++
+ if (input)
+ val = val | (0x01 << to_index(attr));
+ else
+@@ -951,13 +953,12 @@ static ssize_t applesmc_key_count_show(struct device *dev,
+ u32 count;
+
+ ret = applesmc_read_key(KEY_COUNT_KEY, buffer, 4);
+- count = ((u32)buffer[0]<<24) + ((u32)buffer[1]<<16) +
+- ((u32)buffer[2]<<8) + buffer[3];
+-
+ if (ret)
+ return ret;
+- else
+- return snprintf(sysfsbuf, PAGE_SIZE, "%d\n", count);
++
++ count = ((u32)buffer[0]<<24) + ((u32)buffer[1]<<16) +
++ ((u32)buffer[2]<<8) + buffer[3];
++ return snprintf(sysfsbuf, PAGE_SIZE, "%d\n", count);
+ }
+
+ static ssize_t applesmc_key_at_index_read_show(struct device *dev,
+diff --git a/drivers/hwmon/pmbus/isl68137.c b/drivers/hwmon/pmbus/isl68137.c
+index 0c622711ef7e0..58aa95a3c010c 100644
+--- a/drivers/hwmon/pmbus/isl68137.c
++++ b/drivers/hwmon/pmbus/isl68137.c
+@@ -67,6 +67,7 @@ enum variants {
+ raa_dmpvr1_2rail,
+ raa_dmpvr2_1rail,
+ raa_dmpvr2_2rail,
++ raa_dmpvr2_2rail_nontc,
+ raa_dmpvr2_3rail,
+ raa_dmpvr2_hv,
+ };
+@@ -241,6 +242,10 @@ static int isl68137_probe(struct i2c_client *client,
+ info->pages = 1;
+ info->read_word_data = raa_dmpvr2_read_word_data;
+ break;
++ case raa_dmpvr2_2rail_nontc:
++ info->func[0] &= ~PMBUS_HAVE_TEMP;
++ info->func[1] &= ~PMBUS_HAVE_TEMP;
++ fallthrough;
+ case raa_dmpvr2_2rail:
+ info->pages = 2;
+ info->read_word_data = raa_dmpvr2_read_word_data;
+@@ -304,7 +309,7 @@ static const struct i2c_device_id raa_dmpvr_id[] = {
+ {"raa228000", raa_dmpvr2_hv},
+ {"raa228004", raa_dmpvr2_hv},
+ {"raa228006", raa_dmpvr2_hv},
+- {"raa228228", raa_dmpvr2_2rail},
++ {"raa228228", raa_dmpvr2_2rail_nontc},
+ {"raa229001", raa_dmpvr2_2rail},
+ {"raa229004", raa_dmpvr2_2rail},
+ {}
+diff --git a/drivers/i2c/busses/i2c-bcm-iproc.c b/drivers/i2c/busses/i2c-bcm-iproc.c
+index 688e928188214..d8295b1c379d1 100644
+--- a/drivers/i2c/busses/i2c-bcm-iproc.c
++++ b/drivers/i2c/busses/i2c-bcm-iproc.c
+@@ -720,7 +720,7 @@ static int bcm_iproc_i2c_xfer_internal(struct bcm_iproc_i2c_dev *iproc_i2c,
+
+ /* mark the last byte */
+ if (!process_call && (i == msg->len - 1))
+- val |= 1 << M_TX_WR_STATUS_SHIFT;
++ val |= BIT(M_TX_WR_STATUS_SHIFT);
+
+ iproc_i2c_wr_reg(iproc_i2c, M_TX_OFFSET, val);
+ }
+@@ -738,7 +738,7 @@ static int bcm_iproc_i2c_xfer_internal(struct bcm_iproc_i2c_dev *iproc_i2c,
+ */
+ addr = i2c_8bit_addr_from_msg(msg);
+ /* mark it the last byte out */
+- val = addr | (1 << M_TX_WR_STATUS_SHIFT);
++ val = addr | BIT(M_TX_WR_STATUS_SHIFT);
+ iproc_i2c_wr_reg(iproc_i2c, M_TX_OFFSET, val);
+ }
+
+diff --git a/drivers/iommu/Kconfig b/drivers/iommu/Kconfig
+index b0f308cb7f7c2..201b2718f0755 100644
+--- a/drivers/iommu/Kconfig
++++ b/drivers/iommu/Kconfig
+@@ -143,7 +143,7 @@ config AMD_IOMMU
+ select IOMMU_API
+ select IOMMU_IOVA
+ select IOMMU_DMA
+- depends on X86_64 && PCI && ACPI
++ depends on X86_64 && PCI && ACPI && HAVE_CMPXCHG_DOUBLE
+ help
+ With this option you can enable support for AMD IOMMU hardware in
+ your system. An IOMMU is a hardware component which provides
+diff --git a/drivers/iommu/amd/init.c b/drivers/iommu/amd/init.c
+index 6ebd4825e3206..bf45f8e2c7edd 100644
+--- a/drivers/iommu/amd/init.c
++++ b/drivers/iommu/amd/init.c
+@@ -1518,7 +1518,14 @@ static int __init init_iommu_one(struct amd_iommu *iommu, struct ivhd_header *h)
+ iommu->mmio_phys_end = MMIO_REG_END_OFFSET;
+ else
+ iommu->mmio_phys_end = MMIO_CNTR_CONF_OFFSET;
+- if (((h->efr_attr & (0x1 << IOMMU_FEAT_GASUP_SHIFT)) == 0))
++
++ /*
++ * Note: GA (128-bit IRTE) mode requires cmpxchg16b supports.
++ * GAM also requires GA mode. Therefore, we need to
++ * check cmpxchg16b support before enabling it.
++ */
++ if (!boot_cpu_has(X86_FEATURE_CX16) ||
++ ((h->efr_attr & (0x1 << IOMMU_FEAT_GASUP_SHIFT)) == 0))
+ amd_iommu_guest_ir = AMD_IOMMU_GUEST_IR_LEGACY;
+ break;
+ case 0x11:
+@@ -1527,8 +1534,18 @@ static int __init init_iommu_one(struct amd_iommu *iommu, struct ivhd_header *h)
+ iommu->mmio_phys_end = MMIO_REG_END_OFFSET;
+ else
+ iommu->mmio_phys_end = MMIO_CNTR_CONF_OFFSET;
+- if (((h->efr_reg & (0x1 << IOMMU_EFR_GASUP_SHIFT)) == 0))
++
++ /*
++ * Note: GA (128-bit IRTE) mode requires cmpxchg16b supports.
++ * XT, GAM also requires GA mode. Therefore, we need to
++ * check cmpxchg16b support before enabling them.
++ */
++ if (!boot_cpu_has(X86_FEATURE_CX16) ||
++ ((h->efr_reg & (0x1 << IOMMU_EFR_GASUP_SHIFT)) == 0)) {
+ amd_iommu_guest_ir = AMD_IOMMU_GUEST_IR_LEGACY;
++ break;
++ }
++
+ /*
+ * Note: Since iommu_update_intcapxt() leverages
+ * the IOMMU MMIO access to MSI capability block registers
+diff --git a/drivers/iommu/amd/iommu.c b/drivers/iommu/amd/iommu.c
+index 2f22326ee4dfe..200ee948f6ec1 100644
+--- a/drivers/iommu/amd/iommu.c
++++ b/drivers/iommu/amd/iommu.c
+@@ -3283,6 +3283,7 @@ out:
+ static int modify_irte_ga(u16 devid, int index, struct irte_ga *irte,
+ struct amd_ir_data *data)
+ {
++ bool ret;
+ struct irq_remap_table *table;
+ struct amd_iommu *iommu;
+ unsigned long flags;
+@@ -3300,10 +3301,18 @@ static int modify_irte_ga(u16 devid, int index, struct irte_ga *irte,
+
+ entry = (struct irte_ga *)table->table;
+ entry = &entry[index];
+- entry->lo.fields_remap.valid = 0;
+- entry->hi.val = irte->hi.val;
+- entry->lo.val = irte->lo.val;
+- entry->lo.fields_remap.valid = 1;
++
++ ret = cmpxchg_double(&entry->lo.val, &entry->hi.val,
++ entry->lo.val, entry->hi.val,
++ irte->lo.val, irte->hi.val);
++ /*
++ * We use cmpxchg16 to atomically update the 128-bit IRTE,
++ * and it cannot be updated by the hardware or other processors
++ * behind us, so the return value of cmpxchg16 should be the
++ * same as the old value.
++ */
++ WARN_ON(!ret);
++
+ if (data)
+ data->ref = entry;
+
+@@ -3841,6 +3850,7 @@ int amd_iommu_deactivate_guest_mode(void *data)
+ struct amd_ir_data *ir_data = (struct amd_ir_data *)data;
+ struct irte_ga *entry = (struct irte_ga *) ir_data->entry;
+ struct irq_cfg *cfg = ir_data->cfg;
++ u64 valid = entry->lo.fields_remap.valid;
+
+ if (!AMD_IOMMU_GUEST_IR_VAPIC(amd_iommu_guest_ir) ||
+ !entry || !entry->lo.fields_vapic.guest_mode)
+@@ -3849,6 +3859,7 @@ int amd_iommu_deactivate_guest_mode(void *data)
+ entry->lo.val = 0;
+ entry->hi.val = 0;
+
++ entry->lo.fields_remap.valid = valid;
+ entry->lo.fields_remap.dm = apic->irq_dest_mode;
+ entry->lo.fields_remap.int_type = apic->irq_delivery_mode;
+ entry->hi.fields.vector = cfg->vector;
+diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
+index 04e82f1756010..fbe0b0cc56edf 100644
+--- a/drivers/iommu/intel/iommu.c
++++ b/drivers/iommu/intel/iommu.c
+@@ -123,29 +123,29 @@ static inline unsigned int level_to_offset_bits(int level)
+ return (level - 1) * LEVEL_STRIDE;
+ }
+
+-static inline int pfn_level_offset(unsigned long pfn, int level)
++static inline int pfn_level_offset(u64 pfn, int level)
+ {
+ return (pfn >> level_to_offset_bits(level)) & LEVEL_MASK;
+ }
+
+-static inline unsigned long level_mask(int level)
++static inline u64 level_mask(int level)
+ {
+- return -1UL << level_to_offset_bits(level);
++ return -1ULL << level_to_offset_bits(level);
+ }
+
+-static inline unsigned long level_size(int level)
++static inline u64 level_size(int level)
+ {
+- return 1UL << level_to_offset_bits(level);
++ return 1ULL << level_to_offset_bits(level);
+ }
+
+-static inline unsigned long align_to_level(unsigned long pfn, int level)
++static inline u64 align_to_level(u64 pfn, int level)
+ {
+ return (pfn + level_size(level) - 1) & level_mask(level);
+ }
+
+ static inline unsigned long lvl_to_nr_pages(unsigned int lvl)
+ {
+- return 1 << min_t(int, (lvl - 1) * LEVEL_STRIDE, MAX_AGAW_PFN_WIDTH);
++ return 1UL << min_t(int, (lvl - 1) * LEVEL_STRIDE, MAX_AGAW_PFN_WIDTH);
+ }
+
+ /* VT-d pages must always be _smaller_ than MM pages. Otherwise things
+diff --git a/drivers/iommu/intel/irq_remapping.c b/drivers/iommu/intel/irq_remapping.c
+index aa096b333a991..4828f4fe09ab5 100644
+--- a/drivers/iommu/intel/irq_remapping.c
++++ b/drivers/iommu/intel/irq_remapping.c
+@@ -507,12 +507,18 @@ static void iommu_enable_irq_remapping(struct intel_iommu *iommu)
+
+ /* Enable interrupt-remapping */
+ iommu->gcmd |= DMA_GCMD_IRE;
+- iommu->gcmd &= ~DMA_GCMD_CFI; /* Block compatibility-format MSIs */
+ writel(iommu->gcmd, iommu->reg + DMAR_GCMD_REG);
+-
+ IOMMU_WAIT_OP(iommu, DMAR_GSTS_REG,
+ readl, (sts & DMA_GSTS_IRES), sts);
+
++ /* Block compatibility-format MSIs */
++ if (sts & DMA_GSTS_CFIS) {
++ iommu->gcmd &= ~DMA_GCMD_CFI;
++ writel(iommu->gcmd, iommu->reg + DMAR_GCMD_REG);
++ IOMMU_WAIT_OP(iommu, DMAR_GSTS_REG,
++ readl, !(sts & DMA_GSTS_CFIS), sts);
++ }
++
+ /*
+ * With CFI clear in the Global Command register, we should be
+ * protected from dangerous (i.e. compatibility) interrupts
+diff --git a/drivers/irqchip/irq-ingenic.c b/drivers/irqchip/irq-ingenic.c
+index 9f3da4260ca65..b61a8901ef722 100644
+--- a/drivers/irqchip/irq-ingenic.c
++++ b/drivers/irqchip/irq-ingenic.c
+@@ -125,7 +125,7 @@ static int __init ingenic_intc_of_init(struct device_node *node,
+ irq_reg_writel(gc, IRQ_MSK(32), JZ_REG_INTC_SET_MASK);
+ }
+
+- if (request_irq(parent_irq, intc_cascade, 0,
++ if (request_irq(parent_irq, intc_cascade, IRQF_NO_SUSPEND,
+ "SoC intc cascade interrupt", NULL))
+ pr_err("Failed to register SoC intc cascade interrupt\n");
+ return 0;
+diff --git a/drivers/md/dm-cache-metadata.c b/drivers/md/dm-cache-metadata.c
+index 151aa95775be2..af6d4f898e4c1 100644
+--- a/drivers/md/dm-cache-metadata.c
++++ b/drivers/md/dm-cache-metadata.c
+@@ -537,12 +537,16 @@ static int __create_persistent_data_objects(struct dm_cache_metadata *cmd,
+ CACHE_MAX_CONCURRENT_LOCKS);
+ if (IS_ERR(cmd->bm)) {
+ DMERR("could not create block manager");
+- return PTR_ERR(cmd->bm);
++ r = PTR_ERR(cmd->bm);
++ cmd->bm = NULL;
++ return r;
+ }
+
+ r = __open_or_format_metadata(cmd, may_format_device);
+- if (r)
++ if (r) {
+ dm_block_manager_destroy(cmd->bm);
++ cmd->bm = NULL;
++ }
+
+ return r;
+ }
+diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c
+index 000ddfab5ba05..195ff0974ece9 100644
+--- a/drivers/md/dm-crypt.c
++++ b/drivers/md/dm-crypt.c
+@@ -736,7 +736,7 @@ static int crypt_iv_eboiv_gen(struct crypt_config *cc, u8 *iv,
+ u8 buf[MAX_CIPHER_BLOCKSIZE] __aligned(__alignof__(__le64));
+ struct skcipher_request *req;
+ struct scatterlist src, dst;
+- struct crypto_wait wait;
++ DECLARE_CRYPTO_WAIT(wait);
+ int err;
+
+ req = skcipher_request_alloc(any_tfm(cc), GFP_NOIO);
+@@ -933,7 +933,7 @@ static int crypt_iv_elephant(struct crypt_config *cc, struct dm_crypt_request *d
+ u8 *es, *ks, *data, *data2, *data_offset;
+ struct skcipher_request *req;
+ struct scatterlist *sg, *sg2, src, dst;
+- struct crypto_wait wait;
++ DECLARE_CRYPTO_WAIT(wait);
+ int i, r;
+
+ req = skcipher_request_alloc(elephant->tfm, GFP_NOIO);
+diff --git a/drivers/md/dm-integrity.c b/drivers/md/dm-integrity.c
+index a83a1de1e03fa..8b4289014c00d 100644
+--- a/drivers/md/dm-integrity.c
++++ b/drivers/md/dm-integrity.c
+@@ -2487,6 +2487,7 @@ next_chunk:
+ range.logical_sector = le64_to_cpu(ic->sb->recalc_sector);
+ if (unlikely(range.logical_sector >= ic->provided_data_sectors)) {
+ if (ic->mode == 'B') {
++ block_bitmap_op(ic, ic->recalc_bitmap, 0, ic->provided_data_sectors, BITMAP_OP_CLEAR);
+ DEBUG_print("queue_delayed_work: bitmap_flush_work\n");
+ queue_delayed_work(ic->commit_wq, &ic->bitmap_flush_work, 0);
+ }
+@@ -2564,6 +2565,17 @@ next_chunk:
+ goto err;
+ }
+
++ if (ic->mode == 'B') {
++ sector_t start, end;
++ start = (range.logical_sector >>
++ (ic->sb->log2_sectors_per_block + ic->log2_blocks_per_bitmap_bit)) <<
++ (ic->sb->log2_sectors_per_block + ic->log2_blocks_per_bitmap_bit);
++ end = ((range.logical_sector + range.n_sectors) >>
++ (ic->sb->log2_sectors_per_block + ic->log2_blocks_per_bitmap_bit)) <<
++ (ic->sb->log2_sectors_per_block + ic->log2_blocks_per_bitmap_bit);
++ block_bitmap_op(ic, ic->recalc_bitmap, start, end - start, BITMAP_OP_CLEAR);
++ }
++
+ advance_and_next:
+ cond_resched();
+
+diff --git a/drivers/md/dm-mpath.c b/drivers/md/dm-mpath.c
+index 78cff42d987ee..dc5846971d6cc 100644
+--- a/drivers/md/dm-mpath.c
++++ b/drivers/md/dm-mpath.c
+@@ -1247,17 +1247,25 @@ static void multipath_wait_for_pg_init_completion(struct multipath *m)
+ static void flush_multipath_work(struct multipath *m)
+ {
+ if (m->hw_handler_name) {
+- set_bit(MPATHF_PG_INIT_DISABLED, &m->flags);
+- smp_mb__after_atomic();
++ unsigned long flags;
++
++ if (!atomic_read(&m->pg_init_in_progress))
++ goto skip;
++
++ spin_lock_irqsave(&m->lock, flags);
++ if (atomic_read(&m->pg_init_in_progress) &&
++ !test_and_set_bit(MPATHF_PG_INIT_DISABLED, &m->flags)) {
++ spin_unlock_irqrestore(&m->lock, flags);
+
+- if (atomic_read(&m->pg_init_in_progress))
+ flush_workqueue(kmpath_handlerd);
+- multipath_wait_for_pg_init_completion(m);
++ multipath_wait_for_pg_init_completion(m);
+
+- clear_bit(MPATHF_PG_INIT_DISABLED, &m->flags);
+- smp_mb__after_atomic();
++ spin_lock_irqsave(&m->lock, flags);
++ clear_bit(MPATHF_PG_INIT_DISABLED, &m->flags);
++ }
++ spin_unlock_irqrestore(&m->lock, flags);
+ }
+-
++skip:
+ if (m->queue_mode == DM_TYPE_BIO_BASED)
+ flush_work(&m->process_queued_bios);
+ flush_work(&m->trigger_event);
+diff --git a/drivers/md/dm-thin-metadata.c b/drivers/md/dm-thin-metadata.c
+index 76b6b323bf4bd..b461836b6d263 100644
+--- a/drivers/md/dm-thin-metadata.c
++++ b/drivers/md/dm-thin-metadata.c
+@@ -739,12 +739,16 @@ static int __create_persistent_data_objects(struct dm_pool_metadata *pmd, bool f
+ THIN_MAX_CONCURRENT_LOCKS);
+ if (IS_ERR(pmd->bm)) {
+ DMERR("could not create block manager");
+- return PTR_ERR(pmd->bm);
++ r = PTR_ERR(pmd->bm);
++ pmd->bm = NULL;
++ return r;
+ }
+
+ r = __open_or_format_metadata(pmd, format_device);
+- if (r)
++ if (r) {
+ dm_block_manager_destroy(pmd->bm);
++ pmd->bm = NULL;
++ }
+
+ return r;
+ }
+@@ -954,7 +958,7 @@ int dm_pool_metadata_close(struct dm_pool_metadata *pmd)
+ }
+
+ pmd_write_lock_in_core(pmd);
+- if (!dm_bm_is_read_only(pmd->bm) && !pmd->fail_io) {
++ if (!pmd->fail_io && !dm_bm_is_read_only(pmd->bm)) {
+ r = __commit_transaction(pmd);
+ if (r < 0)
+ DMWARN("%s: __commit_transaction() failed, error = %d",
+diff --git a/drivers/md/dm-writecache.c b/drivers/md/dm-writecache.c
+index 5358894bb9fdc..1533419f18758 100644
+--- a/drivers/md/dm-writecache.c
++++ b/drivers/md/dm-writecache.c
+@@ -231,6 +231,7 @@ static int persistent_memory_claim(struct dm_writecache *wc)
+ pfn_t pfn;
+ int id;
+ struct page **pages;
++ sector_t offset;
+
+ wc->memory_vmapped = false;
+
+@@ -245,9 +246,16 @@ static int persistent_memory_claim(struct dm_writecache *wc)
+ goto err1;
+ }
+
++ offset = get_start_sect(wc->ssd_dev->bdev);
++ if (offset & (PAGE_SIZE / 512 - 1)) {
++ r = -EINVAL;
++ goto err1;
++ }
++ offset >>= PAGE_SHIFT - 9;
++
+ id = dax_read_lock();
+
+- da = dax_direct_access(wc->ssd_dev->dax_dev, 0, p, &wc->memory_map, &pfn);
++ da = dax_direct_access(wc->ssd_dev->dax_dev, offset, p, &wc->memory_map, &pfn);
+ if (da < 0) {
+ wc->memory_map = NULL;
+ r = da;
+@@ -269,7 +277,7 @@ static int persistent_memory_claim(struct dm_writecache *wc)
+ i = 0;
+ do {
+ long daa;
+- daa = dax_direct_access(wc->ssd_dev->dax_dev, i, p - i,
++ daa = dax_direct_access(wc->ssd_dev->dax_dev, offset + i, p - i,
+ NULL, &pfn);
+ if (daa <= 0) {
+ r = daa ? daa : -EINVAL;
+diff --git a/drivers/md/persistent-data/dm-block-manager.c b/drivers/md/persistent-data/dm-block-manager.c
+index 749ec268d957d..54c089a50b152 100644
+--- a/drivers/md/persistent-data/dm-block-manager.c
++++ b/drivers/md/persistent-data/dm-block-manager.c
+@@ -493,7 +493,7 @@ int dm_bm_write_lock(struct dm_block_manager *bm,
+ void *p;
+ int r;
+
+- if (bm->read_only)
++ if (dm_bm_is_read_only(bm))
+ return -EPERM;
+
+ p = dm_bufio_read(bm->bufio, b, (struct dm_buffer **) result);
+@@ -562,7 +562,7 @@ int dm_bm_write_lock_zero(struct dm_block_manager *bm,
+ struct buffer_aux *aux;
+ void *p;
+
+- if (bm->read_only)
++ if (dm_bm_is_read_only(bm))
+ return -EPERM;
+
+ p = dm_bufio_new(bm->bufio, b, (struct dm_buffer **) result);
+@@ -602,7 +602,7 @@ EXPORT_SYMBOL_GPL(dm_bm_unlock);
+
+ int dm_bm_flush(struct dm_block_manager *bm)
+ {
+- if (bm->read_only)
++ if (dm_bm_is_read_only(bm))
+ return -EPERM;
+
+ return dm_bufio_write_dirty_buffers(bm->bufio);
+@@ -616,19 +616,21 @@ void dm_bm_prefetch(struct dm_block_manager *bm, dm_block_t b)
+
+ bool dm_bm_is_read_only(struct dm_block_manager *bm)
+ {
+- return bm->read_only;
++ return (bm ? bm->read_only : true);
+ }
+ EXPORT_SYMBOL_GPL(dm_bm_is_read_only);
+
+ void dm_bm_set_read_only(struct dm_block_manager *bm)
+ {
+- bm->read_only = true;
++ if (bm)
++ bm->read_only = true;
+ }
+ EXPORT_SYMBOL_GPL(dm_bm_set_read_only);
+
+ void dm_bm_set_read_write(struct dm_block_manager *bm)
+ {
+- bm->read_only = false;
++ if (bm)
++ bm->read_only = false;
+ }
+ EXPORT_SYMBOL_GPL(dm_bm_set_read_write);
+
+diff --git a/drivers/media/i2c/Kconfig b/drivers/media/i2c/Kconfig
+index da11036ad804d..6b1a6851ccb0b 100644
+--- a/drivers/media/i2c/Kconfig
++++ b/drivers/media/i2c/Kconfig
+@@ -728,7 +728,7 @@ config VIDEO_HI556
+ config VIDEO_IMX214
+ tristate "Sony IMX214 sensor support"
+ depends on GPIOLIB && I2C && VIDEO_V4L2
+- depends on V4L2_FWNODE
++ select V4L2_FWNODE
+ select MEDIA_CONTROLLER
+ select VIDEO_V4L2_SUBDEV_API
+ select REGMAP_I2C
+diff --git a/drivers/media/rc/rc-main.c b/drivers/media/rc/rc-main.c
+index d7064d664d528..38aa0c2de243f 100644
+--- a/drivers/media/rc/rc-main.c
++++ b/drivers/media/rc/rc-main.c
+@@ -1292,6 +1292,10 @@ static ssize_t store_protocols(struct device *device,
+ }
+
+ mutex_lock(&dev->lock);
++ if (!dev->registered) {
++ mutex_unlock(&dev->lock);
++ return -ENODEV;
++ }
+
+ old_protocols = *current_protocols;
+ new_protocols = old_protocols;
+@@ -1430,6 +1434,10 @@ static ssize_t store_filter(struct device *device,
+ return -EINVAL;
+
+ mutex_lock(&dev->lock);
++ if (!dev->registered) {
++ mutex_unlock(&dev->lock);
++ return -ENODEV;
++ }
+
+ new_filter = *filter;
+ if (fattr->mask)
+@@ -1544,6 +1552,10 @@ static ssize_t store_wakeup_protocols(struct device *device,
+ int i;
+
+ mutex_lock(&dev->lock);
++ if (!dev->registered) {
++ mutex_unlock(&dev->lock);
++ return -ENODEV;
++ }
+
+ allowed = dev->allowed_wakeup_protocols;
+
+@@ -1601,25 +1613,25 @@ static void rc_dev_release(struct device *device)
+ kfree(dev);
+ }
+
+-#define ADD_HOTPLUG_VAR(fmt, val...) \
+- do { \
+- int err = add_uevent_var(env, fmt, val); \
+- if (err) \
+- return err; \
+- } while (0)
+-
+ static int rc_dev_uevent(struct device *device, struct kobj_uevent_env *env)
+ {
+ struct rc_dev *dev = to_rc_dev(device);
++ int ret = 0;
+
+- if (dev->rc_map.name)
+- ADD_HOTPLUG_VAR("NAME=%s", dev->rc_map.name);
+- if (dev->driver_name)
+- ADD_HOTPLUG_VAR("DRV_NAME=%s", dev->driver_name);
+- if (dev->device_name)
+- ADD_HOTPLUG_VAR("DEV_NAME=%s", dev->device_name);
++ mutex_lock(&dev->lock);
+
+- return 0;
++ if (!dev->registered)
++ ret = -ENODEV;
++ if (ret == 0 && dev->rc_map.name)
++ ret = add_uevent_var(env, "NAME=%s", dev->rc_map.name);
++ if (ret == 0 && dev->driver_name)
++ ret = add_uevent_var(env, "DRV_NAME=%s", dev->driver_name);
++ if (ret == 0 && dev->device_name)
++ ret = add_uevent_var(env, "DEV_NAME=%s", dev->device_name);
++
++ mutex_unlock(&dev->lock);
++
++ return ret;
+ }
+
+ /*
+@@ -2011,14 +2023,14 @@ void rc_unregister_device(struct rc_dev *dev)
+ del_timer_sync(&dev->timer_keyup);
+ del_timer_sync(&dev->timer_repeat);
+
+- rc_free_rx_device(dev);
+-
+ mutex_lock(&dev->lock);
+ if (dev->users && dev->close)
+ dev->close(dev);
+ dev->registered = false;
+ mutex_unlock(&dev->lock);
+
++ rc_free_rx_device(dev);
++
+ /*
+ * lirc device should be freed with dev->registered = false, so
+ * that userspace polling will get notified.
+diff --git a/drivers/media/test-drivers/vicodec/vicodec-core.c b/drivers/media/test-drivers/vicodec/vicodec-core.c
+index e879290727ef4..25c4ca6884dda 100644
+--- a/drivers/media/test-drivers/vicodec/vicodec-core.c
++++ b/drivers/media/test-drivers/vicodec/vicodec-core.c
+@@ -1994,6 +1994,7 @@ static int vicodec_request_validate(struct media_request *req)
+ }
+ ctrl = v4l2_ctrl_request_hdl_ctrl_find(hdl,
+ vicodec_ctrl_stateless_state.id);
++ v4l2_ctrl_request_hdl_put(hdl);
+ if (!ctrl) {
+ v4l2_info(&ctx->dev->v4l2_dev,
+ "Missing required codec control\n");
+diff --git a/drivers/misc/habanalabs/device.c b/drivers/misc/habanalabs/device.c
+index 59608d1bac880..baa4e66d4c457 100644
+--- a/drivers/misc/habanalabs/device.c
++++ b/drivers/misc/habanalabs/device.c
+@@ -1027,7 +1027,7 @@ again:
+ goto out_err;
+ }
+
+- hl_set_max_power(hdev, hdev->max_power);
++ hl_set_max_power(hdev);
+ } else {
+ rc = hdev->asic_funcs->soft_reset_late_init(hdev);
+ if (rc) {
+@@ -1268,6 +1268,11 @@ int hl_device_init(struct hl_device *hdev, struct class *hclass)
+ goto out_disabled;
+ }
+
++ /* Need to call this again because the max power might change,
++ * depending on card type for certain ASICs
++ */
++ hl_set_max_power(hdev);
++
+ /*
+ * hl_hwmon_init() must be called after device_late_init(), because only
+ * there we get the information from the device about which
+diff --git a/drivers/misc/habanalabs/firmware_if.c b/drivers/misc/habanalabs/firmware_if.c
+index d27841cb5bcb3..345c228a7971e 100644
+--- a/drivers/misc/habanalabs/firmware_if.c
++++ b/drivers/misc/habanalabs/firmware_if.c
+@@ -13,6 +13,7 @@
+ #include <linux/io-64-nonatomic-lo-hi.h>
+ #include <linux/slab.h>
+
++#define FW_FILE_MAX_SIZE 0x1400000 /* maximum size of 20MB */
+ /**
+ * hl_fw_load_fw_to_device() - Load F/W code to device's memory.
+ * @hdev: pointer to hl_device structure.
+@@ -45,6 +46,14 @@ int hl_fw_load_fw_to_device(struct hl_device *hdev, const char *fw_name,
+
+ dev_dbg(hdev->dev, "%s firmware size == %zu\n", fw_name, fw_size);
+
++ if (fw_size > FW_FILE_MAX_SIZE) {
++ dev_err(hdev->dev,
++ "FW file size %zu exceeds maximum of %u bytes\n",
++ fw_size, FW_FILE_MAX_SIZE);
++ rc = -EINVAL;
++ goto out;
++ }
++
+ fw_data = (const u64 *) fw->data;
+
+ memcpy_toio(dst, fw_data, fw_size);
+diff --git a/drivers/misc/habanalabs/gaudi/gaudi.c b/drivers/misc/habanalabs/gaudi/gaudi.c
+index 637a9d608707f..ca183733847b6 100644
+--- a/drivers/misc/habanalabs/gaudi/gaudi.c
++++ b/drivers/misc/habanalabs/gaudi/gaudi.c
+@@ -154,6 +154,29 @@ static const u16 gaudi_packet_sizes[MAX_PACKET_ID] = {
+ [PACKET_LOAD_AND_EXE] = sizeof(struct packet_load_and_exe)
+ };
+
++static inline bool validate_packet_id(enum packet_id id)
++{
++ switch (id) {
++ case PACKET_WREG_32:
++ case PACKET_WREG_BULK:
++ case PACKET_MSG_LONG:
++ case PACKET_MSG_SHORT:
++ case PACKET_CP_DMA:
++ case PACKET_REPEAT:
++ case PACKET_MSG_PROT:
++ case PACKET_FENCE:
++ case PACKET_LIN_DMA:
++ case PACKET_NOP:
++ case PACKET_STOP:
++ case PACKET_ARB_POINT:
++ case PACKET_WAIT:
++ case PACKET_LOAD_AND_EXE:
++ return true;
++ default:
++ return false;
++ }
++}
++
+ static const char * const
+ gaudi_tpc_interrupts_cause[GAUDI_NUM_OF_TPC_INTR_CAUSE] = {
+ "tpc_address_exceed_slm",
+@@ -424,7 +447,7 @@ static int gaudi_get_fixed_properties(struct hl_device *hdev)
+ prop->num_of_events = GAUDI_EVENT_SIZE;
+ prop->tpc_enabled_mask = TPC_ENABLED_MASK;
+
+- prop->max_power_default = MAX_POWER_DEFAULT;
++ prop->max_power_default = MAX_POWER_DEFAULT_PCI;
+
+ prop->cb_pool_cb_cnt = GAUDI_CB_POOL_CB_CNT;
+ prop->cb_pool_cb_size = GAUDI_CB_POOL_CB_SIZE;
+@@ -2541,6 +2564,7 @@ static void gaudi_set_clock_gating(struct hl_device *hdev)
+ {
+ struct gaudi_device *gaudi = hdev->asic_specific;
+ u32 qman_offset;
++ bool enable;
+ int i;
+
+ /* In case we are during debug session, don't enable the clock gate
+@@ -2550,46 +2574,43 @@ static void gaudi_set_clock_gating(struct hl_device *hdev)
+ return;
+
+ for (i = GAUDI_PCI_DMA_1, qman_offset = 0 ; i < GAUDI_HBM_DMA_1 ; i++) {
+- if (!(hdev->clock_gating_mask &
+- (BIT_ULL(gaudi_dma_assignment[i]))))
+- continue;
++ enable = !!(hdev->clock_gating_mask &
++ (BIT_ULL(gaudi_dma_assignment[i])));
+
+ qman_offset = gaudi_dma_assignment[i] * DMA_QMAN_OFFSET;
+- WREG32(mmDMA0_QM_CGM_CFG1 + qman_offset, QMAN_CGM1_PWR_GATE_EN);
++ WREG32(mmDMA0_QM_CGM_CFG1 + qman_offset,
++ enable ? QMAN_CGM1_PWR_GATE_EN : 0);
+ WREG32(mmDMA0_QM_CGM_CFG + qman_offset,
+- QMAN_UPPER_CP_CGM_PWR_GATE_EN);
++ enable ? QMAN_UPPER_CP_CGM_PWR_GATE_EN : 0);
+ }
+
+ for (i = GAUDI_HBM_DMA_1 ; i < GAUDI_DMA_MAX ; i++) {
+- if (!(hdev->clock_gating_mask &
+- (BIT_ULL(gaudi_dma_assignment[i]))))
+- continue;
++ enable = !!(hdev->clock_gating_mask &
++ (BIT_ULL(gaudi_dma_assignment[i])));
+
+ qman_offset = gaudi_dma_assignment[i] * DMA_QMAN_OFFSET;
+- WREG32(mmDMA0_QM_CGM_CFG1 + qman_offset, QMAN_CGM1_PWR_GATE_EN);
++ WREG32(mmDMA0_QM_CGM_CFG1 + qman_offset,
++ enable ? QMAN_CGM1_PWR_GATE_EN : 0);
+ WREG32(mmDMA0_QM_CGM_CFG + qman_offset,
+- QMAN_COMMON_CP_CGM_PWR_GATE_EN);
++ enable ? QMAN_COMMON_CP_CGM_PWR_GATE_EN : 0);
+ }
+
+- if (hdev->clock_gating_mask & (BIT_ULL(GAUDI_ENGINE_ID_MME_0))) {
+- WREG32(mmMME0_QM_CGM_CFG1, QMAN_CGM1_PWR_GATE_EN);
+- WREG32(mmMME0_QM_CGM_CFG, QMAN_COMMON_CP_CGM_PWR_GATE_EN);
+- }
++ enable = !!(hdev->clock_gating_mask & (BIT_ULL(GAUDI_ENGINE_ID_MME_0)));
++ WREG32(mmMME0_QM_CGM_CFG1, enable ? QMAN_CGM1_PWR_GATE_EN : 0);
++ WREG32(mmMME0_QM_CGM_CFG, enable ? QMAN_COMMON_CP_CGM_PWR_GATE_EN : 0);
+
+- if (hdev->clock_gating_mask & (BIT_ULL(GAUDI_ENGINE_ID_MME_2))) {
+- WREG32(mmMME2_QM_CGM_CFG1, QMAN_CGM1_PWR_GATE_EN);
+- WREG32(mmMME2_QM_CGM_CFG, QMAN_COMMON_CP_CGM_PWR_GATE_EN);
+- }
++ enable = !!(hdev->clock_gating_mask & (BIT_ULL(GAUDI_ENGINE_ID_MME_2)));
++ WREG32(mmMME2_QM_CGM_CFG1, enable ? QMAN_CGM1_PWR_GATE_EN : 0);
++ WREG32(mmMME2_QM_CGM_CFG, enable ? QMAN_COMMON_CP_CGM_PWR_GATE_EN : 0);
+
+ for (i = 0, qman_offset = 0 ; i < TPC_NUMBER_OF_ENGINES ; i++) {
+- if (!(hdev->clock_gating_mask &
+- (BIT_ULL(GAUDI_ENGINE_ID_TPC_0 + i))))
+- continue;
++ enable = !!(hdev->clock_gating_mask &
++ (BIT_ULL(GAUDI_ENGINE_ID_TPC_0 + i)));
+
+ WREG32(mmTPC0_QM_CGM_CFG1 + qman_offset,
+- QMAN_CGM1_PWR_GATE_EN);
++ enable ? QMAN_CGM1_PWR_GATE_EN : 0);
+ WREG32(mmTPC0_QM_CGM_CFG + qman_offset,
+- QMAN_COMMON_CP_CGM_PWR_GATE_EN);
++ enable ? QMAN_COMMON_CP_CGM_PWR_GATE_EN : 0);
+
+ qman_offset += TPC_QMAN_OFFSET;
+ }
+@@ -3859,6 +3880,12 @@ static int gaudi_validate_cb(struct hl_device *hdev,
+ PACKET_HEADER_PACKET_ID_MASK) >>
+ PACKET_HEADER_PACKET_ID_SHIFT);
+
++ if (!validate_packet_id(pkt_id)) {
++ dev_err(hdev->dev, "Invalid packet id %u\n", pkt_id);
++ rc = -EINVAL;
++ break;
++ }
++
+ pkt_size = gaudi_packet_sizes[pkt_id];
+ cb_parsed_length += pkt_size;
+ if (cb_parsed_length > parser->user_cb_size) {
+@@ -4082,6 +4109,12 @@ static int gaudi_patch_cb(struct hl_device *hdev,
+ PACKET_HEADER_PACKET_ID_MASK) >>
+ PACKET_HEADER_PACKET_ID_SHIFT);
+
++ if (!validate_packet_id(pkt_id)) {
++ dev_err(hdev->dev, "Invalid packet id %u\n", pkt_id);
++ rc = -EINVAL;
++ break;
++ }
++
+ pkt_size = gaudi_packet_sizes[pkt_id];
+ cb_parsed_length += pkt_size;
+ if (cb_parsed_length > parser->user_cb_size) {
+@@ -6208,6 +6241,15 @@ static int gaudi_armcp_info_get(struct hl_device *hdev)
+ strncpy(prop->armcp_info.card_name, GAUDI_DEFAULT_CARD_NAME,
+ CARD_NAME_MAX_LEN);
+
++ hdev->card_type = le32_to_cpu(hdev->asic_prop.armcp_info.card_type);
++
++ if (hdev->card_type == armcp_card_type_pci)
++ prop->max_power_default = MAX_POWER_DEFAULT_PCI;
++ else if (hdev->card_type == armcp_card_type_pmc)
++ prop->max_power_default = MAX_POWER_DEFAULT_PMC;
++
++ hdev->max_power = prop->max_power_default;
++
+ return 0;
+ }
+
+diff --git a/drivers/misc/habanalabs/gaudi/gaudiP.h b/drivers/misc/habanalabs/gaudi/gaudiP.h
+index 41a8d9bff6bf9..00f1efeaa8832 100644
+--- a/drivers/misc/habanalabs/gaudi/gaudiP.h
++++ b/drivers/misc/habanalabs/gaudi/gaudiP.h
+@@ -41,7 +41,8 @@
+
+ #define GAUDI_MAX_CLK_FREQ 2200000000ull /* 2200 MHz */
+
+-#define MAX_POWER_DEFAULT 200000 /* 200W */
++#define MAX_POWER_DEFAULT_PCI 200000 /* 200W */
++#define MAX_POWER_DEFAULT_PMC 350000 /* 350W */
+
+ #define GAUDI_CPU_TIMEOUT_USEC 15000000 /* 15s */
+
+diff --git a/drivers/misc/habanalabs/gaudi/gaudi_coresight.c b/drivers/misc/habanalabs/gaudi/gaudi_coresight.c
+index bf0e062d7b874..cc3d03549a6e4 100644
+--- a/drivers/misc/habanalabs/gaudi/gaudi_coresight.c
++++ b/drivers/misc/habanalabs/gaudi/gaudi_coresight.c
+@@ -523,7 +523,7 @@ static int gaudi_config_etf(struct hl_device *hdev,
+ }
+
+ static bool gaudi_etr_validate_address(struct hl_device *hdev, u64 addr,
+- u32 size, bool *is_host)
++ u64 size, bool *is_host)
+ {
+ struct asic_fixed_properties *prop = &hdev->asic_prop;
+ struct gaudi_device *gaudi = hdev->asic_specific;
+@@ -535,6 +535,12 @@ static bool gaudi_etr_validate_address(struct hl_device *hdev, u64 addr,
+ return false;
+ }
+
++ if (addr > (addr + size)) {
++ dev_err(hdev->dev,
++ "ETR buffer size %llu overflow\n", size);
++ return false;
++ }
++
+ /* PMMU and HPMMU addresses are equal, check only one of them */
+ if ((gaudi->hw_cap_initialized & HW_CAP_MMU) &&
+ hl_mem_area_inside_range(addr, size,
+diff --git a/drivers/misc/habanalabs/goya/goya.c b/drivers/misc/habanalabs/goya/goya.c
+index 88460b2138d88..c179085ced7b8 100644
+--- a/drivers/misc/habanalabs/goya/goya.c
++++ b/drivers/misc/habanalabs/goya/goya.c
+@@ -139,6 +139,25 @@ static u16 goya_packet_sizes[MAX_PACKET_ID] = {
+ [PACKET_STOP] = sizeof(struct packet_stop)
+ };
+
++static inline bool validate_packet_id(enum packet_id id)
++{
++ switch (id) {
++ case PACKET_WREG_32:
++ case PACKET_WREG_BULK:
++ case PACKET_MSG_LONG:
++ case PACKET_MSG_SHORT:
++ case PACKET_CP_DMA:
++ case PACKET_MSG_PROT:
++ case PACKET_FENCE:
++ case PACKET_LIN_DMA:
++ case PACKET_NOP:
++ case PACKET_STOP:
++ return true;
++ default:
++ return false;
++ }
++}
++
+ static u64 goya_mmu_regs[GOYA_MMU_REGS_NUM] = {
+ mmDMA_QM_0_GLBL_NON_SECURE_PROPS,
+ mmDMA_QM_1_GLBL_NON_SECURE_PROPS,
+@@ -3381,6 +3400,12 @@ static int goya_validate_cb(struct hl_device *hdev,
+ PACKET_HEADER_PACKET_ID_MASK) >>
+ PACKET_HEADER_PACKET_ID_SHIFT);
+
++ if (!validate_packet_id(pkt_id)) {
++ dev_err(hdev->dev, "Invalid packet id %u\n", pkt_id);
++ rc = -EINVAL;
++ break;
++ }
++
+ pkt_size = goya_packet_sizes[pkt_id];
+ cb_parsed_length += pkt_size;
+ if (cb_parsed_length > parser->user_cb_size) {
+@@ -3616,6 +3641,12 @@ static int goya_patch_cb(struct hl_device *hdev,
+ PACKET_HEADER_PACKET_ID_MASK) >>
+ PACKET_HEADER_PACKET_ID_SHIFT);
+
++ if (!validate_packet_id(pkt_id)) {
++ dev_err(hdev->dev, "Invalid packet id %u\n", pkt_id);
++ rc = -EINVAL;
++ break;
++ }
++
+ pkt_size = goya_packet_sizes[pkt_id];
+ cb_parsed_length += pkt_size;
+ if (cb_parsed_length > parser->user_cb_size) {
+diff --git a/drivers/misc/habanalabs/goya/goya_coresight.c b/drivers/misc/habanalabs/goya/goya_coresight.c
+index 1258724ea5106..c23a9fcb74b57 100644
+--- a/drivers/misc/habanalabs/goya/goya_coresight.c
++++ b/drivers/misc/habanalabs/goya/goya_coresight.c
+@@ -358,11 +358,17 @@ static int goya_config_etf(struct hl_device *hdev,
+ }
+
+ static int goya_etr_validate_address(struct hl_device *hdev, u64 addr,
+- u32 size)
++ u64 size)
+ {
+ struct asic_fixed_properties *prop = &hdev->asic_prop;
+ u64 range_start, range_end;
+
++ if (addr > (addr + size)) {
++ dev_err(hdev->dev,
++ "ETR buffer size %llu overflow\n", size);
++ return false;
++ }
++
+ if (hdev->mmu_enable) {
+ range_start = prop->dmmu.start_addr;
+ range_end = prop->dmmu.end_addr;
+diff --git a/drivers/misc/habanalabs/habanalabs.h b/drivers/misc/habanalabs/habanalabs.h
+index 194d833526964..1072f300252a4 100644
+--- a/drivers/misc/habanalabs/habanalabs.h
++++ b/drivers/misc/habanalabs/habanalabs.h
+@@ -1408,6 +1408,8 @@ struct hl_device_idle_busy_ts {
+ * details.
+ * @in_reset: is device in reset flow.
+ * @curr_pll_profile: current PLL profile.
++ * @card_type: Various ASICs have several card types. This indicates the card
++ * type of the current device.
+ * @cs_active_cnt: number of active command submissions on this device (active
+ * means already in H/W queues)
+ * @major: habanalabs kernel driver major.
+@@ -1503,6 +1505,7 @@ struct hl_device {
+ u64 clock_gating_mask;
+ atomic_t in_reset;
+ enum hl_pll_frequency curr_pll_profile;
++ enum armcp_card_types card_type;
+ int cs_active_cnt;
+ u32 major;
+ u32 high_pll;
+@@ -1587,7 +1590,7 @@ struct hl_ioctl_desc {
+ *
+ * Return: true if the area is inside the valid range, false otherwise.
+ */
+-static inline bool hl_mem_area_inside_range(u64 address, u32 size,
++static inline bool hl_mem_area_inside_range(u64 address, u64 size,
+ u64 range_start_address, u64 range_end_address)
+ {
+ u64 end_address = address + size;
+@@ -1792,7 +1795,7 @@ int hl_get_pwm_info(struct hl_device *hdev,
+ void hl_set_pwm_info(struct hl_device *hdev, int sensor_index, u32 attr,
+ long value);
+ u64 hl_get_max_power(struct hl_device *hdev);
+-void hl_set_max_power(struct hl_device *hdev, u64 value);
++void hl_set_max_power(struct hl_device *hdev);
+ int hl_set_voltage(struct hl_device *hdev,
+ int sensor_index, u32 attr, long value);
+ int hl_set_current(struct hl_device *hdev,
+diff --git a/drivers/misc/habanalabs/memory.c b/drivers/misc/habanalabs/memory.c
+index 47da84a177197..e30b1b1877efa 100644
+--- a/drivers/misc/habanalabs/memory.c
++++ b/drivers/misc/habanalabs/memory.c
+@@ -66,6 +66,11 @@ static int alloc_device_memory(struct hl_ctx *ctx, struct hl_mem_in *args,
+ num_pgs = (args->alloc.mem_size + (page_size - 1)) >> page_shift;
+ total_size = num_pgs << page_shift;
+
++ if (!total_size) {
++ dev_err(hdev->dev, "Cannot allocate 0 bytes\n");
++ return -EINVAL;
++ }
++
+ contiguous = args->flags & HL_MEM_CONTIGUOUS;
+
+ if (contiguous) {
+@@ -93,7 +98,7 @@ static int alloc_device_memory(struct hl_ctx *ctx, struct hl_mem_in *args,
+ phys_pg_pack->contiguous = contiguous;
+
+ phys_pg_pack->pages = kvmalloc_array(num_pgs, sizeof(u64), GFP_KERNEL);
+- if (!phys_pg_pack->pages) {
++ if (ZERO_OR_NULL_PTR(phys_pg_pack->pages)) {
+ rc = -ENOMEM;
+ goto pages_arr_err;
+ }
+@@ -683,7 +688,7 @@ static int init_phys_pg_pack_from_userptr(struct hl_ctx *ctx,
+
+ phys_pg_pack->pages = kvmalloc_array(total_npages, sizeof(u64),
+ GFP_KERNEL);
+- if (!phys_pg_pack->pages) {
++ if (ZERO_OR_NULL_PTR(phys_pg_pack->pages)) {
+ rc = -ENOMEM;
+ goto page_pack_arr_mem_err;
+ }
+diff --git a/drivers/misc/habanalabs/mmu.c b/drivers/misc/habanalabs/mmu.c
+index a290d6b49d788..eb582bd4937ba 100644
+--- a/drivers/misc/habanalabs/mmu.c
++++ b/drivers/misc/habanalabs/mmu.c
+@@ -450,7 +450,7 @@ int hl_mmu_init(struct hl_device *hdev)
+ hdev->mmu_shadow_hop0 = kvmalloc_array(prop->max_asid,
+ prop->mmu_hop_table_size,
+ GFP_KERNEL | __GFP_ZERO);
+- if (!hdev->mmu_shadow_hop0) {
++ if (ZERO_OR_NULL_PTR(hdev->mmu_shadow_hop0)) {
+ rc = -ENOMEM;
+ goto err_pool_add;
+ }
+diff --git a/drivers/misc/habanalabs/pci.c b/drivers/misc/habanalabs/pci.c
+index 9f634ef6f5b37..77022c0b42027 100644
+--- a/drivers/misc/habanalabs/pci.c
++++ b/drivers/misc/habanalabs/pci.c
+@@ -378,15 +378,17 @@ int hl_pci_init(struct hl_device *hdev)
+ rc = hdev->asic_funcs->init_iatu(hdev);
+ if (rc) {
+ dev_err(hdev->dev, "Failed to initialize iATU\n");
+- goto disable_device;
++ goto unmap_pci_bars;
+ }
+
+ rc = hl_pci_set_dma_mask(hdev);
+ if (rc)
+- goto disable_device;
++ goto unmap_pci_bars;
+
+ return 0;
+
++unmap_pci_bars:
++ hl_pci_bars_unmap(hdev);
+ disable_device:
+ pci_clear_master(pdev);
+ pci_disable_device(pdev);
+diff --git a/drivers/misc/habanalabs/sysfs.c b/drivers/misc/habanalabs/sysfs.c
+index 70b6b1863c2ef..87dadb53ac59d 100644
+--- a/drivers/misc/habanalabs/sysfs.c
++++ b/drivers/misc/habanalabs/sysfs.c
+@@ -81,7 +81,7 @@ u64 hl_get_max_power(struct hl_device *hdev)
+ return result;
+ }
+
+-void hl_set_max_power(struct hl_device *hdev, u64 value)
++void hl_set_max_power(struct hl_device *hdev)
+ {
+ struct armcp_packet pkt;
+ int rc;
+@@ -90,7 +90,7 @@ void hl_set_max_power(struct hl_device *hdev, u64 value)
+
+ pkt.ctl = cpu_to_le32(ARMCP_PACKET_MAX_POWER_SET <<
+ ARMCP_PKT_CTL_OPCODE_SHIFT);
+- pkt.value = cpu_to_le64(value);
++ pkt.value = cpu_to_le64(hdev->max_power);
+
+ rc = hdev->asic_funcs->send_cpu_message(hdev, (u32 *) &pkt, sizeof(pkt),
+ 0, NULL);
+@@ -316,7 +316,7 @@ static ssize_t max_power_store(struct device *dev,
+ }
+
+ hdev->max_power = value;
+- hl_set_max_power(hdev, value);
++ hl_set_max_power(hdev);
+
+ out:
+ return count;
+@@ -419,6 +419,7 @@ int hl_sysfs_init(struct hl_device *hdev)
+ hdev->pm_mng_profile = PM_AUTO;
+ else
+ hdev->pm_mng_profile = PM_MANUAL;
++
+ hdev->max_power = hdev->asic_prop.max_power_default;
+
+ hdev->asic_funcs->add_device_attr(hdev, &hl_dev_clks_attr_group);
+diff --git a/drivers/mmc/host/mtk-sd.c b/drivers/mmc/host/mtk-sd.c
+index 39e7fc54c438f..0319eac3a05d7 100644
+--- a/drivers/mmc/host/mtk-sd.c
++++ b/drivers/mmc/host/mtk-sd.c
+@@ -22,6 +22,7 @@
+ #include <linux/slab.h>
+ #include <linux/spinlock.h>
+ #include <linux/interrupt.h>
++#include <linux/reset.h>
+
+ #include <linux/mmc/card.h>
+ #include <linux/mmc/core.h>
+@@ -414,6 +415,7 @@ struct msdc_host {
+ struct pinctrl_state *pins_uhs;
+ struct delayed_work req_timeout;
+ int irq; /* host interrupt */
++ struct reset_control *reset;
+
+ struct clk *src_clk; /* msdc source clock */
+ struct clk *h_clk; /* msdc h_clk */
+@@ -1516,6 +1518,12 @@ static void msdc_init_hw(struct msdc_host *host)
+ u32 val;
+ u32 tune_reg = host->dev_comp->pad_tune_reg;
+
++ if (host->reset) {
++ reset_control_assert(host->reset);
++ usleep_range(10, 50);
++ reset_control_deassert(host->reset);
++ }
++
+ /* Configure to MMC/SD mode, clock free running */
+ sdr_set_bits(host->base + MSDC_CFG, MSDC_CFG_MODE | MSDC_CFG_CKPDN);
+
+@@ -2273,6 +2281,11 @@ static int msdc_drv_probe(struct platform_device *pdev)
+ if (IS_ERR(host->src_clk_cg))
+ host->src_clk_cg = NULL;
+
++ host->reset = devm_reset_control_get_optional_exclusive(&pdev->dev,
++ "hrst");
++ if (IS_ERR(host->reset))
++ return PTR_ERR(host->reset);
++
+ host->irq = platform_get_irq(pdev, 0);
+ if (host->irq < 0) {
+ ret = -EINVAL;
+diff --git a/drivers/mmc/host/sdhci-acpi.c b/drivers/mmc/host/sdhci-acpi.c
+index d8b76cb8698aa..2d9f79b50a7fa 100644
+--- a/drivers/mmc/host/sdhci-acpi.c
++++ b/drivers/mmc/host/sdhci-acpi.c
+@@ -535,6 +535,11 @@ static const struct sdhci_acpi_slot sdhci_acpi_slot_qcom_sd = {
+ .caps = MMC_CAP_NONREMOVABLE,
+ };
+
++struct amd_sdhci_host {
++ bool tuned_clock;
++ bool dll_enabled;
++};
++
+ /* AMD sdhci reset dll register. */
+ #define SDHCI_AMD_RESET_DLL_REGISTER 0x908
+
+@@ -554,26 +559,66 @@ static void sdhci_acpi_amd_hs400_dll(struct sdhci_host *host)
+ }
+
+ /*
+- * For AMD Platform it is required to disable the tuning
+- * bit first controller to bring to HS Mode from HS200
+- * mode, later enable to tune to HS400 mode.
++ * The initialization sequence for HS400 is:
++ * HS->HS200->Perform Tuning->HS->HS400
++ *
++ * The re-tuning sequence is:
++ * HS400->DDR52->HS->HS200->Perform Tuning->HS->HS400
++ *
++ * The AMD eMMC Controller can only use the tuned clock while in HS200 and HS400
++ * mode. If we switch to a different mode, we need to disable the tuned clock.
++ * If we have previously performed tuning and switch back to HS200 or
++ * HS400, we can re-enable the tuned clock.
++ *
+ */
+ static void amd_set_ios(struct mmc_host *mmc, struct mmc_ios *ios)
+ {
+ struct sdhci_host *host = mmc_priv(mmc);
++ struct sdhci_acpi_host *acpi_host = sdhci_priv(host);
++ struct amd_sdhci_host *amd_host = sdhci_acpi_priv(acpi_host);
+ unsigned int old_timing = host->timing;
++ u16 val;
+
+ sdhci_set_ios(mmc, ios);
+- if (old_timing == MMC_TIMING_MMC_HS200 &&
+- ios->timing == MMC_TIMING_MMC_HS)
+- sdhci_writew(host, 0x9, SDHCI_HOST_CONTROL2);
+- if (old_timing != MMC_TIMING_MMC_HS400 &&
+- ios->timing == MMC_TIMING_MMC_HS400) {
+- sdhci_writew(host, 0x80, SDHCI_HOST_CONTROL2);
+- sdhci_acpi_amd_hs400_dll(host);
++
++ if (old_timing != host->timing && amd_host->tuned_clock) {
++ if (host->timing == MMC_TIMING_MMC_HS400 ||
++ host->timing == MMC_TIMING_MMC_HS200) {
++ val = sdhci_readw(host, SDHCI_HOST_CONTROL2);
++ val |= SDHCI_CTRL_TUNED_CLK;
++ sdhci_writew(host, val, SDHCI_HOST_CONTROL2);
++ } else {
++ val = sdhci_readw(host, SDHCI_HOST_CONTROL2);
++ val &= ~SDHCI_CTRL_TUNED_CLK;
++ sdhci_writew(host, val, SDHCI_HOST_CONTROL2);
++ }
++
++ /* DLL is only required for HS400 */
++ if (host->timing == MMC_TIMING_MMC_HS400 &&
++ !amd_host->dll_enabled) {
++ sdhci_acpi_amd_hs400_dll(host);
++ amd_host->dll_enabled = true;
++ }
+ }
+ }
+
++static int amd_sdhci_execute_tuning(struct mmc_host *mmc, u32 opcode)
++{
++ int err;
++ struct sdhci_host *host = mmc_priv(mmc);
++ struct sdhci_acpi_host *acpi_host = sdhci_priv(host);
++ struct amd_sdhci_host *amd_host = sdhci_acpi_priv(acpi_host);
++
++ amd_host->tuned_clock = false;
++
++ err = sdhci_execute_tuning(mmc, opcode);
++
++ if (!err && !host->tuning_err)
++ amd_host->tuned_clock = true;
++
++ return err;
++}
++
+ static const struct sdhci_ops sdhci_acpi_ops_amd = {
+ .set_clock = sdhci_set_clock,
+ .set_bus_width = sdhci_set_bus_width,
+@@ -601,6 +646,7 @@ static int sdhci_acpi_emmc_amd_probe_slot(struct platform_device *pdev,
+
+ host->mmc_host_ops.select_drive_strength = amd_select_drive_strength;
+ host->mmc_host_ops.set_ios = amd_set_ios;
++ host->mmc_host_ops.execute_tuning = amd_sdhci_execute_tuning;
+ return 0;
+ }
+
+@@ -612,6 +658,7 @@ static const struct sdhci_acpi_slot sdhci_acpi_slot_amd_emmc = {
+ SDHCI_QUIRK_32BIT_ADMA_SIZE,
+ .quirks2 = SDHCI_QUIRK2_BROKEN_64_BIT_DMA,
+ .probe_slot = sdhci_acpi_emmc_amd_probe_slot,
++ .priv_size = sizeof(struct amd_sdhci_host),
+ };
+
+ struct sdhci_acpi_uid_slot {
+diff --git a/drivers/mmc/host/sdhci-pci-core.c b/drivers/mmc/host/sdhci-pci-core.c
+index bb6802448b2f4..af413805bbf1a 100644
+--- a/drivers/mmc/host/sdhci-pci-core.c
++++ b/drivers/mmc/host/sdhci-pci-core.c
+@@ -232,6 +232,14 @@ static void sdhci_pci_dumpregs(struct mmc_host *mmc)
+ sdhci_dumpregs(mmc_priv(mmc));
+ }
+
++static void sdhci_cqhci_reset(struct sdhci_host *host, u8 mask)
++{
++ if ((host->mmc->caps2 & MMC_CAP2_CQE) && (mask & SDHCI_RESET_ALL) &&
++ host->mmc->cqe_private)
++ cqhci_deactivate(host->mmc);
++ sdhci_reset(host, mask);
++}
++
+ /*****************************************************************************\
+ * *
+ * Hardware specific quirk handling *
+@@ -718,7 +726,7 @@ static const struct sdhci_ops sdhci_intel_glk_ops = {
+ .set_power = sdhci_intel_set_power,
+ .enable_dma = sdhci_pci_enable_dma,
+ .set_bus_width = sdhci_set_bus_width,
+- .reset = sdhci_reset,
++ .reset = sdhci_cqhci_reset,
+ .set_uhs_signaling = sdhci_set_uhs_signaling,
+ .hw_reset = sdhci_pci_hw_reset,
+ .irq = sdhci_cqhci_irq,
+diff --git a/drivers/mmc/host/sdhci-tegra.c b/drivers/mmc/host/sdhci-tegra.c
+index db1a8d1c96b36..0919ff11d8173 100644
+--- a/drivers/mmc/host/sdhci-tegra.c
++++ b/drivers/mmc/host/sdhci-tegra.c
+@@ -101,6 +101,12 @@
+ #define NVQUIRK_DIS_CARD_CLK_CONFIG_TAP BIT(8)
+ #define NVQUIRK_CQHCI_DCMD_R1B_CMD_TIMING BIT(9)
+
++/*
++ * NVQUIRK_HAS_TMCLK is for SoC's having separate timeout clock for Tegra
++ * SDMMC hardware data timeout.
++ */
++#define NVQUIRK_HAS_TMCLK BIT(10)
++
+ /* SDMMC CQE Base Address for Tegra Host Ver 4.1 and Higher */
+ #define SDHCI_TEGRA_CQE_BASE_ADDR 0xF000
+
+@@ -131,6 +137,7 @@ struct sdhci_tegra_autocal_offsets {
+ struct sdhci_tegra {
+ const struct sdhci_tegra_soc_data *soc_data;
+ struct gpio_desc *power_gpio;
++ struct clk *tmclk;
+ bool ddr_signaling;
+ bool pad_calib_required;
+ bool pad_control_available;
+@@ -1424,7 +1431,8 @@ static const struct sdhci_tegra_soc_data soc_data_tegra210 = {
+ NVQUIRK_HAS_PADCALIB |
+ NVQUIRK_DIS_CARD_CLK_CONFIG_TAP |
+ NVQUIRK_ENABLE_SDR50 |
+- NVQUIRK_ENABLE_SDR104,
++ NVQUIRK_ENABLE_SDR104 |
++ NVQUIRK_HAS_TMCLK,
+ .min_tap_delay = 106,
+ .max_tap_delay = 185,
+ };
+@@ -1462,6 +1470,7 @@ static const struct sdhci_tegra_soc_data soc_data_tegra186 = {
+ NVQUIRK_DIS_CARD_CLK_CONFIG_TAP |
+ NVQUIRK_ENABLE_SDR50 |
+ NVQUIRK_ENABLE_SDR104 |
++ NVQUIRK_HAS_TMCLK |
+ NVQUIRK_CQHCI_DCMD_R1B_CMD_TIMING,
+ .min_tap_delay = 84,
+ .max_tap_delay = 136,
+@@ -1474,7 +1483,8 @@ static const struct sdhci_tegra_soc_data soc_data_tegra194 = {
+ NVQUIRK_HAS_PADCALIB |
+ NVQUIRK_DIS_CARD_CLK_CONFIG_TAP |
+ NVQUIRK_ENABLE_SDR50 |
+- NVQUIRK_ENABLE_SDR104,
++ NVQUIRK_ENABLE_SDR104 |
++ NVQUIRK_HAS_TMCLK,
+ .min_tap_delay = 96,
+ .max_tap_delay = 139,
+ };
+@@ -1602,6 +1612,43 @@ static int sdhci_tegra_probe(struct platform_device *pdev)
+ goto err_power_req;
+ }
+
++ /*
++ * Tegra210 has a separate SDMMC_LEGACY_TM clock used for host
++ * timeout clock and SW can choose TMCLK or SDCLK for hardware
++ * data timeout through the bit USE_TMCLK_FOR_DATA_TIMEOUT of
++ * the register SDHCI_TEGRA_VENDOR_SYS_SW_CTRL.
++ *
++ * USE_TMCLK_FOR_DATA_TIMEOUT bit default is set to 1 and SDMMC uses
++ * 12Mhz TMCLK which is advertised in host capability register.
++ * With TMCLK of 12Mhz provides maximum data timeout period that can
++ * be achieved is 11s better than using SDCLK for data timeout.
++ *
++ * So, TMCLK is set to 12Mhz and kept enabled all the time on SoC's
++ * supporting separate TMCLK.
++ */
++
++ if (soc_data->nvquirks & NVQUIRK_HAS_TMCLK) {
++ clk = devm_clk_get(&pdev->dev, "tmclk");
++ if (IS_ERR(clk)) {
++ rc = PTR_ERR(clk);
++ if (rc == -EPROBE_DEFER)
++ goto err_power_req;
++
++ dev_warn(&pdev->dev, "failed to get tmclk: %d\n", rc);
++ clk = NULL;
++ }
++
++ clk_set_rate(clk, 12000000);
++ rc = clk_prepare_enable(clk);
++ if (rc) {
++ dev_err(&pdev->dev,
++ "failed to enable tmclk: %d\n", rc);
++ goto err_power_req;
++ }
++
++ tegra_host->tmclk = clk;
++ }
++
+ clk = devm_clk_get(mmc_dev(host->mmc), NULL);
+ if (IS_ERR(clk)) {
+ rc = PTR_ERR(clk);
+@@ -1645,6 +1692,7 @@ err_add_host:
+ err_rst_get:
+ clk_disable_unprepare(pltfm_host->clk);
+ err_clk_get:
++ clk_disable_unprepare(tegra_host->tmclk);
+ err_power_req:
+ err_parse_dt:
+ sdhci_pltfm_free(pdev);
+@@ -1662,6 +1710,7 @@ static int sdhci_tegra_remove(struct platform_device *pdev)
+ reset_control_assert(tegra_host->rst);
+ usleep_range(2000, 4000);
+ clk_disable_unprepare(pltfm_host->clk);
++ clk_disable_unprepare(tegra_host->tmclk);
+
+ sdhci_pltfm_free(pdev);
+
+diff --git a/drivers/net/dsa/mt7530.c b/drivers/net/dsa/mt7530.c
+index 8dcb8a49ab67f..238417db26f9b 100644
+--- a/drivers/net/dsa/mt7530.c
++++ b/drivers/net/dsa/mt7530.c
+@@ -1501,7 +1501,7 @@ unsupported:
+ phylink_set(mask, 100baseT_Full);
+
+ if (state->interface != PHY_INTERFACE_MODE_MII) {
+- phylink_set(mask, 1000baseT_Half);
++ /* This switch only supports 1G full-duplex. */
+ phylink_set(mask, 1000baseT_Full);
+ if (port == 5)
+ phylink_set(mask, 1000baseX_Full);
+diff --git a/drivers/net/ethernet/arc/emac_mdio.c b/drivers/net/ethernet/arc/emac_mdio.c
+index 0187dbf3b87df..54cdafdd067db 100644
+--- a/drivers/net/ethernet/arc/emac_mdio.c
++++ b/drivers/net/ethernet/arc/emac_mdio.c
+@@ -153,6 +153,7 @@ int arc_mdio_probe(struct arc_emac_priv *priv)
+ if (IS_ERR(data->reset_gpio)) {
+ error = PTR_ERR(data->reset_gpio);
+ dev_err(priv->dev, "Failed to request gpio: %d\n", error);
++ mdiobus_free(bus);
+ return error;
+ }
+
+diff --git a/drivers/net/ethernet/broadcom/bcmsysport.c b/drivers/net/ethernet/broadcom/bcmsysport.c
+index b25356e21a1ea..e6ccc2122573d 100644
+--- a/drivers/net/ethernet/broadcom/bcmsysport.c
++++ b/drivers/net/ethernet/broadcom/bcmsysport.c
+@@ -2462,8 +2462,10 @@ static int bcm_sysport_probe(struct platform_device *pdev)
+ priv->tx_rings = devm_kcalloc(&pdev->dev, txq,
+ sizeof(struct bcm_sysport_tx_ring),
+ GFP_KERNEL);
+- if (!priv->tx_rings)
+- return -ENOMEM;
++ if (!priv->tx_rings) {
++ ret = -ENOMEM;
++ goto err_free_netdev;
++ }
+
+ priv->is_lite = params->is_lite;
+ priv->num_rx_desc_words = params->num_rx_desc_words;
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+index 7463a1847cebd..cd5c7a1412c6d 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+@@ -1141,6 +1141,9 @@ static int bnxt_discard_rx(struct bnxt *bp, struct bnxt_cp_ring_info *cpr,
+
+ static void bnxt_queue_fw_reset_work(struct bnxt *bp, unsigned long delay)
+ {
++ if (!(test_bit(BNXT_STATE_IN_FW_RESET, &bp->state)))
++ return;
++
+ if (BNXT_PF(bp))
+ queue_delayed_work(bnxt_pf_wq, &bp->fw_reset_task, delay);
+ else
+@@ -1157,10 +1160,12 @@ static void bnxt_queue_sp_work(struct bnxt *bp)
+
+ static void bnxt_cancel_sp_work(struct bnxt *bp)
+ {
+- if (BNXT_PF(bp))
++ if (BNXT_PF(bp)) {
+ flush_workqueue(bnxt_pf_wq);
+- else
++ } else {
+ cancel_work_sync(&bp->sp_task);
++ cancel_delayed_work_sync(&bp->fw_reset_task);
++ }
+ }
+
+ static void bnxt_sched_reset(struct bnxt *bp, struct bnxt_rx_ring_info *rxr)
+@@ -8987,16 +8992,19 @@ static ssize_t bnxt_show_temp(struct device *dev,
+ struct hwrm_temp_monitor_query_input req = {0};
+ struct hwrm_temp_monitor_query_output *resp;
+ struct bnxt *bp = dev_get_drvdata(dev);
+- u32 temp = 0;
++ u32 len = 0;
+
+ resp = bp->hwrm_cmd_resp_addr;
+ bnxt_hwrm_cmd_hdr_init(bp, &req, HWRM_TEMP_MONITOR_QUERY, -1, -1);
+ mutex_lock(&bp->hwrm_cmd_lock);
+- if (!_hwrm_send_message(bp, &req, sizeof(req), HWRM_CMD_TIMEOUT))
+- temp = resp->temp * 1000; /* display millidegree */
++ if (!_hwrm_send_message_silent(bp, &req, sizeof(req), HWRM_CMD_TIMEOUT))
++ len = sprintf(buf, "%u\n", resp->temp * 1000); /* display millidegree */
+ mutex_unlock(&bp->hwrm_cmd_lock);
+
+- return sprintf(buf, "%u\n", temp);
++ if (len)
++ return len;
++
++ return sprintf(buf, "unknown\n");
+ }
+ static SENSOR_DEVICE_ATTR(temp1_input, 0444, bnxt_show_temp, NULL, 0);
+
+@@ -9178,15 +9186,15 @@ static int __bnxt_open_nic(struct bnxt *bp, bool irq_re_init, bool link_re_init)
+ }
+ }
+
+- bnxt_enable_napi(bp);
+- bnxt_debug_dev_init(bp);
+-
+ rc = bnxt_init_nic(bp, irq_re_init);
+ if (rc) {
+ netdev_err(bp->dev, "bnxt_init_nic err: %x\n", rc);
+- goto open_err;
++ goto open_err_irq;
+ }
+
++ bnxt_enable_napi(bp);
++ bnxt_debug_dev_init(bp);
++
+ if (link_re_init) {
+ mutex_lock(&bp->link_lock);
+ rc = bnxt_update_phy_setting(bp);
+@@ -9217,10 +9225,6 @@ static int __bnxt_open_nic(struct bnxt *bp, bool irq_re_init, bool link_re_init)
+ bnxt_vf_reps_open(bp);
+ return 0;
+
+-open_err:
+- bnxt_debug_dev_exit(bp);
+- bnxt_disable_napi(bp);
+-
+ open_err_irq:
+ bnxt_del_napi(bp);
+
+@@ -11501,6 +11505,7 @@ static void bnxt_remove_one(struct pci_dev *pdev)
+ unregister_netdev(dev);
+ bnxt_dl_unregister(bp);
+ bnxt_shutdown_tc(bp);
++ clear_bit(BNXT_STATE_IN_FW_RESET, &bp->state);
+ bnxt_cancel_sp_work(bp);
+ bp->sp_event = 0;
+
+@@ -12065,6 +12070,7 @@ static int bnxt_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
+ (long)pci_resource_start(pdev, 0), dev->dev_addr);
+ pcie_print_link_status(pdev);
+
++ pci_save_state(pdev);
+ return 0;
+
+ init_err_cleanup:
+@@ -12260,6 +12266,8 @@ static pci_ers_result_t bnxt_io_slot_reset(struct pci_dev *pdev)
+ "Cannot re-enable PCI device after reset.\n");
+ } else {
+ pci_set_master(pdev);
++ pci_restore_state(pdev);
++ pci_save_state(pdev);
+
+ err = bnxt_hwrm_func_reset(bp);
+ if (!err) {
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
+index b4aa56dc4f9fb..bc2c76fa54cad 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
+@@ -494,20 +494,13 @@ static int bnxt_get_num_tpa_ring_stats(struct bnxt *bp)
+ static int bnxt_get_num_ring_stats(struct bnxt *bp)
+ {
+ int rx, tx, cmn;
+- bool sh = false;
+-
+- if (bp->flags & BNXT_FLAG_SHARED_RINGS)
+- sh = true;
+
+ rx = NUM_RING_RX_HW_STATS + NUM_RING_RX_SW_STATS +
+ bnxt_get_num_tpa_ring_stats(bp);
+ tx = NUM_RING_TX_HW_STATS;
+ cmn = NUM_RING_CMN_SW_STATS;
+- if (sh)
+- return (rx + tx + cmn) * bp->cp_nr_rings;
+- else
+- return rx * bp->rx_nr_rings + tx * bp->tx_nr_rings +
+- cmn * bp->cp_nr_rings;
++ return rx * bp->rx_nr_rings + tx * bp->tx_nr_rings +
++ cmn * bp->cp_nr_rings;
+ }
+
+ static int bnxt_get_num_stats(struct bnxt *bp)
+@@ -847,7 +840,7 @@ static void bnxt_get_channels(struct net_device *dev,
+ int max_tx_sch_inputs;
+
+ /* Get the most up-to-date max_tx_sch_inputs. */
+- if (BNXT_NEW_RM(bp))
++ if (netif_running(dev) && BNXT_NEW_RM(bp))
+ bnxt_hwrm_func_resc_qcaps(bp, false);
+ max_tx_sch_inputs = hw_resc->max_tx_sch_inputs;
+
+@@ -2270,6 +2263,9 @@ static int bnxt_get_nvram_directory(struct net_device *dev, u32 len, u8 *data)
+ if (rc != 0)
+ return rc;
+
++ if (!dir_entries || !entry_length)
++ return -EIO;
++
+ /* Insert 2 bytes of directory info (count and size of entries) */
+ if (len < 2)
+ return -EINVAL;
+diff --git a/drivers/net/ethernet/broadcom/genet/bcmgenet.c b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
+index e471b14fc6e98..f0074c873da3b 100644
+--- a/drivers/net/ethernet/broadcom/genet/bcmgenet.c
++++ b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
+@@ -1364,7 +1364,7 @@ static int bcmgenet_validate_flow(struct net_device *dev,
+ case ETHER_FLOW:
+ eth_mask = &cmd->fs.m_u.ether_spec;
+ /* don't allow mask which isn't valid */
+- if (VALIDATE_MASK(eth_mask->h_source) ||
++ if (VALIDATE_MASK(eth_mask->h_dest) ||
+ VALIDATE_MASK(eth_mask->h_source) ||
+ VALIDATE_MASK(eth_mask->h_proto)) {
+ netdev_err(dev, "rxnfc: Unsupported mask\n");
+diff --git a/drivers/net/ethernet/broadcom/tg3.c b/drivers/net/ethernet/broadcom/tg3.c
+index ebff1fc0d8cef..4515804d1ce4c 100644
+--- a/drivers/net/ethernet/broadcom/tg3.c
++++ b/drivers/net/ethernet/broadcom/tg3.c
+@@ -7221,8 +7221,8 @@ static inline void tg3_reset_task_schedule(struct tg3 *tp)
+
+ static inline void tg3_reset_task_cancel(struct tg3 *tp)
+ {
+- cancel_work_sync(&tp->reset_task);
+- tg3_flag_clear(tp, RESET_TASK_PENDING);
++ if (test_and_clear_bit(TG3_FLAG_RESET_TASK_PENDING, tp->tg3_flags))
++ cancel_work_sync(&tp->reset_task);
+ tg3_flag_clear(tp, TX_RECOVERY_PENDING);
+ }
+
+@@ -11209,18 +11209,27 @@ static void tg3_reset_task(struct work_struct *work)
+
+ tg3_halt(tp, RESET_KIND_SHUTDOWN, 0);
+ err = tg3_init_hw(tp, true);
+- if (err)
++ if (err) {
++ tg3_full_unlock(tp);
++ tp->irq_sync = 0;
++ tg3_napi_enable(tp);
++ /* Clear this flag so that tg3_reset_task_cancel() will not
++ * call cancel_work_sync() and wait forever.
++ */
++ tg3_flag_clear(tp, RESET_TASK_PENDING);
++ dev_close(tp->dev);
+ goto out;
++ }
+
+ tg3_netif_start(tp);
+
+-out:
+ tg3_full_unlock(tp);
+
+ if (!err)
+ tg3_phy_start(tp);
+
+ tg3_flag_clear(tp, RESET_TASK_PENDING);
++out:
+ rtnl_unlock();
+ }
+
+diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_thermal.c b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_thermal.c
+index 3de8a5e83b6c7..d7fefdbf3e575 100644
+--- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_thermal.c
++++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_thermal.c
+@@ -62,6 +62,7 @@ static struct thermal_zone_device_ops cxgb4_thermal_ops = {
+ int cxgb4_thermal_init(struct adapter *adap)
+ {
+ struct ch_thermal *ch_thermal = &adap->ch_thermal;
++ char ch_tz_name[THERMAL_NAME_LENGTH];
+ int num_trip = CXGB4_NUM_TRIPS;
+ u32 param, val;
+ int ret;
+@@ -82,7 +83,8 @@ int cxgb4_thermal_init(struct adapter *adap)
+ ch_thermal->trip_type = THERMAL_TRIP_CRITICAL;
+ }
+
+- ch_thermal->tzdev = thermal_zone_device_register("cxgb4", num_trip,
++ snprintf(ch_tz_name, sizeof(ch_tz_name), "cxgb4_%s", adap->name);
++ ch_thermal->tzdev = thermal_zone_device_register(ch_tz_name, num_trip,
+ 0, adap,
+ &cxgb4_thermal_ops,
+ NULL, 0, 0);
+@@ -97,7 +99,9 @@ int cxgb4_thermal_init(struct adapter *adap)
+
+ int cxgb4_thermal_remove(struct adapter *adap)
+ {
+- if (adap->ch_thermal.tzdev)
++ if (adap->ch_thermal.tzdev) {
+ thermal_zone_device_unregister(adap->ch_thermal.tzdev);
++ adap->ch_thermal.tzdev = NULL;
++ }
+ return 0;
+ }
+diff --git a/drivers/net/ethernet/cortina/gemini.c b/drivers/net/ethernet/cortina/gemini.c
+index 62e271aea4a50..ffec0f3dd9578 100644
+--- a/drivers/net/ethernet/cortina/gemini.c
++++ b/drivers/net/ethernet/cortina/gemini.c
+@@ -2446,8 +2446,8 @@ static int gemini_ethernet_port_probe(struct platform_device *pdev)
+ port->reset = devm_reset_control_get_exclusive(dev, NULL);
+ if (IS_ERR(port->reset)) {
+ dev_err(dev, "no reset\n");
+- clk_disable_unprepare(port->pclk);
+- return PTR_ERR(port->reset);
++ ret = PTR_ERR(port->reset);
++ goto unprepare;
+ }
+ reset_control_reset(port->reset);
+ usleep_range(100, 500);
+@@ -2502,25 +2502,25 @@ static int gemini_ethernet_port_probe(struct platform_device *pdev)
+ IRQF_SHARED,
+ port_names[port->id],
+ port);
+- if (ret) {
+- clk_disable_unprepare(port->pclk);
+- return ret;
+- }
++ if (ret)
++ goto unprepare;
+
+ ret = register_netdev(netdev);
+- if (!ret) {
++ if (ret)
++ goto unprepare;
++
++ netdev_info(netdev,
++ "irq %d, DMA @ 0x%pap, GMAC @ 0x%pap\n",
++ port->irq, &dmares->start,
++ &gmacres->start);
++ ret = gmac_setup_phy(netdev);
++ if (ret)
+ netdev_info(netdev,
+- "irq %d, DMA @ 0x%pap, GMAC @ 0x%pap\n",
+- port->irq, &dmares->start,
+- &gmacres->start);
+- ret = gmac_setup_phy(netdev);
+- if (ret)
+- netdev_info(netdev,
+- "PHY init failed, deferring to ifup time\n");
+- return 0;
+- }
++ "PHY init failed, deferring to ifup time\n");
++ return 0;
+
+- port->netdev = NULL;
++unprepare:
++ clk_disable_unprepare(port->pclk);
+ return ret;
+ }
+
+diff --git a/drivers/net/ethernet/hisilicon/hns/hns_enet.c b/drivers/net/ethernet/hisilicon/hns/hns_enet.c
+index 23f278e46975b..22522f8a52999 100644
+--- a/drivers/net/ethernet/hisilicon/hns/hns_enet.c
++++ b/drivers/net/ethernet/hisilicon/hns/hns_enet.c
+@@ -2282,8 +2282,10 @@ static int hns_nic_dev_probe(struct platform_device *pdev)
+ priv->enet_ver = AE_VERSION_1;
+ else if (acpi_dev_found(hns_enet_acpi_match[1].id))
+ priv->enet_ver = AE_VERSION_2;
+- else
+- return -ENXIO;
++ else {
++ ret = -ENXIO;
++ goto out_read_prop_fail;
++ }
+
+ /* try to find port-idx-in-ae first */
+ ret = acpi_node_get_property_reference(dev->fwnode,
+@@ -2299,7 +2301,8 @@ static int hns_nic_dev_probe(struct platform_device *pdev)
+ priv->fwnode = args.fwnode;
+ } else {
+ dev_err(dev, "cannot read cfg data from OF or acpi\n");
+- return -ENXIO;
++ ret = -ENXIO;
++ goto out_read_prop_fail;
+ }
+
+ ret = device_property_read_u32(dev, "port-idx-in-ae", &port_id);
+diff --git a/drivers/net/ethernet/mellanox/mlx4/mr.c b/drivers/net/ethernet/mellanox/mlx4/mr.c
+index d2986f1f2db02..d7444782bfdd0 100644
+--- a/drivers/net/ethernet/mellanox/mlx4/mr.c
++++ b/drivers/net/ethernet/mellanox/mlx4/mr.c
+@@ -114,7 +114,7 @@ static int mlx4_buddy_init(struct mlx4_buddy *buddy, int max_order)
+ goto err_out;
+
+ for (i = 0; i <= buddy->max_order; ++i) {
+- s = BITS_TO_LONGS(1 << (buddy->max_order - i));
++ s = BITS_TO_LONGS(1UL << (buddy->max_order - i));
+ buddy->bits[i] = kvmalloc_array(s, sizeof(long), GFP_KERNEL | __GFP_ZERO);
+ if (!buddy->bits[i])
+ goto err_out_free;
+diff --git a/drivers/net/ethernet/netronome/nfp/flower/tunnel_conf.c b/drivers/net/ethernet/netronome/nfp/flower/tunnel_conf.c
+index 2df3deedf9fd8..7248d248f6041 100644
+--- a/drivers/net/ethernet/netronome/nfp/flower/tunnel_conf.c
++++ b/drivers/net/ethernet/netronome/nfp/flower/tunnel_conf.c
+@@ -61,6 +61,7 @@ struct nfp_tun_active_tuns {
+ * @flags: options part of the request
+ * @tun_info.ipv6: dest IPv6 address of active route
+ * @tun_info.egress_port: port the encapsulated packet egressed
++ * @tun_info.extra: reserved for future use
+ * @tun_info: tunnels that have sent traffic in reported period
+ */
+ struct nfp_tun_active_tuns_v6 {
+@@ -70,6 +71,7 @@ struct nfp_tun_active_tuns_v6 {
+ struct route_ip_info_v6 {
+ struct in6_addr ipv6;
+ __be32 egress_port;
++ __be32 extra[2];
+ } tun_info[];
+ };
+
+diff --git a/drivers/net/ethernet/renesas/ravb_main.c b/drivers/net/ethernet/renesas/ravb_main.c
+index 99f7aae102ce1..df89d09b253e2 100644
+--- a/drivers/net/ethernet/renesas/ravb_main.c
++++ b/drivers/net/ethernet/renesas/ravb_main.c
+@@ -1342,6 +1342,51 @@ static inline int ravb_hook_irq(unsigned int irq, irq_handler_t handler,
+ return error;
+ }
+
++/* MDIO bus init function */
++static int ravb_mdio_init(struct ravb_private *priv)
++{
++ struct platform_device *pdev = priv->pdev;
++ struct device *dev = &pdev->dev;
++ int error;
++
++ /* Bitbang init */
++ priv->mdiobb.ops = &bb_ops;
++
++ /* MII controller setting */
++ priv->mii_bus = alloc_mdio_bitbang(&priv->mdiobb);
++ if (!priv->mii_bus)
++ return -ENOMEM;
++
++ /* Hook up MII support for ethtool */
++ priv->mii_bus->name = "ravb_mii";
++ priv->mii_bus->parent = dev;
++ snprintf(priv->mii_bus->id, MII_BUS_ID_SIZE, "%s-%x",
++ pdev->name, pdev->id);
++
++ /* Register MDIO bus */
++ error = of_mdiobus_register(priv->mii_bus, dev->of_node);
++ if (error)
++ goto out_free_bus;
++
++ return 0;
++
++out_free_bus:
++ free_mdio_bitbang(priv->mii_bus);
++ return error;
++}
++
++/* MDIO bus release function */
++static int ravb_mdio_release(struct ravb_private *priv)
++{
++ /* Unregister mdio bus */
++ mdiobus_unregister(priv->mii_bus);
++
++ /* Free bitbang info */
++ free_mdio_bitbang(priv->mii_bus);
++
++ return 0;
++}
++
+ /* Network device open function for Ethernet AVB */
+ static int ravb_open(struct net_device *ndev)
+ {
+@@ -1350,6 +1395,13 @@ static int ravb_open(struct net_device *ndev)
+ struct device *dev = &pdev->dev;
+ int error;
+
++ /* MDIO bus init */
++ error = ravb_mdio_init(priv);
++ if (error) {
++ netdev_err(ndev, "failed to initialize MDIO\n");
++ return error;
++ }
++
+ napi_enable(&priv->napi[RAVB_BE]);
+ napi_enable(&priv->napi[RAVB_NC]);
+
+@@ -1427,6 +1479,7 @@ out_free_irq:
+ out_napi_off:
+ napi_disable(&priv->napi[RAVB_NC]);
+ napi_disable(&priv->napi[RAVB_BE]);
++ ravb_mdio_release(priv);
+ return error;
+ }
+
+@@ -1736,6 +1789,8 @@ static int ravb_close(struct net_device *ndev)
+ ravb_ring_free(ndev, RAVB_BE);
+ ravb_ring_free(ndev, RAVB_NC);
+
++ ravb_mdio_release(priv);
++
+ return 0;
+ }
+
+@@ -1887,51 +1942,6 @@ static const struct net_device_ops ravb_netdev_ops = {
+ .ndo_set_features = ravb_set_features,
+ };
+
+-/* MDIO bus init function */
+-static int ravb_mdio_init(struct ravb_private *priv)
+-{
+- struct platform_device *pdev = priv->pdev;
+- struct device *dev = &pdev->dev;
+- int error;
+-
+- /* Bitbang init */
+- priv->mdiobb.ops = &bb_ops;
+-
+- /* MII controller setting */
+- priv->mii_bus = alloc_mdio_bitbang(&priv->mdiobb);
+- if (!priv->mii_bus)
+- return -ENOMEM;
+-
+- /* Hook up MII support for ethtool */
+- priv->mii_bus->name = "ravb_mii";
+- priv->mii_bus->parent = dev;
+- snprintf(priv->mii_bus->id, MII_BUS_ID_SIZE, "%s-%x",
+- pdev->name, pdev->id);
+-
+- /* Register MDIO bus */
+- error = of_mdiobus_register(priv->mii_bus, dev->of_node);
+- if (error)
+- goto out_free_bus;
+-
+- return 0;
+-
+-out_free_bus:
+- free_mdio_bitbang(priv->mii_bus);
+- return error;
+-}
+-
+-/* MDIO bus release function */
+-static int ravb_mdio_release(struct ravb_private *priv)
+-{
+- /* Unregister mdio bus */
+- mdiobus_unregister(priv->mii_bus);
+-
+- /* Free bitbang info */
+- free_mdio_bitbang(priv->mii_bus);
+-
+- return 0;
+-}
+-
+ static const struct of_device_id ravb_match_table[] = {
+ { .compatible = "renesas,etheravb-r8a7790", .data = (void *)RCAR_GEN2 },
+ { .compatible = "renesas,etheravb-r8a7794", .data = (void *)RCAR_GEN2 },
+@@ -2174,13 +2184,6 @@ static int ravb_probe(struct platform_device *pdev)
+ eth_hw_addr_random(ndev);
+ }
+
+- /* MDIO bus init */
+- error = ravb_mdio_init(priv);
+- if (error) {
+- dev_err(&pdev->dev, "failed to initialize MDIO\n");
+- goto out_dma_free;
+- }
+-
+ netif_napi_add(ndev, &priv->napi[RAVB_BE], ravb_poll, 64);
+ netif_napi_add(ndev, &priv->napi[RAVB_NC], ravb_poll, 64);
+
+@@ -2202,8 +2205,6 @@ static int ravb_probe(struct platform_device *pdev)
+ out_napi_del:
+ netif_napi_del(&priv->napi[RAVB_NC]);
+ netif_napi_del(&priv->napi[RAVB_BE]);
+- ravb_mdio_release(priv);
+-out_dma_free:
+ dma_free_coherent(ndev->dev.parent, priv->desc_bat_size, priv->desc_bat,
+ priv->desc_bat_dma);
+
+@@ -2235,7 +2236,6 @@ static int ravb_remove(struct platform_device *pdev)
+ unregister_netdev(ndev);
+ netif_napi_del(&priv->napi[RAVB_NC]);
+ netif_napi_del(&priv->napi[RAVB_BE]);
+- ravb_mdio_release(priv);
+ pm_runtime_disable(&pdev->dev);
+ free_netdev(ndev);
+ platform_set_drvdata(pdev, NULL);
+diff --git a/drivers/net/ethernet/ti/am65-cpsw-nuss.c b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
+index 88832277edd5a..c7c9980e02604 100644
+--- a/drivers/net/ethernet/ti/am65-cpsw-nuss.c
++++ b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
+@@ -172,6 +172,8 @@ void am65_cpsw_nuss_adjust_link(struct net_device *ndev)
+ if (phy->speed == 10 && phy_interface_is_rgmii(phy))
+ /* Can be used with in band mode only */
+ mac_control |= CPSW_SL_CTL_EXT_EN;
++ if (phy->speed == 100 && phy->interface == PHY_INTERFACE_MODE_RMII)
++ mac_control |= CPSW_SL_CTL_IFCTL_A;
+ if (phy->duplex)
+ mac_control |= CPSW_SL_CTL_FULLDUPLEX;
+
+diff --git a/drivers/net/ethernet/ti/cpsw.c b/drivers/net/ethernet/ti/cpsw.c
+index 9b17bbbe102fe..4a65edc5a3759 100644
+--- a/drivers/net/ethernet/ti/cpsw.c
++++ b/drivers/net/ethernet/ti/cpsw.c
+@@ -1116,7 +1116,7 @@ static int cpsw_ndo_vlan_rx_kill_vid(struct net_device *ndev,
+ HOST_PORT_NUM, ALE_VLAN, vid);
+ ret |= cpsw_ale_del_mcast(cpsw->ale, priv->ndev->broadcast,
+ 0, ALE_VLAN, vid);
+- ret |= cpsw_ale_flush_multicast(cpsw->ale, 0, vid);
++ ret |= cpsw_ale_flush_multicast(cpsw->ale, ALE_PORT_HOST, vid);
+ err:
+ pm_runtime_put(cpsw->dev);
+ return ret;
+diff --git a/drivers/net/ethernet/ti/cpsw_new.c b/drivers/net/ethernet/ti/cpsw_new.c
+index 1247d35d42ef3..8ed78577cdedf 100644
+--- a/drivers/net/ethernet/ti/cpsw_new.c
++++ b/drivers/net/ethernet/ti/cpsw_new.c
+@@ -1032,19 +1032,34 @@ static int cpsw_ndo_vlan_rx_kill_vid(struct net_device *ndev,
+ return ret;
+ }
+
++ /* reset the return code as pm_runtime_get_sync() can return
++	 * non-zero values as well.
++ */
++ ret = 0;
+ for (i = 0; i < cpsw->data.slaves; i++) {
+ if (cpsw->slaves[i].ndev &&
+- vid == cpsw->slaves[i].port_vlan)
++ vid == cpsw->slaves[i].port_vlan) {
++ ret = -EINVAL;
+ goto err;
++ }
+ }
+
+ dev_dbg(priv->dev, "removing vlanid %d from vlan filter\n", vid);
+- cpsw_ale_del_vlan(cpsw->ale, vid, 0);
+- cpsw_ale_del_ucast(cpsw->ale, priv->mac_addr,
+- HOST_PORT_NUM, ALE_VLAN, vid);
+- cpsw_ale_del_mcast(cpsw->ale, priv->ndev->broadcast,
+- 0, ALE_VLAN, vid);
+- cpsw_ale_flush_multicast(cpsw->ale, 0, vid);
++ ret = cpsw_ale_del_vlan(cpsw->ale, vid, 0);
++ if (ret)
++ dev_err(priv->dev, "cpsw_ale_del_vlan() failed: ret %d\n", ret);
++ ret = cpsw_ale_del_ucast(cpsw->ale, priv->mac_addr,
++ HOST_PORT_NUM, ALE_VLAN, vid);
++ if (ret)
++ dev_err(priv->dev, "cpsw_ale_del_ucast() failed: ret %d\n",
++ ret);
++ ret = cpsw_ale_del_mcast(cpsw->ale, priv->ndev->broadcast,
++ 0, ALE_VLAN, vid);
++ if (ret)
++ dev_err(priv->dev, "cpsw_ale_del_mcast failed. ret %d\n",
++ ret);
++ cpsw_ale_flush_multicast(cpsw->ale, ALE_PORT_HOST, vid);
++ ret = 0;
+ err:
+ pm_runtime_put(cpsw->dev);
+ return ret;
+diff --git a/drivers/net/gtp.c b/drivers/net/gtp.c
+index 21640a035d7df..8e47d0112e5dc 100644
+--- a/drivers/net/gtp.c
++++ b/drivers/net/gtp.c
+@@ -1179,6 +1179,7 @@ static int gtp_genl_fill_info(struct sk_buff *skb, u32 snd_portid, u32 snd_seq,
+ goto nlmsg_failure;
+
+ if (nla_put_u32(skb, GTPA_VERSION, pctx->gtp_version) ||
++ nla_put_u32(skb, GTPA_LINK, pctx->dev->ifindex) ||
+ nla_put_be32(skb, GTPA_PEER_ADDRESS, pctx->peer_addr_ip4.s_addr) ||
+ nla_put_be32(skb, GTPA_MS_ADDRESS, pctx->ms_addr_ip4.s_addr))
+ goto nla_put_failure;
+diff --git a/drivers/net/phy/dp83867.c b/drivers/net/phy/dp83867.c
+index f3c04981b8da6..cd7032628a28c 100644
+--- a/drivers/net/phy/dp83867.c
++++ b/drivers/net/phy/dp83867.c
+@@ -215,9 +215,9 @@ static int dp83867_set_wol(struct phy_device *phydev,
+ if (wol->wolopts & WAKE_MAGICSECURE) {
+ phy_write_mmd(phydev, DP83867_DEVADDR, DP83867_RXFSOP1,
+ (wol->sopass[1] << 8) | wol->sopass[0]);
+- phy_write_mmd(phydev, DP83867_DEVADDR, DP83867_RXFSOP1,
++ phy_write_mmd(phydev, DP83867_DEVADDR, DP83867_RXFSOP2,
+ (wol->sopass[3] << 8) | wol->sopass[2]);
+- phy_write_mmd(phydev, DP83867_DEVADDR, DP83867_RXFSOP1,
++ phy_write_mmd(phydev, DP83867_DEVADDR, DP83867_RXFSOP3,
+ (wol->sopass[5] << 8) | wol->sopass[4]);
+
+ val_rxcfg |= DP83867_WOL_SEC_EN;
+diff --git a/drivers/net/usb/asix_common.c b/drivers/net/usb/asix_common.c
+index e39f41efda3ec..7bc6e8f856fe0 100644
+--- a/drivers/net/usb/asix_common.c
++++ b/drivers/net/usb/asix_common.c
+@@ -296,7 +296,7 @@ int asix_read_phy_addr(struct usbnet *dev, int internal)
+
+ netdev_dbg(dev->net, "asix_get_phy_addr()\n");
+
+- if (ret < 0) {
++ if (ret < 2) {
+ netdev_err(dev->net, "Error reading PHYID register: %02x\n", ret);
+ goto out;
+ }
+diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
+index f38548e6d55ec..fa0039dcacc66 100644
+--- a/drivers/nvme/host/core.c
++++ b/drivers/nvme/host/core.c
+@@ -4148,7 +4148,7 @@ static void nvme_free_ctrl(struct device *dev)
+ container_of(dev, struct nvme_ctrl, ctrl_device);
+ struct nvme_subsystem *subsys = ctrl->subsys;
+
+- if (subsys && ctrl->instance != subsys->instance)
++ if (!subsys || ctrl->instance != subsys->instance)
+ ida_simple_remove(&nvme_instance_ida, ctrl->instance);
+
+ kfree(ctrl->effects);
+diff --git a/drivers/nvme/target/core.c b/drivers/nvme/target/core.c
+index 6e2f623e472e9..58b035cc67a01 100644
+--- a/drivers/nvme/target/core.c
++++ b/drivers/nvme/target/core.c
+@@ -396,6 +396,9 @@ static void nvmet_keep_alive_timer(struct work_struct *work)
+
+ static void nvmet_start_keep_alive_timer(struct nvmet_ctrl *ctrl)
+ {
++ if (unlikely(ctrl->kato == 0))
++ return;
++
+ pr_debug("ctrl %d start keep-alive timer for %d secs\n",
+ ctrl->cntlid, ctrl->kato);
+
+@@ -405,6 +408,9 @@ static void nvmet_start_keep_alive_timer(struct nvmet_ctrl *ctrl)
+
+ static void nvmet_stop_keep_alive_timer(struct nvmet_ctrl *ctrl)
+ {
++ if (unlikely(ctrl->kato == 0))
++ return;
++
+ pr_debug("ctrl %d stop keep-alive\n", ctrl->cntlid);
+
+ cancel_delayed_work_sync(&ctrl->ka_work);
+diff --git a/drivers/nvme/target/fc.c b/drivers/nvme/target/fc.c
+index 27fd3b5aa621c..f98a1ba4dc47c 100644
+--- a/drivers/nvme/target/fc.c
++++ b/drivers/nvme/target/fc.c
+@@ -2362,9 +2362,9 @@ nvmet_fc_fod_op_done(struct nvmet_fc_fcp_iod *fod)
+ return;
+ if (fcpreq->fcp_error ||
+ fcpreq->transferred_length != fcpreq->transfer_length) {
+- spin_lock(&fod->flock);
++ spin_lock_irqsave(&fod->flock, flags);
+ fod->abort = true;
+- spin_unlock(&fod->flock);
++ spin_unlock_irqrestore(&fod->flock, flags);
+
+ nvmet_req_complete(&fod->req, NVME_SC_INTERNAL);
+ return;
+diff --git a/drivers/opp/core.c b/drivers/opp/core.c
+index 8c90f78717723..91dcad982d362 100644
+--- a/drivers/opp/core.c
++++ b/drivers/opp/core.c
+@@ -1265,13 +1265,19 @@ void dev_pm_opp_remove(struct device *dev, unsigned long freq)
+ }
+ EXPORT_SYMBOL_GPL(dev_pm_opp_remove);
+
+-void _opp_remove_all_static(struct opp_table *opp_table)
++bool _opp_remove_all_static(struct opp_table *opp_table)
+ {
+ struct dev_pm_opp *opp, *tmp;
++ bool ret = true;
+
+ mutex_lock(&opp_table->lock);
+
+- if (!opp_table->parsed_static_opps || --opp_table->parsed_static_opps)
++ if (!opp_table->parsed_static_opps) {
++ ret = false;
++ goto unlock;
++ }
++
++ if (--opp_table->parsed_static_opps)
+ goto unlock;
+
+ list_for_each_entry_safe(opp, tmp, &opp_table->opp_list, node) {
+@@ -1281,6 +1287,8 @@ void _opp_remove_all_static(struct opp_table *opp_table)
+
+ unlock:
+ mutex_unlock(&opp_table->lock);
++
++ return ret;
+ }
+
+ /**
+@@ -2382,13 +2390,15 @@ void _dev_pm_opp_find_and_remove_table(struct device *dev)
+ return;
+ }
+
+- _opp_remove_all_static(opp_table);
++ /*
++ * Drop the extra reference only if the OPP table was successfully added
++ * with dev_pm_opp_of_add_table() earlier.
++	 */
++ if (_opp_remove_all_static(opp_table))
++ dev_pm_opp_put_opp_table(opp_table);
+
+ /* Drop reference taken by _find_opp_table() */
+ dev_pm_opp_put_opp_table(opp_table);
+-
+- /* Drop reference taken while the OPP table was added */
+- dev_pm_opp_put_opp_table(opp_table);
+ }
+
+ /**
+diff --git a/drivers/opp/opp.h b/drivers/opp/opp.h
+index e51646ff279eb..c3fcd571e446d 100644
+--- a/drivers/opp/opp.h
++++ b/drivers/opp/opp.h
+@@ -212,7 +212,7 @@ struct opp_table {
+
+ /* Routines internal to opp core */
+ void dev_pm_opp_get(struct dev_pm_opp *opp);
+-void _opp_remove_all_static(struct opp_table *opp_table);
++bool _opp_remove_all_static(struct opp_table *opp_table);
+ void _get_opp_table_kref(struct opp_table *opp_table);
+ int _get_opp_count(struct opp_table *opp_table);
+ struct opp_table *_find_opp_table(struct device *dev);
+diff --git a/drivers/staging/media/sunxi/cedrus/cedrus.c b/drivers/staging/media/sunxi/cedrus/cedrus.c
+index bc27f9430eeb1..7c6b91f0e780a 100644
+--- a/drivers/staging/media/sunxi/cedrus/cedrus.c
++++ b/drivers/staging/media/sunxi/cedrus/cedrus.c
+@@ -199,6 +199,7 @@ static int cedrus_request_validate(struct media_request *req)
+ struct v4l2_ctrl *ctrl_test;
+ unsigned int count;
+ unsigned int i;
++ int ret = 0;
+
+ list_for_each_entry(obj, &req->objects, list) {
+ struct vb2_buffer *vb;
+@@ -243,12 +244,16 @@ static int cedrus_request_validate(struct media_request *req)
+ if (!ctrl_test) {
+ v4l2_info(&ctx->dev->v4l2_dev,
+ "Missing required codec control\n");
+- return -ENOENT;
++ ret = -ENOENT;
++ break;
+ }
+ }
+
+ v4l2_ctrl_request_hdl_put(hdl);
+
++ if (ret)
++ return ret;
++
+ return vb2_request_validate(req);
+ }
+
+diff --git a/drivers/thermal/qcom/qcom-spmi-temp-alarm.c b/drivers/thermal/qcom/qcom-spmi-temp-alarm.c
+index bf7bae42c141c..6dc879fea9c8a 100644
+--- a/drivers/thermal/qcom/qcom-spmi-temp-alarm.c
++++ b/drivers/thermal/qcom/qcom-spmi-temp-alarm.c
+@@ -1,6 +1,6 @@
+ // SPDX-License-Identifier: GPL-2.0-only
+ /*
+- * Copyright (c) 2011-2015, 2017, The Linux Foundation. All rights reserved.
++ * Copyright (c) 2011-2015, 2017, 2020, The Linux Foundation. All rights reserved.
+ */
+
+ #include <linux/bitops.h>
+@@ -191,7 +191,7 @@ static int qpnp_tm_get_temp(void *data, int *temp)
+ chip->temp = mili_celsius;
+ }
+
+- *temp = chip->temp < 0 ? 0 : chip->temp;
++ *temp = chip->temp;
+
+ return 0;
+ }
+diff --git a/drivers/thermal/ti-soc-thermal/omap4-thermal-data.c b/drivers/thermal/ti-soc-thermal/omap4-thermal-data.c
+index 63b02bfb2adf6..fdb8a495ab69a 100644
+--- a/drivers/thermal/ti-soc-thermal/omap4-thermal-data.c
++++ b/drivers/thermal/ti-soc-thermal/omap4-thermal-data.c
+@@ -37,20 +37,21 @@ static struct temp_sensor_data omap4430_mpu_temp_sensor_data = {
+
+ /*
+ * Temperature values in milli degree celsius
+- * ADC code values from 530 to 923
++ * ADC code values from 13 to 107, see TRM
++ * "18.4.10.2.3 ADC Codes Versus Temperature".
+ */
+ static const int
+ omap4430_adc_to_temp[OMAP4430_ADC_END_VALUE - OMAP4430_ADC_START_VALUE + 1] = {
+- -38000, -35000, -34000, -32000, -30000, -28000, -26000, -24000, -22000,
+- -20000, -18000, -17000, -15000, -13000, -12000, -10000, -8000, -6000,
+- -5000, -3000, -1000, 0, 2000, 3000, 5000, 6000, 8000, 10000, 12000,
+- 13000, 15000, 17000, 19000, 21000, 23000, 25000, 27000, 28000, 30000,
+- 32000, 33000, 35000, 37000, 38000, 40000, 42000, 43000, 45000, 47000,
+- 48000, 50000, 52000, 53000, 55000, 57000, 58000, 60000, 62000, 64000,
+- 66000, 68000, 70000, 71000, 73000, 75000, 77000, 78000, 80000, 82000,
+- 83000, 85000, 87000, 88000, 90000, 92000, 93000, 95000, 97000, 98000,
+- 100000, 102000, 103000, 105000, 107000, 109000, 111000, 113000, 115000,
+- 117000, 118000, 120000, 122000, 123000,
++ -40000, -38000, -35000, -34000, -32000, -30000, -28000, -26000, -24000,
++ -22000, -20000, -18500, -17000, -15000, -13500, -12000, -10000, -8000,
++ -6500, -5000, -3500, -1500, 0, 2000, 3500, 5000, 6500, 8500, 10000,
++ 12000, 13500, 15000, 17000, 19000, 21000, 23000, 25000, 27000, 28500,
++ 30000, 32000, 33500, 35000, 37000, 38500, 40000, 42000, 43500, 45000,
++ 47000, 48500, 50000, 52000, 53500, 55000, 57000, 58500, 60000, 62000,
++ 64000, 66000, 68000, 70000, 71500, 73500, 75000, 77000, 78500, 80000,
++ 82000, 83500, 85000, 87000, 88500, 90000, 92000, 93500, 95000, 97000,
++ 98500, 100000, 102000, 103500, 105000, 107000, 109000, 111000, 113000,
++ 115000, 117000, 118500, 120000, 122000, 123500, 125000,
+ };
+
+ /* OMAP4430 data */
+diff --git a/drivers/thermal/ti-soc-thermal/omap4xxx-bandgap.h b/drivers/thermal/ti-soc-thermal/omap4xxx-bandgap.h
+index a453ff8eb313e..9a3955c3853ba 100644
+--- a/drivers/thermal/ti-soc-thermal/omap4xxx-bandgap.h
++++ b/drivers/thermal/ti-soc-thermal/omap4xxx-bandgap.h
+@@ -53,9 +53,13 @@
+ * and thresholds for OMAP4430.
+ */
+
+-/* ADC conversion table limits */
+-#define OMAP4430_ADC_START_VALUE 0
+-#define OMAP4430_ADC_END_VALUE 127
++/*
++ * ADC conversion table limits. Ignore values outside the TRM listed
++ * range to avoid bogus thermal shutdowns. See omap4430 TRM chapter
++ * "18.4.10.2.3 ADC Codes Versus Temperature".
++ */
++#define OMAP4430_ADC_START_VALUE 13
++#define OMAP4430_ADC_END_VALUE 107
+ /* bandgap clock limits (no control on 4430) */
+ #define OMAP4430_MAX_FREQ 32768
+ #define OMAP4430_MIN_FREQ 32768
+diff --git a/drivers/tty/serial/qcom_geni_serial.c b/drivers/tty/serial/qcom_geni_serial.c
+index 457c0bf8cbf83..ffdf6da016c21 100644
+--- a/drivers/tty/serial/qcom_geni_serial.c
++++ b/drivers/tty/serial/qcom_geni_serial.c
+@@ -1047,7 +1047,7 @@ static unsigned int qcom_geni_serial_tx_empty(struct uart_port *uport)
+ }
+
+ #ifdef CONFIG_SERIAL_QCOM_GENI_CONSOLE
+-static int __init qcom_geni_console_setup(struct console *co, char *options)
++static int qcom_geni_console_setup(struct console *co, char *options)
+ {
+ struct uart_port *uport;
+ struct qcom_geni_serial_port *port;
+diff --git a/drivers/xen/xenbus/xenbus_client.c b/drivers/xen/xenbus/xenbus_client.c
+index 786fbb7d8be06..907bcbb93afbf 100644
+--- a/drivers/xen/xenbus/xenbus_client.c
++++ b/drivers/xen/xenbus/xenbus_client.c
+@@ -379,8 +379,14 @@ int xenbus_grant_ring(struct xenbus_device *dev, void *vaddr,
+ int i, j;
+
+ for (i = 0; i < nr_pages; i++) {
+- err = gnttab_grant_foreign_access(dev->otherend_id,
+- virt_to_gfn(vaddr), 0);
++ unsigned long gfn;
++
++ if (is_vmalloc_addr(vaddr))
++ gfn = pfn_to_gfn(vmalloc_to_pfn(vaddr));
++ else
++ gfn = virt_to_gfn(vaddr);
++
++ err = gnttab_grant_foreign_access(dev->otherend_id, gfn, 0);
+ if (err < 0) {
+ xenbus_dev_fatal(dev, err,
+ "granting access to ring page");
+diff --git a/fs/affs/amigaffs.c b/fs/affs/amigaffs.c
+index f708c45d5f664..29f11e10a7c7d 100644
+--- a/fs/affs/amigaffs.c
++++ b/fs/affs/amigaffs.c
+@@ -420,24 +420,51 @@ affs_mode_to_prot(struct inode *inode)
+ u32 prot = AFFS_I(inode)->i_protect;
+ umode_t mode = inode->i_mode;
+
++ /*
++ * First, clear all RWED bits for owner, group, other.
++ * Then, recalculate them afresh.
++ *
++ * We'll always clear the delete-inhibit bit for the owner, as that is
++ * the classic single-user mode AmigaOS protection bit and we need to
++ * stay compatible with all scenarios.
++ *
++ * Since multi-user AmigaOS is an extension, we'll only set the
++ * delete-allow bit if any of the other bits in the same user class
++ * (group/other) are used.
++ */
++ prot &= ~(FIBF_NOEXECUTE | FIBF_NOREAD
++ | FIBF_NOWRITE | FIBF_NODELETE
++ | FIBF_GRP_EXECUTE | FIBF_GRP_READ
++ | FIBF_GRP_WRITE | FIBF_GRP_DELETE
++ | FIBF_OTR_EXECUTE | FIBF_OTR_READ
++ | FIBF_OTR_WRITE | FIBF_OTR_DELETE);
++
++ /* Classic single-user AmigaOS flags. These are inverted. */
+ if (!(mode & 0100))
+ prot |= FIBF_NOEXECUTE;
+ if (!(mode & 0400))
+ prot |= FIBF_NOREAD;
+ if (!(mode & 0200))
+ prot |= FIBF_NOWRITE;
++
++ /* Multi-user extended flags. Not inverted. */
+ if (mode & 0010)
+ prot |= FIBF_GRP_EXECUTE;
+ if (mode & 0040)
+ prot |= FIBF_GRP_READ;
+ if (mode & 0020)
+ prot |= FIBF_GRP_WRITE;
++ if (mode & 0070)
++ prot |= FIBF_GRP_DELETE;
++
+ if (mode & 0001)
+ prot |= FIBF_OTR_EXECUTE;
+ if (mode & 0004)
+ prot |= FIBF_OTR_READ;
+ if (mode & 0002)
+ prot |= FIBF_OTR_WRITE;
++ if (mode & 0007)
++ prot |= FIBF_OTR_DELETE;
+
+ AFFS_I(inode)->i_protect = prot;
+ }
+diff --git a/fs/affs/file.c b/fs/affs/file.c
+index a85817f54483f..ba084b0b214b9 100644
+--- a/fs/affs/file.c
++++ b/fs/affs/file.c
+@@ -428,6 +428,24 @@ static int affs_write_begin(struct file *file, struct address_space *mapping,
+ return ret;
+ }
+
++static int affs_write_end(struct file *file, struct address_space *mapping,
++ loff_t pos, unsigned int len, unsigned int copied,
++ struct page *page, void *fsdata)
++{
++ struct inode *inode = mapping->host;
++ int ret;
++
++ ret = generic_write_end(file, mapping, pos, len, copied, page, fsdata);
++
++ /* Clear Archived bit on file writes, as AmigaOS would do */
++ if (AFFS_I(inode)->i_protect & FIBF_ARCHIVED) {
++ AFFS_I(inode)->i_protect &= ~FIBF_ARCHIVED;
++ mark_inode_dirty(inode);
++ }
++
++ return ret;
++}
++
+ static sector_t _affs_bmap(struct address_space *mapping, sector_t block)
+ {
+ return generic_block_bmap(mapping,block,affs_get_block);
+@@ -437,7 +455,7 @@ const struct address_space_operations affs_aops = {
+ .readpage = affs_readpage,
+ .writepage = affs_writepage,
+ .write_begin = affs_write_begin,
+- .write_end = generic_write_end,
++ .write_end = affs_write_end,
+ .direct_IO = affs_direct_IO,
+ .bmap = _affs_bmap
+ };
+@@ -794,6 +812,12 @@ done:
+ if (tmp > inode->i_size)
+ inode->i_size = AFFS_I(inode)->mmu_private = tmp;
+
++ /* Clear Archived bit on file writes, as AmigaOS would do */
++ if (AFFS_I(inode)->i_protect & FIBF_ARCHIVED) {
++ AFFS_I(inode)->i_protect &= ~FIBF_ARCHIVED;
++ mark_inode_dirty(inode);
++ }
++
+ err_first_bh:
+ unlock_page(page);
+ put_page(page);
+diff --git a/fs/afs/fs_probe.c b/fs/afs/fs_probe.c
+index 5d9ef517cf816..e7e98ad63a91a 100644
+--- a/fs/afs/fs_probe.c
++++ b/fs/afs/fs_probe.c
+@@ -161,8 +161,8 @@ responded:
+ }
+ }
+
+- rtt_us = rxrpc_kernel_get_srtt(call->net->socket, call->rxcall);
+- if (rtt_us < server->probe.rtt) {
++ if (rxrpc_kernel_get_srtt(call->net->socket, call->rxcall, &rtt_us) &&
++ rtt_us < server->probe.rtt) {
+ server->probe.rtt = rtt_us;
+ server->rtt = rtt_us;
+ alist->preferred = index;
+diff --git a/fs/afs/vl_probe.c b/fs/afs/vl_probe.c
+index e3aa013c21779..081b7e5b13f58 100644
+--- a/fs/afs/vl_probe.c
++++ b/fs/afs/vl_probe.c
+@@ -92,8 +92,8 @@ responded:
+ }
+ }
+
+- rtt_us = rxrpc_kernel_get_srtt(call->net->socket, call->rxcall);
+- if (rtt_us < server->probe.rtt) {
++ if (rxrpc_kernel_get_srtt(call->net->socket, call->rxcall, &rtt_us) &&
++ rtt_us < server->probe.rtt) {
+ server->probe.rtt = rtt_us;
+ alist->preferred = index;
+ have_result = true;
+diff --git a/fs/btrfs/block-group.c b/fs/btrfs/block-group.c
+index c037ef514b64a..8702e8a4d20db 100644
+--- a/fs/btrfs/block-group.c
++++ b/fs/btrfs/block-group.c
+@@ -1814,7 +1814,6 @@ static struct btrfs_block_group *btrfs_create_block_group_cache(
+
+ cache->fs_info = fs_info;
+ cache->full_stripe_len = btrfs_full_stripe_len(fs_info, start);
+- set_free_space_tree_thresholds(cache);
+
+ cache->discard_index = BTRFS_DISCARD_INDEX_UNUSED;
+
+@@ -1928,6 +1927,8 @@ static int read_one_block_group(struct btrfs_fs_info *info,
+ if (ret < 0)
+ goto error;
+
++ set_free_space_tree_thresholds(cache);
++
+ if (need_clear) {
+ /*
+ * When we mount with old space cache, we need to
+@@ -2148,6 +2149,7 @@ int btrfs_make_block_group(struct btrfs_trans_handle *trans, u64 bytes_used,
+ return -ENOMEM;
+
+ cache->length = size;
++ set_free_space_tree_thresholds(cache);
+ cache->used = bytes_used;
+ cache->flags = type;
+ cache->last_byte_to_unpin = (u64)-1;
+diff --git a/fs/btrfs/ctree.c b/fs/btrfs/ctree.c
+index 82ab6e5a386da..367e3044b620b 100644
+--- a/fs/btrfs/ctree.c
++++ b/fs/btrfs/ctree.c
+@@ -1297,6 +1297,8 @@ tree_mod_log_rewind(struct btrfs_fs_info *fs_info, struct btrfs_path *path,
+ btrfs_tree_read_unlock_blocking(eb);
+ free_extent_buffer(eb);
+
++ btrfs_set_buffer_lockdep_class(btrfs_header_owner(eb_rewin),
++ eb_rewin, btrfs_header_level(eb_rewin));
+ btrfs_tree_read_lock(eb_rewin);
+ __tree_mod_log_rewind(fs_info, eb_rewin, time_seq, tm);
+ WARN_ON(btrfs_header_nritems(eb_rewin) >
+@@ -1370,7 +1372,6 @@ get_old_root(struct btrfs_root *root, u64 time_seq)
+
+ if (!eb)
+ return NULL;
+- btrfs_tree_read_lock(eb);
+ if (old_root) {
+ btrfs_set_header_bytenr(eb, eb->start);
+ btrfs_set_header_backref_rev(eb, BTRFS_MIXED_BACKREF_REV);
+@@ -1378,6 +1379,9 @@ get_old_root(struct btrfs_root *root, u64 time_seq)
+ btrfs_set_header_level(eb, old_root->level);
+ btrfs_set_header_generation(eb, old_generation);
+ }
++ btrfs_set_buffer_lockdep_class(btrfs_header_owner(eb), eb,
++ btrfs_header_level(eb));
++ btrfs_tree_read_lock(eb);
+ if (tm)
+ __tree_mod_log_rewind(fs_info, eb, time_seq, tm);
+ else
+diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
+index 5871ef78edbac..e9eedc053fc52 100644
+--- a/fs/btrfs/extent-tree.c
++++ b/fs/btrfs/extent-tree.c
+@@ -4527,7 +4527,7 @@ btrfs_init_new_buffer(struct btrfs_trans_handle *trans, struct btrfs_root *root,
+ return ERR_PTR(-EUCLEAN);
+ }
+
+- btrfs_set_buffer_lockdep_class(root->root_key.objectid, buf, level);
++ btrfs_set_buffer_lockdep_class(owner, buf, level);
+ btrfs_tree_lock(buf);
+ btrfs_clean_tree_block(buf);
+ clear_bit(EXTENT_BUFFER_STALE, &buf->bflags);
+diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
+index 8ba8788461ae5..df68736bdad1b 100644
+--- a/fs/btrfs/extent_io.c
++++ b/fs/btrfs/extent_io.c
+@@ -5640,9 +5640,9 @@ void read_extent_buffer(const struct extent_buffer *eb, void *dstv,
+ }
+ }
+
+-int read_extent_buffer_to_user(const struct extent_buffer *eb,
+- void __user *dstv,
+- unsigned long start, unsigned long len)
++int read_extent_buffer_to_user_nofault(const struct extent_buffer *eb,
++ void __user *dstv,
++ unsigned long start, unsigned long len)
+ {
+ size_t cur;
+ size_t offset;
+@@ -5662,7 +5662,7 @@ int read_extent_buffer_to_user(const struct extent_buffer *eb,
+
+ cur = min(len, (PAGE_SIZE - offset));
+ kaddr = page_address(page);
+- if (copy_to_user(dst, kaddr + offset, cur)) {
++ if (copy_to_user_nofault(dst, kaddr + offset, cur)) {
+ ret = -EFAULT;
+ break;
+ }
+diff --git a/fs/btrfs/extent_io.h b/fs/btrfs/extent_io.h
+index 87f60a48f7500..0ab8a20d282b8 100644
+--- a/fs/btrfs/extent_io.h
++++ b/fs/btrfs/extent_io.h
+@@ -241,9 +241,9 @@ int memcmp_extent_buffer(const struct extent_buffer *eb, const void *ptrv,
+ void read_extent_buffer(const struct extent_buffer *eb, void *dst,
+ unsigned long start,
+ unsigned long len);
+-int read_extent_buffer_to_user(const struct extent_buffer *eb,
+- void __user *dst, unsigned long start,
+- unsigned long len);
++int read_extent_buffer_to_user_nofault(const struct extent_buffer *eb,
++ void __user *dst, unsigned long start,
++ unsigned long len);
+ void write_extent_buffer_fsid(const struct extent_buffer *eb, const void *src);
+ void write_extent_buffer_chunk_tree_uuid(const struct extent_buffer *eb,
+ const void *src);
+diff --git a/fs/btrfs/free-space-tree.c b/fs/btrfs/free-space-tree.c
+index 8b1f5c8897b75..6b9faf3b0e967 100644
+--- a/fs/btrfs/free-space-tree.c
++++ b/fs/btrfs/free-space-tree.c
+@@ -22,6 +22,10 @@ void set_free_space_tree_thresholds(struct btrfs_block_group *cache)
+ size_t bitmap_size;
+ u64 num_bitmaps, total_bitmap_size;
+
++ if (WARN_ON(cache->length == 0))
++ btrfs_warn(cache->fs_info, "block group %llu length is zero",
++ cache->start);
++
+ /*
+ * We convert to bitmaps when the disk space required for using extents
+ * exceeds that required for using bitmaps.
+diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
+index 1448bc43561c2..5cbebf32082ab 100644
+--- a/fs/btrfs/ioctl.c
++++ b/fs/btrfs/ioctl.c
+@@ -2086,9 +2086,14 @@ static noinline int copy_to_sk(struct btrfs_path *path,
+ sh.len = item_len;
+ sh.transid = found_transid;
+
+- /* copy search result header */
+- if (copy_to_user(ubuf + *sk_offset, &sh, sizeof(sh))) {
+- ret = -EFAULT;
++ /*
++ * Copy search result header. If we fault then loop again so we
++ * can fault in the pages and -EFAULT there if there's a
++ * problem. Otherwise we'll fault and then copy the buffer in
++	 * properly this next time through.
++ */
++ if (copy_to_user_nofault(ubuf + *sk_offset, &sh, sizeof(sh))) {
++ ret = 0;
+ goto out;
+ }
+
+@@ -2096,10 +2101,14 @@ static noinline int copy_to_sk(struct btrfs_path *path,
+
+ if (item_len) {
+ char __user *up = ubuf + *sk_offset;
+- /* copy the item */
+- if (read_extent_buffer_to_user(leaf, up,
+- item_off, item_len)) {
+- ret = -EFAULT;
++ /*
++ * Copy the item, same behavior as above, but reset the
++	 * sk_offset so we copy the full thing again.
++ */
++ if (read_extent_buffer_to_user_nofault(leaf, up,
++ item_off, item_len)) {
++ ret = 0;
++ *sk_offset -= sizeof(sh);
+ goto out;
+ }
+
+@@ -2184,6 +2193,10 @@ static noinline int search_ioctl(struct inode *inode,
+ key.offset = sk->min_offset;
+
+ while (1) {
++ ret = fault_in_pages_writeable(ubuf, *buf_size - sk_offset);
++ if (ret)
++ break;
++
+ ret = btrfs_search_forward(root, &key, path, sk->min_transid);
+ if (ret != 0) {
+ if (ret > 0)
+diff --git a/fs/btrfs/scrub.c b/fs/btrfs/scrub.c
+index 5f5b21e389dbc..4e857e91c76e5 100644
+--- a/fs/btrfs/scrub.c
++++ b/fs/btrfs/scrub.c
+@@ -3783,50 +3783,84 @@ static noinline_for_stack int scrub_supers(struct scrub_ctx *sctx,
+ return 0;
+ }
+
++static void scrub_workers_put(struct btrfs_fs_info *fs_info)
++{
++ if (refcount_dec_and_mutex_lock(&fs_info->scrub_workers_refcnt,
++ &fs_info->scrub_lock)) {
++ struct btrfs_workqueue *scrub_workers = NULL;
++ struct btrfs_workqueue *scrub_wr_comp = NULL;
++ struct btrfs_workqueue *scrub_parity = NULL;
++
++ scrub_workers = fs_info->scrub_workers;
++ scrub_wr_comp = fs_info->scrub_wr_completion_workers;
++ scrub_parity = fs_info->scrub_parity_workers;
++
++ fs_info->scrub_workers = NULL;
++ fs_info->scrub_wr_completion_workers = NULL;
++ fs_info->scrub_parity_workers = NULL;
++ mutex_unlock(&fs_info->scrub_lock);
++
++ btrfs_destroy_workqueue(scrub_workers);
++ btrfs_destroy_workqueue(scrub_wr_comp);
++ btrfs_destroy_workqueue(scrub_parity);
++ }
++}
++
+ /*
+ * get a reference count on fs_info->scrub_workers. start worker if necessary
+ */
+ static noinline_for_stack int scrub_workers_get(struct btrfs_fs_info *fs_info,
+ int is_dev_replace)
+ {
++ struct btrfs_workqueue *scrub_workers = NULL;
++ struct btrfs_workqueue *scrub_wr_comp = NULL;
++ struct btrfs_workqueue *scrub_parity = NULL;
+ unsigned int flags = WQ_FREEZABLE | WQ_UNBOUND;
+ int max_active = fs_info->thread_pool_size;
++ int ret = -ENOMEM;
+
+- lockdep_assert_held(&fs_info->scrub_lock);
++ if (refcount_inc_not_zero(&fs_info->scrub_workers_refcnt))
++ return 0;
+
+- if (refcount_read(&fs_info->scrub_workers_refcnt) == 0) {
+- ASSERT(fs_info->scrub_workers == NULL);
+- fs_info->scrub_workers = btrfs_alloc_workqueue(fs_info, "scrub",
+- flags, is_dev_replace ? 1 : max_active, 4);
+- if (!fs_info->scrub_workers)
+- goto fail_scrub_workers;
+-
+- ASSERT(fs_info->scrub_wr_completion_workers == NULL);
+- fs_info->scrub_wr_completion_workers =
+- btrfs_alloc_workqueue(fs_info, "scrubwrc", flags,
+- max_active, 2);
+- if (!fs_info->scrub_wr_completion_workers)
+- goto fail_scrub_wr_completion_workers;
++ scrub_workers = btrfs_alloc_workqueue(fs_info, "scrub", flags,
++ is_dev_replace ? 1 : max_active, 4);
++ if (!scrub_workers)
++ goto fail_scrub_workers;
+
+- ASSERT(fs_info->scrub_parity_workers == NULL);
+- fs_info->scrub_parity_workers =
+- btrfs_alloc_workqueue(fs_info, "scrubparity", flags,
++ scrub_wr_comp = btrfs_alloc_workqueue(fs_info, "scrubwrc", flags,
+ max_active, 2);
+- if (!fs_info->scrub_parity_workers)
+- goto fail_scrub_parity_workers;
++ if (!scrub_wr_comp)
++ goto fail_scrub_wr_completion_workers;
+
++ scrub_parity = btrfs_alloc_workqueue(fs_info, "scrubparity", flags,
++ max_active, 2);
++ if (!scrub_parity)
++ goto fail_scrub_parity_workers;
++
++ mutex_lock(&fs_info->scrub_lock);
++ if (refcount_read(&fs_info->scrub_workers_refcnt) == 0) {
++ ASSERT(fs_info->scrub_workers == NULL &&
++ fs_info->scrub_wr_completion_workers == NULL &&
++ fs_info->scrub_parity_workers == NULL);
++ fs_info->scrub_workers = scrub_workers;
++ fs_info->scrub_wr_completion_workers = scrub_wr_comp;
++ fs_info->scrub_parity_workers = scrub_parity;
+ refcount_set(&fs_info->scrub_workers_refcnt, 1);
+- } else {
+- refcount_inc(&fs_info->scrub_workers_refcnt);
++ mutex_unlock(&fs_info->scrub_lock);
++ return 0;
+ }
+- return 0;
++ /* Other thread raced in and created the workers for us */
++ refcount_inc(&fs_info->scrub_workers_refcnt);
++ mutex_unlock(&fs_info->scrub_lock);
+
++ ret = 0;
++ btrfs_destroy_workqueue(scrub_parity);
+ fail_scrub_parity_workers:
+- btrfs_destroy_workqueue(fs_info->scrub_wr_completion_workers);
++ btrfs_destroy_workqueue(scrub_wr_comp);
+ fail_scrub_wr_completion_workers:
+- btrfs_destroy_workqueue(fs_info->scrub_workers);
++ btrfs_destroy_workqueue(scrub_workers);
+ fail_scrub_workers:
+- return -ENOMEM;
++ return ret;
+ }
+
+ int btrfs_scrub_dev(struct btrfs_fs_info *fs_info, u64 devid, u64 start,
+@@ -3837,9 +3871,6 @@ int btrfs_scrub_dev(struct btrfs_fs_info *fs_info, u64 devid, u64 start,
+ int ret;
+ struct btrfs_device *dev;
+ unsigned int nofs_flag;
+- struct btrfs_workqueue *scrub_workers = NULL;
+- struct btrfs_workqueue *scrub_wr_comp = NULL;
+- struct btrfs_workqueue *scrub_parity = NULL;
+
+ if (btrfs_fs_closing(fs_info))
+ return -EAGAIN;
+@@ -3886,13 +3917,17 @@ int btrfs_scrub_dev(struct btrfs_fs_info *fs_info, u64 devid, u64 start,
+ if (IS_ERR(sctx))
+ return PTR_ERR(sctx);
+
++ ret = scrub_workers_get(fs_info, is_dev_replace);
++ if (ret)
++ goto out_free_ctx;
++
+ mutex_lock(&fs_info->fs_devices->device_list_mutex);
+ dev = btrfs_find_device(fs_info->fs_devices, devid, NULL, NULL, true);
+ if (!dev || (test_bit(BTRFS_DEV_STATE_MISSING, &dev->dev_state) &&
+ !is_dev_replace)) {
+ mutex_unlock(&fs_info->fs_devices->device_list_mutex);
+ ret = -ENODEV;
+- goto out_free_ctx;
++ goto out;
+ }
+
+ if (!is_dev_replace && !readonly &&
+@@ -3901,7 +3936,7 @@ int btrfs_scrub_dev(struct btrfs_fs_info *fs_info, u64 devid, u64 start,
+ btrfs_err_in_rcu(fs_info, "scrub: device %s is not writable",
+ rcu_str_deref(dev->name));
+ ret = -EROFS;
+- goto out_free_ctx;
++ goto out;
+ }
+
+ mutex_lock(&fs_info->scrub_lock);
+@@ -3910,7 +3945,7 @@ int btrfs_scrub_dev(struct btrfs_fs_info *fs_info, u64 devid, u64 start,
+ mutex_unlock(&fs_info->scrub_lock);
+ mutex_unlock(&fs_info->fs_devices->device_list_mutex);
+ ret = -EIO;
+- goto out_free_ctx;
++ goto out;
+ }
+
+ down_read(&fs_info->dev_replace.rwsem);
+@@ -3921,17 +3956,10 @@ int btrfs_scrub_dev(struct btrfs_fs_info *fs_info, u64 devid, u64 start,
+ mutex_unlock(&fs_info->scrub_lock);
+ mutex_unlock(&fs_info->fs_devices->device_list_mutex);
+ ret = -EINPROGRESS;
+- goto out_free_ctx;
++ goto out;
+ }
+ up_read(&fs_info->dev_replace.rwsem);
+
+- ret = scrub_workers_get(fs_info, is_dev_replace);
+- if (ret) {
+- mutex_unlock(&fs_info->scrub_lock);
+- mutex_unlock(&fs_info->fs_devices->device_list_mutex);
+- goto out_free_ctx;
+- }
+-
+ sctx->readonly = readonly;
+ dev->scrub_ctx = sctx;
+ mutex_unlock(&fs_info->fs_devices->device_list_mutex);
+@@ -3984,24 +4012,14 @@ int btrfs_scrub_dev(struct btrfs_fs_info *fs_info, u64 devid, u64 start,
+
+ mutex_lock(&fs_info->scrub_lock);
+ dev->scrub_ctx = NULL;
+- if (refcount_dec_and_test(&fs_info->scrub_workers_refcnt)) {
+- scrub_workers = fs_info->scrub_workers;
+- scrub_wr_comp = fs_info->scrub_wr_completion_workers;
+- scrub_parity = fs_info->scrub_parity_workers;
+-
+- fs_info->scrub_workers = NULL;
+- fs_info->scrub_wr_completion_workers = NULL;
+- fs_info->scrub_parity_workers = NULL;
+- }
+ mutex_unlock(&fs_info->scrub_lock);
+
+- btrfs_destroy_workqueue(scrub_workers);
+- btrfs_destroy_workqueue(scrub_wr_comp);
+- btrfs_destroy_workqueue(scrub_parity);
++ scrub_workers_put(fs_info);
+ scrub_put_ctx(sctx);
+
+ return ret;
+-
++out:
++ scrub_workers_put(fs_info);
+ out_free_ctx:
+ scrub_free_ctx(sctx);
+
+diff --git a/fs/btrfs/tree-checker.c b/fs/btrfs/tree-checker.c
+index 517b44300a05c..7b1fee630f978 100644
+--- a/fs/btrfs/tree-checker.c
++++ b/fs/btrfs/tree-checker.c
+@@ -984,7 +984,7 @@ static int check_inode_item(struct extent_buffer *leaf,
+ /* Note for ROOT_TREE_DIR_ITEM, mkfs could set its transid 0 */
+ if (btrfs_inode_transid(leaf, iitem) > super_gen + 1) {
+ inode_item_err(leaf, slot,
+- "invalid inode generation: has %llu expect [0, %llu]",
++ "invalid inode transid: has %llu expect [0, %llu]",
+ btrfs_inode_transid(leaf, iitem), super_gen + 1);
+ return -EUCLEAN;
+ }
+diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
+index 0fecf1e4d8f66..0e50b885d3fd6 100644
+--- a/fs/btrfs/volumes.c
++++ b/fs/btrfs/volumes.c
+@@ -4462,6 +4462,7 @@ int btrfs_uuid_scan_kthread(void *data)
+ goto skip;
+ }
+ update_tree:
++ btrfs_release_path(path);
+ if (!btrfs_is_empty_uuid(root_item.uuid)) {
+ ret = btrfs_uuid_tree_add(trans, root_item.uuid,
+ BTRFS_UUID_KEY_SUBVOL,
+@@ -4486,6 +4487,7 @@ update_tree:
+ }
+
+ skip:
++ btrfs_release_path(path);
+ if (trans) {
+ ret = btrfs_end_transaction(trans);
+ trans = NULL;
+@@ -4493,7 +4495,6 @@ skip:
+ break;
+ }
+
+- btrfs_release_path(path);
+ if (key.offset < (u64)-1) {
+ key.offset++;
+ } else if (key.type < BTRFS_ROOT_ITEM_KEY) {
+diff --git a/fs/ceph/file.c b/fs/ceph/file.c
+index d51c3f2fdca02..327649883ec7c 100644
+--- a/fs/ceph/file.c
++++ b/fs/ceph/file.c
+@@ -2507,6 +2507,7 @@ const struct file_operations ceph_file_fops = {
+ .mmap = ceph_mmap,
+ .fsync = ceph_fsync,
+ .lock = ceph_lock,
++ .setlease = simple_nosetlease,
+ .flock = ceph_flock,
+ .splice_read = generic_file_splice_read,
+ .splice_write = iter_file_splice_write,
+diff --git a/fs/eventpoll.c b/fs/eventpoll.c
+index e0decff22ae27..8107e06d7f6f5 100644
+--- a/fs/eventpoll.c
++++ b/fs/eventpoll.c
+@@ -1995,9 +1995,9 @@ static int ep_loop_check_proc(void *priv, void *cookie, int call_nests)
+ * during ep_insert().
+ */
+ if (list_empty(&epi->ffd.file->f_tfile_llink)) {
+- get_file(epi->ffd.file);
+- list_add(&epi->ffd.file->f_tfile_llink,
+- &tfile_check_list);
++ if (get_file_rcu(epi->ffd.file))
++ list_add(&epi->ffd.file->f_tfile_llink,
++ &tfile_check_list);
+ }
+ }
+ }
+diff --git a/fs/ext2/file.c b/fs/ext2/file.c
+index 60378ddf1424b..96044f5dbc0e0 100644
+--- a/fs/ext2/file.c
++++ b/fs/ext2/file.c
+@@ -93,8 +93,10 @@ static vm_fault_t ext2_dax_fault(struct vm_fault *vmf)
+ struct inode *inode = file_inode(vmf->vma->vm_file);
+ struct ext2_inode_info *ei = EXT2_I(inode);
+ vm_fault_t ret;
++ bool write = (vmf->flags & FAULT_FLAG_WRITE) &&
++ (vmf->vma->vm_flags & VM_SHARED);
+
+- if (vmf->flags & FAULT_FLAG_WRITE) {
++ if (write) {
+ sb_start_pagefault(inode->i_sb);
+ file_update_time(vmf->vma->vm_file);
+ }
+@@ -103,7 +105,7 @@ static vm_fault_t ext2_dax_fault(struct vm_fault *vmf)
+ ret = dax_iomap_fault(vmf, PE_SIZE_PTE, NULL, NULL, &ext2_iomap_ops);
+
+ up_read(&ei->dax_sem);
+- if (vmf->flags & FAULT_FLAG_WRITE)
++ if (write)
+ sb_end_pagefault(inode->i_sb);
+ return ret;
+ }
+diff --git a/fs/gfs2/log.c b/fs/gfs2/log.c
+index a76e55bc28ebf..27f467a0f008e 100644
+--- a/fs/gfs2/log.c
++++ b/fs/gfs2/log.c
+@@ -901,6 +901,36 @@ static void empty_ail1_list(struct gfs2_sbd *sdp)
+ }
+ }
+
++/**
+ * trans_drain - drain the buf and databuf queue for a failed transaction
++ * @tr: the transaction to drain
++ *
++ * When this is called, we're taking an error exit for a log write that failed
++ * but since we bypassed the after_commit functions, we need to remove the
++ * items from the buf and databuf queue.
++ */
++static void trans_drain(struct gfs2_trans *tr)
++{
++ struct gfs2_bufdata *bd;
++ struct list_head *head;
++
++ if (!tr)
++ return;
++
++ head = &tr->tr_buf;
++ while (!list_empty(head)) {
++ bd = list_first_entry(head, struct gfs2_bufdata, bd_list);
++ list_del_init(&bd->bd_list);
++ kmem_cache_free(gfs2_bufdata_cachep, bd);
++ }
++ head = &tr->tr_databuf;
++ while (!list_empty(head)) {
++ bd = list_first_entry(head, struct gfs2_bufdata, bd_list);
++ list_del_init(&bd->bd_list);
++ kmem_cache_free(gfs2_bufdata_cachep, bd);
++ }
++}
++
+ /**
+ * gfs2_log_flush - flush incore transaction(s)
+ * @sdp: the filesystem
+@@ -1005,6 +1035,7 @@ void gfs2_log_flush(struct gfs2_sbd *sdp, struct gfs2_glock *gl, u32 flags)
+
+ out:
+ if (gfs2_withdrawn(sdp)) {
++ trans_drain(tr);
+ /**
+ * If the tr_list is empty, we're withdrawing during a log
+ * flush that targets a transaction, but the transaction was
+diff --git a/fs/gfs2/trans.c b/fs/gfs2/trans.c
+index a3dfa3aa87ad9..d897dd73c5999 100644
+--- a/fs/gfs2/trans.c
++++ b/fs/gfs2/trans.c
+@@ -52,6 +52,7 @@ int gfs2_trans_begin(struct gfs2_sbd *sdp, unsigned int blocks,
+ tr->tr_reserved += gfs2_struct2blk(sdp, revokes);
+ INIT_LIST_HEAD(&tr->tr_databuf);
+ INIT_LIST_HEAD(&tr->tr_buf);
++ INIT_LIST_HEAD(&tr->tr_list);
+ INIT_LIST_HEAD(&tr->tr_ail1_list);
+ INIT_LIST_HEAD(&tr->tr_ail2_list);
+
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+index 4115bfedf15dc..38f3ec15ba3b1 100644
+--- a/fs/io_uring.c
++++ b/fs/io_uring.c
+@@ -2697,8 +2697,15 @@ static int io_read(struct io_kiocb *req, bool force_nonblock)
+ else
+ ret2 = -EINVAL;
+
++ /* no retry on NONBLOCK marked file */
++ if (ret2 == -EAGAIN && (req->file->f_flags & O_NONBLOCK)) {
++ ret = 0;
++ goto done;
++ }
++
+ /* Catch -EAGAIN return for forced non-blocking submission */
+ if (!force_nonblock || ret2 != -EAGAIN) {
++ done:
+ kiocb_done(kiocb, ret2);
+ } else {
+ copy_iov:
+@@ -2823,7 +2830,13 @@ static int io_write(struct io_kiocb *req, bool force_nonblock)
+ */
+ if (ret2 == -EOPNOTSUPP && (kiocb->ki_flags & IOCB_NOWAIT))
+ ret2 = -EAGAIN;
++ /* no retry on NONBLOCK marked file */
++ if (ret2 == -EAGAIN && (req->file->f_flags & O_NONBLOCK)) {
++ ret = 0;
++ goto done;
++ }
+ if (!force_nonblock || ret2 != -EAGAIN) {
++done:
+ kiocb_done(kiocb, ret2);
+ } else {
+ copy_iov:
+@@ -6928,7 +6941,7 @@ static int __io_sqe_files_update(struct io_ring_ctx *ctx,
+ table = &ctx->file_data->table[i >> IORING_FILE_TABLE_SHIFT];
+ index = i & IORING_FILE_TABLE_MASK;
+ if (table->files[index]) {
+- file = io_file_from_index(ctx, index);
++ file = table->files[index];
+ err = io_queue_file_removal(data, file);
+ if (err)
+ break;
+@@ -6957,6 +6970,7 @@ static int __io_sqe_files_update(struct io_ring_ctx *ctx,
+ table->files[index] = file;
+ err = io_sqe_file_register(ctx, file, i);
+ if (err) {
++ table->files[index] = NULL;
+ fput(file);
+ break;
+ }
+diff --git a/fs/xfs/libxfs/xfs_attr_leaf.c b/fs/xfs/libxfs/xfs_attr_leaf.c
+index 2f7e89e4be3e3..4eb2ecd31b0d2 100644
+--- a/fs/xfs/libxfs/xfs_attr_leaf.c
++++ b/fs/xfs/libxfs/xfs_attr_leaf.c
+@@ -996,8 +996,10 @@ xfs_attr_shortform_verify(
+ * struct xfs_attr_sf_entry has a variable length.
+ * Check the fixed-offset parts of the structure are
+ * within the data buffer.
++ * xfs_attr_sf_entry is defined with a 1-byte variable
++ * array at the end, so we must subtract that off.
+ */
+- if (((char *)sfep + sizeof(*sfep)) >= endp)
++ if (((char *)sfep + sizeof(*sfep) - 1) >= endp)
+ return __this_address;
+
+ /* Don't allow names with known bad length. */
+diff --git a/fs/xfs/libxfs/xfs_bmap.c b/fs/xfs/libxfs/xfs_bmap.c
+index 667cdd0dfdf4a..aa784404964a0 100644
+--- a/fs/xfs/libxfs/xfs_bmap.c
++++ b/fs/xfs/libxfs/xfs_bmap.c
+@@ -6222,7 +6222,7 @@ xfs_bmap_validate_extent(
+
+ isrt = XFS_IS_REALTIME_INODE(ip);
+ endfsb = irec->br_startblock + irec->br_blockcount - 1;
+- if (isrt) {
++ if (isrt && whichfork == XFS_DATA_FORK) {
+ if (!xfs_verify_rtbno(mp, irec->br_startblock))
+ return __this_address;
+ if (!xfs_verify_rtbno(mp, endfsb))
+diff --git a/fs/xfs/xfs_bmap_util.c b/fs/xfs/xfs_bmap_util.c
+index afdc7f8e0e701..feb277874a1fb 100644
+--- a/fs/xfs/xfs_bmap_util.c
++++ b/fs/xfs/xfs_bmap_util.c
+@@ -1165,7 +1165,7 @@ xfs_insert_file_space(
+ goto out_trans_cancel;
+
+ do {
+- error = xfs_trans_roll_inode(&tp, ip);
++ error = xfs_defer_finish(&tp);
+ if (error)
+ goto out_trans_cancel;
+
+diff --git a/fs/xfs/xfs_file.c b/fs/xfs/xfs_file.c
+index 00db81eac80d6..4d7385426149c 100644
+--- a/fs/xfs/xfs_file.c
++++ b/fs/xfs/xfs_file.c
+@@ -1220,6 +1220,14 @@ __xfs_filemap_fault(
+ return ret;
+ }
+
++static inline bool
++xfs_is_write_fault(
++ struct vm_fault *vmf)
++{
++ return (vmf->flags & FAULT_FLAG_WRITE) &&
++ (vmf->vma->vm_flags & VM_SHARED);
++}
++
+ static vm_fault_t
+ xfs_filemap_fault(
+ struct vm_fault *vmf)
+@@ -1227,7 +1235,7 @@ xfs_filemap_fault(
+ /* DAX can shortcut the normal fault path on write faults! */
+ return __xfs_filemap_fault(vmf, PE_SIZE_PTE,
+ IS_DAX(file_inode(vmf->vma->vm_file)) &&
+- (vmf->flags & FAULT_FLAG_WRITE));
++ xfs_is_write_fault(vmf));
+ }
+
+ static vm_fault_t
+@@ -1240,7 +1248,7 @@ xfs_filemap_huge_fault(
+
+ /* DAX can shortcut the normal fault path on write faults! */
+ return __xfs_filemap_fault(vmf, pe_size,
+- (vmf->flags & FAULT_FLAG_WRITE));
++ xfs_is_write_fault(vmf));
+ }
+
+ static vm_fault_t
+diff --git a/include/drm/drm_hdcp.h b/include/drm/drm_hdcp.h
+index c6bab4986a658..fe58dbb46962a 100644
+--- a/include/drm/drm_hdcp.h
++++ b/include/drm/drm_hdcp.h
+@@ -29,6 +29,9 @@
+ /* Slave address for the HDCP registers in the receiver */
+ #define DRM_HDCP_DDC_ADDR 0x3A
+
++/* Value to use at the end of the SHA-1 bytestream used for repeaters */
++#define DRM_HDCP_SHA1_TERMINATOR 0x80
++
+ /* HDCP register offsets for HDMI/DVI devices */
+ #define DRM_HDCP_DDC_BKSV 0x00
+ #define DRM_HDCP_DDC_RI_PRIME 0x08
+diff --git a/include/linux/bvec.h b/include/linux/bvec.h
+index ac0c7299d5b8a..dd74503f7e5ea 100644
+--- a/include/linux/bvec.h
++++ b/include/linux/bvec.h
+@@ -117,11 +117,18 @@ static inline bool bvec_iter_advance(const struct bio_vec *bv,
+ return true;
+ }
+
++static inline void bvec_iter_skip_zero_bvec(struct bvec_iter *iter)
++{
++ iter->bi_bvec_done = 0;
++ iter->bi_idx++;
++}
++
+ #define for_each_bvec(bvl, bio_vec, iter, start) \
+ for (iter = (start); \
+ (iter).bi_size && \
+ ((bvl = bvec_iter_bvec((bio_vec), (iter))), 1); \
+- bvec_iter_advance((bio_vec), &(iter), (bvl).bv_len))
++ (bvl).bv_len ? (void)bvec_iter_advance((bio_vec), &(iter), \
++ (bvl).bv_len) : bvec_iter_skip_zero_bvec(&(iter)))
+
+ /* for iterating one bio from start to end */
+ #define BVEC_ITER_ALL_INIT (struct bvec_iter) \
+diff --git a/include/linux/libata.h b/include/linux/libata.h
+index 77ccf040a128b..5f550eb27f811 100644
+--- a/include/linux/libata.h
++++ b/include/linux/libata.h
+@@ -421,6 +421,7 @@ enum {
+ ATA_HORKAGE_NO_DMA_LOG = (1 << 23), /* don't use DMA for log read */
+ ATA_HORKAGE_NOTRIM = (1 << 24), /* don't use TRIM */
+ ATA_HORKAGE_MAX_SEC_1024 = (1 << 25), /* Limit max sects to 1024 */
++ ATA_HORKAGE_MAX_TRIM_128M = (1 << 26), /* Limit max trim size to 128M */
+
+ /* DMA mask for user DMA control: User visible values; DO NOT
+ renumber */
+diff --git a/include/linux/log2.h b/include/linux/log2.h
+index 83a4a3ca3e8a7..c619ec6eff4ae 100644
+--- a/include/linux/log2.h
++++ b/include/linux/log2.h
+@@ -173,7 +173,7 @@ unsigned long __rounddown_pow_of_two(unsigned long n)
+ #define roundup_pow_of_two(n) \
+ ( \
+ __builtin_constant_p(n) ? ( \
+- (n == 1) ? 1 : \
++ ((n) == 1) ? 1 : \
+ (1UL << (ilog2((n) - 1) + 1)) \
+ ) : \
+ __roundup_pow_of_two(n) \
+diff --git a/include/linux/netfilter/nfnetlink.h b/include/linux/netfilter/nfnetlink.h
+index 851425c3178f1..89016d08f6a27 100644
+--- a/include/linux/netfilter/nfnetlink.h
++++ b/include/linux/netfilter/nfnetlink.h
+@@ -43,8 +43,7 @@ int nfnetlink_has_listeners(struct net *net, unsigned int group);
+ int nfnetlink_send(struct sk_buff *skb, struct net *net, u32 portid,
+ unsigned int group, int echo, gfp_t flags);
+ int nfnetlink_set_err(struct net *net, u32 portid, u32 group, int error);
+-int nfnetlink_unicast(struct sk_buff *skb, struct net *net, u32 portid,
+- int flags);
++int nfnetlink_unicast(struct sk_buff *skb, struct net *net, u32 portid);
+
+ static inline u16 nfnl_msg_type(u8 subsys, u8 msg_type)
+ {
+diff --git a/include/net/af_rxrpc.h b/include/net/af_rxrpc.h
+index 91eacbdcf33d2..f6abcc0bbd6e7 100644
+--- a/include/net/af_rxrpc.h
++++ b/include/net/af_rxrpc.h
+@@ -59,7 +59,7 @@ bool rxrpc_kernel_abort_call(struct socket *, struct rxrpc_call *,
+ void rxrpc_kernel_end_call(struct socket *, struct rxrpc_call *);
+ void rxrpc_kernel_get_peer(struct socket *, struct rxrpc_call *,
+ struct sockaddr_rxrpc *);
+-u32 rxrpc_kernel_get_srtt(struct socket *, struct rxrpc_call *);
++bool rxrpc_kernel_get_srtt(struct socket *, struct rxrpc_call *, u32 *);
+ int rxrpc_kernel_charge_accept(struct socket *, rxrpc_notify_rx_t,
+ rxrpc_user_attach_call_t, unsigned long, gfp_t,
+ unsigned int);
+diff --git a/include/net/netfilter/nf_tables.h b/include/net/netfilter/nf_tables.h
+index 6f0f6fca9ac3e..ec2cbfab71f35 100644
+--- a/include/net/netfilter/nf_tables.h
++++ b/include/net/netfilter/nf_tables.h
+@@ -143,6 +143,8 @@ static inline u64 nft_reg_load64(const u32 *sreg)
+ static inline void nft_data_copy(u32 *dst, const struct nft_data *src,
+ unsigned int len)
+ {
++ if (len % NFT_REG32_SIZE)
++ dst[len / NFT_REG32_SIZE] = 0;
+ memcpy(dst, src, len);
+ }
+
+diff --git a/include/trace/events/rxrpc.h b/include/trace/events/rxrpc.h
+index 059b6e45a0283..c33079b986e86 100644
+--- a/include/trace/events/rxrpc.h
++++ b/include/trace/events/rxrpc.h
+@@ -138,11 +138,16 @@ enum rxrpc_recvmsg_trace {
+ };
+
+ enum rxrpc_rtt_tx_trace {
++ rxrpc_rtt_tx_cancel,
+ rxrpc_rtt_tx_data,
++ rxrpc_rtt_tx_no_slot,
+ rxrpc_rtt_tx_ping,
+ };
+
+ enum rxrpc_rtt_rx_trace {
++ rxrpc_rtt_rx_cancel,
++ rxrpc_rtt_rx_lost,
++ rxrpc_rtt_rx_obsolete,
+ rxrpc_rtt_rx_ping_response,
+ rxrpc_rtt_rx_requested_ack,
+ };
+@@ -339,10 +344,15 @@ enum rxrpc_tx_point {
+ E_(rxrpc_recvmsg_wait, "WAIT")
+
+ #define rxrpc_rtt_tx_traces \
++ EM(rxrpc_rtt_tx_cancel, "CNCE") \
+ EM(rxrpc_rtt_tx_data, "DATA") \
++ EM(rxrpc_rtt_tx_no_slot, "FULL") \
+ E_(rxrpc_rtt_tx_ping, "PING")
+
+ #define rxrpc_rtt_rx_traces \
++ EM(rxrpc_rtt_rx_cancel, "CNCL") \
++ EM(rxrpc_rtt_rx_obsolete, "OBSL") \
++ EM(rxrpc_rtt_rx_lost, "LOST") \
+ EM(rxrpc_rtt_rx_ping_response, "PONG") \
+ E_(rxrpc_rtt_rx_requested_ack, "RACK")
+
+@@ -1087,38 +1097,43 @@ TRACE_EVENT(rxrpc_recvmsg,
+
+ TRACE_EVENT(rxrpc_rtt_tx,
+ TP_PROTO(struct rxrpc_call *call, enum rxrpc_rtt_tx_trace why,
+- rxrpc_serial_t send_serial),
++ int slot, rxrpc_serial_t send_serial),
+
+- TP_ARGS(call, why, send_serial),
++ TP_ARGS(call, why, slot, send_serial),
+
+ TP_STRUCT__entry(
+ __field(unsigned int, call )
+ __field(enum rxrpc_rtt_tx_trace, why )
++ __field(int, slot )
+ __field(rxrpc_serial_t, send_serial )
+ ),
+
+ TP_fast_assign(
+ __entry->call = call->debug_id;
+ __entry->why = why;
++ __entry->slot = slot;
+ __entry->send_serial = send_serial;
+ ),
+
+- TP_printk("c=%08x %s sr=%08x",
++ TP_printk("c=%08x [%d] %s sr=%08x",
+ __entry->call,
++ __entry->slot,
+ __print_symbolic(__entry->why, rxrpc_rtt_tx_traces),
+ __entry->send_serial)
+ );
+
+ TRACE_EVENT(rxrpc_rtt_rx,
+ TP_PROTO(struct rxrpc_call *call, enum rxrpc_rtt_rx_trace why,
++ int slot,
+ rxrpc_serial_t send_serial, rxrpc_serial_t resp_serial,
+ u32 rtt, u32 rto),
+
+- TP_ARGS(call, why, send_serial, resp_serial, rtt, rto),
++ TP_ARGS(call, why, slot, send_serial, resp_serial, rtt, rto),
+
+ TP_STRUCT__entry(
+ __field(unsigned int, call )
+ __field(enum rxrpc_rtt_rx_trace, why )
++ __field(int, slot )
+ __field(rxrpc_serial_t, send_serial )
+ __field(rxrpc_serial_t, resp_serial )
+ __field(u32, rtt )
+@@ -1128,14 +1143,16 @@ TRACE_EVENT(rxrpc_rtt_rx,
+ TP_fast_assign(
+ __entry->call = call->debug_id;
+ __entry->why = why;
++ __entry->slot = slot;
+ __entry->send_serial = send_serial;
+ __entry->resp_serial = resp_serial;
+ __entry->rtt = rtt;
+ __entry->rto = rto;
+ ),
+
+- TP_printk("c=%08x %s sr=%08x rr=%08x rtt=%u rto=%u",
++ TP_printk("c=%08x [%d] %s sr=%08x rr=%08x rtt=%u rto=%u",
+ __entry->call,
++ __entry->slot,
+ __print_symbolic(__entry->why, rxrpc_rtt_rx_traces),
+ __entry->send_serial,
+ __entry->resp_serial,
+diff --git a/include/uapi/linux/netfilter/nf_tables.h b/include/uapi/linux/netfilter/nf_tables.h
+index 4565456c0ef44..0b27da1d771ba 100644
+--- a/include/uapi/linux/netfilter/nf_tables.h
++++ b/include/uapi/linux/netfilter/nf_tables.h
+@@ -133,7 +133,7 @@ enum nf_tables_msg_types {
+ * @NFTA_LIST_ELEM: list element (NLA_NESTED)
+ */
+ enum nft_list_attributes {
+- NFTA_LIST_UNPEC,
++ NFTA_LIST_UNSPEC,
+ NFTA_LIST_ELEM,
+ __NFTA_LIST_MAX
+ };
+diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
+index 0fd80ac81f705..72e943b3bd656 100644
+--- a/kernel/bpf/syscall.c
++++ b/kernel/bpf/syscall.c
+@@ -2629,7 +2629,7 @@ static int bpf_raw_tp_link_fill_link_info(const struct bpf_link *link,
+ u32 ulen = info->raw_tracepoint.tp_name_len;
+ size_t tp_len = strlen(tp_name);
+
+- if (ulen && !ubuf)
++ if (!ulen ^ !ubuf)
+ return -EINVAL;
+
+ info->raw_tracepoint.tp_name_len = tp_len + 1;
+diff --git a/mm/hugetlb.c b/mm/hugetlb.c
+index 7952c6cb6f08c..e153509820958 100644
+--- a/mm/hugetlb.c
++++ b/mm/hugetlb.c
+@@ -1251,21 +1251,32 @@ static struct page *alloc_gigantic_page(struct hstate *h, gfp_t gfp_mask,
+ int nid, nodemask_t *nodemask)
+ {
+ unsigned long nr_pages = 1UL << huge_page_order(h);
++ if (nid == NUMA_NO_NODE)
++ nid = numa_mem_id();
+
+ #ifdef CONFIG_CMA
+ {
+ struct page *page;
+ int node;
+
+- for_each_node_mask(node, *nodemask) {
+- if (!hugetlb_cma[node])
+- continue;
+-
+- page = cma_alloc(hugetlb_cma[node], nr_pages,
+- huge_page_order(h), true);
++ if (hugetlb_cma[nid]) {
++ page = cma_alloc(hugetlb_cma[nid], nr_pages,
++ huge_page_order(h), true);
+ if (page)
+ return page;
+ }
++
++ if (!(gfp_mask & __GFP_THISNODE)) {
++ for_each_node_mask(node, *nodemask) {
++ if (node == nid || !hugetlb_cma[node])
++ continue;
++
++ page = cma_alloc(hugetlb_cma[node], nr_pages,
++ huge_page_order(h), true);
++ if (page)
++ return page;
++ }
++ }
+ }
+ #endif
+
+@@ -3469,6 +3480,22 @@ static unsigned int cpuset_mems_nr(unsigned int *array)
+ }
+
+ #ifdef CONFIG_SYSCTL
++static int proc_hugetlb_doulongvec_minmax(struct ctl_table *table, int write,
++ void *buffer, size_t *length,
++ loff_t *ppos, unsigned long *out)
++{
++ struct ctl_table dup_table;
++
++ /*
++ * In order to avoid races with __do_proc_doulongvec_minmax(), we
++ * can duplicate the @table and alter the duplicate of it.
++ */
++ dup_table = *table;
++ dup_table.data = out;
++
++ return proc_doulongvec_minmax(&dup_table, write, buffer, length, ppos);
++}
++
+ static int hugetlb_sysctl_handler_common(bool obey_mempolicy,
+ struct ctl_table *table, int write,
+ void *buffer, size_t *length, loff_t *ppos)
+@@ -3480,9 +3507,8 @@ static int hugetlb_sysctl_handler_common(bool obey_mempolicy,
+ if (!hugepages_supported())
+ return -EOPNOTSUPP;
+
+- table->data = &tmp;
+- table->maxlen = sizeof(unsigned long);
+- ret = proc_doulongvec_minmax(table, write, buffer, length, ppos);
++ ret = proc_hugetlb_doulongvec_minmax(table, write, buffer, length, ppos,
++ &tmp);
+ if (ret)
+ goto out;
+
+@@ -3525,9 +3551,8 @@ int hugetlb_overcommit_handler(struct ctl_table *table, int write,
+ if (write && hstate_is_gigantic(h))
+ return -EINVAL;
+
+- table->data = &tmp;
+- table->maxlen = sizeof(unsigned long);
+- ret = proc_doulongvec_minmax(table, write, buffer, length, ppos);
++ ret = proc_hugetlb_doulongvec_minmax(table, write, buffer, length, ppos,
++ &tmp);
+ if (ret)
+ goto out;
+
+diff --git a/mm/khugepaged.c b/mm/khugepaged.c
+index dd592ea9a4a06..e6fc7c3e7dc98 100644
+--- a/mm/khugepaged.c
++++ b/mm/khugepaged.c
+@@ -1709,7 +1709,7 @@ static void collapse_file(struct mm_struct *mm,
+ xas_unlock_irq(&xas);
+ page_cache_sync_readahead(mapping, &file->f_ra,
+ file, index,
+- PAGE_SIZE);
++ end - index);
+ /* drain pagevecs to help isolate_lru_page() */
+ lru_add_drain();
+ page = find_lock_page(mapping, index);
+diff --git a/mm/madvise.c b/mm/madvise.c
+index dd1d43cf026de..d4aa5f7765435 100644
+--- a/mm/madvise.c
++++ b/mm/madvise.c
+@@ -289,9 +289,9 @@ static long madvise_willneed(struct vm_area_struct *vma,
+ */
+ *prev = NULL; /* tell sys_madvise we drop mmap_lock */
+ get_file(file);
+- mmap_read_unlock(current->mm);
+ offset = (loff_t)(start - vma->vm_start)
+ + ((loff_t)vma->vm_pgoff << PAGE_SHIFT);
++ mmap_read_unlock(current->mm);
+ vfs_fadvise(file, offset, end - start, POSIX_FADV_WILLNEED);
+ fput(file);
+ mmap_read_lock(current->mm);
+diff --git a/mm/memory.c b/mm/memory.c
+index a279c1a26af7e..03c693ea59bda 100644
+--- a/mm/memory.c
++++ b/mm/memory.c
+@@ -71,6 +71,7 @@
+ #include <linux/dax.h>
+ #include <linux/oom.h>
+ #include <linux/numa.h>
++#include <linux/vmalloc.h>
+
+ #include <trace/events/kmem.h>
+
+@@ -2201,7 +2202,8 @@ EXPORT_SYMBOL(vm_iomap_memory);
+
+ static int apply_to_pte_range(struct mm_struct *mm, pmd_t *pmd,
+ unsigned long addr, unsigned long end,
+- pte_fn_t fn, void *data, bool create)
++ pte_fn_t fn, void *data, bool create,
++ pgtbl_mod_mask *mask)
+ {
+ pte_t *pte;
+ int err = 0;
+@@ -2209,7 +2211,7 @@ static int apply_to_pte_range(struct mm_struct *mm, pmd_t *pmd,
+
+ if (create) {
+ pte = (mm == &init_mm) ?
+- pte_alloc_kernel(pmd, addr) :
++ pte_alloc_kernel_track(pmd, addr, mask) :
+ pte_alloc_map_lock(mm, pmd, addr, &ptl);
+ if (!pte)
+ return -ENOMEM;
+@@ -2230,6 +2232,7 @@ static int apply_to_pte_range(struct mm_struct *mm, pmd_t *pmd,
+ break;
+ }
+ } while (addr += PAGE_SIZE, addr != end);
++ *mask |= PGTBL_PTE_MODIFIED;
+
+ arch_leave_lazy_mmu_mode();
+
+@@ -2240,7 +2243,8 @@ static int apply_to_pte_range(struct mm_struct *mm, pmd_t *pmd,
+
+ static int apply_to_pmd_range(struct mm_struct *mm, pud_t *pud,
+ unsigned long addr, unsigned long end,
+- pte_fn_t fn, void *data, bool create)
++ pte_fn_t fn, void *data, bool create,
++ pgtbl_mod_mask *mask)
+ {
+ pmd_t *pmd;
+ unsigned long next;
+@@ -2249,7 +2253,7 @@ static int apply_to_pmd_range(struct mm_struct *mm, pud_t *pud,
+ BUG_ON(pud_huge(*pud));
+
+ if (create) {
+- pmd = pmd_alloc(mm, pud, addr);
++ pmd = pmd_alloc_track(mm, pud, addr, mask);
+ if (!pmd)
+ return -ENOMEM;
+ } else {
+@@ -2259,7 +2263,7 @@ static int apply_to_pmd_range(struct mm_struct *mm, pud_t *pud,
+ next = pmd_addr_end(addr, end);
+ if (create || !pmd_none_or_clear_bad(pmd)) {
+ err = apply_to_pte_range(mm, pmd, addr, next, fn, data,
+- create);
++ create, mask);
+ if (err)
+ break;
+ }
+@@ -2269,14 +2273,15 @@ static int apply_to_pmd_range(struct mm_struct *mm, pud_t *pud,
+
+ static int apply_to_pud_range(struct mm_struct *mm, p4d_t *p4d,
+ unsigned long addr, unsigned long end,
+- pte_fn_t fn, void *data, bool create)
++ pte_fn_t fn, void *data, bool create,
++ pgtbl_mod_mask *mask)
+ {
+ pud_t *pud;
+ unsigned long next;
+ int err = 0;
+
+ if (create) {
+- pud = pud_alloc(mm, p4d, addr);
++ pud = pud_alloc_track(mm, p4d, addr, mask);
+ if (!pud)
+ return -ENOMEM;
+ } else {
+@@ -2286,7 +2291,7 @@ static int apply_to_pud_range(struct mm_struct *mm, p4d_t *p4d,
+ next = pud_addr_end(addr, end);
+ if (create || !pud_none_or_clear_bad(pud)) {
+ err = apply_to_pmd_range(mm, pud, addr, next, fn, data,
+- create);
++ create, mask);
+ if (err)
+ break;
+ }
+@@ -2296,14 +2301,15 @@ static int apply_to_pud_range(struct mm_struct *mm, p4d_t *p4d,
+
+ static int apply_to_p4d_range(struct mm_struct *mm, pgd_t *pgd,
+ unsigned long addr, unsigned long end,
+- pte_fn_t fn, void *data, bool create)
++ pte_fn_t fn, void *data, bool create,
++ pgtbl_mod_mask *mask)
+ {
+ p4d_t *p4d;
+ unsigned long next;
+ int err = 0;
+
+ if (create) {
+- p4d = p4d_alloc(mm, pgd, addr);
++ p4d = p4d_alloc_track(mm, pgd, addr, mask);
+ if (!p4d)
+ return -ENOMEM;
+ } else {
+@@ -2313,7 +2319,7 @@ static int apply_to_p4d_range(struct mm_struct *mm, pgd_t *pgd,
+ next = p4d_addr_end(addr, end);
+ if (create || !p4d_none_or_clear_bad(p4d)) {
+ err = apply_to_pud_range(mm, p4d, addr, next, fn, data,
+- create);
++ create, mask);
+ if (err)
+ break;
+ }
+@@ -2326,8 +2332,9 @@ static int __apply_to_page_range(struct mm_struct *mm, unsigned long addr,
+ void *data, bool create)
+ {
+ pgd_t *pgd;
+- unsigned long next;
++ unsigned long start = addr, next;
+ unsigned long end = addr + size;
++ pgtbl_mod_mask mask = 0;
+ int err = 0;
+
+ if (WARN_ON(addr >= end))
+@@ -2338,11 +2345,14 @@ static int __apply_to_page_range(struct mm_struct *mm, unsigned long addr,
+ next = pgd_addr_end(addr, end);
+ if (!create && pgd_none_or_clear_bad(pgd))
+ continue;
+- err = apply_to_p4d_range(mm, pgd, addr, next, fn, data, create);
++ err = apply_to_p4d_range(mm, pgd, addr, next, fn, data, create, &mask);
+ if (err)
+ break;
+ } while (pgd++, addr = next, addr != end);
+
++ if (mask & ARCH_PAGE_TABLE_SYNC_MASK)
++ arch_sync_kernel_mappings(start, start + size);
++
+ return err;
+ }
+
+diff --git a/mm/migrate.c b/mm/migrate.c
+index 40cd7016ae6fc..3511f9529ea60 100644
+--- a/mm/migrate.c
++++ b/mm/migrate.c
+@@ -251,7 +251,7 @@ static bool remove_migration_pte(struct page *page, struct vm_area_struct *vma,
+ entry = make_device_private_entry(new, pte_write(pte));
+ pte = swp_entry_to_pte(entry);
+ if (pte_swp_uffd_wp(*pvmw.pte))
+- pte = pte_mkuffd_wp(pte);
++ pte = pte_swp_mkuffd_wp(pte);
+ }
+ }
+
+@@ -2330,10 +2330,17 @@ again:
+ entry = make_migration_entry(page, mpfn &
+ MIGRATE_PFN_WRITE);
+ swp_pte = swp_entry_to_pte(entry);
+- if (pte_soft_dirty(pte))
+- swp_pte = pte_swp_mksoft_dirty(swp_pte);
+- if (pte_uffd_wp(pte))
+- swp_pte = pte_swp_mkuffd_wp(swp_pte);
++ if (pte_present(pte)) {
++ if (pte_soft_dirty(pte))
++ swp_pte = pte_swp_mksoft_dirty(swp_pte);
++ if (pte_uffd_wp(pte))
++ swp_pte = pte_swp_mkuffd_wp(swp_pte);
++ } else {
++ if (pte_swp_soft_dirty(pte))
++ swp_pte = pte_swp_mksoft_dirty(swp_pte);
++ if (pte_swp_uffd_wp(pte))
++ swp_pte = pte_swp_mkuffd_wp(swp_pte);
++ }
+ set_pte_at(mm, addr, ptep, swp_pte);
+
+ /*
+diff --git a/mm/rmap.c b/mm/rmap.c
+index 6cce9ef06753b..536f2706a6c8d 100644
+--- a/mm/rmap.c
++++ b/mm/rmap.c
+@@ -1511,9 +1511,14 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
+ */
+ entry = make_migration_entry(page, 0);
+ swp_pte = swp_entry_to_pte(entry);
+- if (pte_soft_dirty(pteval))
++
++ /*
++ * pteval maps a zone device page and is therefore
++ * a swap pte.
++ */
++ if (pte_swp_soft_dirty(pteval))
+ swp_pte = pte_swp_mksoft_dirty(swp_pte);
+- if (pte_uffd_wp(pteval))
++ if (pte_swp_uffd_wp(pteval))
+ swp_pte = pte_swp_mkuffd_wp(swp_pte);
+ set_pte_at(mm, pvmw.address, pvmw.pte, swp_pte);
+ /*
+diff --git a/mm/slub.c b/mm/slub.c
+index ef303070d175a..76d005862c4d9 100644
+--- a/mm/slub.c
++++ b/mm/slub.c
+@@ -680,12 +680,12 @@ static void slab_fix(struct kmem_cache *s, char *fmt, ...)
+ }
+
+ static bool freelist_corrupted(struct kmem_cache *s, struct page *page,
+- void *freelist, void *nextfree)
++ void **freelist, void *nextfree)
+ {
+ if ((s->flags & SLAB_CONSISTENCY_CHECKS) &&
+- !check_valid_pointer(s, page, nextfree)) {
+- object_err(s, page, freelist, "Freechain corrupt");
+- freelist = NULL;
++ !check_valid_pointer(s, page, nextfree) && freelist) {
++ object_err(s, page, *freelist, "Freechain corrupt");
++ *freelist = NULL;
+ slab_fix(s, "Isolate corrupted freechain");
+ return true;
+ }
+@@ -1425,7 +1425,7 @@ static inline void dec_slabs_node(struct kmem_cache *s, int node,
+ int objects) {}
+
+ static bool freelist_corrupted(struct kmem_cache *s, struct page *page,
+- void *freelist, void *nextfree)
++ void **freelist, void *nextfree)
+ {
+ return false;
+ }
+@@ -2117,7 +2117,7 @@ static void deactivate_slab(struct kmem_cache *s, struct page *page,
+ * 'freelist' is already corrupted. So isolate all objects
+ * starting at 'freelist'.
+ */
+- if (freelist_corrupted(s, page, freelist, nextfree))
++ if (freelist_corrupted(s, page, &freelist, nextfree))
+ break;
+
+ do {
+diff --git a/net/batman-adv/bat_v_ogm.c b/net/batman-adv/bat_v_ogm.c
+index 18028b9f95f01..65b1280cf2fc1 100644
+--- a/net/batman-adv/bat_v_ogm.c
++++ b/net/batman-adv/bat_v_ogm.c
+@@ -874,6 +874,12 @@ static void batadv_v_ogm_process(const struct sk_buff *skb, int ogm_offset,
+ ntohl(ogm_packet->seqno), ogm_throughput, ogm_packet->ttl,
+ ogm_packet->version, ntohs(ogm_packet->tvlv_len));
+
++ if (batadv_is_my_mac(bat_priv, ogm_packet->orig)) {
++ batadv_dbg(BATADV_DBG_BATMAN, bat_priv,
++ "Drop packet: originator packet from ourself\n");
++ return;
++ }
++
+ /* If the throughput metric is 0, immediately drop the packet. No need
+ * to create orig_node / neigh_node for an unusable route.
+ */
+@@ -1001,11 +1007,6 @@ int batadv_v_ogm_packet_recv(struct sk_buff *skb,
+ if (batadv_is_my_mac(bat_priv, ethhdr->h_source))
+ goto free_skb;
+
+- ogm_packet = (struct batadv_ogm2_packet *)skb->data;
+-
+- if (batadv_is_my_mac(bat_priv, ogm_packet->orig))
+- goto free_skb;
+-
+ batadv_inc_counter(bat_priv, BATADV_CNT_MGMT_RX);
+ batadv_add_counter(bat_priv, BATADV_CNT_MGMT_RX_BYTES,
+ skb->len + ETH_HLEN);
+diff --git a/net/batman-adv/bridge_loop_avoidance.c b/net/batman-adv/bridge_loop_avoidance.c
+index 41cc87f06b142..cfb9e16afe38a 100644
+--- a/net/batman-adv/bridge_loop_avoidance.c
++++ b/net/batman-adv/bridge_loop_avoidance.c
+@@ -437,7 +437,10 @@ static void batadv_bla_send_claim(struct batadv_priv *bat_priv, u8 *mac,
+ batadv_add_counter(bat_priv, BATADV_CNT_RX_BYTES,
+ skb->len + ETH_HLEN);
+
+- netif_rx(skb);
++ if (in_interrupt())
++ netif_rx(skb);
++ else
++ netif_rx_ni(skb);
+ out:
+ if (primary_if)
+ batadv_hardif_put(primary_if);
+diff --git a/net/batman-adv/gateway_client.c b/net/batman-adv/gateway_client.c
+index a18dcc686dc31..ef3f85b576c4c 100644
+--- a/net/batman-adv/gateway_client.c
++++ b/net/batman-adv/gateway_client.c
+@@ -703,8 +703,10 @@ batadv_gw_dhcp_recipient_get(struct sk_buff *skb, unsigned int *header_len,
+
+ chaddr_offset = *header_len + BATADV_DHCP_CHADDR_OFFSET;
+ /* store the client address if the message is going to a client */
+- if (ret == BATADV_DHCP_TO_CLIENT &&
+- pskb_may_pull(skb, chaddr_offset + ETH_ALEN)) {
++ if (ret == BATADV_DHCP_TO_CLIENT) {
++ if (!pskb_may_pull(skb, chaddr_offset + ETH_ALEN))
++ return BATADV_DHCP_NO;
++
+ /* check if the DHCP packet carries an Ethernet DHCP */
+ p = skb->data + *header_len + BATADV_DHCP_HTYPE_OFFSET;
+ if (*p != BATADV_DHCP_HTYPE_ETHERNET)
+diff --git a/net/bluetooth/hci_core.c b/net/bluetooth/hci_core.c
+index 41fba93d857a6..fc28dc201b936 100644
+--- a/net/bluetooth/hci_core.c
++++ b/net/bluetooth/hci_core.c
+@@ -3370,7 +3370,7 @@ done:
+ bt_dev_err(hdev, "Suspend notifier action (%lu) failed: %d",
+ action, ret);
+
+- return NOTIFY_STOP;
++ return NOTIFY_DONE;
+ }
+
+ /* Alloc HCI device */
+diff --git a/net/netfilter/nf_conntrack_proto_udp.c b/net/netfilter/nf_conntrack_proto_udp.c
+index 760ca24228165..af402f458ee02 100644
+--- a/net/netfilter/nf_conntrack_proto_udp.c
++++ b/net/netfilter/nf_conntrack_proto_udp.c
+@@ -81,18 +81,6 @@ static bool udp_error(struct sk_buff *skb,
+ return false;
+ }
+
+-static void nf_conntrack_udp_refresh_unreplied(struct nf_conn *ct,
+- struct sk_buff *skb,
+- enum ip_conntrack_info ctinfo,
+- u32 extra_jiffies)
+-{
+- if (unlikely(ctinfo == IP_CT_ESTABLISHED_REPLY &&
+- ct->status & IPS_NAT_CLASH))
+- nf_ct_kill(ct);
+- else
+- nf_ct_refresh_acct(ct, ctinfo, skb, extra_jiffies);
+-}
+-
+ /* Returns verdict for packet, and may modify conntracktype */
+ int nf_conntrack_udp_packet(struct nf_conn *ct,
+ struct sk_buff *skb,
+@@ -124,12 +112,15 @@ int nf_conntrack_udp_packet(struct nf_conn *ct,
+
+ nf_ct_refresh_acct(ct, ctinfo, skb, extra);
+
++ /* never set ASSURED for IPS_NAT_CLASH, they time out soon */
++ if (unlikely((ct->status & IPS_NAT_CLASH)))
++ return NF_ACCEPT;
++
+ /* Also, more likely to be important, and not a probe */
+ if (!test_and_set_bit(IPS_ASSURED_BIT, &ct->status))
+ nf_conntrack_event_cache(IPCT_ASSURED, ct);
+ } else {
+- nf_conntrack_udp_refresh_unreplied(ct, skb, ctinfo,
+- timeouts[UDP_CT_UNREPLIED]);
++ nf_ct_refresh_acct(ct, ctinfo, skb, timeouts[UDP_CT_UNREPLIED]);
+ }
+ return NF_ACCEPT;
+ }
+@@ -206,12 +197,15 @@ int nf_conntrack_udplite_packet(struct nf_conn *ct,
+ if (test_bit(IPS_SEEN_REPLY_BIT, &ct->status)) {
+ nf_ct_refresh_acct(ct, ctinfo, skb,
+ timeouts[UDP_CT_REPLIED]);
++
++ if (unlikely((ct->status & IPS_NAT_CLASH)))
++ return NF_ACCEPT;
++
+ /* Also, more likely to be important, and not a probe */
+ if (!test_and_set_bit(IPS_ASSURED_BIT, &ct->status))
+ nf_conntrack_event_cache(IPCT_ASSURED, ct);
+ } else {
+- nf_conntrack_udp_refresh_unreplied(ct, skb, ctinfo,
+- timeouts[UDP_CT_UNREPLIED]);
++ nf_ct_refresh_acct(ct, ctinfo, skb, timeouts[UDP_CT_UNREPLIED]);
+ }
+ return NF_ACCEPT;
+ }
+diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
+index d31832d32e028..05059f620d41e 100644
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -797,11 +797,11 @@ static int nf_tables_gettable(struct net *net, struct sock *nlsk,
+ nlh->nlmsg_seq, NFT_MSG_NEWTABLE, 0,
+ family, table);
+ if (err < 0)
+- goto err;
++ goto err_fill_table_info;
+
+- return nlmsg_unicast(nlsk, skb2, NETLINK_CB(skb).portid);
++ return nfnetlink_unicast(skb2, net, NETLINK_CB(skb).portid);
+
+-err:
++err_fill_table_info:
+ kfree_skb(skb2);
+ return err;
+ }
+@@ -1527,11 +1527,11 @@ static int nf_tables_getchain(struct net *net, struct sock *nlsk,
+ nlh->nlmsg_seq, NFT_MSG_NEWCHAIN, 0,
+ family, table, chain);
+ if (err < 0)
+- goto err;
++ goto err_fill_chain_info;
+
+- return nlmsg_unicast(nlsk, skb2, NETLINK_CB(skb).portid);
++ return nfnetlink_unicast(skb2, net, NETLINK_CB(skb).portid);
+
+-err:
++err_fill_chain_info:
+ kfree_skb(skb2);
+ return err;
+ }
+@@ -2898,11 +2898,11 @@ static int nf_tables_getrule(struct net *net, struct sock *nlsk,
+ nlh->nlmsg_seq, NFT_MSG_NEWRULE, 0,
+ family, table, chain, rule, NULL);
+ if (err < 0)
+- goto err;
++ goto err_fill_rule_info;
+
+- return nlmsg_unicast(nlsk, skb2, NETLINK_CB(skb).portid);
++ return nfnetlink_unicast(skb2, net, NETLINK_CB(skb).portid);
+
+-err:
++err_fill_rule_info:
+ kfree_skb(skb2);
+ return err;
+ }
+@@ -3643,7 +3643,8 @@ static int nf_tables_fill_set(struct sk_buff *skb, const struct nft_ctx *ctx,
+ goto nla_put_failure;
+ }
+
+- if (nla_put(skb, NFTA_SET_USERDATA, set->udlen, set->udata))
++ if (set->udata &&
++ nla_put(skb, NFTA_SET_USERDATA, set->udlen, set->udata))
+ goto nla_put_failure;
+
+ nest = nla_nest_start_noflag(skb, NFTA_SET_DESC);
+@@ -3828,11 +3829,11 @@ static int nf_tables_getset(struct net *net, struct sock *nlsk,
+
+ err = nf_tables_fill_set(skb2, &ctx, set, NFT_MSG_NEWSET, 0);
+ if (err < 0)
+- goto err;
++ goto err_fill_set_info;
+
+- return nlmsg_unicast(nlsk, skb2, NETLINK_CB(skb).portid);
++ return nfnetlink_unicast(skb2, net, NETLINK_CB(skb).portid);
+
+-err:
++err_fill_set_info:
+ kfree_skb(skb2);
+ return err;
+ }
+@@ -4720,24 +4721,18 @@ static int nft_get_set_elem(struct nft_ctx *ctx, struct nft_set *set,
+ err = -ENOMEM;
+ skb = nlmsg_new(NLMSG_GOODSIZE, GFP_ATOMIC);
+ if (skb == NULL)
+- goto err1;
++ return err;
+
+ err = nf_tables_fill_setelem_info(skb, ctx, ctx->seq, ctx->portid,
+ NFT_MSG_NEWSETELEM, 0, set, &elem);
+ if (err < 0)
+- goto err2;
++ goto err_fill_setelem;
+
+- err = nfnetlink_unicast(skb, ctx->net, ctx->portid, MSG_DONTWAIT);
+- /* This avoids a loop in nfnetlink. */
+- if (err < 0)
+- goto err1;
++ return nfnetlink_unicast(skb, ctx->net, ctx->portid);
+
+- return 0;
+-err2:
++err_fill_setelem:
+ kfree_skb(skb);
+-err1:
+- /* this avoids a loop in nfnetlink. */
+- return err == -EAGAIN ? -ENOBUFS : err;
++ return err;
+ }
+
+ /* called with rcu_read_lock held */
+@@ -5991,10 +5986,11 @@ static int nf_tables_getobj(struct net *net, struct sock *nlsk,
+ nlh->nlmsg_seq, NFT_MSG_NEWOBJ, 0,
+ family, table, obj, reset);
+ if (err < 0)
+- goto err;
++ goto err_fill_obj_info;
+
+- return nlmsg_unicast(nlsk, skb2, NETLINK_CB(skb).portid);
+-err:
++ return nfnetlink_unicast(skb2, net, NETLINK_CB(skb).portid);
++
++err_fill_obj_info:
+ kfree_skb(skb2);
+ return err;
+ }
+@@ -6843,10 +6839,11 @@ static int nf_tables_getflowtable(struct net *net, struct sock *nlsk,
+ NFT_MSG_NEWFLOWTABLE, 0, family,
+ flowtable, &flowtable->hook_list);
+ if (err < 0)
+- goto err;
++ goto err_fill_flowtable_info;
+
+- return nlmsg_unicast(nlsk, skb2, NETLINK_CB(skb).portid);
+-err:
++ return nfnetlink_unicast(skb2, net, NETLINK_CB(skb).portid);
++
++err_fill_flowtable_info:
+ kfree_skb(skb2);
+ return err;
+ }
+@@ -7017,10 +7014,11 @@ static int nf_tables_getgen(struct net *net, struct sock *nlsk,
+ err = nf_tables_fill_gen_info(skb2, net, NETLINK_CB(skb).portid,
+ nlh->nlmsg_seq);
+ if (err < 0)
+- goto err;
++ goto err_fill_gen_info;
+
+- return nlmsg_unicast(nlsk, skb2, NETLINK_CB(skb).portid);
+-err:
++ return nfnetlink_unicast(skb2, net, NETLINK_CB(skb).portid);
++
++err_fill_gen_info:
+ kfree_skb(skb2);
+ return err;
+ }
+diff --git a/net/netfilter/nfnetlink.c b/net/netfilter/nfnetlink.c
+index 5f24edf958309..3a2e64e13b227 100644
+--- a/net/netfilter/nfnetlink.c
++++ b/net/netfilter/nfnetlink.c
+@@ -149,10 +149,15 @@ int nfnetlink_set_err(struct net *net, u32 portid, u32 group, int error)
+ }
+ EXPORT_SYMBOL_GPL(nfnetlink_set_err);
+
+-int nfnetlink_unicast(struct sk_buff *skb, struct net *net, u32 portid,
+- int flags)
++int nfnetlink_unicast(struct sk_buff *skb, struct net *net, u32 portid)
+ {
+- return netlink_unicast(net->nfnl, skb, portid, flags);
++ int err;
++
++ err = nlmsg_unicast(net->nfnl, skb, portid);
++ if (err == -EAGAIN)
++ err = -ENOBUFS;
++
++ return err;
+ }
+ EXPORT_SYMBOL_GPL(nfnetlink_unicast);
+
+diff --git a/net/netfilter/nfnetlink_log.c b/net/netfilter/nfnetlink_log.c
+index 0ba020ca38e68..7ca2ca4bba055 100644
+--- a/net/netfilter/nfnetlink_log.c
++++ b/net/netfilter/nfnetlink_log.c
+@@ -356,8 +356,7 @@ __nfulnl_send(struct nfulnl_instance *inst)
+ goto out;
+ }
+ }
+- nfnetlink_unicast(inst->skb, inst->net, inst->peer_portid,
+- MSG_DONTWAIT);
++ nfnetlink_unicast(inst->skb, inst->net, inst->peer_portid);
+ out:
+ inst->qlen = 0;
+ inst->skb = NULL;
+diff --git a/net/netfilter/nfnetlink_queue.c b/net/netfilter/nfnetlink_queue.c
+index 3243a31f6e829..70d086944bcc7 100644
+--- a/net/netfilter/nfnetlink_queue.c
++++ b/net/netfilter/nfnetlink_queue.c
+@@ -681,7 +681,7 @@ __nfqnl_enqueue_packet(struct net *net, struct nfqnl_instance *queue,
+ *packet_id_ptr = htonl(entry->id);
+
+ /* nfnetlink_unicast will either free the nskb or add it to a socket */
+- err = nfnetlink_unicast(nskb, net, queue->peer_portid, MSG_DONTWAIT);
++ err = nfnetlink_unicast(nskb, net, queue->peer_portid);
+ if (err < 0) {
+ if (queue->flags & NFQA_CFG_F_FAIL_OPEN) {
+ failopen = 1;
+diff --git a/net/netfilter/nft_flow_offload.c b/net/netfilter/nft_flow_offload.c
+index 3b9b97aa4b32e..3a6c84fb2c90d 100644
+--- a/net/netfilter/nft_flow_offload.c
++++ b/net/netfilter/nft_flow_offload.c
+@@ -102,7 +102,7 @@ static void nft_flow_offload_eval(const struct nft_expr *expr,
+ }
+
+ if (nf_ct_ext_exist(ct, NF_CT_EXT_HELPER) ||
+- ct->status & IPS_SEQ_ADJUST)
++ ct->status & (IPS_SEQ_ADJUST | IPS_NAT_CLASH))
+ goto out;
+
+ if (!nf_ct_is_confirmed(ct))
+diff --git a/net/netfilter/nft_payload.c b/net/netfilter/nft_payload.c
+index a7de3a58f553d..67ce866a446d9 100644
+--- a/net/netfilter/nft_payload.c
++++ b/net/netfilter/nft_payload.c
+@@ -87,7 +87,9 @@ void nft_payload_eval(const struct nft_expr *expr,
+ u32 *dest = &regs->data[priv->dreg];
+ int offset;
+
+- dest[priv->len / NFT_REG32_SIZE] = 0;
++ if (priv->len % NFT_REG32_SIZE)
++ dest[priv->len / NFT_REG32_SIZE] = 0;
++
+ switch (priv->base) {
+ case NFT_PAYLOAD_LL_HEADER:
+ if (!skb_mac_header_was_set(skb))
+diff --git a/net/packet/af_packet.c b/net/packet/af_packet.c
+index 301f41d4929bd..82f7802983797 100644
+--- a/net/packet/af_packet.c
++++ b/net/packet/af_packet.c
+@@ -2170,7 +2170,8 @@ static int tpacket_rcv(struct sk_buff *skb, struct net_device *dev,
+ int skb_len = skb->len;
+ unsigned int snaplen, res;
+ unsigned long status = TP_STATUS_USER;
+- unsigned short macoff, netoff, hdrlen;
++ unsigned short macoff, hdrlen;
++ unsigned int netoff;
+ struct sk_buff *copy_skb = NULL;
+ struct timespec64 ts;
+ __u32 ts_status;
+@@ -2239,6 +2240,10 @@ static int tpacket_rcv(struct sk_buff *skb, struct net_device *dev,
+ }
+ macoff = netoff - maclen;
+ }
++ if (netoff > USHRT_MAX) {
++ atomic_inc(&po->tp_drops);
++ goto drop_n_restore;
++ }
+ if (po->tp_version <= TPACKET_V2) {
+ if (macoff + snaplen > po->rx_ring.frame_size) {
+ if (po->copy_thresh &&
+diff --git a/net/rxrpc/ar-internal.h b/net/rxrpc/ar-internal.h
+index 9a2139ebd67d7..ca1fea72c8d29 100644
+--- a/net/rxrpc/ar-internal.h
++++ b/net/rxrpc/ar-internal.h
+@@ -488,7 +488,6 @@ enum rxrpc_call_flag {
+ RXRPC_CALL_RX_LAST, /* Received the last packet (at rxtx_top) */
+ RXRPC_CALL_TX_LAST, /* Last packet in Tx buffer (at rxtx_top) */
+ RXRPC_CALL_SEND_PING, /* A ping will need to be sent */
+- RXRPC_CALL_PINGING, /* Ping in process */
+ RXRPC_CALL_RETRANS_TIMEOUT, /* Retransmission due to timeout occurred */
+ RXRPC_CALL_BEGAN_RX_TIMER, /* We began the expect_rx_by timer */
+ RXRPC_CALL_RX_HEARD, /* The peer responded at least once to this call */
+@@ -673,9 +672,13 @@ struct rxrpc_call {
+ rxrpc_seq_t ackr_consumed; /* Highest packet shown consumed */
+ rxrpc_seq_t ackr_seen; /* Highest packet shown seen */
+
+- /* ping management */
+- rxrpc_serial_t ping_serial; /* Last ping sent */
+- ktime_t ping_time; /* Time last ping sent */
++ /* RTT management */
++ rxrpc_serial_t rtt_serial[4]; /* Serial number of DATA or PING sent */
++ ktime_t rtt_sent_at[4]; /* Time packet sent */
++ unsigned long rtt_avail; /* Mask of available slots in bits 0-3,
++ * Mask of pending samples in 8-11 */
++#define RXRPC_CALL_RTT_AVAIL_MASK 0xf
++#define RXRPC_CALL_RTT_PEND_SHIFT 8
+
+ /* transmission-phase ACK management */
+ ktime_t acks_latest_ts; /* Timestamp of latest ACK received */
+@@ -1037,7 +1040,7 @@ static inline bool __rxrpc_abort_eproto(struct rxrpc_call *call,
+ /*
+ * rtt.c
+ */
+-void rxrpc_peer_add_rtt(struct rxrpc_call *, enum rxrpc_rtt_rx_trace,
++void rxrpc_peer_add_rtt(struct rxrpc_call *, enum rxrpc_rtt_rx_trace, int,
+ rxrpc_serial_t, rxrpc_serial_t, ktime_t, ktime_t);
+ unsigned long rxrpc_get_rto_backoff(struct rxrpc_peer *, bool);
+ void rxrpc_peer_init_rtt(struct rxrpc_peer *);
+diff --git a/net/rxrpc/call_object.c b/net/rxrpc/call_object.c
+index 38a46167523fa..a40fae0139423 100644
+--- a/net/rxrpc/call_object.c
++++ b/net/rxrpc/call_object.c
+@@ -153,6 +153,7 @@ struct rxrpc_call *rxrpc_alloc_call(struct rxrpc_sock *rx, gfp_t gfp,
+ call->cong_ssthresh = RXRPC_RXTX_BUFF_SIZE - 1;
+
+ call->rxnet = rxnet;
++ call->rtt_avail = RXRPC_CALL_RTT_AVAIL_MASK;
+ atomic_inc(&rxnet->nr_calls);
+ return call;
+
+diff --git a/net/rxrpc/input.c b/net/rxrpc/input.c
+index 767579328a069..19ddfc9807e89 100644
+--- a/net/rxrpc/input.c
++++ b/net/rxrpc/input.c
+@@ -608,36 +608,57 @@ unlock:
+ }
+
+ /*
+- * Process a requested ACK.
++ * See if there's a cached RTT probe to complete.
+ */
+-static void rxrpc_input_requested_ack(struct rxrpc_call *call,
+- ktime_t resp_time,
+- rxrpc_serial_t orig_serial,
+- rxrpc_serial_t ack_serial)
++static void rxrpc_complete_rtt_probe(struct rxrpc_call *call,
++ ktime_t resp_time,
++ rxrpc_serial_t acked_serial,
++ rxrpc_serial_t ack_serial,
++ enum rxrpc_rtt_rx_trace type)
+ {
+- struct rxrpc_skb_priv *sp;
+- struct sk_buff *skb;
++ rxrpc_serial_t orig_serial;
++ unsigned long avail;
+ ktime_t sent_at;
+- int ix;
++ bool matched = false;
++ int i;
+
+- for (ix = 0; ix < RXRPC_RXTX_BUFF_SIZE; ix++) {
+- skb = call->rxtx_buffer[ix];
+- if (!skb)
+- continue;
++ avail = READ_ONCE(call->rtt_avail);
++ smp_rmb(); /* Read avail bits before accessing data. */
+
+- sent_at = skb->tstamp;
+- smp_rmb(); /* Read timestamp before serial. */
+- sp = rxrpc_skb(skb);
+- if (sp->hdr.serial != orig_serial)
++ for (i = 0; i < ARRAY_SIZE(call->rtt_serial); i++) {
++ if (!test_bit(i + RXRPC_CALL_RTT_PEND_SHIFT, &avail))
+ continue;
+- goto found;
+- }
+
+- return;
++ sent_at = call->rtt_sent_at[i];
++ orig_serial = call->rtt_serial[i];
++
++ if (orig_serial == acked_serial) {
++ clear_bit(i + RXRPC_CALL_RTT_PEND_SHIFT, &call->rtt_avail);
++ smp_mb(); /* Read data before setting avail bit */
++ set_bit(i, &call->rtt_avail);
++ if (type != rxrpc_rtt_rx_cancel)
++ rxrpc_peer_add_rtt(call, type, i, acked_serial, ack_serial,
++ sent_at, resp_time);
++ else
++ trace_rxrpc_rtt_rx(call, rxrpc_rtt_rx_cancel, i,
++ orig_serial, acked_serial, 0, 0);
++ matched = true;
++ }
++
++ /* If a later serial is being acked, then mark this slot as
++ * being available.
++ */
++ if (after(acked_serial, orig_serial)) {
++ trace_rxrpc_rtt_rx(call, rxrpc_rtt_rx_obsolete, i,
++ orig_serial, acked_serial, 0, 0);
++ clear_bit(i + RXRPC_CALL_RTT_PEND_SHIFT, &call->rtt_avail);
++ smp_wmb();
++ set_bit(i, &call->rtt_avail);
++ }
++ }
+
+-found:
+- rxrpc_peer_add_rtt(call, rxrpc_rtt_rx_requested_ack,
+- orig_serial, ack_serial, sent_at, resp_time);
++ if (!matched)
++ trace_rxrpc_rtt_rx(call, rxrpc_rtt_rx_lost, 9, 0, acked_serial, 0, 0);
+ }
+
+ /*
+@@ -682,27 +703,11 @@ static void rxrpc_input_check_for_lost_ack(struct rxrpc_call *call)
+ */
+ static void rxrpc_input_ping_response(struct rxrpc_call *call,
+ ktime_t resp_time,
+- rxrpc_serial_t orig_serial,
++ rxrpc_serial_t acked_serial,
+ rxrpc_serial_t ack_serial)
+ {
+- rxrpc_serial_t ping_serial;
+- ktime_t ping_time;
+-
+- ping_time = call->ping_time;
+- smp_rmb();
+- ping_serial = READ_ONCE(call->ping_serial);
+-
+- if (orig_serial == call->acks_lost_ping)
++ if (acked_serial == call->acks_lost_ping)
+ rxrpc_input_check_for_lost_ack(call);
+-
+- if (before(orig_serial, ping_serial) ||
+- !test_and_clear_bit(RXRPC_CALL_PINGING, &call->flags))
+- return;
+- if (after(orig_serial, ping_serial))
+- return;
+-
+- rxrpc_peer_add_rtt(call, rxrpc_rtt_rx_ping_response,
+- orig_serial, ack_serial, ping_time, resp_time);
+ }
+
+ /*
+@@ -843,7 +848,7 @@ static void rxrpc_input_ack(struct rxrpc_call *call, struct sk_buff *skb)
+ struct rxrpc_ackinfo info;
+ u8 acks[RXRPC_MAXACKS];
+ } buf;
+- rxrpc_serial_t acked_serial;
++ rxrpc_serial_t ack_serial, acked_serial;
+ rxrpc_seq_t first_soft_ack, hard_ack, prev_pkt;
+ int nr_acks, offset, ioffset;
+
+@@ -856,6 +861,7 @@ static void rxrpc_input_ack(struct rxrpc_call *call, struct sk_buff *skb)
+ }
+ offset += sizeof(buf.ack);
+
++ ack_serial = sp->hdr.serial;
+ acked_serial = ntohl(buf.ack.serial);
+ first_soft_ack = ntohl(buf.ack.firstPacket);
+ prev_pkt = ntohl(buf.ack.previousPacket);
+@@ -864,31 +870,42 @@ static void rxrpc_input_ack(struct rxrpc_call *call, struct sk_buff *skb)
+ summary.ack_reason = (buf.ack.reason < RXRPC_ACK__INVALID ?
+ buf.ack.reason : RXRPC_ACK__INVALID);
+
+- trace_rxrpc_rx_ack(call, sp->hdr.serial, acked_serial,
++ trace_rxrpc_rx_ack(call, ack_serial, acked_serial,
+ first_soft_ack, prev_pkt,
+ summary.ack_reason, nr_acks);
+
+- if (buf.ack.reason == RXRPC_ACK_PING_RESPONSE)
++ switch (buf.ack.reason) {
++ case RXRPC_ACK_PING_RESPONSE:
+ rxrpc_input_ping_response(call, skb->tstamp, acked_serial,
+- sp->hdr.serial);
+- if (buf.ack.reason == RXRPC_ACK_REQUESTED)
+- rxrpc_input_requested_ack(call, skb->tstamp, acked_serial,
+- sp->hdr.serial);
++ ack_serial);
++ rxrpc_complete_rtt_probe(call, skb->tstamp, acked_serial, ack_serial,
++ rxrpc_rtt_rx_ping_response);
++ break;
++ case RXRPC_ACK_REQUESTED:
++ rxrpc_complete_rtt_probe(call, skb->tstamp, acked_serial, ack_serial,
++ rxrpc_rtt_rx_requested_ack);
++ break;
++ default:
++ if (acked_serial != 0)
++ rxrpc_complete_rtt_probe(call, skb->tstamp, acked_serial, ack_serial,
++ rxrpc_rtt_rx_cancel);
++ break;
++ }
+
+ if (buf.ack.reason == RXRPC_ACK_PING) {
+- _proto("Rx ACK %%%u PING Request", sp->hdr.serial);
++ _proto("Rx ACK %%%u PING Request", ack_serial);
+ rxrpc_propose_ACK(call, RXRPC_ACK_PING_RESPONSE,
+- sp->hdr.serial, true, true,
++ ack_serial, true, true,
+ rxrpc_propose_ack_respond_to_ping);
+ } else if (sp->hdr.flags & RXRPC_REQUEST_ACK) {
+ rxrpc_propose_ACK(call, RXRPC_ACK_REQUESTED,
+- sp->hdr.serial, true, true,
++ ack_serial, true, true,
+ rxrpc_propose_ack_respond_to_ack);
+ }
+
+ /* Discard any out-of-order or duplicate ACKs (outside lock). */
+ if (!rxrpc_is_ack_valid(call, first_soft_ack, prev_pkt)) {
+- trace_rxrpc_rx_discard_ack(call->debug_id, sp->hdr.serial,
++ trace_rxrpc_rx_discard_ack(call->debug_id, ack_serial,
+ first_soft_ack, call->ackr_first_seq,
+ prev_pkt, call->ackr_prev_seq);
+ return;
+@@ -904,7 +921,7 @@ static void rxrpc_input_ack(struct rxrpc_call *call, struct sk_buff *skb)
+
+ /* Discard any out-of-order or duplicate ACKs (inside lock). */
+ if (!rxrpc_is_ack_valid(call, first_soft_ack, prev_pkt)) {
+- trace_rxrpc_rx_discard_ack(call->debug_id, sp->hdr.serial,
++ trace_rxrpc_rx_discard_ack(call->debug_id, ack_serial,
+ first_soft_ack, call->ackr_first_seq,
+ prev_pkt, call->ackr_prev_seq);
+ goto out;
+@@ -964,7 +981,7 @@ static void rxrpc_input_ack(struct rxrpc_call *call, struct sk_buff *skb)
+ RXRPC_TX_ANNO_LAST &&
+ summary.nr_acks == call->tx_top - hard_ack &&
+ rxrpc_is_client_call(call))
+- rxrpc_propose_ACK(call, RXRPC_ACK_PING, sp->hdr.serial,
++ rxrpc_propose_ACK(call, RXRPC_ACK_PING, ack_serial,
+ false, true,
+ rxrpc_propose_ack_ping_for_lost_reply);
+
+diff --git a/net/rxrpc/output.c b/net/rxrpc/output.c
+index 1ba43c3df4adb..3cfff7922ba82 100644
+--- a/net/rxrpc/output.c
++++ b/net/rxrpc/output.c
+@@ -123,6 +123,49 @@ static size_t rxrpc_fill_out_ack(struct rxrpc_connection *conn,
+ return top - hard_ack + 3;
+ }
+
++/*
++ * Record the beginning of an RTT probe.
++ */
++static int rxrpc_begin_rtt_probe(struct rxrpc_call *call, rxrpc_serial_t serial,
++ enum rxrpc_rtt_tx_trace why)
++{
++ unsigned long avail = call->rtt_avail;
++ int rtt_slot = 9;
++
++ if (!(avail & RXRPC_CALL_RTT_AVAIL_MASK))
++ goto no_slot;
++
++ rtt_slot = __ffs(avail & RXRPC_CALL_RTT_AVAIL_MASK);
++ if (!test_and_clear_bit(rtt_slot, &call->rtt_avail))
++ goto no_slot;
++
++ call->rtt_serial[rtt_slot] = serial;
++ call->rtt_sent_at[rtt_slot] = ktime_get_real();
++ smp_wmb(); /* Write data before avail bit */
++ set_bit(rtt_slot + RXRPC_CALL_RTT_PEND_SHIFT, &call->rtt_avail);
++
++ trace_rxrpc_rtt_tx(call, why, rtt_slot, serial);
++ return rtt_slot;
++
++no_slot:
++ trace_rxrpc_rtt_tx(call, rxrpc_rtt_tx_no_slot, rtt_slot, serial);
++ return -1;
++}
++
++/*
++ * Cancel an RTT probe.
++ */
++static void rxrpc_cancel_rtt_probe(struct rxrpc_call *call,
++ rxrpc_serial_t serial, int rtt_slot)
++{
++ if (rtt_slot != -1) {
++ clear_bit(rtt_slot + RXRPC_CALL_RTT_PEND_SHIFT, &call->rtt_avail);
++ smp_wmb(); /* Clear pending bit before setting slot */
++ set_bit(rtt_slot, &call->rtt_avail);
++ trace_rxrpc_rtt_tx(call, rxrpc_rtt_tx_cancel, rtt_slot, serial);
++ }
++}
++
+ /*
+ * Send an ACK call packet.
+ */
+@@ -136,7 +179,7 @@ int rxrpc_send_ack_packet(struct rxrpc_call *call, bool ping,
+ rxrpc_serial_t serial;
+ rxrpc_seq_t hard_ack, top;
+ size_t len, n;
+- int ret;
++ int ret, rtt_slot = -1;
+ u8 reason;
+
+ if (test_bit(RXRPC_CALL_DISCONNECTED, &call->flags))
+@@ -196,18 +239,8 @@ int rxrpc_send_ack_packet(struct rxrpc_call *call, bool ping,
+ if (_serial)
+ *_serial = serial;
+
+- if (ping) {
+- call->ping_serial = serial;
+- smp_wmb();
+- /* We need to stick a time in before we send the packet in case
+- * the reply gets back before kernel_sendmsg() completes - but
+- * asking UDP to send the packet can take a relatively long
+- * time.
+- */
+- call->ping_time = ktime_get_real();
+- set_bit(RXRPC_CALL_PINGING, &call->flags);
+- trace_rxrpc_rtt_tx(call, rxrpc_rtt_tx_ping, serial);
+- }
++ if (ping)
++ rtt_slot = rxrpc_begin_rtt_probe(call, serial, rxrpc_rtt_tx_ping);
+
+ ret = kernel_sendmsg(conn->params.local->socket, &msg, iov, 2, len);
+ conn->params.peer->last_tx_at = ktime_get_seconds();
+@@ -221,8 +254,7 @@ int rxrpc_send_ack_packet(struct rxrpc_call *call, bool ping,
+
+ if (call->state < RXRPC_CALL_COMPLETE) {
+ if (ret < 0) {
+- if (ping)
+- clear_bit(RXRPC_CALL_PINGING, &call->flags);
++ rxrpc_cancel_rtt_probe(call, serial, rtt_slot);
+ rxrpc_propose_ACK(call, pkt->ack.reason,
+ ntohl(pkt->ack.serial),
+ false, true,
+@@ -321,7 +353,7 @@ int rxrpc_send_data_packet(struct rxrpc_call *call, struct sk_buff *skb,
+ struct kvec iov[2];
+ rxrpc_serial_t serial;
+ size_t len;
+- int ret;
++ int ret, rtt_slot = -1;
+
+ _enter(",{%d}", skb->len);
+
+@@ -397,6 +429,8 @@ int rxrpc_send_data_packet(struct rxrpc_call *call, struct sk_buff *skb,
+ sp->hdr.serial = serial;
+ smp_wmb(); /* Set serial before timestamp */
+ skb->tstamp = ktime_get_real();
++ if (whdr.flags & RXRPC_REQUEST_ACK)
++ rtt_slot = rxrpc_begin_rtt_probe(call, serial, rxrpc_rtt_tx_data);
+
+ /* send the packet by UDP
+ * - returns -EMSGSIZE if UDP would have to fragment the packet
+@@ -408,12 +442,15 @@ int rxrpc_send_data_packet(struct rxrpc_call *call, struct sk_buff *skb,
+ conn->params.peer->last_tx_at = ktime_get_seconds();
+
+ up_read(&conn->params.local->defrag_sem);
+- if (ret < 0)
++ if (ret < 0) {
++ rxrpc_cancel_rtt_probe(call, serial, rtt_slot);
+ trace_rxrpc_tx_fail(call->debug_id, serial, ret,
+ rxrpc_tx_point_call_data_nofrag);
+- else
++ } else {
+ trace_rxrpc_tx_packet(call->debug_id, &whdr,
+ rxrpc_tx_point_call_data_nofrag);
++ }
++
+ rxrpc_tx_backoff(call, ret);
+ if (ret == -EMSGSIZE)
+ goto send_fragmentable;
+@@ -422,7 +459,6 @@ done:
+ if (ret >= 0) {
+ if (whdr.flags & RXRPC_REQUEST_ACK) {
+ call->peer->rtt_last_req = skb->tstamp;
+- trace_rxrpc_rtt_tx(call, rxrpc_rtt_tx_data, serial);
+ if (call->peer->rtt_count > 1) {
+ unsigned long nowj = jiffies, ack_lost_at;
+
+@@ -469,6 +505,8 @@ send_fragmentable:
+ sp->hdr.serial = serial;
+ smp_wmb(); /* Set serial before timestamp */
+ skb->tstamp = ktime_get_real();
++ if (whdr.flags & RXRPC_REQUEST_ACK)
++ rtt_slot = rxrpc_begin_rtt_probe(call, serial, rxrpc_rtt_tx_data);
+
+ switch (conn->params.local->srx.transport.family) {
+ case AF_INET6:
+@@ -487,12 +525,14 @@ send_fragmentable:
+ BUG();
+ }
+
+- if (ret < 0)
++ if (ret < 0) {
++ rxrpc_cancel_rtt_probe(call, serial, rtt_slot);
+ trace_rxrpc_tx_fail(call->debug_id, serial, ret,
+ rxrpc_tx_point_call_data_frag);
+- else
++ } else {
+ trace_rxrpc_tx_packet(call->debug_id, &whdr,
+ rxrpc_tx_point_call_data_frag);
++ }
+ rxrpc_tx_backoff(call, ret);
+
+ up_write(&conn->params.local->defrag_sem);
+diff --git a/net/rxrpc/peer_object.c b/net/rxrpc/peer_object.c
+index ca29976bb193e..68396d0520525 100644
+--- a/net/rxrpc/peer_object.c
++++ b/net/rxrpc/peer_object.c
+@@ -502,11 +502,21 @@ EXPORT_SYMBOL(rxrpc_kernel_get_peer);
+ * rxrpc_kernel_get_srtt - Get a call's peer smoothed RTT
+ * @sock: The socket on which the call is in progress.
+ * @call: The call to query
++ * @_srtt: Where to store the SRTT value.
+ *
+- * Get the call's peer smoothed RTT.
++ * Get the call's peer smoothed RTT in uS.
+ */
+-u32 rxrpc_kernel_get_srtt(struct socket *sock, struct rxrpc_call *call)
++bool rxrpc_kernel_get_srtt(struct socket *sock, struct rxrpc_call *call,
++ u32 *_srtt)
+ {
+- return call->peer->srtt_us >> 3;
++ struct rxrpc_peer *peer = call->peer;
++
++ if (peer->rtt_count == 0) {
++ *_srtt = 1000000; /* 1S */
++ return false;
++ }
++
++ *_srtt = call->peer->srtt_us >> 3;
++ return true;
+ }
+ EXPORT_SYMBOL(rxrpc_kernel_get_srtt);
+diff --git a/net/rxrpc/rtt.c b/net/rxrpc/rtt.c
+index 928d8b34a3eee..1221b0637a7ec 100644
+--- a/net/rxrpc/rtt.c
++++ b/net/rxrpc/rtt.c
+@@ -146,6 +146,7 @@ static void rxrpc_ack_update_rtt(struct rxrpc_peer *peer, long rtt_us)
+ * exclusive access to the peer RTT data.
+ */
+ void rxrpc_peer_add_rtt(struct rxrpc_call *call, enum rxrpc_rtt_rx_trace why,
++ int rtt_slot,
+ rxrpc_serial_t send_serial, rxrpc_serial_t resp_serial,
+ ktime_t send_time, ktime_t resp_time)
+ {
+@@ -162,7 +163,7 @@ void rxrpc_peer_add_rtt(struct rxrpc_call *call, enum rxrpc_rtt_rx_trace why,
+ peer->rtt_count++;
+ spin_unlock(&peer->rtt_input_lock);
+
+- trace_rxrpc_rtt_rx(call, why, send_serial, resp_serial,
++ trace_rxrpc_rtt_rx(call, why, rtt_slot, send_serial, resp_serial,
+ peer->srtt_us >> 3, peer->rto_j);
+ }
+
+diff --git a/net/wireless/reg.c b/net/wireless/reg.c
+index 0d74a31ef0ab4..fc2af2c8b6d54 100644
+--- a/net/wireless/reg.c
++++ b/net/wireless/reg.c
+@@ -2944,6 +2944,9 @@ int regulatory_hint_user(const char *alpha2,
+ if (WARN_ON(!alpha2))
+ return -EINVAL;
+
++ if (!is_world_regdom(alpha2) && !is_an_alpha2(alpha2))
++ return -EINVAL;
++
+ request = kzalloc(sizeof(struct regulatory_request), GFP_KERNEL);
+ if (!request)
+ return -ENOMEM;
+diff --git a/scripts/checkpatch.pl b/scripts/checkpatch.pl
+index 4c820607540bf..e73e998d582a1 100755
+--- a/scripts/checkpatch.pl
++++ b/scripts/checkpatch.pl
+@@ -2636,8 +2636,8 @@ sub process {
+
+ # Check if the commit log has what seems like a diff which can confuse patch
+ if ($in_commit_log && !$commit_log_has_diff &&
+- (($line =~ m@^\s+diff\b.*a/[\w/]+@ &&
+- $line =~ m@^\s+diff\b.*a/([\w/]+)\s+b/$1\b@) ||
++ (($line =~ m@^\s+diff\b.*a/([\w/]+)@ &&
++ $line =~ m@^\s+diff\b.*a/[\w/]+\s+b/$1\b@) ||
+ $line =~ m@^\s*(?:\-\-\-\s+a/|\+\+\+\s+b/)@ ||
+ $line =~ m/^\s*\@\@ \-\d+,\d+ \+\d+,\d+ \@\@/)) {
+ ERROR("DIFF_IN_COMMIT_MSG",
+diff --git a/scripts/kconfig/streamline_config.pl b/scripts/kconfig/streamline_config.pl
+index 19857d18d814d..1c78ba49ca992 100755
+--- a/scripts/kconfig/streamline_config.pl
++++ b/scripts/kconfig/streamline_config.pl
+@@ -593,7 +593,10 @@ while ($repeat) {
+ }
+
+ my %setconfigs;
+-my @preserved_kconfigs = split(/:/,$ENV{LMC_KEEP});
++my @preserved_kconfigs;
++if (defined($ENV{'LMC_KEEP'})) {
++ @preserved_kconfigs = split(/:/,$ENV{LMC_KEEP});
++}
+
+ sub in_preserved_kconfigs {
+ my $kconfig = $config2kfile{$_[0]};
+diff --git a/sound/core/oss/mulaw.c b/sound/core/oss/mulaw.c
+index 3788906421a73..fe27034f28460 100644
+--- a/sound/core/oss/mulaw.c
++++ b/sound/core/oss/mulaw.c
+@@ -329,8 +329,8 @@ int snd_pcm_plugin_build_mulaw(struct snd_pcm_substream *plug,
+ snd_BUG();
+ return -EINVAL;
+ }
+- if (snd_BUG_ON(!snd_pcm_format_linear(format->format)))
+- return -ENXIO;
++ if (!snd_pcm_format_linear(format->format))
++ return -EINVAL;
+
+ err = snd_pcm_plugin_build(plug, "Mu-Law<->linear conversion",
+ src_format, dst_format,
+diff --git a/sound/firewire/digi00x/digi00x.c b/sound/firewire/digi00x/digi00x.c
+index c84b913a9fe01..ab8408966ec33 100644
+--- a/sound/firewire/digi00x/digi00x.c
++++ b/sound/firewire/digi00x/digi00x.c
+@@ -14,6 +14,7 @@ MODULE_LICENSE("GPL v2");
+ #define VENDOR_DIGIDESIGN 0x00a07e
+ #define MODEL_CONSOLE 0x000001
+ #define MODEL_RACK 0x000002
++#define SPEC_VERSION 0x000001
+
+ static int name_card(struct snd_dg00x *dg00x)
+ {
+@@ -175,14 +176,18 @@ static const struct ieee1394_device_id snd_dg00x_id_table[] = {
+ /* Both of 002/003 use the same ID. */
+ {
+ .match_flags = IEEE1394_MATCH_VENDOR_ID |
++ IEEE1394_MATCH_VERSION |
+ IEEE1394_MATCH_MODEL_ID,
+ .vendor_id = VENDOR_DIGIDESIGN,
++ .version = SPEC_VERSION,
+ .model_id = MODEL_CONSOLE,
+ },
+ {
+ .match_flags = IEEE1394_MATCH_VENDOR_ID |
++ IEEE1394_MATCH_VERSION |
+ IEEE1394_MATCH_MODEL_ID,
+ .vendor_id = VENDOR_DIGIDESIGN,
++ .version = SPEC_VERSION,
+ .model_id = MODEL_RACK,
+ },
+ {}
+diff --git a/sound/firewire/tascam/tascam.c b/sound/firewire/tascam/tascam.c
+index 5dac0d9fc58e5..75f2edd8e78fb 100644
+--- a/sound/firewire/tascam/tascam.c
++++ b/sound/firewire/tascam/tascam.c
+@@ -39,9 +39,6 @@ static const struct snd_tscm_spec model_specs[] = {
+ .midi_capture_ports = 2,
+ .midi_playback_ports = 4,
+ },
+- // This kernel module doesn't support FE-8 because the most of features
+- // can be implemented in userspace without any specific support of this
+- // module.
+ };
+
+ static int identify_model(struct snd_tscm *tscm)
+@@ -211,11 +208,39 @@ static void snd_tscm_remove(struct fw_unit *unit)
+ }
+
+ static const struct ieee1394_device_id snd_tscm_id_table[] = {
++ // Tascam, FW-1884.
++ {
++ .match_flags = IEEE1394_MATCH_VENDOR_ID |
++ IEEE1394_MATCH_SPECIFIER_ID |
++ IEEE1394_MATCH_VERSION,
++ .vendor_id = 0x00022e,
++ .specifier_id = 0x00022e,
++ .version = 0x800000,
++ },
++ // Tascam, FE-8 (.version = 0x800001)
++ // This kernel module doesn't support FE-8 because the most of features
++ // can be implemented in userspace without any specific support of this
++ // module.
++ //
++ // .version = 0x800002 is unknown.
++ //
++ // Tascam, FW-1082.
++ {
++ .match_flags = IEEE1394_MATCH_VENDOR_ID |
++ IEEE1394_MATCH_SPECIFIER_ID |
++ IEEE1394_MATCH_VERSION,
++ .vendor_id = 0x00022e,
++ .specifier_id = 0x00022e,
++ .version = 0x800003,
++ },
++ // Tascam, FW-1804.
+ {
+ .match_flags = IEEE1394_MATCH_VENDOR_ID |
+- IEEE1394_MATCH_SPECIFIER_ID,
++ IEEE1394_MATCH_SPECIFIER_ID |
++ IEEE1394_MATCH_VERSION,
+ .vendor_id = 0x00022e,
+ .specifier_id = 0x00022e,
++ .version = 0x800004,
+ },
+ {}
+ };
+diff --git a/sound/pci/ca0106/ca0106_main.c b/sound/pci/ca0106/ca0106_main.c
+index 70d775ff967eb..c189f70c82cb9 100644
+--- a/sound/pci/ca0106/ca0106_main.c
++++ b/sound/pci/ca0106/ca0106_main.c
+@@ -537,7 +537,8 @@ static int snd_ca0106_pcm_power_dac(struct snd_ca0106 *chip, int channel_id,
+ else
+ /* Power down */
+ chip->spi_dac_reg[reg] |= bit;
+- return snd_ca0106_spi_write(chip, chip->spi_dac_reg[reg]);
++ if (snd_ca0106_spi_write(chip, chip->spi_dac_reg[reg]) != 0)
++ return -ENXIO;
+ }
+ return 0;
+ }
+diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c
+index 1a26940a3fd7c..4c23b169ac67e 100644
+--- a/sound/pci/hda/hda_intel.c
++++ b/sound/pci/hda/hda_intel.c
+@@ -2747,8 +2747,6 @@ static const struct pci_device_id azx_ids[] = {
+ .driver_data = AZX_DRIVER_GENERIC | AZX_DCAPS_PRESET_ATI_HDMI },
+ /* Zhaoxin */
+ { PCI_DEVICE(0x1d17, 0x3288), .driver_data = AZX_DRIVER_ZHAOXIN },
+- /* Loongson */
+- { PCI_DEVICE(0x0014, 0x7a07), .driver_data = AZX_DRIVER_GENERIC },
+ { 0, }
+ };
+ MODULE_DEVICE_TABLE(pci, azx_ids);
+diff --git a/sound/pci/hda/patch_hdmi.c b/sound/pci/hda/patch_hdmi.c
+index f0c6d2907e396..fc22bdc30da3e 100644
+--- a/sound/pci/hda/patch_hdmi.c
++++ b/sound/pci/hda/patch_hdmi.c
+@@ -2737,6 +2737,7 @@ static void i915_pin_cvt_fixup(struct hda_codec *codec,
+ hda_nid_t cvt_nid)
+ {
+ if (per_pin) {
++ haswell_verify_D0(codec, per_pin->cvt_nid, per_pin->pin_nid);
+ snd_hda_set_dev_select(codec, per_pin->pin_nid,
+ per_pin->dev_id);
+ intel_verify_pin_cvt_connect(codec, per_pin);
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index da23c2d4ca51e..0b9907c9cd84f 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -2467,6 +2467,7 @@ static const struct snd_pci_quirk alc882_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x1462, 0x1276, "MSI-GL73", ALC1220_FIXUP_CLEVO_P950),
+ SND_PCI_QUIRK(0x1462, 0x1293, "MSI-GP65", ALC1220_FIXUP_CLEVO_P950),
+ SND_PCI_QUIRK(0x1462, 0x7350, "MSI-7350", ALC889_FIXUP_CD),
++ SND_PCI_QUIRK(0x1462, 0x9c37, "MSI X570-A PRO", ALC1220_FIXUP_CLEVO_P950),
+ SND_PCI_QUIRK(0x1462, 0xda57, "MSI Z270-Gaming", ALC1220_FIXUP_GB_DUAL_CODECS),
+ SND_PCI_QUIRK_VENDOR(0x1462, "MSI", ALC882_FIXUP_GPIO3),
+ SND_PCI_QUIRK(0x147b, 0x107a, "Abit AW9D-MAX", ALC882_FIXUP_ABIT_AW9D_MAX),
+@@ -5879,6 +5880,39 @@ static void alc275_fixup_gpio4_off(struct hda_codec *codec,
+ }
+ }
+
++/* Quirk for Thinkpad X1 7th and 8th Gen
++ * The following fixed routing needed
++ * DAC1 (NID 0x02) -> Speaker (NID 0x14); some eq applied secretly
++ * DAC2 (NID 0x03) -> Bass (NID 0x17) & Headphone (NID 0x21); sharing a DAC
++ * DAC3 (NID 0x06) -> Unused, due to the lack of volume amp
++ */
++static void alc285_fixup_thinkpad_x1_gen7(struct hda_codec *codec,
++ const struct hda_fixup *fix, int action)
++{
++ static const hda_nid_t conn[] = { 0x02, 0x03 }; /* exclude 0x06 */
++ static const hda_nid_t preferred_pairs[] = {
++ 0x14, 0x02, 0x17, 0x03, 0x21, 0x03, 0
++ };
++ struct alc_spec *spec = codec->spec;
++
++ switch (action) {
++ case HDA_FIXUP_ACT_PRE_PROBE:
++ snd_hda_override_conn_list(codec, 0x17, ARRAY_SIZE(conn), conn);
++ spec->gen.preferred_dacs = preferred_pairs;
++ break;
++ case HDA_FIXUP_ACT_BUILD:
++ /* The generic parser creates somewhat unintuitive volume ctls
++ * with the fixed routing above, and the shared DAC2 may be
++ * confusing for PA.
++ * Rename those to unique names so that PA doesn't touch them
++ * and use only Master volume.
++ */
++ rename_ctl(codec, "Front Playback Volume", "DAC1 Playback Volume");
++ rename_ctl(codec, "Bass Speaker Playback Volume", "DAC2 Playback Volume");
++ break;
++ }
++}
++
+ static void alc233_alc662_fixup_lenovo_dual_codecs(struct hda_codec *codec,
+ const struct hda_fixup *fix,
+ int action)
+@@ -6147,6 +6181,7 @@ enum {
+ ALC289_FIXUP_DUAL_SPK,
+ ALC294_FIXUP_SPK2_TO_DAC1,
+ ALC294_FIXUP_ASUS_DUAL_SPK,
++ ALC285_FIXUP_THINKPAD_X1_GEN7,
+ ALC285_FIXUP_THINKPAD_HEADSET_JACK,
+ ALC294_FIXUP_ASUS_HPE,
+ ALC294_FIXUP_ASUS_COEF_1B,
+@@ -7292,11 +7327,17 @@ static const struct hda_fixup alc269_fixups[] = {
+ .chained = true,
+ .chain_id = ALC294_FIXUP_SPK2_TO_DAC1
+ },
++ [ALC285_FIXUP_THINKPAD_X1_GEN7] = {
++ .type = HDA_FIXUP_FUNC,
++ .v.func = alc285_fixup_thinkpad_x1_gen7,
++ .chained = true,
++ .chain_id = ALC269_FIXUP_THINKPAD_ACPI
++ },
+ [ALC285_FIXUP_THINKPAD_HEADSET_JACK] = {
+ .type = HDA_FIXUP_FUNC,
+ .v.func = alc_fixup_headset_jack,
+ .chained = true,
+- .chain_id = ALC285_FIXUP_SPEAKER2_TO_DAC1
++ .chain_id = ALC285_FIXUP_THINKPAD_X1_GEN7
+ },
+ [ALC294_FIXUP_ASUS_HPE] = {
+ .type = HDA_FIXUP_VERBS,
+@@ -7707,7 +7748,8 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x144d, 0xc169, "Samsung Notebook 9 Pen (NP930SBE-K01US)", ALC298_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET),
+ SND_PCI_QUIRK(0x144d, 0xc176, "Samsung Notebook 9 Pro (NP930MBE-K04US)", ALC298_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET),
+ SND_PCI_QUIRK(0x144d, 0xc189, "Samsung Galaxy Flex Book (NT950QCG-X716)", ALC298_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET),
+- SND_PCI_QUIRK(0x144d, 0xc18a, "Samsung Galaxy Book Ion (NT950XCJ-X716A)", ALC298_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET),
++ SND_PCI_QUIRK(0x144d, 0xc18a, "Samsung Galaxy Book Ion (NP930XCJ-K01US)", ALC298_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET),
++ SND_PCI_QUIRK(0x144d, 0xc830, "Samsung Galaxy Book Ion (NT950XCJ-X716A)", ALC298_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET),
+ SND_PCI_QUIRK(0x144d, 0xc740, "Samsung Ativ book 8 (NP870Z5G)", ALC269_FIXUP_ATIV_BOOK_8),
+ SND_PCI_QUIRK(0x144d, 0xc812, "Samsung Notebook Pen S (NT950SBE-X58)", ALC298_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET),
+ SND_PCI_QUIRK(0x1458, 0xfa53, "Gigabyte BXBT-2807", ALC283_FIXUP_HEADSET_MIC),
+diff --git a/sound/usb/pcm.c b/sound/usb/pcm.c
+index eb3cececda794..28506415c7ad5 100644
+--- a/sound/usb/pcm.c
++++ b/sound/usb/pcm.c
+@@ -369,11 +369,13 @@ static int set_sync_ep_implicit_fb_quirk(struct snd_usb_substream *subs,
+ case USB_ID(0x07fd, 0x0008): /* MOTU M Series */
+ case USB_ID(0x31e9, 0x0001): /* Solid State Logic SSL2 */
+ case USB_ID(0x31e9, 0x0002): /* Solid State Logic SSL2+ */
++ case USB_ID(0x0499, 0x172f): /* Steinberg UR22C */
+ case USB_ID(0x0d9a, 0x00df): /* RTX6001 */
+ ep = 0x81;
+ ifnum = 2;
+ goto add_sync_ep_from_ifnum;
+ case USB_ID(0x2b73, 0x000a): /* Pioneer DJ DJM-900NXS2 */
++ case USB_ID(0x2b73, 0x0017): /* Pioneer DJ DJM-250MK2 */
+ ep = 0x82;
+ ifnum = 0;
+ goto add_sync_ep_from_ifnum;
+diff --git a/sound/usb/quirks-table.h b/sound/usb/quirks-table.h
+index 366faaa4ba82c..5410e5ac82f91 100644
+--- a/sound/usb/quirks-table.h
++++ b/sound/usb/quirks-table.h
+@@ -3532,14 +3532,40 @@ AU0828_DEVICE(0x2040, 0x7270, "Hauppauge", "HVR-950Q"),
+ {
+ /*
+ * Pioneer DJ DJM-250MK2
+- * PCM is 8 channels out @ 48 fixed (endpoints 0x01).
+- * The output from computer to the mixer is usable.
++ * PCM is 8 channels out @ 48 fixed (endpoint 0x01)
++ * and 8 channels in @ 48 fixed (endpoint 0x82).
+ *
+- * The input (phono or line to computer) is not working.
+- * It should be at endpoint 0x82 and probably also 8 channels,
+- * but it seems that it works only with Pioneer proprietary software.
+- * Even on officially supported OS, the Audacity was unable to record
+- * and Mixxx to recognize the control vinyls.
++ * Both playback and recording is working, even simultaneously.
++ *
++ * Playback channels could be mapped to:
++ * - CH1
++ * - CH2
++ * - AUX
++ *
++ * Recording channels could be mapped to:
++ * - Post CH1 Fader
++ * - Post CH2 Fader
++ * - Cross Fader A
++ * - Cross Fader B
++ * - MIC
++ * - AUX
++ * - REC OUT
++ *
++ * There is remaining problem with recording directly from PHONO/LINE.
++ * If we map a channel to:
++ * - CH1 Control Tone PHONO
++ * - CH1 Control Tone LINE
++ * - CH2 Control Tone PHONO
++ * - CH2 Control Tone LINE
++ * it is silent.
++ * There is no signal even on other operating systems with official drivers.
++ * The signal appears only when a supported application is started.
++ * This needs to be investigated yet...
++ * (there is quite a lot communication on the USB in both directions)
++ *
++ * In current version this mixer could be used for playback
++ * and for recording from vinyls (through Post CH* Fader)
++ * but not for DVS (Digital Vinyl Systems) like in Mixxx.
+ */
+ USB_DEVICE_VENDOR_SPEC(0x2b73, 0x0017),
+ .driver_info = (unsigned long) &(const struct snd_usb_audio_quirk) {
+@@ -3563,6 +3589,26 @@ AU0828_DEVICE(0x2040, 0x7270, "Hauppauge", "HVR-950Q"),
+ .rate_max = 48000,
+ .nr_rates = 1,
+ .rate_table = (unsigned int[]) { 48000 }
++ }
++ },
++ {
++ .ifnum = 0,
++ .type = QUIRK_AUDIO_FIXED_ENDPOINT,
++ .data = &(const struct audioformat) {
++ .formats = SNDRV_PCM_FMTBIT_S24_3LE,
++ .channels = 8, // inputs
++ .iface = 0,
++ .altsetting = 1,
++ .altset_idx = 1,
++ .endpoint = 0x82,
++ .ep_attr = USB_ENDPOINT_XFER_ISOC|
++ USB_ENDPOINT_SYNC_ASYNC|
++ USB_ENDPOINT_USAGE_IMPLICIT_FB,
++ .rates = SNDRV_PCM_RATE_48000,
++ .rate_min = 48000,
++ .rate_max = 48000,
++ .nr_rates = 1,
++ .rate_table = (unsigned int[]) { 48000 }
+ }
+ },
+ {
+diff --git a/sound/usb/quirks.c b/sound/usb/quirks.c
+index ef1c1cf040b45..bf2d521b6768c 100644
+--- a/sound/usb/quirks.c
++++ b/sound/usb/quirks.c
+@@ -1493,6 +1493,7 @@ void snd_usb_set_format_quirk(struct snd_usb_substream *subs,
+ set_format_emu_quirk(subs, fmt);
+ break;
+ case USB_ID(0x2b73, 0x000a): /* Pioneer DJ DJM-900NXS2 */
++ case USB_ID(0x2b73, 0x0017): /* Pioneer DJ DJM-250MK2 */
+ pioneer_djm_set_format_quirk(subs);
+ break;
+ case USB_ID(0x534d, 0x2109): /* MacroSilicon MS2109 */
+diff --git a/tools/include/uapi/linux/perf_event.h b/tools/include/uapi/linux/perf_event.h
+index 7b2d6fc9e6ed7..bc8c4816ba386 100644
+--- a/tools/include/uapi/linux/perf_event.h
++++ b/tools/include/uapi/linux/perf_event.h
+@@ -1155,7 +1155,7 @@ union perf_mem_data_src {
+
+ #define PERF_MEM_SNOOPX_FWD 0x01 /* forward */
+ /* 1 free */
+-#define PERF_MEM_SNOOPX_SHIFT 37
++#define PERF_MEM_SNOOPX_SHIFT 38
+
+ /* locked instruction */
+ #define PERF_MEM_LOCK_NA 0x01 /* not available */
+diff --git a/tools/perf/Documentation/perf-stat.txt b/tools/perf/Documentation/perf-stat.txt
+index c8209467076b1..d8299b77f5c89 100644
+--- a/tools/perf/Documentation/perf-stat.txt
++++ b/tools/perf/Documentation/perf-stat.txt
+@@ -380,6 +380,9 @@ counts for all hardware threads in a core but show the sum counts per
+ hardware thread. This is essentially a replacement for the any bit and
+ convenient for post processing.
+
++--summary::
++Print summary for interval mode (-I).
++
+ EXAMPLES
+ --------
+
+diff --git a/tools/perf/bench/synthesize.c b/tools/perf/bench/synthesize.c
+index 8d624aea1c5e5..b2924e3181dc3 100644
+--- a/tools/perf/bench/synthesize.c
++++ b/tools/perf/bench/synthesize.c
+@@ -162,8 +162,8 @@ static int do_run_multi_threaded(struct target *target,
+ init_stats(&event_stats);
+ for (i = 0; i < multi_iterations; i++) {
+ session = perf_session__new(NULL, false, NULL);
+- if (!session)
+- return -ENOMEM;
++ if (IS_ERR(session))
++ return PTR_ERR(session);
+
+ atomic_set(&event_count, 0);
+ gettimeofday(&start, NULL);
+diff --git a/tools/perf/builtin-record.c b/tools/perf/builtin-record.c
+index 23ea934f30b34..07313217db4cd 100644
+--- a/tools/perf/builtin-record.c
++++ b/tools/perf/builtin-record.c
+@@ -2417,7 +2417,7 @@ static struct option __record_options[] = {
+ OPT_BOOLEAN(0, "tail-synthesize", &record.opts.tail_synthesize,
+ "synthesize non-sample events at the end of output"),
+ OPT_BOOLEAN(0, "overwrite", &record.opts.overwrite, "use overwrite mode"),
+- OPT_BOOLEAN(0, "no-bpf-event", &record.opts.no_bpf_event, "record bpf events"),
++ OPT_BOOLEAN(0, "no-bpf-event", &record.opts.no_bpf_event, "do not record bpf events"),
+ OPT_BOOLEAN(0, "strict-freq", &record.opts.strict_freq,
+ "Fail if the specified frequency can't be used"),
+ OPT_CALLBACK('F', "freq", &record.opts, "freq or 'max'",
+diff --git a/tools/perf/builtin-sched.c b/tools/perf/builtin-sched.c
+index 459e4229945e4..7b9511e59b434 100644
+--- a/tools/perf/builtin-sched.c
++++ b/tools/perf/builtin-sched.c
+@@ -2575,7 +2575,8 @@ static int timehist_sched_change_event(struct perf_tool *tool,
+ }
+
+ if (!sched->idle_hist || thread->tid == 0) {
+- timehist_update_runtime_stats(tr, t, tprev);
++ if (!cpu_list || test_bit(sample->cpu, cpu_bitmap))
++ timehist_update_runtime_stats(tr, t, tprev);
+
+ if (sched->idle_hist) {
+ struct idle_thread_runtime *itr = (void *)tr;
+@@ -2848,6 +2849,9 @@ static void timehist_print_summary(struct perf_sched *sched,
+
+ printf("\nIdle stats:\n");
+ for (i = 0; i < idle_max_cpu; ++i) {
++ if (cpu_list && !test_bit(i, cpu_bitmap))
++ continue;
++
+ t = idle_threads[i];
+ if (!t)
+ continue;
+diff --git a/tools/perf/builtin-stat.c b/tools/perf/builtin-stat.c
+index 9be020e0098ad..6e2502de755a8 100644
+--- a/tools/perf/builtin-stat.c
++++ b/tools/perf/builtin-stat.c
+@@ -402,7 +402,7 @@ static void read_counters(struct timespec *rs)
+ {
+ struct evsel *counter;
+
+- if (!stat_config.summary && (read_affinity_counters(rs) < 0))
++ if (!stat_config.stop_read_counter && (read_affinity_counters(rs) < 0))
+ return;
+
+ evlist__for_each_entry(evsel_list, counter) {
+@@ -826,9 +826,9 @@ try_again_reset:
+ if (stat_config.walltime_run_table)
+ stat_config.walltime_run[run_idx] = t1 - t0;
+
+- if (interval) {
++ if (interval && stat_config.summary) {
+ stat_config.interval = 0;
+- stat_config.summary = true;
++ stat_config.stop_read_counter = true;
+ init_stats(&walltime_nsecs_stats);
+ update_stats(&walltime_nsecs_stats, t1 - t0);
+
+@@ -1066,6 +1066,8 @@ static struct option stat_options[] = {
+ "Use with 'percore' event qualifier to show the event "
+ "counts of one hardware thread by sum up total hardware "
+ "threads of same physical core"),
++ OPT_BOOLEAN(0, "summary", &stat_config.summary,
++ "print summary for interval mode"),
+ #ifdef HAVE_LIBPFM
+ OPT_CALLBACK(0, "pfm-events", &evsel_list, "event",
+ "libpfm4 event selector. use 'perf list' to list available events",
+diff --git a/tools/perf/builtin-top.c b/tools/perf/builtin-top.c
+index 13889d73f8dd5..c665d69c0651d 100644
+--- a/tools/perf/builtin-top.c
++++ b/tools/perf/builtin-top.c
+@@ -1746,6 +1746,7 @@ int cmd_top(int argc, const char **argv)
+ goto out_delete_evlist;
+ }
+
++#ifdef HAVE_LIBBPF_SUPPORT
+ if (!top.record_opts.no_bpf_event) {
+ top.sb_evlist = evlist__new();
+
+@@ -1759,6 +1760,7 @@ int cmd_top(int argc, const char **argv)
+ goto out_delete_evlist;
+ }
+ }
++#endif
+
+ if (perf_evlist__start_sb_thread(top.sb_evlist, target)) {
+ pr_debug("Couldn't start the BPF side band thread:\nBPF programs starting from now on won't be annotatable\n");
+diff --git a/tools/perf/pmu-events/jevents.c b/tools/perf/pmu-events/jevents.c
+index fa86c5f997cc5..fc9c158bfa134 100644
+--- a/tools/perf/pmu-events/jevents.c
++++ b/tools/perf/pmu-events/jevents.c
+@@ -137,7 +137,7 @@ static char *fixregex(char *s)
+ return s;
+
+ /* allocate space for a new string */
+- fixed = (char *) malloc(len + 1);
++ fixed = (char *) malloc(len + esc_count + 1);
+ if (!fixed)
+ return NULL;
+
+diff --git a/tools/perf/ui/browsers/hists.c b/tools/perf/ui/browsers/hists.c
+index be9c4c0549bc8..a07626f072087 100644
+--- a/tools/perf/ui/browsers/hists.c
++++ b/tools/perf/ui/browsers/hists.c
+@@ -3629,8 +3629,8 @@ int perf_evlist__tui_browse_hists(struct evlist *evlist, const char *help,
+ {
+ int nr_entries = evlist->core.nr_entries;
+
+-single_entry:
+ if (perf_evlist__single_entry(evlist)) {
++single_entry: {
+ struct evsel *first = evlist__first(evlist);
+
+ return perf_evsel__hists_browse(first, nr_entries, help,
+@@ -3638,6 +3638,7 @@ single_entry:
+ env, warn_lost_event,
+ annotation_opts);
+ }
++ }
+
+ if (symbol_conf.event_group) {
+ struct evsel *pos;
+diff --git a/tools/perf/util/cs-etm.c b/tools/perf/util/cs-etm.c
+index c283223fb31f2..a2a369e2fbb67 100644
+--- a/tools/perf/util/cs-etm.c
++++ b/tools/perf/util/cs-etm.c
+@@ -1344,8 +1344,15 @@ static int cs_etm__synth_events(struct cs_etm_auxtrace *etm,
+ attr.sample_type &= ~(u64)PERF_SAMPLE_ADDR;
+ }
+
+- if (etm->synth_opts.last_branch)
++ if (etm->synth_opts.last_branch) {
+ attr.sample_type |= PERF_SAMPLE_BRANCH_STACK;
++ /*
++ * We don't use the hardware index, but the sample generation
++ * code uses the new format branch_stack with this field,
++ * so the event attributes must indicate that it's present.
++ */
++ attr.branch_sample_type |= PERF_SAMPLE_BRANCH_HW_INDEX;
++ }
+
+ if (etm->synth_opts.instructions) {
+ attr.config = PERF_COUNT_HW_INSTRUCTIONS;
+diff --git a/tools/perf/util/intel-pt.c b/tools/perf/util/intel-pt.c
+index cb3c1e569a2db..9357b5f62c273 100644
+--- a/tools/perf/util/intel-pt.c
++++ b/tools/perf/util/intel-pt.c
+@@ -2913,8 +2913,15 @@ static int intel_pt_synth_events(struct intel_pt *pt,
+
+ if (pt->synth_opts.callchain)
+ attr.sample_type |= PERF_SAMPLE_CALLCHAIN;
+- if (pt->synth_opts.last_branch)
++ if (pt->synth_opts.last_branch) {
+ attr.sample_type |= PERF_SAMPLE_BRANCH_STACK;
++ /*
++ * We don't use the hardware index, but the sample generation
++ * code uses the new format branch_stack with this field,
++ * so the event attributes must indicate that it's present.
++ */
++ attr.branch_sample_type |= PERF_SAMPLE_BRANCH_HW_INDEX;
++ }
+
+ if (pt->synth_opts.instructions) {
+ attr.config = PERF_COUNT_HW_INSTRUCTIONS;
+diff --git a/tools/perf/util/stat.h b/tools/perf/util/stat.h
+index f75ae679eb281..d8a9dd786bf43 100644
+--- a/tools/perf/util/stat.h
++++ b/tools/perf/util/stat.h
+@@ -113,6 +113,7 @@ struct perf_stat_config {
+ bool summary;
+ bool metric_no_group;
+ bool metric_no_merge;
++ bool stop_read_counter;
+ FILE *output;
+ unsigned int interval;
+ unsigned int timeout;
+diff --git a/tools/testing/selftests/bpf/test_maps.c b/tools/testing/selftests/bpf/test_maps.c
+index 754cf611723ee..0d92ebcb335d1 100644
+--- a/tools/testing/selftests/bpf/test_maps.c
++++ b/tools/testing/selftests/bpf/test_maps.c
+@@ -1274,6 +1274,8 @@ static void __run_parallel(unsigned int tasks,
+ pid_t pid[tasks];
+ int i;
+
++ fflush(stdout);
++
+ for (i = 0; i < tasks; i++) {
+ pid[i] = fork();
+ if (pid[i] == 0) {
* [gentoo-commits] proj/linux-patches:5.8 commit in: /
@ 2020-09-12 18:14 Mike Pagano
From: Mike Pagano @ 2020-09-12 18:14 UTC
To: gentoo-commits
commit: ccb74324f775c901be1e1ddef5b15982a7d649e0
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sat Sep 12 18:14:03 2020 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sat Sep 12 18:14:03 2020 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=ccb74324
Linux patch 5.8.9
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1008_linux-5.8.9.patch | 952 +++++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 956 insertions(+)
diff --git a/0000_README b/0000_README
index 93860e0..96ae239 100644
--- a/0000_README
+++ b/0000_README
@@ -75,6 +75,10 @@ Patch: 1007_linux-5.8.8.patch
From: http://www.kernel.org
Desc: Linux 5.8.8
+Patch: 1008_linux-5.8.9.patch
+From: http://www.kernel.org
+Desc: Linux 5.8.9
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1008_linux-5.8.9.patch b/1008_linux-5.8.9.patch
new file mode 100644
index 0000000..55b6aa0
--- /dev/null
+++ b/1008_linux-5.8.9.patch
@@ -0,0 +1,952 @@
+diff --git a/Makefile b/Makefile
+index dba4d8f2f7862..36eab48d1d4a6 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 8
+-SUBLEVEL = 8
++SUBLEVEL = 9
+ EXTRAVERSION =
+ NAME = Kleptomaniac Octopus
+
+diff --git a/drivers/infiniband/core/cma.c b/drivers/infiniband/core/cma.c
+index c30cf5307ce3e..26de0dab60bbb 100644
+--- a/drivers/infiniband/core/cma.c
++++ b/drivers/infiniband/core/cma.c
+@@ -428,19 +428,6 @@ static int cma_comp_exch(struct rdma_id_private *id_priv,
+ return ret;
+ }
+
+-static enum rdma_cm_state cma_exch(struct rdma_id_private *id_priv,
+- enum rdma_cm_state exch)
+-{
+- unsigned long flags;
+- enum rdma_cm_state old;
+-
+- spin_lock_irqsave(&id_priv->lock, flags);
+- old = id_priv->state;
+- id_priv->state = exch;
+- spin_unlock_irqrestore(&id_priv->lock, flags);
+- return old;
+-}
+-
+ static inline u8 cma_get_ip_ver(const struct cma_hdr *hdr)
+ {
+ return hdr->ip_version >> 4;
+@@ -1829,23 +1816,11 @@ static void cma_leave_mc_groups(struct rdma_id_private *id_priv)
+ }
+ }
+
+-void rdma_destroy_id(struct rdma_cm_id *id)
++static void _destroy_id(struct rdma_id_private *id_priv,
++ enum rdma_cm_state state)
+ {
+- struct rdma_id_private *id_priv;
+- enum rdma_cm_state state;
+-
+- id_priv = container_of(id, struct rdma_id_private, id);
+- trace_cm_id_destroy(id_priv);
+- state = cma_exch(id_priv, RDMA_CM_DESTROYING);
+ cma_cancel_operation(id_priv, state);
+
+- /*
+- * Wait for any active callback to finish. New callbacks will find
+- * the id_priv state set to destroying and abort.
+- */
+- mutex_lock(&id_priv->handler_mutex);
+- mutex_unlock(&id_priv->handler_mutex);
+-
+ rdma_restrack_del(&id_priv->res);
+ if (id_priv->cma_dev) {
+ if (rdma_cap_ib_cm(id_priv->id.device, 1)) {
+@@ -1874,6 +1849,42 @@ void rdma_destroy_id(struct rdma_cm_id *id)
+ put_net(id_priv->id.route.addr.dev_addr.net);
+ kfree(id_priv);
+ }
++
++/*
++ * destroy an ID from within the handler_mutex. This ensures that no other
++ * handlers can start running concurrently.
++ */
++static void destroy_id_handler_unlock(struct rdma_id_private *id_priv)
++ __releases(&idprv->handler_mutex)
++{
++ enum rdma_cm_state state;
++ unsigned long flags;
++
++ trace_cm_id_destroy(id_priv);
++
++ /*
++ * Setting the state to destroyed under the handler mutex provides a
++ * fence against calling handler callbacks. If this is invoked due to
++ * the failure of a handler callback then it guarentees that no future
++ * handlers will be called.
++ */
++ lockdep_assert_held(&id_priv->handler_mutex);
++ spin_lock_irqsave(&id_priv->lock, flags);
++ state = id_priv->state;
++ id_priv->state = RDMA_CM_DESTROYING;
++ spin_unlock_irqrestore(&id_priv->lock, flags);
++ mutex_unlock(&id_priv->handler_mutex);
++ _destroy_id(id_priv, state);
++}
++
++void rdma_destroy_id(struct rdma_cm_id *id)
++{
++ struct rdma_id_private *id_priv =
++ container_of(id, struct rdma_id_private, id);
++
++ mutex_lock(&id_priv->handler_mutex);
++ destroy_id_handler_unlock(id_priv);
++}
+ EXPORT_SYMBOL(rdma_destroy_id);
+
+ static int cma_rep_recv(struct rdma_id_private *id_priv)
+@@ -1925,6 +1936,8 @@ static int cma_cm_event_handler(struct rdma_id_private *id_priv,
+ {
+ int ret;
+
++ lockdep_assert_held(&id_priv->handler_mutex);
++
+ trace_cm_event_handler(id_priv, event);
+ ret = id_priv->id.event_handler(&id_priv->id, event);
+ trace_cm_event_done(id_priv, event, ret);
+@@ -1936,7 +1949,7 @@ static int cma_ib_handler(struct ib_cm_id *cm_id,
+ {
+ struct rdma_id_private *id_priv = cm_id->context;
+ struct rdma_cm_event event = {};
+- int ret = 0;
++ int ret;
+
+ mutex_lock(&id_priv->handler_mutex);
+ if ((ib_event->event != IB_CM_TIMEWAIT_EXIT &&
+@@ -2005,14 +2018,12 @@ static int cma_ib_handler(struct ib_cm_id *cm_id,
+ if (ret) {
+ /* Destroy the CM ID by returning a non-zero value. */
+ id_priv->cm_id.ib = NULL;
+- cma_exch(id_priv, RDMA_CM_DESTROYING);
+- mutex_unlock(&id_priv->handler_mutex);
+- rdma_destroy_id(&id_priv->id);
++ destroy_id_handler_unlock(id_priv);
+ return ret;
+ }
+ out:
+ mutex_unlock(&id_priv->handler_mutex);
+- return ret;
++ return 0;
+ }
+
+ static struct rdma_id_private *
+@@ -2174,7 +2185,7 @@ static int cma_ib_req_handler(struct ib_cm_id *cm_id,
+ mutex_lock(&listen_id->handler_mutex);
+ if (listen_id->state != RDMA_CM_LISTEN) {
+ ret = -ECONNABORTED;
+- goto err1;
++ goto err_unlock;
+ }
+
+ offset = cma_user_data_offset(listen_id);
+@@ -2191,55 +2202,38 @@ static int cma_ib_req_handler(struct ib_cm_id *cm_id,
+ }
+ if (!conn_id) {
+ ret = -ENOMEM;
+- goto err1;
++ goto err_unlock;
+ }
+
+ mutex_lock_nested(&conn_id->handler_mutex, SINGLE_DEPTH_NESTING);
+ ret = cma_ib_acquire_dev(conn_id, listen_id, &req);
+- if (ret)
+- goto err2;
++ if (ret) {
++ destroy_id_handler_unlock(conn_id);
++ goto err_unlock;
++ }
+
+ conn_id->cm_id.ib = cm_id;
+ cm_id->context = conn_id;
+ cm_id->cm_handler = cma_ib_handler;
+
+- /*
+- * Protect against the user destroying conn_id from another thread
+- * until we're done accessing it.
+- */
+- cma_id_get(conn_id);
+ ret = cma_cm_event_handler(conn_id, &event);
+- if (ret)
+- goto err3;
+- /*
+- * Acquire mutex to prevent user executing rdma_destroy_id()
+- * while we're accessing the cm_id.
+- */
+- mutex_lock(&lock);
++ if (ret) {
++ /* Destroy the CM ID by returning a non-zero value. */
++ conn_id->cm_id.ib = NULL;
++ mutex_unlock(&listen_id->handler_mutex);
++ destroy_id_handler_unlock(conn_id);
++ goto net_dev_put;
++ }
++
+ if (cma_comp(conn_id, RDMA_CM_CONNECT) &&
+ (conn_id->id.qp_type != IB_QPT_UD)) {
+ trace_cm_send_mra(cm_id->context);
+ ib_send_cm_mra(cm_id, CMA_CM_MRA_SETTING, NULL, 0);
+ }
+- mutex_unlock(&lock);
+ mutex_unlock(&conn_id->handler_mutex);
+- mutex_unlock(&listen_id->handler_mutex);
+- cma_id_put(conn_id);
+- if (net_dev)
+- dev_put(net_dev);
+- return 0;
+
+-err3:
+- cma_id_put(conn_id);
+- /* Destroy the CM ID by returning a non-zero value. */
+- conn_id->cm_id.ib = NULL;
+-err2:
+- cma_exch(conn_id, RDMA_CM_DESTROYING);
+- mutex_unlock(&conn_id->handler_mutex);
+-err1:
++err_unlock:
+ mutex_unlock(&listen_id->handler_mutex);
+- if (conn_id)
+- rdma_destroy_id(&conn_id->id);
+
+ net_dev_put:
+ if (net_dev)
+@@ -2339,9 +2333,7 @@ static int cma_iw_handler(struct iw_cm_id *iw_id, struct iw_cm_event *iw_event)
+ if (ret) {
+ /* Destroy the CM ID by returning a non-zero value. */
+ id_priv->cm_id.iw = NULL;
+- cma_exch(id_priv, RDMA_CM_DESTROYING);
+- mutex_unlock(&id_priv->handler_mutex);
+- rdma_destroy_id(&id_priv->id);
++ destroy_id_handler_unlock(id_priv);
+ return ret;
+ }
+
+@@ -2388,16 +2380,16 @@ static int iw_conn_req_handler(struct iw_cm_id *cm_id,
+
+ ret = rdma_translate_ip(laddr, &conn_id->id.route.addr.dev_addr);
+ if (ret) {
+- mutex_unlock(&conn_id->handler_mutex);
+- rdma_destroy_id(new_cm_id);
+- goto out;
++ mutex_unlock(&listen_id->handler_mutex);
++ destroy_id_handler_unlock(conn_id);
++ return ret;
+ }
+
+ ret = cma_iw_acquire_dev(conn_id, listen_id);
+ if (ret) {
+- mutex_unlock(&conn_id->handler_mutex);
+- rdma_destroy_id(new_cm_id);
+- goto out;
++ mutex_unlock(&listen_id->handler_mutex);
++ destroy_id_handler_unlock(conn_id);
++ return ret;
+ }
+
+ conn_id->cm_id.iw = cm_id;
+@@ -2407,25 +2399,16 @@ static int iw_conn_req_handler(struct iw_cm_id *cm_id,
+ memcpy(cma_src_addr(conn_id), laddr, rdma_addr_size(laddr));
+ memcpy(cma_dst_addr(conn_id), raddr, rdma_addr_size(raddr));
+
+- /*
+- * Protect against the user destroying conn_id from another thread
+- * until we're done accessing it.
+- */
+- cma_id_get(conn_id);
+ ret = cma_cm_event_handler(conn_id, &event);
+ if (ret) {
+ /* User wants to destroy the CM ID */
+ conn_id->cm_id.iw = NULL;
+- cma_exch(conn_id, RDMA_CM_DESTROYING);
+- mutex_unlock(&conn_id->handler_mutex);
+ mutex_unlock(&listen_id->handler_mutex);
+- cma_id_put(conn_id);
+- rdma_destroy_id(&conn_id->id);
++ destroy_id_handler_unlock(conn_id);
+ return ret;
+ }
+
+ mutex_unlock(&conn_id->handler_mutex);
+- cma_id_put(conn_id);
+
+ out:
+ mutex_unlock(&listen_id->handler_mutex);
+@@ -2482,6 +2465,10 @@ static int cma_listen_handler(struct rdma_cm_id *id,
+ {
+ struct rdma_id_private *id_priv = id->context;
+
++ /* Listening IDs are always destroyed on removal */
++ if (event->event == RDMA_CM_EVENT_DEVICE_REMOVAL)
++ return -1;
++
+ id->context = id_priv->id.context;
+ id->event_handler = id_priv->id.event_handler;
+ trace_cm_event_handler(id_priv, event);
+@@ -2657,21 +2644,21 @@ static void cma_work_handler(struct work_struct *_work)
+ {
+ struct cma_work *work = container_of(_work, struct cma_work, work);
+ struct rdma_id_private *id_priv = work->id;
+- int destroy = 0;
+
+ mutex_lock(&id_priv->handler_mutex);
+ if (!cma_comp_exch(id_priv, work->old_state, work->new_state))
+- goto out;
++ goto out_unlock;
+
+ if (cma_cm_event_handler(id_priv, &work->event)) {
+- cma_exch(id_priv, RDMA_CM_DESTROYING);
+- destroy = 1;
++ cma_id_put(id_priv);
++ destroy_id_handler_unlock(id_priv);
++ goto out_free;
+ }
+-out:
++
++out_unlock:
+ mutex_unlock(&id_priv->handler_mutex);
+ cma_id_put(id_priv);
+- if (destroy)
+- rdma_destroy_id(&id_priv->id);
++out_free:
+ kfree(work);
+ }
+
+@@ -2679,23 +2666,22 @@ static void cma_ndev_work_handler(struct work_struct *_work)
+ {
+ struct cma_ndev_work *work = container_of(_work, struct cma_ndev_work, work);
+ struct rdma_id_private *id_priv = work->id;
+- int destroy = 0;
+
+ mutex_lock(&id_priv->handler_mutex);
+ if (id_priv->state == RDMA_CM_DESTROYING ||
+ id_priv->state == RDMA_CM_DEVICE_REMOVAL)
+- goto out;
++ goto out_unlock;
+
+ if (cma_cm_event_handler(id_priv, &work->event)) {
+- cma_exch(id_priv, RDMA_CM_DESTROYING);
+- destroy = 1;
++ cma_id_put(id_priv);
++ destroy_id_handler_unlock(id_priv);
++ goto out_free;
+ }
+
+-out:
++out_unlock:
+ mutex_unlock(&id_priv->handler_mutex);
+ cma_id_put(id_priv);
+- if (destroy)
+- rdma_destroy_id(&id_priv->id);
++out_free:
+ kfree(work);
+ }
+
+@@ -3171,9 +3157,7 @@ static void addr_handler(int status, struct sockaddr *src_addr,
+ event.event = RDMA_CM_EVENT_ADDR_RESOLVED;
+
+ if (cma_cm_event_handler(id_priv, &event)) {
+- cma_exch(id_priv, RDMA_CM_DESTROYING);
+- mutex_unlock(&id_priv->handler_mutex);
+- rdma_destroy_id(&id_priv->id);
++ destroy_id_handler_unlock(id_priv);
+ return;
+ }
+ out:
+@@ -3790,7 +3774,7 @@ static int cma_sidr_rep_handler(struct ib_cm_id *cm_id,
+ struct rdma_cm_event event = {};
+ const struct ib_cm_sidr_rep_event_param *rep =
+ &ib_event->param.sidr_rep_rcvd;
+- int ret = 0;
++ int ret;
+
+ mutex_lock(&id_priv->handler_mutex);
+ if (id_priv->state != RDMA_CM_CONNECT)
+@@ -3840,14 +3824,12 @@ static int cma_sidr_rep_handler(struct ib_cm_id *cm_id,
+ if (ret) {
+ /* Destroy the CM ID by returning a non-zero value. */
+ id_priv->cm_id.ib = NULL;
+- cma_exch(id_priv, RDMA_CM_DESTROYING);
+- mutex_unlock(&id_priv->handler_mutex);
+- rdma_destroy_id(&id_priv->id);
++ destroy_id_handler_unlock(id_priv);
+ return ret;
+ }
+ out:
+ mutex_unlock(&id_priv->handler_mutex);
+- return ret;
++ return 0;
+ }
+
+ static int cma_resolve_ib_udp(struct rdma_id_private *id_priv,
+@@ -4372,9 +4354,7 @@ static int cma_ib_mc_handler(int status, struct ib_sa_multicast *multicast)
+
+ rdma_destroy_ah_attr(&event.param.ud.ah_attr);
+ if (ret) {
+- cma_exch(id_priv, RDMA_CM_DESTROYING);
+- mutex_unlock(&id_priv->handler_mutex);
+- rdma_destroy_id(&id_priv->id);
++ destroy_id_handler_unlock(id_priv);
+ return 0;
+ }
+
+@@ -4789,50 +4769,59 @@ free_cma_dev:
+ return ret;
+ }
+
+-static int cma_remove_id_dev(struct rdma_id_private *id_priv)
++static void cma_send_device_removal_put(struct rdma_id_private *id_priv)
+ {
+- struct rdma_cm_event event = {};
++ struct rdma_cm_event event = { .event = RDMA_CM_EVENT_DEVICE_REMOVAL };
+ enum rdma_cm_state state;
+- int ret = 0;
+-
+- /* Record that we want to remove the device */
+- state = cma_exch(id_priv, RDMA_CM_DEVICE_REMOVAL);
+- if (state == RDMA_CM_DESTROYING)
+- return 0;
++ unsigned long flags;
+
+- cma_cancel_operation(id_priv, state);
+ mutex_lock(&id_priv->handler_mutex);
++ /* Record that we want to remove the device */
++ spin_lock_irqsave(&id_priv->lock, flags);
++ state = id_priv->state;
++ if (state == RDMA_CM_DESTROYING || state == RDMA_CM_DEVICE_REMOVAL) {
++ spin_unlock_irqrestore(&id_priv->lock, flags);
++ mutex_unlock(&id_priv->handler_mutex);
++ cma_id_put(id_priv);
++ return;
++ }
++ id_priv->state = RDMA_CM_DEVICE_REMOVAL;
++ spin_unlock_irqrestore(&id_priv->lock, flags);
+
+- /* Check for destruction from another callback. */
+- if (!cma_comp(id_priv, RDMA_CM_DEVICE_REMOVAL))
+- goto out;
+-
+- event.event = RDMA_CM_EVENT_DEVICE_REMOVAL;
+- ret = cma_cm_event_handler(id_priv, &event);
+-out:
++ if (cma_cm_event_handler(id_priv, &event)) {
++ /*
++ * At this point the ULP promises it won't call
++ * rdma_destroy_id() concurrently
++ */
++ cma_id_put(id_priv);
++ mutex_unlock(&id_priv->handler_mutex);
++ trace_cm_id_destroy(id_priv);
++ _destroy_id(id_priv, state);
++ return;
++ }
+ mutex_unlock(&id_priv->handler_mutex);
+- return ret;
++
++ /*
++ * If this races with destroy then the thread that first assigns state
++ * to a destroying does the cancel.
++ */
++ cma_cancel_operation(id_priv, state);
++ cma_id_put(id_priv);
+ }
+
+ static void cma_process_remove(struct cma_device *cma_dev)
+ {
+- struct rdma_id_private *id_priv;
+- int ret;
+-
+ mutex_lock(&lock);
+ while (!list_empty(&cma_dev->id_list)) {
+- id_priv = list_entry(cma_dev->id_list.next,
+- struct rdma_id_private, list);
++ struct rdma_id_private *id_priv = list_first_entry(
++ &cma_dev->id_list, struct rdma_id_private, list);
+
+ list_del(&id_priv->listen_list);
+ list_del_init(&id_priv->list);
+ cma_id_get(id_priv);
+ mutex_unlock(&lock);
+
+- ret = id_priv->internal_id ? 1 : cma_remove_id_dev(id_priv);
+- cma_id_put(id_priv);
+- if (ret)
+- rdma_destroy_id(&id_priv->id);
++ cma_send_device_removal_put(id_priv);
+
+ mutex_lock(&lock);
+ }
+diff --git a/drivers/net/usb/dm9601.c b/drivers/net/usb/dm9601.c
+index b91f92e4e5f22..915ac75b55fc7 100644
+--- a/drivers/net/usb/dm9601.c
++++ b/drivers/net/usb/dm9601.c
+@@ -625,6 +625,10 @@ static const struct usb_device_id products[] = {
+ USB_DEVICE(0x0a46, 0x1269), /* DM9621A USB to Fast Ethernet Adapter */
+ .driver_info = (unsigned long)&dm9601_info,
+ },
++ {
++ USB_DEVICE(0x0586, 0x3427), /* ZyXEL Keenetic Plus DSL xDSL modem */
++ .driver_info = (unsigned long)&dm9601_info,
++ },
+ {}, // END
+ };
+
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+index 38f3ec15ba3b1..d05023ca74bdc 100644
+--- a/fs/io_uring.c
++++ b/fs/io_uring.c
+@@ -7601,6 +7601,28 @@ static bool io_match_link(struct io_kiocb *preq, struct io_kiocb *req)
+ return false;
+ }
+
++static inline bool io_match_files(struct io_kiocb *req,
++ struct files_struct *files)
++{
++ return (req->flags & REQ_F_WORK_INITIALIZED) && req->work.files == files;
++}
++
++static bool io_match_link_files(struct io_kiocb *req,
++ struct files_struct *files)
++{
++ struct io_kiocb *link;
++
++ if (io_match_files(req, files))
++ return true;
++ if (req->flags & REQ_F_LINK_HEAD) {
++ list_for_each_entry(link, &req->link_list, link_list) {
++ if (io_match_files(link, files))
++ return true;
++ }
++ }
++ return false;
++}
++
+ /*
+ * We're looking to cancel 'req' because it's holding on to our files, but
+ * 'req' could be a link to another request. See if it is, and cancel that
+@@ -7675,12 +7697,37 @@ static void io_attempt_cancel(struct io_ring_ctx *ctx, struct io_kiocb *req)
+ io_timeout_remove_link(ctx, req);
+ }
+
++static void io_cancel_defer_files(struct io_ring_ctx *ctx,
++ struct files_struct *files)
++{
++ struct io_kiocb *req = NULL;
++ LIST_HEAD(list);
++
++ spin_lock_irq(&ctx->completion_lock);
++ list_for_each_entry_reverse(req, &ctx->defer_list, list) {
++ if (io_match_link_files(req, files)) {
++ list_cut_position(&list, &ctx->defer_list, &req->list);
++ break;
++ }
++ }
++ spin_unlock_irq(&ctx->completion_lock);
++
++ while (!list_empty(&list)) {
++ req = list_first_entry(&list, struct io_kiocb, list);
++ list_del_init(&req->list);
++ req_set_fail_links(req);
++ io_cqring_add_event(req, -ECANCELED);
++ io_double_put_req(req);
++ }
++}
++
+ static void io_uring_cancel_files(struct io_ring_ctx *ctx,
+ struct files_struct *files)
+ {
+ if (list_empty_careful(&ctx->inflight_list))
+ return;
+
++ io_cancel_defer_files(ctx, files);
+ /* cancel all at once, should be faster than doing it one by one*/
+ io_wq_cancel_cb(ctx->io_wq, io_wq_files_match, files, true);
+
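[Editorial note: the io_cancel_defer_files() hunk above walks ctx->defer_list in reverse and uses list_cut_position() to splice off everything up to and including the last request whose files match. A minimal Python sketch of that list-cutting semantics follows — the request dicts are illustrative stand-ins, not the kernel structures, and this is not part of the patch:]

```python
def cancel_defer_files(defer_list, files):
    """Mimic io_cancel_defer_files(): cut the deferred list at the
    last entry matching `files` and return the cancelled prefix."""
    cut = 0
    # Reverse scan: the first match seen from the tail marks the cut point,
    # mirroring list_for_each_entry_reverse() + list_cut_position().
    for i in range(len(defer_list) - 1, -1, -1):
        if defer_list[i]["files"] == files:
            cut = i + 1
            break
    cancelled = defer_list[:cut]   # these get completed with -ECANCELED
    del defer_list[:cut]           # the remainder stays deferred
    return cancelled

reqs = [{"id": 1, "files": "A"}, {"id": 2, "files": "B"}, {"id": 3, "files": "A"}]
gone = cancel_defer_files(reqs, "A")
```

Note that request 2 is cancelled too even though its files do not match: everything queued before the last matching request is cut, preserving FIFO ordering of the defer list.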
+diff --git a/net/core/dev.c b/net/core/dev.c
+index 7a774ebf64e26..5bd0b550893fb 100644
+--- a/net/core/dev.c
++++ b/net/core/dev.c
+@@ -6609,12 +6609,13 @@ void netif_napi_add(struct net_device *dev, struct napi_struct *napi,
+ netdev_err_once(dev, "%s() called with weight %d\n", __func__,
+ weight);
+ napi->weight = weight;
+- list_add(&napi->dev_list, &dev->napi_list);
+ napi->dev = dev;
+ #ifdef CONFIG_NETPOLL
+ napi->poll_owner = -1;
+ #endif
+ set_bit(NAPI_STATE_SCHED, &napi->state);
++ set_bit(NAPI_STATE_NPSVC, &napi->state);
++ list_add_rcu(&napi->dev_list, &dev->napi_list);
+ napi_hash_add(napi);
+ }
+ EXPORT_SYMBOL(netif_napi_add);
+diff --git a/net/core/netpoll.c b/net/core/netpoll.c
+index 093e90e52bc25..2338753e936b7 100644
+--- a/net/core/netpoll.c
++++ b/net/core/netpoll.c
+@@ -162,7 +162,7 @@ static void poll_napi(struct net_device *dev)
+ struct napi_struct *napi;
+ int cpu = smp_processor_id();
+
+- list_for_each_entry(napi, &dev->napi_list, dev_list) {
++ list_for_each_entry_rcu(napi, &dev->napi_list, dev_list) {
+ if (cmpxchg(&napi->poll_owner, -1, cpu) == -1) {
+ poll_one_napi(napi);
+ smp_store_release(&napi->poll_owner, -1);
+diff --git a/net/ipv4/fib_trie.c b/net/ipv4/fib_trie.c
+index 3c65f71d0e820..6734bab26386b 100644
+--- a/net/ipv4/fib_trie.c
++++ b/net/ipv4/fib_trie.c
+@@ -2121,7 +2121,8 @@ void fib_info_notify_update(struct net *net, struct nl_info *info)
+ struct hlist_head *head = &net->ipv4.fib_table_hash[h];
+ struct fib_table *tb;
+
+- hlist_for_each_entry_rcu(tb, head, tb_hlist)
++ hlist_for_each_entry_rcu(tb, head, tb_hlist,
++ lockdep_rtnl_is_held())
+ __fib_info_notify_update(net, tb, info);
+ }
+ }
+diff --git a/net/ipv6/sysctl_net_ipv6.c b/net/ipv6/sysctl_net_ipv6.c
+index fac2135aa47b6..5b60a4bdd36af 100644
+--- a/net/ipv6/sysctl_net_ipv6.c
++++ b/net/ipv6/sysctl_net_ipv6.c
+@@ -21,6 +21,7 @@
+ #include <net/calipso.h>
+ #endif
+
++static int two = 2;
+ static int flowlabel_reflect_max = 0x7;
+ static int auto_flowlabels_min;
+ static int auto_flowlabels_max = IP6_AUTO_FLOW_LABEL_MAX;
+@@ -150,7 +151,7 @@ static struct ctl_table ipv6_table_template[] = {
+ .mode = 0644,
+ .proc_handler = proc_rt6_multipath_hash_policy,
+ .extra1 = SYSCTL_ZERO,
+- .extra2 = SYSCTL_ONE,
++ .extra2 = &two,
+ },
+ {
+ .procname = "seg6_flowlabel",
+diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
+index c0abe738e7d31..939e445d5188c 100644
+--- a/net/mptcp/protocol.c
++++ b/net/mptcp/protocol.c
+@@ -772,7 +772,6 @@ fallback:
+ restart:
+ mptcp_clean_una(sk);
+
+-wait_for_sndbuf:
+ __mptcp_flush_join_list(msk);
+ ssk = mptcp_subflow_get_send(msk);
+ while (!sk_stream_memory_free(sk) ||
+@@ -873,7 +872,7 @@ wait_for_sndbuf:
+ */
+ mptcp_set_timeout(sk, ssk);
+ release_sock(ssk);
+- goto wait_for_sndbuf;
++ goto restart;
+ }
+ }
+ }
+diff --git a/net/netlabel/netlabel_domainhash.c b/net/netlabel/netlabel_domainhash.c
+index a1f2320ecc16d..785d13b8b5748 100644
+--- a/net/netlabel/netlabel_domainhash.c
++++ b/net/netlabel/netlabel_domainhash.c
+@@ -85,6 +85,7 @@ static void netlbl_domhsh_free_entry(struct rcu_head *entry)
+ kfree(netlbl_domhsh_addr6_entry(iter6));
+ }
+ #endif /* IPv6 */
++ kfree(ptr->def.addrsel);
+ }
+ kfree(ptr->domain);
+ kfree(ptr);
+@@ -537,6 +538,8 @@ int netlbl_domhsh_add(struct netlbl_dom_map *entry,
+ goto add_return;
+ }
+ #endif /* IPv6 */
++ /* cleanup the new entry since we've moved everything over */
++ netlbl_domhsh_free_entry(&entry->rcu);
+ } else
+ ret_val = -EINVAL;
+
+@@ -580,6 +583,12 @@ int netlbl_domhsh_remove_entry(struct netlbl_dom_map *entry,
+ {
+ int ret_val = 0;
+ struct audit_buffer *audit_buf;
++ struct netlbl_af4list *iter4;
++ struct netlbl_domaddr4_map *map4;
++#if IS_ENABLED(CONFIG_IPV6)
++ struct netlbl_af6list *iter6;
++ struct netlbl_domaddr6_map *map6;
++#endif /* IPv6 */
+
+ if (entry == NULL)
+ return -ENOENT;
+@@ -597,6 +606,9 @@ int netlbl_domhsh_remove_entry(struct netlbl_dom_map *entry,
+ ret_val = -ENOENT;
+ spin_unlock(&netlbl_domhsh_lock);
+
++ if (ret_val)
++ return ret_val;
++
+ audit_buf = netlbl_audit_start_common(AUDIT_MAC_MAP_DEL, audit_info);
+ if (audit_buf != NULL) {
+ audit_log_format(audit_buf,
+@@ -606,40 +618,29 @@ int netlbl_domhsh_remove_entry(struct netlbl_dom_map *entry,
+ audit_log_end(audit_buf);
+ }
+
+- if (ret_val == 0) {
+- struct netlbl_af4list *iter4;
+- struct netlbl_domaddr4_map *map4;
+-#if IS_ENABLED(CONFIG_IPV6)
+- struct netlbl_af6list *iter6;
+- struct netlbl_domaddr6_map *map6;
+-#endif /* IPv6 */
+-
+- switch (entry->def.type) {
+- case NETLBL_NLTYPE_ADDRSELECT:
+- netlbl_af4list_foreach_rcu(iter4,
+- &entry->def.addrsel->list4) {
+- map4 = netlbl_domhsh_addr4_entry(iter4);
+- cipso_v4_doi_putdef(map4->def.cipso);
+- }
++ switch (entry->def.type) {
++ case NETLBL_NLTYPE_ADDRSELECT:
++ netlbl_af4list_foreach_rcu(iter4, &entry->def.addrsel->list4) {
++ map4 = netlbl_domhsh_addr4_entry(iter4);
++ cipso_v4_doi_putdef(map4->def.cipso);
++ }
+ #if IS_ENABLED(CONFIG_IPV6)
+- netlbl_af6list_foreach_rcu(iter6,
+- &entry->def.addrsel->list6) {
+- map6 = netlbl_domhsh_addr6_entry(iter6);
+- calipso_doi_putdef(map6->def.calipso);
+- }
++ netlbl_af6list_foreach_rcu(iter6, &entry->def.addrsel->list6) {
++ map6 = netlbl_domhsh_addr6_entry(iter6);
++ calipso_doi_putdef(map6->def.calipso);
++ }
+ #endif /* IPv6 */
+- break;
+- case NETLBL_NLTYPE_CIPSOV4:
+- cipso_v4_doi_putdef(entry->def.cipso);
+- break;
++ break;
++ case NETLBL_NLTYPE_CIPSOV4:
++ cipso_v4_doi_putdef(entry->def.cipso);
++ break;
+ #if IS_ENABLED(CONFIG_IPV6)
+- case NETLBL_NLTYPE_CALIPSO:
+- calipso_doi_putdef(entry->def.calipso);
+- break;
++ case NETLBL_NLTYPE_CALIPSO:
++ calipso_doi_putdef(entry->def.calipso);
++ break;
+ #endif /* IPv6 */
+- }
+- call_rcu(&entry->rcu, netlbl_domhsh_free_entry);
+ }
++ call_rcu(&entry->rcu, netlbl_domhsh_free_entry);
+
+ return ret_val;
+ }
+diff --git a/net/sched/sch_taprio.c b/net/sched/sch_taprio.c
+index b1eb12d33b9a6..6a5086e586efb 100644
+--- a/net/sched/sch_taprio.c
++++ b/net/sched/sch_taprio.c
+@@ -1177,9 +1177,27 @@ static void taprio_offload_config_changed(struct taprio_sched *q)
+ spin_unlock(&q->current_entry_lock);
+ }
+
+-static void taprio_sched_to_offload(struct taprio_sched *q,
++static u32 tc_map_to_queue_mask(struct net_device *dev, u32 tc_mask)
++{
++ u32 i, queue_mask = 0;
++
++ for (i = 0; i < dev->num_tc; i++) {
++ u32 offset, count;
++
++ if (!(tc_mask & BIT(i)))
++ continue;
++
++ offset = dev->tc_to_txq[i].offset;
++ count = dev->tc_to_txq[i].count;
++
++ queue_mask |= GENMASK(offset + count - 1, offset);
++ }
++
++ return queue_mask;
++}
++
++static void taprio_sched_to_offload(struct net_device *dev,
+ struct sched_gate_list *sched,
+- const struct tc_mqprio_qopt *mqprio,
+ struct tc_taprio_qopt_offload *offload)
+ {
+ struct sched_entry *entry;
+@@ -1194,7 +1212,8 @@ static void taprio_sched_to_offload(struct taprio_sched *q,
+
+ e->command = entry->command;
+ e->interval = entry->interval;
+- e->gate_mask = entry->gate_mask;
++ e->gate_mask = tc_map_to_queue_mask(dev, entry->gate_mask);
++
+ i++;
+ }
+
+@@ -1202,7 +1221,6 @@ static void taprio_sched_to_offload(struct taprio_sched *q,
+ }
+
+ static int taprio_enable_offload(struct net_device *dev,
+- struct tc_mqprio_qopt *mqprio,
+ struct taprio_sched *q,
+ struct sched_gate_list *sched,
+ struct netlink_ext_ack *extack)
+@@ -1224,7 +1242,7 @@ static int taprio_enable_offload(struct net_device *dev,
+ return -ENOMEM;
+ }
+ offload->enable = 1;
+- taprio_sched_to_offload(q, sched, mqprio, offload);
++ taprio_sched_to_offload(dev, sched, offload);
+
+ err = ops->ndo_setup_tc(dev, TC_SETUP_QDISC_TAPRIO, offload);
+ if (err < 0) {
+@@ -1486,7 +1504,7 @@ static int taprio_change(struct Qdisc *sch, struct nlattr *opt,
+ }
+
+ if (FULL_OFFLOAD_IS_ENABLED(q->flags))
+- err = taprio_enable_offload(dev, mqprio, q, new_admin, extack);
++ err = taprio_enable_offload(dev, q, new_admin, extack);
+ else
+ err = taprio_disable_offload(dev, q, extack);
+ if (err)
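[Editorial note: the new tc_map_to_queue_mask() above converts a gate mask over traffic classes into a mask over TX queues, using each class's tc_to_txq offset/count and GENMASK(). The arithmetic can be checked with a short Python sketch — the offset/count table is made up for illustration, and this is not part of the patch:]

```python
def genmask(hi, lo):
    """GENMASK(hi, lo): bits hi..lo set, as in the kernel macro."""
    return ((1 << (hi + 1)) - 1) & ~((1 << lo) - 1)

def tc_map_to_queue_mask(tc_to_txq, tc_mask):
    """Translate a bitmap of traffic classes into a bitmap of queues."""
    queue_mask = 0
    for i, (offset, count) in enumerate(tc_to_txq):
        if not (tc_mask & (1 << i)):
            continue
        # Class i owns queues [offset, offset + count - 1].
        queue_mask |= genmask(offset + count - 1, offset)
    return queue_mask

# Example mapping: TC0 -> queues 0-1, TC1 -> queues 2-3
txq = [(0, 2), (2, 2)]
```

This is the fix the hunk implements: the offload previously passed the per-class gate_mask straight to hardware, which expects a per-queue mask.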
+diff --git a/net/sctp/socket.c b/net/sctp/socket.c
+index d57e1a002ffc8..fa20e945700e0 100644
+--- a/net/sctp/socket.c
++++ b/net/sctp/socket.c
+@@ -8297,8 +8297,6 @@ static int sctp_get_port_local(struct sock *sk, union sctp_addr *addr)
+
+ pr_debug("%s: begins, snum:%d\n", __func__, snum);
+
+- local_bh_disable();
+-
+ if (snum == 0) {
+ /* Search for an available port. */
+ int low, high, remaining, index;
+@@ -8316,20 +8314,21 @@ static int sctp_get_port_local(struct sock *sk, union sctp_addr *addr)
+ continue;
+ index = sctp_phashfn(net, rover);
+ head = &sctp_port_hashtable[index];
+- spin_lock(&head->lock);
++ spin_lock_bh(&head->lock);
+ sctp_for_each_hentry(pp, &head->chain)
+ if ((pp->port == rover) &&
+ net_eq(net, pp->net))
+ goto next;
+ break;
+ next:
+- spin_unlock(&head->lock);
++ spin_unlock_bh(&head->lock);
++ cond_resched();
+ } while (--remaining > 0);
+
+ /* Exhausted local port range during search? */
+ ret = 1;
+ if (remaining <= 0)
+- goto fail;
++ return ret;
+
+ /* OK, here is the one we will use. HEAD (the port
+ * hash table list entry) is non-NULL and we hold it's
+@@ -8344,7 +8343,7 @@ static int sctp_get_port_local(struct sock *sk, union sctp_addr *addr)
+ * port iterator, pp being NULL.
+ */
+ head = &sctp_port_hashtable[sctp_phashfn(net, snum)];
+- spin_lock(&head->lock);
++ spin_lock_bh(&head->lock);
+ sctp_for_each_hentry(pp, &head->chain) {
+ if ((pp->port == snum) && net_eq(pp->net, net))
+ goto pp_found;
+@@ -8444,10 +8443,7 @@ success:
+ ret = 0;
+
+ fail_unlock:
+- spin_unlock(&head->lock);
+-
+-fail:
+- local_bh_enable();
++ spin_unlock_bh(&head->lock);
+ return ret;
+ }
+
+diff --git a/net/tipc/crypto.c b/net/tipc/crypto.c
+index d6426b6cc9c5a..3f35577b7404f 100644
+--- a/net/tipc/crypto.c
++++ b/net/tipc/crypto.c
+@@ -326,7 +326,8 @@ static void tipc_aead_free(struct rcu_head *rp)
+ if (aead->cloned) {
+ tipc_aead_put(aead->cloned);
+ } else {
+- head = *this_cpu_ptr(aead->tfm_entry);
++ head = *get_cpu_ptr(aead->tfm_entry);
++ put_cpu_ptr(aead->tfm_entry);
+ list_for_each_entry_safe(tfm_entry, tmp, &head->list, list) {
+ crypto_free_aead(tfm_entry->tfm);
+ list_del(&tfm_entry->list);
+@@ -399,10 +400,15 @@ static void tipc_aead_users_set(struct tipc_aead __rcu *aead, int val)
+ */
+ static struct crypto_aead *tipc_aead_tfm_next(struct tipc_aead *aead)
+ {
+- struct tipc_tfm **tfm_entry = this_cpu_ptr(aead->tfm_entry);
++ struct tipc_tfm **tfm_entry;
++ struct crypto_aead *tfm;
+
++ tfm_entry = get_cpu_ptr(aead->tfm_entry);
+ *tfm_entry = list_next_entry(*tfm_entry, list);
+- return (*tfm_entry)->tfm;
++ tfm = (*tfm_entry)->tfm;
++ put_cpu_ptr(tfm_entry);
++
++ return tfm;
+ }
+
+ /**
+diff --git a/net/tipc/socket.c b/net/tipc/socket.c
+index a94f38333698a..79cc84393f932 100644
+--- a/net/tipc/socket.c
++++ b/net/tipc/socket.c
+@@ -2773,18 +2773,21 @@ static int tipc_shutdown(struct socket *sock, int how)
+
+ trace_tipc_sk_shutdown(sk, NULL, TIPC_DUMP_ALL, " ");
+ __tipc_shutdown(sock, TIPC_CONN_SHUTDOWN);
+- sk->sk_shutdown = SEND_SHUTDOWN;
++ if (tipc_sk_type_connectionless(sk))
++ sk->sk_shutdown = SHUTDOWN_MASK;
++ else
++ sk->sk_shutdown = SEND_SHUTDOWN;
+
+ if (sk->sk_state == TIPC_DISCONNECTING) {
+ /* Discard any unreceived messages */
+ __skb_queue_purge(&sk->sk_receive_queue);
+
+- /* Wake up anyone sleeping in poll */
+- sk->sk_state_change(sk);
+ res = 0;
+ } else {
+ res = -ENOTCONN;
+ }
++ /* Wake up anyone sleeping in poll. */
++ sk->sk_state_change(sk);
+
+ release_sock(sk);
+ return res;
* [gentoo-commits] proj/linux-patches:5.8 commit in: /
@ 2020-09-14 17:36 Mike Pagano
0 siblings, 0 replies; 30+ messages in thread
From: Mike Pagano @ 2020-09-14 17:36 UTC (permalink / raw
To: gentoo-commits
commit: 5846980492f891a80511f1ed36d666ef8a073e1a
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Mon Sep 14 17:35:23 2020 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Mon Sep 14 17:35:23 2020 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=58469804
Update cpu opt patch for gcc >= v9.1 and < v10.X.
See bug #742533
Reported by Balint SZENTE
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
5012_enable-cpu-optimizations-for-gcc91.patch | 641 ++++++++++++++++++++++++++
2 files changed, 645 insertions(+)
diff --git a/0000_README b/0000_README
index 96ae239..8c02383 100644
--- a/0000_README
+++ b/0000_README
@@ -135,6 +135,10 @@ Patch: 5007_ZSTD-v10-8-8-gitignore-add-ZSTD-compressed-files.patch
From: https://lkml.org/lkml/2020/4/1/29
Desc: .gitignore: add ZSTD-compressed files
+Patch: 5012_enable-cpu-optimizations-for-gcc91.patch
+From: https://github.com/graysky2/kernel_gcc_patch/
+Desc: Kernel patch enables gcc >= v9.1 and < v10 optimizations for additional CPUs.
+
Patch: 5013_enable-cpu-optimizations-for-gcc10.patch
From: https://github.com/graysky2/kernel_gcc_patch/
Desc: Kernel patch enables gcc = v10.1+ optimizations for additional CPUs.
diff --git a/5012_enable-cpu-optimizations-for-gcc91.patch b/5012_enable-cpu-optimizations-for-gcc91.patch
new file mode 100644
index 0000000..564eede
--- /dev/null
+++ b/5012_enable-cpu-optimizations-for-gcc91.patch
@@ -0,0 +1,641 @@
+WARNING
+This patch works with gcc versions 9.1+ and with kernel version 5.8+ and should
+NOT be applied when compiling on older versions of gcc due to key name changes
+of the march flags introduced with the version 4.9 release of gcc.[1]
+
+Use the older version of this patch hosted on the same github for older
+versions of gcc.
+
+FEATURES
+This patch adds additional CPU options to the Linux kernel accessible under:
+ Processor type and features --->
+ Processor family --->
+
+The expanded microarchitectures include:
+* AMD Improved K8-family
+* AMD K10-family
+* AMD Family 10h (Barcelona)
+* AMD Family 14h (Bobcat)
+* AMD Family 16h (Jaguar)
+* AMD Family 15h (Bulldozer)
+* AMD Family 15h (Piledriver)
+* AMD Family 15h (Steamroller)
+* AMD Family 15h (Excavator)
+* AMD Family 17h (Zen)
+* AMD Family 17h (Zen 2)
+* Intel Silvermont low-power processors
+* Intel Goldmont low-power processors (Apollo Lake and Denverton)
+* Intel Goldmont Plus low-power processors (Gemini Lake)
+* Intel 1st Gen Core i3/i5/i7 (Nehalem)
+* Intel 1.5 Gen Core i3/i5/i7 (Westmere)
+* Intel 2nd Gen Core i3/i5/i7 (Sandybridge)
+* Intel 3rd Gen Core i3/i5/i7 (Ivybridge)
+* Intel 4th Gen Core i3/i5/i7 (Haswell)
+* Intel 5th Gen Core i3/i5/i7 (Broadwell)
+* Intel 6th Gen Core i3/i5/i7 (Skylake)
+* Intel 6th Gen Core i7/i9 (Skylake X)
+* Intel 8th Gen Core i3/i5/i7 (Cannon Lake)
+* Intel 10th Gen Core i7/i9 (Ice Lake)
+* Intel Xeon (Cascade Lake)
+
+It also offers to compile passing the 'native' option which, "selects the CPU
+to generate code for at compilation time by determining the processor type of
+the compiling machine. Using -march=native enables all instruction subsets
+supported by the local machine and will produce code optimized for the local
+machine under the constraints of the selected instruction set."[2]
+
+Do NOT try using the 'native' option on AMD Piledriver, Steamroller, or
+Excavator CPUs (-march=bdver{2,3,4} flag). The build will error out due to the
+kernel's objtool issue with these.[3a,b]
+
+MINOR NOTES
+This patch also changes 'atom' to 'bonnell' in accordance with the gcc v4.9
+changes. Note that upstream is using the deprecated 'march=atom' flags when I
+believe it should use the newer 'march=bonnell' flag for atom processors.[4]
+
+It is not recommended to compile on Atom-CPUs with the 'native' option.[5] The
+recommendation is to use the 'atom' option instead.
+
+BENEFITS
+Small but real speed increases are measurable using a make benchmark comparing
+a generic kernel to one built with one of the respective microarchs.
+
+See the following experimental evidence supporting this statement:
+https://github.com/graysky2/kernel_gcc_patch
+
+REQUIREMENTS
+linux version >=5.8
+gcc version >=9.1 and <10
+
+ACKNOWLEDGMENTS
+This patch builds on the seminal work by Jeroen.[6]
+
+REFERENCES
+1. https://gcc.gnu.org/gcc-4.9/changes.html
+2. https://gcc.gnu.org/onlinedocs/gcc/x86-Options.html
+3a. https://gcc.gnu.org/bugzilla/show_bug.cgi?id=95671#c11
+3b. https://github.com/graysky2/kernel_gcc_patch/issues/55
+4. https://bugzilla.kernel.org/show_bug.cgi?id=77461
+5. https://github.com/graysky2/kernel_gcc_patch/issues/15
+6. http://www.linuxforge.net/docs/linux/linux-gcc.php
+
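[Editorial note: the version gate stated in REQUIREMENTS (gcc >= 9.1 and < 10 for this patch; the separate 5013 patch covers gcc 10.1+) can be expressed as a small helper. The patch file names match the 0000_README entries; the function itself is only an illustrative sketch, not part of the patch:]

```python
def pick_cpu_opt_patch(gcc_version):
    """Pick the graysky CPU-optimization patch matching a gcc version,
    per the ranges documented in 0000_README (None = no patch applies)."""
    if gcc_version >= (10, 1):
        return "5013_enable-cpu-optimizations-for-gcc10.patch"
    if (9, 1) <= gcc_version < (10, 0):
        return "5012_enable-cpu-optimizations-for-gcc91.patch"
    return None
```

Versions are compared as (major, minor) tuples, so e.g. (9, 10) correctly sorts above (9, 1).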
+--- a/arch/x86/include/asm/vermagic.h 2020-06-10 14:21:45.000000000 -0400
++++ b/arch/x86/include/asm/vermagic.h 2020-06-15 10:44:10.437477053 -0400
+@@ -17,6 +17,36 @@
+ #define MODULE_PROC_FAMILY "586MMX "
+ #elif defined CONFIG_MCORE2
+ #define MODULE_PROC_FAMILY "CORE2 "
++#elif defined CONFIG_MNATIVE
++#define MODULE_PROC_FAMILY "NATIVE "
++#elif defined CONFIG_MNEHALEM
++#define MODULE_PROC_FAMILY "NEHALEM "
++#elif defined CONFIG_MWESTMERE
++#define MODULE_PROC_FAMILY "WESTMERE "
++#elif defined CONFIG_MSILVERMONT
++#define MODULE_PROC_FAMILY "SILVERMONT "
++#elif defined CONFIG_MGOLDMONT
++#define MODULE_PROC_FAMILY "GOLDMONT "
++#elif defined CONFIG_MGOLDMONTPLUS
++#define MODULE_PROC_FAMILY "GOLDMONTPLUS "
++#elif defined CONFIG_MSANDYBRIDGE
++#define MODULE_PROC_FAMILY "SANDYBRIDGE "
++#elif defined CONFIG_MIVYBRIDGE
++#define MODULE_PROC_FAMILY "IVYBRIDGE "
++#elif defined CONFIG_MHASWELL
++#define MODULE_PROC_FAMILY "HASWELL "
++#elif defined CONFIG_MBROADWELL
++#define MODULE_PROC_FAMILY "BROADWELL "
++#elif defined CONFIG_MSKYLAKE
++#define MODULE_PROC_FAMILY "SKYLAKE "
++#elif defined CONFIG_MSKYLAKEX
++#define MODULE_PROC_FAMILY "SKYLAKEX "
++#elif defined CONFIG_MCANNONLAKE
++#define MODULE_PROC_FAMILY "CANNONLAKE "
++#elif defined CONFIG_MICELAKE
++#define MODULE_PROC_FAMILY "ICELAKE "
++#elif defined CONFIG_MCASCADELAKE
++#define MODULE_PROC_FAMILY "CASCADELAKE "
+ #elif defined CONFIG_MATOM
+ #define MODULE_PROC_FAMILY "ATOM "
+ #elif defined CONFIG_M686
+@@ -35,6 +65,28 @@
+ #define MODULE_PROC_FAMILY "K7 "
+ #elif defined CONFIG_MK8
+ #define MODULE_PROC_FAMILY "K8 "
++#elif defined CONFIG_MK8SSE3
++#define MODULE_PROC_FAMILY "K8SSE3 "
++#elif defined CONFIG_MK10
++#define MODULE_PROC_FAMILY "K10 "
++#elif defined CONFIG_MBARCELONA
++#define MODULE_PROC_FAMILY "BARCELONA "
++#elif defined CONFIG_MBOBCAT
++#define MODULE_PROC_FAMILY "BOBCAT "
++#elif defined CONFIG_MBULLDOZER
++#define MODULE_PROC_FAMILY "BULLDOZER "
++#elif defined CONFIG_MPILEDRIVER
++#define MODULE_PROC_FAMILY "PILEDRIVER "
++#elif defined CONFIG_MSTEAMROLLER
++#define MODULE_PROC_FAMILY "STEAMROLLER "
++#elif defined CONFIG_MJAGUAR
++#define MODULE_PROC_FAMILY "JAGUAR "
++#elif defined CONFIG_MEXCAVATOR
++#define MODULE_PROC_FAMILY "EXCAVATOR "
++#elif defined CONFIG_MZEN
++#define MODULE_PROC_FAMILY "ZEN "
++#elif defined CONFIG_MZEN2
++#define MODULE_PROC_FAMILY "ZEN2 "
+ #elif defined CONFIG_MELAN
+ #define MODULE_PROC_FAMILY "ELAN "
+ #elif defined CONFIG_MCRUSOE
+--- a/arch/x86/Kconfig.cpu 2020-06-10 14:21:45.000000000 -0400
++++ b/arch/x86/Kconfig.cpu 2020-06-15 10:44:10.437477053 -0400
+@@ -123,6 +123,7 @@ config MPENTIUMM
+ config MPENTIUM4
+ bool "Pentium-4/Celeron(P4-based)/Pentium-4 M/older Xeon"
+ depends on X86_32
++ select X86_P6_NOP
+ help
+ Select this for Intel Pentium 4 chips. This includes the
+ Pentium 4, Pentium D, P4-based Celeron and Xeon, and
+@@ -155,9 +156,8 @@ config MPENTIUM4
+ -Paxville
+ -Dempsey
+
+-
+ config MK6
+- bool "K6/K6-II/K6-III"
++ bool "AMD K6/K6-II/K6-III"
+ depends on X86_32
+ help
+ Select this for an AMD K6-family processor. Enables use of
+@@ -165,7 +165,7 @@ config MK6
+ flags to GCC.
+
+ config MK7
+- bool "Athlon/Duron/K7"
++ bool "AMD Athlon/Duron/K7"
+ depends on X86_32
+ help
+ Select this for an AMD Athlon K7-family processor. Enables use of
+@@ -173,12 +173,90 @@ config MK7
+ flags to GCC.
+
+ config MK8
+- bool "Opteron/Athlon64/Hammer/K8"
++ bool "AMD Opteron/Athlon64/Hammer/K8"
+ help
+ Select this for an AMD Opteron or Athlon64 Hammer-family processor.
+ Enables use of some extended instructions, and passes appropriate
+ optimization flags to GCC.
+
++config MK8SSE3
++ bool "AMD Opteron/Athlon64/Hammer/K8 with SSE3"
++ help
++ Select this for improved AMD Opteron or Athlon64 Hammer-family processors.
++ Enables use of some extended instructions, and passes appropriate
++ optimization flags to GCC.
++
++config MK10
++ bool "AMD 61xx/7x50/PhenomX3/X4/II/K10"
++ help
++ Select this for an AMD 61xx Eight-Core Magny-Cours, Athlon X2 7x50,
++ Phenom X3/X4/II, Athlon II X2/X3/X4, or Turion II-family processor.
++ Enables use of some extended instructions, and passes appropriate
++ optimization flags to GCC.
++
++config MBARCELONA
++ bool "AMD Barcelona"
++ help
++ Select this for AMD Family 10h Barcelona processors.
++
++ Enables -march=barcelona
++
++config MBOBCAT
++ bool "AMD Bobcat"
++ help
++ Select this for AMD Family 14h Bobcat processors.
++
++ Enables -march=btver1
++
++config MJAGUAR
++ bool "AMD Jaguar"
++ help
++ Select this for AMD Family 16h Jaguar processors.
++
++ Enables -march=btver2
++
++config MBULLDOZER
++ bool "AMD Bulldozer"
++ help
++ Select this for AMD Family 15h Bulldozer processors.
++
++ Enables -march=bdver1
++
++config MPILEDRIVER
++ bool "AMD Piledriver"
++ help
++ Select this for AMD Family 15h Piledriver processors.
++
++ Enables -march=bdver2
++
++config MSTEAMROLLER
++ bool "AMD Steamroller"
++ help
++ Select this for AMD Family 15h Steamroller processors.
++
++ Enables -march=bdver3
++
++config MEXCAVATOR
++ bool "AMD Excavator"
++ help
++ Select this for AMD Family 15h Excavator processors.
++
++ Enables -march=bdver4
++
++config MZEN
++ bool "AMD Zen"
++ help
++ Select this for AMD Family 17h Zen processors.
++
++ Enables -march=znver1
++
++config MZEN2
++ bool "AMD Zen 2"
++ help
++ Select this for AMD Family 17h Zen 2 processors.
++
++ Enables -march=znver2
++
+ config MCRUSOE
+ bool "Crusoe"
+ depends on X86_32
+@@ -260,6 +338,7 @@ config MVIAC7
+
+ config MPSC
+ bool "Intel P4 / older Netburst based Xeon"
++ select X86_P6_NOP
+ depends on X86_64
+ help
+ Optimize for Intel Pentium 4, Pentium D and older Nocona/Dempsey
+@@ -269,8 +348,19 @@ config MPSC
+ using the cpu family field
+ in /proc/cpuinfo. Family 15 is an older Xeon, Family 6 a newer one.
+
++config MATOM
++ bool "Intel Atom"
++ select X86_P6_NOP
++ help
++
++ Select this for the Intel Atom platform. Intel Atom CPUs have an
++ in-order pipelining architecture and thus can benefit from
++ accordingly optimized code. Use a recent GCC with specific Atom
++ support in order to fully benefit from selecting this option.
++
+ config MCORE2
+- bool "Core 2/newer Xeon"
++ bool "Intel Core 2"
++ select X86_P6_NOP
+ help
+
+ Select this for Intel Core 2 and newer Core 2 Xeons (Xeon 51xx and
+@@ -278,14 +368,133 @@ config MCORE2
+ family in /proc/cpuinfo. Newer ones have 6 and older ones 15
+ (not a typo)
+
+-config MATOM
+- bool "Intel Atom"
++ Enables -march=core2
++
++config MNEHALEM
++ bool "Intel Nehalem"
++ select X86_P6_NOP
+ help
+
+- Select this for the Intel Atom platform. Intel Atom CPUs have an
+- in-order pipelining architecture and thus can benefit from
+- accordingly optimized code. Use a recent GCC with specific Atom
+- support in order to fully benefit from selecting this option.
++ Select this for 1st Gen Core processors in the Nehalem family.
++
++ Enables -march=nehalem
++
++config MWESTMERE
++ bool "Intel Westmere"
++ select X86_P6_NOP
++ help
++
++ Select this for the Intel Westmere formerly Nehalem-C family.
++
++ Enables -march=westmere
++
++config MSILVERMONT
++ bool "Intel Silvermont"
++ select X86_P6_NOP
++ help
++
++ Select this for the Intel Silvermont platform.
++
++ Enables -march=silvermont
++
++config MGOLDMONT
++ bool "Intel Goldmont"
++ select X86_P6_NOP
++ help
++
++ Select this for the Intel Goldmont platform including Apollo Lake and Denverton.
++
++ Enables -march=goldmont
++
++config MGOLDMONTPLUS
++ bool "Intel Goldmont Plus"
++ select X86_P6_NOP
++ help
++
++ Select this for the Intel Goldmont Plus platform including Gemini Lake.
++
++ Enables -march=goldmont-plus
++
++config MSANDYBRIDGE
++ bool "Intel Sandy Bridge"
++ select X86_P6_NOP
++ help
++
++ Select this for 2nd Gen Core processors in the Sandy Bridge family.
++
++ Enables -march=sandybridge
++
++config MIVYBRIDGE
++ bool "Intel Ivy Bridge"
++ select X86_P6_NOP
++ help
++
++ Select this for 3rd Gen Core processors in the Ivy Bridge family.
++
++ Enables -march=ivybridge
++
++config MHASWELL
++ bool "Intel Haswell"
++ select X86_P6_NOP
++ help
++
++ Select this for 4th Gen Core processors in the Haswell family.
++
++ Enables -march=haswell
++
++config MBROADWELL
++ bool "Intel Broadwell"
++ select X86_P6_NOP
++ help
++
++ Select this for 5th Gen Core processors in the Broadwell family.
++
++ Enables -march=broadwell
++
++config MSKYLAKE
++ bool "Intel Skylake"
++ select X86_P6_NOP
++ help
++
++ Select this for 6th Gen Core processors in the Skylake family.
++
++ Enables -march=skylake
++
++config MSKYLAKEX
++ bool "Intel Skylake X"
++ select X86_P6_NOP
++ help
++
++ Select this for 6th Gen Core processors in the Skylake X family.
++
++ Enables -march=skylake-avx512
++
++config MCANNONLAKE
++ bool "Intel Cannon Lake"
++ select X86_P6_NOP
++ help
++
++ Select this for 8th Gen Core processors
++
++ Enables -march=cannonlake
++
++config MICELAKE
++ bool "Intel Ice Lake"
++ select X86_P6_NOP
++ help
++
++ Select this for 10th Gen Core processors in the Ice Lake family.
++
++ Enables -march=icelake-client
++
++config MCASCADELAKE
++ bool "Intel Cascade Lake"
++ select X86_P6_NOP
++ help
++
++ Select this for Xeon processors in the Cascade Lake family.
++
++ Enables -march=cascadelake
+
+ config GENERIC_CPU
+ bool "Generic-x86-64"
+@@ -294,6 +503,19 @@ config GENERIC_CPU
+ Generic x86-64 CPU.
+ Run equally well on all x86-64 CPUs.
+
++config MNATIVE
++ bool "Native optimizations autodetected by GCC"
++ help
++
++ GCC 4.2 and above support -march=native, which automatically detects
++ the optimum settings to use based on your processor. -march=native
++ also detects and applies additional settings beyond -march specific
++ to your CPU, (eg. -msse4). Unless you have a specific reason not to
++ (e.g. distcc cross-compiling), you should probably be using
++ -march=native rather than anything listed below.
++
++ Enables -march=native
++
+ endchoice
+
+ config X86_GENERIC
+@@ -318,7 +540,7 @@ config X86_INTERNODE_CACHE_SHIFT
+ config X86_L1_CACHE_SHIFT
+ int
+ default "7" if MPENTIUM4 || MPSC
+- default "6" if MK7 || MK8 || MPENTIUMM || MCORE2 || MATOM || MVIAC7 || X86_GENERIC || GENERIC_CPU
++ default "6" if MK7 || MK8 || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MJAGUAR || MPENTIUMM || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MNATIVE || MATOM || MVIAC7 || X86_GENERIC || GENERIC_CPU
+ default "4" if MELAN || M486SX || M486 || MGEODEGX1
+ default "5" if MWINCHIP3D || MWINCHIPC6 || MCRUSOE || MEFFICEON || MCYRIXIII || MK6 || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || M586 || MVIAC3_2 || MGEODE_LX
+
+@@ -336,35 +558,36 @@ config X86_ALIGNMENT_16
+
+ config X86_INTEL_USERCOPY
+ def_bool y
+- depends on MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M586MMX || X86_GENERIC || MK8 || MK7 || MEFFICEON || MCORE2
++ depends on MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M586MMX || X86_GENERIC || MK8 || MK8SSE3 || MK7 || MEFFICEON || MCORE2 || MK10 || MBARCELONA || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MNATIVE
+
+ config X86_USE_PPRO_CHECKSUM
+ def_bool y
+- depends on MWINCHIP3D || MWINCHIPC6 || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MK8 || MVIAC3_2 || MVIAC7 || MEFFICEON || MGEODE_LX || MCORE2 || MATOM
++ depends on MWINCHIP3D || MWINCHIPC6 || MCYRIXIII || MK7 || MK6 || MK10 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MK8 || MK8SSE3 || MVIAC3_2 || MVIAC7 || MEFFICEON || MGEODE_LX || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MATOM || MNATIVE
+
+ config X86_USE_3DNOW
+ def_bool y
+ depends on (MCYRIXIII || MK7 || MGEODE_LX) && !UML
+
+-#
+-# P6_NOPs are a relatively minor optimization that require a family >=
+-# 6 processor, except that it is broken on certain VIA chips.
+-# Furthermore, AMD chips prefer a totally different sequence of NOPs
+-# (which work on all CPUs). In addition, it looks like Virtual PC
+-# does not understand them.
+-#
+-# As a result, disallow these if we're not compiling for X86_64 (these
+-# NOPs do work on all x86-64 capable chips); the list of processors in
+-# the right-hand clause are the cores that benefit from this optimization.
+-#
+ config X86_P6_NOP
+- def_bool y
+- depends on X86_64
+- depends on (MCORE2 || MPENTIUM4 || MPSC)
++ default n
++ bool "Support for P6_NOPs on Intel chips"
++ depends on (MCORE2 || MPENTIUM4 || MPSC || MATOM || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MNATIVE)
++ help
++ P6_NOPs are a relatively minor optimization that require a family >=
++ 6 processor, except that it is broken on certain VIA chips.
++ Furthermore, AMD chips prefer a totally different sequence of NOPs
++ (which work on all CPUs). In addition, it looks like Virtual PC
++ does not understand them.
++
++ As a result, disallow these if we're not compiling for X86_64 (these
++ NOPs do work on all x86-64 capable chips); the list of processors in
++ the right-hand clause are the cores that benefit from this optimization.
++
++ Say Y if you have Intel CPU newer than Pentium Pro, N otherwise.
+
+ config X86_TSC
+ def_bool y
+- depends on (MWINCHIP3D || MCRUSOE || MEFFICEON || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || MK8 || MVIAC3_2 || MVIAC7 || MGEODEGX1 || MGEODE_LX || MCORE2 || MATOM) || X86_64
++ depends on (MWINCHIP3D || MCRUSOE || MEFFICEON || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || MK8 || MK8SSE3 || MVIAC3_2 || MVIAC7 || MGEODEGX1 || MGEODE_LX || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MNATIVE || MATOM) || X86_64
+
+ config X86_CMPXCHG64
+ def_bool y
+@@ -374,7 +597,7 @@ config X86_CMPXCHG64
+ # generates cmov.
+ config X86_CMOV
+ def_bool y
+- depends on (MK8 || MK7 || MCORE2 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MCRUSOE || MEFFICEON || X86_64 || MATOM || MGEODE_LX)
++ depends on (MK8 || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MJAGUAR || MK7 || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MCRUSOE || MEFFICEON || X86_64 || MNATIVE || MATOM || MGEODE_LX)
+
+ config X86_MINIMUM_CPU_FAMILY
+ int
+--- a/arch/x86/Makefile 2020-06-10 14:21:45.000000000 -0400
++++ b/arch/x86/Makefile 2020-06-15 10:44:35.608035680 -0400
+@@ -119,13 +119,56 @@ else
+ KBUILD_CFLAGS += $(call cc-option,-mskip-rax-setup)
+
+ # FIXME - should be integrated in Makefile.cpu (Makefile_32.cpu)
++ cflags-$(CONFIG_MNATIVE) += $(call cc-option,-march=native)
+ cflags-$(CONFIG_MK8) += $(call cc-option,-march=k8)
++ cflags-$(CONFIG_MK8SSE3) += $(call cc-option,-march=k8-sse3,-mtune=k8)
++ cflags-$(CONFIG_MK10) += $(call cc-option,-march=amdfam10)
++ cflags-$(CONFIG_MBARCELONA) += $(call cc-option,-march=barcelona)
++ cflags-$(CONFIG_MBOBCAT) += $(call cc-option,-march=btver1)
++ cflags-$(CONFIG_MJAGUAR) += $(call cc-option,-march=btver2)
++ cflags-$(CONFIG_MBULLDOZER) += $(call cc-option,-march=bdver1)
++ cflags-$(CONFIG_MPILEDRIVER) += $(call cc-option,-march=bdver2)
++ cflags-$(CONFIG_MPILEDRIVER) += $(call cc-option,-mno-tbm)
++ cflags-$(CONFIG_MSTEAMROLLER) += $(call cc-option,-march=bdver3)
++ cflags-$(CONFIG_MSTEAMROLLER) += $(call cc-option,-mno-tbm)
++ cflags-$(CONFIG_MEXCAVATOR) += $(call cc-option,-march=bdver4)
++ cflags-$(CONFIG_MEXCAVATOR) += $(call cc-option,-mno-tbm)
++ cflags-$(CONFIG_MZEN) += $(call cc-option,-march=znver1)
++ cflags-$(CONFIG_MZEN2) += $(call cc-option,-march=znver2)
+ cflags-$(CONFIG_MPSC) += $(call cc-option,-march=nocona)
+
+ cflags-$(CONFIG_MCORE2) += \
+- $(call cc-option,-march=core2,$(call cc-option,-mtune=generic))
+- cflags-$(CONFIG_MATOM) += $(call cc-option,-march=atom) \
+- $(call cc-option,-mtune=atom,$(call cc-option,-mtune=generic))
++ $(call cc-option,-march=core2,$(call cc-option,-mtune=core2))
++ cflags-$(CONFIG_MNEHALEM) += \
++ $(call cc-option,-march=nehalem,$(call cc-option,-mtune=nehalem))
++ cflags-$(CONFIG_MWESTMERE) += \
++ $(call cc-option,-march=westmere,$(call cc-option,-mtune=westmere))
++ cflags-$(CONFIG_MSILVERMONT) += \
++ $(call cc-option,-march=silvermont,$(call cc-option,-mtune=silvermont))
++ cflags-$(CONFIG_MGOLDMONT) += \
++ $(call cc-option,-march=goldmont,$(call cc-option,-mtune=goldmont))
++ cflags-$(CONFIG_MGOLDMONTPLUS) += \
++ $(call cc-option,-march=goldmont-plus,$(call cc-option,-mtune=goldmont-plus))
++ cflags-$(CONFIG_MSANDYBRIDGE) += \
++ $(call cc-option,-march=sandybridge,$(call cc-option,-mtune=sandybridge))
++ cflags-$(CONFIG_MIVYBRIDGE) += \
++ $(call cc-option,-march=ivybridge,$(call cc-option,-mtune=ivybridge))
++ cflags-$(CONFIG_MHASWELL) += \
++ $(call cc-option,-march=haswell,$(call cc-option,-mtune=haswell))
++ cflags-$(CONFIG_MBROADWELL) += \
++ $(call cc-option,-march=broadwell,$(call cc-option,-mtune=broadwell))
++ cflags-$(CONFIG_MSKYLAKE) += \
++ $(call cc-option,-march=skylake,$(call cc-option,-mtune=skylake))
++ cflags-$(CONFIG_MSKYLAKEX) += \
++ $(call cc-option,-march=skylake-avx512,$(call cc-option,-mtune=skylake-avx512))
++ cflags-$(CONFIG_MCANNONLAKE) += \
++ $(call cc-option,-march=cannonlake,$(call cc-option,-mtune=cannonlake))
++ cflags-$(CONFIG_MICELAKE) += \
++ $(call cc-option,-march=icelake-client,$(call cc-option,-mtune=icelake-client))
++ cflags-$(CONFIG_MCASCADELAKE) += \
++ $(call cc-option,-march=cascadelake,$(call cc-option,-mtune=cascadelake))
++ cflags-$(CONFIG_MATOM) += $(call cc-option,-march=bonnell) \
++ $(call cc-option,-mtune=bonnell,$(call cc-option,-mtune=generic))
+ cflags-$(CONFIG_GENERIC_CPU) += $(call cc-option,-mtune=generic)
+ KBUILD_CFLAGS += $(cflags-y)
+
+--- a/arch/x86/Makefile_32.cpu 2020-06-10 14:21:45.000000000 -0400
++++ b/arch/x86/Makefile_32.cpu 2020-06-15 10:44:10.437477053 -0400
+@@ -24,7 +24,19 @@ cflags-$(CONFIG_MK6) += -march=k6
+ # Please note, that patches that add -march=athlon-xp and friends are pointless.
+ # They make zero difference whatsosever to performance at this time.
+ cflags-$(CONFIG_MK7) += -march=athlon
++cflags-$(CONFIG_MNATIVE) += $(call cc-option,-march=native)
+ cflags-$(CONFIG_MK8) += $(call cc-option,-march=k8,-march=athlon)
++cflags-$(CONFIG_MK8SSE3) += $(call cc-option,-march=k8-sse3,-march=athlon)
++cflags-$(CONFIG_MK10) += $(call cc-option,-march=amdfam10,-march=athlon)
++cflags-$(CONFIG_MBARCELONA) += $(call cc-option,-march=barcelona,-march=athlon)
++cflags-$(CONFIG_MBOBCAT) += $(call cc-option,-march=btver1,-march=athlon)
++cflags-$(CONFIG_MJAGUAR) += $(call cc-option,-march=btver2,-march=athlon)
++cflags-$(CONFIG_MBULLDOZER) += $(call cc-option,-march=bdver1,-march=athlon)
++cflags-$(CONFIG_MPILEDRIVER) += $(call cc-option,-march=bdver2,-march=athlon)
++cflags-$(CONFIG_MSTEAMROLLER) += $(call cc-option,-march=bdver3,-march=athlon)
++cflags-$(CONFIG_MEXCAVATOR) += $(call cc-option,-march=bdver4,-march=athlon)
++cflags-$(CONFIG_MZEN) += $(call cc-option,-march=znver1,-march=athlon)
++cflags-$(CONFIG_MZEN2) += $(call cc-option,-march=znver2,-march=athlon)
+ cflags-$(CONFIG_MCRUSOE) += -march=i686 -falign-functions=0 -falign-jumps=0 -falign-loops=0
+ cflags-$(CONFIG_MEFFICEON) += -march=i686 $(call tune,pentium3) -falign-functions=0 -falign-jumps=0 -falign-loops=0
+ cflags-$(CONFIG_MWINCHIPC6) += $(call cc-option,-march=winchip-c6,-march=i586)
+@@ -33,8 +45,22 @@ cflags-$(CONFIG_MCYRIXIII) += $(call cc-
+ cflags-$(CONFIG_MVIAC3_2) += $(call cc-option,-march=c3-2,-march=i686)
+ cflags-$(CONFIG_MVIAC7) += -march=i686
+ cflags-$(CONFIG_MCORE2) += -march=i686 $(call tune,core2)
+-cflags-$(CONFIG_MATOM) += $(call cc-option,-march=atom,$(call cc-option,-march=core2,-march=i686)) \
+- $(call cc-option,-mtune=atom,$(call cc-option,-mtune=generic))
++cflags-$(CONFIG_MNEHALEM) += -march=i686 $(call tune,nehalem)
++cflags-$(CONFIG_MWESTMERE) += -march=i686 $(call tune,westmere)
++cflags-$(CONFIG_MSILVERMONT) += -march=i686 $(call tune,silvermont)
++cflags-$(CONFIG_MGOLDMONT) += -march=i686 $(call tune,goldmont)
++cflags-$(CONFIG_MGOLDMONTPLUS) += -march=i686 $(call tune,goldmont-plus)
++cflags-$(CONFIG_MSANDYBRIDGE) += -march=i686 $(call tune,sandybridge)
++cflags-$(CONFIG_MIVYBRIDGE) += -march=i686 $(call tune,ivybridge)
++cflags-$(CONFIG_MHASWELL) += -march=i686 $(call tune,haswell)
++cflags-$(CONFIG_MBROADWELL) += -march=i686 $(call tune,broadwell)
++cflags-$(CONFIG_MSKYLAKE) += -march=i686 $(call tune,skylake)
++cflags-$(CONFIG_MSKYLAKEX) += -march=i686 $(call tune,skylake-avx512)
++cflags-$(CONFIG_MCANNONLAKE) += -march=i686 $(call tune,cannonlake)
++cflags-$(CONFIG_MICELAKE) += -march=i686 $(call tune,icelake-client)
++cflags-$(CONFIG_MCASCADELAKE) += -march=i686 $(call tune,cascadelake)
++cflags-$(CONFIG_MATOM) += $(call cc-option,-march=bonnell,$(call cc-option,-march=core2,-march=i686)) \
++ $(call cc-option,-mtune=bonnell,$(call cc-option,-mtune=generic))
+
+ # AMD Elan support
+ cflags-$(CONFIG_MELAN) += -march=i486
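The `$(call cc-option,...)` helper that the Makefile hunks above lean on picks the first flag the compiler accepts and falls back to the second otherwise, which is why each new `-march=` value can be added safely even for toolchains that predate it. A minimal sketch of that idea (not the kernel's exact implementation, which lives in the kbuild infrastructure and adds extra safeguards such as `-Werror`):

```make
# Hypothetical standalone sketch of the cc-option fallback:
# probe the compiler with flag $(1); if it is rejected, use $(2) instead.
CC ?= gcc

cc-option = $(shell if $(CC) $(1) -c -x c /dev/null -o /dev/null \
                >/dev/null 2>&1; then echo '$(1)'; else echo '$(2)'; fi)

# Mirrors the pattern in the hunks above: prefer the newer -march value,
# degrade gracefully on older compilers.
cflags-y += $(call cc-option,-march=znver2,-march=athlon)
```

With this shape, a GCC too old to know `znver2` still produces a working build with `-march=athlon`, which is exactly how the 32-bit `Makefile_32.cpu` entries degrade.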
* [gentoo-commits] proj/linux-patches:5.8 commit in: /
@ 2020-09-17 14:57 Mike Pagano
From: Mike Pagano @ 2020-09-17 14:57 UTC (permalink / raw)
To: gentoo-commits
commit: aec56589ca906076f1dc545d65c1cebcafaa9775
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Sep 17 14:57:11 2020 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Sep 17 14:57:11 2020 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=aec56589
Linux patch 5.8.10
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1009_linux-5.8.10.patch | 6953 +++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 6957 insertions(+)
diff --git a/0000_README b/0000_README
index 8c02383..f2e8a67 100644
--- a/0000_README
+++ b/0000_README
@@ -79,6 +79,10 @@ Patch: 1008_linux-5.8.9.patch
From: http://www.kernel.org
Desc: Linux 5.8.9
+Patch: 1009_linux-5.8.10.patch
+From: http://www.kernel.org
+Desc: Linux 5.8.10
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1009_linux-5.8.10.patch b/1009_linux-5.8.10.patch
new file mode 100644
index 0000000..bc12dd8
--- /dev/null
+++ b/1009_linux-5.8.10.patch
@@ -0,0 +1,6953 @@
+diff --git a/Documentation/sound/designs/timestamping.rst b/Documentation/sound/designs/timestamping.rst
+index 2b0fff5034151..7c7ecf5dbc4bd 100644
+--- a/Documentation/sound/designs/timestamping.rst
++++ b/Documentation/sound/designs/timestamping.rst
+@@ -143,7 +143,7 @@ timestamp shows when the information is put together by the driver
+ before returning from the ``STATUS`` and ``STATUS_EXT`` ioctl. in most cases
+ this driver_timestamp will be identical to the regular system tstamp.
+
+-Examples of typestamping with HDaudio:
++Examples of timestamping with HDAudio:
+
+ 1. DMA timestamp, no compensation for DMA+analog delay
+ ::
+diff --git a/Makefile b/Makefile
+index 36eab48d1d4a6..d937530d33427 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 8
+-SUBLEVEL = 9
++SUBLEVEL = 10
+ EXTRAVERSION =
+ NAME = Kleptomaniac Octopus
+
+@@ -876,10 +876,6 @@ KBUILD_CFLAGS_KERNEL += -ffunction-sections -fdata-sections
+ LDFLAGS_vmlinux += --gc-sections
+ endif
+
+-ifdef CONFIG_LIVEPATCH
+-KBUILD_CFLAGS += $(call cc-option, -flive-patching=inline-clone)
+-endif
+-
+ ifdef CONFIG_SHADOW_CALL_STACK
+ CC_FLAGS_SCS := -fsanitize=shadow-call-stack
+ KBUILD_CFLAGS += $(CC_FLAGS_SCS)
+diff --git a/arch/arc/boot/dts/hsdk.dts b/arch/arc/boot/dts/hsdk.dts
+index 9acbeba832c0b..dcaa44e408ace 100644
+--- a/arch/arc/boot/dts/hsdk.dts
++++ b/arch/arc/boot/dts/hsdk.dts
+@@ -88,6 +88,8 @@
+
+ arcpct: pct {
+ compatible = "snps,archs-pct";
++ interrupt-parent = <&cpu_intc>;
++ interrupts = <20>;
+ };
+
+ /* TIMER0 with interrupt for clockevent */
+@@ -208,7 +210,7 @@
+ reg = <0x8000 0x2000>;
+ interrupts = <10>;
+ interrupt-names = "macirq";
+- phy-mode = "rgmii";
++ phy-mode = "rgmii-id";
+ snps,pbl = <32>;
+ snps,multicast-filter-bins = <256>;
+ clocks = <&gmacclk>;
+@@ -226,7 +228,7 @@
+ #address-cells = <1>;
+ #size-cells = <0>;
+ compatible = "snps,dwmac-mdio";
+- phy0: ethernet-phy@0 {
++ phy0: ethernet-phy@0 { /* Micrel KSZ9031 */
+ reg = <0>;
+ };
+ };
+diff --git a/arch/arc/kernel/troubleshoot.c b/arch/arc/kernel/troubleshoot.c
+index 28e8bf04b253f..a331bb5d8319f 100644
+--- a/arch/arc/kernel/troubleshoot.c
++++ b/arch/arc/kernel/troubleshoot.c
+@@ -18,44 +18,37 @@
+
+ #define ARC_PATH_MAX 256
+
+-/*
+- * Common routine to print scratch regs (r0-r12) or callee regs (r13-r25)
+- * -Prints 3 regs per line and a CR.
+- * -To continue, callee regs right after scratch, special handling of CR
+- */
+-static noinline void print_reg_file(long *reg_rev, int start_num)
++static noinline void print_regs_scratch(struct pt_regs *regs)
+ {
+- unsigned int i;
+- char buf[512];
+- int n = 0, len = sizeof(buf);
+-
+- for (i = start_num; i < start_num + 13; i++) {
+- n += scnprintf(buf + n, len - n, "r%02u: 0x%08lx\t",
+- i, (unsigned long)*reg_rev);
+-
+- if (((i + 1) % 3) == 0)
+- n += scnprintf(buf + n, len - n, "\n");
+-
+- /* because pt_regs has regs reversed: r12..r0, r25..r13 */
+- if (is_isa_arcv2() && start_num == 0)
+- reg_rev++;
+- else
+- reg_rev--;
+- }
+-
+- if (start_num != 0)
+- n += scnprintf(buf + n, len - n, "\n\n");
++ pr_cont("BTA: 0x%08lx\n SP: 0x%08lx FP: 0x%08lx BLK: %pS\n",
++ regs->bta, regs->sp, regs->fp, (void *)regs->blink);
++ pr_cont("LPS: 0x%08lx\tLPE: 0x%08lx\tLPC: 0x%08lx\n",
++ regs->lp_start, regs->lp_end, regs->lp_count);
+
+- /* To continue printing callee regs on same line as scratch regs */
+- if (start_num == 0)
+- pr_info("%s", buf);
+- else
+- pr_cont("%s\n", buf);
++ pr_info("r00: 0x%08lx\tr01: 0x%08lx\tr02: 0x%08lx\n" \
++ "r03: 0x%08lx\tr04: 0x%08lx\tr05: 0x%08lx\n" \
++ "r06: 0x%08lx\tr07: 0x%08lx\tr08: 0x%08lx\n" \
++ "r09: 0x%08lx\tr10: 0x%08lx\tr11: 0x%08lx\n" \
++ "r12: 0x%08lx\t",
++ regs->r0, regs->r1, regs->r2,
++ regs->r3, regs->r4, regs->r5,
++ regs->r6, regs->r7, regs->r8,
++ regs->r9, regs->r10, regs->r11,
++ regs->r12);
+ }
+
+-static void show_callee_regs(struct callee_regs *cregs)
++static void print_regs_callee(struct callee_regs *regs)
+ {
+- print_reg_file(&(cregs->r13), 13);
++ pr_cont("r13: 0x%08lx\tr14: 0x%08lx\n" \
++ "r15: 0x%08lx\tr16: 0x%08lx\tr17: 0x%08lx\n" \
++ "r18: 0x%08lx\tr19: 0x%08lx\tr20: 0x%08lx\n" \
++ "r21: 0x%08lx\tr22: 0x%08lx\tr23: 0x%08lx\n" \
++ "r24: 0x%08lx\tr25: 0x%08lx\n",
++ regs->r13, regs->r14,
++ regs->r15, regs->r16, regs->r17,
++ regs->r18, regs->r19, regs->r20,
++ regs->r21, regs->r22, regs->r23,
++ regs->r24, regs->r25);
+ }
+
+ static void print_task_path_n_nm(struct task_struct *tsk)
+@@ -175,7 +168,7 @@ static void show_ecr_verbose(struct pt_regs *regs)
+ void show_regs(struct pt_regs *regs)
+ {
+ struct task_struct *tsk = current;
+- struct callee_regs *cregs;
++ struct callee_regs *cregs = (struct callee_regs *)tsk->thread.callee_reg;
+
+ /*
+ * generic code calls us with preemption disabled, but some calls
+@@ -204,25 +197,15 @@ void show_regs(struct pt_regs *regs)
+ STS_BIT(regs, A2), STS_BIT(regs, A1),
+ STS_BIT(regs, E2), STS_BIT(regs, E1));
+ #else
+- pr_cont(" [%2s%2s%2s%2s]",
++ pr_cont(" [%2s%2s%2s%2s] ",
+ STS_BIT(regs, IE),
+ (regs->status32 & STATUS_U_MASK) ? "U " : "K ",
+ STS_BIT(regs, DE), STS_BIT(regs, AE));
+ #endif
+- pr_cont(" BTA: 0x%08lx\n SP: 0x%08lx FP: 0x%08lx BLK: %pS\n",
+- regs->bta, regs->sp, regs->fp, (void *)regs->blink);
+- pr_info("LPS: 0x%08lx\tLPE: 0x%08lx\tLPC: 0x%08lx\n",
+- regs->lp_start, regs->lp_end, regs->lp_count);
+-
+- /* print regs->r0 thru regs->r12
+- * Sequential printing was generating horrible code
+- */
+- print_reg_file(&(regs->r0), 0);
+
+- /* If Callee regs were saved, display them too */
+- cregs = (struct callee_regs *)current->thread.callee_reg;
++ print_regs_scratch(regs);
+ if (cregs)
+- show_callee_regs(cregs);
++ print_regs_callee(cregs);
+
+ preempt_disable();
+ }
+diff --git a/arch/arc/plat-eznps/include/plat/ctop.h b/arch/arc/plat-eznps/include/plat/ctop.h
+index a4a61531c7fb9..77712c5ffe848 100644
+--- a/arch/arc/plat-eznps/include/plat/ctop.h
++++ b/arch/arc/plat-eznps/include/plat/ctop.h
+@@ -33,7 +33,6 @@
+ #define CTOP_AUX_DPC (CTOP_AUX_BASE + 0x02C)
+ #define CTOP_AUX_LPC (CTOP_AUX_BASE + 0x030)
+ #define CTOP_AUX_EFLAGS (CTOP_AUX_BASE + 0x080)
+-#define CTOP_AUX_IACK (CTOP_AUX_BASE + 0x088)
+ #define CTOP_AUX_GPA1 (CTOP_AUX_BASE + 0x08C)
+ #define CTOP_AUX_UDMC (CTOP_AUX_BASE + 0x300)
+
+diff --git a/arch/arm/boot/dts/bcm-hr2.dtsi b/arch/arm/boot/dts/bcm-hr2.dtsi
+index 5e5f5ca3c86f1..bba0e8cd2acbd 100644
+--- a/arch/arm/boot/dts/bcm-hr2.dtsi
++++ b/arch/arm/boot/dts/bcm-hr2.dtsi
+@@ -217,7 +217,7 @@
+ };
+
+ qspi: spi@27200 {
+- compatible = "brcm,spi-bcm-qspi", "brcm,spi-nsp-qspi";
++ compatible = "brcm,spi-nsp-qspi", "brcm,spi-bcm-qspi";
+ reg = <0x027200 0x184>,
+ <0x027000 0x124>,
+ <0x11c408 0x004>,
+diff --git a/arch/arm/boot/dts/bcm-nsp.dtsi b/arch/arm/boot/dts/bcm-nsp.dtsi
+index 3175266ede646..465937b79c8e4 100644
+--- a/arch/arm/boot/dts/bcm-nsp.dtsi
++++ b/arch/arm/boot/dts/bcm-nsp.dtsi
+@@ -284,7 +284,7 @@
+ };
+
+ qspi: spi@27200 {
+- compatible = "brcm,spi-bcm-qspi", "brcm,spi-nsp-qspi";
++ compatible = "brcm,spi-nsp-qspi", "brcm,spi-bcm-qspi";
+ reg = <0x027200 0x184>,
+ <0x027000 0x124>,
+ <0x11c408 0x004>,
+diff --git a/arch/arm/boot/dts/bcm5301x.dtsi b/arch/arm/boot/dts/bcm5301x.dtsi
+index 2d9b4dd058307..0016720ce5300 100644
+--- a/arch/arm/boot/dts/bcm5301x.dtsi
++++ b/arch/arm/boot/dts/bcm5301x.dtsi
+@@ -488,7 +488,7 @@
+ };
+
+ spi@18029200 {
+- compatible = "brcm,spi-bcm-qspi", "brcm,spi-nsp-qspi";
++ compatible = "brcm,spi-nsp-qspi", "brcm,spi-bcm-qspi";
+ reg = <0x18029200 0x184>,
+ <0x18029000 0x124>,
+ <0x1811b408 0x004>,
+diff --git a/arch/arm/boot/dts/imx6sx-pinfunc.h b/arch/arm/boot/dts/imx6sx-pinfunc.h
+index 0b02c7e60c174..f4dc46207954c 100644
+--- a/arch/arm/boot/dts/imx6sx-pinfunc.h
++++ b/arch/arm/boot/dts/imx6sx-pinfunc.h
+@@ -1026,7 +1026,7 @@
+ #define MX6SX_PAD_QSPI1B_DQS__SIM_M_HADDR_15 0x01B0 0x04F8 0x0000 0x7 0x0
+ #define MX6SX_PAD_QSPI1B_SCLK__QSPI1_B_SCLK 0x01B4 0x04FC 0x0000 0x0 0x0
+ #define MX6SX_PAD_QSPI1B_SCLK__UART3_DCE_RX 0x01B4 0x04FC 0x0840 0x1 0x4
+-#define MX6SX_PAD_QSPI1B_SCLK__UART3_DTE_TX 0x01B4 0x04FC 0x0000 0x0 0x0
++#define MX6SX_PAD_QSPI1B_SCLK__UART3_DTE_TX 0x01B4 0x04FC 0x0000 0x1 0x0
+ #define MX6SX_PAD_QSPI1B_SCLK__ECSPI3_SCLK 0x01B4 0x04FC 0x0730 0x2 0x1
+ #define MX6SX_PAD_QSPI1B_SCLK__ESAI_RX_HF_CLK 0x01B4 0x04FC 0x0780 0x3 0x2
+ #define MX6SX_PAD_QSPI1B_SCLK__CSI1_DATA_16 0x01B4 0x04FC 0x06DC 0x4 0x1
+diff --git a/arch/arm/boot/dts/imx7d-zii-rmu2.dts b/arch/arm/boot/dts/imx7d-zii-rmu2.dts
+index e5e20b07f184b..7cb6153fc650b 100644
+--- a/arch/arm/boot/dts/imx7d-zii-rmu2.dts
++++ b/arch/arm/boot/dts/imx7d-zii-rmu2.dts
+@@ -58,7 +58,7 @@
+ <&clks IMX7D_ENET1_TIME_ROOT_CLK>;
+ assigned-clock-parents = <&clks IMX7D_PLL_ENET_MAIN_100M_CLK>;
+ assigned-clock-rates = <0>, <100000000>;
+- phy-mode = "rgmii";
++ phy-mode = "rgmii-id";
+ phy-handle = <&fec1_phy>;
+ status = "okay";
+
+diff --git a/arch/arm/boot/dts/imx7ulp.dtsi b/arch/arm/boot/dts/imx7ulp.dtsi
+index f7c4878534c8e..1bff3efe8aafe 100644
+--- a/arch/arm/boot/dts/imx7ulp.dtsi
++++ b/arch/arm/boot/dts/imx7ulp.dtsi
+@@ -394,7 +394,7 @@
+ clocks = <&pcc2 IMX7ULP_CLK_RGPIO2P1>,
+ <&pcc3 IMX7ULP_CLK_PCTLC>;
+ clock-names = "gpio", "port";
+- gpio-ranges = <&iomuxc1 0 0 32>;
++ gpio-ranges = <&iomuxc1 0 0 20>;
+ };
+
+ gpio_ptd: gpio@40af0000 {
+@@ -408,7 +408,7 @@
+ clocks = <&pcc2 IMX7ULP_CLK_RGPIO2P1>,
+ <&pcc3 IMX7ULP_CLK_PCTLD>;
+ clock-names = "gpio", "port";
+- gpio-ranges = <&iomuxc1 0 32 32>;
++ gpio-ranges = <&iomuxc1 0 32 12>;
+ };
+
+ gpio_pte: gpio@40b00000 {
+@@ -422,7 +422,7 @@
+ clocks = <&pcc2 IMX7ULP_CLK_RGPIO2P1>,
+ <&pcc3 IMX7ULP_CLK_PCTLE>;
+ clock-names = "gpio", "port";
+- gpio-ranges = <&iomuxc1 0 64 32>;
++ gpio-ranges = <&iomuxc1 0 64 16>;
+ };
+
+ gpio_ptf: gpio@40b10000 {
+@@ -436,7 +436,7 @@
+ clocks = <&pcc2 IMX7ULP_CLK_RGPIO2P1>,
+ <&pcc3 IMX7ULP_CLK_PCTLF>;
+ clock-names = "gpio", "port";
+- gpio-ranges = <&iomuxc1 0 96 32>;
++ gpio-ranges = <&iomuxc1 0 96 20>;
+ };
+ };
+
+diff --git a/arch/arm/boot/dts/logicpd-som-lv-baseboard.dtsi b/arch/arm/boot/dts/logicpd-som-lv-baseboard.dtsi
+index 100396f6c2feb..395e05f10d36c 100644
+--- a/arch/arm/boot/dts/logicpd-som-lv-baseboard.dtsi
++++ b/arch/arm/boot/dts/logicpd-som-lv-baseboard.dtsi
+@@ -51,6 +51,8 @@
+
+ &mcbsp2 {
+ status = "okay";
++ pinctrl-names = "default";
++ pinctrl-0 = <&mcbsp2_pins>;
+ };
+
+ &charger {
+@@ -102,35 +104,18 @@
+ regulator-max-microvolt = <3300000>;
+ };
+
+- lcd0: display@0 {
+- compatible = "panel-dpi";
+- label = "28";
+- status = "okay";
+- /* default-on; */
++ lcd0: display {
++ /* This isn't the exact LCD, but the timings meet spec */
++ compatible = "logicpd,type28";
+ pinctrl-names = "default";
+ pinctrl-0 = <&lcd_enable_pin>;
+- enable-gpios = <&gpio5 27 GPIO_ACTIVE_HIGH>; /* gpio155, lcd INI */
++ backlight = <&bl>;
++ enable-gpios = <&gpio5 27 GPIO_ACTIVE_HIGH>;
+ port {
+ lcd_in: endpoint {
+ remote-endpoint = <&dpi_out>;
+ };
+ };
+-
+- panel-timing {
+- clock-frequency = <9000000>;
+- hactive = <480>;
+- vactive = <272>;
+- hfront-porch = <3>;
+- hback-porch = <2>;
+- hsync-len = <42>;
+- vback-porch = <3>;
+- vfront-porch = <2>;
+- vsync-len = <11>;
+- hsync-active = <1>;
+- vsync-active = <1>;
+- de-active = <1>;
+- pixelclk-active = <0>;
+- };
+ };
+
+ bl: backlight {
+diff --git a/arch/arm/boot/dts/logicpd-torpedo-baseboard.dtsi b/arch/arm/boot/dts/logicpd-torpedo-baseboard.dtsi
+index 381f0e82bb706..b0f6613e6d549 100644
+--- a/arch/arm/boot/dts/logicpd-torpedo-baseboard.dtsi
++++ b/arch/arm/boot/dts/logicpd-torpedo-baseboard.dtsi
+@@ -81,6 +81,8 @@
+ };
+
+ &mcbsp2 {
++ pinctrl-names = "default";
++ pinctrl-0 = <&mcbsp2_pins>;
+ status = "okay";
+ };
+
+diff --git a/arch/arm/boot/dts/ls1021a.dtsi b/arch/arm/boot/dts/ls1021a.dtsi
+index b2ff27af090ec..9435ce527e855 100644
+--- a/arch/arm/boot/dts/ls1021a.dtsi
++++ b/arch/arm/boot/dts/ls1021a.dtsi
+@@ -181,7 +181,7 @@
+ #address-cells = <1>;
+ #size-cells = <0>;
+ reg = <0x0 0x1550000 0x0 0x10000>,
+- <0x0 0x40000000 0x0 0x40000000>;
++ <0x0 0x40000000 0x0 0x20000000>;
+ reg-names = "QuadSPI", "QuadSPI-memory";
+ interrupts = <GIC_SPI 131 IRQ_TYPE_LEVEL_HIGH>;
+ clock-names = "qspi_en", "qspi";
+diff --git a/arch/arm/boot/dts/omap5.dtsi b/arch/arm/boot/dts/omap5.dtsi
+index fb889c5b00c9d..de55ac5e60f39 100644
+--- a/arch/arm/boot/dts/omap5.dtsi
++++ b/arch/arm/boot/dts/omap5.dtsi
+@@ -463,11 +463,11 @@
+ };
+ };
+
+- target-module@5000 {
++ target-module@4000 {
+ compatible = "ti,sysc-omap2", "ti,sysc";
+- reg = <0x5000 0x4>,
+- <0x5010 0x4>,
+- <0x5014 0x4>;
++ reg = <0x4000 0x4>,
++ <0x4010 0x4>,
++ <0x4014 0x4>;
+ reg-names = "rev", "sysc", "syss";
+ ti,sysc-sidle = <SYSC_IDLE_FORCE>,
+ <SYSC_IDLE_NO>,
+@@ -479,7 +479,7 @@
+ ti,syss-mask = <1>;
+ #address-cells = <1>;
+ #size-cells = <1>;
+- ranges = <0 0x5000 0x1000>;
++ ranges = <0 0x4000 0x1000>;
+
+ dsi1: encoder@0 {
+ compatible = "ti,omap5-dsi";
+@@ -489,8 +489,9 @@
+ reg-names = "proto", "phy", "pll";
+ interrupts = <GIC_SPI 53 IRQ_TYPE_LEVEL_HIGH>;
+ status = "disabled";
+- clocks = <&dss_clkctrl OMAP5_DSS_CORE_CLKCTRL 8>;
+- clock-names = "fck";
++ clocks = <&dss_clkctrl OMAP5_DSS_CORE_CLKCTRL 8>,
++ <&dss_clkctrl OMAP5_DSS_CORE_CLKCTRL 10>;
++ clock-names = "fck", "sys_clk";
+ };
+ };
+
+@@ -520,8 +521,9 @@
+ reg-names = "proto", "phy", "pll";
+ interrupts = <GIC_SPI 55 IRQ_TYPE_LEVEL_HIGH>;
+ status = "disabled";
+- clocks = <&dss_clkctrl OMAP5_DSS_CORE_CLKCTRL 8>;
+- clock-names = "fck";
++ clocks = <&dss_clkctrl OMAP5_DSS_CORE_CLKCTRL 8>,
++ <&dss_clkctrl OMAP5_DSS_CORE_CLKCTRL 10>;
++ clock-names = "fck", "sys_clk";
+ };
+ };
+
+diff --git a/arch/arm/boot/dts/socfpga_arria10.dtsi b/arch/arm/boot/dts/socfpga_arria10.dtsi
+index 8f614c4b0e3eb..9c71472c237bd 100644
+--- a/arch/arm/boot/dts/socfpga_arria10.dtsi
++++ b/arch/arm/boot/dts/socfpga_arria10.dtsi
+@@ -819,7 +819,7 @@
+ timer3: timer3@ffd00100 {
+ compatible = "snps,dw-apb-timer";
+ interrupts = <0 118 IRQ_TYPE_LEVEL_HIGH>;
+- reg = <0xffd01000 0x100>;
++ reg = <0xffd00100 0x100>;
+ clocks = <&l4_sys_free_clk>;
+ clock-names = "timer";
+ resets = <&rst L4SYSTIMER1_RESET>;
+diff --git a/arch/arm/boot/dts/vfxxx.dtsi b/arch/arm/boot/dts/vfxxx.dtsi
+index 2d547e7b21ad5..8a14ac34d1313 100644
+--- a/arch/arm/boot/dts/vfxxx.dtsi
++++ b/arch/arm/boot/dts/vfxxx.dtsi
+@@ -495,7 +495,7 @@
+ };
+
+ ocotp: ocotp@400a5000 {
+- compatible = "fsl,vf610-ocotp";
++ compatible = "fsl,vf610-ocotp", "syscon";
+ reg = <0x400a5000 0x1000>;
+ clocks = <&clks VF610_CLK_OCOTP>;
+ };
+diff --git a/arch/arm/mach-omap2/omap-iommu.c b/arch/arm/mach-omap2/omap-iommu.c
+index 54aff33e55e6e..bfa5e1b8dba7f 100644
+--- a/arch/arm/mach-omap2/omap-iommu.c
++++ b/arch/arm/mach-omap2/omap-iommu.c
+@@ -74,7 +74,7 @@ static struct powerdomain *_get_pwrdm(struct device *dev)
+ return pwrdm;
+
+ clk = of_clk_get(dev->of_node->parent, 0);
+- if (!clk) {
++ if (IS_ERR(clk)) {
+ dev_err(dev, "no fck found\n");
+ return NULL;
+ }
+diff --git a/arch/arm64/boot/dts/broadcom/northstar2/ns2.dtsi b/arch/arm64/boot/dts/broadcom/northstar2/ns2.dtsi
+index 15f7b0ed38369..39802066232e1 100644
+--- a/arch/arm64/boot/dts/broadcom/northstar2/ns2.dtsi
++++ b/arch/arm64/boot/dts/broadcom/northstar2/ns2.dtsi
+@@ -745,7 +745,7 @@
+ };
+
+ qspi: spi@66470200 {
+- compatible = "brcm,spi-bcm-qspi", "brcm,spi-ns2-qspi";
++ compatible = "brcm,spi-ns2-qspi", "brcm,spi-bcm-qspi";
+ reg = <0x66470200 0x184>,
+ <0x66470000 0x124>,
+ <0x67017408 0x004>,
+diff --git a/arch/arm64/boot/dts/freescale/Makefile b/arch/arm64/boot/dts/freescale/Makefile
+index a39f0a1723e02..903c0eb61290d 100644
+--- a/arch/arm64/boot/dts/freescale/Makefile
++++ b/arch/arm64/boot/dts/freescale/Makefile
+@@ -28,6 +28,7 @@ dtb-$(CONFIG_ARCH_LAYERSCAPE) += fsl-lx2160a-honeycomb.dtb
+ dtb-$(CONFIG_ARCH_LAYERSCAPE) += fsl-lx2160a-qds.dtb
+ dtb-$(CONFIG_ARCH_LAYERSCAPE) += fsl-lx2160a-rdb.dtb
+
++dtb-$(CONFIG_ARCH_MXC) += imx8mm-beacon-kit.dtb
+ dtb-$(CONFIG_ARCH_MXC) += imx8mm-evk.dtb
+ dtb-$(CONFIG_ARCH_MXC) += imx8mn-evk.dtb
+ dtb-$(CONFIG_ARCH_MXC) += imx8mn-ddr4-evk.dtb
+diff --git a/arch/arm64/boot/dts/freescale/imx8mp.dtsi b/arch/arm64/boot/dts/freescale/imx8mp.dtsi
+index 45e2c0a4e8896..437e2ccf8f866 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mp.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx8mp.dtsi
+@@ -688,7 +688,7 @@
+ reg = <0x30bd0000 0x10000>;
+ interrupts = <GIC_SPI 2 IRQ_TYPE_LEVEL_HIGH>;
+ clocks = <&clk IMX8MP_CLK_SDMA1_ROOT>,
+- <&clk IMX8MP_CLK_SDMA1_ROOT>;
++ <&clk IMX8MP_CLK_AHB>;
+ clock-names = "ipg", "ahb";
+ #dma-cells = <3>;
+ fsl,sdma-ram-script-name = "imx/sdma/sdma-imx7d.bin";
+diff --git a/arch/arm64/boot/dts/freescale/imx8mq.dtsi b/arch/arm64/boot/dts/freescale/imx8mq.dtsi
+index 978f8122c0d2c..66ac66856e7e8 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mq.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx8mq.dtsi
+@@ -420,7 +420,7 @@
+ tmu: tmu@30260000 {
+ compatible = "fsl,imx8mq-tmu";
+ reg = <0x30260000 0x10000>;
+- interrupt = <GIC_SPI 49 IRQ_TYPE_LEVEL_HIGH>;
++ interrupts = <GIC_SPI 49 IRQ_TYPE_LEVEL_HIGH>;
+ clocks = <&clk IMX8MQ_CLK_TMU_ROOT>;
+ little-endian;
+ fsl,tmu-range = <0xb0000 0xa0026 0x80048 0x70061>;
+diff --git a/arch/arm64/kernel/module-plts.c b/arch/arm64/kernel/module-plts.c
+index 65b08a74aec65..37c0b51a7b7b5 100644
+--- a/arch/arm64/kernel/module-plts.c
++++ b/arch/arm64/kernel/module-plts.c
+@@ -271,8 +271,7 @@ int module_frob_arch_sections(Elf_Ehdr *ehdr, Elf_Shdr *sechdrs,
+ mod->arch.core.plt_shndx = i;
+ else if (!strcmp(secstrings + sechdrs[i].sh_name, ".init.plt"))
+ mod->arch.init.plt_shndx = i;
+- else if (IS_ENABLED(CONFIG_DYNAMIC_FTRACE) &&
+- !strcmp(secstrings + sechdrs[i].sh_name,
++ else if (!strcmp(secstrings + sechdrs[i].sh_name,
+ ".text.ftrace_trampoline"))
+ tramp = sechdrs + i;
+ else if (sechdrs[i].sh_type == SHT_SYMTAB)
+diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
+index bd47f06739d6c..d906350d543dd 100644
+--- a/arch/arm64/kvm/mmu.c
++++ b/arch/arm64/kvm/mmu.c
+@@ -1873,6 +1873,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
+ !fault_supports_stage2_huge_mapping(memslot, hva, vma_pagesize)) {
+ force_pte = true;
+ vma_pagesize = PAGE_SIZE;
++ vma_shift = PAGE_SHIFT;
+ }
+
+ /*
+@@ -1967,7 +1968,12 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
+ (fault_status == FSC_PERM &&
+ stage2_is_exec(kvm, fault_ipa, vma_pagesize));
+
+- if (vma_pagesize == PUD_SIZE) {
++ /*
++ * If PUD_SIZE == PMD_SIZE, there is no real PUD level, and
++ * all we have is a 2-level page table. Trying to map a PUD in
++ * this case would be fatally wrong.
++ */
++ if (PUD_SIZE != PMD_SIZE && vma_pagesize == PUD_SIZE) {
+ pud_t new_pud = kvm_pfn_pud(pfn, mem_type);
+
+ new_pud = kvm_pud_mkhuge(new_pud);
+diff --git a/arch/mips/include/asm/mach-loongson64/cpu-feature-overrides.h b/arch/mips/include/asm/mach-loongson64/cpu-feature-overrides.h
+index b6e9c99b85a52..eb181224eb4c4 100644
+--- a/arch/mips/include/asm/mach-loongson64/cpu-feature-overrides.h
++++ b/arch/mips/include/asm/mach-loongson64/cpu-feature-overrides.h
+@@ -26,7 +26,6 @@
+ #define cpu_has_counter 1
+ #define cpu_has_dc_aliases (PAGE_SIZE < 0x4000)
+ #define cpu_has_divec 0
+-#define cpu_has_ejtag 0
+ #define cpu_has_inclusive_pcaches 1
+ #define cpu_has_llsc 1
+ #define cpu_has_mcheck 0
+@@ -42,7 +41,6 @@
+ #define cpu_has_veic 0
+ #define cpu_has_vint 0
+ #define cpu_has_vtag_icache 0
+-#define cpu_has_watch 1
+ #define cpu_has_wsbh 1
+ #define cpu_has_ic_fills_f_dc 1
+ #define cpu_hwrena_impl_bits 0xc0000000
+diff --git a/arch/powerpc/configs/pasemi_defconfig b/arch/powerpc/configs/pasemi_defconfig
+index 08b7f4cef2434..ddf5e97877e2b 100644
+--- a/arch/powerpc/configs/pasemi_defconfig
++++ b/arch/powerpc/configs/pasemi_defconfig
+@@ -109,7 +109,6 @@ CONFIG_FB_NVIDIA=y
+ CONFIG_FB_NVIDIA_I2C=y
+ CONFIG_FB_RADEON=y
+ # CONFIG_LCD_CLASS_DEVICE is not set
+-CONFIG_VGACON_SOFT_SCROLLBACK=y
+ CONFIG_LOGO=y
+ CONFIG_SOUND=y
+ CONFIG_SND=y
+diff --git a/arch/powerpc/configs/ppc6xx_defconfig b/arch/powerpc/configs/ppc6xx_defconfig
+index feb5d47d8d1e0..4cc9039e4deb9 100644
+--- a/arch/powerpc/configs/ppc6xx_defconfig
++++ b/arch/powerpc/configs/ppc6xx_defconfig
+@@ -772,7 +772,6 @@ CONFIG_FB_TRIDENT=m
+ CONFIG_FB_SM501=m
+ CONFIG_FB_IBM_GXT4500=y
+ CONFIG_LCD_PLATFORM=m
+-CONFIG_VGACON_SOFT_SCROLLBACK=y
+ CONFIG_FRAMEBUFFER_CONSOLE=y
+ CONFIG_FRAMEBUFFER_CONSOLE_ROTATION=y
+ CONFIG_LOGO=y
+diff --git a/arch/x86/configs/i386_defconfig b/arch/x86/configs/i386_defconfig
+index 550904591e94d..62e834793af05 100644
+--- a/arch/x86/configs/i386_defconfig
++++ b/arch/x86/configs/i386_defconfig
+@@ -202,7 +202,6 @@ CONFIG_FB_MODE_HELPERS=y
+ CONFIG_FB_TILEBLITTING=y
+ CONFIG_FB_EFI=y
+ # CONFIG_LCD_CLASS_DEVICE is not set
+-CONFIG_VGACON_SOFT_SCROLLBACK=y
+ CONFIG_LOGO=y
+ # CONFIG_LOGO_LINUX_MONO is not set
+ # CONFIG_LOGO_LINUX_VGA16 is not set
+diff --git a/arch/x86/configs/x86_64_defconfig b/arch/x86/configs/x86_64_defconfig
+index 6149610090751..075d97ec20d56 100644
+--- a/arch/x86/configs/x86_64_defconfig
++++ b/arch/x86/configs/x86_64_defconfig
+@@ -197,7 +197,6 @@ CONFIG_FB_MODE_HELPERS=y
+ CONFIG_FB_TILEBLITTING=y
+ CONFIG_FB_EFI=y
+ # CONFIG_LCD_CLASS_DEVICE is not set
+-CONFIG_VGACON_SOFT_SCROLLBACK=y
+ CONFIG_LOGO=y
+ # CONFIG_LOGO_LINUX_MONO is not set
+ # CONFIG_LOGO_LINUX_VGA16 is not set
+diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
+index 9516a958e7801..1e6724c30cc05 100644
+--- a/arch/x86/kvm/mmu/mmu.c
++++ b/arch/x86/kvm/mmu/mmu.c
+@@ -2521,7 +2521,7 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu,
+ }
+
+ if (sp->unsync_children)
+- kvm_make_request(KVM_REQ_TLB_FLUSH_CURRENT, vcpu);
++ kvm_make_request(KVM_REQ_MMU_SYNC, vcpu);
+
+ __clear_sp_write_flooding_count(sp);
+ trace_kvm_mmu_get_page(sp, false);
+diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
+index 11e4df5600183..a5810928b011f 100644
+--- a/arch/x86/kvm/vmx/nested.c
++++ b/arch/x86/kvm/vmx/nested.c
+@@ -4620,7 +4620,7 @@ void nested_vmx_pmu_entry_exit_ctls_update(struct kvm_vcpu *vcpu)
+ vmx->nested.msrs.entry_ctls_high &=
+ ~VM_ENTRY_LOAD_IA32_PERF_GLOBAL_CTRL;
+ vmx->nested.msrs.exit_ctls_high &=
+- ~VM_ENTRY_LOAD_IA32_PERF_GLOBAL_CTRL;
++ ~VM_EXIT_LOAD_IA32_PERF_GLOBAL_CTRL;
+ }
+ }
+
+diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
+index eb33c764d1593..b934e99649436 100644
+--- a/arch/x86/kvm/vmx/vmx.c
++++ b/arch/x86/kvm/vmx/vmx.c
+@@ -6028,6 +6028,7 @@ static int vmx_handle_exit(struct kvm_vcpu *vcpu, fastpath_t exit_fastpath)
+ (exit_reason != EXIT_REASON_EXCEPTION_NMI &&
+ exit_reason != EXIT_REASON_EPT_VIOLATION &&
+ exit_reason != EXIT_REASON_PML_FULL &&
++ exit_reason != EXIT_REASON_APIC_ACCESS &&
+ exit_reason != EXIT_REASON_TASK_SWITCH)) {
+ vcpu->run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
+ vcpu->run->internal.suberror = KVM_INTERNAL_ERROR_DELIVERY_EV;
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index f7304132d5907..f5481ae588aff 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -2696,7 +2696,7 @@ static int kvm_pv_enable_async_pf(struct kvm_vcpu *vcpu, u64 data)
+ return 1;
+
+ if (!lapic_in_kernel(vcpu))
+- return 1;
++ return data ? 1 : 0;
+
+ vcpu->arch.apf.msr_en_val = data;
+
+diff --git a/block/bio.c b/block/bio.c
+index b1883adc8f154..eac129f21d2df 100644
+--- a/block/bio.c
++++ b/block/bio.c
+@@ -877,8 +877,10 @@ bool __bio_try_merge_page(struct bio *bio, struct page *page,
+ struct bio_vec *bv = &bio->bi_io_vec[bio->bi_vcnt - 1];
+
+ if (page_is_mergeable(bv, page, len, off, same_page)) {
+- if (bio->bi_iter.bi_size > UINT_MAX - len)
++ if (bio->bi_iter.bi_size > UINT_MAX - len) {
++ *same_page = false;
+ return false;
++ }
+ bv->bv_len += len;
+ bio->bi_iter.bi_size += len;
+ return true;
+diff --git a/block/partitions/core.c b/block/partitions/core.c
+index 534e11285a8d4..b45539764c994 100644
+--- a/block/partitions/core.c
++++ b/block/partitions/core.c
+@@ -529,7 +529,7 @@ int bdev_del_partition(struct block_device *bdev, int partno)
+
+ bdevp = bdget_disk(bdev->bd_disk, partno);
+ if (!bdevp)
+- return -ENOMEM;
++ return -ENXIO;
+
+ mutex_lock(&bdevp->bd_mutex);
+ mutex_lock_nested(&bdev->bd_mutex, 1);
+diff --git a/drivers/atm/firestream.c b/drivers/atm/firestream.c
+index cc87004d5e2d6..5f22555933df1 100644
+--- a/drivers/atm/firestream.c
++++ b/drivers/atm/firestream.c
+@@ -998,6 +998,7 @@ static int fs_open(struct atm_vcc *atm_vcc)
+ error = make_rate (pcr, r, &tmc0, NULL);
+ if (error) {
+ kfree(tc);
++ kfree(vcc);
+ return error;
+ }
+ }
+diff --git a/drivers/block/rbd.c b/drivers/block/rbd.c
+index 4f61e92094614..ae6d7dbbf5d1e 100644
+--- a/drivers/block/rbd.c
++++ b/drivers/block/rbd.c
+@@ -5120,6 +5120,9 @@ static ssize_t rbd_config_info_show(struct device *dev,
+ {
+ struct rbd_device *rbd_dev = dev_to_rbd_dev(dev);
+
++ if (!capable(CAP_SYS_ADMIN))
++ return -EPERM;
++
+ return sprintf(buf, "%s\n", rbd_dev->config_info);
+ }
+
+@@ -5231,6 +5234,9 @@ static ssize_t rbd_image_refresh(struct device *dev,
+ struct rbd_device *rbd_dev = dev_to_rbd_dev(dev);
+ int ret;
+
++ if (!capable(CAP_SYS_ADMIN))
++ return -EPERM;
++
+ ret = rbd_dev_refresh(rbd_dev);
+ if (ret)
+ return ret;
+@@ -7059,6 +7065,9 @@ static ssize_t do_rbd_add(struct bus_type *bus,
+ struct rbd_client *rbdc;
+ int rc;
+
++ if (!capable(CAP_SYS_ADMIN))
++ return -EPERM;
++
+ if (!try_module_get(THIS_MODULE))
+ return -ENODEV;
+
+@@ -7209,6 +7218,9 @@ static ssize_t do_rbd_remove(struct bus_type *bus,
+ bool force = false;
+ int ret;
+
++ if (!capable(CAP_SYS_ADMIN))
++ return -EPERM;
++
+ dev_id = -1;
+ opt_buf[0] = '\0';
+ sscanf(buf, "%d %5s", &dev_id, opt_buf);
+diff --git a/drivers/cpufreq/intel_pstate.c b/drivers/cpufreq/intel_pstate.c
+index 8c730a47e0537..36a469150ff9c 100644
+--- a/drivers/cpufreq/intel_pstate.c
++++ b/drivers/cpufreq/intel_pstate.c
+@@ -762,7 +762,7 @@ static void intel_pstate_get_hwp_max(unsigned int cpu, int *phy_max,
+
+ rdmsrl_on_cpu(cpu, MSR_HWP_CAPABILITIES, &cap);
+ WRITE_ONCE(all_cpu_data[cpu]->hwp_cap_cached, cap);
+- if (global.no_turbo)
++ if (global.no_turbo || global.turbo_disabled)
+ *current_max = HWP_GUARANTEED_PERF(cap);
+ else
+ *current_max = HWP_HIGHEST_PERF(cap);
+@@ -2534,9 +2534,15 @@ static int intel_pstate_update_status(const char *buf, size_t size)
+ {
+ int ret;
+
+- if (size == 3 && !strncmp(buf, "off", size))
+- return intel_pstate_driver ?
+- intel_pstate_unregister_driver() : -EINVAL;
++ if (size == 3 && !strncmp(buf, "off", size)) {
++ if (!intel_pstate_driver)
++ return -EINVAL;
++
++ if (hwp_active)
++ return -EBUSY;
++
++ return intel_pstate_unregister_driver();
++ }
+
+ if (size == 6 && !strncmp(buf, "active", size)) {
+ if (intel_pstate_driver) {
+diff --git a/drivers/dma/acpi-dma.c b/drivers/dma/acpi-dma.c
+index 8a05db3343d39..dcbcb712de6e8 100644
+--- a/drivers/dma/acpi-dma.c
++++ b/drivers/dma/acpi-dma.c
+@@ -135,11 +135,13 @@ static void acpi_dma_parse_csrt(struct acpi_device *adev, struct acpi_dma *adma)
+ if (ret < 0) {
+ dev_warn(&adev->dev,
+ "error in parsing resource group\n");
+- return;
++ break;
+ }
+
+ grp = (struct acpi_csrt_group *)((void *)grp + grp->length);
+ }
++
++ acpi_put_table((struct acpi_table_header *)csrt);
+ }
+
+ /**
+diff --git a/drivers/dma/dma-jz4780.c b/drivers/dma/dma-jz4780.c
+index 448f663da89c6..8beed91428bd6 100644
+--- a/drivers/dma/dma-jz4780.c
++++ b/drivers/dma/dma-jz4780.c
+@@ -879,24 +879,11 @@ static int jz4780_dma_probe(struct platform_device *pdev)
+ return -EINVAL;
+ }
+
+- ret = platform_get_irq(pdev, 0);
+- if (ret < 0)
+- return ret;
+-
+- jzdma->irq = ret;
+-
+- ret = request_irq(jzdma->irq, jz4780_dma_irq_handler, 0, dev_name(dev),
+- jzdma);
+- if (ret) {
+- dev_err(dev, "failed to request IRQ %u!\n", jzdma->irq);
+- return ret;
+- }
+-
+ jzdma->clk = devm_clk_get(dev, NULL);
+ if (IS_ERR(jzdma->clk)) {
+ dev_err(dev, "failed to get clock\n");
+ ret = PTR_ERR(jzdma->clk);
+- goto err_free_irq;
++ return ret;
+ }
+
+ clk_prepare_enable(jzdma->clk);
+@@ -949,10 +936,23 @@ static int jz4780_dma_probe(struct platform_device *pdev)
+ jzchan->vchan.desc_free = jz4780_dma_desc_free;
+ }
+
++ ret = platform_get_irq(pdev, 0);
++ if (ret < 0)
++ goto err_disable_clk;
++
++ jzdma->irq = ret;
++
++ ret = request_irq(jzdma->irq, jz4780_dma_irq_handler, 0, dev_name(dev),
++ jzdma);
++ if (ret) {
++ dev_err(dev, "failed to request IRQ %u!\n", jzdma->irq);
++ goto err_disable_clk;
++ }
++
+ ret = dmaenginem_async_device_register(dd);
+ if (ret) {
+ dev_err(dev, "failed to register device\n");
+- goto err_disable_clk;
++ goto err_free_irq;
+ }
+
+ /* Register with OF DMA helpers. */
+@@ -960,17 +960,17 @@ static int jz4780_dma_probe(struct platform_device *pdev)
+ jzdma);
+ if (ret) {
+ dev_err(dev, "failed to register OF DMA controller\n");
+- goto err_disable_clk;
++ goto err_free_irq;
+ }
+
+ dev_info(dev, "JZ4780 DMA controller initialised\n");
+ return 0;
+
+-err_disable_clk:
+- clk_disable_unprepare(jzdma->clk);
+-
+ err_free_irq:
+ free_irq(jzdma->irq, jzdma);
++
++err_disable_clk:
++ clk_disable_unprepare(jzdma->clk);
+ return ret;
+ }
+
+diff --git a/drivers/firmware/efi/embedded-firmware.c b/drivers/firmware/efi/embedded-firmware.c
+index a1b199de9006e..84e32634ed6cd 100644
+--- a/drivers/firmware/efi/embedded-firmware.c
++++ b/drivers/firmware/efi/embedded-firmware.c
+@@ -16,9 +16,9 @@
+
+ /* Exported for use by lib/test_firmware.c only */
+ LIST_HEAD(efi_embedded_fw_list);
+-EXPORT_SYMBOL_GPL(efi_embedded_fw_list);
+-
+-static bool checked_for_fw;
++EXPORT_SYMBOL_NS_GPL(efi_embedded_fw_list, TEST_FIRMWARE);
++bool efi_embedded_fw_checked;
++EXPORT_SYMBOL_NS_GPL(efi_embedded_fw_checked, TEST_FIRMWARE);
+
+ static const struct dmi_system_id * const embedded_fw_table[] = {
+ #ifdef CONFIG_TOUCHSCREEN_DMI
+@@ -119,14 +119,14 @@ void __init efi_check_for_embedded_firmwares(void)
+ }
+ }
+
+- checked_for_fw = true;
++ efi_embedded_fw_checked = true;
+ }
+
+ int efi_get_embedded_fw(const char *name, const u8 **data, size_t *size)
+ {
+ struct efi_embedded_fw *iter, *fw = NULL;
+
+- if (!checked_for_fw) {
++ if (!efi_embedded_fw_checked) {
+ pr_warn("Warning %s called while we did not check for embedded fw\n",
+ __func__);
+ return -ENOENT;
+diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c b/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
+index 753cb2cf6b77e..3adf9c1dfdbb0 100644
+--- a/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
++++ b/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
+@@ -3587,7 +3587,8 @@ static int smu7_read_sensor(struct pp_hwmgr *hwmgr, int idx,
+ case AMDGPU_PP_SENSOR_GPU_POWER:
+ return smu7_get_gpu_power(hwmgr, (uint32_t *)value);
+ case AMDGPU_PP_SENSOR_VDDGFX:
+- if ((data->vr_config & 0xff) == 0x2)
++ if ((data->vr_config & VRCONF_VDDGFX_MASK) ==
++ (VR_SVI2_PLANE_2 << VRCONF_VDDGFX_SHIFT))
+ val_vid = PHM_READ_INDIRECT_FIELD(hwmgr->device,
+ CGS_IND_REG__SMC, PWR_SVI2_STATUS, PLANE2_VID);
+ else
+diff --git a/drivers/gpu/drm/msm/adreno/a2xx_gpu.c b/drivers/gpu/drm/msm/adreno/a2xx_gpu.c
+index 6021f8d9efd1f..48fa49f69d6d0 100644
+--- a/drivers/gpu/drm/msm/adreno/a2xx_gpu.c
++++ b/drivers/gpu/drm/msm/adreno/a2xx_gpu.c
+@@ -164,6 +164,11 @@ static int a2xx_hw_init(struct msm_gpu *gpu)
+ if (ret)
+ return ret;
+
++ gpu_write(gpu, REG_AXXX_CP_RB_CNTL,
++ MSM_GPU_RB_CNTL_DEFAULT | AXXX_CP_RB_CNTL_NO_UPDATE);
++
++ gpu_write(gpu, REG_AXXX_CP_RB_BASE, lower_32_bits(gpu->rb[0]->iova));
++
+ /* NOTE: PM4/micro-engine firmware registers look to be the same
+ * for a2xx and a3xx.. we could possibly push that part down to
+ * adreno_gpu base class. Or push both PM4 and PFP but
+diff --git a/drivers/gpu/drm/msm/adreno/a3xx_gpu.c b/drivers/gpu/drm/msm/adreno/a3xx_gpu.c
+index 0a5ea9f56cb88..f6471145a7a60 100644
+--- a/drivers/gpu/drm/msm/adreno/a3xx_gpu.c
++++ b/drivers/gpu/drm/msm/adreno/a3xx_gpu.c
+@@ -211,6 +211,16 @@ static int a3xx_hw_init(struct msm_gpu *gpu)
+ if (ret)
+ return ret;
+
++ /*
++ * Use the default ringbuffer size and block size but disable the RPTR
++ * shadow
++ */
++ gpu_write(gpu, REG_AXXX_CP_RB_CNTL,
++ MSM_GPU_RB_CNTL_DEFAULT | AXXX_CP_RB_CNTL_NO_UPDATE);
++
++ /* Set the ringbuffer address */
++ gpu_write(gpu, REG_AXXX_CP_RB_BASE, lower_32_bits(gpu->rb[0]->iova));
++
+ /* setup access protection: */
+ gpu_write(gpu, REG_A3XX_CP_PROTECT_CTRL, 0x00000007);
+
+diff --git a/drivers/gpu/drm/msm/adreno/a4xx_gpu.c b/drivers/gpu/drm/msm/adreno/a4xx_gpu.c
+index b9b26b2bf9c54..9547536006254 100644
+--- a/drivers/gpu/drm/msm/adreno/a4xx_gpu.c
++++ b/drivers/gpu/drm/msm/adreno/a4xx_gpu.c
+@@ -267,6 +267,16 @@ static int a4xx_hw_init(struct msm_gpu *gpu)
+ if (ret)
+ return ret;
+
++ /*
++ * Use the default ringbuffer size and block size but disable the RPTR
++ * shadow
++ */
++ gpu_write(gpu, REG_A4XX_CP_RB_CNTL,
++ MSM_GPU_RB_CNTL_DEFAULT | AXXX_CP_RB_CNTL_NO_UPDATE);
++
++ /* Set the ringbuffer address */
++ gpu_write(gpu, REG_A4XX_CP_RB_BASE, lower_32_bits(gpu->rb[0]->iova));
++
+ /* Load PM4: */
+ ptr = (uint32_t *)(adreno_gpu->fw[ADRENO_FW_PM4]->data);
+ len = adreno_gpu->fw[ADRENO_FW_PM4]->size / 4;
+diff --git a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
+index d95970a73fb4b..1bf0969ce725f 100644
+--- a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
++++ b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
+@@ -702,8 +702,6 @@ static int a5xx_hw_init(struct msm_gpu *gpu)
+ if (ret)
+ return ret;
+
+- a5xx_preempt_hw_init(gpu);
+-
+ if (!adreno_is_a510(adreno_gpu))
+ a5xx_gpmu_ucode_init(gpu);
+
+@@ -711,6 +709,15 @@ static int a5xx_hw_init(struct msm_gpu *gpu)
+ if (ret)
+ return ret;
+
++ /* Set the ringbuffer address */
++ gpu_write64(gpu, REG_A5XX_CP_RB_BASE, REG_A5XX_CP_RB_BASE_HI,
++ gpu->rb[0]->iova);
++
++ gpu_write(gpu, REG_A5XX_CP_RB_CNTL,
++ MSM_GPU_RB_CNTL_DEFAULT | AXXX_CP_RB_CNTL_NO_UPDATE);
++
++ a5xx_preempt_hw_init(gpu);
++
+ /* Disable the interrupts through the initial bringup stage */
+ gpu_write(gpu, REG_A5XX_RBBM_INT_0_MASK, A5XX_INT_MASK);
+
+@@ -1510,7 +1517,8 @@ struct msm_gpu *a5xx_gpu_init(struct drm_device *dev)
+
+ check_speed_bin(&pdev->dev);
+
+- ret = adreno_gpu_init(dev, pdev, adreno_gpu, &funcs, 4);
++ /* Restricting nr_rings to 1 to temporarily disable preemption */
++ ret = adreno_gpu_init(dev, pdev, adreno_gpu, &funcs, 1);
+ if (ret) {
+ a5xx_destroy(&(a5xx_gpu->base.base));
+ return ERR_PTR(ret);
+diff --git a/drivers/gpu/drm/msm/adreno/a5xx_gpu.h b/drivers/gpu/drm/msm/adreno/a5xx_gpu.h
+index 54868d4e3958f..1e5b1a15a70f0 100644
+--- a/drivers/gpu/drm/msm/adreno/a5xx_gpu.h
++++ b/drivers/gpu/drm/msm/adreno/a5xx_gpu.h
+@@ -31,6 +31,7 @@ struct a5xx_gpu {
+ struct msm_ringbuffer *next_ring;
+
+ struct drm_gem_object *preempt_bo[MSM_GPU_MAX_RINGS];
++ struct drm_gem_object *preempt_counters_bo[MSM_GPU_MAX_RINGS];
+ struct a5xx_preempt_record *preempt[MSM_GPU_MAX_RINGS];
+ uint64_t preempt_iova[MSM_GPU_MAX_RINGS];
+
+diff --git a/drivers/gpu/drm/msm/adreno/a5xx_preempt.c b/drivers/gpu/drm/msm/adreno/a5xx_preempt.c
+index 9cf9353a7ff11..9f3fe177b00e9 100644
+--- a/drivers/gpu/drm/msm/adreno/a5xx_preempt.c
++++ b/drivers/gpu/drm/msm/adreno/a5xx_preempt.c
+@@ -226,19 +226,31 @@ static int preempt_init_ring(struct a5xx_gpu *a5xx_gpu,
+ struct adreno_gpu *adreno_gpu = &a5xx_gpu->base;
+ struct msm_gpu *gpu = &adreno_gpu->base;
+ struct a5xx_preempt_record *ptr;
+- struct drm_gem_object *bo = NULL;
+- u64 iova = 0;
++ void *counters;
++ struct drm_gem_object *bo = NULL, *counters_bo = NULL;
++ u64 iova = 0, counters_iova = 0;
+
+ ptr = msm_gem_kernel_new(gpu->dev,
+ A5XX_PREEMPT_RECORD_SIZE + A5XX_PREEMPT_COUNTER_SIZE,
+- MSM_BO_UNCACHED, gpu->aspace, &bo, &iova);
++ MSM_BO_UNCACHED | MSM_BO_MAP_PRIV, gpu->aspace, &bo, &iova);
+
+ if (IS_ERR(ptr))
+ return PTR_ERR(ptr);
+
++ /* The buffer to store counters needs to be unprivileged */
++ counters = msm_gem_kernel_new(gpu->dev,
++ A5XX_PREEMPT_COUNTER_SIZE,
++ MSM_BO_UNCACHED, gpu->aspace, &counters_bo, &counters_iova);
++ if (IS_ERR(counters)) {
++ msm_gem_kernel_put(bo, gpu->aspace, true);
++ return PTR_ERR(counters);
++ }
++
+ msm_gem_object_set_name(bo, "preempt");
++ msm_gem_object_set_name(counters_bo, "preempt_counters");
+
+ a5xx_gpu->preempt_bo[ring->id] = bo;
++ a5xx_gpu->preempt_counters_bo[ring->id] = counters_bo;
+ a5xx_gpu->preempt_iova[ring->id] = iova;
+ a5xx_gpu->preempt[ring->id] = ptr;
+
+@@ -249,7 +261,7 @@ static int preempt_init_ring(struct a5xx_gpu *a5xx_gpu,
+ ptr->data = 0;
+ ptr->cntl = MSM_GPU_RB_CNTL_DEFAULT;
+ ptr->rptr_addr = rbmemptr(ring, rptr);
+- ptr->counter = iova + A5XX_PREEMPT_RECORD_SIZE;
++ ptr->counter = counters_iova;
+
+ return 0;
+ }
+@@ -260,8 +272,11 @@ void a5xx_preempt_fini(struct msm_gpu *gpu)
+ struct a5xx_gpu *a5xx_gpu = to_a5xx_gpu(adreno_gpu);
+ int i;
+
+- for (i = 0; i < gpu->nr_rings; i++)
++ for (i = 0; i < gpu->nr_rings; i++) {
+ msm_gem_kernel_put(a5xx_gpu->preempt_bo[i], gpu->aspace, true);
++ msm_gem_kernel_put(a5xx_gpu->preempt_counters_bo[i],
++ gpu->aspace, true);
++ }
+ }
+
+ void a5xx_preempt_init(struct msm_gpu *gpu)
+diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+index 7768557cdfb28..b7dc350d96fc8 100644
+--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
++++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+@@ -557,6 +557,13 @@ static int a6xx_hw_init(struct msm_gpu *gpu)
+ if (ret)
+ goto out;
+
++ /* Set the ringbuffer address */
++ gpu_write64(gpu, REG_A6XX_CP_RB_BASE, REG_A6XX_CP_RB_BASE_HI,
++ gpu->rb[0]->iova);
++
++ gpu_write(gpu, REG_A6XX_CP_RB_CNTL,
++ MSM_GPU_RB_CNTL_DEFAULT | AXXX_CP_RB_CNTL_NO_UPDATE);
++
+ /* Always come up on rb 0 */
+ a6xx_gpu->cur_ring = gpu->rb[0];
+
+diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.c b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
+index e7b39f3ca33dc..a74ccc5b8220d 100644
+--- a/drivers/gpu/drm/msm/adreno/adreno_gpu.c
++++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
+@@ -400,26 +400,6 @@ int adreno_hw_init(struct msm_gpu *gpu)
+ ring->memptrs->rptr = 0;
+ }
+
+- /*
+- * Setup REG_CP_RB_CNTL. The same value is used across targets (with
+- * the excpetion of A430 that disables the RPTR shadow) - the cacluation
+- * for the ringbuffer size and block size is moved to msm_gpu.h for the
+- * pre-processor to deal with and the A430 variant is ORed in here
+- */
+- adreno_gpu_write(adreno_gpu, REG_ADRENO_CP_RB_CNTL,
+- MSM_GPU_RB_CNTL_DEFAULT |
+- (adreno_is_a430(adreno_gpu) ? AXXX_CP_RB_CNTL_NO_UPDATE : 0));
+-
+- /* Setup ringbuffer address - use ringbuffer[0] for GPU init */
+- adreno_gpu_write64(adreno_gpu, REG_ADRENO_CP_RB_BASE,
+- REG_ADRENO_CP_RB_BASE_HI, gpu->rb[0]->iova);
+-
+- if (!adreno_is_a430(adreno_gpu)) {
+- adreno_gpu_write64(adreno_gpu, REG_ADRENO_CP_RB_RPTR_ADDR,
+- REG_ADRENO_CP_RB_RPTR_ADDR_HI,
+- rbmemptr(gpu->rb[0], rptr));
+- }
+-
+ return 0;
+ }
+
+@@ -427,11 +407,8 @@ int adreno_hw_init(struct msm_gpu *gpu)
+ static uint32_t get_rptr(struct adreno_gpu *adreno_gpu,
+ struct msm_ringbuffer *ring)
+ {
+- if (adreno_is_a430(adreno_gpu))
+- return ring->memptrs->rptr = adreno_gpu_read(
+- adreno_gpu, REG_ADRENO_CP_RB_RPTR);
+- else
+- return ring->memptrs->rptr;
++ return ring->memptrs->rptr = adreno_gpu_read(
++ adreno_gpu, REG_ADRENO_CP_RB_RPTR);
+ }
+
+ struct msm_ringbuffer *adreno_active_ring(struct msm_gpu *gpu)
+diff --git a/drivers/gpu/drm/msm/msm_ringbuffer.c b/drivers/gpu/drm/msm/msm_ringbuffer.c
+index e397c44cc0112..39ecb5a18431e 100644
+--- a/drivers/gpu/drm/msm/msm_ringbuffer.c
++++ b/drivers/gpu/drm/msm/msm_ringbuffer.c
+@@ -27,7 +27,8 @@ struct msm_ringbuffer *msm_ringbuffer_new(struct msm_gpu *gpu, int id,
+ ring->id = id;
+
+ ring->start = msm_gem_kernel_new(gpu->dev, MSM_GPU_RINGBUFFER_SZ,
+- MSM_BO_WC, gpu->aspace, &ring->bo, &ring->iova);
++ MSM_BO_WC | MSM_BO_GPU_READONLY, gpu->aspace, &ring->bo,
++ &ring->iova);
+
+ if (IS_ERR(ring->start)) {
+ ret = PTR_ERR(ring->start);
+diff --git a/drivers/gpu/drm/sun4i/sun4i_backend.c b/drivers/gpu/drm/sun4i/sun4i_backend.c
+index 072ea113e6be5..ed5d866178028 100644
+--- a/drivers/gpu/drm/sun4i/sun4i_backend.c
++++ b/drivers/gpu/drm/sun4i/sun4i_backend.c
+@@ -589,8 +589,7 @@ static int sun4i_backend_atomic_check(struct sunxi_engine *engine,
+
+ /* We can't have an alpha plane at the lowest position */
+ if (!backend->quirks->supports_lowest_plane_alpha &&
+- (plane_states[0]->fb->format->has_alpha ||
+- (plane_states[0]->alpha != DRM_BLEND_ALPHA_OPAQUE)))
++ (plane_states[0]->alpha != DRM_BLEND_ALPHA_OPAQUE))
+ return -EINVAL;
+
+ for (i = 1; i < num_planes; i++) {
+@@ -995,7 +994,6 @@ static const struct sun4i_backend_quirks sun6i_backend_quirks = {
+
+ static const struct sun4i_backend_quirks sun7i_backend_quirks = {
+ .needs_output_muxing = true,
+- .supports_lowest_plane_alpha = true,
+ };
+
+ static const struct sun4i_backend_quirks sun8i_a33_backend_quirks = {
+diff --git a/drivers/gpu/drm/sun4i/sun4i_tcon.c b/drivers/gpu/drm/sun4i/sun4i_tcon.c
+index 359b56e43b83c..24d95f058918c 100644
+--- a/drivers/gpu/drm/sun4i/sun4i_tcon.c
++++ b/drivers/gpu/drm/sun4i/sun4i_tcon.c
+@@ -1433,14 +1433,18 @@ static int sun8i_r40_tcon_tv_set_mux(struct sun4i_tcon *tcon,
+ if (IS_ENABLED(CONFIG_DRM_SUN8I_TCON_TOP) &&
+ encoder->encoder_type == DRM_MODE_ENCODER_TMDS) {
+ ret = sun8i_tcon_top_set_hdmi_src(&pdev->dev, id);
+- if (ret)
++ if (ret) {
++ put_device(&pdev->dev);
+ return ret;
++ }
+ }
+
+ if (IS_ENABLED(CONFIG_DRM_SUN8I_TCON_TOP)) {
+ ret = sun8i_tcon_top_de_config(&pdev->dev, tcon->id, id);
+- if (ret)
++ if (ret) {
++ put_device(&pdev->dev);
+ return ret;
++ }
+ }
+
+ return 0;
+diff --git a/drivers/gpu/drm/sun4i/sun6i_mipi_dsi.c b/drivers/gpu/drm/sun4i/sun6i_mipi_dsi.c
+index aa67cb037e9d1..32d4c3f7fc4eb 100644
+--- a/drivers/gpu/drm/sun4i/sun6i_mipi_dsi.c
++++ b/drivers/gpu/drm/sun4i/sun6i_mipi_dsi.c
+@@ -889,7 +889,7 @@ static int sun6i_dsi_dcs_write_long(struct sun6i_dsi *dsi,
+ regmap_write(dsi->regs, SUN6I_DSI_CMD_TX_REG(0),
+ sun6i_dsi_dcs_build_pkt_hdr(dsi, msg));
+
+- bounce = kzalloc(msg->tx_len + sizeof(crc), GFP_KERNEL);
++ bounce = kzalloc(ALIGN(msg->tx_len + sizeof(crc), 4), GFP_KERNEL);
+ if (!bounce)
+ return -ENOMEM;
+
+@@ -900,7 +900,7 @@ static int sun6i_dsi_dcs_write_long(struct sun6i_dsi *dsi,
+ memcpy((u8 *)bounce + msg->tx_len, &crc, sizeof(crc));
+ len += sizeof(crc);
+
+- regmap_bulk_write(dsi->regs, SUN6I_DSI_CMD_TX_REG(1), bounce, len);
++ regmap_bulk_write(dsi->regs, SUN6I_DSI_CMD_TX_REG(1), bounce, DIV_ROUND_UP(len, 4));
+ regmap_write(dsi->regs, SUN6I_DSI_CMD_CTL_REG, len + 4 - 1);
+ kfree(bounce);
+
+diff --git a/drivers/gpu/drm/sun4i/sun8i_vi_layer.c b/drivers/gpu/drm/sun4i/sun8i_vi_layer.c
+index 22c8c5375d0db..c0147af6a8406 100644
+--- a/drivers/gpu/drm/sun4i/sun8i_vi_layer.c
++++ b/drivers/gpu/drm/sun4i/sun8i_vi_layer.c
+@@ -211,7 +211,7 @@ static int sun8i_vi_layer_update_coord(struct sun8i_mixer *mixer, int channel,
+ return 0;
+ }
+
+-static bool sun8i_vi_layer_get_csc_mode(const struct drm_format_info *format)
++static u32 sun8i_vi_layer_get_csc_mode(const struct drm_format_info *format)
+ {
+ if (!format->is_yuv)
+ return SUN8I_CSC_MODE_OFF;
+diff --git a/drivers/gpu/drm/tve200/tve200_display.c b/drivers/gpu/drm/tve200/tve200_display.c
+index d733bbc4ac0e5..17ff24d999d18 100644
+--- a/drivers/gpu/drm/tve200/tve200_display.c
++++ b/drivers/gpu/drm/tve200/tve200_display.c
+@@ -14,6 +14,7 @@
+ #include <linux/version.h>
+ #include <linux/dma-buf.h>
+ #include <linux/of_graph.h>
++#include <linux/delay.h>
+
+ #include <drm/drm_fb_cma_helper.h>
+ #include <drm/drm_fourcc.h>
+@@ -130,9 +131,25 @@ static void tve200_display_enable(struct drm_simple_display_pipe *pipe,
+ struct drm_connector *connector = priv->connector;
+ u32 format = fb->format->format;
+ u32 ctrl1 = 0;
++ int retries;
+
+ clk_prepare_enable(priv->clk);
+
++ /* Reset the TVE200 and wait for it to come back online */
++ writel(TVE200_CTRL_4_RESET, priv->regs + TVE200_CTRL_4);
++ for (retries = 0; retries < 5; retries++) {
++ usleep_range(30000, 50000);
++ if (readl(priv->regs + TVE200_CTRL_4) & TVE200_CTRL_4_RESET)
++ continue;
++ else
++ break;
++ }
++ if (retries == 5 &&
++ readl(priv->regs + TVE200_CTRL_4) & TVE200_CTRL_4_RESET) {
++ dev_err(drm->dev, "can't get hardware out of reset\n");
++ return;
++ }
++
+ /* Function 1 */
+ ctrl1 |= TVE200_CTRL_CSMODE;
+ /* Interlace mode for CCIR656: parameterize? */
+@@ -230,8 +247,9 @@ static void tve200_display_disable(struct drm_simple_display_pipe *pipe)
+
+ drm_crtc_vblank_off(crtc);
+
+- /* Disable and Power Down */
++ /* Disable put into reset and Power Down */
+ writel(0, priv->regs + TVE200_CTRL);
++ writel(TVE200_CTRL_4_RESET, priv->regs + TVE200_CTRL_4);
+
+ clk_disable_unprepare(priv->clk);
+ }
+@@ -279,6 +297,8 @@ static int tve200_display_enable_vblank(struct drm_simple_display_pipe *pipe)
+ struct drm_device *drm = crtc->dev;
+ struct tve200_drm_dev_private *priv = drm->dev_private;
+
++ /* Clear any IRQs and enable */
++ writel(0xFF, priv->regs + TVE200_INT_CLR);
+ writel(TVE200_INT_V_STATUS, priv->regs + TVE200_INT_EN);
+ return 0;
+ }
+diff --git a/drivers/gpu/drm/virtio/virtgpu_display.c b/drivers/gpu/drm/virtio/virtgpu_display.c
+index cc7fd957a3072..2b8421a35ab94 100644
+--- a/drivers/gpu/drm/virtio/virtgpu_display.c
++++ b/drivers/gpu/drm/virtio/virtgpu_display.c
+@@ -123,6 +123,17 @@ static int virtio_gpu_crtc_atomic_check(struct drm_crtc *crtc,
+ static void virtio_gpu_crtc_atomic_flush(struct drm_crtc *crtc,
+ struct drm_crtc_state *old_state)
+ {
++ struct virtio_gpu_output *output = drm_crtc_to_virtio_gpu_output(crtc);
++
++ /*
++ * virtio-gpu can't do modeset and plane update operations
++ * independent from each other. So the actual modeset happens
++ * in the plane update callback, and here we just check
++ * whenever we must force the modeset.
++ */
++ if (drm_atomic_crtc_needs_modeset(crtc->state)) {
++ output->needs_modeset = true;
++ }
+ }
+
+ static const struct drm_crtc_helper_funcs virtio_gpu_crtc_helper_funcs = {
+diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.h b/drivers/gpu/drm/virtio/virtgpu_drv.h
+index 9ff9f4ac0522a..4ab1b0ba29253 100644
+--- a/drivers/gpu/drm/virtio/virtgpu_drv.h
++++ b/drivers/gpu/drm/virtio/virtgpu_drv.h
+@@ -138,6 +138,7 @@ struct virtio_gpu_output {
+ int cur_x;
+ int cur_y;
+ bool enabled;
++ bool needs_modeset;
+ };
+ #define drm_crtc_to_virtio_gpu_output(x) \
+ container_of(x, struct virtio_gpu_output, crtc)
+diff --git a/drivers/gpu/drm/virtio/virtgpu_plane.c b/drivers/gpu/drm/virtio/virtgpu_plane.c
+index 52d24179bcecc..65757409d9ed1 100644
+--- a/drivers/gpu/drm/virtio/virtgpu_plane.c
++++ b/drivers/gpu/drm/virtio/virtgpu_plane.c
+@@ -163,7 +163,9 @@ static void virtio_gpu_primary_plane_update(struct drm_plane *plane,
+ plane->state->src_w != old_state->src_w ||
+ plane->state->src_h != old_state->src_h ||
+ plane->state->src_x != old_state->src_x ||
+- plane->state->src_y != old_state->src_y) {
++ plane->state->src_y != old_state->src_y ||
++ output->needs_modeset) {
++ output->needs_modeset = false;
+ DRM_DEBUG("handle 0x%x, crtc %dx%d+%d+%d, src %dx%d+%d+%d\n",
+ bo->hw_res_handle,
+ plane->state->crtc_w, plane->state->crtc_h,
+diff --git a/drivers/hid/hid-elan.c b/drivers/hid/hid-elan.c
+index 45c4f888b7c4e..dae193749d443 100644
+--- a/drivers/hid/hid-elan.c
++++ b/drivers/hid/hid-elan.c
+@@ -188,6 +188,7 @@ static int elan_input_configured(struct hid_device *hdev, struct hid_input *hi)
+ ret = input_mt_init_slots(input, ELAN_MAX_FINGERS, INPUT_MT_POINTER);
+ if (ret) {
+ hid_err(hdev, "Failed to init elan MT slots: %d\n", ret);
++ input_free_device(input);
+ return ret;
+ }
+
+@@ -198,6 +199,7 @@ static int elan_input_configured(struct hid_device *hdev, struct hid_input *hi)
+ if (ret) {
+ hid_err(hdev, "Failed to register elan input device: %d\n",
+ ret);
++ input_mt_destroy_slots(input);
+ input_free_device(input);
+ return ret;
+ }
+diff --git a/drivers/hid/hid-ids.h b/drivers/hid/hid-ids.h
+index 6ea3619842d8d..b49ec7dde6457 100644
+--- a/drivers/hid/hid-ids.h
++++ b/drivers/hid/hid-ids.h
+@@ -849,6 +849,7 @@
+ #define USB_DEVICE_ID_MS_POWER_COVER 0x07da
+ #define USB_DEVICE_ID_MS_XBOX_ONE_S_CONTROLLER 0x02fd
+ #define USB_DEVICE_ID_MS_PIXART_MOUSE 0x00cb
++#define USB_DEVICE_ID_8BITDO_SN30_PRO_PLUS 0x02e0
+
+ #define USB_VENDOR_ID_MOJO 0x8282
+ #define USB_DEVICE_ID_RETRO_ADAPTER 0x3201
+@@ -1014,6 +1015,8 @@
+ #define USB_DEVICE_ID_SAITEK_RAT9 0x0cfa
+ #define USB_DEVICE_ID_SAITEK_MMO7 0x0cd0
+ #define USB_DEVICE_ID_SAITEK_X52 0x075c
++#define USB_DEVICE_ID_SAITEK_X52_2 0x0255
++#define USB_DEVICE_ID_SAITEK_X52_PRO 0x0762
+
+ #define USB_VENDOR_ID_SAMSUNG 0x0419
+ #define USB_DEVICE_ID_SAMSUNG_IR_REMOTE 0x0001
+diff --git a/drivers/hid/hid-microsoft.c b/drivers/hid/hid-microsoft.c
+index 2d8b589201a4e..8cb1ca1936e42 100644
+--- a/drivers/hid/hid-microsoft.c
++++ b/drivers/hid/hid-microsoft.c
+@@ -451,6 +451,8 @@ static const struct hid_device_id ms_devices[] = {
+ .driver_data = MS_SURFACE_DIAL },
+ { HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_MICROSOFT, USB_DEVICE_ID_MS_XBOX_ONE_S_CONTROLLER),
+ .driver_data = MS_QUIRK_FF },
++ { HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_MICROSOFT, USB_DEVICE_ID_8BITDO_SN30_PRO_PLUS),
++ .driver_data = MS_QUIRK_FF },
+ { }
+ };
+ MODULE_DEVICE_TABLE(hid, ms_devices);
+diff --git a/drivers/hid/hid-quirks.c b/drivers/hid/hid-quirks.c
+index a65aef6a322fb..7a2be0205dfd1 100644
+--- a/drivers/hid/hid-quirks.c
++++ b/drivers/hid/hid-quirks.c
+@@ -150,6 +150,8 @@ static const struct hid_device_id hid_quirks[] = {
+ { HID_USB_DEVICE(USB_VENDOR_ID_RETROUSB, USB_DEVICE_ID_RETROUSB_SNES_RETROPORT), HID_QUIRK_INCREMENT_USAGE_ON_DUPLICATE },
+ { HID_USB_DEVICE(USB_VENDOR_ID_SAITEK, USB_DEVICE_ID_SAITEK_RUMBLEPAD), HID_QUIRK_BADPAD },
+ { HID_USB_DEVICE(USB_VENDOR_ID_SAITEK, USB_DEVICE_ID_SAITEK_X52), HID_QUIRK_INCREMENT_USAGE_ON_DUPLICATE },
++ { HID_USB_DEVICE(USB_VENDOR_ID_SAITEK, USB_DEVICE_ID_SAITEK_X52_2), HID_QUIRK_INCREMENT_USAGE_ON_DUPLICATE },
++ { HID_USB_DEVICE(USB_VENDOR_ID_SAITEK, USB_DEVICE_ID_SAITEK_X52_PRO), HID_QUIRK_INCREMENT_USAGE_ON_DUPLICATE },
+ { HID_USB_DEVICE(USB_VENDOR_ID_SEMICO, USB_DEVICE_ID_SEMICO_USB_KEYKOARD2), HID_QUIRK_NO_INIT_REPORTS },
+ { HID_USB_DEVICE(USB_VENDOR_ID_SEMICO, USB_DEVICE_ID_SEMICO_USB_KEYKOARD), HID_QUIRK_NO_INIT_REPORTS },
+ { HID_USB_DEVICE(USB_VENDOR_ID_SENNHEISER, USB_DEVICE_ID_SENNHEISER_BTD500USB), HID_QUIRK_NOGET },
+diff --git a/drivers/i2c/busses/i2c-npcm7xx.c b/drivers/i2c/busses/i2c-npcm7xx.c
+index 75f07138a6fa2..dfcf04e1967f1 100644
+--- a/drivers/i2c/busses/i2c-npcm7xx.c
++++ b/drivers/i2c/busses/i2c-npcm7xx.c
+@@ -2093,8 +2093,12 @@ static int npcm_i2c_master_xfer(struct i2c_adapter *adap, struct i2c_msg *msgs,
+ }
+ }
+
+- /* Adaptive TimeOut: astimated time in usec + 100% margin */
+- timeout_usec = (2 * 10000 / bus->bus_freq) * (2 + nread + nwrite);
++ /*
++ * Adaptive TimeOut: estimated time in usec + 100% margin:
++ * 2: double the timeout for clock stretching case
++ * 9: bits per transaction (including the ack/nack)
++ */
++ timeout_usec = (2 * 9 * USEC_PER_SEC / bus->bus_freq) * (2 + nread + nwrite);
+ timeout = max(msecs_to_jiffies(35), usecs_to_jiffies(timeout_usec));
+ if (nwrite >= 32 * 1024 || nread >= 32 * 1024) {
+ dev_err(bus->dev, "i2c%d buffer too big\n", bus->num);
+diff --git a/drivers/iio/accel/bmc150-accel-core.c b/drivers/iio/accel/bmc150-accel-core.c
+index 121b4e89f038c..bcdf25f32e220 100644
+--- a/drivers/iio/accel/bmc150-accel-core.c
++++ b/drivers/iio/accel/bmc150-accel-core.c
+@@ -189,6 +189,14 @@ struct bmc150_accel_data {
+ struct mutex mutex;
+ u8 fifo_mode, watermark;
+ s16 buffer[8];
++ /*
++ * Ensure there is sufficient space and correct alignment for
++ * the timestamp if enabled
++ */
++ struct {
++ __le16 channels[3];
++ s64 ts __aligned(8);
++ } scan;
+ u8 bw_bits;
+ u32 slope_dur;
+ u32 slope_thres;
+@@ -922,15 +930,16 @@ static int __bmc150_accel_fifo_flush(struct iio_dev *indio_dev,
+ * now.
+ */
+ for (i = 0; i < count; i++) {
+- u16 sample[8];
+ int j, bit;
+
+ j = 0;
+ for_each_set_bit(bit, indio_dev->active_scan_mask,
+ indio_dev->masklength)
+- memcpy(&sample[j++], &buffer[i * 3 + bit], 2);
++ memcpy(&data->scan.channels[j++], &buffer[i * 3 + bit],
++ sizeof(data->scan.channels[0]));
+
+- iio_push_to_buffers_with_timestamp(indio_dev, sample, tstamp);
++ iio_push_to_buffers_with_timestamp(indio_dev, &data->scan,
++ tstamp);
+
+ tstamp += sample_period;
+ }
+diff --git a/drivers/iio/accel/kxsd9.c b/drivers/iio/accel/kxsd9.c
+index 0b876b2dc5bd4..76429e2a6fb8f 100644
+--- a/drivers/iio/accel/kxsd9.c
++++ b/drivers/iio/accel/kxsd9.c
+@@ -209,14 +209,20 @@ static irqreturn_t kxsd9_trigger_handler(int irq, void *p)
+ const struct iio_poll_func *pf = p;
+ struct iio_dev *indio_dev = pf->indio_dev;
+ struct kxsd9_state *st = iio_priv(indio_dev);
++ /*
++ * Ensure correct positioning and alignment of timestamp.
++ * No need to zero initialize as all elements written.
++ */
++ struct {
++ __be16 chan[4];
++ s64 ts __aligned(8);
++ } hw_values;
+ int ret;
+- /* 4 * 16bit values AND timestamp */
+- __be16 hw_values[8];
+
+ ret = regmap_bulk_read(st->map,
+ KXSD9_REG_X,
+- &hw_values,
+- 8);
++ hw_values.chan,
++ sizeof(hw_values.chan));
+ if (ret) {
+ dev_err(st->dev,
+ "error reading data\n");
+@@ -224,7 +230,7 @@ static irqreturn_t kxsd9_trigger_handler(int irq, void *p)
+ }
+
+ iio_push_to_buffers_with_timestamp(indio_dev,
+- hw_values,
++ &hw_values,
+ iio_get_time_ns(indio_dev));
+ iio_trigger_notify_done(indio_dev->trig);
+
+diff --git a/drivers/iio/accel/mma7455_core.c b/drivers/iio/accel/mma7455_core.c
+index 8b5a6aff9bf4b..70ec3490bdb85 100644
+--- a/drivers/iio/accel/mma7455_core.c
++++ b/drivers/iio/accel/mma7455_core.c
+@@ -52,6 +52,14 @@
+
+ struct mma7455_data {
+ struct regmap *regmap;
++ /*
++ * Used to reorganize data. Will ensure correct alignment of
++ * the timestamp if present
++ */
++ struct {
++ __le16 channels[3];
++ s64 ts __aligned(8);
++ } scan;
+ };
+
+ static int mma7455_drdy(struct mma7455_data *mma7455)
+@@ -82,19 +90,19 @@ static irqreturn_t mma7455_trigger_handler(int irq, void *p)
+ struct iio_poll_func *pf = p;
+ struct iio_dev *indio_dev = pf->indio_dev;
+ struct mma7455_data *mma7455 = iio_priv(indio_dev);
+- u8 buf[16]; /* 3 x 16-bit channels + padding + ts */
+ int ret;
+
+ ret = mma7455_drdy(mma7455);
+ if (ret)
+ goto done;
+
+- ret = regmap_bulk_read(mma7455->regmap, MMA7455_REG_XOUTL, buf,
+- sizeof(__le16) * 3);
++ ret = regmap_bulk_read(mma7455->regmap, MMA7455_REG_XOUTL,
++ mma7455->scan.channels,
++ sizeof(mma7455->scan.channels));
+ if (ret)
+ goto done;
+
+- iio_push_to_buffers_with_timestamp(indio_dev, buf,
++ iio_push_to_buffers_with_timestamp(indio_dev, &mma7455->scan,
+ iio_get_time_ns(indio_dev));
+
+ done:
+diff --git a/drivers/iio/accel/mma8452.c b/drivers/iio/accel/mma8452.c
+index 813bca7cfc3ed..85d453b3f5ec1 100644
+--- a/drivers/iio/accel/mma8452.c
++++ b/drivers/iio/accel/mma8452.c
+@@ -110,6 +110,12 @@ struct mma8452_data {
+ int sleep_val;
+ struct regulator *vdd_reg;
+ struct regulator *vddio_reg;
++
++ /* Ensure correct alignment of time stamp when present */
++ struct {
++ __be16 channels[3];
++ s64 ts __aligned(8);
++ } buffer;
+ };
+
+ /**
+@@ -1091,14 +1097,13 @@ static irqreturn_t mma8452_trigger_handler(int irq, void *p)
+ struct iio_poll_func *pf = p;
+ struct iio_dev *indio_dev = pf->indio_dev;
+ struct mma8452_data *data = iio_priv(indio_dev);
+- u8 buffer[16]; /* 3 16-bit channels + padding + ts */
+ int ret;
+
+- ret = mma8452_read(data, (__be16 *)buffer);
++ ret = mma8452_read(data, data->buffer.channels);
+ if (ret < 0)
+ goto done;
+
+- iio_push_to_buffers_with_timestamp(indio_dev, buffer,
++ iio_push_to_buffers_with_timestamp(indio_dev, &data->buffer,
+ iio_get_time_ns(indio_dev));
+
+ done:
+diff --git a/drivers/iio/adc/ina2xx-adc.c b/drivers/iio/adc/ina2xx-adc.c
+index bdd7cba6f6b0b..d3e9ec00ef959 100644
+--- a/drivers/iio/adc/ina2xx-adc.c
++++ b/drivers/iio/adc/ina2xx-adc.c
+@@ -146,6 +146,11 @@ struct ina2xx_chip_info {
+ int range_vbus; /* Bus voltage maximum in V */
+ int pga_gain_vshunt; /* Shunt voltage PGA gain */
+ bool allow_async_readout;
++ /* data buffer needs space for channel data and timestamp */
++ struct {
++ u16 chan[4];
++ u64 ts __aligned(8);
++ } scan;
+ };
+
+ static const struct ina2xx_config ina2xx_config[] = {
+@@ -738,8 +743,6 @@ static int ina2xx_conversion_ready(struct iio_dev *indio_dev)
+ static int ina2xx_work_buffer(struct iio_dev *indio_dev)
+ {
+ struct ina2xx_chip_info *chip = iio_priv(indio_dev);
+- /* data buffer needs space for channel data and timestap */
+- unsigned short data[4 + sizeof(s64)/sizeof(short)];
+ int bit, ret, i = 0;
+ s64 time;
+
+@@ -758,10 +761,10 @@ static int ina2xx_work_buffer(struct iio_dev *indio_dev)
+ if (ret < 0)
+ return ret;
+
+- data[i++] = val;
++ chip->scan.chan[i++] = val;
+ }
+
+- iio_push_to_buffers_with_timestamp(indio_dev, data, time);
++ iio_push_to_buffers_with_timestamp(indio_dev, &chip->scan, time);
+
+ return 0;
+ };
+diff --git a/drivers/iio/adc/max1118.c b/drivers/iio/adc/max1118.c
+index 0c5d7aaf68262..b3d8cba6ce698 100644
+--- a/drivers/iio/adc/max1118.c
++++ b/drivers/iio/adc/max1118.c
+@@ -35,6 +35,11 @@ struct max1118 {
+ struct spi_device *spi;
+ struct mutex lock;
+ struct regulator *reg;
++ /* Ensure natural alignment of buffer elements */
++ struct {
++ u8 channels[2];
++ s64 ts __aligned(8);
++ } scan;
+
+ u8 data ____cacheline_aligned;
+ };
+@@ -165,7 +170,6 @@ static irqreturn_t max1118_trigger_handler(int irq, void *p)
+ struct iio_poll_func *pf = p;
+ struct iio_dev *indio_dev = pf->indio_dev;
+ struct max1118 *adc = iio_priv(indio_dev);
+- u8 data[16] = { }; /* 2x 8-bit ADC data + padding + 8 bytes timestamp */
+ int scan_index;
+ int i = 0;
+
+@@ -183,10 +187,10 @@ static irqreturn_t max1118_trigger_handler(int irq, void *p)
+ goto out;
+ }
+
+- data[i] = ret;
++ adc->scan.channels[i] = ret;
+ i++;
+ }
+- iio_push_to_buffers_with_timestamp(indio_dev, data,
++ iio_push_to_buffers_with_timestamp(indio_dev, &adc->scan,
+ iio_get_time_ns(indio_dev));
+ out:
+ mutex_unlock(&adc->lock);
+diff --git a/drivers/iio/adc/mcp3422.c b/drivers/iio/adc/mcp3422.c
+index d86c0b5d80a3d..f96f0cecbcdef 100644
+--- a/drivers/iio/adc/mcp3422.c
++++ b/drivers/iio/adc/mcp3422.c
+@@ -96,16 +96,12 @@ static int mcp3422_update_config(struct mcp3422 *adc, u8 newconfig)
+ {
+ int ret;
+
+- mutex_lock(&adc->lock);
+-
+ ret = i2c_master_send(adc->i2c, &newconfig, 1);
+ if (ret > 0) {
+ adc->config = newconfig;
+ ret = 0;
+ }
+
+- mutex_unlock(&adc->lock);
+-
+ return ret;
+ }
+
+@@ -138,6 +134,8 @@ static int mcp3422_read_channel(struct mcp3422 *adc,
+ u8 config;
+ u8 req_channel = channel->channel;
+
++ mutex_lock(&adc->lock);
++
+ if (req_channel != MCP3422_CHANNEL(adc->config)) {
+ config = adc->config;
+ config &= ~MCP3422_CHANNEL_MASK;
+@@ -145,12 +143,18 @@ static int mcp3422_read_channel(struct mcp3422 *adc,
+ config &= ~MCP3422_PGA_MASK;
+ config |= MCP3422_PGA_VALUE(adc->pga[req_channel]);
+ ret = mcp3422_update_config(adc, config);
+- if (ret < 0)
++ if (ret < 0) {
++ mutex_unlock(&adc->lock);
+ return ret;
++ }
+ msleep(mcp3422_read_times[MCP3422_SAMPLE_RATE(adc->config)]);
+ }
+
+- return mcp3422_read(adc, value, &config);
++ ret = mcp3422_read(adc, value, &config);
++
++ mutex_unlock(&adc->lock);
++
++ return ret;
+ }
+
+ static int mcp3422_read_raw(struct iio_dev *iio,
+diff --git a/drivers/iio/adc/ti-adc081c.c b/drivers/iio/adc/ti-adc081c.c
+index 0235863ff77b0..cc8cbffe2b7b5 100644
+--- a/drivers/iio/adc/ti-adc081c.c
++++ b/drivers/iio/adc/ti-adc081c.c
+@@ -33,6 +33,12 @@ struct adc081c {
+
+ /* 8, 10 or 12 */
+ int bits;
++
++ /* Ensure natural alignment of buffer elements */
++ struct {
++ u16 channel;
++ s64 ts __aligned(8);
++ } scan;
+ };
+
+ #define REG_CONV_RES 0x00
+@@ -128,14 +134,13 @@ static irqreturn_t adc081c_trigger_handler(int irq, void *p)
+ struct iio_poll_func *pf = p;
+ struct iio_dev *indio_dev = pf->indio_dev;
+ struct adc081c *data = iio_priv(indio_dev);
+- u16 buf[8]; /* 2 bytes data + 6 bytes padding + 8 bytes timestamp */
+ int ret;
+
+ ret = i2c_smbus_read_word_swapped(data->i2c, REG_CONV_RES);
+ if (ret < 0)
+ goto out;
+- buf[0] = ret;
+- iio_push_to_buffers_with_timestamp(indio_dev, buf,
++ data->scan.channel = ret;
++ iio_push_to_buffers_with_timestamp(indio_dev, &data->scan,
+ iio_get_time_ns(indio_dev));
+ out:
+ iio_trigger_notify_done(indio_dev->trig);
+diff --git a/drivers/iio/adc/ti-adc084s021.c b/drivers/iio/adc/ti-adc084s021.c
+index bdedf456ee05d..fc053216d282c 100644
+--- a/drivers/iio/adc/ti-adc084s021.c
++++ b/drivers/iio/adc/ti-adc084s021.c
+@@ -25,6 +25,11 @@ struct adc084s021 {
+ struct spi_transfer spi_trans;
+ struct regulator *reg;
+ struct mutex lock;
++ /* Buffer used to align data */
++ struct {
++ __be16 channels[4];
++ s64 ts __aligned(8);
++ } scan;
+ /*
+ * DMA (thus cache coherency maintenance) requires the
+ * transfer buffers to live in their own cache line.
+@@ -140,14 +145,13 @@ static irqreturn_t adc084s021_buffer_trigger_handler(int irq, void *pollfunc)
+ struct iio_poll_func *pf = pollfunc;
+ struct iio_dev *indio_dev = pf->indio_dev;
+ struct adc084s021 *adc = iio_priv(indio_dev);
+- __be16 data[8] = {0}; /* 4 * 16-bit words of data + 8 bytes timestamp */
+
+ mutex_lock(&adc->lock);
+
+- if (adc084s021_adc_conversion(adc, &data) < 0)
++ if (adc084s021_adc_conversion(adc, adc->scan.channels) < 0)
+ dev_err(&adc->spi->dev, "Failed to read data\n");
+
+- iio_push_to_buffers_with_timestamp(indio_dev, data,
++ iio_push_to_buffers_with_timestamp(indio_dev, &adc->scan,
+ iio_get_time_ns(indio_dev));
+ mutex_unlock(&adc->lock);
+ iio_trigger_notify_done(indio_dev->trig);
+diff --git a/drivers/iio/adc/ti-ads1015.c b/drivers/iio/adc/ti-ads1015.c
+index 5ea4f45d6bade..64fe3b2a6ec6d 100644
+--- a/drivers/iio/adc/ti-ads1015.c
++++ b/drivers/iio/adc/ti-ads1015.c
+@@ -316,6 +316,7 @@ static const struct iio_chan_spec ads1115_channels[] = {
+ IIO_CHAN_SOFT_TIMESTAMP(ADS1015_TIMESTAMP),
+ };
+
++#ifdef CONFIG_PM
+ static int ads1015_set_power_state(struct ads1015_data *data, bool on)
+ {
+ int ret;
+@@ -333,6 +334,15 @@ static int ads1015_set_power_state(struct ads1015_data *data, bool on)
+ return ret < 0 ? ret : 0;
+ }
+
++#else /* !CONFIG_PM */
++
++static int ads1015_set_power_state(struct ads1015_data *data, bool on)
++{
++ return 0;
++}
++
++#endif /* !CONFIG_PM */
++
+ static
+ int ads1015_get_adc_result(struct ads1015_data *data, int chan, int *val)
+ {
+diff --git a/drivers/iio/chemical/ccs811.c b/drivers/iio/chemical/ccs811.c
+index 3ecd633f9ed32..b2b6009078e10 100644
+--- a/drivers/iio/chemical/ccs811.c
++++ b/drivers/iio/chemical/ccs811.c
+@@ -78,6 +78,11 @@ struct ccs811_data {
+ struct iio_trigger *drdy_trig;
+ struct gpio_desc *wakeup_gpio;
+ bool drdy_trig_on;
++ /* Ensures correct alignment of timestamp if present */
++ struct {
++ s16 channels[2];
++ s64 ts __aligned(8);
++ } scan;
+ };
+
+ static const struct iio_chan_spec ccs811_channels[] = {
+@@ -327,17 +332,17 @@ static irqreturn_t ccs811_trigger_handler(int irq, void *p)
+ struct iio_dev *indio_dev = pf->indio_dev;
+ struct ccs811_data *data = iio_priv(indio_dev);
+ struct i2c_client *client = data->client;
+- s16 buf[8]; /* s16 eCO2 + s16 TVOC + padding + 8 byte timestamp */
+ int ret;
+
+- ret = i2c_smbus_read_i2c_block_data(client, CCS811_ALG_RESULT_DATA, 4,
+- (u8 *)&buf);
++ ret = i2c_smbus_read_i2c_block_data(client, CCS811_ALG_RESULT_DATA,
++ sizeof(data->scan.channels),
++ (u8 *)data->scan.channels);
+ if (ret != 4) {
+ dev_err(&client->dev, "cannot read sensor data\n");
+ goto err;
+ }
+
+- iio_push_to_buffers_with_timestamp(indio_dev, buf,
++ iio_push_to_buffers_with_timestamp(indio_dev, &data->scan,
+ iio_get_time_ns(indio_dev));
+
+ err:
+diff --git a/drivers/iio/common/cros_ec_sensors/cros_ec_sensors_core.c b/drivers/iio/common/cros_ec_sensors/cros_ec_sensors_core.c
+index c831915ca7e56..6b6b5987ac753 100644
+--- a/drivers/iio/common/cros_ec_sensors/cros_ec_sensors_core.c
++++ b/drivers/iio/common/cros_ec_sensors/cros_ec_sensors_core.c
+@@ -72,10 +72,13 @@ static void get_default_min_max_freq(enum motionsensor_type type,
+
+ switch (type) {
+ case MOTIONSENSE_TYPE_ACCEL:
+- case MOTIONSENSE_TYPE_GYRO:
+ *min_freq = 12500;
+ *max_freq = 100000;
+ break;
++ case MOTIONSENSE_TYPE_GYRO:
++ *min_freq = 25000;
++ *max_freq = 100000;
++ break;
+ case MOTIONSENSE_TYPE_MAG:
+ *min_freq = 5000;
+ *max_freq = 25000;
+diff --git a/drivers/iio/light/ltr501.c b/drivers/iio/light/ltr501.c
+index 5a3fcb127cd20..d10ed80566567 100644
+--- a/drivers/iio/light/ltr501.c
++++ b/drivers/iio/light/ltr501.c
+@@ -1243,13 +1243,16 @@ static irqreturn_t ltr501_trigger_handler(int irq, void *p)
+ struct iio_poll_func *pf = p;
+ struct iio_dev *indio_dev = pf->indio_dev;
+ struct ltr501_data *data = iio_priv(indio_dev);
+- u16 buf[8];
++ struct {
++ u16 channels[3];
++ s64 ts __aligned(8);
++ } scan;
+ __le16 als_buf[2];
+ u8 mask = 0;
+ int j = 0;
+ int ret, psdata;
+
+- memset(buf, 0, sizeof(buf));
++ memset(&scan, 0, sizeof(scan));
+
+ /* figure out which data needs to be ready */
+ if (test_bit(0, indio_dev->active_scan_mask) ||
+@@ -1268,9 +1271,9 @@ static irqreturn_t ltr501_trigger_handler(int irq, void *p)
+ if (ret < 0)
+ return ret;
+ if (test_bit(0, indio_dev->active_scan_mask))
+- buf[j++] = le16_to_cpu(als_buf[1]);
++ scan.channels[j++] = le16_to_cpu(als_buf[1]);
+ if (test_bit(1, indio_dev->active_scan_mask))
+- buf[j++] = le16_to_cpu(als_buf[0]);
++ scan.channels[j++] = le16_to_cpu(als_buf[0]);
+ }
+
+ if (mask & LTR501_STATUS_PS_RDY) {
+@@ -1278,10 +1281,10 @@ static irqreturn_t ltr501_trigger_handler(int irq, void *p)
+ &psdata, 2);
+ if (ret < 0)
+ goto done;
+- buf[j++] = psdata & LTR501_PS_DATA_MASK;
++ scan.channels[j++] = psdata & LTR501_PS_DATA_MASK;
+ }
+
+- iio_push_to_buffers_with_timestamp(indio_dev, buf,
++ iio_push_to_buffers_with_timestamp(indio_dev, &scan,
+ iio_get_time_ns(indio_dev));
+
+ done:
+diff --git a/drivers/iio/light/max44000.c b/drivers/iio/light/max44000.c
+index d6d8007ba430a..8cc619de2c3ae 100644
+--- a/drivers/iio/light/max44000.c
++++ b/drivers/iio/light/max44000.c
+@@ -75,6 +75,11 @@
+ struct max44000_data {
+ struct mutex lock;
+ struct regmap *regmap;
++ /* Ensure naturally aligned timestamp */
++ struct {
++ u16 channels[2];
++ s64 ts __aligned(8);
++ } scan;
+ };
+
+ /* Default scale is set to the minimum of 0.03125 or 1 / (1 << 5) lux */
+@@ -488,7 +493,6 @@ static irqreturn_t max44000_trigger_handler(int irq, void *p)
+ struct iio_poll_func *pf = p;
+ struct iio_dev *indio_dev = pf->indio_dev;
+ struct max44000_data *data = iio_priv(indio_dev);
+- u16 buf[8]; /* 2x u16 + padding + 8 bytes timestamp */
+ int index = 0;
+ unsigned int regval;
+ int ret;
+@@ -498,17 +502,17 @@ static irqreturn_t max44000_trigger_handler(int irq, void *p)
+ ret = max44000_read_alsval(data);
+ if (ret < 0)
+ goto out_unlock;
+- buf[index++] = ret;
++ data->scan.channels[index++] = ret;
+ }
+ if (test_bit(MAX44000_SCAN_INDEX_PRX, indio_dev->active_scan_mask)) {
+ ret = regmap_read(data->regmap, MAX44000_REG_PRX_DATA, &regval);
+ if (ret < 0)
+ goto out_unlock;
+- buf[index] = regval;
++ data->scan.channels[index] = regval;
+ }
+ mutex_unlock(&data->lock);
+
+- iio_push_to_buffers_with_timestamp(indio_dev, buf,
++ iio_push_to_buffers_with_timestamp(indio_dev, &data->scan,
+ iio_get_time_ns(indio_dev));
+ iio_trigger_notify_done(indio_dev->trig);
+ return IRQ_HANDLED;
+diff --git a/drivers/iio/magnetometer/ak8975.c b/drivers/iio/magnetometer/ak8975.c
+index 3c881541ae72f..3fc44ec45f763 100644
+--- a/drivers/iio/magnetometer/ak8975.c
++++ b/drivers/iio/magnetometer/ak8975.c
+@@ -365,6 +365,12 @@ struct ak8975_data {
+ struct iio_mount_matrix orientation;
+ struct regulator *vdd;
+ struct regulator *vid;
++
++ /* Ensure natural alignment of timestamp */
++ struct {
++ s16 channels[3];
++ s64 ts __aligned(8);
++ } scan;
+ };
+
+ /* Enable attached power regulator if any. */
+@@ -787,7 +793,6 @@ static void ak8975_fill_buffer(struct iio_dev *indio_dev)
+ const struct i2c_client *client = data->client;
+ const struct ak_def *def = data->def;
+ int ret;
+- s16 buff[8]; /* 3 x 16 bits axis values + 1 aligned 64 bits timestamp */
+ __le16 fval[3];
+
+ mutex_lock(&data->lock);
+@@ -810,12 +815,13 @@ static void ak8975_fill_buffer(struct iio_dev *indio_dev)
+ mutex_unlock(&data->lock);
+
+ /* Clamp to valid range. */
+- buff[0] = clamp_t(s16, le16_to_cpu(fval[0]), -def->range, def->range);
+- buff[1] = clamp_t(s16, le16_to_cpu(fval[1]), -def->range, def->range);
+- buff[2] = clamp_t(s16, le16_to_cpu(fval[2]), -def->range, def->range);
++ data->scan.channels[0] = clamp_t(s16, le16_to_cpu(fval[0]), -def->range, def->range);
++ data->scan.channels[1] = clamp_t(s16, le16_to_cpu(fval[1]), -def->range, def->range);
++ data->scan.channels[2] = clamp_t(s16, le16_to_cpu(fval[2]), -def->range, def->range);
+
+- iio_push_to_buffers_with_timestamp(indio_dev, buff,
++ iio_push_to_buffers_with_timestamp(indio_dev, &data->scan,
+ iio_get_time_ns(indio_dev));
++
+ return;
+
+ unlock:
+diff --git a/drivers/iio/proximity/mb1232.c b/drivers/iio/proximity/mb1232.c
+index 166b3e6d7db89..5254b1fbccfdc 100644
+--- a/drivers/iio/proximity/mb1232.c
++++ b/drivers/iio/proximity/mb1232.c
+@@ -40,6 +40,11 @@ struct mb1232_data {
+ */
+ struct completion ranging;
+ int irqnr;
++ /* Ensure correct alignment of data to push to IIO buffer */
++ struct {
++ s16 distance;
++ s64 ts __aligned(8);
++ } scan;
+ };
+
+ static irqreturn_t mb1232_handle_irq(int irq, void *dev_id)
+@@ -113,17 +118,13 @@ static irqreturn_t mb1232_trigger_handler(int irq, void *p)
+ struct iio_poll_func *pf = p;
+ struct iio_dev *indio_dev = pf->indio_dev;
+ struct mb1232_data *data = iio_priv(indio_dev);
+- /*
+- * triggered buffer
+- * 16-bit channel + 48-bit padding + 64-bit timestamp
+- */
+- s16 buffer[8] = { 0 };
+
+- buffer[0] = mb1232_read_distance(data);
+- if (buffer[0] < 0)
++ data->scan.distance = mb1232_read_distance(data);
++ if (data->scan.distance < 0)
+ goto err;
+
+- iio_push_to_buffers_with_timestamp(indio_dev, buffer, pf->timestamp);
++ iio_push_to_buffers_with_timestamp(indio_dev, &data->scan,
++ pf->timestamp);
+
+ err:
+ iio_trigger_notify_done(indio_dev->trig);
+diff --git a/drivers/infiniband/core/cq.c b/drivers/infiniband/core/cq.c
+index 513825e424bff..a92fc3f90bb5b 100644
+--- a/drivers/infiniband/core/cq.c
++++ b/drivers/infiniband/core/cq.c
+@@ -379,7 +379,7 @@ static int ib_alloc_cqs(struct ib_device *dev, unsigned int nr_cqes,
+ {
+ LIST_HEAD(tmp_list);
+ unsigned int nr_cqs, i;
+- struct ib_cq *cq;
++ struct ib_cq *cq, *n;
+ int ret;
+
+ if (poll_ctx > IB_POLL_LAST_POOL_TYPE) {
+@@ -412,7 +412,7 @@ static int ib_alloc_cqs(struct ib_device *dev, unsigned int nr_cqes,
+ return 0;
+
+ out_free_cqs:
+- list_for_each_entry(cq, &tmp_list, pool_entry) {
++ list_for_each_entry_safe(cq, n, &tmp_list, pool_entry) {
+ cq->shared = false;
+ ib_free_cq(cq);
+ }
+diff --git a/drivers/infiniband/core/verbs.c b/drivers/infiniband/core/verbs.c
+index f369f0a19e851..1b0ea945756f0 100644
+--- a/drivers/infiniband/core/verbs.c
++++ b/drivers/infiniband/core/verbs.c
+@@ -1803,7 +1803,7 @@ int ib_get_eth_speed(struct ib_device *dev, u8 port_num, u8 *speed, u8 *width)
+
+ dev_put(netdev);
+
+- if (!rc) {
++ if (!rc && lksettings.base.speed != (u32)SPEED_UNKNOWN) {
+ netdev_speed = lksettings.base.speed;
+ } else {
+ netdev_speed = SPEED_1000;
+diff --git a/drivers/infiniband/hw/bnxt_re/ib_verbs.c b/drivers/infiniband/hw/bnxt_re/ib_verbs.c
+index 8b6ad5cddfce9..cb6e873039df5 100644
+--- a/drivers/infiniband/hw/bnxt_re/ib_verbs.c
++++ b/drivers/infiniband/hw/bnxt_re/ib_verbs.c
+@@ -752,12 +752,6 @@ static int bnxt_re_destroy_gsi_sqp(struct bnxt_re_qp *qp)
+ gsi_sqp = rdev->gsi_ctx.gsi_sqp;
+ gsi_sah = rdev->gsi_ctx.gsi_sah;
+
+- /* remove from active qp list */
+- mutex_lock(&rdev->qp_lock);
+- list_del(&gsi_sqp->list);
+- mutex_unlock(&rdev->qp_lock);
+- atomic_dec(&rdev->qp_count);
+-
+ ibdev_dbg(&rdev->ibdev, "Destroy the shadow AH\n");
+ bnxt_qplib_destroy_ah(&rdev->qplib_res,
+ &gsi_sah->qplib_ah,
+@@ -772,6 +766,12 @@ static int bnxt_re_destroy_gsi_sqp(struct bnxt_re_qp *qp)
+ }
+ bnxt_qplib_free_qp_res(&rdev->qplib_res, &gsi_sqp->qplib_qp);
+
++ /* remove from active qp list */
++ mutex_lock(&rdev->qp_lock);
++ list_del(&gsi_sqp->list);
++ mutex_unlock(&rdev->qp_lock);
++ atomic_dec(&rdev->qp_count);
++
+ kfree(rdev->gsi_ctx.sqp_tbl);
+ kfree(gsi_sah);
+ kfree(gsi_sqp);
+@@ -792,11 +792,6 @@ int bnxt_re_destroy_qp(struct ib_qp *ib_qp, struct ib_udata *udata)
+ unsigned int flags;
+ int rc;
+
+- mutex_lock(&rdev->qp_lock);
+- list_del(&qp->list);
+- mutex_unlock(&rdev->qp_lock);
+- atomic_dec(&rdev->qp_count);
+-
+ bnxt_qplib_flush_cqn_wq(&qp->qplib_qp);
+
+ rc = bnxt_qplib_destroy_qp(&rdev->qplib_res, &qp->qplib_qp);
+@@ -819,6 +814,11 @@ int bnxt_re_destroy_qp(struct ib_qp *ib_qp, struct ib_udata *udata)
+ goto sh_fail;
+ }
+
++ mutex_lock(&rdev->qp_lock);
++ list_del(&qp->list);
++ mutex_unlock(&rdev->qp_lock);
++ atomic_dec(&rdev->qp_count);
++
+ ib_umem_release(qp->rumem);
+ ib_umem_release(qp->sumem);
+
+@@ -3178,6 +3178,19 @@ static void bnxt_re_process_res_rawqp1_wc(struct ib_wc *wc,
+ wc->wc_flags |= IB_WC_GRH;
+ }
+
++static bool bnxt_re_check_if_vlan_valid(struct bnxt_re_dev *rdev,
++ u16 vlan_id)
++{
++ /*
++ * Check if the vlan is configured in the host. If not configured, it
++ * can be a transparent VLAN. So dont report the vlan id.
++ */
++ if (!__vlan_find_dev_deep_rcu(rdev->netdev,
++ htons(ETH_P_8021Q), vlan_id))
++ return false;
++ return true;
++}
++
+ static bool bnxt_re_is_vlan_pkt(struct bnxt_qplib_cqe *orig_cqe,
+ u16 *vid, u8 *sl)
+ {
+@@ -3246,9 +3259,11 @@ static void bnxt_re_process_res_shadow_qp_wc(struct bnxt_re_qp *gsi_sqp,
+ wc->src_qp = orig_cqe->src_qp;
+ memcpy(wc->smac, orig_cqe->smac, ETH_ALEN);
+ if (bnxt_re_is_vlan_pkt(orig_cqe, &vlan_id, &sl)) {
+- wc->vlan_id = vlan_id;
+- wc->sl = sl;
+- wc->wc_flags |= IB_WC_WITH_VLAN;
++ if (bnxt_re_check_if_vlan_valid(rdev, vlan_id)) {
++ wc->vlan_id = vlan_id;
++ wc->sl = sl;
++ wc->wc_flags |= IB_WC_WITH_VLAN;
++ }
+ }
+ wc->port_num = 1;
+ wc->vendor_err = orig_cqe->status;
+diff --git a/drivers/infiniband/hw/bnxt_re/main.c b/drivers/infiniband/hw/bnxt_re/main.c
+index 5c41e13496a02..882c4f49d3a87 100644
+--- a/drivers/infiniband/hw/bnxt_re/main.c
++++ b/drivers/infiniband/hw/bnxt_re/main.c
+@@ -1027,8 +1027,7 @@ static int bnxt_re_alloc_res(struct bnxt_re_dev *rdev)
+ struct bnxt_qplib_nq *nq;
+
+ nq = &rdev->nq[i];
+- nq->hwq.max_elements = (qplib_ctx->cq_count +
+- qplib_ctx->srqc_count + 2);
++ nq->hwq.max_elements = BNXT_QPLIB_NQE_MAX_CNT;
+ rc = bnxt_qplib_alloc_nq(&rdev->qplib_res, &rdev->nq[i]);
+ if (rc) {
+ ibdev_err(&rdev->ibdev, "Alloc Failed NQ%d rc:%#x",
+diff --git a/drivers/infiniband/hw/bnxt_re/qplib_fp.c b/drivers/infiniband/hw/bnxt_re/qplib_fp.c
+index c5e29577cd434..4b53f79b91d1d 100644
+--- a/drivers/infiniband/hw/bnxt_re/qplib_fp.c
++++ b/drivers/infiniband/hw/bnxt_re/qplib_fp.c
+@@ -796,6 +796,7 @@ int bnxt_qplib_create_qp1(struct bnxt_qplib_res *res, struct bnxt_qplib_qp *qp)
+ u16 cmd_flags = 0;
+ u32 qp_flags = 0;
+ u8 pg_sz_lvl;
++ u32 tbl_indx;
+ int rc;
+
+ RCFW_CMD_PREP(req, CREATE_QP1, cmd_flags);
+@@ -891,8 +892,9 @@ int bnxt_qplib_create_qp1(struct bnxt_qplib_res *res, struct bnxt_qplib_qp *qp)
+ rq->dbinfo.xid = qp->id;
+ rq->dbinfo.db = qp->dpi->dbr;
+ }
+- rcfw->qp_tbl[qp->id].qp_id = qp->id;
+- rcfw->qp_tbl[qp->id].qp_handle = (void *)qp;
++ tbl_indx = map_qp_id_to_tbl_indx(qp->id, rcfw);
++ rcfw->qp_tbl[tbl_indx].qp_id = qp->id;
++ rcfw->qp_tbl[tbl_indx].qp_handle = (void *)qp;
+
+ return 0;
+
+@@ -920,10 +922,10 @@ static void bnxt_qplib_init_psn_ptr(struct bnxt_qplib_qp *qp, int size)
+ sq = &qp->sq;
+ hwq = &sq->hwq;
+
++ /* First psn entry */
+ fpsne = (u64)bnxt_qplib_get_qe(hwq, hwq->max_elements, &psn_pg);
+ if (!IS_ALIGNED(fpsne, PAGE_SIZE))
+- indx_pad = ALIGN(fpsne, PAGE_SIZE) / size;
+-
++ indx_pad = (fpsne & ~PAGE_MASK) / size;
+ page = (u64 *)psn_pg;
+ for (indx = 0; indx < hwq->max_elements; indx++) {
+ pg_num = (indx + indx_pad) / (PAGE_SIZE / size);
+@@ -950,6 +952,7 @@ int bnxt_qplib_create_qp(struct bnxt_qplib_res *res, struct bnxt_qplib_qp *qp)
+ u32 qp_flags = 0;
+ u8 pg_sz_lvl;
+ u16 max_rsge;
++ u32 tbl_indx;
+
+ RCFW_CMD_PREP(req, CREATE_QP, cmd_flags);
+
+@@ -1118,8 +1121,9 @@ int bnxt_qplib_create_qp(struct bnxt_qplib_res *res, struct bnxt_qplib_qp *qp)
+ rq->dbinfo.xid = qp->id;
+ rq->dbinfo.db = qp->dpi->dbr;
+ }
+- rcfw->qp_tbl[qp->id].qp_id = qp->id;
+- rcfw->qp_tbl[qp->id].qp_handle = (void *)qp;
++ tbl_indx = map_qp_id_to_tbl_indx(qp->id, rcfw);
++ rcfw->qp_tbl[tbl_indx].qp_id = qp->id;
++ rcfw->qp_tbl[tbl_indx].qp_handle = (void *)qp;
+
+ return 0;
+
+@@ -1467,10 +1471,12 @@ int bnxt_qplib_destroy_qp(struct bnxt_qplib_res *res,
+ struct cmdq_destroy_qp req;
+ struct creq_destroy_qp_resp resp;
+ u16 cmd_flags = 0;
++ u32 tbl_indx;
+ int rc;
+
+- rcfw->qp_tbl[qp->id].qp_id = BNXT_QPLIB_QP_ID_INVALID;
+- rcfw->qp_tbl[qp->id].qp_handle = NULL;
++ tbl_indx = map_qp_id_to_tbl_indx(qp->id, rcfw);
++ rcfw->qp_tbl[tbl_indx].qp_id = BNXT_QPLIB_QP_ID_INVALID;
++ rcfw->qp_tbl[tbl_indx].qp_handle = NULL;
+
+ RCFW_CMD_PREP(req, DESTROY_QP, cmd_flags);
+
+@@ -1478,8 +1484,8 @@ int bnxt_qplib_destroy_qp(struct bnxt_qplib_res *res,
+ rc = bnxt_qplib_rcfw_send_message(rcfw, (void *)&req,
+ (void *)&resp, NULL, 0);
+ if (rc) {
+- rcfw->qp_tbl[qp->id].qp_id = qp->id;
+- rcfw->qp_tbl[qp->id].qp_handle = qp;
++ rcfw->qp_tbl[tbl_indx].qp_id = qp->id;
++ rcfw->qp_tbl[tbl_indx].qp_handle = qp;
+ return rc;
+ }
+
+diff --git a/drivers/infiniband/hw/bnxt_re/qplib_rcfw.c b/drivers/infiniband/hw/bnxt_re/qplib_rcfw.c
+index 4e211162acee2..f7736e34ac64c 100644
+--- a/drivers/infiniband/hw/bnxt_re/qplib_rcfw.c
++++ b/drivers/infiniband/hw/bnxt_re/qplib_rcfw.c
+@@ -307,14 +307,15 @@ static int bnxt_qplib_process_qp_event(struct bnxt_qplib_rcfw *rcfw,
+ __le16 mcookie;
+ u16 cookie;
+ int rc = 0;
+- u32 qp_id;
++ u32 qp_id, tbl_indx;
+
+ pdev = rcfw->pdev;
+ switch (qp_event->event) {
+ case CREQ_QP_EVENT_EVENT_QP_ERROR_NOTIFICATION:
+ err_event = (struct creq_qp_error_notification *)qp_event;
+ qp_id = le32_to_cpu(err_event->xid);
+- qp = rcfw->qp_tbl[qp_id].qp_handle;
++ tbl_indx = map_qp_id_to_tbl_indx(qp_id, rcfw);
++ qp = rcfw->qp_tbl[tbl_indx].qp_handle;
+ dev_dbg(&pdev->dev, "Received QP error notification\n");
+ dev_dbg(&pdev->dev,
+ "qpid 0x%x, req_err=0x%x, resp_err=0x%x\n",
+@@ -615,8 +616,9 @@ int bnxt_qplib_alloc_rcfw_channel(struct bnxt_qplib_res *res,
+
+ cmdq->bmap_size = bmap_size;
+
+- rcfw->qp_tbl_size = qp_tbl_sz;
+- rcfw->qp_tbl = kcalloc(qp_tbl_sz, sizeof(struct bnxt_qplib_qp_node),
++ /* Allocate one extra to hold the QP1 entries */
++ rcfw->qp_tbl_size = qp_tbl_sz + 1;
++ rcfw->qp_tbl = kcalloc(rcfw->qp_tbl_size, sizeof(struct bnxt_qplib_qp_node),
+ GFP_KERNEL);
+ if (!rcfw->qp_tbl)
+ goto fail;
+diff --git a/drivers/infiniband/hw/bnxt_re/qplib_rcfw.h b/drivers/infiniband/hw/bnxt_re/qplib_rcfw.h
+index 157387636d004..5f2f0a5a3560f 100644
+--- a/drivers/infiniband/hw/bnxt_re/qplib_rcfw.h
++++ b/drivers/infiniband/hw/bnxt_re/qplib_rcfw.h
+@@ -216,4 +216,9 @@ int bnxt_qplib_deinit_rcfw(struct bnxt_qplib_rcfw *rcfw);
+ int bnxt_qplib_init_rcfw(struct bnxt_qplib_rcfw *rcfw,
+ struct bnxt_qplib_ctx *ctx, int is_virtfn);
+ void bnxt_qplib_mark_qp_error(void *qp_handle);
++static inline u32 map_qp_id_to_tbl_indx(u32 qid, struct bnxt_qplib_rcfw *rcfw)
++{
++ /* Last index of the qp_tbl is for QP1 ie. qp_tbl_size - 1*/
++ return (qid == 1) ? rcfw->qp_tbl_size - 1 : qid % rcfw->qp_tbl_size - 2;
++}
+ #endif /* __BNXT_QPLIB_RCFW_H__ */
+diff --git a/drivers/infiniband/hw/mlx4/main.c b/drivers/infiniband/hw/mlx4/main.c
+index 816d28854a8e1..5bcf481a9c3c2 100644
+--- a/drivers/infiniband/hw/mlx4/main.c
++++ b/drivers/infiniband/hw/mlx4/main.c
+@@ -784,7 +784,8 @@ static int eth_link_query_port(struct ib_device *ibdev, u8 port,
+ props->ip_gids = true;
+ props->gid_tbl_len = mdev->dev->caps.gid_table_len[port];
+ props->max_msg_sz = mdev->dev->caps.max_msg_sz;
+- props->pkey_tbl_len = 1;
++ if (mdev->dev->caps.pkey_table_len[port])
++ props->pkey_tbl_len = 1;
+ props->max_mtu = IB_MTU_4096;
+ props->max_vl_num = 2;
+ props->state = IB_PORT_DOWN;
+diff --git a/drivers/infiniband/sw/rxe/rxe.c b/drivers/infiniband/sw/rxe/rxe.c
+index 5642eefb4ba1c..d6b1236b114ab 100644
+--- a/drivers/infiniband/sw/rxe/rxe.c
++++ b/drivers/infiniband/sw/rxe/rxe.c
+@@ -48,6 +48,8 @@ static void rxe_cleanup_ports(struct rxe_dev *rxe)
+
+ }
+
++bool rxe_initialized;
++
+ /* free resources for a rxe device all objects created for this device must
+ * have been destroyed
+ */
+@@ -147,9 +149,6 @@ static int rxe_init_ports(struct rxe_dev *rxe)
+
+ rxe_init_port_param(port);
+
+- if (!port->attr.pkey_tbl_len || !port->attr.gid_tbl_len)
+- return -EINVAL;
+-
+ port->pkey_tbl = kcalloc(port->attr.pkey_tbl_len,
+ sizeof(*port->pkey_tbl), GFP_KERNEL);
+
+@@ -348,6 +347,7 @@ static int __init rxe_module_init(void)
+ return err;
+
+ rdma_link_register(&rxe_link_ops);
++ rxe_initialized = true;
+ pr_info("loaded\n");
+ return 0;
+ }
+@@ -359,6 +359,7 @@ static void __exit rxe_module_exit(void)
+ rxe_net_exit();
+ rxe_cache_exit();
+
++ rxe_initialized = false;
+ pr_info("unloaded\n");
+ }
+
+diff --git a/drivers/infiniband/sw/rxe/rxe.h b/drivers/infiniband/sw/rxe/rxe.h
+index fb07eed9e4028..cae1b0a24c850 100644
+--- a/drivers/infiniband/sw/rxe/rxe.h
++++ b/drivers/infiniband/sw/rxe/rxe.h
+@@ -67,6 +67,8 @@
+
+ #define RXE_ROCE_V2_SPORT (0xc000)
+
++extern bool rxe_initialized;
++
+ static inline u32 rxe_crc32(struct rxe_dev *rxe,
+ u32 crc, void *next, size_t len)
+ {
+diff --git a/drivers/infiniband/sw/rxe/rxe_mr.c b/drivers/infiniband/sw/rxe/rxe_mr.c
+index e83c7b518bfa2..bfb96a0d071bb 100644
+--- a/drivers/infiniband/sw/rxe/rxe_mr.c
++++ b/drivers/infiniband/sw/rxe/rxe_mr.c
+@@ -207,6 +207,7 @@ int rxe_mem_init_user(struct rxe_pd *pd, u64 start,
+ vaddr = page_address(sg_page_iter_page(&sg_iter));
+ if (!vaddr) {
+ pr_warn("null vaddr\n");
++ ib_umem_release(umem);
+ err = -ENOMEM;
+ goto err1;
+ }
+diff --git a/drivers/infiniband/sw/rxe/rxe_sysfs.c b/drivers/infiniband/sw/rxe/rxe_sysfs.c
+index ccda5f5a3bc0a..2af31d421bfc3 100644
+--- a/drivers/infiniband/sw/rxe/rxe_sysfs.c
++++ b/drivers/infiniband/sw/rxe/rxe_sysfs.c
+@@ -61,6 +61,11 @@ static int rxe_param_set_add(const char *val, const struct kernel_param *kp)
+ struct net_device *ndev;
+ struct rxe_dev *exists;
+
++ if (!rxe_initialized) {
++ pr_err("Module parameters are not supported, use rdma link add or rxe_cfg\n");
++ return -EAGAIN;
++ }
++
+ len = sanitize_arg(val, intf, sizeof(intf));
+ if (!len) {
+ pr_err("add: invalid interface name\n");
+diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.c b/drivers/infiniband/sw/rxe/rxe_verbs.c
+index 84fec5fd798d5..00ba6fb1e6763 100644
+--- a/drivers/infiniband/sw/rxe/rxe_verbs.c
++++ b/drivers/infiniband/sw/rxe/rxe_verbs.c
+@@ -1083,7 +1083,7 @@ static ssize_t parent_show(struct device *device,
+ struct rxe_dev *rxe =
+ rdma_device_to_drv_device(device, struct rxe_dev, ib_dev);
+
+- return snprintf(buf, 16, "%s\n", rxe_parent_name(rxe, 1));
++ return scnprintf(buf, PAGE_SIZE, "%s\n", rxe_parent_name(rxe, 1));
+ }
+
+ static DEVICE_ATTR_RO(parent);
+diff --git a/drivers/infiniband/ulp/isert/ib_isert.c b/drivers/infiniband/ulp/isert/ib_isert.c
+index b7df38ee8ae05..49ca8727e3fa3 100644
+--- a/drivers/infiniband/ulp/isert/ib_isert.c
++++ b/drivers/infiniband/ulp/isert/ib_isert.c
+@@ -183,15 +183,15 @@ isert_alloc_rx_descriptors(struct isert_conn *isert_conn)
+ rx_desc = isert_conn->rx_descs;
+
+ for (i = 0; i < ISERT_QP_MAX_RECV_DTOS; i++, rx_desc++) {
+- dma_addr = ib_dma_map_single(ib_dev, (void *)rx_desc,
+- ISER_RX_PAYLOAD_SIZE, DMA_FROM_DEVICE);
++ dma_addr = ib_dma_map_single(ib_dev, rx_desc->buf,
++ ISER_RX_SIZE, DMA_FROM_DEVICE);
+ if (ib_dma_mapping_error(ib_dev, dma_addr))
+ goto dma_map_fail;
+
+ rx_desc->dma_addr = dma_addr;
+
+ rx_sg = &rx_desc->rx_sg;
+- rx_sg->addr = rx_desc->dma_addr;
++ rx_sg->addr = rx_desc->dma_addr + isert_get_hdr_offset(rx_desc);
+ rx_sg->length = ISER_RX_PAYLOAD_SIZE;
+ rx_sg->lkey = device->pd->local_dma_lkey;
+ rx_desc->rx_cqe.done = isert_recv_done;
+@@ -203,7 +203,7 @@ dma_map_fail:
+ rx_desc = isert_conn->rx_descs;
+ for (j = 0; j < i; j++, rx_desc++) {
+ ib_dma_unmap_single(ib_dev, rx_desc->dma_addr,
+- ISER_RX_PAYLOAD_SIZE, DMA_FROM_DEVICE);
++ ISER_RX_SIZE, DMA_FROM_DEVICE);
+ }
+ kfree(isert_conn->rx_descs);
+ isert_conn->rx_descs = NULL;
+@@ -224,7 +224,7 @@ isert_free_rx_descriptors(struct isert_conn *isert_conn)
+ rx_desc = isert_conn->rx_descs;
+ for (i = 0; i < ISERT_QP_MAX_RECV_DTOS; i++, rx_desc++) {
+ ib_dma_unmap_single(ib_dev, rx_desc->dma_addr,
+- ISER_RX_PAYLOAD_SIZE, DMA_FROM_DEVICE);
++ ISER_RX_SIZE, DMA_FROM_DEVICE);
+ }
+
+ kfree(isert_conn->rx_descs);
+@@ -409,10 +409,9 @@ isert_free_login_buf(struct isert_conn *isert_conn)
+ ISER_RX_PAYLOAD_SIZE, DMA_TO_DEVICE);
+ kfree(isert_conn->login_rsp_buf);
+
+- ib_dma_unmap_single(ib_dev, isert_conn->login_req_dma,
+- ISER_RX_PAYLOAD_SIZE,
+- DMA_FROM_DEVICE);
+- kfree(isert_conn->login_req_buf);
++ ib_dma_unmap_single(ib_dev, isert_conn->login_desc->dma_addr,
++ ISER_RX_SIZE, DMA_FROM_DEVICE);
++ kfree(isert_conn->login_desc);
+ }
+
+ static int
+@@ -421,25 +420,25 @@ isert_alloc_login_buf(struct isert_conn *isert_conn,
+ {
+ int ret;
+
+- isert_conn->login_req_buf = kzalloc(sizeof(*isert_conn->login_req_buf),
++ isert_conn->login_desc = kzalloc(sizeof(*isert_conn->login_desc),
+ GFP_KERNEL);
+- if (!isert_conn->login_req_buf)
++ if (!isert_conn->login_desc)
+ return -ENOMEM;
+
+- isert_conn->login_req_dma = ib_dma_map_single(ib_dev,
+- isert_conn->login_req_buf,
+- ISER_RX_PAYLOAD_SIZE, DMA_FROM_DEVICE);
+- ret = ib_dma_mapping_error(ib_dev, isert_conn->login_req_dma);
++ isert_conn->login_desc->dma_addr = ib_dma_map_single(ib_dev,
++ isert_conn->login_desc->buf,
++ ISER_RX_SIZE, DMA_FROM_DEVICE);
++ ret = ib_dma_mapping_error(ib_dev, isert_conn->login_desc->dma_addr);
+ if (ret) {
+- isert_err("login_req_dma mapping error: %d\n", ret);
+- isert_conn->login_req_dma = 0;
+- goto out_free_login_req_buf;
++ isert_err("login_desc dma mapping error: %d\n", ret);
++ isert_conn->login_desc->dma_addr = 0;
++ goto out_free_login_desc;
+ }
+
+ isert_conn->login_rsp_buf = kzalloc(ISER_RX_PAYLOAD_SIZE, GFP_KERNEL);
+ if (!isert_conn->login_rsp_buf) {
+ ret = -ENOMEM;
+- goto out_unmap_login_req_buf;
++ goto out_unmap_login_desc;
+ }
+
+ isert_conn->login_rsp_dma = ib_dma_map_single(ib_dev,
+@@ -456,11 +455,11 @@ isert_alloc_login_buf(struct isert_conn *isert_conn,
+
+ out_free_login_rsp_buf:
+ kfree(isert_conn->login_rsp_buf);
+-out_unmap_login_req_buf:
+- ib_dma_unmap_single(ib_dev, isert_conn->login_req_dma,
+- ISER_RX_PAYLOAD_SIZE, DMA_FROM_DEVICE);
+-out_free_login_req_buf:
+- kfree(isert_conn->login_req_buf);
++out_unmap_login_desc:
++ ib_dma_unmap_single(ib_dev, isert_conn->login_desc->dma_addr,
++ ISER_RX_SIZE, DMA_FROM_DEVICE);
++out_free_login_desc:
++ kfree(isert_conn->login_desc);
+ return ret;
+ }
+
+@@ -579,7 +578,7 @@ isert_connect_release(struct isert_conn *isert_conn)
+ ib_destroy_qp(isert_conn->qp);
+ }
+
+- if (isert_conn->login_req_buf)
++ if (isert_conn->login_desc)
+ isert_free_login_buf(isert_conn);
+
+ isert_device_put(device);
+@@ -965,17 +964,18 @@ isert_login_post_recv(struct isert_conn *isert_conn)
+ int ret;
+
+ memset(&sge, 0, sizeof(struct ib_sge));
+- sge.addr = isert_conn->login_req_dma;
++ sge.addr = isert_conn->login_desc->dma_addr +
++ isert_get_hdr_offset(isert_conn->login_desc);
+ sge.length = ISER_RX_PAYLOAD_SIZE;
+ sge.lkey = isert_conn->device->pd->local_dma_lkey;
+
+ isert_dbg("Setup sge: addr: %llx length: %d 0x%08x\n",
+ sge.addr, sge.length, sge.lkey);
+
+- isert_conn->login_req_buf->rx_cqe.done = isert_login_recv_done;
++ isert_conn->login_desc->rx_cqe.done = isert_login_recv_done;
+
+ memset(&rx_wr, 0, sizeof(struct ib_recv_wr));
+- rx_wr.wr_cqe = &isert_conn->login_req_buf->rx_cqe;
++ rx_wr.wr_cqe = &isert_conn->login_desc->rx_cqe;
+ rx_wr.sg_list = &sge;
+ rx_wr.num_sge = 1;
+
+@@ -1052,7 +1052,7 @@ post_send:
+ static void
+ isert_rx_login_req(struct isert_conn *isert_conn)
+ {
+- struct iser_rx_desc *rx_desc = isert_conn->login_req_buf;
++ struct iser_rx_desc *rx_desc = isert_conn->login_desc;
+ int rx_buflen = isert_conn->login_req_len;
+ struct iscsi_conn *conn = isert_conn->conn;
+ struct iscsi_login *login = conn->conn_login;
+@@ -1064,7 +1064,7 @@ isert_rx_login_req(struct isert_conn *isert_conn)
+
+ if (login->first_request) {
+ struct iscsi_login_req *login_req =
+- (struct iscsi_login_req *)&rx_desc->iscsi_header;
++ (struct iscsi_login_req *)isert_get_iscsi_hdr(rx_desc);
+ /*
+ * Setup the initial iscsi_login values from the leading
+ * login request PDU.
+@@ -1083,13 +1083,13 @@ isert_rx_login_req(struct isert_conn *isert_conn)
+ login->tsih = be16_to_cpu(login_req->tsih);
+ }
+
+- memcpy(&login->req[0], (void *)&rx_desc->iscsi_header, ISCSI_HDR_LEN);
++ memcpy(&login->req[0], isert_get_iscsi_hdr(rx_desc), ISCSI_HDR_LEN);
+
+ size = min(rx_buflen, MAX_KEY_VALUE_PAIRS);
+ isert_dbg("Using login payload size: %d, rx_buflen: %d "
+ "MAX_KEY_VALUE_PAIRS: %d\n", size, rx_buflen,
+ MAX_KEY_VALUE_PAIRS);
+- memcpy(login->req_buf, &rx_desc->data[0], size);
++ memcpy(login->req_buf, isert_get_data(rx_desc), size);
+
+ if (login->first_request) {
+ complete(&isert_conn->login_comp);
+@@ -1154,14 +1154,15 @@ isert_handle_scsi_cmd(struct isert_conn *isert_conn,
+ if (imm_data_len != data_len) {
+ sg_nents = max(1UL, DIV_ROUND_UP(imm_data_len, PAGE_SIZE));
+ sg_copy_from_buffer(cmd->se_cmd.t_data_sg, sg_nents,
+- &rx_desc->data[0], imm_data_len);
++ isert_get_data(rx_desc), imm_data_len);
+ isert_dbg("Copy Immediate sg_nents: %u imm_data_len: %d\n",
+ sg_nents, imm_data_len);
+ } else {
+ sg_init_table(&isert_cmd->sg, 1);
+ cmd->se_cmd.t_data_sg = &isert_cmd->sg;
+ cmd->se_cmd.t_data_nents = 1;
+- sg_set_buf(&isert_cmd->sg, &rx_desc->data[0], imm_data_len);
++ sg_set_buf(&isert_cmd->sg, isert_get_data(rx_desc),
++ imm_data_len);
+ isert_dbg("Transfer Immediate imm_data_len: %d\n",
+ imm_data_len);
+ }
+@@ -1230,9 +1231,9 @@ isert_handle_iscsi_dataout(struct isert_conn *isert_conn,
+ }
+ isert_dbg("Copying DataOut: sg_start: %p, sg_off: %u "
+ "sg_nents: %u from %p %u\n", sg_start, sg_off,
+- sg_nents, &rx_desc->data[0], unsol_data_len);
++ sg_nents, isert_get_data(rx_desc), unsol_data_len);
+
+- sg_copy_from_buffer(sg_start, sg_nents, &rx_desc->data[0],
++ sg_copy_from_buffer(sg_start, sg_nents, isert_get_data(rx_desc),
+ unsol_data_len);
+
+ rc = iscsit_check_dataout_payload(cmd, hdr, false);
+@@ -1291,7 +1292,7 @@ isert_handle_text_cmd(struct isert_conn *isert_conn, struct isert_cmd *isert_cmd
+ }
+ cmd->text_in_ptr = text_in;
+
+- memcpy(cmd->text_in_ptr, &rx_desc->data[0], payload_length);
++ memcpy(cmd->text_in_ptr, isert_get_data(rx_desc), payload_length);
+
+ return iscsit_process_text_cmd(conn, cmd, hdr);
+ }
+@@ -1301,7 +1302,7 @@ isert_rx_opcode(struct isert_conn *isert_conn, struct iser_rx_desc *rx_desc,
+ uint32_t read_stag, uint64_t read_va,
+ uint32_t write_stag, uint64_t write_va)
+ {
+- struct iscsi_hdr *hdr = &rx_desc->iscsi_header;
++ struct iscsi_hdr *hdr = isert_get_iscsi_hdr(rx_desc);
+ struct iscsi_conn *conn = isert_conn->conn;
+ struct iscsi_cmd *cmd;
+ struct isert_cmd *isert_cmd;
+@@ -1399,8 +1400,8 @@ isert_recv_done(struct ib_cq *cq, struct ib_wc *wc)
+ struct isert_conn *isert_conn = wc->qp->qp_context;
+ struct ib_device *ib_dev = isert_conn->cm_id->device;
+ struct iser_rx_desc *rx_desc = cqe_to_rx_desc(wc->wr_cqe);
+- struct iscsi_hdr *hdr = &rx_desc->iscsi_header;
+- struct iser_ctrl *iser_ctrl = &rx_desc->iser_header;
++ struct iscsi_hdr *hdr = isert_get_iscsi_hdr(rx_desc);
++ struct iser_ctrl *iser_ctrl = isert_get_iser_hdr(rx_desc);
+ uint64_t read_va = 0, write_va = 0;
+ uint32_t read_stag = 0, write_stag = 0;
+
+@@ -1414,7 +1415,7 @@ isert_recv_done(struct ib_cq *cq, struct ib_wc *wc)
+ rx_desc->in_use = true;
+
+ ib_dma_sync_single_for_cpu(ib_dev, rx_desc->dma_addr,
+- ISER_RX_PAYLOAD_SIZE, DMA_FROM_DEVICE);
++ ISER_RX_SIZE, DMA_FROM_DEVICE);
+
+ isert_dbg("DMA: 0x%llx, iSCSI opcode: 0x%02x, ITT: 0x%08x, flags: 0x%02x dlen: %d\n",
+ rx_desc->dma_addr, hdr->opcode, hdr->itt, hdr->flags,
+@@ -1449,7 +1450,7 @@ isert_recv_done(struct ib_cq *cq, struct ib_wc *wc)
+ read_stag, read_va, write_stag, write_va);
+
+ ib_dma_sync_single_for_device(ib_dev, rx_desc->dma_addr,
+- ISER_RX_PAYLOAD_SIZE, DMA_FROM_DEVICE);
++ ISER_RX_SIZE, DMA_FROM_DEVICE);
+ }
+
+ static void
+@@ -1463,8 +1464,8 @@ isert_login_recv_done(struct ib_cq *cq, struct ib_wc *wc)
+ return;
+ }
+
+- ib_dma_sync_single_for_cpu(ib_dev, isert_conn->login_req_dma,
+- ISER_RX_PAYLOAD_SIZE, DMA_FROM_DEVICE);
++ ib_dma_sync_single_for_cpu(ib_dev, isert_conn->login_desc->dma_addr,
++ ISER_RX_SIZE, DMA_FROM_DEVICE);
+
+ isert_conn->login_req_len = wc->byte_len - ISER_HEADERS_LEN;
+
+@@ -1479,8 +1480,8 @@ isert_login_recv_done(struct ib_cq *cq, struct ib_wc *wc)
+ complete(&isert_conn->login_req_comp);
+ mutex_unlock(&isert_conn->mutex);
+
+- ib_dma_sync_single_for_device(ib_dev, isert_conn->login_req_dma,
+- ISER_RX_PAYLOAD_SIZE, DMA_FROM_DEVICE);
++ ib_dma_sync_single_for_device(ib_dev, isert_conn->login_desc->dma_addr,
++ ISER_RX_SIZE, DMA_FROM_DEVICE);
+ }
+
+ static void
+diff --git a/drivers/infiniband/ulp/isert/ib_isert.h b/drivers/infiniband/ulp/isert/ib_isert.h
+index 3b296bac4f603..d267a6d60d87d 100644
+--- a/drivers/infiniband/ulp/isert/ib_isert.h
++++ b/drivers/infiniband/ulp/isert/ib_isert.h
+@@ -59,9 +59,11 @@
+ ISERT_MAX_TX_MISC_PDUS + \
+ ISERT_MAX_RX_MISC_PDUS)
+
+-#define ISER_RX_PAD_SIZE (ISCSI_DEF_MAX_RECV_SEG_LEN + 4096 - \
+- (ISER_RX_PAYLOAD_SIZE + sizeof(u64) + sizeof(struct ib_sge) + \
+- sizeof(struct ib_cqe) + sizeof(bool)))
++/*
++ * RX size is default of 8k plus headers, but data needs to align to
++ * 512 boundary, so use 1024 to have the extra space for alignment.
++ */
++#define ISER_RX_SIZE (ISCSI_DEF_MAX_RECV_SEG_LEN + 1024)
+
+ #define ISCSI_ISER_SG_TABLESIZE 256
+
+@@ -80,21 +82,41 @@ enum iser_conn_state {
+ };
+
+ struct iser_rx_desc {
+- struct iser_ctrl iser_header;
+- struct iscsi_hdr iscsi_header;
+- char data[ISCSI_DEF_MAX_RECV_SEG_LEN];
++ char buf[ISER_RX_SIZE];
+ u64 dma_addr;
+ struct ib_sge rx_sg;
+ struct ib_cqe rx_cqe;
+ bool in_use;
+- char pad[ISER_RX_PAD_SIZE];
+-} __packed;
++};
+
+ static inline struct iser_rx_desc *cqe_to_rx_desc(struct ib_cqe *cqe)
+ {
+ return container_of(cqe, struct iser_rx_desc, rx_cqe);
+ }
+
++static void *isert_get_iser_hdr(struct iser_rx_desc *desc)
++{
++ return PTR_ALIGN(desc->buf + ISER_HEADERS_LEN, 512) - ISER_HEADERS_LEN;
++}
++
++static size_t isert_get_hdr_offset(struct iser_rx_desc *desc)
++{
++ return isert_get_iser_hdr(desc) - (void *)desc->buf;
++}
++
++static void *isert_get_iscsi_hdr(struct iser_rx_desc *desc)
++{
++ return isert_get_iser_hdr(desc) + sizeof(struct iser_ctrl);
++}
++
++static void *isert_get_data(struct iser_rx_desc *desc)
++{
++ void *data = isert_get_iser_hdr(desc) + ISER_HEADERS_LEN;
++
++ WARN_ON((uintptr_t)data & 511);
++ return data;
++}
++
+ struct iser_tx_desc {
+ struct iser_ctrl iser_header;
+ struct iscsi_hdr iscsi_header;
+@@ -141,9 +163,8 @@ struct isert_conn {
+ u32 responder_resources;
+ u32 initiator_depth;
+ bool pi_support;
+- struct iser_rx_desc *login_req_buf;
++ struct iser_rx_desc *login_desc;
+ char *login_rsp_buf;
+- u64 login_req_dma;
+ int login_req_len;
+ u64 login_rsp_dma;
+ struct iser_rx_desc *rx_descs;
+diff --git a/drivers/infiniband/ulp/rtrs/rtrs-srv-sysfs.c b/drivers/infiniband/ulp/rtrs/rtrs-srv-sysfs.c
+index 3d7877534bcc9..cf6a2be61695d 100644
+--- a/drivers/infiniband/ulp/rtrs/rtrs-srv-sysfs.c
++++ b/drivers/infiniband/ulp/rtrs/rtrs-srv-sysfs.c
+@@ -152,13 +152,6 @@ static struct attribute_group rtrs_srv_stats_attr_group = {
+ .attrs = rtrs_srv_stats_attrs,
+ };
+
+-static void rtrs_srv_dev_release(struct device *dev)
+-{
+- struct rtrs_srv *srv = container_of(dev, struct rtrs_srv, dev);
+-
+- kfree(srv);
+-}
+-
+ static int rtrs_srv_create_once_sysfs_root_folders(struct rtrs_srv_sess *sess)
+ {
+ struct rtrs_srv *srv = sess->srv;
+@@ -172,7 +165,6 @@ static int rtrs_srv_create_once_sysfs_root_folders(struct rtrs_srv_sess *sess)
+ goto unlock;
+ }
+ srv->dev.class = rtrs_dev_class;
+- srv->dev.release = rtrs_srv_dev_release;
+ err = dev_set_name(&srv->dev, "%s", sess->s.sessname);
+ if (err)
+ goto unlock;
+@@ -182,16 +174,16 @@ static int rtrs_srv_create_once_sysfs_root_folders(struct rtrs_srv_sess *sess)
+ * sysfs files are created
+ */
+ dev_set_uevent_suppress(&srv->dev, true);
+- err = device_register(&srv->dev);
++ err = device_add(&srv->dev);
+ if (err) {
+- pr_err("device_register(): %d\n", err);
++ pr_err("device_add(): %d\n", err);
+ goto put;
+ }
+ srv->kobj_paths = kobject_create_and_add("paths", &srv->dev.kobj);
+ if (!srv->kobj_paths) {
+ err = -ENOMEM;
+ pr_err("kobject_create_and_add(): %d\n", err);
+- device_unregister(&srv->dev);
++ device_del(&srv->dev);
+ goto unlock;
+ }
+ dev_set_uevent_suppress(&srv->dev, false);
+@@ -216,7 +208,7 @@ rtrs_srv_destroy_once_sysfs_root_folders(struct rtrs_srv_sess *sess)
+ kobject_del(srv->kobj_paths);
+ kobject_put(srv->kobj_paths);
+ mutex_unlock(&srv->paths_mutex);
+- device_unregister(&srv->dev);
++ device_del(&srv->dev);
+ } else {
+ mutex_unlock(&srv->paths_mutex);
+ }
+diff --git a/drivers/infiniband/ulp/rtrs/rtrs-srv.c b/drivers/infiniband/ulp/rtrs/rtrs-srv.c
+index a219bd1bdbc26..28f6414dfa3dc 100644
+--- a/drivers/infiniband/ulp/rtrs/rtrs-srv.c
++++ b/drivers/infiniband/ulp/rtrs/rtrs-srv.c
+@@ -1319,6 +1319,13 @@ static int rtrs_srv_get_next_cq_vector(struct rtrs_srv_sess *sess)
+ return sess->cur_cq_vector;
+ }
+
++static void rtrs_srv_dev_release(struct device *dev)
++{
++ struct rtrs_srv *srv = container_of(dev, struct rtrs_srv, dev);
++
++ kfree(srv);
++}
++
+ static struct rtrs_srv *__alloc_srv(struct rtrs_srv_ctx *ctx,
+ const uuid_t *paths_uuid)
+ {
+@@ -1336,6 +1343,8 @@ static struct rtrs_srv *__alloc_srv(struct rtrs_srv_ctx *ctx,
+ uuid_copy(&srv->paths_uuid, paths_uuid);
+ srv->queue_depth = sess_queue_depth;
+ srv->ctx = ctx;
++ device_initialize(&srv->dev);
++ srv->dev.release = rtrs_srv_dev_release;
+
+ srv->chunks = kcalloc(srv->queue_depth, sizeof(*srv->chunks),
+ GFP_KERNEL);
+diff --git a/drivers/interconnect/qcom/bcm-voter.c b/drivers/interconnect/qcom/bcm-voter.c
+index 2a11a63e7217a..b360dc34c90c7 100644
+--- a/drivers/interconnect/qcom/bcm-voter.c
++++ b/drivers/interconnect/qcom/bcm-voter.c
+@@ -52,8 +52,20 @@ static int cmp_vcd(void *priv, struct list_head *a, struct list_head *b)
+ return 1;
+ }
+
++static u64 bcm_div(u64 num, u32 base)
++{
++ /* Ensure that small votes aren't lost. */
++ if (num && num < base)
++ return 1;
++
++ do_div(num, base);
++
++ return num;
++}
++
+ static void bcm_aggregate(struct qcom_icc_bcm *bcm)
+ {
++ struct qcom_icc_node *node;
+ size_t i, bucket;
+ u64 agg_avg[QCOM_ICC_NUM_BUCKETS] = {0};
+ u64 agg_peak[QCOM_ICC_NUM_BUCKETS] = {0};
+@@ -61,22 +73,21 @@ static void bcm_aggregate(struct qcom_icc_bcm *bcm)
+
+ for (bucket = 0; bucket < QCOM_ICC_NUM_BUCKETS; bucket++) {
+ for (i = 0; i < bcm->num_nodes; i++) {
+- temp = bcm->nodes[i]->sum_avg[bucket] * bcm->aux_data.width;
+- do_div(temp, bcm->nodes[i]->buswidth * bcm->nodes[i]->channels);
++ node = bcm->nodes[i];
++ temp = bcm_div(node->sum_avg[bucket] * bcm->aux_data.width,
++ node->buswidth * node->channels);
+ agg_avg[bucket] = max(agg_avg[bucket], temp);
+
+- temp = bcm->nodes[i]->max_peak[bucket] * bcm->aux_data.width;
+- do_div(temp, bcm->nodes[i]->buswidth);
++ temp = bcm_div(node->max_peak[bucket] * bcm->aux_data.width,
++ node->buswidth);
+ agg_peak[bucket] = max(agg_peak[bucket], temp);
+ }
+
+ temp = agg_avg[bucket] * 1000ULL;
+- do_div(temp, bcm->aux_data.unit);
+- bcm->vote_x[bucket] = temp;
++ bcm->vote_x[bucket] = bcm_div(temp, bcm->aux_data.unit);
+
+ temp = agg_peak[bucket] * 1000ULL;
+- do_div(temp, bcm->aux_data.unit);
+- bcm->vote_y[bucket] = temp;
++ bcm->vote_y[bucket] = bcm_div(temp, bcm->aux_data.unit);
+ }
+
+ if (bcm->keepalive && bcm->vote_x[QCOM_ICC_BUCKET_AMC] == 0 &&
+diff --git a/drivers/iommu/amd/iommu.c b/drivers/iommu/amd/iommu.c
+index 200ee948f6ec1..37c74c842f3a3 100644
+--- a/drivers/iommu/amd/iommu.c
++++ b/drivers/iommu/amd/iommu.c
+@@ -2650,7 +2650,12 @@ static int amd_iommu_def_domain_type(struct device *dev)
+ if (!dev_data)
+ return 0;
+
+- if (dev_data->iommu_v2)
++ /*
++ * Do not identity map IOMMUv2 capable devices when memory encryption is
++ * active, because some of those devices (AMD GPUs) don't have the
++ * encryption bit in their DMA-mask and require remapping.
++ */
++ if (!mem_encrypt_active() && dev_data->iommu_v2)
+ return IOMMU_DOMAIN_IDENTITY;
+
+ return 0;
+diff --git a/drivers/iommu/amd/iommu_v2.c b/drivers/iommu/amd/iommu_v2.c
+index e4b025c5637c4..5a188cac7a0f1 100644
+--- a/drivers/iommu/amd/iommu_v2.c
++++ b/drivers/iommu/amd/iommu_v2.c
+@@ -737,6 +737,13 @@ int amd_iommu_init_device(struct pci_dev *pdev, int pasids)
+
+ might_sleep();
+
++ /*
++ * When memory encryption is active the device is likely not in a
++ * direct-mapped domain. Forbid using IOMMUv2 functionality for now.
++ */
++ if (mem_encrypt_active())
++ return -ENODEV;
++
+ if (!amd_iommu_v2_supported())
+ return -ENODEV;
+
+diff --git a/drivers/media/rc/gpio-ir-tx.c b/drivers/media/rc/gpio-ir-tx.c
+index f33b443bfa47b..c6cd2e6d8e654 100644
+--- a/drivers/media/rc/gpio-ir-tx.c
++++ b/drivers/media/rc/gpio-ir-tx.c
+@@ -19,8 +19,6 @@ struct gpio_ir {
+ struct gpio_desc *gpio;
+ unsigned int carrier;
+ unsigned int duty_cycle;
+- /* we need a spinlock to hold the cpu while transmitting */
+- spinlock_t lock;
+ };
+
+ static const struct of_device_id gpio_ir_tx_of_match[] = {
+@@ -53,12 +51,11 @@ static int gpio_ir_tx_set_carrier(struct rc_dev *dev, u32 carrier)
+ static void gpio_ir_tx_unmodulated(struct gpio_ir *gpio_ir, uint *txbuf,
+ uint count)
+ {
+- unsigned long flags;
+ ktime_t edge;
+ s32 delta;
+ int i;
+
+- spin_lock_irqsave(&gpio_ir->lock, flags);
++ local_irq_disable();
+
+ edge = ktime_get();
+
+@@ -72,14 +69,11 @@ static void gpio_ir_tx_unmodulated(struct gpio_ir *gpio_ir, uint *txbuf,
+ }
+
+ gpiod_set_value(gpio_ir->gpio, 0);
+-
+- spin_unlock_irqrestore(&gpio_ir->lock, flags);
+ }
+
+ static void gpio_ir_tx_modulated(struct gpio_ir *gpio_ir, uint *txbuf,
+ uint count)
+ {
+- unsigned long flags;
+ ktime_t edge;
+ /*
+ * delta should never exceed 0.5 seconds (IR_MAX_DURATION) and on
+@@ -95,7 +89,7 @@ static void gpio_ir_tx_modulated(struct gpio_ir *gpio_ir, uint *txbuf,
+ space = DIV_ROUND_CLOSEST((100 - gpio_ir->duty_cycle) *
+ (NSEC_PER_SEC / 100), gpio_ir->carrier);
+
+- spin_lock_irqsave(&gpio_ir->lock, flags);
++ local_irq_disable();
+
+ edge = ktime_get();
+
+@@ -128,19 +122,20 @@ static void gpio_ir_tx_modulated(struct gpio_ir *gpio_ir, uint *txbuf,
+ edge = last;
+ }
+ }
+-
+- spin_unlock_irqrestore(&gpio_ir->lock, flags);
+ }
+
+ static int gpio_ir_tx(struct rc_dev *dev, unsigned int *txbuf,
+ unsigned int count)
+ {
+ struct gpio_ir *gpio_ir = dev->priv;
++ unsigned long flags;
+
++ local_irq_save(flags);
+ if (gpio_ir->carrier)
+ gpio_ir_tx_modulated(gpio_ir, txbuf, count);
+ else
+ gpio_ir_tx_unmodulated(gpio_ir, txbuf, count);
++ local_irq_restore(flags);
+
+ return count;
+ }
+@@ -176,7 +171,6 @@ static int gpio_ir_tx_probe(struct platform_device *pdev)
+
+ gpio_ir->carrier = 38000;
+ gpio_ir->duty_cycle = 50;
+- spin_lock_init(&gpio_ir->lock);
+
+ rc = devm_rc_register_device(&pdev->dev, rcdev);
+ if (rc < 0)
+diff --git a/drivers/misc/eeprom/at24.c b/drivers/misc/eeprom/at24.c
+index 9ff18d4961ceb..5561075f7a1bb 100644
+--- a/drivers/misc/eeprom/at24.c
++++ b/drivers/misc/eeprom/at24.c
+@@ -692,10 +692,6 @@ static int at24_probe(struct i2c_client *client)
+ nvmem_config.word_size = 1;
+ nvmem_config.size = byte_len;
+
+- at24->nvmem = devm_nvmem_register(dev, &nvmem_config);
+- if (IS_ERR(at24->nvmem))
+- return PTR_ERR(at24->nvmem);
+-
+ i2c_set_clientdata(client, at24);
+
+ err = regulator_enable(at24->vcc_reg);
+@@ -708,6 +704,13 @@ static int at24_probe(struct i2c_client *client)
+ pm_runtime_set_active(dev);
+ pm_runtime_enable(dev);
+
++ at24->nvmem = devm_nvmem_register(dev, &nvmem_config);
++ if (IS_ERR(at24->nvmem)) {
++ pm_runtime_disable(dev);
++ regulator_disable(at24->vcc_reg);
++ return PTR_ERR(at24->nvmem);
++ }
++
+ /*
+ * Perform a one-byte test read to verify that the
+ * chip is functional.
+diff --git a/drivers/mmc/core/sdio_ops.c b/drivers/mmc/core/sdio_ops.c
+index 93d346c01110d..4c229dd2b6e54 100644
+--- a/drivers/mmc/core/sdio_ops.c
++++ b/drivers/mmc/core/sdio_ops.c
+@@ -121,6 +121,7 @@ int mmc_io_rw_extended(struct mmc_card *card, int write, unsigned fn,
+ struct sg_table sgtable;
+ unsigned int nents, left_size, i;
+ unsigned int seg_size = card->host->max_seg_size;
++ int err;
+
+ WARN_ON(blksz == 0);
+
+@@ -170,28 +171,32 @@ int mmc_io_rw_extended(struct mmc_card *card, int write, unsigned fn,
+
+ mmc_set_data_timeout(&data, card);
+
+- mmc_wait_for_req(card->host, &mrq);
++ mmc_pre_req(card->host, &mrq);
+
+- if (nents > 1)
+- sg_free_table(&sgtable);
++ mmc_wait_for_req(card->host, &mrq);
+
+ if (cmd.error)
+- return cmd.error;
+- if (data.error)
+- return data.error;
+-
+- if (mmc_host_is_spi(card->host)) {
++ err = cmd.error;
++ else if (data.error)
++ err = data.error;
++ else if (mmc_host_is_spi(card->host))
+ /* host driver already reported errors */
+- } else {
+- if (cmd.resp[0] & R5_ERROR)
+- return -EIO;
+- if (cmd.resp[0] & R5_FUNCTION_NUMBER)
+- return -EINVAL;
+- if (cmd.resp[0] & R5_OUT_OF_RANGE)
+- return -ERANGE;
+- }
++ err = 0;
++ else if (cmd.resp[0] & R5_ERROR)
++ err = -EIO;
++ else if (cmd.resp[0] & R5_FUNCTION_NUMBER)
++ err = -EINVAL;
++ else if (cmd.resp[0] & R5_OUT_OF_RANGE)
++ err = -ERANGE;
++ else
++ err = 0;
+
+- return 0;
++ mmc_post_req(card->host, &mrq, err);
++
++ if (nents > 1)
++ sg_free_table(&sgtable);
++
++ return err;
+ }
+
+ int sdio_reset(struct mmc_host *host)
+diff --git a/drivers/mmc/host/sdhci-acpi.c b/drivers/mmc/host/sdhci-acpi.c
+index 2d9f79b50a7fa..841e34aa7caae 100644
+--- a/drivers/mmc/host/sdhci-acpi.c
++++ b/drivers/mmc/host/sdhci-acpi.c
+@@ -550,12 +550,18 @@ static int amd_select_drive_strength(struct mmc_card *card,
+ return MMC_SET_DRIVER_TYPE_A;
+ }
+
+-static void sdhci_acpi_amd_hs400_dll(struct sdhci_host *host)
++static void sdhci_acpi_amd_hs400_dll(struct sdhci_host *host, bool enable)
+ {
++ struct sdhci_acpi_host *acpi_host = sdhci_priv(host);
++ struct amd_sdhci_host *amd_host = sdhci_acpi_priv(acpi_host);
++
+ /* AMD Platform requires dll setting */
+ sdhci_writel(host, 0x40003210, SDHCI_AMD_RESET_DLL_REGISTER);
+ usleep_range(10, 20);
+- sdhci_writel(host, 0x40033210, SDHCI_AMD_RESET_DLL_REGISTER);
++ if (enable)
++ sdhci_writel(host, 0x40033210, SDHCI_AMD_RESET_DLL_REGISTER);
++
++ amd_host->dll_enabled = enable;
+ }
+
+ /*
+@@ -595,10 +601,8 @@ static void amd_set_ios(struct mmc_host *mmc, struct mmc_ios *ios)
+
+ /* DLL is only required for HS400 */
+ if (host->timing == MMC_TIMING_MMC_HS400 &&
+- !amd_host->dll_enabled) {
+- sdhci_acpi_amd_hs400_dll(host);
+- amd_host->dll_enabled = true;
+- }
++ !amd_host->dll_enabled)
++ sdhci_acpi_amd_hs400_dll(host, true);
+ }
+ }
+
+@@ -619,10 +623,23 @@ static int amd_sdhci_execute_tuning(struct mmc_host *mmc, u32 opcode)
+ return err;
+ }
+
++static void amd_sdhci_reset(struct sdhci_host *host, u8 mask)
++{
++ struct sdhci_acpi_host *acpi_host = sdhci_priv(host);
++ struct amd_sdhci_host *amd_host = sdhci_acpi_priv(acpi_host);
++
++ if (mask & SDHCI_RESET_ALL) {
++ amd_host->tuned_clock = false;
++ sdhci_acpi_amd_hs400_dll(host, false);
++ }
++
++ sdhci_reset(host, mask);
++}
++
+ static const struct sdhci_ops sdhci_acpi_ops_amd = {
+ .set_clock = sdhci_set_clock,
+ .set_bus_width = sdhci_set_bus_width,
+- .reset = sdhci_reset,
++ .reset = amd_sdhci_reset,
+ .set_uhs_signaling = sdhci_set_uhs_signaling,
+ };
+
+diff --git a/drivers/mmc/host/sdhci-msm.c b/drivers/mmc/host/sdhci-msm.c
+index c0d58e9fcc333..0450f521c6f9a 100644
+--- a/drivers/mmc/host/sdhci-msm.c
++++ b/drivers/mmc/host/sdhci-msm.c
+@@ -1158,7 +1158,7 @@ static void sdhci_msm_set_cdr(struct sdhci_host *host, bool enable)
+ static int sdhci_msm_execute_tuning(struct mmc_host *mmc, u32 opcode)
+ {
+ struct sdhci_host *host = mmc_priv(mmc);
+- int tuning_seq_cnt = 3;
++ int tuning_seq_cnt = 10;
+ u8 phase, tuned_phases[16], tuned_phase_cnt = 0;
+ int rc;
+ struct mmc_ios ios = host->mmc->ios;
+@@ -1214,6 +1214,22 @@ retry:
+ } while (++phase < ARRAY_SIZE(tuned_phases));
+
+ if (tuned_phase_cnt) {
++ if (tuned_phase_cnt == ARRAY_SIZE(tuned_phases)) {
++ /*
++ * All phases valid is _almost_ as bad as no phases
++ * valid. Probably all phases are not really reliable
++ * but we didn't detect where the unreliable place is.
++ * That means we'll essentially be guessing and hoping
++ * we get a good phase. Better to try a few times.
++ */
++ dev_dbg(mmc_dev(mmc), "%s: All phases valid; try again\n",
++ mmc_hostname(mmc));
++ if (--tuning_seq_cnt) {
++ tuned_phase_cnt = 0;
++ goto retry;
++ }
++ }
++
+ rc = msm_find_most_appropriate_phase(host, tuned_phases,
+ tuned_phase_cnt);
+ if (rc < 0)
+diff --git a/drivers/mmc/host/sdhci-of-esdhc.c b/drivers/mmc/host/sdhci-of-esdhc.c
+index 7c73d243dc6ce..45881b3099567 100644
+--- a/drivers/mmc/host/sdhci-of-esdhc.c
++++ b/drivers/mmc/host/sdhci-of-esdhc.c
+@@ -81,6 +81,7 @@ struct sdhci_esdhc {
+ bool quirk_tuning_erratum_type2;
+ bool quirk_ignore_data_inhibit;
+ bool quirk_delay_before_data_reset;
++ bool quirk_trans_complete_erratum;
+ bool in_sw_tuning;
+ unsigned int peripheral_clock;
+ const struct esdhc_clk_fixup *clk_fixup;
+@@ -1177,10 +1178,11 @@ static void esdhc_set_uhs_signaling(struct sdhci_host *host,
+
+ static u32 esdhc_irq(struct sdhci_host *host, u32 intmask)
+ {
++ struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
++ struct sdhci_esdhc *esdhc = sdhci_pltfm_priv(pltfm_host);
+ u32 command;
+
+- if (of_find_compatible_node(NULL, NULL,
+- "fsl,p2020-esdhc")) {
++ if (esdhc->quirk_trans_complete_erratum) {
+ command = SDHCI_GET_CMD(sdhci_readw(host,
+ SDHCI_COMMAND));
+ if (command == MMC_WRITE_MULTIPLE_BLOCK &&
+@@ -1334,8 +1336,10 @@ static void esdhc_init(struct platform_device *pdev, struct sdhci_host *host)
+ esdhc->clk_fixup = match->data;
+ np = pdev->dev.of_node;
+
+- if (of_device_is_compatible(np, "fsl,p2020-esdhc"))
++ if (of_device_is_compatible(np, "fsl,p2020-esdhc")) {
+ esdhc->quirk_delay_before_data_reset = true;
++ esdhc->quirk_trans_complete_erratum = true;
++ }
+
+ clk = of_clk_get(np, 0);
+ if (!IS_ERR(clk)) {
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
+index 71ed4c54f6d5d..eaadcc7043349 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
+@@ -20,6 +20,7 @@
+ #include <net/pkt_cls.h>
+ #include <net/tcp.h>
+ #include <net/vxlan.h>
++#include <net/geneve.h>
+
+ #include "hnae3.h"
+ #include "hns3_enet.h"
+@@ -780,7 +781,7 @@ static int hns3_get_l4_protocol(struct sk_buff *skb, u8 *ol4_proto,
+ * and it is udp packet, which has a dest port as the IANA assigned.
+ * the hardware is expected to do the checksum offload, but the
+ * hardware will not do the checksum offload when udp dest port is
+- * 4789.
++ * 4789 or 6081.
+ */
+ static bool hns3_tunnel_csum_bug(struct sk_buff *skb)
+ {
+@@ -789,7 +790,8 @@ static bool hns3_tunnel_csum_bug(struct sk_buff *skb)
+ l4.hdr = skb_transport_header(skb);
+
+ if (!(!skb->encapsulation &&
+- l4.udp->dest == htons(IANA_VXLAN_UDP_PORT)))
++ (l4.udp->dest == htons(IANA_VXLAN_UDP_PORT) ||
++ l4.udp->dest == htons(GENEVE_UDP_PORT))))
+ return false;
+
+ skb_checksum_help(skb);
+diff --git a/drivers/net/wan/hdlc.c b/drivers/net/wan/hdlc.c
+index 386ed2aa31fd9..9b00708676cf7 100644
+--- a/drivers/net/wan/hdlc.c
++++ b/drivers/net/wan/hdlc.c
+@@ -229,7 +229,7 @@ static void hdlc_setup_dev(struct net_device *dev)
+ dev->min_mtu = 68;
+ dev->max_mtu = HDLC_MAX_MTU;
+ dev->type = ARPHRD_RAWHDLC;
+- dev->hard_header_len = 16;
++ dev->hard_header_len = 0;
+ dev->needed_headroom = 0;
+ dev->addr_len = 0;
+ dev->header_ops = &hdlc_null_ops;
+diff --git a/drivers/net/wan/hdlc_cisco.c b/drivers/net/wan/hdlc_cisco.c
+index d8cba3625c185..444130655d8ea 100644
+--- a/drivers/net/wan/hdlc_cisco.c
++++ b/drivers/net/wan/hdlc_cisco.c
+@@ -370,6 +370,7 @@ static int cisco_ioctl(struct net_device *dev, struct ifreq *ifr)
+ memcpy(&state(hdlc)->settings, &new_settings, size);
+ spin_lock_init(&state(hdlc)->lock);
+ dev->header_ops = &cisco_header_ops;
++ dev->hard_header_len = sizeof(struct hdlc_header);
+ dev->type = ARPHRD_CISCO;
+ call_netdevice_notifiers(NETDEV_POST_TYPE_CHANGE, dev);
+ netif_dormant_on(dev);
+diff --git a/drivers/net/wan/lapbether.c b/drivers/net/wan/lapbether.c
+index 1ea15f2123ed5..e61616b0b91c7 100644
+--- a/drivers/net/wan/lapbether.c
++++ b/drivers/net/wan/lapbether.c
+@@ -210,6 +210,8 @@ static void lapbeth_data_transmit(struct net_device *ndev, struct sk_buff *skb)
+
+ skb->dev = dev = lapbeth->ethdev;
+
++ skb_reset_network_header(skb);
++
+ dev_hard_header(skb, dev, ETH_P_DEC, bcast_addr, NULL, 0);
+
+ dev_queue_xmit(skb);
+@@ -340,6 +342,7 @@ static int lapbeth_new_device(struct net_device *dev)
+ */
+ ndev->needed_headroom = -1 + 3 + 2 + dev->hard_header_len
+ + dev->needed_headroom;
++ ndev->needed_tailroom = dev->needed_tailroom;
+
+ lapbeth = netdev_priv(ndev);
+ lapbeth->axdev = ndev;
+diff --git a/drivers/nfc/st95hf/core.c b/drivers/nfc/st95hf/core.c
+index 9642971e89cea..4578547659839 100644
+--- a/drivers/nfc/st95hf/core.c
++++ b/drivers/nfc/st95hf/core.c
+@@ -966,7 +966,7 @@ static int st95hf_in_send_cmd(struct nfc_digital_dev *ddev,
+ rc = down_killable(&stcontext->exchange_lock);
+ if (rc) {
+ WARN(1, "Semaphore is not found up in st95hf_in_send_cmd\n");
+- return rc;
++ goto free_skb_resp;
+ }
+
+ rc = st95hf_spi_send(&stcontext->spicontext, skb->data,
+diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
+index fa0039dcacc66..f2556f0ea20dc 100644
+--- a/drivers/nvme/host/core.c
++++ b/drivers/nvme/host/core.c
+@@ -3324,10 +3324,6 @@ static ssize_t nvme_sysfs_delete(struct device *dev,
+ {
+ struct nvme_ctrl *ctrl = dev_get_drvdata(dev);
+
+- /* Can't delete non-created controllers */
+- if (!ctrl->created)
+- return -EBUSY;
+-
+ if (device_remove_file_self(dev, attr))
+ nvme_delete_ctrl_sync(ctrl);
+ return count;
+@@ -4129,7 +4125,6 @@ void nvme_start_ctrl(struct nvme_ctrl *ctrl)
+ nvme_queue_scan(ctrl);
+ nvme_start_queues(ctrl);
+ }
+- ctrl->created = true;
+ }
+ EXPORT_SYMBOL_GPL(nvme_start_ctrl);
+
+@@ -4287,7 +4282,7 @@ void nvme_unfreeze(struct nvme_ctrl *ctrl)
+ }
+ EXPORT_SYMBOL_GPL(nvme_unfreeze);
+
+-void nvme_wait_freeze_timeout(struct nvme_ctrl *ctrl, long timeout)
++int nvme_wait_freeze_timeout(struct nvme_ctrl *ctrl, long timeout)
+ {
+ struct nvme_ns *ns;
+
+@@ -4298,6 +4293,7 @@ void nvme_wait_freeze_timeout(struct nvme_ctrl *ctrl, long timeout)
+ break;
+ }
+ up_read(&ctrl->namespaces_rwsem);
++ return timeout;
+ }
+ EXPORT_SYMBOL_GPL(nvme_wait_freeze_timeout);
+
+diff --git a/drivers/nvme/host/fabrics.c b/drivers/nvme/host/fabrics.c
+index 4ec4829d62334..8575724734e02 100644
+--- a/drivers/nvme/host/fabrics.c
++++ b/drivers/nvme/host/fabrics.c
+@@ -565,10 +565,14 @@ bool __nvmf_check_ready(struct nvme_ctrl *ctrl, struct request *rq,
+ struct nvme_request *req = nvme_req(rq);
+
+ /*
+- * If we are in some state of setup or teardown only allow
+- * internally generated commands.
++ * currently we have a problem sending passthru commands
++ * on the admin_q if the controller is not LIVE because we can't
++ * make sure that they are going out after the admin connect,
++ * controller enable and/or other commands in the initialization
++ * sequence. until the controller will be LIVE, fail with
++ * BLK_STS_RESOURCE so that they will be rescheduled.
+ */
+- if (!blk_rq_is_passthrough(rq) || (req->flags & NVME_REQ_USERCMD))
++ if (rq->q == ctrl->admin_q && (req->flags & NVME_REQ_USERCMD))
+ return false;
+
+ /*
+@@ -576,9 +580,8 @@ bool __nvmf_check_ready(struct nvme_ctrl *ctrl, struct request *rq,
+ * which is require to set the queue live in the appropinquate states.
+ */
+ switch (ctrl->state) {
+- case NVME_CTRL_NEW:
+ case NVME_CTRL_CONNECTING:
+- if (nvme_is_fabrics(req->cmd) &&
++ if (blk_rq_is_passthrough(rq) && nvme_is_fabrics(req->cmd) &&
+ req->cmd->fabrics.fctype == nvme_fabrics_type_connect)
+ return true;
+ break;
+diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
+index e268f1d7e1a0f..1db144eb74ff1 100644
+--- a/drivers/nvme/host/nvme.h
++++ b/drivers/nvme/host/nvme.h
+@@ -271,7 +271,6 @@ struct nvme_ctrl {
+ struct nvme_command ka_cmd;
+ struct work_struct fw_act_work;
+ unsigned long events;
+- bool created;
+
+ #ifdef CONFIG_NVME_MULTIPATH
+ /* asymmetric namespace access: */
+@@ -538,7 +537,7 @@ void nvme_kill_queues(struct nvme_ctrl *ctrl);
+ void nvme_sync_queues(struct nvme_ctrl *ctrl);
+ void nvme_unfreeze(struct nvme_ctrl *ctrl);
+ void nvme_wait_freeze(struct nvme_ctrl *ctrl);
+-void nvme_wait_freeze_timeout(struct nvme_ctrl *ctrl, long timeout);
++int nvme_wait_freeze_timeout(struct nvme_ctrl *ctrl, long timeout);
+ void nvme_start_freeze(struct nvme_ctrl *ctrl);
+
+ #define NVME_QID_ANY -1
+diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
+index d4b1ff7471231..69a19fe241063 100644
+--- a/drivers/nvme/host/pci.c
++++ b/drivers/nvme/host/pci.c
+@@ -1250,8 +1250,8 @@ static enum blk_eh_timer_return nvme_timeout(struct request *req, bool reserved)
+ dev_warn_ratelimited(dev->ctrl.device,
+ "I/O %d QID %d timeout, disable controller\n",
+ req->tag, nvmeq->qid);
+- nvme_dev_disable(dev, true);
+ nvme_req(req)->flags |= NVME_REQ_CANCELLED;
++ nvme_dev_disable(dev, true);
+ return BLK_EH_DONE;
+ case NVME_CTRL_RESETTING:
+ return BLK_EH_RESET_TIMER;
+@@ -1268,10 +1268,10 @@ static enum blk_eh_timer_return nvme_timeout(struct request *req, bool reserved)
+ dev_warn(dev->ctrl.device,
+ "I/O %d QID %d timeout, reset controller\n",
+ req->tag, nvmeq->qid);
++ nvme_req(req)->flags |= NVME_REQ_CANCELLED;
+ nvme_dev_disable(dev, false);
+ nvme_reset_ctrl(&dev->ctrl);
+
+- nvme_req(req)->flags |= NVME_REQ_CANCELLED;
+ return BLK_EH_DONE;
+ }
+
+diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
+index 876859cd14e86..6c07bb55b0f83 100644
+--- a/drivers/nvme/host/rdma.c
++++ b/drivers/nvme/host/rdma.c
+@@ -121,6 +121,7 @@ struct nvme_rdma_ctrl {
+ struct sockaddr_storage src_addr;
+
+ struct nvme_ctrl ctrl;
++ struct mutex teardown_lock;
+ bool use_inline_data;
+ u32 io_queues[HCTX_MAX_TYPES];
+ };
+@@ -949,7 +950,15 @@ static int nvme_rdma_configure_io_queues(struct nvme_rdma_ctrl *ctrl, bool new)
+
+ if (!new) {
+ nvme_start_queues(&ctrl->ctrl);
+- nvme_wait_freeze(&ctrl->ctrl);
++ if (!nvme_wait_freeze_timeout(&ctrl->ctrl, NVME_IO_TIMEOUT)) {
++ /*
++ * If we timed out waiting for freeze we are likely to
++ * be stuck. Fail the controller initialization just
++ * to be safe.
++ */
++ ret = -ENODEV;
++ goto out_wait_freeze_timed_out;
++ }
+ blk_mq_update_nr_hw_queues(ctrl->ctrl.tagset,
+ ctrl->ctrl.queue_count - 1);
+ nvme_unfreeze(&ctrl->ctrl);
+@@ -957,6 +966,9 @@ static int nvme_rdma_configure_io_queues(struct nvme_rdma_ctrl *ctrl, bool new)
+
+ return 0;
+
++out_wait_freeze_timed_out:
++ nvme_stop_queues(&ctrl->ctrl);
++ nvme_rdma_stop_io_queues(ctrl);
+ out_cleanup_connect_q:
+ if (new)
+ blk_cleanup_queue(ctrl->ctrl.connect_q);
+@@ -971,6 +983,7 @@ out_free_io_queues:
+ static void nvme_rdma_teardown_admin_queue(struct nvme_rdma_ctrl *ctrl,
+ bool remove)
+ {
++ mutex_lock(&ctrl->teardown_lock);
+ blk_mq_quiesce_queue(ctrl->ctrl.admin_q);
+ nvme_rdma_stop_queue(&ctrl->queues[0]);
+ if (ctrl->ctrl.admin_tagset) {
+@@ -981,11 +994,13 @@ static void nvme_rdma_teardown_admin_queue(struct nvme_rdma_ctrl *ctrl,
+ if (remove)
+ blk_mq_unquiesce_queue(ctrl->ctrl.admin_q);
+ nvme_rdma_destroy_admin_queue(ctrl, remove);
++ mutex_unlock(&ctrl->teardown_lock);
+ }
+
+ static void nvme_rdma_teardown_io_queues(struct nvme_rdma_ctrl *ctrl,
+ bool remove)
+ {
++ mutex_lock(&ctrl->teardown_lock);
+ if (ctrl->ctrl.queue_count > 1) {
+ nvme_start_freeze(&ctrl->ctrl);
+ nvme_stop_queues(&ctrl->ctrl);
+@@ -999,6 +1014,7 @@ static void nvme_rdma_teardown_io_queues(struct nvme_rdma_ctrl *ctrl,
+ nvme_start_queues(&ctrl->ctrl);
+ nvme_rdma_destroy_io_queues(ctrl, remove);
+ }
++ mutex_unlock(&ctrl->teardown_lock);
+ }
+
+ static void nvme_rdma_free_ctrl(struct nvme_ctrl *nctrl)
+@@ -1154,6 +1170,7 @@ static void nvme_rdma_error_recovery(struct nvme_rdma_ctrl *ctrl)
+ if (!nvme_change_ctrl_state(&ctrl->ctrl, NVME_CTRL_RESETTING))
+ return;
+
++ dev_warn(ctrl->ctrl.device, "starting error recovery\n");
+ queue_work(nvme_reset_wq, &ctrl->err_work);
+ }
+
+@@ -1920,6 +1937,22 @@ static int nvme_rdma_cm_handler(struct rdma_cm_id *cm_id,
+ return 0;
+ }
+
++static void nvme_rdma_complete_timed_out(struct request *rq)
++{
++ struct nvme_rdma_request *req = blk_mq_rq_to_pdu(rq);
++ struct nvme_rdma_queue *queue = req->queue;
++ struct nvme_rdma_ctrl *ctrl = queue->ctrl;
++
++ /* fence other contexts that may complete the command */
++ mutex_lock(&ctrl->teardown_lock);
++ nvme_rdma_stop_queue(queue);
++ if (!blk_mq_request_completed(rq)) {
++ nvme_req(rq)->status = NVME_SC_HOST_ABORTED_CMD;
++ blk_mq_complete_request(rq);
++ }
++ mutex_unlock(&ctrl->teardown_lock);
++}
++
+ static enum blk_eh_timer_return
+ nvme_rdma_timeout(struct request *rq, bool reserved)
+ {
+@@ -1930,29 +1963,29 @@ nvme_rdma_timeout(struct request *rq, bool reserved)
+ dev_warn(ctrl->ctrl.device, "I/O %d QID %d timeout\n",
+ rq->tag, nvme_rdma_queue_idx(queue));
+
+- /*
+- * Restart the timer if a controller reset is already scheduled. Any
+- * timed out commands would be handled before entering the connecting
+- * state.
+- */
+- if (ctrl->ctrl.state == NVME_CTRL_RESETTING)
+- return BLK_EH_RESET_TIMER;
+-
+ if (ctrl->ctrl.state != NVME_CTRL_LIVE) {
+ /*
+- * Teardown immediately if controller times out while starting
+- * or we are already started error recovery. all outstanding
+- * requests are completed on shutdown, so we return BLK_EH_DONE.
++ * If we are resetting, connecting or deleting we should
++ * complete immediately because we may block controller
++ * teardown or setup sequence
++ * - ctrl disable/shutdown fabrics requests
++ * - connect requests
++ * - initialization admin requests
++ * - I/O requests that entered after unquiescing and
++ * the controller stopped responding
++ *
++ * All other requests should be cancelled by the error
++ * recovery work, so it's fine that we fail it here.
+ */
+- flush_work(&ctrl->err_work);
+- nvme_rdma_teardown_io_queues(ctrl, false);
+- nvme_rdma_teardown_admin_queue(ctrl, false);
++ nvme_rdma_complete_timed_out(rq);
+ return BLK_EH_DONE;
+ }
+
+- dev_warn(ctrl->ctrl.device, "starting error recovery\n");
++ /*
++ * LIVE state should trigger the normal error recovery which will
++ * handle completing this request.
++ */
+ nvme_rdma_error_recovery(ctrl);
+-
+ return BLK_EH_RESET_TIMER;
+ }
+
+@@ -2252,6 +2285,7 @@ static struct nvme_ctrl *nvme_rdma_create_ctrl(struct device *dev,
+ return ERR_PTR(-ENOMEM);
+ ctrl->ctrl.opts = opts;
+ INIT_LIST_HEAD(&ctrl->list);
++ mutex_init(&ctrl->teardown_lock);
+
+ if (!(opts->mask & NVMF_OPT_TRSVCID)) {
+ opts->trsvcid =
+diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
+index a6d2e3330a584..f1f66bf96cbb9 100644
+--- a/drivers/nvme/host/tcp.c
++++ b/drivers/nvme/host/tcp.c
+@@ -122,6 +122,7 @@ struct nvme_tcp_ctrl {
+ struct sockaddr_storage src_addr;
+ struct nvme_ctrl ctrl;
+
++ struct mutex teardown_lock;
+ struct work_struct err_work;
+ struct delayed_work connect_work;
+ struct nvme_tcp_request async_req;
+@@ -447,6 +448,7 @@ static void nvme_tcp_error_recovery(struct nvme_ctrl *ctrl)
+ if (!nvme_change_ctrl_state(ctrl, NVME_CTRL_RESETTING))
+ return;
+
++ dev_warn(ctrl->device, "starting error recovery\n");
+ queue_work(nvme_reset_wq, &to_tcp_ctrl(ctrl)->err_work);
+ }
+
+@@ -1497,7 +1499,6 @@ static void nvme_tcp_stop_queue(struct nvme_ctrl *nctrl, int qid)
+
+ if (!test_and_clear_bit(NVME_TCP_Q_LIVE, &queue->flags))
+ return;
+-
+ __nvme_tcp_stop_queue(queue);
+ }
+
+@@ -1752,7 +1753,15 @@ static int nvme_tcp_configure_io_queues(struct nvme_ctrl *ctrl, bool new)
+
+ if (!new) {
+ nvme_start_queues(ctrl);
+- nvme_wait_freeze(ctrl);
++ if (!nvme_wait_freeze_timeout(ctrl, NVME_IO_TIMEOUT)) {
++ /*
++ * If we timed out waiting for freeze we are likely to
++ * be stuck. Fail the controller initialization just
++ * to be safe.
++ */
++ ret = -ENODEV;
++ goto out_wait_freeze_timed_out;
++ }
+ blk_mq_update_nr_hw_queues(ctrl->tagset,
+ ctrl->queue_count - 1);
+ nvme_unfreeze(ctrl);
+@@ -1760,6 +1769,9 @@ static int nvme_tcp_configure_io_queues(struct nvme_ctrl *ctrl, bool new)
+
+ return 0;
+
++out_wait_freeze_timed_out:
++ nvme_stop_queues(ctrl);
++ nvme_tcp_stop_io_queues(ctrl);
+ out_cleanup_connect_q:
+ if (new)
+ blk_cleanup_queue(ctrl->connect_q);
+@@ -1845,6 +1857,7 @@ out_free_queue:
+ static void nvme_tcp_teardown_admin_queue(struct nvme_ctrl *ctrl,
+ bool remove)
+ {
++ mutex_lock(&to_tcp_ctrl(ctrl)->teardown_lock);
+ blk_mq_quiesce_queue(ctrl->admin_q);
+ nvme_tcp_stop_queue(ctrl, 0);
+ if (ctrl->admin_tagset) {
+@@ -1855,13 +1868,16 @@ static void nvme_tcp_teardown_admin_queue(struct nvme_ctrl *ctrl,
+ if (remove)
+ blk_mq_unquiesce_queue(ctrl->admin_q);
+ nvme_tcp_destroy_admin_queue(ctrl, remove);
++ mutex_unlock(&to_tcp_ctrl(ctrl)->teardown_lock);
+ }
+
+ static void nvme_tcp_teardown_io_queues(struct nvme_ctrl *ctrl,
+ bool remove)
+ {
++ mutex_lock(&to_tcp_ctrl(ctrl)->teardown_lock);
+ if (ctrl->queue_count <= 1)
+- return;
++ goto out;
++ blk_mq_quiesce_queue(ctrl->admin_q);
+ nvme_start_freeze(ctrl);
+ nvme_stop_queues(ctrl);
+ nvme_tcp_stop_io_queues(ctrl);
+@@ -1873,6 +1889,8 @@ static void nvme_tcp_teardown_io_queues(struct nvme_ctrl *ctrl,
+ if (remove)
+ nvme_start_queues(ctrl);
+ nvme_tcp_destroy_io_queues(ctrl, remove);
++out:
++ mutex_unlock(&to_tcp_ctrl(ctrl)->teardown_lock);
+ }
+
+ static void nvme_tcp_reconnect_or_remove(struct nvme_ctrl *ctrl)
+@@ -2119,40 +2137,55 @@ static void nvme_tcp_submit_async_event(struct nvme_ctrl *arg)
+ nvme_tcp_queue_request(&ctrl->async_req, true);
+ }
+
++static void nvme_tcp_complete_timed_out(struct request *rq)
++{
++ struct nvme_tcp_request *req = blk_mq_rq_to_pdu(rq);
++ struct nvme_ctrl *ctrl = &req->queue->ctrl->ctrl;
++
++ /* fence other contexts that may complete the command */
++ mutex_lock(&to_tcp_ctrl(ctrl)->teardown_lock);
++ nvme_tcp_stop_queue(ctrl, nvme_tcp_queue_id(req->queue));
++ if (!blk_mq_request_completed(rq)) {
++ nvme_req(rq)->status = NVME_SC_HOST_ABORTED_CMD;
++ blk_mq_complete_request(rq);
++ }
++ mutex_unlock(&to_tcp_ctrl(ctrl)->teardown_lock);
++}
++
+ static enum blk_eh_timer_return
+ nvme_tcp_timeout(struct request *rq, bool reserved)
+ {
+ struct nvme_tcp_request *req = blk_mq_rq_to_pdu(rq);
+- struct nvme_tcp_ctrl *ctrl = req->queue->ctrl;
++ struct nvme_ctrl *ctrl = &req->queue->ctrl->ctrl;
+ struct nvme_tcp_cmd_pdu *pdu = req->pdu;
+
+- /*
+- * Restart the timer if a controller reset is already scheduled. Any
+- * timed out commands would be handled before entering the connecting
+- * state.
+- */
+- if (ctrl->ctrl.state == NVME_CTRL_RESETTING)
+- return BLK_EH_RESET_TIMER;
+-
+- dev_warn(ctrl->ctrl.device,
++ dev_warn(ctrl->device,
+ "queue %d: timeout request %#x type %d\n",
+ nvme_tcp_queue_id(req->queue), rq->tag, pdu->hdr.type);
+
+- if (ctrl->ctrl.state != NVME_CTRL_LIVE) {
++ if (ctrl->state != NVME_CTRL_LIVE) {
+ /*
+- * Teardown immediately if controller times out while starting
+- * or we are already started error recovery. all outstanding
+- * requests are completed on shutdown, so we return BLK_EH_DONE.
++ * If we are resetting, connecting or deleting we should
++ * complete immediately because we may block controller
++ * teardown or setup sequence
++ * - ctrl disable/shutdown fabrics requests
++ * - connect requests
++ * - initialization admin requests
++ * - I/O requests that entered after unquiescing and
++ * the controller stopped responding
++ *
++ * All other requests should be cancelled by the error
++ * recovery work, so it's fine that we fail it here.
+ */
+- flush_work(&ctrl->err_work);
+- nvme_tcp_teardown_io_queues(&ctrl->ctrl, false);
+- nvme_tcp_teardown_admin_queue(&ctrl->ctrl, false);
++ nvme_tcp_complete_timed_out(rq);
+ return BLK_EH_DONE;
+ }
+
+- dev_warn(ctrl->ctrl.device, "starting error recovery\n");
+- nvme_tcp_error_recovery(&ctrl->ctrl);
+-
++ /*
++ * LIVE state should trigger the normal error recovery which will
++ * handle completing this request.
++ */
++ nvme_tcp_error_recovery(ctrl);
+ return BLK_EH_RESET_TIMER;
+ }
+
+@@ -2384,6 +2417,7 @@ static struct nvme_ctrl *nvme_tcp_create_ctrl(struct device *dev,
+ nvme_tcp_reconnect_ctrl_work);
+ INIT_WORK(&ctrl->err_work, nvme_tcp_error_recovery_work);
+ INIT_WORK(&ctrl->ctrl.reset_work, nvme_reset_ctrl_work);
++ mutex_init(&ctrl->teardown_lock);
+
+ if (!(opts->mask & NVMF_OPT_TRSVCID)) {
+ opts->trsvcid =
+diff --git a/drivers/nvme/target/tcp.c b/drivers/nvme/target/tcp.c
+index de9217cfd22d7..3d29b773ced27 100644
+--- a/drivers/nvme/target/tcp.c
++++ b/drivers/nvme/target/tcp.c
+@@ -160,6 +160,11 @@ static void nvmet_tcp_finish_cmd(struct nvmet_tcp_cmd *cmd);
+ static inline u16 nvmet_tcp_cmd_tag(struct nvmet_tcp_queue *queue,
+ struct nvmet_tcp_cmd *cmd)
+ {
++ if (unlikely(!queue->nr_cmds)) {
++ /* We didn't allocate cmds yet, send 0xffff */
++ return USHRT_MAX;
++ }
++
+ return cmd - queue->cmds;
+ }
+
+@@ -872,7 +877,10 @@ static int nvmet_tcp_handle_h2c_data_pdu(struct nvmet_tcp_queue *queue)
+ struct nvme_tcp_data_pdu *data = &queue->pdu.data;
+ struct nvmet_tcp_cmd *cmd;
+
+- cmd = &queue->cmds[data->ttag];
++ if (likely(queue->nr_cmds))
++ cmd = &queue->cmds[data->ttag];
++ else
++ cmd = &queue->connect;
+
+ if (le32_to_cpu(data->data_offset) != cmd->rbytes_done) {
+ pr_err("ttag %u unexpected data offset %u (expected %u)\n",
+diff --git a/drivers/phy/qualcomm/phy-qcom-qmp.c b/drivers/phy/qualcomm/phy-qcom-qmp.c
+index e91040af33945..ba277136f52b1 100644
+--- a/drivers/phy/qualcomm/phy-qcom-qmp.c
++++ b/drivers/phy/qualcomm/phy-qcom-qmp.c
+@@ -504,8 +504,8 @@ static const struct qmp_phy_init_tbl ipq8074_pcie_serdes_tbl[] = {
+ QMP_PHY_INIT_CFG(QSERDES_COM_BG_TRIM, 0xf),
+ QMP_PHY_INIT_CFG(QSERDES_COM_LOCK_CMP_EN, 0x1),
+ QMP_PHY_INIT_CFG(QSERDES_COM_VCO_TUNE_MAP, 0x0),
+- QMP_PHY_INIT_CFG(QSERDES_COM_VCO_TUNE_TIMER1, 0x1f),
+- QMP_PHY_INIT_CFG(QSERDES_COM_VCO_TUNE_TIMER2, 0x3f),
++ QMP_PHY_INIT_CFG(QSERDES_COM_VCO_TUNE_TIMER1, 0xff),
++ QMP_PHY_INIT_CFG(QSERDES_COM_VCO_TUNE_TIMER2, 0x1f),
+ QMP_PHY_INIT_CFG(QSERDES_COM_CMN_CONFIG, 0x6),
+ QMP_PHY_INIT_CFG(QSERDES_COM_PLL_IVCO, 0xf),
+ QMP_PHY_INIT_CFG(QSERDES_COM_HSCLK_SEL, 0x0),
+@@ -531,7 +531,6 @@ static const struct qmp_phy_init_tbl ipq8074_pcie_serdes_tbl[] = {
+ QMP_PHY_INIT_CFG(QSERDES_COM_INTEGLOOP_GAIN1_MODE0, 0x0),
+ QMP_PHY_INIT_CFG(QSERDES_COM_INTEGLOOP_GAIN0_MODE0, 0x80),
+ QMP_PHY_INIT_CFG(QSERDES_COM_BIAS_EN_CTRL_BY_PSM, 0x1),
+- QMP_PHY_INIT_CFG(QSERDES_COM_VCO_TUNE_CTRL, 0xa),
+ QMP_PHY_INIT_CFG(QSERDES_COM_SSC_EN_CENTER, 0x1),
+ QMP_PHY_INIT_CFG(QSERDES_COM_SSC_PER1, 0x31),
+ QMP_PHY_INIT_CFG(QSERDES_COM_SSC_PER2, 0x1),
+@@ -540,7 +539,6 @@ static const struct qmp_phy_init_tbl ipq8074_pcie_serdes_tbl[] = {
+ QMP_PHY_INIT_CFG(QSERDES_COM_SSC_STEP_SIZE1, 0x2f),
+ QMP_PHY_INIT_CFG(QSERDES_COM_SSC_STEP_SIZE2, 0x19),
+ QMP_PHY_INIT_CFG(QSERDES_COM_CLK_EP_DIV, 0x19),
+- QMP_PHY_INIT_CFG(QSERDES_RX_SIGDET_CNTRL, 0x7),
+ };
+
+ static const struct qmp_phy_init_tbl ipq8074_pcie_tx_tbl[] = {
+@@ -548,6 +546,8 @@ static const struct qmp_phy_init_tbl ipq8074_pcie_tx_tbl[] = {
+ QMP_PHY_INIT_CFG(QSERDES_TX_LANE_MODE, 0x6),
+ QMP_PHY_INIT_CFG(QSERDES_TX_RES_CODE_LANE_OFFSET, 0x2),
+ QMP_PHY_INIT_CFG(QSERDES_TX_RCV_DETECT_LVL_2, 0x12),
++ QMP_PHY_INIT_CFG(QSERDES_TX_EMP_POST1_LVL, 0x36),
++ QMP_PHY_INIT_CFG(QSERDES_TX_SLEW_CNTL, 0x0a),
+ };
+
+ static const struct qmp_phy_init_tbl ipq8074_pcie_rx_tbl[] = {
+@@ -558,7 +558,6 @@ static const struct qmp_phy_init_tbl ipq8074_pcie_rx_tbl[] = {
+ QMP_PHY_INIT_CFG(QSERDES_RX_RX_EQU_ADAPTOR_CNTRL4, 0xdb),
+ QMP_PHY_INIT_CFG(QSERDES_RX_UCDR_SO_SATURATION_AND_ENABLE, 0x4b),
+ QMP_PHY_INIT_CFG(QSERDES_RX_UCDR_SO_GAIN, 0x4),
+- QMP_PHY_INIT_CFG(QSERDES_RX_UCDR_SO_GAIN_HALF, 0x4),
+ };
+
+ static const struct qmp_phy_init_tbl ipq8074_pcie_pcs_tbl[] = {
+@@ -1673,6 +1672,9 @@ static const struct qmp_phy_cfg msm8996_usb3phy_cfg = {
+ .pwrdn_ctrl = SW_PWRDN,
+ };
+
++static const char * const ipq8074_pciephy_clk_l[] = {
++ "aux", "cfg_ahb",
++};
+ /* list of resets */
+ static const char * const ipq8074_pciephy_reset_l[] = {
+ "phy", "common",
+@@ -1690,8 +1692,8 @@ static const struct qmp_phy_cfg ipq8074_pciephy_cfg = {
+ .rx_tbl_num = ARRAY_SIZE(ipq8074_pcie_rx_tbl),
+ .pcs_tbl = ipq8074_pcie_pcs_tbl,
+ .pcs_tbl_num = ARRAY_SIZE(ipq8074_pcie_pcs_tbl),
+- .clk_list = NULL,
+- .num_clks = 0,
++ .clk_list = ipq8074_pciephy_clk_l,
++ .num_clks = ARRAY_SIZE(ipq8074_pciephy_clk_l),
+ .reset_list = ipq8074_pciephy_reset_l,
+ .num_resets = ARRAY_SIZE(ipq8074_pciephy_reset_l),
+ .vreg_list = NULL,
+diff --git a/drivers/phy/qualcomm/phy-qcom-qmp.h b/drivers/phy/qualcomm/phy-qcom-qmp.h
+index 6d017a0c0c8d9..832b3d0984033 100644
+--- a/drivers/phy/qualcomm/phy-qcom-qmp.h
++++ b/drivers/phy/qualcomm/phy-qcom-qmp.h
+@@ -77,6 +77,8 @@
+ #define QSERDES_COM_CORECLK_DIV_MODE1 0x1bc
+
+ /* Only for QMP V2 PHY - TX registers */
++#define QSERDES_TX_EMP_POST1_LVL 0x018
++#define QSERDES_TX_SLEW_CNTL 0x040
+ #define QSERDES_TX_RES_CODE_LANE_OFFSET 0x054
+ #define QSERDES_TX_DEBUG_BUS_SEL 0x064
+ #define QSERDES_TX_HIGHZ_TRANSCEIVEREN_BIAS_DRVR_EN 0x068
+diff --git a/drivers/regulator/core.c b/drivers/regulator/core.c
+index 720f28844795b..be8c709a74883 100644
+--- a/drivers/regulator/core.c
++++ b/drivers/regulator/core.c
+@@ -235,8 +235,8 @@ static bool regulator_supply_is_couple(struct regulator_dev *rdev)
+ static void regulator_unlock_recursive(struct regulator_dev *rdev,
+ unsigned int n_coupled)
+ {
+- struct regulator_dev *c_rdev;
+- int i;
++ struct regulator_dev *c_rdev, *supply_rdev;
++ int i, supply_n_coupled;
+
+ for (i = n_coupled; i > 0; i--) {
+ c_rdev = rdev->coupling_desc.coupled_rdevs[i - 1];
+@@ -244,10 +244,13 @@ static void regulator_unlock_recursive(struct regulator_dev *rdev,
+ if (!c_rdev)
+ continue;
+
+- if (c_rdev->supply && !regulator_supply_is_couple(c_rdev))
+- regulator_unlock_recursive(
+- c_rdev->supply->rdev,
+- c_rdev->coupling_desc.n_coupled);
++ if (c_rdev->supply && !regulator_supply_is_couple(c_rdev)) {
++ supply_rdev = c_rdev->supply->rdev;
++ supply_n_coupled = supply_rdev->coupling_desc.n_coupled;
++
++ regulator_unlock_recursive(supply_rdev,
++ supply_n_coupled);
++ }
+
+ regulator_unlock(c_rdev);
+ }
+@@ -1460,7 +1463,7 @@ static int set_consumer_device_supply(struct regulator_dev *rdev,
+ const char *consumer_dev_name,
+ const char *supply)
+ {
+- struct regulator_map *node;
++ struct regulator_map *node, *new_node;
+ int has_dev;
+
+ if (supply == NULL)
+@@ -1471,6 +1474,22 @@ static int set_consumer_device_supply(struct regulator_dev *rdev,
+ else
+ has_dev = 0;
+
++ new_node = kzalloc(sizeof(struct regulator_map), GFP_KERNEL);
++ if (new_node == NULL)
++ return -ENOMEM;
++
++ new_node->regulator = rdev;
++ new_node->supply = supply;
++
++ if (has_dev) {
++ new_node->dev_name = kstrdup(consumer_dev_name, GFP_KERNEL);
++ if (new_node->dev_name == NULL) {
++ kfree(new_node);
++ return -ENOMEM;
++ }
++ }
++
++ mutex_lock(&regulator_list_mutex);
+ list_for_each_entry(node, &regulator_map_list, list) {
+ if (node->dev_name && consumer_dev_name) {
+ if (strcmp(node->dev_name, consumer_dev_name) != 0)
+@@ -1488,26 +1507,19 @@ static int set_consumer_device_supply(struct regulator_dev *rdev,
+ node->regulator->desc->name,
+ supply,
+ dev_name(&rdev->dev), rdev_get_name(rdev));
+- return -EBUSY;
++ goto fail;
+ }
+
+- node = kzalloc(sizeof(struct regulator_map), GFP_KERNEL);
+- if (node == NULL)
+- return -ENOMEM;
+-
+- node->regulator = rdev;
+- node->supply = supply;
+-
+- if (has_dev) {
+- node->dev_name = kstrdup(consumer_dev_name, GFP_KERNEL);
+- if (node->dev_name == NULL) {
+- kfree(node);
+- return -ENOMEM;
+- }
+- }
++ list_add(&new_node->list, &regulator_map_list);
++ mutex_unlock(&regulator_list_mutex);
+
+- list_add(&node->list, &regulator_map_list);
+ return 0;
++
++fail:
++ mutex_unlock(&regulator_list_mutex);
++ kfree(new_node->dev_name);
++ kfree(new_node);
++ return -EBUSY;
+ }
+
+ static void unset_regulator_supplies(struct regulator_dev *rdev)
+@@ -1579,44 +1591,53 @@ static struct regulator *create_regulator(struct regulator_dev *rdev,
+ const char *supply_name)
+ {
+ struct regulator *regulator;
+- char buf[REG_STR_SIZE];
+- int err, size;
++ int err;
++
++ if (dev) {
++ char buf[REG_STR_SIZE];
++ int size;
++
++ size = snprintf(buf, REG_STR_SIZE, "%s-%s",
++ dev->kobj.name, supply_name);
++ if (size >= REG_STR_SIZE)
++ return NULL;
++
++ supply_name = kstrdup(buf, GFP_KERNEL);
++ if (supply_name == NULL)
++ return NULL;
++ } else {
++ supply_name = kstrdup_const(supply_name, GFP_KERNEL);
++ if (supply_name == NULL)
++ return NULL;
++ }
+
+ regulator = kzalloc(sizeof(*regulator), GFP_KERNEL);
+- if (regulator == NULL)
++ if (regulator == NULL) {
++ kfree(supply_name);
+ return NULL;
++ }
+
+- regulator_lock(rdev);
+ regulator->rdev = rdev;
++ regulator->supply_name = supply_name;
++
++ regulator_lock(rdev);
+ list_add(&regulator->list, &rdev->consumer_list);
++ regulator_unlock(rdev);
+
+ if (dev) {
+ regulator->dev = dev;
+
+ /* Add a link to the device sysfs entry */
+- size = snprintf(buf, REG_STR_SIZE, "%s-%s",
+- dev->kobj.name, supply_name);
+- if (size >= REG_STR_SIZE)
+- goto overflow_err;
+-
+- regulator->supply_name = kstrdup(buf, GFP_KERNEL);
+- if (regulator->supply_name == NULL)
+- goto overflow_err;
+-
+ err = sysfs_create_link_nowarn(&rdev->dev.kobj, &dev->kobj,
+- buf);
++ supply_name);
+ if (err) {
+ rdev_dbg(rdev, "could not add device link %s err %d\n",
+ dev->kobj.name, err);
+ /* non-fatal */
+ }
+- } else {
+- regulator->supply_name = kstrdup_const(supply_name, GFP_KERNEL);
+- if (regulator->supply_name == NULL)
+- goto overflow_err;
+ }
+
+- regulator->debugfs = debugfs_create_dir(regulator->supply_name,
++ regulator->debugfs = debugfs_create_dir(supply_name,
+ rdev->debugfs);
+ if (!regulator->debugfs) {
+ rdev_dbg(rdev, "Failed to create debugfs directory\n");
+@@ -1641,13 +1662,7 @@ static struct regulator *create_regulator(struct regulator_dev *rdev,
+ _regulator_is_enabled(rdev))
+ regulator->always_on = true;
+
+- regulator_unlock(rdev);
+ return regulator;
+-overflow_err:
+- list_del(&regulator->list);
+- kfree(regulator);
+- regulator_unlock(rdev);
+- return NULL;
+ }
+
+ static int _regulator_get_enable_time(struct regulator_dev *rdev)
+@@ -2222,10 +2237,13 @@ EXPORT_SYMBOL_GPL(regulator_bulk_unregister_supply_alias);
+ static int regulator_ena_gpio_request(struct regulator_dev *rdev,
+ const struct regulator_config *config)
+ {
+- struct regulator_enable_gpio *pin;
++ struct regulator_enable_gpio *pin, *new_pin;
+ struct gpio_desc *gpiod;
+
+ gpiod = config->ena_gpiod;
++ new_pin = kzalloc(sizeof(*new_pin), GFP_KERNEL);
++
++ mutex_lock(&regulator_list_mutex);
+
+ list_for_each_entry(pin, &regulator_ena_gpio_list, list) {
+ if (pin->gpiod == gpiod) {
+@@ -2234,9 +2252,13 @@ static int regulator_ena_gpio_request(struct regulator_dev *rdev,
+ }
+ }
+
+- pin = kzalloc(sizeof(struct regulator_enable_gpio), GFP_KERNEL);
+- if (pin == NULL)
++ if (new_pin == NULL) {
++ mutex_unlock(&regulator_list_mutex);
+ return -ENOMEM;
++ }
++
++ pin = new_pin;
++ new_pin = NULL;
+
+ pin->gpiod = gpiod;
+ list_add(&pin->list, &regulator_ena_gpio_list);
+@@ -2244,6 +2266,10 @@ static int regulator_ena_gpio_request(struct regulator_dev *rdev,
+ update_ena_gpio_to_rdev:
+ pin->request_count++;
+ rdev->ena_pin = pin;
++
++ mutex_unlock(&regulator_list_mutex);
++ kfree(new_pin);
++
+ return 0;
+ }
+
+@@ -4880,13 +4906,9 @@ static void regulator_resolve_coupling(struct regulator_dev *rdev)
+ return;
+ }
+
+- regulator_lock(c_rdev);
+-
+ c_desc->coupled_rdevs[i] = c_rdev;
+ c_desc->n_resolved++;
+
+- regulator_unlock(c_rdev);
+-
+ regulator_resolve_coupling(c_rdev);
+ }
+ }
+@@ -4971,7 +4993,10 @@ static int regulator_init_coupling(struct regulator_dev *rdev)
+ if (!of_check_coupling_data(rdev))
+ return -EPERM;
+
++ mutex_lock(&regulator_list_mutex);
+ rdev->coupling_desc.coupler = regulator_find_coupler(rdev);
++ mutex_unlock(&regulator_list_mutex);
++
+ if (IS_ERR(rdev->coupling_desc.coupler)) {
+ err = PTR_ERR(rdev->coupling_desc.coupler);
+ rdev_err(rdev, "failed to get coupler: %d\n", err);
+@@ -5072,6 +5097,7 @@ regulator_register(const struct regulator_desc *regulator_desc,
+ ret = -ENOMEM;
+ goto rinse;
+ }
++ device_initialize(&rdev->dev);
+
+ /*
+ * Duplicate the config so the driver could override it after
+@@ -5079,9 +5105,8 @@ regulator_register(const struct regulator_desc *regulator_desc,
+ */
+ config = kmemdup(cfg, sizeof(*cfg), GFP_KERNEL);
+ if (config == NULL) {
+- kfree(rdev);
+ ret = -ENOMEM;
+- goto rinse;
++ goto clean;
+ }
+
+ init_data = regulator_of_get_init_data(dev, regulator_desc, config,
+@@ -5093,10 +5118,8 @@ regulator_register(const struct regulator_desc *regulator_desc,
+ * from a gpio extender or something else.
+ */
+ if (PTR_ERR(init_data) == -EPROBE_DEFER) {
+- kfree(config);
+- kfree(rdev);
+ ret = -EPROBE_DEFER;
+- goto rinse;
++ goto clean;
+ }
+
+ /*
+@@ -5137,9 +5160,7 @@ regulator_register(const struct regulator_desc *regulator_desc,
+ }
+
+ if (config->ena_gpiod) {
+- mutex_lock(®ulator_list_mutex);
+ ret = regulator_ena_gpio_request(rdev, config);
+- mutex_unlock(®ulator_list_mutex);
+ if (ret != 0) {
+ rdev_err(rdev, "Failed to request enable GPIO: %d\n",
+ ret);
+@@ -5151,7 +5172,6 @@ regulator_register(const struct regulator_desc *regulator_desc,
+ }
+
+ /* register with sysfs */
+- device_initialize(&rdev->dev);
+ rdev->dev.class = &regulator_class;
+ rdev->dev.parent = dev;
+ dev_set_name(&rdev->dev, "regulator.%lu",
+@@ -5179,27 +5199,22 @@ regulator_register(const struct regulator_desc *regulator_desc,
+ if (ret < 0)
+ goto wash;
+
+- mutex_lock(&regulator_list_mutex);
+ ret = regulator_init_coupling(rdev);
+- mutex_unlock(&regulator_list_mutex);
+ if (ret < 0)
+ goto wash;
+
+ /* add consumers devices */
+ if (init_data) {
+- mutex_lock(&regulator_list_mutex);
+ for (i = 0; i < init_data->num_consumer_supplies; i++) {
+ ret = set_consumer_device_supply(rdev,
+ init_data->consumer_supplies[i].dev_name,
+ init_data->consumer_supplies[i].supply);
+ if (ret < 0) {
+- mutex_unlock(&regulator_list_mutex);
+ dev_err(dev, "Failed to set supply %s\n",
+ init_data->consumer_supplies[i].supply);
+ goto unset_supplies;
+ }
+ }
+- mutex_unlock(&regulator_list_mutex);
+ }
+
+ if (!rdev->desc->ops->get_voltage &&
+@@ -5234,13 +5249,11 @@ wash:
+ mutex_lock(&regulator_list_mutex);
+ regulator_ena_gpio_free(rdev);
+ mutex_unlock(&regulator_list_mutex);
+- put_device(&rdev->dev);
+- rdev = NULL;
+ clean:
+ if (dangling_of_gpiod)
+ gpiod_put(config->ena_gpiod);
+- kfree(rdev);
+ kfree(config);
++ put_device(&rdev->dev);
+ rinse:
+ if (dangling_cfg_gpiod)
+ gpiod_put(cfg->ena_gpiod);
+diff --git a/drivers/scsi/libsas/sas_ata.c b/drivers/scsi/libsas/sas_ata.c
+index 5d716d3887071..6de4bc77fd55c 100644
+--- a/drivers/scsi/libsas/sas_ata.c
++++ b/drivers/scsi/libsas/sas_ata.c
+@@ -209,7 +209,10 @@ static unsigned int sas_ata_qc_issue(struct ata_queued_cmd *qc)
+ task->num_scatter = si;
+ }
+
+- task->data_dir = qc->dma_dir;
++ if (qc->tf.protocol == ATA_PROT_NODATA)
++ task->data_dir = DMA_NONE;
++ else
++ task->data_dir = qc->dma_dir;
+ task->scatter = qc->sg;
+ task->ata_task.retry_count = 1;
+ task->task_state_flags = SAS_TASK_STATE_PENDING;
+diff --git a/drivers/scsi/lpfc/lpfc_init.c b/drivers/scsi/lpfc/lpfc_init.c
+index 6637f84a3d1bc..0dd6cc0ccdf2d 100644
+--- a/drivers/scsi/lpfc/lpfc_init.c
++++ b/drivers/scsi/lpfc/lpfc_init.c
+@@ -11257,7 +11257,6 @@ lpfc_irq_clear_aff(struct lpfc_hba_eq_hdl *eqhdl)
+ {
+ cpumask_clear(&eqhdl->aff_mask);
+ irq_clear_status_flags(eqhdl->irq, IRQ_NO_BALANCING);
+- irq_set_affinity_hint(eqhdl->irq, &eqhdl->aff_mask);
+ }
+
+ /**
+diff --git a/drivers/scsi/megaraid/megaraid_sas_fusion.c b/drivers/scsi/megaraid/megaraid_sas_fusion.c
+index fcf03f733e417..1a0e2e4342ad8 100644
+--- a/drivers/scsi/megaraid/megaraid_sas_fusion.c
++++ b/drivers/scsi/megaraid/megaraid_sas_fusion.c
+@@ -3690,7 +3690,7 @@ int megasas_irqpoll(struct irq_poll *irqpoll, int budget)
+ instance = irq_ctx->instance;
+
+ if (irq_ctx->irq_line_enable) {
+- disable_irq(irq_ctx->os_irq);
++ disable_irq_nosync(irq_ctx->os_irq);
+ irq_ctx->irq_line_enable = false;
+ }
+
+diff --git a/drivers/scsi/mpt3sas/mpt3sas_base.c b/drivers/scsi/mpt3sas/mpt3sas_base.c
+index 96b78fdc6b8a9..a85c9672c6ea3 100644
+--- a/drivers/scsi/mpt3sas/mpt3sas_base.c
++++ b/drivers/scsi/mpt3sas/mpt3sas_base.c
+@@ -1732,7 +1732,7 @@ _base_irqpoll(struct irq_poll *irqpoll, int budget)
+ reply_q = container_of(irqpoll, struct adapter_reply_queue,
+ irqpoll);
+ if (reply_q->irq_line_enable) {
+- disable_irq(reply_q->os_irq);
++ disable_irq_nosync(reply_q->os_irq);
+ reply_q->irq_line_enable = false;
+ }
+ num_entries = _base_process_reply_queue(reply_q);
+diff --git a/drivers/scsi/qedf/qedf_main.c b/drivers/scsi/qedf/qedf_main.c
+index 36b1ca2dadbb5..51cfab9d1afdc 100644
+--- a/drivers/scsi/qedf/qedf_main.c
++++ b/drivers/scsi/qedf/qedf_main.c
+@@ -3843,7 +3843,7 @@ void qedf_stag_change_work(struct work_struct *work)
+ container_of(work, struct qedf_ctx, stag_work.work);
+
+ if (!qedf) {
+- QEDF_ERR(&qedf->dbg_ctx, "qedf is NULL");
++ QEDF_ERR(NULL, "qedf is NULL");
+ return;
+ }
+ QEDF_ERR(&qedf->dbg_ctx, "Performing software context reset.\n");
+diff --git a/drivers/scsi/qla2xxx/qla_def.h b/drivers/scsi/qla2xxx/qla_def.h
+index 42dbf90d46510..392312333746f 100644
+--- a/drivers/scsi/qla2xxx/qla_def.h
++++ b/drivers/scsi/qla2xxx/qla_def.h
+@@ -1605,7 +1605,7 @@ typedef struct {
+ */
+ uint8_t firmware_options[2];
+
+- uint16_t frame_payload_size;
++ __le16 frame_payload_size;
+ __le16 max_iocb_allocation;
+ __le16 execution_throttle;
+ uint8_t retry_count;
+diff --git a/drivers/scsi/qla2xxx/qla_init.c b/drivers/scsi/qla2xxx/qla_init.c
+index 2436a17f5cd91..2861c636dd651 100644
+--- a/drivers/scsi/qla2xxx/qla_init.c
++++ b/drivers/scsi/qla2xxx/qla_init.c
+@@ -4603,18 +4603,18 @@ qla2x00_nvram_config(scsi_qla_host_t *vha)
+ nv->firmware_options[1] = BIT_7 | BIT_5;
+ nv->add_firmware_options[0] = BIT_5;
+ nv->add_firmware_options[1] = BIT_5 | BIT_4;
+- nv->frame_payload_size = 2048;
++ nv->frame_payload_size = cpu_to_le16(2048);
+ nv->special_options[1] = BIT_7;
+ } else if (IS_QLA2200(ha)) {
+ nv->firmware_options[0] = BIT_2 | BIT_1;
+ nv->firmware_options[1] = BIT_7 | BIT_5;
+ nv->add_firmware_options[0] = BIT_5;
+ nv->add_firmware_options[1] = BIT_5 | BIT_4;
+- nv->frame_payload_size = 1024;
++ nv->frame_payload_size = cpu_to_le16(1024);
+ } else if (IS_QLA2100(ha)) {
+ nv->firmware_options[0] = BIT_3 | BIT_1;
+ nv->firmware_options[1] = BIT_5;
+- nv->frame_payload_size = 1024;
++ nv->frame_payload_size = cpu_to_le16(1024);
+ }
+
+ nv->max_iocb_allocation = cpu_to_le16(256);
+diff --git a/drivers/soundwire/stream.c b/drivers/soundwire/stream.c
+index a9a72574b34ab..684761e86d4fc 100644
+--- a/drivers/soundwire/stream.c
++++ b/drivers/soundwire/stream.c
+@@ -716,6 +716,7 @@ error:
+ kfree(wbuf);
+ error_1:
+ kfree(wr_msg);
++ bus->defer_msg.msg = NULL;
+ return ret;
+ }
+
+@@ -839,9 +840,10 @@ static int do_bank_switch(struct sdw_stream_runtime *stream)
+ error:
+ list_for_each_entry(m_rt, &stream->master_list, stream_node) {
+ bus = m_rt->bus;
+-
+- kfree(bus->defer_msg.msg->buf);
+- kfree(bus->defer_msg.msg);
++ if (bus->defer_msg.msg) {
++ kfree(bus->defer_msg.msg->buf);
++ kfree(bus->defer_msg.msg);
++ }
+ }
+
+ msg_unlock:
+diff --git a/drivers/spi/spi-stm32.c b/drivers/spi/spi-stm32.c
+index d4b33b358a31e..3056428b09f31 100644
+--- a/drivers/spi/spi-stm32.c
++++ b/drivers/spi/spi-stm32.c
+@@ -936,7 +936,11 @@ static irqreturn_t stm32h7_spi_irq_thread(int irq, void *dev_id)
+ }
+
+ if (sr & STM32H7_SPI_SR_SUSP) {
+- dev_warn(spi->dev, "Communication suspended\n");
++ static DEFINE_RATELIMIT_STATE(rs,
++ DEFAULT_RATELIMIT_INTERVAL * 10,
++ 1);
++ if (__ratelimit(&rs))
++ dev_dbg_ratelimited(spi->dev, "Communication suspended\n");
+ if (!spi->cur_usedma && (spi->rx_buf && (spi->rx_len > 0)))
+ stm32h7_spi_read_rxfifo(spi, false);
+ /*
+@@ -2060,7 +2064,7 @@ static int stm32_spi_resume(struct device *dev)
+ }
+
+ ret = pm_runtime_get_sync(dev);
+- if (ret) {
++ if (ret < 0) {
+ dev_err(dev, "Unable to power device:%d\n", ret);
+ return ret;
+ }
+diff --git a/drivers/staging/greybus/audio_topology.c b/drivers/staging/greybus/audio_topology.c
+index 4ac30accf226a..cc329b990e165 100644
+--- a/drivers/staging/greybus/audio_topology.c
++++ b/drivers/staging/greybus/audio_topology.c
+@@ -460,6 +460,15 @@ static int gbcodec_mixer_dapm_ctl_put(struct snd_kcontrol *kcontrol,
+ val = ucontrol->value.integer.value[0] & mask;
+ connect = !!val;
+
++ ret = gb_pm_runtime_get_sync(bundle);
++ if (ret)
++ return ret;
++
++ ret = gb_audio_gb_get_control(module->mgmt_connection, data->ctl_id,
++ GB_AUDIO_INVALID_INDEX, &gbvalue);
++ if (ret)
++ goto exit;
++
+ /* update ucontrol */
+ if (gbvalue.value.integer_value[0] != val) {
+ for (wi = 0; wi < wlist->num_widgets; wi++) {
+@@ -473,25 +482,17 @@ static int gbcodec_mixer_dapm_ctl_put(struct snd_kcontrol *kcontrol,
+ gbvalue.value.integer_value[0] =
+ cpu_to_le32(ucontrol->value.integer.value[0]);
+
+- ret = gb_pm_runtime_get_sync(bundle);
+- if (ret)
+- return ret;
+-
+ ret = gb_audio_gb_set_control(module->mgmt_connection,
+ data->ctl_id,
+ GB_AUDIO_INVALID_INDEX, &gbvalue);
+-
+- gb_pm_runtime_put_autosuspend(bundle);
+-
+- if (ret) {
+- dev_err_ratelimited(codec->dev,
+- "%d:Error in %s for %s\n", ret,
+- __func__, kcontrol->id.name);
+- return ret;
+- }
+ }
+
+- return 0;
++exit:
++ gb_pm_runtime_put_autosuspend(bundle);
++ if (ret)
++ dev_err_ratelimited(codec_dev, "%d:Error in %s for %s\n", ret,
++ __func__, kcontrol->id.name);
++ return ret;
+ }
+
+ #define SOC_DAPM_MIXER_GB(xname, kcount, data) \
+diff --git a/drivers/staging/wlan-ng/hfa384x_usb.c b/drivers/staging/wlan-ng/hfa384x_usb.c
+index fa1bf8b069fda..2720f7319a3d0 100644
+--- a/drivers/staging/wlan-ng/hfa384x_usb.c
++++ b/drivers/staging/wlan-ng/hfa384x_usb.c
+@@ -524,13 +524,8 @@ static void hfa384x_usb_defer(struct work_struct *data)
+ */
+ void hfa384x_create(struct hfa384x *hw, struct usb_device *usb)
+ {
+- memset(hw, 0, sizeof(*hw));
+ hw->usb = usb;
+
+- /* set up the endpoints */
+- hw->endp_in = usb_rcvbulkpipe(usb, 1);
+- hw->endp_out = usb_sndbulkpipe(usb, 2);
+-
+ /* Set up the waitq */
+ init_waitqueue_head(&hw->cmdq);
+
+diff --git a/drivers/staging/wlan-ng/prism2usb.c b/drivers/staging/wlan-ng/prism2usb.c
+index 456603fd26c0b..4b08dc1da4f97 100644
+--- a/drivers/staging/wlan-ng/prism2usb.c
++++ b/drivers/staging/wlan-ng/prism2usb.c
+@@ -61,23 +61,14 @@ static int prism2sta_probe_usb(struct usb_interface *interface,
+ const struct usb_device_id *id)
+ {
+ struct usb_device *dev;
+- const struct usb_endpoint_descriptor *epd;
+- const struct usb_host_interface *iface_desc = interface->cur_altsetting;
++ struct usb_endpoint_descriptor *bulk_in, *bulk_out;
++ struct usb_host_interface *iface_desc = interface->cur_altsetting;
+ struct wlandevice *wlandev = NULL;
+ struct hfa384x *hw = NULL;
+ int result = 0;
+
+- if (iface_desc->desc.bNumEndpoints != 2) {
+- result = -ENODEV;
+- goto failed;
+- }
+-
+- result = -EINVAL;
+- epd = &iface_desc->endpoint[1].desc;
+- if (!usb_endpoint_is_bulk_in(epd))
+- goto failed;
+- epd = &iface_desc->endpoint[2].desc;
+- if (!usb_endpoint_is_bulk_out(epd))
++ result = usb_find_common_endpoints(iface_desc, &bulk_in, &bulk_out, NULL, NULL);
++ if (result)
+ goto failed;
+
+ dev = interface_to_usbdev(interface);
+@@ -96,6 +87,8 @@ static int prism2sta_probe_usb(struct usb_interface *interface,
+ }
+
+ /* Initialize the hw data */
++ hw->endp_in = usb_rcvbulkpipe(dev, bulk_in->bEndpointAddress);
++ hw->endp_out = usb_sndbulkpipe(dev, bulk_out->bEndpointAddress);
+ hfa384x_create(hw, dev);
+ hw->wlandev = wlandev;
+
+diff --git a/drivers/target/iscsi/iscsi_target.c b/drivers/target/iscsi/iscsi_target.c
+index c9689610e186d..2ec778e97b1be 100644
+--- a/drivers/target/iscsi/iscsi_target.c
++++ b/drivers/target/iscsi/iscsi_target.c
+@@ -1389,14 +1389,27 @@ static u32 iscsit_do_crypto_hash_sg(
+ sg = cmd->first_data_sg;
+ page_off = cmd->first_data_sg_off;
+
++ if (data_length && page_off) {
++ struct scatterlist first_sg;
++ u32 len = min_t(u32, data_length, sg->length - page_off);
++
++ sg_init_table(&first_sg, 1);
++ sg_set_page(&first_sg, sg_page(sg), len, sg->offset + page_off);
++
++ ahash_request_set_crypt(hash, &first_sg, NULL, len);
++ crypto_ahash_update(hash);
++
++ data_length -= len;
++ sg = sg_next(sg);
++ }
++
+ while (data_length) {
+- u32 cur_len = min_t(u32, data_length, (sg->length - page_off));
++ u32 cur_len = min_t(u32, data_length, sg->length);
+
+ ahash_request_set_crypt(hash, sg, NULL, cur_len);
+ crypto_ahash_update(hash);
+
+ data_length -= cur_len;
+- page_off = 0;
+ /* iscsit_map_iovec has already checked for invalid sg pointers */
+ sg = sg_next(sg);
+ }
+diff --git a/drivers/target/iscsi/iscsi_target_login.c b/drivers/target/iscsi/iscsi_target_login.c
+index 85748e3388582..893d1b406c292 100644
+--- a/drivers/target/iscsi/iscsi_target_login.c
++++ b/drivers/target/iscsi/iscsi_target_login.c
+@@ -1149,7 +1149,7 @@ void iscsit_free_conn(struct iscsi_conn *conn)
+ }
+
+ void iscsi_target_login_sess_out(struct iscsi_conn *conn,
+- struct iscsi_np *np, bool zero_tsih, bool new_sess)
++ bool zero_tsih, bool new_sess)
+ {
+ if (!new_sess)
+ goto old_sess_out;
+@@ -1167,7 +1167,6 @@ void iscsi_target_login_sess_out(struct iscsi_conn *conn,
+ conn->sess = NULL;
+
+ old_sess_out:
+- iscsi_stop_login_thread_timer(np);
+ /*
+ * If login negotiation fails check if the Time2Retain timer
+ * needs to be restarted.
+@@ -1407,8 +1406,9 @@ static int __iscsi_target_login_thread(struct iscsi_np *np)
+ new_sess_out:
+ new_sess = true;
+ old_sess_out:
++ iscsi_stop_login_thread_timer(np);
+ tpg_np = conn->tpg_np;
+- iscsi_target_login_sess_out(conn, np, zero_tsih, new_sess);
++ iscsi_target_login_sess_out(conn, zero_tsih, new_sess);
+ new_sess = false;
+
+ if (tpg) {
+diff --git a/drivers/target/iscsi/iscsi_target_login.h b/drivers/target/iscsi/iscsi_target_login.h
+index 3b8e3639ff5d0..fc95e6150253f 100644
+--- a/drivers/target/iscsi/iscsi_target_login.h
++++ b/drivers/target/iscsi/iscsi_target_login.h
+@@ -22,8 +22,7 @@ extern int iscsit_put_login_tx(struct iscsi_conn *, struct iscsi_login *, u32);
+ extern void iscsit_free_conn(struct iscsi_conn *);
+ extern int iscsit_start_kthreads(struct iscsi_conn *);
+ extern void iscsi_post_login_handler(struct iscsi_np *, struct iscsi_conn *, u8);
+-extern void iscsi_target_login_sess_out(struct iscsi_conn *, struct iscsi_np *,
+- bool, bool);
++extern void iscsi_target_login_sess_out(struct iscsi_conn *, bool, bool);
+ extern int iscsi_target_login_thread(void *);
+ extern void iscsi_handle_login_thread_timeout(struct timer_list *t);
+
+diff --git a/drivers/target/iscsi/iscsi_target_nego.c b/drivers/target/iscsi/iscsi_target_nego.c
+index 685d771b51d41..e32d93b927428 100644
+--- a/drivers/target/iscsi/iscsi_target_nego.c
++++ b/drivers/target/iscsi/iscsi_target_nego.c
+@@ -535,12 +535,11 @@ static bool iscsi_target_sk_check_and_clear(struct iscsi_conn *conn, unsigned in
+
+ static void iscsi_target_login_drop(struct iscsi_conn *conn, struct iscsi_login *login)
+ {
+- struct iscsi_np *np = login->np;
+ bool zero_tsih = login->zero_tsih;
+
+ iscsi_remove_failed_auth_entry(conn);
+ iscsi_target_nego_release(conn);
+- iscsi_target_login_sess_out(conn, np, zero_tsih, true);
++ iscsi_target_login_sess_out(conn, zero_tsih, true);
+ }
+
+ struct conn_timeout {
+diff --git a/drivers/thunderbolt/switch.c b/drivers/thunderbolt/switch.c
+index d7d60cd9226f7..43df502a36d6b 100644
+--- a/drivers/thunderbolt/switch.c
++++ b/drivers/thunderbolt/switch.c
+@@ -739,6 +739,7 @@ static int tb_init_port(struct tb_port *port)
+ if (res == -ENODEV) {
+ tb_dbg(port->sw->tb, " Port %d: not implemented\n",
+ port->port);
++ port->disabled = true;
+ return 0;
+ }
+ return res;
+diff --git a/drivers/thunderbolt/tb.h b/drivers/thunderbolt/tb.h
+index 2eb2bcd3cca35..72e48e86c476e 100644
+--- a/drivers/thunderbolt/tb.h
++++ b/drivers/thunderbolt/tb.h
+@@ -167,7 +167,7 @@ struct tb_switch {
+ * @cap_adap: Offset of the adapter specific capability (%0 if not present)
+ * @cap_usb4: Offset to the USB4 port capability (%0 if not present)
+ * @port: Port number on switch
+- * @disabled: Disabled by eeprom
++ * @disabled: Disabled by eeprom or enabled but not implemented
+ * @bonded: true if the port is bonded (two lanes combined as one)
+ * @dual_link_port: If the switch is connected using two ports, points
+ * to the other port.
+diff --git a/drivers/usb/core/message.c b/drivers/usb/core/message.c
+index 6197938dcc2d8..ae1de9cc4b094 100644
+--- a/drivers/usb/core/message.c
++++ b/drivers/usb/core/message.c
+@@ -1205,6 +1205,34 @@ void usb_disable_interface(struct usb_device *dev, struct usb_interface *intf,
+ }
+ }
+
++/*
++ * usb_disable_device_endpoints -- Disable all endpoints for a device
++ * @dev: the device whose endpoints are being disabled
++ * @skip_ep0: 0 to disable endpoint 0, 1 to skip it.
++ */
++static void usb_disable_device_endpoints(struct usb_device *dev, int skip_ep0)
++{
++ struct usb_hcd *hcd = bus_to_hcd(dev->bus);
++ int i;
++
++ if (hcd->driver->check_bandwidth) {
++ /* First pass: Cancel URBs, leave endpoint pointers intact. */
++ for (i = skip_ep0; i < 16; ++i) {
++ usb_disable_endpoint(dev, i, false);
++ usb_disable_endpoint(dev, i + USB_DIR_IN, false);
++ }
++ /* Remove endpoints from the host controller internal state */
++ mutex_lock(hcd->bandwidth_mutex);
++ usb_hcd_alloc_bandwidth(dev, NULL, NULL, NULL);
++ mutex_unlock(hcd->bandwidth_mutex);
++ }
++ /* Second pass: remove endpoint pointers */
++ for (i = skip_ep0; i < 16; ++i) {
++ usb_disable_endpoint(dev, i, true);
++ usb_disable_endpoint(dev, i + USB_DIR_IN, true);
++ }
++}
++
+ /**
+ * usb_disable_device - Disable all the endpoints for a USB device
+ * @dev: the device whose endpoints are being disabled
+@@ -1218,7 +1246,6 @@ void usb_disable_interface(struct usb_device *dev, struct usb_interface *intf,
+ void usb_disable_device(struct usb_device *dev, int skip_ep0)
+ {
+ int i;
+- struct usb_hcd *hcd = bus_to_hcd(dev->bus);
+
+ /* getting rid of interfaces will disconnect
+ * any drivers bound to them (a key side effect)
+@@ -1264,22 +1291,8 @@ void usb_disable_device(struct usb_device *dev, int skip_ep0)
+
+ dev_dbg(&dev->dev, "%s nuking %s URBs\n", __func__,
+ skip_ep0 ? "non-ep0" : "all");
+- if (hcd->driver->check_bandwidth) {
+- /* First pass: Cancel URBs, leave endpoint pointers intact. */
+- for (i = skip_ep0; i < 16; ++i) {
+- usb_disable_endpoint(dev, i, false);
+- usb_disable_endpoint(dev, i + USB_DIR_IN, false);
+- }
+- /* Remove endpoints from the host controller internal state */
+- mutex_lock(hcd->bandwidth_mutex);
+- usb_hcd_alloc_bandwidth(dev, NULL, NULL, NULL);
+- mutex_unlock(hcd->bandwidth_mutex);
+- /* Second pass: remove endpoint pointers */
+- }
+- for (i = skip_ep0; i < 16; ++i) {
+- usb_disable_endpoint(dev, i, true);
+- usb_disable_endpoint(dev, i + USB_DIR_IN, true);
+- }
++
++ usb_disable_device_endpoints(dev, skip_ep0);
+ }
+
+ /**
+@@ -1522,6 +1535,9 @@ EXPORT_SYMBOL_GPL(usb_set_interface);
+ * The caller must own the device lock.
+ *
+ * Return: Zero on success, else a negative error code.
++ *
++ * If this routine fails the device will probably be in an unusable state
++ * with endpoints disabled, and interfaces only partially enabled.
+ */
+ int usb_reset_configuration(struct usb_device *dev)
+ {
+@@ -1537,10 +1553,7 @@ int usb_reset_configuration(struct usb_device *dev)
+ * calls during probe() are fine
+ */
+
+- for (i = 1; i < 16; ++i) {
+- usb_disable_endpoint(dev, i, true);
+- usb_disable_endpoint(dev, i + USB_DIR_IN, true);
+- }
++ usb_disable_device_endpoints(dev, 1); /* skip ep0*/
+
+ config = dev->actconfig;
+ retval = 0;
+@@ -1553,34 +1566,10 @@ int usb_reset_configuration(struct usb_device *dev)
+ mutex_unlock(hcd->bandwidth_mutex);
+ return -ENOMEM;
+ }
+- /* Make sure we have enough bandwidth for each alternate setting 0 */
+- for (i = 0; i < config->desc.bNumInterfaces; i++) {
+- struct usb_interface *intf = config->interface[i];
+- struct usb_host_interface *alt;
+
+- alt = usb_altnum_to_altsetting(intf, 0);
+- if (!alt)
+- alt = &intf->altsetting[0];
+- if (alt != intf->cur_altsetting)
+- retval = usb_hcd_alloc_bandwidth(dev, NULL,
+- intf->cur_altsetting, alt);
+- if (retval < 0)
+- break;
+- }
+- /* If not, reinstate the old alternate settings */
++ /* xHCI adds all endpoints in usb_hcd_alloc_bandwidth */
++ retval = usb_hcd_alloc_bandwidth(dev, config, NULL, NULL);
+ if (retval < 0) {
+-reset_old_alts:
+- for (i--; i >= 0; i--) {
+- struct usb_interface *intf = config->interface[i];
+- struct usb_host_interface *alt;
+-
+- alt = usb_altnum_to_altsetting(intf, 0);
+- if (!alt)
+- alt = &intf->altsetting[0];
+- if (alt != intf->cur_altsetting)
+- usb_hcd_alloc_bandwidth(dev, NULL,
+- alt, intf->cur_altsetting);
+- }
+ usb_enable_lpm(dev);
+ mutex_unlock(hcd->bandwidth_mutex);
+ return retval;
+@@ -1589,8 +1578,12 @@ reset_old_alts:
+ USB_REQ_SET_CONFIGURATION, 0,
+ config->desc.bConfigurationValue, 0,
+ NULL, 0, USB_CTRL_SET_TIMEOUT);
+- if (retval < 0)
+- goto reset_old_alts;
++ if (retval < 0) {
++ usb_hcd_alloc_bandwidth(dev, NULL, NULL, NULL);
++ usb_enable_lpm(dev);
++ mutex_unlock(hcd->bandwidth_mutex);
++ return retval;
++ }
+ mutex_unlock(hcd->bandwidth_mutex);
+
+ /* re-init hc/hcd interface/endpoint state */
+diff --git a/drivers/usb/core/sysfs.c b/drivers/usb/core/sysfs.c
+index a2ca38e25e0c3..8d134193fa0cf 100644
+--- a/drivers/usb/core/sysfs.c
++++ b/drivers/usb/core/sysfs.c
+@@ -889,7 +889,11 @@ read_descriptors(struct file *filp, struct kobject *kobj,
+ size_t srclen, n;
+ int cfgno;
+ void *src;
++ int retval;
+
++ retval = usb_lock_device_interruptible(udev);
++ if (retval < 0)
++ return -EINTR;
+ /* The binary attribute begins with the device descriptor.
+ * Following that are the raw descriptor entries for all the
+ * configurations (config plus subsidiary descriptors).
+@@ -914,6 +918,7 @@ read_descriptors(struct file *filp, struct kobject *kobj,
+ off -= srclen;
+ }
+ }
++ usb_unlock_device(udev);
+ return count - nleft;
+ }
+
+diff --git a/drivers/usb/dwc3/dwc3-meson-g12a.c b/drivers/usb/dwc3/dwc3-meson-g12a.c
+index 88b75b5a039c9..1f7f4d88ed9d8 100644
+--- a/drivers/usb/dwc3/dwc3-meson-g12a.c
++++ b/drivers/usb/dwc3/dwc3-meson-g12a.c
+@@ -737,13 +737,13 @@ static int dwc3_meson_g12a_probe(struct platform_device *pdev)
+ goto err_disable_clks;
+ }
+
+- ret = reset_control_deassert(priv->reset);
++ ret = reset_control_reset(priv->reset);
+ if (ret)
+- goto err_assert_reset;
++ goto err_disable_clks;
+
+ ret = dwc3_meson_g12a_get_phys(priv);
+ if (ret)
+- goto err_assert_reset;
++ goto err_disable_clks;
+
+ ret = priv->drvdata->setup_regmaps(priv, base);
+ if (ret)
+@@ -752,7 +752,7 @@ static int dwc3_meson_g12a_probe(struct platform_device *pdev)
+ if (priv->vbus) {
+ ret = regulator_enable(priv->vbus);
+ if (ret)
+- goto err_assert_reset;
++ goto err_disable_clks;
+ }
+
+ /* Get dr_mode */
+@@ -765,13 +765,13 @@ static int dwc3_meson_g12a_probe(struct platform_device *pdev)
+
+ ret = priv->drvdata->usb_init(priv);
+ if (ret)
+- goto err_assert_reset;
++ goto err_disable_clks;
+
+ /* Init PHYs */
+ for (i = 0 ; i < PHY_COUNT ; ++i) {
+ ret = phy_init(priv->phys[i]);
+ if (ret)
+- goto err_assert_reset;
++ goto err_disable_clks;
+ }
+
+ /* Set PHY Power */
+@@ -809,9 +809,6 @@ err_phys_exit:
+ for (i = 0 ; i < PHY_COUNT ; ++i)
+ phy_exit(priv->phys[i]);
+
+-err_assert_reset:
+- reset_control_assert(priv->reset);
+-
+ err_disable_clks:
+ clk_bulk_disable_unprepare(priv->drvdata->num_clks,
+ priv->drvdata->clks);
+diff --git a/drivers/usb/serial/ftdi_sio.c b/drivers/usb/serial/ftdi_sio.c
+index 33f1cca7eaa61..ae98fe94fe91e 100644
+--- a/drivers/usb/serial/ftdi_sio.c
++++ b/drivers/usb/serial/ftdi_sio.c
+@@ -713,6 +713,7 @@ static const struct usb_device_id id_table_combined[] = {
+ { USB_DEVICE(XSENS_VID, XSENS_AWINDA_STATION_PID) },
+ { USB_DEVICE(XSENS_VID, XSENS_CONVERTER_PID) },
+ { USB_DEVICE(XSENS_VID, XSENS_MTDEVBOARD_PID) },
++ { USB_DEVICE(XSENS_VID, XSENS_MTIUSBCONVERTER_PID) },
+ { USB_DEVICE(XSENS_VID, XSENS_MTW_PID) },
+ { USB_DEVICE(FTDI_VID, FTDI_OMNI1509) },
+ { USB_DEVICE(MOBILITY_VID, MOBILITY_USB_SERIAL_PID) },
+diff --git a/drivers/usb/serial/ftdi_sio_ids.h b/drivers/usb/serial/ftdi_sio_ids.h
+index e8373528264c3..b5ca17a5967a0 100644
+--- a/drivers/usb/serial/ftdi_sio_ids.h
++++ b/drivers/usb/serial/ftdi_sio_ids.h
+@@ -160,6 +160,7 @@
+ #define XSENS_AWINDA_DONGLE_PID 0x0102
+ #define XSENS_MTW_PID 0x0200 /* Xsens MTw */
+ #define XSENS_MTDEVBOARD_PID 0x0300 /* Motion Tracker Development Board */
++#define XSENS_MTIUSBCONVERTER_PID 0x0301 /* MTi USB converter */
+ #define XSENS_CONVERTER_PID 0xD00D /* Xsens USB-serial converter */
+
+ /* Xsens devices using FTDI VID */
+diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
+index 9b7cee98ea607..f7a6ac05ac57a 100644
+--- a/drivers/usb/serial/option.c
++++ b/drivers/usb/serial/option.c
+@@ -1094,14 +1094,18 @@ static const struct usb_device_id option_ids[] = {
+ { USB_DEVICE(QUALCOMM_VENDOR_ID, UBLOX_PRODUCT_R410M),
+ .driver_info = RSVD(1) | RSVD(3) },
+ /* Quectel products using Quectel vendor ID */
+- { USB_DEVICE(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EC21),
+- .driver_info = RSVD(4) },
+- { USB_DEVICE(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EC25),
+- .driver_info = RSVD(4) },
+- { USB_DEVICE(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EG95),
+- .driver_info = RSVD(4) },
+- { USB_DEVICE(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_BG96),
+- .driver_info = RSVD(4) },
++ { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EC21, 0xff, 0xff, 0xff),
++ .driver_info = NUMEP2 },
++ { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EC21, 0xff, 0, 0) },
++ { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EC25, 0xff, 0xff, 0xff),
++ .driver_info = NUMEP2 },
++ { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EC25, 0xff, 0, 0) },
++ { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EG95, 0xff, 0xff, 0xff),
++ .driver_info = NUMEP2 },
++ { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EG95, 0xff, 0, 0) },
++ { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_BG96, 0xff, 0xff, 0xff),
++ .driver_info = NUMEP2 },
++ { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_BG96, 0xff, 0, 0) },
+ { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EP06, 0xff, 0xff, 0xff),
+ .driver_info = RSVD(1) | RSVD(2) | RSVD(3) | RSVD(4) | NUMEP2 },
+ { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EP06, 0xff, 0, 0) },
+@@ -1819,6 +1823,8 @@ static const struct usb_device_id option_ids[] = {
+ { USB_DEVICE_INTERFACE_CLASS(0x1e0e, 0x9003, 0xff) }, /* Simcom SIM7500/SIM7600 MBIM mode */
+ { USB_DEVICE_INTERFACE_CLASS(0x1e0e, 0x9011, 0xff), /* Simcom SIM7500/SIM7600 RNDIS mode */
+ .driver_info = RSVD(7) },
++ { USB_DEVICE_INTERFACE_CLASS(0x1e0e, 0x9205, 0xff) }, /* Simcom SIM7070/SIM7080/SIM7090 AT+ECM mode */
++ { USB_DEVICE_INTERFACE_CLASS(0x1e0e, 0x9206, 0xff) }, /* Simcom SIM7070/SIM7080/SIM7090 AT-only mode */
+ { USB_DEVICE(ALCATEL_VENDOR_ID, ALCATEL_PRODUCT_X060S_X200),
+ .driver_info = NCTRL(0) | NCTRL(1) | RSVD(4) },
+ { USB_DEVICE(ALCATEL_VENDOR_ID, ALCATEL_PRODUCT_X220_X500D),
+diff --git a/drivers/usb/typec/mux/intel_pmc_mux.c b/drivers/usb/typec/mux/intel_pmc_mux.c
+index 70ddc9d6d49e4..6a17789208779 100644
+--- a/drivers/usb/typec/mux/intel_pmc_mux.c
++++ b/drivers/usb/typec/mux/intel_pmc_mux.c
+@@ -56,14 +56,11 @@ enum {
+
+ #define PMC_USB_ALTMODE_ORI_SHIFT 1
+ #define PMC_USB_ALTMODE_UFP_SHIFT 3
+-#define PMC_USB_ALTMODE_ORI_AUX_SHIFT 4
+-#define PMC_USB_ALTMODE_ORI_HSL_SHIFT 5
+
+ /* DP specific Mode Data bits */
+ #define PMC_USB_ALTMODE_DP_MODE_SHIFT 8
+
+ /* TBT specific Mode Data bits */
+-#define PMC_USB_ALTMODE_HPD_HIGH BIT(14)
+ #define PMC_USB_ALTMODE_TBT_TYPE BIT(17)
+ #define PMC_USB_ALTMODE_CABLE_TYPE BIT(18)
+ #define PMC_USB_ALTMODE_ACTIVE_LINK BIT(20)
+@@ -174,15 +171,9 @@ pmc_usb_mux_dp(struct pmc_usb_port *port, struct typec_mux_state *state)
+ req.mode_data = (port->orientation - 1) << PMC_USB_ALTMODE_ORI_SHIFT;
+ req.mode_data |= (port->role - 1) << PMC_USB_ALTMODE_UFP_SHIFT;
+
+- req.mode_data |= sbu_orientation(port) << PMC_USB_ALTMODE_ORI_AUX_SHIFT;
+- req.mode_data |= hsl_orientation(port) << PMC_USB_ALTMODE_ORI_HSL_SHIFT;
+-
+ req.mode_data |= (state->mode - TYPEC_STATE_MODAL) <<
+ PMC_USB_ALTMODE_DP_MODE_SHIFT;
+
+- if (data->status & DP_STATUS_HPD_STATE)
+- req.mode_data |= PMC_USB_ALTMODE_HPD_HIGH;
+-
+ ret = pmc_usb_command(port, (void *)&req, sizeof(req));
+ if (ret)
+ return ret;
+@@ -207,9 +198,6 @@ pmc_usb_mux_tbt(struct pmc_usb_port *port, struct typec_mux_state *state)
+ req.mode_data = (port->orientation - 1) << PMC_USB_ALTMODE_ORI_SHIFT;
+ req.mode_data |= (port->role - 1) << PMC_USB_ALTMODE_UFP_SHIFT;
+
+- req.mode_data |= sbu_orientation(port) << PMC_USB_ALTMODE_ORI_AUX_SHIFT;
+- req.mode_data |= hsl_orientation(port) << PMC_USB_ALTMODE_ORI_HSL_SHIFT;
+-
+ if (TBT_ADAPTER(data->device_mode) == TBT_ADAPTER_TBT3)
+ req.mode_data |= PMC_USB_ALTMODE_TBT_TYPE;
+
+@@ -441,6 +429,7 @@ err_remove_ports:
+ for (i = 0; i < pmc->num_ports; i++) {
+ typec_switch_unregister(pmc->port[i].typec_sw);
+ typec_mux_unregister(pmc->port[i].typec_mux);
++ usb_role_switch_unregister(pmc->port[i].usb_sw);
+ }
+
+ return ret;
+@@ -454,6 +443,7 @@ static int pmc_usb_remove(struct platform_device *pdev)
+ for (i = 0; i < pmc->num_ports; i++) {
+ typec_switch_unregister(pmc->port[i].typec_sw);
+ typec_mux_unregister(pmc->port[i].typec_mux);
++ usb_role_switch_unregister(pmc->port[i].usb_sw);
+ }
+
+ return 0;
+diff --git a/drivers/usb/typec/ucsi/ucsi_acpi.c b/drivers/usb/typec/ucsi/ucsi_acpi.c
+index 9fc4f338e8700..c0aca2f0f23f0 100644
+--- a/drivers/usb/typec/ucsi/ucsi_acpi.c
++++ b/drivers/usb/typec/ucsi/ucsi_acpi.c
+@@ -112,11 +112,15 @@ static void ucsi_acpi_notify(acpi_handle handle, u32 event, void *data)
+
+ static int ucsi_acpi_probe(struct platform_device *pdev)
+ {
++ struct acpi_device *adev = ACPI_COMPANION(&pdev->dev);
+ struct ucsi_acpi *ua;
+ struct resource *res;
+ acpi_status status;
+ int ret;
+
++ if (adev->dep_unmet)
++ return -EPROBE_DEFER;
++
+ ua = devm_kzalloc(&pdev->dev, sizeof(*ua), GFP_KERNEL);
+ if (!ua)
+ return -ENOMEM;
+diff --git a/drivers/video/console/Kconfig b/drivers/video/console/Kconfig
+index 5e850cc9f891d..39deb22a41807 100644
+--- a/drivers/video/console/Kconfig
++++ b/drivers/video/console/Kconfig
+@@ -22,52 +22,6 @@ config VGA_CONSOLE
+
+ Say Y.
+
+-config VGACON_SOFT_SCROLLBACK
+- bool "Enable Scrollback Buffer in System RAM"
+- depends on VGA_CONSOLE
+- default n
+- help
+- The scrollback buffer of the standard VGA console is located in
+- the VGA RAM. The size of this RAM is fixed and is quite small.
+- If you require a larger scrollback buffer, this can be placed in
+- System RAM which is dynamically allocated during initialization.
+- Placing the scrollback buffer in System RAM will slightly slow
+- down the console.
+-
+- If you want this feature, say 'Y' here and enter the amount of
+- RAM to allocate for this buffer. If unsure, say 'N'.
+-
+-config VGACON_SOFT_SCROLLBACK_SIZE
+- int "Scrollback Buffer Size (in KB)"
+- depends on VGACON_SOFT_SCROLLBACK
+- range 1 1024
+- default "64"
+- help
+- Enter the amount of System RAM to allocate for scrollback
+- buffers of VGA consoles. Each 64KB will give you approximately
+- 16 80x25 screenfuls of scrollback buffer.
+-
+-config VGACON_SOFT_SCROLLBACK_PERSISTENT_ENABLE_BY_DEFAULT
+- bool "Persistent Scrollback History for each console by default"
+- depends on VGACON_SOFT_SCROLLBACK
+- default n
+- help
+- Say Y here if the scrollback history should persist by default when
+- switching between consoles. Otherwise, the scrollback history will be
+- flushed each time the console is switched. This feature can also be
+- enabled using the boot command line parameter
+- 'vgacon.scrollback_persistent=1'.
+-
+- This feature might break your tool of choice to flush the scrollback
+- buffer, e.g. clear(1) will work fine but Debian's clear_console(1)
+- will be broken, which might cause security issues.
+- You can use the escape sequence \e[3J instead if this feature is
+- activated.
+-
+- Note that a buffer of VGACON_SOFT_SCROLLBACK_SIZE is taken for each
+- created tty device.
+- So if you use a RAM-constrained system, say N here.
+-
+ config MDA_CONSOLE
+ depends on !M68K && !PARISC && ISA
+ tristate "MDA text console (dual-headed)"
+diff --git a/drivers/video/console/vgacon.c b/drivers/video/console/vgacon.c
+index e9254b3085a3e..6d0418e88ad71 100644
+--- a/drivers/video/console/vgacon.c
++++ b/drivers/video/console/vgacon.c
+@@ -165,214 +165,6 @@ static inline void vga_set_mem_top(struct vc_data *c)
+ write_vga(12, (c->vc_visible_origin - vga_vram_base) / 2);
+ }
+
+-#ifdef CONFIG_VGACON_SOFT_SCROLLBACK
+-/* software scrollback */
+-struct vgacon_scrollback_info {
+- void *data;
+- int tail;
+- int size;
+- int rows;
+- int cnt;
+- int cur;
+- int save;
+- int restore;
+-};
+-
+-static struct vgacon_scrollback_info *vgacon_scrollback_cur;
+-static struct vgacon_scrollback_info vgacon_scrollbacks[MAX_NR_CONSOLES];
+-static bool scrollback_persistent = \
+- IS_ENABLED(CONFIG_VGACON_SOFT_SCROLLBACK_PERSISTENT_ENABLE_BY_DEFAULT);
+-module_param_named(scrollback_persistent, scrollback_persistent, bool, 0000);
+-MODULE_PARM_DESC(scrollback_persistent, "Enable persistent scrollback for all vga consoles");
+-
+-static void vgacon_scrollback_reset(int vc_num, size_t reset_size)
+-{
+- struct vgacon_scrollback_info *scrollback = &vgacon_scrollbacks[vc_num];
+-
+- if (scrollback->data && reset_size > 0)
+- memset(scrollback->data, 0, reset_size);
+-
+- scrollback->cnt = 0;
+- scrollback->tail = 0;
+- scrollback->cur = 0;
+-}
+-
+-static void vgacon_scrollback_init(int vc_num)
+-{
+- int pitch = vga_video_num_columns * 2;
+- size_t size = CONFIG_VGACON_SOFT_SCROLLBACK_SIZE * 1024;
+- int rows = size / pitch;
+- void *data;
+-
+- data = kmalloc_array(CONFIG_VGACON_SOFT_SCROLLBACK_SIZE, 1024,
+- GFP_NOWAIT);
+-
+- vgacon_scrollbacks[vc_num].data = data;
+- vgacon_scrollback_cur = &vgacon_scrollbacks[vc_num];
+-
+- vgacon_scrollback_cur->rows = rows - 1;
+- vgacon_scrollback_cur->size = rows * pitch;
+-
+- vgacon_scrollback_reset(vc_num, size);
+-}
+-
+-static void vgacon_scrollback_switch(int vc_num)
+-{
+- if (!scrollback_persistent)
+- vc_num = 0;
+-
+- if (!vgacon_scrollbacks[vc_num].data) {
+- vgacon_scrollback_init(vc_num);
+- } else {
+- if (scrollback_persistent) {
+- vgacon_scrollback_cur = &vgacon_scrollbacks[vc_num];
+- } else {
+- size_t size = CONFIG_VGACON_SOFT_SCROLLBACK_SIZE * 1024;
+-
+- vgacon_scrollback_reset(vc_num, size);
+- }
+- }
+-}
+-
+-static void vgacon_scrollback_startup(void)
+-{
+- vgacon_scrollback_cur = &vgacon_scrollbacks[0];
+- vgacon_scrollback_init(0);
+-}
+-
+-static void vgacon_scrollback_update(struct vc_data *c, int t, int count)
+-{
+- void *p;
+-
+- if (!vgacon_scrollback_cur->data || !vgacon_scrollback_cur->size ||
+- c->vc_num != fg_console)
+- return;
+-
+- p = (void *) (c->vc_origin + t * c->vc_size_row);
+-
+- while (count--) {
+- if ((vgacon_scrollback_cur->tail + c->vc_size_row) >
+- vgacon_scrollback_cur->size)
+- vgacon_scrollback_cur->tail = 0;
+-
+- scr_memcpyw(vgacon_scrollback_cur->data +
+- vgacon_scrollback_cur->tail,
+- p, c->vc_size_row);
+-
+- vgacon_scrollback_cur->cnt++;
+- p += c->vc_size_row;
+- vgacon_scrollback_cur->tail += c->vc_size_row;
+-
+- if (vgacon_scrollback_cur->tail >= vgacon_scrollback_cur->size)
+- vgacon_scrollback_cur->tail = 0;
+-
+- if (vgacon_scrollback_cur->cnt > vgacon_scrollback_cur->rows)
+- vgacon_scrollback_cur->cnt = vgacon_scrollback_cur->rows;
+-
+- vgacon_scrollback_cur->cur = vgacon_scrollback_cur->cnt;
+- }
+-}
+-
+-static void vgacon_restore_screen(struct vc_data *c)
+-{
+- c->vc_origin = c->vc_visible_origin;
+- vgacon_scrollback_cur->save = 0;
+-
+- if (!vga_is_gfx && !vgacon_scrollback_cur->restore) {
+- scr_memcpyw((u16 *) c->vc_origin, (u16 *) c->vc_screenbuf,
+- c->vc_screenbuf_size > vga_vram_size ?
+- vga_vram_size : c->vc_screenbuf_size);
+- vgacon_scrollback_cur->restore = 1;
+- vgacon_scrollback_cur->cur = vgacon_scrollback_cur->cnt;
+- }
+-}
+-
+-static void vgacon_scrolldelta(struct vc_data *c, int lines)
+-{
+- int start, end, count, soff;
+-
+- if (!lines) {
+- vgacon_restore_screen(c);
+- return;
+- }
+-
+- if (!vgacon_scrollback_cur->data)
+- return;
+-
+- if (!vgacon_scrollback_cur->save) {
+- vgacon_cursor(c, CM_ERASE);
+- vgacon_save_screen(c);
+- c->vc_origin = (unsigned long)c->vc_screenbuf;
+- vgacon_scrollback_cur->save = 1;
+- }
+-
+- vgacon_scrollback_cur->restore = 0;
+- start = vgacon_scrollback_cur->cur + lines;
+- end = start + abs(lines);
+-
+- if (start < 0)
+- start = 0;
+-
+- if (start > vgacon_scrollback_cur->cnt)
+- start = vgacon_scrollback_cur->cnt;
+-
+- if (end < 0)
+- end = 0;
+-
+- if (end > vgacon_scrollback_cur->cnt)
+- end = vgacon_scrollback_cur->cnt;
+-
+- vgacon_scrollback_cur->cur = start;
+- count = end - start;
+- soff = vgacon_scrollback_cur->tail -
+- ((vgacon_scrollback_cur->cnt - end) * c->vc_size_row);
+- soff -= count * c->vc_size_row;
+-
+- if (soff < 0)
+- soff += vgacon_scrollback_cur->size;
+-
+- count = vgacon_scrollback_cur->cnt - start;
+-
+- if (count > c->vc_rows)
+- count = c->vc_rows;
+-
+- if (count) {
+- int copysize;
+-
+- int diff = c->vc_rows - count;
+- void *d = (void *) c->vc_visible_origin;
+- void *s = (void *) c->vc_screenbuf;
+-
+- count *= c->vc_size_row;
+- /* how much memory to end of buffer left? */
+- copysize = min(count, vgacon_scrollback_cur->size - soff);
+- scr_memcpyw(d, vgacon_scrollback_cur->data + soff, copysize);
+- d += copysize;
+- count -= copysize;
+-
+- if (count) {
+- scr_memcpyw(d, vgacon_scrollback_cur->data, count);
+- d += count;
+- }
+-
+- if (diff)
+- scr_memcpyw(d, s, diff * c->vc_size_row);
+- } else
+- vgacon_cursor(c, CM_MOVE);
+-}
+-
+-static void vgacon_flush_scrollback(struct vc_data *c)
+-{
+- size_t size = CONFIG_VGACON_SOFT_SCROLLBACK_SIZE * 1024;
+-
+- vgacon_scrollback_reset(c->vc_num, size);
+-}
+-#else
+-#define vgacon_scrollback_startup(...) do { } while (0)
+-#define vgacon_scrollback_init(...) do { } while (0)
+-#define vgacon_scrollback_update(...) do { } while (0)
+-#define vgacon_scrollback_switch(...) do { } while (0)
+-
+ static void vgacon_restore_screen(struct vc_data *c)
+ {
+ if (c->vc_origin != c->vc_visible_origin)
+@@ -386,11 +178,6 @@ static void vgacon_scrolldelta(struct vc_data *c, int lines)
+ vga_set_mem_top(c);
+ }
+
+-static void vgacon_flush_scrollback(struct vc_data *c)
+-{
+-}
+-#endif /* CONFIG_VGACON_SOFT_SCROLLBACK */
+-
+ static const char *vgacon_startup(void)
+ {
+ const char *display_desc = NULL;
+@@ -573,10 +360,7 @@ static const char *vgacon_startup(void)
+ vgacon_xres = screen_info.orig_video_cols * VGA_FONTWIDTH;
+ vgacon_yres = vga_scan_lines;
+
+- if (!vga_init_done) {
+- vgacon_scrollback_startup();
+- vga_init_done = true;
+- }
++ vga_init_done = true;
+
+ return display_desc;
+ }
+@@ -867,7 +651,6 @@ static int vgacon_switch(struct vc_data *c)
+ vgacon_doresize(c, c->vc_cols, c->vc_rows);
+ }
+
+- vgacon_scrollback_switch(c->vc_num);
+ return 0; /* Redrawing not needed */
+ }
+
+@@ -1384,7 +1167,6 @@ static bool vgacon_scroll(struct vc_data *c, unsigned int t, unsigned int b,
+ oldo = c->vc_origin;
+ delta = lines * c->vc_size_row;
+ if (dir == SM_UP) {
+- vgacon_scrollback_update(c, t, lines);
+ if (c->vc_scr_end + delta >= vga_vram_end) {
+ scr_memcpyw((u16 *) vga_vram_base,
+ (u16 *) (oldo + delta),
+@@ -1448,7 +1230,6 @@ const struct consw vga_con = {
+ .con_save_screen = vgacon_save_screen,
+ .con_build_attr = vgacon_build_attr,
+ .con_invert_region = vgacon_invert_region,
+- .con_flush_scrollback = vgacon_flush_scrollback,
+ };
+ EXPORT_SYMBOL(vga_con);
+
+diff --git a/drivers/video/fbdev/core/bitblit.c b/drivers/video/fbdev/core/bitblit.c
+index 35ebeeccde4df..436365efae731 100644
+--- a/drivers/video/fbdev/core/bitblit.c
++++ b/drivers/video/fbdev/core/bitblit.c
+@@ -234,7 +234,7 @@ static void bit_clear_margins(struct vc_data *vc, struct fb_info *info,
+ }
+
+ static void bit_cursor(struct vc_data *vc, struct fb_info *info, int mode,
+- int softback_lines, int fg, int bg)
++ int fg, int bg)
+ {
+ struct fb_cursor cursor;
+ struct fbcon_ops *ops = info->fbcon_par;
+@@ -247,15 +247,6 @@ static void bit_cursor(struct vc_data *vc, struct fb_info *info, int mode,
+
+ cursor.set = 0;
+
+- if (softback_lines) {
+- if (y + softback_lines >= vc->vc_rows) {
+- mode = CM_ERASE;
+- ops->cursor_flash = 0;
+- return;
+- } else
+- y += softback_lines;
+- }
+-
+ c = scr_readw((u16 *) vc->vc_pos);
+ attribute = get_attribute(info, c);
+ src = vc->vc_font.data + ((c & charmask) * (w * vc->vc_font.height));
+diff --git a/drivers/video/fbdev/core/fbcon.c b/drivers/video/fbdev/core/fbcon.c
+index fbf10e62bcde9..b36bfe10c712c 100644
+--- a/drivers/video/fbdev/core/fbcon.c
++++ b/drivers/video/fbdev/core/fbcon.c
+@@ -122,12 +122,6 @@ static int logo_lines;
+ /* logo_shown is an index to vc_cons when >= 0; otherwise follows FBCON_LOGO
+ enums. */
+ static int logo_shown = FBCON_LOGO_CANSHOW;
+-/* Software scrollback */
+-static int fbcon_softback_size = 32768;
+-static unsigned long softback_buf, softback_curr;
+-static unsigned long softback_in;
+-static unsigned long softback_top, softback_end;
+-static int softback_lines;
+ /* console mappings */
+ static int first_fb_vc;
+ static int last_fb_vc = MAX_NR_CONSOLES - 1;
+@@ -167,8 +161,6 @@ static int margin_color;
+
+ static const struct consw fb_con;
+
+-#define CM_SOFTBACK (8)
+-
+ #define advance_row(p, delta) (unsigned short *)((unsigned long)(p) + (delta) * vc->vc_size_row)
+
+ static int fbcon_set_origin(struct vc_data *);
+@@ -373,18 +365,6 @@ static int get_color(struct vc_data *vc, struct fb_info *info,
+ return color;
+ }
+
+-static void fbcon_update_softback(struct vc_data *vc)
+-{
+- int l = fbcon_softback_size / vc->vc_size_row;
+-
+- if (l > 5)
+- softback_end = softback_buf + l * vc->vc_size_row;
+- else
+- /* Smaller scrollback makes no sense, and 0 would screw
+- the operation totally */
+- softback_top = 0;
+-}
+-
+ static void fb_flashcursor(struct work_struct *work)
+ {
+ struct fb_info *info = container_of(work, struct fb_info, queue);
+@@ -414,7 +394,7 @@ static void fb_flashcursor(struct work_struct *work)
+ c = scr_readw((u16 *) vc->vc_pos);
+ mode = (!ops->cursor_flash || ops->cursor_state.enable) ?
+ CM_ERASE : CM_DRAW;
+- ops->cursor(vc, info, mode, softback_lines, get_color(vc, info, c, 1),
++ ops->cursor(vc, info, mode, get_color(vc, info, c, 1),
+ get_color(vc, info, c, 0));
+ console_unlock();
+ }
+@@ -471,13 +451,7 @@ static int __init fb_console_setup(char *this_opt)
+ }
+
+ if (!strncmp(options, "scrollback:", 11)) {
+- options += 11;
+- if (*options) {
+- fbcon_softback_size = simple_strtoul(options, &options, 0);
+- if (*options == 'k' || *options == 'K') {
+- fbcon_softback_size *= 1024;
+- }
+- }
++ pr_warn("Ignoring scrollback size option\n");
+ continue;
+ }
+
+@@ -1022,31 +996,6 @@ static const char *fbcon_startup(void)
+
+ set_blitting_type(vc, info);
+
+- if (info->fix.type != FB_TYPE_TEXT) {
+- if (fbcon_softback_size) {
+- if (!softback_buf) {
+- softback_buf =
+- (unsigned long)
+- kvmalloc(fbcon_softback_size,
+- GFP_KERNEL);
+- if (!softback_buf) {
+- fbcon_softback_size = 0;
+- softback_top = 0;
+- }
+- }
+- } else {
+- if (softback_buf) {
+- kvfree((void *) softback_buf);
+- softback_buf = 0;
+- softback_top = 0;
+- }
+- }
+- if (softback_buf)
+- softback_in = softback_top = softback_curr =
+- softback_buf;
+- softback_lines = 0;
+- }
+-
+ /* Setup default font */
+ if (!p->fontdata && !vc->vc_font.data) {
+ if (!fontname[0] || !(font = find_font(fontname)))
+@@ -1220,9 +1169,6 @@ static void fbcon_init(struct vc_data *vc, int init)
+ if (logo)
+ fbcon_prepare_logo(vc, info, cols, rows, new_cols, new_rows);
+
+- if (vc == svc && softback_buf)
+- fbcon_update_softback(vc);
+-
+ if (ops->rotate_font && ops->rotate_font(info, vc)) {
+ ops->rotate = FB_ROTATE_UR;
+ set_blitting_type(vc, info);
+@@ -1385,7 +1331,6 @@ static void fbcon_cursor(struct vc_data *vc, int mode)
+ {
+ struct fb_info *info = registered_fb[con2fb_map[vc->vc_num]];
+ struct fbcon_ops *ops = info->fbcon_par;
+- int y;
+ int c = scr_readw((u16 *) vc->vc_pos);
+
+ ops->cur_blink_jiffies = msecs_to_jiffies(vc->vc_cur_blink_ms);
+@@ -1399,16 +1344,8 @@ static void fbcon_cursor(struct vc_data *vc, int mode)
+ fbcon_add_cursor_timer(info);
+
+ ops->cursor_flash = (mode == CM_ERASE) ? 0 : 1;
+- if (mode & CM_SOFTBACK) {
+- mode &= ~CM_SOFTBACK;
+- y = softback_lines;
+- } else {
+- if (softback_lines)
+- fbcon_set_origin(vc);
+- y = 0;
+- }
+
+- ops->cursor(vc, info, mode, y, get_color(vc, info, c, 1),
++ ops->cursor(vc, info, mode, get_color(vc, info, c, 1),
+ get_color(vc, info, c, 0));
+ }
+
+@@ -1479,8 +1416,6 @@ static void fbcon_set_disp(struct fb_info *info, struct fb_var_screeninfo *var,
+
+ if (con_is_visible(vc)) {
+ update_screen(vc);
+- if (softback_buf)
+- fbcon_update_softback(vc);
+ }
+ }
+
+@@ -1618,99 +1553,6 @@ static __inline__ void ypan_down_redraw(struct vc_data *vc, int t, int count)
+ scrollback_current = 0;
+ }
+
+-static void fbcon_redraw_softback(struct vc_data *vc, struct fbcon_display *p,
+- long delta)
+-{
+- int count = vc->vc_rows;
+- unsigned short *d, *s;
+- unsigned long n;
+- int line = 0;
+-
+- d = (u16 *) softback_curr;
+- if (d == (u16 *) softback_in)
+- d = (u16 *) vc->vc_origin;
+- n = softback_curr + delta * vc->vc_size_row;
+- softback_lines -= delta;
+- if (delta < 0) {
+- if (softback_curr < softback_top && n < softback_buf) {
+- n += softback_end - softback_buf;
+- if (n < softback_top) {
+- softback_lines -=
+- (softback_top - n) / vc->vc_size_row;
+- n = softback_top;
+- }
+- } else if (softback_curr >= softback_top
+- && n < softback_top) {
+- softback_lines -=
+- (softback_top - n) / vc->vc_size_row;
+- n = softback_top;
+- }
+- } else {
+- if (softback_curr > softback_in && n >= softback_end) {
+- n += softback_buf - softback_end;
+- if (n > softback_in) {
+- n = softback_in;
+- softback_lines = 0;
+- }
+- } else if (softback_curr <= softback_in && n > softback_in) {
+- n = softback_in;
+- softback_lines = 0;
+- }
+- }
+- if (n == softback_curr)
+- return;
+- softback_curr = n;
+- s = (u16 *) softback_curr;
+- if (s == (u16 *) softback_in)
+- s = (u16 *) vc->vc_origin;
+- while (count--) {
+- unsigned short *start;
+- unsigned short *le;
+- unsigned short c;
+- int x = 0;
+- unsigned short attr = 1;
+-
+- start = s;
+- le = advance_row(s, 1);
+- do {
+- c = scr_readw(s);
+- if (attr != (c & 0xff00)) {
+- attr = c & 0xff00;
+- if (s > start) {
+- fbcon_putcs(vc, start, s - start,
+- line, x);
+- x += s - start;
+- start = s;
+- }
+- }
+- if (c == scr_readw(d)) {
+- if (s > start) {
+- fbcon_putcs(vc, start, s - start,
+- line, x);
+- x += s - start + 1;
+- start = s + 1;
+- } else {
+- x++;
+- start++;
+- }
+- }
+- s++;
+- d++;
+- } while (s < le);
+- if (s > start)
+- fbcon_putcs(vc, start, s - start, line, x);
+- line++;
+- if (d == (u16 *) softback_end)
+- d = (u16 *) softback_buf;
+- if (d == (u16 *) softback_in)
+- d = (u16 *) vc->vc_origin;
+- if (s == (u16 *) softback_end)
+- s = (u16 *) softback_buf;
+- if (s == (u16 *) softback_in)
+- s = (u16 *) vc->vc_origin;
+- }
+-}
+-
+ static void fbcon_redraw_move(struct vc_data *vc, struct fbcon_display *p,
+ int line, int count, int dy)
+ {
+@@ -1850,31 +1692,6 @@ static void fbcon_redraw(struct vc_data *vc, struct fbcon_display *p,
+ }
+ }
+
+-static inline void fbcon_softback_note(struct vc_data *vc, int t,
+- int count)
+-{
+- unsigned short *p;
+-
+- if (vc->vc_num != fg_console)
+- return;
+- p = (unsigned short *) (vc->vc_origin + t * vc->vc_size_row);
+-
+- while (count) {
+- scr_memcpyw((u16 *) softback_in, p, vc->vc_size_row);
+- count--;
+- p = advance_row(p, 1);
+- softback_in += vc->vc_size_row;
+- if (softback_in == softback_end)
+- softback_in = softback_buf;
+- if (softback_in == softback_top) {
+- softback_top += vc->vc_size_row;
+- if (softback_top == softback_end)
+- softback_top = softback_buf;
+- }
+- }
+- softback_curr = softback_in;
+-}
+-
+ static bool fbcon_scroll(struct vc_data *vc, unsigned int t, unsigned int b,
+ enum con_scroll dir, unsigned int count)
+ {
+@@ -1897,8 +1714,6 @@ static bool fbcon_scroll(struct vc_data *vc, unsigned int t, unsigned int b,
+ case SM_UP:
+ if (count > vc->vc_rows) /* Maximum realistic size */
+ count = vc->vc_rows;
+- if (softback_top)
+- fbcon_softback_note(vc, t, count);
+ if (logo_shown >= 0)
+ goto redraw_up;
+ switch (p->scrollmode) {
+@@ -2269,14 +2084,6 @@ static int fbcon_switch(struct vc_data *vc)
+ info = registered_fb[con2fb_map[vc->vc_num]];
+ ops = info->fbcon_par;
+
+- if (softback_top) {
+- if (softback_lines)
+- fbcon_set_origin(vc);
+- softback_top = softback_curr = softback_in = softback_buf;
+- softback_lines = 0;
+- fbcon_update_softback(vc);
+- }
+-
+ if (logo_shown >= 0) {
+ struct vc_data *conp2 = vc_cons[logo_shown].d;
+
+@@ -2600,9 +2407,6 @@ static int fbcon_do_set_font(struct vc_data *vc, int w, int h,
+ int cnt;
+ char *old_data = NULL;
+
+- if (con_is_visible(vc) && softback_lines)
+- fbcon_set_origin(vc);
+-
+ resize = (w != vc->vc_font.width) || (h != vc->vc_font.height);
+ if (p->userfont)
+ old_data = vc->vc_font.data;
+@@ -2628,8 +2432,6 @@ static int fbcon_do_set_font(struct vc_data *vc, int w, int h,
+ cols /= w;
+ rows /= h;
+ vc_resize(vc, cols, rows);
+- if (con_is_visible(vc) && softback_buf)
+- fbcon_update_softback(vc);
+ } else if (con_is_visible(vc)
+ && vc->vc_mode == KD_TEXT) {
+ fbcon_clear_margins(vc, 0);
+@@ -2788,19 +2590,7 @@ static void fbcon_set_palette(struct vc_data *vc, const unsigned char *table)
+
+ static u16 *fbcon_screen_pos(struct vc_data *vc, int offset)
+ {
+- unsigned long p;
+- int line;
+-
+- if (vc->vc_num != fg_console || !softback_lines)
+- return (u16 *) (vc->vc_origin + offset);
+- line = offset / vc->vc_size_row;
+- if (line >= softback_lines)
+- return (u16 *) (vc->vc_origin + offset -
+- softback_lines * vc->vc_size_row);
+- p = softback_curr + offset;
+- if (p >= softback_end)
+- p += softback_buf - softback_end;
+- return (u16 *) p;
++ return (u16 *) (vc->vc_origin + offset);
+ }
+
+ static unsigned long fbcon_getxy(struct vc_data *vc, unsigned long pos,
+@@ -2814,22 +2604,7 @@ static unsigned long fbcon_getxy(struct vc_data *vc, unsigned long pos,
+
+ x = offset % vc->vc_cols;
+ y = offset / vc->vc_cols;
+- if (vc->vc_num == fg_console)
+- y += softback_lines;
+ ret = pos + (vc->vc_cols - x) * 2;
+- } else if (vc->vc_num == fg_console && softback_lines) {
+- unsigned long offset = pos - softback_curr;
+-
+- if (pos < softback_curr)
+- offset += softback_end - softback_buf;
+- offset /= 2;
+- x = offset % vc->vc_cols;
+- y = offset / vc->vc_cols;
+- ret = pos + (vc->vc_cols - x) * 2;
+- if (ret == softback_end)
+- ret = softback_buf;
+- if (ret == softback_in)
+- ret = vc->vc_origin;
+ } else {
+ /* Should not happen */
+ x = y = 0;
+@@ -2857,106 +2632,11 @@ static void fbcon_invert_region(struct vc_data *vc, u16 * p, int cnt)
+ a = ((a) & 0x88ff) | (((a) & 0x7000) >> 4) |
+ (((a) & 0x0700) << 4);
+ scr_writew(a, p++);
+- if (p == (u16 *) softback_end)
+- p = (u16 *) softback_buf;
+- if (p == (u16 *) softback_in)
+- p = (u16 *) vc->vc_origin;
+- }
+-}
+-
+-static void fbcon_scrolldelta(struct vc_data *vc, int lines)
+-{
+- struct fb_info *info = registered_fb[con2fb_map[fg_console]];
+- struct fbcon_ops *ops = info->fbcon_par;
+- struct fbcon_display *disp = &fb_display[fg_console];
+- int offset, limit, scrollback_old;
+-
+- if (softback_top) {
+- if (vc->vc_num != fg_console)
+- return;
+- if (vc->vc_mode != KD_TEXT || !lines)
+- return;
+- if (logo_shown >= 0) {
+- struct vc_data *conp2 = vc_cons[logo_shown].d;
+-
+- if (conp2->vc_top == logo_lines
+- && conp2->vc_bottom == conp2->vc_rows)
+- conp2->vc_top = 0;
+- if (logo_shown == vc->vc_num) {
+- unsigned long p, q;
+- int i;
+-
+- p = softback_in;
+- q = vc->vc_origin +
+- logo_lines * vc->vc_size_row;
+- for (i = 0; i < logo_lines; i++) {
+- if (p == softback_top)
+- break;
+- if (p == softback_buf)
+- p = softback_end;
+- p -= vc->vc_size_row;
+- q -= vc->vc_size_row;
+- scr_memcpyw((u16 *) q, (u16 *) p,
+- vc->vc_size_row);
+- }
+- softback_in = softback_curr = p;
+- update_region(vc, vc->vc_origin,
+- logo_lines * vc->vc_cols);
+- }
+- logo_shown = FBCON_LOGO_CANSHOW;
+- }
+- fbcon_cursor(vc, CM_ERASE | CM_SOFTBACK);
+- fbcon_redraw_softback(vc, disp, lines);
+- fbcon_cursor(vc, CM_DRAW | CM_SOFTBACK);
+- return;
+ }
+-
+- if (!scrollback_phys_max)
+- return;
+-
+- scrollback_old = scrollback_current;
+- scrollback_current -= lines;
+- if (scrollback_current < 0)
+- scrollback_current = 0;
+- else if (scrollback_current > scrollback_max)
+- scrollback_current = scrollback_max;
+- if (scrollback_current == scrollback_old)
+- return;
+-
+- if (fbcon_is_inactive(vc, info))
+- return;
+-
+- fbcon_cursor(vc, CM_ERASE);
+-
+- offset = disp->yscroll - scrollback_current;
+- limit = disp->vrows;
+- switch (disp->scrollmode) {
+- case SCROLL_WRAP_MOVE:
+- info->var.vmode |= FB_VMODE_YWRAP;
+- break;
+- case SCROLL_PAN_MOVE:
+- case SCROLL_PAN_REDRAW:
+- limit -= vc->vc_rows;
+- info->var.vmode &= ~FB_VMODE_YWRAP;
+- break;
+- }
+- if (offset < 0)
+- offset += limit;
+- else if (offset >= limit)
+- offset -= limit;
+-
+- ops->var.xoffset = 0;
+- ops->var.yoffset = offset * vc->vc_font.height;
+- ops->update_start(info);
+-
+- if (!scrollback_current)
+- fbcon_cursor(vc, CM_DRAW);
+ }
+
+ static int fbcon_set_origin(struct vc_data *vc)
+ {
+- if (softback_lines)
+- fbcon_scrolldelta(vc, softback_lines);
+ return 0;
+ }
+
+@@ -3020,8 +2700,6 @@ static void fbcon_modechanged(struct fb_info *info)
+
+ fbcon_set_palette(vc, color_table);
+ update_screen(vc);
+- if (softback_buf)
+- fbcon_update_softback(vc);
+ }
+ }
+
+@@ -3432,7 +3110,6 @@ static const struct consw fb_con = {
+ .con_font_default = fbcon_set_def_font,
+ .con_font_copy = fbcon_copy_font,
+ .con_set_palette = fbcon_set_palette,
+- .con_scrolldelta = fbcon_scrolldelta,
+ .con_set_origin = fbcon_set_origin,
+ .con_invert_region = fbcon_invert_region,
+ .con_screen_pos = fbcon_screen_pos,
+@@ -3667,9 +3344,6 @@ static void fbcon_exit(void)
+ }
+ #endif
+
+- kvfree((void *)softback_buf);
+- softback_buf = 0UL;
+-
+ for_each_registered_fb(i) {
+ int pending = 0;
+
+diff --git a/drivers/video/fbdev/core/fbcon.h b/drivers/video/fbdev/core/fbcon.h
+index 20dea853765f5..78bb14c03643e 100644
+--- a/drivers/video/fbdev/core/fbcon.h
++++ b/drivers/video/fbdev/core/fbcon.h
+@@ -62,7 +62,7 @@ struct fbcon_ops {
+ void (*clear_margins)(struct vc_data *vc, struct fb_info *info,
+ int color, int bottom_only);
+ void (*cursor)(struct vc_data *vc, struct fb_info *info, int mode,
+- int softback_lines, int fg, int bg);
++ int fg, int bg);
+ int (*update_start)(struct fb_info *info);
+ int (*rotate_font)(struct fb_info *info, struct vc_data *vc);
+ struct fb_var_screeninfo var; /* copy of the current fb_var_screeninfo */
+diff --git a/drivers/video/fbdev/core/fbcon_ccw.c b/drivers/video/fbdev/core/fbcon_ccw.c
+index 78f3a56214782..71ad6967a70ee 100644
+--- a/drivers/video/fbdev/core/fbcon_ccw.c
++++ b/drivers/video/fbdev/core/fbcon_ccw.c
+@@ -219,7 +219,7 @@ static void ccw_clear_margins(struct vc_data *vc, struct fb_info *info,
+ }
+
+ static void ccw_cursor(struct vc_data *vc, struct fb_info *info, int mode,
+- int softback_lines, int fg, int bg)
++ int fg, int bg)
+ {
+ struct fb_cursor cursor;
+ struct fbcon_ops *ops = info->fbcon_par;
+@@ -236,15 +236,6 @@ static void ccw_cursor(struct vc_data *vc, struct fb_info *info, int mode,
+
+ cursor.set = 0;
+
+- if (softback_lines) {
+- if (y + softback_lines >= vc->vc_rows) {
+- mode = CM_ERASE;
+- ops->cursor_flash = 0;
+- return;
+- } else
+- y += softback_lines;
+- }
+-
+ c = scr_readw((u16 *) vc->vc_pos);
+ attribute = get_attribute(info, c);
+ src = ops->fontbuffer + ((c & charmask) * (w * vc->vc_font.width));
+diff --git a/drivers/video/fbdev/core/fbcon_cw.c b/drivers/video/fbdev/core/fbcon_cw.c
+index fd098ff17574b..31fe5dd651d44 100644
+--- a/drivers/video/fbdev/core/fbcon_cw.c
++++ b/drivers/video/fbdev/core/fbcon_cw.c
+@@ -202,7 +202,7 @@ static void cw_clear_margins(struct vc_data *vc, struct fb_info *info,
+ }
+
+ static void cw_cursor(struct vc_data *vc, struct fb_info *info, int mode,
+- int softback_lines, int fg, int bg)
++ int fg, int bg)
+ {
+ struct fb_cursor cursor;
+ struct fbcon_ops *ops = info->fbcon_par;
+@@ -219,15 +219,6 @@ static void cw_cursor(struct vc_data *vc, struct fb_info *info, int mode,
+
+ cursor.set = 0;
+
+- if (softback_lines) {
+- if (y + softback_lines >= vc->vc_rows) {
+- mode = CM_ERASE;
+- ops->cursor_flash = 0;
+- return;
+- } else
+- y += softback_lines;
+- }
+-
+ c = scr_readw((u16 *) vc->vc_pos);
+ attribute = get_attribute(info, c);
+ src = ops->fontbuffer + ((c & charmask) * (w * vc->vc_font.width));
+diff --git a/drivers/video/fbdev/core/fbcon_ud.c b/drivers/video/fbdev/core/fbcon_ud.c
+index e165a3fad29ad..b2dd1370e39b2 100644
+--- a/drivers/video/fbdev/core/fbcon_ud.c
++++ b/drivers/video/fbdev/core/fbcon_ud.c
+@@ -249,7 +249,7 @@ static void ud_clear_margins(struct vc_data *vc, struct fb_info *info,
+ }
+
+ static void ud_cursor(struct vc_data *vc, struct fb_info *info, int mode,
+- int softback_lines, int fg, int bg)
++ int fg, int bg)
+ {
+ struct fb_cursor cursor;
+ struct fbcon_ops *ops = info->fbcon_par;
+@@ -267,15 +267,6 @@ static void ud_cursor(struct vc_data *vc, struct fb_info *info, int mode,
+
+ cursor.set = 0;
+
+- if (softback_lines) {
+- if (y + softback_lines >= vc->vc_rows) {
+- mode = CM_ERASE;
+- ops->cursor_flash = 0;
+- return;
+- } else
+- y += softback_lines;
+- }
+-
+ c = scr_readw((u16 *) vc->vc_pos);
+ attribute = get_attribute(info, c);
+ src = ops->fontbuffer + ((c & charmask) * (w * vc->vc_font.height));
+diff --git a/drivers/video/fbdev/core/tileblit.c b/drivers/video/fbdev/core/tileblit.c
+index 93390312957ff..eb664dbf96f66 100644
+--- a/drivers/video/fbdev/core/tileblit.c
++++ b/drivers/video/fbdev/core/tileblit.c
+@@ -80,7 +80,7 @@ static void tile_clear_margins(struct vc_data *vc, struct fb_info *info,
+ }
+
+ static void tile_cursor(struct vc_data *vc, struct fb_info *info, int mode,
+- int softback_lines, int fg, int bg)
++ int fg, int bg)
+ {
+ struct fb_tilecursor cursor;
+ int use_sw = (vc->vc_cursor_type & 0x10);
+diff --git a/drivers/video/fbdev/vga16fb.c b/drivers/video/fbdev/vga16fb.c
+index a20eeb8308ffd..578d3541e3d6f 100644
+--- a/drivers/video/fbdev/vga16fb.c
++++ b/drivers/video/fbdev/vga16fb.c
+@@ -1121,7 +1121,7 @@ static void vga_8planes_imageblit(struct fb_info *info, const struct fb_image *i
+ char oldop = setop(0);
+ char oldsr = setsr(0);
+ char oldmask = selectmask();
+- const char *cdat = image->data;
++ const unsigned char *cdat = image->data;
+ u32 dx = image->dx;
+ char __iomem *where;
+ int y;
+diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
+index 983f4d58ae59b..6d3ed9542b6c1 100644
+--- a/fs/btrfs/disk-io.c
++++ b/fs/btrfs/disk-io.c
+@@ -3446,6 +3446,8 @@ fail_block_groups:
+ btrfs_put_block_group_cache(fs_info);
+
+ fail_tree_roots:
++ if (fs_info->data_reloc_root)
++ btrfs_drop_and_free_fs_root(fs_info, fs_info->data_reloc_root);
+ free_root_pointers(fs_info, true);
+ invalidate_inode_pages2(fs_info->btree_inode->i_mapping);
+
+diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
+index e9eedc053fc52..780b9c9a98fe3 100644
+--- a/fs/btrfs/extent-tree.c
++++ b/fs/btrfs/extent-tree.c
+@@ -400,12 +400,11 @@ int btrfs_get_extent_inline_ref_type(const struct extent_buffer *eb,
+ if (type == BTRFS_SHARED_BLOCK_REF_KEY) {
+ ASSERT(eb->fs_info);
+ /*
+- * Every shared one has parent tree
+- * block, which must be aligned to
+- * nodesize.
++ * Every shared one has parent tree block,
++ * which must be aligned to sector size.
+ */
+ if (offset &&
+- IS_ALIGNED(offset, eb->fs_info->nodesize))
++ IS_ALIGNED(offset, eb->fs_info->sectorsize))
+ return type;
+ }
+ } else if (is_data == BTRFS_REF_TYPE_DATA) {
+@@ -414,12 +413,11 @@ int btrfs_get_extent_inline_ref_type(const struct extent_buffer *eb,
+ if (type == BTRFS_SHARED_DATA_REF_KEY) {
+ ASSERT(eb->fs_info);
+ /*
+- * Every shared one has parent tree
+- * block, which must be aligned to
+- * nodesize.
++ * Every shared one has parent tree block,
++ * which must be aligned to sector size.
+ */
+ if (offset &&
+- IS_ALIGNED(offset, eb->fs_info->nodesize))
++ IS_ALIGNED(offset, eb->fs_info->sectorsize))
+ return type;
+ }
+ } else {
+@@ -429,8 +427,9 @@ int btrfs_get_extent_inline_ref_type(const struct extent_buffer *eb,
+ }
+
+ btrfs_print_leaf((struct extent_buffer *)eb);
+- btrfs_err(eb->fs_info, "eb %llu invalid extent inline ref type %d",
+- eb->start, type);
++ btrfs_err(eb->fs_info,
++ "eb %llu iref 0x%lx invalid extent inline ref type %d",
++ eb->start, (unsigned long)iref, type);
+ WARN_ON(1);
+
+ return BTRFS_REF_TYPE_INVALID;
+diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
+index 5cbebf32082ab..2dc7707d4e600 100644
+--- a/fs/btrfs/ioctl.c
++++ b/fs/btrfs/ioctl.c
+@@ -2193,7 +2193,8 @@ static noinline int search_ioctl(struct inode *inode,
+ key.offset = sk->min_offset;
+
+ while (1) {
+- ret = fault_in_pages_writeable(ubuf, *buf_size - sk_offset);
++ ret = fault_in_pages_writeable(ubuf + sk_offset,
++ *buf_size - sk_offset);
+ if (ret)
+ break;
+
+diff --git a/fs/btrfs/print-tree.c b/fs/btrfs/print-tree.c
+index 61f44e78e3c9e..80567c11ec122 100644
+--- a/fs/btrfs/print-tree.c
++++ b/fs/btrfs/print-tree.c
+@@ -95,9 +95,10 @@ static void print_extent_item(struct extent_buffer *eb, int slot, int type)
+ * offset is supposed to be a tree block which
+ * must be aligned to nodesize.
+ */
+- if (!IS_ALIGNED(offset, eb->fs_info->nodesize))
+- pr_info("\t\t\t(parent %llu is NOT ALIGNED to nodesize %llu)\n",
+- offset, (unsigned long long)eb->fs_info->nodesize);
++ if (!IS_ALIGNED(offset, eb->fs_info->sectorsize))
++ pr_info(
++ "\t\t\t(parent %llu not aligned to sectorsize %u)\n",
++ offset, eb->fs_info->sectorsize);
+ break;
+ case BTRFS_EXTENT_DATA_REF_KEY:
+ dref = (struct btrfs_extent_data_ref *)(&iref->offset);
+@@ -112,8 +113,9 @@ static void print_extent_item(struct extent_buffer *eb, int slot, int type)
+ * must be aligned to nodesize.
+ */
+ if (!IS_ALIGNED(offset, eb->fs_info->nodesize))
+- pr_info("\t\t\t(parent %llu is NOT ALIGNED to nodesize %llu)\n",
+- offset, (unsigned long long)eb->fs_info->nodesize);
++ pr_info(
++ "\t\t\t(parent %llu not aligned to sectorsize %u)\n",
++ offset, eb->fs_info->sectorsize);
+ break;
+ default:
+ pr_cont("(extent %llu has INVALID ref type %d)\n",
+diff --git a/fs/btrfs/transaction.c b/fs/btrfs/transaction.c
+index 2710f8ddb95fb..b43ebf55b93e1 100644
+--- a/fs/btrfs/transaction.c
++++ b/fs/btrfs/transaction.c
+@@ -1636,6 +1636,7 @@ static noinline int create_pending_snapshot(struct btrfs_trans_handle *trans,
+ pending->snap = btrfs_get_new_fs_root(fs_info, objectid, pending->anon_dev);
+ if (IS_ERR(pending->snap)) {
+ ret = PTR_ERR(pending->snap);
++ pending->snap = NULL;
+ btrfs_abort_transaction(trans, ret);
+ goto fail;
+ }
+diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
+index 0e50b885d3fd6..956eb0d6bc584 100644
+--- a/fs/btrfs/volumes.c
++++ b/fs/btrfs/volumes.c
+@@ -4,6 +4,7 @@
+ */
+
+ #include <linux/sched.h>
++#include <linux/sched/mm.h>
+ #include <linux/bio.h>
+ #include <linux/slab.h>
+ #include <linux/blkdev.h>
+@@ -6500,8 +6501,17 @@ static struct btrfs_device *add_missing_dev(struct btrfs_fs_devices *fs_devices,
+ u64 devid, u8 *dev_uuid)
+ {
+ struct btrfs_device *device;
++ unsigned int nofs_flag;
+
++ /*
++ * We call this under the chunk_mutex, so we want to use NOFS for this
++ * allocation, however we don't want to change btrfs_alloc_device() to
++ * always do NOFS because we use it in a lot of other GFP_KERNEL safe
++ * places.
++ */
++ nofs_flag = memalloc_nofs_save();
+ device = btrfs_alloc_device(NULL, &devid, dev_uuid);
++ memalloc_nofs_restore(nofs_flag);
+ if (IS_ERR(device))
+ return device;
+
+diff --git a/fs/debugfs/file.c b/fs/debugfs/file.c
+index ae49a55bda001..1862331f1b48d 100644
+--- a/fs/debugfs/file.c
++++ b/fs/debugfs/file.c
+@@ -177,7 +177,7 @@ static int open_proxy_open(struct inode *inode, struct file *filp)
+ goto out;
+
+ if (!fops_get(real_fops)) {
+-#ifdef MODULE
++#ifdef CONFIG_MODULES
+ if (real_fops->owner &&
+ real_fops->owner->state == MODULE_STATE_GOING)
+ goto out;
+@@ -312,7 +312,7 @@ static int full_proxy_open(struct inode *inode, struct file *filp)
+ goto out;
+
+ if (!fops_get(real_fops)) {
+-#ifdef MODULE
++#ifdef CONFIG_MODULES
+ if (real_fops->owner &&
+ real_fops->owner->state == MODULE_STATE_GOING)
+ goto out;
+diff --git a/fs/xfs/libxfs/xfs_attr_leaf.c b/fs/xfs/libxfs/xfs_attr_leaf.c
+index 4eb2ecd31b0d2..9bafe50a21240 100644
+--- a/fs/xfs/libxfs/xfs_attr_leaf.c
++++ b/fs/xfs/libxfs/xfs_attr_leaf.c
+@@ -653,8 +653,8 @@ xfs_attr_shortform_create(
+ ASSERT(ifp->if_flags & XFS_IFINLINE);
+ }
+ xfs_idata_realloc(dp, sizeof(*hdr), XFS_ATTR_FORK);
+- hdr = (xfs_attr_sf_hdr_t *)ifp->if_u1.if_data;
+- hdr->count = 0;
++ hdr = (struct xfs_attr_sf_hdr *)ifp->if_u1.if_data;
++ memset(hdr, 0, sizeof(*hdr));
+ hdr->totsize = cpu_to_be16(sizeof(*hdr));
+ xfs_trans_log_inode(args->trans, dp, XFS_ILOG_CORE | XFS_ILOG_ADATA);
+ }
+diff --git a/fs/xfs/libxfs/xfs_ialloc.c b/fs/xfs/libxfs/xfs_ialloc.c
+index 7fcf62b324b0d..8c1a7cc484b65 100644
+--- a/fs/xfs/libxfs/xfs_ialloc.c
++++ b/fs/xfs/libxfs/xfs_ialloc.c
+@@ -688,7 +688,7 @@ xfs_ialloc_ag_alloc(
+ args.minalignslop = igeo->cluster_align - 1;
+
+ /* Allow space for the inode btree to split. */
+- args.minleft = igeo->inobt_maxlevels - 1;
++ args.minleft = igeo->inobt_maxlevels;
+ if ((error = xfs_alloc_vextent(&args)))
+ return error;
+
+@@ -736,7 +736,7 @@ xfs_ialloc_ag_alloc(
+ /*
+ * Allow space for the inode btree to split.
+ */
+- args.minleft = igeo->inobt_maxlevels - 1;
++ args.minleft = igeo->inobt_maxlevels;
+ if ((error = xfs_alloc_vextent(&args)))
+ return error;
+ }
+diff --git a/fs/xfs/libxfs/xfs_trans_space.h b/fs/xfs/libxfs/xfs_trans_space.h
+index c6df01a2a1585..7ad3659c5d2a9 100644
+--- a/fs/xfs/libxfs/xfs_trans_space.h
++++ b/fs/xfs/libxfs/xfs_trans_space.h
+@@ -58,7 +58,7 @@
+ #define XFS_IALLOC_SPACE_RES(mp) \
+ (M_IGEO(mp)->ialloc_blks + \
+ ((xfs_sb_version_hasfinobt(&mp->m_sb) ? 2 : 1) * \
+- (M_IGEO(mp)->inobt_maxlevels - 1)))
++ M_IGEO(mp)->inobt_maxlevels))
+
+ /*
+ * Space reservation values for various transactions.
+diff --git a/include/linux/efi_embedded_fw.h b/include/linux/efi_embedded_fw.h
+index 57eac5241303a..a97a12bb2c9ef 100644
+--- a/include/linux/efi_embedded_fw.h
++++ b/include/linux/efi_embedded_fw.h
+@@ -8,8 +8,8 @@
+ #define EFI_EMBEDDED_FW_PREFIX_LEN 8
+
+ /*
+- * This struct and efi_embedded_fw_list are private to the efi-embedded fw
+- * implementation they are in this header for use by lib/test_firmware.c only!
++ * This struct is private to the efi-embedded fw implementation.
++ * They are in this header for use by lib/test_firmware.c only!
+ */
+ struct efi_embedded_fw {
+ struct list_head list;
+@@ -18,8 +18,6 @@ struct efi_embedded_fw {
+ size_t length;
+ };
+
+-extern struct list_head efi_embedded_fw_list;
+-
+ /**
+ * struct efi_embedded_fw_desc - This struct is used by the EFI embedded-fw
+ * code to search for embedded firmwares.
+diff --git a/include/linux/netfilter/nf_conntrack_sctp.h b/include/linux/netfilter/nf_conntrack_sctp.h
+index 9a33f171aa822..625f491b95de8 100644
+--- a/include/linux/netfilter/nf_conntrack_sctp.h
++++ b/include/linux/netfilter/nf_conntrack_sctp.h
+@@ -9,6 +9,8 @@ struct ip_ct_sctp {
+ enum sctp_conntrack state;
+
+ __be32 vtag[IP_CT_DIR_MAX];
++ u8 last_dir;
++ u8 flags;
+ };
+
+ #endif /* _NF_CONNTRACK_SCTP_H */
+diff --git a/include/soc/nps/common.h b/include/soc/nps/common.h
+index 9b1d43d671a3f..8c18dc6d3fde5 100644
+--- a/include/soc/nps/common.h
++++ b/include/soc/nps/common.h
+@@ -45,6 +45,12 @@
+ #define CTOP_INST_MOV2B_FLIP_R3_B1_B2_INST 0x5B60
+ #define CTOP_INST_MOV2B_FLIP_R3_B1_B2_LIMM 0x00010422
+
++#ifndef AUX_IENABLE
++#define AUX_IENABLE 0x40c
++#endif
++
++#define CTOP_AUX_IACK (0xFFFFF800 + 0x088)
++
+ #ifndef __ASSEMBLY__
+
+ /* In order to increase compilation test coverage */
+diff --git a/kernel/gcov/gcc_4_7.c b/kernel/gcov/gcc_4_7.c
+index 908fdf5098c32..53c67c87f141b 100644
+--- a/kernel/gcov/gcc_4_7.c
++++ b/kernel/gcov/gcc_4_7.c
+@@ -19,7 +19,9 @@
+ #include <linux/vmalloc.h>
+ #include "gcov.h"
+
+-#if (__GNUC__ >= 7)
++#if (__GNUC__ >= 10)
++#define GCOV_COUNTERS 8
++#elif (__GNUC__ >= 7)
+ #define GCOV_COUNTERS 9
+ #elif (__GNUC__ > 5) || (__GNUC__ == 5 && __GNUC_MINOR__ >= 1)
+ #define GCOV_COUNTERS 10
+diff --git a/kernel/padata.c b/kernel/padata.c
+index 4373f7adaa40a..3bc90fec0904c 100644
+--- a/kernel/padata.c
++++ b/kernel/padata.c
+@@ -215,12 +215,13 @@ int padata_do_parallel(struct padata_shell *ps,
+ padata->pd = pd;
+ padata->cb_cpu = *cb_cpu;
+
+- rcu_read_unlock_bh();
+-
+ spin_lock(&padata_works_lock);
+ padata->seq_nr = ++pd->seq_nr;
+ pw = padata_work_alloc();
+ spin_unlock(&padata_works_lock);
++
++ rcu_read_unlock_bh();
++
+ if (pw) {
+ padata_work_init(pw, padata_parallel_worker, padata, 0);
+ queue_work(pinst->parallel_wq, &pw->pw_work);
+diff --git a/kernel/seccomp.c b/kernel/seccomp.c
+index c461ba9925136..54cf84bac3c9b 100644
+--- a/kernel/seccomp.c
++++ b/kernel/seccomp.c
+@@ -997,13 +997,12 @@ out:
+ }
+
+ #ifdef CONFIG_SECCOMP_FILTER
+-static int seccomp_notify_release(struct inode *inode, struct file *file)
++static void seccomp_notify_detach(struct seccomp_filter *filter)
+ {
+- struct seccomp_filter *filter = file->private_data;
+ struct seccomp_knotif *knotif;
+
+ if (!filter)
+- return 0;
++ return;
+
+ mutex_lock(&filter->notify_lock);
+
+@@ -1025,6 +1024,13 @@ static int seccomp_notify_release(struct inode *inode, struct file *file)
+ kfree(filter->notif);
+ filter->notif = NULL;
+ mutex_unlock(&filter->notify_lock);
++}
++
++static int seccomp_notify_release(struct inode *inode, struct file *file)
++{
++ struct seccomp_filter *filter = file->private_data;
++
++ seccomp_notify_detach(filter);
+ __put_seccomp_filter(filter);
+ return 0;
+ }
+@@ -1358,6 +1364,7 @@ out_put_fd:
+ listener_f->private_data = NULL;
+ fput(listener_f);
+ put_unused_fd(listener);
++ seccomp_notify_detach(prepared);
+ } else {
+ fd_install(listener, listener_f);
+ ret = listener;
+diff --git a/lib/kobject.c b/lib/kobject.c
+index 3afb939f2a1cc..9dce68c378e61 100644
+--- a/lib/kobject.c
++++ b/lib/kobject.c
+@@ -637,8 +637,12 @@ static void __kobject_del(struct kobject *kobj)
+ */
+ void kobject_del(struct kobject *kobj)
+ {
+- struct kobject *parent = kobj->parent;
++ struct kobject *parent;
++
++ if (!kobj)
++ return;
+
++ parent = kobj->parent;
+ __kobject_del(kobj);
+ kobject_put(parent);
+ }
+diff --git a/lib/test_firmware.c b/lib/test_firmware.c
+index 9fee2b93a8d18..06c9550577564 100644
+--- a/lib/test_firmware.c
++++ b/lib/test_firmware.c
+@@ -26,6 +26,8 @@
+ #include <linux/vmalloc.h>
+ #include <linux/efi_embedded_fw.h>
+
++MODULE_IMPORT_NS(TEST_FIRMWARE);
++
+ #define TEST_FIRMWARE_NAME "test-firmware.bin"
+ #define TEST_FIRMWARE_NUM_REQS 4
+ #define TEST_FIRMWARE_BUF_SIZE SZ_1K
+@@ -489,6 +491,9 @@ out:
+ static DEVICE_ATTR_WO(trigger_request);
+
+ #ifdef CONFIG_EFI_EMBEDDED_FIRMWARE
++extern struct list_head efi_embedded_fw_list;
++extern bool efi_embedded_fw_checked;
++
+ static ssize_t trigger_request_platform_store(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t count)
+@@ -501,6 +506,7 @@ static ssize_t trigger_request_platform_store(struct device *dev,
+ };
+ struct efi_embedded_fw efi_embedded_fw;
+ const struct firmware *firmware = NULL;
++ bool saved_efi_embedded_fw_checked;
+ char *name;
+ int rc;
+
+@@ -513,6 +519,8 @@ static ssize_t trigger_request_platform_store(struct device *dev,
+ efi_embedded_fw.data = (void *)test_data;
+ efi_embedded_fw.length = sizeof(test_data);
+ list_add(&efi_embedded_fw.list, &efi_embedded_fw_list);
++ saved_efi_embedded_fw_checked = efi_embedded_fw_checked;
++ efi_embedded_fw_checked = true;
+
+ pr_info("loading '%s'\n", name);
+ rc = firmware_request_platform(&firmware, name, dev);
+@@ -530,6 +538,7 @@ static ssize_t trigger_request_platform_store(struct device *dev,
+ rc = count;
+
+ out:
++ efi_embedded_fw_checked = saved_efi_embedded_fw_checked;
+ release_firmware(firmware);
+ list_del(&efi_embedded_fw.list);
+ kfree(name);
+diff --git a/net/mac80211/sta_info.h b/net/mac80211/sta_info.h
+index 49728047dfad6..f66fcce8e6a45 100644
+--- a/net/mac80211/sta_info.h
++++ b/net/mac80211/sta_info.h
+@@ -522,7 +522,7 @@ struct ieee80211_sta_rx_stats {
+ * @status_stats.retry_failed: # of frames that failed after retry
+ * @status_stats.retry_count: # of retries attempted
+ * @status_stats.lost_packets: # of lost packets
+- * @status_stats.last_tdls_pkt_time: timestamp of last TDLS packet
++ * @status_stats.last_pkt_time: timestamp of last ACKed packet
+ * @status_stats.msdu_retries: # of MSDU retries
+ * @status_stats.msdu_failed: # of failed MSDUs
+ * @status_stats.last_ack: last ack timestamp (jiffies)
+@@ -595,7 +595,7 @@ struct sta_info {
+ unsigned long filtered;
+ unsigned long retry_failed, retry_count;
+ unsigned int lost_packets;
+- unsigned long last_tdls_pkt_time;
++ unsigned long last_pkt_time;
+ u64 msdu_retries[IEEE80211_NUM_TIDS + 1];
+ u64 msdu_failed[IEEE80211_NUM_TIDS + 1];
+ unsigned long last_ack;
+diff --git a/net/mac80211/status.c b/net/mac80211/status.c
+index cbc40b358ba26..819c4221c284e 100644
+--- a/net/mac80211/status.c
++++ b/net/mac80211/status.c
+@@ -755,12 +755,16 @@ static void ieee80211_report_used_skb(struct ieee80211_local *local,
+ * - current throughput (higher value for higher tpt)?
+ */
+ #define STA_LOST_PKT_THRESHOLD 50
++#define STA_LOST_PKT_TIME HZ /* 1 sec since last ACK */
+ #define STA_LOST_TDLS_PKT_THRESHOLD 10
+ #define STA_LOST_TDLS_PKT_TIME (10*HZ) /* 10secs since last ACK */
+
+ static void ieee80211_lost_packet(struct sta_info *sta,
+ struct ieee80211_tx_info *info)
+ {
++ unsigned long pkt_time = STA_LOST_PKT_TIME;
++ unsigned int pkt_thr = STA_LOST_PKT_THRESHOLD;
++
+ /* If driver relies on its own algorithm for station kickout, skip
+ * mac80211 packet loss mechanism.
+ */
+@@ -773,21 +777,20 @@ static void ieee80211_lost_packet(struct sta_info *sta,
+ return;
+
+ sta->status_stats.lost_packets++;
+- if (!sta->sta.tdls &&
+- sta->status_stats.lost_packets < STA_LOST_PKT_THRESHOLD)
+- return;
++ if (sta->sta.tdls) {
++ pkt_time = STA_LOST_TDLS_PKT_TIME;
++ pkt_thr = STA_LOST_PKT_THRESHOLD;
++ }
+
+ /*
+ * If we're in TDLS mode, make sure that all STA_LOST_TDLS_PKT_THRESHOLD
+ * of the last packets were lost, and that no ACK was received in the
+ * last STA_LOST_TDLS_PKT_TIME ms, before triggering the CQM packet-loss
+ * mechanism.
++ * For non-TDLS, use STA_LOST_PKT_THRESHOLD and STA_LOST_PKT_TIME
+ */
+- if (sta->sta.tdls &&
+- (sta->status_stats.lost_packets < STA_LOST_TDLS_PKT_THRESHOLD ||
+- time_before(jiffies,
+- sta->status_stats.last_tdls_pkt_time +
+- STA_LOST_TDLS_PKT_TIME)))
++ if (sta->status_stats.lost_packets < pkt_thr ||
++ !time_after(jiffies, sta->status_stats.last_pkt_time + pkt_time))
+ return;
+
+ cfg80211_cqm_pktloss_notify(sta->sdata->dev, sta->sta.addr,
+@@ -1035,9 +1038,7 @@ static void __ieee80211_tx_status(struct ieee80211_hw *hw,
+ sta->status_stats.lost_packets = 0;
+
+ /* Track when last TDLS packet was ACKed */
+- if (test_sta_flag(sta, WLAN_STA_TDLS_PEER_AUTH))
+- sta->status_stats.last_tdls_pkt_time =
+- jiffies;
++ sta->status_stats.last_pkt_time = jiffies;
+ } else if (noack_success) {
+ /* nothing to do here, do not account as lost */
+ } else {
+@@ -1170,9 +1171,8 @@ void ieee80211_tx_status_ext(struct ieee80211_hw *hw,
+ if (sta->status_stats.lost_packets)
+ sta->status_stats.lost_packets = 0;
+
+- /* Track when last TDLS packet was ACKed */
+- if (test_sta_flag(sta, WLAN_STA_TDLS_PEER_AUTH))
+- sta->status_stats.last_tdls_pkt_time = jiffies;
++ /* Track when last packet was ACKed */
++ sta->status_stats.last_pkt_time = jiffies;
+ } else if (test_sta_flag(sta, WLAN_STA_PS_STA)) {
+ return;
+ } else if (noack_success) {
+@@ -1261,8 +1261,7 @@ void ieee80211_tx_status_8023(struct ieee80211_hw *hw,
+ if (sta->status_stats.lost_packets)
+ sta->status_stats.lost_packets = 0;
+
+- if (test_sta_flag(sta, WLAN_STA_TDLS_PEER_AUTH))
+- sta->status_stats.last_tdls_pkt_time = jiffies;
++ sta->status_stats.last_pkt_time = jiffies;
+ } else {
+ ieee80211_lost_packet(sta, info);
+ }
+diff --git a/net/netfilter/nf_conntrack_proto_sctp.c b/net/netfilter/nf_conntrack_proto_sctp.c
+index 4f897b14b6069..810cca24b3990 100644
+--- a/net/netfilter/nf_conntrack_proto_sctp.c
++++ b/net/netfilter/nf_conntrack_proto_sctp.c
+@@ -62,6 +62,8 @@ static const unsigned int sctp_timeouts[SCTP_CONNTRACK_MAX] = {
+ [SCTP_CONNTRACK_HEARTBEAT_ACKED] = 210 SECS,
+ };
+
++#define SCTP_FLAG_HEARTBEAT_VTAG_FAILED 1
++
+ #define sNO SCTP_CONNTRACK_NONE
+ #define sCL SCTP_CONNTRACK_CLOSED
+ #define sCW SCTP_CONNTRACK_COOKIE_WAIT
+@@ -369,6 +371,7 @@ int nf_conntrack_sctp_packet(struct nf_conn *ct,
+ u_int32_t offset, count;
+ unsigned int *timeouts;
+ unsigned long map[256 / sizeof(unsigned long)] = { 0 };
++ bool ignore = false;
+
+ if (sctp_error(skb, dataoff, state))
+ return -NF_ACCEPT;
+@@ -427,15 +430,39 @@ int nf_conntrack_sctp_packet(struct nf_conn *ct,
+ /* Sec 8.5.1 (D) */
+ if (sh->vtag != ct->proto.sctp.vtag[dir])
+ goto out_unlock;
+- } else if (sch->type == SCTP_CID_HEARTBEAT ||
+- sch->type == SCTP_CID_HEARTBEAT_ACK) {
++ } else if (sch->type == SCTP_CID_HEARTBEAT) {
++ if (ct->proto.sctp.vtag[dir] == 0) {
++ pr_debug("Setting %d vtag %x for dir %d\n", sch->type, sh->vtag, dir);
++ ct->proto.sctp.vtag[dir] = sh->vtag;
++ } else if (sh->vtag != ct->proto.sctp.vtag[dir]) {
++ if (test_bit(SCTP_CID_DATA, map) || ignore)
++ goto out_unlock;
++
++ ct->proto.sctp.flags |= SCTP_FLAG_HEARTBEAT_VTAG_FAILED;
++ ct->proto.sctp.last_dir = dir;
++ ignore = true;
++ continue;
++ } else if (ct->proto.sctp.flags & SCTP_FLAG_HEARTBEAT_VTAG_FAILED) {
++ ct->proto.sctp.flags &= ~SCTP_FLAG_HEARTBEAT_VTAG_FAILED;
++ }
++ } else if (sch->type == SCTP_CID_HEARTBEAT_ACK) {
+ if (ct->proto.sctp.vtag[dir] == 0) {
+ pr_debug("Setting vtag %x for dir %d\n",
+ sh->vtag, dir);
+ ct->proto.sctp.vtag[dir] = sh->vtag;
+ } else if (sh->vtag != ct->proto.sctp.vtag[dir]) {
+- pr_debug("Verification tag check failed\n");
+- goto out_unlock;
++ if (test_bit(SCTP_CID_DATA, map) || ignore)
++ goto out_unlock;
++
++ if ((ct->proto.sctp.flags & SCTP_FLAG_HEARTBEAT_VTAG_FAILED) == 0 ||
++ ct->proto.sctp.last_dir == dir)
++ goto out_unlock;
++
++ ct->proto.sctp.flags &= ~SCTP_FLAG_HEARTBEAT_VTAG_FAILED;
++ ct->proto.sctp.vtag[dir] = sh->vtag;
++ ct->proto.sctp.vtag[!dir] = 0;
++ } else if (ct->proto.sctp.flags & SCTP_FLAG_HEARTBEAT_VTAG_FAILED) {
++ ct->proto.sctp.flags &= ~SCTP_FLAG_HEARTBEAT_VTAG_FAILED;
+ }
+ }
+
+@@ -470,6 +497,10 @@ int nf_conntrack_sctp_packet(struct nf_conn *ct,
+ }
+ spin_unlock_bh(&ct->lock);
+
++ /* allow but do not refresh timeout */
++ if (ignore)
++ return NF_ACCEPT;
++
+ timeouts = nf_ct_timeout_lookup(ct);
+ if (!timeouts)
+ timeouts = nf_sctp_pernet(nf_ct_net(ct))->timeouts;
+diff --git a/net/netfilter/nft_set_rbtree.c b/net/netfilter/nft_set_rbtree.c
+index b85ce6f0c0a6f..f317ad80cd6bc 100644
+--- a/net/netfilter/nft_set_rbtree.c
++++ b/net/netfilter/nft_set_rbtree.c
+@@ -218,11 +218,11 @@ static int __nft_rbtree_insert(const struct net *net, const struct nft_set *set,
+ struct nft_rbtree_elem *new,
+ struct nft_set_ext **ext)
+ {
++ bool overlap = false, dup_end_left = false, dup_end_right = false;
+ struct nft_rbtree *priv = nft_set_priv(set);
+ u8 genmask = nft_genmask_next(net);
+ struct nft_rbtree_elem *rbe;
+ struct rb_node *parent, **p;
+- bool overlap = false;
+ int d;
+
+ /* Detect overlaps as we descend the tree. Set the flag in these cases:
+@@ -262,6 +262,20 @@ static int __nft_rbtree_insert(const struct net *net, const struct nft_set *set,
+ *
+ * which always happen as last step and imply that no further
+ * overlapping is possible.
++ *
++ * Another special case comes from the fact that start elements matching
++ * an already existing start element are allowed: insertion is not
++ * performed but we return -EEXIST in that case, and the error will be
++ * cleared by the caller if NLM_F_EXCL is not present in the request.
++ * This way, request for insertion of an exact overlap isn't reported as
++ * error to userspace if not desired.
++ *
++ * However, if the existing start matches a pre-existing start, but the
++ * end element doesn't match the corresponding pre-existing end element,
++ * we need to report a partial overlap. This is a local condition that
++ * can be noticed without need for a tracking flag, by checking for a
++ * local duplicated end for a corresponding start, from left and right,
++ * separately.
+ */
+
+ parent = NULL;
+@@ -281,19 +295,35 @@ static int __nft_rbtree_insert(const struct net *net, const struct nft_set *set,
+ !nft_set_elem_expired(&rbe->ext) && !*p)
+ overlap = false;
+ } else {
++ if (dup_end_left && !*p)
++ return -ENOTEMPTY;
++
+ overlap = nft_rbtree_interval_end(rbe) &&
+ nft_set_elem_active(&rbe->ext,
+ genmask) &&
+ !nft_set_elem_expired(&rbe->ext);
++
++ if (overlap) {
++ dup_end_right = true;
++ continue;
++ }
+ }
+ } else if (d > 0) {
+ p = &parent->rb_right;
+
+ if (nft_rbtree_interval_end(new)) {
++ if (dup_end_right && !*p)
++ return -ENOTEMPTY;
++
+ overlap = nft_rbtree_interval_end(rbe) &&
+ nft_set_elem_active(&rbe->ext,
+ genmask) &&
+ !nft_set_elem_expired(&rbe->ext);
++
++ if (overlap) {
++ dup_end_left = true;
++ continue;
++ }
+ } else if (nft_set_elem_active(&rbe->ext, genmask) &&
+ !nft_set_elem_expired(&rbe->ext)) {
+ overlap = nft_rbtree_interval_end(rbe);
+@@ -321,6 +351,8 @@ static int __nft_rbtree_insert(const struct net *net, const struct nft_set *set,
+ p = &parent->rb_left;
+ }
+ }
++
++ dup_end_left = dup_end_right = false;
+ }
+
+ if (overlap)
+diff --git a/net/wireless/chan.c b/net/wireless/chan.c
+index cddf92c5d09ef..7a7cc4ade2b36 100644
+--- a/net/wireless/chan.c
++++ b/net/wireless/chan.c
+@@ -10,6 +10,7 @@
+ */
+
+ #include <linux/export.h>
++#include <linux/bitfield.h>
+ #include <net/cfg80211.h>
+ #include "core.h"
+ #include "rdev-ops.h"
+@@ -892,6 +893,7 @@ bool cfg80211_chandef_usable(struct wiphy *wiphy,
+ struct ieee80211_sta_vht_cap *vht_cap;
+ struct ieee80211_edmg *edmg_cap;
+ u32 width, control_freq, cap;
++ bool support_80_80 = false;
+
+ if (WARN_ON(!cfg80211_chandef_valid(chandef)))
+ return false;
+@@ -944,9 +946,13 @@ bool cfg80211_chandef_usable(struct wiphy *wiphy,
+ return false;
+ break;
+ case NL80211_CHAN_WIDTH_80P80:
+- cap = vht_cap->cap & IEEE80211_VHT_CAP_SUPP_CHAN_WIDTH_MASK;
+- if (chandef->chan->band != NL80211_BAND_6GHZ &&
+- cap != IEEE80211_VHT_CAP_SUPP_CHAN_WIDTH_160_80PLUS80MHZ)
++ cap = vht_cap->cap;
++ support_80_80 =
++ (cap & IEEE80211_VHT_CAP_SUPP_CHAN_WIDTH_160_80PLUS80MHZ) ||
++ (cap & IEEE80211_VHT_CAP_SUPP_CHAN_WIDTH_160MHZ &&
++ cap & IEEE80211_VHT_CAP_EXT_NSS_BW_MASK) ||
++ u32_get_bits(cap, IEEE80211_VHT_CAP_EXT_NSS_BW_MASK) > 1;
++ if (chandef->chan->band != NL80211_BAND_6GHZ && !support_80_80)
+ return false;
+ /* fall through */
+ case NL80211_CHAN_WIDTH_80:
+@@ -966,7 +972,8 @@ bool cfg80211_chandef_usable(struct wiphy *wiphy,
+ return false;
+ cap = vht_cap->cap & IEEE80211_VHT_CAP_SUPP_CHAN_WIDTH_MASK;
+ if (cap != IEEE80211_VHT_CAP_SUPP_CHAN_WIDTH_160MHZ &&
+- cap != IEEE80211_VHT_CAP_SUPP_CHAN_WIDTH_160_80PLUS80MHZ)
++ cap != IEEE80211_VHT_CAP_SUPP_CHAN_WIDTH_160_80PLUS80MHZ &&
++ !(vht_cap->cap & IEEE80211_VHT_CAP_EXT_NSS_BW_MASK))
+ return false;
+ break;
+ default:
+diff --git a/net/wireless/util.c b/net/wireless/util.c
+index 4d3b76f94f55e..a72d2ad6ade8b 100644
+--- a/net/wireless/util.c
++++ b/net/wireless/util.c
+@@ -121,11 +121,13 @@ int ieee80211_freq_khz_to_channel(u32 freq)
+ return (freq - 2407) / 5;
+ else if (freq >= 4910 && freq <= 4980)
+ return (freq - 4000) / 5;
+- else if (freq < 5945)
++ else if (freq < 5925)
+ return (freq - 5000) / 5;
++ else if (freq == 5935)
++ return 2;
+ else if (freq <= 45000) /* DMG band lower limit */
+- /* see 802.11ax D4.1 27.3.22.2 */
+- return (freq - 5940) / 5;
++ /* see 802.11ax D6.1 27.3.22.2 */
++ return (freq - 5950) / 5;
+ else if (freq >= 58320 && freq <= 70200)
+ return (freq - 56160) / 2160;
+ else
+diff --git a/sound/hda/hdac_device.c b/sound/hda/hdac_device.c
+index 333220f0f8afc..3e9e9ac804f62 100644
+--- a/sound/hda/hdac_device.c
++++ b/sound/hda/hdac_device.c
+@@ -127,6 +127,8 @@ EXPORT_SYMBOL_GPL(snd_hdac_device_init);
+ void snd_hdac_device_exit(struct hdac_device *codec)
+ {
+ pm_runtime_put_noidle(&codec->dev);
++ /* keep balance of runtime PM child_count in parent device */
++ pm_runtime_set_suspended(&codec->dev);
+ snd_hdac_bus_remove_device(codec->bus, codec);
+ kfree(codec->vendor_name);
+ kfree(codec->chip_name);
+diff --git a/sound/hda/intel-dsp-config.c b/sound/hda/intel-dsp-config.c
+index 99aec73491676..1c5114dedda92 100644
+--- a/sound/hda/intel-dsp-config.c
++++ b/sound/hda/intel-dsp-config.c
+@@ -54,7 +54,7 @@ static const struct config_entry config_table[] = {
+ #endif
+ /*
+ * Apollolake (Broxton-P)
+- * the legacy HDaudio driver is used except on Up Squared (SOF) and
++ * the legacy HDAudio driver is used except on Up Squared (SOF) and
+ * Chromebooks (SST)
+ */
+ #if IS_ENABLED(CONFIG_SND_SOC_SOF_APOLLOLAKE)
+@@ -89,7 +89,7 @@ static const struct config_entry config_table[] = {
+ },
+ #endif
+ /*
+- * Skylake and Kabylake use legacy HDaudio driver except for Google
++ * Skylake and Kabylake use legacy HDAudio driver except for Google
+ * Chromebooks (SST)
+ */
+
+@@ -135,7 +135,7 @@ static const struct config_entry config_table[] = {
+ #endif
+
+ /*
+- * Geminilake uses legacy HDaudio driver except for Google
++ * Geminilake uses legacy HDAudio driver except for Google
+ * Chromebooks
+ */
+ /* Geminilake */
+@@ -157,7 +157,7 @@ static const struct config_entry config_table[] = {
+
+ /*
+ * CoffeeLake, CannonLake, CometLake, IceLake, TigerLake use legacy
+- * HDaudio driver except for Google Chromebooks and when DMICs are
++ * HDAudio driver except for Google Chromebooks and when DMICs are
+ * present. Two cases are required since Coreboot does not expose NHLT
+ * tables.
+ *
+@@ -391,7 +391,7 @@ int snd_intel_dsp_driver_probe(struct pci_dev *pci)
+ if (pci->class == 0x040300)
+ return SND_INTEL_DSP_DRIVER_LEGACY;
+ if (pci->class != 0x040100 && pci->class != 0x040380) {
+- dev_err(&pci->dev, "Unknown PCI class/subclass/prog-if information (0x%06x) found, selecting HDA legacy driver\n", pci->class);
++ dev_err(&pci->dev, "Unknown PCI class/subclass/prog-if information (0x%06x) found, selecting HDAudio legacy driver\n", pci->class);
+ return SND_INTEL_DSP_DRIVER_LEGACY;
+ }
+
+diff --git a/sound/pci/hda/hda_tegra.c b/sound/pci/hda/hda_tegra.c
+index 0cc5fad1af8a9..ae40ca3f29837 100644
+--- a/sound/pci/hda/hda_tegra.c
++++ b/sound/pci/hda/hda_tegra.c
+@@ -179,6 +179,10 @@ static int __maybe_unused hda_tegra_runtime_suspend(struct device *dev)
+ struct hda_tegra *hda = container_of(chip, struct hda_tegra, chip);
+
+ if (chip && chip->running) {
++ /* enable controller wake up event */
++ azx_writew(chip, WAKEEN, azx_readw(chip, WAKEEN) |
++ STATESTS_INT_MASK);
++
+ azx_stop_chip(chip);
+ azx_enter_link_reset(chip);
+ }
+@@ -200,6 +204,9 @@ static int __maybe_unused hda_tegra_runtime_resume(struct device *dev)
+ if (chip && chip->running) {
+ hda_tegra_init(hda);
+ azx_init_chip(chip, 1);
++ /* disable controller wake up event*/
++ azx_writew(chip, WAKEEN, azx_readw(chip, WAKEEN) &
++ ~STATESTS_INT_MASK);
+ }
+
+ return 0;
+diff --git a/sound/pci/hda/patch_hdmi.c b/sound/pci/hda/patch_hdmi.c
+index fc22bdc30da3e..419f012b9853c 100644
+--- a/sound/pci/hda/patch_hdmi.c
++++ b/sound/pci/hda/patch_hdmi.c
+@@ -3671,6 +3671,7 @@ static int tegra_hdmi_build_pcms(struct hda_codec *codec)
+
+ static int patch_tegra_hdmi(struct hda_codec *codec)
+ {
++ struct hdmi_spec *spec;
+ int err;
+
+ err = patch_generic_hdmi(codec);
+@@ -3678,6 +3679,10 @@ static int patch_tegra_hdmi(struct hda_codec *codec)
+ return err;
+
+ codec->patch_ops.build_pcms = tegra_hdmi_build_pcms;
++ spec = codec->spec;
++ spec->chmap.ops.chmap_cea_alloc_validate_get_type =
++ nvhdmi_chmap_cea_alloc_validate_get_type;
++ spec->chmap.ops.chmap_validate = nvhdmi_chmap_validate;
+
+ return 0;
+ }
+@@ -4200,6 +4205,7 @@ HDA_CODEC_ENTRY(0x8086280c, "Cannonlake HDMI", patch_i915_glk_hdmi),
+ HDA_CODEC_ENTRY(0x8086280d, "Geminilake HDMI", patch_i915_glk_hdmi),
+ HDA_CODEC_ENTRY(0x8086280f, "Icelake HDMI", patch_i915_icl_hdmi),
+ HDA_CODEC_ENTRY(0x80862812, "Tigerlake HDMI", patch_i915_tgl_hdmi),
++HDA_CODEC_ENTRY(0x80862816, "Rocketlake HDMI", patch_i915_tgl_hdmi),
+ HDA_CODEC_ENTRY(0x8086281a, "Jasperlake HDMI", patch_i915_icl_hdmi),
+ HDA_CODEC_ENTRY(0x8086281b, "Elkhartlake HDMI", patch_i915_icl_hdmi),
+ HDA_CODEC_ENTRY(0x80862880, "CedarTrail HDMI", patch_generic_hdmi),
+diff --git a/sound/x86/Kconfig b/sound/x86/Kconfig
+index 77777192f6508..4ffcc5e623c22 100644
+--- a/sound/x86/Kconfig
++++ b/sound/x86/Kconfig
+@@ -9,7 +9,7 @@ menuconfig SND_X86
+ if SND_X86
+
+ config HDMI_LPE_AUDIO
+- tristate "HDMI audio without HDaudio on Intel Atom platforms"
++ tristate "HDMI audio without HDAudio on Intel Atom platforms"
+ depends on DRM_I915
+ select SND_PCM
+ help
+diff --git a/tools/testing/selftests/timers/Makefile b/tools/testing/selftests/timers/Makefile
+index 7656c7ce79d90..0e73a16874c4c 100644
+--- a/tools/testing/selftests/timers/Makefile
++++ b/tools/testing/selftests/timers/Makefile
+@@ -13,6 +13,7 @@ DESTRUCTIVE_TESTS = alarmtimer-suspend valid-adjtimex adjtick change_skew \
+
+ TEST_GEN_PROGS_EXTENDED = $(DESTRUCTIVE_TESTS)
+
++TEST_FILES := settings
+
+ include ../lib.mk
+
+diff --git a/tools/testing/selftests/timers/settings b/tools/testing/selftests/timers/settings
+new file mode 100644
+index 0000000000000..e7b9417537fbc
+--- /dev/null
++++ b/tools/testing/selftests/timers/settings
+@@ -0,0 +1 @@
++timeout=0
+diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
+index 9e925675a8868..49a877918e2f1 100644
+--- a/virt/kvm/kvm_main.c
++++ b/virt/kvm/kvm_main.c
+@@ -4269,7 +4269,7 @@ int kvm_io_bus_register_dev(struct kvm *kvm, enum kvm_bus bus_idx, gpa_t addr,
+ void kvm_io_bus_unregister_dev(struct kvm *kvm, enum kvm_bus bus_idx,
+ struct kvm_io_device *dev)
+ {
+- int i;
++ int i, j;
+ struct kvm_io_bus *new_bus, *bus;
+
+ bus = kvm_get_bus(kvm, bus_idx);
+@@ -4286,17 +4286,20 @@ void kvm_io_bus_unregister_dev(struct kvm *kvm, enum kvm_bus bus_idx,
+
+ new_bus = kmalloc(struct_size(bus, range, bus->dev_count - 1),
+ GFP_KERNEL_ACCOUNT);
+- if (!new_bus) {
++ if (new_bus) {
++ memcpy(new_bus, bus, sizeof(*bus) + i * sizeof(struct kvm_io_range));
++ new_bus->dev_count--;
++ memcpy(new_bus->range + i, bus->range + i + 1,
++ (new_bus->dev_count - i) * sizeof(struct kvm_io_range));
++ } else {
+ pr_err("kvm: failed to shrink bus, removing it completely\n");
+- goto broken;
++ for (j = 0; j < bus->dev_count; j++) {
++ if (j == i)
++ continue;
++ kvm_iodevice_destructor(bus->range[j].dev);
++ }
+ }
+
+- memcpy(new_bus, bus, sizeof(*bus) + i * sizeof(struct kvm_io_range));
+- new_bus->dev_count--;
+- memcpy(new_bus->range + i, bus->range + i + 1,
+- (new_bus->dev_count - i) * sizeof(struct kvm_io_range));
+-
+-broken:
+ rcu_assign_pointer(kvm->buses[bus_idx], new_bus);
+ synchronize_srcu_expedited(&kvm->srcu);
+ kfree(bus);
* [gentoo-commits] proj/linux-patches:5.8 commit in: /
@ 2020-09-23 12:14 Mike Pagano
0 siblings, 0 replies; 30+ messages in thread
From: Mike Pagano @ 2020-09-23 12:14 UTC (permalink / raw
To: gentoo-commits
commit: d90887db45c919719e541d2b025a26f8f23958c3
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Sep 23 12:14:03 2020 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Sep 23 12:14:03 2020 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=d90887db
Linux patch 5.8.11
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1010_linux-5.8.11.patch | 3917 +++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 3921 insertions(+)
diff --git a/0000_README b/0000_README
index f2e8a67..e5b0bab 100644
--- a/0000_README
+++ b/0000_README
@@ -83,6 +83,10 @@ Patch: 1009_linux-5.8.10.patch
From: http://www.kernel.org
Desc: Linux 5.8.10
+Patch: 1010_linux-5.8.11.patch
+From: http://www.kernel.org
+Desc: Linux 5.8.11
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1010_linux-5.8.11.patch b/1010_linux-5.8.11.patch
new file mode 100644
index 0000000..2bc9a4f
--- /dev/null
+++ b/1010_linux-5.8.11.patch
@@ -0,0 +1,3917 @@
+diff --git a/Documentation/devicetree/bindings/pci/intel-gw-pcie.yaml b/Documentation/devicetree/bindings/pci/intel-gw-pcie.yaml
+index 64b2c64ca8065..a1e2be737eec9 100644
+--- a/Documentation/devicetree/bindings/pci/intel-gw-pcie.yaml
++++ b/Documentation/devicetree/bindings/pci/intel-gw-pcie.yaml
+@@ -9,6 +9,14 @@ title: PCIe RC controller on Intel Gateway SoCs
+ maintainers:
+ - Dilip Kota <eswara.kota@linux.intel.com>
+
++select:
++ properties:
++ compatible:
++ contains:
++ const: intel,lgm-pcie
++ required:
++ - compatible
++
+ properties:
+ compatible:
+ items:
+diff --git a/Documentation/devicetree/bindings/spi/brcm,spi-bcm-qspi.txt b/Documentation/devicetree/bindings/spi/brcm,spi-bcm-qspi.txt
+index f5e518d099f2c..62d4ed2d7fd79 100644
+--- a/Documentation/devicetree/bindings/spi/brcm,spi-bcm-qspi.txt
++++ b/Documentation/devicetree/bindings/spi/brcm,spi-bcm-qspi.txt
+@@ -23,8 +23,8 @@ Required properties:
+
+ - compatible:
+ Must be one of :
+- "brcm,spi-bcm-qspi", "brcm,spi-brcmstb-qspi" : MSPI+BSPI on BRCMSTB SoCs
+- "brcm,spi-bcm-qspi", "brcm,spi-brcmstb-mspi" : Second Instance of MSPI
++ "brcm,spi-brcmstb-qspi", "brcm,spi-bcm-qspi" : MSPI+BSPI on BRCMSTB SoCs
++ "brcm,spi-brcmstb-mspi", "brcm,spi-bcm-qspi" : Second Instance of MSPI
+ BRCMSTB SoCs
+ "brcm,spi-bcm7425-qspi", "brcm,spi-bcm-qspi", "brcm,spi-brcmstb-mspi" : Second Instance of MSPI
+ BRCMSTB SoCs
+@@ -36,8 +36,8 @@ Required properties:
+ BRCMSTB SoCs
+ "brcm,spi-bcm7278-qspi", "brcm,spi-bcm-qspi", "brcm,spi-brcmstb-mspi" : Second Instance of MSPI
+ BRCMSTB SoCs
+- "brcm,spi-bcm-qspi", "brcm,spi-nsp-qspi" : MSPI+BSPI on Cygnus, NSP
+- "brcm,spi-bcm-qspi", "brcm,spi-ns2-qspi" : NS2 SoCs
++ "brcm,spi-nsp-qspi", "brcm,spi-bcm-qspi" : MSPI+BSPI on Cygnus, NSP
++ "brcm,spi-ns2-qspi", "brcm,spi-bcm-qspi" : NS2 SoCs
+
+ - reg:
+ Define the bases and ranges of the associated I/O address spaces.
+@@ -86,7 +86,7 @@ BRCMSTB SoC Example:
+ spi@f03e3400 {
+ #address-cells = <0x1>;
+ #size-cells = <0x0>;
+- compatible = "brcm,spi-brcmstb-qspi", "brcm,spi-brcmstb-qspi";
++ compatible = "brcm,spi-brcmstb-qspi", "brcm,spi-bcm-qspi";
+ reg = <0xf03e0920 0x4 0xf03e3400 0x188 0xf03e3200 0x50>;
+ reg-names = "cs_reg", "mspi", "bspi";
+ interrupts = <0x6 0x5 0x4 0x3 0x2 0x1 0x0>;
+@@ -149,7 +149,7 @@ BRCMSTB SoC Example:
+ #address-cells = <1>;
+ #size-cells = <0>;
+ clocks = <&upg_fixed>;
+- compatible = "brcm,spi-brcmstb-qspi", "brcm,spi-brcmstb-mspi";
++ compatible = "brcm,spi-brcmstb-mspi", "brcm,spi-bcm-qspi";
+ reg = <0xf0416000 0x180>;
+ reg-names = "mspi";
+ interrupts = <0x14>;
+@@ -160,7 +160,7 @@ BRCMSTB SoC Example:
+ iProc SoC Example:
+
+ qspi: spi@18027200 {
+- compatible = "brcm,spi-bcm-qspi", "brcm,spi-nsp-qspi";
++ compatible = "brcm,spi-nsp-qspi", "brcm,spi-bcm-qspi";
+ reg = <0x18027200 0x184>,
+ <0x18027000 0x124>,
+ <0x1811c408 0x004>,
+@@ -191,7 +191,7 @@ iProc SoC Example:
+ NS2 SoC Example:
+
+ qspi: spi@66470200 {
+- compatible = "brcm,spi-bcm-qspi", "brcm,spi-ns2-qspi";
++ compatible = "brcm,spi-ns2-qspi", "brcm,spi-bcm-qspi";
+ reg = <0x66470200 0x184>,
+ <0x66470000 0x124>,
+ <0x67017408 0x004>,
+diff --git a/Makefile b/Makefile
+index d937530d33427..0b025b3a56401 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 8
+-SUBLEVEL = 10
++SUBLEVEL = 11
+ EXTRAVERSION =
+ NAME = Kleptomaniac Octopus
+
+diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
+index 2c0b82db825ba..422ed2e38a6c8 100644
+--- a/arch/arm64/kernel/cpu_errata.c
++++ b/arch/arm64/kernel/cpu_errata.c
+@@ -910,8 +910,12 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
+ .desc = "ARM erratum 1418040",
+ .capability = ARM64_WORKAROUND_1418040,
+ ERRATA_MIDR_RANGE_LIST(erratum_1418040_list),
+- .type = (ARM64_CPUCAP_SCOPE_LOCAL_CPU |
+- ARM64_CPUCAP_PERMITTED_FOR_LATE_CPU),
++ /*
++ * We need to allow affected CPUs to come in late, but
++ * also need the non-affected CPUs to be able to come
++ * in at any point in time. Wonderful.
++ */
++ .type = ARM64_CPUCAP_WEAK_LOCAL_CPU_FEATURE,
+ },
+ #endif
+ #ifdef CONFIG_ARM64_WORKAROUND_SPECULATIVE_AT
+diff --git a/arch/arm64/kernel/paravirt.c b/arch/arm64/kernel/paravirt.c
+index 295d66490584b..c07d7a0349410 100644
+--- a/arch/arm64/kernel/paravirt.c
++++ b/arch/arm64/kernel/paravirt.c
+@@ -50,16 +50,19 @@ static u64 pv_steal_clock(int cpu)
+ struct pv_time_stolen_time_region *reg;
+
+ reg = per_cpu_ptr(&stolen_time_region, cpu);
+- if (!reg->kaddr) {
+- pr_warn_once("stolen time enabled but not configured for cpu %d\n",
+- cpu);
++
++ /*
++ * paravirt_steal_clock() may be called before the CPU
++ * online notification callback runs. Until the callback
++ * has run we just return zero.
++ */
++ if (!reg->kaddr)
+ return 0;
+- }
+
+ return le64_to_cpu(READ_ONCE(reg->kaddr->stolen_time));
+ }
+
+-static int stolen_time_dying_cpu(unsigned int cpu)
++static int stolen_time_cpu_down_prepare(unsigned int cpu)
+ {
+ struct pv_time_stolen_time_region *reg;
+
+@@ -73,7 +76,7 @@ static int stolen_time_dying_cpu(unsigned int cpu)
+ return 0;
+ }
+
+-static int init_stolen_time_cpu(unsigned int cpu)
++static int stolen_time_cpu_online(unsigned int cpu)
+ {
+ struct pv_time_stolen_time_region *reg;
+ struct arm_smccc_res res;
+@@ -103,19 +106,20 @@ static int init_stolen_time_cpu(unsigned int cpu)
+ return 0;
+ }
+
+-static int pv_time_init_stolen_time(void)
++static int __init pv_time_init_stolen_time(void)
+ {
+ int ret;
+
+- ret = cpuhp_setup_state(CPUHP_AP_ARM_KVMPV_STARTING,
+- "hypervisor/arm/pvtime:starting",
+- init_stolen_time_cpu, stolen_time_dying_cpu);
++ ret = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN,
++ "hypervisor/arm/pvtime:online",
++ stolen_time_cpu_online,
++ stolen_time_cpu_down_prepare);
+ if (ret < 0)
+ return ret;
+ return 0;
+ }
+
+-static bool has_pv_steal_clock(void)
++static bool __init has_pv_steal_clock(void)
+ {
+ struct arm_smccc_res res;
+
+diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c
+index 3cb25b43b368e..1b2d82755e41f 100644
+--- a/arch/arm64/net/bpf_jit_comp.c
++++ b/arch/arm64/net/bpf_jit_comp.c
+@@ -141,14 +141,17 @@ static inline void emit_addr_mov_i64(const int reg, const u64 val,
+ }
+ }
+
+-static inline int bpf2a64_offset(int bpf_to, int bpf_from,
++static inline int bpf2a64_offset(int bpf_insn, int off,
+ const struct jit_ctx *ctx)
+ {
+- int to = ctx->offset[bpf_to];
+- /* -1 to account for the Branch instruction */
+- int from = ctx->offset[bpf_from] - 1;
+-
+- return to - from;
++ /* BPF JMP offset is relative to the next instruction */
++ bpf_insn++;
++ /*
++ * Whereas arm64 branch instructions encode the offset
++ * from the branch itself, so we must subtract 1 from the
++ * instruction offset.
++ */
++ return ctx->offset[bpf_insn + off] - (ctx->offset[bpf_insn] - 1);
+ }
+
+ static void jit_fill_hole(void *area, unsigned int size)
+@@ -578,7 +581,7 @@ emit_bswap_uxt:
+
+ /* JUMP off */
+ case BPF_JMP | BPF_JA:
+- jmp_offset = bpf2a64_offset(i + off, i, ctx);
++ jmp_offset = bpf2a64_offset(i, off, ctx);
+ check_imm26(jmp_offset);
+ emit(A64_B(jmp_offset), ctx);
+ break;
+@@ -605,7 +608,7 @@ emit_bswap_uxt:
+ case BPF_JMP32 | BPF_JSLE | BPF_X:
+ emit(A64_CMP(is64, dst, src), ctx);
+ emit_cond_jmp:
+- jmp_offset = bpf2a64_offset(i + off, i, ctx);
++ jmp_offset = bpf2a64_offset(i, off, ctx);
+ check_imm19(jmp_offset);
+ switch (BPF_OP(code)) {
+ case BPF_JEQ:
+@@ -837,10 +840,21 @@ static int build_body(struct jit_ctx *ctx, bool extra_pass)
+ const struct bpf_prog *prog = ctx->prog;
+ int i;
+
++ /*
++ * - offset[0] offset of the end of prologue,
++ * start of the 1st instruction.
++ * - offset[1] - offset of the end of 1st instruction,
++ * start of the 2nd instruction
++ * [....]
++ * - offset[3] - offset of the end of 3rd instruction,
++ * start of 4th instruction
++ */
+ for (i = 0; i < prog->len; i++) {
+ const struct bpf_insn *insn = &prog->insnsi[i];
+ int ret;
+
++ if (ctx->image == NULL)
++ ctx->offset[i] = ctx->idx;
+ ret = build_insn(insn, ctx, extra_pass);
+ if (ret > 0) {
+ i++;
+@@ -848,11 +862,16 @@ static int build_body(struct jit_ctx *ctx, bool extra_pass)
+ ctx->offset[i] = ctx->idx;
+ continue;
+ }
+- if (ctx->image == NULL)
+- ctx->offset[i] = ctx->idx;
+ if (ret)
+ return ret;
+ }
++ /*
++ * offset is allocated with prog->len + 1 so fill in
++ * the last element with the offset after the last
++ * instruction (end of program)
++ */
++ if (ctx->image == NULL)
++ ctx->offset[i] = ctx->idx;
+
+ return 0;
+ }
+@@ -928,7 +947,7 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
+ memset(&ctx, 0, sizeof(ctx));
+ ctx.prog = prog;
+
+- ctx.offset = kcalloc(prog->len, sizeof(int), GFP_KERNEL);
++ ctx.offset = kcalloc(prog->len + 1, sizeof(int), GFP_KERNEL);
+ if (ctx.offset == NULL) {
+ prog = orig_prog;
+ goto out_off;
+@@ -1008,7 +1027,7 @@ skip_init_ctx:
+ prog->jited_len = image_size;
+
+ if (!prog->is_func || extra_pass) {
+- bpf_prog_fill_jited_linfo(prog, ctx.offset);
++ bpf_prog_fill_jited_linfo(prog, ctx.offset + 1);
+ out_off:
+ kfree(ctx.offset);
+ kfree(jit_data);
+diff --git a/arch/mips/Kconfig b/arch/mips/Kconfig
+index c43ad3b3cea4b..daa24f1e14831 100644
+--- a/arch/mips/Kconfig
++++ b/arch/mips/Kconfig
+@@ -876,6 +876,7 @@ config SNI_RM
+ select I8253
+ select I8259
+ select ISA
++ select MIPS_L1_CACHE_SHIFT_6
+ select SWAP_IO_SPACE if CPU_BIG_ENDIAN
+ select SYS_HAS_CPU_R4X00
+ select SYS_HAS_CPU_R5000
+diff --git a/arch/mips/kvm/mips.c b/arch/mips/kvm/mips.c
+index 666d3350b4ac1..6c6836669ce16 100644
+--- a/arch/mips/kvm/mips.c
++++ b/arch/mips/kvm/mips.c
+@@ -137,6 +137,8 @@ extern void kvm_init_loongson_ipi(struct kvm *kvm);
+ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
+ {
+ switch (type) {
++ case KVM_VM_MIPS_AUTO:
++ break;
+ #ifdef CONFIG_KVM_MIPS_VZ
+ case KVM_VM_MIPS_VZ:
+ #else
+diff --git a/arch/mips/sni/a20r.c b/arch/mips/sni/a20r.c
+index b09dc844985a8..eeeec18c420a6 100644
+--- a/arch/mips/sni/a20r.c
++++ b/arch/mips/sni/a20r.c
+@@ -143,7 +143,10 @@ static struct platform_device sc26xx_pdev = {
+ },
+ };
+
+-static u32 a20r_ack_hwint(void)
++/*
++ * Trigger chipset to update CPU's CAUSE IP field
++ */
++static u32 a20r_update_cause_ip(void)
+ {
+ u32 status = read_c0_status();
+
+@@ -205,12 +208,14 @@ static void a20r_hwint(void)
+ int irq;
+
+ clear_c0_status(IE_IRQ0);
+- status = a20r_ack_hwint();
++ status = a20r_update_cause_ip();
+ cause = read_c0_cause();
+
+ irq = ffs(((cause & status) >> 8) & 0xf8);
+ if (likely(irq > 0))
+ do_IRQ(SNI_A20R_IRQ_BASE + irq - 1);
++
++ a20r_update_cause_ip();
+ set_c0_status(IE_IRQ0);
+ }
+
+diff --git a/arch/openrisc/mm/cache.c b/arch/openrisc/mm/cache.c
+index 08f56af387ac4..534a52ec5e667 100644
+--- a/arch/openrisc/mm/cache.c
++++ b/arch/openrisc/mm/cache.c
+@@ -16,7 +16,7 @@
+ #include <asm/cacheflush.h>
+ #include <asm/tlbflush.h>
+
+-static void cache_loop(struct page *page, const unsigned int reg)
++static __always_inline void cache_loop(struct page *page, const unsigned int reg)
+ {
+ unsigned long paddr = page_to_pfn(page) << PAGE_SHIFT;
+ unsigned long line = paddr & ~(L1_CACHE_BYTES - 1);
+diff --git a/arch/powerpc/include/asm/book3s/64/mmu.h b/arch/powerpc/include/asm/book3s/64/mmu.h
+index 5393a535240c7..dfbbffa0eb2e2 100644
+--- a/arch/powerpc/include/asm/book3s/64/mmu.h
++++ b/arch/powerpc/include/asm/book3s/64/mmu.h
+@@ -228,14 +228,14 @@ static inline void early_init_mmu_secondary(void)
+
+ extern void hash__setup_initial_memory_limit(phys_addr_t first_memblock_base,
+ phys_addr_t first_memblock_size);
+-extern void radix__setup_initial_memory_limit(phys_addr_t first_memblock_base,
+- phys_addr_t first_memblock_size);
+ static inline void setup_initial_memory_limit(phys_addr_t first_memblock_base,
+ phys_addr_t first_memblock_size)
+ {
+- if (early_radix_enabled())
+- return radix__setup_initial_memory_limit(first_memblock_base,
+- first_memblock_size);
++ /*
++ * Hash has more strict restrictions. At this point we don't
++ * know which translations we will pick. Hence go with hash
++ * restrictions.
++ */
+ return hash__setup_initial_memory_limit(first_memblock_base,
+ first_memblock_size);
+ }
+diff --git a/arch/powerpc/kernel/dma-iommu.c b/arch/powerpc/kernel/dma-iommu.c
+index e486d1d78de28..f4cb2c546adbb 100644
+--- a/arch/powerpc/kernel/dma-iommu.c
++++ b/arch/powerpc/kernel/dma-iommu.c
+@@ -160,7 +160,8 @@ u64 dma_iommu_get_required_mask(struct device *dev)
+ return bypass_mask;
+ }
+
+- mask = 1ULL < (fls_long(tbl->it_offset + tbl->it_size) - 1);
++ mask = 1ULL << (fls_long(tbl->it_offset + tbl->it_size) +
++ tbl->it_page_shift - 1);
+ mask += mask - 1;
+
+ return mask;
+diff --git a/arch/powerpc/mm/book3s64/radix_pgtable.c b/arch/powerpc/mm/book3s64/radix_pgtable.c
+index c2989c1718839..1e9a298020a63 100644
+--- a/arch/powerpc/mm/book3s64/radix_pgtable.c
++++ b/arch/powerpc/mm/book3s64/radix_pgtable.c
+@@ -654,21 +654,6 @@ void radix__mmu_cleanup_all(void)
+ }
+ }
+
+-void radix__setup_initial_memory_limit(phys_addr_t first_memblock_base,
+- phys_addr_t first_memblock_size)
+-{
+- /*
+- * We don't currently support the first MEMBLOCK not mapping 0
+- * physical on those processors
+- */
+- BUG_ON(first_memblock_base != 0);
+-
+- /*
+- * Radix mode is not limited by RMA / VRMA addressing.
+- */
+- ppc64_rma_size = ULONG_MAX;
+-}
+-
+ #ifdef CONFIG_MEMORY_HOTPLUG
+ static void free_pte_table(pte_t *pte_start, pmd_t *pmd)
+ {
+diff --git a/arch/powerpc/mm/init_64.c b/arch/powerpc/mm/init_64.c
+index bc73abf0bc25e..ef566fc43933e 100644
+--- a/arch/powerpc/mm/init_64.c
++++ b/arch/powerpc/mm/init_64.c
+@@ -431,9 +431,16 @@ void __init mmu_early_init_devtree(void)
+ if (!(mfmsr() & MSR_HV))
+ early_check_vec5();
+
+- if (early_radix_enabled())
++ if (early_radix_enabled()) {
+ radix__early_init_devtree();
+- else
++ /*
++ * We have finalized the translation we are going to use by now.
++ * Radix mode is not limited by RMA / VRMA addressing.
++ * Hence don't limit memblock allocations.
++ */
++ ppc64_rma_size = ULONG_MAX;
++ memblock_set_current_limit(MEMBLOCK_ALLOC_ANYWHERE);
++ } else
+ hash__early_init_devtree();
+ }
+ #endif /* CONFIG_PPC_BOOK3S_64 */
+diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c
+index 79e9d55bdf1ac..e229d95f470b8 100644
+--- a/arch/riscv/mm/init.c
++++ b/arch/riscv/mm/init.c
+@@ -226,12 +226,11 @@ void __set_fixmap(enum fixed_addresses idx, phys_addr_t phys, pgprot_t prot)
+
+ ptep = &fixmap_pte[pte_index(addr)];
+
+- if (pgprot_val(prot)) {
++ if (pgprot_val(prot))
+ set_pte(ptep, pfn_pte(phys >> PAGE_SHIFT, prot));
+- } else {
++ else
+ pte_clear(&init_mm, addr, ptep);
+- local_flush_tlb_page(addr);
+- }
++ local_flush_tlb_page(addr);
+ }
+
+ static pte_t *__init get_pte_virt(phys_addr_t pa)
+diff --git a/arch/s390/kernel/entry.h b/arch/s390/kernel/entry.h
+index faca269d5f278..a44ddc2f2dec5 100644
+--- a/arch/s390/kernel/entry.h
++++ b/arch/s390/kernel/entry.h
+@@ -26,6 +26,7 @@ void do_protection_exception(struct pt_regs *regs);
+ void do_dat_exception(struct pt_regs *regs);
+ void do_secure_storage_access(struct pt_regs *regs);
+ void do_non_secure_storage_access(struct pt_regs *regs);
++void do_secure_storage_violation(struct pt_regs *regs);
+
+ void addressing_exception(struct pt_regs *regs);
+ void data_exception(struct pt_regs *regs);
+diff --git a/arch/s390/kernel/pgm_check.S b/arch/s390/kernel/pgm_check.S
+index 2c27907a5ffcb..9a92638360eee 100644
+--- a/arch/s390/kernel/pgm_check.S
++++ b/arch/s390/kernel/pgm_check.S
+@@ -80,7 +80,7 @@ PGM_CHECK(do_dat_exception) /* 3b */
+ PGM_CHECK_DEFAULT /* 3c */
+ PGM_CHECK(do_secure_storage_access) /* 3d */
+ PGM_CHECK(do_non_secure_storage_access) /* 3e */
+-PGM_CHECK_DEFAULT /* 3f */
++PGM_CHECK(do_secure_storage_violation) /* 3f */
+ PGM_CHECK(monitor_event_exception) /* 40 */
+ PGM_CHECK_DEFAULT /* 41 */
+ PGM_CHECK_DEFAULT /* 42 */
+diff --git a/arch/s390/mm/fault.c b/arch/s390/mm/fault.c
+index d53c2e2ea1fd2..48d9fc5b699b4 100644
+--- a/arch/s390/mm/fault.c
++++ b/arch/s390/mm/fault.c
+@@ -875,6 +875,21 @@ void do_non_secure_storage_access(struct pt_regs *regs)
+ }
+ NOKPROBE_SYMBOL(do_non_secure_storage_access);
+
++void do_secure_storage_violation(struct pt_regs *regs)
++{
++ /*
++ * Either KVM messed up the secure guest mapping or the same
++ * page is mapped into multiple secure guests.
++ *
++ * This exception is only triggered when a guest 2 is running
++ * and can therefore never occur in kernel context.
++ */
++ printk_ratelimited(KERN_WARNING
++ "Secure storage violation in task: %s, pid %d\n",
++ current->comm, current->pid);
++ send_sig(SIGSEGV, current, 0);
++}
++
+ #else
+ void do_secure_storage_access(struct pt_regs *regs)
+ {
+@@ -885,4 +900,9 @@ void do_non_secure_storage_access(struct pt_regs *regs)
+ {
+ default_trap_handler(regs);
+ }
++
++void do_secure_storage_violation(struct pt_regs *regs)
++{
++ default_trap_handler(regs);
++}
+ #endif
+diff --git a/arch/s390/pci/pci.c b/arch/s390/pci/pci.c
+index 4b62d6b550246..1804230dd8d82 100644
+--- a/arch/s390/pci/pci.c
++++ b/arch/s390/pci/pci.c
+@@ -668,6 +668,10 @@ EXPORT_SYMBOL_GPL(zpci_enable_device);
+ int zpci_disable_device(struct zpci_dev *zdev)
+ {
+ zpci_dma_exit_device(zdev);
++ /*
++ * The zPCI function may already be disabled by the platform, this is
++ * detected in clp_disable_fh() which becomes a no-op.
++ */
+ return clp_disable_fh(zdev);
+ }
+ EXPORT_SYMBOL_GPL(zpci_disable_device);
+diff --git a/arch/s390/pci/pci_event.c b/arch/s390/pci/pci_event.c
+index 9a3a291cad432..d9ae7456dd4c8 100644
+--- a/arch/s390/pci/pci_event.c
++++ b/arch/s390/pci/pci_event.c
+@@ -143,6 +143,8 @@ static void __zpci_event_availability(struct zpci_ccdf_avail *ccdf)
+ zpci_remove_device(zdev);
+ }
+
++ zdev->fh = ccdf->fh;
++ zpci_disable_device(zdev);
+ zdev->state = ZPCI_FN_STATE_STANDBY;
+ if (!clp_get_state(ccdf->fid, &state) &&
+ state == ZPCI_FN_STATE_RESERVED) {
+diff --git a/arch/x86/boot/compressed/Makefile b/arch/x86/boot/compressed/Makefile
+index 5a828fde7a42f..be38af7bea89d 100644
+--- a/arch/x86/boot/compressed/Makefile
++++ b/arch/x86/boot/compressed/Makefile
+@@ -42,6 +42,8 @@ KBUILD_CFLAGS += $(call cc-disable-warning, gnu)
+ KBUILD_CFLAGS += -Wno-pointer-sign
+ KBUILD_CFLAGS += $(call cc-option,-fmacro-prefix-map=$(srctree)/=)
+ KBUILD_CFLAGS += -fno-asynchronous-unwind-tables
++# Disable relocation relaxation in case the link is not PIE.
++KBUILD_CFLAGS += $(call as-option,-Wa$(comma)-mrelax-relocations=no)
+
+ KBUILD_AFLAGS := $(KBUILD_CFLAGS) -D__ASSEMBLY__
+ GCOV_PROFILE := n
+diff --git a/arch/x86/include/asm/frame.h b/arch/x86/include/asm/frame.h
+index 296b346184b27..fb42659f6e988 100644
+--- a/arch/x86/include/asm/frame.h
++++ b/arch/x86/include/asm/frame.h
+@@ -60,12 +60,26 @@
+ #define FRAME_END "pop %" _ASM_BP "\n"
+
+ #ifdef CONFIG_X86_64
++
+ #define ENCODE_FRAME_POINTER \
+ "lea 1(%rsp), %rbp\n\t"
++
++static inline unsigned long encode_frame_pointer(struct pt_regs *regs)
++{
++ return (unsigned long)regs + 1;
++}
++
+ #else /* !CONFIG_X86_64 */
++
+ #define ENCODE_FRAME_POINTER \
+ "movl %esp, %ebp\n\t" \
+ "andl $0x7fffffff, %ebp\n\t"
++
++static inline unsigned long encode_frame_pointer(struct pt_regs *regs)
++{
++ return (unsigned long)regs & 0x7fffffff;
++}
++
+ #endif /* CONFIG_X86_64 */
+
+ #endif /* __ASSEMBLY__ */
+@@ -83,6 +97,11 @@
+
+ #define ENCODE_FRAME_POINTER
+
++static inline unsigned long encode_frame_pointer(struct pt_regs *regs)
++{
++ return 0;
++}
++
+ #endif
+
+ #define FRAME_BEGIN
+diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
+index fe67dbd76e517..bff502e779e44 100644
+--- a/arch/x86/kernel/process.c
++++ b/arch/x86/kernel/process.c
+@@ -42,6 +42,7 @@
+ #include <asm/spec-ctrl.h>
+ #include <asm/io_bitmap.h>
+ #include <asm/proto.h>
++#include <asm/frame.h>
+
+ #include "process.h"
+
+@@ -133,7 +134,7 @@ int copy_thread_tls(unsigned long clone_flags, unsigned long sp,
+ fork_frame = container_of(childregs, struct fork_frame, regs);
+ frame = &fork_frame->frame;
+
+- frame->bp = 0;
++ frame->bp = encode_frame_pointer(childregs);
+ frame->ret_addr = (unsigned long) ret_from_fork;
+ p->thread.sp = (unsigned long) fork_frame;
+ p->thread.io_bitmap = NULL;
+diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
+index 50c8f034c01c5..caa4fa7f42b84 100644
+--- a/block/bfq-iosched.c
++++ b/block/bfq-iosched.c
+@@ -5895,18 +5895,6 @@ static void bfq_finish_requeue_request(struct request *rq)
+ struct bfq_queue *bfqq = RQ_BFQQ(rq);
+ struct bfq_data *bfqd;
+
+- /*
+- * Requeue and finish hooks are invoked in blk-mq without
+- * checking whether the involved request is actually still
+- * referenced in the scheduler. To handle this fact, the
+- * following two checks make this function exit in case of
+- * spurious invocations, for which there is nothing to do.
+- *
+- * First, check whether rq has nothing to do with an elevator.
+- */
+- if (unlikely(!(rq->rq_flags & RQF_ELVPRIV)))
+- return;
+-
+ /*
+ * rq either is not associated with any icq, or is an already
+ * requeued request that has not (yet) been re-inserted into
+diff --git a/block/blk-mq-sched.h b/block/blk-mq-sched.h
+index 126021fc3a11f..e81ca1bf6e10b 100644
+--- a/block/blk-mq-sched.h
++++ b/block/blk-mq-sched.h
+@@ -66,7 +66,7 @@ static inline void blk_mq_sched_requeue_request(struct request *rq)
+ struct request_queue *q = rq->q;
+ struct elevator_queue *e = q->elevator;
+
+- if (e && e->type->ops.requeue_request)
++ if ((rq->rq_flags & RQF_ELVPRIV) && e && e->type->ops.requeue_request)
+ e->type->ops.requeue_request(rq);
+ }
+
+diff --git a/drivers/base/firmware_loader/firmware.h b/drivers/base/firmware_loader/firmware.h
+index 933e2192fbe8a..d08efc77cf16a 100644
+--- a/drivers/base/firmware_loader/firmware.h
++++ b/drivers/base/firmware_loader/firmware.h
+@@ -142,10 +142,12 @@ int assign_fw(struct firmware *fw, struct device *device, u32 opt_flags);
+ void fw_free_paged_buf(struct fw_priv *fw_priv);
+ int fw_grow_paged_buf(struct fw_priv *fw_priv, int pages_needed);
+ int fw_map_paged_buf(struct fw_priv *fw_priv);
++bool fw_is_paged_buf(struct fw_priv *fw_priv);
+ #else
+ static inline void fw_free_paged_buf(struct fw_priv *fw_priv) {}
+ static inline int fw_grow_paged_buf(struct fw_priv *fw_priv, int pages_needed) { return -ENXIO; }
+ static inline int fw_map_paged_buf(struct fw_priv *fw_priv) { return -ENXIO; }
++static inline bool fw_is_paged_buf(struct fw_priv *fw_priv) { return false; }
+ #endif
+
+ #endif /* __FIRMWARE_LOADER_H */
+diff --git a/drivers/base/firmware_loader/main.c b/drivers/base/firmware_loader/main.c
+index ca871b13524e2..36bf45509e0b0 100644
+--- a/drivers/base/firmware_loader/main.c
++++ b/drivers/base/firmware_loader/main.c
+@@ -252,9 +252,11 @@ static void __free_fw_priv(struct kref *ref)
+ list_del(&fw_priv->list);
+ spin_unlock(&fwc->lock);
+
+- fw_free_paged_buf(fw_priv); /* free leftover pages */
+- if (!fw_priv->allocated_size)
++ if (fw_is_paged_buf(fw_priv))
++ fw_free_paged_buf(fw_priv);
++ else if (!fw_priv->allocated_size)
+ vfree(fw_priv->data);
++
+ kfree_const(fw_priv->fw_name);
+ kfree(fw_priv);
+ }
+@@ -268,6 +270,11 @@ static void free_fw_priv(struct fw_priv *fw_priv)
+ }
+
+ #ifdef CONFIG_FW_LOADER_PAGED_BUF
++bool fw_is_paged_buf(struct fw_priv *fw_priv)
++{
++ return fw_priv->is_paged_buf;
++}
++
+ void fw_free_paged_buf(struct fw_priv *fw_priv)
+ {
+ int i;
+@@ -275,6 +282,8 @@ void fw_free_paged_buf(struct fw_priv *fw_priv)
+ if (!fw_priv->pages)
+ return;
+
++ vunmap(fw_priv->data);
++
+ for (i = 0; i < fw_priv->nr_pages; i++)
+ __free_page(fw_priv->pages[i]);
+ kvfree(fw_priv->pages);
+@@ -328,10 +337,6 @@ int fw_map_paged_buf(struct fw_priv *fw_priv)
+ if (!fw_priv->data)
+ return -ENOMEM;
+
+- /* page table is no longer needed after mapping, let's free */
+- kvfree(fw_priv->pages);
+- fw_priv->pages = NULL;
+-
+ return 0;
+ }
+ #endif
+diff --git a/drivers/block/loop.c b/drivers/block/loop.c
+index 84433922aed16..dfc66038bef9f 100644
+--- a/drivers/block/loop.c
++++ b/drivers/block/loop.c
+@@ -1114,8 +1114,6 @@ static int loop_configure(struct loop_device *lo, fmode_t mode,
+ mapping = file->f_mapping;
+ inode = mapping->host;
+
+- size = get_loop_size(lo, file);
+-
+ if ((config->info.lo_flags & ~LOOP_CONFIGURE_SETTABLE_FLAGS) != 0) {
+ error = -EINVAL;
+ goto out_unlock;
+@@ -1165,6 +1163,8 @@ static int loop_configure(struct loop_device *lo, fmode_t mode,
+ loop_update_rotational(lo);
+ loop_update_dio(lo);
+ loop_sysfs_init(lo);
++
++ size = get_loop_size(lo, file);
+ loop_set_size(lo, size);
+
+ set_blocksize(bdev, S_ISBLK(inode->i_mode) ?
+diff --git a/drivers/clk/davinci/pll.c b/drivers/clk/davinci/pll.c
+index 8a23d5dfd1f8d..5f063e1be4b14 100644
+--- a/drivers/clk/davinci/pll.c
++++ b/drivers/clk/davinci/pll.c
+@@ -491,7 +491,7 @@ struct clk *davinci_pll_clk_register(struct device *dev,
+ parent_name = postdiv_name;
+ }
+
+- pllen = kzalloc(sizeof(*pllout), GFP_KERNEL);
++ pllen = kzalloc(sizeof(*pllen), GFP_KERNEL);
+ if (!pllen) {
+ ret = -ENOMEM;
+ goto err_unregister_postdiv;
+diff --git a/drivers/clk/rockchip/clk-rk3228.c b/drivers/clk/rockchip/clk-rk3228.c
+index d7243c09cc843..47d6482dda9df 100644
+--- a/drivers/clk/rockchip/clk-rk3228.c
++++ b/drivers/clk/rockchip/clk-rk3228.c
+@@ -137,7 +137,7 @@ PNAME(mux_usb480m_p) = { "usb480m_phy", "xin24m" };
+ PNAME(mux_hdmiphy_p) = { "hdmiphy_phy", "xin24m" };
+ PNAME(mux_aclk_cpu_src_p) = { "cpll_aclk_cpu", "gpll_aclk_cpu", "hdmiphy_aclk_cpu" };
+
+-PNAME(mux_pll_src_4plls_p) = { "cpll", "gpll", "hdmiphy" "usb480m" };
++PNAME(mux_pll_src_4plls_p) = { "cpll", "gpll", "hdmiphy", "usb480m" };
+ PNAME(mux_pll_src_3plls_p) = { "cpll", "gpll", "hdmiphy" };
+ PNAME(mux_pll_src_2plls_p) = { "cpll", "gpll" };
+ PNAME(mux_sclk_hdmi_cec_p) = { "cpll", "gpll", "xin24m" };
+diff --git a/drivers/dax/super.c b/drivers/dax/super.c
+index 8e32345be0f74..af95d7e723f76 100644
+--- a/drivers/dax/super.c
++++ b/drivers/dax/super.c
+@@ -318,11 +318,15 @@ EXPORT_SYMBOL_GPL(dax_direct_access);
+ bool dax_supported(struct dax_device *dax_dev, struct block_device *bdev,
+ int blocksize, sector_t start, sector_t len)
+ {
++ if (!dax_dev)
++ return false;
++
+ if (!dax_alive(dax_dev))
+ return false;
+
+ return dax_dev->ops->dax_supported(dax_dev, bdev, blocksize, start, len);
+ }
++EXPORT_SYMBOL_GPL(dax_supported);
+
+ size_t dax_copy_from_iter(struct dax_device *dax_dev, pgoff_t pgoff, void *addr,
+ size_t bytes, struct iov_iter *i)
+diff --git a/drivers/firmware/efi/efibc.c b/drivers/firmware/efi/efibc.c
+index 35dccc88ac0af..15a47539dc563 100644
+--- a/drivers/firmware/efi/efibc.c
++++ b/drivers/firmware/efi/efibc.c
+@@ -84,7 +84,7 @@ static int __init efibc_init(void)
+ {
+ int ret;
+
+- if (!efi_enabled(EFI_RUNTIME_SERVICES))
++ if (!efivars_kobject() || !efivar_supports_writes())
+ return -ENODEV;
+
+ ret = register_reboot_notifier(&efibc_reboot_notifier);
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
+index e9c4867abeffb..aa1e0f0550835 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
+@@ -1287,7 +1287,7 @@ static int create_queue_cpsch(struct device_queue_manager *dqm, struct queue *q,
+ if (q->properties.is_active) {
+ increment_queue_count(dqm, q->properties.type);
+
+- retval = execute_queues_cpsch(dqm,
++ execute_queues_cpsch(dqm,
+ KFD_UNMAP_QUEUES_FILTER_DYNAMIC_QUEUES, 0);
+ }
+
+diff --git a/drivers/gpu/drm/i915/gem/i915_gem_context.c b/drivers/gpu/drm/i915/gem/i915_gem_context.c
+index 30c229fcb4046..c5c549177d726 100644
+--- a/drivers/gpu/drm/i915/gem/i915_gem_context.c
++++ b/drivers/gpu/drm/i915/gem/i915_gem_context.c
+@@ -440,29 +440,36 @@ static bool __cancel_engine(struct intel_engine_cs *engine)
+ return __reset_engine(engine);
+ }
+
+-static struct intel_engine_cs *__active_engine(struct i915_request *rq)
++static bool
++__active_engine(struct i915_request *rq, struct intel_engine_cs **active)
+ {
+ struct intel_engine_cs *engine, *locked;
++ bool ret = false;
+
+ /*
+ * Serialise with __i915_request_submit() so that it sees
+ * is-banned?, or we know the request is already inflight.
++ *
++ * Note that rq->engine is unstable, and so we double
++ * check that we have acquired the lock on the final engine.
+ */
+ locked = READ_ONCE(rq->engine);
+ spin_lock_irq(&locked->active.lock);
+ while (unlikely(locked != (engine = READ_ONCE(rq->engine)))) {
+ spin_unlock(&locked->active.lock);
+- spin_lock(&engine->active.lock);
+ locked = engine;
++ spin_lock(&locked->active.lock);
+ }
+
+- engine = NULL;
+- if (i915_request_is_active(rq) && rq->fence.error != -EIO)
+- engine = rq->engine;
++ if (!i915_request_completed(rq)) {
++ if (i915_request_is_active(rq) && rq->fence.error != -EIO)
++ *active = locked;
++ ret = true;
++ }
+
+ spin_unlock_irq(&locked->active.lock);
+
+- return engine;
++ return ret;
+ }
+
+ static struct intel_engine_cs *active_engine(struct intel_context *ce)
+@@ -473,17 +480,16 @@ static struct intel_engine_cs *active_engine(struct intel_context *ce)
+ if (!ce->timeline)
+ return NULL;
+
+- mutex_lock(&ce->timeline->mutex);
+- list_for_each_entry_reverse(rq, &ce->timeline->requests, link) {
+- if (i915_request_completed(rq))
+- break;
++ rcu_read_lock();
++ list_for_each_entry_rcu(rq, &ce->timeline->requests, link) {
++ if (i915_request_is_active(rq) && i915_request_completed(rq))
++ continue;
+
+ /* Check with the backend if the request is inflight */
+- engine = __active_engine(rq);
+- if (engine)
++ if (__active_engine(rq, &engine))
+ break;
+ }
+- mutex_unlock(&ce->timeline->mutex);
++ rcu_read_unlock();
+
+ return engine;
+ }
+@@ -714,6 +720,7 @@ __create_context(struct drm_i915_private *i915)
+ ctx->i915 = i915;
+ ctx->sched.priority = I915_USER_PRIORITY(I915_PRIORITY_NORMAL);
+ mutex_init(&ctx->mutex);
++ INIT_LIST_HEAD(&ctx->link);
+
+ spin_lock_init(&ctx->stale.lock);
+ INIT_LIST_HEAD(&ctx->stale.engines);
+@@ -740,10 +747,6 @@ __create_context(struct drm_i915_private *i915)
+ for (i = 0; i < ARRAY_SIZE(ctx->hang_timestamp); i++)
+ ctx->hang_timestamp[i] = jiffies - CONTEXT_FAST_HANG_JIFFIES;
+
+- spin_lock(&i915->gem.contexts.lock);
+- list_add_tail(&ctx->link, &i915->gem.contexts.list);
+- spin_unlock(&i915->gem.contexts.lock);
+-
+ return ctx;
+
+ err_free:
+@@ -931,6 +934,7 @@ static int gem_context_register(struct i915_gem_context *ctx,
+ struct drm_i915_file_private *fpriv,
+ u32 *id)
+ {
++ struct drm_i915_private *i915 = ctx->i915;
+ struct i915_address_space *vm;
+ int ret;
+
+@@ -949,8 +953,16 @@ static int gem_context_register(struct i915_gem_context *ctx,
+ /* And finally expose ourselves to userspace via the idr */
+ ret = xa_alloc(&fpriv->context_xa, id, ctx, xa_limit_32b, GFP_KERNEL);
+ if (ret)
+- put_pid(fetch_and_zero(&ctx->pid));
++ goto err_pid;
++
++ spin_lock(&i915->gem.contexts.lock);
++ list_add_tail(&ctx->link, &i915->gem.contexts.list);
++ spin_unlock(&i915->gem.contexts.lock);
++
++ return 0;
+
++err_pid:
++ put_pid(fetch_and_zero(&ctx->pid));
+ return ret;
+ }
+
+diff --git a/drivers/gpu/drm/i915/i915_sw_fence.c b/drivers/gpu/drm/i915/i915_sw_fence.c
+index 295b9829e2da5..4cd2038cbe359 100644
+--- a/drivers/gpu/drm/i915/i915_sw_fence.c
++++ b/drivers/gpu/drm/i915/i915_sw_fence.c
+@@ -164,9 +164,13 @@ static void __i915_sw_fence_wake_up_all(struct i915_sw_fence *fence,
+
+ do {
+ list_for_each_entry_safe(pos, next, &x->head, entry) {
+- pos->func(pos,
+- TASK_NORMAL, fence->error,
+- &extra);
++ int wake_flags;
++
++ wake_flags = fence->error;
++ if (pos->func == autoremove_wake_function)
++ wake_flags = 0;
++
++ pos->func(pos, TASK_NORMAL, wake_flags, &extra);
+ }
+
+ if (list_empty(&extra))
+diff --git a/drivers/gpu/drm/mediatek/mtk_drm_crtc.c b/drivers/gpu/drm/mediatek/mtk_drm_crtc.c
+index 7cd8f415fd029..d8b43500f12d1 100644
+--- a/drivers/gpu/drm/mediatek/mtk_drm_crtc.c
++++ b/drivers/gpu/drm/mediatek/mtk_drm_crtc.c
+@@ -834,13 +834,19 @@ int mtk_drm_crtc_create(struct drm_device *drm_dev,
+ drm_crtc_index(&mtk_crtc->base));
+ mtk_crtc->cmdq_client = NULL;
+ }
+- ret = of_property_read_u32_index(priv->mutex_node,
+- "mediatek,gce-events",
+- drm_crtc_index(&mtk_crtc->base),
+- &mtk_crtc->cmdq_event);
+- if (ret)
+- dev_dbg(dev, "mtk_crtc %d failed to get mediatek,gce-events property\n",
+- drm_crtc_index(&mtk_crtc->base));
++
++ if (mtk_crtc->cmdq_client) {
++ ret = of_property_read_u32_index(priv->mutex_node,
++ "mediatek,gce-events",
++ drm_crtc_index(&mtk_crtc->base),
++ &mtk_crtc->cmdq_event);
++ if (ret) {
++ dev_dbg(dev, "mtk_crtc %d failed to get mediatek,gce-events property\n",
++ drm_crtc_index(&mtk_crtc->base));
++ cmdq_mbox_destroy(mtk_crtc->cmdq_client);
++ mtk_crtc->cmdq_client = NULL;
++ }
++ }
+ #endif
+ return 0;
+ }
+diff --git a/drivers/gpu/drm/mediatek/mtk_drm_ddp_comp.c b/drivers/gpu/drm/mediatek/mtk_drm_ddp_comp.c
+index 57c88de9a3293..526648885b97e 100644
+--- a/drivers/gpu/drm/mediatek/mtk_drm_ddp_comp.c
++++ b/drivers/gpu/drm/mediatek/mtk_drm_ddp_comp.c
+@@ -496,6 +496,7 @@ int mtk_ddp_comp_init(struct device *dev, struct device_node *node,
+ #if IS_REACHABLE(CONFIG_MTK_CMDQ)
+ if (of_address_to_resource(node, 0, &res) != 0) {
+ dev_err(dev, "Missing reg in %s node\n", node->full_name);
++ put_device(&larb_pdev->dev);
+ return -EINVAL;
+ }
+ comp->regs_pa = res.start;
+diff --git a/drivers/gpu/drm/mediatek/mtk_drm_drv.c b/drivers/gpu/drm/mediatek/mtk_drm_drv.c
+index 040a8f393fe24..b77dc36be4224 100644
+--- a/drivers/gpu/drm/mediatek/mtk_drm_drv.c
++++ b/drivers/gpu/drm/mediatek/mtk_drm_drv.c
+@@ -165,7 +165,7 @@ static int mtk_drm_kms_init(struct drm_device *drm)
+
+ ret = drmm_mode_config_init(drm);
+ if (ret)
+- return ret;
++ goto put_mutex_dev;
+
+ drm->mode_config.min_width = 64;
+ drm->mode_config.min_height = 64;
+@@ -182,7 +182,7 @@ static int mtk_drm_kms_init(struct drm_device *drm)
+
+ ret = component_bind_all(drm->dev, drm);
+ if (ret)
+- return ret;
++ goto put_mutex_dev;
+
+ /*
+ * We currently support two fixed data streams, each optional,
+@@ -229,7 +229,7 @@ static int mtk_drm_kms_init(struct drm_device *drm)
+ }
+ if (!dma_dev->dma_parms) {
+ ret = -ENOMEM;
+- goto err_component_unbind;
++ goto put_dma_dev;
+ }
+
+ ret = dma_set_max_seg_size(dma_dev, (unsigned int)DMA_BIT_MASK(32));
+@@ -256,9 +256,12 @@ static int mtk_drm_kms_init(struct drm_device *drm)
+ err_unset_dma_parms:
+ if (private->dma_parms_allocated)
+ dma_dev->dma_parms = NULL;
++put_dma_dev:
++ put_device(private->dma_dev);
+ err_component_unbind:
+ component_unbind_all(drm->dev, drm);
+-
++put_mutex_dev:
++ put_device(private->mutex_dev);
+ return ret;
+ }
+
+@@ -544,8 +547,13 @@ err_pm:
+ pm_runtime_disable(dev);
+ err_node:
+ of_node_put(private->mutex_node);
+- for (i = 0; i < DDP_COMPONENT_ID_MAX; i++)
++ for (i = 0; i < DDP_COMPONENT_ID_MAX; i++) {
+ of_node_put(private->comp_node[i]);
++ if (private->ddp_comp[i]) {
++ put_device(private->ddp_comp[i]->larb_dev);
++ private->ddp_comp[i] = NULL;
++ }
++ }
+ return ret;
+ }
+
+diff --git a/drivers/gpu/drm/mediatek/mtk_dsi.c b/drivers/gpu/drm/mediatek/mtk_dsi.c
+index 02ac55c13a80b..ee011a0633841 100644
+--- a/drivers/gpu/drm/mediatek/mtk_dsi.c
++++ b/drivers/gpu/drm/mediatek/mtk_dsi.c
+@@ -470,14 +470,13 @@ static void mtk_dsi_config_vdo_timing(struct mtk_dsi *dsi)
+ horizontal_sync_active_byte = (vm->hsync_len * dsi_tmp_buf_bpp - 10);
+
+ if (dsi->mode_flags & MIPI_DSI_MODE_VIDEO_SYNC_PULSE)
+- horizontal_backporch_byte =
+- (vm->hback_porch * dsi_tmp_buf_bpp - 10);
++ horizontal_backporch_byte = vm->hback_porch * dsi_tmp_buf_bpp;
+ else
+- horizontal_backporch_byte = ((vm->hback_porch + vm->hsync_len) *
+- dsi_tmp_buf_bpp - 10);
++ horizontal_backporch_byte = (vm->hback_porch + vm->hsync_len) *
++ dsi_tmp_buf_bpp;
+
+ data_phy_cycles = timing->lpx + timing->da_hs_prepare +
+- timing->da_hs_zero + timing->da_hs_exit + 3;
++ timing->da_hs_zero + timing->da_hs_exit;
+
+ if (dsi->mode_flags & MIPI_DSI_MODE_VIDEO_BURST) {
+ if ((vm->hfront_porch + vm->hback_porch) * dsi_tmp_buf_bpp >
+diff --git a/drivers/gpu/drm/mediatek/mtk_hdmi.c b/drivers/gpu/drm/mediatek/mtk_hdmi.c
+index 1eebe310470af..a9704822c0334 100644
+--- a/drivers/gpu/drm/mediatek/mtk_hdmi.c
++++ b/drivers/gpu/drm/mediatek/mtk_hdmi.c
+@@ -1507,25 +1507,30 @@ static int mtk_hdmi_dt_parse_pdata(struct mtk_hdmi *hdmi,
+ dev_err(dev,
+ "Failed to get system configuration registers: %d\n",
+ ret);
+- return ret;
++ goto put_device;
+ }
+ hdmi->sys_regmap = regmap;
+
+ mem = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ hdmi->regs = devm_ioremap_resource(dev, mem);
+- if (IS_ERR(hdmi->regs))
+- return PTR_ERR(hdmi->regs);
++ if (IS_ERR(hdmi->regs)) {
++ ret = PTR_ERR(hdmi->regs);
++ goto put_device;
++ }
+
+ remote = of_graph_get_remote_node(np, 1, 0);
+- if (!remote)
+- return -EINVAL;
++ if (!remote) {
++ ret = -EINVAL;
++ goto put_device;
++ }
+
+ if (!of_device_is_compatible(remote, "hdmi-connector")) {
+ hdmi->next_bridge = of_drm_find_bridge(remote);
+ if (!hdmi->next_bridge) {
+ dev_err(dev, "Waiting for external bridge\n");
+ of_node_put(remote);
+- return -EPROBE_DEFER;
++ ret = -EPROBE_DEFER;
++ goto put_device;
+ }
+ }
+
+@@ -1534,7 +1539,8 @@ static int mtk_hdmi_dt_parse_pdata(struct mtk_hdmi *hdmi,
+ dev_err(dev, "Failed to find ddc-i2c-bus node in %pOF\n",
+ remote);
+ of_node_put(remote);
+- return -EINVAL;
++ ret = -EINVAL;
++ goto put_device;
+ }
+ of_node_put(remote);
+
+@@ -1542,10 +1548,14 @@ static int mtk_hdmi_dt_parse_pdata(struct mtk_hdmi *hdmi,
+ of_node_put(i2c_np);
+ if (!hdmi->ddc_adpt) {
+ dev_err(dev, "Failed to get ddc i2c adapter by node\n");
+- return -EINVAL;
++ ret = -EINVAL;
++ goto put_device;
+ }
+
+ return 0;
++put_device:
++ put_device(hdmi->cec_dev);
++ return ret;
+ }
+
+ /*
+diff --git a/drivers/hv/channel_mgmt.c b/drivers/hv/channel_mgmt.c
+index 417a95e5094dd..af7832e131674 100644
+--- a/drivers/hv/channel_mgmt.c
++++ b/drivers/hv/channel_mgmt.c
+@@ -750,7 +750,7 @@ static void vmbus_wait_for_unload(void)
+ void *page_addr;
+ struct hv_message *msg;
+ struct vmbus_channel_message_header *hdr;
+- u32 message_type;
++ u32 message_type, i;
+
+ /*
+ * CHANNELMSG_UNLOAD_RESPONSE is always delivered to the CPU which was
+@@ -760,8 +760,11 @@ static void vmbus_wait_for_unload(void)
+ * functional and vmbus_unload_response() will complete
+ * vmbus_connection.unload_event. If not, the last thing we can do is
+ * read message pages for all CPUs directly.
++ *
++ * Wait no more than 10 seconds so that the panic path can't get
++ * hung forever in case the response message isn't seen.
+ */
+- while (1) {
++ for (i = 0; i < 1000; i++) {
+ if (completion_done(&vmbus_connection.unload_event))
+ break;
+
+diff --git a/drivers/hv/vmbus_drv.c b/drivers/hv/vmbus_drv.c
+index d69f4efa37198..dacdd8d2eb1b3 100644
+--- a/drivers/hv/vmbus_drv.c
++++ b/drivers/hv/vmbus_drv.c
+@@ -2383,7 +2383,10 @@ static int vmbus_bus_suspend(struct device *dev)
+ if (atomic_read(&vmbus_connection.nr_chan_close_on_suspend) > 0)
+ wait_for_completion(&vmbus_connection.ready_for_suspend_event);
+
+- WARN_ON(atomic_read(&vmbus_connection.nr_chan_fixup_on_resume) != 0);
++ if (atomic_read(&vmbus_connection.nr_chan_fixup_on_resume) != 0) {
++ pr_err("Can not suspend due to a previous failed resuming\n");
++ return -EBUSY;
++ }
+
+ mutex_lock(&vmbus_connection.channel_mutex);
+
+@@ -2459,7 +2462,9 @@ static int vmbus_bus_resume(struct device *dev)
+
+ vmbus_request_offers();
+
+- wait_for_completion(&vmbus_connection.ready_for_resume_event);
++ if (wait_for_completion_timeout(
++ &vmbus_connection.ready_for_resume_event, 10 * HZ) == 0)
++ pr_err("Some vmbus device is missing after suspending?\n");
+
+ /* Reset the event for the next suspend. */
+ reinit_completion(&vmbus_connection.ready_for_suspend_event);
+diff --git a/drivers/i2c/algos/i2c-algo-pca.c b/drivers/i2c/algos/i2c-algo-pca.c
+index 388978775be04..edc6985c696f0 100644
+--- a/drivers/i2c/algos/i2c-algo-pca.c
++++ b/drivers/i2c/algos/i2c-algo-pca.c
+@@ -41,8 +41,22 @@ static void pca_reset(struct i2c_algo_pca_data *adap)
+ pca_outw(adap, I2C_PCA_INDPTR, I2C_PCA_IPRESET);
+ pca_outw(adap, I2C_PCA_IND, 0xA5);
+ pca_outw(adap, I2C_PCA_IND, 0x5A);
++
++ /*
++ * After a reset we need to re-apply any configuration
++ * (calculated in pca_init) to get the bus in a working state.
++ */
++ pca_outw(adap, I2C_PCA_INDPTR, I2C_PCA_IMODE);
++ pca_outw(adap, I2C_PCA_IND, adap->bus_settings.mode);
++ pca_outw(adap, I2C_PCA_INDPTR, I2C_PCA_ISCLL);
++ pca_outw(adap, I2C_PCA_IND, adap->bus_settings.tlow);
++ pca_outw(adap, I2C_PCA_INDPTR, I2C_PCA_ISCLH);
++ pca_outw(adap, I2C_PCA_IND, adap->bus_settings.thi);
++
++ pca_set_con(adap, I2C_PCA_CON_ENSIO);
+ } else {
+ adap->reset_chip(adap->data);
++ pca_set_con(adap, I2C_PCA_CON_ENSIO | adap->bus_settings.clock_freq);
+ }
+ }
+
+@@ -423,13 +437,14 @@ static int pca_init(struct i2c_adapter *adap)
+ " Use the nominal frequency.\n", adap->name);
+ }
+
+- pca_reset(pca_data);
+-
+ clock = pca_clock(pca_data);
+ printk(KERN_INFO "%s: Clock frequency is %dkHz\n",
+ adap->name, freqs[clock]);
+
+- pca_set_con(pca_data, I2C_PCA_CON_ENSIO | clock);
++ /* Store settings as these will be needed when the PCA chip is reset */
++ pca_data->bus_settings.clock_freq = clock;
++
++ pca_reset(pca_data);
+ } else {
+ int clock;
+ int mode;
+@@ -496,19 +511,15 @@ static int pca_init(struct i2c_adapter *adap)
+ thi = tlow * min_thi / min_tlow;
+ }
+
++ /* Store settings as these will be needed when the PCA chip is reset */
++ pca_data->bus_settings.mode = mode;
++ pca_data->bus_settings.tlow = tlow;
++ pca_data->bus_settings.thi = thi;
++
+ pca_reset(pca_data);
+
+ printk(KERN_INFO
+ "%s: Clock frequency is %dHz\n", adap->name, clock * 100);
+-
+- pca_outw(pca_data, I2C_PCA_INDPTR, I2C_PCA_IMODE);
+- pca_outw(pca_data, I2C_PCA_IND, mode);
+- pca_outw(pca_data, I2C_PCA_INDPTR, I2C_PCA_ISCLL);
+- pca_outw(pca_data, I2C_PCA_IND, tlow);
+- pca_outw(pca_data, I2C_PCA_INDPTR, I2C_PCA_ISCLH);
+- pca_outw(pca_data, I2C_PCA_IND, thi);
+-
+- pca_set_con(pca_data, I2C_PCA_CON_ENSIO);
+ }
+ udelay(500); /* 500 us for oscillator to stabilise */
+
+diff --git a/drivers/i2c/busses/i2c-i801.c b/drivers/i2c/busses/i2c-i801.c
+index f206e28af5831..3843eabeddda3 100644
+--- a/drivers/i2c/busses/i2c-i801.c
++++ b/drivers/i2c/busses/i2c-i801.c
+@@ -1706,6 +1706,16 @@ static inline int i801_acpi_probe(struct i801_priv *priv) { return 0; }
+ static inline void i801_acpi_remove(struct i801_priv *priv) { }
+ #endif
+
++static unsigned char i801_setup_hstcfg(struct i801_priv *priv)
++{
++ unsigned char hstcfg = priv->original_hstcfg;
++
++ hstcfg &= ~SMBHSTCFG_I2C_EN; /* SMBus timing */
++ hstcfg |= SMBHSTCFG_HST_EN;
++ pci_write_config_byte(priv->pci_dev, SMBHSTCFG, hstcfg);
++ return hstcfg;
++}
++
+ static int i801_probe(struct pci_dev *dev, const struct pci_device_id *id)
+ {
+ unsigned char temp;
+@@ -1826,14 +1836,10 @@ static int i801_probe(struct pci_dev *dev, const struct pci_device_id *id)
+ return err;
+ }
+
+- pci_read_config_byte(priv->pci_dev, SMBHSTCFG, &temp);
+- priv->original_hstcfg = temp;
+- temp &= ~SMBHSTCFG_I2C_EN; /* SMBus timing */
+- if (!(temp & SMBHSTCFG_HST_EN)) {
++ pci_read_config_byte(priv->pci_dev, SMBHSTCFG, &priv->original_hstcfg);
++ temp = i801_setup_hstcfg(priv);
++ if (!(priv->original_hstcfg & SMBHSTCFG_HST_EN))
+ dev_info(&dev->dev, "Enabling SMBus device\n");
+- temp |= SMBHSTCFG_HST_EN;
+- }
+- pci_write_config_byte(priv->pci_dev, SMBHSTCFG, temp);
+
+ if (temp & SMBHSTCFG_SMB_SMI_EN) {
+ dev_dbg(&dev->dev, "SMBus using interrupt SMI#\n");
+@@ -1959,6 +1965,7 @@ static int i801_resume(struct device *dev)
+ {
+ struct i801_priv *priv = dev_get_drvdata(dev);
+
++ i801_setup_hstcfg(priv);
+ i801_enable_host_notify(&priv->adapter);
+
+ return 0;
+diff --git a/drivers/i2c/busses/i2c-mt65xx.c b/drivers/i2c/busses/i2c-mt65xx.c
+index deef69e569062..b099139cbb91e 100644
+--- a/drivers/i2c/busses/i2c-mt65xx.c
++++ b/drivers/i2c/busses/i2c-mt65xx.c
+@@ -658,8 +658,8 @@ static int mtk_i2c_calculate_speed(struct mtk_i2c *i2c, unsigned int clk_src,
+ unsigned int cnt_mul;
+ int ret = -EINVAL;
+
+- if (target_speed > I2C_MAX_FAST_MODE_PLUS_FREQ)
+- target_speed = I2C_MAX_FAST_MODE_PLUS_FREQ;
++ if (target_speed > I2C_MAX_HIGH_SPEED_MODE_FREQ)
++ target_speed = I2C_MAX_HIGH_SPEED_MODE_FREQ;
+
+ max_step_cnt = mtk_i2c_max_step_cnt(target_speed);
+ base_step_cnt = max_step_cnt;
+diff --git a/drivers/i2c/busses/i2c-mxs.c b/drivers/i2c/busses/i2c-mxs.c
+index 9587347447f0f..c4b08a9244614 100644
+--- a/drivers/i2c/busses/i2c-mxs.c
++++ b/drivers/i2c/busses/i2c-mxs.c
+@@ -25,6 +25,7 @@
+ #include <linux/of_device.h>
+ #include <linux/dma-mapping.h>
+ #include <linux/dmaengine.h>
++#include <linux/dma/mxs-dma.h>
+
+ #define DRIVER_NAME "mxs-i2c"
+
+@@ -200,7 +201,8 @@ static int mxs_i2c_dma_setup_xfer(struct i2c_adapter *adap,
+ dma_map_sg(i2c->dev, &i2c->sg_io[0], 1, DMA_TO_DEVICE);
+ desc = dmaengine_prep_slave_sg(i2c->dmach, &i2c->sg_io[0], 1,
+ DMA_MEM_TO_DEV,
+- DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
++ DMA_PREP_INTERRUPT |
++ MXS_DMA_CTRL_WAIT4END);
+ if (!desc) {
+ dev_err(i2c->dev,
+ "Failed to get DMA data write descriptor.\n");
+@@ -228,7 +230,8 @@ static int mxs_i2c_dma_setup_xfer(struct i2c_adapter *adap,
+ dma_map_sg(i2c->dev, &i2c->sg_io[1], 1, DMA_FROM_DEVICE);
+ desc = dmaengine_prep_slave_sg(i2c->dmach, &i2c->sg_io[1], 1,
+ DMA_DEV_TO_MEM,
+- DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
++ DMA_PREP_INTERRUPT |
++ MXS_DMA_CTRL_WAIT4END);
+ if (!desc) {
+ dev_err(i2c->dev,
+ "Failed to get DMA data write descriptor.\n");
+@@ -260,7 +263,8 @@ static int mxs_i2c_dma_setup_xfer(struct i2c_adapter *adap,
+ dma_map_sg(i2c->dev, i2c->sg_io, 2, DMA_TO_DEVICE);
+ desc = dmaengine_prep_slave_sg(i2c->dmach, i2c->sg_io, 2,
+ DMA_MEM_TO_DEV,
+- DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
++ DMA_PREP_INTERRUPT |
++ MXS_DMA_CTRL_WAIT4END);
+ if (!desc) {
+ dev_err(i2c->dev,
+ "Failed to get DMA data write descriptor.\n");
+diff --git a/drivers/infiniband/hw/bnxt_re/qplib_sp.c b/drivers/infiniband/hw/bnxt_re/qplib_sp.c
+index 4cd475ea97a24..64d44f51db4b6 100644
+--- a/drivers/infiniband/hw/bnxt_re/qplib_sp.c
++++ b/drivers/infiniband/hw/bnxt_re/qplib_sp.c
+@@ -149,7 +149,7 @@ int bnxt_qplib_get_dev_attr(struct bnxt_qplib_rcfw *rcfw,
+ attr->max_inline_data = le32_to_cpu(sb->max_inline_data);
+ attr->l2_db_size = (sb->l2_db_space_size + 1) *
+ (0x01 << RCFW_DBR_BASE_PAGE_SHIFT);
+- attr->max_sgid = le32_to_cpu(sb->max_gid);
++ attr->max_sgid = BNXT_QPLIB_NUM_GIDS_SUPPORTED;
+
+ bnxt_qplib_query_version(rcfw, attr->fw_ver);
+
+diff --git a/drivers/infiniband/hw/bnxt_re/qplib_sp.h b/drivers/infiniband/hw/bnxt_re/qplib_sp.h
+index 6404f0da10517..967890cd81f27 100644
+--- a/drivers/infiniband/hw/bnxt_re/qplib_sp.h
++++ b/drivers/infiniband/hw/bnxt_re/qplib_sp.h
+@@ -47,6 +47,7 @@
+ struct bnxt_qplib_dev_attr {
+ #define FW_VER_ARR_LEN 4
+ u8 fw_ver[FW_VER_ARR_LEN];
++#define BNXT_QPLIB_NUM_GIDS_SUPPORTED 256
+ u16 max_sgid;
+ u16 max_mrw;
+ u32 max_qp;
+diff --git a/drivers/input/mouse/trackpoint.c b/drivers/input/mouse/trackpoint.c
+index 3eefee2ee2a12..854d5e7587241 100644
+--- a/drivers/input/mouse/trackpoint.c
++++ b/drivers/input/mouse/trackpoint.c
+@@ -17,10 +17,12 @@
+ #include "trackpoint.h"
+
+ static const char * const trackpoint_variants[] = {
+- [TP_VARIANT_IBM] = "IBM",
+- [TP_VARIANT_ALPS] = "ALPS",
+- [TP_VARIANT_ELAN] = "Elan",
+- [TP_VARIANT_NXP] = "NXP",
++ [TP_VARIANT_IBM] = "IBM",
++ [TP_VARIANT_ALPS] = "ALPS",
++ [TP_VARIANT_ELAN] = "Elan",
++ [TP_VARIANT_NXP] = "NXP",
++ [TP_VARIANT_JYT_SYNAPTICS] = "JYT_Synaptics",
++ [TP_VARIANT_SYNAPTICS] = "Synaptics",
+ };
+
+ /*
+diff --git a/drivers/input/mouse/trackpoint.h b/drivers/input/mouse/trackpoint.h
+index 5cb93ed260856..eb5412904fe07 100644
+--- a/drivers/input/mouse/trackpoint.h
++++ b/drivers/input/mouse/trackpoint.h
+@@ -24,10 +24,12 @@
+ * 0x01 was the original IBM trackpoint, others implement very limited
+ * subset of trackpoint features.
+ */
+-#define TP_VARIANT_IBM 0x01
+-#define TP_VARIANT_ALPS 0x02
+-#define TP_VARIANT_ELAN 0x03
+-#define TP_VARIANT_NXP 0x04
++#define TP_VARIANT_IBM 0x01
++#define TP_VARIANT_ALPS 0x02
++#define TP_VARIANT_ELAN 0x03
++#define TP_VARIANT_NXP 0x04
++#define TP_VARIANT_JYT_SYNAPTICS 0x05
++#define TP_VARIANT_SYNAPTICS 0x06
+
+ /*
+ * Commands
+diff --git a/drivers/input/serio/i8042-x86ia64io.h b/drivers/input/serio/i8042-x86ia64io.h
+index 7d7f737027264..37fb9aa88f9c3 100644
+--- a/drivers/input/serio/i8042-x86ia64io.h
++++ b/drivers/input/serio/i8042-x86ia64io.h
+@@ -548,6 +548,14 @@ static const struct dmi_system_id __initconst i8042_dmi_nomux_table[] = {
+ DMI_MATCH(DMI_PRODUCT_NAME, "Aspire 5738"),
+ },
+ },
++ {
++ /* Entroware Proteus */
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "Entroware"),
++ DMI_MATCH(DMI_PRODUCT_NAME, "Proteus"),
++ DMI_MATCH(DMI_PRODUCT_VERSION, "EL07R4"),
++ },
++ },
+ { }
+ };
+
+@@ -676,6 +684,14 @@ static const struct dmi_system_id __initconst i8042_dmi_reset_table[] = {
+ DMI_MATCH(DMI_PRODUCT_NAME, "33474HU"),
+ },
+ },
++ {
++ /* Entroware Proteus */
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "Entroware"),
++ DMI_MATCH(DMI_PRODUCT_NAME, "Proteus"),
++ DMI_MATCH(DMI_PRODUCT_VERSION, "EL07R4"),
++ },
++ },
+ { }
+ };
+
+diff --git a/drivers/interconnect/core.c b/drivers/interconnect/core.c
+index 9e1ab701785c7..0162a9af93237 100644
+--- a/drivers/interconnect/core.c
++++ b/drivers/interconnect/core.c
+@@ -55,12 +55,18 @@ static int icc_summary_show(struct seq_file *s, void *data)
+
+ icc_summary_show_one(s, n);
+ hlist_for_each_entry(r, &n->req_list, req_node) {
++ u32 avg_bw = 0, peak_bw = 0;
++
+ if (!r->dev)
+ continue;
+
++ if (r->enabled) {
++ avg_bw = r->avg_bw;
++ peak_bw = r->peak_bw;
++ }
++
+ seq_printf(s, " %-27s %12u %12u %12u\n",
+- dev_name(r->dev), r->tag, r->avg_bw,
+- r->peak_bw);
++ dev_name(r->dev), r->tag, avg_bw, peak_bw);
+ }
+ }
+ }
+diff --git a/drivers/iommu/amd/iommu.c b/drivers/iommu/amd/iommu.c
+index 37c74c842f3a3..a51dcf26b09f2 100644
+--- a/drivers/iommu/amd/iommu.c
++++ b/drivers/iommu/amd/iommu.c
+@@ -3831,14 +3831,18 @@ int amd_iommu_activate_guest_mode(void *data)
+ {
+ struct amd_ir_data *ir_data = (struct amd_ir_data *)data;
+ struct irte_ga *entry = (struct irte_ga *) ir_data->entry;
++ u64 valid;
+
+ if (!AMD_IOMMU_GUEST_IR_VAPIC(amd_iommu_guest_ir) ||
+ !entry || entry->lo.fields_vapic.guest_mode)
+ return 0;
+
++ valid = entry->lo.fields_vapic.valid;
++
+ entry->lo.val = 0;
+ entry->hi.val = 0;
+
++ entry->lo.fields_vapic.valid = valid;
+ entry->lo.fields_vapic.guest_mode = 1;
+ entry->lo.fields_vapic.ga_log_intr = 1;
+ entry->hi.fields.ga_root_ptr = ir_data->ga_root_ptr;
+@@ -3855,12 +3859,14 @@ int amd_iommu_deactivate_guest_mode(void *data)
+ struct amd_ir_data *ir_data = (struct amd_ir_data *)data;
+ struct irte_ga *entry = (struct irte_ga *) ir_data->entry;
+ struct irq_cfg *cfg = ir_data->cfg;
+- u64 valid = entry->lo.fields_remap.valid;
++ u64 valid;
+
+ if (!AMD_IOMMU_GUEST_IR_VAPIC(amd_iommu_guest_ir) ||
+ !entry || !entry->lo.fields_vapic.guest_mode)
+ return 0;
+
++ valid = entry->lo.fields_remap.valid;
++
+ entry->lo.val = 0;
+ entry->hi.val = 0;
+
+diff --git a/drivers/md/dm-table.c b/drivers/md/dm-table.c
+index 8277b959e00bd..6a4057b844e24 100644
+--- a/drivers/md/dm-table.c
++++ b/drivers/md/dm-table.c
+@@ -865,10 +865,14 @@ EXPORT_SYMBOL_GPL(dm_table_set_type);
+ int device_supports_dax(struct dm_target *ti, struct dm_dev *dev,
+ sector_t start, sector_t len, void *data)
+ {
+- int blocksize = *(int *) data;
++ int blocksize = *(int *) data, id;
++ bool rc;
+
+- return generic_fsdax_supported(dev->dax_dev, dev->bdev, blocksize,
+- start, len);
++ id = dax_read_lock();
++ rc = dax_supported(dev->dax_dev, dev->bdev, blocksize, start, len);
++ dax_read_unlock(id);
++
++ return rc;
+ }
+
+ /* Check devices support synchronous DAX */
+diff --git a/drivers/md/dm.c b/drivers/md/dm.c
+index 88b391ff9bea7..49c758fef8cb6 100644
+--- a/drivers/md/dm.c
++++ b/drivers/md/dm.c
+@@ -1136,15 +1136,16 @@ static bool dm_dax_supported(struct dax_device *dax_dev, struct block_device *bd
+ {
+ struct mapped_device *md = dax_get_private(dax_dev);
+ struct dm_table *map;
++ bool ret = false;
+ int srcu_idx;
+- bool ret;
+
+ map = dm_get_live_table(md, &srcu_idx);
+ if (!map)
+- return false;
++ goto out;
+
+ ret = dm_table_supports_dax(map, device_supports_dax, &blocksize);
+
++out:
+ dm_put_live_table(md, srcu_idx);
+
+ return ret;
+diff --git a/drivers/misc/habanalabs/debugfs.c b/drivers/misc/habanalabs/debugfs.c
+index 6c2b9cf45e831..650922061bdc7 100644
+--- a/drivers/misc/habanalabs/debugfs.c
++++ b/drivers/misc/habanalabs/debugfs.c
+@@ -982,7 +982,7 @@ static ssize_t hl_clk_gate_read(struct file *f, char __user *buf,
+ return 0;
+
+ sprintf(tmp_buf, "0x%llx\n", hdev->clock_gating_mask);
+- rc = simple_read_from_buffer(buf, strlen(tmp_buf) + 1, ppos, tmp_buf,
++ rc = simple_read_from_buffer(buf, count, ppos, tmp_buf,
+ strlen(tmp_buf) + 1);
+
+ return rc;
+diff --git a/drivers/misc/habanalabs/include/gaudi/gaudi_masks.h b/drivers/misc/habanalabs/include/gaudi/gaudi_masks.h
+index 96f08050ef0fb..6c50f015eda47 100644
+--- a/drivers/misc/habanalabs/include/gaudi/gaudi_masks.h
++++ b/drivers/misc/habanalabs/include/gaudi/gaudi_masks.h
+@@ -378,15 +378,15 @@ enum axi_id {
+ ((((y) & RAZWI_INITIATOR_Y_MASK) << RAZWI_INITIATOR_Y_SHIFT) | \
+ (((x) & RAZWI_INITIATOR_X_MASK) << RAZWI_INITIATOR_X_SHIFT))
+
+-#define RAZWI_INITIATOR_ID_X_Y_TPC0_NIC0 RAZWI_INITIATOR_ID_X_Y(1, 0)
+-#define RAZWI_INITIATOR_ID_X_Y_TPC1 RAZWI_INITIATOR_ID_X_Y(2, 0)
+-#define RAZWI_INITIATOR_ID_X_Y_MME0_0 RAZWI_INITIATOR_ID_X_Y(3, 0)
+-#define RAZWI_INITIATOR_ID_X_Y_MME0_1 RAZWI_INITIATOR_ID_X_Y(4, 0)
+-#define RAZWI_INITIATOR_ID_X_Y_MME1_0 RAZWI_INITIATOR_ID_X_Y(5, 0)
+-#define RAZWI_INITIATOR_ID_X_Y_MME1_1 RAZWI_INITIATOR_ID_X_Y(6, 0)
+-#define RAZWI_INITIATOR_ID_X_Y_TPC2 RAZWI_INITIATOR_ID_X_Y(7, 0)
++#define RAZWI_INITIATOR_ID_X_Y_TPC0_NIC0 RAZWI_INITIATOR_ID_X_Y(1, 1)
++#define RAZWI_INITIATOR_ID_X_Y_TPC1 RAZWI_INITIATOR_ID_X_Y(2, 1)
++#define RAZWI_INITIATOR_ID_X_Y_MME0_0 RAZWI_INITIATOR_ID_X_Y(3, 1)
++#define RAZWI_INITIATOR_ID_X_Y_MME0_1 RAZWI_INITIATOR_ID_X_Y(4, 1)
++#define RAZWI_INITIATOR_ID_X_Y_MME1_0 RAZWI_INITIATOR_ID_X_Y(5, 1)
++#define RAZWI_INITIATOR_ID_X_Y_MME1_1 RAZWI_INITIATOR_ID_X_Y(6, 1)
++#define RAZWI_INITIATOR_ID_X_Y_TPC2 RAZWI_INITIATOR_ID_X_Y(7, 1)
+ #define RAZWI_INITIATOR_ID_X_Y_TPC3_PCI_CPU_PSOC \
+- RAZWI_INITIATOR_ID_X_Y(8, 0)
++ RAZWI_INITIATOR_ID_X_Y(8, 1)
+ #define RAZWI_INITIATOR_ID_X_Y_DMA_IF_W_S_0 RAZWI_INITIATOR_ID_X_Y(0, 1)
+ #define RAZWI_INITIATOR_ID_X_Y_DMA_IF_E_S_0 RAZWI_INITIATOR_ID_X_Y(9, 1)
+ #define RAZWI_INITIATOR_ID_X_Y_DMA_IF_W_S_1 RAZWI_INITIATOR_ID_X_Y(0, 2)
+@@ -395,14 +395,14 @@ enum axi_id {
+ #define RAZWI_INITIATOR_ID_X_Y_DMA_IF_E_N_0 RAZWI_INITIATOR_ID_X_Y(9, 3)
+ #define RAZWI_INITIATOR_ID_X_Y_DMA_IF_W_N_1 RAZWI_INITIATOR_ID_X_Y(0, 4)
+ #define RAZWI_INITIATOR_ID_X_Y_DMA_IF_E_N_1 RAZWI_INITIATOR_ID_X_Y(9, 4)
+-#define RAZWI_INITIATOR_ID_X_Y_TPC4_NIC1_NIC2 RAZWI_INITIATOR_ID_X_Y(1, 5)
+-#define RAZWI_INITIATOR_ID_X_Y_TPC5 RAZWI_INITIATOR_ID_X_Y(2, 5)
+-#define RAZWI_INITIATOR_ID_X_Y_MME2_0 RAZWI_INITIATOR_ID_X_Y(3, 5)
+-#define RAZWI_INITIATOR_ID_X_Y_MME2_1 RAZWI_INITIATOR_ID_X_Y(4, 5)
+-#define RAZWI_INITIATOR_ID_X_Y_MME3_0 RAZWI_INITIATOR_ID_X_Y(5, 5)
+-#define RAZWI_INITIATOR_ID_X_Y_MME3_1 RAZWI_INITIATOR_ID_X_Y(6, 5)
+-#define RAZWI_INITIATOR_ID_X_Y_TPC6 RAZWI_INITIATOR_ID_X_Y(7, 5)
+-#define RAZWI_INITIATOR_ID_X_Y_TPC7_NIC4_NIC5 RAZWI_INITIATOR_ID_X_Y(8, 5)
++#define RAZWI_INITIATOR_ID_X_Y_TPC4_NIC1_NIC2 RAZWI_INITIATOR_ID_X_Y(1, 6)
++#define RAZWI_INITIATOR_ID_X_Y_TPC5 RAZWI_INITIATOR_ID_X_Y(2, 6)
++#define RAZWI_INITIATOR_ID_X_Y_MME2_0 RAZWI_INITIATOR_ID_X_Y(3, 6)
++#define RAZWI_INITIATOR_ID_X_Y_MME2_1 RAZWI_INITIATOR_ID_X_Y(4, 6)
++#define RAZWI_INITIATOR_ID_X_Y_MME3_0 RAZWI_INITIATOR_ID_X_Y(5, 6)
++#define RAZWI_INITIATOR_ID_X_Y_MME3_1 RAZWI_INITIATOR_ID_X_Y(6, 6)
++#define RAZWI_INITIATOR_ID_X_Y_TPC6 RAZWI_INITIATOR_ID_X_Y(7, 6)
++#define RAZWI_INITIATOR_ID_X_Y_TPC7_NIC4_NIC5 RAZWI_INITIATOR_ID_X_Y(8, 6)
+
+ #define PSOC_ETR_AXICTL_PROTCTRLBIT1_SHIFT 1
+
+diff --git a/drivers/net/hyperv/netvsc_drv.c b/drivers/net/hyperv/netvsc_drv.c
+index 6b81c04ab5e29..47159b31e6b39 100644
+--- a/drivers/net/hyperv/netvsc_drv.c
++++ b/drivers/net/hyperv/netvsc_drv.c
+@@ -367,7 +367,7 @@ static u16 netvsc_select_queue(struct net_device *ndev, struct sk_buff *skb,
+ }
+ rcu_read_unlock();
+
+- while (unlikely(txq >= ndev->real_num_tx_queues))
++ while (txq >= ndev->real_num_tx_queues)
+ txq -= ndev->real_num_tx_queues;
+
+ return txq;
+diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c
+index 1a2b6910509ca..92c966ac34c20 100644
+--- a/drivers/nvme/host/fc.c
++++ b/drivers/nvme/host/fc.c
+@@ -2158,6 +2158,7 @@ nvme_fc_term_aen_ops(struct nvme_fc_ctrl *ctrl)
+ struct nvme_fc_fcp_op *aen_op;
+ int i;
+
++ cancel_work_sync(&ctrl->ctrl.async_event_work);
+ aen_op = ctrl->aen_ops;
+ for (i = 0; i < NVME_NR_AEN_COMMANDS; i++, aen_op++) {
+ __nvme_fc_exit_request(ctrl, aen_op);
+diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
+index 6c07bb55b0f83..4a0bc8927048a 100644
+--- a/drivers/nvme/host/rdma.c
++++ b/drivers/nvme/host/rdma.c
+@@ -809,6 +809,7 @@ static void nvme_rdma_destroy_admin_queue(struct nvme_rdma_ctrl *ctrl,
+ blk_mq_free_tag_set(ctrl->ctrl.admin_tagset);
+ }
+ if (ctrl->async_event_sqe.data) {
++ cancel_work_sync(&ctrl->ctrl.async_event_work);
+ nvme_rdma_free_qe(ctrl->device->dev, &ctrl->async_event_sqe,
+ sizeof(struct nvme_command), DMA_TO_DEVICE);
+ ctrl->async_event_sqe.data = NULL;
+diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
+index f1f66bf96cbb9..24467eea73999 100644
+--- a/drivers/nvme/host/tcp.c
++++ b/drivers/nvme/host/tcp.c
+@@ -1567,6 +1567,7 @@ static struct blk_mq_tag_set *nvme_tcp_alloc_tagset(struct nvme_ctrl *nctrl,
+ static void nvme_tcp_free_admin_queue(struct nvme_ctrl *ctrl)
+ {
+ if (to_tcp_ctrl(ctrl)->async_req.pdu) {
++ cancel_work_sync(&ctrl->async_event_work);
+ nvme_tcp_free_async_req(to_tcp_ctrl(ctrl));
+ to_tcp_ctrl(ctrl)->async_req.pdu = NULL;
+ }
+diff --git a/drivers/nvme/target/loop.c b/drivers/nvme/target/loop.c
+index 6344e73c93548..9c4f257962423 100644
+--- a/drivers/nvme/target/loop.c
++++ b/drivers/nvme/target/loop.c
+@@ -583,6 +583,9 @@ static struct nvme_ctrl *nvme_loop_create_ctrl(struct device *dev,
+ if (ret)
+ goto out_put_ctrl;
+
++ changed = nvme_change_ctrl_state(&ctrl->ctrl, NVME_CTRL_CONNECTING);
++ WARN_ON_ONCE(!changed);
++
+ ret = -ENOMEM;
+
+ ctrl->ctrl.sqsize = opts->queue_size - 1;
+diff --git a/drivers/phy/ti/phy-omap-usb2.c b/drivers/phy/ti/phy-omap-usb2.c
+index cb2dd3230fa76..507f79d14adb8 100644
+--- a/drivers/phy/ti/phy-omap-usb2.c
++++ b/drivers/phy/ti/phy-omap-usb2.c
+@@ -22,10 +22,15 @@
+ #include <linux/mfd/syscon.h>
+ #include <linux/regmap.h>
+ #include <linux/of_platform.h>
++#include <linux/sys_soc.h>
+
+ #define USB2PHY_ANA_CONFIG1 0x4c
+ #define USB2PHY_DISCON_BYP_LATCH BIT(31)
+
++#define USB2PHY_CHRG_DET 0x14
++#define USB2PHY_CHRG_DET_USE_CHG_DET_REG BIT(29)
++#define USB2PHY_CHRG_DET_DIS_CHG_DET BIT(28)
++
+ /* SoC Specific USB2_OTG register definitions */
+ #define AM654_USB2_OTG_PD BIT(8)
+ #define AM654_USB2_VBUS_DET_EN BIT(5)
+@@ -43,6 +48,7 @@
+ #define OMAP_USB2_HAS_START_SRP BIT(0)
+ #define OMAP_USB2_HAS_SET_VBUS BIT(1)
+ #define OMAP_USB2_CALIBRATE_FALSE_DISCONNECT BIT(2)
++#define OMAP_USB2_DISABLE_CHRG_DET BIT(3)
+
+ struct omap_usb {
+ struct usb_phy phy;
+@@ -236,6 +242,13 @@ static int omap_usb_init(struct phy *x)
+ omap_usb_writel(phy->phy_base, USB2PHY_ANA_CONFIG1, val);
+ }
+
++ if (phy->flags & OMAP_USB2_DISABLE_CHRG_DET) {
++ val = omap_usb_readl(phy->phy_base, USB2PHY_CHRG_DET);
++ val |= USB2PHY_CHRG_DET_USE_CHG_DET_REG |
++ USB2PHY_CHRG_DET_DIS_CHG_DET;
++ omap_usb_writel(phy->phy_base, USB2PHY_CHRG_DET, val);
++ }
++
+ return 0;
+ }
+
+@@ -329,6 +342,26 @@ static const struct of_device_id omap_usb2_id_table[] = {
+ };
+ MODULE_DEVICE_TABLE(of, omap_usb2_id_table);
+
++static void omap_usb2_init_errata(struct omap_usb *phy)
++{
++ static const struct soc_device_attribute am65x_sr10_soc_devices[] = {
++ { .family = "AM65X", .revision = "SR1.0" },
++ { /* sentinel */ }
++ };
++
++ /*
++ * Errata i2075: USB2PHY: USB2PHY Charger Detect is Enabled by
++ * Default Without VBUS Presence.
++ *
++ * AM654x SR1.0 has a silicon bug due to which D+ is pulled high after
++ * POR, which could cause enumeration failure with some USB hubs.
++ * Disabling the USB2_PHY Charger Detect function will put D+
++ * into the normal state.
++ */
++ if (soc_device_match(am65x_sr10_soc_devices))
++ phy->flags |= OMAP_USB2_DISABLE_CHRG_DET;
++}
++
+ static int omap_usb2_probe(struct platform_device *pdev)
+ {
+ struct omap_usb *phy;
+@@ -366,14 +399,14 @@ static int omap_usb2_probe(struct platform_device *pdev)
+ phy->mask = phy_data->mask;
+ phy->power_on = phy_data->power_on;
+ phy->power_off = phy_data->power_off;
++ phy->flags = phy_data->flags;
+
+- if (phy_data->flags & OMAP_USB2_CALIBRATE_FALSE_DISCONNECT) {
+- res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+- phy->phy_base = devm_ioremap_resource(&pdev->dev, res);
+- if (IS_ERR(phy->phy_base))
+- return PTR_ERR(phy->phy_base);
+- phy->flags |= OMAP_USB2_CALIBRATE_FALSE_DISCONNECT;
+- }
++ omap_usb2_init_errata(phy);
++
++ res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
++ phy->phy_base = devm_ioremap_resource(&pdev->dev, res);
++ if (IS_ERR(phy->phy_base))
++ return PTR_ERR(phy->phy_base);
+
+ phy->syscon_phy_power = syscon_regmap_lookup_by_phandle(node,
+ "syscon-phy-power");
+diff --git a/drivers/rapidio/Kconfig b/drivers/rapidio/Kconfig
+index e4c422d806bee..b9f8514909bf0 100644
+--- a/drivers/rapidio/Kconfig
++++ b/drivers/rapidio/Kconfig
+@@ -37,7 +37,7 @@ config RAPIDIO_ENABLE_RX_TX_PORTS
+ config RAPIDIO_DMA_ENGINE
+ bool "DMA Engine support for RapidIO"
+ depends on RAPIDIO
+- select DMADEVICES
++ depends on DMADEVICES
+ select DMA_ENGINE
+ help
+ Say Y here if you want to use DMA Engine frameork for RapidIO data
+diff --git a/drivers/regulator/pwm-regulator.c b/drivers/regulator/pwm-regulator.c
+index 638329bd0745e..62ad7c4e7e7c8 100644
+--- a/drivers/regulator/pwm-regulator.c
++++ b/drivers/regulator/pwm-regulator.c
+@@ -279,7 +279,7 @@ static int pwm_regulator_init_table(struct platform_device *pdev,
+ return ret;
+ }
+
+- drvdata->state = -EINVAL;
++ drvdata->state = -ENOTRECOVERABLE;
+ drvdata->duty_cycle_table = duty_cycle_table;
+ drvdata->desc.ops = &pwm_regulator_voltage_table_ops;
+ drvdata->desc.n_voltages = length / sizeof(*duty_cycle_table);
+diff --git a/drivers/s390/crypto/zcrypt_ccamisc.c b/drivers/s390/crypto/zcrypt_ccamisc.c
+index 1b835398feec3..d1e3ee9ddf287 100644
+--- a/drivers/s390/crypto/zcrypt_ccamisc.c
++++ b/drivers/s390/crypto/zcrypt_ccamisc.c
+@@ -1685,9 +1685,9 @@ int cca_findcard2(u32 **apqns, u32 *nr_apqns, u16 cardnr, u16 domain,
+ *nr_apqns = 0;
+
+ /* fetch status of all crypto cards */
+- device_status = kmalloc_array(MAX_ZDEV_ENTRIES_EXT,
+- sizeof(struct zcrypt_device_status_ext),
+- GFP_KERNEL);
++ device_status = kvmalloc_array(MAX_ZDEV_ENTRIES_EXT,
++ sizeof(struct zcrypt_device_status_ext),
++ GFP_KERNEL);
+ if (!device_status)
+ return -ENOMEM;
+ zcrypt_device_status_mask_ext(device_status);
+@@ -1755,7 +1755,7 @@ int cca_findcard2(u32 **apqns, u32 *nr_apqns, u16 cardnr, u16 domain,
+ verify = 0;
+ }
+
+- kfree(device_status);
++ kvfree(device_status);
+ return rc;
+ }
+ EXPORT_SYMBOL(cca_findcard2);
+diff --git a/drivers/scsi/libfc/fc_disc.c b/drivers/scsi/libfc/fc_disc.c
+index e00dc4693fcbd..589ddf003886e 100644
+--- a/drivers/scsi/libfc/fc_disc.c
++++ b/drivers/scsi/libfc/fc_disc.c
+@@ -634,8 +634,6 @@ free_fp:
+ fc_frame_free(fp);
+ out:
+ kref_put(&rdata->kref, fc_rport_destroy);
+- if (!IS_ERR(fp))
+- fc_frame_free(fp);
+ }
+
+ /**
+diff --git a/drivers/scsi/libsas/sas_discover.c b/drivers/scsi/libsas/sas_discover.c
+index daf951b0b3f55..13ad2b3d314e2 100644
+--- a/drivers/scsi/libsas/sas_discover.c
++++ b/drivers/scsi/libsas/sas_discover.c
+@@ -182,10 +182,11 @@ int sas_notify_lldd_dev_found(struct domain_device *dev)
+ pr_warn("driver on host %s cannot handle device %016llx, error:%d\n",
+ dev_name(sas_ha->dev),
+ SAS_ADDR(dev->sas_addr), res);
++ return res;
+ }
+ set_bit(SAS_DEV_FOUND, &dev->state);
+ kref_get(&dev->kref);
+- return res;
++ return 0;
+ }
+
+
+diff --git a/drivers/scsi/lpfc/lpfc_els.c b/drivers/scsi/lpfc/lpfc_els.c
+index 3d670568a2760..519c7be404e75 100644
+--- a/drivers/scsi/lpfc/lpfc_els.c
++++ b/drivers/scsi/lpfc/lpfc_els.c
+@@ -3512,6 +3512,9 @@ lpfc_issue_els_rdf(struct lpfc_vport *vport, uint8_t retry)
+ FC_TLV_DESC_LENGTH_FROM_SZ(prdf->reg_d1));
+ prdf->reg_d1.reg_desc.count = cpu_to_be32(ELS_RDF_REG_TAG_CNT);
+ prdf->reg_d1.desc_tags[0] = cpu_to_be32(ELS_DTAG_LNK_INTEGRITY);
++ prdf->reg_d1.desc_tags[1] = cpu_to_be32(ELS_DTAG_DELIVERY);
++ prdf->reg_d1.desc_tags[2] = cpu_to_be32(ELS_DTAG_PEER_CONGEST);
++ prdf->reg_d1.desc_tags[3] = cpu_to_be32(ELS_DTAG_CONGESTION);
+
+ lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_ELS_CMD,
+ "Issue RDF: did:x%x",
+@@ -4644,7 +4647,9 @@ lpfc_cmpl_els_rsp(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
+ out:
+ if (ndlp && NLP_CHK_NODE_ACT(ndlp) && shost) {
+ spin_lock_irq(shost->host_lock);
+- ndlp->nlp_flag &= ~(NLP_ACC_REGLOGIN | NLP_RM_DFLT_RPI);
++ if (mbox)
++ ndlp->nlp_flag &= ~NLP_ACC_REGLOGIN;
++ ndlp->nlp_flag &= ~NLP_RM_DFLT_RPI;
+ spin_unlock_irq(shost->host_lock);
+
+ /* If the node is not being used by another discovery thread,
+diff --git a/drivers/scsi/lpfc/lpfc_hw4.h b/drivers/scsi/lpfc/lpfc_hw4.h
+index 6dfff03765471..c7085769170d7 100644
+--- a/drivers/scsi/lpfc/lpfc_hw4.h
++++ b/drivers/scsi/lpfc/lpfc_hw4.h
+@@ -4797,7 +4797,7 @@ struct send_frame_wqe {
+ uint32_t fc_hdr_wd5; /* word 15 */
+ };
+
+-#define ELS_RDF_REG_TAG_CNT 1
++#define ELS_RDF_REG_TAG_CNT 4
+ struct lpfc_els_rdf_reg_desc {
+ struct fc_df_desc_fpin_reg reg_desc; /* descriptor header */
+ __be32 desc_tags[ELS_RDF_REG_TAG_CNT];
+diff --git a/drivers/scsi/pm8001/pm8001_sas.c b/drivers/scsi/pm8001/pm8001_sas.c
+index b7cbc312843e9..da9fd8a5f8cae 100644
+--- a/drivers/scsi/pm8001/pm8001_sas.c
++++ b/drivers/scsi/pm8001/pm8001_sas.c
+@@ -818,7 +818,7 @@ pm8001_exec_internal_task_abort(struct pm8001_hba_info *pm8001_ha,
+
+ res = pm8001_tag_alloc(pm8001_ha, &ccb_tag);
+ if (res)
+- return res;
++ goto ex_err;
+ ccb = &pm8001_ha->ccb_info[ccb_tag];
+ ccb->device = pm8001_dev;
+ ccb->ccb_tag = ccb_tag;
+diff --git a/drivers/spi/spi-loopback-test.c b/drivers/spi/spi-loopback-test.c
+index b6d79cd156fb5..da1153ec9f0e3 100644
+--- a/drivers/spi/spi-loopback-test.c
++++ b/drivers/spi/spi-loopback-test.c
+@@ -90,7 +90,7 @@ static struct spi_test spi_tests[] = {
+ {
+ .description = "tx/rx-transfer - crossing PAGE_SIZE",
+ .fill_option = FILL_COUNT_8,
+- .iterate_len = { ITERATE_MAX_LEN },
++ .iterate_len = { ITERATE_LEN },
+ .iterate_tx_align = ITERATE_ALIGN,
+ .iterate_rx_align = ITERATE_ALIGN,
+ .transfer_count = 1,
+diff --git a/drivers/spi/spi.c b/drivers/spi/spi.c
+index 5c5a95792c0d3..65ca552654794 100644
+--- a/drivers/spi/spi.c
++++ b/drivers/spi/spi.c
+@@ -1305,8 +1305,6 @@ out:
+ if (msg->status && ctlr->handle_err)
+ ctlr->handle_err(ctlr, msg);
+
+- spi_res_release(ctlr, msg);
+-
+ spi_finalize_current_message(ctlr);
+
+ return ret;
+@@ -1694,6 +1692,13 @@ void spi_finalize_current_message(struct spi_controller *ctlr)
+
+ spi_unmap_msg(ctlr, mesg);
+
++ /* In the prepare_messages callback the spi bus has the opportunity to
++ * split a transfer to smaller chunks.
++ * Release splited transfers here since spi_map_msg is done on the
++ * splited transfers.
++ */
++ spi_res_release(ctlr, mesg);
++
+ if (ctlr->cur_msg_prepared && ctlr->unprepare_message) {
+ ret = ctlr->unprepare_message(ctlr, mesg);
+ if (ret) {
+diff --git a/drivers/thunderbolt/eeprom.c b/drivers/thunderbolt/eeprom.c
+index b451a5aa90b50..2db6532d39230 100644
+--- a/drivers/thunderbolt/eeprom.c
++++ b/drivers/thunderbolt/eeprom.c
+@@ -7,6 +7,7 @@
+ */
+
+ #include <linux/crc32.h>
++#include <linux/delay.h>
+ #include <linux/property.h>
+ #include <linux/slab.h>
+ #include "tb.h"
+@@ -389,8 +390,8 @@ static int tb_drom_parse_entries(struct tb_switch *sw)
+ struct tb_drom_entry_header *entry = (void *) (sw->drom + pos);
+ if (pos + 1 == drom_size || pos + entry->len > drom_size
+ || !entry->len) {
+- tb_sw_warn(sw, "drom buffer overrun, aborting\n");
+- return -EIO;
++ tb_sw_warn(sw, "DROM buffer overrun\n");
++ return -EILSEQ;
+ }
+
+ switch (entry->type) {
+@@ -526,7 +527,8 @@ int tb_drom_read(struct tb_switch *sw)
+ u16 size;
+ u32 crc;
+ struct tb_drom_header *header;
+- int res;
++ int res, retries = 1;
++
+ if (sw->drom)
+ return 0;
+
+@@ -611,7 +613,17 @@ parse:
+ tb_sw_warn(sw, "drom device_rom_revision %#x unknown\n",
+ header->device_rom_revision);
+
+- return tb_drom_parse_entries(sw);
++ res = tb_drom_parse_entries(sw);
++ /* If the DROM parsing fails, wait a moment and retry once */
++ if (res == -EILSEQ && retries--) {
++ tb_sw_warn(sw, "parsing DROM failed, retrying\n");
++ msleep(100);
++ res = tb_drom_read_n(sw, 0, sw->drom, size);
++ if (!res)
++ goto parse;
++ }
++
++ return res;
+ err:
+ kfree(sw->drom);
+ sw->drom = NULL;
+diff --git a/drivers/tty/serial/8250/8250_pci.c b/drivers/tty/serial/8250/8250_pci.c
+index 1a74d511b02a5..81c0b67f22640 100644
+--- a/drivers/tty/serial/8250/8250_pci.c
++++ b/drivers/tty/serial/8250/8250_pci.c
+@@ -5566,6 +5566,17 @@ static const struct pci_device_id serial_pci_tbl[] = {
+ PCI_ANY_ID, PCI_ANY_ID,
+ 0, 0, pbn_wch384_4 },
+
++ /*
++ * Realtek RealManage
++ */
++ { PCI_VENDOR_ID_REALTEK, 0x816a,
++ PCI_ANY_ID, PCI_ANY_ID,
++ 0, 0, pbn_b0_1_115200 },
++
++ { PCI_VENDOR_ID_REALTEK, 0x816b,
++ PCI_ANY_ID, PCI_ANY_ID,
++ 0, 0, pbn_b0_1_115200 },
++
+ /* Fintek PCI serial cards */
+ { PCI_DEVICE(0x1c29, 0x1104), .driver_data = pbn_fintek_4 },
+ { PCI_DEVICE(0x1c29, 0x1108), .driver_data = pbn_fintek_8 },
+diff --git a/drivers/tty/serial/serial_core.c b/drivers/tty/serial/serial_core.c
+index 5f3daabdc916e..ef1cdc82bc1f1 100644
+--- a/drivers/tty/serial/serial_core.c
++++ b/drivers/tty/serial/serial_core.c
+@@ -1914,24 +1914,12 @@ static inline bool uart_console_enabled(struct uart_port *port)
+ return uart_console(port) && (port->cons->flags & CON_ENABLED);
+ }
+
+-static void __uart_port_spin_lock_init(struct uart_port *port)
++static void uart_port_spin_lock_init(struct uart_port *port)
+ {
+ spin_lock_init(&port->lock);
+ lockdep_set_class(&port->lock, &port_lock_key);
+ }
+
+-/*
+- * Ensure that the serial console lock is initialised early.
+- * If this port is a console, then the spinlock is already initialised.
+- */
+-static inline void uart_port_spin_lock_init(struct uart_port *port)
+-{
+- if (uart_console(port))
+- return;
+-
+- __uart_port_spin_lock_init(port);
+-}
+-
+ #if defined(CONFIG_SERIAL_CORE_CONSOLE) || defined(CONFIG_CONSOLE_POLL)
+ /**
+ * uart_console_write - write a console message to a serial port
+@@ -2084,7 +2072,15 @@ uart_set_options(struct uart_port *port, struct console *co,
+ struct ktermios termios;
+ static struct ktermios dummy;
+
+- uart_port_spin_lock_init(port);
++ /*
++ * Ensure that the serial-console lock is initialised early.
++ *
++ * Note that the console-enabled check is needed because of kgdboc,
++ * which can end up calling uart_set_options() for an already enabled
++ * console via tty_find_polling_driver() and uart_poll_init().
++ */
++ if (!uart_console_enabled(port) && !port->console_reinit)
++ uart_port_spin_lock_init(port);
+
+ memset(&termios, 0, sizeof(struct ktermios));
+
+@@ -2375,13 +2371,6 @@ uart_configure_port(struct uart_driver *drv, struct uart_state *state,
+ /* Power up port for set_mctrl() */
+ uart_change_pm(state, UART_PM_STATE_ON);
+
+- /*
+- * If this driver supports console, and it hasn't been
+- * successfully registered yet, initialise spin lock for it.
+- */
+- if (port->cons && !(port->cons->flags & CON_ENABLED))
+- __uart_port_spin_lock_init(port);
+-
+ /*
+ * Ensure that the modem control lines are de-activated.
+ * keep the DTR setting that is set in uart_set_options()
+@@ -2798,10 +2787,12 @@ static ssize_t console_store(struct device *dev,
+ if (oldconsole && !newconsole) {
+ ret = unregister_console(uport->cons);
+ } else if (!oldconsole && newconsole) {
+- if (uart_console(uport))
++ if (uart_console(uport)) {
++ uport->console_reinit = 1;
+ register_console(uport->cons);
+- else
++ } else {
+ ret = -ENOENT;
++ }
+ }
+ } else {
+ ret = -ENXIO;
+@@ -2897,7 +2888,12 @@ int uart_add_one_port(struct uart_driver *drv, struct uart_port *uport)
+ goto out;
+ }
+
+- uart_port_spin_lock_init(uport);
++ /*
++ * If this port is in use as a console then the spinlock is already
++ * initialised.
++ */
++ if (!uart_console_enabled(uport))
++ uart_port_spin_lock_init(uport);
+
+ if (uport->cons && uport->dev)
+ of_console_check(uport->dev->of_node, uport->cons->name, uport->line);
+diff --git a/drivers/usb/class/usblp.c b/drivers/usb/class/usblp.c
+index 084c48c5848fc..67cbd42421bee 100644
+--- a/drivers/usb/class/usblp.c
++++ b/drivers/usb/class/usblp.c
+@@ -827,6 +827,11 @@ static ssize_t usblp_read(struct file *file, char __user *buffer, size_t len, lo
+ if (rv < 0)
+ return rv;
+
++ if (!usblp->present) {
++ count = -ENODEV;
++ goto done;
++ }
++
+ if ((avail = usblp->rstatus) < 0) {
+ printk(KERN_ERR "usblp%d: error %d reading from printer\n",
+ usblp->minor, (int)avail);
+diff --git a/drivers/usb/core/quirks.c b/drivers/usb/core/quirks.c
+index 2f068e525a374..4ee8105310989 100644
+--- a/drivers/usb/core/quirks.c
++++ b/drivers/usb/core/quirks.c
+@@ -397,6 +397,10 @@ static const struct usb_device_id usb_quirk_list[] = {
+ /* Generic RTL8153 based ethernet adapters */
+ { USB_DEVICE(0x0bda, 0x8153), .driver_info = USB_QUIRK_NO_LPM },
+
++ /* SONiX USB DEVICE Touchpad */
++ { USB_DEVICE(0x0c45, 0x7056), .driver_info =
++ USB_QUIRK_IGNORE_REMOTE_WAKEUP },
++
+ /* Action Semiconductor flash disk */
+ { USB_DEVICE(0x10d6, 0x2200), .driver_info =
+ USB_QUIRK_STRING_FETCH_255 },
+diff --git a/drivers/usb/host/ehci-hcd.c b/drivers/usb/host/ehci-hcd.c
+index cf2b7ae93b7e9..0e5c56e065591 100644
+--- a/drivers/usb/host/ehci-hcd.c
++++ b/drivers/usb/host/ehci-hcd.c
+@@ -22,6 +22,7 @@
+ #include <linux/interrupt.h>
+ #include <linux/usb.h>
+ #include <linux/usb/hcd.h>
++#include <linux/usb/otg.h>
+ #include <linux/moduleparam.h>
+ #include <linux/dma-mapping.h>
+ #include <linux/debugfs.h>
+diff --git a/drivers/usb/host/ehci-hub.c b/drivers/usb/host/ehci-hub.c
+index ce0eaf7d7c12a..087402aec5cbe 100644
+--- a/drivers/usb/host/ehci-hub.c
++++ b/drivers/usb/host/ehci-hub.c
+@@ -14,7 +14,6 @@
+ */
+
+ /*-------------------------------------------------------------------------*/
+-#include <linux/usb/otg.h>
+
+ #define PORT_WAKE_BITS (PORT_WKOC_E|PORT_WKDISC_E|PORT_WKCONN_E)
+
+diff --git a/drivers/usb/storage/uas.c b/drivers/usb/storage/uas.c
+index d592071119ba6..13696f03f800d 100644
+--- a/drivers/usb/storage/uas.c
++++ b/drivers/usb/storage/uas.c
+@@ -662,8 +662,7 @@ static int uas_queuecommand_lck(struct scsi_cmnd *cmnd,
+ if (devinfo->resetting) {
+ cmnd->result = DID_ERROR << 16;
+ cmnd->scsi_done(cmnd);
+- spin_unlock_irqrestore(&devinfo->lock, flags);
+- return 0;
++ goto zombie;
+ }
+
+ /* Find a free uas-tag */
+@@ -699,6 +698,16 @@ static int uas_queuecommand_lck(struct scsi_cmnd *cmnd,
+ cmdinfo->state &= ~(SUBMIT_DATA_IN_URB | SUBMIT_DATA_OUT_URB);
+
+ err = uas_submit_urbs(cmnd, devinfo);
++ /*
++ * in case of fatal errors the SCSI layer is peculiar
++ * a command that has finished is a success for the purpose
++ * of queueing, no matter how fatal the error
++ */
++ if (err == -ENODEV) {
++ cmnd->result = DID_ERROR << 16;
++ cmnd->scsi_done(cmnd);
++ goto zombie;
++ }
+ if (err) {
+ /* If we did nothing, give up now */
+ if (cmdinfo->state & SUBMIT_STATUS_URB) {
+@@ -709,6 +718,7 @@ static int uas_queuecommand_lck(struct scsi_cmnd *cmnd,
+ }
+
+ devinfo->cmnd[idx] = cmnd;
++zombie:
+ spin_unlock_irqrestore(&devinfo->lock, flags);
+ return 0;
+ }
+diff --git a/drivers/usb/typec/ucsi/ucsi.c b/drivers/usb/typec/ucsi/ucsi.c
+index 2999217c81090..e2af10301c779 100644
+--- a/drivers/usb/typec/ucsi/ucsi.c
++++ b/drivers/usb/typec/ucsi/ucsi.c
+@@ -216,14 +216,18 @@ void ucsi_altmode_update_active(struct ucsi_connector *con)
+ con->partner_altmode[i] == altmode);
+ }
+
+-static u8 ucsi_altmode_next_mode(struct typec_altmode **alt, u16 svid)
++static int ucsi_altmode_next_mode(struct typec_altmode **alt, u16 svid)
+ {
+ u8 mode = 1;
+ int i;
+
+- for (i = 0; alt[i]; i++)
++ for (i = 0; alt[i]; i++) {
++ if (i > MODE_DISCOVERY_MAX)
++ return -ERANGE;
++
+ if (alt[i]->svid == svid)
+ mode++;
++ }
+
+ return mode;
+ }
+@@ -258,8 +262,11 @@ static int ucsi_register_altmode(struct ucsi_connector *con,
+ goto err;
+ }
+
+- desc->mode = ucsi_altmode_next_mode(con->port_altmode,
+- desc->svid);
++ ret = ucsi_altmode_next_mode(con->port_altmode, desc->svid);
++ if (ret < 0)
++ return ret;
++
++ desc->mode = ret;
+
+ switch (desc->svid) {
+ case USB_TYPEC_DP_SID:
+@@ -292,8 +299,11 @@ static int ucsi_register_altmode(struct ucsi_connector *con,
+ goto err;
+ }
+
+- desc->mode = ucsi_altmode_next_mode(con->partner_altmode,
+- desc->svid);
++ ret = ucsi_altmode_next_mode(con->partner_altmode, desc->svid);
++ if (ret < 0)
++ return ret;
++
++ desc->mode = ret;
+
+ alt = typec_partner_register_altmode(con->partner, desc);
+ if (IS_ERR(alt)) {
+diff --git a/drivers/usb/typec/ucsi/ucsi_acpi.c b/drivers/usb/typec/ucsi/ucsi_acpi.c
+index c0aca2f0f23f0..fbfe8f5933af8 100644
+--- a/drivers/usb/typec/ucsi/ucsi_acpi.c
++++ b/drivers/usb/typec/ucsi/ucsi_acpi.c
+@@ -78,7 +78,7 @@ static int ucsi_acpi_sync_write(struct ucsi *ucsi, unsigned int offset,
+ if (ret)
+ goto out_clear_bit;
+
+- if (!wait_for_completion_timeout(&ua->complete, msecs_to_jiffies(5000)))
++ if (!wait_for_completion_timeout(&ua->complete, 60 * HZ))
+ ret = -ETIMEDOUT;
+
+ out_clear_bit:
+diff --git a/drivers/video/fbdev/core/fbcon.c b/drivers/video/fbdev/core/fbcon.c
+index b36bfe10c712c..09cb46e94f405 100644
+--- a/drivers/video/fbdev/core/fbcon.c
++++ b/drivers/video/fbdev/core/fbcon.c
+@@ -2018,7 +2018,7 @@ static int fbcon_resize(struct vc_data *vc, unsigned int width,
+ struct fb_var_screeninfo var = info->var;
+ int x_diff, y_diff, virt_w, virt_h, virt_fw, virt_fh;
+
+- if (ops->p && ops->p->userfont && FNTSIZE(vc->vc_font.data)) {
++ if (p->userfont && FNTSIZE(vc->vc_font.data)) {
+ int size;
+ int pitch = PITCH(vc->vc_font.width);
+
+diff --git a/fs/cifs/inode.c b/fs/cifs/inode.c
+index ce95801e9b664..7708175062eba 100644
+--- a/fs/cifs/inode.c
++++ b/fs/cifs/inode.c
+@@ -1017,6 +1017,8 @@ handle_mnt_opt:
+ if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_MODE_FROM_SID) {
+ rc = cifs_acl_to_fattr(cifs_sb, &fattr, *inode, true,
+ full_path, fid);
++ if (rc == -EREMOTE)
++ rc = 0;
+ if (rc) {
+ cifs_dbg(FYI, "%s: Get mode from SID failed. rc=%d\n",
+ __func__, rc);
+@@ -1025,6 +1027,8 @@ handle_mnt_opt:
+ } else if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_CIFS_ACL) {
+ rc = cifs_acl_to_fattr(cifs_sb, &fattr, *inode, false,
+ full_path, fid);
++ if (rc == -EREMOTE)
++ rc = 0;
+ if (rc) {
+ cifs_dbg(FYI, "%s: Getting ACL failed with error: %d\n",
+ __func__, rc);
+diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
+index 6e9017e6a8197..403e8033c974b 100644
+--- a/fs/f2fs/data.c
++++ b/fs/f2fs/data.c
+@@ -3463,6 +3463,9 @@ static int check_direct_IO(struct inode *inode, struct iov_iter *iter,
+ unsigned long align = offset | iov_iter_alignment(iter);
+ struct block_device *bdev = inode->i_sb->s_bdev;
+
++ if (iov_iter_rw(iter) == READ && offset >= i_size_read(inode))
++ return 1;
++
+ if (align & blocksize_mask) {
+ if (bdev)
+ blkbits = blksize_bits(bdev_logical_block_size(bdev));
+diff --git a/fs/f2fs/node.c b/fs/f2fs/node.c
+index 98736d0598b8d..0fde35611df18 100644
+--- a/fs/f2fs/node.c
++++ b/fs/f2fs/node.c
+@@ -2375,6 +2375,9 @@ static int __f2fs_build_free_nids(struct f2fs_sb_info *sbi,
+ if (unlikely(nid >= nm_i->max_nid))
+ nid = 0;
+
++ if (unlikely(nid % NAT_ENTRY_PER_BLOCK))
++ nid = NAT_BLOCK_OFFSET(nid) * NAT_ENTRY_PER_BLOCK;
++
+ /* Enough entries */
+ if (nm_i->nid_cnt[FREE_NID] >= NAT_ENTRY_PER_BLOCK)
+ return 0;
+diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
+index 45e0585e0667c..08b1fb0a9225a 100644
+--- a/fs/nfs/nfs4proc.c
++++ b/fs/nfs/nfs4proc.c
+@@ -3272,8 +3272,10 @@ static int _nfs4_do_setattr(struct inode *inode,
+
+ /* Servers should only apply open mode checks for file size changes */
+ truncate = (arg->iap->ia_valid & ATTR_SIZE) ? true : false;
+- if (!truncate)
++ if (!truncate) {
++ nfs4_inode_make_writeable(inode);
+ goto zero_stateid;
++ }
+
+ if (nfs4_copy_delegation_stateid(inode, FMODE_WRITE, &arg->stateid, &delegation_cred)) {
+ /* Use that stateid */
+@@ -7271,7 +7273,12 @@ int nfs4_lock_delegation_recall(struct file_lock *fl, struct nfs4_state *state,
+ err = nfs4_set_lock_state(state, fl);
+ if (err != 0)
+ return err;
+- err = _nfs4_do_setlk(state, F_SETLK, fl, NFS_LOCK_NEW);
++ do {
++ err = _nfs4_do_setlk(state, F_SETLK, fl, NFS_LOCK_NEW);
++ if (err != -NFS4ERR_DELAY)
++ break;
++ ssleep(1);
++ } while (err == -NFS4ERR_DELAY);
+ return nfs4_handle_delegation_recall_error(server, state, stateid, fl, err);
+ }
+
+diff --git a/include/linux/cpuhotplug.h b/include/linux/cpuhotplug.h
+index 191772d4a4d7d..67ba16c7e118b 100644
+--- a/include/linux/cpuhotplug.h
++++ b/include/linux/cpuhotplug.h
+@@ -141,7 +141,6 @@ enum cpuhp_state {
+ /* Must be the last timer callback */
+ CPUHP_AP_DUMMY_TIMER_STARTING,
+ CPUHP_AP_ARM_XEN_STARTING,
+- CPUHP_AP_ARM_KVMPV_STARTING,
+ CPUHP_AP_ARM_CORESIGHT_STARTING,
+ CPUHP_AP_ARM_CORESIGHT_CTI_STARTING,
+ CPUHP_AP_ARM64_ISNDEP_STARTING,
+diff --git a/include/linux/dax.h b/include/linux/dax.h
+index 6904d4e0b2e0a..43b39ab9de1a9 100644
+--- a/include/linux/dax.h
++++ b/include/linux/dax.h
+@@ -58,6 +58,8 @@ static inline void set_dax_synchronous(struct dax_device *dax_dev)
+ {
+ __set_dax_synchronous(dax_dev);
+ }
++bool dax_supported(struct dax_device *dax_dev, struct block_device *bdev,
++ int blocksize, sector_t start, sector_t len);
+ /*
+ * Check if given mapping is supported by the file / underlying device.
+ */
+@@ -104,6 +106,12 @@ static inline bool dax_synchronous(struct dax_device *dax_dev)
+ static inline void set_dax_synchronous(struct dax_device *dax_dev)
+ {
+ }
++static inline bool dax_supported(struct dax_device *dax_dev,
++ struct block_device *bdev, int blocksize, sector_t start,
++ sector_t len)
++{
++ return false;
++}
+ static inline bool daxdev_mapping_supported(struct vm_area_struct *vma,
+ struct dax_device *dax_dev)
+ {
+@@ -189,14 +197,23 @@ static inline void dax_unlock_page(struct page *page, dax_entry_t cookie)
+ }
+ #endif
+
++#if IS_ENABLED(CONFIG_DAX)
+ int dax_read_lock(void);
+ void dax_read_unlock(int id);
++#else
++static inline int dax_read_lock(void)
++{
++ return 0;
++}
++
++static inline void dax_read_unlock(int id)
++{
++}
++#endif /* CONFIG_DAX */
+ bool dax_alive(struct dax_device *dax_dev);
+ void *dax_get_private(struct dax_device *dax_dev);
+ long dax_direct_access(struct dax_device *dax_dev, pgoff_t pgoff, long nr_pages,
+ void **kaddr, pfn_t *pfn);
+-bool dax_supported(struct dax_device *dax_dev, struct block_device *bdev,
+- int blocksize, sector_t start, sector_t len);
+ size_t dax_copy_from_iter(struct dax_device *dax_dev, pgoff_t pgoff, void *addr,
+ size_t bytes, struct iov_iter *i);
+ size_t dax_copy_to_iter(struct dax_device *dax_dev, pgoff_t pgoff, void *addr,
+diff --git a/include/linux/i2c-algo-pca.h b/include/linux/i2c-algo-pca.h
+index d03071732db4a..7c522fdd9ea73 100644
+--- a/include/linux/i2c-algo-pca.h
++++ b/include/linux/i2c-algo-pca.h
+@@ -53,6 +53,20 @@
+ #define I2C_PCA_CON_SI 0x08 /* Serial Interrupt */
+ #define I2C_PCA_CON_CR 0x07 /* Clock Rate (MASK) */
+
++/**
++ * struct pca_i2c_bus_settings - The configured PCA i2c bus settings
++ * @mode: Configured i2c bus mode
++ * @tlow: Configured SCL LOW period
++ * @thi: Configured SCL HIGH period
++ * @clock_freq: The configured clock frequency
++ */
++struct pca_i2c_bus_settings {
++ int mode;
++ int tlow;
++ int thi;
++ int clock_freq;
++};
++
+ struct i2c_algo_pca_data {
+ void *data; /* private low level data */
+ void (*write_byte) (void *data, int reg, int val);
+@@ -64,6 +78,7 @@ struct i2c_algo_pca_data {
+ * For PCA9665, use the frequency you want here. */
+ unsigned int i2c_clock;
+ unsigned int chip;
++ struct pca_i2c_bus_settings bus_settings;
+ };
+
+ int i2c_pca_add_bus(struct i2c_adapter *);
+diff --git a/include/linux/percpu-rwsem.h b/include/linux/percpu-rwsem.h
+index 5e033fe1ff4e9..5fda40f97fe91 100644
+--- a/include/linux/percpu-rwsem.h
++++ b/include/linux/percpu-rwsem.h
+@@ -60,7 +60,7 @@ static inline void percpu_down_read(struct percpu_rw_semaphore *sem)
+ * anything we did within this RCU-sched read-size critical section.
+ */
+ if (likely(rcu_sync_is_idle(&sem->rss)))
+- __this_cpu_inc(*sem->read_count);
++ this_cpu_inc(*sem->read_count);
+ else
+ __percpu_down_read(sem, false); /* Unconditional memory barrier */
+ /*
+@@ -79,7 +79,7 @@ static inline bool percpu_down_read_trylock(struct percpu_rw_semaphore *sem)
+ * Same as in percpu_down_read().
+ */
+ if (likely(rcu_sync_is_idle(&sem->rss)))
+- __this_cpu_inc(*sem->read_count);
++ this_cpu_inc(*sem->read_count);
+ else
+ ret = __percpu_down_read(sem, true); /* Unconditional memory barrier */
+ preempt_enable();
+@@ -103,7 +103,7 @@ static inline void percpu_up_read(struct percpu_rw_semaphore *sem)
+ * Same as in percpu_down_read().
+ */
+ if (likely(rcu_sync_is_idle(&sem->rss))) {
+- __this_cpu_dec(*sem->read_count);
++ this_cpu_dec(*sem->read_count);
+ } else {
+ /*
+ * slowpath; reader will only ever wake a single blocked
+@@ -115,7 +115,7 @@ static inline void percpu_up_read(struct percpu_rw_semaphore *sem)
+ * aggregate zero, as that is the only time it matters) they
+ * will also see our critical section.
+ */
+- __this_cpu_dec(*sem->read_count);
++ this_cpu_dec(*sem->read_count);
+ rcuwait_wake_up(&sem->writer);
+ }
+ preempt_enable();
+diff --git a/include/linux/serial_core.h b/include/linux/serial_core.h
+index 791f4844efeb9..d3266caefe049 100644
+--- a/include/linux/serial_core.h
++++ b/include/linux/serial_core.h
+@@ -248,6 +248,7 @@ struct uart_port {
+
+ unsigned char hub6; /* this should be in the 8250 driver */
+ unsigned char suspended;
++ unsigned char console_reinit;
+ const char *name; /* port name */
+ struct attribute_group *attr_group; /* port specific attributes */
+ const struct attribute_group **tty_groups; /* all attributes (serial core use only) */
+diff --git a/include/sound/soc.h b/include/sound/soc.h
+index 3ce7f0f5aa929..ca765062787b0 100644
+--- a/include/sound/soc.h
++++ b/include/sound/soc.h
+@@ -1205,6 +1205,8 @@ struct snd_soc_pcm_runtime {
+ ((i) < (rtd)->num_cpus + (rtd)->num_codecs) && \
+ ((dai) = (rtd)->dais[i]); \
+ (i)++)
++#define for_each_rtd_dais_rollback(rtd, i, dai) \
++ for (; (--(i) >= 0) && ((dai) = (rtd)->dais[i]);)
+
+ void snd_soc_close_delayed_work(struct snd_soc_pcm_runtime *rtd);
+
+@@ -1373,6 +1375,8 @@ void snd_soc_unregister_dai(struct snd_soc_dai *dai);
+
+ struct snd_soc_dai *snd_soc_find_dai(
+ const struct snd_soc_dai_link_component *dlc);
++struct snd_soc_dai *snd_soc_find_dai_with_mutex(
++ const struct snd_soc_dai_link_component *dlc);
+
+ #include <sound/soc-dai.h>
+
+diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
+index 4fdf303165827..65fd95f9784ce 100644
+--- a/include/uapi/linux/kvm.h
++++ b/include/uapi/linux/kvm.h
+@@ -789,9 +789,10 @@ struct kvm_ppc_resize_hpt {
+ #define KVM_VM_PPC_HV 1
+ #define KVM_VM_PPC_PR 2
+
+-/* on MIPS, 0 forces trap & emulate, 1 forces VZ ASE */
+-#define KVM_VM_MIPS_TE 0
++/* on MIPS, 0 indicates auto, 1 forces VZ ASE, 2 forces trap & emulate */
++#define KVM_VM_MIPS_AUTO 0
+ #define KVM_VM_MIPS_VZ 1
++#define KVM_VM_MIPS_TE 2
+
+ #define KVM_S390_SIE_PAGE_OFFSET 1
+
+diff --git a/kernel/kprobes.c b/kernel/kprobes.c
+index 72af5d37e9ff1..a264246ff85aa 100644
+--- a/kernel/kprobes.c
++++ b/kernel/kprobes.c
+@@ -2108,6 +2108,9 @@ static void kill_kprobe(struct kprobe *p)
+
+ lockdep_assert_held(&kprobe_mutex);
+
++ if (WARN_ON_ONCE(kprobe_gone(p)))
++ return;
++
+ p->flags |= KPROBE_FLAG_GONE;
+ if (kprobe_aggrprobe(p)) {
+ /*
+@@ -2365,7 +2368,10 @@ static int kprobes_module_callback(struct notifier_block *nb,
+ mutex_lock(&kprobe_mutex);
+ for (i = 0; i < KPROBE_TABLE_SIZE; i++) {
+ head = &kprobe_table[i];
+- hlist_for_each_entry(p, head, hlist)
++ hlist_for_each_entry(p, head, hlist) {
++ if (kprobe_gone(p))
++ continue;
++
+ if (within_module_init((unsigned long)p->addr, mod) ||
+ (checkcore &&
+ within_module_core((unsigned long)p->addr, mod))) {
+@@ -2382,6 +2388,7 @@ static int kprobes_module_callback(struct notifier_block *nb,
+ */
+ kill_kprobe(p);
+ }
++ }
+ }
+ if (val == MODULE_STATE_GOING)
+ remove_module_kprobe_blacklist(mod);
+diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
+index 29a8de4c50b90..a611dedac7d60 100644
+--- a/kernel/locking/lockdep.c
++++ b/kernel/locking/lockdep.c
+@@ -3923,13 +3923,18 @@ static int separate_irq_context(struct task_struct *curr,
+ static int mark_lock(struct task_struct *curr, struct held_lock *this,
+ enum lock_usage_bit new_bit)
+ {
+- unsigned int new_mask = 1 << new_bit, ret = 1;
++ unsigned int old_mask, new_mask, ret = 1;
+
+ if (new_bit >= LOCK_USAGE_STATES) {
+ DEBUG_LOCKS_WARN_ON(1);
+ return 0;
+ }
+
++ if (new_bit == LOCK_USED && this->read)
++ new_bit = LOCK_USED_READ;
++
++ new_mask = 1 << new_bit;
++
+ /*
+ * If already set then do not dirty the cacheline,
+ * nor do any checks:
+@@ -3942,13 +3947,22 @@ static int mark_lock(struct task_struct *curr, struct held_lock *this,
+ /*
+ * Make sure we didn't race:
+ */
+- if (unlikely(hlock_class(this)->usage_mask & new_mask)) {
+- graph_unlock();
+- return 1;
+- }
++ if (unlikely(hlock_class(this)->usage_mask & new_mask))
++ goto unlock;
+
++ old_mask = hlock_class(this)->usage_mask;
+ hlock_class(this)->usage_mask |= new_mask;
+
++ /*
++ * Save one usage_traces[] entry and map both LOCK_USED and
++ * LOCK_USED_READ onto the same entry.
++ */
++ if (new_bit == LOCK_USED || new_bit == LOCK_USED_READ) {
++ if (old_mask & (LOCKF_USED | LOCKF_USED_READ))
++ goto unlock;
++ new_bit = LOCK_USED;
++ }
++
+ if (!(hlock_class(this)->usage_traces[new_bit] = save_trace()))
+ return 0;
+
+@@ -3962,6 +3976,7 @@ static int mark_lock(struct task_struct *curr, struct held_lock *this,
+ return 0;
+ }
+
++unlock:
+ graph_unlock();
+
+ /*
+@@ -4896,12 +4911,20 @@ static void verify_lock_unused(struct lockdep_map *lock, struct held_lock *hlock
+ {
+ #ifdef CONFIG_PROVE_LOCKING
+ struct lock_class *class = look_up_lock_class(lock, subclass);
++ unsigned long mask = LOCKF_USED;
+
+ /* if it doesn't have a class (yet), it certainly hasn't been used yet */
+ if (!class)
+ return;
+
+- if (!(class->usage_mask & LOCK_USED))
++ /*
++ * READ locks only conflict with USED, such that if we only ever use
++ * READ locks, there is no deadlock possible -- RCU.
++ */
++ if (!hlock->read)
++ mask |= LOCKF_USED_READ;
++
++ if (!(class->usage_mask & mask))
+ return;
+
+ hlock->class_idx = class - lock_classes;
+diff --git a/kernel/locking/lockdep_internals.h b/kernel/locking/lockdep_internals.h
+index baca699b94e91..b0be1560ed17a 100644
+--- a/kernel/locking/lockdep_internals.h
++++ b/kernel/locking/lockdep_internals.h
+@@ -19,6 +19,7 @@ enum lock_usage_bit {
+ #include "lockdep_states.h"
+ #undef LOCKDEP_STATE
+ LOCK_USED,
++ LOCK_USED_READ,
+ LOCK_USAGE_STATES
+ };
+
+@@ -40,6 +41,7 @@ enum {
+ #include "lockdep_states.h"
+ #undef LOCKDEP_STATE
+ __LOCKF(USED)
++ __LOCKF(USED_READ)
+ };
+
+ #define LOCKDEP_STATE(__STATE) LOCKF_ENABLED_##__STATE |
+diff --git a/kernel/locking/percpu-rwsem.c b/kernel/locking/percpu-rwsem.c
+index 8bbafe3e5203d..70a32a576f3f2 100644
+--- a/kernel/locking/percpu-rwsem.c
++++ b/kernel/locking/percpu-rwsem.c
+@@ -45,7 +45,7 @@ EXPORT_SYMBOL_GPL(percpu_free_rwsem);
+
+ static bool __percpu_down_read_trylock(struct percpu_rw_semaphore *sem)
+ {
+- __this_cpu_inc(*sem->read_count);
++ this_cpu_inc(*sem->read_count);
+
+ /*
+ * Due to having preemption disabled the decrement happens on
+@@ -71,7 +71,7 @@ static bool __percpu_down_read_trylock(struct percpu_rw_semaphore *sem)
+ if (likely(!atomic_read_acquire(&sem->block)))
+ return true;
+
+- __this_cpu_dec(*sem->read_count);
++ this_cpu_dec(*sem->read_count);
+
+ /* Prod writer to re-evaluate readers_active_check() */
+ rcuwait_wake_up(&sem->writer);
+diff --git a/mm/huge_memory.c b/mm/huge_memory.c
+index 78c84bee7e294..74300e337c3c7 100644
+--- a/mm/huge_memory.c
++++ b/mm/huge_memory.c
+@@ -2048,7 +2048,7 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
+ put_page(page);
+ add_mm_counter(mm, mm_counter_file(page), -HPAGE_PMD_NR);
+ return;
+- } else if (is_huge_zero_pmd(*pmd)) {
++ } else if (pmd_trans_huge(*pmd) && is_huge_zero_pmd(*pmd)) {
+ /*
+ * FIXME: Do we want to invalidate secondary mmu by calling
+ * mmu_notifier_invalidate_range() see comments below inside
+@@ -2142,30 +2142,34 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
+ pte = pte_offset_map(&_pmd, addr);
+ BUG_ON(!pte_none(*pte));
+ set_pte_at(mm, addr, pte, entry);
+- atomic_inc(&page[i]._mapcount);
+- pte_unmap(pte);
+- }
+-
+- /*
+- * Set PG_double_map before dropping compound_mapcount to avoid
+- * false-negative page_mapped().
+- */
+- if (compound_mapcount(page) > 1 && !TestSetPageDoubleMap(page)) {
+- for (i = 0; i < HPAGE_PMD_NR; i++)
++ if (!pmd_migration)
+ atomic_inc(&page[i]._mapcount);
++ pte_unmap(pte);
+ }
+
+- lock_page_memcg(page);
+- if (atomic_add_negative(-1, compound_mapcount_ptr(page))) {
+- /* Last compound_mapcount is gone. */
+- __dec_lruvec_page_state(page, NR_ANON_THPS);
+- if (TestClearPageDoubleMap(page)) {
+- /* No need in mapcount reference anymore */
++ if (!pmd_migration) {
++ /*
++ * Set PG_double_map before dropping compound_mapcount to avoid
++ * false-negative page_mapped().
++ */
++ if (compound_mapcount(page) > 1 &&
++ !TestSetPageDoubleMap(page)) {
+ for (i = 0; i < HPAGE_PMD_NR; i++)
+- atomic_dec(&page[i]._mapcount);
++ atomic_inc(&page[i]._mapcount);
++ }
++
++ lock_page_memcg(page);
++ if (atomic_add_negative(-1, compound_mapcount_ptr(page))) {
++ /* Last compound_mapcount is gone. */
++ __dec_lruvec_page_state(page, NR_ANON_THPS);
++ if (TestClearPageDoubleMap(page)) {
++ /* No need in mapcount reference anymore */
++ for (i = 0; i < HPAGE_PMD_NR; i++)
++ atomic_dec(&page[i]._mapcount);
++ }
+ }
++ unlock_page_memcg(page);
+ }
+- unlock_page_memcg(page);
+
+ smp_wmb(); /* make pte visible before pmd */
+ pmd_populate(mm, pmd, pgtable);
+diff --git a/mm/ksm.c b/mm/ksm.c
+index 4102034cd55a1..f16983394b228 100644
+--- a/mm/ksm.c
++++ b/mm/ksm.c
+@@ -2585,6 +2585,10 @@ struct page *ksm_might_need_to_copy(struct page *page,
+ return page; /* let do_swap_page report the error */
+
+ new_page = alloc_page_vma(GFP_HIGHUSER_MOVABLE, vma, address);
++ if (new_page && mem_cgroup_charge(new_page, vma->vm_mm, GFP_KERNEL)) {
++ put_page(new_page);
++ new_page = NULL;
++ }
+ if (new_page) {
+ copy_user_highpage(new_page, page, address, vma);
+
+diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
+index 76c75a599da3f..e76de2067bfd1 100644
+--- a/mm/memory_hotplug.c
++++ b/mm/memory_hotplug.c
+@@ -1557,6 +1557,20 @@ static int __ref __offline_pages(unsigned long start_pfn,
+ /* check again */
+ ret = walk_system_ram_range(start_pfn, end_pfn - start_pfn,
+ NULL, check_pages_isolated_cb);
++ /*
++ * per-cpu pages are drained in start_isolate_page_range, but if
++ * there are still pages that are not free, make sure that we
++ * drain again, because when we isolated range we might
++ * have raced with another thread that was adding pages to pcp
++ * list.
++ *
++ * Forward progress should be still guaranteed because
++ * pages on the pcp list can only belong to MOVABLE_ZONE
++ * because has_unmovable_pages explicitly checks for
++ * PageBuddy on freed pages on other zones.
++ */
++ if (ret)
++ drain_all_pages(zone);
+ } while (ret);
+
+ /* Ok, all of our target is isolated.
+diff --git a/mm/page_isolation.c b/mm/page_isolation.c
+index f6d07c5f0d34d..5b4a28b2dbf56 100644
+--- a/mm/page_isolation.c
++++ b/mm/page_isolation.c
+@@ -170,6 +170,14 @@ __first_valid_page(unsigned long pfn, unsigned long nr_pages)
+ * pageblocks we may have modified and return -EBUSY to caller. This
+ * prevents two threads from simultaneously working on overlapping ranges.
+ *
++ * Please note that there is no strong synchronization with the page allocator
++ * either. Pages might be freed while their page blocks are marked ISOLATED.
++ * In some cases pages might still end up on pcp lists and that would allow
++ * for their allocation even when they are in fact isolated already. Depending
++ * on how strong of a guarantee the caller needs drain_all_pages might be needed
++ * (e.g. __offline_pages will need to call it after check for isolated range for
++ * a next retry).
++ *
+ * Return: the number of isolated pageblocks on success and -EBUSY if any part
+ * of range cannot be isolated.
+ */
+diff --git a/mm/percpu.c b/mm/percpu.c
+index 696367b182221..d83e0032cb209 100644
+--- a/mm/percpu.c
++++ b/mm/percpu.c
+@@ -1300,7 +1300,7 @@ static struct pcpu_chunk * __init pcpu_alloc_first_chunk(unsigned long tmp_addr,
+
+ /* allocate chunk */
+ alloc_size = sizeof(struct pcpu_chunk) +
+- BITS_TO_LONGS(region_size >> PAGE_SHIFT);
++ BITS_TO_LONGS(region_size >> PAGE_SHIFT) * sizeof(unsigned long);
+ chunk = memblock_alloc(alloc_size, SMP_CACHE_BYTES);
+ if (!chunk)
+ panic("%s: Failed to allocate %zu bytes\n", __func__,
+diff --git a/mm/vmscan.c b/mm/vmscan.c
+index 749d239c62b2b..8b97bc615d8c0 100644
+--- a/mm/vmscan.c
++++ b/mm/vmscan.c
+@@ -2619,6 +2619,14 @@ static void shrink_node_memcgs(pg_data_t *pgdat, struct scan_control *sc)
+ unsigned long reclaimed;
+ unsigned long scanned;
+
++ /*
++ * This loop can become CPU-bound when target memcgs
++ * aren't eligible for reclaim - either because they
++ * don't have any reclaimable pages, or because their
++ * memory is explicitly protected. Avoid soft lockups.
++ */
++ cond_resched();
++
+ switch (mem_cgroup_protected(target_memcg, memcg)) {
+ case MEMCG_PROT_MIN:
+ /*
+diff --git a/net/core/skbuff.c b/net/core/skbuff.c
+index 7afe52bd038ba..c50bd7a7943ab 100644
+--- a/net/core/skbuff.c
++++ b/net/core/skbuff.c
+@@ -5988,9 +5988,13 @@ static int pskb_carve_inside_nonlinear(struct sk_buff *skb, const u32 off,
+ if (skb_has_frag_list(skb))
+ skb_clone_fraglist(skb);
+
+- if (k == 0) {
+- /* split line is in frag list */
+- pskb_carve_frag_list(skb, shinfo, off - pos, gfp_mask);
++ /* split line is in frag list */
++ if (k == 0 && pskb_carve_frag_list(skb, shinfo, off - pos, gfp_mask)) {
++ /* skb_frag_unref() is not needed here as shinfo->nr_frags = 0. */
++ if (skb_has_frag_list(skb))
++ kfree_skb_list(skb_shinfo(skb)->frag_list);
++ kfree(data);
++ return -ENOMEM;
+ }
+ skb_release_data(skb);
+
+diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
+index 939e445d5188c..5542e8061955f 100644
+--- a/net/mptcp/protocol.c
++++ b/net/mptcp/protocol.c
+@@ -605,8 +605,10 @@ static int mptcp_sendmsg_frag(struct sock *sk, struct sock *ssk,
+ if (!psize)
+ return -EINVAL;
+
+- if (!sk_wmem_schedule(sk, psize + dfrag->overhead))
++ if (!sk_wmem_schedule(sk, psize + dfrag->overhead)) {
++ iov_iter_revert(&msg->msg_iter, psize);
+ return -ENOMEM;
++ }
+ } else {
+ offset = dfrag->offset;
+ psize = min_t(size_t, dfrag->data_len, avail_size);
+@@ -617,8 +619,10 @@ static int mptcp_sendmsg_frag(struct sock *sk, struct sock *ssk,
+ */
+ ret = do_tcp_sendpages(ssk, page, offset, psize,
+ msg->msg_flags | MSG_SENDPAGE_NOTLAST | MSG_DONTWAIT);
+- if (ret <= 0)
++ if (ret <= 0) {
++ iov_iter_revert(&msg->msg_iter, psize);
+ return ret;
++ }
+
+ frag_truesize += ret;
+ if (!retransmission) {
+diff --git a/net/sunrpc/rpcb_clnt.c b/net/sunrpc/rpcb_clnt.c
+index c27123e6ba80c..4a67685c83eb4 100644
+--- a/net/sunrpc/rpcb_clnt.c
++++ b/net/sunrpc/rpcb_clnt.c
+@@ -982,8 +982,8 @@ static int rpcb_dec_getaddr(struct rpc_rqst *req, struct xdr_stream *xdr,
+ p = xdr_inline_decode(xdr, len);
+ if (unlikely(p == NULL))
+ goto out_fail;
+- dprintk("RPC: %5u RPCB_%s reply: %s\n", req->rq_task->tk_pid,
+- req->rq_task->tk_msg.rpc_proc->p_name, (char *)p);
++ dprintk("RPC: %5u RPCB_%s reply: %*pE\n", req->rq_task->tk_pid,
++ req->rq_task->tk_msg.rpc_proc->p_name, len, (char *)p);
+
+ if (rpc_uaddr2sockaddr(req->rq_xprt->xprt_net, (char *)p, len,
+ sap, sizeof(address)) == 0)
+diff --git a/net/sunrpc/xprtrdma/verbs.c b/net/sunrpc/xprtrdma/verbs.c
+index 75c646743df3e..ca89f24a1590b 100644
+--- a/net/sunrpc/xprtrdma/verbs.c
++++ b/net/sunrpc/xprtrdma/verbs.c
+@@ -933,6 +933,8 @@ static void rpcrdma_req_reset(struct rpcrdma_req *req)
+
+ rpcrdma_regbuf_dma_unmap(req->rl_sendbuf);
+ rpcrdma_regbuf_dma_unmap(req->rl_recvbuf);
++
++ frwr_reset(req);
+ }
+
+ /* ASSUMPTION: the rb_allreqs list is stable for the duration,
+diff --git a/scripts/kconfig/qconf.cc b/scripts/kconfig/qconf.cc
+index 5ceb93010a973..aedcc3343719e 100644
+--- a/scripts/kconfig/qconf.cc
++++ b/scripts/kconfig/qconf.cc
+@@ -1263,7 +1263,7 @@ void ConfigInfoView::clicked(const QUrl &url)
+ }
+
+ free(result);
+- delete data;
++ delete[] data;
+ }
+
+ QMenu* ConfigInfoView::createStandardContextMenu(const QPoint & pos)
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 0b9907c9cd84f..77e2e6ede31dc 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -2467,7 +2467,6 @@ static const struct snd_pci_quirk alc882_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x1462, 0x1276, "MSI-GL73", ALC1220_FIXUP_CLEVO_P950),
+ SND_PCI_QUIRK(0x1462, 0x1293, "MSI-GP65", ALC1220_FIXUP_CLEVO_P950),
+ SND_PCI_QUIRK(0x1462, 0x7350, "MSI-7350", ALC889_FIXUP_CD),
+- SND_PCI_QUIRK(0x1462, 0x9c37, "MSI X570-A PRO", ALC1220_FIXUP_CLEVO_P950),
+ SND_PCI_QUIRK(0x1462, 0xda57, "MSI Z270-Gaming", ALC1220_FIXUP_GB_DUAL_CODECS),
+ SND_PCI_QUIRK_VENDOR(0x1462, "MSI", ALC882_FIXUP_GPIO3),
+ SND_PCI_QUIRK(0x147b, 0x107a, "Abit AW9D-MAX", ALC882_FIXUP_ABIT_AW9D_MAX),
+@@ -6005,6 +6004,40 @@ static void alc_fixup_disable_mic_vref(struct hda_codec *codec,
+ snd_hda_codec_set_pin_target(codec, 0x19, PIN_VREFHIZ);
+ }
+
++
++static void alc294_gx502_toggle_output(struct hda_codec *codec,
++ struct hda_jack_callback *cb)
++{
++ /* The Windows driver sets the codec up in a very different way where
++ * it appears to leave 0x10 = 0x8a20 set. For Linux we need to toggle it
++ */
++ if (snd_hda_jack_detect_state(codec, 0x21) == HDA_JACK_PRESENT)
++ alc_write_coef_idx(codec, 0x10, 0x8a20);
++ else
++ alc_write_coef_idx(codec, 0x10, 0x0a20);
++}
++
++static void alc294_fixup_gx502_hp(struct hda_codec *codec,
++ const struct hda_fixup *fix, int action)
++{
++ /* Pin 0x21: headphones/headset mic */
++ if (!is_jack_detectable(codec, 0x21))
++ return;
++
++ switch (action) {
++ case HDA_FIXUP_ACT_PRE_PROBE:
++ snd_hda_jack_detect_enable_callback(codec, 0x21,
++ alc294_gx502_toggle_output);
++ break;
++ case HDA_FIXUP_ACT_INIT:
++ /* Make sure to start in a correct state, i.e. if
++ * headphones have been plugged in before powering up the system
++ */
++ alc294_gx502_toggle_output(codec, NULL);
++ break;
++ }
++}
++
+ static void alc285_fixup_hp_gpio_amp_init(struct hda_codec *codec,
+ const struct hda_fixup *fix, int action)
+ {
+@@ -6185,6 +6218,9 @@ enum {
+ ALC285_FIXUP_THINKPAD_HEADSET_JACK,
+ ALC294_FIXUP_ASUS_HPE,
+ ALC294_FIXUP_ASUS_COEF_1B,
++ ALC294_FIXUP_ASUS_GX502_HP,
++ ALC294_FIXUP_ASUS_GX502_PINS,
++ ALC294_FIXUP_ASUS_GX502_VERBS,
+ ALC285_FIXUP_HP_GPIO_LED,
+ ALC285_FIXUP_HP_MUTE_LED,
+ ALC236_FIXUP_HP_MUTE_LED,
+@@ -6203,6 +6239,7 @@ enum {
+ ALC269_FIXUP_LEMOTE_A1802,
+ ALC269_FIXUP_LEMOTE_A190X,
+ ALC256_FIXUP_INTEL_NUC8_RUGGED,
++ ALC255_FIXUP_XIAOMI_HEADSET_MIC,
+ };
+
+ static const struct hda_fixup alc269_fixups[] = {
+@@ -7350,6 +7387,33 @@ static const struct hda_fixup alc269_fixups[] = {
+ .chained = true,
+ .chain_id = ALC294_FIXUP_ASUS_HEADSET_MIC
+ },
++ [ALC294_FIXUP_ASUS_GX502_PINS] = {
++ .type = HDA_FIXUP_PINS,
++ .v.pins = (const struct hda_pintbl[]) {
++ { 0x19, 0x03a11050 }, /* front HP mic */
++ { 0x1a, 0x01a11830 }, /* rear external mic */
++ { 0x21, 0x03211020 }, /* front HP out */
++ { }
++ },
++ .chained = true,
++ .chain_id = ALC294_FIXUP_ASUS_GX502_VERBS
++ },
++ [ALC294_FIXUP_ASUS_GX502_VERBS] = {
++ .type = HDA_FIXUP_VERBS,
++ .v.verbs = (const struct hda_verb[]) {
++ /* set 0x15 to HP-OUT ctrl */
++ { 0x15, AC_VERB_SET_PIN_WIDGET_CONTROL, 0xc0 },
++ /* unmute the 0x15 amp */
++ { 0x15, AC_VERB_SET_AMP_GAIN_MUTE, 0xb000 },
++ { }
++ },
++ .chained = true,
++ .chain_id = ALC294_FIXUP_ASUS_GX502_HP
++ },
++ [ALC294_FIXUP_ASUS_GX502_HP] = {
++ .type = HDA_FIXUP_FUNC,
++ .v.func = alc294_fixup_gx502_hp,
++ },
+ [ALC294_FIXUP_ASUS_COEF_1B] = {
+ .type = HDA_FIXUP_VERBS,
+ .v.verbs = (const struct hda_verb[]) {
+@@ -7539,6 +7603,16 @@ static const struct hda_fixup alc269_fixups[] = {
+ .chained = true,
+ .chain_id = ALC269_FIXUP_HEADSET_MODE
+ },
++ [ALC255_FIXUP_XIAOMI_HEADSET_MIC] = {
++ .type = HDA_FIXUP_VERBS,
++ .v.verbs = (const struct hda_verb[]) {
++ { 0x20, AC_VERB_SET_COEF_INDEX, 0x45 },
++ { 0x20, AC_VERB_SET_PROC_COEF, 0x5089 },
++ { }
++ },
++ .chained = true,
++ .chain_id = ALC289_FIXUP_ASUS_GA401
++ },
+ };
+
+ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+@@ -7723,6 +7797,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x1043, 0x1ccd, "ASUS X555UB", ALC256_FIXUP_ASUS_MIC),
+ SND_PCI_QUIRK(0x1043, 0x1e11, "ASUS Zephyrus G15", ALC289_FIXUP_ASUS_GA502),
+ SND_PCI_QUIRK(0x1043, 0x1f11, "ASUS Zephyrus G14", ALC289_FIXUP_ASUS_GA401),
++ SND_PCI_QUIRK(0x1043, 0x1881, "ASUS Zephyrus S/M", ALC294_FIXUP_ASUS_GX502_PINS),
+ SND_PCI_QUIRK(0x1043, 0x3030, "ASUS ZN270IE", ALC256_FIXUP_ASUS_AIO_GPIO2),
+ SND_PCI_QUIRK(0x1043, 0x831a, "ASUS P901", ALC269_FIXUP_STEREO_DMIC),
+ SND_PCI_QUIRK(0x1043, 0x834a, "ASUS S101", ALC269_FIXUP_STEREO_DMIC),
+@@ -7835,6 +7910,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x1b35, 0x1236, "CZC TMI", ALC269_FIXUP_CZC_TMI),
+ SND_PCI_QUIRK(0x1b35, 0x1237, "CZC L101", ALC269_FIXUP_CZC_L101),
+ SND_PCI_QUIRK(0x1b7d, 0xa831, "Ordissimo EVE2 ", ALC269VB_FIXUP_ORDISSIMO_EVE2), /* Also known as Malata PC-B1303 */
++ SND_PCI_QUIRK(0x1d72, 0x1602, "RedmiBook", ALC255_FIXUP_XIAOMI_HEADSET_MIC),
+ SND_PCI_QUIRK(0x1d72, 0x1901, "RedmiBook 14", ALC256_FIXUP_ASUS_HEADSET_MIC),
+ SND_PCI_QUIRK(0x10ec, 0x118c, "Medion EE4254 MD62100", ALC256_FIXUP_MEDION_HEADSET_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1c06, 0x2013, "Lemote A1802", ALC269_FIXUP_LEMOTE_A1802),
+@@ -8012,6 +8088,7 @@ static const struct hda_model_fixup alc269_fixup_models[] = {
+ {.id = ALC298_FIXUP_HUAWEI_MBX_STEREO, .name = "huawei-mbx-stereo"},
+ {.id = ALC256_FIXUP_MEDION_HEADSET_NO_PRESENCE, .name = "alc256-medion-headset"},
+ {.id = ALC298_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET, .name = "alc298-samsung-headphone"},
++ {.id = ALC255_FIXUP_XIAOMI_HEADSET_MIC, .name = "alc255-xiaomi-headset"},
+ {}
+ };
+ #define ALC225_STANDARD_PINS \
+diff --git a/sound/soc/codecs/rt1308-sdw.c b/sound/soc/codecs/rt1308-sdw.c
+index b0ba0d2acbdd6..56e952a904a39 100644
+--- a/sound/soc/codecs/rt1308-sdw.c
++++ b/sound/soc/codecs/rt1308-sdw.c
+@@ -684,8 +684,8 @@ static int rt1308_sdw_probe(struct sdw_slave *slave,
+
+ /* Regmap Initialization */
+ regmap = devm_regmap_init_sdw(slave, &rt1308_sdw_regmap);
+- if (!regmap)
+- return -EINVAL;
++ if (IS_ERR(regmap))
++ return PTR_ERR(regmap);
+
+ rt1308_sdw_init(&slave->dev, regmap, slave);
+
+diff --git a/sound/soc/codecs/rt700-sdw.c b/sound/soc/codecs/rt700-sdw.c
+index 4d14048d11976..1d24bf0407182 100644
+--- a/sound/soc/codecs/rt700-sdw.c
++++ b/sound/soc/codecs/rt700-sdw.c
+@@ -452,8 +452,8 @@ static int rt700_sdw_probe(struct sdw_slave *slave,
+
+ /* Regmap Initialization */
+ sdw_regmap = devm_regmap_init_sdw(slave, &rt700_sdw_regmap);
+- if (!sdw_regmap)
+- return -EINVAL;
++ if (IS_ERR(sdw_regmap))
++ return PTR_ERR(sdw_regmap);
+
+ regmap = devm_regmap_init(&slave->dev, NULL,
+ &slave->dev, &rt700_regmap);
+diff --git a/sound/soc/codecs/rt711-sdw.c b/sound/soc/codecs/rt711-sdw.c
+index 45b928954b580..7efff130a638c 100644
+--- a/sound/soc/codecs/rt711-sdw.c
++++ b/sound/soc/codecs/rt711-sdw.c
+@@ -452,8 +452,8 @@ static int rt711_sdw_probe(struct sdw_slave *slave,
+
+ /* Regmap Initialization */
+ sdw_regmap = devm_regmap_init_sdw(slave, &rt711_sdw_regmap);
+- if (!sdw_regmap)
+- return -EINVAL;
++ if (IS_ERR(sdw_regmap))
++ return PTR_ERR(sdw_regmap);
+
+ regmap = devm_regmap_init(&slave->dev, NULL,
+ &slave->dev, &rt711_regmap);
+diff --git a/sound/soc/codecs/rt715-sdw.c b/sound/soc/codecs/rt715-sdw.c
+index d11b23d6b240a..68a36739f1b0d 100644
+--- a/sound/soc/codecs/rt715-sdw.c
++++ b/sound/soc/codecs/rt715-sdw.c
+@@ -527,8 +527,8 @@ static int rt715_sdw_probe(struct sdw_slave *slave,
+
+ /* Regmap Initialization */
+ sdw_regmap = devm_regmap_init_sdw(slave, &rt715_sdw_regmap);
+- if (!sdw_regmap)
+- return -EINVAL;
++ if (IS_ERR(sdw_regmap))
++ return PTR_ERR(sdw_regmap);
+
+ regmap = devm_regmap_init(&slave->dev, NULL, &slave->dev,
+ &rt715_regmap);
+diff --git a/sound/soc/codecs/tlv320adcx140.c b/sound/soc/codecs/tlv320adcx140.c
+index 35fe8ee5bce9f..03fb50175d876 100644
+--- a/sound/soc/codecs/tlv320adcx140.c
++++ b/sound/soc/codecs/tlv320adcx140.c
+@@ -930,6 +930,8 @@ static int adcx140_i2c_probe(struct i2c_client *i2c,
+ if (!adcx140)
+ return -ENOMEM;
+
++ adcx140->dev = &i2c->dev;
++
+ adcx140->gpio_reset = devm_gpiod_get_optional(adcx140->dev,
+ "reset", GPIOD_OUT_LOW);
+ if (IS_ERR(adcx140->gpio_reset))
+@@ -957,7 +959,7 @@ static int adcx140_i2c_probe(struct i2c_client *i2c,
+ ret);
+ return ret;
+ }
+- adcx140->dev = &i2c->dev;
++
+ i2c_set_clientdata(i2c, adcx140);
+
+ return devm_snd_soc_register_component(&i2c->dev,
+diff --git a/sound/soc/intel/boards/skl_hda_dsp_generic.c b/sound/soc/intel/boards/skl_hda_dsp_generic.c
+index ca4900036ead9..bc50eda297ab7 100644
+--- a/sound/soc/intel/boards/skl_hda_dsp_generic.c
++++ b/sound/soc/intel/boards/skl_hda_dsp_generic.c
+@@ -181,7 +181,7 @@ static void skl_set_hda_codec_autosuspend_delay(struct snd_soc_card *card)
+ struct snd_soc_dai *dai;
+
+ for_each_card_rtds(card, rtd) {
+- if (!strstr(rtd->dai_link->codecs->name, "ehdaudio"))
++ if (!strstr(rtd->dai_link->codecs->name, "ehdaudio0D0"))
+ continue;
+ dai = asoc_rtd_to_codec(rtd, 0);
+ hda_pvt = snd_soc_component_get_drvdata(dai->component);
+diff --git a/sound/soc/intel/haswell/sst-haswell-dsp.c b/sound/soc/intel/haswell/sst-haswell-dsp.c
+index de80e19454c13..88c3f63bded90 100644
+--- a/sound/soc/intel/haswell/sst-haswell-dsp.c
++++ b/sound/soc/intel/haswell/sst-haswell-dsp.c
+@@ -243,92 +243,45 @@ static irqreturn_t hsw_irq(int irq, void *context)
+ return ret;
+ }
+
+-#define CSR_DEFAULT_VALUE 0x8480040E
+-#define ISC_DEFAULT_VALUE 0x0
+-#define ISD_DEFAULT_VALUE 0x0
+-#define IMC_DEFAULT_VALUE 0x7FFF0003
+-#define IMD_DEFAULT_VALUE 0x7FFF0003
+-#define IPCC_DEFAULT_VALUE 0x0
+-#define IPCD_DEFAULT_VALUE 0x0
+-#define CLKCTL_DEFAULT_VALUE 0x7FF
+-#define CSR2_DEFAULT_VALUE 0x0
+-#define LTR_CTRL_DEFAULT_VALUE 0x0
+-#define HMD_CTRL_DEFAULT_VALUE 0x0
+-
+-static void hsw_set_shim_defaults(struct sst_dsp *sst)
+-{
+- sst_dsp_shim_write_unlocked(sst, SST_CSR, CSR_DEFAULT_VALUE);
+- sst_dsp_shim_write_unlocked(sst, SST_ISRX, ISC_DEFAULT_VALUE);
+- sst_dsp_shim_write_unlocked(sst, SST_ISRD, ISD_DEFAULT_VALUE);
+- sst_dsp_shim_write_unlocked(sst, SST_IMRX, IMC_DEFAULT_VALUE);
+- sst_dsp_shim_write_unlocked(sst, SST_IMRD, IMD_DEFAULT_VALUE);
+- sst_dsp_shim_write_unlocked(sst, SST_IPCX, IPCC_DEFAULT_VALUE);
+- sst_dsp_shim_write_unlocked(sst, SST_IPCD, IPCD_DEFAULT_VALUE);
+- sst_dsp_shim_write_unlocked(sst, SST_CLKCTL, CLKCTL_DEFAULT_VALUE);
+- sst_dsp_shim_write_unlocked(sst, SST_CSR2, CSR2_DEFAULT_VALUE);
+- sst_dsp_shim_write_unlocked(sst, SST_LTRC, LTR_CTRL_DEFAULT_VALUE);
+- sst_dsp_shim_write_unlocked(sst, SST_HMDC, HMD_CTRL_DEFAULT_VALUE);
+-}
+-
+-/* all clock-gating minus DCLCGE and DTCGE */
+-#define SST_VDRTCL2_CG_OTHER 0xB7D
+-
+ static void hsw_set_dsp_D3(struct sst_dsp *sst)
+ {
++ u32 val;
+ u32 reg;
+
+- /* disable clock core gating */
++ /* Disable core clock gating (VDRTCTL2.DCLCGE = 0) */
+ reg = readl(sst->addr.pci_cfg + SST_VDRTCTL2);
+- reg &= ~(SST_VDRTCL2_DCLCGE);
++ reg &= ~(SST_VDRTCL2_DCLCGE | SST_VDRTCL2_DTCGE);
+ writel(reg, sst->addr.pci_cfg + SST_VDRTCTL2);
+
+- /* stall, reset and set 24MHz XOSC */
+- sst_dsp_shim_update_bits_unlocked(sst, SST_CSR,
+- SST_CSR_24MHZ_LPCS | SST_CSR_STALL | SST_CSR_RST,
+- SST_CSR_24MHZ_LPCS | SST_CSR_STALL | SST_CSR_RST);
+-
+- /* DRAM power gating all */
+- reg = readl(sst->addr.pci_cfg + SST_VDRTCTL0);
+- reg |= SST_VDRTCL0_ISRAMPGE_MASK |
+- SST_VDRTCL0_DSRAMPGE_MASK;
+- reg &= ~(SST_VDRTCL0_D3SRAMPGD);
+- reg |= SST_VDRTCL0_D3PGD;
+- writel(reg, sst->addr.pci_cfg + SST_VDRTCTL0);
+- udelay(50);
++ /* enable power gating and switch off DRAM & IRAM blocks */
++ val = readl(sst->addr.pci_cfg + SST_VDRTCTL0);
++ val |= SST_VDRTCL0_DSRAMPGE_MASK |
++ SST_VDRTCL0_ISRAMPGE_MASK;
++ val &= ~(SST_VDRTCL0_D3PGD | SST_VDRTCL0_D3SRAMPGD);
++ writel(val, sst->addr.pci_cfg + SST_VDRTCTL0);
+
+- /* PLL shutdown enable */
+- reg = readl(sst->addr.pci_cfg + SST_VDRTCTL2);
+- reg |= SST_VDRTCL2_APLLSE_MASK;
+- writel(reg, sst->addr.pci_cfg + SST_VDRTCTL2);
++ /* switch off audio PLL */
++ val = readl(sst->addr.pci_cfg + SST_VDRTCTL2);
++ val |= SST_VDRTCL2_APLLSE_MASK;
++ writel(val, sst->addr.pci_cfg + SST_VDRTCTL2);
+
+- /* disable MCLK */
++ /* disable MCLK(clkctl.smos = 0) */
+ sst_dsp_shim_update_bits_unlocked(sst, SST_CLKCTL,
+- SST_CLKCTL_MASK, 0);
+-
+- /* switch clock gating */
+- reg = readl(sst->addr.pci_cfg + SST_VDRTCTL2);
+- reg |= SST_VDRTCL2_CG_OTHER;
+- reg &= ~(SST_VDRTCL2_DTCGE);
+- writel(reg, sst->addr.pci_cfg + SST_VDRTCTL2);
+- /* enable DTCGE separatelly */
+- reg = readl(sst->addr.pci_cfg + SST_VDRTCTL2);
+- reg |= SST_VDRTCL2_DTCGE;
+- writel(reg, sst->addr.pci_cfg + SST_VDRTCTL2);
++ SST_CLKCTL_MASK, 0);
+
+- /* set shim defaults */
+- hsw_set_shim_defaults(sst);
+-
+- /* set D3 */
+- reg = readl(sst->addr.pci_cfg + SST_PMCS);
+- reg |= SST_PMCS_PS_MASK;
+- writel(reg, sst->addr.pci_cfg + SST_PMCS);
++ /* Set D3 state, delay 50 us */
++ val = readl(sst->addr.pci_cfg + SST_PMCS);
++ val |= SST_PMCS_PS_MASK;
++ writel(val, sst->addr.pci_cfg + SST_PMCS);
+ udelay(50);
+
+- /* enable clock core gating */
++ /* Enable core clock gating (VDRTCTL2.DCLCGE = 1), delay 50 us */
+ reg = readl(sst->addr.pci_cfg + SST_VDRTCTL2);
+- reg |= SST_VDRTCL2_DCLCGE;
++ reg |= SST_VDRTCL2_DCLCGE | SST_VDRTCL2_DTCGE;
+ writel(reg, sst->addr.pci_cfg + SST_VDRTCTL2);
++
+ udelay(50);
++
+ }
+
+ static void hsw_reset(struct sst_dsp *sst)
+@@ -346,62 +299,75 @@ static void hsw_reset(struct sst_dsp *sst)
+ SST_CSR_RST | SST_CSR_STALL, SST_CSR_STALL);
+ }
+
+-/* recommended CSR state for power-up */
+-#define SST_CSR_D0_MASK (0x18A09C0C | SST_CSR_DCS_MASK)
+-
+ static int hsw_set_dsp_D0(struct sst_dsp *sst)
+ {
+- u32 reg;
++ int tries = 10;
++ u32 reg, fw_dump_bit;
+
+- /* disable clock core gating */
++ /* Disable core clock gating (VDRTCTL2.DCLCGE = 0) */
+ reg = readl(sst->addr.pci_cfg + SST_VDRTCTL2);
+- reg &= ~(SST_VDRTCL2_DCLCGE);
++ reg &= ~(SST_VDRTCL2_DCLCGE | SST_VDRTCL2_DTCGE);
+ writel(reg, sst->addr.pci_cfg + SST_VDRTCTL2);
+
+- /* switch clock gating */
+- reg = readl(sst->addr.pci_cfg + SST_VDRTCTL2);
+- reg |= SST_VDRTCL2_CG_OTHER;
+- reg &= ~(SST_VDRTCL2_DTCGE);
+- writel(reg, sst->addr.pci_cfg + SST_VDRTCTL2);
++ /* Disable D3PG (VDRTCTL0.D3PGD = 1) */
++ reg = readl(sst->addr.pci_cfg + SST_VDRTCTL0);
++ reg |= SST_VDRTCL0_D3PGD;
++ writel(reg, sst->addr.pci_cfg + SST_VDRTCTL0);
+
+- /* set D0 */
++ /* Set D0 state */
+ reg = readl(sst->addr.pci_cfg + SST_PMCS);
+- reg &= ~(SST_PMCS_PS_MASK);
++ reg &= ~SST_PMCS_PS_MASK;
+ writel(reg, sst->addr.pci_cfg + SST_PMCS);
+
+- /* DRAM power gating none */
+- reg = readl(sst->addr.pci_cfg + SST_VDRTCTL0);
+- reg &= ~(SST_VDRTCL0_ISRAMPGE_MASK |
+- SST_VDRTCL0_DSRAMPGE_MASK);
+- reg |= SST_VDRTCL0_D3SRAMPGD;
+- reg |= SST_VDRTCL0_D3PGD;
+- writel(reg, sst->addr.pci_cfg + SST_VDRTCTL0);
+- mdelay(10);
++ /* check that ADSP shim is enabled */
++ while (tries--) {
++ reg = readl(sst->addr.pci_cfg + SST_PMCS) & SST_PMCS_PS_MASK;
++ if (reg == 0)
++ goto finish;
++
++ msleep(1);
++ }
++
++ return -ENODEV;
+
+- /* set shim defaults */
+- hsw_set_shim_defaults(sst);
++finish:
++ /* select SSP1 19.2MHz base clock, SSP clock 0, turn off Low Power Clock */
++ sst_dsp_shim_update_bits_unlocked(sst, SST_CSR,
++ SST_CSR_S1IOCS | SST_CSR_SBCS1 | SST_CSR_LPCS, 0x0);
++
++ /* stall DSP core, set clk to 192/96Mhz */
++ sst_dsp_shim_update_bits_unlocked(sst,
++ SST_CSR, SST_CSR_STALL | SST_CSR_DCS_MASK,
++ SST_CSR_STALL | SST_CSR_DCS(4));
+
+- /* restore MCLK */
++ /* Set 24MHz MCLK, prevent local clock gating, enable SSP0 clock */
+ sst_dsp_shim_update_bits_unlocked(sst, SST_CLKCTL,
+- SST_CLKCTL_MASK, SST_CLKCTL_MASK);
++ SST_CLKCTL_MASK | SST_CLKCTL_DCPLCG | SST_CLKCTL_SCOE0,
++ SST_CLKCTL_MASK | SST_CLKCTL_DCPLCG | SST_CLKCTL_SCOE0);
+
+- /* PLL shutdown disable */
++ /* Stall and reset core, set CSR */
++ hsw_reset(sst);
++
++ /* Enable core clock gating (VDRTCTL2.DCLCGE = 1), delay 50 us */
+ reg = readl(sst->addr.pci_cfg + SST_VDRTCTL2);
+- reg &= ~(SST_VDRTCL2_APLLSE_MASK);
++ reg |= SST_VDRTCL2_DCLCGE | SST_VDRTCL2_DTCGE;
+ writel(reg, sst->addr.pci_cfg + SST_VDRTCTL2);
+
+- sst_dsp_shim_update_bits_unlocked(sst, SST_CSR,
+- SST_CSR_D0_MASK, SST_CSR_SBCS0 | SST_CSR_SBCS1 |
+- SST_CSR_STALL | SST_CSR_DCS(4));
+ udelay(50);
+
+- /* enable clock core gating */
++ /* switch on audio PLL */
+ reg = readl(sst->addr.pci_cfg + SST_VDRTCTL2);
+- reg |= SST_VDRTCL2_DCLCGE;
++ reg &= ~SST_VDRTCL2_APLLSE_MASK;
+ writel(reg, sst->addr.pci_cfg + SST_VDRTCTL2);
+
+- /* clear reset */
+- sst_dsp_shim_update_bits_unlocked(sst, SST_CSR, SST_CSR_RST, 0);
++ /* set default power gating control, enable power gating control for all blocks. that is,
++ can't be accessed, please enable each block before accessing. */
++ reg = readl(sst->addr.pci_cfg + SST_VDRTCTL0);
++ reg |= SST_VDRTCL0_DSRAMPGE_MASK | SST_VDRTCL0_ISRAMPGE_MASK;
++ /* for D0, always enable the block(DSRAM[0]) used for FW dump */
++ fw_dump_bit = 1 << SST_VDRTCL0_DSRAMPGE_SHIFT;
++ writel(reg & ~fw_dump_bit, sst->addr.pci_cfg + SST_VDRTCTL0);
++
+
+ /* disable DMA finish function for SSP0 & SSP1 */
+ sst_dsp_shim_update_bits_unlocked(sst, SST_CSR2, SST_CSR2_SDFD_SSP1,
+@@ -418,6 +384,12 @@ static int hsw_set_dsp_D0(struct sst_dsp *sst)
+ sst_dsp_shim_update_bits(sst, SST_IMRD, (SST_IMRD_DONE | SST_IMRD_BUSY |
+ SST_IMRD_SSP0 | SST_IMRD_DMAC), 0x0);
+
++ /* clear IPC registers */
++ sst_dsp_shim_write(sst, SST_IPCX, 0x0);
++ sst_dsp_shim_write(sst, SST_IPCD, 0x0);
++ sst_dsp_shim_write(sst, 0x80, 0x6);
++ sst_dsp_shim_write(sst, 0xe0, 0x300a);
++
+ return 0;
+ }
+
+@@ -443,6 +415,11 @@ static void hsw_sleep(struct sst_dsp *sst)
+ {
+ dev_dbg(sst->dev, "HSW_PM dsp runtime suspend\n");
+
++ /* put DSP into reset and stall */
++ sst_dsp_shim_update_bits(sst, SST_CSR,
++ SST_CSR_24MHZ_LPCS | SST_CSR_RST | SST_CSR_STALL,
++ SST_CSR_RST | SST_CSR_STALL | SST_CSR_24MHZ_LPCS);
++
+ hsw_set_dsp_D3(sst);
+ dev_dbg(sst->dev, "HSW_PM dsp runtime suspend exit\n");
+ }
+diff --git a/sound/soc/meson/axg-toddr.c b/sound/soc/meson/axg-toddr.c
+index e711abcf8c124..d6adf7edea41f 100644
+--- a/sound/soc/meson/axg-toddr.c
++++ b/sound/soc/meson/axg-toddr.c
+@@ -18,6 +18,7 @@
+ #define CTRL0_TODDR_SEL_RESAMPLE BIT(30)
+ #define CTRL0_TODDR_EXT_SIGNED BIT(29)
+ #define CTRL0_TODDR_PP_MODE BIT(28)
++#define CTRL0_TODDR_SYNC_CH BIT(27)
+ #define CTRL0_TODDR_TYPE_MASK GENMASK(15, 13)
+ #define CTRL0_TODDR_TYPE(x) ((x) << 13)
+ #define CTRL0_TODDR_MSB_POS_MASK GENMASK(12, 8)
+@@ -189,10 +190,31 @@ static const struct axg_fifo_match_data axg_toddr_match_data = {
+ .dai_drv = &axg_toddr_dai_drv
+ };
+
++static int g12a_toddr_dai_startup(struct snd_pcm_substream *substream,
++ struct snd_soc_dai *dai)
++{
++ struct axg_fifo *fifo = snd_soc_dai_get_drvdata(dai);
++ int ret;
++
++ ret = axg_toddr_dai_startup(substream, dai);
++ if (ret)
++ return ret;
++
++ /*
++ * Make sure the first channel ends up in the at beginning of the output
++ * As weird as it looks, without this the first channel may be misplaced
++ * in memory, with a random shift of 2 channels.
++ */
++ regmap_update_bits(fifo->map, FIFO_CTRL0, CTRL0_TODDR_SYNC_CH,
++ CTRL0_TODDR_SYNC_CH);
++
++ return 0;
++}
++
+ static const struct snd_soc_dai_ops g12a_toddr_ops = {
+ .prepare = g12a_toddr_dai_prepare,
+ .hw_params = axg_toddr_dai_hw_params,
+- .startup = axg_toddr_dai_startup,
++ .startup = g12a_toddr_dai_startup,
+ .shutdown = axg_toddr_dai_shutdown,
+ };
+
+diff --git a/sound/soc/qcom/apq8016_sbc.c b/sound/soc/qcom/apq8016_sbc.c
+index 2ef090f4af9e9..8abc1a95184b2 100644
+--- a/sound/soc/qcom/apq8016_sbc.c
++++ b/sound/soc/qcom/apq8016_sbc.c
+@@ -234,6 +234,7 @@ static int apq8016_sbc_platform_probe(struct platform_device *pdev)
+ return -ENOMEM;
+
+ card->dev = dev;
++ card->owner = THIS_MODULE;
+ card->dapm_widgets = apq8016_sbc_dapm_widgets;
+ card->num_dapm_widgets = ARRAY_SIZE(apq8016_sbc_dapm_widgets);
+ data = apq8016_sbc_parse_of(card);
+diff --git a/sound/soc/qcom/apq8096.c b/sound/soc/qcom/apq8096.c
+index 287ad2aa27f37..d47bedc259c59 100644
+--- a/sound/soc/qcom/apq8096.c
++++ b/sound/soc/qcom/apq8096.c
+@@ -114,6 +114,7 @@ static int apq8096_platform_probe(struct platform_device *pdev)
+ return -ENOMEM;
+
+ card->dev = dev;
++ card->owner = THIS_MODULE;
+ dev_set_drvdata(dev, card);
+ ret = qcom_snd_parse_of(card);
+ if (ret)
+diff --git a/sound/soc/qcom/common.c b/sound/soc/qcom/common.c
+index 8ada4ecba8472..10322690c0eaa 100644
+--- a/sound/soc/qcom/common.c
++++ b/sound/soc/qcom/common.c
+@@ -45,8 +45,10 @@ int qcom_snd_parse_of(struct snd_soc_card *card)
+
+ for_each_child_of_node(dev->of_node, np) {
+ dlc = devm_kzalloc(dev, 2 * sizeof(*dlc), GFP_KERNEL);
+- if (!dlc)
+- return -ENOMEM;
++ if (!dlc) {
++ ret = -ENOMEM;
++ goto err;
++ }
+
+ link->cpus = &dlc[0];
+ link->platforms = &dlc[1];
+diff --git a/sound/soc/qcom/sdm845.c b/sound/soc/qcom/sdm845.c
+index 68e9388ff46f1..b5b8465caf56f 100644
+--- a/sound/soc/qcom/sdm845.c
++++ b/sound/soc/qcom/sdm845.c
+@@ -557,6 +557,7 @@ static int sdm845_snd_platform_probe(struct platform_device *pdev)
+ card->dapm_widgets = sdm845_snd_widgets;
+ card->num_dapm_widgets = ARRAY_SIZE(sdm845_snd_widgets);
+ card->dev = dev;
++ card->owner = THIS_MODULE;
+ dev_set_drvdata(dev, card);
+ ret = qcom_snd_parse_of(card);
+ if (ret)
+diff --git a/sound/soc/qcom/storm.c b/sound/soc/qcom/storm.c
+index 3a6e18709b9e2..4ba111c841375 100644
+--- a/sound/soc/qcom/storm.c
++++ b/sound/soc/qcom/storm.c
+@@ -96,6 +96,7 @@ static int storm_platform_probe(struct platform_device *pdev)
+ return -ENOMEM;
+
+ card->dev = &pdev->dev;
++ card->owner = THIS_MODULE;
+
+ ret = snd_soc_of_parse_card_name(card, "qcom,model");
+ if (ret) {
+diff --git a/sound/soc/soc-core.c b/sound/soc/soc-core.c
+index f1d641cd48da9..20ca1d38b4b87 100644
+--- a/sound/soc/soc-core.c
++++ b/sound/soc/soc-core.c
+@@ -834,6 +834,19 @@ struct snd_soc_dai *snd_soc_find_dai(
+ }
+ EXPORT_SYMBOL_GPL(snd_soc_find_dai);
+
++struct snd_soc_dai *snd_soc_find_dai_with_mutex(
++ const struct snd_soc_dai_link_component *dlc)
++{
++ struct snd_soc_dai *dai;
++
++ mutex_lock(&client_mutex);
++ dai = snd_soc_find_dai(dlc);
++ mutex_unlock(&client_mutex);
++
++ return dai;
++}
++EXPORT_SYMBOL_GPL(snd_soc_find_dai_with_mutex);
++
+ static int soc_dai_link_sanity_check(struct snd_soc_card *card,
+ struct snd_soc_dai_link *link)
+ {
+diff --git a/sound/soc/soc-dai.c b/sound/soc/soc-dai.c
+index cecbbed2de9d5..0e04ad7689cd9 100644
+--- a/sound/soc/soc-dai.c
++++ b/sound/soc/soc-dai.c
+@@ -410,14 +410,14 @@ void snd_soc_dai_link_set_capabilities(struct snd_soc_dai_link *dai_link)
+ supported_codec = false;
+
+ for_each_link_cpus(dai_link, i, cpu) {
+- dai = snd_soc_find_dai(cpu);
++ dai = snd_soc_find_dai_with_mutex(cpu);
+ if (dai && snd_soc_dai_stream_valid(dai, direction)) {
+ supported_cpu = true;
+ break;
+ }
+ }
+ for_each_link_codecs(dai_link, i, codec) {
+- dai = snd_soc_find_dai(codec);
++ dai = snd_soc_find_dai_with_mutex(codec);
+ if (dai && snd_soc_dai_stream_valid(dai, direction)) {
+ supported_codec = true;
+ break;
+diff --git a/sound/soc/soc-pcm.c b/sound/soc/soc-pcm.c
+index 74baf1fce053f..918ed77726cc0 100644
+--- a/sound/soc/soc-pcm.c
++++ b/sound/soc/soc-pcm.c
+@@ -811,7 +811,7 @@ dynamic:
+ return 0;
+
+ config_err:
+- for_each_rtd_dais(rtd, i, dai)
++ for_each_rtd_dais_rollback(rtd, i, dai)
+ snd_soc_dai_shutdown(dai, substream);
+
+ snd_soc_link_shutdown(substream);
+diff --git a/tools/perf/tests/bp_signal.c b/tools/perf/tests/bp_signal.c
+index da8ec1e8e0648..cc9fbcedb3646 100644
+--- a/tools/perf/tests/bp_signal.c
++++ b/tools/perf/tests/bp_signal.c
+@@ -45,10 +45,13 @@ volatile long the_var;
+ #if defined (__x86_64__)
+ extern void __test_function(volatile long *ptr);
+ asm (
++ ".pushsection .text;"
+ ".globl __test_function\n"
++ ".type __test_function, @function;"
+ "__test_function:\n"
+ "incq (%rdi)\n"
+- "ret\n");
++ "ret\n"
++ ".popsection\n");
+ #else
+ static void __test_function(volatile long *ptr)
+ {
+diff --git a/tools/perf/tests/pmu-events.c b/tools/perf/tests/pmu-events.c
+index ab64b4a4e2848..343d36965836a 100644
+--- a/tools/perf/tests/pmu-events.c
++++ b/tools/perf/tests/pmu-events.c
+@@ -274,6 +274,7 @@ static int __test__pmu_event_aliases(char *pmu_name, int *count)
+ int res = 0;
+ bool use_uncore_table;
+ struct pmu_events_map *map = __test_pmu_get_events_map();
++ struct perf_pmu_alias *a, *tmp;
+
+ if (!map)
+ return -1;
+@@ -347,6 +348,10 @@ static int __test__pmu_event_aliases(char *pmu_name, int *count)
+ pmu_name, alias->name);
+ }
+
++ list_for_each_entry_safe(a, tmp, &aliases, list) {
++ list_del(&a->list);
++ perf_pmu_free_alias(a);
++ }
+ free(pmu);
+ return res;
+ }
+diff --git a/tools/perf/tests/pmu.c b/tools/perf/tests/pmu.c
+index 5c11fe2b30406..714e6830a758f 100644
+--- a/tools/perf/tests/pmu.c
++++ b/tools/perf/tests/pmu.c
+@@ -173,6 +173,7 @@ int test__pmu(struct test *test __maybe_unused, int subtest __maybe_unused)
+ ret = 0;
+ } while (0);
+
++ perf_pmu__del_formats(&formats);
+ test_format_dir_put(format);
+ return ret;
+ }
+diff --git a/tools/perf/util/evlist.c b/tools/perf/util/evlist.c
+index ab48be4cf2584..b279888bb1aab 100644
+--- a/tools/perf/util/evlist.c
++++ b/tools/perf/util/evlist.c
+@@ -946,6 +946,10 @@ int perf_evlist__create_maps(struct evlist *evlist, struct target *target)
+
+ perf_evlist__set_maps(&evlist->core, cpus, threads);
+
++ /* as evlist now has references, put count here */
++ perf_cpu_map__put(cpus);
++ perf_thread_map__put(threads);
++
+ return 0;
+
+ out_delete_threads:
+@@ -1273,11 +1277,12 @@ static int perf_evlist__create_syswide_maps(struct evlist *evlist)
+ goto out_put;
+
+ perf_evlist__set_maps(&evlist->core, cpus, threads);
+-out:
+- return err;
++
++ perf_thread_map__put(threads);
+ out_put:
+ perf_cpu_map__put(cpus);
+- goto out;
++out:
++ return err;
+ }
+
+ int evlist__open(struct evlist *evlist)
+diff --git a/tools/perf/util/metricgroup.c b/tools/perf/util/metricgroup.c
+index 9e21aa767e417..344a75718afc3 100644
+--- a/tools/perf/util/metricgroup.c
++++ b/tools/perf/util/metricgroup.c
+@@ -443,6 +443,9 @@ void metricgroup__print(bool metrics, bool metricgroups, char *filter,
+ continue;
+ strlist__add(me->metrics, s);
+ }
++
++ if (!raw)
++ free(s);
+ }
+ free(omg);
+ }
+@@ -726,7 +729,7 @@ int metricgroup__parse_groups(const struct option *opt,
+ ret = metricgroup__add_metric_list(str, metric_no_group,
+ &extra_events, &group_list);
+ if (ret)
+- return ret;
++ goto out;
+ pr_debug("adding %s\n", extra_events.buf);
+ bzero(&parse_error, sizeof(parse_error));
+ ret = parse_events(perf_evlist, extra_events.buf, &parse_error);
+@@ -734,11 +737,11 @@ int metricgroup__parse_groups(const struct option *opt,
+ parse_events_print_error(&parse_error, extra_events.buf);
+ goto out;
+ }
+- strbuf_release(&extra_events);
+ ret = metricgroup__setup_events(&group_list, metric_no_merge,
+ perf_evlist, metric_events);
+ out:
+ metricgroup__free_egroups(&group_list);
++ strbuf_release(&extra_events);
+ return ret;
+ }
+
+diff --git a/tools/perf/util/parse-events.c b/tools/perf/util/parse-events.c
+index 4476de0e678aa..c1120d8196fae 100644
+--- a/tools/perf/util/parse-events.c
++++ b/tools/perf/util/parse-events.c
+@@ -410,7 +410,7 @@ static int add_event_tool(struct list_head *list, int *idx,
+ return -ENOMEM;
+ evsel->tool_event = tool_event;
+ if (tool_event == PERF_TOOL_DURATION_TIME)
+- evsel->unit = strdup("ns");
++ evsel->unit = "ns";
+ return 0;
+ }
+
+diff --git a/tools/perf/util/pmu.c b/tools/perf/util/pmu.c
+index 93fe72a9dc0b2..483da97ac4459 100644
+--- a/tools/perf/util/pmu.c
++++ b/tools/perf/util/pmu.c
+@@ -272,7 +272,7 @@ static void perf_pmu_update_alias(struct perf_pmu_alias *old,
+ }
+
+ /* Delete an alias entry. */
+-static void perf_pmu_free_alias(struct perf_pmu_alias *newalias)
++void perf_pmu_free_alias(struct perf_pmu_alias *newalias)
+ {
+ zfree(&newalias->name);
+ zfree(&newalias->desc);
+@@ -1352,6 +1352,17 @@ void perf_pmu__set_format(unsigned long *bits, long from, long to)
+ set_bit(b, bits);
+ }
+
++void perf_pmu__del_formats(struct list_head *formats)
++{
++ struct perf_pmu_format *fmt, *tmp;
++
++ list_for_each_entry_safe(fmt, tmp, formats, list) {
++ list_del(&fmt->list);
++ free(fmt->name);
++ free(fmt);
++ }
++}
++
+ static int sub_non_neg(int a, int b)
+ {
+ if (b > a)
+diff --git a/tools/perf/util/pmu.h b/tools/perf/util/pmu.h
+index f971d9aa4570a..28778b47fb4b7 100644
+--- a/tools/perf/util/pmu.h
++++ b/tools/perf/util/pmu.h
+@@ -92,6 +92,7 @@ int perf_pmu__new_format(struct list_head *list, char *name,
+ int config, unsigned long *bits);
+ void perf_pmu__set_format(unsigned long *bits, long from, long to);
+ int perf_pmu__format_parse(char *dir, struct list_head *head);
++void perf_pmu__del_formats(struct list_head *formats);
+
+ struct perf_pmu *perf_pmu__scan(struct perf_pmu *pmu);
+
+@@ -111,6 +112,7 @@ void pmu_add_cpu_aliases_map(struct list_head *head, struct perf_pmu *pmu,
+
+ struct pmu_events_map *perf_pmu__find_map(struct perf_pmu *pmu);
+ bool pmu_uncore_alias_match(const char *pmu_name, const char *name);
++void perf_pmu_free_alias(struct perf_pmu_alias *alias);
+
+ int perf_pmu__convert_scale(const char *scale, char **end, double *sval);
+
+diff --git a/tools/perf/util/record.c b/tools/perf/util/record.c
+index a4cc11592f6b3..ea9aa1d7cf501 100644
+--- a/tools/perf/util/record.c
++++ b/tools/perf/util/record.c
+@@ -2,6 +2,7 @@
+ #include "debug.h"
+ #include "evlist.h"
+ #include "evsel.h"
++#include "evsel_config.h"
+ #include "parse-events.h"
+ #include <errno.h>
+ #include <limits.h>
+@@ -33,11 +34,24 @@ static struct evsel *evsel__read_sampler(struct evsel *evsel, struct evlist *evl
+ return leader;
+ }
+
++static u64 evsel__config_term_mask(struct evsel *evsel)
++{
++ struct evsel_config_term *term;
++ struct list_head *config_terms = &evsel->config_terms;
++ u64 term_types = 0;
++
++ list_for_each_entry(term, config_terms, list) {
++ term_types |= 1 << term->type;
++ }
++ return term_types;
++}
++
+ static void evsel__config_leader_sampling(struct evsel *evsel, struct evlist *evlist)
+ {
+ struct perf_event_attr *attr = &evsel->core.attr;
+ struct evsel *leader = evsel->leader;
+ struct evsel *read_sampler;
++ u64 term_types, freq_mask;
+
+ if (!leader->sample_read)
+ return;
+@@ -47,16 +61,20 @@ static void evsel__config_leader_sampling(struct evsel *evsel, struct evlist *ev
+ if (evsel == read_sampler)
+ return;
+
++ term_types = evsel__config_term_mask(evsel);
+ /*
+- * Disable sampling for all group members other than the leader in
+- * case the leader 'leads' the sampling, except when the leader is an
+- * AUX area event, in which case the 2nd event in the group is the one
+- * that 'leads' the sampling.
++ * Disable sampling for all group members except those with explicit
++ * config terms or the leader. In the case of an AUX area event, the 2nd
++ * event in the group is the one that 'leads' the sampling.
+ */
+- attr->freq = 0;
+- attr->sample_freq = 0;
+- attr->sample_period = 0;
+- attr->write_backward = 0;
++ freq_mask = (1 << EVSEL__CONFIG_TERM_FREQ) | (1 << EVSEL__CONFIG_TERM_PERIOD);
++ if ((term_types & freq_mask) == 0) {
++ attr->freq = 0;
++ attr->sample_freq = 0;
++ attr->sample_period = 0;
++ }
++ if ((term_types & (1 << EVSEL__CONFIG_TERM_OVERWRITE)) == 0)
++ attr->write_backward = 0;
+
+ /*
+ * We don't get a sample for slave events, we make them when delivering
+diff --git a/tools/testing/selftests/vm/map_hugetlb.c b/tools/testing/selftests/vm/map_hugetlb.c
+index 6af951900aa39..312889edb84ab 100644
+--- a/tools/testing/selftests/vm/map_hugetlb.c
++++ b/tools/testing/selftests/vm/map_hugetlb.c
+@@ -83,7 +83,7 @@ int main(int argc, char **argv)
+ }
+
+ if (shift)
+- printf("%u kB hugepages\n", 1 << shift);
++ printf("%u kB hugepages\n", 1 << (shift - 10));
+ else
+ printf("Default size hugepages\n");
+ printf("Mapping %lu Mbytes\n", (unsigned long)length >> 20);
* [gentoo-commits] proj/linux-patches:5.8 commit in: /
@ 2020-09-23 13:06 Mike Pagano
0 siblings, 0 replies; 30+ messages in thread
From: Mike Pagano @ 2020-09-23 13:06 UTC (permalink / raw
To: gentoo-commits
commit: b7b7344597998be6b6b69f5d9fd42a0b1f1dbf01
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Sep 23 13:05:42 2020 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Sep 23 13:05:42 2020 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=b7b73445
Remove incompatible ZSTD patches. Will carry for 5.9 if necessary
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 32 --
...ZSTD-v10-1-8-prepare-zstd-for-preboot-env.patch | 65 ----
...TD-v10-2-8-prepare-xxhash-for-preboot-env.patch | 425 ---------------------
...TD-v10-3-8-add-zstd-support-to-decompress.patch | 93 -----
...v10-4-8-add-support-for-zstd-compres-kern.patch | 50 ---
...add-support-for-zstd-compressed-initramfs.patch | 20 -
| 123 ------
...10-7-8-support-for-ZSTD-compressed-kernel.patch | 12 -
...0-8-8-gitignore-add-ZSTD-compressed-files.patch | 12 -
9 files changed, 832 deletions(-)
diff --git a/0000_README b/0000_README
index e5b0bab..d438f0f 100644
--- a/0000_README
+++ b/0000_README
@@ -111,38 +111,6 @@ Patch: 4567_distro-Gentoo-Kconfig.patch
From: Tom Wijsman <TomWij@gentoo.org>
Desc: Add Gentoo Linux support config settings and defaults.
-Patch: 5000_ZSTD-v10-1-8-prepare-zstd-for-preboot-env.patch
-From: https://lkml.org/lkml/2020/4/1/29
-Desc: lib: prepare zstd for preboot environment
-
-Patch: 5001_ZSTD-v10-2-8-prepare-xxhash-for-preboot-env.patch
-From: https://lkml.org/lkml/2020/4/1/29
-Desc: lib: prepare xxhash for preboot environment
-
-Patch: 5002_ZSTD-v10-3-8-add-zstd-support-to-decompress.patch
-From: https://lkml.org/lkml/2020/4/1/29
-Desc: lib: add zstd support to decompress
-
-Patch: 5003_ZSTD-v10-4-8-add-support-for-zstd-compres-kern.patch
-From: https://lkml.org/lkml/2020/4/1/29
-Desc: init: add support for zstd compressed kernel
-
-Patch: 5004_ZSTD-v10-5-8-add-support-for-zstd-compressed-initramfs.patch
-From: https://lkml.org/lkml/2020/4/1/29
-Desc: usr: add support for zstd compressed initramfs
-
-Patch: 5005_ZSTD-v10-6-8-bump-ZO-z-extra-bytes-margin.patch
-From: https://lkml.org/lkml/2020/4/1/29
-Desc: x86: bump ZO_z_extra_bytes margin for zstd
-
-Patch: 5006_ZSTD-v10-7-8-support-for-ZSTD-compressed-kernel.patch
-From: https://lkml.org/lkml/2020/4/1/29
-Desc: x86: Add support for ZSTD compressed kernel
-
-Patch: 5007_ZSTD-v10-8-8-gitignore-add-ZSTD-compressed-files.patch
-From: https://lkml.org/lkml/2020/4/1/29
-Desc: .gitignore: add ZSTD-compressed files
-
Patch: 5012_enable-cpu-optimizations-for-gcc91.patch
From: https://github.com/graysky2/kernel_gcc_patch/
Desc: Kernel patch enables gcc >= v9.1 >= gcc < v10 optimizations for additional CPUs.
diff --git a/5000_ZSTD-v10-1-8-prepare-zstd-for-preboot-env.patch b/5000_ZSTD-v10-1-8-prepare-zstd-for-preboot-env.patch
deleted file mode 100644
index c13b091..0000000
--- a/5000_ZSTD-v10-1-8-prepare-zstd-for-preboot-env.patch
+++ /dev/null
@@ -1,65 +0,0 @@
-diff --git a/lib/zstd/fse_decompress.c b/lib/zstd/fse_decompress.c
-index a84300e5a013..0b353530fb3f 100644
---- a/lib/zstd/fse_decompress.c
-+++ b/lib/zstd/fse_decompress.c
-@@ -47,6 +47,7 @@
- ****************************************************************/
- #include "bitstream.h"
- #include "fse.h"
-+#include "zstd_internal.h"
- #include <linux/compiler.h>
- #include <linux/kernel.h>
- #include <linux/string.h> /* memcpy, memset */
-@@ -60,14 +61,6 @@
- enum { FSE_static_assert = 1 / (int)(!!(c)) }; \
- } /* use only *after* variable declarations */
-
--/* check and forward error code */
--#define CHECK_F(f) \
-- { \
-- size_t const e = f; \
-- if (FSE_isError(e)) \
-- return e; \
-- }
--
- /* **************************************************************
- * Templates
- ****************************************************************/
-diff --git a/lib/zstd/zstd_internal.h b/lib/zstd/zstd_internal.h
-index 1a79fab9e13a..dac753397f86 100644
---- a/lib/zstd/zstd_internal.h
-+++ b/lib/zstd/zstd_internal.h
-@@ -127,7 +127,14 @@ static const U32 OF_defaultNormLog = OF_DEFAULTNORMLOG;
- * Shared functions to include for inlining
- *********************************************/
- ZSTD_STATIC void ZSTD_copy8(void *dst, const void *src) {
-- memcpy(dst, src, 8);
-+ /*
-+ * zstd relies heavily on gcc being able to analyze and inline this
-+ * memcpy() call, since it is called in a tight loop. Preboot mode
-+ * is compiled in freestanding mode, which stops gcc from analyzing
-+ * memcpy(). Use __builtin_memcpy() to tell gcc to analyze this as a
-+ * regular memcpy().
-+ */
-+ __builtin_memcpy(dst, src, 8);
- }
- /*! ZSTD_wildcopy() :
- * custom version of memcpy(), can copy up to 7 bytes too many (8 bytes if length==0) */
-@@ -137,13 +144,16 @@ ZSTD_STATIC void ZSTD_wildcopy(void *dst, const void *src, ptrdiff_t length)
- const BYTE* ip = (const BYTE*)src;
- BYTE* op = (BYTE*)dst;
- BYTE* const oend = op + length;
-- /* Work around https://gcc.gnu.org/bugzilla/show_bug.cgi?id=81388.
-+#if defined(GCC_VERSION) && GCC_VERSION >= 70000 && GCC_VERSION < 70200
-+ /*
-+ * Work around https://gcc.gnu.org/bugzilla/show_bug.cgi?id=81388.
- * Avoid the bad case where the loop only runs once by handling the
- * special case separately. This doesn't trigger the bug because it
- * doesn't involve pointer/integer overflow.
- */
- if (length <= 8)
- return ZSTD_copy8(dst, src);
-+#endif
- do {
- ZSTD_copy8(op, ip);
- op += 8;
diff --git a/5001_ZSTD-v10-2-8-prepare-xxhash-for-preboot-env.patch b/5001_ZSTD-v10-2-8-prepare-xxhash-for-preboot-env.patch
deleted file mode 100644
index b18164c..0000000
--- a/5001_ZSTD-v10-2-8-prepare-xxhash-for-preboot-env.patch
+++ /dev/null
@@ -1,425 +0,0 @@
-diff --git a/include/linux/decompress/unzstd.h b/include/linux/decompress/unzstd.h
-new file mode 100644
-index 000000000000..56d539ae880f
---- /dev/null
-+++ b/include/linux/decompress/unzstd.h
-@@ -0,0 +1,11 @@
-+/* SPDX-License-Identifier: GPL-2.0 */
-+#ifndef LINUX_DECOMPRESS_UNZSTD_H
-+#define LINUX_DECOMPRESS_UNZSTD_H
-+
-+int unzstd(unsigned char *inbuf, long len,
-+ long (*fill)(void*, unsigned long),
-+ long (*flush)(void*, unsigned long),
-+ unsigned char *output,
-+ long *pos,
-+ void (*error_fn)(char *x));
-+#endif
-diff --git a/lib/Kconfig b/lib/Kconfig
-index df3f3da95990..a5d6f23c4cab 100644
---- a/lib/Kconfig
-+++ b/lib/Kconfig
-@@ -342,6 +342,10 @@ config DECOMPRESS_LZ4
- select LZ4_DECOMPRESS
- tristate
-
-+config DECOMPRESS_ZSTD
-+ select ZSTD_DECOMPRESS
-+ tristate
-+
- #
- # Generic allocator support is selected if needed
- #
-diff --git a/lib/Makefile b/lib/Makefile
-index b1c42c10073b..2ba9642a3a87 100644
---- a/lib/Makefile
-+++ b/lib/Makefile
-@@ -170,6 +170,7 @@ lib-$(CONFIG_DECOMPRESS_LZMA) += decompress_unlzma.o
- lib-$(CONFIG_DECOMPRESS_XZ) += decompress_unxz.o
- lib-$(CONFIG_DECOMPRESS_LZO) += decompress_unlzo.o
- lib-$(CONFIG_DECOMPRESS_LZ4) += decompress_unlz4.o
-+lib-$(CONFIG_DECOMPRESS_ZSTD) += decompress_unzstd.o
-
- obj-$(CONFIG_TEXTSEARCH) += textsearch.o
- obj-$(CONFIG_TEXTSEARCH_KMP) += ts_kmp.o
-diff --git a/lib/decompress.c b/lib/decompress.c
-index 857ab1af1ef3..ab3fc90ffc64 100644
---- a/lib/decompress.c
-+++ b/lib/decompress.c
-@@ -13,6 +13,7 @@
- #include <linux/decompress/inflate.h>
- #include <linux/decompress/unlzo.h>
- #include <linux/decompress/unlz4.h>
-+#include <linux/decompress/unzstd.h>
-
- #include <linux/types.h>
- #include <linux/string.h>
-@@ -37,6 +38,9 @@
- #ifndef CONFIG_DECOMPRESS_LZ4
- # define unlz4 NULL
- #endif
-+#ifndef CONFIG_DECOMPRESS_ZSTD
-+# define unzstd NULL
-+#endif
-
- struct compress_format {
- unsigned char magic[2];
-@@ -52,6 +56,7 @@ static const struct compress_format compressed_formats[] __initconst = {
- { {0xfd, 0x37}, "xz", unxz },
- { {0x89, 0x4c}, "lzo", unlzo },
- { {0x02, 0x21}, "lz4", unlz4 },
-+ { {0x28, 0xb5}, "zstd", unzstd },
- { {0, 0}, NULL, NULL }
- };
-
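The compressed_formats table quoted above dispatches purely on the first two magic bytes of the stream. A minimal sketch of the same lookup in Python — the format list mirrors the table, but the structure and function names here are illustrative, not kernel code:

```python
# Sketch of the two-byte magic dispatch used by lib/decompress.c.
# Each entry pairs the magic prefix with the format name; in the kernel
# the third field is the decompressor function (unzstd, unlz4, ...).
COMPRESSED_FORMATS = [
    (b"\x1f\x8b", "gzip"),
    (b"\x1f\x9e", "gzip"),
    (b"\x42\x5a", "bzip2"),
    (b"\x5d\x00", "lzma"),
    (b"\xfd\x37", "xz"),
    (b"\x89\x4c", "lzo"),
    (b"\x02\x21", "lz4"),
    (b"\x28\xb5", "zstd"),
]

def detect_format(buf: bytes) -> str:
    """Return the format whose magic matches the first two bytes."""
    for magic, name in COMPRESSED_FORMATS:
        if buf[:2] == magic:
            return name
    return "unknown"
```

The zstd magic 0x28 0xb5 is the first two bytes of the little-endian frame magic 0xFD2FB528, which is why those two bytes suffice for dispatch.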
-diff --git a/lib/decompress_unzstd.c b/lib/decompress_unzstd.c
-new file mode 100644
-index 000000000000..0ad2c15479ed
---- /dev/null
-+++ b/lib/decompress_unzstd.c
-@@ -0,0 +1,345 @@
-+// SPDX-License-Identifier: GPL-2.0
-+
-+/*
-+ * Important notes about in-place decompression
-+ *
-+ * At least on x86, the kernel is decompressed in place: the compressed data
-+ * is placed to the end of the output buffer, and the decompressor overwrites
-+ * most of the compressed data. There must be enough safety margin to
-+ * guarantee that the write position is always behind the read position.
-+ *
-+ * The safety margin for ZSTD with a 128 KB block size is calculated below.
-+ * Note that the margin with ZSTD is bigger than with GZIP or XZ!
-+ *
-+ * The worst case for in-place decompression is that the beginning of
-+ * the file is compressed extremely well, and the rest of the file is
-+ * uncompressible. Thus, we must look for worst-case expansion when the
-+ * compressor is encoding uncompressible data.
-+ *
-+ * The structure of the .zst file in case of a compressed kernel is as follows.
-+ * Maximum sizes (as bytes) of the fields are in parentheses.
-+ *
-+ * Frame Header: (18)
-+ * Blocks: (N)
-+ * Checksum: (4)
-+ *
-+ * The frame header and checksum overhead is at most 22 bytes.
-+ *
-+ * ZSTD stores the data in blocks. Each block has a header whose size is
-+ * 3 bytes. After the block header, there is up to 128 KB of payload.
-+ * The maximum uncompressed size of the payload is 128 KB. The minimum
-+ * uncompressed size of the payload is never less than the payload size
-+ * (excluding the block header).
-+ *
-+ * The assumption, that the uncompressed size of the payload is never
-+ * smaller than the payload itself, is valid only when talking about
-+ * the payload as a whole. It is possible that the payload has parts where
-+ * the decompressor consumes more input than it produces output. Calculating
-+ * the worst case for this would be tricky. Instead of trying to do that,
-+ * let's simply make sure that the decompressor never overwrites any bytes
-+ * of the payload which it is currently reading.
-+ *
-+ * Now we have enough information to calculate the safety margin. We need
-+ * - 22 bytes for the .zst file format headers;
-+ * - 3 bytes per every 128 KiB of uncompressed size (one block header per
-+ * block); and
-+ * - 128 KiB (biggest possible zstd block size) to make sure that the
-+ * decompressor never overwrites anything from the block it is currently
-+ * reading.
-+ *
-+ * We get the following formula:
-+ *
-+ * safety_margin = 22 + uncompressed_size * 3 / 131072 + 131072
-+ * <= 22 + (uncompressed_size >> 15) + 131072
-+ */
-+
-+/*
-+ * Preboot environments #include "path/to/decompress_unzstd.c".
-+ * All of the source files we depend on must be #included.
-+ * zstd's only source dependency is xxhash, which has no source
-+ * dependencies.
-+ *
-+ * When UNZSTD_PREBOOT is defined we declare __decompress(), which is
-+ * used for kernel decompression, instead of unzstd().
-+ *
-+ * Define __DISABLE_EXPORTS in preboot environments to prevent symbols
-+ * from xxhash and zstd from being exported by the EXPORT_SYMBOL macro.
-+ */
-+#ifdef STATIC
-+# define UNZSTD_PREBOOT
-+# include "xxhash.c"
-+# include "zstd/entropy_common.c"
-+# include "zstd/fse_decompress.c"
-+# include "zstd/huf_decompress.c"
-+# include "zstd/zstd_common.c"
-+# include "zstd/decompress.c"
-+#endif
-+
-+#include <linux/decompress/mm.h>
-+#include <linux/kernel.h>
-+#include <linux/zstd.h>
-+
-+/* 128MB is the maximum window size supported by zstd. */
-+#define ZSTD_WINDOWSIZE_MAX (1 << ZSTD_WINDOWLOG_MAX)
-+/*
-+ * Size of the input and output buffers in multi-call mode.
-+ * Pick a larger size because it isn't used during kernel decompression,
-+ * since that is single pass, and we have to allocate a large buffer for
-+ * zstd's window anyway. The larger size speeds up initramfs decompression.
-+ */
-+#define ZSTD_IOBUF_SIZE (1 << 17)
-+
-+static int INIT handle_zstd_error(size_t ret, void (*error)(char *x))
-+{
-+ const int err = ZSTD_getErrorCode(ret);
-+
-+ if (!ZSTD_isError(ret))
-+ return 0;
-+
-+ switch (err) {
-+ case ZSTD_error_memory_allocation:
-+ error("ZSTD decompressor ran out of memory");
-+ break;
-+ case ZSTD_error_prefix_unknown:
-+ error("Input is not in the ZSTD format (wrong magic bytes)");
-+ break;
-+ case ZSTD_error_dstSize_tooSmall:
-+ case ZSTD_error_corruption_detected:
-+ case ZSTD_error_checksum_wrong:
-+ error("ZSTD-compressed data is corrupt");
-+ break;
-+ default:
-+ error("ZSTD-compressed data is probably corrupt");
-+ break;
-+ }
-+ return -1;
-+}
-+
-+/*
-+ * Handle the case where we have the entire input and output in one segment.
-+ * We can allocate less memory (no circular buffer for the sliding window),
-+ * and avoid some memcpy() calls.
-+ */
-+static int INIT decompress_single(const u8 *in_buf, long in_len, u8 *out_buf,
-+ long out_len, long *in_pos,
-+ void (*error)(char *x))
-+{
-+ const size_t wksp_size = ZSTD_DCtxWorkspaceBound();
-+ void *wksp = large_malloc(wksp_size);
-+ ZSTD_DCtx *dctx = ZSTD_initDCtx(wksp, wksp_size);
-+ int err;
-+ size_t ret;
-+
-+ if (dctx == NULL) {
-+ error("Out of memory while allocating ZSTD_DCtx");
-+ err = -1;
-+ goto out;
-+ }
-+ /*
-+ * Find out how large the frame actually is, there may be junk at
-+ * the end of the frame that ZSTD_decompressDCtx() can't handle.
-+ */
-+ ret = ZSTD_findFrameCompressedSize(in_buf, in_len);
-+ err = handle_zstd_error(ret, error);
-+ if (err)
-+ goto out;
-+ in_len = (long)ret;
-+
-+ ret = ZSTD_decompressDCtx(dctx, out_buf, out_len, in_buf, in_len);
-+ err = handle_zstd_error(ret, error);
-+ if (err)
-+ goto out;
-+
-+ if (in_pos != NULL)
-+ *in_pos = in_len;
-+
-+ err = 0;
-+out:
-+ if (wksp != NULL)
-+ large_free(wksp);
-+ return err;
-+}
-+
-+static int INIT __unzstd(unsigned char *in_buf, long in_len,
-+ long (*fill)(void*, unsigned long),
-+ long (*flush)(void*, unsigned long),
-+ unsigned char *out_buf, long out_len,
-+ long *in_pos,
-+ void (*error)(char *x))
-+{
-+ ZSTD_inBuffer in;
-+ ZSTD_outBuffer out;
-+ ZSTD_frameParams params;
-+ void *in_allocated = NULL;
-+ void *out_allocated = NULL;
-+ void *wksp = NULL;
-+ size_t wksp_size;
-+ ZSTD_DStream *dstream;
-+ int err;
-+ size_t ret;
-+
-+ if (out_len == 0)
-+ out_len = LONG_MAX; /* no limit */
-+
-+ if (fill == NULL && flush == NULL)
-+ /*
-+ * We can decompress faster and with less memory when we have a
-+ * single chunk.
-+ */
-+ return decompress_single(in_buf, in_len, out_buf, out_len,
-+ in_pos, error);
-+
-+ /*
-+ * If in_buf is not provided, we must be using fill(), so allocate
-+ * a large enough buffer. If it is provided, it must be at least
-+ * ZSTD_IOBUF_SIZE large.
-+ */
-+ if (in_buf == NULL) {
-+ in_allocated = large_malloc(ZSTD_IOBUF_SIZE);
-+ if (in_allocated == NULL) {
-+ error("Out of memory while allocating input buffer");
-+ err = -1;
-+ goto out;
-+ }
-+ in_buf = in_allocated;
-+ in_len = 0;
-+ }
-+ /* Read the first chunk, since we need to decode the frame header. */
-+ if (fill != NULL)
-+ in_len = fill(in_buf, ZSTD_IOBUF_SIZE);
-+ if (in_len < 0) {
-+ error("ZSTD-compressed data is truncated");
-+ err = -1;
-+ goto out;
-+ }
-+ /* Set the first non-empty input buffer. */
-+ in.src = in_buf;
-+ in.pos = 0;
-+ in.size = in_len;
-+ /* Allocate the output buffer if we are using flush(). */
-+ if (flush != NULL) {
-+ out_allocated = large_malloc(ZSTD_IOBUF_SIZE);
-+ if (out_allocated == NULL) {
-+ error("Out of memory while allocating output buffer");
-+ err = -1;
-+ goto out;
-+ }
-+ out_buf = out_allocated;
-+ out_len = ZSTD_IOBUF_SIZE;
-+ }
-+ /* Set the output buffer. */
-+ out.dst = out_buf;
-+ out.pos = 0;
-+ out.size = out_len;
-+
-+ /*
-+ * We need to know the window size to allocate the ZSTD_DStream.
-+ * Since we are streaming, we need to allocate a buffer for the sliding
-+ * window. The window size varies from 1 KB to ZSTD_WINDOWSIZE_MAX
-+ * (8 MB), so it is important to use the actual value so as not to
-+ * waste memory when it is smaller.
-+ */
-+ ret = ZSTD_getFrameParams(&params, in.src, in.size);
-+ err = handle_zstd_error(ret, error);
-+ if (err)
-+ goto out;
-+ if (ret != 0) {
-+ error("ZSTD-compressed data has an incomplete frame header");
-+ err = -1;
-+ goto out;
-+ }
-+ if (params.windowSize > ZSTD_WINDOWSIZE_MAX) {
-+ error("ZSTD-compressed data has too large a window size");
-+ err = -1;
-+ goto out;
-+ }
-+
-+ /*
-+ * Allocate the ZSTD_DStream now that we know how much memory is
-+ * required.
-+ */
-+ wksp_size = ZSTD_DStreamWorkspaceBound(params.windowSize);
-+ wksp = large_malloc(wksp_size);
-+ dstream = ZSTD_initDStream(params.windowSize, wksp, wksp_size);
-+ if (dstream == NULL) {
-+ error("Out of memory while allocating ZSTD_DStream");
-+ err = -1;
-+ goto out;
-+ }
-+
-+ /*
-+ * Decompression loop:
-+ * Read more data if necessary (error if no more data can be read).
-+ * Call the decompression function, which returns 0 when finished.
-+ * Flush any data produced if using flush().
-+ */
-+ if (in_pos != NULL)
-+ *in_pos = 0;
-+ do {
-+ /*
-+ * If we need to reload data, either we have fill() and can
-+ * try to get more data, or we don't and the input is truncated.
-+ */
-+ if (in.pos == in.size) {
-+ if (in_pos != NULL)
-+ *in_pos += in.pos;
-+ in_len = fill ? fill(in_buf, ZSTD_IOBUF_SIZE) : -1;
-+ if (in_len < 0) {
-+ error("ZSTD-compressed data is truncated");
-+ err = -1;
-+ goto out;
-+ }
-+ in.pos = 0;
-+ in.size = in_len;
-+ }
-+ /* Returns zero when the frame is complete. */
-+ ret = ZSTD_decompressStream(dstream, &out, &in);
-+ err = handle_zstd_error(ret, error);
-+ if (err)
-+ goto out;
-+ /* Flush all of the data produced if using flush(). */
-+ if (flush != NULL && out.pos > 0) {
-+ if (out.pos != flush(out.dst, out.pos)) {
-+ error("Failed to flush()");
-+ err = -1;
-+ goto out;
-+ }
-+ out.pos = 0;
-+ }
-+ } while (ret != 0);
-+
-+ if (in_pos != NULL)
-+ *in_pos += in.pos;
-+
-+ err = 0;
-+out:
-+ if (in_allocated != NULL)
-+ large_free(in_allocated);
-+ if (out_allocated != NULL)
-+ large_free(out_allocated);
-+ if (wksp != NULL)
-+ large_free(wksp);
-+ return err;
-+}
-+
-+#ifndef UNZSTD_PREBOOT
-+STATIC int INIT unzstd(unsigned char *buf, long len,
-+ long (*fill)(void*, unsigned long),
-+ long (*flush)(void*, unsigned long),
-+ unsigned char *out_buf,
-+ long *pos,
-+ void (*error)(char *x))
-+{
-+ return __unzstd(buf, len, fill, flush, out_buf, 0, pos, error);
-+}
-+#else
-+STATIC int INIT __decompress(unsigned char *buf, long len,
-+ long (*fill)(void*, unsigned long),
-+ long (*flush)(void*, unsigned long),
-+ unsigned char *out_buf, long out_len,
-+ long *pos,
-+ void (*error)(char *x))
-+{
-+ return __unzstd(buf, len, fill, flush, out_buf, out_len, pos, error);
-+}
-+#endif
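The in-place decompression margin derived in the comment block of the deleted decompress_unzstd.c above can be sanity-checked numerically. This is a standalone sketch of that arithmetic (integer division, as in the C formula), not kernel code:

```python
BLOCK_SIZE = 128 * 1024   # maximum zstd block payload (128 KiB)
FRAME_OVERHEAD = 22       # frame header (18) + checksum (4)
BLOCK_HEADER = 3          # bytes of block header per block

def safety_margin(uncompressed_size: int) -> int:
    """The margin from the comment:
    22 + uncompressed_size * 3 / 131072 + 131072 (integer division)."""
    return (FRAME_OVERHEAD
            + uncompressed_size * BLOCK_HEADER // BLOCK_SIZE
            + BLOCK_SIZE)

def margin_bound(uncompressed_size: int) -> int:
    """The simplified upper bound: 22 + (uncompressed_size >> 15) + 131072.
    n >> 15 is n/32768 = 4n/131072, which dominates 3n/131072."""
    return FRAME_OVERHEAD + (uncompressed_size >> 15) + BLOCK_SIZE
```

For a 30 MiB kernel image, both expressions come out a little over 128 KiB, which is why the comment notes that the zstd margin is larger than for GZIP or XZ.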
diff --git a/5002_ZSTD-v10-3-8-add-zstd-support-to-decompress.patch b/5002_ZSTD-v10-3-8-add-zstd-support-to-decompress.patch
deleted file mode 100644
index a277f5e..0000000
--- a/5002_ZSTD-v10-3-8-add-zstd-support-to-decompress.patch
+++ /dev/null
@@ -1,93 +0,0 @@
-diff --git a/Makefile b/Makefile
-index 229e67f2ff75..565084f347bd 100644
---- a/Makefile
-+++ b/Makefile
-@@ -464,6 +464,7 @@ KLZOP = lzop
- LZMA = lzma
- LZ4 = lz4c
- XZ = xz
-+ZSTD = zstd
-
- CHECKFLAGS := -D__linux__ -Dlinux -D__STDC__ -Dunix -D__unix__ \
- -Wbitwise -Wno-return-void -Wno-unknown-attribute $(CF)
-@@ -512,7 +513,7 @@ CLANG_FLAGS :=
- export ARCH SRCARCH CONFIG_SHELL BASH HOSTCC KBUILD_HOSTCFLAGS CROSS_COMPILE LD CC
- export CPP AR NM STRIP OBJCOPY OBJDUMP OBJSIZE READELF PAHOLE LEX YACC AWK INSTALLKERNEL
- export PERL PYTHON PYTHON3 CHECK CHECKFLAGS MAKE UTS_MACHINE HOSTCXX
--export KGZIP KBZIP2 KLZOP LZMA LZ4 XZ
-+export KGZIP KBZIP2 KLZOP LZMA LZ4 XZ ZSTD
- export KBUILD_HOSTCXXFLAGS KBUILD_HOSTLDFLAGS KBUILD_HOSTLDLIBS LDFLAGS_MODULE
-
- export KBUILD_CPPFLAGS NOSTDINC_FLAGS LINUXINCLUDE OBJCOPYFLAGS KBUILD_LDFLAGS
-diff --git a/init/Kconfig b/init/Kconfig
-index 0498af567f70..2b6409fec53f 100644
---- a/init/Kconfig
-+++ b/init/Kconfig
-@@ -191,13 +191,16 @@ config HAVE_KERNEL_LZO
- config HAVE_KERNEL_LZ4
- bool
-
-+config HAVE_KERNEL_ZSTD
-+ bool
-+
- config HAVE_KERNEL_UNCOMPRESSED
- bool
-
- choice
- prompt "Kernel compression mode"
- default KERNEL_GZIP
-- depends on HAVE_KERNEL_GZIP || HAVE_KERNEL_BZIP2 || HAVE_KERNEL_LZMA || HAVE_KERNEL_XZ || HAVE_KERNEL_LZO || HAVE_KERNEL_LZ4 || HAVE_KERNEL_UNCOMPRESSED
-+ depends on HAVE_KERNEL_GZIP || HAVE_KERNEL_BZIP2 || HAVE_KERNEL_LZMA || HAVE_KERNEL_XZ || HAVE_KERNEL_LZO || HAVE_KERNEL_LZ4 || HAVE_KERNEL_ZSTD || HAVE_KERNEL_UNCOMPRESSED
- help
- The linux kernel is a kind of self-extracting executable.
- Several compression algorithms are available, which differ
-@@ -276,6 +279,16 @@ config KERNEL_LZ4
- is about 8% bigger than LZO. But the decompression speed is
- faster than LZO.
-
-+config KERNEL_ZSTD
-+ bool "ZSTD"
-+ depends on HAVE_KERNEL_ZSTD
-+ help
-+ ZSTD is a compression algorithm targeting intermediate compression
-+ with fast decompression speed. It will compress better than GZIP and
-+ decompress around the same speed as LZO, but slower than LZ4. You
-+ will need at least 192 KB RAM or more for booting. The zstd command
-+ line tool is required for compression.
-+
- config KERNEL_UNCOMPRESSED
- bool "None"
- depends on HAVE_KERNEL_UNCOMPRESSED
-diff --git a/scripts/Makefile.lib b/scripts/Makefile.lib
-index 916b2f7f7098..54f7b7eb580b 100644
---- a/scripts/Makefile.lib
-+++ b/scripts/Makefile.lib
-@@ -413,6 +413,28 @@ quiet_cmd_xzkern = XZKERN $@
- quiet_cmd_xzmisc = XZMISC $@
- cmd_xzmisc = cat $(real-prereqs) | $(XZ) --check=crc32 --lzma2=dict=1MiB > $@
-
-+# ZSTD
-+# ---------------------------------------------------------------------------
-+# Appends the uncompressed size of the data using size_append. The .zst
-+# format has the size information available at the beginning of the file too,
-+# but it's in a more complex format and it's good to avoid changing the part
-+# of the boot code that reads the uncompressed size.
-+#
-+# Note that the bytes added by size_append will make the zstd tool think that
-+# the file is corrupt. This is expected.
-+#
-+# zstd uses a maximum window size of 8 MB. zstd22 uses a maximum window size of
-+# 128 MB. zstd22 is used for kernel compression because it is decompressed in a
-+# single pass, so zstd doesn't need to allocate a window buffer. When streaming
-+# decompression is used, like initramfs decompression, zstd22 should likely not
-+# be used because it would require zstd to allocate a 128 MB buffer.
-+
-+quiet_cmd_zstd = ZSTD $@
-+ cmd_zstd = { cat $(real-prereqs) | $(ZSTD) -19; $(size_append); } > $@
-+
-+quiet_cmd_zstd22 = ZSTD22 $@
-+ cmd_zstd22 = { cat $(real-prereqs) | $(ZSTD) -22 --ultra; $(size_append); } > $@
-+
- # ASM offsets
- # ---------------------------------------------------------------------------
-
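The size_append step referenced in the zstd rules above appends the uncompressed length after the compressed data so the boot code can read it without parsing the .zst frame; in the kernel it is a printf-based shell fragment in scripts/Makefile.lib. A hedged Python sketch of the packing it performs (the helper name and 32-bit width here follow the usual kbuild behavior, shown for illustration only):

```python
import struct

def size_append(payload: bytes, uncompressed_size: int) -> bytes:
    """Append the uncompressed size as a 32-bit little-endian integer
    after the compressed payload, mimicking kbuild's size_append.
    (Sketch only; the real helper is a shell fragment, not Python.)"""
    return payload + struct.pack("<I", uncompressed_size)
```

These trailing bytes are exactly what makes the zstd tool consider the resulting file corrupt, as the comment above notes — they sit after the frame checksum, outside any valid frame.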
diff --git a/5003_ZSTD-v10-4-8-add-support-for-zstd-compres-kern.patch b/5003_ZSTD-v10-4-8-add-support-for-zstd-compres-kern.patch
deleted file mode 100644
index 0096db1..0000000
--- a/5003_ZSTD-v10-4-8-add-support-for-zstd-compres-kern.patch
+++ /dev/null
@@ -1,50 +0,0 @@
-diff --git a/usr/Kconfig b/usr/Kconfig
-index 96afb03b65f9..2599bc21c1b2 100644
---- a/usr/Kconfig
-+++ b/usr/Kconfig
-@@ -100,6 +100,15 @@ config RD_LZ4
- Support loading of a LZ4 encoded initial ramdisk or cpio buffer
- If unsure, say N.
-
-+config RD_ZSTD
-+ bool "Support initial ramdisk/ramfs compressed using ZSTD"
-+ default y
-+ depends on BLK_DEV_INITRD
-+ select DECOMPRESS_ZSTD
-+ help
-+ Support loading of a ZSTD encoded initial ramdisk or cpio buffer.
-+ If unsure, say N.
-+
- choice
- prompt "Built-in initramfs compression mode"
- depends on INITRAMFS_SOURCE != ""
-@@ -196,6 +205,17 @@ config INITRAMFS_COMPRESSION_LZ4
- If you choose this, keep in mind that most distros don't provide lz4
- by default which could cause a build failure.
-
-+config INITRAMFS_COMPRESSION_ZSTD
-+ bool "ZSTD"
-+ depends on RD_ZSTD
-+ help
-+ ZSTD is a compression algorithm targeting intermediate compression
-+ with fast decompression speed. It will compress better than GZIP and
-+ decompress around the same speed as LZO, but slower than LZ4.
-+
-+ If you choose this, keep in mind that you may need to install the zstd
-+ tool to be able to compress the initramfs.
-+
- config INITRAMFS_COMPRESSION_NONE
- bool "None"
- help
-diff --git a/usr/Makefile b/usr/Makefile
-index c12e6b15ce72..b1a81a40eab1 100644
---- a/usr/Makefile
-+++ b/usr/Makefile
-@@ -15,6 +15,7 @@ compress-$(CONFIG_INITRAMFS_COMPRESSION_LZMA) := lzma
- compress-$(CONFIG_INITRAMFS_COMPRESSION_XZ) := xzmisc
- compress-$(CONFIG_INITRAMFS_COMPRESSION_LZO) := lzo
- compress-$(CONFIG_INITRAMFS_COMPRESSION_LZ4) := lz4
-+compress-$(CONFIG_INITRAMFS_COMPRESSION_ZSTD) := zstd
-
- obj-$(CONFIG_BLK_DEV_INITRD) := initramfs_data.o
-
diff --git a/5004_ZSTD-v10-5-8-add-support-for-zstd-compressed-initramfs.patch b/5004_ZSTD-v10-5-8-add-support-for-zstd-compressed-initramfs.patch
deleted file mode 100644
index 4e86d56..0000000
--- a/5004_ZSTD-v10-5-8-add-support-for-zstd-compressed-initramfs.patch
+++ /dev/null
@@ -1,20 +0,0 @@
-diff --git a/arch/x86/boot/header.S b/arch/x86/boot/header.S
-index 735ad7f21ab0..6dbd7e9f74c9 100644
---- a/arch/x86/boot/header.S
-+++ b/arch/x86/boot/header.S
-@@ -539,8 +539,14 @@ pref_address: .quad LOAD_PHYSICAL_ADDR # preferred load addr
- # the size-dependent part now grows so fast.
- #
- # extra_bytes = (uncompressed_size >> 8) + 65536
-+#
-+# ZSTD compressed data grows by at most 3 bytes per 128K, and only has a 22
-+# byte fixed overhead but has a maximum block size of 128K, so it needs a
-+# larger margin.
-+#
-+# extra_bytes = (uncompressed_size >> 8) + 131072
-
--#define ZO_z_extra_bytes ((ZO_z_output_len >> 8) + 65536)
-+#define ZO_z_extra_bytes ((ZO_z_output_len >> 8) + 131072)
- #if ZO_z_output_len > ZO_z_input_len
- # define ZO_z_extract_offset (ZO_z_output_len + ZO_z_extra_bytes - \
- ZO_z_input_len)
diff --git a/5005_ZSTD-v10-6-8-bump-ZO-z-extra-bytes-margin.patch b/5005_ZSTD-v10-6-8-bump-ZO-z-extra-bytes-margin.patch
deleted file mode 100644
index c9615c0..0000000
--- a/5005_ZSTD-v10-6-8-bump-ZO-z-extra-bytes-margin.patch
+++ /dev/null
@@ -1,123 +0,0 @@
-diff --git a/Documentation/x86/boot.rst b/Documentation/x86/boot.rst
-index 5325c71ca877..7fafc7ac00d7 100644
---- a/Documentation/x86/boot.rst
-+++ b/Documentation/x86/boot.rst
-@@ -782,9 +782,9 @@ Protocol: 2.08+
- uncompressed data should be determined using the standard magic
- numbers. The currently supported compression formats are gzip
- (magic numbers 1F 8B or 1F 9E), bzip2 (magic number 42 5A), LZMA
-- (magic number 5D 00), XZ (magic number FD 37), and LZ4 (magic number
-- 02 21). The uncompressed payload is currently always ELF (magic
-- number 7F 45 4C 46).
-+ (magic number 5D 00), XZ (magic number FD 37), LZ4 (magic number
-+ 02 21) and ZSTD (magic number 28 B5). The uncompressed payload is
-+ currently always ELF (magic number 7F 45 4C 46).
-
- ============ ==============
- Field name: payload_length
-diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
-index 883da0abf779..4a64395bc35d 100644
---- a/arch/x86/Kconfig
-+++ b/arch/x86/Kconfig
-@@ -188,6 +188,7 @@ config X86
- select HAVE_KERNEL_LZMA
- select HAVE_KERNEL_LZO
- select HAVE_KERNEL_XZ
-+ select HAVE_KERNEL_ZSTD
- select HAVE_KPROBES
- select HAVE_KPROBES_ON_FTRACE
- select HAVE_FUNCTION_ERROR_INJECTION
-diff --git a/arch/x86/boot/compressed/Makefile b/arch/x86/boot/compressed/Makefile
-index 5a828fde7a42..c08714ae76ec 100644
---- a/arch/x86/boot/compressed/Makefile
-+++ b/arch/x86/boot/compressed/Makefile
-@@ -26,7 +26,7 @@ OBJECT_FILES_NON_STANDARD := y
- KCOV_INSTRUMENT := n
-
- targets := vmlinux vmlinux.bin vmlinux.bin.gz vmlinux.bin.bz2 vmlinux.bin.lzma \
-- vmlinux.bin.xz vmlinux.bin.lzo vmlinux.bin.lz4
-+ vmlinux.bin.xz vmlinux.bin.lzo vmlinux.bin.lz4 vmlinux.bin.zst
-
- KBUILD_CFLAGS := -m$(BITS) -O2
- KBUILD_CFLAGS += -fno-strict-aliasing $(call cc-option, -fPIE, -fPIC)
-@@ -42,6 +42,7 @@ KBUILD_CFLAGS += $(call cc-disable-warning, gnu)
- KBUILD_CFLAGS += -Wno-pointer-sign
- KBUILD_CFLAGS += $(call cc-option,-fmacro-prefix-map=$(srctree)/=)
- KBUILD_CFLAGS += -fno-asynchronous-unwind-tables
-+KBUILD_CFLAGS += -D__DISABLE_EXPORTS
-
- KBUILD_AFLAGS := $(KBUILD_CFLAGS) -D__ASSEMBLY__
- GCOV_PROFILE := n
-@@ -145,6 +146,8 @@ $(obj)/vmlinux.bin.lzo: $(vmlinux.bin.all-y) FORCE
- $(call if_changed,lzo)
- $(obj)/vmlinux.bin.lz4: $(vmlinux.bin.all-y) FORCE
- $(call if_changed,lz4)
-+$(obj)/vmlinux.bin.zst: $(vmlinux.bin.all-y) FORCE
-+ $(call if_changed,zstd22)
-
- suffix-$(CONFIG_KERNEL_GZIP) := gz
- suffix-$(CONFIG_KERNEL_BZIP2) := bz2
-@@ -152,6 +155,7 @@ suffix-$(CONFIG_KERNEL_LZMA) := lzma
- suffix-$(CONFIG_KERNEL_XZ) := xz
- suffix-$(CONFIG_KERNEL_LZO) := lzo
- suffix-$(CONFIG_KERNEL_LZ4) := lz4
-+suffix-$(CONFIG_KERNEL_ZSTD) := zst
-
- quiet_cmd_mkpiggy = MKPIGGY $@
- cmd_mkpiggy = $(obj)/mkpiggy $< > $@
-diff --git a/arch/x86/boot/compressed/kaslr.c b/arch/x86/boot/compressed/kaslr.c
-index d7408af55738..0048269180d5 100644
---- a/arch/x86/boot/compressed/kaslr.c
-+++ b/arch/x86/boot/compressed/kaslr.c
-@@ -19,13 +19,6 @@
- */
- #define BOOT_CTYPE_H
-
--/*
-- * _ctype[] in lib/ctype.c is needed by isspace() of linux/ctype.h.
-- * While both lib/ctype.c and lib/cmdline.c will bring EXPORT_SYMBOL
-- * which is meaningless and will cause compiling error in some cases.
-- */
--#define __DISABLE_EXPORTS
--
- #include "misc.h"
- #include "error.h"
- #include "../string.h"
-diff --git a/arch/x86/boot/compressed/misc.c b/arch/x86/boot/compressed/misc.c
-index 9652d5c2afda..39e592d0e0b4 100644
---- a/arch/x86/boot/compressed/misc.c
-+++ b/arch/x86/boot/compressed/misc.c
-@@ -77,6 +77,10 @@ static int lines, cols;
- #ifdef CONFIG_KERNEL_LZ4
- #include "../../../../lib/decompress_unlz4.c"
- #endif
-+
-+#ifdef CONFIG_KERNEL_ZSTD
-+#include "../../../../lib/decompress_unzstd.c"
-+#endif
- /*
- * NOTE: When adding a new decompressor, please update the analysis in
- * ../header.S.
-diff --git a/arch/x86/include/asm/boot.h b/arch/x86/include/asm/boot.h
-index 680c320363db..9191280d9ea3 100644
---- a/arch/x86/include/asm/boot.h
-+++ b/arch/x86/include/asm/boot.h
-@@ -24,9 +24,16 @@
- # error "Invalid value for CONFIG_PHYSICAL_ALIGN"
- #endif
-
--#ifdef CONFIG_KERNEL_BZIP2
-+#if defined(CONFIG_KERNEL_BZIP2)
- # define BOOT_HEAP_SIZE 0x400000
--#else /* !CONFIG_KERNEL_BZIP2 */
-+#elif defined(CONFIG_KERNEL_ZSTD)
-+/*
-+ * Zstd needs to allocate the ZSTD_DCtx in order to decompress the kernel.
-+ * The ZSTD_DCtx is ~160KB, so set the heap size to 192KB because it is a
-+ * round number and to allow some slack.
-+ */
-+# define BOOT_HEAP_SIZE 0x30000
-+#else
- # define BOOT_HEAP_SIZE 0x10000
- #endif
-
diff --git a/5006_ZSTD-v10-7-8-support-for-ZSTD-compressed-kernel.patch b/5006_ZSTD-v10-7-8-support-for-ZSTD-compressed-kernel.patch
deleted file mode 100644
index ec12df5..0000000
--- a/5006_ZSTD-v10-7-8-support-for-ZSTD-compressed-kernel.patch
+++ /dev/null
@@ -1,12 +0,0 @@
-diff --git a/.gitignore b/.gitignore
-index d5f4804ed07c..162bd2b67bdf 100644
---- a/.gitignore
-+++ b/.gitignore
-@@ -44,6 +44,7 @@
- *.tab.[ch]
- *.tar
- *.xz
-+*.zst
- Module.symvers
- modules.builtin
- modules.order
diff --git a/5007_ZSTD-v10-8-8-gitignore-add-ZSTD-compressed-files.patch b/5007_ZSTD-v10-8-8-gitignore-add-ZSTD-compressed-files.patch
deleted file mode 100644
index 3c9ea69..0000000
--- a/5007_ZSTD-v10-8-8-gitignore-add-ZSTD-compressed-files.patch
+++ /dev/null
@@ -1,12 +0,0 @@
-diff --git a/Documentation/dontdiff b/Documentation/dontdiff
-index ef9519c32c55..e361fc95ca29 100644
---- a/Documentation/dontdiff
-+++ b/Documentation/dontdiff
-@@ -55,6 +55,7 @@
- *.ver
- *.xml
- *.xz
-+*.zst
- *_MODULES
- *_vga16.c
- *~
^ permalink raw reply related [flat|nested] 30+ messages in thread
* [gentoo-commits] proj/linux-patches:5.8 commit in: /
@ 2020-09-24 15:37 Mike Pagano
0 siblings, 0 replies; 30+ messages in thread
From: Mike Pagano @ 2020-09-24 15:37 UTC (permalink / raw
To: gentoo-commits
commit: f10a4ecaf95815d30fb7ec85a5f5c720698fe4de
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Sep 24 15:12:00 2020 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Sep 24 15:36:53 2020 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=f10a4eca
Add CONFIG_USER_NS to GENTOO_LINUX_INIT_SYSTEMD
Required for PrivateUsers= in service units
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
4567_distro-Gentoo-Kconfig.patch | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/4567_distro-Gentoo-Kconfig.patch b/4567_distro-Gentoo-Kconfig.patch
index cb2eaa6..ebcd606 100644
--- a/4567_distro-Gentoo-Kconfig.patch
+++ b/4567_distro-Gentoo-Kconfig.patch
@@ -6,8 +6,8 @@
source "Documentation/Kconfig"
+
+source "distro/Kconfig"
---- /dev/null 2020-05-13 03:13:57.920193259 -0400
-+++ b/distro/Kconfig 2020-05-13 07:51:36.841663359 -0400
+--- /dev/null 2020-09-24 03:06:47.590000000 -0400
++++ b/distro/Kconfig 2020-09-24 11:09:36.442549224 -0400
@@ -0,0 +1,157 @@
+menu "Gentoo Linux"
+
@@ -145,6 +145,7 @@
+ select TIMERFD
+ select TMPFS_POSIX_ACL
+ select TMPFS_XATTR
++ select USER_NS
+
+ select ANON_INODES
+ select BLOCK
@@ -165,4 +166,3 @@
+
+endmenu
+
-+endmenu
* [gentoo-commits] proj/linux-patches:5.8 commit in: /
@ 2020-09-24 15:37 Mike Pagano
0 siblings, 0 replies; 30+ messages in thread
From: Mike Pagano @ 2020-09-24 15:37 UTC (permalink / raw
To: gentoo-commits
commit: d5d07e58d36bfeb1e16443daa4a523de73c22f5d
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Sep 24 15:17:35 2020 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Sep 24 15:36:53 2020 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=d5d07e58
Fix formatting
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
4567_distro-Gentoo-Kconfig.patch | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/4567_distro-Gentoo-Kconfig.patch b/4567_distro-Gentoo-Kconfig.patch
index ebcd606..3e09969 100644
--- a/4567_distro-Gentoo-Kconfig.patch
+++ b/4567_distro-Gentoo-Kconfig.patch
@@ -145,7 +145,7 @@
+ select TIMERFD
+ select TMPFS_POSIX_ACL
+ select TMPFS_XATTR
-+ select USER_NS
++ select USER_NS
+
+ select ANON_INODES
+ select BLOCK
* [gentoo-commits] proj/linux-patches:5.8 commit in: /
@ 2020-09-24 15:37 Mike Pagano
0 siblings, 0 replies; 30+ messages in thread
From: Mike Pagano @ 2020-09-24 15:37 UTC (permalink / raw
To: gentoo-commits
commit: ab859e1b8a49672ca2e6df740fa2dcf90db9f44c
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Sep 24 15:34:05 2020 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Sep 24 15:36:53 2020 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=ab859e1b
Add missing endmenu
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
4567_distro-Gentoo-Kconfig.patch | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/4567_distro-Gentoo-Kconfig.patch b/4567_distro-Gentoo-Kconfig.patch
index 3e09969..e754a3e 100644
--- a/4567_distro-Gentoo-Kconfig.patch
+++ b/4567_distro-Gentoo-Kconfig.patch
@@ -7,8 +7,8 @@
+
+source "distro/Kconfig"
--- /dev/null 2020-09-24 03:06:47.590000000 -0400
-+++ b/distro/Kconfig 2020-09-24 11:09:36.442549224 -0400
-@@ -0,0 +1,157 @@
++++ b/distro/Kconfig 2020-09-24 11:31:29.403150624 -0400
+@@ -0,0 +1,158 @@
+menu "Gentoo Linux"
+
+config GENTOO_LINUX
@@ -166,3 +166,4 @@
+
+endmenu
+
++endmenu
* [gentoo-commits] proj/linux-patches:5.8 commit in: /
@ 2020-09-26 21:50 Mike Pagano
0 siblings, 0 replies; 30+ messages in thread
From: Mike Pagano @ 2020-09-26 21:50 UTC (permalink / raw
To: gentoo-commits
commit: 63e318c66258ba6be277fb558bc364ef2f2c126f
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sat Sep 26 21:50:07 2020 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sat Sep 26 21:50:07 2020 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=63e318c6
Linux patch 5.8.12
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1011_linux-5.8.12.patch | 2440 +++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 2444 insertions(+)
diff --git a/0000_README b/0000_README
index d438f0f..51cee27 100644
--- a/0000_README
+++ b/0000_README
@@ -87,6 +87,10 @@ Patch: 1010_linux-5.8.11.patch
From: http://www.kernel.org
Desc: Linux 5.8.11
+Patch: 1011_linux-5.8.12.patch
+From: http://www.kernel.org
+Desc: Linux 5.8.12
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1011_linux-5.8.12.patch b/1011_linux-5.8.12.patch
new file mode 100644
index 0000000..ac579a3
--- /dev/null
+++ b/1011_linux-5.8.12.patch
@@ -0,0 +1,2440 @@
+diff --git a/Makefile b/Makefile
+index 0b025b3a56401..d0d40c628dc34 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 8
+-SUBLEVEL = 11
++SUBLEVEL = 12
+ EXTRAVERSION =
+ NAME = Kleptomaniac Octopus
+
+diff --git a/drivers/net/dsa/microchip/ksz8795.c b/drivers/net/dsa/microchip/ksz8795.c
+index 7c17b0f705ec3..87db588bcdd6b 100644
+--- a/drivers/net/dsa/microchip/ksz8795.c
++++ b/drivers/net/dsa/microchip/ksz8795.c
+@@ -1269,7 +1269,7 @@ static int ksz8795_switch_init(struct ksz_device *dev)
+ }
+
+ /* set the real number of ports */
+- dev->ds->num_ports = dev->port_cnt;
++ dev->ds->num_ports = dev->port_cnt + 1;
+
+ return 0;
+ }
+diff --git a/drivers/net/dsa/rtl8366.c b/drivers/net/dsa/rtl8366.c
+index 1368816abaed1..99cdb2f18fa2f 100644
+--- a/drivers/net/dsa/rtl8366.c
++++ b/drivers/net/dsa/rtl8366.c
+@@ -452,13 +452,19 @@ int rtl8366_vlan_del(struct dsa_switch *ds, int port,
+ return ret;
+
+ if (vid == vlanmc.vid) {
+- /* clear VLAN member configurations */
+- vlanmc.vid = 0;
+- vlanmc.priority = 0;
+- vlanmc.member = 0;
+- vlanmc.untag = 0;
+- vlanmc.fid = 0;
+-
++ /* Remove this port from the VLAN */
++ vlanmc.member &= ~BIT(port);
++ vlanmc.untag &= ~BIT(port);
++ /*
++ * If no ports are members of this VLAN
++ * anymore then clear the whole member
++ * config so it can be reused.
++ */
++ if (!vlanmc.member && vlanmc.untag) {
++ vlanmc.vid = 0;
++ vlanmc.priority = 0;
++ vlanmc.fid = 0;
++ }
+ ret = smi->ops->set_vlan_mc(smi, i, &vlanmc);
+ if (ret) {
+ dev_err(smi->dev,
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+index cd5c7a1412c6d..dd07db656a5c3 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+@@ -4198,7 +4198,7 @@ static int bnxt_hwrm_do_send_msg(struct bnxt *bp, void *msg, u32 msg_len,
+ u32 bar_offset = BNXT_GRCPF_REG_CHIMP_COMM;
+ u16 dst = BNXT_HWRM_CHNL_CHIMP;
+
+- if (test_bit(BNXT_STATE_FW_FATAL_COND, &bp->state))
++ if (BNXT_NO_FW_ACCESS(bp))
+ return -EBUSY;
+
+ if (msg_len > BNXT_HWRM_MAX_REQ_LEN) {
+@@ -5530,7 +5530,7 @@ static int hwrm_ring_free_send_msg(struct bnxt *bp,
+ struct hwrm_ring_free_output *resp = bp->hwrm_cmd_resp_addr;
+ u16 error_code;
+
+- if (test_bit(BNXT_STATE_FW_FATAL_COND, &bp->state))
++ if (BNXT_NO_FW_ACCESS(bp))
+ return 0;
+
+ bnxt_hwrm_cmd_hdr_init(bp, &req, HWRM_RING_FREE, cmpl_ring_id, -1);
+@@ -7502,7 +7502,7 @@ static int bnxt_set_tpa(struct bnxt *bp, bool set_tpa)
+
+ if (set_tpa)
+ tpa_flags = bp->flags & BNXT_FLAG_TPA;
+- else if (test_bit(BNXT_STATE_FW_FATAL_COND, &bp->state))
++ else if (BNXT_NO_FW_ACCESS(bp))
+ return 0;
+ for (i = 0; i < bp->nr_vnics; i++) {
+ rc = bnxt_hwrm_vnic_set_tpa(bp, i, tpa_flags);
+@@ -8993,18 +8993,16 @@ static ssize_t bnxt_show_temp(struct device *dev,
+ struct hwrm_temp_monitor_query_output *resp;
+ struct bnxt *bp = dev_get_drvdata(dev);
+ u32 len = 0;
++ int rc;
+
+ resp = bp->hwrm_cmd_resp_addr;
+ bnxt_hwrm_cmd_hdr_init(bp, &req, HWRM_TEMP_MONITOR_QUERY, -1, -1);
+ mutex_lock(&bp->hwrm_cmd_lock);
+- if (!_hwrm_send_message_silent(bp, &req, sizeof(req), HWRM_CMD_TIMEOUT))
++ rc = _hwrm_send_message(bp, &req, sizeof(req), HWRM_CMD_TIMEOUT);
++ if (!rc)
+ len = sprintf(buf, "%u\n", resp->temp * 1000); /* display millidegree */
+ mutex_unlock(&bp->hwrm_cmd_lock);
+-
+- if (len)
+- return len;
+-
+- return sprintf(buf, "unknown\n");
++ return rc ?: len;
+ }
+ static SENSOR_DEVICE_ATTR(temp1_input, 0444, bnxt_show_temp, NULL, 0);
+
+@@ -9024,7 +9022,16 @@ static void bnxt_hwmon_close(struct bnxt *bp)
+
+ static void bnxt_hwmon_open(struct bnxt *bp)
+ {
++ struct hwrm_temp_monitor_query_input req = {0};
+ struct pci_dev *pdev = bp->pdev;
++ int rc;
++
++ bnxt_hwrm_cmd_hdr_init(bp, &req, HWRM_TEMP_MONITOR_QUERY, -1, -1);
++ rc = hwrm_send_message_silent(bp, &req, sizeof(req), HWRM_CMD_TIMEOUT);
++ if (rc == -EACCES || rc == -EOPNOTSUPP) {
++ bnxt_hwmon_close(bp);
++ return;
++ }
+
+ if (bp->hwmon_dev)
+ return;
+@@ -11498,6 +11505,10 @@ static void bnxt_remove_one(struct pci_dev *pdev)
+ if (BNXT_PF(bp))
+ bnxt_sriov_disable(bp);
+
++ clear_bit(BNXT_STATE_IN_FW_RESET, &bp->state);
++ bnxt_cancel_sp_work(bp);
++ bp->sp_event = 0;
++
+ bnxt_dl_fw_reporters_destroy(bp, true);
+ if (BNXT_PF(bp))
+ devlink_port_type_clear(&bp->dl_port);
+@@ -11505,9 +11516,6 @@ static void bnxt_remove_one(struct pci_dev *pdev)
+ unregister_netdev(dev);
+ bnxt_dl_unregister(bp);
+ bnxt_shutdown_tc(bp);
+- clear_bit(BNXT_STATE_IN_FW_RESET, &bp->state);
+- bnxt_cancel_sp_work(bp);
+- bp->sp_event = 0;
+
+ bnxt_clear_int_mode(bp);
+ bnxt_hwrm_func_drv_unrgtr(bp);
+@@ -11806,7 +11814,7 @@ static int bnxt_init_mac_addr(struct bnxt *bp)
+ static void bnxt_vpd_read_info(struct bnxt *bp)
+ {
+ struct pci_dev *pdev = bp->pdev;
+- int i, len, pos, ro_size;
++ int i, len, pos, ro_size, size;
+ ssize_t vpd_size;
+ u8 *vpd_data;
+
+@@ -11841,7 +11849,8 @@ static void bnxt_vpd_read_info(struct bnxt *bp)
+ if (len + pos > vpd_size)
+ goto read_sn;
+
+- strlcpy(bp->board_partno, &vpd_data[pos], min(len, BNXT_VPD_FLD_LEN));
++ size = min(len, BNXT_VPD_FLD_LEN - 1);
++ memcpy(bp->board_partno, &vpd_data[pos], size);
+
+ read_sn:
+ pos = pci_vpd_find_info_keyword(vpd_data, i, ro_size,
+@@ -11854,7 +11863,8 @@ read_sn:
+ if (len + pos > vpd_size)
+ goto exit;
+
+- strlcpy(bp->board_serialno, &vpd_data[pos], min(len, BNXT_VPD_FLD_LEN));
++ size = min(len, BNXT_VPD_FLD_LEN - 1);
++ memcpy(bp->board_serialno, &vpd_data[pos], size);
+ exit:
+ kfree(vpd_data);
+ }
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.h b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
+index 78e2fd63ac3d5..440b43c8068f1 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.h
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
+@@ -1673,6 +1673,10 @@ struct bnxt {
+ #define BNXT_STATE_FW_FATAL_COND 6
+ #define BNXT_STATE_DRV_REGISTERED 7
+
++#define BNXT_NO_FW_ACCESS(bp) \
++ (test_bit(BNXT_STATE_FW_FATAL_COND, &(bp)->state) || \
++ pci_channel_offline((bp)->pdev))
++
+ struct bnxt_irq *irq_tbl;
+ int total_irqs;
+ u8 mac_addr[ETH_ALEN];
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
+index bc2c76fa54cad..f6e236a7bf18d 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
+@@ -1735,9 +1735,12 @@ static int bnxt_set_pauseparam(struct net_device *dev,
+ if (!BNXT_PHY_CFG_ABLE(bp))
+ return -EOPNOTSUPP;
+
++ mutex_lock(&bp->link_lock);
+ if (epause->autoneg) {
+- if (!(link_info->autoneg & BNXT_AUTONEG_SPEED))
+- return -EINVAL;
++ if (!(link_info->autoneg & BNXT_AUTONEG_SPEED)) {
++ rc = -EINVAL;
++ goto pause_exit;
++ }
+
+ link_info->autoneg |= BNXT_AUTONEG_FLOW_CTRL;
+ if (bp->hwrm_spec_code >= 0x10201)
+@@ -1758,11 +1761,11 @@ static int bnxt_set_pauseparam(struct net_device *dev,
+ if (epause->tx_pause)
+ link_info->req_flow_ctrl |= BNXT_LINK_PAUSE_TX;
+
+- if (netif_running(dev)) {
+- mutex_lock(&bp->link_lock);
++ if (netif_running(dev))
+ rc = bnxt_hwrm_set_pause(bp);
+- mutex_unlock(&bp->link_lock);
+- }
++
++pause_exit:
++ mutex_unlock(&bp->link_lock);
+ return rc;
+ }
+
+@@ -2499,8 +2502,7 @@ static int bnxt_set_eee(struct net_device *dev, struct ethtool_eee *edata)
+ struct bnxt *bp = netdev_priv(dev);
+ struct ethtool_eee *eee = &bp->eee;
+ struct bnxt_link_info *link_info = &bp->link_info;
+- u32 advertising =
+- _bnxt_fw_to_ethtool_adv_spds(link_info->advertising, 0);
++ u32 advertising;
+ int rc = 0;
+
+ if (!BNXT_PHY_CFG_ABLE(bp))
+@@ -2509,19 +2511,23 @@ static int bnxt_set_eee(struct net_device *dev, struct ethtool_eee *edata)
+ if (!(bp->flags & BNXT_FLAG_EEE_CAP))
+ return -EOPNOTSUPP;
+
++ mutex_lock(&bp->link_lock);
++ advertising = _bnxt_fw_to_ethtool_adv_spds(link_info->advertising, 0);
+ if (!edata->eee_enabled)
+ goto eee_ok;
+
+ if (!(link_info->autoneg & BNXT_AUTONEG_SPEED)) {
+ netdev_warn(dev, "EEE requires autoneg\n");
+- return -EINVAL;
++ rc = -EINVAL;
++ goto eee_exit;
+ }
+ if (edata->tx_lpi_enabled) {
+ if (bp->lpi_tmr_hi && (edata->tx_lpi_timer > bp->lpi_tmr_hi ||
+ edata->tx_lpi_timer < bp->lpi_tmr_lo)) {
+ netdev_warn(dev, "Valid LPI timer range is %d and %d microsecs\n",
+ bp->lpi_tmr_lo, bp->lpi_tmr_hi);
+- return -EINVAL;
++ rc = -EINVAL;
++ goto eee_exit;
+ } else if (!bp->lpi_tmr_hi) {
+ edata->tx_lpi_timer = eee->tx_lpi_timer;
+ }
+@@ -2531,7 +2537,8 @@ static int bnxt_set_eee(struct net_device *dev, struct ethtool_eee *edata)
+ } else if (edata->advertised & ~advertising) {
+ netdev_warn(dev, "EEE advertised %x must be a subset of autoneg advertised speeds %x\n",
+ edata->advertised, advertising);
+- return -EINVAL;
++ rc = -EINVAL;
++ goto eee_exit;
+ }
+
+ eee->advertised = edata->advertised;
+@@ -2543,6 +2550,8 @@ eee_ok:
+ if (netif_running(dev))
+ rc = bnxt_hwrm_set_link_setting(bp, false, true);
+
++eee_exit:
++ mutex_unlock(&bp->link_lock);
+ return rc;
+ }
+
+diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c
+index 4b1b5928b1043..55347bcea2285 100644
+--- a/drivers/net/ethernet/cadence/macb_main.c
++++ b/drivers/net/ethernet/cadence/macb_main.c
+@@ -647,8 +647,7 @@ static void macb_mac_link_up(struct phylink_config *config,
+ ctrl |= GEM_BIT(GBE);
+ }
+
+- /* We do not support MLO_PAUSE_RX yet */
+- if (tx_pause)
++ if (rx_pause)
+ ctrl |= MACB_BIT(PAE);
+
+ macb_set_tx_clk(bp->tx_clk, speed, ndev);
+diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_filter.c b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_filter.c
+index d02d346629b36..ff0d82e2535da 100644
+--- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_filter.c
++++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_filter.c
+@@ -1906,13 +1906,16 @@ out:
+ static int configure_filter_tcb(struct adapter *adap, unsigned int tid,
+ struct filter_entry *f)
+ {
+- if (f->fs.hitcnts)
++ if (f->fs.hitcnts) {
+ set_tcb_field(adap, f, tid, TCB_TIMESTAMP_W,
+- TCB_TIMESTAMP_V(TCB_TIMESTAMP_M) |
++ TCB_TIMESTAMP_V(TCB_TIMESTAMP_M),
++ TCB_TIMESTAMP_V(0ULL),
++ 1);
++ set_tcb_field(adap, f, tid, TCB_RTT_TS_RECENT_AGE_W,
+ TCB_RTT_TS_RECENT_AGE_V(TCB_RTT_TS_RECENT_AGE_M),
+- TCB_TIMESTAMP_V(0ULL) |
+ TCB_RTT_TS_RECENT_AGE_V(0ULL),
+ 1);
++ }
+
+ if (f->fs.newdmac)
+ set_tcb_tflag(adap, f, tid, TF_CCTRL_ECE_S, 1,
+diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_mps.c b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_mps.c
+index b1a073eea60b2..a020e84906813 100644
+--- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_mps.c
++++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_mps.c
+@@ -229,7 +229,7 @@ void cxgb4_free_mps_ref_entries(struct adapter *adap)
+ {
+ struct mps_entries_ref *mps_entry, *tmp;
+
+- if (!list_empty(&adap->mps_ref))
++ if (list_empty(&adap->mps_ref))
+ return;
+
+ spin_lock(&adap->mps_ref_lock);
+diff --git a/drivers/net/ethernet/huawei/hinic/hinic_hw_mgmt.c b/drivers/net/ethernet/huawei/hinic/hinic_hw_mgmt.c
+index e0f5a81d8620d..7fe39a155b329 100644
+--- a/drivers/net/ethernet/huawei/hinic/hinic_hw_mgmt.c
++++ b/drivers/net/ethernet/huawei/hinic/hinic_hw_mgmt.c
+@@ -45,6 +45,8 @@
+
+ #define MGMT_MSG_TIMEOUT 5000
+
++#define SET_FUNC_PORT_MBOX_TIMEOUT 30000
++
+ #define SET_FUNC_PORT_MGMT_TIMEOUT 25000
+
+ #define mgmt_to_pfhwdev(pf_mgmt) \
+@@ -358,16 +360,20 @@ int hinic_msg_to_mgmt(struct hinic_pf_to_mgmt *pf_to_mgmt,
+ return -EINVAL;
+ }
+
+- if (cmd == HINIC_PORT_CMD_SET_FUNC_STATE)
+- timeout = SET_FUNC_PORT_MGMT_TIMEOUT;
++ if (HINIC_IS_VF(hwif)) {
++ if (cmd == HINIC_PORT_CMD_SET_FUNC_STATE)
++ timeout = SET_FUNC_PORT_MBOX_TIMEOUT;
+
+- if (HINIC_IS_VF(hwif))
+ return hinic_mbox_to_pf(pf_to_mgmt->hwdev, mod, cmd, buf_in,
+- in_size, buf_out, out_size, 0);
+- else
++ in_size, buf_out, out_size, timeout);
++ } else {
++ if (cmd == HINIC_PORT_CMD_SET_FUNC_STATE)
++ timeout = SET_FUNC_PORT_MGMT_TIMEOUT;
++
+ return msg_to_mgmt_sync(pf_to_mgmt, mod, cmd, buf_in, in_size,
+ buf_out, out_size, MGMT_DIRECT_SEND,
+ MSG_NOT_RESP, timeout);
++ }
+ }
+
+ static void recv_mgmt_msg_work_handler(struct work_struct *work)
+diff --git a/drivers/net/ethernet/huawei/hinic/hinic_main.c b/drivers/net/ethernet/huawei/hinic/hinic_main.c
+index e9e6f4c9309a1..c9d884049fd04 100644
+--- a/drivers/net/ethernet/huawei/hinic/hinic_main.c
++++ b/drivers/net/ethernet/huawei/hinic/hinic_main.c
+@@ -168,6 +168,24 @@ err_init_txq:
+ return err;
+ }
+
++static void enable_txqs_napi(struct hinic_dev *nic_dev)
++{
++ int num_txqs = hinic_hwdev_num_qps(nic_dev->hwdev);
++ int i;
++
++ for (i = 0; i < num_txqs; i++)
++ napi_enable(&nic_dev->txqs[i].napi);
++}
++
++static void disable_txqs_napi(struct hinic_dev *nic_dev)
++{
++ int num_txqs = hinic_hwdev_num_qps(nic_dev->hwdev);
++ int i;
++
++ for (i = 0; i < num_txqs; i++)
++ napi_disable(&nic_dev->txqs[i].napi);
++}
++
+ /**
+ * free_txqs - Free the Logical Tx Queues of specific NIC device
+ * @nic_dev: the specific NIC device
+@@ -394,6 +412,8 @@ int hinic_open(struct net_device *netdev)
+ goto err_create_txqs;
+ }
+
++ enable_txqs_napi(nic_dev);
++
+ err = create_rxqs(nic_dev);
+ if (err) {
+ netif_err(nic_dev, drv, netdev,
+@@ -475,6 +495,7 @@ err_port_state:
+ }
+
+ err_create_rxqs:
++ disable_txqs_napi(nic_dev);
+ free_txqs(nic_dev);
+
+ err_create_txqs:
+@@ -488,6 +509,9 @@ int hinic_close(struct net_device *netdev)
+ struct hinic_dev *nic_dev = netdev_priv(netdev);
+ unsigned int flags;
+
++ /* Disable txq napi firstly to aviod rewaking txq in free_tx_poll */
++ disable_txqs_napi(nic_dev);
++
+ down(&nic_dev->mgmt_lock);
+
+ flags = nic_dev->flags;
+diff --git a/drivers/net/ethernet/huawei/hinic/hinic_tx.c b/drivers/net/ethernet/huawei/hinic/hinic_tx.c
+index 4c66a0bc1b283..789aa278851e3 100644
+--- a/drivers/net/ethernet/huawei/hinic/hinic_tx.c
++++ b/drivers/net/ethernet/huawei/hinic/hinic_tx.c
+@@ -684,18 +684,6 @@ static int free_tx_poll(struct napi_struct *napi, int budget)
+ return budget;
+ }
+
+-static void tx_napi_add(struct hinic_txq *txq, int weight)
+-{
+- netif_napi_add(txq->netdev, &txq->napi, free_tx_poll, weight);
+- napi_enable(&txq->napi);
+-}
+-
+-static void tx_napi_del(struct hinic_txq *txq)
+-{
+- napi_disable(&txq->napi);
+- netif_napi_del(&txq->napi);
+-}
+-
+ static irqreturn_t tx_irq(int irq, void *data)
+ {
+ struct hinic_txq *txq = data;
+@@ -724,7 +712,7 @@ static int tx_request_irq(struct hinic_txq *txq)
+ struct hinic_sq *sq = txq->sq;
+ int err;
+
+- tx_napi_add(txq, nic_dev->tx_weight);
++ netif_napi_add(txq->netdev, &txq->napi, free_tx_poll, nic_dev->tx_weight);
+
+ hinic_hwdev_msix_set(nic_dev->hwdev, sq->msix_entry,
+ TX_IRQ_NO_PENDING, TX_IRQ_NO_COALESC,
+@@ -734,7 +722,7 @@ static int tx_request_irq(struct hinic_txq *txq)
+ err = request_irq(sq->irq, tx_irq, 0, txq->irq_name, txq);
+ if (err) {
+ dev_err(&pdev->dev, "Failed to request Tx irq\n");
+- tx_napi_del(txq);
++ netif_napi_del(&txq->napi);
+ return err;
+ }
+
+@@ -746,7 +734,7 @@ static void tx_free_irq(struct hinic_txq *txq)
+ struct hinic_sq *sq = txq->sq;
+
+ free_irq(sq->irq, txq);
+- tx_napi_del(txq);
++ netif_napi_del(&txq->napi);
+ }
+
+ /**
+diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c
+index 5afb3c9c52d20..1b702a43a5d01 100644
+--- a/drivers/net/ethernet/ibm/ibmvnic.c
++++ b/drivers/net/ethernet/ibm/ibmvnic.c
+@@ -479,6 +479,9 @@ static int reset_rx_pools(struct ibmvnic_adapter *adapter)
+ int i, j, rc;
+ u64 *size_array;
+
++ if (!adapter->rx_pool)
++ return -1;
++
+ size_array = (u64 *)((u8 *)(adapter->login_rsp_buf) +
+ be32_to_cpu(adapter->login_rsp_buf->off_rxadd_buff_size));
+
+@@ -649,6 +652,9 @@ static int reset_tx_pools(struct ibmvnic_adapter *adapter)
+ int tx_scrqs;
+ int i, rc;
+
++ if (!adapter->tx_pool)
++ return -1;
++
+ tx_scrqs = be32_to_cpu(adapter->login_rsp_buf->num_txsubm_subcrqs);
+ for (i = 0; i < tx_scrqs; i++) {
+ rc = reset_one_tx_pool(adapter, &adapter->tso_pool[i]);
+@@ -2011,7 +2017,10 @@ static int do_reset(struct ibmvnic_adapter *adapter,
+ adapter->req_rx_add_entries_per_subcrq !=
+ old_num_rx_slots ||
+ adapter->req_tx_entries_per_subcrq !=
+- old_num_tx_slots) {
++ old_num_tx_slots ||
++ !adapter->rx_pool ||
++ !adapter->tso_pool ||
++ !adapter->tx_pool) {
+ release_rx_pools(adapter);
+ release_tx_pools(adapter);
+ release_napi(adapter);
+@@ -2023,12 +2032,18 @@ static int do_reset(struct ibmvnic_adapter *adapter,
+
+ } else {
+ rc = reset_tx_pools(adapter);
+- if (rc)
++ if (rc) {
++ netdev_dbg(adapter->netdev, "reset tx pools failed (%d)\n",
++ rc);
+ goto out;
++ }
+
+ rc = reset_rx_pools(adapter);
+- if (rc)
++ if (rc) {
++ netdev_dbg(adapter->netdev, "reset rx pools failed (%d)\n",
++ rc);
+ goto out;
++ }
+ }
+ ibmvnic_disable_irqs(adapter);
+ }
+diff --git a/drivers/net/ethernet/lantiq_xrx200.c b/drivers/net/ethernet/lantiq_xrx200.c
+index 1645e4e7ebdbb..635ff3a5dcfb3 100644
+--- a/drivers/net/ethernet/lantiq_xrx200.c
++++ b/drivers/net/ethernet/lantiq_xrx200.c
+@@ -230,8 +230,8 @@ static int xrx200_poll_rx(struct napi_struct *napi, int budget)
+ }
+
+ if (rx < budget) {
+- napi_complete(&ch->napi);
+- ltq_dma_enable_irq(&ch->dma);
++ if (napi_complete_done(&ch->napi, rx))
++ ltq_dma_enable_irq(&ch->dma);
+ }
+
+ return rx;
+@@ -268,9 +268,12 @@ static int xrx200_tx_housekeeping(struct napi_struct *napi, int budget)
+ net_dev->stats.tx_bytes += bytes;
+ netdev_completed_queue(ch->priv->net_dev, pkts, bytes);
+
++ if (netif_queue_stopped(net_dev))
++ netif_wake_queue(net_dev);
++
+ if (pkts < budget) {
+- napi_complete(&ch->napi);
+- ltq_dma_enable_irq(&ch->dma);
++ if (napi_complete_done(&ch->napi, pkts))
++ ltq_dma_enable_irq(&ch->dma);
+ }
+
+ return pkts;
+@@ -342,10 +345,12 @@ static irqreturn_t xrx200_dma_irq(int irq, void *ptr)
+ {
+ struct xrx200_chan *ch = ptr;
+
+- ltq_dma_disable_irq(&ch->dma);
+- ltq_dma_ack_irq(&ch->dma);
++ if (napi_schedule_prep(&ch->napi)) {
++ __napi_schedule(&ch->napi);
++ ltq_dma_disable_irq(&ch->dma);
++ }
+
+- napi_schedule(&ch->napi);
++ ltq_dma_ack_irq(&ch->dma);
+
+ return IRQ_HANDLED;
+ }
+@@ -499,7 +504,7 @@ static int xrx200_probe(struct platform_device *pdev)
+
+ /* setup NAPI */
+ netif_napi_add(net_dev, &priv->chan_rx.napi, xrx200_poll_rx, 32);
+- netif_napi_add(net_dev, &priv->chan_tx.napi, xrx200_tx_housekeeping, 32);
++ netif_tx_napi_add(net_dev, &priv->chan_tx.napi, xrx200_tx_housekeeping, 32);
+
+ platform_set_drvdata(pdev, priv);
+
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en.h b/drivers/net/ethernet/mellanox/mlx5/core/en.h
+index 842db20493df6..76b23ba7a4687 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en.h
+@@ -604,7 +604,7 @@ struct mlx5e_rq {
+ struct dim dim; /* Dynamic Interrupt Moderation */
+
+ /* XDP */
+- struct bpf_prog *xdp_prog;
++ struct bpf_prog __rcu *xdp_prog;
+ struct mlx5e_xdpsq *xdpsq;
+ DECLARE_BITMAP(flags, 8);
+ struct page_pool *page_pool;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
+index c9d308e919655..75ed820b0ad72 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
+@@ -121,7 +121,7 @@ mlx5e_xmit_xdp_buff(struct mlx5e_xdpsq *sq, struct mlx5e_rq *rq,
+ bool mlx5e_xdp_handle(struct mlx5e_rq *rq, struct mlx5e_dma_info *di,
+ u32 *len, struct xdp_buff *xdp)
+ {
+- struct bpf_prog *prog = READ_ONCE(rq->xdp_prog);
++ struct bpf_prog *prog = rcu_dereference(rq->xdp_prog);
+ u32 act;
+ int err;
+
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.c
+index a33a1f762c70d..40db27bf790bb 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.c
+@@ -31,7 +31,6 @@ struct sk_buff *mlx5e_xsk_skb_from_cqe_mpwrq_linear(struct mlx5e_rq *rq,
+ {
+ struct xdp_buff *xdp = wi->umr.dma_info[page_idx].xsk;
+ u32 cqe_bcnt32 = cqe_bcnt;
+- bool consumed;
+
+ /* Check packet size. Note LRO doesn't use linear SKB */
+ if (unlikely(cqe_bcnt > rq->hw_mtu)) {
+@@ -51,10 +50,6 @@ struct sk_buff *mlx5e_xsk_skb_from_cqe_mpwrq_linear(struct mlx5e_rq *rq,
+ xsk_buff_dma_sync_for_cpu(xdp);
+ prefetch(xdp->data);
+
+- rcu_read_lock();
+- consumed = mlx5e_xdp_handle(rq, NULL, &cqe_bcnt32, xdp);
+- rcu_read_unlock();
+-
+ /* Possible flows:
+ * - XDP_REDIRECT to XSKMAP:
+ * The page is owned by the userspace from now.
+@@ -70,7 +65,7 @@ struct sk_buff *mlx5e_xsk_skb_from_cqe_mpwrq_linear(struct mlx5e_rq *rq,
+ * allocated first from the Reuse Ring, so it has enough space.
+ */
+
+- if (likely(consumed)) {
++ if (likely(mlx5e_xdp_handle(rq, NULL, &cqe_bcnt32, xdp))) {
+ if (likely(__test_and_clear_bit(MLX5E_RQ_FLAG_XDP_XMIT, rq->flags)))
+ __set_bit(page_idx, wi->xdp_xmit_bitmap); /* non-atomic */
+ return NULL; /* page/packet was consumed by XDP */
+@@ -88,7 +83,6 @@ struct sk_buff *mlx5e_xsk_skb_from_cqe_linear(struct mlx5e_rq *rq,
+ u32 cqe_bcnt)
+ {
+ struct xdp_buff *xdp = wi->di->xsk;
+- bool consumed;
+
+ /* wi->offset is not used in this function, because xdp->data and the
+ * DMA address point directly to the necessary place. Furthermore, the
+@@ -107,11 +101,7 @@ struct sk_buff *mlx5e_xsk_skb_from_cqe_linear(struct mlx5e_rq *rq,
+ return NULL;
+ }
+
+- rcu_read_lock();
+- consumed = mlx5e_xdp_handle(rq, NULL, &cqe_bcnt, xdp);
+- rcu_read_unlock();
+-
+- if (likely(consumed))
++ if (likely(mlx5e_xdp_handle(rq, NULL, &cqe_bcnt, xdp)))
+ return NULL; /* page/packet was consumed by XDP */
+
+ /* XDP_PASS: copy the data from the UMEM to a new SKB. The frame reuse
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/setup.c b/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/setup.c
+index 2c80205dc939d..3081cd74d651b 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/setup.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/setup.c
+@@ -143,8 +143,7 @@ err_free_cparam:
+ void mlx5e_close_xsk(struct mlx5e_channel *c)
+ {
+ clear_bit(MLX5E_CHANNEL_STATE_XSK, c->state);
+- napi_synchronize(&c->napi);
+- synchronize_rcu(); /* Sync with the XSK wakeup. */
++ synchronize_rcu(); /* Sync with the XSK wakeup and with NAPI. */
+
+ mlx5e_close_rq(&c->xskrq);
+ mlx5e_close_cq(&c->xskrq.cq);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/tls_stats.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/tls_stats.c
+index 01468ec274466..b949b9a7538b0 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/tls_stats.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/tls_stats.c
+@@ -35,7 +35,6 @@
+ #include <net/sock.h>
+
+ #include "en.h"
+-#include "accel/tls.h"
+ #include "fpga/sdk.h"
+ #include "en_accel/tls.h"
+
+@@ -51,9 +50,14 @@ static const struct counter_desc mlx5e_tls_sw_stats_desc[] = {
+
+ #define NUM_TLS_SW_COUNTERS ARRAY_SIZE(mlx5e_tls_sw_stats_desc)
+
++static bool is_tls_atomic_stats(struct mlx5e_priv *priv)
++{
++ return priv->tls && !mlx5_accel_is_ktls_device(priv->mdev);
++}
++
+ int mlx5e_tls_get_count(struct mlx5e_priv *priv)
+ {
+- if (!priv->tls)
++ if (!is_tls_atomic_stats(priv))
+ return 0;
+
+ return NUM_TLS_SW_COUNTERS;
+@@ -63,7 +67,7 @@ int mlx5e_tls_get_strings(struct mlx5e_priv *priv, uint8_t *data)
+ {
+ unsigned int i, idx = 0;
+
+- if (!priv->tls)
++ if (!is_tls_atomic_stats(priv))
+ return 0;
+
+ for (i = 0; i < NUM_TLS_SW_COUNTERS; i++)
+@@ -77,7 +81,7 @@ int mlx5e_tls_get_stats(struct mlx5e_priv *priv, u64 *data)
+ {
+ int i, idx = 0;
+
+- if (!priv->tls)
++ if (!is_tls_atomic_stats(priv))
+ return 0;
+
+ for (i = 0; i < NUM_TLS_SW_COUNTERS; i++)
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+index 3b892ec301b4a..cccf65fc116ee 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+@@ -401,7 +401,7 @@ static int mlx5e_alloc_rq(struct mlx5e_channel *c,
+
+ if (params->xdp_prog)
+ bpf_prog_inc(params->xdp_prog);
+- rq->xdp_prog = params->xdp_prog;
++ RCU_INIT_POINTER(rq->xdp_prog, params->xdp_prog);
+
+ rq_xdp_ix = rq->ix;
+ if (xsk)
+@@ -410,7 +410,7 @@ static int mlx5e_alloc_rq(struct mlx5e_channel *c,
+ if (err < 0)
+ goto err_rq_wq_destroy;
+
+- rq->buff.map_dir = rq->xdp_prog ? DMA_BIDIRECTIONAL : DMA_FROM_DEVICE;
++ rq->buff.map_dir = params->xdp_prog ? DMA_BIDIRECTIONAL : DMA_FROM_DEVICE;
+ rq->buff.headroom = mlx5e_get_rq_headroom(mdev, params, xsk);
+ pool_size = 1 << params->log_rq_mtu_frames;
+
+@@ -605,8 +605,8 @@ err_free:
+ }
+
+ err_rq_wq_destroy:
+- if (rq->xdp_prog)
+- bpf_prog_put(rq->xdp_prog);
++ if (params->xdp_prog)
++ bpf_prog_put(params->xdp_prog);
+ xdp_rxq_info_unreg(&rq->xdp_rxq);
+ page_pool_destroy(rq->page_pool);
+ mlx5_wq_destroy(&rq->wq_ctrl);
+@@ -616,10 +616,16 @@ err_rq_wq_destroy:
+
+ static void mlx5e_free_rq(struct mlx5e_rq *rq)
+ {
++ struct mlx5e_channel *c = rq->channel;
++ struct bpf_prog *old_prog = NULL;
+ int i;
+
+- if (rq->xdp_prog)
+- bpf_prog_put(rq->xdp_prog);
++ /* drop_rq has neither channel nor xdp_prog. */
++ if (c)
++ old_prog = rcu_dereference_protected(rq->xdp_prog,
++ lockdep_is_held(&c->priv->state_lock));
++ if (old_prog)
++ bpf_prog_put(old_prog);
+
+ switch (rq->wq_type) {
+ case MLX5_WQ_TYPE_LINKED_LIST_STRIDING_RQ:
+@@ -905,7 +911,7 @@ void mlx5e_activate_rq(struct mlx5e_rq *rq)
+ void mlx5e_deactivate_rq(struct mlx5e_rq *rq)
+ {
+ clear_bit(MLX5E_RQ_STATE_ENABLED, &rq->state);
+- napi_synchronize(&rq->channel->napi); /* prevent mlx5e_post_rx_wqes */
++ synchronize_rcu(); /* Sync with NAPI to prevent mlx5e_post_rx_wqes. */
+ }
+
+ void mlx5e_close_rq(struct mlx5e_rq *rq)
+@@ -1350,12 +1356,10 @@ void mlx5e_tx_disable_queue(struct netdev_queue *txq)
+
+ static void mlx5e_deactivate_txqsq(struct mlx5e_txqsq *sq)
+ {
+- struct mlx5e_channel *c = sq->channel;
+ struct mlx5_wq_cyc *wq = &sq->wq;
+
+ clear_bit(MLX5E_SQ_STATE_ENABLED, &sq->state);
+- /* prevent netif_tx_wake_queue */
+- napi_synchronize(&c->napi);
++ synchronize_rcu(); /* Sync with NAPI to prevent netif_tx_wake_queue. */
+
+ mlx5e_tx_disable_queue(sq->txq);
+
+@@ -1430,10 +1434,8 @@ void mlx5e_activate_icosq(struct mlx5e_icosq *icosq)
+
+ void mlx5e_deactivate_icosq(struct mlx5e_icosq *icosq)
+ {
+- struct mlx5e_channel *c = icosq->channel;
+-
+ clear_bit(MLX5E_SQ_STATE_ENABLED, &icosq->state);
+- napi_synchronize(&c->napi);
++ synchronize_rcu(); /* Sync with NAPI. */
+ }
+
+ void mlx5e_close_icosq(struct mlx5e_icosq *sq)
+@@ -1511,7 +1513,7 @@ void mlx5e_close_xdpsq(struct mlx5e_xdpsq *sq)
+ struct mlx5e_channel *c = sq->channel;
+
+ clear_bit(MLX5E_SQ_STATE_ENABLED, &sq->state);
+- napi_synchronize(&c->napi);
++ synchronize_rcu(); /* Sync with NAPI. */
+
+ mlx5e_destroy_sq(c->mdev, sq->sqn);
+ mlx5e_free_xdpsq_descs(sq);
+@@ -4423,6 +4425,16 @@ static int mlx5e_xdp_allowed(struct mlx5e_priv *priv, struct bpf_prog *prog)
+ return 0;
+ }
+
++static void mlx5e_rq_replace_xdp_prog(struct mlx5e_rq *rq, struct bpf_prog *prog)
++{
++ struct bpf_prog *old_prog;
++
++ old_prog = rcu_replace_pointer(rq->xdp_prog, prog,
++ lockdep_is_held(&rq->channel->priv->state_lock));
++ if (old_prog)
++ bpf_prog_put(old_prog);
++}
++
+ static int mlx5e_xdp_set(struct net_device *netdev, struct bpf_prog *prog)
+ {
+ struct mlx5e_priv *priv = netdev_priv(netdev);
+@@ -4481,29 +4493,10 @@ static int mlx5e_xdp_set(struct net_device *netdev, struct bpf_prog *prog)
+ */
+ for (i = 0; i < priv->channels.num; i++) {
+ struct mlx5e_channel *c = priv->channels.c[i];
+- bool xsk_open = test_bit(MLX5E_CHANNEL_STATE_XSK, c->state);
+-
+- clear_bit(MLX5E_RQ_STATE_ENABLED, &c->rq.state);
+- if (xsk_open)
+- clear_bit(MLX5E_RQ_STATE_ENABLED, &c->xskrq.state);
+- napi_synchronize(&c->napi);
+- /* prevent mlx5e_poll_rx_cq from accessing rq->xdp_prog */
+-
+- old_prog = xchg(&c->rq.xdp_prog, prog);
+- if (old_prog)
+- bpf_prog_put(old_prog);
+-
+- if (xsk_open) {
+- old_prog = xchg(&c->xskrq.xdp_prog, prog);
+- if (old_prog)
+- bpf_prog_put(old_prog);
+- }
+
+- set_bit(MLX5E_RQ_STATE_ENABLED, &c->rq.state);
+- if (xsk_open)
+- set_bit(MLX5E_RQ_STATE_ENABLED, &c->xskrq.state);
+- /* napi_schedule in case we have missed anything */
+- napi_schedule(&c->napi);
++ mlx5e_rq_replace_xdp_prog(&c->rq, prog);
++ if (test_bit(MLX5E_CHANNEL_STATE_XSK, c->state))
++ mlx5e_rq_replace_xdp_prog(&c->xskrq, prog);
+ }
+
+ unlock:
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
+index dbb1c63239672..409fecbcc5d2b 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
+@@ -1072,7 +1072,6 @@ mlx5e_skb_from_cqe_linear(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe,
+ struct xdp_buff xdp;
+ struct sk_buff *skb;
+ void *va, *data;
+- bool consumed;
+ u32 frag_size;
+
+ va = page_address(di->page) + wi->offset;
+@@ -1084,11 +1083,8 @@ mlx5e_skb_from_cqe_linear(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe,
+ prefetchw(va); /* xdp_frame data area */
+ prefetch(data);
+
+- rcu_read_lock();
+ mlx5e_fill_xdp_buff(rq, va, rx_headroom, cqe_bcnt, &xdp);
+- consumed = mlx5e_xdp_handle(rq, di, &cqe_bcnt, &xdp);
+- rcu_read_unlock();
+- if (consumed)
++ if (mlx5e_xdp_handle(rq, di, &cqe_bcnt, &xdp))
+ return NULL; /* page/packet was consumed by XDP */
+
+ rx_headroom = xdp.data - xdp.data_hard_start;
+@@ -1369,7 +1365,6 @@ mlx5e_skb_from_cqe_mpwrq_linear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *wi,
+ struct sk_buff *skb;
+ void *va, *data;
+ u32 frag_size;
+- bool consumed;
+
+ /* Check packet size. Note LRO doesn't use linear SKB */
+ if (unlikely(cqe_bcnt > rq->hw_mtu)) {
+@@ -1386,11 +1381,8 @@ mlx5e_skb_from_cqe_mpwrq_linear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *wi,
+ prefetchw(va); /* xdp_frame data area */
+ prefetch(data);
+
+- rcu_read_lock();
+ mlx5e_fill_xdp_buff(rq, va, rx_headroom, cqe_bcnt32, &xdp);
+- consumed = mlx5e_xdp_handle(rq, di, &cqe_bcnt32, &xdp);
+- rcu_read_unlock();
+- if (consumed) {
++ if (mlx5e_xdp_handle(rq, di, &cqe_bcnt32, &xdp)) {
+ if (__test_and_clear_bit(MLX5E_RQ_FLAG_XDP_XMIT, rq->flags))
+ __set_bit(page_idx, wi->xdp_xmit_bitmap); /* non-atomic */
+ return NULL; /* page/packet was consumed by XDP */
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+index fcedb5bdca9e5..7da1e7462f64e 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+@@ -1399,11 +1399,8 @@ static void mlx5e_tc_del_fdb_flow(struct mlx5e_priv *priv,
+
+ mlx5e_put_flow_tunnel_id(flow);
+
+- if (flow_flag_test(flow, NOT_READY)) {
++ if (flow_flag_test(flow, NOT_READY))
+ remove_unready_flow(flow);
+- kvfree(attr->parse_attr);
+- return;
+- }
+
+ if (mlx5e_is_offloaded_flow(flow)) {
+ if (flow_flag_test(flow, SLOW))
+@@ -2734,6 +2731,22 @@ static struct mlx5_fields fields[] = {
+ OFFLOAD(UDP_DPORT, 16, U16_MAX, udp.dest, 0, udp_dport),
+ };
+
++static unsigned long mask_to_le(unsigned long mask, int size)
++{
++ __be32 mask_be32;
++ __be16 mask_be16;
++
++ if (size == 32) {
++ mask_be32 = (__force __be32)(mask);
++ mask = (__force unsigned long)cpu_to_le32(be32_to_cpu(mask_be32));
++ } else if (size == 16) {
++ mask_be32 = (__force __be32)(mask);
++ mask_be16 = *(__be16 *)&mask_be32;
++ mask = (__force unsigned long)cpu_to_le16(be16_to_cpu(mask_be16));
++ }
++
++ return mask;
++}
+ static int offload_pedit_fields(struct mlx5e_priv *priv,
+ int namespace,
+ struct pedit_headers_action *hdrs,
+@@ -2747,9 +2760,7 @@ static int offload_pedit_fields(struct mlx5e_priv *priv,
+ u32 *s_masks_p, *a_masks_p, s_mask, a_mask;
+ struct mlx5e_tc_mod_hdr_acts *mod_acts;
+ struct mlx5_fields *f;
+- unsigned long mask;
+- __be32 mask_be32;
+- __be16 mask_be16;
++ unsigned long mask, field_mask;
+ int err;
+ u8 cmd;
+
+@@ -2815,14 +2826,7 @@ static int offload_pedit_fields(struct mlx5e_priv *priv,
+ if (skip)
+ continue;
+
+- if (f->field_bsize == 32) {
+- mask_be32 = (__force __be32)(mask);
+- mask = (__force unsigned long)cpu_to_le32(be32_to_cpu(mask_be32));
+- } else if (f->field_bsize == 16) {
+- mask_be32 = (__force __be32)(mask);
+- mask_be16 = *(__be16 *)&mask_be32;
+- mask = (__force unsigned long)cpu_to_le16(be16_to_cpu(mask_be16));
+- }
++ mask = mask_to_le(mask, f->field_bsize);
+
+ first = find_first_bit(&mask, f->field_bsize);
+ next_z = find_next_zero_bit(&mask, f->field_bsize, first);
+@@ -2853,9 +2857,10 @@ static int offload_pedit_fields(struct mlx5e_priv *priv,
+ if (cmd == MLX5_ACTION_TYPE_SET) {
+ int start;
+
++ field_mask = mask_to_le(f->field_mask, f->field_bsize);
++
+ /* if field is bit sized it can start not from first bit */
+- start = find_first_bit((unsigned long *)&f->field_mask,
+- f->field_bsize);
++ start = find_first_bit(&field_mask, f->field_bsize);
+
+ MLX5_SET(set_action_in, action, offset, first - start);
+ /* length is num of bits to be written, zero means length of 32 */
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_txrx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_txrx.c
+index 8480278f2ee20..954a2f0513d67 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_txrx.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_txrx.c
+@@ -121,13 +121,17 @@ int mlx5e_napi_poll(struct napi_struct *napi, int budget)
+ struct mlx5e_xdpsq *xsksq = &c->xsksq;
+ struct mlx5e_rq *xskrq = &c->xskrq;
+ struct mlx5e_rq *rq = &c->rq;
+- bool xsk_open = test_bit(MLX5E_CHANNEL_STATE_XSK, c->state);
+ bool aff_change = false;
+ bool busy_xsk = false;
+ bool busy = false;
+ int work_done = 0;
++ bool xsk_open;
+ int i;
+
++ rcu_read_lock();
++
++ xsk_open = test_bit(MLX5E_CHANNEL_STATE_XSK, c->state);
++
+ ch_stats->poll++;
+
+ for (i = 0; i < c->num_tc; i++)
+@@ -167,8 +171,10 @@ int mlx5e_napi_poll(struct napi_struct *napi, int budget)
+ busy |= busy_xsk;
+
+ if (busy) {
+- if (likely(mlx5e_channel_no_affinity_change(c)))
+- return budget;
++ if (likely(mlx5e_channel_no_affinity_change(c))) {
++ work_done = budget;
++ goto out;
++ }
+ ch_stats->aff_change++;
+ aff_change = true;
+ if (budget && work_done == budget)
+@@ -176,7 +182,7 @@ int mlx5e_napi_poll(struct napi_struct *napi, int budget)
+ }
+
+ if (unlikely(!napi_complete_done(napi, work_done)))
+- return work_done;
++ goto out;
+
+ ch_stats->arm++;
+
+@@ -203,6 +209,9 @@ int mlx5e_napi_poll(struct napi_struct *napi, int budget)
+ ch_stats->force_irq++;
+ }
+
++out:
++ rcu_read_unlock();
++
+ return work_done;
+ }
+
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
+index ed75353c56b85..f16610feab88d 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
+@@ -1219,35 +1219,37 @@ static int esw_create_offloads_fdb_tables(struct mlx5_eswitch *esw, int nvports)
+ }
+ esw->fdb_table.offloads.send_to_vport_grp = g;
+
+- /* create peer esw miss group */
+- memset(flow_group_in, 0, inlen);
++ if (MLX5_CAP_ESW(esw->dev, merged_eswitch)) {
++ /* create peer esw miss group */
++ memset(flow_group_in, 0, inlen);
+
+- esw_set_flow_group_source_port(esw, flow_group_in);
++ esw_set_flow_group_source_port(esw, flow_group_in);
+
+- if (!mlx5_eswitch_vport_match_metadata_enabled(esw)) {
+- match_criteria = MLX5_ADDR_OF(create_flow_group_in,
+- flow_group_in,
+- match_criteria);
++ if (!mlx5_eswitch_vport_match_metadata_enabled(esw)) {
++ match_criteria = MLX5_ADDR_OF(create_flow_group_in,
++ flow_group_in,
++ match_criteria);
+
+- MLX5_SET_TO_ONES(fte_match_param, match_criteria,
+- misc_parameters.source_eswitch_owner_vhca_id);
++ MLX5_SET_TO_ONES(fte_match_param, match_criteria,
++ misc_parameters.source_eswitch_owner_vhca_id);
+
+- MLX5_SET(create_flow_group_in, flow_group_in,
+- source_eswitch_owner_vhca_id_valid, 1);
+- }
++ MLX5_SET(create_flow_group_in, flow_group_in,
++ source_eswitch_owner_vhca_id_valid, 1);
++ }
+
+- MLX5_SET(create_flow_group_in, flow_group_in, start_flow_index, ix);
+- MLX5_SET(create_flow_group_in, flow_group_in, end_flow_index,
+- ix + esw->total_vports - 1);
+- ix += esw->total_vports;
++ MLX5_SET(create_flow_group_in, flow_group_in, start_flow_index, ix);
++ MLX5_SET(create_flow_group_in, flow_group_in, end_flow_index,
++ ix + esw->total_vports - 1);
++ ix += esw->total_vports;
+
+- g = mlx5_create_flow_group(fdb, flow_group_in);
+- if (IS_ERR(g)) {
+- err = PTR_ERR(g);
+- esw_warn(dev, "Failed to create peer miss flow group err(%d)\n", err);
+- goto peer_miss_err;
++ g = mlx5_create_flow_group(fdb, flow_group_in);
++ if (IS_ERR(g)) {
++ err = PTR_ERR(g);
++ esw_warn(dev, "Failed to create peer miss flow group err(%d)\n", err);
++ goto peer_miss_err;
++ }
++ esw->fdb_table.offloads.peer_miss_grp = g;
+ }
+- esw->fdb_table.offloads.peer_miss_grp = g;
+
+ /* create miss group */
+ memset(flow_group_in, 0, inlen);
+@@ -1282,7 +1284,8 @@ static int esw_create_offloads_fdb_tables(struct mlx5_eswitch *esw, int nvports)
+ miss_rule_err:
+ mlx5_destroy_flow_group(esw->fdb_table.offloads.miss_grp);
+ miss_err:
+- mlx5_destroy_flow_group(esw->fdb_table.offloads.peer_miss_grp);
++ if (MLX5_CAP_ESW(esw->dev, merged_eswitch))
++ mlx5_destroy_flow_group(esw->fdb_table.offloads.peer_miss_grp);
+ peer_miss_err:
+ mlx5_destroy_flow_group(esw->fdb_table.offloads.send_to_vport_grp);
+ send_vport_err:
+@@ -1306,7 +1309,8 @@ static void esw_destroy_offloads_fdb_tables(struct mlx5_eswitch *esw)
+ mlx5_del_flow_rules(esw->fdb_table.offloads.miss_rule_multi);
+ mlx5_del_flow_rules(esw->fdb_table.offloads.miss_rule_uni);
+ mlx5_destroy_flow_group(esw->fdb_table.offloads.send_to_vport_grp);
+- mlx5_destroy_flow_group(esw->fdb_table.offloads.peer_miss_grp);
++ if (MLX5_CAP_ESW(esw->dev, merged_eswitch))
++ mlx5_destroy_flow_group(esw->fdb_table.offloads.peer_miss_grp);
+ mlx5_destroy_flow_group(esw->fdb_table.offloads.miss_grp);
+
+ mlx5_esw_chains_destroy(esw);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
+index 2e5f7efb82a88..1f96f9efa3c18 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
+@@ -655,7 +655,7 @@ static struct fs_fte *alloc_fte(struct mlx5_flow_table *ft,
+ fte->action = *flow_act;
+ fte->flow_context = spec->flow_context;
+
+- tree_init_node(&fte->node, NULL, del_sw_fte);
++ tree_init_node(&fte->node, del_hw_fte, del_sw_fte);
+
+ return fte;
+ }
+@@ -1792,7 +1792,6 @@ skip_search:
+ up_write_ref_node(&g->node, false);
+ rule = add_rule_fg(g, spec, flow_act, dest, dest_num, fte);
+ up_write_ref_node(&fte->node, false);
+- tree_put_node(&fte->node, false);
+ return rule;
+ }
+ rule = ERR_PTR(-ENOENT);
+@@ -1891,7 +1890,6 @@ search_again_locked:
+ up_write_ref_node(&g->node, false);
+ rule = add_rule_fg(g, spec, flow_act, dest, dest_num, fte);
+ up_write_ref_node(&fte->node, false);
+- tree_put_node(&fte->node, false);
+ tree_put_node(&g->node, false);
+ return rule;
+
+@@ -2001,7 +1999,9 @@ void mlx5_del_flow_rules(struct mlx5_flow_handle *handle)
+ up_write_ref_node(&fte->node, false);
+ } else {
+ del_hw_fte(&fte->node);
+- up_write(&fte->node.lock);
++ /* Avoid double call to del_hw_fte */
++ fte->node.del_hw_func = NULL;
++ up_write_ref_node(&fte->node, false);
+ tree_put_node(&fte->node, false);
+ }
+ kfree(handle);
+diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_ethtool.c b/drivers/net/ethernet/netronome/nfp/nfp_net_ethtool.c
+index 6eb9fb9a18145..9c9ae33d84ce9 100644
+--- a/drivers/net/ethernet/netronome/nfp/nfp_net_ethtool.c
++++ b/drivers/net/ethernet/netronome/nfp/nfp_net_ethtool.c
+@@ -829,8 +829,8 @@ nfp_port_get_fecparam(struct net_device *netdev,
+ struct nfp_eth_table_port *eth_port;
+ struct nfp_port *port;
+
+- param->active_fec = ETHTOOL_FEC_NONE_BIT;
+- param->fec = ETHTOOL_FEC_NONE_BIT;
++ param->active_fec = ETHTOOL_FEC_NONE;
++ param->fec = ETHTOOL_FEC_NONE;
+
+ port = nfp_port_from_netdev(netdev);
+ eth_port = nfp_port_get_eth_port(port);
+diff --git a/drivers/net/ethernet/ti/cpsw_new.c b/drivers/net/ethernet/ti/cpsw_new.c
+index 8ed78577cdedf..15672d0a4de69 100644
+--- a/drivers/net/ethernet/ti/cpsw_new.c
++++ b/drivers/net/ethernet/ti/cpsw_new.c
+@@ -17,6 +17,7 @@
+ #include <linux/phy.h>
+ #include <linux/phy/phy.h>
+ #include <linux/delay.h>
++#include <linux/pinctrl/consumer.h>
+ #include <linux/pm_runtime.h>
+ #include <linux/gpio/consumer.h>
+ #include <linux/of.h>
+@@ -2070,9 +2071,61 @@ static int cpsw_remove(struct platform_device *pdev)
+ return 0;
+ }
+
++static int __maybe_unused cpsw_suspend(struct device *dev)
++{
++ struct cpsw_common *cpsw = dev_get_drvdata(dev);
++ int i;
++
++ rtnl_lock();
++
++ for (i = 0; i < cpsw->data.slaves; i++) {
++ struct net_device *ndev = cpsw->slaves[i].ndev;
++
++ if (!(ndev && netif_running(ndev)))
++ continue;
++
++ cpsw_ndo_stop(ndev);
++ }
++
++ rtnl_unlock();
++
++ /* Select sleep pin state */
++ pinctrl_pm_select_sleep_state(dev);
++
++ return 0;
++}
++
++static int __maybe_unused cpsw_resume(struct device *dev)
++{
++ struct cpsw_common *cpsw = dev_get_drvdata(dev);
++ int i;
++
++ /* Select default pin state */
++ pinctrl_pm_select_default_state(dev);
++
++ /* shut up ASSERT_RTNL() warning in netif_set_real_num_tx/rx_queues */
++ rtnl_lock();
++
++ for (i = 0; i < cpsw->data.slaves; i++) {
++ struct net_device *ndev = cpsw->slaves[i].ndev;
++
++ if (!(ndev && netif_running(ndev)))
++ continue;
++
++ cpsw_ndo_open(ndev);
++ }
++
++ rtnl_unlock();
++
++ return 0;
++}
++
++static SIMPLE_DEV_PM_OPS(cpsw_pm_ops, cpsw_suspend, cpsw_resume);
++
+ static struct platform_driver cpsw_driver = {
+ .driver = {
+ .name = "cpsw-switch",
++ .pm = &cpsw_pm_ops,
+ .of_match_table = cpsw_of_mtable,
+ },
+ .probe = cpsw_probe,
+diff --git a/drivers/net/geneve.c b/drivers/net/geneve.c
+index dec52b763d508..deede92b17fc7 100644
+--- a/drivers/net/geneve.c
++++ b/drivers/net/geneve.c
+@@ -773,7 +773,8 @@ static struct rtable *geneve_get_v4_rt(struct sk_buff *skb,
+ struct net_device *dev,
+ struct geneve_sock *gs4,
+ struct flowi4 *fl4,
+- const struct ip_tunnel_info *info)
++ const struct ip_tunnel_info *info,
++ __be16 dport, __be16 sport)
+ {
+ bool use_cache = ip_tunnel_dst_cache_usable(skb, info);
+ struct geneve_dev *geneve = netdev_priv(dev);
+@@ -789,6 +790,8 @@ static struct rtable *geneve_get_v4_rt(struct sk_buff *skb,
+ fl4->flowi4_proto = IPPROTO_UDP;
+ fl4->daddr = info->key.u.ipv4.dst;
+ fl4->saddr = info->key.u.ipv4.src;
++ fl4->fl4_dport = dport;
++ fl4->fl4_sport = sport;
+
+ tos = info->key.tos;
+ if ((tos == 1) && !geneve->collect_md) {
+@@ -823,7 +826,8 @@ static struct dst_entry *geneve_get_v6_dst(struct sk_buff *skb,
+ struct net_device *dev,
+ struct geneve_sock *gs6,
+ struct flowi6 *fl6,
+- const struct ip_tunnel_info *info)
++ const struct ip_tunnel_info *info,
++ __be16 dport, __be16 sport)
+ {
+ bool use_cache = ip_tunnel_dst_cache_usable(skb, info);
+ struct geneve_dev *geneve = netdev_priv(dev);
+@@ -839,6 +843,9 @@ static struct dst_entry *geneve_get_v6_dst(struct sk_buff *skb,
+ fl6->flowi6_proto = IPPROTO_UDP;
+ fl6->daddr = info->key.u.ipv6.dst;
+ fl6->saddr = info->key.u.ipv6.src;
++ fl6->fl6_dport = dport;
++ fl6->fl6_sport = sport;
++
+ prio = info->key.tos;
+ if ((prio == 1) && !geneve->collect_md) {
+ prio = ip_tunnel_get_dsfield(ip_hdr(skb), skb);
+@@ -885,14 +892,15 @@ static int geneve_xmit_skb(struct sk_buff *skb, struct net_device *dev,
+ __be16 sport;
+ int err;
+
+- rt = geneve_get_v4_rt(skb, dev, gs4, &fl4, info);
++ sport = udp_flow_src_port(geneve->net, skb, 1, USHRT_MAX, true);
++ rt = geneve_get_v4_rt(skb, dev, gs4, &fl4, info,
++ geneve->info.key.tp_dst, sport);
+ if (IS_ERR(rt))
+ return PTR_ERR(rt);
+
+ skb_tunnel_check_pmtu(skb, &rt->dst,
+ GENEVE_IPV4_HLEN + info->options_len);
+
+- sport = udp_flow_src_port(geneve->net, skb, 1, USHRT_MAX, true);
+ if (geneve->collect_md) {
+ tos = ip_tunnel_ecn_encap(key->tos, ip_hdr(skb), skb);
+ ttl = key->ttl;
+@@ -947,13 +955,14 @@ static int geneve6_xmit_skb(struct sk_buff *skb, struct net_device *dev,
+ __be16 sport;
+ int err;
+
+- dst = geneve_get_v6_dst(skb, dev, gs6, &fl6, info);
++ sport = udp_flow_src_port(geneve->net, skb, 1, USHRT_MAX, true);
++ dst = geneve_get_v6_dst(skb, dev, gs6, &fl6, info,
++ geneve->info.key.tp_dst, sport);
+ if (IS_ERR(dst))
+ return PTR_ERR(dst);
+
+ skb_tunnel_check_pmtu(skb, dst, GENEVE_IPV6_HLEN + info->options_len);
+
+- sport = udp_flow_src_port(geneve->net, skb, 1, USHRT_MAX, true);
+ if (geneve->collect_md) {
+ prio = ip_tunnel_ecn_encap(key->tos, ip_hdr(skb), skb);
+ ttl = key->ttl;
+@@ -1034,13 +1043,18 @@ static int geneve_fill_metadata_dst(struct net_device *dev, struct sk_buff *skb)
+ {
+ struct ip_tunnel_info *info = skb_tunnel_info(skb);
+ struct geneve_dev *geneve = netdev_priv(dev);
++ __be16 sport;
+
+ if (ip_tunnel_info_af(info) == AF_INET) {
+ struct rtable *rt;
+ struct flowi4 fl4;
++
+ struct geneve_sock *gs4 = rcu_dereference(geneve->sock4);
++ sport = udp_flow_src_port(geneve->net, skb,
++ 1, USHRT_MAX, true);
+
+- rt = geneve_get_v4_rt(skb, dev, gs4, &fl4, info);
++ rt = geneve_get_v4_rt(skb, dev, gs4, &fl4, info,
++ geneve->info.key.tp_dst, sport);
+ if (IS_ERR(rt))
+ return PTR_ERR(rt);
+
+@@ -1050,9 +1064,13 @@ static int geneve_fill_metadata_dst(struct net_device *dev, struct sk_buff *skb)
+ } else if (ip_tunnel_info_af(info) == AF_INET6) {
+ struct dst_entry *dst;
+ struct flowi6 fl6;
++
+ struct geneve_sock *gs6 = rcu_dereference(geneve->sock6);
++ sport = udp_flow_src_port(geneve->net, skb,
++ 1, USHRT_MAX, true);
+
+- dst = geneve_get_v6_dst(skb, dev, gs6, &fl6, info);
++ dst = geneve_get_v6_dst(skb, dev, gs6, &fl6, info,
++ geneve->info.key.tp_dst, sport);
+ if (IS_ERR(dst))
+ return PTR_ERR(dst);
+
+@@ -1063,8 +1081,7 @@ static int geneve_fill_metadata_dst(struct net_device *dev, struct sk_buff *skb)
+ return -EINVAL;
+ }
+
+- info->key.tp_src = udp_flow_src_port(geneve->net, skb,
+- 1, USHRT_MAX, true);
++ info->key.tp_src = sport;
+ info->key.tp_dst = geneve->info.key.tp_dst;
+ return 0;
+ }
+diff --git a/drivers/net/hyperv/netvsc_drv.c b/drivers/net/hyperv/netvsc_drv.c
+index 47159b31e6b39..8309194b351a9 100644
+--- a/drivers/net/hyperv/netvsc_drv.c
++++ b/drivers/net/hyperv/netvsc_drv.c
+@@ -2544,8 +2544,8 @@ static int netvsc_remove(struct hv_device *dev)
+ static int netvsc_suspend(struct hv_device *dev)
+ {
+ struct net_device_context *ndev_ctx;
+- struct net_device *vf_netdev, *net;
+ struct netvsc_device *nvdev;
++ struct net_device *net;
+ int ret;
+
+ net = hv_get_drvdata(dev);
+@@ -2561,10 +2561,6 @@ static int netvsc_suspend(struct hv_device *dev)
+ goto out;
+ }
+
+- vf_netdev = rtnl_dereference(ndev_ctx->vf_netdev);
+- if (vf_netdev)
+- netvsc_unregister_vf(vf_netdev);
+-
+ /* Save the current config info */
+ ndev_ctx->saved_netvsc_dev_info = netvsc_devinfo_get(nvdev);
+
+@@ -2580,6 +2576,7 @@ static int netvsc_resume(struct hv_device *dev)
+ struct net_device *net = hv_get_drvdata(dev);
+ struct net_device_context *net_device_ctx;
+ struct netvsc_device_info *device_info;
++ struct net_device *vf_netdev;
+ int ret;
+
+ rtnl_lock();
+@@ -2592,6 +2589,15 @@ static int netvsc_resume(struct hv_device *dev)
+ netvsc_devinfo_put(device_info);
+ net_device_ctx->saved_netvsc_dev_info = NULL;
+
++ /* A NIC driver (e.g. mlx5) may keep the VF network interface across
++ * hibernation, but here the data path is implicitly switched to the
++ * netvsc NIC since the vmbus channel is closed and re-opened, so
++ * netvsc_vf_changed() must be used to switch the data path to the VF.
++ */
++ vf_netdev = rtnl_dereference(net_device_ctx->vf_netdev);
++ if (vf_netdev && netvsc_vf_changed(vf_netdev) != NOTIFY_OK)
++ ret = -EINVAL;
++
+ rtnl_unlock();
+
+ return ret;
+diff --git a/drivers/net/ipa/ipa_table.c b/drivers/net/ipa/ipa_table.c
+index 9df2a3e78c989..d08c626b2baa6 100644
+--- a/drivers/net/ipa/ipa_table.c
++++ b/drivers/net/ipa/ipa_table.c
+@@ -521,7 +521,7 @@ static void ipa_filter_tuple_zero(struct ipa_endpoint *endpoint)
+ val = ioread32(endpoint->ipa->reg_virt + offset);
+
+ /* Zero all filter-related fields, preserving the rest */
+- u32_replace_bits(val, 0, IPA_REG_ENDP_FILTER_HASH_MSK_ALL);
++ u32p_replace_bits(&val, 0, IPA_REG_ENDP_FILTER_HASH_MSK_ALL);
+
+ iowrite32(val, endpoint->ipa->reg_virt + offset);
+ }
+@@ -572,7 +572,7 @@ static void ipa_route_tuple_zero(struct ipa *ipa, u32 route_id)
+ val = ioread32(ipa->reg_virt + offset);
+
+ /* Zero all route-related fields, preserving the rest */
+- u32_replace_bits(val, 0, IPA_REG_ENDP_ROUTER_HASH_MSK_ALL);
++ u32p_replace_bits(&val, 0, IPA_REG_ENDP_ROUTER_HASH_MSK_ALL);
+
+ iowrite32(val, ipa->reg_virt + offset);
+ }
+diff --git a/drivers/net/phy/phy.c b/drivers/net/phy/phy.c
+index 56cfae9504727..f5620f91dbf3a 100644
+--- a/drivers/net/phy/phy.c
++++ b/drivers/net/phy/phy.c
+@@ -948,7 +948,7 @@ void phy_stop(struct phy_device *phydev)
+ {
+ struct net_device *dev = phydev->attached_dev;
+
+- if (!phy_is_started(phydev)) {
++ if (!phy_is_started(phydev) && phydev->state != PHY_DOWN) {
+ WARN(1, "called from state %s\n",
+ phy_state_to_str(phydev->state));
+ return;
+diff --git a/drivers/net/phy/phy_device.c b/drivers/net/phy/phy_device.c
+index 98369430a3be5..067910d242ab3 100644
+--- a/drivers/net/phy/phy_device.c
++++ b/drivers/net/phy/phy_device.c
+@@ -1092,10 +1092,6 @@ int phy_init_hw(struct phy_device *phydev)
+ if (ret < 0)
+ return ret;
+
+- ret = phy_disable_interrupts(phydev);
+- if (ret)
+- return ret;
+-
+ if (phydev->drv->config_init)
+ ret = phydev->drv->config_init(phydev);
+
+@@ -1372,6 +1368,10 @@ int phy_attach_direct(struct net_device *dev, struct phy_device *phydev,
+ if (err)
+ goto error;
+
++ err = phy_disable_interrupts(phydev);
++ if (err)
++ return err;
++
+ phy_resume(phydev);
+ phy_led_triggers_register(phydev);
+
+@@ -1631,7 +1631,8 @@ void phy_detach(struct phy_device *phydev)
+
+ phy_led_triggers_unregister(phydev);
+
+- module_put(phydev->mdio.dev.driver->owner);
++ if (phydev->mdio.dev.driver)
++ module_put(phydev->mdio.dev.driver->owner);
+
+ /* If the device had no specific driver before (i.e. - it
+ * was using the generic driver), we unbind the device
+diff --git a/drivers/net/wan/hdlc_ppp.c b/drivers/net/wan/hdlc_ppp.c
+index 48ced3912576c..16f33d1ffbfb9 100644
+--- a/drivers/net/wan/hdlc_ppp.c
++++ b/drivers/net/wan/hdlc_ppp.c
+@@ -383,11 +383,8 @@ static void ppp_cp_parse_cr(struct net_device *dev, u16 pid, u8 id,
+ }
+
+ for (opt = data; len; len -= opt[1], opt += opt[1]) {
+- if (len < 2 || len < opt[1]) {
+- dev->stats.rx_errors++;
+- kfree(out);
+- return; /* bad packet, drop silently */
+- }
++ if (len < 2 || opt[1] < 2 || len < opt[1])
++ goto err_out;
+
+ if (pid == PID_LCP)
+ switch (opt[0]) {
+@@ -395,6 +392,8 @@ static void ppp_cp_parse_cr(struct net_device *dev, u16 pid, u8 id,
+ continue; /* MRU always OK and > 1500 bytes? */
+
+ case LCP_OPTION_ACCM: /* async control character map */
++ if (opt[1] < sizeof(valid_accm))
++ goto err_out;
+ if (!memcmp(opt, valid_accm,
+ sizeof(valid_accm)))
+ continue;
+@@ -406,6 +405,8 @@ static void ppp_cp_parse_cr(struct net_device *dev, u16 pid, u8 id,
+ }
+ break;
+ case LCP_OPTION_MAGIC:
++ if (len < 6)
++ goto err_out;
+ if (opt[1] != 6 || (!opt[2] && !opt[3] &&
+ !opt[4] && !opt[5]))
+ break; /* reject invalid magic number */
+@@ -424,6 +425,11 @@ static void ppp_cp_parse_cr(struct net_device *dev, u16 pid, u8 id,
+ ppp_cp_event(dev, pid, RCR_GOOD, CP_CONF_ACK, id, req_len, data);
+
+ kfree(out);
++ return;
++
++err_out:
++ dev->stats.rx_errors++;
++ kfree(out);
+ }
+
+ static int ppp_rx(struct sk_buff *skb)
+diff --git a/drivers/net/wireguard/noise.c b/drivers/net/wireguard/noise.c
+index 201a22681945f..27cb5045bed2d 100644
+--- a/drivers/net/wireguard/noise.c
++++ b/drivers/net/wireguard/noise.c
+@@ -87,15 +87,12 @@ static void handshake_zero(struct noise_handshake *handshake)
+
+ void wg_noise_handshake_clear(struct noise_handshake *handshake)
+ {
++ down_write(&handshake->lock);
+ wg_index_hashtable_remove(
+ handshake->entry.peer->device->index_hashtable,
+ &handshake->entry);
+- down_write(&handshake->lock);
+ handshake_zero(handshake);
+ up_write(&handshake->lock);
+- wg_index_hashtable_remove(
+- handshake->entry.peer->device->index_hashtable,
+- &handshake->entry);
+ }
+
+ static struct noise_keypair *keypair_create(struct wg_peer *peer)
+diff --git a/drivers/net/wireguard/peerlookup.c b/drivers/net/wireguard/peerlookup.c
+index e4deb331476b3..f2783aa7a88f1 100644
+--- a/drivers/net/wireguard/peerlookup.c
++++ b/drivers/net/wireguard/peerlookup.c
+@@ -167,9 +167,13 @@ bool wg_index_hashtable_replace(struct index_hashtable *table,
+ struct index_hashtable_entry *old,
+ struct index_hashtable_entry *new)
+ {
+- if (unlikely(hlist_unhashed(&old->index_hash)))
+- return false;
++ bool ret;
++
+ spin_lock_bh(&table->lock);
++ ret = !hlist_unhashed(&old->index_hash);
++ if (unlikely(!ret))
++ goto out;
++
+ new->index = old->index;
+ hlist_replace_rcu(&old->index_hash, &new->index_hash);
+
+@@ -180,8 +184,9 @@ bool wg_index_hashtable_replace(struct index_hashtable *table,
+ * simply gets dropped, which isn't terrible.
+ */
+ INIT_HLIST_NODE(&old->index_hash);
++out:
+ spin_unlock_bh(&table->lock);
+- return true;
++ return ret;
+ }
+
+ void wg_index_hashtable_remove(struct index_hashtable *table,
+diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
+index 0c0377fc00c2a..1119463cf2425 100644
+--- a/include/linux/skbuff.h
++++ b/include/linux/skbuff.h
+@@ -3208,8 +3208,9 @@ static inline int skb_padto(struct sk_buff *skb, unsigned int len)
+ * is untouched. Otherwise it is extended. Returns zero on
+ * success. The skb is freed on error if @free_on_error is true.
+ */
+-static inline int __skb_put_padto(struct sk_buff *skb, unsigned int len,
+- bool free_on_error)
++static inline int __must_check __skb_put_padto(struct sk_buff *skb,
++ unsigned int len,
++ bool free_on_error)
+ {
+ unsigned int size = skb->len;
+
+@@ -3232,7 +3233,7 @@ static inline int __skb_put_padto(struct sk_buff *skb, unsigned int len,
+ * is untouched. Otherwise it is extended. Returns zero on
+ * success. The skb is freed on error.
+ */
+-static inline int skb_put_padto(struct sk_buff *skb, unsigned int len)
++static inline int __must_check skb_put_padto(struct sk_buff *skb, unsigned int len)
+ {
+ return __skb_put_padto(skb, len, true);
+ }
+diff --git a/include/net/flow.h b/include/net/flow.h
+index a50fb77a0b279..d058e63fb59a3 100644
+--- a/include/net/flow.h
++++ b/include/net/flow.h
+@@ -116,6 +116,7 @@ static inline void flowi4_init_output(struct flowi4 *fl4, int oif,
+ fl4->saddr = saddr;
+ fl4->fl4_dport = dport;
+ fl4->fl4_sport = sport;
++ fl4->flowi4_multipath_hash = 0;
+ }
+
+ /* Reset some input parameters after previous lookup */
+diff --git a/include/net/sctp/structs.h b/include/net/sctp/structs.h
+index fb42c90348d3b..f3c5d9d2f82d2 100644
+--- a/include/net/sctp/structs.h
++++ b/include/net/sctp/structs.h
+@@ -226,12 +226,14 @@ struct sctp_sock {
+ data_ready_signalled:1;
+
+ atomic_t pd_mode;
++
++ /* Fields after this point will be skipped on copies, like on accept
++ * and peeloff operations
++ */
++
+ /* Receive to here while partial delivery is in effect. */
+ struct sk_buff_head pd_lobby;
+
+- /* These must be the last fields, as they will skipped on copies,
+- * like on accept and peeloff operations
+- */
+ struct list_head auto_asconf_list;
+ int do_auto_asconf;
+ };
+diff --git a/net/bridge/br_vlan.c b/net/bridge/br_vlan.c
+index f9092c71225fd..61c94cefa8436 100644
+--- a/net/bridge/br_vlan.c
++++ b/net/bridge/br_vlan.c
+@@ -1288,11 +1288,13 @@ void br_vlan_get_stats(const struct net_bridge_vlan *v,
+ }
+ }
+
+-static int __br_vlan_get_pvid(const struct net_device *dev,
+- struct net_bridge_port *p, u16 *p_pvid)
++int br_vlan_get_pvid(const struct net_device *dev, u16 *p_pvid)
+ {
+ struct net_bridge_vlan_group *vg;
++ struct net_bridge_port *p;
+
++ ASSERT_RTNL();
++ p = br_port_get_check_rtnl(dev);
+ if (p)
+ vg = nbp_vlan_group(p);
+ else if (netif_is_bridge_master(dev))
+@@ -1303,18 +1305,23 @@ static int __br_vlan_get_pvid(const struct net_device *dev,
+ *p_pvid = br_get_pvid(vg);
+ return 0;
+ }
+-
+-int br_vlan_get_pvid(const struct net_device *dev, u16 *p_pvid)
+-{
+- ASSERT_RTNL();
+-
+- return __br_vlan_get_pvid(dev, br_port_get_check_rtnl(dev), p_pvid);
+-}
+ EXPORT_SYMBOL_GPL(br_vlan_get_pvid);
+
+ int br_vlan_get_pvid_rcu(const struct net_device *dev, u16 *p_pvid)
+ {
+- return __br_vlan_get_pvid(dev, br_port_get_check_rcu(dev), p_pvid);
++ struct net_bridge_vlan_group *vg;
++ struct net_bridge_port *p;
++
++ p = br_port_get_check_rcu(dev);
++ if (p)
++ vg = nbp_vlan_group_rcu(p);
++ else if (netif_is_bridge_master(dev))
++ vg = br_vlan_group_rcu(netdev_priv(dev));
++ else
++ return -EINVAL;
++
++ *p_pvid = br_get_pvid(vg);
++ return 0;
+ }
+ EXPORT_SYMBOL_GPL(br_vlan_get_pvid_rcu);
+
+diff --git a/net/core/dev.c b/net/core/dev.c
+index 5bd0b550893fb..181b13e02bdc0 100644
+--- a/net/core/dev.c
++++ b/net/core/dev.c
+@@ -8641,7 +8641,7 @@ int dev_get_port_parent_id(struct net_device *dev,
+ if (!first.id_len)
+ first = *ppid;
+ else if (memcmp(&first, ppid, sizeof(*ppid)))
+- return -ENODATA;
++ return -EOPNOTSUPP;
+ }
+
+ return err;
+diff --git a/net/core/filter.c b/net/core/filter.c
+index a69e79327c29e..d13ea1642b974 100644
+--- a/net/core/filter.c
++++ b/net/core/filter.c
+@@ -4774,6 +4774,7 @@ static int bpf_ipv4_fib_lookup(struct net *net, struct bpf_fib_lookup *params,
+ fl4.saddr = params->ipv4_src;
+ fl4.fl4_sport = params->sport;
+ fl4.fl4_dport = params->dport;
++ fl4.flowi4_multipath_hash = 0;
+
+ if (flags & BPF_FIB_LOOKUP_DIRECT) {
+ u32 tbid = l3mdev_fib_table_rcu(dev) ? : RT_TABLE_MAIN;
+diff --git a/net/core/net_namespace.c b/net/core/net_namespace.c
+index dcd61aca343ec..944ab214e5ae8 100644
+--- a/net/core/net_namespace.c
++++ b/net/core/net_namespace.c
+@@ -251,10 +251,10 @@ int peernet2id_alloc(struct net *net, struct net *peer, gfp_t gfp)
+ if (refcount_read(&net->count) == 0)
+ return NETNSA_NSID_NOT_ASSIGNED;
+
+- spin_lock(&net->nsid_lock);
++ spin_lock_bh(&net->nsid_lock);
+ id = __peernet2id(net, peer);
+ if (id >= 0) {
+- spin_unlock(&net->nsid_lock);
++ spin_unlock_bh(&net->nsid_lock);
+ return id;
+ }
+
+@@ -264,12 +264,12 @@ int peernet2id_alloc(struct net *net, struct net *peer, gfp_t gfp)
+ * just been idr_remove()'d from there in cleanup_net().
+ */
+ if (!maybe_get_net(peer)) {
+- spin_unlock(&net->nsid_lock);
++ spin_unlock_bh(&net->nsid_lock);
+ return NETNSA_NSID_NOT_ASSIGNED;
+ }
+
+ id = alloc_netid(net, peer, -1);
+- spin_unlock(&net->nsid_lock);
++ spin_unlock_bh(&net->nsid_lock);
+
+ put_net(peer);
+ if (id < 0)
+@@ -534,20 +534,20 @@ static void unhash_nsid(struct net *net, struct net *last)
+ for_each_net(tmp) {
+ int id;
+
+- spin_lock(&tmp->nsid_lock);
++ spin_lock_bh(&tmp->nsid_lock);
+ id = __peernet2id(tmp, net);
+ if (id >= 0)
+ idr_remove(&tmp->netns_ids, id);
+- spin_unlock(&tmp->nsid_lock);
++ spin_unlock_bh(&tmp->nsid_lock);
+ if (id >= 0)
+ rtnl_net_notifyid(tmp, RTM_DELNSID, id, 0, NULL,
+ GFP_KERNEL);
+ if (tmp == last)
+ break;
+ }
+- spin_lock(&net->nsid_lock);
++ spin_lock_bh(&net->nsid_lock);
+ idr_destroy(&net->netns_ids);
+- spin_unlock(&net->nsid_lock);
++ spin_unlock_bh(&net->nsid_lock);
+ }
+
+ static LLIST_HEAD(cleanup_list);
+@@ -760,9 +760,9 @@ static int rtnl_net_newid(struct sk_buff *skb, struct nlmsghdr *nlh,
+ return PTR_ERR(peer);
+ }
+
+- spin_lock(&net->nsid_lock);
++ spin_lock_bh(&net->nsid_lock);
+ if (__peernet2id(net, peer) >= 0) {
+- spin_unlock(&net->nsid_lock);
++ spin_unlock_bh(&net->nsid_lock);
+ err = -EEXIST;
+ NL_SET_BAD_ATTR(extack, nla);
+ NL_SET_ERR_MSG(extack,
+@@ -771,7 +771,7 @@ static int rtnl_net_newid(struct sk_buff *skb, struct nlmsghdr *nlh,
+ }
+
+ err = alloc_netid(net, peer, nsid);
+- spin_unlock(&net->nsid_lock);
++ spin_unlock_bh(&net->nsid_lock);
+ if (err >= 0) {
+ rtnl_net_notifyid(net, RTM_NEWNSID, err, NETLINK_CB(skb).portid,
+ nlh, GFP_KERNEL);
+diff --git a/net/dcb/dcbnl.c b/net/dcb/dcbnl.c
+index d2a4553bcf39d..0fd1c2aa13615 100644
+--- a/net/dcb/dcbnl.c
++++ b/net/dcb/dcbnl.c
+@@ -1426,6 +1426,7 @@ static int dcbnl_ieee_set(struct net_device *netdev, struct nlmsghdr *nlh,
+ {
+ const struct dcbnl_rtnl_ops *ops = netdev->dcbnl_ops;
+ struct nlattr *ieee[DCB_ATTR_IEEE_MAX + 1];
++ int prio;
+ int err;
+
+ if (!ops)
+@@ -1475,6 +1476,13 @@ static int dcbnl_ieee_set(struct net_device *netdev, struct nlmsghdr *nlh,
+ struct dcbnl_buffer *buffer =
+ nla_data(ieee[DCB_ATTR_DCB_BUFFER]);
+
++ for (prio = 0; prio < ARRAY_SIZE(buffer->prio2buffer); prio++) {
++ if (buffer->prio2buffer[prio] >= DCBX_MAX_BUFFERS) {
++ err = -EINVAL;
++ goto err;
++ }
++ }
++
+ err = ops->dcbnl_setbuffer(netdev, buffer);
+ if (err)
+ goto err;
+diff --git a/net/dsa/slave.c b/net/dsa/slave.c
+index 4c7f086a047b1..3f7be8c64c504 100644
+--- a/net/dsa/slave.c
++++ b/net/dsa/slave.c
+@@ -1801,15 +1801,27 @@ int dsa_slave_create(struct dsa_port *port)
+
+ dsa_slave_notify(slave_dev, DSA_PORT_REGISTER);
+
+- ret = register_netdev(slave_dev);
++ rtnl_lock();
++
++ ret = register_netdevice(slave_dev);
+ if (ret) {
+ netdev_err(master, "error %d registering interface %s\n",
+ ret, slave_dev->name);
++ rtnl_unlock();
+ goto out_phy;
+ }
+
++ ret = netdev_upper_dev_link(master, slave_dev, NULL);
++
++ rtnl_unlock();
++
++ if (ret)
++ goto out_unregister;
++
+ return 0;
+
++out_unregister:
++ unregister_netdev(slave_dev);
+ out_phy:
+ rtnl_lock();
+ phylink_disconnect_phy(p->dp->pl);
+@@ -1826,16 +1838,18 @@ out_free:
+
+ void dsa_slave_destroy(struct net_device *slave_dev)
+ {
++ struct net_device *master = dsa_slave_to_master(slave_dev);
+ struct dsa_port *dp = dsa_slave_to_port(slave_dev);
+ struct dsa_slave_priv *p = netdev_priv(slave_dev);
+
+ netif_carrier_off(slave_dev);
+ rtnl_lock();
++ netdev_upper_dev_unlink(master, slave_dev);
++ unregister_netdevice(slave_dev);
+ phylink_disconnect_phy(dp->pl);
+ rtnl_unlock();
+
+ dsa_slave_notify(slave_dev, DSA_PORT_UNREGISTER);
+- unregister_netdev(slave_dev);
+ phylink_destroy(dp->pl);
+ gro_cells_destroy(&p->gcells);
+ free_percpu(p->stats64);
+diff --git a/net/ipv4/fib_frontend.c b/net/ipv4/fib_frontend.c
+index 41079490a1181..86a23e4a6a50f 100644
+--- a/net/ipv4/fib_frontend.c
++++ b/net/ipv4/fib_frontend.c
+@@ -362,6 +362,7 @@ static int __fib_validate_source(struct sk_buff *skb, __be32 src, __be32 dst,
+ fl4.flowi4_tun_key.tun_id = 0;
+ fl4.flowi4_flags = 0;
+ fl4.flowi4_uid = sock_net_uid(net, NULL);
++ fl4.flowi4_multipath_hash = 0;
+
+ no_addr = idev->ifa_list == NULL;
+
+diff --git a/net/ipv4/ip_output.c b/net/ipv4/ip_output.c
+index 17206677d5033..f09a188397165 100644
+--- a/net/ipv4/ip_output.c
++++ b/net/ipv4/ip_output.c
+@@ -74,6 +74,7 @@
+ #include <net/icmp.h>
+ #include <net/checksum.h>
+ #include <net/inetpeer.h>
++#include <net/inet_ecn.h>
+ #include <net/lwtunnel.h>
+ #include <linux/bpf-cgroup.h>
+ #include <linux/igmp.h>
+@@ -1697,7 +1698,7 @@ void ip_send_unicast_reply(struct sock *sk, struct sk_buff *skb,
+ if (IS_ERR(rt))
+ return;
+
+- inet_sk(sk)->tos = arg->tos;
++ inet_sk(sk)->tos = arg->tos & ~INET_ECN_MASK;
+
+ sk->sk_protocol = ip_hdr(skb)->protocol;
+ sk->sk_bound_dev_if = arg->bound_dev_if;
+diff --git a/net/ipv4/route.c b/net/ipv4/route.c
+index a01efa062f6bc..37f1288894747 100644
+--- a/net/ipv4/route.c
++++ b/net/ipv4/route.c
+@@ -786,8 +786,10 @@ static void __ip_do_redirect(struct rtable *rt, struct sk_buff *skb, struct flow
+ neigh_event_send(n, NULL);
+ } else {
+ if (fib_lookup(net, fl4, &res, 0) == 0) {
+- struct fib_nh_common *nhc = FIB_RES_NHC(res);
++ struct fib_nh_common *nhc;
+
++ fib_select_path(net, &res, fl4, skb);
++ nhc = FIB_RES_NHC(res);
+ update_or_create_fnhe(nhc, fl4->daddr, new_gw,
+ 0, false,
+ jiffies + ip_rt_gc_timeout);
+@@ -1013,6 +1015,7 @@ out: kfree_skb(skb);
+ static void __ip_rt_update_pmtu(struct rtable *rt, struct flowi4 *fl4, u32 mtu)
+ {
+ struct dst_entry *dst = &rt->dst;
++ struct net *net = dev_net(dst->dev);
+ u32 old_mtu = ipv4_mtu(dst);
+ struct fib_result res;
+ bool lock = false;
+@@ -1033,9 +1036,11 @@ static void __ip_rt_update_pmtu(struct rtable *rt, struct flowi4 *fl4, u32 mtu)
+ return;
+
+ rcu_read_lock();
+- if (fib_lookup(dev_net(dst->dev), fl4, &res, 0) == 0) {
+- struct fib_nh_common *nhc = FIB_RES_NHC(res);
++ if (fib_lookup(net, fl4, &res, 0) == 0) {
++ struct fib_nh_common *nhc;
+
++ fib_select_path(net, &res, fl4, NULL);
++ nhc = FIB_RES_NHC(res);
+ update_or_create_fnhe(nhc, fl4->daddr, 0, mtu, lock,
+ jiffies + ip_rt_mtu_expires);
+ }
+@@ -2142,6 +2147,7 @@ static int ip_route_input_slow(struct sk_buff *skb, __be32 daddr, __be32 saddr,
+ fl4.daddr = daddr;
+ fl4.saddr = saddr;
+ fl4.flowi4_uid = sock_net_uid(net, NULL);
++ fl4.flowi4_multipath_hash = 0;
+
+ if (fib4_rules_early_flow_dissect(net, skb, &fl4, &_flkeys)) {
+ flkeys = &_flkeys;
+@@ -2662,8 +2668,6 @@ struct rtable *ip_route_output_key_hash_rcu(struct net *net, struct flowi4 *fl4,
+ fib_select_path(net, res, fl4, skb);
+
+ dev_out = FIB_RES_DEV(*res);
+- fl4->flowi4_oif = dev_out->ifindex;
+-
+
+ make_route:
+ rth = __mkroute_output(res, fl4, orig_oif, dev_out, flags);
+diff --git a/net/ipv6/Kconfig b/net/ipv6/Kconfig
+index f4f19e89af5ed..9d66af9e4c7fe 100644
+--- a/net/ipv6/Kconfig
++++ b/net/ipv6/Kconfig
+@@ -303,6 +303,7 @@ config IPV6_SEG6_LWTUNNEL
+ config IPV6_SEG6_HMAC
+ bool "IPv6: Segment Routing HMAC support"
+ depends on IPV6
++ select CRYPTO
+ select CRYPTO_HMAC
+ select CRYPTO_SHA1
+ select CRYPTO_SHA256
+diff --git a/net/ipv6/ip6_fib.c b/net/ipv6/ip6_fib.c
+index 49ee89bbcba0c..3c32dcb5fd8e2 100644
+--- a/net/ipv6/ip6_fib.c
++++ b/net/ipv6/ip6_fib.c
+@@ -1992,14 +1992,19 @@ static void fib6_del_route(struct fib6_table *table, struct fib6_node *fn,
+ /* Need to own table->tb6_lock */
+ int fib6_del(struct fib6_info *rt, struct nl_info *info)
+ {
+- struct fib6_node *fn = rcu_dereference_protected(rt->fib6_node,
+- lockdep_is_held(&rt->fib6_table->tb6_lock));
+- struct fib6_table *table = rt->fib6_table;
+ struct net *net = info->nl_net;
+ struct fib6_info __rcu **rtp;
+ struct fib6_info __rcu **rtp_next;
++ struct fib6_table *table;
++ struct fib6_node *fn;
++
++ if (rt == net->ipv6.fib6_null_entry)
++ return -ENOENT;
+
+- if (!fn || rt == net->ipv6.fib6_null_entry)
++ table = rt->fib6_table;
++ fn = rcu_dereference_protected(rt->fib6_node,
++ lockdep_is_held(&table->tb6_lock));
++ if (!fn)
+ return -ENOENT;
+
+ WARN_ON(!(fn->fn_flags & RTN_RTINFO));
+diff --git a/net/qrtr/qrtr.c b/net/qrtr/qrtr.c
+index 85ab4559f0577..0f77e24a5152e 100644
+--- a/net/qrtr/qrtr.c
++++ b/net/qrtr/qrtr.c
+@@ -332,8 +332,7 @@ static int qrtr_node_enqueue(struct qrtr_node *node, struct sk_buff *skb,
+ {
+ struct qrtr_hdr_v1 *hdr;
+ size_t len = skb->len;
+- int rc = -ENODEV;
+- int confirm_rx;
++ int rc, confirm_rx;
+
+ confirm_rx = qrtr_tx_wait(node, to->sq_node, to->sq_port, type);
+ if (confirm_rx < 0) {
+@@ -357,15 +356,17 @@ static int qrtr_node_enqueue(struct qrtr_node *node, struct sk_buff *skb,
+ hdr->size = cpu_to_le32(len);
+ hdr->confirm_rx = !!confirm_rx;
+
+- skb_put_padto(skb, ALIGN(len, 4) + sizeof(*hdr));
+-
+- mutex_lock(&node->ep_lock);
+- if (node->ep)
+- rc = node->ep->xmit(node->ep, skb);
+- else
+- kfree_skb(skb);
+- mutex_unlock(&node->ep_lock);
++ rc = skb_put_padto(skb, ALIGN(len, 4) + sizeof(*hdr));
+
++ if (!rc) {
++ mutex_lock(&node->ep_lock);
++ rc = -ENODEV;
++ if (node->ep)
++ rc = node->ep->xmit(node->ep, skb);
++ else
++ kfree_skb(skb);
++ mutex_unlock(&node->ep_lock);
++ }
+ /* Need to ensure that a subsequent message carries the otherwise lost
+ * confirm_rx flag if we dropped this one */
+ if (rc && confirm_rx)
+diff --git a/net/sched/act_ife.c b/net/sched/act_ife.c
+index c1fcd85719d6a..5c568757643b2 100644
+--- a/net/sched/act_ife.c
++++ b/net/sched/act_ife.c
+@@ -436,6 +436,25 @@ static void tcf_ife_cleanup(struct tc_action *a)
+ kfree_rcu(p, rcu);
+ }
+
++static int load_metalist(struct nlattr **tb, bool rtnl_held)
++{
++ int i;
++
++ for (i = 1; i < max_metacnt; i++) {
++ if (tb[i]) {
++ void *val = nla_data(tb[i]);
++ int len = nla_len(tb[i]);
++ int rc;
++
++ rc = load_metaops_and_vet(i, val, len, rtnl_held);
++ if (rc != 0)
++ return rc;
++ }
++ }
++
++ return 0;
++}
++
+ static int populate_metalist(struct tcf_ife_info *ife, struct nlattr **tb,
+ bool exists, bool rtnl_held)
+ {
+@@ -449,10 +468,6 @@ static int populate_metalist(struct tcf_ife_info *ife, struct nlattr **tb,
+ val = nla_data(tb[i]);
+ len = nla_len(tb[i]);
+
+- rc = load_metaops_and_vet(i, val, len, rtnl_held);
+- if (rc != 0)
+- return rc;
+-
+ rc = add_metainfo(ife, i, val, len, exists);
+ if (rc)
+ return rc;
+@@ -509,6 +524,21 @@ static int tcf_ife_init(struct net *net, struct nlattr *nla,
+ if (!p)
+ return -ENOMEM;
+
++ if (tb[TCA_IFE_METALST]) {
++ err = nla_parse_nested_deprecated(tb2, IFE_META_MAX,
++ tb[TCA_IFE_METALST], NULL,
++ NULL);
++ if (err) {
++ kfree(p);
++ return err;
++ }
++ err = load_metalist(tb2, rtnl_held);
++ if (err) {
++ kfree(p);
++ return err;
++ }
++ }
++
+ index = parm->index;
+ err = tcf_idr_check_alloc(tn, &index, a, bind);
+ if (err < 0) {
+@@ -570,15 +600,9 @@ static int tcf_ife_init(struct net *net, struct nlattr *nla,
+ }
+
+ if (tb[TCA_IFE_METALST]) {
+- err = nla_parse_nested_deprecated(tb2, IFE_META_MAX,
+- tb[TCA_IFE_METALST], NULL,
+- NULL);
+- if (err)
+- goto metadata_parse_err;
+ err = populate_metalist(ife, tb2, exists, rtnl_held);
+ if (err)
+ goto metadata_parse_err;
+-
+ } else {
+ /* if no passed metadata allow list or passed allow-all
+ * then here we process by adding as many supported metadatum
+diff --git a/net/sched/cls_flower.c b/net/sched/cls_flower.c
+index e30bd969fc485..5fe145d97f52e 100644
+--- a/net/sched/cls_flower.c
++++ b/net/sched/cls_flower.c
+@@ -1215,6 +1215,7 @@ static int fl_set_erspan_opt(const struct nlattr *nla, struct fl_flow_key *key,
+ }
+ if (tb[TCA_FLOWER_KEY_ENC_OPT_ERSPAN_INDEX]) {
+ nla = tb[TCA_FLOWER_KEY_ENC_OPT_ERSPAN_INDEX];
++ memset(&md->u, 0x00, sizeof(md->u));
+ md->u.index = nla_get_be32(nla);
+ }
+ } else if (md->version == 2) {
+diff --git a/net/sched/sch_generic.c b/net/sched/sch_generic.c
+index 265a61d011dfa..54c417244642a 100644
+--- a/net/sched/sch_generic.c
++++ b/net/sched/sch_generic.c
+@@ -1131,24 +1131,10 @@ EXPORT_SYMBOL(dev_activate);
+
+ static void qdisc_deactivate(struct Qdisc *qdisc)
+ {
+- bool nolock = qdisc->flags & TCQ_F_NOLOCK;
+-
+ if (qdisc->flags & TCQ_F_BUILTIN)
+ return;
+- if (test_bit(__QDISC_STATE_DEACTIVATED, &qdisc->state))
+- return;
+-
+- if (nolock)
+- spin_lock_bh(&qdisc->seqlock);
+- spin_lock_bh(qdisc_lock(qdisc));
+
+ set_bit(__QDISC_STATE_DEACTIVATED, &qdisc->state);
+-
+- qdisc_reset(qdisc);
+-
+- spin_unlock_bh(qdisc_lock(qdisc));
+- if (nolock)
+- spin_unlock_bh(&qdisc->seqlock);
+ }
+
+ static void dev_deactivate_queue(struct net_device *dev,
+@@ -1165,6 +1151,30 @@ static void dev_deactivate_queue(struct net_device *dev,
+ }
+ }
+
++static void dev_reset_queue(struct net_device *dev,
++ struct netdev_queue *dev_queue,
++ void *_unused)
++{
++ struct Qdisc *qdisc;
++ bool nolock;
++
++ qdisc = dev_queue->qdisc_sleeping;
++ if (!qdisc)
++ return;
++
++ nolock = qdisc->flags & TCQ_F_NOLOCK;
++
++ if (nolock)
++ spin_lock_bh(&qdisc->seqlock);
++ spin_lock_bh(qdisc_lock(qdisc));
++
++ qdisc_reset(qdisc);
++
++ spin_unlock_bh(qdisc_lock(qdisc));
++ if (nolock)
++ spin_unlock_bh(&qdisc->seqlock);
++}
++
+ static bool some_qdisc_is_busy(struct net_device *dev)
+ {
+ unsigned int i;
+@@ -1213,12 +1223,20 @@ void dev_deactivate_many(struct list_head *head)
+ dev_watchdog_down(dev);
+ }
+
+- /* Wait for outstanding qdisc-less dev_queue_xmit calls.
++ /* Wait for outstanding qdisc-less dev_queue_xmit calls or
++ * outstanding qdisc enqueuing calls.
+ * This is avoided if all devices are in dismantle phase :
+ * Caller will call synchronize_net() for us
+ */
+ synchronize_net();
+
++ list_for_each_entry(dev, head, close_list) {
++ netdev_for_each_tx_queue(dev, dev_reset_queue, NULL);
++
++ if (dev_ingress_queue(dev))
++ dev_reset_queue(dev, dev_ingress_queue(dev), NULL);
++ }
++
+ /* Wait for outstanding qdisc_run calls. */
+ list_for_each_entry(dev, head, close_list) {
+ while (some_qdisc_is_busy(dev)) {
+diff --git a/net/sched/sch_taprio.c b/net/sched/sch_taprio.c
+index 6a5086e586efb..2b797a71e9bda 100644
+--- a/net/sched/sch_taprio.c
++++ b/net/sched/sch_taprio.c
+@@ -777,9 +777,11 @@ static const struct nla_policy taprio_policy[TCA_TAPRIO_ATTR_MAX + 1] = {
+ [TCA_TAPRIO_ATTR_TXTIME_DELAY] = { .type = NLA_U32 },
+ };
+
+-static int fill_sched_entry(struct nlattr **tb, struct sched_entry *entry,
++static int fill_sched_entry(struct taprio_sched *q, struct nlattr **tb,
++ struct sched_entry *entry,
+ struct netlink_ext_ack *extack)
+ {
++ int min_duration = length_to_duration(q, ETH_ZLEN);
+ u32 interval = 0;
+
+ if (tb[TCA_TAPRIO_SCHED_ENTRY_CMD])
+@@ -794,7 +796,10 @@ static int fill_sched_entry(struct nlattr **tb, struct sched_entry *entry,
+ interval = nla_get_u32(
+ tb[TCA_TAPRIO_SCHED_ENTRY_INTERVAL]);
+
+- if (interval == 0) {
++ /* The interval should allow at least the minimum ethernet
++ * frame to go out.
++ */
++ if (interval < min_duration) {
+ NL_SET_ERR_MSG(extack, "Invalid interval for schedule entry");
+ return -EINVAL;
+ }
+@@ -804,8 +809,9 @@ static int fill_sched_entry(struct nlattr **tb, struct sched_entry *entry,
+ return 0;
+ }
+
+-static int parse_sched_entry(struct nlattr *n, struct sched_entry *entry,
+- int index, struct netlink_ext_ack *extack)
++static int parse_sched_entry(struct taprio_sched *q, struct nlattr *n,
++ struct sched_entry *entry, int index,
++ struct netlink_ext_ack *extack)
+ {
+ struct nlattr *tb[TCA_TAPRIO_SCHED_ENTRY_MAX + 1] = { };
+ int err;
+@@ -819,10 +825,10 @@ static int parse_sched_entry(struct nlattr *n, struct sched_entry *entry,
+
+ entry->index = index;
+
+- return fill_sched_entry(tb, entry, extack);
++ return fill_sched_entry(q, tb, entry, extack);
+ }
+
+-static int parse_sched_list(struct nlattr *list,
++static int parse_sched_list(struct taprio_sched *q, struct nlattr *list,
+ struct sched_gate_list *sched,
+ struct netlink_ext_ack *extack)
+ {
+@@ -847,7 +853,7 @@ static int parse_sched_list(struct nlattr *list,
+ return -ENOMEM;
+ }
+
+- err = parse_sched_entry(n, entry, i, extack);
++ err = parse_sched_entry(q, n, entry, i, extack);
+ if (err < 0) {
+ kfree(entry);
+ return err;
+@@ -862,7 +868,7 @@ static int parse_sched_list(struct nlattr *list,
+ return i;
+ }
+
+-static int parse_taprio_schedule(struct nlattr **tb,
++static int parse_taprio_schedule(struct taprio_sched *q, struct nlattr **tb,
+ struct sched_gate_list *new,
+ struct netlink_ext_ack *extack)
+ {
+@@ -883,8 +889,8 @@ static int parse_taprio_schedule(struct nlattr **tb,
+ new->cycle_time = nla_get_s64(tb[TCA_TAPRIO_ATTR_SCHED_CYCLE_TIME]);
+
+ if (tb[TCA_TAPRIO_ATTR_SCHED_ENTRY_LIST])
+- err = parse_sched_list(
+- tb[TCA_TAPRIO_ATTR_SCHED_ENTRY_LIST], new, extack);
++ err = parse_sched_list(q, tb[TCA_TAPRIO_ATTR_SCHED_ENTRY_LIST],
++ new, extack);
+ if (err < 0)
+ return err;
+
+@@ -1474,7 +1480,7 @@ static int taprio_change(struct Qdisc *sch, struct nlattr *opt,
+ goto free_sched;
+ }
+
+- err = parse_taprio_schedule(tb, new_admin, extack);
++ err = parse_taprio_schedule(q, tb, new_admin, extack);
+ if (err < 0)
+ goto free_sched;
+
+diff --git a/net/sctp/socket.c b/net/sctp/socket.c
+index fa20e945700e0..102aee4f7dfde 100644
+--- a/net/sctp/socket.c
++++ b/net/sctp/socket.c
+@@ -9457,13 +9457,10 @@ void sctp_copy_sock(struct sock *newsk, struct sock *sk,
+ static inline void sctp_copy_descendant(struct sock *sk_to,
+ const struct sock *sk_from)
+ {
+- int ancestor_size = sizeof(struct inet_sock) +
+- sizeof(struct sctp_sock) -
+- offsetof(struct sctp_sock, pd_lobby);
+-
+- if (sk_from->sk_family == PF_INET6)
+- ancestor_size += sizeof(struct ipv6_pinfo);
++ size_t ancestor_size = sizeof(struct inet_sock);
+
++ ancestor_size += sk_from->sk_prot->obj_size;
++ ancestor_size -= offsetof(struct sctp_sock, pd_lobby);
+ __inet_sk_copy_descendant(sk_to, sk_from, ancestor_size);
+ }
+
+diff --git a/net/tipc/group.c b/net/tipc/group.c
+index 89257e2a980de..f53871baa42eb 100644
+--- a/net/tipc/group.c
++++ b/net/tipc/group.c
+@@ -273,8 +273,8 @@ static struct tipc_member *tipc_group_find_node(struct tipc_group *grp,
+ return NULL;
+ }
+
+-static void tipc_group_add_to_tree(struct tipc_group *grp,
+- struct tipc_member *m)
++static int tipc_group_add_to_tree(struct tipc_group *grp,
++ struct tipc_member *m)
+ {
+ u64 nkey, key = (u64)m->node << 32 | m->port;
+ struct rb_node **n, *parent = NULL;
+@@ -291,10 +291,11 @@ static void tipc_group_add_to_tree(struct tipc_group *grp,
+ else if (key > nkey)
+ n = &(*n)->rb_right;
+ else
+- return;
++ return -EEXIST;
+ }
+ rb_link_node(&m->tree_node, parent, n);
+ rb_insert_color(&m->tree_node, &grp->members);
++ return 0;
+ }
+
+ static struct tipc_member *tipc_group_create_member(struct tipc_group *grp,
+@@ -302,6 +303,7 @@ static struct tipc_member *tipc_group_create_member(struct tipc_group *grp,
+ u32 instance, int state)
+ {
+ struct tipc_member *m;
++ int ret;
+
+ m = kzalloc(sizeof(*m), GFP_ATOMIC);
+ if (!m)
+@@ -314,8 +316,12 @@ static struct tipc_member *tipc_group_create_member(struct tipc_group *grp,
+ m->port = port;
+ m->instance = instance;
+ m->bc_acked = grp->bc_snd_nxt - 1;
++ ret = tipc_group_add_to_tree(grp, m);
++ if (ret < 0) {
++ kfree(m);
++ return NULL;
++ }
+ grp->member_cnt++;
+- tipc_group_add_to_tree(grp, m);
+ tipc_nlist_add(&grp->dests, m->node);
+ m->state = state;
+ return m;
+diff --git a/net/tipc/msg.c b/net/tipc/msg.c
+index 01b64869a1739..2776a41e0dece 100644
+--- a/net/tipc/msg.c
++++ b/net/tipc/msg.c
+@@ -150,7 +150,8 @@ int tipc_buf_append(struct sk_buff **headbuf, struct sk_buff **buf)
+ if (fragid == FIRST_FRAGMENT) {
+ if (unlikely(head))
+ goto err;
+- if (unlikely(skb_unclone(frag, GFP_ATOMIC)))
++ frag = skb_unshare(frag, GFP_ATOMIC);
++ if (unlikely(!frag))
+ goto err;
+ head = *headbuf = frag;
+ *buf = NULL;
+diff --git a/net/tipc/socket.c b/net/tipc/socket.c
+index 79cc84393f932..59c9e592b0a25 100644
+--- a/net/tipc/socket.c
++++ b/net/tipc/socket.c
+@@ -2773,10 +2773,7 @@ static int tipc_shutdown(struct socket *sock, int how)
+
+ trace_tipc_sk_shutdown(sk, NULL, TIPC_DUMP_ALL, " ");
+ __tipc_shutdown(sock, TIPC_CONN_SHUTDOWN);
+- if (tipc_sk_type_connectionless(sk))
+- sk->sk_shutdown = SHUTDOWN_MASK;
+- else
+- sk->sk_shutdown = SEND_SHUTDOWN;
++ sk->sk_shutdown = SHUTDOWN_MASK;
+
+ if (sk->sk_state == TIPC_DISCONNECTING) {
+ /* Discard any unreceived messages */
* [gentoo-commits] proj/linux-patches:5.8 commit in: /
@ 2020-10-01 19:00 Mike Pagano
0 siblings, 0 replies; 30+ messages in thread
From: Mike Pagano @ 2020-10-01 19:00 UTC (permalink / raw
To: gentoo-commits
commit: 763eb4b84c25bff950fbac603a2248bb551b4f23
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Oct 1 19:00:19 2020 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Oct 1 19:00:19 2020 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=763eb4b8
Linux patch 5.8.13
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1012_linux-5.8.13.patch | 3615 +++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 3619 insertions(+)
diff --git a/0000_README b/0000_README
index 51cee27..0944db1 100644
--- a/0000_README
+++ b/0000_README
@@ -91,6 +91,10 @@ Patch: 1011_linux-5.8.12.patch
From: http://www.kernel.org
Desc: Linux 5.8.12
+Patch: 1012_linux-5.8.13.patch
+From: http://www.kernel.org
+Desc: Linux 5.8.13
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1012_linux-5.8.13.patch b/1012_linux-5.8.13.patch
new file mode 100644
index 0000000..10424ba
--- /dev/null
+++ b/1012_linux-5.8.13.patch
@@ -0,0 +1,3615 @@
+diff --git a/Makefile b/Makefile
+index d0d40c628dc34..0d81d8cba48b6 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 8
+-SUBLEVEL = 12
++SUBLEVEL = 13
+ EXTRAVERSION =
+ NAME = Kleptomaniac Octopus
+
+diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
+index 4d0f8ea600ba4..e1254e55835bb 100644
+--- a/arch/arm64/include/asm/kvm_emulate.h
++++ b/arch/arm64/include/asm/kvm_emulate.h
+@@ -319,7 +319,7 @@ static __always_inline int kvm_vcpu_dabt_get_rd(const struct kvm_vcpu *vcpu)
+ return (kvm_vcpu_get_hsr(vcpu) & ESR_ELx_SRT_MASK) >> ESR_ELx_SRT_SHIFT;
+ }
+
+-static __always_inline bool kvm_vcpu_dabt_iss1tw(const struct kvm_vcpu *vcpu)
++static __always_inline bool kvm_vcpu_abt_iss1tw(const struct kvm_vcpu *vcpu)
+ {
+ return !!(kvm_vcpu_get_hsr(vcpu) & ESR_ELx_S1PTW);
+ }
+@@ -327,7 +327,7 @@ static __always_inline bool kvm_vcpu_dabt_iss1tw(const struct kvm_vcpu *vcpu)
+ static __always_inline bool kvm_vcpu_dabt_iswrite(const struct kvm_vcpu *vcpu)
+ {
+ return !!(kvm_vcpu_get_hsr(vcpu) & ESR_ELx_WNR) ||
+- kvm_vcpu_dabt_iss1tw(vcpu); /* AF/DBM update */
++ kvm_vcpu_abt_iss1tw(vcpu); /* AF/DBM update */
+ }
+
+ static inline bool kvm_vcpu_dabt_is_cm(const struct kvm_vcpu *vcpu)
+@@ -356,6 +356,11 @@ static inline bool kvm_vcpu_trap_is_iabt(const struct kvm_vcpu *vcpu)
+ return kvm_vcpu_trap_get_class(vcpu) == ESR_ELx_EC_IABT_LOW;
+ }
+
++static inline bool kvm_vcpu_trap_is_exec_fault(const struct kvm_vcpu *vcpu)
++{
++ return kvm_vcpu_trap_is_iabt(vcpu) && !kvm_vcpu_abt_iss1tw(vcpu);
++}
++
+ static __always_inline u8 kvm_vcpu_trap_get_fault(const struct kvm_vcpu *vcpu)
+ {
+ return kvm_vcpu_get_hsr(vcpu) & ESR_ELx_FSC;
+@@ -393,6 +398,9 @@ static __always_inline int kvm_vcpu_sys_get_rt(struct kvm_vcpu *vcpu)
+
+ static inline bool kvm_is_write_fault(struct kvm_vcpu *vcpu)
+ {
++ if (kvm_vcpu_abt_iss1tw(vcpu))
++ return true;
++
+ if (kvm_vcpu_trap_is_iabt(vcpu))
+ return false;
+
+diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
+index ba225e09aaf15..8564742948d31 100644
+--- a/arch/arm64/kvm/hyp/switch.c
++++ b/arch/arm64/kvm/hyp/switch.c
+@@ -599,7 +599,7 @@ static bool __hyp_text fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code)
+ kvm_vcpu_trap_get_fault_type(vcpu) == FSC_FAULT &&
+ kvm_vcpu_dabt_isvalid(vcpu) &&
+ !kvm_vcpu_dabt_isextabt(vcpu) &&
+- !kvm_vcpu_dabt_iss1tw(vcpu);
++ !kvm_vcpu_abt_iss1tw(vcpu);
+
+ if (valid) {
+ int ret = __vgic_v2_perform_cpuif_access(vcpu);
+diff --git a/arch/arm64/kvm/mmio.c b/arch/arm64/kvm/mmio.c
+index 4e0366759726d..07e9b6eab59e4 100644
+--- a/arch/arm64/kvm/mmio.c
++++ b/arch/arm64/kvm/mmio.c
+@@ -146,7 +146,7 @@ int io_mem_abort(struct kvm_vcpu *vcpu, struct kvm_run *run,
+ }
+
+ /* Page table accesses IO mem: tell guest to fix its TTBR */
+- if (kvm_vcpu_dabt_iss1tw(vcpu)) {
++ if (kvm_vcpu_abt_iss1tw(vcpu)) {
+ kvm_inject_dabt(vcpu, kvm_vcpu_get_hfar(vcpu));
+ return 1;
+ }
+diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
+index d906350d543dd..1677107b74de2 100644
+--- a/arch/arm64/kvm/mmu.c
++++ b/arch/arm64/kvm/mmu.c
+@@ -1845,7 +1845,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
+ unsigned long vma_pagesize, flags = 0;
+
+ write_fault = kvm_is_write_fault(vcpu);
+- exec_fault = kvm_vcpu_trap_is_iabt(vcpu);
++ exec_fault = kvm_vcpu_trap_is_exec_fault(vcpu);
+ VM_BUG_ON(write_fault && exec_fault);
+
+ if (fault_status == FSC_PERM && !write_fault && !exec_fault) {
+diff --git a/arch/ia64/mm/init.c b/arch/ia64/mm/init.c
+index 0b3fb4c7af292..8e7b8c6c576ee 100644
+--- a/arch/ia64/mm/init.c
++++ b/arch/ia64/mm/init.c
+@@ -538,7 +538,7 @@ virtual_memmap_init(u64 start, u64 end, void *arg)
+ if (map_start < map_end)
+ memmap_init_zone((unsigned long)(map_end - map_start),
+ args->nid, args->zone, page_to_pfn(map_start),
+- MEMMAP_EARLY, NULL);
++ MEMINIT_EARLY, NULL);
+ return 0;
+ }
+
+@@ -547,8 +547,8 @@ memmap_init (unsigned long size, int nid, unsigned long zone,
+ unsigned long start_pfn)
+ {
+ if (!vmem_map) {
+- memmap_init_zone(size, nid, zone, start_pfn, MEMMAP_EARLY,
+- NULL);
++ memmap_init_zone(size, nid, zone, start_pfn,
++ MEMINIT_EARLY, NULL);
+ } else {
+ struct page *start;
+ struct memmap_init_callback_data args;
+diff --git a/arch/mips/include/asm/cpu-type.h b/arch/mips/include/asm/cpu-type.h
+index 75a7a382da099..3288cef4b168c 100644
+--- a/arch/mips/include/asm/cpu-type.h
++++ b/arch/mips/include/asm/cpu-type.h
+@@ -47,6 +47,7 @@ static inline int __pure __get_cpu_type(const int cpu_type)
+ case CPU_34K:
+ case CPU_1004K:
+ case CPU_74K:
++ case CPU_1074K:
+ case CPU_M14KC:
+ case CPU_M14KEC:
+ case CPU_INTERAPTIV:
+diff --git a/arch/mips/loongson2ef/Platform b/arch/mips/loongson2ef/Platform
+index cdad3c1a9a18f..7db0935bda3d1 100644
+--- a/arch/mips/loongson2ef/Platform
++++ b/arch/mips/loongson2ef/Platform
+@@ -22,6 +22,10 @@ ifdef CONFIG_CPU_LOONGSON2F_WORKAROUNDS
+ endif
+ endif
+
++# Some -march= flags enable MMI instructions, and GCC complains about that
++# support being enabled alongside -msoft-float. Thus explicitly disable MMI.
++cflags-y += $(call cc-option,-mno-loongson-mmi)
++
+ #
+ # Loongson Machines' Support
+ #
+diff --git a/arch/mips/loongson64/cop2-ex.c b/arch/mips/loongson64/cop2-ex.c
+index f130f62129b86..00055d4b6042f 100644
+--- a/arch/mips/loongson64/cop2-ex.c
++++ b/arch/mips/loongson64/cop2-ex.c
+@@ -95,10 +95,8 @@ static int loongson_cu2_call(struct notifier_block *nfb, unsigned long action,
+ if (res)
+ goto fault;
+
+- set_fpr64(current->thread.fpu.fpr,
+- insn.loongson3_lswc2_format.rt, value);
+- set_fpr64(current->thread.fpu.fpr,
+- insn.loongson3_lswc2_format.rq, value_next);
++ set_fpr64(&current->thread.fpu.fpr[insn.loongson3_lswc2_format.rt], 0, value);
++ set_fpr64(&current->thread.fpu.fpr[insn.loongson3_lswc2_format.rq], 0, value_next);
+ compute_return_epc(regs);
+ own_fpu(1);
+ }
+@@ -130,15 +128,13 @@ static int loongson_cu2_call(struct notifier_block *nfb, unsigned long action,
+ goto sigbus;
+
+ lose_fpu(1);
+- value_next = get_fpr64(current->thread.fpu.fpr,
+- insn.loongson3_lswc2_format.rq);
++ value_next = get_fpr64(&current->thread.fpu.fpr[insn.loongson3_lswc2_format.rq], 0);
+
+ StoreDW(addr + 8, value_next, res);
+ if (res)
+ goto fault;
+
+- value = get_fpr64(current->thread.fpu.fpr,
+- insn.loongson3_lswc2_format.rt);
++ value = get_fpr64(&current->thread.fpu.fpr[insn.loongson3_lswc2_format.rt], 0);
+
+ StoreDW(addr, value, res);
+ if (res)
+@@ -204,8 +200,7 @@ static int loongson_cu2_call(struct notifier_block *nfb, unsigned long action,
+ if (res)
+ goto fault;
+
+- set_fpr64(current->thread.fpu.fpr,
+- insn.loongson3_lsdc2_format.rt, value);
++ set_fpr64(&current->thread.fpu.fpr[insn.loongson3_lsdc2_format.rt], 0, value);
+ compute_return_epc(regs);
+ own_fpu(1);
+
+@@ -221,8 +216,7 @@ static int loongson_cu2_call(struct notifier_block *nfb, unsigned long action,
+ if (res)
+ goto fault;
+
+- set_fpr64(current->thread.fpu.fpr,
+- insn.loongson3_lsdc2_format.rt, value);
++ set_fpr64(&current->thread.fpu.fpr[insn.loongson3_lsdc2_format.rt], 0, value);
+ compute_return_epc(regs);
+ own_fpu(1);
+ break;
+@@ -286,8 +280,7 @@ static int loongson_cu2_call(struct notifier_block *nfb, unsigned long action,
+ goto sigbus;
+
+ lose_fpu(1);
+- value = get_fpr64(current->thread.fpu.fpr,
+- insn.loongson3_lsdc2_format.rt);
++ value = get_fpr64(&current->thread.fpu.fpr[insn.loongson3_lsdc2_format.rt], 0);
+
+ StoreW(addr, value, res);
+ if (res)
+@@ -305,8 +298,7 @@ static int loongson_cu2_call(struct notifier_block *nfb, unsigned long action,
+ goto sigbus;
+
+ lose_fpu(1);
+- value = get_fpr64(current->thread.fpu.fpr,
+- insn.loongson3_lsdc2_format.rt);
++ value = get_fpr64(&current->thread.fpu.fpr[insn.loongson3_lsdc2_format.rt], 0);
+
+ StoreDW(addr, value, res);
+ if (res)
+diff --git a/arch/riscv/boot/dts/kendryte/k210.dtsi b/arch/riscv/boot/dts/kendryte/k210.dtsi
+index c1df56ccb8d55..d2d0ff6456325 100644
+--- a/arch/riscv/boot/dts/kendryte/k210.dtsi
++++ b/arch/riscv/boot/dts/kendryte/k210.dtsi
+@@ -95,10 +95,12 @@
+ #clock-cells = <1>;
+ };
+
+- clint0: interrupt-controller@2000000 {
++ clint0: clint@2000000 {
++ #interrupt-cells = <1>;
+ compatible = "riscv,clint0";
+ reg = <0x2000000 0xC000>;
+- interrupts-extended = <&cpu0_intc 3>, <&cpu1_intc 3>;
++ interrupts-extended = <&cpu0_intc 3 &cpu0_intc 7
++ &cpu1_intc 3 &cpu1_intc 7>;
+ clocks = <&sysctl K210_CLK_ACLK>;
+ };
+
+diff --git a/arch/riscv/include/asm/ftrace.h b/arch/riscv/include/asm/ftrace.h
+index ace8a6e2d11d3..845002cc2e571 100644
+--- a/arch/riscv/include/asm/ftrace.h
++++ b/arch/riscv/include/asm/ftrace.h
+@@ -66,6 +66,13 @@ do { \
+ * Let auipc+jalr be the basic *mcount unit*, so we make it 8 bytes here.
+ */
+ #define MCOUNT_INSN_SIZE 8
++
++#ifndef __ASSEMBLY__
++struct dyn_ftrace;
++int ftrace_init_nop(struct module *mod, struct dyn_ftrace *rec);
++#define ftrace_init_nop ftrace_init_nop
++#endif
++
+ #endif
+
+ #endif /* _ASM_RISCV_FTRACE_H */
+diff --git a/arch/riscv/kernel/ftrace.c b/arch/riscv/kernel/ftrace.c
+index 2ff63d0cbb500..99e12faa54986 100644
+--- a/arch/riscv/kernel/ftrace.c
++++ b/arch/riscv/kernel/ftrace.c
+@@ -97,6 +97,25 @@ int ftrace_make_nop(struct module *mod, struct dyn_ftrace *rec,
+ return __ftrace_modify_call(rec->ip, addr, false);
+ }
+
++
++/*
++ * This is called early on, and isn't wrapped by
++ * ftrace_arch_code_modify_{prepare,post_process}() and therefor doesn't hold
++ * text_mutex, which triggers a lockdep failure. SMP isn't running so we could
++ * just directly poke the text, but it's simpler to just take the lock
++ * ourselves.
++ */
++int ftrace_init_nop(struct module *mod, struct dyn_ftrace *rec)
++{
++ int out;
++
++ ftrace_arch_code_modify_prepare();
++ out = ftrace_make_nop(mod, rec, MCOUNT_ADDR);
++ ftrace_arch_code_modify_post_process();
++
++ return out;
++}
++
+ int ftrace_update_ftrace_func(ftrace_func_t func)
+ {
+ int ret = __ftrace_modify_call((unsigned long)&ftrace_call,
+diff --git a/arch/s390/include/asm/pgtable.h b/arch/s390/include/asm/pgtable.h
+index 19d603bd1f36e..a60ab538747c8 100644
+--- a/arch/s390/include/asm/pgtable.h
++++ b/arch/s390/include/asm/pgtable.h
+@@ -1260,26 +1260,44 @@ static inline pgd_t *pgd_offset_raw(pgd_t *pgd, unsigned long address)
+
+ #define pgd_offset(mm, address) pgd_offset_raw(READ_ONCE((mm)->pgd), address)
+
+-static inline p4d_t *p4d_offset(pgd_t *pgd, unsigned long address)
++static inline p4d_t *p4d_offset_lockless(pgd_t *pgdp, pgd_t pgd, unsigned long address)
+ {
+- if ((pgd_val(*pgd) & _REGION_ENTRY_TYPE_MASK) >= _REGION_ENTRY_TYPE_R1)
+- return (p4d_t *) pgd_deref(*pgd) + p4d_index(address);
+- return (p4d_t *) pgd;
++ if ((pgd_val(pgd) & _REGION_ENTRY_TYPE_MASK) >= _REGION_ENTRY_TYPE_R1)
++ return (p4d_t *) pgd_deref(pgd) + p4d_index(address);
++ return (p4d_t *) pgdp;
+ }
++#define p4d_offset_lockless p4d_offset_lockless
+
+-static inline pud_t *pud_offset(p4d_t *p4d, unsigned long address)
++static inline p4d_t *p4d_offset(pgd_t *pgdp, unsigned long address)
+ {
+- if ((p4d_val(*p4d) & _REGION_ENTRY_TYPE_MASK) >= _REGION_ENTRY_TYPE_R2)
+- return (pud_t *) p4d_deref(*p4d) + pud_index(address);
+- return (pud_t *) p4d;
++ return p4d_offset_lockless(pgdp, *pgdp, address);
++}
++
++static inline pud_t *pud_offset_lockless(p4d_t *p4dp, p4d_t p4d, unsigned long address)
++{
++ if ((p4d_val(p4d) & _REGION_ENTRY_TYPE_MASK) >= _REGION_ENTRY_TYPE_R2)
++ return (pud_t *) p4d_deref(p4d) + pud_index(address);
++ return (pud_t *) p4dp;
++}
++#define pud_offset_lockless pud_offset_lockless
++
++static inline pud_t *pud_offset(p4d_t *p4dp, unsigned long address)
++{
++ return pud_offset_lockless(p4dp, *p4dp, address);
+ }
+ #define pud_offset pud_offset
+
+-static inline pmd_t *pmd_offset(pud_t *pud, unsigned long address)
++static inline pmd_t *pmd_offset_lockless(pud_t *pudp, pud_t pud, unsigned long address)
++{
++ if ((pud_val(pud) & _REGION_ENTRY_TYPE_MASK) >= _REGION_ENTRY_TYPE_R3)
++ return (pmd_t *) pud_deref(pud) + pmd_index(address);
++ return (pmd_t *) pudp;
++}
++#define pmd_offset_lockless pmd_offset_lockless
++
++static inline pmd_t *pmd_offset(pud_t *pudp, unsigned long address)
+ {
+- if ((pud_val(*pud) & _REGION_ENTRY_TYPE_MASK) >= _REGION_ENTRY_TYPE_R3)
+- return (pmd_t *) pud_deref(*pud) + pmd_index(address);
+- return (pmd_t *) pud;
++ return pmd_offset_lockless(pudp, *pudp, address);
+ }
+ #define pmd_offset pmd_offset
+
+diff --git a/arch/s390/kernel/setup.c b/arch/s390/kernel/setup.c
+index 07aa15ba43b3e..faf30f37c6361 100644
+--- a/arch/s390/kernel/setup.c
++++ b/arch/s390/kernel/setup.c
+@@ -619,7 +619,7 @@ static struct notifier_block kdump_mem_nb = {
+ /*
+ * Make sure that the area behind memory_end is protected
+ */
+-static void reserve_memory_end(void)
++static void __init reserve_memory_end(void)
+ {
+ if (memory_end_set)
+ memblock_reserve(memory_end, ULONG_MAX);
+@@ -628,7 +628,7 @@ static void reserve_memory_end(void)
+ /*
+ * Make sure that oldmem, where the dump is stored, is protected
+ */
+-static void reserve_oldmem(void)
++static void __init reserve_oldmem(void)
+ {
+ #ifdef CONFIG_CRASH_DUMP
+ if (OLDMEM_BASE)
+@@ -640,7 +640,7 @@ static void reserve_oldmem(void)
+ /*
+ * Make sure that oldmem, where the dump is stored, is protected
+ */
+-static void remove_oldmem(void)
++static void __init remove_oldmem(void)
+ {
+ #ifdef CONFIG_CRASH_DUMP
+ if (OLDMEM_BASE)
+diff --git a/arch/x86/entry/common.c b/arch/x86/entry/common.c
+index 606c4e25ee934..e290164df5ada 100644
+--- a/arch/x86/entry/common.c
++++ b/arch/x86/entry/common.c
+@@ -814,7 +814,7 @@ __visible noinstr void xen_pv_evtchn_do_upcall(struct pt_regs *regs)
+ old_regs = set_irq_regs(regs);
+
+ instrumentation_begin();
+- run_on_irqstack_cond(__xen_pv_evtchn_do_upcall, NULL, regs);
++ run_on_irqstack_cond(__xen_pv_evtchn_do_upcall, regs);
+ instrumentation_begin();
+
+ set_irq_regs(old_regs);
+diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
+index d2a00c97e53f6..20f62398477e5 100644
+--- a/arch/x86/entry/entry_64.S
++++ b/arch/x86/entry/entry_64.S
+@@ -687,6 +687,8 @@ SYM_CODE_END(.Lbad_gs)
+ * rdx: Function argument (can be NULL if none)
+ */
+ SYM_FUNC_START(asm_call_on_stack)
++SYM_INNER_LABEL(asm_call_sysvec_on_stack, SYM_L_GLOBAL)
++SYM_INNER_LABEL(asm_call_irq_on_stack, SYM_L_GLOBAL)
+ /*
+ * Save the frame pointer unconditionally. This allows the ORC
+ * unwinder to handle the stack switch.
+diff --git a/arch/x86/include/asm/idtentry.h b/arch/x86/include/asm/idtentry.h
+index 80d3b30d3ee3e..4abe2e5b3fa76 100644
+--- a/arch/x86/include/asm/idtentry.h
++++ b/arch/x86/include/asm/idtentry.h
+@@ -246,7 +246,7 @@ __visible noinstr void func(struct pt_regs *regs) \
+ instrumentation_begin(); \
+ irq_enter_rcu(); \
+ kvm_set_cpu_l1tf_flush_l1d(); \
+- run_on_irqstack_cond(__##func, regs, regs); \
++ run_sysvec_on_irqstack_cond(__##func, regs); \
+ irq_exit_rcu(); \
+ instrumentation_end(); \
+ idtentry_exit_cond_rcu(regs, rcu_exit); \
+diff --git a/arch/x86/include/asm/irq_stack.h b/arch/x86/include/asm/irq_stack.h
+index 4ae66f097101d..d95616c7e7d40 100644
+--- a/arch/x86/include/asm/irq_stack.h
++++ b/arch/x86/include/asm/irq_stack.h
+@@ -3,6 +3,7 @@
+ #define _ASM_X86_IRQ_STACK_H
+
+ #include <linux/ptrace.h>
++#include <linux/irq.h>
+
+ #include <asm/processor.h>
+
+@@ -12,20 +13,50 @@ static __always_inline bool irqstack_active(void)
+ return __this_cpu_read(irq_count) != -1;
+ }
+
+-void asm_call_on_stack(void *sp, void *func, void *arg);
++void asm_call_on_stack(void *sp, void (*func)(void), void *arg);
++void asm_call_sysvec_on_stack(void *sp, void (*func)(struct pt_regs *regs),
++ struct pt_regs *regs);
++void asm_call_irq_on_stack(void *sp, void (*func)(struct irq_desc *desc),
++ struct irq_desc *desc);
+
+-static __always_inline void __run_on_irqstack(void *func, void *arg)
++static __always_inline void __run_on_irqstack(void (*func)(void))
+ {
+ void *tos = __this_cpu_read(hardirq_stack_ptr);
+
+ __this_cpu_add(irq_count, 1);
+- asm_call_on_stack(tos - 8, func, arg);
++ asm_call_on_stack(tos - 8, func, NULL);
++ __this_cpu_sub(irq_count, 1);
++}
++
++static __always_inline void
++__run_sysvec_on_irqstack(void (*func)(struct pt_regs *regs),
++ struct pt_regs *regs)
++{
++ void *tos = __this_cpu_read(hardirq_stack_ptr);
++
++ __this_cpu_add(irq_count, 1);
++ asm_call_sysvec_on_stack(tos - 8, func, regs);
++ __this_cpu_sub(irq_count, 1);
++}
++
++static __always_inline void
++__run_irq_on_irqstack(void (*func)(struct irq_desc *desc),
++ struct irq_desc *desc)
++{
++ void *tos = __this_cpu_read(hardirq_stack_ptr);
++
++ __this_cpu_add(irq_count, 1);
++ asm_call_irq_on_stack(tos - 8, func, desc);
+ __this_cpu_sub(irq_count, 1);
+ }
+
+ #else /* CONFIG_X86_64 */
+ static inline bool irqstack_active(void) { return false; }
+-static inline void __run_on_irqstack(void *func, void *arg) { }
++static inline void __run_on_irqstack(void (*func)(void)) { }
++static inline void __run_sysvec_on_irqstack(void (*func)(struct pt_regs *regs),
++ struct pt_regs *regs) { }
++static inline void __run_irq_on_irqstack(void (*func)(struct irq_desc *desc),
++ struct irq_desc *desc) { }
+ #endif /* !CONFIG_X86_64 */
+
+ static __always_inline bool irq_needs_irq_stack(struct pt_regs *regs)
+@@ -37,17 +68,40 @@ static __always_inline bool irq_needs_irq_stack(struct pt_regs *regs)
+ return !user_mode(regs) && !irqstack_active();
+ }
+
+-static __always_inline void run_on_irqstack_cond(void *func, void *arg,
++
++static __always_inline void run_on_irqstack_cond(void (*func)(void),
+ struct pt_regs *regs)
+ {
+- void (*__func)(void *arg) = func;
++ lockdep_assert_irqs_disabled();
++
++ if (irq_needs_irq_stack(regs))
++ __run_on_irqstack(func);
++ else
++ func();
++}
++
++static __always_inline void
++run_sysvec_on_irqstack_cond(void (*func)(struct pt_regs *regs),
++ struct pt_regs *regs)
++{
++ lockdep_assert_irqs_disabled();
+
++ if (irq_needs_irq_stack(regs))
++ __run_sysvec_on_irqstack(func, regs);
++ else
++ func(regs);
++}
++
++static __always_inline void
++run_irq_on_irqstack_cond(void (*func)(struct irq_desc *desc), struct irq_desc *desc,
++ struct pt_regs *regs)
++{
+ lockdep_assert_irqs_disabled();
+
+ if (irq_needs_irq_stack(regs))
+- __run_on_irqstack(__func, arg);
++ __run_irq_on_irqstack(func, desc);
+ else
+- __func(arg);
++ func(desc);
+ }
+
+ #endif
+diff --git a/arch/x86/kernel/apic/io_apic.c b/arch/x86/kernel/apic/io_apic.c
+index 21325a4a78b92..ad4e841b4a00d 100644
+--- a/arch/x86/kernel/apic/io_apic.c
++++ b/arch/x86/kernel/apic/io_apic.c
+@@ -2243,6 +2243,7 @@ static inline void __init check_timer(void)
+ legacy_pic->init(0);
+ legacy_pic->make_irq(0);
+ apic_write(APIC_LVT0, APIC_DM_EXTINT);
++ legacy_pic->unmask(0);
+
+ unlock_ExtINT_logic();
+
+diff --git a/arch/x86/kernel/irq.c b/arch/x86/kernel/irq.c
+index 181060247e3cb..c5dd50369e2f3 100644
+--- a/arch/x86/kernel/irq.c
++++ b/arch/x86/kernel/irq.c
+@@ -227,7 +227,7 @@ static __always_inline void handle_irq(struct irq_desc *desc,
+ struct pt_regs *regs)
+ {
+ if (IS_ENABLED(CONFIG_X86_64))
+- run_on_irqstack_cond(desc->handle_irq, desc, regs);
++ run_irq_on_irqstack_cond(desc->handle_irq, desc, regs);
+ else
+ __handle_irq(desc, regs);
+ }
+diff --git a/arch/x86/kernel/irq_64.c b/arch/x86/kernel/irq_64.c
+index 1b4fe93a86c5c..440eed558558d 100644
+--- a/arch/x86/kernel/irq_64.c
++++ b/arch/x86/kernel/irq_64.c
+@@ -74,5 +74,5 @@ int irq_init_percpu_irqstack(unsigned int cpu)
+
+ void do_softirq_own_stack(void)
+ {
+- run_on_irqstack_cond(__do_softirq, NULL, NULL);
++ run_on_irqstack_cond(__do_softirq, NULL);
+ }
+diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
+index f8ead44c3265e..10aba4b6df6ed 100644
+--- a/arch/x86/kvm/svm/svm.c
++++ b/arch/x86/kvm/svm/svm.c
+@@ -2169,6 +2169,12 @@ static int iret_interception(struct vcpu_svm *svm)
+ return 1;
+ }
+
++static int invd_interception(struct vcpu_svm *svm)
++{
++ /* Treat an INVD instruction as a NOP and just skip it. */
++ return kvm_skip_emulated_instruction(&svm->vcpu);
++}
++
+ static int invlpg_interception(struct vcpu_svm *svm)
+ {
+ if (!static_cpu_has(X86_FEATURE_DECODEASSISTS))
+@@ -2758,7 +2764,7 @@ static int (*const svm_exit_handlers[])(struct vcpu_svm *svm) = {
+ [SVM_EXIT_RDPMC] = rdpmc_interception,
+ [SVM_EXIT_CPUID] = cpuid_interception,
+ [SVM_EXIT_IRET] = iret_interception,
+- [SVM_EXIT_INVD] = emulate_on_interception,
++ [SVM_EXIT_INVD] = invd_interception,
+ [SVM_EXIT_PAUSE] = pause_interception,
+ [SVM_EXIT_HLT] = halt_interception,
+ [SVM_EXIT_INVLPG] = invlpg_interception,
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index f5481ae588aff..a04f8abd0ead9 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -968,6 +968,7 @@ int kvm_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
+ unsigned long old_cr4 = kvm_read_cr4(vcpu);
+ unsigned long pdptr_bits = X86_CR4_PGE | X86_CR4_PSE | X86_CR4_PAE |
+ X86_CR4_SMEP;
++ unsigned long mmu_role_bits = pdptr_bits | X86_CR4_SMAP | X86_CR4_PKE;
+
+ if (kvm_valid_cr4(vcpu, cr4))
+ return 1;
+@@ -995,7 +996,7 @@ int kvm_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
+ if (kvm_x86_ops.set_cr4(vcpu, cr4))
+ return 1;
+
+- if (((cr4 ^ old_cr4) & pdptr_bits) ||
++ if (((cr4 ^ old_cr4) & mmu_role_bits) ||
+ (!(cr4 & X86_CR4_PCIDE) && (old_cr4 & X86_CR4_PCIDE)))
+ kvm_mmu_reset_context(vcpu);
+
+diff --git a/arch/x86/lib/usercopy_64.c b/arch/x86/lib/usercopy_64.c
+index b0dfac3d3df71..1847e993ac63a 100644
+--- a/arch/x86/lib/usercopy_64.c
++++ b/arch/x86/lib/usercopy_64.c
+@@ -120,7 +120,7 @@ long __copy_user_flushcache(void *dst, const void __user *src, unsigned size)
+ */
+ if (size < 8) {
+ if (!IS_ALIGNED(dest, 4) || size != 4)
+- clean_cache_range(dst, 1);
++ clean_cache_range(dst, size);
+ } else {
+ if (!IS_ALIGNED(dest, 8)) {
+ dest = ALIGN(dest, boot_cpu_data.x86_clflush_size);
+diff --git a/drivers/atm/eni.c b/drivers/atm/eni.c
+index 17d47ad03ab79..de50fb0541a20 100644
+--- a/drivers/atm/eni.c
++++ b/drivers/atm/eni.c
+@@ -2239,7 +2239,7 @@ static int eni_init_one(struct pci_dev *pci_dev,
+
+ rc = dma_set_mask_and_coherent(&pci_dev->dev, DMA_BIT_MASK(32));
+ if (rc < 0)
+- goto out;
++ goto err_disable;
+
+ rc = -ENOMEM;
+ eni_dev = kmalloc(sizeof(struct eni_dev), GFP_KERNEL);
+diff --git a/drivers/base/node.c b/drivers/base/node.c
+index 5b02f69769e86..11ffb50fa875b 100644
+--- a/drivers/base/node.c
++++ b/drivers/base/node.c
+@@ -761,14 +761,36 @@ static int __ref get_nid_for_pfn(unsigned long pfn)
+ return pfn_to_nid(pfn);
+ }
+
++static int do_register_memory_block_under_node(int nid,
++ struct memory_block *mem_blk)
++{
++ int ret;
++
++ /*
++ * If this memory block spans multiple nodes, we only indicate
++ * the last processed node.
++ */
++ mem_blk->nid = nid;
++
++ ret = sysfs_create_link_nowarn(&node_devices[nid]->dev.kobj,
++ &mem_blk->dev.kobj,
++ kobject_name(&mem_blk->dev.kobj));
++ if (ret)
++ return ret;
++
++ return sysfs_create_link_nowarn(&mem_blk->dev.kobj,
++ &node_devices[nid]->dev.kobj,
++ kobject_name(&node_devices[nid]->dev.kobj));
++}
++
+ /* register memory section under specified node if it spans that node */
+-static int register_mem_sect_under_node(struct memory_block *mem_blk,
+- void *arg)
++static int register_mem_block_under_node_early(struct memory_block *mem_blk,
++ void *arg)
+ {
+ unsigned long memory_block_pfns = memory_block_size_bytes() / PAGE_SIZE;
+ unsigned long start_pfn = section_nr_to_pfn(mem_blk->start_section_nr);
+ unsigned long end_pfn = start_pfn + memory_block_pfns - 1;
+- int ret, nid = *(int *)arg;
++ int nid = *(int *)arg;
+ unsigned long pfn;
+
+ for (pfn = start_pfn; pfn <= end_pfn; pfn++) {
+@@ -785,38 +807,33 @@ static int register_mem_sect_under_node(struct memory_block *mem_blk,
+ }
+
+ /*
+- * We need to check if page belongs to nid only for the boot
+- * case, during hotplug we know that all pages in the memory
+- * block belong to the same node.
+- */
+- if (system_state == SYSTEM_BOOTING) {
+- page_nid = get_nid_for_pfn(pfn);
+- if (page_nid < 0)
+- continue;
+- if (page_nid != nid)
+- continue;
+- }
+-
+- /*
+- * If this memory block spans multiple nodes, we only indicate
+- * the last processed node.
++ * We need to check if page belongs to nid only at the boot
++ * case because node's ranges can be interleaved.
+ */
+- mem_blk->nid = nid;
+-
+- ret = sysfs_create_link_nowarn(&node_devices[nid]->dev.kobj,
+- &mem_blk->dev.kobj,
+- kobject_name(&mem_blk->dev.kobj));
+- if (ret)
+- return ret;
++ page_nid = get_nid_for_pfn(pfn);
++ if (page_nid < 0)
++ continue;
++ if (page_nid != nid)
++ continue;
+
+- return sysfs_create_link_nowarn(&mem_blk->dev.kobj,
+- &node_devices[nid]->dev.kobj,
+- kobject_name(&node_devices[nid]->dev.kobj));
++ return do_register_memory_block_under_node(nid, mem_blk);
+ }
+ /* mem section does not span the specified node */
+ return 0;
+ }
+
++/*
++ * During hotplug we know that all pages in the memory block belong to the same
++ * node.
++ */
++static int register_mem_block_under_node_hotplug(struct memory_block *mem_blk,
++ void *arg)
++{
++ int nid = *(int *)arg;
++
++ return do_register_memory_block_under_node(nid, mem_blk);
++}
++
+ /*
+ * Unregister a memory block device under the node it spans. Memory blocks
+ * with multiple nodes cannot be offlined and therefore also never be removed.
+@@ -832,11 +849,19 @@ void unregister_memory_block_under_nodes(struct memory_block *mem_blk)
+ kobject_name(&node_devices[mem_blk->nid]->dev.kobj));
+ }
+
+-int link_mem_sections(int nid, unsigned long start_pfn, unsigned long end_pfn)
++int link_mem_sections(int nid, unsigned long start_pfn, unsigned long end_pfn,
++ enum meminit_context context)
+ {
++ walk_memory_blocks_func_t func;
++
++ if (context == MEMINIT_HOTPLUG)
++ func = register_mem_block_under_node_hotplug;
++ else
++ func = register_mem_block_under_node_early;
++
+ return walk_memory_blocks(PFN_PHYS(start_pfn),
+ PFN_PHYS(end_pfn - start_pfn), (void *)&nid,
+- register_mem_sect_under_node);
++ func);
+ }
+
+ #ifdef CONFIG_HUGETLBFS
+diff --git a/drivers/base/regmap/internal.h b/drivers/base/regmap/internal.h
+index 3d80c4b43f720..d7c01b70e43db 100644
+--- a/drivers/base/regmap/internal.h
++++ b/drivers/base/regmap/internal.h
+@@ -259,7 +259,7 @@ bool regcache_set_val(struct regmap *map, void *base, unsigned int idx,
+ int regcache_lookup_reg(struct regmap *map, unsigned int reg);
+
+ int _regmap_raw_write(struct regmap *map, unsigned int reg,
+- const void *val, size_t val_len);
++ const void *val, size_t val_len, bool noinc);
+
+ void regmap_async_complete_cb(struct regmap_async *async, int ret);
+
+diff --git a/drivers/base/regmap/regcache.c b/drivers/base/regmap/regcache.c
+index a93cafd7be4f2..7f4b3b62492ca 100644
+--- a/drivers/base/regmap/regcache.c
++++ b/drivers/base/regmap/regcache.c
+@@ -717,7 +717,7 @@ static int regcache_sync_block_raw_flush(struct regmap *map, const void **data,
+
+ map->cache_bypass = true;
+
+- ret = _regmap_raw_write(map, base, *data, count * val_bytes);
++ ret = _regmap_raw_write(map, base, *data, count * val_bytes, false);
+ if (ret)
+ dev_err(map->dev, "Unable to sync registers %#x-%#x. %d\n",
+ base, cur - map->reg_stride, ret);
+diff --git a/drivers/base/regmap/regmap.c b/drivers/base/regmap/regmap.c
+index 795a62a040220..9751304c5c158 100644
+--- a/drivers/base/regmap/regmap.c
++++ b/drivers/base/regmap/regmap.c
+@@ -1469,7 +1469,7 @@ static void regmap_set_work_buf_flag_mask(struct regmap *map, int max_bytes,
+ }
+
+ static int _regmap_raw_write_impl(struct regmap *map, unsigned int reg,
+- const void *val, size_t val_len)
++ const void *val, size_t val_len, bool noinc)
+ {
+ struct regmap_range_node *range;
+ unsigned long flags;
+@@ -1528,7 +1528,7 @@ static int _regmap_raw_write_impl(struct regmap *map, unsigned int reg,
+ win_residue, val_len / map->format.val_bytes);
+ ret = _regmap_raw_write_impl(map, reg, val,
+ win_residue *
+- map->format.val_bytes);
++ map->format.val_bytes, noinc);
+ if (ret != 0)
+ return ret;
+
+@@ -1542,7 +1542,7 @@ static int _regmap_raw_write_impl(struct regmap *map, unsigned int reg,
+ win_residue = range->window_len - win_offset;
+ }
+
+-	ret = _regmap_select_page(map, &reg, range, val_num);
++	ret = _regmap_select_page(map, &reg, range, noinc ? 1 : val_num);
+ if (ret != 0)
+ return ret;
+ }
+@@ -1750,7 +1750,8 @@ static int _regmap_bus_raw_write(void *context, unsigned int reg,
+ map->work_buf +
+ map->format.reg_bytes +
+ map->format.pad_bytes,
+- map->format.val_bytes);
++ map->format.val_bytes,
++ false);
+ }
+
+ static inline void *_regmap_map_get_context(struct regmap *map)
+@@ -1844,7 +1845,7 @@ int regmap_write_async(struct regmap *map, unsigned int reg, unsigned int val)
+ EXPORT_SYMBOL_GPL(regmap_write_async);
+
+ int _regmap_raw_write(struct regmap *map, unsigned int reg,
+- const void *val, size_t val_len)
++ const void *val, size_t val_len, bool noinc)
+ {
+ size_t val_bytes = map->format.val_bytes;
+ size_t val_count = val_len / val_bytes;
+@@ -1865,7 +1866,7 @@ int _regmap_raw_write(struct regmap *map, unsigned int reg,
+
+ /* Write as many bytes as possible with chunk_size */
+ for (i = 0; i < chunk_count; i++) {
+- ret = _regmap_raw_write_impl(map, reg, val, chunk_bytes);
++ ret = _regmap_raw_write_impl(map, reg, val, chunk_bytes, noinc);
+ if (ret)
+ return ret;
+
+@@ -1876,7 +1877,7 @@ int _regmap_raw_write(struct regmap *map, unsigned int reg,
+
+ /* Write remaining bytes */
+ if (val_len)
+- ret = _regmap_raw_write_impl(map, reg, val, val_len);
++ ret = _regmap_raw_write_impl(map, reg, val, val_len, noinc);
+
+ return ret;
+ }
+@@ -1909,7 +1910,7 @@ int regmap_raw_write(struct regmap *map, unsigned int reg,
+
+ map->lock(map->lock_arg);
+
+- ret = _regmap_raw_write(map, reg, val, val_len);
++ ret = _regmap_raw_write(map, reg, val, val_len, false);
+
+ map->unlock(map->lock_arg);
+
+@@ -1967,7 +1968,7 @@ int regmap_noinc_write(struct regmap *map, unsigned int reg,
+ write_len = map->max_raw_write;
+ else
+ write_len = val_len;
+- ret = _regmap_raw_write(map, reg, val, write_len);
++ ret = _regmap_raw_write(map, reg, val, write_len, true);
+ if (ret)
+ goto out_unlock;
+ val = ((u8 *)val) + write_len;
+@@ -2444,7 +2445,7 @@ int regmap_raw_write_async(struct regmap *map, unsigned int reg,
+
+ map->async = true;
+
+- ret = _regmap_raw_write(map, reg, val, val_len);
++ ret = _regmap_raw_write(map, reg, val, val_len, false);
+
+ map->async = false;
+
+@@ -2455,7 +2456,7 @@ int regmap_raw_write_async(struct regmap *map, unsigned int reg,
+ EXPORT_SYMBOL_GPL(regmap_raw_write_async);
+
+ static int _regmap_raw_read(struct regmap *map, unsigned int reg, void *val,
+- unsigned int val_len)
++ unsigned int val_len, bool noinc)
+ {
+ struct regmap_range_node *range;
+ int ret;
+@@ -2468,7 +2469,7 @@ static int _regmap_raw_read(struct regmap *map, unsigned int reg, void *val,
+ range = _regmap_range_lookup(map, reg);
+ if (range) {
+		ret = _regmap_select_page(map, &reg, range,
+- val_len / map->format.val_bytes);
++ noinc ? 1 : val_len / map->format.val_bytes);
+ if (ret != 0)
+ return ret;
+ }
+@@ -2506,7 +2507,7 @@ static int _regmap_bus_read(void *context, unsigned int reg,
+ if (!map->format.parse_val)
+ return -EINVAL;
+
+- ret = _regmap_raw_read(map, reg, work_val, map->format.val_bytes);
++ ret = _regmap_raw_read(map, reg, work_val, map->format.val_bytes, false);
+ if (ret == 0)
+ *val = map->format.parse_val(work_val);
+
+@@ -2622,7 +2623,7 @@ int regmap_raw_read(struct regmap *map, unsigned int reg, void *val,
+
+ /* Read bytes that fit into whole chunks */
+ for (i = 0; i < chunk_count; i++) {
+- ret = _regmap_raw_read(map, reg, val, chunk_bytes);
++ ret = _regmap_raw_read(map, reg, val, chunk_bytes, false);
+ if (ret != 0)
+ goto out;
+
+@@ -2633,7 +2634,7 @@ int regmap_raw_read(struct regmap *map, unsigned int reg, void *val,
+
+ /* Read remaining bytes */
+ if (val_len) {
+- ret = _regmap_raw_read(map, reg, val, val_len);
++ ret = _regmap_raw_read(map, reg, val, val_len, false);
+ if (ret != 0)
+ goto out;
+ }
+@@ -2708,7 +2709,7 @@ int regmap_noinc_read(struct regmap *map, unsigned int reg,
+ read_len = map->max_raw_read;
+ else
+ read_len = val_len;
+- ret = _regmap_raw_read(map, reg, val, read_len);
++ ret = _regmap_raw_read(map, reg, val, read_len, true);
+ if (ret)
+ goto out_unlock;
+ val = ((u8 *)val) + read_len;
+diff --git a/drivers/clk/versatile/clk-impd1.c b/drivers/clk/versatile/clk-impd1.c
+index ca798249544d0..85c395df9c008 100644
+--- a/drivers/clk/versatile/clk-impd1.c
++++ b/drivers/clk/versatile/clk-impd1.c
+@@ -109,8 +109,10 @@ static int integrator_impd1_clk_probe(struct platform_device *pdev)
+
+ for_each_available_child_of_node(np, child) {
+ ret = integrator_impd1_clk_spawn(dev, np, child);
+- if (ret)
++ if (ret) {
++ of_node_put(child);
+ break;
++ }
+ }
+
+ return ret;
+diff --git a/drivers/clocksource/h8300_timer8.c b/drivers/clocksource/h8300_timer8.c
+index 1d740a8c42ab3..47114c2a7cb54 100644
+--- a/drivers/clocksource/h8300_timer8.c
++++ b/drivers/clocksource/h8300_timer8.c
+@@ -169,7 +169,7 @@ static int __init h8300_8timer_init(struct device_node *node)
+ return PTR_ERR(clk);
+ }
+
+- ret = ENXIO;
++ ret = -ENXIO;
+ base = of_iomap(node, 0);
+ if (!base) {
+ pr_err("failed to map registers for clockevent\n");
+diff --git a/drivers/clocksource/timer-ti-dm-systimer.c b/drivers/clocksource/timer-ti-dm-systimer.c
+index f6fd1c1cc527f..33b3e8aa2cc50 100644
+--- a/drivers/clocksource/timer-ti-dm-systimer.c
++++ b/drivers/clocksource/timer-ti-dm-systimer.c
+@@ -69,12 +69,33 @@ static bool dmtimer_systimer_revision1(struct dmtimer_systimer *t)
+ return !(tidr >> 16);
+ }
+
++static void dmtimer_systimer_enable(struct dmtimer_systimer *t)
++{
++ u32 val;
++
++ if (dmtimer_systimer_revision1(t))
++ val = DMTIMER_TYPE1_ENABLE;
++ else
++ val = DMTIMER_TYPE2_ENABLE;
++
++ writel_relaxed(val, t->base + t->sysc);
++}
++
++static void dmtimer_systimer_disable(struct dmtimer_systimer *t)
++{
++ if (!dmtimer_systimer_revision1(t))
++ return;
++
++ writel_relaxed(DMTIMER_TYPE1_DISABLE, t->base + t->sysc);
++}
++
+ static int __init dmtimer_systimer_type1_reset(struct dmtimer_systimer *t)
+ {
+ void __iomem *syss = t->base + OMAP_TIMER_V1_SYS_STAT_OFFSET;
+ int ret;
+ u32 l;
+
++ dmtimer_systimer_enable(t);
+ writel_relaxed(BIT(1) | BIT(2), t->base + t->ifctrl);
+ ret = readl_poll_timeout_atomic(syss, l, l & BIT(0), 100,
+ DMTIMER_RESET_WAIT);
+@@ -88,6 +109,7 @@ static int __init dmtimer_systimer_type2_reset(struct dmtimer_systimer *t)
+ void __iomem *sysc = t->base + t->sysc;
+ u32 l;
+
++ dmtimer_systimer_enable(t);
+ l = readl_relaxed(sysc);
+ l |= BIT(0);
+ writel_relaxed(l, sysc);
+@@ -336,26 +358,6 @@ static int __init dmtimer_systimer_init_clock(struct dmtimer_systimer *t,
+ return 0;
+ }
+
+-static void dmtimer_systimer_enable(struct dmtimer_systimer *t)
+-{
+- u32 val;
+-
+- if (dmtimer_systimer_revision1(t))
+- val = DMTIMER_TYPE1_ENABLE;
+- else
+- val = DMTIMER_TYPE2_ENABLE;
+-
+- writel_relaxed(val, t->base + t->sysc);
+-}
+-
+-static void dmtimer_systimer_disable(struct dmtimer_systimer *t)
+-{
+- if (!dmtimer_systimer_revision1(t))
+- return;
+-
+- writel_relaxed(DMTIMER_TYPE1_DISABLE, t->base + t->sysc);
+-}
+-
+ static int __init dmtimer_systimer_setup(struct device_node *np,
+ struct dmtimer_systimer *t)
+ {
+@@ -409,8 +411,8 @@ static int __init dmtimer_systimer_setup(struct device_node *np,
+ t->wakeup = regbase + _OMAP_TIMER_WAKEUP_EN_OFFSET;
+ t->ifctrl = regbase + _OMAP_TIMER_IF_CTRL_OFFSET;
+
+- dmtimer_systimer_enable(t);
+ dmtimer_systimer_reset(t);
++ dmtimer_systimer_enable(t);
+ pr_debug("dmtimer rev %08x sysc %08x\n", readl_relaxed(t->base),
+ readl_relaxed(t->base + t->sysc));
+
+diff --git a/drivers/devfreq/tegra30-devfreq.c b/drivers/devfreq/tegra30-devfreq.c
+index e94a27804c209..dedd39de73675 100644
+--- a/drivers/devfreq/tegra30-devfreq.c
++++ b/drivers/devfreq/tegra30-devfreq.c
+@@ -836,7 +836,8 @@ static int tegra_devfreq_probe(struct platform_device *pdev)
+ rate = clk_round_rate(tegra->emc_clock, ULONG_MAX);
+ if (rate < 0) {
+ dev_err(&pdev->dev, "Failed to round clock rate: %ld\n", rate);
+- return rate;
++ err = rate;
++ goto disable_clk;
+ }
+
+ tegra->max_freq = rate / KHZ;
+@@ -897,6 +898,7 @@ remove_opps:
+ dev_pm_opp_remove_all_dynamic(&pdev->dev);
+
+ reset_control_reset(tegra->reset);
++disable_clk:
+ clk_disable_unprepare(tegra->clock);
+
+ return err;
+diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
+index 1ca609f66fdf8..241c4b48d6099 100644
+--- a/drivers/dma-buf/dma-buf.c
++++ b/drivers/dma-buf/dma-buf.c
+@@ -59,6 +59,8 @@ static void dma_buf_release(struct dentry *dentry)
+ struct dma_buf *dmabuf;
+
+ dmabuf = dentry->d_fsdata;
++ if (unlikely(!dmabuf))
++ return;
+
+ BUG_ON(dmabuf->vmapping_counter);
+
+diff --git a/drivers/edac/ghes_edac.c b/drivers/edac/ghes_edac.c
+index cb3dab56a875d..efad23575b16b 100644
+--- a/drivers/edac/ghes_edac.c
++++ b/drivers/edac/ghes_edac.c
+@@ -469,6 +469,7 @@ int ghes_edac_register(struct ghes *ghes, struct device *dev)
+ if (!force_load && idx < 0)
+ return -ENODEV;
+ } else {
++ force_load = true;
+ idx = 0;
+ }
+
+@@ -566,6 +567,9 @@ void ghes_edac_unregister(struct ghes *ghes)
+ struct mem_ctl_info *mci;
+ unsigned long flags;
+
++ if (!force_load)
++ return;
++
+ mutex_lock(&ghes_reg_mutex);
+
+ if (!refcount_dec_and_test(&ghes_refcount))
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
+index aa1e0f0550835..6b00cdbb08368 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
+@@ -1177,6 +1177,8 @@ static int stop_cpsch(struct device_queue_manager *dqm)
+ dqm->sched_running = false;
+ dqm_unlock(dqm);
+
++ pm_release_ib(&dqm->packets);
++
+ kfd_gtt_sa_free(dqm->dev, dqm->fence_mem);
+ pm_uninit(&dqm->packets, hanging);
+
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index 3f7eced92c0c8..7c1cc0ba30a55 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -5257,19 +5257,6 @@ static void dm_crtc_helper_disable(struct drm_crtc *crtc)
+ {
+ }
+
+-static bool does_crtc_have_active_cursor(struct drm_crtc_state *new_crtc_state)
+-{
+- struct drm_device *dev = new_crtc_state->crtc->dev;
+- struct drm_plane *plane;
+-
+- drm_for_each_plane_mask(plane, dev, new_crtc_state->plane_mask) {
+- if (plane->type == DRM_PLANE_TYPE_CURSOR)
+- return true;
+- }
+-
+- return false;
+-}
+-
+ static int count_crtc_active_planes(struct drm_crtc_state *new_crtc_state)
+ {
+ struct drm_atomic_state *state = new_crtc_state->state;
+@@ -5349,19 +5336,20 @@ static int dm_crtc_helper_atomic_check(struct drm_crtc *crtc,
+ return ret;
+ }
+
+- /* In some use cases, like reset, no stream is attached */
+- if (!dm_crtc_state->stream)
+- return 0;
+-
+ /*
+- * We want at least one hardware plane enabled to use
+- * the stream with a cursor enabled.
++ * We require the primary plane to be enabled whenever the CRTC is, otherwise
++ * drm_mode_cursor_universal may end up trying to enable the cursor plane while all other
++ * planes are disabled, which is not supported by the hardware. And there is legacy
++ * userspace which stops using the HW cursor altogether in response to the resulting EINVAL.
+ */
+- if (state->enable && state->active &&
+- does_crtc_have_active_cursor(state) &&
+- dm_crtc_state->active_planes == 0)
++ if (state->enable &&
++ !(state->plane_mask & drm_plane_mask(crtc->primary)))
+ return -EINVAL;
+
++ /* In some use cases, like reset, no stream is attached */
++ if (!dm_crtc_state->stream)
++ return 0;
++
+ if (dc_validate_stream(dc, dm_crtc_state->stream) == DC_OK)
+ return 0;
+
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c
+index 2d9055eb3ce92..20bdabebbc434 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c
+@@ -409,8 +409,8 @@ struct _vcs_dpi_soc_bounding_box_st dcn2_0_nv14_soc = {
+ },
+ },
+ .num_states = 5,
+- .sr_exit_time_us = 8.6,
+- .sr_enter_plus_exit_time_us = 10.9,
++ .sr_exit_time_us = 11.6,
++ .sr_enter_plus_exit_time_us = 13.9,
+ .urgent_latency_us = 4.0,
+ .urgent_latency_pixel_data_only_us = 4.0,
+ .urgent_latency_pixel_mixed_with_vm_data_us = 4.0,
+diff --git a/drivers/gpu/drm/amd/display/modules/hdcp/hdcp_log.h b/drivers/gpu/drm/amd/display/modules/hdcp/hdcp_log.h
+index d3192b9d0c3d8..47f8ee2832ff0 100644
+--- a/drivers/gpu/drm/amd/display/modules/hdcp/hdcp_log.h
++++ b/drivers/gpu/drm/amd/display/modules/hdcp/hdcp_log.h
+@@ -27,7 +27,7 @@
+ #define MOD_HDCP_LOG_H_
+
+ #ifdef CONFIG_DRM_AMD_DC_HDCP
+-#define HDCP_LOG_ERR(hdcp, ...) DRM_WARN(__VA_ARGS__)
++#define HDCP_LOG_ERR(hdcp, ...) DRM_DEBUG_KMS(__VA_ARGS__)
+ #define HDCP_LOG_VER(hdcp, ...) DRM_DEBUG_KMS(__VA_ARGS__)
+ #define HDCP_LOG_FSM(hdcp, ...) DRM_DEBUG_KMS(__VA_ARGS__)
+ #define HDCP_LOG_TOP(hdcp, ...) pr_debug("[HDCP_TOP]:"__VA_ARGS__)
+diff --git a/drivers/gpu/drm/amd/display/modules/hdcp/hdcp_psp.c b/drivers/gpu/drm/amd/display/modules/hdcp/hdcp_psp.c
+index fb1161dd7ea80..3a367a5968ae1 100644
+--- a/drivers/gpu/drm/amd/display/modules/hdcp/hdcp_psp.c
++++ b/drivers/gpu/drm/amd/display/modules/hdcp/hdcp_psp.c
+@@ -88,7 +88,7 @@ enum mod_hdcp_status mod_hdcp_add_display_to_topology(struct mod_hdcp *hdcp,
+ enum mod_hdcp_status status = MOD_HDCP_STATUS_SUCCESS;
+
+ if (!psp->dtm_context.dtm_initialized) {
+- DRM_ERROR("Failed to add display topology, DTM TA is not initialized.");
++ DRM_INFO("Failed to add display topology, DTM TA is not initialized.");
+ display->state = MOD_HDCP_DISPLAY_INACTIVE;
+ return MOD_HDCP_STATUS_FAILURE;
+ }
+diff --git a/drivers/gpu/drm/sun4i/sun8i_csc.h b/drivers/gpu/drm/sun4i/sun8i_csc.h
+index f42441b1b14dd..a55a38ad849c1 100644
+--- a/drivers/gpu/drm/sun4i/sun8i_csc.h
++++ b/drivers/gpu/drm/sun4i/sun8i_csc.h
+@@ -12,7 +12,7 @@ struct sun8i_mixer;
+
+ /* VI channel CSC units offsets */
+ #define CCSC00_OFFSET 0xAA050
+-#define CCSC01_OFFSET 0xFA000
++#define CCSC01_OFFSET 0xFA050
+ #define CCSC10_OFFSET 0xA0000
+ #define CCSC11_OFFSET 0xF0000
+
+diff --git a/drivers/gpu/drm/vc4/vc4_hdmi.c b/drivers/gpu/drm/vc4/vc4_hdmi.c
+index 625bfcf52dc4d..bdcc54c87d7e8 100644
+--- a/drivers/gpu/drm/vc4/vc4_hdmi.c
++++ b/drivers/gpu/drm/vc4/vc4_hdmi.c
+@@ -1117,6 +1117,7 @@ static int vc4_hdmi_audio_init(struct vc4_hdmi *hdmi)
+ card->num_links = 1;
+ card->name = "vc4-hdmi";
+ card->dev = dev;
++ card->owner = THIS_MODULE;
+
+ /*
+ * Be careful, snd_soc_register_card() calls dev_set_drvdata() and
+diff --git a/drivers/i2c/busses/i2c-aspeed.c b/drivers/i2c/busses/i2c-aspeed.c
+index f51702d86a90e..1ad74efcab372 100644
+--- a/drivers/i2c/busses/i2c-aspeed.c
++++ b/drivers/i2c/busses/i2c-aspeed.c
+@@ -69,6 +69,7 @@
+ * These share bit definitions, so use the same values for the enable &
+ * status bits.
+ */
++#define ASPEED_I2CD_INTR_RECV_MASK 0xf000ffff
+ #define ASPEED_I2CD_INTR_SDA_DL_TIMEOUT BIT(14)
+ #define ASPEED_I2CD_INTR_BUS_RECOVER_DONE BIT(13)
+ #define ASPEED_I2CD_INTR_SLAVE_MATCH BIT(7)
+@@ -604,6 +605,7 @@ static irqreturn_t aspeed_i2c_bus_irq(int irq, void *dev_id)
+ writel(irq_received & ~ASPEED_I2CD_INTR_RX_DONE,
+ bus->base + ASPEED_I2C_INTR_STS_REG);
+ readl(bus->base + ASPEED_I2C_INTR_STS_REG);
++ irq_received &= ASPEED_I2CD_INTR_RECV_MASK;
+ irq_remaining = irq_received;
+
+ #if IS_ENABLED(CONFIG_I2C_SLAVE)
+diff --git a/drivers/i2c/busses/i2c-mt65xx.c b/drivers/i2c/busses/i2c-mt65xx.c
+index b099139cbb91e..f9e62c958cf69 100644
+--- a/drivers/i2c/busses/i2c-mt65xx.c
++++ b/drivers/i2c/busses/i2c-mt65xx.c
+@@ -736,7 +736,7 @@ static int mtk_i2c_set_speed(struct mtk_i2c *i2c, unsigned int parent_clk)
+ for (clk_div = 1; clk_div <= max_clk_div; clk_div++) {
+ clk_src = parent_clk / clk_div;
+
+- if (target_speed > I2C_MAX_FAST_MODE_FREQ) {
++ if (target_speed > I2C_MAX_FAST_MODE_PLUS_FREQ) {
+ /* Set master code speed register */
+ ret = mtk_i2c_calculate_speed(i2c, clk_src,
+ I2C_MAX_FAST_MODE_FREQ,
+diff --git a/drivers/i2c/i2c-core-base.c b/drivers/i2c/i2c-core-base.c
+index 4f09d4c318287..7031393c74806 100644
+--- a/drivers/i2c/i2c-core-base.c
++++ b/drivers/i2c/i2c-core-base.c
+@@ -1336,8 +1336,8 @@ static int i2c_register_adapter(struct i2c_adapter *adap)
+
+ /* create pre-declared device nodes */
+ of_i2c_register_devices(adap);
+- i2c_acpi_register_devices(adap);
+ i2c_acpi_install_space_handler(adap);
++ i2c_acpi_register_devices(adap);
+
+ if (adap->nr < __i2c_first_dynamic_bus_num)
+ i2c_scan_static_board_info(adap);
+diff --git a/drivers/infiniband/core/device.c b/drivers/infiniband/core/device.c
+index eadba29432dd7..abcfe4dc1284f 100644
+--- a/drivers/infiniband/core/device.c
++++ b/drivers/infiniband/core/device.c
+@@ -1282,6 +1282,8 @@ static void disable_device(struct ib_device *device)
+ remove_client_context(device, cid);
+ }
+
++ ib_cq_pool_destroy(device);
++
+ /* Pairs with refcount_set in enable_device */
+ ib_device_put(device);
+ wait_for_completion(&device->unreg_completion);
+@@ -1325,6 +1327,8 @@ static int enable_device_and_get(struct ib_device *device)
+ goto out;
+ }
+
++ ib_cq_pool_init(device);
++
+ down_read(&clients_rwsem);
+ xa_for_each_marked (&clients, index, client, CLIENT_REGISTERED) {
+ ret = add_client_context(device, client);
+@@ -1397,7 +1401,6 @@ int ib_register_device(struct ib_device *device, const char *name)
+ goto dev_cleanup;
+ }
+
+- ib_cq_pool_init(device);
+ ret = enable_device_and_get(device);
+ dev_set_uevent_suppress(&device->dev, false);
+ /* Mark for userspace that device is ready */
+@@ -1452,7 +1455,6 @@ static void __ib_unregister_device(struct ib_device *ib_dev)
+ goto out;
+
+ disable_device(ib_dev);
+- ib_cq_pool_destroy(ib_dev);
+
+ /* Expedite removing unregistered pointers from the hash table */
+ free_netdevs(ib_dev);
+diff --git a/drivers/md/dm.c b/drivers/md/dm.c
+index 49c758fef8cb6..548ad06094e98 100644
+--- a/drivers/md/dm.c
++++ b/drivers/md/dm.c
+@@ -1728,23 +1728,6 @@ out:
+ return ret;
+ }
+
+-static void dm_queue_split(struct mapped_device *md, struct dm_target *ti, struct bio **bio)
+-{
+- unsigned len, sector_count;
+-
+- sector_count = bio_sectors(*bio);
+- len = min_t(sector_t, max_io_len((*bio)->bi_iter.bi_sector, ti), sector_count);
+-
+- if (sector_count > len) {
+- struct bio *split = bio_split(*bio, len, GFP_NOIO, &md->queue->bio_split);
+-
+- bio_chain(split, *bio);
+- trace_block_split(md->queue, split, (*bio)->bi_iter.bi_sector);
+- generic_make_request(*bio);
+- *bio = split;
+- }
+-}
+-
+ static blk_qc_t dm_process_bio(struct mapped_device *md,
+ struct dm_table *map, struct bio *bio)
+ {
+@@ -1772,14 +1755,12 @@ static blk_qc_t dm_process_bio(struct mapped_device *md,
+ if (current->bio_list) {
+ if (is_abnormal_io(bio))
+ blk_queue_split(md->queue, &bio);
+- else
+- dm_queue_split(md, ti, &bio);
++ /* regular IO is split by __split_and_process_bio */
+ }
+
+ if (dm_get_md_type(md) == DM_TYPE_NVME_BIO_BASED)
+ return __process_bio(md, map, bio, ti);
+- else
+- return __split_and_process_bio(md, map, bio);
++ return __split_and_process_bio(md, map, bio);
+ }
+
+ static blk_qc_t dm_make_request(struct request_queue *q, struct bio *bio)
+diff --git a/drivers/media/cec/core/cec-adap.c b/drivers/media/cec/core/cec-adap.c
+index 6a04d19a96b2e..accc893243295 100644
+--- a/drivers/media/cec/core/cec-adap.c
++++ b/drivers/media/cec/core/cec-adap.c
+@@ -1199,7 +1199,7 @@ void cec_received_msg_ts(struct cec_adapter *adap,
+ /* Cancel the pending timeout work */
+ if (!cancel_delayed_work(&data->work)) {
+ mutex_unlock(&adap->lock);
+- flush_scheduled_work();
++ cancel_delayed_work_sync(&data->work);
+ mutex_lock(&adap->lock);
+ }
+ /*
+diff --git a/drivers/net/ethernet/intel/igc/igc.h b/drivers/net/ethernet/intel/igc/igc.h
+index 5dbc5a156626a..206b73aa6d7a7 100644
+--- a/drivers/net/ethernet/intel/igc/igc.h
++++ b/drivers/net/ethernet/intel/igc/igc.h
+@@ -298,18 +298,14 @@ extern char igc_driver_version[];
+ #define IGC_RX_HDR_LEN IGC_RXBUFFER_256
+
+ /* Transmit and receive latency (for PTP timestamps) */
+-/* FIXME: These values were estimated using the ones that i225 has as
+- * basis, they seem to provide good numbers with ptp4l/phc2sys, but we
+- * need to confirm them.
+- */
+-#define IGC_I225_TX_LATENCY_10 9542
+-#define IGC_I225_TX_LATENCY_100 1024
+-#define IGC_I225_TX_LATENCY_1000 178
+-#define IGC_I225_TX_LATENCY_2500 64
+-#define IGC_I225_RX_LATENCY_10 20662
+-#define IGC_I225_RX_LATENCY_100 2213
+-#define IGC_I225_RX_LATENCY_1000 448
+-#define IGC_I225_RX_LATENCY_2500 160
++#define IGC_I225_TX_LATENCY_10 240
++#define IGC_I225_TX_LATENCY_100 58
++#define IGC_I225_TX_LATENCY_1000 80
++#define IGC_I225_TX_LATENCY_2500 1325
++#define IGC_I225_RX_LATENCY_10 6450
++#define IGC_I225_RX_LATENCY_100 185
++#define IGC_I225_RX_LATENCY_1000 300
++#define IGC_I225_RX_LATENCY_2500 1485
+
+ /* RX and TX descriptor control thresholds.
+ * PTHRESH - MAC will consider prefetch if it has fewer than this number of
+diff --git a/drivers/net/ethernet/intel/igc/igc_ptp.c b/drivers/net/ethernet/intel/igc/igc_ptp.c
+index 61e38853aa47d..9f191a7f3c71a 100644
+--- a/drivers/net/ethernet/intel/igc/igc_ptp.c
++++ b/drivers/net/ethernet/intel/igc/igc_ptp.c
+@@ -471,12 +471,31 @@ static void igc_ptp_tx_hwtstamp(struct igc_adapter *adapter)
+ struct sk_buff *skb = adapter->ptp_tx_skb;
+ struct skb_shared_hwtstamps shhwtstamps;
+ struct igc_hw *hw = &adapter->hw;
++ int adjust = 0;
+ u64 regval;
+
+ regval = rd32(IGC_TXSTMPL);
+ regval |= (u64)rd32(IGC_TXSTMPH) << 32;
+ igc_ptp_systim_to_hwtstamp(adapter, &shhwtstamps, regval);
+
++ switch (adapter->link_speed) {
++ case SPEED_10:
++ adjust = IGC_I225_TX_LATENCY_10;
++ break;
++ case SPEED_100:
++ adjust = IGC_I225_TX_LATENCY_100;
++ break;
++ case SPEED_1000:
++ adjust = IGC_I225_TX_LATENCY_1000;
++ break;
++ case SPEED_2500:
++ adjust = IGC_I225_TX_LATENCY_2500;
++ break;
++ }
++
++ shhwtstamps.hwtstamp =
++ ktime_add_ns(shhwtstamps.hwtstamp, adjust);
++
+ /* Clear the lock early before calling skb_tstamp_tx so that
+ * applications are not woken up before the lock bit is clear. We use
+ * a copy of the skb pointer to ensure other threads can't change it
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/port.c b/drivers/net/ethernet/mellanox/mlx5/core/en/port.c
+index 3cf3e35053f77..98e909bf3c1ec 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/port.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/port.c
+@@ -487,11 +487,8 @@ bool mlx5e_fec_in_caps(struct mlx5_core_dev *dev, int fec_policy)
+ int err;
+ int i;
+
+- if (!MLX5_CAP_GEN(dev, pcam_reg))
+- return -EOPNOTSUPP;
+-
+- if (!MLX5_CAP_PCAM_REG(dev, pplm))
+- return -EOPNOTSUPP;
++ if (!MLX5_CAP_GEN(dev, pcam_reg) || !MLX5_CAP_PCAM_REG(dev, pplm))
++ return false;
+
+ MLX5_SET(pplm_reg, in, local_port, 1);
+ err = mlx5_core_access_reg(dev, in, sz, out, sz, MLX5_REG_PPLM, 0, 0);
+diff --git a/drivers/net/ethernet/qlogic/qed/qed_dev.c b/drivers/net/ethernet/qlogic/qed/qed_dev.c
+index dbdac983ccde5..105d9afe825f1 100644
+--- a/drivers/net/ethernet/qlogic/qed/qed_dev.c
++++ b/drivers/net/ethernet/qlogic/qed/qed_dev.c
+@@ -4191,7 +4191,8 @@ static int qed_hw_get_nvm_info(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt)
+ cdev->mf_bits = BIT(QED_MF_LLH_MAC_CLSS) |
+ BIT(QED_MF_LLH_PROTO_CLSS) |
+ BIT(QED_MF_LL2_NON_UNICAST) |
+- BIT(QED_MF_INTER_PF_SWITCH);
++ BIT(QED_MF_INTER_PF_SWITCH) |
++ BIT(QED_MF_DISABLE_ARFS);
+ break;
+ case NVM_CFG1_GLOB_MF_MODE_DEFAULT:
+ cdev->mf_bits = BIT(QED_MF_LLH_MAC_CLSS) |
+@@ -4204,6 +4205,14 @@ static int qed_hw_get_nvm_info(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt)
+
+ DP_INFO(p_hwfn, "Multi function mode is 0x%lx\n",
+ cdev->mf_bits);
++
++ /* In CMT the PF is unknown when the GFS block processes the
++ * packet. Therefore cannot use searcher as it has a per PF
++ * database, and thus ARFS must be disabled.
++ *
++ */
++ if (QED_IS_CMT(cdev))
++ cdev->mf_bits |= BIT(QED_MF_DISABLE_ARFS);
+ }
+
+ DP_INFO(p_hwfn, "Multi function mode is 0x%lx\n",
+diff --git a/drivers/net/ethernet/qlogic/qed/qed_l2.c b/drivers/net/ethernet/qlogic/qed/qed_l2.c
+index 29810a1aa2106..b2cd153321720 100644
+--- a/drivers/net/ethernet/qlogic/qed/qed_l2.c
++++ b/drivers/net/ethernet/qlogic/qed/qed_l2.c
+@@ -2001,6 +2001,9 @@ void qed_arfs_mode_configure(struct qed_hwfn *p_hwfn,
+ struct qed_ptt *p_ptt,
+ struct qed_arfs_config_params *p_cfg_params)
+ {
++ if (test_bit(QED_MF_DISABLE_ARFS, &p_hwfn->cdev->mf_bits))
++ return;
++
+ if (p_cfg_params->mode != QED_FILTER_CONFIG_MODE_DISABLE) {
+ qed_gft_config(p_hwfn, p_ptt, p_hwfn->rel_pf_id,
+ p_cfg_params->tcp,
+diff --git a/drivers/net/ethernet/qlogic/qed/qed_main.c b/drivers/net/ethernet/qlogic/qed/qed_main.c
+index 11367a248d55e..05eff348b22a8 100644
+--- a/drivers/net/ethernet/qlogic/qed/qed_main.c
++++ b/drivers/net/ethernet/qlogic/qed/qed_main.c
+@@ -289,6 +289,8 @@ int qed_fill_dev_info(struct qed_dev *cdev,
+ dev_info->fw_eng = FW_ENGINEERING_VERSION;
+ dev_info->b_inter_pf_switch = test_bit(QED_MF_INTER_PF_SWITCH,
+ &cdev->mf_bits);
++ if (!test_bit(QED_MF_DISABLE_ARFS, &cdev->mf_bits))
++ dev_info->b_arfs_capable = true;
+ dev_info->tx_switching = true;
+
+ if (hw_info->b_wol_support == QED_WOL_SUPPORT_PME)
+diff --git a/drivers/net/ethernet/qlogic/qed/qed_sriov.c b/drivers/net/ethernet/qlogic/qed/qed_sriov.c
+index 20679fd4204be..229c6f3ff3935 100644
+--- a/drivers/net/ethernet/qlogic/qed/qed_sriov.c
++++ b/drivers/net/ethernet/qlogic/qed/qed_sriov.c
+@@ -97,6 +97,7 @@ static int qed_sp_vf_start(struct qed_hwfn *p_hwfn, struct qed_vf_info *p_vf)
+ p_ramrod->personality = PERSONALITY_ETH;
+ break;
+ case QED_PCI_ETH_ROCE:
++ case QED_PCI_ETH_IWARP:
+ p_ramrod->personality = PERSONALITY_RDMA_AND_ETH;
+ break;
+ default:
+diff --git a/drivers/net/ethernet/qlogic/qede/qede_filter.c b/drivers/net/ethernet/qlogic/qede/qede_filter.c
+index fe72bb6c9455e..203cc76214c70 100644
+--- a/drivers/net/ethernet/qlogic/qede/qede_filter.c
++++ b/drivers/net/ethernet/qlogic/qede/qede_filter.c
+@@ -336,6 +336,9 @@ int qede_alloc_arfs(struct qede_dev *edev)
+ {
+ int i;
+
++ if (!edev->dev_info.common.b_arfs_capable)
++ return -EINVAL;
++
+ edev->arfs = vzalloc(sizeof(*edev->arfs));
+ if (!edev->arfs)
+ return -ENOMEM;
+diff --git a/drivers/net/ethernet/qlogic/qede/qede_main.c b/drivers/net/ethernet/qlogic/qede/qede_main.c
+index 29e285430f995..082055ee2d397 100644
+--- a/drivers/net/ethernet/qlogic/qede/qede_main.c
++++ b/drivers/net/ethernet/qlogic/qede/qede_main.c
+@@ -827,7 +827,7 @@ static void qede_init_ndev(struct qede_dev *edev)
+ NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM |
+ NETIF_F_TSO | NETIF_F_TSO6 | NETIF_F_HW_TC;
+
+- if (!IS_VF(edev) && edev->dev_info.common.num_hwfns == 1)
++ if (edev->dev_info.common.b_arfs_capable)
+ hw_features |= NETIF_F_NTUPLE;
+
+ if (edev->dev_info.common.vxlan_enable ||
+@@ -2278,7 +2278,7 @@ static void qede_unload(struct qede_dev *edev, enum qede_unload_mode mode,
+ qede_vlan_mark_nonconfigured(edev);
+ edev->ops->fastpath_stop(edev->cdev);
+
+- if (!IS_VF(edev) && edev->dev_info.common.num_hwfns == 1) {
++ if (edev->dev_info.common.b_arfs_capable) {
+ qede_poll_for_freeing_arfs_filters(edev);
+ qede_free_arfs(edev);
+ }
+@@ -2345,10 +2345,9 @@ static int qede_load(struct qede_dev *edev, enum qede_load_mode mode,
+ if (rc)
+ goto err2;
+
+- if (!IS_VF(edev) && edev->dev_info.common.num_hwfns == 1) {
+- rc = qede_alloc_arfs(edev);
+- if (rc)
+- DP_NOTICE(edev, "aRFS memory allocation failed\n");
++ if (qede_alloc_arfs(edev)) {
++ edev->ndev->features &= ~NETIF_F_NTUPLE;
++ edev->dev_info.common.b_arfs_capable = false;
+ }
+
+ qede_napi_add_enable(edev);
+diff --git a/drivers/net/hyperv/netvsc_drv.c b/drivers/net/hyperv/netvsc_drv.c
+index 8309194b351a9..a2db5ef3b62a2 100644
+--- a/drivers/net/hyperv/netvsc_drv.c
++++ b/drivers/net/hyperv/netvsc_drv.c
+@@ -2576,7 +2576,6 @@ static int netvsc_resume(struct hv_device *dev)
+ struct net_device *net = hv_get_drvdata(dev);
+ struct net_device_context *net_device_ctx;
+ struct netvsc_device_info *device_info;
+- struct net_device *vf_netdev;
+ int ret;
+
+ rtnl_lock();
+@@ -2589,15 +2588,6 @@ static int netvsc_resume(struct hv_device *dev)
+ netvsc_devinfo_put(device_info);
+ net_device_ctx->saved_netvsc_dev_info = NULL;
+
+- /* A NIC driver (e.g. mlx5) may keep the VF network interface across
+- * hibernation, but here the data path is implicitly switched to the
+- * netvsc NIC since the vmbus channel is closed and re-opened, so
+- * netvsc_vf_changed() must be used to switch the data path to the VF.
+- */
+- vf_netdev = rtnl_dereference(net_device_ctx->vf_netdev);
+- if (vf_netdev && netvsc_vf_changed(vf_netdev) != NOTIFY_OK)
+- ret = -EINVAL;
+-
+ rtnl_unlock();
+
+ return ret;
+@@ -2658,6 +2648,7 @@ static int netvsc_netdev_event(struct notifier_block *this,
+ return netvsc_unregister_vf(event_dev);
+ case NETDEV_UP:
+ case NETDEV_DOWN:
++ case NETDEV_CHANGE:
+ return netvsc_vf_changed(event_dev);
+ default:
+ return NOTIFY_DONE;
+diff --git a/drivers/net/ieee802154/adf7242.c b/drivers/net/ieee802154/adf7242.c
+index c11f32f644db3..7db9cbd0f5ded 100644
+--- a/drivers/net/ieee802154/adf7242.c
++++ b/drivers/net/ieee802154/adf7242.c
+@@ -882,7 +882,9 @@ static int adf7242_rx(struct adf7242_local *lp)
+ int ret;
+ u8 lqi, len_u8, *data;
+
+- adf7242_read_reg(lp, 0, &len_u8);
++ ret = adf7242_read_reg(lp, 0, &len_u8);
++ if (ret)
++ return ret;
+
+ len = len_u8;
+
+diff --git a/drivers/net/ieee802154/ca8210.c b/drivers/net/ieee802154/ca8210.c
+index e04c3b60cae78..4eb64709d44cb 100644
+--- a/drivers/net/ieee802154/ca8210.c
++++ b/drivers/net/ieee802154/ca8210.c
+@@ -2925,6 +2925,7 @@ static int ca8210_dev_com_init(struct ca8210_priv *priv)
+ );
+ if (!priv->irq_workqueue) {
+ dev_crit(&priv->spi->dev, "alloc of irq_workqueue failed!\n");
++ destroy_workqueue(priv->mlme_workqueue);
+ return -ENOMEM;
+ }
+
+diff --git a/drivers/net/wireless/marvell/mwifiex/fw.h b/drivers/net/wireless/marvell/mwifiex/fw.h
+index 8047e307892e3..d9f8bdbc817b2 100644
+--- a/drivers/net/wireless/marvell/mwifiex/fw.h
++++ b/drivers/net/wireless/marvell/mwifiex/fw.h
+@@ -954,7 +954,7 @@ struct mwifiex_tkip_param {
+ struct mwifiex_aes_param {
+ u8 pn[WPA_PN_SIZE];
+ __le16 key_len;
+- u8 key[WLAN_KEY_LEN_CCMP];
++ u8 key[WLAN_KEY_LEN_CCMP_256];
+ } __packed;
+
+ struct mwifiex_wapi_param {
+diff --git a/drivers/net/wireless/marvell/mwifiex/sta_cmdresp.c b/drivers/net/wireless/marvell/mwifiex/sta_cmdresp.c
+index 962d8bfe6f101..119ccacd1fcc4 100644
+--- a/drivers/net/wireless/marvell/mwifiex/sta_cmdresp.c
++++ b/drivers/net/wireless/marvell/mwifiex/sta_cmdresp.c
+@@ -619,7 +619,7 @@ static int mwifiex_ret_802_11_key_material_v2(struct mwifiex_private *priv,
+ key_v2 = &resp->params.key_material_v2;
+
+ len = le16_to_cpu(key_v2->key_param_set.key_params.aes.key_len);
+- if (len > WLAN_KEY_LEN_CCMP)
++ if (len > sizeof(key_v2->key_param_set.key_params.aes.key))
+ return -EINVAL;
+
+ if (le16_to_cpu(key_v2->action) == HostCmd_ACT_GEN_SET) {
+@@ -635,7 +635,7 @@ static int mwifiex_ret_802_11_key_material_v2(struct mwifiex_private *priv,
+ return 0;
+
+ memset(priv->aes_key_v2.key_param_set.key_params.aes.key, 0,
+- WLAN_KEY_LEN_CCMP);
++ sizeof(key_v2->key_param_set.key_params.aes.key));
+ priv->aes_key_v2.key_param_set.key_params.aes.key_len =
+ cpu_to_le16(len);
+ memcpy(priv->aes_key_v2.key_param_set.key_params.aes.key,
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7615/mcu.c b/drivers/net/wireless/mediatek/mt76/mt7615/mcu.c
+index cb8c1d80ead92..72ad1426c45fc 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7615/mcu.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7615/mcu.c
+@@ -2014,7 +2014,8 @@ static int mt7615_load_n9(struct mt7615_dev *dev, const char *name)
+ sizeof(dev->mt76.hw->wiphy->fw_version),
+ "%.10s-%.15s", hdr->fw_ver, hdr->build_date);
+
+- if (!strncmp(hdr->fw_ver, "2.0", sizeof(hdr->fw_ver))) {
++ if (!is_mt7615(&dev->mt76) &&
++ !strncmp(hdr->fw_ver, "2.0", sizeof(hdr->fw_ver))) {
+ dev->fw_ver = MT7615_FIRMWARE_V2;
+ dev->mcu_ops = &sta_update_ops;
+ } else {
+diff --git a/drivers/nvme/host/Kconfig b/drivers/nvme/host/Kconfig
+index 3ed9786b88d8e..a44d49d63968a 100644
+--- a/drivers/nvme/host/Kconfig
++++ b/drivers/nvme/host/Kconfig
+@@ -73,6 +73,7 @@ config NVME_TCP
+ depends on INET
+ depends on BLK_DEV_NVME
+ select NVME_FABRICS
++ select CRYPTO
+ select CRYPTO_CRC32C
+ help
+ This provides support for the NVMe over Fabrics protocol using
+diff --git a/drivers/regulator/axp20x-regulator.c b/drivers/regulator/axp20x-regulator.c
+index fbc95cadaf539..126649c172e11 100644
+--- a/drivers/regulator/axp20x-regulator.c
++++ b/drivers/regulator/axp20x-regulator.c
+@@ -42,8 +42,9 @@
+
+ #define AXP20X_DCDC2_V_OUT_MASK GENMASK(5, 0)
+ #define AXP20X_DCDC3_V_OUT_MASK GENMASK(7, 0)
+-#define AXP20X_LDO24_V_OUT_MASK GENMASK(7, 4)
++#define AXP20X_LDO2_V_OUT_MASK GENMASK(7, 4)
+ #define AXP20X_LDO3_V_OUT_MASK GENMASK(6, 0)
++#define AXP20X_LDO4_V_OUT_MASK GENMASK(3, 0)
+ #define AXP20X_LDO5_V_OUT_MASK GENMASK(7, 4)
+
+ #define AXP20X_PWR_OUT_EXTEN_MASK BIT_MASK(0)
+@@ -542,14 +543,14 @@ static const struct regulator_desc axp20x_regulators[] = {
+ AXP20X_PWR_OUT_CTRL, AXP20X_PWR_OUT_DCDC3_MASK),
+ AXP_DESC_FIXED(AXP20X, LDO1, "ldo1", "acin", 1300),
+ AXP_DESC(AXP20X, LDO2, "ldo2", "ldo24in", 1800, 3300, 100,
+- AXP20X_LDO24_V_OUT, AXP20X_LDO24_V_OUT_MASK,
++ AXP20X_LDO24_V_OUT, AXP20X_LDO2_V_OUT_MASK,
+ AXP20X_PWR_OUT_CTRL, AXP20X_PWR_OUT_LDO2_MASK),
+ AXP_DESC(AXP20X, LDO3, "ldo3", "ldo3in", 700, 3500, 25,
+ AXP20X_LDO3_V_OUT, AXP20X_LDO3_V_OUT_MASK,
+ AXP20X_PWR_OUT_CTRL, AXP20X_PWR_OUT_LDO3_MASK),
+ AXP_DESC_RANGES(AXP20X, LDO4, "ldo4", "ldo24in",
+ axp20x_ldo4_ranges, AXP20X_LDO4_V_OUT_NUM_VOLTAGES,
+- AXP20X_LDO24_V_OUT, AXP20X_LDO24_V_OUT_MASK,
++ AXP20X_LDO24_V_OUT, AXP20X_LDO4_V_OUT_MASK,
+ AXP20X_PWR_OUT_CTRL, AXP20X_PWR_OUT_LDO4_MASK),
+ AXP_DESC_IO(AXP20X, LDO5, "ldo5", "ldo5in", 1800, 3300, 100,
+ AXP20X_LDO5_V_OUT, AXP20X_LDO5_V_OUT_MASK,
+diff --git a/drivers/s390/block/dasd_fba.c b/drivers/s390/block/dasd_fba.c
+index cbb770824226f..1a44e321b54e1 100644
+--- a/drivers/s390/block/dasd_fba.c
++++ b/drivers/s390/block/dasd_fba.c
+@@ -40,6 +40,7 @@
+ MODULE_LICENSE("GPL");
+
+ static struct dasd_discipline dasd_fba_discipline;
++static void *dasd_fba_zero_page;
+
+ struct dasd_fba_private {
+ struct dasd_fba_characteristics rdc_data;
+@@ -270,7 +271,7 @@ static void ccw_write_zero(struct ccw1 *ccw, int count)
+ ccw->cmd_code = DASD_FBA_CCW_WRITE;
+ ccw->flags |= CCW_FLAG_SLI;
+ ccw->count = count;
+- ccw->cda = (__u32) (addr_t) page_to_phys(ZERO_PAGE(0));
++ ccw->cda = (__u32) (addr_t) dasd_fba_zero_page;
+ }
+
+ /*
+@@ -830,6 +831,11 @@ dasd_fba_init(void)
+ int ret;
+
+ ASCEBC(dasd_fba_discipline.ebcname, 4);
++
++ dasd_fba_zero_page = (void *)get_zeroed_page(GFP_KERNEL | GFP_DMA);
++ if (!dasd_fba_zero_page)
++ return -ENOMEM;
++
+ ret = ccw_driver_register(&dasd_fba_driver);
+ if (!ret)
+ wait_for_device_probe();
+@@ -841,6 +847,7 @@ static void __exit
+ dasd_fba_cleanup(void)
+ {
+ ccw_driver_unregister(&dasd_fba_driver);
++ free_page((unsigned long)dasd_fba_zero_page);
+ }
+
+ module_init(dasd_fba_init);
+diff --git a/drivers/s390/crypto/zcrypt_api.c b/drivers/s390/crypto/zcrypt_api.c
+index 56a405dce8bcf..0b244f691b72d 100644
+--- a/drivers/s390/crypto/zcrypt_api.c
++++ b/drivers/s390/crypto/zcrypt_api.c
+@@ -1429,7 +1429,8 @@ static long zcrypt_unlocked_ioctl(struct file *filp, unsigned int cmd,
+ if (!reqcnt)
+ return -ENOMEM;
+ zcrypt_perdev_reqcnt(reqcnt, AP_DEVICES);
+- if (copy_to_user((int __user *) arg, reqcnt, sizeof(reqcnt)))
++ if (copy_to_user((int __user *) arg, reqcnt,
++ sizeof(u32) * AP_DEVICES))
+ rc = -EFAULT;
+ kfree(reqcnt);
+ return rc;
+diff --git a/drivers/scsi/lpfc/lpfc_hbadisc.c b/drivers/scsi/lpfc/lpfc_hbadisc.c
+index 4084f7f2b8216..7064e8024d14d 100644
+--- a/drivers/scsi/lpfc/lpfc_hbadisc.c
++++ b/drivers/scsi/lpfc/lpfc_hbadisc.c
+@@ -71,6 +71,7 @@ static void lpfc_disc_timeout_handler(struct lpfc_vport *);
+ static void lpfc_disc_flush_list(struct lpfc_vport *vport);
+ static void lpfc_unregister_fcfi_cmpl(struct lpfc_hba *, LPFC_MBOXQ_t *);
+ static int lpfc_fcf_inuse(struct lpfc_hba *);
++static void lpfc_mbx_cmpl_read_sparam(struct lpfc_hba *, LPFC_MBOXQ_t *);
+
+ void
+ lpfc_terminate_rport_io(struct fc_rport *rport)
+@@ -1138,11 +1139,13 @@ out:
+ return;
+ }
+
+-
+ void
+ lpfc_mbx_cmpl_local_config_link(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb)
+ {
+ struct lpfc_vport *vport = pmb->vport;
++ LPFC_MBOXQ_t *sparam_mb;
++ struct lpfc_dmabuf *sparam_mp;
++ int rc;
+
+ if (pmb->u.mb.mbxStatus)
+ goto out;
+@@ -1167,12 +1170,42 @@ lpfc_mbx_cmpl_local_config_link(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb)
+ }
+
+ /* Start discovery by sending a FLOGI. port_state is identically
+- * LPFC_FLOGI while waiting for FLOGI cmpl. Check if sending
+- * the FLOGI is being deferred till after MBX_READ_SPARAM completes.
++ * LPFC_FLOGI while waiting for FLOGI cmpl.
+ */
+ if (vport->port_state != LPFC_FLOGI) {
+- if (!(phba->hba_flag & HBA_DEFER_FLOGI))
++ /* Issue MBX_READ_SPARAM to update CSPs before FLOGI if
++ * bb-credit recovery is in place.
++ */
++ if (phba->bbcredit_support && phba->cfg_enable_bbcr &&
++ !(phba->link_flag & LS_LOOPBACK_MODE)) {
++ sparam_mb = mempool_alloc(phba->mbox_mem_pool,
++ GFP_KERNEL);
++ if (!sparam_mb)
++ goto sparam_out;
++
++ rc = lpfc_read_sparam(phba, sparam_mb, 0);
++ if (rc) {
++ mempool_free(sparam_mb, phba->mbox_mem_pool);
++ goto sparam_out;
++ }
++ sparam_mb->vport = vport;
++ sparam_mb->mbox_cmpl = lpfc_mbx_cmpl_read_sparam;
++ rc = lpfc_sli_issue_mbox(phba, sparam_mb, MBX_NOWAIT);
++ if (rc == MBX_NOT_FINISHED) {
++ sparam_mp = (struct lpfc_dmabuf *)
++ sparam_mb->ctx_buf;
++ lpfc_mbuf_free(phba, sparam_mp->virt,
++ sparam_mp->phys);
++ kfree(sparam_mp);
++ sparam_mb->ctx_buf = NULL;
++ mempool_free(sparam_mb, phba->mbox_mem_pool);
++ goto sparam_out;
++ }
++
++ phba->hba_flag |= HBA_DEFER_FLOGI;
++ } else {
+ lpfc_initial_flogi(vport);
++ }
+ } else {
+ if (vport->fc_flag & FC_PT2PT)
+ lpfc_disc_start(vport);
+@@ -1184,6 +1217,7 @@ out:
+ "0306 CONFIG_LINK mbxStatus error x%x "
+ "HBA state x%x\n",
+ pmb->u.mb.mbxStatus, vport->port_state);
++sparam_out:
+ mempool_free(pmb, phba->mbox_mem_pool);
+
+ lpfc_linkdown(phba);
+@@ -3239,21 +3273,6 @@ lpfc_mbx_process_link_up(struct lpfc_hba *phba, struct lpfc_mbx_read_top *la)
+ lpfc_linkup(phba);
+ sparam_mbox = NULL;
+
+- if (!(phba->hba_flag & HBA_FCOE_MODE)) {
+- cfglink_mbox = mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL);
+- if (!cfglink_mbox)
+- goto out;
+- vport->port_state = LPFC_LOCAL_CFG_LINK;
+- lpfc_config_link(phba, cfglink_mbox);
+- cfglink_mbox->vport = vport;
+- cfglink_mbox->mbox_cmpl = lpfc_mbx_cmpl_local_config_link;
+- rc = lpfc_sli_issue_mbox(phba, cfglink_mbox, MBX_NOWAIT);
+- if (rc == MBX_NOT_FINISHED) {
+- mempool_free(cfglink_mbox, phba->mbox_mem_pool);
+- goto out;
+- }
+- }
+-
+ sparam_mbox = mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL);
+ if (!sparam_mbox)
+ goto out;
+@@ -3274,7 +3293,20 @@ lpfc_mbx_process_link_up(struct lpfc_hba *phba, struct lpfc_mbx_read_top *la)
+ goto out;
+ }
+
+- if (phba->hba_flag & HBA_FCOE_MODE) {
++ if (!(phba->hba_flag & HBA_FCOE_MODE)) {
++ cfglink_mbox = mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL);
++ if (!cfglink_mbox)
++ goto out;
++ vport->port_state = LPFC_LOCAL_CFG_LINK;
++ lpfc_config_link(phba, cfglink_mbox);
++ cfglink_mbox->vport = vport;
++ cfglink_mbox->mbox_cmpl = lpfc_mbx_cmpl_local_config_link;
++ rc = lpfc_sli_issue_mbox(phba, cfglink_mbox, MBX_NOWAIT);
++ if (rc == MBX_NOT_FINISHED) {
++ mempool_free(cfglink_mbox, phba->mbox_mem_pool);
++ goto out;
++ }
++ } else {
+ vport->port_state = LPFC_VPORT_UNKNOWN;
+ /*
+ * Add the driver's default FCF record at FCF index 0 now. This
+@@ -3331,10 +3363,6 @@ lpfc_mbx_process_link_up(struct lpfc_hba *phba, struct lpfc_mbx_read_top *la)
+ }
+ /* Reset FCF roundrobin bmask for new discovery */
+ lpfc_sli4_clear_fcf_rr_bmask(phba);
+- } else {
+- if (phba->bbcredit_support && phba->cfg_enable_bbcr &&
+- !(phba->link_flag & LS_LOOPBACK_MODE))
+- phba->hba_flag |= HBA_DEFER_FLOGI;
+ }
+
+ /* Prepare for LINK up registrations */
+diff --git a/drivers/spi/spi-bcm-qspi.c b/drivers/spi/spi-bcm-qspi.c
+index 681d090851756..9cfa15ec8b08c 100644
+--- a/drivers/spi/spi-bcm-qspi.c
++++ b/drivers/spi/spi-bcm-qspi.c
+@@ -1295,7 +1295,7 @@ static const struct of_device_id bcm_qspi_of_match[] = {
+ },
+ {
+ .compatible = "brcm,spi-bcm-qspi",
+- .data = &bcm_qspi_rev_data,
++ .data = &bcm_qspi_no_rev_data,
+ },
+ {
+ .compatible = "brcm,spi-bcm7216-qspi",
+diff --git a/drivers/spi/spi-fsl-dspi.c b/drivers/spi/spi-fsl-dspi.c
+index 91c6affe139c9..283f2468a2f46 100644
+--- a/drivers/spi/spi-fsl-dspi.c
++++ b/drivers/spi/spi-fsl-dspi.c
+@@ -174,17 +174,17 @@ static const struct fsl_dspi_devtype_data devtype_data[] = {
+ .fifo_size = 16,
+ },
+ [LS2080A] = {
+- .trans_mode = DSPI_DMA_MODE,
++ .trans_mode = DSPI_XSPI_MODE,
+ .max_clock_factor = 8,
+ .fifo_size = 4,
+ },
+ [LS2085A] = {
+- .trans_mode = DSPI_DMA_MODE,
++ .trans_mode = DSPI_XSPI_MODE,
+ .max_clock_factor = 8,
+ .fifo_size = 4,
+ },
+ [LX2160A] = {
+- .trans_mode = DSPI_DMA_MODE,
++ .trans_mode = DSPI_XSPI_MODE,
+ .max_clock_factor = 8,
+ .fifo_size = 4,
+ },
+diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
+index 6d3ed9542b6c1..e6dbfd09bf1cb 100644
+--- a/fs/btrfs/disk-io.c
++++ b/fs/btrfs/disk-io.c
+@@ -636,16 +636,15 @@ static int btree_readpage_end_io_hook(struct btrfs_io_bio *io_bio,
+ csum_tree_block(eb, result);
+
+ if (memcmp_extent_buffer(eb, result, 0, csum_size)) {
+- u32 val;
+- u32 found = 0;
+-
+- memcpy(&found, result, csum_size);
++ u8 val[BTRFS_CSUM_SIZE] = { 0 };
+
+ read_extent_buffer(eb, &val, 0, csum_size);
+ btrfs_warn_rl(fs_info,
+- "%s checksum verify failed on %llu wanted %x found %x level %d",
++ "%s checksum verify failed on %llu wanted " CSUM_FMT " found " CSUM_FMT " level %d",
+ fs_info->sb->s_id, eb->start,
+- val, found, btrfs_header_level(eb));
++ CSUM_FMT_VALUE(csum_size, val),
++ CSUM_FMT_VALUE(csum_size, result),
++ btrfs_header_level(eb));
+ ret = -EUCLEAN;
+ goto err;
+ }
+diff --git a/fs/btrfs/sysfs.c b/fs/btrfs/sysfs.c
+index abc4a8fd6df65..21a1f4b0152e7 100644
+--- a/fs/btrfs/sysfs.c
++++ b/fs/btrfs/sysfs.c
+@@ -1165,10 +1165,12 @@ int btrfs_sysfs_remove_devices_dir(struct btrfs_fs_devices *fs_devices,
+ disk_kobj->name);
+ }
+
+- kobject_del(&one_device->devid_kobj);
+- kobject_put(&one_device->devid_kobj);
++ if (one_device->devid_kobj.state_initialized) {
++ kobject_del(&one_device->devid_kobj);
++ kobject_put(&one_device->devid_kobj);
+
+- wait_for_completion(&one_device->kobj_unregister);
++ wait_for_completion(&one_device->kobj_unregister);
++ }
+
+ return 0;
+ }
+@@ -1181,10 +1183,12 @@ int btrfs_sysfs_remove_devices_dir(struct btrfs_fs_devices *fs_devices,
+ sysfs_remove_link(fs_devices->devices_kobj,
+ disk_kobj->name);
+ }
+- kobject_del(&one_device->devid_kobj);
+- kobject_put(&one_device->devid_kobj);
++ if (one_device->devid_kobj.state_initialized) {
++ kobject_del(&one_device->devid_kobj);
++ kobject_put(&one_device->devid_kobj);
+
+- wait_for_completion(&one_device->kobj_unregister);
++ wait_for_completion(&one_device->kobj_unregister);
++ }
+ }
+
+ return 0;
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+index d05023ca74bdc..1d5640cc2a488 100644
+--- a/fs/io_uring.c
++++ b/fs/io_uring.c
+@@ -3056,8 +3056,6 @@ static int __io_openat_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe
+ const char __user *fname;
+ int ret;
+
+- if (unlikely(req->ctx->flags & (IORING_SETUP_IOPOLL|IORING_SETUP_SQPOLL)))
+- return -EINVAL;
+ if (unlikely(sqe->ioprio || sqe->buf_index))
+ return -EINVAL;
+ if (unlikely(req->flags & REQ_F_FIXED_FILE))
+@@ -3084,6 +3082,8 @@ static int io_openat_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+ {
+ u64 flags, mode;
+
++ if (unlikely(req->ctx->flags & (IORING_SETUP_IOPOLL|IORING_SETUP_SQPOLL)))
++ return -EINVAL;
+ if (req->flags & REQ_F_NEED_CLEANUP)
+ return 0;
+ mode = READ_ONCE(sqe->len);
+@@ -3098,6 +3098,8 @@ static int io_openat2_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+ size_t len;
+ int ret;
+
++ if (unlikely(req->ctx->flags & (IORING_SETUP_IOPOLL|IORING_SETUP_SQPOLL)))
++ return -EINVAL;
+ if (req->flags & REQ_F_NEED_CLEANUP)
+ return 0;
+ how = u64_to_user_ptr(READ_ONCE(sqe->addr2));
+@@ -5252,6 +5254,8 @@ static void io_cleanup_req(struct io_kiocb *req)
+ break;
+ case IORING_OP_OPENAT:
+ case IORING_OP_OPENAT2:
++ if (req->open.filename)
++ putname(req->open.filename);
+ break;
+ case IORING_OP_SPLICE:
+ case IORING_OP_TEE:
+diff --git a/include/linux/kprobes.h b/include/linux/kprobes.h
+index 6adf90f248d70..d6c83de361e47 100644
+--- a/include/linux/kprobes.h
++++ b/include/linux/kprobes.h
+@@ -369,6 +369,8 @@ void unregister_kretprobes(struct kretprobe **rps, int num);
+ void kprobe_flush_task(struct task_struct *tk);
+ void recycle_rp_inst(struct kretprobe_instance *ri, struct hlist_head *head);
+
++void kprobe_free_init_mem(void);
++
+ int disable_kprobe(struct kprobe *kp);
+ int enable_kprobe(struct kprobe *kp);
+
+@@ -426,6 +428,9 @@ static inline void unregister_kretprobes(struct kretprobe **rps, int num)
+ static inline void kprobe_flush_task(struct task_struct *tk)
+ {
+ }
++static inline void kprobe_free_init_mem(void)
++{
++}
+ static inline int disable_kprobe(struct kprobe *kp)
+ {
+ return -ENOSYS;
+diff --git a/include/linux/mm.h b/include/linux/mm.h
+index dc7b87310c103..bc05c3588aa31 100644
+--- a/include/linux/mm.h
++++ b/include/linux/mm.h
+@@ -2445,7 +2445,7 @@ extern int __meminit __early_pfn_to_nid(unsigned long pfn,
+
+ extern void set_dma_reserve(unsigned long new_dma_reserve);
+ extern void memmap_init_zone(unsigned long, int, unsigned long, unsigned long,
+- enum memmap_context, struct vmem_altmap *);
++ enum meminit_context, struct vmem_altmap *);
+ extern void setup_per_zone_wmarks(void);
+ extern int __meminit init_per_zone_wmark_min(void);
+ extern void mem_init(void);
+diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
+index f6f884970511d..04ff9a03bdb33 100644
+--- a/include/linux/mmzone.h
++++ b/include/linux/mmzone.h
+@@ -799,10 +799,15 @@ bool zone_watermark_ok(struct zone *z, unsigned int order,
+ unsigned int alloc_flags);
+ bool zone_watermark_ok_safe(struct zone *z, unsigned int order,
+ unsigned long mark, int highest_zoneidx);
+-enum memmap_context {
+- MEMMAP_EARLY,
+- MEMMAP_HOTPLUG,
++/*
++ * Memory initialization context, use to differentiate memory added by
++ * the platform statically or via memory hotplug interface.
++ */
++enum meminit_context {
++ MEMINIT_EARLY,
++ MEMINIT_HOTPLUG,
+ };
++
+ extern void init_currently_empty_zone(struct zone *zone, unsigned long start_pfn,
+ unsigned long size);
+
+diff --git a/include/linux/node.h b/include/linux/node.h
+index 4866f32a02d8d..014ba3ab2efd8 100644
+--- a/include/linux/node.h
++++ b/include/linux/node.h
+@@ -99,11 +99,13 @@ extern struct node *node_devices[];
+ typedef void (*node_registration_func_t)(struct node *);
+
+ #if defined(CONFIG_MEMORY_HOTPLUG_SPARSE) && defined(CONFIG_NUMA)
+-extern int link_mem_sections(int nid, unsigned long start_pfn,
+- unsigned long end_pfn);
++int link_mem_sections(int nid, unsigned long start_pfn,
++ unsigned long end_pfn,
++ enum meminit_context context);
+ #else
+ static inline int link_mem_sections(int nid, unsigned long start_pfn,
+- unsigned long end_pfn)
++ unsigned long end_pfn,
++ enum meminit_context context)
+ {
+ return 0;
+ }
+@@ -128,7 +130,8 @@ static inline int register_one_node(int nid)
+ if (error)
+ return error;
+ /* link memory sections under this node */
+- error = link_mem_sections(nid, start_pfn, end_pfn);
++ error = link_mem_sections(nid, start_pfn, end_pfn,
++ MEMINIT_EARLY);
+ }
+
+ return error;
+diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
+index 8075f6ae185a1..552df749531db 100644
+--- a/include/linux/pgtable.h
++++ b/include/linux/pgtable.h
+@@ -1424,6 +1424,16 @@ typedef unsigned int pgtbl_mod_mask;
+ #define mm_pmd_folded(mm) __is_defined(__PAGETABLE_PMD_FOLDED)
+ #endif
+
++#ifndef p4d_offset_lockless
++#define p4d_offset_lockless(pgdp, pgd, address) p4d_offset(&(pgd), address)
++#endif
++#ifndef pud_offset_lockless
++#define pud_offset_lockless(p4dp, p4d, address) pud_offset(&(p4d), address)
++#endif
++#ifndef pmd_offset_lockless
++#define pmd_offset_lockless(pudp, pud, address) pmd_offset(&(pud), address)
++#endif
++
+ /*
+ * p?d_leaf() - true if this entry is a final mapping to a physical address.
+ * This differs from p?d_huge() by the fact that they are always available (if
+diff --git a/include/linux/qed/qed_if.h b/include/linux/qed/qed_if.h
+index 8cb76405cbce1..78ba1dc54fd57 100644
+--- a/include/linux/qed/qed_if.h
++++ b/include/linux/qed/qed_if.h
+@@ -648,6 +648,7 @@ struct qed_dev_info {
+ #define QED_MFW_VERSION_3_OFFSET 24
+
+ u32 flash_size;
++ bool b_arfs_capable;
+ bool b_inter_pf_switch;
+ bool tx_switching;
+ bool rdma_supported;
+diff --git a/init/main.c b/init/main.c
+index 883ded3638e59..e214cdd18c285 100644
+--- a/init/main.c
++++ b/init/main.c
+@@ -33,6 +33,7 @@
+ #include <linux/nmi.h>
+ #include <linux/percpu.h>
+ #include <linux/kmod.h>
++#include <linux/kprobes.h>
+ #include <linux/vmalloc.h>
+ #include <linux/kernel_stat.h>
+ #include <linux/start_kernel.h>
+@@ -1401,6 +1402,7 @@ static int __ref kernel_init(void *unused)
+ kernel_init_freeable();
+ /* need to finish all async __init code before freeing the memory */
+ async_synchronize_full();
++ kprobe_free_init_mem();
+ ftrace_free_init_mem();
+ free_initmem();
+ mark_readonly();
+diff --git a/kernel/bpf/inode.c b/kernel/bpf/inode.c
+index fb878ba3f22f0..18f4969552ac2 100644
+--- a/kernel/bpf/inode.c
++++ b/kernel/bpf/inode.c
+@@ -226,10 +226,12 @@ static void *map_seq_next(struct seq_file *m, void *v, loff_t *pos)
+ else
+ prev_key = key;
+
++ rcu_read_lock();
+ if (map->ops->map_get_next_key(map, prev_key, key)) {
+ map_iter(m)->done = true;
+- return NULL;
++ key = NULL;
+ }
++ rcu_read_unlock();
+ return key;
+ }
+
+diff --git a/kernel/kprobes.c b/kernel/kprobes.c
+index a264246ff85aa..d0bf0ad425df5 100644
+--- a/kernel/kprobes.c
++++ b/kernel/kprobes.c
+@@ -2130,9 +2130,10 @@ static void kill_kprobe(struct kprobe *p)
+
+ /*
+ * The module is going away. We should disarm the kprobe which
+- * is using ftrace.
++ * is using ftrace, because ftrace framework is still available at
++ * MODULE_STATE_GOING notification.
+ */
+- if (kprobe_ftrace(p))
++ if (kprobe_ftrace(p) && !kprobe_disabled(p) && !kprobes_all_disarmed)
+ disarm_kprobe_ftrace(p);
+ }
+
+@@ -2405,6 +2406,28 @@ static struct notifier_block kprobe_module_nb = {
+ extern unsigned long __start_kprobe_blacklist[];
+ extern unsigned long __stop_kprobe_blacklist[];
+
++void kprobe_free_init_mem(void)
++{
++ void *start = (void *)(&__init_begin);
++ void *end = (void *)(&__init_end);
++ struct hlist_head *head;
++ struct kprobe *p;
++ int i;
++
++ mutex_lock(&kprobe_mutex);
++
++ /* Kill all kprobes on initmem */
++ for (i = 0; i < KPROBE_TABLE_SIZE; i++) {
++ head = &kprobe_table[i];
++ hlist_for_each_entry(p, head, hlist) {
++ if (start <= (void *)p->addr && (void *)p->addr < end)
++ kill_kprobe(p);
++ }
++ }
++
++ mutex_unlock(&kprobe_mutex);
++}
++
+ static int __init init_kprobes(void)
+ {
+ int i, err = 0;
+diff --git a/kernel/trace/trace_events_hist.c b/kernel/trace/trace_events_hist.c
+index 0b933546142e8..1b2ef64902296 100644
+--- a/kernel/trace/trace_events_hist.c
++++ b/kernel/trace/trace_events_hist.c
+@@ -3865,7 +3865,6 @@ static int parse_var_defs(struct hist_trigger_data *hist_data)
+
+ s = kstrdup(field_str, GFP_KERNEL);
+ if (!s) {
+- kfree(hist_data->attrs->var_defs.name[n_vars]);
+ ret = -ENOMEM;
+ goto free;
+ }
+diff --git a/kernel/trace/trace_preemptirq.c b/kernel/trace/trace_preemptirq.c
+index f10073e626030..f4938040c2286 100644
+--- a/kernel/trace/trace_preemptirq.c
++++ b/kernel/trace/trace_preemptirq.c
+@@ -102,14 +102,14 @@ NOKPROBE_SYMBOL(trace_hardirqs_on_caller);
+
+ __visible void trace_hardirqs_off_caller(unsigned long caller_addr)
+ {
++ lockdep_hardirqs_off(CALLER_ADDR0);
++
+ if (!this_cpu_read(tracing_irq_cpu)) {
+ this_cpu_write(tracing_irq_cpu, 1);
+ tracer_hardirqs_off(CALLER_ADDR0, caller_addr);
+ if (!in_nmi())
+ trace_irq_disable_rcuidle(CALLER_ADDR0, caller_addr);
+ }
+-
+- lockdep_hardirqs_off(CALLER_ADDR0);
+ }
+ EXPORT_SYMBOL(trace_hardirqs_off_caller);
+ NOKPROBE_SYMBOL(trace_hardirqs_off_caller);
+diff --git a/lib/bootconfig.c b/lib/bootconfig.c
+index 912ef49213986..510a0384861a2 100644
+--- a/lib/bootconfig.c
++++ b/lib/bootconfig.c
+@@ -31,6 +31,8 @@ static size_t xbc_data_size __initdata;
+ static struct xbc_node *last_parent __initdata;
+ static const char *xbc_err_msg __initdata;
+ static int xbc_err_pos __initdata;
++static int open_brace[XBC_DEPTH_MAX] __initdata;
++static int brace_index __initdata;
+
+ static int __init xbc_parse_error(const char *msg, const char *p)
+ {
+@@ -423,27 +425,27 @@ static char *skip_spaces_until_newline(char *p)
+ return p;
+ }
+
+-static int __init __xbc_open_brace(void)
++static int __init __xbc_open_brace(char *p)
+ {
+- /* Mark the last key as open brace */
+- last_parent->next = XBC_NODE_MAX;
++ /* Push the last key as open brace */
++ open_brace[brace_index++] = xbc_node_index(last_parent);
++ if (brace_index >= XBC_DEPTH_MAX)
++ return xbc_parse_error("Exceed max depth of braces", p);
+
+ return 0;
+ }
+
+ static int __init __xbc_close_brace(char *p)
+ {
+- struct xbc_node *node;
+-
+- if (!last_parent || last_parent->next != XBC_NODE_MAX)
++ brace_index--;
++ if (!last_parent || brace_index < 0 ||
++ (open_brace[brace_index] != xbc_node_index(last_parent)))
+ return xbc_parse_error("Unexpected closing brace", p);
+
+- node = last_parent;
+- node->next = 0;
+- do {
+- node = xbc_node_get_parent(node);
+- } while (node && node->next != XBC_NODE_MAX);
+- last_parent = node;
++ if (brace_index == 0)
++ last_parent = NULL;
++ else
++ last_parent = &xbc_nodes[open_brace[brace_index - 1]];
+
+ return 0;
+ }
+@@ -484,8 +486,8 @@ static int __init __xbc_parse_value(char **__v, char **__n)
+ break;
+ }
+ if (strchr(",;\n#}", c)) {
+- v = strim(v);
+ *p++ = '\0';
++ v = strim(v);
+ break;
+ }
+ }
+@@ -651,7 +653,7 @@ static int __init xbc_open_brace(char **k, char *n)
+ return ret;
+ *k = n;
+
+- return __xbc_open_brace();
++ return __xbc_open_brace(n - 1);
+ }
+
+ static int __init xbc_close_brace(char **k, char *n)
+@@ -671,6 +673,13 @@ static int __init xbc_verify_tree(void)
+ int i, depth, len, wlen;
+ struct xbc_node *n, *m;
+
++ /* Brace closing */
++ if (brace_index) {
++ n = &xbc_nodes[open_brace[brace_index]];
++ return xbc_parse_error("Brace is not closed",
++ xbc_node_get_data(n));
++ }
++
+ /* Empty tree */
+ if (xbc_node_num == 0) {
+ xbc_parse_error("Empty config", xbc_data);
+@@ -735,6 +744,7 @@ void __init xbc_destroy_all(void)
+ xbc_node_num = 0;
+ memblock_free(__pa(xbc_nodes), sizeof(struct xbc_node) * XBC_NODE_MAX);
+ xbc_nodes = NULL;
++ brace_index = 0;
+ }
+
+ /**
+diff --git a/lib/string.c b/lib/string.c
+index 6012c385fb314..4288e0158d47f 100644
+--- a/lib/string.c
++++ b/lib/string.c
+@@ -272,6 +272,30 @@ ssize_t strscpy_pad(char *dest, const char *src, size_t count)
+ }
+ EXPORT_SYMBOL(strscpy_pad);
+
++/**
++ * stpcpy - copy a string from src to dest returning a pointer to the new end
++ * of dest, including src's %NUL-terminator. May overrun dest.
++ * @dest: pointer to end of string being copied into. Must be large enough
++ * to receive copy.
++ * @src: pointer to the beginning of string being copied from. Must not overlap
++ * dest.
++ *
++ * stpcpy differs from strcpy in a key way: the return value is a pointer
++ * to the new %NUL-terminating character in @dest. (For strcpy, the return
++ * value is a pointer to the start of @dest). This interface is considered
++ * unsafe as it doesn't perform bounds checking of the inputs. As such it's
++ * not recommended for usage. Instead, its definition is provided in case
++ * the compiler lowers other libcalls to stpcpy.
++ */
++char *stpcpy(char *__restrict__ dest, const char *__restrict__ src);
++char *stpcpy(char *__restrict__ dest, const char *__restrict__ src)
++{
++ while ((*dest++ = *src++) != '\0')
++ /* nothing */;
++ return --dest;
++}
++EXPORT_SYMBOL(stpcpy);
++
+ #ifndef __HAVE_ARCH_STRCAT
+ /**
+ * strcat - Append one %NUL-terminated string to another
+diff --git a/mm/gup.c b/mm/gup.c
+index 0d8d76f10ac61..2e9ce90f29a1c 100644
+--- a/mm/gup.c
++++ b/mm/gup.c
+@@ -2574,13 +2574,13 @@ static int gup_huge_pgd(pgd_t orig, pgd_t *pgdp, unsigned long addr,
+ return 1;
+ }
+
+-static int gup_pmd_range(pud_t pud, unsigned long addr, unsigned long end,
++static int gup_pmd_range(pud_t *pudp, pud_t pud, unsigned long addr, unsigned long end,
+ unsigned int flags, struct page **pages, int *nr)
+ {
+ unsigned long next;
+ pmd_t *pmdp;
+
+- pmdp = pmd_offset(&pud, addr);
++ pmdp = pmd_offset_lockless(pudp, pud, addr);
+ do {
+ pmd_t pmd = READ_ONCE(*pmdp);
+
+@@ -2617,13 +2617,13 @@ static int gup_pmd_range(pud_t pud, unsigned long addr, unsigned long end,
+ return 1;
+ }
+
+-static int gup_pud_range(p4d_t p4d, unsigned long addr, unsigned long end,
++static int gup_pud_range(p4d_t *p4dp, p4d_t p4d, unsigned long addr, unsigned long end,
+ unsigned int flags, struct page **pages, int *nr)
+ {
+ unsigned long next;
+ pud_t *pudp;
+
+- pudp = pud_offset(&p4d, addr);
++ pudp = pud_offset_lockless(p4dp, p4d, addr);
+ do {
+ pud_t pud = READ_ONCE(*pudp);
+
+@@ -2638,20 +2638,20 @@ static int gup_pud_range(p4d_t p4d, unsigned long addr, unsigned long end,
+ if (!gup_huge_pd(__hugepd(pud_val(pud)), addr,
+ PUD_SHIFT, next, flags, pages, nr))
+ return 0;
+- } else if (!gup_pmd_range(pud, addr, next, flags, pages, nr))
++ } else if (!gup_pmd_range(pudp, pud, addr, next, flags, pages, nr))
+ return 0;
+ } while (pudp++, addr = next, addr != end);
+
+ return 1;
+ }
+
+-static int gup_p4d_range(pgd_t pgd, unsigned long addr, unsigned long end,
++static int gup_p4d_range(pgd_t *pgdp, pgd_t pgd, unsigned long addr, unsigned long end,
+ unsigned int flags, struct page **pages, int *nr)
+ {
+ unsigned long next;
+ p4d_t *p4dp;
+
+- p4dp = p4d_offset(&pgd, addr);
++ p4dp = p4d_offset_lockless(pgdp, pgd, addr);
+ do {
+ p4d_t p4d = READ_ONCE(*p4dp);
+
+@@ -2663,7 +2663,7 @@ static int gup_p4d_range(pgd_t pgd, unsigned long addr, unsigned long end,
+ if (!gup_huge_pd(__hugepd(p4d_val(p4d)), addr,
+ P4D_SHIFT, next, flags, pages, nr))
+ return 0;
+- } else if (!gup_pud_range(p4d, addr, next, flags, pages, nr))
++ } else if (!gup_pud_range(p4dp, p4d, addr, next, flags, pages, nr))
+ return 0;
+ } while (p4dp++, addr = next, addr != end);
+
+@@ -2691,7 +2691,7 @@ static void gup_pgd_range(unsigned long addr, unsigned long end,
+ if (!gup_huge_pd(__hugepd(pgd_val(pgd)), addr,
+ PGDIR_SHIFT, next, flags, pages, nr))
+ return;
+- } else if (!gup_p4d_range(pgd, addr, next, flags, pages, nr))
++ } else if (!gup_p4d_range(pgdp, pgd, addr, next, flags, pages, nr))
+ return;
+ } while (pgdp++, addr = next, addr != end);
+ }
+diff --git a/mm/madvise.c b/mm/madvise.c
+index d4aa5f7765435..0e0d61003fc6f 100644
+--- a/mm/madvise.c
++++ b/mm/madvise.c
+@@ -381,9 +381,9 @@ huge_unlock:
+ return 0;
+ }
+
++regular_page:
+ if (pmd_trans_unstable(pmd))
+ return 0;
+-regular_page:
+ #endif
+ tlb_change_page_size(tlb, PAGE_SIZE);
+ orig_pte = pte = pte_offset_map_lock(vma->vm_mm, pmd, addr, &ptl);
+diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
+index e76de2067bfd1..3f5073330bd50 100644
+--- a/mm/memory_hotplug.c
++++ b/mm/memory_hotplug.c
+@@ -719,7 +719,7 @@ void __ref move_pfn_range_to_zone(struct zone *zone, unsigned long start_pfn,
+ * are reserved so nobody should be touching them so we should be safe
+ */
+ memmap_init_zone(nr_pages, nid, zone_idx(zone), start_pfn,
+- MEMMAP_HOTPLUG, altmap);
++ MEMINIT_HOTPLUG, altmap);
+
+ set_zone_contiguous(zone);
+ }
+@@ -1065,7 +1065,8 @@ int __ref add_memory_resource(int nid, struct resource *res)
+ }
+
+ /* link memory sections under this node.*/
+- ret = link_mem_sections(nid, PFN_DOWN(start), PFN_UP(start + size - 1));
++ ret = link_mem_sections(nid, PFN_DOWN(start), PFN_UP(start + size - 1),
++ MEMINIT_HOTPLUG);
+ BUG_ON(ret);
+
+ /* create new memmap entry */
+diff --git a/mm/page_alloc.c b/mm/page_alloc.c
+index d809242f671f0..898ff44f2c7b2 100644
+--- a/mm/page_alloc.c
++++ b/mm/page_alloc.c
+@@ -5952,7 +5952,7 @@ overlap_memmap_init(unsigned long zone, unsigned long *pfn)
+ * done. Non-atomic initialization, single-pass.
+ */
+ void __meminit memmap_init_zone(unsigned long size, int nid, unsigned long zone,
+- unsigned long start_pfn, enum memmap_context context,
++ unsigned long start_pfn, enum meminit_context context,
+ struct vmem_altmap *altmap)
+ {
+ unsigned long pfn, end_pfn = start_pfn + size;
+@@ -5984,7 +5984,7 @@ void __meminit memmap_init_zone(unsigned long size, int nid, unsigned long zone,
+ * There can be holes in boot-time mem_map[]s handed to this
+ * function. They do not exist on hotplugged memory.
+ */
+- if (context == MEMMAP_EARLY) {
++ if (context == MEMINIT_EARLY) {
+ if (overlap_memmap_init(zone, &pfn))
+ continue;
+ if (defer_init(nid, pfn, end_pfn))
+@@ -5993,7 +5993,7 @@ void __meminit memmap_init_zone(unsigned long size, int nid, unsigned long zone,
+
+ page = pfn_to_page(pfn);
+ __init_single_page(page, pfn, zone, nid);
+- if (context == MEMMAP_HOTPLUG)
++ if (context == MEMINIT_HOTPLUG)
+ __SetPageReserved(page);
+
+ /*
+@@ -6076,7 +6076,7 @@ void __ref memmap_init_zone_device(struct zone *zone,
+ * check here not to call set_pageblock_migratetype() against
+ * pfn out of zone.
+ *
+- * Please note that MEMMAP_HOTPLUG path doesn't clear memmap
++ * Please note that MEMINIT_HOTPLUG path doesn't clear memmap
+ * because this is done early in section_activate()
+ */
+ if (!(pfn & (pageblock_nr_pages - 1))) {
+@@ -6114,7 +6114,7 @@ void __meminit __weak memmap_init(unsigned long size, int nid,
+ if (end_pfn > start_pfn) {
+ size = end_pfn - start_pfn;
+ memmap_init_zone(size, nid, zone, start_pfn,
+- MEMMAP_EARLY, NULL);
++ MEMINIT_EARLY, NULL);
+ }
+ }
+ }
+diff --git a/mm/swapfile.c b/mm/swapfile.c
+index 987276c557d1f..26707c5dc9fce 100644
+--- a/mm/swapfile.c
++++ b/mm/swapfile.c
+@@ -1074,7 +1074,7 @@ start_over:
+ goto nextsi;
+ }
+ if (size == SWAPFILE_CLUSTER) {
+- if (!(si->flags & SWP_FS))
++ if (si->flags & SWP_BLKDEV)
+ n_ret = swap_alloc_cluster(si, swp_entries);
+ } else
+ n_ret = scan_swap_map_slots(si, SWAP_HAS_CACHE,
+diff --git a/net/batman-adv/bridge_loop_avoidance.c b/net/batman-adv/bridge_loop_avoidance.c
+index cfb9e16afe38a..8002a7f8f3fad 100644
+--- a/net/batman-adv/bridge_loop_avoidance.c
++++ b/net/batman-adv/bridge_loop_avoidance.c
+@@ -25,6 +25,7 @@
+ #include <linux/lockdep.h>
+ #include <linux/netdevice.h>
+ #include <linux/netlink.h>
++#include <linux/preempt.h>
+ #include <linux/rculist.h>
+ #include <linux/rcupdate.h>
+ #include <linux/seq_file.h>
+@@ -83,11 +84,12 @@ static inline u32 batadv_choose_claim(const void *data, u32 size)
+ */
+ static inline u32 batadv_choose_backbone_gw(const void *data, u32 size)
+ {
+- const struct batadv_bla_claim *claim = (struct batadv_bla_claim *)data;
++ const struct batadv_bla_backbone_gw *gw;
+ u32 hash = 0;
+
+- hash = jhash(&claim->addr, sizeof(claim->addr), hash);
+- hash = jhash(&claim->vid, sizeof(claim->vid), hash);
++ gw = (struct batadv_bla_backbone_gw *)data;
++ hash = jhash(&gw->orig, sizeof(gw->orig), hash);
++ hash = jhash(&gw->vid, sizeof(gw->vid), hash);
+
+ return hash % size;
+ }
+@@ -1579,13 +1581,16 @@ int batadv_bla_init(struct batadv_priv *bat_priv)
+ }
+
+ /**
+- * batadv_bla_check_bcast_duplist() - Check if a frame is in the broadcast dup.
++ * batadv_bla_check_duplist() - Check if a frame is in the broadcast dup.
+ * @bat_priv: the bat priv with all the soft interface information
+- * @skb: contains the bcast_packet to be checked
++ * @skb: contains the multicast packet to be checked
++ * @payload_ptr: pointer to position inside the head buffer of the skb
++ * marking the start of the data to be CRC'ed
++ * @orig: originator mac address, NULL if unknown
+ *
+- * check if it is on our broadcast list. Another gateway might
+- * have sent the same packet because it is connected to the same backbone,
+- * so we have to remove this duplicate.
++ * Check if it is on our broadcast list. Another gateway might have sent the
++ * same packet because it is connected to the same backbone, so we have to
++ * remove this duplicate.
+ *
+ * This is performed by checking the CRC, which will tell us
+ * with a good chance that it is the same packet. If it is furthermore
+@@ -1594,19 +1599,17 @@ int batadv_bla_init(struct batadv_priv *bat_priv)
+ *
+ * Return: true if a packet is in the duplicate list, false otherwise.
+ */
+-bool batadv_bla_check_bcast_duplist(struct batadv_priv *bat_priv,
+- struct sk_buff *skb)
++static bool batadv_bla_check_duplist(struct batadv_priv *bat_priv,
++ struct sk_buff *skb, u8 *payload_ptr,
++ const u8 *orig)
+ {
+- int i, curr;
+- __be32 crc;
+- struct batadv_bcast_packet *bcast_packet;
+ struct batadv_bcast_duplist_entry *entry;
+ bool ret = false;
+-
+- bcast_packet = (struct batadv_bcast_packet *)skb->data;
++ int i, curr;
++ __be32 crc;
+
+ /* calculate the crc ... */
+- crc = batadv_skb_crc32(skb, (u8 *)(bcast_packet + 1));
++ crc = batadv_skb_crc32(skb, payload_ptr);
+
+ spin_lock_bh(&bat_priv->bla.bcast_duplist_lock);
+
+@@ -1625,8 +1628,21 @@ bool batadv_bla_check_bcast_duplist(struct batadv_priv *bat_priv,
+ if (entry->crc != crc)
+ continue;
+
+- if (batadv_compare_eth(entry->orig, bcast_packet->orig))
+- continue;
++ /* are the originators both known and not anonymous? */
++ if (orig && !is_zero_ether_addr(orig) &&
++ !is_zero_ether_addr(entry->orig)) {
++ /* If known, check if the new frame came from
++ * the same originator:
++ * We are safe to take identical frames from the
++ * same orig, if known, as multiplications in
++ * the mesh are detected via the (orig, seqno) pair.
++ * So we can be a bit more liberal here and allow
++ * identical frames from the same orig which the source
++ * host might have sent multiple times on purpose.
++ */
++ if (batadv_compare_eth(entry->orig, orig))
++ continue;
++ }
+
+ /* this entry seems to match: same crc, not too old,
+ * and from another gw. therefore return true to forbid it.
+@@ -1642,7 +1658,14 @@ bool batadv_bla_check_bcast_duplist(struct batadv_priv *bat_priv,
+ entry = &bat_priv->bla.bcast_duplist[curr];
+ entry->crc = crc;
+ entry->entrytime = jiffies;
+- ether_addr_copy(entry->orig, bcast_packet->orig);
++
++ /* known originator */
++ if (orig)
++ ether_addr_copy(entry->orig, orig);
++ /* anonymous originator */
++ else
++ eth_zero_addr(entry->orig);
++
+ bat_priv->bla.bcast_duplist_curr = curr;
+
+ out:
+@@ -1651,6 +1674,48 @@ out:
+ return ret;
+ }
+
++/**
++ * batadv_bla_check_ucast_duplist() - Check if a frame is in the broadcast dup.
++ * @bat_priv: the bat priv with all the soft interface information
++ * @skb: contains the multicast packet to be checked, decapsulated from a
++ * unicast_packet
++ *
++ * Check if it is on our broadcast list. Another gateway might have sent the
++ * same packet because it is connected to the same backbone, so we have to
++ * remove this duplicate.
++ *
++ * Return: true if a packet is in the duplicate list, false otherwise.
++ */
++static bool batadv_bla_check_ucast_duplist(struct batadv_priv *bat_priv,
++ struct sk_buff *skb)
++{
++ return batadv_bla_check_duplist(bat_priv, skb, (u8 *)skb->data, NULL);
++}
++
++/**
++ * batadv_bla_check_bcast_duplist() - Check if a frame is in the broadcast dup.
++ * @bat_priv: the bat priv with all the soft interface information
++ * @skb: contains the bcast_packet to be checked
++ *
++ * Check if it is on our broadcast list. Another gateway might have sent the
++ * same packet because it is connected to the same backbone, so we have to
++ * remove this duplicate.
++ *
++ * Return: true if a packet is in the duplicate list, false otherwise.
++ */
++bool batadv_bla_check_bcast_duplist(struct batadv_priv *bat_priv,
++ struct sk_buff *skb)
++{
++ struct batadv_bcast_packet *bcast_packet;
++ u8 *payload_ptr;
++
++ bcast_packet = (struct batadv_bcast_packet *)skb->data;
++ payload_ptr = (u8 *)(bcast_packet + 1);
++
++ return batadv_bla_check_duplist(bat_priv, skb, payload_ptr,
++ bcast_packet->orig);
++}
++
+ /**
+ * batadv_bla_is_backbone_gw_orig() - Check if the originator is a gateway for
+ * the VLAN identified by vid.
+@@ -1812,7 +1877,7 @@ batadv_bla_loopdetect_check(struct batadv_priv *bat_priv, struct sk_buff *skb,
+ * @bat_priv: the bat priv with all the soft interface information
+ * @skb: the frame to be checked
+ * @vid: the VLAN ID of the frame
+- * @is_bcast: the packet came in a broadcast packet type.
++ * @packet_type: the batman packet type this frame came in
+ *
+ * batadv_bla_rx avoidance checks if:
+ * * we have to race for a claim
+@@ -1824,7 +1889,7 @@ batadv_bla_loopdetect_check(struct batadv_priv *bat_priv, struct sk_buff *skb,
+ * further process the skb.
+ */
+ bool batadv_bla_rx(struct batadv_priv *bat_priv, struct sk_buff *skb,
+- unsigned short vid, bool is_bcast)
++ unsigned short vid, int packet_type)
+ {
+ struct batadv_bla_backbone_gw *backbone_gw;
+ struct ethhdr *ethhdr;
+@@ -1846,9 +1911,32 @@ bool batadv_bla_rx(struct batadv_priv *bat_priv, struct sk_buff *skb,
+ goto handled;
+
+ if (unlikely(atomic_read(&bat_priv->bla.num_requests)))
+- /* don't allow broadcasts while requests are in flight */
+- if (is_multicast_ether_addr(ethhdr->h_dest) && is_bcast)
+- goto handled;
++ /* don't allow multicast packets while requests are in flight */
++ if (is_multicast_ether_addr(ethhdr->h_dest))
++ /* Both broadcast flooding or multicast-via-unicasts
++ * delivery might send to multiple backbone gateways
++ * sharing the same LAN and therefore need to coordinate
++ * which backbone gateway forwards into the LAN,
++ * by claiming the payload source address.
++ *
++ * Broadcast flooding and multicast-via-unicasts
++ * delivery use the following two batman packet types.
++ * Note: explicitly exclude BATADV_UNICAST_4ADDR,
++ * as the DHCP gateway feature will send explicitly
++ * to only one BLA gateway, so the claiming process
++ * should be avoided there.
++ */
++ if (packet_type == BATADV_BCAST ||
++ packet_type == BATADV_UNICAST)
++ goto handled;
++
++ /* potential duplicates from foreign BLA backbone gateways via
++ * multicast-in-unicast packets
++ */
++ if (is_multicast_ether_addr(ethhdr->h_dest) &&
++ packet_type == BATADV_UNICAST &&
++ batadv_bla_check_ucast_duplist(bat_priv, skb))
++ goto handled;
+
+ ether_addr_copy(search_claim.addr, ethhdr->h_source);
+ search_claim.vid = vid;
+@@ -1883,13 +1971,14 @@ bool batadv_bla_rx(struct batadv_priv *bat_priv, struct sk_buff *skb,
+ goto allow;
+ }
+
+- /* if it is a broadcast ... */
+- if (is_multicast_ether_addr(ethhdr->h_dest) && is_bcast) {
++ /* if it is a multicast ... */
++ if (is_multicast_ether_addr(ethhdr->h_dest) &&
++ (packet_type == BATADV_BCAST || packet_type == BATADV_UNICAST)) {
+ /* ... drop it. the responsible gateway is in charge.
+ *
+- * We need to check is_bcast because with the gateway
++ * We need to check packet type because with the gateway
+ * feature, broadcasts (like DHCP requests) may be sent
+- * using a unicast packet type.
++ * using a unicast 4 address packet type. See comment above.
+ */
+ goto handled;
+ } else {
+diff --git a/net/batman-adv/bridge_loop_avoidance.h b/net/batman-adv/bridge_loop_avoidance.h
+index 41edb2c4a3277..a81c41b636f93 100644
+--- a/net/batman-adv/bridge_loop_avoidance.h
++++ b/net/batman-adv/bridge_loop_avoidance.h
+@@ -35,7 +35,7 @@ static inline bool batadv_bla_is_loopdetect_mac(const uint8_t *mac)
+
+ #ifdef CONFIG_BATMAN_ADV_BLA
+ bool batadv_bla_rx(struct batadv_priv *bat_priv, struct sk_buff *skb,
+- unsigned short vid, bool is_bcast);
++ unsigned short vid, int packet_type);
+ bool batadv_bla_tx(struct batadv_priv *bat_priv, struct sk_buff *skb,
+ unsigned short vid);
+ bool batadv_bla_is_backbone_gw(struct sk_buff *skb,
+@@ -66,7 +66,7 @@ bool batadv_bla_check_claim(struct batadv_priv *bat_priv, u8 *addr,
+
+ static inline bool batadv_bla_rx(struct batadv_priv *bat_priv,
+ struct sk_buff *skb, unsigned short vid,
+- bool is_bcast)
++ int packet_type)
+ {
+ return false;
+ }
+diff --git a/net/batman-adv/multicast.c b/net/batman-adv/multicast.c
+index 9ebdc1e864b96..3aaa6612f8c9f 100644
+--- a/net/batman-adv/multicast.c
++++ b/net/batman-adv/multicast.c
+@@ -51,6 +51,7 @@
+ #include <uapi/linux/batadv_packet.h>
+ #include <uapi/linux/batman_adv.h>
+
++#include "bridge_loop_avoidance.h"
+ #include "hard-interface.h"
+ #include "hash.h"
+ #include "log.h"
+@@ -1434,6 +1435,35 @@ batadv_mcast_forw_mode(struct batadv_priv *bat_priv, struct sk_buff *skb,
+ return BATADV_FORW_ALL;
+ }
+
++/**
++ * batadv_mcast_forw_send_orig() - send a multicast packet to an originator
++ * @bat_priv: the bat priv with all the soft interface information
++ * @skb: the multicast packet to send
++ * @vid: the vlan identifier
++ * @orig_node: the originator to send the packet to
++ *
++ * Return: NET_XMIT_DROP in case of error or NET_XMIT_SUCCESS otherwise.
++ */
++int batadv_mcast_forw_send_orig(struct batadv_priv *bat_priv,
++ struct sk_buff *skb,
++ unsigned short vid,
++ struct batadv_orig_node *orig_node)
++{
++ /* Avoid sending multicast-in-unicast packets to other BLA
++ * gateways - they already got the frame from the LAN side
++ * we share with them.
++ * TODO: Refactor to take BLA into account earlier, to avoid
++ * reducing the mcast_fanout count.
++ */
++ if (batadv_bla_is_backbone_gw_orig(bat_priv, orig_node->orig, vid)) {
++ dev_kfree_skb(skb);
++ return NET_XMIT_SUCCESS;
++ }
++
++ return batadv_send_skb_unicast(bat_priv, skb, BATADV_UNICAST, 0,
++ orig_node, vid);
++}
++
+ /**
+ * batadv_mcast_forw_tt() - forwards a packet to multicast listeners
+ * @bat_priv: the bat priv with all the soft interface information
+@@ -1471,8 +1501,8 @@ batadv_mcast_forw_tt(struct batadv_priv *bat_priv, struct sk_buff *skb,
+ break;
+ }
+
+- batadv_send_skb_unicast(bat_priv, newskb, BATADV_UNICAST, 0,
+- orig_entry->orig_node, vid);
++ batadv_mcast_forw_send_orig(bat_priv, newskb, vid,
++ orig_entry->orig_node);
+ }
+ rcu_read_unlock();
+
+@@ -1513,8 +1543,7 @@ batadv_mcast_forw_want_all_ipv4(struct batadv_priv *bat_priv,
+ break;
+ }
+
+- batadv_send_skb_unicast(bat_priv, newskb, BATADV_UNICAST, 0,
+- orig_node, vid);
++ batadv_mcast_forw_send_orig(bat_priv, newskb, vid, orig_node);
+ }
+ rcu_read_unlock();
+ return ret;
+@@ -1551,8 +1580,7 @@ batadv_mcast_forw_want_all_ipv6(struct batadv_priv *bat_priv,
+ break;
+ }
+
+- batadv_send_skb_unicast(bat_priv, newskb, BATADV_UNICAST, 0,
+- orig_node, vid);
++ batadv_mcast_forw_send_orig(bat_priv, newskb, vid, orig_node);
+ }
+ rcu_read_unlock();
+ return ret;
+@@ -1618,8 +1646,7 @@ batadv_mcast_forw_want_all_rtr4(struct batadv_priv *bat_priv,
+ break;
+ }
+
+- batadv_send_skb_unicast(bat_priv, newskb, BATADV_UNICAST, 0,
+- orig_node, vid);
++ batadv_mcast_forw_send_orig(bat_priv, newskb, vid, orig_node);
+ }
+ rcu_read_unlock();
+ return ret;
+@@ -1656,8 +1683,7 @@ batadv_mcast_forw_want_all_rtr6(struct batadv_priv *bat_priv,
+ break;
+ }
+
+- batadv_send_skb_unicast(bat_priv, newskb, BATADV_UNICAST, 0,
+- orig_node, vid);
++ batadv_mcast_forw_send_orig(bat_priv, newskb, vid, orig_node);
+ }
+ rcu_read_unlock();
+ return ret;
+diff --git a/net/batman-adv/multicast.h b/net/batman-adv/multicast.h
+index ebf825991ecd9..3e114bc5ca3bb 100644
+--- a/net/batman-adv/multicast.h
++++ b/net/batman-adv/multicast.h
+@@ -46,6 +46,11 @@ enum batadv_forw_mode
+ batadv_mcast_forw_mode(struct batadv_priv *bat_priv, struct sk_buff *skb,
+ struct batadv_orig_node **mcast_single_orig);
+
++int batadv_mcast_forw_send_orig(struct batadv_priv *bat_priv,
++ struct sk_buff *skb,
++ unsigned short vid,
++ struct batadv_orig_node *orig_node);
++
+ int batadv_mcast_forw_send(struct batadv_priv *bat_priv, struct sk_buff *skb,
+ unsigned short vid);
+
+@@ -71,6 +76,16 @@ batadv_mcast_forw_mode(struct batadv_priv *bat_priv, struct sk_buff *skb,
+ return BATADV_FORW_ALL;
+ }
+
++static inline int
++batadv_mcast_forw_send_orig(struct batadv_priv *bat_priv,
++ struct sk_buff *skb,
++ unsigned short vid,
++ struct batadv_orig_node *orig_node)
++{
++ kfree_skb(skb);
++ return NET_XMIT_DROP;
++}
++
+ static inline int
+ batadv_mcast_forw_send(struct batadv_priv *bat_priv, struct sk_buff *skb,
+ unsigned short vid)
+diff --git a/net/batman-adv/routing.c b/net/batman-adv/routing.c
+index d343382e96641..e6515df546a60 100644
+--- a/net/batman-adv/routing.c
++++ b/net/batman-adv/routing.c
+@@ -826,6 +826,10 @@ static bool batadv_check_unicast_ttvn(struct batadv_priv *bat_priv,
+ vid = batadv_get_vid(skb, hdr_len);
+ ethhdr = (struct ethhdr *)(skb->data + hdr_len);
+
++ /* do not reroute multicast frames in a unicast header */
++ if (is_multicast_ether_addr(ethhdr->h_dest))
++ return true;
++
+ /* check if the destination client was served by this node and it is now
+ * roaming. In this case, it means that the node has got a ROAM_ADV
+ * message and that it knows the new destination in the mesh to re-route
+diff --git a/net/batman-adv/soft-interface.c b/net/batman-adv/soft-interface.c
+index f1f1c86f34193..012b6d0b87ead 100644
+--- a/net/batman-adv/soft-interface.c
++++ b/net/batman-adv/soft-interface.c
+@@ -364,9 +364,8 @@ send:
+ goto dropped;
+ ret = batadv_send_skb_via_gw(bat_priv, skb, vid);
+ } else if (mcast_single_orig) {
+- ret = batadv_send_skb_unicast(bat_priv, skb,
+- BATADV_UNICAST, 0,
+- mcast_single_orig, vid);
++ ret = batadv_mcast_forw_send_orig(bat_priv, skb, vid,
++ mcast_single_orig);
+ } else if (forw_mode == BATADV_FORW_SOME) {
+ ret = batadv_mcast_forw_send(bat_priv, skb, vid);
+ } else {
+@@ -425,10 +424,10 @@ void batadv_interface_rx(struct net_device *soft_iface,
+ struct vlan_ethhdr *vhdr;
+ struct ethhdr *ethhdr;
+ unsigned short vid;
+- bool is_bcast;
++ int packet_type;
+
+ batadv_bcast_packet = (struct batadv_bcast_packet *)skb->data;
+- is_bcast = (batadv_bcast_packet->packet_type == BATADV_BCAST);
++ packet_type = batadv_bcast_packet->packet_type;
+
+ skb_pull_rcsum(skb, hdr_size);
+ skb_reset_mac_header(skb);
+@@ -471,7 +470,7 @@ void batadv_interface_rx(struct net_device *soft_iface,
+ /* Let the bridge loop avoidance check the packet. If will
+ * not handle it, we can safely push it up.
+ */
+- if (batadv_bla_rx(bat_priv, skb, vid, is_bcast))
++ if (batadv_bla_rx(bat_priv, skb, vid, packet_type))
+ goto out;
+
+ if (orig_node)
+diff --git a/net/core/filter.c b/net/core/filter.c
+index d13ea1642b974..0261531d4fda6 100644
+--- a/net/core/filter.c
++++ b/net/core/filter.c
+@@ -6998,8 +6998,6 @@ static int bpf_gen_ld_abs(const struct bpf_insn *orig,
+ bool indirect = BPF_MODE(orig->code) == BPF_IND;
+ struct bpf_insn *insn = insn_buf;
+
+- /* We're guaranteed here that CTX is in R6. */
+- *insn++ = BPF_MOV64_REG(BPF_REG_1, BPF_REG_CTX);
+ if (!indirect) {
+ *insn++ = BPF_MOV64_IMM(BPF_REG_2, orig->imm);
+ } else {
+@@ -7007,6 +7005,8 @@ static int bpf_gen_ld_abs(const struct bpf_insn *orig,
+ if (orig->imm)
+ *insn++ = BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, orig->imm);
+ }
++ /* We're guaranteed here that CTX is in R6. */
++ *insn++ = BPF_MOV64_REG(BPF_REG_1, BPF_REG_CTX);
+
+ switch (BPF_SIZE(orig->code)) {
+ case BPF_B:
+diff --git a/net/mac80211/mlme.c b/net/mac80211/mlme.c
+index b2a9d47cf86dd..c85186799d059 100644
+--- a/net/mac80211/mlme.c
++++ b/net/mac80211/mlme.c
+@@ -4853,6 +4853,7 @@ static int ieee80211_prep_channel(struct ieee80211_sub_if_data *sdata,
+ struct ieee80211_supported_band *sband;
+ struct cfg80211_chan_def chandef;
+ bool is_6ghz = cbss->channel->band == NL80211_BAND_6GHZ;
++ bool is_5ghz = cbss->channel->band == NL80211_BAND_5GHZ;
+ struct ieee80211_bss *bss = (void *)cbss->priv;
+ int ret;
+ u32 i;
+@@ -4871,7 +4872,7 @@ static int ieee80211_prep_channel(struct ieee80211_sub_if_data *sdata,
+ ifmgd->flags |= IEEE80211_STA_DISABLE_HE;
+ }
+
+- if (!sband->vht_cap.vht_supported && !is_6ghz) {
++ if (!sband->vht_cap.vht_supported && is_5ghz) {
+ ifmgd->flags |= IEEE80211_STA_DISABLE_VHT;
+ ifmgd->flags |= IEEE80211_STA_DISABLE_HE;
+ }
+diff --git a/net/mac80211/util.c b/net/mac80211/util.c
+index dd9f5c7a1ade6..7b1f3645603ca 100644
+--- a/net/mac80211/util.c
++++ b/net/mac80211/util.c
+@@ -3354,9 +3354,10 @@ bool ieee80211_chandef_he_6ghz_oper(struct ieee80211_sub_if_data *sdata,
+ he_chandef.center_freq1 =
+ ieee80211_channel_to_frequency(he_6ghz_oper->ccfs0,
+ NL80211_BAND_6GHZ);
+- he_chandef.center_freq2 =
+- ieee80211_channel_to_frequency(he_6ghz_oper->ccfs1,
+- NL80211_BAND_6GHZ);
++ if (support_80_80 || support_160)
++ he_chandef.center_freq2 =
++ ieee80211_channel_to_frequency(he_6ghz_oper->ccfs1,
++ NL80211_BAND_6GHZ);
+ }
+
+ if (!cfg80211_chandef_valid(&he_chandef)) {
+diff --git a/net/mac802154/tx.c b/net/mac802154/tx.c
+index ab52811523e99..c829e4a753256 100644
+--- a/net/mac802154/tx.c
++++ b/net/mac802154/tx.c
+@@ -34,11 +34,11 @@ void ieee802154_xmit_worker(struct work_struct *work)
+ if (res)
+ goto err_tx;
+
+- ieee802154_xmit_complete(&local->hw, skb, false);
+-
+ dev->stats.tx_packets++;
+ dev->stats.tx_bytes += skb->len;
+
++ ieee802154_xmit_complete(&local->hw, skb, false);
++
+ return;
+
+ err_tx:
+@@ -78,6 +78,8 @@ ieee802154_tx(struct ieee802154_local *local, struct sk_buff *skb)
+
+ /* async is priority, otherwise sync is fallback */
+ if (local->ops->xmit_async) {
++ unsigned int len = skb->len;
++
+ ret = drv_xmit_async(local, skb);
+ if (ret) {
+ ieee802154_wake_queue(&local->hw);
+@@ -85,7 +87,7 @@ ieee802154_tx(struct ieee802154_local *local, struct sk_buff *skb)
+ }
+
+ dev->stats.tx_packets++;
+- dev->stats.tx_bytes += skb->len;
++ dev->stats.tx_bytes += len;
+ } else {
+ local->tx_skb = skb;
+ queue_work(local->workqueue, &local->tx_work);
+diff --git a/net/netfilter/nf_conntrack_netlink.c b/net/netfilter/nf_conntrack_netlink.c
+index 832eabecfbddc..c3a4214dc9588 100644
+--- a/net/netfilter/nf_conntrack_netlink.c
++++ b/net/netfilter/nf_conntrack_netlink.c
+@@ -851,7 +851,6 @@ static int ctnetlink_done(struct netlink_callback *cb)
+ }
+
+ struct ctnetlink_filter {
+- u_int32_t cta_flags;
+ u8 family;
+
+ u_int32_t orig_flags;
+@@ -906,10 +905,6 @@ static int ctnetlink_parse_tuple_filter(const struct nlattr * const cda[],
+ struct nf_conntrack_zone *zone,
+ u_int32_t flags);
+
+-/* applied on filters */
+-#define CTA_FILTER_F_CTA_MARK (1 << 0)
+-#define CTA_FILTER_F_CTA_MARK_MASK (1 << 1)
+-
+ static struct ctnetlink_filter *
+ ctnetlink_alloc_filter(const struct nlattr * const cda[], u8 family)
+ {
+@@ -930,14 +925,10 @@ ctnetlink_alloc_filter(const struct nlattr * const cda[], u8 family)
+ #ifdef CONFIG_NF_CONNTRACK_MARK
+ if (cda[CTA_MARK]) {
+ filter->mark.val = ntohl(nla_get_be32(cda[CTA_MARK]));
+- filter->cta_flags |= CTA_FILTER_FLAG(CTA_MARK);
+-
+- if (cda[CTA_MARK_MASK]) {
++ if (cda[CTA_MARK_MASK])
+ filter->mark.mask = ntohl(nla_get_be32(cda[CTA_MARK_MASK]));
+- filter->cta_flags |= CTA_FILTER_FLAG(CTA_MARK_MASK);
+- } else {
++ else
+ filter->mark.mask = 0xffffffff;
+- }
+ } else if (cda[CTA_MARK_MASK]) {
+ err = -EINVAL;
+ goto err_filter;
+@@ -1117,11 +1108,7 @@ static int ctnetlink_filter_match(struct nf_conn *ct, void *data)
+ }
+
+ #ifdef CONFIG_NF_CONNTRACK_MARK
+- if ((filter->cta_flags & CTA_FILTER_FLAG(CTA_MARK_MASK)) &&
+- (ct->mark & filter->mark.mask) != filter->mark.val)
+- goto ignore_entry;
+- else if ((filter->cta_flags & CTA_FILTER_FLAG(CTA_MARK)) &&
+- ct->mark != filter->mark.val)
++ if ((ct->mark & filter->mark.mask) != filter->mark.val)
+ goto ignore_entry;
+ #endif
+
+@@ -1404,7 +1391,8 @@ ctnetlink_parse_tuple_filter(const struct nlattr * const cda[],
+ if (err < 0)
+ return err;
+
+-
++ if (l3num != NFPROTO_IPV4 && l3num != NFPROTO_IPV6)
++ return -EOPNOTSUPP;
+ tuple->src.l3num = l3num;
+
+ if (flags & CTA_FILTER_FLAG(CTA_IP_DST) ||
+diff --git a/net/netfilter/nf_conntrack_proto.c b/net/netfilter/nf_conntrack_proto.c
+index a0560d175a7ff..aaf4293ddd459 100644
+--- a/net/netfilter/nf_conntrack_proto.c
++++ b/net/netfilter/nf_conntrack_proto.c
+@@ -565,6 +565,7 @@ static int nf_ct_netns_inet_get(struct net *net)
+ int err;
+
+ err = nf_ct_netns_do_get(net, NFPROTO_IPV4);
++#if IS_ENABLED(CONFIG_IPV6)
+ if (err < 0)
+ goto err1;
+ err = nf_ct_netns_do_get(net, NFPROTO_IPV6);
+@@ -575,6 +576,7 @@ static int nf_ct_netns_inet_get(struct net *net)
+ err2:
+ nf_ct_netns_put(net, NFPROTO_IPV4);
+ err1:
++#endif
+ return err;
+ }
+
+diff --git a/net/netfilter/nft_meta.c b/net/netfilter/nft_meta.c
+index 7bc6537f3ccb5..b37bd02448d8c 100644
+--- a/net/netfilter/nft_meta.c
++++ b/net/netfilter/nft_meta.c
+@@ -147,11 +147,11 @@ nft_meta_get_eval_skugid(enum nft_meta_keys key,
+
+ switch (key) {
+ case NFT_META_SKUID:
+- *dest = from_kuid_munged(&init_user_ns,
++ *dest = from_kuid_munged(sock_net(sk)->user_ns,
+ sock->file->f_cred->fsuid);
+ break;
+ case NFT_META_SKGID:
+- *dest = from_kgid_munged(&init_user_ns,
++ *dest = from_kgid_munged(sock_net(sk)->user_ns,
+ sock->file->f_cred->fsgid);
+ break;
+ default:
+diff --git a/net/sunrpc/svcsock.c b/net/sunrpc/svcsock.c
+index c537272f9c7ed..183d2465df7a3 100644
+--- a/net/sunrpc/svcsock.c
++++ b/net/sunrpc/svcsock.c
+@@ -228,7 +228,7 @@ static int svc_one_sock_name(struct svc_sock *svsk, char *buf, int remaining)
+ static void svc_flush_bvec(const struct bio_vec *bvec, size_t size, size_t seek)
+ {
+ struct bvec_iter bi = {
+- .bi_size = size,
++ .bi_size = size + seek,
+ };
+ struct bio_vec bv;
+
+diff --git a/net/wireless/Kconfig b/net/wireless/Kconfig
+index faf74850a1b52..27026f587fa61 100644
+--- a/net/wireless/Kconfig
++++ b/net/wireless/Kconfig
+@@ -217,6 +217,7 @@ config LIB80211_CRYPT_WEP
+
+ config LIB80211_CRYPT_CCMP
+ tristate
++ select CRYPTO
+ select CRYPTO_AES
+ select CRYPTO_CCM
+
+diff --git a/net/wireless/util.c b/net/wireless/util.c
+index a72d2ad6ade8b..0f95844e73d80 100644
+--- a/net/wireless/util.c
++++ b/net/wireless/util.c
+@@ -95,7 +95,7 @@ u32 ieee80211_channel_to_freq_khz(int chan, enum nl80211_band band)
+ /* see 802.11ax D6.1 27.3.23.2 */
+ if (chan == 2)
+ return MHZ_TO_KHZ(5935);
+- if (chan <= 253)
++ if (chan <= 233)
+ return MHZ_TO_KHZ(5950 + chan * 5);
+ break;
+ case NL80211_BAND_60GHZ:
+diff --git a/net/xdp/xdp_umem.c b/net/xdp/xdp_umem.c
+index e97db37354e4f..b010bfde01490 100644
+--- a/net/xdp/xdp_umem.c
++++ b/net/xdp/xdp_umem.c
+@@ -303,10 +303,10 @@ static int xdp_umem_account_pages(struct xdp_umem *umem)
+
+ static int xdp_umem_reg(struct xdp_umem *umem, struct xdp_umem_reg *mr)
+ {
++ u32 npgs_rem, chunk_size = mr->chunk_size, headroom = mr->headroom;
+ bool unaligned_chunks = mr->flags & XDP_UMEM_UNALIGNED_CHUNK_FLAG;
+- u32 chunk_size = mr->chunk_size, headroom = mr->headroom;
+ u64 npgs, addr = mr->addr, size = mr->len;
+- unsigned int chunks, chunks_per_page;
++ unsigned int chunks, chunks_rem;
+ int err;
+
+ if (chunk_size < XDP_UMEM_MIN_CHUNK_SIZE || chunk_size > PAGE_SIZE) {
+@@ -336,19 +336,18 @@ static int xdp_umem_reg(struct xdp_umem *umem, struct xdp_umem_reg *mr)
+ if ((addr + size) < addr)
+ return -EINVAL;
+
+- npgs = size >> PAGE_SHIFT;
++ npgs = div_u64_rem(size, PAGE_SIZE, &npgs_rem);
++ if (npgs_rem)
++ npgs++;
+ if (npgs > U32_MAX)
+ return -EINVAL;
+
+- chunks = (unsigned int)div_u64(size, chunk_size);
++ chunks = (unsigned int)div_u64_rem(size, chunk_size, &chunks_rem);
+ if (chunks == 0)
+ return -EINVAL;
+
+- if (!unaligned_chunks) {
+- chunks_per_page = PAGE_SIZE / chunk_size;
+- if (chunks < chunks_per_page || chunks % chunks_per_page)
+- return -EINVAL;
+- }
++ if (!unaligned_chunks && chunks_rem)
++ return -EINVAL;
+
+ if (headroom >= chunk_size - XDP_PACKET_HEADROOM)
+ return -EINVAL;
+diff --git a/security/device_cgroup.c b/security/device_cgroup.c
+index 43ab0ad45c1b6..04375df52fc9a 100644
+--- a/security/device_cgroup.c
++++ b/security/device_cgroup.c
+@@ -354,7 +354,8 @@ static bool match_exception_partial(struct list_head *exceptions, short type,
+ {
+ struct dev_exception_item *ex;
+
+- list_for_each_entry_rcu(ex, exceptions, list) {
++ list_for_each_entry_rcu(ex, exceptions, list,
++ lockdep_is_held(&devcgroup_mutex)) {
+ if ((type & DEVCG_DEV_BLOCK) && !(ex->type & DEVCG_DEV_BLOCK))
+ continue;
+ if ((type & DEVCG_DEV_CHAR) && !(ex->type & DEVCG_DEV_CHAR))
+diff --git a/sound/pci/asihpi/hpioctl.c b/sound/pci/asihpi/hpioctl.c
+index 496dcde9715d6..9790f5108a166 100644
+--- a/sound/pci/asihpi/hpioctl.c
++++ b/sound/pci/asihpi/hpioctl.c
+@@ -343,7 +343,7 @@ int asihpi_adapter_probe(struct pci_dev *pci_dev,
+ struct hpi_message hm;
+ struct hpi_response hr;
+ struct hpi_adapter adapter;
+- struct hpi_pci pci;
++ struct hpi_pci pci = { 0 };
+
+ memset(&adapter, 0, sizeof(adapter));
+
+@@ -499,7 +499,7 @@ int asihpi_adapter_probe(struct pci_dev *pci_dev,
+ return 0;
+
+ err:
+- for (idx = 0; idx < HPI_MAX_ADAPTER_MEM_SPACES; idx++) {
++ while (--idx >= 0) {
+ if (pci.ap_mem_base[idx]) {
+ iounmap(pci.ap_mem_base[idx]);
+ pci.ap_mem_base[idx] = NULL;
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 77e2e6ede31dc..601683e05ccca 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -3419,7 +3419,11 @@ static void alc256_shutup(struct hda_codec *codec)
+
+ /* 3k pull low control for Headset jack. */
+ /* NOTE: call this before clearing the pin, otherwise codec stalls */
+- alc_update_coef_idx(codec, 0x46, 0, 3 << 12);
++ /* If disable 3k pulldown control for alc257, the Mic detection will not work correctly
++ * when booting with headset plugged. So skip setting it for the codec alc257
++ */
++ if (codec->core.vendor_id != 0x10ec0257)
++ alc_update_coef_idx(codec, 0x46, 0, 3 << 12);
+
+ if (!spec->no_shutup_pins)
+ snd_hda_codec_write(codec, hp_pin, 0,
+@@ -6062,6 +6066,7 @@ static void alc_fixup_thinkpad_acpi(struct hda_codec *codec,
+ #include "hp_x360_helper.c"
+
+ enum {
++ ALC269_FIXUP_GPIO2,
+ ALC269_FIXUP_SONY_VAIO,
+ ALC275_FIXUP_SONY_VAIO_GPIO2,
+ ALC269_FIXUP_DELL_M101Z,
+@@ -6243,6 +6248,10 @@ enum {
+ };
+
+ static const struct hda_fixup alc269_fixups[] = {
++ [ALC269_FIXUP_GPIO2] = {
++ .type = HDA_FIXUP_FUNC,
++ .v.func = alc_fixup_gpio2,
++ },
+ [ALC269_FIXUP_SONY_VAIO] = {
+ .type = HDA_FIXUP_PINCTLS,
+ .v.pins = (const struct hda_pintbl[]) {
+@@ -7062,6 +7071,8 @@ static const struct hda_fixup alc269_fixups[] = {
+ [ALC233_FIXUP_LENOVO_MULTI_CODECS] = {
+ .type = HDA_FIXUP_FUNC,
+ .v.func = alc233_alc662_fixup_lenovo_dual_codecs,
++ .chained = true,
++ .chain_id = ALC269_FIXUP_GPIO2
+ },
+ [ALC233_FIXUP_ACER_HEADSET_MIC] = {
+ .type = HDA_FIXUP_VERBS,
+diff --git a/sound/soc/codecs/pcm3168a.c b/sound/soc/codecs/pcm3168a.c
+index 9711fab296ebc..045c6f8b26bef 100644
+--- a/sound/soc/codecs/pcm3168a.c
++++ b/sound/soc/codecs/pcm3168a.c
+@@ -306,6 +306,13 @@ static int pcm3168a_set_dai_sysclk(struct snd_soc_dai *dai,
+ struct pcm3168a_priv *pcm3168a = snd_soc_component_get_drvdata(dai->component);
+ int ret;
+
++ /*
++ * Some sound card sets 0 Hz as reset,
++ * but it is impossible to set. Ignore it here
++ */
++ if (freq == 0)
++ return 0;
++
+ if (freq > PCM3168A_MAX_SYSCLK)
+ return -EINVAL;
+
+diff --git a/sound/soc/codecs/wm8994.c b/sound/soc/codecs/wm8994.c
+index 55d0b9be6ff00..58f21329d0e99 100644
+--- a/sound/soc/codecs/wm8994.c
++++ b/sound/soc/codecs/wm8994.c
+@@ -3491,6 +3491,8 @@ int wm8994_mic_detect(struct snd_soc_component *component, struct snd_soc_jack *
+ return -EINVAL;
+ }
+
++ pm_runtime_get_sync(component->dev);
++
+ switch (micbias) {
+ case 1:
+ micdet = &wm8994->micdet[0];
+@@ -3538,6 +3540,8 @@ int wm8994_mic_detect(struct snd_soc_component *component, struct snd_soc_jack *
+
+ snd_soc_dapm_sync(dapm);
+
++ pm_runtime_put(component->dev);
++
+ return 0;
+ }
+ EXPORT_SYMBOL_GPL(wm8994_mic_detect);
+@@ -3905,6 +3909,8 @@ int wm8958_mic_detect(struct snd_soc_component *component, struct snd_soc_jack *
+ return -EINVAL;
+ }
+
++ pm_runtime_get_sync(component->dev);
++
+ if (jack) {
+ snd_soc_dapm_force_enable_pin(dapm, "CLK_SYS");
+ snd_soc_dapm_sync(dapm);
+@@ -3973,6 +3979,8 @@ int wm8958_mic_detect(struct snd_soc_component *component, struct snd_soc_jack *
+ snd_soc_dapm_sync(dapm);
+ }
+
++ pm_runtime_put(component->dev);
++
+ return 0;
+ }
+ EXPORT_SYMBOL_GPL(wm8958_mic_detect);
+@@ -4166,11 +4174,13 @@ static int wm8994_component_probe(struct snd_soc_component *component)
+ wm8994->hubs.dcs_readback_mode = 2;
+ break;
+ }
++ wm8994->hubs.micd_scthr = true;
+ break;
+
+ case WM8958:
+ wm8994->hubs.dcs_readback_mode = 1;
+ wm8994->hubs.hp_startup_mode = 1;
++ wm8994->hubs.micd_scthr = true;
+
+ switch (control->revision) {
+ case 0:
+diff --git a/sound/soc/codecs/wm_hubs.c b/sound/soc/codecs/wm_hubs.c
+index e93af7edd8f75..dd421e2fe7b21 100644
+--- a/sound/soc/codecs/wm_hubs.c
++++ b/sound/soc/codecs/wm_hubs.c
+@@ -1223,6 +1223,9 @@ int wm_hubs_handle_analogue_pdata(struct snd_soc_component *component,
+ snd_soc_component_update_bits(component, WM8993_ADDITIONAL_CONTROL,
+ WM8993_LINEOUT2_FB, WM8993_LINEOUT2_FB);
+
++ if (!hubs->micd_scthr)
++ return 0;
++
+ snd_soc_component_update_bits(component, WM8993_MICBIAS,
+ WM8993_JD_SCTHR_MASK | WM8993_JD_THR_MASK |
+ WM8993_MICB1_LVL | WM8993_MICB2_LVL,
+diff --git a/sound/soc/codecs/wm_hubs.h b/sound/soc/codecs/wm_hubs.h
+index 4b8e5f0d6e32d..988b29e630607 100644
+--- a/sound/soc/codecs/wm_hubs.h
++++ b/sound/soc/codecs/wm_hubs.h
+@@ -27,6 +27,7 @@ struct wm_hubs_data {
+ int hp_startup_mode;
+ int series_startup;
+ int no_series_update;
++ bool micd_scthr;
+
+ bool no_cache_dac_hp_direct;
+ struct list_head dcs_cache;
+diff --git a/sound/soc/intel/boards/bytcr_rt5640.c b/sound/soc/intel/boards/bytcr_rt5640.c
+index 1fdb70b9e4788..5f885062145fe 100644
+--- a/sound/soc/intel/boards/bytcr_rt5640.c
++++ b/sound/soc/intel/boards/bytcr_rt5640.c
+@@ -591,6 +591,16 @@ static const struct dmi_system_id byt_rt5640_quirk_table[] = {
+ BYT_RT5640_SSP0_AIF1 |
+ BYT_RT5640_MCLK_EN),
+ },
++ { /* MPMAN Converter 9, similar hw as the I.T.Works TW891 2-in-1 */
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "MPMAN"),
++ DMI_MATCH(DMI_PRODUCT_NAME, "Converter9"),
++ },
++ .driver_data = (void *)(BYTCR_INPUT_DEFAULTS |
++ BYT_RT5640_MONO_SPEAKER |
++ BYT_RT5640_SSP0_AIF1 |
++ BYT_RT5640_MCLK_EN),
++ },
+ {
+ /* MPMAN MPWIN895CL */
+ .matches = {
+diff --git a/sound/usb/quirks.c b/sound/usb/quirks.c
+index bf2d521b6768c..e680416a6a8de 100644
+--- a/sound/usb/quirks.c
++++ b/sound/usb/quirks.c
+@@ -1668,12 +1668,13 @@ void snd_usb_ctl_msg_quirk(struct usb_device *dev, unsigned int pipe,
+ && (requesttype & USB_TYPE_MASK) == USB_TYPE_CLASS)
+ msleep(20);
+
+- /* Zoom R16/24, Logitech H650e, Jabra 550a, Kingston HyperX needs a tiny
+- * delay here, otherwise requests like get/set frequency return as
+- * failed despite actually succeeding.
++ /* Zoom R16/24, Logitech H650e/H570e, Jabra 550a, Kingston HyperX
++ * needs a tiny delay here, otherwise requests like get/set
++ * frequency return as failed despite actually succeeding.
+ */
+ if ((chip->usb_id == USB_ID(0x1686, 0x00dd) ||
+ chip->usb_id == USB_ID(0x046d, 0x0a46) ||
++ chip->usb_id == USB_ID(0x046d, 0x0a56) ||
+ chip->usb_id == USB_ID(0x0b0e, 0x0349) ||
+ chip->usb_id == USB_ID(0x0951, 0x16ad)) &&
+ (requesttype & USB_TYPE_MASK) == USB_TYPE_CLASS)
+diff --git a/tools/lib/bpf/Makefile b/tools/lib/bpf/Makefile
+index bf8ed134cb8a3..c820b0be9d637 100644
+--- a/tools/lib/bpf/Makefile
++++ b/tools/lib/bpf/Makefile
+@@ -152,6 +152,7 @@ GLOBAL_SYM_COUNT = $(shell readelf -s --wide $(BPF_IN_SHARED) | \
+ awk '/GLOBAL/ && /DEFAULT/ && !/UND/ {print $$NF}' | \
+ sort -u | wc -l)
+ VERSIONED_SYM_COUNT = $(shell readelf --dyn-syms --wide $(OUTPUT)libbpf.so | \
++ awk '/GLOBAL/ && /DEFAULT/ && !/UND/ {print $$NF}' | \
+ grep -Eo '[^ ]+@LIBBPF_' | cut -d@ -f1 | sort -u | wc -l)
+
+ CMD_TARGETS = $(LIB_TARGET) $(PC_FILE)
+@@ -219,6 +220,7 @@ check_abi: $(OUTPUT)libbpf.so
+ awk '/GLOBAL/ && /DEFAULT/ && !/UND/ {print $$NF}'| \
+ sort -u > $(OUTPUT)libbpf_global_syms.tmp; \
+ readelf --dyn-syms --wide $(OUTPUT)libbpf.so | \
++ awk '/GLOBAL/ && /DEFAULT/ && !/UND/ {print $$NF}'| \
+ grep -Eo '[^ ]+@LIBBPF_' | cut -d@ -f1 | \
+ sort -u > $(OUTPUT)libbpf_versioned_syms.tmp; \
+ diff -u $(OUTPUT)libbpf_global_syms.tmp \
+diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
+index 3ac0094706b81..236c91aff48f8 100644
+--- a/tools/lib/bpf/libbpf.c
++++ b/tools/lib/bpf/libbpf.c
+@@ -5030,8 +5030,8 @@ static int bpf_object__collect_map_relos(struct bpf_object *obj,
+ int i, j, nrels, new_sz;
+ const struct btf_var_secinfo *vi = NULL;
+ const struct btf_type *sec, *var, *def;
++ struct bpf_map *map = NULL, *targ_map;
+ const struct btf_member *member;
+- struct bpf_map *map, *targ_map;
+ const char *name, *mname;
+ Elf_Data *symbols;
+ unsigned int moff;
+diff --git a/tools/objtool/check.c b/tools/objtool/check.c
+index 5e0d70a89fb87..773e6c7ee5f93 100644
+--- a/tools/objtool/check.c
++++ b/tools/objtool/check.c
+@@ -619,7 +619,7 @@ static int add_jump_destinations(struct objtool_file *file)
+ if (!is_static_jump(insn))
+ continue;
+
+- if (insn->ignore || insn->offset == FAKE_JUMP_OFFSET)
++ if (insn->offset == FAKE_JUMP_OFFSET)
+ continue;
+
+ rela = find_rela_by_dest_range(file->elf, insn->sec,
* [gentoo-commits] proj/linux-patches:5.8 commit in: /
@ 2020-10-07 12:47 Mike Pagano
From: Mike Pagano @ 2020-10-07 12:47 UTC (permalink / raw)
To: gentoo-commits
commit: 0183b5b10995269f4f30116a291631085336b482
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Oct 7 12:47:24 2020 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Oct 7 12:47:24 2020 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=0183b5b1
Linux patch 5.8.14
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1013_linux-5.8.14.patch | 3103 +++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 3107 insertions(+)
diff --git a/0000_README b/0000_README
index 0944db1..6e16f1d 100644
--- a/0000_README
+++ b/0000_README
@@ -95,6 +95,10 @@ Patch: 1012_linux-5.8.13.patch
From: http://www.kernel.org
Desc: Linux 5.8.13
+Patch: 1013_linux-5.8.14.patch
+From: http://www.kernel.org
+Desc: Linux 5.8.14
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1013_linux-5.8.14.patch b/1013_linux-5.8.14.patch
new file mode 100644
index 0000000..fe7cc03
--- /dev/null
+++ b/1013_linux-5.8.14.patch
@@ -0,0 +1,3103 @@
+diff --git a/Documentation/devicetree/bindings/gpio/sgpio-aspeed.txt b/Documentation/devicetree/bindings/gpio/sgpio-aspeed.txt
+index d4d83916c09dd..be329ea4794f8 100644
+--- a/Documentation/devicetree/bindings/gpio/sgpio-aspeed.txt
++++ b/Documentation/devicetree/bindings/gpio/sgpio-aspeed.txt
+@@ -20,8 +20,9 @@ Required properties:
+ - gpio-controller : Marks the device node as a GPIO controller
+ - interrupts : Interrupt specifier, see interrupt-controller/interrupts.txt
+ - interrupt-controller : Mark the GPIO controller as an interrupt-controller
+-- ngpios : number of GPIO lines, see gpio.txt
+- (should be multiple of 8, up to 80 pins)
++- ngpios : number of *hardware* GPIO lines, see gpio.txt. This will expose
++ 2 software GPIOs per hardware GPIO: one for hardware input, one for hardware
++ output. Up to 80 pins, must be a multiple of 8.
+ - clocks : A phandle to the APB clock for SGPM clock division
+ - bus-frequency : SGPM CLK frequency
+
+diff --git a/Makefile b/Makefile
+index 0d81d8cba48b6..33ceda527e5ef 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 8
+-SUBLEVEL = 13
++SUBLEVEL = 14
+ EXTRAVERSION =
+ NAME = Kleptomaniac Octopus
+
+diff --git a/block/blk-mq.c b/block/blk-mq.c
+index a366726094a89..8e623e0282757 100644
+--- a/block/blk-mq.c
++++ b/block/blk-mq.c
+@@ -1304,6 +1304,11 @@ bool blk_mq_dispatch_rq_list(struct request_queue *q, struct list_head *list,
+
+ hctx->dispatched[queued_to_index(queued)]++;
+
++ /* If we didn't flush the entire list, we could have told the driver
++ * there was more coming, but that turned out to be a lie.
++ */
++ if ((!list_empty(list) || errors) && q->mq_ops->commit_rqs && queued)
++ q->mq_ops->commit_rqs(hctx);
+ /*
+ * Any items that need requeuing? Stuff them into hctx->dispatch,
+ * that is where we will continue on next queue run.
+@@ -1311,14 +1316,6 @@ bool blk_mq_dispatch_rq_list(struct request_queue *q, struct list_head *list,
+ if (!list_empty(list)) {
+ bool needs_restart;
+
+- /*
+- * If we didn't flush the entire list, we could have told
+- * the driver there was more coming, but that turned out to
+- * be a lie.
+- */
+- if (q->mq_ops->commit_rqs && queued)
+- q->mq_ops->commit_rqs(hctx);
+-
+ spin_lock(&hctx->lock);
+ list_splice_tail_init(list, &hctx->dispatch);
+ spin_unlock(&hctx->lock);
+@@ -1971,6 +1968,7 @@ void blk_mq_try_issue_list_directly(struct blk_mq_hw_ctx *hctx,
+ struct list_head *list)
+ {
+ int queued = 0;
++ int errors = 0;
+
+ while (!list_empty(list)) {
+ blk_status_t ret;
+@@ -1987,6 +1985,7 @@ void blk_mq_try_issue_list_directly(struct blk_mq_hw_ctx *hctx,
+ break;
+ }
+ blk_mq_end_request(rq, ret);
++ errors++;
+ } else
+ queued++;
+ }
+@@ -1996,7 +1995,8 @@ void blk_mq_try_issue_list_directly(struct blk_mq_hw_ctx *hctx,
+ * the driver there was more coming, but that turned out to
+ * be a lie.
+ */
+- if (!list_empty(list) && hctx->queue->mq_ops->commit_rqs && queued)
++ if ((!list_empty(list) || errors) &&
++ hctx->queue->mq_ops->commit_rqs && queued)
+ hctx->queue->mq_ops->commit_rqs(hctx);
+ }
+
+diff --git a/block/blk-settings.c b/block/blk-settings.c
+index 9a2c23cd97007..525bdb699deb8 100644
+--- a/block/blk-settings.c
++++ b/block/blk-settings.c
+@@ -832,6 +832,52 @@ bool blk_queue_can_use_dma_map_merging(struct request_queue *q,
+ }
+ EXPORT_SYMBOL_GPL(blk_queue_can_use_dma_map_merging);
+
++/**
++ * blk_queue_set_zoned - configure a disk queue zoned model.
++ * @disk: the gendisk of the queue to configure
++ * @model: the zoned model to set
++ *
++ * Set the zoned model of the request queue of @disk according to @model.
++ * When @model is BLK_ZONED_HM (host managed), this should be called only
++ * if zoned block device support is enabled (CONFIG_BLK_DEV_ZONED option).
++ * If @model specifies BLK_ZONED_HA (host aware), the effective model used
++ * depends on CONFIG_BLK_DEV_ZONED settings and on the existence of partitions
++ * on the disk.
++ */
++void blk_queue_set_zoned(struct gendisk *disk, enum blk_zoned_model model)
++{
++ switch (model) {
++ case BLK_ZONED_HM:
++ /*
++ * Host managed devices are supported only if
++ * CONFIG_BLK_DEV_ZONED is enabled.
++ */
++ WARN_ON_ONCE(!IS_ENABLED(CONFIG_BLK_DEV_ZONED));
++ break;
++ case BLK_ZONED_HA:
++ /*
++ * Host aware devices can be treated either as regular block
++ * devices (similar to drive managed devices) or as zoned block
++ * devices to take advantage of the zone command set, similarly
++ * to host managed devices. We try the latter if there are no
++ * partitions and zoned block device support is enabled, else
++ * we do nothing special as far as the block layer is concerned.
++ */
++ if (!IS_ENABLED(CONFIG_BLK_DEV_ZONED) ||
++ disk_has_partitions(disk))
++ model = BLK_ZONED_NONE;
++ break;
++ case BLK_ZONED_NONE:
++ default:
++ if (WARN_ON_ONCE(model != BLK_ZONED_NONE))
++ model = BLK_ZONED_NONE;
++ break;
++ }
++
++ disk->queue->limits.zoned = model;
++}
++EXPORT_SYMBOL_GPL(blk_queue_set_zoned);
++
+ static int __init blk_settings_init(void)
+ {
+ blk_max_low_pfn = max_low_pfn - 1;
+diff --git a/drivers/clk/samsung/clk-exynos4.c b/drivers/clk/samsung/clk-exynos4.c
+index 51564fc23c639..f4086287bb71b 100644
+--- a/drivers/clk/samsung/clk-exynos4.c
++++ b/drivers/clk/samsung/clk-exynos4.c
+@@ -927,7 +927,7 @@ static const struct samsung_gate_clock exynos4210_gate_clks[] __initconst = {
+ GATE(CLK_PCIE, "pcie", "aclk133", GATE_IP_FSYS, 14, 0, 0),
+ GATE(CLK_SMMU_PCIE, "smmu_pcie", "aclk133", GATE_IP_FSYS, 18, 0, 0),
+ GATE(CLK_MODEMIF, "modemif", "aclk100", GATE_IP_PERIL, 28, 0, 0),
+- GATE(CLK_CHIPID, "chipid", "aclk100", E4210_GATE_IP_PERIR, 0, 0, 0),
++ GATE(CLK_CHIPID, "chipid", "aclk100", E4210_GATE_IP_PERIR, 0, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_SYSREG, "sysreg", "aclk100", E4210_GATE_IP_PERIR, 0,
+ CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_HDMI_CEC, "hdmi_cec", "aclk100", E4210_GATE_IP_PERIR, 11, 0,
+@@ -969,7 +969,7 @@ static const struct samsung_gate_clock exynos4x12_gate_clks[] __initconst = {
+ 0),
+ GATE(CLK_TSADC, "tsadc", "aclk133", E4X12_GATE_BUS_FSYS1, 16, 0, 0),
+ GATE(CLK_MIPI_HSI, "mipi_hsi", "aclk133", GATE_IP_FSYS, 10, 0, 0),
+- GATE(CLK_CHIPID, "chipid", "aclk100", E4X12_GATE_IP_PERIR, 0, 0, 0),
++ GATE(CLK_CHIPID, "chipid", "aclk100", E4X12_GATE_IP_PERIR, 0, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_SYSREG, "sysreg", "aclk100", E4X12_GATE_IP_PERIR, 1,
+ CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_HDMI_CEC, "hdmi_cec", "aclk100", E4X12_GATE_IP_PERIR, 11, 0,
+diff --git a/drivers/clk/samsung/clk-exynos5420.c b/drivers/clk/samsung/clk-exynos5420.c
+index fea33399a632d..bd620876544d9 100644
+--- a/drivers/clk/samsung/clk-exynos5420.c
++++ b/drivers/clk/samsung/clk-exynos5420.c
+@@ -1655,6 +1655,11 @@ static void __init exynos5x_clk_init(struct device_node *np,
+ * main G3D clock enablement status.
+ */
+ clk_prepare_enable(__clk_lookup("mout_sw_aclk_g3d"));
++ /*
++ * Keep top BPLL mux enabled permanently to ensure that DRAM operates
++ * properly.
++ */
++ clk_prepare_enable(__clk_lookup("mout_bpll"));
+
+ samsung_clk_of_add_provider(np, ctx);
+ }
+diff --git a/drivers/clk/socfpga/clk-s10.c b/drivers/clk/socfpga/clk-s10.c
+index c1dfc9b34e4e9..661a8e9bfb9bd 100644
+--- a/drivers/clk/socfpga/clk-s10.c
++++ b/drivers/clk/socfpga/clk-s10.c
+@@ -209,7 +209,7 @@ static const struct stratix10_perip_cnt_clock s10_main_perip_cnt_clks[] = {
+ { STRATIX10_EMAC_B_FREE_CLK, "emacb_free_clk", NULL, emacb_free_mux, ARRAY_SIZE(emacb_free_mux),
+ 0, 0, 2, 0xB0, 1},
+ { STRATIX10_EMAC_PTP_FREE_CLK, "emac_ptp_free_clk", NULL, emac_ptp_free_mux,
+- ARRAY_SIZE(emac_ptp_free_mux), 0, 0, 4, 0xB0, 2},
++ ARRAY_SIZE(emac_ptp_free_mux), 0, 0, 2, 0xB0, 2},
+ { STRATIX10_GPIO_DB_FREE_CLK, "gpio_db_free_clk", NULL, gpio_db_free_mux,
+ ARRAY_SIZE(gpio_db_free_mux), 0, 0, 0, 0xB0, 3},
+ { STRATIX10_SDMMC_FREE_CLK, "sdmmc_free_clk", NULL, sdmmc_free_mux,
+diff --git a/drivers/clk/tegra/clk-pll.c b/drivers/clk/tegra/clk-pll.c
+index 0b212cf2e7942..1cc982d3de635 100644
+--- a/drivers/clk/tegra/clk-pll.c
++++ b/drivers/clk/tegra/clk-pll.c
+@@ -1601,9 +1601,6 @@ static int clk_plle_tegra114_enable(struct clk_hw *hw)
+ unsigned long flags = 0;
+ unsigned long input_rate;
+
+- if (clk_pll_is_enabled(hw))
+- return 0;
+-
+ input_rate = clk_hw_get_rate(clk_hw_get_parent(hw));
+
+ if (_get_table_rate(hw, &sel, pll->params->fixed_rate, input_rate))
+diff --git a/drivers/clk/tegra/clk-tegra210-emc.c b/drivers/clk/tegra/clk-tegra210-emc.c
+index 352a2c3fc3740..51fd0ec2a2d04 100644
+--- a/drivers/clk/tegra/clk-tegra210-emc.c
++++ b/drivers/clk/tegra/clk-tegra210-emc.c
+@@ -12,6 +12,8 @@
+ #include <linux/io.h>
+ #include <linux/slab.h>
+
++#include "clk.h"
++
+ #define CLK_SOURCE_EMC 0x19c
+ #define CLK_SOURCE_EMC_2X_CLK_SRC GENMASK(31, 29)
+ #define CLK_SOURCE_EMC_MC_EMC_SAME_FREQ BIT(16)
+diff --git a/drivers/clocksource/timer-gx6605s.c b/drivers/clocksource/timer-gx6605s.c
+index 80d0939d040b5..8d386adbe8009 100644
+--- a/drivers/clocksource/timer-gx6605s.c
++++ b/drivers/clocksource/timer-gx6605s.c
+@@ -28,6 +28,7 @@ static irqreturn_t gx6605s_timer_interrupt(int irq, void *dev)
+ void __iomem *base = timer_of_base(to_timer_of(ce));
+
+ writel_relaxed(GX6605S_STATUS_CLR, base + TIMER_STATUS);
++ writel_relaxed(0, base + TIMER_INI);
+
+ ce->event_handler(ce);
+
+diff --git a/drivers/cpuidle/cpuidle-psci.c b/drivers/cpuidle/cpuidle-psci.c
+index 3806f911b61c0..915172e3ec906 100644
+--- a/drivers/cpuidle/cpuidle-psci.c
++++ b/drivers/cpuidle/cpuidle-psci.c
+@@ -64,7 +64,7 @@ static int psci_enter_domain_idle_state(struct cpuidle_device *dev,
+ return -1;
+
+ /* Do runtime PM to manage a hierarchical CPU toplogy. */
+- pm_runtime_put_sync_suspend(pd_dev);
++ RCU_NONIDLE(pm_runtime_put_sync_suspend(pd_dev));
+
+ state = psci_get_domain_state();
+ if (!state)
+@@ -72,7 +72,7 @@ static int psci_enter_domain_idle_state(struct cpuidle_device *dev,
+
+ ret = psci_cpu_suspend_enter(state) ? -1 : idx;
+
+- pm_runtime_get_sync(pd_dev);
++ RCU_NONIDLE(pm_runtime_get_sync(pd_dev));
+
+ cpu_pm_exit();
+
+diff --git a/drivers/dma/dmatest.c b/drivers/dma/dmatest.c
+index 604f803579312..323822372b4ce 100644
+--- a/drivers/dma/dmatest.c
++++ b/drivers/dma/dmatest.c
+@@ -129,6 +129,7 @@ struct dmatest_params {
+ * @nr_channels: number of channels under test
+ * @lock: access protection to the fields of this structure
+ * @did_init: module has been initialized completely
++ * @last_error: test has faced configuration issues
+ */
+ static struct dmatest_info {
+ /* Test parameters */
+@@ -137,6 +138,7 @@ static struct dmatest_info {
+ /* Internal state */
+ struct list_head channels;
+ unsigned int nr_channels;
++ int last_error;
+ struct mutex lock;
+ bool did_init;
+ } test_info = {
+@@ -1175,10 +1177,22 @@ static int dmatest_run_set(const char *val, const struct kernel_param *kp)
+ return ret;
+ } else if (dmatest_run) {
+ if (!is_threaded_test_pending(info)) {
+- pr_info("No channels configured, continue with any\n");
+- if (!is_threaded_test_run(info))
+- stop_threaded_test(info);
+- add_threaded_test(info);
++ /*
++ * We have nothing to run. This can be due to:
++ */
++ ret = info->last_error;
++ if (ret) {
++ /* 1) Misconfiguration */
++ pr_err("Channel misconfigured, can't continue\n");
++ mutex_unlock(&info->lock);
++ return ret;
++ } else {
++ /* 2) We rely on defaults */
++ pr_info("No channels configured, continue with any\n");
++ if (!is_threaded_test_run(info))
++ stop_threaded_test(info);
++ add_threaded_test(info);
++ }
+ }
+ start_threaded_tests(info);
+ } else {
+@@ -1195,7 +1209,7 @@ static int dmatest_chan_set(const char *val, const struct kernel_param *kp)
+ struct dmatest_info *info = &test_info;
+ struct dmatest_chan *dtc;
+ char chan_reset_val[20];
+- int ret = 0;
++ int ret;
+
+ mutex_lock(&info->lock);
+ ret = param_set_copystring(val, kp);
+@@ -1250,12 +1264,14 @@ static int dmatest_chan_set(const char *val, const struct kernel_param *kp)
+ goto add_chan_err;
+ }
+
++ info->last_error = ret;
+ mutex_unlock(&info->lock);
+
+ return ret;
+
+ add_chan_err:
+ param_set_copystring(chan_reset_val, kp);
++ info->last_error = ret;
+ mutex_unlock(&info->lock);
+
+ return ret;
+diff --git a/drivers/gpio/gpio-amd-fch.c b/drivers/gpio/gpio-amd-fch.c
+index 4e44ba4d7423c..2a21354ed6a03 100644
+--- a/drivers/gpio/gpio-amd-fch.c
++++ b/drivers/gpio/gpio-amd-fch.c
+@@ -92,7 +92,7 @@ static int amd_fch_gpio_get_direction(struct gpio_chip *gc, unsigned int gpio)
+ ret = (readl_relaxed(ptr) & AMD_FCH_GPIO_FLAG_DIRECTION);
+ spin_unlock_irqrestore(&priv->lock, flags);
+
+- return ret ? GPIO_LINE_DIRECTION_IN : GPIO_LINE_DIRECTION_OUT;
++ return ret ? GPIO_LINE_DIRECTION_OUT : GPIO_LINE_DIRECTION_IN;
+ }
+
+ static void amd_fch_gpio_set(struct gpio_chip *gc,
+diff --git a/drivers/gpio/gpio-aspeed-sgpio.c b/drivers/gpio/gpio-aspeed-sgpio.c
+index d16645c1d8d9d..a0eb00c024f62 100644
+--- a/drivers/gpio/gpio-aspeed-sgpio.c
++++ b/drivers/gpio/gpio-aspeed-sgpio.c
+@@ -17,7 +17,17 @@
+ #include <linux/spinlock.h>
+ #include <linux/string.h>
+
+-#define MAX_NR_SGPIO 80
++/*
++ * MAX_NR_HW_GPIO represents the number of actual hardware-supported GPIOs (ie,
++ * slots within the clocked serial GPIO data). Since each HW GPIO is both an
++ * input and an output, we provide MAX_NR_HW_GPIO * 2 lines on our gpiochip
++ * device.
++ *
++ * We use SGPIO_OUTPUT_OFFSET to define the split between the inputs and
++ * outputs; the inputs start at line 0, the outputs start at OUTPUT_OFFSET.
++ */
++#define MAX_NR_HW_SGPIO 80
++#define SGPIO_OUTPUT_OFFSET MAX_NR_HW_SGPIO
+
+ #define ASPEED_SGPIO_CTRL 0x54
+
+@@ -30,8 +40,8 @@ struct aspeed_sgpio {
+ struct clk *pclk;
+ spinlock_t lock;
+ void __iomem *base;
+- uint32_t dir_in[3];
+ int irq;
++ int n_sgpio;
+ };
+
+ struct aspeed_sgpio_bank {
+@@ -111,31 +121,69 @@ static void __iomem *bank_reg(struct aspeed_sgpio *gpio,
+ }
+ }
+
+-#define GPIO_BANK(x) ((x) >> 5)
+-#define GPIO_OFFSET(x) ((x) & 0x1f)
++#define GPIO_BANK(x) ((x % SGPIO_OUTPUT_OFFSET) >> 5)
++#define GPIO_OFFSET(x) ((x % SGPIO_OUTPUT_OFFSET) & 0x1f)
+ #define GPIO_BIT(x) BIT(GPIO_OFFSET(x))
+
+ static const struct aspeed_sgpio_bank *to_bank(unsigned int offset)
+ {
+- unsigned int bank = GPIO_BANK(offset);
++ unsigned int bank;
++
++ bank = GPIO_BANK(offset);
+
+ WARN_ON(bank >= ARRAY_SIZE(aspeed_sgpio_banks));
+ return &aspeed_sgpio_banks[bank];
+ }
+
++static int aspeed_sgpio_init_valid_mask(struct gpio_chip *gc,
++ unsigned long *valid_mask, unsigned int ngpios)
++{
++ struct aspeed_sgpio *sgpio = gpiochip_get_data(gc);
++ int n = sgpio->n_sgpio;
++ int c = SGPIO_OUTPUT_OFFSET - n;
++
++ WARN_ON(ngpios < MAX_NR_HW_SGPIO * 2);
++
++ /* input GPIOs in the lower range */
++ bitmap_set(valid_mask, 0, n);
++ bitmap_clear(valid_mask, n, c);
++
++ /* output GPIOS above SGPIO_OUTPUT_OFFSET */
++ bitmap_set(valid_mask, SGPIO_OUTPUT_OFFSET, n);
++ bitmap_clear(valid_mask, SGPIO_OUTPUT_OFFSET + n, c);
++
++ return 0;
++}
++
++static void aspeed_sgpio_irq_init_valid_mask(struct gpio_chip *gc,
++ unsigned long *valid_mask, unsigned int ngpios)
++{
++ struct aspeed_sgpio *sgpio = gpiochip_get_data(gc);
++ int n = sgpio->n_sgpio;
++
++ WARN_ON(ngpios < MAX_NR_HW_SGPIO * 2);
++
++ /* input GPIOs in the lower range */
++ bitmap_set(valid_mask, 0, n);
++ bitmap_clear(valid_mask, n, ngpios - n);
++}
++
++static bool aspeed_sgpio_is_input(unsigned int offset)
++{
++ return offset < SGPIO_OUTPUT_OFFSET;
++}
++
+ static int aspeed_sgpio_get(struct gpio_chip *gc, unsigned int offset)
+ {
+ struct aspeed_sgpio *gpio = gpiochip_get_data(gc);
+ const struct aspeed_sgpio_bank *bank = to_bank(offset);
+ unsigned long flags;
+ enum aspeed_sgpio_reg reg;
+- bool is_input;
+ int rc = 0;
+
+ spin_lock_irqsave(&gpio->lock, flags);
+
+- is_input = gpio->dir_in[GPIO_BANK(offset)] & GPIO_BIT(offset);
+- reg = is_input ? reg_val : reg_rdata;
++ reg = aspeed_sgpio_is_input(offset) ? reg_val : reg_rdata;
+ rc = !!(ioread32(bank_reg(gpio, bank, reg)) & GPIO_BIT(offset));
+
+ spin_unlock_irqrestore(&gpio->lock, flags);
+@@ -143,22 +191,31 @@ static int aspeed_sgpio_get(struct gpio_chip *gc, unsigned int offset)
+ return rc;
+ }
+
+-static void sgpio_set_value(struct gpio_chip *gc, unsigned int offset, int val)
++static int sgpio_set_value(struct gpio_chip *gc, unsigned int offset, int val)
+ {
+ struct aspeed_sgpio *gpio = gpiochip_get_data(gc);
+ const struct aspeed_sgpio_bank *bank = to_bank(offset);
+- void __iomem *addr;
++ void __iomem *addr_r, *addr_w;
+ u32 reg = 0;
+
+- addr = bank_reg(gpio, bank, reg_val);
+- reg = ioread32(addr);
++ if (aspeed_sgpio_is_input(offset))
++ return -EINVAL;
++
++ /* Since this is an output, read the cached value from rdata, then
++ * update val. */
++ addr_r = bank_reg(gpio, bank, reg_rdata);
++ addr_w = bank_reg(gpio, bank, reg_val);
++
++ reg = ioread32(addr_r);
+
+ if (val)
+ reg |= GPIO_BIT(offset);
+ else
+ reg &= ~GPIO_BIT(offset);
+
+- iowrite32(reg, addr);
++ iowrite32(reg, addr_w);
++
++ return 0;
+ }
+
+ static void aspeed_sgpio_set(struct gpio_chip *gc, unsigned int offset, int val)
+@@ -175,43 +232,28 @@ static void aspeed_sgpio_set(struct gpio_chip *gc, unsigned int offset, int val)
+
+ static int aspeed_sgpio_dir_in(struct gpio_chip *gc, unsigned int offset)
+ {
+- struct aspeed_sgpio *gpio = gpiochip_get_data(gc);
+- unsigned long flags;
+-
+- spin_lock_irqsave(&gpio->lock, flags);
+- gpio->dir_in[GPIO_BANK(offset)] |= GPIO_BIT(offset);
+- spin_unlock_irqrestore(&gpio->lock, flags);
+-
+- return 0;
++ return aspeed_sgpio_is_input(offset) ? 0 : -EINVAL;
+ }
+
+ static int aspeed_sgpio_dir_out(struct gpio_chip *gc, unsigned int offset, int val)
+ {
+ struct aspeed_sgpio *gpio = gpiochip_get_data(gc);
+ unsigned long flags;
++ int rc;
+
+- spin_lock_irqsave(&gpio->lock, flags);
+-
+- gpio->dir_in[GPIO_BANK(offset)] &= ~GPIO_BIT(offset);
+- sgpio_set_value(gc, offset, val);
++ /* No special action is required for setting the direction; we'll
++ * error-out in sgpio_set_value if this isn't an output GPIO */
+
++ spin_lock_irqsave(&gpio->lock, flags);
++ rc = sgpio_set_value(gc, offset, val);
+ spin_unlock_irqrestore(&gpio->lock, flags);
+
+- return 0;
++ return rc;
+ }
+
+ static int aspeed_sgpio_get_direction(struct gpio_chip *gc, unsigned int offset)
+ {
+- int dir_status;
+- struct aspeed_sgpio *gpio = gpiochip_get_data(gc);
+- unsigned long flags;
+-
+- spin_lock_irqsave(&gpio->lock, flags);
+- dir_status = gpio->dir_in[GPIO_BANK(offset)] & GPIO_BIT(offset);
+- spin_unlock_irqrestore(&gpio->lock, flags);
+-
+- return dir_status;
+-
++ return !!aspeed_sgpio_is_input(offset);
+ }
+
+ static void irqd_to_aspeed_sgpio_data(struct irq_data *d,
+@@ -402,6 +444,7 @@ static int aspeed_sgpio_setup_irqs(struct aspeed_sgpio *gpio,
+
+ irq = &gpio->chip.irq;
+ irq->chip = &aspeed_sgpio_irqchip;
++ irq->init_valid_mask = aspeed_sgpio_irq_init_valid_mask;
+ irq->handler = handle_bad_irq;
+ irq->default_type = IRQ_TYPE_NONE;
+ irq->parent_handler = aspeed_sgpio_irq_handler;
+@@ -409,17 +452,15 @@ static int aspeed_sgpio_setup_irqs(struct aspeed_sgpio *gpio,
+ irq->parents = &gpio->irq;
+ irq->num_parents = 1;
+
+- /* set IRQ settings and Enable Interrupt */
++ /* Apply default IRQ settings */
+ for (i = 0; i < ARRAY_SIZE(aspeed_sgpio_banks); i++) {
+ bank = &aspeed_sgpio_banks[i];
+ /* set falling or level-low irq */
+ iowrite32(0x00000000, bank_reg(gpio, bank, reg_irq_type0));
+ /* trigger type is edge */
+ iowrite32(0x00000000, bank_reg(gpio, bank, reg_irq_type1));
+- /* dual edge trigger mode. */
+- iowrite32(0xffffffff, bank_reg(gpio, bank, reg_irq_type2));
+- /* enable irq */
+- iowrite32(0xffffffff, bank_reg(gpio, bank, reg_irq_enable));
++ /* single edge trigger */
++ iowrite32(0x00000000, bank_reg(gpio, bank, reg_irq_type2));
+ }
+
+ return 0;
+@@ -452,11 +493,12 @@ static int __init aspeed_sgpio_probe(struct platform_device *pdev)
+ if (rc < 0) {
+ dev_err(&pdev->dev, "Could not read ngpios property\n");
+ return -EINVAL;
+- } else if (nr_gpios > MAX_NR_SGPIO) {
++ } else if (nr_gpios > MAX_NR_HW_SGPIO) {
+ dev_err(&pdev->dev, "Number of GPIOs exceeds the maximum of %d: %d\n",
+- MAX_NR_SGPIO, nr_gpios);
++ MAX_NR_HW_SGPIO, nr_gpios);
+ return -EINVAL;
+ }
++ gpio->n_sgpio = nr_gpios;
+
+ rc = of_property_read_u32(pdev->dev.of_node, "bus-frequency", &sgpio_freq);
+ if (rc < 0) {
+@@ -497,7 +539,8 @@ static int __init aspeed_sgpio_probe(struct platform_device *pdev)
+ spin_lock_init(&gpio->lock);
+
+ gpio->chip.parent = &pdev->dev;
+- gpio->chip.ngpio = nr_gpios;
++ gpio->chip.ngpio = MAX_NR_HW_SGPIO * 2;
++ gpio->chip.init_valid_mask = aspeed_sgpio_init_valid_mask;
+ gpio->chip.direction_input = aspeed_sgpio_dir_in;
+ gpio->chip.direction_output = aspeed_sgpio_dir_out;
+ gpio->chip.get_direction = aspeed_sgpio_get_direction;
+@@ -509,9 +552,6 @@ static int __init aspeed_sgpio_probe(struct platform_device *pdev)
+ gpio->chip.label = dev_name(&pdev->dev);
+ gpio->chip.base = -1;
+
+- /* set all SGPIO pins as input (1). */
+- memset(gpio->dir_in, 0xff, sizeof(gpio->dir_in));
+-
+ aspeed_sgpio_setup_irqs(gpio, pdev);
+
+ rc = devm_gpiochip_add_data(&pdev->dev, &gpio->chip, gpio);
+diff --git a/drivers/gpio/gpio-aspeed.c b/drivers/gpio/gpio-aspeed.c
+index 879db23d84549..d07bf2c3f1369 100644
+--- a/drivers/gpio/gpio-aspeed.c
++++ b/drivers/gpio/gpio-aspeed.c
+@@ -1114,8 +1114,8 @@ static const struct aspeed_gpio_config ast2500_config =
+
+ static const struct aspeed_bank_props ast2600_bank_props[] = {
+ /* input output */
+- {5, 0xffffffff, 0x0000ffff}, /* U/V/W/X */
+- {6, 0xffff0000, 0x0fff0000}, /* Y/Z */
++ {5, 0xffffffff, 0xffffff00}, /* U/V/W/X */
++ {6, 0x0000ffff, 0x0000ffff}, /* Y/Z */
+ { },
+ };
+
+diff --git a/drivers/gpio/gpio-mockup.c b/drivers/gpio/gpio-mockup.c
+index bc345185db260..1652897fdf90d 100644
+--- a/drivers/gpio/gpio-mockup.c
++++ b/drivers/gpio/gpio-mockup.c
+@@ -552,6 +552,7 @@ static int __init gpio_mockup_init(void)
+ err = platform_driver_register(&gpio_mockup_driver);
+ if (err) {
+ gpio_mockup_err("error registering platform driver\n");
++ debugfs_remove_recursive(gpio_mockup_dbg_dir);
+ return err;
+ }
+
+@@ -582,6 +583,7 @@ static int __init gpio_mockup_init(void)
+ gpio_mockup_err("error registering device");
+ platform_driver_unregister(&gpio_mockup_driver);
+ gpio_mockup_unregister_pdevs();
++ debugfs_remove_recursive(gpio_mockup_dbg_dir);
+ return PTR_ERR(pdev);
+ }
+
+diff --git a/drivers/gpio/gpio-pca953x.c b/drivers/gpio/gpio-pca953x.c
+index a3b9bdedbe443..11c3bbd105f11 100644
+--- a/drivers/gpio/gpio-pca953x.c
++++ b/drivers/gpio/gpio-pca953x.c
+@@ -813,7 +813,7 @@ static irqreturn_t pca953x_irq_handler(int irq, void *devid)
+ {
+ struct pca953x_chip *chip = devid;
+ struct gpio_chip *gc = &chip->gpio_chip;
+- DECLARE_BITMAP(pending, MAX_LINE);
++ DECLARE_BITMAP(pending, MAX_LINE) = {};
+ int level;
+ bool ret;
+
+@@ -938,6 +938,7 @@ out:
+ static int device_pca957x_init(struct pca953x_chip *chip, u32 invert)
+ {
+ DECLARE_BITMAP(val, MAX_LINE);
++ unsigned int i;
+ int ret;
+
+ ret = device_pca95xx_init(chip, invert);
+@@ -945,7 +946,9 @@ static int device_pca957x_init(struct pca953x_chip *chip, u32 invert)
+ goto out;
+
+ /* To enable register 6, 7 to control pull up and pull down */
+- memset(val, 0x02, NBANK(chip));
++ for (i = 0; i < NBANK(chip); i++)
++ bitmap_set_value8(val, 0x02, i * BANK_SZ);
++
+ ret = pca953x_write_regs(chip, PCA957X_BKEN, val);
+ if (ret)
+ goto out;
+diff --git a/drivers/gpio/gpio-siox.c b/drivers/gpio/gpio-siox.c
+index 26e1fe092304d..f8c5e9fc4baca 100644
+--- a/drivers/gpio/gpio-siox.c
++++ b/drivers/gpio/gpio-siox.c
+@@ -245,6 +245,7 @@ static int gpio_siox_probe(struct siox_device *sdevice)
+ girq->chip = &ddata->ichip;
+ girq->default_type = IRQ_TYPE_NONE;
+ girq->handler = handle_level_irq;
++ girq->threaded = true;
+
+ ret = devm_gpiochip_add_data(dev, &ddata->gchip, NULL);
+ if (ret)
+diff --git a/drivers/gpio/gpio-sprd.c b/drivers/gpio/gpio-sprd.c
+index d7314d39ab65b..36ea8a3bd4510 100644
+--- a/drivers/gpio/gpio-sprd.c
++++ b/drivers/gpio/gpio-sprd.c
+@@ -149,17 +149,20 @@ static int sprd_gpio_irq_set_type(struct irq_data *data,
+ sprd_gpio_update(chip, offset, SPRD_GPIO_IS, 0);
+ sprd_gpio_update(chip, offset, SPRD_GPIO_IBE, 0);
+ sprd_gpio_update(chip, offset, SPRD_GPIO_IEV, 1);
++ sprd_gpio_update(chip, offset, SPRD_GPIO_IC, 1);
+ irq_set_handler_locked(data, handle_edge_irq);
+ break;
+ case IRQ_TYPE_EDGE_FALLING:
+ sprd_gpio_update(chip, offset, SPRD_GPIO_IS, 0);
+ sprd_gpio_update(chip, offset, SPRD_GPIO_IBE, 0);
+ sprd_gpio_update(chip, offset, SPRD_GPIO_IEV, 0);
++ sprd_gpio_update(chip, offset, SPRD_GPIO_IC, 1);
+ irq_set_handler_locked(data, handle_edge_irq);
+ break;
+ case IRQ_TYPE_EDGE_BOTH:
+ sprd_gpio_update(chip, offset, SPRD_GPIO_IS, 0);
+ sprd_gpio_update(chip, offset, SPRD_GPIO_IBE, 1);
++ sprd_gpio_update(chip, offset, SPRD_GPIO_IC, 1);
+ irq_set_handler_locked(data, handle_edge_irq);
+ break;
+ case IRQ_TYPE_LEVEL_HIGH:
+diff --git a/drivers/gpio/gpio-tc3589x.c b/drivers/gpio/gpio-tc3589x.c
+index 6be0684cfa494..a70bc71281056 100644
+--- a/drivers/gpio/gpio-tc3589x.c
++++ b/drivers/gpio/gpio-tc3589x.c
+@@ -212,7 +212,7 @@ static void tc3589x_gpio_irq_sync_unlock(struct irq_data *d)
+ continue;
+
+ tc3589x_gpio->oldregs[i][j] = new;
+- tc3589x_reg_write(tc3589x, regmap[i] + j * 8, new);
++ tc3589x_reg_write(tc3589x, regmap[i] + j, new);
+ }
+ }
+
+diff --git a/drivers/gpio/gpiolib.c b/drivers/gpio/gpiolib.c
+index 4fa075d49fbc9..6e813b13d6988 100644
+--- a/drivers/gpio/gpiolib.c
++++ b/drivers/gpio/gpiolib.c
+@@ -836,6 +836,21 @@ static __poll_t lineevent_poll(struct file *filep,
+ return events;
+ }
+
++static ssize_t lineevent_get_size(void)
++{
++#ifdef __x86_64__
++ /* i386 has no padding after 'id' */
++ if (in_ia32_syscall()) {
++ struct compat_gpioeevent_data {
++ compat_u64 timestamp;
++ u32 id;
++ };
++
++ return sizeof(struct compat_gpioeevent_data);
++ }
++#endif
++ return sizeof(struct gpioevent_data);
++}
+
+ static ssize_t lineevent_read(struct file *filep,
+ char __user *buf,
+@@ -845,9 +860,20 @@ static ssize_t lineevent_read(struct file *filep,
+ struct lineevent_state *le = filep->private_data;
+ struct gpioevent_data ge;
+ ssize_t bytes_read = 0;
++ ssize_t ge_size;
+ int ret;
+
+- if (count < sizeof(ge))
++ /*
++ * When compatible system call is being used the struct gpioevent_data,
++ * in case of at least ia32, has different size due to the alignment
++ * differences. Because we have first member 64 bits followed by one of
++ * 32 bits there is no gap between them. The only difference is the
++ * padding at the end of the data structure. Hence, we calculate the
++ * actual sizeof() and pass this as an argument to copy_to_user() to
++ * drop unneeded bytes from the output.
++ */
++ ge_size = lineevent_get_size();
++ if (count < ge_size)
+ return -EINVAL;
+
+ do {
+@@ -883,10 +909,10 @@ static ssize_t lineevent_read(struct file *filep,
+ break;
+ }
+
+- if (copy_to_user(buf + bytes_read, &ge, sizeof(ge)))
++ if (copy_to_user(buf + bytes_read, &ge, ge_size))
+ return -EFAULT;
+- bytes_read += sizeof(ge);
+- } while (count >= bytes_read + sizeof(ge));
++ bytes_read += ge_size;
++ } while (count >= bytes_read + ge_size);
+
+ return bytes_read;
+ }
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
+index 5e51f0acf744f..f05fecbec0a86 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
+@@ -297,7 +297,7 @@ int amdgpu_display_crtc_set_config(struct drm_mode_set *set,
+ take the current one */
+ if (active && !adev->have_disp_power_ref) {
+ adev->have_disp_power_ref = true;
+- goto out;
++ return ret;
+ }
+ /* if we have no active crtcs, then drop the power ref
+ we got before */
+diff --git a/drivers/gpu/drm/i915/gvt/vgpu.c b/drivers/gpu/drm/i915/gvt/vgpu.c
+index 7d361623ff679..041f601b07d06 100644
+--- a/drivers/gpu/drm/i915/gvt/vgpu.c
++++ b/drivers/gpu/drm/i915/gvt/vgpu.c
+@@ -367,6 +367,7 @@ void intel_gvt_destroy_idle_vgpu(struct intel_vgpu *vgpu)
+ static struct intel_vgpu *__intel_gvt_create_vgpu(struct intel_gvt *gvt,
+ struct intel_vgpu_creation_params *param)
+ {
++ struct drm_i915_private *dev_priv = gvt->gt->i915;
+ struct intel_vgpu *vgpu;
+ int ret;
+
+@@ -434,7 +435,10 @@ static struct intel_vgpu *__intel_gvt_create_vgpu(struct intel_gvt *gvt,
+ if (ret)
+ goto out_clean_sched_policy;
+
+- ret = intel_gvt_hypervisor_set_edid(vgpu, PORT_D);
++ if (IS_BROADWELL(dev_priv))
++ ret = intel_gvt_hypervisor_set_edid(vgpu, PORT_B);
++ else
++ ret = intel_gvt_hypervisor_set_edid(vgpu, PORT_D);
+ if (ret)
+ goto out_clean_sched_policy;
+
+diff --git a/drivers/gpu/drm/sun4i/sun8i_mixer.c b/drivers/gpu/drm/sun4i/sun8i_mixer.c
+index cc4fb916318f3..c3304028e3dcd 100644
+--- a/drivers/gpu/drm/sun4i/sun8i_mixer.c
++++ b/drivers/gpu/drm/sun4i/sun8i_mixer.c
+@@ -307,7 +307,7 @@ static struct regmap_config sun8i_mixer_regmap_config = {
+ .reg_bits = 32,
+ .val_bits = 32,
+ .reg_stride = 4,
+- .max_register = 0xbfffc, /* guessed */
++ .max_register = 0xffffc, /* guessed */
+ };
+
+ static int sun8i_mixer_of_get_id(struct device_node *node)
+diff --git a/drivers/i2c/busses/i2c-cpm.c b/drivers/i2c/busses/i2c-cpm.c
+index 1213e1932ccb5..24d584a1c9a78 100644
+--- a/drivers/i2c/busses/i2c-cpm.c
++++ b/drivers/i2c/busses/i2c-cpm.c
+@@ -65,6 +65,9 @@ struct i2c_ram {
+ char res1[4]; /* Reserved */
+ ushort rpbase; /* Relocation pointer */
+ char res2[2]; /* Reserved */
++ /* The following elements are only for CPM2 */
++ char res3[4]; /* Reserved */
++ uint sdmatmp; /* Internal */
+ };
+
+ #define I2COM_START 0x80
+diff --git a/drivers/i2c/busses/i2c-i801.c b/drivers/i2c/busses/i2c-i801.c
+index 3843eabeddda3..6126290e4d650 100644
+--- a/drivers/i2c/busses/i2c-i801.c
++++ b/drivers/i2c/busses/i2c-i801.c
+@@ -1913,6 +1913,7 @@ static int i801_probe(struct pci_dev *dev, const struct pci_device_id *id)
+
+ pci_set_drvdata(dev, priv);
+
++ dev_pm_set_driver_flags(&dev->dev, DPM_FLAG_NO_DIRECT_COMPLETE);
+ pm_runtime_set_autosuspend_delay(&dev->dev, 1000);
+ pm_runtime_use_autosuspend(&dev->dev);
+ pm_runtime_put_autosuspend(&dev->dev);
+diff --git a/drivers/i2c/busses/i2c-npcm7xx.c b/drivers/i2c/busses/i2c-npcm7xx.c
+index dfcf04e1967f1..2ad166355ec9b 100644
+--- a/drivers/i2c/busses/i2c-npcm7xx.c
++++ b/drivers/i2c/busses/i2c-npcm7xx.c
+@@ -2163,6 +2163,15 @@ static int npcm_i2c_master_xfer(struct i2c_adapter *adap, struct i2c_msg *msgs,
+ if (bus->cmd_err == -EAGAIN)
+ ret = i2c_recover_bus(adap);
+
++ /*
++ * After any type of error, check if LAST bit is still set,
++ * due to a HW issue.
++ * It cannot be cleared without resetting the module.
++ */
++ if (bus->cmd_err &&
++ (NPCM_I2CRXF_CTL_LAST_PEC & ioread8(bus->reg + NPCM_I2CRXF_CTL)))
++ npcm_i2c_reset(bus);
++
+ #if IS_ENABLED(CONFIG_I2C_SLAVE)
+ /* reenable slave if it was enabled */
+ if (bus->slave)
+diff --git a/drivers/iio/adc/qcom-spmi-adc5.c b/drivers/iio/adc/qcom-spmi-adc5.c
+index 21fdcde77883f..56e7696aa3c0f 100644
+--- a/drivers/iio/adc/qcom-spmi-adc5.c
++++ b/drivers/iio/adc/qcom-spmi-adc5.c
+@@ -786,7 +786,7 @@ static int adc5_probe(struct platform_device *pdev)
+
+ static struct platform_driver adc5_driver = {
+ .driver = {
+- .name = "qcom-spmi-adc5.c",
++ .name = "qcom-spmi-adc5",
+ .of_match_table = adc5_match_table,
+ },
+ .probe = adc5_probe,
+diff --git a/drivers/input/mouse/trackpoint.c b/drivers/input/mouse/trackpoint.c
+index 854d5e7587241..ef2fa0905208d 100644
+--- a/drivers/input/mouse/trackpoint.c
++++ b/drivers/input/mouse/trackpoint.c
+@@ -282,6 +282,8 @@ static int trackpoint_start_protocol(struct psmouse *psmouse,
+ case TP_VARIANT_ALPS:
+ case TP_VARIANT_ELAN:
+ case TP_VARIANT_NXP:
++ case TP_VARIANT_JYT_SYNAPTICS:
++ case TP_VARIANT_SYNAPTICS:
+ if (variant_id)
+ *variant_id = param[0];
+ if (firmware_id)
+diff --git a/drivers/input/serio/i8042-x86ia64io.h b/drivers/input/serio/i8042-x86ia64io.h
+index 37fb9aa88f9c3..a4c9b9652560a 100644
+--- a/drivers/input/serio/i8042-x86ia64io.h
++++ b/drivers/input/serio/i8042-x86ia64io.h
+@@ -721,6 +721,13 @@ static const struct dmi_system_id __initconst i8042_dmi_nopnp_table[] = {
+ DMI_MATCH(DMI_BOARD_VENDOR, "MICRO-STAR INTERNATIONAL CO., LTD"),
+ },
+ },
++ {
++ /* Acer Aspire 5 A515 */
++ .matches = {
++ DMI_MATCH(DMI_BOARD_NAME, "Grumpy_PK"),
++ DMI_MATCH(DMI_BOARD_VENDOR, "PK"),
++ },
++ },
+ { }
+ };
+
+diff --git a/drivers/iommu/amd/init.c b/drivers/iommu/amd/init.c
+index bf45f8e2c7edd..016e35d3d6c86 100644
+--- a/drivers/iommu/amd/init.c
++++ b/drivers/iommu/amd/init.c
+@@ -1110,25 +1110,6 @@ static int __init add_early_maps(void)
+ return 0;
+ }
+
+-/*
+- * Reads the device exclusion range from ACPI and initializes the IOMMU with
+- * it
+- */
+-static void __init set_device_exclusion_range(u16 devid, struct ivmd_header *m)
+-{
+- if (!(m->flags & IVMD_FLAG_EXCL_RANGE))
+- return;
+-
+- /*
+- * Treat per-device exclusion ranges as r/w unity-mapped regions
+- * since some buggy BIOSes might lead to the overwritten exclusion
+- * range (exclusion_start and exclusion_length members). This
+- * happens when there are multiple exclusion ranges (IVMD entries)
+- * defined in ACPI table.
+- */
+- m->flags = (IVMD_FLAG_IW | IVMD_FLAG_IR | IVMD_FLAG_UNITY_MAP);
+-}
+-
+ /*
+ * Takes a pointer to an AMD IOMMU entry in the ACPI table and
+ * initializes the hardware and our data structures with it.
+@@ -2080,30 +2061,6 @@ static void __init free_unity_maps(void)
+ }
+ }
+
+-/* called when we find an exclusion range definition in ACPI */
+-static int __init init_exclusion_range(struct ivmd_header *m)
+-{
+- int i;
+-
+- switch (m->type) {
+- case ACPI_IVMD_TYPE:
+- set_device_exclusion_range(m->devid, m);
+- break;
+- case ACPI_IVMD_TYPE_ALL:
+- for (i = 0; i <= amd_iommu_last_bdf; ++i)
+- set_device_exclusion_range(i, m);
+- break;
+- case ACPI_IVMD_TYPE_RANGE:
+- for (i = m->devid; i <= m->aux; ++i)
+- set_device_exclusion_range(i, m);
+- break;
+- default:
+- break;
+- }
+-
+- return 0;
+-}
+-
+ /* called for unity map ACPI definition */
+ static int __init init_unity_map_range(struct ivmd_header *m)
+ {
+@@ -2114,9 +2071,6 @@ static int __init init_unity_map_range(struct ivmd_header *m)
+ if (e == NULL)
+ return -ENOMEM;
+
+- if (m->flags & IVMD_FLAG_EXCL_RANGE)
+- init_exclusion_range(m);
+-
+ switch (m->type) {
+ default:
+ kfree(e);
+@@ -2140,6 +2094,16 @@ static int __init init_unity_map_range(struct ivmd_header *m)
+ e->address_end = e->address_start + PAGE_ALIGN(m->range_length);
+ e->prot = m->flags >> 1;
+
++ /*
++ * Treat per-device exclusion ranges as r/w unity-mapped regions
++ * since some buggy BIOSes might lead to the overwritten exclusion
++ * range (exclusion_start and exclusion_length members). This
++ * happens when there are multiple exclusion ranges (IVMD entries)
++ * defined in ACPI table.
++ */
++ if (m->flags & IVMD_FLAG_EXCL_RANGE)
++ e->prot = (IVMD_FLAG_IW | IVMD_FLAG_IR) >> 1;
++
+ DUMP_printk("%s devid_start: %02x:%02x.%x devid_end: %02x:%02x.%x"
+ " range_start: %016llx range_end: %016llx flags: %x\n", s,
+ PCI_BUS_NUM(e->devid_start), PCI_SLOT(e->devid_start),
+diff --git a/drivers/iommu/exynos-iommu.c b/drivers/iommu/exynos-iommu.c
+index 60c8a56e4a3f8..89f628da148ac 100644
+--- a/drivers/iommu/exynos-iommu.c
++++ b/drivers/iommu/exynos-iommu.c
+@@ -1295,13 +1295,17 @@ static int exynos_iommu_of_xlate(struct device *dev,
+ return -ENODEV;
+
+ data = platform_get_drvdata(sysmmu);
+- if (!data)
++ if (!data) {
++ put_device(&sysmmu->dev);
+ return -ENODEV;
++ }
+
+ if (!owner) {
+ owner = kzalloc(sizeof(*owner), GFP_KERNEL);
+- if (!owner)
++ if (!owner) {
++ put_device(&sysmmu->dev);
+ return -ENOMEM;
++ }
+
+ INIT_LIST_HEAD(&owner->controllers);
+ mutex_init(&owner->rpm_lock);
+diff --git a/drivers/memstick/core/memstick.c b/drivers/memstick/core/memstick.c
+index 693ee73eb2912..ef03d6fafc5ce 100644
+--- a/drivers/memstick/core/memstick.c
++++ b/drivers/memstick/core/memstick.c
+@@ -441,6 +441,9 @@ static void memstick_check(struct work_struct *work)
+ } else if (host->card->stop)
+ host->card->stop(host->card);
+
++ if (host->removing)
++ goto out_power_off;
++
+ card = memstick_alloc_card(host);
+
+ if (!card) {
+@@ -545,6 +548,7 @@ EXPORT_SYMBOL(memstick_add_host);
+ */
+ void memstick_remove_host(struct memstick_host *host)
+ {
++ host->removing = 1;
+ flush_workqueue(workqueue);
+ mutex_lock(&host->lock);
+ if (host->card)
+diff --git a/drivers/mmc/host/sdhci-pci-core.c b/drivers/mmc/host/sdhci-pci-core.c
+index af413805bbf1a..914f5184295ff 100644
+--- a/drivers/mmc/host/sdhci-pci-core.c
++++ b/drivers/mmc/host/sdhci-pci-core.c
+@@ -794,7 +794,8 @@ static int byt_emmc_probe_slot(struct sdhci_pci_slot *slot)
+ static bool glk_broken_cqhci(struct sdhci_pci_slot *slot)
+ {
+ return slot->chip->pdev->device == PCI_DEVICE_ID_INTEL_GLK_EMMC &&
+- dmi_match(DMI_BIOS_VENDOR, "LENOVO");
++ (dmi_match(DMI_BIOS_VENDOR, "LENOVO") ||
++ dmi_match(DMI_SYS_VENDOR, "IRBIS"));
+ }
+
+ static int glk_emmc_probe_slot(struct sdhci_pci_slot *slot)
+diff --git a/drivers/net/dsa/ocelot/felix_vsc9959.c b/drivers/net/dsa/ocelot/felix_vsc9959.c
+index 1dd9e348152d7..7c167a394b762 100644
+--- a/drivers/net/dsa/ocelot/felix_vsc9959.c
++++ b/drivers/net/dsa/ocelot/felix_vsc9959.c
+@@ -607,17 +607,17 @@ struct vcap_field vsc9959_vcap_is2_keys[] = {
+ [VCAP_IS2_HK_DIP_EQ_SIP] = {118, 1},
+ /* IP4_TCP_UDP (TYPE=100) */
+ [VCAP_IS2_HK_TCP] = {119, 1},
+- [VCAP_IS2_HK_L4_SPORT] = {120, 16},
+- [VCAP_IS2_HK_L4_DPORT] = {136, 16},
++ [VCAP_IS2_HK_L4_DPORT] = {120, 16},
++ [VCAP_IS2_HK_L4_SPORT] = {136, 16},
+ [VCAP_IS2_HK_L4_RNG] = {152, 8},
+ [VCAP_IS2_HK_L4_SPORT_EQ_DPORT] = {160, 1},
+ [VCAP_IS2_HK_L4_SEQUENCE_EQ0] = {161, 1},
+- [VCAP_IS2_HK_L4_URG] = {162, 1},
+- [VCAP_IS2_HK_L4_ACK] = {163, 1},
+- [VCAP_IS2_HK_L4_PSH] = {164, 1},
+- [VCAP_IS2_HK_L4_RST] = {165, 1},
+- [VCAP_IS2_HK_L4_SYN] = {166, 1},
+- [VCAP_IS2_HK_L4_FIN] = {167, 1},
++ [VCAP_IS2_HK_L4_FIN] = {162, 1},
++ [VCAP_IS2_HK_L4_SYN] = {163, 1},
++ [VCAP_IS2_HK_L4_RST] = {164, 1},
++ [VCAP_IS2_HK_L4_PSH] = {165, 1},
++ [VCAP_IS2_HK_L4_ACK] = {166, 1},
++ [VCAP_IS2_HK_L4_URG] = {167, 1},
+ [VCAP_IS2_HK_L4_1588_DOM] = {168, 8},
+ [VCAP_IS2_HK_L4_1588_VER] = {176, 4},
+ /* IP4_OTHER (TYPE=101) */
+diff --git a/drivers/net/ethernet/dec/tulip/de2104x.c b/drivers/net/ethernet/dec/tulip/de2104x.c
+index 592454f444ce2..0b30011b9839e 100644
+--- a/drivers/net/ethernet/dec/tulip/de2104x.c
++++ b/drivers/net/ethernet/dec/tulip/de2104x.c
+@@ -85,7 +85,7 @@ MODULE_PARM_DESC (rx_copybreak, "de2104x Breakpoint at which Rx packets are copi
+ #define DSL CONFIG_DE2104X_DSL
+ #endif
+
+-#define DE_RX_RING_SIZE 64
++#define DE_RX_RING_SIZE 128
+ #define DE_TX_RING_SIZE 64
+ #define DE_RING_BYTES \
+ ((sizeof(struct de_desc) * DE_RX_RING_SIZE) + \
+diff --git a/drivers/net/hyperv/hyperv_net.h b/drivers/net/hyperv/hyperv_net.h
+index abda736e7c7dc..a4d2dd2637e26 100644
+--- a/drivers/net/hyperv/hyperv_net.h
++++ b/drivers/net/hyperv/hyperv_net.h
+@@ -973,6 +973,9 @@ struct net_device_context {
+ /* Serial number of the VF to team with */
+ u32 vf_serial;
+
++ /* Is the current data path through the VF NIC? */
++ bool data_path_is_vf;
++
+ /* Used to temporarily save the config info across hibernation */
+ struct netvsc_device_info *saved_netvsc_dev_info;
+ };
+diff --git a/drivers/net/hyperv/netvsc_drv.c b/drivers/net/hyperv/netvsc_drv.c
+index a2db5ef3b62a2..3b0dc1f0ef212 100644
+--- a/drivers/net/hyperv/netvsc_drv.c
++++ b/drivers/net/hyperv/netvsc_drv.c
+@@ -2323,7 +2323,16 @@ static int netvsc_register_vf(struct net_device *vf_netdev)
+ return NOTIFY_OK;
+ }
+
+-/* VF up/down change detected, schedule to change data path */
++/* Change the data path when VF UP/DOWN/CHANGE are detected.
++ *
++ * Typically a UP or DOWN event is followed by a CHANGE event, so
++ * net_device_ctx->data_path_is_vf is used to cache the current data path
++ * to avoid the duplicate call of netvsc_switch_datapath() and the duplicate
++ * message.
++ *
++ * During hibernation, if a VF NIC driver (e.g. mlx5) preserves the network
++ * interface, there is only the CHANGE event and no UP or DOWN event.
++ */
+ static int netvsc_vf_changed(struct net_device *vf_netdev)
+ {
+ struct net_device_context *net_device_ctx;
+@@ -2340,6 +2349,10 @@ static int netvsc_vf_changed(struct net_device *vf_netdev)
+ if (!netvsc_dev)
+ return NOTIFY_DONE;
+
++ if (net_device_ctx->data_path_is_vf == vf_is_up)
++ return NOTIFY_OK;
++ net_device_ctx->data_path_is_vf = vf_is_up;
++
+ netvsc_switch_datapath(ndev, vf_is_up);
+ netdev_info(ndev, "Data path switched %s VF: %s\n",
+ vf_is_up ? "to" : "from", vf_netdev->name);
+@@ -2581,6 +2594,12 @@ static int netvsc_resume(struct hv_device *dev)
+ rtnl_lock();
+
+ net_device_ctx = netdev_priv(net);
++
++ /* Reset the data path to the netvsc NIC before re-opening the vmbus
++ * channel. Later netvsc_netdev_event() will switch the data path to
++ * the VF upon the UP or CHANGE event.
++ */
++ net_device_ctx->data_path_is_vf = false;
+ device_info = net_device_ctx->saved_netvsc_dev_info;
+
+ ret = netvsc_attach(net, device_info);
+diff --git a/drivers/net/usb/rndis_host.c b/drivers/net/usb/rndis_host.c
+index bd9c07888ebb4..6fa7a009a24a4 100644
+--- a/drivers/net/usb/rndis_host.c
++++ b/drivers/net/usb/rndis_host.c
+@@ -201,7 +201,7 @@ int rndis_command(struct usbnet *dev, struct rndis_msg_hdr *buf, int buflen)
+ dev_dbg(&info->control->dev,
+ "rndis response error, code %d\n", retval);
+ }
+- msleep(20);
++ msleep(40);
+ }
+ dev_dbg(&info->control->dev, "rndis response timeout\n");
+ return -ETIMEDOUT;
+diff --git a/drivers/net/wan/hdlc_cisco.c b/drivers/net/wan/hdlc_cisco.c
+index 444130655d8ea..cb5898f7d68c9 100644
+--- a/drivers/net/wan/hdlc_cisco.c
++++ b/drivers/net/wan/hdlc_cisco.c
+@@ -118,6 +118,7 @@ static void cisco_keepalive_send(struct net_device *dev, u32 type,
+ skb_put(skb, sizeof(struct cisco_packet));
+ skb->priority = TC_PRIO_CONTROL;
+ skb->dev = dev;
++ skb->protocol = htons(ETH_P_HDLC);
+ skb_reset_network_header(skb);
+
+ dev_queue_xmit(skb);
+diff --git a/drivers/net/wan/hdlc_fr.c b/drivers/net/wan/hdlc_fr.c
+index 9acad651ea1f6..d6cfd51613ed8 100644
+--- a/drivers/net/wan/hdlc_fr.c
++++ b/drivers/net/wan/hdlc_fr.c
+@@ -433,6 +433,8 @@ static netdev_tx_t pvc_xmit(struct sk_buff *skb, struct net_device *dev)
+ if (pvc->state.fecn) /* TX Congestion counter */
+ dev->stats.tx_compressed++;
+ skb->dev = pvc->frad;
++ skb->protocol = htons(ETH_P_HDLC);
++ skb_reset_network_header(skb);
+ dev_queue_xmit(skb);
+ return NETDEV_TX_OK;
+ }
+@@ -555,6 +557,7 @@ static void fr_lmi_send(struct net_device *dev, int fullrep)
+ skb_put(skb, i);
+ skb->priority = TC_PRIO_CONTROL;
+ skb->dev = dev;
++ skb->protocol = htons(ETH_P_HDLC);
+ skb_reset_network_header(skb);
+
+ dev_queue_xmit(skb);
+@@ -1041,7 +1044,7 @@ static void pvc_setup(struct net_device *dev)
+ {
+ dev->type = ARPHRD_DLCI;
+ dev->flags = IFF_POINTOPOINT;
+- dev->hard_header_len = 10;
++ dev->hard_header_len = 0;
+ dev->addr_len = 2;
+ netif_keep_dst(dev);
+ }
+@@ -1093,6 +1096,7 @@ static int fr_add_pvc(struct net_device *frad, unsigned int dlci, int type)
+ dev->mtu = HDLC_MAX_MTU;
+ dev->min_mtu = 68;
+ dev->max_mtu = HDLC_MAX_MTU;
++ dev->needed_headroom = 10;
+ dev->priv_flags |= IFF_NO_QUEUE;
+ dev->ml_priv = pvc;
+
+diff --git a/drivers/net/wan/hdlc_ppp.c b/drivers/net/wan/hdlc_ppp.c
+index 16f33d1ffbfb9..64f8556513369 100644
+--- a/drivers/net/wan/hdlc_ppp.c
++++ b/drivers/net/wan/hdlc_ppp.c
+@@ -251,6 +251,7 @@ static void ppp_tx_cp(struct net_device *dev, u16 pid, u8 code,
+
+ skb->priority = TC_PRIO_CONTROL;
+ skb->dev = dev;
++ skb->protocol = htons(ETH_P_HDLC);
+ skb_reset_network_header(skb);
+ skb_queue_tail(&tx_queue, skb);
+ }
+diff --git a/drivers/net/wan/lapbether.c b/drivers/net/wan/lapbether.c
+index e61616b0b91c7..b726101d4707a 100644
+--- a/drivers/net/wan/lapbether.c
++++ b/drivers/net/wan/lapbether.c
+@@ -198,8 +198,6 @@ static void lapbeth_data_transmit(struct net_device *ndev, struct sk_buff *skb)
+ struct net_device *dev;
+ int size = skb->len;
+
+- skb->protocol = htons(ETH_P_X25);
+-
+ ptr = skb_push(skb, 2);
+
+ *ptr++ = size % 256;
+@@ -210,6 +208,8 @@ static void lapbeth_data_transmit(struct net_device *ndev, struct sk_buff *skb)
+
+ skb->dev = dev = lapbeth->ethdev;
+
++ skb->protocol = htons(ETH_P_DEC);
++
+ skb_reset_network_header(skb);
+
+ dev_hard_header(skb, dev, ETH_P_DEC, bcast_addr, NULL, 0);
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/init.c b/drivers/net/wireless/mediatek/mt76/mt7915/init.c
+index aadf56e80bae8..d7a3b05ab50c3 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/init.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/init.c
+@@ -691,8 +691,12 @@ void mt7915_unregister_device(struct mt7915_dev *dev)
+ spin_lock_bh(&dev->token_lock);
+ idr_for_each_entry(&dev->token, txwi, id) {
+ mt7915_txp_skb_unmap(&dev->mt76, txwi);
+- if (txwi->skb)
+- dev_kfree_skb_any(txwi->skb);
++ if (txwi->skb) {
++ struct ieee80211_hw *hw;
++
++ hw = mt76_tx_status_get_hw(&dev->mt76, txwi->skb);
++ ieee80211_free_txskb(hw, txwi->skb);
++ }
+ mt76_put_txwi(&dev->mt76, txwi);
+ }
+ spin_unlock_bh(&dev->token_lock);
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/mac.c b/drivers/net/wireless/mediatek/mt76/mt7915/mac.c
+index a264e304a3dfb..5800b2d1fb233 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/mac.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/mac.c
+@@ -844,7 +844,7 @@ mt7915_tx_complete_status(struct mt76_dev *mdev, struct sk_buff *skb,
+ if (sta || !(info->flags & IEEE80211_TX_CTL_NO_ACK))
+ mt7915_tx_status(sta, hw, info, NULL);
+
+- dev_kfree_skb(skb);
++ ieee80211_free_txskb(hw, skb);
+ }
+
+ void mt7915_txp_skb_unmap(struct mt76_dev *dev,
+diff --git a/drivers/net/wireless/ti/wlcore/cmd.h b/drivers/net/wireless/ti/wlcore/cmd.h
+index 9acd8a41ea61f..f2609d5b6bf71 100644
+--- a/drivers/net/wireless/ti/wlcore/cmd.h
++++ b/drivers/net/wireless/ti/wlcore/cmd.h
+@@ -458,7 +458,6 @@ enum wl1271_cmd_key_type {
+ KEY_TKIP = 2,
+ KEY_AES = 3,
+ KEY_GEM = 4,
+- KEY_IGTK = 5,
+ };
+
+ struct wl1271_cmd_set_keys {
+diff --git a/drivers/net/wireless/ti/wlcore/main.c b/drivers/net/wireless/ti/wlcore/main.c
+index de6c8a7589ca3..ef169de992249 100644
+--- a/drivers/net/wireless/ti/wlcore/main.c
++++ b/drivers/net/wireless/ti/wlcore/main.c
+@@ -3550,9 +3550,6 @@ int wlcore_set_key(struct wl1271 *wl, enum set_key_cmd cmd,
+ case WL1271_CIPHER_SUITE_GEM:
+ key_type = KEY_GEM;
+ break;
+- case WLAN_CIPHER_SUITE_AES_CMAC:
+- key_type = KEY_IGTK;
+- break;
+ default:
+ wl1271_error("Unknown key algo 0x%x", key_conf->cipher);
+
+@@ -6222,7 +6219,6 @@ static int wl1271_init_ieee80211(struct wl1271 *wl)
+ WLAN_CIPHER_SUITE_TKIP,
+ WLAN_CIPHER_SUITE_CCMP,
+ WL1271_CIPHER_SUITE_GEM,
+- WLAN_CIPHER_SUITE_AES_CMAC,
+ };
+
+ /* The tx descriptor buffer */
+diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
+index f2556f0ea20dc..69165a8f7c1f0 100644
+--- a/drivers/nvme/host/core.c
++++ b/drivers/nvme/host/core.c
+@@ -3060,10 +3060,24 @@ static int nvme_dev_open(struct inode *inode, struct file *file)
+ return -EWOULDBLOCK;
+ }
+
++ nvme_get_ctrl(ctrl);
++ if (!try_module_get(ctrl->ops->module))
++ return -EINVAL;
++
+ file->private_data = ctrl;
+ return 0;
+ }
+
++static int nvme_dev_release(struct inode *inode, struct file *file)
++{
++ struct nvme_ctrl *ctrl =
++ container_of(inode->i_cdev, struct nvme_ctrl, cdev);
++
++ module_put(ctrl->ops->module);
++ nvme_put_ctrl(ctrl);
++ return 0;
++}
++
+ static int nvme_dev_user_cmd(struct nvme_ctrl *ctrl, void __user *argp)
+ {
+ struct nvme_ns *ns;
+@@ -3126,6 +3140,7 @@ static long nvme_dev_ioctl(struct file *file, unsigned int cmd,
+ static const struct file_operations nvme_dev_fops = {
+ .owner = THIS_MODULE,
+ .open = nvme_dev_open,
++ .release = nvme_dev_release,
+ .unlocked_ioctl = nvme_dev_ioctl,
+ .compat_ioctl = compat_ptr_ioctl,
+ };
+diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c
+index 92c966ac34c20..43c1745ecd45b 100644
+--- a/drivers/nvme/host/fc.c
++++ b/drivers/nvme/host/fc.c
+@@ -3668,12 +3668,14 @@ nvme_fc_create_ctrl(struct device *dev, struct nvmf_ctrl_options *opts)
+ spin_lock_irqsave(&nvme_fc_lock, flags);
+ list_for_each_entry(lport, &nvme_fc_lport_list, port_list) {
+ if (lport->localport.node_name != laddr.nn ||
+- lport->localport.port_name != laddr.pn)
++ lport->localport.port_name != laddr.pn ||
++ lport->localport.port_state != FC_OBJSTATE_ONLINE)
+ continue;
+
+ list_for_each_entry(rport, &lport->endp_list, endp_list) {
+ if (rport->remoteport.node_name != raddr.nn ||
+- rport->remoteport.port_name != raddr.pn)
++ rport->remoteport.port_name != raddr.pn ||
++ rport->remoteport.port_state != FC_OBJSTATE_ONLINE)
+ continue;
+
+ /* if fail to get reference fall through. Will error */
+diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
+index 69a19fe241063..cc3ae9c63a01b 100644
+--- a/drivers/nvme/host/pci.c
++++ b/drivers/nvme/host/pci.c
+@@ -942,13 +942,6 @@ static inline void nvme_handle_cqe(struct nvme_queue *nvmeq, u16 idx)
+ struct nvme_completion *cqe = &nvmeq->cqes[idx];
+ struct request *req;
+
+- if (unlikely(cqe->command_id >= nvmeq->q_depth)) {
+- dev_warn(nvmeq->dev->ctrl.device,
+- "invalid id %d completed on queue %d\n",
+- cqe->command_id, le16_to_cpu(cqe->sq_id));
+- return;
+- }
+-
+ /*
+ * AEN requests are special as they don't time out and can
+ * survive any kind of queue freeze and often don't respond to
+@@ -962,6 +955,13 @@ static inline void nvme_handle_cqe(struct nvme_queue *nvmeq, u16 idx)
+ }
+
+ req = blk_mq_tag_to_rq(nvme_queue_tagset(nvmeq), cqe->command_id);
++ if (unlikely(!req)) {
++ dev_warn(nvmeq->dev->ctrl.device,
++ "invalid id %d completed on queue %d\n",
++ cqe->command_id, le16_to_cpu(cqe->sq_id));
++ return;
++ }
++
+ trace_nvme_sq(req, cqe->sq_head, nvmeq->sq_tail);
+ nvme_end_request(req, cqe->status, cqe->result);
+ }
+@@ -3093,7 +3093,8 @@ static const struct pci_device_id nvme_id_table[] = {
+ { PCI_VDEVICE(INTEL, 0xf1a5), /* Intel 600P/P3100 */
+ .driver_data = NVME_QUIRK_NO_DEEPEST_PS |
+ NVME_QUIRK_MEDIUM_PRIO_SQ |
+- NVME_QUIRK_NO_TEMP_THRESH_CHANGE },
++ NVME_QUIRK_NO_TEMP_THRESH_CHANGE |
++ NVME_QUIRK_DISABLE_WRITE_ZEROES, },
+ { PCI_VDEVICE(INTEL, 0xf1a6), /* Intel 760p/Pro 7600p */
+ .driver_data = NVME_QUIRK_IGNORE_DEV_SUBNQN, },
+ { PCI_VDEVICE(INTEL, 0x5845), /* Qemu emulated controller */
+diff --git a/drivers/phy/ti/phy-am654-serdes.c b/drivers/phy/ti/phy-am654-serdes.c
+index a174b3c3f010f..819c49af169ac 100644
+--- a/drivers/phy/ti/phy-am654-serdes.c
++++ b/drivers/phy/ti/phy-am654-serdes.c
+@@ -725,8 +725,10 @@ static int serdes_am654_probe(struct platform_device *pdev)
+ pm_runtime_enable(dev);
+
+ phy = devm_phy_create(dev, NULL, &ops);
+- if (IS_ERR(phy))
+- return PTR_ERR(phy);
++ if (IS_ERR(phy)) {
++ ret = PTR_ERR(phy);
++ goto clk_err;
++ }
+
+ phy_set_drvdata(phy, am654_phy);
+ phy_provider = devm_of_phy_provider_register(dev, serdes_am654_xlate);
+diff --git a/drivers/pinctrl/mediatek/pinctrl-mtk-common-v2.c b/drivers/pinctrl/mediatek/pinctrl-mtk-common-v2.c
+index 2f3dfb56c3fa4..35bbe59357088 100644
+--- a/drivers/pinctrl/mediatek/pinctrl-mtk-common-v2.c
++++ b/drivers/pinctrl/mediatek/pinctrl-mtk-common-v2.c
+@@ -259,6 +259,10 @@ bool mtk_is_virt_gpio(struct mtk_pinctrl *hw, unsigned int gpio_n)
+
+ desc = (const struct mtk_pin_desc *)&hw->soc->pins[gpio_n];
+
++ /* if the GPIO is not supported for eint mode */
++ if (desc->eint.eint_m == NO_EINT_SUPPORT)
++ return virt_gpio;
++
+ if (desc->funcs && !desc->funcs[desc->eint.eint_m].name)
+ virt_gpio = true;
+
+diff --git a/drivers/pinctrl/mvebu/pinctrl-armada-xp.c b/drivers/pinctrl/mvebu/pinctrl-armada-xp.c
+index a767a05fa3a0d..48e2a6c56a83b 100644
+--- a/drivers/pinctrl/mvebu/pinctrl-armada-xp.c
++++ b/drivers/pinctrl/mvebu/pinctrl-armada-xp.c
+@@ -414,7 +414,7 @@ static struct mvebu_mpp_mode mv98dx3236_mpp_modes[] = {
+ MPP_VAR_FUNCTION(0x1, "i2c0", "sck", V_98DX3236_PLUS)),
+ MPP_MODE(15,
+ MPP_VAR_FUNCTION(0x0, "gpio", NULL, V_98DX3236_PLUS),
+- MPP_VAR_FUNCTION(0x4, "i2c0", "sda", V_98DX3236_PLUS)),
++ MPP_VAR_FUNCTION(0x1, "i2c0", "sda", V_98DX3236_PLUS)),
+ MPP_MODE(16,
+ MPP_VAR_FUNCTION(0x0, "gpo", NULL, V_98DX3236_PLUS),
+ MPP_VAR_FUNCTION(0x4, "dev", "oe", V_98DX3236_PLUS)),
+diff --git a/drivers/pinctrl/qcom/pinctrl-sm8250.c b/drivers/pinctrl/qcom/pinctrl-sm8250.c
+index a660f1274b667..826df0d637eaa 100644
+--- a/drivers/pinctrl/qcom/pinctrl-sm8250.c
++++ b/drivers/pinctrl/qcom/pinctrl-sm8250.c
+@@ -1308,7 +1308,7 @@ static const struct msm_pingroup sm8250_groups[] = {
+ [178] = PINGROUP(178, WEST, _, _, _, _, _, _, _, _, _),
+ [179] = PINGROUP(179, WEST, _, _, _, _, _, _, _, _, _),
+ [180] = UFS_RESET(ufs_reset, 0xb8000),
+- [181] = SDC_PINGROUP(sdc2_clk, 0x7000, 14, 6),
++ [181] = SDC_PINGROUP(sdc2_clk, 0xb7000, 14, 6),
+ [182] = SDC_PINGROUP(sdc2_cmd, 0xb7000, 11, 3),
+ [183] = SDC_PINGROUP(sdc2_data, 0xb7000, 9, 0),
+ };
+diff --git a/drivers/scsi/iscsi_tcp.c b/drivers/scsi/iscsi_tcp.c
+index b5dd1caae5e92..d10efb66cf193 100644
+--- a/drivers/scsi/iscsi_tcp.c
++++ b/drivers/scsi/iscsi_tcp.c
+@@ -736,6 +736,7 @@ static int iscsi_sw_tcp_conn_get_param(struct iscsi_cls_conn *cls_conn,
+ struct iscsi_tcp_conn *tcp_conn = conn->dd_data;
+ struct iscsi_sw_tcp_conn *tcp_sw_conn = tcp_conn->dd_data;
+ struct sockaddr_in6 addr;
++ struct socket *sock;
+ int rc;
+
+ switch(param) {
+@@ -747,13 +748,17 @@ static int iscsi_sw_tcp_conn_get_param(struct iscsi_cls_conn *cls_conn,
+ spin_unlock_bh(&conn->session->frwd_lock);
+ return -ENOTCONN;
+ }
++ sock = tcp_sw_conn->sock;
++ sock_hold(sock->sk);
++ spin_unlock_bh(&conn->session->frwd_lock);
++
+ if (param == ISCSI_PARAM_LOCAL_PORT)
+- rc = kernel_getsockname(tcp_sw_conn->sock,
++ rc = kernel_getsockname(sock,
+ (struct sockaddr *)&addr);
+ else
+- rc = kernel_getpeername(tcp_sw_conn->sock,
++ rc = kernel_getpeername(sock,
+ (struct sockaddr *)&addr);
+- spin_unlock_bh(&conn->session->frwd_lock);
++ sock_put(sock->sk);
+ if (rc < 0)
+ return rc;
+
+@@ -775,6 +780,7 @@ static int iscsi_sw_tcp_host_get_param(struct Scsi_Host *shost,
+ struct iscsi_tcp_conn *tcp_conn;
+ struct iscsi_sw_tcp_conn *tcp_sw_conn;
+ struct sockaddr_in6 addr;
++ struct socket *sock;
+ int rc;
+
+ switch (param) {
+@@ -789,16 +795,18 @@ static int iscsi_sw_tcp_host_get_param(struct Scsi_Host *shost,
+ return -ENOTCONN;
+ }
+ tcp_conn = conn->dd_data;
+-
+ tcp_sw_conn = tcp_conn->dd_data;
+- if (!tcp_sw_conn->sock) {
++ sock = tcp_sw_conn->sock;
++ if (!sock) {
+ spin_unlock_bh(&session->frwd_lock);
+ return -ENOTCONN;
+ }
++ sock_hold(sock->sk);
++ spin_unlock_bh(&session->frwd_lock);
+
+- rc = kernel_getsockname(tcp_sw_conn->sock,
++ rc = kernel_getsockname(sock,
+ (struct sockaddr *)&addr);
+- spin_unlock_bh(&session->frwd_lock);
++ sock_put(sock->sk);
+ if (rc < 0)
+ return rc;
+
+diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c
+index d90fefffe31b7..4b2117cb84837 100644
+--- a/drivers/scsi/sd.c
++++ b/drivers/scsi/sd.c
+@@ -2966,26 +2966,32 @@ static void sd_read_block_characteristics(struct scsi_disk *sdkp)
+
+ if (sdkp->device->type == TYPE_ZBC) {
+ /* Host-managed */
+- q->limits.zoned = BLK_ZONED_HM;
++ blk_queue_set_zoned(sdkp->disk, BLK_ZONED_HM);
+ } else {
+ sdkp->zoned = (buffer[8] >> 4) & 3;
+- if (sdkp->zoned == 1 && !disk_has_partitions(sdkp->disk)) {
++ if (sdkp->zoned == 1) {
+ /* Host-aware */
+- q->limits.zoned = BLK_ZONED_HA;
++ blk_queue_set_zoned(sdkp->disk, BLK_ZONED_HA);
+ } else {
+- /*
+- * Treat drive-managed devices and host-aware devices
+- * with partitions as regular block devices.
+- */
+- q->limits.zoned = BLK_ZONED_NONE;
+- if (sdkp->zoned == 2 && sdkp->first_scan)
+- sd_printk(KERN_NOTICE, sdkp,
+- "Drive-managed SMR disk\n");
++ /* Regular disk or drive managed disk */
++ blk_queue_set_zoned(sdkp->disk, BLK_ZONED_NONE);
+ }
+ }
+- if (blk_queue_is_zoned(q) && sdkp->first_scan)
++
++ if (!sdkp->first_scan)
++ goto out;
++
++ if (blk_queue_is_zoned(q)) {
+ sd_printk(KERN_NOTICE, sdkp, "Host-%s zoned block device\n",
+ q->limits.zoned == BLK_ZONED_HM ? "managed" : "aware");
++ } else {
++ if (sdkp->zoned == 1)
++ sd_printk(KERN_NOTICE, sdkp,
++ "Host-aware SMR disk used as regular disk\n");
++ else if (sdkp->zoned == 2)
++ sd_printk(KERN_NOTICE, sdkp,
++ "Drive-managed SMR disk\n");
++ }
+
+ out:
+ kfree(buffer);
+@@ -3398,10 +3404,6 @@ static int sd_probe(struct device *dev)
+ sdkp->first_scan = 1;
+ sdkp->max_medium_access_timeouts = SD_MAX_MEDIUM_TIMEOUTS;
+
+- error = sd_zbc_init_disk(sdkp);
+- if (error)
+- goto out_free_index;
+-
+ sd_revalidate_disk(gd);
+
+ gd->flags = GENHD_FL_EXT_DEVT;
+diff --git a/drivers/scsi/sd.h b/drivers/scsi/sd.h
+index 3a74f4b45134f..9c24de305c6b9 100644
+--- a/drivers/scsi/sd.h
++++ b/drivers/scsi/sd.h
+@@ -213,7 +213,6 @@ static inline int sd_is_zoned(struct scsi_disk *sdkp)
+
+ #ifdef CONFIG_BLK_DEV_ZONED
+
+-int sd_zbc_init_disk(struct scsi_disk *sdkp);
+ void sd_zbc_release_disk(struct scsi_disk *sdkp);
+ extern int sd_zbc_read_zones(struct scsi_disk *sdkp, unsigned char *buffer);
+ extern void sd_zbc_print_zones(struct scsi_disk *sdkp);
+@@ -229,17 +228,6 @@ blk_status_t sd_zbc_prepare_zone_append(struct scsi_cmnd *cmd, sector_t *lba,
+
+ #else /* CONFIG_BLK_DEV_ZONED */
+
+-static inline int sd_zbc_init(void)
+-{
+- return 0;
+-}
+-
+-static inline int sd_zbc_init_disk(struct scsi_disk *sdkp)
+-{
+- return 0;
+-}
+-
+-static inline void sd_zbc_exit(void) {}
+ static inline void sd_zbc_release_disk(struct scsi_disk *sdkp) {}
+
+ static inline int sd_zbc_read_zones(struct scsi_disk *sdkp,
+@@ -260,7 +248,7 @@ static inline blk_status_t sd_zbc_setup_zone_mgmt_cmnd(struct scsi_cmnd *cmd,
+ static inline unsigned int sd_zbc_complete(struct scsi_cmnd *cmd,
+ unsigned int good_bytes, struct scsi_sense_hdr *sshdr)
+ {
+- return 0;
++ return good_bytes;
+ }
+
+ static inline blk_status_t sd_zbc_prepare_zone_append(struct scsi_cmnd *cmd,
+diff --git a/drivers/scsi/sd_zbc.c b/drivers/scsi/sd_zbc.c
+index 6f7eba66687e9..8384b5dcfa029 100644
+--- a/drivers/scsi/sd_zbc.c
++++ b/drivers/scsi/sd_zbc.c
+@@ -633,6 +633,45 @@ static int sd_zbc_check_capacity(struct scsi_disk *sdkp, unsigned char *buf,
+ return 0;
+ }
+
++void sd_zbc_print_zones(struct scsi_disk *sdkp)
++{
++ if (!sd_is_zoned(sdkp) || !sdkp->capacity)
++ return;
++
++ if (sdkp->capacity & (sdkp->zone_blocks - 1))
++ sd_printk(KERN_NOTICE, sdkp,
++ "%u zones of %u logical blocks + 1 runt zone\n",
++ sdkp->nr_zones - 1,
++ sdkp->zone_blocks);
++ else
++ sd_printk(KERN_NOTICE, sdkp,
++ "%u zones of %u logical blocks\n",
++ sdkp->nr_zones,
++ sdkp->zone_blocks);
++}
++
++static int sd_zbc_init_disk(struct scsi_disk *sdkp)
++{
++ sdkp->zones_wp_offset = NULL;
++ spin_lock_init(&sdkp->zones_wp_offset_lock);
++ sdkp->rev_wp_offset = NULL;
++ mutex_init(&sdkp->rev_mutex);
++ INIT_WORK(&sdkp->zone_wp_offset_work, sd_zbc_update_wp_offset_workfn);
++ sdkp->zone_wp_update_buf = kzalloc(SD_BUF_SIZE, GFP_KERNEL);
++ if (!sdkp->zone_wp_update_buf)
++ return -ENOMEM;
++
++ return 0;
++}
++
++void sd_zbc_release_disk(struct scsi_disk *sdkp)
++{
++ kvfree(sdkp->zones_wp_offset);
++ sdkp->zones_wp_offset = NULL;
++ kfree(sdkp->zone_wp_update_buf);
++ sdkp->zone_wp_update_buf = NULL;
++}
++
+ static void sd_zbc_revalidate_zones_cb(struct gendisk *disk)
+ {
+ struct scsi_disk *sdkp = scsi_disk(disk);
+@@ -645,8 +684,30 @@ static int sd_zbc_revalidate_zones(struct scsi_disk *sdkp,
+ unsigned int nr_zones)
+ {
+ struct gendisk *disk = sdkp->disk;
++ struct request_queue *q = disk->queue;
++ u32 max_append;
+ int ret = 0;
+
++ /*
++ * For all zoned disks, initialize zone append emulation data if not
++ * already done. This is necessary also for host-aware disks used as
++ * regular disks due to the presence of partitions as these partitions
++ * may be deleted and the disk zoned model changed back from
++ * BLK_ZONED_NONE to BLK_ZONED_HA.
++ */
++ if (sd_is_zoned(sdkp) && !sdkp->zone_wp_update_buf) {
++ ret = sd_zbc_init_disk(sdkp);
++ if (ret)
++ return ret;
++ }
++
++ /*
++ * There is nothing to do for regular disks, including host-aware disks
++ * that have partitions.
++ */
++ if (!blk_queue_is_zoned(q))
++ return 0;
++
+ /*
+ * Make sure revalidate zones are serialized to ensure exclusive
+ * updates of the scsi disk data.
+@@ -681,6 +742,19 @@ static int sd_zbc_revalidate_zones(struct scsi_disk *sdkp,
+ kvfree(sdkp->rev_wp_offset);
+ sdkp->rev_wp_offset = NULL;
+
++ if (ret) {
++ sdkp->zone_blocks = 0;
++ sdkp->nr_zones = 0;
++ sdkp->capacity = 0;
++ goto unlock;
++ }
++
++ max_append = min_t(u32, logical_to_sectors(sdkp->device, zone_blocks),
++ q->limits.max_segments << (PAGE_SHIFT - 9));
++ max_append = min_t(u32, max_append, queue_max_hw_sectors(q));
++
++ blk_queue_max_zone_append_sectors(q, max_append);
++
+ unlock:
+ mutex_unlock(&sdkp->rev_mutex);
+
+@@ -693,7 +767,6 @@ int sd_zbc_read_zones(struct scsi_disk *sdkp, unsigned char *buf)
+ struct request_queue *q = disk->queue;
+ unsigned int nr_zones;
+ u32 zone_blocks = 0;
+- u32 max_append;
+ int ret;
+
+ if (!sd_is_zoned(sdkp))
+@@ -726,20 +799,6 @@ int sd_zbc_read_zones(struct scsi_disk *sdkp, unsigned char *buf)
+ if (ret)
+ goto err;
+
+- /*
+- * On the first scan 'chunk_sectors' isn't setup yet, so calling
+- * blk_queue_max_zone_append_sectors() will result in a WARN(). Defer
+- * this setting to the second scan.
+- */
+- if (sdkp->first_scan)
+- return 0;
+-
+- max_append = min_t(u32, logical_to_sectors(sdkp->device, zone_blocks),
+- q->limits.max_segments << (PAGE_SHIFT - 9));
+- max_append = min_t(u32, max_append, queue_max_hw_sectors(q));
+-
+- blk_queue_max_zone_append_sectors(q, max_append);
+-
+ return 0;
+
+ err:
+@@ -747,45 +806,3 @@ err:
+
+ return ret;
+ }
+-
+-void sd_zbc_print_zones(struct scsi_disk *sdkp)
+-{
+- if (!sd_is_zoned(sdkp) || !sdkp->capacity)
+- return;
+-
+- if (sdkp->capacity & (sdkp->zone_blocks - 1))
+- sd_printk(KERN_NOTICE, sdkp,
+- "%u zones of %u logical blocks + 1 runt zone\n",
+- sdkp->nr_zones - 1,
+- sdkp->zone_blocks);
+- else
+- sd_printk(KERN_NOTICE, sdkp,
+- "%u zones of %u logical blocks\n",
+- sdkp->nr_zones,
+- sdkp->zone_blocks);
+-}
+-
+-int sd_zbc_init_disk(struct scsi_disk *sdkp)
+-{
+- if (!sd_is_zoned(sdkp))
+- return 0;
+-
+- sdkp->zones_wp_offset = NULL;
+- spin_lock_init(&sdkp->zones_wp_offset_lock);
+- sdkp->rev_wp_offset = NULL;
+- mutex_init(&sdkp->rev_mutex);
+- INIT_WORK(&sdkp->zone_wp_offset_work, sd_zbc_update_wp_offset_workfn);
+- sdkp->zone_wp_update_buf = kzalloc(SD_BUF_SIZE, GFP_KERNEL);
+- if (!sdkp->zone_wp_update_buf)
+- return -ENOMEM;
+-
+- return 0;
+-}
+-
+-void sd_zbc_release_disk(struct scsi_disk *sdkp)
+-{
+- kvfree(sdkp->zones_wp_offset);
+- sdkp->zones_wp_offset = NULL;
+- kfree(sdkp->zone_wp_update_buf);
+- sdkp->zone_wp_update_buf = NULL;
+-}
+diff --git a/drivers/spi/spi-fsl-espi.c b/drivers/spi/spi-fsl-espi.c
+index e60581283a247..6d148ab70b93e 100644
+--- a/drivers/spi/spi-fsl-espi.c
++++ b/drivers/spi/spi-fsl-espi.c
+@@ -564,13 +564,14 @@ static void fsl_espi_cpu_irq(struct fsl_espi *espi, u32 events)
+ static irqreturn_t fsl_espi_irq(s32 irq, void *context_data)
+ {
+ struct fsl_espi *espi = context_data;
+- u32 events;
++ u32 events, mask;
+
+ spin_lock(&espi->lock);
+
+ /* Get interrupt events(tx/rx) */
+ events = fsl_espi_read_reg(espi, ESPI_SPIE);
+- if (!events) {
++ mask = fsl_espi_read_reg(espi, ESPI_SPIM);
++ if (!(events & mask)) {
+ spin_unlock(&espi->lock);
+ return IRQ_NONE;
+ }
+diff --git a/drivers/target/target_core_transport.c b/drivers/target/target_core_transport.c
+index e6e1fa68de542..94f4f05b5002c 100644
+--- a/drivers/target/target_core_transport.c
++++ b/drivers/target/target_core_transport.c
+@@ -1840,7 +1840,8 @@ int target_submit_tmr(struct se_cmd *se_cmd, struct se_session *se_sess,
+ * out unpacked_lun for the original se_cmd.
+ */
+ if (tm_type == TMR_ABORT_TASK && (flags & TARGET_SCF_LOOKUP_LUN_FROM_TAG)) {
+- if (!target_lookup_lun_from_tag(se_sess, tag, &unpacked_lun))
++ if (!target_lookup_lun_from_tag(se_sess, tag,
++ &se_cmd->orig_fe_lun))
+ goto failure;
+ }
+
+diff --git a/drivers/usb/core/driver.c b/drivers/usb/core/driver.c
+index 7e73e989645bd..b351962279e4d 100644
+--- a/drivers/usb/core/driver.c
++++ b/drivers/usb/core/driver.c
+@@ -269,8 +269,30 @@ static int usb_probe_device(struct device *dev)
+ if (error)
+ return error;
+
++ /* Probe the USB device with the driver in hand, but only
++ * defer to a generic driver in case the current USB
++ * device driver has an id_table or a match function; i.e.,
++ * when the device driver was explicitly matched against
++ * a device.
++ *
++ * If the device driver does not have either of these,
++ * then we assume that it can bind to any device and is
++ * not truly a more specialized/non-generic driver, so a
++ * return value of -ENODEV should not force the device
++ * to be handled by the generic USB driver, as there
++ * can still be another, more specialized, device driver.
++ *
++ * This accommodates the usbip driver.
++ *
++ * TODO: What if, in the future, there are multiple
++ * specialized USB device drivers for a particular device?
++ * In such cases, there is a need to try all matching
++ * specialised device drivers prior to setting the
++ * use_generic_driver bit.
++ */
+ error = udriver->probe(udev);
+- if (error == -ENODEV && udriver != &usb_generic_driver) {
++ if (error == -ENODEV && udriver != &usb_generic_driver &&
++ (udriver->id_table || udriver->match)) {
+ udev->use_generic_driver = 1;
+ return -EPROBE_DEFER;
+ }
+@@ -831,14 +853,17 @@ static int usb_device_match(struct device *dev, struct device_driver *drv)
+ udev = to_usb_device(dev);
+ udrv = to_usb_device_driver(drv);
+
+- if (udrv->id_table &&
+- usb_device_match_id(udev, udrv->id_table) != NULL) {
+- return 1;
+- }
++ if (udrv->id_table)
++ return usb_device_match_id(udev, udrv->id_table) != NULL;
+
+ if (udrv->match)
+ return udrv->match(udev);
+- return 0;
++
++ /* If the device driver under consideration does not have a
++ * id_table or a match function, then let the driver's probe
++ * function decide.
++ */
++ return 1;
+
+ } else if (is_usb_interface(dev)) {
+ struct usb_interface *intf;
+@@ -905,26 +930,19 @@ static int usb_uevent(struct device *dev, struct kobj_uevent_env *env)
+ return 0;
+ }
+
+-static bool is_dev_usb_generic_driver(struct device *dev)
+-{
+- struct usb_device_driver *udd = dev->driver ?
+- to_usb_device_driver(dev->driver) : NULL;
+-
+- return udd == &usb_generic_driver;
+-}
+-
+ static int __usb_bus_reprobe_drivers(struct device *dev, void *data)
+ {
+ struct usb_device_driver *new_udriver = data;
+ struct usb_device *udev;
+ int ret;
+
+- if (!is_dev_usb_generic_driver(dev))
++ /* Don't reprobe if current driver isn't usb_generic_driver */
++ if (dev->driver != &usb_generic_driver.drvwrap.driver)
+ return 0;
+
+ udev = to_usb_device(dev);
+ if (usb_device_match_id(udev, new_udriver->id_table) == NULL &&
+- (!new_udriver->match || new_udriver->match(udev) != 0))
++ (!new_udriver->match || new_udriver->match(udev) == 0))
+ return 0;
+
+ ret = device_reprobe(dev);
+diff --git a/drivers/usb/gadget/function/f_ncm.c b/drivers/usb/gadget/function/f_ncm.c
+index b4206b0dede54..1f638759a9533 100644
+--- a/drivers/usb/gadget/function/f_ncm.c
++++ b/drivers/usb/gadget/function/f_ncm.c
+@@ -1189,7 +1189,6 @@ static int ncm_unwrap_ntb(struct gether *port,
+ const struct ndp_parser_opts *opts = ncm->parser_opts;
+ unsigned crc_len = ncm->is_crc ? sizeof(uint32_t) : 0;
+ int dgram_counter;
+- bool ndp_after_header;
+
+ /* dwSignature */
+ if (get_unaligned_le32(tmp) != opts->nth_sign) {
+@@ -1216,7 +1215,6 @@ static int ncm_unwrap_ntb(struct gether *port,
+ }
+
+ ndp_index = get_ncm(&tmp, opts->ndp_index);
+- ndp_after_header = false;
+
+ /* Run through all the NDP's in the NTB */
+ do {
+@@ -1232,8 +1230,6 @@ static int ncm_unwrap_ntb(struct gether *port,
+ ndp_index);
+ goto err;
+ }
+- if (ndp_index == opts->nth_size)
+- ndp_after_header = true;
+
+ /*
+ * walk through NDP
+@@ -1312,37 +1308,13 @@ static int ncm_unwrap_ntb(struct gether *port,
+ index2 = get_ncm(&tmp, opts->dgram_item_len);
+ dg_len2 = get_ncm(&tmp, opts->dgram_item_len);
+
+- if (index2 == 0 || dg_len2 == 0)
+- break;
+-
+ /* wDatagramIndex[1] */
+- if (ndp_after_header) {
+- if (index2 < opts->nth_size + opts->ndp_size) {
+- INFO(port->func.config->cdev,
+- "Bad index: %#X\n", index2);
+- goto err;
+- }
+- } else {
+- if (index2 < opts->nth_size + opts->dpe_size) {
+- INFO(port->func.config->cdev,
+- "Bad index: %#X\n", index2);
+- goto err;
+- }
+- }
+ if (index2 > block_len - opts->dpe_size) {
+ INFO(port->func.config->cdev,
+ "Bad index: %#X\n", index2);
+ goto err;
+ }
+
+- /* wDatagramLength[1] */
+- if ((dg_len2 < 14 + crc_len) ||
+- (dg_len2 > frame_max)) {
+- INFO(port->func.config->cdev,
+- "Bad dgram length: %#X\n", dg_len);
+- goto err;
+- }
+-
+ /*
+ * Copy the data into a new skb.
+ * This ensures the truesize is correct
+@@ -1359,6 +1331,8 @@ static int ncm_unwrap_ntb(struct gether *port,
+ ndp_len -= 2 * (opts->dgram_item_len * 2);
+
+ dgram_counter++;
++ if (index2 == 0 || dg_len2 == 0)
++ break;
+ } while (ndp_len > 2 * (opts->dgram_item_len * 2));
+ } while (ndp_index);
+
+diff --git a/drivers/usb/usbip/stub_dev.c b/drivers/usb/usbip/stub_dev.c
+index 9d7d642022d1f..2305d425e6c9a 100644
+--- a/drivers/usb/usbip/stub_dev.c
++++ b/drivers/usb/usbip/stub_dev.c
+@@ -461,11 +461,6 @@ static void stub_disconnect(struct usb_device *udev)
+ return;
+ }
+
+-static bool usbip_match(struct usb_device *udev)
+-{
+- return true;
+-}
+-
+ #ifdef CONFIG_PM
+
+ /* These functions need usb_port_suspend and usb_port_resume,
+@@ -491,7 +486,6 @@ struct usb_device_driver stub_driver = {
+ .name = "usbip-host",
+ .probe = stub_probe,
+ .disconnect = stub_disconnect,
+- .match = usbip_match,
+ #ifdef CONFIG_PM
+ .suspend = stub_suspend,
+ .resume = stub_resume,
+diff --git a/drivers/xen/events/events_base.c b/drivers/xen/events/events_base.c
+index 90b8f56fbadb1..6f02c18fa65c8 100644
+--- a/drivers/xen/events/events_base.c
++++ b/drivers/xen/events/events_base.c
+@@ -92,6 +92,8 @@ static bool (*pirq_needs_eoi)(unsigned irq);
+ /* Xen will never allocate port zero for any purpose. */
+ #define VALID_EVTCHN(chn) ((chn) != 0)
+
++static struct irq_info *legacy_info_ptrs[NR_IRQS_LEGACY];
++
+ static struct irq_chip xen_dynamic_chip;
+ static struct irq_chip xen_percpu_chip;
+ static struct irq_chip xen_pirq_chip;
+@@ -156,7 +158,18 @@ int get_evtchn_to_irq(evtchn_port_t evtchn)
+ /* Get info for IRQ */
+ struct irq_info *info_for_irq(unsigned irq)
+ {
+- return irq_get_chip_data(irq);
++ if (irq < nr_legacy_irqs())
++ return legacy_info_ptrs[irq];
++ else
++ return irq_get_chip_data(irq);
++}
++
++static void set_info_for_irq(unsigned int irq, struct irq_info *info)
++{
++ if (irq < nr_legacy_irqs())
++ legacy_info_ptrs[irq] = info;
++ else
++ irq_set_chip_data(irq, info);
+ }
+
+ /* Constructors for packed IRQ information. */
+@@ -377,7 +390,7 @@ static void xen_irq_init(unsigned irq)
+ info->type = IRQT_UNBOUND;
+ info->refcnt = -1;
+
+- irq_set_chip_data(irq, info);
++ set_info_for_irq(irq, info);
+
+ list_add_tail(&info->list, &xen_irq_list_head);
+ }
+@@ -426,14 +439,14 @@ static int __must_check xen_allocate_irq_gsi(unsigned gsi)
+
+ static void xen_free_irq(unsigned irq)
+ {
+- struct irq_info *info = irq_get_chip_data(irq);
++ struct irq_info *info = info_for_irq(irq);
+
+ if (WARN_ON(!info))
+ return;
+
+ list_del(&info->list);
+
+- irq_set_chip_data(irq, NULL);
++ set_info_for_irq(irq, NULL);
+
+ WARN_ON(info->refcnt > 0);
+
+@@ -603,7 +616,7 @@ EXPORT_SYMBOL_GPL(xen_irq_from_gsi);
+ static void __unbind_from_irq(unsigned int irq)
+ {
+ evtchn_port_t evtchn = evtchn_from_irq(irq);
+- struct irq_info *info = irq_get_chip_data(irq);
++ struct irq_info *info = info_for_irq(irq);
+
+ if (info->refcnt > 0) {
+ info->refcnt--;
+@@ -1108,7 +1121,7 @@ int bind_ipi_to_irqhandler(enum ipi_vector ipi,
+
+ void unbind_from_irqhandler(unsigned int irq, void *dev_id)
+ {
+- struct irq_info *info = irq_get_chip_data(irq);
++ struct irq_info *info = info_for_irq(irq);
+
+ if (WARN_ON(!info))
+ return;
+@@ -1142,7 +1155,7 @@ int evtchn_make_refcounted(evtchn_port_t evtchn)
+ if (irq == -1)
+ return -ENOENT;
+
+- info = irq_get_chip_data(irq);
++ info = info_for_irq(irq);
+
+ if (!info)
+ return -ENOENT;
+@@ -1170,7 +1183,7 @@ int evtchn_get(evtchn_port_t evtchn)
+ if (irq == -1)
+ goto done;
+
+- info = irq_get_chip_data(irq);
++ info = info_for_irq(irq);
+
+ if (!info)
+ goto done;
+diff --git a/fs/autofs/waitq.c b/fs/autofs/waitq.c
+index 74c886f7c51cb..5ced859dac539 100644
+--- a/fs/autofs/waitq.c
++++ b/fs/autofs/waitq.c
+@@ -53,7 +53,7 @@ static int autofs_write(struct autofs_sb_info *sbi,
+
+ mutex_lock(&sbi->pipe_mutex);
+ while (bytes) {
+- wr = kernel_write(file, data, bytes, &file->f_pos);
++ wr = __kernel_write(file, data, bytes, NULL);
+ if (wr <= 0)
+ break;
+ data += wr;
+diff --git a/fs/btrfs/dev-replace.c b/fs/btrfs/dev-replace.c
+index db93909b25e08..eb86e4b88c73a 100644
+--- a/fs/btrfs/dev-replace.c
++++ b/fs/btrfs/dev-replace.c
+@@ -599,6 +599,37 @@ static void btrfs_rm_dev_replace_unblocked(struct btrfs_fs_info *fs_info)
+ wake_up(&fs_info->dev_replace.replace_wait);
+ }
+
++/*
++ * When finishing the device replace, before swapping the source device with the
++ * target device we must update the chunk allocation state in the target device,
++ * as it is empty because replace works by directly copying the chunks and not
++ * through the normal chunk allocation path.
++ */
++static int btrfs_set_target_alloc_state(struct btrfs_device *srcdev,
++ struct btrfs_device *tgtdev)
++{
++ struct extent_state *cached_state = NULL;
++ u64 start = 0;
++ u64 found_start;
++ u64 found_end;
++ int ret = 0;
++
++ lockdep_assert_held(&srcdev->fs_info->chunk_mutex);
++
++ while (!find_first_extent_bit(&srcdev->alloc_state, start,
++ &found_start, &found_end,
++ CHUNK_ALLOCATED, &cached_state)) {
++ ret = set_extent_bits(&tgtdev->alloc_state, found_start,
++ found_end, CHUNK_ALLOCATED);
++ if (ret)
++ break;
++ start = found_end + 1;
++ }
++
++ free_extent_state(cached_state);
++ return ret;
++}
++
+ static int btrfs_dev_replace_finishing(struct btrfs_fs_info *fs_info,
+ int scrub_ret)
+ {
+@@ -673,8 +704,14 @@ static int btrfs_dev_replace_finishing(struct btrfs_fs_info *fs_info,
+ dev_replace->time_stopped = ktime_get_real_seconds();
+ dev_replace->item_needs_writeback = 1;
+
+- /* replace old device with new one in mapping tree */
++ /*
++ * Update allocation state in the new device and replace the old device
++ * with the new one in the mapping tree.
++ */
+ if (!scrub_ret) {
++ scrub_ret = btrfs_set_target_alloc_state(src_device, tgt_device);
++ if (scrub_ret)
++ goto error;
+ btrfs_dev_replace_update_device_in_mapping_tree(fs_info,
+ src_device,
+ tgt_device);
+@@ -685,6 +722,7 @@ static int btrfs_dev_replace_finishing(struct btrfs_fs_info *fs_info,
+ btrfs_dev_name(src_device),
+ src_device->devid,
+ rcu_str_deref(tgt_device->name), scrub_ret);
++error:
+ up_write(&dev_replace->rwsem);
+ mutex_unlock(&fs_info->chunk_mutex);
+ mutex_unlock(&fs_info->fs_devices->device_list_mutex);
+diff --git a/fs/eventpoll.c b/fs/eventpoll.c
+index 8107e06d7f6f5..4df61129566d4 100644
+--- a/fs/eventpoll.c
++++ b/fs/eventpoll.c
+@@ -218,8 +218,7 @@ struct eventpoll {
+ struct file *file;
+
+ /* used to optimize loop detection check */
+- struct list_head visited_list_link;
+- int visited;
++ u64 gen;
+
+ #ifdef CONFIG_NET_RX_BUSY_POLL
+ /* used to track busy poll napi_id */
+@@ -274,6 +273,8 @@ static long max_user_watches __read_mostly;
+ */
+ static DEFINE_MUTEX(epmutex);
+
++static u64 loop_check_gen = 0;
++
+ /* Used to check for epoll file descriptor inclusion loops */
+ static struct nested_calls poll_loop_ncalls;
+
+@@ -283,9 +284,6 @@ static struct kmem_cache *epi_cache __read_mostly;
+ /* Slab cache used to allocate "struct eppoll_entry" */
+ static struct kmem_cache *pwq_cache __read_mostly;
+
+-/* Visited nodes during ep_loop_check(), so we can unset them when we finish */
+-static LIST_HEAD(visited_list);
+-
+ /*
+ * List of files with newly added links, where we may need to limit the number
+ * of emanating paths. Protected by the epmutex.
+@@ -1450,7 +1448,7 @@ static int reverse_path_check(void)
+
+ static int ep_create_wakeup_source(struct epitem *epi)
+ {
+- const char *name;
++ struct name_snapshot n;
+ struct wakeup_source *ws;
+
+ if (!epi->ep->ws) {
+@@ -1459,8 +1457,9 @@ static int ep_create_wakeup_source(struct epitem *epi)
+ return -ENOMEM;
+ }
+
+- name = epi->ffd.file->f_path.dentry->d_name.name;
+- ws = wakeup_source_register(NULL, name);
++ take_dentry_name_snapshot(&n, epi->ffd.file->f_path.dentry);
++ ws = wakeup_source_register(NULL, n.name.name);
++ release_dentry_name_snapshot(&n);
+
+ if (!ws)
+ return -ENOMEM;
+@@ -1522,6 +1521,22 @@ static int ep_insert(struct eventpoll *ep, const struct epoll_event *event,
+ RCU_INIT_POINTER(epi->ws, NULL);
+ }
+
++ /* Add the current item to the list of active epoll hook for this file */
++ spin_lock(&tfile->f_lock);
++ list_add_tail_rcu(&epi->fllink, &tfile->f_ep_links);
++ spin_unlock(&tfile->f_lock);
++
++ /*
++ * Add the current item to the RB tree. All RB tree operations are
++ * protected by "mtx", and ep_insert() is called with "mtx" held.
++ */
++ ep_rbtree_insert(ep, epi);
++
++ /* now check if we've created too many backpaths */
++ error = -EINVAL;
++ if (full_check && reverse_path_check())
++ goto error_remove_epi;
++
+ /* Initialize the poll table using the queue callback */
+ epq.epi = epi;
+ init_poll_funcptr(&epq.pt, ep_ptable_queue_proc);
+@@ -1544,22 +1559,6 @@ static int ep_insert(struct eventpoll *ep, const struct epoll_event *event,
+ if (epi->nwait < 0)
+ goto error_unregister;
+
+- /* Add the current item to the list of active epoll hook for this file */
+- spin_lock(&tfile->f_lock);
+- list_add_tail_rcu(&epi->fllink, &tfile->f_ep_links);
+- spin_unlock(&tfile->f_lock);
+-
+- /*
+- * Add the current item to the RB tree. All RB tree operations are
+- * protected by "mtx", and ep_insert() is called with "mtx" held.
+- */
+- ep_rbtree_insert(ep, epi);
+-
+- /* now check if we've created too many backpaths */
+- error = -EINVAL;
+- if (full_check && reverse_path_check())
+- goto error_remove_epi;
+-
+ /* We have to drop the new item inside our item list to keep track of it */
+ write_lock_irq(&ep->lock);
+
+@@ -1588,6 +1587,8 @@ static int ep_insert(struct eventpoll *ep, const struct epoll_event *event,
+
+ return 0;
+
++error_unregister:
++ ep_unregister_pollwait(ep, epi);
+ error_remove_epi:
+ spin_lock(&tfile->f_lock);
+ list_del_rcu(&epi->fllink);
+@@ -1595,9 +1596,6 @@ error_remove_epi:
+
+ rb_erase_cached(&epi->rbn, &ep->rbr);
+
+-error_unregister:
+- ep_unregister_pollwait(ep, epi);
+-
+ /*
+ * We need to do this because an event could have been arrived on some
+ * allocated wait queue. Note that we don't care about the ep->ovflist
+@@ -1972,13 +1970,12 @@ static int ep_loop_check_proc(void *priv, void *cookie, int call_nests)
+ struct epitem *epi;
+
+ mutex_lock_nested(&ep->mtx, call_nests + 1);
+- ep->visited = 1;
+- list_add(&ep->visited_list_link, &visited_list);
++ ep->gen = loop_check_gen;
+ for (rbp = rb_first_cached(&ep->rbr); rbp; rbp = rb_next(rbp)) {
+ epi = rb_entry(rbp, struct epitem, rbn);
+ if (unlikely(is_file_epoll(epi->ffd.file))) {
+ ep_tovisit = epi->ffd.file->private_data;
+- if (ep_tovisit->visited)
++ if (ep_tovisit->gen == loop_check_gen)
+ continue;
+ error = ep_call_nested(&poll_loop_ncalls,
+ ep_loop_check_proc, epi->ffd.file,
+@@ -2019,18 +2016,8 @@ static int ep_loop_check_proc(void *priv, void *cookie, int call_nests)
+ */
+ static int ep_loop_check(struct eventpoll *ep, struct file *file)
+ {
+- int ret;
+- struct eventpoll *ep_cur, *ep_next;
+-
+- ret = ep_call_nested(&poll_loop_ncalls,
++ return ep_call_nested(&poll_loop_ncalls,
+ ep_loop_check_proc, file, ep, current);
+- /* clear visited list */
+- list_for_each_entry_safe(ep_cur, ep_next, &visited_list,
+- visited_list_link) {
+- ep_cur->visited = 0;
+- list_del(&ep_cur->visited_list_link);
+- }
+- return ret;
+ }
+
+ static void clear_tfile_check_list(void)
+@@ -2195,11 +2182,13 @@ int do_epoll_ctl(int epfd, int op, int fd, struct epoll_event *epds,
+ goto error_tgt_fput;
+ if (op == EPOLL_CTL_ADD) {
+ if (!list_empty(&f.file->f_ep_links) ||
++ ep->gen == loop_check_gen ||
+ is_file_epoll(tf.file)) {
+ mutex_unlock(&ep->mtx);
+ error = epoll_mutex_lock(&epmutex, 0, nonblock);
+ if (error)
+ goto error_tgt_fput;
++ loop_check_gen++;
+ full_check = 1;
+ if (is_file_epoll(tf.file)) {
+ error = -ELOOP;
+@@ -2263,6 +2252,7 @@ int do_epoll_ctl(int epfd, int op, int fd, struct epoll_event *epds,
+ error_tgt_fput:
+ if (full_check) {
+ clear_tfile_check_list();
++ loop_check_gen++;
+ mutex_unlock(&epmutex);
+ }
+
+diff --git a/fs/fuse/file.c b/fs/fuse/file.c
+index 83d917f7e5425..98e170cc0b932 100644
+--- a/fs/fuse/file.c
++++ b/fs/fuse/file.c
+@@ -3091,11 +3091,10 @@ fuse_direct_IO(struct kiocb *iocb, struct iov_iter *iter)
+ ssize_t ret = 0;
+ struct file *file = iocb->ki_filp;
+ struct fuse_file *ff = file->private_data;
+- bool async_dio = ff->fc->async_dio;
+ loff_t pos = 0;
+ struct inode *inode;
+ loff_t i_size;
+- size_t count = iov_iter_count(iter);
++ size_t count = iov_iter_count(iter), shortened = 0;
+ loff_t offset = iocb->ki_pos;
+ struct fuse_io_priv *io;
+
+@@ -3103,17 +3102,9 @@ fuse_direct_IO(struct kiocb *iocb, struct iov_iter *iter)
+ inode = file->f_mapping->host;
+ i_size = i_size_read(inode);
+
+- if ((iov_iter_rw(iter) == READ) && (offset > i_size))
++ if ((iov_iter_rw(iter) == READ) && (offset >= i_size))
+ return 0;
+
+- /* optimization for short read */
+- if (async_dio && iov_iter_rw(iter) != WRITE && offset + count > i_size) {
+- if (offset >= i_size)
+- return 0;
+- iov_iter_truncate(iter, fuse_round_up(ff->fc, i_size - offset));
+- count = iov_iter_count(iter);
+- }
+-
+ io = kmalloc(sizeof(struct fuse_io_priv), GFP_KERNEL);
+ if (!io)
+ return -ENOMEM;
+@@ -3129,15 +3120,22 @@ fuse_direct_IO(struct kiocb *iocb, struct iov_iter *iter)
+ * By default, we want to optimize all I/Os with async request
+ * submission to the client filesystem if supported.
+ */
+- io->async = async_dio;
++ io->async = ff->fc->async_dio;
+ io->iocb = iocb;
+ io->blocking = is_sync_kiocb(iocb);
+
++ /* optimization for short read */
++ if (io->async && !io->write && offset + count > i_size) {
++ iov_iter_truncate(iter, fuse_round_up(ff->fc, i_size - offset));
++ shortened = count - iov_iter_count(iter);
++ count -= shortened;
++ }
++
+ /*
+ * We cannot asynchronously extend the size of a file.
+ * In such case the aio will behave exactly like sync io.
+ */
+- if ((offset + count > i_size) && iov_iter_rw(iter) == WRITE)
++ if ((offset + count > i_size) && io->write)
+ io->blocking = true;
+
+ if (io->async && io->blocking) {
+@@ -3155,6 +3153,7 @@ fuse_direct_IO(struct kiocb *iocb, struct iov_iter *iter)
+ } else {
+ ret = __fuse_direct_read(io, iter, &pos);
+ }
++ iov_iter_reexpand(iter, iov_iter_count(iter) + shortened);
+
+ if (io->async) {
+ bool blocking = io->blocking;
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+index 1d5640cc2a488..ebc3586b18795 100644
+--- a/fs/io_uring.c
++++ b/fs/io_uring.c
+@@ -3318,7 +3318,7 @@ static int io_epoll_ctl_prep(struct io_kiocb *req,
+ #if defined(CONFIG_EPOLL)
+ if (sqe->ioprio || sqe->buf_index)
+ return -EINVAL;
+- if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
++ if (unlikely(req->ctx->flags & (IORING_SETUP_IOPOLL | IORING_SETUP_SQPOLL)))
+ return -EINVAL;
+
+ req->epoll.epfd = READ_ONCE(sqe->fd);
+@@ -3435,7 +3435,7 @@ static int io_fadvise(struct io_kiocb *req, bool force_nonblock)
+
+ static int io_statx_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+ {
+- if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
++ if (unlikely(req->ctx->flags & (IORING_SETUP_IOPOLL | IORING_SETUP_SQPOLL)))
+ return -EINVAL;
+ if (sqe->ioprio || sqe->buf_index)
+ return -EINVAL;
+@@ -4310,6 +4310,8 @@ static int io_poll_double_wake(struct wait_queue_entry *wait, unsigned mode,
+ if (mask && !(mask & poll->events))
+ return 0;
+
++ list_del_init(&wait->entry);
++
+ if (poll && poll->head) {
+ bool done;
+
+@@ -5040,6 +5042,8 @@ static int io_async_cancel(struct io_kiocb *req)
+ static int io_files_update_prep(struct io_kiocb *req,
+ const struct io_uring_sqe *sqe)
+ {
++ if (unlikely(req->ctx->flags & IORING_SETUP_SQPOLL))
++ return -EINVAL;
+ if (unlikely(req->flags & (REQ_F_FIXED_FILE | REQ_F_BUFFER_SELECT)))
+ return -EINVAL;
+ if (sqe->ioprio || sqe->rw_flags)
+diff --git a/fs/nfs/dir.c b/fs/nfs/dir.c
+index 5a331da5f55ad..785f46217f11a 100644
+--- a/fs/nfs/dir.c
++++ b/fs/nfs/dir.c
+@@ -579,6 +579,9 @@ int nfs_readdir_page_filler(nfs_readdir_descriptor_t *desc, struct nfs_entry *en
+ xdr_set_scratch_buffer(&stream, page_address(scratch), PAGE_SIZE);
+
+ do {
++ if (entry->label)
++ entry->label->len = NFS4_MAXLABELLEN;
++
+ status = xdr_decode(desc, entry, &stream);
+ if (status != 0) {
+ if (status == -EAGAIN)
+diff --git a/fs/nfs/flexfilelayout/flexfilelayout.c b/fs/nfs/flexfilelayout/flexfilelayout.c
+index 048272d60a165..f9348ed1bcdad 100644
+--- a/fs/nfs/flexfilelayout/flexfilelayout.c
++++ b/fs/nfs/flexfilelayout/flexfilelayout.c
+@@ -838,6 +838,7 @@ ff_layout_pg_init_read(struct nfs_pageio_descriptor *pgio,
+ struct nfs4_ff_layout_mirror *mirror;
+ struct nfs4_pnfs_ds *ds;
+ int ds_idx;
++ u32 i;
+
+ retry:
+ ff_layout_pg_check_layout(pgio, req);
+@@ -864,14 +865,14 @@ retry:
+ goto retry;
+ }
+
+- mirror = FF_LAYOUT_COMP(pgio->pg_lseg, ds_idx);
++ for (i = 0; i < pgio->pg_mirror_count; i++) {
++ mirror = FF_LAYOUT_COMP(pgio->pg_lseg, i);
++ pgm = &pgio->pg_mirrors[i];
++ pgm->pg_bsize = mirror->mirror_ds->ds_versions[0].rsize;
++ }
+
+ pgio->pg_mirror_idx = ds_idx;
+
+- /* read always uses only one mirror - idx 0 for pgio layer */
+- pgm = &pgio->pg_mirrors[0];
+- pgm->pg_bsize = mirror->mirror_ds->ds_versions[0].rsize;
+-
+ if (NFS_SERVER(pgio->pg_inode)->flags &
+ (NFS_MOUNT_SOFT|NFS_MOUNT_SOFTERR))
+ pgio->pg_maxretrans = io_maxretrans;
+diff --git a/fs/nfs/nfs42proc.c b/fs/nfs/nfs42proc.c
+index e2ae54b35dfe1..395a468e349b0 100644
+--- a/fs/nfs/nfs42proc.c
++++ b/fs/nfs/nfs42proc.c
+@@ -355,7 +355,15 @@ static ssize_t _nfs42_proc_copy(struct file *src,
+
+ truncate_pagecache_range(dst_inode, pos_dst,
+ pos_dst + res->write_res.count);
+-
++ spin_lock(&dst_inode->i_lock);
++ NFS_I(dst_inode)->cache_validity |= (NFS_INO_REVAL_PAGECACHE |
++ NFS_INO_REVAL_FORCED | NFS_INO_INVALID_SIZE |
++ NFS_INO_INVALID_ATTR | NFS_INO_INVALID_DATA);
++ spin_unlock(&dst_inode->i_lock);
++ spin_lock(&src_inode->i_lock);
++ NFS_I(src_inode)->cache_validity |= (NFS_INO_REVAL_PAGECACHE |
++ NFS_INO_REVAL_FORCED | NFS_INO_INVALID_ATIME);
++ spin_unlock(&src_inode->i_lock);
+ status = res->write_res.count;
+ out:
+ if (args->sync)
+diff --git a/fs/pipe.c b/fs/pipe.c
+index 60dbee4571436..117db82b10af5 100644
+--- a/fs/pipe.c
++++ b/fs/pipe.c
+@@ -106,25 +106,6 @@ void pipe_double_lock(struct pipe_inode_info *pipe1,
+ }
+ }
+
+-/* Drop the inode semaphore and wait for a pipe event, atomically */
+-void pipe_wait(struct pipe_inode_info *pipe)
+-{
+- DEFINE_WAIT(rdwait);
+- DEFINE_WAIT(wrwait);
+-
+- /*
+- * Pipes are system-local resources, so sleeping on them
+- * is considered a noninteractive wait:
+- */
+- prepare_to_wait(&pipe->rd_wait, &rdwait, TASK_INTERRUPTIBLE);
+- prepare_to_wait(&pipe->wr_wait, &wrwait, TASK_INTERRUPTIBLE);
+- pipe_unlock(pipe);
+- schedule();
+- finish_wait(&pipe->rd_wait, &rdwait);
+- finish_wait(&pipe->wr_wait, &wrwait);
+- pipe_lock(pipe);
+-}
+-
+ static void anon_pipe_buf_release(struct pipe_inode_info *pipe,
+ struct pipe_buffer *buf)
+ {
+@@ -1035,12 +1016,52 @@ SYSCALL_DEFINE1(pipe, int __user *, fildes)
+ return do_pipe2(fildes, 0);
+ }
+
++/*
++ * This is the stupid "wait for pipe to be readable or writable"
++ * model.
++ *
++ * See pipe_read/write() for the proper kind of exclusive wait,
++ * but that requires that we wake up any other readers/writers
++ * if we then do not end up reading everything (ie the whole
++ * "wake_next_reader/writer" logic in pipe_read/write()).
++ */
++void pipe_wait_readable(struct pipe_inode_info *pipe)
++{
++ pipe_unlock(pipe);
++ wait_event_interruptible(pipe->rd_wait, pipe_readable(pipe));
++ pipe_lock(pipe);
++}
++
++void pipe_wait_writable(struct pipe_inode_info *pipe)
++{
++ pipe_unlock(pipe);
++ wait_event_interruptible(pipe->wr_wait, pipe_writable(pipe));
++ pipe_lock(pipe);
++}
++
++/*
++ * This depends on both the wait (here) and the wakeup (wake_up_partner)
++ * holding the pipe lock, so "*cnt" is stable and we know a wakeup cannot
++ * race with the count check and waitqueue prep.
++ *
++ * Normally in order to avoid races, you'd do the prepare_to_wait() first,
++ * then check the condition you're waiting for, and only then sleep. But
++ * because of the pipe lock, we can check the condition before being on
++ * the wait queue.
++ *
++ * We use the 'rd_wait' waitqueue for pipe partner waiting.
++ */
+ static int wait_for_partner(struct pipe_inode_info *pipe, unsigned int *cnt)
+ {
++ DEFINE_WAIT(rdwait);
+ int cur = *cnt;
+
+ while (cur == *cnt) {
+- pipe_wait(pipe);
++ prepare_to_wait(&pipe->rd_wait, &rdwait, TASK_INTERRUPTIBLE);
++ pipe_unlock(pipe);
++ schedule();
++ finish_wait(&pipe->rd_wait, &rdwait);
++ pipe_lock(pipe);
+ if (signal_pending(current))
+ break;
+ }
+@@ -1050,7 +1071,6 @@ static int wait_for_partner(struct pipe_inode_info *pipe, unsigned int *cnt)
+ static void wake_up_partner(struct pipe_inode_info *pipe)
+ {
+ wake_up_interruptible_all(&pipe->rd_wait);
+- wake_up_interruptible_all(&pipe->wr_wait);
+ }
+
+ static int fifo_open(struct inode *inode, struct file *filp)
+diff --git a/fs/read_write.c b/fs/read_write.c
+index 4fb797822567a..9a5cb9c2f0d46 100644
+--- a/fs/read_write.c
++++ b/fs/read_write.c
+@@ -538,6 +538,14 @@ ssize_t __kernel_write(struct file *file, const void *buf, size_t count, loff_t
+ inc_syscw(current);
+ return ret;
+ }
++/*
++ * This "EXPORT_SYMBOL_GPL()" is more of a "EXPORT_SYMBOL_DONTUSE()",
++ * but autofs is one of the few internal kernel users that actually
++ * wants this _and_ can be built as a module. So we need to export
++ * this symbol for autofs, even though it really isn't appropriate
++ * for any other kernel modules.
++ */
++EXPORT_SYMBOL_GPL(__kernel_write);
+
+ ssize_t kernel_write(struct file *file, const void *buf, size_t count,
+ loff_t *pos)
+diff --git a/fs/splice.c b/fs/splice.c
+index d7c8a7c4db07f..c3d00dfc73446 100644
+--- a/fs/splice.c
++++ b/fs/splice.c
+@@ -563,7 +563,7 @@ static int splice_from_pipe_next(struct pipe_inode_info *pipe, struct splice_des
+ sd->need_wakeup = false;
+ }
+
+- pipe_wait(pipe);
++ pipe_wait_readable(pipe);
+ }
+
+ return 1;
+@@ -1077,7 +1077,7 @@ static int wait_for_space(struct pipe_inode_info *pipe, unsigned flags)
+ return -EAGAIN;
+ if (signal_pending(current))
+ return -ERESTARTSYS;
+- pipe_wait(pipe);
++ pipe_wait_writable(pipe);
+ }
+ }
+
+@@ -1454,7 +1454,7 @@ static int ipipe_prep(struct pipe_inode_info *pipe, unsigned int flags)
+ ret = -EAGAIN;
+ break;
+ }
+- pipe_wait(pipe);
++ pipe_wait_readable(pipe);
+ }
+
+ pipe_unlock(pipe);
+@@ -1493,7 +1493,7 @@ static int opipe_prep(struct pipe_inode_info *pipe, unsigned int flags)
+ ret = -ERESTARTSYS;
+ break;
+ }
+- pipe_wait(pipe);
++ pipe_wait_writable(pipe);
+ }
+
+ pipe_unlock(pipe);
+diff --git a/fs/vboxsf/super.c b/fs/vboxsf/super.c
+index 8fe03b4a0d2b0..25aade3441922 100644
+--- a/fs/vboxsf/super.c
++++ b/fs/vboxsf/super.c
+@@ -384,7 +384,7 @@ fail_nomem:
+
+ static int vboxsf_parse_monolithic(struct fs_context *fc, void *data)
+ {
+- char *options = data;
++ unsigned char *options = data;
+
+ if (options && options[0] == VBSF_MOUNT_SIGNATURE_BYTE_0 &&
+ options[1] == VBSF_MOUNT_SIGNATURE_BYTE_1 &&
+diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
+index 57241417ff2f8..1af8c9ac50a4b 100644
+--- a/include/linux/blkdev.h
++++ b/include/linux/blkdev.h
+@@ -354,6 +354,8 @@ struct queue_limits {
+ typedef int (*report_zones_cb)(struct blk_zone *zone, unsigned int idx,
+ void *data);
+
++void blk_queue_set_zoned(struct gendisk *disk, enum blk_zoned_model model);
++
+ #ifdef CONFIG_BLK_DEV_ZONED
+
+ #define BLK_ALL_ZONES ((unsigned int)-1)
+diff --git a/include/linux/memstick.h b/include/linux/memstick.h
+index da4c65f9435ff..ebf73d4ee9690 100644
+--- a/include/linux/memstick.h
++++ b/include/linux/memstick.h
+@@ -281,6 +281,7 @@ struct memstick_host {
+
+ struct memstick_dev *card;
+ unsigned int retries;
++ bool removing;
+
+ /* Notify the host that some requests are pending. */
+ void (*request)(struct memstick_host *host);
+diff --git a/include/linux/pipe_fs_i.h b/include/linux/pipe_fs_i.h
+index 50afd0d0084ca..5d2705f1d01c3 100644
+--- a/include/linux/pipe_fs_i.h
++++ b/include/linux/pipe_fs_i.h
+@@ -240,8 +240,9 @@ extern unsigned int pipe_max_size;
+ extern unsigned long pipe_user_pages_hard;
+ extern unsigned long pipe_user_pages_soft;
+
+-/* Drop the inode semaphore and wait for a pipe event, atomically */
+-void pipe_wait(struct pipe_inode_info *pipe);
++/* Wait for a pipe to be readable/writable while dropping the pipe lock */
++void pipe_wait_readable(struct pipe_inode_info *);
++void pipe_wait_writable(struct pipe_inode_info *);
+
+ struct pipe_inode_info *alloc_pipe_info(void);
+ void free_pipe_info(struct pipe_inode_info *);
+diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
+index b5cb5be3ca6f6..b3d0d266fb737 100644
+--- a/kernel/trace/ftrace.c
++++ b/kernel/trace/ftrace.c
+@@ -6877,16 +6877,14 @@ static void ftrace_ops_assist_func(unsigned long ip, unsigned long parent_ip,
+ {
+ int bit;
+
+- if ((op->flags & FTRACE_OPS_FL_RCU) && !rcu_is_watching())
+- return;
+-
+ bit = trace_test_and_set_recursion(TRACE_LIST_START, TRACE_LIST_MAX);
+ if (bit < 0)
+ return;
+
+ preempt_disable_notrace();
+
+- op->func(ip, parent_ip, op, regs);
++ if (!(op->flags & FTRACE_OPS_FL_RCU) || rcu_is_watching())
++ op->func(ip, parent_ip, op, regs);
+
+ preempt_enable_notrace();
+ trace_clear_recursion(bit);
+diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
+index 6fc6da55b94e2..68c0ff4bd02fa 100644
+--- a/kernel/trace/trace.c
++++ b/kernel/trace/trace.c
+@@ -3507,13 +3507,15 @@ struct trace_entry *trace_find_next_entry(struct trace_iterator *iter,
+ if (iter->ent && iter->ent != iter->temp) {
+ if ((!iter->temp || iter->temp_size < iter->ent_size) &&
+ !WARN_ON_ONCE(iter->temp == static_temp_buf)) {
+- kfree(iter->temp);
+- iter->temp = kmalloc(iter->ent_size, GFP_KERNEL);
+- if (!iter->temp)
++ void *temp;
++ temp = kmalloc(iter->ent_size, GFP_KERNEL);
++ if (!temp)
+ return NULL;
++ kfree(iter->temp);
++ iter->temp = temp;
++ iter->temp_size = iter->ent_size;
+ }
+ memcpy(iter->temp, iter->ent, iter->ent_size);
+- iter->temp_size = iter->ent_size;
+ iter->ent = iter->temp;
+ }
+ entry = __find_next_entry(iter, ent_cpu, NULL, ent_ts);
+@@ -3743,14 +3745,14 @@ unsigned long trace_total_entries(struct trace_array *tr)
+
+ static void print_lat_help_header(struct seq_file *m)
+ {
+- seq_puts(m, "# _------=> CPU# \n"
+- "# / _-----=> irqs-off \n"
+- "# | / _----=> need-resched \n"
+- "# || / _---=> hardirq/softirq \n"
+- "# ||| / _--=> preempt-depth \n"
+- "# |||| / delay \n"
+- "# cmd pid ||||| time | caller \n"
+- "# \\ / ||||| \\ | / \n");
++ seq_puts(m, "# _------=> CPU# \n"
++ "# / _-----=> irqs-off \n"
++ "# | / _----=> need-resched \n"
++ "# || / _---=> hardirq/softirq \n"
++ "# ||| / _--=> preempt-depth \n"
++ "# |||| / delay \n"
++ "# cmd pid ||||| time | caller \n"
++ "# \\ / ||||| \\ | / \n");
+ }
+
+ static void print_event_info(struct array_buffer *buf, struct seq_file *m)
+@@ -3771,26 +3773,26 @@ static void print_func_help_header(struct array_buffer *buf, struct seq_file *m,
+
+ print_event_info(buf, m);
+
+- seq_printf(m, "# TASK-PID %s CPU# TIMESTAMP FUNCTION\n", tgid ? "TGID " : "");
+- seq_printf(m, "# | | %s | | |\n", tgid ? " | " : "");
++ seq_printf(m, "# TASK-PID %s CPU# TIMESTAMP FUNCTION\n", tgid ? " TGID " : "");
++ seq_printf(m, "# | | %s | | |\n", tgid ? " | " : "");
+ }
+
+ static void print_func_help_header_irq(struct array_buffer *buf, struct seq_file *m,
+ unsigned int flags)
+ {
+ bool tgid = flags & TRACE_ITER_RECORD_TGID;
+- const char *space = " ";
+- int prec = tgid ? 10 : 2;
++ const char *space = " ";
++ int prec = tgid ? 12 : 2;
+
+ print_event_info(buf, m);
+
+- seq_printf(m, "# %.*s _-----=> irqs-off\n", prec, space);
+- seq_printf(m, "# %.*s / _----=> need-resched\n", prec, space);
+- seq_printf(m, "# %.*s| / _---=> hardirq/softirq\n", prec, space);
+- seq_printf(m, "# %.*s|| / _--=> preempt-depth\n", prec, space);
+- seq_printf(m, "# %.*s||| / delay\n", prec, space);
+- seq_printf(m, "# TASK-PID %.*sCPU# |||| TIMESTAMP FUNCTION\n", prec, " TGID ");
+- seq_printf(m, "# | | %.*s | |||| | |\n", prec, " | ");
++ seq_printf(m, "# %.*s _-----=> irqs-off\n", prec, space);
++ seq_printf(m, "# %.*s / _----=> need-resched\n", prec, space);
++ seq_printf(m, "# %.*s| / _---=> hardirq/softirq\n", prec, space);
++ seq_printf(m, "# %.*s|| / _--=> preempt-depth\n", prec, space);
++ seq_printf(m, "# %.*s||| / delay\n", prec, space);
++ seq_printf(m, "# TASK-PID %.*s CPU# |||| TIMESTAMP FUNCTION\n", prec, " TGID ");
++ seq_printf(m, "# | | %.*s | |||| | |\n", prec, " | ");
+ }
+
+ void
+diff --git a/kernel/trace/trace_output.c b/kernel/trace/trace_output.c
+index 73976de7f8cc8..a8d719263e1bc 100644
+--- a/kernel/trace/trace_output.c
++++ b/kernel/trace/trace_output.c
+@@ -497,7 +497,7 @@ lat_print_generic(struct trace_seq *s, struct trace_entry *entry, int cpu)
+
+ trace_find_cmdline(entry->pid, comm);
+
+- trace_seq_printf(s, "%8.8s-%-5d %3d",
++ trace_seq_printf(s, "%8.8s-%-7d %3d",
+ comm, entry->pid, cpu);
+
+ return trace_print_lat_fmt(s, entry);
+@@ -588,15 +588,15 @@ int trace_print_context(struct trace_iterator *iter)
+
+ trace_find_cmdline(entry->pid, comm);
+
+- trace_seq_printf(s, "%16s-%-5d ", comm, entry->pid);
++ trace_seq_printf(s, "%16s-%-7d ", comm, entry->pid);
+
+ if (tr->trace_flags & TRACE_ITER_RECORD_TGID) {
+ unsigned int tgid = trace_find_tgid(entry->pid);
+
+ if (!tgid)
+- trace_seq_printf(s, "(-----) ");
++ trace_seq_printf(s, "(-------) ");
+ else
+- trace_seq_printf(s, "(%5d) ", tgid);
++ trace_seq_printf(s, "(%7d) ", tgid);
+ }
+
+ trace_seq_printf(s, "[%03d] ", iter->cpu);
+@@ -636,7 +636,7 @@ int trace_print_lat_context(struct trace_iterator *iter)
+ trace_find_cmdline(entry->pid, comm);
+
+ trace_seq_printf(
+- s, "%16s %5d %3d %d %08x %08lx ",
++ s, "%16s %7d %3d %d %08x %08lx ",
+ comm, entry->pid, iter->cpu, entry->flags,
+ entry->preempt_count, iter->idx);
+ } else {
+@@ -917,7 +917,7 @@ static enum print_line_t trace_ctxwake_print(struct trace_iterator *iter,
+ S = task_index_to_char(field->prev_state);
+ trace_find_cmdline(field->next_pid, comm);
+ trace_seq_printf(&iter->seq,
+- " %5d:%3d:%c %s [%03d] %5d:%3d:%c %s\n",
++ " %7d:%3d:%c %s [%03d] %7d:%3d:%c %s\n",
+ field->prev_pid,
+ field->prev_prio,
+ S, delim,
+diff --git a/lib/random32.c b/lib/random32.c
+index 3d749abb9e80d..1786f78bf4c53 100644
+--- a/lib/random32.c
++++ b/lib/random32.c
+@@ -48,7 +48,7 @@ static inline void prandom_state_selftest(void)
+ }
+ #endif
+
+-DEFINE_PER_CPU(struct rnd_state, net_rand_state);
++DEFINE_PER_CPU(struct rnd_state, net_rand_state) __latent_entropy;
+
+ /**
+ * prandom_u32_state - seeded pseudo-random number generator.
+diff --git a/net/mac80211/rx.c b/net/mac80211/rx.c
+index 5c5af4b5fc080..f2c3ac648fc08 100644
+--- a/net/mac80211/rx.c
++++ b/net/mac80211/rx.c
+@@ -451,7 +451,8 @@ ieee80211_add_rx_radiotap_header(struct ieee80211_local *local,
+ else if (status->bw == RATE_INFO_BW_5)
+ channel_flags |= IEEE80211_CHAN_QUARTER;
+
+- if (status->band == NL80211_BAND_5GHZ)
++ if (status->band == NL80211_BAND_5GHZ ||
++ status->band == NL80211_BAND_6GHZ)
+ channel_flags |= IEEE80211_CHAN_OFDM | IEEE80211_CHAN_5GHZ;
+ else if (status->encoding != RX_ENC_LEGACY)
+ channel_flags |= IEEE80211_CHAN_DYN | IEEE80211_CHAN_2GHZ;
+diff --git a/net/mac80211/vht.c b/net/mac80211/vht.c
+index 9c6045f9c24da..d1b64d0751f2e 100644
+--- a/net/mac80211/vht.c
++++ b/net/mac80211/vht.c
+@@ -168,10 +168,7 @@ ieee80211_vht_cap_ie_to_sta_vht_cap(struct ieee80211_sub_if_data *sdata,
+ /* take some capabilities as-is */
+ cap_info = le32_to_cpu(vht_cap_ie->vht_cap_info);
+ vht_cap->cap = cap_info;
+- vht_cap->cap &= IEEE80211_VHT_CAP_MAX_MPDU_LENGTH_3895 |
+- IEEE80211_VHT_CAP_MAX_MPDU_LENGTH_7991 |
+- IEEE80211_VHT_CAP_MAX_MPDU_LENGTH_11454 |
+- IEEE80211_VHT_CAP_RXLDPC |
++ vht_cap->cap &= IEEE80211_VHT_CAP_RXLDPC |
+ IEEE80211_VHT_CAP_VHT_TXOP_PS |
+ IEEE80211_VHT_CAP_HTC_VHT |
+ IEEE80211_VHT_CAP_MAX_A_MPDU_LENGTH_EXPONENT_MASK |
+@@ -180,6 +177,9 @@ ieee80211_vht_cap_ie_to_sta_vht_cap(struct ieee80211_sub_if_data *sdata,
+ IEEE80211_VHT_CAP_RX_ANTENNA_PATTERN |
+ IEEE80211_VHT_CAP_TX_ANTENNA_PATTERN;
+
++ vht_cap->cap |= min_t(u32, cap_info & IEEE80211_VHT_CAP_MAX_MPDU_MASK,
++ own_cap.cap & IEEE80211_VHT_CAP_MAX_MPDU_MASK);
++
+ /* and some based on our own capabilities */
+ switch (own_cap.cap & IEEE80211_VHT_CAP_SUPP_CHAN_WIDTH_MASK) {
+ case IEEE80211_VHT_CAP_SUPP_CHAN_WIDTH_160MHZ:
+diff --git a/scripts/dtc/Makefile b/scripts/dtc/Makefile
+index 0b44917f981c7..d4129e0275e4a 100644
+--- a/scripts/dtc/Makefile
++++ b/scripts/dtc/Makefile
+@@ -10,7 +10,7 @@ dtc-objs := dtc.o flattree.o fstree.o data.o livetree.o treesource.o \
+ dtc-objs += dtc-lexer.lex.o dtc-parser.tab.o
+
+ # Source files need to get at the userspace version of libfdt_env.h to compile
+-HOST_EXTRACFLAGS := -I $(srctree)/$(src)/libfdt
++HOST_EXTRACFLAGS += -I $(srctree)/$(src)/libfdt
+
+ ifeq ($(shell pkg-config --exists yaml-0.1 2>/dev/null && echo yes),)
+ ifneq ($(CHECK_DT_BINDING)$(CHECK_DTBS),)
+diff --git a/scripts/kallsyms.c b/scripts/kallsyms.c
+index 6dc3078649fa0..e8b26050f5458 100644
+--- a/scripts/kallsyms.c
++++ b/scripts/kallsyms.c
+@@ -82,6 +82,7 @@ static char *sym_name(const struct sym_entry *s)
+
+ static bool is_ignored_symbol(const char *name, char type)
+ {
++ /* Symbol names that exactly match to the following are ignored.*/
+ static const char * const ignored_symbols[] = {
+ /*
+ * Symbols which vary between passes. Passes 1 and 2 must have
+@@ -104,6 +105,7 @@ static bool is_ignored_symbol(const char *name, char type)
+ NULL
+ };
+
++ /* Symbol names that begin with the following are ignored.*/
+ static const char * const ignored_prefixes[] = {
+ "$", /* local symbols for ARM, MIPS, etc. */
+ ".LASANPC", /* s390 kasan local symbols */
+@@ -112,6 +114,7 @@ static bool is_ignored_symbol(const char *name, char type)
+ NULL
+ };
+
++ /* Symbol names that end with the following are ignored.*/
+ static const char * const ignored_suffixes[] = {
+ "_from_arm", /* arm */
+ "_from_thumb", /* arm */
+@@ -119,9 +122,15 @@ static bool is_ignored_symbol(const char *name, char type)
+ NULL
+ };
+
++ /* Symbol names that contain the following are ignored.*/
++ static const char * const ignored_matches[] = {
++ ".long_branch.", /* ppc stub */
++ ".plt_branch.", /* ppc stub */
++ NULL
++ };
++
+ const char * const *p;
+
+- /* Exclude symbols which vary between passes. */
+ for (p = ignored_symbols; *p; p++)
+ if (!strcmp(name, *p))
+ return true;
+@@ -137,6 +146,11 @@ static bool is_ignored_symbol(const char *name, char type)
+ return true;
+ }
+
++ for (p = ignored_matches; *p; p++) {
++ if (strstr(name, *p))
++ return true;
++ }
++
+ if (type == 'U' || type == 'u')
+ return true;
+ /* exclude debugging symbols */
+diff --git a/tools/io_uring/io_uring-bench.c b/tools/io_uring/io_uring-bench.c
+index 0f257139b003e..7703f01183854 100644
+--- a/tools/io_uring/io_uring-bench.c
++++ b/tools/io_uring/io_uring-bench.c
+@@ -130,7 +130,7 @@ static int io_uring_register_files(struct submitter *s)
+ s->nr_files);
+ }
+
+-static int gettid(void)
++static int lk_gettid(void)
+ {
+ return syscall(__NR_gettid);
+ }
+@@ -281,7 +281,7 @@ static void *submitter_fn(void *data)
+ struct io_sq_ring *ring = &s->sq_ring;
+ int ret, prepped;
+
+- printf("submitter=%d\n", gettid());
++ printf("submitter=%d\n", lk_gettid());
+
+ srand48_r(pthread_self(), &s->rand);
+
+diff --git a/tools/lib/bpf/Makefile b/tools/lib/bpf/Makefile
+index c820b0be9d637..9ae8f4ef0aac2 100644
+--- a/tools/lib/bpf/Makefile
++++ b/tools/lib/bpf/Makefile
+@@ -59,7 +59,7 @@ FEATURE_USER = .libbpf
+ FEATURE_TESTS = libelf libelf-mmap zlib bpf reallocarray
+ FEATURE_DISPLAY = libelf zlib bpf
+
+-INCLUDES = -I. -I$(srctree)/tools/include -I$(srctree)/tools/arch/$(ARCH)/include/uapi -I$(srctree)/tools/include/uapi
++INCLUDES = -I. -I$(srctree)/tools/include -I$(srctree)/tools/include/uapi
+ FEATURE_CHECK_CFLAGS-bpf = $(INCLUDES)
+
+ check_feat := 1
* [gentoo-commits] proj/linux-patches:5.8 commit in: /
@ 2020-10-14 20:38 Mike Pagano
0 siblings, 0 replies; 30+ messages in thread
From: Mike Pagano @ 2020-10-14 20:38 UTC (permalink / raw
To: gentoo-commits
commit: 0d84d996394498a013226bca10c8c5e84cca985e
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Oct 14 20:38:37 2020 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Oct 14 20:38:37 2020 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=0d84d996
Linux patch 5.8.15
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1014_linux-5.8.15.patch | 7265 +++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 7269 insertions(+)
diff --git a/0000_README b/0000_README
index 6e16f1d..3400494 100644
--- a/0000_README
+++ b/0000_README
@@ -99,6 +99,10 @@ Patch: 1013_linux-5.8.14.patch
From: http://www.kernel.org
Desc: Linux 5.8.14
+Patch: 1014_linux-5.8.15.patch
+From: http://www.kernel.org
+Desc: Linux 5.8.15
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1014_linux-5.8.15.patch b/1014_linux-5.8.15.patch
new file mode 100644
index 0000000..0710af7
--- /dev/null
+++ b/1014_linux-5.8.15.patch
@@ -0,0 +1,7265 @@
+diff --git a/Makefile b/Makefile
+index 33ceda527e5ef..6c787cd1cb514 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 8
+-SUBLEVEL = 14
++SUBLEVEL = 15
+ EXTRAVERSION =
+ NAME = Kleptomaniac Octopus
+
+diff --git a/arch/arm64/crypto/aes-neonbs-core.S b/arch/arm64/crypto/aes-neonbs-core.S
+index b357164379f6d..63a52ad9a75c0 100644
+--- a/arch/arm64/crypto/aes-neonbs-core.S
++++ b/arch/arm64/crypto/aes-neonbs-core.S
+@@ -788,7 +788,7 @@ SYM_FUNC_START_LOCAL(__xts_crypt8)
+
+ 0: mov bskey, x21
+ mov rounds, x22
+- br x7
++ br x16
+ SYM_FUNC_END(__xts_crypt8)
+
+ .macro __xts_crypt, do8, o0, o1, o2, o3, o4, o5, o6, o7
+@@ -806,7 +806,7 @@ SYM_FUNC_END(__xts_crypt8)
+ uzp1 v30.4s, v30.4s, v25.4s
+ ld1 {v25.16b}, [x24]
+
+-99: adr x7, \do8
++99: adr x16, \do8
+ bl __xts_crypt8
+
+ ldp q16, q17, [sp, #.Lframe_local_offset]
+diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c
+index e229d95f470b8..7c1dadf14f567 100644
+--- a/arch/riscv/mm/init.c
++++ b/arch/riscv/mm/init.c
+@@ -515,6 +515,7 @@ asmlinkage void __init setup_vm(uintptr_t dtb_pa)
+ #else
+ dtb_early_va = (void *)dtb_pa;
+ #endif
++ dtb_early_pa = dtb_pa;
+ }
+
+ static inline void setup_vm_final(void)
+diff --git a/block/partitions/ibm.c b/block/partitions/ibm.c
+index d6e18df9c53c6..4b044e620d353 100644
+--- a/block/partitions/ibm.c
++++ b/block/partitions/ibm.c
+@@ -305,8 +305,6 @@ int ibm_partition(struct parsed_partitions *state)
+ if (!disk->fops->getgeo)
+ goto out_exit;
+ fn = symbol_get(dasd_biodasdinfo);
+- if (!fn)
+- goto out_exit;
+ blocksize = bdev_logical_block_size(bdev);
+ if (blocksize <= 0)
+ goto out_symbol;
+@@ -326,7 +324,7 @@ int ibm_partition(struct parsed_partitions *state)
+ geo->start = get_start_sect(bdev);
+ if (disk->fops->getgeo(bdev, geo))
+ goto out_freeall;
+- if (fn(disk, info)) {
++ if (!fn || fn(disk, info)) {
+ kfree(info);
+ info = NULL;
+ }
+@@ -370,7 +368,8 @@ out_nolab:
+ out_nogeo:
+ kfree(info);
+ out_symbol:
+- symbol_put(dasd_biodasdinfo);
++ if (fn)
++ symbol_put(dasd_biodasdinfo);
+ out_exit:
+ return res;
+ }
+diff --git a/block/scsi_ioctl.c b/block/scsi_ioctl.c
+index ef722f04f88a9..72108404718fe 100644
+--- a/block/scsi_ioctl.c
++++ b/block/scsi_ioctl.c
+@@ -651,6 +651,7 @@ struct compat_cdrom_generic_command {
+ compat_int_t stat;
+ compat_caddr_t sense;
+ unsigned char data_direction;
++ unsigned char pad[3];
+ compat_int_t quiet;
+ compat_int_t timeout;
+ compat_caddr_t reserved[1];
+diff --git a/drivers/gpio/gpiolib.c b/drivers/gpio/gpiolib.c
+index 6e813b13d6988..02154d8b1c0d5 100644
+--- a/drivers/gpio/gpiolib.c
++++ b/drivers/gpio/gpiolib.c
+@@ -838,7 +838,7 @@ static __poll_t lineevent_poll(struct file *filep,
+
+ static ssize_t lineevent_get_size(void)
+ {
+-#ifdef __x86_64__
++#if defined(CONFIG_X86_64) && !defined(CONFIG_UML)
+ /* i386 has no padding after 'id' */
+ if (in_ia32_syscall()) {
+ struct compat_gpioeevent_data {
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
+index e59c01a83dace..9a3267f06376f 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
+@@ -1052,6 +1052,7 @@ static int amdgpu_ttm_tt_pin_userptr(struct ttm_tt *ttm)
+
+ release_sg:
+ kfree(ttm->sg);
++ ttm->sg = NULL;
+ return r;
+ }
+
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_hdcp.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_hdcp.c
+index 949d10ef83040..6dd1f3f8d9903 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_hdcp.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_hdcp.c
+@@ -568,7 +568,7 @@ struct hdcp_workqueue *hdcp_create_workqueue(struct amdgpu_device *adev, struct
+ int i = 0;
+
+ hdcp_work = kcalloc(max_caps, sizeof(*hdcp_work), GFP_KERNEL);
+- if (hdcp_work == NULL)
++ if (ZERO_OR_NULL_PTR(hdcp_work))
+ return NULL;
+
+ hdcp_work->srm = kcalloc(PSP_HDCP_SRM_FIRST_GEN_MAX_SIZE, sizeof(*hdcp_work->srm), GFP_KERNEL);
+diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/smu10_hwmgr.c b/drivers/gpu/drm/amd/powerplay/hwmgr/smu10_hwmgr.c
+index 9ee8cf8267c88..43f7adff6cb74 100644
+--- a/drivers/gpu/drm/amd/powerplay/hwmgr/smu10_hwmgr.c
++++ b/drivers/gpu/drm/amd/powerplay/hwmgr/smu10_hwmgr.c
+@@ -563,6 +563,8 @@ static int smu10_dpm_force_dpm_level(struct pp_hwmgr *hwmgr,
+ struct smu10_hwmgr *data = hwmgr->backend;
+ uint32_t min_sclk = hwmgr->display_config->min_core_set_clock;
+ uint32_t min_mclk = hwmgr->display_config->min_mem_set_clock/100;
++ uint32_t index_fclk = data->clock_vol_info.vdd_dep_on_fclk->count - 1;
++ uint32_t index_socclk = data->clock_vol_info.vdd_dep_on_socclk->count - 1;
+
+ if (hwmgr->smu_version < 0x1E3700) {
+ pr_info("smu firmware version too old, can not set dpm level\n");
+@@ -676,13 +678,13 @@ static int smu10_dpm_force_dpm_level(struct pp_hwmgr *hwmgr,
+ smum_send_msg_to_smc_with_parameter(hwmgr,
+ PPSMC_MSG_SetHardMinFclkByFreq,
+ hwmgr->display_config->num_display > 3 ?
+- SMU10_UMD_PSTATE_PEAK_FCLK :
++ data->clock_vol_info.vdd_dep_on_fclk->entries[0].clk :
+ min_mclk,
+ NULL);
+
+ smum_send_msg_to_smc_with_parameter(hwmgr,
+ PPSMC_MSG_SetHardMinSocclkByFreq,
+- SMU10_UMD_PSTATE_MIN_SOCCLK,
++ data->clock_vol_info.vdd_dep_on_socclk->entries[0].clk,
+ NULL);
+ smum_send_msg_to_smc_with_parameter(hwmgr,
+ PPSMC_MSG_SetHardMinVcn,
+@@ -695,11 +697,11 @@ static int smu10_dpm_force_dpm_level(struct pp_hwmgr *hwmgr,
+ NULL);
+ smum_send_msg_to_smc_with_parameter(hwmgr,
+ PPSMC_MSG_SetSoftMaxFclkByFreq,
+- SMU10_UMD_PSTATE_PEAK_FCLK,
++ data->clock_vol_info.vdd_dep_on_fclk->entries[index_fclk].clk,
+ NULL);
+ smum_send_msg_to_smc_with_parameter(hwmgr,
+ PPSMC_MSG_SetSoftMaxSocclkByFreq,
+- SMU10_UMD_PSTATE_PEAK_SOCCLK,
++ data->clock_vol_info.vdd_dep_on_socclk->entries[index_socclk].clk,
+ NULL);
+ smum_send_msg_to_smc_with_parameter(hwmgr,
+ PPSMC_MSG_SetSoftMaxVcn,
+diff --git a/drivers/gpu/drm/nouveau/nouveau_mem.c b/drivers/gpu/drm/nouveau/nouveau_mem.c
+index c002f89685073..9682f30ab6f68 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_mem.c
++++ b/drivers/gpu/drm/nouveau/nouveau_mem.c
+@@ -176,6 +176,8 @@ void
+ nouveau_mem_del(struct ttm_mem_reg *reg)
+ {
+ struct nouveau_mem *mem = nouveau_mem(reg);
++ if (!mem)
++ return;
+ nouveau_mem_fini(mem);
+ kfree(reg->mm_node);
+ reg->mm_node = NULL;
+diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/device/base.c b/drivers/gpu/drm/nouveau/nvkm/engine/device/base.c
+index 5b90c2a1bf3d3..7c2e5db840be5 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/engine/device/base.c
++++ b/drivers/gpu/drm/nouveau/nvkm/engine/device/base.c
+@@ -3149,6 +3149,7 @@ nvkm_device_ctor(const struct nvkm_device_func *func,
+ case 0x168: device->chip = &nv168_chipset; break;
+ default:
+ nvdev_error(device, "unknown chipset (%08x)\n", boot0);
++ ret = -ENODEV;
+ goto done;
+ }
+
+diff --git a/drivers/i2c/busses/i2c-meson.c b/drivers/i2c/busses/i2c-meson.c
+index c5dec572fc48e..ef73a42577cc7 100644
+--- a/drivers/i2c/busses/i2c-meson.c
++++ b/drivers/i2c/busses/i2c-meson.c
+@@ -5,6 +5,7 @@
+ * Copyright (C) 2014 Beniamino Galvani <b.galvani@gmail.com>
+ */
+
++#include <linux/bitfield.h>
+ #include <linux/clk.h>
+ #include <linux/completion.h>
+ #include <linux/i2c.h>
+@@ -33,12 +34,17 @@
+ #define REG_CTRL_ACK_IGNORE BIT(1)
+ #define REG_CTRL_STATUS BIT(2)
+ #define REG_CTRL_ERROR BIT(3)
+-#define REG_CTRL_CLKDIV_SHIFT 12
+-#define REG_CTRL_CLKDIV_MASK GENMASK(21, 12)
+-#define REG_CTRL_CLKDIVEXT_SHIFT 28
+-#define REG_CTRL_CLKDIVEXT_MASK GENMASK(29, 28)
++#define REG_CTRL_CLKDIV GENMASK(21, 12)
++#define REG_CTRL_CLKDIVEXT GENMASK(29, 28)
++
++#define REG_SLV_ADDR GENMASK(7, 0)
++#define REG_SLV_SDA_FILTER GENMASK(10, 8)
++#define REG_SLV_SCL_FILTER GENMASK(13, 11)
++#define REG_SLV_SCL_LOW GENMASK(27, 16)
++#define REG_SLV_SCL_LOW_EN BIT(28)
+
+ #define I2C_TIMEOUT_MS 500
++#define FILTER_DELAY 15
+
+ enum {
+ TOKEN_END = 0,
+@@ -133,19 +139,24 @@ static void meson_i2c_set_clk_div(struct meson_i2c *i2c, unsigned int freq)
+ unsigned long clk_rate = clk_get_rate(i2c->clk);
+ unsigned int div;
+
+- div = DIV_ROUND_UP(clk_rate, freq * i2c->data->div_factor);
++ div = DIV_ROUND_UP(clk_rate, freq);
++ div -= FILTER_DELAY;
++ div = DIV_ROUND_UP(div, i2c->data->div_factor);
+
+ /* clock divider has 12 bits */
+- if (div >= (1 << 12)) {
++ if (div > GENMASK(11, 0)) {
+ dev_err(i2c->dev, "requested bus frequency too low\n");
+- div = (1 << 12) - 1;
++ div = GENMASK(11, 0);
+ }
+
+- meson_i2c_set_mask(i2c, REG_CTRL, REG_CTRL_CLKDIV_MASK,
+- (div & GENMASK(9, 0)) << REG_CTRL_CLKDIV_SHIFT);
++ meson_i2c_set_mask(i2c, REG_CTRL, REG_CTRL_CLKDIV,
++ FIELD_PREP(REG_CTRL_CLKDIV, div & GENMASK(9, 0)));
++
++ meson_i2c_set_mask(i2c, REG_CTRL, REG_CTRL_CLKDIVEXT,
++ FIELD_PREP(REG_CTRL_CLKDIVEXT, div >> 10));
+
+- meson_i2c_set_mask(i2c, REG_CTRL, REG_CTRL_CLKDIVEXT_MASK,
+- (div >> 10) << REG_CTRL_CLKDIVEXT_SHIFT);
++ /* Disable HIGH/LOW mode */
++ meson_i2c_set_mask(i2c, REG_SLAVE_ADDR, REG_SLV_SCL_LOW_EN, 0);
+
+ dev_dbg(i2c->dev, "%s: clk %lu, freq %u, div %u\n", __func__,
+ clk_rate, freq, div);
+@@ -280,7 +291,10 @@ static void meson_i2c_do_start(struct meson_i2c *i2c, struct i2c_msg *msg)
+ token = (msg->flags & I2C_M_RD) ? TOKEN_SLAVE_ADDR_READ :
+ TOKEN_SLAVE_ADDR_WRITE;
+
+- writel(msg->addr << 1, i2c->regs + REG_SLAVE_ADDR);
++
++ meson_i2c_set_mask(i2c, REG_SLAVE_ADDR, REG_SLV_ADDR,
++ FIELD_PREP(REG_SLV_ADDR, msg->addr << 1));
++
+ meson_i2c_add_token(i2c, TOKEN_START);
+ meson_i2c_add_token(i2c, token);
+ }
+@@ -357,16 +371,12 @@ static int meson_i2c_xfer_messages(struct i2c_adapter *adap,
+ struct meson_i2c *i2c = adap->algo_data;
+ int i, ret = 0;
+
+- clk_enable(i2c->clk);
+-
+ for (i = 0; i < num; i++) {
+ ret = meson_i2c_xfer_msg(i2c, msgs + i, i == num - 1, atomic);
+ if (ret)
+ break;
+ }
+
+- clk_disable(i2c->clk);
+-
+ return ret ?: i;
+ }
+
+@@ -435,7 +445,7 @@ static int meson_i2c_probe(struct platform_device *pdev)
+ return ret;
+ }
+
+- ret = clk_prepare(i2c->clk);
++ ret = clk_prepare_enable(i2c->clk);
+ if (ret < 0) {
+ dev_err(&pdev->dev, "can't prepare clock\n");
+ return ret;
+@@ -457,10 +467,14 @@ static int meson_i2c_probe(struct platform_device *pdev)
+
+ ret = i2c_add_adapter(&i2c->adap);
+ if (ret < 0) {
+- clk_unprepare(i2c->clk);
++ clk_disable_unprepare(i2c->clk);
+ return ret;
+ }
+
++ /* Disable filtering */
++ meson_i2c_set_mask(i2c, REG_SLAVE_ADDR,
++ REG_SLV_SDA_FILTER | REG_SLV_SCL_FILTER, 0);
++
+ meson_i2c_set_clk_div(i2c, timings.bus_freq_hz);
+
+ return 0;
+@@ -471,7 +485,7 @@ static int meson_i2c_remove(struct platform_device *pdev)
+ struct meson_i2c *i2c = platform_get_drvdata(pdev);
+
+ i2c_del_adapter(&i2c->adap);
+- clk_unprepare(i2c->clk);
++ clk_disable_unprepare(i2c->clk);
+
+ return 0;
+ }
+diff --git a/drivers/i2c/busses/i2c-owl.c b/drivers/i2c/busses/i2c-owl.c
+index 672f1f239bd6f..a163b8f308c14 100644
+--- a/drivers/i2c/busses/i2c-owl.c
++++ b/drivers/i2c/busses/i2c-owl.c
+@@ -176,6 +176,9 @@ static irqreturn_t owl_i2c_interrupt(int irq, void *_dev)
+ fifostat = readl(i2c_dev->base + OWL_I2C_REG_FIFOSTAT);
+ if (fifostat & OWL_I2C_FIFOSTAT_RNB) {
+ i2c_dev->err = -ENXIO;
++ /* Clear NACK error bit by writing "1" */
++ owl_i2c_update_reg(i2c_dev->base + OWL_I2C_REG_FIFOSTAT,
++ OWL_I2C_FIFOSTAT_RNB, true);
+ goto stop;
+ }
+
+@@ -183,6 +186,9 @@ static irqreturn_t owl_i2c_interrupt(int irq, void *_dev)
+ stat = readl(i2c_dev->base + OWL_I2C_REG_STAT);
+ if (stat & OWL_I2C_STAT_BEB) {
+ i2c_dev->err = -EIO;
++ /* Clear BUS error bit by writing "1" */
++ owl_i2c_update_reg(i2c_dev->base + OWL_I2C_REG_STAT,
++ OWL_I2C_STAT_BEB, true);
+ goto stop;
+ }
+
+diff --git a/drivers/input/misc/ati_remote2.c b/drivers/input/misc/ati_remote2.c
+index 305f0160506a0..8a36d78fed63a 100644
+--- a/drivers/input/misc/ati_remote2.c
++++ b/drivers/input/misc/ati_remote2.c
+@@ -68,7 +68,7 @@ static int ati_remote2_get_channel_mask(char *buffer,
+ {
+ pr_debug("%s()\n", __func__);
+
+- return sprintf(buffer, "0x%04x", *(unsigned int *)kp->arg);
++ return sprintf(buffer, "0x%04x\n", *(unsigned int *)kp->arg);
+ }
+
+ static int ati_remote2_set_mode_mask(const char *val,
+@@ -84,7 +84,7 @@ static int ati_remote2_get_mode_mask(char *buffer,
+ {
+ pr_debug("%s()\n", __func__);
+
+- return sprintf(buffer, "0x%02x", *(unsigned int *)kp->arg);
++ return sprintf(buffer, "0x%02x\n", *(unsigned int *)kp->arg);
+ }
+
+ static unsigned int channel_mask = ATI_REMOTE2_MAX_CHANNEL_MASK;
+diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
+index fbe0b0cc56edf..24a84d294fd01 100644
+--- a/drivers/iommu/intel/iommu.c
++++ b/drivers/iommu/intel/iommu.c
+@@ -2617,7 +2617,7 @@ static struct dmar_domain *dmar_insert_one_dev_info(struct intel_iommu *iommu,
+ }
+
+ /* Setup the PASID entry for requests without PASID: */
+- spin_lock(&iommu->lock);
++ spin_lock_irqsave(&iommu->lock, flags);
+ if (hw_pass_through && domain_type_is_si(domain))
+ ret = intel_pasid_setup_pass_through(iommu, domain,
+ dev, PASID_RID2PASID);
+@@ -2627,7 +2627,7 @@ static struct dmar_domain *dmar_insert_one_dev_info(struct intel_iommu *iommu,
+ else
+ ret = intel_pasid_setup_second_level(iommu, domain,
+ dev, PASID_RID2PASID);
+- spin_unlock(&iommu->lock);
++ spin_unlock_irqrestore(&iommu->lock, flags);
+ if (ret) {
+ dev_err(dev, "Setup RID2PASID failed\n");
+ dmar_remove_one_dev_info(dev);
+diff --git a/drivers/mmc/core/queue.c b/drivers/mmc/core/queue.c
+index 4b1eb89b401d9..1ad518821157f 100644
+--- a/drivers/mmc/core/queue.c
++++ b/drivers/mmc/core/queue.c
+@@ -190,7 +190,7 @@ static void mmc_queue_setup_discard(struct request_queue *q,
+ q->limits.discard_granularity = card->pref_erase << 9;
+ /* granularity must not be greater than max. discard */
+ if (card->pref_erase > max_discard)
+- q->limits.discard_granularity = 0;
++ q->limits.discard_granularity = SECTOR_SIZE;
+ if (mmc_can_secure_erase_trim(card))
+ blk_queue_flag_set(QUEUE_FLAG_SECERASE, q);
+ }
+diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
+index 500aa3e19a4c7..fddf7c502355b 100644
+--- a/drivers/net/bonding/bond_main.c
++++ b/drivers/net/bonding/bond_main.c
+@@ -1195,6 +1195,7 @@ static void bond_setup_by_slave(struct net_device *bond_dev,
+
+ bond_dev->type = slave_dev->type;
+ bond_dev->hard_header_len = slave_dev->hard_header_len;
++ bond_dev->needed_headroom = slave_dev->needed_headroom;
+ bond_dev->addr_len = slave_dev->addr_len;
+
+ memcpy(bond_dev->broadcast, slave_dev->broadcast,
+diff --git a/drivers/net/dsa/ocelot/felix_vsc9959.c b/drivers/net/dsa/ocelot/felix_vsc9959.c
+index 7c167a394b762..259a612da0030 100644
+--- a/drivers/net/dsa/ocelot/felix_vsc9959.c
++++ b/drivers/net/dsa/ocelot/felix_vsc9959.c
+@@ -1105,8 +1105,21 @@ static int vsc9959_prevalidate_phy_mode(struct ocelot *ocelot, int port,
+ }
+ }
+
++/* Watermark encode
++ * Bit 8: Unit; 0:1, 1:16
++ * Bit 7-0: Value to be multiplied with unit
++ */
++static u16 vsc9959_wm_enc(u16 value)
++{
++ if (value >= BIT(8))
++ return BIT(8) | (value / 16);
++
++ return value;
++}
++
+ static const struct ocelot_ops vsc9959_ops = {
+ .reset = vsc9959_reset,
++ .wm_enc = vsc9959_wm_enc,
+ };
+
+ static int vsc9959_mdio_bus_alloc(struct ocelot *ocelot)
+@@ -1215,8 +1228,28 @@ static void vsc9959_mdio_bus_free(struct ocelot *ocelot)
+ static void vsc9959_sched_speed_set(struct ocelot *ocelot, int port,
+ u32 speed)
+ {
++ u8 tas_speed;
++
++ switch (speed) {
++ case SPEED_10:
++ tas_speed = OCELOT_SPEED_10;
++ break;
++ case SPEED_100:
++ tas_speed = OCELOT_SPEED_100;
++ break;
++ case SPEED_1000:
++ tas_speed = OCELOT_SPEED_1000;
++ break;
++ case SPEED_2500:
++ tas_speed = OCELOT_SPEED_2500;
++ break;
++ default:
++ tas_speed = OCELOT_SPEED_1000;
++ break;
++ }
++
+ ocelot_rmw_rix(ocelot,
+- QSYS_TAG_CONFIG_LINK_SPEED(speed),
++ QSYS_TAG_CONFIG_LINK_SPEED(tas_speed),
+ QSYS_TAG_CONFIG_LINK_SPEED_M,
+ QSYS_TAG_CONFIG, port);
+ }
+diff --git a/drivers/net/ethernet/cavium/octeon/octeon_mgmt.c b/drivers/net/ethernet/cavium/octeon/octeon_mgmt.c
+index cbaa1924afbe1..706e959bf02ac 100644
+--- a/drivers/net/ethernet/cavium/octeon/octeon_mgmt.c
++++ b/drivers/net/ethernet/cavium/octeon/octeon_mgmt.c
+@@ -1219,7 +1219,7 @@ static int octeon_mgmt_open(struct net_device *netdev)
+ */
+ if (netdev->phydev) {
+ netif_carrier_off(netdev);
+- phy_start_aneg(netdev->phydev);
++ phy_start(netdev->phydev);
+ }
+
+ netif_wake_queue(netdev);
+@@ -1247,8 +1247,10 @@ static int octeon_mgmt_stop(struct net_device *netdev)
+ napi_disable(&p->napi);
+ netif_stop_queue(netdev);
+
+- if (netdev->phydev)
++ if (netdev->phydev) {
++ phy_stop(netdev->phydev);
+ phy_disconnect(netdev->phydev);
++ }
+
+ netif_carrier_off(netdev);
+
+diff --git a/drivers/net/ethernet/huawei/hinic/Kconfig b/drivers/net/ethernet/huawei/hinic/Kconfig
+index 936e2dd3bb135..b47bd5440c5f0 100644
+--- a/drivers/net/ethernet/huawei/hinic/Kconfig
++++ b/drivers/net/ethernet/huawei/hinic/Kconfig
+@@ -6,6 +6,7 @@
+ config HINIC
+ tristate "Huawei Intelligent PCIE Network Interface Card"
+ depends on (PCI_MSI && (X86 || ARM64))
++ select NET_DEVLINK
+ help
+ This driver supports HiNIC PCIE Ethernet cards.
+ To compile this driver as part of the kernel, choose Y here.
+diff --git a/drivers/net/ethernet/huawei/hinic/hinic_hw_api_cmd.c b/drivers/net/ethernet/huawei/hinic/hinic_hw_api_cmd.c
+index 583fd24c29cf6..29e88e25a4a4f 100644
+--- a/drivers/net/ethernet/huawei/hinic/hinic_hw_api_cmd.c
++++ b/drivers/net/ethernet/huawei/hinic/hinic_hw_api_cmd.c
+@@ -112,6 +112,26 @@ static u32 get_hw_cons_idx(struct hinic_api_cmd_chain *chain)
+ return HINIC_API_CMD_STATUS_GET(val, CONS_IDX);
+ }
+
++static void dump_api_chain_reg(struct hinic_api_cmd_chain *chain)
++{
++ u32 addr, val;
++
++ addr = HINIC_CSR_API_CMD_STATUS_ADDR(chain->chain_type);
++ val = hinic_hwif_read_reg(chain->hwif, addr);
++
++ dev_err(&chain->hwif->pdev->dev, "Chain type: 0x%x, cpld error: 0x%x, check error: 0x%x, current fsm: 0x%x\n",
++ chain->chain_type, HINIC_API_CMD_STATUS_GET(val, CPLD_ERR),
++ HINIC_API_CMD_STATUS_GET(val, CHKSUM_ERR),
++ HINIC_API_CMD_STATUS_GET(val, FSM));
++
++ dev_err(&chain->hwif->pdev->dev, "Chain hw current ci: 0x%x\n",
++ HINIC_API_CMD_STATUS_GET(val, CONS_IDX));
++
++ addr = HINIC_CSR_API_CMD_CHAIN_PI_ADDR(chain->chain_type);
++ val = hinic_hwif_read_reg(chain->hwif, addr);
++ dev_err(&chain->hwif->pdev->dev, "Chain hw current pi: 0x%x\n", val);
++}
++
+ /**
+ * chain_busy - check if the chain is still processing last requests
+ * @chain: chain to check
+@@ -131,8 +151,10 @@ static int chain_busy(struct hinic_api_cmd_chain *chain)
+
+ /* check for a space for a new command */
+ if (chain->cons_idx == MASKED_IDX(chain, prod_idx + 1)) {
+- dev_err(&pdev->dev, "API CMD chain %d is busy\n",
+- chain->chain_type);
++ dev_err(&pdev->dev, "API CMD chain %d is busy, cons_idx: %d, prod_idx: %d\n",
++ chain->chain_type, chain->cons_idx,
++ chain->prod_idx);
++ dump_api_chain_reg(chain);
+ return -EBUSY;
+ }
+ break;
+@@ -332,6 +354,7 @@ static int wait_for_api_cmd_completion(struct hinic_api_cmd_chain *chain)
+ err = wait_for_status_poll(chain);
+ if (err) {
+ dev_err(&pdev->dev, "API CMD Poll status timeout\n");
++ dump_api_chain_reg(chain);
+ break;
+ }
+ break;
+diff --git a/drivers/net/ethernet/huawei/hinic/hinic_hw_api_cmd.h b/drivers/net/ethernet/huawei/hinic/hinic_hw_api_cmd.h
+index 0ba00fd828dfc..6d1654b050ad5 100644
+--- a/drivers/net/ethernet/huawei/hinic/hinic_hw_api_cmd.h
++++ b/drivers/net/ethernet/huawei/hinic/hinic_hw_api_cmd.h
+@@ -103,10 +103,14 @@
+ HINIC_API_CMD_STATUS_HEADER_##member##_MASK)
+
+ #define HINIC_API_CMD_STATUS_CONS_IDX_SHIFT 0
++#define HINIC_API_CMD_STATUS_FSM_SHIFT 24
+ #define HINIC_API_CMD_STATUS_CHKSUM_ERR_SHIFT 28
++#define HINIC_API_CMD_STATUS_CPLD_ERR_SHIFT 30
+
+ #define HINIC_API_CMD_STATUS_CONS_IDX_MASK 0xFFFFFF
++#define HINIC_API_CMD_STATUS_FSM_MASK 0xFU
+ #define HINIC_API_CMD_STATUS_CHKSUM_ERR_MASK 0x3
++#define HINIC_API_CMD_STATUS_CPLD_ERR_MASK 0x1U
+
+ #define HINIC_API_CMD_STATUS_GET(val, member) \
+ (((val) >> HINIC_API_CMD_STATUS_##member##_SHIFT) & \
+diff --git a/drivers/net/ethernet/huawei/hinic/hinic_hw_cmdq.c b/drivers/net/ethernet/huawei/hinic/hinic_hw_cmdq.c
+index cb5b6e5f787f2..e0eb294779ec1 100644
+--- a/drivers/net/ethernet/huawei/hinic/hinic_hw_cmdq.c
++++ b/drivers/net/ethernet/huawei/hinic/hinic_hw_cmdq.c
+@@ -401,6 +401,7 @@ static int cmdq_sync_cmd_direct_resp(struct hinic_cmdq *cmdq,
+
+ spin_unlock_bh(&cmdq->cmdq_lock);
+
++ hinic_dump_ceq_info(cmdq->hwdev);
+ return -ETIMEDOUT;
+ }
+
+@@ -807,6 +808,7 @@ static int init_cmdqs_ctxt(struct hinic_hwdev *hwdev,
+
+ cmdq_type = HINIC_CMDQ_SYNC;
+ for (; cmdq_type < HINIC_MAX_CMDQ_TYPES; cmdq_type++) {
++ cmdqs->cmdq[cmdq_type].hwdev = hwdev;
+ err = init_cmdq(&cmdqs->cmdq[cmdq_type],
+ &cmdqs->saved_wqs[cmdq_type], cmdq_type,
+ db_area[cmdq_type]);
+diff --git a/drivers/net/ethernet/huawei/hinic/hinic_hw_cmdq.h b/drivers/net/ethernet/huawei/hinic/hinic_hw_cmdq.h
+index 3e4b0aef9fe6c..f40c31e1879f1 100644
+--- a/drivers/net/ethernet/huawei/hinic/hinic_hw_cmdq.h
++++ b/drivers/net/ethernet/huawei/hinic/hinic_hw_cmdq.h
+@@ -130,6 +130,8 @@ struct hinic_cmdq_ctxt {
+ };
+
+ struct hinic_cmdq {
++ struct hinic_hwdev *hwdev;
++
+ struct hinic_wq *wq;
+
+ enum hinic_cmdq_type cmdq_type;
+diff --git a/drivers/net/ethernet/huawei/hinic/hinic_hw_dev.c b/drivers/net/ethernet/huawei/hinic/hinic_hw_dev.c
+index b735bc537508f..298ceb930cc62 100644
+--- a/drivers/net/ethernet/huawei/hinic/hinic_hw_dev.c
++++ b/drivers/net/ethernet/huawei/hinic/hinic_hw_dev.c
+@@ -253,9 +253,9 @@ static int init_fw_ctxt(struct hinic_hwdev *hwdev)
+ &fw_ctxt, sizeof(fw_ctxt),
+ &fw_ctxt, &out_size);
+ if (err || (out_size != sizeof(fw_ctxt)) || fw_ctxt.status) {
+- dev_err(&pdev->dev, "Failed to init FW ctxt, ret = %d\n",
+- fw_ctxt.status);
+- return -EFAULT;
++ dev_err(&pdev->dev, "Failed to init FW ctxt, err: %d, status: 0x%x, out size: 0x%x\n",
++ err, fw_ctxt.status, out_size);
++ return -EIO;
+ }
+
+ return 0;
+@@ -420,9 +420,9 @@ static int get_base_qpn(struct hinic_hwdev *hwdev, u16 *base_qpn)
+ &cmd_base_qpn, sizeof(cmd_base_qpn),
+ &cmd_base_qpn, &out_size);
+ if (err || (out_size != sizeof(cmd_base_qpn)) || cmd_base_qpn.status) {
+- dev_err(&pdev->dev, "Failed to get base qpn, status = %d\n",
+- cmd_base_qpn.status);
+- return -EFAULT;
++ dev_err(&pdev->dev, "Failed to get base qpn, err: %d, status: 0x%x, out size: 0x%x\n",
++ err, cmd_base_qpn.status, out_size);
++ return -EIO;
+ }
+
+ *base_qpn = cmd_base_qpn.qpn;
+diff --git a/drivers/net/ethernet/huawei/hinic/hinic_hw_eqs.c b/drivers/net/ethernet/huawei/hinic/hinic_hw_eqs.c
+index 397936cac304c..ca8cb68a8d206 100644
+--- a/drivers/net/ethernet/huawei/hinic/hinic_hw_eqs.c
++++ b/drivers/net/ethernet/huawei/hinic/hinic_hw_eqs.c
+@@ -953,3 +953,42 @@ void hinic_ceqs_free(struct hinic_ceqs *ceqs)
+ for (q_id = 0; q_id < ceqs->num_ceqs; q_id++)
+ remove_eq(&ceqs->ceq[q_id]);
+ }
++
++void hinic_dump_ceq_info(struct hinic_hwdev *hwdev)
++{
++ struct hinic_eq *eq = NULL;
++ u32 addr, ci, pi;
++ int q_id;
++
++ for (q_id = 0; q_id < hwdev->func_to_io.ceqs.num_ceqs; q_id++) {
++ eq = &hwdev->func_to_io.ceqs.ceq[q_id];
++ addr = EQ_CONS_IDX_REG_ADDR(eq);
++ ci = hinic_hwif_read_reg(hwdev->hwif, addr);
++ addr = EQ_PROD_IDX_REG_ADDR(eq);
++ pi = hinic_hwif_read_reg(hwdev->hwif, addr);
++ dev_err(&hwdev->hwif->pdev->dev, "Ceq id: %d, ci: 0x%08x, sw_ci: 0x%08x, pi: 0x%x, tasklet_state: 0x%lx, wrap: %d, ceqe: 0x%x\n",
++ q_id, ci, eq->cons_idx, pi,
++ eq->ceq_tasklet.state,
++ eq->wrapped, be32_to_cpu(*(__be32 *)(GET_CURR_CEQ_ELEM(eq))));
++ }
++}
++
++void hinic_dump_aeq_info(struct hinic_hwdev *hwdev)
++{
++ struct hinic_aeq_elem *aeqe_pos = NULL;
++ struct hinic_eq *eq = NULL;
++ u32 addr, ci, pi;
++ int q_id;
++
++ for (q_id = 0; q_id < hwdev->aeqs.num_aeqs; q_id++) {
++ eq = &hwdev->aeqs.aeq[q_id];
++ addr = EQ_CONS_IDX_REG_ADDR(eq);
++ ci = hinic_hwif_read_reg(hwdev->hwif, addr);
++ addr = EQ_PROD_IDX_REG_ADDR(eq);
++ pi = hinic_hwif_read_reg(hwdev->hwif, addr);
++ aeqe_pos = GET_CURR_AEQ_ELEM(eq);
++ dev_err(&hwdev->hwif->pdev->dev, "Aeq id: %d, ci: 0x%08x, pi: 0x%x, work_state: 0x%x, wrap: %d, desc: 0x%x\n",
++ q_id, ci, pi, work_busy(&eq->aeq_work.work),
++ eq->wrapped, be32_to_cpu(aeqe_pos->desc));
++ }
++}
+diff --git a/drivers/net/ethernet/huawei/hinic/hinic_hw_eqs.h b/drivers/net/ethernet/huawei/hinic/hinic_hw_eqs.h
+index 74b9ff90640c2..43065fc708693 100644
+--- a/drivers/net/ethernet/huawei/hinic/hinic_hw_eqs.h
++++ b/drivers/net/ethernet/huawei/hinic/hinic_hw_eqs.h
+@@ -162,7 +162,7 @@ enum hinic_eqe_state {
+
+ struct hinic_aeq_elem {
+ u8 data[HINIC_AEQE_DATA_SIZE];
+- u32 desc;
++ __be32 desc;
+ };
+
+ struct hinic_eq_work {
+@@ -254,4 +254,8 @@ int hinic_ceqs_init(struct hinic_ceqs *ceqs, struct hinic_hwif *hwif,
+
+ void hinic_ceqs_free(struct hinic_ceqs *ceqs);
+
++void hinic_dump_ceq_info(struct hinic_hwdev *hwdev);
++
++void hinic_dump_aeq_info(struct hinic_hwdev *hwdev);
++
+ #endif
+diff --git a/drivers/net/ethernet/huawei/hinic/hinic_hw_if.c b/drivers/net/ethernet/huawei/hinic/hinic_hw_if.c
+index cf127d896ba69..bc8925c0c982c 100644
+--- a/drivers/net/ethernet/huawei/hinic/hinic_hw_if.c
++++ b/drivers/net/ethernet/huawei/hinic/hinic_hw_if.c
+@@ -21,6 +21,8 @@
+
+ #define WAIT_HWIF_READY_TIMEOUT 10000
+
++#define HINIC_SELFTEST_RESULT 0x883C
++
+ /**
+ * hinic_msix_attr_set - set message attribute for msix entry
+ * @hwif: the HW interface of a pci function device
+@@ -369,6 +371,26 @@ u16 hinic_pf_id_of_vf_hw(struct hinic_hwif *hwif)
+ return HINIC_FA0_GET(attr0, PF_IDX);
+ }
+
++static void __print_selftest_reg(struct hinic_hwif *hwif)
++{
++ u32 addr, attr0, attr1;
++
++ addr = HINIC_CSR_FUNC_ATTR1_ADDR;
++ attr1 = hinic_hwif_read_reg(hwif, addr);
++
++ if (attr1 == HINIC_PCIE_LINK_DOWN) {
++ dev_err(&hwif->pdev->dev, "PCIE is link down\n");
++ return;
++ }
++
++ addr = HINIC_CSR_FUNC_ATTR0_ADDR;
++ attr0 = hinic_hwif_read_reg(hwif, addr);
++ if (HINIC_FA0_GET(attr0, FUNC_TYPE) != HINIC_VF &&
++ !HINIC_FA0_GET(attr0, PCI_INTF_IDX))
++ dev_err(&hwif->pdev->dev, "Selftest reg: 0x%08x\n",
++ hinic_hwif_read_reg(hwif, HINIC_SELFTEST_RESULT));
++}
++
+ /**
+ * hinic_init_hwif - initialize the hw interface
+ * @hwif: the HW interface of a pci function device
+@@ -398,6 +420,7 @@ int hinic_init_hwif(struct hinic_hwif *hwif, struct pci_dev *pdev)
+ err = wait_hwif_ready(hwif);
+ if (err) {
+ dev_err(&pdev->dev, "HW interface is not ready\n");
++ __print_selftest_reg(hwif);
+ goto err_hwif_ready;
+ }
+
+diff --git a/drivers/net/ethernet/huawei/hinic/hinic_hw_if.h b/drivers/net/ethernet/huawei/hinic/hinic_hw_if.h
+index 0872e035faa11..c06f2253151e2 100644
+--- a/drivers/net/ethernet/huawei/hinic/hinic_hw_if.h
++++ b/drivers/net/ethernet/huawei/hinic/hinic_hw_if.h
+@@ -12,6 +12,8 @@
+ #include <linux/types.h>
+ #include <asm/byteorder.h>
+
++#define HINIC_PCIE_LINK_DOWN 0xFFFFFFFF
++
+ #define HINIC_DMA_ATTR_ST_SHIFT 0
+ #define HINIC_DMA_ATTR_AT_SHIFT 8
+ #define HINIC_DMA_ATTR_PH_SHIFT 10
+@@ -249,13 +251,17 @@ struct hinic_hwif {
+
+ static inline u32 hinic_hwif_read_reg(struct hinic_hwif *hwif, u32 reg)
+ {
+- return be32_to_cpu(readl(hwif->cfg_regs_bar + reg));
++ u32 out = readl(hwif->cfg_regs_bar + reg);
++
++ return be32_to_cpu(*(__be32 *)&out);
+ }
+
+ static inline void hinic_hwif_write_reg(struct hinic_hwif *hwif, u32 reg,
+ u32 val)
+ {
+- writel(cpu_to_be32(val), hwif->cfg_regs_bar + reg);
++ __be32 in = cpu_to_be32(val);
++
++ writel(*(u32 *)&in, hwif->cfg_regs_bar + reg);
+ }
+
+ int hinic_msix_attr_set(struct hinic_hwif *hwif, u16 msix_index,
+diff --git a/drivers/net/ethernet/huawei/hinic/hinic_hw_mbox.c b/drivers/net/ethernet/huawei/hinic/hinic_hw_mbox.c
+index bc2f87e6cb5d7..47c93f946b94d 100644
+--- a/drivers/net/ethernet/huawei/hinic/hinic_hw_mbox.c
++++ b/drivers/net/ethernet/huawei/hinic/hinic_hw_mbox.c
+@@ -650,6 +650,7 @@ wait_for_mbox_seg_completion(struct hinic_mbox_func_to_func *func_to_func,
+ if (!wait_for_completion_timeout(done, jif)) {
+ dev_err(&hwdev->hwif->pdev->dev, "Send mailbox segment timeout\n");
+ dump_mox_reg(hwdev);
++ hinic_dump_aeq_info(hwdev);
+ return -ETIMEDOUT;
+ }
+
+@@ -897,6 +898,7 @@ int hinic_mbox_to_func(struct hinic_mbox_func_to_func *func_to_func,
+ set_mbox_to_func_event(func_to_func, EVENT_TIMEOUT);
+ dev_err(&func_to_func->hwif->pdev->dev,
+ "Send mbox msg timeout, msg_id: %d\n", msg_info.msg_id);
++ hinic_dump_aeq_info(func_to_func->hwdev);
+ err = -ETIMEDOUT;
+ goto err_send_mbox;
+ }
+diff --git a/drivers/net/ethernet/huawei/hinic/hinic_hw_mgmt.c b/drivers/net/ethernet/huawei/hinic/hinic_hw_mgmt.c
+index 7fe39a155b329..070288c8b4f37 100644
+--- a/drivers/net/ethernet/huawei/hinic/hinic_hw_mgmt.c
++++ b/drivers/net/ethernet/huawei/hinic/hinic_hw_mgmt.c
+@@ -276,6 +276,7 @@ static int msg_to_mgmt_sync(struct hinic_pf_to_mgmt *pf_to_mgmt,
+
+ if (!wait_for_completion_timeout(recv_done, timeo)) {
+ dev_err(&pdev->dev, "MGMT timeout, MSG id = %d\n", msg_id);
++ hinic_dump_aeq_info(pf_to_mgmt->hwdev);
+ err = -ETIMEDOUT;
+ goto unlock_sync_msg;
+ }
+diff --git a/drivers/net/ethernet/huawei/hinic/hinic_port.c b/drivers/net/ethernet/huawei/hinic/hinic_port.c
+index 175c0ee000384..2be7c254cca90 100644
+--- a/drivers/net/ethernet/huawei/hinic/hinic_port.c
++++ b/drivers/net/ethernet/huawei/hinic/hinic_port.c
+@@ -58,11 +58,11 @@ static int change_mac(struct hinic_dev *nic_dev, const u8 *addr,
+ sizeof(port_mac_cmd),
+ &port_mac_cmd, &out_size);
+ if (err || out_size != sizeof(port_mac_cmd) ||
+- (port_mac_cmd.status &&
+- port_mac_cmd.status != HINIC_PF_SET_VF_ALREADY &&
+- port_mac_cmd.status != HINIC_MGMT_STATUS_EXIST)) {
+- dev_err(&pdev->dev, "Failed to change MAC, ret = %d\n",
+- port_mac_cmd.status);
++ (port_mac_cmd.status &&
++ (port_mac_cmd.status != HINIC_PF_SET_VF_ALREADY || !HINIC_IS_VF(hwif)) &&
++ port_mac_cmd.status != HINIC_MGMT_STATUS_EXIST)) {
++ dev_err(&pdev->dev, "Failed to change MAC, err: %d, status: 0x%x, out size: 0x%x\n",
++ err, port_mac_cmd.status, out_size);
+ return -EFAULT;
+ }
+
+@@ -129,8 +129,8 @@ int hinic_port_get_mac(struct hinic_dev *nic_dev, u8 *addr)
+ &port_mac_cmd, sizeof(port_mac_cmd),
+ &port_mac_cmd, &out_size);
+ if (err || (out_size != sizeof(port_mac_cmd)) || port_mac_cmd.status) {
+- dev_err(&pdev->dev, "Failed to get mac, ret = %d\n",
+- port_mac_cmd.status);
++ dev_err(&pdev->dev, "Failed to get mac, err: %d, status: 0x%x, out size: 0x%x\n",
++ err, port_mac_cmd.status, out_size);
+ return -EFAULT;
+ }
+
+@@ -172,9 +172,9 @@ int hinic_port_set_mtu(struct hinic_dev *nic_dev, int new_mtu)
+ err = hinic_port_msg_cmd(hwdev, HINIC_PORT_CMD_CHANGE_MTU,
+ &port_mtu_cmd, sizeof(port_mtu_cmd),
+ &port_mtu_cmd, &out_size);
+- if (err || (out_size != sizeof(port_mtu_cmd)) || port_mtu_cmd.status) {
+- dev_err(&pdev->dev, "Failed to set mtu, ret = %d\n",
+- port_mtu_cmd.status);
++ if (err || out_size != sizeof(port_mtu_cmd) || port_mtu_cmd.status) {
++ dev_err(&pdev->dev, "Failed to set mtu, err: %d, status: 0x%x, out size: 0x%x\n",
++ err, port_mtu_cmd.status, out_size);
+ return -EFAULT;
+ }
+
+@@ -264,8 +264,8 @@ int hinic_port_link_state(struct hinic_dev *nic_dev,
+ &link_cmd, sizeof(link_cmd),
+ &link_cmd, &out_size);
+ if (err || (out_size != sizeof(link_cmd)) || link_cmd.status) {
+- dev_err(&pdev->dev, "Failed to get link state, ret = %d\n",
+- link_cmd.status);
++ dev_err(&pdev->dev, "Failed to get link state, err: %d, status: 0x%x, out size: 0x%x\n",
++ err, link_cmd.status, out_size);
+ return -EINVAL;
+ }
+
+@@ -298,8 +298,8 @@ int hinic_port_set_state(struct hinic_dev *nic_dev, enum hinic_port_state state)
+ &port_state, sizeof(port_state),
+ &port_state, &out_size);
+ if (err || (out_size != sizeof(port_state)) || port_state.status) {
+- dev_err(&pdev->dev, "Failed to set port state, ret = %d\n",
+- port_state.status);
++ dev_err(&pdev->dev, "Failed to set port state, err: %d, status: 0x%x, out size: 0x%x\n",
++ err, port_state.status, out_size);
+ return -EFAULT;
+ }
+
+@@ -330,8 +330,8 @@ int hinic_port_set_func_state(struct hinic_dev *nic_dev,
+ &func_state, sizeof(func_state),
+ &func_state, &out_size);
+ if (err || (out_size != sizeof(func_state)) || func_state.status) {
+- dev_err(&pdev->dev, "Failed to set port func state, ret = %d\n",
+- func_state.status);
++ dev_err(&pdev->dev, "Failed to set port func state, err: %d, status: 0x%x, out size: 0x%x\n",
++ err, func_state.status, out_size);
+ return -EFAULT;
+ }
+
+@@ -361,9 +361,9 @@ int hinic_port_get_cap(struct hinic_dev *nic_dev,
+ port_cap, &out_size);
+ if (err || (out_size != sizeof(*port_cap)) || port_cap->status) {
+ dev_err(&pdev->dev,
+- "Failed to get port capabilities, ret = %d\n",
+- port_cap->status);
+- return -EINVAL;
++ "Failed to get port capabilities, err: %d, status: 0x%x, out size: 0x%x\n",
++ err, port_cap->status, out_size);
++ return -EIO;
+ }
+
+ return 0;
+@@ -393,9 +393,9 @@ int hinic_port_set_tso(struct hinic_dev *nic_dev, enum hinic_tso_state state)
+ &tso_cfg, &out_size);
+ if (err || out_size != sizeof(tso_cfg) || tso_cfg.status) {
+ dev_err(&pdev->dev,
+- "Failed to set port tso, ret = %d\n",
+- tso_cfg.status);
+- return -EINVAL;
++ "Failed to set port tso, err: %d, status: 0x%x, out size: 0x%x\n",
++ err, tso_cfg.status, out_size);
++ return -EIO;
+ }
+
+ return 0;
+@@ -423,9 +423,9 @@ int hinic_set_rx_csum_offload(struct hinic_dev *nic_dev, u32 en)
+ &rx_csum_cfg, &out_size);
+ if (err || !out_size || rx_csum_cfg.status) {
+ dev_err(&pdev->dev,
+- "Failed to set rx csum offload, ret = %d\n",
+- rx_csum_cfg.status);
+- return -EINVAL;
++ "Failed to set rx csum offload, err: %d, status: 0x%x, out size: 0x%x\n",
++ err, rx_csum_cfg.status, out_size);
++ return -EIO;
+ }
+
+ return 0;
+@@ -480,9 +480,9 @@ int hinic_set_max_qnum(struct hinic_dev *nic_dev, u8 num_rqs)
+ &rq_num, &out_size);
+ if (err || !out_size || rq_num.status) {
+ dev_err(&pdev->dev,
+- "Failed to rxq number, ret = %d\n",
+- rq_num.status);
+- return -EINVAL;
++ "Failed to set rxq number, err: %d, status: 0x%x, out size: 0x%x\n",
++ err, rq_num.status, out_size);
++ return -EIO;
+ }
+
+ return 0;
+@@ -508,9 +508,9 @@ static int hinic_set_rx_lro(struct hinic_dev *nic_dev, u8 ipv4_en, u8 ipv6_en,
+ &lro_cfg, &out_size);
+ if (err || !out_size || lro_cfg.status) {
+ dev_err(&pdev->dev,
+- "Failed to set lro offload, ret = %d\n",
+- lro_cfg.status);
+- return -EINVAL;
++ "Failed to set lro offload, err: %d, status: 0x%x, out size: 0x%x\n",
++ err, lro_cfg.status, out_size);
++ return -EIO;
+ }
+
+ return 0;
+@@ -542,10 +542,10 @@ static int hinic_set_rx_lro_timer(struct hinic_dev *nic_dev, u32 timer_value)
+
+ if (err || !out_size || lro_timer.status) {
+ dev_err(&pdev->dev,
+- "Failed to set lro timer, ret = %d\n",
+- lro_timer.status);
++ "Failed to set lro timer, err: %d, status: 0x%x, out size: 0x%x\n",
++ err, lro_timer.status, out_size);
+
+- return -EINVAL;
++ return -EIO;
+ }
+
+ return 0;
+diff --git a/drivers/net/ethernet/huawei/hinic/hinic_sriov.c b/drivers/net/ethernet/huawei/hinic/hinic_sriov.c
+index efab2dd2c889b..b757f7057b8fe 100644
+--- a/drivers/net/ethernet/huawei/hinic/hinic_sriov.c
++++ b/drivers/net/ethernet/huawei/hinic/hinic_sriov.c
+@@ -38,11 +38,10 @@ static int hinic_set_mac(struct hinic_hwdev *hwdev, const u8 *mac_addr,
+ err = hinic_port_msg_cmd(hwdev, HINIC_PORT_CMD_SET_MAC, &mac_info,
+ sizeof(mac_info), &mac_info, &out_size);
+ if (err || out_size != sizeof(mac_info) ||
+- (mac_info.status && mac_info.status != HINIC_PF_SET_VF_ALREADY &&
+- mac_info.status != HINIC_MGMT_STATUS_EXIST)) {
+- dev_err(&hwdev->func_to_io.hwif->pdev->dev, "Failed to change MAC, ret = %d\n",
+- mac_info.status);
+- return -EFAULT;
++ (mac_info.status && mac_info.status != HINIC_MGMT_STATUS_EXIST)) {
++ dev_err(&hwdev->func_to_io.hwif->pdev->dev, "Failed to set MAC, err: %d, status: 0x%x, out size: 0x%x\n",
++ err, mac_info.status, out_size);
++ return -EIO;
+ }
+
+ return 0;
+@@ -452,8 +451,7 @@ struct hinic_sriov_info *hinic_get_sriov_info_by_pcidev(struct pci_dev *pdev)
+
+ static int hinic_check_mac_info(u8 status, u16 vlan_id)
+ {
+- if ((status && status != HINIC_MGMT_STATUS_EXIST &&
+- status != HINIC_PF_SET_VF_ALREADY) ||
++ if ((status && status != HINIC_MGMT_STATUS_EXIST) ||
+ (vlan_id & CHECK_IPSU_15BIT &&
+ status == HINIC_MGMT_STATUS_EXIST))
+ return -EINVAL;
+@@ -495,12 +493,6 @@ static int hinic_update_mac(struct hinic_hwdev *hwdev, u8 *old_mac,
+ return -EINVAL;
+ }
+
+- if (mac_info.status == HINIC_PF_SET_VF_ALREADY) {
+- dev_warn(&hwdev->hwif->pdev->dev,
+- "PF has already set VF MAC. Ignore update operation\n");
+- return HINIC_PF_SET_VF_ALREADY;
+- }
+-
+ if (mac_info.status == HINIC_MGMT_STATUS_EXIST)
+ dev_warn(&hwdev->hwif->pdev->dev, "MAC is repeated. Ignore update operation\n");
+
+diff --git a/drivers/net/ethernet/intel/iavf/iavf_main.c b/drivers/net/ethernet/intel/iavf/iavf_main.c
+index d338efe5f3f55..91343e2d3a145 100644
+--- a/drivers/net/ethernet/intel/iavf/iavf_main.c
++++ b/drivers/net/ethernet/intel/iavf/iavf_main.c
+@@ -3777,7 +3777,6 @@ err_dma:
+ return err;
+ }
+
+-#ifdef CONFIG_PM
+ /**
+ * iavf_suspend - Power management suspend routine
+ * @pdev: PCI device information struct
+@@ -3785,11 +3784,10 @@ err_dma:
+ *
+ * Called when the system (VM) is entering sleep/suspend.
+ **/
+-static int iavf_suspend(struct pci_dev *pdev, pm_message_t state)
++static int __maybe_unused iavf_suspend(struct device *dev_d)
+ {
+- struct net_device *netdev = pci_get_drvdata(pdev);
++ struct net_device *netdev = dev_get_drvdata(dev_d);
+ struct iavf_adapter *adapter = netdev_priv(netdev);
+- int retval = 0;
+
+ netif_device_detach(netdev);
+
+@@ -3807,12 +3805,6 @@ static int iavf_suspend(struct pci_dev *pdev, pm_message_t state)
+
+ clear_bit(__IAVF_IN_CRITICAL_TASK, &adapter->crit_section);
+
+- retval = pci_save_state(pdev);
+- if (retval)
+- return retval;
+-
+- pci_disable_device(pdev);
+-
+ return 0;
+ }
+
+@@ -3822,24 +3814,13 @@ static int iavf_suspend(struct pci_dev *pdev, pm_message_t state)
+ *
+ * Called when the system (VM) is resumed from sleep/suspend.
+ **/
+-static int iavf_resume(struct pci_dev *pdev)
++static int __maybe_unused iavf_resume(struct device *dev_d)
+ {
+- struct iavf_adapter *adapter = pci_get_drvdata(pdev);
+- struct net_device *netdev = adapter->netdev;
++ struct pci_dev *pdev = to_pci_dev(dev_d);
++ struct net_device *netdev = pci_get_drvdata(pdev);
++ struct iavf_adapter *adapter = netdev_priv(netdev);
+ u32 err;
+
+- pci_set_power_state(pdev, PCI_D0);
+- pci_restore_state(pdev);
+- /* pci_restore_state clears dev->state_saved so call
+- * pci_save_state to restore it.
+- */
+- pci_save_state(pdev);
+-
+- err = pci_enable_device_mem(pdev);
+- if (err) {
+- dev_err(&pdev->dev, "Cannot enable PCI device from suspend.\n");
+- return err;
+- }
+ pci_set_master(pdev);
+
+ rtnl_lock();
+@@ -3863,7 +3844,6 @@ static int iavf_resume(struct pci_dev *pdev)
+ return err;
+ }
+
+-#endif /* CONFIG_PM */
+ /**
+ * iavf_remove - Device Removal Routine
+ * @pdev: PCI device information struct
+@@ -3965,16 +3945,15 @@ static void iavf_remove(struct pci_dev *pdev)
+ pci_disable_device(pdev);
+ }
+
++static SIMPLE_DEV_PM_OPS(iavf_pm_ops, iavf_suspend, iavf_resume);
++
+ static struct pci_driver iavf_driver = {
+- .name = iavf_driver_name,
+- .id_table = iavf_pci_tbl,
+- .probe = iavf_probe,
+- .remove = iavf_remove,
+-#ifdef CONFIG_PM
+- .suspend = iavf_suspend,
+- .resume = iavf_resume,
+-#endif
+- .shutdown = iavf_shutdown,
++ .name = iavf_driver_name,
++ .id_table = iavf_pci_tbl,
++ .probe = iavf_probe,
++ .remove = iavf_remove,
++ .driver.pm = &iavf_pm_ops,
++ .shutdown = iavf_shutdown,
+ };
+
+ /**
+diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c
+index 2e3a39cea2c03..4c5845a0965a9 100644
+--- a/drivers/net/ethernet/intel/ice/ice_lib.c
++++ b/drivers/net/ethernet/intel/ice/ice_lib.c
+@@ -240,7 +240,7 @@ static int ice_get_free_slot(void *array, int size, int curr)
+ * ice_vsi_delete - delete a VSI from the switch
+ * @vsi: pointer to VSI being removed
+ */
+-void ice_vsi_delete(struct ice_vsi *vsi)
++static void ice_vsi_delete(struct ice_vsi *vsi)
+ {
+ struct ice_pf *pf = vsi->back;
+ struct ice_vsi_ctx *ctxt;
+@@ -307,7 +307,7 @@ static void ice_vsi_free_arrays(struct ice_vsi *vsi)
+ *
+ * Returns 0 on success, negative on failure
+ */
+-int ice_vsi_clear(struct ice_vsi *vsi)
++static int ice_vsi_clear(struct ice_vsi *vsi)
+ {
+ struct ice_pf *pf = NULL;
+ struct device *dev;
+@@ -557,7 +557,7 @@ static int ice_vsi_get_qs(struct ice_vsi *vsi)
+ * ice_vsi_put_qs - Release queues from VSI to PF
+ * @vsi: the VSI that is going to release queues
+ */
+-void ice_vsi_put_qs(struct ice_vsi *vsi)
++static void ice_vsi_put_qs(struct ice_vsi *vsi)
+ {
+ struct ice_pf *pf = vsi->back;
+ int i;
+@@ -1190,6 +1190,18 @@ static void ice_vsi_clear_rings(struct ice_vsi *vsi)
+ {
+ int i;
+
++ /* Avoid stale references by clearing map from vector to ring */
++ if (vsi->q_vectors) {
++ ice_for_each_q_vector(vsi, i) {
++ struct ice_q_vector *q_vector = vsi->q_vectors[i];
++
++ if (q_vector) {
++ q_vector->tx.ring = NULL;
++ q_vector->rx.ring = NULL;
++ }
++ }
++ }
++
+ if (vsi->tx_rings) {
+ for (i = 0; i < vsi->alloc_txq; i++) {
+ if (vsi->tx_rings[i]) {
+@@ -2254,7 +2266,7 @@ ice_vsi_setup(struct ice_pf *pf, struct ice_port_info *pi,
+ if (status) {
+ dev_err(dev, "VSI %d failed lan queue config, error %s\n",
+ vsi->vsi_num, ice_stat_str(status));
+- goto unroll_vector_base;
++ goto unroll_clear_rings;
+ }
+
+ /* Add switch rule to drop all Tx Flow Control Frames, of look up
+diff --git a/drivers/net/ethernet/intel/ice/ice_lib.h b/drivers/net/ethernet/intel/ice/ice_lib.h
+index d80e6afa45112..2954b30e6ec79 100644
+--- a/drivers/net/ethernet/intel/ice/ice_lib.h
++++ b/drivers/net/ethernet/intel/ice/ice_lib.h
+@@ -43,10 +43,6 @@ int ice_cfg_vlan_pruning(struct ice_vsi *vsi, bool ena, bool vlan_promisc);
+
+ void ice_cfg_sw_lldp(struct ice_vsi *vsi, bool tx, bool create);
+
+-void ice_vsi_delete(struct ice_vsi *vsi);
+-
+-int ice_vsi_clear(struct ice_vsi *vsi);
+-
+ #ifdef CONFIG_DCB
+ int ice_vsi_cfg_tc(struct ice_vsi *vsi, u8 ena_tc);
+ #endif /* CONFIG_DCB */
+@@ -77,8 +73,6 @@ bool ice_is_reset_in_progress(unsigned long *state);
+ void
+ ice_write_qrxflxp_cntxt(struct ice_hw *hw, u16 pf_q, u32 rxdid, u32 prio);
+
+-void ice_vsi_put_qs(struct ice_vsi *vsi);
+-
+ void ice_vsi_dis_irq(struct ice_vsi *vsi);
+
+ void ice_vsi_free_irq(struct ice_vsi *vsi);
+diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
+index 4cbd49c87568a..4b52f1dea7f3a 100644
+--- a/drivers/net/ethernet/intel/ice/ice_main.c
++++ b/drivers/net/ethernet/intel/ice/ice_main.c
+@@ -2605,10 +2605,8 @@ static int ice_setup_pf_sw(struct ice_pf *pf)
+ return -EBUSY;
+
+ vsi = ice_pf_vsi_setup(pf, pf->hw.port_info);
+- if (!vsi) {
+- status = -ENOMEM;
+- goto unroll_vsi_setup;
+- }
++ if (!vsi)
++ return -ENOMEM;
+
+ status = ice_cfg_netdev(vsi);
+ if (status) {
+@@ -2655,12 +2653,7 @@ unroll_napi_add:
+ }
+
+ unroll_vsi_setup:
+- if (vsi) {
+- ice_vsi_free_q_vectors(vsi);
+- ice_vsi_delete(vsi);
+- ice_vsi_put_qs(vsi);
+- ice_vsi_clear(vsi);
+- }
++ ice_vsi_release(vsi);
+ return status;
+ }
+
+diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
+index 7d5d9d34f4e47..69a234e83b8b7 100644
+--- a/drivers/net/ethernet/marvell/mvneta.c
++++ b/drivers/net/ethernet/marvell/mvneta.c
+@@ -3372,24 +3372,15 @@ static int mvneta_txq_sw_init(struct mvneta_port *pp,
+ txq->last_desc = txq->size - 1;
+
+ txq->buf = kmalloc_array(txq->size, sizeof(*txq->buf), GFP_KERNEL);
+- if (!txq->buf) {
+- dma_free_coherent(pp->dev->dev.parent,
+- txq->size * MVNETA_DESC_ALIGNED_SIZE,
+- txq->descs, txq->descs_phys);
++ if (!txq->buf)
+ return -ENOMEM;
+- }
+
+ /* Allocate DMA buffers for TSO MAC/IP/TCP headers */
+ txq->tso_hdrs = dma_alloc_coherent(pp->dev->dev.parent,
+ txq->size * TSO_HEADER_SIZE,
+ &txq->tso_hdrs_phys, GFP_KERNEL);
+- if (!txq->tso_hdrs) {
+- kfree(txq->buf);
+- dma_free_coherent(pp->dev->dev.parent,
+- txq->size * MVNETA_DESC_ALIGNED_SIZE,
+- txq->descs, txq->descs_phys);
++ if (!txq->tso_hdrs)
+ return -ENOMEM;
+- }
+
+ /* Setup XPS mapping */
+ if (txq_number > 1)
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mbox.c b/drivers/net/ethernet/marvell/octeontx2/af/mbox.c
+index 387e33fa417aa..2718fe201c147 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/mbox.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/mbox.c
+@@ -17,7 +17,7 @@
+
+ static const u16 msgs_offset = ALIGN(sizeof(struct mbox_hdr), MBOX_MSG_ALIGN);
+
+-void otx2_mbox_reset(struct otx2_mbox *mbox, int devid)
++void __otx2_mbox_reset(struct otx2_mbox *mbox, int devid)
+ {
+ void *hw_mbase = mbox->hwbase + (devid * MBOX_SIZE);
+ struct otx2_mbox_dev *mdev = &mbox->dev[devid];
+@@ -26,13 +26,21 @@ void otx2_mbox_reset(struct otx2_mbox *mbox, int devid)
+ tx_hdr = hw_mbase + mbox->tx_start;
+ rx_hdr = hw_mbase + mbox->rx_start;
+
+- spin_lock(&mdev->mbox_lock);
+ mdev->msg_size = 0;
+ mdev->rsp_size = 0;
+ tx_hdr->num_msgs = 0;
+ tx_hdr->msg_size = 0;
+ rx_hdr->num_msgs = 0;
+ rx_hdr->msg_size = 0;
++}
++EXPORT_SYMBOL(__otx2_mbox_reset);
++
++void otx2_mbox_reset(struct otx2_mbox *mbox, int devid)
++{
++ struct otx2_mbox_dev *mdev = &mbox->dev[devid];
++
++ spin_lock(&mdev->mbox_lock);
++ __otx2_mbox_reset(mbox, devid);
+ spin_unlock(&mdev->mbox_lock);
+ }
+ EXPORT_SYMBOL(otx2_mbox_reset);
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
+index 6dfd0f90cd704..ab433789d2c31 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
++++ b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
+@@ -93,6 +93,7 @@ struct mbox_msghdr {
+ };
+
+ void otx2_mbox_reset(struct otx2_mbox *mbox, int devid);
++void __otx2_mbox_reset(struct otx2_mbox *mbox, int devid);
+ void otx2_mbox_destroy(struct otx2_mbox *mbox);
+ int otx2_mbox_init(struct otx2_mbox *mbox, void __force *hwbase,
+ struct pci_dev *pdev, void __force *reg_base,
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
+index dcf25a0920084..b89dde2c8b089 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
+@@ -463,6 +463,7 @@ void rvu_nix_freemem(struct rvu *rvu);
+ int rvu_get_nixlf_count(struct rvu *rvu);
+ void rvu_nix_lf_teardown(struct rvu *rvu, u16 pcifunc, int blkaddr, int npalf);
+ int nix_get_nixlf(struct rvu *rvu, u16 pcifunc, int *nixlf, int *nix_blkaddr);
++int nix_update_bcast_mce_list(struct rvu *rvu, u16 pcifunc, bool add);
+
+ /* NPC APIs */
+ int rvu_npc_init(struct rvu *rvu);
+@@ -477,7 +478,7 @@ void rvu_npc_disable_promisc_entry(struct rvu *rvu, u16 pcifunc, int nixlf);
+ void rvu_npc_enable_promisc_entry(struct rvu *rvu, u16 pcifunc, int nixlf);
+ void rvu_npc_install_bcast_match_entry(struct rvu *rvu, u16 pcifunc,
+ int nixlf, u64 chan);
+-void rvu_npc_disable_bcast_entry(struct rvu *rvu, u16 pcifunc);
++void rvu_npc_enable_bcast_entry(struct rvu *rvu, u16 pcifunc, bool enable);
+ int rvu_npc_update_rxvlan(struct rvu *rvu, u16 pcifunc, int nixlf);
+ void rvu_npc_disable_mcam_entries(struct rvu *rvu, u16 pcifunc, int nixlf);
+ void rvu_npc_disable_default_entries(struct rvu *rvu, u16 pcifunc, int nixlf);
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
+index 36953d4f51c73..3495b3a6828c0 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
+@@ -17,7 +17,6 @@
+ #include "npc.h"
+ #include "cgx.h"
+
+-static int nix_update_bcast_mce_list(struct rvu *rvu, u16 pcifunc, bool add);
+ static int rvu_nix_get_bpid(struct rvu *rvu, struct nix_bp_cfg_req *req,
+ int type, int chan_id);
+
+@@ -2020,7 +2019,7 @@ static int nix_update_mce_list(struct nix_mce_list *mce_list,
+ return 0;
+ }
+
+-static int nix_update_bcast_mce_list(struct rvu *rvu, u16 pcifunc, bool add)
++int nix_update_bcast_mce_list(struct rvu *rvu, u16 pcifunc, bool add)
+ {
+ int err = 0, idx, next_idx, last_idx;
+ struct nix_mce_list *mce_list;
+@@ -2065,7 +2064,7 @@ static int nix_update_bcast_mce_list(struct rvu *rvu, u16 pcifunc, bool add)
+
+ /* Disable MCAM entry in NPC */
+ if (!mce_list->count) {
+- rvu_npc_disable_bcast_entry(rvu, pcifunc);
++ rvu_npc_enable_bcast_entry(rvu, pcifunc, false);
+ goto end;
+ }
+
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc.c
+index 0a214084406a6..fbaf9bcd83f2f 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc.c
+@@ -530,7 +530,7 @@ void rvu_npc_install_bcast_match_entry(struct rvu *rvu, u16 pcifunc,
+ NIX_INTF_RX, &entry, true);
+ }
+
+-void rvu_npc_disable_bcast_entry(struct rvu *rvu, u16 pcifunc)
++void rvu_npc_enable_bcast_entry(struct rvu *rvu, u16 pcifunc, bool enable)
+ {
+ struct npc_mcam *mcam = &rvu->hw->mcam;
+ int blkaddr, index;
+@@ -543,7 +543,7 @@ void rvu_npc_disable_bcast_entry(struct rvu *rvu, u16 pcifunc)
+ pcifunc = pcifunc & ~RVU_PFVF_FUNC_MASK;
+
+ index = npc_get_nixlf_mcam_index(mcam, pcifunc, 0, NIXLF_BCAST_ENTRY);
+- npc_enable_mcam_entry(rvu, mcam, blkaddr, index, false);
++ npc_enable_mcam_entry(rvu, mcam, blkaddr, index, enable);
+ }
+
+ void rvu_npc_update_flowkey_alg_idx(struct rvu *rvu, u16 pcifunc, int nixlf,
+@@ -622,23 +622,35 @@ static void npc_enadis_default_entries(struct rvu *rvu, u16 pcifunc,
+ nixlf, NIXLF_UCAST_ENTRY);
+ npc_enable_mcam_entry(rvu, mcam, blkaddr, index, enable);
+
+- /* For PF, ena/dis promisc and bcast MCAM match entries */
+- if (pcifunc & RVU_PFVF_FUNC_MASK)
++ /* For PF, ena/dis promisc and bcast MCAM match entries.
++ * For VFs add/delete from bcast list when RX multicast
++ * feature is present.
++ */
++ if (pcifunc & RVU_PFVF_FUNC_MASK && !rvu->hw->cap.nix_rx_multicast)
+ return;
+
+ /* For bcast, enable/disable only if it's action is not
+ * packet replication, incase if action is replication
+- * then this PF's nixlf is removed from bcast replication
++ * then this PF/VF's nixlf is removed from bcast replication
+ * list.
+ */
+- index = npc_get_nixlf_mcam_index(mcam, pcifunc,
++ index = npc_get_nixlf_mcam_index(mcam, pcifunc & ~RVU_PFVF_FUNC_MASK,
+ nixlf, NIXLF_BCAST_ENTRY);
+ bank = npc_get_bank(mcam, index);
+ *(u64 *)&action = rvu_read64(rvu, blkaddr,
+ NPC_AF_MCAMEX_BANKX_ACTION(index & (mcam->banksize - 1), bank));
+- if (action.op != NIX_RX_ACTIONOP_MCAST)
++
++ /* VFs will not have BCAST entry */
++ if (action.op != NIX_RX_ACTIONOP_MCAST &&
++ !(pcifunc & RVU_PFVF_FUNC_MASK)) {
+ npc_enable_mcam_entry(rvu, mcam,
+ blkaddr, index, enable);
++ } else {
++ nix_update_bcast_mce_list(rvu, pcifunc, enable);
++ /* Enable PF's BCAST entry for packet replication */
++ rvu_npc_enable_bcast_entry(rvu, pcifunc, enable);
++ }
++
+ if (enable)
+ rvu_npc_enable_promisc_entry(rvu, pcifunc, nixlf);
+ else
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
+index 75a8c407e815c..2fb45670aca49 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
+@@ -370,8 +370,8 @@ static int otx2_forward_vf_mbox_msgs(struct otx2_nic *pf,
+ dst_mbox = &pf->mbox;
+ dst_size = dst_mbox->mbox.tx_size -
+ ALIGN(sizeof(*mbox_hdr), MBOX_MSG_ALIGN);
+- /* Check if msgs fit into destination area */
+- if (mbox_hdr->msg_size > dst_size)
++ /* Check if msgs fit into destination area and has valid size */
++ if (mbox_hdr->msg_size > dst_size || !mbox_hdr->msg_size)
+ return -EINVAL;
+
+ dst_mdev = &dst_mbox->mbox.dev[0];
+@@ -526,10 +526,10 @@ static void otx2_pfvf_mbox_up_handler(struct work_struct *work)
+
+ end:
+ offset = mbox->rx_start + msg->next_msgoff;
++ if (mdev->msgs_acked == (vf_mbox->up_num_msgs - 1))
++ __otx2_mbox_reset(mbox, 0);
+ mdev->msgs_acked++;
+ }
+-
+- otx2_mbox_reset(mbox, vf_idx);
+ }
+
+ static irqreturn_t otx2_pfvf_mbox_intr_handler(int irq, void *pf_irq)
+@@ -803,10 +803,11 @@ static void otx2_pfaf_mbox_handler(struct work_struct *work)
+ msg = (struct mbox_msghdr *)(mdev->mbase + offset);
+ otx2_process_pfaf_mbox_msg(pf, msg);
+ offset = mbox->rx_start + msg->next_msgoff;
++ if (mdev->msgs_acked == (af_mbox->num_msgs - 1))
++ __otx2_mbox_reset(mbox, 0);
+ mdev->msgs_acked++;
+ }
+
+- otx2_mbox_reset(mbox, 0);
+ }
+
+ static void otx2_handle_link_event(struct otx2_nic *pf)
+@@ -1560,10 +1561,13 @@ int otx2_open(struct net_device *netdev)
+
+ err = otx2_rxtx_enable(pf, true);
+ if (err)
+- goto err_free_cints;
++ goto err_tx_stop_queues;
+
+ return 0;
+
++err_tx_stop_queues:
++ netif_tx_stop_all_queues(netdev);
++ netif_carrier_off(netdev);
+ err_free_cints:
+ otx2_free_cints(pf, qidx);
+ vec = pci_irq_vector(pf->pdev,
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.c
+index b04f5429d72d9..334eab976ee4a 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.c
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.c
+@@ -524,6 +524,7 @@ static void otx2_sqe_add_hdr(struct otx2_nic *pfvf, struct otx2_snd_queue *sq,
+ sqe_hdr->ol3type = NIX_SENDL3TYPE_IP4_CKSUM;
+ } else if (skb->protocol == htons(ETH_P_IPV6)) {
+ proto = ipv6_hdr(skb)->nexthdr;
++ sqe_hdr->ol3type = NIX_SENDL3TYPE_IP6;
+ }
+
+ if (proto == IPPROTO_TCP)
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c
+index 92a3db69a6cd6..2f90f17214415 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c
+@@ -99,10 +99,10 @@ static void otx2vf_vfaf_mbox_handler(struct work_struct *work)
+ msg = (struct mbox_msghdr *)(mdev->mbase + offset);
+ otx2vf_process_vfaf_mbox_msg(af_mbox->pfvf, msg);
+ offset = mbox->rx_start + msg->next_msgoff;
++ if (mdev->msgs_acked == (af_mbox->num_msgs - 1))
++ __otx2_mbox_reset(mbox, 0);
+ mdev->msgs_acked++;
+ }
+-
+- otx2_mbox_reset(mbox, 0);
+ }
+
+ static int otx2vf_process_mbox_msg_up(struct otx2_nic *vf,
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/cmd.c b/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
+index 1d91a0d0ab1d7..2b597ac365f84 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
+@@ -69,12 +69,10 @@ enum {
+ MLX5_CMD_DELIVERY_STAT_CMD_DESCR_ERR = 0x10,
+ };
+
+-static struct mlx5_cmd_work_ent *alloc_cmd(struct mlx5_cmd *cmd,
+- struct mlx5_cmd_msg *in,
+- struct mlx5_cmd_msg *out,
+- void *uout, int uout_size,
+- mlx5_cmd_cbk_t cbk,
+- void *context, int page_queue)
++static struct mlx5_cmd_work_ent *
++cmd_alloc_ent(struct mlx5_cmd *cmd, struct mlx5_cmd_msg *in,
++ struct mlx5_cmd_msg *out, void *uout, int uout_size,
++ mlx5_cmd_cbk_t cbk, void *context, int page_queue)
+ {
+ gfp_t alloc_flags = cbk ? GFP_ATOMIC : GFP_KERNEL;
+ struct mlx5_cmd_work_ent *ent;
+@@ -83,6 +81,7 @@ static struct mlx5_cmd_work_ent *alloc_cmd(struct mlx5_cmd *cmd,
+ if (!ent)
+ return ERR_PTR(-ENOMEM);
+
++ ent->idx = -EINVAL;
+ ent->in = in;
+ ent->out = out;
+ ent->uout = uout;
+@@ -91,10 +90,16 @@ static struct mlx5_cmd_work_ent *alloc_cmd(struct mlx5_cmd *cmd,
+ ent->context = context;
+ ent->cmd = cmd;
+ ent->page_queue = page_queue;
++ refcount_set(&ent->refcnt, 1);
+
+ return ent;
+ }
+
++static void cmd_free_ent(struct mlx5_cmd_work_ent *ent)
++{
++ kfree(ent);
++}
++
+ static u8 alloc_token(struct mlx5_cmd *cmd)
+ {
+ u8 token;
+@@ -109,7 +114,7 @@ static u8 alloc_token(struct mlx5_cmd *cmd)
+ return token;
+ }
+
+-static int alloc_ent(struct mlx5_cmd *cmd)
++static int cmd_alloc_index(struct mlx5_cmd *cmd)
+ {
+ unsigned long flags;
+ int ret;
+@@ -123,7 +128,7 @@ static int alloc_ent(struct mlx5_cmd *cmd)
+ return ret < cmd->max_reg_cmds ? ret : -ENOMEM;
+ }
+
+-static void free_ent(struct mlx5_cmd *cmd, int idx)
++static void cmd_free_index(struct mlx5_cmd *cmd, int idx)
+ {
+ unsigned long flags;
+
+@@ -132,6 +137,22 @@ static void free_ent(struct mlx5_cmd *cmd, int idx)
+ spin_unlock_irqrestore(&cmd->alloc_lock, flags);
+ }
+
++static void cmd_ent_get(struct mlx5_cmd_work_ent *ent)
++{
++ refcount_inc(&ent->refcnt);
++}
++
++static void cmd_ent_put(struct mlx5_cmd_work_ent *ent)
++{
++ if (!refcount_dec_and_test(&ent->refcnt))
++ return;
++
++ if (ent->idx >= 0)
++ cmd_free_index(ent->cmd, ent->idx);
++
++ cmd_free_ent(ent);
++}
++
+ static struct mlx5_cmd_layout *get_inst(struct mlx5_cmd *cmd, int idx)
+ {
+ return cmd->cmd_buf + (idx << cmd->log_stride);
+@@ -219,11 +240,6 @@ static void poll_timeout(struct mlx5_cmd_work_ent *ent)
+ ent->ret = -ETIMEDOUT;
+ }
+
+-static void free_cmd(struct mlx5_cmd_work_ent *ent)
+-{
+- kfree(ent);
+-}
+-
+ static int verify_signature(struct mlx5_cmd_work_ent *ent)
+ {
+ struct mlx5_cmd_mailbox *next = ent->out->next;
+@@ -837,11 +853,22 @@ static void cb_timeout_handler(struct work_struct *work)
+ struct mlx5_core_dev *dev = container_of(ent->cmd, struct mlx5_core_dev,
+ cmd);
+
++ mlx5_cmd_eq_recover(dev);
++
++ /* Maybe got handled by eq recover ? */
++ if (!test_bit(MLX5_CMD_ENT_STATE_PENDING_COMP, &ent->state)) {
++ mlx5_core_warn(dev, "cmd[%d]: %s(0x%x) Async, recovered after timeout\n", ent->idx,
++ mlx5_command_str(msg_to_opcode(ent->in)), msg_to_opcode(ent->in));
++ goto out; /* phew, already handled */
++ }
++
+ ent->ret = -ETIMEDOUT;
+- mlx5_core_warn(dev, "%s(0x%x) timeout. Will cause a leak of a command resource\n",
+- mlx5_command_str(msg_to_opcode(ent->in)),
+- msg_to_opcode(ent->in));
++ mlx5_core_warn(dev, "cmd[%d]: %s(0x%x) Async, timeout. Will cause a leak of a command resource\n",
++ ent->idx, mlx5_command_str(msg_to_opcode(ent->in)), msg_to_opcode(ent->in));
+ mlx5_cmd_comp_handler(dev, 1UL << ent->idx, true);
++
++out:
++ cmd_ent_put(ent); /* for the cmd_ent_get() took on schedule delayed work */
+ }
+
+ static void free_msg(struct mlx5_core_dev *dev, struct mlx5_cmd_msg *msg);
+@@ -856,6 +883,25 @@ static bool opcode_allowed(struct mlx5_cmd *cmd, u16 opcode)
+ return cmd->allowed_opcode == opcode;
+ }
+
++static int cmd_alloc_index_retry(struct mlx5_cmd *cmd)
++{
++ unsigned long alloc_end = jiffies + msecs_to_jiffies(1000);
++ int idx;
++
++retry:
++ idx = cmd_alloc_index(cmd);
++ if (idx < 0 && time_before(jiffies, alloc_end)) {
++ /* Index allocation can fail on heavy load of commands. This is a temporary
++ * situation as the current command already holds the semaphore, meaning that
++ * another command completion is being handled and it is expected to release
++ * the entry index soon.
++ */
++ cpu_relax();
++ goto retry;
++ }
++ return idx;
++}
++
+ static void cmd_work_handler(struct work_struct *work)
+ {
+ struct mlx5_cmd_work_ent *ent = container_of(work, struct mlx5_cmd_work_ent, work);
+@@ -873,14 +919,14 @@ static void cmd_work_handler(struct work_struct *work)
+ sem = ent->page_queue ? &cmd->pages_sem : &cmd->sem;
+ down(sem);
+ if (!ent->page_queue) {
+- alloc_ret = alloc_ent(cmd);
++ alloc_ret = cmd_alloc_index_retry(cmd);
+ if (alloc_ret < 0) {
+ mlx5_core_err_rl(dev, "failed to allocate command entry\n");
+ if (ent->callback) {
+ ent->callback(-EAGAIN, ent->context);
+ mlx5_free_cmd_msg(dev, ent->out);
+ free_msg(dev, ent->in);
+- free_cmd(ent);
++ cmd_ent_put(ent);
+ } else {
+ ent->ret = -EAGAIN;
+ complete(&ent->done);
+@@ -916,8 +962,8 @@ static void cmd_work_handler(struct work_struct *work)
+ ent->ts1 = ktime_get_ns();
+ cmd_mode = cmd->mode;
+
+- if (ent->callback)
+- schedule_delayed_work(&ent->cb_timeout_work, cb_timeout);
++ if (ent->callback && schedule_delayed_work(&ent->cb_timeout_work, cb_timeout))
++ cmd_ent_get(ent);
+ set_bit(MLX5_CMD_ENT_STATE_PENDING_COMP, &ent->state);
+
+ /* Skip sending command to fw if internal error */
+@@ -933,13 +979,10 @@ static void cmd_work_handler(struct work_struct *work)
+ MLX5_SET(mbox_out, ent->out, syndrome, drv_synd);
+
+ mlx5_cmd_comp_handler(dev, 1UL << ent->idx, true);
+- /* no doorbell, no need to keep the entry */
+- free_ent(cmd, ent->idx);
+- if (ent->callback)
+- free_cmd(ent);
+ return;
+ }
+
++ cmd_ent_get(ent); /* for the _real_ FW event on completion */
+ /* ring doorbell after the descriptor is valid */
+ mlx5_core_dbg(dev, "writing 0x%x to command doorbell\n", 1 << ent->idx);
+ wmb();
+@@ -983,6 +1026,35 @@ static const char *deliv_status_to_str(u8 status)
+ }
+ }
+
++enum {
++ MLX5_CMD_TIMEOUT_RECOVER_MSEC = 5 * 1000,
++};
++
++static void wait_func_handle_exec_timeout(struct mlx5_core_dev *dev,
++ struct mlx5_cmd_work_ent *ent)
++{
++ unsigned long timeout = msecs_to_jiffies(MLX5_CMD_TIMEOUT_RECOVER_MSEC);
++
++ mlx5_cmd_eq_recover(dev);
++
++ /* Re-wait on the ent->done after executing the recovery flow. If the
++ * recovery flow (or any other recovery flow running simultaneously)
++ * has recovered an EQE, it should cause the entry to be completed by
++ * the command interface.
++ */
++ if (wait_for_completion_timeout(&ent->done, timeout)) {
++ mlx5_core_warn(dev, "cmd[%d]: %s(0x%x) recovered after timeout\n", ent->idx,
++ mlx5_command_str(msg_to_opcode(ent->in)), msg_to_opcode(ent->in));
++ return;
++ }
++
++ mlx5_core_warn(dev, "cmd[%d]: %s(0x%x) No done completion\n", ent->idx,
++ mlx5_command_str(msg_to_opcode(ent->in)), msg_to_opcode(ent->in));
++
++ ent->ret = -ETIMEDOUT;
++ mlx5_cmd_comp_handler(dev, 1UL << ent->idx, true);
++}
++
+ static int wait_func(struct mlx5_core_dev *dev, struct mlx5_cmd_work_ent *ent)
+ {
+ unsigned long timeout = msecs_to_jiffies(MLX5_CMD_TIMEOUT_MSEC);
+@@ -994,12 +1066,10 @@ static int wait_func(struct mlx5_core_dev *dev, struct mlx5_cmd_work_ent *ent)
+ ent->ret = -ECANCELED;
+ goto out_err;
+ }
+- if (cmd->mode == CMD_MODE_POLLING || ent->polling) {
++ if (cmd->mode == CMD_MODE_POLLING || ent->polling)
+ wait_for_completion(&ent->done);
+- } else if (!wait_for_completion_timeout(&ent->done, timeout)) {
+- ent->ret = -ETIMEDOUT;
+- mlx5_cmd_comp_handler(dev, 1UL << ent->idx, true);
+- }
++ else if (!wait_for_completion_timeout(&ent->done, timeout))
++ wait_func_handle_exec_timeout(dev, ent);
+
+ out_err:
+ err = ent->ret;
+@@ -1039,11 +1109,16 @@ static int mlx5_cmd_invoke(struct mlx5_core_dev *dev, struct mlx5_cmd_msg *in,
+ if (callback && page_queue)
+ return -EINVAL;
+
+- ent = alloc_cmd(cmd, in, out, uout, uout_size, callback, context,
+- page_queue);
++ ent = cmd_alloc_ent(cmd, in, out, uout, uout_size,
++ callback, context, page_queue);
+ if (IS_ERR(ent))
+ return PTR_ERR(ent);
+
++ /* put for this ent is when consumed, depending on the use case
++ * 1) (!callback) blocking flow: by caller after wait_func completes
++ * 2) (callback) flow: by mlx5_cmd_comp_handler() when ent is handled
++ */
++
+ ent->token = token;
+ ent->polling = force_polling;
+
+@@ -1062,12 +1137,10 @@ static int mlx5_cmd_invoke(struct mlx5_core_dev *dev, struct mlx5_cmd_msg *in,
+ }
+
+ if (callback)
+- goto out;
++ goto out; /* mlx5_cmd_comp_handler() will put(ent) */
+
+ err = wait_func(dev, ent);
+- if (err == -ETIMEDOUT)
+- goto out;
+- if (err == -ECANCELED)
++ if (err == -ETIMEDOUT || err == -ECANCELED)
+ goto out_free;
+
+ ds = ent->ts2 - ent->ts1;
+@@ -1085,7 +1158,7 @@ static int mlx5_cmd_invoke(struct mlx5_core_dev *dev, struct mlx5_cmd_msg *in,
+ *status = ent->status;
+
+ out_free:
+- free_cmd(ent);
++ cmd_ent_put(ent);
+ out:
+ return err;
+ }
+@@ -1516,14 +1589,19 @@ static void mlx5_cmd_comp_handler(struct mlx5_core_dev *dev, u64 vec, bool force
+ if (!forced) {
+ mlx5_core_err(dev, "Command completion arrived after timeout (entry idx = %d).\n",
+ ent->idx);
+- free_ent(cmd, ent->idx);
+- free_cmd(ent);
++ cmd_ent_put(ent);
+ }
+ continue;
+ }
+
+- if (ent->callback)
+- cancel_delayed_work(&ent->cb_timeout_work);
++ if (ent->callback && cancel_delayed_work(&ent->cb_timeout_work))
++ cmd_ent_put(ent); /* timeout work was canceled */
++
++ if (!forced || /* Real FW completion */
++ pci_channel_offline(dev->pdev) || /* FW is inaccessible */
++ dev->state == MLX5_DEVICE_STATE_INTERNAL_ERROR)
++ cmd_ent_put(ent);
++
+ if (ent->page_queue)
+ sem = &cmd->pages_sem;
+ else
+@@ -1545,10 +1623,6 @@ static void mlx5_cmd_comp_handler(struct mlx5_core_dev *dev, u64 vec, bool force
+ ent->ret, deliv_status_to_str(ent->status), ent->status);
+ }
+
+- /* only real completion will free the entry slot */
+- if (!forced)
+- free_ent(cmd, ent->idx);
+-
+ if (ent->callback) {
+ ds = ent->ts2 - ent->ts1;
+ if (ent->op < MLX5_CMD_OP_MAX) {
+@@ -1576,10 +1650,13 @@ static void mlx5_cmd_comp_handler(struct mlx5_core_dev *dev, u64 vec, bool force
+ free_msg(dev, ent->in);
+
+ err = err ? err : ent->status;
+- if (!forced)
+- free_cmd(ent);
++ /* final consumer is done, release ent */
++ cmd_ent_put(ent);
+ callback(err, context);
+ } else {
++ /* release wait_func() so mlx5_cmd_invoke()
++ * can make the final ent_put()
++ */
+ complete(&ent->done);
+ }
+ up(sem);
+@@ -1589,8 +1666,11 @@ static void mlx5_cmd_comp_handler(struct mlx5_core_dev *dev, u64 vec, bool force
+
+ void mlx5_cmd_trigger_completions(struct mlx5_core_dev *dev)
+ {
++ struct mlx5_cmd *cmd = &dev->cmd;
++ unsigned long bitmask;
+ unsigned long flags;
+ u64 vector;
++ int i;
+
+ /* wait for pending handlers to complete */
+ mlx5_eq_synchronize_cmd_irq(dev);
+@@ -1599,11 +1679,20 @@ void mlx5_cmd_trigger_completions(struct mlx5_core_dev *dev)
+ if (!vector)
+ goto no_trig;
+
++ bitmask = vector;
++ /* we must increment the allocated entries refcount before triggering the completions
++ * to guarantee pending commands will not get freed in the meanwhile.
++ * For that reason, it also has to be done inside the alloc_lock.
++ */
++ for_each_set_bit(i, &bitmask, (1 << cmd->log_sz))
++ cmd_ent_get(cmd->ent_arr[i]);
+ vector |= MLX5_TRIGGERED_CMD_COMP;
+ spin_unlock_irqrestore(&dev->cmd.alloc_lock, flags);
+
+ mlx5_core_dbg(dev, "vector 0x%llx\n", vector);
+ mlx5_cmd_comp_handler(dev, vector, true);
++ for_each_set_bit(i, &bitmask, (1 << cmd->log_sz))
++ cmd_ent_put(cmd->ent_arr[i]);
+ return;
+
+ no_trig:
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en.h b/drivers/net/ethernet/mellanox/mlx5/core/en.h
+index 76b23ba7a4687..cb3857e136d62 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en.h
+@@ -90,7 +90,12 @@ struct page_pool;
+ #define MLX5_MPWRQ_PAGES_PER_WQE BIT(MLX5_MPWRQ_WQE_PAGE_ORDER)
+
+ #define MLX5_MTT_OCTW(npages) (ALIGN(npages, 8) / 2)
+-#define MLX5E_REQUIRED_WQE_MTTS (ALIGN(MLX5_MPWRQ_PAGES_PER_WQE, 8))
++/* Add another page to MLX5E_REQUIRED_WQE_MTTS as a buffer between
++ * WQEs, This page will absorb write overflow by the hardware, when
++ * receiving packets larger than MTU. These oversize packets are
++ * dropped by the driver at a later stage.
++ */
++#define MLX5E_REQUIRED_WQE_MTTS (ALIGN(MLX5_MPWRQ_PAGES_PER_WQE + 1, 8))
+ #define MLX5E_LOG_ALIGNED_MPWQE_PPW (ilog2(MLX5E_REQUIRED_WQE_MTTS))
+ #define MLX5E_REQUIRED_MTTS(wqes) (wqes * MLX5E_REQUIRED_WQE_MTTS)
+ #define MLX5E_MAX_RQ_NUM_MTTS \
+@@ -621,6 +626,7 @@ struct mlx5e_rq {
+ u32 rqn;
+ struct mlx5_core_dev *mdev;
+ struct mlx5_core_mkey umr_mkey;
++ struct mlx5e_dma_info wqe_overflow;
+
+ /* XDP read-mostly */
+ struct xdp_rxq_info xdp_rxq;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/port.c b/drivers/net/ethernet/mellanox/mlx5/core/en/port.c
+index 98e909bf3c1ec..3e32264cf6131 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/port.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/port.c
+@@ -566,6 +566,9 @@ int mlx5e_set_fec_mode(struct mlx5_core_dev *dev, u16 fec_policy)
+ if (fec_policy >= (1 << MLX5E_FEC_LLRS_272_257_1) && !fec_50g_per_lane)
+ return -EOPNOTSUPP;
+
++ if (fec_policy && !mlx5e_fec_in_caps(dev, fec_policy))
++ return -EOPNOTSUPP;
++
+ MLX5_SET(pplm_reg, in, local_port, 1);
+ err = mlx5_core_access_reg(dev, in, sz, out, sz, MLX5_REG_PPLM, 0, 0);
+ if (err)
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/rep/neigh.c b/drivers/net/ethernet/mellanox/mlx5/core/en/rep/neigh.c
+index c3d167fa944c7..6a9d783d129b2 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/rep/neigh.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/rep/neigh.c
+@@ -109,11 +109,25 @@ static void mlx5e_rep_neigh_stats_work(struct work_struct *work)
+ rtnl_unlock();
+ }
+
++struct neigh_update_work {
++ struct work_struct work;
++ struct neighbour *n;
++ struct mlx5e_neigh_hash_entry *nhe;
++};
++
++static void mlx5e_release_neigh_update_work(struct neigh_update_work *update_work)
++{
++ neigh_release(update_work->n);
++ mlx5e_rep_neigh_entry_release(update_work->nhe);
++ kfree(update_work);
++}
++
+ static void mlx5e_rep_neigh_update(struct work_struct *work)
+ {
+- struct mlx5e_neigh_hash_entry *nhe =
+- container_of(work, struct mlx5e_neigh_hash_entry, neigh_update_work);
+- struct neighbour *n = nhe->n;
++ struct neigh_update_work *update_work = container_of(work, struct neigh_update_work,
++ work);
++ struct mlx5e_neigh_hash_entry *nhe = update_work->nhe;
++ struct neighbour *n = update_work->n;
+ struct mlx5e_encap_entry *e;
+ unsigned char ha[ETH_ALEN];
+ struct mlx5e_priv *priv;
+@@ -145,30 +159,42 @@ static void mlx5e_rep_neigh_update(struct work_struct *work)
+ mlx5e_rep_update_flows(priv, e, neigh_connected, ha);
+ mlx5e_encap_put(priv, e);
+ }
+- mlx5e_rep_neigh_entry_release(nhe);
+ rtnl_unlock();
+- neigh_release(n);
++ mlx5e_release_neigh_update_work(update_work);
+ }
+
+-static void mlx5e_rep_queue_neigh_update_work(struct mlx5e_priv *priv,
+- struct mlx5e_neigh_hash_entry *nhe,
+- struct neighbour *n)
++static struct neigh_update_work *mlx5e_alloc_neigh_update_work(struct mlx5e_priv *priv,
++ struct neighbour *n)
+ {
+- /* Take a reference to ensure the neighbour and mlx5 encap
+- * entry won't be destructed until we drop the reference in
+- * delayed work.
+- */
+- neigh_hold(n);
++ struct neigh_update_work *update_work;
++ struct mlx5e_neigh_hash_entry *nhe;
++ struct mlx5e_neigh m_neigh = {};
+
+- /* This assignment is valid as long as the the neigh reference
+- * is taken
+- */
+- nhe->n = n;
++ update_work = kzalloc(sizeof(*update_work), GFP_ATOMIC);
++ if (WARN_ON(!update_work))
++ return NULL;
+
+- if (!queue_work(priv->wq, &nhe->neigh_update_work)) {
+- mlx5e_rep_neigh_entry_release(nhe);
+- neigh_release(n);
++ m_neigh.dev = n->dev;
++ m_neigh.family = n->ops->family;
++ memcpy(&m_neigh.dst_ip, n->primary_key, n->tbl->key_len);
++
++ /* Obtain reference to nhe as last step in order not to release it in
++ * atomic context.
++ */
++ rcu_read_lock();
++ nhe = mlx5e_rep_neigh_entry_lookup(priv, &m_neigh);
++ rcu_read_unlock();
++ if (!nhe) {
++ kfree(update_work);
++ return NULL;
+ }
++
++ INIT_WORK(&update_work->work, mlx5e_rep_neigh_update);
++ neigh_hold(n);
++ update_work->n = n;
++ update_work->nhe = nhe;
++
++ return update_work;
+ }
+
+ static int mlx5e_rep_netevent_event(struct notifier_block *nb,
+@@ -180,7 +206,7 @@ static int mlx5e_rep_netevent_event(struct notifier_block *nb,
+ struct net_device *netdev = rpriv->netdev;
+ struct mlx5e_priv *priv = netdev_priv(netdev);
+ struct mlx5e_neigh_hash_entry *nhe = NULL;
+- struct mlx5e_neigh m_neigh = {};
++ struct neigh_update_work *update_work;
+ struct neigh_parms *p;
+ struct neighbour *n;
+ bool found = false;
+@@ -195,17 +221,11 @@ static int mlx5e_rep_netevent_event(struct notifier_block *nb,
+ #endif
+ return NOTIFY_DONE;
+
+- m_neigh.dev = n->dev;
+- m_neigh.family = n->ops->family;
+- memcpy(&m_neigh.dst_ip, n->primary_key, n->tbl->key_len);
+-
+- rcu_read_lock();
+- nhe = mlx5e_rep_neigh_entry_lookup(priv, &m_neigh);
+- rcu_read_unlock();
+- if (!nhe)
++ update_work = mlx5e_alloc_neigh_update_work(priv, n);
++ if (!update_work)
+ return NOTIFY_DONE;
+
+- mlx5e_rep_queue_neigh_update_work(priv, nhe, n);
++ queue_work(priv->wq, &update_work->work);
+ break;
+
+ case NETEVENT_DELAY_PROBE_TIME_UPDATE:
+@@ -351,7 +371,6 @@ int mlx5e_rep_neigh_entry_create(struct mlx5e_priv *priv,
+
+ (*nhe)->priv = priv;
+ memcpy(&(*nhe)->m_neigh, &e->m_neigh, sizeof(e->m_neigh));
+- INIT_WORK(&(*nhe)->neigh_update_work, mlx5e_rep_neigh_update);
+ spin_lock_init(&(*nhe)->encap_list_lock);
+ INIT_LIST_HEAD(&(*nhe)->encap_list);
+ refcount_set(&(*nhe)->refcnt, 1);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_fs.c b/drivers/net/ethernet/mellanox/mlx5/core/en_fs.c
+index 73d3dc07331f1..713dc210f710c 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_fs.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_fs.c
+@@ -217,6 +217,9 @@ static int __mlx5e_add_vlan_rule(struct mlx5e_priv *priv,
+ break;
+ }
+
++ if (WARN_ONCE(*rule_p, "VLAN rule already exists type %d", rule_type))
++ return 0;
++
+ *rule_p = mlx5_add_flow_rules(ft, spec, &flow_act, &dest, 1);
+
+ if (IS_ERR(*rule_p)) {
+@@ -397,8 +400,7 @@ static void mlx5e_add_vlan_rules(struct mlx5e_priv *priv)
+ for_each_set_bit(i, priv->fs.vlan.active_svlans, VLAN_N_VID)
+ mlx5e_add_vlan_rule(priv, MLX5E_VLAN_RULE_TYPE_MATCH_STAG_VID, i);
+
+- if (priv->fs.vlan.cvlan_filter_disabled &&
+- !(priv->netdev->flags & IFF_PROMISC))
++ if (priv->fs.vlan.cvlan_filter_disabled)
+ mlx5e_add_any_vid_rules(priv);
+ }
+
+@@ -415,8 +417,12 @@ static void mlx5e_del_vlan_rules(struct mlx5e_priv *priv)
+ for_each_set_bit(i, priv->fs.vlan.active_svlans, VLAN_N_VID)
+ mlx5e_del_vlan_rule(priv, MLX5E_VLAN_RULE_TYPE_MATCH_STAG_VID, i);
+
+- if (priv->fs.vlan.cvlan_filter_disabled &&
+- !(priv->netdev->flags & IFF_PROMISC))
++ WARN_ON_ONCE(!(test_bit(MLX5E_STATE_DESTROYING, &priv->state)));
++
++ /* must be called after DESTROY bit is set and
++ * set_rx_mode is called and flushed
++ */
++ if (priv->fs.vlan.cvlan_filter_disabled)
+ mlx5e_del_any_vid_rules(priv);
+ }
+
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+index cccf65fc116ee..f8a20dd814379 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+@@ -258,12 +258,17 @@ static int mlx5e_rq_alloc_mpwqe_info(struct mlx5e_rq *rq,
+
+ static int mlx5e_create_umr_mkey(struct mlx5_core_dev *mdev,
+ u64 npages, u8 page_shift,
+- struct mlx5_core_mkey *umr_mkey)
++ struct mlx5_core_mkey *umr_mkey,
++ dma_addr_t filler_addr)
+ {
+- int inlen = MLX5_ST_SZ_BYTES(create_mkey_in);
++ struct mlx5_mtt *mtt;
++ int inlen;
+ void *mkc;
+ u32 *in;
+ int err;
++ int i;
++
++ inlen = MLX5_ST_SZ_BYTES(create_mkey_in) + sizeof(*mtt) * npages;
+
+ in = kvzalloc(inlen, GFP_KERNEL);
+ if (!in)
+@@ -283,6 +288,18 @@ static int mlx5e_create_umr_mkey(struct mlx5_core_dev *mdev,
+ MLX5_SET(mkc, mkc, translations_octword_size,
+ MLX5_MTT_OCTW(npages));
+ MLX5_SET(mkc, mkc, log_page_size, page_shift);
++ MLX5_SET(create_mkey_in, in, translations_octword_actual_size,
++ MLX5_MTT_OCTW(npages));
++
++ /* Initialize the mkey with all MTTs pointing to a default
++ * page (filler_addr). When the channels are activated, UMR
++ * WQEs will redirect the RX WQEs to the actual memory from
++ * the RQ's pool, while the gaps (wqe_overflow) remain mapped
++ * to the default page.
++ */
++ mtt = MLX5_ADDR_OF(create_mkey_in, in, klm_pas_mtt);
++ for (i = 0 ; i < npages ; i++)
++ mtt[i].ptag = cpu_to_be64(filler_addr);
+
+ err = mlx5_core_create_mkey(mdev, umr_mkey, in, inlen);
+
+@@ -294,7 +311,8 @@ static int mlx5e_create_rq_umr_mkey(struct mlx5_core_dev *mdev, struct mlx5e_rq
+ {
+ u64 num_mtts = MLX5E_REQUIRED_MTTS(mlx5_wq_ll_get_size(&rq->mpwqe.wq));
+
+- return mlx5e_create_umr_mkey(mdev, num_mtts, PAGE_SHIFT, &rq->umr_mkey);
++ return mlx5e_create_umr_mkey(mdev, num_mtts, PAGE_SHIFT, &rq->umr_mkey,
++ rq->wqe_overflow.addr);
+ }
+
+ static inline u64 mlx5e_get_mpwqe_offset(struct mlx5e_rq *rq, u16 wqe_ix)
+@@ -362,6 +380,28 @@ static void mlx5e_rq_err_cqe_work(struct work_struct *recover_work)
+ mlx5e_reporter_rq_cqe_err(rq);
+ }
+
++static int mlx5e_alloc_mpwqe_rq_drop_page(struct mlx5e_rq *rq)
++{
++ rq->wqe_overflow.page = alloc_page(GFP_KERNEL);
++ if (!rq->wqe_overflow.page)
++ return -ENOMEM;
++
++ rq->wqe_overflow.addr = dma_map_page(rq->pdev, rq->wqe_overflow.page, 0,
++ PAGE_SIZE, rq->buff.map_dir);
++ if (dma_mapping_error(rq->pdev, rq->wqe_overflow.addr)) {
++ __free_page(rq->wqe_overflow.page);
++ return -ENOMEM;
++ }
++ return 0;
++}
++
++static void mlx5e_free_mpwqe_rq_drop_page(struct mlx5e_rq *rq)
++{
++ dma_unmap_page(rq->pdev, rq->wqe_overflow.addr, PAGE_SIZE,
++ rq->buff.map_dir);
++ __free_page(rq->wqe_overflow.page);
++}
++
+ static int mlx5e_alloc_rq(struct mlx5e_channel *c,
+ struct mlx5e_params *params,
+ struct mlx5e_xsk_param *xsk,
+@@ -421,6 +461,10 @@ static int mlx5e_alloc_rq(struct mlx5e_channel *c,
+ if (err)
+ goto err_rq_wq_destroy;
+
++ err = mlx5e_alloc_mpwqe_rq_drop_page(rq);
++ if (err)
++ goto err_rq_wq_destroy;
++
+ rq->mpwqe.wq.db = &rq->mpwqe.wq.db[MLX5_RCV_DBR];
+
+ wq_sz = mlx5_wq_ll_get_size(&rq->mpwqe.wq);
+@@ -459,7 +503,7 @@ static int mlx5e_alloc_rq(struct mlx5e_channel *c,
+
+ err = mlx5e_create_rq_umr_mkey(mdev, rq);
+ if (err)
+- goto err_rq_wq_destroy;
++ goto err_rq_drop_page;
+ rq->mkey_be = cpu_to_be32(rq->umr_mkey.key);
+
+ err = mlx5e_rq_alloc_mpwqe_info(rq, c);
+@@ -598,6 +642,8 @@ err_free:
+ case MLX5_WQ_TYPE_LINKED_LIST_STRIDING_RQ:
+ kvfree(rq->mpwqe.info);
+ mlx5_core_destroy_mkey(mdev, &rq->umr_mkey);
++err_rq_drop_page:
++ mlx5e_free_mpwqe_rq_drop_page(rq);
+ break;
+ default: /* MLX5_WQ_TYPE_CYCLIC */
+ kvfree(rq->wqe.frags);
+@@ -631,6 +677,7 @@ static void mlx5e_free_rq(struct mlx5e_rq *rq)
+ case MLX5_WQ_TYPE_LINKED_LIST_STRIDING_RQ:
+ kvfree(rq->mpwqe.info);
+ mlx5_core_destroy_mkey(rq->mdev, &rq->umr_mkey);
++ mlx5e_free_mpwqe_rq_drop_page(rq);
+ break;
+ default: /* MLX5_WQ_TYPE_CYCLIC */
+ kvfree(rq->wqe.frags);
+@@ -4276,6 +4323,21 @@ void mlx5e_del_vxlan_port(struct net_device *netdev, struct udp_tunnel_info *ti)
+ mlx5e_vxlan_queue_work(priv, be16_to_cpu(ti->port), 0);
+ }
+
++static bool mlx5e_gre_tunnel_inner_proto_offload_supported(struct mlx5_core_dev *mdev,
++ struct sk_buff *skb)
++{
++ switch (skb->inner_protocol) {
++ case htons(ETH_P_IP):
++ case htons(ETH_P_IPV6):
++ case htons(ETH_P_TEB):
++ return true;
++ case htons(ETH_P_MPLS_UC):
++ case htons(ETH_P_MPLS_MC):
++ return MLX5_CAP_ETH(mdev, tunnel_stateless_mpls_over_gre);
++ }
++ return false;
++}
++
+ static netdev_features_t mlx5e_tunnel_features_check(struct mlx5e_priv *priv,
+ struct sk_buff *skb,
+ netdev_features_t features)
+@@ -4298,7 +4360,9 @@ static netdev_features_t mlx5e_tunnel_features_check(struct mlx5e_priv *priv,
+
+ switch (proto) {
+ case IPPROTO_GRE:
+- return features;
++ if (mlx5e_gre_tunnel_inner_proto_offload_supported(priv->mdev, skb))
++ return features;
++ break;
+ case IPPROTO_IPIP:
+ case IPPROTO_IPV6:
+ if (mlx5e_tunnel_proto_supported(priv->mdev, IPPROTO_IPIP))
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rep.h b/drivers/net/ethernet/mellanox/mlx5/core/en_rep.h
+index 1d56698014843..fcaabafb2e56d 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rep.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rep.h
+@@ -133,12 +133,6 @@ struct mlx5e_neigh_hash_entry {
+ /* encap list sharing the same neigh */
+ struct list_head encap_list;
+
+- /* valid only when the neigh reference is taken during
+- * neigh_update_work workqueue callback.
+- */
+- struct neighbour *n;
+- struct work_struct neigh_update_work;
+-
+ /* neigh hash entry can be deleted only when the refcount is zero.
+ * refcount is needed to avoid neigh hash entry removal by TC, while
+ * it's used by the neigh notification call.
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eq.c b/drivers/net/ethernet/mellanox/mlx5/core/eq.c
+index 31ef9f8420c87..22a19d391e179 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/eq.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/eq.c
+@@ -189,6 +189,29 @@ u32 mlx5_eq_poll_irq_disabled(struct mlx5_eq_comp *eq)
+ return count_eqe;
+ }
+
++static void mlx5_eq_async_int_lock(struct mlx5_eq_async *eq, unsigned long *flags)
++ __acquires(&eq->lock)
++{
++ if (in_irq())
++ spin_lock(&eq->lock);
++ else
++ spin_lock_irqsave(&eq->lock, *flags);
++}
++
++static void mlx5_eq_async_int_unlock(struct mlx5_eq_async *eq, unsigned long *flags)
++ __releases(&eq->lock)
++{
++ if (in_irq())
++ spin_unlock(&eq->lock);
++ else
++ spin_unlock_irqrestore(&eq->lock, *flags);
++}
++
++enum async_eq_nb_action {
++ ASYNC_EQ_IRQ_HANDLER = 0,
++ ASYNC_EQ_RECOVER = 1,
++};
++
+ static int mlx5_eq_async_int(struct notifier_block *nb,
+ unsigned long action, void *data)
+ {
+@@ -198,11 +221,14 @@ static int mlx5_eq_async_int(struct notifier_block *nb,
+ struct mlx5_eq_table *eqt;
+ struct mlx5_core_dev *dev;
+ struct mlx5_eqe *eqe;
++ unsigned long flags;
+ int num_eqes = 0;
+
+ dev = eq->dev;
+ eqt = dev->priv.eq_table;
+
++ mlx5_eq_async_int_lock(eq_async, &flags);
++
+ eqe = next_eqe_sw(eq);
+ if (!eqe)
+ goto out;
+@@ -223,8 +249,19 @@ static int mlx5_eq_async_int(struct notifier_block *nb,
+
+ out:
+ eq_update_ci(eq, 1);
++ mlx5_eq_async_int_unlock(eq_async, &flags);
+
+- return 0;
++ return unlikely(action == ASYNC_EQ_RECOVER) ? num_eqes : 0;
++}
++
++void mlx5_cmd_eq_recover(struct mlx5_core_dev *dev)
++{
++ struct mlx5_eq_async *eq = &dev->priv.eq_table->cmd_eq;
++ int eqes;
++
++ eqes = mlx5_eq_async_int(&eq->irq_nb, ASYNC_EQ_RECOVER, NULL);
++ if (eqes)
++ mlx5_core_warn(dev, "Recovered %d EQEs on cmd_eq\n", eqes);
+ }
+
+ static void init_eq_buf(struct mlx5_eq *eq)
+@@ -569,6 +606,7 @@ setup_async_eq(struct mlx5_core_dev *dev, struct mlx5_eq_async *eq,
+ int err;
+
+ eq->irq_nb.notifier_call = mlx5_eq_async_int;
++ spin_lock_init(&eq->lock);
+
+ err = create_async_eq(dev, &eq->core, param);
+ if (err) {
+@@ -656,8 +694,10 @@ static void destroy_async_eqs(struct mlx5_core_dev *dev)
+
+ cleanup_async_eq(dev, &table->pages_eq, "pages");
+ cleanup_async_eq(dev, &table->async_eq, "async");
++ mlx5_cmd_allowed_opcode(dev, MLX5_CMD_OP_DESTROY_EQ);
+ mlx5_cmd_use_polling(dev);
+ cleanup_async_eq(dev, &table->cmd_eq, "cmd");
++ mlx5_cmd_allowed_opcode(dev, CMD_ALLOWED_OPCODE_ALL);
+ mlx5_eq_notifier_unregister(dev, &table->cq_err_nb);
+ }
+
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/eq.h b/drivers/net/ethernet/mellanox/mlx5/core/lib/eq.h
+index 4aaca7400fb29..5c681e31983bc 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/lib/eq.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/lib/eq.h
+@@ -37,6 +37,7 @@ struct mlx5_eq {
+ struct mlx5_eq_async {
+ struct mlx5_eq core;
+ struct notifier_block irq_nb;
++ spinlock_t lock; /* To avoid irq EQ handle races with resiliency flows */
+ };
+
+ struct mlx5_eq_comp {
+@@ -81,6 +82,7 @@ void mlx5_cq_tasklet_cb(unsigned long data);
+ struct cpumask *mlx5_eq_comp_cpumask(struct mlx5_core_dev *dev, int ix);
+
+ u32 mlx5_eq_poll_irq_disabled(struct mlx5_eq_comp *eq);
++void mlx5_cmd_eq_recover(struct mlx5_core_dev *dev);
+ void mlx5_eq_synchronize_async_irq(struct mlx5_core_dev *dev);
+ void mlx5_eq_synchronize_cmd_irq(struct mlx5_core_dev *dev);
+
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/pci_irq.c b/drivers/net/ethernet/mellanox/mlx5/core/pci_irq.c
+index 373981a659c7c..6fd9749203944 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/pci_irq.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/pci_irq.c
+@@ -115,7 +115,7 @@ static int request_irqs(struct mlx5_core_dev *dev, int nvec)
+ return 0;
+
+ err_request_irq:
+- for (; i >= 0; i--) {
++ while (i--) {
+ struct mlx5_irq *irq = mlx5_irq_get(dev, i);
+ int irqn = pci_irq_vector(dev->pdev, i);
+
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_tcam.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_tcam.c
+index 5c020403342f9..7cccc41dd69c9 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_tcam.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_tcam.c
+@@ -292,13 +292,14 @@ mlxsw_sp_acl_tcam_group_add(struct mlxsw_sp_acl_tcam *tcam,
+ int err;
+
+ group->tcam = tcam;
+- mutex_init(&group->lock);
+ INIT_LIST_HEAD(&group->region_list);
+
+ err = mlxsw_sp_acl_tcam_group_id_get(tcam, &group->id);
+ if (err)
+ return err;
+
++ mutex_init(&group->lock);
++
+ return 0;
+ }
+
+diff --git a/drivers/net/ethernet/mscc/Makefile b/drivers/net/ethernet/mscc/Makefile
+index 91b33b55054e1..ad97a5cca6f99 100644
+--- a/drivers/net/ethernet/mscc/Makefile
++++ b/drivers/net/ethernet/mscc/Makefile
+@@ -2,4 +2,4 @@
+ obj-$(CONFIG_MSCC_OCELOT_SWITCH) += mscc_ocelot_common.o
+ mscc_ocelot_common-y := ocelot.o ocelot_io.o
+ mscc_ocelot_common-y += ocelot_regs.o ocelot_tc.o ocelot_police.o ocelot_ace.o ocelot_flower.o ocelot_ptp.o
+-obj-$(CONFIG_MSCC_OCELOT_SWITCH_OCELOT) += ocelot_board.o
++obj-$(CONFIG_MSCC_OCELOT_SWITCH_OCELOT) += ocelot_vsc7514.o
+diff --git a/drivers/net/ethernet/mscc/ocelot.c b/drivers/net/ethernet/mscc/ocelot.c
+index d0b79cca51840..61bbb7a090042 100644
+--- a/drivers/net/ethernet/mscc/ocelot.c
++++ b/drivers/net/ethernet/mscc/ocelot.c
+@@ -396,18 +396,6 @@ static void ocelot_vlan_init(struct ocelot *ocelot)
+ }
+ }
+
+-/* Watermark encode
+- * Bit 8: Unit; 0:1, 1:16
+- * Bit 7-0: Value to be multiplied with unit
+- */
+-static u16 ocelot_wm_enc(u16 value)
+-{
+- if (value >= BIT(8))
+- return BIT(8) | (value / 16);
+-
+- return value;
+-}
+-
+ void ocelot_adjust_link(struct ocelot *ocelot, int port,
+ struct phy_device *phydev)
+ {
+@@ -2012,7 +2000,8 @@ void ocelot_port_set_maxlen(struct ocelot *ocelot, int port, size_t sdu)
+ {
+ struct ocelot_port *ocelot_port = ocelot->ports[port];
+ int maxlen = sdu + ETH_HLEN + ETH_FCS_LEN;
+- int atop_wm;
++ int pause_start, pause_stop;
++ int atop, atop_tot;
+
+ if (port == ocelot->npi) {
+ maxlen += OCELOT_TAG_LEN;
+@@ -2025,20 +2014,20 @@ void ocelot_port_set_maxlen(struct ocelot *ocelot, int port, size_t sdu)
+
+ ocelot_port_writel(ocelot_port, maxlen, DEV_MAC_MAXLEN_CFG);
+
+- /* Set Pause WM hysteresis
+- * 152 = 6 * maxlen / OCELOT_BUFFER_CELL_SZ
+- * 101 = 4 * maxlen / OCELOT_BUFFER_CELL_SZ
+- */
+- ocelot_write_rix(ocelot, SYS_PAUSE_CFG_PAUSE_ENA |
+- SYS_PAUSE_CFG_PAUSE_STOP(101) |
+- SYS_PAUSE_CFG_PAUSE_START(152), SYS_PAUSE_CFG, port);
++ /* Set Pause watermark hysteresis */
++ pause_start = 6 * maxlen / OCELOT_BUFFER_CELL_SZ;
++ pause_stop = 4 * maxlen / OCELOT_BUFFER_CELL_SZ;
++ ocelot_rmw_rix(ocelot, SYS_PAUSE_CFG_PAUSE_START(pause_start),
++ SYS_PAUSE_CFG_PAUSE_START_M, SYS_PAUSE_CFG, port);
++ ocelot_rmw_rix(ocelot, SYS_PAUSE_CFG_PAUSE_STOP(pause_stop),
++ SYS_PAUSE_CFG_PAUSE_STOP_M, SYS_PAUSE_CFG, port);
+
+- /* Tail dropping watermark */
+- atop_wm = (ocelot->shared_queue_sz - 9 * maxlen) /
++ /* Tail dropping watermarks */
++ atop_tot = (ocelot->shared_queue_sz - 9 * maxlen) /
+ OCELOT_BUFFER_CELL_SZ;
+- ocelot_write_rix(ocelot, ocelot_wm_enc(9 * maxlen),
+- SYS_ATOP, port);
+- ocelot_write(ocelot, ocelot_wm_enc(atop_wm), SYS_ATOP_TOT_CFG);
++ atop = (9 * maxlen) / OCELOT_BUFFER_CELL_SZ;
++ ocelot_write_rix(ocelot, ocelot->ops->wm_enc(atop), SYS_ATOP, port);
++ ocelot_write(ocelot, ocelot->ops->wm_enc(atop_tot), SYS_ATOP_TOT_CFG);
+ }
+ EXPORT_SYMBOL(ocelot_port_set_maxlen);
+
+@@ -2094,6 +2083,10 @@ void ocelot_init_port(struct ocelot *ocelot, int port)
+ ocelot_port_writel(ocelot_port, 0, DEV_MAC_FC_MAC_HIGH_CFG);
+ ocelot_port_writel(ocelot_port, 0, DEV_MAC_FC_MAC_LOW_CFG);
+
++ /* Enable transmission of pause frames */
++ ocelot_rmw_rix(ocelot, SYS_PAUSE_CFG_PAUSE_ENA, SYS_PAUSE_CFG_PAUSE_ENA,
++ SYS_PAUSE_CFG, port);
++
+ /* Drop frames with multicast source address */
+ ocelot_rmw_gix(ocelot, ANA_PORT_DROP_CFG_DROP_MC_SMAC_ENA,
+ ANA_PORT_DROP_CFG_DROP_MC_SMAC_ENA,
+diff --git a/drivers/net/ethernet/mscc/ocelot_board.c b/drivers/net/ethernet/mscc/ocelot_board.c
+deleted file mode 100644
+index 4a15d2ff8b706..0000000000000
+--- a/drivers/net/ethernet/mscc/ocelot_board.c
++++ /dev/null
+@@ -1,626 +0,0 @@
+-// SPDX-License-Identifier: (GPL-2.0 OR MIT)
+-/*
+- * Microsemi Ocelot Switch driver
+- *
+- * Copyright (c) 2017 Microsemi Corporation
+- */
+-#include <linux/interrupt.h>
+-#include <linux/module.h>
+-#include <linux/of_net.h>
+-#include <linux/netdevice.h>
+-#include <linux/of_mdio.h>
+-#include <linux/of_platform.h>
+-#include <linux/mfd/syscon.h>
+-#include <linux/skbuff.h>
+-#include <net/switchdev.h>
+-
+-#include <soc/mscc/ocelot_vcap.h>
+-#include "ocelot.h"
+-
+-#define IFH_EXTRACT_BITFIELD64(x, o, w) (((x) >> (o)) & GENMASK_ULL((w) - 1, 0))
+-#define VSC7514_VCAP_IS2_CNT 64
+-#define VSC7514_VCAP_IS2_ENTRY_WIDTH 376
+-#define VSC7514_VCAP_IS2_ACTION_WIDTH 99
+-#define VSC7514_VCAP_PORT_CNT 11
+-
+-static int ocelot_parse_ifh(u32 *_ifh, struct frame_info *info)
+-{
+- u8 llen, wlen;
+- u64 ifh[2];
+-
+- ifh[0] = be64_to_cpu(((__force __be64 *)_ifh)[0]);
+- ifh[1] = be64_to_cpu(((__force __be64 *)_ifh)[1]);
+-
+- wlen = IFH_EXTRACT_BITFIELD64(ifh[0], 7, 8);
+- llen = IFH_EXTRACT_BITFIELD64(ifh[0], 15, 6);
+-
+- info->len = OCELOT_BUFFER_CELL_SZ * wlen + llen - 80;
+-
+- info->timestamp = IFH_EXTRACT_BITFIELD64(ifh[0], 21, 32);
+-
+- info->port = IFH_EXTRACT_BITFIELD64(ifh[1], 43, 4);
+-
+- info->tag_type = IFH_EXTRACT_BITFIELD64(ifh[1], 16, 1);
+- info->vid = IFH_EXTRACT_BITFIELD64(ifh[1], 0, 12);
+-
+- return 0;
+-}
+-
+-static int ocelot_rx_frame_word(struct ocelot *ocelot, u8 grp, bool ifh,
+- u32 *rval)
+-{
+- u32 val;
+- u32 bytes_valid;
+-
+- val = ocelot_read_rix(ocelot, QS_XTR_RD, grp);
+- if (val == XTR_NOT_READY) {
+- if (ifh)
+- return -EIO;
+-
+- do {
+- val = ocelot_read_rix(ocelot, QS_XTR_RD, grp);
+- } while (val == XTR_NOT_READY);
+- }
+-
+- switch (val) {
+- case XTR_ABORT:
+- return -EIO;
+- case XTR_EOF_0:
+- case XTR_EOF_1:
+- case XTR_EOF_2:
+- case XTR_EOF_3:
+- case XTR_PRUNED:
+- bytes_valid = XTR_VALID_BYTES(val);
+- val = ocelot_read_rix(ocelot, QS_XTR_RD, grp);
+- if (val == XTR_ESCAPE)
+- *rval = ocelot_read_rix(ocelot, QS_XTR_RD, grp);
+- else
+- *rval = val;
+-
+- return bytes_valid;
+- case XTR_ESCAPE:
+- *rval = ocelot_read_rix(ocelot, QS_XTR_RD, grp);
+-
+- return 4;
+- default:
+- *rval = val;
+-
+- return 4;
+- }
+-}
+-
+-static irqreturn_t ocelot_xtr_irq_handler(int irq, void *arg)
+-{
+- struct ocelot *ocelot = arg;
+- int i = 0, grp = 0;
+- int err = 0;
+-
+- if (!(ocelot_read(ocelot, QS_XTR_DATA_PRESENT) & BIT(grp)))
+- return IRQ_NONE;
+-
+- do {
+- struct skb_shared_hwtstamps *shhwtstamps;
+- struct ocelot_port_private *priv;
+- struct ocelot_port *ocelot_port;
+- u64 tod_in_ns, full_ts_in_ns;
+- struct frame_info info = {};
+- struct net_device *dev;
+- u32 ifh[4], val, *buf;
+- struct timespec64 ts;
+- int sz, len, buf_len;
+- struct sk_buff *skb;
+-
+- for (i = 0; i < OCELOT_TAG_LEN / 4; i++) {
+- err = ocelot_rx_frame_word(ocelot, grp, true, &ifh[i]);
+- if (err != 4)
+- break;
+- }
+-
+- if (err != 4)
+- break;
+-
+- /* At this point the IFH was read correctly, so it is safe to
+- * presume that there is no error. The err needs to be reset
+- * otherwise a frame could come in CPU queue between the while
+- * condition and the check for error later on. And in that case
+- * the new frame is just removed and not processed.
+- */
+- err = 0;
+-
+- ocelot_parse_ifh(ifh, &info);
+-
+- ocelot_port = ocelot->ports[info.port];
+- priv = container_of(ocelot_port, struct ocelot_port_private,
+- port);
+- dev = priv->dev;
+-
+- skb = netdev_alloc_skb(dev, info.len);
+-
+- if (unlikely(!skb)) {
+- netdev_err(dev, "Unable to allocate sk_buff\n");
+- err = -ENOMEM;
+- break;
+- }
+- buf_len = info.len - ETH_FCS_LEN;
+- buf = (u32 *)skb_put(skb, buf_len);
+-
+- len = 0;
+- do {
+- sz = ocelot_rx_frame_word(ocelot, grp, false, &val);
+- *buf++ = val;
+- len += sz;
+- } while (len < buf_len);
+-
+- /* Read the FCS */
+- sz = ocelot_rx_frame_word(ocelot, grp, false, &val);
+- /* Update the statistics if part of the FCS was read before */
+- len -= ETH_FCS_LEN - sz;
+-
+- if (unlikely(dev->features & NETIF_F_RXFCS)) {
+- buf = (u32 *)skb_put(skb, ETH_FCS_LEN);
+- *buf = val;
+- }
+-
+- if (sz < 0) {
+- err = sz;
+- break;
+- }
+-
+- if (ocelot->ptp) {
+- ocelot_ptp_gettime64(&ocelot->ptp_info, &ts);
+-
+- tod_in_ns = ktime_set(ts.tv_sec, ts.tv_nsec);
+- if ((tod_in_ns & 0xffffffff) < info.timestamp)
+- full_ts_in_ns = (((tod_in_ns >> 32) - 1) << 32) |
+- info.timestamp;
+- else
+- full_ts_in_ns = (tod_in_ns & GENMASK_ULL(63, 32)) |
+- info.timestamp;
+-
+- shhwtstamps = skb_hwtstamps(skb);
+- memset(shhwtstamps, 0, sizeof(struct skb_shared_hwtstamps));
+- shhwtstamps->hwtstamp = full_ts_in_ns;
+- }
+-
+- /* Everything we see on an interface that is in the HW bridge
+- * has already been forwarded.
+- */
+- if (ocelot->bridge_mask & BIT(info.port))
+- skb->offload_fwd_mark = 1;
+-
+- skb->protocol = eth_type_trans(skb, dev);
+- if (!skb_defer_rx_timestamp(skb))
+- netif_rx(skb);
+- dev->stats.rx_bytes += len;
+- dev->stats.rx_packets++;
+- } while (ocelot_read(ocelot, QS_XTR_DATA_PRESENT) & BIT(grp));
+-
+- if (err)
+- while (ocelot_read(ocelot, QS_XTR_DATA_PRESENT) & BIT(grp))
+- ocelot_read_rix(ocelot, QS_XTR_RD, grp);
+-
+- return IRQ_HANDLED;
+-}
+-
+-static irqreturn_t ocelot_ptp_rdy_irq_handler(int irq, void *arg)
+-{
+- struct ocelot *ocelot = arg;
+-
+- ocelot_get_txtstamp(ocelot);
+-
+- return IRQ_HANDLED;
+-}
+-
+-static const struct of_device_id mscc_ocelot_match[] = {
+- { .compatible = "mscc,vsc7514-switch" },
+- { }
+-};
+-MODULE_DEVICE_TABLE(of, mscc_ocelot_match);
+-
+-static int ocelot_reset(struct ocelot *ocelot)
+-{
+- int retries = 100;
+- u32 val;
+-
+- regmap_field_write(ocelot->regfields[SYS_RESET_CFG_MEM_INIT], 1);
+- regmap_field_write(ocelot->regfields[SYS_RESET_CFG_MEM_ENA], 1);
+-
+- do {
+- msleep(1);
+- regmap_field_read(ocelot->regfields[SYS_RESET_CFG_MEM_INIT],
+- &val);
+- } while (val && --retries);
+-
+- if (!retries)
+- return -ETIMEDOUT;
+-
+- regmap_field_write(ocelot->regfields[SYS_RESET_CFG_MEM_ENA], 1);
+- regmap_field_write(ocelot->regfields[SYS_RESET_CFG_CORE_ENA], 1);
+-
+- return 0;
+-}
+-
+-static const struct ocelot_ops ocelot_ops = {
+- .reset = ocelot_reset,
+-};
+-
+-static const struct vcap_field vsc7514_vcap_is2_keys[] = {
+- /* Common: 46 bits */
+- [VCAP_IS2_TYPE] = { 0, 4},
+- [VCAP_IS2_HK_FIRST] = { 4, 1},
+- [VCAP_IS2_HK_PAG] = { 5, 8},
+- [VCAP_IS2_HK_IGR_PORT_MASK] = { 13, 12},
+- [VCAP_IS2_HK_RSV2] = { 25, 1},
+- [VCAP_IS2_HK_HOST_MATCH] = { 26, 1},
+- [VCAP_IS2_HK_L2_MC] = { 27, 1},
+- [VCAP_IS2_HK_L2_BC] = { 28, 1},
+- [VCAP_IS2_HK_VLAN_TAGGED] = { 29, 1},
+- [VCAP_IS2_HK_VID] = { 30, 12},
+- [VCAP_IS2_HK_DEI] = { 42, 1},
+- [VCAP_IS2_HK_PCP] = { 43, 3},
+- /* MAC_ETYPE / MAC_LLC / MAC_SNAP / OAM common */
+- [VCAP_IS2_HK_L2_DMAC] = { 46, 48},
+- [VCAP_IS2_HK_L2_SMAC] = { 94, 48},
+- /* MAC_ETYPE (TYPE=000) */
+- [VCAP_IS2_HK_MAC_ETYPE_ETYPE] = {142, 16},
+- [VCAP_IS2_HK_MAC_ETYPE_L2_PAYLOAD0] = {158, 16},
+- [VCAP_IS2_HK_MAC_ETYPE_L2_PAYLOAD1] = {174, 8},
+- [VCAP_IS2_HK_MAC_ETYPE_L2_PAYLOAD2] = {182, 3},
+- /* MAC_LLC (TYPE=001) */
+- [VCAP_IS2_HK_MAC_LLC_L2_LLC] = {142, 40},
+- /* MAC_SNAP (TYPE=010) */
+- [VCAP_IS2_HK_MAC_SNAP_L2_SNAP] = {142, 40},
+- /* MAC_ARP (TYPE=011) */
+- [VCAP_IS2_HK_MAC_ARP_SMAC] = { 46, 48},
+- [VCAP_IS2_HK_MAC_ARP_ADDR_SPACE_OK] = { 94, 1},
+- [VCAP_IS2_HK_MAC_ARP_PROTO_SPACE_OK] = { 95, 1},
+- [VCAP_IS2_HK_MAC_ARP_LEN_OK] = { 96, 1},
+- [VCAP_IS2_HK_MAC_ARP_TARGET_MATCH] = { 97, 1},
+- [VCAP_IS2_HK_MAC_ARP_SENDER_MATCH] = { 98, 1},
+- [VCAP_IS2_HK_MAC_ARP_OPCODE_UNKNOWN] = { 99, 1},
+- [VCAP_IS2_HK_MAC_ARP_OPCODE] = {100, 2},
+- [VCAP_IS2_HK_MAC_ARP_L3_IP4_DIP] = {102, 32},
+- [VCAP_IS2_HK_MAC_ARP_L3_IP4_SIP] = {134, 32},
+- [VCAP_IS2_HK_MAC_ARP_DIP_EQ_SIP] = {166, 1},
+- /* IP4_TCP_UDP / IP4_OTHER common */
+- [VCAP_IS2_HK_IP4] = { 46, 1},
+- [VCAP_IS2_HK_L3_FRAGMENT] = { 47, 1},
+- [VCAP_IS2_HK_L3_FRAG_OFS_GT0] = { 48, 1},
+- [VCAP_IS2_HK_L3_OPTIONS] = { 49, 1},
+- [VCAP_IS2_HK_IP4_L3_TTL_GT0] = { 50, 1},
+- [VCAP_IS2_HK_L3_TOS] = { 51, 8},
+- [VCAP_IS2_HK_L3_IP4_DIP] = { 59, 32},
+- [VCAP_IS2_HK_L3_IP4_SIP] = { 91, 32},
+- [VCAP_IS2_HK_DIP_EQ_SIP] = {123, 1},
+- /* IP4_TCP_UDP (TYPE=100) */
+- [VCAP_IS2_HK_TCP] = {124, 1},
+- [VCAP_IS2_HK_L4_SPORT] = {125, 16},
+- [VCAP_IS2_HK_L4_DPORT] = {141, 16},
+- [VCAP_IS2_HK_L4_RNG] = {157, 8},
+- [VCAP_IS2_HK_L4_SPORT_EQ_DPORT] = {165, 1},
+- [VCAP_IS2_HK_L4_SEQUENCE_EQ0] = {166, 1},
+- [VCAP_IS2_HK_L4_URG] = {167, 1},
+- [VCAP_IS2_HK_L4_ACK] = {168, 1},
+- [VCAP_IS2_HK_L4_PSH] = {169, 1},
+- [VCAP_IS2_HK_L4_RST] = {170, 1},
+- [VCAP_IS2_HK_L4_SYN] = {171, 1},
+- [VCAP_IS2_HK_L4_FIN] = {172, 1},
+- [VCAP_IS2_HK_L4_1588_DOM] = {173, 8},
+- [VCAP_IS2_HK_L4_1588_VER] = {181, 4},
+- /* IP4_OTHER (TYPE=101) */
+- [VCAP_IS2_HK_IP4_L3_PROTO] = {124, 8},
+- [VCAP_IS2_HK_L3_PAYLOAD] = {132, 56},
+- /* IP6_STD (TYPE=110) */
+- [VCAP_IS2_HK_IP6_L3_TTL_GT0] = { 46, 1},
+- [VCAP_IS2_HK_L3_IP6_SIP] = { 47, 128},
+- [VCAP_IS2_HK_IP6_L3_PROTO] = {175, 8},
+- /* OAM (TYPE=111) */
+- [VCAP_IS2_HK_OAM_MEL_FLAGS] = {142, 7},
+- [VCAP_IS2_HK_OAM_VER] = {149, 5},
+- [VCAP_IS2_HK_OAM_OPCODE] = {154, 8},
+- [VCAP_IS2_HK_OAM_FLAGS] = {162, 8},
+- [VCAP_IS2_HK_OAM_MEPID] = {170, 16},
+- [VCAP_IS2_HK_OAM_CCM_CNTS_EQ0] = {186, 1},
+- [VCAP_IS2_HK_OAM_IS_Y1731] = {187, 1},
+-};
+-
+-static const struct vcap_field vsc7514_vcap_is2_actions[] = {
+- [VCAP_IS2_ACT_HIT_ME_ONCE] = { 0, 1},
+- [VCAP_IS2_ACT_CPU_COPY_ENA] = { 1, 1},
+- [VCAP_IS2_ACT_CPU_QU_NUM] = { 2, 3},
+- [VCAP_IS2_ACT_MASK_MODE] = { 5, 2},
+- [VCAP_IS2_ACT_MIRROR_ENA] = { 7, 1},
+- [VCAP_IS2_ACT_LRN_DIS] = { 8, 1},
+- [VCAP_IS2_ACT_POLICE_ENA] = { 9, 1},
+- [VCAP_IS2_ACT_POLICE_IDX] = { 10, 9},
+- [VCAP_IS2_ACT_POLICE_VCAP_ONLY] = { 19, 1},
+- [VCAP_IS2_ACT_PORT_MASK] = { 20, 11},
+- [VCAP_IS2_ACT_REW_OP] = { 31, 9},
+- [VCAP_IS2_ACT_SMAC_REPLACE_ENA] = { 40, 1},
+- [VCAP_IS2_ACT_RSV] = { 41, 2},
+- [VCAP_IS2_ACT_ACL_ID] = { 43, 6},
+- [VCAP_IS2_ACT_HIT_CNT] = { 49, 32},
+-};
+-
+-static const struct vcap_props vsc7514_vcap_props[] = {
+- [VCAP_IS2] = {
+- .tg_width = 2,
+- .sw_count = 4,
+- .entry_count = VSC7514_VCAP_IS2_CNT,
+- .entry_width = VSC7514_VCAP_IS2_ENTRY_WIDTH,
+- .action_count = VSC7514_VCAP_IS2_CNT +
+- VSC7514_VCAP_PORT_CNT + 2,
+- .action_width = 99,
+- .action_type_width = 1,
+- .action_table = {
+- [IS2_ACTION_TYPE_NORMAL] = {
+- .width = 49,
+- .count = 2
+- },
+- [IS2_ACTION_TYPE_SMAC_SIP] = {
+- .width = 6,
+- .count = 4
+- },
+- },
+- .counter_words = 4,
+- .counter_width = 32,
+- },
+-};
+-
+-static struct ptp_clock_info ocelot_ptp_clock_info = {
+- .owner = THIS_MODULE,
+- .name = "ocelot ptp",
+- .max_adj = 0x7fffffff,
+- .n_alarm = 0,
+- .n_ext_ts = 0,
+- .n_per_out = OCELOT_PTP_PINS_NUM,
+- .n_pins = OCELOT_PTP_PINS_NUM,
+- .pps = 0,
+- .gettime64 = ocelot_ptp_gettime64,
+- .settime64 = ocelot_ptp_settime64,
+- .adjtime = ocelot_ptp_adjtime,
+- .adjfine = ocelot_ptp_adjfine,
+- .verify = ocelot_ptp_verify,
+- .enable = ocelot_ptp_enable,
+-};
+-
+-static int mscc_ocelot_probe(struct platform_device *pdev)
+-{
+- struct device_node *np = pdev->dev.of_node;
+- struct device_node *ports, *portnp;
+- int err, irq_xtr, irq_ptp_rdy;
+- struct ocelot *ocelot;
+- struct regmap *hsio;
+- unsigned int i;
+-
+- struct {
+- enum ocelot_target id;
+- char *name;
+- u8 optional:1;
+- } io_target[] = {
+- { SYS, "sys" },
+- { REW, "rew" },
+- { QSYS, "qsys" },
+- { ANA, "ana" },
+- { QS, "qs" },
+- { S2, "s2" },
+- { PTP, "ptp", 1 },
+- };
+-
+- if (!np && !pdev->dev.platform_data)
+- return -ENODEV;
+-
+- ocelot = devm_kzalloc(&pdev->dev, sizeof(*ocelot), GFP_KERNEL);
+- if (!ocelot)
+- return -ENOMEM;
+-
+- platform_set_drvdata(pdev, ocelot);
+- ocelot->dev = &pdev->dev;
+-
+- for (i = 0; i < ARRAY_SIZE(io_target); i++) {
+- struct regmap *target;
+- struct resource *res;
+-
+- res = platform_get_resource_byname(pdev, IORESOURCE_MEM,
+- io_target[i].name);
+-
+- target = ocelot_regmap_init(ocelot, res);
+- if (IS_ERR(target)) {
+- if (io_target[i].optional) {
+- ocelot->targets[io_target[i].id] = NULL;
+- continue;
+- }
+- return PTR_ERR(target);
+- }
+-
+- ocelot->targets[io_target[i].id] = target;
+- }
+-
+- hsio = syscon_regmap_lookup_by_compatible("mscc,ocelot-hsio");
+- if (IS_ERR(hsio)) {
+- dev_err(&pdev->dev, "missing hsio syscon\n");
+- return PTR_ERR(hsio);
+- }
+-
+- ocelot->targets[HSIO] = hsio;
+-
+- err = ocelot_chip_init(ocelot, &ocelot_ops);
+- if (err)
+- return err;
+-
+- irq_xtr = platform_get_irq_byname(pdev, "xtr");
+- if (irq_xtr < 0)
+- return -ENODEV;
+-
+- err = devm_request_threaded_irq(&pdev->dev, irq_xtr, NULL,
+- ocelot_xtr_irq_handler, IRQF_ONESHOT,
+- "frame extraction", ocelot);
+- if (err)
+- return err;
+-
+- irq_ptp_rdy = platform_get_irq_byname(pdev, "ptp_rdy");
+- if (irq_ptp_rdy > 0 && ocelot->targets[PTP]) {
+- err = devm_request_threaded_irq(&pdev->dev, irq_ptp_rdy, NULL,
+- ocelot_ptp_rdy_irq_handler,
+- IRQF_ONESHOT, "ptp ready",
+- ocelot);
+- if (err)
+- return err;
+-
+- /* Both the PTP interrupt and the PTP bank are available */
+- ocelot->ptp = 1;
+- }
+-
+- ports = of_get_child_by_name(np, "ethernet-ports");
+- if (!ports) {
+- dev_err(&pdev->dev, "no ethernet-ports child node found\n");
+- return -ENODEV;
+- }
+-
+- ocelot->num_phys_ports = of_get_child_count(ports);
+-
+- ocelot->ports = devm_kcalloc(&pdev->dev, ocelot->num_phys_ports,
+- sizeof(struct ocelot_port *), GFP_KERNEL);
+-
+- ocelot->vcap_is2_keys = vsc7514_vcap_is2_keys;
+- ocelot->vcap_is2_actions = vsc7514_vcap_is2_actions;
+- ocelot->vcap = vsc7514_vcap_props;
+-
+- ocelot_init(ocelot);
+- if (ocelot->ptp) {
+- err = ocelot_init_timestamp(ocelot, &ocelot_ptp_clock_info);
+- if (err) {
+- dev_err(ocelot->dev,
+- "Timestamp initialization failed\n");
+- ocelot->ptp = 0;
+- }
+- }
+-
+- /* No NPI port */
+- ocelot_configure_cpu(ocelot, -1, OCELOT_TAG_PREFIX_NONE,
+- OCELOT_TAG_PREFIX_NONE);
+-
+- for_each_available_child_of_node(ports, portnp) {
+- struct ocelot_port_private *priv;
+- struct ocelot_port *ocelot_port;
+- struct device_node *phy_node;
+- phy_interface_t phy_mode;
+- struct phy_device *phy;
+- struct resource *res;
+- struct phy *serdes;
+- void __iomem *regs;
+- char res_name[8];
+- u32 port;
+-
+- if (of_property_read_u32(portnp, "reg", &port))
+- continue;
+-
+- snprintf(res_name, sizeof(res_name), "port%d", port);
+-
+- res = platform_get_resource_byname(pdev, IORESOURCE_MEM,
+- res_name);
+- regs = devm_ioremap_resource(&pdev->dev, res);
+- if (IS_ERR(regs))
+- continue;
+-
+- phy_node = of_parse_phandle(portnp, "phy-handle", 0);
+- if (!phy_node)
+- continue;
+-
+- phy = of_phy_find_device(phy_node);
+- of_node_put(phy_node);
+- if (!phy)
+- continue;
+-
+- err = ocelot_probe_port(ocelot, port, regs, phy);
+- if (err) {
+- of_node_put(portnp);
+- goto out_put_ports;
+- }
+-
+- ocelot_port = ocelot->ports[port];
+- priv = container_of(ocelot_port, struct ocelot_port_private,
+- port);
+-
+- of_get_phy_mode(portnp, &phy_mode);
+-
+- ocelot_port->phy_mode = phy_mode;
+-
+- switch (ocelot_port->phy_mode) {
+- case PHY_INTERFACE_MODE_NA:
+- continue;
+- case PHY_INTERFACE_MODE_SGMII:
+- break;
+- case PHY_INTERFACE_MODE_QSGMII:
+- /* Ensure clock signals and speed is set on all
+- * QSGMII links
+- */
+- ocelot_port_writel(ocelot_port,
+- DEV_CLOCK_CFG_LINK_SPEED
+- (OCELOT_SPEED_1000),
+- DEV_CLOCK_CFG);
+- break;
+- default:
+- dev_err(ocelot->dev,
+- "invalid phy mode for port%d, (Q)SGMII only\n",
+- port);
+- of_node_put(portnp);
+- err = -EINVAL;
+- goto out_put_ports;
+- }
+-
+- serdes = devm_of_phy_get(ocelot->dev, portnp, NULL);
+- if (IS_ERR(serdes)) {
+- err = PTR_ERR(serdes);
+- if (err == -EPROBE_DEFER)
+- dev_dbg(ocelot->dev, "deferring probe\n");
+- else
+- dev_err(ocelot->dev,
+- "missing SerDes phys for port%d\n",
+- port);
+-
+- of_node_put(portnp);
+- goto out_put_ports;
+- }
+-
+- priv->serdes = serdes;
+- }
+-
+- register_netdevice_notifier(&ocelot_netdevice_nb);
+- register_switchdev_notifier(&ocelot_switchdev_nb);
+- register_switchdev_blocking_notifier(&ocelot_switchdev_blocking_nb);
+-
+- dev_info(&pdev->dev, "Ocelot switch probed\n");
+-
+-out_put_ports:
+- of_node_put(ports);
+- return err;
+-}
+-
+-static int mscc_ocelot_remove(struct platform_device *pdev)
+-{
+- struct ocelot *ocelot = platform_get_drvdata(pdev);
+-
+- ocelot_deinit_timestamp(ocelot);
+- ocelot_deinit(ocelot);
+- unregister_switchdev_blocking_notifier(&ocelot_switchdev_blocking_nb);
+- unregister_switchdev_notifier(&ocelot_switchdev_nb);
+- unregister_netdevice_notifier(&ocelot_netdevice_nb);
+-
+- return 0;
+-}
+-
+-static struct platform_driver mscc_ocelot_driver = {
+- .probe = mscc_ocelot_probe,
+- .remove = mscc_ocelot_remove,
+- .driver = {
+- .name = "ocelot-switch",
+- .of_match_table = mscc_ocelot_match,
+- },
+-};
+-
+-module_platform_driver(mscc_ocelot_driver);
+-
+-MODULE_DESCRIPTION("Microsemi Ocelot switch driver");
+-MODULE_AUTHOR("Alexandre Belloni <alexandre.belloni@bootlin.com>");
+-MODULE_LICENSE("Dual MIT/GPL");
+diff --git a/drivers/net/ethernet/mscc/ocelot_vsc7514.c b/drivers/net/ethernet/mscc/ocelot_vsc7514.c
+new file mode 100644
+index 0000000000000..66b58b242f778
+--- /dev/null
++++ b/drivers/net/ethernet/mscc/ocelot_vsc7514.c
+@@ -0,0 +1,639 @@
++// SPDX-License-Identifier: (GPL-2.0 OR MIT)
++/*
++ * Microsemi Ocelot Switch driver
++ *
++ * Copyright (c) 2017 Microsemi Corporation
++ */
++#include <linux/interrupt.h>
++#include <linux/module.h>
++#include <linux/of_net.h>
++#include <linux/netdevice.h>
++#include <linux/of_mdio.h>
++#include <linux/of_platform.h>
++#include <linux/mfd/syscon.h>
++#include <linux/skbuff.h>
++#include <net/switchdev.h>
++
++#include <soc/mscc/ocelot_vcap.h>
++#include "ocelot.h"
++
++#define IFH_EXTRACT_BITFIELD64(x, o, w) (((x) >> (o)) & GENMASK_ULL((w) - 1, 0))
++#define VSC7514_VCAP_IS2_CNT 64
++#define VSC7514_VCAP_IS2_ENTRY_WIDTH 376
++#define VSC7514_VCAP_IS2_ACTION_WIDTH 99
++#define VSC7514_VCAP_PORT_CNT 11
++
++static int ocelot_parse_ifh(u32 *_ifh, struct frame_info *info)
++{
++ u8 llen, wlen;
++ u64 ifh[2];
++
++ ifh[0] = be64_to_cpu(((__force __be64 *)_ifh)[0]);
++ ifh[1] = be64_to_cpu(((__force __be64 *)_ifh)[1]);
++
++ wlen = IFH_EXTRACT_BITFIELD64(ifh[0], 7, 8);
++ llen = IFH_EXTRACT_BITFIELD64(ifh[0], 15, 6);
++
++ info->len = OCELOT_BUFFER_CELL_SZ * wlen + llen - 80;
++
++ info->timestamp = IFH_EXTRACT_BITFIELD64(ifh[0], 21, 32);
++
++ info->port = IFH_EXTRACT_BITFIELD64(ifh[1], 43, 4);
++
++ info->tag_type = IFH_EXTRACT_BITFIELD64(ifh[1], 16, 1);
++ info->vid = IFH_EXTRACT_BITFIELD64(ifh[1], 0, 12);
++
++ return 0;
++}
++
++static int ocelot_rx_frame_word(struct ocelot *ocelot, u8 grp, bool ifh,
++ u32 *rval)
++{
++ u32 val;
++ u32 bytes_valid;
++
++ val = ocelot_read_rix(ocelot, QS_XTR_RD, grp);
++ if (val == XTR_NOT_READY) {
++ if (ifh)
++ return -EIO;
++
++ do {
++ val = ocelot_read_rix(ocelot, QS_XTR_RD, grp);
++ } while (val == XTR_NOT_READY);
++ }
++
++ switch (val) {
++ case XTR_ABORT:
++ return -EIO;
++ case XTR_EOF_0:
++ case XTR_EOF_1:
++ case XTR_EOF_2:
++ case XTR_EOF_3:
++ case XTR_PRUNED:
++ bytes_valid = XTR_VALID_BYTES(val);
++ val = ocelot_read_rix(ocelot, QS_XTR_RD, grp);
++ if (val == XTR_ESCAPE)
++ *rval = ocelot_read_rix(ocelot, QS_XTR_RD, grp);
++ else
++ *rval = val;
++
++ return bytes_valid;
++ case XTR_ESCAPE:
++ *rval = ocelot_read_rix(ocelot, QS_XTR_RD, grp);
++
++ return 4;
++ default:
++ *rval = val;
++
++ return 4;
++ }
++}
++
++static irqreturn_t ocelot_xtr_irq_handler(int irq, void *arg)
++{
++ struct ocelot *ocelot = arg;
++ int i = 0, grp = 0;
++ int err = 0;
++
++ if (!(ocelot_read(ocelot, QS_XTR_DATA_PRESENT) & BIT(grp)))
++ return IRQ_NONE;
++
++ do {
++ struct skb_shared_hwtstamps *shhwtstamps;
++ struct ocelot_port_private *priv;
++ struct ocelot_port *ocelot_port;
++ u64 tod_in_ns, full_ts_in_ns;
++ struct frame_info info = {};
++ struct net_device *dev;
++ u32 ifh[4], val, *buf;
++ struct timespec64 ts;
++ int sz, len, buf_len;
++ struct sk_buff *skb;
++
++ for (i = 0; i < OCELOT_TAG_LEN / 4; i++) {
++ err = ocelot_rx_frame_word(ocelot, grp, true, &ifh[i]);
++ if (err != 4)
++ break;
++ }
++
++ if (err != 4)
++ break;
++
++ /* At this point the IFH was read correctly, so it is safe to
++ * presume that there is no error. The err needs to be reset
++ * otherwise a frame could come in CPU queue between the while
++ * condition and the check for error later on. And in that case
++ * the new frame is just removed and not processed.
++ */
++ err = 0;
++
++ ocelot_parse_ifh(ifh, &info);
++
++ ocelot_port = ocelot->ports[info.port];
++ priv = container_of(ocelot_port, struct ocelot_port_private,
++ port);
++ dev = priv->dev;
++
++ skb = netdev_alloc_skb(dev, info.len);
++
++ if (unlikely(!skb)) {
++ netdev_err(dev, "Unable to allocate sk_buff\n");
++ err = -ENOMEM;
++ break;
++ }
++ buf_len = info.len - ETH_FCS_LEN;
++ buf = (u32 *)skb_put(skb, buf_len);
++
++ len = 0;
++ do {
++ sz = ocelot_rx_frame_word(ocelot, grp, false, &val);
++ *buf++ = val;
++ len += sz;
++ } while (len < buf_len);
++
++ /* Read the FCS */
++ sz = ocelot_rx_frame_word(ocelot, grp, false, &val);
++ /* Update the statistics if part of the FCS was read before */
++ len -= ETH_FCS_LEN - sz;
++
++ if (unlikely(dev->features & NETIF_F_RXFCS)) {
++ buf = (u32 *)skb_put(skb, ETH_FCS_LEN);
++ *buf = val;
++ }
++
++ if (sz < 0) {
++ err = sz;
++ break;
++ }
++
++ if (ocelot->ptp) {
++ ocelot_ptp_gettime64(&ocelot->ptp_info, &ts);
++
++ tod_in_ns = ktime_set(ts.tv_sec, ts.tv_nsec);
++ if ((tod_in_ns & 0xffffffff) < info.timestamp)
++ full_ts_in_ns = (((tod_in_ns >> 32) - 1) << 32) |
++ info.timestamp;
++ else
++ full_ts_in_ns = (tod_in_ns & GENMASK_ULL(63, 32)) |
++ info.timestamp;
++
++ shhwtstamps = skb_hwtstamps(skb);
++ memset(shhwtstamps, 0, sizeof(struct skb_shared_hwtstamps));
++ shhwtstamps->hwtstamp = full_ts_in_ns;
++ }
++
++ /* Everything we see on an interface that is in the HW bridge
++ * has already been forwarded.
++ */
++ if (ocelot->bridge_mask & BIT(info.port))
++ skb->offload_fwd_mark = 1;
++
++ skb->protocol = eth_type_trans(skb, dev);
++ if (!skb_defer_rx_timestamp(skb))
++ netif_rx(skb);
++ dev->stats.rx_bytes += len;
++ dev->stats.rx_packets++;
++ } while (ocelot_read(ocelot, QS_XTR_DATA_PRESENT) & BIT(grp));
++
++ if (err)
++ while (ocelot_read(ocelot, QS_XTR_DATA_PRESENT) & BIT(grp))
++ ocelot_read_rix(ocelot, QS_XTR_RD, grp);
++
++ return IRQ_HANDLED;
++}
++
++static irqreturn_t ocelot_ptp_rdy_irq_handler(int irq, void *arg)
++{
++ struct ocelot *ocelot = arg;
++
++ ocelot_get_txtstamp(ocelot);
++
++ return IRQ_HANDLED;
++}
++
++static const struct of_device_id mscc_ocelot_match[] = {
++ { .compatible = "mscc,vsc7514-switch" },
++ { }
++};
++MODULE_DEVICE_TABLE(of, mscc_ocelot_match);
++
++static int ocelot_reset(struct ocelot *ocelot)
++{
++ int retries = 100;
++ u32 val;
++
++ regmap_field_write(ocelot->regfields[SYS_RESET_CFG_MEM_INIT], 1);
++ regmap_field_write(ocelot->regfields[SYS_RESET_CFG_MEM_ENA], 1);
++
++ do {
++ msleep(1);
++ regmap_field_read(ocelot->regfields[SYS_RESET_CFG_MEM_INIT],
++ &val);
++ } while (val && --retries);
++
++ if (!retries)
++ return -ETIMEDOUT;
++
++ regmap_field_write(ocelot->regfields[SYS_RESET_CFG_MEM_ENA], 1);
++ regmap_field_write(ocelot->regfields[SYS_RESET_CFG_CORE_ENA], 1);
++
++ return 0;
++}
++
++/* Watermark encode
++ * Bit 8: Unit; 0:1, 1:16
++ * Bit 7-0: Value to be multiplied with unit
++ */
++static u16 ocelot_wm_enc(u16 value)
++{
++ if (value >= BIT(8))
++ return BIT(8) | (value / 16);
++
++ return value;
++}
++
++static const struct ocelot_ops ocelot_ops = {
++ .reset = ocelot_reset,
++ .wm_enc = ocelot_wm_enc,
++};
++
++static const struct vcap_field vsc7514_vcap_is2_keys[] = {
++ /* Common: 46 bits */
++ [VCAP_IS2_TYPE] = { 0, 4},
++ [VCAP_IS2_HK_FIRST] = { 4, 1},
++ [VCAP_IS2_HK_PAG] = { 5, 8},
++ [VCAP_IS2_HK_IGR_PORT_MASK] = { 13, 12},
++ [VCAP_IS2_HK_RSV2] = { 25, 1},
++ [VCAP_IS2_HK_HOST_MATCH] = { 26, 1},
++ [VCAP_IS2_HK_L2_MC] = { 27, 1},
++ [VCAP_IS2_HK_L2_BC] = { 28, 1},
++ [VCAP_IS2_HK_VLAN_TAGGED] = { 29, 1},
++ [VCAP_IS2_HK_VID] = { 30, 12},
++ [VCAP_IS2_HK_DEI] = { 42, 1},
++ [VCAP_IS2_HK_PCP] = { 43, 3},
++ /* MAC_ETYPE / MAC_LLC / MAC_SNAP / OAM common */
++ [VCAP_IS2_HK_L2_DMAC] = { 46, 48},
++ [VCAP_IS2_HK_L2_SMAC] = { 94, 48},
++ /* MAC_ETYPE (TYPE=000) */
++ [VCAP_IS2_HK_MAC_ETYPE_ETYPE] = {142, 16},
++ [VCAP_IS2_HK_MAC_ETYPE_L2_PAYLOAD0] = {158, 16},
++ [VCAP_IS2_HK_MAC_ETYPE_L2_PAYLOAD1] = {174, 8},
++ [VCAP_IS2_HK_MAC_ETYPE_L2_PAYLOAD2] = {182, 3},
++ /* MAC_LLC (TYPE=001) */
++ [VCAP_IS2_HK_MAC_LLC_L2_LLC] = {142, 40},
++ /* MAC_SNAP (TYPE=010) */
++ [VCAP_IS2_HK_MAC_SNAP_L2_SNAP] = {142, 40},
++ /* MAC_ARP (TYPE=011) */
++ [VCAP_IS2_HK_MAC_ARP_SMAC] = { 46, 48},
++ [VCAP_IS2_HK_MAC_ARP_ADDR_SPACE_OK] = { 94, 1},
++ [VCAP_IS2_HK_MAC_ARP_PROTO_SPACE_OK] = { 95, 1},
++ [VCAP_IS2_HK_MAC_ARP_LEN_OK] = { 96, 1},
++ [VCAP_IS2_HK_MAC_ARP_TARGET_MATCH] = { 97, 1},
++ [VCAP_IS2_HK_MAC_ARP_SENDER_MATCH] = { 98, 1},
++ [VCAP_IS2_HK_MAC_ARP_OPCODE_UNKNOWN] = { 99, 1},
++ [VCAP_IS2_HK_MAC_ARP_OPCODE] = {100, 2},
++ [VCAP_IS2_HK_MAC_ARP_L3_IP4_DIP] = {102, 32},
++ [VCAP_IS2_HK_MAC_ARP_L3_IP4_SIP] = {134, 32},
++ [VCAP_IS2_HK_MAC_ARP_DIP_EQ_SIP] = {166, 1},
++ /* IP4_TCP_UDP / IP4_OTHER common */
++ [VCAP_IS2_HK_IP4] = { 46, 1},
++ [VCAP_IS2_HK_L3_FRAGMENT] = { 47, 1},
++ [VCAP_IS2_HK_L3_FRAG_OFS_GT0] = { 48, 1},
++ [VCAP_IS2_HK_L3_OPTIONS] = { 49, 1},
++ [VCAP_IS2_HK_IP4_L3_TTL_GT0] = { 50, 1},
++ [VCAP_IS2_HK_L3_TOS] = { 51, 8},
++ [VCAP_IS2_HK_L3_IP4_DIP] = { 59, 32},
++ [VCAP_IS2_HK_L3_IP4_SIP] = { 91, 32},
++ [VCAP_IS2_HK_DIP_EQ_SIP] = {123, 1},
++ /* IP4_TCP_UDP (TYPE=100) */
++ [VCAP_IS2_HK_TCP] = {124, 1},
++ [VCAP_IS2_HK_L4_SPORT] = {125, 16},
++ [VCAP_IS2_HK_L4_DPORT] = {141, 16},
++ [VCAP_IS2_HK_L4_RNG] = {157, 8},
++ [VCAP_IS2_HK_L4_SPORT_EQ_DPORT] = {165, 1},
++ [VCAP_IS2_HK_L4_SEQUENCE_EQ0] = {166, 1},
++ [VCAP_IS2_HK_L4_URG] = {167, 1},
++ [VCAP_IS2_HK_L4_ACK] = {168, 1},
++ [VCAP_IS2_HK_L4_PSH] = {169, 1},
++ [VCAP_IS2_HK_L4_RST] = {170, 1},
++ [VCAP_IS2_HK_L4_SYN] = {171, 1},
++ [VCAP_IS2_HK_L4_FIN] = {172, 1},
++ [VCAP_IS2_HK_L4_1588_DOM] = {173, 8},
++ [VCAP_IS2_HK_L4_1588_VER] = {181, 4},
++ /* IP4_OTHER (TYPE=101) */
++ [VCAP_IS2_HK_IP4_L3_PROTO] = {124, 8},
++ [VCAP_IS2_HK_L3_PAYLOAD] = {132, 56},
++ /* IP6_STD (TYPE=110) */
++ [VCAP_IS2_HK_IP6_L3_TTL_GT0] = { 46, 1},
++ [VCAP_IS2_HK_L3_IP6_SIP] = { 47, 128},
++ [VCAP_IS2_HK_IP6_L3_PROTO] = {175, 8},
++ /* OAM (TYPE=111) */
++ [VCAP_IS2_HK_OAM_MEL_FLAGS] = {142, 7},
++ [VCAP_IS2_HK_OAM_VER] = {149, 5},
++ [VCAP_IS2_HK_OAM_OPCODE] = {154, 8},
++ [VCAP_IS2_HK_OAM_FLAGS] = {162, 8},
++ [VCAP_IS2_HK_OAM_MEPID] = {170, 16},
++ [VCAP_IS2_HK_OAM_CCM_CNTS_EQ0] = {186, 1},
++ [VCAP_IS2_HK_OAM_IS_Y1731] = {187, 1},
++};
++
++static const struct vcap_field vsc7514_vcap_is2_actions[] = {
++ [VCAP_IS2_ACT_HIT_ME_ONCE] = { 0, 1},
++ [VCAP_IS2_ACT_CPU_COPY_ENA] = { 1, 1},
++ [VCAP_IS2_ACT_CPU_QU_NUM] = { 2, 3},
++ [VCAP_IS2_ACT_MASK_MODE] = { 5, 2},
++ [VCAP_IS2_ACT_MIRROR_ENA] = { 7, 1},
++ [VCAP_IS2_ACT_LRN_DIS] = { 8, 1},
++ [VCAP_IS2_ACT_POLICE_ENA] = { 9, 1},
++ [VCAP_IS2_ACT_POLICE_IDX] = { 10, 9},
++ [VCAP_IS2_ACT_POLICE_VCAP_ONLY] = { 19, 1},
++ [VCAP_IS2_ACT_PORT_MASK] = { 20, 11},
++ [VCAP_IS2_ACT_REW_OP] = { 31, 9},
++ [VCAP_IS2_ACT_SMAC_REPLACE_ENA] = { 40, 1},
++ [VCAP_IS2_ACT_RSV] = { 41, 2},
++ [VCAP_IS2_ACT_ACL_ID] = { 43, 6},
++ [VCAP_IS2_ACT_HIT_CNT] = { 49, 32},
++};
++
++static const struct vcap_props vsc7514_vcap_props[] = {
++ [VCAP_IS2] = {
++ .tg_width = 2,
++ .sw_count = 4,
++ .entry_count = VSC7514_VCAP_IS2_CNT,
++ .entry_width = VSC7514_VCAP_IS2_ENTRY_WIDTH,
++ .action_count = VSC7514_VCAP_IS2_CNT +
++ VSC7514_VCAP_PORT_CNT + 2,
++ .action_width = 99,
++ .action_type_width = 1,
++ .action_table = {
++ [IS2_ACTION_TYPE_NORMAL] = {
++ .width = 49,
++ .count = 2
++ },
++ [IS2_ACTION_TYPE_SMAC_SIP] = {
++ .width = 6,
++ .count = 4
++ },
++ },
++ .counter_words = 4,
++ .counter_width = 32,
++ },
++};
++
++static struct ptp_clock_info ocelot_ptp_clock_info = {
++ .owner = THIS_MODULE,
++ .name = "ocelot ptp",
++ .max_adj = 0x7fffffff,
++ .n_alarm = 0,
++ .n_ext_ts = 0,
++ .n_per_out = OCELOT_PTP_PINS_NUM,
++ .n_pins = OCELOT_PTP_PINS_NUM,
++ .pps = 0,
++ .gettime64 = ocelot_ptp_gettime64,
++ .settime64 = ocelot_ptp_settime64,
++ .adjtime = ocelot_ptp_adjtime,
++ .adjfine = ocelot_ptp_adjfine,
++ .verify = ocelot_ptp_verify,
++ .enable = ocelot_ptp_enable,
++};
++
++static int mscc_ocelot_probe(struct platform_device *pdev)
++{
++ struct device_node *np = pdev->dev.of_node;
++ struct device_node *ports, *portnp;
++ int err, irq_xtr, irq_ptp_rdy;
++ struct ocelot *ocelot;
++ struct regmap *hsio;
++ unsigned int i;
++
++ struct {
++ enum ocelot_target id;
++ char *name;
++ u8 optional:1;
++ } io_target[] = {
++ { SYS, "sys" },
++ { REW, "rew" },
++ { QSYS, "qsys" },
++ { ANA, "ana" },
++ { QS, "qs" },
++ { S2, "s2" },
++ { PTP, "ptp", 1 },
++ };
++
++ if (!np && !pdev->dev.platform_data)
++ return -ENODEV;
++
++ ocelot = devm_kzalloc(&pdev->dev, sizeof(*ocelot), GFP_KERNEL);
++ if (!ocelot)
++ return -ENOMEM;
++
++ platform_set_drvdata(pdev, ocelot);
++ ocelot->dev = &pdev->dev;
++
++ for (i = 0; i < ARRAY_SIZE(io_target); i++) {
++ struct regmap *target;
++ struct resource *res;
++
++ res = platform_get_resource_byname(pdev, IORESOURCE_MEM,
++ io_target[i].name);
++
++ target = ocelot_regmap_init(ocelot, res);
++ if (IS_ERR(target)) {
++ if (io_target[i].optional) {
++ ocelot->targets[io_target[i].id] = NULL;
++ continue;
++ }
++ return PTR_ERR(target);
++ }
++
++ ocelot->targets[io_target[i].id] = target;
++ }
++
++ hsio = syscon_regmap_lookup_by_compatible("mscc,ocelot-hsio");
++ if (IS_ERR(hsio)) {
++ dev_err(&pdev->dev, "missing hsio syscon\n");
++ return PTR_ERR(hsio);
++ }
++
++ ocelot->targets[HSIO] = hsio;
++
++ err = ocelot_chip_init(ocelot, &ocelot_ops);
++ if (err)
++ return err;
++
++ irq_xtr = platform_get_irq_byname(pdev, "xtr");
++ if (irq_xtr < 0)
++ return -ENODEV;
++
++ err = devm_request_threaded_irq(&pdev->dev, irq_xtr, NULL,
++ ocelot_xtr_irq_handler, IRQF_ONESHOT,
++ "frame extraction", ocelot);
++ if (err)
++ return err;
++
++ irq_ptp_rdy = platform_get_irq_byname(pdev, "ptp_rdy");
++ if (irq_ptp_rdy > 0 && ocelot->targets[PTP]) {
++ err = devm_request_threaded_irq(&pdev->dev, irq_ptp_rdy, NULL,
++ ocelot_ptp_rdy_irq_handler,
++ IRQF_ONESHOT, "ptp ready",
++ ocelot);
++ if (err)
++ return err;
++
++ /* Both the PTP interrupt and the PTP bank are available */
++ ocelot->ptp = 1;
++ }
++
++ ports = of_get_child_by_name(np, "ethernet-ports");
++ if (!ports) {
++ dev_err(&pdev->dev, "no ethernet-ports child node found\n");
++ return -ENODEV;
++ }
++
++ ocelot->num_phys_ports = of_get_child_count(ports);
++
++ ocelot->ports = devm_kcalloc(&pdev->dev, ocelot->num_phys_ports,
++ sizeof(struct ocelot_port *), GFP_KERNEL);
++
++ ocelot->vcap_is2_keys = vsc7514_vcap_is2_keys;
++ ocelot->vcap_is2_actions = vsc7514_vcap_is2_actions;
++ ocelot->vcap = vsc7514_vcap_props;
++
++ ocelot_init(ocelot);
++ if (ocelot->ptp) {
++ err = ocelot_init_timestamp(ocelot, &ocelot_ptp_clock_info);
++ if (err) {
++ dev_err(ocelot->dev,
++ "Timestamp initialization failed\n");
++ ocelot->ptp = 0;
++ }
++ }
++
++ /* No NPI port */
++ ocelot_configure_cpu(ocelot, -1, OCELOT_TAG_PREFIX_NONE,
++ OCELOT_TAG_PREFIX_NONE);
++
++ for_each_available_child_of_node(ports, portnp) {
++ struct ocelot_port_private *priv;
++ struct ocelot_port *ocelot_port;
++ struct device_node *phy_node;
++ phy_interface_t phy_mode;
++ struct phy_device *phy;
++ struct resource *res;
++ struct phy *serdes;
++ void __iomem *regs;
++ char res_name[8];
++ u32 port;
++
++ if (of_property_read_u32(portnp, "reg", &port))
++ continue;
++
++ snprintf(res_name, sizeof(res_name), "port%d", port);
++
++ res = platform_get_resource_byname(pdev, IORESOURCE_MEM,
++ res_name);
++ regs = devm_ioremap_resource(&pdev->dev, res);
++ if (IS_ERR(regs))
++ continue;
++
++ phy_node = of_parse_phandle(portnp, "phy-handle", 0);
++ if (!phy_node)
++ continue;
++
++ phy = of_phy_find_device(phy_node);
++ of_node_put(phy_node);
++ if (!phy)
++ continue;
++
++ err = ocelot_probe_port(ocelot, port, regs, phy);
++ if (err) {
++ of_node_put(portnp);
++ goto out_put_ports;
++ }
++
++ ocelot_port = ocelot->ports[port];
++ priv = container_of(ocelot_port, struct ocelot_port_private,
++ port);
++
++ of_get_phy_mode(portnp, &phy_mode);
++
++ ocelot_port->phy_mode = phy_mode;
++
++ switch (ocelot_port->phy_mode) {
++ case PHY_INTERFACE_MODE_NA:
++ continue;
++ case PHY_INTERFACE_MODE_SGMII:
++ break;
++ case PHY_INTERFACE_MODE_QSGMII:
++ /* Ensure clock signals and speed is set on all
++ * QSGMII links
++ */
++ ocelot_port_writel(ocelot_port,
++ DEV_CLOCK_CFG_LINK_SPEED
++ (OCELOT_SPEED_1000),
++ DEV_CLOCK_CFG);
++ break;
++ default:
++ dev_err(ocelot->dev,
++ "invalid phy mode for port%d, (Q)SGMII only\n",
++ port);
++ of_node_put(portnp);
++ err = -EINVAL;
++ goto out_put_ports;
++ }
++
++ serdes = devm_of_phy_get(ocelot->dev, portnp, NULL);
++ if (IS_ERR(serdes)) {
++ err = PTR_ERR(serdes);
++ if (err == -EPROBE_DEFER)
++ dev_dbg(ocelot->dev, "deferring probe\n");
++ else
++ dev_err(ocelot->dev,
++ "missing SerDes phys for port%d\n",
++ port);
++
++ of_node_put(portnp);
++ goto out_put_ports;
++ }
++
++ priv->serdes = serdes;
++ }
++
++ register_netdevice_notifier(&ocelot_netdevice_nb);
++ register_switchdev_notifier(&ocelot_switchdev_nb);
++ register_switchdev_blocking_notifier(&ocelot_switchdev_blocking_nb);
++
++ dev_info(&pdev->dev, "Ocelot switch probed\n");
++
++out_put_ports:
++ of_node_put(ports);
++ return err;
++}
++
++static int mscc_ocelot_remove(struct platform_device *pdev)
++{
++ struct ocelot *ocelot = platform_get_drvdata(pdev);
++
++ ocelot_deinit_timestamp(ocelot);
++ ocelot_deinit(ocelot);
++ unregister_switchdev_blocking_notifier(&ocelot_switchdev_blocking_nb);
++ unregister_switchdev_notifier(&ocelot_switchdev_nb);
++ unregister_netdevice_notifier(&ocelot_netdevice_nb);
++
++ return 0;
++}
++
++static struct platform_driver mscc_ocelot_driver = {
++ .probe = mscc_ocelot_probe,
++ .remove = mscc_ocelot_remove,
++ .driver = {
++ .name = "ocelot-switch",
++ .of_match_table = mscc_ocelot_match,
++ },
++};
++
++module_platform_driver(mscc_ocelot_driver);
++
++MODULE_DESCRIPTION("Microsemi Ocelot switch driver");
++MODULE_AUTHOR("Alexandre Belloni <alexandre.belloni@bootlin.com>");
++MODULE_LICENSE("Dual MIT/GPL");
+diff --git a/drivers/net/ethernet/realtek/r8169_main.c b/drivers/net/ethernet/realtek/r8169_main.c
+index b660ddbe40251..fe173ea894e2c 100644
+--- a/drivers/net/ethernet/realtek/r8169_main.c
++++ b/drivers/net/ethernet/realtek/r8169_main.c
+@@ -2113,11 +2113,18 @@ static void rtl_release_firmware(struct rtl8169_private *tp)
+
+ void r8169_apply_firmware(struct rtl8169_private *tp)
+ {
++ int val;
++
+ /* TODO: release firmware if rtl_fw_write_firmware signals failure. */
+ if (tp->rtl_fw) {
+ rtl_fw_write_firmware(tp, tp->rtl_fw);
+ /* At least one firmware doesn't reset tp->ocp_base. */
+ tp->ocp_base = OCP_STD_PHY_BASE;
++
++ /* PHY soft reset may still be in progress */
++ phy_read_poll_timeout(tp->phydev, MII_BMCR, val,
++ !(val & BMCR_RESET),
++ 50000, 600000, true);
+ }
+ }
+
+@@ -2951,7 +2958,7 @@ static void rtl_hw_start_8168f_1(struct rtl8169_private *tp)
+ { 0x08, 0x0001, 0x0002 },
+ { 0x09, 0x0000, 0x0080 },
+ { 0x19, 0x0000, 0x0224 },
+- { 0x00, 0x0000, 0x0004 },
++ { 0x00, 0x0000, 0x0008 },
+ { 0x0c, 0x3df0, 0x0200 },
+ };
+
+@@ -2968,7 +2975,7 @@ static void rtl_hw_start_8411(struct rtl8169_private *tp)
+ { 0x06, 0x00c0, 0x0020 },
+ { 0x0f, 0xffff, 0x5200 },
+ { 0x19, 0x0000, 0x0224 },
+- { 0x00, 0x0000, 0x0004 },
++ { 0x00, 0x0000, 0x0008 },
+ { 0x0c, 0x3df0, 0x0200 },
+ };
+
+diff --git a/drivers/net/ethernet/renesas/ravb_main.c b/drivers/net/ethernet/renesas/ravb_main.c
+index df89d09b253e2..99f7aae102ce1 100644
+--- a/drivers/net/ethernet/renesas/ravb_main.c
++++ b/drivers/net/ethernet/renesas/ravb_main.c
+@@ -1342,51 +1342,6 @@ static inline int ravb_hook_irq(unsigned int irq, irq_handler_t handler,
+ return error;
+ }
+
+-/* MDIO bus init function */
+-static int ravb_mdio_init(struct ravb_private *priv)
+-{
+- struct platform_device *pdev = priv->pdev;
+- struct device *dev = &pdev->dev;
+- int error;
+-
+- /* Bitbang init */
+- priv->mdiobb.ops = &bb_ops;
+-
+- /* MII controller setting */
+- priv->mii_bus = alloc_mdio_bitbang(&priv->mdiobb);
+- if (!priv->mii_bus)
+- return -ENOMEM;
+-
+- /* Hook up MII support for ethtool */
+- priv->mii_bus->name = "ravb_mii";
+- priv->mii_bus->parent = dev;
+- snprintf(priv->mii_bus->id, MII_BUS_ID_SIZE, "%s-%x",
+- pdev->name, pdev->id);
+-
+- /* Register MDIO bus */
+- error = of_mdiobus_register(priv->mii_bus, dev->of_node);
+- if (error)
+- goto out_free_bus;
+-
+- return 0;
+-
+-out_free_bus:
+- free_mdio_bitbang(priv->mii_bus);
+- return error;
+-}
+-
+-/* MDIO bus release function */
+-static int ravb_mdio_release(struct ravb_private *priv)
+-{
+- /* Unregister mdio bus */
+- mdiobus_unregister(priv->mii_bus);
+-
+- /* Free bitbang info */
+- free_mdio_bitbang(priv->mii_bus);
+-
+- return 0;
+-}
+-
+ /* Network device open function for Ethernet AVB */
+ static int ravb_open(struct net_device *ndev)
+ {
+@@ -1395,13 +1350,6 @@ static int ravb_open(struct net_device *ndev)
+ struct device *dev = &pdev->dev;
+ int error;
+
+- /* MDIO bus init */
+- error = ravb_mdio_init(priv);
+- if (error) {
+- netdev_err(ndev, "failed to initialize MDIO\n");
+- return error;
+- }
+-
+ napi_enable(&priv->napi[RAVB_BE]);
+ napi_enable(&priv->napi[RAVB_NC]);
+
+@@ -1479,7 +1427,6 @@ out_free_irq:
+ out_napi_off:
+ napi_disable(&priv->napi[RAVB_NC]);
+ napi_disable(&priv->napi[RAVB_BE]);
+- ravb_mdio_release(priv);
+ return error;
+ }
+
+@@ -1789,8 +1736,6 @@ static int ravb_close(struct net_device *ndev)
+ ravb_ring_free(ndev, RAVB_BE);
+ ravb_ring_free(ndev, RAVB_NC);
+
+- ravb_mdio_release(priv);
+-
+ return 0;
+ }
+
+@@ -1942,6 +1887,51 @@ static const struct net_device_ops ravb_netdev_ops = {
+ .ndo_set_features = ravb_set_features,
+ };
+
++/* MDIO bus init function */
++static int ravb_mdio_init(struct ravb_private *priv)
++{
++ struct platform_device *pdev = priv->pdev;
++ struct device *dev = &pdev->dev;
++ int error;
++
++ /* Bitbang init */
++ priv->mdiobb.ops = &bb_ops;
++
++ /* MII controller setting */
++ priv->mii_bus = alloc_mdio_bitbang(&priv->mdiobb);
++ if (!priv->mii_bus)
++ return -ENOMEM;
++
++ /* Hook up MII support for ethtool */
++ priv->mii_bus->name = "ravb_mii";
++ priv->mii_bus->parent = dev;
++ snprintf(priv->mii_bus->id, MII_BUS_ID_SIZE, "%s-%x",
++ pdev->name, pdev->id);
++
++ /* Register MDIO bus */
++ error = of_mdiobus_register(priv->mii_bus, dev->of_node);
++ if (error)
++ goto out_free_bus;
++
++ return 0;
++
++out_free_bus:
++ free_mdio_bitbang(priv->mii_bus);
++ return error;
++}
++
++/* MDIO bus release function */
++static int ravb_mdio_release(struct ravb_private *priv)
++{
++ /* Unregister mdio bus */
++ mdiobus_unregister(priv->mii_bus);
++
++ /* Free bitbang info */
++ free_mdio_bitbang(priv->mii_bus);
++
++ return 0;
++}
++
+ static const struct of_device_id ravb_match_table[] = {
+ { .compatible = "renesas,etheravb-r8a7790", .data = (void *)RCAR_GEN2 },
+ { .compatible = "renesas,etheravb-r8a7794", .data = (void *)RCAR_GEN2 },
+@@ -2184,6 +2174,13 @@ static int ravb_probe(struct platform_device *pdev)
+ eth_hw_addr_random(ndev);
+ }
+
++ /* MDIO bus init */
++ error = ravb_mdio_init(priv);
++ if (error) {
++ dev_err(&pdev->dev, "failed to initialize MDIO\n");
++ goto out_dma_free;
++ }
++
+ netif_napi_add(ndev, &priv->napi[RAVB_BE], ravb_poll, 64);
+ netif_napi_add(ndev, &priv->napi[RAVB_NC], ravb_poll, 64);
+
+@@ -2205,6 +2202,8 @@ static int ravb_probe(struct platform_device *pdev)
+ out_napi_del:
+ netif_napi_del(&priv->napi[RAVB_NC]);
+ netif_napi_del(&priv->napi[RAVB_BE]);
++ ravb_mdio_release(priv);
++out_dma_free:
+ dma_free_coherent(ndev->dev.parent, priv->desc_bat_size, priv->desc_bat,
+ priv->desc_bat_dma);
+
+@@ -2236,6 +2235,7 @@ static int ravb_remove(struct platform_device *pdev)
+ unregister_netdev(ndev);
+ netif_napi_del(&priv->napi[RAVB_NC]);
+ netif_napi_del(&priv->napi[RAVB_BE]);
++ ravb_mdio_release(priv);
+ pm_runtime_disable(&pdev->dev);
+ free_netdev(ndev);
+ platform_set_drvdata(pdev, NULL);
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-intel.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-intel.c
+index 2ac9dfb3462c6..9e6d60e75f85d 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-intel.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-intel.c
+@@ -653,7 +653,6 @@ static void intel_eth_pci_remove(struct pci_dev *pdev)
+
+ pci_free_irq_vectors(pdev);
+
+- clk_disable_unprepare(priv->plat->stmmac_clk);
+ clk_unregister_fixed_rate(priv->plat->stmmac_clk);
+
+ pcim_iounmap_regions(pdev, BIT(0));
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac.h b/drivers/net/ethernet/stmicro/stmmac/stmmac.h
+index 9c02fc754bf1b..545696971f65e 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac.h
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac.h
+@@ -203,6 +203,8 @@ struct stmmac_priv {
+ int eee_enabled;
+ int eee_active;
+ int tx_lpi_timer;
++ int tx_lpi_enabled;
++ int eee_tw_timer;
+ unsigned int mode;
+ unsigned int chain_mode;
+ int extend_desc;
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_ethtool.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_ethtool.c
+index eae11c5850251..b82c6715f95f3 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_ethtool.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_ethtool.c
+@@ -652,6 +652,7 @@ static int stmmac_ethtool_op_get_eee(struct net_device *dev,
+ edata->eee_enabled = priv->eee_enabled;
+ edata->eee_active = priv->eee_active;
+ edata->tx_lpi_timer = priv->tx_lpi_timer;
++ edata->tx_lpi_enabled = priv->tx_lpi_enabled;
+
+ return phylink_ethtool_get_eee(priv->phylink, edata);
+ }
+@@ -662,24 +663,26 @@ static int stmmac_ethtool_op_set_eee(struct net_device *dev,
+ struct stmmac_priv *priv = netdev_priv(dev);
+ int ret;
+
+- if (!edata->eee_enabled) {
++ if (!priv->dma_cap.eee)
++ return -EOPNOTSUPP;
++
++ if (priv->tx_lpi_enabled != edata->tx_lpi_enabled)
++ netdev_warn(priv->dev,
++ "Setting EEE tx-lpi is not supported\n");
++
++ if (!edata->eee_enabled)
+ stmmac_disable_eee_mode(priv);
+- } else {
+- /* We are asking for enabling the EEE but it is safe
+- * to verify all by invoking the eee_init function.
+- * In case of failure it will return an error.
+- */
+- edata->eee_enabled = stmmac_eee_init(priv);
+- if (!edata->eee_enabled)
+- return -EOPNOTSUPP;
+- }
+
+ ret = phylink_ethtool_set_eee(priv->phylink, edata);
+ if (ret)
+ return ret;
+
+- priv->eee_enabled = edata->eee_enabled;
+- priv->tx_lpi_timer = edata->tx_lpi_timer;
++ if (edata->eee_enabled &&
++ priv->tx_lpi_timer != edata->tx_lpi_timer) {
++ priv->tx_lpi_timer = edata->tx_lpi_timer;
++ stmmac_eee_init(priv);
++ }
++
+ return 0;
+ }
+
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+index 73677c3b33b65..73465e5f5a417 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+@@ -94,7 +94,7 @@ static const u32 default_msg_level = (NETIF_MSG_DRV | NETIF_MSG_PROBE |
+ static int eee_timer = STMMAC_DEFAULT_LPI_TIMER;
+ module_param(eee_timer, int, 0644);
+ MODULE_PARM_DESC(eee_timer, "LPI tx expiration time in msec");
+-#define STMMAC_LPI_T(x) (jiffies + msecs_to_jiffies(x))
++#define STMMAC_LPI_T(x) (jiffies + usecs_to_jiffies(x))
+
+ /* By default the driver will use the ring mode to manage tx and rx descriptors,
+ * but allow user to force to use the chain instead of the ring
+@@ -370,7 +370,7 @@ static void stmmac_eee_ctrl_timer(struct timer_list *t)
+ struct stmmac_priv *priv = from_timer(priv, t, eee_ctrl_timer);
+
+ stmmac_enable_eee_mode(priv);
+- mod_timer(&priv->eee_ctrl_timer, STMMAC_LPI_T(eee_timer));
++ mod_timer(&priv->eee_ctrl_timer, STMMAC_LPI_T(priv->tx_lpi_timer));
+ }
+
+ /**
+@@ -383,7 +383,7 @@ static void stmmac_eee_ctrl_timer(struct timer_list *t)
+ */
+ bool stmmac_eee_init(struct stmmac_priv *priv)
+ {
+- int tx_lpi_timer = priv->tx_lpi_timer;
++ int eee_tw_timer = priv->eee_tw_timer;
+
+ /* Using PCS we cannot dial with the phy registers at this stage
+ * so we do not support extra feature like EEE.
+@@ -403,7 +403,7 @@ bool stmmac_eee_init(struct stmmac_priv *priv)
+ if (priv->eee_enabled) {
+ netdev_dbg(priv->dev, "disable EEE\n");
+ del_timer_sync(&priv->eee_ctrl_timer);
+- stmmac_set_eee_timer(priv, priv->hw, 0, tx_lpi_timer);
++ stmmac_set_eee_timer(priv, priv->hw, 0, eee_tw_timer);
+ }
+ mutex_unlock(&priv->lock);
+ return false;
+@@ -411,11 +411,12 @@ bool stmmac_eee_init(struct stmmac_priv *priv)
+
+ if (priv->eee_active && !priv->eee_enabled) {
+ timer_setup(&priv->eee_ctrl_timer, stmmac_eee_ctrl_timer, 0);
+- mod_timer(&priv->eee_ctrl_timer, STMMAC_LPI_T(eee_timer));
+ stmmac_set_eee_timer(priv, priv->hw, STMMAC_DEFAULT_LIT_LS,
+- tx_lpi_timer);
++ eee_tw_timer);
+ }
+
++ mod_timer(&priv->eee_ctrl_timer, STMMAC_LPI_T(priv->tx_lpi_timer));
++
+ mutex_unlock(&priv->lock);
+ netdev_dbg(priv->dev, "Energy-Efficient Ethernet initialized\n");
+ return true;
+@@ -930,6 +931,7 @@ static void stmmac_mac_link_down(struct phylink_config *config,
+
+ stmmac_mac_set(priv, priv->ioaddr, false);
+ priv->eee_active = false;
++ priv->tx_lpi_enabled = false;
+ stmmac_eee_init(priv);
+ stmmac_set_eee_pls(priv, priv->hw, false);
+ }
+@@ -1027,6 +1029,7 @@ static void stmmac_mac_link_up(struct phylink_config *config,
+ if (phy && priv->dma_cap.eee) {
+ priv->eee_active = phy_init_eee(phy, 1) >= 0;
+ priv->eee_enabled = stmmac_eee_init(priv);
++ priv->tx_lpi_enabled = priv->eee_enabled;
+ stmmac_set_eee_pls(priv, priv->hw, true);
+ }
+ }
+@@ -2057,7 +2060,7 @@ static int stmmac_tx_clean(struct stmmac_priv *priv, int budget, u32 queue)
+
+ if ((priv->eee_enabled) && (!priv->tx_path_in_lpi_mode)) {
+ stmmac_enable_eee_mode(priv);
+- mod_timer(&priv->eee_ctrl_timer, STMMAC_LPI_T(eee_timer));
++ mod_timer(&priv->eee_ctrl_timer, STMMAC_LPI_T(priv->tx_lpi_timer));
+ }
+
+ /* We still have pending packets, let's call for a new scheduling */
+@@ -2690,7 +2693,11 @@ static int stmmac_hw_setup(struct net_device *dev, bool init_ptp)
+ netdev_warn(priv->dev, "PTP init failed\n");
+ }
+
+- priv->tx_lpi_timer = STMMAC_DEFAULT_TWT_LS;
++ priv->eee_tw_timer = STMMAC_DEFAULT_TWT_LS;
++
++ /* Convert the timer from msec to usec */
++ if (!priv->tx_lpi_timer)
++ priv->tx_lpi_timer = eee_timer * 1000;
+
+ if (priv->use_riwt) {
+ if (!priv->rx_riwt)
+diff --git a/drivers/net/macsec.c b/drivers/net/macsec.c
+index 9159846b8b938..787ac2c8e74eb 100644
+--- a/drivers/net/macsec.c
++++ b/drivers/net/macsec.c
+@@ -1077,6 +1077,7 @@ static rx_handler_result_t macsec_handle_frame(struct sk_buff **pskb)
+ struct macsec_rx_sa *rx_sa;
+ struct macsec_rxh_data *rxd;
+ struct macsec_dev *macsec;
++ unsigned int len;
+ sci_t sci;
+ u32 hdr_pn;
+ bool cbit;
+@@ -1232,9 +1233,10 @@ deliver:
+ macsec_rxsc_put(rx_sc);
+
+ skb_orphan(skb);
++ len = skb->len;
+ ret = gro_cells_receive(&macsec->gro_cells, skb);
+ if (ret == NET_RX_SUCCESS)
+- count_rx(dev, skb->len);
++ count_rx(dev, len);
+ else
+ macsec->secy.netdev->stats.rx_dropped++;
+
+diff --git a/drivers/net/phy/Kconfig b/drivers/net/phy/Kconfig
+index e351d65533aa8..06146ae4c6d8d 100644
+--- a/drivers/net/phy/Kconfig
++++ b/drivers/net/phy/Kconfig
+@@ -217,6 +217,7 @@ config MDIO_THUNDER
+ depends on 64BIT
+ depends on PCI
+ select MDIO_CAVIUM
++ select MDIO_DEVRES
+ help
+ This driver supports the MDIO interfaces found on Cavium
+ ThunderX SoCs when the MDIO bus device appears as a PCI
+diff --git a/drivers/net/phy/realtek.c b/drivers/net/phy/realtek.c
+index c7229d022a27b..48ba757046cea 100644
+--- a/drivers/net/phy/realtek.c
++++ b/drivers/net/phy/realtek.c
+@@ -1,6 +1,5 @@
+ // SPDX-License-Identifier: GPL-2.0+
+-/*
+- * drivers/net/phy/realtek.c
++/* drivers/net/phy/realtek.c
+ *
+ * Driver for Realtek PHYs
+ *
+@@ -32,9 +31,9 @@
+ #define RTL8211F_TX_DELAY BIT(8)
+ #define RTL8211F_RX_DELAY BIT(3)
+
+-#define RTL8211E_TX_DELAY BIT(1)
+-#define RTL8211E_RX_DELAY BIT(2)
+-#define RTL8211E_MODE_MII_GMII BIT(3)
++#define RTL8211E_CTRL_DELAY BIT(13)
++#define RTL8211E_TX_DELAY BIT(12)
++#define RTL8211E_RX_DELAY BIT(11)
+
+ #define RTL8201F_ISR 0x1e
+ #define RTL8201F_IER 0x13
+@@ -246,16 +245,16 @@ static int rtl8211e_config_init(struct phy_device *phydev)
+ /* enable TX/RX delay for rgmii-* modes, and disable them for rgmii. */
+ switch (phydev->interface) {
+ case PHY_INTERFACE_MODE_RGMII:
+- val = 0;
++ val = RTL8211E_CTRL_DELAY | 0;
+ break;
+ case PHY_INTERFACE_MODE_RGMII_ID:
+- val = RTL8211E_TX_DELAY | RTL8211E_RX_DELAY;
++ val = RTL8211E_CTRL_DELAY | RTL8211E_TX_DELAY | RTL8211E_RX_DELAY;
+ break;
+ case PHY_INTERFACE_MODE_RGMII_RXID:
+- val = RTL8211E_RX_DELAY;
++ val = RTL8211E_CTRL_DELAY | RTL8211E_RX_DELAY;
+ break;
+ case PHY_INTERFACE_MODE_RGMII_TXID:
+- val = RTL8211E_TX_DELAY;
++ val = RTL8211E_CTRL_DELAY | RTL8211E_TX_DELAY;
+ break;
+ default: /* the rest of the modes imply leaving delays as is. */
+ return 0;
+@@ -263,11 +262,12 @@ static int rtl8211e_config_init(struct phy_device *phydev)
+
+ /* According to a sample driver there is a 0x1c config register on the
+ * 0xa4 extension page (0x7) layout. It can be used to disable/enable
+- * the RX/TX delays otherwise controlled by RXDLY/TXDLY pins. It can
+- * also be used to customize the whole configuration register:
+- * 8:6 = PHY Address, 5:4 = Auto-Negotiation, 3 = Interface Mode Select,
+- * 2 = RX Delay, 1 = TX Delay, 0 = SELRGV (see original PHY datasheet
+- * for details).
++ * the RX/TX delays otherwise controlled by RXDLY/TXDLY pins.
++ * The configuration register definition:
++ * 14 = reserved
++ * 13 = Force Tx RX Delay controlled by bit12 bit11,
++ * 12 = RX Delay, 11 = TX Delay
++ * 10:0 = Test && debug settings reserved by realtek
+ */
+ oldpage = phy_select_page(phydev, 0x7);
+ if (oldpage < 0)
+@@ -277,7 +277,8 @@ static int rtl8211e_config_init(struct phy_device *phydev)
+ if (ret)
+ goto err_restore_page;
+
+- ret = __phy_modify(phydev, 0x1c, RTL8211E_TX_DELAY | RTL8211E_RX_DELAY,
++ ret = __phy_modify(phydev, 0x1c, RTL8211E_CTRL_DELAY
++ | RTL8211E_TX_DELAY | RTL8211E_RX_DELAY,
+ val);
+
+ err_restore_page:
+diff --git a/drivers/net/team/team.c b/drivers/net/team/team.c
+index 8c1e02752ff61..bcc4a4c011f1f 100644
+--- a/drivers/net/team/team.c
++++ b/drivers/net/team/team.c
+@@ -287,7 +287,7 @@ inst_rollback:
+ for (i--; i >= 0; i--)
+ __team_option_inst_del_option(team, dst_opts[i]);
+
+- i = option_count - 1;
++ i = option_count;
+ alloc_rollback:
+ for (i--; i >= 0; i--)
+ kfree(dst_opts[i]);
+@@ -2112,6 +2112,7 @@ static void team_setup_by_port(struct net_device *dev,
+ dev->header_ops = port_dev->header_ops;
+ dev->type = port_dev->type;
+ dev->hard_header_len = port_dev->hard_header_len;
++ dev->needed_headroom = port_dev->needed_headroom;
+ dev->addr_len = port_dev->addr_len;
+ dev->mtu = port_dev->mtu;
+ memcpy(dev->broadcast, port_dev->broadcast, port_dev->addr_len);
+diff --git a/drivers/net/usb/ax88179_178a.c b/drivers/net/usb/ax88179_178a.c
+index a38e868e44d46..f0ef3706aad96 100644
+--- a/drivers/net/usb/ax88179_178a.c
++++ b/drivers/net/usb/ax88179_178a.c
+@@ -1823,6 +1823,7 @@ static const struct driver_info belkin_info = {
+ .status = ax88179_status,
+ .link_reset = ax88179_link_reset,
+ .reset = ax88179_reset,
++ .stop = ax88179_stop,
+ .flags = FLAG_ETHER | FLAG_FRAMING_AX,
+ .rx_fixup = ax88179_rx_fixup,
+ .tx_fixup = ax88179_tx_fixup,
+diff --git a/drivers/net/usb/rtl8150.c b/drivers/net/usb/rtl8150.c
+index e7c630d375899..63a4da0b2d6dd 100644
+--- a/drivers/net/usb/rtl8150.c
++++ b/drivers/net/usb/rtl8150.c
+@@ -274,12 +274,20 @@ static int write_mii_word(rtl8150_t * dev, u8 phy, __u8 indx, u16 reg)
+ return 1;
+ }
+
+-static inline void set_ethernet_addr(rtl8150_t * dev)
++static void set_ethernet_addr(rtl8150_t *dev)
+ {
+- u8 node_id[6];
++ u8 node_id[ETH_ALEN];
++ int ret;
++
++ ret = get_registers(dev, IDR, sizeof(node_id), node_id);
+
+- get_registers(dev, IDR, sizeof(node_id), node_id);
+- memcpy(dev->netdev->dev_addr, node_id, sizeof(node_id));
++ if (ret == sizeof(node_id)) {
++ ether_addr_copy(dev->netdev->dev_addr, node_id);
++ } else {
++ eth_hw_addr_random(dev->netdev);
++ netdev_notice(dev->netdev, "Assigned a random MAC address: %pM\n",
++ dev->netdev->dev_addr);
++ }
+ }
+
+ static int rtl8150_set_mac_address(struct net_device *netdev, void *p)
+diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
+index ba38765dc4905..c34927b1d806e 100644
+--- a/drivers/net/virtio_net.c
++++ b/drivers/net/virtio_net.c
+@@ -63,6 +63,11 @@ static const unsigned long guest_offloads[] = {
+ VIRTIO_NET_F_GUEST_CSUM
+ };
+
++#define GUEST_OFFLOAD_LRO_MASK ((1ULL << VIRTIO_NET_F_GUEST_TSO4) | \
++ (1ULL << VIRTIO_NET_F_GUEST_TSO6) | \
++ (1ULL << VIRTIO_NET_F_GUEST_ECN) | \
++ (1ULL << VIRTIO_NET_F_GUEST_UFO))
++
+ struct virtnet_stat_desc {
+ char desc[ETH_GSTRING_LEN];
+ size_t offset;
+@@ -2547,7 +2552,8 @@ static int virtnet_set_features(struct net_device *dev,
+ if (features & NETIF_F_LRO)
+ offloads = vi->guest_offloads_capable;
+ else
+- offloads = 0;
++ offloads = vi->guest_offloads_capable &
++ ~GUEST_OFFLOAD_LRO_MASK;
+
+ err = virtnet_set_guest_offloads(vi, offloads);
+ if (err)
+diff --git a/drivers/net/vmxnet3/vmxnet3_drv.c b/drivers/net/vmxnet3/vmxnet3_drv.c
+index 2818015324b8b..336504b7531d9 100644
+--- a/drivers/net/vmxnet3/vmxnet3_drv.c
++++ b/drivers/net/vmxnet3/vmxnet3_drv.c
+@@ -1032,7 +1032,6 @@ vmxnet3_tq_xmit(struct sk_buff *skb, struct vmxnet3_tx_queue *tq,
+ /* Use temporary descriptor to avoid touching bits multiple times */
+ union Vmxnet3_GenericDesc tempTxDesc;
+ #endif
+- struct udphdr *udph;
+
+ count = txd_estimate(skb);
+
+@@ -1135,8 +1134,7 @@ vmxnet3_tq_xmit(struct sk_buff *skb, struct vmxnet3_tx_queue *tq,
+ gdesc->txd.om = VMXNET3_OM_ENCAP;
+ gdesc->txd.msscof = ctx.mss;
+
+- udph = udp_hdr(skb);
+- if (udph->check)
++ if (skb_shinfo(skb)->gso_type & SKB_GSO_UDP_TUNNEL_CSUM)
+ gdesc->txd.oco = 1;
+ } else {
+ gdesc->txd.hlen = ctx.l4_offset + ctx.l4_hdr_size;
+@@ -3371,6 +3369,7 @@ vmxnet3_probe_device(struct pci_dev *pdev,
+ .ndo_change_mtu = vmxnet3_change_mtu,
+ .ndo_fix_features = vmxnet3_fix_features,
+ .ndo_set_features = vmxnet3_set_features,
++ .ndo_features_check = vmxnet3_features_check,
+ .ndo_get_stats64 = vmxnet3_get_stats64,
+ .ndo_tx_timeout = vmxnet3_tx_timeout,
+ .ndo_set_rx_mode = vmxnet3_set_mc,
+diff --git a/drivers/net/vmxnet3/vmxnet3_ethtool.c b/drivers/net/vmxnet3/vmxnet3_ethtool.c
+index def27afa1c69f..3586677920046 100644
+--- a/drivers/net/vmxnet3/vmxnet3_ethtool.c
++++ b/drivers/net/vmxnet3/vmxnet3_ethtool.c
+@@ -267,6 +267,34 @@ netdev_features_t vmxnet3_fix_features(struct net_device *netdev,
+ return features;
+ }
+
++netdev_features_t vmxnet3_features_check(struct sk_buff *skb,
++ struct net_device *netdev,
++ netdev_features_t features)
++{
++ struct vmxnet3_adapter *adapter = netdev_priv(netdev);
++
++ /* Validate if the tunneled packet is being offloaded by the device */
++ if (VMXNET3_VERSION_GE_4(adapter) &&
++ skb->encapsulation && skb->ip_summed == CHECKSUM_PARTIAL) {
++ u8 l4_proto = 0;
++
++ switch (vlan_get_protocol(skb)) {
++ case htons(ETH_P_IP):
++ l4_proto = ip_hdr(skb)->protocol;
++ break;
++ case htons(ETH_P_IPV6):
++ l4_proto = ipv6_hdr(skb)->nexthdr;
++ break;
++ default:
++ return features & ~(NETIF_F_CSUM_MASK | NETIF_F_GSO_MASK);
++ }
++
++ if (l4_proto != IPPROTO_UDP)
++ return features & ~(NETIF_F_CSUM_MASK | NETIF_F_GSO_MASK);
++ }
++ return features;
++}
++
+ static void vmxnet3_enable_encap_offloads(struct net_device *netdev)
+ {
+ struct vmxnet3_adapter *adapter = netdev_priv(netdev);
+diff --git a/drivers/net/vmxnet3/vmxnet3_int.h b/drivers/net/vmxnet3/vmxnet3_int.h
+index 5d2b062215a27..d958b92c94299 100644
+--- a/drivers/net/vmxnet3/vmxnet3_int.h
++++ b/drivers/net/vmxnet3/vmxnet3_int.h
+@@ -470,6 +470,10 @@ vmxnet3_rq_destroy_all(struct vmxnet3_adapter *adapter);
+ netdev_features_t
+ vmxnet3_fix_features(struct net_device *netdev, netdev_features_t features);
+
++netdev_features_t
++vmxnet3_features_check(struct sk_buff *skb,
++ struct net_device *netdev, netdev_features_t features);
++
+ int
+ vmxnet3_set_features(struct net_device *netdev, netdev_features_t features);
+
+diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
+index 69165a8f7c1f0..a9d5682cdea54 100644
+--- a/drivers/nvme/host/core.c
++++ b/drivers/nvme/host/core.c
+@@ -3061,8 +3061,10 @@ static int nvme_dev_open(struct inode *inode, struct file *file)
+ }
+
+ nvme_get_ctrl(ctrl);
+- if (!try_module_get(ctrl->ops->module))
++ if (!try_module_get(ctrl->ops->module)) {
++ nvme_put_ctrl(ctrl);
+ return -EINVAL;
++ }
+
+ file->private_data = ctrl;
+ return 0;
+diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
+index 24467eea73999..9b3827a6f3a39 100644
+--- a/drivers/nvme/host/tcp.c
++++ b/drivers/nvme/host/tcp.c
+@@ -889,12 +889,11 @@ static int nvme_tcp_try_send_data(struct nvme_tcp_request *req)
+ else
+ flags |= MSG_MORE | MSG_SENDPAGE_NOTLAST;
+
+- /* can't zcopy slab pages */
+- if (unlikely(PageSlab(page))) {
+- ret = sock_no_sendpage(queue->sock, page, offset, len,
++ if (sendpage_ok(page)) {
++ ret = kernel_sendpage(queue->sock, page, offset, len,
+ flags);
+ } else {
+- ret = kernel_sendpage(queue->sock, page, offset, len,
++ ret = sock_no_sendpage(queue->sock, page, offset, len,
+ flags);
+ }
+ if (ret <= 0)
+diff --git a/drivers/platform/olpc/olpc-ec.c b/drivers/platform/olpc/olpc-ec.c
+index 190e4a6186ef7..f64b82824db28 100644
+--- a/drivers/platform/olpc/olpc-ec.c
++++ b/drivers/platform/olpc/olpc-ec.c
+@@ -439,7 +439,9 @@ static int olpc_ec_probe(struct platform_device *pdev)
+ &config);
+ if (IS_ERR(ec->dcon_rdev)) {
+ dev_err(&pdev->dev, "failed to register DCON regulator\n");
+- return PTR_ERR(ec->dcon_rdev);
++ err = PTR_ERR(ec->dcon_rdev);
++ kfree(ec);
++ return err;
+ }
+
+ ec->dbgfs_dir = olpc_ec_setup_debugfs();
+diff --git a/drivers/platform/x86/Kconfig b/drivers/platform/x86/Kconfig
+index 0581a54cf562f..a5ad36083b671 100644
+--- a/drivers/platform/x86/Kconfig
++++ b/drivers/platform/x86/Kconfig
+@@ -469,6 +469,7 @@ config FUJITSU_LAPTOP
+ depends on BACKLIGHT_CLASS_DEVICE
+ depends on ACPI_VIDEO || ACPI_VIDEO = n
+ select INPUT_SPARSEKMAP
++ select NEW_LEDS
+ select LEDS_CLASS
+ help
+ This is a driver for laptops built by Fujitsu:
+@@ -1091,6 +1092,7 @@ config LG_LAPTOP
+ depends on ACPI_WMI
+ depends on INPUT
+ select INPUT_SPARSEKMAP
++ select NEW_LEDS
+ select LEDS_CLASS
+ help
+ This driver adds support for hotkeys as well as control of keyboard
+diff --git a/drivers/platform/x86/asus-nb-wmi.c b/drivers/platform/x86/asus-nb-wmi.c
+index 6c42f73c1dfd3..805ce604dee6a 100644
+--- a/drivers/platform/x86/asus-nb-wmi.c
++++ b/drivers/platform/x86/asus-nb-wmi.c
+@@ -120,6 +120,10 @@ static struct quirk_entry quirk_asus_ga502i = {
+ .wmi_backlight_set_devstate = true,
+ };
+
++static struct quirk_entry quirk_asus_use_kbd_dock_devid = {
++ .use_kbd_dock_devid = true,
++};
++
+ static int dmi_matched(const struct dmi_system_id *dmi)
+ {
+ pr_info("Identified laptop model '%s'\n", dmi->ident);
+@@ -493,6 +497,34 @@ static const struct dmi_system_id asus_quirks[] = {
+ },
+ .driver_data = &quirk_asus_ga502i,
+ },
++ {
++ .callback = dmi_matched,
++ .ident = "Asus Transformer T100TA / T100HA / T100CHI",
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
++ /* Match *T100* */
++ DMI_MATCH(DMI_PRODUCT_NAME, "T100"),
++ },
++ .driver_data = &quirk_asus_use_kbd_dock_devid,
++ },
++ {
++ .callback = dmi_matched,
++ .ident = "Asus Transformer T101HA",
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
++ DMI_MATCH(DMI_PRODUCT_NAME, "T101HA"),
++ },
++ .driver_data = &quirk_asus_use_kbd_dock_devid,
++ },
++ {
++ .callback = dmi_matched,
++ .ident = "Asus Transformer T200TA",
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
++ DMI_MATCH(DMI_PRODUCT_NAME, "T200TA"),
++ },
++ .driver_data = &quirk_asus_use_kbd_dock_devid,
++ },
+ {},
+ };
+
+diff --git a/drivers/platform/x86/asus-wmi.c b/drivers/platform/x86/asus-wmi.c
+index 8f4acdc06b134..ae6289d37faf6 100644
+--- a/drivers/platform/x86/asus-wmi.c
++++ b/drivers/platform/x86/asus-wmi.c
+@@ -365,12 +365,14 @@ static int asus_wmi_input_init(struct asus_wmi *asus)
+ if (err)
+ goto err_free_dev;
+
+- result = asus_wmi_get_devstate_simple(asus, ASUS_WMI_DEVID_KBD_DOCK);
+- if (result >= 0) {
+- input_set_capability(asus->inputdev, EV_SW, SW_TABLET_MODE);
+- input_report_switch(asus->inputdev, SW_TABLET_MODE, !result);
+- } else if (result != -ENODEV) {
+- pr_err("Error checking for keyboard-dock: %d\n", result);
++ if (asus->driver->quirks->use_kbd_dock_devid) {
++ result = asus_wmi_get_devstate_simple(asus, ASUS_WMI_DEVID_KBD_DOCK);
++ if (result >= 0) {
++ input_set_capability(asus->inputdev, EV_SW, SW_TABLET_MODE);
++ input_report_switch(asus->inputdev, SW_TABLET_MODE, !result);
++ } else if (result != -ENODEV) {
++ pr_err("Error checking for keyboard-dock: %d\n", result);
++ }
+ }
+
+ err = input_register_device(asus->inputdev);
+@@ -2114,7 +2116,7 @@ static void asus_wmi_handle_event_code(int code, struct asus_wmi *asus)
+ return;
+ }
+
+- if (code == NOTIFY_KBD_DOCK_CHANGE) {
++ if (asus->driver->quirks->use_kbd_dock_devid && code == NOTIFY_KBD_DOCK_CHANGE) {
+ result = asus_wmi_get_devstate_simple(asus,
+ ASUS_WMI_DEVID_KBD_DOCK);
+ if (result >= 0) {
+diff --git a/drivers/platform/x86/asus-wmi.h b/drivers/platform/x86/asus-wmi.h
+index 4f31b68642a08..1a95c172f94b0 100644
+--- a/drivers/platform/x86/asus-wmi.h
++++ b/drivers/platform/x86/asus-wmi.h
+@@ -33,6 +33,7 @@ struct quirk_entry {
+ bool wmi_backlight_native;
+ bool wmi_backlight_set_devstate;
+ bool wmi_force_als_set;
++ bool use_kbd_dock_devid;
+ int wapf;
+ /*
+ * For machines with AMD graphic chips, it will send out WMI event
+diff --git a/drivers/platform/x86/intel-vbtn.c b/drivers/platform/x86/intel-vbtn.c
+index e85d8e58320c1..663197fecb20d 100644
+--- a/drivers/platform/x86/intel-vbtn.c
++++ b/drivers/platform/x86/intel-vbtn.c
+@@ -15,9 +15,13 @@
+ #include <linux/platform_device.h>
+ #include <linux/suspend.h>
+
++/* Returned when NOT in tablet mode on some HP Stream x360 11 models */
++#define VGBS_TABLET_MODE_FLAG_ALT 0x10
+ /* When NOT in tablet mode, VGBS returns with the flag 0x40 */
+-#define TABLET_MODE_FLAG 0x40
+-#define DOCK_MODE_FLAG 0x80
++#define VGBS_TABLET_MODE_FLAG 0x40
++#define VGBS_DOCK_MODE_FLAG 0x80
++
++#define VGBS_TABLET_MODE_FLAGS (VGBS_TABLET_MODE_FLAG | VGBS_TABLET_MODE_FLAG_ALT)
+
+ MODULE_LICENSE("GPL");
+ MODULE_AUTHOR("AceLan Kao");
+@@ -72,9 +76,9 @@ static void detect_tablet_mode(struct platform_device *device)
+ if (ACPI_FAILURE(status))
+ return;
+
+- m = !(vgbs & TABLET_MODE_FLAG);
++ m = !(vgbs & VGBS_TABLET_MODE_FLAGS);
+ input_report_switch(priv->input_dev, SW_TABLET_MODE, m);
+- m = (vgbs & DOCK_MODE_FLAG) ? 1 : 0;
++ m = (vgbs & VGBS_DOCK_MODE_FLAG) ? 1 : 0;
+ input_report_switch(priv->input_dev, SW_DOCK, m);
+ }
+
+@@ -167,20 +171,54 @@ static bool intel_vbtn_has_buttons(acpi_handle handle)
+ return ACPI_SUCCESS(status);
+ }
+
++/*
++ * There are several laptops (non 2-in-1) models out there which support VGBS,
++ * but simply always return 0, which we translate to SW_TABLET_MODE=1. This in
++ * turn causes userspace (libinput) to suppress events from the builtin
++ * keyboard and touchpad, making the laptop essentially unusable.
++ *
++ * Since the problem of wrongly reporting SW_TABLET_MODE=1 in combination
++ * with libinput, leads to a non-usable system. Where as OTOH many people will
++ * not even notice when SW_TABLET_MODE is not being reported, a DMI based allow
++ * list is used here. This list mainly matches on the chassis-type of 2-in-1s.
++ *
++ * There are also some 2-in-1s which use the intel-vbtn ACPI interface to report
++ * SW_TABLET_MODE with a chassis-type of 8 ("Portable") or 10 ("Notebook"),
++ * these are matched on a per model basis, since many normal laptops with a
++ * possible broken VGBS ACPI-method also use these chassis-types.
++ */
++static const struct dmi_system_id dmi_switches_allow_list[] = {
++ {
++ .matches = {
++ DMI_EXACT_MATCH(DMI_CHASSIS_TYPE, "31" /* Convertible */),
++ },
++ },
++ {
++ .matches = {
++ DMI_EXACT_MATCH(DMI_CHASSIS_TYPE, "32" /* Detachable */),
++ },
++ },
++ {
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
++ DMI_MATCH(DMI_PRODUCT_NAME, "Venue 11 Pro 7130"),
++ },
++ },
++ {
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "Hewlett-Packard"),
++ DMI_MATCH(DMI_PRODUCT_NAME, "HP Stream x360 Convertible PC 11"),
++ },
++ },
++ {} /* Array terminator */
++};
++
+ static bool intel_vbtn_has_switches(acpi_handle handle)
+ {
+- const char *chassis_type = dmi_get_system_info(DMI_CHASSIS_TYPE);
+ unsigned long long vgbs;
+ acpi_status status;
+
+- /*
+- * Some normal laptops have a VGBS method despite being non-convertible
+- * and their VGBS method always returns 0, causing detect_tablet_mode()
+- * to report SW_TABLET_MODE=1 to userspace, which causes issues.
+- * These laptops have a DMI chassis_type of 9 ("Laptop"), do not report
+- * switches on any devices with a DMI chassis_type of 9.
+- */
+- if (chassis_type && strcmp(chassis_type, "9") == 0)
++ if (!dmi_check_system(dmi_switches_allow_list))
+ return false;
+
+ status = acpi_evaluate_integer(handle, "VGBS", NULL, &vgbs);
+diff --git a/drivers/platform/x86/thinkpad_acpi.c b/drivers/platform/x86/thinkpad_acpi.c
+index 0f6fceda5fc0b..fe8386ab363c9 100644
+--- a/drivers/platform/x86/thinkpad_acpi.c
++++ b/drivers/platform/x86/thinkpad_acpi.c
+@@ -2569,7 +2569,7 @@ static void hotkey_compare_and_issue_event(struct tp_nvram_state *oldn,
+ */
+ static int hotkey_kthread(void *data)
+ {
+- struct tp_nvram_state s[2];
++ struct tp_nvram_state s[2] = { 0 };
+ u32 poll_mask, event_mask;
+ unsigned int si, so;
+ unsigned long t;
+@@ -6829,8 +6829,10 @@ static int __init tpacpi_query_bcl_levels(acpi_handle handle)
+ list_for_each_entry(child, &device->children, node) {
+ acpi_status status = acpi_evaluate_object(child->handle, "_BCL",
+ NULL, &buffer);
+- if (ACPI_FAILURE(status))
++ if (ACPI_FAILURE(status)) {
++ buffer.length = ACPI_ALLOCATE_BUFFER;
+ continue;
++ }
+
+ obj = (union acpi_object *)buffer.pointer;
+ if (!obj || (obj->type != ACPI_TYPE_PACKAGE)) {
+diff --git a/drivers/tty/vt/selection.c b/drivers/tty/vt/selection.c
+index 31bb3647a99c3..8e74654c1b271 100644
+--- a/drivers/tty/vt/selection.c
++++ b/drivers/tty/vt/selection.c
+@@ -193,7 +193,7 @@ static int vc_selection_store_chars(struct vc_data *vc, bool unicode)
+ /* Allocate a new buffer before freeing the old one ... */
+ /* chars can take up to 4 bytes with unicode */
+ bp = kmalloc_array((vc_sel.end - vc_sel.start) / 2 + 1, unicode ? 4 : 1,
+- GFP_KERNEL);
++ GFP_KERNEL | __GFP_NOWARN);
+ if (!bp) {
+ printk(KERN_WARNING "selection: kmalloc() failed\n");
+ clear_selection();
+diff --git a/drivers/vhost/vdpa.c b/drivers/vhost/vdpa.c
+index a54b60d6623f0..e172c2efc663c 100644
+--- a/drivers/vhost/vdpa.c
++++ b/drivers/vhost/vdpa.c
+@@ -527,6 +527,9 @@ static int vhost_vdpa_map(struct vhost_vdpa *v,
+ r = iommu_map(v->domain, iova, pa, size,
+ perm_to_iommu_flags(perm));
+
++ if (r)
++ vhost_iotlb_del_range(dev->iotlb, iova, iova + size - 1);
++
+ return r;
+ }
+
+@@ -552,21 +555,19 @@ static int vhost_vdpa_process_iotlb_update(struct vhost_vdpa *v,
+ struct vhost_dev *dev = &v->vdev;
+ struct vhost_iotlb *iotlb = dev->iotlb;
+ struct page **page_list;
+- unsigned long list_size = PAGE_SIZE / sizeof(struct page *);
++ struct vm_area_struct **vmas;
+ unsigned int gup_flags = FOLL_LONGTERM;
+- unsigned long npages, cur_base, map_pfn, last_pfn = 0;
+- unsigned long locked, lock_limit, pinned, i;
++ unsigned long map_pfn, last_pfn = 0;
++ unsigned long npages, lock_limit;
++ unsigned long i, nmap = 0;
+ u64 iova = msg->iova;
++ long pinned;
+ int ret = 0;
+
+ if (vhost_iotlb_itree_first(iotlb, msg->iova,
+ msg->iova + msg->size - 1))
+ return -EEXIST;
+
+- page_list = (struct page **) __get_free_page(GFP_KERNEL);
+- if (!page_list)
+- return -ENOMEM;
+-
+ if (msg->perm & VHOST_ACCESS_WO)
+ gup_flags |= FOLL_WRITE;
+
+@@ -574,61 +575,86 @@ static int vhost_vdpa_process_iotlb_update(struct vhost_vdpa *v,
+ if (!npages)
+ return -EINVAL;
+
++ page_list = kvmalloc_array(npages, sizeof(struct page *), GFP_KERNEL);
++ vmas = kvmalloc_array(npages, sizeof(struct vm_area_struct *),
++ GFP_KERNEL);
++ if (!page_list || !vmas) {
++ ret = -ENOMEM;
++ goto free;
++ }
++
+ mmap_read_lock(dev->mm);
+
+- locked = atomic64_add_return(npages, &dev->mm->pinned_vm);
+ lock_limit = rlimit(RLIMIT_MEMLOCK) >> PAGE_SHIFT;
+-
+- if (locked > lock_limit) {
++ if (npages + atomic64_read(&dev->mm->pinned_vm) > lock_limit) {
+ ret = -ENOMEM;
+- goto out;
++ goto unlock;
+ }
+
+- cur_base = msg->uaddr & PAGE_MASK;
+- iova &= PAGE_MASK;
++ pinned = pin_user_pages(msg->uaddr & PAGE_MASK, npages, gup_flags,
++ page_list, vmas);
++ if (npages != pinned) {
++ if (pinned < 0) {
++ ret = pinned;
++ } else {
++ unpin_user_pages(page_list, pinned);
++ ret = -ENOMEM;
++ }
++ goto unlock;
++ }
+
+- while (npages) {
+- pinned = min_t(unsigned long, npages, list_size);
+- ret = pin_user_pages(cur_base, pinned,
+- gup_flags, page_list, NULL);
+- if (ret != pinned)
+- goto out;
+-
+- if (!last_pfn)
+- map_pfn = page_to_pfn(page_list[0]);
+-
+- for (i = 0; i < ret; i++) {
+- unsigned long this_pfn = page_to_pfn(page_list[i]);
+- u64 csize;
+-
+- if (last_pfn && (this_pfn != last_pfn + 1)) {
+- /* Pin a contiguous chunk of memory */
+- csize = (last_pfn - map_pfn + 1) << PAGE_SHIFT;
+- if (vhost_vdpa_map(v, iova, csize,
+- map_pfn << PAGE_SHIFT,
+- msg->perm))
+- goto out;
+- map_pfn = this_pfn;
+- iova += csize;
++ iova &= PAGE_MASK;
++ map_pfn = page_to_pfn(page_list[0]);
++
++ /* One more iteration to avoid extra vdpa_map() call out of loop. */
++ for (i = 0; i <= npages; i++) {
++ unsigned long this_pfn;
++ u64 csize;
++
++ /* The last chunk may have no valid PFN next to it */
++ this_pfn = i < npages ? page_to_pfn(page_list[i]) : -1UL;
++
++ if (last_pfn && (this_pfn == -1UL ||
++ this_pfn != last_pfn + 1)) {
++ /* Pin a contiguous chunk of memory */
++ csize = last_pfn - map_pfn + 1;
++ ret = vhost_vdpa_map(v, iova, csize << PAGE_SHIFT,
++ map_pfn << PAGE_SHIFT,
++ msg->perm);
++ if (ret) {
++ /*
++ * Unpin the rest chunks of memory on the
++ * flight with no corresponding vdpa_map()
++ * calls having been made yet. On the other
++ * hand, vdpa_unmap() in the failure path
++ * is in charge of accounting the number of
++ * pinned pages for its own.
++ * This asymmetrical pattern of accounting
++ * is for efficiency to pin all pages at
++ * once, while there is no other callsite
++ * of vdpa_map() than here above.
++ */
++ unpin_user_pages(&page_list[nmap],
++ npages - nmap);
++ goto out;
+ }
+-
+- last_pfn = this_pfn;
++ atomic64_add(csize, &dev->mm->pinned_vm);
++ nmap += csize;
++ iova += csize << PAGE_SHIFT;
++ map_pfn = this_pfn;
+ }
+-
+- cur_base += ret << PAGE_SHIFT;
+- npages -= ret;
++ last_pfn = this_pfn;
+ }
+
+- /* Pin the rest chunk */
+- ret = vhost_vdpa_map(v, iova, (last_pfn - map_pfn + 1) << PAGE_SHIFT,
+- map_pfn << PAGE_SHIFT, msg->perm);
++ WARN_ON(nmap != npages);
+ out:
+- if (ret) {
++ if (ret)
+ vhost_vdpa_unmap(v, msg->iova, msg->size);
+- atomic64_sub(npages, &dev->mm->pinned_vm);
+- }
++unlock:
+ mmap_read_unlock(dev->mm);
+- free_page((unsigned long)page_list);
++free:
++ kvfree(vmas);
++ kvfree(page_list);
+ return ret;
+ }
+
+diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
+index d7b8df3edffcf..5f0030e086888 100644
+--- a/drivers/vhost/vhost.c
++++ b/drivers/vhost/vhost.c
+@@ -1283,6 +1283,11 @@ static bool vq_access_ok(struct vhost_virtqueue *vq, unsigned int num,
+ vring_used_t __user *used)
+
+ {
++ /* If an IOTLB device is present, the vring addresses are
++ * GIOVAs. Access validation occurs at prefetch time. */
++ if (vq->iotlb)
++ return true;
++
+ return access_ok(desc, vhost_get_desc_size(vq, num)) &&
+ access_ok(avail, vhost_get_avail_size(vq, num)) &&
+ access_ok(used, vhost_get_used_size(vq, num));
+@@ -1376,10 +1381,6 @@ bool vhost_vq_access_ok(struct vhost_virtqueue *vq)
+ if (!vq_log_access_ok(vq, vq->log_base))
+ return false;
+
+- /* Access validation occurs at prefetch time with IOTLB */
+- if (vq->iotlb)
+- return true;
+-
+ return vq_access_ok(vq, vq->num, vq->desc, vq->avail, vq->used);
+ }
+ EXPORT_SYMBOL_GPL(vhost_vq_access_ok);
+@@ -1511,8 +1512,7 @@ static long vhost_vring_set_addr(struct vhost_dev *d,
+ /* Also validate log access for used ring if enabled. */
+ if ((a.flags & (0x1 << VHOST_VRING_F_LOG)) &&
+ !log_access_ok(vq->log_base, a.log_guest_addr,
+- sizeof *vq->used +
+- vq->num * sizeof *vq->used->ring))
++ vhost_get_used_size(vq, vq->num)))
+ return -EINVAL;
+ }
+
+diff --git a/drivers/video/console/newport_con.c b/drivers/video/console/newport_con.c
+index df3c52d721597..8653950c56d45 100644
+--- a/drivers/video/console/newport_con.c
++++ b/drivers/video/console/newport_con.c
+@@ -35,12 +35,6 @@
+
+ #define FONT_DATA ((unsigned char *)font_vga_8x16.data)
+
+-/* borrowed from fbcon.c */
+-#define REFCOUNT(fd) (((int *)(fd))[-1])
+-#define FNTSIZE(fd) (((int *)(fd))[-2])
+-#define FNTCHARCNT(fd) (((int *)(fd))[-3])
+-#define FONT_EXTRA_WORDS 3
+-
+ static unsigned char *font_data[MAX_NR_CONSOLES];
+
+ static struct newport_regs *npregs;
+@@ -522,6 +516,7 @@ static int newport_set_font(int unit, struct console_font *op)
+ FNTSIZE(new_data) = size;
+ FNTCHARCNT(new_data) = op->charcount;
+ REFCOUNT(new_data) = 0; /* usage counter */
++ FNTSUM(new_data) = 0;
+
+ p = new_data;
+ for (i = 0; i < op->charcount; i++) {
+diff --git a/drivers/video/fbdev/core/fbcon.c b/drivers/video/fbdev/core/fbcon.c
+index 09cb46e94f405..3e82f632841d9 100644
+--- a/drivers/video/fbdev/core/fbcon.c
++++ b/drivers/video/fbdev/core/fbcon.c
+@@ -2299,6 +2299,9 @@ static int fbcon_get_font(struct vc_data *vc, struct console_font *font)
+
+ if (font->width <= 8) {
+ j = vc->vc_font.height;
++ if (font->charcount * j > FNTSIZE(fontdata))
++ return -EINVAL;
++
+ for (i = 0; i < font->charcount; i++) {
+ memcpy(data, fontdata, j);
+ memset(data + j, 0, 32 - j);
+@@ -2307,6 +2310,9 @@ static int fbcon_get_font(struct vc_data *vc, struct console_font *font)
+ }
+ } else if (font->width <= 16) {
+ j = vc->vc_font.height * 2;
++ if (font->charcount * j > FNTSIZE(fontdata))
++ return -EINVAL;
++
+ for (i = 0; i < font->charcount; i++) {
+ memcpy(data, fontdata, j);
+ memset(data + j, 0, 64 - j);
+@@ -2314,6 +2320,9 @@ static int fbcon_get_font(struct vc_data *vc, struct console_font *font)
+ fontdata += j;
+ }
+ } else if (font->width <= 24) {
++ if (font->charcount * (vc->vc_font.height * sizeof(u32)) > FNTSIZE(fontdata))
++ return -EINVAL;
++
+ for (i = 0; i < font->charcount; i++) {
+ for (j = 0; j < vc->vc_font.height; j++) {
+ *data++ = fontdata[0];
+@@ -2326,6 +2335,9 @@ static int fbcon_get_font(struct vc_data *vc, struct console_font *font)
+ }
+ } else {
+ j = vc->vc_font.height * 4;
++ if (font->charcount * j > FNTSIZE(fontdata))
++ return -EINVAL;
++
+ for (i = 0; i < font->charcount; i++) {
+ memcpy(data, fontdata, j);
+ memset(data + j, 0, 128 - j);
+diff --git a/drivers/video/fbdev/core/fbcon.h b/drivers/video/fbdev/core/fbcon.h
+index 78bb14c03643e..9315b360c8981 100644
+--- a/drivers/video/fbdev/core/fbcon.h
++++ b/drivers/video/fbdev/core/fbcon.h
+@@ -152,13 +152,6 @@ static inline int attr_col_ec(int shift, struct vc_data *vc,
+ #define attr_bgcol_ec(bgshift, vc, info) attr_col_ec(bgshift, vc, info, 0)
+ #define attr_fgcol_ec(fgshift, vc, info) attr_col_ec(fgshift, vc, info, 1)
+
+-/* Font */
+-#define REFCOUNT(fd) (((int *)(fd))[-1])
+-#define FNTSIZE(fd) (((int *)(fd))[-2])
+-#define FNTCHARCNT(fd) (((int *)(fd))[-3])
+-#define FNTSUM(fd) (((int *)(fd))[-4])
+-#define FONT_EXTRA_WORDS 4
+-
+ /*
+ * Scroll Method
+ */
+diff --git a/drivers/video/fbdev/core/fbcon_rotate.c b/drivers/video/fbdev/core/fbcon_rotate.c
+index c0d445294aa7c..ac72d4f85f7d0 100644
+--- a/drivers/video/fbdev/core/fbcon_rotate.c
++++ b/drivers/video/fbdev/core/fbcon_rotate.c
+@@ -14,6 +14,7 @@
+ #include <linux/fb.h>
+ #include <linux/vt_kern.h>
+ #include <linux/console.h>
++#include <linux/font.h>
+ #include <asm/types.h>
+ #include "fbcon.h"
+ #include "fbcon_rotate.h"
+diff --git a/drivers/video/fbdev/core/tileblit.c b/drivers/video/fbdev/core/tileblit.c
+index eb664dbf96f66..adff8d6ffe6f9 100644
+--- a/drivers/video/fbdev/core/tileblit.c
++++ b/drivers/video/fbdev/core/tileblit.c
+@@ -13,6 +13,7 @@
+ #include <linux/fb.h>
+ #include <linux/vt_kern.h>
+ #include <linux/console.h>
++#include <linux/font.h>
+ #include <asm/types.h>
+ #include "fbcon.h"
+
+diff --git a/fs/afs/inode.c b/fs/afs/inode.c
+index 1d13d2e882ada..0fe8844b4bee2 100644
+--- a/fs/afs/inode.c
++++ b/fs/afs/inode.c
+@@ -810,14 +810,32 @@ void afs_evict_inode(struct inode *inode)
+
+ static void afs_setattr_success(struct afs_operation *op)
+ {
+- struct inode *inode = &op->file[0].vnode->vfs_inode;
++ struct afs_vnode_param *vp = &op->file[0];
++ struct inode *inode = &vp->vnode->vfs_inode;
++ loff_t old_i_size = i_size_read(inode);
++
++ op->setattr.old_i_size = old_i_size;
++ afs_vnode_commit_status(op, vp);
++ /* inode->i_size has now been changed. */
++
++ if (op->setattr.attr->ia_valid & ATTR_SIZE) {
++ loff_t size = op->setattr.attr->ia_size;
++ if (size > old_i_size)
++ pagecache_isize_extended(inode, old_i_size, size);
++ }
++}
++
++static void afs_setattr_edit_file(struct afs_operation *op)
++{
++ struct afs_vnode_param *vp = &op->file[0];
++ struct inode *inode = &vp->vnode->vfs_inode;
+
+- afs_vnode_commit_status(op, &op->file[0]);
+ if (op->setattr.attr->ia_valid & ATTR_SIZE) {
+- loff_t i_size = inode->i_size, size = op->setattr.attr->ia_size;
+- if (size > i_size)
+- pagecache_isize_extended(inode, i_size, size);
+- truncate_pagecache(inode, size);
++ loff_t size = op->setattr.attr->ia_size;
++ loff_t i_size = op->setattr.old_i_size;
++
++ if (size < i_size)
++ truncate_pagecache(inode, size);
+ }
+ }
+
+@@ -825,6 +843,7 @@ static const struct afs_operation_ops afs_setattr_operation = {
+ .issue_afs_rpc = afs_fs_setattr,
+ .issue_yfs_rpc = yfs_fs_setattr,
+ .success = afs_setattr_success,
++ .edit_dir = afs_setattr_edit_file,
+ };
+
+ /*
+@@ -863,11 +882,16 @@ int afs_setattr(struct dentry *dentry, struct iattr *attr)
+ if (S_ISREG(vnode->vfs_inode.i_mode))
+ filemap_write_and_wait(vnode->vfs_inode.i_mapping);
+
++ /* Prevent any new writebacks from starting whilst we do this. */
++ down_write(&vnode->validate_lock);
++
+ op = afs_alloc_operation(((attr->ia_valid & ATTR_FILE) ?
+ afs_file_key(attr->ia_file) : NULL),
+ vnode->volume);
+- if (IS_ERR(op))
+- return PTR_ERR(op);
++ if (IS_ERR(op)) {
++ ret = PTR_ERR(op);
++ goto out_unlock;
++ }
+
+ afs_op_set_vnode(op, 0, vnode);
+ op->setattr.attr = attr;
+@@ -880,5 +904,10 @@ int afs_setattr(struct dentry *dentry, struct iattr *attr)
+ op->file[0].update_ctime = 1;
+
+ op->ops = &afs_setattr_operation;
+- return afs_do_sync_operation(op);
++ ret = afs_do_sync_operation(op);
++
++out_unlock:
++ up_write(&vnode->validate_lock);
++ _leave(" = %d", ret);
++ return ret;
+ }
+diff --git a/fs/afs/internal.h b/fs/afs/internal.h
+index 792ac711985eb..e1ebead2e505a 100644
+--- a/fs/afs/internal.h
++++ b/fs/afs/internal.h
+@@ -810,6 +810,7 @@ struct afs_operation {
+ } store;
+ struct {
+ struct iattr *attr;
++ loff_t old_i_size;
+ } setattr;
+ struct afs_acl *acl;
+ struct yfs_acl *yacl;
+diff --git a/fs/afs/write.c b/fs/afs/write.c
+index a121c247d95a3..0a98cf36e78a3 100644
+--- a/fs/afs/write.c
++++ b/fs/afs/write.c
+@@ -738,11 +738,21 @@ static int afs_writepages_region(struct address_space *mapping,
+ int afs_writepages(struct address_space *mapping,
+ struct writeback_control *wbc)
+ {
++ struct afs_vnode *vnode = AFS_FS_I(mapping->host);
+ pgoff_t start, end, next;
+ int ret;
+
+ _enter("");
+
++ /* We have to be careful as we can end up racing with setattr()
++ * truncating the pagecache since the caller doesn't take a lock here
++ * to prevent it.
++ */
++ if (wbc->sync_mode == WB_SYNC_ALL)
++ down_read(&vnode->validate_lock);
++ else if (!down_read_trylock(&vnode->validate_lock))
++ return 0;
++
+ if (wbc->range_cyclic) {
+ start = mapping->writeback_index;
+ end = -1;
+@@ -762,6 +772,7 @@ int afs_writepages(struct address_space *mapping,
+ ret = afs_writepages_region(mapping, wbc, start, end, &next);
+ }
+
++ up_read(&vnode->validate_lock);
+ _leave(" = %d", ret);
+ return ret;
+ }
+diff --git a/fs/btrfs/dev-replace.c b/fs/btrfs/dev-replace.c
+index eb86e4b88c73a..e4a1c6afe35dc 100644
+--- a/fs/btrfs/dev-replace.c
++++ b/fs/btrfs/dev-replace.c
+@@ -783,7 +783,9 @@ error:
+ /* replace the sysfs entry */
+ btrfs_sysfs_remove_devices_dir(fs_info->fs_devices, src_device);
+ btrfs_sysfs_update_devid(tgt_device);
+- btrfs_rm_dev_replace_free_srcdev(src_device);
++ if (test_bit(BTRFS_DEV_STATE_WRITEABLE, &src_device->dev_state))
++ btrfs_scratch_superblocks(fs_info, src_device->bdev,
++ src_device->name->str);
+
+ /* write back the superblocks */
+ trans = btrfs_start_transaction(root, 0);
+@@ -792,6 +794,8 @@ error:
+
+ mutex_unlock(&dev_replace->lock_finishing_cancel_unmount);
+
++ btrfs_rm_dev_replace_free_srcdev(src_device);
++
+ return 0;
+ }
+
+diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
+index 956eb0d6bc584..79e9a80bd37a0 100644
+--- a/fs/btrfs/volumes.c
++++ b/fs/btrfs/volumes.c
+@@ -1999,9 +1999,9 @@ static u64 btrfs_num_devices(struct btrfs_fs_info *fs_info)
+ return num_devices;
+ }
+
+-static void btrfs_scratch_superblocks(struct btrfs_fs_info *fs_info,
+- struct block_device *bdev,
+- const char *device_path)
++void btrfs_scratch_superblocks(struct btrfs_fs_info *fs_info,
++ struct block_device *bdev,
++ const char *device_path)
+ {
+ struct btrfs_super_block *disk_super;
+ int copy_num;
+@@ -2224,11 +2224,7 @@ void btrfs_rm_dev_replace_free_srcdev(struct btrfs_device *srcdev)
+ struct btrfs_fs_info *fs_info = srcdev->fs_info;
+ struct btrfs_fs_devices *fs_devices = srcdev->fs_devices;
+
+- if (test_bit(BTRFS_DEV_STATE_WRITEABLE, &srcdev->dev_state)) {
+- /* zero out the old super if it is writable */
+- btrfs_scratch_superblocks(fs_info, srcdev->bdev,
+- srcdev->name->str);
+- }
++ mutex_lock(&uuid_mutex);
+
+ btrfs_close_bdev(srcdev);
+ synchronize_rcu();
+@@ -2258,6 +2254,7 @@ void btrfs_rm_dev_replace_free_srcdev(struct btrfs_device *srcdev)
+ close_fs_devices(fs_devices);
+ free_fs_devices(fs_devices);
+ }
++ mutex_unlock(&uuid_mutex);
+ }
+
+ void btrfs_destroy_dev_replace_tgtdev(struct btrfs_device *tgtdev)
+diff --git a/fs/btrfs/volumes.h b/fs/btrfs/volumes.h
+index 75af2334b2e37..83862e27f5663 100644
+--- a/fs/btrfs/volumes.h
++++ b/fs/btrfs/volumes.h
+@@ -573,6 +573,9 @@ void btrfs_set_fs_info_ptr(struct btrfs_fs_info *fs_info);
+ void btrfs_reset_fs_info_ptr(struct btrfs_fs_info *fs_info);
+ bool btrfs_check_rw_degradable(struct btrfs_fs_info *fs_info,
+ struct btrfs_device *failing_dev);
++void btrfs_scratch_superblocks(struct btrfs_fs_info *fs_info,
++ struct block_device *bdev,
++ const char *device_path);
+
+ int btrfs_bg_type_to_factor(u64 flags);
+ const char *btrfs_bg_type_to_raid_name(u64 flags);
+diff --git a/fs/cifs/smb2ops.c b/fs/cifs/smb2ops.c
+index 32f90dc82c840..d44df8f95bcd4 100644
+--- a/fs/cifs/smb2ops.c
++++ b/fs/cifs/smb2ops.c
+@@ -1208,7 +1208,7 @@ smb2_set_ea(const unsigned int xid, struct cifs_tcon *tcon,
+ rqst[1].rq_iov = si_iov;
+ rqst[1].rq_nvec = 1;
+
+- len = sizeof(ea) + ea_name_len + ea_value_len + 1;
++ len = sizeof(*ea) + ea_name_len + ea_value_len + 1;
+ ea = kzalloc(len, GFP_KERNEL);
+ if (ea == NULL) {
+ rc = -ENOMEM;
+diff --git a/fs/exfat/cache.c b/fs/exfat/cache.c
+index 03d0824fc368a..5a2f119b7e8c7 100644
+--- a/fs/exfat/cache.c
++++ b/fs/exfat/cache.c
+@@ -17,7 +17,6 @@
+ #include "exfat_raw.h"
+ #include "exfat_fs.h"
+
+-#define EXFAT_CACHE_VALID 0
+ #define EXFAT_MAX_CACHE 16
+
+ struct exfat_cache {
+@@ -61,16 +60,6 @@ void exfat_cache_shutdown(void)
+ kmem_cache_destroy(exfat_cachep);
+ }
+
+-void exfat_cache_init_inode(struct inode *inode)
+-{
+- struct exfat_inode_info *ei = EXFAT_I(inode);
+-
+- spin_lock_init(&ei->cache_lru_lock);
+- ei->nr_caches = 0;
+- ei->cache_valid_id = EXFAT_CACHE_VALID + 1;
+- INIT_LIST_HEAD(&ei->cache_lru);
+-}
+-
+ static inline struct exfat_cache *exfat_cache_alloc(void)
+ {
+ return kmem_cache_alloc(exfat_cachep, GFP_NOFS);
+diff --git a/fs/exfat/exfat_fs.h b/fs/exfat/exfat_fs.h
+index 75c7bdbeba6d3..fb49928687bb5 100644
+--- a/fs/exfat/exfat_fs.h
++++ b/fs/exfat/exfat_fs.h
+@@ -250,6 +250,8 @@ struct exfat_sb_info {
+ struct rcu_head rcu;
+ };
+
++#define EXFAT_CACHE_VALID 0
++
+ /*
+ * EXFAT file system inode in-memory data
+ */
+@@ -429,7 +431,6 @@ extern const struct dentry_operations exfat_utf8_dentry_ops;
+ /* cache.c */
+ int exfat_cache_init(void);
+ void exfat_cache_shutdown(void);
+-void exfat_cache_init_inode(struct inode *inode);
+ void exfat_cache_inval_inode(struct inode *inode);
+ int exfat_get_cluster(struct inode *inode, unsigned int cluster,
+ unsigned int *fclus, unsigned int *dclus,
+diff --git a/fs/exfat/inode.c b/fs/exfat/inode.c
+index cf9ca6c4d046e..1952b88e14dbd 100644
+--- a/fs/exfat/inode.c
++++ b/fs/exfat/inode.c
+@@ -610,8 +610,6 @@ static int exfat_fill_inode(struct inode *inode, struct exfat_dir_entry *info)
+ ei->i_crtime = info->crtime;
+ inode->i_atime = info->atime;
+
+- exfat_cache_init_inode(inode);
+-
+ return 0;
+ }
+
+diff --git a/fs/exfat/super.c b/fs/exfat/super.c
+index 253a92460d522..142f3459c7ca7 100644
+--- a/fs/exfat/super.c
++++ b/fs/exfat/super.c
+@@ -361,7 +361,6 @@ static int exfat_read_root(struct inode *inode)
+ inode->i_mtime = inode->i_atime = inode->i_ctime = ei->i_crtime =
+ current_time(inode);
+ exfat_truncate_atime(&inode->i_atime);
+- exfat_cache_init_inode(inode);
+ return 0;
+ }
+
+@@ -747,6 +746,10 @@ static void exfat_inode_init_once(void *foo)
+ {
+ struct exfat_inode_info *ei = (struct exfat_inode_info *)foo;
+
++ spin_lock_init(&ei->cache_lru_lock);
++ ei->nr_caches = 0;
++ ei->cache_valid_id = EXFAT_CACHE_VALID + 1;
++ INIT_LIST_HEAD(&ei->cache_lru);
+ INIT_HLIST_NODE(&ei->i_hash_fat);
+ inode_init_once(&ei->vfs_inode);
+ }
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+index ebc3586b18795..d2bb2ae9551f0 100644
+--- a/fs/io_uring.c
++++ b/fs/io_uring.c
+@@ -7998,11 +7998,19 @@ static int io_uring_show_cred(int id, void *p, void *data)
+
+ static void __io_uring_show_fdinfo(struct io_ring_ctx *ctx, struct seq_file *m)
+ {
++ bool has_lock;
+ int i;
+
+- mutex_lock(&ctx->uring_lock);
++ /*
++ * Avoid ABBA deadlock between the seq lock and the io_uring mutex,
++ * since fdinfo case grabs it in the opposite direction of normal use
++ * cases. If we fail to get the lock, we just don't iterate any
++ * structures that could be going away outside the io_uring mutex.
++ */
++ has_lock = mutex_trylock(&ctx->uring_lock);
++
+ seq_printf(m, "UserFiles:\t%u\n", ctx->nr_user_files);
+- for (i = 0; i < ctx->nr_user_files; i++) {
++ for (i = 0; has_lock && i < ctx->nr_user_files; i++) {
+ struct fixed_file_table *table;
+ struct file *f;
+
+@@ -8014,13 +8022,13 @@ static void __io_uring_show_fdinfo(struct io_ring_ctx *ctx, struct seq_file *m)
+ seq_printf(m, "%5u: <none>\n", i);
+ }
+ seq_printf(m, "UserBufs:\t%u\n", ctx->nr_user_bufs);
+- for (i = 0; i < ctx->nr_user_bufs; i++) {
++ for (i = 0; has_lock && i < ctx->nr_user_bufs; i++) {
+ struct io_mapped_ubuf *buf = &ctx->user_bufs[i];
+
+ seq_printf(m, "%5u: 0x%llx/%u\n", i, buf->ubuf,
+ (unsigned int) buf->len);
+ }
+- if (!idr_is_empty(&ctx->personality_idr)) {
++ if (has_lock && !idr_is_empty(&ctx->personality_idr)) {
+ seq_printf(m, "Personalities:\n");
+ idr_for_each(&ctx->personality_idr, io_uring_show_cred, m);
+ }
+@@ -8035,7 +8043,8 @@ static void __io_uring_show_fdinfo(struct io_ring_ctx *ctx, struct seq_file *m)
+ req->task->task_works != NULL);
+ }
+ spin_unlock_irq(&ctx->completion_lock);
+- mutex_unlock(&ctx->uring_lock);
++ if (has_lock)
++ mutex_unlock(&ctx->uring_lock);
+ }
+
+ static void io_uring_show_fdinfo(struct seq_file *m, struct file *f)
+diff --git a/fs/pipe.c b/fs/pipe.c
+index 117db82b10af5..0ac197658a2d6 100644
+--- a/fs/pipe.c
++++ b/fs/pipe.c
+@@ -894,19 +894,18 @@ int create_pipe_files(struct file **res, int flags)
+ {
+ struct inode *inode = get_pipe_inode();
+ struct file *f;
++ int error;
+
+ if (!inode)
+ return -ENFILE;
+
+ if (flags & O_NOTIFICATION_PIPE) {
+-#ifdef CONFIG_WATCH_QUEUE
+- if (watch_queue_init(inode->i_pipe) < 0) {
++ error = watch_queue_init(inode->i_pipe);
++ if (error) {
++ free_pipe_info(inode->i_pipe);
+ iput(inode);
+- return -ENOMEM;
++ return error;
+ }
+-#else
+- return -ENOPKG;
+-#endif
+ }
+
+ f = alloc_file_pseudo(inode, pipe_mnt, "",
+diff --git a/fs/splice.c b/fs/splice.c
+index c3d00dfc73446..ce75aec522744 100644
+--- a/fs/splice.c
++++ b/fs/splice.c
+@@ -526,6 +526,22 @@ static int splice_from_pipe_feed(struct pipe_inode_info *pipe, struct splice_des
+ return 1;
+ }
+
++/* We know we have a pipe buffer, but maybe it's empty? */
++static inline bool eat_empty_buffer(struct pipe_inode_info *pipe)
++{
++ unsigned int tail = pipe->tail;
++ unsigned int mask = pipe->ring_size - 1;
++ struct pipe_buffer *buf = &pipe->bufs[tail & mask];
++
++ if (unlikely(!buf->len)) {
++ pipe_buf_release(pipe, buf);
++ pipe->tail = tail+1;
++ return true;
++ }
++
++ return false;
++}
++
+ /**
+ * splice_from_pipe_next - wait for some data to splice from
+ * @pipe: pipe to splice from
+@@ -545,6 +561,7 @@ static int splice_from_pipe_next(struct pipe_inode_info *pipe, struct splice_des
+ if (signal_pending(current))
+ return -ERESTARTSYS;
+
++repeat:
+ while (pipe_empty(pipe->head, pipe->tail)) {
+ if (!pipe->writers)
+ return 0;
+@@ -566,6 +583,9 @@ static int splice_from_pipe_next(struct pipe_inode_info *pipe, struct splice_des
+ pipe_wait_readable(pipe);
+ }
+
++ if (eat_empty_buffer(pipe))
++ goto repeat;
++
+ return 1;
+ }
+
+diff --git a/include/asm-generic/vmlinux.lds.h b/include/asm-generic/vmlinux.lds.h
+index 9e9b1ec30b902..d7eb9e7689b45 100644
+--- a/include/asm-generic/vmlinux.lds.h
++++ b/include/asm-generic/vmlinux.lds.h
+@@ -641,7 +641,7 @@
+ #define BTF \
+ .BTF : AT(ADDR(.BTF) - LOAD_OFFSET) { \
+ __start_BTF = .; \
+- *(.BTF) \
++ KEEP(*(.BTF)) \
+ __stop_BTF = .; \
+ }
+ #else
+diff --git a/include/linux/font.h b/include/linux/font.h
+index 51b91c8b69d58..59faa80f586df 100644
+--- a/include/linux/font.h
++++ b/include/linux/font.h
+@@ -59,4 +59,17 @@ extern const struct font_desc *get_default_font(int xres, int yres,
+ /* Max. length for the name of a predefined font */
+ #define MAX_FONT_NAME 32
+
++/* Extra word getters */
++#define REFCOUNT(fd) (((int *)(fd))[-1])
++#define FNTSIZE(fd) (((int *)(fd))[-2])
++#define FNTCHARCNT(fd) (((int *)(fd))[-3])
++#define FNTSUM(fd) (((int *)(fd))[-4])
++
++#define FONT_EXTRA_WORDS 4
++
++struct font_data {
++ unsigned int extra[FONT_EXTRA_WORDS];
++ const unsigned char data[];
++} __packed;
++
+ #endif /* _VIDEO_FONT_H */
+diff --git a/include/linux/khugepaged.h b/include/linux/khugepaged.h
+index bc45ea1efbf79..c941b73773216 100644
+--- a/include/linux/khugepaged.h
++++ b/include/linux/khugepaged.h
+@@ -15,6 +15,7 @@ extern int __khugepaged_enter(struct mm_struct *mm);
+ extern void __khugepaged_exit(struct mm_struct *mm);
+ extern int khugepaged_enter_vma_merge(struct vm_area_struct *vma,
+ unsigned long vm_flags);
++extern void khugepaged_min_free_kbytes_update(void);
+ #ifdef CONFIG_SHMEM
+ extern void collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr);
+ #else
+@@ -85,6 +86,10 @@ static inline void collapse_pte_mapped_thp(struct mm_struct *mm,
+ unsigned long addr)
+ {
+ }
++
++static inline void khugepaged_min_free_kbytes_update(void)
++{
++}
+ #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
+
+ #endif /* _LINUX_KHUGEPAGED_H */
+diff --git a/include/linux/mlx5/driver.h b/include/linux/mlx5/driver.h
+index 1e6ca716635a9..484cd8ba869c5 100644
+--- a/include/linux/mlx5/driver.h
++++ b/include/linux/mlx5/driver.h
+@@ -764,6 +764,8 @@ struct mlx5_cmd_work_ent {
+ u64 ts2;
+ u16 op;
+ bool polling;
++ /* Track the max comp handlers */
++ refcount_t refcnt;
+ };
+
+ struct mlx5_pas {
+diff --git a/include/linux/net.h b/include/linux/net.h
+index 016a9c5faa347..5fab2dcd3364c 100644
+--- a/include/linux/net.h
++++ b/include/linux/net.h
+@@ -21,6 +21,7 @@
+ #include <linux/rcupdate.h>
+ #include <linux/once.h>
+ #include <linux/fs.h>
++#include <linux/mm.h>
+
+ #include <uapi/linux/net.h>
+
+@@ -290,6 +291,21 @@ do { \
+ #define net_get_random_once_wait(buf, nbytes) \
+ get_random_once_wait((buf), (nbytes))
+
++/*
++ * E.g. XFS meta- & log-data is in slab pages, or bcache meta
++ * data pages, or other high order pages allocated by
++ * __get_free_pages() without __GFP_COMP, which have a page_count
++ * of 0 and/or have PageSlab() set. We cannot use send_page for
++ * those, as that does get_page(); put_page(); and would cause
++ * either a VM_BUG directly, or __page_cache_release a page that
++ * would actually still be referenced by someone, leading to some
++ * obscure delayed Oops somewhere else.
++ */
++static inline bool sendpage_ok(struct page *page)
++{
++ return !PageSlab(page) && page_count(page) >= 1;
++}
++
+ int kernel_sendmsg(struct socket *sock, struct msghdr *msg, struct kvec *vec,
+ size_t num, size_t len);
+ int kernel_sendmsg_locked(struct sock *sk, struct msghdr *msg,
+diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
+index cf2468da68e91..601ad6957bb9d 100644
+--- a/include/linux/pagemap.h
++++ b/include/linux/pagemap.h
+@@ -54,7 +54,8 @@ static inline void mapping_set_error(struct address_space *mapping, int error)
+ __filemap_set_wb_err(mapping, error);
+
+ /* Record it in superblock */
+- errseq_set(&mapping->host->i_sb->s_wb_err, error);
++ if (mapping->host)
++ errseq_set(&mapping->host->i_sb->s_wb_err, error);
+
+ /* Record it in flags for now, for legacy callers */
+ if (error == -ENOSPC)
+diff --git a/include/linux/watch_queue.h b/include/linux/watch_queue.h
+index 5e08db2adc319..c994d1b2cdbaa 100644
+--- a/include/linux/watch_queue.h
++++ b/include/linux/watch_queue.h
+@@ -122,6 +122,12 @@ static inline void remove_watch_list(struct watch_list *wlist, u64 id)
+ */
+ #define watch_sizeof(STRUCT) (sizeof(STRUCT) << WATCH_INFO_LENGTH__SHIFT)
+
++#else
++static inline int watch_queue_init(struct pipe_inode_info *pipe)
++{
++ return -ENOPKG;
++}
++
+ #endif
+
+ #endif /* _LINUX_WATCH_QUEUE_H */
+diff --git a/include/net/act_api.h b/include/net/act_api.h
+index 8c39348806705..7dc88ebb6e3e7 100644
+--- a/include/net/act_api.h
++++ b/include/net/act_api.h
+@@ -166,8 +166,6 @@ int tcf_idr_create_from_flags(struct tc_action_net *tn, u32 index,
+ struct nlattr *est, struct tc_action **a,
+ const struct tc_action_ops *ops, int bind,
+ u32 flags);
+-void tcf_idr_insert(struct tc_action_net *tn, struct tc_action *a);
+-
+ void tcf_idr_cleanup(struct tc_action_net *tn, u32 index);
+ int tcf_idr_check_alloc(struct tc_action_net *tn, u32 *index,
+ struct tc_action **a, int bind);
+diff --git a/include/net/netlink.h b/include/net/netlink.h
+index c0411f14fb53b..6df0c91870a3e 100644
+--- a/include/net/netlink.h
++++ b/include/net/netlink.h
+@@ -1936,7 +1936,8 @@ void nla_get_range_signed(const struct nla_policy *pt,
+ int netlink_policy_dump_start(const struct nla_policy *policy,
+ unsigned int maxtype,
+ unsigned long *state);
+-bool netlink_policy_dump_loop(unsigned long *state);
++bool netlink_policy_dump_loop(unsigned long state);
+ int netlink_policy_dump_write(struct sk_buff *skb, unsigned long state);
++void netlink_policy_dump_free(unsigned long state);
+
+ #endif
+diff --git a/include/net/xfrm.h b/include/net/xfrm.h
+index 51f65d23ebafa..2e32cb10ac16b 100644
+--- a/include/net/xfrm.h
++++ b/include/net/xfrm.h
+@@ -1767,21 +1767,17 @@ static inline unsigned int xfrm_replay_state_esn_len(struct xfrm_replay_state_es
+ static inline int xfrm_replay_clone(struct xfrm_state *x,
+ struct xfrm_state *orig)
+ {
+- x->replay_esn = kzalloc(xfrm_replay_state_esn_len(orig->replay_esn),
++
++ x->replay_esn = kmemdup(orig->replay_esn,
++ xfrm_replay_state_esn_len(orig->replay_esn),
+ GFP_KERNEL);
+ if (!x->replay_esn)
+ return -ENOMEM;
+-
+- x->replay_esn->bmp_len = orig->replay_esn->bmp_len;
+- x->replay_esn->replay_window = orig->replay_esn->replay_window;
+-
+- x->preplay_esn = kmemdup(x->replay_esn,
+- xfrm_replay_state_esn_len(x->replay_esn),
++ x->preplay_esn = kmemdup(orig->preplay_esn,
++ xfrm_replay_state_esn_len(orig->preplay_esn),
+ GFP_KERNEL);
+- if (!x->preplay_esn) {
+- kfree(x->replay_esn);
++ if (!x->preplay_esn)
+ return -ENOMEM;
+- }
+
+ return 0;
+ }
+diff --git a/include/soc/mscc/ocelot.h b/include/soc/mscc/ocelot.h
+index 4953e9994df34..8e174a24c5757 100644
+--- a/include/soc/mscc/ocelot.h
++++ b/include/soc/mscc/ocelot.h
+@@ -468,6 +468,7 @@ struct ocelot;
+
+ struct ocelot_ops {
+ int (*reset)(struct ocelot *ocelot);
++ u16 (*wm_enc)(u16 value);
+ };
+
+ struct ocelot_acl_block {
+diff --git a/kernel/bpf/sysfs_btf.c b/kernel/bpf/sysfs_btf.c
+index 3b495773de5ae..11b3380887fa0 100644
+--- a/kernel/bpf/sysfs_btf.c
++++ b/kernel/bpf/sysfs_btf.c
+@@ -30,15 +30,15 @@ static struct kobject *btf_kobj;
+
+ static int __init btf_vmlinux_init(void)
+ {
+- if (!__start_BTF)
++ bin_attr_btf_vmlinux.size = __stop_BTF - __start_BTF;
++
++ if (!__start_BTF || bin_attr_btf_vmlinux.size == 0)
+ return 0;
+
+ btf_kobj = kobject_create_and_add("btf", kernel_kobj);
+ if (!btf_kobj)
+ return -ENOMEM;
+
+- bin_attr_btf_vmlinux.size = __stop_BTF - __start_BTF;
+-
+ return sysfs_create_bin_file(btf_kobj, &bin_attr_btf_vmlinux);
+ }
+
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index 94cead5a43e57..89b07db146763 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -5490,8 +5490,8 @@ static void scalar32_min_max_or(struct bpf_reg_state *dst_reg,
+ bool src_known = tnum_subreg_is_const(src_reg->var_off);
+ bool dst_known = tnum_subreg_is_const(dst_reg->var_off);
+ struct tnum var32_off = tnum_subreg(dst_reg->var_off);
+- s32 smin_val = src_reg->smin_value;
+- u32 umin_val = src_reg->umin_value;
++ s32 smin_val = src_reg->s32_min_value;
++ u32 umin_val = src_reg->u32_min_value;
+
+ /* Assuming scalar64_min_max_or will be called so it is safe
+ * to skip updating register for known case.
+@@ -5514,8 +5514,8 @@ static void scalar32_min_max_or(struct bpf_reg_state *dst_reg,
+ /* ORing two positives gives a positive, so safe to
+ * cast result into s64.
+ */
+- dst_reg->s32_min_value = dst_reg->umin_value;
+- dst_reg->s32_max_value = dst_reg->umax_value;
++ dst_reg->s32_min_value = dst_reg->u32_min_value;
++ dst_reg->s32_max_value = dst_reg->u32_max_value;
+ }
+ }
+
+diff --git a/kernel/events/core.c b/kernel/events/core.c
+index 856d98c36f562..fd8cd00099dae 100644
+--- a/kernel/events/core.c
++++ b/kernel/events/core.c
+@@ -99,7 +99,7 @@ static void remote_function(void *data)
+ * retry due to any failures in smp_call_function_single(), such as if the
+ * task_cpu() goes offline concurrently.
+ *
+- * returns @func return value or -ESRCH when the process isn't running
++ * returns @func return value or -ESRCH or -ENXIO when the process isn't running
+ */
+ static int
+ task_function_call(struct task_struct *p, remote_function_f func, void *info)
+@@ -115,7 +115,8 @@ task_function_call(struct task_struct *p, remote_function_f func, void *info)
+ for (;;) {
+ ret = smp_call_function_single(task_cpu(p), remote_function,
+ &data, 1);
+- ret = !ret ? data.ret : -EAGAIN;
++ if (!ret)
++ ret = data.ret;
+
+ if (ret != -EAGAIN)
+ break;
+diff --git a/kernel/umh.c b/kernel/umh.c
+index 79f139a7ca03c..6aaf456d402d9 100644
+--- a/kernel/umh.c
++++ b/kernel/umh.c
+@@ -14,6 +14,7 @@
+ #include <linux/cred.h>
+ #include <linux/file.h>
+ #include <linux/fdtable.h>
++#include <linux/fs_struct.h>
+ #include <linux/workqueue.h>
+ #include <linux/security.h>
+ #include <linux/mount.h>
+@@ -75,6 +76,14 @@ static int call_usermodehelper_exec_async(void *data)
+ flush_signal_handlers(current, 1);
+ spin_unlock_irq(&current->sighand->siglock);
+
++ /*
++ * Initial kernel threads share their FS with init, in order to
++ * get the init root directory. But we've now created a new
++ * thread that is going to execve a user process and has its own
++ * 'struct fs_struct'. Reset umask to the default.
++ */
++ current->fs->umask = 0022;
++
+ /*
+ * Our parent (unbound workqueue) runs with elevated scheduling
+ * priority. Avoid propagating that into the userspace child.
+diff --git a/lib/fonts/font_10x18.c b/lib/fonts/font_10x18.c
+index 532f0ff89a962..0e2deac97da0d 100644
+--- a/lib/fonts/font_10x18.c
++++ b/lib/fonts/font_10x18.c
+@@ -8,8 +8,8 @@
+
+ #define FONTDATAMAX 9216
+
+-static const unsigned char fontdata_10x18[FONTDATAMAX] = {
+-
++static struct font_data fontdata_10x18 = {
++ { 0, 0, FONTDATAMAX, 0 }, {
+ /* 0 0x00 '^@' */
+ 0x00, 0x00, /* 0000000000 */
+ 0x00, 0x00, /* 0000000000 */
+@@ -5129,8 +5129,7 @@ static const unsigned char fontdata_10x18[FONTDATAMAX] = {
+ 0x00, 0x00, /* 0000000000 */
+ 0x00, 0x00, /* 0000000000 */
+ 0x00, 0x00, /* 0000000000 */
+-
+-};
++} };
+
+
+ const struct font_desc font_10x18 = {
+@@ -5138,7 +5137,7 @@ const struct font_desc font_10x18 = {
+ .name = "10x18",
+ .width = 10,
+ .height = 18,
+- .data = fontdata_10x18,
++ .data = fontdata_10x18.data,
+ #ifdef __sparc__
+ .pref = 5,
+ #else
+diff --git a/lib/fonts/font_6x10.c b/lib/fonts/font_6x10.c
+index 09b2cc03435b9..87da8acd07db0 100644
+--- a/lib/fonts/font_6x10.c
++++ b/lib/fonts/font_6x10.c
+@@ -1,8 +1,10 @@
+ // SPDX-License-Identifier: GPL-2.0
+ #include <linux/font.h>
+
+-static const unsigned char fontdata_6x10[] = {
++#define FONTDATAMAX 2560
+
++static struct font_data fontdata_6x10 = {
++ { 0, 0, FONTDATAMAX, 0 }, {
+ /* 0 0x00 '^@' */
+ 0x00, /* 00000000 */
+ 0x00, /* 00000000 */
+@@ -3074,14 +3076,13 @@ static const unsigned char fontdata_6x10[] = {
+ 0x00, /* 00000000 */
+ 0x00, /* 00000000 */
+ 0x00, /* 00000000 */
+-
+-};
++} };
+
+ const struct font_desc font_6x10 = {
+ .idx = FONT6x10_IDX,
+ .name = "6x10",
+ .width = 6,
+ .height = 10,
+- .data = fontdata_6x10,
++ .data = fontdata_6x10.data,
+ .pref = 0,
+ };
+diff --git a/lib/fonts/font_6x11.c b/lib/fonts/font_6x11.c
+index d7136c33f1f01..5e975dfa10a53 100644
+--- a/lib/fonts/font_6x11.c
++++ b/lib/fonts/font_6x11.c
+@@ -9,8 +9,8 @@
+
+ #define FONTDATAMAX (11*256)
+
+-static const unsigned char fontdata_6x11[FONTDATAMAX] = {
+-
++static struct font_data fontdata_6x11 = {
++ { 0, 0, FONTDATAMAX, 0 }, {
+ /* 0 0x00 '^@' */
+ 0x00, /* 00000000 */
+ 0x00, /* 00000000 */
+@@ -3338,8 +3338,7 @@ static const unsigned char fontdata_6x11[FONTDATAMAX] = {
+ 0x00, /* 00000000 */
+ 0x00, /* 00000000 */
+ 0x00, /* 00000000 */
+-
+-};
++} };
+
+
+ const struct font_desc font_vga_6x11 = {
+@@ -3347,7 +3346,7 @@ const struct font_desc font_vga_6x11 = {
+ .name = "ProFont6x11",
+ .width = 6,
+ .height = 11,
+- .data = fontdata_6x11,
++ .data = fontdata_6x11.data,
+ /* Try avoiding this font if possible unless on MAC */
+ .pref = -2000,
+ };
+diff --git a/lib/fonts/font_7x14.c b/lib/fonts/font_7x14.c
+index 89752d0b23e8b..86d298f385058 100644
+--- a/lib/fonts/font_7x14.c
++++ b/lib/fonts/font_7x14.c
+@@ -8,8 +8,8 @@
+
+ #define FONTDATAMAX 3584
+
+-static const unsigned char fontdata_7x14[FONTDATAMAX] = {
+-
++static struct font_data fontdata_7x14 = {
++ { 0, 0, FONTDATAMAX, 0 }, {
+ /* 0 0x00 '^@' */
+ 0x00, /* 0000000 */
+ 0x00, /* 0000000 */
+@@ -4105,8 +4105,7 @@ static const unsigned char fontdata_7x14[FONTDATAMAX] = {
+ 0x00, /* 0000000 */
+ 0x00, /* 0000000 */
+ 0x00, /* 0000000 */
+-
+-};
++} };
+
+
+ const struct font_desc font_7x14 = {
+@@ -4114,6 +4113,6 @@ const struct font_desc font_7x14 = {
+ .name = "7x14",
+ .width = 7,
+ .height = 14,
+- .data = fontdata_7x14,
++ .data = fontdata_7x14.data,
+ .pref = 0,
+ };
+diff --git a/lib/fonts/font_8x16.c b/lib/fonts/font_8x16.c
+index b7ab1f5fbdb8a..37cedd36ca5ef 100644
+--- a/lib/fonts/font_8x16.c
++++ b/lib/fonts/font_8x16.c
+@@ -10,8 +10,8 @@
+
+ #define FONTDATAMAX 4096
+
+-static const unsigned char fontdata_8x16[FONTDATAMAX] = {
+-
++static struct font_data fontdata_8x16 = {
++ { 0, 0, FONTDATAMAX, 0 }, {
+ /* 0 0x00 '^@' */
+ 0x00, /* 00000000 */
+ 0x00, /* 00000000 */
+@@ -4619,8 +4619,7 @@ static const unsigned char fontdata_8x16[FONTDATAMAX] = {
+ 0x00, /* 00000000 */
+ 0x00, /* 00000000 */
+ 0x00, /* 00000000 */
+-
+-};
++} };
+
+
+ const struct font_desc font_vga_8x16 = {
+@@ -4628,7 +4627,7 @@ const struct font_desc font_vga_8x16 = {
+ .name = "VGA8x16",
+ .width = 8,
+ .height = 16,
+- .data = fontdata_8x16,
++ .data = fontdata_8x16.data,
+ .pref = 0,
+ };
+ EXPORT_SYMBOL(font_vga_8x16);
+diff --git a/lib/fonts/font_8x8.c b/lib/fonts/font_8x8.c
+index 2328ebc8bab5d..8ab695538395d 100644
+--- a/lib/fonts/font_8x8.c
++++ b/lib/fonts/font_8x8.c
+@@ -9,8 +9,8 @@
+
+ #define FONTDATAMAX 2048
+
+-static const unsigned char fontdata_8x8[FONTDATAMAX] = {
+-
++static struct font_data fontdata_8x8 = {
++ { 0, 0, FONTDATAMAX, 0 }, {
+ /* 0 0x00 '^@' */
+ 0x00, /* 00000000 */
+ 0x00, /* 00000000 */
+@@ -2570,8 +2570,7 @@ static const unsigned char fontdata_8x8[FONTDATAMAX] = {
+ 0x00, /* 00000000 */
+ 0x00, /* 00000000 */
+ 0x00, /* 00000000 */
+-
+-};
++} };
+
+
+ const struct font_desc font_vga_8x8 = {
+@@ -2579,6 +2578,6 @@ const struct font_desc font_vga_8x8 = {
+ .name = "VGA8x8",
+ .width = 8,
+ .height = 8,
+- .data = fontdata_8x8,
++ .data = fontdata_8x8.data,
+ .pref = 0,
+ };
+diff --git a/lib/fonts/font_acorn_8x8.c b/lib/fonts/font_acorn_8x8.c
+index 0ff0e85d4481b..069b3e80c4344 100644
+--- a/lib/fonts/font_acorn_8x8.c
++++ b/lib/fonts/font_acorn_8x8.c
+@@ -3,7 +3,10 @@
+
+ #include <linux/font.h>
+
+-static const unsigned char acorndata_8x8[] = {
++#define FONTDATAMAX 2048
++
++static struct font_data acorndata_8x8 = {
++{ 0, 0, FONTDATAMAX, 0 }, {
+ /* 00 */ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, /* ^@ */
+ /* 01 */ 0x7e, 0x81, 0xa5, 0x81, 0xbd, 0x99, 0x81, 0x7e, /* ^A */
+ /* 02 */ 0x7e, 0xff, 0xbd, 0xff, 0xc3, 0xe7, 0xff, 0x7e, /* ^B */
+@@ -260,14 +263,14 @@ static const unsigned char acorndata_8x8[] = {
+ /* FD */ 0x38, 0x04, 0x18, 0x20, 0x3c, 0x00, 0x00, 0x00,
+ /* FE */ 0x00, 0x00, 0x3c, 0x3c, 0x3c, 0x3c, 0x00, 0x00,
+ /* FF */ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00
+-};
++} };
+
+ const struct font_desc font_acorn_8x8 = {
+ .idx = ACORN8x8_IDX,
+ .name = "Acorn8x8",
+ .width = 8,
+ .height = 8,
+- .data = acorndata_8x8,
++ .data = acorndata_8x8.data,
+ #ifdef CONFIG_ARCH_ACORN
+ .pref = 20,
+ #else
+diff --git a/lib/fonts/font_mini_4x6.c b/lib/fonts/font_mini_4x6.c
+index 838caa1cfef70..1449876c6a270 100644
+--- a/lib/fonts/font_mini_4x6.c
++++ b/lib/fonts/font_mini_4x6.c
+@@ -43,8 +43,8 @@ __END__;
+
+ #define FONTDATAMAX 1536
+
+-static const unsigned char fontdata_mini_4x6[FONTDATAMAX] = {
+-
++static struct font_data fontdata_mini_4x6 = {
++ { 0, 0, FONTDATAMAX, 0 }, {
+ /*{*/
+ /* Char 0: ' ' */
+ 0xee, /*= [*** ] */
+@@ -2145,14 +2145,14 @@ static const unsigned char fontdata_mini_4x6[FONTDATAMAX] = {
+ 0xee, /*= [*** ] */
+ 0x00, /*= [ ] */
+ /*}*/
+-};
++} };
+
+ const struct font_desc font_mini_4x6 = {
+ .idx = MINI4x6_IDX,
+ .name = "MINI4x6",
+ .width = 4,
+ .height = 6,
+- .data = fontdata_mini_4x6,
++ .data = fontdata_mini_4x6.data,
+ .pref = 3,
+ };
+
+diff --git a/lib/fonts/font_pearl_8x8.c b/lib/fonts/font_pearl_8x8.c
+index b15d3c342c5bb..32d65551e7ed2 100644
+--- a/lib/fonts/font_pearl_8x8.c
++++ b/lib/fonts/font_pearl_8x8.c
+@@ -14,8 +14,8 @@
+
+ #define FONTDATAMAX 2048
+
+-static const unsigned char fontdata_pearl8x8[FONTDATAMAX] = {
+-
++static struct font_data fontdata_pearl8x8 = {
++ { 0, 0, FONTDATAMAX, 0 }, {
+ /* 0 0x00 '^@' */
+ 0x00, /* 00000000 */
+ 0x00, /* 00000000 */
+@@ -2575,14 +2575,13 @@ static const unsigned char fontdata_pearl8x8[FONTDATAMAX] = {
+ 0x00, /* 00000000 */
+ 0x00, /* 00000000 */
+ 0x00, /* 00000000 */
+-
+-};
++} };
+
+ const struct font_desc font_pearl_8x8 = {
+ .idx = PEARL8x8_IDX,
+ .name = "PEARL8x8",
+ .width = 8,
+ .height = 8,
+- .data = fontdata_pearl8x8,
++ .data = fontdata_pearl8x8.data,
+ .pref = 2,
+ };
+diff --git a/lib/fonts/font_sun12x22.c b/lib/fonts/font_sun12x22.c
+index 955d6eee3959d..641a6b4dca424 100644
+--- a/lib/fonts/font_sun12x22.c
++++ b/lib/fonts/font_sun12x22.c
+@@ -3,8 +3,8 @@
+
+ #define FONTDATAMAX 11264
+
+-static const unsigned char fontdata_sun12x22[FONTDATAMAX] = {
+-
++static struct font_data fontdata_sun12x22 = {
++ { 0, 0, FONTDATAMAX, 0 }, {
+ /* 0 0x00 '^@' */
+ 0x00, 0x00, /* 000000000000 */
+ 0x00, 0x00, /* 000000000000 */
+@@ -6148,8 +6148,7 @@ static const unsigned char fontdata_sun12x22[FONTDATAMAX] = {
+ 0x00, 0x00, /* 000000000000 */
+ 0x00, 0x00, /* 000000000000 */
+ 0x00, 0x00, /* 000000000000 */
+-
+-};
++} };
+
+
+ const struct font_desc font_sun_12x22 = {
+@@ -6157,7 +6156,7 @@ const struct font_desc font_sun_12x22 = {
+ .name = "SUN12x22",
+ .width = 12,
+ .height = 22,
+- .data = fontdata_sun12x22,
++ .data = fontdata_sun12x22.data,
+ #ifdef __sparc__
+ .pref = 5,
+ #else
+diff --git a/lib/fonts/font_sun8x16.c b/lib/fonts/font_sun8x16.c
+index 03d71e53954ab..193fe6d988e08 100644
+--- a/lib/fonts/font_sun8x16.c
++++ b/lib/fonts/font_sun8x16.c
+@@ -3,7 +3,8 @@
+
+ #define FONTDATAMAX 4096
+
+-static const unsigned char fontdata_sun8x16[FONTDATAMAX] = {
++static struct font_data fontdata_sun8x16 = {
++{ 0, 0, FONTDATAMAX, 0 }, {
+ /* */ 0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,
+ /* */ 0x00,0x00,0x7e,0x81,0xa5,0x81,0x81,0xbd,0x99,0x81,0x81,0x7e,0x00,0x00,0x00,0x00,
+ /* */ 0x00,0x00,0x7e,0xff,0xdb,0xff,0xff,0xc3,0xe7,0xff,0xff,0x7e,0x00,0x00,0x00,0x00,
+@@ -260,14 +261,14 @@ static const unsigned char fontdata_sun8x16[FONTDATAMAX] = {
+ /* */ 0x00,0x70,0xd8,0x30,0x60,0xc8,0xf8,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,
+ /* */ 0x00,0x00,0x00,0x00,0x7c,0x7c,0x7c,0x7c,0x7c,0x7c,0x7c,0x00,0x00,0x00,0x00,0x00,
+ /* */ 0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,
+-};
++} };
+
+ const struct font_desc font_sun_8x16 = {
+ .idx = SUN8x16_IDX,
+ .name = "SUN8x16",
+ .width = 8,
+ .height = 16,
+- .data = fontdata_sun8x16,
++ .data = fontdata_sun8x16.data,
+ #ifdef __sparc__
+ .pref = 10,
+ #else
+diff --git a/lib/fonts/font_ter16x32.c b/lib/fonts/font_ter16x32.c
+index 3f0cf1ccdf3a4..91b9c283bd9cc 100644
+--- a/lib/fonts/font_ter16x32.c
++++ b/lib/fonts/font_ter16x32.c
+@@ -4,8 +4,8 @@
+
+ #define FONTDATAMAX 16384
+
+-static const unsigned char fontdata_ter16x32[FONTDATAMAX] = {
+-
++static struct font_data fontdata_ter16x32 = {
++ { 0, 0, FONTDATAMAX, 0 }, {
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00, 0x7f, 0xfc, 0x7f, 0xfc,
+ 0x70, 0x1c, 0x70, 0x1c, 0x70, 0x1c, 0x70, 0x1c,
+@@ -2054,8 +2054,7 @@ static const unsigned char fontdata_ter16x32[FONTDATAMAX] = {
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, /* 255 */
+-
+-};
++} };
+
+
+ const struct font_desc font_ter_16x32 = {
+@@ -2063,7 +2062,7 @@ const struct font_desc font_ter_16x32 = {
+ .name = "TER16x32",
+ .width = 16,
+ .height = 32,
+- .data = fontdata_ter16x32,
++ .data = fontdata_ter16x32.data,
+ #ifdef __sparc__
+ .pref = 5,
+ #else
+diff --git a/mm/khugepaged.c b/mm/khugepaged.c
+index e6fc7c3e7dc98..bc2812bb1a010 100644
+--- a/mm/khugepaged.c
++++ b/mm/khugepaged.c
+@@ -56,6 +56,9 @@ enum scan_result {
+ #define CREATE_TRACE_POINTS
+ #include <trace/events/huge_memory.h>
+
++static struct task_struct *khugepaged_thread __read_mostly;
++static DEFINE_MUTEX(khugepaged_mutex);
++
+ /* default scan 8*512 pte (or vmas) every 30 second */
+ static unsigned int khugepaged_pages_to_scan __read_mostly;
+ static unsigned int khugepaged_pages_collapsed;
+@@ -914,6 +917,18 @@ static struct page *khugepaged_alloc_hugepage(bool *wait)
+
+ static bool khugepaged_prealloc_page(struct page **hpage, bool *wait)
+ {
++ /*
++ * If the hpage allocated earlier was briefly exposed in page cache
++ * before collapse_file() failed, it is possible that racing lookups
++ * have not yet completed, and would then be unpleasantly surprised by
++ * finding the hpage reused for the same mapping at a different offset.
++ * Just release the previous allocation if there is any danger of that.
++ */
++ if (*hpage && page_count(*hpage) > 1) {
++ put_page(*hpage);
++ *hpage = NULL;
++ }
++
+ if (!*hpage)
+ *hpage = khugepaged_alloc_hugepage(wait);
+
+@@ -2292,8 +2307,6 @@ static void set_recommended_min_free_kbytes(void)
+
+ int start_stop_khugepaged(void)
+ {
+- static struct task_struct *khugepaged_thread __read_mostly;
+- static DEFINE_MUTEX(khugepaged_mutex);
+ int err = 0;
+
+ mutex_lock(&khugepaged_mutex);
+@@ -2320,3 +2333,11 @@ fail:
+ mutex_unlock(&khugepaged_mutex);
+ return err;
+ }
++
++void khugepaged_min_free_kbytes_update(void)
++{
++ mutex_lock(&khugepaged_mutex);
++ if (khugepaged_enabled() && khugepaged_thread)
++ set_recommended_min_free_kbytes();
++ mutex_unlock(&khugepaged_mutex);
++}
+diff --git a/mm/page_alloc.c b/mm/page_alloc.c
+index 898ff44f2c7b2..43f6d91f57156 100644
+--- a/mm/page_alloc.c
++++ b/mm/page_alloc.c
+@@ -69,6 +69,7 @@
+ #include <linux/nmi.h>
+ #include <linux/psi.h>
+ #include <linux/padata.h>
++#include <linux/khugepaged.h>
+
+ #include <asm/sections.h>
+ #include <asm/tlbflush.h>
+@@ -7884,6 +7885,8 @@ int __meminit init_per_zone_wmark_min(void)
+ setup_min_slab_ratio();
+ #endif
+
++ khugepaged_min_free_kbytes_update();
++
+ return 0;
+ }
+ postcore_initcall(init_per_zone_wmark_min)
+diff --git a/net/bridge/br_fdb.c b/net/bridge/br_fdb.c
+index 4877a0db16c66..a8fa622e2d9b4 100644
+--- a/net/bridge/br_fdb.c
++++ b/net/bridge/br_fdb.c
+@@ -404,6 +404,8 @@ void br_fdb_delete_by_port(struct net_bridge *br,
+
+ if (!do_all)
+ if (test_bit(BR_FDB_STATIC, &f->flags) ||
++ (test_bit(BR_FDB_ADDED_BY_EXT_LEARN, &f->flags) &&
++ !test_bit(BR_FDB_OFFLOADED, &f->flags)) ||
+ (vid && f->key.vlan_id != vid))
+ continue;
+
+diff --git a/net/core/skbuff.c b/net/core/skbuff.c
+index c50bd7a7943ab..72f4a3730ecf0 100644
+--- a/net/core/skbuff.c
++++ b/net/core/skbuff.c
+@@ -5621,7 +5621,7 @@ int skb_mpls_push(struct sk_buff *skb, __be32 mpls_lse, __be16 mpls_proto,
+ lse->label_stack_entry = mpls_lse;
+ skb_postpush_rcsum(skb, lse, MPLS_HLEN);
+
+- if (ethernet)
++ if (ethernet && mac_len >= ETH_HLEN)
+ skb_mod_eth_type(skb, eth_hdr(skb), mpls_proto);
+ skb->protocol = mpls_proto;
+
+@@ -5661,7 +5661,7 @@ int skb_mpls_pop(struct sk_buff *skb, __be16 next_proto, int mac_len,
+ skb_reset_mac_header(skb);
+ skb_set_network_header(skb, mac_len);
+
+- if (ethernet) {
++ if (ethernet && mac_len >= ETH_HLEN) {
+ struct ethhdr *hdr;
+
+ /* use mpls_hdr() to get ethertype to account for VLANs. */
+diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
+index 30c1142584b1a..06a8242aa6980 100644
+--- a/net/ipv4/tcp.c
++++ b/net/ipv4/tcp.c
+@@ -970,7 +970,8 @@ ssize_t do_tcp_sendpages(struct sock *sk, struct page *page, int offset,
+ long timeo = sock_sndtimeo(sk, flags & MSG_DONTWAIT);
+
+ if (IS_ENABLED(CONFIG_DEBUG_VM) &&
+- WARN_ONCE(PageSlab(page), "page must not be a Slab one"))
++ WARN_ONCE(!sendpage_ok(page),
++ "page must not be a Slab one and have page_count > 0"))
+ return -EINVAL;
+
+ /* Wait for a connection to finish. One exception is TCP Fast Open
+diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
+index 04bfcbbfee83a..ab4576ac1fe6e 100644
+--- a/net/ipv4/tcp_ipv4.c
++++ b/net/ipv4/tcp_ipv4.c
+@@ -1787,12 +1787,12 @@ bool tcp_add_backlog(struct sock *sk, struct sk_buff *skb)
+
+ __skb_pull(skb, hdrlen);
+ if (skb_try_coalesce(tail, skb, &fragstolen, &delta)) {
+- thtail->window = th->window;
+-
+ TCP_SKB_CB(tail)->end_seq = TCP_SKB_CB(skb)->end_seq;
+
+- if (after(TCP_SKB_CB(skb)->ack_seq, TCP_SKB_CB(tail)->ack_seq))
++ if (likely(!before(TCP_SKB_CB(skb)->ack_seq, TCP_SKB_CB(tail)->ack_seq))) {
+ TCP_SKB_CB(tail)->ack_seq = TCP_SKB_CB(skb)->ack_seq;
++ thtail->window = th->window;
++ }
+
+ /* We have to update both TCP_SKB_CB(tail)->tcp_flags and
+ * thtail->fin, so that the fast path in tcp_rcv_established()
+diff --git a/net/netlink/genetlink.c b/net/netlink/genetlink.c
+index 9395ee8a868db..a4148ef314ea2 100644
+--- a/net/netlink/genetlink.c
++++ b/net/netlink/genetlink.c
+@@ -1079,7 +1079,7 @@ static int ctrl_dumppolicy(struct sk_buff *skb, struct netlink_callback *cb)
+ if (err)
+ return err;
+
+- while (netlink_policy_dump_loop(&cb->args[1])) {
++ while (netlink_policy_dump_loop(cb->args[1])) {
+ void *hdr;
+ struct nlattr *nest;
+
+@@ -1113,6 +1113,12 @@ nla_put_failure:
+ return skb->len;
+ }
+
++static int ctrl_dumppolicy_done(struct netlink_callback *cb)
++{
++ netlink_policy_dump_free(cb->args[1]);
++ return 0;
++}
++
+ static const struct genl_ops genl_ctrl_ops[] = {
+ {
+ .cmd = CTRL_CMD_GETFAMILY,
+@@ -1123,6 +1129,7 @@ static const struct genl_ops genl_ctrl_ops[] = {
+ {
+ .cmd = CTRL_CMD_GETPOLICY,
+ .dumpit = ctrl_dumppolicy,
++ .done = ctrl_dumppolicy_done,
+ },
+ };
+
+diff --git a/net/netlink/policy.c b/net/netlink/policy.c
+index 2b3e26f7496f5..ceeaee157b2f6 100644
+--- a/net/netlink/policy.c
++++ b/net/netlink/policy.c
+@@ -84,7 +84,6 @@ int netlink_policy_dump_start(const struct nla_policy *policy,
+ unsigned int policy_idx;
+ int err;
+
+- /* also returns 0 if "*_state" is our ERR_PTR() end marker */
+ if (*_state)
+ return 0;
+
+@@ -140,21 +139,11 @@ static bool netlink_policy_dump_finished(struct nl_policy_dump *state)
+ !state->policies[state->policy_idx].policy;
+ }
+
+-bool netlink_policy_dump_loop(unsigned long *_state)
++bool netlink_policy_dump_loop(unsigned long _state)
+ {
+- struct nl_policy_dump *state = (void *)*_state;
+-
+- if (IS_ERR(state))
+- return false;
+-
+- if (netlink_policy_dump_finished(state)) {
+- kfree(state);
+- /* store end marker instead of freed state */
+- *_state = (unsigned long)ERR_PTR(-ENOENT);
+- return false;
+- }
++ struct nl_policy_dump *state = (void *)_state;
+
+- return true;
++ return !netlink_policy_dump_finished(state);
+ }
+
+ int netlink_policy_dump_write(struct sk_buff *skb, unsigned long _state)
+@@ -309,3 +298,10 @@ nla_put_failure:
+ nla_nest_cancel(skb, policy);
+ return -ENOBUFS;
+ }
++
++void netlink_policy_dump_free(unsigned long _state)
++{
++ struct nl_policy_dump *state = (void *)_state;
++
++ kfree(state);
++}
+diff --git a/net/openvswitch/conntrack.c b/net/openvswitch/conntrack.c
+index 4340f25fe390f..67dacc7152f75 100644
+--- a/net/openvswitch/conntrack.c
++++ b/net/openvswitch/conntrack.c
+@@ -903,15 +903,19 @@ static int ovs_ct_nat(struct net *net, struct sw_flow_key *key,
+ }
+ err = ovs_ct_nat_execute(skb, ct, ctinfo, &info->range, maniptype);
+
+- if (err == NF_ACCEPT &&
+- ct->status & IPS_SRC_NAT && ct->status & IPS_DST_NAT) {
+- if (maniptype == NF_NAT_MANIP_SRC)
+- maniptype = NF_NAT_MANIP_DST;
+- else
+- maniptype = NF_NAT_MANIP_SRC;
+-
+- err = ovs_ct_nat_execute(skb, ct, ctinfo, &info->range,
+- maniptype);
++ if (err == NF_ACCEPT && ct->status & IPS_DST_NAT) {
++ if (ct->status & IPS_SRC_NAT) {
++ if (maniptype == NF_NAT_MANIP_SRC)
++ maniptype = NF_NAT_MANIP_DST;
++ else
++ maniptype = NF_NAT_MANIP_SRC;
++
++ err = ovs_ct_nat_execute(skb, ct, ctinfo, &info->range,
++ maniptype);
++ } else if (CTINFO2DIR(ctinfo) == IP_CT_DIR_ORIGINAL) {
++ err = ovs_ct_nat_execute(skb, ct, ctinfo, NULL,
++ NF_NAT_MANIP_SRC);
++ }
+ }
+
+ /* Mark NAT done if successful and update the flow key. */
+diff --git a/net/qrtr/ns.c b/net/qrtr/ns.c
+index d8252fdab851a..934999b56d60a 100644
+--- a/net/qrtr/ns.c
++++ b/net/qrtr/ns.c
+@@ -193,12 +193,13 @@ static int announce_servers(struct sockaddr_qrtr *sq)
+ struct qrtr_server *srv;
+ struct qrtr_node *node;
+ void __rcu **slot;
+- int ret;
++ int ret = 0;
+
+ node = node_get(qrtr_ns.local_node);
+ if (!node)
+ return 0;
+
++ rcu_read_lock();
+ /* Announce the list of servers registered in this node */
+ radix_tree_for_each_slot(slot, &node->servers, &iter, 0) {
+ srv = radix_tree_deref_slot(slot);
+@@ -206,11 +207,14 @@ static int announce_servers(struct sockaddr_qrtr *sq)
+ ret = service_announce_new(sq, srv);
+ if (ret < 0) {
+ pr_err("failed to announce new service\n");
+- return ret;
++ goto err_out;
+ }
+ }
+
+- return 0;
++err_out:
++ rcu_read_unlock();
++
++ return ret;
+ }
+
+ static struct qrtr_server *server_add(unsigned int service,
+@@ -335,7 +339,7 @@ static int ctrl_cmd_bye(struct sockaddr_qrtr *from)
+ struct qrtr_node *node;
+ void __rcu **slot;
+ struct kvec iv;
+- int ret;
++ int ret = 0;
+
+ iv.iov_base = &pkt;
+ iv.iov_len = sizeof(pkt);
+@@ -344,11 +348,13 @@ static int ctrl_cmd_bye(struct sockaddr_qrtr *from)
+ if (!node)
+ return 0;
+
++ rcu_read_lock();
+ /* Advertise removal of this client to all servers of remote node */
+ radix_tree_for_each_slot(slot, &node->servers, &iter, 0) {
+ srv = radix_tree_deref_slot(slot);
+ server_del(node, srv->port);
+ }
++ rcu_read_unlock();
+
+ /* Advertise the removal of this client to all local servers */
+ local_node = node_get(qrtr_ns.local_node);
+@@ -359,6 +365,7 @@ static int ctrl_cmd_bye(struct sockaddr_qrtr *from)
+ pkt.cmd = cpu_to_le32(QRTR_TYPE_BYE);
+ pkt.client.node = cpu_to_le32(from->sq_node);
+
++ rcu_read_lock();
+ radix_tree_for_each_slot(slot, &local_node->servers, &iter, 0) {
+ srv = radix_tree_deref_slot(slot);
+
+@@ -372,11 +379,14 @@ static int ctrl_cmd_bye(struct sockaddr_qrtr *from)
+ ret = kernel_sendmsg(qrtr_ns.sock, &msg, &iv, 1, sizeof(pkt));
+ if (ret < 0) {
+ pr_err("failed to send bye cmd\n");
+- return ret;
++ goto err_out;
+ }
+ }
+
+- return 0;
++err_out:
++ rcu_read_unlock();
++
++ return ret;
+ }
+
+ static int ctrl_cmd_del_client(struct sockaddr_qrtr *from,
+@@ -394,7 +404,7 @@ static int ctrl_cmd_del_client(struct sockaddr_qrtr *from,
+ struct list_head *li;
+ void __rcu **slot;
+ struct kvec iv;
+- int ret;
++ int ret = 0;
+
+ iv.iov_base = &pkt;
+ iv.iov_len = sizeof(pkt);
+@@ -434,6 +444,7 @@ static int ctrl_cmd_del_client(struct sockaddr_qrtr *from,
+ pkt.client.node = cpu_to_le32(node_id);
+ pkt.client.port = cpu_to_le32(port);
+
++ rcu_read_lock();
+ radix_tree_for_each_slot(slot, &local_node->servers, &iter, 0) {
+ srv = radix_tree_deref_slot(slot);
+
+@@ -447,11 +458,14 @@ static int ctrl_cmd_del_client(struct sockaddr_qrtr *from,
+ ret = kernel_sendmsg(qrtr_ns.sock, &msg, &iv, 1, sizeof(pkt));
+ if (ret < 0) {
+ pr_err("failed to send del client cmd\n");
+- return ret;
++ goto err_out;
+ }
+ }
+
+- return 0;
++err_out:
++ rcu_read_unlock();
++
++ return ret;
+ }
+
+ static int ctrl_cmd_new_server(struct sockaddr_qrtr *from,
+@@ -554,6 +568,7 @@ static int ctrl_cmd_new_lookup(struct sockaddr_qrtr *from,
+ filter.service = service;
+ filter.instance = instance;
+
++ rcu_read_lock();
+ radix_tree_for_each_slot(node_slot, &nodes, &node_iter, 0) {
+ node = radix_tree_deref_slot(node_slot);
+
+@@ -568,6 +583,7 @@ static int ctrl_cmd_new_lookup(struct sockaddr_qrtr *from,
+ lookup_notify(from, srv, true);
+ }
+ }
++ rcu_read_unlock();
+
+ /* Empty notification, to indicate end of listing */
+ lookup_notify(from, NULL, true);
+diff --git a/net/rxrpc/conn_event.c b/net/rxrpc/conn_event.c
+index 447f55ca68860..6e972b4823efa 100644
+--- a/net/rxrpc/conn_event.c
++++ b/net/rxrpc/conn_event.c
+@@ -340,18 +340,18 @@ static int rxrpc_process_event(struct rxrpc_connection *conn,
+ return ret;
+
+ spin_lock(&conn->channel_lock);
+- spin_lock(&conn->state_lock);
++ spin_lock_bh(&conn->state_lock);
+
+ if (conn->state == RXRPC_CONN_SERVICE_CHALLENGING) {
+ conn->state = RXRPC_CONN_SERVICE;
+- spin_unlock(&conn->state_lock);
++ spin_unlock_bh(&conn->state_lock);
+ for (loop = 0; loop < RXRPC_MAXCALLS; loop++)
+ rxrpc_call_is_secure(
+ rcu_dereference_protected(
+ conn->channels[loop].call,
+ lockdep_is_held(&conn->channel_lock)));
+ } else {
+- spin_unlock(&conn->state_lock);
++ spin_unlock_bh(&conn->state_lock);
+ }
+
+ spin_unlock(&conn->channel_lock);
+diff --git a/net/rxrpc/key.c b/net/rxrpc/key.c
+index 0c98313dd7a8c..85a9ff8cd236a 100644
+--- a/net/rxrpc/key.c
++++ b/net/rxrpc/key.c
+@@ -903,7 +903,7 @@ int rxrpc_request_key(struct rxrpc_sock *rx, char __user *optval, int optlen)
+
+ _enter("");
+
+- if (optlen <= 0 || optlen > PAGE_SIZE - 1)
++ if (optlen <= 0 || optlen > PAGE_SIZE - 1 || rx->securities)
+ return -EINVAL;
+
+ description = memdup_user_nul(optval, optlen);
+@@ -941,7 +941,7 @@ int rxrpc_server_keyring(struct rxrpc_sock *rx, char __user *optval,
+ if (IS_ERR(description))
+ return PTR_ERR(description);
+
+- key = request_key_net(&key_type_keyring, description, sock_net(&rx->sk), NULL);
++ key = request_key(&key_type_keyring, description, NULL);
+ if (IS_ERR(key)) {
+ kfree(description);
+ _leave(" = %ld", PTR_ERR(key));
+@@ -1073,7 +1073,7 @@ static long rxrpc_read(const struct key *key,
+
+ switch (token->security_index) {
+ case RXRPC_SECURITY_RXKAD:
+- toksize += 9 * 4; /* viceid, kvno, key*2 + len, begin,
++ toksize += 8 * 4; /* viceid, kvno, key*2, begin,
+ * end, primary, tktlen */
+ toksize += RND(token->kad->ticket_len);
+ break;
+@@ -1108,7 +1108,8 @@ static long rxrpc_read(const struct key *key,
+ break;
+
+ default: /* we have a ticket we can't encode */
+- BUG();
++ pr_err("Unsupported key token type (%u)\n",
++ token->security_index);
+ continue;
+ }
+
+@@ -1139,6 +1140,14 @@ static long rxrpc_read(const struct key *key,
+ memcpy((u8 *)xdr + _l, &zero, 4 - (_l & 3)); \
+ xdr += (_l + 3) >> 2; \
+ } while(0)
++#define ENCODE_BYTES(l, s) \
++ do { \
++ u32 _l = (l); \
++ memcpy(xdr, (s), _l); \
++ if (_l & 3) \
++ memcpy((u8 *)xdr + _l, &zero, 4 - (_l & 3)); \
++ xdr += (_l + 3) >> 2; \
++ } while(0)
+ #define ENCODE64(x) \
+ do { \
+ __be64 y = cpu_to_be64(x); \
+@@ -1166,7 +1175,7 @@ static long rxrpc_read(const struct key *key,
+ case RXRPC_SECURITY_RXKAD:
+ ENCODE(token->kad->vice_id);
+ ENCODE(token->kad->kvno);
+- ENCODE_DATA(8, token->kad->session_key);
++ ENCODE_BYTES(8, token->kad->session_key);
+ ENCODE(token->kad->start);
+ ENCODE(token->kad->expiry);
+ ENCODE(token->kad->primary_flag);
+@@ -1216,7 +1225,6 @@ static long rxrpc_read(const struct key *key,
+ break;
+
+ default:
+- BUG();
+ break;
+ }
+
+diff --git a/net/sched/act_api.c b/net/sched/act_api.c
+index 8ac7eb0a83096..aa69fc4ce39d9 100644
+--- a/net/sched/act_api.c
++++ b/net/sched/act_api.c
+@@ -307,6 +307,8 @@ static int tcf_del_walker(struct tcf_idrinfo *idrinfo, struct sk_buff *skb,
+
+ mutex_lock(&idrinfo->lock);
+ idr_for_each_entry_ul(idr, p, tmp, id) {
++ if (IS_ERR(p))
++ continue;
+ ret = tcf_idr_release_unsafe(p);
+ if (ret == ACT_P_DELETED) {
+ module_put(ops->owner);
+@@ -467,17 +469,6 @@ int tcf_idr_create_from_flags(struct tc_action_net *tn, u32 index,
+ }
+ EXPORT_SYMBOL(tcf_idr_create_from_flags);
+
+-void tcf_idr_insert(struct tc_action_net *tn, struct tc_action *a)
+-{
+- struct tcf_idrinfo *idrinfo = tn->idrinfo;
+-
+- mutex_lock(&idrinfo->lock);
+- /* Replace ERR_PTR(-EBUSY) allocated by tcf_idr_check_alloc */
+- WARN_ON(!IS_ERR(idr_replace(&idrinfo->action_idr, a, a->tcfa_index)));
+- mutex_unlock(&idrinfo->lock);
+-}
+-EXPORT_SYMBOL(tcf_idr_insert);
+-
+ /* Cleanup idr index that was allocated but not initialized. */
+
+ void tcf_idr_cleanup(struct tc_action_net *tn, u32 index)
+@@ -902,6 +893,26 @@ static const struct nla_policy tcf_action_policy[TCA_ACT_MAX + 1] = {
+ [TCA_ACT_HW_STATS] = NLA_POLICY_BITFIELD32(TCA_ACT_HW_STATS_ANY),
+ };
+
++static void tcf_idr_insert_many(struct tc_action *actions[])
++{
++ int i;
++
++ for (i = 0; i < TCA_ACT_MAX_PRIO; i++) {
++ struct tc_action *a = actions[i];
++ struct tcf_idrinfo *idrinfo;
++
++ if (!a)
++ continue;
++ idrinfo = a->idrinfo;
++ mutex_lock(&idrinfo->lock);
++ /* Replace ERR_PTR(-EBUSY) allocated by tcf_idr_check_alloc if
++ * it is just created, otherwise this is just a nop.
++ */
++ idr_replace(&idrinfo->action_idr, a, a->tcfa_index);
++ mutex_unlock(&idrinfo->lock);
++ }
++}
++
+ struct tc_action *tcf_action_init_1(struct net *net, struct tcf_proto *tp,
+ struct nlattr *nla, struct nlattr *est,
+ char *name, int ovr, int bind,
+@@ -989,6 +1000,13 @@ struct tc_action *tcf_action_init_1(struct net *net, struct tcf_proto *tp,
+ if (err < 0)
+ goto err_mod;
+
++ if (TC_ACT_EXT_CMP(a->tcfa_action, TC_ACT_GOTO_CHAIN) &&
++ !rcu_access_pointer(a->goto_chain)) {
++ tcf_action_destroy_1(a, bind);
++ NL_SET_ERR_MSG(extack, "can't use goto chain with NULL chain");
++ return ERR_PTR(-EINVAL);
++ }
++
+ if (!name && tb[TCA_ACT_COOKIE])
+ tcf_set_action_cookie(&a->act_cookie, cookie);
+
+@@ -1002,13 +1020,6 @@ struct tc_action *tcf_action_init_1(struct net *net, struct tcf_proto *tp,
+ if (err != ACT_P_CREATED)
+ module_put(a_o->owner);
+
+- if (TC_ACT_EXT_CMP(a->tcfa_action, TC_ACT_GOTO_CHAIN) &&
+- !rcu_access_pointer(a->goto_chain)) {
+- tcf_action_destroy_1(a, bind);
+- NL_SET_ERR_MSG(extack, "can't use goto chain with NULL chain");
+- return ERR_PTR(-EINVAL);
+- }
+-
+ return a;
+
+ err_mod:
+@@ -1051,6 +1062,11 @@ int tcf_action_init(struct net *net, struct tcf_proto *tp, struct nlattr *nla,
+ actions[i - 1] = act;
+ }
+
++ /* We have to commit them all together, because if any error happened in
++ * between, we could not handle the failure gracefully.
++ */
++ tcf_idr_insert_many(actions);
++
+ *attr_size = tcf_action_full_attrs_size(sz);
+ return i - 1;
+
+diff --git a/net/sched/act_bpf.c b/net/sched/act_bpf.c
+index 54d5652cfe6ca..a4c7ba35a3438 100644
+--- a/net/sched/act_bpf.c
++++ b/net/sched/act_bpf.c
+@@ -365,9 +365,7 @@ static int tcf_bpf_init(struct net *net, struct nlattr *nla,
+ if (goto_ch)
+ tcf_chain_put_by_act(goto_ch);
+
+- if (res == ACT_P_CREATED) {
+- tcf_idr_insert(tn, *act);
+- } else {
++ if (res != ACT_P_CREATED) {
+ /* make sure the program being replaced is no longer executing */
+ synchronize_rcu();
+ tcf_bpf_cfg_cleanup(&old);
+diff --git a/net/sched/act_connmark.c b/net/sched/act_connmark.c
+index f901421b0634d..e19885d7fe2cb 100644
+--- a/net/sched/act_connmark.c
++++ b/net/sched/act_connmark.c
+@@ -139,7 +139,6 @@ static int tcf_connmark_init(struct net *net, struct nlattr *nla,
+ ci->net = net;
+ ci->zone = parm->zone;
+
+- tcf_idr_insert(tn, *a);
+ ret = ACT_P_CREATED;
+ } else if (ret > 0) {
+ ci = to_connmark(*a);
+diff --git a/net/sched/act_csum.c b/net/sched/act_csum.c
+index c60674cf25c4f..8b3f45cdc319d 100644
+--- a/net/sched/act_csum.c
++++ b/net/sched/act_csum.c
+@@ -110,9 +110,6 @@ static int tcf_csum_init(struct net *net, struct nlattr *nla,
+ if (params_new)
+ kfree_rcu(params_new, rcu);
+
+- if (ret == ACT_P_CREATED)
+- tcf_idr_insert(tn, *a);
+-
+ return ret;
+ put_chain:
+ if (goto_ch)
+diff --git a/net/sched/act_ct.c b/net/sched/act_ct.c
+index 41d8440deaf14..0eb4722cf7cd9 100644
+--- a/net/sched/act_ct.c
++++ b/net/sched/act_ct.c
+@@ -1293,8 +1293,6 @@ static int tcf_ct_init(struct net *net, struct nlattr *nla,
+ tcf_chain_put_by_act(goto_ch);
+ if (params)
+ call_rcu(¶ms->rcu, tcf_ct_params_free);
+- if (res == ACT_P_CREATED)
+- tcf_idr_insert(tn, *a);
+
+ return res;
+
+diff --git a/net/sched/act_ctinfo.c b/net/sched/act_ctinfo.c
+index b5042f3ea079e..6084300e51adb 100644
+--- a/net/sched/act_ctinfo.c
++++ b/net/sched/act_ctinfo.c
+@@ -269,9 +269,6 @@ static int tcf_ctinfo_init(struct net *net, struct nlattr *nla,
+ if (cp_new)
+ kfree_rcu(cp_new, rcu);
+
+- if (ret == ACT_P_CREATED)
+- tcf_idr_insert(tn, *a);
+-
+ return ret;
+
+ put_chain:
+diff --git a/net/sched/act_gact.c b/net/sched/act_gact.c
+index 4160657727196..1a29751ae63b5 100644
+--- a/net/sched/act_gact.c
++++ b/net/sched/act_gact.c
+@@ -140,8 +140,6 @@ static int tcf_gact_init(struct net *net, struct nlattr *nla,
+ if (goto_ch)
+ tcf_chain_put_by_act(goto_ch);
+
+- if (ret == ACT_P_CREATED)
+- tcf_idr_insert(tn, *a);
+ return ret;
+ release_idr:
+ tcf_idr_release(*a, bind);
+diff --git a/net/sched/act_gate.c b/net/sched/act_gate.c
+index 323ae7f6315d4..c86e7fa7b2208 100644
+--- a/net/sched/act_gate.c
++++ b/net/sched/act_gate.c
+@@ -437,9 +437,6 @@ static int tcf_gate_init(struct net *net, struct nlattr *nla,
+ if (goto_ch)
+ tcf_chain_put_by_act(goto_ch);
+
+- if (ret == ACT_P_CREATED)
+- tcf_idr_insert(tn, *a);
+-
+ return ret;
+
+ chain_put:
+diff --git a/net/sched/act_ife.c b/net/sched/act_ife.c
+index 5c568757643b2..a2ddea04183af 100644
+--- a/net/sched/act_ife.c
++++ b/net/sched/act_ife.c
+@@ -627,9 +627,6 @@ static int tcf_ife_init(struct net *net, struct nlattr *nla,
+ if (p)
+ kfree_rcu(p, rcu);
+
+- if (ret == ACT_P_CREATED)
+- tcf_idr_insert(tn, *a);
+-
+ return ret;
+ metadata_parse_err:
+ if (goto_ch)
+diff --git a/net/sched/act_ipt.c b/net/sched/act_ipt.c
+index 400a2cfe84522..8dc3bec0d3258 100644
+--- a/net/sched/act_ipt.c
++++ b/net/sched/act_ipt.c
+@@ -189,8 +189,6 @@ static int __tcf_ipt_init(struct net *net, unsigned int id, struct nlattr *nla,
+ ipt->tcfi_t = t;
+ ipt->tcfi_hook = hook;
+ spin_unlock_bh(&ipt->tcf_lock);
+- if (ret == ACT_P_CREATED)
+- tcf_idr_insert(tn, *a);
+ return ret;
+
+ err3:
+diff --git a/net/sched/act_mirred.c b/net/sched/act_mirred.c
+index 83dd82fc9f40c..cd3a7f814fc8c 100644
+--- a/net/sched/act_mirred.c
++++ b/net/sched/act_mirred.c
+@@ -194,8 +194,6 @@ static int tcf_mirred_init(struct net *net, struct nlattr *nla,
+ spin_lock(&mirred_list_lock);
+ list_add(&m->tcfm_list, &mirred_list);
+ spin_unlock(&mirred_list_lock);
+-
+- tcf_idr_insert(tn, *a);
+ }
+
+ return ret;
+diff --git a/net/sched/act_mpls.c b/net/sched/act_mpls.c
+index 8118e26409796..e298ec3b3c9e3 100644
+--- a/net/sched/act_mpls.c
++++ b/net/sched/act_mpls.c
+@@ -273,8 +273,6 @@ static int tcf_mpls_init(struct net *net, struct nlattr *nla,
+ if (p)
+ kfree_rcu(p, rcu);
+
+- if (ret == ACT_P_CREATED)
+- tcf_idr_insert(tn, *a);
+ return ret;
+ put_chain:
+ if (goto_ch)
+diff --git a/net/sched/act_nat.c b/net/sched/act_nat.c
+index 855a6fa16a621..1ebd2a86d980f 100644
+--- a/net/sched/act_nat.c
++++ b/net/sched/act_nat.c
+@@ -93,9 +93,6 @@ static int tcf_nat_init(struct net *net, struct nlattr *nla, struct nlattr *est,
+ if (goto_ch)
+ tcf_chain_put_by_act(goto_ch);
+
+- if (ret == ACT_P_CREATED)
+- tcf_idr_insert(tn, *a);
+-
+ return ret;
+ release_idr:
+ tcf_idr_release(*a, bind);
+diff --git a/net/sched/act_pedit.c b/net/sched/act_pedit.c
+index d41d6200d9dec..ed1700fe662e1 100644
+--- a/net/sched/act_pedit.c
++++ b/net/sched/act_pedit.c
+@@ -238,8 +238,6 @@ static int tcf_pedit_init(struct net *net, struct nlattr *nla,
+ spin_unlock_bh(&p->tcf_lock);
+ if (goto_ch)
+ tcf_chain_put_by_act(goto_ch);
+- if (ret == ACT_P_CREATED)
+- tcf_idr_insert(tn, *a);
+ return ret;
+
+ put_chain:
+diff --git a/net/sched/act_police.c b/net/sched/act_police.c
+index 8b7a0ac96c516..2d236b9a411f0 100644
+--- a/net/sched/act_police.c
++++ b/net/sched/act_police.c
+@@ -201,8 +201,6 @@ static int tcf_police_init(struct net *net, struct nlattr *nla,
+ if (new)
+ kfree_rcu(new, rcu);
+
+- if (ret == ACT_P_CREATED)
+- tcf_idr_insert(tn, *a);
+ return ret;
+
+ failure:
+diff --git a/net/sched/act_sample.c b/net/sched/act_sample.c
+index 5e2df590bb58a..3ebf9ede3cf10 100644
+--- a/net/sched/act_sample.c
++++ b/net/sched/act_sample.c
+@@ -116,8 +116,6 @@ static int tcf_sample_init(struct net *net, struct nlattr *nla,
+ if (goto_ch)
+ tcf_chain_put_by_act(goto_ch);
+
+- if (ret == ACT_P_CREATED)
+- tcf_idr_insert(tn, *a);
+ return ret;
+ put_chain:
+ if (goto_ch)
+diff --git a/net/sched/act_simple.c b/net/sched/act_simple.c
+index 9813ca4006dd1..a4f3d0f0daa96 100644
+--- a/net/sched/act_simple.c
++++ b/net/sched/act_simple.c
+@@ -157,8 +157,6 @@ static int tcf_simp_init(struct net *net, struct nlattr *nla,
+ goto release_idr;
+ }
+
+- if (ret == ACT_P_CREATED)
+- tcf_idr_insert(tn, *a);
+ return ret;
+ put_chain:
+ if (goto_ch)
+diff --git a/net/sched/act_skbedit.c b/net/sched/act_skbedit.c
+index b2b3faa57294c..8012ae84847b8 100644
+--- a/net/sched/act_skbedit.c
++++ b/net/sched/act_skbedit.c
+@@ -224,8 +224,6 @@ static int tcf_skbedit_init(struct net *net, struct nlattr *nla,
+ if (goto_ch)
+ tcf_chain_put_by_act(goto_ch);
+
+- if (ret == ACT_P_CREATED)
+- tcf_idr_insert(tn, *a);
+ return ret;
+ put_chain:
+ if (goto_ch)
+diff --git a/net/sched/act_skbmod.c b/net/sched/act_skbmod.c
+index 39e6d94cfafbf..81a1c67335be6 100644
+--- a/net/sched/act_skbmod.c
++++ b/net/sched/act_skbmod.c
+@@ -190,8 +190,6 @@ static int tcf_skbmod_init(struct net *net, struct nlattr *nla,
+ if (goto_ch)
+ tcf_chain_put_by_act(goto_ch);
+
+- if (ret == ACT_P_CREATED)
+- tcf_idr_insert(tn, *a);
+ return ret;
+ put_chain:
+ if (goto_ch)
+diff --git a/net/sched/act_tunnel_key.c b/net/sched/act_tunnel_key.c
+index 536c4bc31be60..23cf8469a2e7c 100644
+--- a/net/sched/act_tunnel_key.c
++++ b/net/sched/act_tunnel_key.c
+@@ -536,9 +536,6 @@ static int tunnel_key_init(struct net *net, struct nlattr *nla,
+ if (goto_ch)
+ tcf_chain_put_by_act(goto_ch);
+
+- if (ret == ACT_P_CREATED)
+- tcf_idr_insert(tn, *a);
+-
+ return ret;
+
+ put_chain:
+diff --git a/net/sched/act_vlan.c b/net/sched/act_vlan.c
+index c91d3958fcbb8..68137d7519d01 100644
+--- a/net/sched/act_vlan.c
++++ b/net/sched/act_vlan.c
+@@ -229,8 +229,6 @@ static int tcf_vlan_init(struct net *net, struct nlattr *nla,
+ if (p)
+ kfree_rcu(p, rcu);
+
+- if (ret == ACT_P_CREATED)
+- tcf_idr_insert(tn, *a);
+ return ret;
+ put_chain:
+ if (goto_ch)
+diff --git a/net/sctp/auth.c b/net/sctp/auth.c
+index 83e97e8892e05..155aac6943d1b 100644
+--- a/net/sctp/auth.c
++++ b/net/sctp/auth.c
+@@ -494,6 +494,7 @@ int sctp_auth_init_hmacs(struct sctp_endpoint *ep, gfp_t gfp)
+ out_err:
+ /* Clean up any successful allocations */
+ sctp_auth_destroy_hmacs(ep->auth_hmacs);
++ ep->auth_hmacs = NULL;
+ return -ENOMEM;
+ }
+
+diff --git a/net/tls/tls_sw.c b/net/tls/tls_sw.c
+index 24f64bc0de18c..d8bc02b185f32 100644
+--- a/net/tls/tls_sw.c
++++ b/net/tls/tls_sw.c
+@@ -2142,10 +2142,15 @@ void tls_sw_release_resources_tx(struct sock *sk)
+ struct tls_context *tls_ctx = tls_get_ctx(sk);
+ struct tls_sw_context_tx *ctx = tls_sw_ctx_tx(tls_ctx);
+ struct tls_rec *rec, *tmp;
++ int pending;
+
+ /* Wait for any pending async encryptions to complete */
+- smp_store_mb(ctx->async_notify, true);
+- if (atomic_read(&ctx->encrypt_pending))
++ spin_lock_bh(&ctx->encrypt_compl_lock);
++ ctx->async_notify = true;
++ pending = atomic_read(&ctx->encrypt_pending);
++ spin_unlock_bh(&ctx->encrypt_compl_lock);
++
++ if (pending)
+ crypto_wait_req(-EINPROGRESS, &ctx->async_wait);
+
+ tls_tx_records(sk, -1);
+diff --git a/net/wireless/nl80211.c b/net/wireless/nl80211.c
+index 279c87a2a523b..4d7b255067225 100644
+--- a/net/wireless/nl80211.c
++++ b/net/wireless/nl80211.c
+@@ -4172,6 +4172,9 @@ static int nl80211_del_key(struct sk_buff *skb, struct genl_info *info)
+ if (err)
+ return err;
+
++ if (key.idx < 0)
++ return -EINVAL;
++
+ if (info->attrs[NL80211_ATTR_MAC])
+ mac_addr = nla_data(info->attrs[NL80211_ATTR_MAC]);
+
+diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
+index 3700266229f63..dcce888b8ef54 100644
+--- a/net/xdp/xsk.c
++++ b/net/xdp/xsk.c
+@@ -375,15 +375,30 @@ static int xsk_generic_xmit(struct sock *sk)
+ skb_shinfo(skb)->destructor_arg = (void *)(long)desc.addr;
+ skb->destructor = xsk_destruct_skb;
+
++ /* Hinder dev_direct_xmit from freeing the packet and
++ * therefore completing it in the destructor
++ */
++ refcount_inc(&skb->users);
+ err = dev_direct_xmit(skb, xs->queue_id);
++ if (err == NETDEV_TX_BUSY) {
++ /* Tell user-space to retry the send */
++ skb->destructor = sock_wfree;
++ /* Free skb without triggering the perf drop trace */
++ consume_skb(skb);
++ err = -EAGAIN;
++ goto out;
++ }
++
+ xskq_cons_release(xs->tx);
+ /* Ignore NET_XMIT_CN as packet might have been sent */
+- if (err == NET_XMIT_DROP || err == NETDEV_TX_BUSY) {
++ if (err == NET_XMIT_DROP) {
+ /* SKB completed but not sent */
++ kfree_skb(skb);
+ err = -EBUSY;
+ goto out;
+ }
+
++ consume_skb(skb);
+ sent_frame = true;
+ }
+
+diff --git a/net/xfrm/espintcp.c b/net/xfrm/espintcp.c
+index 827ccdf2db57f..1f08ebf7d80c5 100644
+--- a/net/xfrm/espintcp.c
++++ b/net/xfrm/espintcp.c
+@@ -29,8 +29,12 @@ static void handle_nonesp(struct espintcp_ctx *ctx, struct sk_buff *skb,
+
+ static void handle_esp(struct sk_buff *skb, struct sock *sk)
+ {
++ struct tcp_skb_cb *tcp_cb = (struct tcp_skb_cb *)skb->cb;
++
+ skb_reset_transport_header(skb);
+- memset(skb->cb, 0, sizeof(skb->cb));
++
++ /* restore IP CB, we need at least IP6CB->nhoff */
++ memmove(skb->cb, &tcp_cb->header, sizeof(tcp_cb->header));
+
+ rcu_read_lock();
+ skb->dev = dev_get_by_index_rcu(sock_net(sk), skb->skb_iif);
+diff --git a/net/xfrm/xfrm_interface.c b/net/xfrm/xfrm_interface.c
+index b615729812e5a..ade2eba863b39 100644
+--- a/net/xfrm/xfrm_interface.c
++++ b/net/xfrm/xfrm_interface.c
+@@ -292,7 +292,7 @@ xfrmi_xmit2(struct sk_buff *skb, struct net_device *dev, struct flowi *fl)
+ }
+
+ mtu = dst_mtu(dst);
+- if (!skb->ignore_df && skb->len > mtu) {
++ if (skb->len > mtu) {
+ skb_dst_update_pmtu_no_confirm(skb, mtu);
+
+ if (skb->protocol == htons(ETH_P_IPV6)) {
+diff --git a/net/xfrm/xfrm_state.c b/net/xfrm/xfrm_state.c
+index 8be2d926acc21..158510cd34ae8 100644
+--- a/net/xfrm/xfrm_state.c
++++ b/net/xfrm/xfrm_state.c
+@@ -1019,7 +1019,8 @@ static void xfrm_state_look_at(struct xfrm_policy *pol, struct xfrm_state *x,
+ */
+ if (x->km.state == XFRM_STATE_VALID) {
+ if ((x->sel.family &&
+- !xfrm_selector_match(&x->sel, fl, x->sel.family)) ||
++ (x->sel.family != family ||
++ !xfrm_selector_match(&x->sel, fl, family))) ||
+ !security_xfrm_state_pol_flow_match(x, pol, fl))
+ return;
+
+@@ -1032,7 +1033,9 @@ static void xfrm_state_look_at(struct xfrm_policy *pol, struct xfrm_state *x,
+ *acq_in_progress = 1;
+ } else if (x->km.state == XFRM_STATE_ERROR ||
+ x->km.state == XFRM_STATE_EXPIRED) {
+- if (xfrm_selector_match(&x->sel, fl, x->sel.family) &&
++ if ((!x->sel.family ||
++ (x->sel.family == family &&
++ xfrm_selector_match(&x->sel, fl, family))) &&
+ security_xfrm_state_pol_flow_match(x, pol, fl))
+ *error = -ESRCH;
+ }
+@@ -1072,7 +1075,7 @@ xfrm_state_find(const xfrm_address_t *daddr, const xfrm_address_t *saddr,
+ tmpl->mode == x->props.mode &&
+ tmpl->id.proto == x->id.proto &&
+ (tmpl->id.spi == x->id.spi || !tmpl->id.spi))
+- xfrm_state_look_at(pol, x, fl, encap_family,
++ xfrm_state_look_at(pol, x, fl, family,
+ &best, &acquire_in_progress, &error);
+ }
+ if (best || acquire_in_progress)
+@@ -1089,7 +1092,7 @@ xfrm_state_find(const xfrm_address_t *daddr, const xfrm_address_t *saddr,
+ tmpl->mode == x->props.mode &&
+ tmpl->id.proto == x->id.proto &&
+ (tmpl->id.spi == x->id.spi || !tmpl->id.spi))
+- xfrm_state_look_at(pol, x, fl, encap_family,
++ xfrm_state_look_at(pol, x, fl, family,
+ &best, &acquire_in_progress, &error);
+ }
+
+@@ -1441,6 +1444,30 @@ out:
+ EXPORT_SYMBOL(xfrm_state_add);
+
+ #ifdef CONFIG_XFRM_MIGRATE
++static inline int clone_security(struct xfrm_state *x, struct xfrm_sec_ctx *security)
++{
++ struct xfrm_user_sec_ctx *uctx;
++ int size = sizeof(*uctx) + security->ctx_len;
++ int err;
++
++ uctx = kmalloc(size, GFP_KERNEL);
++ if (!uctx)
++ return -ENOMEM;
++
++ uctx->exttype = XFRMA_SEC_CTX;
++ uctx->len = size;
++ uctx->ctx_doi = security->ctx_doi;
++ uctx->ctx_alg = security->ctx_alg;
++ uctx->ctx_len = security->ctx_len;
++ memcpy(uctx + 1, security->ctx_str, security->ctx_len);
++ err = security_xfrm_state_alloc(x, uctx);
++ kfree(uctx);
++ if (err)
++ return err;
++
++ return 0;
++}
++
+ static struct xfrm_state *xfrm_state_clone(struct xfrm_state *orig,
+ struct xfrm_encap_tmpl *encap)
+ {
+@@ -1497,6 +1524,10 @@ static struct xfrm_state *xfrm_state_clone(struct xfrm_state *orig,
+ goto error;
+ }
+
++ if (orig->security)
++ if (clone_security(x, orig->security))
++ goto error;
++
+ if (orig->coaddr) {
+ x->coaddr = kmemdup(orig->coaddr, sizeof(*x->coaddr),
+ GFP_KERNEL);
+@@ -1510,6 +1541,7 @@ static struct xfrm_state *xfrm_state_clone(struct xfrm_state *orig,
+ }
+
+ memcpy(&x->mark, &orig->mark, sizeof(x->mark));
++ memcpy(&x->props.smark, &orig->props.smark, sizeof(x->props.smark));
+
+ if (xfrm_init_state(x) < 0)
+ goto error;
+@@ -1521,7 +1553,7 @@ static struct xfrm_state *xfrm_state_clone(struct xfrm_state *orig,
+ x->tfcpad = orig->tfcpad;
+ x->replay_maxdiff = orig->replay_maxdiff;
+ x->replay_maxage = orig->replay_maxage;
+- x->curlft.add_time = orig->curlft.add_time;
++ memcpy(&x->curlft, &orig->curlft, sizeof(x->curlft));
+ x->km.state = orig->km.state;
+ x->km.seq = orig->km.seq;
+ x->replay = orig->replay;
^ permalink raw reply related [flat|nested] 30+ messages in thread
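The `clone_security()` helper added in the xfrm patch above uses a common kernel idiom for variable-length attributes: the buffer is allocated as a fixed header plus trailing payload in one `kmalloc()`, and `uctx + 1` addresses the first byte after the header. A minimal user-space sketch of that layout follows; the struct names and field widths are invented for illustration, not the real kernel definitions.

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Simplified stand-ins for the kernel's xfrm security-context structs. */
struct sec_ctx {
	unsigned char ctx_doi;
	unsigned char ctx_alg;
	unsigned short ctx_len;
	char ctx_str[32];	/* label bytes; ctx_len of them are valid */
};

struct user_sec_ctx {
	unsigned short len;	/* total size: header + label bytes */
	unsigned short exttype;
	unsigned char ctx_doi;
	unsigned char ctx_alg;
	unsigned short ctx_len;
	/* label bytes follow the header, as in memcpy(uctx + 1, ...) */
};

/* Mirrors clone_security(): allocate header + payload in one buffer,
 * fill the header, then copy the label just past it. */
static struct user_sec_ctx *build_user_ctx(const struct sec_ctx *sec)
{
	size_t size = sizeof(struct user_sec_ctx) + sec->ctx_len;
	struct user_sec_ctx *uctx = malloc(size);

	if (!uctx)
		return NULL;

	uctx->len = (unsigned short)size;
	uctx->exttype = 0;	/* XFRMA_SEC_CTX in the kernel */
	uctx->ctx_doi = sec->ctx_doi;
	uctx->ctx_alg = sec->ctx_alg;
	uctx->ctx_len = sec->ctx_len;
	memcpy(uctx + 1, sec->ctx_str, sec->ctx_len); /* payload after header */
	return uctx;
}
```

As in the patch, the caller owns the temporary buffer and frees it once the consumer (`security_xfrm_state_alloc()` in the kernel) has taken its own copy.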
* [gentoo-commits] proj/linux-patches:5.8 commit in: /
@ 2020-10-17 10:19 Mike Pagano
0 siblings, 0 replies; 30+ messages in thread
From: Mike Pagano @ 2020-10-17 10:19 UTC (permalink / raw
To: gentoo-commits
commit: 2e94a538736e0df7e53bd421bf26abe9f50aef6a
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sat Oct 17 10:19:31 2020 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sat Oct 17 10:19:31 2020 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=2e94a538
Linux patch 5.8.16
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1015_linux-5.8.16.patch | 534 ++++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 538 insertions(+)
diff --git a/0000_README b/0000_README
index 3400494..e29fc26 100644
--- a/0000_README
+++ b/0000_README
@@ -103,6 +103,10 @@ Patch: 1014_linux-5.8.15.patch
From: http://www.kernel.org
Desc: Linux 5.8.15
+Patch: 1015_linux-5.8.16.patch
+From: http://www.kernel.org
+Desc: Linux 5.8.16
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1015_linux-5.8.16.patch b/1015_linux-5.8.16.patch
new file mode 100644
index 0000000..38ab50b
--- /dev/null
+++ b/1015_linux-5.8.16.patch
@@ -0,0 +1,534 @@
+diff --git a/Makefile b/Makefile
+index 6c787cd1cb514..a4622ef65436e 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 8
+-SUBLEVEL = 15
++SUBLEVEL = 16
+ EXTRAVERSION =
+ NAME = Kleptomaniac Octopus
+
+diff --git a/drivers/crypto/bcm/cipher.c b/drivers/crypto/bcm/cipher.c
+index a353217a0d33e..530c7d9437dd6 100644
+--- a/drivers/crypto/bcm/cipher.c
++++ b/drivers/crypto/bcm/cipher.c
+@@ -2930,7 +2930,6 @@ static int aead_gcm_ccm_setkey(struct crypto_aead *cipher,
+
+ ctx->enckeylen = keylen;
+ ctx->authkeylen = 0;
+- memcpy(ctx->enckey, key, ctx->enckeylen);
+
+ switch (ctx->enckeylen) {
+ case AES_KEYSIZE_128:
+@@ -2946,6 +2945,8 @@ static int aead_gcm_ccm_setkey(struct crypto_aead *cipher,
+ goto badkey;
+ }
+
++ memcpy(ctx->enckey, key, ctx->enckeylen);
++
+ flow_log(" enckeylen:%u authkeylen:%u\n", ctx->enckeylen,
+ ctx->authkeylen);
+ flow_dump(" enc: ", ctx->enckey, ctx->enckeylen);
+@@ -3000,6 +3001,10 @@ static int aead_gcm_esp_setkey(struct crypto_aead *cipher,
+ struct iproc_ctx_s *ctx = crypto_aead_ctx(cipher);
+
+ flow_log("%s\n", __func__);
++
++ if (keylen < GCM_ESP_SALT_SIZE)
++ return -EINVAL;
++
+ ctx->salt_len = GCM_ESP_SALT_SIZE;
+ ctx->salt_offset = GCM_ESP_SALT_OFFSET;
+ memcpy(ctx->salt, key + keylen - GCM_ESP_SALT_SIZE, GCM_ESP_SALT_SIZE);
+@@ -3028,6 +3033,10 @@ static int rfc4543_gcm_esp_setkey(struct crypto_aead *cipher,
+ struct iproc_ctx_s *ctx = crypto_aead_ctx(cipher);
+
+ flow_log("%s\n", __func__);
++
++ if (keylen < GCM_ESP_SALT_SIZE)
++ return -EINVAL;
++
+ ctx->salt_len = GCM_ESP_SALT_SIZE;
+ ctx->salt_offset = GCM_ESP_SALT_OFFSET;
+ memcpy(ctx->salt, key + keylen - GCM_ESP_SALT_SIZE, GCM_ESP_SALT_SIZE);
+@@ -3057,6 +3066,10 @@ static int aead_ccm_esp_setkey(struct crypto_aead *cipher,
+ struct iproc_ctx_s *ctx = crypto_aead_ctx(cipher);
+
+ flow_log("%s\n", __func__);
++
++ if (keylen < CCM_ESP_SALT_SIZE)
++ return -EINVAL;
++
+ ctx->salt_len = CCM_ESP_SALT_SIZE;
+ ctx->salt_offset = CCM_ESP_SALT_OFFSET;
+ memcpy(ctx->salt, key + keylen - CCM_ESP_SALT_SIZE, CCM_ESP_SALT_SIZE);
+diff --git a/drivers/crypto/qat/qat_common/qat_algs.c b/drivers/crypto/qat/qat_common/qat_algs.c
+index 1b050391c0c90..02dadcb8852fe 100644
+--- a/drivers/crypto/qat/qat_common/qat_algs.c
++++ b/drivers/crypto/qat/qat_common/qat_algs.c
+@@ -871,6 +871,11 @@ static int qat_alg_aead_dec(struct aead_request *areq)
+ struct icp_qat_fw_la_bulk_req *msg;
+ int digst_size = crypto_aead_authsize(aead_tfm);
+ int ret, ctr = 0;
++ u32 cipher_len;
++
++ cipher_len = areq->cryptlen - digst_size;
++ if (cipher_len % AES_BLOCK_SIZE != 0)
++ return -EINVAL;
+
+ ret = qat_alg_sgl_to_bufl(ctx->inst, areq->src, areq->dst, qat_req);
+ if (unlikely(ret))
+@@ -885,7 +890,7 @@ static int qat_alg_aead_dec(struct aead_request *areq)
+ qat_req->req.comn_mid.src_data_addr = qat_req->buf.blp;
+ qat_req->req.comn_mid.dest_data_addr = qat_req->buf.bloutp;
+ cipher_param = (void *)&qat_req->req.serv_specif_rqpars;
+- cipher_param->cipher_length = areq->cryptlen - digst_size;
++ cipher_param->cipher_length = cipher_len;
+ cipher_param->cipher_offset = areq->assoclen;
+ memcpy(cipher_param->u.cipher_IV_array, areq->iv, AES_BLOCK_SIZE);
+ auth_param = (void *)((uint8_t *)cipher_param + sizeof(*cipher_param));
+@@ -914,6 +919,9 @@ static int qat_alg_aead_enc(struct aead_request *areq)
+ uint8_t *iv = areq->iv;
+ int ret, ctr = 0;
+
++ if (areq->cryptlen % AES_BLOCK_SIZE != 0)
++ return -EINVAL;
++
+ ret = qat_alg_sgl_to_bufl(ctx->inst, areq->src, areq->dst, qat_req);
+ if (unlikely(ret))
+ return ret;
+diff --git a/drivers/media/usb/usbtv/usbtv-core.c b/drivers/media/usb/usbtv/usbtv-core.c
+index ee9c656d121f1..2308c0b4f5e7e 100644
+--- a/drivers/media/usb/usbtv/usbtv-core.c
++++ b/drivers/media/usb/usbtv/usbtv-core.c
+@@ -113,7 +113,8 @@ static int usbtv_probe(struct usb_interface *intf,
+
+ usbtv_audio_fail:
+ /* we must not free at this point */
+- usb_get_dev(usbtv->udev);
++ v4l2_device_get(&usbtv->v4l2_dev);
++ /* this will undo the v4l2_device_get() */
+ usbtv_video_free(usbtv);
+
+ usbtv_video_fail:
+diff --git a/drivers/staging/comedi/drivers/vmk80xx.c b/drivers/staging/comedi/drivers/vmk80xx.c
+index 65dc6c51037e3..7956abcbae22b 100644
+--- a/drivers/staging/comedi/drivers/vmk80xx.c
++++ b/drivers/staging/comedi/drivers/vmk80xx.c
+@@ -667,6 +667,9 @@ static int vmk80xx_find_usb_endpoints(struct comedi_device *dev)
+ if (!devpriv->ep_rx || !devpriv->ep_tx)
+ return -ENODEV;
+
++ if (!usb_endpoint_maxp(devpriv->ep_rx) || !usb_endpoint_maxp(devpriv->ep_tx))
++ return -EINVAL;
++
+ return 0;
+ }
+
+diff --git a/drivers/usb/serial/ftdi_sio.c b/drivers/usb/serial/ftdi_sio.c
+index ae98fe94fe91e..01a98d071c7c7 100644
+--- a/drivers/usb/serial/ftdi_sio.c
++++ b/drivers/usb/serial/ftdi_sio.c
+@@ -1037,6 +1037,11 @@ static const struct usb_device_id id_table_combined[] = {
+ /* U-Blox devices */
+ { USB_DEVICE(UBLOX_VID, UBLOX_C099F9P_ZED_PID) },
+ { USB_DEVICE(UBLOX_VID, UBLOX_C099F9P_ODIN_PID) },
++ /* FreeCalypso USB adapters */
++ { USB_DEVICE(FTDI_VID, FTDI_FALCONIA_JTAG_BUF_PID),
++ .driver_info = (kernel_ulong_t)&ftdi_jtag_quirk },
++ { USB_DEVICE(FTDI_VID, FTDI_FALCONIA_JTAG_UNBUF_PID),
++ .driver_info = (kernel_ulong_t)&ftdi_jtag_quirk },
+ { } /* Terminating entry */
+ };
+
+diff --git a/drivers/usb/serial/ftdi_sio_ids.h b/drivers/usb/serial/ftdi_sio_ids.h
+index b5ca17a5967a0..3d47c6d72256e 100644
+--- a/drivers/usb/serial/ftdi_sio_ids.h
++++ b/drivers/usb/serial/ftdi_sio_ids.h
+@@ -39,6 +39,13 @@
+
+ #define FTDI_LUMEL_PD12_PID 0x6002
+
++/*
++ * Custom USB adapters made by Falconia Partners LLC
++ * for FreeCalypso project, ID codes allocated to Falconia by FTDI.
++ */
++#define FTDI_FALCONIA_JTAG_BUF_PID 0x7150
++#define FTDI_FALCONIA_JTAG_UNBUF_PID 0x7151
++
+ /* Sienna Serial Interface by Secyourit GmbH */
+ #define FTDI_SIENNA_PID 0x8348
+
+diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
+index f7a6ac05ac57a..eb5538a44ee9d 100644
+--- a/drivers/usb/serial/option.c
++++ b/drivers/usb/serial/option.c
+@@ -528,6 +528,7 @@ static void option_instat_callback(struct urb *urb);
+ /* Cellient products */
+ #define CELLIENT_VENDOR_ID 0x2692
+ #define CELLIENT_PRODUCT_MEN200 0x9005
++#define CELLIENT_PRODUCT_MPL200 0x9025
+
+ /* Hyundai Petatel Inc. products */
+ #define PETATEL_VENDOR_ID 0x1ff4
+@@ -1186,6 +1187,8 @@ static const struct usb_device_id option_ids[] = {
+ .driver_info = NCTRL(2) | RSVD(3) },
+ { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1053, 0xff), /* Telit FN980 (ECM) */
+ .driver_info = NCTRL(0) | RSVD(1) },
++ { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1054, 0xff), /* Telit FT980-KS */
++ .driver_info = NCTRL(2) | RSVD(3) },
+ { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_ME910),
+ .driver_info = NCTRL(0) | RSVD(1) | RSVD(3) },
+ { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_ME910_DUAL_MODEM),
+@@ -1982,6 +1985,8 @@ static const struct usb_device_id option_ids[] = {
+ { USB_DEVICE_AND_INTERFACE_INFO(MEDIATEK_VENDOR_ID, MEDIATEK_PRODUCT_DC_4COM2, 0xff, 0x02, 0x01) },
+ { USB_DEVICE_AND_INTERFACE_INFO(MEDIATEK_VENDOR_ID, MEDIATEK_PRODUCT_DC_4COM2, 0xff, 0x00, 0x00) },
+ { USB_DEVICE(CELLIENT_VENDOR_ID, CELLIENT_PRODUCT_MEN200) },
++ { USB_DEVICE(CELLIENT_VENDOR_ID, CELLIENT_PRODUCT_MPL200),
++ .driver_info = RSVD(1) | RSVD(4) },
+ { USB_DEVICE(PETATEL_VENDOR_ID, PETATEL_PRODUCT_NP10T_600A) },
+ { USB_DEVICE(PETATEL_VENDOR_ID, PETATEL_PRODUCT_NP10T_600E) },
+ { USB_DEVICE_AND_INTERFACE_INFO(TPLINK_VENDOR_ID, TPLINK_PRODUCT_LTE, 0xff, 0x00, 0x00) }, /* TP-Link LTE Module */
+diff --git a/drivers/usb/serial/pl2303.c b/drivers/usb/serial/pl2303.c
+index c5a2995dfa2e3..c0bf7a424d930 100644
+--- a/drivers/usb/serial/pl2303.c
++++ b/drivers/usb/serial/pl2303.c
+@@ -100,6 +100,7 @@ static const struct usb_device_id id_table[] = {
+ { USB_DEVICE(HP_VENDOR_ID, HP_LD220_PRODUCT_ID) },
+ { USB_DEVICE(HP_VENDOR_ID, HP_LD220TA_PRODUCT_ID) },
+ { USB_DEVICE(HP_VENDOR_ID, HP_LD381_PRODUCT_ID) },
++ { USB_DEVICE(HP_VENDOR_ID, HP_LD381GC_PRODUCT_ID) },
+ { USB_DEVICE(HP_VENDOR_ID, HP_LD960_PRODUCT_ID) },
+ { USB_DEVICE(HP_VENDOR_ID, HP_LD960TA_PRODUCT_ID) },
+ { USB_DEVICE(HP_VENDOR_ID, HP_LCM220_PRODUCT_ID) },
+diff --git a/drivers/usb/serial/pl2303.h b/drivers/usb/serial/pl2303.h
+index 7d3090ee7e0cb..0f681ddbfd288 100644
+--- a/drivers/usb/serial/pl2303.h
++++ b/drivers/usb/serial/pl2303.h
+@@ -127,6 +127,7 @@
+
+ /* Hewlett-Packard POS Pole Displays */
+ #define HP_VENDOR_ID 0x03f0
++#define HP_LD381GC_PRODUCT_ID 0x0183
+ #define HP_LM920_PRODUCT_ID 0x026b
+ #define HP_TD620_PRODUCT_ID 0x0956
+ #define HP_LD960_PRODUCT_ID 0x0b39
+diff --git a/fs/reiserfs/inode.c b/fs/reiserfs/inode.c
+index 1509775da040a..e43fed96704d8 100644
+--- a/fs/reiserfs/inode.c
++++ b/fs/reiserfs/inode.c
+@@ -1551,11 +1551,7 @@ void reiserfs_read_locked_inode(struct inode *inode,
+ * set version 1, version 2 could be used too, because stat data
+ * key is the same in both versions
+ */
+- key.version = KEY_FORMAT_3_5;
+- key.on_disk_key.k_dir_id = dirino;
+- key.on_disk_key.k_objectid = inode->i_ino;
+- key.on_disk_key.k_offset = 0;
+- key.on_disk_key.k_type = 0;
++ _make_cpu_key(&key, KEY_FORMAT_3_5, dirino, inode->i_ino, 0, 0, 3);
+
+ /* look for the object's stat data */
+ retval = search_item(inode->i_sb, &key, &path_to_sd);
+diff --git a/fs/reiserfs/xattr.c b/fs/reiserfs/xattr.c
+index 28b241cd69870..fe63a7c3e0da2 100644
+--- a/fs/reiserfs/xattr.c
++++ b/fs/reiserfs/xattr.c
+@@ -674,6 +674,13 @@ reiserfs_xattr_get(struct inode *inode, const char *name, void *buffer,
+ if (get_inode_sd_version(inode) == STAT_DATA_V1)
+ return -EOPNOTSUPP;
+
++ /*
++ * priv_root needn't be initialized during mount so allow initial
++ * lookups to succeed.
++ */
++ if (!REISERFS_SB(inode->i_sb)->priv_root)
++ return 0;
++
+ dentry = xattr_lookup(inode, name, XATTR_REPLACE);
+ if (IS_ERR(dentry)) {
+ err = PTR_ERR(dentry);
+diff --git a/include/net/bluetooth/hci_core.h b/include/net/bluetooth/hci_core.h
+index da3728871e85d..78970afa96f97 100644
+--- a/include/net/bluetooth/hci_core.h
++++ b/include/net/bluetooth/hci_core.h
+@@ -1402,11 +1402,13 @@ static inline void hci_encrypt_cfm(struct hci_conn *conn, __u8 status)
+ else
+ encrypt = 0x01;
+
+- if (conn->sec_level == BT_SECURITY_SDP)
+- conn->sec_level = BT_SECURITY_LOW;
++ if (!status) {
++ if (conn->sec_level == BT_SECURITY_SDP)
++ conn->sec_level = BT_SECURITY_LOW;
+
+- if (conn->pending_sec_level > conn->sec_level)
+- conn->sec_level = conn->pending_sec_level;
++ if (conn->pending_sec_level > conn->sec_level)
++ conn->sec_level = conn->pending_sec_level;
++ }
+
+ mutex_lock(&hci_cb_list_lock);
+ list_for_each_entry(cb, &hci_cb_list, list) {
+diff --git a/include/net/bluetooth/l2cap.h b/include/net/bluetooth/l2cap.h
+index 8f1e6a7a2df84..1d1232917de72 100644
+--- a/include/net/bluetooth/l2cap.h
++++ b/include/net/bluetooth/l2cap.h
+@@ -665,6 +665,8 @@ struct l2cap_ops {
+ struct sk_buff *(*alloc_skb) (struct l2cap_chan *chan,
+ unsigned long hdr_len,
+ unsigned long len, int nb);
++ int (*filter) (struct l2cap_chan * chan,
++ struct sk_buff *skb);
+ };
+
+ struct l2cap_conn {
+diff --git a/net/bluetooth/a2mp.c b/net/bluetooth/a2mp.c
+index 26526be579c75..da7fd7c8c2dc0 100644
+--- a/net/bluetooth/a2mp.c
++++ b/net/bluetooth/a2mp.c
+@@ -226,6 +226,9 @@ static int a2mp_discover_rsp(struct amp_mgr *mgr, struct sk_buff *skb,
+ struct a2mp_info_req req;
+
+ found = true;
++
++ memset(&req, 0, sizeof(req));
++
+ req.id = cl->id;
+ a2mp_send(mgr, A2MP_GETINFO_REQ, __next_ident(mgr),
+ sizeof(req), &req);
+@@ -305,6 +308,8 @@ static int a2mp_getinfo_req(struct amp_mgr *mgr, struct sk_buff *skb,
+ if (!hdev || hdev->dev_type != HCI_AMP) {
+ struct a2mp_info_rsp rsp;
+
++ memset(&rsp, 0, sizeof(rsp));
++
+ rsp.id = req->id;
+ rsp.status = A2MP_STATUS_INVALID_CTRL_ID;
+
+@@ -348,6 +353,8 @@ static int a2mp_getinfo_rsp(struct amp_mgr *mgr, struct sk_buff *skb,
+ if (!ctrl)
+ return -ENOMEM;
+
++ memset(&req, 0, sizeof(req));
++
+ req.id = rsp->id;
+ a2mp_send(mgr, A2MP_GETAMPASSOC_REQ, __next_ident(mgr), sizeof(req),
+ &req);
+@@ -376,6 +383,8 @@ static int a2mp_getampassoc_req(struct amp_mgr *mgr, struct sk_buff *skb,
+ struct a2mp_amp_assoc_rsp rsp;
+ rsp.id = req->id;
+
++ memset(&rsp, 0, sizeof(rsp));
++
+ if (tmp) {
+ rsp.status = A2MP_STATUS_COLLISION_OCCURED;
+ amp_mgr_put(tmp);
+@@ -464,7 +473,6 @@ static int a2mp_createphyslink_req(struct amp_mgr *mgr, struct sk_buff *skb,
+ struct a2mp_cmd *hdr)
+ {
+ struct a2mp_physlink_req *req = (void *) skb->data;
+-
+ struct a2mp_physlink_rsp rsp;
+ struct hci_dev *hdev;
+ struct hci_conn *hcon;
+@@ -475,6 +483,8 @@ static int a2mp_createphyslink_req(struct amp_mgr *mgr, struct sk_buff *skb,
+
+ BT_DBG("local_id %d, remote_id %d", req->local_id, req->remote_id);
+
++ memset(&rsp, 0, sizeof(rsp));
++
+ rsp.local_id = req->remote_id;
+ rsp.remote_id = req->local_id;
+
+@@ -553,6 +563,8 @@ static int a2mp_discphyslink_req(struct amp_mgr *mgr, struct sk_buff *skb,
+
+ BT_DBG("local_id %d remote_id %d", req->local_id, req->remote_id);
+
++ memset(&rsp, 0, sizeof(rsp));
++
+ rsp.local_id = req->remote_id;
+ rsp.remote_id = req->local_id;
+ rsp.status = A2MP_STATUS_SUCCESS;
+@@ -675,6 +687,8 @@ static int a2mp_chan_recv_cb(struct l2cap_chan *chan, struct sk_buff *skb)
+ if (err) {
+ struct a2mp_cmd_rej rej;
+
++ memset(&rej, 0, sizeof(rej));
++
+ rej.reason = cpu_to_le16(0);
+ hdr = (void *) skb->data;
+
+@@ -898,6 +912,8 @@ void a2mp_send_getinfo_rsp(struct hci_dev *hdev)
+
+ BT_DBG("%s mgr %p", hdev->name, mgr);
+
++ memset(&rsp, 0, sizeof(rsp));
++
+ rsp.id = hdev->id;
+ rsp.status = A2MP_STATUS_INVALID_CTRL_ID;
+
+@@ -995,6 +1011,8 @@ void a2mp_send_create_phy_link_rsp(struct hci_dev *hdev, u8 status)
+ if (!mgr)
+ return;
+
++ memset(&rsp, 0, sizeof(rsp));
++
+ hs_hcon = hci_conn_hash_lookup_state(hdev, AMP_LINK, BT_CONNECT);
+ if (!hs_hcon) {
+ rsp.status = A2MP_STATUS_UNABLE_START_LINK_CREATION;
+@@ -1027,6 +1045,8 @@ void a2mp_discover_amp(struct l2cap_chan *chan)
+
+ mgr->bredr_chan = chan;
+
++ memset(&req, 0, sizeof(req));
++
+ req.mtu = cpu_to_le16(L2CAP_A2MP_DEFAULT_MTU);
+ req.ext_feat = 0;
+ a2mp_send(mgr, A2MP_DISCOVER_REQ, 1, sizeof(req), &req);
+diff --git a/net/bluetooth/hci_conn.c b/net/bluetooth/hci_conn.c
+index 307800fd18e6d..b99b5c6de55ab 100644
+--- a/net/bluetooth/hci_conn.c
++++ b/net/bluetooth/hci_conn.c
+@@ -1323,6 +1323,23 @@ int hci_conn_check_link_mode(struct hci_conn *conn)
+ return 0;
+ }
+
++ /* AES encryption is required for Level 4:
++ *
++ * BLUETOOTH CORE SPECIFICATION Version 5.2 | Vol 3, Part C
++ * page 1319:
++ *
++ * 128-bit equivalent strength for link and encryption keys
++ * required using FIPS approved algorithms (E0 not allowed,
++ * SAFER+ not allowed, and P-192 not allowed; encryption key
++ * not shortened)
++ */
++ if (conn->sec_level == BT_SECURITY_FIPS &&
++ !test_bit(HCI_CONN_AES_CCM, &conn->flags)) {
++ bt_dev_err(conn->hdev,
++ "Invalid security: Missing AES-CCM usage");
++ return 0;
++ }
++
+ if (hci_conn_ssp_enabled(conn) &&
+ !test_bit(HCI_CONN_ENCRYPT, &conn->flags))
+ return 0;
+diff --git a/net/bluetooth/hci_event.c b/net/bluetooth/hci_event.c
+index 6c6c9a81bee2c..ff38f98988db8 100644
+--- a/net/bluetooth/hci_event.c
++++ b/net/bluetooth/hci_event.c
+@@ -3068,27 +3068,23 @@ static void hci_encrypt_change_evt(struct hci_dev *hdev, struct sk_buff *skb)
+
+ clear_bit(HCI_CONN_ENCRYPT_PEND, &conn->flags);
+
++ /* Check link security requirements are met */
++ if (!hci_conn_check_link_mode(conn))
++ ev->status = HCI_ERROR_AUTH_FAILURE;
++
+ if (ev->status && conn->state == BT_CONNECTED) {
+ if (ev->status == HCI_ERROR_PIN_OR_KEY_MISSING)
+ set_bit(HCI_CONN_AUTH_FAILURE, &conn->flags);
+
++ /* Notify upper layers so they can cleanup before
++ * disconnecting.
++ */
++ hci_encrypt_cfm(conn, ev->status);
+ hci_disconnect(conn, HCI_ERROR_AUTH_FAILURE);
+ hci_conn_drop(conn);
+ goto unlock;
+ }
+
+- /* In Secure Connections Only mode, do not allow any connections
+- * that are not encrypted with AES-CCM using a P-256 authenticated
+- * combination key.
+- */
+- if (hci_dev_test_flag(hdev, HCI_SC_ONLY) &&
+- (!test_bit(HCI_CONN_AES_CCM, &conn->flags) ||
+- conn->key_type != HCI_LK_AUTH_COMBINATION_P256)) {
+- hci_connect_cfm(conn, HCI_ERROR_AUTH_FAILURE);
+- hci_conn_drop(conn);
+- goto unlock;
+- }
+-
+ /* Try reading the encryption key size for encrypted ACL links */
+ if (!ev->status && ev->encrypt && conn->type == ACL_LINK) {
+ struct hci_cp_read_enc_key_size cp;
+diff --git a/net/bluetooth/l2cap_core.c b/net/bluetooth/l2cap_core.c
+index fe913a5c754a8..129cbef8e5605 100644
+--- a/net/bluetooth/l2cap_core.c
++++ b/net/bluetooth/l2cap_core.c
+@@ -7302,9 +7302,10 @@ static int l2cap_data_rcv(struct l2cap_chan *chan, struct sk_buff *skb)
+ goto drop;
+ }
+
+- if ((chan->mode == L2CAP_MODE_ERTM ||
+- chan->mode == L2CAP_MODE_STREAMING) && sk_filter(chan->data, skb))
+- goto drop;
++ if (chan->ops->filter) {
++ if (chan->ops->filter(chan, skb))
++ goto drop;
++ }
+
+ if (!control->sframe) {
+ int err;
+diff --git a/net/bluetooth/l2cap_sock.c b/net/bluetooth/l2cap_sock.c
+index a995d2c51fa7f..c7fc28a465fdb 100644
+--- a/net/bluetooth/l2cap_sock.c
++++ b/net/bluetooth/l2cap_sock.c
+@@ -1663,6 +1663,19 @@ static void l2cap_sock_suspend_cb(struct l2cap_chan *chan)
+ sk->sk_state_change(sk);
+ }
+
++static int l2cap_sock_filter(struct l2cap_chan *chan, struct sk_buff *skb)
++{
++ struct sock *sk = chan->data;
++
++ switch (chan->mode) {
++ case L2CAP_MODE_ERTM:
++ case L2CAP_MODE_STREAMING:
++ return sk_filter(sk, skb);
++ }
++
++ return 0;
++}
++
+ static const struct l2cap_ops l2cap_chan_ops = {
+ .name = "L2CAP Socket Interface",
+ .new_connection = l2cap_sock_new_connection_cb,
+@@ -1678,6 +1691,7 @@ static const struct l2cap_ops l2cap_chan_ops = {
+ .get_sndtimeo = l2cap_sock_get_sndtimeo_cb,
+ .get_peer_pid = l2cap_sock_get_peer_pid_cb,
+ .alloc_skb = l2cap_sock_alloc_skb_cb,
++ .filter = l2cap_sock_filter,
+ };
+
+ static void l2cap_sock_destruct(struct sock *sk)
+diff --git a/net/bluetooth/mgmt.c b/net/bluetooth/mgmt.c
+index 9e8a3cccc6ca3..9bcba7eff4bd3 100644
+--- a/net/bluetooth/mgmt.c
++++ b/net/bluetooth/mgmt.c
+@@ -766,7 +766,8 @@ static u32 get_supported_settings(struct hci_dev *hdev)
+
+ if (lmp_ssp_capable(hdev)) {
+ settings |= MGMT_SETTING_SSP;
+- settings |= MGMT_SETTING_HS;
++ if (IS_ENABLED(CONFIG_BT_HS))
++ settings |= MGMT_SETTING_HS;
+ }
+
+ if (lmp_sc_capable(hdev))
+@@ -1794,6 +1795,10 @@ static int set_hs(struct sock *sk, struct hci_dev *hdev, void *data, u16 len)
+
+ bt_dev_dbg(hdev, "sock %p", sk);
+
++ if (!IS_ENABLED(CONFIG_BT_HS))
++ return mgmt_cmd_status(sk, hdev->id, MGMT_OP_SET_HS,
++ MGMT_STATUS_NOT_SUPPORTED);
++
+ status = mgmt_bredr_support(hdev);
+ if (status)
+ return mgmt_cmd_status(sk, hdev->id, MGMT_OP_SET_HS, status);
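The repeated `memset(&rsp, 0, sizeof(rsp))` additions in net/bluetooth/a2mp.c above close an information leak: reply structs built on the kernel stack contain compiler-inserted padding bytes, and without the memset those stale bytes are transmitted along with the real fields. A small user-space sketch of the idiom; the struct layout here is illustrative only.

```c
#include <assert.h>
#include <string.h>

/* Illustrative reply struct: the char/int mix forces padding bytes,
 * which stay uninitialized unless the whole struct is cleared. */
struct info_rsp {
	unsigned char id;
	/* compilers typically insert 3 padding bytes here */
	unsigned int status;
};

/* Mirrors the a2mp fix: zero the full struct (padding included)
 * before filling fields and handing it to the transport. */
static void fill_rsp(struct info_rsp *rsp, unsigned char id,
		     unsigned int status)
{
	memset(rsp, 0, sizeof(*rsp));
	rsp->id = id;
	rsp->status = status;
}
```

Assigning the fields one by one, as the pre-patch code did, initializes only the named members; `memset()` is what guarantees the padding is zero.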
* [gentoo-commits] proj/linux-patches:5.8 commit in: /
@ 2020-10-29 11:20 Mike Pagano
From: Mike Pagano @ 2020-10-29 11:20 UTC (permalink / raw
To: gentoo-commits
commit: fd9934805d4cef23e83f4c726a0cad4ab6d7a7a3
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Oct 29 11:20:30 2020 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Oct 29 11:20:30 2020 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=fd993480
Linux patch 5.8.17
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1016_linux-5.8.17.patch | 22065 ++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 22069 insertions(+)
diff --git a/0000_README b/0000_README
index e29fc26..333aabc 100644
--- a/0000_README
+++ b/0000_README
@@ -107,6 +107,10 @@ Patch: 1015_linux-5.8.16.patch
From: http://www.kernel.org
Desc: Linux 5.8.16
+Patch: 1016_linux-5.8.17.patch
+From: http://www.kernel.org
+Desc: Linux 5.8.17
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1016_linux-5.8.17.patch b/1016_linux-5.8.17.patch
new file mode 100644
index 0000000..362fc27
--- /dev/null
+++ b/1016_linux-5.8.17.patch
@@ -0,0 +1,22065 @@
+diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
+index fb95fad81c79a..6746f91ebc490 100644
+--- a/Documentation/admin-guide/kernel-parameters.txt
++++ b/Documentation/admin-guide/kernel-parameters.txt
+@@ -577,7 +577,7 @@
+ loops can be debugged more effectively on production
+ systems.
+
+- clearcpuid=BITNUM [X86]
++ clearcpuid=BITNUM[,BITNUM...] [X86]
+ Disable CPUID feature X for the kernel. See
+ arch/x86/include/asm/cpufeatures.h for the valid bit
+ numbers. Note the Linux specific bits are not necessarily
+diff --git a/Documentation/devicetree/bindings/crypto/allwinner,sun4i-a10-crypto.yaml b/Documentation/devicetree/bindings/crypto/allwinner,sun4i-a10-crypto.yaml
+index fc823572bcff2..90c6d039b91b0 100644
+--- a/Documentation/devicetree/bindings/crypto/allwinner,sun4i-a10-crypto.yaml
++++ b/Documentation/devicetree/bindings/crypto/allwinner,sun4i-a10-crypto.yaml
+@@ -23,8 +23,7 @@ properties:
+ - items:
+ - const: allwinner,sun7i-a20-crypto
+ - const: allwinner,sun4i-a10-crypto
+- - items:
+- - const: allwinner,sun8i-a33-crypto
++ - const: allwinner,sun8i-a33-crypto
+
+ reg:
+ maxItems: 1
+@@ -59,7 +58,9 @@ if:
+ properties:
+ compatible:
+ contains:
+- const: allwinner,sun6i-a31-crypto
++ enum:
++ - allwinner,sun6i-a31-crypto
++ - allwinner,sun8i-a33-crypto
+
+ then:
+ required:
+diff --git a/Documentation/devicetree/bindings/net/socionext-netsec.txt b/Documentation/devicetree/bindings/net/socionext-netsec.txt
+index 9d6c9feb12ff1..a3c1dffaa4bb4 100644
+--- a/Documentation/devicetree/bindings/net/socionext-netsec.txt
++++ b/Documentation/devicetree/bindings/net/socionext-netsec.txt
+@@ -30,7 +30,9 @@ Optional properties: (See ethernet.txt file in the same directory)
+ - max-frame-size: See ethernet.txt in the same directory.
+
+ The MAC address will be determined using the optional properties
+-defined in ethernet.txt.
++defined in ethernet.txt. The 'phy-mode' property is required, but may
++be set to the empty string if the PHY configuration is programmed by
++the firmware or set by hardware straps, and needs to be preserved.
+
+ Example:
+ eth0: ethernet@522d0000 {
+diff --git a/Documentation/networking/ip-sysctl.rst b/Documentation/networking/ip-sysctl.rst
+index 837d51f9e1fab..25e6673a085a0 100644
+--- a/Documentation/networking/ip-sysctl.rst
++++ b/Documentation/networking/ip-sysctl.rst
+@@ -1142,13 +1142,15 @@ icmp_ratelimit - INTEGER
+ icmp_msgs_per_sec - INTEGER
+ Limit maximal number of ICMP packets sent per second from this host.
+ Only messages whose type matches icmp_ratemask (see below) are
+- controlled by this limit.
++ controlled by this limit. For security reasons, the precise count
++ of messages per second is randomized.
+
+ Default: 1000
+
+ icmp_msgs_burst - INTEGER
+ icmp_msgs_per_sec controls number of ICMP packets sent per second,
+ while icmp_msgs_burst controls the burst size of these packets.
++ For security reasons, the precise burst size is randomized.
+
+ Default: 50
+
+diff --git a/Makefile b/Makefile
+index a4622ef65436e..9bdb93053ee93 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 8
+-SUBLEVEL = 16
++SUBLEVEL = 17
+ EXTRAVERSION =
+ NAME = Kleptomaniac Octopus
+
+diff --git a/arch/arc/plat-hsdk/Kconfig b/arch/arc/plat-hsdk/Kconfig
+index ce81018345184..6b5c54576f54d 100644
+--- a/arch/arc/plat-hsdk/Kconfig
++++ b/arch/arc/plat-hsdk/Kconfig
+@@ -8,5 +8,6 @@ menuconfig ARC_SOC_HSDK
+ select ARC_HAS_ACCL_REGS
+ select ARC_IRQ_NO_AUTOSAVE
+ select CLK_HSDK
++ select RESET_CONTROLLER
+ select RESET_HSDK
+ select HAVE_PCI
+diff --git a/arch/arm/boot/dts/imx6sl.dtsi b/arch/arm/boot/dts/imx6sl.dtsi
+index 911d8cf77f2c6..0339a46fa71c5 100644
+--- a/arch/arm/boot/dts/imx6sl.dtsi
++++ b/arch/arm/boot/dts/imx6sl.dtsi
+@@ -939,8 +939,10 @@
+ };
+
+ rngb: rngb@21b4000 {
++ compatible = "fsl,imx6sl-rngb", "fsl,imx25-rngb";
+ reg = <0x021b4000 0x4000>;
+ interrupts = <0 5 IRQ_TYPE_LEVEL_HIGH>;
++ clocks = <&clks IMX6SL_CLK_DUMMY>;
+ };
+
+ weim: weim@21b8000 {
+diff --git a/arch/arm/boot/dts/iwg20d-q7-common.dtsi b/arch/arm/boot/dts/iwg20d-q7-common.dtsi
+index ebbe1518ef8a6..63cafd220dba1 100644
+--- a/arch/arm/boot/dts/iwg20d-q7-common.dtsi
++++ b/arch/arm/boot/dts/iwg20d-q7-common.dtsi
+@@ -57,7 +57,7 @@
+
+ lvds-receiver {
+ compatible = "ti,ds90cf384a", "lvds-decoder";
+- powerdown-gpios = <&gpio7 25 GPIO_ACTIVE_LOW>;
++ power-supply = <&vcc_3v3_tft1>;
+
+ ports {
+ #address-cells = <1>;
+@@ -81,6 +81,7 @@
+ panel {
+ compatible = "edt,etm0700g0dh6";
+ backlight = <&lcd_backlight>;
++ power-supply = <&vcc_3v3_tft1>;
+
+ port {
+ panel_in: endpoint {
+@@ -113,6 +114,17 @@
+ };
+ };
+
++ vcc_3v3_tft1: regulator-panel {
++ compatible = "regulator-fixed";
++
++ regulator-name = "vcc-3v3-tft1";
++ regulator-min-microvolt = <3300000>;
++ regulator-max-microvolt = <3300000>;
++ enable-active-high;
++ startup-delay-us = <500>;
++ gpio = <&gpio7 25 GPIO_ACTIVE_HIGH>;
++ };
++
+ vcc_sdhi1: regulator-vcc-sdhi1 {
+ compatible = "regulator-fixed";
+
+@@ -207,6 +219,7 @@
+ reg = <0x38>;
+ interrupt-parent = <&gpio2>;
+ interrupts = <12 IRQ_TYPE_EDGE_FALLING>;
++ vcc-supply = <&vcc_3v3_tft1>;
+ };
+ };
+
+diff --git a/arch/arm/boot/dts/meson8.dtsi b/arch/arm/boot/dts/meson8.dtsi
+index eedb92526968a..a4ab8b96d0eb6 100644
+--- a/arch/arm/boot/dts/meson8.dtsi
++++ b/arch/arm/boot/dts/meson8.dtsi
+@@ -239,8 +239,6 @@
+ <GIC_SPI 167 IRQ_TYPE_LEVEL_HIGH>,
+ <GIC_SPI 168 IRQ_TYPE_LEVEL_HIGH>,
+ <GIC_SPI 169 IRQ_TYPE_LEVEL_HIGH>,
+- <GIC_SPI 170 IRQ_TYPE_LEVEL_HIGH>,
+- <GIC_SPI 171 IRQ_TYPE_LEVEL_HIGH>,
+ <GIC_SPI 172 IRQ_TYPE_LEVEL_HIGH>,
+ <GIC_SPI 173 IRQ_TYPE_LEVEL_HIGH>,
+ <GIC_SPI 174 IRQ_TYPE_LEVEL_HIGH>,
+diff --git a/arch/arm/boot/dts/owl-s500.dtsi b/arch/arm/boot/dts/owl-s500.dtsi
+index 5ceb6cc4451d2..1dbe4e8b38ac7 100644
+--- a/arch/arm/boot/dts/owl-s500.dtsi
++++ b/arch/arm/boot/dts/owl-s500.dtsi
+@@ -84,21 +84,21 @@
+ global_timer: timer@b0020200 {
+ compatible = "arm,cortex-a9-global-timer";
+ reg = <0xb0020200 0x100>;
+- interrupts = <GIC_PPI 0 (GIC_CPU_MASK_SIMPLE(4) | IRQ_TYPE_EDGE_RISING)>;
++ interrupts = <GIC_PPI 11 (GIC_CPU_MASK_SIMPLE(4) | IRQ_TYPE_EDGE_RISING)>;
+ status = "disabled";
+ };
+
+ twd_timer: timer@b0020600 {
+ compatible = "arm,cortex-a9-twd-timer";
+ reg = <0xb0020600 0x20>;
+- interrupts = <GIC_PPI 2 (GIC_CPU_MASK_SIMPLE(4) | IRQ_TYPE_EDGE_RISING)>;
++ interrupts = <GIC_PPI 13 (GIC_CPU_MASK_SIMPLE(4) | IRQ_TYPE_EDGE_RISING)>;
+ status = "disabled";
+ };
+
+ twd_wdt: wdt@b0020620 {
+ compatible = "arm,cortex-a9-twd-wdt";
+ reg = <0xb0020620 0xe0>;
+- interrupts = <GIC_PPI 3 (GIC_CPU_MASK_SIMPLE(4) | IRQ_TYPE_EDGE_RISING)>;
++ interrupts = <GIC_PPI 14 (GIC_CPU_MASK_SIMPLE(4) | IRQ_TYPE_EDGE_RISING)>;
+ status = "disabled";
+ };
+
+diff --git a/arch/arm/boot/dts/stm32mp157c-lxa-mc1.dts b/arch/arm/boot/dts/stm32mp157c-lxa-mc1.dts
+index 5700e6b700d36..b85025d009437 100644
+--- a/arch/arm/boot/dts/stm32mp157c-lxa-mc1.dts
++++ b/arch/arm/boot/dts/stm32mp157c-lxa-mc1.dts
+@@ -121,8 +121,6 @@
+ reset-gpios = <&gpiog 0 GPIO_ACTIVE_LOW>; /* ETH_RST# */
+ interrupt-parent = <&gpioa>;
+ interrupts = <6 IRQ_TYPE_EDGE_FALLING>; /* ETH_MDINT# */
+- rxc-skew-ps = <1860>;
+- txc-skew-ps = <1860>;
+ reset-assert-us = <10000>;
+ reset-deassert-us = <300>;
+ micrel,force-master;
+diff --git a/arch/arm/boot/dts/stm32mp15xx-dhcom-pdk2.dtsi b/arch/arm/boot/dts/stm32mp15xx-dhcom-pdk2.dtsi
+index 7c4bd615b3115..e4e3c92eb30d3 100644
+--- a/arch/arm/boot/dts/stm32mp15xx-dhcom-pdk2.dtsi
++++ b/arch/arm/boot/dts/stm32mp15xx-dhcom-pdk2.dtsi
+@@ -11,7 +11,6 @@
+ serial0 = &uart4;
+ serial1 = &usart3;
+ serial2 = &uart8;
+- ethernet0 = ðernet0;
+ };
+
+ chosen {
+@@ -26,23 +25,13 @@
+
+ display_bl: display-bl {
+ compatible = "pwm-backlight";
+- pwms = <&pwm2 0 500000 PWM_POLARITY_INVERTED>;
++ pwms = <&pwm2 3 500000 PWM_POLARITY_INVERTED>;
+ brightness-levels = <0 16 22 30 40 55 75 102 138 188 255>;
+ default-brightness-level = <8>;
+ enable-gpios = <&gpioi 0 GPIO_ACTIVE_HIGH>;
+ status = "okay";
+ };
+
+- ethernet_vio: vioregulator {
+- compatible = "regulator-fixed";
+- regulator-name = "vio";
+- regulator-min-microvolt = <3300000>;
+- regulator-max-microvolt = <3300000>;
+- gpio = <&gpiog 3 GPIO_ACTIVE_LOW>;
+- regulator-always-on;
+- regulator-boot-on;
+- };
+-
+ gpio-keys-polled {
+ compatible = "gpio-keys-polled";
+ #size-cells = <0>;
+@@ -141,28 +130,6 @@
+ status = "okay";
+ };
+
+-ðernet0 {
+- status = "okay";
+- pinctrl-0 = <ðernet0_rmii_pins_a>;
+- pinctrl-1 = <ðernet0_rmii_sleep_pins_a>;
+- pinctrl-names = "default", "sleep";
+- phy-mode = "rmii";
+- max-speed = <100>;
+- phy-handle = <&phy0>;
+- st,eth-ref-clk-sel;
+- phy-reset-gpios = <&gpioh 15 GPIO_ACTIVE_LOW>;
+-
+- mdio0 {
+- #address-cells = <1>;
+- #size-cells = <0>;
+- compatible = "snps,dwmac-mdio";
+-
+- phy0: ethernet-phy@1 {
+- reg = <1>;
+- };
+- };
+-};
+-
+ &i2c2 { /* Header X22 */
+ pinctrl-names = "default";
+ pinctrl-0 = <&i2c2_pins_a>;
+diff --git a/arch/arm/boot/dts/stm32mp15xx-dhcom-som.dtsi b/arch/arm/boot/dts/stm32mp15xx-dhcom-som.dtsi
+index ba905196fb549..a87ebc4843963 100644
+--- a/arch/arm/boot/dts/stm32mp15xx-dhcom-som.dtsi
++++ b/arch/arm/boot/dts/stm32mp15xx-dhcom-som.dtsi
+@@ -9,6 +9,10 @@
+ #include <dt-bindings/mfd/st,stpmic1.h>
+
+ / {
++ aliases {
++ ethernet0 = ðernet0;
++ };
++
+ memory@c0000000 {
+ device_type = "memory";
+ reg = <0xC0000000 0x40000000>;
+@@ -55,6 +59,16 @@
+ no-map;
+ };
+ };
++
++ ethernet_vio: vioregulator {
++ compatible = "regulator-fixed";
++ regulator-name = "vio";
++ regulator-min-microvolt = <3300000>;
++ regulator-max-microvolt = <3300000>;
++ gpio = <&gpiog 3 GPIO_ACTIVE_LOW>;
++ regulator-always-on;
++ regulator-boot-on;
++ };
+ };
+
+ &adc {
+@@ -94,6 +108,28 @@
+ status = "okay";
+ };
+
++ðernet0 {
++ status = "okay";
++ pinctrl-0 = <ðernet0_rmii_pins_a>;
++ pinctrl-1 = <ðernet0_rmii_sleep_pins_a>;
++ pinctrl-names = "default", "sleep";
++ phy-mode = "rmii";
++ max-speed = <100>;
++ phy-handle = <&phy0>;
++ st,eth-ref-clk-sel;
++ phy-reset-gpios = <&gpioh 3 GPIO_ACTIVE_LOW>;
++
++ mdio0 {
++ #address-cells = <1>;
++ #size-cells = <0>;
++ compatible = "snps,dwmac-mdio";
++
++ phy0: ethernet-phy@1 {
++ reg = <1>;
++ };
++ };
++};
++
+ &i2c4 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&i2c4_pins_a>;
+@@ -249,7 +285,7 @@
+ compatible = "ti,tsc2004";
+ reg = <0x49>;
+ vio-supply = <&v3v3>;
+- interrupts-extended = <&gpioh 3 IRQ_TYPE_EDGE_FALLING>;
++ interrupts-extended = <&gpioh 15 IRQ_TYPE_EDGE_FALLING>;
+ };
+
+ eeprom@50 {
+diff --git a/arch/arm/boot/dts/stm32mp15xx-dhcor-avenger96.dtsi b/arch/arm/boot/dts/stm32mp15xx-dhcor-avenger96.dtsi
+index 930202742a3f6..905cd7bb98cf0 100644
+--- a/arch/arm/boot/dts/stm32mp15xx-dhcor-avenger96.dtsi
++++ b/arch/arm/boot/dts/stm32mp15xx-dhcor-avenger96.dtsi
+@@ -295,9 +295,9 @@
+
+ &sdmmc2 {
+ pinctrl-names = "default", "opendrain", "sleep";
+- pinctrl-0 = <&sdmmc2_b4_pins_a &sdmmc2_d47_pins_b>;
+- pinctrl-1 = <&sdmmc2_b4_od_pins_a &sdmmc2_d47_pins_b>;
+- pinctrl-2 = <&sdmmc2_b4_sleep_pins_a &sdmmc2_d47_sleep_pins_b>;
++ pinctrl-0 = <&sdmmc2_b4_pins_a &sdmmc2_d47_pins_c>;
++ pinctrl-1 = <&sdmmc2_b4_od_pins_a &sdmmc2_d47_pins_c>;
++ pinctrl-2 = <&sdmmc2_b4_sleep_pins_a &sdmmc2_d47_sleep_pins_c>;
+ bus-width = <8>;
+ mmc-ddr-1_8v;
+ no-sd;
+diff --git a/arch/arm/boot/dts/sun8i-r40-bananapi-m2-ultra.dts b/arch/arm/boot/dts/sun8i-r40-bananapi-m2-ultra.dts
+index 42d62d1ba1dc7..ea15073f0c79c 100644
+--- a/arch/arm/boot/dts/sun8i-r40-bananapi-m2-ultra.dts
++++ b/arch/arm/boot/dts/sun8i-r40-bananapi-m2-ultra.dts
+@@ -223,16 +223,16 @@
+ };
+
+ ®_dc1sw {
+- regulator-min-microvolt = <3000000>;
+- regulator-max-microvolt = <3000000>;
++ regulator-min-microvolt = <3300000>;
++ regulator-max-microvolt = <3300000>;
+ regulator-name = "vcc-gmac-phy";
+ };
+
+ ®_dcdc1 {
+ regulator-always-on;
+- regulator-min-microvolt = <3000000>;
+- regulator-max-microvolt = <3000000>;
+- regulator-name = "vcc-3v0";
++ regulator-min-microvolt = <3300000>;
++ regulator-max-microvolt = <3300000>;
++ regulator-name = "vcc-3v3";
+ };
+
+ ®_dcdc2 {
+diff --git a/arch/arm/mach-at91/pm.c b/arch/arm/mach-at91/pm.c
+index 2aab043441e8f..eae8aaaadc3bf 100644
+--- a/arch/arm/mach-at91/pm.c
++++ b/arch/arm/mach-at91/pm.c
+@@ -800,6 +800,7 @@ static void __init at91_pm_init(void (*pm_idle)(void))
+
+ pmc_np = of_find_matching_node_and_match(NULL, atmel_pmc_ids, &of_id);
+ soc_pm.data.pmc = of_iomap(pmc_np, 0);
++ of_node_put(pmc_np);
+ if (!soc_pm.data.pmc) {
+ pr_err("AT91: PM not supported, PMC not found\n");
+ return;
+diff --git a/arch/arm/mach-omap2/cpuidle44xx.c b/arch/arm/mach-omap2/cpuidle44xx.c
+index 6f5f89711f256..a92d277f81a08 100644
+--- a/arch/arm/mach-omap2/cpuidle44xx.c
++++ b/arch/arm/mach-omap2/cpuidle44xx.c
+@@ -174,8 +174,10 @@ static int omap_enter_idle_coupled(struct cpuidle_device *dev,
+ */
+ if (mpuss_can_lose_context) {
+ error = cpu_cluster_pm_enter();
+- if (error)
++ if (error) {
++ omap_set_pwrdm_state(mpu_pd, PWRDM_POWER_ON);
+ goto cpu_cluster_pm_out;
++ }
+ }
+ }
+
+diff --git a/arch/arm/mach-s3c24xx/mach-at2440evb.c b/arch/arm/mach-s3c24xx/mach-at2440evb.c
+index 58c5ef3cf1d7e..2d370f7f75fa2 100644
+--- a/arch/arm/mach-s3c24xx/mach-at2440evb.c
++++ b/arch/arm/mach-s3c24xx/mach-at2440evb.c
+@@ -143,7 +143,7 @@ static struct gpiod_lookup_table at2440evb_mci_gpio_table = {
+ .dev_id = "s3c2410-sdi",
+ .table = {
+ /* Card detect S3C2410_GPG(10) */
+- GPIO_LOOKUP("GPG", 10, "cd", GPIO_ACTIVE_LOW),
++ GPIO_LOOKUP("GPIOG", 10, "cd", GPIO_ACTIVE_LOW),
+ { },
+ },
+ };
+diff --git a/arch/arm/mach-s3c24xx/mach-h1940.c b/arch/arm/mach-s3c24xx/mach-h1940.c
+index e1c372e5447b6..82cc37513779c 100644
+--- a/arch/arm/mach-s3c24xx/mach-h1940.c
++++ b/arch/arm/mach-s3c24xx/mach-h1940.c
+@@ -468,9 +468,9 @@ static struct gpiod_lookup_table h1940_mmc_gpio_table = {
+ .dev_id = "s3c2410-sdi",
+ .table = {
+ /* Card detect S3C2410_GPF(5) */
+- GPIO_LOOKUP("GPF", 5, "cd", GPIO_ACTIVE_LOW),
++ GPIO_LOOKUP("GPIOF", 5, "cd", GPIO_ACTIVE_LOW),
+ /* Write protect S3C2410_GPH(8) */
+- GPIO_LOOKUP("GPH", 8, "wp", GPIO_ACTIVE_LOW),
++ GPIO_LOOKUP("GPIOH", 8, "wp", GPIO_ACTIVE_LOW),
+ { },
+ },
+ };
+diff --git a/arch/arm/mach-s3c24xx/mach-mini2440.c b/arch/arm/mach-s3c24xx/mach-mini2440.c
+index 9035f868fb34e..3a5b1124037b2 100644
+--- a/arch/arm/mach-s3c24xx/mach-mini2440.c
++++ b/arch/arm/mach-s3c24xx/mach-mini2440.c
+@@ -244,9 +244,9 @@ static struct gpiod_lookup_table mini2440_mmc_gpio_table = {
+ .dev_id = "s3c2410-sdi",
+ .table = {
+ /* Card detect S3C2410_GPG(8) */
+- GPIO_LOOKUP("GPG", 8, "cd", GPIO_ACTIVE_LOW),
++ GPIO_LOOKUP("GPIOG", 8, "cd", GPIO_ACTIVE_LOW),
+ /* Write protect S3C2410_GPH(8) */
+- GPIO_LOOKUP("GPH", 8, "wp", GPIO_ACTIVE_HIGH),
++ GPIO_LOOKUP("GPIOH", 8, "wp", GPIO_ACTIVE_HIGH),
+ { },
+ },
+ };
+diff --git a/arch/arm/mach-s3c24xx/mach-n30.c b/arch/arm/mach-s3c24xx/mach-n30.c
+index d856f23939aff..ffa20f52aa832 100644
+--- a/arch/arm/mach-s3c24xx/mach-n30.c
++++ b/arch/arm/mach-s3c24xx/mach-n30.c
+@@ -359,9 +359,9 @@ static struct gpiod_lookup_table n30_mci_gpio_table = {
+ .dev_id = "s3c2410-sdi",
+ .table = {
+ /* Card detect S3C2410_GPF(1) */
+- GPIO_LOOKUP("GPF", 1, "cd", GPIO_ACTIVE_LOW),
++ GPIO_LOOKUP("GPIOF", 1, "cd", GPIO_ACTIVE_LOW),
+ /* Write protect S3C2410_GPG(10) */
+- GPIO_LOOKUP("GPG", 10, "wp", GPIO_ACTIVE_LOW),
++ GPIO_LOOKUP("GPIOG", 10, "wp", GPIO_ACTIVE_LOW),
+ { },
+ },
+ };
+diff --git a/arch/arm/mach-s3c24xx/mach-rx1950.c b/arch/arm/mach-s3c24xx/mach-rx1950.c
+index fde98b175c752..c0a06f123cfea 100644
+--- a/arch/arm/mach-s3c24xx/mach-rx1950.c
++++ b/arch/arm/mach-s3c24xx/mach-rx1950.c
+@@ -571,9 +571,9 @@ static struct gpiod_lookup_table rx1950_mmc_gpio_table = {
+ .dev_id = "s3c2410-sdi",
+ .table = {
+ /* Card detect S3C2410_GPF(5) */
+- GPIO_LOOKUP("GPF", 5, "cd", GPIO_ACTIVE_LOW),
++ GPIO_LOOKUP("GPIOF", 5, "cd", GPIO_ACTIVE_LOW),
+ /* Write protect S3C2410_GPH(8) */
+- GPIO_LOOKUP("GPH", 8, "wp", GPIO_ACTIVE_LOW),
++ GPIO_LOOKUP("GPIOH", 8, "wp", GPIO_ACTIVE_LOW),
+ { },
+ },
+ };
+diff --git a/arch/arm/mm/cache-l2x0.c b/arch/arm/mm/cache-l2x0.c
+index 12c26eb88afbc..43d91bfd23600 100644
+--- a/arch/arm/mm/cache-l2x0.c
++++ b/arch/arm/mm/cache-l2x0.c
+@@ -1249,20 +1249,28 @@ static void __init l2c310_of_parse(const struct device_node *np,
+
+ ret = of_property_read_u32(np, "prefetch-data", &val);
+ if (ret == 0) {
+- if (val)
++ if (val) {
+ prefetch |= L310_PREFETCH_CTRL_DATA_PREFETCH;
+- else
++ *aux_val |= L310_PREFETCH_CTRL_DATA_PREFETCH;
++ } else {
+ prefetch &= ~L310_PREFETCH_CTRL_DATA_PREFETCH;
++ *aux_val &= ~L310_PREFETCH_CTRL_DATA_PREFETCH;
++ }
++ *aux_mask &= ~L310_PREFETCH_CTRL_DATA_PREFETCH;
+ } else if (ret != -EINVAL) {
+ pr_err("L2C-310 OF prefetch-data property value is missing\n");
+ }
+
+ ret = of_property_read_u32(np, "prefetch-instr", &val);
+ if (ret == 0) {
+- if (val)
++ if (val) {
+ prefetch |= L310_PREFETCH_CTRL_INSTR_PREFETCH;
+- else
++ *aux_val |= L310_PREFETCH_CTRL_INSTR_PREFETCH;
++ } else {
+ prefetch &= ~L310_PREFETCH_CTRL_INSTR_PREFETCH;
++ *aux_val &= ~L310_PREFETCH_CTRL_INSTR_PREFETCH;
++ }
++ *aux_mask &= ~L310_PREFETCH_CTRL_INSTR_PREFETCH;
+ } else if (ret != -EINVAL) {
+ pr_err("L2C-310 OF prefetch-instr property value is missing\n");
+ }
+diff --git a/arch/arm64/boot/dts/actions/s700.dtsi b/arch/arm64/boot/dts/actions/s700.dtsi
+index 2006ad5424fa6..f8eb72bb41254 100644
+--- a/arch/arm64/boot/dts/actions/s700.dtsi
++++ b/arch/arm64/boot/dts/actions/s700.dtsi
+@@ -231,7 +231,7 @@
+
+ pinctrl: pinctrl@e01b0000 {
+ compatible = "actions,s700-pinctrl";
+- reg = <0x0 0xe01b0000 0x0 0x1000>;
++ reg = <0x0 0xe01b0000 0x0 0x100>;
+ clocks = <&cmu CLK_GPIO>;
+ gpio-controller;
+ gpio-ranges = <&pinctrl 0 0 136>;
+diff --git a/arch/arm64/boot/dts/allwinner/sun50i-h5.dtsi b/arch/arm64/boot/dts/allwinner/sun50i-h5.dtsi
+index 4462a68c06815..cdc4209f94d0e 100644
+--- a/arch/arm64/boot/dts/allwinner/sun50i-h5.dtsi
++++ b/arch/arm64/boot/dts/allwinner/sun50i-h5.dtsi
+@@ -125,8 +125,7 @@
+ <GIC_SPI 104 IRQ_TYPE_LEVEL_HIGH>,
+ <GIC_SPI 105 IRQ_TYPE_LEVEL_HIGH>,
+ <GIC_SPI 106 IRQ_TYPE_LEVEL_HIGH>,
+- <GIC_SPI 107 IRQ_TYPE_LEVEL_HIGH>,
+- <GIC_SPI 98 IRQ_TYPE_LEVEL_HIGH>;
++ <GIC_SPI 107 IRQ_TYPE_LEVEL_HIGH>;
+ interrupt-names = "gp",
+ "gpmmu",
+ "pp",
+@@ -137,8 +136,7 @@
+ "pp2",
+ "ppmmu2",
+ "pp3",
+- "ppmmu3",
+- "pmu";
++ "ppmmu3";
+ clocks = <&ccu CLK_BUS_GPU>, <&ccu CLK_GPU>;
+ clock-names = "bus", "core";
+ resets = <&ccu RST_BUS_GPU>;
+diff --git a/arch/arm64/boot/dts/amlogic/meson-khadas-vim3.dtsi b/arch/arm64/boot/dts/amlogic/meson-khadas-vim3.dtsi
+index ff5ba85b7562e..833bbc3359c44 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-khadas-vim3.dtsi
++++ b/arch/arm64/boot/dts/amlogic/meson-khadas-vim3.dtsi
+@@ -41,13 +41,13 @@
+
+ led-white {
+ label = "vim3:white:sys";
+- gpios = <&gpio_ao GPIOAO_4 GPIO_ACTIVE_LOW>;
++ gpios = <&gpio_ao GPIOAO_4 GPIO_ACTIVE_HIGH>;
+ linux,default-trigger = "heartbeat";
+ };
+
+ led-red {
+ label = "vim3:red";
+- gpios = <&gpio_expander 5 GPIO_ACTIVE_LOW>;
++ gpios = <&gpio_expander 5 GPIO_ACTIVE_HIGH>;
+ };
+ };
+
+diff --git a/arch/arm64/boot/dts/freescale/imx8mq.dtsi b/arch/arm64/boot/dts/freescale/imx8mq.dtsi
+index 66ac66856e7e8..077e12a0de3f9 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mq.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx8mq.dtsi
+@@ -614,6 +614,7 @@
+ gpc: gpc@303a0000 {
+ compatible = "fsl,imx8mq-gpc";
+ reg = <0x303a0000 0x10000>;
++ interrupts = <GIC_SPI 87 IRQ_TYPE_LEVEL_HIGH>;
+ interrupt-parent = <&gic>;
+ interrupt-controller;
+ #interrupt-cells = <3>;
+diff --git a/arch/arm64/boot/dts/mediatek/mt8173-elm.dtsi b/arch/arm64/boot/dts/mediatek/mt8173-elm.dtsi
+index a5a12b2599a4a..01522dd10603e 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8173-elm.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt8173-elm.dtsi
+@@ -431,12 +431,11 @@
+ status = "okay";
+ pinctrl-names = "default";
+ pinctrl-0 = <&nor_gpio1_pins>;
+- bus-width = <8>;
+- max-frequency = <50000000>;
+- non-removable;
++
+ flash@0 {
+ compatible = "jedec,spi-nor";
+ reg = <0>;
++ spi-max-frequency = <50000000>;
+ };
+ };
+
+diff --git a/arch/arm64/boot/dts/qcom/msm8916.dtsi b/arch/arm64/boot/dts/qcom/msm8916.dtsi
+index 32bd140ac9fd4..103d2226c579b 100644
+--- a/arch/arm64/boot/dts/qcom/msm8916.dtsi
++++ b/arch/arm64/boot/dts/qcom/msm8916.dtsi
+@@ -228,14 +228,14 @@
+ };
+
+ thermal-zones {
+- cpu0_1-thermal {
++ cpu0-1-thermal {
+ polling-delay-passive = <250>;
+ polling-delay = <1000>;
+
+ thermal-sensors = <&tsens 5>;
+
+ trips {
+- cpu0_1_alert0: trip-point@0 {
++ cpu0_1_alert0: trip-point0 {
+ temperature = <75000>;
+ hysteresis = <2000>;
+ type = "passive";
+@@ -258,7 +258,7 @@
+ };
+ };
+
+- cpu2_3-thermal {
++ cpu2-3-thermal {
+ polling-delay-passive = <250>;
+ polling-delay = <1000>;
+
+@@ -1021,7 +1021,7 @@
+ reg-names = "mdp_phys";
+
+ interrupt-parent = <&mdss>;
+- interrupts = <0 0>;
++ interrupts = <0>;
+
+ clocks = <&gcc GCC_MDSS_AHB_CLK>,
+ <&gcc GCC_MDSS_AXI_CLK>,
+@@ -1053,7 +1053,7 @@
+ reg-names = "dsi_ctrl";
+
+ interrupt-parent = <&mdss>;
+- interrupts = <4 0>;
++ interrupts = <4>;
+
+ assigned-clocks = <&gcc BYTE0_CLK_SRC>,
+ <&gcc PCLK0_CLK_SRC>;
+diff --git a/arch/arm64/boot/dts/qcom/pm8916.dtsi b/arch/arm64/boot/dts/qcom/pm8916.dtsi
+index 0bcdf04711079..adf9a5988cdc2 100644
+--- a/arch/arm64/boot/dts/qcom/pm8916.dtsi
++++ b/arch/arm64/boot/dts/qcom/pm8916.dtsi
+@@ -119,7 +119,7 @@
+
+ wcd_codec: codec@f000 {
+ compatible = "qcom,pm8916-wcd-analog-codec";
+- reg = <0xf000 0x200>;
++ reg = <0xf000>;
+ reg-names = "pmic-codec-core";
+ clocks = <&gcc GCC_CODEC_DIGCODEC_CLK>;
+ clock-names = "mclk";
+diff --git a/arch/arm64/boot/dts/qcom/sc7180.dtsi b/arch/arm64/boot/dts/qcom/sc7180.dtsi
+index 31b9217bb5bfe..7f1b75b2bcee3 100644
+--- a/arch/arm64/boot/dts/qcom/sc7180.dtsi
++++ b/arch/arm64/boot/dts/qcom/sc7180.dtsi
+@@ -2193,7 +2193,7 @@
+
+ system-cache-controller@9200000 {
+ compatible = "qcom,sc7180-llcc";
+- reg = <0 0x09200000 0 0x200000>, <0 0x09600000 0 0x50000>;
++ reg = <0 0x09200000 0 0x50000>, <0 0x09600000 0 0x50000>;
+ reg-names = "llcc_base", "llcc_broadcast_base";
+ interrupts = <GIC_SPI 582 IRQ_TYPE_LEVEL_HIGH>;
+ };
+@@ -2357,7 +2357,7 @@
+ <19200000>;
+
+ interrupt-parent = <&mdss>;
+- interrupts = <0 IRQ_TYPE_LEVEL_HIGH>;
++ interrupts = <0>;
+
+ status = "disabled";
+
+@@ -2380,7 +2380,7 @@
+ reg-names = "dsi_ctrl";
+
+ interrupt-parent = <&mdss>;
+- interrupts = <4 IRQ_TYPE_LEVEL_HIGH>;
++ interrupts = <4>;
+
+ clocks = <&dispcc DISP_CC_MDSS_BYTE0_CLK>,
+ <&dispcc DISP_CC_MDSS_BYTE0_INTF_CLK>,
+diff --git a/arch/arm64/boot/dts/renesas/r8a774c0.dtsi b/arch/arm64/boot/dts/renesas/r8a774c0.dtsi
+index 42171190cce46..065e8fe3a071c 100644
+--- a/arch/arm64/boot/dts/renesas/r8a774c0.dtsi
++++ b/arch/arm64/boot/dts/renesas/r8a774c0.dtsi
+@@ -1214,9 +1214,8 @@
+ reg = <0 0xe6ea0000 0 0x0064>;
+ interrupts = <GIC_SPI 157 IRQ_TYPE_LEVEL_HIGH>;
+ clocks = <&cpg CPG_MOD 210>;
+- dmas = <&dmac1 0x43>, <&dmac1 0x42>,
+- <&dmac2 0x43>, <&dmac2 0x42>;
+- dma-names = "tx", "rx", "tx", "rx";
++ dmas = <&dmac0 0x43>, <&dmac0 0x42>;
++ dma-names = "tx", "rx";
+ power-domains = <&sysc R8A774C0_PD_ALWAYS_ON>;
+ resets = <&cpg 210>;
+ #address-cells = <1>;
+diff --git a/arch/arm64/boot/dts/renesas/r8a77990.dtsi b/arch/arm64/boot/dts/renesas/r8a77990.dtsi
+index 1991bdc36792f..27f74df8efbde 100644
+--- a/arch/arm64/boot/dts/renesas/r8a77990.dtsi
++++ b/arch/arm64/boot/dts/renesas/r8a77990.dtsi
+@@ -1192,9 +1192,8 @@
+ reg = <0 0xe6ea0000 0 0x0064>;
+ interrupts = <GIC_SPI 157 IRQ_TYPE_LEVEL_HIGH>;
+ clocks = <&cpg CPG_MOD 210>;
+- dmas = <&dmac1 0x43>, <&dmac1 0x42>,
+- <&dmac2 0x43>, <&dmac2 0x42>;
+- dma-names = "tx", "rx", "tx", "rx";
++ dmas = <&dmac0 0x43>, <&dmac0 0x42>;
++ dma-names = "tx", "rx";
+ power-domains = <&sysc R8A77990_PD_ALWAYS_ON>;
+ resets = <&cpg 210>;
+ #address-cells = <1>;
+diff --git a/arch/arm64/boot/dts/xilinx/zynqmp.dtsi b/arch/arm64/boot/dts/xilinx/zynqmp.dtsi
+index 9174ddc76bdc3..b8d04c5748bf3 100644
+--- a/arch/arm64/boot/dts/xilinx/zynqmp.dtsi
++++ b/arch/arm64/boot/dts/xilinx/zynqmp.dtsi
+@@ -500,7 +500,7 @@
+ };
+
+ i2c0: i2c@ff020000 {
+- compatible = "cdns,i2c-r1p14", "cdns,i2c-r1p10";
++ compatible = "cdns,i2c-r1p14";
+ status = "disabled";
+ interrupt-parent = <&gic>;
+ interrupts = <0 17 4>;
+@@ -511,7 +511,7 @@
+ };
+
+ i2c1: i2c@ff030000 {
+- compatible = "cdns,i2c-r1p14", "cdns,i2c-r1p10";
++ compatible = "cdns,i2c-r1p14";
+ status = "disabled";
+ interrupt-parent = <&gic>;
+ interrupts = <0 18 4>;
+diff --git a/arch/arm64/include/asm/insn.h b/arch/arm64/include/asm/insn.h
+index 0bc46149e4917..4b39293d0f72d 100644
+--- a/arch/arm64/include/asm/insn.h
++++ b/arch/arm64/include/asm/insn.h
+@@ -359,9 +359,13 @@ __AARCH64_INSN_FUNCS(brk, 0xFFE0001F, 0xD4200000)
+ __AARCH64_INSN_FUNCS(exception, 0xFF000000, 0xD4000000)
+ __AARCH64_INSN_FUNCS(hint, 0xFFFFF01F, 0xD503201F)
+ __AARCH64_INSN_FUNCS(br, 0xFFFFFC1F, 0xD61F0000)
++__AARCH64_INSN_FUNCS(br_auth, 0xFEFFF800, 0xD61F0800)
+ __AARCH64_INSN_FUNCS(blr, 0xFFFFFC1F, 0xD63F0000)
++__AARCH64_INSN_FUNCS(blr_auth, 0xFEFFF800, 0xD63F0800)
+ __AARCH64_INSN_FUNCS(ret, 0xFFFFFC1F, 0xD65F0000)
++__AARCH64_INSN_FUNCS(ret_auth, 0xFFFFFBFF, 0xD65F0BFF)
+ __AARCH64_INSN_FUNCS(eret, 0xFFFFFFFF, 0xD69F03E0)
++__AARCH64_INSN_FUNCS(eret_auth, 0xFFFFFBFF, 0xD69F0BFF)
+ __AARCH64_INSN_FUNCS(mrs, 0xFFF00000, 0xD5300000)
+ __AARCH64_INSN_FUNCS(msr_imm, 0xFFF8F01F, 0xD500401F)
+ __AARCH64_INSN_FUNCS(msr_reg, 0xFFF00000, 0xD5100000)
+diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
+index a1871bb32bb17..d207f63eb68e1 100644
+--- a/arch/arm64/include/asm/memory.h
++++ b/arch/arm64/include/asm/memory.h
+@@ -163,7 +163,6 @@ extern u64 vabits_actual;
+ #include <linux/bitops.h>
+ #include <linux/mmdebug.h>
+
+-extern s64 physvirt_offset;
+ extern s64 memstart_addr;
+ /* PHYS_OFFSET - the physical address of the start of memory. */
+ #define PHYS_OFFSET ({ VM_BUG_ON(memstart_addr & 1); memstart_addr; })
+@@ -239,7 +238,7 @@ static inline const void *__tag_set(const void *addr, u8 tag)
+ */
+ #define __is_lm_address(addr) (!(((u64)addr) & BIT(vabits_actual - 1)))
+
+-#define __lm_to_phys(addr) (((addr) + physvirt_offset))
++#define __lm_to_phys(addr) (((addr) & ~PAGE_OFFSET) + PHYS_OFFSET)
+ #define __kimg_to_phys(addr) ((addr) - kimage_voffset)
+
+ #define __virt_to_phys_nodebug(x) ({ \
+@@ -257,7 +256,7 @@ extern phys_addr_t __phys_addr_symbol(unsigned long x);
+ #define __phys_addr_symbol(x) __pa_symbol_nodebug(x)
+ #endif /* CONFIG_DEBUG_VIRTUAL */
+
+-#define __phys_to_virt(x) ((unsigned long)((x) - physvirt_offset))
++#define __phys_to_virt(x) ((unsigned long)((x) - PHYS_OFFSET) | PAGE_OFFSET)
+ #define __phys_to_kimg(x) ((unsigned long)((x) + kimage_voffset))
+
+ /*
+diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
+index 758e2d1577d0c..a1745d6ea4b58 100644
+--- a/arch/arm64/include/asm/pgtable.h
++++ b/arch/arm64/include/asm/pgtable.h
+@@ -23,6 +23,8 @@
+ #define VMALLOC_START (MODULES_END)
+ #define VMALLOC_END (- PUD_SIZE - VMEMMAP_SIZE - SZ_64K)
+
++#define vmemmap ((struct page *)VMEMMAP_START - (memstart_addr >> PAGE_SHIFT))
++
+ #define FIRST_USER_ADDRESS 0UL
+
+ #ifndef __ASSEMBLY__
+@@ -33,8 +35,6 @@
+ #include <linux/mm_types.h>
+ #include <linux/sched.h>
+
+-extern struct page *vmemmap;
+-
+ extern void __pte_error(const char *file, int line, unsigned long val);
+ extern void __pmd_error(const char *file, int line, unsigned long val);
+ extern void __pud_error(const char *file, int line, unsigned long val);
+diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
+index 422ed2e38a6c8..6e8a7eec667e8 100644
+--- a/arch/arm64/kernel/cpu_errata.c
++++ b/arch/arm64/kernel/cpu_errata.c
+@@ -234,14 +234,17 @@ static int detect_harden_bp_fw(void)
+ smccc_end = NULL;
+ break;
+
+-#if IS_ENABLED(CONFIG_KVM)
+ case SMCCC_CONDUIT_SMC:
+ cb = call_smc_arch_workaround_1;
++#if IS_ENABLED(CONFIG_KVM)
+ smccc_start = __smccc_workaround_1_smc;
+ smccc_end = __smccc_workaround_1_smc +
+ __SMCCC_WORKAROUND_1_SMC_SZ;
+- break;
++#else
++ smccc_start = NULL;
++ smccc_end = NULL;
+ #endif
++ break;
+
+ default:
+ return -1;
+diff --git a/arch/arm64/kernel/insn.c b/arch/arm64/kernel/insn.c
+index a107375005bc9..ccc8c9e22b258 100644
+--- a/arch/arm64/kernel/insn.c
++++ b/arch/arm64/kernel/insn.c
+@@ -176,7 +176,7 @@ bool __kprobes aarch64_insn_uses_literal(u32 insn)
+
+ bool __kprobes aarch64_insn_is_branch(u32 insn)
+ {
+- /* b, bl, cb*, tb*, b.cond, br, blr */
++ /* b, bl, cb*, tb*, ret*, b.cond, br*, blr* */
+
+ return aarch64_insn_is_b(insn) ||
+ aarch64_insn_is_bl(insn) ||
+@@ -185,8 +185,11 @@ bool __kprobes aarch64_insn_is_branch(u32 insn)
+ aarch64_insn_is_tbz(insn) ||
+ aarch64_insn_is_tbnz(insn) ||
+ aarch64_insn_is_ret(insn) ||
++ aarch64_insn_is_ret_auth(insn) ||
+ aarch64_insn_is_br(insn) ||
++ aarch64_insn_is_br_auth(insn) ||
+ aarch64_insn_is_blr(insn) ||
++ aarch64_insn_is_blr_auth(insn) ||
+ aarch64_insn_is_bcond(insn);
+ }
+
+diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c
+index 581602413a130..c26d84ff0e224 100644
+--- a/arch/arm64/kernel/perf_event.c
++++ b/arch/arm64/kernel/perf_event.c
+@@ -510,6 +510,11 @@ static u32 armv8pmu_event_cnten_mask(struct perf_event *event)
+
+ static inline void armv8pmu_enable_counter(u32 mask)
+ {
++ /*
++ * Make sure event configuration register writes are visible before we
++ * enable the counter.
++ */
++ isb();
+ write_sysreg(mask, pmcntenset_el0);
+ }
+
+diff --git a/arch/arm64/kernel/probes/decode-insn.c b/arch/arm64/kernel/probes/decode-insn.c
+index 263d5fba4c8a3..c541fb48886e3 100644
+--- a/arch/arm64/kernel/probes/decode-insn.c
++++ b/arch/arm64/kernel/probes/decode-insn.c
+@@ -29,7 +29,8 @@ static bool __kprobes aarch64_insn_is_steppable(u32 insn)
+ aarch64_insn_is_msr_imm(insn) ||
+ aarch64_insn_is_msr_reg(insn) ||
+ aarch64_insn_is_exception(insn) ||
+- aarch64_insn_is_eret(insn))
++ aarch64_insn_is_eret(insn) ||
++ aarch64_insn_is_eret_auth(insn))
+ return false;
+
+ /*
+diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
+index 1e93cfc7c47ad..ca4410eb230a3 100644
+--- a/arch/arm64/mm/init.c
++++ b/arch/arm64/mm/init.c
+@@ -54,12 +54,6 @@
+ s64 memstart_addr __ro_after_init = -1;
+ EXPORT_SYMBOL(memstart_addr);
+
+-s64 physvirt_offset __ro_after_init;
+-EXPORT_SYMBOL(physvirt_offset);
+-
+-struct page *vmemmap __ro_after_init;
+-EXPORT_SYMBOL(vmemmap);
+-
+ /*
+ * We create both ZONE_DMA and ZONE_DMA32. ZONE_DMA covers the first 1G of
+ * memory as some devices, namely the Raspberry Pi 4, have peripherals with
+@@ -290,20 +284,6 @@ void __init arm64_memblock_init(void)
+ memstart_addr = round_down(memblock_start_of_DRAM(),
+ ARM64_MEMSTART_ALIGN);
+
+- physvirt_offset = PHYS_OFFSET - PAGE_OFFSET;
+-
+- vmemmap = ((struct page *)VMEMMAP_START - (memstart_addr >> PAGE_SHIFT));
+-
+- /*
+- * If we are running with a 52-bit kernel VA config on a system that
+- * does not support it, we have to offset our vmemmap and physvirt_offset
+- * s.t. we avoid the 52-bit portion of the direct linear map
+- */
+- if (IS_ENABLED(CONFIG_ARM64_VA_BITS_52) && (vabits_actual != 52)) {
+- vmemmap += (_PAGE_OFFSET(48) - _PAGE_OFFSET(52)) >> PAGE_SHIFT;
+- physvirt_offset = PHYS_OFFSET - _PAGE_OFFSET(48);
+- }
+-
+ /*
+ * Remove the memory that we will not be able to cover with the
+ * linear mapping. Take care not to clip the kernel which may be
+@@ -318,6 +298,16 @@ void __init arm64_memblock_init(void)
+ memblock_remove(0, memstart_addr);
+ }
+
++ /*
++ * If we are running with a 52-bit kernel VA config on a system that
++ * does not support it, we have to place the available physical
++ * memory in the 48-bit addressable part of the linear region, i.e.,
++ * we have to move it upward. Since memstart_addr represents the
++ * physical address of PAGE_OFFSET, we have to *subtract* from it.
++ */
++ if (IS_ENABLED(CONFIG_ARM64_VA_BITS_52) && (vabits_actual != 52))
++ memstart_addr -= _PAGE_OFFSET(48) - _PAGE_OFFSET(52);
++
+ /*
+ * Apply the memory limit if it was set. Since the kernel may be loaded
+ * high up in memory, add back the kernel region that must be accessible
+diff --git a/arch/m68k/coldfire/device.c b/arch/m68k/coldfire/device.c
+index 9ef4ec0aea008..59f7dfe50a4d0 100644
+--- a/arch/m68k/coldfire/device.c
++++ b/arch/m68k/coldfire/device.c
+@@ -554,7 +554,7 @@ static struct platform_device mcf_edma = {
+ };
+ #endif /* IS_ENABLED(CONFIG_MCF_EDMA) */
+
+-#if IS_ENABLED(CONFIG_MMC)
++#ifdef MCFSDHC_BASE
+ static struct mcf_esdhc_platform_data mcf_esdhc_data = {
+ .max_bus_width = 4,
+ .cd_type = ESDHC_CD_NONE,
+@@ -579,7 +579,7 @@ static struct platform_device mcf_esdhc = {
+ .resource = mcf_esdhc_resources,
+ .dev.platform_data = &mcf_esdhc_data,
+ };
+-#endif /* IS_ENABLED(CONFIG_MMC) */
++#endif /* MCFSDHC_BASE */
+
+ static struct platform_device *mcf_devices[] __initdata = {
+ &mcf_uart,
+@@ -613,7 +613,7 @@ static struct platform_device *mcf_devices[] __initdata = {
+ #if IS_ENABLED(CONFIG_MCF_EDMA)
+ &mcf_edma,
+ #endif
+-#if IS_ENABLED(CONFIG_MMC)
++#ifdef MCFSDHC_BASE
+ &mcf_esdhc,
+ #endif
+ };
+diff --git a/arch/microblaze/include/asm/Kbuild b/arch/microblaze/include/asm/Kbuild
+index 2e87a9b6d312f..63bce836b9f10 100644
+--- a/arch/microblaze/include/asm/Kbuild
++++ b/arch/microblaze/include/asm/Kbuild
+@@ -1,7 +1,6 @@
+ # SPDX-License-Identifier: GPL-2.0
+ generated-y += syscall_table.h
+ generic-y += extable.h
+-generic-y += hw_irq.h
+ generic-y += kvm_para.h
+ generic-y += local64.h
+ generic-y += mcs_spinlock.h
+diff --git a/arch/powerpc/include/asm/book3s/64/hash-4k.h b/arch/powerpc/include/asm/book3s/64/hash-4k.h
+index 3f9ae3585ab98..80c9534148821 100644
+--- a/arch/powerpc/include/asm/book3s/64/hash-4k.h
++++ b/arch/powerpc/include/asm/book3s/64/hash-4k.h
+@@ -13,20 +13,19 @@
+ */
+ #define MAX_EA_BITS_PER_CONTEXT 46
+
+-#define REGION_SHIFT (MAX_EA_BITS_PER_CONTEXT - 2)
+
+ /*
+- * Our page table limit us to 64TB. Hence for the kernel mapping,
+- * each MAP area is limited to 16 TB.
+- * The four map areas are: linear mapping, vmap, IO and vmemmap
++ * Our page table limit us to 64TB. For 64TB physical memory, we only need 64GB
++ * of vmemmap space. To better support sparse memory layout, we use 61TB
++ * linear map range, 1TB of vmalloc, 1TB of I/O and 1TB of vmememmap.
+ */
++#define REGION_SHIFT (40)
+ #define H_KERN_MAP_SIZE (ASM_CONST(1) << REGION_SHIFT)
+
+ /*
+- * Define the address range of the kernel non-linear virtual area
+- * 16TB
++ * Define the address range of the kernel non-linear virtual area (61TB)
+ */
+-#define H_KERN_VIRT_START ASM_CONST(0xc000100000000000)
++#define H_KERN_VIRT_START ASM_CONST(0xc0003d0000000000)
+
+ #ifndef __ASSEMBLY__
+ #define H_PTE_TABLE_SIZE (sizeof(pte_t) << H_PTE_INDEX_SIZE)
+diff --git a/arch/powerpc/include/asm/drmem.h b/arch/powerpc/include/asm/drmem.h
+index 414d209f45bbe..c711fe8901109 100644
+--- a/arch/powerpc/include/asm/drmem.h
++++ b/arch/powerpc/include/asm/drmem.h
+@@ -8,14 +8,13 @@
+ #ifndef _ASM_POWERPC_LMB_H
+ #define _ASM_POWERPC_LMB_H
+
++#include <linux/sched.h>
++
+ struct drmem_lmb {
+ u64 base_addr;
+ u32 drc_index;
+ u32 aa_index;
+ u32 flags;
+-#ifdef CONFIG_MEMORY_HOTPLUG
+- int nid;
+-#endif
+ };
+
+ struct drmem_lmb_info {
+@@ -26,8 +25,22 @@ struct drmem_lmb_info {
+
+ extern struct drmem_lmb_info *drmem_info;
+
++static inline struct drmem_lmb *drmem_lmb_next(struct drmem_lmb *lmb,
++ const struct drmem_lmb *start)
++{
++ /*
++ * DLPAR code paths can take several milliseconds per element
++ * when interacting with firmware. Ensure that we don't
++ * unfairly monopolize the CPU.
++ */
++ if (((++lmb - start) % 16) == 0)
++ cond_resched();
++
++ return lmb;
++}
++
+ #define for_each_drmem_lmb_in_range(lmb, start, end) \
+- for ((lmb) = (start); (lmb) < (end); (lmb)++)
++ for ((lmb) = (start); (lmb) < (end); lmb = drmem_lmb_next(lmb, start))
+
+ #define for_each_drmem_lmb(lmb) \
+ for_each_drmem_lmb_in_range((lmb), \
+@@ -104,22 +117,4 @@ static inline void invalidate_lmb_associativity_index(struct drmem_lmb *lmb)
+ lmb->aa_index = 0xffffffff;
+ }
+
+-#ifdef CONFIG_MEMORY_HOTPLUG
+-static inline void lmb_set_nid(struct drmem_lmb *lmb)
+-{
+- lmb->nid = memory_add_physaddr_to_nid(lmb->base_addr);
+-}
+-static inline void lmb_clear_nid(struct drmem_lmb *lmb)
+-{
+- lmb->nid = -1;
+-}
+-#else
+-static inline void lmb_set_nid(struct drmem_lmb *lmb)
+-{
+-}
+-static inline void lmb_clear_nid(struct drmem_lmb *lmb)
+-{
+-}
+-#endif
+-
+ #endif /* _ASM_POWERPC_LMB_H */
+diff --git a/arch/powerpc/include/asm/hw_breakpoint.h b/arch/powerpc/include/asm/hw_breakpoint.h
+index cb424799da0dc..5a00da670a407 100644
+--- a/arch/powerpc/include/asm/hw_breakpoint.h
++++ b/arch/powerpc/include/asm/hw_breakpoint.h
+@@ -40,6 +40,7 @@ struct arch_hw_breakpoint {
+ #else
+ #define HW_BREAKPOINT_SIZE 0x8
+ #endif
++#define HW_BREAKPOINT_SIZE_QUADWORD 0x10
+
+ #define DABR_MAX_LEN 8
+ #define DAWR_MAX_LEN 512
+diff --git a/arch/powerpc/include/asm/reg.h b/arch/powerpc/include/asm/reg.h
+index 88e6c78100d9b..c750afc62887c 100644
+--- a/arch/powerpc/include/asm/reg.h
++++ b/arch/powerpc/include/asm/reg.h
+@@ -815,7 +815,7 @@
+ #define THRM1_TIN (1 << 31)
+ #define THRM1_TIV (1 << 30)
+ #define THRM1_THRES(x) ((x&0x7f)<<23)
+-#define THRM3_SITV(x) ((x&0x3fff)<<1)
++#define THRM3_SITV(x) ((x & 0x1fff) << 1)
+ #define THRM1_TID (1<<2)
+ #define THRM1_TIE (1<<1)
+ #define THRM1_V (1<<0)
+diff --git a/arch/powerpc/include/asm/svm.h b/arch/powerpc/include/asm/svm.h
+index 85580b30aba48..7546402d796af 100644
+--- a/arch/powerpc/include/asm/svm.h
++++ b/arch/powerpc/include/asm/svm.h
+@@ -15,6 +15,8 @@ static inline bool is_secure_guest(void)
+ return mfmsr() & MSR_S;
+ }
+
++void __init svm_swiotlb_init(void);
++
+ void dtl_cache_ctor(void *addr);
+ #define get_dtl_cache_ctor() (is_secure_guest() ? dtl_cache_ctor : NULL)
+
+@@ -25,6 +27,8 @@ static inline bool is_secure_guest(void)
+ return false;
+ }
+
++static inline void svm_swiotlb_init(void) {}
++
+ #define get_dtl_cache_ctor() NULL
+
+ #endif /* CONFIG_PPC_SVM */
+diff --git a/arch/powerpc/include/asm/tlb.h b/arch/powerpc/include/asm/tlb.h
+index 862985cf51804..cf87bbdcfdcb2 100644
+--- a/arch/powerpc/include/asm/tlb.h
++++ b/arch/powerpc/include/asm/tlb.h
+@@ -67,19 +67,6 @@ static inline int mm_is_thread_local(struct mm_struct *mm)
+ return false;
+ return cpumask_test_cpu(smp_processor_id(), mm_cpumask(mm));
+ }
+-static inline void mm_reset_thread_local(struct mm_struct *mm)
+-{
+- WARN_ON(atomic_read(&mm->context.copros) > 0);
+- /*
+- * It's possible for mm_access to take a reference on mm_users to
+- * access the remote mm from another thread, but it's not allowed
+- * to set mm_cpumask, so mm_users may be > 1 here.
+- */
+- WARN_ON(current->mm != mm);
+- atomic_set(&mm->context.active_cpus, 1);
+- cpumask_clear(mm_cpumask(mm));
+- cpumask_set_cpu(smp_processor_id(), mm_cpumask(mm));
+-}
+ #else /* CONFIG_PPC_BOOK3S_64 */
+ static inline int mm_is_thread_local(struct mm_struct *mm)
+ {
+diff --git a/arch/powerpc/kernel/hw_breakpoint.c b/arch/powerpc/kernel/hw_breakpoint.c
+index c55e67bab2710..2190be70c7fd9 100644
+--- a/arch/powerpc/kernel/hw_breakpoint.c
++++ b/arch/powerpc/kernel/hw_breakpoint.c
+@@ -519,9 +519,17 @@ static bool ea_hw_range_overlaps(unsigned long ea, int size,
+ struct arch_hw_breakpoint *info)
+ {
+ unsigned long hw_start_addr, hw_end_addr;
++ unsigned long align_size = HW_BREAKPOINT_SIZE;
+
+- hw_start_addr = ALIGN_DOWN(info->address, HW_BREAKPOINT_SIZE);
+- hw_end_addr = ALIGN(info->address + info->len, HW_BREAKPOINT_SIZE);
++ /*
++ * On p10 predecessors, a quadword is handled differently than
++ * other instructions.
++ */
++ if (!cpu_has_feature(CPU_FTR_ARCH_31) && size == 16)
++ align_size = HW_BREAKPOINT_SIZE_QUADWORD;
++
++ hw_start_addr = ALIGN_DOWN(info->address, align_size);
++ hw_end_addr = ALIGN(info->address + info->len, align_size);
+
+ return ((ea < hw_end_addr) && (ea + size > hw_start_addr));
+ }
+@@ -635,6 +643,8 @@ static void get_instr_detail(struct pt_regs *regs, struct ppc_inst *instr,
+ if (*type == CACHEOP) {
+ *size = cache_op_size();
+ *ea &= ~(*size - 1);
++ } else if (*type == LOAD_VMX || *type == STORE_VMX) {
++ *ea &= ~(*size - 1);
+ }
+ }
+
+diff --git a/arch/powerpc/kernel/irq.c b/arch/powerpc/kernel/irq.c
+index 05b1cc0e009e4..3a22281a8264e 100644
+--- a/arch/powerpc/kernel/irq.c
++++ b/arch/powerpc/kernel/irq.c
+@@ -214,7 +214,7 @@ void replay_soft_interrupts(void)
+ struct pt_regs regs;
+
+ ppc_save_regs(®s);
+- regs.softe = IRQS_ALL_DISABLED;
++ regs.softe = IRQS_ENABLED;
+
+ again:
+ if (IS_ENABLED(CONFIG_PPC_IRQ_SOFT_MASK_DEBUG))
+@@ -368,6 +368,12 @@ notrace void arch_local_irq_restore(unsigned long mask)
+ }
+ }
+
++ /*
++ * Disable preempt here, so that the below preempt_enable will
++ * perform resched if required (a replayed interrupt may set
++ * need_resched).
++ */
++ preempt_disable();
+ irq_soft_mask_set(IRQS_ALL_DISABLED);
+ trace_hardirqs_off();
+
+@@ -377,6 +383,7 @@ notrace void arch_local_irq_restore(unsigned long mask)
+ trace_hardirqs_on();
+ irq_soft_mask_set(IRQS_ENABLED);
+ __hard_irq_enable();
++ preempt_enable();
+ }
+ EXPORT_SYMBOL(arch_local_irq_restore);
+
+diff --git a/arch/powerpc/kernel/ptrace/ptrace-noadv.c b/arch/powerpc/kernel/ptrace/ptrace-noadv.c
+index 697c7e4b5877f..8bd8d8de5c40b 100644
+--- a/arch/powerpc/kernel/ptrace/ptrace-noadv.c
++++ b/arch/powerpc/kernel/ptrace/ptrace-noadv.c
+@@ -219,6 +219,7 @@ long ppc_set_hwdebug(struct task_struct *child, struct ppc_hw_breakpoint *bp_inf
+ brk.address = ALIGN_DOWN(bp_info->addr, HW_BREAKPOINT_SIZE);
+ brk.type = HW_BRK_TYPE_TRANSLATE;
+ brk.len = DABR_MAX_LEN;
++ brk.hw_len = DABR_MAX_LEN;
+ if (bp_info->trigger_type & PPC_BREAKPOINT_TRIGGER_READ)
+ brk.type |= HW_BRK_TYPE_READ;
+ if (bp_info->trigger_type & PPC_BREAKPOINT_TRIGGER_WRITE)
+diff --git a/arch/powerpc/kernel/tau_6xx.c b/arch/powerpc/kernel/tau_6xx.c
+index e2ab8a111b693..0b4694b8d2482 100644
+--- a/arch/powerpc/kernel/tau_6xx.c
++++ b/arch/powerpc/kernel/tau_6xx.c
+@@ -13,13 +13,14 @@
+ */
+
+ #include <linux/errno.h>
+-#include <linux/jiffies.h>
+ #include <linux/kernel.h>
+ #include <linux/param.h>
+ #include <linux/string.h>
+ #include <linux/mm.h>
+ #include <linux/interrupt.h>
+ #include <linux/init.h>
++#include <linux/delay.h>
++#include <linux/workqueue.h>
+
+ #include <asm/io.h>
+ #include <asm/reg.h>
+@@ -39,9 +40,7 @@ static struct tau_temp
+ unsigned char grew;
+ } tau[NR_CPUS];
+
+-struct timer_list tau_timer;
+-
+-#undef DEBUG
++static bool tau_int_enable;
+
+ /* TODO: put these in a /proc interface, with some sanity checks, and maybe
+ * dynamic adjustment to minimize # of interrupts */
+@@ -50,72 +49,49 @@ struct timer_list tau_timer;
+ #define step_size 2 /* step size when temp goes out of range */
+ #define window_expand 1 /* expand the window by this much */
+ /* configurable values for shrinking the window */
+-#define shrink_timer 2*HZ /* period between shrinking the window */
++#define shrink_timer 2000 /* period between shrinking the window */
+ #define min_window 2 /* minimum window size, degrees C */
+
+ static void set_thresholds(unsigned long cpu)
+ {
+-#ifdef CONFIG_TAU_INT
+- /*
+- * setup THRM1,
+- * threshold, valid bit, enable interrupts, interrupt when below threshold
+- */
+- mtspr(SPRN_THRM1, THRM1_THRES(tau[cpu].low) | THRM1_V | THRM1_TIE | THRM1_TID);
++ u32 maybe_tie = tau_int_enable ? THRM1_TIE : 0;
+
+- /* setup THRM2,
+- * threshold, valid bit, enable interrupts, interrupt when above threshold
+- */
+- mtspr (SPRN_THRM2, THRM1_THRES(tau[cpu].high) | THRM1_V | THRM1_TIE);
+-#else
+- /* same thing but don't enable interrupts */
+- mtspr(SPRN_THRM1, THRM1_THRES(tau[cpu].low) | THRM1_V | THRM1_TID);
+- mtspr(SPRN_THRM2, THRM1_THRES(tau[cpu].high) | THRM1_V);
+-#endif
++ /* setup THRM1, threshold, valid bit, interrupt when below threshold */
++ mtspr(SPRN_THRM1, THRM1_THRES(tau[cpu].low) | THRM1_V | maybe_tie | THRM1_TID);
++
++ /* setup THRM2, threshold, valid bit, interrupt when above threshold */
++ mtspr(SPRN_THRM2, THRM1_THRES(tau[cpu].high) | THRM1_V | maybe_tie);
+ }
+
+ static void TAUupdate(int cpu)
+ {
+- unsigned thrm;
+-
+-#ifdef DEBUG
+- printk("TAUupdate ");
+-#endif
++ u32 thrm;
++ u32 bits = THRM1_TIV | THRM1_TIN | THRM1_V;
+
+ /* if both thresholds are crossed, the step_sizes cancel out
+ * and the window winds up getting expanded twice. */
+- if((thrm = mfspr(SPRN_THRM1)) & THRM1_TIV){ /* is valid? */
+- if(thrm & THRM1_TIN){ /* crossed low threshold */
+- if (tau[cpu].low >= step_size){
+- tau[cpu].low -= step_size;
+- tau[cpu].high -= (step_size - window_expand);
+- }
+- tau[cpu].grew = 1;
+-#ifdef DEBUG
+- printk("low threshold crossed ");
+-#endif
++ thrm = mfspr(SPRN_THRM1);
++ if ((thrm & bits) == bits) {
++ mtspr(SPRN_THRM1, 0);
++
++ if (tau[cpu].low >= step_size) {
++ tau[cpu].low -= step_size;
++ tau[cpu].high -= (step_size - window_expand);
+ }
++ tau[cpu].grew = 1;
++ pr_debug("%s: low threshold crossed\n", __func__);
+ }
+- if((thrm = mfspr(SPRN_THRM2)) & THRM1_TIV){ /* is valid? */
+- if(thrm & THRM1_TIN){ /* crossed high threshold */
+- if (tau[cpu].high <= 127-step_size){
+- tau[cpu].low += (step_size - window_expand);
+- tau[cpu].high += step_size;
+- }
+- tau[cpu].grew = 1;
+-#ifdef DEBUG
+- printk("high threshold crossed ");
+-#endif
++ thrm = mfspr(SPRN_THRM2);
++ if ((thrm & bits) == bits) {
++ mtspr(SPRN_THRM2, 0);
++
++ if (tau[cpu].high <= 127 - step_size) {
++ tau[cpu].low += (step_size - window_expand);
++ tau[cpu].high += step_size;
+ }
++ tau[cpu].grew = 1;
++ pr_debug("%s: high threshold crossed\n", __func__);
+ }
+-
+-#ifdef DEBUG
+- printk("grew = %d\n", tau[cpu].grew);
+-#endif
+-
+-#ifndef CONFIG_TAU_INT /* tau_timeout will do this if not using interrupts */
+- set_thresholds(cpu);
+-#endif
+-
+ }
+
+ #ifdef CONFIG_TAU_INT
+@@ -140,17 +116,16 @@ void TAUException(struct pt_regs * regs)
+ static void tau_timeout(void * info)
+ {
+ int cpu;
+- unsigned long flags;
+ int size;
+ int shrink;
+
+- /* disabling interrupts *should* be okay */
+- local_irq_save(flags);
+ cpu = smp_processor_id();
+
+-#ifndef CONFIG_TAU_INT
+- TAUupdate(cpu);
+-#endif
++ if (!tau_int_enable)
++ TAUupdate(cpu);
++
++ /* Stop thermal sensor comparisons and interrupts */
++ mtspr(SPRN_THRM3, 0);
+
+ size = tau[cpu].high - tau[cpu].low;
+ if (size > min_window && ! tau[cpu].grew) {
+@@ -173,32 +148,26 @@ static void tau_timeout(void * info)
+
+ set_thresholds(cpu);
+
+- /*
+- * Do the enable every time, since otherwise a bunch of (relatively)
+- * complex sleep code needs to be added. One mtspr every time
+- * tau_timeout is called is probably not a big deal.
+- *
+- * Enable thermal sensor and set up sample interval timer
+- * need 20 us to do the compare.. until a nice 'cpu_speed' function
+- * call is implemented, just assume a 500 mhz clock. It doesn't really
+- * matter if we take too long for a compare since it's all interrupt
+- * driven anyway.
+- *
+- * use a extra long time.. (60 us @ 500 mhz)
++ /* Restart thermal sensor comparisons and interrupts.
++ * The "PowerPC 740 and PowerPC 750 Microprocessor Datasheet"
++ * recommends that "the maximum value be set in THRM3 under all
++ * conditions."
+ */
+- mtspr(SPRN_THRM3, THRM3_SITV(500*60) | THRM3_E);
+-
+- local_irq_restore(flags);
++ mtspr(SPRN_THRM3, THRM3_SITV(0x1fff) | THRM3_E);
+ }
+
+-static void tau_timeout_smp(struct timer_list *unused)
+-{
++static struct workqueue_struct *tau_workq;
+
+- /* schedule ourselves to be run again */
+- mod_timer(&tau_timer, jiffies + shrink_timer) ;
++static void tau_work_func(struct work_struct *work)
++{
++ msleep(shrink_timer);
+ on_each_cpu(tau_timeout, NULL, 0);
++ /* schedule ourselves to be run again */
++ queue_work(tau_workq, work);
+ }
+
++DECLARE_WORK(tau_work, tau_work_func);
++
+ /*
+ * setup the TAU
+ *
+@@ -231,21 +200,19 @@ static int __init TAU_init(void)
+ return 1;
+ }
+
++ tau_int_enable = IS_ENABLED(CONFIG_TAU_INT) &&
++ !strcmp(cur_cpu_spec->platform, "ppc750");
+
+- /* first, set up the window shrinking timer */
+- timer_setup(&tau_timer, tau_timeout_smp, 0);
+- tau_timer.expires = jiffies + shrink_timer;
+- add_timer(&tau_timer);
++ tau_workq = alloc_workqueue("tau", WQ_UNBOUND, 1, 0);
++ if (!tau_workq)
++ return -ENOMEM;
+
+ on_each_cpu(TAU_init_smp, NULL, 0);
+
+- printk("Thermal assist unit ");
+-#ifdef CONFIG_TAU_INT
+- printk("using interrupts, ");
+-#else
+- printk("using timers, ");
+-#endif
+- printk("shrink_timer: %d jiffies\n", shrink_timer);
++ queue_work(tau_workq, &tau_work);
++
++ pr_info("Thermal assist unit using %s, shrink_timer: %d ms\n",
++ tau_int_enable ? "interrupts" : "workqueue", shrink_timer);
+ tau_initialized = 1;
+
+ return 0;
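The window bookkeeping the TAU driver keeps across these hunks (step_size, window_expand, min_window) is integer logic independent of the SPR accesses. A standalone sketch of that arithmetic, with the driver's constants copied and the shrink step simplified (the diff elides the full shrink body, so that part is an assumption):

```c
#include <assert.h>

/* Constants copied from the driver. */
#define STEP_SIZE     2   /* step when temp goes out of range */
#define WINDOW_EXPAND 1   /* expand the window by this much */
#define MIN_WINDOW    2   /* minimum window size, degrees C */

struct tau_window {
    int low;
    int high;
    int grew;
};

/* Low threshold crossed: slide the window down, expanding it slightly. */
static void window_cross_low(struct tau_window *w)
{
    if (w->low >= STEP_SIZE) {
        w->low  -= STEP_SIZE;
        w->high -= (STEP_SIZE - WINDOW_EXPAND);
    }
    w->grew = 1;
}

/* High threshold crossed: slide the window up, expanding it slightly. */
static void window_cross_high(struct tau_window *w)
{
    if (w->high <= 127 - STEP_SIZE) {
        w->low  += (STEP_SIZE - WINDOW_EXPAND);
        w->high += STEP_SIZE;
    }
    w->grew = 1;
}

/* Periodic timeout: shrink an idle window back toward MIN_WINDOW.
 * (Assumed shrink body; the hunk only shows the guard condition.) */
static void window_timeout(struct tau_window *w)
{
    int size = w->high - w->low;

    if (size > MIN_WINDOW && !w->grew) {
        w->low++;
        if (size - 1 > MIN_WINDOW)
            w->high--;
    }
    w->grew = 0;
}
```

Because both thresholds crossing cancels the step sizes, the window ends up expanded twice, which is exactly the behavior the original comment in TAUupdate() warns about.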
+diff --git a/arch/powerpc/mm/book3s64/radix_tlb.c b/arch/powerpc/mm/book3s64/radix_tlb.c
+index b5cc9b23cf024..277a07772e7d6 100644
+--- a/arch/powerpc/mm/book3s64/radix_tlb.c
++++ b/arch/powerpc/mm/book3s64/radix_tlb.c
+@@ -644,19 +644,29 @@ static void do_exit_flush_lazy_tlb(void *arg)
+ struct mm_struct *mm = arg;
+ unsigned long pid = mm->context.id;
+
++ /*
++ * A kthread could have done a mmget_not_zero() after the flushing CPU
++ * checked mm_is_singlethreaded, and be in the process of
++ * kthread_use_mm when interrupted here. In that case, current->mm will
++ * be set to mm, because kthread_use_mm() setting ->mm and switching to
++ * the mm is done with interrupts off.
++ */
+ if (current->mm == mm)
+- return; /* Local CPU */
++ goto out_flush;
+
+ if (current->active_mm == mm) {
+- /*
+- * Must be a kernel thread because sender is single-threaded.
+- */
+- BUG_ON(current->mm);
++ WARN_ON_ONCE(current->mm != NULL);
++ /* Is a kernel thread and is using mm as the lazy tlb */
+ mmgrab(&init_mm);
+- switch_mm(mm, &init_mm, current);
+ current->active_mm = &init_mm;
++ switch_mm_irqs_off(mm, &init_mm, current);
+ mmdrop(mm);
+ }
++
++ atomic_dec(&mm->context.active_cpus);
++ cpumask_clear_cpu(smp_processor_id(), mm_cpumask(mm));
++
++out_flush:
+ _tlbiel_pid(pid, RIC_FLUSH_ALL);
+ }
+
+@@ -671,7 +681,6 @@ static void exit_flush_lazy_tlbs(struct mm_struct *mm)
+ */
+ smp_call_function_many(mm_cpumask(mm), do_exit_flush_lazy_tlb,
+ (void *)mm, 1);
+- mm_reset_thread_local(mm);
+ }
+
+ void radix__flush_tlb_mm(struct mm_struct *mm)
+diff --git a/arch/powerpc/mm/drmem.c b/arch/powerpc/mm/drmem.c
+index 59327cefbc6a6..873fcfc7b8756 100644
+--- a/arch/powerpc/mm/drmem.c
++++ b/arch/powerpc/mm/drmem.c
+@@ -362,10 +362,8 @@ static void __init init_drmem_v1_lmbs(const __be32 *prop)
+ if (!drmem_info->lmbs)
+ return;
+
+- for_each_drmem_lmb(lmb) {
++ for_each_drmem_lmb(lmb)
+ read_drconf_v1_cell(lmb, &prop);
+- lmb_set_nid(lmb);
+- }
+ }
+
+ static void __init init_drmem_v2_lmbs(const __be32 *prop)
+@@ -410,8 +408,6 @@ static void __init init_drmem_v2_lmbs(const __be32 *prop)
+
+ lmb->aa_index = dr_cell.aa_index;
+ lmb->flags = dr_cell.flags;
+-
+- lmb_set_nid(lmb);
+ }
+ }
+ }
+diff --git a/arch/powerpc/mm/kasan/kasan_init_32.c b/arch/powerpc/mm/kasan/kasan_init_32.c
+index 019b0c0bbbf31..ca91d04d0a7ae 100644
+--- a/arch/powerpc/mm/kasan/kasan_init_32.c
++++ b/arch/powerpc/mm/kasan/kasan_init_32.c
+@@ -121,8 +121,7 @@ void __init kasan_mmu_init(void)
+ {
+ int ret;
+
+- if (early_mmu_has_feature(MMU_FTR_HPTE_TABLE) ||
+- IS_ENABLED(CONFIG_KASAN_VMALLOC)) {
++ if (early_mmu_has_feature(MMU_FTR_HPTE_TABLE)) {
+ ret = kasan_init_shadow_page_tables(KASAN_SHADOW_START, KASAN_SHADOW_END);
+
+ if (ret)
+@@ -133,11 +132,11 @@ void __init kasan_mmu_init(void)
+ void __init kasan_init(void)
+ {
+ struct memblock_region *reg;
++ int ret;
+
+ for_each_memblock(memory, reg) {
+ phys_addr_t base = reg->base;
+ phys_addr_t top = min(base + reg->size, total_lowmem);
+- int ret;
+
+ if (base >= top)
+ continue;
+@@ -147,6 +146,13 @@ void __init kasan_init(void)
+ panic("kasan: kasan_init_region() failed");
+ }
+
++ if (IS_ENABLED(CONFIG_KASAN_VMALLOC)) {
++ ret = kasan_init_shadow_page_tables(KASAN_SHADOW_START, KASAN_SHADOW_END);
++
++ if (ret)
++ panic("kasan: kasan_init_shadow_page_tables() failed");
++ }
++
+ kasan_remap_early_shadow_ro();
+
+ clear_page(kasan_early_shadow_page);
+diff --git a/arch/powerpc/mm/mem.c b/arch/powerpc/mm/mem.c
+index c2c11eb8dcfca..0f21bcb16405a 100644
+--- a/arch/powerpc/mm/mem.c
++++ b/arch/powerpc/mm/mem.c
+@@ -50,6 +50,7 @@
+ #include <asm/swiotlb.h>
+ #include <asm/rtas.h>
+ #include <asm/kasan.h>
++#include <asm/svm.h>
+
+ #include <mm/mmu_decl.h>
+
+@@ -290,7 +291,10 @@ void __init mem_init(void)
+ * back to to-down.
+ */
+ memblock_set_bottom_up(true);
+- swiotlb_init(0);
++ if (is_secure_guest())
++ svm_swiotlb_init();
++ else
++ swiotlb_init(0);
+ #endif
+
+ high_memory = (void *) __va(max_low_pfn * PAGE_SIZE);
+diff --git a/arch/powerpc/perf/hv-gpci-requests.h b/arch/powerpc/perf/hv-gpci-requests.h
+index e608f9db12ddc..8965b4463d433 100644
+--- a/arch/powerpc/perf/hv-gpci-requests.h
++++ b/arch/powerpc/perf/hv-gpci-requests.h
+@@ -95,7 +95,7 @@ REQUEST(__field(0, 8, partition_id)
+
+ #define REQUEST_NAME system_performance_capabilities
+ #define REQUEST_NUM 0x40
+-#define REQUEST_IDX_KIND "starting_index=0xffffffffffffffff"
++#define REQUEST_IDX_KIND "starting_index=0xffffffff"
+ #include I(REQUEST_BEGIN)
+ REQUEST(__field(0, 1, perf_collect_privileged)
+ __field(0x1, 1, capability_mask)
+@@ -223,7 +223,7 @@ REQUEST(__field(0, 2, partition_id)
+
+ #define REQUEST_NAME system_hypervisor_times
+ #define REQUEST_NUM 0xF0
+-#define REQUEST_IDX_KIND "starting_index=0xffffffffffffffff"
++#define REQUEST_IDX_KIND "starting_index=0xffffffff"
+ #include I(REQUEST_BEGIN)
+ REQUEST(__count(0, 8, time_spent_to_dispatch_virtual_processors)
+ __count(0x8, 8, time_spent_processing_virtual_processor_timers)
+@@ -234,7 +234,7 @@ REQUEST(__count(0, 8, time_spent_to_dispatch_virtual_processors)
+
+ #define REQUEST_NAME system_tlbie_count_and_time
+ #define REQUEST_NUM 0xF4
+-#define REQUEST_IDX_KIND "starting_index=0xffffffffffffffff"
++#define REQUEST_IDX_KIND "starting_index=0xffffffff"
+ #include I(REQUEST_BEGIN)
+ REQUEST(__count(0, 8, tlbie_instructions_issued)
+ /*
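The three REQUEST_IDX_KIND changes above all replace a 64-bit all-ones constant with the 32-bit value the starting_index field can actually hold. In C, the difference becomes visible when the wide constant is assigned to a 32-bit field; a minimal illustration (function name hypothetical):

```c
#include <assert.h>
#include <stdint.h>

/*
 * The GPCI starting_index field is 32 bits wide; storing the 64-bit
 * all-ones constant through it silently truncates the top 32 bits,
 * so the sysfs format string should advertise 0xffffffff.
 */
static uint32_t starting_index_from(uint64_t requested)
{
    return (uint32_t)requested;   /* keeps only the low 32 bits */
}
```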
+diff --git a/arch/powerpc/perf/isa207-common.c b/arch/powerpc/perf/isa207-common.c
+index 4c86da5eb28ab..0b5c8f4fbdbfd 100644
+--- a/arch/powerpc/perf/isa207-common.c
++++ b/arch/powerpc/perf/isa207-common.c
+@@ -269,6 +269,15 @@ int isa207_get_constraint(u64 event, unsigned long *maskp, unsigned long *valp)
+
+ mask |= CNST_PMC_MASK(pmc);
+ value |= CNST_PMC_VAL(pmc);
++
++ /*
++ * PMC5 and PMC6 are used to count cycles and instructions and
++ * they do not support most of the constraint bits. Add a check
++ * to exclude PMC5/6 from most of the constraints except for
++ * EBB/BHRB.
++ */
++ if (pmc >= 5)
++ goto ebb_bhrb;
+ }
+
+ if (pmc <= 4) {
+@@ -335,6 +344,7 @@ int isa207_get_constraint(u64 event, unsigned long *maskp, unsigned long *valp)
+ }
+ }
+
++ebb_bhrb:
+ if (!pmc && ebb)
+ /* EBB events must specify the PMC */
+ return -1;
+diff --git a/arch/powerpc/platforms/Kconfig b/arch/powerpc/platforms/Kconfig
+index fb7515b4fa9c6..b439b027a42f1 100644
+--- a/arch/powerpc/platforms/Kconfig
++++ b/arch/powerpc/platforms/Kconfig
+@@ -223,12 +223,11 @@ config TAU
+ temperature within 2-4 degrees Celsius. This option shows the current
+ on-die temperature in /proc/cpuinfo if the cpu supports it.
+
+- Unfortunately, on some chip revisions, this sensor is very inaccurate
+- and in many cases, does not work at all, so don't assume the cpu
+- temp is actually what /proc/cpuinfo says it is.
++ Unfortunately, this sensor is very inaccurate when uncalibrated, so
++ don't assume the cpu temp is actually what /proc/cpuinfo says it is.
+
+ config TAU_INT
+- bool "Interrupt driven TAU driver (DANGEROUS)"
++ bool "Interrupt driven TAU driver (EXPERIMENTAL)"
+ depends on TAU
+ help
+ The TAU supports an interrupt driven mode which causes an interrupt
+@@ -236,12 +235,7 @@ config TAU_INT
+ to get notified the temp has exceeded a range. With this option off,
+ a timer is used to re-check the temperature periodically.
+
+- However, on some cpus it appears that the TAU interrupt hardware
+- is buggy and can cause a situation which would lead unexplained hard
+- lockups.
+-
+- Unless you are extending the TAU driver, or enjoy kernel/hardware
+- debugging, leave this option off.
++ If in doubt, say N here.
+
+ config TAU_AVERAGE
+ bool "Average high and low temp"
+diff --git a/arch/powerpc/platforms/powernv/opal-dump.c b/arch/powerpc/platforms/powernv/opal-dump.c
+index 543c816fa99ef..0e6693bacb7e7 100644
+--- a/arch/powerpc/platforms/powernv/opal-dump.c
++++ b/arch/powerpc/platforms/powernv/opal-dump.c
+@@ -318,15 +318,14 @@ static ssize_t dump_attr_read(struct file *filep, struct kobject *kobj,
+ return count;
+ }
+
+-static struct dump_obj *create_dump_obj(uint32_t id, size_t size,
+- uint32_t type)
++static void create_dump_obj(uint32_t id, size_t size, uint32_t type)
+ {
+ struct dump_obj *dump;
+ int rc;
+
+ dump = kzalloc(sizeof(*dump), GFP_KERNEL);
+ if (!dump)
+- return NULL;
++ return;
+
+ dump->kobj.kset = dump_kset;
+
+@@ -346,21 +345,39 @@ static struct dump_obj *create_dump_obj(uint32_t id, size_t size,
+ rc = kobject_add(&dump->kobj, NULL, "0x%x-0x%x", type, id);
+ if (rc) {
+ kobject_put(&dump->kobj);
+- return NULL;
++ return;
+ }
+
++ /*
++ * As soon as the sysfs file for this dump is created/activated there is
++ * a chance the opal_errd daemon (or any userspace) might read and
++ * acknowledge the dump before kobject_uevent() is called. If that
++ * happens then there is a potential race between
++ * dump_ack_store->kobject_put() and kobject_uevent() which leads to a
++ * use-after-free of a kernfs object resulting in a kernel crash.
++ *
++ * To avoid that, we need to take a reference on behalf of the bin file,
++ * so that our reference remains valid while we call kobject_uevent().
++ * We then drop our reference before exiting the function, leaving the
++ * bin file to drop the last reference (if it hasn't already).
++ */
++
++ /* Take a reference for the bin file */
++ kobject_get(&dump->kobj);
+ rc = sysfs_create_bin_file(&dump->kobj, &dump->dump_attr);
+- if (rc) {
++ if (rc == 0) {
++ kobject_uevent(&dump->kobj, KOBJ_ADD);
++
++ pr_info("%s: New platform dump. ID = 0x%x Size %u\n",
++ __func__, dump->id, dump->size);
++ } else {
++ /* Drop reference count taken for bin file */
+ kobject_put(&dump->kobj);
+- return NULL;
+ }
+
+- pr_info("%s: New platform dump. ID = 0x%x Size %u\n",
+- __func__, dump->id, dump->size);
+-
+- kobject_uevent(&dump->kobj, KOBJ_ADD);
+-
+- return dump;
++ /* Drop our reference */
++ kobject_put(&dump->kobj);
++ return;
+ }
+
+ static irqreturn_t process_dump(int irq, void *data)
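The create_dump_obj() fix follows a general publish-then-notify pattern: take an extra reference before the object becomes reachable by userspace, run the notification, then drop the pin. A userspace model of that pattern with plain C11 atomics (all names here are hypothetical; 'freed' stands in for the kfree() a real kobject release would do):

```c
#include <assert.h>
#include <stdatomic.h>

struct obj {
    atomic_int refs;
    int freed;
};

static void obj_get(struct obj *o) { atomic_fetch_add(&o->refs, 1); }

static void obj_put(struct obj *o)
{
    if (atomic_fetch_sub(&o->refs, 1) == 1)
        o->freed = 1;
}

/*
 * Publish-then-notify with a pin, as in the fixed create_dump_obj():
 * 'racer_acks' simulates userspace reading and acknowledging the dump
 * before the notification runs. Returns nonzero if the object was
 * still alive while the notification executed.
 */
static int publish_and_notify(struct obj *o, int racer_acks)
{
    obj_get(o);                 /* pin on behalf of the notifier */
    if (racer_acks)
        obj_put(o);             /* userspace dropped the published ref */
    int alive = !o->freed;      /* kobject_uevent() would run here */
    obj_put(o);                 /* drop the pin; last ref frees */
    return alive;
}

/* The pre-fix ordering: notify without holding a pin. */
static int notify_without_pin(struct obj *o, int racer_acks)
{
    if (racer_acks)
        obj_put(o);
    return !o->freed;           /* may already be freed: the race */
}
```

In the fixed ordering the notifier's pin guarantees the object outlives kobject_uevent(); whichever of userspace or the notifier drops the last reference, the release runs exactly once afterwards.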
+diff --git a/arch/powerpc/platforms/pseries/hotplug-memory.c b/arch/powerpc/platforms/pseries/hotplug-memory.c
+index 8b748690dac22..9f236149b4027 100644
+--- a/arch/powerpc/platforms/pseries/hotplug-memory.c
++++ b/arch/powerpc/platforms/pseries/hotplug-memory.c
+@@ -356,25 +356,32 @@ static int dlpar_add_lmb(struct drmem_lmb *);
+
+ static int dlpar_remove_lmb(struct drmem_lmb *lmb)
+ {
++ struct memory_block *mem_block;
+ unsigned long block_sz;
+ int rc;
+
+ if (!lmb_is_removable(lmb))
+ return -EINVAL;
+
++ mem_block = lmb_to_memblock(lmb);
++ if (mem_block == NULL)
++ return -EINVAL;
++
+ rc = dlpar_offline_lmb(lmb);
+- if (rc)
++ if (rc) {
++ put_device(&mem_block->dev);
+ return rc;
++ }
+
+ block_sz = pseries_memory_block_size();
+
+- __remove_memory(lmb->nid, lmb->base_addr, block_sz);
++ __remove_memory(mem_block->nid, lmb->base_addr, block_sz);
++ put_device(&mem_block->dev);
+
+ /* Update memory regions for memory remove */
+ memblock_remove(lmb->base_addr, block_sz);
+
+ invalidate_lmb_associativity_index(lmb);
+- lmb_clear_nid(lmb);
+ lmb->flags &= ~DRCONF_MEM_ASSIGNED;
+
+ return 0;
+@@ -631,7 +638,7 @@ static int dlpar_memory_remove_by_ic(u32 lmbs_to_remove, u32 drc_index)
+ static int dlpar_add_lmb(struct drmem_lmb *lmb)
+ {
+ unsigned long block_sz;
+- int rc;
++ int nid, rc;
+
+ if (lmb->flags & DRCONF_MEM_ASSIGNED)
+ return -EINVAL;
+@@ -642,11 +649,13 @@ static int dlpar_add_lmb(struct drmem_lmb *lmb)
+ return rc;
+ }
+
+- lmb_set_nid(lmb);
+ block_sz = memory_block_size_bytes();
+
++ /* Find the node id for this address. */
++ nid = memory_add_physaddr_to_nid(lmb->base_addr);
++
+ /* Add the memory */
+- rc = __add_memory(lmb->nid, lmb->base_addr, block_sz);
++ rc = __add_memory(nid, lmb->base_addr, block_sz);
+ if (rc) {
+ invalidate_lmb_associativity_index(lmb);
+ return rc;
+@@ -654,9 +663,8 @@ static int dlpar_add_lmb(struct drmem_lmb *lmb)
+
+ rc = dlpar_online_lmb(lmb);
+ if (rc) {
+- __remove_memory(lmb->nid, lmb->base_addr, block_sz);
++ __remove_memory(nid, lmb->base_addr, block_sz);
+ invalidate_lmb_associativity_index(lmb);
+- lmb_clear_nid(lmb);
+ } else {
+ lmb->flags |= DRCONF_MEM_ASSIGNED;
+ }
+diff --git a/arch/powerpc/platforms/pseries/papr_scm.c b/arch/powerpc/platforms/pseries/papr_scm.c
+index 9c569078a09fd..6c2c66450dac8 100644
+--- a/arch/powerpc/platforms/pseries/papr_scm.c
++++ b/arch/powerpc/platforms/pseries/papr_scm.c
+@@ -702,6 +702,9 @@ static int papr_scm_nvdimm_init(struct papr_scm_priv *p)
+ p->bus_desc.of_node = p->pdev->dev.of_node;
+ p->bus_desc.provider_name = kstrdup(p->pdev->name, GFP_KERNEL);
+
++ /* Set the dimm command family mask to accept PDSMs */
++ set_bit(NVDIMM_FAMILY_PAPR, &p->bus_desc.dimm_family_mask);
++
+ if (!p->bus_desc.provider_name)
+ return -ENOMEM;
+
+diff --git a/arch/powerpc/platforms/pseries/ras.c b/arch/powerpc/platforms/pseries/ras.c
+index 13c86a292c6d7..b2b245b25edba 100644
+--- a/arch/powerpc/platforms/pseries/ras.c
++++ b/arch/powerpc/platforms/pseries/ras.c
+@@ -521,18 +521,55 @@ int pSeries_system_reset_exception(struct pt_regs *regs)
+ return 0; /* need to perform reset */
+ }
+
++static int mce_handle_err_realmode(int disposition, u8 error_type)
++{
++#ifdef CONFIG_PPC_BOOK3S_64
++ if (disposition == RTAS_DISP_NOT_RECOVERED) {
++ switch (error_type) {
++ case MC_ERROR_TYPE_SLB:
++ case MC_ERROR_TYPE_ERAT:
++ /*
++ * Store the old slb content in paca before flushing.
++ * Print this when we go to virtual mode.
++ * There is a chance that we may hit an MCE again if there
++ * is a parity error on the SLB entry we are trying to read
++ * for saving. Hence limit the SLB saving to a single
++ * level of recursion.
++ */
++ if (local_paca->in_mce == 1)
++ slb_save_contents(local_paca->mce_faulty_slbs);
++ flush_and_reload_slb();
++ disposition = RTAS_DISP_FULLY_RECOVERED;
++ break;
++ default:
++ break;
++ }
++ } else if (disposition == RTAS_DISP_LIMITED_RECOVERY) {
++ /* Platform corrected itself but could be degraded */
++ pr_err("MCE: limited recovery, system may be degraded\n");
++ disposition = RTAS_DISP_FULLY_RECOVERED;
++ }
++#endif
++ return disposition;
++}
+
+-static int mce_handle_error(struct pt_regs *regs, struct rtas_error_log *errp)
++static int mce_handle_err_virtmode(struct pt_regs *regs,
++ struct rtas_error_log *errp,
++ struct pseries_mc_errorlog *mce_log,
++ int disposition)
+ {
+ struct mce_error_info mce_err = { 0 };
+- unsigned long eaddr = 0, paddr = 0;
+- struct pseries_errorlog *pseries_log;
+- struct pseries_mc_errorlog *mce_log;
+- int disposition = rtas_error_disposition(errp);
+ int initiator = rtas_error_initiator(errp);
+ int severity = rtas_error_severity(errp);
++ unsigned long eaddr = 0, paddr = 0;
+ u8 error_type, err_sub_type;
+
++ if (!mce_log)
++ goto out;
++
++ error_type = mce_log->error_type;
++ err_sub_type = rtas_mc_error_sub_type(mce_log);
++
+ if (initiator == RTAS_INITIATOR_UNKNOWN)
+ mce_err.initiator = MCE_INITIATOR_UNKNOWN;
+ else if (initiator == RTAS_INITIATOR_CPU)
+@@ -571,18 +608,7 @@ static int mce_handle_error(struct pt_regs *regs, struct rtas_error_log *errp)
+ mce_err.error_type = MCE_ERROR_TYPE_UNKNOWN;
+ mce_err.error_class = MCE_ECLASS_UNKNOWN;
+
+- if (!rtas_error_extended(errp))
+- goto out;
+-
+- pseries_log = get_pseries_errorlog(errp, PSERIES_ELOG_SECT_ID_MCE);
+- if (pseries_log == NULL)
+- goto out;
+-
+- mce_log = (struct pseries_mc_errorlog *)pseries_log->data;
+- error_type = mce_log->error_type;
+- err_sub_type = rtas_mc_error_sub_type(mce_log);
+-
+- switch (mce_log->error_type) {
++ switch (error_type) {
+ case MC_ERROR_TYPE_UE:
+ mce_err.error_type = MCE_ERROR_TYPE_UE;
+ mce_common_process_ue(regs, &mce_err);
+@@ -682,37 +708,31 @@ static int mce_handle_error(struct pt_regs *regs, struct rtas_error_log *errp)
+ mce_err.error_type = MCE_ERROR_TYPE_UNKNOWN;
+ break;
+ }
++out:
++ save_mce_event(regs, disposition == RTAS_DISP_FULLY_RECOVERED,
++ &mce_err, regs->nip, eaddr, paddr);
++ return disposition;
++}
+
+-#ifdef CONFIG_PPC_BOOK3S_64
+- if (disposition == RTAS_DISP_NOT_RECOVERED) {
+- switch (error_type) {
+- case MC_ERROR_TYPE_SLB:
+- case MC_ERROR_TYPE_ERAT:
+- /*
+- * Store the old slb content in paca before flushing.
+- * Print this when we go to virtual mode.
+- * There are chances that we may hit MCE again if there
+- * is a parity error on the SLB entry we trying to read
+- * for saving. Hence limit the slb saving to single
+- * level of recursion.
+- */
+- if (local_paca->in_mce == 1)
+- slb_save_contents(local_paca->mce_faulty_slbs);
+- flush_and_reload_slb();
+- disposition = RTAS_DISP_FULLY_RECOVERED;
+- break;
+- default:
+- break;
+- }
+- } else if (disposition == RTAS_DISP_LIMITED_RECOVERY) {
+- /* Platform corrected itself but could be degraded */
+- printk(KERN_ERR "MCE: limited recovery, system may "
+- "be degraded\n");
+- disposition = RTAS_DISP_FULLY_RECOVERED;
+- }
+-#endif
++static int mce_handle_error(struct pt_regs *regs, struct rtas_error_log *errp)
++{
++ struct pseries_errorlog *pseries_log;
++ struct pseries_mc_errorlog *mce_log = NULL;
++ int disposition = rtas_error_disposition(errp);
++ u8 error_type;
++
++ if (!rtas_error_extended(errp))
++ goto out;
++
++ pseries_log = get_pseries_errorlog(errp, PSERIES_ELOG_SECT_ID_MCE);
++ if (!pseries_log)
++ goto out;
++
++ mce_log = (struct pseries_mc_errorlog *)pseries_log->data;
++ error_type = mce_log->error_type;
++
++ disposition = mce_handle_err_realmode(disposition, error_type);
+
+-out:
+ /*
+ * Enable translation as we will be accessing per-cpu variables
+ * in save_mce_event() which may fall outside RMO region, also
+@@ -723,10 +743,10 @@ out:
+ * Note: All the realmode handling like flushing SLB entries for
+ * SLB multihit is done by now.
+ */
++out:
+ mtmsr(mfmsr() | MSR_IR | MSR_DR);
+- save_mce_event(regs, disposition == RTAS_DISP_FULLY_RECOVERED,
+- &mce_err, regs->nip, eaddr, paddr);
+-
++ disposition = mce_handle_err_virtmode(regs, errp, mce_log,
++ disposition);
+ return disposition;
+ }
+
+diff --git a/arch/powerpc/platforms/pseries/rng.c b/arch/powerpc/platforms/pseries/rng.c
+index bbb97169bf63e..6268545947b83 100644
+--- a/arch/powerpc/platforms/pseries/rng.c
++++ b/arch/powerpc/platforms/pseries/rng.c
+@@ -36,6 +36,7 @@ static __init int rng_init(void)
+
+ ppc_md.get_random_seed = pseries_get_random_long;
+
++ of_node_put(dn);
+ return 0;
+ }
+ machine_subsys_initcall(pseries, rng_init);
+diff --git a/arch/powerpc/platforms/pseries/svm.c b/arch/powerpc/platforms/pseries/svm.c
+index 40c0637203d5b..81085eb8f2255 100644
+--- a/arch/powerpc/platforms/pseries/svm.c
++++ b/arch/powerpc/platforms/pseries/svm.c
+@@ -7,6 +7,7 @@
+ */
+
+ #include <linux/mm.h>
++#include <linux/memblock.h>
+ #include <asm/machdep.h>
+ #include <asm/svm.h>
+ #include <asm/swiotlb.h>
+@@ -34,6 +35,31 @@ static int __init init_svm(void)
+ }
+ machine_early_initcall(pseries, init_svm);
+
++/*
++ * Initialize SWIOTLB. Essentially the same as swiotlb_init(), except that it
++ * can allocate the buffer anywhere in memory. Since the hypervisor doesn't have
++ * any addressing limitation, we don't need to allocate it in low addresses.
++ */
++void __init svm_swiotlb_init(void)
++{
++ unsigned char *vstart;
++ unsigned long bytes, io_tlb_nslabs;
++
++ io_tlb_nslabs = (swiotlb_size_or_default() >> IO_TLB_SHIFT);
++ io_tlb_nslabs = ALIGN(io_tlb_nslabs, IO_TLB_SEGSIZE);
++
++ bytes = io_tlb_nslabs << IO_TLB_SHIFT;
++
++ vstart = memblock_alloc(PAGE_ALIGN(bytes), PAGE_SIZE);
++ if (vstart && !swiotlb_init_with_tbl(vstart, io_tlb_nslabs, false))
++ return;
++
++ if (io_tlb_start)
++ memblock_free_early(io_tlb_start,
++ PAGE_ALIGN(io_tlb_nslabs << IO_TLB_SHIFT));
++ panic("SVM: Cannot allocate SWIOTLB buffer");
++}
++
+ int set_memory_encrypted(unsigned long addr, int numpages)
+ {
+ if (!PAGE_ALIGNED(addr))
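The slab-count arithmetic in svm_swiotlb_init() is self-contained: convert the requested buffer size to slabs, round up to the segment size, then back to bytes. A sketch with assumed values for the swiotlb constants (2 KiB slabs, 128-slab segments, matching the usual kernel defaults):

```c
#include <assert.h>

/* Assumed values matching common swiotlb defaults. */
#define IO_TLB_SHIFT    11                    /* 2 KiB slabs */
#define IO_TLB_SEGSIZE  128
#define ALIGN_UP(x, a)  (((x) + (a) - 1) & ~((unsigned long)(a) - 1))

/* Round a requested bounce-buffer size to whole SWIOTLB segments. */
static unsigned long swiotlb_bytes(unsigned long requested_bytes)
{
    unsigned long nslabs = requested_bytes >> IO_TLB_SHIFT;

    nslabs = ALIGN_UP(nslabs, IO_TLB_SEGSIZE);
    return nslabs << IO_TLB_SHIFT;
}
```

The default 64 MiB request is already a whole number of segments, so it round-trips unchanged; odd sizes get rounded up to the next 256 KiB segment boundary.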
+diff --git a/arch/powerpc/sysdev/xics/icp-hv.c b/arch/powerpc/sysdev/xics/icp-hv.c
+index ad8117148ea3b..21b9d1bf39ff6 100644
+--- a/arch/powerpc/sysdev/xics/icp-hv.c
++++ b/arch/powerpc/sysdev/xics/icp-hv.c
+@@ -174,6 +174,7 @@ int icp_hv_init(void)
+
+ icp_ops = &icp_hv_ops;
+
++ of_node_put(np);
+ return 0;
+ }
+
+diff --git a/arch/powerpc/xmon/xmon.c b/arch/powerpc/xmon/xmon.c
+index 7efe4bc3ccf63..ac5862cee142a 100644
+--- a/arch/powerpc/xmon/xmon.c
++++ b/arch/powerpc/xmon/xmon.c
+@@ -962,6 +962,7 @@ static void insert_cpu_bpts(void)
+ brk.address = dabr[i].address;
+ brk.type = (dabr[i].enabled & HW_BRK_TYPE_DABR) | HW_BRK_TYPE_PRIV_ALL;
+ brk.len = 8;
++ brk.hw_len = 8;
+ __set_breakpoint(i, &brk);
+ }
+ }
+diff --git a/arch/s390/pci/pci_bus.c b/arch/s390/pci/pci_bus.c
+index 5967f30141563..c93486a9989bc 100644
+--- a/arch/s390/pci/pci_bus.c
++++ b/arch/s390/pci/pci_bus.c
+@@ -197,9 +197,10 @@ void pcibios_bus_add_device(struct pci_dev *pdev)
+ * With pdev->no_vf_scan the common PCI probing code does not
+ * perform PF/VF linking.
+ */
+- if (zdev->vfn)
++ if (zdev->vfn) {
+ zpci_bus_setup_virtfn(zdev->zbus, pdev, zdev->vfn);
+-
++ pdev->no_command_memory = 1;
++ }
+ }
+
+ static int zpci_bus_add_device(struct zpci_bus *zbus, struct zpci_dev *zdev)
+diff --git a/arch/um/drivers/vector_kern.c b/arch/um/drivers/vector_kern.c
+index 8735c468230a5..555203e3e7b45 100644
+--- a/arch/um/drivers/vector_kern.c
++++ b/arch/um/drivers/vector_kern.c
+@@ -1403,7 +1403,7 @@ static int vector_net_load_bpf_flash(struct net_device *dev,
+ kfree(vp->bpf->filter);
+ vp->bpf->filter = NULL;
+ } else {
+- vp->bpf = kmalloc(sizeof(struct sock_fprog), GFP_KERNEL);
++ vp->bpf = kmalloc(sizeof(struct sock_fprog), GFP_ATOMIC);
+ if (vp->bpf == NULL) {
+ netdev_err(dev, "failed to allocate memory for firmware\n");
+ goto flash_fail;
+@@ -1415,7 +1415,7 @@ static int vector_net_load_bpf_flash(struct net_device *dev,
+ if (request_firmware(&fw, efl->data, &vdevice->pdev.dev))
+ goto flash_fail;
+
+- vp->bpf->filter = kmemdup(fw->data, fw->size, GFP_KERNEL);
++ vp->bpf->filter = kmemdup(fw->data, fw->size, GFP_ATOMIC);
+ if (!vp->bpf->filter)
+ goto free_buffer;
+
+diff --git a/arch/um/kernel/time.c b/arch/um/kernel/time.c
+index 25eaa6a0c6583..c07436e89e599 100644
+--- a/arch/um/kernel/time.c
++++ b/arch/um/kernel/time.c
+@@ -70,13 +70,17 @@ static void time_travel_handle_message(struct um_timetravel_msg *msg,
+ * read of the message and write of the ACK.
+ */
+ if (mode != TTMH_READ) {
++ bool disabled = irqs_disabled();
++
++ BUG_ON(mode == TTMH_IDLE && !disabled);
++
++ if (disabled)
++ local_irq_enable();
+ while (os_poll(1, &time_travel_ext_fd) != 0) {
+- if (mode == TTMH_IDLE) {
+- BUG_ON(!irqs_disabled());
+- local_irq_enable();
+- local_irq_disable();
+- }
++ /* nothing */
+ }
++ if (disabled)
++ local_irq_disable();
+ }
+
+ ret = os_read_file(time_travel_ext_fd, msg, sizeof(*msg));
+diff --git a/arch/x86/boot/compressed/pgtable_64.c b/arch/x86/boot/compressed/pgtable_64.c
+index c8862696a47b9..7d0394f4ebf97 100644
+--- a/arch/x86/boot/compressed/pgtable_64.c
++++ b/arch/x86/boot/compressed/pgtable_64.c
+@@ -5,15 +5,6 @@
+ #include "pgtable.h"
+ #include "../string.h"
+
+-/*
+- * __force_order is used by special_insns.h asm code to force instruction
+- * serialization.
+- *
+- * It is not referenced from the code, but GCC < 5 with -fPIE would fail
+- * due to an undefined symbol. Define it to make these ancient GCCs work.
+- */
+-unsigned long __force_order;
+-
+ #define BIOS_START_MIN 0x20000U /* 128K, less than this is insane */
+ #define BIOS_START_MAX 0x9f000U /* 640K, absolute maximum */
+
+diff --git a/arch/x86/events/amd/iommu.c b/arch/x86/events/amd/iommu.c
+index fb616203ce427..be50ef8572cce 100644
+--- a/arch/x86/events/amd/iommu.c
++++ b/arch/x86/events/amd/iommu.c
+@@ -379,7 +379,7 @@ static __init int _init_events_attrs(void)
+ while (amd_iommu_v2_event_descs[i].attr.attr.name)
+ i++;
+
+- attrs = kcalloc(i + 1, sizeof(struct attribute **), GFP_KERNEL);
++ attrs = kcalloc(i + 1, sizeof(*attrs), GFP_KERNEL);
+ if (!attrs)
+ return -ENOMEM;
+
+diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
+index 4103665c6e032..29640b4079af0 100644
+--- a/arch/x86/events/core.c
++++ b/arch/x86/events/core.c
+@@ -1087,8 +1087,10 @@ static int collect_events(struct cpu_hw_events *cpuc, struct perf_event *leader,
+
+ cpuc->event_list[n] = event;
+ n++;
+- if (is_counter_pair(&event->hw))
++ if (is_counter_pair(&event->hw)) {
+ cpuc->n_pair++;
++ cpuc->n_txn_pair++;
++ }
+ }
+ return n;
+ }
+@@ -1953,6 +1955,7 @@ static void x86_pmu_start_txn(struct pmu *pmu, unsigned int txn_flags)
+
+ perf_pmu_disable(pmu);
+ __this_cpu_write(cpu_hw_events.n_txn, 0);
++ __this_cpu_write(cpu_hw_events.n_txn_pair, 0);
+ }
+
+ /*
+@@ -1978,6 +1981,7 @@ static void x86_pmu_cancel_txn(struct pmu *pmu)
+ */
+ __this_cpu_sub(cpu_hw_events.n_added, __this_cpu_read(cpu_hw_events.n_txn));
+ __this_cpu_sub(cpu_hw_events.n_events, __this_cpu_read(cpu_hw_events.n_txn));
++ __this_cpu_sub(cpu_hw_events.n_pair, __this_cpu_read(cpu_hw_events.n_txn_pair));
+ perf_pmu_enable(pmu);
+ }
+
+diff --git a/arch/x86/events/intel/ds.c b/arch/x86/events/intel/ds.c
+index dc43cc124e096..221d1766d6e6c 100644
+--- a/arch/x86/events/intel/ds.c
++++ b/arch/x86/events/intel/ds.c
+@@ -670,9 +670,7 @@ unlock:
+
+ static inline void intel_pmu_drain_pebs_buffer(void)
+ {
+- struct pt_regs regs;
+-
+-	x86_pmu.drain_pebs(&regs);
++ x86_pmu.drain_pebs(NULL);
+ }
+
+ /*
+@@ -1737,6 +1735,7 @@ static void __intel_pmu_pebs_event(struct perf_event *event,
+ struct x86_perf_regs perf_regs;
+ struct pt_regs *regs = &perf_regs.regs;
+ void *at = get_next_pebs_record_by_bit(base, top, bit);
++ struct pt_regs dummy_iregs;
+
+ if (hwc->flags & PERF_X86_EVENT_AUTO_RELOAD) {
+ /*
+@@ -1749,6 +1748,9 @@ static void __intel_pmu_pebs_event(struct perf_event *event,
+ } else if (!intel_pmu_save_and_restart(event))
+ return;
+
++ if (!iregs)
++ iregs = &dummy_iregs;
++
+ while (count > 1) {
+ setup_sample(event, iregs, at, &data, regs);
+ perf_event_output(event, &data, regs);
+@@ -1758,16 +1760,22 @@ static void __intel_pmu_pebs_event(struct perf_event *event,
+ }
+
+ setup_sample(event, iregs, at, &data, regs);
+-
+- /*
+- * All but the last records are processed.
+- * The last one is left to be able to call the overflow handler.
+- */
+- if (perf_event_overflow(event, &data, regs)) {
+- x86_pmu_stop(event, 0);
+- return;
++ if (iregs == &dummy_iregs) {
++ /*
++ * The PEBS records may be drained in the non-overflow context,
++ * e.g., large PEBS + context switch. Perf should treat the
++ * last record the same as other PEBS records, and doesn't
++ * invoke the generic overflow handler.
++ */
++ perf_event_output(event, &data, regs);
++ } else {
++ /*
++ * All but the last records are processed.
++ * The last one is left to be able to call the overflow handler.
++ */
++ if (perf_event_overflow(event, &data, regs))
++ x86_pmu_stop(event, 0);
+ }
+-
+ }
+
+ static void intel_pmu_drain_pebs_core(struct pt_regs *iregs)
+diff --git a/arch/x86/events/intel/uncore_snb.c b/arch/x86/events/intel/uncore_snb.c
+index 1038e9f1e3542..3b70c2ff177c0 100644
+--- a/arch/x86/events/intel/uncore_snb.c
++++ b/arch/x86/events/intel/uncore_snb.c
+@@ -115,6 +115,10 @@
+ #define ICL_UNC_CBO_0_PER_CTR0 0x702
+ #define ICL_UNC_CBO_MSR_OFFSET 0x8
+
++/* ICL ARB register */
++#define ICL_UNC_ARB_PER_CTR 0x3b1
++#define ICL_UNC_ARB_PERFEVTSEL 0x3b3
++
+ DEFINE_UNCORE_FORMAT_ATTR(event, event, "config:0-7");
+ DEFINE_UNCORE_FORMAT_ATTR(umask, umask, "config:8-15");
+ DEFINE_UNCORE_FORMAT_ATTR(edge, edge, "config:18");
+@@ -302,15 +306,21 @@ void skl_uncore_cpu_init(void)
+ snb_uncore_arb.ops = &skl_uncore_msr_ops;
+ }
+
++static struct intel_uncore_ops icl_uncore_msr_ops = {
++ .disable_event = snb_uncore_msr_disable_event,
++ .enable_event = snb_uncore_msr_enable_event,
++ .read_counter = uncore_msr_read_counter,
++};
++
+ static struct intel_uncore_type icl_uncore_cbox = {
+ .name = "cbox",
+- .num_counters = 4,
++ .num_counters = 2,
+ .perf_ctr_bits = 44,
+ .perf_ctr = ICL_UNC_CBO_0_PER_CTR0,
+ .event_ctl = SNB_UNC_CBO_0_PERFEVTSEL0,
+ .event_mask = SNB_UNC_RAW_EVENT_MASK,
+ .msr_offset = ICL_UNC_CBO_MSR_OFFSET,
+- .ops = &skl_uncore_msr_ops,
++ .ops = &icl_uncore_msr_ops,
+ .format_group = &snb_uncore_format_group,
+ };
+
+@@ -339,13 +349,25 @@ static struct intel_uncore_type icl_uncore_clockbox = {
+ .single_fixed = 1,
+ .event_mask = SNB_UNC_CTL_EV_SEL_MASK,
+ .format_group = &icl_uncore_clock_format_group,
+- .ops = &skl_uncore_msr_ops,
++ .ops = &icl_uncore_msr_ops,
+ .event_descs = icl_uncore_events,
+ };
+
++static struct intel_uncore_type icl_uncore_arb = {
++ .name = "arb",
++ .num_counters = 1,
++ .num_boxes = 1,
++ .perf_ctr_bits = 44,
++ .perf_ctr = ICL_UNC_ARB_PER_CTR,
++ .event_ctl = ICL_UNC_ARB_PERFEVTSEL,
++ .event_mask = SNB_UNC_RAW_EVENT_MASK,
++ .ops = &icl_uncore_msr_ops,
++ .format_group = &snb_uncore_format_group,
++};
++
+ static struct intel_uncore_type *icl_msr_uncores[] = {
+ &icl_uncore_cbox,
+- &snb_uncore_arb,
++ &icl_uncore_arb,
+ &icl_uncore_clockbox,
+ NULL,
+ };
+@@ -363,7 +385,6 @@ void icl_uncore_cpu_init(void)
+ {
+ uncore_msr_uncores = icl_msr_uncores;
+ icl_uncore_cbox.num_boxes = icl_get_cbox_num();
+- snb_uncore_arb.ops = &skl_uncore_msr_ops;
+ }
+
+ enum {
+diff --git a/arch/x86/events/intel/uncore_snbep.c b/arch/x86/events/intel/uncore_snbep.c
+index 07652fa20ebbe..6a03fe8054a81 100644
+--- a/arch/x86/events/intel/uncore_snbep.c
++++ b/arch/x86/events/intel/uncore_snbep.c
+@@ -4550,10 +4550,10 @@ static struct uncore_event_desc snr_uncore_imc_freerunning_events[] = {
+ INTEL_UNCORE_EVENT_DESC(dclk, "event=0xff,umask=0x10"),
+
+ INTEL_UNCORE_EVENT_DESC(read, "event=0xff,umask=0x20"),
+- INTEL_UNCORE_EVENT_DESC(read.scale, "3.814697266e-6"),
++ INTEL_UNCORE_EVENT_DESC(read.scale, "6.103515625e-5"),
+ INTEL_UNCORE_EVENT_DESC(read.unit, "MiB"),
+ INTEL_UNCORE_EVENT_DESC(write, "event=0xff,umask=0x21"),
+- INTEL_UNCORE_EVENT_DESC(write.scale, "3.814697266e-6"),
++ INTEL_UNCORE_EVENT_DESC(write.scale, "6.103515625e-5"),
+ INTEL_UNCORE_EVENT_DESC(write.unit, "MiB"),
+ { /* end: all zeroes */ },
+ };
+@@ -5009,17 +5009,17 @@ static struct uncore_event_desc icx_uncore_imc_freerunning_events[] = {
+ INTEL_UNCORE_EVENT_DESC(dclk, "event=0xff,umask=0x10"),
+
+ INTEL_UNCORE_EVENT_DESC(read, "event=0xff,umask=0x20"),
+- INTEL_UNCORE_EVENT_DESC(read.scale, "3.814697266e-6"),
++ INTEL_UNCORE_EVENT_DESC(read.scale, "6.103515625e-5"),
+ INTEL_UNCORE_EVENT_DESC(read.unit, "MiB"),
+ INTEL_UNCORE_EVENT_DESC(write, "event=0xff,umask=0x21"),
+- INTEL_UNCORE_EVENT_DESC(write.scale, "3.814697266e-6"),
++ INTEL_UNCORE_EVENT_DESC(write.scale, "6.103515625e-5"),
+ INTEL_UNCORE_EVENT_DESC(write.unit, "MiB"),
+
+ INTEL_UNCORE_EVENT_DESC(ddrt_read, "event=0xff,umask=0x30"),
+- INTEL_UNCORE_EVENT_DESC(ddrt_read.scale, "3.814697266e-6"),
++ INTEL_UNCORE_EVENT_DESC(ddrt_read.scale, "6.103515625e-5"),
+ INTEL_UNCORE_EVENT_DESC(ddrt_read.unit, "MiB"),
+ INTEL_UNCORE_EVENT_DESC(ddrt_write, "event=0xff,umask=0x31"),
+- INTEL_UNCORE_EVENT_DESC(ddrt_write.scale, "3.814697266e-6"),
++ INTEL_UNCORE_EVENT_DESC(ddrt_write.scale, "6.103515625e-5"),
+ INTEL_UNCORE_EVENT_DESC(ddrt_write.unit, "MiB"),
+ { /* end: all zeroes */ },
+ };
+diff --git a/arch/x86/events/perf_event.h b/arch/x86/events/perf_event.h
+index e17a3d8a47ede..d4d482d16fe18 100644
+--- a/arch/x86/events/perf_event.h
++++ b/arch/x86/events/perf_event.h
+@@ -198,6 +198,7 @@ struct cpu_hw_events {
+ they've never been enabled yet */
+ int n_txn; /* the # last events in the below arrays;
+ added in the current transaction */
++ int n_txn_pair;
+ int assign[X86_PMC_IDX_MAX]; /* event to counter assignment */
+ u64 tags[X86_PMC_IDX_MAX];
+
+diff --git a/arch/x86/include/asm/special_insns.h b/arch/x86/include/asm/special_insns.h
+index eb8e781c43539..b8f7c9659ef6b 100644
+--- a/arch/x86/include/asm/special_insns.h
++++ b/arch/x86/include/asm/special_insns.h
+@@ -11,45 +11,47 @@
+ #include <linux/jump_label.h>
+
+ /*
+- * Volatile isn't enough to prevent the compiler from reordering the
+- * read/write functions for the control registers and messing everything up.
+- * A memory clobber would solve the problem, but would prevent reordering of
+- * all loads stores around it, which can hurt performance. Solution is to
+- * use a variable and mimic reads and writes to it to enforce serialization
++ * The compiler should not reorder volatile asm statements with respect to each
++ * other: they should execute in program order. However GCC 4.9.x and 5.x have
++ * a bug (which was fixed in 8.1, 7.3 and 6.5) where they might reorder
++ * volatile asm. The write functions are not affected since they have memory
++ * clobbers preventing reordering. To prevent reads from being reordered with
++ * respect to writes, use a dummy memory operand.
+ */
+-extern unsigned long __force_order;
++
++#define __FORCE_ORDER "m"(*(unsigned int *)0x1000UL)
+
+ void native_write_cr0(unsigned long val);
+
+ static inline unsigned long native_read_cr0(void)
+ {
+ unsigned long val;
+- asm volatile("mov %%cr0,%0\n\t" : "=r" (val), "=m" (__force_order));
++ asm volatile("mov %%cr0,%0\n\t" : "=r" (val) : __FORCE_ORDER);
+ return val;
+ }
+
+ static __always_inline unsigned long native_read_cr2(void)
+ {
+ unsigned long val;
+- asm volatile("mov %%cr2,%0\n\t" : "=r" (val), "=m" (__force_order));
++ asm volatile("mov %%cr2,%0\n\t" : "=r" (val) : __FORCE_ORDER);
+ return val;
+ }
+
+ static __always_inline void native_write_cr2(unsigned long val)
+ {
+- asm volatile("mov %0,%%cr2": : "r" (val), "m" (__force_order));
++ asm volatile("mov %0,%%cr2": : "r" (val) : "memory");
+ }
+
+ static inline unsigned long __native_read_cr3(void)
+ {
+ unsigned long val;
+- asm volatile("mov %%cr3,%0\n\t" : "=r" (val), "=m" (__force_order));
++ asm volatile("mov %%cr3,%0\n\t" : "=r" (val) : __FORCE_ORDER);
+ return val;
+ }
+
+ static inline void native_write_cr3(unsigned long val)
+ {
+- asm volatile("mov %0,%%cr3": : "r" (val), "m" (__force_order));
++ asm volatile("mov %0,%%cr3": : "r" (val) : "memory");
+ }
+
+ static inline unsigned long native_read_cr4(void)
+@@ -64,10 +66,10 @@ static inline unsigned long native_read_cr4(void)
+ asm volatile("1: mov %%cr4, %0\n"
+ "2:\n"
+ _ASM_EXTABLE(1b, 2b)
+- : "=r" (val), "=m" (__force_order) : "0" (0));
++ : "=r" (val) : "0" (0), __FORCE_ORDER);
+ #else
+ /* CR4 always exists on x86_64. */
+- asm volatile("mov %%cr4,%0\n\t" : "=r" (val), "=m" (__force_order));
++ asm volatile("mov %%cr4,%0\n\t" : "=r" (val) : __FORCE_ORDER);
+ #endif
+ return val;
+ }
+diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
+index 95c090a45b4b4..d8ef789e00c15 100644
+--- a/arch/x86/kernel/cpu/common.c
++++ b/arch/x86/kernel/cpu/common.c
+@@ -358,7 +358,7 @@ void native_write_cr0(unsigned long val)
+ unsigned long bits_missing = 0;
+
+ set_register:
+- asm volatile("mov %0,%%cr0": "+r" (val), "+m" (__force_order));
++ asm volatile("mov %0,%%cr0": "+r" (val) : : "memory");
+
+ if (static_branch_likely(&cr_pinning)) {
+ if (unlikely((val & X86_CR0_WP) != X86_CR0_WP)) {
+@@ -377,7 +377,7 @@ void native_write_cr4(unsigned long val)
+ unsigned long bits_changed = 0;
+
+ set_register:
+- asm volatile("mov %0,%%cr4": "+r" (val), "+m" (cr4_pinned_bits));
++ asm volatile("mov %0,%%cr4": "+r" (val) : : "memory");
+
+ if (static_branch_likely(&cr_pinning)) {
+ if (unlikely((val & cr4_pinned_mask) != cr4_pinned_bits)) {
+diff --git a/arch/x86/kernel/cpu/mce/core.c b/arch/x86/kernel/cpu/mce/core.c
+index 14e4b4d17ee5b..07673a034d39c 100644
+--- a/arch/x86/kernel/cpu/mce/core.c
++++ b/arch/x86/kernel/cpu/mce/core.c
+@@ -370,42 +370,105 @@ static int msr_to_offset(u32 msr)
+ return -1;
+ }
+
++__visible bool ex_handler_rdmsr_fault(const struct exception_table_entry *fixup,
++ struct pt_regs *regs, int trapnr,
++ unsigned long error_code,
++ unsigned long fault_addr)
++{
++ pr_emerg("MSR access error: RDMSR from 0x%x at rIP: 0x%lx (%pS)\n",
++ (unsigned int)regs->cx, regs->ip, (void *)regs->ip);
++
++ show_stack_regs(regs);
++
++ panic("MCA architectural violation!\n");
++
++ while (true)
++ cpu_relax();
++
++ return true;
++}
++
+ /* MSR access wrappers used for error injection */
+-static u64 mce_rdmsrl(u32 msr)
++static noinstr u64 mce_rdmsrl(u32 msr)
+ {
+- u64 v;
++ DECLARE_ARGS(val, low, high);
+
+ if (__this_cpu_read(injectm.finished)) {
+- int offset = msr_to_offset(msr);
++ int offset;
++ u64 ret;
+
++ instrumentation_begin();
++
++ offset = msr_to_offset(msr);
+ if (offset < 0)
+- return 0;
+- return *(u64 *)((char *)this_cpu_ptr(&injectm) + offset);
+- }
++ ret = 0;
++ else
++ ret = *(u64 *)((char *)this_cpu_ptr(&injectm) + offset);
+
+- if (rdmsrl_safe(msr, &v)) {
+- WARN_ONCE(1, "mce: Unable to read MSR 0x%x!\n", msr);
+- /*
+- * Return zero in case the access faulted. This should
+- * not happen normally but can happen if the CPU does
+- * something weird, or if the code is buggy.
+- */
+- v = 0;
++ instrumentation_end();
++
++ return ret;
+ }
+
+- return v;
++ /*
++ * RDMSR on MCA MSRs should not fault. If they do, this is very much an
++ * architectural violation and needs to be reported to hw vendor. Panic
++ * the box to not allow any further progress.
++ */
++ asm volatile("1: rdmsr\n"
++ "2:\n"
++ _ASM_EXTABLE_HANDLE(1b, 2b, ex_handler_rdmsr_fault)
++ : EAX_EDX_RET(val, low, high) : "c" (msr));
++
++
++ return EAX_EDX_VAL(val, low, high);
++}
++
++__visible bool ex_handler_wrmsr_fault(const struct exception_table_entry *fixup,
++ struct pt_regs *regs, int trapnr,
++ unsigned long error_code,
++ unsigned long fault_addr)
++{
++ pr_emerg("MSR access error: WRMSR to 0x%x (tried to write 0x%08x%08x) at rIP: 0x%lx (%pS)\n",
++ (unsigned int)regs->cx, (unsigned int)regs->dx, (unsigned int)regs->ax,
++ regs->ip, (void *)regs->ip);
++
++ show_stack_regs(regs);
++
++ panic("MCA architectural violation!\n");
++
++ while (true)
++ cpu_relax();
++
++ return true;
+ }
+
+-static void mce_wrmsrl(u32 msr, u64 v)
++static noinstr void mce_wrmsrl(u32 msr, u64 v)
+ {
++ u32 low, high;
++
+ if (__this_cpu_read(injectm.finished)) {
+- int offset = msr_to_offset(msr);
++ int offset;
++
++ instrumentation_begin();
+
++ offset = msr_to_offset(msr);
+ if (offset >= 0)
+ *(u64 *)((char *)this_cpu_ptr(&injectm) + offset) = v;
++
++ instrumentation_end();
++
+ return;
+ }
+- wrmsrl(msr, v);
++
++ low = (u32)v;
++ high = (u32)(v >> 32);
++
++ /* See comment in mce_rdmsrl() */
++ asm volatile("1: wrmsr\n"
++ "2:\n"
++ _ASM_EXTABLE_HANDLE(1b, 2b, ex_handler_wrmsr_fault)
++ : : "c" (msr), "a"(low), "d" (high) : "memory");
+ }
+
+ /*
+diff --git a/arch/x86/kernel/cpu/mce/internal.h b/arch/x86/kernel/cpu/mce/internal.h
+index 6473070b5da49..b122610e9046a 100644
+--- a/arch/x86/kernel/cpu/mce/internal.h
++++ b/arch/x86/kernel/cpu/mce/internal.h
+@@ -185,4 +185,14 @@ extern bool amd_filter_mce(struct mce *m);
+ static inline bool amd_filter_mce(struct mce *m) { return false; };
+ #endif
+
++__visible bool ex_handler_rdmsr_fault(const struct exception_table_entry *fixup,
++ struct pt_regs *regs, int trapnr,
++ unsigned long error_code,
++ unsigned long fault_addr);
++
++__visible bool ex_handler_wrmsr_fault(const struct exception_table_entry *fixup,
++ struct pt_regs *regs, int trapnr,
++ unsigned long error_code,
++ unsigned long fault_addr);
++
+ #endif /* __X86_MCE_INTERNAL_H__ */
+diff --git a/arch/x86/kernel/cpu/mce/severity.c b/arch/x86/kernel/cpu/mce/severity.c
+index e1da619add192..567ce09a02868 100644
+--- a/arch/x86/kernel/cpu/mce/severity.c
++++ b/arch/x86/kernel/cpu/mce/severity.c
+@@ -9,9 +9,11 @@
+ #include <linux/seq_file.h>
+ #include <linux/init.h>
+ #include <linux/debugfs.h>
+-#include <asm/mce.h>
+ #include <linux/uaccess.h>
+
++#include <asm/mce.h>
++#include <asm/intel-family.h>
++
+ #include "internal.h"
+
+ /*
+@@ -40,9 +42,14 @@ static struct severity {
+ unsigned char context;
+ unsigned char excp;
+ unsigned char covered;
++ unsigned char cpu_model;
++ unsigned char cpu_minstepping;
++ unsigned char bank_lo, bank_hi;
+ char *msg;
+ } severities[] = {
+ #define MCESEV(s, m, c...) { .sev = MCE_ ## s ## _SEVERITY, .msg = m, ## c }
++#define BANK_RANGE(l, h) .bank_lo = l, .bank_hi = h
++#define MODEL_STEPPING(m, s) .cpu_model = m, .cpu_minstepping = s
+ #define KERNEL .context = IN_KERNEL
+ #define USER .context = IN_USER
+ #define KERNEL_RECOV .context = IN_KERNEL_RECOV
+@@ -97,7 +104,6 @@ static struct severity {
+ KEEP, "Corrected error",
+ NOSER, BITCLR(MCI_STATUS_UC)
+ ),
+-
+ /*
+ * known AO MCACODs reported via MCE or CMC:
+ *
+@@ -113,6 +119,18 @@ static struct severity {
+ AO, "Action optional: last level cache writeback error",
+ SER, MASK(MCI_UC_AR|MCACOD, MCI_STATUS_UC|MCACOD_L3WB)
+ ),
++ /*
++ * Quirk for Skylake/Cascade Lake. Patrol scrubber may be configured
++ * to report uncorrected errors using CMCI with a special signature.
++ * UC=0, MSCOD=0x0010, MCACOD=binary(000X 0000 1100 XXXX) reported
++ * in one of the memory controller banks.
++ * Set severity to "AO" for same action as normal patrol scrub error.
++ */
++ MCESEV(
++ AO, "Uncorrected Patrol Scrub Error",
++ SER, MASK(MCI_STATUS_UC|MCI_ADDR|0xffffeff0, MCI_ADDR|0x001000c0),
++ MODEL_STEPPING(INTEL_FAM6_SKYLAKE_X, 4), BANK_RANGE(13, 18)
++ ),
+
+ /* ignore OVER for UCNA */
+ MCESEV(
+@@ -324,6 +342,12 @@ static int mce_severity_intel(struct mce *m, int tolerant, char **msg, bool is_e
+ continue;
+ if (s->excp && excp != s->excp)
+ continue;
++ if (s->cpu_model && boot_cpu_data.x86_model != s->cpu_model)
++ continue;
++ if (s->cpu_minstepping && boot_cpu_data.x86_stepping < s->cpu_minstepping)
++ continue;
++ if (s->bank_lo && (m->bank < s->bank_lo || m->bank > s->bank_hi))
++ continue;
+ if (msg)
+ *msg = s->msg;
+ s->covered = 1;
+diff --git a/arch/x86/kernel/dumpstack.c b/arch/x86/kernel/dumpstack.c
+index 7401cc12c3ccf..42679610c9bea 100644
+--- a/arch/x86/kernel/dumpstack.c
++++ b/arch/x86/kernel/dumpstack.c
+@@ -115,7 +115,8 @@ void show_opcodes(struct pt_regs *regs, const char *loglvl)
+ unsigned long prologue = regs->ip - PROLOGUE_SIZE;
+
+ if (copy_code(regs, opcodes, prologue, sizeof(opcodes))) {
+- printk("%sCode: Bad RIP value.\n", loglvl);
++ printk("%sCode: Unable to access opcode bytes at RIP 0x%lx.\n",
++ loglvl, prologue);
+ } else {
+ printk("%sCode: %" __stringify(PROLOGUE_SIZE) "ph <%02x> %"
+ __stringify(EPILOGUE_SIZE) "ph\n", loglvl, opcodes,
+diff --git a/arch/x86/kernel/fpu/init.c b/arch/x86/kernel/fpu/init.c
+index 61ddc3a5e5c2b..f8ff895aaf7e1 100644
+--- a/arch/x86/kernel/fpu/init.c
++++ b/arch/x86/kernel/fpu/init.c
+@@ -243,9 +243,9 @@ static void __init fpu__init_system_ctx_switch(void)
+ */
+ static void __init fpu__init_parse_early_param(void)
+ {
+- char arg[32];
++ char arg[128];
+ char *argptr = arg;
+- int bit;
++ int arglen, res, bit;
+
+ #ifdef CONFIG_X86_32
+ if (cmdline_find_option_bool(boot_command_line, "no387"))
+@@ -268,12 +268,26 @@ static void __init fpu__init_parse_early_param(void)
+ if (cmdline_find_option_bool(boot_command_line, "noxsaves"))
+ setup_clear_cpu_cap(X86_FEATURE_XSAVES);
+
+- if (cmdline_find_option(boot_command_line, "clearcpuid", arg,
+- sizeof(arg)) &&
+- get_option(&argptr, &bit) &&
+- bit >= 0 &&
+- bit < NCAPINTS * 32)
+- setup_clear_cpu_cap(bit);
++ arglen = cmdline_find_option(boot_command_line, "clearcpuid", arg, sizeof(arg));
++ if (arglen <= 0)
++ return;
++
++ pr_info("Clearing CPUID bits:");
++ do {
++ res = get_option(&argptr, &bit);
++ if (res == 0 || res == 3)
++ break;
++
++ /* If the argument was too long, the last bit may be cut off */
++ if (res == 1 && arglen >= sizeof(arg))
++ break;
++
++ if (bit >= 0 && bit < NCAPINTS * 32) {
++ pr_cont(" " X86_CAP_FMT, x86_cap_flag(bit));
++ setup_clear_cpu_cap(bit);
++ }
++ } while (res == 2);
++ pr_cont("\n");
+ }
+
+ /*
+diff --git a/arch/x86/kernel/nmi.c b/arch/x86/kernel/nmi.c
+index d7c5e44b26f73..091752c3a19e2 100644
+--- a/arch/x86/kernel/nmi.c
++++ b/arch/x86/kernel/nmi.c
+@@ -102,7 +102,6 @@ fs_initcall(nmi_warning_debugfs);
+
+ static void nmi_check_duration(struct nmiaction *action, u64 duration)
+ {
+- u64 whole_msecs = READ_ONCE(action->max_duration);
+ int remainder_ns, decimal_msecs;
+
+ if (duration < nmi_longest_ns || duration < action->max_duration)
+@@ -110,12 +109,12 @@ static void nmi_check_duration(struct nmiaction *action, u64 duration)
+
+ action->max_duration = duration;
+
+- remainder_ns = do_div(whole_msecs, (1000 * 1000));
++ remainder_ns = do_div(duration, (1000 * 1000));
+ decimal_msecs = remainder_ns / 1000;
+
+ printk_ratelimited(KERN_INFO
+ "INFO: NMI handler (%ps) took too long to run: %lld.%03d msecs\n",
+- action->handler, whole_msecs, decimal_msecs);
++ action->handler, duration, decimal_msecs);
+ }
+
+ static int nmi_handle(unsigned int type, struct pt_regs *regs)
+diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c
+index d0e2825ae6174..571cb8657e53e 100644
+--- a/arch/x86/kvm/emulate.c
++++ b/arch/x86/kvm/emulate.c
+@@ -3594,7 +3594,7 @@ static int em_rdpid(struct x86_emulate_ctxt *ctxt)
+ u64 tsc_aux = 0;
+
+ if (ctxt->ops->get_msr(ctxt, MSR_TSC_AUX, &tsc_aux))
+- return emulate_gp(ctxt, 0);
++ return emulate_ud(ctxt);
+ ctxt->dst.val = tsc_aux;
+ return X86EMUL_CONTINUE;
+ }
+diff --git a/arch/x86/kvm/ioapic.c b/arch/x86/kvm/ioapic.c
+index d057376bd3d33..698969e18fe35 100644
+--- a/arch/x86/kvm/ioapic.c
++++ b/arch/x86/kvm/ioapic.c
+@@ -197,12 +197,9 @@ static void ioapic_lazy_update_eoi(struct kvm_ioapic *ioapic, int irq)
+
+ /*
+ * If no longer has pending EOI in LAPICs, update
+- * EOI for this vetor.
++ * EOI for this vector.
+ */
+ rtc_irq_eoi(ioapic, vcpu, entry->fields.vector);
+- kvm_ioapic_update_eoi_one(vcpu, ioapic,
+- entry->fields.trig_mode,
+- irq);
+ break;
+ }
+ }
+diff --git a/arch/x86/kvm/kvm_cache_regs.h b/arch/x86/kvm/kvm_cache_regs.h
+index cfe83d4ae6252..ca0781b41df9d 100644
+--- a/arch/x86/kvm/kvm_cache_regs.h
++++ b/arch/x86/kvm/kvm_cache_regs.h
+@@ -7,7 +7,7 @@
+ #define KVM_POSSIBLE_CR0_GUEST_BITS X86_CR0_TS
+ #define KVM_POSSIBLE_CR4_GUEST_BITS \
+ (X86_CR4_PVI | X86_CR4_DE | X86_CR4_PCE | X86_CR4_OSFXSR \
+- | X86_CR4_OSXMMEXCPT | X86_CR4_LA57 | X86_CR4_PGE | X86_CR4_TSD)
++ | X86_CR4_OSXMMEXCPT | X86_CR4_PGE | X86_CR4_TSD)
+
+ #define BUILD_KVM_GPR_ACCESSORS(lname, uname) \
+ static __always_inline unsigned long kvm_##lname##_read(struct kvm_vcpu *vcpu)\
+diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
+index 4ce2ddd26c0b7..ccb72af1bcb5d 100644
+--- a/arch/x86/kvm/lapic.c
++++ b/arch/x86/kvm/lapic.c
+@@ -490,6 +490,12 @@ static inline void apic_clear_irr(int vec, struct kvm_lapic *apic)
+ }
+ }
+
++void kvm_apic_clear_irr(struct kvm_vcpu *vcpu, int vec)
++{
++ apic_clear_irr(vec, vcpu->arch.apic);
++}
++EXPORT_SYMBOL_GPL(kvm_apic_clear_irr);
++
+ static inline void apic_set_isr(int vec, struct kvm_lapic *apic)
+ {
+ struct kvm_vcpu *vcpu;
+@@ -2462,6 +2468,7 @@ int kvm_apic_has_interrupt(struct kvm_vcpu *vcpu)
+ __apic_update_ppr(apic, &ppr);
+ return apic_has_interrupt_for_ppr(apic, ppr);
+ }
++EXPORT_SYMBOL_GPL(kvm_apic_has_interrupt);
+
+ int kvm_apic_accept_pic_intr(struct kvm_vcpu *vcpu)
+ {
+diff --git a/arch/x86/kvm/lapic.h b/arch/x86/kvm/lapic.h
+index 754f29beb83e3..4fb86e3a9dd3d 100644
+--- a/arch/x86/kvm/lapic.h
++++ b/arch/x86/kvm/lapic.h
+@@ -89,6 +89,7 @@ int kvm_lapic_reg_read(struct kvm_lapic *apic, u32 offset, int len,
+ bool kvm_apic_match_dest(struct kvm_vcpu *vcpu, struct kvm_lapic *source,
+ int shorthand, unsigned int dest, int dest_mode);
+ int kvm_apic_compare_prio(struct kvm_vcpu *vcpu1, struct kvm_vcpu *vcpu2);
++void kvm_apic_clear_irr(struct kvm_vcpu *vcpu, int vec);
+ bool __kvm_apic_update_irr(u32 *pir, void *regs, int *max_irr);
+ bool kvm_apic_update_irr(struct kvm_vcpu *vcpu, u32 *pir, int *max_irr);
+ void kvm_apic_update_ppr(struct kvm_vcpu *vcpu);
+diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
+index 1e6724c30cc05..57cd70801216f 100644
+--- a/arch/x86/kvm/mmu/mmu.c
++++ b/arch/x86/kvm/mmu/mmu.c
+@@ -6341,6 +6341,7 @@ static void kvm_recover_nx_lpages(struct kvm *kvm)
+ cond_resched_lock(&kvm->mmu_lock);
+ }
+ }
++ kvm_mmu_commit_zap_page(kvm, &invalid_list);
+
+ spin_unlock(&kvm->mmu_lock);
+ srcu_read_unlock(&kvm->srcu, rcu_idx);
+diff --git a/arch/x86/kvm/svm/avic.c b/arch/x86/kvm/svm/avic.c
+index e80daa98682f5..b74722e0abb53 100644
+--- a/arch/x86/kvm/svm/avic.c
++++ b/arch/x86/kvm/svm/avic.c
+@@ -868,6 +868,7 @@ int svm_update_pi_irte(struct kvm *kvm, unsigned int host_irq,
+ * - Tell IOMMU to use legacy mode for this interrupt.
+ * - Retrieve ga_tag of prior interrupt remapping data.
+ */
++ pi.prev_ga_tag = 0;
+ pi.is_guest_mode = false;
+ ret = irq_set_vcpu_affinity(host_irq, &pi);
+
+diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
+index a5810928b011f..27e41fac91965 100644
+--- a/arch/x86/kvm/vmx/nested.c
++++ b/arch/x86/kvm/vmx/nested.c
+@@ -2402,6 +2402,8 @@ static void prepare_vmcs02_rare(struct vcpu_vmx *vmx, struct vmcs12 *vmcs12)
+ vmcs_writel(GUEST_TR_BASE, vmcs12->guest_tr_base);
+ vmcs_writel(GUEST_GDTR_BASE, vmcs12->guest_gdtr_base);
+ vmcs_writel(GUEST_IDTR_BASE, vmcs12->guest_idtr_base);
++
++ vmx->segment_cache.bitmask = 0;
+ }
+
+ if (!hv_evmcs || !(hv_evmcs->hv_clean_fields &
+@@ -3295,8 +3297,10 @@ enum nvmx_vmentry_status nested_vmx_enter_non_root_mode(struct kvm_vcpu *vcpu,
+ prepare_vmcs02_early(vmx, vmcs12);
+
+ if (from_vmentry) {
+- if (unlikely(!nested_get_vmcs12_pages(vcpu)))
++ if (unlikely(!nested_get_vmcs12_pages(vcpu))) {
++ vmx_switch_vmcs(vcpu, &vmx->vmcs01);
+ return NVMX_VMENTRY_KVM_INTERNAL_ERROR;
++ }
+
+ if (nested_vmx_check_vmentry_hw(vcpu)) {
+ vmx_switch_vmcs(vcpu, &vmx->vmcs01);
+@@ -3480,6 +3484,14 @@ static int nested_vmx_run(struct kvm_vcpu *vcpu, bool launch)
+ if (unlikely(status != NVMX_VMENTRY_SUCCESS))
+ goto vmentry_failed;
+
++ /* Emulate processing of posted interrupts on VM-Enter. */
++ if (nested_cpu_has_posted_intr(vmcs12) &&
++ kvm_apic_has_interrupt(vcpu) == vmx->nested.posted_intr_nv) {
++ vmx->nested.pi_pending = true;
++ kvm_make_request(KVM_REQ_EVENT, vcpu);
++ kvm_apic_clear_irr(vcpu, vmx->nested.posted_intr_nv);
++ }
++
+ /* Hide L1D cache contents from the nested guest. */
+ vmx->vcpu.arch.l1tf_flush_l1d = true;
+
+diff --git a/block/blk-core.c b/block/blk-core.c
+index 619a3dcd3f5e7..8d6435b731186 100644
+--- a/block/blk-core.c
++++ b/block/blk-core.c
+@@ -798,11 +798,10 @@ static void handle_bad_sector(struct bio *bio, sector_t maxsector)
+ {
+ char b[BDEVNAME_SIZE];
+
+- printk(KERN_INFO "attempt to access beyond end of device\n");
+- printk(KERN_INFO "%s: rw=%d, want=%Lu, limit=%Lu\n",
+- bio_devname(bio, b), bio->bi_opf,
+- (unsigned long long)bio_end_sector(bio),
+- (long long)maxsector);
++ pr_info_ratelimited("attempt to access beyond end of device\n"
++ "%s: rw=%d, want=%llu, limit=%llu\n",
++ bio_devname(bio, b), bio->bi_opf,
++ bio_end_sector(bio), maxsector);
+ }
+
+ #ifdef CONFIG_FAIL_MAKE_REQUEST
+diff --git a/block/blk-mq-sysfs.c b/block/blk-mq-sysfs.c
+index 062229395a507..7b52e7657b2d1 100644
+--- a/block/blk-mq-sysfs.c
++++ b/block/blk-mq-sysfs.c
+@@ -36,8 +36,6 @@ static void blk_mq_hw_sysfs_release(struct kobject *kobj)
+ struct blk_mq_hw_ctx *hctx = container_of(kobj, struct blk_mq_hw_ctx,
+ kobj);
+
+- cancel_delayed_work_sync(&hctx->run_work);
+-
+ if (hctx->flags & BLK_MQ_F_BLOCKING)
+ cleanup_srcu_struct(hctx->srcu);
+ blk_free_flush_queue(hctx->fq);
+diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
+index 02643e149d5e1..95fea6c18baf7 100644
+--- a/block/blk-sysfs.c
++++ b/block/blk-sysfs.c
+@@ -896,9 +896,16 @@ static void __blk_release_queue(struct work_struct *work)
+
+ blk_free_queue_stats(q->stats);
+
+- if (queue_is_mq(q))
++ if (queue_is_mq(q)) {
++ struct blk_mq_hw_ctx *hctx;
++ int i;
++
+ cancel_delayed_work_sync(&q->requeue_work);
+
++ queue_for_each_hw_ctx(q, hctx, i)
++ cancel_delayed_work_sync(&hctx->run_work);
++ }
++
+ blk_exit_queue(q);
+
+ blk_queue_free_zone_bitmaps(q);
+diff --git a/crypto/algif_aead.c b/crypto/algif_aead.c
+index 43c6aa784858b..e62d735ed2660 100644
+--- a/crypto/algif_aead.c
++++ b/crypto/algif_aead.c
+@@ -78,7 +78,7 @@ static int crypto_aead_copy_sgl(struct crypto_sync_skcipher *null_tfm,
+ SYNC_SKCIPHER_REQUEST_ON_STACK(skreq, null_tfm);
+
+ skcipher_request_set_sync_tfm(skreq, null_tfm);
+- skcipher_request_set_callback(skreq, CRYPTO_TFM_REQ_MAY_BACKLOG,
++ skcipher_request_set_callback(skreq, CRYPTO_TFM_REQ_MAY_SLEEP,
+ NULL, NULL);
+ skcipher_request_set_crypt(skreq, src, dst, len, NULL);
+
+@@ -291,19 +291,20 @@ static int _aead_recvmsg(struct socket *sock, struct msghdr *msg,
+ areq->outlen = outlen;
+
+ aead_request_set_callback(&areq->cra_u.aead_req,
+- CRYPTO_TFM_REQ_MAY_BACKLOG,
++ CRYPTO_TFM_REQ_MAY_SLEEP,
+ af_alg_async_cb, areq);
+ err = ctx->enc ? crypto_aead_encrypt(&areq->cra_u.aead_req) :
+ crypto_aead_decrypt(&areq->cra_u.aead_req);
+
+ /* AIO operation in progress */
+- if (err == -EINPROGRESS || err == -EBUSY)
++ if (err == -EINPROGRESS)
+ return -EIOCBQUEUED;
+
+ sock_put(sk);
+ } else {
+ /* Synchronous operation */
+ aead_request_set_callback(&areq->cra_u.aead_req,
++ CRYPTO_TFM_REQ_MAY_SLEEP |
+ CRYPTO_TFM_REQ_MAY_BACKLOG,
+ crypto_req_done, &ctx->wait);
+ err = crypto_wait_req(ctx->enc ?
+diff --git a/crypto/algif_skcipher.c b/crypto/algif_skcipher.c
+index 81c4022285a7c..30069a92a9b22 100644
+--- a/crypto/algif_skcipher.c
++++ b/crypto/algif_skcipher.c
+@@ -123,7 +123,7 @@ static int _skcipher_recvmsg(struct socket *sock, struct msghdr *msg,
+ crypto_skcipher_decrypt(&areq->cra_u.skcipher_req);
+
+ /* AIO operation in progress */
+- if (err == -EINPROGRESS || err == -EBUSY)
++ if (err == -EINPROGRESS)
+ return -EIOCBQUEUED;
+
+ sock_put(sk);
+diff --git a/drivers/android/binder.c b/drivers/android/binder.c
+index 5b310eea9e527..adab46ca5dff7 100644
+--- a/drivers/android/binder.c
++++ b/drivers/android/binder.c
+@@ -223,7 +223,7 @@ static struct binder_transaction_log_entry *binder_transaction_log_add(
+ struct binder_work {
+ struct list_head entry;
+
+- enum {
++ enum binder_work_type {
+ BINDER_WORK_TRANSACTION = 1,
+ BINDER_WORK_TRANSACTION_COMPLETE,
+ BINDER_WORK_RETURN_ERROR,
+@@ -885,27 +885,6 @@ static struct binder_work *binder_dequeue_work_head_ilocked(
+ return w;
+ }
+
+-/**
+- * binder_dequeue_work_head() - Dequeues the item at head of list
+- * @proc: binder_proc associated with list
+- * @list: list to dequeue head
+- *
+- * Removes the head of the list if there are items on the list
+- *
+- * Return: pointer dequeued binder_work, NULL if list was empty
+- */
+-static struct binder_work *binder_dequeue_work_head(
+- struct binder_proc *proc,
+- struct list_head *list)
+-{
+- struct binder_work *w;
+-
+- binder_inner_proc_lock(proc);
+- w = binder_dequeue_work_head_ilocked(list);
+- binder_inner_proc_unlock(proc);
+- return w;
+-}
+-
+ static void
+ binder_defer_work(struct binder_proc *proc, enum binder_deferred_state defer);
+ static void binder_free_thread(struct binder_thread *thread);
+@@ -2345,8 +2324,6 @@ static void binder_transaction_buffer_release(struct binder_proc *proc,
+ * file is done when the transaction is torn
+ * down.
+ */
+- WARN_ON(failed_at &&
+- proc->tsk == current->group_leader);
+ } break;
+ case BINDER_TYPE_PTR:
+ /*
+@@ -4589,13 +4566,17 @@ static void binder_release_work(struct binder_proc *proc,
+ struct list_head *list)
+ {
+ struct binder_work *w;
++ enum binder_work_type wtype;
+
+ while (1) {
+- w = binder_dequeue_work_head(proc, list);
++ binder_inner_proc_lock(proc);
++ w = binder_dequeue_work_head_ilocked(list);
++ wtype = w ? w->type : 0;
++ binder_inner_proc_unlock(proc);
+ if (!w)
+ return;
+
+- switch (w->type) {
++ switch (wtype) {
+ case BINDER_WORK_TRANSACTION: {
+ struct binder_transaction *t;
+
+@@ -4629,9 +4610,11 @@ static void binder_release_work(struct binder_proc *proc,
+ kfree(death);
+ binder_stats_deleted(BINDER_STAT_DEATH);
+ } break;
++ case BINDER_WORK_NODE:
++ break;
+ default:
+ pr_err("unexpected work type, %d, not freed\n",
+- w->type);
++ wtype);
+ break;
+ }
+ }
+diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c
+index a5fef9aa419fd..91a0c84d55c97 100644
+--- a/drivers/bluetooth/btusb.c
++++ b/drivers/bluetooth/btusb.c
+@@ -2849,6 +2849,7 @@ static int btusb_mtk_submit_wmt_recv_urb(struct hci_dev *hdev)
+ buf = kmalloc(size, GFP_KERNEL);
+ if (!buf) {
+ kfree(dr);
++ usb_free_urb(urb);
+ return -ENOMEM;
+ }
+
+diff --git a/drivers/bluetooth/hci_ldisc.c b/drivers/bluetooth/hci_ldisc.c
+index 85a30fb9177bb..f83d67eafc9f0 100644
+--- a/drivers/bluetooth/hci_ldisc.c
++++ b/drivers/bluetooth/hci_ldisc.c
+@@ -538,6 +538,7 @@ static void hci_uart_tty_close(struct tty_struct *tty)
+ clear_bit(HCI_UART_PROTO_READY, &hu->flags);
+ percpu_up_write(&hu->proto_lock);
+
++ cancel_work_sync(&hu->init_ready);
+ cancel_work_sync(&hu->write_work);
+
+ if (hdev) {
+diff --git a/drivers/bluetooth/hci_serdev.c b/drivers/bluetooth/hci_serdev.c
+index 7b233312e723f..3977bba485c22 100644
+--- a/drivers/bluetooth/hci_serdev.c
++++ b/drivers/bluetooth/hci_serdev.c
+@@ -355,6 +355,8 @@ void hci_uart_unregister_device(struct hci_uart *hu)
+ struct hci_dev *hdev = hu->hdev;
+
+ clear_bit(HCI_UART_PROTO_READY, &hu->flags);
++
++ cancel_work_sync(&hu->init_ready);
+ if (test_bit(HCI_UART_REGISTERED, &hu->flags))
+ hci_unregister_dev(hdev);
+ hci_free_dev(hdev);
+diff --git a/drivers/bus/mhi/core/Makefile b/drivers/bus/mhi/core/Makefile
+index 66e2700c9032a..bc1469778cf87 100644
+--- a/drivers/bus/mhi/core/Makefile
++++ b/drivers/bus/mhi/core/Makefile
+@@ -1,3 +1,3 @@
+-obj-$(CONFIG_MHI_BUS) := mhi.o
++obj-$(CONFIG_MHI_BUS) += mhi.o
+
+ mhi-y := init.o main.o pm.o boot.o
+diff --git a/drivers/char/ipmi/ipmi_si_intf.c b/drivers/char/ipmi/ipmi_si_intf.c
+index 77b8d551ae7fe..dd559661c15b3 100644
+--- a/drivers/char/ipmi/ipmi_si_intf.c
++++ b/drivers/char/ipmi/ipmi_si_intf.c
+@@ -1963,7 +1963,7 @@ static int try_smi_init(struct smi_info *new_smi)
+ /* Do this early so it's available for logs. */
+ if (!new_smi->io.dev) {
+ pr_err("IPMI interface added with no device\n");
+- rv = EIO;
++ rv = -EIO;
+ goto out_err;
+ }
+
+diff --git a/drivers/clk/at91/clk-main.c b/drivers/clk/at91/clk-main.c
+index 37c22667e8319..4313ecb2af5b2 100644
+--- a/drivers/clk/at91/clk-main.c
++++ b/drivers/clk/at91/clk-main.c
+@@ -437,12 +437,17 @@ static int clk_sam9x5_main_set_parent(struct clk_hw *hw, u8 index)
+ return -EINVAL;
+
+ regmap_read(regmap, AT91_CKGR_MOR, &tmp);
+- tmp &= ~MOR_KEY_MASK;
+
+ if (index && !(tmp & AT91_PMC_MOSCSEL))
+- regmap_write(regmap, AT91_CKGR_MOR, tmp | AT91_PMC_MOSCSEL);
++ tmp = AT91_PMC_MOSCSEL;
+ else if (!index && (tmp & AT91_PMC_MOSCSEL))
+- regmap_write(regmap, AT91_CKGR_MOR, tmp & ~AT91_PMC_MOSCSEL);
++ tmp = 0;
++ else
++ return 0;
++
++ regmap_update_bits(regmap, AT91_CKGR_MOR,
++ AT91_PMC_MOSCSEL | MOR_KEY_MASK,
++ tmp | AT91_PMC_KEY);
+
+ while (!clk_sam9x5_main_ready(regmap))
+ cpu_relax();
+diff --git a/drivers/clk/bcm/clk-bcm2835.c b/drivers/clk/bcm/clk-bcm2835.c
+index 011802f1a6df9..f18b4d9e9455b 100644
+--- a/drivers/clk/bcm/clk-bcm2835.c
++++ b/drivers/clk/bcm/clk-bcm2835.c
+@@ -1337,8 +1337,10 @@ static struct clk_hw *bcm2835_register_pll(struct bcm2835_cprman *cprman,
+ pll->hw.init = &init;
+
+ ret = devm_clk_hw_register(cprman->dev, &pll->hw);
+- if (ret)
++ if (ret) {
++ kfree(pll);
+ return NULL;
++ }
+ return &pll->hw;
+ }
+
+diff --git a/drivers/clk/imx/clk-imx8mq.c b/drivers/clk/imx/clk-imx8mq.c
+index a64aace213c27..7762c5825e77d 100644
+--- a/drivers/clk/imx/clk-imx8mq.c
++++ b/drivers/clk/imx/clk-imx8mq.c
+@@ -157,10 +157,10 @@ static const char * const imx8mq_qspi_sels[] = {"osc_25m", "sys1_pll_400m", "sys
+ "audio_pll2_out", "sys1_pll_266m", "sys3_pll_out", "sys1_pll_100m", };
+
+ static const char * const imx8mq_usdhc1_sels[] = {"osc_25m", "sys1_pll_400m", "sys1_pll_800m", "sys2_pll_500m",
+- "audio_pll2_out", "sys1_pll_266m", "sys3_pll_out", "sys1_pll_100m", };
++ "sys3_pll_out", "sys1_pll_266m", "audio_pll2_out", "sys1_pll_100m", };
+
+ static const char * const imx8mq_usdhc2_sels[] = {"osc_25m", "sys1_pll_400m", "sys1_pll_800m", "sys2_pll_500m",
+- "audio_pll2_out", "sys1_pll_266m", "sys3_pll_out", "sys1_pll_100m", };
++ "sys3_pll_out", "sys1_pll_266m", "audio_pll2_out", "sys1_pll_100m", };
+
+ static const char * const imx8mq_i2c1_sels[] = {"osc_25m", "sys1_pll_160m", "sys2_pll_50m", "sys3_pll_out", "audio_pll1_out",
+ "video_pll1_out", "audio_pll2_out", "sys1_pll_133m", };
+diff --git a/drivers/clk/keystone/sci-clk.c b/drivers/clk/keystone/sci-clk.c
+index 7edf8c8432b67..64ea895f1a7df 100644
+--- a/drivers/clk/keystone/sci-clk.c
++++ b/drivers/clk/keystone/sci-clk.c
+@@ -522,7 +522,7 @@ static int ti_sci_scan_clocks_from_dt(struct sci_clk_provider *provider)
+ np = of_find_node_with_property(np, *clk_name);
+ if (!np) {
+ clk_name++;
+- break;
++ continue;
+ }
+
+ if (!of_device_is_available(np))
+diff --git a/drivers/clk/mediatek/clk-mt6779.c b/drivers/clk/mediatek/clk-mt6779.c
+index 9766cccf5844c..6e0d3a1667291 100644
+--- a/drivers/clk/mediatek/clk-mt6779.c
++++ b/drivers/clk/mediatek/clk-mt6779.c
+@@ -919,6 +919,8 @@ static const struct mtk_gate infra_clks[] = {
+ "pwm_sel", 19),
+ GATE_INFRA0(CLK_INFRA_PWM, "infra_pwm",
+ "pwm_sel", 21),
++ GATE_INFRA0(CLK_INFRA_UART0, "infra_uart0",
++ "uart_sel", 22),
+ GATE_INFRA0(CLK_INFRA_UART1, "infra_uart1",
+ "uart_sel", 23),
+ GATE_INFRA0(CLK_INFRA_UART2, "infra_uart2",
+diff --git a/drivers/clk/meson/axg-audio.c b/drivers/clk/meson/axg-audio.c
+index 53715e36326c6..9918cb375de30 100644
+--- a/drivers/clk/meson/axg-audio.c
++++ b/drivers/clk/meson/axg-audio.c
+@@ -1209,13 +1209,132 @@ static struct clk_hw_onecell_data sm1_audio_hw_onecell_data = {
+ };
+
+
+-/* Convenience table to populate regmap in .probe()
+- * Note that this table is shared between both AXG and G12A,
+- * with spdifout_b clocks being exclusive to G12A. Since those
+- * clocks are not declared within the AXG onecell table, we do not
+- * feel the need to have separate AXG/G12A regmap tables.
+- */
++/* Convenience table to populate regmap in .probe(). */
+ static struct clk_regmap *const axg_clk_regmaps[] = {
++ &ddr_arb,
++ &pdm,
++ &tdmin_a,
++ &tdmin_b,
++ &tdmin_c,
++ &tdmin_lb,
++ &tdmout_a,
++ &tdmout_b,
++ &tdmout_c,
++ &frddr_a,
++ &frddr_b,
++ &frddr_c,
++ &toddr_a,
++ &toddr_b,
++ &toddr_c,
++ &loopback,
++ &spdifin,
++ &spdifout,
++ &resample,
++ &power_detect,
++ &mst_a_mclk_sel,
++ &mst_b_mclk_sel,
++ &mst_c_mclk_sel,
++ &mst_d_mclk_sel,
++ &mst_e_mclk_sel,
++ &mst_f_mclk_sel,
++ &mst_a_mclk_div,
++ &mst_b_mclk_div,
++ &mst_c_mclk_div,
++ &mst_d_mclk_div,
++ &mst_e_mclk_div,
++ &mst_f_mclk_div,
++ &mst_a_mclk,
++ &mst_b_mclk,
++ &mst_c_mclk,
++ &mst_d_mclk,
++ &mst_e_mclk,
++ &mst_f_mclk,
++ &spdifout_clk_sel,
++ &spdifout_clk_div,
++ &spdifout_clk,
++ &spdifin_clk_sel,
++ &spdifin_clk_div,
++ &spdifin_clk,
++ &pdm_dclk_sel,
++ &pdm_dclk_div,
++ &pdm_dclk,
++ &pdm_sysclk_sel,
++ &pdm_sysclk_div,
++ &pdm_sysclk,
++ &mst_a_sclk_pre_en,
++ &mst_b_sclk_pre_en,
++ &mst_c_sclk_pre_en,
++ &mst_d_sclk_pre_en,
++ &mst_e_sclk_pre_en,
++ &mst_f_sclk_pre_en,
++ &mst_a_sclk_div,
++ &mst_b_sclk_div,
++ &mst_c_sclk_div,
++ &mst_d_sclk_div,
++ &mst_e_sclk_div,
++ &mst_f_sclk_div,
++ &mst_a_sclk_post_en,
++ &mst_b_sclk_post_en,
++ &mst_c_sclk_post_en,
++ &mst_d_sclk_post_en,
++ &mst_e_sclk_post_en,
++ &mst_f_sclk_post_en,
++ &mst_a_sclk,
++ &mst_b_sclk,
++ &mst_c_sclk,
++ &mst_d_sclk,
++ &mst_e_sclk,
++ &mst_f_sclk,
++ &mst_a_lrclk_div,
++ &mst_b_lrclk_div,
++ &mst_c_lrclk_div,
++ &mst_d_lrclk_div,
++ &mst_e_lrclk_div,
++ &mst_f_lrclk_div,
++ &mst_a_lrclk,
++ &mst_b_lrclk,
++ &mst_c_lrclk,
++ &mst_d_lrclk,
++ &mst_e_lrclk,
++ &mst_f_lrclk,
++ &tdmin_a_sclk_sel,
++ &tdmin_b_sclk_sel,
++ &tdmin_c_sclk_sel,
++ &tdmin_lb_sclk_sel,
++ &tdmout_a_sclk_sel,
++ &tdmout_b_sclk_sel,
++ &tdmout_c_sclk_sel,
++ &tdmin_a_sclk_pre_en,
++ &tdmin_b_sclk_pre_en,
++ &tdmin_c_sclk_pre_en,
++ &tdmin_lb_sclk_pre_en,
++ &tdmout_a_sclk_pre_en,
++ &tdmout_b_sclk_pre_en,
++ &tdmout_c_sclk_pre_en,
++ &tdmin_a_sclk_post_en,
++ &tdmin_b_sclk_post_en,
++ &tdmin_c_sclk_post_en,
++ &tdmin_lb_sclk_post_en,
++ &tdmout_a_sclk_post_en,
++ &tdmout_b_sclk_post_en,
++ &tdmout_c_sclk_post_en,
++ &tdmin_a_sclk,
++ &tdmin_b_sclk,
++ &tdmin_c_sclk,
++ &tdmin_lb_sclk,
++ &tdmout_a_sclk,
++ &tdmout_b_sclk,
++ &tdmout_c_sclk,
++ &tdmin_a_lrclk,
++ &tdmin_b_lrclk,
++ &tdmin_c_lrclk,
++ &tdmin_lb_lrclk,
++ &tdmout_a_lrclk,
++ &tdmout_b_lrclk,
++ &tdmout_c_lrclk,
++};
++
++static struct clk_regmap *const g12a_clk_regmaps[] = {
+ &ddr_arb,
+ &pdm,
+ &tdmin_a,
+@@ -1713,8 +1832,8 @@ static const struct audioclk_data axg_audioclk_data = {
+ };
+
+ static const struct audioclk_data g12a_audioclk_data = {
+- .regmap_clks = axg_clk_regmaps,
+- .regmap_clk_num = ARRAY_SIZE(axg_clk_regmaps),
++ .regmap_clks = g12a_clk_regmaps,
++ .regmap_clk_num = ARRAY_SIZE(g12a_clk_regmaps),
+ .hw_onecell_data = &g12a_audio_hw_onecell_data,
+ .reset_offset = AUDIO_SW_RESET,
+ .reset_num = 26,
+diff --git a/drivers/clk/meson/g12a.c b/drivers/clk/meson/g12a.c
+index 30c15766ebb16..05d032be15c8f 100644
+--- a/drivers/clk/meson/g12a.c
++++ b/drivers/clk/meson/g12a.c
+@@ -298,6 +298,17 @@ static struct clk_regmap g12a_fclk_div2 = {
+ &g12a_fclk_div2_div.hw
+ },
+ .num_parents = 1,
++ /*
++ * Similar to fclk_div3, it seems that this clock is used by
++ * the resident firmware and is required by the platform to
++ * operate correctly.
++ * Until the following condition are met, we need this clock to
++ * be marked as critical:
++ * a) Mark the clock used by a firmware resource, if possible
++ * b) CCF has a clock hand-off mechanism to make the sure the
++ * clock stays on until the proper driver comes along
++ */
++ .flags = CLK_IS_CRITICAL,
+ },
+ };
+
+diff --git a/drivers/clk/qcom/gcc-sdm660.c b/drivers/clk/qcom/gcc-sdm660.c
+index c6fb57cd576f5..aa5c0c6ead017 100644
+--- a/drivers/clk/qcom/gcc-sdm660.c
++++ b/drivers/clk/qcom/gcc-sdm660.c
+@@ -666,7 +666,7 @@ static struct clk_rcg2 hmss_rbcpr_clk_src = {
+ .cmd_rcgr = 0x48044,
+ .mnd_width = 0,
+ .hid_width = 5,
+- .parent_map = gcc_parent_map_xo_gpll0_gpll0_early_div,
++ .parent_map = gcc_parent_map_xo_gpll0,
+ .freq_tbl = ftbl_hmss_rbcpr_clk_src,
+ .clkr.hw.init = &(struct clk_init_data){
+ .name = "hmss_rbcpr_clk_src",
+diff --git a/drivers/clk/rockchip/clk-half-divider.c b/drivers/clk/rockchip/clk-half-divider.c
+index b333fc28c94b6..37c858d689e0d 100644
+--- a/drivers/clk/rockchip/clk-half-divider.c
++++ b/drivers/clk/rockchip/clk-half-divider.c
+@@ -166,7 +166,7 @@ struct clk *rockchip_clk_register_halfdiv(const char *name,
+ unsigned long flags,
+ spinlock_t *lock)
+ {
+- struct clk *clk;
++ struct clk *clk = ERR_PTR(-ENOMEM);
+ struct clk_mux *mux = NULL;
+ struct clk_gate *gate = NULL;
+ struct clk_divider *div = NULL;
+diff --git a/drivers/clocksource/hyperv_timer.c b/drivers/clocksource/hyperv_timer.c
+index 09aa44cb8a91d..ba04cb381cd3f 100644
+--- a/drivers/clocksource/hyperv_timer.c
++++ b/drivers/clocksource/hyperv_timer.c
+@@ -341,7 +341,7 @@ static u64 notrace read_hv_clock_tsc_cs(struct clocksource *arg)
+ return read_hv_clock_tsc();
+ }
+
+-static u64 read_hv_sched_clock_tsc(void)
++static u64 notrace read_hv_sched_clock_tsc(void)
+ {
+ return (read_hv_clock_tsc() - hv_sched_clock_offset) *
+ (NSEC_PER_SEC / HV_CLOCK_HZ);
+@@ -404,7 +404,7 @@ static u64 notrace read_hv_clock_msr_cs(struct clocksource *arg)
+ return read_hv_clock_msr();
+ }
+
+-static u64 read_hv_sched_clock_msr(void)
++static u64 notrace read_hv_sched_clock_msr(void)
+ {
+ return (read_hv_clock_msr() - hv_sched_clock_offset) *
+ (NSEC_PER_SEC / HV_CLOCK_HZ);
+diff --git a/drivers/cpufreq/armada-37xx-cpufreq.c b/drivers/cpufreq/armada-37xx-cpufreq.c
+index df1c941260d14..b4af4094309b0 100644
+--- a/drivers/cpufreq/armada-37xx-cpufreq.c
++++ b/drivers/cpufreq/armada-37xx-cpufreq.c
+@@ -484,6 +484,12 @@ remove_opp:
+ /* late_initcall, to guarantee the driver is loaded after A37xx clock driver */
+ late_initcall(armada37xx_cpufreq_driver_init);
+
++static const struct of_device_id __maybe_unused armada37xx_cpufreq_of_match[] = {
++ { .compatible = "marvell,armada-3700-nb-pm" },
++ { },
++};
++MODULE_DEVICE_TABLE(of, armada37xx_cpufreq_of_match);
++
+ MODULE_AUTHOR("Gregory CLEMENT <gregory.clement@free-electrons.com>");
+ MODULE_DESCRIPTION("Armada 37xx cpufreq driver");
+ MODULE_LICENSE("GPL");
+diff --git a/drivers/cpufreq/powernv-cpufreq.c b/drivers/cpufreq/powernv-cpufreq.c
+index 8646eb197cd96..31f5c4ebbac9f 100644
+--- a/drivers/cpufreq/powernv-cpufreq.c
++++ b/drivers/cpufreq/powernv-cpufreq.c
+@@ -884,12 +884,15 @@ static int powernv_cpufreq_reboot_notifier(struct notifier_block *nb,
+ unsigned long action, void *unused)
+ {
+ int cpu;
+- struct cpufreq_policy cpu_policy;
++ struct cpufreq_policy *cpu_policy;
+
+ rebooting = true;
+ for_each_online_cpu(cpu) {
+- cpufreq_get_policy(&cpu_policy, cpu);
+- powernv_cpufreq_target_index(&cpu_policy, get_nominal_index());
++ cpu_policy = cpufreq_cpu_get(cpu);
++ if (!cpu_policy)
++ continue;
++ powernv_cpufreq_target_index(cpu_policy, get_nominal_index());
++ cpufreq_cpu_put(cpu_policy);
+ }
+
+ return NOTIFY_DONE;
+diff --git a/drivers/crypto/allwinner/sun8i-ce/sun8i-ce-core.c b/drivers/crypto/allwinner/sun8i-ce/sun8i-ce-core.c
+index b957061424a1f..8f3d6d31da52f 100644
+--- a/drivers/crypto/allwinner/sun8i-ce/sun8i-ce-core.c
++++ b/drivers/crypto/allwinner/sun8i-ce/sun8i-ce-core.c
+@@ -120,7 +120,10 @@ int sun8i_ce_run_task(struct sun8i_ce_dev *ce, int flow, const char *name)
+ /* Be sure all data is written before enabling the task */
+ wmb();
+
+- v = 1 | (ce->chanlist[flow].tl->t_common_ctl & 0x7F) << 8;
++ /* Only H6 needs to write a part of t_common_ctl along with "1", but since it is ignored
++ * on older SoCs, we have no reason to complicate things.
++ */
++ v = 1 | ((le32_to_cpu(ce->chanlist[flow].tl->t_common_ctl) & 0x7F) << 8);
+ writel(v, ce->base + CE_TLR);
+ mutex_unlock(&ce->mlock);
+
+diff --git a/drivers/crypto/caam/Kconfig b/drivers/crypto/caam/Kconfig
+index bc35aa0ec07ae..d7f2840cf0a94 100644
+--- a/drivers/crypto/caam/Kconfig
++++ b/drivers/crypto/caam/Kconfig
+@@ -114,6 +114,7 @@ config CRYPTO_DEV_FSL_CAAM_CRYPTO_API_QI
+ select CRYPTO_AUTHENC
+ select CRYPTO_SKCIPHER
+ select CRYPTO_DES
++ select CRYPTO_XTS
+ help
+ Selecting this will use CAAM Queue Interface (QI) for sending
+ & receiving crypto jobs to/from CAAM. This gives better performance
+diff --git a/drivers/crypto/caam/caamalg_qi.c b/drivers/crypto/caam/caamalg_qi.c
+index 315d53499ce85..829d41a1e5da1 100644
+--- a/drivers/crypto/caam/caamalg_qi.c
++++ b/drivers/crypto/caam/caamalg_qi.c
+@@ -18,6 +18,8 @@
+ #include "qi.h"
+ #include "jr.h"
+ #include "caamalg_desc.h"
++#include <crypto/xts.h>
++#include <asm/unaligned.h>
+
+ /*
+ * crypto alg
+@@ -67,6 +69,12 @@ struct caam_ctx {
+ struct device *qidev;
+ spinlock_t lock; /* Protects multiple init of driver context */
+ struct caam_drv_ctx *drv_ctx[NUM_OP];
++ bool xts_key_fallback;
++ struct crypto_skcipher *fallback;
++};
++
++struct caam_skcipher_req_ctx {
++ struct skcipher_request fallback_req;
+ };
+
+ static int aead_set_sh_desc(struct crypto_aead *aead)
+@@ -726,12 +734,21 @@ static int xts_skcipher_setkey(struct crypto_skcipher *skcipher, const u8 *key,
+ struct caam_ctx *ctx = crypto_skcipher_ctx(skcipher);
+ struct device *jrdev = ctx->jrdev;
+ int ret = 0;
++ int err;
+
+- if (keylen != 2 * AES_MIN_KEY_SIZE && keylen != 2 * AES_MAX_KEY_SIZE) {
++ err = xts_verify_key(skcipher, key, keylen);
++ if (err) {
+ dev_dbg(jrdev, "key size mismatch\n");
+- return -EINVAL;
++ return err;
+ }
+
++ if (keylen != 2 * AES_KEYSIZE_128 && keylen != 2 * AES_KEYSIZE_256)
++ ctx->xts_key_fallback = true;
++
++ err = crypto_skcipher_setkey(ctx->fallback, key, keylen);
++ if (err)
++ return err;
++
+ ctx->cdata.keylen = keylen;
+ ctx->cdata.key_virt = key;
+ ctx->cdata.key_inline = true;
+@@ -1373,6 +1390,14 @@ static struct skcipher_edesc *skcipher_edesc_alloc(struct skcipher_request *req,
+ return edesc;
+ }
+
++static inline bool xts_skcipher_ivsize(struct skcipher_request *req)
++{
++ struct crypto_skcipher *skcipher = crypto_skcipher_reqtfm(req);
++ unsigned int ivsize = crypto_skcipher_ivsize(skcipher);
++
++ return !!get_unaligned((u64 *)(req->iv + (ivsize / 2)));
++}
++
+ static inline int skcipher_crypt(struct skcipher_request *req, bool encrypt)
+ {
+ struct skcipher_edesc *edesc;
+@@ -1383,6 +1408,22 @@ static inline int skcipher_crypt(struct skcipher_request *req, bool encrypt)
+ if (!req->cryptlen)
+ return 0;
+
++ if (ctx->fallback && (xts_skcipher_ivsize(req) ||
++ ctx->xts_key_fallback)) {
++ struct caam_skcipher_req_ctx *rctx = skcipher_request_ctx(req);
++
++ skcipher_request_set_tfm(&rctx->fallback_req, ctx->fallback);
++ skcipher_request_set_callback(&rctx->fallback_req,
++ req->base.flags,
++ req->base.complete,
++ req->base.data);
++ skcipher_request_set_crypt(&rctx->fallback_req, req->src,
++ req->dst, req->cryptlen, req->iv);
++
++ return encrypt ? crypto_skcipher_encrypt(&rctx->fallback_req) :
++ crypto_skcipher_decrypt(&rctx->fallback_req);
++ }
++
+ if (unlikely(caam_congested))
+ return -EAGAIN;
+
+@@ -1507,6 +1548,7 @@ static struct caam_skcipher_alg driver_algs[] = {
+ .base = {
+ .cra_name = "xts(aes)",
+ .cra_driver_name = "xts-aes-caam-qi",
++ .cra_flags = CRYPTO_ALG_NEED_FALLBACK,
+ .cra_blocksize = AES_BLOCK_SIZE,
+ },
+ .setkey = xts_skcipher_setkey,
+@@ -2440,9 +2482,32 @@ static int caam_cra_init(struct crypto_skcipher *tfm)
+ struct skcipher_alg *alg = crypto_skcipher_alg(tfm);
+ struct caam_skcipher_alg *caam_alg =
+ container_of(alg, typeof(*caam_alg), skcipher);
++ struct caam_ctx *ctx = crypto_skcipher_ctx(tfm);
++ u32 alg_aai = caam_alg->caam.class1_alg_type & OP_ALG_AAI_MASK;
++ int ret = 0;
++
++ if (alg_aai == OP_ALG_AAI_XTS) {
++ const char *tfm_name = crypto_tfm_alg_name(&tfm->base);
++ struct crypto_skcipher *fallback;
++
++ fallback = crypto_alloc_skcipher(tfm_name, 0,
++ CRYPTO_ALG_NEED_FALLBACK);
++ if (IS_ERR(fallback)) {
++ dev_err(ctx->jrdev, "Failed to allocate %s fallback: %ld\n",
++ tfm_name, PTR_ERR(fallback));
++ return PTR_ERR(fallback);
++ }
++
++ ctx->fallback = fallback;
++ crypto_skcipher_set_reqsize(tfm, sizeof(struct caam_skcipher_req_ctx) +
++ crypto_skcipher_reqsize(fallback));
++ }
++
++ ret = caam_init_common(ctx, &caam_alg->caam, false);
++ if (ret && ctx->fallback)
++ crypto_free_skcipher(ctx->fallback);
+
+- return caam_init_common(crypto_skcipher_ctx(tfm), &caam_alg->caam,
+- false);
++ return ret;
+ }
+
+ static int caam_aead_init(struct crypto_aead *tfm)
+@@ -2468,7 +2533,11 @@ static void caam_exit_common(struct caam_ctx *ctx)
+
+ static void caam_cra_exit(struct crypto_skcipher *tfm)
+ {
+- caam_exit_common(crypto_skcipher_ctx(tfm));
++ struct caam_ctx *ctx = crypto_skcipher_ctx(tfm);
++
++ if (ctx->fallback)
++ crypto_free_skcipher(ctx->fallback);
++ caam_exit_common(ctx);
+ }
+
+ static void caam_aead_exit(struct crypto_aead *tfm)
+@@ -2502,7 +2571,7 @@ static void caam_skcipher_alg_init(struct caam_skcipher_alg *t_alg)
+ alg->base.cra_module = THIS_MODULE;
+ alg->base.cra_priority = CAAM_CRA_PRIORITY;
+ alg->base.cra_ctxsize = sizeof(struct caam_ctx);
+- alg->base.cra_flags = CRYPTO_ALG_ASYNC | CRYPTO_ALG_KERN_DRIVER_ONLY;
++ alg->base.cra_flags |= CRYPTO_ALG_ASYNC | CRYPTO_ALG_KERN_DRIVER_ONLY;
+
+ alg->init = caam_cra_init;
+ alg->exit = caam_cra_exit;
+diff --git a/drivers/crypto/ccp/ccp-ops.c b/drivers/crypto/ccp/ccp-ops.c
+index 64112c736810e..7234b95241e91 100644
+--- a/drivers/crypto/ccp/ccp-ops.c
++++ b/drivers/crypto/ccp/ccp-ops.c
+@@ -1746,7 +1746,7 @@ ccp_run_sha_cmd(struct ccp_cmd_queue *cmd_q, struct ccp_cmd *cmd)
+ break;
+ default:
+ ret = -EINVAL;
+- goto e_ctx;
++ goto e_data;
+ }
+ } else {
+ /* Stash the context */
+diff --git a/drivers/crypto/ccree/cc_pm.c b/drivers/crypto/ccree/cc_pm.c
+index d39e1664fc7ed..3c65bf070c908 100644
+--- a/drivers/crypto/ccree/cc_pm.c
++++ b/drivers/crypto/ccree/cc_pm.c
+@@ -65,8 +65,12 @@ const struct dev_pm_ops ccree_pm = {
+ int cc_pm_get(struct device *dev)
+ {
+ int rc = pm_runtime_get_sync(dev);
++ if (rc < 0) {
++ pm_runtime_put_noidle(dev);
++ return rc;
++ }
+
+- return (rc == 1 ? 0 : rc);
++ return 0;
+ }
+
+ void cc_pm_put_suspend(struct device *dev)
+diff --git a/drivers/crypto/chelsio/chtls/chtls_cm.c b/drivers/crypto/chelsio/chtls/chtls_cm.c
+index 54093115eb95d..bad8e90ba168d 100644
+--- a/drivers/crypto/chelsio/chtls/chtls_cm.c
++++ b/drivers/crypto/chelsio/chtls/chtls_cm.c
+@@ -92,11 +92,13 @@ static void chtls_sock_release(struct kref *ref)
+ static struct net_device *chtls_find_netdev(struct chtls_dev *cdev,
+ struct sock *sk)
+ {
++ struct adapter *adap = pci_get_drvdata(cdev->pdev);
+ struct net_device *ndev = cdev->ports[0];
+ #if IS_ENABLED(CONFIG_IPV6)
+ struct net_device *temp;
+ int addr_type;
+ #endif
++ int i;
+
+ switch (sk->sk_family) {
+ case PF_INET:
+@@ -127,8 +129,12 @@ static struct net_device *chtls_find_netdev(struct chtls_dev *cdev,
+ return NULL;
+
+ if (is_vlan_dev(ndev))
+- return vlan_dev_real_dev(ndev);
+- return ndev;
++ ndev = vlan_dev_real_dev(ndev);
++
++ for_each_port(adap, i)
++ if (cdev->ports[i] == ndev)
++ return ndev;
++ return NULL;
+ }
+
+ static void assign_rxopt(struct sock *sk, unsigned int opt)
+@@ -477,7 +483,6 @@ void chtls_destroy_sock(struct sock *sk)
+ chtls_purge_write_queue(sk);
+ free_tls_keyid(sk);
+ kref_put(&csk->kref, chtls_sock_release);
+- csk->cdev = NULL;
+ if (sk->sk_family == AF_INET)
+ sk->sk_prot = &tcp_prot;
+ #if IS_ENABLED(CONFIG_IPV6)
+@@ -736,14 +741,13 @@ void chtls_listen_stop(struct chtls_dev *cdev, struct sock *sk)
+
+ #if IS_ENABLED(CONFIG_IPV6)
+ if (sk->sk_family == PF_INET6) {
+- struct chtls_sock *csk;
++ struct net_device *ndev = chtls_find_netdev(cdev, sk);
+ int addr_type = 0;
+
+- csk = rcu_dereference_sk_user_data(sk);
+ addr_type = ipv6_addr_type((const struct in6_addr *)
+ &sk->sk_v6_rcv_saddr);
+ if (addr_type != IPV6_ADDR_ANY)
+- cxgb4_clip_release(csk->egress_dev, (const u32 *)
++ cxgb4_clip_release(ndev, (const u32 *)
+ &sk->sk_v6_rcv_saddr, 1);
+ }
+ #endif
+@@ -1156,6 +1160,9 @@ static struct sock *chtls_recv_sock(struct sock *lsk,
+ ndev = n->dev;
+ if (!ndev)
+ goto free_dst;
++ if (is_vlan_dev(ndev))
++ ndev = vlan_dev_real_dev(ndev);
++
+ port_id = cxgb4_port_idx(ndev);
+
+ csk = chtls_sock_create(cdev);
+diff --git a/drivers/crypto/chelsio/chtls/chtls_io.c b/drivers/crypto/chelsio/chtls/chtls_io.c
+index 2e9acae1cba3b..9fb5ca6682ea2 100644
+--- a/drivers/crypto/chelsio/chtls/chtls_io.c
++++ b/drivers/crypto/chelsio/chtls/chtls_io.c
+@@ -902,9 +902,9 @@ static int chtls_skb_copy_to_page_nocache(struct sock *sk,
+ return 0;
+ }
+
+-static int csk_mem_free(struct chtls_dev *cdev, struct sock *sk)
++static bool csk_mem_free(struct chtls_dev *cdev, struct sock *sk)
+ {
+- return (cdev->max_host_sndbuf - sk->sk_wmem_queued);
++ return (cdev->max_host_sndbuf - sk->sk_wmem_queued > 0);
+ }
+
+ static int csk_wait_memory(struct chtls_dev *cdev,
+@@ -1240,6 +1240,7 @@ int chtls_sendpage(struct sock *sk, struct page *page,
+ copied = 0;
+ csk = rcu_dereference_sk_user_data(sk);
+ cdev = csk->cdev;
++ lock_sock(sk);
+ timeo = sock_sndtimeo(sk, flags & MSG_DONTWAIT);
+
+ err = sk_stream_wait_connect(sk, &timeo);
+diff --git a/drivers/crypto/hisilicon/sec2/sec_crypto.c b/drivers/crypto/hisilicon/sec2/sec_crypto.c
+index 64614a9bdf219..047826f18bd35 100644
+--- a/drivers/crypto/hisilicon/sec2/sec_crypto.c
++++ b/drivers/crypto/hisilicon/sec2/sec_crypto.c
+@@ -332,11 +332,14 @@ static int sec_alg_resource_alloc(struct sec_ctx *ctx,
+ ret = sec_alloc_pbuf_resource(dev, res);
+ if (ret) {
+ dev_err(dev, "fail to alloc pbuf dma resource!\n");
+- goto alloc_fail;
++ goto alloc_pbuf_fail;
+ }
+ }
+
+ return 0;
++alloc_pbuf_fail:
++ if (ctx->alg_type == SEC_AEAD)
++ sec_free_mac_resource(dev, qp_ctx->res);
+ alloc_fail:
+ sec_free_civ_resource(dev, res);
+
+@@ -447,8 +450,10 @@ static int sec_ctx_base_init(struct sec_ctx *ctx)
+ ctx->fake_req_limit = QM_Q_DEPTH >> 1;
+ ctx->qp_ctx = kcalloc(sec->ctx_q_num, sizeof(struct sec_qp_ctx),
+ GFP_KERNEL);
+- if (!ctx->qp_ctx)
+- return -ENOMEM;
++ if (!ctx->qp_ctx) {
++ ret = -ENOMEM;
++ goto err_destroy_qps;
++ }
+
+ for (i = 0; i < sec->ctx_q_num; i++) {
+ ret = sec_create_qp_ctx(&sec->qm, ctx, i, 0);
+@@ -457,12 +462,15 @@ static int sec_ctx_base_init(struct sec_ctx *ctx)
+ }
+
+ return 0;
++
+ err_sec_release_qp_ctx:
+ for (i = i - 1; i >= 0; i--)
+ sec_release_qp_ctx(ctx, &ctx->qp_ctx[i]);
+
+- sec_destroy_qps(ctx->qps, sec->ctx_q_num);
+ kfree(ctx->qp_ctx);
++err_destroy_qps:
++ sec_destroy_qps(ctx->qps, sec->ctx_q_num);
++
+ return ret;
+ }
+
+diff --git a/drivers/crypto/ixp4xx_crypto.c b/drivers/crypto/ixp4xx_crypto.c
+index ad73fc9466821..3be6e0db0f9fc 100644
+--- a/drivers/crypto/ixp4xx_crypto.c
++++ b/drivers/crypto/ixp4xx_crypto.c
+@@ -528,7 +528,7 @@ static void release_ixp_crypto(struct device *dev)
+
+ if (crypt_virt) {
+ dma_free_coherent(dev,
+- NPE_QLEN_TOTAL * sizeof( struct crypt_ctl),
++ NPE_QLEN * sizeof(struct crypt_ctl),
+ crypt_virt, crypt_phys);
+ }
+ }
+diff --git a/drivers/crypto/mediatek/mtk-platform.c b/drivers/crypto/mediatek/mtk-platform.c
+index 7e3ad085b5bdd..efce3a83b35a8 100644
+--- a/drivers/crypto/mediatek/mtk-platform.c
++++ b/drivers/crypto/mediatek/mtk-platform.c
+@@ -442,7 +442,7 @@ static void mtk_desc_dma_free(struct mtk_cryp *cryp)
+ static int mtk_desc_ring_alloc(struct mtk_cryp *cryp)
+ {
+ struct mtk_ring **ring = cryp->ring;
+- int i, err = ENOMEM;
++ int i;
+
+ for (i = 0; i < MTK_RING_MAX; i++) {
+ ring[i] = kzalloc(sizeof(**ring), GFP_KERNEL);
+@@ -469,14 +469,14 @@ static int mtk_desc_ring_alloc(struct mtk_cryp *cryp)
+ return 0;
+
+ err_cleanup:
+- for (; i--; ) {
++ do {
+ dma_free_coherent(cryp->dev, MTK_DESC_RING_SZ,
+ ring[i]->res_base, ring[i]->res_dma);
+ dma_free_coherent(cryp->dev, MTK_DESC_RING_SZ,
+ ring[i]->cmd_base, ring[i]->cmd_dma);
+ kfree(ring[i]);
+- }
+- return err;
++ } while (i--);
++ return -ENOMEM;
+ }
+
+ static int mtk_crypto_probe(struct platform_device *pdev)
+diff --git a/drivers/crypto/omap-sham.c b/drivers/crypto/omap-sham.c
+index 82691a057d2a1..bc956dfb34de6 100644
+--- a/drivers/crypto/omap-sham.c
++++ b/drivers/crypto/omap-sham.c
+@@ -456,6 +456,9 @@ static void omap_sham_write_ctrl_omap4(struct omap_sham_dev *dd, size_t length,
+ struct omap_sham_reqctx *ctx = ahash_request_ctx(dd->req);
+ u32 val, mask;
+
++ if (likely(ctx->digcnt))
++ omap_sham_write(dd, SHA_REG_DIGCNT(dd), ctx->digcnt);
++
+ /*
+ * Setting ALGO_CONST only for the first iteration and
+ * CLOSE_HASH only for the last one. Note that flags mode bits
+diff --git a/drivers/crypto/picoxcell_crypto.c b/drivers/crypto/picoxcell_crypto.c
+index 7384e91c8b32b..0d32b641a7f9d 100644
+--- a/drivers/crypto/picoxcell_crypto.c
++++ b/drivers/crypto/picoxcell_crypto.c
+@@ -1666,11 +1666,6 @@ static int spacc_probe(struct platform_device *pdev)
+ goto err_clk_put;
+ }
+
+- ret = device_create_file(&pdev->dev, &dev_attr_stat_irq_thresh);
+- if (ret)
+- goto err_clk_disable;
+-
+-
+ /*
+ * Use an IRQ threshold of 50% as a default. This seems to be a
+ * reasonable trade off of latency against throughput but can be
+@@ -1678,6 +1673,10 @@ static int spacc_probe(struct platform_device *pdev)
+ */
+ engine->stat_irq_thresh = (engine->fifo_sz / 2);
+
++ ret = device_create_file(&pdev->dev, &dev_attr_stat_irq_thresh);
++ if (ret)
++ goto err_clk_disable;
++
+ /*
+ * Configure the interrupts. We only use the STAT_CNT interrupt as we
+ * only submit a new packet for processing when we complete another in
+diff --git a/drivers/crypto/stm32/Kconfig b/drivers/crypto/stm32/Kconfig
+index 4ef3eb11361c2..4a4c3284ae1f3 100644
+--- a/drivers/crypto/stm32/Kconfig
++++ b/drivers/crypto/stm32/Kconfig
+@@ -3,6 +3,7 @@ config CRYPTO_DEV_STM32_CRC
+ tristate "Support for STM32 crc accelerators"
+ depends on ARCH_STM32
+ select CRYPTO_HASH
++ select CRC32
+ help
+ This enables support for the CRC32 hw accelerator which can be found
+ on STMicroelectronics STM32 SOC.
+diff --git a/drivers/crypto/stm32/stm32-crc32.c b/drivers/crypto/stm32/stm32-crc32.c
+index 3ba41148c2a46..2c13f5214d2cf 100644
+--- a/drivers/crypto/stm32/stm32-crc32.c
++++ b/drivers/crypto/stm32/stm32-crc32.c
+@@ -6,6 +6,7 @@
+
+ #include <linux/bitrev.h>
+ #include <linux/clk.h>
++#include <linux/crc32.h>
+ #include <linux/crc32poly.h>
+ #include <linux/module.h>
+ #include <linux/mod_devicetable.h>
+@@ -147,7 +148,6 @@ static int burst_update(struct shash_desc *desc, const u8 *d8,
+ struct stm32_crc_desc_ctx *ctx = shash_desc_ctx(desc);
+ struct stm32_crc_ctx *mctx = crypto_shash_ctx(desc->tfm);
+ struct stm32_crc *crc;
+- unsigned long flags;
+
+ crc = stm32_crc_get_next_crc();
+ if (!crc)
+@@ -155,7 +155,15 @@ static int burst_update(struct shash_desc *desc, const u8 *d8,
+
+ pm_runtime_get_sync(crc->dev);
+
+- spin_lock_irqsave(&crc->lock, flags);
++ if (!spin_trylock(&crc->lock)) {
++ /* Hardware is busy, calculate crc32 by software */
++ if (mctx->poly == CRC32_POLY_LE)
++ ctx->partial = crc32_le(ctx->partial, d8, length);
++ else
++ ctx->partial = __crc32c_le(ctx->partial, d8, length);
++
++ goto pm_out;
++ }
+
+ /*
+ * Restore previously calculated CRC for this context as init value
+@@ -195,8 +203,9 @@ static int burst_update(struct shash_desc *desc, const u8 *d8,
+ /* Store partial result */
+ ctx->partial = readl_relaxed(crc->regs + CRC_DR);
+
+- spin_unlock_irqrestore(&crc->lock, flags);
++ spin_unlock(&crc->lock);
+
++pm_out:
+ pm_runtime_mark_last_busy(crc->dev);
+ pm_runtime_put_autosuspend(crc->dev);
+
+diff --git a/drivers/dma/dmatest.c b/drivers/dma/dmatest.c
+index 323822372b4ce..7480fc1042093 100644
+--- a/drivers/dma/dmatest.c
++++ b/drivers/dma/dmatest.c
+@@ -1240,15 +1240,14 @@ static int dmatest_chan_set(const char *val, const struct kernel_param *kp)
+ add_threaded_test(info);
+
+ /* Check if channel was added successfully */
+- dtc = list_last_entry(&info->channels, struct dmatest_chan, node);
+-
+- if (dtc->chan) {
++ if (!list_empty(&info->channels)) {
+ /*
+ * if new channel was not successfully added, revert the
+ * "test_channel" string to the name of the last successfully
+ * added channel. exception for when users issues empty string
+ * to channel parameter.
+ */
++ dtc = list_last_entry(&info->channels, struct dmatest_chan, node);
+ if ((strcmp(dma_chan_name(dtc->chan), strim(test_channel)) != 0)
+ && (strcmp("", strim(test_channel)) != 0)) {
+ ret = -EINVAL;
+diff --git a/drivers/dma/dw/core.c b/drivers/dma/dw/core.c
+index a1b56f52db2f2..5e7fdc0b6e3db 100644
+--- a/drivers/dma/dw/core.c
++++ b/drivers/dma/dw/core.c
+@@ -772,6 +772,10 @@ bool dw_dma_filter(struct dma_chan *chan, void *param)
+ if (dws->dma_dev != chan->device->dev)
+ return false;
+
++ /* permit channels in accordance with the channels mask */
++ if (dws->channels && !(dws->channels & dwc->mask))
++ return false;
++
+ /* We have to copy data since dws can be temporary storage */
+ memcpy(&dwc->dws, dws, sizeof(struct dw_dma_slave));
+
+diff --git a/drivers/dma/dw/dw.c b/drivers/dma/dw/dw.c
+index 7a085b3c1854c..d9810980920a1 100644
+--- a/drivers/dma/dw/dw.c
++++ b/drivers/dma/dw/dw.c
+@@ -14,7 +14,7 @@
+ static void dw_dma_initialize_chan(struct dw_dma_chan *dwc)
+ {
+ struct dw_dma *dw = to_dw_dma(dwc->chan.device);
+- u32 cfghi = DWC_CFGH_FIFO_MODE;
++ u32 cfghi = is_slave_direction(dwc->direction) ? 0 : DWC_CFGH_FIFO_MODE;
+ u32 cfglo = DWC_CFGL_CH_PRIOR(dwc->priority);
+ bool hs_polarity = dwc->dws.hs_polarity;
+
+diff --git a/drivers/dma/dw/of.c b/drivers/dma/dw/of.c
+index 9e27831dee324..43e975fb67142 100644
+--- a/drivers/dma/dw/of.c
++++ b/drivers/dma/dw/of.c
+@@ -22,18 +22,21 @@ static struct dma_chan *dw_dma_of_xlate(struct of_phandle_args *dma_spec,
+ };
+ dma_cap_mask_t cap;
+
+- if (dma_spec->args_count != 3)
++ if (dma_spec->args_count < 3 || dma_spec->args_count > 4)
+ return NULL;
+
+ slave.src_id = dma_spec->args[0];
+ slave.dst_id = dma_spec->args[0];
+ slave.m_master = dma_spec->args[1];
+ slave.p_master = dma_spec->args[2];
++ if (dma_spec->args_count >= 4)
++ slave.channels = dma_spec->args[3];
+
+ if (WARN_ON(slave.src_id >= DW_DMA_MAX_NR_REQUESTS ||
+ slave.dst_id >= DW_DMA_MAX_NR_REQUESTS ||
+ slave.m_master >= dw->pdata->nr_masters ||
+- slave.p_master >= dw->pdata->nr_masters))
++ slave.p_master >= dw->pdata->nr_masters ||
++ slave.channels >= BIT(dw->pdata->nr_channels)))
+ return NULL;
+
+ dma_cap_zero(cap);
+diff --git a/drivers/dma/ioat/dma.c b/drivers/dma/ioat/dma.c
+index fd782aee02d92..98c56606ab1a9 100644
+--- a/drivers/dma/ioat/dma.c
++++ b/drivers/dma/ioat/dma.c
+@@ -389,7 +389,7 @@ ioat_alloc_ring(struct dma_chan *c, int order, gfp_t flags)
+ struct ioat_descs *descs = &ioat_chan->descs[i];
+
+ descs->virt = dma_alloc_coherent(to_dev(ioat_chan),
+- SZ_2M, &descs->hw, flags);
++ IOAT_CHUNK_SIZE, &descs->hw, flags);
+ if (!descs->virt) {
+ int idx;
+
+diff --git a/drivers/edac/aspeed_edac.c b/drivers/edac/aspeed_edac.c
+index b194658b8b5c9..fbec28dc661d7 100644
+--- a/drivers/edac/aspeed_edac.c
++++ b/drivers/edac/aspeed_edac.c
+@@ -209,8 +209,8 @@ static int config_irq(void *ctx, struct platform_device *pdev)
+ /* register interrupt handler */
+ irq = platform_get_irq(pdev, 0);
+ dev_dbg(&pdev->dev, "got irq %d\n", irq);
+- if (!irq)
+- return -ENODEV;
++ if (irq < 0)
++ return irq;
+
+ rc = devm_request_irq(&pdev->dev, irq, mcr_isr, IRQF_TRIGGER_HIGH,
+ DRV_NAME, ctx);
+diff --git a/drivers/edac/i5100_edac.c b/drivers/edac/i5100_edac.c
+index 191aa7c19ded7..324a46b8479b0 100644
+--- a/drivers/edac/i5100_edac.c
++++ b/drivers/edac/i5100_edac.c
+@@ -1061,16 +1061,15 @@ static int i5100_init_one(struct pci_dev *pdev, const struct pci_device_id *id)
+ PCI_DEVICE_ID_INTEL_5100_19, 0);
+ if (!einj) {
+ ret = -ENODEV;
+- goto bail_einj;
++ goto bail_mc_free;
+ }
+
+ rc = pci_enable_device(einj);
+ if (rc < 0) {
+ ret = rc;
+- goto bail_disable_einj;
++ goto bail_einj;
+ }
+
+-
+ mci->pdev = &pdev->dev;
+
+ priv = mci->pvt_info;
+@@ -1136,14 +1135,14 @@ static int i5100_init_one(struct pci_dev *pdev, const struct pci_device_id *id)
+ bail_scrub:
+ priv->scrub_enable = 0;
+ cancel_delayed_work_sync(&(priv->i5100_scrubbing));
+- edac_mc_free(mci);
+-
+-bail_disable_einj:
+ pci_disable_device(einj);
+
+ bail_einj:
+ pci_dev_put(einj);
+
++bail_mc_free:
++ edac_mc_free(mci);
++
+ bail_disable_ch1:
+ pci_disable_device(ch1mm);
+
+diff --git a/drivers/edac/ti_edac.c b/drivers/edac/ti_edac.c
+index 8be3e89a510e4..d7419a90a2f5b 100644
+--- a/drivers/edac/ti_edac.c
++++ b/drivers/edac/ti_edac.c
+@@ -278,7 +278,8 @@ static int ti_edac_probe(struct platform_device *pdev)
+
+ /* add EMIF ECC error handler */
+ error_irq = platform_get_irq(pdev, 0);
+- if (!error_irq) {
++ if (error_irq < 0) {
++ ret = error_irq;
+ edac_printk(KERN_ERR, EDAC_MOD_NAME,
+ "EMIF irq number not defined.\n");
+ goto err;
+diff --git a/drivers/firmware/arm_scmi/mailbox.c b/drivers/firmware/arm_scmi/mailbox.c
+index 6998dc86b5ce8..b797a713c3313 100644
+--- a/drivers/firmware/arm_scmi/mailbox.c
++++ b/drivers/firmware/arm_scmi/mailbox.c
+@@ -110,7 +110,7 @@ static int mailbox_chan_free(int id, void *p, void *data)
+ struct scmi_chan_info *cinfo = p;
+ struct scmi_mailbox *smbox = cinfo->transport_info;
+
+- if (!IS_ERR(smbox->chan)) {
++ if (smbox && !IS_ERR(smbox->chan)) {
+ mbox_free_channel(smbox->chan);
+ cinfo->transport_info = NULL;
+ smbox->chan = NULL;
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index 7c1cc0ba30a55..78cf9e4fddbdf 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -8178,8 +8178,7 @@ static int dm_update_plane_state(struct dc *dc,
+ dm_old_plane_state->dc_state,
+ dm_state->context)) {
+
+- ret = EINVAL;
+- return ret;
++ return -EINVAL;
+ }
+
+
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc.c b/drivers/gpu/drm/amd/display/dc/core/dc.c
+index d016f50e187c8..d261f425b80ec 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc.c
+@@ -2538,7 +2538,7 @@ void dc_commit_updates_for_stream(struct dc *dc,
+
+ copy_stream_update_to_stream(dc, context, stream, stream_update);
+
+- if (update_type > UPDATE_TYPE_FAST) {
++ if (update_type >= UPDATE_TYPE_FULL) {
+ if (!dc->res_pool->funcs->validate_bandwidth(dc, context, false)) {
+ DC_ERROR("Mode validation failed for stream update!\n");
+ dc_release_state(context);
+diff --git a/drivers/gpu/drm/amd/display/dc/dce/dce_panel_cntl.c b/drivers/gpu/drm/amd/display/dc/dce/dce_panel_cntl.c
+index ebff9b1e312e5..124c081a0f2ca 100644
+--- a/drivers/gpu/drm/amd/display/dc/dce/dce_panel_cntl.c
++++ b/drivers/gpu/drm/amd/display/dc/dce/dce_panel_cntl.c
+@@ -75,7 +75,7 @@ static unsigned int calculate_16_bit_backlight_from_pwm(struct dce_panel_cntl *d
+ else
+ bl_pwm &= 0xFFFF;
+
+- current_backlight = bl_pwm << (1 + bl_int_count);
++ current_backlight = (uint64_t)bl_pwm << (1 + bl_int_count);
+
+ if (bl_period == 0)
+ bl_period = 0xFFFF;
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c
+index 20bdabebbc434..76cd4f3de4eaf 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c
+@@ -3165,6 +3165,9 @@ static noinline bool dcn20_validate_bandwidth_fp(struct dc *dc,
+ context->bw_ctx.dml.soc.allow_dram_clock_one_display_vactive =
+ dc->debug.enable_dram_clock_change_one_display_vactive;
+
++ /*Unsafe due to current pipe merge and split logic*/
++ ASSERT(context != dc->current_state);
++
+ if (fast_validate) {
+ return dcn20_validate_bandwidth_internal(dc, context, true);
+ }
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_resource.c b/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_resource.c
+index f00a568350848..c6ab3dee4fd69 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_resource.c
+@@ -1184,6 +1184,9 @@ bool dcn21_validate_bandwidth(struct dc *dc, struct dc_state *context,
+
+ BW_VAL_TRACE_COUNT();
+
++ /*Unsafe due to current pipe merge and split logic*/
++ ASSERT(context != dc->current_state);
++
+ out = dcn20_fast_validate_bw(dc, context, pipes, &pipe_cnt, pipe_split_from, &vlevel);
+
+ if (pipe_cnt == 0)
+diff --git a/drivers/gpu/drm/drm_debugfs_crc.c b/drivers/gpu/drm/drm_debugfs_crc.c
+index 5d67a41f7c3a8..3dd70d813f694 100644
+--- a/drivers/gpu/drm/drm_debugfs_crc.c
++++ b/drivers/gpu/drm/drm_debugfs_crc.c
+@@ -144,8 +144,10 @@ static ssize_t crc_control_write(struct file *file, const char __user *ubuf,
+ source[len - 1] = '\0';
+
+ ret = crtc->funcs->verify_crc_source(crtc, source, &values_cnt);
+- if (ret)
++ if (ret) {
++ kfree(source);
+ return ret;
++ }
+
+ spin_lock_irq(&crc->lock);
+
+diff --git a/drivers/gpu/drm/drm_gem_vram_helper.c b/drivers/gpu/drm/drm_gem_vram_helper.c
+index 8b2d5c945c95c..1d85af9a481ac 100644
+--- a/drivers/gpu/drm/drm_gem_vram_helper.c
++++ b/drivers/gpu/drm/drm_gem_vram_helper.c
+@@ -175,6 +175,10 @@ static void drm_gem_vram_placement(struct drm_gem_vram_object *gbo,
+ }
+ }
+
++/*
++ * Note that on error, drm_gem_vram_init will free the buffer object.
++ */
++
+ static int drm_gem_vram_init(struct drm_device *dev,
+ struct drm_gem_vram_object *gbo,
+ size_t size, unsigned long pg_align)
+@@ -184,15 +188,19 @@ static int drm_gem_vram_init(struct drm_device *dev,
+ int ret;
+ size_t acc_size;
+
+- if (WARN_ONCE(!vmm, "VRAM MM not initialized"))
++ if (WARN_ONCE(!vmm, "VRAM MM not initialized")) {
++ kfree(gbo);
+ return -EINVAL;
++ }
+ bdev = &vmm->bdev;
+
+ gbo->bo.base.funcs = &drm_gem_vram_object_funcs;
+
+ ret = drm_gem_object_init(dev, &gbo->bo.base, size);
+- if (ret)
++ if (ret) {
++ kfree(gbo);
+ return ret;
++ }
+
+ acc_size = ttm_bo_dma_acc_size(bdev, size, sizeof(*gbo));
+
+@@ -203,13 +211,13 @@ static int drm_gem_vram_init(struct drm_device *dev,
+ &gbo->placement, pg_align, false, acc_size,
+ NULL, NULL, ttm_buffer_object_destroy);
+ if (ret)
+- goto err_drm_gem_object_release;
++ /*
++ * A failing ttm_bo_init will call ttm_buffer_object_destroy
++ * to release gbo->bo.base and kfree gbo.
++ */
++ return ret;
+
+ return 0;
+-
+-err_drm_gem_object_release:
+- drm_gem_object_release(&gbo->bo.base);
+- return ret;
+ }
+
+ /**
+@@ -243,13 +251,9 @@ struct drm_gem_vram_object *drm_gem_vram_create(struct drm_device *dev,
+
+ ret = drm_gem_vram_init(dev, gbo, size, pg_align);
+ if (ret < 0)
+- goto err_kfree;
++ return ERR_PTR(ret);
+
+ return gbo;
+-
+-err_kfree:
+- kfree(gbo);
+- return ERR_PTR(ret);
+ }
+ EXPORT_SYMBOL(drm_gem_vram_create);
+
+diff --git a/drivers/gpu/drm/gma500/cdv_intel_dp.c b/drivers/gpu/drm/gma500/cdv_intel_dp.c
+index f41cbb753bb46..720a767118c9c 100644
+--- a/drivers/gpu/drm/gma500/cdv_intel_dp.c
++++ b/drivers/gpu/drm/gma500/cdv_intel_dp.c
+@@ -2078,7 +2078,7 @@ cdv_intel_dp_init(struct drm_device *dev, struct psb_intel_mode_device *mode_dev
+ intel_dp->dpcd,
+ sizeof(intel_dp->dpcd));
+ cdv_intel_edp_panel_vdd_off(gma_encoder);
+- if (ret == 0) {
++ if (ret <= 0) {
+ /* if this fails, presume the device is a ghost */
+ DRM_INFO("failed to retrieve link info, disabling eDP\n");
+ drm_encoder_cleanup(encoder);
+diff --git a/drivers/gpu/drm/mediatek/mtk_drm_crtc.c b/drivers/gpu/drm/mediatek/mtk_drm_crtc.c
+index d8b43500f12d1..2d01a293aa782 100644
+--- a/drivers/gpu/drm/mediatek/mtk_drm_crtc.c
++++ b/drivers/gpu/drm/mediatek/mtk_drm_crtc.c
+@@ -485,7 +485,7 @@ static void mtk_drm_crtc_hw_config(struct mtk_drm_crtc *mtk_crtc)
+ mbox_flush(mtk_crtc->cmdq_client->chan, 2000);
+ cmdq_handle = cmdq_pkt_create(mtk_crtc->cmdq_client, PAGE_SIZE);
+ cmdq_pkt_clear_event(cmdq_handle, mtk_crtc->cmdq_event);
+- cmdq_pkt_wfe(cmdq_handle, mtk_crtc->cmdq_event);
++ cmdq_pkt_wfe(cmdq_handle, mtk_crtc->cmdq_event, false);
+ mtk_crtc_ddp_config(crtc, cmdq_handle);
+ cmdq_pkt_flush_async(cmdq_handle, ddp_cmdq_cb, cmdq_handle);
+ }
+diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c
+index d6023ba8033c0..3bb567812b990 100644
+--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c
++++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c
+@@ -864,7 +864,7 @@ static void a6xx_get_indexed_registers(struct msm_gpu *gpu,
+ int i;
+
+ a6xx_state->indexed_regs = state_kcalloc(a6xx_state, count,
+- sizeof(a6xx_state->indexed_regs));
++ sizeof(*a6xx_state->indexed_regs));
+ if (!a6xx_state->indexed_regs)
+ return;
+
+diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.c b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
+index a74ccc5b8220d..5b5809c0e44b3 100644
+--- a/drivers/gpu/drm/msm/adreno/adreno_gpu.c
++++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
+@@ -189,10 +189,16 @@ struct msm_gem_address_space *
+ adreno_iommu_create_address_space(struct msm_gpu *gpu,
+ struct platform_device *pdev)
+ {
+- struct iommu_domain *iommu = iommu_domain_alloc(&platform_bus_type);
+- struct msm_mmu *mmu = msm_iommu_new(&pdev->dev, iommu);
++ struct iommu_domain *iommu;
++ struct msm_mmu *mmu;
+ struct msm_gem_address_space *aspace;
+
++ iommu = iommu_domain_alloc(&platform_bus_type);
++ if (!iommu)
++ return NULL;
++
++ mmu = msm_iommu_new(&pdev->dev, iommu);
++
+ aspace = msm_gem_address_space_create(mmu, "gpu", SZ_16M,
+ 0xffffffff - SZ_16M);
+
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c
+index 1026e1e5bec10..4d81a0c73616f 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c
+@@ -881,7 +881,7 @@ static int dpu_crtc_atomic_check(struct drm_crtc *crtc,
+ struct drm_plane *plane;
+ struct drm_display_mode *mode;
+
+- int cnt = 0, rc = 0, mixer_width, i, z_pos;
++ int cnt = 0, rc = 0, mixer_width = 0, i, z_pos;
+
+ struct dpu_multirect_plane_states multirect_plane[DPU_STAGE_MAX * 2];
+ int multirect_count = 0;
+@@ -914,9 +914,11 @@ static int dpu_crtc_atomic_check(struct drm_crtc *crtc,
+
+ memset(pipe_staged, 0, sizeof(pipe_staged));
+
+- mixer_width = mode->hdisplay / cstate->num_mixers;
++ if (cstate->num_mixers) {
++ mixer_width = mode->hdisplay / cstate->num_mixers;
+
+- _dpu_crtc_setup_lm_bounds(crtc, state);
++ _dpu_crtc_setup_lm_bounds(crtc, state);
++ }
+
+ crtc_rect.x2 = mode->hdisplay;
+ crtc_rect.y2 = mode->vdisplay;
+diff --git a/drivers/gpu/drm/mxsfb/mxsfb_drv.c b/drivers/gpu/drm/mxsfb/mxsfb_drv.c
+index 497cf443a9afa..0b02e65a89e79 100644
+--- a/drivers/gpu/drm/mxsfb/mxsfb_drv.c
++++ b/drivers/gpu/drm/mxsfb/mxsfb_drv.c
+@@ -26,6 +26,7 @@
+ #include <drm/drm_drv.h>
+ #include <drm/drm_fb_cma_helper.h>
+ #include <drm/drm_fb_helper.h>
++#include <drm/drm_fourcc.h>
+ #include <drm/drm_gem_cma_helper.h>
+ #include <drm/drm_gem_framebuffer_helper.h>
+ #include <drm/drm_irq.h>
+@@ -87,8 +88,26 @@ void mxsfb_disable_axi_clk(struct mxsfb_drm_private *mxsfb)
+ clk_disable_unprepare(mxsfb->clk_axi);
+ }
+
++static struct drm_framebuffer *
++mxsfb_fb_create(struct drm_device *dev, struct drm_file *file_priv,
++ const struct drm_mode_fb_cmd2 *mode_cmd)
++{
++ const struct drm_format_info *info;
++
++ info = drm_get_format_info(dev, mode_cmd);
++ if (!info)
++ return ERR_PTR(-EINVAL);
++
++ if (mode_cmd->width * info->cpp[0] != mode_cmd->pitches[0]) {
++ dev_dbg(dev->dev, "Invalid pitch: fb width must match pitch\n");
++ return ERR_PTR(-EINVAL);
++ }
++
++ return drm_gem_fb_create(dev, file_priv, mode_cmd);
++}
++
+ static const struct drm_mode_config_funcs mxsfb_mode_config_funcs = {
+- .fb_create = drm_gem_fb_create,
++ .fb_create = mxsfb_fb_create,
+ .atomic_check = drm_atomic_helper_check,
+ .atomic_commit = drm_atomic_helper_commit,
+ };
+diff --git a/drivers/gpu/drm/panel/panel-simple.c b/drivers/gpu/drm/panel/panel-simple.c
+index 7debf2ca42522..4b4ca31a2d577 100644
+--- a/drivers/gpu/drm/panel/panel-simple.c
++++ b/drivers/gpu/drm/panel/panel-simple.c
+@@ -2862,12 +2862,12 @@ static const struct drm_display_mode ortustech_com43h4m85ulc_mode = {
+ static const struct panel_desc ortustech_com43h4m85ulc = {
+ .modes = &ortustech_com43h4m85ulc_mode,
+ .num_modes = 1,
+- .bpc = 8,
++ .bpc = 6,
+ .size = {
+ .width = 56,
+ .height = 93,
+ },
+- .bus_format = MEDIA_BUS_FMT_RGB888_1X24,
++ .bus_format = MEDIA_BUS_FMT_RGB666_1X18,
+ .bus_flags = DRM_BUS_FLAG_DE_HIGH | DRM_BUS_FLAG_PIXDATA_DRIVE_POSEDGE,
+ .connector_type = DRM_MODE_CONNECTOR_DPI,
+ };
+diff --git a/drivers/gpu/drm/panfrost/panfrost_device.h b/drivers/gpu/drm/panfrost/panfrost_device.h
+index c30c719a80594..3c4a85213c15f 100644
+--- a/drivers/gpu/drm/panfrost/panfrost_device.h
++++ b/drivers/gpu/drm/panfrost/panfrost_device.h
+@@ -69,6 +69,9 @@ struct panfrost_compatible {
+ int num_pm_domains;
+ /* Only required if num_pm_domains > 1. */
+ const char * const *pm_domain_names;
++
++ /* Vendor implementation quirks callback */
++ void (*vendor_quirk)(struct panfrost_device *pfdev);
+ };
+
+ struct panfrost_device {
+diff --git a/drivers/gpu/drm/panfrost/panfrost_drv.c b/drivers/gpu/drm/panfrost/panfrost_drv.c
+index 882fecc33fdb1..6e11a73e81aa3 100644
+--- a/drivers/gpu/drm/panfrost/panfrost_drv.c
++++ b/drivers/gpu/drm/panfrost/panfrost_drv.c
+@@ -667,7 +667,18 @@ static const struct panfrost_compatible default_data = {
+ .pm_domain_names = NULL,
+ };
+
++static const struct panfrost_compatible amlogic_data = {
++ .num_supplies = ARRAY_SIZE(default_supplies),
++ .supply_names = default_supplies,
++ .vendor_quirk = panfrost_gpu_amlogic_quirk,
++};
++
+ static const struct of_device_id dt_match[] = {
++ /* Set first to probe before the generic compatibles */
++ { .compatible = "amlogic,meson-gxm-mali",
++ .data = &amlogic_data, },
++ { .compatible = "amlogic,meson-g12a-mali",
++ .data = &amlogic_data, },
+ { .compatible = "arm,mali-t604", .data = &default_data, },
+ { .compatible = "arm,mali-t624", .data = &default_data, },
+ { .compatible = "arm,mali-t628", .data = &default_data, },
+diff --git a/drivers/gpu/drm/panfrost/panfrost_gpu.c b/drivers/gpu/drm/panfrost/panfrost_gpu.c
+index f2c1ddc41a9bf..165403878ad9b 100644
+--- a/drivers/gpu/drm/panfrost/panfrost_gpu.c
++++ b/drivers/gpu/drm/panfrost/panfrost_gpu.c
+@@ -75,6 +75,17 @@ int panfrost_gpu_soft_reset(struct panfrost_device *pfdev)
+ return 0;
+ }
+
++void panfrost_gpu_amlogic_quirk(struct panfrost_device *pfdev)
++{
++ /*
++ * The Amlogic integrated Mali-T820, Mali-G31 & Mali-G52 needs
++ * these undocumented bits in GPU_PWR_OVERRIDE1 to be set in order
++ * to operate correctly.
++ */
++ gpu_write(pfdev, GPU_PWR_KEY, GPU_PWR_KEY_UNLOCK);
++ gpu_write(pfdev, GPU_PWR_OVERRIDE1, 0xfff | (0x20 << 16));
++}
++
+ static void panfrost_gpu_init_quirks(struct panfrost_device *pfdev)
+ {
+ u32 quirks = 0;
+@@ -135,6 +146,10 @@ static void panfrost_gpu_init_quirks(struct panfrost_device *pfdev)
+
+ if (quirks)
+ gpu_write(pfdev, GPU_JM_CONFIG, quirks);
++
++ /* Here goes platform specific quirks */
++ if (pfdev->comp->vendor_quirk)
++ pfdev->comp->vendor_quirk(pfdev);
+ }
+
+ #define MAX_HW_REVS 6
+@@ -304,16 +319,18 @@ void panfrost_gpu_power_on(struct panfrost_device *pfdev)
+ int ret;
+ u32 val;
+
++ panfrost_gpu_init_quirks(pfdev);
++
+ /* Just turn on everything for now */
+ gpu_write(pfdev, L2_PWRON_LO, pfdev->features.l2_present);
+ ret = readl_relaxed_poll_timeout(pfdev->iomem + L2_READY_LO,
+- val, val == pfdev->features.l2_present, 100, 1000);
++ val, val == pfdev->features.l2_present, 100, 20000);
+ if (ret)
+ dev_err(pfdev->dev, "error powering up gpu L2");
+
+ gpu_write(pfdev, SHADER_PWRON_LO, pfdev->features.shader_present);
+ ret = readl_relaxed_poll_timeout(pfdev->iomem + SHADER_READY_LO,
+- val, val == pfdev->features.shader_present, 100, 1000);
++ val, val == pfdev->features.shader_present, 100, 20000);
+ if (ret)
+ dev_err(pfdev->dev, "error powering up gpu shader");
+
+@@ -355,7 +372,6 @@ int panfrost_gpu_init(struct panfrost_device *pfdev)
+ return err;
+ }
+
+- panfrost_gpu_init_quirks(pfdev);
+ panfrost_gpu_power_on(pfdev);
+
+ return 0;
+diff --git a/drivers/gpu/drm/panfrost/panfrost_gpu.h b/drivers/gpu/drm/panfrost/panfrost_gpu.h
+index 4112412087b27..468c51e7e46db 100644
+--- a/drivers/gpu/drm/panfrost/panfrost_gpu.h
++++ b/drivers/gpu/drm/panfrost/panfrost_gpu.h
+@@ -16,4 +16,6 @@ int panfrost_gpu_soft_reset(struct panfrost_device *pfdev);
+ void panfrost_gpu_power_on(struct panfrost_device *pfdev);
+ void panfrost_gpu_power_off(struct panfrost_device *pfdev);
+
++void panfrost_gpu_amlogic_quirk(struct panfrost_device *pfdev);
++
+ #endif
+diff --git a/drivers/gpu/drm/panfrost/panfrost_regs.h b/drivers/gpu/drm/panfrost/panfrost_regs.h
+index ea38ac60581c6..eddaa62ad8b0e 100644
+--- a/drivers/gpu/drm/panfrost/panfrost_regs.h
++++ b/drivers/gpu/drm/panfrost/panfrost_regs.h
+@@ -51,6 +51,10 @@
+ #define GPU_STATUS 0x34
+ #define GPU_STATUS_PRFCNT_ACTIVE BIT(2)
+ #define GPU_LATEST_FLUSH_ID 0x38
++#define GPU_PWR_KEY 0x50 /* (WO) Power manager key register */
++#define GPU_PWR_KEY_UNLOCK 0x2968A819
++#define GPU_PWR_OVERRIDE0 0x54 /* (RW) Power manager override settings */
++#define GPU_PWR_OVERRIDE1 0x58 /* (RW) Power manager override settings */
+ #define GPU_FAULT_STATUS 0x3C
+ #define GPU_FAULT_ADDRESS_LO 0x40
+ #define GPU_FAULT_ADDRESS_HI 0x44
+diff --git a/drivers/gpu/drm/rcar-du/rcar_du_vsp.c b/drivers/gpu/drm/rcar-du/rcar_du_vsp.c
+index f1a81c9b184d4..fa09b3ae8b9d4 100644
+--- a/drivers/gpu/drm/rcar-du/rcar_du_vsp.c
++++ b/drivers/gpu/drm/rcar-du/rcar_du_vsp.c
+@@ -13,6 +13,7 @@
+ #include <drm/drm_fourcc.h>
+ #include <drm/drm_gem_cma_helper.h>
+ #include <drm/drm_gem_framebuffer_helper.h>
++#include <drm/drm_managed.h>
+ #include <drm/drm_plane_helper.h>
+ #include <drm/drm_vblank.h>
+
+@@ -341,6 +342,13 @@ static const struct drm_plane_funcs rcar_du_vsp_plane_funcs = {
+ .atomic_destroy_state = rcar_du_vsp_plane_atomic_destroy_state,
+ };
+
++static void rcar_du_vsp_cleanup(struct drm_device *dev, void *res)
++{
++ struct rcar_du_vsp *vsp = res;
++
++ put_device(vsp->vsp);
++}
++
+ int rcar_du_vsp_init(struct rcar_du_vsp *vsp, struct device_node *np,
+ unsigned int crtcs)
+ {
+@@ -357,6 +365,10 @@ int rcar_du_vsp_init(struct rcar_du_vsp *vsp, struct device_node *np,
+
+ vsp->vsp = &pdev->dev;
+
++ ret = drmm_add_action(rcdu->ddev, rcar_du_vsp_cleanup, vsp);
++ if (ret < 0)
++ return ret;
++
+ ret = vsp1_du_init(vsp->vsp);
+ if (ret < 0)
+ return ret;
+diff --git a/drivers/gpu/drm/vgem/vgem_drv.c b/drivers/gpu/drm/vgem/vgem_drv.c
+index fa39d140adc6c..94825ec3a09d8 100644
+--- a/drivers/gpu/drm/vgem/vgem_drv.c
++++ b/drivers/gpu/drm/vgem/vgem_drv.c
+@@ -471,8 +471,8 @@ static int __init vgem_init(void)
+
+ out_put:
+ drm_dev_put(&vgem_device->drm);
++ platform_device_unregister(vgem_device->platform);
+ return ret;
+-
+ out_unregister:
+ platform_device_unregister(vgem_device->platform);
+ out_free:
+diff --git a/drivers/gpu/drm/virtio/virtgpu_kms.c b/drivers/gpu/drm/virtio/virtgpu_kms.c
+index 0a5c8cf409fb8..dc8cb8dfce58e 100644
+--- a/drivers/gpu/drm/virtio/virtgpu_kms.c
++++ b/drivers/gpu/drm/virtio/virtgpu_kms.c
+@@ -80,8 +80,10 @@ static void virtio_gpu_get_capsets(struct virtio_gpu_device *vgdev,
+ vgdev->capsets[i].id > 0, 5 * HZ);
+ if (ret == 0) {
+ DRM_ERROR("timed out waiting for cap set %d\n", i);
++ spin_lock(&vgdev->display_info_lock);
+ kfree(vgdev->capsets);
+ vgdev->capsets = NULL;
++ spin_unlock(&vgdev->display_info_lock);
+ return;
+ }
+ DRM_INFO("cap set %d: id %d, max-version %d, max-size %d\n",
+diff --git a/drivers/gpu/drm/virtio/virtgpu_vq.c b/drivers/gpu/drm/virtio/virtgpu_vq.c
+index 9e663a5d99526..2517450bf46ba 100644
+--- a/drivers/gpu/drm/virtio/virtgpu_vq.c
++++ b/drivers/gpu/drm/virtio/virtgpu_vq.c
+@@ -684,9 +684,13 @@ static void virtio_gpu_cmd_get_capset_info_cb(struct virtio_gpu_device *vgdev,
+ int i = le32_to_cpu(cmd->capset_index);
+
+ spin_lock(&vgdev->display_info_lock);
+- vgdev->capsets[i].id = le32_to_cpu(resp->capset_id);
+- vgdev->capsets[i].max_version = le32_to_cpu(resp->capset_max_version);
+- vgdev->capsets[i].max_size = le32_to_cpu(resp->capset_max_size);
++ if (vgdev->capsets) {
++ vgdev->capsets[i].id = le32_to_cpu(resp->capset_id);
++ vgdev->capsets[i].max_version = le32_to_cpu(resp->capset_max_version);
++ vgdev->capsets[i].max_size = le32_to_cpu(resp->capset_max_size);
++ } else {
++ DRM_ERROR("invalid capset memory.");
++ }
+ spin_unlock(&vgdev->display_info_lock);
+ wake_up(&vgdev->resp_wq);
+ }
+diff --git a/drivers/gpu/drm/vkms/vkms_composer.c b/drivers/gpu/drm/vkms/vkms_composer.c
+index 4af2f19480f4f..b8b060354667e 100644
+--- a/drivers/gpu/drm/vkms/vkms_composer.c
++++ b/drivers/gpu/drm/vkms/vkms_composer.c
+@@ -33,7 +33,7 @@ static uint32_t compute_crc(void *vaddr_out, struct vkms_composer *composer)
+ + (i * composer->pitch)
+ + (j * composer->cpp);
+ /* XRGB format ignores Alpha channel */
+- memset(vaddr_out + src_offset + 24, 0, 8);
++ bitmap_clear(vaddr_out + src_offset, 24, 8);
+ crc = crc32_le(crc, vaddr_out + src_offset,
+ sizeof(u32));
+ }
+diff --git a/drivers/gpu/drm/vkms/vkms_drv.c b/drivers/gpu/drm/vkms/vkms_drv.c
+index 1e8b2169d8341..e6a3ea1b399a7 100644
+--- a/drivers/gpu/drm/vkms/vkms_drv.c
++++ b/drivers/gpu/drm/vkms/vkms_drv.c
+@@ -188,8 +188,8 @@ static int __init vkms_init(void)
+
+ out_put:
+ drm_dev_put(&vkms_device->drm);
++ platform_device_unregister(vkms_device->platform);
+ return ret;
+-
+ out_unregister:
+ platform_device_unregister(vkms_device->platform);
+ out_free:
+diff --git a/drivers/hid/hid-ids.h b/drivers/hid/hid-ids.h
+index b49ec7dde6457..b269c792d25dc 100644
+--- a/drivers/hid/hid-ids.h
++++ b/drivers/hid/hid-ids.h
+@@ -726,6 +726,7 @@
+ #define USB_DEVICE_ID_LENOVO_SCROLLPOINT_OPTICAL 0x6049
+ #define USB_DEVICE_ID_LENOVO_TPPRODOCK 0x6067
+ #define USB_DEVICE_ID_LENOVO_X1_COVER 0x6085
++#define USB_DEVICE_ID_LENOVO_X1_TAB3 0x60b5
+ #define USB_DEVICE_ID_LENOVO_PIXART_USB_MOUSE_608D 0x608d
+ #define USB_DEVICE_ID_LENOVO_PIXART_USB_MOUSE_6019 0x6019
+ #define USB_DEVICE_ID_LENOVO_PIXART_USB_MOUSE_602E 0x602e
+@@ -1122,6 +1123,7 @@
+ #define USB_DEVICE_ID_SYNAPTICS_DELL_K12A 0x2819
+ #define USB_DEVICE_ID_SYNAPTICS_ACER_SWITCH5_012 0x2968
+ #define USB_DEVICE_ID_SYNAPTICS_TP_V103 0x5710
++#define USB_DEVICE_ID_SYNAPTICS_ACER_ONE_S1003 0x73f5
+ #define USB_DEVICE_ID_SYNAPTICS_ACER_SWITCH5 0x81a7
+
+ #define USB_VENDOR_ID_TEXAS_INSTRUMENTS 0x2047
+diff --git a/drivers/hid/hid-input.c b/drivers/hid/hid-input.c
+index e3d475f4baf66..b2bff932c524f 100644
+--- a/drivers/hid/hid-input.c
++++ b/drivers/hid/hid-input.c
+@@ -797,7 +797,7 @@ static void hidinput_configure_usage(struct hid_input *hidinput, struct hid_fiel
+ case 0x3b: /* Battery Strength */
+ hidinput_setup_battery(device, HID_INPUT_REPORT, field);
+ usage->type = EV_PWR;
+- goto ignore;
++ return;
+
+ case 0x3c: /* Invert */
+ map_key_clear(BTN_TOOL_RUBBER);
+@@ -1059,7 +1059,7 @@ static void hidinput_configure_usage(struct hid_input *hidinput, struct hid_fiel
+ case HID_DC_BATTERYSTRENGTH:
+ hidinput_setup_battery(device, HID_INPUT_REPORT, field);
+ usage->type = EV_PWR;
+- goto ignore;
++ return;
+ }
+ goto unknown;
+
+diff --git a/drivers/hid/hid-ite.c b/drivers/hid/hid-ite.c
+index 6c55682c59740..044a93f3c1178 100644
+--- a/drivers/hid/hid-ite.c
++++ b/drivers/hid/hid-ite.c
+@@ -44,6 +44,10 @@ static const struct hid_device_id ite_devices[] = {
+ { HID_DEVICE(BUS_USB, HID_GROUP_GENERIC,
+ USB_VENDOR_ID_SYNAPTICS,
+ USB_DEVICE_ID_SYNAPTICS_ACER_SWITCH5_012) },
++ /* ITE8910 USB kbd ctlr, with Synaptics touchpad connected to it. */
++ { HID_DEVICE(BUS_USB, HID_GROUP_GENERIC,
++ USB_VENDOR_ID_SYNAPTICS,
++ USB_DEVICE_ID_SYNAPTICS_ACER_ONE_S1003) },
+ { }
+ };
+ MODULE_DEVICE_TABLE(hid, ite_devices);
+diff --git a/drivers/hid/hid-multitouch.c b/drivers/hid/hid-multitouch.c
+index e3152155c4b85..99f041afd5c0c 100644
+--- a/drivers/hid/hid-multitouch.c
++++ b/drivers/hid/hid-multitouch.c
+@@ -1973,6 +1973,12 @@ static const struct hid_device_id mt_devices[] = {
+ HID_DEVICE(BUS_I2C, HID_GROUP_GENERIC,
+ USB_VENDOR_ID_LG, I2C_DEVICE_ID_LG_7010) },
+
++ /* Lenovo X1 TAB Gen 3 */
++ { .driver_data = MT_CLS_WIN_8_FORCE_MULTI_INPUT,
++ HID_DEVICE(BUS_USB, HID_GROUP_MULTITOUCH_WIN_8,
++ USB_VENDOR_ID_LENOVO,
++ USB_DEVICE_ID_LENOVO_X1_TAB3) },
++
+ /* MosArt panels */
+ { .driver_data = MT_CLS_CONFIDENCE_MINUS_ONE,
+ MT_USB_DEVICE(USB_VENDOR_ID_ASUS,
+diff --git a/drivers/hid/hid-roccat-kone.c b/drivers/hid/hid-roccat-kone.c
+index 1a6e600197d0b..509b9bb1362cb 100644
+--- a/drivers/hid/hid-roccat-kone.c
++++ b/drivers/hid/hid-roccat-kone.c
+@@ -294,31 +294,40 @@ static ssize_t kone_sysfs_write_settings(struct file *fp, struct kobject *kobj,
+ struct kone_device *kone = hid_get_drvdata(dev_get_drvdata(dev));
+ struct usb_device *usb_dev = interface_to_usbdev(to_usb_interface(dev));
+ int retval = 0, difference, old_profile;
++ struct kone_settings *settings = (struct kone_settings *)buf;
+
+ /* I need to get my data in one piece */
+ if (off != 0 || count != sizeof(struct kone_settings))
+ return -EINVAL;
+
+ mutex_lock(&kone->kone_lock);
+- difference = memcmp(buf, &kone->settings, sizeof(struct kone_settings));
++ difference = memcmp(settings, &kone->settings,
++ sizeof(struct kone_settings));
+ if (difference) {
+- retval = kone_set_settings(usb_dev,
+- (struct kone_settings const *)buf);
+- if (retval) {
+- mutex_unlock(&kone->kone_lock);
+- return retval;
++ if (settings->startup_profile < 1 ||
++ settings->startup_profile > 5) {
++ retval = -EINVAL;
++ goto unlock;
+ }
+
++ retval = kone_set_settings(usb_dev, settings);
++ if (retval)
++ goto unlock;
++
+ old_profile = kone->settings.startup_profile;
+- memcpy(&kone->settings, buf, sizeof(struct kone_settings));
++ memcpy(&kone->settings, settings, sizeof(struct kone_settings));
+
+ kone_profile_activated(kone, kone->settings.startup_profile);
+
+ if (kone->settings.startup_profile != old_profile)
+ kone_profile_report(kone, kone->settings.startup_profile);
+ }
++unlock:
+ mutex_unlock(&kone->kone_lock);
+
++ if (retval)
++ return retval;
++
+ return sizeof(struct kone_settings);
+ }
+ static BIN_ATTR(settings, 0660, kone_sysfs_read_settings,
+diff --git a/drivers/hwmon/bt1-pvt.c b/drivers/hwmon/bt1-pvt.c
+index 94698cae04971..3e1d56585b91a 100644
+--- a/drivers/hwmon/bt1-pvt.c
++++ b/drivers/hwmon/bt1-pvt.c
+@@ -13,6 +13,7 @@
+ #include <linux/bitops.h>
+ #include <linux/clk.h>
+ #include <linux/completion.h>
++#include <linux/delay.h>
+ #include <linux/device.h>
+ #include <linux/hwmon-sysfs.h>
+ #include <linux/hwmon.h>
+@@ -476,6 +477,7 @@ static int pvt_read_data(struct pvt_hwmon *pvt, enum pvt_sensor_type type,
+ long *val)
+ {
+ struct pvt_cache *cache = &pvt->cache[type];
++ unsigned long timeout;
+ u32 data;
+ int ret;
+
+@@ -499,7 +501,14 @@ static int pvt_read_data(struct pvt_hwmon *pvt, enum pvt_sensor_type type,
+ pvt_update(pvt->regs + PVT_INTR_MASK, PVT_INTR_DVALID, 0);
+ pvt_update(pvt->regs + PVT_CTRL, PVT_CTRL_EN, PVT_CTRL_EN);
+
+- wait_for_completion(&cache->conversion);
++ /*
++ * Wait with timeout since in case if the sensor is suddenly powered
++ * down the request won't be completed and the caller will hang up on
++ * this procedure until the power is back up again. Multiply the
++ * timeout by the factor of two to prevent a false timeout.
++ */
++ timeout = 2 * usecs_to_jiffies(ktime_to_us(pvt->timeout));
++ ret = wait_for_completion_timeout(&cache->conversion, timeout);
+
+ pvt_update(pvt->regs + PVT_CTRL, PVT_CTRL_EN, 0);
+ pvt_update(pvt->regs + PVT_INTR_MASK, PVT_INTR_DVALID,
+@@ -509,6 +518,9 @@ static int pvt_read_data(struct pvt_hwmon *pvt, enum pvt_sensor_type type,
+
+ mutex_unlock(&pvt->iface_mtx);
+
++ if (!ret)
++ return -ETIMEDOUT;
++
+ if (type == PVT_TEMP)
+ *val = pvt_calc_poly(&poly_N_to_temp, data);
+ else
+@@ -654,44 +666,16 @@ static int pvt_write_trim(struct pvt_hwmon *pvt, long val)
+
+ static int pvt_read_timeout(struct pvt_hwmon *pvt, long *val)
+ {
+- unsigned long rate;
+- ktime_t kt;
+- u32 data;
+-
+- rate = clk_get_rate(pvt->clks[PVT_CLOCK_REF].clk);
+- if (!rate)
+- return -ENODEV;
+-
+- /*
+- * Don't bother with mutex here, since we just read data from MMIO.
+- * We also have to scale the ticks timeout up to compensate the
+- * ms-ns-data translations.
+- */
+- data = readl(pvt->regs + PVT_TTIMEOUT) + 1;
++ int ret;
+
+- /*
+- * Calculate ref-clock based delay (Ttotal) between two consecutive
+- * data samples of the same sensor. So we first must calculate the
+- * delay introduced by the internal ref-clock timer (Tref * Fclk).
+- * Then add the constant timeout cuased by each conversion latency
+- * (Tmin). The basic formulae for each conversion is following:
+- * Ttotal = Tref * Fclk + Tmin
+- * Note if alarms are enabled the sensors are polled one after
+- * another, so in order to have the delay being applicable for each
+- * sensor the requested value must be equally redistirbuted.
+- */
+-#if defined(CONFIG_SENSORS_BT1_PVT_ALARMS)
+- kt = ktime_set(PVT_SENSORS_NUM * (u64)data, 0);
+- kt = ktime_divns(kt, rate);
+- kt = ktime_add_ns(kt, PVT_SENSORS_NUM * PVT_TOUT_MIN);
+-#else
+- kt = ktime_set(data, 0);
+- kt = ktime_divns(kt, rate);
+- kt = ktime_add_ns(kt, PVT_TOUT_MIN);
+-#endif
++ ret = mutex_lock_interruptible(&pvt->iface_mtx);
++ if (ret)
++ return ret;
+
+ /* Return the result in msec as hwmon sysfs interface requires. */
+- *val = ktime_to_ms(kt);
++ *val = ktime_to_ms(pvt->timeout);
++
++ mutex_unlock(&pvt->iface_mtx);
+
+ return 0;
+ }
+@@ -699,7 +683,7 @@ static int pvt_read_timeout(struct pvt_hwmon *pvt, long *val)
+ static int pvt_write_timeout(struct pvt_hwmon *pvt, long val)
+ {
+ unsigned long rate;
+- ktime_t kt;
++ ktime_t kt, cache;
+ u32 data;
+ int ret;
+
+@@ -712,7 +696,7 @@ static int pvt_write_timeout(struct pvt_hwmon *pvt, long val)
+ * between all available sensors to have the requested delay
+ * applicable to each individual sensor.
+ */
+- kt = ms_to_ktime(val);
++ cache = kt = ms_to_ktime(val);
+ #if defined(CONFIG_SENSORS_BT1_PVT_ALARMS)
+ kt = ktime_divns(kt, PVT_SENSORS_NUM);
+ #endif
+@@ -741,6 +725,7 @@ static int pvt_write_timeout(struct pvt_hwmon *pvt, long val)
+ return ret;
+
+ pvt_set_tout(pvt, data);
++ pvt->timeout = cache;
+
+ mutex_unlock(&pvt->iface_mtx);
+
+@@ -982,10 +967,52 @@ static int pvt_request_clks(struct pvt_hwmon *pvt)
+ return 0;
+ }
+
+-static void pvt_init_iface(struct pvt_hwmon *pvt)
++static int pvt_check_pwr(struct pvt_hwmon *pvt)
+ {
++ unsigned long tout;
++ int ret = 0;
++ u32 data;
++
++ /*
++ * Test out the sensor conversion functionality. If it does not finish
++ * in time then the domain must have been unpowered and we won't be
++ * able to use the device later in this driver.
++ * Note: if the power source is lost during normal driver operation,
++ * the data read procedure will either return -ETIMEDOUT (for the
++ * alarm-less driver configuration) or just stop the repeated
++ * conversion. In the latter case, alas, we won't be able to detect
++ * the problem.
++ */
++ pvt_update(pvt->regs + PVT_INTR_MASK, PVT_INTR_ALL, PVT_INTR_ALL);
++ pvt_update(pvt->regs + PVT_CTRL, PVT_CTRL_EN, PVT_CTRL_EN);
++ pvt_set_tout(pvt, 0);
++ readl(pvt->regs + PVT_DATA);
++
++ tout = PVT_TOUT_MIN / NSEC_PER_USEC;
++ usleep_range(tout, 2 * tout);
++
++ data = readl(pvt->regs + PVT_DATA);
++ if (!(data & PVT_DATA_VALID)) {
++ ret = -ENODEV;
++ dev_err(pvt->dev, "Sensor is powered down\n");
++ }
++
++ pvt_update(pvt->regs + PVT_CTRL, PVT_CTRL_EN, 0);
++
++ return ret;
++}
++
++static int pvt_init_iface(struct pvt_hwmon *pvt)
++{
++ unsigned long rate;
+ u32 trim, temp;
+
++ rate = clk_get_rate(pvt->clks[PVT_CLOCK_REF].clk);
++ if (!rate) {
++ dev_err(pvt->dev, "Invalid reference clock rate\n");
++ return -ENODEV;
++ }
++
+ /*
+ * Make sure all interrupts and controller are disabled so not to
+ * accidentally have ISR executed before the driver data is fully
+@@ -1000,12 +1027,37 @@ static void pvt_init_iface(struct pvt_hwmon *pvt)
+ pvt_set_mode(pvt, pvt_info[pvt->sensor].mode);
+ pvt_set_tout(pvt, PVT_TOUT_DEF);
+
++ /*
++ * Preserve the current ref-clock based delay (Ttotal) between the
++ * sensor data samples in the driver data so as not to recalculate it
++ * on each data request and timeout read. It consists of the
++ * delay introduced by the internal ref-clock timer (N / Fclk) and the
++ * constant timeout caused by each conversion latency (Tmin):
++ * Ttotal = N / Fclk + Tmin
++ * If alarms are enabled the sensors are polled one after another and
++ * in order to get the next measurement of a particular sensor the
++ * caller will have to wait for at most until all the others are
++ * polled. In that case the formulae will look a bit different:
++ * Ttotal = 5 * (N / Fclk + Tmin)
++ */
++#if defined(CONFIG_SENSORS_BT1_PVT_ALARMS)
++ pvt->timeout = ktime_set(PVT_SENSORS_NUM * PVT_TOUT_DEF, 0);
++ pvt->timeout = ktime_divns(pvt->timeout, rate);
++ pvt->timeout = ktime_add_ns(pvt->timeout, PVT_SENSORS_NUM * PVT_TOUT_MIN);
++#else
++ pvt->timeout = ktime_set(PVT_TOUT_DEF, 0);
++ pvt->timeout = ktime_divns(pvt->timeout, rate);
++ pvt->timeout = ktime_add_ns(pvt->timeout, PVT_TOUT_MIN);
++#endif
++
+ trim = PVT_TRIM_DEF;
+ if (!of_property_read_u32(pvt->dev->of_node,
+ "baikal,pvt-temp-offset-millicelsius", &temp))
+ trim = pvt_calc_trim(temp);
+
+ pvt_set_trim(pvt, trim);
++
++ return 0;
+ }
+
+ static int pvt_request_irq(struct pvt_hwmon *pvt)
+@@ -1109,7 +1161,13 @@ static int pvt_probe(struct platform_device *pdev)
+ if (ret)
+ return ret;
+
+- pvt_init_iface(pvt);
++ ret = pvt_check_pwr(pvt);
++ if (ret)
++ return ret;
++
++ ret = pvt_init_iface(pvt);
++ if (ret)
++ return ret;
+
+ ret = pvt_request_irq(pvt);
+ if (ret)
+diff --git a/drivers/hwmon/bt1-pvt.h b/drivers/hwmon/bt1-pvt.h
+index 5eac73e948854..93b8dd5e7c944 100644
+--- a/drivers/hwmon/bt1-pvt.h
++++ b/drivers/hwmon/bt1-pvt.h
+@@ -10,6 +10,7 @@
+ #include <linux/completion.h>
+ #include <linux/hwmon.h>
+ #include <linux/kernel.h>
++#include <linux/ktime.h>
+ #include <linux/mutex.h>
+ #include <linux/seqlock.h>
+
+@@ -201,6 +202,7 @@ struct pvt_cache {
+ * if alarms are disabled).
+ * @sensor: current PVT sensor the data conversion is being performed for.
+ * @cache: data cache descriptor.
++ * @timeout: conversion timeout cache.
+ */
+ struct pvt_hwmon {
+ struct device *dev;
+@@ -214,6 +216,7 @@ struct pvt_hwmon {
+ struct mutex iface_mtx;
+ enum pvt_sensor_type sensor;
+ struct pvt_cache cache[PVT_SENSORS_NUM];
++ ktime_t timeout;
+ };
+
+ /*
+diff --git a/drivers/hwmon/pmbus/max34440.c b/drivers/hwmon/pmbus/max34440.c
+index 18b4e071067f7..de04dff28945b 100644
+--- a/drivers/hwmon/pmbus/max34440.c
++++ b/drivers/hwmon/pmbus/max34440.c
+@@ -388,7 +388,6 @@ static struct pmbus_driver_info max34440_info[] = {
+ .func[18] = PMBUS_HAVE_TEMP | PMBUS_HAVE_STATUS_TEMP,
+ .func[19] = PMBUS_HAVE_TEMP | PMBUS_HAVE_STATUS_TEMP,
+ .func[20] = PMBUS_HAVE_TEMP | PMBUS_HAVE_STATUS_TEMP,
+- .read_byte_data = max34440_read_byte_data,
+ .read_word_data = max34440_read_word_data,
+ .write_word_data = max34440_write_word_data,
+ },
+@@ -419,7 +418,6 @@ static struct pmbus_driver_info max34440_info[] = {
+ .func[15] = PMBUS_HAVE_TEMP | PMBUS_HAVE_STATUS_TEMP,
+ .func[16] = PMBUS_HAVE_TEMP | PMBUS_HAVE_STATUS_TEMP,
+ .func[17] = PMBUS_HAVE_TEMP | PMBUS_HAVE_STATUS_TEMP,
+- .read_byte_data = max34440_read_byte_data,
+ .read_word_data = max34440_read_word_data,
+ .write_word_data = max34440_write_word_data,
+ },
+@@ -455,7 +453,6 @@ static struct pmbus_driver_info max34440_info[] = {
+ .func[19] = PMBUS_HAVE_TEMP | PMBUS_HAVE_STATUS_TEMP,
+ .func[20] = PMBUS_HAVE_TEMP | PMBUS_HAVE_STATUS_TEMP,
+ .func[21] = PMBUS_HAVE_TEMP | PMBUS_HAVE_STATUS_TEMP,
+- .read_byte_data = max34440_read_byte_data,
+ .read_word_data = max34440_read_word_data,
+ .write_word_data = max34440_write_word_data,
+ },
+diff --git a/drivers/hwmon/w83627ehf.c b/drivers/hwmon/w83627ehf.c
+index 5a5120121e507..3964ceab2817c 100644
+--- a/drivers/hwmon/w83627ehf.c
++++ b/drivers/hwmon/w83627ehf.c
+@@ -1951,8 +1951,12 @@ static int w83627ehf_probe(struct platform_device *pdev)
+ data,
+ &w83627ehf_chip_info,
+ w83627ehf_groups);
++ if (IS_ERR(hwmon_dev)) {
++ err = PTR_ERR(hwmon_dev);
++ goto exit_release;
++ }
+
+- return PTR_ERR_OR_ZERO(hwmon_dev);
++ return 0;
+
+ exit_release:
+ release_region(res->start, IOREGION_LENGTH);
+diff --git a/drivers/hwtracing/coresight/coresight-cti.c b/drivers/hwtracing/coresight/coresight-cti.c
+index 3ccc703dc9409..167fbc2e7033f 100644
+--- a/drivers/hwtracing/coresight/coresight-cti.c
++++ b/drivers/hwtracing/coresight/coresight-cti.c
+@@ -86,22 +86,16 @@ void cti_write_all_hw_regs(struct cti_drvdata *drvdata)
+ CS_LOCK(drvdata->base);
+ }
+
+-static void cti_enable_hw_smp_call(void *info)
+-{
+- struct cti_drvdata *drvdata = info;
+-
+- cti_write_all_hw_regs(drvdata);
+-}
+-
+ /* write regs to hardware and enable */
+ static int cti_enable_hw(struct cti_drvdata *drvdata)
+ {
+ struct cti_config *config = &drvdata->config;
+ struct device *dev = &drvdata->csdev->dev;
++ unsigned long flags;
+ int rc = 0;
+
+ pm_runtime_get_sync(dev->parent);
+- spin_lock(&drvdata->spinlock);
++ spin_lock_irqsave(&drvdata->spinlock, flags);
+
+ /* no need to do anything if enabled or unpowered*/
+ if (config->hw_enabled || !config->hw_powered)
+@@ -112,19 +106,11 @@ static int cti_enable_hw(struct cti_drvdata *drvdata)
+ if (rc)
+ goto cti_err_not_enabled;
+
+- if (drvdata->ctidev.cpu >= 0) {
+- rc = smp_call_function_single(drvdata->ctidev.cpu,
+- cti_enable_hw_smp_call,
+- drvdata, 1);
+- if (rc)
+- goto cti_err_not_enabled;
+- } else {
+- cti_write_all_hw_regs(drvdata);
+- }
++ cti_write_all_hw_regs(drvdata);
+
+ config->hw_enabled = true;
+ atomic_inc(&drvdata->config.enable_req_count);
+- spin_unlock(&drvdata->spinlock);
++ spin_unlock_irqrestore(&drvdata->spinlock, flags);
+ return rc;
+
+ cti_state_unchanged:
+@@ -132,7 +118,7 @@ cti_state_unchanged:
+
+ /* cannot enable due to error */
+ cti_err_not_enabled:
+- spin_unlock(&drvdata->spinlock);
++ spin_unlock_irqrestore(&drvdata->spinlock, flags);
+ pm_runtime_put(dev->parent);
+ return rc;
+ }
+@@ -141,9 +127,7 @@ cti_err_not_enabled:
+ static void cti_cpuhp_enable_hw(struct cti_drvdata *drvdata)
+ {
+ struct cti_config *config = &drvdata->config;
+- struct device *dev = &drvdata->csdev->dev;
+
+- pm_runtime_get_sync(dev->parent);
+ spin_lock(&drvdata->spinlock);
+ config->hw_powered = true;
+
+@@ -163,7 +147,6 @@ static void cti_cpuhp_enable_hw(struct cti_drvdata *drvdata)
+ /* did not re-enable due to no claim / no request */
+ cti_hp_not_enabled:
+ spin_unlock(&drvdata->spinlock);
+- pm_runtime_put(dev->parent);
+ }
+
+ /* disable hardware */
+@@ -511,12 +494,15 @@ static bool cti_add_sysfs_link(struct cti_drvdata *drvdata,
+ return !link_err;
+ }
+
+-static void cti_remove_sysfs_link(struct cti_trig_con *tc)
++static void cti_remove_sysfs_link(struct cti_drvdata *drvdata,
++ struct cti_trig_con *tc)
+ {
+ struct coresight_sysfs_link link_info;
+
++ link_info.orig = drvdata->csdev;
+ link_info.orig_name = tc->con_dev_name;
+ link_info.target = tc->con_dev;
++ link_info.target_name = dev_name(&drvdata->csdev->dev);
+ coresight_remove_sysfs_link(&link_info);
+ }
+
+@@ -606,8 +592,8 @@ void cti_remove_assoc_from_csdev(struct coresight_device *csdev)
+ ctidrv = csdev_to_cti_drvdata(csdev->ect_dev);
+ ctidev = &ctidrv->ctidev;
+ list_for_each_entry(tc, &ctidev->trig_cons, node) {
+- if (tc->con_dev == csdev->ect_dev) {
+- cti_remove_sysfs_link(tc);
++ if (tc->con_dev == csdev) {
++ cti_remove_sysfs_link(ctidrv, tc);
+ tc->con_dev = NULL;
+ break;
+ }
+@@ -651,7 +637,7 @@ static void cti_remove_conn_xrefs(struct cti_drvdata *drvdata)
+ if (tc->con_dev) {
+ coresight_set_assoc_ectdev_mutex(tc->con_dev,
+ NULL);
+- cti_remove_sysfs_link(tc);
++ cti_remove_sysfs_link(drvdata, tc);
+ tc->con_dev = NULL;
+ }
+ }
+@@ -742,7 +728,8 @@ static int cti_dying_cpu(unsigned int cpu)
+
+ spin_lock(&drvdata->spinlock);
+ drvdata->config.hw_powered = false;
+- coresight_disclaim_device(drvdata->base);
++ if (drvdata->config.hw_enabled)
++ coresight_disclaim_device(drvdata->base);
+ spin_unlock(&drvdata->spinlock);
+ return 0;
+ }
+diff --git a/drivers/hwtracing/coresight/coresight-etm-perf.c b/drivers/hwtracing/coresight/coresight-etm-perf.c
+index 84f1dcb698272..9b0c5d719232f 100644
+--- a/drivers/hwtracing/coresight/coresight-etm-perf.c
++++ b/drivers/hwtracing/coresight/coresight-etm-perf.c
+@@ -126,10 +126,10 @@ static void free_sink_buffer(struct etm_event_data *event_data)
+ cpumask_t *mask = &event_data->mask;
+ struct coresight_device *sink;
+
+- if (WARN_ON(cpumask_empty(mask)))
++ if (!event_data->snk_config)
+ return;
+
+- if (!event_data->snk_config)
++ if (WARN_ON(cpumask_empty(mask)))
+ return;
+
+ cpu = cpumask_first(mask);
+@@ -310,6 +310,16 @@ static void etm_event_start(struct perf_event *event, int flags)
+ if (!event_data)
+ goto fail;
+
++ /*
++ * Check if this ETM is allowed to trace, as decided
++ * at etm_setup_aux(). This could be due to an unreachable
++ * sink from this ETM. We can't do much in this case if
++ * the sink was specified or hinted to the driver. For
++ * now, simply don't record anything on this ETM.
++ */
++ if (!cpumask_test_cpu(cpu, &event_data->mask))
++ goto fail_end_stop;
++
+ path = etm_event_cpu_path(event_data, cpu);
+ /* We need a sink, no need to continue without one */
+ sink = coresight_get_sink(path);
+diff --git a/drivers/hwtracing/coresight/coresight-etm4x-sysfs.c b/drivers/hwtracing/coresight/coresight-etm4x-sysfs.c
+index b673e738bc9a8..a588cd6de01c7 100644
+--- a/drivers/hwtracing/coresight/coresight-etm4x-sysfs.c
++++ b/drivers/hwtracing/coresight/coresight-etm4x-sysfs.c
+@@ -206,7 +206,7 @@ static ssize_t reset_store(struct device *dev,
+ * each trace run.
+ */
+ config->vinst_ctrl = BIT(0);
+- if (drvdata->nr_addr_cmp == true) {
++ if (drvdata->nr_addr_cmp > 0) {
+ config->mode |= ETM_MODE_VIEWINST_STARTSTOP;
+ /* SSSTATUS, bit[9] */
+ config->vinst_ctrl |= BIT(9);
+diff --git a/drivers/hwtracing/coresight/coresight-etm4x.c b/drivers/hwtracing/coresight/coresight-etm4x.c
+index 6089c481f8f19..d4e74b03c1e0f 100644
+--- a/drivers/hwtracing/coresight/coresight-etm4x.c
++++ b/drivers/hwtracing/coresight/coresight-etm4x.c
+@@ -48,12 +48,11 @@ module_param(pm_save_enable, int, 0444);
+ MODULE_PARM_DESC(pm_save_enable,
+ "Save/restore state on power down: 1 = never, 2 = self-hosted");
+
+-/* The number of ETMv4 currently registered */
+-static int etm4_count;
+ static struct etmv4_drvdata *etmdrvdata[NR_CPUS];
+ static void etm4_set_default_config(struct etmv4_config *config);
+ static int etm4_set_event_filters(struct etmv4_drvdata *drvdata,
+ struct perf_event *event);
++static u64 etm4_get_access_type(struct etmv4_config *config);
+
+ static enum cpuhp_state hp_online;
+
+@@ -781,6 +780,22 @@ static void etm4_init_arch_data(void *info)
+ CS_LOCK(drvdata->base);
+ }
+
++/* Set ELx trace filter access in the TRCVICTLR register */
++static void etm4_set_victlr_access(struct etmv4_config *config)
++{
++ u64 access_type;
++
++ config->vinst_ctrl &= ~(ETM_EXLEVEL_S_VICTLR_MASK | ETM_EXLEVEL_NS_VICTLR_MASK);
++
++ /*
++ * TRCVICTLR::EXLEVEL_NS:EXLEVELS: Set kernel / user filtering
++ * bits in vinst_ctrl, same bit pattern as TRCACATRn values returned by
++ * etm4_get_access_type() but with a relative shift in this register.
++ */
++ access_type = etm4_get_access_type(config) << ETM_EXLEVEL_LSHIFT_TRCVICTLR;
++ config->vinst_ctrl |= (u32)access_type;
++}
++
+ static void etm4_set_default_config(struct etmv4_config *config)
+ {
+ /* disable all events tracing */
+@@ -798,6 +813,9 @@ static void etm4_set_default_config(struct etmv4_config *config)
+
+ /* TRCVICTLR::EVENT = 0x01, select the always on logic */
+ config->vinst_ctrl = BIT(0);
++
++ /* TRCVICTLR::EXLEVEL_NS:EXLEVELS: Set kernel / user filtering */
++ etm4_set_victlr_access(config);
+ }
+
+ static u64 etm4_get_ns_access_type(struct etmv4_config *config)
+@@ -1062,7 +1080,7 @@ out:
+
+ void etm4_config_trace_mode(struct etmv4_config *config)
+ {
+- u32 addr_acc, mode;
++ u32 mode;
+
+ mode = config->mode;
+ mode &= (ETM_MODE_EXCL_KERN | ETM_MODE_EXCL_USER);
+@@ -1074,15 +1092,7 @@ void etm4_config_trace_mode(struct etmv4_config *config)
+ if (!(mode & ETM_MODE_EXCL_KERN) && !(mode & ETM_MODE_EXCL_USER))
+ return;
+
+- addr_acc = config->addr_acc[ETM_DEFAULT_ADDR_COMP];
+- /* clear default config */
+- addr_acc &= ~(ETM_EXLEVEL_NS_APP | ETM_EXLEVEL_NS_OS |
+- ETM_EXLEVEL_NS_HYP);
+-
+- addr_acc |= etm4_get_ns_access_type(config);
+-
+- config->addr_acc[ETM_DEFAULT_ADDR_COMP] = addr_acc;
+- config->addr_acc[ETM_DEFAULT_ADDR_COMP + 1] = addr_acc;
++ etm4_set_victlr_access(config);
+ }
+
+ static int etm4_online_cpu(unsigned int cpu)
+@@ -1179,7 +1189,7 @@ static int etm4_cpu_save(struct etmv4_drvdata *drvdata)
+ state->trcvdsacctlr = readl(drvdata->base + TRCVDSACCTLR);
+ state->trcvdarcctlr = readl(drvdata->base + TRCVDARCCTLR);
+
+- for (i = 0; i < drvdata->nrseqstate; i++)
++ for (i = 0; i < drvdata->nrseqstate - 1; i++)
+ state->trcseqevr[i] = readl(drvdata->base + TRCSEQEVRn(i));
+
+ state->trcseqrstevr = readl(drvdata->base + TRCSEQRSTEVR);
+@@ -1223,7 +1233,7 @@ static int etm4_cpu_save(struct etmv4_drvdata *drvdata)
+ state->trccidcctlr1 = readl(drvdata->base + TRCCIDCCTLR1);
+
+ state->trcvmidcctlr0 = readl(drvdata->base + TRCVMIDCCTLR0);
+- state->trcvmidcctlr0 = readl(drvdata->base + TRCVMIDCCTLR1);
++ state->trcvmidcctlr1 = readl(drvdata->base + TRCVMIDCCTLR1);
+
+ state->trcclaimset = readl(drvdata->base + TRCCLAIMCLR);
+
+@@ -1284,7 +1294,7 @@ static void etm4_cpu_restore(struct etmv4_drvdata *drvdata)
+ writel_relaxed(state->trcvdsacctlr, drvdata->base + TRCVDSACCTLR);
+ writel_relaxed(state->trcvdarcctlr, drvdata->base + TRCVDARCCTLR);
+
+- for (i = 0; i < drvdata->nrseqstate; i++)
++ for (i = 0; i < drvdata->nrseqstate - 1; i++)
+ writel_relaxed(state->trcseqevr[i],
+ drvdata->base + TRCSEQEVRn(i));
+
+@@ -1333,7 +1343,7 @@ static void etm4_cpu_restore(struct etmv4_drvdata *drvdata)
+ writel_relaxed(state->trccidcctlr1, drvdata->base + TRCCIDCCTLR1);
+
+ writel_relaxed(state->trcvmidcctlr0, drvdata->base + TRCVMIDCCTLR0);
+- writel_relaxed(state->trcvmidcctlr0, drvdata->base + TRCVMIDCCTLR1);
++ writel_relaxed(state->trcvmidcctlr1, drvdata->base + TRCVMIDCCTLR1);
+
+ writel_relaxed(state->trcclaimset, drvdata->base + TRCCLAIMSET);
+
+@@ -1394,28 +1404,25 @@ static struct notifier_block etm4_cpu_pm_nb = {
+ .notifier_call = etm4_cpu_pm_notify,
+ };
+
+-/* Setup PM. Called with cpus locked. Deals with error conditions and counts */
+-static int etm4_pm_setup_cpuslocked(void)
++/* Setup PM. Deals with error conditions and counts */
++static int __init etm4_pm_setup(void)
+ {
+ int ret;
+
+- if (etm4_count++)
+- return 0;
+-
+ ret = cpu_pm_register_notifier(&etm4_cpu_pm_nb);
+ if (ret)
+- goto reduce_count;
++ return ret;
+
+- ret = cpuhp_setup_state_nocalls_cpuslocked(CPUHP_AP_ARM_CORESIGHT_STARTING,
+- "arm/coresight4:starting",
+- etm4_starting_cpu, etm4_dying_cpu);
++ ret = cpuhp_setup_state_nocalls(CPUHP_AP_ARM_CORESIGHT_STARTING,
++ "arm/coresight4:starting",
++ etm4_starting_cpu, etm4_dying_cpu);
+
+ if (ret)
+ goto unregister_notifier;
+
+- ret = cpuhp_setup_state_nocalls_cpuslocked(CPUHP_AP_ONLINE_DYN,
+- "arm/coresight4:online",
+- etm4_online_cpu, NULL);
++ ret = cpuhp_setup_state_nocalls(CPUHP_AP_ONLINE_DYN,
++ "arm/coresight4:online",
++ etm4_online_cpu, NULL);
+
+ /* HP dyn state ID returned in ret on success */
+ if (ret > 0) {
+@@ -1424,21 +1431,15 @@ static int etm4_pm_setup_cpuslocked(void)
+ }
+
+ /* failed dyn state - remove others */
+- cpuhp_remove_state_nocalls_cpuslocked(CPUHP_AP_ARM_CORESIGHT_STARTING);
++ cpuhp_remove_state_nocalls(CPUHP_AP_ARM_CORESIGHT_STARTING);
+
+ unregister_notifier:
+ cpu_pm_unregister_notifier(&etm4_cpu_pm_nb);
+-
+-reduce_count:
+- --etm4_count;
+ return ret;
+ }
+
+-static void etm4_pm_clear(void)
++static void __init etm4_pm_clear(void)
+ {
+- if (--etm4_count != 0)
+- return;
+-
+ cpu_pm_unregister_notifier(&etm4_cpu_pm_nb);
+ cpuhp_remove_state_nocalls(CPUHP_AP_ARM_CORESIGHT_STARTING);
+ if (hp_online) {
+@@ -1491,22 +1492,12 @@ static int etm4_probe(struct amba_device *adev, const struct amba_id *id)
+ if (!desc.name)
+ return -ENOMEM;
+
+- cpus_read_lock();
+ etmdrvdata[drvdata->cpu] = drvdata;
+
+ if (smp_call_function_single(drvdata->cpu,
+ etm4_init_arch_data, drvdata, 1))
+ dev_err(dev, "ETM arch init failed\n");
+
+- ret = etm4_pm_setup_cpuslocked();
+- cpus_read_unlock();
+-
+- /* etm4_pm_setup_cpuslocked() does its own cleanup - exit on error */
+- if (ret) {
+- etmdrvdata[drvdata->cpu] = NULL;
+- return ret;
+- }
+-
+ if (etm4_arch_supported(drvdata->arch) == false) {
+ ret = -EINVAL;
+ goto err_arch_supported;
+@@ -1553,7 +1544,6 @@ static int etm4_probe(struct amba_device *adev, const struct amba_id *id)
+
+ err_arch_supported:
+ etmdrvdata[drvdata->cpu] = NULL;
+- etm4_pm_clear();
+ return ret;
+ }
+
+@@ -1591,4 +1581,23 @@ static struct amba_driver etm4x_driver = {
+ .probe = etm4_probe,
+ .id_table = etm4_ids,
+ };
+-builtin_amba_driver(etm4x_driver);
++
++static int __init etm4x_init(void)
++{
++ int ret;
++
++ ret = etm4_pm_setup();
++
++ /* etm4_pm_setup() does its own cleanup - exit on error */
++ if (ret)
++ return ret;
++
++ ret = amba_driver_register(&etm4x_driver);
++ if (ret) {
++ pr_err("Error registering etm4x driver\n");
++ etm4_pm_clear();
++ }
++
++ return ret;
++}
++device_initcall(etm4x_init);
+diff --git a/drivers/hwtracing/coresight/coresight-etm4x.h b/drivers/hwtracing/coresight/coresight-etm4x.h
+index 47729e04aac72..ab38f9afd821a 100644
+--- a/drivers/hwtracing/coresight/coresight-etm4x.h
++++ b/drivers/hwtracing/coresight/coresight-etm4x.h
+@@ -192,6 +192,9 @@
+ #define ETM_EXLEVEL_NS_HYP BIT(14)
+ #define ETM_EXLEVEL_NS_NA BIT(15)
+
++/* access level control in TRCVICTLR - same bits as TRCACATRn but shifted */
++#define ETM_EXLEVEL_LSHIFT_TRCVICTLR 8
++
+ /* secure / non secure masks - TRCVICTLR, IDR3 */
+ #define ETM_EXLEVEL_S_VICTLR_MASK GENMASK(19, 16)
+ /* NS MON (EL3) mode never implemented */
+diff --git a/drivers/hwtracing/coresight/coresight-platform.c b/drivers/hwtracing/coresight/coresight-platform.c
+index e4912abda3aa2..85a6c099ddeb1 100644
+--- a/drivers/hwtracing/coresight/coresight-platform.c
++++ b/drivers/hwtracing/coresight/coresight-platform.c
+@@ -712,11 +712,11 @@ static int acpi_coresight_parse_graph(struct acpi_device *adev,
+ return dir;
+
+ if (dir == ACPI_CORESIGHT_LINK_MASTER) {
+- if (ptr->outport > pdata->nr_outport)
+- pdata->nr_outport = ptr->outport;
++ if (ptr->outport >= pdata->nr_outport)
++ pdata->nr_outport = ptr->outport + 1;
+ ptr++;
+ } else {
+- WARN_ON(pdata->nr_inport == ptr->child_port);
++ WARN_ON(pdata->nr_inport == ptr->child_port + 1);
+ /*
+ * We do not track input port connections for a device.
+ * However we need the highest port number described,
+@@ -724,8 +724,8 @@ static int acpi_coresight_parse_graph(struct acpi_device *adev,
+ * record for an output connection. Hence, do not move
+ * the ptr for input connections
+ */
+- if (ptr->child_port > pdata->nr_inport)
+- pdata->nr_inport = ptr->child_port;
++ if (ptr->child_port >= pdata->nr_inport)
++ pdata->nr_inport = ptr->child_port + 1;
+ }
+ }
+
+diff --git a/drivers/hwtracing/coresight/coresight.c b/drivers/hwtracing/coresight/coresight.c
+index f3efbb3b2b4d1..cf03af09c6ced 100644
+--- a/drivers/hwtracing/coresight/coresight.c
++++ b/drivers/hwtracing/coresight/coresight.c
+@@ -1023,7 +1023,6 @@ static void coresight_device_release(struct device *dev)
+ {
+ struct coresight_device *csdev = to_coresight_device(dev);
+
+- cti_remove_assoc_from_csdev(csdev);
+ fwnode_handle_put(csdev->dev.fwnode);
+ kfree(csdev->refcnt);
+ kfree(csdev);
+@@ -1357,6 +1356,7 @@ void coresight_unregister(struct coresight_device *csdev)
+ {
+ etm_perf_del_symlink_sink(csdev);
+ /* Remove references of that device in the topology */
++ cti_remove_assoc_from_csdev(csdev);
+ coresight_remove_conns(csdev);
+ coresight_release_platform_data(csdev, csdev->pdata);
+ device_unregister(&csdev->dev);
+diff --git a/drivers/i2c/busses/Kconfig b/drivers/i2c/busses/Kconfig
+index 735bf31a3fdff..6546d6cf3c24c 100644
+--- a/drivers/i2c/busses/Kconfig
++++ b/drivers/i2c/busses/Kconfig
+@@ -1191,6 +1191,7 @@ config I2C_RCAR
+ tristate "Renesas R-Car I2C Controller"
+ depends on ARCH_RENESAS || COMPILE_TEST
+ select I2C_SLAVE
++ select RESET_CONTROLLER if ARCH_RCAR_GEN3
+ help
+ If you say yes to this option, support will be included for the
+ R-Car I2C controller.
+diff --git a/drivers/i2c/i2c-core-acpi.c b/drivers/i2c/i2c-core-acpi.c
+index 2ade99b105b91..bbf8dd491d245 100644
+--- a/drivers/i2c/i2c-core-acpi.c
++++ b/drivers/i2c/i2c-core-acpi.c
+@@ -264,6 +264,7 @@ static acpi_status i2c_acpi_add_device(acpi_handle handle, u32 level,
+ void i2c_acpi_register_devices(struct i2c_adapter *adap)
+ {
+ acpi_status status;
++ acpi_handle handle;
+
+ if (!has_acpi_companion(&adap->dev))
+ return;
+@@ -274,6 +275,15 @@ void i2c_acpi_register_devices(struct i2c_adapter *adap)
+ adap, NULL);
+ if (ACPI_FAILURE(status))
+ dev_warn(&adap->dev, "failed to enumerate I2C slaves\n");
++
++ if (!adap->dev.parent)
++ return;
++
++ handle = ACPI_HANDLE(adap->dev.parent);
++ if (!handle)
++ return;
++
++ acpi_walk_dep_device_list(handle);
+ }
+
+ const struct acpi_device_id *
+@@ -729,7 +739,6 @@ int i2c_acpi_install_space_handler(struct i2c_adapter *adapter)
+ return -ENOMEM;
+ }
+
+- acpi_walk_dep_device_list(handle);
+ return 0;
+ }
+
+diff --git a/drivers/i3c/master.c b/drivers/i3c/master.c
+index 97f2e29265da7..cc7564446ccd2 100644
+--- a/drivers/i3c/master.c
++++ b/drivers/i3c/master.c
+@@ -1782,6 +1782,21 @@ static void i3c_master_bus_cleanup(struct i3c_master_controller *master)
+ i3c_master_detach_free_devs(master);
+ }
+
++static void i3c_master_attach_boardinfo(struct i3c_dev_desc *i3cdev)
++{
++ struct i3c_master_controller *master = i3cdev->common.master;
++ struct i3c_dev_boardinfo *i3cboardinfo;
++
++ list_for_each_entry(i3cboardinfo, &master->boardinfo.i3c, node) {
++ if (i3cdev->info.pid != i3cboardinfo->pid)
++ continue;
++
++ i3cdev->boardinfo = i3cboardinfo;
++ i3cdev->info.static_addr = i3cboardinfo->static_addr;
++ return;
++ }
++}
++
+ static struct i3c_dev_desc *
+ i3c_master_search_i3c_dev_duplicate(struct i3c_dev_desc *refdev)
+ {
+@@ -1837,10 +1852,10 @@ int i3c_master_add_i3c_dev_locked(struct i3c_master_controller *master,
+ if (ret)
+ goto err_detach_dev;
+
++ i3c_master_attach_boardinfo(newdev);
++
+ olddev = i3c_master_search_i3c_dev_duplicate(newdev);
+ if (olddev) {
+- newdev->boardinfo = olddev->boardinfo;
+- newdev->info.static_addr = olddev->info.static_addr;
+ newdev->dev = olddev->dev;
+ if (newdev->dev)
+ newdev->dev->desc = newdev;
+diff --git a/drivers/i3c/master/i3c-master-cdns.c b/drivers/i3c/master/i3c-master-cdns.c
+index 3fee8bd7fe20b..3f2226928fe05 100644
+--- a/drivers/i3c/master/i3c-master-cdns.c
++++ b/drivers/i3c/master/i3c-master-cdns.c
+@@ -1635,8 +1635,10 @@ static int cdns_i3c_master_probe(struct platform_device *pdev)
+ master->ibi.slots = devm_kcalloc(&pdev->dev, master->ibi.num_slots,
+ sizeof(*master->ibi.slots),
+ GFP_KERNEL);
+- if (!master->ibi.slots)
++ if (!master->ibi.slots) {
++ ret = -ENOMEM;
+ goto err_disable_sysclk;
++ }
+
+ writel(IBIR_THR(1), master->regs + CMD_IBI_THR_CTRL);
+ writel(MST_INT_IBIR_THR, master->regs + MST_IER);
+diff --git a/drivers/iio/adc/stm32-adc-core.c b/drivers/iio/adc/stm32-adc-core.c
+index 0e2068ec068b8..358636954619d 100644
+--- a/drivers/iio/adc/stm32-adc-core.c
++++ b/drivers/iio/adc/stm32-adc-core.c
+@@ -794,6 +794,13 @@ static int stm32_adc_core_runtime_resume(struct device *dev)
+ {
+ return stm32_adc_core_hw_start(dev);
+ }
++
++static int stm32_adc_core_runtime_idle(struct device *dev)
++{
++ pm_runtime_mark_last_busy(dev);
++
++ return 0;
++}
+ #endif
+
+ static const struct dev_pm_ops stm32_adc_core_pm_ops = {
+@@ -801,7 +808,7 @@ static const struct dev_pm_ops stm32_adc_core_pm_ops = {
+ pm_runtime_force_resume)
+ SET_RUNTIME_PM_OPS(stm32_adc_core_runtime_suspend,
+ stm32_adc_core_runtime_resume,
+- NULL)
++ stm32_adc_core_runtime_idle)
+ };
+
+ static const struct stm32_adc_priv_cfg stm32f4_adc_priv_cfg = {
+diff --git a/drivers/infiniband/core/cma.c b/drivers/infiniband/core/cma.c
+index 26de0dab60bbb..d28c7c6940b00 100644
+--- a/drivers/infiniband/core/cma.c
++++ b/drivers/infiniband/core/cma.c
+@@ -68,6 +68,9 @@ static const char * const cma_events[] = {
+ [RDMA_CM_EVENT_TIMEWAIT_EXIT] = "timewait exit",
+ };
+
++static void cma_set_mgid(struct rdma_id_private *id_priv, struct sockaddr *addr,
++ union ib_gid *mgid);
++
+ const char *__attribute_const__ rdma_event_msg(enum rdma_cm_event_type event)
+ {
+ size_t index = event;
+@@ -345,13 +348,10 @@ struct ib_device *cma_get_ib_dev(struct cma_device *cma_dev)
+
+ struct cma_multicast {
+ struct rdma_id_private *id_priv;
+- union {
+- struct ib_sa_multicast *ib;
+- } multicast;
++ struct ib_sa_multicast *sa_mc;
+ struct list_head list;
+ void *context;
+ struct sockaddr_storage addr;
+- struct kref mcref;
+ u8 join_state;
+ };
+
+@@ -363,18 +363,6 @@ struct cma_work {
+ struct rdma_cm_event event;
+ };
+
+-struct cma_ndev_work {
+- struct work_struct work;
+- struct rdma_id_private *id;
+- struct rdma_cm_event event;
+-};
+-
+-struct iboe_mcast_work {
+- struct work_struct work;
+- struct rdma_id_private *id;
+- struct cma_multicast *mc;
+-};
+-
+ union cma_ip_addr {
+ struct in6_addr ip6;
+ struct {
+@@ -483,14 +471,6 @@ static void cma_attach_to_dev(struct rdma_id_private *id_priv,
+ rdma_start_port(cma_dev->device)];
+ }
+
+-static inline void release_mc(struct kref *kref)
+-{
+- struct cma_multicast *mc = container_of(kref, struct cma_multicast, mcref);
+-
+- kfree(mc->multicast.ib);
+- kfree(mc);
+-}
+-
+ static void cma_release_dev(struct rdma_id_private *id_priv)
+ {
+ mutex_lock(&lock);
+@@ -1783,19 +1763,30 @@ static void cma_release_port(struct rdma_id_private *id_priv)
+ mutex_unlock(&lock);
+ }
+
+-static void cma_leave_roce_mc_group(struct rdma_id_private *id_priv,
+- struct cma_multicast *mc)
++static void destroy_mc(struct rdma_id_private *id_priv,
++ struct cma_multicast *mc)
+ {
+- struct rdma_dev_addr *dev_addr = &id_priv->id.route.addr.dev_addr;
+- struct net_device *ndev = NULL;
++ if (rdma_cap_ib_mcast(id_priv->id.device, id_priv->id.port_num))
++ ib_sa_free_multicast(mc->sa_mc);
+
+- if (dev_addr->bound_dev_if)
+- ndev = dev_get_by_index(dev_addr->net, dev_addr->bound_dev_if);
+- if (ndev) {
+- cma_igmp_send(ndev, &mc->multicast.ib->rec.mgid, false);
+- dev_put(ndev);
++ if (rdma_protocol_roce(id_priv->id.device, id_priv->id.port_num)) {
++ struct rdma_dev_addr *dev_addr =
++ &id_priv->id.route.addr.dev_addr;
++ struct net_device *ndev = NULL;
++
++ if (dev_addr->bound_dev_if)
++ ndev = dev_get_by_index(dev_addr->net,
++ dev_addr->bound_dev_if);
++ if (ndev) {
++ union ib_gid mgid;
++
++ cma_set_mgid(id_priv, (struct sockaddr *)&mc->addr,
++ &mgid);
++ cma_igmp_send(ndev, &mgid, false);
++ dev_put(ndev);
++ }
+ }
+- kref_put(&mc->mcref, release_mc);
++ kfree(mc);
+ }
+
+ static void cma_leave_mc_groups(struct rdma_id_private *id_priv)
+@@ -1803,16 +1794,10 @@ static void cma_leave_mc_groups(struct rdma_id_private *id_priv)
+ struct cma_multicast *mc;
+
+ while (!list_empty(&id_priv->mc_list)) {
+- mc = container_of(id_priv->mc_list.next,
+- struct cma_multicast, list);
++ mc = list_first_entry(&id_priv->mc_list, struct cma_multicast,
++ list);
+ list_del(&mc->list);
+- if (rdma_cap_ib_mcast(id_priv->cma_dev->device,
+- id_priv->id.port_num)) {
+- ib_sa_free_multicast(mc->multicast.ib);
+- kfree(mc);
+- } else {
+- cma_leave_roce_mc_group(id_priv, mc);
+- }
++ destroy_mc(id_priv, mc);
+ }
+ }
+
+@@ -2646,32 +2631,14 @@ static void cma_work_handler(struct work_struct *_work)
+ struct rdma_id_private *id_priv = work->id;
+
+ mutex_lock(&id_priv->handler_mutex);
+- if (!cma_comp_exch(id_priv, work->old_state, work->new_state))
++ if (READ_ONCE(id_priv->state) == RDMA_CM_DESTROYING ||
++ READ_ONCE(id_priv->state) == RDMA_CM_DEVICE_REMOVAL)
+ goto out_unlock;
+-
+- if (cma_cm_event_handler(id_priv, &work->event)) {
+- cma_id_put(id_priv);
+- destroy_id_handler_unlock(id_priv);
+- goto out_free;
++ if (work->old_state != 0 || work->new_state != 0) {
++ if (!cma_comp_exch(id_priv, work->old_state, work->new_state))
++ goto out_unlock;
+ }
+
+-out_unlock:
+- mutex_unlock(&id_priv->handler_mutex);
+- cma_id_put(id_priv);
+-out_free:
+- kfree(work);
+-}
+-
+-static void cma_ndev_work_handler(struct work_struct *_work)
+-{
+- struct cma_ndev_work *work = container_of(_work, struct cma_ndev_work, work);
+- struct rdma_id_private *id_priv = work->id;
+-
+- mutex_lock(&id_priv->handler_mutex);
+- if (id_priv->state == RDMA_CM_DESTROYING ||
+- id_priv->state == RDMA_CM_DEVICE_REMOVAL)
+- goto out_unlock;
+-
+ if (cma_cm_event_handler(id_priv, &work->event)) {
+ cma_id_put(id_priv);
+ destroy_id_handler_unlock(id_priv);
+@@ -2682,6 +2649,8 @@ out_unlock:
+ mutex_unlock(&id_priv->handler_mutex);
+ cma_id_put(id_priv);
+ out_free:
++ if (work->event.event == RDMA_CM_EVENT_MULTICAST_JOIN)
++ rdma_destroy_ah_attr(&work->event.param.ud.ah_attr);
+ kfree(work);
+ }
+
+@@ -4295,63 +4264,66 @@ out:
+ }
+ EXPORT_SYMBOL(rdma_disconnect);
+
+-static int cma_ib_mc_handler(int status, struct ib_sa_multicast *multicast)
++static void cma_make_mc_event(int status, struct rdma_id_private *id_priv,
++ struct ib_sa_multicast *multicast,
++ struct rdma_cm_event *event,
++ struct cma_multicast *mc)
+ {
+- struct rdma_id_private *id_priv;
+- struct cma_multicast *mc = multicast->context;
+- struct rdma_cm_event event = {};
+- int ret = 0;
+-
+- id_priv = mc->id_priv;
+- mutex_lock(&id_priv->handler_mutex);
+- if (id_priv->state != RDMA_CM_ADDR_BOUND &&
+- id_priv->state != RDMA_CM_ADDR_RESOLVED)
+- goto out;
++ struct rdma_dev_addr *dev_addr;
++ enum ib_gid_type gid_type;
++ struct net_device *ndev;
+
+ if (!status)
+ status = cma_set_qkey(id_priv, be32_to_cpu(multicast->rec.qkey));
+ else
+ pr_debug_ratelimited("RDMA CM: MULTICAST_ERROR: failed to join multicast. status %d\n",
+ status);
+- mutex_lock(&id_priv->qp_mutex);
+- if (!status && id_priv->id.qp) {
+- status = ib_attach_mcast(id_priv->id.qp, &multicast->rec.mgid,
+- be16_to_cpu(multicast->rec.mlid));
+- if (status)
+- pr_debug_ratelimited("RDMA CM: MULTICAST_ERROR: failed to attach QP. status %d\n",
+- status);
++
++ event->status = status;
++ event->param.ud.private_data = mc->context;
++ if (status) {
++ event->event = RDMA_CM_EVENT_MULTICAST_ERROR;
++ return;
+ }
+- mutex_unlock(&id_priv->qp_mutex);
+
+- event.status = status;
+- event.param.ud.private_data = mc->context;
+- if (!status) {
+- struct rdma_dev_addr *dev_addr =
+- &id_priv->id.route.addr.dev_addr;
+- struct net_device *ndev =
+- dev_get_by_index(dev_addr->net, dev_addr->bound_dev_if);
+- enum ib_gid_type gid_type =
+- id_priv->cma_dev->default_gid_type[id_priv->id.port_num -
+- rdma_start_port(id_priv->cma_dev->device)];
+-
+- event.event = RDMA_CM_EVENT_MULTICAST_JOIN;
+- ret = ib_init_ah_from_mcmember(id_priv->id.device,
+- id_priv->id.port_num,
+- &multicast->rec,
+- ndev, gid_type,
+- &event.param.ud.ah_attr);
+- if (ret)
+- event.event = RDMA_CM_EVENT_MULTICAST_ERROR;
++ dev_addr = &id_priv->id.route.addr.dev_addr;
++ ndev = dev_get_by_index(dev_addr->net, dev_addr->bound_dev_if);
++ gid_type =
++ id_priv->cma_dev
++ ->default_gid_type[id_priv->id.port_num -
++ rdma_start_port(
++ id_priv->cma_dev->device)];
++
++ event->event = RDMA_CM_EVENT_MULTICAST_JOIN;
++ if (ib_init_ah_from_mcmember(id_priv->id.device, id_priv->id.port_num,
++ &multicast->rec, ndev, gid_type,
++ &event->param.ud.ah_attr)) {
++ event->event = RDMA_CM_EVENT_MULTICAST_ERROR;
++ goto out;
++ }
+
+- event.param.ud.qp_num = 0xFFFFFF;
+- event.param.ud.qkey = be32_to_cpu(multicast->rec.qkey);
+- if (ndev)
+- dev_put(ndev);
+- } else
+- event.event = RDMA_CM_EVENT_MULTICAST_ERROR;
++ event->param.ud.qp_num = 0xFFFFFF;
++ event->param.ud.qkey = be32_to_cpu(multicast->rec.qkey);
+
+- ret = cma_cm_event_handler(id_priv, &event);
++out:
++ if (ndev)
++ dev_put(ndev);
++}
+
++static int cma_ib_mc_handler(int status, struct ib_sa_multicast *multicast)
++{
++ struct cma_multicast *mc = multicast->context;
++ struct rdma_id_private *id_priv = mc->id_priv;
++ struct rdma_cm_event event = {};
++ int ret = 0;
++
++ mutex_lock(&id_priv->handler_mutex);
++ if (id_priv->state != RDMA_CM_ADDR_BOUND &&
++ id_priv->state != RDMA_CM_ADDR_RESOLVED)
++ goto out;
++
++ cma_make_mc_event(status, id_priv, multicast, &event, mc);
++ ret = cma_cm_event_handler(id_priv, &event);
+ rdma_destroy_ah_attr(&event.param.ud.ah_attr);
+ if (ret) {
+ destroy_id_handler_unlock(id_priv);
+@@ -4441,23 +4413,10 @@ static int cma_join_ib_multicast(struct rdma_id_private *id_priv,
+ IB_SA_MCMEMBER_REC_MTU |
+ IB_SA_MCMEMBER_REC_HOP_LIMIT;
+
+- mc->multicast.ib = ib_sa_join_multicast(&sa_client, id_priv->id.device,
+- id_priv->id.port_num, &rec,
+- comp_mask, GFP_KERNEL,
+- cma_ib_mc_handler, mc);
+- return PTR_ERR_OR_ZERO(mc->multicast.ib);
+-}
+-
+-static void iboe_mcast_work_handler(struct work_struct *work)
+-{
+- struct iboe_mcast_work *mw = container_of(work, struct iboe_mcast_work, work);
+- struct cma_multicast *mc = mw->mc;
+- struct ib_sa_multicast *m = mc->multicast.ib;
+-
+- mc->multicast.ib->context = mc;
+- cma_ib_mc_handler(0, m);
+- kref_put(&mc->mcref, release_mc);
+- kfree(mw);
++ mc->sa_mc = ib_sa_join_multicast(&sa_client, id_priv->id.device,
++ id_priv->id.port_num, &rec, comp_mask,
++ GFP_KERNEL, cma_ib_mc_handler, mc);
++ return PTR_ERR_OR_ZERO(mc->sa_mc);
+ }
+
+ static void cma_iboe_set_mgid(struct sockaddr *addr, union ib_gid *mgid,
+@@ -4492,52 +4451,47 @@ static void cma_iboe_set_mgid(struct sockaddr *addr, union ib_gid *mgid,
+ static int cma_iboe_join_multicast(struct rdma_id_private *id_priv,
+ struct cma_multicast *mc)
+ {
+- struct iboe_mcast_work *work;
++ struct cma_work *work;
+ struct rdma_dev_addr *dev_addr = &id_priv->id.route.addr.dev_addr;
+ int err = 0;
+ struct sockaddr *addr = (struct sockaddr *)&mc->addr;
+ struct net_device *ndev = NULL;
++ struct ib_sa_multicast ib;
+ enum ib_gid_type gid_type;
+ bool send_only;
+
+ send_only = mc->join_state == BIT(SENDONLY_FULLMEMBER_JOIN);
+
+- if (cma_zero_addr((struct sockaddr *)&mc->addr))
++ if (cma_zero_addr(addr))
+ return -EINVAL;
+
+ work = kzalloc(sizeof *work, GFP_KERNEL);
+ if (!work)
+ return -ENOMEM;
+
+- mc->multicast.ib = kzalloc(sizeof(struct ib_sa_multicast), GFP_KERNEL);
+- if (!mc->multicast.ib) {
+- err = -ENOMEM;
+- goto out1;
+- }
+-
+ gid_type = id_priv->cma_dev->default_gid_type[id_priv->id.port_num -
+ rdma_start_port(id_priv->cma_dev->device)];
+- cma_iboe_set_mgid(addr, &mc->multicast.ib->rec.mgid, gid_type);
++ cma_iboe_set_mgid(addr, &ib.rec.mgid, gid_type);
+
+- mc->multicast.ib->rec.pkey = cpu_to_be16(0xffff);
++ ib.rec.pkey = cpu_to_be16(0xffff);
+ if (id_priv->id.ps == RDMA_PS_UDP)
+- mc->multicast.ib->rec.qkey = cpu_to_be32(RDMA_UDP_QKEY);
++ ib.rec.qkey = cpu_to_be32(RDMA_UDP_QKEY);
+
+ if (dev_addr->bound_dev_if)
+ ndev = dev_get_by_index(dev_addr->net, dev_addr->bound_dev_if);
+ if (!ndev) {
+ err = -ENODEV;
+- goto out2;
++ goto err_free;
+ }
+- mc->multicast.ib->rec.rate = iboe_get_rate(ndev);
+- mc->multicast.ib->rec.hop_limit = 1;
+- mc->multicast.ib->rec.mtu = iboe_get_mtu(ndev->mtu);
++ ib.rec.rate = iboe_get_rate(ndev);
++ ib.rec.hop_limit = 1;
++ ib.rec.mtu = iboe_get_mtu(ndev->mtu);
+
+ if (addr->sa_family == AF_INET) {
+ if (gid_type == IB_GID_TYPE_ROCE_UDP_ENCAP) {
+- mc->multicast.ib->rec.hop_limit = IPV6_DEFAULT_HOPLIMIT;
++ ib.rec.hop_limit = IPV6_DEFAULT_HOPLIMIT;
+ if (!send_only) {
+- err = cma_igmp_send(ndev, &mc->multicast.ib->rec.mgid,
++ err = cma_igmp_send(ndev, &ib.rec.mgid,
+ true);
+ }
+ }
+@@ -4546,24 +4500,22 @@ static int cma_iboe_join_multicast(struct rdma_id_private *id_priv,
+ err = -ENOTSUPP;
+ }
+ dev_put(ndev);
+- if (err || !mc->multicast.ib->rec.mtu) {
++ if (err || !ib.rec.mtu) {
+ if (!err)
+ err = -EINVAL;
+- goto out2;
++ goto err_free;
+ }
+ rdma_ip2gid((struct sockaddr *)&id_priv->id.route.addr.src_addr,
+- &mc->multicast.ib->rec.port_gid);
++ &ib.rec.port_gid);
+ work->id = id_priv;
+- work->mc = mc;
+- INIT_WORK(&work->work, iboe_mcast_work_handler);
+- kref_get(&mc->mcref);
++ INIT_WORK(&work->work, cma_work_handler);
++ cma_make_mc_event(0, id_priv, &ib, &work->event, mc);
++ /* Balances with cma_id_put() in cma_work_handler */
++ cma_id_get(id_priv);
+ queue_work(cma_wq, &work->work);
+-
+ return 0;
+
+-out2:
+- kfree(mc->multicast.ib);
+-out1:
++err_free:
+ kfree(work);
+ return err;
+ }
+@@ -4575,6 +4527,10 @@ int rdma_join_multicast(struct rdma_cm_id *id, struct sockaddr *addr,
+ struct cma_multicast *mc;
+ int ret;
+
++ /* Not supported for kernel QPs */
++ if (WARN_ON(id->qp))
++ return -EINVAL;
++
+ if (!id->device)
+ return -EINVAL;
+
+@@ -4583,7 +4539,7 @@ int rdma_join_multicast(struct rdma_cm_id *id, struct sockaddr *addr,
+ !cma_comp(id_priv, RDMA_CM_ADDR_RESOLVED))
+ return -EINVAL;
+
+- mc = kmalloc(sizeof *mc, GFP_KERNEL);
++ mc = kzalloc(sizeof(*mc), GFP_KERNEL);
+ if (!mc)
+ return -ENOMEM;
+
+@@ -4593,7 +4549,6 @@ int rdma_join_multicast(struct rdma_cm_id *id, struct sockaddr *addr,
+ mc->join_state = join_state;
+
+ if (rdma_protocol_roce(id->device, id->port_num)) {
+- kref_init(&mc->mcref);
+ ret = cma_iboe_join_multicast(id_priv, mc);
+ if (ret)
+ goto out_err;
+@@ -4625,25 +4580,14 @@ void rdma_leave_multicast(struct rdma_cm_id *id, struct sockaddr *addr)
+ id_priv = container_of(id, struct rdma_id_private, id);
+ spin_lock_irq(&id_priv->lock);
+ list_for_each_entry(mc, &id_priv->mc_list, list) {
+- if (!memcmp(&mc->addr, addr, rdma_addr_size(addr))) {
+- list_del(&mc->list);
+- spin_unlock_irq(&id_priv->lock);
+-
+- if (id->qp)
+- ib_detach_mcast(id->qp,
+- &mc->multicast.ib->rec.mgid,
+- be16_to_cpu(mc->multicast.ib->rec.mlid));
+-
+- BUG_ON(id_priv->cma_dev->device != id->device);
+-
+- if (rdma_cap_ib_mcast(id->device, id->port_num)) {
+- ib_sa_free_multicast(mc->multicast.ib);
+- kfree(mc);
+- } else if (rdma_protocol_roce(id->device, id->port_num)) {
+- cma_leave_roce_mc_group(id_priv, mc);
+- }
+- return;
+- }
++ if (memcmp(&mc->addr, addr, rdma_addr_size(addr)) != 0)
++ continue;
++ list_del(&mc->list);
++ spin_unlock_irq(&id_priv->lock);
++
++ WARN_ON(id_priv->cma_dev->device != id->device);
++ destroy_mc(id_priv, mc);
++ return;
+ }
+ spin_unlock_irq(&id_priv->lock);
+ }
+@@ -4652,7 +4596,7 @@ EXPORT_SYMBOL(rdma_leave_multicast);
+ static int cma_netdev_change(struct net_device *ndev, struct rdma_id_private *id_priv)
+ {
+ struct rdma_dev_addr *dev_addr;
+- struct cma_ndev_work *work;
++ struct cma_work *work;
+
+ dev_addr = &id_priv->id.route.addr.dev_addr;
+
+@@ -4665,7 +4609,7 @@ static int cma_netdev_change(struct net_device *ndev, struct rdma_id_private *id
+ if (!work)
+ return -ENOMEM;
+
+- INIT_WORK(&work->work, cma_ndev_work_handler);
++ INIT_WORK(&work->work, cma_work_handler);
+ work->id = id_priv;
+ work->event.event = RDMA_CM_EVENT_ADDR_CHANGE;
+ cma_id_get(id_priv);
+diff --git a/drivers/infiniband/core/cq.c b/drivers/infiniband/core/cq.c
+index a92fc3f90bb5b..19e36e52181be 100644
+--- a/drivers/infiniband/core/cq.c
++++ b/drivers/infiniband/core/cq.c
+@@ -197,24 +197,22 @@ static void ib_cq_completion_workqueue(struct ib_cq *cq, void *private)
+ }
+
+ /**
+- * __ib_alloc_cq_user - allocate a completion queue
++ * __ib_alloc_cq - allocate a completion queue
+ * @dev: device to allocate the CQ for
+ * @private: driver private data, accessible from cq->cq_context
+ * @nr_cqe: number of CQEs to allocate
+ * @comp_vector: HCA completion vectors for this CQ
+ * @poll_ctx: context to poll the CQ from.
+ * @caller: module owner name.
+- * @udata: Valid user data or NULL for kernel object
+ *
+ * This is the proper interface to allocate a CQ for in-kernel users. A
+ * CQ allocated with this interface will automatically be polled from the
+ * specified context. The ULP must use wr->wr_cqe instead of wr->wr_id
+ * to use this CQ abstraction.
+ */
+-struct ib_cq *__ib_alloc_cq_user(struct ib_device *dev, void *private,
+- int nr_cqe, int comp_vector,
+- enum ib_poll_context poll_ctx,
+- const char *caller, struct ib_udata *udata)
++struct ib_cq *__ib_alloc_cq(struct ib_device *dev, void *private, int nr_cqe,
++ int comp_vector, enum ib_poll_context poll_ctx,
++ const char *caller)
+ {
+ struct ib_cq_init_attr cq_attr = {
+ .cqe = nr_cqe,
+@@ -277,7 +275,7 @@ struct ib_cq *__ib_alloc_cq_user(struct ib_device *dev, void *private,
+ out_destroy_cq:
+ rdma_dim_destroy(cq);
+ rdma_restrack_del(&cq->res);
+- cq->device->ops.destroy_cq(cq, udata);
++ cq->device->ops.destroy_cq(cq, NULL);
+ out_free_wc:
+ kfree(cq->wc);
+ out_free_cq:
+@@ -285,7 +283,7 @@ out_free_cq:
+ trace_cq_alloc_error(nr_cqe, comp_vector, poll_ctx, ret);
+ return ERR_PTR(ret);
+ }
+-EXPORT_SYMBOL(__ib_alloc_cq_user);
++EXPORT_SYMBOL(__ib_alloc_cq);
+
+ /**
+ * __ib_alloc_cq_any - allocate a completion queue
+@@ -310,18 +308,19 @@ struct ib_cq *__ib_alloc_cq_any(struct ib_device *dev, void *private,
+ atomic_inc_return(&counter) %
+ min_t(int, dev->num_comp_vectors, num_online_cpus());
+
+- return __ib_alloc_cq_user(dev, private, nr_cqe, comp_vector, poll_ctx,
+- caller, NULL);
++ return __ib_alloc_cq(dev, private, nr_cqe, comp_vector, poll_ctx,
++ caller);
+ }
+ EXPORT_SYMBOL(__ib_alloc_cq_any);
+
+ /**
+- * ib_free_cq_user - free a completion queue
++ * ib_free_cq - free a completion queue
+ * @cq: completion queue to free.
+- * @udata: User data or NULL for kernel object
+ */
+-void ib_free_cq_user(struct ib_cq *cq, struct ib_udata *udata)
++void ib_free_cq(struct ib_cq *cq)
+ {
++ int ret;
++
+ if (WARN_ON_ONCE(atomic_read(&cq->usecnt)))
+ return;
+ if (WARN_ON_ONCE(cq->cqe_used))
+@@ -343,12 +342,13 @@ void ib_free_cq_user(struct ib_cq *cq, struct ib_udata *udata)
+
+ rdma_dim_destroy(cq);
+ trace_cq_free(cq);
++ ret = cq->device->ops.destroy_cq(cq, NULL);
++ WARN_ONCE(ret, "Destroy of kernel CQ shouldn't fail");
+ rdma_restrack_del(&cq->res);
+- cq->device->ops.destroy_cq(cq, udata);
+ kfree(cq->wc);
+ kfree(cq);
+ }
+-EXPORT_SYMBOL(ib_free_cq_user);
++EXPORT_SYMBOL(ib_free_cq);
+
+ void ib_cq_pool_init(struct ib_device *dev)
+ {
+diff --git a/drivers/infiniband/core/ucma.c b/drivers/infiniband/core/ucma.c
+index d03dacaef7880..2643d5dbe1da8 100644
+--- a/drivers/infiniband/core/ucma.c
++++ b/drivers/infiniband/core/ucma.c
+@@ -586,6 +586,7 @@ static int ucma_free_ctx(struct ucma_context *ctx)
+ list_move_tail(&uevent->list, &list);
+ }
+ list_del(&ctx->list);
++ events_reported = ctx->events_reported;
+ mutex_unlock(&ctx->file->mut);
+
+ list_for_each_entry_safe(uevent, tmp, &list, list) {
+@@ -595,7 +596,6 @@ static int ucma_free_ctx(struct ucma_context *ctx)
+ kfree(uevent);
+ }
+
+- events_reported = ctx->events_reported;
+ mutex_destroy(&ctx->mutex);
+ kfree(ctx);
+ return events_reported;
+@@ -1512,7 +1512,9 @@ static ssize_t ucma_process_join(struct ucma_file *file,
+ return 0;
+
+ err3:
++ mutex_lock(&ctx->mutex);
+ rdma_leave_multicast(ctx->cm_id, (struct sockaddr *) &mc->addr);
++ mutex_unlock(&ctx->mutex);
+ ucma_cleanup_mc_events(mc);
+ err2:
+ xa_erase(&multicast_table, mc->id);
+@@ -1678,7 +1680,9 @@ static ssize_t ucma_migrate_id(struct ucma_file *new_file,
+
+ cur_file = ctx->file;
+ if (cur_file == new_file) {
++ mutex_lock(&cur_file->mut);
+ resp.events_reported = ctx->events_reported;
++ mutex_unlock(&cur_file->mut);
+ goto response;
+ }
+
+diff --git a/drivers/infiniband/core/umem.c b/drivers/infiniband/core/umem.c
+index 82455a1392f1d..7e765fe211607 100644
+--- a/drivers/infiniband/core/umem.c
++++ b/drivers/infiniband/core/umem.c
+@@ -151,13 +151,24 @@ unsigned long ib_umem_find_best_pgsz(struct ib_umem *umem,
+ dma_addr_t mask;
+ int i;
+
++ /* rdma_for_each_block() has a bug if the page size is smaller than the
++ * page size used to build the umem. For now prevent smaller page sizes
++ * from being returned.
++ */
++ pgsz_bitmap &= GENMASK(BITS_PER_LONG - 1, PAGE_SHIFT);
++
+ /* At minimum, drivers must support PAGE_SIZE or smaller */
+ if (WARN_ON(!(pgsz_bitmap & GENMASK(PAGE_SHIFT, 0))))
+ return 0;
+
+ va = virt;
+- /* max page size not to exceed MR length */
+- mask = roundup_pow_of_two(umem->length);
++ /* The best result is the smallest page size that results in the minimum
++ * number of required pages. Compute the largest page size that could
++ * work based on VA address bits that don't change.
++ */
++ mask = pgsz_bitmap &
++ GENMASK(BITS_PER_LONG - 1,
++ bits_per((umem->length - 1 + virt) ^ virt));
+ /* offset into first SGL */
+ pgoff = umem->address & ~PAGE_MASK;
+
+diff --git a/drivers/infiniband/core/verbs.c b/drivers/infiniband/core/verbs.c
+index 1b0ea945756f0..2e397d18dbf44 100644
+--- a/drivers/infiniband/core/verbs.c
++++ b/drivers/infiniband/core/verbs.c
+@@ -2016,16 +2016,21 @@ EXPORT_SYMBOL(rdma_set_cq_moderation);
+
+ int ib_destroy_cq_user(struct ib_cq *cq, struct ib_udata *udata)
+ {
++ int ret;
++
+ if (WARN_ON_ONCE(cq->shared))
+ return -EOPNOTSUPP;
+
+ if (atomic_read(&cq->usecnt))
+ return -EBUSY;
+
++ ret = cq->device->ops.destroy_cq(cq, udata);
++ if (ret)
++ return ret;
++
+ rdma_restrack_del(&cq->res);
+- cq->device->ops.destroy_cq(cq, udata);
+ kfree(cq);
+- return 0;
++ return ret;
+ }
+ EXPORT_SYMBOL(ib_destroy_cq_user);
+
+diff --git a/drivers/infiniband/hw/bnxt_re/ib_verbs.c b/drivers/infiniband/hw/bnxt_re/ib_verbs.c
+index cb6e873039df5..9f69abf01d331 100644
+--- a/drivers/infiniband/hw/bnxt_re/ib_verbs.c
++++ b/drivers/infiniband/hw/bnxt_re/ib_verbs.c
+@@ -2714,7 +2714,7 @@ int bnxt_re_post_recv(struct ib_qp *ib_qp, const struct ib_recv_wr *wr,
+ }
+
+ /* Completion Queues */
+-void bnxt_re_destroy_cq(struct ib_cq *ib_cq, struct ib_udata *udata)
++int bnxt_re_destroy_cq(struct ib_cq *ib_cq, struct ib_udata *udata)
+ {
+ struct bnxt_re_cq *cq;
+ struct bnxt_qplib_nq *nq;
+@@ -2730,6 +2730,7 @@ void bnxt_re_destroy_cq(struct ib_cq *ib_cq, struct ib_udata *udata)
+ atomic_dec(&rdev->cq_count);
+ nq->budget--;
+ kfree(cq->cql);
++ return 0;
+ }
+
+ int bnxt_re_create_cq(struct ib_cq *ibcq, const struct ib_cq_init_attr *attr,
+diff --git a/drivers/infiniband/hw/bnxt_re/ib_verbs.h b/drivers/infiniband/hw/bnxt_re/ib_verbs.h
+index e5fbbeba6d28d..f4a0ded67a8aa 100644
+--- a/drivers/infiniband/hw/bnxt_re/ib_verbs.h
++++ b/drivers/infiniband/hw/bnxt_re/ib_verbs.h
+@@ -193,7 +193,7 @@ int bnxt_re_post_recv(struct ib_qp *qp, const struct ib_recv_wr *recv_wr,
+ const struct ib_recv_wr **bad_recv_wr);
+ int bnxt_re_create_cq(struct ib_cq *ibcq, const struct ib_cq_init_attr *attr,
+ struct ib_udata *udata);
+-void bnxt_re_destroy_cq(struct ib_cq *cq, struct ib_udata *udata);
++int bnxt_re_destroy_cq(struct ib_cq *cq, struct ib_udata *udata);
+ int bnxt_re_poll_cq(struct ib_cq *cq, int num_entries, struct ib_wc *wc);
+ int bnxt_re_req_notify_cq(struct ib_cq *cq, enum ib_cq_notify_flags flags);
+ struct ib_mr *bnxt_re_get_dma_mr(struct ib_pd *pd, int mr_access_flags);
+diff --git a/drivers/infiniband/hw/cxgb4/cq.c b/drivers/infiniband/hw/cxgb4/cq.c
+index b1bb61c65f4f6..7b076fc23cf38 100644
+--- a/drivers/infiniband/hw/cxgb4/cq.c
++++ b/drivers/infiniband/hw/cxgb4/cq.c
+@@ -967,7 +967,7 @@ int c4iw_poll_cq(struct ib_cq *ibcq, int num_entries, struct ib_wc *wc)
+ return !err || err == -ENODATA ? npolled : err;
+ }
+
+-void c4iw_destroy_cq(struct ib_cq *ib_cq, struct ib_udata *udata)
++int c4iw_destroy_cq(struct ib_cq *ib_cq, struct ib_udata *udata)
+ {
+ struct c4iw_cq *chp;
+ struct c4iw_ucontext *ucontext;
+@@ -985,6 +985,7 @@ void c4iw_destroy_cq(struct ib_cq *ib_cq, struct ib_udata *udata)
+ ucontext ? &ucontext->uctx : &chp->cq.rdev->uctx,
+ chp->destroy_skb, chp->wr_waitp);
+ c4iw_put_wr_wait(chp->wr_waitp);
++ return 0;
+ }
+
+ int c4iw_create_cq(struct ib_cq *ibcq, const struct ib_cq_init_attr *attr,
+diff --git a/drivers/infiniband/hw/cxgb4/iw_cxgb4.h b/drivers/infiniband/hw/cxgb4/iw_cxgb4.h
+index e8e11bd95e429..de0f278e31501 100644
+--- a/drivers/infiniband/hw/cxgb4/iw_cxgb4.h
++++ b/drivers/infiniband/hw/cxgb4/iw_cxgb4.h
+@@ -992,7 +992,7 @@ struct ib_mr *c4iw_reg_user_mr(struct ib_pd *pd, u64 start,
+ struct ib_udata *udata);
+ struct ib_mr *c4iw_get_dma_mr(struct ib_pd *pd, int acc);
+ int c4iw_dereg_mr(struct ib_mr *ib_mr, struct ib_udata *udata);
+-void c4iw_destroy_cq(struct ib_cq *ib_cq, struct ib_udata *udata);
++int c4iw_destroy_cq(struct ib_cq *ib_cq, struct ib_udata *udata);
+ int c4iw_create_cq(struct ib_cq *ibcq, const struct ib_cq_init_attr *attr,
+ struct ib_udata *udata);
+ int c4iw_arm_cq(struct ib_cq *ibcq, enum ib_cq_notify_flags flags);
+diff --git a/drivers/infiniband/hw/efa/efa.h b/drivers/infiniband/hw/efa/efa.h
+index 1889dd172a252..05f593940e7b0 100644
+--- a/drivers/infiniband/hw/efa/efa.h
++++ b/drivers/infiniband/hw/efa/efa.h
+@@ -139,7 +139,7 @@ int efa_destroy_qp(struct ib_qp *ibqp, struct ib_udata *udata);
+ struct ib_qp *efa_create_qp(struct ib_pd *ibpd,
+ struct ib_qp_init_attr *init_attr,
+ struct ib_udata *udata);
+-void efa_destroy_cq(struct ib_cq *ibcq, struct ib_udata *udata);
++int efa_destroy_cq(struct ib_cq *ibcq, struct ib_udata *udata);
+ int efa_create_cq(struct ib_cq *ibcq, const struct ib_cq_init_attr *attr,
+ struct ib_udata *udata);
+ struct ib_mr *efa_reg_mr(struct ib_pd *ibpd, u64 start, u64 length,
+diff --git a/drivers/infiniband/hw/efa/efa_verbs.c b/drivers/infiniband/hw/efa/efa_verbs.c
+index 7dd082441333c..bd2caa2353c75 100644
+--- a/drivers/infiniband/hw/efa/efa_verbs.c
++++ b/drivers/infiniband/hw/efa/efa_verbs.c
+@@ -843,7 +843,7 @@ static int efa_destroy_cq_idx(struct efa_dev *dev, int cq_idx)
+ return efa_com_destroy_cq(&dev->edev, &params);
+ }
+
+-void efa_destroy_cq(struct ib_cq *ibcq, struct ib_udata *udata)
++int efa_destroy_cq(struct ib_cq *ibcq, struct ib_udata *udata)
+ {
+ struct efa_dev *dev = to_edev(ibcq->device);
+ struct efa_cq *cq = to_ecq(ibcq);
+@@ -856,6 +856,7 @@ void efa_destroy_cq(struct ib_cq *ibcq, struct ib_udata *udata)
+ efa_destroy_cq_idx(dev, cq->cq_idx);
+ efa_free_mapped(dev, cq->cpu_addr, cq->dma_addr, cq->size,
+ DMA_FROM_DEVICE);
++ return 0;
+ }
+
+ static int cq_mmap_entries_setup(struct efa_dev *dev, struct efa_cq *cq,
+diff --git a/drivers/infiniband/hw/hns/hns_roce_cq.c b/drivers/infiniband/hw/hns/hns_roce_cq.c
+index e87d616f79882..c5acf3332519b 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_cq.c
++++ b/drivers/infiniband/hw/hns/hns_roce_cq.c
+@@ -311,7 +311,7 @@ err_cq_buf:
+ return ret;
+ }
+
+-void hns_roce_destroy_cq(struct ib_cq *ib_cq, struct ib_udata *udata)
++int hns_roce_destroy_cq(struct ib_cq *ib_cq, struct ib_udata *udata)
+ {
+ struct hns_roce_dev *hr_dev = to_hr_dev(ib_cq->device);
+ struct hns_roce_cq *hr_cq = to_hr_cq(ib_cq);
+@@ -322,6 +322,7 @@ void hns_roce_destroy_cq(struct ib_cq *ib_cq, struct ib_udata *udata)
+ free_cq_buf(hr_dev, hr_cq);
+ free_cq_db(hr_dev, hr_cq, udata);
+ free_cqc(hr_dev, hr_cq);
++ return 0;
+ }
+
+ void hns_roce_cq_completion(struct hns_roce_dev *hr_dev, u32 cqn)
+diff --git a/drivers/infiniband/hw/hns/hns_roce_device.h b/drivers/infiniband/hw/hns/hns_roce_device.h
+index c69453a62767c..77ca55b559a0a 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_device.h
++++ b/drivers/infiniband/hw/hns/hns_roce_device.h
+@@ -929,7 +929,7 @@ struct hns_roce_hw {
+ int (*poll_cq)(struct ib_cq *ibcq, int num_entries, struct ib_wc *wc);
+ int (*dereg_mr)(struct hns_roce_dev *hr_dev, struct hns_roce_mr *mr,
+ struct ib_udata *udata);
+- void (*destroy_cq)(struct ib_cq *ibcq, struct ib_udata *udata);
++ int (*destroy_cq)(struct ib_cq *ibcq, struct ib_udata *udata);
+ int (*modify_cq)(struct ib_cq *cq, u16 cq_count, u16 cq_period);
+ int (*init_eq)(struct hns_roce_dev *hr_dev);
+ void (*cleanup_eq)(struct hns_roce_dev *hr_dev);
+@@ -1246,7 +1246,7 @@ int to_hr_qp_type(int qp_type);
+ int hns_roce_create_cq(struct ib_cq *ib_cq, const struct ib_cq_init_attr *attr,
+ struct ib_udata *udata);
+
+-void hns_roce_destroy_cq(struct ib_cq *ib_cq, struct ib_udata *udata);
++int hns_roce_destroy_cq(struct ib_cq *ib_cq, struct ib_udata *udata);
+ int hns_roce_db_map_user(struct hns_roce_ucontext *context,
+ struct ib_udata *udata, unsigned long virt,
+ struct hns_roce_db *db);
+diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v1.c b/drivers/infiniband/hw/hns/hns_roce_hw_v1.c
+index cf39f560b8001..5a0c90e0b367b 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_hw_v1.c
++++ b/drivers/infiniband/hw/hns/hns_roce_hw_v1.c
+@@ -271,7 +271,6 @@ static int hns_roce_v1_post_send(struct ib_qp *ibqp,
+ ps_opcode = HNS_ROCE_WQE_OPCODE_SEND;
+ break;
+ case IB_WR_LOCAL_INV:
+- break;
+ case IB_WR_ATOMIC_CMP_AND_SWP:
+ case IB_WR_ATOMIC_FETCH_AND_ADD:
+ case IB_WR_LSO:
+@@ -3573,7 +3572,7 @@ int hns_roce_v1_destroy_qp(struct ib_qp *ibqp, struct ib_udata *udata)
+ return 0;
+ }
+
+-static void hns_roce_v1_destroy_cq(struct ib_cq *ibcq, struct ib_udata *udata)
++static int hns_roce_v1_destroy_cq(struct ib_cq *ibcq, struct ib_udata *udata)
+ {
+ struct hns_roce_dev *hr_dev = to_hr_dev(ibcq->device);
+ struct hns_roce_cq *hr_cq = to_hr_cq(ibcq);
+@@ -3604,6 +3603,7 @@ static void hns_roce_v1_destroy_cq(struct ib_cq *ibcq, struct ib_udata *udata)
+ }
+ wait_time++;
+ }
++ return 0;
+ }
+
+ static void set_eq_cons_index_v1(struct hns_roce_eq *eq, int req_not)
+diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+index 38a48ab3e1d02..37809a0b50e25 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
++++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+@@ -1770,9 +1770,9 @@ static void calc_pg_sz(int obj_num, int obj_size, int hop_num, int ctx_bt_num,
+ int *buf_page_size, int *bt_page_size, u32 hem_type)
+ {
+ u64 obj_per_chunk;
+- int bt_chunk_size = 1 << PAGE_SHIFT;
+- int buf_chunk_size = 1 << PAGE_SHIFT;
+- int obj_per_chunk_default = buf_chunk_size / obj_size;
++ u64 bt_chunk_size = PAGE_SIZE;
++ u64 buf_chunk_size = PAGE_SIZE;
++ u64 obj_per_chunk_default = buf_chunk_size / obj_size;
+
+ *buf_page_size = 0;
+ *bt_page_size = 0;
+@@ -3640,9 +3640,6 @@ static void modify_qp_reset_to_init(struct ib_qp *ibqp,
+ V2_QPC_BYTE_76_SRQ_EN_S, 1);
+ }
+
+- roce_set_field(context->byte_172_sq_psn, V2_QPC_BYTE_172_ACK_REQ_FREQ_M,
+- V2_QPC_BYTE_172_ACK_REQ_FREQ_S, 4);
+-
+ roce_set_bit(context->byte_172_sq_psn, V2_QPC_BYTE_172_FRE_S, 1);
+
+ hr_qp->access_flags = attr->qp_access_flags;
+@@ -3983,6 +3980,7 @@ static int modify_qp_init_to_rtr(struct ib_qp *ibqp,
+ dma_addr_t trrl_ba;
+ dma_addr_t irrl_ba;
+ enum ib_mtu mtu;
++ u8 lp_pktn_ini;
+ u8 port_num;
+ u64 *mtts;
+ u8 *dmac;
+@@ -4090,13 +4088,21 @@ static int modify_qp_init_to_rtr(struct ib_qp *ibqp,
+ }
+
+ #define MAX_LP_MSG_LEN 65536
+- /* MTU*(2^LP_PKTN_INI) shouldn't be bigger than 64kb */
++ /* MTU * (2 ^ LP_PKTN_INI) shouldn't be bigger than 64KB */
++ lp_pktn_ini = ilog2(MAX_LP_MSG_LEN / ib_mtu_enum_to_int(mtu));
++
+ roce_set_field(context->byte_56_dqpn_err, V2_QPC_BYTE_56_LP_PKTN_INI_M,
+- V2_QPC_BYTE_56_LP_PKTN_INI_S,
+- ilog2(MAX_LP_MSG_LEN / ib_mtu_enum_to_int(mtu)));
++ V2_QPC_BYTE_56_LP_PKTN_INI_S, lp_pktn_ini);
+ roce_set_field(qpc_mask->byte_56_dqpn_err, V2_QPC_BYTE_56_LP_PKTN_INI_M,
+ V2_QPC_BYTE_56_LP_PKTN_INI_S, 0);
+
++ /* ACK_REQ_FREQ should be larger than or equal to LP_PKTN_INI */
++ roce_set_field(context->byte_172_sq_psn, V2_QPC_BYTE_172_ACK_REQ_FREQ_M,
++ V2_QPC_BYTE_172_ACK_REQ_FREQ_S, lp_pktn_ini);
++ roce_set_field(qpc_mask->byte_172_sq_psn,
++ V2_QPC_BYTE_172_ACK_REQ_FREQ_M,
++ V2_QPC_BYTE_172_ACK_REQ_FREQ_S, 0);
++
+ roce_set_bit(qpc_mask->byte_108_rx_reqepsn,
+ V2_QPC_BYTE_108_RX_REQ_PSN_ERR_S, 0);
+ roce_set_field(qpc_mask->byte_96_rx_reqmsn, V2_QPC_BYTE_96_RX_REQ_MSN_M,
+@@ -4287,11 +4293,19 @@ static int hns_roce_v2_set_path(struct ib_qp *ibqp,
+ V2_QPC_BYTE_28_FL_S, 0);
+ memcpy(context->dgid, grh->dgid.raw, sizeof(grh->dgid.raw));
+ memset(qpc_mask->dgid, 0, sizeof(grh->dgid.raw));
++
++ hr_qp->sl = rdma_ah_get_sl(&attr->ah_attr);
++ if (unlikely(hr_qp->sl > MAX_SERVICE_LEVEL)) {
++ ibdev_err(ibdev,
++ "failed to fill QPC, sl (%d) shouldn't be larger than %d.\n",
++ hr_qp->sl, MAX_SERVICE_LEVEL);
++ return -EINVAL;
++ }
++
+ roce_set_field(context->byte_28_at_fl, V2_QPC_BYTE_28_SL_M,
+- V2_QPC_BYTE_28_SL_S, rdma_ah_get_sl(&attr->ah_attr));
++ V2_QPC_BYTE_28_SL_S, hr_qp->sl);
+ roce_set_field(qpc_mask->byte_28_at_fl, V2_QPC_BYTE_28_SL_M,
+ V2_QPC_BYTE_28_SL_S, 0);
+- hr_qp->sl = rdma_ah_get_sl(&attr->ah_attr);
+
+ return 0;
+ }
+@@ -4787,7 +4801,9 @@ static int hns_roce_v2_query_qp(struct ib_qp *ibqp, struct ib_qp_attr *qp_attr,
+ qp_attr->retry_cnt = roce_get_field(context.byte_212_lsn,
+ V2_QPC_BYTE_212_RETRY_CNT_M,
+ V2_QPC_BYTE_212_RETRY_CNT_S);
+- qp_attr->rnr_retry = le32_to_cpu(context.rq_rnr_timer);
++ qp_attr->rnr_retry = roce_get_field(context.byte_244_rnr_rxack,
++ V2_QPC_BYTE_244_RNR_CNT_M,
++ V2_QPC_BYTE_244_RNR_CNT_S);
+
+ done:
+ qp_attr->cur_qp_state = qp_attr->qp_state;
+@@ -4803,6 +4819,7 @@ done:
+ }
+
+ qp_init_attr->cap = qp_attr->cap;
++ qp_init_attr->sq_sig_type = hr_qp->sq_signal_bits;
+
+ out:
+ mutex_unlock(&hr_qp->mutex);
+diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.h b/drivers/infiniband/hw/hns/hns_roce_hw_v2.h
+index 4f840997c6c73..c6a280bdbfaaf 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.h
++++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.h
+@@ -1957,6 +1957,8 @@ struct hns_roce_eq_context {
+ #define HNS_ROCE_V2_AEQE_EVENT_QUEUE_NUM_S 0
+ #define HNS_ROCE_V2_AEQE_EVENT_QUEUE_NUM_M GENMASK(23, 0)
+
++#define MAX_SERVICE_LEVEL 0x7
++
+ struct hns_roce_wqe_atomic_seg {
+ __le64 fetchadd_swap_data;
+ __le64 cmp_data;
+diff --git a/drivers/infiniband/hw/hns/hns_roce_qp.c b/drivers/infiniband/hw/hns/hns_roce_qp.c
+index 4edea397b6b80..4486c9b7c3e43 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_qp.c
++++ b/drivers/infiniband/hw/hns/hns_roce_qp.c
+@@ -1171,8 +1171,10 @@ int hns_roce_modify_qp(struct ib_qp *ibqp, struct ib_qp_attr *attr,
+
+ mutex_lock(&hr_qp->mutex);
+
+- cur_state = attr_mask & IB_QP_CUR_STATE ?
+- attr->cur_qp_state : (enum ib_qp_state)hr_qp->state;
++ if (attr_mask & IB_QP_CUR_STATE && attr->cur_qp_state != hr_qp->state)
++ goto out;
++
++ cur_state = hr_qp->state;
+ new_state = attr_mask & IB_QP_STATE ? attr->qp_state : cur_state;
+
+ if (ibqp->uobject &&
+diff --git a/drivers/infiniband/hw/i40iw/i40iw.h b/drivers/infiniband/hw/i40iw/i40iw.h
+index 49d92638e0dbb..9a2b87cc3d301 100644
+--- a/drivers/infiniband/hw/i40iw/i40iw.h
++++ b/drivers/infiniband/hw/i40iw/i40iw.h
+@@ -409,8 +409,8 @@ static inline struct i40iw_qp *to_iwqp(struct ib_qp *ibqp)
+ }
+
+ /* i40iw.c */
+-void i40iw_add_ref(struct ib_qp *);
+-void i40iw_rem_ref(struct ib_qp *);
++void i40iw_qp_add_ref(struct ib_qp *ibqp);
++void i40iw_qp_rem_ref(struct ib_qp *ibqp);
+ struct ib_qp *i40iw_get_qp(struct ib_device *, int);
+
+ void i40iw_flush_wqes(struct i40iw_device *iwdev,
+@@ -554,9 +554,8 @@ enum i40iw_status_code i40iw_manage_qhash(struct i40iw_device *iwdev,
+ bool wait);
+ void i40iw_receive_ilq(struct i40iw_sc_vsi *vsi, struct i40iw_puda_buf *rbuf);
+ void i40iw_free_sqbuf(struct i40iw_sc_vsi *vsi, void *bufp);
+-void i40iw_free_qp_resources(struct i40iw_device *iwdev,
+- struct i40iw_qp *iwqp,
+- u32 qp_num);
++void i40iw_free_qp_resources(struct i40iw_qp *iwqp);
++
+ enum i40iw_status_code i40iw_obj_aligned_mem(struct i40iw_device *iwdev,
+ struct i40iw_dma_mem *memptr,
+ u32 size, u32 mask);
+diff --git a/drivers/infiniband/hw/i40iw/i40iw_cm.c b/drivers/infiniband/hw/i40iw/i40iw_cm.c
+index fa7a5ff498c73..56c1e9abc52dc 100644
+--- a/drivers/infiniband/hw/i40iw/i40iw_cm.c
++++ b/drivers/infiniband/hw/i40iw/i40iw_cm.c
+@@ -2322,7 +2322,7 @@ static void i40iw_rem_ref_cm_node(struct i40iw_cm_node *cm_node)
+ iwqp = cm_node->iwqp;
+ if (iwqp) {
+ iwqp->cm_node = NULL;
+- i40iw_rem_ref(&iwqp->ibqp);
++ i40iw_qp_rem_ref(&iwqp->ibqp);
+ cm_node->iwqp = NULL;
+ } else if (cm_node->qhash_set) {
+ i40iw_get_addr_info(cm_node, &nfo);
+@@ -3452,7 +3452,7 @@ void i40iw_cm_disconn(struct i40iw_qp *iwqp)
+ kfree(work);
+ return;
+ }
+- i40iw_add_ref(&iwqp->ibqp);
++ i40iw_qp_add_ref(&iwqp->ibqp);
+ spin_unlock_irqrestore(&iwdev->qptable_lock, flags);
+
+ work->iwqp = iwqp;
+@@ -3623,7 +3623,7 @@ static void i40iw_disconnect_worker(struct work_struct *work)
+
+ kfree(dwork);
+ i40iw_cm_disconn_true(iwqp);
+- i40iw_rem_ref(&iwqp->ibqp);
++ i40iw_qp_rem_ref(&iwqp->ibqp);
+ }
+
+ /**
+@@ -3745,7 +3745,7 @@ int i40iw_accept(struct iw_cm_id *cm_id, struct iw_cm_conn_param *conn_param)
+ cm_node->lsmm_size = accept.size + conn_param->private_data_len;
+ i40iw_cm_init_tsa_conn(iwqp, cm_node);
+ cm_id->add_ref(cm_id);
+- i40iw_add_ref(&iwqp->ibqp);
++ i40iw_qp_add_ref(&iwqp->ibqp);
+
+ attr.qp_state = IB_QPS_RTS;
+ cm_node->qhash_set = false;
+@@ -3908,7 +3908,7 @@ int i40iw_connect(struct iw_cm_id *cm_id, struct iw_cm_conn_param *conn_param)
+ iwqp->cm_node = cm_node;
+ cm_node->iwqp = iwqp;
+ iwqp->cm_id = cm_id;
+- i40iw_add_ref(&iwqp->ibqp);
++ i40iw_qp_add_ref(&iwqp->ibqp);
+
+ if (cm_node->state != I40IW_CM_STATE_OFFLOADED) {
+ cm_node->state = I40IW_CM_STATE_SYN_SENT;
+diff --git a/drivers/infiniband/hw/i40iw/i40iw_hw.c b/drivers/infiniband/hw/i40iw/i40iw_hw.c
+index ae8b97c306657..a7512508f7e60 100644
+--- a/drivers/infiniband/hw/i40iw/i40iw_hw.c
++++ b/drivers/infiniband/hw/i40iw/i40iw_hw.c
+@@ -313,7 +313,7 @@ void i40iw_process_aeq(struct i40iw_device *iwdev)
+ __func__, info->qp_cq_id);
+ continue;
+ }
+- i40iw_add_ref(&iwqp->ibqp);
++ i40iw_qp_add_ref(&iwqp->ibqp);
+ spin_unlock_irqrestore(&iwdev->qptable_lock, flags);
+ qp = &iwqp->sc_qp;
+ spin_lock_irqsave(&iwqp->lock, flags);
+@@ -427,7 +427,7 @@ void i40iw_process_aeq(struct i40iw_device *iwdev)
+ break;
+ }
+ if (info->qp)
+- i40iw_rem_ref(&iwqp->ibqp);
++ i40iw_qp_rem_ref(&iwqp->ibqp);
+ } while (1);
+
+ if (aeqcnt)
+diff --git a/drivers/infiniband/hw/i40iw/i40iw_utils.c b/drivers/infiniband/hw/i40iw/i40iw_utils.c
+index 016524683e17e..72db7c1dc2998 100644
+--- a/drivers/infiniband/hw/i40iw/i40iw_utils.c
++++ b/drivers/infiniband/hw/i40iw/i40iw_utils.c
+@@ -479,25 +479,6 @@ void i40iw_cleanup_pending_cqp_op(struct i40iw_device *iwdev)
+ }
+ }
+
+-/**
+- * i40iw_free_qp - callback after destroy cqp completes
+- * @cqp_request: cqp request for destroy qp
+- * @num: not used
+- */
+-static void i40iw_free_qp(struct i40iw_cqp_request *cqp_request, u32 num)
+-{
+- struct i40iw_sc_qp *qp = (struct i40iw_sc_qp *)cqp_request->param;
+- struct i40iw_qp *iwqp = (struct i40iw_qp *)qp->back_qp;
+- struct i40iw_device *iwdev;
+- u32 qp_num = iwqp->ibqp.qp_num;
+-
+- iwdev = iwqp->iwdev;
+-
+- i40iw_rem_pdusecount(iwqp->iwpd, iwdev);
+- i40iw_free_qp_resources(iwdev, iwqp, qp_num);
+- i40iw_rem_devusecount(iwdev);
+-}
+-
+ /**
+ * i40iw_wait_event - wait for completion
+ * @iwdev: iwarp device
+@@ -618,26 +599,23 @@ void i40iw_rem_pdusecount(struct i40iw_pd *iwpd, struct i40iw_device *iwdev)
+ }
+
+ /**
+- * i40iw_add_ref - add refcount for qp
++ * i40iw_qp_add_ref - add refcount for qp
+ * @ibqp: iqarp qp
+ */
+-void i40iw_add_ref(struct ib_qp *ibqp)
++void i40iw_qp_add_ref(struct ib_qp *ibqp)
+ {
+ struct i40iw_qp *iwqp = (struct i40iw_qp *)ibqp;
+
+- atomic_inc(&iwqp->refcount);
++ refcount_inc(&iwqp->refcount);
+ }
+
+ /**
+- * i40iw_rem_ref - rem refcount for qp and free if 0
++ * i40iw_qp_rem_ref - rem refcount for qp and free if 0
+ * @ibqp: iqarp qp
+ */
+-void i40iw_rem_ref(struct ib_qp *ibqp)
++void i40iw_qp_rem_ref(struct ib_qp *ibqp)
+ {
+ struct i40iw_qp *iwqp;
+- enum i40iw_status_code status;
+- struct i40iw_cqp_request *cqp_request;
+- struct cqp_commands_info *cqp_info;
+ struct i40iw_device *iwdev;
+ u32 qp_num;
+ unsigned long flags;
+@@ -645,7 +623,7 @@ void i40iw_rem_ref(struct ib_qp *ibqp)
+ iwqp = to_iwqp(ibqp);
+ iwdev = iwqp->iwdev;
+ spin_lock_irqsave(&iwdev->qptable_lock, flags);
+- if (!atomic_dec_and_test(&iwqp->refcount)) {
++ if (!refcount_dec_and_test(&iwqp->refcount)) {
+ spin_unlock_irqrestore(&iwdev->qptable_lock, flags);
+ return;
+ }
+@@ -653,25 +631,8 @@ void i40iw_rem_ref(struct ib_qp *ibqp)
+ qp_num = iwqp->ibqp.qp_num;
+ iwdev->qp_table[qp_num] = NULL;
+ spin_unlock_irqrestore(&iwdev->qptable_lock, flags);
+- cqp_request = i40iw_get_cqp_request(&iwdev->cqp, false);
+- if (!cqp_request)
+- return;
+-
+- cqp_request->callback_fcn = i40iw_free_qp;
+- cqp_request->param = (void *)&iwqp->sc_qp;
+- cqp_info = &cqp_request->info;
+- cqp_info->cqp_cmd = OP_QP_DESTROY;
+- cqp_info->post_sq = 1;
+- cqp_info->in.u.qp_destroy.qp = &iwqp->sc_qp;
+- cqp_info->in.u.qp_destroy.scratch = (uintptr_t)cqp_request;
+- cqp_info->in.u.qp_destroy.remove_hash_idx = true;
+- status = i40iw_handle_cqp_op(iwdev, cqp_request);
+- if (!status)
+- return;
++ complete(&iwqp->free_qp);
+
+- i40iw_rem_pdusecount(iwqp->iwpd, iwdev);
+- i40iw_free_qp_resources(iwdev, iwqp, qp_num);
+- i40iw_rem_devusecount(iwdev);
+ }
+
+ /**
+@@ -938,7 +899,7 @@ static void i40iw_terminate_timeout(struct timer_list *t)
+ struct i40iw_sc_qp *qp = (struct i40iw_sc_qp *)&iwqp->sc_qp;
+
+ i40iw_terminate_done(qp, 1);
+- i40iw_rem_ref(&iwqp->ibqp);
++ i40iw_qp_rem_ref(&iwqp->ibqp);
+ }
+
+ /**
+@@ -950,7 +911,7 @@ void i40iw_terminate_start_timer(struct i40iw_sc_qp *qp)
+ struct i40iw_qp *iwqp;
+
+ iwqp = (struct i40iw_qp *)qp->back_qp;
+- i40iw_add_ref(&iwqp->ibqp);
++ i40iw_qp_add_ref(&iwqp->ibqp);
+ timer_setup(&iwqp->terminate_timer, i40iw_terminate_timeout, 0);
+ iwqp->terminate_timer.expires = jiffies + HZ;
+ add_timer(&iwqp->terminate_timer);
+@@ -966,7 +927,7 @@ void i40iw_terminate_del_timer(struct i40iw_sc_qp *qp)
+
+ iwqp = (struct i40iw_qp *)qp->back_qp;
+ if (del_timer(&iwqp->terminate_timer))
+- i40iw_rem_ref(&iwqp->ibqp);
++ i40iw_qp_rem_ref(&iwqp->ibqp);
+ }
+
+ /**
+diff --git a/drivers/infiniband/hw/i40iw/i40iw_verbs.c b/drivers/infiniband/hw/i40iw/i40iw_verbs.c
+index 19af29a48c559..2419de36e943d 100644
+--- a/drivers/infiniband/hw/i40iw/i40iw_verbs.c
++++ b/drivers/infiniband/hw/i40iw/i40iw_verbs.c
+@@ -364,11 +364,11 @@ static struct i40iw_pbl *i40iw_get_pbl(unsigned long va,
+ * @iwqp: qp ptr (user or kernel)
+ * @qp_num: qp number assigned
+ */
+-void i40iw_free_qp_resources(struct i40iw_device *iwdev,
+- struct i40iw_qp *iwqp,
+- u32 qp_num)
++void i40iw_free_qp_resources(struct i40iw_qp *iwqp)
+ {
+ struct i40iw_pbl *iwpbl = &iwqp->iwpbl;
++ struct i40iw_device *iwdev = iwqp->iwdev;
++ u32 qp_num = iwqp->ibqp.qp_num;
+
+ i40iw_ieq_cleanup_qp(iwdev->vsi.ieq, &iwqp->sc_qp);
+ i40iw_dealloc_push_page(iwdev, &iwqp->sc_qp);
+@@ -402,6 +402,10 @@ static void i40iw_clean_cqes(struct i40iw_qp *iwqp, struct i40iw_cq *iwcq)
+ static int i40iw_destroy_qp(struct ib_qp *ibqp, struct ib_udata *udata)
+ {
+ struct i40iw_qp *iwqp = to_iwqp(ibqp);
++ struct ib_qp_attr attr;
++ struct i40iw_device *iwdev = iwqp->iwdev;
++
++ memset(&attr, 0, sizeof(attr));
+
+ iwqp->destroyed = 1;
+
+@@ -416,7 +420,15 @@ static int i40iw_destroy_qp(struct ib_qp *ibqp, struct ib_udata *udata)
+ }
+ }
+
+- i40iw_rem_ref(&iwqp->ibqp);
++ attr.qp_state = IB_QPS_ERR;
++ i40iw_modify_qp(&iwqp->ibqp, &attr, IB_QP_STATE, NULL);
++ i40iw_qp_rem_ref(&iwqp->ibqp);
++ wait_for_completion(&iwqp->free_qp);
++ i40iw_cqp_qp_destroy_cmd(&iwdev->sc_dev, &iwqp->sc_qp);
++ i40iw_rem_pdusecount(iwqp->iwpd, iwdev);
++ i40iw_free_qp_resources(iwqp);
++ i40iw_rem_devusecount(iwdev);
++
+ return 0;
+ }
+
+@@ -577,6 +589,7 @@ static struct ib_qp *i40iw_create_qp(struct ib_pd *ibpd,
+ qp->back_qp = (void *)iwqp;
+ qp->push_idx = I40IW_INVALID_PUSH_PAGE_INDEX;
+
++ iwqp->iwdev = iwdev;
+ iwqp->ctx_info.iwarp_info = &iwqp->iwarp_info;
+
+ if (i40iw_allocate_dma_mem(dev->hw,
+@@ -601,7 +614,6 @@ static struct ib_qp *i40iw_create_qp(struct ib_pd *ibpd,
+ goto error;
+ }
+
+- iwqp->iwdev = iwdev;
+ iwqp->iwpd = iwpd;
+ iwqp->ibqp.qp_num = qp_num;
+ qp = &iwqp->sc_qp;
+@@ -715,7 +727,7 @@ static struct ib_qp *i40iw_create_qp(struct ib_pd *ibpd,
+ goto error;
+ }
+
+- i40iw_add_ref(&iwqp->ibqp);
++ refcount_set(&iwqp->refcount, 1);
+ spin_lock_init(&iwqp->lock);
+ iwqp->sig_all = (init_attr->sq_sig_type == IB_SIGNAL_ALL_WR) ? 1 : 0;
+ iwdev->qp_table[qp_num] = iwqp;
+@@ -737,10 +749,11 @@ static struct ib_qp *i40iw_create_qp(struct ib_pd *ibpd,
+ }
+ init_completion(&iwqp->sq_drained);
+ init_completion(&iwqp->rq_drained);
++ init_completion(&iwqp->free_qp);
+
+ return &iwqp->ibqp;
+ error:
+- i40iw_free_qp_resources(iwdev, iwqp, qp_num);
++ i40iw_free_qp_resources(iwqp);
+ return ERR_PTR(err_code);
+ }
+
+@@ -1053,7 +1066,7 @@ void i40iw_cq_wq_destroy(struct i40iw_device *iwdev, struct i40iw_sc_cq *cq)
+ * @ib_cq: cq pointer
+ * @udata: user data or NULL for kernel object
+ */
+-static void i40iw_destroy_cq(struct ib_cq *ib_cq, struct ib_udata *udata)
++static int i40iw_destroy_cq(struct ib_cq *ib_cq, struct ib_udata *udata)
+ {
+ struct i40iw_cq *iwcq;
+ struct i40iw_device *iwdev;
+@@ -1065,6 +1078,7 @@ static void i40iw_destroy_cq(struct ib_cq *ib_cq, struct ib_udata *udata)
+ i40iw_cq_wq_destroy(iwdev, cq);
+ cq_free_resources(iwdev, iwcq);
+ i40iw_rem_devusecount(iwdev);
++ return 0;
+ }
+
+ /**
+@@ -2656,13 +2670,13 @@ static const struct ib_device_ops i40iw_dev_ops = {
+ .get_hw_stats = i40iw_get_hw_stats,
+ .get_port_immutable = i40iw_port_immutable,
+ .iw_accept = i40iw_accept,
+- .iw_add_ref = i40iw_add_ref,
++ .iw_add_ref = i40iw_qp_add_ref,
+ .iw_connect = i40iw_connect,
+ .iw_create_listen = i40iw_create_listen,
+ .iw_destroy_listen = i40iw_destroy_listen,
+ .iw_get_qp = i40iw_get_qp,
+ .iw_reject = i40iw_reject,
+- .iw_rem_ref = i40iw_rem_ref,
++ .iw_rem_ref = i40iw_qp_rem_ref,
+ .map_mr_sg = i40iw_map_mr_sg,
+ .mmap = i40iw_mmap,
+ .modify_qp = i40iw_modify_qp,
+diff --git a/drivers/infiniband/hw/i40iw/i40iw_verbs.h b/drivers/infiniband/hw/i40iw/i40iw_verbs.h
+index 331bc21cbcc73..bab71f3e56374 100644
+--- a/drivers/infiniband/hw/i40iw/i40iw_verbs.h
++++ b/drivers/infiniband/hw/i40iw/i40iw_verbs.h
+@@ -139,7 +139,7 @@ struct i40iw_qp {
+ struct i40iw_qp_host_ctx_info ctx_info;
+ struct i40iwarp_offload_info iwarp_info;
+ void *allocated_buffer;
+- atomic_t refcount;
++ refcount_t refcount;
+ struct iw_cm_id *cm_id;
+ void *cm_node;
+ struct ib_mr *lsmm_mr;
+@@ -174,5 +174,6 @@ struct i40iw_qp {
+ struct i40iw_dma_mem ietf_mem;
+ struct completion sq_drained;
+ struct completion rq_drained;
++ struct completion free_qp;
+ };
+ #endif
+diff --git a/drivers/infiniband/hw/mlx4/cm.c b/drivers/infiniband/hw/mlx4/cm.c
+index b591861934b3c..81d6a3460b55d 100644
+--- a/drivers/infiniband/hw/mlx4/cm.c
++++ b/drivers/infiniband/hw/mlx4/cm.c
+@@ -280,6 +280,9 @@ static void schedule_delayed(struct ib_device *ibdev, struct id_map_entry *id)
+ if (!sriov->is_going_down && !id->scheduled_delete) {
+ id->scheduled_delete = 1;
+ schedule_delayed_work(&id->timeout, CM_CLEANUP_CACHE_TIMEOUT);
++ } else if (id->scheduled_delete) {
++ /* Adjust timeout if already scheduled */
++ mod_delayed_work(system_wq, &id->timeout, CM_CLEANUP_CACHE_TIMEOUT);
+ }
+ spin_unlock_irqrestore(&sriov->going_down_lock, flags);
+ spin_unlock(&sriov->id_map_lock);
+diff --git a/drivers/infiniband/hw/mlx4/cq.c b/drivers/infiniband/hw/mlx4/cq.c
+index f8b936b76dcdf..3851316407ceb 100644
+--- a/drivers/infiniband/hw/mlx4/cq.c
++++ b/drivers/infiniband/hw/mlx4/cq.c
+@@ -475,7 +475,7 @@ out:
+ return err;
+ }
+
+-void mlx4_ib_destroy_cq(struct ib_cq *cq, struct ib_udata *udata)
++int mlx4_ib_destroy_cq(struct ib_cq *cq, struct ib_udata *udata)
+ {
+ struct mlx4_ib_dev *dev = to_mdev(cq->device);
+ struct mlx4_ib_cq *mcq = to_mcq(cq);
+@@ -495,6 +495,7 @@ void mlx4_ib_destroy_cq(struct ib_cq *cq, struct ib_udata *udata)
+ mlx4_db_free(dev->dev, &mcq->db);
+ }
+ ib_umem_release(mcq->umem);
++ return 0;
+ }
+
+ static void dump_cqe(void *cqe)
+diff --git a/drivers/infiniband/hw/mlx4/mad.c b/drivers/infiniband/hw/mlx4/mad.c
+index abe68708d6d6e..2cbdba4da9dfe 100644
+--- a/drivers/infiniband/hw/mlx4/mad.c
++++ b/drivers/infiniband/hw/mlx4/mad.c
+@@ -1299,6 +1299,18 @@ static void mlx4_ib_tunnel_comp_handler(struct ib_cq *cq, void *arg)
+ spin_unlock_irqrestore(&dev->sriov.going_down_lock, flags);
+ }
+
++static void mlx4_ib_wire_comp_handler(struct ib_cq *cq, void *arg)
++{
++ unsigned long flags;
++ struct mlx4_ib_demux_pv_ctx *ctx = cq->cq_context;
++ struct mlx4_ib_dev *dev = to_mdev(ctx->ib_dev);
++
++ spin_lock_irqsave(&dev->sriov.going_down_lock, flags);
++ if (!dev->sriov.is_going_down && ctx->state == DEMUX_PV_STATE_ACTIVE)
++ queue_work(ctx->wi_wq, &ctx->work);
++ spin_unlock_irqrestore(&dev->sriov.going_down_lock, flags);
++}
++
+ static int mlx4_ib_post_pv_qp_buf(struct mlx4_ib_demux_pv_ctx *ctx,
+ struct mlx4_ib_demux_pv_qp *tun_qp,
+ int index)
+@@ -2001,7 +2013,8 @@ static int create_pv_resources(struct ib_device *ibdev, int slave, int port,
+ cq_size *= 2;
+
+ cq_attr.cqe = cq_size;
+- ctx->cq = ib_create_cq(ctx->ib_dev, mlx4_ib_tunnel_comp_handler,
++ ctx->cq = ib_create_cq(ctx->ib_dev,
++ create_tun ? mlx4_ib_tunnel_comp_handler : mlx4_ib_wire_comp_handler,
+ NULL, ctx, &cq_attr);
+ if (IS_ERR(ctx->cq)) {
+ ret = PTR_ERR(ctx->cq);
+@@ -2038,6 +2051,7 @@ static int create_pv_resources(struct ib_device *ibdev, int slave, int port,
+ INIT_WORK(&ctx->work, mlx4_ib_sqp_comp_worker);
+
+ ctx->wq = to_mdev(ibdev)->sriov.demux[port - 1].wq;
++ ctx->wi_wq = to_mdev(ibdev)->sriov.demux[port - 1].wi_wq;
+
+ ret = ib_req_notify_cq(ctx->cq, IB_CQ_NEXT_COMP);
+ if (ret) {
+@@ -2181,7 +2195,7 @@ static int mlx4_ib_alloc_demux_ctx(struct mlx4_ib_dev *dev,
+ goto err_mcg;
+ }
+
+- snprintf(name, sizeof name, "mlx4_ibt%d", port);
++ snprintf(name, sizeof(name), "mlx4_ibt%d", port);
+ ctx->wq = alloc_ordered_workqueue(name, WQ_MEM_RECLAIM);
+ if (!ctx->wq) {
+ pr_err("Failed to create tunnelling WQ for port %d\n", port);
+@@ -2189,7 +2203,15 @@ static int mlx4_ib_alloc_demux_ctx(struct mlx4_ib_dev *dev,
+ goto err_wq;
+ }
+
+- snprintf(name, sizeof name, "mlx4_ibud%d", port);
++ snprintf(name, sizeof(name), "mlx4_ibwi%d", port);
++ ctx->wi_wq = alloc_ordered_workqueue(name, WQ_MEM_RECLAIM);
++ if (!ctx->wi_wq) {
++ pr_err("Failed to create wire WQ for port %d\n", port);
++ ret = -ENOMEM;
++ goto err_wiwq;
++ }
++
++ snprintf(name, sizeof(name), "mlx4_ibud%d", port);
+ ctx->ud_wq = alloc_ordered_workqueue(name, WQ_MEM_RECLAIM);
+ if (!ctx->ud_wq) {
+ pr_err("Failed to create up/down WQ for port %d\n", port);
+@@ -2200,6 +2222,10 @@ static int mlx4_ib_alloc_demux_ctx(struct mlx4_ib_dev *dev,
+ return 0;
+
+ err_udwq:
++ destroy_workqueue(ctx->wi_wq);
++ ctx->wi_wq = NULL;
++
++err_wiwq:
+ destroy_workqueue(ctx->wq);
+ ctx->wq = NULL;
+
+@@ -2247,12 +2273,14 @@ static void mlx4_ib_free_demux_ctx(struct mlx4_ib_demux_ctx *ctx)
+ ctx->tun[i]->state = DEMUX_PV_STATE_DOWNING;
+ }
+ flush_workqueue(ctx->wq);
++ flush_workqueue(ctx->wi_wq);
+ for (i = 0; i < dev->dev->caps.sqp_demux; i++) {
+ destroy_pv_resources(dev, i, ctx->port, ctx->tun[i], 0);
+ free_pv_object(dev, i, ctx->port);
+ }
+ kfree(ctx->tun);
+ destroy_workqueue(ctx->ud_wq);
++ destroy_workqueue(ctx->wi_wq);
+ destroy_workqueue(ctx->wq);
+ }
+ }
+diff --git a/drivers/infiniband/hw/mlx4/mlx4_ib.h b/drivers/infiniband/hw/mlx4/mlx4_ib.h
+index 6f4ea1067095e..bac526a703173 100644
+--- a/drivers/infiniband/hw/mlx4/mlx4_ib.h
++++ b/drivers/infiniband/hw/mlx4/mlx4_ib.h
+@@ -454,6 +454,7 @@ struct mlx4_ib_demux_pv_ctx {
+ struct ib_pd *pd;
+ struct work_struct work;
+ struct workqueue_struct *wq;
++ struct workqueue_struct *wi_wq;
+ struct mlx4_ib_demux_pv_qp qp[2];
+ };
+
+@@ -461,6 +462,7 @@ struct mlx4_ib_demux_ctx {
+ struct ib_device *ib_dev;
+ int port;
+ struct workqueue_struct *wq;
++ struct workqueue_struct *wi_wq;
+ struct workqueue_struct *ud_wq;
+ spinlock_t ud_lock;
+ atomic64_t subnet_prefix;
+@@ -736,7 +738,7 @@ int mlx4_ib_modify_cq(struct ib_cq *cq, u16 cq_count, u16 cq_period);
+ int mlx4_ib_resize_cq(struct ib_cq *ibcq, int entries, struct ib_udata *udata);
+ int mlx4_ib_create_cq(struct ib_cq *ibcq, const struct ib_cq_init_attr *attr,
+ struct ib_udata *udata);
+-void mlx4_ib_destroy_cq(struct ib_cq *cq, struct ib_udata *udata);
++int mlx4_ib_destroy_cq(struct ib_cq *cq, struct ib_udata *udata);
+ int mlx4_ib_poll_cq(struct ib_cq *ibcq, int num_entries, struct ib_wc *wc);
+ int mlx4_ib_arm_cq(struct ib_cq *cq, enum ib_cq_notify_flags flags);
+ void __mlx4_ib_cq_clean(struct mlx4_ib_cq *cq, u32 qpn, struct mlx4_ib_srq *srq);
+diff --git a/drivers/infiniband/hw/mlx5/cq.c b/drivers/infiniband/hw/mlx5/cq.c
+index 0c18cb6a2f148..ec634085e1d9a 100644
+--- a/drivers/infiniband/hw/mlx5/cq.c
++++ b/drivers/infiniband/hw/mlx5/cq.c
+@@ -168,7 +168,7 @@ static void handle_responder(struct ib_wc *wc, struct mlx5_cqe64 *cqe,
+ {
+ enum rdma_link_layer ll = rdma_port_get_link_layer(qp->ibqp.device, 1);
+ struct mlx5_ib_dev *dev = to_mdev(qp->ibqp.device);
+- struct mlx5_ib_srq *srq;
++ struct mlx5_ib_srq *srq = NULL;
+ struct mlx5_ib_wq *wq;
+ u16 wqe_ctr;
+ u8 roce_packet_type;
+@@ -180,7 +180,8 @@ static void handle_responder(struct ib_wc *wc, struct mlx5_cqe64 *cqe,
+
+ if (qp->ibqp.xrcd) {
+ msrq = mlx5_cmd_get_srq(dev, be32_to_cpu(cqe->srqn));
+- srq = to_mibsrq(msrq);
++ if (msrq)
++ srq = to_mibsrq(msrq);
+ } else {
+ srq = to_msrq(qp->ibqp.srq);
+ }
+@@ -1023,16 +1024,21 @@ err_cqb:
+ return err;
+ }
+
+-void mlx5_ib_destroy_cq(struct ib_cq *cq, struct ib_udata *udata)
++int mlx5_ib_destroy_cq(struct ib_cq *cq, struct ib_udata *udata)
+ {
+ struct mlx5_ib_dev *dev = to_mdev(cq->device);
+ struct mlx5_ib_cq *mcq = to_mcq(cq);
++ int ret;
++
++ ret = mlx5_core_destroy_cq(dev->mdev, &mcq->mcq);
++ if (ret)
++ return ret;
+
+- mlx5_core_destroy_cq(dev->mdev, &mcq->mcq);
+ if (udata)
+ destroy_cq_user(mcq, udata);
+ else
+ destroy_cq_kernel(dev, mcq);
++ return 0;
+ }
+
+ static int is_equal_rsn(struct mlx5_cqe64 *cqe64, u32 rsn)
+diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
+index 6f99ed03d88e7..1f4aa2647a6f3 100644
+--- a/drivers/infiniband/hw/mlx5/main.c
++++ b/drivers/infiniband/hw/mlx5/main.c
+@@ -867,7 +867,9 @@ static int mlx5_ib_query_device(struct ib_device *ibdev,
+ /* We support 'Gappy' memory registration too */
+ props->device_cap_flags |= IB_DEVICE_SG_GAPS_REG;
+ }
+- props->device_cap_flags |= IB_DEVICE_MEM_MGT_EXTENSIONS;
++ /* IB_WR_REG_MR always requires changing the entity size with UMR */
++ if (!MLX5_CAP_GEN(dev->mdev, umr_modify_entity_size_disabled))
++ props->device_cap_flags |= IB_DEVICE_MEM_MGT_EXTENSIONS;
+ if (MLX5_CAP_GEN(mdev, sho)) {
+ props->device_cap_flags |= IB_DEVICE_INTEGRITY_HANDOVER;
+ /* At this stage no support for signature handover */
+diff --git a/drivers/infiniband/hw/mlx5/mlx5_ib.h b/drivers/infiniband/hw/mlx5/mlx5_ib.h
+index 5dbe3eb0d9cb9..3825cdec6ac68 100644
+--- a/drivers/infiniband/hw/mlx5/mlx5_ib.h
++++ b/drivers/infiniband/hw/mlx5/mlx5_ib.h
+@@ -1180,7 +1180,7 @@ int mlx5_ib_read_wqe_srq(struct mlx5_ib_srq *srq, int wqe_index, void *buffer,
+ size_t buflen, size_t *bc);
+ int mlx5_ib_create_cq(struct ib_cq *ibcq, const struct ib_cq_init_attr *attr,
+ struct ib_udata *udata);
+-void mlx5_ib_destroy_cq(struct ib_cq *cq, struct ib_udata *udata);
++int mlx5_ib_destroy_cq(struct ib_cq *cq, struct ib_udata *udata);
+ int mlx5_ib_poll_cq(struct ib_cq *ibcq, int num_entries, struct ib_wc *wc);
+ int mlx5_ib_arm_cq(struct ib_cq *ibcq, enum ib_cq_notify_flags flags);
+ int mlx5_ib_modify_cq(struct ib_cq *cq, u16 cq_count, u16 cq_period);
+diff --git a/drivers/infiniband/hw/mlx5/mr.c b/drivers/infiniband/hw/mlx5/mr.c
+index 44683073be0c4..85c9a1ffdbb64 100644
+--- a/drivers/infiniband/hw/mlx5/mr.c
++++ b/drivers/infiniband/hw/mlx5/mr.c
+@@ -50,6 +50,29 @@ enum {
+ static void
+ create_mkey_callback(int status, struct mlx5_async_work *context);
+
++static void set_mkc_access_pd_addr_fields(void *mkc, int acc, u64 start_addr,
++ struct ib_pd *pd)
++{
++ struct mlx5_ib_dev *dev = to_mdev(pd->device);
++
++ MLX5_SET(mkc, mkc, a, !!(acc & IB_ACCESS_REMOTE_ATOMIC));
++ MLX5_SET(mkc, mkc, rw, !!(acc & IB_ACCESS_REMOTE_WRITE));
++ MLX5_SET(mkc, mkc, rr, !!(acc & IB_ACCESS_REMOTE_READ));
++ MLX5_SET(mkc, mkc, lw, !!(acc & IB_ACCESS_LOCAL_WRITE));
++ MLX5_SET(mkc, mkc, lr, 1);
++
++ if (MLX5_CAP_GEN(dev->mdev, relaxed_ordering_write))
++ MLX5_SET(mkc, mkc, relaxed_ordering_write,
++ !!(acc & IB_ACCESS_RELAXED_ORDERING));
++ if (MLX5_CAP_GEN(dev->mdev, relaxed_ordering_read))
++ MLX5_SET(mkc, mkc, relaxed_ordering_read,
++ !!(acc & IB_ACCESS_RELAXED_ORDERING));
++
++ MLX5_SET(mkc, mkc, pd, to_mpd(pd)->pdn);
++ MLX5_SET(mkc, mkc, qpn, 0xffffff);
++ MLX5_SET64(mkc, mkc, start_addr, start_addr);
++}
++
+ static void
+ assign_mkey_variant(struct mlx5_ib_dev *dev, struct mlx5_core_mkey *mkey,
+ u32 *in)
+@@ -152,12 +175,12 @@ static struct mlx5_ib_mr *alloc_cache_mr(struct mlx5_cache_ent *ent, void *mkc)
+ mr->cache_ent = ent;
+ mr->dev = ent->dev;
+
++ set_mkc_access_pd_addr_fields(mkc, 0, 0, ent->dev->umrc.pd);
+ MLX5_SET(mkc, mkc, free, 1);
+ MLX5_SET(mkc, mkc, umr_en, 1);
+ MLX5_SET(mkc, mkc, access_mode_1_0, ent->access_mode & 0x3);
+ MLX5_SET(mkc, mkc, access_mode_4_2, (ent->access_mode >> 2) & 0x7);
+
+- MLX5_SET(mkc, mkc, qpn, 0xffffff);
+ MLX5_SET(mkc, mkc, translations_octword_size, ent->xlt);
+ MLX5_SET(mkc, mkc, log_page_size, ent->page);
+ return mr;
+@@ -774,29 +797,6 @@ int mlx5_mr_cache_cleanup(struct mlx5_ib_dev *dev)
+ return 0;
+ }
+
+-static void set_mkc_access_pd_addr_fields(void *mkc, int acc, u64 start_addr,
+- struct ib_pd *pd)
+-{
+- struct mlx5_ib_dev *dev = to_mdev(pd->device);
+-
+- MLX5_SET(mkc, mkc, a, !!(acc & IB_ACCESS_REMOTE_ATOMIC));
+- MLX5_SET(mkc, mkc, rw, !!(acc & IB_ACCESS_REMOTE_WRITE));
+- MLX5_SET(mkc, mkc, rr, !!(acc & IB_ACCESS_REMOTE_READ));
+- MLX5_SET(mkc, mkc, lw, !!(acc & IB_ACCESS_LOCAL_WRITE));
+- MLX5_SET(mkc, mkc, lr, 1);
+-
+- if (MLX5_CAP_GEN(dev->mdev, relaxed_ordering_write))
+- MLX5_SET(mkc, mkc, relaxed_ordering_write,
+- !!(acc & IB_ACCESS_RELAXED_ORDERING));
+- if (MLX5_CAP_GEN(dev->mdev, relaxed_ordering_read))
+- MLX5_SET(mkc, mkc, relaxed_ordering_read,
+- !!(acc & IB_ACCESS_RELAXED_ORDERING));
+-
+- MLX5_SET(mkc, mkc, pd, to_mpd(pd)->pdn);
+- MLX5_SET(mkc, mkc, qpn, 0xffffff);
+- MLX5_SET64(mkc, mkc, start_addr, start_addr);
+-}
+-
+ struct ib_mr *mlx5_ib_get_dma_mr(struct ib_pd *pd, int acc)
+ {
+ struct mlx5_ib_dev *dev = to_mdev(pd->device);
+@@ -1190,29 +1190,17 @@ static struct mlx5_ib_mr *reg_create(struct ib_mr *ibmr, struct ib_pd *pd,
+ MLX5_SET(create_mkey_in, in, pg_access, !!(pg_cap));
+
+ mkc = MLX5_ADDR_OF(create_mkey_in, in, memory_key_mkey_entry);
++ set_mkc_access_pd_addr_fields(mkc, access_flags, virt_addr,
++ populate ? pd : dev->umrc.pd);
+ MLX5_SET(mkc, mkc, free, !populate);
+ MLX5_SET(mkc, mkc, access_mode_1_0, MLX5_MKC_ACCESS_MODE_MTT);
+- if (MLX5_CAP_GEN(dev->mdev, relaxed_ordering_write))
+- MLX5_SET(mkc, mkc, relaxed_ordering_write,
+- !!(access_flags & IB_ACCESS_RELAXED_ORDERING));
+- if (MLX5_CAP_GEN(dev->mdev, relaxed_ordering_read))
+- MLX5_SET(mkc, mkc, relaxed_ordering_read,
+- !!(access_flags & IB_ACCESS_RELAXED_ORDERING));
+- MLX5_SET(mkc, mkc, a, !!(access_flags & IB_ACCESS_REMOTE_ATOMIC));
+- MLX5_SET(mkc, mkc, rw, !!(access_flags & IB_ACCESS_REMOTE_WRITE));
+- MLX5_SET(mkc, mkc, rr, !!(access_flags & IB_ACCESS_REMOTE_READ));
+- MLX5_SET(mkc, mkc, lw, !!(access_flags & IB_ACCESS_LOCAL_WRITE));
+- MLX5_SET(mkc, mkc, lr, 1);
+ MLX5_SET(mkc, mkc, umr_en, 1);
+
+- MLX5_SET64(mkc, mkc, start_addr, virt_addr);
+ MLX5_SET64(mkc, mkc, len, length);
+- MLX5_SET(mkc, mkc, pd, to_mpd(pd)->pdn);
+ MLX5_SET(mkc, mkc, bsf_octword_size, 0);
+ MLX5_SET(mkc, mkc, translations_octword_size,
+ get_octo_len(virt_addr, length, page_shift));
+ MLX5_SET(mkc, mkc, log_page_size, page_shift);
+- MLX5_SET(mkc, mkc, qpn, 0xffffff);
+ if (populate) {
+ MLX5_SET(create_mkey_in, in, translations_octword_actual_size,
+ get_octo_len(virt_addr, length, page_shift));
+diff --git a/drivers/infiniband/hw/mthca/mthca_provider.c b/drivers/infiniband/hw/mthca/mthca_provider.c
+index 9fa2f9164a47b..2ad15adf304e5 100644
+--- a/drivers/infiniband/hw/mthca/mthca_provider.c
++++ b/drivers/infiniband/hw/mthca/mthca_provider.c
+@@ -789,7 +789,7 @@ out:
+ return ret;
+ }
+
+-static void mthca_destroy_cq(struct ib_cq *cq, struct ib_udata *udata)
++static int mthca_destroy_cq(struct ib_cq *cq, struct ib_udata *udata)
+ {
+ if (udata) {
+ struct mthca_ucontext *context =
+@@ -808,6 +808,7 @@ static void mthca_destroy_cq(struct ib_cq *cq, struct ib_udata *udata)
+ to_mcq(cq)->set_ci_db_index);
+ }
+ mthca_free_cq(to_mdev(cq->device), to_mcq(cq));
++ return 0;
+ }
+
+ static inline u32 convert_access(int acc)
+diff --git a/drivers/infiniband/hw/ocrdma/ocrdma_verbs.c b/drivers/infiniband/hw/ocrdma/ocrdma_verbs.c
+index d11c74390a124..927c70d1ffbc3 100644
+--- a/drivers/infiniband/hw/ocrdma/ocrdma_verbs.c
++++ b/drivers/infiniband/hw/ocrdma/ocrdma_verbs.c
+@@ -1056,7 +1056,7 @@ static void ocrdma_flush_cq(struct ocrdma_cq *cq)
+ spin_unlock_irqrestore(&cq->cq_lock, flags);
+ }
+
+-void ocrdma_destroy_cq(struct ib_cq *ibcq, struct ib_udata *udata)
++int ocrdma_destroy_cq(struct ib_cq *ibcq, struct ib_udata *udata)
+ {
+ struct ocrdma_cq *cq = get_ocrdma_cq(ibcq);
+ struct ocrdma_eq *eq = NULL;
+@@ -1081,6 +1081,7 @@ void ocrdma_destroy_cq(struct ib_cq *ibcq, struct ib_udata *udata)
+ ocrdma_get_db_addr(dev, pdid),
+ dev->nic_info.db_page_size);
+ }
++ return 0;
+ }
+
+ static int ocrdma_add_qpn_map(struct ocrdma_dev *dev, struct ocrdma_qp *qp)
+diff --git a/drivers/infiniband/hw/ocrdma/ocrdma_verbs.h b/drivers/infiniband/hw/ocrdma/ocrdma_verbs.h
+index 3a5010881be5b..c46412dff924a 100644
+--- a/drivers/infiniband/hw/ocrdma/ocrdma_verbs.h
++++ b/drivers/infiniband/hw/ocrdma/ocrdma_verbs.h
+@@ -72,7 +72,7 @@ void ocrdma_dealloc_pd(struct ib_pd *pd, struct ib_udata *udata);
+ int ocrdma_create_cq(struct ib_cq *ibcq, const struct ib_cq_init_attr *attr,
+ struct ib_udata *udata);
+ int ocrdma_resize_cq(struct ib_cq *, int cqe, struct ib_udata *);
+-void ocrdma_destroy_cq(struct ib_cq *ibcq, struct ib_udata *udata);
++int ocrdma_destroy_cq(struct ib_cq *ibcq, struct ib_udata *udata);
+
+ struct ib_qp *ocrdma_create_qp(struct ib_pd *,
+ struct ib_qp_init_attr *attrs,
+diff --git a/drivers/infiniband/hw/qedr/main.c b/drivers/infiniband/hw/qedr/main.c
+index ccaedfd53e49e..679766abb436e 100644
+--- a/drivers/infiniband/hw/qedr/main.c
++++ b/drivers/infiniband/hw/qedr/main.c
+@@ -601,7 +601,7 @@ static int qedr_set_device_attr(struct qedr_dev *dev)
+ qed_attr = dev->ops->rdma_query_device(dev->rdma_ctx);
+
+ /* Part 2 - check capabilities */
+- page_size = ~dev->attr.page_size_caps + 1;
++ page_size = ~qed_attr->page_size_caps + 1;
+ if (page_size > PAGE_SIZE) {
+ DP_ERR(dev,
+ "Kernel PAGE_SIZE is %ld which is smaller than minimum page size (%d) required by qedr\n",
+diff --git a/drivers/infiniband/hw/qedr/qedr_iw_cm.c b/drivers/infiniband/hw/qedr/qedr_iw_cm.c
+index 97fc7dd353b04..c7169d2c69e5b 100644
+--- a/drivers/infiniband/hw/qedr/qedr_iw_cm.c
++++ b/drivers/infiniband/hw/qedr/qedr_iw_cm.c
+@@ -736,7 +736,7 @@ int qedr_iw_accept(struct iw_cm_id *cm_id, struct iw_cm_conn_param *conn_param)
+ struct qedr_dev *dev = ep->dev;
+ struct qedr_qp *qp;
+ struct qed_iwarp_accept_in params;
+- int rc = 0;
++ int rc;
+
+ DP_DEBUG(dev, QEDR_MSG_IWARP, "Accept on qpid=%d\n", conn_param->qpn);
+
+@@ -759,8 +759,10 @@ int qedr_iw_accept(struct iw_cm_id *cm_id, struct iw_cm_conn_param *conn_param)
+ params.ord = conn_param->ord;
+
+ if (test_and_set_bit(QEDR_IWARP_CM_WAIT_FOR_CONNECT,
+- &qp->iwarp_cm_flags))
++ &qp->iwarp_cm_flags)) {
++ rc = -EINVAL;
+ goto err; /* QP already destroyed */
++ }
+
+ rc = dev->ops->iwarp_accept(dev->rdma_ctx, ¶ms);
+ if (rc) {
+diff --git a/drivers/infiniband/hw/qedr/verbs.c b/drivers/infiniband/hw/qedr/verbs.c
+index 1a7f1f805be3e..41813e9d771ff 100644
+--- a/drivers/infiniband/hw/qedr/verbs.c
++++ b/drivers/infiniband/hw/qedr/verbs.c
+@@ -998,7 +998,7 @@ int qedr_create_cq(struct ib_cq *ibcq, const struct ib_cq_init_attr *attr,
+ /* Generate doorbell address. */
+ cq->db.data.icid = cq->icid;
+ cq->db_addr = dev->db_addr + db_offset;
+- cq->db.data.params = DB_AGG_CMD_SET <<
++ cq->db.data.params = DB_AGG_CMD_MAX <<
+ RDMA_PWM_VAL32_DATA_AGG_CMD_SHIFT;
+
+ /* point to the very last element, passing it we will toggle */
+@@ -1050,7 +1050,7 @@ int qedr_resize_cq(struct ib_cq *ibcq, int new_cnt, struct ib_udata *udata)
+ #define QEDR_DESTROY_CQ_MAX_ITERATIONS (10)
+ #define QEDR_DESTROY_CQ_ITER_DURATION (10)
+
+-void qedr_destroy_cq(struct ib_cq *ibcq, struct ib_udata *udata)
++int qedr_destroy_cq(struct ib_cq *ibcq, struct ib_udata *udata)
+ {
+ struct qedr_dev *dev = get_qedr_dev(ibcq->device);
+ struct qed_rdma_destroy_cq_out_params oparams;
+@@ -1065,7 +1065,7 @@ void qedr_destroy_cq(struct ib_cq *ibcq, struct ib_udata *udata)
+ /* GSIs CQs are handled by driver, so they don't exist in the FW */
+ if (cq->cq_type == QEDR_CQ_TYPE_GSI) {
+ qedr_db_recovery_del(dev, cq->db_addr, &cq->db.data);
+- return;
++ return 0;
+ }
+
+ iparams.icid = cq->icid;
+@@ -1113,6 +1113,7 @@ void qedr_destroy_cq(struct ib_cq *ibcq, struct ib_udata *udata)
+ * Since the destroy CQ ramrod has also been received on the EQ we can
+ * be certain that there's no event handler in process.
+ */
++ return 0;
+ }
+
+ static inline int get_gid_info_from_table(struct ib_qp *ibqp,
+@@ -2112,6 +2113,28 @@ static int qedr_create_kernel_qp(struct qedr_dev *dev,
+ return rc;
+ }
+
++static int qedr_free_qp_resources(struct qedr_dev *dev, struct qedr_qp *qp,
++ struct ib_udata *udata)
++{
++ struct qedr_ucontext *ctx =
++ rdma_udata_to_drv_context(udata, struct qedr_ucontext,
++ ibucontext);
++ int rc;
++
++ if (qp->qp_type != IB_QPT_GSI) {
++ rc = dev->ops->rdma_destroy_qp(dev->rdma_ctx, qp->qed_qp);
++ if (rc)
++ return rc;
++ }
++
++ if (qp->create_type == QEDR_QP_CREATE_USER)
++ qedr_cleanup_user(dev, ctx, qp);
++ else
++ qedr_cleanup_kernel(dev, qp);
++
++ return 0;
++}
++
+ struct ib_qp *qedr_create_qp(struct ib_pd *ibpd,
+ struct ib_qp_init_attr *attrs,
+ struct ib_udata *udata)
+@@ -2158,19 +2181,21 @@ struct ib_qp *qedr_create_qp(struct ib_pd *ibpd,
+ rc = qedr_create_kernel_qp(dev, qp, ibpd, attrs);
+
+ if (rc)
+- goto err;
++ goto out_free_qp;
+
+ qp->ibqp.qp_num = qp->qp_id;
+
+ if (rdma_protocol_iwarp(&dev->ibdev, 1)) {
+ rc = xa_insert(&dev->qps, qp->qp_id, qp, GFP_KERNEL);
+ if (rc)
+- goto err;
++ goto out_free_qp_resources;
+ }
+
+ return &qp->ibqp;
+
+-err:
++out_free_qp_resources:
++ qedr_free_qp_resources(dev, qp, udata);
++out_free_qp:
+ kfree(qp);
+
+ return ERR_PTR(-EFAULT);
+@@ -2636,7 +2661,7 @@ int qedr_query_qp(struct ib_qp *ibqp,
+ qp_attr->cap.max_recv_wr = qp->rq.max_wr;
+ qp_attr->cap.max_send_sge = qp->sq.max_sges;
+ qp_attr->cap.max_recv_sge = qp->rq.max_sges;
+- qp_attr->cap.max_inline_data = ROCE_REQ_MAX_INLINE_DATA_SIZE;
++ qp_attr->cap.max_inline_data = dev->attr.max_inline;
+ qp_init_attr->cap = qp_attr->cap;
+
+ qp_attr->ah_attr.type = RDMA_AH_ATTR_TYPE_ROCE;
+@@ -2671,28 +2696,6 @@ err:
+ return rc;
+ }
+
+-static int qedr_free_qp_resources(struct qedr_dev *dev, struct qedr_qp *qp,
+- struct ib_udata *udata)
+-{
+- struct qedr_ucontext *ctx =
+- rdma_udata_to_drv_context(udata, struct qedr_ucontext,
+- ibucontext);
+- int rc;
+-
+- if (qp->qp_type != IB_QPT_GSI) {
+- rc = dev->ops->rdma_destroy_qp(dev->rdma_ctx, qp->qed_qp);
+- if (rc)
+- return rc;
+- }
+-
+- if (qp->create_type == QEDR_QP_CREATE_USER)
+- qedr_cleanup_user(dev, ctx, qp);
+- else
+- qedr_cleanup_kernel(dev, qp);
+-
+- return 0;
+-}
+-
+ int qedr_destroy_qp(struct ib_qp *ibqp, struct ib_udata *udata)
+ {
+ struct qedr_qp *qp = get_qedr_qp(ibqp);
+@@ -2752,6 +2755,8 @@ int qedr_destroy_qp(struct ib_qp *ibqp, struct ib_udata *udata)
+
+ if (rdma_protocol_iwarp(&dev->ibdev, 1))
+ qedr_iw_qp_rem_ref(&qp->ibqp);
++ else
++ kfree(qp);
+
+ return 0;
+ }
+diff --git a/drivers/infiniband/hw/qedr/verbs.h b/drivers/infiniband/hw/qedr/verbs.h
+index 5e02387e068d1..e0db3bc1653e2 100644
+--- a/drivers/infiniband/hw/qedr/verbs.h
++++ b/drivers/infiniband/hw/qedr/verbs.h
+@@ -52,7 +52,7 @@ void qedr_dealloc_pd(struct ib_pd *pd, struct ib_udata *udata);
+ int qedr_create_cq(struct ib_cq *ibcq, const struct ib_cq_init_attr *attr,
+ struct ib_udata *udata);
+ int qedr_resize_cq(struct ib_cq *, int cqe, struct ib_udata *);
+-void qedr_destroy_cq(struct ib_cq *ibcq, struct ib_udata *udata);
++int qedr_destroy_cq(struct ib_cq *ibcq, struct ib_udata *udata);
+ int qedr_arm_cq(struct ib_cq *ibcq, enum ib_cq_notify_flags flags);
+ struct ib_qp *qedr_create_qp(struct ib_pd *, struct ib_qp_init_attr *attrs,
+ struct ib_udata *);
+diff --git a/drivers/infiniband/hw/usnic/usnic_ib_verbs.c b/drivers/infiniband/hw/usnic/usnic_ib_verbs.c
+index b8a77ce115908..586ff16be1bb3 100644
+--- a/drivers/infiniband/hw/usnic/usnic_ib_verbs.c
++++ b/drivers/infiniband/hw/usnic/usnic_ib_verbs.c
+@@ -596,9 +596,9 @@ int usnic_ib_create_cq(struct ib_cq *ibcq, const struct ib_cq_init_attr *attr,
+ return 0;
+ }
+
+-void usnic_ib_destroy_cq(struct ib_cq *cq, struct ib_udata *udata)
++int usnic_ib_destroy_cq(struct ib_cq *cq, struct ib_udata *udata)
+ {
+- return;
++ return 0;
+ }
+
+ struct ib_mr *usnic_ib_reg_mr(struct ib_pd *pd, u64 start, u64 length,
+diff --git a/drivers/infiniband/hw/usnic/usnic_ib_verbs.h b/drivers/infiniband/hw/usnic/usnic_ib_verbs.h
+index 2aedf78c13cf2..f13b08c59b9a3 100644
+--- a/drivers/infiniband/hw/usnic/usnic_ib_verbs.h
++++ b/drivers/infiniband/hw/usnic/usnic_ib_verbs.h
+@@ -60,7 +60,7 @@ int usnic_ib_modify_qp(struct ib_qp *ibqp, struct ib_qp_attr *attr,
+ int attr_mask, struct ib_udata *udata);
+ int usnic_ib_create_cq(struct ib_cq *ibcq, const struct ib_cq_init_attr *attr,
+ struct ib_udata *udata);
+-void usnic_ib_destroy_cq(struct ib_cq *cq, struct ib_udata *udata);
++int usnic_ib_destroy_cq(struct ib_cq *cq, struct ib_udata *udata);
+ struct ib_mr *usnic_ib_reg_mr(struct ib_pd *pd, u64 start, u64 length,
+ u64 virt_addr, int access_flags,
+ struct ib_udata *udata);
+diff --git a/drivers/infiniband/hw/vmw_pvrdma/pvrdma_cq.c b/drivers/infiniband/hw/vmw_pvrdma/pvrdma_cq.c
+index 4f6cc0de7ef95..6d3e6389e47da 100644
+--- a/drivers/infiniband/hw/vmw_pvrdma/pvrdma_cq.c
++++ b/drivers/infiniband/hw/vmw_pvrdma/pvrdma_cq.c
+@@ -235,7 +235,7 @@ static void pvrdma_free_cq(struct pvrdma_dev *dev, struct pvrdma_cq *cq)
+ * @cq: the completion queue to destroy.
+ * @udata: user data or null for kernel object
+ */
+-void pvrdma_destroy_cq(struct ib_cq *cq, struct ib_udata *udata)
++int pvrdma_destroy_cq(struct ib_cq *cq, struct ib_udata *udata)
+ {
+ struct pvrdma_cq *vcq = to_vcq(cq);
+ union pvrdma_cmd_req req;
+@@ -261,6 +261,7 @@ void pvrdma_destroy_cq(struct ib_cq *cq, struct ib_udata *udata)
+
+ pvrdma_free_cq(dev, vcq);
+ atomic_dec(&dev->num_cqs);
++ return 0;
+ }
+
+ static inline struct pvrdma_cqe *get_cqe(struct pvrdma_cq *cq, int i)
+diff --git a/drivers/infiniband/hw/vmw_pvrdma/pvrdma_verbs.h b/drivers/infiniband/hw/vmw_pvrdma/pvrdma_verbs.h
+index 267702226f108..af36e9f767eed 100644
+--- a/drivers/infiniband/hw/vmw_pvrdma/pvrdma_verbs.h
++++ b/drivers/infiniband/hw/vmw_pvrdma/pvrdma_verbs.h
+@@ -411,7 +411,7 @@ int pvrdma_map_mr_sg(struct ib_mr *ibmr, struct scatterlist *sg,
+ int sg_nents, unsigned int *sg_offset);
+ int pvrdma_create_cq(struct ib_cq *ibcq, const struct ib_cq_init_attr *attr,
+ struct ib_udata *udata);
+-void pvrdma_destroy_cq(struct ib_cq *cq, struct ib_udata *udata);
++int pvrdma_destroy_cq(struct ib_cq *cq, struct ib_udata *udata);
+ int pvrdma_poll_cq(struct ib_cq *ibcq, int num_entries, struct ib_wc *wc);
+ int pvrdma_req_notify_cq(struct ib_cq *cq, enum ib_cq_notify_flags flags);
+ int pvrdma_create_ah(struct ib_ah *ah, struct rdma_ah_init_attr *init_attr,
+diff --git a/drivers/infiniband/sw/rdmavt/cq.c b/drivers/infiniband/sw/rdmavt/cq.c
+index 04d2e72017fed..19248be140933 100644
+--- a/drivers/infiniband/sw/rdmavt/cq.c
++++ b/drivers/infiniband/sw/rdmavt/cq.c
+@@ -315,7 +315,7 @@ bail_wc:
+ *
+ * Called by ib_destroy_cq() in the generic verbs code.
+ */
+-void rvt_destroy_cq(struct ib_cq *ibcq, struct ib_udata *udata)
++int rvt_destroy_cq(struct ib_cq *ibcq, struct ib_udata *udata)
+ {
+ struct rvt_cq *cq = ibcq_to_rvtcq(ibcq);
+ struct rvt_dev_info *rdi = cq->rdi;
+@@ -328,6 +328,7 @@ void rvt_destroy_cq(struct ib_cq *ibcq, struct ib_udata *udata)
+ kref_put(&cq->ip->ref, rvt_release_mmap_info);
+ else
+ vfree(cq->kqueue);
++ return 0;
+ }
+
+ /**
+diff --git a/drivers/infiniband/sw/rdmavt/cq.h b/drivers/infiniband/sw/rdmavt/cq.h
+index 5e26a2eb19a4c..feb01e7ee0044 100644
+--- a/drivers/infiniband/sw/rdmavt/cq.h
++++ b/drivers/infiniband/sw/rdmavt/cq.h
+@@ -53,7 +53,7 @@
+
+ int rvt_create_cq(struct ib_cq *ibcq, const struct ib_cq_init_attr *attr,
+ struct ib_udata *udata);
+-void rvt_destroy_cq(struct ib_cq *ibcq, struct ib_udata *udata);
++int rvt_destroy_cq(struct ib_cq *ibcq, struct ib_udata *udata);
+ int rvt_req_notify_cq(struct ib_cq *ibcq, enum ib_cq_notify_flags notify_flags);
+ int rvt_resize_cq(struct ib_cq *ibcq, int cqe, struct ib_udata *udata);
+ int rvt_poll_cq(struct ib_cq *ibcq, int num_entries, struct ib_wc *entry);
+diff --git a/drivers/infiniband/sw/rdmavt/vt.c b/drivers/infiniband/sw/rdmavt/vt.c
+index f904bb34477ae..2d534c450f3c8 100644
+--- a/drivers/infiniband/sw/rdmavt/vt.c
++++ b/drivers/infiniband/sw/rdmavt/vt.c
+@@ -95,9 +95,7 @@ struct rvt_dev_info *rvt_alloc_device(size_t size, int nports)
+ if (!rdi)
+ return rdi;
+
+- rdi->ports = kcalloc(nports,
+- sizeof(struct rvt_ibport **),
+- GFP_KERNEL);
++ rdi->ports = kcalloc(nports, sizeof(*rdi->ports), GFP_KERNEL);
+ if (!rdi->ports)
+ ib_dealloc_device(&rdi->ibdev);
+
+diff --git a/drivers/infiniband/sw/rxe/rxe_recv.c b/drivers/infiniband/sw/rxe/rxe_recv.c
+index 46e111c218fd4..9bfb98056fc2a 100644
+--- a/drivers/infiniband/sw/rxe/rxe_recv.c
++++ b/drivers/infiniband/sw/rxe/rxe_recv.c
+@@ -281,6 +281,8 @@ static void rxe_rcv_mcast_pkt(struct rxe_dev *rxe, struct sk_buff *skb)
+ struct rxe_mc_elem *mce;
+ struct rxe_qp *qp;
+ union ib_gid dgid;
++ struct sk_buff *per_qp_skb;
++ struct rxe_pkt_info *per_qp_pkt;
+ int err;
+
+ if (skb->protocol == htons(ETH_P_IP))
+@@ -309,21 +311,29 @@ static void rxe_rcv_mcast_pkt(struct rxe_dev *rxe, struct sk_buff *skb)
+ if (err)
+ continue;
+
+- /* if *not* the last qp in the list
+- * increase the users of the skb then post to the next qp
++ /* for all but the last qp create a new clone of the
++ * skb and pass to the qp.
+ */
+ if (mce->qp_list.next != &mcg->qp_list)
+- skb_get(skb);
++ per_qp_skb = skb_clone(skb, GFP_ATOMIC);
++ else
++ per_qp_skb = skb;
++
++ if (unlikely(!per_qp_skb))
++ continue;
+
+- pkt->qp = qp;
++ per_qp_pkt = SKB_TO_PKT(per_qp_skb);
++ per_qp_pkt->qp = qp;
+ rxe_add_ref(qp);
+- rxe_rcv_pkt(pkt, skb);
++ rxe_rcv_pkt(per_qp_pkt, per_qp_skb);
+ }
+
+ spin_unlock_bh(&mcg->mcg_lock);
+
+ rxe_drop_ref(mcg); /* drop ref from rxe_pool_get_key. */
+
++ return;
++
+ err1:
+ kfree_skb(skb);
+ }
+diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.c b/drivers/infiniband/sw/rxe/rxe_verbs.c
+index 00ba6fb1e6763..452748b3854b5 100644
+--- a/drivers/infiniband/sw/rxe/rxe_verbs.c
++++ b/drivers/infiniband/sw/rxe/rxe_verbs.c
+@@ -816,13 +816,14 @@ static int rxe_create_cq(struct ib_cq *ibcq, const struct ib_cq_init_attr *attr,
+ return rxe_add_to_pool(&rxe->cq_pool, &cq->pelem);
+ }
+
+-static void rxe_destroy_cq(struct ib_cq *ibcq, struct ib_udata *udata)
++static int rxe_destroy_cq(struct ib_cq *ibcq, struct ib_udata *udata)
+ {
+ struct rxe_cq *cq = to_rcq(ibcq);
+
+ rxe_cq_disable(cq);
+
+ rxe_drop_ref(cq);
++ return 0;
+ }
+
+ static int rxe_resize_cq(struct ib_cq *ibcq, int cqe, struct ib_udata *udata)
+diff --git a/drivers/infiniband/sw/siw/siw_verbs.c b/drivers/infiniband/sw/siw/siw_verbs.c
+index 987e2ba05dbc0..7e657f90ca4f4 100644
+--- a/drivers/infiniband/sw/siw/siw_verbs.c
++++ b/drivers/infiniband/sw/siw/siw_verbs.c
+@@ -1064,7 +1064,7 @@ int siw_post_receive(struct ib_qp *base_qp, const struct ib_recv_wr *wr,
+ return rv > 0 ? 0 : rv;
+ }
+
+-void siw_destroy_cq(struct ib_cq *base_cq, struct ib_udata *udata)
++int siw_destroy_cq(struct ib_cq *base_cq, struct ib_udata *udata)
+ {
+ struct siw_cq *cq = to_siw_cq(base_cq);
+ struct siw_device *sdev = to_siw_dev(base_cq->device);
+@@ -1082,6 +1082,7 @@ void siw_destroy_cq(struct ib_cq *base_cq, struct ib_udata *udata)
+ atomic_dec(&sdev->num_cq);
+
+ vfree(cq->queue);
++ return 0;
+ }
+
+ /*
+diff --git a/drivers/infiniband/sw/siw/siw_verbs.h b/drivers/infiniband/sw/siw/siw_verbs.h
+index 1a731989fad60..b0b7488869104 100644
+--- a/drivers/infiniband/sw/siw/siw_verbs.h
++++ b/drivers/infiniband/sw/siw/siw_verbs.h
+@@ -63,7 +63,7 @@ int siw_post_send(struct ib_qp *base_qp, const struct ib_send_wr *wr,
+ const struct ib_send_wr **bad_wr);
+ int siw_post_receive(struct ib_qp *base_qp, const struct ib_recv_wr *wr,
+ const struct ib_recv_wr **bad_wr);
+-void siw_destroy_cq(struct ib_cq *base_cq, struct ib_udata *udata);
++int siw_destroy_cq(struct ib_cq *base_cq, struct ib_udata *udata);
+ int siw_poll_cq(struct ib_cq *base_cq, int num_entries, struct ib_wc *wc);
+ int siw_req_notify_cq(struct ib_cq *base_cq, enum ib_cq_notify_flags flags);
+ struct ib_mr *siw_reg_user_mr(struct ib_pd *base_pd, u64 start, u64 len,
+diff --git a/drivers/infiniband/ulp/ipoib/ipoib_main.c b/drivers/infiniband/ulp/ipoib/ipoib_main.c
+index ef60e8e4ae67b..7c0bb2642d232 100644
+--- a/drivers/infiniband/ulp/ipoib/ipoib_main.c
++++ b/drivers/infiniband/ulp/ipoib/ipoib_main.c
+@@ -2470,6 +2470,8 @@ static struct net_device *ipoib_add_port(const char *format,
+ /* call event handler to ensure pkey in sync */
+ queue_work(ipoib_workqueue, &priv->flush_heavy);
+
++ ndev->rtnl_link_ops = ipoib_get_link_ops();
++
+ result = register_netdev(ndev);
+ if (result) {
+ pr_warn("%s: couldn't register ipoib port %d; error %d\n",
+diff --git a/drivers/infiniband/ulp/ipoib/ipoib_netlink.c b/drivers/infiniband/ulp/ipoib/ipoib_netlink.c
+index 38c984d16996d..d5a90a66b45cf 100644
+--- a/drivers/infiniband/ulp/ipoib/ipoib_netlink.c
++++ b/drivers/infiniband/ulp/ipoib/ipoib_netlink.c
+@@ -144,6 +144,16 @@ static int ipoib_new_child_link(struct net *src_net, struct net_device *dev,
+ return 0;
+ }
+
++static void ipoib_del_child_link(struct net_device *dev, struct list_head *head)
++{
++ struct ipoib_dev_priv *priv = ipoib_priv(dev);
++
++ if (!priv->parent)
++ return;
++
++ unregister_netdevice_queue(dev, head);
++}
++
+ static size_t ipoib_get_size(const struct net_device *dev)
+ {
+ return nla_total_size(2) + /* IFLA_IPOIB_PKEY */
+@@ -158,6 +168,7 @@ static struct rtnl_link_ops ipoib_link_ops __read_mostly = {
+ .priv_size = sizeof(struct ipoib_dev_priv),
+ .setup = ipoib_setup_common,
+ .newlink = ipoib_new_child_link,
++ .dellink = ipoib_del_child_link,
+ .changelink = ipoib_changelink,
+ .get_size = ipoib_get_size,
+ .fill_info = ipoib_fill_info,
+diff --git a/drivers/infiniband/ulp/ipoib/ipoib_vlan.c b/drivers/infiniband/ulp/ipoib/ipoib_vlan.c
+index 30865605e0980..4c50a87ed7cc2 100644
+--- a/drivers/infiniband/ulp/ipoib/ipoib_vlan.c
++++ b/drivers/infiniband/ulp/ipoib/ipoib_vlan.c
+@@ -195,6 +195,8 @@ int ipoib_vlan_add(struct net_device *pdev, unsigned short pkey)
+ }
+ priv = ipoib_priv(ndev);
+
++ ndev->rtnl_link_ops = ipoib_get_link_ops();
++
+ result = __ipoib_vlan_add(ppriv, priv, pkey, IPOIB_LEGACY_CHILD);
+
+ if (result && ndev->reg_state == NETREG_UNINITIALIZED)
+diff --git a/drivers/infiniband/ulp/rtrs/rtrs-srv.c b/drivers/infiniband/ulp/rtrs/rtrs-srv.c
+index 28f6414dfa3dc..d6f93601712e4 100644
+--- a/drivers/infiniband/ulp/rtrs/rtrs-srv.c
++++ b/drivers/infiniband/ulp/rtrs/rtrs-srv.c
+@@ -16,6 +16,7 @@
+ #include "rtrs-srv.h"
+ #include "rtrs-log.h"
+ #include <rdma/ib_cm.h>
++#include <rdma/ib_verbs.h>
+
+ MODULE_DESCRIPTION("RDMA Transport Server");
+ MODULE_LICENSE("GPL");
+@@ -31,6 +32,7 @@ MODULE_LICENSE("GPL");
+ static struct rtrs_rdma_dev_pd dev_pd;
+ static mempool_t *chunk_pool;
+ struct class *rtrs_dev_class;
++static struct rtrs_srv_ib_ctx ib_ctx;
+
+ static int __read_mostly max_chunk_size = DEFAULT_MAX_CHUNK_SIZE;
+ static int __read_mostly sess_queue_depth = DEFAULT_SESS_QUEUE_DEPTH;
+@@ -2042,6 +2044,70 @@ static void free_srv_ctx(struct rtrs_srv_ctx *ctx)
+ kfree(ctx);
+ }
+
++static int rtrs_srv_add_one(struct ib_device *device)
++{
++ struct rtrs_srv_ctx *ctx;
++ int ret = 0;
++
++ mutex_lock(&ib_ctx.ib_dev_mutex);
++ if (ib_ctx.ib_dev_count)
++ goto out;
++
++ /*
++ * Since our CM IDs are NOT bound to any ib device we will create them
++ * only once
++ */
++ ctx = ib_ctx.srv_ctx;
++ ret = rtrs_srv_rdma_init(ctx, ib_ctx.port);
++ if (ret) {
++ /*
++ * We errored out here.
++ * According to the ib code, if we encounter an error here then the
++ * error code is ignored, and no more calls to our ops are made.
++ */
++ pr_err("Failed to initialize RDMA connection");
++ goto err_out;
++ }
++
++out:
++ /*
++ * Keep a track on the number of ib devices added
++ */
++ ib_ctx.ib_dev_count++;
++
++err_out:
++ mutex_unlock(&ib_ctx.ib_dev_mutex);
++ return ret;
++}
++
++static void rtrs_srv_remove_one(struct ib_device *device, void *client_data)
++{
++ struct rtrs_srv_ctx *ctx;
++
++ mutex_lock(&ib_ctx.ib_dev_mutex);
++ ib_ctx.ib_dev_count--;
++
++ if (ib_ctx.ib_dev_count)
++ goto out;
++
++ /*
++ * Since our CM IDs are NOT bound to any ib device we will remove them
++ * only once, when the last device is removed
++ */
++ ctx = ib_ctx.srv_ctx;
++ rdma_destroy_id(ctx->cm_id_ip);
++ rdma_destroy_id(ctx->cm_id_ib);
++
++out:
++ mutex_unlock(&ib_ctx.ib_dev_mutex);
++}
++
++static struct ib_client rtrs_srv_client = {
++ .name = "rtrs_server",
++ .add = rtrs_srv_add_one,
++ .remove = rtrs_srv_remove_one
++};
++
+ /**
+ * rtrs_srv_open() - open RTRS server context
+ * @ops: callback functions
+@@ -2060,7 +2126,11 @@ struct rtrs_srv_ctx *rtrs_srv_open(struct rtrs_srv_ops *ops, u16 port)
+ if (!ctx)
+ return ERR_PTR(-ENOMEM);
+
+- err = rtrs_srv_rdma_init(ctx, port);
++ mutex_init(&ib_ctx.ib_dev_mutex);
++ ib_ctx.srv_ctx = ctx;
++ ib_ctx.port = port;
++
++ err = ib_register_client(&rtrs_srv_client);
+ if (err) {
+ free_srv_ctx(ctx);
+ return ERR_PTR(err);
+@@ -2099,8 +2169,8 @@ static void close_ctx(struct rtrs_srv_ctx *ctx)
+ */
+ void rtrs_srv_close(struct rtrs_srv_ctx *ctx)
+ {
+- rdma_destroy_id(ctx->cm_id_ip);
+- rdma_destroy_id(ctx->cm_id_ib);
++ ib_unregister_client(&rtrs_srv_client);
++ mutex_destroy(&ib_ctx.ib_dev_mutex);
+ close_ctx(ctx);
+ free_srv_ctx(ctx);
+ }
+diff --git a/drivers/infiniband/ulp/rtrs/rtrs-srv.h b/drivers/infiniband/ulp/rtrs/rtrs-srv.h
+index dc95b0932f0df..08b0b8a6eebe6 100644
+--- a/drivers/infiniband/ulp/rtrs/rtrs-srv.h
++++ b/drivers/infiniband/ulp/rtrs/rtrs-srv.h
+@@ -118,6 +118,13 @@ struct rtrs_srv_ctx {
+ struct list_head srv_list;
+ };
+
++struct rtrs_srv_ib_ctx {
++ struct rtrs_srv_ctx *srv_ctx;
++ u16 port;
++ struct mutex ib_dev_mutex;
++ int ib_dev_count;
++};
++
+ extern struct class *rtrs_dev_class;
+
+ void close_sess(struct rtrs_srv_sess *sess);
+diff --git a/drivers/input/keyboard/ep93xx_keypad.c b/drivers/input/keyboard/ep93xx_keypad.c
+index 7c70492d9d6b5..f831f01501d58 100644
+--- a/drivers/input/keyboard/ep93xx_keypad.c
++++ b/drivers/input/keyboard/ep93xx_keypad.c
+@@ -250,8 +250,8 @@ static int ep93xx_keypad_probe(struct platform_device *pdev)
+ }
+
+ keypad->irq = platform_get_irq(pdev, 0);
+- if (!keypad->irq) {
+- err = -ENXIO;
++ if (keypad->irq < 0) {
++ err = keypad->irq;
+ goto failed_free;
+ }
+
+diff --git a/drivers/input/keyboard/omap4-keypad.c b/drivers/input/keyboard/omap4-keypad.c
+index 94c94d7f5155f..d6c924032aaa8 100644
+--- a/drivers/input/keyboard/omap4-keypad.c
++++ b/drivers/input/keyboard/omap4-keypad.c
+@@ -240,10 +240,8 @@ static int omap4_keypad_probe(struct platform_device *pdev)
+ }
+
+ irq = platform_get_irq(pdev, 0);
+- if (!irq) {
+- dev_err(&pdev->dev, "no keyboard irq assigned\n");
+- return -EINVAL;
+- }
++ if (irq < 0)
++ return irq;
+
+ keypad_data = kzalloc(sizeof(struct omap4_keypad), GFP_KERNEL);
+ if (!keypad_data) {
+diff --git a/drivers/input/keyboard/twl4030_keypad.c b/drivers/input/keyboard/twl4030_keypad.c
+index af3a6824f1a4d..77e0743a3cf85 100644
+--- a/drivers/input/keyboard/twl4030_keypad.c
++++ b/drivers/input/keyboard/twl4030_keypad.c
+@@ -50,7 +50,7 @@ struct twl4030_keypad {
+ bool autorepeat;
+ unsigned int n_rows;
+ unsigned int n_cols;
+- unsigned int irq;
++ int irq;
+
+ struct device *dbg_dev;
+ struct input_dev *input;
+@@ -376,10 +376,8 @@ static int twl4030_kp_probe(struct platform_device *pdev)
+ }
+
+ kp->irq = platform_get_irq(pdev, 0);
+- if (!kp->irq) {
+- dev_err(&pdev->dev, "no keyboard irq assigned\n");
+- return -EINVAL;
+- }
++ if (kp->irq < 0)
++ return kp->irq;
+
+ error = matrix_keypad_build_keymap(keymap_data, NULL,
+ TWL4030_MAX_ROWS,
+diff --git a/drivers/input/serio/sun4i-ps2.c b/drivers/input/serio/sun4i-ps2.c
+index a681a2c04e399..f15ed3dcdb9b2 100644
+--- a/drivers/input/serio/sun4i-ps2.c
++++ b/drivers/input/serio/sun4i-ps2.c
+@@ -211,7 +211,6 @@ static int sun4i_ps2_probe(struct platform_device *pdev)
+ struct sun4i_ps2data *drvdata;
+ struct serio *serio;
+ struct device *dev = &pdev->dev;
+- unsigned int irq;
+ int error;
+
+ drvdata = kzalloc(sizeof(struct sun4i_ps2data), GFP_KERNEL);
+@@ -264,14 +263,12 @@ static int sun4i_ps2_probe(struct platform_device *pdev)
+ writel(0, drvdata->reg_base + PS2_REG_GCTL);
+
+ /* Get IRQ for the device */
+- irq = platform_get_irq(pdev, 0);
+- if (!irq) {
+- dev_err(dev, "no IRQ found\n");
+- error = -ENXIO;
++ drvdata->irq = platform_get_irq(pdev, 0);
++ if (drvdata->irq < 0) {
++ error = drvdata->irq;
+ goto err_disable_clk;
+ }
+
+- drvdata->irq = irq;
+ drvdata->serio = serio;
+ drvdata->dev = dev;
+
+diff --git a/drivers/input/touchscreen/elants_i2c.c b/drivers/input/touchscreen/elants_i2c.c
+index 5477a5718202a..db7f27d4734a9 100644
+--- a/drivers/input/touchscreen/elants_i2c.c
++++ b/drivers/input/touchscreen/elants_i2c.c
+@@ -90,7 +90,7 @@
+ /* FW read command, 0x53 0x?? 0x0, 0x01 */
+ #define E_ELAN_INFO_FW_VER 0x00
+ #define E_ELAN_INFO_BC_VER 0x10
+-#define E_ELAN_INFO_REK 0xE0
++#define E_ELAN_INFO_REK 0xD0
+ #define E_ELAN_INFO_TEST_VER 0xE0
+ #define E_ELAN_INFO_FW_ID 0xF0
+ #define E_INFO_OSR 0xD6
+diff --git a/drivers/input/touchscreen/imx6ul_tsc.c b/drivers/input/touchscreen/imx6ul_tsc.c
+index 9ed258854349b..5e6ba5c4eca2a 100644
+--- a/drivers/input/touchscreen/imx6ul_tsc.c
++++ b/drivers/input/touchscreen/imx6ul_tsc.c
+@@ -530,20 +530,25 @@ static int __maybe_unused imx6ul_tsc_resume(struct device *dev)
+
+ mutex_lock(&input_dev->mutex);
+
+- if (input_dev->users) {
+- retval = clk_prepare_enable(tsc->adc_clk);
+- if (retval)
+- goto out;
+-
+- retval = clk_prepare_enable(tsc->tsc_clk);
+- if (retval) {
+- clk_disable_unprepare(tsc->adc_clk);
+- goto out;
+- }
++ if (!input_dev->users)
++ goto out;
+
+- retval = imx6ul_tsc_init(tsc);
++ retval = clk_prepare_enable(tsc->adc_clk);
++ if (retval)
++ goto out;
++
++ retval = clk_prepare_enable(tsc->tsc_clk);
++ if (retval) {
++ clk_disable_unprepare(tsc->adc_clk);
++ goto out;
+ }
+
++ retval = imx6ul_tsc_init(tsc);
++ if (retval) {
++ clk_disable_unprepare(tsc->tsc_clk);
++ clk_disable_unprepare(tsc->adc_clk);
++ goto out;
++ }
+ out:
+ mutex_unlock(&input_dev->mutex);
+ return retval;
+diff --git a/drivers/input/touchscreen/stmfts.c b/drivers/input/touchscreen/stmfts.c
+index b54cc64e4ea64..389356332c54a 100644
+--- a/drivers/input/touchscreen/stmfts.c
++++ b/drivers/input/touchscreen/stmfts.c
+@@ -479,7 +479,7 @@ static ssize_t stmfts_sysfs_hover_enable_write(struct device *dev,
+
+ mutex_lock(&sdata->mutex);
+
+- if (value & sdata->hover_enabled)
++ if (value && sdata->hover_enabled)
+ goto out;
+
+ if (sdata->running)
+diff --git a/drivers/iommu/qcom_iommu.c b/drivers/iommu/qcom_iommu.c
+index d176df569af8f..78d813bd0dcc8 100644
+--- a/drivers/iommu/qcom_iommu.c
++++ b/drivers/iommu/qcom_iommu.c
+@@ -578,8 +578,10 @@ static int qcom_iommu_of_xlate(struct device *dev, struct of_phandle_args *args)
+ * index into qcom_iommu->ctxs:
+ */
+ if (WARN_ON(asid < 1) ||
+- WARN_ON(asid > qcom_iommu->num_ctxs))
++ WARN_ON(asid > qcom_iommu->num_ctxs)) {
++ put_device(&iommu_pdev->dev);
+ return -EINVAL;
++ }
+
+ if (!dev_iommu_priv_get(dev)) {
+ dev_iommu_priv_set(dev, qcom_iommu);
+@@ -588,8 +590,10 @@ static int qcom_iommu_of_xlate(struct device *dev, struct of_phandle_args *args)
+ * multiple different iommu devices. Multiple context
+ * banks are ok, but multiple devices are not:
+ */
+- if (WARN_ON(qcom_iommu != dev_iommu_priv_get(dev)))
++ if (WARN_ON(qcom_iommu != dev_iommu_priv_get(dev))) {
++ put_device(&iommu_pdev->dev);
+ return -EINVAL;
++ }
+ }
+
+ return iommu_fwspec_add_ids(dev, &asid, 1);
+diff --git a/drivers/lightnvm/core.c b/drivers/lightnvm/core.c
+index db38a68abb6c0..a6f4ca438bca1 100644
+--- a/drivers/lightnvm/core.c
++++ b/drivers/lightnvm/core.c
+@@ -1315,8 +1315,9 @@ static long nvm_ioctl_get_devices(struct file *file, void __user *arg)
+ strlcpy(info->bmname, "gennvm", sizeof(info->bmname));
+ i++;
+
+- if (i > 31) {
+- pr_err("max 31 devices can be reported.\n");
++ if (i >= ARRAY_SIZE(devices->info)) {
++ pr_err("max %zd devices can be reported.\n",
++ ARRAY_SIZE(devices->info));
+ break;
+ }
+ }
+diff --git a/drivers/mailbox/mailbox.c b/drivers/mailbox/mailbox.c
+index 0b821a5b2db84..3e7d4b20ab34f 100644
+--- a/drivers/mailbox/mailbox.c
++++ b/drivers/mailbox/mailbox.c
+@@ -82,9 +82,12 @@ static void msg_submit(struct mbox_chan *chan)
+ exit:
+ spin_unlock_irqrestore(&chan->lock, flags);
+
+- if (!err && (chan->txdone_method & TXDONE_BY_POLL))
+- /* kick start the timer immediately to avoid delays */
+- hrtimer_start(&chan->mbox->poll_hrt, 0, HRTIMER_MODE_REL);
++ /* kick start the timer immediately to avoid delays */
++ if (!err && (chan->txdone_method & TXDONE_BY_POLL)) {
++ /* but only if not already active */
++ if (!hrtimer_active(&chan->mbox->poll_hrt))
++ hrtimer_start(&chan->mbox->poll_hrt, 0, HRTIMER_MODE_REL);
++ }
+ }
+
+ static void tx_tick(struct mbox_chan *chan, int r)
+@@ -122,11 +125,10 @@ static enum hrtimer_restart txdone_hrtimer(struct hrtimer *hrtimer)
+ struct mbox_chan *chan = &mbox->chans[i];
+
+ if (chan->active_req && chan->cl) {
++ resched = true;
+ txdone = chan->mbox->ops->last_tx_done(chan);
+ if (txdone)
+ tx_tick(chan, 0);
+- else
+- resched = true;
+ }
+ }
+
+diff --git a/drivers/mailbox/mtk-cmdq-mailbox.c b/drivers/mailbox/mtk-cmdq-mailbox.c
+index b24822ad8409c..9963bb9cd74fa 100644
+--- a/drivers/mailbox/mtk-cmdq-mailbox.c
++++ b/drivers/mailbox/mtk-cmdq-mailbox.c
+@@ -69,7 +69,7 @@ struct cmdq_task {
+ struct cmdq {
+ struct mbox_controller mbox;
+ void __iomem *base;
+- u32 irq;
++ int irq;
+ u32 thread_nr;
+ u32 irq_mask;
+ struct cmdq_thread *thread;
+@@ -466,10 +466,8 @@ static int cmdq_probe(struct platform_device *pdev)
+ }
+
+ cmdq->irq = platform_get_irq(pdev, 0);
+- if (!cmdq->irq) {
+- dev_err(dev, "failed to get irq\n");
+- return -EINVAL;
+- }
++ if (cmdq->irq < 0)
++ return cmdq->irq;
+
+ cmdq->thread_nr = (u32)(unsigned long)of_device_get_match_data(dev);
+ cmdq->irq_mask = GENMASK(cmdq->thread_nr - 1, 0);
+diff --git a/drivers/md/md-bitmap.c b/drivers/md/md-bitmap.c
+index 95a5f3757fa30..19b2601be3c5e 100644
+--- a/drivers/md/md-bitmap.c
++++ b/drivers/md/md-bitmap.c
+@@ -1949,6 +1949,7 @@ out:
+ }
+ EXPORT_SYMBOL_GPL(md_bitmap_load);
+
++/* caller need to free returned bitmap with md_bitmap_free() */
+ struct bitmap *get_bitmap_from_slot(struct mddev *mddev, int slot)
+ {
+ int rv = 0;
+@@ -2012,6 +2013,7 @@ int md_bitmap_copy_from_slot(struct mddev *mddev, int slot,
+ md_bitmap_unplug(mddev->bitmap);
+ *low = lo;
+ *high = hi;
++ md_bitmap_free(bitmap);
+
+ return rv;
+ }
+@@ -2615,4 +2617,3 @@ struct attribute_group md_bitmap_group = {
+ .name = "bitmap",
+ .attrs = md_bitmap_attrs,
+ };
+-
+diff --git a/drivers/md/md-cluster.c b/drivers/md/md-cluster.c
+index d50737ec40394..afbbc552c3275 100644
+--- a/drivers/md/md-cluster.c
++++ b/drivers/md/md-cluster.c
+@@ -1166,6 +1166,7 @@ static int resize_bitmaps(struct mddev *mddev, sector_t newsize, sector_t oldsiz
+ * can't resize bitmap
+ */
+ goto out;
++ md_bitmap_free(bitmap);
+ }
+
+ return 0;
+diff --git a/drivers/media/firewire/firedtv-fw.c b/drivers/media/firewire/firedtv-fw.c
+index 3f1ca40b9b987..8a8585261bb80 100644
+--- a/drivers/media/firewire/firedtv-fw.c
++++ b/drivers/media/firewire/firedtv-fw.c
+@@ -272,8 +272,10 @@ static int node_probe(struct fw_unit *unit, const struct ieee1394_device_id *id)
+
+ name_len = fw_csr_string(unit->directory, CSR_MODEL,
+ name, sizeof(name));
+- if (name_len < 0)
+- return name_len;
++ if (name_len < 0) {
++ err = name_len;
++ goto fail_free;
++ }
+ for (i = ARRAY_SIZE(model_names); --i; )
+ if (strlen(model_names[i]) <= name_len &&
+ strncmp(name, model_names[i], name_len) == 0)
+diff --git a/drivers/media/i2c/m5mols/m5mols_core.c b/drivers/media/i2c/m5mols/m5mols_core.c
+index de295114ca482..21666d705e372 100644
+--- a/drivers/media/i2c/m5mols/m5mols_core.c
++++ b/drivers/media/i2c/m5mols/m5mols_core.c
+@@ -764,7 +764,8 @@ static int m5mols_sensor_power(struct m5mols_info *info, bool enable)
+
+ ret = regulator_bulk_enable(ARRAY_SIZE(supplies), supplies);
+ if (ret) {
+- info->set_power(&client->dev, 0);
++ if (info->set_power)
++ info->set_power(&client->dev, 0);
+ return ret;
+ }
+
+diff --git a/drivers/media/i2c/ov5640.c b/drivers/media/i2c/ov5640.c
+index 2fe4a7ac05929..3a4268aa5f023 100644
+--- a/drivers/media/i2c/ov5640.c
++++ b/drivers/media/i2c/ov5640.c
+@@ -34,6 +34,8 @@
+ #define OV5640_REG_SYS_RESET02 0x3002
+ #define OV5640_REG_SYS_CLOCK_ENABLE02 0x3006
+ #define OV5640_REG_SYS_CTRL0 0x3008
++#define OV5640_REG_SYS_CTRL0_SW_PWDN 0x42
++#define OV5640_REG_SYS_CTRL0_SW_PWUP 0x02
+ #define OV5640_REG_CHIP_ID 0x300a
+ #define OV5640_REG_IO_MIPI_CTRL00 0x300e
+ #define OV5640_REG_PAD_OUTPUT_ENABLE01 0x3017
+@@ -274,8 +276,7 @@ static inline struct v4l2_subdev *ctrl_to_sd(struct v4l2_ctrl *ctrl)
+ /* YUV422 UYVY VGA@30fps */
+ static const struct reg_value ov5640_init_setting_30fps_VGA[] = {
+ {0x3103, 0x11, 0, 0}, {0x3008, 0x82, 0, 5}, {0x3008, 0x42, 0, 0},
+- {0x3103, 0x03, 0, 0}, {0x3017, 0x00, 0, 0}, {0x3018, 0x00, 0, 0},
+- {0x3630, 0x36, 0, 0},
++ {0x3103, 0x03, 0, 0}, {0x3630, 0x36, 0, 0},
+ {0x3631, 0x0e, 0, 0}, {0x3632, 0xe2, 0, 0}, {0x3633, 0x12, 0, 0},
+ {0x3621, 0xe0, 0, 0}, {0x3704, 0xa0, 0, 0}, {0x3703, 0x5a, 0, 0},
+ {0x3715, 0x78, 0, 0}, {0x3717, 0x01, 0, 0}, {0x370b, 0x60, 0, 0},
+@@ -751,7 +752,7 @@ static int ov5640_mod_reg(struct ov5640_dev *sensor, u16 reg,
+ * +->| PLL Root Div | - reg 0x3037, bit 4
+ * +-+------------+
+ * | +---------+
+- * +->| Bit Div | - reg 0x3035, bits 0-3
++ * +->| Bit Div | - reg 0x3034, bits 0-3
+ * +-+-------+
+ * | +-------------+
+ * +->| SCLK Div | - reg 0x3108, bits 0-1
+@@ -1120,6 +1121,12 @@ static int ov5640_load_regs(struct ov5640_dev *sensor,
+ val = regs->val;
+ mask = regs->mask;
+
++ /* remain in power down mode for DVP */
++ if (regs->reg_addr == OV5640_REG_SYS_CTRL0 &&
++ val == OV5640_REG_SYS_CTRL0_SW_PWUP &&
++ sensor->ep.bus_type != V4L2_MBUS_CSI2_DPHY)
++ continue;
++
+ if (mask)
+ ret = ov5640_mod_reg(sensor, reg_addr, mask, val);
+ else
+@@ -1275,31 +1282,9 @@ static int ov5640_set_stream_dvp(struct ov5640_dev *sensor, bool on)
+ if (ret)
+ return ret;
+
+- /*
+- * enable VSYNC/HREF/PCLK DVP control lines
+- * & D[9:6] DVP data lines
+- *
+- * PAD OUTPUT ENABLE 01
+- * - 6: VSYNC output enable
+- * - 5: HREF output enable
+- * - 4: PCLK output enable
+- * - [3:0]: D[9:6] output enable
+- */
+- ret = ov5640_write_reg(sensor,
+- OV5640_REG_PAD_OUTPUT_ENABLE01,
+- on ? 0x7f : 0);
+- if (ret)
+- return ret;
+-
+- /*
+- * enable D[5:0] DVP data lines
+- *
+- * PAD OUTPUT ENABLE 02
+- * - [7:2]: D[5:0] output enable
+- */
+- return ov5640_write_reg(sensor,
+- OV5640_REG_PAD_OUTPUT_ENABLE02,
+- on ? 0xfc : 0);
++ return ov5640_write_reg(sensor, OV5640_REG_SYS_CTRL0, on ?
++ OV5640_REG_SYS_CTRL0_SW_PWUP :
++ OV5640_REG_SYS_CTRL0_SW_PWDN);
+ }
+
+ static int ov5640_set_stream_mipi(struct ov5640_dev *sensor, bool on)
+@@ -2001,6 +1986,95 @@ static void ov5640_set_power_off(struct ov5640_dev *sensor)
+ clk_disable_unprepare(sensor->xclk);
+ }
+
++static int ov5640_set_power_mipi(struct ov5640_dev *sensor, bool on)
++{
++ int ret;
++
++ if (!on) {
++ /* Reset MIPI bus settings to their default values. */
++ ov5640_write_reg(sensor, OV5640_REG_IO_MIPI_CTRL00, 0x58);
++ ov5640_write_reg(sensor, OV5640_REG_MIPI_CTRL00, 0x04);
++ ov5640_write_reg(sensor, OV5640_REG_PAD_OUTPUT00, 0x00);
++ return 0;
++ }
++
++ /*
++ * Power up MIPI HS Tx and LS Rx; 2 data lanes mode
++ *
++ * 0x300e = 0x40
++ * [7:5] = 010 : 2 data lanes mode (see FIXME note in
++ * "ov5640_set_stream_mipi()")
++ * [4] = 0 : Power up MIPI HS Tx
++ * [3] = 0 : Power up MIPI LS Rx
++ * [2] = 0 : MIPI interface disabled
++ */
++ ret = ov5640_write_reg(sensor, OV5640_REG_IO_MIPI_CTRL00, 0x40);
++ if (ret)
++ return ret;
++
++ /*
++ * Gate clock and set LP11 in 'no packets mode' (idle)
++ *
++ * 0x4800 = 0x24
++ * [5] = 1 : Gate clock when 'no packets'
++ * [2] = 1 : MIPI bus in LP11 when 'no packets'
++ */
++ ret = ov5640_write_reg(sensor, OV5640_REG_MIPI_CTRL00, 0x24);
++ if (ret)
++ return ret;
++
++ /*
++ * Set data lanes and clock in LP11 when 'sleeping'
++ *
++ * 0x3019 = 0x70
++ * [6] = 1 : MIPI data lane 2 in LP11 when 'sleeping'
++ * [5] = 1 : MIPI data lane 1 in LP11 when 'sleeping'
++ * [4] = 1 : MIPI clock lane in LP11 when 'sleeping'
++ */
++ ret = ov5640_write_reg(sensor, OV5640_REG_PAD_OUTPUT00, 0x70);
++ if (ret)
++ return ret;
++
++ /* Give lanes some time to coax into LP11 state. */
++ usleep_range(500, 1000);
++
++ return 0;
++}
++
++static int ov5640_set_power_dvp(struct ov5640_dev *sensor, bool on)
++{
++ int ret;
++
++ if (!on) {
++ /* Reset settings to their default values. */
++ ov5640_write_reg(sensor, OV5640_REG_PAD_OUTPUT_ENABLE01, 0x00);
++ ov5640_write_reg(sensor, OV5640_REG_PAD_OUTPUT_ENABLE02, 0x00);
++ return 0;
++ }
++
++ /*
++ * enable VSYNC/HREF/PCLK DVP control lines
++ * & D[9:6] DVP data lines
++ *
++ * PAD OUTPUT ENABLE 01
++ * - 6: VSYNC output enable
++ * - 5: HREF output enable
++ * - 4: PCLK output enable
++ * - [3:0]: D[9:6] output enable
++ */
++ ret = ov5640_write_reg(sensor, OV5640_REG_PAD_OUTPUT_ENABLE01, 0x7f);
++ if (ret)
++ return ret;
++
++ /*
++ * enable D[5:0] DVP data lines
++ *
++ * PAD OUTPUT ENABLE 02
++ * - [7:2]: D[5:0] output enable
++ */
++ return ov5640_write_reg(sensor, OV5640_REG_PAD_OUTPUT_ENABLE02, 0xfc);
++}
++
+ static int ov5640_set_power(struct ov5640_dev *sensor, bool on)
+ {
+ int ret = 0;
+@@ -2013,67 +2087,17 @@ static int ov5640_set_power(struct ov5640_dev *sensor, bool on)
+ ret = ov5640_restore_mode(sensor);
+ if (ret)
+ goto power_off;
++ }
+
+- /* We're done here for DVP bus, while CSI-2 needs setup. */
+- if (sensor->ep.bus_type != V4L2_MBUS_CSI2_DPHY)
+- return 0;
+-
+- /*
+- * Power up MIPI HS Tx and LS Rx; 2 data lanes mode
+- *
+- * 0x300e = 0x40
+- * [7:5] = 010 : 2 data lanes mode (see FIXME note in
+- * "ov5640_set_stream_mipi()")
+- * [4] = 0 : Power up MIPI HS Tx
+- * [3] = 0 : Power up MIPI LS Rx
+- * [2] = 0 : MIPI interface disabled
+- */
+- ret = ov5640_write_reg(sensor,
+- OV5640_REG_IO_MIPI_CTRL00, 0x40);
+- if (ret)
+- goto power_off;
+-
+- /*
+- * Gate clock and set LP11 in 'no packets mode' (idle)
+- *
+- * 0x4800 = 0x24
+- * [5] = 1 : Gate clock when 'no packets'
+- * [2] = 1 : MIPI bus in LP11 when 'no packets'
+- */
+- ret = ov5640_write_reg(sensor,
+- OV5640_REG_MIPI_CTRL00, 0x24);
+- if (ret)
+- goto power_off;
+-
+- /*
+- * Set data lanes and clock in LP11 when 'sleeping'
+- *
+- * 0x3019 = 0x70
+- * [6] = 1 : MIPI data lane 2 in LP11 when 'sleeping'
+- * [5] = 1 : MIPI data lane 1 in LP11 when 'sleeping'
+- * [4] = 1 : MIPI clock lane in LP11 when 'sleeping'
+- */
+- ret = ov5640_write_reg(sensor,
+- OV5640_REG_PAD_OUTPUT00, 0x70);
+- if (ret)
+- goto power_off;
+-
+- /* Give lanes some time to coax into LP11 state. */
+- usleep_range(500, 1000);
+-
+- } else {
+- if (sensor->ep.bus_type == V4L2_MBUS_CSI2_DPHY) {
+- /* Reset MIPI bus settings to their default values. */
+- ov5640_write_reg(sensor,
+- OV5640_REG_IO_MIPI_CTRL00, 0x58);
+- ov5640_write_reg(sensor,
+- OV5640_REG_MIPI_CTRL00, 0x04);
+- ov5640_write_reg(sensor,
+- OV5640_REG_PAD_OUTPUT00, 0x00);
+- }
++ if (sensor->ep.bus_type == V4L2_MBUS_CSI2_DPHY)
++ ret = ov5640_set_power_mipi(sensor, on);
++ else
++ ret = ov5640_set_power_dvp(sensor, on);
++ if (ret)
++ goto power_off;
+
++ if (!on)
+ ov5640_set_power_off(sensor);
+- }
+
+ return 0;
+
+diff --git a/drivers/media/i2c/tc358743.c b/drivers/media/i2c/tc358743.c
+index dbbab75f135ec..cff99cf61ed4d 100644
+--- a/drivers/media/i2c/tc358743.c
++++ b/drivers/media/i2c/tc358743.c
+@@ -919,8 +919,8 @@ static const struct cec_adap_ops tc358743_cec_adap_ops = {
+ .adap_monitor_all_enable = tc358743_cec_adap_monitor_all_enable,
+ };
+
+-static void tc358743_cec_isr(struct v4l2_subdev *sd, u16 intstatus,
+- bool *handled)
++static void tc358743_cec_handler(struct v4l2_subdev *sd, u16 intstatus,
++ bool *handled)
+ {
+ struct tc358743_state *state = to_state(sd);
+ unsigned int cec_rxint, cec_txint;
+@@ -953,7 +953,8 @@ static void tc358743_cec_isr(struct v4l2_subdev *sd, u16 intstatus,
+ cec_transmit_attempt_done(state->cec_adap,
+ CEC_TX_STATUS_ERROR);
+ }
+- *handled = true;
++ if (handled)
++ *handled = true;
+ }
+ if ((intstatus & MASK_CEC_RINT) &&
+ (cec_rxint & MASK_CECRIEND)) {
+@@ -968,7 +969,8 @@ static void tc358743_cec_isr(struct v4l2_subdev *sd, u16 intstatus,
+ msg.msg[i] = v & 0xff;
+ }
+ cec_received_msg(state->cec_adap, &msg);
+- *handled = true;
++ if (handled)
++ *handled = true;
+ }
+ i2c_wr16(sd, INTSTATUS,
+ intstatus & (MASK_CEC_RINT | MASK_CEC_TINT));
+@@ -1432,7 +1434,7 @@ static int tc358743_isr(struct v4l2_subdev *sd, u32 status, bool *handled)
+
+ #ifdef CONFIG_VIDEO_TC358743_CEC
+ if (intstatus & (MASK_CEC_RINT | MASK_CEC_TINT)) {
+- tc358743_cec_isr(sd, intstatus, handled);
++ tc358743_cec_handler(sd, intstatus, handled);
+ i2c_wr16(sd, INTSTATUS,
+ intstatus & (MASK_CEC_RINT | MASK_CEC_TINT));
+ intstatus &= ~(MASK_CEC_RINT | MASK_CEC_TINT);
+@@ -1461,7 +1463,7 @@ static int tc358743_isr(struct v4l2_subdev *sd, u32 status, bool *handled)
+ static irqreturn_t tc358743_irq_handler(int irq, void *dev_id)
+ {
+ struct tc358743_state *state = dev_id;
+- bool handled;
++ bool handled = false;
+
+ tc358743_isr(&state->sd, 0, &handled);
+
+diff --git a/drivers/media/pci/bt8xx/bttv-driver.c b/drivers/media/pci/bt8xx/bttv-driver.c
+index 9144f795fb933..b721720f9845a 100644
+--- a/drivers/media/pci/bt8xx/bttv-driver.c
++++ b/drivers/media/pci/bt8xx/bttv-driver.c
+@@ -4013,11 +4013,13 @@ static int bttv_probe(struct pci_dev *dev, const struct pci_device_id *pci_id)
+ btv->id = dev->device;
+ if (pci_enable_device(dev)) {
+ pr_warn("%d: Can't enable device\n", btv->c.nr);
+- return -EIO;
++ result = -EIO;
++ goto free_mem;
+ }
+ if (pci_set_dma_mask(dev, DMA_BIT_MASK(32))) {
+ pr_warn("%d: No suitable DMA available\n", btv->c.nr);
+- return -EIO;
++ result = -EIO;
++ goto free_mem;
+ }
+ if (!request_mem_region(pci_resource_start(dev,0),
+ pci_resource_len(dev,0),
+@@ -4025,7 +4027,8 @@ static int bttv_probe(struct pci_dev *dev, const struct pci_device_id *pci_id)
+ pr_warn("%d: can't request iomem (0x%llx)\n",
+ btv->c.nr,
+ (unsigned long long)pci_resource_start(dev, 0));
+- return -EBUSY;
++ result = -EBUSY;
++ goto free_mem;
+ }
+ pci_set_master(dev);
+ pci_set_command(dev);
+@@ -4211,6 +4214,10 @@ fail0:
+ release_mem_region(pci_resource_start(btv->c.pci,0),
+ pci_resource_len(btv->c.pci,0));
+ pci_disable_device(btv->c.pci);
++
++free_mem:
++ bttvs[btv->c.nr] = NULL;
++ kfree(btv);
+ return result;
+ }
+
+diff --git a/drivers/media/pci/saa7134/saa7134-tvaudio.c b/drivers/media/pci/saa7134/saa7134-tvaudio.c
+index 79e1afb710758..5cc4ef21f9d37 100644
+--- a/drivers/media/pci/saa7134/saa7134-tvaudio.c
++++ b/drivers/media/pci/saa7134/saa7134-tvaudio.c
+@@ -683,7 +683,8 @@ int saa_dsp_writel(struct saa7134_dev *dev, int reg, u32 value)
+ {
+ int err;
+
+- audio_dbg(2, "dsp write reg 0x%x = 0x%06x\n", reg << 2, value);
++ audio_dbg(2, "dsp write reg 0x%x = 0x%06x\n",
++ (reg << 2) & 0xffffffff, value);
+ err = saa_dsp_wait_bit(dev,SAA7135_DSP_RWSTATE_WRR);
+ if (err < 0)
+ return err;
+diff --git a/drivers/media/platform/exynos4-is/fimc-isp.c b/drivers/media/platform/exynos4-is/fimc-isp.c
+index cde0d254ec1c4..a77c49b185115 100644
+--- a/drivers/media/platform/exynos4-is/fimc-isp.c
++++ b/drivers/media/platform/exynos4-is/fimc-isp.c
+@@ -305,8 +305,10 @@ static int fimc_isp_subdev_s_power(struct v4l2_subdev *sd, int on)
+
+ if (on) {
+ ret = pm_runtime_get_sync(&is->pdev->dev);
+- if (ret < 0)
++ if (ret < 0) {
++ pm_runtime_put(&is->pdev->dev);
+ return ret;
++ }
+ set_bit(IS_ST_PWR_ON, &is->state);
+
+ ret = fimc_is_start_firmware(is);
+diff --git a/drivers/media/platform/exynos4-is/fimc-lite.c b/drivers/media/platform/exynos4-is/fimc-lite.c
+index 394e0818f2d5c..92130d7791378 100644
+--- a/drivers/media/platform/exynos4-is/fimc-lite.c
++++ b/drivers/media/platform/exynos4-is/fimc-lite.c
+@@ -470,7 +470,7 @@ static int fimc_lite_open(struct file *file)
+ set_bit(ST_FLITE_IN_USE, &fimc->state);
+ ret = pm_runtime_get_sync(&fimc->pdev->dev);
+ if (ret < 0)
+- goto unlock;
++ goto err_pm;
+
+ ret = v4l2_fh_open(file);
+ if (ret < 0)
+diff --git a/drivers/media/platform/exynos4-is/media-dev.c b/drivers/media/platform/exynos4-is/media-dev.c
+index 9c31d950cddf7..a07d796f63df0 100644
+--- a/drivers/media/platform/exynos4-is/media-dev.c
++++ b/drivers/media/platform/exynos4-is/media-dev.c
+@@ -484,8 +484,10 @@ static int fimc_md_register_sensor_entities(struct fimc_md *fmd)
+ return -ENXIO;
+
+ ret = pm_runtime_get_sync(fmd->pmf);
+- if (ret < 0)
++ if (ret < 0) {
++ pm_runtime_put(fmd->pmf);
+ return ret;
++ }
+
+ fmd->num_sensors = 0;
+
+@@ -1268,11 +1270,9 @@ static int fimc_md_get_pinctrl(struct fimc_md *fmd)
+ if (IS_ERR(pctl->state_default))
+ return PTR_ERR(pctl->state_default);
+
++ /* PINCTRL_STATE_IDLE is optional */
+ pctl->state_idle = pinctrl_lookup_state(pctl->pinctrl,
+ PINCTRL_STATE_IDLE);
+- if (IS_ERR(pctl->state_idle))
+- return PTR_ERR(pctl->state_idle);
+-
+ return 0;
+ }
+
+diff --git a/drivers/media/platform/exynos4-is/mipi-csis.c b/drivers/media/platform/exynos4-is/mipi-csis.c
+index 540151bbf58f2..1aac167abb175 100644
+--- a/drivers/media/platform/exynos4-is/mipi-csis.c
++++ b/drivers/media/platform/exynos4-is/mipi-csis.c
+@@ -510,8 +510,10 @@ static int s5pcsis_s_stream(struct v4l2_subdev *sd, int enable)
+ if (enable) {
+ s5pcsis_clear_counters(state);
+ ret = pm_runtime_get_sync(&state->pdev->dev);
+- if (ret && ret != 1)
++ if (ret && ret != 1) {
++ pm_runtime_put_noidle(&state->pdev->dev);
+ return ret;
++ }
+ }
+
+ mutex_lock(&state->lock);
+diff --git a/drivers/media/platform/mx2_emmaprp.c b/drivers/media/platform/mx2_emmaprp.c
+index df78df59da456..08a5473b56104 100644
+--- a/drivers/media/platform/mx2_emmaprp.c
++++ b/drivers/media/platform/mx2_emmaprp.c
+@@ -852,8 +852,11 @@ static int emmaprp_probe(struct platform_device *pdev)
+ platform_set_drvdata(pdev, pcdev);
+
+ irq = platform_get_irq(pdev, 0);
+- if (irq < 0)
+- return irq;
++ if (irq < 0) {
++ ret = irq;
++ goto rel_vdev;
++ }
++
+ ret = devm_request_irq(&pdev->dev, irq, emmaprp_irq, 0,
+ dev_name(&pdev->dev), pcdev);
+ if (ret)
+diff --git a/drivers/media/platform/omap3isp/isp.c b/drivers/media/platform/omap3isp/isp.c
+index b91e472ee764e..de066757726de 100644
+--- a/drivers/media/platform/omap3isp/isp.c
++++ b/drivers/media/platform/omap3isp/isp.c
+@@ -2328,8 +2328,10 @@ static int isp_probe(struct platform_device *pdev)
+ mem = platform_get_resource(pdev, IORESOURCE_MEM, i);
+ isp->mmio_base[map_idx] =
+ devm_ioremap_resource(isp->dev, mem);
+- if (IS_ERR(isp->mmio_base[map_idx]))
+- return PTR_ERR(isp->mmio_base[map_idx]);
++ if (IS_ERR(isp->mmio_base[map_idx])) {
++ ret = PTR_ERR(isp->mmio_base[map_idx]);
++ goto error;
++ }
+ }
+
+ ret = isp_get_clocks(isp);
+diff --git a/drivers/media/platform/qcom/camss/camss-csiphy.c b/drivers/media/platform/qcom/camss/camss-csiphy.c
+index 008afb85023be..3c5b9082ad723 100644
+--- a/drivers/media/platform/qcom/camss/camss-csiphy.c
++++ b/drivers/media/platform/qcom/camss/camss-csiphy.c
+@@ -176,8 +176,10 @@ static int csiphy_set_power(struct v4l2_subdev *sd, int on)
+ int ret;
+
+ ret = pm_runtime_get_sync(dev);
+- if (ret < 0)
++ if (ret < 0) {
++ pm_runtime_put_sync(dev);
+ return ret;
++ }
+
+ ret = csiphy_set_clock_rates(csiphy);
+ if (ret < 0) {
+diff --git a/drivers/media/platform/qcom/venus/core.c b/drivers/media/platform/qcom/venus/core.c
+index 203c6538044fb..321ad77cb6cf4 100644
+--- a/drivers/media/platform/qcom/venus/core.c
++++ b/drivers/media/platform/qcom/venus/core.c
+@@ -224,13 +224,15 @@ static int venus_probe(struct platform_device *pdev)
+
+ ret = dma_set_mask_and_coherent(dev, core->res->dma_mask);
+ if (ret)
+- return ret;
++ goto err_core_put;
+
+ if (!dev->dma_parms) {
+ dev->dma_parms = devm_kzalloc(dev, sizeof(*dev->dma_parms),
+ GFP_KERNEL);
+- if (!dev->dma_parms)
+- return -ENOMEM;
++ if (!dev->dma_parms) {
++ ret = -ENOMEM;
++ goto err_core_put;
++ }
+ }
+ dma_set_max_seg_size(dev, DMA_BIT_MASK(32));
+
+@@ -242,11 +244,11 @@ static int venus_probe(struct platform_device *pdev)
+ IRQF_TRIGGER_HIGH | IRQF_ONESHOT,
+ "venus", core);
+ if (ret)
+- return ret;
++ goto err_core_put;
+
+ ret = hfi_create(core, &venus_core_ops);
+ if (ret)
+- return ret;
++ goto err_core_put;
+
+ pm_runtime_enable(dev);
+
+@@ -287,8 +289,10 @@ static int venus_probe(struct platform_device *pdev)
+ goto err_core_deinit;
+
+ ret = pm_runtime_put_sync(dev);
+- if (ret)
++ if (ret) {
++ pm_runtime_get_noresume(dev);
+ goto err_dev_unregister;
++ }
+
+ return 0;
+
+@@ -299,9 +303,13 @@ err_core_deinit:
+ err_venus_shutdown:
+ venus_shutdown(core);
+ err_runtime_disable:
++ pm_runtime_put_noidle(dev);
+ pm_runtime_set_suspended(dev);
+ pm_runtime_disable(dev);
+ hfi_destroy(core);
++err_core_put:
++ if (core->pm_ops->core_put)
++ core->pm_ops->core_put(dev);
+ return ret;
+ }
+
+diff --git a/drivers/media/platform/qcom/venus/vdec.c b/drivers/media/platform/qcom/venus/vdec.c
+index 7c4c483d54389..76be14efbfb09 100644
+--- a/drivers/media/platform/qcom/venus/vdec.c
++++ b/drivers/media/platform/qcom/venus/vdec.c
+@@ -1088,8 +1088,6 @@ static int vdec_stop_capture(struct venus_inst *inst)
+ break;
+ }
+
+- INIT_LIST_HEAD(&inst->registeredbufs);
+-
+ return ret;
+ }
+
+@@ -1189,6 +1187,14 @@ static int vdec_buf_init(struct vb2_buffer *vb)
+ static void vdec_buf_cleanup(struct vb2_buffer *vb)
+ {
+ struct venus_inst *inst = vb2_get_drv_priv(vb->vb2_queue);
++ struct vb2_v4l2_buffer *vbuf = to_vb2_v4l2_buffer(vb);
++ struct venus_buffer *buf = to_venus_buffer(vbuf);
++
++ mutex_lock(&inst->lock);
++ if (vb->type == V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE)
++ if (!list_empty(&inst->registeredbufs))
++ list_del_init(&buf->reg_list);
++ mutex_unlock(&inst->lock);
+
+ inst->buf_count--;
+ if (!inst->buf_count)
+diff --git a/drivers/media/platform/rcar-fcp.c b/drivers/media/platform/rcar-fcp.c
+index 5c6b00737fe75..05c712e00a2a7 100644
+--- a/drivers/media/platform/rcar-fcp.c
++++ b/drivers/media/platform/rcar-fcp.c
+@@ -103,8 +103,10 @@ int rcar_fcp_enable(struct rcar_fcp_device *fcp)
+ return 0;
+
+ ret = pm_runtime_get_sync(fcp->dev);
+- if (ret < 0)
++ if (ret < 0) {
++ pm_runtime_put_noidle(fcp->dev);
+ return ret;
++ }
+
+ return 0;
+ }
+diff --git a/drivers/media/platform/rcar-vin/rcar-csi2.c b/drivers/media/platform/rcar-vin/rcar-csi2.c
+index 151e6a90c5fbc..d9bc8cef7db58 100644
+--- a/drivers/media/platform/rcar-vin/rcar-csi2.c
++++ b/drivers/media/platform/rcar-vin/rcar-csi2.c
+@@ -361,7 +361,6 @@ struct rcar_csi2 {
+ struct media_pad pads[NR_OF_RCAR_CSI2_PAD];
+
+ struct v4l2_async_notifier notifier;
+- struct v4l2_async_subdev asd;
+ struct v4l2_subdev *remote;
+
+ struct v4l2_mbus_framefmt mf;
+@@ -810,6 +809,8 @@ static int rcsi2_parse_v4l2(struct rcar_csi2 *priv,
+
+ static int rcsi2_parse_dt(struct rcar_csi2 *priv)
+ {
++ struct v4l2_async_subdev *asd;
++ struct fwnode_handle *fwnode;
+ struct device_node *ep;
+ struct v4l2_fwnode_endpoint v4l2_ep = { .bus_type = 0 };
+ int ret;
+@@ -833,24 +834,19 @@ static int rcsi2_parse_dt(struct rcar_csi2 *priv)
+ return ret;
+ }
+
+- priv->asd.match.fwnode =
+- fwnode_graph_get_remote_endpoint(of_fwnode_handle(ep));
+- priv->asd.match_type = V4L2_ASYNC_MATCH_FWNODE;
+-
++ fwnode = fwnode_graph_get_remote_endpoint(of_fwnode_handle(ep));
+ of_node_put(ep);
+
+- v4l2_async_notifier_init(&priv->notifier);
+-
+- ret = v4l2_async_notifier_add_subdev(&priv->notifier, &priv->asd);
+- if (ret) {
+- fwnode_handle_put(priv->asd.match.fwnode);
+- return ret;
+- }
++ dev_dbg(priv->dev, "Found '%pOF'\n", to_of_node(fwnode));
+
++ v4l2_async_notifier_init(&priv->notifier);
+ priv->notifier.ops = &rcar_csi2_notify_ops;
+
+- dev_dbg(priv->dev, "Found '%pOF'\n",
+- to_of_node(priv->asd.match.fwnode));
++ asd = v4l2_async_notifier_add_fwnode_subdev(&priv->notifier, fwnode,
++ sizeof(*asd));
++ fwnode_handle_put(fwnode);
++ if (IS_ERR(asd))
++ return PTR_ERR(asd);
+
+ ret = v4l2_async_subdev_notifier_register(&priv->subdev,
+ &priv->notifier);
+diff --git a/drivers/media/platform/rcar-vin/rcar-dma.c b/drivers/media/platform/rcar-vin/rcar-dma.c
+index 1a30cd0363711..95bc9e0e87926 100644
+--- a/drivers/media/platform/rcar-vin/rcar-dma.c
++++ b/drivers/media/platform/rcar-vin/rcar-dma.c
+@@ -1392,8 +1392,10 @@ int rvin_set_channel_routing(struct rvin_dev *vin, u8 chsel)
+ int ret;
+
+ ret = pm_runtime_get_sync(vin->dev);
+- if (ret < 0)
++ if (ret < 0) {
++ pm_runtime_put_noidle(vin->dev);
+ return ret;
++ }
+
+ /* Make register writes take effect immediately. */
+ vnmc = rvin_read(vin, VNMC_REG);
+diff --git a/drivers/media/platform/rcar_drif.c b/drivers/media/platform/rcar_drif.c
+index 3d2451ac347d7..f318cd4b8086f 100644
+--- a/drivers/media/platform/rcar_drif.c
++++ b/drivers/media/platform/rcar_drif.c
+@@ -185,7 +185,6 @@ struct rcar_drif_frame_buf {
+ /* OF graph endpoint's V4L2 async data */
+ struct rcar_drif_graph_ep {
+ struct v4l2_subdev *subdev; /* Async matched subdev */
+- struct v4l2_async_subdev asd; /* Async sub-device descriptor */
+ };
+
+ /* DMA buffer */
+@@ -1109,12 +1108,6 @@ static int rcar_drif_notify_bound(struct v4l2_async_notifier *notifier,
+ struct rcar_drif_sdr *sdr =
+ container_of(notifier, struct rcar_drif_sdr, notifier);
+
+- if (sdr->ep.asd.match.fwnode !=
+- of_fwnode_handle(subdev->dev->of_node)) {
+- rdrif_err(sdr, "subdev %s cannot bind\n", subdev->name);
+- return -EINVAL;
+- }
+-
+ v4l2_set_subdev_hostdata(subdev, sdr);
+ sdr->ep.subdev = subdev;
+ rdrif_dbg(sdr, "bound asd %s\n", subdev->name);
+@@ -1218,7 +1211,7 @@ static int rcar_drif_parse_subdevs(struct rcar_drif_sdr *sdr)
+ {
+ struct v4l2_async_notifier *notifier = &sdr->notifier;
+ struct fwnode_handle *fwnode, *ep;
+- int ret;
++ struct v4l2_async_subdev *asd;
+
+ v4l2_async_notifier_init(notifier);
+
+@@ -1227,26 +1220,21 @@ static int rcar_drif_parse_subdevs(struct rcar_drif_sdr *sdr)
+ if (!ep)
+ return 0;
+
++ /* Get the endpoint properties */
++ rcar_drif_get_ep_properties(sdr, ep);
++
+ fwnode = fwnode_graph_get_remote_port_parent(ep);
++ fwnode_handle_put(ep);
+ if (!fwnode) {
+ dev_warn(sdr->dev, "bad remote port parent\n");
+- fwnode_handle_put(ep);
+ return -EINVAL;
+ }
+
+- sdr->ep.asd.match.fwnode = fwnode;
+- sdr->ep.asd.match_type = V4L2_ASYNC_MATCH_FWNODE;
+- ret = v4l2_async_notifier_add_subdev(notifier, &sdr->ep.asd);
+- if (ret) {
+- fwnode_handle_put(fwnode);
+- return ret;
+- }
+-
+- /* Get the endpoint properties */
+- rcar_drif_get_ep_properties(sdr, ep);
+-
++ asd = v4l2_async_notifier_add_fwnode_subdev(notifier, fwnode,
++ sizeof(*asd));
+ fwnode_handle_put(fwnode);
+- fwnode_handle_put(ep);
++ if (IS_ERR(asd))
++ return PTR_ERR(asd);
+
+ return 0;
+ }
+diff --git a/drivers/media/platform/rockchip/rga/rga-buf.c b/drivers/media/platform/rockchip/rga/rga-buf.c
+index 36b821ccc1dba..bf9a75b75083b 100644
+--- a/drivers/media/platform/rockchip/rga/rga-buf.c
++++ b/drivers/media/platform/rockchip/rga/rga-buf.c
+@@ -81,6 +81,7 @@ static int rga_buf_start_streaming(struct vb2_queue *q, unsigned int count)
+
+ ret = pm_runtime_get_sync(rga->dev);
+ if (ret < 0) {
++ pm_runtime_put_noidle(rga->dev);
+ rga_buf_return_buffers(q, VB2_BUF_STATE_QUEUED);
+ return ret;
+ }
+diff --git a/drivers/media/platform/s3c-camif/camif-core.c b/drivers/media/platform/s3c-camif/camif-core.c
+index c6fbcd7036d6d..ee624804862e2 100644
+--- a/drivers/media/platform/s3c-camif/camif-core.c
++++ b/drivers/media/platform/s3c-camif/camif-core.c
+@@ -464,7 +464,7 @@ static int s3c_camif_probe(struct platform_device *pdev)
+
+ ret = camif_media_dev_init(camif);
+ if (ret < 0)
+- goto err_alloc;
++ goto err_pm;
+
+ ret = camif_register_sensor(camif);
+ if (ret < 0)
+@@ -498,10 +498,9 @@ err_sens:
+ media_device_unregister(&camif->media_dev);
+ media_device_cleanup(&camif->media_dev);
+ camif_unregister_media_entities(camif);
+-err_alloc:
++err_pm:
+ pm_runtime_put(dev);
+ pm_runtime_disable(dev);
+-err_pm:
+ camif_clk_put(camif);
+ err_clk:
+ s3c_camif_unregister_subdev(camif);
+diff --git a/drivers/media/platform/s5p-mfc/s5p_mfc_pm.c b/drivers/media/platform/s5p-mfc/s5p_mfc_pm.c
+index 7d52431c2c837..62d2320a72186 100644
+--- a/drivers/media/platform/s5p-mfc/s5p_mfc_pm.c
++++ b/drivers/media/platform/s5p-mfc/s5p_mfc_pm.c
+@@ -79,8 +79,10 @@ int s5p_mfc_power_on(void)
+ int i, ret = 0;
+
+ ret = pm_runtime_get_sync(pm->device);
+- if (ret < 0)
++ if (ret < 0) {
++ pm_runtime_put_noidle(pm->device);
+ return ret;
++ }
+
+ /* clock control */
+ for (i = 0; i < pm->num_clocks; i++) {
+diff --git a/drivers/media/platform/sti/bdisp/bdisp-v4l2.c b/drivers/media/platform/sti/bdisp/bdisp-v4l2.c
+index af2d5eb782cee..e1d150584bdc2 100644
+--- a/drivers/media/platform/sti/bdisp/bdisp-v4l2.c
++++ b/drivers/media/platform/sti/bdisp/bdisp-v4l2.c
+@@ -1371,7 +1371,7 @@ static int bdisp_probe(struct platform_device *pdev)
+ ret = pm_runtime_get_sync(dev);
+ if (ret < 0) {
+ dev_err(dev, "failed to set PM\n");
+- goto err_dbg;
++ goto err_pm;
+ }
+
+ /* Filters */
+@@ -1399,7 +1399,6 @@ err_filter:
+ bdisp_hw_free_filters(bdisp->dev);
+ err_pm:
+ pm_runtime_put(dev);
+-err_dbg:
+ bdisp_debugfs_remove(bdisp);
+ err_v4l2:
+ v4l2_device_unregister(&bdisp->v4l2_dev);
+diff --git a/drivers/media/platform/sti/delta/delta-v4l2.c b/drivers/media/platform/sti/delta/delta-v4l2.c
+index 2503224eeee51..c691b3d81549d 100644
+--- a/drivers/media/platform/sti/delta/delta-v4l2.c
++++ b/drivers/media/platform/sti/delta/delta-v4l2.c
+@@ -954,8 +954,10 @@ static void delta_run_work(struct work_struct *work)
+ /* enable the hardware */
+ if (!dec->pm) {
+ ret = delta_get_sync(ctx);
+- if (ret)
++ if (ret) {
++ delta_put_autosuspend(ctx);
+ goto err;
++ }
+ }
+
+ /* decode this access unit */
+diff --git a/drivers/media/platform/sti/hva/hva-hw.c b/drivers/media/platform/sti/hva/hva-hw.c
+index 401aaafa17109..43f279e2a6a38 100644
+--- a/drivers/media/platform/sti/hva/hva-hw.c
++++ b/drivers/media/platform/sti/hva/hva-hw.c
+@@ -272,6 +272,7 @@ static unsigned long int hva_hw_get_ip_version(struct hva_dev *hva)
+
+ if (pm_runtime_get_sync(dev) < 0) {
+ dev_err(dev, "%s failed to get pm_runtime\n", HVA_PREFIX);
++ pm_runtime_put_noidle(dev);
+ mutex_unlock(&hva->protect_mutex);
+ return -EFAULT;
+ }
+@@ -388,7 +389,7 @@ int hva_hw_probe(struct platform_device *pdev, struct hva_dev *hva)
+ ret = pm_runtime_get_sync(dev);
+ if (ret < 0) {
+ dev_err(dev, "%s failed to set PM\n", HVA_PREFIX);
+- goto err_clk;
++ goto err_pm;
+ }
+
+ /* check IP hardware version */
+@@ -553,6 +554,7 @@ void hva_hw_dump_regs(struct hva_dev *hva, struct seq_file *s)
+
+ if (pm_runtime_get_sync(dev) < 0) {
+ seq_puts(s, "Cannot wake up IP\n");
++ pm_runtime_put_noidle(dev);
+ mutex_unlock(&hva->protect_mutex);
+ return;
+ }
+diff --git a/drivers/media/platform/stm32/stm32-dcmi.c b/drivers/media/platform/stm32/stm32-dcmi.c
+index b8931490b83b7..fd1c41cba52fc 100644
+--- a/drivers/media/platform/stm32/stm32-dcmi.c
++++ b/drivers/media/platform/stm32/stm32-dcmi.c
+@@ -733,7 +733,7 @@ static int dcmi_start_streaming(struct vb2_queue *vq, unsigned int count)
+ if (ret < 0) {
+ dev_err(dcmi->dev, "%s: Failed to start streaming, cannot get sync (%d)\n",
+ __func__, ret);
+- goto err_release_buffers;
++ goto err_pm_put;
+ }
+
+ ret = media_pipeline_start(&dcmi->vdev->entity, &dcmi->pipeline);
+@@ -837,8 +837,6 @@ err_media_pipeline_stop:
+
+ err_pm_put:
+ pm_runtime_put(dcmi->dev);
+-
+-err_release_buffers:
+ spin_lock_irq(&dcmi->irqlock);
+ /*
+ * Return all buffers to vb2 in QUEUED state.
+diff --git a/drivers/media/platform/ti-vpe/vpe.c b/drivers/media/platform/ti-vpe/vpe.c
+index cff2fcd6d812a..82d3ee45e2e90 100644
+--- a/drivers/media/platform/ti-vpe/vpe.c
++++ b/drivers/media/platform/ti-vpe/vpe.c
+@@ -2475,6 +2475,8 @@ static int vpe_runtime_get(struct platform_device *pdev)
+
+ r = pm_runtime_get_sync(&pdev->dev);
+ WARN_ON(r < 0);
++ if (r)
++ pm_runtime_put_noidle(&pdev->dev);
+ return r < 0 ? r : 0;
+ }
+
+diff --git a/drivers/media/platform/vsp1/vsp1_drv.c b/drivers/media/platform/vsp1/vsp1_drv.c
+index c650e45bb0ad1..dc62533cf32ce 100644
+--- a/drivers/media/platform/vsp1/vsp1_drv.c
++++ b/drivers/media/platform/vsp1/vsp1_drv.c
+@@ -562,7 +562,12 @@ int vsp1_device_get(struct vsp1_device *vsp1)
+ int ret;
+
+ ret = pm_runtime_get_sync(vsp1->dev);
+- return ret < 0 ? ret : 0;
++ if (ret < 0) {
++ pm_runtime_put_noidle(vsp1->dev);
++ return ret;
++ }
++
++ return 0;
+ }
+
+ /*
+@@ -845,12 +850,12 @@ static int vsp1_probe(struct platform_device *pdev)
+ /* Configure device parameters based on the version register. */
+ pm_runtime_enable(&pdev->dev);
+
+- ret = pm_runtime_get_sync(&pdev->dev);
++ ret = vsp1_device_get(vsp1);
+ if (ret < 0)
+ goto done;
+
+ vsp1->version = vsp1_read(vsp1, VI6_IP_VERSION);
+- pm_runtime_put_sync(&pdev->dev);
++ vsp1_device_put(vsp1);
+
+ for (i = 0; i < ARRAY_SIZE(vsp1_device_infos); ++i) {
+ if ((vsp1->version & VI6_IP_VERSION_MODEL_MASK) ==
+diff --git a/drivers/media/rc/ati_remote.c b/drivers/media/rc/ati_remote.c
+index 9cdef17b4793f..c12dda73cdd53 100644
+--- a/drivers/media/rc/ati_remote.c
++++ b/drivers/media/rc/ati_remote.c
+@@ -835,6 +835,10 @@ static int ati_remote_probe(struct usb_interface *interface,
+ err("%s: endpoint_in message size==0? \n", __func__);
+ return -ENODEV;
+ }
++ if (!usb_endpoint_is_int_out(endpoint_out)) {
++ err("%s: Unexpected endpoint_out\n", __func__);
++ return -ENODEV;
++ }
+
+ ati_remote = kzalloc(sizeof (struct ati_remote), GFP_KERNEL);
+ rc_dev = rc_allocate_device(RC_DRIVER_SCANCODE);
+diff --git a/drivers/media/test-drivers/vivid/vivid-meta-out.c b/drivers/media/test-drivers/vivid/vivid-meta-out.c
+index ff8a039aba72e..95835b52b58fc 100644
+--- a/drivers/media/test-drivers/vivid/vivid-meta-out.c
++++ b/drivers/media/test-drivers/vivid/vivid-meta-out.c
+@@ -164,10 +164,11 @@ void vivid_meta_out_process(struct vivid_dev *dev,
+ {
+ struct vivid_meta_out_buf *meta = vb2_plane_vaddr(&buf->vb.vb2_buf, 0);
+
+- tpg_s_brightness(&dev->tpg, meta->brightness);
+- tpg_s_contrast(&dev->tpg, meta->contrast);
+- tpg_s_saturation(&dev->tpg, meta->saturation);
+- tpg_s_hue(&dev->tpg, meta->hue);
++ v4l2_ctrl_s_ctrl(dev->brightness, meta->brightness);
++ v4l2_ctrl_s_ctrl(dev->contrast, meta->contrast);
++ v4l2_ctrl_s_ctrl(dev->saturation, meta->saturation);
++ v4l2_ctrl_s_ctrl(dev->hue, meta->hue);
++
+ dprintk(dev, 2, " %s brightness %u contrast %u saturation %u hue %d\n",
+ __func__, meta->brightness, meta->contrast,
+ meta->saturation, meta->hue);
+diff --git a/drivers/media/tuners/tuner-simple.c b/drivers/media/tuners/tuner-simple.c
+index b6e70fada3fb2..8fb186b25d6af 100644
+--- a/drivers/media/tuners/tuner-simple.c
++++ b/drivers/media/tuners/tuner-simple.c
+@@ -500,7 +500,7 @@ static int simple_radio_bandswitch(struct dvb_frontend *fe, u8 *buffer)
+ case TUNER_TENA_9533_DI:
+ case TUNER_YMEC_TVF_5533MF:
+ tuner_dbg("This tuner doesn't have FM. Most cards have a TEA5767 for FM\n");
+- return 0;
++ return -EINVAL;
+ case TUNER_PHILIPS_FM1216ME_MK3:
+ case TUNER_PHILIPS_FM1236_MK3:
+ case TUNER_PHILIPS_FMD1216ME_MK3:
+@@ -702,7 +702,8 @@ static int simple_set_radio_freq(struct dvb_frontend *fe,
+ TUNER_RATIO_SELECT_50; /* 50 kHz step */
+
+ /* Bandswitch byte */
+- simple_radio_bandswitch(fe, &buffer[0]);
++ if (simple_radio_bandswitch(fe, &buffer[0]))
++ return 0;
+
+ /* Convert from 1/16 kHz V4L steps to 1/20 MHz (=50 kHz) PLL steps
+ freq * (1 Mhz / 16000 V4L steps) * (20 PLL steps / 1 MHz) =
+diff --git a/drivers/media/usb/uvc/uvc_ctrl.c b/drivers/media/usb/uvc/uvc_ctrl.c
+index e399b9fad7574..a30a8a731eda8 100644
+--- a/drivers/media/usb/uvc/uvc_ctrl.c
++++ b/drivers/media/usb/uvc/uvc_ctrl.c
+@@ -773,12 +773,16 @@ static s32 uvc_get_le_value(struct uvc_control_mapping *mapping,
+ offset &= 7;
+ mask = ((1LL << bits) - 1) << offset;
+
+- for (; bits > 0; data++) {
++ while (1) {
+ u8 byte = *data & mask;
+ value |= offset > 0 ? (byte >> offset) : (byte << (-offset));
+ bits -= 8 - (offset > 0 ? offset : 0);
++ if (bits <= 0)
++ break;
++
+ offset -= 8;
+ mask = (1 << bits) - 1;
++ data++;
+ }
+
+ /* Sign-extend the value if needed. */
+diff --git a/drivers/media/usb/uvc/uvc_entity.c b/drivers/media/usb/uvc/uvc_entity.c
+index b4499cddeffe5..ca3a9c2eec271 100644
+--- a/drivers/media/usb/uvc/uvc_entity.c
++++ b/drivers/media/usb/uvc/uvc_entity.c
+@@ -73,10 +73,45 @@ static int uvc_mc_init_entity(struct uvc_video_chain *chain,
+ int ret;
+
+ if (UVC_ENTITY_TYPE(entity) != UVC_TT_STREAMING) {
++ u32 function;
++
+ v4l2_subdev_init(&entity->subdev, &uvc_subdev_ops);
+ strscpy(entity->subdev.name, entity->name,
+ sizeof(entity->subdev.name));
+
++ switch (UVC_ENTITY_TYPE(entity)) {
++ case UVC_VC_SELECTOR_UNIT:
++ function = MEDIA_ENT_F_VID_MUX;
++ break;
++ case UVC_VC_PROCESSING_UNIT:
++ case UVC_VC_EXTENSION_UNIT:
++ /* For lack of a better option. */
++ function = MEDIA_ENT_F_PROC_VIDEO_PIXEL_FORMATTER;
++ break;
++ case UVC_COMPOSITE_CONNECTOR:
++ case UVC_COMPONENT_CONNECTOR:
++ function = MEDIA_ENT_F_CONN_COMPOSITE;
++ break;
++ case UVC_SVIDEO_CONNECTOR:
++ function = MEDIA_ENT_F_CONN_SVIDEO;
++ break;
++ case UVC_ITT_CAMERA:
++ function = MEDIA_ENT_F_CAM_SENSOR;
++ break;
++ case UVC_TT_VENDOR_SPECIFIC:
++ case UVC_ITT_VENDOR_SPECIFIC:
++ case UVC_ITT_MEDIA_TRANSPORT_INPUT:
++ case UVC_OTT_VENDOR_SPECIFIC:
++ case UVC_OTT_DISPLAY:
++ case UVC_OTT_MEDIA_TRANSPORT_OUTPUT:
++ case UVC_EXTERNAL_VENDOR_SPECIFIC:
++ default:
++ function = MEDIA_ENT_F_V4L2_SUBDEV_UNKNOWN;
++ break;
++ }
++
++ entity->subdev.entity.function = function;
++
+ ret = media_entity_pads_init(&entity->subdev.entity,
+ entity->num_pads, entity->pads);
+
+diff --git a/drivers/media/usb/uvc/uvc_v4l2.c b/drivers/media/usb/uvc/uvc_v4l2.c
+index 0335e69b70abe..5e6f3153b5ff8 100644
+--- a/drivers/media/usb/uvc/uvc_v4l2.c
++++ b/drivers/media/usb/uvc/uvc_v4l2.c
+@@ -247,11 +247,41 @@ static int uvc_v4l2_try_format(struct uvc_streaming *stream,
+ if (ret < 0)
+ goto done;
+
++ /* After the probe, update fmt with the values returned from
++ * negotiation with the device.
++ */
++ for (i = 0; i < stream->nformats; ++i) {
++ if (probe->bFormatIndex == stream->format[i].index) {
++ format = &stream->format[i];
++ break;
++ }
++ }
++
++ if (i == stream->nformats) {
++ uvc_trace(UVC_TRACE_FORMAT, "Unknown bFormatIndex %u\n",
++ probe->bFormatIndex);
++ return -EINVAL;
++ }
++
++ for (i = 0; i < format->nframes; ++i) {
++ if (probe->bFrameIndex == format->frame[i].bFrameIndex) {
++ frame = &format->frame[i];
++ break;
++ }
++ }
++
++ if (i == format->nframes) {
++ uvc_trace(UVC_TRACE_FORMAT, "Unknown bFrameIndex %u\n",
++ probe->bFrameIndex);
++ return -EINVAL;
++ }
++
+ fmt->fmt.pix.width = frame->wWidth;
+ fmt->fmt.pix.height = frame->wHeight;
+ fmt->fmt.pix.field = V4L2_FIELD_NONE;
+ fmt->fmt.pix.bytesperline = uvc_v4l2_get_bytesperline(format, frame);
+ fmt->fmt.pix.sizeimage = probe->dwMaxVideoFrameSize;
++ fmt->fmt.pix.pixelformat = format->fcc;
+ fmt->fmt.pix.colorspace = format->colorspace;
+
+ if (uvc_format != NULL)
+diff --git a/drivers/memory/fsl-corenet-cf.c b/drivers/memory/fsl-corenet-cf.c
+index 0b0ed72016da8..0309bd5a18008 100644
+--- a/drivers/memory/fsl-corenet-cf.c
++++ b/drivers/memory/fsl-corenet-cf.c
+@@ -211,10 +211,8 @@ static int ccf_probe(struct platform_device *pdev)
+ dev_set_drvdata(&pdev->dev, ccf);
+
+ irq = platform_get_irq(pdev, 0);
+- if (!irq) {
+- dev_err(&pdev->dev, "%s: no irq\n", __func__);
+- return -ENXIO;
+- }
++ if (irq < 0)
++ return irq;
+
+ ret = devm_request_irq(&pdev->dev, irq, ccf_irq, 0, pdev->name, ccf);
+ if (ret) {
+diff --git a/drivers/memory/omap-gpmc.c b/drivers/memory/omap-gpmc.c
+index eff26c1b13940..27bc417029e11 100644
+--- a/drivers/memory/omap-gpmc.c
++++ b/drivers/memory/omap-gpmc.c
+@@ -949,7 +949,7 @@ static int gpmc_cs_remap(int cs, u32 base)
+ int ret;
+ u32 old_base, size;
+
+- if (cs > gpmc_cs_num) {
++ if (cs >= gpmc_cs_num) {
+ pr_err("%s: requested chip-select is disabled\n", __func__);
+ return -ENODEV;
+ }
+@@ -984,7 +984,7 @@ int gpmc_cs_request(int cs, unsigned long size, unsigned long *base)
+ struct resource *res = &gpmc->mem;
+ int r = -1;
+
+- if (cs > gpmc_cs_num) {
++ if (cs >= gpmc_cs_num) {
+ pr_err("%s: requested chip-select is disabled\n", __func__);
+ return -ENODEV;
+ }
+@@ -2274,6 +2274,10 @@ static void gpmc_probe_dt_children(struct platform_device *pdev)
+ }
+ }
+ #else
++void gpmc_read_settings_dt(struct device_node *np, struct gpmc_settings *p)
++{
++ memset(p, 0, sizeof(*p));
++}
+ static int gpmc_probe_dt(struct platform_device *pdev)
+ {
+ return 0;
+diff --git a/drivers/mfd/sm501.c b/drivers/mfd/sm501.c
+index ccd62b9639528..6d2f4a0a901dc 100644
+--- a/drivers/mfd/sm501.c
++++ b/drivers/mfd/sm501.c
+@@ -1415,8 +1415,14 @@ static int sm501_plat_probe(struct platform_device *dev)
+ goto err_claim;
+ }
+
+- return sm501_init_dev(sm);
++ ret = sm501_init_dev(sm);
++ if (ret)
++ goto err_unmap;
++
++ return 0;
+
++ err_unmap:
++ iounmap(sm->regs);
+ err_claim:
+ release_mem_region(sm->io_res->start, 0x100);
+ err_res:
+diff --git a/drivers/misc/cardreader/rtsx_pcr.c b/drivers/misc/cardreader/rtsx_pcr.c
+index 0d5928bc1b6d7..82246f7aec6fb 100644
+--- a/drivers/misc/cardreader/rtsx_pcr.c
++++ b/drivers/misc/cardreader/rtsx_pcr.c
+@@ -1536,12 +1536,14 @@ static int rtsx_pci_probe(struct pci_dev *pcidev,
+ ret = mfd_add_devices(&pcidev->dev, pcr->id, rtsx_pcr_cells,
+ ARRAY_SIZE(rtsx_pcr_cells), NULL, 0, NULL);
+ if (ret < 0)
+- goto disable_irq;
++ goto free_slots;
+
+ schedule_delayed_work(&pcr->idle_work, msecs_to_jiffies(200));
+
+ return 0;
+
++free_slots:
++ kfree(pcr->slots);
+ disable_irq:
+ free_irq(pcr->irq, (void *)pcr);
+ disable_msi:
+diff --git a/drivers/misc/eeprom/at25.c b/drivers/misc/eeprom/at25.c
+index cde9a2fc13250..490ff49d11ede 100644
+--- a/drivers/misc/eeprom/at25.c
++++ b/drivers/misc/eeprom/at25.c
+@@ -358,7 +358,7 @@ static int at25_probe(struct spi_device *spi)
+ at25->nvmem_config.reg_read = at25_ee_read;
+ at25->nvmem_config.reg_write = at25_ee_write;
+ at25->nvmem_config.priv = at25;
+- at25->nvmem_config.stride = 4;
++ at25->nvmem_config.stride = 1;
+ at25->nvmem_config.word_size = 1;
+ at25->nvmem_config.size = chip.byte_len;
+
+diff --git a/drivers/misc/habanalabs/gaudi/gaudi.c b/drivers/misc/habanalabs/gaudi/gaudi.c
+index ca183733847b6..bcc45bf7af2c8 100644
+--- a/drivers/misc/habanalabs/gaudi/gaudi.c
++++ b/drivers/misc/habanalabs/gaudi/gaudi.c
+@@ -6285,7 +6285,7 @@ static bool gaudi_is_device_idle(struct hl_device *hdev, u32 *mask,
+ is_idle &= is_eng_idle;
+
+ if (mask)
+- *mask |= !is_eng_idle <<
++ *mask |= ((u64) !is_eng_idle) <<
+ (GAUDI_ENGINE_ID_DMA_0 + dma_id);
+ if (s)
+ seq_printf(s, fmt, dma_id,
+@@ -6308,7 +6308,8 @@ static bool gaudi_is_device_idle(struct hl_device *hdev, u32 *mask,
+ is_idle &= is_eng_idle;
+
+ if (mask)
+- *mask |= !is_eng_idle << (GAUDI_ENGINE_ID_TPC_0 + i);
++ *mask |= ((u64) !is_eng_idle) <<
++ (GAUDI_ENGINE_ID_TPC_0 + i);
+ if (s)
+ seq_printf(s, fmt, i,
+ is_eng_idle ? "Y" : "N",
+@@ -6336,7 +6337,8 @@ static bool gaudi_is_device_idle(struct hl_device *hdev, u32 *mask,
+ is_idle &= is_eng_idle;
+
+ if (mask)
+- *mask |= !is_eng_idle << (GAUDI_ENGINE_ID_MME_0 + i);
++ *mask |= ((u64) !is_eng_idle) <<
++ (GAUDI_ENGINE_ID_MME_0 + i);
+ if (s) {
+ if (!is_slave)
+ seq_printf(s, fmt, i,
+diff --git a/drivers/misc/habanalabs/goya/goya.c b/drivers/misc/habanalabs/goya/goya.c
+index c179085ced7b8..a8041a39fae31 100644
+--- a/drivers/misc/habanalabs/goya/goya.c
++++ b/drivers/misc/habanalabs/goya/goya.c
+@@ -5098,7 +5098,8 @@ static bool goya_is_device_idle(struct hl_device *hdev, u32 *mask,
+ is_idle &= is_eng_idle;
+
+ if (mask)
+- *mask |= !is_eng_idle << (GOYA_ENGINE_ID_DMA_0 + i);
++ *mask |= ((u64) !is_eng_idle) <<
++ (GOYA_ENGINE_ID_DMA_0 + i);
+ if (s)
+ seq_printf(s, dma_fmt, i, is_eng_idle ? "Y" : "N",
+ qm_glbl_sts0, dma_core_sts0);
+@@ -5121,7 +5122,8 @@ static bool goya_is_device_idle(struct hl_device *hdev, u32 *mask,
+ is_idle &= is_eng_idle;
+
+ if (mask)
+- *mask |= !is_eng_idle << (GOYA_ENGINE_ID_TPC_0 + i);
++ *mask |= ((u64) !is_eng_idle) <<
++ (GOYA_ENGINE_ID_TPC_0 + i);
+ if (s)
+ seq_printf(s, fmt, i, is_eng_idle ? "Y" : "N",
+ qm_glbl_sts0, cmdq_glbl_sts0, tpc_cfg_sts);
+@@ -5141,7 +5143,7 @@ static bool goya_is_device_idle(struct hl_device *hdev, u32 *mask,
+ is_idle &= is_eng_idle;
+
+ if (mask)
+- *mask |= !is_eng_idle << GOYA_ENGINE_ID_MME_0;
++ *mask |= ((u64) !is_eng_idle) << GOYA_ENGINE_ID_MME_0;
+ if (s) {
+ seq_printf(s, fmt, 0, is_eng_idle ? "Y" : "N", qm_glbl_sts0,
+ cmdq_glbl_sts0, mme_arch_sts);
+diff --git a/drivers/misc/mic/scif/scif_rma.c b/drivers/misc/mic/scif/scif_rma.c
+index 406cd5abfa726..56c784699eb8e 100644
+--- a/drivers/misc/mic/scif/scif_rma.c
++++ b/drivers/misc/mic/scif/scif_rma.c
+@@ -1384,6 +1384,8 @@ retry:
+ (prot & SCIF_PROT_WRITE) ? FOLL_WRITE : 0,
+ pinned_pages->pages);
+ if (nr_pages != pinned_pages->nr_pages) {
++ if (pinned_pages->nr_pages < 0)
++ pinned_pages->nr_pages = 0;
+ if (try_upgrade) {
+ if (ulimit)
+ __scif_dec_pinned_vm_lock(mm, nr_pages);
+@@ -1400,7 +1402,6 @@ retry:
+
+ if (pinned_pages->nr_pages < nr_pages) {
+ err = -EFAULT;
+- pinned_pages->nr_pages = nr_pages;
+ goto dec_pinned;
+ }
+
+@@ -1413,7 +1414,6 @@ dec_pinned:
+ __scif_dec_pinned_vm_lock(mm, nr_pages);
+ /* Something went wrong! Rollback */
+ error_unmap:
+- pinned_pages->nr_pages = nr_pages;
+ scif_destroy_pinned_pages(pinned_pages);
+ *pages = NULL;
+ dev_dbg(scif_info.mdev.this_device,
+diff --git a/drivers/misc/mic/vop/vop_main.c b/drivers/misc/mic/vop/vop_main.c
+index 85942f6717c57..8aadc6055df17 100644
+--- a/drivers/misc/mic/vop/vop_main.c
++++ b/drivers/misc/mic/vop/vop_main.c
+@@ -320,7 +320,7 @@ static struct virtqueue *vop_find_vq(struct virtio_device *dev,
+ /* First assign the vring's allocated in host memory */
+ vqconfig = _vop_vq_config(vdev->desc) + index;
+ memcpy_fromio(&config, vqconfig, sizeof(config));
+- _vr_size = vring_size(le16_to_cpu(config.num), MIC_VIRTIO_RING_ALIGN);
++ _vr_size = round_up(vring_size(le16_to_cpu(config.num), MIC_VIRTIO_RING_ALIGN), 4);
+ vr_size = PAGE_ALIGN(_vr_size + sizeof(struct _mic_vring_info));
+ va = vpdev->hw_ops->remap(vpdev, le64_to_cpu(config.address), vr_size);
+ if (!va)
+diff --git a/drivers/misc/mic/vop/vop_vringh.c b/drivers/misc/mic/vop/vop_vringh.c
+index 30eac172f0170..7014ffe88632e 100644
+--- a/drivers/misc/mic/vop/vop_vringh.c
++++ b/drivers/misc/mic/vop/vop_vringh.c
+@@ -296,7 +296,7 @@ static int vop_virtio_add_device(struct vop_vdev *vdev,
+
+ num = le16_to_cpu(vqconfig[i].num);
+ mutex_init(&vvr->vr_mutex);
+- vr_size = PAGE_ALIGN(vring_size(num, MIC_VIRTIO_RING_ALIGN) +
++ vr_size = PAGE_ALIGN(round_up(vring_size(num, MIC_VIRTIO_RING_ALIGN), 4) +
+ sizeof(struct _mic_vring_info));
+ vr->va = (void *)
+ __get_free_pages(GFP_KERNEL | __GFP_ZERO,
+@@ -308,7 +308,7 @@ static int vop_virtio_add_device(struct vop_vdev *vdev,
+ goto err;
+ }
+ vr->len = vr_size;
+- vr->info = vr->va + vring_size(num, MIC_VIRTIO_RING_ALIGN);
++ vr->info = vr->va + round_up(vring_size(num, MIC_VIRTIO_RING_ALIGN), 4);
+ vr->info->magic = cpu_to_le32(MIC_MAGIC + vdev->virtio_id + i);
+ vr_addr = dma_map_single(&vpdev->dev, vr->va, vr_size,
+ DMA_BIDIRECTIONAL);
+@@ -602,6 +602,7 @@ static int vop_virtio_copy_from_user(struct vop_vdev *vdev, void __user *ubuf,
+ size_t partlen;
+ bool dma = VOP_USE_DMA && vi->dma_ch;
+ int err = 0;
++ size_t offset = 0;
+
+ if (dma) {
+ dma_alignment = 1 << vi->dma_ch->device->copy_align;
+@@ -655,13 +656,20 @@ memcpy:
+ * We are copying to IO below and should ideally use something
+ * like copy_from_user_toio(..) if it existed.
+ */
+- if (copy_from_user((void __force *)dbuf, ubuf, len)) {
+- err = -EFAULT;
+- dev_err(vop_dev(vdev), "%s %d err %d\n",
+- __func__, __LINE__, err);
+- goto err;
++ while (len) {
++ partlen = min_t(size_t, len, VOP_INT_DMA_BUF_SIZE);
++
++ if (copy_from_user(vvr->buf, ubuf + offset, partlen)) {
++ err = -EFAULT;
++ dev_err(vop_dev(vdev), "%s %d err %d\n",
++ __func__, __LINE__, err);
++ goto err;
++ }
++ memcpy_toio(dbuf + offset, vvr->buf, partlen);
++ offset += partlen;
++ vdev->out_bytes += partlen;
++ len -= partlen;
+ }
+- vdev->out_bytes += len;
+ err = 0;
+ err:
+ vpdev->hw_ops->unmap(vpdev, dbuf);
+diff --git a/drivers/misc/ocxl/Kconfig b/drivers/misc/ocxl/Kconfig
+index 2d2266c1439ef..51b51f3774701 100644
+--- a/drivers/misc/ocxl/Kconfig
++++ b/drivers/misc/ocxl/Kconfig
+@@ -9,9 +9,8 @@ config OCXL_BASE
+
+ config OCXL
+ tristate "OpenCAPI coherent accelerator support"
+- depends on PPC_POWERNV && PCI && EEH
++ depends on PPC_POWERNV && PCI && EEH && HOTPLUG_PCI_POWERNV
+ select OCXL_BASE
+- select HOTPLUG_PCI_POWERNV
+ default m
+ help
+ Select this option to enable the ocxl driver for Open
+diff --git a/drivers/misc/vmw_vmci/vmci_queue_pair.c b/drivers/misc/vmw_vmci/vmci_queue_pair.c
+index 8531ae7811956..c49065887e8f5 100644
+--- a/drivers/misc/vmw_vmci/vmci_queue_pair.c
++++ b/drivers/misc/vmw_vmci/vmci_queue_pair.c
+@@ -657,8 +657,9 @@ static int qp_host_get_user_memory(u64 produce_uva,
+ if (retval < (int)produce_q->kernel_if->num_pages) {
+ pr_debug("get_user_pages_fast(produce) failed (retval=%d)",
+ retval);
+- qp_release_pages(produce_q->kernel_if->u.h.header_page,
+- retval, false);
++ if (retval > 0)
++ qp_release_pages(produce_q->kernel_if->u.h.header_page,
++ retval, false);
+ err = VMCI_ERROR_NO_MEM;
+ goto out;
+ }
+@@ -670,8 +671,9 @@ static int qp_host_get_user_memory(u64 produce_uva,
+ if (retval < (int)consume_q->kernel_if->num_pages) {
+ pr_debug("get_user_pages_fast(consume) failed (retval=%d)",
+ retval);
+- qp_release_pages(consume_q->kernel_if->u.h.header_page,
+- retval, false);
++ if (retval > 0)
++ qp_release_pages(consume_q->kernel_if->u.h.header_page,
++ retval, false);
+ qp_release_pages(produce_q->kernel_if->u.h.header_page,
+ produce_q->kernel_if->num_pages, false);
+ err = VMCI_ERROR_NO_MEM;
+diff --git a/drivers/mmc/core/sdio_cis.c b/drivers/mmc/core/sdio_cis.c
+index e0655278c5c32..3efaa9534a777 100644
+--- a/drivers/mmc/core/sdio_cis.c
++++ b/drivers/mmc/core/sdio_cis.c
+@@ -26,6 +26,9 @@ static int cistpl_vers_1(struct mmc_card *card, struct sdio_func *func,
+ unsigned i, nr_strings;
+ char **buffer, *string;
+
++ if (size < 2)
++ return 0;
++
+ /* Find all null-terminated (including zero length) strings in
+ the TPLLV1_INFO field. Trailing garbage is ignored. */
+ buf += 2;
+diff --git a/drivers/mtd/hyperbus/hbmc-am654.c b/drivers/mtd/hyperbus/hbmc-am654.c
+index f350a0809f880..a808fa28cd9a1 100644
+--- a/drivers/mtd/hyperbus/hbmc-am654.c
++++ b/drivers/mtd/hyperbus/hbmc-am654.c
+@@ -70,7 +70,8 @@ static int am654_hbmc_probe(struct platform_device *pdev)
+
+ platform_set_drvdata(pdev, priv);
+
+- ret = of_address_to_resource(np, 0, &res);
++ priv->hbdev.np = of_get_next_child(np, NULL);
++ ret = of_address_to_resource(priv->hbdev.np, 0, &res);
+ if (ret)
+ return ret;
+
+@@ -103,7 +104,6 @@ static int am654_hbmc_probe(struct platform_device *pdev)
+ priv->ctlr.dev = dev;
+ priv->ctlr.ops = &am654_hbmc_ops;
+ priv->hbdev.ctlr = &priv->ctlr;
+- priv->hbdev.np = of_get_next_child(dev->of_node, NULL);
+ ret = hyperbus_register_device(&priv->hbdev);
+ if (ret) {
+ dev_err(dev, "failed to register controller\n");
+diff --git a/drivers/mtd/lpddr/lpddr2_nvm.c b/drivers/mtd/lpddr/lpddr2_nvm.c
+index 0f1547f09d08b..72f5c7b300790 100644
+--- a/drivers/mtd/lpddr/lpddr2_nvm.c
++++ b/drivers/mtd/lpddr/lpddr2_nvm.c
+@@ -393,6 +393,17 @@ static int lpddr2_nvm_lock(struct mtd_info *mtd, loff_t start_add,
+ return lpddr2_nvm_do_block_op(mtd, start_add, len, LPDDR2_NVM_LOCK);
+ }
+
++static const struct mtd_info lpddr2_nvm_mtd_info = {
++ .type = MTD_RAM,
++ .writesize = 1,
++ .flags = (MTD_CAP_NVRAM | MTD_POWERUP_LOCK),
++ ._read = lpddr2_nvm_read,
++ ._write = lpddr2_nvm_write,
++ ._erase = lpddr2_nvm_erase,
++ ._unlock = lpddr2_nvm_unlock,
++ ._lock = lpddr2_nvm_lock,
++};
++
+ /*
+ * lpddr2_nvm driver probe method
+ */
+@@ -433,6 +444,7 @@ static int lpddr2_nvm_probe(struct platform_device *pdev)
+ .pfow_base = OW_BASE_ADDRESS,
+ .fldrv_priv = pcm_data,
+ };
++
+ if (IS_ERR(map->virt))
+ return PTR_ERR(map->virt);
+
+@@ -444,22 +456,13 @@ static int lpddr2_nvm_probe(struct platform_device *pdev)
+ return PTR_ERR(pcm_data->ctl_regs);
+
+ /* Populate mtd_info data structure */
+- *mtd = (struct mtd_info) {
+- .dev = { .parent = &pdev->dev },
+- .name = pdev->dev.init_name,
+- .type = MTD_RAM,
+- .priv = map,
+- .size = resource_size(add_range),
+- .erasesize = ERASE_BLOCKSIZE * pcm_data->bus_width,
+- .writesize = 1,
+- .writebufsize = WRITE_BUFFSIZE * pcm_data->bus_width,
+- .flags = (MTD_CAP_NVRAM | MTD_POWERUP_LOCK),
+- ._read = lpddr2_nvm_read,
+- ._write = lpddr2_nvm_write,
+- ._erase = lpddr2_nvm_erase,
+- ._unlock = lpddr2_nvm_unlock,
+- ._lock = lpddr2_nvm_lock,
+- };
++ *mtd = lpddr2_nvm_mtd_info;
++ mtd->dev.parent = &pdev->dev;
++ mtd->name = pdev->dev.init_name;
++ mtd->priv = map;
++ mtd->size = resource_size(add_range);
++ mtd->erasesize = ERASE_BLOCKSIZE * pcm_data->bus_width;
++ mtd->writebufsize = WRITE_BUFFSIZE * pcm_data->bus_width;
+
+ /* Verify the presence of the device looking for PFOW string */
+ if (!lpddr2_nvm_pfow_present(map)) {
+diff --git a/drivers/mtd/mtdoops.c b/drivers/mtd/mtdoops.c
+index 4ced68be7ed7e..774970bfcf859 100644
+--- a/drivers/mtd/mtdoops.c
++++ b/drivers/mtd/mtdoops.c
+@@ -279,12 +279,13 @@ static void mtdoops_do_dump(struct kmsg_dumper *dumper,
+ kmsg_dump_get_buffer(dumper, true, cxt->oops_buf + MTDOOPS_HEADER_SIZE,
+ record_size - MTDOOPS_HEADER_SIZE, NULL);
+
+- /* Panics must be written immediately */
+- if (reason != KMSG_DUMP_OOPS)
++ if (reason != KMSG_DUMP_OOPS) {
++ /* Panics must be written immediately */
+ mtdoops_write(cxt, 1);
+-
+- /* For other cases, schedule work to write it "nicely" */
+- schedule_work(&cxt->work_write);
++ } else {
++ /* For other cases, schedule work to write it "nicely" */
++ schedule_work(&cxt->work_write);
++ }
+ }
+
+ static void mtdoops_notify_add(struct mtd_info *mtd)
+diff --git a/drivers/mtd/nand/raw/ams-delta.c b/drivers/mtd/nand/raw/ams-delta.c
+index 3711e7a0436cd..b3390028c6bfb 100644
+--- a/drivers/mtd/nand/raw/ams-delta.c
++++ b/drivers/mtd/nand/raw/ams-delta.c
+@@ -400,12 +400,14 @@ static int gpio_nand_remove(struct platform_device *pdev)
+ return 0;
+ }
+
++#ifdef CONFIG_OF
+ static const struct of_device_id gpio_nand_of_id_table[] = {
+ {
+ /* sentinel */
+ },
+ };
+ MODULE_DEVICE_TABLE(of, gpio_nand_of_id_table);
++#endif
+
+ static const struct platform_device_id gpio_nand_plat_id_table[] = {
+ {
+diff --git a/drivers/mtd/nand/raw/stm32_fmc2_nand.c b/drivers/mtd/nand/raw/stm32_fmc2_nand.c
+index 65c9d17b25a3c..dce6d7a10a364 100644
+--- a/drivers/mtd/nand/raw/stm32_fmc2_nand.c
++++ b/drivers/mtd/nand/raw/stm32_fmc2_nand.c
+@@ -1791,7 +1791,7 @@ static int stm32_fmc2_nfc_parse_child(struct stm32_fmc2_nfc *nfc,
+ return ret;
+ }
+
+- if (cs > FMC2_MAX_CE) {
++ if (cs >= FMC2_MAX_CE) {
+ dev_err(nfc->dev, "invalid reg value: %d\n", cs);
+ return -EINVAL;
+ }
+diff --git a/drivers/mtd/nand/raw/vf610_nfc.c b/drivers/mtd/nand/raw/vf610_nfc.c
+index 7248c59011836..fcca45e2abe20 100644
+--- a/drivers/mtd/nand/raw/vf610_nfc.c
++++ b/drivers/mtd/nand/raw/vf610_nfc.c
+@@ -852,8 +852,10 @@ static int vf610_nfc_probe(struct platform_device *pdev)
+ }
+
+ of_id = of_match_device(vf610_nfc_dt_ids, &pdev->dev);
+- if (!of_id)
+- return -ENODEV;
++ if (!of_id) {
++ err = -ENODEV;
++ goto err_disable_clk;
++ }
+
+ nfc->variant = (enum vf610_nfc_variant)of_id->data;
+
+diff --git a/drivers/mtd/nand/spi/gigadevice.c b/drivers/mtd/nand/spi/gigadevice.c
+index d219c970042a2..0b7667e60780f 100644
+--- a/drivers/mtd/nand/spi/gigadevice.c
++++ b/drivers/mtd/nand/spi/gigadevice.c
+@@ -21,7 +21,7 @@
+ #define GD5FXGQ4UXFXXG_STATUS_ECC_UNCOR_ERROR (7 << 4)
+
+ static SPINAND_OP_VARIANTS(read_cache_variants,
+- SPINAND_PAGE_READ_FROM_CACHE_QUADIO_OP(0, 2, NULL, 0),
++ SPINAND_PAGE_READ_FROM_CACHE_QUADIO_OP(0, 1, NULL, 0),
+ SPINAND_PAGE_READ_FROM_CACHE_X4_OP(0, 1, NULL, 0),
+ SPINAND_PAGE_READ_FROM_CACHE_DUALIO_OP(0, 1, NULL, 0),
+ SPINAND_PAGE_READ_FROM_CACHE_X2_OP(0, 1, NULL, 0),
+@@ -29,7 +29,7 @@ static SPINAND_OP_VARIANTS(read_cache_variants,
+ SPINAND_PAGE_READ_FROM_CACHE_OP(false, 0, 1, NULL, 0));
+
+ static SPINAND_OP_VARIANTS(read_cache_variants_f,
+- SPINAND_PAGE_READ_FROM_CACHE_QUADIO_OP(0, 2, NULL, 0),
++ SPINAND_PAGE_READ_FROM_CACHE_QUADIO_OP(0, 1, NULL, 0),
+ SPINAND_PAGE_READ_FROM_CACHE_X4_OP_3A(0, 1, NULL, 0),
+ SPINAND_PAGE_READ_FROM_CACHE_DUALIO_OP(0, 1, NULL, 0),
+ SPINAND_PAGE_READ_FROM_CACHE_X2_OP_3A(0, 1, NULL, 0),
+@@ -202,7 +202,7 @@ static const struct spinand_info gigadevice_spinand_table[] = {
+ SPINAND_INFO_OP_VARIANTS(&read_cache_variants,
+ &write_cache_variants,
+ &update_cache_variants),
+- 0,
++ SPINAND_HAS_QE_BIT,
+ SPINAND_ECCINFO(&gd5fxgq4xa_ooblayout,
+ gd5fxgq4xa_ecc_get_status)),
+ SPINAND_INFO("GD5F2GQ4xA",
+@@ -212,7 +212,7 @@ static const struct spinand_info gigadevice_spinand_table[] = {
+ SPINAND_INFO_OP_VARIANTS(&read_cache_variants,
+ &write_cache_variants,
+ &update_cache_variants),
+- 0,
++ SPINAND_HAS_QE_BIT,
+ SPINAND_ECCINFO(&gd5fxgq4xa_ooblayout,
+ gd5fxgq4xa_ecc_get_status)),
+ SPINAND_INFO("GD5F4GQ4xA",
+@@ -222,7 +222,7 @@ static const struct spinand_info gigadevice_spinand_table[] = {
+ SPINAND_INFO_OP_VARIANTS(&read_cache_variants,
+ &write_cache_variants,
+ &update_cache_variants),
+- 0,
++ SPINAND_HAS_QE_BIT,
+ SPINAND_ECCINFO(&gd5fxgq4xa_ooblayout,
+ gd5fxgq4xa_ecc_get_status)),
+ SPINAND_INFO("GD5F1GQ4UExxG",
+@@ -232,7 +232,7 @@ static const struct spinand_info gigadevice_spinand_table[] = {
+ SPINAND_INFO_OP_VARIANTS(&read_cache_variants,
+ &write_cache_variants,
+ &update_cache_variants),
+- 0,
++ SPINAND_HAS_QE_BIT,
+ SPINAND_ECCINFO(&gd5fxgq4_variant2_ooblayout,
+ gd5fxgq4uexxg_ecc_get_status)),
+ SPINAND_INFO("GD5F1GQ4UFxxG",
+@@ -242,7 +242,7 @@ static const struct spinand_info gigadevice_spinand_table[] = {
+ SPINAND_INFO_OP_VARIANTS(&read_cache_variants_f,
+ &write_cache_variants,
+ &update_cache_variants),
+- 0,
++ SPINAND_HAS_QE_BIT,
+ SPINAND_ECCINFO(&gd5fxgq4_variant2_ooblayout,
+ gd5fxgq4ufxxg_ecc_get_status)),
+ };
+diff --git a/drivers/net/can/flexcan.c b/drivers/net/can/flexcan.c
+index 94d10ec954a05..2ac7a667bde35 100644
+--- a/drivers/net/can/flexcan.c
++++ b/drivers/net/can/flexcan.c
+@@ -1260,18 +1260,23 @@ static int flexcan_chip_start(struct net_device *dev)
+ return err;
+ }
+
+-/* flexcan_chip_stop
++/* __flexcan_chip_stop
+ *
+- * this functions is entered with clocks enabled
++ * this function is entered with clocks enabled
+ */
+-static void flexcan_chip_stop(struct net_device *dev)
++static int __flexcan_chip_stop(struct net_device *dev, bool disable_on_error)
+ {
+ struct flexcan_priv *priv = netdev_priv(dev);
+ struct flexcan_regs __iomem *regs = priv->regs;
++ int err;
+
+ /* freeze + disable module */
+- flexcan_chip_freeze(priv);
+- flexcan_chip_disable(priv);
++ err = flexcan_chip_freeze(priv);
++ if (err && !disable_on_error)
++ return err;
++ err = flexcan_chip_disable(priv);
++ if (err && !disable_on_error)
++ goto out_chip_unfreeze;
+
+ /* Disable all interrupts */
+	priv->write(0, &regs->imask2);
+@@ -1281,6 +1286,23 @@ static void flexcan_chip_stop(struct net_device *dev)
+
+ flexcan_transceiver_disable(priv);
+ priv->can.state = CAN_STATE_STOPPED;
++
++ return 0;
++
++ out_chip_unfreeze:
++ flexcan_chip_unfreeze(priv);
++
++ return err;
++}
++
++static inline int flexcan_chip_stop_disable_on_error(struct net_device *dev)
++{
++ return __flexcan_chip_stop(dev, true);
++}
++
++static inline int flexcan_chip_stop(struct net_device *dev)
++{
++ return __flexcan_chip_stop(dev, false);
+ }
+
+ static int flexcan_open(struct net_device *dev)
+@@ -1362,7 +1384,7 @@ static int flexcan_close(struct net_device *dev)
+
+ netif_stop_queue(dev);
+ can_rx_offload_disable(&priv->offload);
+- flexcan_chip_stop(dev);
++ flexcan_chip_stop_disable_on_error(dev);
+
+ can_rx_offload_del(&priv->offload);
+ free_irq(dev->irq, dev);
+diff --git a/drivers/net/can/m_can/m_can_platform.c b/drivers/net/can/m_can/m_can_platform.c
+index 38ea5e600fb84..e6d0cb9ee02f0 100644
+--- a/drivers/net/can/m_can/m_can_platform.c
++++ b/drivers/net/can/m_can/m_can_platform.c
+@@ -144,8 +144,6 @@ static int __maybe_unused m_can_runtime_suspend(struct device *dev)
+ struct net_device *ndev = dev_get_drvdata(dev);
+ struct m_can_classdev *mcan_class = netdev_priv(ndev);
+
+- m_can_class_suspend(dev);
+-
+ clk_disable_unprepare(mcan_class->cclk);
+ clk_disable_unprepare(mcan_class->hclk);
+
+diff --git a/drivers/net/dsa/microchip/ksz_common.c b/drivers/net/dsa/microchip/ksz_common.c
+index 7b6c0dce75360..ee433abc2d4b5 100644
+--- a/drivers/net/dsa/microchip/ksz_common.c
++++ b/drivers/net/dsa/microchip/ksz_common.c
+@@ -103,14 +103,8 @@ void ksz_init_mib_timer(struct ksz_device *dev)
+
+ INIT_DELAYED_WORK(&dev->mib_read, ksz_mib_read_work);
+
+- /* Read MIB counters every 30 seconds to avoid overflow. */
+- dev->mib_read_interval = msecs_to_jiffies(30000);
+-
+ for (i = 0; i < dev->mib_port_cnt; i++)
+ dev->dev_ops->port_init_cnt(dev, i);
+-
+- /* Start the timer 2 seconds later. */
+- schedule_delayed_work(&dev->mib_read, msecs_to_jiffies(2000));
+ }
+ EXPORT_SYMBOL_GPL(ksz_init_mib_timer);
+
+@@ -144,7 +138,9 @@ void ksz_adjust_link(struct dsa_switch *ds, int port,
+ /* Read all MIB counters when the link is going down. */
+ if (!phydev->link) {
+ p->read = true;
+- schedule_delayed_work(&dev->mib_read, 0);
++ /* timer started */
++ if (dev->mib_read_interval)
++ schedule_delayed_work(&dev->mib_read, 0);
+ }
+ mutex_lock(&dev->dev_mutex);
+ if (!phydev->link)
+@@ -460,6 +456,12 @@ int ksz_switch_register(struct ksz_device *dev,
+ return ret;
+ }
+
++ /* Read MIB counters every 30 seconds to avoid overflow. */
++ dev->mib_read_interval = msecs_to_jiffies(30000);
++
++ /* Start the MIB timer. */
++ schedule_delayed_work(&dev->mib_read, 0);
++
+ return 0;
+ }
+ EXPORT_SYMBOL(ksz_switch_register);
+diff --git a/drivers/net/dsa/realtek-smi-core.h b/drivers/net/dsa/realtek-smi-core.h
+index 9a63b51e1d82f..6f2dab7e33d65 100644
+--- a/drivers/net/dsa/realtek-smi-core.h
++++ b/drivers/net/dsa/realtek-smi-core.h
+@@ -25,6 +25,9 @@ struct rtl8366_mib_counter {
+ const char *name;
+ };
+
++/**
++ * struct rtl8366_vlan_mc - Virtual LAN member configuration
++ */
+ struct rtl8366_vlan_mc {
+ u16 vid;
+ u16 untag;
+@@ -119,7 +122,6 @@ int realtek_smi_setup_mdio(struct realtek_smi *smi);
+ int rtl8366_mc_is_used(struct realtek_smi *smi, int mc_index, int *used);
+ int rtl8366_set_vlan(struct realtek_smi *smi, int vid, u32 member,
+ u32 untag, u32 fid);
+-int rtl8366_get_pvid(struct realtek_smi *smi, int port, int *val);
+ int rtl8366_set_pvid(struct realtek_smi *smi, unsigned int port,
+ unsigned int vid);
+ int rtl8366_enable_vlan4k(struct realtek_smi *smi, bool enable);
+diff --git a/drivers/net/dsa/rtl8366.c b/drivers/net/dsa/rtl8366.c
+index 99cdb2f18fa2f..49c626a336803 100644
+--- a/drivers/net/dsa/rtl8366.c
++++ b/drivers/net/dsa/rtl8366.c
+@@ -36,12 +36,113 @@ int rtl8366_mc_is_used(struct realtek_smi *smi, int mc_index, int *used)
+ }
+ EXPORT_SYMBOL_GPL(rtl8366_mc_is_used);
+
++/**
++ * rtl8366_obtain_mc() - retrieve or allocate a VLAN member configuration
++ * @smi: the Realtek SMI device instance
++ * @vid: the VLAN ID to look up or allocate
++ * @vlanmc: the pointer will be assigned to a pointer to a valid member config
++ * if successful
++ * @return: index of a new member config or negative error number
++ */
++static int rtl8366_obtain_mc(struct realtek_smi *smi, int vid,
++ struct rtl8366_vlan_mc *vlanmc)
++{
++ struct rtl8366_vlan_4k vlan4k;
++ int ret;
++ int i;
++
++ /* Try to find an existing member config entry for this VID */
++ for (i = 0; i < smi->num_vlan_mc; i++) {
++ ret = smi->ops->get_vlan_mc(smi, i, vlanmc);
++ if (ret) {
++ dev_err(smi->dev, "error searching for VLAN MC %d for VID %d\n",
++ i, vid);
++ return ret;
++ }
++
++ if (vid == vlanmc->vid)
++ return i;
++ }
++
++ /* We have no MC entry for this VID, try to find an empty one */
++ for (i = 0; i < smi->num_vlan_mc; i++) {
++ ret = smi->ops->get_vlan_mc(smi, i, vlanmc);
++ if (ret) {
++ dev_err(smi->dev, "error searching for VLAN MC %d for VID %d\n",
++ i, vid);
++ return ret;
++ }
++
++ if (vlanmc->vid == 0 && vlanmc->member == 0) {
++ /* Update the entry from the 4K table */
++ ret = smi->ops->get_vlan_4k(smi, vid, &vlan4k);
++ if (ret) {
++ dev_err(smi->dev, "error looking for 4K VLAN MC %d for VID %d\n",
++ i, vid);
++ return ret;
++ }
++
++ vlanmc->vid = vid;
++ vlanmc->member = vlan4k.member;
++ vlanmc->untag = vlan4k.untag;
++ vlanmc->fid = vlan4k.fid;
++ ret = smi->ops->set_vlan_mc(smi, i, vlanmc);
++ if (ret) {
++ dev_err(smi->dev, "unable to set/update VLAN MC %d for VID %d\n",
++ i, vid);
++ return ret;
++ }
++
++ dev_dbg(smi->dev, "created new MC at index %d for VID %d\n",
++ i, vid);
++ return i;
++ }
++ }
++
++ /* MC table is full, try to find an unused entry and replace it */
++ for (i = 0; i < smi->num_vlan_mc; i++) {
++ int used;
++
++ ret = rtl8366_mc_is_used(smi, i, &used);
++ if (ret)
++ return ret;
++
++ if (!used) {
++ /* Update the entry from the 4K table */
++ ret = smi->ops->get_vlan_4k(smi, vid, &vlan4k);
++ if (ret)
++ return ret;
++
++ vlanmc->vid = vid;
++ vlanmc->member = vlan4k.member;
++ vlanmc->untag = vlan4k.untag;
++ vlanmc->fid = vlan4k.fid;
++ ret = smi->ops->set_vlan_mc(smi, i, vlanmc);
++ if (ret) {
++ dev_err(smi->dev, "unable to set/update VLAN MC %d for VID %d\n",
++ i, vid);
++ return ret;
++ }
++ dev_dbg(smi->dev, "recycled MC at index %i for VID %d\n",
++ i, vid);
++ return i;
++ }
++ }
++
++ dev_err(smi->dev, "all VLAN member configurations are in use\n");
++ return -ENOSPC;
++}
++
+ int rtl8366_set_vlan(struct realtek_smi *smi, int vid, u32 member,
+ u32 untag, u32 fid)
+ {
++ struct rtl8366_vlan_mc vlanmc;
+ struct rtl8366_vlan_4k vlan4k;
++ int mc;
+ int ret;
+- int i;
++
++ if (!smi->ops->is_vlan_valid(smi, vid))
++ return -EINVAL;
+
+ dev_dbg(smi->dev,
+ "setting VLAN%d 4k members: 0x%02x, untagged: 0x%02x\n",
+@@ -63,133 +164,58 @@ int rtl8366_set_vlan(struct realtek_smi *smi, int vid, u32 member,
+ "resulting VLAN%d 4k members: 0x%02x, untagged: 0x%02x\n",
+ vid, vlan4k.member, vlan4k.untag);
+
+- /* Try to find an existing MC entry for this VID */
+- for (i = 0; i < smi->num_vlan_mc; i++) {
+- struct rtl8366_vlan_mc vlanmc;
+-
+- ret = smi->ops->get_vlan_mc(smi, i, &vlanmc);
+- if (ret)
+- return ret;
+-
+- if (vid == vlanmc.vid) {
+- /* update the MC entry */
+- vlanmc.member |= member;
+- vlanmc.untag |= untag;
+- vlanmc.fid = fid;
+-
+- ret = smi->ops->set_vlan_mc(smi, i, &vlanmc);
++ /* Find or allocate a member config for this VID */
++ ret = rtl8366_obtain_mc(smi, vid, &vlanmc);
++ if (ret < 0)
++ return ret;
++ mc = ret;
+
+- dev_dbg(smi->dev,
+- "resulting VLAN%d MC members: 0x%02x, untagged: 0x%02x\n",
+- vid, vlanmc.member, vlanmc.untag);
++ /* Update the MC entry */
++ vlanmc.member |= member;
++ vlanmc.untag |= untag;
++ vlanmc.fid = fid;
+
+- break;
+- }
+- }
++ /* Commit updates to the MC entry */
++ ret = smi->ops->set_vlan_mc(smi, mc, &vlanmc);
++ if (ret)
++ dev_err(smi->dev, "failed to commit changes to VLAN MC index %d for VID %d\n",
++ mc, vid);
++ else
++ dev_dbg(smi->dev,
++ "resulting VLAN%d MC members: 0x%02x, untagged: 0x%02x\n",
++ vid, vlanmc.member, vlanmc.untag);
+
+ return ret;
+ }
+ EXPORT_SYMBOL_GPL(rtl8366_set_vlan);
+
+-int rtl8366_get_pvid(struct realtek_smi *smi, int port, int *val)
+-{
+- struct rtl8366_vlan_mc vlanmc;
+- int ret;
+- int index;
+-
+- ret = smi->ops->get_mc_index(smi, port, &index);
+- if (ret)
+- return ret;
+-
+- ret = smi->ops->get_vlan_mc(smi, index, &vlanmc);
+- if (ret)
+- return ret;
+-
+- *val = vlanmc.vid;
+- return 0;
+-}
+-EXPORT_SYMBOL_GPL(rtl8366_get_pvid);
+-
+ int rtl8366_set_pvid(struct realtek_smi *smi, unsigned int port,
+ unsigned int vid)
+ {
+ struct rtl8366_vlan_mc vlanmc;
+- struct rtl8366_vlan_4k vlan4k;
++ int mc;
+ int ret;
+- int i;
+-
+- /* Try to find an existing MC entry for this VID */
+- for (i = 0; i < smi->num_vlan_mc; i++) {
+- ret = smi->ops->get_vlan_mc(smi, i, &vlanmc);
+- if (ret)
+- return ret;
+-
+- if (vid == vlanmc.vid) {
+- ret = smi->ops->set_vlan_mc(smi, i, &vlanmc);
+- if (ret)
+- return ret;
+-
+- ret = smi->ops->set_mc_index(smi, port, i);
+- return ret;
+- }
+- }
+-
+- /* We have no MC entry for this VID, try to find an empty one */
+- for (i = 0; i < smi->num_vlan_mc; i++) {
+- ret = smi->ops->get_vlan_mc(smi, i, &vlanmc);
+- if (ret)
+- return ret;
+-
+- if (vlanmc.vid == 0 && vlanmc.member == 0) {
+- /* Update the entry from the 4K table */
+- ret = smi->ops->get_vlan_4k(smi, vid, &vlan4k);
+- if (ret)
+- return ret;
+
+- vlanmc.vid = vid;
+- vlanmc.member = vlan4k.member;
+- vlanmc.untag = vlan4k.untag;
+- vlanmc.fid = vlan4k.fid;
+- ret = smi->ops->set_vlan_mc(smi, i, &vlanmc);
+- if (ret)
+- return ret;
+-
+- ret = smi->ops->set_mc_index(smi, port, i);
+- return ret;
+- }
+- }
+-
+- /* MC table is full, try to find an unused entry and replace it */
+- for (i = 0; i < smi->num_vlan_mc; i++) {
+- int used;
+-
+- ret = rtl8366_mc_is_used(smi, i, &used);
+- if (ret)
+- return ret;
+-
+- if (!used) {
+- /* Update the entry from the 4K table */
+- ret = smi->ops->get_vlan_4k(smi, vid, &vlan4k);
+- if (ret)
+- return ret;
++ if (!smi->ops->is_vlan_valid(smi, vid))
++ return -EINVAL;
+
+- vlanmc.vid = vid;
+- vlanmc.member = vlan4k.member;
+- vlanmc.untag = vlan4k.untag;
+- vlanmc.fid = vlan4k.fid;
+- ret = smi->ops->set_vlan_mc(smi, i, &vlanmc);
+- if (ret)
+- return ret;
++ /* Find or allocate a member config for this VID */
++ ret = rtl8366_obtain_mc(smi, vid, &vlanmc);
++ if (ret < 0)
++ return ret;
++ mc = ret;
+
+- ret = smi->ops->set_mc_index(smi, port, i);
+- return ret;
+- }
++ ret = smi->ops->set_mc_index(smi, port, mc);
++ if (ret) {
++ dev_err(smi->dev, "set PVID: failed to set MC index %d for port %d\n",
++ mc, port);
++ return ret;
+ }
+
+- dev_err(smi->dev,
+- "all VLAN member configurations are in use\n");
++ dev_dbg(smi->dev, "set PVID: the PVID for port %d set to %d using existing MC index %d\n",
++ port, vid, mc);
+
+- return -ENOSPC;
++ return 0;
+ }
+ EXPORT_SYMBOL_GPL(rtl8366_set_pvid);
+
+@@ -389,7 +415,8 @@ void rtl8366_vlan_add(struct dsa_switch *ds, int port,
+ if (!smi->ops->is_vlan_valid(smi, vid))
+ return;
+
+- dev_info(smi->dev, "add VLAN on port %d, %s, %s\n",
++ dev_info(smi->dev, "add VLAN %d on port %d, %s, %s\n",
++ vlan->vid_begin,
+ port,
+ untagged ? "untagged" : "tagged",
+ pvid ? " PVID" : "no PVID");
+@@ -398,34 +425,29 @@ void rtl8366_vlan_add(struct dsa_switch *ds, int port,
+ dev_err(smi->dev, "port is DSA or CPU port\n");
+
+ for (vid = vlan->vid_begin; vid <= vlan->vid_end; vid++) {
+- int pvid_val = 0;
+-
+- dev_info(smi->dev, "add VLAN %04x\n", vid);
+ member |= BIT(port);
+
+ if (untagged)
+ untag |= BIT(port);
+
+- /* To ensure that we have a valid MC entry for this VLAN,
+- * initialize the port VLAN ID here.
+- */
+- ret = rtl8366_get_pvid(smi, port, &pvid_val);
+- if (ret < 0) {
+- dev_err(smi->dev, "could not lookup PVID for port %d\n",
+- port);
+- return;
+- }
+- if (pvid_val == 0) {
+- ret = rtl8366_set_pvid(smi, port, vid);
+- if (ret < 0)
+- return;
+- }
+-
+ ret = rtl8366_set_vlan(smi, vid, member, untag, 0);
+ if (ret)
+ dev_err(smi->dev,
+ "failed to set up VLAN %04x",
+ vid);
++
++ if (!pvid)
++ continue;
++
++ ret = rtl8366_set_pvid(smi, port, vid);
++ if (ret)
++ dev_err(smi->dev,
++ "failed to set PVID on port %d to VLAN %04x",
++ port, vid);
++
++ if (!ret)
++ dev_dbg(smi->dev, "VLAN add: added VLAN %d with PVID on port %d\n",
++ vid, port);
+ }
+ }
+ EXPORT_SYMBOL_GPL(rtl8366_vlan_add);
+diff --git a/drivers/net/dsa/rtl8366rb.c b/drivers/net/dsa/rtl8366rb.c
+index fd1977590cb4b..c83b332656a4b 100644
+--- a/drivers/net/dsa/rtl8366rb.c
++++ b/drivers/net/dsa/rtl8366rb.c
+@@ -1270,7 +1270,7 @@ static bool rtl8366rb_is_vlan_valid(struct realtek_smi *smi, unsigned int vlan)
+ if (smi->vlan4k_enabled)
+ max = RTL8366RB_NUM_VIDS - 1;
+
+- if (vlan == 0 || vlan >= max)
++ if (vlan == 0 || vlan > max)
+ return false;
+
+ return true;
+diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_tc_flower.c b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_tc_flower.c
+index 59b65d4db086e..dff564e1cfc7f 100644
+--- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_tc_flower.c
++++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_tc_flower.c
+@@ -60,6 +60,89 @@ static struct ch_tc_pedit_fields pedits[] = {
+ PEDIT_FIELDS(IP6_, DST_127_96, 4, nat_lip, 12),
+ };
+
++static const struct cxgb4_natmode_config cxgb4_natmode_config_array[] = {
++ /* Default supported NAT modes */
++ {
++ .chip = CHELSIO_T5,
++ .flags = CXGB4_ACTION_NATMODE_NONE,
++ .natmode = NAT_MODE_NONE,
++ },
++ {
++ .chip = CHELSIO_T5,
++ .flags = CXGB4_ACTION_NATMODE_DIP,
++ .natmode = NAT_MODE_DIP,
++ },
++ {
++ .chip = CHELSIO_T5,
++ .flags = CXGB4_ACTION_NATMODE_DIP | CXGB4_ACTION_NATMODE_DPORT,
++ .natmode = NAT_MODE_DIP_DP,
++ },
++ {
++ .chip = CHELSIO_T5,
++ .flags = CXGB4_ACTION_NATMODE_DIP | CXGB4_ACTION_NATMODE_DPORT |
++ CXGB4_ACTION_NATMODE_SIP,
++ .natmode = NAT_MODE_DIP_DP_SIP,
++ },
++ {
++ .chip = CHELSIO_T5,
++ .flags = CXGB4_ACTION_NATMODE_DIP | CXGB4_ACTION_NATMODE_DPORT |
++ CXGB4_ACTION_NATMODE_SPORT,
++ .natmode = NAT_MODE_DIP_DP_SP,
++ },
++ {
++ .chip = CHELSIO_T5,
++ .flags = CXGB4_ACTION_NATMODE_SIP | CXGB4_ACTION_NATMODE_SPORT,
++ .natmode = NAT_MODE_SIP_SP,
++ },
++ {
++ .chip = CHELSIO_T5,
++ .flags = CXGB4_ACTION_NATMODE_DIP | CXGB4_ACTION_NATMODE_SIP |
++ CXGB4_ACTION_NATMODE_SPORT,
++ .natmode = NAT_MODE_DIP_SIP_SP,
++ },
++ {
++ .chip = CHELSIO_T5,
++ .flags = CXGB4_ACTION_NATMODE_DIP | CXGB4_ACTION_NATMODE_SIP |
++ CXGB4_ACTION_NATMODE_DPORT |
++ CXGB4_ACTION_NATMODE_SPORT,
++ .natmode = NAT_MODE_ALL,
++ },
++ /* T6+ can ignore L4 ports when they're disabled. */
++ {
++ .chip = CHELSIO_T6,
++ .flags = CXGB4_ACTION_NATMODE_SIP,
++ .natmode = NAT_MODE_SIP_SP,
++ },
++ {
++ .chip = CHELSIO_T6,
++ .flags = CXGB4_ACTION_NATMODE_DIP | CXGB4_ACTION_NATMODE_SPORT,
++ .natmode = NAT_MODE_DIP_DP_SP,
++ },
++ {
++ .chip = CHELSIO_T6,
++ .flags = CXGB4_ACTION_NATMODE_DIP | CXGB4_ACTION_NATMODE_SIP,
++ .natmode = NAT_MODE_ALL,
++ },
++};
++
++static void cxgb4_action_natmode_tweak(struct ch_filter_specification *fs,
++ u8 natmode_flags)
++{
++ u8 i = 0;
++
++ /* Translate the enabled NAT 4-tuple fields to one of the
++ * hardware supported NAT mode configurations. This ensures
++ * that we pick a valid combination, where the disabled fields
++ * do not get overwritten to 0.
++ */
++ for (i = 0; i < ARRAY_SIZE(cxgb4_natmode_config_array); i++) {
++ if (cxgb4_natmode_config_array[i].flags == natmode_flags) {
++ fs->nat_mode = cxgb4_natmode_config_array[i].natmode;
++ return;
++ }
++ }
++}
++
+ static struct ch_tc_flower_entry *allocate_flower_entry(void)
+ {
+ struct ch_tc_flower_entry *new = kzalloc(sizeof(*new), GFP_KERNEL);
+@@ -287,7 +370,8 @@ static void offload_pedit(struct ch_filter_specification *fs, u32 val, u32 mask,
+ }
+
+ static void process_pedit_field(struct ch_filter_specification *fs, u32 val,
+- u32 mask, u32 offset, u8 htype)
++ u32 mask, u32 offset, u8 htype,
++ u8 *natmode_flags)
+ {
+ switch (htype) {
+ case FLOW_ACT_MANGLE_HDR_TYPE_ETH:
+@@ -312,60 +396,94 @@ static void process_pedit_field(struct ch_filter_specification *fs, u32 val,
+ switch (offset) {
+ case PEDIT_IP4_SRC:
+ offload_pedit(fs, val, mask, IP4_SRC);
++ *natmode_flags |= CXGB4_ACTION_NATMODE_SIP;
+ break;
+ case PEDIT_IP4_DST:
+ offload_pedit(fs, val, mask, IP4_DST);
++ *natmode_flags |= CXGB4_ACTION_NATMODE_DIP;
+ }
+- fs->nat_mode = NAT_MODE_ALL;
+ break;
+ case FLOW_ACT_MANGLE_HDR_TYPE_IP6:
+ switch (offset) {
+ case PEDIT_IP6_SRC_31_0:
+ offload_pedit(fs, val, mask, IP6_SRC_31_0);
++ *natmode_flags |= CXGB4_ACTION_NATMODE_SIP;
+ break;
+ case PEDIT_IP6_SRC_63_32:
+ offload_pedit(fs, val, mask, IP6_SRC_63_32);
++ *natmode_flags |= CXGB4_ACTION_NATMODE_SIP;
+ break;
+ case PEDIT_IP6_SRC_95_64:
+ offload_pedit(fs, val, mask, IP6_SRC_95_64);
++ *natmode_flags |= CXGB4_ACTION_NATMODE_SIP;
+ break;
+ case PEDIT_IP6_SRC_127_96:
+ offload_pedit(fs, val, mask, IP6_SRC_127_96);
++ *natmode_flags |= CXGB4_ACTION_NATMODE_SIP;
+ break;
+ case PEDIT_IP6_DST_31_0:
+ offload_pedit(fs, val, mask, IP6_DST_31_0);
++ *natmode_flags |= CXGB4_ACTION_NATMODE_DIP;
+ break;
+ case PEDIT_IP6_DST_63_32:
+ offload_pedit(fs, val, mask, IP6_DST_63_32);
++ *natmode_flags |= CXGB4_ACTION_NATMODE_DIP;
+ break;
+ case PEDIT_IP6_DST_95_64:
+ offload_pedit(fs, val, mask, IP6_DST_95_64);
++ *natmode_flags |= CXGB4_ACTION_NATMODE_DIP;
+ break;
+ case PEDIT_IP6_DST_127_96:
+ offload_pedit(fs, val, mask, IP6_DST_127_96);
++ *natmode_flags |= CXGB4_ACTION_NATMODE_DIP;
+ }
+- fs->nat_mode = NAT_MODE_ALL;
+ break;
+ case FLOW_ACT_MANGLE_HDR_TYPE_TCP:
+ switch (offset) {
+ case PEDIT_TCP_SPORT_DPORT:
+- if (~mask & PEDIT_TCP_UDP_SPORT_MASK)
++ if (~mask & PEDIT_TCP_UDP_SPORT_MASK) {
+ fs->nat_fport = val;
+- else
++ *natmode_flags |= CXGB4_ACTION_NATMODE_SPORT;
++ } else {
+ fs->nat_lport = val >> 16;
++ *natmode_flags |= CXGB4_ACTION_NATMODE_DPORT;
++ }
+ }
+- fs->nat_mode = NAT_MODE_ALL;
+ break;
+ case FLOW_ACT_MANGLE_HDR_TYPE_UDP:
+ switch (offset) {
+ case PEDIT_UDP_SPORT_DPORT:
+- if (~mask & PEDIT_TCP_UDP_SPORT_MASK)
++ if (~mask & PEDIT_TCP_UDP_SPORT_MASK) {
+ fs->nat_fport = val;
+- else
++ *natmode_flags |= CXGB4_ACTION_NATMODE_SPORT;
++ } else {
+ fs->nat_lport = val >> 16;
++ *natmode_flags |= CXGB4_ACTION_NATMODE_DPORT;
++ }
+ }
+- fs->nat_mode = NAT_MODE_ALL;
++ break;
++ }
++}
++
++static int cxgb4_action_natmode_validate(struct adapter *adap, u8 natmode_flags,
++ struct netlink_ext_ack *extack)
++{
++ u8 i = 0;
++
++ /* Extract the NAT mode to enable based on what 4-tuple fields
++ * are enabled to be overwritten. This ensures that the
++ * disabled fields don't get overwritten to 0.
++ */
++ for (i = 0; i < ARRAY_SIZE(cxgb4_natmode_config_array); i++) {
++ const struct cxgb4_natmode_config *c;
++
++ c = &cxgb4_natmode_config_array[i];
++ if (CHELSIO_CHIP_VERSION(adap->params.chip) >= c->chip &&
++ natmode_flags == c->flags)
++ return 0;
+ }
++ NL_SET_ERR_MSG_MOD(extack, "Unsupported NAT mode 4-tuple combination");
++ return -EOPNOTSUPP;
+ }
+
+ void cxgb4_process_flow_actions(struct net_device *in,
+@@ -373,6 +491,7 @@ void cxgb4_process_flow_actions(struct net_device *in,
+ struct ch_filter_specification *fs)
+ {
+ struct flow_action_entry *act;
++ u8 natmode_flags = 0;
+ int i;
+
+ flow_action_for_each(i, act, actions) {
+@@ -423,13 +542,17 @@ void cxgb4_process_flow_actions(struct net_device *in,
+ val = act->mangle.val;
+ offset = act->mangle.offset;
+
+- process_pedit_field(fs, val, mask, offset, htype);
++ process_pedit_field(fs, val, mask, offset, htype,
++ &natmode_flags);
+ }
+ break;
+ default:
+ break;
+ }
+ }
++ if (natmode_flags)
++ cxgb4_action_natmode_tweak(fs, natmode_flags);
++
+ }
+
+ static bool valid_l4_mask(u32 mask)
+@@ -446,7 +569,8 @@ static bool valid_l4_mask(u32 mask)
+ }
+
+ static bool valid_pedit_action(struct net_device *dev,
+- const struct flow_action_entry *act)
++ const struct flow_action_entry *act,
++ u8 *natmode_flags)
+ {
+ u32 mask, offset;
+ u8 htype;
+@@ -471,7 +595,10 @@ static bool valid_pedit_action(struct net_device *dev,
+ case FLOW_ACT_MANGLE_HDR_TYPE_IP4:
+ switch (offset) {
+ case PEDIT_IP4_SRC:
++ *natmode_flags |= CXGB4_ACTION_NATMODE_SIP;
++ break;
+ case PEDIT_IP4_DST:
++ *natmode_flags |= CXGB4_ACTION_NATMODE_DIP;
+ break;
+ default:
+ netdev_err(dev, "%s: Unsupported pedit field\n",
+@@ -485,10 +612,13 @@ static bool valid_pedit_action(struct net_device *dev,
+ case PEDIT_IP6_SRC_63_32:
+ case PEDIT_IP6_SRC_95_64:
+ case PEDIT_IP6_SRC_127_96:
++ *natmode_flags |= CXGB4_ACTION_NATMODE_SIP;
++ break;
+ case PEDIT_IP6_DST_31_0:
+ case PEDIT_IP6_DST_63_32:
+ case PEDIT_IP6_DST_95_64:
+ case PEDIT_IP6_DST_127_96:
++ *natmode_flags |= CXGB4_ACTION_NATMODE_DIP;
+ break;
+ default:
+ netdev_err(dev, "%s: Unsupported pedit field\n",
+@@ -504,6 +634,10 @@ static bool valid_pedit_action(struct net_device *dev,
+ __func__);
+ return false;
+ }
++ if (~mask & PEDIT_TCP_UDP_SPORT_MASK)
++ *natmode_flags |= CXGB4_ACTION_NATMODE_SPORT;
++ else
++ *natmode_flags |= CXGB4_ACTION_NATMODE_DPORT;
+ break;
+ default:
+ netdev_err(dev, "%s: Unsupported pedit field\n",
+@@ -519,6 +653,10 @@ static bool valid_pedit_action(struct net_device *dev,
+ __func__);
+ return false;
+ }
++ if (~mask & PEDIT_TCP_UDP_SPORT_MASK)
++ *natmode_flags |= CXGB4_ACTION_NATMODE_SPORT;
++ else
++ *natmode_flags |= CXGB4_ACTION_NATMODE_DPORT;
+ break;
+ default:
+ netdev_err(dev, "%s: Unsupported pedit field\n",
+@@ -537,10 +675,12 @@ int cxgb4_validate_flow_actions(struct net_device *dev,
+ struct flow_action *actions,
+ struct netlink_ext_ack *extack)
+ {
++ struct adapter *adap = netdev2adap(dev);
+ struct flow_action_entry *act;
+ bool act_redir = false;
+ bool act_pedit = false;
+ bool act_vlan = false;
++ u8 natmode_flags = 0;
+ int i;
+
+ if (!flow_action_basic_hw_stats_check(actions, extack))
+@@ -553,7 +693,6 @@ int cxgb4_validate_flow_actions(struct net_device *dev,
+ /* Do nothing */
+ break;
+ case FLOW_ACTION_REDIRECT: {
+- struct adapter *adap = netdev2adap(dev);
+ struct net_device *n_dev, *target_dev;
+ unsigned int i;
+ bool found = false;
+@@ -603,7 +742,8 @@ int cxgb4_validate_flow_actions(struct net_device *dev,
+ }
+ break;
+ case FLOW_ACTION_MANGLE: {
+- bool pedit_valid = valid_pedit_action(dev, act);
++ bool pedit_valid = valid_pedit_action(dev, act,
++ &natmode_flags);
+
+ if (!pedit_valid)
+ return -EOPNOTSUPP;
+@@ -622,6 +762,15 @@ int cxgb4_validate_flow_actions(struct net_device *dev,
+ return -EINVAL;
+ }
+
++ if (act_pedit) {
++ int ret;
++
++ ret = cxgb4_action_natmode_validate(adap, natmode_flags,
++ extack);
++ if (ret)
++ return ret;
++ }
++
+ return 0;
+ }
+
+diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_tc_flower.h b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_tc_flower.h
+index 0a30c96b81ffa..95142b1a88af6 100644
+--- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_tc_flower.h
++++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_tc_flower.h
+@@ -108,6 +108,21 @@ struct ch_tc_pedit_fields {
+ #define PEDIT_TCP_SPORT_DPORT 0x0
+ #define PEDIT_UDP_SPORT_DPORT 0x0
+
++enum cxgb4_action_natmode_flags {
++ CXGB4_ACTION_NATMODE_NONE = 0,
++ CXGB4_ACTION_NATMODE_DIP = (1 << 0),
++ CXGB4_ACTION_NATMODE_SIP = (1 << 1),
++ CXGB4_ACTION_NATMODE_DPORT = (1 << 2),
++ CXGB4_ACTION_NATMODE_SPORT = (1 << 3),
++};
++
++/* TC PEDIT action to NATMODE translation entry */
++struct cxgb4_natmode_config {
++ enum chip_type chip;
++ u8 flags;
++ u8 natmode;
++};
++
+ void cxgb4_process_flow_actions(struct net_device *in,
+ struct flow_action *actions,
+ struct ch_filter_specification *fs);
+diff --git a/drivers/net/ethernet/cisco/enic/enic.h b/drivers/net/ethernet/cisco/enic/enic.h
+index 18f3aeb88f22a..c67a16a48d624 100644
+--- a/drivers/net/ethernet/cisco/enic/enic.h
++++ b/drivers/net/ethernet/cisco/enic/enic.h
+@@ -169,6 +169,7 @@ struct enic {
+ u16 num_vfs;
+ #endif
+ spinlock_t enic_api_lock;
++ bool enic_api_busy;
+ struct enic_port_profile *pp;
+
+ /* work queue cache line section */
+diff --git a/drivers/net/ethernet/cisco/enic/enic_api.c b/drivers/net/ethernet/cisco/enic/enic_api.c
+index b161f24522b87..b028ea2dec2b9 100644
+--- a/drivers/net/ethernet/cisco/enic/enic_api.c
++++ b/drivers/net/ethernet/cisco/enic/enic_api.c
+@@ -34,6 +34,12 @@ int enic_api_devcmd_proxy_by_index(struct net_device *netdev, int vf,
+ struct vnic_dev *vdev = enic->vdev;
+
+ spin_lock(&enic->enic_api_lock);
++ while (enic->enic_api_busy) {
++ spin_unlock(&enic->enic_api_lock);
++ cpu_relax();
++ spin_lock(&enic->enic_api_lock);
++ }
++
+ spin_lock_bh(&enic->devcmd_lock);
+
+ vnic_dev_cmd_proxy_by_index_start(vdev, vf);
+diff --git a/drivers/net/ethernet/cisco/enic/enic_main.c b/drivers/net/ethernet/cisco/enic/enic_main.c
+index cd5fe4f6b54ce..21093f33d2d73 100644
+--- a/drivers/net/ethernet/cisco/enic/enic_main.c
++++ b/drivers/net/ethernet/cisco/enic/enic_main.c
+@@ -2140,8 +2140,6 @@ static int enic_dev_wait(struct vnic_dev *vdev,
+ int done;
+ int err;
+
+- BUG_ON(in_interrupt());
+-
+ err = start(vdev, arg);
+ if (err)
+ return err;
+@@ -2329,6 +2327,13 @@ static int enic_set_rss_nic_cfg(struct enic *enic)
+ rss_hash_bits, rss_base_cpu, rss_enable);
+ }
+
++static void enic_set_api_busy(struct enic *enic, bool busy)
++{
++ spin_lock(&enic->enic_api_lock);
++ enic->enic_api_busy = busy;
++ spin_unlock(&enic->enic_api_lock);
++}
++
+ static void enic_reset(struct work_struct *work)
+ {
+ struct enic *enic = container_of(work, struct enic, reset);
+@@ -2338,7 +2343,9 @@ static void enic_reset(struct work_struct *work)
+
+ rtnl_lock();
+
+- spin_lock(&enic->enic_api_lock);
++ /* Stop any activity from infiniband */
++ enic_set_api_busy(enic, true);
++
+ enic_stop(enic->netdev);
+ enic_dev_soft_reset(enic);
+ enic_reset_addr_lists(enic);
+@@ -2346,7 +2353,10 @@ static void enic_reset(struct work_struct *work)
+ enic_set_rss_nic_cfg(enic);
+ enic_dev_set_ig_vlan_rewrite_mode(enic);
+ enic_open(enic->netdev);
+- spin_unlock(&enic->enic_api_lock);
++
++ /* Allow infiniband to fiddle with the device again */
++ enic_set_api_busy(enic, false);
++
+ call_netdevice_notifiers(NETDEV_REBOOT, enic->netdev);
+
+ rtnl_unlock();
+@@ -2358,7 +2368,9 @@ static void enic_tx_hang_reset(struct work_struct *work)
+
+ rtnl_lock();
+
+- spin_lock(&enic->enic_api_lock);
++ /* Stop any activity from infiniband */
++ enic_set_api_busy(enic, true);
++
+ enic_dev_hang_notify(enic);
+ enic_stop(enic->netdev);
+ enic_dev_hang_reset(enic);
+@@ -2367,7 +2379,10 @@ static void enic_tx_hang_reset(struct work_struct *work)
+ enic_set_rss_nic_cfg(enic);
+ enic_dev_set_ig_vlan_rewrite_mode(enic);
+ enic_open(enic->netdev);
+- spin_unlock(&enic->enic_api_lock);
++
++ /* Allow infiniband to fiddle with the device again */
++ enic_set_api_busy(enic, false);
++
+ call_netdevice_notifiers(NETDEV_REBOOT, enic->netdev);
+
+ rtnl_unlock();
+diff --git a/drivers/net/ethernet/faraday/ftgmac100.c b/drivers/net/ethernet/faraday/ftgmac100.c
+index 87236206366fd..00024dd411471 100644
+--- a/drivers/net/ethernet/faraday/ftgmac100.c
++++ b/drivers/net/ethernet/faraday/ftgmac100.c
+@@ -1817,6 +1817,11 @@ static int ftgmac100_probe(struct platform_device *pdev)
+ priv->rxdes0_edorr_mask = BIT(30);
+ priv->txdes0_edotr_mask = BIT(30);
+ priv->is_aspeed = true;
++ /* Disable ast2600 problematic HW arbitration */
++ if (of_device_is_compatible(np, "aspeed,ast2600-mac")) {
++ iowrite32(FTGMAC100_TM_DEFAULT,
++ priv->base + FTGMAC100_OFFSET_TM);
++ }
+ } else {
+ priv->rxdes0_edorr_mask = BIT(15);
+ priv->txdes0_edotr_mask = BIT(15);
+diff --git a/drivers/net/ethernet/faraday/ftgmac100.h b/drivers/net/ethernet/faraday/ftgmac100.h
+index e5876a3fda91d..63b3e02fab162 100644
+--- a/drivers/net/ethernet/faraday/ftgmac100.h
++++ b/drivers/net/ethernet/faraday/ftgmac100.h
+@@ -169,6 +169,14 @@
+ #define FTGMAC100_MACCR_FAST_MODE (1 << 19)
+ #define FTGMAC100_MACCR_SW_RST (1 << 31)
+
++/*
++ * test mode control register
++ */
++#define FTGMAC100_TM_RQ_TX_VALID_DIS (1 << 28)
++#define FTGMAC100_TM_RQ_RR_IDLE_PREV (1 << 27)
++#define FTGMAC100_TM_DEFAULT \
++ (FTGMAC100_TM_RQ_TX_VALID_DIS | FTGMAC100_TM_RQ_RR_IDLE_PREV)
++
+ /*
+ * PHY control register
+ */
+diff --git a/drivers/net/ethernet/freescale/fec_main.c b/drivers/net/ethernet/freescale/fec_main.c
+index 534fcc71a2a53..e1cd795556294 100644
+--- a/drivers/net/ethernet/freescale/fec_main.c
++++ b/drivers/net/ethernet/freescale/fec_main.c
+@@ -1913,6 +1913,27 @@ out:
+ return ret;
+ }
+
++static void fec_enet_phy_reset_after_clk_enable(struct net_device *ndev)
++{
++ struct fec_enet_private *fep = netdev_priv(ndev);
++ struct phy_device *phy_dev = ndev->phydev;
++
++ if (phy_dev) {
++ phy_reset_after_clk_enable(phy_dev);
++ } else if (fep->phy_node) {
++ /*
++ * If the PHY still is not bound to the MAC, but there is
++ * OF PHY node and a matching PHY device instance already,
++ * use the OF PHY node to obtain the PHY device instance,
++ * and then use that PHY device instance when triggering
++ * the PHY reset.
++ */
++ phy_dev = of_phy_find_device(fep->phy_node);
++ phy_reset_after_clk_enable(phy_dev);
++ put_device(&phy_dev->mdio.dev);
++ }
++}
++
+ static int fec_enet_clk_enable(struct net_device *ndev, bool enable)
+ {
+ struct fec_enet_private *fep = netdev_priv(ndev);
+@@ -1939,7 +1960,7 @@ static int fec_enet_clk_enable(struct net_device *ndev, bool enable)
+ if (ret)
+ goto failed_clk_ref;
+
+- phy_reset_after_clk_enable(ndev->phydev);
++ fec_enet_phy_reset_after_clk_enable(ndev);
+ } else {
+ clk_disable_unprepare(fep->clk_enet_out);
+ if (fep->clk_ptp) {
+@@ -2985,16 +3006,16 @@ fec_enet_open(struct net_device *ndev)
+ /* Init MAC prior to mii bus probe */
+ fec_restart(ndev);
+
+- /* Probe and connect to PHY when open the interface */
+- ret = fec_enet_mii_probe(ndev);
+- if (ret)
+- goto err_enet_mii_probe;
+-
+ /* Call phy_reset_after_clk_enable() again if it failed during
+ * phy_reset_after_clk_enable() before because the PHY wasn't probed.
+ */
+ if (reset_again)
+- phy_reset_after_clk_enable(ndev->phydev);
++ fec_enet_phy_reset_after_clk_enable(ndev);
++
++ /* Probe and connect to PHY when open the interface */
++ ret = fec_enet_mii_probe(ndev);
++ if (ret)
++ goto err_enet_mii_probe;
+
+ if (fep->quirks & FEC_QUIRK_ERR006687)
+ imx6q_cpuidle_fec_irqs_used();
+diff --git a/drivers/net/ethernet/ibm/ibmveth.c b/drivers/net/ethernet/ibm/ibmveth.c
+index c5c732601e35e..7ef3369953b6a 100644
+--- a/drivers/net/ethernet/ibm/ibmveth.c
++++ b/drivers/net/ethernet/ibm/ibmveth.c
+@@ -1349,6 +1349,7 @@ static int ibmveth_poll(struct napi_struct *napi, int budget)
+ int offset = ibmveth_rxq_frame_offset(adapter);
+ int csum_good = ibmveth_rxq_csum_good(adapter);
+ int lrg_pkt = ibmveth_rxq_large_packet(adapter);
++ __sum16 iph_check = 0;
+
+ skb = ibmveth_rxq_get_buffer(adapter);
+
+@@ -1385,16 +1386,26 @@ static int ibmveth_poll(struct napi_struct *napi, int budget)
+ skb_put(skb, length);
+ skb->protocol = eth_type_trans(skb, netdev);
+
+- if (csum_good) {
+- skb->ip_summed = CHECKSUM_UNNECESSARY;
+- ibmveth_rx_csum_helper(skb, adapter);
++ /* PHYP without PLSO support places a -1 in the ip
++ * checksum for large send frames.
++ */
++ if (skb->protocol == cpu_to_be16(ETH_P_IP)) {
++ struct iphdr *iph = (struct iphdr *)skb->data;
++
++ iph_check = iph->check;
+ }
+
+- if (length > netdev->mtu + ETH_HLEN) {
++ if ((length > netdev->mtu + ETH_HLEN) ||
++ lrg_pkt || iph_check == 0xffff) {
+ ibmveth_rx_mss_helper(skb, mss, lrg_pkt);
+ adapter->rx_large_packets++;
+ }
+
++ if (csum_good) {
++ skb->ip_summed = CHECKSUM_UNNECESSARY;
++ ibmveth_rx_csum_helper(skb, adapter);
++ }
++
+ napi_gro_receive(napi, skb); /* send it up */
+
+ netdev->stats.rx_packets++;
+diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c
+index 1b702a43a5d01..3e0aab04d86fb 100644
+--- a/drivers/net/ethernet/ibm/ibmvnic.c
++++ b/drivers/net/ethernet/ibm/ibmvnic.c
+@@ -4194,8 +4194,13 @@ static int handle_change_mac_rsp(union ibmvnic_crq *crq,
+ dev_err(dev, "Error %ld in CHANGE_MAC_ADDR_RSP\n", rc);
+ goto out;
+ }
++ /* crq->change_mac_addr.mac_addr is the requested one
++ * crq->change_mac_addr_rsp.mac_addr is the returned valid one.
++ */
+ ether_addr_copy(netdev->dev_addr,
+ &crq->change_mac_addr_rsp.mac_addr[0]);
++ ether_addr_copy(adapter->mac_addr,
++ &crq->change_mac_addr_rsp.mac_addr[0]);
+ out:
+ complete(&adapter->fw_done);
+ return rc;
+@@ -4605,7 +4610,7 @@ static int handle_query_phys_parms_rsp(union ibmvnic_crq *crq,
+ case IBMVNIC_1GBPS:
+ adapter->speed = SPEED_1000;
+ break;
+- case IBMVNIC_10GBP:
++ case IBMVNIC_10GBPS:
+ adapter->speed = SPEED_10000;
+ break;
+ case IBMVNIC_25GBPS:
+@@ -4620,6 +4625,9 @@ static int handle_query_phys_parms_rsp(union ibmvnic_crq *crq,
+ case IBMVNIC_100GBPS:
+ adapter->speed = SPEED_100000;
+ break;
++ case IBMVNIC_200GBPS:
++ adapter->speed = SPEED_200000;
++ break;
+ default:
+ if (netif_carrier_ok(netdev))
+ netdev_warn(netdev, "Unknown speed 0x%08x\n", rspeed);
+diff --git a/drivers/net/ethernet/ibm/ibmvnic.h b/drivers/net/ethernet/ibm/ibmvnic.h
+index f8416e1d4cf09..43feb96b0a68a 100644
+--- a/drivers/net/ethernet/ibm/ibmvnic.h
++++ b/drivers/net/ethernet/ibm/ibmvnic.h
+@@ -373,7 +373,7 @@ struct ibmvnic_phys_parms {
+ #define IBMVNIC_10MBPS 0x40000000
+ #define IBMVNIC_100MBPS 0x20000000
+ #define IBMVNIC_1GBPS 0x10000000
+-#define IBMVNIC_10GBP 0x08000000
++#define IBMVNIC_10GBPS 0x08000000
+ #define IBMVNIC_40GBPS 0x04000000
+ #define IBMVNIC_100GBPS 0x02000000
+ #define IBMVNIC_25GBPS 0x01000000
+diff --git a/drivers/net/ethernet/korina.c b/drivers/net/ethernet/korina.c
+index 03e034918d147..bf48f0ded9c7d 100644
+--- a/drivers/net/ethernet/korina.c
++++ b/drivers/net/ethernet/korina.c
+@@ -1113,7 +1113,7 @@ out:
+ return rc;
+
+ probe_err_register:
+- kfree(lp->td_ring);
++ kfree((struct dma_desc *)KSEG0ADDR(lp->td_ring));
+ probe_err_td_ring:
+ iounmap(lp->tx_dma_regs);
+ probe_err_dma_tx:
+@@ -1133,6 +1133,7 @@ static int korina_remove(struct platform_device *pdev)
+ iounmap(lp->eth_regs);
+ iounmap(lp->rx_dma_regs);
+ iounmap(lp->tx_dma_regs);
++ kfree((struct dma_desc *)KSEG0ADDR(lp->td_ring));
+
+ unregister_netdev(bif->dev);
+ free_netdev(bif->dev);
+diff --git a/drivers/net/ethernet/mediatek/Kconfig b/drivers/net/ethernet/mediatek/Kconfig
+index 62a820b1eb163..3362b148de23c 100644
+--- a/drivers/net/ethernet/mediatek/Kconfig
++++ b/drivers/net/ethernet/mediatek/Kconfig
+@@ -17,6 +17,7 @@ config NET_MEDIATEK_SOC
+ config NET_MEDIATEK_STAR_EMAC
+ tristate "MediaTek STAR Ethernet MAC support"
+ select PHYLIB
++ select REGMAP_MMIO
+ help
+ This driver supports the ethernet MAC IP first used on
+ MediaTek MT85** SoCs.
+diff --git a/drivers/net/ethernet/mellanox/mlx4/en_rx.c b/drivers/net/ethernet/mellanox/mlx4/en_rx.c
+index 8a10285b0e10c..89edcb5fca4fb 100644
+--- a/drivers/net/ethernet/mellanox/mlx4/en_rx.c
++++ b/drivers/net/ethernet/mellanox/mlx4/en_rx.c
+@@ -943,6 +943,9 @@ int mlx4_en_poll_rx_cq(struct napi_struct *napi, int budget)
+ bool clean_complete = true;
+ int done;
+
++ if (!budget)
++ return 0;
++
+ if (priv->tx_ring_num[TX_XDP]) {
+ xdp_tx_cq = priv->tx_cq[TX_XDP][cq->ring];
+ if (xdp_tx_cq->xdp_busy) {
+diff --git a/drivers/net/ethernet/mellanox/mlx4/en_tx.c b/drivers/net/ethernet/mellanox/mlx4/en_tx.c
+index 9dff7b086c9fb..1f11379ad5b64 100644
+--- a/drivers/net/ethernet/mellanox/mlx4/en_tx.c
++++ b/drivers/net/ethernet/mellanox/mlx4/en_tx.c
+@@ -350,7 +350,7 @@ u32 mlx4_en_recycle_tx_desc(struct mlx4_en_priv *priv,
+ .dma = tx_info->map0_dma,
+ };
+
+- if (!mlx4_en_rx_recycle(ring->recycle_ring, &frame)) {
++ if (!napi_mode || !mlx4_en_rx_recycle(ring->recycle_ring, &frame)) {
+ dma_unmap_page(priv->ddev, tx_info->map0_dma,
+ PAGE_SIZE, priv->dma_dir);
+ put_page(tx_info->page);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/health.c b/drivers/net/ethernet/mellanox/mlx5/core/en/health.c
+index 7283443868f3c..13c87ab50b267 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/health.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/health.c
+@@ -212,8 +212,8 @@ static int mlx5e_health_rsc_fmsg_binary(struct devlink_fmsg *fmsg,
+
+ {
+ u32 data_size;
++ int err = 0;
+ u32 offset;
+- int err;
+
+ for (offset = 0; offset < value_len; offset += data_size) {
+ data_size = value_len - offset;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/clock.c b/drivers/net/ethernet/mellanox/mlx5/core/lib/clock.c
+index 2d55b7c22c034..4e7cfa22b3d2f 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/lib/clock.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/lib/clock.c
+@@ -550,8 +550,9 @@ static int mlx5_pps_event(struct notifier_block *nb,
+ switch (clock->ptp_info.pin_config[pin].func) {
+ case PTP_PF_EXTTS:
+ ptp_event.index = pin;
+- ptp_event.timestamp = timecounter_cyc2time(&clock->tc,
+- be64_to_cpu(eqe->data.pps.time_stamp));
++ ptp_event.timestamp =
++ mlx5_timecounter_cyc2time(clock,
++ be64_to_cpu(eqe->data.pps.time_stamp));
+ if (clock->pps_info.enabled) {
+ ptp_event.type = PTP_CLOCK_PPSUSR;
+ ptp_event.pps_times.ts_real =
+diff --git a/drivers/net/ethernet/realtek/r8169_main.c b/drivers/net/ethernet/realtek/r8169_main.c
+index fe173ea894e2c..b1feef473b746 100644
+--- a/drivers/net/ethernet/realtek/r8169_main.c
++++ b/drivers/net/ethernet/realtek/r8169_main.c
+@@ -4675,7 +4675,7 @@ static int rtl8169_close(struct net_device *dev)
+
+ phy_disconnect(tp->phydev);
+
+- pci_free_irq(pdev, 0, tp);
++ free_irq(pci_irq_vector(pdev, 0), tp);
+
+ dma_free_coherent(&pdev->dev, R8169_RX_RING_BYTES, tp->RxDescArray,
+ tp->RxPhyAddr);
+@@ -4726,8 +4726,8 @@ static int rtl_open(struct net_device *dev)
+
+ rtl_request_firmware(tp);
+
+- retval = pci_request_irq(pdev, 0, rtl8169_interrupt, NULL, tp,
+- dev->name);
++ retval = request_irq(pci_irq_vector(pdev, 0), rtl8169_interrupt,
++ IRQF_NO_THREAD | IRQF_SHARED, dev->name, tp);
+ if (retval < 0)
+ goto err_release_fw_2;
+
+@@ -4759,7 +4759,7 @@ out:
+ return retval;
+
+ err_free_irq:
+- pci_free_irq(pdev, 0, tp);
++ free_irq(pci_irq_vector(pdev, 0), tp);
+ err_release_fw_2:
+ rtl_release_firmware(tp);
+ rtl8169_rx_clear(tp);
+@@ -4871,6 +4871,10 @@ static int __maybe_unused rtl8169_resume(struct device *device)
+ if (netif_running(tp->dev))
+ __rtl8169_resume(tp);
+
++ /* Reportedly at least Asus X453MA truncates packets otherwise */
++ if (tp->mac_version == RTL_GIGA_MAC_VER_37)
++ rtl_init_rxcfg(tp);
++
+ return 0;
+ }
+
+diff --git a/drivers/net/ethernet/socionext/netsec.c b/drivers/net/ethernet/socionext/netsec.c
+index 0f366cc50b74c..7f8be61a37089 100644
+--- a/drivers/net/ethernet/socionext/netsec.c
++++ b/drivers/net/ethernet/socionext/netsec.c
+@@ -6,6 +6,7 @@
+ #include <linux/pm_runtime.h>
+ #include <linux/acpi.h>
+ #include <linux/of_mdio.h>
++#include <linux/of_net.h>
+ #include <linux/etherdevice.h>
+ #include <linux/interrupt.h>
+ #include <linux/io.h>
+@@ -1836,6 +1837,14 @@ static const struct net_device_ops netsec_netdev_ops = {
+ static int netsec_of_probe(struct platform_device *pdev,
+ struct netsec_priv *priv, u32 *phy_addr)
+ {
++ int err;
++
++ err = of_get_phy_mode(pdev->dev.of_node, &priv->phy_interface);
++ if (err) {
++ dev_err(&pdev->dev, "missing required property 'phy-mode'\n");
++ return err;
++ }
++
+ priv->phy_np = of_parse_phandle(pdev->dev.of_node, "phy-handle", 0);
+ if (!priv->phy_np) {
+ dev_err(&pdev->dev, "missing required property 'phy-handle'\n");
+@@ -1862,6 +1871,14 @@ static int netsec_acpi_probe(struct platform_device *pdev,
+ if (!IS_ENABLED(CONFIG_ACPI))
+ return -ENODEV;
+
++ /* ACPI systems are assumed to configure the PHY in firmware, so
++ * there is really no need to discover the PHY mode from the DSDT.
++ * Since firmware is known to exist in the field that configures the
++ * PHY correctly but passes the wrong mode string in the phy-mode
++ * device property, we have no choice but to ignore it.
++ */
++ priv->phy_interface = PHY_INTERFACE_MODE_NA;
++
+ ret = device_property_read_u32(&pdev->dev, "phy-channel", phy_addr);
+ if (ret) {
+ dev_err(&pdev->dev,
+@@ -1998,13 +2015,6 @@ static int netsec_probe(struct platform_device *pdev)
+ priv->msg_enable = NETIF_MSG_TX_ERR | NETIF_MSG_HW | NETIF_MSG_DRV |
+ NETIF_MSG_LINK | NETIF_MSG_PROBE;
+
+- priv->phy_interface = device_get_phy_mode(&pdev->dev);
+- if ((int)priv->phy_interface < 0) {
+- dev_err(&pdev->dev, "missing required property 'phy-mode'\n");
+- ret = -ENODEV;
+- goto free_ndev;
+- }
+-
+ priv->ioaddr = devm_ioremap(&pdev->dev, mmio_res->start,
+ resource_size(mmio_res));
+ if (!priv->ioaddr) {
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+index 73465e5f5a417..d4be2559bb73d 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+@@ -176,32 +176,6 @@ static void stmmac_enable_all_queues(struct stmmac_priv *priv)
+ }
+ }
+
+-/**
+- * stmmac_stop_all_queues - Stop all queues
+- * @priv: driver private structure
+- */
+-static void stmmac_stop_all_queues(struct stmmac_priv *priv)
+-{
+- u32 tx_queues_cnt = priv->plat->tx_queues_to_use;
+- u32 queue;
+-
+- for (queue = 0; queue < tx_queues_cnt; queue++)
+- netif_tx_stop_queue(netdev_get_tx_queue(priv->dev, queue));
+-}
+-
+-/**
+- * stmmac_start_all_queues - Start all queues
+- * @priv: driver private structure
+- */
+-static void stmmac_start_all_queues(struct stmmac_priv *priv)
+-{
+- u32 tx_queues_cnt = priv->plat->tx_queues_to_use;
+- u32 queue;
+-
+- for (queue = 0; queue < tx_queues_cnt; queue++)
+- netif_tx_start_queue(netdev_get_tx_queue(priv->dev, queue));
+-}
+-
+ static void stmmac_service_event_schedule(struct stmmac_priv *priv)
+ {
+ if (!test_bit(STMMAC_DOWN, &priv->state) &&
+@@ -2736,6 +2710,10 @@ static int stmmac_hw_setup(struct net_device *dev, bool init_ptp)
+ stmmac_enable_tbs(priv, priv->ioaddr, enable, chan);
+ }
+
++ /* Configure real RX and TX queues */
++ netif_set_real_num_rx_queues(dev, priv->plat->rx_queues_to_use);
++ netif_set_real_num_tx_queues(dev, priv->plat->tx_queues_to_use);
++
+ /* Start the ball rolling... */
+ stmmac_start_all_dma(priv);
+
+@@ -2862,7 +2840,7 @@ static int stmmac_open(struct net_device *dev)
+ }
+
+ stmmac_enable_all_queues(priv);
+- stmmac_start_all_queues(priv);
++ netif_tx_start_all_queues(priv->dev);
+
+ return 0;
+
+@@ -2903,8 +2881,6 @@ static int stmmac_release(struct net_device *dev)
+ phylink_stop(priv->phylink);
+ phylink_disconnect_phy(priv->phylink);
+
+- stmmac_stop_all_queues(priv);
+-
+ stmmac_disable_all_queues(priv);
+
+ for (chan = 0; chan < priv->plat->tx_queues_to_use; chan++)
+@@ -4819,10 +4795,6 @@ int stmmac_dvr_probe(struct device *device,
+
+ stmmac_check_ether_addr(priv);
+
+- /* Configure real RX and TX queues */
+- netif_set_real_num_rx_queues(ndev, priv->plat->rx_queues_to_use);
+- netif_set_real_num_tx_queues(ndev, priv->plat->tx_queues_to_use);
+-
+ ndev->netdev_ops = &stmmac_netdev_ops;
+
+ ndev->hw_features = NETIF_F_SG | NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM |
+@@ -5078,7 +5050,6 @@ int stmmac_suspend(struct device *dev)
+ mutex_lock(&priv->lock);
+
+ netif_device_detach(ndev);
+- stmmac_stop_all_queues(priv);
+
+ stmmac_disable_all_queues(priv);
+
+@@ -5203,8 +5174,6 @@ int stmmac_resume(struct device *dev)
+
+ stmmac_enable_all_queues(priv);
+
+- stmmac_start_all_queues(priv);
+-
+ mutex_unlock(&priv->lock);
+
+ if (!device_may_wakeup(priv->device)) {
+diff --git a/drivers/net/ipa/ipa_endpoint.c b/drivers/net/ipa/ipa_endpoint.c
+index 9e58e495d3731..bb46741fbe47e 100644
+--- a/drivers/net/ipa/ipa_endpoint.c
++++ b/drivers/net/ipa/ipa_endpoint.c
+@@ -1447,6 +1447,9 @@ void ipa_endpoint_resume_one(struct ipa_endpoint *endpoint)
+
+ void ipa_endpoint_suspend(struct ipa *ipa)
+ {
++ if (!ipa->setup_complete)
++ return;
++
+ if (ipa->modem_netdev)
+ ipa_modem_suspend(ipa->modem_netdev);
+
+@@ -1458,6 +1461,9 @@ void ipa_endpoint_suspend(struct ipa *ipa)
+
+ void ipa_endpoint_resume(struct ipa *ipa)
+ {
++ if (!ipa->setup_complete)
++ return;
++
+ ipa_endpoint_resume_one(ipa->name_map[IPA_ENDPOINT_AP_COMMAND_TX]);
+ ipa_endpoint_resume_one(ipa->name_map[IPA_ENDPOINT_AP_LAN_RX]);
+
+diff --git a/drivers/net/usb/qmi_wwan.c b/drivers/net/usb/qmi_wwan.c
+index 07c42c0719f5b..5ca1356b8656f 100644
+--- a/drivers/net/usb/qmi_wwan.c
++++ b/drivers/net/usb/qmi_wwan.c
+@@ -1375,6 +1375,7 @@ static const struct usb_device_id products[] = {
+ {QMI_QUIRK_SET_DTR(0x2cb7, 0x0104, 4)}, /* Fibocom NL678 series */
+ {QMI_FIXED_INTF(0x0489, 0xe0b4, 0)}, /* Foxconn T77W968 LTE */
+ {QMI_FIXED_INTF(0x0489, 0xe0b5, 0)}, /* Foxconn T77W968 LTE with eSIM support*/
++ {QMI_FIXED_INTF(0x2692, 0x9025, 4)}, /* Cellient MPL200 (rebranded Qualcomm 05c6:9025) */
+
+ /* 4. Gobi 1000 devices */
+ {QMI_GOBI1K_DEVICE(0x05c6, 0x9212)}, /* Acer Gobi Modem Device */
+diff --git a/drivers/net/wan/hdlc.c b/drivers/net/wan/hdlc.c
+index 9b00708676cf7..1bdd3df0867a5 100644
+--- a/drivers/net/wan/hdlc.c
++++ b/drivers/net/wan/hdlc.c
+@@ -46,7 +46,15 @@ static struct hdlc_proto *first_proto;
+ static int hdlc_rcv(struct sk_buff *skb, struct net_device *dev,
+ struct packet_type *p, struct net_device *orig_dev)
+ {
+- struct hdlc_device *hdlc = dev_to_hdlc(dev);
++ struct hdlc_device *hdlc;
++
++ /* First make sure "dev" is an HDLC device */
++ if (!(dev->priv_flags & IFF_WAN_HDLC)) {
++ kfree_skb(skb);
++ return NET_RX_SUCCESS;
++ }
++
++ hdlc = dev_to_hdlc(dev);
+
+ if (!net_eq(dev_net(dev), &init_net)) {
+ kfree_skb(skb);
+diff --git a/drivers/net/wan/hdlc_raw_eth.c b/drivers/net/wan/hdlc_raw_eth.c
+index 08e0a46501dec..c70a518b8b478 100644
+--- a/drivers/net/wan/hdlc_raw_eth.c
++++ b/drivers/net/wan/hdlc_raw_eth.c
+@@ -99,6 +99,7 @@ static int raw_eth_ioctl(struct net_device *dev, struct ifreq *ifr)
+ old_qlen = dev->tx_queue_len;
+ ether_setup(dev);
+ dev->tx_queue_len = old_qlen;
++ dev->priv_flags &= ~IFF_TX_SKB_SHARING;
+ eth_hw_addr_random(dev);
+ call_netdevice_notifiers(NETDEV_POST_TYPE_CHANGE, dev);
+ netif_dormant_off(dev);
+diff --git a/drivers/net/wireless/ath/ath10k/ce.c b/drivers/net/wireless/ath/ath10k/ce.c
+index 294fbc1e89ab8..e6e0284e47837 100644
+--- a/drivers/net/wireless/ath/ath10k/ce.c
++++ b/drivers/net/wireless/ath/ath10k/ce.c
+@@ -1555,7 +1555,7 @@ ath10k_ce_alloc_src_ring(struct ath10k *ar, unsigned int ce_id,
+ ret = ath10k_ce_alloc_shadow_base(ar, src_ring, nentries);
+ if (ret) {
+ dma_free_coherent(ar->dev,
+- (nentries * sizeof(struct ce_desc_64) +
++ (nentries * sizeof(struct ce_desc) +
+ CE_DESC_RING_ALIGN),
+ src_ring->base_addr_owner_space_unaligned,
+ base_addr);
+diff --git a/drivers/net/wireless/ath/ath10k/htt_rx.c b/drivers/net/wireless/ath/ath10k/htt_rx.c
+index d787cbead56ab..215ade6faf328 100644
+--- a/drivers/net/wireless/ath/ath10k/htt_rx.c
++++ b/drivers/net/wireless/ath/ath10k/htt_rx.c
+@@ -142,6 +142,14 @@ static int __ath10k_htt_rx_ring_fill_n(struct ath10k_htt *htt, int num)
+ BUILD_BUG_ON(HTT_RX_RING_FILL_LEVEL >= HTT_RX_RING_SIZE / 2);
+
+ idx = __le32_to_cpu(*htt->rx_ring.alloc_idx.vaddr);
++
++ if (idx < 0 || idx >= htt->rx_ring.size) {
++ ath10k_err(htt->ar, "rx ring index is not valid, firmware malfunctioning?\n");
++ idx &= htt->rx_ring.size_mask;
++ ret = -ENOMEM;
++ goto fail;
++ }
++
+ while (num > 0) {
+ skb = dev_alloc_skb(HTT_RX_BUF_SIZE + HTT_RX_DESC_ALIGN);
+ if (!skb) {
+diff --git a/drivers/net/wireless/ath/ath10k/mac.c b/drivers/net/wireless/ath/ath10k/mac.c
+index 919d15584d4a2..77daca67a8e14 100644
+--- a/drivers/net/wireless/ath/ath10k/mac.c
++++ b/drivers/net/wireless/ath/ath10k/mac.c
+@@ -7283,7 +7283,7 @@ ath10k_mac_update_bss_chan_survey(struct ath10k *ar,
+ struct ieee80211_channel *channel)
+ {
+ int ret;
+- enum wmi_bss_survey_req_type type = WMI_BSS_SURVEY_REQ_TYPE_READ_CLEAR;
++ enum wmi_bss_survey_req_type type = WMI_BSS_SURVEY_REQ_TYPE_READ;
+
+ lockdep_assert_held(&ar->conf_mutex);
+
+diff --git a/drivers/net/wireless/ath/ath11k/ahb.c b/drivers/net/wireless/ath/ath11k/ahb.c
+index 30092841ac464..a0314c1c84653 100644
+--- a/drivers/net/wireless/ath/ath11k/ahb.c
++++ b/drivers/net/wireless/ath/ath11k/ahb.c
+@@ -981,12 +981,16 @@ err_core_free:
+ static int ath11k_ahb_remove(struct platform_device *pdev)
+ {
+ struct ath11k_base *ab = platform_get_drvdata(pdev);
++ unsigned long left;
+
+ reinit_completion(&ab->driver_recovery);
+
+- if (test_bit(ATH11K_FLAG_RECOVERY, &ab->dev_flags))
+- wait_for_completion_timeout(&ab->driver_recovery,
+- ATH11K_AHB_RECOVERY_TIMEOUT);
++ if (test_bit(ATH11K_FLAG_RECOVERY, &ab->dev_flags)) {
++ left = wait_for_completion_timeout(&ab->driver_recovery,
++ ATH11K_AHB_RECOVERY_TIMEOUT);
++ if (!left)
++ ath11k_warn(ab, "failed to receive recovery response completion\n");
++ }
+
+ set_bit(ATH11K_FLAG_UNREGISTERING, &ab->dev_flags);
+ cancel_work_sync(&ab->restart_work);
+diff --git a/drivers/net/wireless/ath/ath11k/mac.c b/drivers/net/wireless/ath/ath11k/mac.c
+index 2836a0f197ab0..fc5be7e8c043e 100644
+--- a/drivers/net/wireless/ath/ath11k/mac.c
++++ b/drivers/net/wireless/ath/ath11k/mac.c
+@@ -5824,7 +5824,7 @@ static int __ath11k_mac_register(struct ath11k *ar)
+ ret = ath11k_mac_setup_channels_rates(ar,
+ cap->supported_bands);
+ if (ret)
+- goto err_free;
++ goto err;
+
+ ath11k_mac_setup_ht_vht_cap(ar, cap, &ht_cap);
+ ath11k_mac_setup_he_cap(ar, cap);
+@@ -5938,7 +5938,9 @@ static int __ath11k_mac_register(struct ath11k *ar)
+ err_free:
+ kfree(ar->mac.sbands[NL80211_BAND_2GHZ].channels);
+ kfree(ar->mac.sbands[NL80211_BAND_5GHZ].channels);
++ kfree(ar->mac.sbands[NL80211_BAND_6GHZ].channels);
+
++err:
+ SET_IEEE80211_DEV(ar->hw, NULL);
+ return ret;
+ }
+diff --git a/drivers/net/wireless/ath/ath11k/qmi.c b/drivers/net/wireless/ath/ath11k/qmi.c
+index c00a99ad8dbc1..497cff7e64cc5 100644
+--- a/drivers/net/wireless/ath/ath11k/qmi.c
++++ b/drivers/net/wireless/ath/ath11k/qmi.c
+@@ -2419,6 +2419,7 @@ int ath11k_qmi_init_service(struct ath11k_base *ab)
+ ATH11K_QMI_WLFW_SERVICE_INS_ID_V01);
+ if (ret < 0) {
+ ath11k_warn(ab, "failed to add qmi lookup\n");
++ destroy_workqueue(ab->qmi.event_wq);
+ return ret;
+ }
+
+diff --git a/drivers/net/wireless/ath/ath6kl/main.c b/drivers/net/wireless/ath/ath6kl/main.c
+index 5e7ea838a9218..814131a0680a4 100644
+--- a/drivers/net/wireless/ath/ath6kl/main.c
++++ b/drivers/net/wireless/ath/ath6kl/main.c
+@@ -430,6 +430,9 @@ void ath6kl_connect_ap_mode_sta(struct ath6kl_vif *vif, u16 aid, u8 *mac_addr,
+
+ ath6kl_dbg(ATH6KL_DBG_TRC, "new station %pM aid=%d\n", mac_addr, aid);
+
++ if (aid < 1 || aid > AP_MAX_NUM_STA)
++ return;
++
+ if (assoc_req_len > sizeof(struct ieee80211_hdr_3addr)) {
+ struct ieee80211_mgmt *mgmt =
+ (struct ieee80211_mgmt *) assoc_info;
+diff --git a/drivers/net/wireless/ath/ath6kl/wmi.c b/drivers/net/wireless/ath/ath6kl/wmi.c
+index 6885d2ded53a8..3d5db84d64650 100644
+--- a/drivers/net/wireless/ath/ath6kl/wmi.c
++++ b/drivers/net/wireless/ath/ath6kl/wmi.c
+@@ -2645,6 +2645,11 @@ int ath6kl_wmi_delete_pstream_cmd(struct wmi *wmi, u8 if_idx, u8 traffic_class,
+ return -EINVAL;
+ }
+
++ if (tsid >= 16) {
++ ath6kl_err("invalid tsid: %d\n", tsid);
++ return -EINVAL;
++ }
++
+ skb = ath6kl_wmi_get_new_buf(sizeof(*cmd));
+ if (!skb)
+ return -ENOMEM;
+diff --git a/drivers/net/wireless/ath/ath9k/hif_usb.c b/drivers/net/wireless/ath/ath9k/hif_usb.c
+index 3f563e02d17da..2ed98aaed6fb5 100644
+--- a/drivers/net/wireless/ath/ath9k/hif_usb.c
++++ b/drivers/net/wireless/ath/ath9k/hif_usb.c
+@@ -449,10 +449,19 @@ static void hif_usb_stop(void *hif_handle)
+ spin_unlock_irqrestore(&hif_dev->tx.tx_lock, flags);
+
+ /* The pending URBs have to be canceled. */
++ spin_lock_irqsave(&hif_dev->tx.tx_lock, flags);
+ list_for_each_entry_safe(tx_buf, tx_buf_tmp,
+ &hif_dev->tx.tx_pending, list) {
++ usb_get_urb(tx_buf->urb);
++ spin_unlock_irqrestore(&hif_dev->tx.tx_lock, flags);
+ usb_kill_urb(tx_buf->urb);
++ list_del(&tx_buf->list);
++ usb_free_urb(tx_buf->urb);
++ kfree(tx_buf->buf);
++ kfree(tx_buf);
++ spin_lock_irqsave(&hif_dev->tx.tx_lock, flags);
+ }
++ spin_unlock_irqrestore(&hif_dev->tx.tx_lock, flags);
+
+ usb_kill_anchored_urbs(&hif_dev->mgmt_submitted);
+ }
+@@ -762,27 +771,37 @@ static void ath9k_hif_usb_dealloc_tx_urbs(struct hif_device_usb *hif_dev)
+ struct tx_buf *tx_buf = NULL, *tx_buf_tmp = NULL;
+ unsigned long flags;
+
++ spin_lock_irqsave(&hif_dev->tx.tx_lock, flags);
+ list_for_each_entry_safe(tx_buf, tx_buf_tmp,
+ &hif_dev->tx.tx_buf, list) {
++ usb_get_urb(tx_buf->urb);
++ spin_unlock_irqrestore(&hif_dev->tx.tx_lock, flags);
+ usb_kill_urb(tx_buf->urb);
+ list_del(&tx_buf->list);
+ usb_free_urb(tx_buf->urb);
+ kfree(tx_buf->buf);
+ kfree(tx_buf);
++ spin_lock_irqsave(&hif_dev->tx.tx_lock, flags);
+ }
++ spin_unlock_irqrestore(&hif_dev->tx.tx_lock, flags);
+
+ spin_lock_irqsave(&hif_dev->tx.tx_lock, flags);
+ hif_dev->tx.flags |= HIF_USB_TX_FLUSH;
+ spin_unlock_irqrestore(&hif_dev->tx.tx_lock, flags);
+
++ spin_lock_irqsave(&hif_dev->tx.tx_lock, flags);
+ list_for_each_entry_safe(tx_buf, tx_buf_tmp,
+ &hif_dev->tx.tx_pending, list) {
++ usb_get_urb(tx_buf->urb);
++ spin_unlock_irqrestore(&hif_dev->tx.tx_lock, flags);
+ usb_kill_urb(tx_buf->urb);
+ list_del(&tx_buf->list);
+ usb_free_urb(tx_buf->urb);
+ kfree(tx_buf->buf);
+ kfree(tx_buf);
++ spin_lock_irqsave(&hif_dev->tx.tx_lock, flags);
+ }
++ spin_unlock_irqrestore(&hif_dev->tx.tx_lock, flags);
+
+ usb_kill_anchored_urbs(&hif_dev->mgmt_submitted);
+ }
+diff --git a/drivers/net/wireless/ath/ath9k/htc_hst.c b/drivers/net/wireless/ath/ath9k/htc_hst.c
+index d2e062eaf5614..510e61e97dbcb 100644
+--- a/drivers/net/wireless/ath/ath9k/htc_hst.c
++++ b/drivers/net/wireless/ath/ath9k/htc_hst.c
+@@ -339,6 +339,8 @@ void ath9k_htc_txcompletion_cb(struct htc_target *htc_handle,
+
+ if (skb) {
+ htc_hdr = (struct htc_frame_hdr *) skb->data;
++ if (htc_hdr->endpoint_id >= ARRAY_SIZE(htc_handle->endpoint))
++ goto ret;
+ endpoint = &htc_handle->endpoint[htc_hdr->endpoint_id];
+ skb_pull(skb, sizeof(struct htc_frame_hdr));
+
+diff --git a/drivers/net/wireless/ath/wcn36xx/main.c b/drivers/net/wireless/ath/wcn36xx/main.c
+index 702b689c06df3..f3ea629764fa8 100644
+--- a/drivers/net/wireless/ath/wcn36xx/main.c
++++ b/drivers/net/wireless/ath/wcn36xx/main.c
+@@ -163,7 +163,7 @@ static struct ieee80211_supported_band wcn_band_5ghz = {
+ .ampdu_density = IEEE80211_HT_MPDU_DENSITY_16,
+ .mcs = {
+ .rx_mask = { 0xff, 0, 0, 0, 0, 0, 0, 0, 0, 0, },
+- .rx_highest = cpu_to_le16(72),
++ .rx_highest = cpu_to_le16(150),
+ .tx_params = IEEE80211_HT_MCS_TX_DEFINED,
+ }
+ }
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/core.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/core.c
+index c88655acc78c7..76b478f70b4bb 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/core.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/core.c
+@@ -483,7 +483,7 @@ static int brcmf_rx_hdrpull(struct brcmf_pub *drvr, struct sk_buff *skb,
+ ret = brcmf_proto_hdrpull(drvr, true, skb, ifp);
+
+ if (ret || !(*ifp) || !(*ifp)->ndev) {
+- if (ret != -ENODATA && *ifp)
++ if (ret != -ENODATA && *ifp && (*ifp)->ndev)
+ (*ifp)->ndev->stats.rx_errors++;
+ brcmu_pkt_buf_free_skb(skb);
+ return -ENODATA;
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/msgbuf.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/msgbuf.c
+index 8bb4f1fa790e7..1bb270e782ff2 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/msgbuf.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/msgbuf.c
+@@ -1619,6 +1619,8 @@ fail:
+ BRCMF_TX_IOCTL_MAX_MSG_SIZE,
+ msgbuf->ioctbuf,
+ msgbuf->ioctbuf_handle);
++ if (msgbuf->txflow_wq)
++ destroy_workqueue(msgbuf->txflow_wq);
+ kfree(msgbuf);
+ }
+ return -ENOMEM;
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phy_lcn.c b/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phy_lcn.c
+index 7ef36234a25dc..66797dc5e90d5 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phy_lcn.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phy_lcn.c
+@@ -5065,8 +5065,10 @@ bool wlc_phy_attach_lcnphy(struct brcms_phy *pi)
+ pi->pi_fptr.radioloftget = wlc_lcnphy_get_radio_loft;
+ pi->pi_fptr.detach = wlc_phy_detach_lcnphy;
+
+- if (!wlc_phy_txpwr_srom_read_lcnphy(pi))
++ if (!wlc_phy_txpwr_srom_read_lcnphy(pi)) {
++ kfree(pi->u.pi_lcnphy);
+ return false;
++ }
+
+ if (LCNREV_IS(pi->pubpi.phy_rev, 1)) {
+ if (pi_lcn->lcnphy_tempsense_option == 3) {
+diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-dbg-tlv.c b/drivers/net/wireless/intel/iwlwifi/iwl-dbg-tlv.c
+index 27116c7d3f4f8..48269a4cf8964 100644
+--- a/drivers/net/wireless/intel/iwlwifi/iwl-dbg-tlv.c
++++ b/drivers/net/wireless/intel/iwlwifi/iwl-dbg-tlv.c
+@@ -947,9 +947,8 @@ static bool iwl_dbg_tlv_check_fw_pkt(struct iwl_fw_runtime *fwrt,
+ struct iwl_rx_packet *pkt = tp_data->fw_pkt;
+ struct iwl_cmd_header *wanted_hdr = (void *)&trig_data;
+
+- if (pkt && ((wanted_hdr->cmd == 0 && wanted_hdr->group_id == 0) ||
+- (pkt->hdr.cmd == wanted_hdr->cmd &&
+- pkt->hdr.group_id == wanted_hdr->group_id))) {
++ if (pkt && (pkt->hdr.cmd == wanted_hdr->cmd &&
++ pkt->hdr.group_id == wanted_hdr->group_id)) {
+ struct iwl_rx_packet *fw_pkt =
+ kmemdup(pkt,
+ sizeof(*pkt) + iwl_rx_packet_payload_len(pkt),
+@@ -1012,6 +1011,9 @@ static void iwl_dbg_tlv_init_cfg(struct iwl_fw_runtime *fwrt)
+ enum iwl_fw_ini_buffer_location *ini_dest = &fwrt->trans->dbg.ini_dest;
+ int ret, i;
+
++ if (*ini_dest != IWL_FW_INI_LOCATION_INVALID)
++ return;
++
+ IWL_DEBUG_FW(fwrt,
+ "WRT: Generating active triggers list, domain 0x%x\n",
+ fwrt->trans->dbg.domains_bitmap);
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c b/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
+index 77916231ff7d3..03b73003b0095 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
+@@ -3685,9 +3685,12 @@ static int iwl_mvm_send_aux_roc_cmd(struct iwl_mvm *mvm,
+ tail->apply_time_max_delay = cpu_to_le32(delay);
+
+ IWL_DEBUG_TE(mvm,
+- "ROC: Requesting to remain on channel %u for %ums (requested = %ums, max_delay = %ums, dtim_interval = %ums)\n",
+- channel->hw_value, req_dur, duration, delay,
+- dtim_interval);
++ "ROC: Requesting to remain on channel %u for %ums\n",
++ channel->hw_value, req_dur);
++ IWL_DEBUG_TE(mvm,
++ "\t(requested = %ums, max_delay = %ums, dtim_interval = %ums)\n",
++ duration, delay, dtim_interval);
++
+ /* Set the node address */
+ memcpy(tail->node_addr, vif->addr, ETH_ALEN);
+
+diff --git a/drivers/net/wireless/marvell/mwifiex/scan.c b/drivers/net/wireless/marvell/mwifiex/scan.c
+index ff932627a46c1..2fb69a590bd8e 100644
+--- a/drivers/net/wireless/marvell/mwifiex/scan.c
++++ b/drivers/net/wireless/marvell/mwifiex/scan.c
+@@ -1889,7 +1889,7 @@ mwifiex_parse_single_response_buf(struct mwifiex_private *priv, u8 **bss_info,
+ chan, CFG80211_BSS_FTYPE_UNKNOWN,
+ bssid, timestamp,
+ cap_info_bitmap, beacon_period,
+- ie_buf, ie_len, rssi, GFP_KERNEL);
++ ie_buf, ie_len, rssi, GFP_ATOMIC);
+ if (bss) {
+ bss_priv = (struct mwifiex_bss_priv *)bss->priv;
+ bss_priv->band = band;
+diff --git a/drivers/net/wireless/marvell/mwifiex/sdio.c b/drivers/net/wireless/marvell/mwifiex/sdio.c
+index a042965962a2d..1b6bee5465288 100644
+--- a/drivers/net/wireless/marvell/mwifiex/sdio.c
++++ b/drivers/net/wireless/marvell/mwifiex/sdio.c
+@@ -1976,6 +1976,8 @@ error:
+ kfree(card->mpa_rx.buf);
+ card->mpa_tx.buf_size = 0;
+ card->mpa_rx.buf_size = 0;
++ card->mpa_tx.buf = NULL;
++ card->mpa_rx.buf = NULL;
+ }
+
+ return ret;
+diff --git a/drivers/net/wireless/marvell/mwifiex/usb.c b/drivers/net/wireless/marvell/mwifiex/usb.c
+index 6f3cfde4654cc..426e39d4ccf0f 100644
+--- a/drivers/net/wireless/marvell/mwifiex/usb.c
++++ b/drivers/net/wireless/marvell/mwifiex/usb.c
+@@ -1353,7 +1353,8 @@ static void mwifiex_usb_cleanup_tx_aggr(struct mwifiex_adapter *adapter)
+ skb_dequeue(&port->tx_aggr.aggr_list)))
+ mwifiex_write_data_complete(adapter, skb_tmp,
+ 0, -1);
+- del_timer_sync(&port->tx_aggr.timer_cnxt.hold_timer);
++ if (port->tx_aggr.timer_cnxt.hold_timer.function)
++ del_timer_sync(&port->tx_aggr.timer_cnxt.hold_timer);
+ port->tx_aggr.timer_cnxt.is_hold_timer_set = false;
+ port->tx_aggr.timer_cnxt.hold_tmo_msecs = 0;
+ }
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/mcu.c b/drivers/net/wireless/mediatek/mt76/mt7915/mcu.c
+index 8fb8255650a7e..6969579e6b1dd 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/mcu.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/mcu.c
+@@ -2267,14 +2267,6 @@ int mt7915_mcu_add_beacon(struct ieee80211_hw *hw,
+ struct bss_info_bcn *bcn;
+ int len = MT7915_BEACON_UPDATE_SIZE + MAX_BEACON_SIZE;
+
+- rskb = mt7915_mcu_alloc_sta_req(dev, mvif, NULL, len);
+- if (IS_ERR(rskb))
+- return PTR_ERR(rskb);
+-
+- tlv = mt7915_mcu_add_tlv(rskb, BSS_INFO_OFFLOAD, sizeof(*bcn));
+- bcn = (struct bss_info_bcn *)tlv;
+- bcn->enable = en;
+-
+ skb = ieee80211_beacon_get_template(hw, vif, &offs);
+ if (!skb)
+ return -EINVAL;
+@@ -2285,6 +2277,16 @@ int mt7915_mcu_add_beacon(struct ieee80211_hw *hw,
+ return -EINVAL;
+ }
+
++ rskb = mt7915_mcu_alloc_sta_req(dev, mvif, NULL, len);
++ if (IS_ERR(rskb)) {
++ dev_kfree_skb(skb);
++ return PTR_ERR(rskb);
++ }
++
++ tlv = mt7915_mcu_add_tlv(rskb, BSS_INFO_OFFLOAD, sizeof(*bcn));
++ bcn = (struct bss_info_bcn *)tlv;
++ bcn->enable = en;
++
+ if (mvif->band_idx) {
+ info = IEEE80211_SKB_CB(skb);
+ info->hw_queue |= MT_TX_HW_QUEUE_EXT_PHY;
+diff --git a/drivers/net/wireless/quantenna/qtnfmac/commands.c b/drivers/net/wireless/quantenna/qtnfmac/commands.c
+index f40d8c3c3d9e5..f3ccbd2b10847 100644
+--- a/drivers/net/wireless/quantenna/qtnfmac/commands.c
++++ b/drivers/net/wireless/quantenna/qtnfmac/commands.c
+@@ -869,6 +869,7 @@ int qtnf_cmd_send_del_intf(struct qtnf_vif *vif)
+ default:
+ pr_warn("VIF%u.%u: unsupported iftype %d\n", vif->mac->macid,
+ vif->vifid, vif->wdev.iftype);
++ dev_kfree_skb(cmd_skb);
+ ret = -EINVAL;
+ goto out;
+ }
+@@ -1924,6 +1925,7 @@ int qtnf_cmd_send_change_sta(struct qtnf_vif *vif, const u8 *mac,
+ break;
+ default:
+ pr_err("unsupported iftype %d\n", vif->wdev.iftype);
++ dev_kfree_skb(cmd_skb);
+ ret = -EINVAL;
+ goto out;
+ }
+diff --git a/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c b/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c
+index 19efae462a242..5cd7ef3625c5e 100644
+--- a/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c
++++ b/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c
+@@ -5795,7 +5795,6 @@ static int rtl8xxxu_submit_int_urb(struct ieee80211_hw *hw)
+ ret = usb_submit_urb(urb, GFP_KERNEL);
+ if (ret) {
+ usb_unanchor_urb(urb);
+- usb_free_urb(urb);
+ goto error;
+ }
+
+@@ -5804,6 +5803,7 @@ static int rtl8xxxu_submit_int_urb(struct ieee80211_hw *hw)
+ rtl8xxxu_write32(priv, REG_USB_HIMR, val32);
+
+ error:
++ usb_free_urb(urb);
+ return ret;
+ }
+
+@@ -6318,6 +6318,7 @@ static int rtl8xxxu_start(struct ieee80211_hw *hw)
+ struct rtl8xxxu_priv *priv = hw->priv;
+ struct rtl8xxxu_rx_urb *rx_urb;
+ struct rtl8xxxu_tx_urb *tx_urb;
++ struct sk_buff *skb;
+ unsigned long flags;
+ int ret, i;
+
+@@ -6368,6 +6369,13 @@ static int rtl8xxxu_start(struct ieee80211_hw *hw)
+ rx_urb->hw = hw;
+
+ ret = rtl8xxxu_submit_rx_urb(priv, rx_urb);
++ if (ret) {
++ if (ret != -ENOMEM) {
++ skb = (struct sk_buff *)rx_urb->urb.context;
++ dev_kfree_skb(skb);
++ }
++ rtl8xxxu_queue_rx_urb(priv, rx_urb);
++ }
+ }
+
+ schedule_delayed_work(&priv->ra_watchdog, 2 * HZ);
+diff --git a/drivers/net/wireless/realtek/rtw88/main.c b/drivers/net/wireless/realtek/rtw88/main.c
+index 665d4bbdee6a0..6a881d0be9bf0 100644
+--- a/drivers/net/wireless/realtek/rtw88/main.c
++++ b/drivers/net/wireless/realtek/rtw88/main.c
+@@ -1465,6 +1465,9 @@ int rtw_core_init(struct rtw_dev *rtwdev)
+ ret = rtw_load_firmware(rtwdev, RTW_WOWLAN_FW);
+ if (ret) {
+ rtw_warn(rtwdev, "no wow firmware loaded\n");
++ wait_for_completion(&rtwdev->fw.completion);
++ if (rtwdev->fw.firmware)
++ release_firmware(rtwdev->fw.firmware);
+ return ret;
+ }
+ }
+@@ -1479,6 +1482,8 @@ void rtw_core_deinit(struct rtw_dev *rtwdev)
+ struct rtw_rsvd_page *rsvd_pkt, *tmp;
+ unsigned long flags;
+
++ rtw_wait_firmware_completion(rtwdev);
++
+ if (fw->firmware)
+ release_firmware(fw->firmware);
+
+diff --git a/drivers/net/wireless/realtek/rtw88/pci.c b/drivers/net/wireless/realtek/rtw88/pci.c
+index 3413973bc4750..7f1f5073b9f4d 100644
+--- a/drivers/net/wireless/realtek/rtw88/pci.c
++++ b/drivers/net/wireless/realtek/rtw88/pci.c
+@@ -1599,6 +1599,8 @@ void rtw_pci_shutdown(struct pci_dev *pdev)
+
+ if (chip->ops->shutdown)
+ chip->ops->shutdown(rtwdev);
++
++ pci_set_power_state(pdev, PCI_D3hot);
+ }
+ EXPORT_SYMBOL(rtw_pci_shutdown);
+
+diff --git a/drivers/net/wireless/realtek/rtw88/pci.h b/drivers/net/wireless/realtek/rtw88/pci.h
+index 024c2bc275cbe..ca17aa9cf7dc7 100644
+--- a/drivers/net/wireless/realtek/rtw88/pci.h
++++ b/drivers/net/wireless/realtek/rtw88/pci.h
+@@ -9,8 +9,8 @@
+ #define RTK_BEQ_TX_DESC_NUM 256
+
+ #define RTK_MAX_RX_DESC_NUM 512
+-/* 8K + rx desc size */
+-#define RTK_PCI_RX_BUF_SIZE (8192 + 24)
++/* 11K + rx desc size */
++#define RTK_PCI_RX_BUF_SIZE (11454 + 24)
+
+ #define RTK_PCI_CTRL 0x300
+ #define BIT_RST_TRXDMA_INTF BIT(20)
+diff --git a/drivers/net/wireless/realtek/rtw88/phy.c b/drivers/net/wireless/realtek/rtw88/phy.c
+index 8d93f31597469..9687b376d221b 100644
+--- a/drivers/net/wireless/realtek/rtw88/phy.c
++++ b/drivers/net/wireless/realtek/rtw88/phy.c
+@@ -147,12 +147,13 @@ void rtw_phy_dig_write(struct rtw_dev *rtwdev, u8 igi)
+ {
+ struct rtw_chip_info *chip = rtwdev->chip;
+ struct rtw_hal *hal = &rtwdev->hal;
+- const struct rtw_hw_reg *dig_cck = &chip->dig_cck[0];
+ u32 addr, mask;
+ u8 path;
+
+- if (dig_cck)
++ if (chip->dig_cck) {
++ const struct rtw_hw_reg *dig_cck = &chip->dig_cck[0];
+ rtw_write32_mask(rtwdev, dig_cck->addr, dig_cck->mask, igi >> 1);
++ }
+
+ for (path = 0; path < hal->rf_path_num; path++) {
+ addr = chip->dig[path].addr;
+diff --git a/drivers/ntb/hw/amd/ntb_hw_amd.c b/drivers/ntb/hw/amd/ntb_hw_amd.c
+index 88e1db65be02c..71428d8cbcfc5 100644
+--- a/drivers/ntb/hw/amd/ntb_hw_amd.c
++++ b/drivers/ntb/hw/amd/ntb_hw_amd.c
+@@ -1203,6 +1203,7 @@ static int amd_ntb_init_pci(struct amd_ntb_dev *ndev,
+
+ err_dma_mask:
+ pci_clear_master(pdev);
++ pci_release_regions(pdev);
+ err_pci_regions:
+ pci_disable_device(pdev);
+ err_pci_enable:
+diff --git a/drivers/ntb/hw/intel/ntb_hw_gen1.c b/drivers/ntb/hw/intel/ntb_hw_gen1.c
+index 423f9b8fbbcf5..fa561d455f7c8 100644
+--- a/drivers/ntb/hw/intel/ntb_hw_gen1.c
++++ b/drivers/ntb/hw/intel/ntb_hw_gen1.c
+@@ -1893,7 +1893,7 @@ static int intel_ntb_pci_probe(struct pci_dev *pdev,
+ goto err_init_dev;
+ } else {
+ rc = -EINVAL;
+- goto err_ndev;
++ goto err_init_pci;
+ }
+
+ ndev_reset_unsafe_flags(ndev);
+diff --git a/drivers/nvme/target/core.c b/drivers/nvme/target/core.c
+index 58b035cc67a01..75ed95a250fb5 100644
+--- a/drivers/nvme/target/core.c
++++ b/drivers/nvme/target/core.c
+@@ -1142,7 +1142,8 @@ static void nvmet_start_ctrl(struct nvmet_ctrl *ctrl)
+ * in case a host died before it enabled the controller. Hence, simply
+ * reset the keep alive timer when the controller is enabled.
+ */
+- mod_delayed_work(system_wq, &ctrl->ka_work, ctrl->kato * HZ);
++ if (ctrl->kato)
++ mod_delayed_work(system_wq, &ctrl->ka_work, ctrl->kato * HZ);
+ }
+
+ static void nvmet_clear_ctrl(struct nvmet_ctrl *ctrl)
+diff --git a/drivers/nvmem/core.c b/drivers/nvmem/core.c
+index 927eb5f6003f0..4aca5b4a87d75 100644
+--- a/drivers/nvmem/core.c
++++ b/drivers/nvmem/core.c
+@@ -355,16 +355,14 @@ static void nvmem_cell_add(struct nvmem_cell *cell)
+ blocking_notifier_call_chain(&nvmem_notifier, NVMEM_CELL_ADD, cell);
+ }
+
+-static int nvmem_cell_info_to_nvmem_cell(struct nvmem_device *nvmem,
+- const struct nvmem_cell_info *info,
+- struct nvmem_cell *cell)
++static int nvmem_cell_info_to_nvmem_cell_nodup(struct nvmem_device *nvmem,
++ const struct nvmem_cell_info *info,
++ struct nvmem_cell *cell)
+ {
+ cell->nvmem = nvmem;
+ cell->offset = info->offset;
+ cell->bytes = info->bytes;
+- cell->name = kstrdup_const(info->name, GFP_KERNEL);
+- if (!cell->name)
+- return -ENOMEM;
++ cell->name = info->name;
+
+ cell->bit_offset = info->bit_offset;
+ cell->nbits = info->nbits;
+@@ -376,13 +374,30 @@ static int nvmem_cell_info_to_nvmem_cell(struct nvmem_device *nvmem,
+ if (!IS_ALIGNED(cell->offset, nvmem->stride)) {
+ dev_err(&nvmem->dev,
+ "cell %s unaligned to nvmem stride %d\n",
+- cell->name, nvmem->stride);
++ cell->name ?: "<unknown>", nvmem->stride);
+ return -EINVAL;
+ }
+
+ return 0;
+ }
+
++static int nvmem_cell_info_to_nvmem_cell(struct nvmem_device *nvmem,
++ const struct nvmem_cell_info *info,
++ struct nvmem_cell *cell)
++{
++ int err;
++
++ err = nvmem_cell_info_to_nvmem_cell_nodup(nvmem, info, cell);
++ if (err)
++ return err;
++
++ cell->name = kstrdup_const(info->name, GFP_KERNEL);
++ if (!cell->name)
++ return -ENOMEM;
++
++ return 0;
++}
++
+ /**
+ * nvmem_add_cells() - Add cell information to an nvmem device
+ *
+@@ -823,6 +838,7 @@ struct nvmem_device *of_nvmem_device_get(struct device_node *np, const char *id)
+ {
+
+ struct device_node *nvmem_np;
++ struct nvmem_device *nvmem;
+ int index = 0;
+
+ if (id)
+@@ -832,7 +848,9 @@ struct nvmem_device *of_nvmem_device_get(struct device_node *np, const char *id)
+ if (!nvmem_np)
+ return ERR_PTR(-ENOENT);
+
+- return __nvmem_device_get(nvmem_np, device_match_of_node);
++ nvmem = __nvmem_device_get(nvmem_np, device_match_of_node);
++ of_node_put(nvmem_np);
++ return nvmem;
+ }
+ EXPORT_SYMBOL_GPL(of_nvmem_device_get);
+ #endif
+@@ -1433,7 +1451,7 @@ ssize_t nvmem_device_cell_read(struct nvmem_device *nvmem,
+ if (!nvmem)
+ return -EINVAL;
+
+- rc = nvmem_cell_info_to_nvmem_cell(nvmem, info, &cell);
++ rc = nvmem_cell_info_to_nvmem_cell_nodup(nvmem, info, &cell);
+ if (rc)
+ return rc;
+
+@@ -1463,7 +1481,7 @@ int nvmem_device_cell_write(struct nvmem_device *nvmem,
+ if (!nvmem)
+ return -EINVAL;
+
+- rc = nvmem_cell_info_to_nvmem_cell(nvmem, info, &cell);
++ rc = nvmem_cell_info_to_nvmem_cell_nodup(nvmem, info, &cell);
+ if (rc)
+ return rc;
+
+diff --git a/drivers/opp/core.c b/drivers/opp/core.c
+index 91dcad982d362..11d192fb2e813 100644
+--- a/drivers/opp/core.c
++++ b/drivers/opp/core.c
+@@ -1918,6 +1918,9 @@ static void _opp_detach_genpd(struct opp_table *opp_table)
+ {
+ int index;
+
++ if (!opp_table->genpd_virt_devs)
++ return;
++
+ for (index = 0; index < opp_table->required_opp_count; index++) {
+ if (!opp_table->genpd_virt_devs[index])
+ continue;
+@@ -1964,6 +1967,9 @@ struct opp_table *dev_pm_opp_attach_genpd(struct device *dev,
+ if (!opp_table)
+ return ERR_PTR(-ENOMEM);
+
++ if (opp_table->genpd_virt_devs)
++ return opp_table;
++
+ /*
+ * If the genpd's OPP table isn't already initialized, parsing of the
+ * required-opps fail for dev. We should retry this after genpd's OPP
+diff --git a/drivers/pci/controller/dwc/pcie-designware-ep.c b/drivers/pci/controller/dwc/pcie-designware-ep.c
+index 5e5b8821bed8c..ce1c00ea5fdca 100644
+--- a/drivers/pci/controller/dwc/pcie-designware-ep.c
++++ b/drivers/pci/controller/dwc/pcie-designware-ep.c
+@@ -505,7 +505,8 @@ int dw_pcie_ep_init_complete(struct dw_pcie_ep *ep)
+ u32 reg;
+ int i;
+
+- hdr_type = dw_pcie_readb_dbi(pci, PCI_HEADER_TYPE);
++ hdr_type = dw_pcie_readb_dbi(pci, PCI_HEADER_TYPE) &
++ PCI_HEADER_TYPE_MASK;
+ if (hdr_type != PCI_HEADER_TYPE_NORMAL) {
+ dev_err(pci->dev,
+ "PCIe controller is not set to EP mode (hdr_type:0x%x)!\n",
+diff --git a/drivers/pci/controller/pci-aardvark.c b/drivers/pci/controller/pci-aardvark.c
+index 90ff291c24f09..d5f58684d962c 100644
+--- a/drivers/pci/controller/pci-aardvark.c
++++ b/drivers/pci/controller/pci-aardvark.c
+@@ -9,7 +9,7 @@
+ */
+
+ #include <linux/delay.h>
+-#include <linux/gpio.h>
++#include <linux/gpio/consumer.h>
+ #include <linux/interrupt.h>
+ #include <linux/irq.h>
+ #include <linux/irqdomain.h>
+@@ -608,7 +608,7 @@ static struct pci_bridge_emul_ops advk_pci_bridge_emul_ops = {
+ * Initialize the configuration space of the PCI-to-PCI bridge
+ * associated with the given PCIe interface.
+ */
+-static void advk_sw_pci_bridge_init(struct advk_pcie *pcie)
++static int advk_sw_pci_bridge_init(struct advk_pcie *pcie)
+ {
+ struct pci_bridge_emul *bridge = &pcie->bridge;
+
+@@ -634,8 +634,7 @@ static void advk_sw_pci_bridge_init(struct advk_pcie *pcie)
+ bridge->data = pcie;
+ bridge->ops = &advk_pci_bridge_emul_ops;
+
+- pci_bridge_emul_init(bridge, 0);
+-
++ return pci_bridge_emul_init(bridge, 0);
+ }
+
+ static bool advk_pcie_valid_device(struct advk_pcie *pcie, struct pci_bus *bus,
+@@ -1169,7 +1168,11 @@ static int advk_pcie_probe(struct platform_device *pdev)
+
+ advk_pcie_setup_hw(pcie);
+
+- advk_sw_pci_bridge_init(pcie);
++ ret = advk_sw_pci_bridge_init(pcie);
++ if (ret) {
++ dev_err(dev, "Failed to register emulated root PCI bridge\n");
++ return ret;
++ }
+
+ ret = advk_pcie_init_irq_domain(pcie);
+ if (ret) {
+diff --git a/drivers/pci/controller/pci-hyperv.c b/drivers/pci/controller/pci-hyperv.c
+index bf40ff09c99d6..95c04b0ffeb16 100644
+--- a/drivers/pci/controller/pci-hyperv.c
++++ b/drivers/pci/controller/pci-hyperv.c
+@@ -1275,11 +1275,25 @@ static void hv_irq_unmask(struct irq_data *data)
+ exit_unlock:
+ spin_unlock_irqrestore(&hbus->retarget_msi_interrupt_lock, flags);
+
+- if (res) {
++ /*
++ * During hibernation, when a CPU is offlined, the kernel tries
++ * to move the interrupt to the remaining CPUs that haven't
++ * been offlined yet. In this case, the below hv_do_hypercall()
++ * always fails since the vmbus channel has been closed:
++ * refer to cpu_disable_common() -> fixup_irqs() ->
++ * irq_migrate_all_off_this_cpu() -> migrate_one_irq().
++ *
++ * Suppress the error message for hibernation because the failure
++ * during hibernation does not matter (at this time all the devices
++ * have been frozen). Note: the correct affinity info is still updated
++ * into the irqdata data structure in migrate_one_irq() ->
++ * irq_do_set_affinity() -> hv_set_affinity(), so later when the VM
++ * resumes, hv_pci_restore_msi_state() is able to correctly restore
++ * the interrupt with the correct affinity.
++ */
++ if (res && hbus->state != hv_pcibus_removing)
+ dev_err(&hbus->hdev->device,
+ "%s() failed: %#llx", __func__, res);
+- return;
+- }
+
+ pci_msi_unmask_irq(data);
+ }
+@@ -3368,6 +3382,34 @@ static int hv_pci_suspend(struct hv_device *hdev)
+ return 0;
+ }
+
++static int hv_pci_restore_msi_msg(struct pci_dev *pdev, void *arg)
++{
++ struct msi_desc *entry;
++ struct irq_data *irq_data;
++
++ for_each_pci_msi_entry(entry, pdev) {
++ irq_data = irq_get_irq_data(entry->irq);
++ if (WARN_ON_ONCE(!irq_data))
++ return -EINVAL;
++
++ hv_compose_msi_msg(irq_data, &entry->msg);
++ }
++
++ return 0;
++}
++
++/*
++ * Upon resume, pci_restore_msi_state() -> ... -> __pci_write_msi_msg()
++ * directly writes the MSI/MSI-X registers via MMIO, but since Hyper-V
++ * doesn't trap and emulate the MMIO accesses, here hv_compose_msi_msg()
++ * must be used to ask Hyper-V to re-create the IOMMU Interrupt Remapping
++ * Table entries.
++ */
++static void hv_pci_restore_msi_state(struct hv_pcibus_device *hbus)
++{
++ pci_walk_bus(hbus->pci_bus, hv_pci_restore_msi_msg, NULL);
++}
++
+ static int hv_pci_resume(struct hv_device *hdev)
+ {
+ struct hv_pcibus_device *hbus = hv_get_drvdata(hdev);
+@@ -3401,6 +3443,8 @@ static int hv_pci_resume(struct hv_device *hdev)
+
+ prepopulate_bars(hbus);
+
++ hv_pci_restore_msi_state(hbus);
++
+ hbus->state = hv_pcibus_installed;
+ return 0;
+ out:
+diff --git a/drivers/pci/controller/pcie-iproc-msi.c b/drivers/pci/controller/pcie-iproc-msi.c
+index 3176ad3ab0e52..908475d27e0e7 100644
+--- a/drivers/pci/controller/pcie-iproc-msi.c
++++ b/drivers/pci/controller/pcie-iproc-msi.c
+@@ -209,15 +209,20 @@ static int iproc_msi_irq_set_affinity(struct irq_data *data,
+ struct iproc_msi *msi = irq_data_get_irq_chip_data(data);
+ int target_cpu = cpumask_first(mask);
+ int curr_cpu;
++ int ret;
+
+ curr_cpu = hwirq_to_cpu(msi, data->hwirq);
+ if (curr_cpu == target_cpu)
+- return IRQ_SET_MASK_OK_DONE;
++ ret = IRQ_SET_MASK_OK_DONE;
++ else {
++ /* steer MSI to the target CPU */
++ data->hwirq = hwirq_to_canonical_hwirq(msi, data->hwirq) + target_cpu;
++ ret = IRQ_SET_MASK_OK;
++ }
+
+- /* steer MSI to the target CPU */
+- data->hwirq = hwirq_to_canonical_hwirq(msi, data->hwirq) + target_cpu;
++ irq_data_update_effective_affinity(data, cpumask_of(target_cpu));
+
+- return IRQ_SET_MASK_OK;
++ return ret;
+ }
+
+ static void iproc_msi_irq_compose_msi_msg(struct irq_data *data,
+diff --git a/drivers/pci/iov.c b/drivers/pci/iov.c
+index b37e08c4f9d1a..4afd4ee4f7f04 100644
+--- a/drivers/pci/iov.c
++++ b/drivers/pci/iov.c
+@@ -180,6 +180,7 @@ int pci_iov_add_virtfn(struct pci_dev *dev, int id)
+ virtfn->device = iov->vf_device;
+ virtfn->is_virtfn = 1;
+ virtfn->physfn = pci_dev_get(dev);
++ virtfn->no_command_memory = 1;
+
+ if (id == 0)
+ pci_read_vf_config_common(virtfn);
+diff --git a/drivers/perf/thunderx2_pmu.c b/drivers/perf/thunderx2_pmu.c
+index aac9823b0c6bb..e116815fa8092 100644
+--- a/drivers/perf/thunderx2_pmu.c
++++ b/drivers/perf/thunderx2_pmu.c
+@@ -805,14 +805,17 @@ static struct tx2_uncore_pmu *tx2_uncore_pmu_init_dev(struct device *dev,
+ list_for_each_entry(rentry, &list, node) {
+ if (resource_type(rentry->res) == IORESOURCE_MEM) {
+ res = *rentry->res;
++ rentry = NULL;
+ break;
+ }
+ }
++ acpi_dev_free_resource_list(&list);
+
+- if (!rentry->res)
++ if (rentry) {
++ dev_err(dev, "PMU type %d: Fail to find resource\n", type);
+ return NULL;
++ }
+
+- acpi_dev_free_resource_list(&list);
+ base = devm_ioremap_resource(dev, &res);
+ if (IS_ERR(base)) {
+ dev_err(dev, "PMU type %d: Fail to map resource\n", type);
+diff --git a/drivers/perf/xgene_pmu.c b/drivers/perf/xgene_pmu.c
+index edac28cd25ddc..633cf07ba6723 100644
+--- a/drivers/perf/xgene_pmu.c
++++ b/drivers/perf/xgene_pmu.c
+@@ -1453,17 +1453,6 @@ static char *xgene_pmu_dev_name(struct device *dev, u32 type, int id)
+ }
+
+ #if defined(CONFIG_ACPI)
+-static int acpi_pmu_dev_add_resource(struct acpi_resource *ares, void *data)
+-{
+- struct resource *res = data;
+-
+- if (ares->type == ACPI_RESOURCE_TYPE_FIXED_MEMORY32)
+- acpi_dev_resource_memory(ares, res);
+-
+- /* Always tell the ACPI core to skip this resource */
+- return 1;
+-}
+-
+ static struct
+ xgene_pmu_dev_ctx *acpi_get_pmu_hw_inf(struct xgene_pmu *xgene_pmu,
+ struct acpi_device *adev, u32 type)
+@@ -1475,6 +1464,7 @@ xgene_pmu_dev_ctx *acpi_get_pmu_hw_inf(struct xgene_pmu *xgene_pmu,
+ struct hw_pmu_info *inf;
+ void __iomem *dev_csr;
+ struct resource res;
++ struct resource_entry *rentry;
+ int enable_bit;
+ int rc;
+
+@@ -1483,11 +1473,23 @@ xgene_pmu_dev_ctx *acpi_get_pmu_hw_inf(struct xgene_pmu *xgene_pmu,
+ return NULL;
+
+ INIT_LIST_HEAD(&resource_list);
+- rc = acpi_dev_get_resources(adev, &resource_list,
+- acpi_pmu_dev_add_resource, &res);
++ rc = acpi_dev_get_resources(adev, &resource_list, NULL, NULL);
++ if (rc <= 0) {
++ dev_err(dev, "PMU type %d: No resources found\n", type);
++ return NULL;
++ }
++
++ list_for_each_entry(rentry, &resource_list, node) {
++ if (resource_type(rentry->res) == IORESOURCE_MEM) {
++ res = *rentry->res;
++ rentry = NULL;
++ break;
++ }
++ }
+ acpi_dev_free_resource_list(&resource_list);
+- if (rc < 0) {
+- dev_err(dev, "PMU type %d: No resource address found\n", type);
++
++ if (rentry) {
++ dev_err(dev, "PMU type %d: No memory resource found\n", type);
+ return NULL;
+ }
+
+diff --git a/drivers/pinctrl/aspeed/pinctrl-aspeed.c b/drivers/pinctrl/aspeed/pinctrl-aspeed.c
+index b625a657171e6..11e27136032b9 100644
+--- a/drivers/pinctrl/aspeed/pinctrl-aspeed.c
++++ b/drivers/pinctrl/aspeed/pinctrl-aspeed.c
+@@ -515,7 +515,7 @@ int aspeed_pin_config_set(struct pinctrl_dev *pctldev, unsigned int offset,
+ val = pmap->val << __ffs(pconf->mask);
+
+ rc = regmap_update_bits(pdata->scu, pconf->reg,
+- pmap->mask, val);
++ pconf->mask, val);
+
+ if (rc < 0)
+ return rc;
+diff --git a/drivers/pinctrl/bcm/Kconfig b/drivers/pinctrl/bcm/Kconfig
+index dcf7df797af75..0ed14de0134cf 100644
+--- a/drivers/pinctrl/bcm/Kconfig
++++ b/drivers/pinctrl/bcm/Kconfig
+@@ -23,6 +23,7 @@ config PINCTRL_BCM2835
+ select PINMUX
+ select PINCONF
+ select GENERIC_PINCONF
++ select GPIOLIB
+ select GPIOLIB_IRQCHIP
+ default ARCH_BCM2835 || ARCH_BRCMSTB
+ help
+diff --git a/drivers/pinctrl/devicetree.c b/drivers/pinctrl/devicetree.c
+index c6fe7d64c9137..c7448be64d073 100644
+--- a/drivers/pinctrl/devicetree.c
++++ b/drivers/pinctrl/devicetree.c
+@@ -129,9 +129,8 @@ static int dt_to_map_one_config(struct pinctrl *p,
+ if (!np_pctldev || of_node_is_root(np_pctldev)) {
+ of_node_put(np_pctldev);
+ ret = driver_deferred_probe_check_state(p->dev);
+- /* keep deferring if modules are enabled unless we've timed out */
+- if (IS_ENABLED(CONFIG_MODULES) && !allow_default &&
+- (ret == -ENODEV))
++ /* keep deferring if modules are enabled */
++ if (IS_ENABLED(CONFIG_MODULES) && !allow_default && ret < 0)
+ ret = -EPROBE_DEFER;
+ return ret;
+ }
+diff --git a/drivers/pinctrl/pinctrl-mcp23s08.c b/drivers/pinctrl/pinctrl-mcp23s08.c
+index 151931b593f6e..235a141182bf6 100644
+--- a/drivers/pinctrl/pinctrl-mcp23s08.c
++++ b/drivers/pinctrl/pinctrl-mcp23s08.c
+@@ -87,7 +87,7 @@ const struct regmap_config mcp23x08_regmap = {
+ };
+ EXPORT_SYMBOL_GPL(mcp23x08_regmap);
+
+-static const struct reg_default mcp23x16_defaults[] = {
++static const struct reg_default mcp23x17_defaults[] = {
+ {.reg = MCP_IODIR << 1, .def = 0xffff},
+ {.reg = MCP_IPOL << 1, .def = 0x0000},
+ {.reg = MCP_GPINTEN << 1, .def = 0x0000},
+@@ -98,23 +98,23 @@ static const struct reg_default mcp23x16_defaults[] = {
+ {.reg = MCP_OLAT << 1, .def = 0x0000},
+ };
+
+-static const struct regmap_range mcp23x16_volatile_range = {
++static const struct regmap_range mcp23x17_volatile_range = {
+ .range_min = MCP_INTF << 1,
+ .range_max = MCP_GPIO << 1,
+ };
+
+-static const struct regmap_access_table mcp23x16_volatile_table = {
+- .yes_ranges = &mcp23x16_volatile_range,
++static const struct regmap_access_table mcp23x17_volatile_table = {
++ .yes_ranges = &mcp23x17_volatile_range,
+ .n_yes_ranges = 1,
+ };
+
+-static const struct regmap_range mcp23x16_precious_range = {
+- .range_min = MCP_GPIO << 1,
++static const struct regmap_range mcp23x17_precious_range = {
++ .range_min = MCP_INTCAP << 1,
+ .range_max = MCP_GPIO << 1,
+ };
+
+-static const struct regmap_access_table mcp23x16_precious_table = {
+- .yes_ranges = &mcp23x16_precious_range,
++static const struct regmap_access_table mcp23x17_precious_table = {
++ .yes_ranges = &mcp23x17_precious_range,
+ .n_yes_ranges = 1,
+ };
+
+@@ -124,10 +124,10 @@ const struct regmap_config mcp23x17_regmap = {
+
+ .reg_stride = 2,
+ .max_register = MCP_OLAT << 1,
+- .volatile_table = &mcp23x16_volatile_table,
+- .precious_table = &mcp23x16_precious_table,
+- .reg_defaults = mcp23x16_defaults,
+- .num_reg_defaults = ARRAY_SIZE(mcp23x16_defaults),
++ .volatile_table = &mcp23x17_volatile_table,
++ .precious_table = &mcp23x17_precious_table,
++ .reg_defaults = mcp23x17_defaults,
++ .num_reg_defaults = ARRAY_SIZE(mcp23x17_defaults),
+ .cache_type = REGCACHE_FLAT,
+ .val_format_endian = REGMAP_ENDIAN_LITTLE,
+ };
+diff --git a/drivers/pinctrl/qcom/pinctrl-msm.c b/drivers/pinctrl/qcom/pinctrl-msm.c
+index c322f30a20648..22283ba797cd0 100644
+--- a/drivers/pinctrl/qcom/pinctrl-msm.c
++++ b/drivers/pinctrl/qcom/pinctrl-msm.c
+@@ -1060,12 +1060,10 @@ static int msm_gpio_irq_set_wake(struct irq_data *d, unsigned int on)
+ * when TLMM is powered on. To allow that, enable the GPIO
+ * summary line to be wakeup capable at GIC.
+ */
+- if (d->parent_data)
+- irq_chip_set_wake_parent(d, on);
+-
+- irq_set_irq_wake(pctrl->irq, on);
++ if (d->parent_data && test_bit(d->hwirq, pctrl->skip_wake_irqs))
++ return irq_chip_set_wake_parent(d, on);
+
+- return 0;
++ return irq_set_irq_wake(pctrl->irq, on);
+ }
+
+ static int msm_gpio_irq_reqres(struct irq_data *d)
+@@ -1226,6 +1224,8 @@ static int msm_gpio_init(struct msm_pinctrl *pctrl)
+ pctrl->irq_chip.irq_release_resources = msm_gpio_irq_relres;
+ pctrl->irq_chip.irq_set_affinity = msm_gpio_irq_set_affinity;
+ pctrl->irq_chip.irq_set_vcpu_affinity = msm_gpio_irq_set_vcpu_affinity;
++ pctrl->irq_chip.flags = IRQCHIP_MASK_ON_SUSPEND |
++ IRQCHIP_SET_TYPE_MASKED;
+
+ np = of_parse_phandle(pctrl->dev->of_node, "wakeup-parent", 0);
+ if (np) {
+diff --git a/drivers/platform/chrome/cros_ec_lightbar.c b/drivers/platform/chrome/cros_ec_lightbar.c
+index b59180bff5a3e..ef61298c30bdd 100644
+--- a/drivers/platform/chrome/cros_ec_lightbar.c
++++ b/drivers/platform/chrome/cros_ec_lightbar.c
+@@ -116,6 +116,8 @@ static int get_lightbar_version(struct cros_ec_dev *ec,
+
+ param = (struct ec_params_lightbar *)msg->data;
+ param->cmd = LIGHTBAR_CMD_VERSION;
++ msg->outsize = sizeof(param->cmd);
++ msg->result = sizeof(resp->version);
+ ret = cros_ec_cmd_xfer_status(ec->ec_dev, msg);
+ if (ret < 0) {
+ ret = 0;
+diff --git a/drivers/platform/x86/mlx-platform.c b/drivers/platform/x86/mlx-platform.c
+index c27548fd386ac..0d2ed6d1f9c79 100644
+--- a/drivers/platform/x86/mlx-platform.c
++++ b/drivers/platform/x86/mlx-platform.c
+@@ -319,15 +319,6 @@ static struct i2c_board_info mlxplat_mlxcpld_psu[] = {
+ },
+ };
+
+-static struct i2c_board_info mlxplat_mlxcpld_ng_psu[] = {
+- {
+- I2C_BOARD_INFO("24c32", 0x51),
+- },
+- {
+- I2C_BOARD_INFO("24c32", 0x50),
+- },
+-};
+-
+ static struct i2c_board_info mlxplat_mlxcpld_pwr[] = {
+ {
+ I2C_BOARD_INFO("dps460", 0x59),
+@@ -752,15 +743,13 @@ static struct mlxreg_core_data mlxplat_mlxcpld_default_ng_psu_items_data[] = {
+ .label = "psu1",
+ .reg = MLXPLAT_CPLD_LPC_REG_PSU_OFFSET,
+ .mask = BIT(0),
+- .hpdev.brdinfo = &mlxplat_mlxcpld_ng_psu[0],
+- .hpdev.nr = MLXPLAT_CPLD_PSU_MSNXXXX_NR,
++ .hpdev.nr = MLXPLAT_CPLD_NR_NONE,
+ },
+ {
+ .label = "psu2",
+ .reg = MLXPLAT_CPLD_LPC_REG_PSU_OFFSET,
+ .mask = BIT(1),
+- .hpdev.brdinfo = &mlxplat_mlxcpld_ng_psu[1],
+- .hpdev.nr = MLXPLAT_CPLD_PSU_MSNXXXX_NR,
++ .hpdev.nr = MLXPLAT_CPLD_NR_NONE,
+ },
+ };
+
+diff --git a/drivers/pwm/pwm-img.c b/drivers/pwm/pwm-img.c
+index 599a0f66a3845..a34d95ed70b20 100644
+--- a/drivers/pwm/pwm-img.c
++++ b/drivers/pwm/pwm-img.c
+@@ -277,6 +277,8 @@ static int img_pwm_probe(struct platform_device *pdev)
+ return PTR_ERR(pwm->pwm_clk);
+ }
+
++ platform_set_drvdata(pdev, pwm);
++
+ pm_runtime_set_autosuspend_delay(&pdev->dev, IMG_PWM_PM_TIMEOUT);
+ pm_runtime_use_autosuspend(&pdev->dev);
+ pm_runtime_enable(&pdev->dev);
+@@ -313,7 +315,6 @@ static int img_pwm_probe(struct platform_device *pdev)
+ goto err_suspend;
+ }
+
+- platform_set_drvdata(pdev, pwm);
+ return 0;
+
+ err_suspend:
+diff --git a/drivers/pwm/pwm-lpss.c b/drivers/pwm/pwm-lpss.c
+index 9d965ffe66d1e..da9bc3d10104a 100644
+--- a/drivers/pwm/pwm-lpss.c
++++ b/drivers/pwm/pwm-lpss.c
+@@ -93,10 +93,12 @@ static void pwm_lpss_prepare(struct pwm_lpss_chip *lpwm, struct pwm_device *pwm,
+ * The equation is:
+ * base_unit = round(base_unit_range * freq / c)
+ */
+- base_unit_range = BIT(lpwm->info->base_unit_bits) - 1;
++ base_unit_range = BIT(lpwm->info->base_unit_bits);
+ freq *= base_unit_range;
+
+ base_unit = DIV_ROUND_CLOSEST_ULL(freq, c);
++ /* base_unit must not be 0 and we also want to avoid overflowing it */
++ base_unit = clamp_val(base_unit, 1, base_unit_range - 1);
+
+ on_time_div = 255ULL * duty_ns;
+ do_div(on_time_div, period_ns);
+@@ -104,8 +106,7 @@ static void pwm_lpss_prepare(struct pwm_lpss_chip *lpwm, struct pwm_device *pwm,
+
+ orig_ctrl = ctrl = pwm_lpss_read(pwm);
+ ctrl &= ~PWM_ON_TIME_DIV_MASK;
+- ctrl &= ~(base_unit_range << PWM_BASE_UNIT_SHIFT);
+- base_unit &= base_unit_range;
++ ctrl &= ~((base_unit_range - 1) << PWM_BASE_UNIT_SHIFT);
+ ctrl |= (u32) base_unit << PWM_BASE_UNIT_SHIFT;
+ ctrl |= on_time_div;
+
+diff --git a/drivers/pwm/pwm-rockchip.c b/drivers/pwm/pwm-rockchip.c
+index eb8c9cb645a6c..098e94335cb5b 100644
+--- a/drivers/pwm/pwm-rockchip.c
++++ b/drivers/pwm/pwm-rockchip.c
+@@ -288,6 +288,7 @@ static int rockchip_pwm_probe(struct platform_device *pdev)
+ const struct of_device_id *id;
+ struct rockchip_pwm_chip *pc;
+ struct resource *r;
++ u32 enable_conf, ctrl;
+ int ret, count;
+
+ id = of_match_device(rockchip_pwm_dt_ids, &pdev->dev);
+@@ -362,7 +363,9 @@ static int rockchip_pwm_probe(struct platform_device *pdev)
+ }
+
+ /* Keep the PWM clk enabled if the PWM appears to be up and running. */
+- if (!pwm_is_enabled(pc->chip.pwms))
++ enable_conf = pc->data->enable_conf;
++ ctrl = readl_relaxed(pc->base + pc->data->regs.ctrl);
++ if ((ctrl & enable_conf) != enable_conf)
+ clk_disable(pc->clk);
+
+ return 0;
+diff --git a/drivers/rapidio/devices/rio_mport_cdev.c b/drivers/rapidio/devices/rio_mport_cdev.c
+index 451608e960a18..152946e033d17 100644
+--- a/drivers/rapidio/devices/rio_mport_cdev.c
++++ b/drivers/rapidio/devices/rio_mport_cdev.c
+@@ -871,15 +871,16 @@ rio_dma_transfer(struct file *filp, u32 transfer_mode,
+ rmcd_error("pin_user_pages_fast err=%ld",
+ pinned);
+ nr_pages = 0;
+- } else
++ } else {
+ rmcd_error("pinned %ld out of %ld pages",
+ pinned, nr_pages);
++ /*
++ * Set nr_pages up to mean "how many pages to unpin, in
++ * the error handler:
++ */
++ nr_pages = pinned;
++ }
+ ret = -EFAULT;
+- /*
+- * Set nr_pages up to mean "how many pages to unpin, in
+- * the error handler:
+- */
+- nr_pages = pinned;
+ goto err_pg;
+ }
+
+@@ -1679,6 +1680,7 @@ static int rio_mport_add_riodev(struct mport_cdev_priv *priv,
+ struct rio_dev *rdev;
+ struct rio_switch *rswitch = NULL;
+ struct rio_mport *mport;
++ struct device *dev;
+ size_t size;
+ u32 rval;
+ u32 swpinfo = 0;
+@@ -1693,8 +1695,10 @@ static int rio_mport_add_riodev(struct mport_cdev_priv *priv,
+ rmcd_debug(RDEV, "name:%s ct:0x%x did:0x%x hc:0x%x", dev_info.name,
+ dev_info.comptag, dev_info.destid, dev_info.hopcount);
+
+- if (bus_find_device_by_name(&rio_bus_type, NULL, dev_info.name)) {
++ dev = bus_find_device_by_name(&rio_bus_type, NULL, dev_info.name);
++ if (dev) {
+ rmcd_debug(RDEV, "device %s already exists", dev_info.name);
++ put_device(dev);
+ return -EEXIST;
+ }
+
+diff --git a/drivers/ras/cec.c b/drivers/ras/cec.c
+index 569d9ad2c5942..6939aa5b3dc7f 100644
+--- a/drivers/ras/cec.c
++++ b/drivers/ras/cec.c
+@@ -553,20 +553,20 @@ static struct notifier_block cec_nb = {
+ .priority = MCE_PRIO_CEC,
+ };
+
+-static void __init cec_init(void)
++static int __init cec_init(void)
+ {
+ if (ce_arr.disabled)
+- return;
++ return -ENODEV;
+
+ ce_arr.array = (void *)get_zeroed_page(GFP_KERNEL);
+ if (!ce_arr.array) {
+ pr_err("Error allocating CE array page!\n");
+- return;
++ return -ENOMEM;
+ }
+
+ if (create_debugfs_nodes()) {
+ free_page((unsigned long)ce_arr.array);
+- return;
++ return -ENOMEM;
+ }
+
+ INIT_DELAYED_WORK(&cec_work, cec_work_fn);
+@@ -575,6 +575,7 @@ static void __init cec_init(void)
+ mce_register_decode_chain(&cec_nb);
+
+ pr_info("Correctable Errors collector initialized.\n");
++ return 0;
+ }
+ late_initcall(cec_init);
+
+diff --git a/drivers/regulator/core.c b/drivers/regulator/core.c
+index be8c709a74883..25e601bf9383e 100644
+--- a/drivers/regulator/core.c
++++ b/drivers/regulator/core.c
+@@ -5187,15 +5187,20 @@ regulator_register(const struct regulator_desc *regulator_desc,
+ else if (regulator_desc->supply_name)
+ rdev->supply_name = regulator_desc->supply_name;
+
+- /*
+- * Attempt to resolve the regulator supply, if specified,
+- * but don't return an error if we fail because we will try
+- * to resolve it again later as more regulators are added.
+- */
+- if (regulator_resolve_supply(rdev))
+- rdev_dbg(rdev, "unable to resolve supply\n");
+-
+ ret = set_machine_constraints(rdev, constraints);
++ if (ret == -EPROBE_DEFER) {
++ /* Regulator might be in bypass mode and so needs its supply
++ * to set the constraints */
++ /* FIXME: this currently triggers a chicken-and-egg problem
++ * when creating -SUPPLY symlink in sysfs to a regulator
++ * that is just being created */
++ ret = regulator_resolve_supply(rdev);
++ if (!ret)
++ ret = set_machine_constraints(rdev, constraints);
++ else
++ rdev_dbg(rdev, "unable to resolve supply early: %pe\n",
++ ERR_PTR(ret));
++ }
+ if (ret < 0)
+ goto wash;
+
+diff --git a/drivers/remoteproc/mtk_scp_ipi.c b/drivers/remoteproc/mtk_scp_ipi.c
+index 3d3d87210ef2c..58d1d7e571d66 100644
+--- a/drivers/remoteproc/mtk_scp_ipi.c
++++ b/drivers/remoteproc/mtk_scp_ipi.c
+@@ -30,10 +30,8 @@ int scp_ipi_register(struct mtk_scp *scp,
+ scp_ipi_handler_t handler,
+ void *priv)
+ {
+- if (!scp) {
+- dev_err(scp->dev, "scp device is not ready\n");
++ if (!scp)
+ return -EPROBE_DEFER;
+- }
+
+ if (WARN_ON(id >= SCP_IPI_MAX) || WARN_ON(handler == NULL))
+ return -EINVAL;
+diff --git a/drivers/rpmsg/mtk_rpmsg.c b/drivers/rpmsg/mtk_rpmsg.c
+index 83f2b8804ee98..96a17ec291401 100644
+--- a/drivers/rpmsg/mtk_rpmsg.c
++++ b/drivers/rpmsg/mtk_rpmsg.c
+@@ -200,7 +200,6 @@ static int mtk_rpmsg_register_device(struct mtk_rpmsg_rproc_subdev *mtk_subdev,
+ struct rpmsg_device *rpdev;
+ struct mtk_rpmsg_device *mdev;
+ struct platform_device *pdev = mtk_subdev->pdev;
+- int ret;
+
+ mdev = kzalloc(sizeof(*mdev), GFP_KERNEL);
+ if (!mdev)
+@@ -219,13 +218,7 @@ static int mtk_rpmsg_register_device(struct mtk_rpmsg_rproc_subdev *mtk_subdev,
+ rpdev->dev.parent = &pdev->dev;
+ rpdev->dev.release = mtk_rpmsg_release_device;
+
+- ret = rpmsg_register_device(rpdev);
+- if (ret) {
+- kfree(mdev);
+- return ret;
+- }
+-
+- return 0;
++ return rpmsg_register_device(rpdev);
+ }
+
+ static void mtk_register_device_work_function(struct work_struct *register_work)
+diff --git a/drivers/rpmsg/qcom_smd.c b/drivers/rpmsg/qcom_smd.c
+index 4abbeea782fa4..19903de6268db 100644
+--- a/drivers/rpmsg/qcom_smd.c
++++ b/drivers/rpmsg/qcom_smd.c
+@@ -1338,7 +1338,7 @@ static int qcom_smd_parse_edge(struct device *dev,
+ ret = of_property_read_u32(node, key, &edge->edge_id);
+ if (ret) {
+ dev_err(dev, "edge missing %s property\n", key);
+- return -EINVAL;
++ goto put_node;
+ }
+
+ edge->remote_pid = QCOM_SMEM_HOST_ANY;
+@@ -1349,32 +1349,37 @@ static int qcom_smd_parse_edge(struct device *dev,
+ edge->mbox_client.knows_txdone = true;
+ edge->mbox_chan = mbox_request_channel(&edge->mbox_client, 0);
+ if (IS_ERR(edge->mbox_chan)) {
+- if (PTR_ERR(edge->mbox_chan) != -ENODEV)
+- return PTR_ERR(edge->mbox_chan);
++ if (PTR_ERR(edge->mbox_chan) != -ENODEV) {
++ ret = PTR_ERR(edge->mbox_chan);
++ goto put_node;
++ }
+
+ edge->mbox_chan = NULL;
+
+ syscon_np = of_parse_phandle(node, "qcom,ipc", 0);
+ if (!syscon_np) {
+ dev_err(dev, "no qcom,ipc node\n");
+- return -ENODEV;
++ ret = -ENODEV;
++ goto put_node;
+ }
+
+ edge->ipc_regmap = syscon_node_to_regmap(syscon_np);
+- if (IS_ERR(edge->ipc_regmap))
+- return PTR_ERR(edge->ipc_regmap);
++ if (IS_ERR(edge->ipc_regmap)) {
++ ret = PTR_ERR(edge->ipc_regmap);
++ goto put_node;
++ }
+
+ key = "qcom,ipc";
+ ret = of_property_read_u32_index(node, key, 1, &edge->ipc_offset);
+ if (ret < 0) {
+ dev_err(dev, "no offset in %s\n", key);
+- return -EINVAL;
++ goto put_node;
+ }
+
+ ret = of_property_read_u32_index(node, key, 2, &edge->ipc_bit);
+ if (ret < 0) {
+ dev_err(dev, "no bit in %s\n", key);
+- return -EINVAL;
++ goto put_node;
+ }
+ }
+
+@@ -1385,7 +1390,8 @@ static int qcom_smd_parse_edge(struct device *dev,
+ irq = irq_of_parse_and_map(node, 0);
+ if (irq < 0) {
+ dev_err(dev, "required smd interrupt missing\n");
+- return -EINVAL;
++ ret = irq;
++ goto put_node;
+ }
+
+ ret = devm_request_irq(dev, irq,
+@@ -1393,12 +1399,18 @@ static int qcom_smd_parse_edge(struct device *dev,
+ node->name, edge);
+ if (ret) {
+ dev_err(dev, "failed to request smd irq\n");
+- return ret;
++ goto put_node;
+ }
+
+ edge->irq = irq;
+
+ return 0;
++
++put_node:
++ of_node_put(node);
++ edge->of_node = NULL;
++
++ return ret;
+ }
+
+ /*
+diff --git a/drivers/rtc/rtc-ds1307.c b/drivers/rtc/rtc-ds1307.c
+index 49702942bb086..70b198423deba 100644
+--- a/drivers/rtc/rtc-ds1307.c
++++ b/drivers/rtc/rtc-ds1307.c
+@@ -352,6 +352,10 @@ static int ds1307_set_time(struct device *dev, struct rtc_time *t)
+ regmap_update_bits(ds1307->regmap, DS1340_REG_FLAG,
+ DS1340_BIT_OSF, 0);
+ break;
++ case ds_1388:
++ regmap_update_bits(ds1307->regmap, DS1388_REG_FLAG,
++ DS1388_BIT_OSF, 0);
++ break;
+ case mcp794xx:
+ /*
+ * these bits were cleared when preparing the date/time
+diff --git a/drivers/s390/net/qeth_core.h b/drivers/s390/net/qeth_core.h
+index 51ea56b73a97d..4e30047d76c46 100644
+--- a/drivers/s390/net/qeth_core.h
++++ b/drivers/s390/net/qeth_core.h
+@@ -680,6 +680,11 @@ struct qeth_card_blkt {
+ int inter_packet_jumbo;
+ };
+
++enum qeth_pnso_mode {
++ QETH_PNSO_NONE,
++ QETH_PNSO_BRIDGEPORT,
++};
++
+ #define QETH_BROADCAST_WITH_ECHO 0x01
+ #define QETH_BROADCAST_WITHOUT_ECHO 0x02
+ struct qeth_card_info {
+@@ -696,6 +701,7 @@ struct qeth_card_info {
+ /* no bitfield, we take a pointer on these two: */
+ u8 has_lp2lp_cso_v6;
+ u8 has_lp2lp_cso_v4;
++ enum qeth_pnso_mode pnso_mode;
+ enum qeth_card_types type;
+ enum qeth_link_types link_type;
+ int broadcast_capable;
+diff --git a/drivers/s390/net/qeth_l2_main.c b/drivers/s390/net/qeth_l2_main.c
+index b4e06aeb6dc1c..7c6f6a09b99e4 100644
+--- a/drivers/s390/net/qeth_l2_main.c
++++ b/drivers/s390/net/qeth_l2_main.c
+@@ -273,6 +273,17 @@ static int qeth_l2_vlan_rx_kill_vid(struct net_device *dev,
+ return qeth_l2_send_setdelvlan(card, vid, IPA_CMD_DELVLAN);
+ }
+
++static void qeth_l2_set_pnso_mode(struct qeth_card *card,
++ enum qeth_pnso_mode mode)
++{
++ spin_lock_irq(get_ccwdev_lock(CARD_RDEV(card)));
++ WRITE_ONCE(card->info.pnso_mode, mode);
++ spin_unlock_irq(get_ccwdev_lock(CARD_RDEV(card)));
++
++ if (mode == QETH_PNSO_NONE)
++ drain_workqueue(card->event_wq);
++}
++
+ static void qeth_l2_stop_card(struct qeth_card *card)
+ {
+ QETH_CARD_TEXT(card, 2, "stopcard");
+@@ -291,7 +302,7 @@ static void qeth_l2_stop_card(struct qeth_card *card)
+
+ qeth_qdio_clear_card(card, 0);
+ qeth_clear_working_pool_list(card);
+- flush_workqueue(card->event_wq);
++ qeth_l2_set_pnso_mode(card, QETH_PNSO_NONE);
+ qeth_flush_local_addrs(card);
+ card->info.promisc_mode = 0;
+ }
+@@ -1111,12 +1122,6 @@ static void qeth_bridge_state_change_worker(struct work_struct *work)
+ NULL
+ };
+
+- /* Role should not change by itself, but if it did, */
+- /* information from the hardware is authoritative. */
+- mutex_lock(&data->card->sbp_lock);
+- data->card->options.sbp.role = entry->role;
+- mutex_unlock(&data->card->sbp_lock);
+-
+ snprintf(env_locrem, sizeof(env_locrem), "BRIDGEPORT=statechange");
+ snprintf(env_role, sizeof(env_role), "ROLE=%s",
+ (entry->role == QETH_SBP_ROLE_NONE) ? "none" :
+@@ -1165,19 +1170,34 @@ static void qeth_bridge_state_change(struct qeth_card *card,
+ }
+
+ struct qeth_addr_change_data {
+- struct work_struct worker;
++ struct delayed_work dwork;
+ struct qeth_card *card;
+ struct qeth_ipacmd_addr_change ac_event;
+ };
+
+ static void qeth_addr_change_event_worker(struct work_struct *work)
+ {
+- struct qeth_addr_change_data *data =
+- container_of(work, struct qeth_addr_change_data, worker);
++ struct delayed_work *dwork = to_delayed_work(work);
++ struct qeth_addr_change_data *data;
++ struct qeth_card *card;
+ int i;
+
++ data = container_of(dwork, struct qeth_addr_change_data, dwork);
++ card = data->card;
++
+ QETH_CARD_TEXT(data->card, 4, "adrchgew");
++
++ if (READ_ONCE(card->info.pnso_mode) == QETH_PNSO_NONE)
++ goto free;
++
+ if (data->ac_event.lost_event_mask) {
++ /* Potential re-config in progress, try again later: */
++ if (!mutex_trylock(&card->sbp_lock)) {
++ queue_delayed_work(card->event_wq, dwork,
++ msecs_to_jiffies(100));
++ return;
++ }
++
+ dev_info(&data->card->gdev->dev,
+ "Address change notification stopped on %s (%s)\n",
+ data->card->dev->name,
+@@ -1186,8 +1206,9 @@ static void qeth_addr_change_event_worker(struct work_struct *work)
+ : (data->ac_event.lost_event_mask == 0x02)
+ ? "Bridge port state change"
+ : "Unknown reason");
+- mutex_lock(&data->card->sbp_lock);
++
+ data->card->options.sbp.hostnotification = 0;
++ card->info.pnso_mode = QETH_PNSO_NONE;
+ mutex_unlock(&data->card->sbp_lock);
+ qeth_bridge_emit_host_event(data->card, anev_abort,
+ 0, NULL, NULL);
+@@ -1201,6 +1222,8 @@ static void qeth_addr_change_event_worker(struct work_struct *work)
+ &entry->token,
+ &entry->addr_lnid);
+ }
++
++free:
+ kfree(data);
+ }
+
+@@ -1212,6 +1235,9 @@ static void qeth_addr_change_event(struct qeth_card *card,
+ struct qeth_addr_change_data *data;
+ int extrasize;
+
++ if (card->info.pnso_mode == QETH_PNSO_NONE)
++ return;
++
+ QETH_CARD_TEXT(card, 4, "adrchgev");
+ if (cmd->hdr.return_code != 0x0000) {
+ if (cmd->hdr.return_code == 0x0010) {
+@@ -1231,11 +1257,11 @@ static void qeth_addr_change_event(struct qeth_card *card,
+ QETH_CARD_TEXT(card, 2, "ACNalloc");
+ return;
+ }
+- INIT_WORK(&data->worker, qeth_addr_change_event_worker);
++ INIT_DELAYED_WORK(&data->dwork, qeth_addr_change_event_worker);
+ data->card = card;
+ memcpy(&data->ac_event, hostevs,
+ sizeof(struct qeth_ipacmd_addr_change) + extrasize);
+- queue_work(card->event_wq, &data->worker);
++ queue_delayed_work(card->event_wq, &data->dwork, 0);
+ }
+
+ /* SETBRIDGEPORT support; sending commands */
+@@ -1556,9 +1582,14 @@ int qeth_bridgeport_an_set(struct qeth_card *card, int enable)
+
+ if (enable) {
+ qeth_bridge_emit_host_event(card, anev_reset, 0, NULL, NULL);
++ qeth_l2_set_pnso_mode(card, QETH_PNSO_BRIDGEPORT);
+ rc = qeth_l2_pnso(card, 1, qeth_bridgeport_an_set_cb, card);
+- } else
++ if (rc)
++ qeth_l2_set_pnso_mode(card, QETH_PNSO_NONE);
++ } else {
+ rc = qeth_l2_pnso(card, 0, NULL, NULL);
++ qeth_l2_set_pnso_mode(card, QETH_PNSO_NONE);
++ }
+ return rc;
+ }
+
+diff --git a/drivers/s390/net/qeth_l2_sys.c b/drivers/s390/net/qeth_l2_sys.c
+index 86bcae992f725..4695d25e54f24 100644
+--- a/drivers/s390/net/qeth_l2_sys.c
++++ b/drivers/s390/net/qeth_l2_sys.c
+@@ -157,6 +157,7 @@ static ssize_t qeth_bridgeport_hostnotification_store(struct device *dev,
+ rc = -EBUSY;
+ else if (qeth_card_hw_is_reachable(card)) {
+ rc = qeth_bridgeport_an_set(card, enable);
++ /* sbp_lock ensures ordering vs notifications-stopped events */
+ if (!rc)
+ card->options.sbp.hostnotification = enable;
+ } else
+diff --git a/drivers/scsi/be2iscsi/be_main.c b/drivers/scsi/be2iscsi/be_main.c
+index 9b81cfbbc5c53..239e04c03cf90 100644
+--- a/drivers/scsi/be2iscsi/be_main.c
++++ b/drivers/scsi/be2iscsi/be_main.c
+@@ -3020,6 +3020,7 @@ static int beiscsi_create_eqs(struct beiscsi_hba *phba,
+ goto create_eq_error;
+ }
+
++ mem->dma = paddr;
+ mem->va = eq_vaddress;
+ ret = be_fill_queue(eq, phba->params.num_eq_entries,
+ sizeof(struct be_eq_entry), eq_vaddress);
+@@ -3029,7 +3030,6 @@ static int beiscsi_create_eqs(struct beiscsi_hba *phba,
+ goto create_eq_error;
+ }
+
+- mem->dma = paddr;
+ ret = beiscsi_cmd_eq_create(&phba->ctrl, eq,
+ BEISCSI_EQ_DELAY_DEF);
+ if (ret) {
+@@ -3086,6 +3086,7 @@ static int beiscsi_create_cqs(struct beiscsi_hba *phba,
+ goto create_cq_error;
+ }
+
++ mem->dma = paddr;
+ ret = be_fill_queue(cq, phba->params.num_cq_entries,
+ sizeof(struct sol_cqe), cq_vaddress);
+ if (ret) {
+@@ -3095,7 +3096,6 @@ static int beiscsi_create_cqs(struct beiscsi_hba *phba,
+ goto create_cq_error;
+ }
+
+- mem->dma = paddr;
+ ret = beiscsi_cmd_cq_create(&phba->ctrl, cq, eq, false,
+ false, 0);
+ if (ret) {
+diff --git a/drivers/scsi/bfa/bfad.c b/drivers/scsi/bfa/bfad.c
+index bc5d84f87d8fc..440ef32be048f 100644
+--- a/drivers/scsi/bfa/bfad.c
++++ b/drivers/scsi/bfa/bfad.c
+@@ -749,6 +749,7 @@ bfad_pci_init(struct pci_dev *pdev, struct bfad_s *bfad)
+
+ if (bfad->pci_bar0_kva == NULL) {
+ printk(KERN_ERR "Fail to map bar0\n");
++ rc = -ENODEV;
+ goto out_release_region;
+ }
+
+diff --git a/drivers/scsi/csiostor/csio_hw.c b/drivers/scsi/csiostor/csio_hw.c
+index 950f9cdf0577f..5d0f42031d121 100644
+--- a/drivers/scsi/csiostor/csio_hw.c
++++ b/drivers/scsi/csiostor/csio_hw.c
+@@ -2384,7 +2384,7 @@ static int csio_hw_prep_fw(struct csio_hw *hw, struct fw_info *fw_info,
+ FW_HDR_FW_VER_MICRO_G(c), FW_HDR_FW_VER_BUILD_G(c),
+ FW_HDR_FW_VER_MAJOR_G(k), FW_HDR_FW_VER_MINOR_G(k),
+ FW_HDR_FW_VER_MICRO_G(k), FW_HDR_FW_VER_BUILD_G(k));
+- ret = EINVAL;
++ ret = -EINVAL;
+ goto bye;
+ }
+
+diff --git a/drivers/scsi/ibmvscsi/ibmvfc.c b/drivers/scsi/ibmvscsi/ibmvfc.c
+index 635f6f9cffc40..ef91f3d01f989 100644
+--- a/drivers/scsi/ibmvscsi/ibmvfc.c
++++ b/drivers/scsi/ibmvscsi/ibmvfc.c
+@@ -4928,6 +4928,7 @@ static int ibmvfc_probe(struct vio_dev *vdev, const struct vio_device_id *id)
+ if (IS_ERR(vhost->work_thread)) {
+ dev_err(dev, "Couldn't create kernel thread: %ld\n",
+ PTR_ERR(vhost->work_thread));
++ rc = PTR_ERR(vhost->work_thread);
+ goto free_host_mem;
+ }
+
+diff --git a/drivers/scsi/mpt3sas/mpt3sas_base.c b/drivers/scsi/mpt3sas/mpt3sas_base.c
+index a85c9672c6ea3..a67749c8f4ab3 100644
+--- a/drivers/scsi/mpt3sas/mpt3sas_base.c
++++ b/drivers/scsi/mpt3sas/mpt3sas_base.c
+@@ -1808,18 +1808,22 @@ mpt3sas_base_sync_reply_irqs(struct MPT3SAS_ADAPTER *ioc)
+ /* TMs are on msix_index == 0 */
+ if (reply_q->msix_index == 0)
+ continue;
++ synchronize_irq(pci_irq_vector(ioc->pdev, reply_q->msix_index));
+ if (reply_q->irq_poll_scheduled) {
+ /* Calling irq_poll_disable will wait for any pending
+ * callbacks to have completed.
+ */
+ irq_poll_disable(&reply_q->irqpoll);
+ irq_poll_enable(&reply_q->irqpoll);
+- reply_q->irq_poll_scheduled = false;
+- reply_q->irq_line_enable = true;
+- enable_irq(reply_q->os_irq);
+- continue;
++ /* check how the scheduled poll has ended,
++ * clean up only if necessary
++ */
++ if (reply_q->irq_poll_scheduled) {
++ reply_q->irq_poll_scheduled = false;
++ reply_q->irq_line_enable = true;
++ enable_irq(reply_q->os_irq);
++ }
+ }
+- synchronize_irq(pci_irq_vector(ioc->pdev, reply_q->msix_index));
+ }
+ }
+
+diff --git a/drivers/scsi/mvumi.c b/drivers/scsi/mvumi.c
+index 8906aceda4c43..0354898d7cac1 100644
+--- a/drivers/scsi/mvumi.c
++++ b/drivers/scsi/mvumi.c
+@@ -2425,6 +2425,7 @@ static int mvumi_io_attach(struct mvumi_hba *mhba)
+ if (IS_ERR(mhba->dm_thread)) {
+ dev_err(&mhba->pdev->dev,
+ "failed to create device scan thread\n");
++ ret = PTR_ERR(mhba->dm_thread);
+ mutex_unlock(&mhba->sas_discovery_mutex);
+ goto fail_create_thread;
+ }
+diff --git a/drivers/scsi/qedf/qedf_main.c b/drivers/scsi/qedf/qedf_main.c
+index 51cfab9d1afdc..ed3054fffa344 100644
+--- a/drivers/scsi/qedf/qedf_main.c
++++ b/drivers/scsi/qedf/qedf_main.c
+@@ -704,7 +704,7 @@ static int qedf_eh_abort(struct scsi_cmnd *sc_cmd)
+ rdata = fcport->rdata;
+ if (!rdata || !kref_get_unless_zero(&rdata->kref)) {
+ QEDF_ERR(&qedf->dbg_ctx, "stale rport, sc_cmd=%p\n", sc_cmd);
+- rc = 1;
++ rc = SUCCESS;
+ goto out;
+ }
+
+diff --git a/drivers/scsi/qedi/qedi_fw.c b/drivers/scsi/qedi/qedi_fw.c
+index 946cebc4c9322..90aa64604ad78 100644
+--- a/drivers/scsi/qedi/qedi_fw.c
++++ b/drivers/scsi/qedi/qedi_fw.c
+@@ -59,6 +59,7 @@ static void qedi_process_logout_resp(struct qedi_ctx *qedi,
+ "Freeing tid=0x%x for cid=0x%x\n",
+ cmd->task_id, qedi_conn->iscsi_conn_id);
+
++ spin_lock(&qedi_conn->list_lock);
+ if (likely(cmd->io_cmd_in_list)) {
+ cmd->io_cmd_in_list = false;
+ list_del_init(&cmd->io_cmd);
+@@ -69,6 +70,7 @@ static void qedi_process_logout_resp(struct qedi_ctx *qedi,
+ cmd->task_id, qedi_conn->iscsi_conn_id,
+ &cmd->io_cmd);
+ }
++ spin_unlock(&qedi_conn->list_lock);
+
+ cmd->state = RESPONSE_RECEIVED;
+ qedi_clear_task_idx(qedi, cmd->task_id);
+@@ -122,6 +124,7 @@ static void qedi_process_text_resp(struct qedi_ctx *qedi,
+ "Freeing tid=0x%x for cid=0x%x\n",
+ cmd->task_id, qedi_conn->iscsi_conn_id);
+
++ spin_lock(&qedi_conn->list_lock);
+ if (likely(cmd->io_cmd_in_list)) {
+ cmd->io_cmd_in_list = false;
+ list_del_init(&cmd->io_cmd);
+@@ -132,6 +135,7 @@ static void qedi_process_text_resp(struct qedi_ctx *qedi,
+ cmd->task_id, qedi_conn->iscsi_conn_id,
+ &cmd->io_cmd);
+ }
++ spin_unlock(&qedi_conn->list_lock);
+
+ cmd->state = RESPONSE_RECEIVED;
+ qedi_clear_task_idx(qedi, cmd->task_id);
+@@ -222,11 +226,13 @@ static void qedi_process_tmf_resp(struct qedi_ctx *qedi,
+
+ tmf_hdr = (struct iscsi_tm *)qedi_cmd->task->hdr;
+
++ spin_lock(&qedi_conn->list_lock);
+ if (likely(qedi_cmd->io_cmd_in_list)) {
+ qedi_cmd->io_cmd_in_list = false;
+ list_del_init(&qedi_cmd->io_cmd);
+ qedi_conn->active_cmd_count--;
+ }
++ spin_unlock(&qedi_conn->list_lock);
+
+ if (((tmf_hdr->flags & ISCSI_FLAG_TM_FUNC_MASK) ==
+ ISCSI_TM_FUNC_LOGICAL_UNIT_RESET) ||
+@@ -288,11 +294,13 @@ static void qedi_process_login_resp(struct qedi_ctx *qedi,
+ ISCSI_LOGIN_RESPONSE_HDR_DATA_SEG_LEN_MASK;
+ qedi_conn->gen_pdu.resp_wr_ptr = qedi_conn->gen_pdu.resp_buf + pld_len;
+
++ spin_lock(&qedi_conn->list_lock);
+ if (likely(cmd->io_cmd_in_list)) {
+ cmd->io_cmd_in_list = false;
+ list_del_init(&cmd->io_cmd);
+ qedi_conn->active_cmd_count--;
+ }
++ spin_unlock(&qedi_conn->list_lock);
+
+ memset(task_ctx, '\0', sizeof(*task_ctx));
+
+@@ -817,8 +825,11 @@ static void qedi_process_cmd_cleanup_resp(struct qedi_ctx *qedi,
+ qedi_clear_task_idx(qedi_conn->qedi, rtid);
+
+ spin_lock(&qedi_conn->list_lock);
+- list_del_init(&dbg_cmd->io_cmd);
+- qedi_conn->active_cmd_count--;
++ if (likely(dbg_cmd->io_cmd_in_list)) {
++ dbg_cmd->io_cmd_in_list = false;
++ list_del_init(&dbg_cmd->io_cmd);
++ qedi_conn->active_cmd_count--;
++ }
+ spin_unlock(&qedi_conn->list_lock);
+ qedi_cmd->state = CLEANUP_RECV;
+ wake_up_interruptible(&qedi_conn->wait_queue);
+@@ -1236,6 +1247,7 @@ int qedi_cleanup_all_io(struct qedi_ctx *qedi, struct qedi_conn *qedi_conn,
+ qedi_conn->cmd_cleanup_req++;
+ qedi_iscsi_cleanup_task(ctask, true);
+
++ cmd->io_cmd_in_list = false;
+ list_del_init(&cmd->io_cmd);
+ qedi_conn->active_cmd_count--;
+ QEDI_WARN(&qedi->dbg_ctx,
+@@ -1447,8 +1459,11 @@ ldel_exit:
+ spin_unlock_bh(&qedi_conn->tmf_work_lock);
+
+ spin_lock(&qedi_conn->list_lock);
+- list_del_init(&cmd->io_cmd);
+- qedi_conn->active_cmd_count--;
++ if (likely(cmd->io_cmd_in_list)) {
++ cmd->io_cmd_in_list = false;
++ list_del_init(&cmd->io_cmd);
++ qedi_conn->active_cmd_count--;
++ }
+ spin_unlock(&qedi_conn->list_lock);
+
+ clear_bit(QEDI_CONN_FW_CLEANUP, &qedi_conn->flags);
+diff --git a/drivers/scsi/qedi/qedi_iscsi.c b/drivers/scsi/qedi/qedi_iscsi.c
+index 425e665ec08b2..6e92625df4b7c 100644
+--- a/drivers/scsi/qedi/qedi_iscsi.c
++++ b/drivers/scsi/qedi/qedi_iscsi.c
+@@ -975,11 +975,13 @@ static void qedi_cleanup_active_cmd_list(struct qedi_conn *qedi_conn)
+ {
+ struct qedi_cmd *cmd, *cmd_tmp;
+
++ spin_lock(&qedi_conn->list_lock);
+ list_for_each_entry_safe(cmd, cmd_tmp, &qedi_conn->active_cmd_list,
+ io_cmd) {
+ list_del_init(&cmd->io_cmd);
+ qedi_conn->active_cmd_count--;
+ }
++ spin_unlock(&qedi_conn->list_lock);
+ }
+
+ static void qedi_ep_disconnect(struct iscsi_endpoint *ep)
+diff --git a/drivers/scsi/qedi/qedi_main.c b/drivers/scsi/qedi/qedi_main.c
+index 81a307695cc91..569fa4b28e4e2 100644
+--- a/drivers/scsi/qedi/qedi_main.c
++++ b/drivers/scsi/qedi/qedi_main.c
+@@ -1127,6 +1127,15 @@ static void qedi_schedule_recovery_handler(void *dev)
+ schedule_delayed_work(&qedi->recovery_work, 0);
+ }
+
++static void qedi_set_conn_recovery(struct iscsi_cls_session *cls_session)
++{
++ struct iscsi_session *session = cls_session->dd_data;
++ struct iscsi_conn *conn = session->leadconn;
++ struct qedi_conn *qedi_conn = conn->dd_data;
++
++ qedi_start_conn_recovery(qedi_conn->qedi, qedi_conn);
++}
++
+ static void qedi_link_update(void *dev, struct qed_link_output *link)
+ {
+ struct qedi_ctx *qedi = (struct qedi_ctx *)dev;
+@@ -1138,6 +1147,7 @@ static void qedi_link_update(void *dev, struct qed_link_output *link)
+ QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_INFO,
+ "Link Down event.\n");
+ atomic_set(&qedi->link_state, QEDI_LINK_DOWN);
++ iscsi_host_for_each_session(qedi->shost, qedi_set_conn_recovery);
+ }
+ }
+
+diff --git a/drivers/scsi/qla2xxx/qla_init.c b/drivers/scsi/qla2xxx/qla_init.c
+index 2861c636dd651..f17ab22ad0e4a 100644
+--- a/drivers/scsi/qla2xxx/qla_init.c
++++ b/drivers/scsi/qla2xxx/qla_init.c
+@@ -63,6 +63,16 @@ void qla2x00_sp_free(srb_t *sp)
+ qla2x00_rel_sp(sp);
+ }
+
++void qla2xxx_rel_done_warning(srb_t *sp, int res)
++{
++ WARN_ONCE(1, "Calling done() of an already freed srb %p object\n", sp);
++}
++
++void qla2xxx_rel_free_warning(srb_t *sp)
++{
++ WARN_ONCE(1, "Calling free() of an already freed srb %p object\n", sp);
++}
++
+ /* Asynchronous Login/Logout Routines -------------------------------------- */
+
+ unsigned long
+diff --git a/drivers/scsi/qla2xxx/qla_inline.h b/drivers/scsi/qla2xxx/qla_inline.h
+index 1fb6ccac07ccd..26d9c78d4c52c 100644
+--- a/drivers/scsi/qla2xxx/qla_inline.h
++++ b/drivers/scsi/qla2xxx/qla_inline.h
+@@ -207,10 +207,15 @@ qla2xxx_get_qpair_sp(scsi_qla_host_t *vha, struct qla_qpair *qpair,
+ return sp;
+ }
+
++void qla2xxx_rel_done_warning(srb_t *sp, int res);
++void qla2xxx_rel_free_warning(srb_t *sp);
++
+ static inline void
+ qla2xxx_rel_qpair_sp(struct qla_qpair *qpair, srb_t *sp)
+ {
+ sp->qpair = NULL;
++ sp->done = qla2xxx_rel_done_warning;
++ sp->free = qla2xxx_rel_free_warning;
+ mempool_free(sp, qpair->srb_mempool);
+ QLA_QPAIR_MARK_NOT_BUSY(qpair);
+ }
+diff --git a/drivers/scsi/qla2xxx/qla_mbx.c b/drivers/scsi/qla2xxx/qla_mbx.c
+index fdb2ce7acb912..9f5d3aa1d8745 100644
+--- a/drivers/scsi/qla2xxx/qla_mbx.c
++++ b/drivers/scsi/qla2xxx/qla_mbx.c
+@@ -4908,7 +4908,7 @@ qla25xx_set_els_cmds_supported(scsi_qla_host_t *vha)
+ "Done %s.\n", __func__);
+ }
+
+- dma_free_coherent(&ha->pdev->dev, DMA_POOL_SIZE,
++ dma_free_coherent(&ha->pdev->dev, ELS_CMD_MAP_SIZE,
+ els_cmd_map, els_cmd_map_dma);
+
+ return rval;
+diff --git a/drivers/scsi/qla2xxx/qla_nvme.c b/drivers/scsi/qla2xxx/qla_nvme.c
+index 262dfd7635a48..7b14fd1cb0309 100644
+--- a/drivers/scsi/qla2xxx/qla_nvme.c
++++ b/drivers/scsi/qla2xxx/qla_nvme.c
+@@ -683,7 +683,7 @@ int qla_nvme_register_hba(struct scsi_qla_host *vha)
+ struct nvme_fc_port_template *tmpl;
+ struct qla_hw_data *ha;
+ struct nvme_fc_port_info pinfo;
+- int ret = EINVAL;
++ int ret = -EINVAL;
+
+ if (!IS_ENABLED(CONFIG_NVME_FC))
+ return ret;
+diff --git a/drivers/scsi/qla2xxx/qla_target.c b/drivers/scsi/qla2xxx/qla_target.c
+index 90289162dbd4c..a034e9caa2997 100644
+--- a/drivers/scsi/qla2xxx/qla_target.c
++++ b/drivers/scsi/qla2xxx/qla_target.c
+@@ -5668,7 +5668,7 @@ static int qlt_chk_unresolv_exchg(struct scsi_qla_host *vha,
+ /* found existing exchange */
+ qpair->retry_term_cnt++;
+ if (qpair->retry_term_cnt >= 5) {
+- rc = EIO;
++ rc = -EIO;
+ qpair->retry_term_cnt = 0;
+ ql_log(ql_log_warn, vha, 0xffff,
+ "Unable to send ABTS Respond. Dumping firmware.\n");
+diff --git a/drivers/scsi/qla4xxx/ql4_os.c b/drivers/scsi/qla4xxx/ql4_os.c
+index 5dc697ce8b5dd..4a6b15dc36aaf 100644
+--- a/drivers/scsi/qla4xxx/ql4_os.c
++++ b/drivers/scsi/qla4xxx/ql4_os.c
+@@ -1220,7 +1220,7 @@ static int qla4xxx_get_host_stats(struct Scsi_Host *shost, char *buf, int len)
+ le64_to_cpu(ql_iscsi_stats->iscsi_sequence_error);
+ exit_host_stats:
+ if (ql_iscsi_stats)
+- dma_free_coherent(&ha->pdev->dev, host_stats_size,
++ dma_free_coherent(&ha->pdev->dev, stats_size,
+ ql_iscsi_stats, iscsi_stats_dma);
+
+ ql4_printk(KERN_INFO, ha, "%s: Get host stats done\n",
+diff --git a/drivers/scsi/smartpqi/smartpqi.h b/drivers/scsi/smartpqi/smartpqi.h
+index 1129fe7a27edd..ee069a8b442a7 100644
+--- a/drivers/scsi/smartpqi/smartpqi.h
++++ b/drivers/scsi/smartpqi/smartpqi.h
+@@ -359,7 +359,7 @@ struct pqi_event_response {
+ struct pqi_iu_header header;
+ u8 event_type;
+ u8 reserved2 : 7;
+- u8 request_acknowlege : 1;
++ u8 request_acknowledge : 1;
+ __le16 event_id;
+ __le32 additional_event_id;
+ union {
+diff --git a/drivers/scsi/smartpqi/smartpqi_init.c b/drivers/scsi/smartpqi/smartpqi_init.c
+index cd157f11eb222..10afbaaa4a82f 100644
+--- a/drivers/scsi/smartpqi/smartpqi_init.c
++++ b/drivers/scsi/smartpqi/smartpqi_init.c
+@@ -542,8 +542,7 @@ static int pqi_build_raid_path_request(struct pqi_ctrl_info *ctrl_info,
+ put_unaligned_be16(cdb_length, &cdb[7]);
+ break;
+ default:
+- dev_err(&ctrl_info->pci_dev->dev, "unknown command 0x%c\n",
+- cmd);
++ dev_err(&ctrl_info->pci_dev->dev, "unknown command 0x%c\n", cmd);
+ break;
+ }
+
+@@ -2462,7 +2461,6 @@ static int pqi_raid_bypass_submit_scsi_cmd(struct pqi_ctrl_info *ctrl_info,
+ offload_to_mirror =
+ (offload_to_mirror >= layout_map_count - 1) ?
+ 0 : offload_to_mirror + 1;
+- WARN_ON(offload_to_mirror >= layout_map_count);
+ device->offload_to_mirror = offload_to_mirror;
+ /*
+ * Avoid direct use of device->offload_to_mirror within this
+@@ -2915,10 +2913,14 @@ static int pqi_interpret_task_management_response(
+ return rc;
+ }
+
+-static unsigned int pqi_process_io_intr(struct pqi_ctrl_info *ctrl_info,
+- struct pqi_queue_group *queue_group)
++static inline void pqi_invalid_response(struct pqi_ctrl_info *ctrl_info)
++{
++ pqi_take_ctrl_offline(ctrl_info);
++}
++
++static int pqi_process_io_intr(struct pqi_ctrl_info *ctrl_info, struct pqi_queue_group *queue_group)
+ {
+- unsigned int num_responses;
++ int num_responses;
+ pqi_index_t oq_pi;
+ pqi_index_t oq_ci;
+ struct pqi_io_request *io_request;
+@@ -2930,6 +2932,13 @@ static unsigned int pqi_process_io_intr(struct pqi_ctrl_info *ctrl_info,
+
+ while (1) {
+ oq_pi = readl(queue_group->oq_pi);
++ if (oq_pi >= ctrl_info->num_elements_per_oq) {
++ pqi_invalid_response(ctrl_info);
++ dev_err(&ctrl_info->pci_dev->dev,
++ "I/O interrupt: producer index (%u) out of range (0-%u): consumer index: %u\n",
++ oq_pi, ctrl_info->num_elements_per_oq - 1, oq_ci);
++ return -1;
++ }
+ if (oq_pi == oq_ci)
+ break;
+
+@@ -2938,10 +2947,22 @@ static unsigned int pqi_process_io_intr(struct pqi_ctrl_info *ctrl_info,
+ (oq_ci * PQI_OPERATIONAL_OQ_ELEMENT_LENGTH);
+
+ request_id = get_unaligned_le16(&response->request_id);
+- WARN_ON(request_id >= ctrl_info->max_io_slots);
++ if (request_id >= ctrl_info->max_io_slots) {
++ pqi_invalid_response(ctrl_info);
++ dev_err(&ctrl_info->pci_dev->dev,
++ "request ID in response (%u) out of range (0-%u): producer index: %u consumer index: %u\n",
++ request_id, ctrl_info->max_io_slots - 1, oq_pi, oq_ci);
++ return -1;
++ }
+
+ io_request = &ctrl_info->io_request_pool[request_id];
+- WARN_ON(atomic_read(&io_request->refcount) == 0);
++ if (atomic_read(&io_request->refcount) == 0) {
++ pqi_invalid_response(ctrl_info);
++ dev_err(&ctrl_info->pci_dev->dev,
++ "request ID in response (%u) does not match an outstanding I/O request: producer index: %u consumer index: %u\n",
++ request_id, oq_pi, oq_ci);
++ return -1;
++ }
+
+ switch (response->header.iu_type) {
+ case PQI_RESPONSE_IU_RAID_PATH_IO_SUCCESS:
+@@ -2971,24 +2992,22 @@ static unsigned int pqi_process_io_intr(struct pqi_ctrl_info *ctrl_info,
+ io_request->error_info = ctrl_info->error_buffer +
+ (get_unaligned_le16(&response->error_index) *
+ PQI_ERROR_BUFFER_ELEMENT_LENGTH);
+- pqi_process_io_error(response->header.iu_type,
+- io_request);
++ pqi_process_io_error(response->header.iu_type, io_request);
+ break;
+ default:
++ pqi_invalid_response(ctrl_info);
+ dev_err(&ctrl_info->pci_dev->dev,
+- "unexpected IU type: 0x%x\n",
+- response->header.iu_type);
+- break;
++ "unexpected IU type: 0x%x: producer index: %u consumer index: %u\n",
++ response->header.iu_type, oq_pi, oq_ci);
++ return -1;
+ }
+
+- io_request->io_complete_callback(io_request,
+- io_request->context);
++ io_request->io_complete_callback(io_request, io_request->context);
+
+ /*
+ * Note that the I/O request structure CANNOT BE TOUCHED after
+ * returning from the I/O completion callback!
+ */
+-
+ oq_ci = (oq_ci + 1) % ctrl_info->num_elements_per_oq;
+ }
+
+@@ -3301,9 +3320,9 @@ static void pqi_ofa_capture_event_payload(struct pqi_event *event,
+ }
+ }
+
+-static unsigned int pqi_process_event_intr(struct pqi_ctrl_info *ctrl_info)
++static int pqi_process_event_intr(struct pqi_ctrl_info *ctrl_info)
+ {
+- unsigned int num_events;
++ int num_events;
+ pqi_index_t oq_pi;
+ pqi_index_t oq_ci;
+ struct pqi_event_queue *event_queue;
+@@ -3317,26 +3336,31 @@ static unsigned int pqi_process_event_intr(struct pqi_ctrl_info *ctrl_info)
+
+ while (1) {
+ oq_pi = readl(event_queue->oq_pi);
++ if (oq_pi >= PQI_NUM_EVENT_QUEUE_ELEMENTS) {
++ pqi_invalid_response(ctrl_info);
++ dev_err(&ctrl_info->pci_dev->dev,
++ "event interrupt: producer index (%u) out of range (0-%u): consumer index: %u\n",
++ oq_pi, PQI_NUM_EVENT_QUEUE_ELEMENTS - 1, oq_ci);
++ return -1;
++ }
++
+ if (oq_pi == oq_ci)
+ break;
+
+ num_events++;
+- response = event_queue->oq_element_array +
+- (oq_ci * PQI_EVENT_OQ_ELEMENT_LENGTH);
++ response = event_queue->oq_element_array + (oq_ci * PQI_EVENT_OQ_ELEMENT_LENGTH);
+
+ event_index =
+ pqi_event_type_to_event_index(response->event_type);
+
+- if (event_index >= 0) {
+- if (response->request_acknowlege) {
+- event = &ctrl_info->events[event_index];
+- event->pending = true;
+- event->event_type = response->event_type;
+- event->event_id = response->event_id;
+- event->additional_event_id =
+- response->additional_event_id;
++ if (event_index >= 0 && response->request_acknowledge) {
++ event = &ctrl_info->events[event_index];
++ event->pending = true;
++ event->event_type = response->event_type;
++ event->event_id = response->event_id;
++ event->additional_event_id = response->additional_event_id;
++ if (event->event_type == PQI_EVENT_TYPE_OFA)
+ pqi_ofa_capture_event_payload(event, response);
+- }
+ }
+
+ oq_ci = (oq_ci + 1) % PQI_NUM_EVENT_QUEUE_ELEMENTS;
+@@ -3451,7 +3475,8 @@ static irqreturn_t pqi_irq_handler(int irq, void *data)
+ {
+ struct pqi_ctrl_info *ctrl_info;
+ struct pqi_queue_group *queue_group;
+- unsigned int num_responses_handled;
++ int num_io_responses_handled;
++ int num_events_handled;
+
+ queue_group = data;
+ ctrl_info = queue_group->ctrl_info;
+@@ -3459,17 +3484,25 @@ static irqreturn_t pqi_irq_handler(int irq, void *data)
+ if (!pqi_is_valid_irq(ctrl_info))
+ return IRQ_NONE;
+
+- num_responses_handled = pqi_process_io_intr(ctrl_info, queue_group);
++ num_io_responses_handled = pqi_process_io_intr(ctrl_info, queue_group);
++ if (num_io_responses_handled < 0)
++ goto out;
+
+- if (irq == ctrl_info->event_irq)
+- num_responses_handled += pqi_process_event_intr(ctrl_info);
++ if (irq == ctrl_info->event_irq) {
++ num_events_handled = pqi_process_event_intr(ctrl_info);
++ if (num_events_handled < 0)
++ goto out;
++ } else {
++ num_events_handled = 0;
++ }
+
+- if (num_responses_handled)
++ if (num_io_responses_handled + num_events_handled > 0)
+ atomic_inc(&ctrl_info->num_interrupts);
+
+ pqi_start_io(ctrl_info, queue_group, RAID_PATH, NULL);
+ pqi_start_io(ctrl_info, queue_group, AIO_PATH, NULL);
+
++out:
+ return IRQ_HANDLED;
+ }
+
+diff --git a/drivers/scsi/ufs/ufs-mediatek.c b/drivers/scsi/ufs/ufs-mediatek.c
+index d56ce8d97d4e8..7ad127f213977 100644
+--- a/drivers/scsi/ufs/ufs-mediatek.c
++++ b/drivers/scsi/ufs/ufs-mediatek.c
+@@ -585,13 +585,7 @@ static int ufs_mtk_apply_dev_quirks(struct ufs_hba *hba)
+
+ static void ufs_mtk_fixup_dev_quirks(struct ufs_hba *hba)
+ {
+- struct ufs_dev_info *dev_info = &hba->dev_info;
+- u16 mid = dev_info->wmanufacturerid;
+-
+ ufshcd_fixup_dev_quirks(hba, ufs_mtk_dev_fixups);
+-
+- if (mid == UFS_VENDOR_SAMSUNG)
+- hba->dev_quirks &= ~UFS_DEVICE_QUIRK_HOST_PA_TACTIVATE;
+ }
+
+ /**
+diff --git a/drivers/scsi/ufs/ufs-qcom.c b/drivers/scsi/ufs/ufs-qcom.c
+index 2e6ddb5cdfc23..7da27eed1fe7b 100644
+--- a/drivers/scsi/ufs/ufs-qcom.c
++++ b/drivers/scsi/ufs/ufs-qcom.c
+@@ -1604,9 +1604,6 @@ int ufs_qcom_testbus_config(struct ufs_qcom_host *host)
+ */
+ }
+ mask <<= offset;
+-
+- pm_runtime_get_sync(host->hba->dev);
+- ufshcd_hold(host->hba, false);
+ ufshcd_rmwl(host->hba, TEST_BUS_SEL,
+ (u32)host->testbus.select_major << 19,
+ REG_UFS_CFG1);
+@@ -1619,8 +1616,6 @@ int ufs_qcom_testbus_config(struct ufs_qcom_host *host)
+ * committed before returning.
+ */
+ mb();
+- ufshcd_release(host->hba);
+- pm_runtime_put_sync(host->hba->dev);
+
+ return 0;
+ }
+diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c
+index 8bc8e4e62c045..e5f75b2e07e2c 100644
+--- a/drivers/scsi/ufs/ufshcd.c
++++ b/drivers/scsi/ufs/ufshcd.c
+@@ -484,6 +484,9 @@ void ufshcd_print_trs(struct ufs_hba *hba, unsigned long bitmap, bool pr_prdt)
+
+ prdt_length = le16_to_cpu(
+ lrbp->utr_descriptor_ptr->prd_table_length);
++ if (hba->quirks & UFSHCD_QUIRK_PRDT_BYTE_GRAN)
++ prdt_length /= sizeof(struct ufshcd_sg_entry);
++
+ dev_err(hba->dev,
+ "UPIU[%d] - PRDT - %d entries phys@0x%llx\n",
+ tag, prdt_length,
+diff --git a/drivers/slimbus/core.c b/drivers/slimbus/core.c
+index ae1e248a8fb8a..1d2bc181da050 100644
+--- a/drivers/slimbus/core.c
++++ b/drivers/slimbus/core.c
+@@ -301,8 +301,6 @@ int slim_unregister_controller(struct slim_controller *ctrl)
+ {
+ /* Remove all clients */
+ device_for_each_child(ctrl->dev, NULL, slim_ctrl_remove_device);
+- /* Enter Clock Pause */
+- slim_ctrl_clk_pause(ctrl, false, 0);
+ ida_simple_remove(&ctrl_ida, ctrl->id);
+
+ return 0;
+@@ -326,8 +324,8 @@ void slim_report_absent(struct slim_device *sbdev)
+ mutex_lock(&ctrl->lock);
+ sbdev->is_laddr_valid = false;
+ mutex_unlock(&ctrl->lock);
+-
+- ida_simple_remove(&ctrl->laddr_ida, sbdev->laddr);
++ if (!ctrl->get_laddr)
++ ida_simple_remove(&ctrl->laddr_ida, sbdev->laddr);
+ slim_device_update_status(sbdev, SLIM_DEVICE_STATUS_DOWN);
+ }
+ EXPORT_SYMBOL_GPL(slim_report_absent);
+diff --git a/drivers/slimbus/qcom-ngd-ctrl.c b/drivers/slimbus/qcom-ngd-ctrl.c
+index 743ee7b4e63f2..218aefc3531cd 100644
+--- a/drivers/slimbus/qcom-ngd-ctrl.c
++++ b/drivers/slimbus/qcom-ngd-ctrl.c
+@@ -1277,9 +1277,13 @@ static void qcom_slim_ngd_qmi_del_server(struct qmi_handle *hdl,
+ {
+ struct qcom_slim_ngd_qmi *qmi =
+ container_of(hdl, struct qcom_slim_ngd_qmi, svc_event_hdl);
++ struct qcom_slim_ngd_ctrl *ctrl =
++ container_of(qmi, struct qcom_slim_ngd_ctrl, qmi);
+
+ qmi->svc_info.sq_node = 0;
+ qmi->svc_info.sq_port = 0;
++
++ qcom_slim_ngd_enable(ctrl, false);
+ }
+
+ static struct qmi_ops qcom_slim_ngd_qmi_svc_event_ops = {
+diff --git a/drivers/soc/fsl/qbman/bman.c b/drivers/soc/fsl/qbman/bman.c
+index f4fb527d83018..c5dd026fe889f 100644
+--- a/drivers/soc/fsl/qbman/bman.c
++++ b/drivers/soc/fsl/qbman/bman.c
+@@ -660,7 +660,7 @@ int bm_shutdown_pool(u32 bpid)
+ }
+ done:
+ put_affine_portal();
+- return 0;
++ return err;
+ }
+
+ struct gen_pool *bm_bpalloc;
+diff --git a/drivers/soc/mediatek/mtk-cmdq-helper.c b/drivers/soc/mediatek/mtk-cmdq-helper.c
+index 87ee9f767b7af..d8ace96832bac 100644
+--- a/drivers/soc/mediatek/mtk-cmdq-helper.c
++++ b/drivers/soc/mediatek/mtk-cmdq-helper.c
+@@ -213,15 +213,16 @@ int cmdq_pkt_write_mask(struct cmdq_pkt *pkt, u8 subsys,
+ }
+ EXPORT_SYMBOL(cmdq_pkt_write_mask);
+
+-int cmdq_pkt_wfe(struct cmdq_pkt *pkt, u16 event)
++int cmdq_pkt_wfe(struct cmdq_pkt *pkt, u16 event, bool clear)
+ {
+ struct cmdq_instruction inst = { {0} };
++ u32 clear_option = clear ? CMDQ_WFE_UPDATE : 0;
+
+ if (event >= CMDQ_MAX_EVENT)
+ return -EINVAL;
+
+ inst.op = CMDQ_CODE_WFE;
+- inst.value = CMDQ_WFE_OPTION;
++ inst.value = CMDQ_WFE_OPTION | clear_option;
+ inst.event = event;
+
+ return cmdq_pkt_append_command(pkt, inst);
+diff --git a/drivers/soc/qcom/apr.c b/drivers/soc/qcom/apr.c
+index 1f35b097c6356..7abfc8c4fdc72 100644
+--- a/drivers/soc/qcom/apr.c
++++ b/drivers/soc/qcom/apr.c
+@@ -328,7 +328,7 @@ static int of_apr_add_pd_lookups(struct device *dev)
+
+ pds = pdr_add_lookup(apr->pdr, service_name, service_path);
+ if (IS_ERR(pds) && PTR_ERR(pds) != -EALREADY) {
+- dev_err(dev, "pdr add lookup failed: %d\n", ret);
++ dev_err(dev, "pdr add lookup failed: %ld\n", PTR_ERR(pds));
+ return PTR_ERR(pds);
+ }
+ }
+diff --git a/drivers/soc/qcom/pdr_internal.h b/drivers/soc/qcom/pdr_internal.h
+index 15b5002e4127b..ab9ae8cdfa54c 100644
+--- a/drivers/soc/qcom/pdr_internal.h
++++ b/drivers/soc/qcom/pdr_internal.h
+@@ -185,7 +185,7 @@ struct qmi_elem_info servreg_get_domain_list_resp_ei[] = {
+ .data_type = QMI_STRUCT,
+ .elem_len = SERVREG_DOMAIN_LIST_LENGTH,
+ .elem_size = sizeof(struct servreg_location_entry),
+- .array_type = NO_ARRAY,
++ .array_type = VAR_LEN_ARRAY,
+ .tlv_type = 0x12,
+ .offset = offsetof(struct servreg_get_domain_list_resp,
+ domain_list),
+diff --git a/drivers/soc/xilinx/zynqmp_power.c b/drivers/soc/xilinx/zynqmp_power.c
+index 31ff49fcd078b..c556623dae024 100644
+--- a/drivers/soc/xilinx/zynqmp_power.c
++++ b/drivers/soc/xilinx/zynqmp_power.c
+@@ -205,7 +205,7 @@ static int zynqmp_pm_probe(struct platform_device *pdev)
+ rx_chan = mbox_request_channel_byname(client, "rx");
+ if (IS_ERR(rx_chan)) {
+ dev_err(&pdev->dev, "Failed to request rx channel\n");
+- return IS_ERR(rx_chan);
++ return PTR_ERR(rx_chan);
+ }
+ } else if (of_find_property(pdev->dev.of_node, "interrupts", NULL)) {
+ irq = platform_get_irq(pdev, 0);
+diff --git a/drivers/spi/spi-dw-pci.c b/drivers/spi/spi-dw-pci.c
+index 2ea73809ca345..271839a8add0e 100644
+--- a/drivers/spi/spi-dw-pci.c
++++ b/drivers/spi/spi-dw-pci.c
+@@ -127,18 +127,16 @@ static int spi_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ if (desc->setup) {
+ ret = desc->setup(dws);
+ if (ret)
+- return ret;
++ goto err_free_irq_vectors;
+ }
+ } else {
+- pci_free_irq_vectors(pdev);
+- return -ENODEV;
++ ret = -ENODEV;
++ goto err_free_irq_vectors;
+ }
+
+ ret = dw_spi_add_host(&pdev->dev, dws);
+- if (ret) {
+- pci_free_irq_vectors(pdev);
+- return ret;
+- }
++ if (ret)
++ goto err_free_irq_vectors;
+
+ /* PCI hook and SPI hook use the same drv data */
+ pci_set_drvdata(pdev, dws);
+@@ -152,6 +150,10 @@ static int spi_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ pm_runtime_allow(&pdev->dev);
+
+ return 0;
++
++err_free_irq_vectors:
++ pci_free_irq_vectors(pdev);
++ return ret;
+ }
+
+ static void spi_pci_remove(struct pci_dev *pdev)
+diff --git a/drivers/spi/spi-fsi.c b/drivers/spi/spi-fsi.c
+index 37a3e0f8e7526..a702e9d7d68c0 100644
+--- a/drivers/spi/spi-fsi.c
++++ b/drivers/spi/spi-fsi.c
+@@ -24,11 +24,16 @@
+
+ #define SPI_FSI_BASE 0x70000
+ #define SPI_FSI_INIT_TIMEOUT_MS 1000
+-#define SPI_FSI_MAX_TRANSFER_SIZE 2048
++#define SPI_FSI_MAX_XFR_SIZE 2048
++#define SPI_FSI_MAX_XFR_SIZE_RESTRICTED 32
+
+ #define SPI_FSI_ERROR 0x0
+ #define SPI_FSI_COUNTER_CFG 0x1
+ #define SPI_FSI_COUNTER_CFG_LOOPS(x) (((u64)(x) & 0xffULL) << 32)
++#define SPI_FSI_COUNTER_CFG_N2_RX BIT_ULL(8)
++#define SPI_FSI_COUNTER_CFG_N2_TX BIT_ULL(9)
++#define SPI_FSI_COUNTER_CFG_N2_IMPLICIT BIT_ULL(10)
++#define SPI_FSI_COUNTER_CFG_N2_RELOAD BIT_ULL(11)
+ #define SPI_FSI_CFG1 0x2
+ #define SPI_FSI_CLOCK_CFG 0x3
+ #define SPI_FSI_CLOCK_CFG_MM_ENABLE BIT_ULL(32)
+@@ -61,7 +66,7 @@
+ #define SPI_FSI_STATUS_RDR_OVERRUN BIT_ULL(62)
+ #define SPI_FSI_STATUS_RDR_FULL BIT_ULL(63)
+ #define SPI_FSI_STATUS_ANY_ERROR \
+- (SPI_FSI_STATUS_ERROR | SPI_FSI_STATUS_TDR_UNDERRUN | \
++ (SPI_FSI_STATUS_ERROR | \
+ SPI_FSI_STATUS_TDR_OVERRUN | SPI_FSI_STATUS_RDR_UNDERRUN | \
+ SPI_FSI_STATUS_RDR_OVERRUN)
+ #define SPI_FSI_PORT_CTRL 0x9
+@@ -70,6 +75,8 @@ struct fsi_spi {
+ struct device *dev; /* SPI controller device */
+ struct fsi_device *fsi; /* FSI2SPI CFAM engine device */
+ u32 base;
++ size_t max_xfr_size;
++ bool restricted;
+ };
+
+ struct fsi_spi_sequence {
+@@ -205,8 +212,12 @@ static int fsi_spi_reset(struct fsi_spi *ctx)
+ if (rc)
+ return rc;
+
+- return fsi_spi_write_reg(ctx, SPI_FSI_CLOCK_CFG,
+- SPI_FSI_CLOCK_CFG_RESET2);
++ rc = fsi_spi_write_reg(ctx, SPI_FSI_CLOCK_CFG,
++ SPI_FSI_CLOCK_CFG_RESET2);
++ if (rc)
++ return rc;
++
++ return fsi_spi_write_reg(ctx, SPI_FSI_STATUS, 0ULL);
+ }
+
+ static int fsi_spi_sequence_add(struct fsi_spi_sequence *seq, u8 val)
+@@ -214,8 +225,8 @@ static int fsi_spi_sequence_add(struct fsi_spi_sequence *seq, u8 val)
+ /*
+ * Add the next byte of instruction to the 8-byte sequence register.
+ * Then decrement the counter so that the next instruction will go in
+- * the right place. Return the number of "slots" left in the sequence
+- * register.
++ * the right place. Return the index of the slot we just filled in the
++ * sequence register.
+ */
+ seq->data |= (u64)val << seq->bit;
+ seq->bit -= 8;
+@@ -233,40 +244,71 @@ static int fsi_spi_sequence_transfer(struct fsi_spi *ctx,
+ struct fsi_spi_sequence *seq,
+ struct spi_transfer *transfer)
+ {
++ bool docfg = false;
+ int loops;
+ int idx;
+ int rc;
++ u8 val = 0;
+ u8 len = min(transfer->len, 8U);
+ u8 rem = transfer->len % len;
++ u64 cfg = 0ULL;
+
+ loops = transfer->len / len;
+
+ if (transfer->tx_buf) {
+- idx = fsi_spi_sequence_add(seq,
+- SPI_FSI_SEQUENCE_SHIFT_OUT(len));
++ val = SPI_FSI_SEQUENCE_SHIFT_OUT(len);
++ idx = fsi_spi_sequence_add(seq, val);
++
+ if (rem)
+ rem = SPI_FSI_SEQUENCE_SHIFT_OUT(rem);
+ } else if (transfer->rx_buf) {
+- idx = fsi_spi_sequence_add(seq,
+- SPI_FSI_SEQUENCE_SHIFT_IN(len));
++ val = SPI_FSI_SEQUENCE_SHIFT_IN(len);
++ idx = fsi_spi_sequence_add(seq, val);
++
+ if (rem)
+ rem = SPI_FSI_SEQUENCE_SHIFT_IN(rem);
+ } else {
+ return -EINVAL;
+ }
+
++ if (ctx->restricted) {
++ const int eidx = rem ? 5 : 6;
++
++ while (loops > 1 && idx <= eidx) {
++ idx = fsi_spi_sequence_add(seq, val);
++ loops--;
++ docfg = true;
++ }
++
++ if (loops > 1) {
++ dev_warn(ctx->dev, "No sequencer slots; aborting.\n");
++ return -EINVAL;
++ }
++ }
++
+ if (loops > 1) {
+ fsi_spi_sequence_add(seq, SPI_FSI_SEQUENCE_BRANCH(idx));
++ docfg = true;
++ }
+
+- if (rem)
+- fsi_spi_sequence_add(seq, rem);
++ if (docfg) {
++ cfg = SPI_FSI_COUNTER_CFG_LOOPS(loops - 1);
++ if (transfer->rx_buf)
++ cfg |= SPI_FSI_COUNTER_CFG_N2_RX |
++ SPI_FSI_COUNTER_CFG_N2_TX |
++ SPI_FSI_COUNTER_CFG_N2_IMPLICIT |
++ SPI_FSI_COUNTER_CFG_N2_RELOAD;
+
+- rc = fsi_spi_write_reg(ctx, SPI_FSI_COUNTER_CFG,
+- SPI_FSI_COUNTER_CFG_LOOPS(loops - 1));
++ rc = fsi_spi_write_reg(ctx, SPI_FSI_COUNTER_CFG, cfg);
+ if (rc)
+ return rc;
++ } else {
++ fsi_spi_write_reg(ctx, SPI_FSI_COUNTER_CFG, 0ULL);
+ }
+
++ if (rem)
++ fsi_spi_sequence_add(seq, rem);
++
+ return 0;
+ }
+
+@@ -275,6 +317,7 @@ static int fsi_spi_transfer_data(struct fsi_spi *ctx,
+ {
+ int rc = 0;
+ u64 status = 0ULL;
++ u64 cfg = 0ULL;
+
+ if (transfer->tx_buf) {
+ int nb;
+@@ -312,6 +355,16 @@ static int fsi_spi_transfer_data(struct fsi_spi *ctx,
+ u64 in = 0ULL;
+ u8 *rx = transfer->rx_buf;
+
++ rc = fsi_spi_read_reg(ctx, SPI_FSI_COUNTER_CFG, &cfg);
++ if (rc)
++ return rc;
++
++ if (cfg & SPI_FSI_COUNTER_CFG_N2_IMPLICIT) {
++ rc = fsi_spi_write_reg(ctx, SPI_FSI_DATA_TX, 0);
++ if (rc)
++ return rc;
++ }
++
+ while (transfer->len > recv) {
+ do {
+ rc = fsi_spi_read_reg(ctx, SPI_FSI_STATUS,
+@@ -350,7 +403,7 @@ static int fsi_spi_transfer_init(struct fsi_spi *ctx)
+ u64 status = 0ULL;
+ u64 wanted_clock_cfg = SPI_FSI_CLOCK_CFG_ECC_DISABLE |
+ SPI_FSI_CLOCK_CFG_SCK_NO_DEL |
+- FIELD_PREP(SPI_FSI_CLOCK_CFG_SCK_DIV, 4);
++ FIELD_PREP(SPI_FSI_CLOCK_CFG_SCK_DIV, 19);
+
+ end = jiffies + msecs_to_jiffies(SPI_FSI_INIT_TIMEOUT_MS);
+ do {
+@@ -407,7 +460,7 @@ static int fsi_spi_transfer_one_message(struct spi_controller *ctlr,
+
+ /* Sequencer must do shift out (tx) first. */
+ if (!transfer->tx_buf ||
+- transfer->len > SPI_FSI_MAX_TRANSFER_SIZE) {
++ transfer->len > (ctx->max_xfr_size + 8)) {
+ rc = -EINVAL;
+ goto error;
+ }
+@@ -431,7 +484,7 @@ static int fsi_spi_transfer_one_message(struct spi_controller *ctlr,
+
+ /* Sequencer can only do shift in (rx) after tx. */
+ if (next->rx_buf) {
+- if (next->len > SPI_FSI_MAX_TRANSFER_SIZE) {
++ if (next->len > ctx->max_xfr_size) {
+ rc = -EINVAL;
+ goto error;
+ }
+@@ -476,7 +529,9 @@ error:
+
+ static size_t fsi_spi_max_transfer_size(struct spi_device *spi)
+ {
+- return SPI_FSI_MAX_TRANSFER_SIZE;
++ struct fsi_spi *ctx = spi_controller_get_devdata(spi->controller);
++
++ return ctx->max_xfr_size;
+ }
+
+ static int fsi_spi_probe(struct device *dev)
+@@ -524,6 +579,14 @@ static int fsi_spi_probe(struct device *dev)
+ ctx->fsi = fsi;
+ ctx->base = base + SPI_FSI_BASE;
+
++ if (of_device_is_compatible(np, "ibm,fsi2spi-restricted")) {
++ ctx->restricted = true;
++ ctx->max_xfr_size = SPI_FSI_MAX_XFR_SIZE_RESTRICTED;
++ } else {
++ ctx->restricted = false;
++ ctx->max_xfr_size = SPI_FSI_MAX_XFR_SIZE;
++ }
++
+ rc = devm_spi_register_controller(dev, ctlr);
+ if (rc)
+ spi_controller_put(ctlr);
+diff --git a/drivers/spi/spi-omap2-mcspi.c b/drivers/spi/spi-omap2-mcspi.c
+index e9e256718ef4a..10d8a722b0833 100644
+--- a/drivers/spi/spi-omap2-mcspi.c
++++ b/drivers/spi/spi-omap2-mcspi.c
+@@ -24,7 +24,6 @@
+ #include <linux/of.h>
+ #include <linux/of_device.h>
+ #include <linux/gcd.h>
+-#include <linux/iopoll.h>
+
+ #include <linux/spi/spi.h>
+ #include <linux/gpio.h>
+@@ -349,9 +348,19 @@ disable_fifo:
+
+ static int mcspi_wait_for_reg_bit(void __iomem *reg, unsigned long bit)
+ {
+- u32 val;
+-
+- return readl_poll_timeout(reg, val, val & bit, 1, MSEC_PER_SEC);
++ unsigned long timeout;
++
++ timeout = jiffies + msecs_to_jiffies(1000);
++ while (!(readl_relaxed(reg) & bit)) {
++ if (time_after(jiffies, timeout)) {
++ if (!(readl_relaxed(reg) & bit))
++ return -ETIMEDOUT;
++ else
++ return 0;
++ }
++ cpu_relax();
++ }
++ return 0;
+ }
+
+ static int mcspi_wait_for_completion(struct omap2_mcspi *mcspi,
+diff --git a/drivers/spi/spi-s3c64xx.c b/drivers/spi/spi-s3c64xx.c
+index cf67ea60dc0ed..6587a7dc3f5ba 100644
+--- a/drivers/spi/spi-s3c64xx.c
++++ b/drivers/spi/spi-s3c64xx.c
+@@ -122,6 +122,7 @@
+
+ struct s3c64xx_spi_dma_data {
+ struct dma_chan *ch;
++ dma_cookie_t cookie;
+ enum dma_transfer_direction direction;
+ };
+
+@@ -264,12 +265,13 @@ static void s3c64xx_spi_dmacb(void *data)
+ spin_unlock_irqrestore(&sdd->lock, flags);
+ }
+
+-static void prepare_dma(struct s3c64xx_spi_dma_data *dma,
++static int prepare_dma(struct s3c64xx_spi_dma_data *dma,
+ struct sg_table *sgt)
+ {
+ struct s3c64xx_spi_driver_data *sdd;
+ struct dma_slave_config config;
+ struct dma_async_tx_descriptor *desc;
++ int ret;
+
+ memset(&config, 0, sizeof(config));
+
+@@ -293,12 +295,24 @@ static void prepare_dma(struct s3c64xx_spi_dma_data *dma,
+
+ desc = dmaengine_prep_slave_sg(dma->ch, sgt->sgl, sgt->nents,
+ dma->direction, DMA_PREP_INTERRUPT);
++ if (!desc) {
++ dev_err(&sdd->pdev->dev, "unable to prepare %s scatterlist",
++ dma->direction == DMA_DEV_TO_MEM ? "rx" : "tx");
++ return -ENOMEM;
++ }
+
+ desc->callback = s3c64xx_spi_dmacb;
+ desc->callback_param = dma;
+
+- dmaengine_submit(desc);
++ dma->cookie = dmaengine_submit(desc);
++ ret = dma_submit_error(dma->cookie);
++ if (ret) {
++ dev_err(&sdd->pdev->dev, "DMA submission failed");
++ return -EIO;
++ }
++
+ dma_async_issue_pending(dma->ch);
++ return 0;
+ }
+
+ static void s3c64xx_spi_set_cs(struct spi_device *spi, bool enable)
+@@ -348,11 +362,12 @@ static bool s3c64xx_spi_can_dma(struct spi_master *master,
+ return xfer->len > (FIFO_LVL_MASK(sdd) >> 1) + 1;
+ }
+
+-static void s3c64xx_enable_datapath(struct s3c64xx_spi_driver_data *sdd,
++static int s3c64xx_enable_datapath(struct s3c64xx_spi_driver_data *sdd,
+ struct spi_transfer *xfer, int dma_mode)
+ {
+ void __iomem *regs = sdd->regs;
+ u32 modecfg, chcfg;
++ int ret = 0;
+
+ modecfg = readl(regs + S3C64XX_SPI_MODE_CFG);
+ modecfg &= ~(S3C64XX_SPI_MODE_TXDMA_ON | S3C64XX_SPI_MODE_RXDMA_ON);
+@@ -378,7 +393,7 @@ static void s3c64xx_enable_datapath(struct s3c64xx_spi_driver_data *sdd,
+ chcfg |= S3C64XX_SPI_CH_TXCH_ON;
+ if (dma_mode) {
+ modecfg |= S3C64XX_SPI_MODE_TXDMA_ON;
+- prepare_dma(&sdd->tx_dma, &xfer->tx_sg);
++ ret = prepare_dma(&sdd->tx_dma, &xfer->tx_sg);
+ } else {
+ switch (sdd->cur_bpw) {
+ case 32:
+@@ -410,12 +425,17 @@ static void s3c64xx_enable_datapath(struct s3c64xx_spi_driver_data *sdd,
+ writel(((xfer->len * 8 / sdd->cur_bpw) & 0xffff)
+ | S3C64XX_SPI_PACKET_CNT_EN,
+ regs + S3C64XX_SPI_PACKET_CNT);
+- prepare_dma(&sdd->rx_dma, &xfer->rx_sg);
++ ret = prepare_dma(&sdd->rx_dma, &xfer->rx_sg);
+ }
+ }
+
++ if (ret)
++ return ret;
++
+ writel(modecfg, regs + S3C64XX_SPI_MODE_CFG);
+ writel(chcfg, regs + S3C64XX_SPI_CH_CFG);
++
++ return 0;
+ }
+
+ static u32 s3c64xx_spi_wait_for_timeout(struct s3c64xx_spi_driver_data *sdd,
+@@ -548,9 +568,10 @@ static int s3c64xx_wait_for_pio(struct s3c64xx_spi_driver_data *sdd,
+ return 0;
+ }
+
+-static void s3c64xx_spi_config(struct s3c64xx_spi_driver_data *sdd)
++static int s3c64xx_spi_config(struct s3c64xx_spi_driver_data *sdd)
+ {
+ void __iomem *regs = sdd->regs;
++ int ret;
+ u32 val;
+
+ /* Disable Clock */
+@@ -598,7 +619,9 @@ static void s3c64xx_spi_config(struct s3c64xx_spi_driver_data *sdd)
+
+ if (sdd->port_conf->clk_from_cmu) {
+ /* The src_clk clock is divided internally by 2 */
+- clk_set_rate(sdd->src_clk, sdd->cur_speed * 2);
++ ret = clk_set_rate(sdd->src_clk, sdd->cur_speed * 2);
++ if (ret)
++ return ret;
+ } else {
+ /* Configure Clock */
+ val = readl(regs + S3C64XX_SPI_CLK_CFG);
+@@ -612,6 +635,8 @@ static void s3c64xx_spi_config(struct s3c64xx_spi_driver_data *sdd)
+ val |= S3C64XX_SPI_ENCLK_ENABLE;
+ writel(val, regs + S3C64XX_SPI_CLK_CFG);
+ }
++
++ return 0;
+ }
+
+ #define XFER_DMAADDR_INVALID DMA_BIT_MASK(32)
+@@ -654,7 +679,9 @@ static int s3c64xx_spi_transfer_one(struct spi_master *master,
+ sdd->cur_bpw = bpw;
+ sdd->cur_speed = speed;
+ sdd->cur_mode = spi->mode;
+- s3c64xx_spi_config(sdd);
++ status = s3c64xx_spi_config(sdd);
++ if (status)
++ return status;
+ }
+
+ if (!is_polling(sdd) && (xfer->len > fifo_len) &&
+@@ -678,13 +705,18 @@ static int s3c64xx_spi_transfer_one(struct spi_master *master,
+ sdd->state &= ~RXBUSY;
+ sdd->state &= ~TXBUSY;
+
+- s3c64xx_enable_datapath(sdd, xfer, use_dma);
+-
+ /* Start the signals */
+ s3c64xx_spi_set_cs(spi, true);
+
++ status = s3c64xx_enable_datapath(sdd, xfer, use_dma);
++
+ spin_unlock_irqrestore(&sdd->lock, flags);
+
++ if (status) {
++ dev_err(&spi->dev, "failed to enable data path for transfer: %d\n", status);
++ break;
++ }
++
+ if (use_dma)
+ status = s3c64xx_wait_for_dma(sdd, xfer);
+ else
+diff --git a/drivers/staging/emxx_udc/emxx_udc.c b/drivers/staging/emxx_udc/emxx_udc.c
+index 03929b9d3a8bc..d0725bc8b48a4 100644
+--- a/drivers/staging/emxx_udc/emxx_udc.c
++++ b/drivers/staging/emxx_udc/emxx_udc.c
+@@ -2593,7 +2593,7 @@ static int nbu2ss_ep_queue(struct usb_ep *_ep,
+
+ if (req->unaligned) {
+ if (!ep->virt_buf)
+- ep->virt_buf = dma_alloc_coherent(NULL, PAGE_SIZE,
++ ep->virt_buf = dma_alloc_coherent(udc->dev, PAGE_SIZE,
+ &ep->phys_buf,
+ GFP_ATOMIC | GFP_DMA);
+ if (ep->epnum > 0) {
+@@ -3148,7 +3148,7 @@ static int nbu2ss_drv_remove(struct platform_device *pdev)
+ for (i = 0; i < NUM_ENDPOINTS; i++) {
+ ep = &udc->ep[i];
+ if (ep->virt_buf)
+- dma_free_coherent(NULL, PAGE_SIZE, (void *)ep->virt_buf,
++ dma_free_coherent(udc->dev, PAGE_SIZE, (void *)ep->virt_buf,
+ ep->phys_buf);
+ }
+
+diff --git a/drivers/staging/media/atomisp/pci/sh_css.c b/drivers/staging/media/atomisp/pci/sh_css.c
+index 54434c2dbaf90..8473e14370747 100644
+--- a/drivers/staging/media/atomisp/pci/sh_css.c
++++ b/drivers/staging/media/atomisp/pci/sh_css.c
+@@ -9521,7 +9521,7 @@ ia_css_stream_create(const struct ia_css_stream_config *stream_config,
+ if (err)
+ {
+ IA_CSS_LEAVE_ERR(err);
+- return err;
++ goto ERR;
+ }
+ #endif
+ for (i = 0; i < num_pipes; i++)
+diff --git a/drivers/staging/media/hantro/hantro_h264.c b/drivers/staging/media/hantro/hantro_h264.c
+index d561f125085a7..d72ebbd17a692 100644
+--- a/drivers/staging/media/hantro/hantro_h264.c
++++ b/drivers/staging/media/hantro/hantro_h264.c
+@@ -327,7 +327,7 @@ dma_addr_t hantro_h264_get_ref_buf(struct hantro_ctx *ctx,
+ */
+ dst_buf = hantro_get_dst_buf(ctx);
+ buf = &dst_buf->vb2_buf;
+- dma_addr = vb2_dma_contig_plane_dma_addr(buf, 0);
++ dma_addr = hantro_get_dec_buf_addr(ctx, buf);
+ }
+
+ return dma_addr;
+diff --git a/drivers/staging/media/hantro/hantro_postproc.c b/drivers/staging/media/hantro/hantro_postproc.c
+index 44062ffceaea7..6d2a8f2a8f0bb 100644
+--- a/drivers/staging/media/hantro/hantro_postproc.c
++++ b/drivers/staging/media/hantro/hantro_postproc.c
+@@ -118,7 +118,9 @@ int hantro_postproc_alloc(struct hantro_ctx *ctx)
+ unsigned int num_buffers = cap_queue->num_buffers;
+ unsigned int i, buf_size;
+
+- buf_size = ctx->dst_fmt.plane_fmt[0].sizeimage;
++ buf_size = ctx->dst_fmt.plane_fmt[0].sizeimage +
++ hantro_h264_mv_size(ctx->dst_fmt.width,
++ ctx->dst_fmt.height);
+
+ for (i = 0; i < num_buffers; ++i) {
+ struct hantro_aux_buf *priv = &ctx->postproc.dec_q[i];
+diff --git a/drivers/staging/media/ipu3/ipu3-css-params.c b/drivers/staging/media/ipu3/ipu3-css-params.c
+index fbd53d7c097cd..e9d6bd9e9332a 100644
+--- a/drivers/staging/media/ipu3/ipu3-css-params.c
++++ b/drivers/staging/media/ipu3/ipu3-css-params.c
+@@ -159,7 +159,7 @@ imgu_css_scaler_calc(u32 input_width, u32 input_height, u32 target_width,
+
+ memset(&cfg->scaler_coeffs_chroma, 0,
+ sizeof(cfg->scaler_coeffs_chroma));
+- memset(&cfg->scaler_coeffs_luma, 0, sizeof(*cfg->scaler_coeffs_luma));
++ memset(&cfg->scaler_coeffs_luma, 0, sizeof(cfg->scaler_coeffs_luma));
+ do {
+ phase_step_correction++;
+
+diff --git a/drivers/staging/media/phy-rockchip-dphy-rx0/phy-rockchip-dphy-rx0.c b/drivers/staging/media/phy-rockchip-dphy-rx0/phy-rockchip-dphy-rx0.c
+index 7c4df6d48c43d..4df9476ef2a9b 100644
+--- a/drivers/staging/media/phy-rockchip-dphy-rx0/phy-rockchip-dphy-rx0.c
++++ b/drivers/staging/media/phy-rockchip-dphy-rx0/phy-rockchip-dphy-rx0.c
+@@ -16,6 +16,7 @@
+ */
+
+ #include <linux/clk.h>
++#include <linux/delay.h>
+ #include <linux/io.h>
+ #include <linux/mfd/syscon.h>
+ #include <linux/module.h>
+diff --git a/drivers/staging/rtl8192u/ieee80211/ieee80211_rx.c b/drivers/staging/rtl8192u/ieee80211/ieee80211_rx.c
+index 195d963c4fbb4..b6fee7230ce05 100644
+--- a/drivers/staging/rtl8192u/ieee80211/ieee80211_rx.c
++++ b/drivers/staging/rtl8192u/ieee80211/ieee80211_rx.c
+@@ -597,7 +597,7 @@ static void RxReorderIndicatePacket(struct ieee80211_device *ieee,
+
+ prxbIndicateArray = kmalloc_array(REORDER_WIN_SIZE,
+ sizeof(struct ieee80211_rxb *),
+- GFP_KERNEL);
++ GFP_ATOMIC);
+ if (!prxbIndicateArray)
+ return;
+
+diff --git a/drivers/staging/wfx/data_rx.c b/drivers/staging/wfx/data_rx.c
+index 0e959ebc38b56..a9fb5165b33d9 100644
+--- a/drivers/staging/wfx/data_rx.c
++++ b/drivers/staging/wfx/data_rx.c
+@@ -80,7 +80,7 @@ void wfx_rx_cb(struct wfx_vif *wvif,
+ goto drop;
+
+ if (arg->status == HIF_STATUS_RX_FAIL_MIC)
+- hdr->flag |= RX_FLAG_MMIC_ERROR;
++ hdr->flag |= RX_FLAG_MMIC_ERROR | RX_FLAG_IV_STRIPPED;
+ else if (arg->status)
+ goto drop;
+
+diff --git a/drivers/staging/wilc1000/mon.c b/drivers/staging/wilc1000/mon.c
+index 60331417bd983..66f1c870f4f69 100644
+--- a/drivers/staging/wilc1000/mon.c
++++ b/drivers/staging/wilc1000/mon.c
+@@ -236,11 +236,10 @@ struct net_device *wilc_wfi_init_mon_interface(struct wilc *wl,
+
+ if (register_netdevice(wl->monitor_dev)) {
+ netdev_err(real_dev, "register_netdevice failed\n");
++ free_netdev(wl->monitor_dev);
+ return NULL;
+ }
+ priv = netdev_priv(wl->monitor_dev);
+- if (!priv)
+- return NULL;
+
+ priv->real_ndev = real_dev;
+
+diff --git a/drivers/staging/wilc1000/sdio.c b/drivers/staging/wilc1000/sdio.c
+index 36eb589263bfd..b14e4ed6134fc 100644
+--- a/drivers/staging/wilc1000/sdio.c
++++ b/drivers/staging/wilc1000/sdio.c
+@@ -151,9 +151,10 @@ static int wilc_sdio_probe(struct sdio_func *func,
+ wilc->dev = &func->dev;
+
+ wilc->rtc_clk = devm_clk_get(&func->card->dev, "rtc");
+- if (PTR_ERR_OR_ZERO(wilc->rtc_clk) == -EPROBE_DEFER)
++ if (PTR_ERR_OR_ZERO(wilc->rtc_clk) == -EPROBE_DEFER) {
++ kfree(sdio_priv);
+ return -EPROBE_DEFER;
+- else if (!IS_ERR(wilc->rtc_clk))
++ } else if (!IS_ERR(wilc->rtc_clk))
+ clk_prepare_enable(wilc->rtc_clk);
+
+ dev_info(&func->dev, "Driver Initializing success\n");
+diff --git a/drivers/staging/wilc1000/spi.c b/drivers/staging/wilc1000/spi.c
+index 3f19e3f38a397..a18dac0aa6b67 100644
+--- a/drivers/staging/wilc1000/spi.c
++++ b/drivers/staging/wilc1000/spi.c
+@@ -112,9 +112,10 @@ static int wilc_bus_probe(struct spi_device *spi)
+ wilc->dev_irq_num = spi->irq;
+
+ wilc->rtc_clk = devm_clk_get(&spi->dev, "rtc_clk");
+- if (PTR_ERR_OR_ZERO(wilc->rtc_clk) == -EPROBE_DEFER)
++ if (PTR_ERR_OR_ZERO(wilc->rtc_clk) == -EPROBE_DEFER) {
++ kfree(spi_priv);
+ return -EPROBE_DEFER;
+- else if (!IS_ERR(wilc->rtc_clk))
++ } else if (!IS_ERR(wilc->rtc_clk))
+ clk_prepare_enable(wilc->rtc_clk);
+
+ return 0;
+diff --git a/drivers/target/target_core_user.c b/drivers/target/target_core_user.c
+index 0209bc23e631e..13a280c780c39 100644
+--- a/drivers/target/target_core_user.c
++++ b/drivers/target/target_core_user.c
+@@ -669,7 +669,7 @@ static void scatter_data_area(struct tcmu_dev *udev,
+ void *from, *to = NULL;
+ size_t copy_bytes, to_offset, offset;
+ struct scatterlist *sg;
+- struct page *page;
++ struct page *page = NULL;
+
+ for_each_sg(data_sg, sg, data_nents, i) {
+ int sg_remaining = sg->length;
+diff --git a/drivers/tty/hvc/Kconfig b/drivers/tty/hvc/Kconfig
+index d1b27b0522a3c..8d60e0ff67b4d 100644
+--- a/drivers/tty/hvc/Kconfig
++++ b/drivers/tty/hvc/Kconfig
+@@ -81,6 +81,7 @@ config HVC_DCC
+ bool "ARM JTAG DCC console"
+ depends on ARM || ARM64
+ select HVC_DRIVER
++ select SERIAL_CORE_CONSOLE
+ help
+ This console uses the JTAG DCC on ARM to create a console under the HVC
+ driver. This console is used through a JTAG only on ARM. If you don't have
+diff --git a/drivers/tty/hvc/hvcs.c b/drivers/tty/hvc/hvcs.c
+index 55105ac38f89b..509d1042825a1 100644
+--- a/drivers/tty/hvc/hvcs.c
++++ b/drivers/tty/hvc/hvcs.c
+@@ -1216,13 +1216,6 @@ static void hvcs_close(struct tty_struct *tty, struct file *filp)
+
+ tty_wait_until_sent(tty, HVCS_CLOSE_WAIT);
+
+- /*
+- * This line is important because it tells hvcs_open that this
+- * device needs to be re-configured the next time hvcs_open is
+- * called.
+- */
+- tty->driver_data = NULL;
+-
+ free_irq(irq, hvcsd);
+ return;
+ } else if (hvcsd->port.count < 0) {
+@@ -1237,6 +1230,13 @@ static void hvcs_cleanup(struct tty_struct * tty)
+ {
+ struct hvcs_struct *hvcsd = tty->driver_data;
+
++ /*
++ * This line is important because it tells hvcs_open that this
++ * device needs to be re-configured the next time hvcs_open is
++ * called.
++ */
++ tty->driver_data = NULL;
++
+ tty_port_put(&hvcsd->port);
+ }
+
+diff --git a/drivers/tty/ipwireless/network.c b/drivers/tty/ipwireless/network.c
+index cf20616340a1a..fe569f6294a24 100644
+--- a/drivers/tty/ipwireless/network.c
++++ b/drivers/tty/ipwireless/network.c
+@@ -117,7 +117,7 @@ static int ipwireless_ppp_start_xmit(struct ppp_channel *ppp_channel,
+ skb->len,
+ notify_packet_sent,
+ network);
+- if (ret == -1) {
++ if (ret < 0) {
+ skb_pull(skb, 2);
+ return 0;
+ }
+@@ -134,7 +134,7 @@ static int ipwireless_ppp_start_xmit(struct ppp_channel *ppp_channel,
+ notify_packet_sent,
+ network);
+ kfree(buf);
+- if (ret == -1)
++ if (ret < 0)
+ return 0;
+ }
+ kfree_skb(skb);
+diff --git a/drivers/tty/ipwireless/tty.c b/drivers/tty/ipwireless/tty.c
+index fad3401e604d9..23584769fc292 100644
+--- a/drivers/tty/ipwireless/tty.c
++++ b/drivers/tty/ipwireless/tty.c
+@@ -218,7 +218,7 @@ static int ipw_write(struct tty_struct *linux_tty,
+ ret = ipwireless_send_packet(tty->hardware, IPW_CHANNEL_RAS,
+ buf, count,
+ ipw_write_packet_sent_callback, tty);
+- if (ret == -1) {
++ if (ret < 0) {
+ mutex_unlock(&tty->ipw_tty_mutex);
+ return 0;
+ }
+diff --git a/drivers/tty/pty.c b/drivers/tty/pty.c
+index 00099a8439d21..c6a1d8c4e6894 100644
+--- a/drivers/tty/pty.c
++++ b/drivers/tty/pty.c
+@@ -120,10 +120,10 @@ static int pty_write(struct tty_struct *tty, const unsigned char *buf, int c)
+ spin_lock_irqsave(&to->port->lock, flags);
+ /* Stuff the data into the input queue of the other end */
+ c = tty_insert_flip_string(to->port, buf, c);
++ spin_unlock_irqrestore(&to->port->lock, flags);
+ /* And shovel */
+ if (c)
+ tty_flip_buffer_push(to->port);
+- spin_unlock_irqrestore(&to->port->lock, flags);
+ }
+ return c;
+ }
+diff --git a/drivers/tty/serial/Kconfig b/drivers/tty/serial/Kconfig
+index 780908d435577..896b9c77117d3 100644
+--- a/drivers/tty/serial/Kconfig
++++ b/drivers/tty/serial/Kconfig
+@@ -8,6 +8,7 @@ menu "Serial drivers"
+
+ config SERIAL_EARLYCON
+ bool
++ depends on SERIAL_CORE
+ help
+ Support for early consoles with the earlycon parameter. This enables
+ the console before standard serial driver is probed. The console is
+diff --git a/drivers/tty/serial/fsl_lpuart.c b/drivers/tty/serial/fsl_lpuart.c
+index 90298c4030421..f8ba7690efe31 100644
+--- a/drivers/tty/serial/fsl_lpuart.c
++++ b/drivers/tty/serial/fsl_lpuart.c
+@@ -649,26 +649,24 @@ static int lpuart32_poll_init(struct uart_port *port)
+ spin_lock_irqsave(&sport->port.lock, flags);
+
+ /* Disable Rx & Tx */
+- lpuart32_write(&sport->port, UARTCTRL, 0);
++ lpuart32_write(&sport->port, 0, UARTCTRL);
+
+ temp = lpuart32_read(&sport->port, UARTFIFO);
+
+ /* Enable Rx and Tx FIFO */
+- lpuart32_write(&sport->port, UARTFIFO,
+- temp | UARTFIFO_RXFE | UARTFIFO_TXFE);
++ lpuart32_write(&sport->port, temp | UARTFIFO_RXFE | UARTFIFO_TXFE, UARTFIFO);
+
+ /* flush Tx and Rx FIFO */
+- lpuart32_write(&sport->port, UARTFIFO,
+- UARTFIFO_TXFLUSH | UARTFIFO_RXFLUSH);
++ lpuart32_write(&sport->port, UARTFIFO_TXFLUSH | UARTFIFO_RXFLUSH, UARTFIFO);
+
+ /* explicitly clear RDRF */
+ if (lpuart32_read(&sport->port, UARTSTAT) & UARTSTAT_RDRF) {
+ lpuart32_read(&sport->port, UARTDATA);
+- lpuart32_write(&sport->port, UARTFIFO, UARTFIFO_RXUF);
++ lpuart32_write(&sport->port, UARTFIFO_RXUF, UARTFIFO);
+ }
+
+ /* Enable Rx and Tx */
+- lpuart32_write(&sport->port, UARTCTRL, UARTCTRL_RE | UARTCTRL_TE);
++ lpuart32_write(&sport->port, UARTCTRL_RE | UARTCTRL_TE, UARTCTRL);
+ spin_unlock_irqrestore(&sport->port.lock, flags);
+
+ return 0;
+@@ -677,12 +675,12 @@ static int lpuart32_poll_init(struct uart_port *port)
+ static void lpuart32_poll_put_char(struct uart_port *port, unsigned char c)
+ {
+ lpuart32_wait_bit_set(port, UARTSTAT, UARTSTAT_TDRE);
+- lpuart32_write(port, UARTDATA, c);
++ lpuart32_write(port, c, UARTDATA);
+ }
+
+ static int lpuart32_poll_get_char(struct uart_port *port)
+ {
+- if (!(lpuart32_read(port, UARTSTAT) & UARTSTAT_RDRF))
++ if (!(lpuart32_read(port, UARTWATER) >> UARTWATER_RXCNT_OFF))
+ return NO_POLL_CHAR;
+
+ return lpuart32_read(port, UARTDATA);
+diff --git a/drivers/usb/cdns3/gadget.c b/drivers/usb/cdns3/gadget.c
+index 37ae7fc5f8dd8..7bac485b49ba9 100644
+--- a/drivers/usb/cdns3/gadget.c
++++ b/drivers/usb/cdns3/gadget.c
+@@ -2988,12 +2988,12 @@ void cdns3_gadget_exit(struct cdns3 *cdns)
+
+ priv_dev = cdns->gadget_dev;
+
+- devm_free_irq(cdns->dev, cdns->dev_irq, priv_dev);
+
+ pm_runtime_mark_last_busy(cdns->dev);
+ pm_runtime_put_autosuspend(cdns->dev);
+
+ usb_del_gadget_udc(&priv_dev->gadget);
++ devm_free_irq(cdns->dev, cdns->dev_irq, priv_dev);
+
+ cdns3_free_all_eps(priv_dev);
+
+diff --git a/drivers/usb/class/cdc-acm.c b/drivers/usb/class/cdc-acm.c
+index 7499ba118665a..808722b8294a4 100644
+--- a/drivers/usb/class/cdc-acm.c
++++ b/drivers/usb/class/cdc-acm.c
+@@ -1243,9 +1243,21 @@ static int acm_probe(struct usb_interface *intf,
+ }
+ }
+ } else {
++ int class = -1;
++
+ data_intf_num = union_header->bSlaveInterface0;
+ control_interface = usb_ifnum_to_if(usb_dev, union_header->bMasterInterface0);
+ data_interface = usb_ifnum_to_if(usb_dev, data_intf_num);
++
++ if (control_interface)
++ class = control_interface->cur_altsetting->desc.bInterfaceClass;
++
++ if (class != USB_CLASS_COMM && class != USB_CLASS_CDC_DATA) {
++ dev_dbg(&intf->dev, "Broken union descriptor, assuming single interface\n");
++ combined_interfaces = 1;
++ control_interface = data_interface = intf;
++ goto look_for_collapsed_interface;
++ }
+ }
+
+ if (!control_interface || !data_interface) {
+@@ -1900,6 +1912,17 @@ static const struct usb_device_id acm_ids[] = {
+ .driver_info = IGNORE_DEVICE,
+ },
+
++ /* Exclude ETAS ES58x */
++ { USB_DEVICE(0x108c, 0x0159), /* ES581.4 */
++ .driver_info = IGNORE_DEVICE,
++ },
++ { USB_DEVICE(0x108c, 0x0168), /* ES582.1 */
++ .driver_info = IGNORE_DEVICE,
++ },
++ { USB_DEVICE(0x108c, 0x0169), /* ES584.1 */
++ .driver_info = IGNORE_DEVICE,
++ },
++
+ { USB_DEVICE(0x1bc7, 0x0021), /* Telit 3G ACM only composition */
+ .driver_info = SEND_ZERO_PACKET,
+ },
+diff --git a/drivers/usb/class/cdc-wdm.c b/drivers/usb/class/cdc-wdm.c
+index e3db6fbeadef8..0c7a0adfd1e1f 100644
+--- a/drivers/usb/class/cdc-wdm.c
++++ b/drivers/usb/class/cdc-wdm.c
+@@ -58,6 +58,9 @@ MODULE_DEVICE_TABLE (usb, wdm_ids);
+
+ #define WDM_MAX 16
+
++/* we cannot wait forever at flush() */
++#define WDM_FLUSH_TIMEOUT (30 * HZ)
++
+ /* CDC-WMC r1.1 requires wMaxCommand to be "at least 256 decimal (0x100)" */
+ #define WDM_DEFAULT_BUFSIZE 256
+
+@@ -151,7 +154,7 @@ static void wdm_out_callback(struct urb *urb)
+ kfree(desc->outbuf);
+ desc->outbuf = NULL;
+ clear_bit(WDM_IN_USE, &desc->flags);
+- wake_up(&desc->wait);
++ wake_up_all(&desc->wait);
+ }
+
+ static void wdm_in_callback(struct urb *urb)
+@@ -393,6 +396,9 @@ static ssize_t wdm_write
+ if (test_bit(WDM_RESETTING, &desc->flags))
+ r = -EIO;
+
++ if (test_bit(WDM_DISCONNECTING, &desc->flags))
++ r = -ENODEV;
++
+ if (r < 0) {
+ rv = r;
+ goto out_free_mem_pm;
+@@ -424,6 +430,7 @@ static ssize_t wdm_write
+ if (rv < 0) {
+ desc->outbuf = NULL;
+ clear_bit(WDM_IN_USE, &desc->flags);
++ wake_up_all(&desc->wait); /* for wdm_wait_for_response() */
+ dev_err(&desc->intf->dev, "Tx URB error: %d\n", rv);
+ rv = usb_translate_errors(rv);
+ goto out_free_mem_pm;
+@@ -583,28 +590,58 @@ err:
+ return rv;
+ }
+
+-static int wdm_flush(struct file *file, fl_owner_t id)
++static int wdm_wait_for_response(struct file *file, long timeout)
+ {
+ struct wdm_device *desc = file->private_data;
++ long rv; /* Use long here because (int) MAX_SCHEDULE_TIMEOUT < 0. */
++
++ /*
++ * Needs both flags. We cannot do with one because resetting it would
++ * cause a race with write() yet we need to signal a disconnect.
++ */
++ rv = wait_event_interruptible_timeout(desc->wait,
++ !test_bit(WDM_IN_USE, &desc->flags) ||
++ test_bit(WDM_DISCONNECTING, &desc->flags),
++ timeout);
+
+- wait_event(desc->wait,
+- /*
+- * needs both flags. We cannot do with one
+- * because resetting it would cause a race
+- * with write() yet we need to signal
+- * a disconnect
+- */
+- !test_bit(WDM_IN_USE, &desc->flags) ||
+- test_bit(WDM_DISCONNECTING, &desc->flags));
+-
+- /* cannot dereference desc->intf if WDM_DISCONNECTING */
++ /*
++ * To report the correct error. This is best effort.
++ * We are inevitably racing with the hardware.
++ */
+ if (test_bit(WDM_DISCONNECTING, &desc->flags))
+ return -ENODEV;
+- if (desc->werr < 0)
+- dev_err(&desc->intf->dev, "Error in flush path: %d\n",
+- desc->werr);
++ if (!rv)
++ return -EIO;
++ if (rv < 0)
++ return -EINTR;
++
++ spin_lock_irq(&desc->iuspin);
++ rv = desc->werr;
++ desc->werr = 0;
++ spin_unlock_irq(&desc->iuspin);
++
++ return usb_translate_errors(rv);
++
++}
++
++/*
++ * You need to send a signal when you react to malicious or defective hardware.
++ * Also, don't abort when fsync() returned -EINVAL, for older kernels which do
++ * not implement wdm_flush() will return -EINVAL.
++ */
++static int wdm_fsync(struct file *file, loff_t start, loff_t end, int datasync)
++{
++ return wdm_wait_for_response(file, MAX_SCHEDULE_TIMEOUT);
++}
+
+- return usb_translate_errors(desc->werr);
++/*
++ * Same with wdm_fsync(), except it uses finite timeout in order to react to
++ * malicious or defective hardware which ceased communication after close() was
++ * implicitly called due to process termination.
++ */
++static int wdm_flush(struct file *file, fl_owner_t id)
++{
++ return wdm_wait_for_response(file, WDM_FLUSH_TIMEOUT);
+ }
+
+ static __poll_t wdm_poll(struct file *file, struct poll_table_struct *wait)
+@@ -729,6 +766,7 @@ static const struct file_operations wdm_fops = {
+ .owner = THIS_MODULE,
+ .read = wdm_read,
+ .write = wdm_write,
++ .fsync = wdm_fsync,
+ .open = wdm_open,
+ .flush = wdm_flush,
+ .release = wdm_release,
+diff --git a/drivers/usb/core/urb.c b/drivers/usb/core/urb.c
+index da923ec176122..31ca5abb4c12a 100644
+--- a/drivers/usb/core/urb.c
++++ b/drivers/usb/core/urb.c
+@@ -772,11 +772,12 @@ void usb_block_urb(struct urb *urb)
+ EXPORT_SYMBOL_GPL(usb_block_urb);
+
+ /**
+- * usb_kill_anchored_urbs - cancel transfer requests en masse
++ * usb_kill_anchored_urbs - kill all URBs associated with an anchor
+ * @anchor: anchor the requests are bound to
+ *
+- * this allows all outstanding URBs to be killed starting
+- * from the back of the queue
++ * This kills all outstanding URBs starting from the back of the queue,
++ * with guarantee that no completer callbacks will take place from the
++ * anchor after this function returns.
+ *
+ * This routine should not be called by a driver after its disconnect
+ * method has returned.
+@@ -784,20 +785,26 @@ EXPORT_SYMBOL_GPL(usb_block_urb);
+ void usb_kill_anchored_urbs(struct usb_anchor *anchor)
+ {
+ struct urb *victim;
++ int surely_empty;
+
+- spin_lock_irq(&anchor->lock);
+- while (!list_empty(&anchor->urb_list)) {
+- victim = list_entry(anchor->urb_list.prev, struct urb,
+- anchor_list);
+- /* we must make sure the URB isn't freed before we kill it*/
+- usb_get_urb(victim);
+- spin_unlock_irq(&anchor->lock);
+- /* this will unanchor the URB */
+- usb_kill_urb(victim);
+- usb_put_urb(victim);
++ do {
+ spin_lock_irq(&anchor->lock);
+- }
+- spin_unlock_irq(&anchor->lock);
++ while (!list_empty(&anchor->urb_list)) {
++ victim = list_entry(anchor->urb_list.prev,
++ struct urb, anchor_list);
++ /* make sure the URB isn't freed before we kill it */
++ usb_get_urb(victim);
++ spin_unlock_irq(&anchor->lock);
++ /* this will unanchor the URB */
++ usb_kill_urb(victim);
++ usb_put_urb(victim);
++ spin_lock_irq(&anchor->lock);
++ }
++ surely_empty = usb_anchor_check_wakeup(anchor);
++
++ spin_unlock_irq(&anchor->lock);
++ cpu_relax();
++ } while (!surely_empty);
+ }
+ EXPORT_SYMBOL_GPL(usb_kill_anchored_urbs);
+
+@@ -816,21 +823,27 @@ EXPORT_SYMBOL_GPL(usb_kill_anchored_urbs);
+ void usb_poison_anchored_urbs(struct usb_anchor *anchor)
+ {
+ struct urb *victim;
++ int surely_empty;
+
+- spin_lock_irq(&anchor->lock);
+- anchor->poisoned = 1;
+- while (!list_empty(&anchor->urb_list)) {
+- victim = list_entry(anchor->urb_list.prev, struct urb,
+- anchor_list);
+- /* we must make sure the URB isn't freed before we kill it*/
+- usb_get_urb(victim);
+- spin_unlock_irq(&anchor->lock);
+- /* this will unanchor the URB */
+- usb_poison_urb(victim);
+- usb_put_urb(victim);
++ do {
+ spin_lock_irq(&anchor->lock);
+- }
+- spin_unlock_irq(&anchor->lock);
++ anchor->poisoned = 1;
++ while (!list_empty(&anchor->urb_list)) {
++ victim = list_entry(anchor->urb_list.prev,
++ struct urb, anchor_list);
++ /* make sure the URB isn't freed before we kill it */
++ usb_get_urb(victim);
++ spin_unlock_irq(&anchor->lock);
++ /* this will unanchor the URB */
++ usb_poison_urb(victim);
++ usb_put_urb(victim);
++ spin_lock_irq(&anchor->lock);
++ }
++ surely_empty = usb_anchor_check_wakeup(anchor);
++
++ spin_unlock_irq(&anchor->lock);
++ cpu_relax();
++ } while (!surely_empty);
+ }
+ EXPORT_SYMBOL_GPL(usb_poison_anchored_urbs);
+
+@@ -970,14 +983,20 @@ void usb_scuttle_anchored_urbs(struct usb_anchor *anchor)
+ {
+ struct urb *victim;
+ unsigned long flags;
++ int surely_empty;
++
++ do {
++ spin_lock_irqsave(&anchor->lock, flags);
++ while (!list_empty(&anchor->urb_list)) {
++ victim = list_entry(anchor->urb_list.prev,
++ struct urb, anchor_list);
++ __usb_unanchor_urb(victim, anchor);
++ }
++ surely_empty = usb_anchor_check_wakeup(anchor);
+
+- spin_lock_irqsave(&anchor->lock, flags);
+- while (!list_empty(&anchor->urb_list)) {
+- victim = list_entry(anchor->urb_list.prev, struct urb,
+- anchor_list);
+- __usb_unanchor_urb(victim, anchor);
+- }
+- spin_unlock_irqrestore(&anchor->lock, flags);
++ spin_unlock_irqrestore(&anchor->lock, flags);
++ cpu_relax();
++ } while (!surely_empty);
+ }
+
+ EXPORT_SYMBOL_GPL(usb_scuttle_anchored_urbs);
+diff --git a/drivers/usb/dwc2/gadget.c b/drivers/usb/dwc2/gadget.c
+index 7faf5f8c056d4..642926f9670e6 100644
+--- a/drivers/usb/dwc2/gadget.c
++++ b/drivers/usb/dwc2/gadget.c
+@@ -712,8 +712,11 @@ static u32 dwc2_hsotg_read_frameno(struct dwc2_hsotg *hsotg)
+ */
+ static unsigned int dwc2_gadget_get_chain_limit(struct dwc2_hsotg_ep *hs_ep)
+ {
++ const struct usb_endpoint_descriptor *ep_desc = hs_ep->ep.desc;
+ int is_isoc = hs_ep->isochronous;
+ unsigned int maxsize;
++ u32 mps = hs_ep->ep.maxpacket;
++ int dir_in = hs_ep->dir_in;
+
+ if (is_isoc)
+ maxsize = (hs_ep->dir_in ? DEV_DMA_ISOC_TX_NBYTES_LIMIT :
+@@ -722,6 +725,11 @@ static unsigned int dwc2_gadget_get_chain_limit(struct dwc2_hsotg_ep *hs_ep)
+ else
+ maxsize = DEV_DMA_NBYTES_LIMIT * MAX_DMA_DESC_NUM_GENERIC;
+
++ /* Interrupt OUT EP with mps not multiple of 4 */
++ if (hs_ep->index)
++ if (usb_endpoint_xfer_int(ep_desc) && !dir_in && (mps % 4))
++ maxsize = mps * MAX_DMA_DESC_NUM_GENERIC;
++
+ return maxsize;
+ }
+
+@@ -737,11 +745,14 @@ static unsigned int dwc2_gadget_get_chain_limit(struct dwc2_hsotg_ep *hs_ep)
+ * Isochronous - descriptor rx/tx bytes bitfield limit,
+ * Control In/Bulk/Interrupt - multiple of mps. This will allow to not
+ * have concatenations from various descriptors within one packet.
++ * Interrupt OUT - if mps not multiple of 4 then a single packet corresponds
++ * to a single descriptor.
+ *
+ * Selects corresponding mask for RX/TX bytes as well.
+ */
+ static u32 dwc2_gadget_get_desc_params(struct dwc2_hsotg_ep *hs_ep, u32 *mask)
+ {
++ const struct usb_endpoint_descriptor *ep_desc = hs_ep->ep.desc;
+ u32 mps = hs_ep->ep.maxpacket;
+ int dir_in = hs_ep->dir_in;
+ u32 desc_size = 0;
+@@ -765,6 +776,13 @@ static u32 dwc2_gadget_get_desc_params(struct dwc2_hsotg_ep *hs_ep, u32 *mask)
+ desc_size -= desc_size % mps;
+ }
+
++ /* Interrupt OUT EP with mps not multiple of 4 */
++ if (hs_ep->index)
++ if (usb_endpoint_xfer_int(ep_desc) && !dir_in && (mps % 4)) {
++ desc_size = mps;
++ *mask = DEV_DMA_NBYTES_MASK;
++ }
++
+ return desc_size;
+ }
+
+@@ -1123,13 +1141,7 @@ static void dwc2_hsotg_start_req(struct dwc2_hsotg *hsotg,
+ length += (mps - (length % mps));
+ }
+
+- /*
+- * If more data to send, adjust DMA for EP0 out data stage.
+- * ureq->dma stays unchanged, hence increment it by already
+- * passed passed data count before starting new transaction.
+- */
+- if (!index && hsotg->ep0_state == DWC2_EP0_DATA_OUT &&
+- continuing)
++ if (continuing)
+ offset = ureq->actual;
+
+ /* Fill DDMA chain entries */
+@@ -2320,22 +2332,36 @@ static void dwc2_hsotg_change_ep_iso_parity(struct dwc2_hsotg *hsotg,
+ */
+ static unsigned int dwc2_gadget_get_xfersize_ddma(struct dwc2_hsotg_ep *hs_ep)
+ {
++ const struct usb_endpoint_descriptor *ep_desc = hs_ep->ep.desc;
+ struct dwc2_hsotg *hsotg = hs_ep->parent;
+ unsigned int bytes_rem = 0;
++ unsigned int bytes_rem_correction = 0;
+ struct dwc2_dma_desc *desc = hs_ep->desc_list;
+ int i;
+ u32 status;
++ u32 mps = hs_ep->ep.maxpacket;
++ int dir_in = hs_ep->dir_in;
+
+ if (!desc)
+ return -EINVAL;
+
++ /* Interrupt OUT EP with mps not multiple of 4 */
++ if (hs_ep->index)
++ if (usb_endpoint_xfer_int(ep_desc) && !dir_in && (mps % 4))
++ bytes_rem_correction = 4 - (mps % 4);
++
+ for (i = 0; i < hs_ep->desc_count; ++i) {
+ status = desc->status;
+ bytes_rem += status & DEV_DMA_NBYTES_MASK;
++ bytes_rem -= bytes_rem_correction;
+
+ if (status & DEV_DMA_STS_MASK)
+ dev_err(hsotg->dev, "descriptor %d closed with %x\n",
+ i, status & DEV_DMA_STS_MASK);
++
++ if (status & DEV_DMA_L)
++ break;
++
+ desc++;
+ }
+
+diff --git a/drivers/usb/dwc2/params.c b/drivers/usb/dwc2/params.c
+index ce736d67c7c34..fd73ddd8eb753 100644
+--- a/drivers/usb/dwc2/params.c
++++ b/drivers/usb/dwc2/params.c
+@@ -860,7 +860,7 @@ int dwc2_get_hwparams(struct dwc2_hsotg *hsotg)
+ int dwc2_init_params(struct dwc2_hsotg *hsotg)
+ {
+ const struct of_device_id *match;
+- void (*set_params)(void *data);
++ void (*set_params)(struct dwc2_hsotg *data);
+
+ dwc2_set_default_params(hsotg);
+ dwc2_get_device_properties(hsotg);
+diff --git a/drivers/usb/dwc2/platform.c b/drivers/usb/dwc2/platform.c
+index db9fd4bd1a38c..b28e90e0b685d 100644
+--- a/drivers/usb/dwc2/platform.c
++++ b/drivers/usb/dwc2/platform.c
+@@ -584,12 +584,16 @@ static int dwc2_driver_probe(struct platform_device *dev)
+ if (retval) {
+ hsotg->gadget.udc = NULL;
+ dwc2_hsotg_remove(hsotg);
+- goto error_init;
++ goto error_debugfs;
+ }
+ }
+ #endif /* CONFIG_USB_DWC2_PERIPHERAL || CONFIG_USB_DWC2_DUAL_ROLE */
+ return 0;
+
++error_debugfs:
++ dwc2_debugfs_exit(hsotg);
++ if (hsotg->hcd_enabled)
++ dwc2_hcd_remove(hsotg);
+ error_init:
+ if (hsotg->params.activate_stm_id_vb_detection)
+ regulator_disable(hsotg->usb33d);
+diff --git a/drivers/usb/dwc3/core.c b/drivers/usb/dwc3/core.c
+index 25c686a752b0f..928a85b0d1cdd 100644
+--- a/drivers/usb/dwc3/core.c
++++ b/drivers/usb/dwc3/core.c
+@@ -119,6 +119,7 @@ static void __dwc3_set_mode(struct work_struct *work)
+ struct dwc3 *dwc = work_to_dwc(work);
+ unsigned long flags;
+ int ret;
++ u32 reg;
+
+ if (dwc->dr_mode != USB_DR_MODE_OTG)
+ return;
+@@ -172,6 +173,11 @@ static void __dwc3_set_mode(struct work_struct *work)
+ otg_set_vbus(dwc->usb2_phy->otg, true);
+ phy_set_mode(dwc->usb2_generic_phy, PHY_MODE_USB_HOST);
+ phy_set_mode(dwc->usb3_generic_phy, PHY_MODE_USB_HOST);
++ if (dwc->dis_split_quirk) {
++ reg = dwc3_readl(dwc->regs, DWC3_GUCTL3);
++ reg |= DWC3_GUCTL3_SPLITDISABLE;
++ dwc3_writel(dwc->regs, DWC3_GUCTL3, reg);
++ }
+ }
+ break;
+ case DWC3_GCTL_PRTCAP_DEVICE:
+@@ -930,13 +936,6 @@ static int dwc3_core_init(struct dwc3 *dwc)
+ */
+ dwc3_writel(dwc->regs, DWC3_GUID, LINUX_VERSION_CODE);
+
+- /* Handle USB2.0-only core configuration */
+- if (DWC3_GHWPARAMS3_SSPHY_IFC(dwc->hwparams.hwparams3) ==
+- DWC3_GHWPARAMS3_SSPHY_IFC_DIS) {
+- if (dwc->maximum_speed == USB_SPEED_SUPER)
+- dwc->maximum_speed = USB_SPEED_HIGH;
+- }
+-
+ ret = dwc3_phy_setup(dwc);
+ if (ret)
+ goto err0;
+@@ -1357,6 +1356,9 @@ static void dwc3_get_properties(struct dwc3 *dwc)
+ dwc->dis_metastability_quirk = device_property_read_bool(dev,
+ "snps,dis_metastability_quirk");
+
++ dwc->dis_split_quirk = device_property_read_bool(dev,
++ "snps,dis-split-quirk");
++
+ dwc->lpm_nyet_threshold = lpm_nyet_threshold;
+ dwc->tx_de_emphasis = tx_de_emphasis;
+
+@@ -1382,6 +1384,8 @@ bool dwc3_has_imod(struct dwc3 *dwc)
+ static void dwc3_check_params(struct dwc3 *dwc)
+ {
+ struct device *dev = dwc->dev;
++ unsigned int hwparam_gen =
++ DWC3_GHWPARAMS3_SSPHY_IFC(dwc->hwparams.hwparams3);
+
+ /* Check for proper value of imod_interval */
+ if (dwc->imod_interval && !dwc3_has_imod(dwc)) {
+@@ -1413,17 +1417,23 @@ static void dwc3_check_params(struct dwc3 *dwc)
+ dwc->maximum_speed);
+ /* fall through */
+ case USB_SPEED_UNKNOWN:
+- /* default to superspeed */
+- dwc->maximum_speed = USB_SPEED_SUPER;
+-
+- /*
+- * default to superspeed plus if we are capable.
+- */
+- if ((DWC3_IP_IS(DWC31) || DWC3_IP_IS(DWC32)) &&
+- (DWC3_GHWPARAMS3_SSPHY_IFC(dwc->hwparams.hwparams3) ==
+- DWC3_GHWPARAMS3_SSPHY_IFC_GEN2))
++ switch (hwparam_gen) {
++ case DWC3_GHWPARAMS3_SSPHY_IFC_GEN2:
+ dwc->maximum_speed = USB_SPEED_SUPER_PLUS;
+-
++ break;
++ case DWC3_GHWPARAMS3_SSPHY_IFC_GEN1:
++ if (DWC3_IP_IS(DWC32))
++ dwc->maximum_speed = USB_SPEED_SUPER_PLUS;
++ else
++ dwc->maximum_speed = USB_SPEED_SUPER;
++ break;
++ case DWC3_GHWPARAMS3_SSPHY_IFC_DIS:
++ dwc->maximum_speed = USB_SPEED_HIGH;
++ break;
++ default:
++ dwc->maximum_speed = USB_SPEED_SUPER;
++ break;
++ }
+ break;
+ }
+ }
+@@ -1866,10 +1876,26 @@ static int dwc3_resume(struct device *dev)
+
+ return 0;
+ }
++
++static void dwc3_complete(struct device *dev)
++{
++ struct dwc3 *dwc = dev_get_drvdata(dev);
++ u32 reg;
++
++ if (dwc->current_dr_role == DWC3_GCTL_PRTCAP_HOST &&
++ dwc->dis_split_quirk) {
++ reg = dwc3_readl(dwc->regs, DWC3_GUCTL3);
++ reg |= DWC3_GUCTL3_SPLITDISABLE;
++ dwc3_writel(dwc->regs, DWC3_GUCTL3, reg);
++ }
++}
++#else
++#define dwc3_complete NULL
+ #endif /* CONFIG_PM_SLEEP */
+
+ static const struct dev_pm_ops dwc3_dev_pm_ops = {
+ SET_SYSTEM_SLEEP_PM_OPS(dwc3_suspend, dwc3_resume)
++ .complete = dwc3_complete,
+ SET_RUNTIME_PM_OPS(dwc3_runtime_suspend, dwc3_runtime_resume,
+ dwc3_runtime_idle)
+ };
+diff --git a/drivers/usb/dwc3/core.h b/drivers/usb/dwc3/core.h
+index 013f42a2b5dcc..af5533b097133 100644
+--- a/drivers/usb/dwc3/core.h
++++ b/drivers/usb/dwc3/core.h
+@@ -138,6 +138,7 @@
+ #define DWC3_GEVNTCOUNT(n) (0xc40c + ((n) * 0x10))
+
+ #define DWC3_GHWPARAMS8 0xc600
++#define DWC3_GUCTL3 0xc60c
+ #define DWC3_GFLADJ 0xc630
+
+ /* Device Registers */
+@@ -380,6 +381,9 @@
+ /* Global User Control Register 2 */
+ #define DWC3_GUCTL2_RST_ACTBITLATER BIT(14)
+
++/* Global User Control Register 3 */
++#define DWC3_GUCTL3_SPLITDISABLE BIT(14)
++
+ /* Device Configuration Register */
+ #define DWC3_DCFG_DEVADDR(addr) ((addr) << 3)
+ #define DWC3_DCFG_DEVADDR_MASK DWC3_DCFG_DEVADDR(0x7f)
+@@ -1052,6 +1056,7 @@ struct dwc3_scratchpad_array {
+ * 2 - No de-emphasis
+ * 3 - Reserved
+ * @dis_metastability_quirk: set to disable metastability quirk.
++ * @dis_split_quirk: set to disable split boundary.
+ * @imod_interval: set the interrupt moderation interval in 250ns
+ * increments or 0 to disable.
+ */
+@@ -1245,6 +1250,8 @@ struct dwc3 {
+
+ unsigned dis_metastability_quirk:1;
+
++ unsigned dis_split_quirk:1;
++
+ u16 imod_interval;
+ };
+
+diff --git a/drivers/usb/dwc3/dwc3-of-simple.c b/drivers/usb/dwc3/dwc3-of-simple.c
+index 8852fbfdead4e..336253ff55749 100644
+--- a/drivers/usb/dwc3/dwc3-of-simple.c
++++ b/drivers/usb/dwc3/dwc3-of-simple.c
+@@ -176,6 +176,7 @@ static const struct of_device_id of_dwc3_simple_match[] = {
+ { .compatible = "cavium,octeon-7130-usb-uctl" },
+ { .compatible = "sprd,sc9860-dwc3" },
+ { .compatible = "allwinner,sun50i-h6-dwc3" },
++ { .compatible = "hisilicon,hi3670-dwc3" },
+ { /* Sentinel */ }
+ };
+ MODULE_DEVICE_TABLE(of, of_dwc3_simple_match);
+diff --git a/drivers/usb/gadget/function/f_ncm.c b/drivers/usb/gadget/function/f_ncm.c
+index 1f638759a9533..92a7c3a839454 100644
+--- a/drivers/usb/gadget/function/f_ncm.c
++++ b/drivers/usb/gadget/function/f_ncm.c
+@@ -85,8 +85,10 @@ static inline struct f_ncm *func_to_ncm(struct usb_function *f)
+ /* peak (theoretical) bulk transfer rate in bits-per-second */
+ static inline unsigned ncm_bitrate(struct usb_gadget *g)
+ {
+- if (gadget_is_superspeed(g) && g->speed == USB_SPEED_SUPER)
+- return 13 * 1024 * 8 * 1000 * 8;
++ if (gadget_is_superspeed(g) && g->speed >= USB_SPEED_SUPER_PLUS)
++ return 4250000000U;
++ else if (gadget_is_superspeed(g) && g->speed == USB_SPEED_SUPER)
++ return 3750000000U;
+ else if (gadget_is_dualspeed(g) && g->speed == USB_SPEED_HIGH)
+ return 13 * 512 * 8 * 1000 * 8;
+ else
+@@ -1534,7 +1536,7 @@ static int ncm_bind(struct usb_configuration *c, struct usb_function *f)
+ fs_ncm_notify_desc.bEndpointAddress;
+
+ status = usb_assign_descriptors(f, ncm_fs_function, ncm_hs_function,
+- ncm_ss_function, NULL);
++ ncm_ss_function, ncm_ss_function);
+ if (status)
+ goto fail;
+
+diff --git a/drivers/usb/gadget/function/f_printer.c b/drivers/usb/gadget/function/f_printer.c
+index 9c7ed2539ff77..8ed1295d7e350 100644
+--- a/drivers/usb/gadget/function/f_printer.c
++++ b/drivers/usb/gadget/function/f_printer.c
+@@ -31,6 +31,7 @@
+ #include <linux/types.h>
+ #include <linux/ctype.h>
+ #include <linux/cdev.h>
++#include <linux/kref.h>
+
+ #include <asm/byteorder.h>
+ #include <linux/io.h>
+@@ -64,7 +65,7 @@ struct printer_dev {
+ struct usb_gadget *gadget;
+ s8 interface;
+ struct usb_ep *in_ep, *out_ep;
+-
++ struct kref kref;
+ struct list_head rx_reqs; /* List of free RX structs */
+ struct list_head rx_reqs_active; /* List of Active RX xfers */
+ struct list_head rx_buffers; /* List of completed xfers */
+@@ -218,6 +219,13 @@ static inline struct usb_endpoint_descriptor *ep_desc(struct usb_gadget *gadget,
+
+ /*-------------------------------------------------------------------------*/
+
++static void printer_dev_free(struct kref *kref)
++{
++ struct printer_dev *dev = container_of(kref, struct printer_dev, kref);
++
++ kfree(dev);
++}
++
+ static struct usb_request *
+ printer_req_alloc(struct usb_ep *ep, unsigned len, gfp_t gfp_flags)
+ {
+@@ -348,6 +356,7 @@ printer_open(struct inode *inode, struct file *fd)
+
+ spin_unlock_irqrestore(&dev->lock, flags);
+
++ kref_get(&dev->kref);
+ DBG(dev, "printer_open returned %x\n", ret);
+ return ret;
+ }
+@@ -365,6 +374,7 @@ printer_close(struct inode *inode, struct file *fd)
+ dev->printer_status &= ~PRINTER_SELECTED;
+ spin_unlock_irqrestore(&dev->lock, flags);
+
++ kref_put(&dev->kref, printer_dev_free);
+ DBG(dev, "printer_close\n");
+
+ return 0;
+@@ -1350,7 +1360,8 @@ static void gprinter_free(struct usb_function *f)
+ struct f_printer_opts *opts;
+
+ opts = container_of(f->fi, struct f_printer_opts, func_inst);
+- kfree(dev);
++
++ kref_put(&dev->kref, printer_dev_free);
+ mutex_lock(&opts->lock);
+ --opts->refcnt;
+ mutex_unlock(&opts->lock);
+@@ -1419,6 +1430,7 @@ static struct usb_function *gprinter_alloc(struct usb_function_instance *fi)
+ return ERR_PTR(-ENOMEM);
+ }
+
++ kref_init(&dev->kref);
+ ++opts->refcnt;
+ dev->minor = opts->minor;
+ dev->pnp_string = opts->pnp_string;
+diff --git a/drivers/usb/gadget/function/u_ether.c b/drivers/usb/gadget/function/u_ether.c
+index fbe96ef1ac7a4..891e9f7f40d59 100644
+--- a/drivers/usb/gadget/function/u_ether.c
++++ b/drivers/usb/gadget/function/u_ether.c
+@@ -93,7 +93,7 @@ struct eth_dev {
+ static inline int qlen(struct usb_gadget *gadget, unsigned qmult)
+ {
+ if (gadget_is_dualspeed(gadget) && (gadget->speed == USB_SPEED_HIGH ||
+- gadget->speed == USB_SPEED_SUPER))
++ gadget->speed >= USB_SPEED_SUPER))
+ return qmult * DEFAULT_QLEN;
+ else
+ return DEFAULT_QLEN;
+diff --git a/drivers/usb/gadget/function/u_serial.c b/drivers/usb/gadget/function/u_serial.c
+index 3cfc6e2eba71a..e0e3cb2f6f3bc 100644
+--- a/drivers/usb/gadget/function/u_serial.c
++++ b/drivers/usb/gadget/function/u_serial.c
+@@ -1391,6 +1391,7 @@ void gserial_disconnect(struct gserial *gser)
+ if (port->port.tty)
+ tty_hangup(port->port.tty);
+ }
++ port->suspended = false;
+ spin_unlock_irqrestore(&port->port_lock, flags);
+
+ /* disable endpoints, aborting down any active I/O */
+diff --git a/drivers/usb/gadget/udc/bcm63xx_udc.c b/drivers/usb/gadget/udc/bcm63xx_udc.c
+index 54501814dc3fd..aebe11829baa6 100644
+--- a/drivers/usb/gadget/udc/bcm63xx_udc.c
++++ b/drivers/usb/gadget/udc/bcm63xx_udc.c
+@@ -26,6 +26,7 @@
+ #include <linux/seq_file.h>
+ #include <linux/slab.h>
+ #include <linux/timer.h>
++#include <linux/usb.h>
+ #include <linux/usb/ch9.h>
+ #include <linux/usb/gadget.h>
+ #include <linux/workqueue.h>
+diff --git a/drivers/usb/host/ohci-hcd.c b/drivers/usb/host/ohci-hcd.c
+index 4de91653a2c7b..5eb62240c7f87 100644
+--- a/drivers/usb/host/ohci-hcd.c
++++ b/drivers/usb/host/ohci-hcd.c
+@@ -673,20 +673,24 @@ retry:
+
+ /* handle root hub init quirks ... */
+ val = roothub_a (ohci);
+- val &= ~(RH_A_PSM | RH_A_OCPM);
++ /* Configure for per-port over-current protection by default */
++ val &= ~RH_A_NOCP;
++ val |= RH_A_OCPM;
+ if (ohci->flags & OHCI_QUIRK_SUPERIO) {
+- /* NSC 87560 and maybe others */
++ /* NSC 87560 and maybe others.
++ * Ganged power switching, no over-current protection.
++ */
+ val |= RH_A_NOCP;
+- val &= ~(RH_A_POTPGT | RH_A_NPS);
+- ohci_writel (ohci, val, &ohci->regs->roothub.a);
++ val &= ~(RH_A_POTPGT | RH_A_NPS | RH_A_PSM | RH_A_OCPM);
+ } else if ((ohci->flags & OHCI_QUIRK_AMD756) ||
+ (ohci->flags & OHCI_QUIRK_HUB_POWER)) {
+ /* hub power always on; required for AMD-756 and some
+- * Mac platforms. ganged overcurrent reporting, if any.
++ * Mac platforms.
+ */
+ val |= RH_A_NPS;
+- ohci_writel (ohci, val, &ohci->regs->roothub.a);
+ }
++ ohci_writel(ohci, val, &ohci->regs->roothub.a);
++
+ ohci_writel (ohci, RH_HS_LPSC, &ohci->regs->roothub.status);
+ ohci_writel (ohci, (val & RH_A_NPS) ? 0 : RH_B_PPCM,
+ &ohci->regs->roothub.b);
+diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c
+index 113ab5d3cbfe5..f665da34a8f73 100644
+--- a/drivers/usb/host/xhci.c
++++ b/drivers/usb/host/xhci.c
+@@ -1915,8 +1915,6 @@ static int xhci_add_endpoint(struct usb_hcd *hcd, struct usb_device *udev,
+ ep_ctx = xhci_get_ep_ctx(xhci, virt_dev->in_ctx, ep_index);
+ trace_xhci_add_endpoint(ep_ctx);
+
+- xhci_debugfs_create_endpoint(xhci, virt_dev, ep_index);
+-
+ xhci_dbg(xhci, "add ep 0x%x, slot id %d, new drop flags = %#x, new add flags = %#x\n",
+ (unsigned int) ep->desc.bEndpointAddress,
+ udev->slot_id,
+@@ -2949,6 +2947,7 @@ static int xhci_check_bandwidth(struct usb_hcd *hcd, struct usb_device *udev)
+ xhci_check_bw_drop_ep_streams(xhci, virt_dev, i);
+ virt_dev->eps[i].ring = virt_dev->eps[i].new_ring;
+ virt_dev->eps[i].new_ring = NULL;
++ xhci_debugfs_create_endpoint(xhci, virt_dev, i);
+ }
+ command_cleanup:
+ kfree(command->completion);
+diff --git a/drivers/vfio/pci/vfio_pci_config.c b/drivers/vfio/pci/vfio_pci_config.c
+index d98843feddce0..5076d0155bc3f 100644
+--- a/drivers/vfio/pci/vfio_pci_config.c
++++ b/drivers/vfio/pci/vfio_pci_config.c
+@@ -406,7 +406,7 @@ bool __vfio_pci_memory_enabled(struct vfio_pci_device *vdev)
+ * PF SR-IOV capability, there's therefore no need to trigger
+ * faults based on the virtual value.
+ */
+- return pdev->is_virtfn || (cmd & PCI_COMMAND_MEMORY);
++ return pdev->no_command_memory || (cmd & PCI_COMMAND_MEMORY);
+ }
+
+ /*
+@@ -520,8 +520,8 @@ static int vfio_basic_config_read(struct vfio_pci_device *vdev, int pos,
+
+ count = vfio_default_config_read(vdev, pos, count, perm, offset, val);
+
+- /* Mask in virtual memory enable for SR-IOV devices */
+- if (offset == PCI_COMMAND && vdev->pdev->is_virtfn) {
++ /* Mask in virtual memory enable */
++ if (offset == PCI_COMMAND && vdev->pdev->no_command_memory) {
+ u16 cmd = le16_to_cpu(*(__le16 *)&vdev->vconfig[PCI_COMMAND]);
+ u32 tmp_val = le32_to_cpu(*val);
+
+@@ -589,9 +589,11 @@ static int vfio_basic_config_write(struct vfio_pci_device *vdev, int pos,
+ * shows it disabled (phys_mem/io, then the device has
+ * undergone some kind of backdoor reset and needs to be
+ * restored before we allow it to enable the bars.
+- * SR-IOV devices will trigger this, but we catch them later
++ * SR-IOV devices will trigger this - for mem enable let's
++ * catch this now and for io enable it will be caught later
+ */
+- if ((new_mem && virt_mem && !phys_mem) ||
++ if ((new_mem && virt_mem && !phys_mem &&
++ !pdev->no_command_memory) ||
+ (new_io && virt_io && !phys_io) ||
+ vfio_need_bar_restore(vdev))
+ vfio_bar_restore(vdev);
+@@ -1734,12 +1736,14 @@ int vfio_config_init(struct vfio_pci_device *vdev)
+ vconfig[PCI_INTERRUPT_PIN]);
+
+ vconfig[PCI_INTERRUPT_PIN] = 0; /* Gratuitous for good VFs */
+-
++ }
++ if (pdev->no_command_memory) {
+ /*
+- * VFs do no implement the memory enable bit of the COMMAND
+- * register therefore we'll not have it set in our initial
+- * copy of config space after pci_enable_device(). For
+- * consistency with PFs, set the virtual enable bit here.
++ * VFs and devices that set pdev->no_command_memory do not
++ * implement the memory enable bit of the COMMAND register
++ * therefore we'll not have it set in our initial copy of
++ * config space after pci_enable_device(). For consistency
++ * with PFs, set the virtual enable bit here.
+ */
+ *(__le16 *)&vconfig[PCI_COMMAND] |=
+ cpu_to_le16(PCI_COMMAND_MEMORY);
+diff --git a/drivers/vfio/pci/vfio_pci_intrs.c b/drivers/vfio/pci/vfio_pci_intrs.c
+index 1d9fb25929459..869dce5f134dd 100644
+--- a/drivers/vfio/pci/vfio_pci_intrs.c
++++ b/drivers/vfio/pci/vfio_pci_intrs.c
+@@ -352,11 +352,13 @@ static int vfio_msi_set_vector_signal(struct vfio_pci_device *vdev,
+ vdev->ctx[vector].producer.token = trigger;
+ vdev->ctx[vector].producer.irq = irq;
+ ret = irq_bypass_register_producer(&vdev->ctx[vector].producer);
+- if (unlikely(ret))
++ if (unlikely(ret)) {
+ dev_info(&pdev->dev,
+ "irq bypass producer (token %p) registration fails: %d\n",
+ vdev->ctx[vector].producer.token, ret);
+
++ vdev->ctx[vector].producer.token = NULL;
++ }
+ vdev->ctx[vector].trigger = trigger;
+
+ return 0;
+diff --git a/drivers/vfio/vfio.c b/drivers/vfio/vfio.c
+index 580099afeaffa..fbff5c4743c5e 100644
+--- a/drivers/vfio/vfio.c
++++ b/drivers/vfio/vfio.c
+@@ -1948,8 +1948,10 @@ int vfio_pin_pages(struct device *dev, unsigned long *user_pfn, int npage,
+ if (!group)
+ return -ENODEV;
+
+- if (group->dev_counter > 1)
+- return -EINVAL;
++ if (group->dev_counter > 1) {
++ ret = -EINVAL;
++ goto err_pin_pages;
++ }
+
+ ret = vfio_group_add_container_user(group);
+ if (ret)
+@@ -2050,6 +2052,9 @@ int vfio_group_pin_pages(struct vfio_group *group,
+ if (!group || !user_iova_pfn || !phys_pfn || !npage)
+ return -EINVAL;
+
++ if (group->dev_counter > 1)
++ return -EINVAL;
++
+ if (npage > VFIO_PIN_PAGES_MAX_ENTRIES)
+ return -E2BIG;
+
+diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
+index f48f0db908a46..00d3cf12e92c3 100644
+--- a/drivers/vfio/vfio_iommu_type1.c
++++ b/drivers/vfio/vfio_iommu_type1.c
+@@ -693,7 +693,8 @@ static int vfio_iommu_type1_pin_pages(void *iommu_data,
+
+ ret = vfio_add_to_pfn_list(dma, iova, phys_pfn[i]);
+ if (ret) {
+- vfio_unpin_page_external(dma, iova, do_accounting);
++ if (put_pfn(phys_pfn[i], dma->prot) && do_accounting)
++ vfio_lock_acct(dma, -1, true);
+ goto pin_unwind;
+ }
+
+@@ -2899,7 +2900,8 @@ static int vfio_iommu_type1_dma_rw_chunk(struct vfio_iommu *iommu,
+ * size
+ */
+ bitmap_set(dma->bitmap, offset >> pgshift,
+- *copied >> pgshift);
++ ((offset + *copied - 1) >> pgshift) -
++ (offset >> pgshift) + 1);
+ }
+ } else
+ *copied = copy_from_user(data, (void __user *)vaddr,
+diff --git a/drivers/video/backlight/sky81452-backlight.c b/drivers/video/backlight/sky81452-backlight.c
+index 2355f00f57732..1f6301375fd33 100644
+--- a/drivers/video/backlight/sky81452-backlight.c
++++ b/drivers/video/backlight/sky81452-backlight.c
+@@ -196,6 +196,7 @@ static struct sky81452_bl_platform_data *sky81452_bl_parse_dt(
+ num_entry);
+ if (ret < 0) {
+ dev_err(dev, "led-sources node is invalid.\n");
++ of_node_put(np);
+ return ERR_PTR(-EINVAL);
+ }
+
+diff --git a/drivers/video/fbdev/aty/radeon_base.c b/drivers/video/fbdev/aty/radeon_base.c
+index e116a3f9ad566..687bd2c0d5040 100644
+--- a/drivers/video/fbdev/aty/radeon_base.c
++++ b/drivers/video/fbdev/aty/radeon_base.c
+@@ -2311,7 +2311,7 @@ static int radeonfb_pci_register(struct pci_dev *pdev,
+
+ ret = radeon_kick_out_firmware_fb(pdev);
+ if (ret)
+- return ret;
++ goto err_release_fb;
+
+ /* request the mem regions */
+ ret = pci_request_region(pdev, 0, "radeonfb framebuffer");
+diff --git a/drivers/video/fbdev/core/fbmem.c b/drivers/video/fbdev/core/fbmem.c
+index da7c88ffaa6a8..1136b569ccb7c 100644
+--- a/drivers/video/fbdev/core/fbmem.c
++++ b/drivers/video/fbdev/core/fbmem.c
+@@ -1006,6 +1006,10 @@ fb_set_var(struct fb_info *info, struct fb_var_screeninfo *var)
+ return 0;
+ }
+
++ /* bitfill_aligned() assumes that it's at least 8x8 */
++ if (var->xres < 8 || var->yres < 8)
++ return -EINVAL;
++
+ ret = info->fbops->fb_check_var(var, info);
+
+ if (ret)
+diff --git a/drivers/video/fbdev/sis/init.c b/drivers/video/fbdev/sis/init.c
+index dfe3eb769638b..fde27feae5d0c 100644
+--- a/drivers/video/fbdev/sis/init.c
++++ b/drivers/video/fbdev/sis/init.c
+@@ -2428,6 +2428,11 @@ SiS_SetCRT1FIFO_630(struct SiS_Private *SiS_Pr, unsigned short ModeNo,
+
+ i = 0;
+
++ if (SiS_Pr->ChipType == SIS_730)
++ queuedata = &FQBQData730[0];
++ else
++ queuedata = &FQBQData[0];
++
+ if(ModeNo > 0x13) {
+
+ /* Get VCLK */
+@@ -2445,12 +2450,6 @@ SiS_SetCRT1FIFO_630(struct SiS_Private *SiS_Pr, unsigned short ModeNo,
+ /* Get half colordepth */
+ colorth = colortharray[(SiS_Pr->SiS_ModeType - ModeEGA)];
+
+- if(SiS_Pr->ChipType == SIS_730) {
+- queuedata = &FQBQData730[0];
+- } else {
+- queuedata = &FQBQData[0];
+- }
+-
+ do {
+ templ = SiS_CalcDelay2(SiS_Pr, queuedata[i]) * VCLK * colorth;
+
+diff --git a/drivers/video/fbdev/vga16fb.c b/drivers/video/fbdev/vga16fb.c
+index 578d3541e3d6f..1e8a38a7967d8 100644
+--- a/drivers/video/fbdev/vga16fb.c
++++ b/drivers/video/fbdev/vga16fb.c
+@@ -243,7 +243,7 @@ static void vga16fb_update_fix(struct fb_info *info)
+ }
+
+ static void vga16fb_clock_chip(struct vga16fb_par *par,
+- unsigned int pixclock,
++ unsigned int *pixclock,
+ const struct fb_info *info,
+ int mul, int div)
+ {
+@@ -259,14 +259,14 @@ static void vga16fb_clock_chip(struct vga16fb_par *par,
+ { 0 /* bad */, 0x00, 0x00}};
+ int err;
+
+- pixclock = (pixclock * mul) / div;
++ *pixclock = (*pixclock * mul) / div;
+ best = vgaclocks;
+- err = pixclock - best->pixclock;
++ err = *pixclock - best->pixclock;
+ if (err < 0) err = -err;
+ for (ptr = vgaclocks + 1; ptr->pixclock; ptr++) {
+ int tmp;
+
+- tmp = pixclock - ptr->pixclock;
++ tmp = *pixclock - ptr->pixclock;
+ if (tmp < 0) tmp = -tmp;
+ if (tmp < err) {
+ err = tmp;
+@@ -275,7 +275,7 @@ static void vga16fb_clock_chip(struct vga16fb_par *par,
+ }
+ par->misc |= best->misc;
+ par->clkdiv = best->seq_clock_mode;
+- pixclock = (best->pixclock * div) / mul;
++ *pixclock = (best->pixclock * div) / mul;
+ }
+
+ #define FAIL(X) return -EINVAL
+@@ -497,10 +497,10 @@ static int vga16fb_check_var(struct fb_var_screeninfo *var,
+
+ if (mode & MODE_8BPP)
+ /* pixel clock == vga clock / 2 */
+- vga16fb_clock_chip(par, var->pixclock, info, 1, 2);
++ vga16fb_clock_chip(par, &var->pixclock, info, 1, 2);
+ else
+ /* pixel clock == vga clock */
+- vga16fb_clock_chip(par, var->pixclock, info, 1, 1);
++ vga16fb_clock_chip(par, &var->pixclock, info, 1, 1);
+
+ var->red.offset = var->green.offset = var->blue.offset =
+ var->transp.offset = 0;
+diff --git a/drivers/virt/fsl_hypervisor.c b/drivers/virt/fsl_hypervisor.c
+index 1b0b11b55d2a0..46ee0a0998b6f 100644
+--- a/drivers/virt/fsl_hypervisor.c
++++ b/drivers/virt/fsl_hypervisor.c
+@@ -157,7 +157,7 @@ static long ioctl_memcpy(struct fsl_hv_ioctl_memcpy __user *p)
+
+ unsigned int i;
+ long ret = 0;
+- int num_pinned; /* return value from get_user_pages() */
++ int num_pinned = 0; /* return value from get_user_pages_fast() */
+ phys_addr_t remote_paddr; /* The next address in the remote buffer */
+ uint32_t count; /* The number of bytes left to copy */
+
+@@ -174,7 +174,7 @@ static long ioctl_memcpy(struct fsl_hv_ioctl_memcpy __user *p)
+ return -EINVAL;
+
+ /*
+- * The array of pages returned by get_user_pages() covers only
++ * The array of pages returned by get_user_pages_fast() covers only
+ * page-aligned memory. Since the user buffer is probably not
+ * page-aligned, we need to handle the discrepancy.
+ *
+@@ -224,7 +224,7 @@ static long ioctl_memcpy(struct fsl_hv_ioctl_memcpy __user *p)
+
+ /*
+ * 'pages' is an array of struct page pointers that's initialized by
+- * get_user_pages().
++ * get_user_pages_fast().
+ */
+ pages = kcalloc(num_pages, sizeof(struct page *), GFP_KERNEL);
+ if (!pages) {
+@@ -241,7 +241,7 @@ static long ioctl_memcpy(struct fsl_hv_ioctl_memcpy __user *p)
+ if (!sg_list_unaligned) {
+ pr_debug("fsl-hv: could not allocate S/G list\n");
+ ret = -ENOMEM;
+- goto exit;
++ goto free_pages;
+ }
+ sg_list = PTR_ALIGN(sg_list_unaligned, sizeof(struct fh_sg_list));
+
+@@ -250,7 +250,6 @@ static long ioctl_memcpy(struct fsl_hv_ioctl_memcpy __user *p)
+ num_pages, param.source != -1 ? FOLL_WRITE : 0, pages);
+
+ if (num_pinned != num_pages) {
+- /* get_user_pages() failed */
+ pr_debug("fsl-hv: could not lock source buffer\n");
+ ret = (num_pinned < 0) ? num_pinned : -EFAULT;
+ goto exit;
+@@ -292,13 +291,13 @@ static long ioctl_memcpy(struct fsl_hv_ioctl_memcpy __user *p)
+ virt_to_phys(sg_list), num_pages);
+
+ exit:
+- if (pages) {
+- for (i = 0; i < num_pages; i++)
+- if (pages[i])
+- put_page(pages[i]);
++ if (pages && (num_pinned > 0)) {
++ for (i = 0; i < num_pinned; i++)
++ put_page(pages[i]);
+ }
+
+ kfree(sg_list_unaligned);
++free_pages:
+ kfree(pages);
+
+ if (!ret)
+diff --git a/drivers/watchdog/sp5100_tco.h b/drivers/watchdog/sp5100_tco.h
+index 87eaf357ae01f..adf015aa4126f 100644
+--- a/drivers/watchdog/sp5100_tco.h
++++ b/drivers/watchdog/sp5100_tco.h
+@@ -70,7 +70,7 @@
+ #define EFCH_PM_DECODEEN_WDT_TMREN BIT(7)
+
+
+-#define EFCH_PM_DECODEEN3 0x00
++#define EFCH_PM_DECODEEN3 0x03
+ #define EFCH_PM_DECODEEN_SECOND_RES GENMASK(1, 0)
+ #define EFCH_PM_WATCHDOG_DISABLE ((u8)GENMASK(3, 2))
+
+diff --git a/drivers/watchdog/watchdog_dev.c b/drivers/watchdog/watchdog_dev.c
+index b535f5fa279b9..c2065615fd6ca 100644
+--- a/drivers/watchdog/watchdog_dev.c
++++ b/drivers/watchdog/watchdog_dev.c
+@@ -991,8 +991,10 @@ static int watchdog_cdev_register(struct watchdog_device *wdd)
+ wd_data->wdd = wdd;
+ wdd->wd_data = wd_data;
+
+- if (IS_ERR_OR_NULL(watchdog_kworker))
++ if (IS_ERR_OR_NULL(watchdog_kworker)) {
++ kfree(wd_data);
+ return -ENODEV;
++ }
+
+ device_initialize(&wd_data->dev);
+ wd_data->dev.devt = MKDEV(MAJOR(watchdog_devt), wdd->id);
+@@ -1018,7 +1020,7 @@ static int watchdog_cdev_register(struct watchdog_device *wdd)
+ pr_err("%s: a legacy watchdog module is probably present.\n",
+ wdd->info->identity);
+ old_wd_data = NULL;
+- kfree(wd_data);
++ put_device(&wd_data->dev);
+ return err;
+ }
+ }
+diff --git a/fs/afs/cell.c b/fs/afs/cell.c
+index 5b79cdceefa0f..bc7ed46aaca9f 100644
+--- a/fs/afs/cell.c
++++ b/fs/afs/cell.c
+@@ -19,7 +19,8 @@ static unsigned __read_mostly afs_cell_gc_delay = 10;
+ static unsigned __read_mostly afs_cell_min_ttl = 10 * 60;
+ static unsigned __read_mostly afs_cell_max_ttl = 24 * 60 * 60;
+
+-static void afs_manage_cell(struct work_struct *);
++static void afs_queue_cell_manager(struct afs_net *);
++static void afs_manage_cell_work(struct work_struct *);
+
+ static void afs_dec_cells_outstanding(struct afs_net *net)
+ {
+@@ -37,19 +38,21 @@ static void afs_set_cell_timer(struct afs_net *net, time64_t delay)
+ atomic_inc(&net->cells_outstanding);
+ if (timer_reduce(&net->cells_timer, jiffies + delay * HZ))
+ afs_dec_cells_outstanding(net);
++ } else {
++ afs_queue_cell_manager(net);
+ }
+ }
+
+ /*
+- * Look up and get an activation reference on a cell record under RCU
+- * conditions. The caller must hold the RCU read lock.
++ * Look up and get an activation reference on a cell record. The caller must
++ * hold net->cells_lock at least read-locked.
+ */
+-struct afs_cell *afs_lookup_cell_rcu(struct afs_net *net,
+- const char *name, unsigned int namesz)
++static struct afs_cell *afs_find_cell_locked(struct afs_net *net,
++ const char *name, unsigned int namesz)
+ {
+ struct afs_cell *cell = NULL;
+ struct rb_node *p;
+- int n, seq = 0, ret = 0;
++ int n;
+
+ _enter("%*.*s", namesz, namesz, name);
+
+@@ -58,61 +61,47 @@ struct afs_cell *afs_lookup_cell_rcu(struct afs_net *net,
+ if (namesz > AFS_MAXCELLNAME)
+ return ERR_PTR(-ENAMETOOLONG);
+
+- do {
+- /* Unfortunately, rbtree walking doesn't give reliable results
+- * under just the RCU read lock, so we have to check for
+- * changes.
+- */
+- if (cell)
+- afs_put_cell(net, cell);
+- cell = NULL;
+- ret = -ENOENT;
+-
+- read_seqbegin_or_lock(&net->cells_lock, &seq);
+-
+- if (!name) {
+- cell = rcu_dereference_raw(net->ws_cell);
+- if (cell) {
+- afs_get_cell(cell);
+- ret = 0;
+- break;
+- }
+- ret = -EDESTADDRREQ;
+- continue;
+- }
++ if (!name) {
++ cell = net->ws_cell;
++ if (!cell)
++ return ERR_PTR(-EDESTADDRREQ);
++ goto found;
++ }
+
+- p = rcu_dereference_raw(net->cells.rb_node);
+- while (p) {
+- cell = rb_entry(p, struct afs_cell, net_node);
+-
+- n = strncasecmp(cell->name, name,
+- min_t(size_t, cell->name_len, namesz));
+- if (n == 0)
+- n = cell->name_len - namesz;
+- if (n < 0) {
+- p = rcu_dereference_raw(p->rb_left);
+- } else if (n > 0) {
+- p = rcu_dereference_raw(p->rb_right);
+- } else {
+- if (atomic_inc_not_zero(&cell->usage)) {
+- ret = 0;
+- break;
+- }
+- /* We want to repeat the search, this time with
+- * the lock properly locked.
+- */
+- }
+- cell = NULL;
+- }
++ p = net->cells.rb_node;
++ while (p) {
++ cell = rb_entry(p, struct afs_cell, net_node);
++
++ n = strncasecmp(cell->name, name,
++ min_t(size_t, cell->name_len, namesz));
++ if (n == 0)
++ n = cell->name_len - namesz;
++ if (n < 0)
++ p = p->rb_left;
++ else if (n > 0)
++ p = p->rb_right;
++ else
++ goto found;
++ }
+
+- } while (need_seqretry(&net->cells_lock, seq));
++ return ERR_PTR(-ENOENT);
+
+- done_seqretry(&net->cells_lock, seq);
++found:
++ return afs_use_cell(cell);
++}
+
+- if (ret != 0 && cell)
+- afs_put_cell(net, cell);
++/*
++ * Look up and get an activation reference on a cell record.
++ */
++struct afs_cell *afs_find_cell(struct afs_net *net,
++ const char *name, unsigned int namesz)
++{
++ struct afs_cell *cell;
+
+- return ret == 0 ? cell : ERR_PTR(ret);
++ down_read(&net->cells_lock);
++ cell = afs_find_cell_locked(net, name, namesz);
++ up_read(&net->cells_lock);
++ return cell;
+ }
+
+ /*
+@@ -166,8 +155,9 @@ static struct afs_cell *afs_alloc_cell(struct afs_net *net,
+ cell->name[i] = tolower(name[i]);
+ cell->name[i] = 0;
+
+- atomic_set(&cell->usage, 2);
+- INIT_WORK(&cell->manager, afs_manage_cell);
++ atomic_set(&cell->ref, 1);
++ atomic_set(&cell->active, 0);
++ INIT_WORK(&cell->manager, afs_manage_cell_work);
+ cell->volumes = RB_ROOT;
+ INIT_HLIST_HEAD(&cell->proc_volumes);
+ seqlock_init(&cell->volume_lock);
+@@ -206,6 +196,7 @@ static struct afs_cell *afs_alloc_cell(struct afs_net *net,
+ cell->dns_source = vllist->source;
+ cell->dns_status = vllist->status;
+ smp_store_release(&cell->dns_lookup_count, 1); /* vs source/status */
++ atomic_inc(&net->cells_outstanding);
+
+ _leave(" = %p", cell);
+ return cell;
+@@ -245,9 +236,7 @@ struct afs_cell *afs_lookup_cell(struct afs_net *net,
+ _enter("%s,%s", name, vllist);
+
+ if (!excl) {
+- rcu_read_lock();
+- cell = afs_lookup_cell_rcu(net, name, namesz);
+- rcu_read_unlock();
++ cell = afs_find_cell(net, name, namesz);
+ if (!IS_ERR(cell))
+ goto wait_for_cell;
+ }
+@@ -268,7 +257,7 @@ struct afs_cell *afs_lookup_cell(struct afs_net *net,
+ /* Find the insertion point and check to see if someone else added a
+ * cell whilst we were allocating.
+ */
+- write_seqlock(&net->cells_lock);
++ down_write(&net->cells_lock);
+
+ pp = &net->cells.rb_node;
+ parent = NULL;
+@@ -290,23 +279,23 @@ struct afs_cell *afs_lookup_cell(struct afs_net *net,
+
+ cell = candidate;
+ candidate = NULL;
++ atomic_set(&cell->active, 2);
+ rb_link_node_rcu(&cell->net_node, parent, pp);
+ rb_insert_color(&cell->net_node, &net->cells);
+- atomic_inc(&net->cells_outstanding);
+- write_sequnlock(&net->cells_lock);
++ up_write(&net->cells_lock);
+
+- queue_work(afs_wq, &cell->manager);
++ afs_queue_cell(cell);
+
+ wait_for_cell:
+ _debug("wait_for_cell");
+ wait_var_event(&cell->state,
+ ({
+ state = smp_load_acquire(&cell->state); /* vs error */
+- state == AFS_CELL_ACTIVE || state == AFS_CELL_FAILED;
++ state == AFS_CELL_ACTIVE || state == AFS_CELL_REMOVED;
+ }));
+
+ /* Check the state obtained from the wait check. */
+- if (state == AFS_CELL_FAILED) {
++ if (state == AFS_CELL_REMOVED) {
+ ret = cell->error;
+ goto error;
+ }
+@@ -320,16 +309,17 @@ cell_already_exists:
+ if (excl) {
+ ret = -EEXIST;
+ } else {
+- afs_get_cell(cursor);
++ afs_use_cell(cursor);
+ ret = 0;
+ }
+- write_sequnlock(&net->cells_lock);
+- kfree(candidate);
++ up_write(&net->cells_lock);
++ if (candidate)
++ afs_put_cell(candidate);
+ if (ret == 0)
+ goto wait_for_cell;
+ goto error_noput;
+ error:
+- afs_put_cell(net, cell);
++ afs_unuse_cell(net, cell);
+ error_noput:
+ _leave(" = %d [error]", ret);
+ return ERR_PTR(ret);
+@@ -374,15 +364,15 @@ int afs_cell_init(struct afs_net *net, const char *rootcell)
+ }
+
+ if (!test_and_set_bit(AFS_CELL_FL_NO_GC, &new_root->flags))
+- afs_get_cell(new_root);
++ afs_use_cell(new_root);
+
+ /* install the new cell */
+- write_seqlock(&net->cells_lock);
+- old_root = rcu_access_pointer(net->ws_cell);
+- rcu_assign_pointer(net->ws_cell, new_root);
+- write_sequnlock(&net->cells_lock);
++ down_write(&net->cells_lock);
++ old_root = net->ws_cell;
++ net->ws_cell = new_root;
++ up_write(&net->cells_lock);
+
+- afs_put_cell(net, old_root);
++ afs_unuse_cell(net, old_root);
+ _leave(" = 0");
+ return 0;
+ }
+@@ -488,18 +478,21 @@ out_wake:
+ static void afs_cell_destroy(struct rcu_head *rcu)
+ {
+ struct afs_cell *cell = container_of(rcu, struct afs_cell, rcu);
++ struct afs_net *net = cell->net;
++ int u;
+
+ _enter("%p{%s}", cell, cell->name);
+
+- ASSERTCMP(atomic_read(&cell->usage), ==, 0);
++ u = atomic_read(&cell->ref);
++ ASSERTCMP(u, ==, 0);
+
+- afs_put_volume(cell->net, cell->root_volume, afs_volume_trace_put_cell_root);
+- afs_put_vlserverlist(cell->net, rcu_access_pointer(cell->vl_servers));
+- afs_put_cell(cell->net, cell->alias_of);
++ afs_put_vlserverlist(net, rcu_access_pointer(cell->vl_servers));
++ afs_unuse_cell(net, cell->alias_of);
+ key_put(cell->anonymous_key);
+ kfree(cell->name);
+ kfree(cell);
+
++ afs_dec_cells_outstanding(net);
+ _leave(" [destroyed]");
+ }
+
+@@ -534,16 +527,50 @@ void afs_cells_timer(struct timer_list *timer)
+ */
+ struct afs_cell *afs_get_cell(struct afs_cell *cell)
+ {
+- atomic_inc(&cell->usage);
++ if (atomic_read(&cell->ref) <= 0)
++ BUG();
++
++ atomic_inc(&cell->ref);
+ return cell;
+ }
+
+ /*
+ * Drop a reference on a cell record.
+ */
+-void afs_put_cell(struct afs_net *net, struct afs_cell *cell)
++void afs_put_cell(struct afs_cell *cell)
++{
++ if (cell) {
++ unsigned int u, a;
++
++ u = atomic_dec_return(&cell->ref);
++ if (u == 0) {
++ a = atomic_read(&cell->active);
++ WARN(a != 0, "Cell active count %u > 0\n", a);
++ call_rcu(&cell->rcu, afs_cell_destroy);
++ }
++ }
++}
++
++/*
++ * Note a cell becoming more active.
++ */
++struct afs_cell *afs_use_cell(struct afs_cell *cell)
++{
++ if (atomic_read(&cell->ref) <= 0)
++ BUG();
++
++ atomic_inc(&cell->active);
++ return cell;
++}
++
++/*
++ * Record a cell becoming less active. When the active counter reaches 1, it
++ * is scheduled for destruction, but may get reactivated.
++ */
++void afs_unuse_cell(struct afs_net *net, struct afs_cell *cell)
+ {
+ time64_t now, expire_delay;
++ int a;
+
+ if (!cell)
+ return;
+@@ -556,11 +583,21 @@ void afs_put_cell(struct afs_net *net, struct afs_cell *cell)
+ if (cell->vl_servers->nr_servers)
+ expire_delay = afs_cell_gc_delay;
+
+- if (atomic_dec_return(&cell->usage) > 1)
+- return;
++ a = atomic_dec_return(&cell->active);
++ WARN_ON(a == 0);
++ if (a == 1)
++ /* 'cell' may now be garbage collected. */
++ afs_set_cell_timer(net, expire_delay);
++}
+
+- /* 'cell' may now be garbage collected. */
+- afs_set_cell_timer(net, expire_delay);
++/*
++ * Queue a cell for management, giving the workqueue a ref to hold.
++ */
++void afs_queue_cell(struct afs_cell *cell)
++{
++ afs_get_cell(cell);
++ if (!queue_work(afs_wq, &cell->manager))
++ afs_put_cell(cell);
+ }
+
+ /*
+@@ -660,12 +697,10 @@ static void afs_deactivate_cell(struct afs_net *net, struct afs_cell *cell)
+ * Manage a cell record, initialising and destroying it, maintaining its DNS
+ * records.
+ */
+-static void afs_manage_cell(struct work_struct *work)
++static void afs_manage_cell(struct afs_cell *cell)
+ {
+- struct afs_cell *cell = container_of(work, struct afs_cell, manager);
+ struct afs_net *net = cell->net;
+- bool deleted;
+- int ret, usage;
++ int ret, active;
+
+ _enter("%s", cell->name);
+
+@@ -674,14 +709,17 @@ again:
+ switch (cell->state) {
+ case AFS_CELL_INACTIVE:
+ case AFS_CELL_FAILED:
+- write_seqlock(&net->cells_lock);
+- usage = 1;
+- deleted = atomic_try_cmpxchg_relaxed(&cell->usage, &usage, 0);
+- if (deleted)
++ down_write(&net->cells_lock);
++ active = 1;
++ if (atomic_try_cmpxchg_relaxed(&cell->active, &active, 0)) {
+ rb_erase(&cell->net_node, &net->cells);
+- write_sequnlock(&net->cells_lock);
+- if (deleted)
++ smp_store_release(&cell->state, AFS_CELL_REMOVED);
++ }
++ up_write(&net->cells_lock);
++ if (cell->state == AFS_CELL_REMOVED) {
++ wake_up_var(&cell->state);
+ goto final_destruction;
++ }
+ if (cell->state == AFS_CELL_FAILED)
+ goto done;
+ smp_store_release(&cell->state, AFS_CELL_UNSET);
+@@ -703,7 +741,7 @@ again:
+ goto again;
+
+ case AFS_CELL_ACTIVE:
+- if (atomic_read(&cell->usage) > 1) {
++ if (atomic_read(&cell->active) > 1) {
+ if (test_and_clear_bit(AFS_CELL_FL_DO_LOOKUP, &cell->flags)) {
+ ret = afs_update_cell(cell);
+ if (ret < 0)
+@@ -716,13 +754,16 @@ again:
+ goto again;
+
+ case AFS_CELL_DEACTIVATING:
+- if (atomic_read(&cell->usage) > 1)
++ if (atomic_read(&cell->active) > 1)
+ goto reverse_deactivation;
+ afs_deactivate_cell(net, cell);
+ smp_store_release(&cell->state, AFS_CELL_INACTIVE);
+ wake_up_var(&cell->state);
+ goto again;
+
++ case AFS_CELL_REMOVED:
++ goto done;
++
+ default:
+ break;
+ }
+@@ -748,9 +789,18 @@ done:
+ return;
+
+ final_destruction:
+- call_rcu(&cell->rcu, afs_cell_destroy);
+- afs_dec_cells_outstanding(net);
+- _leave(" [destruct %d]", atomic_read(&net->cells_outstanding));
++ /* The root volume is pinning the cell */
++ afs_put_volume(cell->net, cell->root_volume, afs_volume_trace_put_cell_root);
++ cell->root_volume = NULL;
++ afs_put_cell(cell);
++}
++
++static void afs_manage_cell_work(struct work_struct *work)
++{
++ struct afs_cell *cell = container_of(work, struct afs_cell, manager);
++
++ afs_manage_cell(cell);
++ afs_put_cell(cell);
+ }
+
+ /*
+@@ -779,26 +829,25 @@ void afs_manage_cells(struct work_struct *work)
+ * lack of use and cells whose DNS results have expired and dispatch
+ * their managers.
+ */
+- read_seqlock_excl(&net->cells_lock);
++ down_read(&net->cells_lock);
+
+ for (cursor = rb_first(&net->cells); cursor; cursor = rb_next(cursor)) {
+ struct afs_cell *cell =
+ rb_entry(cursor, struct afs_cell, net_node);
+- unsigned usage;
++ unsigned active;
+ bool sched_cell = false;
+
+- usage = atomic_read(&cell->usage);
+- _debug("manage %s %u", cell->name, usage);
++ active = atomic_read(&cell->active);
++ _debug("manage %s %u %u", cell->name, atomic_read(&cell->ref), active);
+
+- ASSERTCMP(usage, >=, 1);
++ ASSERTCMP(active, >=, 1);
+
+ if (purging) {
+ if (test_and_clear_bit(AFS_CELL_FL_NO_GC, &cell->flags))
+- usage = atomic_dec_return(&cell->usage);
+- ASSERTCMP(usage, ==, 1);
++ atomic_dec(&cell->active);
+ }
+
+- if (usage == 1) {
++ if (active == 1) {
+ struct afs_vlserver_list *vllist;
+ time64_t expire_at = cell->last_inactive;
+
+@@ -821,10 +870,10 @@ void afs_manage_cells(struct work_struct *work)
+ }
+
+ if (sched_cell)
+- queue_work(afs_wq, &cell->manager);
++ afs_queue_cell(cell);
+ }
+
+- read_sequnlock_excl(&net->cells_lock);
++ up_read(&net->cells_lock);
+
+ /* Update the timer on the way out. We have to pass an increment on
+ * cells_outstanding in the namespace that we are in to the timer or
+@@ -854,11 +903,11 @@ void afs_cell_purge(struct afs_net *net)
+
+ _enter("");
+
+- write_seqlock(&net->cells_lock);
+- ws = rcu_access_pointer(net->ws_cell);
+- RCU_INIT_POINTER(net->ws_cell, NULL);
+- write_sequnlock(&net->cells_lock);
+- afs_put_cell(net, ws);
++ down_write(&net->cells_lock);
++ ws = net->ws_cell;
++ net->ws_cell = NULL;
++ up_write(&net->cells_lock);
++ afs_unuse_cell(net, ws);
+
+ _debug("del timer");
+ if (del_timer_sync(&net->cells_timer))
+diff --git a/fs/afs/dynroot.c b/fs/afs/dynroot.c
+index 7b784af604fd9..da32797dd4257 100644
+--- a/fs/afs/dynroot.c
++++ b/fs/afs/dynroot.c
+@@ -123,9 +123,9 @@ static int afs_probe_cell_name(struct dentry *dentry)
+ len--;
+ }
+
+- cell = afs_lookup_cell_rcu(net, name, len);
++ cell = afs_find_cell(net, name, len);
+ if (!IS_ERR(cell)) {
+- afs_put_cell(net, cell);
++ afs_unuse_cell(net, cell);
+ return 0;
+ }
+
+@@ -179,7 +179,6 @@ static struct dentry *afs_lookup_atcell(struct dentry *dentry)
+ struct afs_cell *cell;
+ struct afs_net *net = afs_d2net(dentry);
+ struct dentry *ret;
+- unsigned int seq = 0;
+ char *name;
+ int len;
+
+@@ -191,17 +190,13 @@ static struct dentry *afs_lookup_atcell(struct dentry *dentry)
+ if (!name)
+ goto out_p;
+
+- rcu_read_lock();
+- do {
+- read_seqbegin_or_lock(&net->cells_lock, &seq);
+- cell = rcu_dereference_raw(net->ws_cell);
+- if (cell) {
+- len = cell->name_len;
+- memcpy(name, cell->name, len + 1);
+- }
+- } while (need_seqretry(&net->cells_lock, seq));
+- done_seqretry(&net->cells_lock, seq);
+- rcu_read_unlock();
++ down_read(&net->cells_lock);
++ cell = net->ws_cell;
++ if (cell) {
++ len = cell->name_len;
++ memcpy(name, cell->name, len + 1);
++ }
++ up_read(&net->cells_lock);
+
+ ret = ERR_PTR(-ENOENT);
+ if (!cell)
+diff --git a/fs/afs/internal.h b/fs/afs/internal.h
+index e1ebead2e505a..7689f4535ef9c 100644
+--- a/fs/afs/internal.h
++++ b/fs/afs/internal.h
+@@ -263,11 +263,11 @@ struct afs_net {
+
+ /* Cell database */
+ struct rb_root cells;
+- struct afs_cell __rcu *ws_cell;
++ struct afs_cell *ws_cell;
+ struct work_struct cells_manager;
+ struct timer_list cells_timer;
+ atomic_t cells_outstanding;
+- seqlock_t cells_lock;
++ struct rw_semaphore cells_lock;
+ struct mutex cells_alias_lock;
+
+ struct mutex proc_cells_lock;
+@@ -326,6 +326,7 @@ enum afs_cell_state {
+ AFS_CELL_DEACTIVATING,
+ AFS_CELL_INACTIVE,
+ AFS_CELL_FAILED,
++ AFS_CELL_REMOVED,
+ };
+
+ /*
+@@ -363,7 +364,8 @@ struct afs_cell {
+ #endif
+ time64_t dns_expiry; /* Time AFSDB/SRV record expires */
+ time64_t last_inactive; /* Time of last drop of usage count */
+- atomic_t usage;
++ atomic_t ref; /* Struct refcount */
++ atomic_t active; /* Active usage counter */
+ unsigned long flags;
+ #define AFS_CELL_FL_NO_GC 0 /* The cell was added manually, don't auto-gc */
+ #define AFS_CELL_FL_DO_LOOKUP 1 /* DNS lookup requested */
+@@ -915,11 +917,14 @@ static inline bool afs_cb_is_broken(unsigned int cb_break,
+ * cell.c
+ */
+ extern int afs_cell_init(struct afs_net *, const char *);
+-extern struct afs_cell *afs_lookup_cell_rcu(struct afs_net *, const char *, unsigned);
++extern struct afs_cell *afs_find_cell(struct afs_net *, const char *, unsigned);
+ extern struct afs_cell *afs_lookup_cell(struct afs_net *, const char *, unsigned,
+ const char *, bool);
++extern struct afs_cell *afs_use_cell(struct afs_cell *);
++extern void afs_unuse_cell(struct afs_net *, struct afs_cell *);
+ extern struct afs_cell *afs_get_cell(struct afs_cell *);
+-extern void afs_put_cell(struct afs_net *, struct afs_cell *);
++extern void afs_put_cell(struct afs_cell *);
++extern void afs_queue_cell(struct afs_cell *);
+ extern void afs_manage_cells(struct work_struct *);
+ extern void afs_cells_timer(struct timer_list *);
+ extern void __net_exit afs_cell_purge(struct afs_net *);
+diff --git a/fs/afs/main.c b/fs/afs/main.c
+index 31b472f7c734c..accdd8970e7c0 100644
+--- a/fs/afs/main.c
++++ b/fs/afs/main.c
+@@ -78,7 +78,7 @@ static int __net_init afs_net_init(struct net *net_ns)
+ mutex_init(&net->socket_mutex);
+
+ net->cells = RB_ROOT;
+- seqlock_init(&net->cells_lock);
++ init_rwsem(&net->cells_lock);
+ INIT_WORK(&net->cells_manager, afs_manage_cells);
+ timer_setup(&net->cells_timer, afs_cells_timer, 0);
+
+diff --git a/fs/afs/mntpt.c b/fs/afs/mntpt.c
+index 79bc5f1338edf..c69a0282960cc 100644
+--- a/fs/afs/mntpt.c
++++ b/fs/afs/mntpt.c
+@@ -88,7 +88,7 @@ static int afs_mntpt_set_params(struct fs_context *fc, struct dentry *mntpt)
+ ctx->force = true;
+ }
+ if (ctx->cell) {
+- afs_put_cell(ctx->net, ctx->cell);
++ afs_unuse_cell(ctx->net, ctx->cell);
+ ctx->cell = NULL;
+ }
+ if (test_bit(AFS_VNODE_PSEUDODIR, &vnode->flags)) {
+@@ -124,7 +124,7 @@ static int afs_mntpt_set_params(struct fs_context *fc, struct dentry *mntpt)
+ char *buf;
+
+ if (src_as->cell)
+- ctx->cell = afs_get_cell(src_as->cell);
++ ctx->cell = afs_use_cell(src_as->cell);
+
+ if (size < 2 || size > PAGE_SIZE - 1)
+ return -EINVAL;
+diff --git a/fs/afs/proc.c b/fs/afs/proc.c
+index e817fc740ba01..855d7358933b4 100644
+--- a/fs/afs/proc.c
++++ b/fs/afs/proc.c
+@@ -38,7 +38,7 @@ static int afs_proc_cells_show(struct seq_file *m, void *v)
+
+ if (v == SEQ_START_TOKEN) {
+ /* display header on line 1 */
+- seq_puts(m, "USE TTL SV ST NAME\n");
++ seq_puts(m, "USE ACT TTL SV ST NAME\n");
+ return 0;
+ }
+
+@@ -46,10 +46,11 @@ static int afs_proc_cells_show(struct seq_file *m, void *v)
+ vllist = rcu_dereference(cell->vl_servers);
+
+ /* display one cell per line on subsequent lines */
+- seq_printf(m, "%3u %6lld %2u %2u %s\n",
+- atomic_read(&cell->usage),
++ seq_printf(m, "%3u %3u %6lld %2u %2u %s\n",
++ atomic_read(&cell->ref),
++ atomic_read(&cell->active),
+ cell->dns_expiry - ktime_get_real_seconds(),
+- vllist->nr_servers,
++ vllist ? vllist->nr_servers : 0,
+ cell->state,
+ cell->name);
+ return 0;
+@@ -128,7 +129,7 @@ static int afs_proc_cells_write(struct file *file, char *buf, size_t size)
+ }
+
+ if (test_and_set_bit(AFS_CELL_FL_NO_GC, &cell->flags))
+- afs_put_cell(net, cell);
++ afs_unuse_cell(net, cell);
+ } else {
+ goto inval;
+ }
+@@ -154,13 +155,11 @@ static int afs_proc_rootcell_show(struct seq_file *m, void *v)
+ struct afs_net *net;
+
+ net = afs_seq2net_single(m);
+- if (rcu_access_pointer(net->ws_cell)) {
+- rcu_read_lock();
+- cell = rcu_dereference(net->ws_cell);
+- if (cell)
+- seq_printf(m, "%s\n", cell->name);
+- rcu_read_unlock();
+- }
++ down_read(&net->cells_lock);
++ cell = net->ws_cell;
++ if (cell)
++ seq_printf(m, "%s\n", cell->name);
++ up_read(&net->cells_lock);
+ return 0;
+ }
+
+diff --git a/fs/afs/super.c b/fs/afs/super.c
+index b552357b1d137..e72c223f831d2 100644
+--- a/fs/afs/super.c
++++ b/fs/afs/super.c
+@@ -294,7 +294,7 @@ static int afs_parse_source(struct fs_context *fc, struct fs_parameter *param)
+ cellnamesz, cellnamesz, cellname ?: "");
+ return PTR_ERR(cell);
+ }
+- afs_put_cell(ctx->net, ctx->cell);
++ afs_unuse_cell(ctx->net, ctx->cell);
+ ctx->cell = cell;
+ }
+
+@@ -389,8 +389,8 @@ static int afs_validate_fc(struct fs_context *fc)
+ _debug("switch to alias");
+ key_put(ctx->key);
+ ctx->key = NULL;
+- cell = afs_get_cell(ctx->cell->alias_of);
+- afs_put_cell(ctx->net, ctx->cell);
++ cell = afs_use_cell(ctx->cell->alias_of);
++ afs_unuse_cell(ctx->net, ctx->cell);
+ ctx->cell = cell;
+ goto reget_key;
+ }
+@@ -508,7 +508,7 @@ static struct afs_super_info *afs_alloc_sbi(struct fs_context *fc)
+ if (ctx->dyn_root) {
+ as->dyn_root = true;
+ } else {
+- as->cell = afs_get_cell(ctx->cell);
++ as->cell = afs_use_cell(ctx->cell);
+ as->volume = afs_get_volume(ctx->volume,
+ afs_volume_trace_get_alloc_sbi);
+ }
+@@ -521,7 +521,7 @@ static void afs_destroy_sbi(struct afs_super_info *as)
+ if (as) {
+ struct afs_net *net = afs_net(as->net_ns);
+ afs_put_volume(net, as->volume, afs_volume_trace_put_destroy_sbi);
+- afs_put_cell(net, as->cell);
++ afs_unuse_cell(net, as->cell);
+ put_net(as->net_ns);
+ kfree(as);
+ }
+@@ -607,7 +607,7 @@ static void afs_free_fc(struct fs_context *fc)
+
+ afs_destroy_sbi(fc->s_fs_info);
+ afs_put_volume(ctx->net, ctx->volume, afs_volume_trace_put_free_fc);
+- afs_put_cell(ctx->net, ctx->cell);
++ afs_unuse_cell(ctx->net, ctx->cell);
+ key_put(ctx->key);
+ kfree(ctx);
+ }
+@@ -634,9 +634,7 @@ static int afs_init_fs_context(struct fs_context *fc)
+ ctx->net = afs_net(fc->net_ns);
+
+ /* Default to the workstation cell. */
+- rcu_read_lock();
+- cell = afs_lookup_cell_rcu(ctx->net, NULL, 0);
+- rcu_read_unlock();
++ cell = afs_find_cell(ctx->net, NULL, 0);
+ if (IS_ERR(cell))
+ cell = NULL;
+ ctx->cell = cell;
+diff --git a/fs/afs/vl_alias.c b/fs/afs/vl_alias.c
+index 5082ef04e99c5..ddb4cb67d0fd9 100644
+--- a/fs/afs/vl_alias.c
++++ b/fs/afs/vl_alias.c
+@@ -177,7 +177,7 @@ static int afs_compare_cell_roots(struct afs_cell *cell)
+
+ is_alias:
+ rcu_read_unlock();
+- cell->alias_of = afs_get_cell(p);
++ cell->alias_of = afs_use_cell(p);
+ return 1;
+ }
+
+@@ -247,18 +247,18 @@ static int afs_query_for_alias(struct afs_cell *cell, struct key *key)
+ continue;
+ if (p->root_volume)
+ continue; /* Ignore cells that have a root.cell volume. */
+- afs_get_cell(p);
++ afs_use_cell(p);
+ mutex_unlock(&cell->net->proc_cells_lock);
+
+ if (afs_query_for_alias_one(cell, key, p) != 0)
+ goto is_alias;
+
+ if (mutex_lock_interruptible(&cell->net->proc_cells_lock) < 0) {
+- afs_put_cell(cell->net, p);
++ afs_unuse_cell(cell->net, p);
+ return -ERESTARTSYS;
+ }
+
+- afs_put_cell(cell->net, p);
++ afs_unuse_cell(cell->net, p);
+ }
+
+ mutex_unlock(&cell->net->proc_cells_lock);
+diff --git a/fs/afs/vl_rotate.c b/fs/afs/vl_rotate.c
+index f405ca8b240a5..750bd1579f212 100644
+--- a/fs/afs/vl_rotate.c
++++ b/fs/afs/vl_rotate.c
+@@ -45,7 +45,7 @@ static bool afs_start_vl_iteration(struct afs_vl_cursor *vc)
+ cell->dns_expiry <= ktime_get_real_seconds()) {
+ dns_lookup_count = smp_load_acquire(&cell->dns_lookup_count);
+ set_bit(AFS_CELL_FL_DO_LOOKUP, &cell->flags);
+- queue_work(afs_wq, &cell->manager);
++ afs_queue_cell(cell);
+
+ if (cell->dns_source == DNS_RECORD_UNAVAILABLE) {
+ if (wait_var_event_interruptible(
+diff --git a/fs/afs/volume.c b/fs/afs/volume.c
+index 9bc0509e3634c..a838030e95634 100644
+--- a/fs/afs/volume.c
++++ b/fs/afs/volume.c
+@@ -106,7 +106,7 @@ static struct afs_volume *afs_alloc_volume(struct afs_fs_context *params,
+ return volume;
+
+ error_1:
+- afs_put_cell(params->net, volume->cell);
++ afs_put_cell(volume->cell);
+ kfree(volume);
+ error_0:
+ return ERR_PTR(ret);
+@@ -228,7 +228,7 @@ static void afs_destroy_volume(struct afs_net *net, struct afs_volume *volume)
+
+ afs_remove_volume_from_cell(volume);
+ afs_put_serverlist(net, rcu_access_pointer(volume->servers));
+- afs_put_cell(net, volume->cell);
++ afs_put_cell(volume->cell);
+ trace_afs_volume(volume->vid, atomic_read(&volume->usage),
+ afs_volume_trace_free);
+ kfree_rcu(volume, rcu);
+diff --git a/fs/btrfs/extent-io-tree.h b/fs/btrfs/extent-io-tree.h
+index 8bbb734f3f514..49384d55a908f 100644
+--- a/fs/btrfs/extent-io-tree.h
++++ b/fs/btrfs/extent-io-tree.h
+@@ -48,6 +48,7 @@ enum {
+ IO_TREE_INODE_FILE_EXTENT,
+ IO_TREE_LOG_CSUM_RANGE,
+ IO_TREE_SELFTEST,
++ IO_TREE_DEVICE_ALLOC_STATE,
+ };
+
+ struct extent_io_tree {
+diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
+index 79e9a80bd37a0..f9d8bd3099488 100644
+--- a/fs/btrfs/volumes.c
++++ b/fs/btrfs/volumes.c
+@@ -406,7 +406,7 @@ void __exit btrfs_cleanup_fs_uuids(void)
+ * Returned struct is not linked onto any lists and must be destroyed using
+ * btrfs_free_device.
+ */
+-static struct btrfs_device *__alloc_device(void)
++static struct btrfs_device *__alloc_device(struct btrfs_fs_info *fs_info)
+ {
+ struct btrfs_device *dev;
+
+@@ -433,7 +433,8 @@ static struct btrfs_device *__alloc_device(void)
+ btrfs_device_data_ordered_init(dev);
+ INIT_RADIX_TREE(&dev->reada_zones, GFP_NOFS & ~__GFP_DIRECT_RECLAIM);
+ INIT_RADIX_TREE(&dev->reada_extents, GFP_NOFS & ~__GFP_DIRECT_RECLAIM);
+- extent_io_tree_init(NULL, &dev->alloc_state, 0, NULL);
++ extent_io_tree_init(fs_info, &dev->alloc_state,
++ IO_TREE_DEVICE_ALLOC_STATE, NULL);
+
+ return dev;
+ }
+@@ -6545,7 +6546,7 @@ struct btrfs_device *btrfs_alloc_device(struct btrfs_fs_info *fs_info,
+ if (WARN_ON(!devid && !fs_info))
+ return ERR_PTR(-EINVAL);
+
+- dev = __alloc_device();
++ dev = __alloc_device(fs_info);
+ if (IS_ERR(dev))
+ return dev;
+
+diff --git a/fs/cifs/asn1.c b/fs/cifs/asn1.c
+index 689162e2e1755..3150c19cdc2fb 100644
+--- a/fs/cifs/asn1.c
++++ b/fs/cifs/asn1.c
+@@ -530,8 +530,8 @@ decode_negTokenInit(unsigned char *security_blob, int length,
+ return 0;
+ } else if ((cls != ASN1_CTX) || (con != ASN1_CON)
+ || (tag != ASN1_EOC)) {
+- cifs_dbg(FYI, "cls = %d con = %d tag = %d end = %p (%d) exit 0\n",
+- cls, con, tag, end, *end);
++ cifs_dbg(FYI, "cls = %d con = %d tag = %d end = %p exit 0\n",
++ cls, con, tag, end);
+ return 0;
+ }
+
+@@ -541,8 +541,8 @@ decode_negTokenInit(unsigned char *security_blob, int length,
+ return 0;
+ } else if ((cls != ASN1_UNI) || (con != ASN1_CON)
+ || (tag != ASN1_SEQ)) {
+- cifs_dbg(FYI, "cls = %d con = %d tag = %d end = %p (%d) exit 1\n",
+- cls, con, tag, end, *end);
++ cifs_dbg(FYI, "cls = %d con = %d tag = %d end = %p exit 1\n",
++ cls, con, tag, end);
+ return 0;
+ }
+
+@@ -552,8 +552,8 @@ decode_negTokenInit(unsigned char *security_blob, int length,
+ return 0;
+ } else if ((cls != ASN1_CTX) || (con != ASN1_CON)
+ || (tag != ASN1_EOC)) {
+- cifs_dbg(FYI, "cls = %d con = %d tag = %d end = %p (%d) exit 0\n",
+- cls, con, tag, end, *end);
++ cifs_dbg(FYI, "cls = %d con = %d tag = %d end = %p exit 0\n",
++ cls, con, tag, end);
+ return 0;
+ }
+
+@@ -564,8 +564,8 @@ decode_negTokenInit(unsigned char *security_blob, int length,
+ return 0;
+ } else if ((cls != ASN1_UNI) || (con != ASN1_CON)
+ || (tag != ASN1_SEQ)) {
+- cifs_dbg(FYI, "cls = %d con = %d tag = %d end = %p (%d) exit 1\n",
+- cls, con, tag, end, *end);
++ cifs_dbg(FYI, "cls = %d con = %d tag = %d sequence_end = %p exit 1\n",
++ cls, con, tag, sequence_end);
+ return 0;
+ }
+
+diff --git a/fs/cifs/cifsacl.c b/fs/cifs/cifsacl.c
+index 6025d7fc7bbfd..d0658891b0a6d 100644
+--- a/fs/cifs/cifsacl.c
++++ b/fs/cifs/cifsacl.c
+@@ -338,7 +338,7 @@ invalidate_key:
+ goto out_key_put;
+ }
+
+-static int
++int
+ sid_to_id(struct cifs_sb_info *cifs_sb, struct cifs_sid *psid,
+ struct cifs_fattr *fattr, uint sidtype)
+ {
+@@ -359,7 +359,8 @@ sid_to_id(struct cifs_sb_info *cifs_sb, struct cifs_sid *psid,
+ return -EIO;
+ }
+
+- if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_UID_FROM_ACL) {
++ if ((cifs_sb->mnt_cifs_flags & CIFS_MOUNT_UID_FROM_ACL) ||
++ (cifs_sb_master_tcon(cifs_sb)->posix_extensions)) {
+ uint32_t unix_id;
+ bool is_group;
+
+diff --git a/fs/cifs/cifsproto.h b/fs/cifs/cifsproto.h
+index 7a836ec0438e8..f4751cb391238 100644
+--- a/fs/cifs/cifsproto.h
++++ b/fs/cifs/cifsproto.h
+@@ -208,6 +208,8 @@ extern int cifs_set_file_info(struct inode *inode, struct iattr *attrs,
+ extern int cifs_rename_pending_delete(const char *full_path,
+ struct dentry *dentry,
+ const unsigned int xid);
++extern int sid_to_id(struct cifs_sb_info *cifs_sb, struct cifs_sid *psid,
++ struct cifs_fattr *fattr, uint sidtype);
+ extern int cifs_acl_to_fattr(struct cifs_sb_info *cifs_sb,
+ struct cifs_fattr *fattr, struct inode *inode,
+ bool get_mode_from_special_sid,
+diff --git a/fs/cifs/connect.c b/fs/cifs/connect.c
+index a61abde09ffe1..f4ecc13b02c0a 100644
+--- a/fs/cifs/connect.c
++++ b/fs/cifs/connect.c
+@@ -3594,7 +3594,10 @@ cifs_get_tcon(struct cifs_ses *ses, struct smb_vol *volume_info)
+ */
+ tcon->retry = volume_info->retry;
+ tcon->nocase = volume_info->nocase;
+- tcon->nohandlecache = volume_info->nohandlecache;
++ if (ses->server->capabilities & SMB2_GLOBAL_CAP_DIRECTORY_LEASING)
++ tcon->nohandlecache = volume_info->nohandlecache;
++ else
++ tcon->nohandlecache = 1;
+ tcon->nodelete = volume_info->nodelete;
+ tcon->local_lease = volume_info->local_lease;
+ INIT_LIST_HEAD(&tcon->pending_opens);
+diff --git a/fs/cifs/readdir.c b/fs/cifs/readdir.c
+index 6df0922e7e304..709fb53e9fee1 100644
+--- a/fs/cifs/readdir.c
++++ b/fs/cifs/readdir.c
+@@ -267,9 +267,8 @@ cifs_posix_to_fattr(struct cifs_fattr *fattr, struct smb2_posix_info *info,
+ if (reparse_file_needs_reval(fattr))
+ fattr->cf_flags |= CIFS_FATTR_NEED_REVAL;
+
+- /* TODO map SIDs */
+- fattr->cf_uid = cifs_sb->mnt_uid;
+- fattr->cf_gid = cifs_sb->mnt_gid;
++ sid_to_id(cifs_sb, &parsed.owner, fattr, SIDOWNER);
++ sid_to_id(cifs_sb, &parsed.group, fattr, SIDGROUP);
+ }
+
+ static void __dir_info_to_fattr(struct cifs_fattr *fattr, const void *info)
+diff --git a/fs/cifs/smb2ops.c b/fs/cifs/smb2ops.c
+index d44df8f95bcd4..09e1cd320ee56 100644
+--- a/fs/cifs/smb2ops.c
++++ b/fs/cifs/smb2ops.c
+@@ -3072,7 +3072,12 @@ get_smb2_acl_by_path(struct cifs_sb_info *cifs_sb,
+ oparms.tcon = tcon;
+ oparms.desired_access = READ_CONTROL;
+ oparms.disposition = FILE_OPEN;
+- oparms.create_options = cifs_create_options(cifs_sb, 0);
++ /*
++ * When querying an ACL, even if the file is a symlink we want to open
++ * the source not the target, and so the protocol requires that the
++ * client specify this flag when opening a reparse point
++ */
++ oparms.create_options = cifs_create_options(cifs_sb, 0) | OPEN_REPARSE_POINT;
+ oparms.fid = &fid;
+ oparms.reconnect = false;
+
+@@ -3924,7 +3929,7 @@ crypt_message(struct TCP_Server_Info *server, int num_rqst,
+ if (rc) {
+ cifs_server_dbg(VFS, "%s: Could not get %scryption key\n", __func__,
+ enc ? "en" : "de");
+- return 0;
++ return rc;
+ }
+
+ rc = smb3_crypto_aead_allocate(server);
+@@ -4103,7 +4108,8 @@ smb3_is_transform_hdr(void *buf)
+ static int
+ decrypt_raw_data(struct TCP_Server_Info *server, char *buf,
+ unsigned int buf_data_size, struct page **pages,
+- unsigned int npages, unsigned int page_data_size)
++ unsigned int npages, unsigned int page_data_size,
++ bool is_offloaded)
+ {
+ struct kvec iov[2];
+ struct smb_rqst rqst = {NULL};
+@@ -4129,7 +4135,8 @@ decrypt_raw_data(struct TCP_Server_Info *server, char *buf,
+
+ memmove(buf, iov[1].iov_base, buf_data_size);
+
+- server->total_read = buf_data_size + page_data_size;
++ if (!is_offloaded)
++ server->total_read = buf_data_size + page_data_size;
+
+ return rc;
+ }
+@@ -4342,7 +4349,7 @@ static void smb2_decrypt_offload(struct work_struct *work)
+ struct mid_q_entry *mid;
+
+ rc = decrypt_raw_data(dw->server, dw->buf, dw->server->vals->read_rsp_size,
+- dw->ppages, dw->npages, dw->len);
++ dw->ppages, dw->npages, dw->len, true);
+ if (rc) {
+ cifs_dbg(VFS, "error decrypting rc=%d\n", rc);
+ goto free_pages;
+@@ -4448,7 +4455,7 @@ receive_encrypted_read(struct TCP_Server_Info *server, struct mid_q_entry **mid,
+
+ non_offloaded_decrypt:
+ rc = decrypt_raw_data(server, buf, server->vals->read_rsp_size,
+- pages, npages, len);
++ pages, npages, len, false);
+ if (rc)
+ goto free_pages;
+
+@@ -4504,7 +4511,7 @@ receive_encrypted_standard(struct TCP_Server_Info *server,
+ server->total_read += length;
+
+ buf_size = pdu_length - sizeof(struct smb2_transform_hdr);
+- length = decrypt_raw_data(server, buf, buf_size, NULL, 0, 0);
++ length = decrypt_raw_data(server, buf, buf_size, NULL, 0, 0, false);
+ if (length)
+ return length;
+
+diff --git a/fs/crypto/policy.c b/fs/crypto/policy.c
+index d23ff162c78bc..0b32c64eb4053 100644
+--- a/fs/crypto/policy.c
++++ b/fs/crypto/policy.c
+@@ -178,10 +178,15 @@ static bool fscrypt_supported_v2_policy(const struct fscrypt_policy_v2 *policy,
+ 32, 32))
+ return false;
+
++ /*
++ * IV_INO_LBLK_32 hashes the inode number, so in principle it can
++ * support any ino_bits. However, currently the inode number is gotten
++ * from inode::i_ino which is 'unsigned long'. So for now the
++ * implementation limit is 32 bits.
++ */
+ if ((policy->flags & FSCRYPT_POLICY_FLAG_IV_INO_LBLK_32) &&
+- /* This uses hashed inode numbers, so ino_bits doesn't matter. */
+ !supported_iv_ino_lblk_policy(policy, inode, "IV_INO_LBLK_32",
+- INT_MAX, 32))
++ 32, 32))
+ return false;
+
+ if (memchr_inv(policy->__reserved, 0, sizeof(policy->__reserved))) {
+diff --git a/fs/d_path.c b/fs/d_path.c
+index 0f1fc1743302f..a69e2cd36e6e3 100644
+--- a/fs/d_path.c
++++ b/fs/d_path.c
+@@ -102,6 +102,8 @@ restart:
+
+ if (dentry == vfsmnt->mnt_root || IS_ROOT(dentry)) {
+ struct mount *parent = READ_ONCE(mnt->mnt_parent);
++ struct mnt_namespace *mnt_ns;
++
+ /* Escaped? */
+ if (dentry != vfsmnt->mnt_root) {
+ bptr = *buffer;
+@@ -116,7 +118,9 @@ restart:
+ vfsmnt = &mnt->mnt;
+ continue;
+ }
+- if (is_mounted(vfsmnt) && !is_anon_ns(mnt->mnt_ns))
++ mnt_ns = READ_ONCE(mnt->mnt_ns);
++ /* open-coded is_mounted() to use local mnt_ns */
++ if (!IS_ERR_OR_NULL(mnt_ns) && !is_anon_ns(mnt_ns))
+ error = 1; // absolute root
+ else
+ error = 2; // detached or not attached yet
+diff --git a/fs/dlm/config.c b/fs/dlm/config.c
+index 3b21082e1b550..3b1012a3c4396 100644
+--- a/fs/dlm/config.c
++++ b/fs/dlm/config.c
+@@ -216,6 +216,7 @@ struct dlm_space {
+ struct list_head members;
+ struct mutex members_lock;
+ int members_count;
++ struct dlm_nodes *nds;
+ };
+
+ struct dlm_comms {
+@@ -424,6 +425,7 @@ static struct config_group *make_space(struct config_group *g, const char *name)
+ INIT_LIST_HEAD(&sp->members);
+ mutex_init(&sp->members_lock);
+ sp->members_count = 0;
++ sp->nds = nds;
+ return &sp->group;
+
+ fail:
+@@ -445,6 +447,7 @@ static void drop_space(struct config_group *g, struct config_item *i)
+ static void release_space(struct config_item *i)
+ {
+ struct dlm_space *sp = config_item_to_space(i);
++ kfree(sp->nds);
+ kfree(sp);
+ }
+
+diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h
+index ff46defc65683..dc943e714d142 100644
+--- a/fs/ext4/ext4.h
++++ b/fs/ext4/ext4.h
+@@ -466,7 +466,7 @@ struct flex_groups {
+
+ /* Flags which are mutually exclusive to DAX */
+ #define EXT4_DAX_MUT_EXCL (EXT4_VERITY_FL | EXT4_ENCRYPT_FL |\
+- EXT4_JOURNAL_DATA_FL)
++ EXT4_JOURNAL_DATA_FL | EXT4_INLINE_DATA_FL)
+
+ /* Mask out flags that are inappropriate for the given type of inode. */
+ static inline __u32 ext4_mask_flags(umode_t mode, __u32 flags)
+diff --git a/fs/ext4/fsmap.c b/fs/ext4/fsmap.c
+index dbccf46f17709..37347ba868b70 100644
+--- a/fs/ext4/fsmap.c
++++ b/fs/ext4/fsmap.c
+@@ -108,6 +108,9 @@ static int ext4_getfsmap_helper(struct super_block *sb,
+
+ /* Are we just counting mappings? */
+ if (info->gfi_head->fmh_count == 0) {
++ if (info->gfi_head->fmh_entries == UINT_MAX)
++ return EXT4_QUERY_RANGE_ABORT;
++
+ if (rec_fsblk > info->gfi_next_fsblk)
+ info->gfi_head->fmh_entries++;
+
+diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c
+index e88eff999bd15..79d32ea606aa1 100644
+--- a/fs/ext4/mballoc.c
++++ b/fs/ext4/mballoc.c
+@@ -4037,7 +4037,7 @@ ext4_mb_discard_group_preallocations(struct super_block *sb,
+ struct ext4_buddy e4b;
+ int err;
+ int busy = 0;
+- int free = 0;
++ int free, free_total = 0;
+
+ mb_debug(sb, "discard preallocation for group %u\n", group);
+ if (list_empty(&grp->bb_prealloc_list))
+@@ -4065,8 +4065,8 @@ ext4_mb_discard_group_preallocations(struct super_block *sb,
+
+ INIT_LIST_HEAD(&list);
+ repeat:
++ free = 0;
+ ext4_lock_group(sb, group);
+- this_cpu_inc(discard_pa_seq);
+ list_for_each_entry_safe(pa, tmp,
+ &grp->bb_prealloc_list, pa_group_list) {
+ spin_lock(&pa->pa_lock);
+@@ -4083,6 +4083,9 @@ repeat:
+ /* seems this one can be freed ... */
+ ext4_mb_mark_pa_deleted(sb, pa);
+
++ if (!free)
++ this_cpu_inc(discard_pa_seq);
++
+ /* we can trust pa_free ... */
+ free += pa->pa_free;
+
+@@ -4092,22 +4095,6 @@ repeat:
+ list_add(&pa->u.pa_tmp_list, &list);
+ }
+
+- /* if we still need more blocks and some PAs were used, try again */
+- if (free < needed && busy) {
+- busy = 0;
+- ext4_unlock_group(sb, group);
+- cond_resched();
+- goto repeat;
+- }
+-
+- /* found anything to free? */
+- if (list_empty(&list)) {
+- BUG_ON(free != 0);
+- mb_debug(sb, "Someone else may have freed PA for this group %u\n",
+- group);
+- goto out;
+- }
+-
+ /* now free all selected PAs */
+ list_for_each_entry_safe(pa, tmp, &list, u.pa_tmp_list) {
+
+@@ -4125,14 +4112,22 @@ repeat:
+ call_rcu(&(pa)->u.pa_rcu, ext4_mb_pa_callback);
+ }
+
+-out:
++ free_total += free;
++
++ /* if we still need more blocks and some PAs were used, try again */
++ if (free_total < needed && busy) {
++ ext4_unlock_group(sb, group);
++ cond_resched();
++ busy = 0;
++ goto repeat;
++ }
+ ext4_unlock_group(sb, group);
+ ext4_mb_unload_buddy(&e4b);
+ put_bh(bitmap_bh);
+ out_dbg:
+ mb_debug(sb, "discarded (%d) blocks preallocated for group %u bb_free (%d)\n",
+- free, group, grp->bb_free);
+- return free;
++ free_total, group, grp->bb_free);
++ return free_total;
+ }
+
+ /*
+diff --git a/fs/f2fs/inode.c b/fs/f2fs/inode.c
+index 44582a4db513e..1e014535c2530 100644
+--- a/fs/f2fs/inode.c
++++ b/fs/f2fs/inode.c
+@@ -287,6 +287,13 @@ static bool sanity_check_inode(struct inode *inode, struct page *node_page)
+ return false;
+ }
+
++ if ((fi->i_flags & F2FS_CASEFOLD_FL) && !f2fs_sb_has_casefold(sbi)) {
++ set_sbi_flag(sbi, SBI_NEED_FSCK);
++ f2fs_warn(sbi, "%s: inode (ino=%lx) has casefold flag, but casefold feature is off",
++ __func__, inode->i_ino);
++ return false;
++ }
++
+ if (f2fs_has_extra_attr(inode) && f2fs_sb_has_compression(sbi) &&
+ fi->i_flags & F2FS_COMPR_FL &&
+ F2FS_FITS_IN_INODE(ri, fi->i_extra_isize,
+diff --git a/fs/f2fs/sysfs.c b/fs/f2fs/sysfs.c
+index c5e32ceb94827..e186d3af61368 100644
+--- a/fs/f2fs/sysfs.c
++++ b/fs/f2fs/sysfs.c
+@@ -964,4 +964,5 @@ void f2fs_unregister_sysfs(struct f2fs_sb_info *sbi)
+ }
+ kobject_del(&sbi->s_kobj);
+ kobject_put(&sbi->s_kobj);
++ wait_for_completion(&sbi->s_kobj_unregister);
+ }
+diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
+index bcfc288dba3fb..b115e7d47fcec 100644
+--- a/fs/iomap/buffered-io.c
++++ b/fs/iomap/buffered-io.c
+@@ -49,16 +49,8 @@ iomap_page_create(struct inode *inode, struct page *page)
+ if (iop || i_blocksize(inode) == PAGE_SIZE)
+ return iop;
+
+- iop = kmalloc(sizeof(*iop), GFP_NOFS | __GFP_NOFAIL);
+- atomic_set(&iop->read_count, 0);
+- atomic_set(&iop->write_count, 0);
++ iop = kzalloc(sizeof(*iop), GFP_NOFS | __GFP_NOFAIL);
+ spin_lock_init(&iop->uptodate_lock);
+- bitmap_zero(iop->uptodate, PAGE_SIZE / SECTOR_SIZE);
+-
+- /*
+- * migrate_page_move_mapping() assumes that pages with private data have
+- * their count elevated by 1.
+- */
+ attach_page_private(page, iop);
+ return iop;
+ }
+@@ -574,10 +566,10 @@ __iomap_write_begin(struct inode *inode, loff_t pos, unsigned len, int flags,
+ loff_t block_start = pos & ~(block_size - 1);
+ loff_t block_end = (pos + len + block_size - 1) & ~(block_size - 1);
+ unsigned from = offset_in_page(pos), to = from + len, poff, plen;
+- int status;
+
+ if (PageUptodate(page))
+ return 0;
++ ClearPageError(page);
+
+ do {
+ iomap_adjust_read_range(inode, iop, &block_start,
+@@ -594,14 +586,13 @@ __iomap_write_begin(struct inode *inode, loff_t pos, unsigned len, int flags,
+ if (WARN_ON_ONCE(flags & IOMAP_WRITE_F_UNSHARE))
+ return -EIO;
+ zero_user_segments(page, poff, from, to, poff + plen);
+- iomap_set_range_uptodate(page, poff, plen);
+- continue;
++ } else {
++ int status = iomap_read_page_sync(block_start, page,
++ poff, plen, srcmap);
++ if (status)
++ return status;
+ }
+-
+- status = iomap_read_page_sync(block_start, page, poff, plen,
+- srcmap);
+- if (status)
+- return status;
++ iomap_set_range_uptodate(page, poff, plen);
+ } while ((block_start += plen) < block_end);
+
+ return 0;
+diff --git a/fs/iomap/direct-io.c b/fs/iomap/direct-io.c
+index ec7b78e6fecaf..28d656b15300b 100644
+--- a/fs/iomap/direct-io.c
++++ b/fs/iomap/direct-io.c
+@@ -387,6 +387,16 @@ iomap_dio_actor(struct inode *inode, loff_t pos, loff_t length,
+ return iomap_dio_bio_actor(inode, pos, length, dio, iomap);
+ case IOMAP_INLINE:
+ return iomap_dio_inline_actor(inode, pos, length, dio, iomap);
++ case IOMAP_DELALLOC:
++ /*
++ * DIO is not serialised against mmap() access at all, and so
++ * if the page_mkwrite occurs between the writeback and the
++ * iomap_apply() call in the DIO path, then it will see the
++ * DELALLOC block that the page-mkwrite allocated.
++ */
++ pr_warn_ratelimited("Direct I/O collision with buffered writes! File: %pD4 Comm: %.20s\n",
++ dio->iocb->ki_filp, current->comm);
++ return -EIO;
+ default:
+ WARN_ON_ONCE(1);
+ return -EIO;
+diff --git a/fs/nfs/fs_context.c b/fs/nfs/fs_context.c
+index ccc88be88d6ae..a30b4bcb95a2c 100644
+--- a/fs/nfs/fs_context.c
++++ b/fs/nfs/fs_context.c
+@@ -94,6 +94,7 @@ enum {
+ static const struct constant_table nfs_param_enums_local_lock[] = {
+ { "all", Opt_local_lock_all },
+ { "flock", Opt_local_lock_flock },
++ { "posix", Opt_local_lock_posix },
+ { "none", Opt_local_lock_none },
+ {}
+ };
+diff --git a/fs/ntfs/inode.c b/fs/ntfs/inode.c
+index d4359a1df3d5e..84933a0af49b6 100644
+--- a/fs/ntfs/inode.c
++++ b/fs/ntfs/inode.c
+@@ -1809,6 +1809,12 @@ int ntfs_read_inode_mount(struct inode *vi)
+ brelse(bh);
+ }
+
++ if (le32_to_cpu(m->bytes_allocated) != vol->mft_record_size) {
++ ntfs_error(sb, "Incorrect mft record size %u in superblock, should be %u.",
++ le32_to_cpu(m->bytes_allocated), vol->mft_record_size);
++ goto err_out;
++ }
++
+ /* Apply the mst fixups. */
+ if (post_read_mst_fixup((NTFS_RECORD*)m, vol->mft_record_size)) {
+ /* FIXME: Try to use the $MFTMirr now. */
+diff --git a/fs/proc/base.c b/fs/proc/base.c
+index d86c0afc8a859..297ff606ae0f6 100644
+--- a/fs/proc/base.c
++++ b/fs/proc/base.c
+@@ -1046,7 +1046,6 @@ static ssize_t oom_adj_read(struct file *file, char __user *buf, size_t count,
+
+ static int __set_oom_adj(struct file *file, int oom_adj, bool legacy)
+ {
+- static DEFINE_MUTEX(oom_adj_mutex);
+ struct mm_struct *mm = NULL;
+ struct task_struct *task;
+ int err = 0;
+@@ -1086,7 +1085,7 @@ static int __set_oom_adj(struct file *file, int oom_adj, bool legacy)
+ struct task_struct *p = find_lock_task_mm(task);
+
+ if (p) {
+- if (atomic_read(&p->mm->mm_users) > 1) {
++ if (test_bit(MMF_MULTIPROCESS, &p->mm->flags)) {
+ mm = p->mm;
+ mmgrab(mm);
+ }
+diff --git a/fs/quota/quota_v2.c b/fs/quota/quota_v2.c
+index 58fc2a7c7fd19..e69a2bfdd81c0 100644
+--- a/fs/quota/quota_v2.c
++++ b/fs/quota/quota_v2.c
+@@ -282,6 +282,7 @@ static void v2r1_mem2diskdqb(void *dp, struct dquot *dquot)
+ d->dqb_curspace = cpu_to_le64(m->dqb_curspace);
+ d->dqb_btime = cpu_to_le64(m->dqb_btime);
+ d->dqb_id = cpu_to_le32(from_kqid(&init_user_ns, dquot->dq_id));
++ d->dqb_pad = 0;
+ if (qtree_entry_unused(info, dp))
+ d->dqb_itime = cpu_to_le64(1);
+ }
+diff --git a/fs/ramfs/file-nommu.c b/fs/ramfs/file-nommu.c
+index 4146954549560..355523f4a4bf3 100644
+--- a/fs/ramfs/file-nommu.c
++++ b/fs/ramfs/file-nommu.c
+@@ -224,7 +224,7 @@ static unsigned long ramfs_nommu_get_unmapped_area(struct file *file,
+ if (!pages)
+ goto out_free;
+
+- nr = find_get_pages(inode->i_mapping, &pgoff, lpages, pages);
++ nr = find_get_pages_contig(inode->i_mapping, pgoff, lpages, pages);
+ if (nr != lpages)
+ goto out_free_pages; /* leave if some pages were missing */
+
+diff --git a/fs/reiserfs/inode.c b/fs/reiserfs/inode.c
+index e43fed96704d8..c76d563dec0e1 100644
+--- a/fs/reiserfs/inode.c
++++ b/fs/reiserfs/inode.c
+@@ -2159,7 +2159,8 @@ out_end_trans:
+ out_inserted_sd:
+ clear_nlink(inode);
+ th->t_trans_id = 0; /* so the caller can't use this handle later */
+- unlock_new_inode(inode); /* OK to do even if we hadn't locked it */
++ if (inode->i_state & I_NEW)
++ unlock_new_inode(inode);
+ iput(inode);
+ return err;
+ }
+diff --git a/fs/reiserfs/super.c b/fs/reiserfs/super.c
+index a6bce5b1fb1dc..1b9c7a387dc71 100644
+--- a/fs/reiserfs/super.c
++++ b/fs/reiserfs/super.c
+@@ -1258,6 +1258,10 @@ static int reiserfs_parse_options(struct super_block *s,
+ "turned on.");
+ return 0;
+ }
++ if (qf_names[qtype] !=
++ REISERFS_SB(s)->s_qf_names[qtype])
++ kfree(qf_names[qtype]);
++ qf_names[qtype] = NULL;
+ if (*arg) { /* Some filename specified? */
+ if (REISERFS_SB(s)->s_qf_names[qtype]
+ && strcmp(REISERFS_SB(s)->s_qf_names[qtype],
+@@ -1287,10 +1291,6 @@ static int reiserfs_parse_options(struct super_block *s,
+ else
+ *mount_options |= 1 << REISERFS_GRPQUOTA;
+ } else {
+- if (qf_names[qtype] !=
+- REISERFS_SB(s)->s_qf_names[qtype])
+- kfree(qf_names[qtype]);
+- qf_names[qtype] = NULL;
+ if (qtype == USRQUOTA)
+ *mount_options &= ~(1 << REISERFS_USRQUOTA);
+ else
+diff --git a/fs/udf/inode.c b/fs/udf/inode.c
+index adaba8e8b326e..566118417e562 100644
+--- a/fs/udf/inode.c
++++ b/fs/udf/inode.c
+@@ -139,21 +139,24 @@ void udf_evict_inode(struct inode *inode)
+ struct udf_inode_info *iinfo = UDF_I(inode);
+ int want_delete = 0;
+
+- if (!inode->i_nlink && !is_bad_inode(inode)) {
+- want_delete = 1;
+- udf_setsize(inode, 0);
+- udf_update_inode(inode, IS_SYNC(inode));
++ if (!is_bad_inode(inode)) {
++ if (!inode->i_nlink) {
++ want_delete = 1;
++ udf_setsize(inode, 0);
++ udf_update_inode(inode, IS_SYNC(inode));
++ }
++ if (iinfo->i_alloc_type != ICBTAG_FLAG_AD_IN_ICB &&
++ inode->i_size != iinfo->i_lenExtents) {
++ udf_warn(inode->i_sb,
++ "Inode %lu (mode %o) has inode size %llu different from extent length %llu. Filesystem need not be standards compliant.\n",
++ inode->i_ino, inode->i_mode,
++ (unsigned long long)inode->i_size,
++ (unsigned long long)iinfo->i_lenExtents);
++ }
+ }
+ truncate_inode_pages_final(&inode->i_data);
+ invalidate_inode_buffers(inode);
+ clear_inode(inode);
+- if (iinfo->i_alloc_type != ICBTAG_FLAG_AD_IN_ICB &&
+- inode->i_size != iinfo->i_lenExtents) {
+- udf_warn(inode->i_sb, "Inode %lu (mode %o) has inode size %llu different from extent length %llu. Filesystem need not be standards compliant.\n",
+- inode->i_ino, inode->i_mode,
+- (unsigned long long)inode->i_size,
+- (unsigned long long)iinfo->i_lenExtents);
+- }
+ kfree(iinfo->i_ext.i_data);
+ iinfo->i_ext.i_data = NULL;
+ udf_clear_extent_cache(inode);
+diff --git a/fs/udf/super.c b/fs/udf/super.c
+index f747bf72edbe0..a6ce0ddb392c7 100644
+--- a/fs/udf/super.c
++++ b/fs/udf/super.c
+@@ -1353,6 +1353,12 @@ static int udf_load_sparable_map(struct super_block *sb,
+ (int)spm->numSparingTables);
+ return -EIO;
+ }
++ if (le32_to_cpu(spm->sizeSparingTable) > sb->s_blocksize) {
++ udf_err(sb, "error loading logical volume descriptor: "
++ "Too big sparing table size (%u)\n",
++ le32_to_cpu(spm->sizeSparingTable));
++ return -EIO;
++ }
+
+ for (i = 0; i < spm->numSparingTables; i++) {
+ loc = le32_to_cpu(spm->locSparingTable[i]);
+diff --git a/fs/xfs/libxfs/xfs_rtbitmap.c b/fs/xfs/libxfs/xfs_rtbitmap.c
+index 9498ced947be9..2a38576189307 100644
+--- a/fs/xfs/libxfs/xfs_rtbitmap.c
++++ b/fs/xfs/libxfs/xfs_rtbitmap.c
+@@ -1018,7 +1018,6 @@ xfs_rtalloc_query_range(
+ struct xfs_mount *mp = tp->t_mountp;
+ xfs_rtblock_t rtstart;
+ xfs_rtblock_t rtend;
+- xfs_rtblock_t rem;
+ int is_free;
+ int error = 0;
+
+@@ -1027,13 +1026,12 @@ xfs_rtalloc_query_range(
+ if (low_rec->ar_startext >= mp->m_sb.sb_rextents ||
+ low_rec->ar_startext == high_rec->ar_startext)
+ return 0;
+- if (high_rec->ar_startext > mp->m_sb.sb_rextents)
+- high_rec->ar_startext = mp->m_sb.sb_rextents;
++ high_rec->ar_startext = min(high_rec->ar_startext,
++ mp->m_sb.sb_rextents - 1);
+
+ /* Iterate the bitmap, looking for discrepancies. */
+ rtstart = low_rec->ar_startext;
+- rem = high_rec->ar_startext - rtstart;
+- while (rem) {
++ while (rtstart <= high_rec->ar_startext) {
+ /* Is the first block free? */
+ error = xfs_rtcheck_range(mp, tp, rtstart, 1, 1, &rtend,
+ &is_free);
+@@ -1042,7 +1040,7 @@ xfs_rtalloc_query_range(
+
+ /* How long does the extent go for? */
+ error = xfs_rtfind_forw(mp, tp, rtstart,
+- high_rec->ar_startext - 1, &rtend);
++ high_rec->ar_startext, &rtend);
+ if (error)
+ break;
+
+@@ -1055,7 +1053,6 @@ xfs_rtalloc_query_range(
+ break;
+ }
+
+- rem -= rtend - rtstart + 1;
+ rtstart = rtend + 1;
+ }
+
+diff --git a/fs/xfs/xfs_buf_item_recover.c b/fs/xfs/xfs_buf_item_recover.c
+index 04faa7310c4f0..8140bd870226a 100644
+--- a/fs/xfs/xfs_buf_item_recover.c
++++ b/fs/xfs/xfs_buf_item_recover.c
+@@ -721,6 +721,8 @@ xlog_recover_get_buf_lsn(
+ case XFS_ABTC_MAGIC:
+ case XFS_RMAP_CRC_MAGIC:
+ case XFS_REFC_CRC_MAGIC:
++ case XFS_FIBT_CRC_MAGIC:
++ case XFS_FIBT_MAGIC:
+ case XFS_IBT_CRC_MAGIC:
+ case XFS_IBT_MAGIC: {
+ struct xfs_btree_block *btb = blk;
+diff --git a/fs/xfs/xfs_file.c b/fs/xfs/xfs_file.c
+index 4d7385426149c..3ebc73ccc1337 100644
+--- a/fs/xfs/xfs_file.c
++++ b/fs/xfs/xfs_file.c
+@@ -1005,6 +1005,21 @@ xfs_file_fadvise(
+ return ret;
+ }
+
++/* Does this file, inode, or mount want synchronous writes? */
++static inline bool xfs_file_sync_writes(struct file *filp)
++{
++ struct xfs_inode *ip = XFS_I(file_inode(filp));
++
++ if (ip->i_mount->m_flags & XFS_MOUNT_WSYNC)
++ return true;
++ if (filp->f_flags & (__O_SYNC | O_DSYNC))
++ return true;
++ if (IS_SYNC(file_inode(filp)))
++ return true;
++
++ return false;
++}
++
+ STATIC loff_t
+ xfs_file_remap_range(
+ struct file *file_in,
+@@ -1062,7 +1077,7 @@ xfs_file_remap_range(
+ if (ret)
+ goto out_unlock;
+
+- if (mp->m_flags & XFS_MOUNT_WSYNC)
++ if (xfs_file_sync_writes(file_in) || xfs_file_sync_writes(file_out))
+ xfs_log_force_inode(dest);
+ out_unlock:
+ xfs_reflink_remap_unlock(file_in, file_out);
+diff --git a/fs/xfs/xfs_fsmap.c b/fs/xfs/xfs_fsmap.c
+index 4eebcec4aae6c..9ce5e7d5bf8f2 100644
+--- a/fs/xfs/xfs_fsmap.c
++++ b/fs/xfs/xfs_fsmap.c
+@@ -26,7 +26,7 @@
+ #include "xfs_rtalloc.h"
+
+ /* Convert an xfs_fsmap to an fsmap. */
+-void
++static void
+ xfs_fsmap_from_internal(
+ struct fsmap *dest,
+ struct xfs_fsmap *src)
+@@ -155,8 +155,7 @@ xfs_fsmap_owner_from_rmap(
+ /* getfsmap query state */
+ struct xfs_getfsmap_info {
+ struct xfs_fsmap_head *head;
+- xfs_fsmap_format_t formatter; /* formatting fn */
+- void *format_arg; /* format buffer */
++ struct fsmap *fsmap_recs; /* mapping records */
+ struct xfs_buf *agf_bp; /* AGF, for refcount queries */
+ xfs_daddr_t next_daddr; /* next daddr we expect */
+ u64 missing_owner; /* owner of holes */
+@@ -224,6 +223,20 @@ xfs_getfsmap_is_shared(
+ return 0;
+ }
+
++static inline void
++xfs_getfsmap_format(
++ struct xfs_mount *mp,
++ struct xfs_fsmap *xfm,
++ struct xfs_getfsmap_info *info)
++{
++ struct fsmap *rec;
++
++ trace_xfs_getfsmap_mapping(mp, xfm);
++
++ rec = &info->fsmap_recs[info->head->fmh_entries++];
++ xfs_fsmap_from_internal(rec, xfm);
++}
++
+ /*
+ * Format a reverse mapping for getfsmap, having translated rm_startblock
+ * into the appropriate daddr units.
+@@ -256,6 +269,9 @@ xfs_getfsmap_helper(
+
+ /* Are we just counting mappings? */
+ if (info->head->fmh_count == 0) {
++ if (info->head->fmh_entries == UINT_MAX)
++ return -ECANCELED;
++
+ if (rec_daddr > info->next_daddr)
+ info->head->fmh_entries++;
+
+@@ -285,10 +301,7 @@ xfs_getfsmap_helper(
+ fmr.fmr_offset = 0;
+ fmr.fmr_length = rec_daddr - info->next_daddr;
+ fmr.fmr_flags = FMR_OF_SPECIAL_OWNER;
+- error = info->formatter(&fmr, info->format_arg);
+- if (error)
+- return error;
+- info->head->fmh_entries++;
++ xfs_getfsmap_format(mp, &fmr, info);
+ }
+
+ if (info->last)
+@@ -320,11 +333,8 @@ xfs_getfsmap_helper(
+ if (shared)
+ fmr.fmr_flags |= FMR_OF_SHARED;
+ }
+- error = info->formatter(&fmr, info->format_arg);
+- if (error)
+- return error;
+- info->head->fmh_entries++;
+
++ xfs_getfsmap_format(mp, &fmr, info);
+ out:
+ rec_daddr += XFS_FSB_TO_BB(mp, rec->rm_blockcount);
+ if (info->next_daddr < rec_daddr)
+@@ -792,11 +802,11 @@ xfs_getfsmap_check_keys(
+ #endif /* CONFIG_XFS_RT */
+
+ /*
+- * Get filesystem's extents as described in head, and format for
+- * output. Calls formatter to fill the user's buffer until all
+- * extents are mapped, until the passed-in head->fmh_count slots have
+- * been filled, or until the formatter short-circuits the loop, if it
+- * is tracking filled-in extents on its own.
++ * Get filesystem's extents as described in head, and format for output. Fills
++ * in the supplied records array until there are no more reverse mappings to
++ * return or head.fmh_entries == head.fmh_count. In the second case, this
++ * function returns -ECANCELED to indicate that more records would have been
++ * returned.
+ *
+ * Key to Confusion
+ * ----------------
+@@ -816,8 +826,7 @@ int
+ xfs_getfsmap(
+ struct xfs_mount *mp,
+ struct xfs_fsmap_head *head,
+- xfs_fsmap_format_t formatter,
+- void *arg)
++ struct fsmap *fsmap_recs)
+ {
+ struct xfs_trans *tp = NULL;
+ struct xfs_fsmap dkeys[2]; /* per-dev keys */
+@@ -892,8 +901,7 @@ xfs_getfsmap(
+
+ info.next_daddr = head->fmh_keys[0].fmr_physical +
+ head->fmh_keys[0].fmr_length;
+- info.formatter = formatter;
+- info.format_arg = arg;
++ info.fsmap_recs = fsmap_recs;
+ info.head = head;
+
+ /*
+diff --git a/fs/xfs/xfs_fsmap.h b/fs/xfs/xfs_fsmap.h
+index c6c57739b8626..a0775788e7b13 100644
+--- a/fs/xfs/xfs_fsmap.h
++++ b/fs/xfs/xfs_fsmap.h
+@@ -27,13 +27,9 @@ struct xfs_fsmap_head {
+ struct xfs_fsmap fmh_keys[2]; /* low and high keys */
+ };
+
+-void xfs_fsmap_from_internal(struct fsmap *dest, struct xfs_fsmap *src);
+ void xfs_fsmap_to_internal(struct xfs_fsmap *dest, struct fsmap *src);
+
+-/* fsmap to userspace formatter - copy to user & advance pointer */
+-typedef int (*xfs_fsmap_format_t)(struct xfs_fsmap *, void *);
+-
+ int xfs_getfsmap(struct xfs_mount *mp, struct xfs_fsmap_head *head,
+- xfs_fsmap_format_t formatter, void *arg);
++ struct fsmap *out_recs);
+
+ #endif /* __XFS_FSMAP_H__ */
+diff --git a/fs/xfs/xfs_ioctl.c b/fs/xfs/xfs_ioctl.c
+index a190212ca85d0..e2a8edcb367bb 100644
+--- a/fs/xfs/xfs_ioctl.c
++++ b/fs/xfs/xfs_ioctl.c
+@@ -1707,39 +1707,17 @@ out_free_buf:
+ return error;
+ }
+
+-struct getfsmap_info {
+- struct xfs_mount *mp;
+- struct fsmap_head __user *data;
+- unsigned int idx;
+- __u32 last_flags;
+-};
+-
+-STATIC int
+-xfs_getfsmap_format(struct xfs_fsmap *xfm, void *priv)
+-{
+- struct getfsmap_info *info = priv;
+- struct fsmap fm;
+-
+- trace_xfs_getfsmap_mapping(info->mp, xfm);
+-
+- info->last_flags = xfm->fmr_flags;
+- xfs_fsmap_from_internal(&fm, xfm);
+- if (copy_to_user(&info->data->fmh_recs[info->idx++], &fm,
+- sizeof(struct fsmap)))
+- return -EFAULT;
+-
+- return 0;
+-}
+-
+ STATIC int
+ xfs_ioc_getfsmap(
+ struct xfs_inode *ip,
+ struct fsmap_head __user *arg)
+ {
+- struct getfsmap_info info = { NULL };
+ struct xfs_fsmap_head xhead = {0};
+ struct fsmap_head head;
+- bool aborted = false;
++ struct fsmap *recs;
++ unsigned int count;
++ __u32 last_flags = 0;
++ bool done = false;
+ int error;
+
+ if (copy_from_user(&head, arg, sizeof(struct fsmap_head)))
+@@ -1751,38 +1729,112 @@ xfs_ioc_getfsmap(
+ sizeof(head.fmh_keys[1].fmr_reserved)))
+ return -EINVAL;
+
++ /*
++ * Use an internal memory buffer so that we don't have to copy fsmap
++ * data to userspace while holding locks. Start by trying to allocate
++ * up to 128k for the buffer, but fall back to a single page if needed.
++ */
++ count = min_t(unsigned int, head.fmh_count,
++ 131072 / sizeof(struct fsmap));
++ recs = kvzalloc(count * sizeof(struct fsmap), GFP_KERNEL);
++ if (!recs) {
++ count = min_t(unsigned int, head.fmh_count,
++ PAGE_SIZE / sizeof(struct fsmap));
++ recs = kvzalloc(count * sizeof(struct fsmap), GFP_KERNEL);
++ if (!recs)
++ return -ENOMEM;
++ }
++
+ xhead.fmh_iflags = head.fmh_iflags;
+- xhead.fmh_count = head.fmh_count;
+ xfs_fsmap_to_internal(&xhead.fmh_keys[0], &head.fmh_keys[0]);
+ xfs_fsmap_to_internal(&xhead.fmh_keys[1], &head.fmh_keys[1]);
+
+ trace_xfs_getfsmap_low_key(ip->i_mount, &xhead.fmh_keys[0]);
+ trace_xfs_getfsmap_high_key(ip->i_mount, &xhead.fmh_keys[1]);
+
+- info.mp = ip->i_mount;
+- info.data = arg;
+- error = xfs_getfsmap(ip->i_mount, &xhead, xfs_getfsmap_format, &info);
+- if (error == -ECANCELED) {
+- error = 0;
+- aborted = true;
+- } else if (error)
+- return error;
++ head.fmh_entries = 0;
++ do {
++ struct fsmap __user *user_recs;
++ struct fsmap *last_rec;
++
++ user_recs = &arg->fmh_recs[head.fmh_entries];
++ xhead.fmh_entries = 0;
++ xhead.fmh_count = min_t(unsigned int, count,
++ head.fmh_count - head.fmh_entries);
++
++ /* Run query, record how many entries we got. */
++ error = xfs_getfsmap(ip->i_mount, &xhead, recs);
++ switch (error) {
++ case 0:
++ /*
++ * There are no more records in the result set. Copy
++ * whatever we got to userspace and break out.
++ */
++ done = true;
++ break;
++ case -ECANCELED:
++ /*
++ * The internal memory buffer is full. Copy whatever
++ * records we got to userspace and go again if we have
++ * not yet filled the userspace buffer.
++ */
++ error = 0;
++ break;
++ default:
++ goto out_free;
++ }
++ head.fmh_entries += xhead.fmh_entries;
++ head.fmh_oflags = xhead.fmh_oflags;
+
+- /* If we didn't abort, set the "last" flag in the last fmx */
+- if (!aborted && info.idx) {
+- info.last_flags |= FMR_OF_LAST;
+- if (copy_to_user(&info.data->fmh_recs[info.idx - 1].fmr_flags,
+- &info.last_flags, sizeof(info.last_flags)))
+- return -EFAULT;
++ /*
++ * If the caller wanted a record count or there aren't any
++ * new records to return, we're done.
++ */
++ if (head.fmh_count == 0 || xhead.fmh_entries == 0)
++ break;
++
++ /* Copy all the records we got out to userspace. */
++ if (copy_to_user(user_recs, recs,
++ xhead.fmh_entries * sizeof(struct fsmap))) {
++ error = -EFAULT;
++ goto out_free;
++ }
++
++ /* Remember the last record flags we copied to userspace. */
++ last_rec = &recs[xhead.fmh_entries - 1];
++ last_flags = last_rec->fmr_flags;
++
++ /* Set up the low key for the next iteration. */
++ xfs_fsmap_to_internal(&xhead.fmh_keys[0], last_rec);
++ trace_xfs_getfsmap_low_key(ip->i_mount, &xhead.fmh_keys[0]);
++ } while (!done && head.fmh_entries < head.fmh_count);
++
++ /*
++ * If there are no more records in the query result set and we're not
++ * in counting mode, mark the last record returned with the LAST flag.
++ */
++ if (done && head.fmh_count > 0 && head.fmh_entries > 0) {
++ struct fsmap __user *user_rec;
++
++ last_flags |= FMR_OF_LAST;
++ user_rec = &arg->fmh_recs[head.fmh_entries - 1];
++
++ if (copy_to_user(&user_rec->fmr_flags, &last_flags,
++ sizeof(last_flags))) {
++ error = -EFAULT;
++ goto out_free;
++ }
+ }
+
+ /* copy back header */
+- head.fmh_entries = xhead.fmh_entries;
+- head.fmh_oflags = xhead.fmh_oflags;
+- if (copy_to_user(arg, &head, sizeof(struct fsmap_head)))
+- return -EFAULT;
++ if (copy_to_user(arg, &head, sizeof(struct fsmap_head))) {
++ error = -EFAULT;
++ goto out_free;
++ }
+
+- return 0;
++out_free:
++ kmem_free(recs);
++ return error;
+ }
+
+ STATIC int
+diff --git a/fs/xfs/xfs_rtalloc.c b/fs/xfs/xfs_rtalloc.c
+index 6209e7b6b895b..86994d7f7cba3 100644
+--- a/fs/xfs/xfs_rtalloc.c
++++ b/fs/xfs/xfs_rtalloc.c
+@@ -247,6 +247,9 @@ xfs_rtallocate_extent_block(
+ end = XFS_BLOCKTOBIT(mp, bbno + 1) - 1;
+ i <= end;
+ i++) {
++ /* Make sure we don't scan off the end of the rt volume. */
++ maxlen = min(mp->m_sb.sb_rextents, i + maxlen) - i;
++
+ /*
+ * See if there's a free extent of maxlen starting at i.
+ * If it's not so then next will contain the first non-free.
+@@ -442,6 +445,14 @@ xfs_rtallocate_extent_near(
+ */
+ if (bno >= mp->m_sb.sb_rextents)
+ bno = mp->m_sb.sb_rextents - 1;
++
++ /* Make sure we don't run off the end of the rt volume. */
++ maxlen = min(mp->m_sb.sb_rextents, bno + maxlen) - bno;
++ if (maxlen < minlen) {
++ *rtblock = NULLRTBLOCK;
++ return 0;
++ }
++
+ /*
+ * Try the exact allocation first.
+ */
+diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
+index ca08db4ffb5f7..ce3f5231aa698 100644
+--- a/include/linux/bpf_verifier.h
++++ b/include/linux/bpf_verifier.h
+@@ -358,6 +358,7 @@ struct bpf_subprog_info {
+ u32 start; /* insn idx of function entry point */
+ u32 linfo_idx; /* The idx to the main_prog->aux->linfo */
+ u16 stack_depth; /* max. stack depth used by this function */
++ bool has_tail_call;
+ };
+
+ /* single container for all structs
+diff --git a/include/linux/mailbox/mtk-cmdq-mailbox.h b/include/linux/mailbox/mtk-cmdq-mailbox.h
+index a4dc45fbec0a4..23bc366f6c3b3 100644
+--- a/include/linux/mailbox/mtk-cmdq-mailbox.h
++++ b/include/linux/mailbox/mtk-cmdq-mailbox.h
+@@ -27,8 +27,7 @@
+ * bit 16-27: update value
+ * bit 31: 1 - update, 0 - no update
+ */
+-#define CMDQ_WFE_OPTION (CMDQ_WFE_UPDATE | CMDQ_WFE_WAIT | \
+- CMDQ_WFE_WAIT_VALUE)
++#define CMDQ_WFE_OPTION (CMDQ_WFE_WAIT | CMDQ_WFE_WAIT_VALUE)
+
+ /** cmdq event maximum */
+ #define CMDQ_MAX_EVENT 0x3ff
+diff --git a/include/linux/oom.h b/include/linux/oom.h
+index c696c265f0193..b9df34326772c 100644
+--- a/include/linux/oom.h
++++ b/include/linux/oom.h
+@@ -55,6 +55,7 @@ struct oom_control {
+ };
+
+ extern struct mutex oom_lock;
++extern struct mutex oom_adj_mutex;
+
+ static inline void set_current_oom_origin(void)
+ {
+diff --git a/include/linux/overflow.h b/include/linux/overflow.h
+index 93fcef105061b..ff3c48f0abc5b 100644
+--- a/include/linux/overflow.h
++++ b/include/linux/overflow.h
+@@ -3,6 +3,7 @@
+ #define __LINUX_OVERFLOW_H
+
+ #include <linux/compiler.h>
++#include <linux/limits.h>
+
+ /*
+ * In the fallback code below, we need to compute the minimum and
+diff --git a/include/linux/page_owner.h b/include/linux/page_owner.h
+index 8679ccd722e89..3468794f83d23 100644
+--- a/include/linux/page_owner.h
++++ b/include/linux/page_owner.h
+@@ -11,7 +11,7 @@ extern struct page_ext_operations page_owner_ops;
+ extern void __reset_page_owner(struct page *page, unsigned int order);
+ extern void __set_page_owner(struct page *page,
+ unsigned int order, gfp_t gfp_mask);
+-extern void __split_page_owner(struct page *page, unsigned int order);
++extern void __split_page_owner(struct page *page, unsigned int nr);
+ extern void __copy_page_owner(struct page *oldpage, struct page *newpage);
+ extern void __set_page_owner_migrate_reason(struct page *page, int reason);
+ extern void __dump_page_owner(struct page *page);
+@@ -31,10 +31,10 @@ static inline void set_page_owner(struct page *page,
+ __set_page_owner(page, order, gfp_mask);
+ }
+
+-static inline void split_page_owner(struct page *page, unsigned int order)
++static inline void split_page_owner(struct page *page, unsigned int nr)
+ {
+ if (static_branch_unlikely(&page_owner_inited))
+- __split_page_owner(page, order);
++ __split_page_owner(page, nr);
+ }
+ static inline void copy_page_owner(struct page *oldpage, struct page *newpage)
+ {
+diff --git a/include/linux/pci.h b/include/linux/pci.h
+index 34c1c4f45288f..1bc3c020672fd 100644
+--- a/include/linux/pci.h
++++ b/include/linux/pci.h
+@@ -439,6 +439,7 @@ struct pci_dev {
+ unsigned int is_probed:1; /* Device probing in progress */
+ unsigned int link_active_reporting:1;/* Device capable of reporting link active */
+ unsigned int no_vf_scan:1; /* Don't scan for VFs after IOV enablement */
++ unsigned int no_command_memory:1; /* No PCI_COMMAND_MEMORY */
+ pci_dev_flags_t dev_flags;
+ atomic_t enable_cnt; /* pci_enable_device has been called */
+
+diff --git a/include/linux/platform_data/dma-dw.h b/include/linux/platform_data/dma-dw.h
+index f3eaf9ec00a1b..70078be166e3c 100644
+--- a/include/linux/platform_data/dma-dw.h
++++ b/include/linux/platform_data/dma-dw.h
+@@ -21,6 +21,7 @@
+ * @dst_id: dst request line
+ * @m_master: memory master for transfers on allocated channel
+ * @p_master: peripheral master for transfers on allocated channel
++ * @channels: mask of the channels permitted for allocation (zero value means any)
+ * @hs_polarity:set active low polarity of handshake interface
+ */
+ struct dw_dma_slave {
+@@ -29,6 +30,7 @@ struct dw_dma_slave {
+ u8 dst_id;
+ u8 m_master;
+ u8 p_master;
++ u8 channels;
+ bool hs_polarity;
+ };
+
+diff --git a/include/linux/sched/coredump.h b/include/linux/sched/coredump.h
+index ecdc6542070f1..dfd82eab29025 100644
+--- a/include/linux/sched/coredump.h
++++ b/include/linux/sched/coredump.h
+@@ -72,6 +72,7 @@ static inline int get_dumpable(struct mm_struct *mm)
+ #define MMF_DISABLE_THP 24 /* disable THP for all VMAs */
+ #define MMF_OOM_VICTIM 25 /* mm is the oom victim */
+ #define MMF_OOM_REAP_QUEUED 26 /* mm was queued for oom_reaper */
++#define MMF_MULTIPROCESS 27 /* mm is shared between processes */
+ #define MMF_DISABLE_THP_MASK (1 << MMF_DISABLE_THP)
+
+ #define MMF_INIT_MASK (MMF_DUMPABLE_MASK | MMF_DUMP_FILTER_MASK |\
+diff --git a/include/linux/soc/mediatek/mtk-cmdq.h b/include/linux/soc/mediatek/mtk-cmdq.h
+index a74c1d5acdf3c..cb71dca985589 100644
+--- a/include/linux/soc/mediatek/mtk-cmdq.h
++++ b/include/linux/soc/mediatek/mtk-cmdq.h
+@@ -105,11 +105,12 @@ int cmdq_pkt_write_mask(struct cmdq_pkt *pkt, u8 subsys,
+ /**
+ * cmdq_pkt_wfe() - append wait for event command to the CMDQ packet
+ * @pkt: the CMDQ packet
+- * @event: the desired event type to "wait and CLEAR"
++ * @event: the desired event type to wait
++ * @clear: clear event or not after event arrive
+ *
+ * Return: 0 for success; else the error code is returned
+ */
+-int cmdq_pkt_wfe(struct cmdq_pkt *pkt, u16 event);
++int cmdq_pkt_wfe(struct cmdq_pkt *pkt, u16 event, bool clear);
+
+ /**
+ * cmdq_pkt_clear_event() - append clear event command to the CMDQ packet
+diff --git a/include/net/ip.h b/include/net/ip.h
+index 04ebe7bf54c6a..d61c26ab4ee84 100644
+--- a/include/net/ip.h
++++ b/include/net/ip.h
+@@ -439,12 +439,18 @@ static inline unsigned int ip_dst_mtu_maybe_forward(const struct dst_entry *dst,
+ bool forwarding)
+ {
+ struct net *net = dev_net(dst->dev);
++ unsigned int mtu;
+
+ if (net->ipv4.sysctl_ip_fwd_use_pmtu ||
+ ip_mtu_locked(dst) ||
+ !forwarding)
+ return dst_mtu(dst);
+
++ /* 'forwarding = true' case should always honour route mtu */
++ mtu = dst_metric_raw(dst, RTAX_MTU);
++ if (mtu)
++ return mtu;
++
+ return min(READ_ONCE(dst->dev->mtu), IP_MAX_MTU);
+ }
+
+diff --git a/include/net/netfilter/nf_log.h b/include/net/netfilter/nf_log.h
+index 0d3920896d502..716db4a0fed89 100644
+--- a/include/net/netfilter/nf_log.h
++++ b/include/net/netfilter/nf_log.h
+@@ -108,6 +108,7 @@ int nf_log_dump_tcp_header(struct nf_log_buf *m, const struct sk_buff *skb,
+ unsigned int logflags);
+ void nf_log_dump_sk_uid_gid(struct net *net, struct nf_log_buf *m,
+ struct sock *sk);
++void nf_log_dump_vlan(struct nf_log_buf *m, const struct sk_buff *skb);
+ void nf_log_dump_packet_common(struct nf_log_buf *m, u_int8_t pf,
+ unsigned int hooknum, const struct sk_buff *skb,
+ const struct net_device *in,
+diff --git a/include/net/tc_act/tc_tunnel_key.h b/include/net/tc_act/tc_tunnel_key.h
+index e1057b255f69a..879fe8cff5819 100644
+--- a/include/net/tc_act/tc_tunnel_key.h
++++ b/include/net/tc_act/tc_tunnel_key.h
+@@ -56,7 +56,10 @@ static inline struct ip_tunnel_info *tcf_tunnel_info(const struct tc_action *a)
+ {
+ #ifdef CONFIG_NET_CLS_ACT
+ struct tcf_tunnel_key *t = to_tunnel_key(a);
+- struct tcf_tunnel_key_params *params = rtnl_dereference(t->params);
++ struct tcf_tunnel_key_params *params;
++
++ params = rcu_dereference_protected(t->params,
++ lockdep_is_held(&a->tcfa_lock));
+
+ return ¶ms->tcft_enc_metadata->u.tun_info;
+ #else
+diff --git a/include/rdma/ib_umem.h b/include/rdma/ib_umem.h
+index e3518fd6b95b1..9353910915d41 100644
+--- a/include/rdma/ib_umem.h
++++ b/include/rdma/ib_umem.h
+@@ -95,10 +95,11 @@ static inline int ib_umem_copy_from(void *dst, struct ib_umem *umem, size_t offs
+ size_t length) {
+ return -EINVAL;
+ }
+-static inline int ib_umem_find_best_pgsz(struct ib_umem *umem,
+- unsigned long pgsz_bitmap,
+- unsigned long virt) {
+- return -EINVAL;
++static inline unsigned long ib_umem_find_best_pgsz(struct ib_umem *umem,
++ unsigned long pgsz_bitmap,
++ unsigned long virt)
++{
++ return 0;
+ }
+
+ #endif /* CONFIG_INFINIBAND_USER_MEM */
+diff --git a/include/rdma/ib_verbs.h b/include/rdma/ib_verbs.h
+index ef2f3986c4933..d7809f203715f 100644
+--- a/include/rdma/ib_verbs.h
++++ b/include/rdma/ib_verbs.h
+@@ -2465,7 +2465,7 @@ struct ib_device_ops {
+ int (*create_cq)(struct ib_cq *cq, const struct ib_cq_init_attr *attr,
+ struct ib_udata *udata);
+ int (*modify_cq)(struct ib_cq *cq, u16 cq_count, u16 cq_period);
+- void (*destroy_cq)(struct ib_cq *cq, struct ib_udata *udata);
++ int (*destroy_cq)(struct ib_cq *cq, struct ib_udata *udata);
+ int (*resize_cq)(struct ib_cq *cq, int cqe, struct ib_udata *udata);
+ struct ib_mr *(*get_dma_mr)(struct ib_pd *pd, int mr_access_flags);
+ struct ib_mr *(*reg_user_mr)(struct ib_pd *pd, u64 start, u64 length,
+@@ -3834,46 +3834,15 @@ static inline int ib_post_recv(struct ib_qp *qp,
+ return qp->device->ops.post_recv(qp, recv_wr, bad_recv_wr ? : &dummy);
+ }
+
+-struct ib_cq *__ib_alloc_cq_user(struct ib_device *dev, void *private,
+- int nr_cqe, int comp_vector,
+- enum ib_poll_context poll_ctx,
+- const char *caller, struct ib_udata *udata);
+-
+-/**
+- * ib_alloc_cq_user: Allocate kernel/user CQ
+- * @dev: The IB device
+- * @private: Private data attached to the CQE
+- * @nr_cqe: Number of CQEs in the CQ
+- * @comp_vector: Completion vector used for the IRQs
+- * @poll_ctx: Context used for polling the CQ
+- * @udata: Valid user data or NULL for kernel objects
+- */
+-static inline struct ib_cq *ib_alloc_cq_user(struct ib_device *dev,
+- void *private, int nr_cqe,
+- int comp_vector,
+- enum ib_poll_context poll_ctx,
+- struct ib_udata *udata)
+-{
+- return __ib_alloc_cq_user(dev, private, nr_cqe, comp_vector, poll_ctx,
+- KBUILD_MODNAME, udata);
+-}
+-
+-/**
+- * ib_alloc_cq: Allocate kernel CQ
+- * @dev: The IB device
+- * @private: Private data attached to the CQE
+- * @nr_cqe: Number of CQEs in the CQ
+- * @comp_vector: Completion vector used for the IRQs
+- * @poll_ctx: Context used for polling the CQ
+- *
+- * NOTE: for user cq use ib_alloc_cq_user with valid udata!
+- */
++struct ib_cq *__ib_alloc_cq(struct ib_device *dev, void *private, int nr_cqe,
++ int comp_vector, enum ib_poll_context poll_ctx,
++ const char *caller);
+ static inline struct ib_cq *ib_alloc_cq(struct ib_device *dev, void *private,
+ int nr_cqe, int comp_vector,
+ enum ib_poll_context poll_ctx)
+ {
+- return ib_alloc_cq_user(dev, private, nr_cqe, comp_vector, poll_ctx,
+- NULL);
++ return __ib_alloc_cq(dev, private, nr_cqe, comp_vector, poll_ctx,
++ KBUILD_MODNAME);
+ }
+
+ struct ib_cq *__ib_alloc_cq_any(struct ib_device *dev, void *private,
+@@ -3895,26 +3864,7 @@ static inline struct ib_cq *ib_alloc_cq_any(struct ib_device *dev,
+ KBUILD_MODNAME);
+ }
+
+-/**
+- * ib_free_cq_user - Free kernel/user CQ
+- * @cq: The CQ to free
+- * @udata: Valid user data or NULL for kernel objects
+- *
+- * NOTE: This function shouldn't be called on shared CQs.
+- */
+-void ib_free_cq_user(struct ib_cq *cq, struct ib_udata *udata);
+-
+-/**
+- * ib_free_cq - Free kernel CQ
+- * @cq: The CQ to free
+- *
+- * NOTE: for user cq use ib_free_cq_user with valid udata!
+- */
+-static inline void ib_free_cq(struct ib_cq *cq)
+-{
+- ib_free_cq_user(cq, NULL);
+-}
+-
++void ib_free_cq(struct ib_cq *cq);
+ int ib_process_cq_direct(struct ib_cq *cq, int budget);
+
+ /**
+@@ -3972,7 +3922,9 @@ int ib_destroy_cq_user(struct ib_cq *cq, struct ib_udata *udata);
+ */
+ static inline void ib_destroy_cq(struct ib_cq *cq)
+ {
+- ib_destroy_cq_user(cq, NULL);
++ int ret = ib_destroy_cq_user(cq, NULL);
++
++ WARN_ONCE(ret, "Destroy of kernel CQ shouldn't fail");
+ }
+
+ /**
+diff --git a/include/scsi/scsi_common.h b/include/scsi/scsi_common.h
+index 731ac09ed2313..5b567b43e1b16 100644
+--- a/include/scsi/scsi_common.h
++++ b/include/scsi/scsi_common.h
+@@ -25,6 +25,13 @@ scsi_command_size(const unsigned char *cmnd)
+ scsi_varlen_cdb_length(cmnd) : COMMAND_SIZE(cmnd[0]);
+ }
+
++static inline unsigned char
++scsi_command_control(const unsigned char *cmnd)
++{
++ return (cmnd[0] == VARIABLE_LENGTH_CMD) ?
++ cmnd[1] : cmnd[COMMAND_SIZE(cmnd[0]) - 1];
++}
++
+ /* Returns a human-readable name for the device */
+ extern const char *scsi_device_type(unsigned type);
+
+diff --git a/include/sound/hda_codec.h b/include/sound/hda_codec.h
+index d16a4229209b2..a2becf13293a3 100644
+--- a/include/sound/hda_codec.h
++++ b/include/sound/hda_codec.h
+@@ -253,6 +253,7 @@ struct hda_codec {
+ unsigned int force_pin_prefix:1; /* Add location prefix */
+ unsigned int link_down_at_suspend:1; /* link down at runtime suspend */
+ unsigned int relaxed_resume:1; /* don't resume forcibly for jack */
++ unsigned int forced_resume:1; /* forced resume for jack */
+ unsigned int mst_no_extra_pcms:1; /* no backup PCMs for DP-MST */
+
+ #ifdef CONFIG_PM
+diff --git a/include/trace/events/target.h b/include/trace/events/target.h
+index 77408edd29d2a..67fad2677ed55 100644
+--- a/include/trace/events/target.h
++++ b/include/trace/events/target.h
+@@ -141,6 +141,7 @@ TRACE_EVENT(target_sequencer_start,
+ __field( unsigned int, opcode )
+ __field( unsigned int, data_length )
+ __field( unsigned int, task_attribute )
++ __field( unsigned char, control )
+ __array( unsigned char, cdb, TCM_MAX_COMMAND_SIZE )
+ __string( initiator, cmd->se_sess->se_node_acl->initiatorname )
+ ),
+@@ -151,6 +152,7 @@ TRACE_EVENT(target_sequencer_start,
+ __entry->opcode = cmd->t_task_cdb[0];
+ __entry->data_length = cmd->data_length;
+ __entry->task_attribute = cmd->sam_task_attr;
++ __entry->control = scsi_command_control(cmd->t_task_cdb);
+ memcpy(__entry->cdb, cmd->t_task_cdb, TCM_MAX_COMMAND_SIZE);
+ __assign_str(initiator, cmd->se_sess->se_node_acl->initiatorname);
+ ),
+@@ -160,9 +162,7 @@ TRACE_EVENT(target_sequencer_start,
+ __entry->tag, show_opcode_name(__entry->opcode),
+ __entry->data_length, __print_hex(__entry->cdb, 16),
+ show_task_attribute_name(__entry->task_attribute),
+- scsi_command_size(__entry->cdb) <= 16 ?
+- __entry->cdb[scsi_command_size(__entry->cdb) - 1] :
+- __entry->cdb[1]
++ __entry->control
+ )
+ );
+
+@@ -178,6 +178,7 @@ TRACE_EVENT(target_cmd_complete,
+ __field( unsigned int, opcode )
+ __field( unsigned int, data_length )
+ __field( unsigned int, task_attribute )
++ __field( unsigned char, control )
+ __field( unsigned char, scsi_status )
+ __field( unsigned char, sense_length )
+ __array( unsigned char, cdb, TCM_MAX_COMMAND_SIZE )
+@@ -191,6 +192,7 @@ TRACE_EVENT(target_cmd_complete,
+ __entry->opcode = cmd->t_task_cdb[0];
+ __entry->data_length = cmd->data_length;
+ __entry->task_attribute = cmd->sam_task_attr;
++ __entry->control = scsi_command_control(cmd->t_task_cdb);
+ __entry->scsi_status = cmd->scsi_status;
+ __entry->sense_length = cmd->scsi_status == SAM_STAT_CHECK_CONDITION ?
+ min(18, ((u8 *) cmd->sense_buffer)[SPC_ADD_SENSE_LEN_OFFSET] + 8) : 0;
+@@ -208,9 +210,7 @@ TRACE_EVENT(target_cmd_complete,
+ show_opcode_name(__entry->opcode),
+ __entry->data_length, __print_hex(__entry->cdb, 16),
+ show_task_attribute_name(__entry->task_attribute),
+- scsi_command_size(__entry->cdb) <= 16 ?
+- __entry->cdb[scsi_command_size(__entry->cdb) - 1] :
+- __entry->cdb[1]
++ __entry->control
+ )
+ );
+
+diff --git a/include/uapi/linux/pci_regs.h b/include/uapi/linux/pci_regs.h
+index f9701410d3b52..57a222014cd20 100644
+--- a/include/uapi/linux/pci_regs.h
++++ b/include/uapi/linux/pci_regs.h
+@@ -76,6 +76,7 @@
+ #define PCI_CACHE_LINE_SIZE 0x0c /* 8 bits */
+ #define PCI_LATENCY_TIMER 0x0d /* 8 bits */
+ #define PCI_HEADER_TYPE 0x0e /* 8 bits */
++#define PCI_HEADER_TYPE_MASK 0x7f
+ #define PCI_HEADER_TYPE_NORMAL 0
+ #define PCI_HEADER_TYPE_BRIDGE 1
+ #define PCI_HEADER_TYPE_CARDBUS 2
+diff --git a/include/uapi/linux/perf_event.h b/include/uapi/linux/perf_event.h
+index 7b2d6fc9e6ed7..dc33e3051819d 100644
+--- a/include/uapi/linux/perf_event.h
++++ b/include/uapi/linux/perf_event.h
+@@ -1155,7 +1155,7 @@ union perf_mem_data_src {
+
+ #define PERF_MEM_SNOOPX_FWD 0x01 /* forward */
+ /* 1 free */
+-#define PERF_MEM_SNOOPX_SHIFT 37
++#define PERF_MEM_SNOOPX_SHIFT 38
+
+ /* locked instruction */
+ #define PERF_MEM_LOCK_NA 0x01 /* not available */
+diff --git a/kernel/bpf/percpu_freelist.c b/kernel/bpf/percpu_freelist.c
+index b367430e611c7..3d897de890612 100644
+--- a/kernel/bpf/percpu_freelist.c
++++ b/kernel/bpf/percpu_freelist.c
+@@ -17,6 +17,8 @@ int pcpu_freelist_init(struct pcpu_freelist *s)
+ raw_spin_lock_init(&head->lock);
+ head->first = NULL;
+ }
++ raw_spin_lock_init(&s->extralist.lock);
++ s->extralist.first = NULL;
+ return 0;
+ }
+
+@@ -40,12 +42,50 @@ static inline void ___pcpu_freelist_push(struct pcpu_freelist_head *head,
+ raw_spin_unlock(&head->lock);
+ }
+
++static inline bool pcpu_freelist_try_push_extra(struct pcpu_freelist *s,
++ struct pcpu_freelist_node *node)
++{
++ if (!raw_spin_trylock(&s->extralist.lock))
++ return false;
++
++ pcpu_freelist_push_node(&s->extralist, node);
++ raw_spin_unlock(&s->extralist.lock);
++ return true;
++}
++
++static inline void ___pcpu_freelist_push_nmi(struct pcpu_freelist *s,
++ struct pcpu_freelist_node *node)
++{
++ int cpu, orig_cpu;
++
++ orig_cpu = cpu = raw_smp_processor_id();
++ while (1) {
++ struct pcpu_freelist_head *head;
++
++ head = per_cpu_ptr(s->freelist, cpu);
++ if (raw_spin_trylock(&head->lock)) {
++ pcpu_freelist_push_node(head, node);
++ raw_spin_unlock(&head->lock);
++ return;
++ }
++ cpu = cpumask_next(cpu, cpu_possible_mask);
++ if (cpu >= nr_cpu_ids)
++ cpu = 0;
++
++ /* cannot lock any per cpu lock, try extralist */
++ if (cpu == orig_cpu &&
++ pcpu_freelist_try_push_extra(s, node))
++ return;
++ }
++}
++
+ void __pcpu_freelist_push(struct pcpu_freelist *s,
+ struct pcpu_freelist_node *node)
+ {
+- struct pcpu_freelist_head *head = this_cpu_ptr(s->freelist);
+-
+- ___pcpu_freelist_push(head, node);
++ if (in_nmi())
++ ___pcpu_freelist_push_nmi(s, node);
++ else
++ ___pcpu_freelist_push(this_cpu_ptr(s->freelist), node);
+ }
+
+ void pcpu_freelist_push(struct pcpu_freelist *s,
+@@ -81,7 +121,7 @@ again:
+ }
+ }
+
+-struct pcpu_freelist_node *__pcpu_freelist_pop(struct pcpu_freelist *s)
++static struct pcpu_freelist_node *___pcpu_freelist_pop(struct pcpu_freelist *s)
+ {
+ struct pcpu_freelist_head *head;
+ struct pcpu_freelist_node *node;
+@@ -102,8 +142,59 @@ struct pcpu_freelist_node *__pcpu_freelist_pop(struct pcpu_freelist *s)
+ if (cpu >= nr_cpu_ids)
+ cpu = 0;
+ if (cpu == orig_cpu)
+- return NULL;
++ break;
++ }
++
++ /* per cpu lists are all empty, try extralist */
++ raw_spin_lock(&s->extralist.lock);
++ node = s->extralist.first;
++ if (node)
++ s->extralist.first = node->next;
++ raw_spin_unlock(&s->extralist.lock);
++ return node;
++}
++
++static struct pcpu_freelist_node *
++___pcpu_freelist_pop_nmi(struct pcpu_freelist *s)
++{
++ struct pcpu_freelist_head *head;
++ struct pcpu_freelist_node *node;
++ int orig_cpu, cpu;
++
++ orig_cpu = cpu = raw_smp_processor_id();
++ while (1) {
++ head = per_cpu_ptr(s->freelist, cpu);
++ if (raw_spin_trylock(&head->lock)) {
++ node = head->first;
++ if (node) {
++ head->first = node->next;
++ raw_spin_unlock(&head->lock);
++ return node;
++ }
++ raw_spin_unlock(&head->lock);
++ }
++ cpu = cpumask_next(cpu, cpu_possible_mask);
++ if (cpu >= nr_cpu_ids)
++ cpu = 0;
++ if (cpu == orig_cpu)
++ break;
+ }
++
++ /* cannot pop from per cpu lists, try extralist */
++ if (!raw_spin_trylock(&s->extralist.lock))
++ return NULL;
++ node = s->extralist.first;
++ if (node)
++ s->extralist.first = node->next;
++ raw_spin_unlock(&s->extralist.lock);
++ return node;
++}
++
++struct pcpu_freelist_node *__pcpu_freelist_pop(struct pcpu_freelist *s)
++{
++ if (in_nmi())
++ return ___pcpu_freelist_pop_nmi(s);
++ return ___pcpu_freelist_pop(s);
+ }
+
+ struct pcpu_freelist_node *pcpu_freelist_pop(struct pcpu_freelist *s)
+diff --git a/kernel/bpf/percpu_freelist.h b/kernel/bpf/percpu_freelist.h
+index fbf8a8a289791..3c76553cfe571 100644
+--- a/kernel/bpf/percpu_freelist.h
++++ b/kernel/bpf/percpu_freelist.h
+@@ -13,6 +13,7 @@ struct pcpu_freelist_head {
+
+ struct pcpu_freelist {
+ struct pcpu_freelist_head __percpu *freelist;
++ struct pcpu_freelist_head extralist;
+ };
+
+ struct pcpu_freelist_node {
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index 89b07db146763..12eb9e47d101c 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -1470,6 +1470,10 @@ static int check_subprogs(struct bpf_verifier_env *env)
+ for (i = 0; i < insn_cnt; i++) {
+ u8 code = insn[i].code;
+
++ if (code == (BPF_JMP | BPF_CALL) &&
++ insn[i].imm == BPF_FUNC_tail_call &&
++ insn[i].src_reg != BPF_PSEUDO_CALL)
++ subprog[cur_subprog].has_tail_call = true;
+ if (BPF_CLASS(code) != BPF_JMP && BPF_CLASS(code) != BPF_JMP32)
+ goto next;
+ if (BPF_OP(code) == BPF_EXIT || BPF_OP(code) == BPF_CALL)
+@@ -2951,6 +2955,31 @@ static int check_max_stack_depth(struct bpf_verifier_env *env)
+ int ret_prog[MAX_CALL_FRAMES];
+
+ process_func:
++ /* protect against potential stack overflow that might happen when
++ * bpf2bpf calls get combined with tailcalls. Limit the caller's stack
++ * depth for such case down to 256 so that the worst case scenario
++ * would result in 8k stack size (32 which is tailcall limit * 256 =
++ * 8k).
++ *
++ * To get the idea what might happen, see an example:
++ * func1 -> sub rsp, 128
++ * subfunc1 -> sub rsp, 256
++ * tailcall1 -> add rsp, 256
++ * func2 -> sub rsp, 192 (total stack size = 128 + 192 = 320)
++ * subfunc2 -> sub rsp, 64
++ * subfunc22 -> sub rsp, 128
++ * tailcall2 -> add rsp, 128
++ * func3 -> sub rsp, 32 (total stack size 128 + 192 + 64 + 32 = 416)
++ *
++ * tailcall will unwind the current stack frame but it will not get rid
++ * of the caller's stack as shown in the example above.
++ */
++ if (idx && subprog[idx].has_tail_call && depth >= 256) {
++ verbose(env,
++ "tail_calls are not allowed when call stack of previous frames is %d bytes. Too large\n",
++ depth);
++ return -EACCES;
++ }
+ /* round up to 32-bytes, since this is granularity
+ * of interpreter stack size
+ */
+@@ -10862,6 +10891,11 @@ static int check_attach_btf_id(struct bpf_verifier_env *env)
+ }
+
+ if (prog->expected_attach_type == BPF_MODIFY_RETURN) {
++ if (tgt_prog) {
++ verbose(env, "can't modify return codes of BPF programs\n");
++ ret = -EINVAL;
++ goto out;
++ }
+ ret = check_attach_modify_return(prog, addr);
+ if (ret)
+ verbose(env, "%s() is not modifiable\n",
+diff --git a/kernel/debug/kdb/kdb_io.c b/kernel/debug/kdb/kdb_io.c
+index 683a799618ade..bc827bd547c81 100644
+--- a/kernel/debug/kdb/kdb_io.c
++++ b/kernel/debug/kdb/kdb_io.c
+@@ -706,12 +706,16 @@ int vkdb_printf(enum kdb_msgsrc src, const char *fmt, va_list ap)
+ size_avail = sizeof(kdb_buffer) - len;
+ goto kdb_print_out;
+ }
+- if (kdb_grepping_flag >= KDB_GREPPING_FLAG_SEARCH)
++ if (kdb_grepping_flag >= KDB_GREPPING_FLAG_SEARCH) {
+ /*
+ * This was a interactive search (using '/' at more
+- * prompt) and it has completed. Clear the flag.
++ * prompt) and it has completed. Replace the \0 with
++ * its original value to ensure multi-line strings
++ * are handled properly, and return to normal mode.
+ */
++ *cphold = replaced_byte;
+ kdb_grepping_flag = 0;
++ }
+ /*
+ * at this point the string is a full line and
+ * should be printed, up to the null.
+diff --git a/kernel/events/core.c b/kernel/events/core.c
+index fd8cd00099dae..38eeb297255e4 100644
+--- a/kernel/events/core.c
++++ b/kernel/events/core.c
+@@ -5852,11 +5852,11 @@ static void perf_pmu_output_stop(struct perf_event *event);
+ static void perf_mmap_close(struct vm_area_struct *vma)
+ {
+ struct perf_event *event = vma->vm_file->private_data;
+-
+ struct perf_buffer *rb = ring_buffer_get(event);
+ struct user_struct *mmap_user = rb->mmap_user;
+ int mmap_locked = rb->mmap_locked;
+ unsigned long size = perf_data_size(rb);
++ bool detach_rest = false;
+
+ if (event->pmu->event_unmapped)
+ event->pmu->event_unmapped(event, vma->vm_mm);
+@@ -5887,7 +5887,8 @@ static void perf_mmap_close(struct vm_area_struct *vma)
+ mutex_unlock(&event->mmap_mutex);
+ }
+
+- atomic_dec(&rb->mmap_count);
++ if (atomic_dec_and_test(&rb->mmap_count))
++ detach_rest = true;
+
+ if (!atomic_dec_and_mutex_lock(&event->mmap_count, &event->mmap_mutex))
+ goto out_put;
+@@ -5896,7 +5897,7 @@ static void perf_mmap_close(struct vm_area_struct *vma)
+ mutex_unlock(&event->mmap_mutex);
+
+ /* If there's still other mmap()s of this buffer, we're done. */
+- if (atomic_read(&rb->mmap_count))
++ if (!detach_rest)
+ goto out_put;
+
+ /*
+diff --git a/kernel/fork.c b/kernel/fork.c
+index efc5493203ae0..0074bbe8c66f1 100644
+--- a/kernel/fork.c
++++ b/kernel/fork.c
+@@ -1830,6 +1830,25 @@ static __always_inline void delayed_free_task(struct task_struct *tsk)
+ free_task(tsk);
+ }
+
++static void copy_oom_score_adj(u64 clone_flags, struct task_struct *tsk)
++{
++ /* Skip if kernel thread */
++ if (!tsk->mm)
++ return;
++
++ /* Skip if spawning a thread or using vfork */
++ if ((clone_flags & (CLONE_VM | CLONE_THREAD | CLONE_VFORK)) != CLONE_VM)
++ return;
++
++ /* We need to synchronize with __set_oom_adj */
++ mutex_lock(&oom_adj_mutex);
++ set_bit(MMF_MULTIPROCESS, &tsk->mm->flags);
++ /* Update the values in case they were changed after copy_signal */
++ tsk->signal->oom_score_adj = current->signal->oom_score_adj;
++ tsk->signal->oom_score_adj_min = current->signal->oom_score_adj_min;
++ mutex_unlock(&oom_adj_mutex);
++}
++
+ /*
+ * This creates a new process as a copy of the old one,
+ * but does not actually start it yet.
+@@ -2310,6 +2329,8 @@ static __latent_entropy struct task_struct *copy_process(
+ trace_task_newtask(p, clone_flags);
+ uprobe_copy_process(p, clone_flags);
+
++ copy_oom_score_adj(clone_flags, p);
++
+ return p;
+
+ bad_fork_cancel_cgroup:
+diff --git a/kernel/module.c b/kernel/module.c
+index 08c46084d8cca..991395d60f59c 100644
+--- a/kernel/module.c
++++ b/kernel/module.c
+@@ -91,8 +91,9 @@ EXPORT_SYMBOL_GPL(module_mutex);
+ static LIST_HEAD(modules);
+
+ /* Work queue for freeing init sections in success case */
+-static struct work_struct init_free_wq;
+-static struct llist_head init_free_list;
++static void do_free_init(struct work_struct *w);
++static DECLARE_WORK(init_free_wq, do_free_init);
++static LLIST_HEAD(init_free_list);
+
+ #ifdef CONFIG_MODULES_TREE_LOOKUP
+
+@@ -3551,14 +3552,6 @@ static void do_free_init(struct work_struct *w)
+ }
+ }
+
+-static int __init modules_wq_init(void)
+-{
+- INIT_WORK(&init_free_wq, do_free_init);
+- init_llist_head(&init_free_list);
+- return 0;
+-}
+-module_init(modules_wq_init);
+-
+ /*
+ * This is where the real work happens.
+ *
+diff --git a/kernel/power/hibernate.c b/kernel/power/hibernate.c
+index 02ec716a49271..0e60e10ed66a3 100644
+--- a/kernel/power/hibernate.c
++++ b/kernel/power/hibernate.c
+@@ -851,17 +851,6 @@ static int software_resume(void)
+
+ /* Check if the device is there */
+ swsusp_resume_device = name_to_dev_t(resume_file);
+-
+- /*
+- * name_to_dev_t is ineffective to verify parition if resume_file is in
+- * integer format. (e.g. major:minor)
+- */
+- if (isdigit(resume_file[0]) && resume_wait) {
+- int partno;
+- while (!get_gendisk(swsusp_resume_device, &partno))
+- msleep(10);
+- }
+-
+ if (!swsusp_resume_device) {
+ /*
+ * Some device discovery might still be in progress; we need
+diff --git a/kernel/rcu/rcutorture.c b/kernel/rcu/rcutorture.c
+index efb792e13fca9..23ec68d8ff3aa 100644
+--- a/kernel/rcu/rcutorture.c
++++ b/kernel/rcu/rcutorture.c
+@@ -2154,9 +2154,20 @@ static int __init rcu_torture_fwd_prog_init(void)
+ return -ENOMEM;
+ spin_lock_init(&rfp->rcu_fwd_lock);
+ rfp->rcu_fwd_cb_tail = &rfp->rcu_fwd_cb_head;
++ rcu_fwds = rfp;
+ return torture_create_kthread(rcu_torture_fwd_prog, rfp, fwd_prog_task);
+ }
+
++static void rcu_torture_fwd_prog_cleanup(void)
++{
++ struct rcu_fwd *rfp;
++
++ torture_stop_kthread(rcu_torture_fwd_prog, fwd_prog_task);
++ rfp = rcu_fwds;
++ rcu_fwds = NULL;
++ kfree(rfp);
++}
++
+ /* Callback function for RCU barrier testing. */
+ static void rcu_torture_barrier_cbf(struct rcu_head *rcu)
+ {
+@@ -2360,7 +2371,7 @@ rcu_torture_cleanup(void)
+
+ show_rcu_gp_kthreads();
+ rcu_torture_barrier_cleanup();
+- torture_stop_kthread(rcu_torture_fwd_prog, fwd_prog_task);
++ rcu_torture_fwd_prog_cleanup();
+ torture_stop_kthread(rcu_torture_stall, stall_task);
+ torture_stop_kthread(rcu_torture_writer, writer_task);
+
+diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
+index 1e9e500ff7906..572a79b1a8510 100644
+--- a/kernel/rcu/tree.c
++++ b/kernel/rcu/tree.c
+@@ -1882,7 +1882,7 @@ static void rcu_gp_fqs_loop(void)
+ break;
+ /* If time for quiescent-state forcing, do it. */
+ if (!time_after(rcu_state.jiffies_force_qs, jiffies) ||
+- (gf & RCU_GP_FLAG_FQS)) {
++ (gf & (RCU_GP_FLAG_FQS | RCU_GP_FLAG_OVLD))) {
+ trace_rcu_grace_period(rcu_state.name, rcu_state.gp_seq,
+ TPS("fqsstart"));
+ rcu_gp_fqs(first_gp_fqs);
+diff --git a/kernel/sched/core.c b/kernel/sched/core.c
+index f788cd61df212..1c68621743ac2 100644
+--- a/kernel/sched/core.c
++++ b/kernel/sched/core.c
+@@ -39,7 +39,7 @@ EXPORT_TRACEPOINT_SYMBOL_GPL(sched_overutilized_tp);
+
+ DEFINE_PER_CPU_SHARED_ALIGNED(struct rq, runqueues);
+
+-#if defined(CONFIG_SCHED_DEBUG) && defined(CONFIG_JUMP_LABEL)
++#ifdef CONFIG_SCHED_DEBUG
+ /*
+ * Debugging: various feature bits
+ *
+diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
+index 6b3b59cc51d6c..f3496556b6992 100644
+--- a/kernel/sched/fair.c
++++ b/kernel/sched/fair.c
+@@ -1550,7 +1550,7 @@ struct task_numa_env {
+
+ static unsigned long cpu_load(struct rq *rq);
+ static unsigned long cpu_util(int cpu);
+-static inline long adjust_numa_imbalance(int imbalance, int src_nr_running);
++static inline long adjust_numa_imbalance(int imbalance, int nr_running);
+
+ static inline enum
+ numa_type numa_classify(unsigned int imbalance_pct,
+@@ -1927,7 +1927,7 @@ static void task_numa_find_cpu(struct task_numa_env *env,
+ src_running = env->src_stats.nr_running - 1;
+ dst_running = env->dst_stats.nr_running + 1;
+ imbalance = max(0, dst_running - src_running);
+- imbalance = adjust_numa_imbalance(imbalance, src_running);
++ imbalance = adjust_numa_imbalance(imbalance, dst_running);
+
+ /* Use idle CPU if there is no imbalance */
+ if (!imbalance) {
+@@ -6067,7 +6067,7 @@ static int select_idle_core(struct task_struct *p, struct sched_domain *sd, int
+ /*
+ * Scan the local SMT mask for idle CPUs.
+ */
+-static int select_idle_smt(struct task_struct *p, int target)
++static int select_idle_smt(struct task_struct *p, struct sched_domain *sd, int target)
+ {
+ int cpu;
+
+@@ -6075,7 +6075,8 @@ static int select_idle_smt(struct task_struct *p, int target)
+ return -1;
+
+ for_each_cpu(cpu, cpu_smt_mask(target)) {
+- if (!cpumask_test_cpu(cpu, p->cpus_ptr))
++ if (!cpumask_test_cpu(cpu, p->cpus_ptr) ||
++ !cpumask_test_cpu(cpu, sched_domain_span(sd)))
+ continue;
+ if (available_idle_cpu(cpu) || sched_idle_cpu(cpu))
+ return cpu;
+@@ -6091,7 +6092,7 @@ static inline int select_idle_core(struct task_struct *p, struct sched_domain *s
+ return -1;
+ }
+
+-static inline int select_idle_smt(struct task_struct *p, int target)
++static inline int select_idle_smt(struct task_struct *p, struct sched_domain *sd, int target)
+ {
+ return -1;
+ }
+@@ -6266,7 +6267,7 @@ symmetric:
+ if ((unsigned)i < nr_cpumask_bits)
+ return i;
+
+- i = select_idle_smt(p, target);
++ i = select_idle_smt(p, sd, target);
+ if ((unsigned)i < nr_cpumask_bits)
+ return i;
+
+@@ -6586,7 +6587,8 @@ static int find_energy_efficient_cpu(struct task_struct *p, int prev_cpu)
+
+ util = cpu_util_next(cpu, p, cpu);
+ cpu_cap = capacity_of(cpu);
+- spare_cap = cpu_cap - util;
++ spare_cap = cpu_cap;
++ lsub_positive(&spare_cap, util);
+
+ /*
+ * Skip CPUs that cannot satisfy the capacity request.
+@@ -8943,7 +8945,7 @@ next_group:
+ }
+ }
+
+-static inline long adjust_numa_imbalance(int imbalance, int src_nr_running)
++static inline long adjust_numa_imbalance(int imbalance, int nr_running)
+ {
+ unsigned int imbalance_min;
+
+@@ -8952,7 +8954,7 @@ static inline long adjust_numa_imbalance(int imbalance, int src_nr_running)
+ * tasks that remain local when the source domain is almost idle.
+ */
+ imbalance_min = 2;
+- if (src_nr_running <= imbalance_min)
++ if (nr_running <= imbalance_min)
+ return 0;
+
+ return imbalance;
+diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
+index c82857e2e288a..0b1485ac19c4e 100644
+--- a/kernel/sched/sched.h
++++ b/kernel/sched/sched.h
+@@ -1600,7 +1600,7 @@ enum {
+
+ #undef SCHED_FEAT
+
+-#if defined(CONFIG_SCHED_DEBUG) && defined(CONFIG_JUMP_LABEL)
++#ifdef CONFIG_SCHED_DEBUG
+
+ /*
+ * To support run-time toggling of sched features, all the translation units
+@@ -1608,6 +1608,7 @@ enum {
+ */
+ extern const_debug unsigned int sysctl_sched_features;
+
++#ifdef CONFIG_JUMP_LABEL
+ #define SCHED_FEAT(name, enabled) \
+ static __always_inline bool static_branch_##name(struct static_key *key) \
+ { \
+@@ -1620,7 +1621,13 @@ static __always_inline bool static_branch_##name(struct static_key *key) \
+ extern struct static_key sched_feat_keys[__SCHED_FEAT_NR];
+ #define sched_feat(x) (static_branch_##x(&sched_feat_keys[__SCHED_FEAT_##x]))
+
+-#else /* !(SCHED_DEBUG && CONFIG_JUMP_LABEL) */
++#else /* !CONFIG_JUMP_LABEL */
++
++#define sched_feat(x) (sysctl_sched_features & (1UL << __SCHED_FEAT_##x))
++
++#endif /* CONFIG_JUMP_LABEL */
++
++#else /* !SCHED_DEBUG */
+
+ /*
+ * Each translation unit has its own copy of sysctl_sched_features to allow
+@@ -1636,7 +1643,7 @@ static const_debug __maybe_unused unsigned int sysctl_sched_features =
+
+ #define sched_feat(x) !!(sysctl_sched_features & (1UL << __SCHED_FEAT_##x))
+
+-#endif /* SCHED_DEBUG && CONFIG_JUMP_LABEL */
++#endif /* SCHED_DEBUG */
+
+ extern struct static_key_false sched_numa_balancing;
+ extern struct static_key_false sched_schedstats;
+diff --git a/kernel/trace/trace_events_synth.c b/kernel/trace/trace_events_synth.c
+index c6cca0d1d5840..c8892156db341 100644
+--- a/kernel/trace/trace_events_synth.c
++++ b/kernel/trace/trace_events_synth.c
+@@ -132,7 +132,7 @@ static int synth_field_string_size(char *type)
+ start += sizeof("char[") - 1;
+
+ end = strchr(type, ']');
+- if (!end || end < start)
++ if (!end || end < start || type + strlen(type) > end + 1)
+ return -EINVAL;
+
+ len = end - start;
+@@ -465,6 +465,7 @@ static struct synth_field *parse_synth_field(int argc, const char **argv,
+ struct synth_field *field;
+ const char *prefix = NULL, *field_type = argv[0], *field_name, *array;
+ int len, ret = 0;
++ ssize_t size;
+
+ if (field_type[0] == ';')
+ field_type++;
+@@ -501,8 +502,14 @@ static struct synth_field *parse_synth_field(int argc, const char **argv,
+ if (field_type[0] == ';')
+ field_type++;
+ len = strlen(field_type) + 1;
+- if (array)
+- len += strlen(array);
++
++ if (array) {
++ int l = strlen(array);
++
++ if (l && array[l - 1] == ';')
++ l--;
++ len += l;
++ }
+ if (prefix)
+ len += strlen(prefix);
+
+@@ -520,11 +527,12 @@ static struct synth_field *parse_synth_field(int argc, const char **argv,
+ field->type[len - 1] = '\0';
+ }
+
+- field->size = synth_field_size(field->type);
+- if (!field->size) {
++ size = synth_field_size(field->type);
++ if (size <= 0) {
+ ret = -EINVAL;
+ goto free;
+ }
++ field->size = size;
+
+ if (synth_field_is_string(field->type))
+ field->is_string = true;
+diff --git a/lib/crc32.c b/lib/crc32.c
+index 4a20455d1f61e..bf60ef26a45c2 100644
+--- a/lib/crc32.c
++++ b/lib/crc32.c
+@@ -331,7 +331,7 @@ static inline u32 __pure crc32_be_generic(u32 crc, unsigned char const *p,
+ return crc;
+ }
+
+-#if CRC_LE_BITS == 1
++#if CRC_BE_BITS == 1
+ u32 __pure crc32_be(u32 crc, unsigned char const *p, size_t len)
+ {
+ return crc32_be_generic(crc, p, len, NULL, CRC32_POLY_BE);
+diff --git a/lib/idr.c b/lib/idr.c
+index c2cf2c52bbde5..4d2eef0259d2c 100644
+--- a/lib/idr.c
++++ b/lib/idr.c
+@@ -470,6 +470,7 @@ alloc:
+ goto retry;
+ nospc:
+ xas_unlock_irqrestore(&xas, flags);
++ kfree(alloc);
+ return -ENOSPC;
+ }
+ EXPORT_SYMBOL(ida_alloc_range);
+diff --git a/mm/filemap.c b/mm/filemap.c
+index 385759c4ce4be..6c3b879116212 100644
+--- a/mm/filemap.c
++++ b/mm/filemap.c
+@@ -826,10 +826,10 @@ int replace_page_cache_page(struct page *old, struct page *new, gfp_t gfp_mask)
+ }
+ EXPORT_SYMBOL_GPL(replace_page_cache_page);
+
+-static int __add_to_page_cache_locked(struct page *page,
+- struct address_space *mapping,
+- pgoff_t offset, gfp_t gfp_mask,
+- void **shadowp)
++noinline int __add_to_page_cache_locked(struct page *page,
++ struct address_space *mapping,
++ pgoff_t offset, gfp_t gfp_mask,
++ void **shadowp)
+ {
+ XA_STATE(xas, &mapping->i_pages, offset);
+ int huge = PageHuge(page);
+diff --git a/mm/huge_memory.c b/mm/huge_memory.c
+index 74300e337c3c7..358403422104b 100644
+--- a/mm/huge_memory.c
++++ b/mm/huge_memory.c
+@@ -2449,7 +2449,7 @@ static void __split_huge_page(struct page *page, struct list_head *list,
+
+ ClearPageCompound(head);
+
+- split_page_owner(head, HPAGE_PMD_ORDER);
++ split_page_owner(head, HPAGE_PMD_NR);
+
+ /* See comment in __split_huge_page_tail() */
+ if (PageAnon(head)) {
+diff --git a/mm/memcontrol.c b/mm/memcontrol.c
+index 13f559af1ab6a..6795bdf662566 100644
+--- a/mm/memcontrol.c
++++ b/mm/memcontrol.c
+@@ -5276,7 +5276,7 @@ static struct page *mc_handle_swap_pte(struct vm_area_struct *vma,
+ struct page *page = NULL;
+ swp_entry_t ent = pte_to_swp_entry(ptent);
+
+- if (!(mc.flags & MOVE_ANON) || non_swap_entry(ent))
++ if (!(mc.flags & MOVE_ANON))
+ return NULL;
+
+ /*
+@@ -5295,6 +5295,9 @@ static struct page *mc_handle_swap_pte(struct vm_area_struct *vma,
+ return page;
+ }
+
++ if (non_swap_entry(ent))
++ return NULL;
++
+ /*
+ * Because lookup_swap_cache() updates some statistics counter,
+ * we call find_get_page() with swapper_space directly.
+diff --git a/mm/oom_kill.c b/mm/oom_kill.c
+index 6e94962893ee8..67e5bb0900b37 100644
+--- a/mm/oom_kill.c
++++ b/mm/oom_kill.c
+@@ -64,6 +64,8 @@ int sysctl_oom_dump_tasks = 1;
+ * and mark_oom_victim
+ */
+ DEFINE_MUTEX(oom_lock);
++/* Serializes oom_score_adj and oom_score_adj_min updates */
++DEFINE_MUTEX(oom_adj_mutex);
+
+ static inline bool is_memcg_oom(struct oom_control *oc)
+ {
+diff --git a/mm/page_alloc.c b/mm/page_alloc.c
+index 43f6d91f57156..8cc774340d490 100644
+--- a/mm/page_alloc.c
++++ b/mm/page_alloc.c
+@@ -3213,7 +3213,7 @@ void split_page(struct page *page, unsigned int order)
+
+ for (i = 1; i < (1 << order); i++)
+ set_page_refcounted(page + i);
+- split_page_owner(page, order);
++ split_page_owner(page, 1 << order);
+ }
+ EXPORT_SYMBOL_GPL(split_page);
+
+@@ -3487,7 +3487,7 @@ static inline bool __should_fail_alloc_page(gfp_t gfp_mask, unsigned int order)
+
+ #endif /* CONFIG_FAIL_PAGE_ALLOC */
+
+-static noinline bool should_fail_alloc_page(gfp_t gfp_mask, unsigned int order)
++noinline bool should_fail_alloc_page(gfp_t gfp_mask, unsigned int order)
+ {
+ return __should_fail_alloc_page(gfp_mask, order);
+ }
+diff --git a/mm/page_owner.c b/mm/page_owner.c
+index 3604615094235..4ca3051a10358 100644
+--- a/mm/page_owner.c
++++ b/mm/page_owner.c
+@@ -204,7 +204,7 @@ void __set_page_owner_migrate_reason(struct page *page, int reason)
+ page_owner->last_migrate_reason = reason;
+ }
+
+-void __split_page_owner(struct page *page, unsigned int order)
++void __split_page_owner(struct page *page, unsigned int nr)
+ {
+ int i;
+ struct page_ext *page_ext = lookup_page_ext(page);
+@@ -213,7 +213,7 @@ void __split_page_owner(struct page *page, unsigned int order)
+ if (unlikely(!page_ext))
+ return;
+
+- for (i = 0; i < (1 << order); i++) {
++ for (i = 0; i < nr; i++) {
+ page_owner = get_page_owner(page_ext);
+ page_owner->order = 0;
+ page_ext = page_ext_next(page_ext);
+diff --git a/mm/swapfile.c b/mm/swapfile.c
+index 26707c5dc9fce..605294e4df684 100644
+--- a/mm/swapfile.c
++++ b/mm/swapfile.c
+@@ -3336,7 +3336,7 @@ SYSCALL_DEFINE2(swapon, const char __user *, specialfile, int, swap_flags)
+ error = inode_drain_writes(inode);
+ if (error) {
+ inode->i_flags &= ~S_SWAPFILE;
+- goto bad_swap_unlock_inode;
++ goto free_swap_address_space;
+ }
+
+ mutex_lock(&swapon_mutex);
+@@ -3361,6 +3361,8 @@ SYSCALL_DEFINE2(swapon, const char __user *, specialfile, int, swap_flags)
+
+ error = 0;
+ goto out;
++free_swap_address_space:
++ exit_swap_address_space(p->type);
+ bad_swap_unlock_inode:
+ inode_unlock(inode);
+ bad_swap:
+diff --git a/net/bluetooth/hci_core.c b/net/bluetooth/hci_core.c
+index fc28dc201b936..131d29e902a30 100644
+--- a/net/bluetooth/hci_core.c
++++ b/net/bluetooth/hci_core.c
+@@ -3280,6 +3280,16 @@ void hci_copy_identity_address(struct hci_dev *hdev, bdaddr_t *bdaddr,
+ }
+ }
+
++static void hci_suspend_clear_tasks(struct hci_dev *hdev)
++{
++ int i;
++
++ for (i = 0; i < __SUSPEND_NUM_TASKS; i++)
++ clear_bit(i, hdev->suspend_tasks);
++
++ wake_up(&hdev->suspend_wait_q);
++}
++
+ static int hci_suspend_wait_event(struct hci_dev *hdev)
+ {
+ #define WAKE_COND \
+@@ -3608,6 +3618,7 @@ void hci_unregister_dev(struct hci_dev *hdev)
+
+ cancel_work_sync(&hdev->power_on);
+
++ hci_suspend_clear_tasks(hdev);
+ unregister_pm_notifier(&hdev->suspend_notifier);
+ cancel_work_sync(&hdev->suspend_prepare);
+
+diff --git a/net/bluetooth/l2cap_sock.c b/net/bluetooth/l2cap_sock.c
+index c7fc28a465fdb..fa66e27b73635 100644
+--- a/net/bluetooth/l2cap_sock.c
++++ b/net/bluetooth/l2cap_sock.c
+@@ -1521,8 +1521,6 @@ static void l2cap_sock_teardown_cb(struct l2cap_chan *chan, int err)
+
+ parent = bt_sk(sk)->parent;
+
+- sock_set_flag(sk, SOCK_ZAPPED);
+-
+ switch (chan->state) {
+ case BT_OPEN:
+ case BT_BOUND:
+@@ -1549,8 +1547,11 @@ static void l2cap_sock_teardown_cb(struct l2cap_chan *chan, int err)
+
+ break;
+ }
+-
+ release_sock(sk);
++
++ /* Only zap after cleanup to avoid use after free race */
++ sock_set_flag(sk, SOCK_ZAPPED);
++
+ }
+
+ static void l2cap_sock_state_change_cb(struct l2cap_chan *chan, int state,
+diff --git a/net/bridge/netfilter/ebt_dnat.c b/net/bridge/netfilter/ebt_dnat.c
+index 12a4f4d936810..3fda71a8579d1 100644
+--- a/net/bridge/netfilter/ebt_dnat.c
++++ b/net/bridge/netfilter/ebt_dnat.c
+@@ -21,7 +21,7 @@ ebt_dnat_tg(struct sk_buff *skb, const struct xt_action_param *par)
+ {
+ const struct ebt_nat_info *info = par->targinfo;
+
+- if (skb_ensure_writable(skb, ETH_ALEN))
++ if (skb_ensure_writable(skb, 0))
+ return EBT_DROP;
+
+ ether_addr_copy(eth_hdr(skb)->h_dest, info->mac);
+diff --git a/net/bridge/netfilter/ebt_redirect.c b/net/bridge/netfilter/ebt_redirect.c
+index 0cad62a4052b9..307790562b492 100644
+--- a/net/bridge/netfilter/ebt_redirect.c
++++ b/net/bridge/netfilter/ebt_redirect.c
+@@ -21,7 +21,7 @@ ebt_redirect_tg(struct sk_buff *skb, const struct xt_action_param *par)
+ {
+ const struct ebt_redirect_info *info = par->targinfo;
+
+- if (skb_ensure_writable(skb, ETH_ALEN))
++ if (skb_ensure_writable(skb, 0))
+ return EBT_DROP;
+
+ if (xt_hooknum(par) != NF_BR_BROUTING)
+diff --git a/net/bridge/netfilter/ebt_snat.c b/net/bridge/netfilter/ebt_snat.c
+index 27443bf229a3b..7dfbcdfc30e5d 100644
+--- a/net/bridge/netfilter/ebt_snat.c
++++ b/net/bridge/netfilter/ebt_snat.c
+@@ -22,7 +22,7 @@ ebt_snat_tg(struct sk_buff *skb, const struct xt_action_param *par)
+ {
+ const struct ebt_nat_info *info = par->targinfo;
+
+- if (skb_ensure_writable(skb, ETH_ALEN * 2))
++ if (skb_ensure_writable(skb, 0))
+ return EBT_DROP;
+
+ ether_addr_copy(eth_hdr(skb)->h_source, info->mac);
+diff --git a/net/can/j1939/transport.c b/net/can/j1939/transport.c
+index a8dd956b5e8e1..916fdf2464bc2 100644
+--- a/net/can/j1939/transport.c
++++ b/net/can/j1939/transport.c
+@@ -580,6 +580,7 @@ sk_buff *j1939_tp_tx_dat_new(struct j1939_priv *priv,
+ skb->dev = priv->ndev;
+ can_skb_reserve(skb);
+ can_skb_prv(skb)->ifindex = priv->ndev->ifindex;
++ can_skb_prv(skb)->skbcnt = 0;
+ /* reserve CAN header */
+ skb_reserve(skb, offsetof(struct can_frame, data));
+
+@@ -1487,6 +1488,7 @@ j1939_session *j1939_session_fresh_new(struct j1939_priv *priv,
+ skb->dev = priv->ndev;
+ can_skb_reserve(skb);
+ can_skb_prv(skb)->ifindex = priv->ndev->ifindex;
++ can_skb_prv(skb)->skbcnt = 0;
+ skcb = j1939_skb_to_cb(skb);
+ memcpy(skcb, rel_skcb, sizeof(*skcb));
+
+diff --git a/net/core/filter.c b/net/core/filter.c
+index 0261531d4fda6..3e4de9e461bd0 100644
+--- a/net/core/filter.c
++++ b/net/core/filter.c
+@@ -4323,7 +4323,8 @@ static int _bpf_setsockopt(struct sock *sk, int level, int optname,
+ cmpxchg(&sk->sk_pacing_status,
+ SK_PACING_NONE,
+ SK_PACING_NEEDED);
+- sk->sk_max_pacing_rate = (val == ~0U) ? ~0UL : val;
++ sk->sk_max_pacing_rate = (val == ~0U) ?
++ ~0UL : (unsigned int)val;
+ sk->sk_pacing_rate = min(sk->sk_pacing_rate,
+ sk->sk_max_pacing_rate);
+ break;
+diff --git a/net/core/skmsg.c b/net/core/skmsg.c
+index 6a32a1fd34f8c..053472c48354b 100644
+--- a/net/core/skmsg.c
++++ b/net/core/skmsg.c
+@@ -662,15 +662,16 @@ static int sk_psock_bpf_run(struct sk_psock *psock, struct bpf_prog *prog,
+ {
+ int ret;
+
++ /* strparser clones the skb before handing it to an upper layer,
++ * meaning we have the same data, but sk is NULL. We do want an
++ * sk pointer though when we run the BPF program. So we set it
++ * here and then NULL it to ensure we don't trigger a BUG_ON()
++ * in skb/sk operations later if kfree_skb is called with a
++ * valid skb->sk pointer and no destructor assigned.
++ */
+ skb->sk = psock->sk;
+ bpf_compute_data_end_sk_skb(skb);
+ ret = bpf_prog_run_pin_on_cpu(prog, skb);
+- /* strparser clones the skb before handing it to a upper layer,
+- * meaning skb_orphan has been called. We NULL sk on the way out
+- * to ensure we don't trigger a BUG_ON() in skb/sk operations
+- * later and because we are not charging the memory of this skb
+- * to any socket yet.
+- */
+ skb->sk = NULL;
+ return ret;
+ }
+@@ -795,7 +796,6 @@ static void sk_psock_strp_read(struct strparser *strp, struct sk_buff *skb)
+ }
+ prog = READ_ONCE(psock->progs.skb_verdict);
+ if (likely(prog)) {
+- skb_orphan(skb);
+ tcp_skb_bpf_redirect_clear(skb);
+ ret = sk_psock_bpf_run(psock, prog, skb);
+ ret = sk_psock_map_verd(ret, tcp_skb_bpf_redirect_fetch(skb));
+diff --git a/net/core/sock.c b/net/core/sock.c
+index 78f8736be9c50..25968369fe7f6 100644
+--- a/net/core/sock.c
++++ b/net/core/sock.c
+@@ -777,7 +777,6 @@ static void __sock_set_timestamps(struct sock *sk, bool val, bool new, bool ns)
+ } else {
+ sock_reset_flag(sk, SOCK_RCVTSTAMP);
+ sock_reset_flag(sk, SOCK_RCVTSTAMPNS);
+- sock_reset_flag(sk, SOCK_TSTAMP_NEW);
+ }
+ }
+
+@@ -1007,8 +1006,6 @@ set_sndbuf:
+ __sock_set_timestamps(sk, valbool, true, true);
+ break;
+ case SO_TIMESTAMPING_NEW:
+- sock_set_flag(sk, SOCK_TSTAMP_NEW);
+- /* fall through */
+ case SO_TIMESTAMPING_OLD:
+ if (val & ~SOF_TIMESTAMPING_MASK) {
+ ret = -EINVAL;
+@@ -1037,16 +1034,14 @@ set_sndbuf:
+ }
+
+ sk->sk_tsflags = val;
++ sock_valbool_flag(sk, SOCK_TSTAMP_NEW, optname == SO_TIMESTAMPING_NEW);
++
+ if (val & SOF_TIMESTAMPING_RX_SOFTWARE)
+ sock_enable_timestamp(sk,
+ SOCK_TIMESTAMPING_RX_SOFTWARE);
+- else {
+- if (optname == SO_TIMESTAMPING_NEW)
+- sock_reset_flag(sk, SOCK_TSTAMP_NEW);
+-
++ else
+ sock_disable_timestamp(sk,
+ (1UL << SOCK_TIMESTAMPING_RX_SOFTWARE));
+- }
+ break;
+
+ case SO_RCVLOWAT:
+@@ -1189,7 +1184,7 @@ set_sndbuf:
+
+ case SO_MAX_PACING_RATE:
+ {
+- unsigned long ulval = (val == ~0U) ? ~0UL : val;
++ unsigned long ulval = (val == ~0U) ? ~0UL : (unsigned int)val;
+
+ if (sizeof(ulval) != sizeof(val) &&
+ optlen >= sizeof(ulval) &&
+diff --git a/net/ipv4/icmp.c b/net/ipv4/icmp.c
+index e30515f898023..70a505a713a56 100644
+--- a/net/ipv4/icmp.c
++++ b/net/ipv4/icmp.c
+@@ -239,7 +239,7 @@ static struct {
+ /**
+ * icmp_global_allow - Are we allowed to send one more ICMP message ?
+ *
+- * Uses a token bucket to limit our ICMP messages to sysctl_icmp_msgs_per_sec.
++ * Uses a token bucket to limit our ICMP messages to ~sysctl_icmp_msgs_per_sec.
+ * Returns false if we reached the limit and can not send another packet.
+ * Note: called with BH disabled
+ */
+@@ -267,7 +267,10 @@ bool icmp_global_allow(void)
+ }
+ credit = min_t(u32, icmp_global.credit + incr, sysctl_icmp_msgs_burst);
+ if (credit) {
+- credit--;
++ /* We want to use a credit of one on average, but need to randomize
++ * it for security reasons.
++ */
++ credit = max_t(int, credit - prandom_u32_max(3), 0);
+ rc = true;
+ }
+ WRITE_ONCE(icmp_global.credit, credit);
+diff --git a/net/ipv4/ip_gre.c b/net/ipv4/ip_gre.c
+index 4e31f23e4117e..e70291748889b 100644
+--- a/net/ipv4/ip_gre.c
++++ b/net/ipv4/ip_gre.c
+@@ -625,9 +625,7 @@ static netdev_tx_t ipgre_xmit(struct sk_buff *skb,
+ }
+
+ if (dev->header_ops) {
+- /* Need space for new headers */
+- if (skb_cow_head(skb, dev->needed_headroom -
+- (tunnel->hlen + sizeof(struct iphdr))))
++ if (skb_cow_head(skb, 0))
+ goto free_skb;
+
+ tnl_params = (const struct iphdr *)skb->data;
+@@ -748,7 +746,11 @@ static void ipgre_link_update(struct net_device *dev, bool set_mtu)
+ len = tunnel->tun_hlen - len;
+ tunnel->hlen = tunnel->hlen + len;
+
+- dev->needed_headroom = dev->needed_headroom + len;
++ if (dev->header_ops)
++ dev->hard_header_len += len;
++ else
++ dev->needed_headroom += len;
++
+ if (set_mtu)
+ dev->mtu = max_t(int, dev->mtu - len, 68);
+
+@@ -944,6 +946,7 @@ static void __gre_tunnel_init(struct net_device *dev)
+ tunnel->parms.iph.protocol = IPPROTO_GRE;
+
+ tunnel->hlen = tunnel->tun_hlen + tunnel->encap_hlen;
++ dev->needed_headroom = tunnel->hlen + sizeof(tunnel->parms.iph);
+
+ dev->features |= GRE_FEATURES;
+ dev->hw_features |= GRE_FEATURES;
+@@ -987,10 +990,14 @@ static int ipgre_tunnel_init(struct net_device *dev)
+ return -EINVAL;
+ dev->flags = IFF_BROADCAST;
+ dev->header_ops = &ipgre_header_ops;
++ dev->hard_header_len = tunnel->hlen + sizeof(*iph);
++ dev->needed_headroom = 0;
+ }
+ #endif
+ } else if (!tunnel->collect_md) {
+ dev->header_ops = &ipgre_header_ops;
++ dev->hard_header_len = tunnel->hlen + sizeof(*iph);
++ dev->needed_headroom = 0;
+ }
+
+ return ip_tunnel_init(dev);
+diff --git a/net/ipv4/netfilter/nf_log_arp.c b/net/ipv4/netfilter/nf_log_arp.c
+index 7a83f881efa9e..136030ad2e546 100644
+--- a/net/ipv4/netfilter/nf_log_arp.c
++++ b/net/ipv4/netfilter/nf_log_arp.c
+@@ -43,16 +43,31 @@ static void dump_arp_packet(struct nf_log_buf *m,
+ const struct nf_loginfo *info,
+ const struct sk_buff *skb, unsigned int nhoff)
+ {
+- const struct arphdr *ah;
+- struct arphdr _arph;
+ const struct arppayload *ap;
+ struct arppayload _arpp;
++ const struct arphdr *ah;
++ unsigned int logflags;
++ struct arphdr _arph;
+
+ ah = skb_header_pointer(skb, 0, sizeof(_arph), &_arph);
+ if (ah == NULL) {
+ nf_log_buf_add(m, "TRUNCATED");
+ return;
+ }
++
++ if (info->type == NF_LOG_TYPE_LOG)
++ logflags = info->u.log.logflags;
++ else
++ logflags = NF_LOG_DEFAULT_MASK;
++
++ if (logflags & NF_LOG_MACDECODE) {
++ nf_log_buf_add(m, "MACSRC=%pM MACDST=%pM ",
++ eth_hdr(skb)->h_source, eth_hdr(skb)->h_dest);
++ nf_log_dump_vlan(m, skb);
++ nf_log_buf_add(m, "MACPROTO=%04x ",
++ ntohs(eth_hdr(skb)->h_proto));
++ }
++
+ nf_log_buf_add(m, "ARP HTYPE=%d PTYPE=0x%04x OPCODE=%d",
+ ntohs(ah->ar_hrd), ntohs(ah->ar_pro), ntohs(ah->ar_op));
+
+diff --git a/net/ipv4/netfilter/nf_log_ipv4.c b/net/ipv4/netfilter/nf_log_ipv4.c
+index 0c72156130b68..d07583fac8f8c 100644
+--- a/net/ipv4/netfilter/nf_log_ipv4.c
++++ b/net/ipv4/netfilter/nf_log_ipv4.c
+@@ -284,8 +284,10 @@ static void dump_ipv4_mac_header(struct nf_log_buf *m,
+
+ switch (dev->type) {
+ case ARPHRD_ETHER:
+- nf_log_buf_add(m, "MACSRC=%pM MACDST=%pM MACPROTO=%04x ",
+- eth_hdr(skb)->h_source, eth_hdr(skb)->h_dest,
++ nf_log_buf_add(m, "MACSRC=%pM MACDST=%pM ",
++ eth_hdr(skb)->h_source, eth_hdr(skb)->h_dest);
++ nf_log_dump_vlan(m, skb);
++ nf_log_buf_add(m, "MACPROTO=%04x ",
+ ntohs(eth_hdr(skb)->h_proto));
+ return;
+ default:
+diff --git a/net/ipv4/nexthop.c b/net/ipv4/nexthop.c
+index 134e923822750..355c4499fa1b5 100644
+--- a/net/ipv4/nexthop.c
++++ b/net/ipv4/nexthop.c
+@@ -842,7 +842,7 @@ static void remove_nexthop_from_groups(struct net *net, struct nexthop *nh,
+ remove_nh_grp_entry(net, nhge, nlinfo);
+
+ /* make sure all see the newly published array before releasing rtnl */
+- synchronize_rcu();
++ synchronize_net();
+ }
+
+ static void remove_nexthop_group(struct nexthop *nh, struct nl_info *nlinfo)
+diff --git a/net/ipv4/route.c b/net/ipv4/route.c
+index 37f1288894747..71a9b11b7126d 100644
+--- a/net/ipv4/route.c
++++ b/net/ipv4/route.c
+@@ -2764,10 +2764,12 @@ struct rtable *ip_route_output_flow(struct net *net, struct flowi4 *flp4,
+ if (IS_ERR(rt))
+ return rt;
+
+- if (flp4->flowi4_proto)
++ if (flp4->flowi4_proto) {
++ flp4->flowi4_oif = rt->dst.dev->ifindex;
+ rt = (struct rtable *)xfrm_lookup_route(net, &rt->dst,
+ flowi4_to_flowi(flp4),
+ sk, 0);
++ }
+
+ return rt;
+ }
+diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
+index 518f04355fbf3..02cc972edd0b0 100644
+--- a/net/ipv4/tcp_input.c
++++ b/net/ipv4/tcp_input.c
+@@ -5716,6 +5716,8 @@ void tcp_rcv_established(struct sock *sk, struct sk_buff *skb)
+ tcp_data_snd_check(sk);
+ if (!inet_csk_ack_scheduled(sk))
+ goto no_ack;
++ } else {
++ tcp_update_wl(tp, TCP_SKB_CB(skb)->seq);
+ }
+
+ __tcp_ack_snd_check(sk, 0);
+diff --git a/net/ipv6/ip6_fib.c b/net/ipv6/ip6_fib.c
+index 3c32dcb5fd8e2..c0a0d41b6c37d 100644
+--- a/net/ipv6/ip6_fib.c
++++ b/net/ipv6/ip6_fib.c
+@@ -2617,8 +2617,10 @@ static void *ipv6_route_seq_start(struct seq_file *seq, loff_t *pos)
+ iter->skip = *pos;
+
+ if (iter->tbl) {
++ loff_t p = 0;
++
+ ipv6_route_seq_setup_walk(iter, net);
+- return ipv6_route_seq_next(seq, NULL, pos);
++ return ipv6_route_seq_next(seq, NULL, &p);
+ } else {
+ return NULL;
+ }
+diff --git a/net/ipv6/netfilter/nf_log_ipv6.c b/net/ipv6/netfilter/nf_log_ipv6.c
+index da64550a57075..8210ff34ed9b7 100644
+--- a/net/ipv6/netfilter/nf_log_ipv6.c
++++ b/net/ipv6/netfilter/nf_log_ipv6.c
+@@ -297,9 +297,11 @@ static void dump_ipv6_mac_header(struct nf_log_buf *m,
+
+ switch (dev->type) {
+ case ARPHRD_ETHER:
+- nf_log_buf_add(m, "MACSRC=%pM MACDST=%pM MACPROTO=%04x ",
+- eth_hdr(skb)->h_source, eth_hdr(skb)->h_dest,
+- ntohs(eth_hdr(skb)->h_proto));
++ nf_log_buf_add(m, "MACSRC=%pM MACDST=%pM ",
++ eth_hdr(skb)->h_source, eth_hdr(skb)->h_dest);
++ nf_log_dump_vlan(m, skb);
++ nf_log_buf_add(m, "MACPROTO=%04x ",
++ ntohs(eth_hdr(skb)->h_proto));
+ return;
+ default:
+ break;
+diff --git a/net/mac80211/cfg.c b/net/mac80211/cfg.c
+index 1079a07e43e49..d74cfec685477 100644
+--- a/net/mac80211/cfg.c
++++ b/net/mac80211/cfg.c
+@@ -709,7 +709,8 @@ void sta_set_rate_info_tx(struct sta_info *sta,
+ u16 brate;
+
+ sband = ieee80211_get_sband(sta->sdata);
+- if (sband) {
++ WARN_ON_ONCE(sband && !sband->bitrates);
++ if (sband && sband->bitrates) {
+ brate = sband->bitrates[rate->idx].bitrate;
+ rinfo->legacy = DIV_ROUND_UP(brate, 1 << shift);
+ }
+diff --git a/net/mac80211/sta_info.c b/net/mac80211/sta_info.c
+index 05e966f1609e2..b93916c382cdb 100644
+--- a/net/mac80211/sta_info.c
++++ b/net/mac80211/sta_info.c
+@@ -2122,6 +2122,10 @@ static void sta_stats_decode_rate(struct ieee80211_local *local, u32 rate,
+ int rate_idx = STA_STATS_GET(LEGACY_IDX, rate);
+
+ sband = local->hw.wiphy->bands[band];
++
++ if (WARN_ON_ONCE(!sband->bitrates))
++ break;
++
+ brate = sband->bitrates[rate_idx].bitrate;
+ if (rinfo->bw == RATE_INFO_BW_5)
+ shift = 2;
+diff --git a/net/mptcp/options.c b/net/mptcp/options.c
+index 8f940be42f98a..430a9213a7bf9 100644
+--- a/net/mptcp/options.c
++++ b/net/mptcp/options.c
+@@ -296,6 +296,7 @@ void mptcp_get_options(const struct sk_buff *skb,
+ mp_opt->mp_capable = 0;
+ mp_opt->mp_join = 0;
+ mp_opt->add_addr = 0;
++ mp_opt->ahmac = 0;
+ mp_opt->rm_addr = 0;
+ mp_opt->dss = 0;
+
+@@ -517,7 +518,7 @@ static bool mptcp_established_options_dss(struct sock *sk, struct sk_buff *skb,
+ return ret;
+ }
+
+- if (subflow->use_64bit_ack) {
++ if (READ_ONCE(msk->use_64bit_ack)) {
+ ack_size = TCPOLEN_MPTCP_DSS_ACK64;
+ opts->ext_copy.data_ack = msk->ack_seq;
+ opts->ext_copy.ack64 = 1;
+diff --git a/net/mptcp/protocol.h b/net/mptcp/protocol.h
+index c6eeaf3e8dcb7..4675a7bbebb15 100644
+--- a/net/mptcp/protocol.h
++++ b/net/mptcp/protocol.h
+@@ -199,6 +199,7 @@ struct mptcp_sock {
+ u32 token;
+ unsigned long flags;
+ bool can_ack;
++ bool use_64bit_ack; /* Set when we received a 64-bit DSN */
+ spinlock_t join_list_lock;
+ struct work_struct work;
+ struct list_head conn_list;
+@@ -285,7 +286,6 @@ struct mptcp_subflow_context {
+ data_avail : 1,
+ rx_eof : 1,
+ data_fin_tx_enable : 1,
+- use_64bit_ack : 1, /* Set when we received a 64-bit DSN */
+ can_ack : 1; /* only after processing the remote a key */
+ u64 data_fin_tx_seq;
+ u32 remote_nonce;
+diff --git a/net/mptcp/subflow.c b/net/mptcp/subflow.c
+index 3838a0b3a21ff..2e145b53b81f4 100644
+--- a/net/mptcp/subflow.c
++++ b/net/mptcp/subflow.c
+@@ -682,12 +682,11 @@ static enum mapping_status get_mapping_status(struct sock *ssk)
+ if (!mpext->dsn64) {
+ map_seq = expand_seq(subflow->map_seq, subflow->map_data_len,
+ mpext->data_seq);
+- subflow->use_64bit_ack = 0;
+ pr_debug("expanded seq=%llu", subflow->map_seq);
+ } else {
+ map_seq = mpext->data_seq;
+- subflow->use_64bit_ack = 1;
+ }
++ WRITE_ONCE(mptcp_sk(subflow->conn)->use_64bit_ack, !!mpext->dsn64);
+
+ if (subflow->map_valid) {
+ /* Allow replacing only with an identical map */
+diff --git a/net/netfilter/ipvs/ip_vs_ctl.c b/net/netfilter/ipvs/ip_vs_ctl.c
+index 412656c34f205..beeafa42aad76 100644
+--- a/net/netfilter/ipvs/ip_vs_ctl.c
++++ b/net/netfilter/ipvs/ip_vs_ctl.c
+@@ -2471,6 +2471,10 @@ do_ip_vs_set_ctl(struct sock *sk, int cmd, void __user *user, unsigned int len)
+ /* Set timeout values for (tcp tcpfin udp) */
+ ret = ip_vs_set_timeout(ipvs, (struct ip_vs_timeout_user *)arg);
+ goto out_unlock;
++ } else if (!len) {
++ /* No more commands with len == 0 below */
++ ret = -EINVAL;
++ goto out_unlock;
+ }
+
+ usvc_compat = (struct ip_vs_service_user *)arg;
+@@ -2547,9 +2551,6 @@ do_ip_vs_set_ctl(struct sock *sk, int cmd, void __user *user, unsigned int len)
+ break;
+ case IP_VS_SO_SET_DELDEST:
+ ret = ip_vs_del_dest(svc, &udest);
+- break;
+- default:
+- ret = -EINVAL;
+ }
+
+ out_unlock:
+diff --git a/net/netfilter/ipvs/ip_vs_xmit.c b/net/netfilter/ipvs/ip_vs_xmit.c
+index b00866d777fe0..d2e5a8f644b80 100644
+--- a/net/netfilter/ipvs/ip_vs_xmit.c
++++ b/net/netfilter/ipvs/ip_vs_xmit.c
+@@ -609,6 +609,8 @@ static inline int ip_vs_tunnel_xmit_prepare(struct sk_buff *skb,
+ if (ret == NF_ACCEPT) {
+ nf_reset_ct(skb);
+ skb_forward_csum(skb);
++ if (skb->dev)
++ skb->tstamp = 0;
+ }
+ return ret;
+ }
+@@ -649,6 +651,8 @@ static inline int ip_vs_nat_send_or_cont(int pf, struct sk_buff *skb,
+
+ if (!local) {
+ skb_forward_csum(skb);
++ if (skb->dev)
++ skb->tstamp = 0;
+ NF_HOOK(pf, NF_INET_LOCAL_OUT, cp->ipvs->net, NULL, skb,
+ NULL, skb_dst(skb)->dev, dst_output);
+ } else
+@@ -669,6 +673,8 @@ static inline int ip_vs_send_or_cont(int pf, struct sk_buff *skb,
+ if (!local) {
+ ip_vs_drop_early_demux_sk(skb);
+ skb_forward_csum(skb);
++ if (skb->dev)
++ skb->tstamp = 0;
+ NF_HOOK(pf, NF_INET_LOCAL_OUT, cp->ipvs->net, NULL, skb,
+ NULL, skb_dst(skb)->dev, dst_output);
+ } else
+diff --git a/net/netfilter/nf_conntrack_proto_tcp.c b/net/netfilter/nf_conntrack_proto_tcp.c
+index 1926fd56df56a..848b137151c26 100644
+--- a/net/netfilter/nf_conntrack_proto_tcp.c
++++ b/net/netfilter/nf_conntrack_proto_tcp.c
+@@ -541,13 +541,20 @@ static bool tcp_in_window(const struct nf_conn *ct,
+ swin = win << sender->td_scale;
+ sender->td_maxwin = (swin == 0 ? 1 : swin);
+ sender->td_maxend = end + sender->td_maxwin;
+- /*
+- * We haven't seen traffic in the other direction yet
+- * but we have to tweak window tracking to pass III
+- * and IV until that happens.
+- */
+- if (receiver->td_maxwin == 0)
++ if (receiver->td_maxwin == 0) {
++ /* We haven't seen traffic in the other
++ * direction yet but we have to tweak window
++ * tracking to pass III and IV until that
++ * happens.
++ */
+ receiver->td_end = receiver->td_maxend = sack;
++ } else if (sack == receiver->td_end + 1) {
++ /* Likely a reply to a keepalive.
++ * Needed for III.
++ */
++ receiver->td_end++;
++ }
++
+ }
+ } else if (((state->state == TCP_CONNTRACK_SYN_SENT
+ && dir == IP_CT_DIR_ORIGINAL)
+diff --git a/net/netfilter/nf_dup_netdev.c b/net/netfilter/nf_dup_netdev.c
+index 2b01a151eaa80..a579e59ee5c5e 100644
+--- a/net/netfilter/nf_dup_netdev.c
++++ b/net/netfilter/nf_dup_netdev.c
+@@ -19,6 +19,7 @@ static void nf_do_netdev_egress(struct sk_buff *skb, struct net_device *dev)
+ skb_push(skb, skb->mac_len);
+
+ skb->dev = dev;
++ skb->tstamp = 0;
+ dev_queue_xmit(skb);
+ }
+
+diff --git a/net/netfilter/nf_log_common.c b/net/netfilter/nf_log_common.c
+index ae5628ddbe6d7..fd7c5f0f5c25b 100644
+--- a/net/netfilter/nf_log_common.c
++++ b/net/netfilter/nf_log_common.c
+@@ -171,6 +171,18 @@ nf_log_dump_packet_common(struct nf_log_buf *m, u_int8_t pf,
+ }
+ EXPORT_SYMBOL_GPL(nf_log_dump_packet_common);
+
++void nf_log_dump_vlan(struct nf_log_buf *m, const struct sk_buff *skb)
++{
++ u16 vid;
++
++ if (!skb_vlan_tag_present(skb))
++ return;
++
++ vid = skb_vlan_tag_get(skb);
++ nf_log_buf_add(m, "VPROTO=%04x VID=%u ", ntohs(skb->vlan_proto), vid);
++}
++EXPORT_SYMBOL_GPL(nf_log_dump_vlan);
++
+ /* bridge and netdev logging families share this code. */
+ void nf_log_l2packet(struct net *net, u_int8_t pf,
+ __be16 protocol,
+diff --git a/net/netfilter/nft_fwd_netdev.c b/net/netfilter/nft_fwd_netdev.c
+index 3087e23297dbf..b77985986b24e 100644
+--- a/net/netfilter/nft_fwd_netdev.c
++++ b/net/netfilter/nft_fwd_netdev.c
+@@ -138,6 +138,7 @@ static void nft_fwd_neigh_eval(const struct nft_expr *expr,
+ return;
+
+ skb->dev = dev;
++ skb->tstamp = 0;
+ neigh_xmit(neigh_table, dev, addr, skb);
+ out:
+ regs->verdict.code = verdict;
+diff --git a/net/nfc/netlink.c b/net/nfc/netlink.c
+index e894254c17d43..8709f3d4e7c4b 100644
+--- a/net/nfc/netlink.c
++++ b/net/nfc/netlink.c
+@@ -1217,7 +1217,7 @@ static int nfc_genl_fw_download(struct sk_buff *skb, struct genl_info *info)
+ u32 idx;
+ char firmware_name[NFC_FIRMWARE_NAME_MAXSIZE + 1];
+
+- if (!info->attrs[NFC_ATTR_DEVICE_INDEX])
++ if (!info->attrs[NFC_ATTR_DEVICE_INDEX] || !info->attrs[NFC_ATTR_FIRMWARE_NAME])
+ return -EINVAL;
+
+ idx = nla_get_u32(info->attrs[NFC_ATTR_DEVICE_INDEX]);
+diff --git a/net/sched/act_api.c b/net/sched/act_api.c
+index aa69fc4ce39d9..3715b1261c6f3 100644
+--- a/net/sched/act_api.c
++++ b/net/sched/act_api.c
+@@ -722,13 +722,6 @@ int tcf_action_destroy(struct tc_action *actions[], int bind)
+ return ret;
+ }
+
+-static int tcf_action_destroy_1(struct tc_action *a, int bind)
+-{
+- struct tc_action *actions[] = { a, NULL };
+-
+- return tcf_action_destroy(actions, bind);
+-}
+-
+ static int tcf_action_put(struct tc_action *p)
+ {
+ return __tcf_action_put(p, false);
+@@ -1000,13 +993,6 @@ struct tc_action *tcf_action_init_1(struct net *net, struct tcf_proto *tp,
+ if (err < 0)
+ goto err_mod;
+
+- if (TC_ACT_EXT_CMP(a->tcfa_action, TC_ACT_GOTO_CHAIN) &&
+- !rcu_access_pointer(a->goto_chain)) {
+- tcf_action_destroy_1(a, bind);
+- NL_SET_ERR_MSG(extack, "can't use goto chain with NULL chain");
+- return ERR_PTR(-EINVAL);
+- }
+-
+ if (!name && tb[TCA_ACT_COOKIE])
+ tcf_set_action_cookie(&a->act_cookie, cookie);
+
+diff --git a/net/sched/act_ct.c b/net/sched/act_ct.c
+index 0eb4722cf7cd9..1558126af0d4b 100644
+--- a/net/sched/act_ct.c
++++ b/net/sched/act_ct.c
+@@ -156,11 +156,11 @@ tcf_ct_flow_table_add_action_nat_udp(const struct nf_conntrack_tuple *tuple,
+ __be16 target_dst = target.dst.u.udp.port;
+
+ if (target_src != tuple->src.u.udp.port)
+- tcf_ct_add_mangle_action(action, FLOW_ACT_MANGLE_HDR_TYPE_TCP,
++ tcf_ct_add_mangle_action(action, FLOW_ACT_MANGLE_HDR_TYPE_UDP,
+ offsetof(struct udphdr, source),
+ 0xFFFF, be16_to_cpu(target_src));
+ if (target_dst != tuple->dst.u.udp.port)
+- tcf_ct_add_mangle_action(action, FLOW_ACT_MANGLE_HDR_TYPE_TCP,
++ tcf_ct_add_mangle_action(action, FLOW_ACT_MANGLE_HDR_TYPE_UDP,
+ offsetof(struct udphdr, dest),
+ 0xFFFF, be16_to_cpu(target_dst));
+ }
+diff --git a/net/sched/act_tunnel_key.c b/net/sched/act_tunnel_key.c
+index 23cf8469a2e7c..e167f0ddfbcd4 100644
+--- a/net/sched/act_tunnel_key.c
++++ b/net/sched/act_tunnel_key.c
+@@ -458,7 +458,7 @@ static int tunnel_key_init(struct net *net, struct nlattr *nla,
+
+ metadata = __ipv6_tun_set_dst(&saddr, &daddr, tos, ttl, dst_port,
+ 0, flags,
+- key_id, 0);
++ key_id, opts_len);
+ } else {
+ NL_SET_ERR_MSG(extack, "Missing either ipv4 or ipv6 src and dst");
+ ret = -EINVAL;
+diff --git a/net/sched/cls_api.c b/net/sched/cls_api.c
+index 4619cb3cb0a8f..8bf6bde1cfe59 100644
+--- a/net/sched/cls_api.c
++++ b/net/sched/cls_api.c
+@@ -3707,7 +3707,7 @@ int tc_setup_flow_action(struct flow_action *flow_action,
+ entry->gate.num_entries = tcf_gate_num_entries(act);
+ err = tcf_gate_get_entries(entry, act);
+ if (err)
+- goto err_out;
++ goto err_out_locked;
+ } else {
+ err = -EOPNOTSUPP;
+ goto err_out_locked;
+diff --git a/net/smc/smc_core.c b/net/smc/smc_core.c
+index f82a2e5999171..49696f464794f 100644
+--- a/net/smc/smc_core.c
++++ b/net/smc/smc_core.c
+@@ -1595,7 +1595,7 @@ out:
+ return rc;
+ }
+
+-#define SMCD_DMBE_SIZES 7 /* 0 -> 16KB, 1 -> 32KB, .. 6 -> 1MB */
++#define SMCD_DMBE_SIZES 6 /* 0 -> 16KB, 1 -> 32KB, .. 6 -> 1MB */
+
+ static struct smc_buf_desc *smcd_new_buf_create(struct smc_link_group *lgr,
+ bool is_dmb, int bufsize)
+diff --git a/net/smc/smc_llc.c b/net/smc/smc_llc.c
+index df5b0a6ea8488..398f1d9521351 100644
+--- a/net/smc/smc_llc.c
++++ b/net/smc/smc_llc.c
+@@ -233,8 +233,6 @@ static bool smc_llc_flow_start(struct smc_llc_flow *flow,
+ default:
+ flow->type = SMC_LLC_FLOW_NONE;
+ }
+- if (qentry == lgr->delayed_event)
+- lgr->delayed_event = NULL;
+ smc_llc_flow_qentry_set(flow, qentry);
+ spin_unlock_bh(&lgr->llc_flow_lock);
+ return true;
+@@ -1590,13 +1588,12 @@ static void smc_llc_event_work(struct work_struct *work)
+ struct smc_llc_qentry *qentry;
+
+ if (!lgr->llc_flow_lcl.type && lgr->delayed_event) {
+- if (smc_link_usable(lgr->delayed_event->link)) {
+- smc_llc_event_handler(lgr->delayed_event);
+- } else {
+- qentry = lgr->delayed_event;
+- lgr->delayed_event = NULL;
++ qentry = lgr->delayed_event;
++ lgr->delayed_event = NULL;
++ if (smc_link_usable(qentry->link))
++ smc_llc_event_handler(qentry);
++ else
+ kfree(qentry);
+- }
+ }
+
+ again:
+diff --git a/net/sunrpc/auth_gss/svcauth_gss.c b/net/sunrpc/auth_gss/svcauth_gss.c
+index c28051f7d217d..653c317694406 100644
+--- a/net/sunrpc/auth_gss/svcauth_gss.c
++++ b/net/sunrpc/auth_gss/svcauth_gss.c
+@@ -1104,9 +1104,9 @@ static int gss_read_proxy_verf(struct svc_rqst *rqstp,
+ struct gssp_in_token *in_token)
+ {
+ struct kvec *argv = &rqstp->rq_arg.head[0];
+- unsigned int page_base, length;
+- int pages, i, res;
+- size_t inlen;
++ unsigned int length, pgto_offs, pgfrom_offs;
++ int pages, i, res, pgto, pgfrom;
++ size_t inlen, to_offs, from_offs;
+
+ res = gss_read_common_verf(gc, argv, authp, in_handle);
+ if (res)
+@@ -1134,17 +1134,24 @@ static int gss_read_proxy_verf(struct svc_rqst *rqstp,
+ memcpy(page_address(in_token->pages[0]), argv->iov_base, length);
+ inlen -= length;
+
+- i = 1;
+- page_base = rqstp->rq_arg.page_base;
++ to_offs = length;
++ from_offs = rqstp->rq_arg.page_base;
+ while (inlen) {
+- length = min_t(unsigned int, inlen, PAGE_SIZE);
+- memcpy(page_address(in_token->pages[i]),
+- page_address(rqstp->rq_arg.pages[i]) + page_base,
++ pgto = to_offs >> PAGE_SHIFT;
++ pgfrom = from_offs >> PAGE_SHIFT;
++ pgto_offs = to_offs & ~PAGE_MASK;
++ pgfrom_offs = from_offs & ~PAGE_MASK;
++
++ length = min_t(unsigned int, inlen,
++ min_t(unsigned int, PAGE_SIZE - pgto_offs,
++ PAGE_SIZE - pgfrom_offs));
++ memcpy(page_address(in_token->pages[pgto]) + pgto_offs,
++ page_address(rqstp->rq_arg.pages[pgfrom]) + pgfrom_offs,
+ length);
+
++ to_offs += length;
++ from_offs += length;
+ inlen -= length;
+- page_base = 0;
+- i++;
+ }
+ return 0;
+ }
+diff --git a/net/sunrpc/xprtrdma/svc_rdma_sendto.c b/net/sunrpc/xprtrdma/svc_rdma_sendto.c
+index 38e7c3c8c4a9c..e4f410084c748 100644
+--- a/net/sunrpc/xprtrdma/svc_rdma_sendto.c
++++ b/net/sunrpc/xprtrdma/svc_rdma_sendto.c
+@@ -637,10 +637,11 @@ static int svc_rdma_pull_up_reply_msg(struct svcxprt_rdma *rdma,
+ while (remaining) {
+ len = min_t(u32, PAGE_SIZE - pageoff, remaining);
+
+- memcpy(dst, page_address(*ppages), len);
++ memcpy(dst, page_address(*ppages) + pageoff, len);
+ remaining -= len;
+ dst += len;
+ pageoff = 0;
++ ppages++;
+ }
+ }
+
+diff --git a/net/tipc/bcast.c b/net/tipc/bcast.c
+index 383f87bc10615..f69fb54821e6b 100644
+--- a/net/tipc/bcast.c
++++ b/net/tipc/bcast.c
+@@ -108,6 +108,8 @@ static void tipc_bcbase_select_primary(struct net *net)
+ {
+ struct tipc_bc_base *bb = tipc_bc_base(net);
+ int all_dests = tipc_link_bc_peers(bb->link);
++ int max_win = tipc_link_max_win(bb->link);
++ int min_win = tipc_link_min_win(bb->link);
+ int i, mtu, prim;
+
+ bb->primary_bearer = INVALID_BEARER_ID;
+@@ -121,8 +123,12 @@ static void tipc_bcbase_select_primary(struct net *net)
+ continue;
+
+ mtu = tipc_bearer_mtu(net, i);
+- if (mtu < tipc_link_mtu(bb->link))
++ if (mtu < tipc_link_mtu(bb->link)) {
+ tipc_link_set_mtu(bb->link, mtu);
++ tipc_link_set_queue_limits(bb->link,
++ min_win,
++ max_win);
++ }
+ bb->bcast_support &= tipc_bearer_bcast_support(net, i);
+ if (bb->dests[i] < all_dests)
+ continue;
+@@ -585,7 +591,7 @@ static int tipc_bc_link_set_queue_limits(struct net *net, u32 max_win)
+ if (max_win > TIPC_MAX_LINK_WIN)
+ return -EINVAL;
+ tipc_bcast_lock(net);
+- tipc_link_set_queue_limits(l, BCLINK_WIN_MIN, max_win);
++ tipc_link_set_queue_limits(l, tipc_link_min_win(l), max_win);
+ tipc_bcast_unlock(net);
+ return 0;
+ }
+diff --git a/net/tipc/msg.c b/net/tipc/msg.c
+index 2776a41e0dece..15b24fbcbe970 100644
+--- a/net/tipc/msg.c
++++ b/net/tipc/msg.c
+@@ -150,7 +150,8 @@ int tipc_buf_append(struct sk_buff **headbuf, struct sk_buff **buf)
+ if (fragid == FIRST_FRAGMENT) {
+ if (unlikely(head))
+ goto err;
+- frag = skb_unshare(frag, GFP_ATOMIC);
++ if (skb_cloned(frag))
++ frag = skb_copy(frag, GFP_ATOMIC);
+ if (unlikely(!frag))
+ goto err;
+ head = *headbuf = frag;
+diff --git a/net/tls/tls_device.c b/net/tls/tls_device.c
+index 0cbad566f2811..f19416371bb99 100644
+--- a/net/tls/tls_device.c
++++ b/net/tls/tls_device.c
+@@ -418,14 +418,14 @@ static int tls_push_data(struct sock *sk,
+ struct tls_context *tls_ctx = tls_get_ctx(sk);
+ struct tls_prot_info *prot = &tls_ctx->prot_info;
+ struct tls_offload_context_tx *ctx = tls_offload_ctx_tx(tls_ctx);
+- int more = flags & (MSG_SENDPAGE_NOTLAST | MSG_MORE);
+ struct tls_record_info *record = ctx->open_record;
+ int tls_push_record_flags;
+ struct page_frag *pfrag;
+ size_t orig_size = size;
+ u32 max_open_record_len;
+- int copy, rc = 0;
++ bool more = false;
+ bool done = false;
++ int copy, rc = 0;
+ long timeo;
+
+ if (flags &
+@@ -492,9 +492,8 @@ handle_error:
+ if (!size) {
+ last_record:
+ tls_push_record_flags = flags;
+- if (more) {
+- tls_ctx->pending_open_record_frags =
+- !!record->num_frags;
++ if (flags & (MSG_SENDPAGE_NOTLAST | MSG_MORE)) {
++ more = true;
+ break;
+ }
+
+@@ -526,6 +525,8 @@ last_record:
+ }
+ } while (!done);
+
++ tls_ctx->pending_open_record_frags = more;
++
+ if (orig_size - size > 0)
+ rc = orig_size - size;
+
+diff --git a/net/wireless/nl80211.c b/net/wireless/nl80211.c
+index 4d7b255067225..47ab86ee192ac 100644
+--- a/net/wireless/nl80211.c
++++ b/net/wireless/nl80211.c
+@@ -2355,7 +2355,10 @@ static int nl80211_send_wiphy(struct cfg80211_registered_device *rdev,
+ * case we'll continue with more data in the next round,
+ * but break unconditionally so unsplit data stops here.
+ */
+- state->split_start++;
++ if (state->split)
++ state->split_start++;
++ else
++ state->split_start = 0;
+ break;
+ case 9:
+ if (rdev->wiphy.extended_capabilities &&
+@@ -4683,16 +4686,14 @@ static int nl80211_parse_he_obss_pd(struct nlattr *attrs,
+ if (err)
+ return err;
+
+- if (!tb[NL80211_HE_OBSS_PD_ATTR_MIN_OFFSET] ||
+- !tb[NL80211_HE_OBSS_PD_ATTR_MAX_OFFSET])
+- return -EINVAL;
+-
+- he_obss_pd->min_offset =
+- nla_get_u32(tb[NL80211_HE_OBSS_PD_ATTR_MIN_OFFSET]);
+- he_obss_pd->max_offset =
+- nla_get_u32(tb[NL80211_HE_OBSS_PD_ATTR_MAX_OFFSET]);
++ if (tb[NL80211_HE_OBSS_PD_ATTR_MIN_OFFSET])
++ he_obss_pd->min_offset =
++ nla_get_u8(tb[NL80211_HE_OBSS_PD_ATTR_MIN_OFFSET]);
++ if (tb[NL80211_HE_OBSS_PD_ATTR_MAX_OFFSET])
++ he_obss_pd->max_offset =
++ nla_get_u8(tb[NL80211_HE_OBSS_PD_ATTR_MAX_OFFSET]);
+
+- if (he_obss_pd->min_offset >= he_obss_pd->max_offset)
++ if (he_obss_pd->min_offset > he_obss_pd->max_offset)
+ return -EINVAL;
+
+ he_obss_pd->enable = true;
+diff --git a/samples/bpf/xdpsock_user.c b/samples/bpf/xdpsock_user.c
+index c91e91362a0c6..0151bb0b2fc71 100644
+--- a/samples/bpf/xdpsock_user.c
++++ b/samples/bpf/xdpsock_user.c
+@@ -921,7 +921,7 @@ static void rx_drop_all(void)
+ }
+ }
+
+-static void tx_only(struct xsk_socket_info *xsk, u32 frame_nb, int batch_size)
++static void tx_only(struct xsk_socket_info *xsk, u32 *frame_nb, int batch_size)
+ {
+ u32 idx;
+ unsigned int i;
+@@ -934,14 +934,14 @@ static void tx_only(struct xsk_socket_info *xsk, u32 frame_nb, int batch_size)
+ for (i = 0; i < batch_size; i++) {
+ struct xdp_desc *tx_desc = xsk_ring_prod__tx_desc(&xsk->tx,
+ idx + i);
+- tx_desc->addr = (frame_nb + i) << XSK_UMEM__DEFAULT_FRAME_SHIFT;
++ tx_desc->addr = (*frame_nb + i) << XSK_UMEM__DEFAULT_FRAME_SHIFT;
+ tx_desc->len = PKT_SIZE;
+ }
+
+ xsk_ring_prod__submit(&xsk->tx, batch_size);
+ xsk->outstanding_tx += batch_size;
+- frame_nb += batch_size;
+- frame_nb %= NUM_FRAMES;
++ *frame_nb += batch_size;
++ *frame_nb %= NUM_FRAMES;
+ complete_tx_only(xsk, batch_size);
+ }
+
+@@ -997,7 +997,7 @@ static void tx_only_all(void)
+ }
+
+ for (i = 0; i < num_socks; i++)
+- tx_only(xsks[i], frame_nb[i], batch_size);
++ tx_only(xsks[i], &frame_nb[i], batch_size);
+
+ pkt_cnt += batch_size;
+
+diff --git a/samples/mic/mpssd/mpssd.c b/samples/mic/mpssd/mpssd.c
+index a11bf6c5b53b4..cd3f16a6f5caf 100644
+--- a/samples/mic/mpssd/mpssd.c
++++ b/samples/mic/mpssd/mpssd.c
+@@ -403,9 +403,9 @@ mic_virtio_copy(struct mic_info *mic, int fd,
+
+ static inline unsigned _vring_size(unsigned int num, unsigned long align)
+ {
+- return ((sizeof(struct vring_desc) * num + sizeof(__u16) * (3 + num)
++ return _ALIGN_UP(((sizeof(struct vring_desc) * num + sizeof(__u16) * (3 + num)
+ + align - 1) & ~(align - 1))
+- + sizeof(__u16) * 3 + sizeof(struct vring_used_elem) * num;
++ + sizeof(__u16) * 3 + sizeof(struct vring_used_elem) * num, 4);
+ }
+
+ /*
+diff --git a/scripts/package/builddeb b/scripts/package/builddeb
+index 6df3c9f8b2da6..8277144298a00 100755
+--- a/scripts/package/builddeb
++++ b/scripts/package/builddeb
+@@ -202,8 +202,10 @@ EOF
+ done
+
+ if [ "$ARCH" != "um" ]; then
+- deploy_kernel_headers debian/linux-headers
+- create_package linux-headers-$version debian/linux-headers
++ if is_enabled CONFIG_MODULES; then
++ deploy_kernel_headers debian/linux-headers
++ create_package linux-headers-$version debian/linux-headers
++ fi
+
+ deploy_libc_headers debian/linux-libc-dev
+ create_package linux-libc-dev debian/linux-libc-dev
+diff --git a/scripts/package/mkdebian b/scripts/package/mkdebian
+index df1adbfb8ead0..9342517778bf3 100755
+--- a/scripts/package/mkdebian
++++ b/scripts/package/mkdebian
+@@ -183,13 +183,6 @@ Description: Linux kernel, version $version
+ This package contains the Linux kernel, modules and corresponding other
+ files, version: $version.
+
+-Package: $kernel_headers_packagename
+-Architecture: $debarch
+-Description: Linux kernel headers for $version on $debarch
+- This package provides kernel header files for $version on $debarch
+- .
+- This is useful for people who need to build external modules
+-
+ Package: linux-libc-dev
+ Section: devel
+ Provides: linux-kernel-headers
+@@ -200,6 +193,18 @@ Description: Linux support headers for userspace development
+ Multi-Arch: same
+ EOF
+
++if is_enabled CONFIG_MODULES; then
++cat <<EOF >> debian/control
++
++Package: $kernel_headers_packagename
++Architecture: $debarch
++Description: Linux kernel headers for $version on $debarch
++ This package provides kernel header files for $version on $debarch
++ .
++ This is useful for people who need to build external modules
++EOF
++fi
++
+ if is_enabled CONFIG_DEBUG_INFO; then
+ cat <<EOF >> debian/control
+
+diff --git a/security/integrity/ima/ima_crypto.c b/security/integrity/ima/ima_crypto.c
+index 011c3c76af865..21989fa0c1074 100644
+--- a/security/integrity/ima/ima_crypto.c
++++ b/security/integrity/ima/ima_crypto.c
+@@ -829,6 +829,8 @@ static int ima_calc_boot_aggregate_tfm(char *digest, u16 alg_id,
+ /* now accumulate with current aggregate */
+ rc = crypto_shash_update(shash, d.digest,
+ crypto_shash_digestsize(tfm));
++ if (rc != 0)
++ return rc;
+ }
+ /*
+ * Extend cumulative digest over TPM registers 8-9, which contain
+diff --git a/security/integrity/ima/ima_main.c b/security/integrity/ima/ima_main.c
+index c1583d98c5e50..0b8f17570f210 100644
+--- a/security/integrity/ima/ima_main.c
++++ b/security/integrity/ima/ima_main.c
+@@ -531,6 +531,16 @@ int ima_file_hash(struct file *file, char *buf, size_t buf_size)
+ return -EOPNOTSUPP;
+
+ mutex_lock(&iint->mutex);
++
++ /*
++ * ima_file_hash can be called when ima_collect_measurement has still
++ * not been called, we might not always have a hash.
++ */
++ if (!iint->ima_hash) {
++ mutex_unlock(&iint->mutex);
++ return -EOPNOTSUPP;
++ }
++
+ if (buf) {
+ size_t copied_size;
+
+diff --git a/sound/core/seq/oss/seq_oss.c b/sound/core/seq/oss/seq_oss.c
+index c8b9c0b315d8f..250a92b187265 100644
+--- a/sound/core/seq/oss/seq_oss.c
++++ b/sound/core/seq/oss/seq_oss.c
+@@ -174,9 +174,12 @@ odev_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
+ if (snd_BUG_ON(!dp))
+ return -ENXIO;
+
+-	mutex_lock(&register_mutex);
++ if (cmd != SNDCTL_SEQ_SYNC &&
++	    mutex_lock_interruptible(&register_mutex))
++ return -ERESTARTSYS;
+ rc = snd_seq_oss_ioctl(dp, cmd, arg);
+-	mutex_unlock(&register_mutex);
++ if (cmd != SNDCTL_SEQ_SYNC)
++		mutex_unlock(&register_mutex);
+ return rc;
+ }
+
+diff --git a/sound/firewire/bebob/bebob_hwdep.c b/sound/firewire/bebob/bebob_hwdep.c
+index 45b740f44c459..c362eb38ab906 100644
+--- a/sound/firewire/bebob/bebob_hwdep.c
++++ b/sound/firewire/bebob/bebob_hwdep.c
+@@ -36,12 +36,11 @@ hwdep_read(struct snd_hwdep *hwdep, char __user *buf, long count,
+ }
+
+ memset(&event, 0, sizeof(event));
++ count = min_t(long, count, sizeof(event.lock_status));
+ if (bebob->dev_lock_changed) {
+ event.lock_status.type = SNDRV_FIREWIRE_EVENT_LOCK_STATUS;
+ event.lock_status.status = (bebob->dev_lock_count > 0);
+ bebob->dev_lock_changed = false;
+-
+- count = min_t(long, count, sizeof(event.lock_status));
+ }
+
+ spin_unlock_irq(&bebob->lock);
+diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c
+index 4c23b169ac67e..cc51ef98752a9 100644
+--- a/sound/pci/hda/hda_intel.c
++++ b/sound/pci/hda/hda_intel.c
+@@ -1003,12 +1003,14 @@ static void __azx_runtime_resume(struct azx *chip, bool from_rt)
+ azx_init_pci(chip);
+ hda_intel_init_chip(chip, true);
+
+- if (status && from_rt) {
+- list_for_each_codec(codec, &chip->bus)
+- if (!codec->relaxed_resume &&
+- (status & (1 << codec->addr)))
+- schedule_delayed_work(&codec->jackpoll_work,
+- codec->jackpoll_interval);
++ if (from_rt) {
++ list_for_each_codec(codec, &chip->bus) {
++ if (codec->relaxed_resume)
++ continue;
++
++ if (codec->forced_resume || (status & (1 << codec->addr)))
++ pm_request_resume(hda_codec_dev(codec));
++ }
+ }
+
+ /* power down again for link-controlled chips */
+diff --git a/sound/pci/hda/hda_jack.c b/sound/pci/hda/hda_jack.c
+index 02cc682caa55a..588059428d8f5 100644
+--- a/sound/pci/hda/hda_jack.c
++++ b/sound/pci/hda/hda_jack.c
+@@ -275,6 +275,23 @@ int snd_hda_jack_detect_state_mst(struct hda_codec *codec,
+ }
+ EXPORT_SYMBOL_GPL(snd_hda_jack_detect_state_mst);
+
++static struct hda_jack_callback *
++find_callback_from_list(struct hda_jack_tbl *jack,
++ hda_jack_callback_fn func)
++{
++ struct hda_jack_callback *cb;
++
++ if (!func)
++ return NULL;
++
++ for (cb = jack->callback; cb; cb = cb->next) {
++ if (cb->func == func)
++ return cb;
++ }
++
++ return NULL;
++}
++
+ /**
+ * snd_hda_jack_detect_enable_mst - enable the jack-detection
+ * @codec: the HDA codec
+@@ -297,7 +314,10 @@ snd_hda_jack_detect_enable_callback_mst(struct hda_codec *codec, hda_nid_t nid,
+ jack = snd_hda_jack_tbl_new(codec, nid, dev_id);
+ if (!jack)
+ return ERR_PTR(-ENOMEM);
+- if (func) {
++
++ callback = find_callback_from_list(jack, func);
++
++ if (func && !callback) {
+ callback = kzalloc(sizeof(*callback), GFP_KERNEL);
+ if (!callback)
+ return ERR_PTR(-ENOMEM);
+diff --git a/sound/pci/hda/patch_ca0132.c b/sound/pci/hda/patch_ca0132.c
+index 6dfa864d3fe7b..a49c322bdbe9d 100644
+--- a/sound/pci/hda/patch_ca0132.c
++++ b/sound/pci/hda/patch_ca0132.c
+@@ -1065,6 +1065,7 @@ enum {
+ QUIRK_R3DI,
+ QUIRK_R3D,
+ QUIRK_AE5,
++ QUIRK_AE7,
+ };
+
+ #ifdef CONFIG_PCI
+@@ -1184,6 +1185,7 @@ static const struct snd_pci_quirk ca0132_quirks[] = {
+ SND_PCI_QUIRK(0x1102, 0x0013, "Recon3D", QUIRK_R3D),
+ SND_PCI_QUIRK(0x1102, 0x0018, "Recon3D", QUIRK_R3D),
+ SND_PCI_QUIRK(0x1102, 0x0051, "Sound Blaster AE-5", QUIRK_AE5),
++ SND_PCI_QUIRK(0x1102, 0x0081, "Sound Blaster AE-7", QUIRK_AE7),
+ {}
+ };
+
+@@ -4675,6 +4677,15 @@ static int ca0132_alt_select_in(struct hda_codec *codec)
+ ca0113_mmio_command_set(codec, 0x30, 0x28, 0x00);
+ tmp = FLOAT_THREE;
+ break;
++ case QUIRK_AE7:
++ ca0113_mmio_command_set(codec, 0x30, 0x28, 0x00);
++ tmp = FLOAT_THREE;
++ chipio_set_conn_rate(codec, MEM_CONNID_MICIN2,
++ SR_96_000);
++ chipio_set_conn_rate(codec, MEM_CONNID_MICOUT2,
++ SR_96_000);
++ dspio_set_uint_param(codec, 0x80, 0x01, FLOAT_ZERO);
++ break;
+ default:
+ tmp = FLOAT_ONE;
+ break;
+@@ -4720,6 +4731,14 @@ static int ca0132_alt_select_in(struct hda_codec *codec)
+ case QUIRK_AE5:
+ ca0113_mmio_command_set(codec, 0x30, 0x28, 0x00);
+ break;
++ case QUIRK_AE7:
++ ca0113_mmio_command_set(codec, 0x30, 0x28, 0x3f);
++ chipio_set_conn_rate(codec, MEM_CONNID_MICIN2,
++ SR_96_000);
++ chipio_set_conn_rate(codec, MEM_CONNID_MICOUT2,
++ SR_96_000);
++ dspio_set_uint_param(codec, 0x80, 0x01, FLOAT_ZERO);
++ break;
+ default:
+ break;
+ }
+@@ -4729,7 +4748,10 @@ static int ca0132_alt_select_in(struct hda_codec *codec)
+ if (ca0132_quirk(spec) == QUIRK_R3DI)
+ chipio_set_conn_rate(codec, 0x0F, SR_96_000);
+
+- tmp = FLOAT_ZERO;
++ if (ca0132_quirk(spec) == QUIRK_AE7)
++ tmp = FLOAT_THREE;
++ else
++ tmp = FLOAT_ZERO;
+ dspio_set_uint_param(codec, 0x80, 0x00, tmp);
+
+ switch (ca0132_quirk(spec)) {
+diff --git a/sound/pci/hda/patch_hdmi.c b/sound/pci/hda/patch_hdmi.c
+index 419f012b9853c..0d3e996beede1 100644
+--- a/sound/pci/hda/patch_hdmi.c
++++ b/sound/pci/hda/patch_hdmi.c
+@@ -1989,22 +1989,25 @@ static int hdmi_pcm_close(struct hda_pcm_stream *hinfo,
+ int pinctl;
+ int err = 0;
+
++ mutex_lock(&spec->pcm_lock);
+ if (hinfo->nid) {
+ pcm_idx = hinfo_to_pcm_index(codec, hinfo);
+- if (snd_BUG_ON(pcm_idx < 0))
+- return -EINVAL;
++ if (snd_BUG_ON(pcm_idx < 0)) {
++ err = -EINVAL;
++ goto unlock;
++ }
+ cvt_idx = cvt_nid_to_cvt_index(codec, hinfo->nid);
+- if (snd_BUG_ON(cvt_idx < 0))
+- return -EINVAL;
++ if (snd_BUG_ON(cvt_idx < 0)) {
++ err = -EINVAL;
++ goto unlock;
++ }
+ per_cvt = get_cvt(spec, cvt_idx);
+-
+ snd_BUG_ON(!per_cvt->assigned);
+ per_cvt->assigned = 0;
+ hinfo->nid = 0;
+
+ azx_stream(get_azx_dev(substream))->stripe = 0;
+
+- mutex_lock(&spec->pcm_lock);
+ snd_hda_spdif_ctls_unassign(codec, pcm_idx);
+ clear_bit(pcm_idx, &spec->pcm_in_use);
+ pin_idx = hinfo_to_pin_index(codec, hinfo);
+@@ -2034,10 +2037,11 @@ static int hdmi_pcm_close(struct hda_pcm_stream *hinfo,
+ per_pin->setup = false;
+ per_pin->channels = 0;
+ mutex_unlock(&per_pin->lock);
+- unlock:
+- mutex_unlock(&spec->pcm_lock);
+ }
+
++unlock:
++ mutex_unlock(&spec->pcm_lock);
++
+ return err;
+ }
+
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 601683e05ccca..e9593abd4e232 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -1142,6 +1142,7 @@ static int alc_alloc_spec(struct hda_codec *codec, hda_nid_t mixer_nid)
+ codec->single_adc_amp = 1;
+ /* FIXME: do we need this for all Realtek codec models? */
+ codec->spdif_status_reset = 1;
++ codec->forced_resume = 1;
+ codec->patch_ops = alc_patch_ops;
+
+ err = alc_codec_rename_from_preset(codec);
+@@ -1921,6 +1922,8 @@ enum {
+ ALC1220_FIXUP_CLEVO_P950,
+ ALC1220_FIXUP_CLEVO_PB51ED,
+ ALC1220_FIXUP_CLEVO_PB51ED_PINS,
++ ALC887_FIXUP_ASUS_AUDIO,
++ ALC887_FIXUP_ASUS_HMIC,
+ };
+
+ static void alc889_fixup_coef(struct hda_codec *codec,
+@@ -2133,6 +2136,31 @@ static void alc1220_fixup_clevo_pb51ed(struct hda_codec *codec,
+ alc_fixup_headset_mode_no_hp_mic(codec, fix, action);
+ }
+
++static void alc887_asus_hp_automute_hook(struct hda_codec *codec,
++ struct hda_jack_callback *jack)
++{
++ struct alc_spec *spec = codec->spec;
++ unsigned int vref;
++
++ snd_hda_gen_hp_automute(codec, jack);
++
++ if (spec->gen.hp_jack_present)
++ vref = AC_PINCTL_VREF_80;
++ else
++ vref = AC_PINCTL_VREF_HIZ;
++ snd_hda_set_pin_ctl(codec, 0x19, PIN_HP | vref);
++}
++
++static void alc887_fixup_asus_jack(struct hda_codec *codec,
++ const struct hda_fixup *fix, int action)
++{
++ struct alc_spec *spec = codec->spec;
++ if (action != HDA_FIXUP_ACT_PROBE)
++ return;
++ snd_hda_set_pin_ctl_cache(codec, 0x1b, PIN_HP);
++ spec->gen.hp_automute_hook = alc887_asus_hp_automute_hook;
++}
++
+ static const struct hda_fixup alc882_fixups[] = {
+ [ALC882_FIXUP_ABIT_AW9D_MAX] = {
+ .type = HDA_FIXUP_PINS,
+@@ -2390,6 +2418,20 @@ static const struct hda_fixup alc882_fixups[] = {
+ .chained = true,
+ .chain_id = ALC1220_FIXUP_CLEVO_PB51ED,
+ },
++ [ALC887_FIXUP_ASUS_AUDIO] = {
++ .type = HDA_FIXUP_PINS,
++ .v.pins = (const struct hda_pintbl[]) {
++ { 0x15, 0x02a14150 }, /* use as headset mic, without its own jack detect */
++ { 0x19, 0x22219420 },
++ {}
++ },
++ },
++ [ALC887_FIXUP_ASUS_HMIC] = {
++ .type = HDA_FIXUP_FUNC,
++ .v.func = alc887_fixup_asus_jack,
++ .chained = true,
++ .chain_id = ALC887_FIXUP_ASUS_AUDIO,
++ },
+ };
+
+ static const struct snd_pci_quirk alc882_fixup_tbl[] = {
+@@ -2423,6 +2465,7 @@ static const struct snd_pci_quirk alc882_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x1043, 0x13c2, "Asus A7M", ALC882_FIXUP_EAPD),
+ SND_PCI_QUIRK(0x1043, 0x1873, "ASUS W90V", ALC882_FIXUP_ASUS_W90V),
+ SND_PCI_QUIRK(0x1043, 0x1971, "Asus W2JC", ALC882_FIXUP_ASUS_W2JC),
++ SND_PCI_QUIRK(0x1043, 0x2390, "Asus D700SA", ALC887_FIXUP_ASUS_HMIC),
+ SND_PCI_QUIRK(0x1043, 0x835f, "Asus Eee 1601", ALC888_FIXUP_EEE1601),
+ SND_PCI_QUIRK(0x1043, 0x84bc, "ASUS ET2700", ALC887_FIXUP_ASUS_BASS),
+ SND_PCI_QUIRK(0x1043, 0x8691, "ASUS ROG Ranger VIII", ALC882_FIXUP_GPIO3),
+@@ -6245,6 +6288,7 @@ enum {
+ ALC269_FIXUP_LEMOTE_A190X,
+ ALC256_FIXUP_INTEL_NUC8_RUGGED,
+ ALC255_FIXUP_XIAOMI_HEADSET_MIC,
++ ALC274_FIXUP_HP_MIC,
+ };
+
+ static const struct hda_fixup alc269_fixups[] = {
+@@ -7624,6 +7668,14 @@ static const struct hda_fixup alc269_fixups[] = {
+ .chained = true,
+ .chain_id = ALC289_FIXUP_ASUS_GA401
+ },
++ [ALC274_FIXUP_HP_MIC] = {
++ .type = HDA_FIXUP_VERBS,
++ .v.verbs = (const struct hda_verb[]) {
++ { 0x20, AC_VERB_SET_COEF_INDEX, 0x45 },
++ { 0x20, AC_VERB_SET_PROC_COEF, 0x5089 },
++ { }
++ },
++ },
+ };
+
+ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+@@ -7775,6 +7827,8 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x103c, 0x869d, "HP", ALC236_FIXUP_HP_MUTE_LED),
+ SND_PCI_QUIRK(0x103c, 0x8729, "HP", ALC285_FIXUP_HP_GPIO_LED),
+ SND_PCI_QUIRK(0x103c, 0x8736, "HP", ALC285_FIXUP_HP_GPIO_AMP_INIT),
++ SND_PCI_QUIRK(0x103c, 0x874e, "HP", ALC274_FIXUP_HP_MIC),
++ SND_PCI_QUIRK(0x103c, 0x8760, "HP", ALC285_FIXUP_HP_MUTE_LED),
+ SND_PCI_QUIRK(0x103c, 0x877a, "HP", ALC285_FIXUP_HP_MUTE_LED),
+ SND_PCI_QUIRK(0x103c, 0x877d, "HP", ALC236_FIXUP_HP_MUTE_LED),
+ SND_PCI_QUIRK(0x1043, 0x103e, "ASUS X540SA", ALC256_FIXUP_ASUS_MIC),
+@@ -8100,6 +8154,7 @@ static const struct hda_model_fixup alc269_fixup_models[] = {
+ {.id = ALC256_FIXUP_MEDION_HEADSET_NO_PRESENCE, .name = "alc256-medion-headset"},
+ {.id = ALC298_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET, .name = "alc298-samsung-headphone"},
+ {.id = ALC255_FIXUP_XIAOMI_HEADSET_MIC, .name = "alc255-xiaomi-headset"},
++ {.id = ALC274_FIXUP_HP_MIC, .name = "alc274-hp-mic-detect"},
+ {}
+ };
+ #define ALC225_STANDARD_PINS \
+@@ -9634,6 +9689,7 @@ static const struct snd_pci_quirk alc662_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x1028, 0x0698, "Dell", ALC668_FIXUP_DELL_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1028, 0x069f, "Dell", ALC668_FIXUP_DELL_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x103c, 0x1632, "HP RP5800", ALC662_FIXUP_HP_RP5800),
++ SND_PCI_QUIRK(0x103c, 0x873e, "HP", ALC671_FIXUP_HP_HEADSET_MIC2),
+ SND_PCI_QUIRK(0x1043, 0x1080, "Asus UX501VW", ALC668_FIXUP_HEADSET_MODE),
+ SND_PCI_QUIRK(0x1043, 0x11cd, "Asus N550", ALC662_FIXUP_ASUS_Nx50),
+ SND_PCI_QUIRK(0x1043, 0x13df, "Asus N550JX", ALC662_FIXUP_BASS_1A),
+diff --git a/sound/soc/codecs/Kconfig b/sound/soc/codecs/Kconfig
+index 986a6308818b2..2a8484f37496c 100644
+--- a/sound/soc/codecs/Kconfig
++++ b/sound/soc/codecs/Kconfig
+@@ -539,6 +539,7 @@ config SND_SOC_CQ0093VC
+ config SND_SOC_CROS_EC_CODEC
+ tristate "codec driver for ChromeOS EC"
+ depends on CROS_EC
++ select CRYPTO
+ select CRYPTO_LIB_SHA256
+ help
+ If you say yes here you will get support for the
+diff --git a/sound/soc/codecs/tas2770.c b/sound/soc/codecs/tas2770.c
+index cf071121c8398..531bf32043813 100644
+--- a/sound/soc/codecs/tas2770.c
++++ b/sound/soc/codecs/tas2770.c
+@@ -16,7 +16,6 @@
+ #include <linux/i2c.h>
+ #include <linux/gpio.h>
+ #include <linux/gpio/consumer.h>
+-#include <linux/pm_runtime.h>
+ #include <linux/regulator/consumer.h>
+ #include <linux/firmware.h>
+ #include <linux/regmap.h>
+@@ -57,7 +56,12 @@ static int tas2770_set_bias_level(struct snd_soc_component *component,
+ TAS2770_PWR_CTRL_MASK,
+ TAS2770_PWR_CTRL_ACTIVE);
+ break;
+-
++ case SND_SOC_BIAS_STANDBY:
++ case SND_SOC_BIAS_PREPARE:
++ snd_soc_component_update_bits(component,
++ TAS2770_PWR_CTRL,
++ TAS2770_PWR_CTRL_MASK, TAS2770_PWR_CTRL_MUTE);
++ break;
+ case SND_SOC_BIAS_OFF:
+ snd_soc_component_update_bits(component,
+ TAS2770_PWR_CTRL,
+@@ -135,23 +139,18 @@ static int tas2770_dac_event(struct snd_soc_dapm_widget *w,
+ TAS2770_PWR_CTRL,
+ TAS2770_PWR_CTRL_MASK,
+ TAS2770_PWR_CTRL_MUTE);
+- if (ret)
+- goto end;
+ break;
+ case SND_SOC_DAPM_PRE_PMD:
+ ret = snd_soc_component_update_bits(component,
+ TAS2770_PWR_CTRL,
+ TAS2770_PWR_CTRL_MASK,
+ TAS2770_PWR_CTRL_SHUTDOWN);
+- if (ret)
+- goto end;
+ break;
+ default:
+ dev_err(tas2770->dev, "Not supported evevt\n");
+ return -EINVAL;
+ }
+
+-end:
+ if (ret < 0)
+ return ret;
+
+@@ -243,6 +242,9 @@ static int tas2770_set_bitwidth(struct tas2770_priv *tas2770, int bitwidth)
+ return -EINVAL;
+ }
+
++ if (ret < 0)
++ return ret;
++
+ tas2770->channel_size = bitwidth;
+
+ ret = snd_soc_component_update_bits(component,
+@@ -251,16 +253,15 @@ static int tas2770_set_bitwidth(struct tas2770_priv *tas2770, int bitwidth)
+ TAS2770_TDM_CFG_REG5_50_MASK,
+ TAS2770_TDM_CFG_REG5_VSNS_ENABLE |
+ tas2770->v_sense_slot);
+- if (ret)
+- goto end;
++ if (ret < 0)
++ return ret;
++
+ ret = snd_soc_component_update_bits(component,
+ TAS2770_TDM_CFG_REG6,
+ TAS2770_TDM_CFG_REG6_ISNS_MASK |
+ TAS2770_TDM_CFG_REG6_50_MASK,
+ TAS2770_TDM_CFG_REG6_ISNS_ENABLE |
+ tas2770->i_sense_slot);
+-
+-end:
+ if (ret < 0)
+ return ret;
+
+@@ -278,36 +279,35 @@ static int tas2770_set_samplerate(struct tas2770_priv *tas2770, int samplerate)
+ TAS2770_TDM_CFG_REG0,
+ TAS2770_TDM_CFG_REG0_SMP_MASK,
+ TAS2770_TDM_CFG_REG0_SMP_48KHZ);
+- if (ret)
+- goto end;
++ if (ret < 0)
++ return ret;
++
+ ret = snd_soc_component_update_bits(component,
+ TAS2770_TDM_CFG_REG0,
+ TAS2770_TDM_CFG_REG0_31_MASK,
+ TAS2770_TDM_CFG_REG0_31_44_1_48KHZ);
+- if (ret)
+- goto end;
+ break;
+ case 44100:
+ ret = snd_soc_component_update_bits(component,
+ TAS2770_TDM_CFG_REG0,
+ TAS2770_TDM_CFG_REG0_SMP_MASK,
+ TAS2770_TDM_CFG_REG0_SMP_44_1KHZ);
+- if (ret)
+- goto end;
++ if (ret < 0)
++ return ret;
++
+ ret = snd_soc_component_update_bits(component,
+ TAS2770_TDM_CFG_REG0,
+ TAS2770_TDM_CFG_REG0_31_MASK,
+ TAS2770_TDM_CFG_REG0_31_44_1_48KHZ);
+- if (ret)
+- goto end;
+ break;
+ case 96000:
+ ret = snd_soc_component_update_bits(component,
+ TAS2770_TDM_CFG_REG0,
+ TAS2770_TDM_CFG_REG0_SMP_MASK,
+ TAS2770_TDM_CFG_REG0_SMP_48KHZ);
+- if (ret)
+- goto end;
++ if (ret < 0)
++ return ret;
++
+ ret = snd_soc_component_update_bits(component,
+ TAS2770_TDM_CFG_REG0,
+ TAS2770_TDM_CFG_REG0_31_MASK,
+@@ -318,8 +318,9 @@ static int tas2770_set_samplerate(struct tas2770_priv *tas2770, int samplerate)
+ TAS2770_TDM_CFG_REG0,
+ TAS2770_TDM_CFG_REG0_SMP_MASK,
+ TAS2770_TDM_CFG_REG0_SMP_44_1KHZ);
+- if (ret)
+- goto end;
++ if (ret < 0)
++ return ret;
++
+ ret = snd_soc_component_update_bits(component,
+ TAS2770_TDM_CFG_REG0,
+ TAS2770_TDM_CFG_REG0_31_MASK,
+@@ -330,22 +331,22 @@ static int tas2770_set_samplerate(struct tas2770_priv *tas2770, int samplerate)
+ TAS2770_TDM_CFG_REG0,
+ TAS2770_TDM_CFG_REG0_SMP_MASK,
+ TAS2770_TDM_CFG_REG0_SMP_48KHZ);
+- if (ret)
+- goto end;
++ if (ret < 0)
++ return ret;
++
+ ret = snd_soc_component_update_bits(component,
+ TAS2770_TDM_CFG_REG0,
+ TAS2770_TDM_CFG_REG0_31_MASK,
+ TAS2770_TDM_CFG_REG0_31_176_4_192KHZ);
+- if (ret)
+- goto end;
+ break;
+ case 17640:
+ ret = snd_soc_component_update_bits(component,
+ TAS2770_TDM_CFG_REG0,
+ TAS2770_TDM_CFG_REG0_SMP_MASK,
+ TAS2770_TDM_CFG_REG0_SMP_44_1KHZ);
+- if (ret)
+- goto end;
++ if (ret < 0)
++ return ret;
++
+ ret = snd_soc_component_update_bits(component,
+ TAS2770_TDM_CFG_REG0,
+ TAS2770_TDM_CFG_REG0_31_MASK,
+@@ -355,7 +356,6 @@ static int tas2770_set_samplerate(struct tas2770_priv *tas2770, int samplerate)
+ ret = -EINVAL;
+ }
+
+-end:
+ if (ret < 0)
+ return ret;
+
+@@ -574,6 +574,8 @@ static int tas2770_codec_probe(struct snd_soc_component *component)
+
+ tas2770->component = component;
+
++ tas2770_reset(tas2770);
++
+ return 0;
+ }
+
+@@ -700,29 +702,28 @@ static int tas2770_parse_dt(struct device *dev, struct tas2770_priv *tas2770)
+ rc = fwnode_property_read_u32(dev->fwnode, "ti,asi-format",
+ &tas2770->asi_format);
+ if (rc) {
+- dev_err(tas2770->dev, "Looking up %s property failed %d\n",
+- "ti,asi-format", rc);
+- goto end;
++ dev_info(tas2770->dev, "Property %s is missing setting default slot\n",
++ "ti,asi-format");
++ tas2770->asi_format = 0;
+ }
+
+ rc = fwnode_property_read_u32(dev->fwnode, "ti,imon-slot-no",
+ &tas2770->i_sense_slot);
+ if (rc) {
+- dev_err(tas2770->dev, "Looking up %s property failed %d\n",
+- "ti,imon-slot-no", rc);
+- goto end;
++ dev_info(tas2770->dev, "Property %s is missing setting default slot\n",
++ "ti,imon-slot-no");
++ tas2770->i_sense_slot = 0;
+ }
+
+ rc = fwnode_property_read_u32(dev->fwnode, "ti,vmon-slot-no",
+ &tas2770->v_sense_slot);
+ if (rc) {
+- dev_err(tas2770->dev, "Looking up %s property failed %d\n",
+- "ti,vmon-slot-no", rc);
+- goto end;
++ dev_info(tas2770->dev, "Property %s is missing setting default slot\n",
++ "ti,vmon-slot-no");
++ tas2770->v_sense_slot = 2;
+ }
+
+-end:
+- return rc;
++ return 0;
+ }
+
+ static int tas2770_i2c_probe(struct i2c_client *client,
+@@ -770,8 +771,6 @@ static int tas2770_i2c_probe(struct i2c_client *client,
+ tas2770->channel_size = 0;
+ tas2770->slot_width = 0;
+
+- tas2770_reset(tas2770);
+-
+ result = tas2770_register_codec(tas2770);
+ if (result)
+ dev_err(tas2770->dev, "Register codec failed.\n");
+@@ -780,13 +779,6 @@ end:
+ return result;
+ }
+
+-static int tas2770_i2c_remove(struct i2c_client *client)
+-{
+- pm_runtime_disable(&client->dev);
+- return 0;
+-}
+-
+-
+ static const struct i2c_device_id tas2770_i2c_id[] = {
+ { "tas2770", 0},
+ { }
+@@ -807,7 +799,6 @@ static struct i2c_driver tas2770_i2c_driver = {
+ .of_match_table = of_match_ptr(tas2770_of_match),
+ },
+ .probe = tas2770_i2c_probe,
+- .remove = tas2770_i2c_remove,
+ .id_table = tas2770_i2c_id,
+ };
+
+diff --git a/sound/soc/codecs/tlv320adcx140.c b/sound/soc/codecs/tlv320adcx140.c
+index 03fb50175d876..a6273ccb84013 100644
+--- a/sound/soc/codecs/tlv320adcx140.c
++++ b/sound/soc/codecs/tlv320adcx140.c
+@@ -154,7 +154,7 @@ static const struct regmap_config adcx140_i2c_regmap = {
+ };
+
+ /* Digital Volume control. From -100 to 27 dB in 0.5 dB steps */
+-static DECLARE_TLV_DB_SCALE(dig_vol_tlv, -10000, 50, 0);
++static DECLARE_TLV_DB_SCALE(dig_vol_tlv, -10050, 50, 0);
+
+ /* ADC gain. From 0 to 42 dB in 1 dB steps */
+ static DECLARE_TLV_DB_SCALE(adc_tlv, 0, 100, 0);
+diff --git a/sound/soc/codecs/tlv320aic32x4.c b/sound/soc/codecs/tlv320aic32x4.c
+index d087f3b20b1d5..50b66cf9ea8f9 100644
+--- a/sound/soc/codecs/tlv320aic32x4.c
++++ b/sound/soc/codecs/tlv320aic32x4.c
+@@ -665,7 +665,7 @@ static int aic32x4_set_processing_blocks(struct snd_soc_component *component,
+ }
+
+ static int aic32x4_setup_clocks(struct snd_soc_component *component,
+- unsigned int sample_rate)
++ unsigned int sample_rate, unsigned int channels)
+ {
+ u8 aosr;
+ u16 dosr;
+@@ -753,7 +753,9 @@ static int aic32x4_setup_clocks(struct snd_soc_component *component,
+ dosr);
+
+ clk_set_rate(clocks[5].clk,
+- sample_rate * 32);
++ sample_rate * 32 *
++ channels);
++
+ return 0;
+ }
+ }
+@@ -775,7 +777,8 @@ static int aic32x4_hw_params(struct snd_pcm_substream *substream,
+ u8 iface1_reg = 0;
+ u8 dacsetup_reg = 0;
+
+- aic32x4_setup_clocks(component, params_rate(params));
++ aic32x4_setup_clocks(component, params_rate(params),
++ params_channels(params));
+
+ switch (params_width(params)) {
+ case 16:
+diff --git a/sound/soc/codecs/wm_adsp.c b/sound/soc/codecs/wm_adsp.c
+index 519ca2e696372..18f62fde92537 100644
+--- a/sound/soc/codecs/wm_adsp.c
++++ b/sound/soc/codecs/wm_adsp.c
+@@ -2043,6 +2043,7 @@ int wm_adsp_write_ctl(struct wm_adsp *dsp, const char *name, int type,
+ {
+ struct wm_coeff_ctl *ctl;
+ struct snd_kcontrol *kcontrol;
++ char ctl_name[SNDRV_CTL_ELEM_ID_NAME_MAXLEN];
+ int ret;
+
+ ctl = wm_adsp_get_ctl(dsp, name, type, alg);
+@@ -2053,8 +2054,25 @@ int wm_adsp_write_ctl(struct wm_adsp *dsp, const char *name, int type,
+ return -EINVAL;
+
+ ret = wm_coeff_write_ctrl(ctl, buf, len);
++ if (ret)
++ return ret;
++
++ if (ctl->flags & WMFW_CTL_FLAG_SYS)
++ return 0;
++
++ if (dsp->component->name_prefix)
++ snprintf(ctl_name, SNDRV_CTL_ELEM_ID_NAME_MAXLEN, "%s %s",
++ dsp->component->name_prefix, ctl->name);
++ else
++ snprintf(ctl_name, SNDRV_CTL_ELEM_ID_NAME_MAXLEN, "%s",
++ ctl->name);
++
++ kcontrol = snd_soc_card_get_kcontrol(dsp->component->card, ctl_name);
++ if (!kcontrol) {
++ adsp_err(dsp, "Can't find kcontrol %s\n", ctl_name);
++ return -EINVAL;
++ }
+
+- kcontrol = snd_soc_card_get_kcontrol(dsp->component->card, ctl->name);
+ snd_ctl_notify(dsp->component->card->snd_card,
+ SNDRV_CTL_EVENT_MASK_VALUE, &kcontrol->id);
+
+diff --git a/sound/soc/fsl/fsl_sai.c b/sound/soc/fsl/fsl_sai.c
+index 7031869a023a1..211e29a73a41a 100644
+--- a/sound/soc/fsl/fsl_sai.c
++++ b/sound/soc/fsl/fsl_sai.c
+@@ -694,7 +694,7 @@ static int fsl_sai_dai_probe(struct snd_soc_dai *cpu_dai)
+ return 0;
+ }
+
+-static struct snd_soc_dai_driver fsl_sai_dai = {
++static struct snd_soc_dai_driver fsl_sai_dai_template = {
+ .probe = fsl_sai_dai_probe,
+ .playback = {
+ .stream_name = "CPU-Playback",
+@@ -966,12 +966,15 @@ static int fsl_sai_probe(struct platform_device *pdev)
+ return ret;
+ }
+
++ memcpy(&sai->cpu_dai_drv, &fsl_sai_dai_template,
++ sizeof(fsl_sai_dai_template));
++
+ /* Sync Tx with Rx as default by following old DT binding */
+ sai->synchronous[RX] = true;
+ sai->synchronous[TX] = false;
+- fsl_sai_dai.symmetric_rates = 1;
+- fsl_sai_dai.symmetric_channels = 1;
+- fsl_sai_dai.symmetric_samplebits = 1;
++ sai->cpu_dai_drv.symmetric_rates = 1;
++ sai->cpu_dai_drv.symmetric_channels = 1;
++ sai->cpu_dai_drv.symmetric_samplebits = 1;
+
+ if (of_find_property(np, "fsl,sai-synchronous-rx", NULL) &&
+ of_find_property(np, "fsl,sai-asynchronous", NULL)) {
+@@ -988,9 +991,9 @@ static int fsl_sai_probe(struct platform_device *pdev)
+ /* Discard all settings for asynchronous mode */
+ sai->synchronous[RX] = false;
+ sai->synchronous[TX] = false;
+- fsl_sai_dai.symmetric_rates = 0;
+- fsl_sai_dai.symmetric_channels = 0;
+- fsl_sai_dai.symmetric_samplebits = 0;
++ sai->cpu_dai_drv.symmetric_rates = 0;
++ sai->cpu_dai_drv.symmetric_channels = 0;
++ sai->cpu_dai_drv.symmetric_samplebits = 0;
+ }
+
+ if (of_find_property(np, "fsl,sai-mclk-direction-output", NULL) &&
+@@ -1019,7 +1022,7 @@ static int fsl_sai_probe(struct platform_device *pdev)
+ pm_runtime_enable(&pdev->dev);
+
+ ret = devm_snd_soc_register_component(&pdev->dev, &fsl_component,
+- &fsl_sai_dai, 1);
++ &sai->cpu_dai_drv, 1);
+ if (ret)
+ goto err_pm_disable;
+
+diff --git a/sound/soc/fsl/fsl_sai.h b/sound/soc/fsl/fsl_sai.h
+index 6aba7d28f5f34..677ecfc1ec68f 100644
+--- a/sound/soc/fsl/fsl_sai.h
++++ b/sound/soc/fsl/fsl_sai.h
+@@ -180,6 +180,7 @@ struct fsl_sai {
+ unsigned int bclk_ratio;
+
+ const struct fsl_sai_soc_data *soc_data;
++ struct snd_soc_dai_driver cpu_dai_drv;
+ struct snd_dmaengine_dai_dma_data dma_params_rx;
+ struct snd_dmaengine_dai_dma_data dma_params_tx;
+ };
+diff --git a/sound/soc/fsl/imx-es8328.c b/sound/soc/fsl/imx-es8328.c
+index 15a27a2cd0cae..fad1eb6253d53 100644
+--- a/sound/soc/fsl/imx-es8328.c
++++ b/sound/soc/fsl/imx-es8328.c
+@@ -145,13 +145,13 @@ static int imx_es8328_probe(struct platform_device *pdev)
+ data = devm_kzalloc(dev, sizeof(*data), GFP_KERNEL);
+ if (!data) {
+ ret = -ENOMEM;
+- goto fail;
++ goto put_device;
+ }
+
+ comp = devm_kzalloc(dev, 3 * sizeof(*comp), GFP_KERNEL);
+ if (!comp) {
+ ret = -ENOMEM;
+- goto fail;
++ goto put_device;
+ }
+
+ data->dev = dev;
+@@ -182,12 +182,12 @@ static int imx_es8328_probe(struct platform_device *pdev)
+ ret = snd_soc_of_parse_card_name(&data->card, "model");
+ if (ret) {
+ dev_err(dev, "Unable to parse card name\n");
+- goto fail;
++ goto put_device;
+ }
+ ret = snd_soc_of_parse_audio_routing(&data->card, "audio-routing");
+ if (ret) {
+ dev_err(dev, "Unable to parse routing: %d\n", ret);
+- goto fail;
++ goto put_device;
+ }
+ data->card.num_links = 1;
+ data->card.owner = THIS_MODULE;
+@@ -196,10 +196,12 @@ static int imx_es8328_probe(struct platform_device *pdev)
+ ret = snd_soc_register_card(&data->card);
+ if (ret) {
+ dev_err(dev, "Unable to register: %d\n", ret);
+- goto fail;
++ goto put_device;
+ }
+
+ platform_set_drvdata(pdev, data);
++put_device:
++ put_device(&ssi_pdev->dev);
+ fail:
+ of_node_put(ssi_np);
+ of_node_put(codec_np);
+diff --git a/sound/soc/intel/boards/sof_rt5682.c b/sound/soc/intel/boards/sof_rt5682.c
+index 13a48b0c35aef..11233c3aeadfb 100644
+--- a/sound/soc/intel/boards/sof_rt5682.c
++++ b/sound/soc/intel/boards/sof_rt5682.c
+@@ -118,6 +118,19 @@ static const struct dmi_system_id sof_rt5682_quirk_table[] = {
+ .driver_data = (void *)(SOF_RT5682_MCLK_EN |
+ SOF_RT5682_SSP_CODEC(0)),
+ },
++ {
++ .callback = sof_rt5682_quirk_cb,
++ .matches = {
++ DMI_MATCH(DMI_PRODUCT_FAMILY, "Google_Volteer"),
++ DMI_MATCH(DMI_PRODUCT_NAME, "Terrador"),
++ },
++ .driver_data = (void *)(SOF_RT5682_MCLK_EN |
++ SOF_RT5682_SSP_CODEC(0) |
++ SOF_SPEAKER_AMP_PRESENT |
++ SOF_MAX98373_SPEAKER_AMP_PRESENT |
++ SOF_RT5682_SSP_AMP(2) |
++ SOF_RT5682_NUM_HDMIDEV(4)),
++ },
+ {}
+ };
+
+diff --git a/sound/soc/qcom/lpass-cpu.c b/sound/soc/qcom/lpass-cpu.c
+index e00a4af29c13f..f25da84f175ac 100644
+--- a/sound/soc/qcom/lpass-cpu.c
++++ b/sound/soc/qcom/lpass-cpu.c
+@@ -209,21 +209,6 @@ static int lpass_cpu_daiops_hw_params(struct snd_pcm_substream *substream,
+ return 0;
+ }
+
+-static int lpass_cpu_daiops_hw_free(struct snd_pcm_substream *substream,
+- struct snd_soc_dai *dai)
+-{
+- struct lpass_data *drvdata = snd_soc_dai_get_drvdata(dai);
+- int ret;
+-
+- ret = regmap_write(drvdata->lpaif_map,
+- LPAIF_I2SCTL_REG(drvdata->variant, dai->driver->id),
+- 0);
+- if (ret)
+- dev_err(dai->dev, "error writing to i2sctl reg: %d\n", ret);
+-
+- return ret;
+-}
+-
+ static int lpass_cpu_daiops_prepare(struct snd_pcm_substream *substream,
+ struct snd_soc_dai *dai)
+ {
+@@ -304,7 +289,6 @@ const struct snd_soc_dai_ops asoc_qcom_lpass_cpu_dai_ops = {
+ .startup = lpass_cpu_daiops_startup,
+ .shutdown = lpass_cpu_daiops_shutdown,
+ .hw_params = lpass_cpu_daiops_hw_params,
+- .hw_free = lpass_cpu_daiops_hw_free,
+ .prepare = lpass_cpu_daiops_prepare,
+ .trigger = lpass_cpu_daiops_trigger,
+ };
+diff --git a/sound/soc/qcom/lpass-platform.c b/sound/soc/qcom/lpass-platform.c
+index 34f7fd1bab1cf..693839deebfe8 100644
+--- a/sound/soc/qcom/lpass-platform.c
++++ b/sound/soc/qcom/lpass-platform.c
+@@ -61,7 +61,7 @@ static int lpass_platform_pcmops_open(struct snd_soc_component *component,
+ int ret, dma_ch, dir = substream->stream;
+ struct lpass_pcm_data *data;
+
+- data = devm_kzalloc(soc_runtime->dev, sizeof(*data), GFP_KERNEL);
++ data = kzalloc(sizeof(*data), GFP_KERNEL);
+ if (!data)
+ return -ENOMEM;
+
+@@ -118,6 +118,7 @@ static int lpass_platform_pcmops_close(struct snd_soc_component *component,
+ if (v->free_dma_channel)
+ v->free_dma_channel(drvdata, data->dma_ch);
+
++ kfree(data);
+ return 0;
+ }
+
+diff --git a/sound/soc/soc-topology.c b/sound/soc/soc-topology.c
+index 6eaa00c210117..a5460155b3f64 100644
+--- a/sound/soc/soc-topology.c
++++ b/sound/soc/soc-topology.c
+@@ -592,6 +592,17 @@ static int soc_tplg_kcontrol_bind_io(struct snd_soc_tplg_ctl_hdr *hdr,
+ k->info = snd_soc_bytes_info_ext;
+ k->tlv.c = snd_soc_bytes_tlv_callback;
+
++ /*
++ * When a topology-based implementation abuses the
++ * control interface and uses bytes_ext controls of
++ * more than 512 bytes, we need to disable the size
++ * checks, otherwise accesses to such controls will
++ * return an -EINVAL error and prevent the card from
++ * being configured.
++ */
++ if (IS_ENABLED(CONFIG_SND_CTL_VALIDATION) && sbe->max > 512)
++ k->access |= SNDRV_CTL_ELEM_ACCESS_SKIP_CHECK;
++
+ ext_ops = tplg->bytes_ext_ops;
+ num_ops = tplg->bytes_ext_ops_count;
+ for (i = 0; i < num_ops; i++) {
+diff --git a/sound/soc/sof/control.c b/sound/soc/sof/control.c
+index 186eea105bb15..009938d45ddd9 100644
+--- a/sound/soc/sof/control.c
++++ b/sound/soc/sof/control.c
+@@ -298,6 +298,10 @@ int snd_sof_bytes_ext_put(struct snd_kcontrol *kcontrol,
+ const struct snd_ctl_tlv __user *tlvd =
+ (const struct snd_ctl_tlv __user *)binary_data;
+
++ /* make sure we have at least a header */
++ if (size < sizeof(struct snd_ctl_tlv))
++ return -EINVAL;
++
+ /*
+ * The beginning of bytes data contains a header from where
+ * the length (as bytes) is needed to know the correct copy
+@@ -306,6 +310,13 @@ int snd_sof_bytes_ext_put(struct snd_kcontrol *kcontrol,
+ if (copy_from_user(&header, tlvd, sizeof(const struct snd_ctl_tlv)))
+ return -EFAULT;
+
++ /* make sure TLV info is consistent */
++ if (header.length + sizeof(struct snd_ctl_tlv) > size) {
++ dev_err_ratelimited(scomp->dev, "error: inconsistent TLV, data %d + header %zu > %d\n",
++ header.length, sizeof(struct snd_ctl_tlv), size);
++ return -EINVAL;
++ }
++
+ /* be->max is coming from topology */
+ if (header.length > be->max) {
+ dev_err_ratelimited(scomp->dev, "error: Bytes data size %d exceeds max %d.\n",
+diff --git a/sound/soc/sof/intel/hda.c b/sound/soc/sof/intel/hda.c
+index 63ca920c8e6e0..7152e6d1cf673 100644
+--- a/sound/soc/sof/intel/hda.c
++++ b/sound/soc/sof/intel/hda.c
+@@ -1179,7 +1179,13 @@ void hda_machine_select(struct snd_sof_dev *sdev)
+
+ mach = snd_soc_acpi_find_machine(desc->machines);
+ if (mach) {
+- sof_pdata->tplg_filename = mach->sof_tplg_filename;
++ /*
++ * If tplg file name is overridden, use it instead of
++ * the one set in mach table
++ */
++ if (!sof_pdata->tplg_filename)
++ sof_pdata->tplg_filename = mach->sof_tplg_filename;
++
+ sof_pdata->machine = mach;
+
+ if (mach->link_mask) {
+diff --git a/sound/soc/sof/sof-pci-dev.c b/sound/soc/sof/sof-pci-dev.c
+index aa3532ba14349..f3a8140773db5 100644
+--- a/sound/soc/sof/sof-pci-dev.c
++++ b/sound/soc/sof/sof-pci-dev.c
+@@ -35,8 +35,28 @@ static int sof_pci_debug;
+ module_param_named(sof_pci_debug, sof_pci_debug, int, 0444);
+ MODULE_PARM_DESC(sof_pci_debug, "SOF PCI debug options (0x0 all off)");
+
++static const char *sof_override_tplg_name;
++
+ #define SOF_PCI_DISABLE_PM_RUNTIME BIT(0)
+
++static int sof_tplg_cb(const struct dmi_system_id *id)
++{
++ sof_override_tplg_name = id->driver_data;
++ return 1;
++}
++
++static const struct dmi_system_id sof_tplg_table[] = {
++ {
++ .callback = sof_tplg_cb,
++ .matches = {
++ DMI_MATCH(DMI_PRODUCT_FAMILY, "Google_Volteer"),
++ DMI_MATCH(DMI_PRODUCT_NAME, "Terrador"),
++ },
++ .driver_data = "sof-tgl-rt5682-ssp0-max98373-ssp2.tplg",
++ },
++ {}
++};
++
+ static const struct dmi_system_id community_key_platforms[] = {
+ {
+ .ident = "Up Squared",
+@@ -347,6 +367,10 @@ static int sof_pci_probe(struct pci_dev *pci,
+ sof_pdata->tplg_filename_prefix =
+ sof_pdata->desc->default_tplg_path;
+
++ dmi_check_system(sof_tplg_table);
++ if (sof_override_tplg_name)
++ sof_pdata->tplg_filename = sof_override_tplg_name;
++
+ #if IS_ENABLED(CONFIG_SND_SOC_SOF_PROBE_WORK_QUEUE)
+ /* set callback to enable runtime_pm */
+ sof_pdata->sof_probe_complete = sof_pci_probe_complete;
+diff --git a/sound/usb/format.c b/sound/usb/format.c
+index 1b28d01d1f4cd..3bfead393aa34 100644
+--- a/sound/usb/format.c
++++ b/sound/usb/format.c
+@@ -406,6 +406,7 @@ static int line6_parse_audio_format_rates_quirk(struct snd_usb_audio *chip,
+ case USB_ID(0x0e41, 0x4242): /* Line6 Helix Rack */
+ case USB_ID(0x0e41, 0x4244): /* Line6 Helix LT */
+ case USB_ID(0x0e41, 0x4246): /* Line6 HX-Stomp */
++ case USB_ID(0x0e41, 0x4247): /* Line6 Pod Go */
+ case USB_ID(0x0e41, 0x4248): /* Line6 Helix >= fw 2.82 */
+ case USB_ID(0x0e41, 0x4249): /* Line6 Helix Rack >= fw 2.82 */
+ case USB_ID(0x0e41, 0x424a): /* Line6 Helix LT >= fw 2.82 */
+diff --git a/tools/build/Makefile.feature b/tools/build/Makefile.feature
+index e7818b44b48ee..6e5c907680b1a 100644
+--- a/tools/build/Makefile.feature
++++ b/tools/build/Makefile.feature
+@@ -38,8 +38,6 @@ FEATURE_TESTS_BASIC := \
+ get_current_dir_name \
+ gettid \
+ glibc \
+- gtk2 \
+- gtk2-infobar \
+ libbfd \
+ libcap \
+ libelf \
+@@ -81,6 +79,8 @@ FEATURE_TESTS_EXTRA := \
+ compile-32 \
+ compile-x32 \
+ cplus-demangle \
++ gtk2 \
++ gtk2-infobar \
+ hello \
+ libbabeltrace \
+ libbfd-liberty \
+@@ -110,7 +110,6 @@ FEATURE_DISPLAY ?= \
+ dwarf \
+ dwarf_getlocations \
+ glibc \
+- gtk2 \
+ libbfd \
+ libcap \
+ libelf \
+diff --git a/tools/build/feature/Makefile b/tools/build/feature/Makefile
+index 93b590d81209c..85d341e25eaec 100644
+--- a/tools/build/feature/Makefile
++++ b/tools/build/feature/Makefile
+@@ -89,7 +89,7 @@ __BUILDXX = $(CXX) $(CXXFLAGS) -MD -Wall -Werror -o $@ $(patsubst %.bin,%.cpp,$(
+ ###############################
+
+ $(OUTPUT)test-all.bin:
+- $(BUILD) -fstack-protector-all -O2 -D_FORTIFY_SOURCE=2 -ldw -lelf -lnuma -lelf -I/usr/include/slang -lslang $(shell $(PKG_CONFIG) --libs --cflags gtk+-2.0 2>/dev/null) $(FLAGS_PERL_EMBED) $(FLAGS_PYTHON_EMBED) -DPACKAGE='"perf"' -lbfd -ldl -lz -llzma
++ $(BUILD) -fstack-protector-all -O2 -D_FORTIFY_SOURCE=2 -ldw -lelf -lnuma -lelf -I/usr/include/slang -lslang $(FLAGS_PERL_EMBED) $(FLAGS_PYTHON_EMBED) -DPACKAGE='"perf"' -lbfd -ldl -lz -llzma -lzstd
+
+ $(OUTPUT)test-hello.bin:
+ $(BUILD)
+diff --git a/tools/build/feature/test-all.c b/tools/build/feature/test-all.c
+index 5479e543b1947..d2623992ccd61 100644
+--- a/tools/build/feature/test-all.c
++++ b/tools/build/feature/test-all.c
+@@ -78,14 +78,6 @@
+ # include "test-libslang.c"
+ #undef main
+
+-#define main main_test_gtk2
+-# include "test-gtk2.c"
+-#undef main
+-
+-#define main main_test_gtk2_infobar
+-# include "test-gtk2-infobar.c"
+-#undef main
+-
+ #define main main_test_libbfd
+ # include "test-libbfd.c"
+ #undef main
+@@ -205,8 +197,6 @@ int main(int argc, char *argv[])
+ main_test_libelf_getshdrstrndx();
+ main_test_libunwind();
+ main_test_libslang();
+- main_test_gtk2(argc, argv);
+- main_test_gtk2_infobar(argc, argv);
+ main_test_libbfd();
+ main_test_backtrace();
+ main_test_libnuma();
+diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
+index 236c91aff48f8..3e71c2f69afe8 100644
+--- a/tools/lib/bpf/libbpf.c
++++ b/tools/lib/bpf/libbpf.c
+@@ -3677,6 +3677,36 @@ static int bpf_object__create_map(struct bpf_object *obj, struct bpf_map *map)
+ return 0;
+ }
+
++static int init_map_slots(struct bpf_map *map)
++{
++ const struct bpf_map *targ_map;
++ unsigned int i;
++ int fd, err;
++
++ for (i = 0; i < map->init_slots_sz; i++) {
++ if (!map->init_slots[i])
++ continue;
++
++ targ_map = map->init_slots[i];
++ fd = bpf_map__fd(targ_map);
++ err = bpf_map_update_elem(map->fd, &i, &fd, 0);
++ if (err) {
++ err = -errno;
++ pr_warn("map '%s': failed to initialize slot [%d] to map '%s' fd=%d: %d\n",
++ map->name, i, targ_map->name,
++ fd, err);
++ return err;
++ }
++ pr_debug("map '%s': slot [%d] set to map '%s' fd=%d\n",
++ map->name, i, targ_map->name, fd);
++ }
++
++ zfree(&map->init_slots);
++ map->init_slots_sz = 0;
++
++ return 0;
++}
++
+ static int
+ bpf_object__create_maps(struct bpf_object *obj)
+ {
+@@ -3719,28 +3749,11 @@ bpf_object__create_maps(struct bpf_object *obj)
+ }
+
+ if (map->init_slots_sz) {
+- for (j = 0; j < map->init_slots_sz; j++) {
+- const struct bpf_map *targ_map;
+- int fd;
+-
+- if (!map->init_slots[j])
+- continue;
+-
+- targ_map = map->init_slots[j];
+- fd = bpf_map__fd(targ_map);
+- err = bpf_map_update_elem(map->fd, &j, &fd, 0);
+- if (err) {
+- err = -errno;
+- pr_warn("map '%s': failed to initialize slot [%d] to map '%s' fd=%d: %d\n",
+- map->name, j, targ_map->name,
+- fd, err);
+- goto err_out;
+- }
+- pr_debug("map '%s': slot [%d] set to map '%s' fd=%d\n",
+- map->name, j, targ_map->name, fd);
++ err = init_map_slots(map);
++ if (err < 0) {
++ zclose(map->fd);
++ goto err_out;
+ }
+- zfree(&map->init_slots);
+- map->init_slots_sz = 0;
+ }
+
+ if (map->pin_path && !map->pinned) {
+@@ -5253,7 +5266,7 @@ retry_load:
+ free(log_buf);
+ goto retry_load;
+ }
+- ret = -errno;
++ ret = errno ? -errno : -LIBBPF_ERRNO__LOAD;
+ cp = libbpf_strerror_r(errno, errmsg, sizeof(errmsg));
+ pr_warn("load bpf program failed: %s\n", cp);
+ pr_perm_msg(ret);
+diff --git a/tools/lib/perf/evlist.c b/tools/lib/perf/evlist.c
+index 6a875a0f01bb0..233592c5a52c7 100644
+--- a/tools/lib/perf/evlist.c
++++ b/tools/lib/perf/evlist.c
+@@ -45,6 +45,9 @@ static void __perf_evlist__propagate_maps(struct perf_evlist *evlist,
+ if (!evsel->own_cpus || evlist->has_user_cpus) {
+ perf_cpu_map__put(evsel->cpus);
+ evsel->cpus = perf_cpu_map__get(evlist->cpus);
++ } else if (!evsel->system_wide && perf_cpu_map__empty(evlist->cpus)) {
++ perf_cpu_map__put(evsel->cpus);
++ evsel->cpus = perf_cpu_map__get(evlist->cpus);
+ } else if (evsel->cpus != evsel->own_cpus) {
+ perf_cpu_map__put(evsel->cpus);
+ evsel->cpus = perf_cpu_map__get(evsel->own_cpus);
+diff --git a/tools/perf/Makefile.config b/tools/perf/Makefile.config
+index 513633809c81e..ab6dbd8ef6cf6 100644
+--- a/tools/perf/Makefile.config
++++ b/tools/perf/Makefile.config
+@@ -716,12 +716,14 @@ ifndef NO_SLANG
+ endif
+ endif
+
+-ifndef NO_GTK2
++ifdef GTK2
+ FLAGS_GTK2=$(CFLAGS) $(LDFLAGS) $(EXTLIBS) $(shell $(PKG_CONFIG) --libs --cflags gtk+-2.0 2>/dev/null)
++ $(call feature_check,gtk2)
+ ifneq ($(feature-gtk2), 1)
+ msg := $(warning GTK2 not found, disables GTK2 support. Please install gtk2-devel or libgtk2.0-dev);
+ NO_GTK2 := 1
+ else
++ $(call feature_check,gtk2-infobar)
+ ifeq ($(feature-gtk2-infobar), 1)
+ GTK_CFLAGS := -DHAVE_GTK_INFO_BAR_SUPPORT
+ endif
+diff --git a/tools/perf/Makefile.perf b/tools/perf/Makefile.perf
+index 86dbb51bb2723..bc45b1a61d3a3 100644
+--- a/tools/perf/Makefile.perf
++++ b/tools/perf/Makefile.perf
+@@ -48,7 +48,7 @@ include ../scripts/utilities.mak
+ #
+ # Define NO_SLANG if you do not want TUI support.
+ #
+-# Define NO_GTK2 if you do not want GTK+ GUI support.
++# Define GTK2 if you want GTK+ GUI support.
+ #
+ # Define NO_DEMANGLE if you do not want C++ symbol demangling.
+ #
+@@ -384,7 +384,7 @@ ifneq ($(OUTPUT),)
+ CFLAGS += -I$(OUTPUT)
+ endif
+
+-ifndef NO_GTK2
++ifdef GTK2
+ ALL_PROGRAMS += $(OUTPUT)libperf-gtk.so
+ GTK_IN := $(OUTPUT)gtk-in.o
+ endif
+@@ -876,7 +876,7 @@ check: $(OUTPUT)common-cmds.h
+
+ ### Installation rules
+
+-ifndef NO_GTK2
++ifdef GTK2
+ install-gtk: $(OUTPUT)libperf-gtk.so
+ $(call QUIET_INSTALL, 'GTK UI') \
+ $(INSTALL) -d -m 755 '$(DESTDIR_SQ)$(libdir_SQ)'; \
+diff --git a/tools/perf/builtin-stat.c b/tools/perf/builtin-stat.c
+index 6e2502de755a8..6494383687f89 100644
+--- a/tools/perf/builtin-stat.c
++++ b/tools/perf/builtin-stat.c
+@@ -1963,8 +1963,10 @@ static void setup_system_wide(int forks)
+ struct evsel *counter;
+
+ evlist__for_each_entry(evsel_list, counter) {
+- if (!counter->core.system_wide)
++ if (!counter->core.system_wide &&
++ strcmp(counter->name, "duration_time")) {
+ return;
++ }
+ }
+
+ if (evsel_list->core.nr_entries)
+diff --git a/tools/perf/builtin-trace.c b/tools/perf/builtin-trace.c
+index 4cbb64edc9983..83e8cd663b4e4 100644
+--- a/tools/perf/builtin-trace.c
++++ b/tools/perf/builtin-trace.c
+@@ -1762,7 +1762,11 @@ static int trace__read_syscall_info(struct trace *trace, int id)
+ if (table == NULL)
+ return -ENOMEM;
+
+- memset(table + trace->sctbl->syscalls.max_id, 0, (id - trace->sctbl->syscalls.max_id) * sizeof(*sc));
++ // Need to memset from offset 0 and +1 members if brand new
++ if (trace->syscalls.table == NULL)
++ memset(table, 0, (id + 1) * sizeof(*sc));
++ else
++ memset(table + trace->sctbl->syscalls.max_id + 1, 0, (id - trace->sctbl->syscalls.max_id) * sizeof(*sc));
+
+ trace->syscalls.table = table;
+ trace->sctbl->syscalls.max_id = id;
+diff --git a/tools/perf/builtin-version.c b/tools/perf/builtin-version.c
+index 05cf2af9e2c27..d09ec2f030719 100644
+--- a/tools/perf/builtin-version.c
++++ b/tools/perf/builtin-version.c
+@@ -60,7 +60,6 @@ static void library_status(void)
+ STATUS(HAVE_DWARF_SUPPORT, dwarf);
+ STATUS(HAVE_DWARF_GETLOCATIONS_SUPPORT, dwarf_getlocations);
+ STATUS(HAVE_GLIBC_SUPPORT, glibc);
+- STATUS(HAVE_GTK2_SUPPORT, gtk2);
+ #ifndef HAVE_SYSCALL_TABLE_SUPPORT
+ STATUS(HAVE_LIBAUDIT_SUPPORT, libaudit);
+ #endif
+diff --git a/tools/perf/util/intel-pt.c b/tools/perf/util/intel-pt.c
+index 9357b5f62c273..bc88175e377ce 100644
+--- a/tools/perf/util/intel-pt.c
++++ b/tools/perf/util/intel-pt.c
+@@ -1071,6 +1071,8 @@ static void intel_pt_set_pid_tid_cpu(struct intel_pt *pt,
+
+ if (queue->tid == -1 || pt->have_sched_switch) {
+ ptq->tid = machine__get_current_tid(pt->machine, ptq->cpu);
++ if (ptq->tid == -1)
++ ptq->pid = -1;
+ thread__zput(ptq->thread);
+ }
+
+@@ -2561,10 +2563,8 @@ static int intel_pt_context_switch(struct intel_pt *pt, union perf_event *event,
+ tid = sample->tid;
+ }
+
+- if (tid == -1) {
+- pr_err("context_switch event has no tid\n");
+- return -EINVAL;
+- }
++ if (tid == -1)
++ intel_pt_log("context_switch event has no tid\n");
+
+ intel_pt_log("context_switch: cpu %d pid %d tid %d time %"PRIu64" tsc %#"PRIx64"\n",
+ cpu, pid, tid, sample->time, perf_time_to_tsc(sample->time,
+diff --git a/tools/testing/radix-tree/idr-test.c b/tools/testing/radix-tree/idr-test.c
+index 8995092d541ec..3b796dd5e5772 100644
+--- a/tools/testing/radix-tree/idr-test.c
++++ b/tools/testing/radix-tree/idr-test.c
+@@ -523,8 +523,27 @@ static void *ida_random_fn(void *arg)
+ return NULL;
+ }
+
++static void *ida_leak_fn(void *arg)
++{
++ struct ida *ida = arg;
++ time_t s = time(NULL);
++ int i, ret;
++
++ rcu_register_thread();
++
++ do for (i = 0; i < 1000; i++) {
++ ret = ida_alloc_range(ida, 128, 128, GFP_KERNEL);
++ if (ret >= 0)
++ ida_free(ida, 128);
++ } while (time(NULL) < s + 2);
++
++ rcu_unregister_thread();
++ return NULL;
++}
++
+ void ida_thread_tests(void)
+ {
++ DEFINE_IDA(ida);
+ pthread_t threads[20];
+ int i;
+
+@@ -536,6 +555,16 @@ void ida_thread_tests(void)
+
+ while (i--)
+ pthread_join(threads[i], NULL);
++
++ for (i = 0; i < ARRAY_SIZE(threads); i++)
++ if (pthread_create(&threads[i], NULL, ida_leak_fn, &ida)) {
++ perror("creating ida thread");
++ exit(1);
++ }
++
++ while (i--)
++ pthread_join(threads[i], NULL);
++ assert(ida_is_empty(&ida));
+ }
+
+ void ida_tests(void)
+diff --git a/tools/testing/selftests/bpf/bench.c b/tools/testing/selftests/bpf/bench.c
+index 944ad4721c83c..da14eaac71d03 100644
+--- a/tools/testing/selftests/bpf/bench.c
++++ b/tools/testing/selftests/bpf/bench.c
+@@ -311,7 +311,6 @@ extern const struct bench bench_rename_kretprobe;
+ extern const struct bench bench_rename_rawtp;
+ extern const struct bench bench_rename_fentry;
+ extern const struct bench bench_rename_fexit;
+-extern const struct bench bench_rename_fmodret;
+ extern const struct bench bench_trig_base;
+ extern const struct bench bench_trig_tp;
+ extern const struct bench bench_trig_rawtp;
+@@ -332,7 +331,6 @@ static const struct bench *benchs[] = {
+ &bench_rename_rawtp,
+ &bench_rename_fentry,
+ &bench_rename_fexit,
+- &bench_rename_fmodret,
+ &bench_trig_base,
+ &bench_trig_tp,
+ &bench_trig_rawtp,
+@@ -462,4 +460,3 @@ int main(int argc, char **argv)
+
+ return 0;
+ }
+-
+diff --git a/tools/testing/selftests/bpf/benchs/bench_rename.c b/tools/testing/selftests/bpf/benchs/bench_rename.c
+index e74cff40f4fea..a967674098ada 100644
+--- a/tools/testing/selftests/bpf/benchs/bench_rename.c
++++ b/tools/testing/selftests/bpf/benchs/bench_rename.c
+@@ -106,12 +106,6 @@ static void setup_fexit()
+ attach_bpf(ctx.skel->progs.prog5);
+ }
+
+-static void setup_fmodret()
+-{
+- setup_ctx();
+- attach_bpf(ctx.skel->progs.prog6);
+-}
+-
+ static void *consumer(void *input)
+ {
+ return NULL;
+@@ -182,14 +176,3 @@ const struct bench bench_rename_fexit = {
+ .report_progress = hits_drops_report_progress,
+ .report_final = hits_drops_report_final,
+ };
+-
+-const struct bench bench_rename_fmodret = {
+- .name = "rename-fmodret",
+- .validate = validate,
+- .setup = setup_fmodret,
+- .producer_thread = producer,
+- .consumer_thread = consumer,
+- .measure = measure,
+- .report_progress = hits_drops_report_progress,
+- .report_final = hits_drops_report_final,
+-};
+diff --git a/tools/testing/selftests/bpf/prog_tests/sk_assign.c b/tools/testing/selftests/bpf/prog_tests/sk_assign.c
+index 47fa04adc1471..21c2d265c3e8e 100644
+--- a/tools/testing/selftests/bpf/prog_tests/sk_assign.c
++++ b/tools/testing/selftests/bpf/prog_tests/sk_assign.c
+@@ -265,7 +265,7 @@ void test_sk_assign(void)
+ TEST("ipv6 udp port redir", AF_INET6, SOCK_DGRAM, false),
+ TEST("ipv6 udp addr redir", AF_INET6, SOCK_DGRAM, true),
+ };
+- int server = -1;
++ __s64 server = -1;
+ int server_map;
+ int self_net;
+
+diff --git a/tools/testing/selftests/bpf/prog_tests/sockopt_sk.c b/tools/testing/selftests/bpf/prog_tests/sockopt_sk.c
+index 5f54c6aec7f07..b25c9c45c1484 100644
+--- a/tools/testing/selftests/bpf/prog_tests/sockopt_sk.c
++++ b/tools/testing/selftests/bpf/prog_tests/sockopt_sk.c
+@@ -45,9 +45,9 @@ static int getsetsockopt(void)
+ goto err;
+ }
+
+- if (*(int *)big_buf != 0x08) {
++ if (*big_buf != 0x08) {
+ log_err("Unexpected getsockopt(IP_TOS) optval 0x%x != 0x08",
+- *(int *)big_buf);
++ (int)*big_buf);
+ goto err;
+ }
+
+diff --git a/tools/testing/selftests/bpf/prog_tests/test_overhead.c b/tools/testing/selftests/bpf/prog_tests/test_overhead.c
+index 2702df2b23433..9966685866fdf 100644
+--- a/tools/testing/selftests/bpf/prog_tests/test_overhead.c
++++ b/tools/testing/selftests/bpf/prog_tests/test_overhead.c
+@@ -61,10 +61,9 @@ void test_test_overhead(void)
+ const char *raw_tp_name = "raw_tp/task_rename";
+ const char *fentry_name = "fentry/__set_task_comm";
+ const char *fexit_name = "fexit/__set_task_comm";
+- const char *fmodret_name = "fmod_ret/__set_task_comm";
+ const char *kprobe_func = "__set_task_comm";
+ struct bpf_program *kprobe_prog, *kretprobe_prog, *raw_tp_prog;
+- struct bpf_program *fentry_prog, *fexit_prog, *fmodret_prog;
++ struct bpf_program *fentry_prog, *fexit_prog;
+ struct bpf_object *obj;
+ struct bpf_link *link;
+ int err, duration = 0;
+@@ -97,11 +96,6 @@ void test_test_overhead(void)
+ if (CHECK(!fexit_prog, "find_probe",
+ "prog '%s' not found\n", fexit_name))
+ goto cleanup;
+- fmodret_prog = bpf_object__find_program_by_title(obj, fmodret_name);
+- if (CHECK(!fmodret_prog, "find_probe",
+- "prog '%s' not found\n", fmodret_name))
+- goto cleanup;
+-
+ err = bpf_object__load(obj);
+ if (CHECK(err, "obj_load", "err %d\n", err))
+ goto cleanup;
+@@ -148,12 +142,6 @@ void test_test_overhead(void)
+ test_run("fexit");
+ bpf_link__destroy(link);
+
+- /* attach fmod_ret */
+- link = bpf_program__attach_trace(fmodret_prog);
+- if (CHECK(IS_ERR(link), "attach fmod_ret", "err %ld\n", PTR_ERR(link)))
+- goto cleanup;
+- test_run("fmod_ret");
+- bpf_link__destroy(link);
+ cleanup:
+ prctl(PR_SET_NAME, comm, 0L, 0L, 0L);
+ bpf_object__close(obj);
+diff --git a/tools/testing/selftests/bpf/progs/test_overhead.c b/tools/testing/selftests/bpf/progs/test_overhead.c
+index 42403d088abc9..abb7344b531f4 100644
+--- a/tools/testing/selftests/bpf/progs/test_overhead.c
++++ b/tools/testing/selftests/bpf/progs/test_overhead.c
+@@ -39,10 +39,4 @@ int BPF_PROG(prog5, struct task_struct *tsk, const char *buf, bool exec)
+ return 0;
+ }
+
+-SEC("fmod_ret/__set_task_comm")
+-int BPF_PROG(prog6, struct task_struct *tsk, const char *buf, bool exec)
+-{
+- return !tsk;
+-}
+-
+ char _license[] SEC("license") = "GPL";
+diff --git a/tools/testing/selftests/bpf/progs/test_sysctl_loop1.c b/tools/testing/selftests/bpf/progs/test_sysctl_loop1.c
+index 458b0d69133e4..553a282d816ab 100644
+--- a/tools/testing/selftests/bpf/progs/test_sysctl_loop1.c
++++ b/tools/testing/selftests/bpf/progs/test_sysctl_loop1.c
+@@ -18,11 +18,11 @@
+ #define MAX_ULONG_STR_LEN 7
+ #define MAX_VALUE_STR_LEN (TCP_MEM_LOOPS * MAX_ULONG_STR_LEN)
+
++const char tcp_mem_name[] = "net/ipv4/tcp_mem/very_very_very_very_long_pointless_string";
+ static __always_inline int is_tcp_mem(struct bpf_sysctl *ctx)
+ {
+- volatile char tcp_mem_name[] = "net/ipv4/tcp_mem/very_very_very_very_long_pointless_string";
+ unsigned char i;
+- char name[64];
++ char name[sizeof(tcp_mem_name)];
+ int ret;
+
+ memset(name, 0, sizeof(name));
+diff --git a/tools/testing/selftests/bpf/progs/test_sysctl_loop2.c b/tools/testing/selftests/bpf/progs/test_sysctl_loop2.c
+index b2e6f9b0894d8..2b64bc563a12e 100644
+--- a/tools/testing/selftests/bpf/progs/test_sysctl_loop2.c
++++ b/tools/testing/selftests/bpf/progs/test_sysctl_loop2.c
+@@ -18,11 +18,11 @@
+ #define MAX_ULONG_STR_LEN 7
+ #define MAX_VALUE_STR_LEN (TCP_MEM_LOOPS * MAX_ULONG_STR_LEN)
+
++const char tcp_mem_name[] = "net/ipv4/tcp_mem/very_very_very_very_long_pointless_string_to_stress_byte_loop";
+ static __attribute__((noinline)) int is_tcp_mem(struct bpf_sysctl *ctx)
+ {
+- volatile char tcp_mem_name[] = "net/ipv4/tcp_mem/very_very_very_very_long_pointless_string_to_stress_byte_loop";
+ unsigned char i;
+- char name[64];
++ char name[sizeof(tcp_mem_name)];
+ int ret;
+
+ memset(name, 0, sizeof(name));
+diff --git a/tools/testing/selftests/bpf/progs/test_vmlinux.c b/tools/testing/selftests/bpf/progs/test_vmlinux.c
+index 5611b564d3b1c..f54b2293c490f 100644
+--- a/tools/testing/selftests/bpf/progs/test_vmlinux.c
++++ b/tools/testing/selftests/bpf/progs/test_vmlinux.c
+@@ -19,12 +19,14 @@ SEC("tp/syscalls/sys_enter_nanosleep")
+ int handle__tp(struct trace_event_raw_sys_enter *args)
+ {
+ struct __kernel_timespec *ts;
++ long tv_nsec;
+
+ if (args->id != __NR_nanosleep)
+ return 0;
+
+ ts = (void *)args->args[0];
+- if (BPF_CORE_READ(ts, tv_nsec) != MY_TV_NSEC)
++ if (bpf_probe_read_user(&tv_nsec, sizeof(ts->tv_nsec), &ts->tv_nsec) ||
++ tv_nsec != MY_TV_NSEC)
+ return 0;
+
+ tp_called = true;
+@@ -35,12 +37,14 @@ SEC("raw_tp/sys_enter")
+ int BPF_PROG(handle__raw_tp, struct pt_regs *regs, long id)
+ {
+ struct __kernel_timespec *ts;
++ long tv_nsec;
+
+ if (id != __NR_nanosleep)
+ return 0;
+
+ ts = (void *)PT_REGS_PARM1_CORE(regs);
+- if (BPF_CORE_READ(ts, tv_nsec) != MY_TV_NSEC)
++ if (bpf_probe_read_user(&tv_nsec, sizeof(ts->tv_nsec), &ts->tv_nsec) ||
++ tv_nsec != MY_TV_NSEC)
+ return 0;
+
+ raw_tp_called = true;
+@@ -51,12 +55,14 @@ SEC("tp_btf/sys_enter")
+ int BPF_PROG(handle__tp_btf, struct pt_regs *regs, long id)
+ {
+ struct __kernel_timespec *ts;
++ long tv_nsec;
+
+ if (id != __NR_nanosleep)
+ return 0;
+
+ ts = (void *)PT_REGS_PARM1_CORE(regs);
+- if (BPF_CORE_READ(ts, tv_nsec) != MY_TV_NSEC)
++ if (bpf_probe_read_user(&tv_nsec, sizeof(ts->tv_nsec), &ts->tv_nsec) ||
++ tv_nsec != MY_TV_NSEC)
+ return 0;
+
+ tp_btf_called = true;
+diff --git a/tools/testing/selftests/ftrace/test.d/trigger/inter-event/trigger-inter-event-combined-hist.tc b/tools/testing/selftests/ftrace/test.d/trigger/inter-event/trigger-inter-event-combined-hist.tc
+index 7449a4b8f1f9a..9098f1e7433fd 100644
+--- a/tools/testing/selftests/ftrace/test.d/trigger/inter-event/trigger-inter-event-combined-hist.tc
++++ b/tools/testing/selftests/ftrace/test.d/trigger/inter-event/trigger-inter-event-combined-hist.tc
+@@ -25,12 +25,12 @@ echo 'wakeup_latency u64 lat pid_t pid' >> synthetic_events
+ echo 'hist:keys=pid:ts1=common_timestamp.usecs if comm=="ping"' >> events/sched/sched_wakeup/trigger
+ echo 'hist:keys=next_pid:wakeup_lat=common_timestamp.usecs-$ts1:onmatch(sched.sched_wakeup).wakeup_latency($wakeup_lat,next_pid) if next_comm=="ping"' > events/sched/sched_switch/trigger
+
+-echo 'waking+wakeup_latency u64 lat; pid_t pid' >> synthetic_events
+-echo 'hist:keys=pid,lat:sort=pid,lat:ww_lat=$waking_lat+$wakeup_lat:onmatch(synthetic.wakeup_latency).waking+wakeup_latency($ww_lat,pid)' >> events/synthetic/wakeup_latency/trigger
+-echo 'hist:keys=pid,lat:sort=pid,lat' >> events/synthetic/waking+wakeup_latency/trigger
++echo 'waking_plus_wakeup_latency u64 lat; pid_t pid' >> synthetic_events
++echo 'hist:keys=pid,lat:sort=pid,lat:ww_lat=$waking_lat+$wakeup_lat:onmatch(synthetic.wakeup_latency).waking_plus_wakeup_latency($ww_lat,pid)' >> events/synthetic/wakeup_latency/trigger
++echo 'hist:keys=pid,lat:sort=pid,lat' >> events/synthetic/waking_plus_wakeup_latency/trigger
+
+ ping $LOCALHOST -c 3
+-if ! grep -q "pid:" events/synthetic/waking+wakeup_latency/hist; then
++if ! grep -q "pid:" events/synthetic/waking_plus_wakeup_latency/hist; then
+ fail "Failed to create combined histogram"
+ fi
+
+diff --git a/tools/testing/selftests/lkdtm/run.sh b/tools/testing/selftests/lkdtm/run.sh
+index 8383eb89d88a9..bb7a1775307b8 100755
+--- a/tools/testing/selftests/lkdtm/run.sh
++++ b/tools/testing/selftests/lkdtm/run.sh
+@@ -82,7 +82,7 @@ dmesg > "$DMESG"
+ ($SHELL -c 'cat <(echo '"$test"') >'"$TRIGGER" 2>/dev/null) || true
+
+ # Record and dump the results
+-dmesg | diff --changed-group-format='%>' --unchanged-group-format='' "$DMESG" - > "$LOG" || true
++dmesg | comm --nocheck-order -13 "$DMESG" - > "$LOG" || true
+
+ cat "$LOG"
+ # Check for expected output
+diff --git a/tools/testing/selftests/net/config b/tools/testing/selftests/net/config
+index 3b42c06b59858..c5e50ab2ced60 100644
+--- a/tools/testing/selftests/net/config
++++ b/tools/testing/selftests/net/config
+@@ -31,3 +31,4 @@ CONFIG_NET_SCH_ETF=m
+ CONFIG_NET_SCH_NETEM=y
+ CONFIG_TEST_BLACKHOLE_DEV=m
+ CONFIG_KALLSYMS=y
++CONFIG_NET_FOU=m
+diff --git a/tools/testing/selftests/net/forwarding/vxlan_asymmetric.sh b/tools/testing/selftests/net/forwarding/vxlan_asymmetric.sh
+index a0b5f57d6bd31..0727e2012b685 100755
+--- a/tools/testing/selftests/net/forwarding/vxlan_asymmetric.sh
++++ b/tools/testing/selftests/net/forwarding/vxlan_asymmetric.sh
+@@ -215,10 +215,16 @@ switch_create()
+
+ bridge fdb add 00:00:5e:00:01:01 dev br1 self local vlan 10
+ bridge fdb add 00:00:5e:00:01:01 dev br1 self local vlan 20
++
++ sysctl_set net.ipv4.conf.all.rp_filter 0
++ sysctl_set net.ipv4.conf.vlan10-v.rp_filter 0
++ sysctl_set net.ipv4.conf.vlan20-v.rp_filter 0
+ }
+
+ switch_destroy()
+ {
++ sysctl_restore net.ipv4.conf.all.rp_filter
++
+ bridge fdb del 00:00:5e:00:01:01 dev br1 self local vlan 20
+ bridge fdb del 00:00:5e:00:01:01 dev br1 self local vlan 10
+
+@@ -359,6 +365,10 @@ ns_switch_create()
+
+ bridge fdb add 00:00:5e:00:01:01 dev br1 self local vlan 10
+ bridge fdb add 00:00:5e:00:01:01 dev br1 self local vlan 20
++
++ sysctl_set net.ipv4.conf.all.rp_filter 0
++ sysctl_set net.ipv4.conf.vlan10-v.rp_filter 0
++ sysctl_set net.ipv4.conf.vlan20-v.rp_filter 0
+ }
+ export -f ns_switch_create
+
+diff --git a/tools/testing/selftests/net/forwarding/vxlan_symmetric.sh b/tools/testing/selftests/net/forwarding/vxlan_symmetric.sh
+index 1209031bc794d..5d97fa347d75a 100755
+--- a/tools/testing/selftests/net/forwarding/vxlan_symmetric.sh
++++ b/tools/testing/selftests/net/forwarding/vxlan_symmetric.sh
+@@ -237,10 +237,16 @@ switch_create()
+
+ bridge fdb add 00:00:5e:00:01:01 dev br1 self local vlan 10
+ bridge fdb add 00:00:5e:00:01:01 dev br1 self local vlan 20
++
++ sysctl_set net.ipv4.conf.all.rp_filter 0
++ sysctl_set net.ipv4.conf.vlan10-v.rp_filter 0
++ sysctl_set net.ipv4.conf.vlan20-v.rp_filter 0
+ }
+
+ switch_destroy()
+ {
++ sysctl_restore net.ipv4.conf.all.rp_filter
++
+ bridge fdb del 00:00:5e:00:01:01 dev br1 self local vlan 20
+ bridge fdb del 00:00:5e:00:01:01 dev br1 self local vlan 10
+
+@@ -402,6 +408,10 @@ ns_switch_create()
+
+ bridge fdb add 00:00:5e:00:01:01 dev br1 self local vlan 10
+ bridge fdb add 00:00:5e:00:01:01 dev br1 self local vlan 20
++
++ sysctl_set net.ipv4.conf.all.rp_filter 0
++ sysctl_set net.ipv4.conf.vlan10-v.rp_filter 0
++ sysctl_set net.ipv4.conf.vlan20-v.rp_filter 0
+ }
+ export -f ns_switch_create
+
+diff --git a/tools/testing/selftests/net/mptcp/mptcp_connect.sh b/tools/testing/selftests/net/mptcp/mptcp_connect.sh
+index acf02e156d20f..ed163e4ad4344 100755
+--- a/tools/testing/selftests/net/mptcp/mptcp_connect.sh
++++ b/tools/testing/selftests/net/mptcp/mptcp_connect.sh
+@@ -421,9 +421,9 @@ do_transfer()
+ duration=$(printf "(duration %05sms)" $duration)
+ if [ ${rets} -ne 0 ] || [ ${retc} -ne 0 ]; then
+ echo "$duration [ FAIL ] client exit code $retc, server $rets" 1>&2
+- echo "\nnetns ${listener_ns} socket stat for $port:" 1>&2
++ echo -e "\nnetns ${listener_ns} socket stat for ${port}:" 1>&2
+ ip netns exec ${listener_ns} ss -nita 1>&2 -o "sport = :$port"
+- echo "\nnetns ${connector_ns} socket stat for $port:" 1>&2
++ echo -e "\nnetns ${connector_ns} socket stat for ${port}:" 1>&2
+ ip netns exec ${connector_ns} ss -nita 1>&2 -o "dport = :$port"
+
+ cat "$capout"
+diff --git a/tools/testing/selftests/net/mptcp/mptcp_join.sh b/tools/testing/selftests/net/mptcp/mptcp_join.sh
+index dd42c2f692d01..9cb0c6af326ba 100755
+--- a/tools/testing/selftests/net/mptcp/mptcp_join.sh
++++ b/tools/testing/selftests/net/mptcp/mptcp_join.sh
+@@ -167,9 +167,9 @@ do_transfer()
+
+ if [ ${rets} -ne 0 ] || [ ${retc} -ne 0 ]; then
+ echo " client exit code $retc, server $rets" 1>&2
+- echo "\nnetns ${listener_ns} socket stat for $port:" 1>&2
++ echo -e "\nnetns ${listener_ns} socket stat for ${port}:" 1>&2
+ ip netns exec ${listener_ns} ss -nita 1>&2 -o "sport = :$port"
+- echo "\nnetns ${connector_ns} socket stat for $port:" 1>&2
++ echo -e "\nnetns ${connector_ns} socket stat for ${port}:" 1>&2
+ ip netns exec ${connector_ns} ss -nita 1>&2 -o "dport = :$port"
+
+ cat "$capout"
+diff --git a/tools/testing/selftests/net/rtnetlink.sh b/tools/testing/selftests/net/rtnetlink.sh
+index bdbf4b3125b6a..28ea3753da207 100755
+--- a/tools/testing/selftests/net/rtnetlink.sh
++++ b/tools/testing/selftests/net/rtnetlink.sh
+@@ -521,6 +521,11 @@ kci_test_encap_fou()
+ return $ksft_skip
+ fi
+
++ if ! /sbin/modprobe -q -n fou; then
++ echo "SKIP: module fou is not found"
++ return $ksft_skip
++ fi
++ /sbin/modprobe -q fou
+ ip -netns "$testns" fou add port 7777 ipproto 47 2>/dev/null
+ if [ $? -ne 0 ];then
+ echo "FAIL: can't add fou port 7777, skipping test"
+diff --git a/tools/testing/selftests/powerpc/eeh/eeh-basic.sh b/tools/testing/selftests/powerpc/eeh/eeh-basic.sh
+index 8a8d0f456946c..0d783e1065c86 100755
+--- a/tools/testing/selftests/powerpc/eeh/eeh-basic.sh
++++ b/tools/testing/selftests/powerpc/eeh/eeh-basic.sh
+@@ -1,17 +1,19 @@
+ #!/bin/sh
+ # SPDX-License-Identifier: GPL-2.0-only
+
++KSELFTESTS_SKIP=4
++
+ . ./eeh-functions.sh
+
+ if ! eeh_supported ; then
+ echo "EEH not supported on this system, skipping"
+- exit 0;
++ exit $KSELFTESTS_SKIP;
+ fi
+
+ if [ ! -e "/sys/kernel/debug/powerpc/eeh_dev_check" ] && \
+ [ ! -e "/sys/kernel/debug/powerpc/eeh_dev_break" ] ; then
+ echo "debugfs EEH testing files are missing. Is debugfs mounted?"
+- exit 1;
++ exit $KSELFTESTS_SKIP;
+ fi
+
+ pre_lspci=`mktemp`
+@@ -84,4 +86,5 @@ echo "$failed devices failed to recover ($dev_count tested)"
+ lspci | diff -u $pre_lspci -
+ rm -f $pre_lspci
+
+-exit $failed
++test "$failed" == 0
++exit $?
+diff --git a/tools/testing/selftests/vm/config b/tools/testing/selftests/vm/config
+index 3ba674b64fa9f..69dd0d1aa30b2 100644
+--- a/tools/testing/selftests/vm/config
++++ b/tools/testing/selftests/vm/config
+@@ -3,3 +3,4 @@ CONFIG_USERFAULTFD=y
+ CONFIG_TEST_VMALLOC=m
+ CONFIG_DEVICE_PRIVATE=y
+ CONFIG_TEST_HMM=m
++CONFIG_GUP_BENCHMARK=y
^ permalink raw reply related [flat|nested] 30+ messages in thread
* [gentoo-commits] proj/linux-patches:5.8 commit in: /
@ 2020-11-01 20:32 Mike Pagano
0 siblings, 0 replies; 30+ messages in thread
From: Mike Pagano @ 2020-11-01 20:32 UTC (permalink / raw
To: gentoo-commits
commit: 5c3df6341bbcb452808a06a1104d6616e72143d0
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sun Nov 1 20:32:02 2020 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sun Nov 1 20:32:02 2020 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=5c3df634
Linux patch 5.8.18
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1017_linux-5.8.18.patch | 5442 +++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 5446 insertions(+)
diff --git a/0000_README b/0000_README
index 333aabc..a90cff2 100644
--- a/0000_README
+++ b/0000_README
@@ -111,6 +111,10 @@ Patch: 1016_linux-5.8.17.patch
From: http://www.kernel.org
Desc: Linux 5.8.17
+Patch: 1017_linux-5.8.18.patch
+From: http://www.kernel.org
+Desc: Linux 5.8.18
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1017_linux-5.8.18.patch b/1017_linux-5.8.18.patch
new file mode 100644
index 0000000..473975b
--- /dev/null
+++ b/1017_linux-5.8.18.patch
@@ -0,0 +1,5442 @@
+diff --git a/Makefile b/Makefile
+index 9bdb93053ee93..33c45a0cd8582 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 8
+-SUBLEVEL = 17
++SUBLEVEL = 18
+ EXTRAVERSION =
+ NAME = Kleptomaniac Octopus
+
+diff --git a/arch/arm64/Makefile b/arch/arm64/Makefile
+index d5fe7c9e0be1d..5a34423464188 100644
+--- a/arch/arm64/Makefile
++++ b/arch/arm64/Makefile
+@@ -10,14 +10,14 @@
+ #
+ # Copyright (C) 1995-2001 by Russell King
+
+-LDFLAGS_vmlinux :=--no-undefined -X
++LDFLAGS_vmlinux :=--no-undefined -X -z norelro
+ CPPFLAGS_vmlinux.lds = -DTEXT_OFFSET=$(TEXT_OFFSET)
+
+ ifeq ($(CONFIG_RELOCATABLE), y)
+ # Pass --no-apply-dynamic-relocs to restore pre-binutils-2.27 behaviour
+ # for relative relocs, since this leads to better Image compression
+ # with the relocation offsets always being zero.
+-LDFLAGS_vmlinux += -shared -Bsymbolic -z notext -z norelro \
++LDFLAGS_vmlinux += -shared -Bsymbolic -z notext \
+ $(call ld-option, --no-apply-dynamic-relocs)
+ endif
+
+diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
+index 6e8a7eec667e8..d8a2bacf4e0a8 100644
+--- a/arch/arm64/kernel/cpu_errata.c
++++ b/arch/arm64/kernel/cpu_errata.c
+@@ -457,6 +457,12 @@ out_printmsg:
+ return required;
+ }
+
++static void cpu_enable_ssbd_mitigation(const struct arm64_cpu_capabilities *cap)
++{
++ if (ssbd_state != ARM64_SSBD_FORCE_DISABLE)
++ cap->matches(cap, SCOPE_LOCAL_CPU);
++}
++
+ /* known invulnerable cores */
+ static const struct midr_range arm64_ssb_cpus[] = {
+ MIDR_ALL_VERSIONS(MIDR_CORTEX_A35),
+@@ -599,6 +605,12 @@ check_branch_predictor(const struct arm64_cpu_capabilities *entry, int scope)
+ return (need_wa > 0);
+ }
+
++static void
++cpu_enable_branch_predictor_hardening(const struct arm64_cpu_capabilities *cap)
++{
++ cap->matches(cap, SCOPE_LOCAL_CPU);
++}
++
+ static const __maybe_unused struct midr_range tx2_family_cpus[] = {
+ MIDR_ALL_VERSIONS(MIDR_BRCM_VULCAN),
+ MIDR_ALL_VERSIONS(MIDR_CAVIUM_THUNDERX2),
+@@ -890,9 +902,11 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
+ },
+ #endif
+ {
++ .desc = "Branch predictor hardening",
+ .capability = ARM64_HARDEN_BRANCH_PREDICTOR,
+ .type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,
+ .matches = check_branch_predictor,
++ .cpu_enable = cpu_enable_branch_predictor_hardening,
+ },
+ #ifdef CONFIG_HARDEN_EL2_VECTORS
+ {
+@@ -906,6 +920,7 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
+ .capability = ARM64_SSBD,
+ .type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,
+ .matches = has_ssbd_mitigation,
++ .cpu_enable = cpu_enable_ssbd_mitigation,
+ .midr_range_list = arm64_ssb_cpus,
+ },
+ #ifdef CONFIG_ARM64_ERRATUM_1418040
+diff --git a/arch/openrisc/include/asm/uaccess.h b/arch/openrisc/include/asm/uaccess.h
+index 17c24f14615fb..6839f8fcf76b2 100644
+--- a/arch/openrisc/include/asm/uaccess.h
++++ b/arch/openrisc/include/asm/uaccess.h
+@@ -164,19 +164,19 @@ struct __large_struct {
+
+ #define __get_user_nocheck(x, ptr, size) \
+ ({ \
+- long __gu_err, __gu_val; \
+- __get_user_size(__gu_val, (ptr), (size), __gu_err); \
+- (x) = (__force __typeof__(*(ptr)))__gu_val; \
++ long __gu_err; \
++ __get_user_size((x), (ptr), (size), __gu_err); \
+ __gu_err; \
+ })
+
+ #define __get_user_check(x, ptr, size) \
+ ({ \
+- long __gu_err = -EFAULT, __gu_val = 0; \
+- const __typeof__(*(ptr)) * __gu_addr = (ptr); \
+- if (access_ok(__gu_addr, size)) \
+- __get_user_size(__gu_val, __gu_addr, (size), __gu_err); \
+- (x) = (__force __typeof__(*(ptr)))__gu_val; \
++ long __gu_err = -EFAULT; \
++ const __typeof__(*(ptr)) *__gu_addr = (ptr); \
++ if (access_ok(__gu_addr, size)) \
++ __get_user_size((x), __gu_addr, (size), __gu_err); \
++ else \
++ (x) = (__typeof__(*(ptr))) 0; \
+ __gu_err; \
+ })
+
+@@ -190,11 +190,13 @@ do { \
+ case 2: __get_user_asm(x, ptr, retval, "l.lhz"); break; \
+ case 4: __get_user_asm(x, ptr, retval, "l.lwz"); break; \
+ case 8: __get_user_asm2(x, ptr, retval); break; \
+- default: (x) = __get_user_bad(); \
++ default: (x) = (__typeof__(*(ptr)))__get_user_bad(); \
+ } \
+ } while (0)
+
+ #define __get_user_asm(x, addr, err, op) \
++{ \
++ unsigned long __gu_tmp; \
+ __asm__ __volatile__( \
+ "1: "op" %1,0(%2)\n" \
+ "2:\n" \
+@@ -208,10 +210,14 @@ do { \
+ " .align 2\n" \
+ " .long 1b,3b\n" \
+ ".previous" \
+- : "=r"(err), "=r"(x) \
+- : "r"(addr), "i"(-EFAULT), "0"(err))
++ : "=r"(err), "=r"(__gu_tmp) \
++ : "r"(addr), "i"(-EFAULT), "0"(err)); \
++ (x) = (__typeof__(*(addr)))__gu_tmp; \
++}
+
+ #define __get_user_asm2(x, addr, err) \
++{ \
++ unsigned long long __gu_tmp; \
+ __asm__ __volatile__( \
+ "1: l.lwz %1,0(%2)\n" \
+ "2: l.lwz %H1,4(%2)\n" \
+@@ -228,8 +234,11 @@ do { \
+ " .long 1b,4b\n" \
+ " .long 2b,4b\n" \
+ ".previous" \
+- : "=r"(err), "=&r"(x) \
+- : "r"(addr), "i"(-EFAULT), "0"(err))
++ : "=r"(err), "=&r"(__gu_tmp) \
++ : "r"(addr), "i"(-EFAULT), "0"(err)); \
++ (x) = (__typeof__(*(addr)))( \
++ (__typeof__((x)-(x)))__gu_tmp); \
++}
+
+ /* more complex routines */
+
+diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
+index 9fa23eb320ff5..cf78ad7ff0b7c 100644
+--- a/arch/powerpc/Kconfig
++++ b/arch/powerpc/Kconfig
+@@ -135,7 +135,7 @@ config PPC
+ select ARCH_HAS_STRICT_KERNEL_RWX if (PPC32 && !HIBERNATION)
+ select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST
+ select ARCH_HAS_UACCESS_FLUSHCACHE
+- select ARCH_HAS_UACCESS_MCSAFE if PPC64
++ select ARCH_HAS_COPY_MC if PPC64
+ select ARCH_HAS_UBSAN_SANITIZE_ALL
+ select ARCH_HAVE_NMI_SAFE_CMPXCHG
+ select ARCH_KEEP_MEMBLOCK
+diff --git a/arch/powerpc/include/asm/string.h b/arch/powerpc/include/asm/string.h
+index b72692702f35f..9bf6dffb40900 100644
+--- a/arch/powerpc/include/asm/string.h
++++ b/arch/powerpc/include/asm/string.h
+@@ -53,9 +53,7 @@ void *__memmove(void *to, const void *from, __kernel_size_t n);
+ #ifndef CONFIG_KASAN
+ #define __HAVE_ARCH_MEMSET32
+ #define __HAVE_ARCH_MEMSET64
+-#define __HAVE_ARCH_MEMCPY_MCSAFE
+
+-extern int memcpy_mcsafe(void *dst, const void *src, __kernel_size_t sz);
+ extern void *__memset16(uint16_t *, uint16_t v, __kernel_size_t);
+ extern void *__memset32(uint32_t *, uint32_t v, __kernel_size_t);
+ extern void *__memset64(uint64_t *, uint64_t v, __kernel_size_t);
+diff --git a/arch/powerpc/include/asm/uaccess.h b/arch/powerpc/include/asm/uaccess.h
+index 64c04ab091123..97506441c15b1 100644
+--- a/arch/powerpc/include/asm/uaccess.h
++++ b/arch/powerpc/include/asm/uaccess.h
+@@ -436,6 +436,32 @@ do { \
+ extern unsigned long __copy_tofrom_user(void __user *to,
+ const void __user *from, unsigned long size);
+
++#ifdef CONFIG_ARCH_HAS_COPY_MC
++unsigned long __must_check
++copy_mc_generic(void *to, const void *from, unsigned long size);
++
++static inline unsigned long __must_check
++copy_mc_to_kernel(void *to, const void *from, unsigned long size)
++{
++ return copy_mc_generic(to, from, size);
++}
++#define copy_mc_to_kernel copy_mc_to_kernel
++
++static inline unsigned long __must_check
++copy_mc_to_user(void __user *to, const void *from, unsigned long n)
++{
++ if (likely(check_copy_size(from, n, true))) {
++ if (access_ok(to, n)) {
++ allow_write_to_user(to, n);
++ n = copy_mc_generic((void *)to, from, n);
++ prevent_write_to_user(to, n);
++ }
++ }
++
++ return n;
++}
++#endif
++
+ #ifdef __powerpc64__
+ static inline unsigned long
+ raw_copy_in_user(void __user *to, const void __user *from, unsigned long n)
+@@ -524,20 +550,6 @@ raw_copy_to_user(void __user *to, const void *from, unsigned long n)
+ return ret;
+ }
+
+-static __always_inline unsigned long __must_check
+-copy_to_user_mcsafe(void __user *to, const void *from, unsigned long n)
+-{
+- if (likely(check_copy_size(from, n, true))) {
+- if (access_ok(to, n)) {
+- allow_write_to_user(to, n);
+- n = memcpy_mcsafe((void *)to, from, n);
+- prevent_write_to_user(to, n);
+- }
+- }
+-
+- return n;
+-}
+-
+ unsigned long __arch_clear_user(void __user *addr, unsigned long size);
+
+ static inline unsigned long clear_user(void __user *addr, unsigned long size)
+diff --git a/arch/powerpc/lib/Makefile b/arch/powerpc/lib/Makefile
+index 5e994cda8e401..c254f5f733a86 100644
+--- a/arch/powerpc/lib/Makefile
++++ b/arch/powerpc/lib/Makefile
+@@ -39,7 +39,7 @@ obj-$(CONFIG_PPC_BOOK3S_64) += copyuser_power7.o copypage_power7.o \
+ memcpy_power7.o
+
+ obj64-y += copypage_64.o copyuser_64.o mem_64.o hweight_64.o \
+- memcpy_64.o memcpy_mcsafe_64.o
++ memcpy_64.o copy_mc_64.o
+
+ obj64-$(CONFIG_SMP) += locks.o
+ obj64-$(CONFIG_ALTIVEC) += vmx-helper.o
+diff --git a/arch/powerpc/lib/copy_mc_64.S b/arch/powerpc/lib/copy_mc_64.S
+new file mode 100644
+index 0000000000000..88d46c471493b
+--- /dev/null
++++ b/arch/powerpc/lib/copy_mc_64.S
+@@ -0,0 +1,242 @@
++/* SPDX-License-Identifier: GPL-2.0 */
++/*
++ * Copyright (C) IBM Corporation, 2011
++ * Derived from copyuser_power7.s by Anton Blanchard <anton@au.ibm.com>
++ * Author - Balbir Singh <bsingharora@gmail.com>
++ */
++#include <asm/ppc_asm.h>
++#include <asm/errno.h>
++#include <asm/export.h>
++
++ .macro err1
++100:
++ EX_TABLE(100b,.Ldo_err1)
++ .endm
++
++ .macro err2
++200:
++ EX_TABLE(200b,.Ldo_err2)
++ .endm
++
++ .macro err3
++300: EX_TABLE(300b,.Ldone)
++ .endm
++
++.Ldo_err2:
++ ld r22,STK_REG(R22)(r1)
++ ld r21,STK_REG(R21)(r1)
++ ld r20,STK_REG(R20)(r1)
++ ld r19,STK_REG(R19)(r1)
++ ld r18,STK_REG(R18)(r1)
++ ld r17,STK_REG(R17)(r1)
++ ld r16,STK_REG(R16)(r1)
++ ld r15,STK_REG(R15)(r1)
++ ld r14,STK_REG(R14)(r1)
++ addi r1,r1,STACKFRAMESIZE
++.Ldo_err1:
++ /* Do a byte by byte copy to get the exact remaining size */
++ mtctr r7
++46:
++err3; lbz r0,0(r4)
++ addi r4,r4,1
++err3; stb r0,0(r3)
++ addi r3,r3,1
++ bdnz 46b
++ li r3,0
++ blr
++
++.Ldone:
++ mfctr r3
++ blr
++
++
++_GLOBAL(copy_mc_generic)
++ mr r7,r5
++ cmpldi r5,16
++ blt .Lshort_copy
++
++.Lcopy:
++ /* Get the source 8B aligned */
++ neg r6,r4
++ mtocrf 0x01,r6
++ clrldi r6,r6,(64-3)
++
++ bf cr7*4+3,1f
++err1; lbz r0,0(r4)
++ addi r4,r4,1
++err1; stb r0,0(r3)
++ addi r3,r3,1
++ subi r7,r7,1
++
++1: bf cr7*4+2,2f
++err1; lhz r0,0(r4)
++ addi r4,r4,2
++err1; sth r0,0(r3)
++ addi r3,r3,2
++ subi r7,r7,2
++
++2: bf cr7*4+1,3f
++err1; lwz r0,0(r4)
++ addi r4,r4,4
++err1; stw r0,0(r3)
++ addi r3,r3,4
++ subi r7,r7,4
++
++3: sub r5,r5,r6
++ cmpldi r5,128
++
++ mflr r0
++ stdu r1,-STACKFRAMESIZE(r1)
++ std r14,STK_REG(R14)(r1)
++ std r15,STK_REG(R15)(r1)
++ std r16,STK_REG(R16)(r1)
++ std r17,STK_REG(R17)(r1)
++ std r18,STK_REG(R18)(r1)
++ std r19,STK_REG(R19)(r1)
++ std r20,STK_REG(R20)(r1)
++ std r21,STK_REG(R21)(r1)
++ std r22,STK_REG(R22)(r1)
++ std r0,STACKFRAMESIZE+16(r1)
++
++ blt 5f
++ srdi r6,r5,7
++ mtctr r6
++
++ /* Now do cacheline (128B) sized loads and stores. */
++ .align 5
++4:
++err2; ld r0,0(r4)
++err2; ld r6,8(r4)
++err2; ld r8,16(r4)
++err2; ld r9,24(r4)
++err2; ld r10,32(r4)
++err2; ld r11,40(r4)
++err2; ld r12,48(r4)
++err2; ld r14,56(r4)
++err2; ld r15,64(r4)
++err2; ld r16,72(r4)
++err2; ld r17,80(r4)
++err2; ld r18,88(r4)
++err2; ld r19,96(r4)
++err2; ld r20,104(r4)
++err2; ld r21,112(r4)
++err2; ld r22,120(r4)
++ addi r4,r4,128
++err2; std r0,0(r3)
++err2; std r6,8(r3)
++err2; std r8,16(r3)
++err2; std r9,24(r3)
++err2; std r10,32(r3)
++err2; std r11,40(r3)
++err2; std r12,48(r3)
++err2; std r14,56(r3)
++err2; std r15,64(r3)
++err2; std r16,72(r3)
++err2; std r17,80(r3)
++err2; std r18,88(r3)
++err2; std r19,96(r3)
++err2; std r20,104(r3)
++err2; std r21,112(r3)
++err2; std r22,120(r3)
++ addi r3,r3,128
++ subi r7,r7,128
++ bdnz 4b
++
++ clrldi r5,r5,(64-7)
++
++ /* Up to 127B to go */
++5: srdi r6,r5,4
++ mtocrf 0x01,r6
++
++6: bf cr7*4+1,7f
++err2; ld r0,0(r4)
++err2; ld r6,8(r4)
++err2; ld r8,16(r4)
++err2; ld r9,24(r4)
++err2; ld r10,32(r4)
++err2; ld r11,40(r4)
++err2; ld r12,48(r4)
++err2; ld r14,56(r4)
++ addi r4,r4,64
++err2; std r0,0(r3)
++err2; std r6,8(r3)
++err2; std r8,16(r3)
++err2; std r9,24(r3)
++err2; std r10,32(r3)
++err2; std r11,40(r3)
++err2; std r12,48(r3)
++err2; std r14,56(r3)
++ addi r3,r3,64
++ subi r7,r7,64
++
++7: ld r14,STK_REG(R14)(r1)
++ ld r15,STK_REG(R15)(r1)
++ ld r16,STK_REG(R16)(r1)
++ ld r17,STK_REG(R17)(r1)
++ ld r18,STK_REG(R18)(r1)
++ ld r19,STK_REG(R19)(r1)
++ ld r20,STK_REG(R20)(r1)
++ ld r21,STK_REG(R21)(r1)
++ ld r22,STK_REG(R22)(r1)
++ addi r1,r1,STACKFRAMESIZE
++
++ /* Up to 63B to go */
++ bf cr7*4+2,8f
++err1; ld r0,0(r4)
++err1; ld r6,8(r4)
++err1; ld r8,16(r4)
++err1; ld r9,24(r4)
++ addi r4,r4,32
++err1; std r0,0(r3)
++err1; std r6,8(r3)
++err1; std r8,16(r3)
++err1; std r9,24(r3)
++ addi r3,r3,32
++ subi r7,r7,32
++
++ /* Up to 31B to go */
++8: bf cr7*4+3,9f
++err1; ld r0,0(r4)
++err1; ld r6,8(r4)
++ addi r4,r4,16
++err1; std r0,0(r3)
++err1; std r6,8(r3)
++ addi r3,r3,16
++ subi r7,r7,16
++
++9: clrldi r5,r5,(64-4)
++
++ /* Up to 15B to go */
++.Lshort_copy:
++ mtocrf 0x01,r5
++ bf cr7*4+0,12f
++err1; lwz r0,0(r4) /* Less chance of a reject with word ops */
++err1; lwz r6,4(r4)
++ addi r4,r4,8
++err1; stw r0,0(r3)
++err1; stw r6,4(r3)
++ addi r3,r3,8
++ subi r7,r7,8
++
++12: bf cr7*4+1,13f
++err1; lwz r0,0(r4)
++ addi r4,r4,4
++err1; stw r0,0(r3)
++ addi r3,r3,4
++ subi r7,r7,4
++
++13: bf cr7*4+2,14f
++err1; lhz r0,0(r4)
++ addi r4,r4,2
++err1; sth r0,0(r3)
++ addi r3,r3,2
++ subi r7,r7,2
++
++14: bf cr7*4+3,15f
++err1; lbz r0,0(r4)
++err1; stb r0,0(r3)
++
++15: li r3,0
++ blr
++
++EXPORT_SYMBOL_GPL(copy_mc_generic);
+diff --git a/arch/powerpc/lib/memcpy_mcsafe_64.S b/arch/powerpc/lib/memcpy_mcsafe_64.S
+deleted file mode 100644
+index cb882d9a6d8a3..0000000000000
+--- a/arch/powerpc/lib/memcpy_mcsafe_64.S
++++ /dev/null
+@@ -1,242 +0,0 @@
+-/* SPDX-License-Identifier: GPL-2.0 */
+-/*
+- * Copyright (C) IBM Corporation, 2011
+- * Derived from copyuser_power7.s by Anton Blanchard <anton@au.ibm.com>
+- * Author - Balbir Singh <bsingharora@gmail.com>
+- */
+-#include <asm/ppc_asm.h>
+-#include <asm/errno.h>
+-#include <asm/export.h>
+-
+- .macro err1
+-100:
+- EX_TABLE(100b,.Ldo_err1)
+- .endm
+-
+- .macro err2
+-200:
+- EX_TABLE(200b,.Ldo_err2)
+- .endm
+-
+- .macro err3
+-300: EX_TABLE(300b,.Ldone)
+- .endm
+-
+-.Ldo_err2:
+- ld r22,STK_REG(R22)(r1)
+- ld r21,STK_REG(R21)(r1)
+- ld r20,STK_REG(R20)(r1)
+- ld r19,STK_REG(R19)(r1)
+- ld r18,STK_REG(R18)(r1)
+- ld r17,STK_REG(R17)(r1)
+- ld r16,STK_REG(R16)(r1)
+- ld r15,STK_REG(R15)(r1)
+- ld r14,STK_REG(R14)(r1)
+- addi r1,r1,STACKFRAMESIZE
+-.Ldo_err1:
+- /* Do a byte by byte copy to get the exact remaining size */
+- mtctr r7
+-46:
+-err3; lbz r0,0(r4)
+- addi r4,r4,1
+-err3; stb r0,0(r3)
+- addi r3,r3,1
+- bdnz 46b
+- li r3,0
+- blr
+-
+-.Ldone:
+- mfctr r3
+- blr
+-
+-
+-_GLOBAL(memcpy_mcsafe)
+- mr r7,r5
+- cmpldi r5,16
+- blt .Lshort_copy
+-
+-.Lcopy:
+- /* Get the source 8B aligned */
+- neg r6,r4
+- mtocrf 0x01,r6
+- clrldi r6,r6,(64-3)
+-
+- bf cr7*4+3,1f
+-err1; lbz r0,0(r4)
+- addi r4,r4,1
+-err1; stb r0,0(r3)
+- addi r3,r3,1
+- subi r7,r7,1
+-
+-1: bf cr7*4+2,2f
+-err1; lhz r0,0(r4)
+- addi r4,r4,2
+-err1; sth r0,0(r3)
+- addi r3,r3,2
+- subi r7,r7,2
+-
+-2: bf cr7*4+1,3f
+-err1; lwz r0,0(r4)
+- addi r4,r4,4
+-err1; stw r0,0(r3)
+- addi r3,r3,4
+- subi r7,r7,4
+-
+-3: sub r5,r5,r6
+- cmpldi r5,128
+-
+- mflr r0
+- stdu r1,-STACKFRAMESIZE(r1)
+- std r14,STK_REG(R14)(r1)
+- std r15,STK_REG(R15)(r1)
+- std r16,STK_REG(R16)(r1)
+- std r17,STK_REG(R17)(r1)
+- std r18,STK_REG(R18)(r1)
+- std r19,STK_REG(R19)(r1)
+- std r20,STK_REG(R20)(r1)
+- std r21,STK_REG(R21)(r1)
+- std r22,STK_REG(R22)(r1)
+- std r0,STACKFRAMESIZE+16(r1)
+-
+- blt 5f
+- srdi r6,r5,7
+- mtctr r6
+-
+- /* Now do cacheline (128B) sized loads and stores. */
+- .align 5
+-4:
+-err2; ld r0,0(r4)
+-err2; ld r6,8(r4)
+-err2; ld r8,16(r4)
+-err2; ld r9,24(r4)
+-err2; ld r10,32(r4)
+-err2; ld r11,40(r4)
+-err2; ld r12,48(r4)
+-err2; ld r14,56(r4)
+-err2; ld r15,64(r4)
+-err2; ld r16,72(r4)
+-err2; ld r17,80(r4)
+-err2; ld r18,88(r4)
+-err2; ld r19,96(r4)
+-err2; ld r20,104(r4)
+-err2; ld r21,112(r4)
+-err2; ld r22,120(r4)
+- addi r4,r4,128
+-err2; std r0,0(r3)
+-err2; std r6,8(r3)
+-err2; std r8,16(r3)
+-err2; std r9,24(r3)
+-err2; std r10,32(r3)
+-err2; std r11,40(r3)
+-err2; std r12,48(r3)
+-err2; std r14,56(r3)
+-err2; std r15,64(r3)
+-err2; std r16,72(r3)
+-err2; std r17,80(r3)
+-err2; std r18,88(r3)
+-err2; std r19,96(r3)
+-err2; std r20,104(r3)
+-err2; std r21,112(r3)
+-err2; std r22,120(r3)
+- addi r3,r3,128
+- subi r7,r7,128
+- bdnz 4b
+-
+- clrldi r5,r5,(64-7)
+-
+- /* Up to 127B to go */
+-5: srdi r6,r5,4
+- mtocrf 0x01,r6
+-
+-6: bf cr7*4+1,7f
+-err2; ld r0,0(r4)
+-err2; ld r6,8(r4)
+-err2; ld r8,16(r4)
+-err2; ld r9,24(r4)
+-err2; ld r10,32(r4)
+-err2; ld r11,40(r4)
+-err2; ld r12,48(r4)
+-err2; ld r14,56(r4)
+- addi r4,r4,64
+-err2; std r0,0(r3)
+-err2; std r6,8(r3)
+-err2; std r8,16(r3)
+-err2; std r9,24(r3)
+-err2; std r10,32(r3)
+-err2; std r11,40(r3)
+-err2; std r12,48(r3)
+-err2; std r14,56(r3)
+- addi r3,r3,64
+- subi r7,r7,64
+-
+-7: ld r14,STK_REG(R14)(r1)
+- ld r15,STK_REG(R15)(r1)
+- ld r16,STK_REG(R16)(r1)
+- ld r17,STK_REG(R17)(r1)
+- ld r18,STK_REG(R18)(r1)
+- ld r19,STK_REG(R19)(r1)
+- ld r20,STK_REG(R20)(r1)
+- ld r21,STK_REG(R21)(r1)
+- ld r22,STK_REG(R22)(r1)
+- addi r1,r1,STACKFRAMESIZE
+-
+- /* Up to 63B to go */
+- bf cr7*4+2,8f
+-err1; ld r0,0(r4)
+-err1; ld r6,8(r4)
+-err1; ld r8,16(r4)
+-err1; ld r9,24(r4)
+- addi r4,r4,32
+-err1; std r0,0(r3)
+-err1; std r6,8(r3)
+-err1; std r8,16(r3)
+-err1; std r9,24(r3)
+- addi r3,r3,32
+- subi r7,r7,32
+-
+- /* Up to 31B to go */
+-8: bf cr7*4+3,9f
+-err1; ld r0,0(r4)
+-err1; ld r6,8(r4)
+- addi r4,r4,16
+-err1; std r0,0(r3)
+-err1; std r6,8(r3)
+- addi r3,r3,16
+- subi r7,r7,16
+-
+-9: clrldi r5,r5,(64-4)
+-
+- /* Up to 15B to go */
+-.Lshort_copy:
+- mtocrf 0x01,r5
+- bf cr7*4+0,12f
+-err1; lwz r0,0(r4) /* Less chance of a reject with word ops */
+-err1; lwz r6,4(r4)
+- addi r4,r4,8
+-err1; stw r0,0(r3)
+-err1; stw r6,4(r3)
+- addi r3,r3,8
+- subi r7,r7,8
+-
+-12: bf cr7*4+1,13f
+-err1; lwz r0,0(r4)
+- addi r4,r4,4
+-err1; stw r0,0(r3)
+- addi r3,r3,4
+- subi r7,r7,4
+-
+-13: bf cr7*4+2,14f
+-err1; lhz r0,0(r4)
+- addi r4,r4,2
+-err1; sth r0,0(r3)
+- addi r3,r3,2
+- subi r7,r7,2
+-
+-14: bf cr7*4+3,15f
+-err1; lbz r0,0(r4)
+-err1; stb r0,0(r3)
+-
+-15: li r3,0
+- blr
+-
+-EXPORT_SYMBOL_GPL(memcpy_mcsafe);
+diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
+index 883da0abf7790..1f4104f8852b8 100644
+--- a/arch/x86/Kconfig
++++ b/arch/x86/Kconfig
+@@ -75,7 +75,7 @@ config X86
+ select ARCH_HAS_PTE_DEVMAP if X86_64
+ select ARCH_HAS_PTE_SPECIAL
+ select ARCH_HAS_UACCESS_FLUSHCACHE if X86_64
+- select ARCH_HAS_UACCESS_MCSAFE if X86_64 && X86_MCE
++ select ARCH_HAS_COPY_MC if X86_64
+ select ARCH_HAS_SET_MEMORY
+ select ARCH_HAS_SET_DIRECT_MAP
+ select ARCH_HAS_STRICT_KERNEL_RWX
+diff --git a/arch/x86/Kconfig.debug b/arch/x86/Kconfig.debug
+index 0dd319e6e5b49..ec98b400e38f9 100644
+--- a/arch/x86/Kconfig.debug
++++ b/arch/x86/Kconfig.debug
+@@ -59,7 +59,7 @@ config EARLY_PRINTK_USB_XDBC
+ You should normally say N here, unless you want to debug early
+ crashes or need a very simple printk logging facility.
+
+-config MCSAFE_TEST
++config COPY_MC_TEST
+ def_bool n
+
+ config EFI_PGT_DUMP
+diff --git a/arch/x86/events/amd/ibs.c b/arch/x86/events/amd/ibs.c
+index 26c36357c4c9c..a023cbe21230a 100644
+--- a/arch/x86/events/amd/ibs.c
++++ b/arch/x86/events/amd/ibs.c
+@@ -89,6 +89,7 @@ struct perf_ibs {
+ u64 max_period;
+ unsigned long offset_mask[1];
+ int offset_max;
++ unsigned int fetch_count_reset_broken : 1;
+ struct cpu_perf_ibs __percpu *pcpu;
+
+ struct attribute **format_attrs;
+@@ -363,7 +364,12 @@ perf_ibs_event_update(struct perf_ibs *perf_ibs, struct perf_event *event,
+ static inline void perf_ibs_enable_event(struct perf_ibs *perf_ibs,
+ struct hw_perf_event *hwc, u64 config)
+ {
+- wrmsrl(hwc->config_base, hwc->config | config | perf_ibs->enable_mask);
++ u64 tmp = hwc->config | config;
++
++ if (perf_ibs->fetch_count_reset_broken)
++ wrmsrl(hwc->config_base, tmp & ~perf_ibs->enable_mask);
++
++ wrmsrl(hwc->config_base, tmp | perf_ibs->enable_mask);
+ }
+
+ /*
+@@ -733,6 +739,13 @@ static __init void perf_event_ibs_init(void)
+ {
+ struct attribute **attr = ibs_op_format_attrs;
+
++ /*
++ * Some chips fail to reset the fetch count when it is written; instead
++ * they need a 0-1 transition of IbsFetchEn.
++ */
++ if (boot_cpu_data.x86 >= 0x16 && boot_cpu_data.x86 <= 0x18)
++ perf_ibs_fetch.fetch_count_reset_broken = 1;
++
+ perf_ibs_pmu_init(&perf_ibs_fetch, "ibs_fetch");
+
+ if (ibs_caps & IBS_CAPS_OPCNT) {
+diff --git a/arch/x86/include/asm/copy_mc_test.h b/arch/x86/include/asm/copy_mc_test.h
+new file mode 100644
+index 0000000000000..e4991ba967266
+--- /dev/null
++++ b/arch/x86/include/asm/copy_mc_test.h
+@@ -0,0 +1,75 @@
++/* SPDX-License-Identifier: GPL-2.0 */
++#ifndef _COPY_MC_TEST_H_
++#define _COPY_MC_TEST_H_
++
++#ifndef __ASSEMBLY__
++#ifdef CONFIG_COPY_MC_TEST
++extern unsigned long copy_mc_test_src;
++extern unsigned long copy_mc_test_dst;
++
++static inline void copy_mc_inject_src(void *addr)
++{
++ if (addr)
++ copy_mc_test_src = (unsigned long) addr;
++ else
++ copy_mc_test_src = ~0UL;
++}
++
++static inline void copy_mc_inject_dst(void *addr)
++{
++ if (addr)
++ copy_mc_test_dst = (unsigned long) addr;
++ else
++ copy_mc_test_dst = ~0UL;
++}
++#else /* CONFIG_COPY_MC_TEST */
++static inline void copy_mc_inject_src(void *addr)
++{
++}
++
++static inline void copy_mc_inject_dst(void *addr)
++{
++}
++#endif /* CONFIG_COPY_MC_TEST */
++
++#else /* __ASSEMBLY__ */
++#include <asm/export.h>
++
++#ifdef CONFIG_COPY_MC_TEST
++.macro COPY_MC_TEST_CTL
++ .pushsection .data
++ .align 8
++ .globl copy_mc_test_src
++ copy_mc_test_src:
++ .quad 0
++ EXPORT_SYMBOL_GPL(copy_mc_test_src)
++ .globl copy_mc_test_dst
++ copy_mc_test_dst:
++ .quad 0
++ EXPORT_SYMBOL_GPL(copy_mc_test_dst)
++ .popsection
++.endm
++
++.macro COPY_MC_TEST_SRC reg count target
++ leaq \count(\reg), %r9
++ cmp copy_mc_test_src, %r9
++ ja \target
++.endm
++
++.macro COPY_MC_TEST_DST reg count target
++ leaq \count(\reg), %r9
++ cmp copy_mc_test_dst, %r9
++ ja \target
++.endm
++#else
++.macro COPY_MC_TEST_CTL
++.endm
++
++.macro COPY_MC_TEST_SRC reg count target
++.endm
++
++.macro COPY_MC_TEST_DST reg count target
++.endm
++#endif /* CONFIG_COPY_MC_TEST */
++#endif /* __ASSEMBLY__ */
++#endif /* _COPY_MC_TEST_H_ */
+diff --git a/arch/x86/include/asm/mce.h b/arch/x86/include/asm/mce.h
+index cf503824529ce..9b9112e4379ab 100644
+--- a/arch/x86/include/asm/mce.h
++++ b/arch/x86/include/asm/mce.h
+@@ -174,6 +174,15 @@ extern void mce_unregister_decode_chain(struct notifier_block *nb);
+
+ extern int mce_p5_enabled;
+
++#ifdef CONFIG_ARCH_HAS_COPY_MC
++extern void enable_copy_mc_fragile(void);
++unsigned long __must_check copy_mc_fragile(void *dst, const void *src, unsigned cnt);
++#else
++static inline void enable_copy_mc_fragile(void)
++{
++}
++#endif
++
+ #ifdef CONFIG_X86_MCE
+ int mcheck_init(void);
+ void mcheck_cpu_init(struct cpuinfo_x86 *c);
+diff --git a/arch/x86/include/asm/mcsafe_test.h b/arch/x86/include/asm/mcsafe_test.h
+deleted file mode 100644
+index eb59804b6201c..0000000000000
+--- a/arch/x86/include/asm/mcsafe_test.h
++++ /dev/null
+@@ -1,75 +0,0 @@
+-/* SPDX-License-Identifier: GPL-2.0 */
+-#ifndef _MCSAFE_TEST_H_
+-#define _MCSAFE_TEST_H_
+-
+-#ifndef __ASSEMBLY__
+-#ifdef CONFIG_MCSAFE_TEST
+-extern unsigned long mcsafe_test_src;
+-extern unsigned long mcsafe_test_dst;
+-
+-static inline void mcsafe_inject_src(void *addr)
+-{
+- if (addr)
+- mcsafe_test_src = (unsigned long) addr;
+- else
+- mcsafe_test_src = ~0UL;
+-}
+-
+-static inline void mcsafe_inject_dst(void *addr)
+-{
+- if (addr)
+- mcsafe_test_dst = (unsigned long) addr;
+- else
+- mcsafe_test_dst = ~0UL;
+-}
+-#else /* CONFIG_MCSAFE_TEST */
+-static inline void mcsafe_inject_src(void *addr)
+-{
+-}
+-
+-static inline void mcsafe_inject_dst(void *addr)
+-{
+-}
+-#endif /* CONFIG_MCSAFE_TEST */
+-
+-#else /* __ASSEMBLY__ */
+-#include <asm/export.h>
+-
+-#ifdef CONFIG_MCSAFE_TEST
+-.macro MCSAFE_TEST_CTL
+- .pushsection .data
+- .align 8
+- .globl mcsafe_test_src
+- mcsafe_test_src:
+- .quad 0
+- EXPORT_SYMBOL_GPL(mcsafe_test_src)
+- .globl mcsafe_test_dst
+- mcsafe_test_dst:
+- .quad 0
+- EXPORT_SYMBOL_GPL(mcsafe_test_dst)
+- .popsection
+-.endm
+-
+-.macro MCSAFE_TEST_SRC reg count target
+- leaq \count(\reg), %r9
+- cmp mcsafe_test_src, %r9
+- ja \target
+-.endm
+-
+-.macro MCSAFE_TEST_DST reg count target
+- leaq \count(\reg), %r9
+- cmp mcsafe_test_dst, %r9
+- ja \target
+-.endm
+-#else
+-.macro MCSAFE_TEST_CTL
+-.endm
+-
+-.macro MCSAFE_TEST_SRC reg count target
+-.endm
+-
+-.macro MCSAFE_TEST_DST reg count target
+-.endm
+-#endif /* CONFIG_MCSAFE_TEST */
+-#endif /* __ASSEMBLY__ */
+-#endif /* _MCSAFE_TEST_H_ */
+diff --git a/arch/x86/include/asm/string_64.h b/arch/x86/include/asm/string_64.h
+index 75314c3dbe471..6e450827f677a 100644
+--- a/arch/x86/include/asm/string_64.h
++++ b/arch/x86/include/asm/string_64.h
+@@ -82,38 +82,6 @@ int strcmp(const char *cs, const char *ct);
+
+ #endif
+
+-#define __HAVE_ARCH_MEMCPY_MCSAFE 1
+-__must_check unsigned long __memcpy_mcsafe(void *dst, const void *src,
+- size_t cnt);
+-DECLARE_STATIC_KEY_FALSE(mcsafe_key);
+-
+-/**
+- * memcpy_mcsafe - copy memory with indication if a machine check happened
+- *
+- * @dst: destination address
+- * @src: source address
+- * @cnt: number of bytes to copy
+- *
+- * Low level memory copy function that catches machine checks
+- * We only call into the "safe" function on systems that can
+- * actually do machine check recovery. Everyone else can just
+- * use memcpy().
+- *
+- * Return 0 for success, or number of bytes not copied if there was an
+- * exception.
+- */
+-static __always_inline __must_check unsigned long
+-memcpy_mcsafe(void *dst, const void *src, size_t cnt)
+-{
+-#ifdef CONFIG_X86_MCE
+- if (static_branch_unlikely(&mcsafe_key))
+- return __memcpy_mcsafe(dst, src, cnt);
+- else
+-#endif
+- memcpy(dst, src, cnt);
+- return 0;
+-}
+-
+ #ifdef CONFIG_ARCH_HAS_UACCESS_FLUSHCACHE
+ #define __HAVE_ARCH_MEMCPY_FLUSHCACHE 1
+ void __memcpy_flushcache(void *dst, const void *src, size_t cnt);
+diff --git a/arch/x86/include/asm/uaccess.h b/arch/x86/include/asm/uaccess.h
+index 2f3e8f2a958f6..9bfca52b46411 100644
+--- a/arch/x86/include/asm/uaccess.h
++++ b/arch/x86/include/asm/uaccess.h
+@@ -455,6 +455,15 @@ extern __must_check long strnlen_user(const char __user *str, long n);
+ unsigned long __must_check clear_user(void __user *mem, unsigned long len);
+ unsigned long __must_check __clear_user(void __user *mem, unsigned long len);
+
++#ifdef CONFIG_ARCH_HAS_COPY_MC
++unsigned long __must_check
++copy_mc_to_kernel(void *to, const void *from, unsigned len);
++#define copy_mc_to_kernel copy_mc_to_kernel
++
++unsigned long __must_check
++copy_mc_to_user(void *to, const void *from, unsigned len);
++#endif
++
+ /*
+ * movsl can be slow when source and dest are not both 8-byte aligned
+ */
+diff --git a/arch/x86/include/asm/uaccess_64.h b/arch/x86/include/asm/uaccess_64.h
+index bc10e3dc64fed..e7265a552f4f0 100644
+--- a/arch/x86/include/asm/uaccess_64.h
++++ b/arch/x86/include/asm/uaccess_64.h
+@@ -46,22 +46,6 @@ copy_user_generic(void *to, const void *from, unsigned len)
+ return ret;
+ }
+
+-static __always_inline __must_check unsigned long
+-copy_to_user_mcsafe(void *to, const void *from, unsigned len)
+-{
+- unsigned long ret;
+-
+- __uaccess_begin();
+- /*
+- * Note, __memcpy_mcsafe() is explicitly used since it can
+- * handle exceptions / faults. memcpy_mcsafe() may fall back to
+- * memcpy() which lacks this handling.
+- */
+- ret = __memcpy_mcsafe(to, from, len);
+- __uaccess_end();
+- return ret;
+-}
+-
+ static __always_inline __must_check unsigned long
+ raw_copy_from_user(void *dst, const void __user *src, unsigned long size)
+ {
+@@ -102,8 +86,4 @@ __copy_from_user_flushcache(void *dst, const void __user *src, unsigned size)
+ kasan_check_write(dst, size);
+ return __copy_user_flushcache(dst, src, size);
+ }
+-
+-unsigned long
+-mcsafe_handle_tail(char *to, char *from, unsigned len);
+-
+ #endif /* _ASM_X86_UACCESS_64_H */
+diff --git a/arch/x86/kernel/cpu/mce/core.c b/arch/x86/kernel/cpu/mce/core.c
+index 07673a034d39c..69b2bb305a5a7 100644
+--- a/arch/x86/kernel/cpu/mce/core.c
++++ b/arch/x86/kernel/cpu/mce/core.c
+@@ -40,7 +40,6 @@
+ #include <linux/debugfs.h>
+ #include <linux/irq_work.h>
+ #include <linux/export.h>
+-#include <linux/jump_label.h>
+ #include <linux/set_memory.h>
+ #include <linux/task_work.h>
+ #include <linux/hardirq.h>
+@@ -2122,7 +2121,7 @@ void mce_disable_bank(int bank)
+ and older.
+ * mce=nobootlog Don't log MCEs from before booting.
+ * mce=bios_cmci_threshold Don't program the CMCI threshold
+- * mce=recovery force enable memcpy_mcsafe()
++ * mce=recovery force enable copy_mc_fragile()
+ */
+ static int __init mcheck_enable(char *str)
+ {
+@@ -2730,13 +2729,10 @@ static void __init mcheck_debugfs_init(void)
+ static void __init mcheck_debugfs_init(void) { }
+ #endif
+
+-DEFINE_STATIC_KEY_FALSE(mcsafe_key);
+-EXPORT_SYMBOL_GPL(mcsafe_key);
+-
+ static int __init mcheck_late_init(void)
+ {
+ if (mca_cfg.recovery)
+- static_branch_inc(&mcsafe_key);
++ enable_copy_mc_fragile();
+
+ mcheck_debugfs_init();
+
+diff --git a/arch/x86/kernel/quirks.c b/arch/x86/kernel/quirks.c
+index 896d74cb5081a..e0296983a2386 100644
+--- a/arch/x86/kernel/quirks.c
++++ b/arch/x86/kernel/quirks.c
+@@ -8,6 +8,7 @@
+
+ #include <asm/hpet.h>
+ #include <asm/setup.h>
++#include <asm/mce.h>
+
+ #if defined(CONFIG_X86_IO_APIC) && defined(CONFIG_SMP) && defined(CONFIG_PCI)
+
+@@ -624,10 +625,6 @@ static void amd_disable_seq_and_redirect_scrub(struct pci_dev *dev)
+ DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_16H_NB_F3,
+ amd_disable_seq_and_redirect_scrub);
+
+-#if defined(CONFIG_X86_64) && defined(CONFIG_X86_MCE)
+-#include <linux/jump_label.h>
+-#include <asm/string_64.h>
+-
+ /* Ivy Bridge, Haswell, Broadwell */
+ static void quirk_intel_brickland_xeon_ras_cap(struct pci_dev *pdev)
+ {
+@@ -636,7 +633,7 @@ static void quirk_intel_brickland_xeon_ras_cap(struct pci_dev *pdev)
+ pci_read_config_dword(pdev, 0x84, &capid0);
+
+ if (capid0 & 0x10)
+- static_branch_inc(&mcsafe_key);
++ enable_copy_mc_fragile();
+ }
+
+ /* Skylake */
+@@ -653,7 +650,7 @@ static void quirk_intel_purley_xeon_ras_cap(struct pci_dev *pdev)
+ * enabled, so memory machine check recovery is also enabled.
+ */
+ if ((capid0 & 0xc0) == 0xc0 || (capid5 & 0x1e0))
+- static_branch_inc(&mcsafe_key);
++ enable_copy_mc_fragile();
+
+ }
+ DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x0ec3, quirk_intel_brickland_xeon_ras_cap);
+@@ -661,7 +658,6 @@ DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x2fc0, quirk_intel_brickland_xeon_
+ DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x6fc0, quirk_intel_brickland_xeon_ras_cap);
+ DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x2083, quirk_intel_purley_xeon_ras_cap);
+ #endif
+-#endif
+
+ bool x86_apple_machine;
+ EXPORT_SYMBOL(x86_apple_machine);
+diff --git a/arch/x86/kernel/traps.c b/arch/x86/kernel/traps.c
+index 69cc823109740..d43df8de75a6a 100644
+--- a/arch/x86/kernel/traps.c
++++ b/arch/x86/kernel/traps.c
+@@ -196,7 +196,7 @@ static __always_inline void __user *error_get_trap_addr(struct pt_regs *regs)
+
+ DEFINE_IDTENTRY(exc_divide_error)
+ {
+- do_error_trap(regs, 0, "divide_error", X86_TRAP_DE, SIGFPE,
++ do_error_trap(regs, 0, "divide error", X86_TRAP_DE, SIGFPE,
+ FPE_INTDIV, error_get_trap_addr(regs));
+ }
+
+diff --git a/arch/x86/lib/Makefile b/arch/x86/lib/Makefile
+index 6110bce7237bd..02c3cec7e5157 100644
+--- a/arch/x86/lib/Makefile
++++ b/arch/x86/lib/Makefile
+@@ -44,6 +44,7 @@ obj-$(CONFIG_SMP) += msr-smp.o cache-smp.o
+ lib-y := delay.o misc.o cmdline.o cpu.o
+ lib-y += usercopy_$(BITS).o usercopy.o getuser.o putuser.o
+ lib-y += memcpy_$(BITS).o
++lib-$(CONFIG_ARCH_HAS_COPY_MC) += copy_mc.o copy_mc_64.o
+ lib-$(CONFIG_INSTRUCTION_DECODER) += insn.o inat.o insn-eval.o
+ lib-$(CONFIG_RANDOMIZE_BASE) += kaslr.o
+ lib-$(CONFIG_FUNCTION_ERROR_INJECTION) += error-inject.o
+diff --git a/arch/x86/lib/copy_mc.c b/arch/x86/lib/copy_mc.c
+new file mode 100644
+index 0000000000000..c13e8c9ee926b
+--- /dev/null
++++ b/arch/x86/lib/copy_mc.c
+@@ -0,0 +1,96 @@
++// SPDX-License-Identifier: GPL-2.0
++/* Copyright(c) 2016-2020 Intel Corporation. All rights reserved. */
++
++#include <linux/jump_label.h>
++#include <linux/uaccess.h>
++#include <linux/export.h>
++#include <linux/string.h>
++#include <linux/types.h>
++
++#include <asm/mce.h>
++
++#ifdef CONFIG_X86_MCE
++/*
++ * See COPY_MC_TEST for self-test of the copy_mc_fragile()
++ * implementation.
++ */
++static DEFINE_STATIC_KEY_FALSE(copy_mc_fragile_key);
++
++void enable_copy_mc_fragile(void)
++{
++	static_branch_inc(&copy_mc_fragile_key);
++}
++#define copy_mc_fragile_enabled (static_branch_unlikely(&copy_mc_fragile_key))
++
++/*
++ * Similar to copy_user_handle_tail, probe for the write fault point, or
++ * source exception point.
++ */
++__visible notrace unsigned long
++copy_mc_fragile_handle_tail(char *to, char *from, unsigned len)
++{
++ for (; len; --len, to++, from++)
++ if (copy_mc_fragile(to, from, 1))
++ break;
++ return len;
++}
++#else
++/*
++ * No point in doing careful copying, or consulting a static key when
++ * there is no #MC handler in the CONFIG_X86_MCE=n case.
++ */
++void enable_copy_mc_fragile(void)
++{
++}
++#define copy_mc_fragile_enabled (0)
++#endif
++
++unsigned long copy_mc_enhanced_fast_string(void *dst, const void *src, unsigned len);
++
++/**
++ * copy_mc_to_kernel - memory copy that handles source exceptions
++ *
++ * @dst: destination address
++ * @src: source address
++ * @len: number of bytes to copy
++ *
++ * Call into the 'fragile' version on systems that benefit from avoiding
++ * corner case poison consumption scenarios, For example, accessing
++ * poison across 2 cachelines with a single instruction. Almost all
++ * other uses case can use copy_mc_enhanced_fast_string() for a fast
++ * recoverable copy, or fallback to plain memcpy.
++ *
++ * Return 0 for success, or number of bytes not copied if there was an
++ * exception.
++ */
++unsigned long __must_check copy_mc_to_kernel(void *dst, const void *src, unsigned len)
++{
++ if (copy_mc_fragile_enabled)
++ return copy_mc_fragile(dst, src, len);
++ if (static_cpu_has(X86_FEATURE_ERMS))
++ return copy_mc_enhanced_fast_string(dst, src, len);
++ memcpy(dst, src, len);
++ return 0;
++}
++EXPORT_SYMBOL_GPL(copy_mc_to_kernel);
++
++unsigned long __must_check copy_mc_to_user(void *dst, const void *src, unsigned len)
++{
++ unsigned long ret;
++
++ if (copy_mc_fragile_enabled) {
++ __uaccess_begin();
++ ret = copy_mc_fragile(dst, src, len);
++ __uaccess_end();
++ return ret;
++ }
++
++ if (static_cpu_has(X86_FEATURE_ERMS)) {
++ __uaccess_begin();
++ ret = copy_mc_enhanced_fast_string(dst, src, len);
++ __uaccess_end();
++ return ret;
++ }
++
++ return copy_user_generic(dst, src, len);
++}
+diff --git a/arch/x86/lib/copy_mc_64.S b/arch/x86/lib/copy_mc_64.S
+new file mode 100644
+index 0000000000000..892d8915f609e
+--- /dev/null
++++ b/arch/x86/lib/copy_mc_64.S
+@@ -0,0 +1,163 @@
++/* SPDX-License-Identifier: GPL-2.0-only */
++/* Copyright(c) 2016-2020 Intel Corporation. All rights reserved. */
++
++#include <linux/linkage.h>
++#include <asm/copy_mc_test.h>
++#include <asm/export.h>
++#include <asm/asm.h>
++
++#ifndef CONFIG_UML
++
++#ifdef CONFIG_X86_MCE
++COPY_MC_TEST_CTL
++
++/*
++ * copy_mc_fragile - copy memory with indication if an exception / fault happened
++ *
++ * The 'fragile' version is opted into by platform quirks and takes
++ * pains to avoid unrecoverable corner cases like 'fast-string'
++ * instruction sequences, and consuming poison across a cacheline
++ * boundary. The non-fragile version is equivalent to memcpy()
++ * regardless of CPU machine-check-recovery capability.
++ */
++SYM_FUNC_START(copy_mc_fragile)
++ cmpl $8, %edx
++ /* Less than 8 bytes? Go to byte copy loop */
++ jb .L_no_whole_words
++
++ /* Check for bad alignment of source */
++ testl $7, %esi
++ /* Already aligned */
++ jz .L_8byte_aligned
++
++ /* Copy one byte at a time until source is 8-byte aligned */
++ movl %esi, %ecx
++ andl $7, %ecx
++ subl $8, %ecx
++ negl %ecx
++ subl %ecx, %edx
++.L_read_leading_bytes:
++ movb (%rsi), %al
++ COPY_MC_TEST_SRC %rsi 1 .E_leading_bytes
++ COPY_MC_TEST_DST %rdi 1 .E_leading_bytes
++.L_write_leading_bytes:
++ movb %al, (%rdi)
++ incq %rsi
++ incq %rdi
++ decl %ecx
++ jnz .L_read_leading_bytes
++
++.L_8byte_aligned:
++ movl %edx, %ecx
++ andl $7, %edx
++ shrl $3, %ecx
++ jz .L_no_whole_words
++
++.L_read_words:
++ movq (%rsi), %r8
++ COPY_MC_TEST_SRC %rsi 8 .E_read_words
++ COPY_MC_TEST_DST %rdi 8 .E_write_words
++.L_write_words:
++ movq %r8, (%rdi)
++ addq $8, %rsi
++ addq $8, %rdi
++ decl %ecx
++ jnz .L_read_words
++
++ /* Any trailing bytes? */
++.L_no_whole_words:
++ andl %edx, %edx
++ jz .L_done_memcpy_trap
++
++ /* Copy trailing bytes */
++ movl %edx, %ecx
++.L_read_trailing_bytes:
++ movb (%rsi), %al
++ COPY_MC_TEST_SRC %rsi 1 .E_trailing_bytes
++ COPY_MC_TEST_DST %rdi 1 .E_trailing_bytes
++.L_write_trailing_bytes:
++ movb %al, (%rdi)
++ incq %rsi
++ incq %rdi
++ decl %ecx
++ jnz .L_read_trailing_bytes
++
++ /* Copy successful. Return zero */
++.L_done_memcpy_trap:
++ xorl %eax, %eax
++.L_done:
++ ret
++SYM_FUNC_END(copy_mc_fragile)
++EXPORT_SYMBOL_GPL(copy_mc_fragile)
++
++ .section .fixup, "ax"
++ /*
++ * Return number of bytes not copied for any failure. Note that
++ * there is no "tail" handling since the source buffer is 8-byte
++ * aligned and poison is cacheline aligned.
++ */
++.E_read_words:
++ shll $3, %ecx
++.E_leading_bytes:
++ addl %edx, %ecx
++.E_trailing_bytes:
++ mov %ecx, %eax
++ jmp .L_done
++
++ /*
++ * For write fault handling, given the destination is unaligned,
++ * we handle faults on multi-byte writes with a byte-by-byte
++ * copy up to the write-protected page.
++ */
++.E_write_words:
++ shll $3, %ecx
++ addl %edx, %ecx
++ movl %ecx, %edx
++ jmp copy_mc_fragile_handle_tail
++
++ .previous
++
++ _ASM_EXTABLE_FAULT(.L_read_leading_bytes, .E_leading_bytes)
++ _ASM_EXTABLE_FAULT(.L_read_words, .E_read_words)
++ _ASM_EXTABLE_FAULT(.L_read_trailing_bytes, .E_trailing_bytes)
++ _ASM_EXTABLE(.L_write_leading_bytes, .E_leading_bytes)
++ _ASM_EXTABLE(.L_write_words, .E_write_words)
++ _ASM_EXTABLE(.L_write_trailing_bytes, .E_trailing_bytes)
++#endif /* CONFIG_X86_MCE */
++
++/*
++ * copy_mc_enhanced_fast_string - memory copy with exception handling
++ *
++ * Fast string copy + fault / exception handling. If the CPU does
++ * support machine check exception recovery, but does not support
++ * recovering from fast-string exceptions then this CPU needs to be
++ * added to the copy_mc_fragile_key set of quirks. Otherwise, absent any
++ * machine check recovery support this version should be no slower than
++ * standard memcpy.
++ */
++SYM_FUNC_START(copy_mc_enhanced_fast_string)
++ movq %rdi, %rax
++ movq %rdx, %rcx
++.L_copy:
++ rep movsb
++ /* Copy successful. Return zero */
++ xorl %eax, %eax
++ ret
++SYM_FUNC_END(copy_mc_enhanced_fast_string)
++
++ .section .fixup, "ax"
++.E_copy:
++ /*
++ * On fault %rcx is updated such that the copy instruction could
++ * optionally be restarted at the fault position, i.e. it
++ * contains 'bytes remaining'. A non-zero return indicates error
++ * to copy_mc_generic() users, or indicate short transfers to
++ * user-copy routines.
++ */
++ movq %rcx, %rax
++ ret
++
++ .previous
++
++ _ASM_EXTABLE_FAULT(.L_copy, .E_copy)
++#endif /* !CONFIG_UML */
+diff --git a/arch/x86/lib/memcpy_64.S b/arch/x86/lib/memcpy_64.S
+index bbcc05bcefadb..037faac46b0cc 100644
+--- a/arch/x86/lib/memcpy_64.S
++++ b/arch/x86/lib/memcpy_64.S
+@@ -4,7 +4,6 @@
+ #include <linux/linkage.h>
+ #include <asm/errno.h>
+ #include <asm/cpufeatures.h>
+-#include <asm/mcsafe_test.h>
+ #include <asm/alternative-asm.h>
+ #include <asm/export.h>
+
+@@ -187,117 +186,3 @@ SYM_FUNC_START_LOCAL(memcpy_orig)
+ SYM_FUNC_END(memcpy_orig)
+
+ .popsection
+-
+-#ifndef CONFIG_UML
+-
+-MCSAFE_TEST_CTL
+-
+-/*
+- * __memcpy_mcsafe - memory copy with machine check exception handling
+- * Note that we only catch machine checks when reading the source addresses.
+- * Writes to target are posted and don't generate machine checks.
+- */
+-SYM_FUNC_START(__memcpy_mcsafe)
+- cmpl $8, %edx
+- /* Less than 8 bytes? Go to byte copy loop */
+- jb .L_no_whole_words
+-
+- /* Check for bad alignment of source */
+- testl $7, %esi
+- /* Already aligned */
+- jz .L_8byte_aligned
+-
+- /* Copy one byte at a time until source is 8-byte aligned */
+- movl %esi, %ecx
+- andl $7, %ecx
+- subl $8, %ecx
+- negl %ecx
+- subl %ecx, %edx
+-.L_read_leading_bytes:
+- movb (%rsi), %al
+- MCSAFE_TEST_SRC %rsi 1 .E_leading_bytes
+- MCSAFE_TEST_DST %rdi 1 .E_leading_bytes
+-.L_write_leading_bytes:
+- movb %al, (%rdi)
+- incq %rsi
+- incq %rdi
+- decl %ecx
+- jnz .L_read_leading_bytes
+-
+-.L_8byte_aligned:
+- movl %edx, %ecx
+- andl $7, %edx
+- shrl $3, %ecx
+- jz .L_no_whole_words
+-
+-.L_read_words:
+- movq (%rsi), %r8
+- MCSAFE_TEST_SRC %rsi 8 .E_read_words
+- MCSAFE_TEST_DST %rdi 8 .E_write_words
+-.L_write_words:
+- movq %r8, (%rdi)
+- addq $8, %rsi
+- addq $8, %rdi
+- decl %ecx
+- jnz .L_read_words
+-
+- /* Any trailing bytes? */
+-.L_no_whole_words:
+- andl %edx, %edx
+- jz .L_done_memcpy_trap
+-
+- /* Copy trailing bytes */
+- movl %edx, %ecx
+-.L_read_trailing_bytes:
+- movb (%rsi), %al
+- MCSAFE_TEST_SRC %rsi 1 .E_trailing_bytes
+- MCSAFE_TEST_DST %rdi 1 .E_trailing_bytes
+-.L_write_trailing_bytes:
+- movb %al, (%rdi)
+- incq %rsi
+- incq %rdi
+- decl %ecx
+- jnz .L_read_trailing_bytes
+-
+- /* Copy successful. Return zero */
+-.L_done_memcpy_trap:
+- xorl %eax, %eax
+-.L_done:
+- ret
+-SYM_FUNC_END(__memcpy_mcsafe)
+-EXPORT_SYMBOL_GPL(__memcpy_mcsafe)
+-
+- .section .fixup, "ax"
+- /*
+- * Return number of bytes not copied for any failure. Note that
+- * there is no "tail" handling since the source buffer is 8-byte
+- * aligned and poison is cacheline aligned.
+- */
+-.E_read_words:
+- shll $3, %ecx
+-.E_leading_bytes:
+- addl %edx, %ecx
+-.E_trailing_bytes:
+- mov %ecx, %eax
+- jmp .L_done
+-
+- /*
+- * For write fault handling, given the destination is unaligned,
+- * we handle faults on multi-byte writes with a byte-by-byte
+- * copy up to the write-protected page.
+- */
+-.E_write_words:
+- shll $3, %ecx
+- addl %edx, %ecx
+- movl %ecx, %edx
+- jmp mcsafe_handle_tail
+-
+- .previous
+-
+- _ASM_EXTABLE_FAULT(.L_read_leading_bytes, .E_leading_bytes)
+- _ASM_EXTABLE_FAULT(.L_read_words, .E_read_words)
+- _ASM_EXTABLE_FAULT(.L_read_trailing_bytes, .E_trailing_bytes)
+- _ASM_EXTABLE(.L_write_leading_bytes, .E_leading_bytes)
+- _ASM_EXTABLE(.L_write_words, .E_write_words)
+- _ASM_EXTABLE(.L_write_trailing_bytes, .E_trailing_bytes)
+-#endif
+diff --git a/arch/x86/lib/usercopy_64.c b/arch/x86/lib/usercopy_64.c
+index 1847e993ac63a..508c81e97ab10 100644
+--- a/arch/x86/lib/usercopy_64.c
++++ b/arch/x86/lib/usercopy_64.c
+@@ -56,27 +56,6 @@ unsigned long clear_user(void __user *to, unsigned long n)
+ }
+ EXPORT_SYMBOL(clear_user);
+
+-/*
+- * Similar to copy_user_handle_tail, probe for the write fault point,
+- * but reuse __memcpy_mcsafe in case a new read error is encountered.
+- * clac() is handled in _copy_to_iter_mcsafe().
+- */
+-__visible notrace unsigned long
+-mcsafe_handle_tail(char *to, char *from, unsigned len)
+-{
+- for (; len; --len, to++, from++) {
+- /*
+- * Call the assembly routine back directly since
+- * memcpy_mcsafe() may silently fallback to memcpy.
+- */
+- unsigned long rem = __memcpy_mcsafe(to, from, 1);
+-
+- if (rem)
+- break;
+- }
+- return len;
+-}
+-
+ #ifdef CONFIG_ARCH_HAS_UACCESS_FLUSHCACHE
+ /**
+ * clean_cache_range - write back a cache range with CLWB
+diff --git a/arch/x86/pci/intel_mid_pci.c b/arch/x86/pci/intel_mid_pci.c
+index 00c62115f39cd..0aaf31917061d 100644
+--- a/arch/x86/pci/intel_mid_pci.c
++++ b/arch/x86/pci/intel_mid_pci.c
+@@ -33,6 +33,7 @@
+ #include <asm/hw_irq.h>
+ #include <asm/io_apic.h>
+ #include <asm/intel-mid.h>
++#include <asm/acpi.h>
+
+ #define PCIE_CAP_OFFSET 0x100
+
+diff --git a/arch/x86/xen/enlighten_pv.c b/arch/x86/xen/enlighten_pv.c
+index c46b9f2e732ff..6e39eda00c2c9 100644
+--- a/arch/x86/xen/enlighten_pv.c
++++ b/arch/x86/xen/enlighten_pv.c
+@@ -1438,6 +1438,15 @@ asmlinkage __visible void __init xen_start_kernel(void)
+ x86_init.mpparse.get_smp_config = x86_init_uint_noop;
+
+ xen_boot_params_init_edd();
++
++#ifdef CONFIG_ACPI
++ /*
++ * Disable selecting "Firmware First mode" for correctable
++ * memory errors, as it is the hypervisor's duty to
++ * decide.
++ */
++ acpi_disable_cmcff = 1;
++#endif
+ }
+
+ if (!boot_params.screen_info.orig_video_isVGA)
+diff --git a/drivers/ata/ahci.h b/drivers/ata/ahci.h
+index d991dd46e89cc..98b8baa47dc5e 100644
+--- a/drivers/ata/ahci.h
++++ b/drivers/ata/ahci.h
+@@ -240,6 +240,8 @@ enum {
+ as default lpm_policy */
+ AHCI_HFLAG_SUSPEND_PHYS = (1 << 26), /* handle PHYs during
+ suspend/resume */
++ AHCI_HFLAG_IGN_NOTSUPP_POWER_ON = (1 << 27), /* ignore -EOPNOTSUPP
++ from phy_power_on() */
+
+ /* ap->flags bits */
+
+diff --git a/drivers/ata/ahci_mvebu.c b/drivers/ata/ahci_mvebu.c
+index d4bba3ace45d7..3ad46d26d9d51 100644
+--- a/drivers/ata/ahci_mvebu.c
++++ b/drivers/ata/ahci_mvebu.c
+@@ -227,7 +227,7 @@ static const struct ahci_mvebu_plat_data ahci_mvebu_armada_380_plat_data = {
+
+ static const struct ahci_mvebu_plat_data ahci_mvebu_armada_3700_plat_data = {
+ .plat_config = ahci_mvebu_armada_3700_config,
+- .flags = AHCI_HFLAG_SUSPEND_PHYS,
++ .flags = AHCI_HFLAG_SUSPEND_PHYS | AHCI_HFLAG_IGN_NOTSUPP_POWER_ON,
+ };
+
+ static const struct of_device_id ahci_mvebu_of_match[] = {
+diff --git a/drivers/ata/libahci_platform.c b/drivers/ata/libahci_platform.c
+index 129556fcf6be7..a1cbb894e5f0a 100644
+--- a/drivers/ata/libahci_platform.c
++++ b/drivers/ata/libahci_platform.c
+@@ -59,7 +59,7 @@ int ahci_platform_enable_phys(struct ahci_host_priv *hpriv)
+ }
+
+ rc = phy_power_on(hpriv->phys[i]);
+- if (rc) {
++ if (rc && !(rc == -EOPNOTSUPP && (hpriv->flags & AHCI_HFLAG_IGN_NOTSUPP_POWER_ON))) {
+ phy_exit(hpriv->phys[i]);
+ goto disable_phys;
+ }
+diff --git a/drivers/ata/sata_rcar.c b/drivers/ata/sata_rcar.c
+index 141ac600b64c8..44b0ed8f6bb8a 100644
+--- a/drivers/ata/sata_rcar.c
++++ b/drivers/ata/sata_rcar.c
+@@ -120,7 +120,7 @@
+ /* Descriptor table word 0 bit (when DTA32M = 1) */
+ #define SATA_RCAR_DTEND BIT(0)
+
+-#define SATA_RCAR_DMA_BOUNDARY 0x1FFFFFFEUL
++#define SATA_RCAR_DMA_BOUNDARY 0x1FFFFFFFUL
+
+ /* Gen2 Physical Layer Control Registers */
+ #define RCAR_GEN2_PHY_CTL1_REG 0x1704
+diff --git a/drivers/base/firmware_loader/fallback_platform.c b/drivers/base/firmware_loader/fallback_platform.c
+index 685edb7dd05a7..6958ab1a80593 100644
+--- a/drivers/base/firmware_loader/fallback_platform.c
++++ b/drivers/base/firmware_loader/fallback_platform.c
+@@ -17,7 +17,7 @@ int firmware_fallback_platform(struct fw_priv *fw_priv, u32 opt_flags)
+ if (!(opt_flags & FW_OPT_FALLBACK_PLATFORM))
+ return -ENOENT;
+
+- rc = security_kernel_load_data(LOADING_FIRMWARE_EFI_EMBEDDED);
++ rc = security_kernel_load_data(LOADING_FIRMWARE);
+ if (rc)
+ return rc;
+
+diff --git a/drivers/crypto/chelsio/chtls/chtls_cm.c b/drivers/crypto/chelsio/chtls/chtls_cm.c
+index bad8e90ba168d..62fbc7df022bc 100644
+--- a/drivers/crypto/chelsio/chtls/chtls_cm.c
++++ b/drivers/crypto/chelsio/chtls/chtls_cm.c
+@@ -772,14 +772,13 @@ static int chtls_pass_open_rpl(struct chtls_dev *cdev, struct sk_buff *skb)
+ if (rpl->status != CPL_ERR_NONE) {
+ pr_info("Unexpected PASS_OPEN_RPL status %u for STID %u\n",
+ rpl->status, stid);
+- return CPL_RET_BUF_DONE;
++ } else {
++ cxgb4_free_stid(cdev->tids, stid, listen_ctx->lsk->sk_family);
++ sock_put(listen_ctx->lsk);
++ kfree(listen_ctx);
++ module_put(THIS_MODULE);
+ }
+- cxgb4_free_stid(cdev->tids, stid, listen_ctx->lsk->sk_family);
+- sock_put(listen_ctx->lsk);
+- kfree(listen_ctx);
+- module_put(THIS_MODULE);
+-
+- return 0;
++ return CPL_RET_BUF_DONE;
+ }
+
+ static int chtls_close_listsrv_rpl(struct chtls_dev *cdev, struct sk_buff *skb)
+@@ -796,15 +795,13 @@ static int chtls_close_listsrv_rpl(struct chtls_dev *cdev, struct sk_buff *skb)
+ if (rpl->status != CPL_ERR_NONE) {
+ pr_info("Unexpected CLOSE_LISTSRV_RPL status %u for STID %u\n",
+ rpl->status, stid);
+- return CPL_RET_BUF_DONE;
++ } else {
++ cxgb4_free_stid(cdev->tids, stid, listen_ctx->lsk->sk_family);
++ sock_put(listen_ctx->lsk);
++ kfree(listen_ctx);
++ module_put(THIS_MODULE);
+ }
+-
+- cxgb4_free_stid(cdev->tids, stid, listen_ctx->lsk->sk_family);
+- sock_put(listen_ctx->lsk);
+- kfree(listen_ctx);
+- module_put(THIS_MODULE);
+-
+- return 0;
++ return CPL_RET_BUF_DONE;
+ }
+
+ static void chtls_purge_wr_queue(struct sock *sk)
+@@ -1513,7 +1510,6 @@ static void add_to_reap_list(struct sock *sk)
+ struct chtls_sock *csk = sk->sk_user_data;
+
+ local_bh_disable();
+- bh_lock_sock(sk);
+ release_tcp_port(sk); /* release the port immediately */
+
+ spin_lock(&reap_list_lock);
+@@ -1522,7 +1518,6 @@ static void add_to_reap_list(struct sock *sk)
+ if (!csk->passive_reap_next)
+ schedule_work(&reap_task);
+ spin_unlock(&reap_list_lock);
+- bh_unlock_sock(sk);
+ local_bh_enable();
+ }
+
+diff --git a/drivers/crypto/chelsio/chtls/chtls_io.c b/drivers/crypto/chelsio/chtls/chtls_io.c
+index 9fb5ca6682ea2..188d871f6b8cd 100644
+--- a/drivers/crypto/chelsio/chtls/chtls_io.c
++++ b/drivers/crypto/chelsio/chtls/chtls_io.c
+@@ -1585,6 +1585,7 @@ skip_copy:
+ tp->urg_data = 0;
+
+ if ((avail + offset) >= skb->len) {
++ struct sk_buff *next_skb;
+ if (ULP_SKB_CB(skb)->flags & ULPCB_FLAG_TLS_HDR) {
+ tp->copied_seq += skb->len;
+ hws->rcvpld = skb->hdr_len;
+@@ -1595,8 +1596,10 @@ skip_copy:
+ chtls_free_skb(sk, skb);
+ buffers_freed++;
+ hws->copied_seq = 0;
+- if (copied >= target &&
+- !skb_peek(&sk->sk_receive_queue))
++ next_skb = skb_peek(&sk->sk_receive_queue);
++ if (copied >= target && !next_skb)
++ break;
++ if (ULP_SKB_CB(next_skb)->flags & ULPCB_FLAG_TLS_HDR)
+ break;
+ }
+ } while (len > 0);
+diff --git a/drivers/firmware/efi/libstub/arm64-stub.c b/drivers/firmware/efi/libstub/arm64-stub.c
+index e5bfac79e5ac9..04f5d79d42653 100644
+--- a/drivers/firmware/efi/libstub/arm64-stub.c
++++ b/drivers/firmware/efi/libstub/arm64-stub.c
+@@ -62,10 +62,12 @@ efi_status_t handle_kernel_image(unsigned long *image_addr,
+ status = efi_get_random_bytes(sizeof(phys_seed),
+ (u8 *)&phys_seed);
+ if (status == EFI_NOT_FOUND) {
+- efi_info("EFI_RNG_PROTOCOL unavailable, no randomness supplied\n");
++ efi_info("EFI_RNG_PROTOCOL unavailable, KASLR will be disabled\n");
++ efi_nokaslr = true;
+ } else if (status != EFI_SUCCESS) {
+- efi_err("efi_get_random_bytes() failed\n");
+- return status;
++ efi_err("efi_get_random_bytes() failed (0x%lx), KASLR will be disabled\n",
++ status);
++ efi_nokaslr = true;
+ }
+ } else {
+ efi_info("KASLR disabled on kernel command line\n");
+diff --git a/drivers/firmware/efi/libstub/fdt.c b/drivers/firmware/efi/libstub/fdt.c
+index 11ecf3c4640eb..368cd60000eec 100644
+--- a/drivers/firmware/efi/libstub/fdt.c
++++ b/drivers/firmware/efi/libstub/fdt.c
+@@ -136,7 +136,7 @@ static efi_status_t update_fdt(void *orig_fdt, unsigned long orig_fdt_size,
+ if (status)
+ goto fdt_set_fail;
+
+- if (IS_ENABLED(CONFIG_RANDOMIZE_BASE)) {
++ if (IS_ENABLED(CONFIG_RANDOMIZE_BASE) && !efi_nokaslr) {
+ efi_status_t efi_status;
+
+ efi_status = efi_get_random_bytes(sizeof(fdt_val64),
+@@ -145,8 +145,6 @@ static efi_status_t update_fdt(void *orig_fdt, unsigned long orig_fdt_size,
+ status = fdt_setprop_var(fdt, node, "kaslr-seed", fdt_val64);
+ if (status)
+ goto fdt_set_fail;
+- } else if (efi_status != EFI_NOT_FOUND) {
+- return efi_status;
+ }
+ }
+
+diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c
+index e7532e7d74e91..0e1f11669b072 100644
+--- a/drivers/gpu/drm/i915/i915_debugfs.c
++++ b/drivers/gpu/drm/i915/i915_debugfs.c
+@@ -323,6 +323,7 @@ static void print_context_stats(struct seq_file *m,
+ }
+ i915_gem_context_unlock_engines(ctx);
+
++ mutex_lock(&ctx->mutex);
+ if (!IS_ERR_OR_NULL(ctx->file_priv)) {
+ struct file_stats stats = {
+ .vm = rcu_access_pointer(ctx->vm),
+@@ -343,6 +344,7 @@ static void print_context_stats(struct seq_file *m,
+
+ print_file_stats(m, name, stats);
+ }
++ mutex_unlock(&ctx->mutex);
+
+ spin_lock(&i915->gem.contexts.lock);
+ list_safe_reset_next(ctx, cn, link);
+diff --git a/drivers/infiniband/core/addr.c b/drivers/infiniband/core/addr.c
+index 3a98439bba832..0abce004a9591 100644
+--- a/drivers/infiniband/core/addr.c
++++ b/drivers/infiniband/core/addr.c
+@@ -647,13 +647,12 @@ static void process_one_req(struct work_struct *_work)
+ req->callback = NULL;
+
+ spin_lock_bh(&lock);
++ /*
++ * Although the work will normally have been canceled by the workqueue,
++ * it can still be requeued as long as it is on the req_list.
++ */
++ cancel_delayed_work(&req->work);
+ if (!list_empty(&req->list)) {
+- /*
+- * Although the work will normally have been canceled by the
+- * workqueue, it can still be requeued as long as it is on the
+- * req_list.
+- */
+- cancel_delayed_work(&req->work);
+ list_del_init(&req->list);
+ kfree(req);
+ }
+diff --git a/drivers/md/dm-writecache.c b/drivers/md/dm-writecache.c
+index 1533419f18758..de467a1303db3 100644
+--- a/drivers/md/dm-writecache.c
++++ b/drivers/md/dm-writecache.c
+@@ -49,7 +49,7 @@ do { \
+ #define pmem_assign(dest, src) ((dest) = (src))
+ #endif
+
+-#if defined(__HAVE_ARCH_MEMCPY_MCSAFE) && defined(DM_WRITECACHE_HAS_PMEM)
++#if IS_ENABLED(CONFIG_ARCH_HAS_COPY_MC) && defined(DM_WRITECACHE_HAS_PMEM)
+ #define DM_WRITECACHE_HANDLE_HARDWARE_ERRORS
+ #endif
+
+@@ -992,7 +992,8 @@ static void writecache_resume(struct dm_target *ti)
+ }
+ wc->freelist_size = 0;
+
+- r = memcpy_mcsafe(&sb_seq_count, &sb(wc)->seq_count, sizeof(uint64_t));
++ r = copy_mc_to_kernel(&sb_seq_count, &sb(wc)->seq_count,
++ sizeof(uint64_t));
+ if (r) {
+ writecache_error(wc, r, "hardware memory error when reading superblock: %d", r);
+ sb_seq_count = cpu_to_le64(0);
+@@ -1008,7 +1009,8 @@ static void writecache_resume(struct dm_target *ti)
+ e->seq_count = -1;
+ continue;
+ }
+- r = memcpy_mcsafe(&wme, memory_entry(wc, e), sizeof(struct wc_memory_entry));
++ r = copy_mc_to_kernel(&wme, memory_entry(wc, e),
++ sizeof(struct wc_memory_entry));
+ if (r) {
+ writecache_error(wc, r, "hardware memory error when reading metadata entry %lu: %d",
+ (unsigned long)b, r);
+@@ -1206,7 +1208,7 @@ static void bio_copy_block(struct dm_writecache *wc, struct bio *bio, void *data
+
+ if (rw == READ) {
+ int r;
+- r = memcpy_mcsafe(buf, data, size);
++ r = copy_mc_to_kernel(buf, data, size);
+ flush_dcache_page(bio_page(bio));
+ if (unlikely(r)) {
+ writecache_error(wc, r, "hardware memory error when reading data: %d", r);
+@@ -2349,7 +2351,7 @@ invalid_optional:
+ }
+ }
+
+- r = memcpy_mcsafe(&s, sb(wc), sizeof(struct wc_memory_superblock));
++ r = copy_mc_to_kernel(&s, sb(wc), sizeof(struct wc_memory_superblock));
+ if (r) {
+ ti->error = "Hardware memory error when reading superblock";
+ goto bad;
+@@ -2360,7 +2362,8 @@ invalid_optional:
+ ti->error = "Unable to initialize device";
+ goto bad;
+ }
+- r = memcpy_mcsafe(&s, sb(wc), sizeof(struct wc_memory_superblock));
++ r = copy_mc_to_kernel(&s, sb(wc),
++ sizeof(struct wc_memory_superblock));
+ if (r) {
+ ti->error = "Hardware memory error when reading superblock";
+ goto bad;
+diff --git a/drivers/misc/cardreader/rtsx_pcr.c b/drivers/misc/cardreader/rtsx_pcr.c
+index 82246f7aec6fb..e39b118b945f8 100644
+--- a/drivers/misc/cardreader/rtsx_pcr.c
++++ b/drivers/misc/cardreader/rtsx_pcr.c
+@@ -1172,10 +1172,6 @@ void rtsx_pci_init_ocp(struct rtsx_pcr *pcr)
+ rtsx_pci_write_register(pcr, REG_OCPGLITCH,
+ SD_OCP_GLITCH_MASK, pcr->hw_param.ocp_glitch);
+ rtsx_pci_enable_ocp(pcr);
+- } else {
+- /* OC power down */
+- rtsx_pci_write_register(pcr, FPDCTL, OC_POWER_DOWN,
+- OC_POWER_DOWN);
+ }
+ }
+ }
+diff --git a/drivers/misc/cxl/pci.c b/drivers/misc/cxl/pci.c
+index 25a9dd9c0c1b5..2ba899f5659ff 100644
+--- a/drivers/misc/cxl/pci.c
++++ b/drivers/misc/cxl/pci.c
+@@ -393,8 +393,8 @@ int cxl_calc_capp_routing(struct pci_dev *dev, u64 *chipid,
+ *capp_unit_id = get_capp_unit_id(np, *phb_index);
+ of_node_put(np);
+ if (!*capp_unit_id) {
+- pr_err("cxl: invalid capp unit id (phb_index: %d)\n",
+- *phb_index);
++ pr_err("cxl: No capp unit found for PHB[%lld,%d]. Make sure the adapter is on a capi-compatible slot\n",
++ *chipid, *phb_index);
+ return -ENODEV;
+ }
+
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+index dd07db656a5c3..f3c125d50d7a0 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+@@ -1158,16 +1158,6 @@ static void bnxt_queue_sp_work(struct bnxt *bp)
+ schedule_work(&bp->sp_task);
+ }
+
+-static void bnxt_cancel_sp_work(struct bnxt *bp)
+-{
+- if (BNXT_PF(bp)) {
+- flush_workqueue(bnxt_pf_wq);
+- } else {
+- cancel_work_sync(&bp->sp_task);
+- cancel_delayed_work_sync(&bp->fw_reset_task);
+- }
+-}
+-
+ static void bnxt_sched_reset(struct bnxt *bp, struct bnxt_rx_ring_info *rxr)
+ {
+ if (!rxr->bnapi->in_reset) {
+@@ -4198,7 +4188,8 @@ static int bnxt_hwrm_do_send_msg(struct bnxt *bp, void *msg, u32 msg_len,
+ u32 bar_offset = BNXT_GRCPF_REG_CHIMP_COMM;
+ u16 dst = BNXT_HWRM_CHNL_CHIMP;
+
+- if (BNXT_NO_FW_ACCESS(bp))
++ if (BNXT_NO_FW_ACCESS(bp) &&
++ le16_to_cpu(req->req_type) != HWRM_FUNC_RESET)
+ return -EBUSY;
+
+ if (msg_len > BNXT_HWRM_MAX_REQ_LEN) {
+@@ -9247,7 +9238,10 @@ int bnxt_open_nic(struct bnxt *bp, bool irq_re_init, bool link_re_init)
+ {
+ int rc = 0;
+
+- rc = __bnxt_open_nic(bp, irq_re_init, link_re_init);
++ if (test_bit(BNXT_STATE_ABORT_ERR, &bp->state))
++ rc = -EIO;
++ if (!rc)
++ rc = __bnxt_open_nic(bp, irq_re_init, link_re_init);
+ if (rc) {
+ netdev_err(bp->dev, "nic open fail (rc: %x)\n", rc);
+ dev_close(bp->dev);
+@@ -11505,15 +11499,17 @@ static void bnxt_remove_one(struct pci_dev *pdev)
+ if (BNXT_PF(bp))
+ bnxt_sriov_disable(bp);
+
+- clear_bit(BNXT_STATE_IN_FW_RESET, &bp->state);
+- bnxt_cancel_sp_work(bp);
+- bp->sp_event = 0;
+-
+- bnxt_dl_fw_reporters_destroy(bp, true);
+ if (BNXT_PF(bp))
+ devlink_port_type_clear(&bp->dl_port);
+ pci_disable_pcie_error_reporting(pdev);
+ unregister_netdev(dev);
++ clear_bit(BNXT_STATE_IN_FW_RESET, &bp->state);
++ /* Flush any pending tasks */
++ cancel_work_sync(&bp->sp_task);
++ cancel_delayed_work_sync(&bp->fw_reset_task);
++ bp->sp_event = 0;
++
++ bnxt_dl_fw_reporters_destroy(bp, true);
+ bnxt_dl_unregister(bp);
+ bnxt_shutdown_tc(bp);
+
+@@ -12238,6 +12234,9 @@ static pci_ers_result_t bnxt_io_error_detected(struct pci_dev *pdev,
+ return PCI_ERS_RESULT_DISCONNECT;
+ }
+
++ if (state == pci_channel_io_frozen)
++ set_bit(BNXT_STATE_PCI_CHANNEL_IO_FROZEN, &bp->state);
++
+ if (netif_running(netdev))
+ bnxt_close(netdev);
+
+@@ -12264,7 +12263,7 @@ static pci_ers_result_t bnxt_io_slot_reset(struct pci_dev *pdev)
+ {
+ struct net_device *netdev = pci_get_drvdata(pdev);
+ struct bnxt *bp = netdev_priv(netdev);
+- int err = 0;
++ int err = 0, off;
+ pci_ers_result_t result = PCI_ERS_RESULT_DISCONNECT;
+
+ netdev_info(bp->dev, "PCI Slot Reset\n");
+@@ -12276,6 +12275,20 @@ static pci_ers_result_t bnxt_io_slot_reset(struct pci_dev *pdev)
+ "Cannot re-enable PCI device after reset.\n");
+ } else {
+ pci_set_master(pdev);
++ /* Upon fatal error, the device's internal logic that latches
++ * the BAR values is reset and will be restored only upon
++ * rewriting the BARs.
++ *
++ * As pci_restore_state() does not re-write the BARs if the
++ * value is the same as the previously saved value, the driver
++ * needs to write the BARs to 0 to force a restore after a fatal error.
++ */
++ if (test_and_clear_bit(BNXT_STATE_PCI_CHANNEL_IO_FROZEN,
++ &bp->state)) {
++ for (off = PCI_BASE_ADDRESS_0;
++ off <= PCI_BASE_ADDRESS_5; off += 4)
++ pci_write_config_dword(bp->pdev, off, 0);
++ }
+ pci_restore_state(pdev);
+ pci_save_state(pdev);
+
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.h b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
+index 440b43c8068f1..a80ac2ae57a68 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.h
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
+@@ -1672,6 +1672,7 @@ struct bnxt {
+ #define BNXT_STATE_ABORT_ERR 5
+ #define BNXT_STATE_FW_FATAL_COND 6
+ #define BNXT_STATE_DRV_REGISTERED 7
++#define BNXT_STATE_PCI_CHANNEL_IO_FROZEN 8
+
+ #define BNXT_NO_FW_ACCESS(bp) \
+ (test_bit(BNXT_STATE_FW_FATAL_COND, &(bp)->state) || \
+diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_filter.c b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_filter.c
+index ff0d82e2535da..fd33c888046b9 100644
+--- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_filter.c
++++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_filter.c
+@@ -145,13 +145,13 @@ static int configure_filter_smac(struct adapter *adap, struct filter_entry *f)
+ int err;
+
+ /* do a set-tcb for smac-sel and CWR bit.. */
+- err = set_tcb_tflag(adap, f, f->tid, TF_CCTRL_CWR_S, 1, 1);
+- if (err)
+- goto smac_err;
+-
+ err = set_tcb_field(adap, f, f->tid, TCB_SMAC_SEL_W,
+ TCB_SMAC_SEL_V(TCB_SMAC_SEL_M),
+ TCB_SMAC_SEL_V(f->smt->idx), 1);
++ if (err)
++ goto smac_err;
++
++ err = set_tcb_tflag(adap, f, f->tid, TF_CCTRL_CWR_S, 1, 1);
+ if (!err)
+ return 0;
+
+@@ -865,6 +865,7 @@ int set_filter_wr(struct adapter *adapter, int fidx)
+ FW_FILTER_WR_DIRSTEERHASH_V(f->fs.dirsteerhash) |
+ FW_FILTER_WR_LPBK_V(f->fs.action == FILTER_SWITCH) |
+ FW_FILTER_WR_DMAC_V(f->fs.newdmac) |
++ FW_FILTER_WR_SMAC_V(f->fs.newsmac) |
+ FW_FILTER_WR_INSVLAN_V(f->fs.newvlan == VLAN_INSERT ||
+ f->fs.newvlan == VLAN_REWRITE) |
+ FW_FILTER_WR_RMVLAN_V(f->fs.newvlan == VLAN_REMOVE ||
+@@ -882,7 +883,7 @@ int set_filter_wr(struct adapter *adapter, int fidx)
+ FW_FILTER_WR_OVLAN_VLD_V(f->fs.val.ovlan_vld) |
+ FW_FILTER_WR_IVLAN_VLDM_V(f->fs.mask.ivlan_vld) |
+ FW_FILTER_WR_OVLAN_VLDM_V(f->fs.mask.ovlan_vld));
+- fwr->smac_sel = 0;
++ fwr->smac_sel = f->smt->idx;
+ fwr->rx_chan_rx_rpl_iq =
+ htons(FW_FILTER_WR_RX_CHAN_V(0) |
+ FW_FILTER_WR_RX_RPL_IQ_V(adapter->sge.fw_evtq.abs_id));
+@@ -1321,11 +1322,8 @@ static void mk_act_open_req6(struct filter_entry *f, struct sk_buff *skb,
+ TX_QUEUE_V(f->fs.nat_mode) |
+ T5_OPT_2_VALID_F |
+ RX_CHANNEL_V(cxgb4_port_e2cchan(f->dev)) |
+- CONG_CNTRL_V((f->fs.action == FILTER_DROP) |
+- (f->fs.dirsteer << 1)) |
+ PACE_V((f->fs.maskhash) |
+- ((f->fs.dirsteerhash) << 1)) |
+- CCTRL_ECN_V(f->fs.action == FILTER_SWITCH));
++ ((f->fs.dirsteerhash) << 1)));
+ }
+
+ static void mk_act_open_req(struct filter_entry *f, struct sk_buff *skb,
+@@ -1361,11 +1359,8 @@ static void mk_act_open_req(struct filter_entry *f, struct sk_buff *skb,
+ TX_QUEUE_V(f->fs.nat_mode) |
+ T5_OPT_2_VALID_F |
+ RX_CHANNEL_V(cxgb4_port_e2cchan(f->dev)) |
+- CONG_CNTRL_V((f->fs.action == FILTER_DROP) |
+- (f->fs.dirsteer << 1)) |
+ PACE_V((f->fs.maskhash) |
+- ((f->fs.dirsteerhash) << 1)) |
+- CCTRL_ECN_V(f->fs.action == FILTER_SWITCH));
++ ((f->fs.dirsteerhash) << 1)));
+ }
+
+ static int cxgb4_set_hash_filter(struct net_device *dev,
+@@ -2037,6 +2032,20 @@ void hash_filter_rpl(struct adapter *adap, const struct cpl_act_open_rpl *rpl)
+ }
+ return;
+ }
++ switch (f->fs.action) {
++ case FILTER_PASS:
++ if (f->fs.dirsteer)
++ set_tcb_tflag(adap, f, tid,
++ TF_DIRECT_STEER_S, 1, 1);
++ break;
++ case FILTER_DROP:
++ set_tcb_tflag(adap, f, tid, TF_DROP_S, 1, 1);
++ break;
++ case FILTER_SWITCH:
++ set_tcb_tflag(adap, f, tid, TF_LPBK_S, 1, 1);
++ break;
++ }
++
+ break;
+
+ default:
+@@ -2104,22 +2113,11 @@ void filter_rpl(struct adapter *adap, const struct cpl_set_tcb_rpl *rpl)
+ if (ctx)
+ ctx->result = 0;
+ } else if (ret == FW_FILTER_WR_FLT_ADDED) {
+- int err = 0;
+-
+- if (f->fs.newsmac)
+- err = configure_filter_smac(adap, f);
+-
+- if (!err) {
+- f->pending = 0; /* async setup completed */
+- f->valid = 1;
+- if (ctx) {
+- ctx->result = 0;
+- ctx->tid = idx;
+- }
+- } else {
+- clear_filter(adap, f);
+- if (ctx)
+- ctx->result = err;
++ f->pending = 0; /* async setup completed */
++ f->valid = 1;
++ if (ctx) {
++ ctx->result = 0;
++ ctx->tid = idx;
+ }
+ } else {
+ /* Something went wrong. Issue a warning about the
+diff --git a/drivers/net/ethernet/chelsio/cxgb4/t4_tcb.h b/drivers/net/ethernet/chelsio/cxgb4/t4_tcb.h
+index 50232e063f49e..92473dda55d9f 100644
+--- a/drivers/net/ethernet/chelsio/cxgb4/t4_tcb.h
++++ b/drivers/net/ethernet/chelsio/cxgb4/t4_tcb.h
+@@ -50,6 +50,10 @@
+ #define TCB_T_FLAGS_M 0xffffffffffffffffULL
+ #define TCB_T_FLAGS_V(x) ((__u64)(x) << TCB_T_FLAGS_S)
+
++#define TF_DROP_S 22
++#define TF_DIRECT_STEER_S 23
++#define TF_LPBK_S 59
++
+ #define TF_CCTRL_ECE_S 60
+ #define TF_CCTRL_CWR_S 61
+ #define TF_CCTRL_RFR_S 62
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
+index 9162856de1b19..ab15f1c588b3a 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
+@@ -3146,8 +3146,8 @@ static void hclgevf_uninit_hdev(struct hclgevf_dev *hdev)
+ hclgevf_uninit_msi(hdev);
+ }
+
+- hclgevf_pci_uninit(hdev);
+ hclgevf_cmd_uninit(hdev);
++ hclgevf_pci_uninit(hdev);
+ hclgevf_uninit_mac_list(hdev);
+ }
+
+diff --git a/drivers/net/ethernet/ibm/ibmveth.c b/drivers/net/ethernet/ibm/ibmveth.c
+index 7ef3369953b6a..c3ec9ceed833e 100644
+--- a/drivers/net/ethernet/ibm/ibmveth.c
++++ b/drivers/net/ethernet/ibm/ibmveth.c
+@@ -1031,12 +1031,6 @@ static int ibmveth_is_packet_unsupported(struct sk_buff *skb,
+ ret = -EOPNOTSUPP;
+ }
+
+- if (!ether_addr_equal(ether_header->h_source, netdev->dev_addr)) {
+- netdev_dbg(netdev, "source packet MAC address does not match veth device's, dropping packet.\n");
+- netdev->stats.tx_dropped++;
+- ret = -EOPNOTSUPP;
+- }
+-
+ return ret;
+ }
+
+diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c
+index 3e0aab04d86fb..f96bb3dab5a8b 100644
+--- a/drivers/net/ethernet/ibm/ibmvnic.c
++++ b/drivers/net/ethernet/ibm/ibmvnic.c
+@@ -1828,9 +1828,13 @@ static int ibmvnic_set_mac(struct net_device *netdev, void *p)
+ int rc;
+
+ rc = 0;
+- ether_addr_copy(adapter->mac_addr, addr->sa_data);
+- if (adapter->state != VNIC_PROBED)
++ if (!is_valid_ether_addr(addr->sa_data))
++ return -EADDRNOTAVAIL;
++
++ if (adapter->state != VNIC_PROBED) {
++ ether_addr_copy(adapter->mac_addr, addr->sa_data);
+ rc = __ibmvnic_set_mac(netdev, addr->sa_data);
++ }
+
+ return rc;
+ }
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/core.c b/drivers/net/ethernet/mellanox/mlxsw/core.c
+index 71b6185b49042..42726fdf5a3af 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/core.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/core.c
+@@ -1483,6 +1483,8 @@ void mlxsw_core_bus_device_unregister(struct mlxsw_core *mlxsw_core,
+ if (!reload)
+ devlink_resources_unregister(devlink, NULL);
+ mlxsw_core->bus->fini(mlxsw_core->bus_priv);
++ if (!reload)
++ devlink_free(devlink);
+
+ return;
+
+diff --git a/drivers/net/ethernet/realtek/r8169_main.c b/drivers/net/ethernet/realtek/r8169_main.c
+index b1feef473b746..ed89e669ddd5b 100644
+--- a/drivers/net/ethernet/realtek/r8169_main.c
++++ b/drivers/net/ethernet/realtek/r8169_main.c
+@@ -4559,7 +4559,7 @@ static irqreturn_t rtl8169_interrupt(int irq, void *dev_instance)
+ }
+
+ rtl_irq_disable(tp);
+- napi_schedule_irqoff(&tp->napi);
++ napi_schedule(&tp->napi);
+ out:
+ rtl_ack_events(tp, status);
+
+@@ -4727,7 +4727,7 @@ static int rtl_open(struct net_device *dev)
+ rtl_request_firmware(tp);
+
+ retval = request_irq(pci_irq_vector(pdev, 0), rtl8169_interrupt,
+- IRQF_NO_THREAD | IRQF_SHARED, dev->name, tp);
++ IRQF_SHARED, dev->name, tp);
+ if (retval < 0)
+ goto err_release_fw_2;
+
+diff --git a/drivers/net/ethernet/renesas/ravb_main.c b/drivers/net/ethernet/renesas/ravb_main.c
+index 99f7aae102ce1..6c58ba186b2cb 100644
+--- a/drivers/net/ethernet/renesas/ravb_main.c
++++ b/drivers/net/ethernet/renesas/ravb_main.c
+@@ -1747,12 +1747,16 @@ static int ravb_hwtstamp_get(struct net_device *ndev, struct ifreq *req)
+ config.flags = 0;
+ config.tx_type = priv->tstamp_tx_ctrl ? HWTSTAMP_TX_ON :
+ HWTSTAMP_TX_OFF;
+- if (priv->tstamp_rx_ctrl & RAVB_RXTSTAMP_TYPE_V2_L2_EVENT)
++ switch (priv->tstamp_rx_ctrl & RAVB_RXTSTAMP_TYPE) {
++ case RAVB_RXTSTAMP_TYPE_V2_L2_EVENT:
+ config.rx_filter = HWTSTAMP_FILTER_PTP_V2_L2_EVENT;
+- else if (priv->tstamp_rx_ctrl & RAVB_RXTSTAMP_TYPE_ALL)
++ break;
++ case RAVB_RXTSTAMP_TYPE_ALL:
+ config.rx_filter = HWTSTAMP_FILTER_ALL;
+- else
++ break;
++ default:
+ config.rx_filter = HWTSTAMP_FILTER_NONE;
++ }
+
+ return copy_to_user(req->ifr_data, &config, sizeof(config)) ?
+ -EFAULT : 0;
+diff --git a/drivers/net/gtp.c b/drivers/net/gtp.c
+index 8e47d0112e5dc..10f910f8cbe52 100644
+--- a/drivers/net/gtp.c
++++ b/drivers/net/gtp.c
+@@ -663,10 +663,6 @@ static int gtp_newlink(struct net *src_net, struct net_device *dev,
+
+ gtp = netdev_priv(dev);
+
+- err = gtp_encap_enable(gtp, data);
+- if (err < 0)
+- return err;
+-
+ if (!data[IFLA_GTP_PDP_HASHSIZE]) {
+ hashsize = 1024;
+ } else {
+@@ -677,12 +673,16 @@ static int gtp_newlink(struct net *src_net, struct net_device *dev,
+
+ err = gtp_hashtable_new(gtp, hashsize);
+ if (err < 0)
+- goto out_encap;
++ return err;
++
++ err = gtp_encap_enable(gtp, data);
++ if (err < 0)
++ goto out_hashtable;
+
+ err = register_netdevice(dev);
+ if (err < 0) {
+ netdev_dbg(dev, "failed to register new netdev %d\n", err);
+- goto out_hashtable;
++ goto out_encap;
+ }
+
+ gn = net_generic(dev_net(dev), gtp_net_id);
+@@ -693,11 +693,11 @@ static int gtp_newlink(struct net *src_net, struct net_device *dev,
+
+ return 0;
+
++out_encap:
++ gtp_encap_disable(gtp);
+ out_hashtable:
+ kfree(gtp->addr_hash);
+ kfree(gtp->tid_hash);
+-out_encap:
+- gtp_encap_disable(gtp);
+ return err;
+ }
+
+diff --git a/drivers/net/ipa/gsi_trans.c b/drivers/net/ipa/gsi_trans.c
+index bdbfeed359db3..41e9af35a5820 100644
+--- a/drivers/net/ipa/gsi_trans.c
++++ b/drivers/net/ipa/gsi_trans.c
+@@ -398,15 +398,24 @@ void gsi_trans_cmd_add(struct gsi_trans *trans, void *buf, u32 size,
+
+ /* assert(which < trans->tre_count); */
+
+- /* Set the page information for the buffer. We also need to fill in
+- * the DMA address and length for the buffer (something dma_map_sg()
+- * normally does).
++ /* Commands are quite different from data transfer requests.
++ * Their payloads come from a pool whose memory is allocated
++ * using dma_alloc_coherent(). We therefore do *not* map them
++ * for DMA (unlike what we do for pages and skbs).
++ *
++ * When a transaction completes, the SGL is normally unmapped.
++ * A command transaction has direction DMA_NONE, which tells
++ * gsi_trans_complete() to skip the unmapping step.
++ *
++ * The only things we use directly in a command scatter/gather
++ * entry are the DMA address and length. We still need the SG
++ * table flags to be maintained though, so assign a NULL page
++ * pointer for that purpose.
+ */
+ sg = &trans->sgl[which];
+-
+- sg_set_buf(sg, buf, size);
++ sg_assign_page(sg, NULL);
+ sg_dma_address(sg) = addr;
+- sg_dma_len(sg) = sg->length;
++ sg_dma_len(sg) = size;
+
+ info = &trans->info[which];
+ info->opcode = opcode;
+diff --git a/drivers/net/wireless/intersil/p54/p54pci.c b/drivers/net/wireless/intersil/p54/p54pci.c
+index 80ad0b7eaef43..f8c6027cab6b4 100644
+--- a/drivers/net/wireless/intersil/p54/p54pci.c
++++ b/drivers/net/wireless/intersil/p54/p54pci.c
+@@ -329,10 +329,12 @@ static void p54p_tx(struct ieee80211_hw *dev, struct sk_buff *skb)
+ struct p54p_desc *desc;
+ dma_addr_t mapping;
+ u32 idx, i;
++ __le32 device_addr;
+
+ spin_lock_irqsave(&priv->lock, flags);
+ idx = le32_to_cpu(ring_control->host_idx[1]);
+ i = idx % ARRAY_SIZE(ring_control->tx_data);
++ device_addr = ((struct p54_hdr *)skb->data)->req_id;
+
+ mapping = pci_map_single(priv->pdev, skb->data, skb->len,
+ PCI_DMA_TODEVICE);
+@@ -346,7 +348,7 @@ static void p54p_tx(struct ieee80211_hw *dev, struct sk_buff *skb)
+
+ desc = &ring_control->tx_data[i];
+ desc->host_addr = cpu_to_le32(mapping);
+- desc->device_addr = ((struct p54_hdr *)skb->data)->req_id;
++ desc->device_addr = device_addr;
+ desc->len = cpu_to_le16(skb->len);
+ desc->flags = 0;
+
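The p54pci change is an ordering fix: `req_id` used to be read out of `skb->data` *after* `pci_map_single()`, i.e. after the buffer may already belong to the device. The patch reads it into a local first. A minimal userspace model of the rule (names are illustrative, not the driver's):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

struct hdr_stub { uint32_t req_id; };

static int buffer_mapped;  /* models DMA ownership handed to the device */

/* Read header fields while the CPU still owns the buffer. */
static uint32_t read_req_id(const uint8_t *data)
{
	assert(!buffer_mapped);  /* reading after mapping would be the bug */
	return ((const struct hdr_stub *)data)->req_id;
}

static void map_for_device(void)
{
	buffer_mapped = 1;  /* pci_map_single() analog: CPU must stop touching it */
}
```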
+diff --git a/drivers/nvdimm/claim.c b/drivers/nvdimm/claim.c
+index 45964acba9443..22d865ba6353d 100644
+--- a/drivers/nvdimm/claim.c
++++ b/drivers/nvdimm/claim.c
+@@ -268,7 +268,7 @@ static int nsio_rw_bytes(struct nd_namespace_common *ndns,
+ if (rw == READ) {
+ if (unlikely(is_bad_pmem(&nsio->bb, sector, sz_align)))
+ return -EIO;
+- if (memcpy_mcsafe(buf, nsio->addr + offset, size) != 0)
++ if (copy_mc_to_kernel(buf, nsio->addr + offset, size) != 0)
+ return -EIO;
+ return 0;
+ }
+diff --git a/drivers/nvdimm/pmem.c b/drivers/nvdimm/pmem.c
+index d25e66fd942dd..5a4f588605caf 100644
+--- a/drivers/nvdimm/pmem.c
++++ b/drivers/nvdimm/pmem.c
+@@ -125,7 +125,7 @@ static blk_status_t read_pmem(struct page *page, unsigned int off,
+ while (len) {
+ mem = kmap_atomic(page);
+ chunk = min_t(unsigned int, len, PAGE_SIZE - off);
+- rem = memcpy_mcsafe(mem + off, pmem_addr, chunk);
++ rem = copy_mc_to_kernel(mem + off, pmem_addr, chunk);
+ kunmap_atomic(mem);
+ if (rem)
+ return BLK_STS_IOERR;
+@@ -305,7 +305,7 @@ static long pmem_dax_direct_access(struct dax_device *dax_dev,
+
+ /*
+ * Use the 'no check' versions of copy_from_iter_flushcache() and
+- * copy_to_iter_mcsafe() to bypass HARDENED_USERCOPY overhead. Bounds
++ * copy_mc_to_iter() to bypass HARDENED_USERCOPY overhead. Bounds
+ * checking, both file offset and device offset, is handled by
+ * dax_iomap_actor()
+ */
+@@ -318,7 +318,7 @@ static size_t pmem_copy_from_iter(struct dax_device *dax_dev, pgoff_t pgoff,
+ static size_t pmem_copy_to_iter(struct dax_device *dax_dev, pgoff_t pgoff,
+ void *addr, size_t bytes, struct iov_iter *i)
+ {
+- return _copy_to_iter_mcsafe(addr, bytes, i);
++ return _copy_mc_to_iter(addr, bytes, i);
+ }
+
+ static const struct dax_operations pmem_dax_ops = {
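The two nvdimm hunks are part of the upstream rename of `memcpy_mcsafe()` to `copy_mc_to_kernel()`: a copy that can stop partway when it hits poisoned memory and reports how many bytes were left uncopied, with 0 meaning success. A userspace model of that return convention, using a simulated fault offset:

```c
#include <assert.h>
#include <stddef.h>

/* Returns bytes NOT copied, like copy_mc_to_kernel(); 0 on success.
 * fault_at simulates the offset where a machine check would fire. */
static size_t copy_fallible(unsigned char *dst, const unsigned char *src,
			    size_t len, size_t fault_at)
{
	for (size_t i = 0; i < len; i++) {
		if (i == fault_at)      /* simulated poison consumption */
			return len - i; /* remainder left uncopied */
		dst[i] = src[i];
	}
	return 0;
}
```

This is why the callers above test `!= 0` and return `-EIO` rather than assuming the copy completed.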
+diff --git a/drivers/pci/controller/pci-aardvark.c b/drivers/pci/controller/pci-aardvark.c
+index d5f58684d962c..c79326e699e82 100644
+--- a/drivers/pci/controller/pci-aardvark.c
++++ b/drivers/pci/controller/pci-aardvark.c
+@@ -1068,7 +1068,9 @@ static int advk_pcie_enable_phy(struct advk_pcie *pcie)
+ }
+
+ ret = phy_power_on(pcie->phy);
+- if (ret) {
++ if (ret == -EOPNOTSUPP) {
++ dev_warn(&pcie->pdev->dev, "PHY unsupported by firmware\n");
++ } else if (ret) {
+ phy_exit(pcie->phy);
+ return ret;
+ }
+diff --git a/drivers/phy/marvell/phy-mvebu-a3700-comphy.c b/drivers/phy/marvell/phy-mvebu-a3700-comphy.c
+index 1a138be8bd6a0..810f25a476321 100644
+--- a/drivers/phy/marvell/phy-mvebu-a3700-comphy.c
++++ b/drivers/phy/marvell/phy-mvebu-a3700-comphy.c
+@@ -26,7 +26,6 @@
+ #define COMPHY_SIP_POWER_ON 0x82000001
+ #define COMPHY_SIP_POWER_OFF 0x82000002
+ #define COMPHY_SIP_PLL_LOCK 0x82000003
+-#define COMPHY_FW_NOT_SUPPORTED (-1)
+
+ #define COMPHY_FW_MODE_SATA 0x1
+ #define COMPHY_FW_MODE_SGMII 0x2
+@@ -112,10 +111,19 @@ static int mvebu_a3700_comphy_smc(unsigned long function, unsigned long lane,
+ unsigned long mode)
+ {
+ struct arm_smccc_res res;
++ s32 ret;
+
+ arm_smccc_smc(function, lane, mode, 0, 0, 0, 0, 0, &res);
++ ret = res.a0;
+
+- return res.a0;
++ switch (ret) {
++ case SMCCC_RET_SUCCESS:
++ return 0;
++ case SMCCC_RET_NOT_SUPPORTED:
++ return -EOPNOTSUPP;
++ default:
++ return -EINVAL;
++ }
+ }
+
+ static int mvebu_a3700_comphy_get_fw_mode(int lane, int port,
+@@ -220,7 +228,7 @@ static int mvebu_a3700_comphy_power_on(struct phy *phy)
+ }
+
+ ret = mvebu_a3700_comphy_smc(COMPHY_SIP_POWER_ON, lane->id, fw_param);
+- if (ret == COMPHY_FW_NOT_SUPPORTED)
++ if (ret == -EOPNOTSUPP)
+ dev_err(lane->dev,
+ "unsupported SMC call, try updating your firmware\n");
+
+diff --git a/drivers/phy/marvell/phy-mvebu-cp110-comphy.c b/drivers/phy/marvell/phy-mvebu-cp110-comphy.c
+index e41367f36ee1c..53ad127b100fe 100644
+--- a/drivers/phy/marvell/phy-mvebu-cp110-comphy.c
++++ b/drivers/phy/marvell/phy-mvebu-cp110-comphy.c
+@@ -123,7 +123,6 @@
+
+ #define COMPHY_SIP_POWER_ON 0x82000001
+ #define COMPHY_SIP_POWER_OFF 0x82000002
+-#define COMPHY_FW_NOT_SUPPORTED (-1)
+
+ /*
+ * A lane is described by the following bitfields:
+@@ -273,10 +272,19 @@ static int mvebu_comphy_smc(unsigned long function, unsigned long phys,
+ unsigned long lane, unsigned long mode)
+ {
+ struct arm_smccc_res res;
++ s32 ret;
+
+ arm_smccc_smc(function, phys, lane, mode, 0, 0, 0, 0, &res);
++ ret = res.a0;
+
+- return res.a0;
++ switch (ret) {
++ case SMCCC_RET_SUCCESS:
++ return 0;
++ case SMCCC_RET_NOT_SUPPORTED:
++ return -EOPNOTSUPP;
++ default:
++ return -EINVAL;
++ }
+ }
+
+ static int mvebu_comphy_get_mode(bool fw_mode, int lane, int port,
+@@ -819,7 +827,7 @@ static int mvebu_comphy_power_on(struct phy *phy)
+ if (!ret)
+ return ret;
+
+- if (ret == COMPHY_FW_NOT_SUPPORTED)
++ if (ret == -EOPNOTSUPP)
+ dev_err(priv->dev,
+ "unsupported SMC call, try updating your firmware\n");
+
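Both comphy patches delete the private `COMPHY_FW_NOT_SUPPORTED` (-1) and instead translate SMCCC return codes into errnos right at the SMC boundary, so callers (including pci-aardvark above) can test a real `-EOPNOTSUPP`. A userspace restatement of the patched switch — the constants use the standard SMCCC values and Linux errno numbers:

```c
#include <assert.h>

#define SMCCC_RET_SUCCESS        0
#define SMCCC_RET_NOT_SUPPORTED  (-1)
#define EOPNOTSUPP 95
#define EINVAL     22

/* Convert a raw firmware return (res.a0) to a kernel-style errno. */
static int smc_ret_to_errno(long a0)
{
	switch ((int)a0) {
	case SMCCC_RET_SUCCESS:
		return 0;
	case SMCCC_RET_NOT_SUPPORTED:
		return -EOPNOTSUPP;  /* firmware lacks the call */
	default:
		return -EINVAL;      /* anything else: treat as bad request */
	}
}
```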
+diff --git a/drivers/tty/serial/amba-pl011.c b/drivers/tty/serial/amba-pl011.c
+index a8d1edcf252c7..64e801a3a0206 100644
+--- a/drivers/tty/serial/amba-pl011.c
++++ b/drivers/tty/serial/amba-pl011.c
+@@ -308,8 +308,9 @@ static void pl011_write(unsigned int val, const struct uart_amba_port *uap,
+ */
+ static int pl011_fifo_to_tty(struct uart_amba_port *uap)
+ {
+- u16 status;
+ unsigned int ch, flag, fifotaken;
++ int sysrq;
++ u16 status;
+
+ for (fifotaken = 0; fifotaken != 256; fifotaken++) {
+ status = pl011_read(uap, REG_FR);
+@@ -344,10 +345,12 @@ static int pl011_fifo_to_tty(struct uart_amba_port *uap)
+ flag = TTY_FRAME;
+ }
+
+- if (uart_handle_sysrq_char(&uap->port, ch & 255))
+- continue;
++ spin_unlock(&uap->port.lock);
++ sysrq = uart_handle_sysrq_char(&uap->port, ch & 255);
++ spin_lock(&uap->port.lock);
+
+- uart_insert_char(&uap->port, ch, UART011_DR_OE, ch, flag);
++ if (!sysrq)
++ uart_insert_char(&uap->port, ch, UART011_DR_OE, ch, flag);
+ }
+
+ return fifotaken;
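The pl011 hunk fixes a self-deadlock: a sysrq handler can print to the console, which re-takes the port lock, so the lock must be dropped around the call and the character inserted only afterwards. A sketch of the pattern where the "lock" is a plain flag so the re-entry is visible:

```c
#include <assert.h>

static int port_locked;

/* A real sysrq handler may print, which takes the port lock again. */
static int sysrq_handler(int ch)
{
	assert(!port_locked);   /* would self-deadlock if caller still held it */
	port_locked = 1;        /* console output acquires the port lock ... */
	port_locked = 0;        /* ... and releases it */
	return ch == 'b';       /* pretend 'b' is a sysrq command */
}

/* Caller holds the port lock, as pl011_fifo_to_tty() does. */
static int handle_rx_char(int ch)
{
	int sysrq;

	port_locked = 0;            /* spin_unlock(&uap->port.lock) */
	sysrq = sysrq_handler(ch);
	port_locked = 1;            /* spin_lock(&uap->port.lock) */
	return sysrq;               /* insert into the TTY only when !sysrq */
}
```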
+diff --git a/drivers/tty/serial/qcom_geni_serial.c b/drivers/tty/serial/qcom_geni_serial.c
+index ffdf6da016c21..2bb800ca5f0ca 100644
+--- a/drivers/tty/serial/qcom_geni_serial.c
++++ b/drivers/tty/serial/qcom_geni_serial.c
+@@ -954,7 +954,7 @@ static void qcom_geni_serial_set_termios(struct uart_port *uport,
+ sampling_rate = UART_OVERSAMPLING;
+ /* Sampling rate is halved for IP versions >= 2.5 */
+ ver = geni_se_get_qup_hw_version(&port->se);
+- if (GENI_SE_VERSION_MAJOR(ver) >= 2 && GENI_SE_VERSION_MINOR(ver) >= 5)
++ if (ver >= QUP_SE_VERSION_2_5)
+ sampling_rate /= 2;
+
+ clk_rate = get_clk_div_rate(baud, sampling_rate, &clk_div);
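The qcom_geni_serial change replaces `MAJOR(ver) >= 2 && MINOR(ver) >= 5` with a single compare against a packed `QUP_SE_VERSION_2_5`. The old check misclassifies versions such as 3.0, whose major is newer but whose minor is below 5. A sketch of why the packed encoding fixes it — the bit layout here is illustrative, not necessarily the GENI driver's exact one:

```c
#include <assert.h>

#define VER(major, minor)  (((major) << 8) | (minor))
#define VERSION_2_5        VER(2, 5)

static int halved_sampling(unsigned int ver)
{
	return ver >= VERSION_2_5;                   /* fixed: one ordered compare */
}

static int halved_sampling_buggy(unsigned int ver)
{
	return (ver >> 8) >= 2 && (ver & 0xff) >= 5; /* wrong for e.g. 3.0 */
}
```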
+diff --git a/drivers/xen/gntdev.c b/drivers/xen/gntdev.c
+index 64a9025a87bee..1f32db7b72b2c 100644
+--- a/drivers/xen/gntdev.c
++++ b/drivers/xen/gntdev.c
+@@ -720,17 +720,18 @@ struct gntdev_copy_batch {
+ s16 __user *status[GNTDEV_COPY_BATCH];
+ unsigned int nr_ops;
+ unsigned int nr_pages;
++ bool writeable;
+ };
+
+ static int gntdev_get_page(struct gntdev_copy_batch *batch, void __user *virt,
+- bool writeable, unsigned long *gfn)
++ unsigned long *gfn)
+ {
+ unsigned long addr = (unsigned long)virt;
+ struct page *page;
+ unsigned long xen_pfn;
+ int ret;
+
+- ret = get_user_pages_fast(addr, 1, writeable ? FOLL_WRITE : 0, &page);
++ ret = get_user_pages_fast(addr, 1, batch->writeable ? FOLL_WRITE : 0, &page);
+ if (ret < 0)
+ return ret;
+
+@@ -746,9 +747,13 @@ static void gntdev_put_pages(struct gntdev_copy_batch *batch)
+ {
+ unsigned int i;
+
+- for (i = 0; i < batch->nr_pages; i++)
++ for (i = 0; i < batch->nr_pages; i++) {
++ if (batch->writeable && !PageDirty(batch->pages[i]))
++ set_page_dirty_lock(batch->pages[i]);
+ put_page(batch->pages[i]);
++ }
+ batch->nr_pages = 0;
++ batch->writeable = false;
+ }
+
+ static int gntdev_copy(struct gntdev_copy_batch *batch)
+@@ -837,8 +842,9 @@ static int gntdev_grant_copy_seg(struct gntdev_copy_batch *batch,
+ virt = seg->source.virt + copied;
+ off = (unsigned long)virt & ~XEN_PAGE_MASK;
+ len = min(len, (size_t)XEN_PAGE_SIZE - off);
++ batch->writeable = false;
+
+- ret = gntdev_get_page(batch, virt, false, &gfn);
++ ret = gntdev_get_page(batch, virt, &gfn);
+ if (ret < 0)
+ return ret;
+
+@@ -856,8 +862,9 @@ static int gntdev_grant_copy_seg(struct gntdev_copy_batch *batch,
+ virt = seg->dest.virt + copied;
+ off = (unsigned long)virt & ~XEN_PAGE_MASK;
+ len = min(len, (size_t)XEN_PAGE_SIZE - off);
++ batch->writeable = true;
+
+- ret = gntdev_get_page(batch, virt, true, &gfn);
++ ret = gntdev_get_page(batch, virt, &gfn);
+ if (ret < 0)
+ return ret;
+
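The gntdev hunks fix lost writes: pages pinned with `FOLL_WRITE` must be marked dirty before their reference is dropped, or the kernel may discard the written data. The patch also moves the writeable flag into the batch so `gntdev_put_pages()` can see it. A userspace model of that cleanup path:

```c
#include <assert.h>
#include <stdbool.h>

struct page_stub { bool dirty; int refs; };

struct copy_batch {
	struct page_stub *pages[16];
	unsigned int nr_pages;
	bool writeable;             /* new field carried in the batch */
};

static void put_pages(struct copy_batch *batch)
{
	for (unsigned int i = 0; i < batch->nr_pages; i++) {
		if (batch->writeable && !batch->pages[i]->dirty)
			batch->pages[i]->dirty = true; /* set_page_dirty_lock() analog */
		batch->pages[i]->refs--;               /* put_page() analog */
	}
	batch->nr_pages = 0;
	batch->writeable = false;   /* reset for the next segment */
}
```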
+diff --git a/fs/efivarfs/super.c b/fs/efivarfs/super.c
+index 28bb5689333a5..15880a68faadc 100644
+--- a/fs/efivarfs/super.c
++++ b/fs/efivarfs/super.c
+@@ -141,6 +141,9 @@ static int efivarfs_callback(efi_char16_t *name16, efi_guid_t vendor,
+
+ name[len + EFI_VARIABLE_GUID_LEN+1] = '\0';
+
++ /* replace invalid slashes like kobject_set_name_vargs does for /sys/firmware/efi/vars. */
++ strreplace(name, '/', '!');
++
+ inode = efivarfs_get_inode(sb, d_inode(root), S_IFREG | 0644, 0,
+ is_removable);
+ if (!inode)
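The efivarfs fix handles variable names containing `/`, which is not a legal filename character; it rewrites them to `!`, the same substitution `kobject_set_name_vargs()` applies for `/sys/firmware/efi/vars`. A minimal equivalent of the kernel's `strreplace()`:

```c
#include <assert.h>

/* Replace every occurrence of 'old' with 'new' in place; returns s. */
static char *replace_char(char *s, char old, char new)
{
	for (char *p = s; *p; p++)
		if (*p == old)
			*p = new;
	return s;
}
```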
+diff --git a/fs/erofs/xattr.c b/fs/erofs/xattr.c
+index 87e437e7b34f2..f86e3247febc1 100644
+--- a/fs/erofs/xattr.c
++++ b/fs/erofs/xattr.c
+@@ -473,8 +473,6 @@ static int erofs_xattr_generic_get(const struct xattr_handler *handler,
+ return -EOPNOTSUPP;
+ break;
+ case EROFS_XATTR_INDEX_TRUSTED:
+- if (!capable(CAP_SYS_ADMIN))
+- return -EPERM;
+ break;
+ case EROFS_XATTR_INDEX_SECURITY:
+ break;
+diff --git a/fs/exec.c b/fs/exec.c
+index e6e8a9a703278..78976a3260c6a 100644
+--- a/fs/exec.c
++++ b/fs/exec.c
+@@ -62,6 +62,7 @@
+ #include <linux/oom.h>
+ #include <linux/compat.h>
+ #include <linux/vmalloc.h>
++#include <linux/io_uring.h>
+
+ #include <linux/uaccess.h>
+ #include <asm/mmu_context.h>
+@@ -1847,6 +1848,11 @@ static int __do_execve_file(int fd, struct filename *filename,
+ * further execve() calls fail. */
+ current->flags &= ~PF_NPROC_EXCEEDED;
+
++ /*
++ * Cancel any io_uring activity across execve
++ */
++ io_uring_task_cancel();
++
+ retval = unshare_files(&displaced);
+ if (retval)
+ goto out_ret;
+diff --git a/fs/file.c b/fs/file.c
+index abb8b7081d7a4..8e2c532bb02e3 100644
+--- a/fs/file.c
++++ b/fs/file.c
+@@ -18,6 +18,7 @@
+ #include <linux/bitops.h>
+ #include <linux/spinlock.h>
+ #include <linux/rcupdate.h>
++#include <linux/io_uring.h>
+
+ unsigned int sysctl_nr_open __read_mostly = 1024*1024;
+ unsigned int sysctl_nr_open_min = BITS_PER_LONG;
+@@ -439,6 +440,7 @@ void exit_files(struct task_struct *tsk)
+ struct files_struct * files = tsk->files;
+
+ if (files) {
++ io_uring_files_cancel(files);
+ task_lock(tsk);
+ tsk->files = NULL;
+ task_unlock(tsk);
+diff --git a/fs/fuse/dev.c b/fs/fuse/dev.c
+index 02b3c36b36766..5078a6ca7dfcd 100644
+--- a/fs/fuse/dev.c
++++ b/fs/fuse/dev.c
+@@ -785,15 +785,16 @@ static int fuse_try_move_page(struct fuse_copy_state *cs, struct page **pagep)
+ struct page *newpage;
+ struct pipe_buffer *buf = cs->pipebufs;
+
++ get_page(oldpage);
+ err = unlock_request(cs->req);
+ if (err)
+- return err;
++ goto out_put_old;
+
+ fuse_copy_finish(cs);
+
+ err = pipe_buf_confirm(cs->pipe, buf);
+ if (err)
+- return err;
++ goto out_put_old;
+
+ BUG_ON(!cs->nr_segs);
+ cs->currbuf = buf;
+@@ -833,7 +834,7 @@ static int fuse_try_move_page(struct fuse_copy_state *cs, struct page **pagep)
+ err = replace_page_cache_page(oldpage, newpage, GFP_KERNEL);
+ if (err) {
+ unlock_page(newpage);
+- return err;
++ goto out_put_old;
+ }
+
+ get_page(newpage);
+@@ -852,14 +853,19 @@ static int fuse_try_move_page(struct fuse_copy_state *cs, struct page **pagep)
+ if (err) {
+ unlock_page(newpage);
+ put_page(newpage);
+- return err;
++ goto out_put_old;
+ }
+
+ unlock_page(oldpage);
++ /* Drop ref for ap->pages[] array */
+ put_page(oldpage);
+ cs->len = 0;
+
+- return 0;
++ err = 0;
++out_put_old:
++ /* Drop ref obtained in this function */
++ put_page(oldpage);
++ return err;
+
+ out_fallback_unlock:
+ unlock_page(newpage);
+@@ -868,10 +874,10 @@ out_fallback:
+ cs->offset = buf->offset;
+
+ err = lock_request(cs->req);
+- if (err)
+- return err;
++ if (!err)
++ err = 1;
+
+- return 1;
++ goto out_put_old;
+ }
+
+ static int fuse_ref_page(struct fuse_copy_state *cs, struct page *page,
+@@ -883,14 +889,16 @@ static int fuse_ref_page(struct fuse_copy_state *cs, struct page *page,
+ if (cs->nr_segs >= cs->pipe->max_usage)
+ return -EIO;
+
++ get_page(page);
+ err = unlock_request(cs->req);
+- if (err)
++ if (err) {
++ put_page(page);
+ return err;
++ }
+
+ fuse_copy_finish(cs);
+
+ buf = cs->pipebufs;
+- get_page(page);
+ buf->page = page;
+ buf->offset = offset;
+ buf->len = count;
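Both fuse hunks follow one pattern: take a page reference *before* the first operation that can drop the lock protecting the page (so it cannot vanish underneath), then release that reference on every exit path — the patch funnels the error paths through a single `out_put_old` label. A sketch of the shape:

```c
#include <assert.h>

struct ref_stub { int refs; };

static int step(int fail) { return fail ? -1 : 0; }

static int try_move_page(struct ref_stub *oldpage, int fail_step)
{
	int err;

	oldpage->refs++;                 /* get_page(oldpage) up front */
	err = step(fail_step == 1);      /* unlock_request() analog */
	if (err)
		goto out_put_old;
	err = step(fail_step == 2);      /* pipe_buf_confirm() analog */
	if (err)
		goto out_put_old;
	err = 0;
out_put_old:
	oldpage->refs--;                 /* put_page() on every path */
	return err;
}
```

Whatever path is taken, the reference taken at entry is always balanced, which is exactly what the early `return err` statements in the unpatched code failed to do.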
+diff --git a/fs/io-wq.c b/fs/io-wq.c
+index cb9e5a444fba7..56a229621a831 100644
+--- a/fs/io-wq.c
++++ b/fs/io-wq.c
+@@ -60,6 +60,7 @@ struct io_worker {
+ const struct cred *cur_creds;
+ const struct cred *saved_creds;
+ struct files_struct *restore_files;
++ struct nsproxy *restore_nsproxy;
+ struct fs_struct *restore_fs;
+ };
+
+@@ -87,7 +88,7 @@ enum {
+ */
+ struct io_wqe {
+ struct {
+- spinlock_t lock;
++ raw_spinlock_t lock;
+ struct io_wq_work_list work_list;
+ unsigned long hash_map;
+ unsigned flags;
+@@ -148,11 +149,12 @@ static bool __io_worker_unuse(struct io_wqe *wqe, struct io_worker *worker)
+
+ if (current->files != worker->restore_files) {
+ __acquire(&wqe->lock);
+- spin_unlock_irq(&wqe->lock);
++ raw_spin_unlock_irq(&wqe->lock);
+ dropped_lock = true;
+
+ task_lock(current);
+ current->files = worker->restore_files;
++ current->nsproxy = worker->restore_nsproxy;
+ task_unlock(current);
+ }
+
+@@ -166,7 +168,7 @@ static bool __io_worker_unuse(struct io_wqe *wqe, struct io_worker *worker)
+ if (worker->mm) {
+ if (!dropped_lock) {
+ __acquire(&wqe->lock);
+- spin_unlock_irq(&wqe->lock);
++ raw_spin_unlock_irq(&wqe->lock);
+ dropped_lock = true;
+ }
+ __set_current_state(TASK_RUNNING);
+@@ -200,7 +202,6 @@ static void io_worker_exit(struct io_worker *worker)
+ {
+ struct io_wqe *wqe = worker->wqe;
+ struct io_wqe_acct *acct = io_wqe_get_acct(wqe, worker);
+- unsigned nr_workers;
+
+ /*
+ * If we're not at zero, someone else is holding a brief reference
+@@ -220,23 +221,19 @@ static void io_worker_exit(struct io_worker *worker)
+ worker->flags = 0;
+ preempt_enable();
+
+- spin_lock_irq(&wqe->lock);
++ raw_spin_lock_irq(&wqe->lock);
+ hlist_nulls_del_rcu(&worker->nulls_node);
+ list_del_rcu(&worker->all_list);
+ if (__io_worker_unuse(wqe, worker)) {
+ __release(&wqe->lock);
+- spin_lock_irq(&wqe->lock);
++ raw_spin_lock_irq(&wqe->lock);
+ }
+ acct->nr_workers--;
+- nr_workers = wqe->acct[IO_WQ_ACCT_BOUND].nr_workers +
+- wqe->acct[IO_WQ_ACCT_UNBOUND].nr_workers;
+- spin_unlock_irq(&wqe->lock);
+-
+- /* all workers gone, wq exit can proceed */
+- if (!nr_workers && refcount_dec_and_test(&wqe->wq->refs))
+- complete(&wqe->wq->done);
++ raw_spin_unlock_irq(&wqe->lock);
+
+ kfree_rcu(worker, rcu);
++ if (refcount_dec_and_test(&wqe->wq->refs))
++ complete(&wqe->wq->done);
+ }
+
+ static inline bool io_wqe_run_queue(struct io_wqe *wqe)
+@@ -318,6 +315,7 @@ static void io_worker_start(struct io_wqe *wqe, struct io_worker *worker)
+
+ worker->flags |= (IO_WORKER_F_UP | IO_WORKER_F_RUNNING);
+ worker->restore_files = current->files;
++ worker->restore_nsproxy = current->nsproxy;
+ worker->restore_fs = current->fs;
+ io_wqe_inc_running(wqe, worker);
+ }
+@@ -454,6 +452,7 @@ static void io_impersonate_work(struct io_worker *worker,
+ if (work->files && current->files != work->files) {
+ task_lock(current);
+ current->files = work->files;
++ current->nsproxy = work->nsproxy;
+ task_unlock(current);
+ }
+ if (work->fs && current->fs != work->fs)
+@@ -504,7 +503,7 @@ get_next:
+ else if (!wq_list_empty(&wqe->work_list))
+ wqe->flags |= IO_WQE_FLAG_STALLED;
+
+- spin_unlock_irq(&wqe->lock);
++ raw_spin_unlock_irq(&wqe->lock);
+ if (!work)
+ break;
+ io_assign_current_work(worker, work);
+@@ -539,7 +538,7 @@ get_next:
+ io_wqe_enqueue(wqe, linked);
+
+ if (hash != -1U && !next_hashed) {
+- spin_lock_irq(&wqe->lock);
++ raw_spin_lock_irq(&wqe->lock);
+ wqe->hash_map &= ~BIT_ULL(hash);
+ wqe->flags &= ~IO_WQE_FLAG_STALLED;
+ /* dependent work is not hashed */
+@@ -547,11 +546,11 @@ get_next:
+ /* skip unnecessary unlock-lock wqe->lock */
+ if (!work)
+ goto get_next;
+- spin_unlock_irq(&wqe->lock);
++ raw_spin_unlock_irq(&wqe->lock);
+ }
+ } while (work);
+
+- spin_lock_irq(&wqe->lock);
++ raw_spin_lock_irq(&wqe->lock);
+ } while (1);
+ }
+
+@@ -566,7 +565,7 @@ static int io_wqe_worker(void *data)
+ while (!test_bit(IO_WQ_BIT_EXIT, &wq->state)) {
+ set_current_state(TASK_INTERRUPTIBLE);
+ loop:
+- spin_lock_irq(&wqe->lock);
++ raw_spin_lock_irq(&wqe->lock);
+ if (io_wqe_run_queue(wqe)) {
+ __set_current_state(TASK_RUNNING);
+ io_worker_handle_work(worker);
+@@ -577,7 +576,7 @@ loop:
+ __release(&wqe->lock);
+ goto loop;
+ }
+- spin_unlock_irq(&wqe->lock);
++ raw_spin_unlock_irq(&wqe->lock);
+ if (signal_pending(current))
+ flush_signals(current);
+ if (schedule_timeout(WORKER_IDLE_TIMEOUT))
+@@ -589,11 +588,11 @@ loop:
+ }
+
+ if (test_bit(IO_WQ_BIT_EXIT, &wq->state)) {
+- spin_lock_irq(&wqe->lock);
++ raw_spin_lock_irq(&wqe->lock);
+ if (!wq_list_empty(&wqe->work_list))
+ io_worker_handle_work(worker);
+ else
+- spin_unlock_irq(&wqe->lock);
++ raw_spin_unlock_irq(&wqe->lock);
+ }
+
+ io_worker_exit(worker);
+@@ -633,14 +632,14 @@ void io_wq_worker_sleeping(struct task_struct *tsk)
+
+ worker->flags &= ~IO_WORKER_F_RUNNING;
+
+- spin_lock_irq(&wqe->lock);
++ raw_spin_lock_irq(&wqe->lock);
+ io_wqe_dec_running(wqe, worker);
+- spin_unlock_irq(&wqe->lock);
++ raw_spin_unlock_irq(&wqe->lock);
+ }
+
+ static bool create_io_worker(struct io_wq *wq, struct io_wqe *wqe, int index)
+ {
+- struct io_wqe_acct *acct =&wqe->acct[index];
++ struct io_wqe_acct *acct = &wqe->acct[index];
+ struct io_worker *worker;
+
+ worker = kzalloc_node(sizeof(*worker), GFP_KERNEL, wqe->node);
+@@ -659,7 +658,7 @@ static bool create_io_worker(struct io_wq *wq, struct io_wqe *wqe, int index)
+ return false;
+ }
+
+- spin_lock_irq(&wqe->lock);
++ raw_spin_lock_irq(&wqe->lock);
+ hlist_nulls_add_head_rcu(&worker->nulls_node, &wqe->free_list);
+ list_add_tail_rcu(&worker->all_list, &wqe->all_list);
+ worker->flags |= IO_WORKER_F_FREE;
+@@ -668,11 +667,12 @@ static bool create_io_worker(struct io_wq *wq, struct io_wqe *wqe, int index)
+ if (!acct->nr_workers && (worker->flags & IO_WORKER_F_BOUND))
+ worker->flags |= IO_WORKER_F_FIXED;
+ acct->nr_workers++;
+- spin_unlock_irq(&wqe->lock);
++ raw_spin_unlock_irq(&wqe->lock);
+
+ if (index == IO_WQ_ACCT_UNBOUND)
+ atomic_inc(&wq->user->processes);
+
++ refcount_inc(&wq->refs);
+ wake_up_process(worker->task);
+ return true;
+ }
+@@ -688,28 +688,63 @@ static inline bool io_wqe_need_worker(struct io_wqe *wqe, int index)
+ return acct->nr_workers < acct->max_workers;
+ }
+
++static bool io_wqe_worker_send_sig(struct io_worker *worker, void *data)
++{
++ send_sig(SIGINT, worker->task, 1);
++ return false;
++}
++
++/*
++ * Iterate the passed in list and call the specific function for each
++ * worker that isn't exiting
++ */
++static bool io_wq_for_each_worker(struct io_wqe *wqe,
++ bool (*func)(struct io_worker *, void *),
++ void *data)
++{
++ struct io_worker *worker;
++ bool ret = false;
++
++ list_for_each_entry_rcu(worker, &wqe->all_list, all_list) {
++ if (io_worker_get(worker)) {
++ /* no task if node is/was offline */
++ if (worker->task)
++ ret = func(worker, data);
++ io_worker_release(worker);
++ if (ret)
++ break;
++ }
++ }
++
++ return ret;
++}
++
++static bool io_wq_worker_wake(struct io_worker *worker, void *data)
++{
++ wake_up_process(worker->task);
++ return false;
++}
++
+ /*
+ * Manager thread. Tasked with creating new workers, if we need them.
+ */
+ static int io_wq_manager(void *data)
+ {
+ struct io_wq *wq = data;
+- int workers_to_create = num_possible_nodes();
+ int node;
+
+ /* create fixed workers */
+- refcount_set(&wq->refs, workers_to_create);
++ refcount_set(&wq->refs, 1);
+ for_each_node(node) {
+ if (!node_online(node))
+ continue;
+- if (!create_io_worker(wq, wq->wqes[node], IO_WQ_ACCT_BOUND))
+- goto err;
+- workers_to_create--;
++ if (create_io_worker(wq, wq->wqes[node], IO_WQ_ACCT_BOUND))
++ continue;
++ set_bit(IO_WQ_BIT_ERROR, &wq->state);
++ set_bit(IO_WQ_BIT_EXIT, &wq->state);
++ goto out;
+ }
+
+- while (workers_to_create--)
+- refcount_dec(&wq->refs);
+-
+ complete(&wq->done);
+
+ while (!kthread_should_stop()) {
+@@ -723,12 +758,12 @@ static int io_wq_manager(void *data)
+ if (!node_online(node))
+ continue;
+
+- spin_lock_irq(&wqe->lock);
++ raw_spin_lock_irq(&wqe->lock);
+ if (io_wqe_need_worker(wqe, IO_WQ_ACCT_BOUND))
+ fork_worker[IO_WQ_ACCT_BOUND] = true;
+ if (io_wqe_need_worker(wqe, IO_WQ_ACCT_UNBOUND))
+ fork_worker[IO_WQ_ACCT_UNBOUND] = true;
+- spin_unlock_irq(&wqe->lock);
++ raw_spin_unlock_irq(&wqe->lock);
+ if (fork_worker[IO_WQ_ACCT_BOUND])
+ create_io_worker(wq, wqe, IO_WQ_ACCT_BOUND);
+ if (fork_worker[IO_WQ_ACCT_UNBOUND])
+@@ -741,12 +776,18 @@ static int io_wq_manager(void *data)
+ if (current->task_works)
+ task_work_run();
+
+- return 0;
+-err:
+- set_bit(IO_WQ_BIT_ERROR, &wq->state);
+- set_bit(IO_WQ_BIT_EXIT, &wq->state);
+- if (refcount_sub_and_test(workers_to_create, &wq->refs))
++out:
++ if (refcount_dec_and_test(&wq->refs)) {
+ complete(&wq->done);
++ return 0;
++ }
++ /* if ERROR is set and we get here, we have workers to wake */
++ if (test_bit(IO_WQ_BIT_ERROR, &wq->state)) {
++ rcu_read_lock();
++ for_each_node(node)
++ io_wq_for_each_worker(wq->wqes[node], io_wq_worker_wake, NULL);
++ rcu_read_unlock();
++ }
+ return 0;
+ }
+
+@@ -825,10 +866,10 @@ static void io_wqe_enqueue(struct io_wqe *wqe, struct io_wq_work *work)
+ }
+
+ work_flags = work->flags;
+- spin_lock_irqsave(&wqe->lock, flags);
++ raw_spin_lock_irqsave(&wqe->lock, flags);
+ io_wqe_insert_work(wqe, work);
+ wqe->flags &= ~IO_WQE_FLAG_STALLED;
+- spin_unlock_irqrestore(&wqe->lock, flags);
++ raw_spin_unlock_irqrestore(&wqe->lock, flags);
+
+ if ((work_flags & IO_WQ_WORK_CONCURRENT) ||
+ !atomic_read(&acct->nr_running))
+@@ -854,37 +895,6 @@ void io_wq_hash_work(struct io_wq_work *work, void *val)
+ work->flags |= (IO_WQ_WORK_HASHED | (bit << IO_WQ_HASH_SHIFT));
+ }
+
+-static bool io_wqe_worker_send_sig(struct io_worker *worker, void *data)
+-{
+- send_sig(SIGINT, worker->task, 1);
+- return false;
+-}
+-
+-/*
+- * Iterate the passed in list and call the specific function for each
+- * worker that isn't exiting
+- */
+-static bool io_wq_for_each_worker(struct io_wqe *wqe,
+- bool (*func)(struct io_worker *, void *),
+- void *data)
+-{
+- struct io_worker *worker;
+- bool ret = false;
+-
+- list_for_each_entry_rcu(worker, &wqe->all_list, all_list) {
+- if (io_worker_get(worker)) {
+- /* no task if node is/was offline */
+- if (worker->task)
+- ret = func(worker, data);
+- io_worker_release(worker);
+- if (ret)
+- break;
+- }
+- }
+-
+- return ret;
+-}
+-
+ void io_wq_cancel_all(struct io_wq *wq)
+ {
+ int node;
+@@ -955,13 +965,13 @@ static void io_wqe_cancel_pending_work(struct io_wqe *wqe,
+ unsigned long flags;
+
+ retry:
+- spin_lock_irqsave(&wqe->lock, flags);
++ raw_spin_lock_irqsave(&wqe->lock, flags);
+ wq_list_for_each(node, prev, &wqe->work_list) {
+ work = container_of(node, struct io_wq_work, list);
+ if (!match->fn(work, match->data))
+ continue;
+ io_wqe_remove_pending(wqe, work, prev);
+- spin_unlock_irqrestore(&wqe->lock, flags);
++ raw_spin_unlock_irqrestore(&wqe->lock, flags);
+ io_run_cancel(work, wqe);
+ match->nr_pending++;
+ if (!match->cancel_all)
+@@ -970,7 +980,7 @@ retry:
+ /* not safe to continue after unlock */
+ goto retry;
+ }
+- spin_unlock_irqrestore(&wqe->lock, flags);
++ raw_spin_unlock_irqrestore(&wqe->lock, flags);
+ }
+
+ static void io_wqe_cancel_running_work(struct io_wqe *wqe,
+@@ -1078,7 +1088,7 @@ struct io_wq *io_wq_create(unsigned bounded, struct io_wq_data *data)
+ }
+ atomic_set(&wqe->acct[IO_WQ_ACCT_UNBOUND].nr_running, 0);
+ wqe->wq = wq;
+- spin_lock_init(&wqe->lock);
++ raw_spin_lock_init(&wqe->lock);
+ INIT_WQ_LIST(&wqe->work_list);
+ INIT_HLIST_NULLS_HEAD(&wqe->free_list, 0);
+ INIT_LIST_HEAD(&wqe->all_list);
+@@ -1117,12 +1127,6 @@ bool io_wq_get(struct io_wq *wq, struct io_wq_data *data)
+ return refcount_inc_not_zero(&wq->use_refs);
+ }
+
+-static bool io_wq_worker_wake(struct io_worker *worker, void *data)
+-{
+- wake_up_process(worker->task);
+- return false;
+-}
+-
+ static void __io_wq_destroy(struct io_wq *wq)
+ {
+ int node;
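Besides converting `wqe->lock` to a raw spinlock, the io-wq rework changes the shutdown accounting: the manager seeds the wq refcount at 1, every successfully created worker takes another reference (`refcount_inc(&wq->refs)`), and whoever drops the count to zero completes `wq->done` — replacing the old racy "count remaining workers" logic in `io_worker_exit()`. A userspace restatement:

```c
#include <assert.h>
#include <stdbool.h>

struct wq_stub { int refs; bool done; };

static void wq_start(struct wq_stub *wq)
{
	wq->refs = 1;              /* refcount_set(&wq->refs, 1) by the manager */
	wq->done = false;
}

static void worker_created(struct wq_stub *wq)
{
	wq->refs++;                /* refcount_inc() in create_io_worker() */
}

static void ref_put(struct wq_stub *wq)
{
	if (--wq->refs == 0)
		wq->done = true;   /* complete(&wq->done) analog */
}
```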
+diff --git a/fs/io-wq.h b/fs/io-wq.h
+index 071f1a9978002..9be6def2b5a6f 100644
+--- a/fs/io-wq.h
++++ b/fs/io-wq.h
+@@ -88,6 +88,7 @@ struct io_wq_work {
+ struct files_struct *files;
+ struct mm_struct *mm;
+ const struct cred *creds;
++ struct nsproxy *nsproxy;
+ struct fs_struct *fs;
+ unsigned flags;
+ };
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+index d2bb2ae9551f0..8e9c58fa76362 100644
+--- a/fs/io_uring.c
++++ b/fs/io_uring.c
+@@ -78,6 +78,7 @@
+ #include <linux/fs_struct.h>
+ #include <linux/splice.h>
+ #include <linux/task_work.h>
++#include <linux/io_uring.h>
+
+ #define CREATE_TRACE_POINTS
+ #include <trace/events/io_uring.h>
+@@ -264,7 +265,16 @@ struct io_ring_ctx {
+ /* IO offload */
+ struct io_wq *io_wq;
+ struct task_struct *sqo_thread; /* if using sq thread polling */
+- struct mm_struct *sqo_mm;
++
++ /*
++ * For SQPOLL usage - we hold a reference to the parent task, so we
++ * have access to the ->files
++ */
++ struct task_struct *sqo_task;
++
++ /* Only used for accounting purposes */
++ struct mm_struct *mm_account;
++
+ wait_queue_head_t sqo_wait;
+
+ /*
+@@ -274,8 +284,6 @@ struct io_ring_ctx {
+ */
+ struct fixed_file_data *file_data;
+ unsigned nr_user_files;
+- int ring_fd;
+- struct file *ring_file;
+
+ /* if used, fixed mapped user buffers */
+ unsigned nr_user_bufs;
+@@ -541,7 +549,6 @@ enum {
+ REQ_F_NO_FILE_TABLE_BIT,
+ REQ_F_QUEUE_TIMEOUT_BIT,
+ REQ_F_WORK_INITIALIZED_BIT,
+- REQ_F_TASK_PINNED_BIT,
+
+ /* not a real bit, just to check we're not overflowing the space */
+ __REQ_F_LAST_BIT,
+@@ -599,8 +606,6 @@ enum {
+ REQ_F_QUEUE_TIMEOUT = BIT(REQ_F_QUEUE_TIMEOUT_BIT),
+ /* io_wq_work is initialized */
+ REQ_F_WORK_INITIALIZED = BIT(REQ_F_WORK_INITIALIZED_BIT),
+- /* req->task is refcounted */
+- REQ_F_TASK_PINNED = BIT(REQ_F_TASK_PINNED_BIT),
+ };
+
+ struct async_poll {
+@@ -915,21 +920,6 @@ struct sock *io_uring_get_socket(struct file *file)
+ }
+ EXPORT_SYMBOL(io_uring_get_socket);
+
+-static void io_get_req_task(struct io_kiocb *req)
+-{
+- if (req->flags & REQ_F_TASK_PINNED)
+- return;
+- get_task_struct(req->task);
+- req->flags |= REQ_F_TASK_PINNED;
+-}
+-
+-/* not idempotent -- it doesn't clear REQ_F_TASK_PINNED */
+-static void __io_put_req_task(struct io_kiocb *req)
+-{
+- if (req->flags & REQ_F_TASK_PINNED)
+- put_task_struct(req->task);
+-}
+-
+ static void io_file_put_work(struct work_struct *work);
+
+ /*
+@@ -1141,14 +1131,34 @@ static void io_kill_timeout(struct io_kiocb *req)
+ }
+ }
+
+-static void io_kill_timeouts(struct io_ring_ctx *ctx)
++static bool io_task_match(struct io_kiocb *req, struct task_struct *tsk)
++{
++ struct io_ring_ctx *ctx = req->ctx;
++
++ if (!tsk || req->task == tsk)
++ return true;
++ if ((ctx->flags & IORING_SETUP_SQPOLL) && req->task == ctx->sqo_thread)
++ return true;
++ return false;
++}
++
++/*
++ * Returns true if we found and killed one or more timeouts
++ */
++static bool io_kill_timeouts(struct io_ring_ctx *ctx, struct task_struct *tsk)
+ {
+ struct io_kiocb *req, *tmp;
++ int canceled = 0;
+
+ spin_lock_irq(&ctx->completion_lock);
+- list_for_each_entry_safe(req, tmp, &ctx->timeout_list, list)
+- io_kill_timeout(req);
++ list_for_each_entry_safe(req, tmp, &ctx->timeout_list, list) {
++ if (io_task_match(req, tsk)) {
++ io_kill_timeout(req);
++ canceled++;
++ }
++ }
+ spin_unlock_irq(&ctx->completion_lock);
++ return canceled != 0;
+ }
+
+ static void __io_queue_deferred(struct io_ring_ctx *ctx)
+@@ -1229,12 +1239,24 @@ static void io_cqring_ev_posted(struct io_ring_ctx *ctx)
+ eventfd_signal(ctx->cq_ev_fd, 1);
+ }
+
++static inline bool io_match_files(struct io_kiocb *req,
++ struct files_struct *files)
++{
++ if (!files)
++ return true;
++ if (req->flags & REQ_F_WORK_INITIALIZED)
++ return req->work.files == files;
++ return false;
++}
++
+ /* Returns true if there are no backlogged entries after the flush */
+-static bool io_cqring_overflow_flush(struct io_ring_ctx *ctx, bool force)
++static bool io_cqring_overflow_flush(struct io_ring_ctx *ctx, bool force,
++ struct task_struct *tsk,
++ struct files_struct *files)
+ {
+ struct io_rings *rings = ctx->rings;
++ struct io_kiocb *req, *tmp;
+ struct io_uring_cqe *cqe;
+- struct io_kiocb *req;
+ unsigned long flags;
+ LIST_HEAD(list);
+
+@@ -1253,7 +1275,12 @@ static bool io_cqring_overflow_flush(struct io_ring_ctx *ctx, bool force)
+ ctx->cq_overflow_flushed = 1;
+
+ cqe = NULL;
+- while (!list_empty(&ctx->cq_overflow_list)) {
++ list_for_each_entry_safe(req, tmp, &ctx->cq_overflow_list, list) {
++ if (tsk && req->task != tsk)
++ continue;
++ if (!io_match_files(req, files))
++ continue;
++
+ cqe = io_get_cqring(ctx);
+ if (!cqe && !force)
+ break;
+@@ -1307,7 +1334,12 @@ static void __io_cqring_fill_event(struct io_kiocb *req, long res, long cflags)
+ WRITE_ONCE(cqe->user_data, req->user_data);
+ WRITE_ONCE(cqe->res, res);
+ WRITE_ONCE(cqe->flags, cflags);
+- } else if (ctx->cq_overflow_flushed) {
++ } else if (ctx->cq_overflow_flushed || req->task->io_uring->in_idle) {
++ /*
++ * If we're in ring overflow flush mode, or in task cancel mode,
++ * then we cannot store the request for later flushing, we need
++ * to drop it on the floor.
++ */
+ WRITE_ONCE(ctx->rings->cq_overflow,
+ atomic_inc_return(&ctx->cached_cq_overflow));
+ } else {
+@@ -1412,15 +1444,35 @@ static inline void io_put_file(struct io_kiocb *req, struct file *file,
+ fput(file);
+ }
+
++static void io_req_drop_files(struct io_kiocb *req)
++{
++ struct io_ring_ctx *ctx = req->ctx;
++ unsigned long flags;
++
++ spin_lock_irqsave(&ctx->inflight_lock, flags);
++ list_del(&req->inflight_entry);
++ if (waitqueue_active(&ctx->inflight_wait))
++ wake_up(&ctx->inflight_wait);
++ spin_unlock_irqrestore(&ctx->inflight_lock, flags);
++ req->flags &= ~REQ_F_INFLIGHT;
++ put_files_struct(req->work.files);
++ put_nsproxy(req->work.nsproxy);
++ req->work.files = NULL;
++}
++
+ static void __io_req_aux_free(struct io_kiocb *req)
+ {
++ struct io_uring_task *tctx = req->task->io_uring;
+ if (req->flags & REQ_F_NEED_CLEANUP)
+ io_cleanup_req(req);
+
+ kfree(req->io);
+ if (req->file)
+ io_put_file(req, req->file, (req->flags & REQ_F_FIXED_FILE));
+- __io_put_req_task(req);
++ atomic_long_inc(&tctx->req_complete);
++ if (tctx->in_idle)
++ wake_up(&tctx->wait);
++ put_task_struct(req->task);
+ io_req_work_drop_env(req);
+ }
+
+@@ -1428,16 +1480,8 @@ static void __io_free_req(struct io_kiocb *req)
+ {
+ __io_req_aux_free(req);
+
+- if (req->flags & REQ_F_INFLIGHT) {
+- struct io_ring_ctx *ctx = req->ctx;
+- unsigned long flags;
+-
+- spin_lock_irqsave(&ctx->inflight_lock, flags);
+- list_del(&req->inflight_entry);
+- if (waitqueue_active(&ctx->inflight_wait))
+- wake_up(&ctx->inflight_wait);
+- spin_unlock_irqrestore(&ctx->inflight_lock, flags);
+- }
++ if (req->flags & REQ_F_INFLIGHT)
++ io_req_drop_files(req);
+
+ percpu_ref_put(&req->ctx->refs);
+ if (likely(!io_is_fallback_req(req)))
+@@ -1717,7 +1761,7 @@ static unsigned io_cqring_events(struct io_ring_ctx *ctx, bool noflush)
+ if (noflush && !list_empty(&ctx->cq_overflow_list))
+ return -1U;
+
+- io_cqring_overflow_flush(ctx, false);
++ io_cqring_overflow_flush(ctx, false, NULL, NULL);
+ }
+
+ /* See comment at the top of this file */
+@@ -1738,7 +1782,7 @@ static inline bool io_req_multi_free(struct req_batch *rb, struct io_kiocb *req)
+ if ((req->flags & REQ_F_LINK_HEAD) || io_is_fallback_req(req))
+ return false;
+
+- if (req->file || req->io)
++ if (req->file || req->io || req->task)
+ rb->need_iter++;
+
+ rb->reqs[rb->to_free++] = req;
+@@ -1762,6 +1806,12 @@ static int io_put_kbuf(struct io_kiocb *req)
+
+ static inline bool io_run_task_work(void)
+ {
++ /*
++ * Not safe to run on exiting task, and the task_work handling will
++ * not add work to such a task.
++ */
++ if (unlikely(current->flags & PF_EXITING))
++ return false;
+ if (current->task_works) {
+ __set_current_state(TASK_RUNNING);
+ task_work_run();
+@@ -3492,8 +3542,7 @@ static int io_close_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+ return -EBADF;
+
+ req->close.fd = READ_ONCE(sqe->fd);
+- if ((req->file && req->file->f_op == &io_uring_fops) ||
+- req->close.fd == req->ctx->ring_fd)
++ if ((req->file && req->file->f_op == &io_uring_fops))
+ return -EBADF;
+
+ req->close.put_file = NULL;
+@@ -4397,9 +4446,10 @@ static int io_sq_thread_acquire_mm(struct io_ring_ctx *ctx,
+ {
+ if (io_op_defs[req->opcode].needs_mm && !current->mm) {
+ if (unlikely(!(ctx->flags & IORING_SETUP_SQPOLL) ||
+- !mmget_not_zero(ctx->sqo_mm)))
++ !ctx->sqo_task->mm ||
++ !mmget_not_zero(ctx->sqo_task->mm)))
+ return -EFAULT;
+- kthread_use_mm(ctx->sqo_mm);
++ kthread_use_mm(ctx->sqo_task->mm);
+ }
+
+ return 0;
+@@ -4550,7 +4600,6 @@ static bool io_arm_poll_handler(struct io_kiocb *req)
+ if (req->flags & REQ_F_WORK_INITIALIZED)
+ memcpy(&apoll->work, &req->work, sizeof(req->work));
+
+- io_get_req_task(req);
+ req->apoll = apoll;
+ INIT_HLIST_NODE(&req->hash_node);
+
+@@ -4635,7 +4684,10 @@ static bool io_poll_remove_one(struct io_kiocb *req)
+ return do_complete;
+ }
+
+-static void io_poll_remove_all(struct io_ring_ctx *ctx)
++/*
++ * Returns true if we found and killed one or more poll requests
++ */
++static bool io_poll_remove_all(struct io_ring_ctx *ctx, struct task_struct *tsk)
+ {
+ struct hlist_node *tmp;
+ struct io_kiocb *req;
+@@ -4646,13 +4698,17 @@ static void io_poll_remove_all(struct io_ring_ctx *ctx)
+ struct hlist_head *list;
+
+ list = &ctx->cancel_hash[i];
+- hlist_for_each_entry_safe(req, tmp, list, hash_node)
+- posted += io_poll_remove_one(req);
++ hlist_for_each_entry_safe(req, tmp, list, hash_node) {
++ if (io_task_match(req, tsk))
++ posted += io_poll_remove_one(req);
++ }
+ }
+ spin_unlock_irq(&ctx->completion_lock);
+
+ if (posted)
+ io_cqring_ev_posted(ctx);
++
++ return posted != 0;
+ }
+
+ static int io_poll_cancel(struct io_ring_ctx *ctx, __u64 sqe_addr)
+@@ -4738,8 +4794,6 @@ static int io_poll_add_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe
+
+ events = READ_ONCE(sqe->poll_events);
+ poll->events = demangle_poll(events) | EPOLLERR | EPOLLHUP;
+-
+- io_get_req_task(req);
+ return 0;
+ }
+
+@@ -5626,32 +5680,20 @@ static int io_req_set_file(struct io_submit_state *state, struct io_kiocb *req,
+
+ static int io_grab_files(struct io_kiocb *req)
+ {
+- int ret = -EBADF;
+ struct io_ring_ctx *ctx = req->ctx;
+
+ if (req->work.files || (req->flags & REQ_F_NO_FILE_TABLE))
+ return 0;
+- if (!ctx->ring_file)
+- return -EBADF;
+
+- rcu_read_lock();
++ req->work.files = get_files_struct(current);
++ get_nsproxy(current->nsproxy);
++ req->work.nsproxy = current->nsproxy;
++ req->flags |= REQ_F_INFLIGHT;
++
+ spin_lock_irq(&ctx->inflight_lock);
+- /*
+- * We use the f_ops->flush() handler to ensure that we can flush
+- * out work accessing these files if the fd is closed. Check if
+- * the fd has changed since we started down this path, and disallow
+- * this operation if it has.
+- */
+- if (fcheck(ctx->ring_fd) == ctx->ring_file) {
+- list_add(&req->inflight_entry, &ctx->inflight_list);
+- req->flags |= REQ_F_INFLIGHT;
+- req->work.files = current->files;
+- ret = 0;
+- }
++ list_add(&req->inflight_entry, &ctx->inflight_list);
+ spin_unlock_irq(&ctx->inflight_lock);
+- rcu_read_unlock();
+-
+- return ret;
++ return 0;
+ }
+
+ static enum hrtimer_restart io_link_timeout_fn(struct hrtimer *timer)
+@@ -6021,6 +6063,8 @@ static int io_init_req(struct io_ring_ctx *ctx, struct io_kiocb *req,
+ /* one is dropped after submission, the other at completion */
+ refcount_set(&req->refs, 2);
+ req->task = current;
++ get_task_struct(req->task);
++ atomic_long_inc(&req->task->io_uring->req_issue);
+ req->result = 0;
+
+ if (unlikely(req->opcode >= IORING_OP_LAST))
+@@ -6056,8 +6100,7 @@ static int io_init_req(struct io_ring_ctx *ctx, struct io_kiocb *req,
+ return io_req_set_file(state, req, READ_ONCE(sqe->fd));
+ }
+
+-static int io_submit_sqes(struct io_ring_ctx *ctx, unsigned int nr,
+- struct file *ring_file, int ring_fd)
++static int io_submit_sqes(struct io_ring_ctx *ctx, unsigned int nr)
+ {
+ struct io_submit_state state, *statep = NULL;
+ struct io_kiocb *link = NULL;
+@@ -6066,7 +6109,7 @@ static int io_submit_sqes(struct io_ring_ctx *ctx, unsigned int nr,
+ /* if we have a backlog and couldn't flush it all, return BUSY */
+ if (test_bit(0, &ctx->sq_check_overflow)) {
+ if (!list_empty(&ctx->cq_overflow_list) &&
+- !io_cqring_overflow_flush(ctx, false))
++ !io_cqring_overflow_flush(ctx, false, NULL, NULL))
+ return -EBUSY;
+ }
+
+@@ -6081,9 +6124,6 @@ static int io_submit_sqes(struct io_ring_ctx *ctx, unsigned int nr,
+ statep = &state;
+ }
+
+- ctx->ring_fd = ring_fd;
+- ctx->ring_file = ring_file;
+-
+ for (i = 0; i < nr; i++) {
+ const struct io_uring_sqe *sqe;
+ struct io_kiocb *req;
+@@ -6244,7 +6284,7 @@ static int io_sq_thread(void *data)
+
+ mutex_lock(&ctx->uring_lock);
+ if (likely(!percpu_ref_is_dying(&ctx->refs)))
+- ret = io_submit_sqes(ctx, to_submit, NULL, -1);
++ ret = io_submit_sqes(ctx, to_submit);
+ mutex_unlock(&ctx->uring_lock);
+ timeout = jiffies + ctx->sq_thread_idle;
+ }
+@@ -7073,14 +7113,38 @@ out_fput:
+ return ret;
+ }
+
++static int io_uring_alloc_task_context(struct task_struct *task)
++{
++ struct io_uring_task *tctx;
++
++ tctx = kmalloc(sizeof(*tctx), GFP_KERNEL);
++ if (unlikely(!tctx))
++ return -ENOMEM;
++
++ xa_init(&tctx->xa);
++ init_waitqueue_head(&tctx->wait);
++ tctx->last = NULL;
++ tctx->in_idle = 0;
++ atomic_long_set(&tctx->req_issue, 0);
++ atomic_long_set(&tctx->req_complete, 0);
++ task->io_uring = tctx;
++ return 0;
++}
++
++void __io_uring_free(struct task_struct *tsk)
++{
++ struct io_uring_task *tctx = tsk->io_uring;
++
++ WARN_ON_ONCE(!xa_empty(&tctx->xa));
++ kfree(tctx);
++ tsk->io_uring = NULL;
++}
++
+ static int io_sq_offload_start(struct io_ring_ctx *ctx,
+ struct io_uring_params *p)
+ {
+ int ret;
+
+- mmgrab(current->mm);
+- ctx->sqo_mm = current->mm;
+-
+ if (ctx->flags & IORING_SETUP_SQPOLL) {
+ ret = -EPERM;
+ if (!capable(CAP_SYS_ADMIN))
+@@ -7111,6 +7175,9 @@ static int io_sq_offload_start(struct io_ring_ctx *ctx,
+ ctx->sqo_thread = NULL;
+ goto err;
+ }
++ ret = io_uring_alloc_task_context(ctx->sqo_thread);
++ if (ret)
++ goto err;
+ wake_up_process(ctx->sqo_thread);
+ } else if (p->flags & IORING_SETUP_SQ_AFF) {
+ /* Can't have SQ_AFF without SQPOLL */
+@@ -7125,8 +7192,6 @@ static int io_sq_offload_start(struct io_ring_ctx *ctx,
+ return 0;
+ err:
+ io_finish_async(ctx);
+- mmdrop(ctx->sqo_mm);
+- ctx->sqo_mm = NULL;
+ return ret;
+ }
+
+@@ -7456,8 +7521,12 @@ static void io_destroy_buffers(struct io_ring_ctx *ctx)
+ static void io_ring_ctx_free(struct io_ring_ctx *ctx)
+ {
+ io_finish_async(ctx);
+- if (ctx->sqo_mm)
+- mmdrop(ctx->sqo_mm);
++ if (ctx->sqo_task) {
++ put_task_struct(ctx->sqo_task);
++ ctx->sqo_task = NULL;
++ mmdrop(ctx->mm_account);
++ ctx->mm_account = NULL;
++ }
+
+ io_iopoll_reap_events(ctx);
+ io_sqe_buffer_unregister(ctx);
+@@ -7528,7 +7597,7 @@ static void io_ring_exit_work(struct work_struct *work)
+
+ ctx = container_of(work, struct io_ring_ctx, exit_work);
+ if (ctx->rings)
+- io_cqring_overflow_flush(ctx, true);
++ io_cqring_overflow_flush(ctx, true, NULL, NULL);
+
+ /*
+ * If we're doing polled IO and end up having requests being
+@@ -7539,7 +7608,7 @@ static void io_ring_exit_work(struct work_struct *work)
+ while (!wait_for_completion_timeout(&ctx->ref_comp, HZ/20)) {
+ io_iopoll_reap_events(ctx);
+ if (ctx->rings)
+- io_cqring_overflow_flush(ctx, true);
++ io_cqring_overflow_flush(ctx, true, NULL, NULL);
+ }
+ io_ring_ctx_free(ctx);
+ }
+@@ -7550,8 +7619,8 @@ static void io_ring_ctx_wait_and_kill(struct io_ring_ctx *ctx)
+ percpu_ref_kill(&ctx->refs);
+ mutex_unlock(&ctx->uring_lock);
+
+- io_kill_timeouts(ctx);
+- io_poll_remove_all(ctx);
++ io_kill_timeouts(ctx, NULL);
++ io_poll_remove_all(ctx, NULL);
+
+ if (ctx->io_wq)
+ io_wq_cancel_all(ctx->io_wq);
+@@ -7559,7 +7628,7 @@ static void io_ring_ctx_wait_and_kill(struct io_ring_ctx *ctx)
+ io_iopoll_reap_events(ctx);
+ /* if we failed setting up the ctx, we might not have any rings */
+ if (ctx->rings)
+- io_cqring_overflow_flush(ctx, true);
++ io_cqring_overflow_flush(ctx, true, NULL, NULL);
+ idr_for_each(&ctx->personality_idr, io_remove_personalities, ctx);
+
+ /*
+@@ -7588,7 +7657,7 @@ static bool io_wq_files_match(struct io_wq_work *work, void *data)
+ {
+ struct files_struct *files = data;
+
+- return work->files == files;
++ return !files || work->files == files;
+ }
+
+ /*
+@@ -7609,12 +7678,6 @@ static bool io_match_link(struct io_kiocb *preq, struct io_kiocb *req)
+ return false;
+ }
+
+-static inline bool io_match_files(struct io_kiocb *req,
+- struct files_struct *files)
+-{
+- return (req->flags & REQ_F_WORK_INITIALIZED) && req->work.files == files;
+-}
+-
+ static bool io_match_link_files(struct io_kiocb *req,
+ struct files_struct *files)
+ {
+@@ -7729,11 +7792,14 @@ static void io_cancel_defer_files(struct io_ring_ctx *ctx,
+ }
+ }
+
+-static void io_uring_cancel_files(struct io_ring_ctx *ctx,
++/*
++ * Returns true if we found and killed one or more files pinning requests
++ */
++static bool io_uring_cancel_files(struct io_ring_ctx *ctx,
+ struct files_struct *files)
+ {
+ if (list_empty_careful(&ctx->inflight_list))
+- return;
++ return false;
+
+ io_cancel_defer_files(ctx, files);
+ /* cancel all at once, should be faster than doing it one by one*/
+@@ -7745,7 +7811,7 @@ static void io_uring_cancel_files(struct io_ring_ctx *ctx,
+
+ spin_lock_irq(&ctx->inflight_lock);
+ list_for_each_entry(req, &ctx->inflight_list, inflight_entry) {
+- if (req->work.files != files)
++ if (files && req->work.files != files)
+ continue;
+ /* req is being completed, ignore */
+ if (!refcount_inc_not_zero(&req->refs))
+@@ -7791,9 +7857,13 @@ static void io_uring_cancel_files(struct io_ring_ctx *ctx,
+ io_put_req(cancel_req);
+ }
+
++ /* cancellations _may_ trigger task work */
++ io_run_task_work();
+ schedule();
+ finish_wait(&ctx->inflight_wait, &wait);
+ }
++
++ return true;
+ }
+
+ static bool io_cancel_task_cb(struct io_wq_work *work, void *data)
+@@ -7801,21 +7871,198 @@ static bool io_cancel_task_cb(struct io_wq_work *work, void *data)
+ struct io_kiocb *req = container_of(work, struct io_kiocb, work);
+ struct task_struct *task = data;
+
+- return req->task == task;
++ return io_task_match(req, task);
++}
++
++static bool __io_uring_cancel_task_requests(struct io_ring_ctx *ctx,
++ struct task_struct *task,
++ struct files_struct *files)
++{
++ bool ret;
++
++ ret = io_uring_cancel_files(ctx, files);
++ if (!files) {
++ enum io_wq_cancel cret;
++
++ cret = io_wq_cancel_cb(ctx->io_wq, io_cancel_task_cb, task, true);
++ if (cret != IO_WQ_CANCEL_NOTFOUND)
++ ret = true;
++
++ /* SQPOLL thread does its own polling */
++ if (!(ctx->flags & IORING_SETUP_SQPOLL)) {
++ if (!list_empty_careful(&ctx->poll_list)) {
++ io_iopoll_reap_events(ctx);
++ ret = true;
++ }
++ }
++
++ ret |= io_poll_remove_all(ctx, task);
++ ret |= io_kill_timeouts(ctx, task);
++ }
++
++ return ret;
++}
++
++/*
++ * We need to iteratively cancel requests, in case a request has dependent
++ * hard links. These persist even for failure of cancelations, hence keep
++ * looping until none are found.
++ */
++static void io_uring_cancel_task_requests(struct io_ring_ctx *ctx,
++ struct files_struct *files)
++{
++ struct task_struct *task = current;
++
++ if (ctx->flags & IORING_SETUP_SQPOLL)
++ task = ctx->sqo_thread;
++
++ io_cqring_overflow_flush(ctx, true, task, files);
++
++ while (__io_uring_cancel_task_requests(ctx, task, files)) {
++ io_run_task_work();
++ cond_resched();
++ }
++}
++
++/*
++ * Note that this task has used io_uring. We use it for cancelation purposes.
++ */
++static int io_uring_add_task_file(struct file *file)
++{
++ struct io_uring_task *tctx = current->io_uring;
++
++ if (unlikely(!tctx)) {
++ int ret;
++
++ ret = io_uring_alloc_task_context(current);
++ if (unlikely(ret))
++ return ret;
++ tctx = current->io_uring;
++ }
++ if (tctx->last != file) {
++ void *old = xa_load(&tctx->xa, (unsigned long)file);
++
++ if (!old) {
++ get_file(file);
++ xa_store(&tctx->xa, (unsigned long)file, file, GFP_KERNEL);
++ }
++ tctx->last = file;
++ }
++
++ return 0;
++}
++
++/*
++ * Remove this io_uring_file -> task mapping.
++ */
++static void io_uring_del_task_file(struct file *file)
++{
++ struct io_uring_task *tctx = current->io_uring;
++
++ if (tctx->last == file)
++ tctx->last = NULL;
++ file = xa_erase(&tctx->xa, (unsigned long)file);
++ if (file)
++ fput(file);
++}
++
++static void __io_uring_attempt_task_drop(struct file *file)
++{
++ struct file *old = xa_load(&current->io_uring->xa, (unsigned long)file);
++
++ if (old == file)
++ io_uring_del_task_file(file);
++}
++
++/*
++ * Drop task note for this file if we're the only ones that hold it after
++ * pending fput()
++ */
++static void io_uring_attempt_task_drop(struct file *file, bool exiting)
++{
++ if (!current->io_uring)
++ return;
++ /*
++ * fput() is pending, will be 2 if the only other ref is our potential
++ * task file note. If the task is exiting, drop regardless of count.
++ */
++ if (!exiting && atomic_long_read(&file->f_count) != 2)
++ return;
++
++ __io_uring_attempt_task_drop(file);
++}
++
++void __io_uring_files_cancel(struct files_struct *files)
++{
++ struct io_uring_task *tctx = current->io_uring;
++ struct file *file;
++ unsigned long index;
++
++ /* make sure overflow events are dropped */
++ tctx->in_idle = true;
++
++ xa_for_each(&tctx->xa, index, file) {
++ struct io_ring_ctx *ctx = file->private_data;
++
++ io_uring_cancel_task_requests(ctx, files);
++ if (files)
++ io_uring_del_task_file(file);
++ }
++}
++
++static inline bool io_uring_task_idle(struct io_uring_task *tctx)
++{
++ return atomic_long_read(&tctx->req_issue) ==
++ atomic_long_read(&tctx->req_complete);
++}
++
++/*
++ * Find any io_uring fd that this task has registered or done IO on, and cancel
++ * requests.
++ */
++void __io_uring_task_cancel(void)
++{
++ struct io_uring_task *tctx = current->io_uring;
++ DEFINE_WAIT(wait);
++ long completions;
++
++ /* make sure overflow events are dropped */
++ tctx->in_idle = true;
++
++ while (!io_uring_task_idle(tctx)) {
++ /* read completions before cancelations */
++ completions = atomic_long_read(&tctx->req_complete);
++ __io_uring_files_cancel(NULL);
++
++ prepare_to_wait(&tctx->wait, &wait, TASK_UNINTERRUPTIBLE);
++
++ /*
++ * If we've seen completions, retry. This avoids a race where
++ * a completion comes in before we did prepare_to_wait().
++ */
++ if (completions != atomic_long_read(&tctx->req_complete))
++ continue;
++ if (io_uring_task_idle(tctx))
++ break;
++ schedule();
++ }
++
++ finish_wait(&tctx->wait, &wait);
++ tctx->in_idle = false;
+ }
+
+ static int io_uring_flush(struct file *file, void *data)
+ {
+ struct io_ring_ctx *ctx = file->private_data;
+
+- io_uring_cancel_files(ctx, data);
+-
+ /*
+ * If the task is going away, cancel work it may have pending
+ */
+ if (fatal_signal_pending(current) || (current->flags & PF_EXITING))
+- io_wq_cancel_cb(ctx->io_wq, io_cancel_task_cb, current, true);
++ data = NULL;
+
++ io_uring_cancel_task_requests(ctx, data);
++ io_uring_attempt_task_drop(file, !data);
+ return 0;
+ }
+
+@@ -7924,13 +8171,16 @@ SYSCALL_DEFINE6(io_uring_enter, unsigned int, fd, u32, to_submit,
+ ret = 0;
+ if (ctx->flags & IORING_SETUP_SQPOLL) {
+ if (!list_empty_careful(&ctx->cq_overflow_list))
+- io_cqring_overflow_flush(ctx, false);
++ io_cqring_overflow_flush(ctx, false, NULL, NULL);
+ if (flags & IORING_ENTER_SQ_WAKEUP)
+ wake_up(&ctx->sqo_wait);
+ submitted = to_submit;
+ } else if (to_submit) {
++ ret = io_uring_add_task_file(f.file);
++ if (unlikely(ret))
++ goto out;
+ mutex_lock(&ctx->uring_lock);
+- submitted = io_submit_sqes(ctx, to_submit, f.file, fd);
++ submitted = io_submit_sqes(ctx, to_submit);
+ mutex_unlock(&ctx->uring_lock);
+
+ if (submitted != to_submit)
+@@ -8142,6 +8392,7 @@ static int io_uring_get_fd(struct io_ring_ctx *ctx)
+ file = anon_inode_getfile("[io_uring]", &io_uring_fops, ctx,
+ O_RDWR | O_CLOEXEC);
+ if (IS_ERR(file)) {
++err_fd:
+ put_unused_fd(ret);
+ ret = PTR_ERR(file);
+ goto err;
+@@ -8150,6 +8401,10 @@ static int io_uring_get_fd(struct io_ring_ctx *ctx)
+ #if defined(CONFIG_UNIX)
+ ctx->ring_sock->file = file;
+ #endif
++ if (unlikely(io_uring_add_task_file(file))) {
++ file = ERR_PTR(-ENOMEM);
++ goto err_fd;
++ }
+ fd_install(ret, file);
+ return ret;
+ err:
+@@ -8228,6 +8483,16 @@ static int io_uring_create(unsigned entries, struct io_uring_params *p,
+ ctx->user = user;
+ ctx->creds = get_current_cred();
+
++ ctx->sqo_task = get_task_struct(current);
++ /*
++ * This is just grabbed for accounting purposes. When a process exits,
++ * the mm is exited and dropped before the files, hence we need to hang
++ * on to this mm purely for the purposes of being able to unaccount
++ * memory (locked/pinned vm). It's not used for anything else.
++ */
++ mmgrab(current->mm);
++ ctx->mm_account = current->mm;
++
+ ret = io_allocate_scq_urings(ctx, p);
+ if (ret)
+ goto err;
+diff --git a/include/linux/fs.h b/include/linux/fs.h
+index ac1e89872db4f..819245cc9dbd4 100644
+--- a/include/linux/fs.h
++++ b/include/linux/fs.h
+@@ -3011,7 +3011,6 @@ extern int do_pipe_flags(int *, int);
+ id(UNKNOWN, unknown) \
+ id(FIRMWARE, firmware) \
+ id(FIRMWARE_PREALLOC_BUFFER, firmware) \
+- id(FIRMWARE_EFI_EMBEDDED, firmware) \
+ id(MODULE, kernel-module) \
+ id(KEXEC_IMAGE, kexec-image) \
+ id(KEXEC_INITRAMFS, kexec-initramfs) \
+diff --git a/include/linux/io_uring.h b/include/linux/io_uring.h
+new file mode 100644
+index 0000000000000..c09135a1ef132
+--- /dev/null
++++ b/include/linux/io_uring.h
+@@ -0,0 +1,53 @@
++/* SPDX-License-Identifier: GPL-2.0-or-later */
++#ifndef _LINUX_IO_URING_H
++#define _LINUX_IO_URING_H
++
++#include <linux/sched.h>
++#include <linux/xarray.h>
++#include <linux/percpu-refcount.h>
++
++struct io_uring_task {
++ /* submission side */
++ struct xarray xa;
++ struct wait_queue_head wait;
++ struct file *last;
++ atomic_long_t req_issue;
++
++ /* completion side */
++ bool in_idle ____cacheline_aligned_in_smp;
++ atomic_long_t req_complete;
++};
++
++#if defined(CONFIG_IO_URING)
++void __io_uring_task_cancel(void);
++void __io_uring_files_cancel(struct files_struct *files);
++void __io_uring_free(struct task_struct *tsk);
++
++static inline void io_uring_task_cancel(void)
++{
++ if (current->io_uring && !xa_empty(&current->io_uring->xa))
++ __io_uring_task_cancel();
++}
++static inline void io_uring_files_cancel(struct files_struct *files)
++{
++ if (current->io_uring && !xa_empty(&current->io_uring->xa))
++ __io_uring_files_cancel(files);
++}
++static inline void io_uring_free(struct task_struct *tsk)
++{
++ if (tsk->io_uring)
++ __io_uring_free(tsk);
++}
++#else
++static inline void io_uring_task_cancel(void)
++{
++}
++static inline void io_uring_files_cancel(struct files_struct *files)
++{
++}
++static inline void io_uring_free(struct task_struct *tsk)
++{
++}
++#endif
++
++#endif
+diff --git a/include/linux/mtd/pfow.h b/include/linux/mtd/pfow.h
+index 122f3439e1af2..c65d7a3be3c69 100644
+--- a/include/linux/mtd/pfow.h
++++ b/include/linux/mtd/pfow.h
+@@ -128,7 +128,7 @@ static inline void print_drs_error(unsigned dsr)
+
+ if (!(dsr & DSR_AVAILABLE))
+ printk(KERN_NOTICE"DSR.15: (0) Device not Available\n");
+- if (prog_status & 0x03)
++ if ((prog_status & 0x03) == 0x03)
+ printk(KERN_NOTICE"DSR.9,8: (11) Attempt to program invalid "
+ "half with 41h command\n");
+ else if (prog_status & 0x02)
+diff --git a/include/linux/pm.h b/include/linux/pm.h
+index 121c104a4090e..1010bf3d3008b 100644
+--- a/include/linux/pm.h
++++ b/include/linux/pm.h
+@@ -584,7 +584,7 @@ struct dev_pm_info {
+ #endif
+ #ifdef CONFIG_PM
+ struct hrtimer suspend_timer;
+- unsigned long timer_expires;
++ u64 timer_expires;
+ struct work_struct work;
+ wait_queue_head_t wait_queue;
+ struct wake_irq *wakeirq;
+diff --git a/include/linux/qcom-geni-se.h b/include/linux/qcom-geni-se.h
+index dd464943f717a..5b90eff50bf6e 100644
+--- a/include/linux/qcom-geni-se.h
++++ b/include/linux/qcom-geni-se.h
+@@ -229,6 +229,9 @@ struct geni_se {
+ #define GENI_SE_VERSION_MINOR(ver) ((ver & HW_VER_MINOR_MASK) >> HW_VER_MINOR_SHFT)
+ #define GENI_SE_VERSION_STEP(ver) (ver & HW_VER_STEP_MASK)
+
++/* QUP SE VERSION value for major number 2 and minor number 5 */
++#define QUP_SE_VERSION_2_5 0x20050000
++
+ #if IS_ENABLED(CONFIG_QCOM_GENI_SE)
+
+ u32 geni_se_get_qup_hw_version(struct geni_se *se);
+diff --git a/include/linux/sched.h b/include/linux/sched.h
+index 6833729430932..f0f38e86ab1ee 100644
+--- a/include/linux/sched.h
++++ b/include/linux/sched.h
+@@ -61,6 +61,7 @@ struct sighand_struct;
+ struct signal_struct;
+ struct task_delay_info;
+ struct task_group;
++struct io_uring_task;
+
+ /*
+ * Task state bitmask. NOTE! These bits are also
+@@ -923,6 +924,10 @@ struct task_struct {
+ /* Open file information: */
+ struct files_struct *files;
+
++#ifdef CONFIG_IO_URING
++ struct io_uring_task *io_uring;
++#endif
++
+ /* Namespaces: */
+ struct nsproxy *nsproxy;
+
+diff --git a/include/linux/string.h b/include/linux/string.h
+index 9b7a0632e87aa..b1f3894a0a3e4 100644
+--- a/include/linux/string.h
++++ b/include/linux/string.h
+@@ -161,20 +161,13 @@ extern int bcmp(const void *,const void *,__kernel_size_t);
+ #ifndef __HAVE_ARCH_MEMCHR
+ extern void * memchr(const void *,int,__kernel_size_t);
+ #endif
+-#ifndef __HAVE_ARCH_MEMCPY_MCSAFE
+-static inline __must_check unsigned long memcpy_mcsafe(void *dst,
+- const void *src, size_t cnt)
+-{
+- memcpy(dst, src, cnt);
+- return 0;
+-}
+-#endif
+ #ifndef __HAVE_ARCH_MEMCPY_FLUSHCACHE
+ static inline void memcpy_flushcache(void *dst, const void *src, size_t cnt)
+ {
+ memcpy(dst, src, cnt);
+ }
+ #endif
++
+ void *memchr_inv(const void *s, int c, size_t n);
+ char *strreplace(char *s, char old, char new);
+
+diff --git a/include/linux/uaccess.h b/include/linux/uaccess.h
+index 0a76ddc07d597..1ef3ab2343aa4 100644
+--- a/include/linux/uaccess.h
++++ b/include/linux/uaccess.h
+@@ -163,6 +163,19 @@ copy_in_user(void __user *to, const void __user *from, unsigned long n)
+ }
+ #endif
+
++#ifndef copy_mc_to_kernel
++/*
++ * Without arch opt-in this generic copy_mc_to_kernel() will not handle
++ * #MC (or arch equivalent) during source read.
++ */
++static inline unsigned long __must_check
++copy_mc_to_kernel(void *dst, const void *src, size_t cnt)
++{
++ memcpy(dst, src, cnt);
++ return 0;
++}
++#endif
++
+ static __always_inline void pagefault_disabled_inc(void)
+ {
+ current->pagefault_disabled++;
+diff --git a/include/linux/uio.h b/include/linux/uio.h
+index 9576fd8158d7d..6a97b4d10b2ed 100644
+--- a/include/linux/uio.h
++++ b/include/linux/uio.h
+@@ -186,10 +186,10 @@ size_t _copy_from_iter_flushcache(void *addr, size_t bytes, struct iov_iter *i);
+ #define _copy_from_iter_flushcache _copy_from_iter_nocache
+ #endif
+
+-#ifdef CONFIG_ARCH_HAS_UACCESS_MCSAFE
+-size_t _copy_to_iter_mcsafe(const void *addr, size_t bytes, struct iov_iter *i);
++#ifdef CONFIG_ARCH_HAS_COPY_MC
++size_t _copy_mc_to_iter(const void *addr, size_t bytes, struct iov_iter *i);
+ #else
+-#define _copy_to_iter_mcsafe _copy_to_iter
++#define _copy_mc_to_iter _copy_to_iter
+ #endif
+
+ static __always_inline __must_check
+@@ -202,12 +202,12 @@ size_t copy_from_iter_flushcache(void *addr, size_t bytes, struct iov_iter *i)
+ }
+
+ static __always_inline __must_check
+-size_t copy_to_iter_mcsafe(void *addr, size_t bytes, struct iov_iter *i)
++size_t copy_mc_to_iter(void *addr, size_t bytes, struct iov_iter *i)
+ {
+ if (unlikely(!check_copy_size(addr, bytes, true)))
+ return 0;
+ else
+- return _copy_to_iter_mcsafe(addr, bytes, i);
++ return _copy_mc_to_iter(addr, bytes, i);
+ }
+
+ size_t iov_iter_zero(size_t bytes, struct iov_iter *);
+diff --git a/include/net/netfilter/nf_tables.h b/include/net/netfilter/nf_tables.h
+index ec2cbfab71f35..f09541cba3c9d 100644
+--- a/include/net/netfilter/nf_tables.h
++++ b/include/net/netfilter/nf_tables.h
+@@ -896,6 +896,12 @@ static inline struct nft_expr *nft_expr_last(const struct nft_rule *rule)
+ return (struct nft_expr *)&rule->data[rule->dlen];
+ }
+
++static inline bool nft_expr_more(const struct nft_rule *rule,
++ const struct nft_expr *expr)
++{
++ return expr != nft_expr_last(rule) && expr->ops;
++}
++
+ static inline struct nft_userdata *nft_userdata(const struct nft_rule *rule)
+ {
+ return (void *)&rule->data[rule->dlen];
+diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
+index a3fd55194e0b1..7bffadcfd6eb0 100644
+--- a/include/uapi/linux/bpf.h
++++ b/include/uapi/linux/bpf.h
+@@ -1416,8 +1416,8 @@ union bpf_attr {
+ * Return
+ * The return value depends on the result of the test, and can be:
+ *
+- * * 0, if the *skb* task belongs to the cgroup2.
+- * * 1, if the *skb* task does not belong to the cgroup2.
++ * * 0, if current task belongs to the cgroup2.
++ * * 1, if current task does not belong to the cgroup2.
+ * * A negative error code, if an error occurred.
+ *
+ * int bpf_skb_change_tail(struct sk_buff *skb, u32 len, u64 flags)
+diff --git a/init/init_task.c b/init/init_task.c
+index 15089d15010ab..7802f91109b48 100644
+--- a/init/init_task.c
++++ b/init/init_task.c
+@@ -113,6 +113,9 @@ struct task_struct init_task
+ .thread = INIT_THREAD,
+ .fs = &init_fs,
+ .files = &init_files,
++#ifdef CONFIG_IO_URING
++ .io_uring = NULL,
++#endif
+ .signal = &init_signals,
+ .sighand = &init_sighand,
+ .nsproxy = &init_nsproxy,
+diff --git a/kernel/fork.c b/kernel/fork.c
+index 0074bbe8c66f1..c725015b3c465 100644
+--- a/kernel/fork.c
++++ b/kernel/fork.c
+@@ -95,6 +95,7 @@
+ #include <linux/stackleak.h>
+ #include <linux/kasan.h>
+ #include <linux/scs.h>
++#include <linux/io_uring.h>
+
+ #include <asm/pgalloc.h>
+ #include <linux/uaccess.h>
+@@ -745,6 +746,7 @@ void __put_task_struct(struct task_struct *tsk)
+ WARN_ON(refcount_read(&tsk->usage));
+ WARN_ON(tsk == current);
+
++ io_uring_free(tsk);
+ cgroup_free(tsk);
+ task_numa_free(tsk, true);
+ security_task_free(tsk);
+@@ -2022,6 +2024,10 @@ static __latent_entropy struct task_struct *copy_process(
+ p->vtime.state = VTIME_INACTIVE;
+ #endif
+
++#ifdef CONFIG_IO_URING
++ p->io_uring = NULL;
++#endif
++
+ #if defined(SPLIT_RSS_COUNTING)
+ memset(&p->rss_stat, 0, sizeof(p->rss_stat));
+ #endif
+diff --git a/lib/Kconfig b/lib/Kconfig
+index df3f3da959900..7761458649377 100644
+--- a/lib/Kconfig
++++ b/lib/Kconfig
+@@ -631,7 +631,12 @@ config UACCESS_MEMCPY
+ config ARCH_HAS_UACCESS_FLUSHCACHE
+ bool
+
+-config ARCH_HAS_UACCESS_MCSAFE
++# arch has a concept of a recoverable synchronous exception due to a
++# memory-read error like x86 machine-check or ARM data-abort, and
++# implements copy_mc_to_{user,kernel} to abort and report
++# 'bytes-transferred' if that exception fires when accessing the source
++# buffer.
++config ARCH_HAS_COPY_MC
+ bool
+
+ # Temporary. Goes away when all archs are cleaned up
+diff --git a/lib/iov_iter.c b/lib/iov_iter.c
+index bf538c2bec777..aefe469905434 100644
+--- a/lib/iov_iter.c
++++ b/lib/iov_iter.c
+@@ -636,30 +636,30 @@ size_t _copy_to_iter(const void *addr, size_t bytes, struct iov_iter *i)
+ }
+ EXPORT_SYMBOL(_copy_to_iter);
+
+-#ifdef CONFIG_ARCH_HAS_UACCESS_MCSAFE
+-static int copyout_mcsafe(void __user *to, const void *from, size_t n)
++#ifdef CONFIG_ARCH_HAS_COPY_MC
++static int copyout_mc(void __user *to, const void *from, size_t n)
+ {
+ if (access_ok(to, n)) {
+ instrument_copy_to_user(to, from, n);
+- n = copy_to_user_mcsafe((__force void *) to, from, n);
++ n = copy_mc_to_user((__force void *) to, from, n);
+ }
+ return n;
+ }
+
+-static unsigned long memcpy_mcsafe_to_page(struct page *page, size_t offset,
++static unsigned long copy_mc_to_page(struct page *page, size_t offset,
+ const char *from, size_t len)
+ {
+ unsigned long ret;
+ char *to;
+
+ to = kmap_atomic(page);
+- ret = memcpy_mcsafe(to + offset, from, len);
++ ret = copy_mc_to_kernel(to + offset, from, len);
+ kunmap_atomic(to);
+
+ return ret;
+ }
+
+-static size_t copy_pipe_to_iter_mcsafe(const void *addr, size_t bytes,
++static size_t copy_mc_pipe_to_iter(const void *addr, size_t bytes,
+ struct iov_iter *i)
+ {
+ struct pipe_inode_info *pipe = i->pipe;
+@@ -677,7 +677,7 @@ static size_t copy_pipe_to_iter_mcsafe(const void *addr, size_t bytes,
+ size_t chunk = min_t(size_t, n, PAGE_SIZE - off);
+ unsigned long rem;
+
+- rem = memcpy_mcsafe_to_page(pipe->bufs[i_head & p_mask].page,
++ rem = copy_mc_to_page(pipe->bufs[i_head & p_mask].page,
+ off, addr, chunk);
+ i->head = i_head;
+ i->iov_offset = off + chunk - rem;
+@@ -694,18 +694,17 @@ static size_t copy_pipe_to_iter_mcsafe(const void *addr, size_t bytes,
+ }
+
+ /**
+- * _copy_to_iter_mcsafe - copy to user with source-read error exception handling
++ * _copy_mc_to_iter - copy to iter with source memory error exception handling
+ * @addr: source kernel address
+ * @bytes: total transfer length
+ * @iter: destination iterator
+ *
+- * The pmem driver arranges for filesystem-dax to use this facility via
+- * dax_copy_to_iter() for protecting read/write to persistent memory.
+- * Unless / until an architecture can guarantee identical performance
+- * between _copy_to_iter_mcsafe() and _copy_to_iter() it would be a
+- * performance regression to switch more users to the mcsafe version.
++ * The pmem driver deploys this for the dax operation
++ * (dax_copy_to_iter()) for dax reads (bypass page-cache and the
++ * block-layer). Upon #MC read(2) aborts and returns EIO or the bytes
++ * successfully copied.
+ *
+- * Otherwise, the main differences between this and typical _copy_to_iter().
++ * The main differences between this and typical _copy_to_iter().
+ *
+ * * Typical tail/residue handling after a fault retries the copy
+ * byte-by-byte until the fault happens again. Re-triggering machine
+@@ -716,23 +715,22 @@ static size_t copy_pipe_to_iter_mcsafe(const void *addr, size_t bytes,
+ * * ITER_KVEC, ITER_PIPE, and ITER_BVEC can return short copies.
+ * Compare to copy_to_iter() where only ITER_IOVEC attempts might return
+ * a short copy.
+- *
+- * See MCSAFE_TEST for self-test.
+ */
+-size_t _copy_to_iter_mcsafe(const void *addr, size_t bytes, struct iov_iter *i)
++size_t _copy_mc_to_iter(const void *addr, size_t bytes, struct iov_iter *i)
+ {
+ const char *from = addr;
+ unsigned long rem, curr_addr, s_addr = (unsigned long) addr;
+
+ if (unlikely(iov_iter_is_pipe(i)))
+- return copy_pipe_to_iter_mcsafe(addr, bytes, i);
++ return copy_mc_pipe_to_iter(addr, bytes, i);
+ if (iter_is_iovec(i))
+ might_fault();
+ iterate_and_advance(i, bytes, v,
+- copyout_mcsafe(v.iov_base, (from += v.iov_len) - v.iov_len, v.iov_len),
++ copyout_mc(v.iov_base, (from += v.iov_len) - v.iov_len,
++ v.iov_len),
+ ({
+- rem = memcpy_mcsafe_to_page(v.bv_page, v.bv_offset,
+- (from += v.bv_len) - v.bv_len, v.bv_len);
++ rem = copy_mc_to_page(v.bv_page, v.bv_offset,
++ (from += v.bv_len) - v.bv_len, v.bv_len);
+ if (rem) {
+ curr_addr = (unsigned long) from;
+ bytes = curr_addr - s_addr - rem;
+@@ -740,8 +738,8 @@ size_t _copy_to_iter_mcsafe(const void *addr, size_t bytes, struct iov_iter *i)
+ }
+ }),
+ ({
+- rem = memcpy_mcsafe(v.iov_base, (from += v.iov_len) - v.iov_len,
+- v.iov_len);
++ rem = copy_mc_to_kernel(v.iov_base, (from += v.iov_len)
++ - v.iov_len, v.iov_len);
+ if (rem) {
+ curr_addr = (unsigned long) from;
+ bytes = curr_addr - s_addr - rem;
+@@ -752,8 +750,8 @@ size_t _copy_to_iter_mcsafe(const void *addr, size_t bytes, struct iov_iter *i)
+
+ return bytes;
+ }
+-EXPORT_SYMBOL_GPL(_copy_to_iter_mcsafe);
+-#endif /* CONFIG_ARCH_HAS_UACCESS_MCSAFE */
++EXPORT_SYMBOL_GPL(_copy_mc_to_iter);
++#endif /* CONFIG_ARCH_HAS_COPY_MC */
+
+ size_t _copy_from_iter(void *addr, size_t bytes, struct iov_iter *i)
+ {
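[Aside, not part of the patch: the comment block above describes the copy_mc_*() return convention — on hitting poisoned memory the copy stops and reports the bytes *not* copied, rather than retrying byte-by-byte as ordinary fault handling would. A minimal plain-C sketch of that contract, with poison simulated by a sentinel offset:]

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Sketch of the copy_mc_*() return convention (illustrative only, not
 * kernel code): copy up to the first "poisoned" source byte and return
 * the number of bytes NOT copied; 0 means full success.
 * poison_off simulates the first unreadable byte (>= len: no poison). */
static size_t sim_copy_mc(char *dst, const char *src, size_t len,
                          size_t poison_off)
{
	size_t n = len < poison_off ? len : poison_off;

	memcpy(dst, src, n);	/* copy only the clean prefix */
	return len - n;		/* remainder, as copy_mc_to_kernel() reports */
}
```

[This is why read(2) on pmem can return either EIO or a short byte count: the caller learns exactly how much of the buffer is valid.]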
+diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
+index 06a8242aa6980..6dd7f44497ecc 100644
+--- a/net/ipv4/tcp.c
++++ b/net/ipv4/tcp.c
+@@ -483,6 +483,8 @@ static inline bool tcp_stream_is_readable(const struct tcp_sock *tp,
+ return true;
+ if (tcp_rmem_pressure(sk))
+ return true;
++ if (tcp_receive_window(tp) <= inet_csk(sk)->icsk_ack.rcv_mss)
++ return true;
+ }
+ if (sk->sk_prot->stream_memory_read)
+ return sk->sk_prot->stream_memory_read(sk);
+diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
+index 02cc972edd0b0..6c7e982169467 100644
+--- a/net/ipv4/tcp_input.c
++++ b/net/ipv4/tcp_input.c
+@@ -4790,7 +4790,8 @@ void tcp_data_ready(struct sock *sk)
+ int avail = tp->rcv_nxt - tp->copied_seq;
+
+ if (avail < sk->sk_rcvlowat && !tcp_rmem_pressure(sk) &&
+- !sock_flag(sk, SOCK_DONE))
++ !sock_flag(sk, SOCK_DONE) &&
++ tcp_receive_window(tp) > inet_csk(sk)->icsk_ack.rcv_mss)
+ return;
+
+ sk->sk_data_ready(sk);
+diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
+index 05059f620d41e..fe51a7df4f524 100644
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -295,7 +295,7 @@ static void nft_rule_expr_activate(const struct nft_ctx *ctx,
+ struct nft_expr *expr;
+
+ expr = nft_expr_first(rule);
+- while (expr != nft_expr_last(rule) && expr->ops) {
++ while (nft_expr_more(rule, expr)) {
+ if (expr->ops->activate)
+ expr->ops->activate(ctx, expr);
+
+@@ -310,7 +310,7 @@ static void nft_rule_expr_deactivate(const struct nft_ctx *ctx,
+ struct nft_expr *expr;
+
+ expr = nft_expr_first(rule);
+- while (expr != nft_expr_last(rule) && expr->ops) {
++ while (nft_expr_more(rule, expr)) {
+ if (expr->ops->deactivate)
+ expr->ops->deactivate(ctx, expr, phase);
+
+@@ -2917,7 +2917,7 @@ static void nf_tables_rule_destroy(const struct nft_ctx *ctx,
+ * is called on error from nf_tables_newrule().
+ */
+ expr = nft_expr_first(rule);
+- while (expr != nft_expr_last(rule) && expr->ops) {
++ while (nft_expr_more(rule, expr)) {
+ next = nft_expr_next(expr);
+ nf_tables_expr_destroy(ctx, expr);
+ expr = next;
+diff --git a/net/netfilter/nf_tables_offload.c b/net/netfilter/nf_tables_offload.c
+index c7cf1cde46def..ce2387bfb5dce 100644
+--- a/net/netfilter/nf_tables_offload.c
++++ b/net/netfilter/nf_tables_offload.c
+@@ -37,7 +37,7 @@ struct nft_flow_rule *nft_flow_rule_create(struct net *net,
+ struct nft_expr *expr;
+
+ expr = nft_expr_first(rule);
+- while (expr->ops && expr != nft_expr_last(rule)) {
++ while (nft_expr_more(rule, expr)) {
+ if (expr->ops->offload_flags & NFT_OFFLOAD_F_ACTION)
+ num_actions++;
+
+@@ -61,7 +61,7 @@ struct nft_flow_rule *nft_flow_rule_create(struct net *net,
+ ctx->net = net;
+ ctx->dep.type = NFT_OFFLOAD_DEP_UNSPEC;
+
+- while (expr->ops && expr != nft_expr_last(rule)) {
++ while (nft_expr_more(rule, expr)) {
+ if (!expr->ops->offload) {
+ err = -EOPNOTSUPP;
+ goto err_out;
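[Aside, not part of the patch: the hunks above fold the open-coded `expr != nft_expr_last(rule) && expr->ops` test into one nft_expr_more() predicate so every expression walker terminates identically. A plain-C sketch of the idea — the names and struct are illustrative stand-ins, not the kernel's nftables types:]

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative stand-in: an expression array ends at `last`, and a
 * partially-initialized tail entry has ops == NULL. */
struct expr { const void *ops; };

/* Counterpart of nft_expr_more(): true while there is a usable expr.
 * Centralizing the test means no walker can get the two conditions
 * (end-of-array vs. uninitialized entry) in the wrong order. */
static int expr_more(const struct expr *last, const struct expr *e)
{
	return e != last && e->ops != NULL;
}
```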
+diff --git a/net/sched/act_mpls.c b/net/sched/act_mpls.c
+index e298ec3b3c9e3..ca026e2bf8d27 100644
+--- a/net/sched/act_mpls.c
++++ b/net/sched/act_mpls.c
+@@ -408,6 +408,7 @@ static void __exit mpls_cleanup_module(void)
+ module_init(mpls_init_module);
+ module_exit(mpls_cleanup_module);
+
++MODULE_SOFTDEP("post: mpls_gso");
+ MODULE_AUTHOR("Netronome Systems <oss-drivers@netronome.com>");
+ MODULE_LICENSE("GPL");
+ MODULE_DESCRIPTION("MPLS manipulation actions");
+diff --git a/net/sched/cls_api.c b/net/sched/cls_api.c
+index 8bf6bde1cfe59..aa2448253dbab 100644
+--- a/net/sched/cls_api.c
++++ b/net/sched/cls_api.c
+@@ -650,12 +650,12 @@ static void tc_block_indr_cleanup(struct flow_block_cb *block_cb)
+ block_cb->indr.binder_type,
+ &block->flow_block, tcf_block_shared(block),
+ &extack);
++ rtnl_lock();
+ down_write(&block->cb_lock);
+ list_del(&block_cb->driver_list);
+ list_move(&block_cb->list, &bo.cb_list);
+- up_write(&block->cb_lock);
+- rtnl_lock();
+ tcf_block_unbind(block, &bo);
++ up_write(&block->cb_lock);
+ rtnl_unlock();
+ }
+
+diff --git a/net/sched/sch_netem.c b/net/sched/sch_netem.c
+index 84f82771cdf5d..0c345e43a09a3 100644
+--- a/net/sched/sch_netem.c
++++ b/net/sched/sch_netem.c
+@@ -330,7 +330,7 @@ static s64 tabledist(s64 mu, s32 sigma,
+
+ /* default uniform distribution */
+ if (dist == NULL)
+- return ((rnd % (2 * sigma)) + mu) - sigma;
++ return ((rnd % (2 * (u32)sigma)) + mu) - sigma;
+
+ t = dist->table[rnd % dist->size];
+ x = (sigma % NETEM_DIST_SCALE) * t;
+@@ -812,6 +812,10 @@ static void get_slot(struct netem_sched_data *q, const struct nlattr *attr)
+ q->slot_config.max_packets = INT_MAX;
+ if (q->slot_config.max_bytes == 0)
+ q->slot_config.max_bytes = INT_MAX;
++
++ /* capping dist_jitter to the range acceptable by tabledist() */
++ q->slot_config.dist_jitter = min_t(__s64, INT_MAX, abs(q->slot_config.dist_jitter));
++
+ q->slot.packets_left = q->slot_config.max_packets;
+ q->slot.bytes_left = q->slot_config.max_bytes;
+ if (q->slot_config.min_delay | q->slot_config.max_delay |
+@@ -1037,6 +1041,9 @@ static int netem_change(struct Qdisc *sch, struct nlattr *opt,
+ if (tb[TCA_NETEM_SLOT])
+ get_slot(q, tb[TCA_NETEM_SLOT]);
+
++ /* capping jitter to the range acceptable by tabledist() */
++ q->jitter = min_t(s64, abs(q->jitter), INT_MAX);
++
+ return ret;
+
+ get_table_failure:
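[Aside, not part of the patch: the tabledist() change above exists because with a signed 32-bit sigma, `2 * sigma` overflows once sigma reaches 0x40000000, making the modulo divisor negative or undefined; widening through u32 keeps it well-defined, and the new min_t() clamps keep jitter within [0, INT_MAX]. A plain-C sketch of the fixed expression, assuming (as the clamps ensure) a positive sigma:]

```c
#include <assert.h>
#include <stdint.h>

/* Uniform-distribution sketch mirroring the fixed tabledist() line.
 * sigma must be > 0; the patch's min_t() clamps guarantee that range. */
static int64_t uniform_dist(int64_t mu, int32_t sigma, int64_t rnd)
{
	/* 2 * (uint32_t)sigma is computed modulo 2^32 and then widened
	 * to a non-negative 64-bit divisor; plain 2 * sigma would
	 * overflow signed int for sigma >= 0x40000000. */
	return ((rnd % (2 * (uint32_t)sigma)) + mu) - sigma;
}
```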
+diff --git a/net/tipc/msg.c b/net/tipc/msg.c
+index 15b24fbcbe970..0d6297f75df18 100644
+--- a/net/tipc/msg.c
++++ b/net/tipc/msg.c
+@@ -150,12 +150,11 @@ int tipc_buf_append(struct sk_buff **headbuf, struct sk_buff **buf)
+ if (fragid == FIRST_FRAGMENT) {
+ if (unlikely(head))
+ goto err;
+- if (skb_cloned(frag))
+- frag = skb_copy(frag, GFP_ATOMIC);
++ *buf = NULL;
++ frag = skb_unshare(frag, GFP_ATOMIC);
+ if (unlikely(!frag))
+ goto err;
+ head = *headbuf = frag;
+- *buf = NULL;
+ TIPC_SKB_CB(head)->tail = NULL;
+ if (skb_is_nonlinear(head)) {
+ skb_walk_frags(head, tail) {
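[Aside, not part of the patch: the tipc_buf_append() reordering above applies an ownership rule — clear the caller's reference *before* calling a routine that may free and replace the buffer, so no error path can observe a dangling pointer. A plain-C sketch of that pattern; the struct and helpers are illustrative, not the sk_buff API, and error handling is simplified:]

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

struct buf { char *data; };

/* Like skb_unshare() in spirit: may free the original and hand back a
 * fresh copy (always copies here, to keep the sketch small).
 * Returns NULL on allocation failure. */
static struct buf *unshare(struct buf *b)
{
	struct buf *n = malloc(sizeof(*n));

	if (!n)
		return NULL;
	n->data = malloc(strlen(b->data) + 1);
	if (!n->data) {
		free(n);
		return NULL;
	}
	strcpy(n->data, b->data);
	free(b->data);
	free(b);
	return n;
}

/* Take ownership: NULL the caller's reference *before* unshare() can
 * free the buffer, mirroring the moved `*buf = NULL;` in the patch. */
static struct buf *take_first_fragment(struct buf **caller_ref)
{
	struct buf *frag = *caller_ref;

	*caller_ref = NULL;
	return unshare(frag);	/* original may be freed in here */
}
```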
+diff --git a/scripts/setlocalversion b/scripts/setlocalversion
+index 20f2efd57b11a..bb709eda96cdf 100755
+--- a/scripts/setlocalversion
++++ b/scripts/setlocalversion
+@@ -45,7 +45,7 @@ scm_version()
+
+ # Check for git and a git repo.
+ if test -z "$(git rev-parse --show-cdup 2>/dev/null)" &&
+- head=$(git rev-parse --verify --short HEAD 2>/dev/null); then
++ head=$(git rev-parse --verify HEAD 2>/dev/null); then
+
+ # If we are at a tagged commit (like "v2.6.30-rc6"), we ignore
+ # it, because this version is defined in the top level Makefile.
+@@ -59,11 +59,22 @@ scm_version()
+ fi
+ # If we are past a tagged commit (like
+ # "v2.6.30-rc5-302-g72357d5"), we pretty print it.
+- if atag="$(git describe 2>/dev/null)"; then
+- echo "$atag" | awk -F- '{printf("-%05d-%s", $(NF-1),$(NF))}'
+-
+- # If we don't have a tag at all we print -g{commitish}.
++ #
++ # Ensure the abbreviated sha1 has exactly 12
++ # hex characters, to make the output
++ # independent of git version, local
++ # core.abbrev settings and/or total number of
++ # objects in the current repository - passing
++ # --abbrev=12 ensures a minimum of 12, and the
++ # awk substr() then picks the 'g' and first 12
++ # hex chars.
++ if atag="$(git describe --abbrev=12 2>/dev/null)"; then
++ echo "$atag" | awk -F- '{printf("-%05d-%s", $(NF-1),substr($(NF),0,13))}'
++
++ # If we don't have a tag at all we print -g{commitish},
++ # again using exactly 12 hex chars.
+ else
++ head="$(echo $head | cut -c1-12)"
+ printf '%s%s' -g $head
+ fi
+ fi
+diff --git a/security/integrity/evm/evm_main.c b/security/integrity/evm/evm_main.c
+index 0d36259b690df..e4b47759ba1ca 100644
+--- a/security/integrity/evm/evm_main.c
++++ b/security/integrity/evm/evm_main.c
+@@ -181,6 +181,12 @@ static enum integrity_status evm_verify_hmac(struct dentry *dentry,
+ break;
+ case EVM_IMA_XATTR_DIGSIG:
+ case EVM_XATTR_PORTABLE_DIGSIG:
++ /* accept xattr with non-empty signature field */
++ if (xattr_len <= sizeof(struct signature_v2_hdr)) {
++ evm_status = INTEGRITY_FAIL;
++ goto out;
++ }
++
+ hdr = (struct signature_v2_hdr *)xattr_data;
+ digest.hdr.algo = hdr->hash_algo;
+ rc = evm_calc_hash(dentry, xattr_name, xattr_value,
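[Aside, not part of the patch: the new EVM check above rejects any xattr whose length is not strictly greater than the header — i.e. whose signature payload would be empty — before the buffer is reinterpreted as a header struct. A plain-C sketch of that bounds check; the struct layout below is illustrative, not the kernel's signature_v2_hdr:]

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Illustrative packed header; the real signature_v2_hdr differs. */
struct sig_hdr {
	uint8_t  type;
	uint8_t  version;
	uint8_t  hash_algo;
	uint16_t sig_size;
} __attribute__((packed));

/* Accept only xattrs with a non-empty signature after the header,
 * mirroring the `xattr_len <= sizeof(struct signature_v2_hdr)` ->
 * INTEGRITY_FAIL test: validate the length before casting the buffer. */
static int sig_len_ok(size_t xattr_len)
{
	return xattr_len > sizeof(struct sig_hdr);
}
```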
+diff --git a/tools/arch/x86/include/asm/mcsafe_test.h b/tools/arch/x86/include/asm/mcsafe_test.h
+deleted file mode 100644
+index 2ccd588fbad45..0000000000000
+--- a/tools/arch/x86/include/asm/mcsafe_test.h
++++ /dev/null
+@@ -1,13 +0,0 @@
+-/* SPDX-License-Identifier: GPL-2.0 */
+-#ifndef _MCSAFE_TEST_H_
+-#define _MCSAFE_TEST_H_
+-
+-.macro MCSAFE_TEST_CTL
+-.endm
+-
+-.macro MCSAFE_TEST_SRC reg count target
+-.endm
+-
+-.macro MCSAFE_TEST_DST reg count target
+-.endm
+-#endif /* _MCSAFE_TEST_H_ */
+diff --git a/tools/arch/x86/lib/memcpy_64.S b/tools/arch/x86/lib/memcpy_64.S
+index 45f8e1b02241f..0b5b8ae56bd91 100644
+--- a/tools/arch/x86/lib/memcpy_64.S
++++ b/tools/arch/x86/lib/memcpy_64.S
+@@ -4,7 +4,6 @@
+ #include <linux/linkage.h>
+ #include <asm/errno.h>
+ #include <asm/cpufeatures.h>
+-#include <asm/mcsafe_test.h>
+ #include <asm/alternative-asm.h>
+ #include <asm/export.h>
+
+@@ -187,117 +186,3 @@ SYM_FUNC_START(memcpy_orig)
+ SYM_FUNC_END(memcpy_orig)
+
+ .popsection
+-
+-#ifndef CONFIG_UML
+-
+-MCSAFE_TEST_CTL
+-
+-/*
+- * __memcpy_mcsafe - memory copy with machine check exception handling
+- * Note that we only catch machine checks when reading the source addresses.
+- * Writes to target are posted and don't generate machine checks.
+- */
+-SYM_FUNC_START(__memcpy_mcsafe)
+- cmpl $8, %edx
+- /* Less than 8 bytes? Go to byte copy loop */
+- jb .L_no_whole_words
+-
+- /* Check for bad alignment of source */
+- testl $7, %esi
+- /* Already aligned */
+- jz .L_8byte_aligned
+-
+- /* Copy one byte at a time until source is 8-byte aligned */
+- movl %esi, %ecx
+- andl $7, %ecx
+- subl $8, %ecx
+- negl %ecx
+- subl %ecx, %edx
+-.L_read_leading_bytes:
+- movb (%rsi), %al
+- MCSAFE_TEST_SRC %rsi 1 .E_leading_bytes
+- MCSAFE_TEST_DST %rdi 1 .E_leading_bytes
+-.L_write_leading_bytes:
+- movb %al, (%rdi)
+- incq %rsi
+- incq %rdi
+- decl %ecx
+- jnz .L_read_leading_bytes
+-
+-.L_8byte_aligned:
+- movl %edx, %ecx
+- andl $7, %edx
+- shrl $3, %ecx
+- jz .L_no_whole_words
+-
+-.L_read_words:
+- movq (%rsi), %r8
+- MCSAFE_TEST_SRC %rsi 8 .E_read_words
+- MCSAFE_TEST_DST %rdi 8 .E_write_words
+-.L_write_words:
+- movq %r8, (%rdi)
+- addq $8, %rsi
+- addq $8, %rdi
+- decl %ecx
+- jnz .L_read_words
+-
+- /* Any trailing bytes? */
+-.L_no_whole_words:
+- andl %edx, %edx
+- jz .L_done_memcpy_trap
+-
+- /* Copy trailing bytes */
+- movl %edx, %ecx
+-.L_read_trailing_bytes:
+- movb (%rsi), %al
+- MCSAFE_TEST_SRC %rsi 1 .E_trailing_bytes
+- MCSAFE_TEST_DST %rdi 1 .E_trailing_bytes
+-.L_write_trailing_bytes:
+- movb %al, (%rdi)
+- incq %rsi
+- incq %rdi
+- decl %ecx
+- jnz .L_read_trailing_bytes
+-
+- /* Copy successful. Return zero */
+-.L_done_memcpy_trap:
+- xorl %eax, %eax
+-.L_done:
+- ret
+-SYM_FUNC_END(__memcpy_mcsafe)
+-EXPORT_SYMBOL_GPL(__memcpy_mcsafe)
+-
+- .section .fixup, "ax"
+- /*
+- * Return number of bytes not copied for any failure. Note that
+- * there is no "tail" handling since the source buffer is 8-byte
+- * aligned and poison is cacheline aligned.
+- */
+-.E_read_words:
+- shll $3, %ecx
+-.E_leading_bytes:
+- addl %edx, %ecx
+-.E_trailing_bytes:
+- mov %ecx, %eax
+- jmp .L_done
+-
+- /*
+- * For write fault handling, given the destination is unaligned,
+- * we handle faults on multi-byte writes with a byte-by-byte
+- * copy up to the write-protected page.
+- */
+-.E_write_words:
+- shll $3, %ecx
+- addl %edx, %ecx
+- movl %ecx, %edx
+- jmp mcsafe_handle_tail
+-
+- .previous
+-
+- _ASM_EXTABLE_FAULT(.L_read_leading_bytes, .E_leading_bytes)
+- _ASM_EXTABLE_FAULT(.L_read_words, .E_read_words)
+- _ASM_EXTABLE_FAULT(.L_read_trailing_bytes, .E_trailing_bytes)
+- _ASM_EXTABLE(.L_write_leading_bytes, .E_leading_bytes)
+- _ASM_EXTABLE(.L_write_words, .E_write_words)
+- _ASM_EXTABLE(.L_write_trailing_bytes, .E_trailing_bytes)
+-#endif
+diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
+index a3fd55194e0b1..7bffadcfd6eb0 100644
+--- a/tools/include/uapi/linux/bpf.h
++++ b/tools/include/uapi/linux/bpf.h
+@@ -1416,8 +1416,8 @@ union bpf_attr {
+ * Return
+ * The return value depends on the result of the test, and can be:
+ *
+- * * 0, if the *skb* task belongs to the cgroup2.
+- * * 1, if the *skb* task does not belong to the cgroup2.
++ * * 0, if current task belongs to the cgroup2.
++ * * 1, if current task does not belong to the cgroup2.
+ * * A negative error code, if an error occurred.
+ *
+ * int bpf_skb_change_tail(struct sk_buff *skb, u32 len, u64 flags)
+diff --git a/tools/objtool/check.c b/tools/objtool/check.c
+index 773e6c7ee5f93..0ed92c3b19266 100644
+--- a/tools/objtool/check.c
++++ b/tools/objtool/check.c
+@@ -548,8 +548,9 @@ static const char *uaccess_safe_builtin[] = {
+ "__ubsan_handle_shift_out_of_bounds",
+ /* misc */
+ "csum_partial_copy_generic",
+- "__memcpy_mcsafe",
+- "mcsafe_handle_tail",
++ "copy_mc_fragile",
++ "copy_mc_fragile_handle_tail",
++ "copy_mc_enhanced_fast_string",
+ "ftrace_likely_update", /* CONFIG_TRACE_BRANCH_PROFILING */
+ NULL
+ };
+diff --git a/tools/perf/bench/Build b/tools/perf/bench/Build
+index 768e408757a05..5352303518e1f 100644
+--- a/tools/perf/bench/Build
++++ b/tools/perf/bench/Build
+@@ -11,7 +11,6 @@ perf-y += epoll-ctl.o
+ perf-y += synthesize.o
+ perf-y += kallsyms-parse.o
+
+-perf-$(CONFIG_X86_64) += mem-memcpy-x86-64-lib.o
+ perf-$(CONFIG_X86_64) += mem-memcpy-x86-64-asm.o
+ perf-$(CONFIG_X86_64) += mem-memset-x86-64-asm.o
+
+diff --git a/tools/perf/bench/mem-memcpy-x86-64-lib.c b/tools/perf/bench/mem-memcpy-x86-64-lib.c
+deleted file mode 100644
+index 4130734dde84b..0000000000000
+--- a/tools/perf/bench/mem-memcpy-x86-64-lib.c
++++ /dev/null
+@@ -1,24 +0,0 @@
+-/*
+- * From code in arch/x86/lib/usercopy_64.c, copied to keep tools/ copy
+- * of the kernel's arch/x86/lib/memcpy_64.s used in 'perf bench mem memcpy'
+- * happy.
+- */
+-#include <linux/types.h>
+-
+-unsigned long __memcpy_mcsafe(void *dst, const void *src, size_t cnt);
+-unsigned long mcsafe_handle_tail(char *to, char *from, unsigned len);
+-
+-unsigned long mcsafe_handle_tail(char *to, char *from, unsigned len)
+-{
+- for (; len; --len, to++, from++) {
+- /*
+- * Call the assembly routine back directly since
+- * memcpy_mcsafe() may silently fallback to memcpy.
+- */
+- unsigned long rem = __memcpy_mcsafe(to, from, 1);
+-
+- if (rem)
+- break;
+- }
+- return len;
+-}
+diff --git a/tools/testing/nvdimm/test/nfit.c b/tools/testing/nvdimm/test/nfit.c
+index a8ee5c4d41ebb..50a390d87db26 100644
+--- a/tools/testing/nvdimm/test/nfit.c
++++ b/tools/testing/nvdimm/test/nfit.c
+@@ -23,7 +23,8 @@
+ #include "nfit_test.h"
+ #include "../watermark.h"
+
+-#include <asm/mcsafe_test.h>
++#include <asm/copy_mc_test.h>
++#include <asm/mce.h>
+
+ /*
+ * Generate an NFIT table to describe the following topology:
+@@ -3052,7 +3053,7 @@ static struct platform_driver nfit_test_driver = {
+ .id_table = nfit_test_id,
+ };
+
+-static char mcsafe_buf[PAGE_SIZE] __attribute__((__aligned__(PAGE_SIZE)));
++static char copy_mc_buf[PAGE_SIZE] __attribute__((__aligned__(PAGE_SIZE)));
+
+ enum INJECT {
+ INJECT_NONE,
+@@ -3060,7 +3061,7 @@ enum INJECT {
+ INJECT_DST,
+ };
+
+-static void mcsafe_test_init(char *dst, char *src, size_t size)
++static void copy_mc_test_init(char *dst, char *src, size_t size)
+ {
+ size_t i;
+
+@@ -3069,7 +3070,7 @@ static void mcsafe_test_init(char *dst, char *src, size_t size)
+ src[i] = (char) i;
+ }
+
+-static bool mcsafe_test_validate(unsigned char *dst, unsigned char *src,
++static bool copy_mc_test_validate(unsigned char *dst, unsigned char *src,
+ size_t size, unsigned long rem)
+ {
+ size_t i;
+@@ -3090,12 +3091,12 @@ static bool mcsafe_test_validate(unsigned char *dst, unsigned char *src,
+ return true;
+ }
+
+-void mcsafe_test(void)
++void copy_mc_test(void)
+ {
+ char *inject_desc[] = { "none", "source", "destination" };
+ enum INJECT inj;
+
+- if (IS_ENABLED(CONFIG_MCSAFE_TEST)) {
++ if (IS_ENABLED(CONFIG_COPY_MC_TEST)) {
+ pr_info("%s: run...\n", __func__);
+ } else {
+ pr_info("%s: disabled, skip.\n", __func__);
+@@ -3113,31 +3114,31 @@ void mcsafe_test(void)
+
+ switch (inj) {
+ case INJECT_NONE:
+- mcsafe_inject_src(NULL);
+- mcsafe_inject_dst(NULL);
+- dst = &mcsafe_buf[2048];
+- src = &mcsafe_buf[1024 - i];
++ copy_mc_inject_src(NULL);
++ copy_mc_inject_dst(NULL);
++ dst = &copy_mc_buf[2048];
++ src = &copy_mc_buf[1024 - i];
+ expect = 0;
+ break;
+ case INJECT_SRC:
+- mcsafe_inject_src(&mcsafe_buf[1024]);
+- mcsafe_inject_dst(NULL);
+- dst = &mcsafe_buf[2048];
+- src = &mcsafe_buf[1024 - i];
++ copy_mc_inject_src(&copy_mc_buf[1024]);
++ copy_mc_inject_dst(NULL);
++ dst = &copy_mc_buf[2048];
++ src = &copy_mc_buf[1024 - i];
+ expect = 512 - i;
+ break;
+ case INJECT_DST:
+- mcsafe_inject_src(NULL);
+- mcsafe_inject_dst(&mcsafe_buf[2048]);
+- dst = &mcsafe_buf[2048 - i];
+- src = &mcsafe_buf[1024];
++ copy_mc_inject_src(NULL);
++ copy_mc_inject_dst(&copy_mc_buf[2048]);
++ dst = &copy_mc_buf[2048 - i];
++ src = &copy_mc_buf[1024];
+ expect = 512 - i;
+ break;
+ }
+
+- mcsafe_test_init(dst, src, 512);
+- rem = __memcpy_mcsafe(dst, src, 512);
+- valid = mcsafe_test_validate(dst, src, 512, expect);
++ copy_mc_test_init(dst, src, 512);
++ rem = copy_mc_fragile(dst, src, 512);
++ valid = copy_mc_test_validate(dst, src, 512, expect);
+ if (rem == expect && valid)
+ continue;
+ pr_info("%s: copy(%#lx, %#lx, %d) off: %d rem: %ld %s expect: %ld\n",
+@@ -3149,8 +3150,8 @@ void mcsafe_test(void)
+ }
+ }
+
+- mcsafe_inject_src(NULL);
+- mcsafe_inject_dst(NULL);
++ copy_mc_inject_src(NULL);
++ copy_mc_inject_dst(NULL);
+ }
+
+ static __init int nfit_test_init(void)
+@@ -3161,7 +3162,7 @@ static __init int nfit_test_init(void)
+ libnvdimm_test();
+ acpi_nfit_test();
+ device_dax_test();
+- mcsafe_test();
++ copy_mc_test();
+ dax_pmem_test();
+ dax_pmem_core_test();
+ #ifdef CONFIG_DEV_DAX_PMEM_COMPAT
+diff --git a/tools/testing/selftests/powerpc/copyloops/.gitignore b/tools/testing/selftests/powerpc/copyloops/.gitignore
+index ddaf140b82553..994b11af765ce 100644
+--- a/tools/testing/selftests/powerpc/copyloops/.gitignore
++++ b/tools/testing/selftests/powerpc/copyloops/.gitignore
+@@ -12,4 +12,4 @@ memcpy_p7_t1
+ copyuser_64_exc_t0
+ copyuser_64_exc_t1
+ copyuser_64_exc_t2
+-memcpy_mcsafe_64
++copy_mc_64
+diff --git a/tools/testing/selftests/powerpc/copyloops/Makefile b/tools/testing/selftests/powerpc/copyloops/Makefile
+index 0917983a1c781..3095b1f1c02b3 100644
+--- a/tools/testing/selftests/powerpc/copyloops/Makefile
++++ b/tools/testing/selftests/powerpc/copyloops/Makefile
+@@ -12,7 +12,7 @@ ASFLAGS = $(CFLAGS) -Wa,-mpower4
+ TEST_GEN_PROGS := copyuser_64_t0 copyuser_64_t1 copyuser_64_t2 \
+ copyuser_p7_t0 copyuser_p7_t1 \
+ memcpy_64_t0 memcpy_64_t1 memcpy_64_t2 \
+- memcpy_p7_t0 memcpy_p7_t1 memcpy_mcsafe_64 \
++ memcpy_p7_t0 memcpy_p7_t1 copy_mc_64 \
+ copyuser_64_exc_t0 copyuser_64_exc_t1 copyuser_64_exc_t2
+
+ EXTRA_SOURCES := validate.c ../harness.c stubs.S
+@@ -45,9 +45,9 @@ $(OUTPUT)/memcpy_p7_t%: memcpy_power7.S $(EXTRA_SOURCES)
+ -D SELFTEST_CASE=$(subst memcpy_p7_t,,$(notdir $@)) \
+ -o $@ $^
+
+-$(OUTPUT)/memcpy_mcsafe_64: memcpy_mcsafe_64.S $(EXTRA_SOURCES)
++$(OUTPUT)/copy_mc_64: copy_mc_64.S $(EXTRA_SOURCES)
+ $(CC) $(CPPFLAGS) $(CFLAGS) \
+- -D COPY_LOOP=test_memcpy_mcsafe \
++ -D COPY_LOOP=test_copy_mc_generic \
+ -o $@ $^
+
+ $(OUTPUT)/copyuser_64_exc_t%: copyuser_64.S exc_validate.c ../harness.c \
+diff --git a/tools/testing/selftests/powerpc/copyloops/copy_mc_64.S b/tools/testing/selftests/powerpc/copyloops/copy_mc_64.S
+new file mode 100644
+index 0000000000000..88d46c471493b
+--- /dev/null
++++ b/tools/testing/selftests/powerpc/copyloops/copy_mc_64.S
+@@ -0,0 +1,242 @@
++/* SPDX-License-Identifier: GPL-2.0 */
++/*
++ * Copyright (C) IBM Corporation, 2011
++ * Derived from copyuser_power7.s by Anton Blanchard <anton@au.ibm.com>
++ * Author - Balbir Singh <bsingharora@gmail.com>
++ */
++#include <asm/ppc_asm.h>
++#include <asm/errno.h>
++#include <asm/export.h>
++
++ .macro err1
++100:
++ EX_TABLE(100b,.Ldo_err1)
++ .endm
++
++ .macro err2
++200:
++ EX_TABLE(200b,.Ldo_err2)
++ .endm
++
++ .macro err3
++300: EX_TABLE(300b,.Ldone)
++ .endm
++
++.Ldo_err2:
++ ld r22,STK_REG(R22)(r1)
++ ld r21,STK_REG(R21)(r1)
++ ld r20,STK_REG(R20)(r1)
++ ld r19,STK_REG(R19)(r1)
++ ld r18,STK_REG(R18)(r1)
++ ld r17,STK_REG(R17)(r1)
++ ld r16,STK_REG(R16)(r1)
++ ld r15,STK_REG(R15)(r1)
++ ld r14,STK_REG(R14)(r1)
++ addi r1,r1,STACKFRAMESIZE
++.Ldo_err1:
++ /* Do a byte by byte copy to get the exact remaining size */
++ mtctr r7
++46:
++err3; lbz r0,0(r4)
++ addi r4,r4,1
++err3; stb r0,0(r3)
++ addi r3,r3,1
++ bdnz 46b
++ li r3,0
++ blr
++
++.Ldone:
++ mfctr r3
++ blr
++
++
++_GLOBAL(copy_mc_generic)
++ mr r7,r5
++ cmpldi r5,16
++ blt .Lshort_copy
++
++.Lcopy:
++ /* Get the source 8B aligned */
++ neg r6,r4
++ mtocrf 0x01,r6
++ clrldi r6,r6,(64-3)
++
++ bf cr7*4+3,1f
++err1; lbz r0,0(r4)
++ addi r4,r4,1
++err1; stb r0,0(r3)
++ addi r3,r3,1
++ subi r7,r7,1
++
++1: bf cr7*4+2,2f
++err1; lhz r0,0(r4)
++ addi r4,r4,2
++err1; sth r0,0(r3)
++ addi r3,r3,2
++ subi r7,r7,2
++
++2: bf cr7*4+1,3f
++err1; lwz r0,0(r4)
++ addi r4,r4,4
++err1; stw r0,0(r3)
++ addi r3,r3,4
++ subi r7,r7,4
++
++3: sub r5,r5,r6
++ cmpldi r5,128
++
++ mflr r0
++ stdu r1,-STACKFRAMESIZE(r1)
++ std r14,STK_REG(R14)(r1)
++ std r15,STK_REG(R15)(r1)
++ std r16,STK_REG(R16)(r1)
++ std r17,STK_REG(R17)(r1)
++ std r18,STK_REG(R18)(r1)
++ std r19,STK_REG(R19)(r1)
++ std r20,STK_REG(R20)(r1)
++ std r21,STK_REG(R21)(r1)
++ std r22,STK_REG(R22)(r1)
++ std r0,STACKFRAMESIZE+16(r1)
++
++ blt 5f
++ srdi r6,r5,7
++ mtctr r6
++
++ /* Now do cacheline (128B) sized loads and stores. */
++ .align 5
++4:
++err2; ld r0,0(r4)
++err2; ld r6,8(r4)
++err2; ld r8,16(r4)
++err2; ld r9,24(r4)
++err2; ld r10,32(r4)
++err2; ld r11,40(r4)
++err2; ld r12,48(r4)
++err2; ld r14,56(r4)
++err2; ld r15,64(r4)
++err2; ld r16,72(r4)
++err2; ld r17,80(r4)
++err2; ld r18,88(r4)
++err2; ld r19,96(r4)
++err2; ld r20,104(r4)
++err2; ld r21,112(r4)
++err2; ld r22,120(r4)
++ addi r4,r4,128
++err2; std r0,0(r3)
++err2; std r6,8(r3)
++err2; std r8,16(r3)
++err2; std r9,24(r3)
++err2; std r10,32(r3)
++err2; std r11,40(r3)
++err2; std r12,48(r3)
++err2; std r14,56(r3)
++err2; std r15,64(r3)
++err2; std r16,72(r3)
++err2; std r17,80(r3)
++err2; std r18,88(r3)
++err2; std r19,96(r3)
++err2; std r20,104(r3)
++err2; std r21,112(r3)
++err2; std r22,120(r3)
++ addi r3,r3,128
++ subi r7,r7,128
++ bdnz 4b
++
++ clrldi r5,r5,(64-7)
++
++ /* Up to 127B to go */
++5: srdi r6,r5,4
++ mtocrf 0x01,r6
++
++6: bf cr7*4+1,7f
++err2; ld r0,0(r4)
++err2; ld r6,8(r4)
++err2; ld r8,16(r4)
++err2; ld r9,24(r4)
++err2; ld r10,32(r4)
++err2; ld r11,40(r4)
++err2; ld r12,48(r4)
++err2; ld r14,56(r4)
++ addi r4,r4,64
++err2; std r0,0(r3)
++err2; std r6,8(r3)
++err2; std r8,16(r3)
++err2; std r9,24(r3)
++err2; std r10,32(r3)
++err2; std r11,40(r3)
++err2; std r12,48(r3)
++err2; std r14,56(r3)
++ addi r3,r3,64
++ subi r7,r7,64
++
++7: ld r14,STK_REG(R14)(r1)
++ ld r15,STK_REG(R15)(r1)
++ ld r16,STK_REG(R16)(r1)
++ ld r17,STK_REG(R17)(r1)
++ ld r18,STK_REG(R18)(r1)
++ ld r19,STK_REG(R19)(r1)
++ ld r20,STK_REG(R20)(r1)
++ ld r21,STK_REG(R21)(r1)
++ ld r22,STK_REG(R22)(r1)
++ addi r1,r1,STACKFRAMESIZE
++
++ /* Up to 63B to go */
++ bf cr7*4+2,8f
++err1; ld r0,0(r4)
++err1; ld r6,8(r4)
++err1; ld r8,16(r4)
++err1; ld r9,24(r4)
++ addi r4,r4,32
++err1; std r0,0(r3)
++err1; std r6,8(r3)
++err1; std r8,16(r3)
++err1; std r9,24(r3)
++ addi r3,r3,32
++ subi r7,r7,32
++
++ /* Up to 31B to go */
++8: bf cr7*4+3,9f
++err1; ld r0,0(r4)
++err1; ld r6,8(r4)
++ addi r4,r4,16
++err1; std r0,0(r3)
++err1; std r6,8(r3)
++ addi r3,r3,16
++ subi r7,r7,16
++
++9: clrldi r5,r5,(64-4)
++
++ /* Up to 15B to go */
++.Lshort_copy:
++ mtocrf 0x01,r5
++ bf cr7*4+0,12f
++err1; lwz r0,0(r4) /* Less chance of a reject with word ops */
++err1; lwz r6,4(r4)
++ addi r4,r4,8
++err1; stw r0,0(r3)
++err1; stw r6,4(r3)
++ addi r3,r3,8
++ subi r7,r7,8
++
++12: bf cr7*4+1,13f
++err1; lwz r0,0(r4)
++ addi r4,r4,4
++err1; stw r0,0(r3)
++ addi r3,r3,4
++ subi r7,r7,4
++
++13: bf cr7*4+2,14f
++err1; lhz r0,0(r4)
++ addi r4,r4,2
++err1; sth r0,0(r3)
++ addi r3,r3,2
++ subi r7,r7,2
++
++14: bf cr7*4+3,15f
++err1; lbz r0,0(r4)
++err1; stb r0,0(r3)
++
++15: li r3,0
++ blr
++
++EXPORT_SYMBOL_GPL(copy_mc_generic);
2020-10-14 20:38 [gentoo-commits] proj/linux-patches:5.8 commit in: / Mike Pagano