* [gentoo-commits] proj/linux-patches:5.7 commit in: /
@ 2020-05-04 20:59 Mike Pagano
0 siblings, 0 replies; 25+ messages in thread
From: Mike Pagano @ 2020-05-04 20:59 UTC (permalink / raw)
To: gentoo-commits
commit: 6664485fa485389a7dc56f27dc52dff43bfb6bbd
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Mon May 4 20:58:59 2020 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Mon May 4 20:58:59 2020 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=6664485f
Adding genpatches. Details in full log:

Support for namespace user.pax.* on tmpfs.
Enable link security restrictions by default.
Bluetooth: Check key sizes only when Secure Simple Pairing is
enabled. See bug #686758.
This hid-apple patch enables swapping of the FN and left Control
keys and some additional keys on some Apple keyboards. See bug #622902.
Add Gentoo Linux support config settings and defaults.
Kernel patch enables gcc >= v9.1 optimizations for additional CPUs.
Patch to tmp513 to require REGMAP_I2C for building.
Add support for ZSTD-compressed kernel and initramfs (use=experimental).
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 56 ++
1500_XATTR_USER_PREFIX.patch | 67 +++
...ble-link-security-restrictions-by-default.patch | 20 +
...zes-only-if-Secure-Simple-Pairing-enabled.patch | 37 ++
2600_enable-key-swapping-for-apple-mac.patch | 114 ++++
...3-Fix-build-issue-by-selecting-CONFIG_REG.patch | 30 +
..._ZSTD-v5-1-8-prepare-zstd-for-preboot-env.patch | 82 +++
...STD-v5-2-8-prepare-xxhash-for-preboot-env.patch | 94 +++
...STD-v5-3-8-add-zstd-support-to-decompress.patch | 422 ++++++++++++++
...-v5-4-8-add-support-for-zstd-compres-kern.patch | 65 +++
...add-support-for-zstd-compressed-initramfs.patch | 50 ++
| 20 +
...v5-7-8-support-for-ZSTD-compressed-kernel.patch | 92 +++
...5-8-8-gitignore-add-ZSTD-compressed-files.patch | 12 +
5012_enable-cpu-optimizations-for-gcc91.patch | 632 +++++++++++++++++++++
15 files changed, 1793 insertions(+)
diff --git a/0000_README b/0000_README
index 9018993..639ad9e 100644
--- a/0000_README
+++ b/0000_README
@@ -43,6 +43,62 @@ EXPERIMENTAL
Individual Patch Descriptions:
--------------------------------------------------------------------------
+Patch: 1500_XATTR_USER_PREFIX.patch
+From: https://bugs.gentoo.org/show_bug.cgi?id=470644
+Desc: Support for namespace user.pax.* on tmpfs.
+
+Patch: 1510_fs-enable-link-security-restrictions-by-default.patch
+From: http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
+Desc: Enable link security restrictions by default.
+
+Patch: 2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch
+From: https://lore.kernel.org/linux-bluetooth/20190522070540.48895-1-marcel@holtmann.org/raw
+Desc: Bluetooth: Check key sizes only when Secure Simple Pairing is enabled. See bug #686758
+
+Patch: 2600_enable-key-swapping-for-apple-mac.patch
+From: https://github.com/free5lot/hid-apple-patched
+Desc: This hid-apple patch enables swapping of the FN and left Control keys and some additional keys on some Apple keyboards. See bug #622902
+
+Patch: 2900_tmp513-Fix-build-issue-by-selecting-CONFIG_REG.patch
+From: https://bugs.gentoo.org/710790
+Desc: tmp513 requires REGMAP_I2C to build. Select it by default in Kconfig. See bug #710790. Thanks to Phil Stracchino
+
Patch: 4567_distro-Gentoo-Kconfig.patch
From: Tom Wijsman <TomWij@gentoo.org>
Desc: Add Gentoo Linux support config settings and defaults.
+
+Patch: 5000_ZSTD-v5-1-8-prepare-zstd-for-preboot-env.patch
+From: https://lkml.org/lkml/2020/4/1/29
+Desc: lib: prepare zstd for preboot environment
+
+Patch: 5001_ZSTD-v5-2-8-prepare-xxhash-for-preboot-env.patch
+From: https://lkml.org/lkml/2020/4/1/29
+Desc: lib: prepare xxhash for preboot environment
+
+Patch: 5002_ZSTD-v5-3-8-add-zstd-support-to-decompress.patch
+From: https://lkml.org/lkml/2020/4/1/29
+Desc: lib: add zstd support to decompress
+
+Patch: 5003_ZSTD-v5-4-8-add-support-for-zstd-compres-kern.patch
+From: https://lkml.org/lkml/2020/4/1/29
+Desc: init: add support for zstd compressed kernel
+
+Patch: 5004_ZSTD-v5-5-8-add-support-for-zstd-compressed-initramfs.patch
+From: https://lkml.org/lkml/2020/4/1/29
+Desc: usr: add support for zstd compressed initramfs
+
+Patch: 5005_ZSTD-v5-6-8-bump-ZO-z-extra-bytes-margin.patch
+From: https://lkml.org/lkml/2020/4/1/29
+Desc: x86: bump ZO_z_extra_bytes margin for zstd
+
+Patch: 5006_ZSTD-v5-7-8-support-for-ZSTD-compressed-kernel.patch
+From: https://lkml.org/lkml/2020/4/1/29
+Desc: x86: Add support for ZSTD compressed kernel
+
+Patch: 5007_ZSTD-v5-8-8-gitignore-add-ZSTD-compressed-files.patch
+From: https://lkml.org/lkml/2020/4/1/29
+Desc: .gitignore: add ZSTD-compressed files
+
+Patch: 5012_enable-cpu-optimizations-for-gcc91.patch
+From: https://github.com/graysky2/kernel_gcc_patch/
+Desc: Kernel patch enables gcc >= v9.1 optimizations for additional CPUs.
diff --git a/1500_XATTR_USER_PREFIX.patch b/1500_XATTR_USER_PREFIX.patch
new file mode 100644
index 0000000..245dcc2
--- /dev/null
+++ b/1500_XATTR_USER_PREFIX.patch
@@ -0,0 +1,67 @@
+From: Anthony G. Basile <blueness@gentoo.org>
+
+This patch adds support for a restricted user-controlled namespace on
+tmpfs filesystem used to house PaX flags. The namespace must be of the
+form user.pax.* and its value cannot exceed a size of 8 bytes.
+
+This is needed on all Gentoo systems so that XATTR_PAX flags
+are preserved for users who might build packages using portage on
+a tmpfs system with a non-hardened kernel and then switch to a
+hardened kernel with XATTR_PAX enabled.
+
+The namespace is added to any user with Extended Attribute support
+enabled for tmpfs. Users who do not enable xattrs will not have
+the XATTR_PAX flags preserved.
+
+diff --git a/include/uapi/linux/xattr.h b/include/uapi/linux/xattr.h
+index 1590c49..5eab462 100644
+--- a/include/uapi/linux/xattr.h
++++ b/include/uapi/linux/xattr.h
+@@ -73,5 +73,9 @@
+ #define XATTR_POSIX_ACL_DEFAULT "posix_acl_default"
+ #define XATTR_NAME_POSIX_ACL_DEFAULT XATTR_SYSTEM_PREFIX XATTR_POSIX_ACL_DEFAULT
+
++/* User namespace */
++#define XATTR_PAX_PREFIX XATTR_USER_PREFIX "pax."
++#define XATTR_PAX_FLAGS_SUFFIX "flags"
++#define XATTR_NAME_PAX_FLAGS XATTR_PAX_PREFIX XATTR_PAX_FLAGS_SUFFIX
+
+ #endif /* _UAPI_LINUX_XATTR_H */
+--- a/mm/shmem.c 2020-05-04 15:30:27.042035334 -0400
++++ b/mm/shmem.c 2020-05-04 15:34:57.013881725 -0400
+@@ -3238,6 +3238,14 @@ static int shmem_xattr_handler_set(const
+ struct shmem_inode_info *info = SHMEM_I(inode);
+
+ name = xattr_full_name(handler, name);
++
++ if (!strncmp(name, XATTR_USER_PREFIX, XATTR_USER_PREFIX_LEN)) {
++ if (strcmp(name, XATTR_NAME_PAX_FLAGS))
++ return -EOPNOTSUPP;
++ if (size > 8)
++ return -EINVAL;
++ }
++
+ return simple_xattr_set(&info->xattrs, name, value, size, flags, NULL);
+ }
+
+@@ -3253,6 +3261,12 @@ static const struct xattr_handler shmem_
+ .set = shmem_xattr_handler_set,
+ };
+
++static const struct xattr_handler shmem_user_xattr_handler = {
++ .prefix = XATTR_USER_PREFIX,
++ .get = shmem_xattr_handler_get,
++ .set = shmem_xattr_handler_set,
++};
++
+ static const struct xattr_handler *shmem_xattr_handlers[] = {
+ #ifdef CONFIG_TMPFS_POSIX_ACL
+ &posix_acl_access_xattr_handler,
+@@ -3260,6 +3274,7 @@ static const struct xattr_handler *shmem
+ #endif
+ &shmem_security_xattr_handler,
+ &shmem_trusted_xattr_handler,
++ &shmem_user_xattr_handler,
+ NULL
+ };
+
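As an aside (not part of the patch), the gatekeeping rule the new shmem_xattr_handler_set() hunk enforces can be restated as a small Python sketch; the function name tmpfs_user_xattr_allowed is hypothetical, but the prefix, attribute name, and 8-byte limit come straight from the hunks above:

```python
# Sketch of the check added to shmem_xattr_handler_set():
# under the user.* namespace, only user.pax.flags is accepted,
# and its value may be at most 8 bytes.
XATTR_USER_PREFIX = "user."
XATTR_NAME_PAX_FLAGS = "user.pax.flags"   # XATTR_PAX_PREFIX + "flags"

def tmpfs_user_xattr_allowed(name: str, value: bytes) -> bool:
    if name.startswith(XATTR_USER_PREFIX):
        if name != XATTR_NAME_PAX_FLAGS:
            return False          # kernel returns -EOPNOTSUPP here
        if len(value) > 8:
            return False          # kernel returns -EINVAL here
    return True                   # handed on to simple_xattr_set()

assert tmpfs_user_xattr_allowed("user.pax.flags", b"pemrs")
assert not tmpfs_user_xattr_allowed("user.other", b"x")
assert not tmpfs_user_xattr_allowed("user.pax.flags", b"123456789")
```

Non-user.* names (e.g. trusted.* or security.*) bypass this check entirely, matching the strncmp() prefix test in the patch.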
diff --git a/1510_fs-enable-link-security-restrictions-by-default.patch b/1510_fs-enable-link-security-restrictions-by-default.patch
new file mode 100644
index 0000000..f0ed144
--- /dev/null
+++ b/1510_fs-enable-link-security-restrictions-by-default.patch
@@ -0,0 +1,20 @@
+From: Ben Hutchings <ben@decadent.org.uk>
+Subject: fs: Enable link security restrictions by default
+Date: Fri, 02 Nov 2012 05:32:06 +0000
+Bug-Debian: https://bugs.debian.org/609455
+Forwarded: not-needed
+This reverts commit 561ec64ae67ef25cac8d72bb9c4bfc955edfd415
+('VFS: don't do protected {sym,hard}links by default').
+--- a/fs/namei.c 2018-09-28 07:56:07.770005006 -0400
++++ b/fs/namei.c 2018-09-28 07:56:43.370349204 -0400
+@@ -885,8 +885,8 @@ static inline void put_link(struct namei
+ path_put(&last->link);
+ }
+
+-int sysctl_protected_symlinks __read_mostly = 0;
+-int sysctl_protected_hardlinks __read_mostly = 0;
++int sysctl_protected_symlinks __read_mostly = 1;
++int sysctl_protected_hardlinks __read_mostly = 1;
+ int sysctl_protected_fifos __read_mostly;
+ int sysctl_protected_regular __read_mostly;
+
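Since this patch only changes the compiled-in defaults, the sysctls remain tunable at runtime; a site that wants to pin the hardened behavior explicitly (or revert it) can do so with a fragment like the following (illustrative path, standard sysctl names):

```conf
# /etc/sysctl.d/99-link-restrictions.conf (example)
# The patch flips the built-in defaults to 1; these knobs
# can still override the defaults at boot or runtime.
fs.protected_symlinks = 1
fs.protected_hardlinks = 1
```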
diff --git a/2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch b/2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch
new file mode 100644
index 0000000..394ad48
--- /dev/null
+++ b/2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch
@@ -0,0 +1,37 @@
+The encryption is only mandatory to be enforced when both sides are using
+Secure Simple Pairing and this means the key size check makes only sense
+in that case.
+
+On legacy Bluetooth 2.0 and earlier devices like mice the encryption was
+optional and thus causing an issue if the key size check is not bound to
+using Secure Simple Pairing.
+
+Fixes: d5bb334a8e17 ("Bluetooth: Align minimum encryption key size for LE and BR/EDR connections")
+Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
+Cc: stable@vger.kernel.org
+---
+ net/bluetooth/hci_conn.c | 9 +++++++--
+ 1 file changed, 7 insertions(+), 2 deletions(-)
+
+diff --git a/net/bluetooth/hci_conn.c b/net/bluetooth/hci_conn.c
+index 3cf0764d5793..7516cdde3373 100644
+--- a/net/bluetooth/hci_conn.c
++++ b/net/bluetooth/hci_conn.c
+@@ -1272,8 +1272,13 @@ int hci_conn_check_link_mode(struct hci_conn *conn)
+ return 0;
+ }
+
+- if (hci_conn_ssp_enabled(conn) &&
+- !test_bit(HCI_CONN_ENCRYPT, &conn->flags))
++ /* If Secure Simple Pairing is not enabled, then legacy connection
++ * setup is used and no encryption or key sizes can be enforced.
++ */
++ if (!hci_conn_ssp_enabled(conn))
++ return 1;
++
++ if (!test_bit(HCI_CONN_ENCRYPT, &conn->flags))
+ return 0;
+
+ /* The minimum encryption key size needs to be enforced by the
+--
+2.20.1
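The reordered control flow of the patched hci_conn_check_link_mode() hunk can be sketched as a pure function (hypothetical name link_mode_allowed; the 7-octet floor is the minimum that commit d5bb334a8e17, which this fixes, enforces):

```python
# Decision order after the patch: legacy (non-SSP) links are
# allowed without encryption checks; SSP links must be encrypted
# and meet the minimum key size.
def link_mode_allowed(ssp_enabled: bool, encrypted: bool,
                      key_size: int, min_key_size: int = 7) -> bool:
    if not ssp_enabled:
        return True       # legacy setup: nothing can be enforced
    if not encrypted:
        return False      # SSP requires encryption
    return key_size >= min_key_size

assert link_mode_allowed(False, False, 0)        # BT 2.0 mouse, allowed
assert not link_mode_allowed(True, False, 16)    # SSP but unencrypted
assert link_mode_allowed(True, True, 16)
```

The key point of the fix is the first branch: before it, unencrypted legacy devices fell through to the key-size check and were rejected.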
diff --git a/2600_enable-key-swapping-for-apple-mac.patch b/2600_enable-key-swapping-for-apple-mac.patch
new file mode 100644
index 0000000..ab228d3
--- /dev/null
+++ b/2600_enable-key-swapping-for-apple-mac.patch
@@ -0,0 +1,114 @@
+--- a/drivers/hid/hid-apple.c
++++ b/drivers/hid/hid-apple.c
+@@ -52,6 +52,22 @@
+ "(For people who want to keep Windows PC keyboard muscle memory. "
+ "[0] = as-is, Mac layout. 1 = swapped, Windows layout.)");
+
++static unsigned int swap_fn_leftctrl;
++module_param(swap_fn_leftctrl, uint, 0644);
++MODULE_PARM_DESC(swap_fn_leftctrl, "Swap the Fn and left Control keys. "
++ "(For people who want to keep PC keyboard muscle memory. "
++ "[0] = as-is, Mac layout, 1 = swapped, PC layout)");
++
++static unsigned int rightalt_as_rightctrl;
++module_param(rightalt_as_rightctrl, uint, 0644);
++MODULE_PARM_DESC(rightalt_as_rightctrl, "Use the right Alt key as a right Ctrl key. "
++ "[0] = as-is, Mac layout. 1 = Right Alt is right Ctrl");
++
++static unsigned int ejectcd_as_delete;
++module_param(ejectcd_as_delete, uint, 0644);
++MODULE_PARM_DESC(ejectcd_as_delete, "Use Eject-CD key as Delete key. "
++ "([0] = disabled, 1 = enabled)");
++
+ struct apple_sc {
+ unsigned long quirks;
+ unsigned int fn_on;
+@@ -164,6 +180,21 @@
+ { }
+ };
+
++static const struct apple_key_translation swapped_fn_leftctrl_keys[] = {
++ { KEY_FN, KEY_LEFTCTRL },
++ { }
++};
++
++static const struct apple_key_translation rightalt_as_rightctrl_keys[] = {
++ { KEY_RIGHTALT, KEY_RIGHTCTRL },
++ { }
++};
++
++static const struct apple_key_translation ejectcd_as_delete_keys[] = {
++ { KEY_EJECTCD, KEY_DELETE },
++ { }
++};
++
+ static const struct apple_key_translation *apple_find_translation(
+ const struct apple_key_translation *table, u16 from)
+ {
+@@ -183,9 +214,11 @@
+ struct apple_sc *asc = hid_get_drvdata(hid);
+ const struct apple_key_translation *trans, *table;
+
+- if (usage->code == KEY_FN) {
++ u16 fn_keycode = (swap_fn_leftctrl) ? (KEY_LEFTCTRL) : (KEY_FN);
++
++ if (usage->code == fn_keycode) {
+ asc->fn_on = !!value;
+- input_event(input, usage->type, usage->code, value);
++ input_event(input, usage->type, KEY_FN, value);
+ return 1;
+ }
+
+@@ -264,6 +297,30 @@
+ }
+ }
+
++ if (swap_fn_leftctrl) {
++ trans = apple_find_translation(swapped_fn_leftctrl_keys, usage->code);
++ if (trans) {
++ input_event(input, usage->type, trans->to, value);
++ return 1;
++ }
++ }
++
++ if (ejectcd_as_delete) {
++ trans = apple_find_translation(ejectcd_as_delete_keys, usage->code);
++ if (trans) {
++ input_event(input, usage->type, trans->to, value);
++ return 1;
++ }
++ }
++
++ if (rightalt_as_rightctrl) {
++ trans = apple_find_translation(rightalt_as_rightctrl_keys, usage->code);
++ if (trans) {
++ input_event(input, usage->type, trans->to, value);
++ return 1;
++ }
++ }
++
+ return 0;
+ }
+
+@@ -327,6 +384,21 @@
+
+ for (trans = apple_iso_keyboard; trans->from; trans++)
+ set_bit(trans->to, input->keybit);
++
++ if (swap_fn_leftctrl) {
++ for (trans = swapped_fn_leftctrl_keys; trans->from; trans++)
++ set_bit(trans->to, input->keybit);
++ }
++
++ if (ejectcd_as_delete) {
++ for (trans = ejectcd_as_delete_keys; trans->from; trans++)
++ set_bit(trans->to, input->keybit);
++ }
++
++ if (rightalt_as_rightctrl) {
++ for (trans = rightalt_as_rightctrl_keys; trans->from; trans++)
++ set_bit(trans->to, input->keybit);
++ }
+ }
+
+ static int apple_input_mapping(struct hid_device *hdev, struct hid_input *hi,
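The three new module parameters introduced above can be set persistently through modprobe configuration; the parameter names come from the patch, the file path is an example:

```conf
# /etc/modprobe.d/hid-apple.conf (example)
# Enable the swaps added by 2600_enable-key-swapping-for-apple-mac.patch.
options hid_apple swap_fn_leftctrl=1 rightalt_as_rightctrl=1 ejectcd_as_delete=1
```

Because the parameters are declared with mode 0644, they can also be toggled at runtime via /sys/module/hid_apple/parameters/.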
diff --git a/2900_tmp513-Fix-build-issue-by-selecting-CONFIG_REG.patch b/2900_tmp513-Fix-build-issue-by-selecting-CONFIG_REG.patch
new file mode 100644
index 0000000..4335685
--- /dev/null
+++ b/2900_tmp513-Fix-build-issue-by-selecting-CONFIG_REG.patch
@@ -0,0 +1,30 @@
+From dc328d75a6f37f4ff11a81ae16b1ec88c3197640 Mon Sep 17 00:00:00 2001
+From: Mike Pagano <mpagano@gentoo.org>
+Date: Mon, 23 Mar 2020 08:20:06 -0400
+Subject: [PATCH 1/1] This driver requires REGMAP_I2C to build. Select it by
+ default in Kconfig. Reported at gentoo bugzilla:
+ https://bugs.gentoo.org/710790
+Cc: mpagano@gentoo.org
+
+Reported-by: Phil Stracchino <phils@caerllewys.net>
+
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ drivers/hwmon/Kconfig | 1 +
+ 1 file changed, 1 insertion(+)
+
+diff --git a/drivers/hwmon/Kconfig b/drivers/hwmon/Kconfig
+index 47ac20aee06f..530b4f29ba85 100644
+--- a/drivers/hwmon/Kconfig
++++ b/drivers/hwmon/Kconfig
+@@ -1769,6 +1769,7 @@ config SENSORS_TMP421
+ config SENSORS_TMP513
+ tristate "Texas Instruments TMP513 and compatibles"
+ depends on I2C
++ select REGMAP_I2C
+ help
+ If you say yes here you get support for Texas Instruments TMP512,
+ and TMP513 temperature and power supply sensor chips.
+--
+2.24.1
+
diff --git a/5000_ZSTD-v5-1-8-prepare-zstd-for-preboot-env.patch b/5000_ZSTD-v5-1-8-prepare-zstd-for-preboot-env.patch
new file mode 100644
index 0000000..297a8d4
--- /dev/null
+++ b/5000_ZSTD-v5-1-8-prepare-zstd-for-preboot-env.patch
@@ -0,0 +1,82 @@
+diff --git a/lib/zstd/decompress.c b/lib/zstd/decompress.c
+index 269ee9a796c1..73ded63278cf 100644
+--- a/lib/zstd/decompress.c
++++ b/lib/zstd/decompress.c
+@@ -2490,6 +2490,7 @@ size_t ZSTD_decompressStream(ZSTD_DStream *zds, ZSTD_outBuffer *output, ZSTD_inB
+ }
+ }
+
++#ifndef ZSTD_PREBOOT
+ EXPORT_SYMBOL(ZSTD_DCtxWorkspaceBound);
+ EXPORT_SYMBOL(ZSTD_initDCtx);
+ EXPORT_SYMBOL(ZSTD_decompressDCtx);
+@@ -2529,3 +2530,4 @@ EXPORT_SYMBOL(ZSTD_insertBlock);
+
+ MODULE_LICENSE("Dual BSD/GPL");
+ MODULE_DESCRIPTION("Zstd Decompressor");
++#endif
+diff --git a/lib/zstd/fse_decompress.c b/lib/zstd/fse_decompress.c
+index a84300e5a013..0b353530fb3f 100644
+--- a/lib/zstd/fse_decompress.c
++++ b/lib/zstd/fse_decompress.c
+@@ -47,6 +47,7 @@
+ ****************************************************************/
+ #include "bitstream.h"
+ #include "fse.h"
++#include "zstd_internal.h"
+ #include <linux/compiler.h>
+ #include <linux/kernel.h>
+ #include <linux/string.h> /* memcpy, memset */
+@@ -60,14 +61,6 @@
+ enum { FSE_static_assert = 1 / (int)(!!(c)) }; \
+ } /* use only *after* variable declarations */
+
+-/* check and forward error code */
+-#define CHECK_F(f) \
+- { \
+- size_t const e = f; \
+- if (FSE_isError(e)) \
+- return e; \
+- }
+-
+ /* **************************************************************
+ * Templates
+ ****************************************************************/
+diff --git a/lib/zstd/zstd_internal.h b/lib/zstd/zstd_internal.h
+index 1a79fab9e13a..dac753397f86 100644
+--- a/lib/zstd/zstd_internal.h
++++ b/lib/zstd/zstd_internal.h
+@@ -127,7 +127,14 @@ static const U32 OF_defaultNormLog = OF_DEFAULTNORMLOG;
+ * Shared functions to include for inlining
+ *********************************************/
+ ZSTD_STATIC void ZSTD_copy8(void *dst, const void *src) {
+- memcpy(dst, src, 8);
++ /*
++ * zstd relies heavily on gcc being able to analyze and inline this
++ * memcpy() call, since it is called in a tight loop. Preboot mode
++ * is compiled in freestanding mode, which stops gcc from analyzing
++ * memcpy(). Use __builtin_memcpy() to tell gcc to analyze this as a
++ * regular memcpy().
++ */
++ __builtin_memcpy(dst, src, 8);
+ }
+ /*! ZSTD_wildcopy() :
+ * custom version of memcpy(), can copy up to 7 bytes too many (8 bytes if length==0) */
+@@ -137,13 +144,16 @@ ZSTD_STATIC void ZSTD_wildcopy(void *dst, const void *src, ptrdiff_t length)
+ const BYTE* ip = (const BYTE*)src;
+ BYTE* op = (BYTE*)dst;
+ BYTE* const oend = op + length;
+- /* Work around https://gcc.gnu.org/bugzilla/show_bug.cgi?id=81388.
++#if defined(GCC_VERSION) && GCC_VERSION >= 70000 && GCC_VERSION < 70200
++ /*
++ * Work around https://gcc.gnu.org/bugzilla/show_bug.cgi?id=81388.
+ * Avoid the bad case where the loop only runs once by handling the
+ * special case separately. This doesn't trigger the bug because it
+ * doesn't involve pointer/integer overflow.
+ */
+ if (length <= 8)
+ return ZSTD_copy8(dst, src);
++#endif
+ do {
+ ZSTD_copy8(op, ip);
+ op += 8;
diff --git a/5001_ZSTD-v5-2-8-prepare-xxhash-for-preboot-env.patch b/5001_ZSTD-v5-2-8-prepare-xxhash-for-preboot-env.patch
new file mode 100644
index 0000000..88e4674
--- /dev/null
+++ b/5001_ZSTD-v5-2-8-prepare-xxhash-for-preboot-env.patch
@@ -0,0 +1,94 @@
+diff --git a/lib/xxhash.c b/lib/xxhash.c
+index aa61e2a3802f..b4364e011392 100644
+--- a/lib/xxhash.c
++++ b/lib/xxhash.c
+@@ -80,13 +80,11 @@ void xxh32_copy_state(struct xxh32_state *dst, const struct xxh32_state *src)
+ {
+ memcpy(dst, src, sizeof(*dst));
+ }
+-EXPORT_SYMBOL(xxh32_copy_state);
+
+ void xxh64_copy_state(struct xxh64_state *dst, const struct xxh64_state *src)
+ {
+ memcpy(dst, src, sizeof(*dst));
+ }
+-EXPORT_SYMBOL(xxh64_copy_state);
+
+ /*-***************************
+ * Simple Hash Functions
+@@ -151,7 +149,6 @@ uint32_t xxh32(const void *input, const size_t len, const uint32_t seed)
+
+ return h32;
+ }
+-EXPORT_SYMBOL(xxh32);
+
+ static uint64_t xxh64_round(uint64_t acc, const uint64_t input)
+ {
+@@ -234,7 +231,6 @@ uint64_t xxh64(const void *input, const size_t len, const uint64_t seed)
+
+ return h64;
+ }
+-EXPORT_SYMBOL(xxh64);
+
+ /*-**************************************************
+ * Advanced Hash Functions
+@@ -251,7 +247,6 @@ void xxh32_reset(struct xxh32_state *statePtr, const uint32_t seed)
+ state.v4 = seed - PRIME32_1;
+ memcpy(statePtr, &state, sizeof(state));
+ }
+-EXPORT_SYMBOL(xxh32_reset);
+
+ void xxh64_reset(struct xxh64_state *statePtr, const uint64_t seed)
+ {
+@@ -265,7 +260,6 @@ void xxh64_reset(struct xxh64_state *statePtr, const uint64_t seed)
+ state.v4 = seed - PRIME64_1;
+ memcpy(statePtr, &state, sizeof(state));
+ }
+-EXPORT_SYMBOL(xxh64_reset);
+
+ int xxh32_update(struct xxh32_state *state, const void *input, const size_t len)
+ {
+@@ -334,7 +328,6 @@ int xxh32_update(struct xxh32_state *state, const void *input, const size_t len)
+
+ return 0;
+ }
+-EXPORT_SYMBOL(xxh32_update);
+
+ uint32_t xxh32_digest(const struct xxh32_state *state)
+ {
+@@ -372,7 +365,6 @@ uint32_t xxh32_digest(const struct xxh32_state *state)
+
+ return h32;
+ }
+-EXPORT_SYMBOL(xxh32_digest);
+
+ int xxh64_update(struct xxh64_state *state, const void *input, const size_t len)
+ {
+@@ -439,7 +431,6 @@ int xxh64_update(struct xxh64_state *state, const void *input, const size_t len)
+
+ return 0;
+ }
+-EXPORT_SYMBOL(xxh64_update);
+
+ uint64_t xxh64_digest(const struct xxh64_state *state)
+ {
+@@ -494,7 +485,19 @@ uint64_t xxh64_digest(const struct xxh64_state *state)
+
+ return h64;
+ }
++
++#ifndef XXH_PREBOOT
++EXPORT_SYMBOL(xxh32_copy_state);
++EXPORT_SYMBOL(xxh64_copy_state);
++EXPORT_SYMBOL(xxh32);
++EXPORT_SYMBOL(xxh64);
++EXPORT_SYMBOL(xxh32_reset);
++EXPORT_SYMBOL(xxh64_reset);
++EXPORT_SYMBOL(xxh32_update);
++EXPORT_SYMBOL(xxh32_digest);
++EXPORT_SYMBOL(xxh64_update);
+ EXPORT_SYMBOL(xxh64_digest);
+
+ MODULE_LICENSE("Dual BSD/GPL");
+ MODULE_DESCRIPTION("xxHash");
++#endif
diff --git a/5002_ZSTD-v5-3-8-add-zstd-support-to-decompress.patch b/5002_ZSTD-v5-3-8-add-zstd-support-to-decompress.patch
new file mode 100644
index 0000000..1c22fa3
--- /dev/null
+++ b/5002_ZSTD-v5-3-8-add-zstd-support-to-decompress.patch
@@ -0,0 +1,422 @@
+diff --git a/include/linux/decompress/unzstd.h b/include/linux/decompress/unzstd.h
+new file mode 100644
+index 000000000000..56d539ae880f
+--- /dev/null
++++ b/include/linux/decompress/unzstd.h
+@@ -0,0 +1,11 @@
++/* SPDX-License-Identifier: GPL-2.0 */
++#ifndef LINUX_DECOMPRESS_UNZSTD_H
++#define LINUX_DECOMPRESS_UNZSTD_H
++
++int unzstd(unsigned char *inbuf, long len,
++ long (*fill)(void*, unsigned long),
++ long (*flush)(void*, unsigned long),
++ unsigned char *output,
++ long *pos,
++ void (*error_fn)(char *x));
++#endif
+diff --git a/lib/Kconfig b/lib/Kconfig
+index 5d53f9609c25..e883aecb9279 100644
+--- a/lib/Kconfig
++++ b/lib/Kconfig
+@@ -336,6 +336,10 @@ config DECOMPRESS_LZ4
+ select LZ4_DECOMPRESS
+ tristate
+
++config DECOMPRESS_ZSTD
++ select ZSTD_DECOMPRESS
++ tristate
++
+ #
+ # Generic allocator support is selected if needed
+ #
+diff --git a/lib/Makefile b/lib/Makefile
+index ab68a8674360..3ce4ac296611 100644
+--- a/lib/Makefile
++++ b/lib/Makefile
+@@ -166,6 +166,7 @@ lib-$(CONFIG_DECOMPRESS_LZMA) += decompress_unlzma.o
+ lib-$(CONFIG_DECOMPRESS_XZ) += decompress_unxz.o
+ lib-$(CONFIG_DECOMPRESS_LZO) += decompress_unlzo.o
+ lib-$(CONFIG_DECOMPRESS_LZ4) += decompress_unlz4.o
++lib-$(CONFIG_DECOMPRESS_ZSTD) += decompress_unzstd.o
+
+ obj-$(CONFIG_TEXTSEARCH) += textsearch.o
+ obj-$(CONFIG_TEXTSEARCH_KMP) += ts_kmp.o
+diff --git a/lib/decompress.c b/lib/decompress.c
+index 857ab1af1ef3..ab3fc90ffc64 100644
+--- a/lib/decompress.c
++++ b/lib/decompress.c
+@@ -13,6 +13,7 @@
+ #include <linux/decompress/inflate.h>
+ #include <linux/decompress/unlzo.h>
+ #include <linux/decompress/unlz4.h>
++#include <linux/decompress/unzstd.h>
+
+ #include <linux/types.h>
+ #include <linux/string.h>
+@@ -37,6 +38,9 @@
+ #ifndef CONFIG_DECOMPRESS_LZ4
+ # define unlz4 NULL
+ #endif
++#ifndef CONFIG_DECOMPRESS_ZSTD
++# define unzstd NULL
++#endif
+
+ struct compress_format {
+ unsigned char magic[2];
+@@ -52,6 +56,7 @@ static const struct compress_format compressed_formats[] __initconst = {
+ { {0xfd, 0x37}, "xz", unxz },
+ { {0x89, 0x4c}, "lzo", unlzo },
+ { {0x02, 0x21}, "lz4", unlz4 },
++ { {0x28, 0xb5}, "zstd", unzstd },
+ { {0, 0}, NULL, NULL }
+ };
+
+diff --git a/lib/decompress_unzstd.c b/lib/decompress_unzstd.c
+new file mode 100644
+index 000000000000..f317afab502f
+--- /dev/null
++++ b/lib/decompress_unzstd.c
+@@ -0,0 +1,342 @@
++// SPDX-License-Identifier: GPL-2.0
++
++/*
++ * Important notes about in-place decompression
++ *
++ * At least on x86, the kernel is decompressed in place: the compressed data
++ * is placed to the end of the output buffer, and the decompressor overwrites
++ * most of the compressed data. There must be enough safety margin to
++ * guarantee that the write position is always behind the read position.
++ *
++ * The safety margin for ZSTD with a 128 KB block size is calculated below.
++ * Note that the margin with ZSTD is bigger than with GZIP or XZ!
++ *
++ * The worst case for in-place decompression is that the beginning of
++ * the file is compressed extremely well, and the rest of the file is
++ * uncompressible. Thus, we must look for worst-case expansion when the
++ * compressor is encoding uncompressible data.
++ *
++ * The structure of the .zst file in case of a compressed kernel is as follows.
++ * Maximum sizes (as bytes) of the fields are in parentheses.
++ *
++ * Frame Header: (18)
++ * Blocks: (N)
++ * Checksum: (4)
++ *
++ * The frame header and checksum overhead is at most 22 bytes.
++ *
++ * ZSTD stores the data in blocks. Each block has a header whose size is
++ * a 3 bytes. After the block header, there is up to 128 KB of payload.
++ * The maximum uncompressed size of the payload is 128 KB. The minimum
++ * uncompressed size of the payload is never less than the payload size
++ * (excluding the block header).
++ *
++ * The assumption, that the uncompressed size of the payload is never
++ * smaller than the payload itself, is valid only when talking about
++ * the payload as a whole. It is possible that the payload has parts where
++ * the decompressor consumes more input than it produces output. Calculating
++ * the worst case for this would be tricky. Instead of trying to do that,
++ * let's simply make sure that the decompressor never overwrites any bytes
++ * of the payload which it is currently reading.
++ *
++ * Now we have enough information to calculate the safety margin. We need
++ * - 22 bytes for the .zst file format headers;
++ * - 3 bytes per every 128 KiB of uncompressed size (one block header per
++ * block); and
++ * - 128 KiB (biggest possible zstd block size) to make sure that the
++ * decompressor never overwrites anything from the block it is currently
++ * reading.
++ *
++ * We get the following formula:
++ *
++ * safety_margin = 22 + uncompressed_size * 3 / 131072 + 131072
++ * <= 22 + (uncompressed_size >> 15) + 131072
++ */
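Stepping outside the patch for a moment, the margin derivation above can be checked numerically; the helper names are hypothetical, the constants (22-byte frame overhead, 3-byte block headers, 128 KiB blocks) are those in the comment:

```python
# Exact margin per the comment: frame overhead + one block header
# per 128 KiB of output (rounded up) + one full 128 KiB block.
def zstd_safety_margin(uncompressed_size: int) -> int:
    frame_overhead = 22                                   # header (18) + checksum (4)
    block_headers = 3 * -(-uncompressed_size // 131072)   # ceil-divide
    return frame_overhead + block_headers + 131072

# The cheap shift-based upper bound quoted in the comment.
def zstd_safety_margin_bound(uncompressed_size: int) -> int:
    return 22 + (uncompressed_size >> 15) + 131072

size = 32 * 1024 * 1024   # e.g. a 32 MiB decompressed kernel
assert zstd_safety_margin(size) <= zstd_safety_margin_bound(size)
```

For the 32 MiB example the exact margin is 22 + 3*256 + 131072 = 131862 bytes, comfortably under the shift-based bound of 132118.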
++
++/*
++ * Preboot environments #include "path/to/decompress_unzstd.c".
++ * All of the source files we depend on must be #included.
++ * zstd's only source dependency is xxhash, which has no source
++ * dependencies.
++ *
++ * zstd and xxhash avoid declaring themselves as modules
++ * when ZSTD_PREBOOT and XXH_PREBOOT are defined.
++ */
++#ifdef STATIC
++# define ZSTD_PREBOOT
++# define XXH_PREBOOT
++# include "xxhash.c"
++# include "zstd/entropy_common.c"
++# include "zstd/fse_decompress.c"
++# include "zstd/huf_decompress.c"
++# include "zstd/zstd_common.c"
++# include "zstd/decompress.c"
++#endif
++
++#include <linux/decompress/mm.h>
++#include <linux/kernel.h>
++#include <linux/zstd.h>
++
++/* 128MB is the maximum window size supported by zstd. */
++#define ZSTD_WINDOWSIZE_MAX (1 << ZSTD_WINDOWLOG_MAX)
++/* Size of the input and output buffers in multi-call mode.
++ * Pick a larger size because it isn't used during kernel decompression,
++ * since that is single pass, and we have to allocate a large buffer for
++ * zstd's window anyways. The larger size speeds up initramfs decompression.
++ */
++#define ZSTD_IOBUF_SIZE (1 << 17)
++
++static int INIT handle_zstd_error(size_t ret, void (*error)(char *x))
++{
++ const int err = ZSTD_getErrorCode(ret);
++
++ if (!ZSTD_isError(ret))
++ return 0;
++
++ switch (err) {
++ case ZSTD_error_memory_allocation:
++ error("ZSTD decompressor ran out of memory");
++ break;
++ case ZSTD_error_prefix_unknown:
++ error("Input is not in the ZSTD format (wrong magic bytes)");
++ break;
++ case ZSTD_error_dstSize_tooSmall:
++ case ZSTD_error_corruption_detected:
++ case ZSTD_error_checksum_wrong:
++ error("ZSTD-compressed data is corrupt");
++ break;
++ default:
++ error("ZSTD-compressed data is probably corrupt");
++ break;
++ }
++ return -1;
++}
++
++/*
++ * Handle the case where we have the entire input and output in one segment.
++ * We can allocate less memory (no circular buffer for the sliding window),
++ * and avoid some memcpy() calls.
++ */
++static int INIT decompress_single(const u8 *in_buf, long in_len, u8 *out_buf,
++ long out_len, long *in_pos,
++ void (*error)(char *x))
++{
++ const size_t wksp_size = ZSTD_DCtxWorkspaceBound();
++ void *wksp = large_malloc(wksp_size);
++ ZSTD_DCtx *dctx = ZSTD_initDCtx(wksp, wksp_size);
++ int err;
++ size_t ret;
++
++ if (dctx == NULL) {
++ error("Out of memory while allocating ZSTD_DCtx");
++ err = -1;
++ goto out;
++ }
++ /*
++ * Find out how large the frame actually is, there may be junk at
++ * the end of the frame that ZSTD_decompressDCtx() can't handle.
++ */
++ ret = ZSTD_findFrameCompressedSize(in_buf, in_len);
++ err = handle_zstd_error(ret, error);
++ if (err)
++ goto out;
++ in_len = (long)ret;
++
++ ret = ZSTD_decompressDCtx(dctx, out_buf, out_len, in_buf, in_len);
++ err = handle_zstd_error(ret, error);
++ if (err)
++ goto out;
++
++ if (in_pos != NULL)
++ *in_pos = in_len;
++
++ err = 0;
++out:
++ if (wksp != NULL)
++ large_free(wksp);
++ return err;
++}
++
++static int INIT __unzstd(unsigned char *in_buf, long in_len,
++ long (*fill)(void*, unsigned long),
++ long (*flush)(void*, unsigned long),
++ unsigned char *out_buf, long out_len,
++ long *in_pos,
++ void (*error)(char *x))
++{
++ ZSTD_inBuffer in;
++ ZSTD_outBuffer out;
++ ZSTD_frameParams params;
++ void *in_allocated = NULL;
++ void *out_allocated = NULL;
++ void *wksp = NULL;
++ size_t wksp_size;
++ ZSTD_DStream *dstream;
++ int err;
++ size_t ret;
++
++ if (out_len == 0)
++ out_len = LONG_MAX; /* no limit */
++
++ if (fill == NULL && flush == NULL)
++ /*
++ * We can decompress faster and with less memory when we have a
++ * single chunk.
++ */
++ return decompress_single(in_buf, in_len, out_buf, out_len,
++ in_pos, error);
++
++ /*
++ * If in_buf is not provided, we must be using fill(), so allocate
++ * a large enough buffer. If it is provided, it must be at least
++ * ZSTD_IOBUF_SIZE large.
++ */
++ if (in_buf == NULL) {
++ in_allocated = large_malloc(ZSTD_IOBUF_SIZE);
++ if (in_allocated == NULL) {
++ error("Out of memory while allocating input buffer");
++ err = -1;
++ goto out;
++ }
++ in_buf = in_allocated;
++ in_len = 0;
++ }
++ /* Read the first chunk, since we need to decode the frame header. */
++ if (fill != NULL)
++ in_len = fill(in_buf, ZSTD_IOBUF_SIZE);
++ if (in_len < 0) {
++ error("ZSTD-compressed data is truncated");
++ err = -1;
++ goto out;
++ }
++ /* Set the first non-empty input buffer. */
++ in.src = in_buf;
++ in.pos = 0;
++ in.size = in_len;
++ /* Allocate the output buffer if we are using flush(). */
++ if (flush != NULL) {
++ out_allocated = large_malloc(ZSTD_IOBUF_SIZE);
++ if (out_allocated == NULL) {
++ error("Out of memory while allocating output buffer");
++ err = -1;
++ goto out;
++ }
++ out_buf = out_allocated;
++ out_len = ZSTD_IOBUF_SIZE;
++ }
++ /* Set the output buffer. */
++ out.dst = out_buf;
++ out.pos = 0;
++ out.size = out_len;
++
++ /*
++ * We need to know the window size to allocate the ZSTD_DStream.
++ * Since we are streaming, we need to allocate a buffer for the sliding
++ * window. The window size varies from 1 KB to ZSTD_WINDOWSIZE_MAX
++ * (8 MB), so it is important to use the actual value so as not to
++ * waste memory when it is smaller.
++ */
++ ret = ZSTD_getFrameParams(&params, in.src, in.size);
++ err = handle_zstd_error(ret, error);
++ if (err)
++ goto out;
++ if (ret != 0) {
++ error("ZSTD-compressed data has an incomplete frame header");
++ err = -1;
++ goto out;
++ }
++ if (params.windowSize > ZSTD_WINDOWSIZE_MAX) {
++ error("ZSTD-compressed data has too large a window size");
++ err = -1;
++ goto out;
++ }
++
++ /*
++ * Allocate the ZSTD_DStream now that we know how much memory is
++ * required.
++ */
++ wksp_size = ZSTD_DStreamWorkspaceBound(params.windowSize);
++ wksp = large_malloc(wksp_size);
++ dstream = ZSTD_initDStream(params.windowSize, wksp, wksp_size);
++ if (dstream == NULL) {
++ error("Out of memory while allocating ZSTD_DStream");
++ err = -1;
++ goto out;
++ }
++
++ /*
++ * Decompression loop:
++ * Read more data if necessary (error if no more data can be read).
++ * Call the decompression function, which returns 0 when finished.
++ * Flush any data produced if using flush().
++ */
++ if (in_pos != NULL)
++ *in_pos = 0;
++ do {
++ /*
++ * If we need to reload data, either we have fill() and can
++ * try to get more data, or we don't and the input is truncated.
++ */
++ if (in.pos == in.size) {
++ if (in_pos != NULL)
++ *in_pos += in.pos;
++ in_len = fill ? fill(in_buf, ZSTD_IOBUF_SIZE) : -1;
++ if (in_len < 0) {
++ error("ZSTD-compressed data is truncated");
++ err = -1;
++ goto out;
++ }
++ in.pos = 0;
++ in.size = in_len;
++ }
++ /* Returns zero when the frame is complete. */
++ ret = ZSTD_decompressStream(dstream, &out, &in);
++ err = handle_zstd_error(ret, error);
++ if (err)
++ goto out;
++ /* Flush all of the data produced if using flush(). */
++ if (flush != NULL && out.pos > 0) {
++ if (out.pos != flush(out.dst, out.pos)) {
++ error("Failed to flush()");
++ err = -1;
++ goto out;
++ }
++ out.pos = 0;
++ }
++ } while (ret != 0);
++
++ if (in_pos != NULL)
++ *in_pos += in.pos;
++
++ err = 0;
++out:
++ if (in_allocated != NULL)
++ large_free(in_allocated);
++ if (out_allocated != NULL)
++ large_free(out_allocated);
++ if (wksp != NULL)
++ large_free(wksp);
++ return err;
++}
++
++#ifndef ZSTD_PREBOOT
++STATIC int INIT unzstd(unsigned char *buf, long len,
++ long (*fill)(void*, unsigned long),
++ long (*flush)(void*, unsigned long),
++ unsigned char *out_buf,
++ long *pos,
++ void (*error)(char *x))
++{
++ return __unzstd(buf, len, fill, flush, out_buf, 0, pos, error);
++}
++#else
++STATIC int INIT __decompress(unsigned char *buf, long len,
++ long (*fill)(void*, unsigned long),
++ long (*flush)(void*, unsigned long),
++ unsigned char *out_buf, long out_len,
++ long *pos,
++ void (*error)(char *x))
++{
++ return __unzstd(buf, len, fill, flush, out_buf, out_len, pos, error);
++}
++#endif
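The __unzstd() loop above follows a generic streaming shape: refill the input buffer whenever it is exhausted (erroring out if no more data can be read), call the decompressor one step, and flush whatever output was produced. A minimal Python sketch of that loop shape, using stdlib zlib as a stand-in for zstd (the kernel API differs; fill/flush here are just read/write callables):

```python
import io
import zlib

def stream_decompress(fill, flush, chunk=4096):
    """Same loop shape as __unzstd(): refill input when exhausted,
    decompress one step, flush any output produced."""
    d = zlib.decompressobj()
    buf = b""
    total = 0
    while not d.eof:
        if not buf:
            buf = fill(chunk)
            if not buf:
                raise ValueError("compressed data is truncated")
        out = d.decompress(buf, chunk)  # bounded output, like out.size
        buf = d.unconsumed_tail        # input not yet consumed
        if out:
            flush(out)
            total += len(out)
    return total

src = io.BytesIO(zlib.compress(b"hello " * 1000))
dst = io.BytesIO()
n = stream_decompress(src.read, dst.write)
assert n == 6000 and dst.getvalue() == b"hello " * 1000
```

As in the C code, the input refill and the output flush are independent: either side can be a fixed in-memory buffer (callable is never invoked) or a chunked callback.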
diff --git a/5003_ZSTD-v5-4-8-add-support-for-zstd-compres-kern.patch b/5003_ZSTD-v5-4-8-add-support-for-zstd-compres-kern.patch
new file mode 100644
index 0000000..d9dc79e
--- /dev/null
+++ b/5003_ZSTD-v5-4-8-add-support-for-zstd-compres-kern.patch
@@ -0,0 +1,65 @@
+diff --git a/init/Kconfig b/init/Kconfig
+index 492bb7000aa4..806874fdd663 100644
+--- a/init/Kconfig
++++ b/init/Kconfig
+@@ -176,13 +176,16 @@ config HAVE_KERNEL_LZO
+ config HAVE_KERNEL_LZ4
+ bool
+
++config HAVE_KERNEL_ZSTD
++ bool
++
+ config HAVE_KERNEL_UNCOMPRESSED
+ bool
+
+ choice
+ prompt "Kernel compression mode"
+ default KERNEL_GZIP
+- depends on HAVE_KERNEL_GZIP || HAVE_KERNEL_BZIP2 || HAVE_KERNEL_LZMA || HAVE_KERNEL_XZ || HAVE_KERNEL_LZO || HAVE_KERNEL_LZ4 || HAVE_KERNEL_UNCOMPRESSED
++ depends on HAVE_KERNEL_GZIP || HAVE_KERNEL_BZIP2 || HAVE_KERNEL_LZMA || HAVE_KERNEL_XZ || HAVE_KERNEL_LZO || HAVE_KERNEL_LZ4 || HAVE_KERNEL_ZSTD || HAVE_KERNEL_UNCOMPRESSED
+ help
+ The linux kernel is a kind of self-extracting executable.
+ Several compression algorithms are available, which differ
+@@ -261,6 +264,16 @@ config KERNEL_LZ4
+ is about 8% bigger than LZO. But the decompression speed is
+ faster than LZO.
+
++config KERNEL_ZSTD
++ bool "ZSTD"
++ depends on HAVE_KERNEL_ZSTD
++ help
++ ZSTD is a compression algorithm targeting intermediate compression
++ with fast decompression speed. It will compress better than GZIP and
++ decompress around the same speed as LZO, but slower than LZ4. You
++ will need at least 192 KB RAM or more for booting. The zstd command
++ line tools is required for compression.
++
+ config KERNEL_UNCOMPRESSED
+ bool "None"
+ depends on HAVE_KERNEL_UNCOMPRESSED
+diff --git a/scripts/Makefile.lib b/scripts/Makefile.lib
+index b12dd5ba4896..efe69b78d455 100644
+--- a/scripts/Makefile.lib
++++ b/scripts/Makefile.lib
+@@ -405,6 +405,21 @@ quiet_cmd_xzkern = XZKERN $@
+ quiet_cmd_xzmisc = XZMISC $@
+ cmd_xzmisc = cat $(real-prereqs) | xz --check=crc32 --lzma2=dict=1MiB > $@
+
++# ZSTD
++# ---------------------------------------------------------------------------
++# Appends the uncompressed size of the data using size_append. The .zst
++# format has the size information available at the beginning of the file too,
++# but it's in a more complex format and it's good to avoid changing the part
++# of the boot code that reads the uncompressed size.
++# Note that the bytes added by size_append will make the zstd tool think that
++# the file is corrupt. This is expected.
++
++quiet_cmd_zstd = ZSTD $@
++cmd_zstd = (cat $(filter-out FORCE,$^) | \
++ zstd -19 && \
++ $(call size_append, $(filter-out FORCE,$^))) > $@ || \
++ (rm -f $@ ; false)
++
+ # ASM offsets
+ # ---------------------------------------------------------------------------
+
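The size_append step the Makefile comment refers to can be sketched as follows, under the assumption (stated in the comment) that the uncompressed size is appended after the compressed stream as a 32-bit little-endian integer — which is why the zstd tool then sees trailing bytes it considers corrupt. zlib stands in for the actual zstd payload:

```python
import struct
import zlib

def append_size(compressed: bytes, uncompressed_size: int) -> bytes:
    # size_append sketch: store the uncompressed length as a
    # 32-bit little-endian integer after the compressed stream.
    return compressed + struct.pack("<I", uncompressed_size)

def read_size(blob: bytes) -> int:
    # Boot code reads the trailing 4 bytes to size the output buffer.
    return struct.unpack("<I", blob[-4:])[0]

data = b"\x00" * 1000          # stand-in for vmlinux.bin
comp = zlib.compress(data)     # stand-in for the zstd stream
blob = append_size(comp, len(data))
assert read_size(blob) == 1000
```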
diff --git a/5004_ZSTD-v5-5-8-add-support-for-zstd-compressed-initramfs.patch b/5004_ZSTD-v5-5-8-add-support-for-zstd-compressed-initramfs.patch
new file mode 100644
index 0000000..0096db1
--- /dev/null
+++ b/5004_ZSTD-v5-5-8-add-support-for-zstd-compressed-initramfs.patch
@@ -0,0 +1,50 @@
+diff --git a/usr/Kconfig b/usr/Kconfig
+index 96afb03b65f9..2599bc21c1b2 100644
+--- a/usr/Kconfig
++++ b/usr/Kconfig
+@@ -100,6 +100,15 @@ config RD_LZ4
+ Support loading of a LZ4 encoded initial ramdisk or cpio buffer
+ If unsure, say N.
+
++config RD_ZSTD
++ bool "Support initial ramdisk/ramfs compressed using ZSTD"
++ default y
++ depends on BLK_DEV_INITRD
++ select DECOMPRESS_ZSTD
++ help
++ Support loading of a ZSTD encoded initial ramdisk or cpio buffer.
++ If unsure, say N.
++
+ choice
+ prompt "Built-in initramfs compression mode"
+ depends on INITRAMFS_SOURCE != ""
+@@ -196,6 +205,17 @@ config INITRAMFS_COMPRESSION_LZ4
+ If you choose this, keep in mind that most distros don't provide lz4
+ by default which could cause a build failure.
+
++config INITRAMFS_COMPRESSION_ZSTD
++ bool "ZSTD"
++ depends on RD_ZSTD
++ help
++ ZSTD is a compression algorithm targeting intermediate compression
++ with fast decompression speed. It will compress better than GZIP and
++ decompress around the same speed as LZO, but slower than LZ4.
++
++ If you choose this, keep in mind that you may need to install the zstd
++ tool to be able to compress the initram.
++
+ config INITRAMFS_COMPRESSION_NONE
+ bool "None"
+ help
+diff --git a/usr/Makefile b/usr/Makefile
+index c12e6b15ce72..b1a81a40eab1 100644
+--- a/usr/Makefile
++++ b/usr/Makefile
+@@ -15,6 +15,7 @@ compress-$(CONFIG_INITRAMFS_COMPRESSION_LZMA) := lzma
+ compress-$(CONFIG_INITRAMFS_COMPRESSION_XZ) := xzmisc
+ compress-$(CONFIG_INITRAMFS_COMPRESSION_LZO) := lzo
+ compress-$(CONFIG_INITRAMFS_COMPRESSION_LZ4) := lz4
++compress-$(CONFIG_INITRAMFS_COMPRESSION_ZSTD) := zstd
+
+ obj-$(CONFIG_BLK_DEV_INITRD) := initramfs_data.o
+
diff --git a/5005_ZSTD-v5-6-8-bump-ZO-z-extra-bytes-margin.patch b/5005_ZSTD-v5-6-8-bump-ZO-z-extra-bytes-margin.patch
new file mode 100644
index 0000000..4e86d56
--- /dev/null
+++ b/5005_ZSTD-v5-6-8-bump-ZO-z-extra-bytes-margin.patch
@@ -0,0 +1,20 @@
+diff --git a/arch/x86/boot/header.S b/arch/x86/boot/header.S
+index 735ad7f21ab0..6dbd7e9f74c9 100644
+--- a/arch/x86/boot/header.S
++++ b/arch/x86/boot/header.S
+@@ -539,8 +539,14 @@ pref_address: .quad LOAD_PHYSICAL_ADDR # preferred load addr
+ # the size-dependent part now grows so fast.
+ #
+ # extra_bytes = (uncompressed_size >> 8) + 65536
++#
++# ZSTD compressed data grows by at most 3 bytes per 128K, and only has a 22
++# byte fixed overhead but has a maximum block size of 128K, so it needs a
++# larger margin.
++#
++# extra_bytes = (uncompressed_size >> 8) + 131072
+
+-#define ZO_z_extra_bytes ((ZO_z_output_len >> 8) + 65536)
++#define ZO_z_extra_bytes ((ZO_z_output_len >> 8) + 131072)
+ #if ZO_z_output_len > ZO_z_input_len
+ # define ZO_z_extract_offset (ZO_z_output_len + ZO_z_extra_bytes - \
+ ZO_z_input_len)
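The enlarged margin can be sanity-checked against the worst case stated in the comment (at most 3 bytes of growth per 128 KiB block, plus a 22-byte fixed overhead); a small arithmetic sketch:

```python
def zstd_worst_case_growth(uncompressed_size: int) -> int:
    # Worst case from the patch comment: at most 3 bytes of expansion
    # per 128 KiB block, plus a 22-byte fixed overhead.
    blocks = -(-uncompressed_size // (128 * 1024))  # ceiling division
    return 3 * blocks + 22

def extra_bytes(output_len: int) -> int:
    # New margin: ZO_z_extra_bytes = (ZO_z_output_len >> 8) + 131072
    return (output_len >> 8) + 131072

# (size >> 8) grows much faster than 3 bytes per 128 KiB, and the
# 131072 constant covers the fixed overhead at small sizes.
for size in (0, 1 << 20, 64 << 20, 512 << 20):
    assert extra_bytes(size) >= zstd_worst_case_growth(size)
```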
diff --git a/5006_ZSTD-v5-7-8-support-for-ZSTD-compressed-kernel.patch b/5006_ZSTD-v5-7-8-support-for-ZSTD-compressed-kernel.patch
new file mode 100644
index 0000000..6147136
--- /dev/null
+++ b/5006_ZSTD-v5-7-8-support-for-ZSTD-compressed-kernel.patch
@@ -0,0 +1,92 @@
+diff --git a/Documentation/x86/boot.rst b/Documentation/x86/boot.rst
+index fa7ddc0428c8..0404e99dc1d4 100644
+--- a/Documentation/x86/boot.rst
++++ b/Documentation/x86/boot.rst
+@@ -782,9 +782,9 @@ Protocol: 2.08+
+ uncompressed data should be determined using the standard magic
+ numbers. The currently supported compression formats are gzip
+ (magic numbers 1F 8B or 1F 9E), bzip2 (magic number 42 5A), LZMA
+- (magic number 5D 00), XZ (magic number FD 37), and LZ4 (magic number
+- 02 21). The uncompressed payload is currently always ELF (magic
+- number 7F 45 4C 46).
++ (magic number 5D 00), XZ (magic number FD 37), LZ4 (magic number
++ 02 21) and ZSTD (magic number 28 B5). The uncompressed payload is
++ currently always ELF (magic number 7F 45 4C 46).
+
+ ============ ==============
+ Field name: payload_length
+diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
+index 886fa8368256..912f783bc01a 100644
+--- a/arch/x86/Kconfig
++++ b/arch/x86/Kconfig
+@@ -185,6 +185,7 @@ config X86
+ select HAVE_KERNEL_LZMA
+ select HAVE_KERNEL_LZO
+ select HAVE_KERNEL_XZ
++ select HAVE_KERNEL_ZSTD
+ select HAVE_KPROBES
+ select HAVE_KPROBES_ON_FTRACE
+ select HAVE_FUNCTION_ERROR_INJECTION
+diff --git a/arch/x86/boot/compressed/Makefile b/arch/x86/boot/compressed/Makefile
+index 7619742f91c9..471e61400a2e 100644
+--- a/arch/x86/boot/compressed/Makefile
++++ b/arch/x86/boot/compressed/Makefile
+@@ -26,7 +26,7 @@ OBJECT_FILES_NON_STANDARD := y
+ KCOV_INSTRUMENT := n
+
+ targets := vmlinux vmlinux.bin vmlinux.bin.gz vmlinux.bin.bz2 vmlinux.bin.lzma \
+- vmlinux.bin.xz vmlinux.bin.lzo vmlinux.bin.lz4
++ vmlinux.bin.xz vmlinux.bin.lzo vmlinux.bin.lz4 vmlinux.bin.zst
+
+ KBUILD_CFLAGS := -m$(BITS) -O2
+ KBUILD_CFLAGS += -fno-strict-aliasing $(call cc-option, -fPIE, -fPIC)
+@@ -145,6 +145,8 @@ $(obj)/vmlinux.bin.lzo: $(vmlinux.bin.all-y) FORCE
+ $(call if_changed,lzo)
+ $(obj)/vmlinux.bin.lz4: $(vmlinux.bin.all-y) FORCE
+ $(call if_changed,lz4)
++$(obj)/vmlinux.bin.zst: $(vmlinux.bin.all-y) FORCE
++ $(call if_changed,zstd)
+
+ suffix-$(CONFIG_KERNEL_GZIP) := gz
+ suffix-$(CONFIG_KERNEL_BZIP2) := bz2
+@@ -152,6 +154,7 @@ suffix-$(CONFIG_KERNEL_LZMA) := lzma
+ suffix-$(CONFIG_KERNEL_XZ) := xz
+ suffix-$(CONFIG_KERNEL_LZO) := lzo
+ suffix-$(CONFIG_KERNEL_LZ4) := lz4
++suffix-$(CONFIG_KERNEL_ZSTD) := zst
+
+ quiet_cmd_mkpiggy = MKPIGGY $@
+ cmd_mkpiggy = $(obj)/mkpiggy $< > $@
+diff --git a/arch/x86/boot/compressed/misc.c b/arch/x86/boot/compressed/misc.c
+index 9652d5c2afda..39e592d0e0b4 100644
+--- a/arch/x86/boot/compressed/misc.c
++++ b/arch/x86/boot/compressed/misc.c
+@@ -77,6 +77,10 @@ static int lines, cols;
+ #ifdef CONFIG_KERNEL_LZ4
+ #include "../../../../lib/decompress_unlz4.c"
+ #endif
++
++#ifdef CONFIG_KERNEL_ZSTD
++#include "../../../../lib/decompress_unzstd.c"
++#endif
+ /*
+ * NOTE: When adding a new decompressor, please update the analysis in
+ * ../header.S.
+diff --git a/arch/x86/include/asm/boot.h b/arch/x86/include/asm/boot.h
+index 680c320363db..d6dd43d25d9f 100644
+--- a/arch/x86/include/asm/boot.h
++++ b/arch/x86/include/asm/boot.h
+@@ -24,9 +24,11 @@
+ # error "Invalid value for CONFIG_PHYSICAL_ALIGN"
+ #endif
+
+-#ifdef CONFIG_KERNEL_BZIP2
++#if defined(CONFIG_KERNEL_BZIP2)
+ # define BOOT_HEAP_SIZE 0x400000
+-#else /* !CONFIG_KERNEL_BZIP2 */
++#elif defined(CONFIG_KERNEL_ZSTD)
++# define BOOT_HEAP_SIZE 0x30000
++#else
+ # define BOOT_HEAP_SIZE 0x10000
+ #endif
+
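The zstd BOOT_HEAP_SIZE of 0x30000 chosen in the hunk above is exactly the "at least 192 KB RAM" figure quoted in the KERNEL_ZSTD help text; a quick check of the constants:

```python
# BOOT_HEAP_SIZE values from the arch/x86/include/asm/boot.h hunk.
heap = {"bzip2": 0x400000, "zstd": 0x30000, "default": 0x10000}

# The zstd heap is the 192 KB mentioned in the KERNEL_ZSTD help text;
# bzip2 keeps its much larger 4 MiB heap.
assert heap["zstd"] == 192 * 1024
assert heap["bzip2"] == 4 * 1024 * 1024
```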
diff --git a/5007_ZSTD-v5-8-8-gitignore-add-ZSTD-compressed-files.patch b/5007_ZSTD-v5-8-8-gitignore-add-ZSTD-compressed-files.patch
new file mode 100644
index 0000000..adf8578
--- /dev/null
+++ b/5007_ZSTD-v5-8-8-gitignore-add-ZSTD-compressed-files.patch
@@ -0,0 +1,12 @@
+diff --git a/.gitignore b/.gitignore
+index 2258e906f01c..23871de69072 100644
+--- a/.gitignore
++++ b/.gitignore
+@@ -44,6 +44,7 @@
+ *.tab.[ch]
+ *.tar
+ *.xz
++*.zst
+ Module.symvers
+ modules.builtin
+ modules.order
diff --git a/5012_enable-cpu-optimizations-for-gcc91.patch b/5012_enable-cpu-optimizations-for-gcc91.patch
new file mode 100644
index 0000000..049ec12
--- /dev/null
+++ b/5012_enable-cpu-optimizations-for-gcc91.patch
@@ -0,0 +1,632 @@
+WARNING
+This patch works with gcc versions 9.1+ and with kernel versions 5.7+. It
+should NOT be applied when compiling with older versions of gcc, due to the
+renaming of the -march flags introduced with the gcc 4.9 release.[1]
+
+For older versions of gcc, use the older version of this patch hosted in the
+same GitHub repository.
+
+FEATURES
+This patch adds additional CPU options to the Linux kernel accessible under:
+ Processor type and features --->
+ Processor family --->
+
+The expanded microarchitectures include:
+* AMD Improved K8-family
+* AMD K10-family
+* AMD Family 10h (Barcelona)
+* AMD Family 14h (Bobcat)
+* AMD Family 16h (Jaguar)
+* AMD Family 15h (Bulldozer)
+* AMD Family 15h (Piledriver)
+* AMD Family 15h (Steamroller)
+* AMD Family 15h (Excavator)
+* AMD Family 17h (Zen)
+* AMD Family 17h (Zen 2)
+* Intel Silvermont low-power processors
+* Intel Goldmont low-power processors (Apollo Lake and Denverton)
+* Intel Goldmont Plus low-power processors (Gemini Lake)
+* Intel 1st Gen Core i3/i5/i7 (Nehalem)
+* Intel 1.5 Gen Core i3/i5/i7 (Westmere)
+* Intel 2nd Gen Core i3/i5/i7 (Sandybridge)
+* Intel 3rd Gen Core i3/i5/i7 (Ivybridge)
+* Intel 4th Gen Core i3/i5/i7 (Haswell)
+* Intel 5th Gen Core i3/i5/i7 (Broadwell)
+* Intel 6th Gen Core i3/i5/i7 (Skylake)
+* Intel 6th Gen Core i7/i9 (Skylake X)
+* Intel 8th Gen Core i3/i5/i7 (Cannon Lake)
+* Intel 10th Gen Core i7/i9 (Ice Lake)
+* Intel Xeon (Cascade Lake)
+
+It also offers to compile passing the 'native' option, which "selects the CPU
+to generate code for at compilation time by determining the processor type of
+the compiling machine. Using -march=native enables all instruction subsets
+supported by the local machine and will produce code optimized for the local
+machine under the constraints of the selected instruction set."[3]
+
+MINOR NOTES
+This patch also changes 'atom' to 'bonnell' in accordance with the gcc v4.9
+changes. Note that upstream is using the deprecated 'march=atom' flag when I
+believe it should use the newer 'march=bonnell' flag for Atom processors.[2]
+
+Compiling with the 'native' option is not recommended on Atom CPUs.[4] The
+recommendation is to use the 'atom' option instead.
+
+BENEFITS
+Small but real speed increases are measurable using a kernel 'make' run as a
+benchmark, comparing a generic kernel to one built with one of the respective
+microarchitectures.
+
+See the following experimental evidence supporting this statement:
+https://github.com/graysky2/kernel_gcc_patch
+
+REQUIREMENTS
+linux version >=5.7
+gcc version >=9.1
+
+ACKNOWLEDGMENTS
+This patch builds on the seminal work by Jeroen.[5]
+
+REFERENCES
+1. https://gcc.gnu.org/gcc-4.9/changes.html
+2. https://bugzilla.kernel.org/show_bug.cgi?id=77461
+3. https://gcc.gnu.org/onlinedocs/gcc/x86-Options.html
+4. https://github.com/graysky2/kernel_gcc_patch/issues/15
+5. http://www.linuxforge.net/docs/linux/linux-gcc.php
+
+--- a/arch/x86/include/asm/vermagic.h 2019-12-15 18:16:08.000000000 -0500
++++ b/arch/x86/include/asm/vermagic.h 2019-12-17 14:03:55.968871551 -0500
+@@ -27,6 +27,36 @@ struct mod_arch_specific {
+ #define MODULE_PROC_FAMILY "586MMX "
+ #elif defined CONFIG_MCORE2
+ #define MODULE_PROC_FAMILY "CORE2 "
++#elif defined CONFIG_MNATIVE
++#define MODULE_PROC_FAMILY "NATIVE "
++#elif defined CONFIG_MNEHALEM
++#define MODULE_PROC_FAMILY "NEHALEM "
++#elif defined CONFIG_MWESTMERE
++#define MODULE_PROC_FAMILY "WESTMERE "
++#elif defined CONFIG_MSILVERMONT
++#define MODULE_PROC_FAMILY "SILVERMONT "
++#elif defined CONFIG_MGOLDMONT
++#define MODULE_PROC_FAMILY "GOLDMONT "
++#elif defined CONFIG_MGOLDMONTPLUS
++#define MODULE_PROC_FAMILY "GOLDMONTPLUS "
++#elif defined CONFIG_MSANDYBRIDGE
++#define MODULE_PROC_FAMILY "SANDYBRIDGE "
++#elif defined CONFIG_MIVYBRIDGE
++#define MODULE_PROC_FAMILY "IVYBRIDGE "
++#elif defined CONFIG_MHASWELL
++#define MODULE_PROC_FAMILY "HASWELL "
++#elif defined CONFIG_MBROADWELL
++#define MODULE_PROC_FAMILY "BROADWELL "
++#elif defined CONFIG_MSKYLAKE
++#define MODULE_PROC_FAMILY "SKYLAKE "
++#elif defined CONFIG_MSKYLAKEX
++#define MODULE_PROC_FAMILY "SKYLAKEX "
++#elif defined CONFIG_MCANNONLAKE
++#define MODULE_PROC_FAMILY "CANNONLAKE "
++#elif defined CONFIG_MICELAKE
++#define MODULE_PROC_FAMILY "ICELAKE "
++#elif defined CONFIG_MCASCADELAKE
++#define MODULE_PROC_FAMILY "CASCADELAKE "
+ #elif defined CONFIG_MATOM
+ #define MODULE_PROC_FAMILY "ATOM "
+ #elif defined CONFIG_M686
+@@ -45,6 +75,28 @@ struct mod_arch_specific {
+ #define MODULE_PROC_FAMILY "K7 "
+ #elif defined CONFIG_MK8
+ #define MODULE_PROC_FAMILY "K8 "
++#elif defined CONFIG_MK8SSE3
++#define MODULE_PROC_FAMILY "K8SSE3 "
++#elif defined CONFIG_MK10
++#define MODULE_PROC_FAMILY "K10 "
++#elif defined CONFIG_MBARCELONA
++#define MODULE_PROC_FAMILY "BARCELONA "
++#elif defined CONFIG_MBOBCAT
++#define MODULE_PROC_FAMILY "BOBCAT "
++#elif defined CONFIG_MBULLDOZER
++#define MODULE_PROC_FAMILY "BULLDOZER "
++#elif defined CONFIG_MPILEDRIVER
++#define MODULE_PROC_FAMILY "PILEDRIVER "
++#elif defined CONFIG_MSTEAMROLLER
++#define MODULE_PROC_FAMILY "STEAMROLLER "
++#elif defined CONFIG_MJAGUAR
++#define MODULE_PROC_FAMILY "JAGUAR "
++#elif defined CONFIG_MEXCAVATOR
++#define MODULE_PROC_FAMILY "EXCAVATOR "
++#elif defined CONFIG_MZEN
++#define MODULE_PROC_FAMILY "ZEN "
++#elif defined CONFIG_MZEN2
++#define MODULE_PROC_FAMILY "ZEN2 "
+ #elif defined CONFIG_MELAN
+ #define MODULE_PROC_FAMILY "ELAN "
+ #elif defined CONFIG_MCRUSOE
+--- a/arch/x86/Kconfig.cpu 2019-12-15 18:16:08.000000000 -0500
++++ b/arch/x86/Kconfig.cpu 2019-12-17 14:09:03.805642284 -0500
+@@ -123,6 +123,7 @@ config MPENTIUMM
+ config MPENTIUM4
+ bool "Pentium-4/Celeron(P4-based)/Pentium-4 M/older Xeon"
+ depends on X86_32
++ select X86_P6_NOP
+ ---help---
+ Select this for Intel Pentium 4 chips. This includes the
+ Pentium 4, Pentium D, P4-based Celeron and Xeon, and
+@@ -155,9 +156,8 @@ config MPENTIUM4
+ -Paxville
+ -Dempsey
+
+-
+ config MK6
+- bool "K6/K6-II/K6-III"
++ bool "AMD K6/K6-II/K6-III"
+ depends on X86_32
+ ---help---
+ Select this for an AMD K6-family processor. Enables use of
+@@ -165,7 +165,7 @@ config MK6
+ flags to GCC.
+
+ config MK7
+- bool "Athlon/Duron/K7"
++ bool "AMD Athlon/Duron/K7"
+ depends on X86_32
+ ---help---
+ Select this for an AMD Athlon K7-family processor. Enables use of
+@@ -173,12 +173,90 @@ config MK7
+ flags to GCC.
+
+ config MK8
+- bool "Opteron/Athlon64/Hammer/K8"
++ bool "AMD Opteron/Athlon64/Hammer/K8"
+ ---help---
+ Select this for an AMD Opteron or Athlon64 Hammer-family processor.
+ Enables use of some extended instructions, and passes appropriate
+ optimization flags to GCC.
+
++config MK8SSE3
++ bool "AMD Opteron/Athlon64/Hammer/K8 with SSE3"
++ ---help---
++ Select this for improved AMD Opteron or Athlon64 Hammer-family processors.
++ Enables use of some extended instructions, and passes appropriate
++ optimization flags to GCC.
++
++config MK10
++ bool "AMD 61xx/7x50/PhenomX3/X4/II/K10"
++ ---help---
++ Select this for an AMD 61xx Eight-Core Magny-Cours, Athlon X2 7x50,
++ Phenom X3/X4/II, Athlon II X2/X3/X4, or Turion II-family processor.
++ Enables use of some extended instructions, and passes appropriate
++ optimization flags to GCC.
++
++config MBARCELONA
++ bool "AMD Barcelona"
++ ---help---
++ Select this for AMD Family 10h Barcelona processors.
++
++ Enables -march=barcelona
++
++config MBOBCAT
++ bool "AMD Bobcat"
++ ---help---
++ Select this for AMD Family 14h Bobcat processors.
++
++ Enables -march=btver1
++
++config MJAGUAR
++ bool "AMD Jaguar"
++ ---help---
++ Select this for AMD Family 16h Jaguar processors.
++
++ Enables -march=btver2
++
++config MBULLDOZER
++ bool "AMD Bulldozer"
++ ---help---
++ Select this for AMD Family 15h Bulldozer processors.
++
++ Enables -march=bdver1
++
++config MPILEDRIVER
++ bool "AMD Piledriver"
++ ---help---
++ Select this for AMD Family 15h Piledriver processors.
++
++ Enables -march=bdver2
++
++config MSTEAMROLLER
++ bool "AMD Steamroller"
++ ---help---
++ Select this for AMD Family 15h Steamroller processors.
++
++ Enables -march=bdver3
++
++config MEXCAVATOR
++ bool "AMD Excavator"
++ ---help---
++ Select this for AMD Family 15h Excavator processors.
++
++ Enables -march=bdver4
++
++config MZEN
++ bool "AMD Zen"
++ ---help---
++ Select this for AMD Family 17h Zen processors.
++
++ Enables -march=znver1
++
++config MZEN2
++ bool "AMD Zen 2"
++ ---help---
++ Select this for AMD Family 17h Zen 2 processors.
++
++ Enables -march=znver2
++
+ config MCRUSOE
+ bool "Crusoe"
+ depends on X86_32
+@@ -260,6 +338,7 @@ config MVIAC7
+
+ config MPSC
+ bool "Intel P4 / older Netburst based Xeon"
++ select X86_P6_NOP
+ depends on X86_64
+ ---help---
+ Optimize for Intel Pentium 4, Pentium D and older Nocona/Dempsey
+@@ -269,8 +348,19 @@ config MPSC
+ using the cpu family field
+ in /proc/cpuinfo. Family 15 is an older Xeon, Family 6 a newer one.
+
++config MATOM
++ bool "Intel Atom"
++ select X86_P6_NOP
++ ---help---
++
++ Select this for the Intel Atom platform. Intel Atom CPUs have an
++ in-order pipelining architecture and thus can benefit from
++ accordingly optimized code. Use a recent GCC with specific Atom
++ support in order to fully benefit from selecting this option.
++
+ config MCORE2
+- bool "Core 2/newer Xeon"
++ bool "Intel Core 2"
++ select X86_P6_NOP
+ ---help---
+
+ Select this for Intel Core 2 and newer Core 2 Xeons (Xeon 51xx and
+@@ -278,14 +368,133 @@ config MCORE2
+ family in /proc/cpuinfo. Newer ones have 6 and older ones 15
+ (not a typo)
+
+-config MATOM
+- bool "Intel Atom"
++ Enables -march=core2
++
++config MNEHALEM
++ bool "Intel Nehalem"
++ select X86_P6_NOP
+ ---help---
+
+- Select this for the Intel Atom platform. Intel Atom CPUs have an
+- in-order pipelining architecture and thus can benefit from
+- accordingly optimized code. Use a recent GCC with specific Atom
+- support in order to fully benefit from selecting this option.
++ Select this for 1st Gen Core processors in the Nehalem family.
++
++ Enables -march=nehalem
++
++config MWESTMERE
++ bool "Intel Westmere"
++ select X86_P6_NOP
++ ---help---
++
++ Select this for the Intel Westmere formerly Nehalem-C family.
++
++ Enables -march=westmere
++
++config MSILVERMONT
++ bool "Intel Silvermont"
++ select X86_P6_NOP
++ ---help---
++
++ Select this for the Intel Silvermont platform.
++
++ Enables -march=silvermont
++
++config MGOLDMONT
++ bool "Intel Goldmont"
++ select X86_P6_NOP
++ ---help---
++
++ Select this for the Intel Goldmont platform including Apollo Lake and Denverton.
++
++ Enables -march=goldmont
++
++config MGOLDMONTPLUS
++ bool "Intel Goldmont Plus"
++ select X86_P6_NOP
++ ---help---
++
++ Select this for the Intel Goldmont Plus platform including Gemini Lake.
++
++ Enables -march=goldmont-plus
++
++config MSANDYBRIDGE
++ bool "Intel Sandy Bridge"
++ select X86_P6_NOP
++ ---help---
++
++ Select this for 2nd Gen Core processors in the Sandy Bridge family.
++
++ Enables -march=sandybridge
++
++config MIVYBRIDGE
++ bool "Intel Ivy Bridge"
++ select X86_P6_NOP
++ ---help---
++
++ Select this for 3rd Gen Core processors in the Ivy Bridge family.
++
++ Enables -march=ivybridge
++
++config MHASWELL
++ bool "Intel Haswell"
++ select X86_P6_NOP
++ ---help---
++
++ Select this for 4th Gen Core processors in the Haswell family.
++
++ Enables -march=haswell
++
++config MBROADWELL
++ bool "Intel Broadwell"
++ select X86_P6_NOP
++ ---help---
++
++ Select this for 5th Gen Core processors in the Broadwell family.
++
++ Enables -march=broadwell
++
++config MSKYLAKE
++ bool "Intel Skylake"
++ select X86_P6_NOP
++ ---help---
++
++ Select this for 6th Gen Core processors in the Skylake family.
++
++ Enables -march=skylake
++
++config MSKYLAKEX
++ bool "Intel Skylake X"
++ select X86_P6_NOP
++ ---help---
++
++ Select this for 6th Gen Core processors in the Skylake X family.
++
++ Enables -march=skylake-avx512
++
++config MCANNONLAKE
++ bool "Intel Cannon Lake"
++ select X86_P6_NOP
++ ---help---
++
++ Select this for 8th Gen Core processors
++
++ Enables -march=cannonlake
++
++config MICELAKE
++ bool "Intel Ice Lake"
++ select X86_P6_NOP
++ ---help---
++
++ Select this for 10th Gen Core processors in the Ice Lake family.
++
++ Enables -march=icelake-client
++
++config MCASCADELAKE
++ bool "Intel Cascade Lake"
++ select X86_P6_NOP
++ ---help---
++
++ Select this for Xeon processors in the Cascade Lake family.
++
++ Enables -march=cascadelake
+
+ config GENERIC_CPU
+ bool "Generic-x86-64"
+@@ -294,6 +503,19 @@ config GENERIC_CPU
+ Generic x86-64 CPU.
+ Run equally well on all x86-64 CPUs.
+
++config MNATIVE
++ bool "Native optimizations autodetected by GCC"
++ ---help---
++
++ GCC 4.2 and above support -march=native, which automatically detects
++ the optimum settings to use based on your processor. -march=native
++ also detects and applies additional settings beyond -march specific
++ to your CPU, (eg. -msse4). Unless you have a specific reason not to
++ (e.g. distcc cross-compiling), you should probably be using
++ -march=native rather than anything listed below.
++
++ Enables -march=native
++
+ endchoice
+
+ config X86_GENERIC
+@@ -318,7 +540,7 @@ config X86_INTERNODE_CACHE_SHIFT
+ config X86_L1_CACHE_SHIFT
+ int
+ default "7" if MPENTIUM4 || MPSC
+- default "6" if MK7 || MK8 || MPENTIUMM || MCORE2 || MATOM || MVIAC7 || X86_GENERIC || GENERIC_CPU
++ default "6" if MK7 || MK8 || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MJAGUAR || MPENTIUMM || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MNATIVE || MATOM || MVIAC7 || X86_GENERIC || GENERIC_CPU
+ default "4" if MELAN || M486SX || M486 || MGEODEGX1
+ default "5" if MWINCHIP3D || MWINCHIPC6 || MCRUSOE || MEFFICEON || MCYRIXIII || MK6 || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || M586 || MVIAC3_2 || MGEODE_LX
+
+@@ -336,35 +558,36 @@ config X86_ALIGNMENT_16
+
+ config X86_INTEL_USERCOPY
+ def_bool y
+- depends on MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M586MMX || X86_GENERIC || MK8 || MK7 || MEFFICEON || MCORE2
++ depends on MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M586MMX || X86_GENERIC || MK8 || MK8SSE3 || MK7 || MEFFICEON || MCORE2 || MK10 || MBARCELONA || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MNATIVE
+
+ config X86_USE_PPRO_CHECKSUM
+ def_bool y
+- depends on MWINCHIP3D || MWINCHIPC6 || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MK8 || MVIAC3_2 || MVIAC7 || MEFFICEON || MGEODE_LX || MCORE2 || MATOM
++ depends on MWINCHIP3D || MWINCHIPC6 || MCYRIXIII || MK7 || MK6 || MK10 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MK8 || MK8SSE3 || MVIAC3_2 || MVIAC7 || MEFFICEON || MGEODE_LX || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MATOM || MNATIVE
+
+ config X86_USE_3DNOW
+ def_bool y
+ depends on (MCYRIXIII || MK7 || MGEODE_LX) && !UML
+
+-#
+-# P6_NOPs are a relatively minor optimization that require a family >=
+-# 6 processor, except that it is broken on certain VIA chips.
+-# Furthermore, AMD chips prefer a totally different sequence of NOPs
+-# (which work on all CPUs). In addition, it looks like Virtual PC
+-# does not understand them.
+-#
+-# As a result, disallow these if we're not compiling for X86_64 (these
+-# NOPs do work on all x86-64 capable chips); the list of processors in
+-# the right-hand clause are the cores that benefit from this optimization.
+-#
+ config X86_P6_NOP
+- def_bool y
+- depends on X86_64
+- depends on (MCORE2 || MPENTIUM4 || MPSC)
++ default n
++ bool "Support for P6_NOPs on Intel chips"
++ depends on (MCORE2 || MPENTIUM4 || MPSC || MATOM || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MNATIVE)
++ ---help---
++ P6_NOPs are a relatively minor optimization that require a family >=
++ 6 processor, except that it is broken on certain VIA chips.
++ Furthermore, AMD chips prefer a totally different sequence of NOPs
++ (which work on all CPUs). In addition, it looks like Virtual PC
++ does not understand them.
++
++ As a result, disallow these if we're not compiling for X86_64 (these
++ NOPs do work on all x86-64 capable chips); the list of processors in
++ the right-hand clause are the cores that benefit from this optimization.
++
++ Say Y if you have Intel CPU newer than Pentium Pro, N otherwise.
+
+ config X86_TSC
+ def_bool y
+- depends on (MWINCHIP3D || MCRUSOE || MEFFICEON || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || MK8 || MVIAC3_2 || MVIAC7 || MGEODEGX1 || MGEODE_LX || MCORE2 || MATOM) || X86_64
++ depends on (MWINCHIP3D || MCRUSOE || MEFFICEON || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || MK8 || MK8SSE3 || MVIAC3_2 || MVIAC7 || MGEODEGX1 || MGEODE_LX || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MNATIVE || MATOM) || X86_64
+
+ config X86_CMPXCHG64
+ def_bool y
+@@ -374,7 +597,7 @@ config X86_CMPXCHG64
+ # generates cmov.
+ config X86_CMOV
+ def_bool y
+- depends on (MK8 || MK7 || MCORE2 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MCRUSOE || MEFFICEON || X86_64 || MATOM || MGEODE_LX)
++ depends on (MK8 || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MJAGUAR || MK7 || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MCRUSOE || MEFFICEON || X86_64 || MNATIVE || MATOM || MGEODE_LX)
+
+ config X86_MINIMUM_CPU_FAMILY
+ int
+--- a/arch/x86/Makefile 2019-12-15 18:16:08.000000000 -0500
++++ b/arch/x86/Makefile 2019-12-17 14:03:55.972204960 -0500
+@@ -119,13 +119,53 @@ else
+ KBUILD_CFLAGS += $(call cc-option,-mskip-rax-setup)
+
+ # FIXME - should be integrated in Makefile.cpu (Makefile_32.cpu)
++ cflags-$(CONFIG_MNATIVE) += $(call cc-option,-march=native)
+ cflags-$(CONFIG_MK8) += $(call cc-option,-march=k8)
++ cflags-$(CONFIG_MK8SSE3) += $(call cc-option,-march=k8-sse3,-mtune=k8)
++ cflags-$(CONFIG_MK10) += $(call cc-option,-march=amdfam10)
++ cflags-$(CONFIG_MBARCELONA) += $(call cc-option,-march=barcelona)
++ cflags-$(CONFIG_MBOBCAT) += $(call cc-option,-march=btver1)
++ cflags-$(CONFIG_MJAGUAR) += $(call cc-option,-march=btver2)
++ cflags-$(CONFIG_MBULLDOZER) += $(call cc-option,-march=bdver1)
++ cflags-$(CONFIG_MPILEDRIVER) += $(call cc-option,-march=bdver2)
++ cflags-$(CONFIG_MSTEAMROLLER) += $(call cc-option,-march=bdver3)
++ cflags-$(CONFIG_MEXCAVATOR) += $(call cc-option,-march=bdver4)
++ cflags-$(CONFIG_MZEN) += $(call cc-option,-march=znver1)
++ cflags-$(CONFIG_MZEN2) += $(call cc-option,-march=znver2)
+ cflags-$(CONFIG_MPSC) += $(call cc-option,-march=nocona)
+
+ cflags-$(CONFIG_MCORE2) += \
+- $(call cc-option,-march=core2,$(call cc-option,-mtune=generic))
+- cflags-$(CONFIG_MATOM) += $(call cc-option,-march=atom) \
+- $(call cc-option,-mtune=atom,$(call cc-option,-mtune=generic))
++ $(call cc-option,-march=core2,$(call cc-option,-mtune=core2))
++ cflags-$(CONFIG_MNEHALEM) += \
++ $(call cc-option,-march=nehalem,$(call cc-option,-mtune=nehalem))
++ cflags-$(CONFIG_MWESTMERE) += \
++ $(call cc-option,-march=westmere,$(call cc-option,-mtune=westmere))
++ cflags-$(CONFIG_MSILVERMONT) += \
++ $(call cc-option,-march=silvermont,$(call cc-option,-mtune=silvermont))
++ cflags-$(CONFIG_MGOLDMONT) += \
++ $(call cc-option,-march=goldmont,$(call cc-option,-mtune=goldmont))
++ cflags-$(CONFIG_MGOLDMONTPLUS) += \
++ $(call cc-option,-march=goldmont-plus,$(call cc-option,-mtune=goldmont-plus))
++ cflags-$(CONFIG_MSANDYBRIDGE) += \
++ $(call cc-option,-march=sandybridge,$(call cc-option,-mtune=sandybridge))
++ cflags-$(CONFIG_MIVYBRIDGE) += \
++ $(call cc-option,-march=ivybridge,$(call cc-option,-mtune=ivybridge))
++ cflags-$(CONFIG_MHASWELL) += \
++ $(call cc-option,-march=haswell,$(call cc-option,-mtune=haswell))
++ cflags-$(CONFIG_MBROADWELL) += \
++ $(call cc-option,-march=broadwell,$(call cc-option,-mtune=broadwell))
++ cflags-$(CONFIG_MSKYLAKE) += \
++ $(call cc-option,-march=skylake,$(call cc-option,-mtune=skylake))
++ cflags-$(CONFIG_MSKYLAKEX) += \
++ $(call cc-option,-march=skylake-avx512,$(call cc-option,-mtune=skylake-avx512))
++ cflags-$(CONFIG_MCANNONLAKE) += \
++ $(call cc-option,-march=cannonlake,$(call cc-option,-mtune=cannonlake))
++ cflags-$(CONFIG_MICELAKE) += \
++ $(call cc-option,-march=icelake-client,$(call cc-option,-mtune=icelake-client))
++ cflags-$(CONFIG_MCASCADELAKE) += \
++ $(call cc-option,-march=cascadelake,$(call cc-option,-mtune=cascadelake))
++ cflags-$(CONFIG_MATOM) += $(call cc-option,-march=bonnell) \
++ $(call cc-option,-mtune=bonnell,$(call cc-option,-mtune=generic))
+ cflags-$(CONFIG_GENERIC_CPU) += $(call cc-option,-mtune=generic)
+ KBUILD_CFLAGS += $(cflags-y)
+
+--- a/arch/x86/Makefile_32.cpu 2019-12-15 18:16:08.000000000 -0500
++++ b/arch/x86/Makefile_32.cpu 2019-12-17 14:03:55.972204960 -0500
+@@ -24,7 +24,19 @@ cflags-$(CONFIG_MK6) += -march=k6
+ # Please note, that patches that add -march=athlon-xp and friends are pointless.
+ # They make zero difference whatsosever to performance at this time.
+ cflags-$(CONFIG_MK7) += -march=athlon
++cflags-$(CONFIG_MNATIVE) += $(call cc-option,-march=native)
+ cflags-$(CONFIG_MK8) += $(call cc-option,-march=k8,-march=athlon)
++cflags-$(CONFIG_MK8SSE3) += $(call cc-option,-march=k8-sse3,-march=athlon)
++cflags-$(CONFIG_MK10) += $(call cc-option,-march=amdfam10,-march=athlon)
++cflags-$(CONFIG_MBARCELONA) += $(call cc-option,-march=barcelona,-march=athlon)
++cflags-$(CONFIG_MBOBCAT) += $(call cc-option,-march=btver1,-march=athlon)
++cflags-$(CONFIG_MJAGUAR) += $(call cc-option,-march=btver2,-march=athlon)
++cflags-$(CONFIG_MBULLDOZER) += $(call cc-option,-march=bdver1,-march=athlon)
++cflags-$(CONFIG_MPILEDRIVER) += $(call cc-option,-march=bdver2,-march=athlon)
++cflags-$(CONFIG_MSTEAMROLLER) += $(call cc-option,-march=bdver3,-march=athlon)
++cflags-$(CONFIG_MEXCAVATOR) += $(call cc-option,-march=bdver4,-march=athlon)
++cflags-$(CONFIG_MZEN) += $(call cc-option,-march=znver1,-march=athlon)
++cflags-$(CONFIG_MZEN2) += $(call cc-option,-march=znver2,-march=athlon)
+ cflags-$(CONFIG_MCRUSOE) += -march=i686 -falign-functions=0 -falign-jumps=0 -falign-loops=0
+ cflags-$(CONFIG_MEFFICEON) += -march=i686 $(call tune,pentium3) -falign-functions=0 -falign-jumps=0 -falign-loops=0
+ cflags-$(CONFIG_MWINCHIPC6) += $(call cc-option,-march=winchip-c6,-march=i586)
+@@ -33,8 +45,22 @@ cflags-$(CONFIG_MCYRIXIII) += $(call cc-
+ cflags-$(CONFIG_MVIAC3_2) += $(call cc-option,-march=c3-2,-march=i686)
+ cflags-$(CONFIG_MVIAC7) += -march=i686
+ cflags-$(CONFIG_MCORE2) += -march=i686 $(call tune,core2)
+-cflags-$(CONFIG_MATOM) += $(call cc-option,-march=atom,$(call cc-option,-march=core2,-march=i686)) \
+- $(call cc-option,-mtune=atom,$(call cc-option,-mtune=generic))
++cflags-$(CONFIG_MNEHALEM) += -march=i686 $(call tune,nehalem)
++cflags-$(CONFIG_MWESTMERE) += -march=i686 $(call tune,westmere)
++cflags-$(CONFIG_MSILVERMONT) += -march=i686 $(call tune,silvermont)
++cflags-$(CONFIG_MGOLDMONT) += -march=i686 $(call tune,goldmont)
++cflags-$(CONFIG_MGOLDMONTPLUS) += -march=i686 $(call tune,goldmont-plus)
++cflags-$(CONFIG_MSANDYBRIDGE) += -march=i686 $(call tune,sandybridge)
++cflags-$(CONFIG_MIVYBRIDGE) += -march=i686 $(call tune,ivybridge)
++cflags-$(CONFIG_MHASWELL) += -march=i686 $(call tune,haswell)
++cflags-$(CONFIG_MBROADWELL) += -march=i686 $(call tune,broadwell)
++cflags-$(CONFIG_MSKYLAKE) += -march=i686 $(call tune,skylake)
++cflags-$(CONFIG_MSKYLAKEX) += -march=i686 $(call tune,skylake-avx512)
++cflags-$(CONFIG_MCANNONLAKE) += -march=i686 $(call tune,cannonlake)
++cflags-$(CONFIG_MICELAKE) += -march=i686 $(call tune,icelake-client)
++cflags-$(CONFIG_MCASCADELAKE) += -march=i686 $(call tune,cascadelake)
++cflags-$(CONFIG_MATOM) += $(call cc-option,-march=bonnell,$(call cc-option,-march=core2,-march=i686)) \
++ $(call cc-option,-mtune=bonnell,$(call cc-option,-mtune=generic))
+
+ # AMD Elan support
+ cflags-$(CONFIG_MELAN) += -march=i486
^ permalink raw reply related [flat|nested] 25+ messages in thread
* [gentoo-commits] proj/linux-patches:5.7 commit in: /
@ 2020-05-26 17:49 Mike Pagano
0 siblings, 0 replies; 25+ messages in thread
From: Mike Pagano @ 2020-05-26 17:49 UTC (permalink / raw
To: gentoo-commits
commit: 833474bac40c498315a937c946ae6fdfe4f7e99d
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed May 13 11:55:40 2020 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Tue May 26 17:48:57 2020 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=833474ba
Add UTS_NS to GENTOO_LINUX_PORTAGE as required by portage since 2.3.99
Bug: https://bugs.gentoo.org/722772
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
4567_distro-Gentoo-Kconfig.patch | 7 ++++---
1 file changed, 4 insertions(+), 3 deletions(-)
diff --git a/4567_distro-Gentoo-Kconfig.patch b/4567_distro-Gentoo-Kconfig.patch
index 581cb20..cb2eaa6 100644
--- a/4567_distro-Gentoo-Kconfig.patch
+++ b/4567_distro-Gentoo-Kconfig.patch
@@ -6,9 +6,9 @@
source "Documentation/Kconfig"
+
+source "distro/Kconfig"
---- /dev/null 2020-04-15 02:49:37.900191585 -0400
-+++ b/distro/Kconfig 2020-04-15 11:07:10.952929540 -0400
-@@ -0,0 +1,156 @@
+--- /dev/null 2020-05-13 03:13:57.920193259 -0400
++++ b/distro/Kconfig 2020-05-13 07:51:36.841663359 -0400
+@@ -0,0 +1,157 @@
+menu "Gentoo Linux"
+
+config GENTOO_LINUX
@@ -65,6 +65,7 @@
+ select NET_NS
+ select PID_NS
+ select SYSVIPC
++ select UTS_NS
+
+ help
+ This enables options required by various Portage FEATURES.
* [gentoo-commits] proj/linux-patches:5.7 commit in: /
@ 2020-05-26 17:59 Mike Pagano
From: Mike Pagano @ 2020-05-26 17:59 UTC (permalink / raw
To: gentoo-commits
commit: ec9ddda7922e72d5f53c5b45262d3b24f67d1b18
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Tue May 26 17:58:36 2020 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Tue May 26 17:58:36 2020 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=ec9ddda7
Added two fixes to genpatches
VIDEO_TVP5150 requires REGMAP_I2C to build. Select it by default in Kconfig.
See bug #721096. Thanks to Max Steel.
sign-file: full functionality with modern LibreSSL
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 8 ++++++++
...TVP5150-Fix-build-issue-by-selecting-REGMAP-I2C.patch | 10 ++++++++++
2920_sign-file-patch-for-libressl.patch | 16 ++++++++++++++++
3 files changed, 34 insertions(+)
diff --git a/0000_README b/0000_README
index 639ad9e..b0aed76 100644
--- a/0000_README
+++ b/0000_README
@@ -63,6 +63,14 @@ Patch: 2900_tmp513-Fix-build-issue-by-selecting-CONFIG_REG.patch
From: https://bugs.gentoo.org/710790
Desc: tmp513 requies REGMAP_I2C to build. Select it by default in Kconfig. See bug #710790. Thanks to Phil Stracchino
+Patch: 2910_TVP5150-Fix-build-issue-by-selecting-REGMAP-I2C.patch
+From: https://bugs.gentoo.org/721096
+Desc: VIDEO_TVP5150 requies REGMAP_I2C to build. Select it by default in Kconfig. See bug #721096. Thanks to Max Steel
+
+Patch: 2920_sign-file-patch-for-libressl.patch
+From: https://bugs.gentoo.org/717166
+Desc: sign-file: full functionality with modern LibreSSL
+
Patch: 4567_distro-Gentoo-Kconfig.patch
From: Tom Wijsman <TomWij@gentoo.org>
Desc: Add Gentoo Linux support config settings and defaults.
diff --git a/2910_TVP5150-Fix-build-issue-by-selecting-REGMAP-I2C.patch b/2910_TVP5150-Fix-build-issue-by-selecting-REGMAP-I2C.patch
new file mode 100644
index 0000000..1bc058e
--- /dev/null
+++ b/2910_TVP5150-Fix-build-issue-by-selecting-REGMAP-I2C.patch
@@ -0,0 +1,10 @@
+--- a/drivers/media/i2c/Kconfig 2020-05-13 12:38:05.102903309 -0400
++++ b/drivers/media/i2c/Kconfig 2020-05-13 12:38:51.283171977 -0400
+@@ -378,6 +378,7 @@ config VIDEO_TVP514X
+ config VIDEO_TVP5150
+ tristate "Texas Instruments TVP5150 video decoder"
+ depends on VIDEO_V4L2 && I2C
++ select REGMAP_I2C
+ select V4L2_FWNODE
+ help
+ Support for the Texas Instruments TVP5150 video decoder.
diff --git a/2920_sign-file-patch-for-libressl.patch b/2920_sign-file-patch-for-libressl.patch
new file mode 100644
index 0000000..e6ec017
--- /dev/null
+++ b/2920_sign-file-patch-for-libressl.patch
@@ -0,0 +1,16 @@
+--- a/scripts/sign-file.c 2020-05-20 18:47:21.282820662 -0400
++++ b/scripts/sign-file.c 2020-05-20 18:48:37.991081899 -0400
+@@ -41,9 +41,10 @@
+ * signing with anything other than SHA1 - so we're stuck with that if such is
+ * the case.
+ */
+-#if defined(LIBRESSL_VERSION_NUMBER) || \
+- OPENSSL_VERSION_NUMBER < 0x10000000L || \
+- defined(OPENSSL_NO_CMS)
++#if defined(OPENSSL_NO_CMS) || \
++ ( defined(LIBRESSL_VERSION_NUMBER) \
++ && (LIBRESSL_VERSION_NUMBER < 0x3010000fL) ) || \
++ OPENSSL_VERSION_NUMBER < 0x10000000L
+ #define USE_PKCS7
+ #endif
+ #ifndef USE_PKCS7
* [gentoo-commits] proj/linux-patches:5.7 commit in: /
@ 2020-06-07 21:57 Mike Pagano
From: Mike Pagano @ 2020-06-07 21:57 UTC (permalink / raw
To: gentoo-commits
commit: ef39c3e7bf550a0089be6fe30fafbca9a6d3f174
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sun Jun 7 21:57:12 2020 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sun Jun 7 21:57:12 2020 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=ef39c3e7
Linux patch 5.7.1
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1000_linux-5.7.1.patch | 416 +++++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 420 insertions(+)
diff --git a/0000_README b/0000_README
index b0aed76..a2b7036 100644
--- a/0000_README
+++ b/0000_README
@@ -43,6 +43,10 @@ EXPERIMENTAL
Individual Patch Descriptions:
--------------------------------------------------------------------------
+Patch: 1000_linux-5.7.1.patch
+From: http://www.kernel.org
+Desc: Linux 5.7.1
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1000_linux-5.7.1.patch b/1000_linux-5.7.1.patch
new file mode 100644
index 0000000..e323c49
--- /dev/null
+++ b/1000_linux-5.7.1.patch
@@ -0,0 +1,416 @@
+diff --git a/Makefile b/Makefile
+index b668725a2a62..2dd4f37c9f10 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 7
+-SUBLEVEL = 0
++SUBLEVEL = 1
+ EXTRAVERSION =
+ NAME = Kleptomaniac Octopus
+
+diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
+index 4d02e64af1b3..19cdeebfbde6 100644
+--- a/arch/x86/include/asm/pgtable.h
++++ b/arch/x86/include/asm/pgtable.h
+@@ -257,6 +257,7 @@ static inline int pmd_large(pmd_t pte)
+ }
+
+ #ifdef CONFIG_TRANSPARENT_HUGEPAGE
++/* NOTE: when predicate huge page, consider also pmd_devmap, or use pmd_large */
+ static inline int pmd_trans_huge(pmd_t pmd)
+ {
+ return (pmd_val(pmd) & (_PAGE_PSE|_PAGE_DEVMAP)) == _PAGE_PSE;
+diff --git a/crypto/algapi.c b/crypto/algapi.c
+index 69605e21af92..f8b4dc161c02 100644
+--- a/crypto/algapi.c
++++ b/crypto/algapi.c
+@@ -716,17 +716,27 @@ EXPORT_SYMBOL_GPL(crypto_drop_spawn);
+
+ static struct crypto_alg *crypto_spawn_alg(struct crypto_spawn *spawn)
+ {
+- struct crypto_alg *alg;
++ struct crypto_alg *alg = ERR_PTR(-EAGAIN);
++ struct crypto_alg *target;
++ bool shoot = false;
+
+ down_read(&crypto_alg_sem);
+- alg = spawn->alg;
+- if (!spawn->dead && !crypto_mod_get(alg)) {
+- alg->cra_flags |= CRYPTO_ALG_DYING;
+- alg = NULL;
++ if (!spawn->dead) {
++ alg = spawn->alg;
++ if (!crypto_mod_get(alg)) {
++ target = crypto_alg_get(alg);
++ shoot = true;
++ alg = ERR_PTR(-EAGAIN);
++ }
+ }
+ up_read(&crypto_alg_sem);
+
+- return alg ?: ERR_PTR(-EAGAIN);
++ if (shoot) {
++ crypto_shoot_alg(target);
++ crypto_alg_put(target);
++ }
++
++ return alg;
+ }
+
+ struct crypto_tfm *crypto_spawn_tfm(struct crypto_spawn *spawn, u32 type,
+diff --git a/crypto/api.c b/crypto/api.c
+index 7d71a9b10e5f..edcf690800d4 100644
+--- a/crypto/api.c
++++ b/crypto/api.c
+@@ -333,12 +333,13 @@ static unsigned int crypto_ctxsize(struct crypto_alg *alg, u32 type, u32 mask)
+ return len;
+ }
+
+-static void crypto_shoot_alg(struct crypto_alg *alg)
++void crypto_shoot_alg(struct crypto_alg *alg)
+ {
+ down_write(&crypto_alg_sem);
+ alg->cra_flags |= CRYPTO_ALG_DYING;
+ up_write(&crypto_alg_sem);
+ }
++EXPORT_SYMBOL_GPL(crypto_shoot_alg);
+
+ struct crypto_tfm *__crypto_alloc_tfm(struct crypto_alg *alg, u32 type,
+ u32 mask)
+diff --git a/crypto/internal.h b/crypto/internal.h
+index d5ebc60c5143..ff06a3bd1ca1 100644
+--- a/crypto/internal.h
++++ b/crypto/internal.h
+@@ -65,6 +65,7 @@ void crypto_alg_tested(const char *name, int err);
+ void crypto_remove_spawns(struct crypto_alg *alg, struct list_head *list,
+ struct crypto_alg *nalg);
+ void crypto_remove_final(struct list_head *list);
++void crypto_shoot_alg(struct crypto_alg *alg);
+ struct crypto_tfm *__crypto_alloc_tfm(struct crypto_alg *alg, u32 type,
+ u32 mask);
+ void *crypto_create_tfm(struct crypto_alg *alg,
+diff --git a/drivers/hid/hid-multitouch.c b/drivers/hid/hid-multitouch.c
+index 03c720b47306..39e4da7468e1 100644
+--- a/drivers/hid/hid-multitouch.c
++++ b/drivers/hid/hid-multitouch.c
+@@ -69,6 +69,7 @@ MODULE_LICENSE("GPL");
+ #define MT_QUIRK_ASUS_CUSTOM_UP BIT(17)
+ #define MT_QUIRK_WIN8_PTP_BUTTONS BIT(18)
+ #define MT_QUIRK_SEPARATE_APP_REPORT BIT(19)
++#define MT_QUIRK_FORCE_MULTI_INPUT BIT(20)
+
+ #define MT_INPUTMODE_TOUCHSCREEN 0x02
+ #define MT_INPUTMODE_TOUCHPAD 0x03
+@@ -189,6 +190,7 @@ static void mt_post_parse(struct mt_device *td, struct mt_application *app);
+ #define MT_CLS_WIN_8 0x0012
+ #define MT_CLS_EXPORT_ALL_INPUTS 0x0013
+ #define MT_CLS_WIN_8_DUAL 0x0014
++#define MT_CLS_WIN_8_FORCE_MULTI_INPUT 0x0015
+
+ /* vendor specific classes */
+ #define MT_CLS_3M 0x0101
+@@ -279,6 +281,15 @@ static const struct mt_class mt_classes[] = {
+ MT_QUIRK_CONTACT_CNT_ACCURATE |
+ MT_QUIRK_WIN8_PTP_BUTTONS,
+ .export_all_inputs = true },
++ { .name = MT_CLS_WIN_8_FORCE_MULTI_INPUT,
++ .quirks = MT_QUIRK_ALWAYS_VALID |
++ MT_QUIRK_IGNORE_DUPLICATES |
++ MT_QUIRK_HOVERING |
++ MT_QUIRK_CONTACT_CNT_ACCURATE |
++ MT_QUIRK_STICKY_FINGERS |
++ MT_QUIRK_WIN8_PTP_BUTTONS |
++ MT_QUIRK_FORCE_MULTI_INPUT,
++ .export_all_inputs = true },
+
+ /*
+ * vendor specific classes
+@@ -1714,6 +1725,11 @@ static int mt_probe(struct hid_device *hdev, const struct hid_device_id *id)
+ if (id->group != HID_GROUP_MULTITOUCH_WIN_8)
+ hdev->quirks |= HID_QUIRK_MULTI_INPUT;
+
++ if (mtclass->quirks & MT_QUIRK_FORCE_MULTI_INPUT) {
++ hdev->quirks &= ~HID_QUIRK_INPUT_PER_APP;
++ hdev->quirks |= HID_QUIRK_MULTI_INPUT;
++ }
++
+ timer_setup(&td->release_timer, mt_expired_timeout, 0);
+
+ ret = hid_parse(hdev);
+@@ -1926,6 +1942,11 @@ static const struct hid_device_id mt_devices[] = {
+ MT_USB_DEVICE(USB_VENDOR_ID_DWAV,
+ USB_DEVICE_ID_DWAV_EGALAX_MULTITOUCH_C002) },
+
++ /* Elan devices */
++ { .driver_data = MT_CLS_WIN_8_FORCE_MULTI_INPUT,
++ HID_DEVICE(BUS_I2C, HID_GROUP_MULTITOUCH_WIN_8,
++ USB_VENDOR_ID_ELAN, 0x313a) },
++
+ /* Elitegroup panel */
+ { .driver_data = MT_CLS_SERIAL,
+ MT_USB_DEVICE(USB_VENDOR_ID_ELITEGROUP,
+@@ -2056,6 +2077,11 @@ static const struct hid_device_id mt_devices[] = {
+ MT_USB_DEVICE(USB_VENDOR_ID_STANTUM_STM,
+ USB_DEVICE_ID_MTP_STM)},
+
++ /* Synaptics devices */
++ { .driver_data = MT_CLS_WIN_8_FORCE_MULTI_INPUT,
++ HID_DEVICE(BUS_I2C, HID_GROUP_MULTITOUCH_WIN_8,
++ USB_VENDOR_ID_SYNAPTICS, 0xce08) },
++
+ /* TopSeed panels */
+ { .driver_data = MT_CLS_TOPSEED,
+ MT_USB_DEVICE(USB_VENDOR_ID_TOPSEED2,
+diff --git a/drivers/hid/hid-sony.c b/drivers/hid/hid-sony.c
+index 4c6ed6ef31f1..2f073f536070 100644
+--- a/drivers/hid/hid-sony.c
++++ b/drivers/hid/hid-sony.c
+@@ -867,6 +867,23 @@ static u8 *sony_report_fixup(struct hid_device *hdev, u8 *rdesc,
+ if (sc->quirks & PS3REMOTE)
+ return ps3remote_fixup(hdev, rdesc, rsize);
+
++ /*
++ * Some knock-off USB dongles incorrectly report their button count
++ * as 13 instead of 16 causing three non-functional buttons.
++ */
++ if ((sc->quirks & SIXAXIS_CONTROLLER_USB) && *rsize >= 45 &&
++ /* Report Count (13) */
++ rdesc[23] == 0x95 && rdesc[24] == 0x0D &&
++ /* Usage Maximum (13) */
++ rdesc[37] == 0x29 && rdesc[38] == 0x0D &&
++ /* Report Count (3) */
++ rdesc[43] == 0x95 && rdesc[44] == 0x03) {
++ hid_info(hdev, "Fixing up USB dongle report descriptor\n");
++ rdesc[24] = 0x10;
++ rdesc[38] = 0x10;
++ rdesc[44] = 0x00;
++ }
++
+ return rdesc;
+ }
+
+diff --git a/drivers/hid/i2c-hid/i2c-hid-dmi-quirks.c b/drivers/hid/i2c-hid/i2c-hid-dmi-quirks.c
+index a66f08041a1a..ec142bc8c1da 100644
+--- a/drivers/hid/i2c-hid/i2c-hid-dmi-quirks.c
++++ b/drivers/hid/i2c-hid/i2c-hid-dmi-quirks.c
+@@ -389,6 +389,14 @@ static const struct dmi_system_id i2c_hid_dmi_desc_override_table[] = {
+ },
+ .driver_data = (void *)&sipodev_desc
+ },
++ {
++ .ident = "Schneider SCL142ALM",
++ .matches = {
++ DMI_EXACT_MATCH(DMI_SYS_VENDOR, "SCHNEIDER"),
++ DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "SCL142ALM"),
++ },
++ .driver_data = (void *)&sipodev_desc
++ },
+ { } /* Terminate list */
+ };
+
+diff --git a/drivers/media/dvb-core/dvbdev.c b/drivers/media/dvb-core/dvbdev.c
+index 80b6a71aa33e..959fa2820259 100644
+--- a/drivers/media/dvb-core/dvbdev.c
++++ b/drivers/media/dvb-core/dvbdev.c
+@@ -707,9 +707,10 @@ int dvb_create_media_graph(struct dvb_adapter *adap,
+ }
+
+ if (ntuner && ndemod) {
+- pad_source = media_get_pad_index(tuner, true,
++ /* NOTE: first found tuner source pad presumed correct */
++ pad_source = media_get_pad_index(tuner, false,
+ PAD_SIGNAL_ANALOG);
+- if (pad_source)
++ if (pad_source < 0)
+ return -EINVAL;
+ ret = media_create_pad_links(mdev,
+ MEDIA_ENT_F_TUNER,
+diff --git a/drivers/net/dsa/mt7530.c b/drivers/net/dsa/mt7530.c
+index 34e4aadfa705..b75d09783a05 100644
+--- a/drivers/net/dsa/mt7530.c
++++ b/drivers/net/dsa/mt7530.c
+@@ -807,10 +807,15 @@ mt7530_port_set_vlan_aware(struct dsa_switch *ds, int port)
+ PCR_MATRIX_MASK, PCR_MATRIX(MT7530_ALL_MEMBERS));
+
+ /* Trapped into security mode allows packet forwarding through VLAN
+- * table lookup.
++ * table lookup. CPU port is set to fallback mode to let untagged
++ * frames pass through.
+ */
+- mt7530_rmw(priv, MT7530_PCR_P(port), PCR_PORT_VLAN_MASK,
+- MT7530_PORT_SECURITY_MODE);
++ if (dsa_is_cpu_port(ds, port))
++ mt7530_rmw(priv, MT7530_PCR_P(port), PCR_PORT_VLAN_MASK,
++ MT7530_PORT_FALLBACK_MODE);
++ else
++ mt7530_rmw(priv, MT7530_PCR_P(port), PCR_PORT_VLAN_MASK,
++ MT7530_PORT_SECURITY_MODE);
+
+ /* Set the port as a user port which is to be able to recognize VID
+ * from incoming packets before fetching entry within the VLAN table.
+diff --git a/drivers/net/dsa/mt7530.h b/drivers/net/dsa/mt7530.h
+index 82af4d2d406e..14de60d0b9ca 100644
+--- a/drivers/net/dsa/mt7530.h
++++ b/drivers/net/dsa/mt7530.h
+@@ -153,6 +153,12 @@ enum mt7530_port_mode {
+ /* Port Matrix Mode: Frames are forwarded by the PCR_MATRIX members. */
+ MT7530_PORT_MATRIX_MODE = PORT_VLAN(0),
+
++ /* Fallback Mode: Forward received frames with ingress ports that do
++ * not belong to the VLAN member. Frames whose VID is not listed on
++ * the VLAN table are forwarded by the PCR_MATRIX members.
++ */
++ MT7530_PORT_FALLBACK_MODE = PORT_VLAN(1),
++
+ /* Security Mode: Discard any frame due to ingress membership
+ * violation or VID missed on the VLAN table.
+ */
+diff --git a/drivers/net/wireless/cisco/airo.c b/drivers/net/wireless/cisco/airo.c
+index 8363f91df7ea..827bb6d74815 100644
+--- a/drivers/net/wireless/cisco/airo.c
++++ b/drivers/net/wireless/cisco/airo.c
+@@ -1925,6 +1925,10 @@ static netdev_tx_t mpi_start_xmit(struct sk_buff *skb,
+ airo_print_err(dev->name, "%s: skb == NULL!",__func__);
+ return NETDEV_TX_OK;
+ }
++ if (skb_padto(skb, ETH_ZLEN)) {
++ dev->stats.tx_dropped++;
++ return NETDEV_TX_OK;
++ }
+ npacks = skb_queue_len (&ai->txq);
+
+ if (npacks >= MAXTXQ - 1) {
+@@ -2127,6 +2131,10 @@ static netdev_tx_t airo_start_xmit(struct sk_buff *skb,
+ airo_print_err(dev->name, "%s: skb == NULL!", __func__);
+ return NETDEV_TX_OK;
+ }
++ if (skb_padto(skb, ETH_ZLEN)) {
++ dev->stats.tx_dropped++;
++ return NETDEV_TX_OK;
++ }
+
+ /* Find a vacant FID */
+ for( i = 0; i < MAX_FIDS / 2 && (fids[i] & 0xffff0000); i++ );
+@@ -2201,6 +2209,10 @@ static netdev_tx_t airo_start_xmit11(struct sk_buff *skb,
+ airo_print_err(dev->name, "%s: skb == NULL!", __func__);
+ return NETDEV_TX_OK;
+ }
++ if (skb_padto(skb, ETH_ZLEN)) {
++ dev->stats.tx_dropped++;
++ return NETDEV_TX_OK;
++ }
+
+ /* Find a vacant FID */
+ for( i = MAX_FIDS / 2; i < MAX_FIDS && (fids[i] & 0xffff0000); i++ );
+diff --git a/drivers/net/wireless/intersil/p54/p54usb.c b/drivers/net/wireless/intersil/p54/p54usb.c
+index b94764c88750..ff0e30c0c14c 100644
+--- a/drivers/net/wireless/intersil/p54/p54usb.c
++++ b/drivers/net/wireless/intersil/p54/p54usb.c
+@@ -61,6 +61,7 @@ static const struct usb_device_id p54u_table[] = {
+ {USB_DEVICE(0x0db0, 0x6826)}, /* MSI UB54G (MS-6826) */
+ {USB_DEVICE(0x107b, 0x55f2)}, /* Gateway WGU-210 (Gemtek) */
+ {USB_DEVICE(0x124a, 0x4023)}, /* Shuttle PN15, Airvast WM168g, IOGear GWU513 */
++ {USB_DEVICE(0x124a, 0x4026)}, /* AirVasT USB wireless device */
+ {USB_DEVICE(0x1435, 0x0210)}, /* Inventel UR054G */
+ {USB_DEVICE(0x15a9, 0x0002)}, /* Gemtek WUBI-100GW 802.11g */
+ {USB_DEVICE(0x1630, 0x0005)}, /* 2Wire 802.11g USB (v1) / Z-Com */
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76x02.h b/drivers/net/wireless/mediatek/mt76/mt76x02.h
+index 23040c193ca5..830532b85b58 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76x02.h
++++ b/drivers/net/wireless/mediatek/mt76/mt76x02.h
+@@ -216,6 +216,7 @@ static inline bool is_mt76x0(struct mt76x02_dev *dev)
+ static inline bool is_mt76x2(struct mt76x02_dev *dev)
+ {
+ return mt76_chip(&dev->mt76) == 0x7612 ||
++ mt76_chip(&dev->mt76) == 0x7632 ||
+ mt76_chip(&dev->mt76) == 0x7662 ||
+ mt76_chip(&dev->mt76) == 0x7602;
+ }
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76x2/usb.c b/drivers/net/wireless/mediatek/mt76/mt76x2/usb.c
+index eafa283ca699..6376734282b7 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76x2/usb.c
++++ b/drivers/net/wireless/mediatek/mt76/mt76x2/usb.c
+@@ -18,6 +18,7 @@ static const struct usb_device_id mt76x2u_device_table[] = {
+ { USB_DEVICE(0x7392, 0xb711) }, /* Edimax EW 7722 UAC */
+ { USB_DEVICE(0x0846, 0x9053) }, /* Netgear A6210 */
+ { USB_DEVICE(0x045e, 0x02e6) }, /* XBox One Wireless Adapter */
++ { USB_DEVICE(0x045e, 0x02fe) }, /* XBox One Wireless Adapter */
+ { },
+ };
+
+diff --git a/drivers/staging/media/ipu3/include/intel-ipu3.h b/drivers/staging/media/ipu3/include/intel-ipu3.h
+index 1c9c3ba4d518..a607b0158c81 100644
+--- a/drivers/staging/media/ipu3/include/intel-ipu3.h
++++ b/drivers/staging/media/ipu3/include/intel-ipu3.h
+@@ -450,7 +450,7 @@ struct ipu3_uapi_awb_fr_config_s {
+ __u32 bayer_sign;
+ __u8 bayer_nf;
+ __u8 reserved2[7];
+-} __attribute__((aligned(32))) __packed;
++} __packed;
+
+ /**
+ * struct ipu3_uapi_4a_config - 4A config
+@@ -466,7 +466,8 @@ struct ipu3_uapi_4a_config {
+ struct ipu3_uapi_ae_grid_config ae_grd_config;
+ __u8 padding[20];
+ struct ipu3_uapi_af_config_s af_config;
+- struct ipu3_uapi_awb_fr_config_s awb_fr_config;
++ struct ipu3_uapi_awb_fr_config_s awb_fr_config
++ __attribute__((aligned(32)));
+ } __packed;
+
+ /**
+@@ -2477,7 +2478,7 @@ struct ipu3_uapi_acc_param {
+ struct ipu3_uapi_yuvp1_yds_config yds2 __attribute__((aligned(32)));
+ struct ipu3_uapi_yuvp2_tcc_static_config tcc __attribute__((aligned(32)));
+ struct ipu3_uapi_anr_config anr;
+- struct ipu3_uapi_awb_fr_config_s awb_fr __attribute__((aligned(32)));
++ struct ipu3_uapi_awb_fr_config_s awb_fr;
+ struct ipu3_uapi_ae_config ae;
+ struct ipu3_uapi_af_config_s af;
+ struct ipu3_uapi_awb_config awb;
+diff --git a/include/uapi/linux/mmc/ioctl.h b/include/uapi/linux/mmc/ioctl.h
+index 00c08120f3ba..27a39847d55c 100644
+--- a/include/uapi/linux/mmc/ioctl.h
++++ b/include/uapi/linux/mmc/ioctl.h
+@@ -3,6 +3,7 @@
+ #define LINUX_MMC_IOCTL_H
+
+ #include <linux/types.h>
++#include <linux/major.h>
+
+ struct mmc_ioc_cmd {
+ /*
+diff --git a/kernel/relay.c b/kernel/relay.c
+index ade14fb7ce2e..4b760ec16342 100644
+--- a/kernel/relay.c
++++ b/kernel/relay.c
+@@ -581,6 +581,11 @@ struct rchan *relay_open(const char *base_filename,
+ return NULL;
+
+ chan->buf = alloc_percpu(struct rchan_buf *);
++ if (!chan->buf) {
++ kfree(chan);
++ return NULL;
++ }
++
+ chan->version = RELAYFS_CHANNEL_VERSION;
+ chan->n_subbufs = n_subbufs;
+ chan->subbuf_size = subbuf_size;
+diff --git a/mm/mremap.c b/mm/mremap.c
+index 6aa6ea605068..57b1f999f789 100644
+--- a/mm/mremap.c
++++ b/mm/mremap.c
+@@ -266,7 +266,7 @@ unsigned long move_page_tables(struct vm_area_struct *vma,
+ new_pmd = alloc_new_pmd(vma->vm_mm, vma, new_addr);
+ if (!new_pmd)
+ break;
+- if (is_swap_pmd(*old_pmd) || pmd_trans_huge(*old_pmd)) {
++ if (is_swap_pmd(*old_pmd) || pmd_trans_huge(*old_pmd) || pmd_devmap(*old_pmd)) {
+ if (extent == HPAGE_PMD_SIZE) {
+ bool moved;
+ /* See comment in move_ptes() */
* [gentoo-commits] proj/linux-patches:5.7 commit in: /
@ 2020-06-10 19:41 Mike Pagano
From: Mike Pagano @ 2020-06-10 19:41 UTC (permalink / raw
To: gentoo-commits
commit: 43d4f67830ec3ca200452eb7207772b0cef4d981
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Jun 10 19:41:44 2020 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Jun 10 19:41:44 2020 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=43d4f678
Linux patch 5.7.2
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1001_linux-5.7.2.patch | 1336 ++++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 1340 insertions(+)
diff --git a/0000_README b/0000_README
index a2b7036..f0fc6ef 100644
--- a/0000_README
+++ b/0000_README
@@ -47,6 +47,10 @@ Patch: 1000_linux-5.7.1.patch
From: http://www.kernel.org
Desc: Linux 5.7.1
+Patch: 1001_linux-5.7.2.patch
+From: http://www.kernel.org
+Desc: Linux 5.7.2
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1001_linux-5.7.2.patch b/1001_linux-5.7.2.patch
new file mode 100644
index 0000000..ff7e6ad
--- /dev/null
+++ b/1001_linux-5.7.2.patch
@@ -0,0 +1,1336 @@
+diff --git a/Documentation/ABI/testing/sysfs-devices-system-cpu b/Documentation/ABI/testing/sysfs-devices-system-cpu
+index 2e0e3b45d02a..b39531a3c5bc 100644
+--- a/Documentation/ABI/testing/sysfs-devices-system-cpu
++++ b/Documentation/ABI/testing/sysfs-devices-system-cpu
+@@ -492,6 +492,7 @@ What: /sys/devices/system/cpu/vulnerabilities
+ /sys/devices/system/cpu/vulnerabilities/spec_store_bypass
+ /sys/devices/system/cpu/vulnerabilities/l1tf
+ /sys/devices/system/cpu/vulnerabilities/mds
++ /sys/devices/system/cpu/vulnerabilities/srbds
+ /sys/devices/system/cpu/vulnerabilities/tsx_async_abort
+ /sys/devices/system/cpu/vulnerabilities/itlb_multihit
+ Date: January 2018
+diff --git a/Documentation/admin-guide/hw-vuln/index.rst b/Documentation/admin-guide/hw-vuln/index.rst
+index 0795e3c2643f..ca4dbdd9016d 100644
+--- a/Documentation/admin-guide/hw-vuln/index.rst
++++ b/Documentation/admin-guide/hw-vuln/index.rst
+@@ -14,3 +14,4 @@ are configurable at compile, boot or run time.
+ mds
+ tsx_async_abort
+ multihit.rst
++ special-register-buffer-data-sampling.rst
+diff --git a/Documentation/admin-guide/hw-vuln/special-register-buffer-data-sampling.rst b/Documentation/admin-guide/hw-vuln/special-register-buffer-data-sampling.rst
+new file mode 100644
+index 000000000000..47b1b3afac99
+--- /dev/null
++++ b/Documentation/admin-guide/hw-vuln/special-register-buffer-data-sampling.rst
+@@ -0,0 +1,149 @@
++.. SPDX-License-Identifier: GPL-2.0
++
++SRBDS - Special Register Buffer Data Sampling
++=============================================
++
++SRBDS is a hardware vulnerability that allows MDS :doc:`mds` techniques to
++infer values returned from special register accesses. Special register
++accesses are accesses to off core registers. According to Intel's evaluation,
++the special register reads that have a security expectation of privacy are
++RDRAND, RDSEED and SGX EGETKEY.
++
++When RDRAND, RDSEED and EGETKEY instructions are used, the data is moved
++to the core through the special register mechanism that is susceptible
++to MDS attacks.
++
++Affected processors
++--------------------
++Core models (desktop, mobile, Xeon-E3) that implement RDRAND and/or RDSEED may
++be affected.
++
++A processor is affected by SRBDS if its Family_Model and stepping is
++in the following list, with the exception of the listed processors
++exporting MDS_NO while Intel TSX is available yet not enabled. The
++latter class of processors are only affected when Intel TSX is enabled
++by software using TSX_CTRL_MSR otherwise they are not affected.
++
++ ============= ============ ========
++ common name Family_Model Stepping
++ ============= ============ ========
++ IvyBridge 06_3AH All
++
++ Haswell 06_3CH All
++ Haswell_L 06_45H All
++ Haswell_G 06_46H All
++
++ Broadwell_G 06_47H All
++ Broadwell 06_3DH All
++
++ Skylake_L 06_4EH All
++ Skylake 06_5EH All
++
++ Kabylake_L 06_8EH <= 0xC
++ Kabylake 06_9EH <= 0xD
++ ============= ============ ========
++
++Related CVEs
++------------
++
++The following CVE entry is related to this SRBDS issue:
++
++ ============== ===== =====================================
++ CVE-2020-0543 SRBDS Special Register Buffer Data Sampling
++ ============== ===== =====================================
++
++Attack scenarios
++----------------
++An unprivileged user can extract values returned from RDRAND and RDSEED
++executed on another core or sibling thread using MDS techniques.
++
++
++Mitigation mechanism
++-------------------
++Intel will release microcode updates that modify the RDRAND, RDSEED, and
++EGETKEY instructions to overwrite secret special register data in the shared
++staging buffer before the secret data can be accessed by another logical
++processor.
++
++During execution of the RDRAND, RDSEED, or EGETKEY instructions, off-core
++accesses from other logical processors will be delayed until the special
++register read is complete and the secret data in the shared staging buffer is
++overwritten.
++
++This has three effects on performance:
++
++#. RDRAND, RDSEED, or EGETKEY instructions have higher latency.
++
++#. Executing RDRAND at the same time on multiple logical processors will be
++ serialized, resulting in an overall reduction in the maximum RDRAND
++ bandwidth.
++
++#. Executing RDRAND, RDSEED or EGETKEY will delay memory accesses from other
++ logical processors that miss their core caches, with an impact similar to
++ legacy locked cache-line-split accesses.
++
++The microcode updates provide an opt-out mechanism (RNGDS_MITG_DIS) to disable
++the mitigation for RDRAND and RDSEED instructions executed outside of Intel
++Software Guard Extensions (Intel SGX) enclaves. On logical processors that
++disable the mitigation using this opt-out mechanism, RDRAND and RDSEED do not
++take longer to execute and do not impact performance of sibling logical
++processors memory accesses. The opt-out mechanism does not affect Intel SGX
++enclaves (including execution of RDRAND or RDSEED inside an enclave, as well
++as EGETKEY execution).
++
++IA32_MCU_OPT_CTRL MSR Definition
++--------------------------------
++Along with the mitigation for this issue, Intel added a new thread-scope
++IA32_MCU_OPT_CTRL MSR (address 0x123). The presence of this MSR and
++RNGDS_MITG_DIS (bit 0) is enumerated by CPUID.(EAX=07H,ECX=0).EDX[SRBDS_CTRL =
++9]==1. This MSR is introduced through the microcode update.
++
++Setting IA32_MCU_OPT_CTRL[0] (RNGDS_MITG_DIS) to 1 for a logical processor
++disables the mitigation for RDRAND and RDSEED executed outside of an Intel SGX
++enclave on that logical processor. Opting out of the mitigation for a
++particular logical processor does not affect the RDRAND and RDSEED mitigations
++for other logical processors.
++
++Note that inside of an Intel SGX enclave, the mitigation is applied regardless
++of the value of RNGDS_MITG_DIS.
++
++Mitigation control on the kernel command line
++---------------------------------------------
++The kernel command line allows control over the SRBDS mitigation at boot time
++with the "srbds=" option, which accepts the following value:
++
++ ============= =============================================================
++ off This option disables SRBDS mitigation for RDRAND and RDSEED on
++ affected platforms.
++ ============= =============================================================
++
++SRBDS System Information
++------------------------
++The Linux kernel provides vulnerability status information through sysfs. For
++SRBDS this can be accessed by the following sysfs file:
++/sys/devices/system/cpu/vulnerabilities/srbds
++
++The possible values contained in this file are:
++
++ ============================== =============================================
++ Not affected Processor not vulnerable
++ Vulnerable Processor vulnerable and mitigation disabled
++ Vulnerable: No microcode Processor vulnerable and microcode is missing
++ mitigation
++ Mitigation: Microcode Processor is vulnerable and mitigation is in
++ effect.
++ Mitigation: TSX disabled Processor is only vulnerable when TSX is
++ enabled while this system was booted with TSX
++ disabled.
++ Unknown: Dependent on
++ hypervisor status Running on virtual guest processor that is
++ affected but with no way to know if host
++ processor is mitigated or vulnerable.
++ ============================== =============================================
++
++SRBDS Default mitigation
++------------------------
++This new microcode serializes processor access during execution of RDRAND
++or RDSEED and ensures that the shared staging buffer is overwritten before
++it is released for reuse. Use the "srbds=off" kernel command line option to
++disable the mitigation for RDRAND and RDSEED.
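
An illustrative userspace sketch (editorial, not part of the patch): the table above describes the strings the kernel exposes through the `srbds` sysfs file. The helper below is generic and the file name in the usage note is the real sysfs path; the function name is an assumption for illustration.

```c
/* Sketch: read the first line of a sysfs-style single-line file, e.g.
 * /sys/devices/system/cpu/vulnerabilities/srbds, trimming the newline. */
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Returns 0 on success (buf holds the line), -1 on error. */
static int read_first_line(const char *path, char *buf, size_t len)
{
	FILE *f = fopen(path, "r");

	if (!f)
		return -1;
	if (!fgets(buf, (int)len, f)) {
		fclose(f);
		return -1;
	}
	fclose(f);
	buf[strcspn(buf, "\n")] = '\0';
	return 0;
}
```

Calling `read_first_line("/sys/devices/system/cpu/vulnerabilities/srbds", buf, sizeof(buf))` on a kernel with this patch would yield one of the status strings from the table above, e.g. "Mitigation: Microcode".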
+diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
+index 7bc83f3d9bdf..5e2ce88d6eda 100644
+--- a/Documentation/admin-guide/kernel-parameters.txt
++++ b/Documentation/admin-guide/kernel-parameters.txt
+@@ -4757,6 +4757,26 @@
+ the kernel will oops in either "warn" or "fatal"
+ mode.
+
++ srbds= [X86,INTEL]
++ Control the Special Register Buffer Data Sampling
++ (SRBDS) mitigation.
++
++ Certain CPUs are vulnerable to an MDS-like
++ exploit which can leak bits from the random
++ number generator.
++
++ By default, this issue is mitigated by
++ microcode. However, the microcode fix can cause
++ the RDRAND and RDSEED instructions to become
++ much slower. Among other effects, this will
++ result in reduced throughput from /dev/urandom.
++
++ The microcode mitigation can be disabled with
++ the following option:
++
++ off: Disable mitigation and remove
++ performance impact to RDRAND and RDSEED
++
+ srcutree.counter_wrap_check [KNL]
+ Specifies how frequently to check for
+ grace-period sequence counter wrap for the
+diff --git a/Makefile b/Makefile
+index 2dd4f37c9f10..a6dda75d18cd 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 7
+-SUBLEVEL = 1
++SUBLEVEL = 2
+ EXTRAVERSION =
+ NAME = Kleptomaniac Octopus
+
+diff --git a/arch/x86/include/asm/cpu_device_id.h b/arch/x86/include/asm/cpu_device_id.h
+index cf3d621c6892..10426cd56dca 100644
+--- a/arch/x86/include/asm/cpu_device_id.h
++++ b/arch/x86/include/asm/cpu_device_id.h
+@@ -20,12 +20,14 @@
+ #define X86_CENTAUR_FAM6_C7_D 0xd
+ #define X86_CENTAUR_FAM6_NANO 0xf
+
++#define X86_STEPPINGS(mins, maxs) GENMASK(maxs, mins)
+ /**
+- * X86_MATCH_VENDOR_FAM_MODEL_FEATURE - Base macro for CPU matching
++ * X86_MATCH_VENDOR_FAM_MODEL_STEPPINGS_FEATURE - Base macro for CPU matching
+ * @_vendor: The vendor name, e.g. INTEL, AMD, HYGON, ..., ANY
+ * The name is expanded to X86_VENDOR_@_vendor
+ * @_family: The family number or X86_FAMILY_ANY
+ * @_model: The model number, model constant or X86_MODEL_ANY
++ * @_steppings: Bitmask for steppings, stepping constant or X86_STEPPING_ANY
+ * @_feature: A X86_FEATURE bit or X86_FEATURE_ANY
+ * @_data: Driver specific data or NULL. The internal storage
+ * format is unsigned long. The supplied value, pointer
+@@ -37,15 +39,34 @@
+ * into another macro at the usage site for good reasons, then please
+ * start this local macro with X86_MATCH to allow easy grepping.
+ */
+-#define X86_MATCH_VENDOR_FAM_MODEL_FEATURE(_vendor, _family, _model, \
+- _feature, _data) { \
++#define X86_MATCH_VENDOR_FAM_MODEL_STEPPINGS_FEATURE(_vendor, _family, _model, \
++ _steppings, _feature, _data) { \
+ .vendor = X86_VENDOR_##_vendor, \
+ .family = _family, \
+ .model = _model, \
++ .steppings = _steppings, \
+ .feature = _feature, \
+ .driver_data = (unsigned long) _data \
+ }
+
++/**
++ * X86_MATCH_VENDOR_FAM_MODEL_FEATURE - Macro for CPU matching
++ * @_vendor: The vendor name, e.g. INTEL, AMD, HYGON, ..., ANY
++ * The name is expanded to X86_VENDOR_@_vendor
++ * @_family: The family number or X86_FAMILY_ANY
++ * @_model: The model number, model constant or X86_MODEL_ANY
++ * @_feature: A X86_FEATURE bit or X86_FEATURE_ANY
++ * @_data: Driver specific data or NULL. The internal storage
++ * format is unsigned long. The supplied value, pointer
++ * etc. is casted to unsigned long internally.
++ *
++ * The steppings argument of X86_MATCH_VENDOR_FAM_MODEL_STEPPINGS_FEATURE() is
++ * set to wildcards.
++ */
++#define X86_MATCH_VENDOR_FAM_MODEL_FEATURE(vendor, family, model, feature, data) \
++ X86_MATCH_VENDOR_FAM_MODEL_STEPPINGS_FEATURE(vendor, family, model, \
++ X86_STEPPING_ANY, feature, data)
++
+ /**
+ * X86_MATCH_VENDOR_FAM_FEATURE - Macro for matching vendor, family and CPU feature
+ * @vendor: The vendor name, e.g. INTEL, AMD, HYGON, ..., ANY
+diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
+index db189945e9b0..02dabc9e77b0 100644
+--- a/arch/x86/include/asm/cpufeatures.h
++++ b/arch/x86/include/asm/cpufeatures.h
+@@ -362,6 +362,7 @@
+ #define X86_FEATURE_AVX512_4FMAPS (18*32+ 3) /* AVX-512 Multiply Accumulation Single precision */
+ #define X86_FEATURE_FSRM (18*32+ 4) /* Fast Short Rep Mov */
+ #define X86_FEATURE_AVX512_VP2INTERSECT (18*32+ 8) /* AVX-512 Intersect for D/Q */
++#define X86_FEATURE_SRBDS_CTRL (18*32+ 9) /* "" SRBDS mitigation MSR available */
+ #define X86_FEATURE_MD_CLEAR (18*32+10) /* VERW clears CPU buffers */
+ #define X86_FEATURE_TSX_FORCE_ABORT (18*32+13) /* "" TSX_FORCE_ABORT */
+ #define X86_FEATURE_PCONFIG (18*32+18) /* Intel PCONFIG */
+@@ -407,5 +408,6 @@
+ #define X86_BUG_SWAPGS X86_BUG(21) /* CPU is affected by speculation through SWAPGS */
+ #define X86_BUG_TAA X86_BUG(22) /* CPU is affected by TSX Async Abort(TAA) */
+ #define X86_BUG_ITLB_MULTIHIT X86_BUG(23) /* CPU may incur MCE during certain page attribute changes */
++#define X86_BUG_SRBDS X86_BUG(24) /* CPU may leak RNG bits if not mitigated */
+
+ #endif /* _ASM_X86_CPUFEATURES_H */
+diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
+index 12c9684d59ba..3efde600a674 100644
+--- a/arch/x86/include/asm/msr-index.h
++++ b/arch/x86/include/asm/msr-index.h
+@@ -128,6 +128,10 @@
+ #define TSX_CTRL_RTM_DISABLE BIT(0) /* Disable RTM feature */
+ #define TSX_CTRL_CPUID_CLEAR BIT(1) /* Disable TSX enumeration */
+
++/* SRBDS support */
++#define MSR_IA32_MCU_OPT_CTRL 0x00000123
++#define RNGDS_MITG_DIS BIT(0)
++
+ #define MSR_IA32_SYSENTER_CS 0x00000174
+ #define MSR_IA32_SYSENTER_ESP 0x00000175
+ #define MSR_IA32_SYSENTER_EIP 0x00000176
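
A small editorial sketch of how the new MSR bit defined above is interpreted. The MSR address (0x123) and bit position (bit 0) come from the patch; the helper name is illustrative, not a kernel API.

```c
/* Sketch: decode the RNGDS_MITG_DIS bit of IA32_MCU_OPT_CTRL (MSR 0x123). */
#include <assert.h>
#include <stdint.h>

#define MSR_IA32_MCU_OPT_CTRL	0x00000123
#define RNGDS_MITG_DIS		(1ULL << 0)	/* BIT(0) in kernel terms */

/* Bit 0 set means the RDRAND/RDSEED mitigation is opted out on this
 * logical processor; SGX enclaves remain mitigated regardless. */
static inline int srbds_mitigation_disabled(uint64_t mcu_opt_ctrl)
{
	return (mcu_opt_ctrl & RNGDS_MITG_DIS) != 0;
}
```

This mirrors what update_srbds_msr() below does in reverse: the kernel sets or clears that bit with rdmsrl()/wrmsrl() depending on the chosen mitigation.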
+diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
+index ed54b3b21c39..56978cb06149 100644
+--- a/arch/x86/kernel/cpu/bugs.c
++++ b/arch/x86/kernel/cpu/bugs.c
+@@ -41,6 +41,7 @@ static void __init l1tf_select_mitigation(void);
+ static void __init mds_select_mitigation(void);
+ static void __init mds_print_mitigation(void);
+ static void __init taa_select_mitigation(void);
++static void __init srbds_select_mitigation(void);
+
+ /* The base value of the SPEC_CTRL MSR that always has to be preserved. */
+ u64 x86_spec_ctrl_base;
+@@ -108,6 +109,7 @@ void __init check_bugs(void)
+ l1tf_select_mitigation();
+ mds_select_mitigation();
+ taa_select_mitigation();
++ srbds_select_mitigation();
+
+ /*
+ * As MDS and TAA mitigations are inter-related, print MDS
+@@ -397,6 +399,97 @@ static int __init tsx_async_abort_parse_cmdline(char *str)
+ }
+ early_param("tsx_async_abort", tsx_async_abort_parse_cmdline);
+
++#undef pr_fmt
++#define pr_fmt(fmt) "SRBDS: " fmt
++
++enum srbds_mitigations {
++ SRBDS_MITIGATION_OFF,
++ SRBDS_MITIGATION_UCODE_NEEDED,
++ SRBDS_MITIGATION_FULL,
++ SRBDS_MITIGATION_TSX_OFF,
++ SRBDS_MITIGATION_HYPERVISOR,
++};
++
++static enum srbds_mitigations srbds_mitigation __ro_after_init = SRBDS_MITIGATION_FULL;
++
++static const char * const srbds_strings[] = {
++ [SRBDS_MITIGATION_OFF] = "Vulnerable",
++ [SRBDS_MITIGATION_UCODE_NEEDED] = "Vulnerable: No microcode",
++ [SRBDS_MITIGATION_FULL] = "Mitigation: Microcode",
++ [SRBDS_MITIGATION_TSX_OFF] = "Mitigation: TSX disabled",
++ [SRBDS_MITIGATION_HYPERVISOR] = "Unknown: Dependent on hypervisor status",
++};
++
++static bool srbds_off;
++
++void update_srbds_msr(void)
++{
++ u64 mcu_ctrl;
++
++ if (!boot_cpu_has_bug(X86_BUG_SRBDS))
++ return;
++
++ if (boot_cpu_has(X86_FEATURE_HYPERVISOR))
++ return;
++
++ if (srbds_mitigation == SRBDS_MITIGATION_UCODE_NEEDED)
++ return;
++
++ rdmsrl(MSR_IA32_MCU_OPT_CTRL, mcu_ctrl);
++
++ switch (srbds_mitigation) {
++ case SRBDS_MITIGATION_OFF:
++ case SRBDS_MITIGATION_TSX_OFF:
++ mcu_ctrl |= RNGDS_MITG_DIS;
++ break;
++ case SRBDS_MITIGATION_FULL:
++ mcu_ctrl &= ~RNGDS_MITG_DIS;
++ break;
++ default:
++ break;
++ }
++
++ wrmsrl(MSR_IA32_MCU_OPT_CTRL, mcu_ctrl);
++}
++
++static void __init srbds_select_mitigation(void)
++{
++ u64 ia32_cap;
++
++ if (!boot_cpu_has_bug(X86_BUG_SRBDS))
++ return;
++
++ /*
++ * Check to see if this is one of the MDS_NO systems supporting
++ * TSX that are only exposed to SRBDS when TSX is enabled.
++ */
++ ia32_cap = x86_read_arch_cap_msr();
++ if ((ia32_cap & ARCH_CAP_MDS_NO) && !boot_cpu_has(X86_FEATURE_RTM))
++ srbds_mitigation = SRBDS_MITIGATION_TSX_OFF;
++ else if (boot_cpu_has(X86_FEATURE_HYPERVISOR))
++ srbds_mitigation = SRBDS_MITIGATION_HYPERVISOR;
++ else if (!boot_cpu_has(X86_FEATURE_SRBDS_CTRL))
++ srbds_mitigation = SRBDS_MITIGATION_UCODE_NEEDED;
++ else if (cpu_mitigations_off() || srbds_off)
++ srbds_mitigation = SRBDS_MITIGATION_OFF;
++
++ update_srbds_msr();
++ pr_info("%s\n", srbds_strings[srbds_mitigation]);
++}
++
++static int __init srbds_parse_cmdline(char *str)
++{
++ if (!str)
++ return -EINVAL;
++
++ if (!boot_cpu_has_bug(X86_BUG_SRBDS))
++ return 0;
++
++ srbds_off = !strcmp(str, "off");
++ return 0;
++}
++early_param("srbds", srbds_parse_cmdline);
++
+ #undef pr_fmt
+ #define pr_fmt(fmt) "Spectre V1 : " fmt
+
+@@ -1528,6 +1621,11 @@ static char *ibpb_state(void)
+ return "";
+ }
+
++static ssize_t srbds_show_state(char *buf)
++{
++ return sprintf(buf, "%s\n", srbds_strings[srbds_mitigation]);
++}
++
+ static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr,
+ char *buf, unsigned int bug)
+ {
+@@ -1572,6 +1670,9 @@ static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr
+ case X86_BUG_ITLB_MULTIHIT:
+ return itlb_multihit_show_state(buf);
+
++ case X86_BUG_SRBDS:
++ return srbds_show_state(buf);
++
+ default:
+ break;
+ }
+@@ -1618,4 +1719,9 @@ ssize_t cpu_show_itlb_multihit(struct device *dev, struct device_attribute *attr
+ {
+ return cpu_show_common(dev, attr, buf, X86_BUG_ITLB_MULTIHIT);
+ }
++
++ssize_t cpu_show_srbds(struct device *dev, struct device_attribute *attr, char *buf)
++{
++ return cpu_show_common(dev, attr, buf, X86_BUG_SRBDS);
++}
+ #endif
+diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
+index bed0cb83fe24..8293ee514975 100644
+--- a/arch/x86/kernel/cpu/common.c
++++ b/arch/x86/kernel/cpu/common.c
+@@ -1075,9 +1075,30 @@ static const __initconst struct x86_cpu_id cpu_vuln_whitelist[] = {
+ {}
+ };
+
+-static bool __init cpu_matches(unsigned long which)
++#define VULNBL_INTEL_STEPPINGS(model, steppings, issues) \
++ X86_MATCH_VENDOR_FAM_MODEL_STEPPINGS_FEATURE(INTEL, 6, \
++ INTEL_FAM6_##model, steppings, \
++ X86_FEATURE_ANY, issues)
++
++#define SRBDS BIT(0)
++
++static const struct x86_cpu_id cpu_vuln_blacklist[] __initconst = {
++ VULNBL_INTEL_STEPPINGS(IVYBRIDGE, X86_STEPPING_ANY, SRBDS),
++ VULNBL_INTEL_STEPPINGS(HASWELL, X86_STEPPING_ANY, SRBDS),
++ VULNBL_INTEL_STEPPINGS(HASWELL_L, X86_STEPPING_ANY, SRBDS),
++ VULNBL_INTEL_STEPPINGS(HASWELL_G, X86_STEPPING_ANY, SRBDS),
++ VULNBL_INTEL_STEPPINGS(BROADWELL_G, X86_STEPPING_ANY, SRBDS),
++ VULNBL_INTEL_STEPPINGS(BROADWELL, X86_STEPPING_ANY, SRBDS),
++ VULNBL_INTEL_STEPPINGS(SKYLAKE_L, X86_STEPPING_ANY, SRBDS),
++ VULNBL_INTEL_STEPPINGS(SKYLAKE, X86_STEPPING_ANY, SRBDS),
++ VULNBL_INTEL_STEPPINGS(KABYLAKE_L, X86_STEPPINGS(0x0, 0xC), SRBDS),
++ VULNBL_INTEL_STEPPINGS(KABYLAKE, X86_STEPPINGS(0x0, 0xD), SRBDS),
++ {}
++};
++
++static bool __init cpu_matches(const struct x86_cpu_id *table, unsigned long which)
+ {
+- const struct x86_cpu_id *m = x86_match_cpu(cpu_vuln_whitelist);
++ const struct x86_cpu_id *m = x86_match_cpu(table);
+
+ return m && !!(m->driver_data & which);
+ }
+@@ -1097,31 +1118,34 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
+ u64 ia32_cap = x86_read_arch_cap_msr();
+
+ /* Set ITLB_MULTIHIT bug if cpu is not in the whitelist and not mitigated */
+- if (!cpu_matches(NO_ITLB_MULTIHIT) && !(ia32_cap & ARCH_CAP_PSCHANGE_MC_NO))
++ if (!cpu_matches(cpu_vuln_whitelist, NO_ITLB_MULTIHIT) &&
++ !(ia32_cap & ARCH_CAP_PSCHANGE_MC_NO))
+ setup_force_cpu_bug(X86_BUG_ITLB_MULTIHIT);
+
+- if (cpu_matches(NO_SPECULATION))
++ if (cpu_matches(cpu_vuln_whitelist, NO_SPECULATION))
+ return;
+
+ setup_force_cpu_bug(X86_BUG_SPECTRE_V1);
+
+- if (!cpu_matches(NO_SPECTRE_V2))
++ if (!cpu_matches(cpu_vuln_whitelist, NO_SPECTRE_V2))
+ setup_force_cpu_bug(X86_BUG_SPECTRE_V2);
+
+- if (!cpu_matches(NO_SSB) && !(ia32_cap & ARCH_CAP_SSB_NO) &&
++ if (!cpu_matches(cpu_vuln_whitelist, NO_SSB) &&
++ !(ia32_cap & ARCH_CAP_SSB_NO) &&
+ !cpu_has(c, X86_FEATURE_AMD_SSB_NO))
+ setup_force_cpu_bug(X86_BUG_SPEC_STORE_BYPASS);
+
+ if (ia32_cap & ARCH_CAP_IBRS_ALL)
+ setup_force_cpu_cap(X86_FEATURE_IBRS_ENHANCED);
+
+- if (!cpu_matches(NO_MDS) && !(ia32_cap & ARCH_CAP_MDS_NO)) {
++ if (!cpu_matches(cpu_vuln_whitelist, NO_MDS) &&
++ !(ia32_cap & ARCH_CAP_MDS_NO)) {
+ setup_force_cpu_bug(X86_BUG_MDS);
+- if (cpu_matches(MSBDS_ONLY))
++ if (cpu_matches(cpu_vuln_whitelist, MSBDS_ONLY))
+ setup_force_cpu_bug(X86_BUG_MSBDS_ONLY);
+ }
+
+- if (!cpu_matches(NO_SWAPGS))
++ if (!cpu_matches(cpu_vuln_whitelist, NO_SWAPGS))
+ setup_force_cpu_bug(X86_BUG_SWAPGS);
+
+ /*
+@@ -1139,7 +1163,16 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
+ (ia32_cap & ARCH_CAP_TSX_CTRL_MSR)))
+ setup_force_cpu_bug(X86_BUG_TAA);
+
+- if (cpu_matches(NO_MELTDOWN))
++ /*
++ * SRBDS affects CPUs which support RDRAND or RDSEED and are listed
++ * in the vulnerability blacklist.
++ */
++ if ((cpu_has(c, X86_FEATURE_RDRAND) ||
++ cpu_has(c, X86_FEATURE_RDSEED)) &&
++ cpu_matches(cpu_vuln_blacklist, SRBDS))
++ setup_force_cpu_bug(X86_BUG_SRBDS);
++
++ if (cpu_matches(cpu_vuln_whitelist, NO_MELTDOWN))
+ return;
+
+ /* Rogue Data Cache Load? No! */
+@@ -1148,7 +1181,7 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
+
+ setup_force_cpu_bug(X86_BUG_CPU_MELTDOWN);
+
+- if (cpu_matches(NO_L1TF))
++ if (cpu_matches(cpu_vuln_whitelist, NO_L1TF))
+ return;
+
+ setup_force_cpu_bug(X86_BUG_L1TF);
+@@ -1591,6 +1624,7 @@ void identify_secondary_cpu(struct cpuinfo_x86 *c)
+ mtrr_ap_init();
+ validate_apic_and_package_id(c);
+ x86_spec_ctrl_setup_ap();
++ update_srbds_msr();
+ }
+
+ static __init int setup_noclflush(char *arg)
+diff --git a/arch/x86/kernel/cpu/cpu.h b/arch/x86/kernel/cpu/cpu.h
+index 37fdefd14f28..fb538fccd24c 100644
+--- a/arch/x86/kernel/cpu/cpu.h
++++ b/arch/x86/kernel/cpu/cpu.h
+@@ -77,6 +77,7 @@ extern void detect_ht(struct cpuinfo_x86 *c);
+ unsigned int aperfmperf_get_khz(int cpu);
+
+ extern void x86_spec_ctrl_setup_ap(void);
++extern void update_srbds_msr(void);
+
+ extern u64 x86_read_arch_cap_msr(void);
+
+diff --git a/arch/x86/kernel/cpu/match.c b/arch/x86/kernel/cpu/match.c
+index d3482eb43ff3..ad6776081e60 100644
+--- a/arch/x86/kernel/cpu/match.c
++++ b/arch/x86/kernel/cpu/match.c
+@@ -39,13 +39,18 @@ const struct x86_cpu_id *x86_match_cpu(const struct x86_cpu_id *match)
+ const struct x86_cpu_id *m;
+ struct cpuinfo_x86 *c = &boot_cpu_data;
+
+- for (m = match; m->vendor | m->family | m->model | m->feature; m++) {
++ for (m = match;
++ m->vendor | m->family | m->model | m->steppings | m->feature;
++ m++) {
+ if (m->vendor != X86_VENDOR_ANY && c->x86_vendor != m->vendor)
+ continue;
+ if (m->family != X86_FAMILY_ANY && c->x86 != m->family)
+ continue;
+ if (m->model != X86_MODEL_ANY && c->x86_model != m->model)
+ continue;
++ if (m->steppings != X86_STEPPING_ANY &&
++ !(BIT(c->x86_stepping) & m->steppings))
++ continue;
+ if (m->feature != X86_FEATURE_ANY && !cpu_has(c, m->feature))
+ continue;
+ return m;
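
The match.c change above checks `BIT(c->x86_stepping) & m->steppings` against a mask built by `X86_STEPPINGS(mins, maxs)` (a `GENMASK(maxs, mins)`). A minimal standalone sketch of that matching logic, using illustrative helper names (the kernel versions live in cpu_device_id.h and match.c):

```c
/* Sketch of stepping-mask matching: a contiguous bitmask of steppings,
 * and a match test for one stepping against it. */
#include <assert.h>
#include <stdint.h>

#define X86_STEPPING_ANY	0	/* wildcard, as in the kernel */

/* Equivalent of GENMASK(maxs, mins): bits mins..maxs set. */
static inline uint16_t x86_steppings(unsigned int mins, unsigned int maxs)
{
	return (uint16_t)(((1u << (maxs + 1)) - 1) & ~((1u << mins) - 1));
}

static inline int stepping_matches(uint16_t mask, unsigned int stepping)
{
	if (mask == X86_STEPPING_ANY)
		return 1;
	return !!((1u << stepping) & mask);
}
```

For example, the Kabylake_L blacklist entry "<= 0xC" corresponds to `x86_steppings(0x0, 0xC)`: stepping 0xC matches, stepping 0xD does not.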
+diff --git a/drivers/base/cpu.c b/drivers/base/cpu.c
+index 9a1c00fbbaef..d2136ab9b14a 100644
+--- a/drivers/base/cpu.c
++++ b/drivers/base/cpu.c
+@@ -562,6 +562,12 @@ ssize_t __weak cpu_show_itlb_multihit(struct device *dev,
+ return sprintf(buf, "Not affected\n");
+ }
+
++ssize_t __weak cpu_show_srbds(struct device *dev,
++ struct device_attribute *attr, char *buf)
++{
++ return sprintf(buf, "Not affected\n");
++}
++
+ static DEVICE_ATTR(meltdown, 0444, cpu_show_meltdown, NULL);
+ static DEVICE_ATTR(spectre_v1, 0444, cpu_show_spectre_v1, NULL);
+ static DEVICE_ATTR(spectre_v2, 0444, cpu_show_spectre_v2, NULL);
+@@ -570,6 +576,7 @@ static DEVICE_ATTR(l1tf, 0444, cpu_show_l1tf, NULL);
+ static DEVICE_ATTR(mds, 0444, cpu_show_mds, NULL);
+ static DEVICE_ATTR(tsx_async_abort, 0444, cpu_show_tsx_async_abort, NULL);
+ static DEVICE_ATTR(itlb_multihit, 0444, cpu_show_itlb_multihit, NULL);
++static DEVICE_ATTR(srbds, 0444, cpu_show_srbds, NULL);
+
+ static struct attribute *cpu_root_vulnerabilities_attrs[] = {
+ &dev_attr_meltdown.attr,
+@@ -580,6 +587,7 @@ static struct attribute *cpu_root_vulnerabilities_attrs[] = {
+ &dev_attr_mds.attr,
+ &dev_attr_tsx_async_abort.attr,
+ &dev_attr_itlb_multihit.attr,
++ &dev_attr_srbds.attr,
+ NULL
+ };
+
+diff --git a/drivers/iio/adc/stm32-adc-core.c b/drivers/iio/adc/stm32-adc-core.c
+index 2df88d2b880a..0e2068ec068b 100644
+--- a/drivers/iio/adc/stm32-adc-core.c
++++ b/drivers/iio/adc/stm32-adc-core.c
+@@ -65,12 +65,14 @@ struct stm32_adc_priv;
+ * @clk_sel: clock selection routine
+ * @max_clk_rate_hz: maximum analog clock rate (Hz, from datasheet)
+ * @has_syscfg: SYSCFG capability flags
++ * @num_irqs: number of interrupt lines
+ */
+ struct stm32_adc_priv_cfg {
+ const struct stm32_adc_common_regs *regs;
+ int (*clk_sel)(struct platform_device *, struct stm32_adc_priv *);
+ u32 max_clk_rate_hz;
+ unsigned int has_syscfg;
++ unsigned int num_irqs;
+ };
+
+ /**
+@@ -375,21 +377,15 @@ static int stm32_adc_irq_probe(struct platform_device *pdev,
+ struct device_node *np = pdev->dev.of_node;
+ unsigned int i;
+
+- for (i = 0; i < STM32_ADC_MAX_ADCS; i++) {
++ /*
++ * Interrupt(s) must be provided, depending on the compatible:
++ * - stm32f4/h7 shares a common interrupt line.
++ * - stm32mp1, has one line per ADC
++	 * - stm32mp1 has one line per ADC
++ for (i = 0; i < priv->cfg->num_irqs; i++) {
+ priv->irq[i] = platform_get_irq(pdev, i);
+- if (priv->irq[i] < 0) {
+- /*
+- * At least one interrupt must be provided, make others
+- * optional:
+- * - stm32f4/h7 shares a common interrupt.
+- * - stm32mp1, has one line per ADC (either for ADC1,
+- * ADC2 or both).
+- */
+- if (i && priv->irq[i] == -ENXIO)
+- continue;
+-
++ if (priv->irq[i] < 0)
+ return priv->irq[i];
+- }
+ }
+
+ priv->domain = irq_domain_add_simple(np, STM32_ADC_MAX_ADCS, 0,
+@@ -400,9 +396,7 @@ static int stm32_adc_irq_probe(struct platform_device *pdev,
+ return -ENOMEM;
+ }
+
+- for (i = 0; i < STM32_ADC_MAX_ADCS; i++) {
+- if (priv->irq[i] < 0)
+- continue;
++ for (i = 0; i < priv->cfg->num_irqs; i++) {
+ irq_set_chained_handler(priv->irq[i], stm32_adc_irq_handler);
+ irq_set_handler_data(priv->irq[i], priv);
+ }
+@@ -420,11 +414,8 @@ static void stm32_adc_irq_remove(struct platform_device *pdev,
+ irq_dispose_mapping(irq_find_mapping(priv->domain, hwirq));
+ irq_domain_remove(priv->domain);
+
+- for (i = 0; i < STM32_ADC_MAX_ADCS; i++) {
+- if (priv->irq[i] < 0)
+- continue;
++ for (i = 0; i < priv->cfg->num_irqs; i++)
+ irq_set_chained_handler(priv->irq[i], NULL);
+- }
+ }
+
+ static int stm32_adc_core_switches_supply_en(struct stm32_adc_priv *priv,
+@@ -817,6 +808,7 @@ static const struct stm32_adc_priv_cfg stm32f4_adc_priv_cfg = {
+ .regs = &stm32f4_adc_common_regs,
+ .clk_sel = stm32f4_adc_clk_sel,
+ .max_clk_rate_hz = 36000000,
++ .num_irqs = 1,
+ };
+
+ static const struct stm32_adc_priv_cfg stm32h7_adc_priv_cfg = {
+@@ -824,6 +816,7 @@ static const struct stm32_adc_priv_cfg stm32h7_adc_priv_cfg = {
+ .clk_sel = stm32h7_adc_clk_sel,
+ .max_clk_rate_hz = 36000000,
+ .has_syscfg = HAS_VBOOSTER,
++ .num_irqs = 1,
+ };
+
+ static const struct stm32_adc_priv_cfg stm32mp1_adc_priv_cfg = {
+@@ -831,6 +824,7 @@ static const struct stm32_adc_priv_cfg stm32mp1_adc_priv_cfg = {
+ .clk_sel = stm32h7_adc_clk_sel,
+ .max_clk_rate_hz = 40000000,
+ .has_syscfg = HAS_VBOOSTER | HAS_ANASWVDD,
++ .num_irqs = 2,
+ };
+
+ static const struct of_device_id stm32_adc_of_match[] = {
+diff --git a/drivers/iio/chemical/pms7003.c b/drivers/iio/chemical/pms7003.c
+index 23c9ab252470..07bb90d72434 100644
+--- a/drivers/iio/chemical/pms7003.c
++++ b/drivers/iio/chemical/pms7003.c
+@@ -73,6 +73,11 @@ struct pms7003_state {
+ struct pms7003_frame frame;
+ struct completion frame_ready;
+ struct mutex lock; /* must be held whenever state gets touched */
++ /* Used to construct scan to push to the IIO buffer */
++ struct {
++ u16 data[3]; /* PM1, PM2P5, PM10 */
++ s64 ts;
++ } scan;
+ };
+
+ static int pms7003_do_cmd(struct pms7003_state *state, enum pms7003_cmd cmd)
+@@ -104,7 +109,6 @@ static irqreturn_t pms7003_trigger_handler(int irq, void *p)
+ struct iio_dev *indio_dev = pf->indio_dev;
+ struct pms7003_state *state = iio_priv(indio_dev);
+ struct pms7003_frame *frame = &state->frame;
+- u16 data[3 + 1 + 4]; /* PM1, PM2P5, PM10, padding, timestamp */
+ int ret;
+
+ mutex_lock(&state->lock);
+@@ -114,12 +118,15 @@ static irqreturn_t pms7003_trigger_handler(int irq, void *p)
+ goto err;
+ }
+
+- data[PM1] = pms7003_get_pm(frame->data + PMS7003_PM1_OFFSET);
+- data[PM2P5] = pms7003_get_pm(frame->data + PMS7003_PM2P5_OFFSET);
+- data[PM10] = pms7003_get_pm(frame->data + PMS7003_PM10_OFFSET);
++ state->scan.data[PM1] =
++ pms7003_get_pm(frame->data + PMS7003_PM1_OFFSET);
++ state->scan.data[PM2P5] =
++ pms7003_get_pm(frame->data + PMS7003_PM2P5_OFFSET);
++ state->scan.data[PM10] =
++ pms7003_get_pm(frame->data + PMS7003_PM10_OFFSET);
+ mutex_unlock(&state->lock);
+
+- iio_push_to_buffers_with_timestamp(indio_dev, data,
++ iio_push_to_buffers_with_timestamp(indio_dev, &state->scan,
+ iio_get_time_ns(indio_dev));
+ err:
+ iio_trigger_notify_done(indio_dev->trig);
+diff --git a/drivers/iio/chemical/sps30.c b/drivers/iio/chemical/sps30.c
+index acb9f8ecbb3d..a88c1fb875a0 100644
+--- a/drivers/iio/chemical/sps30.c
++++ b/drivers/iio/chemical/sps30.c
+@@ -230,15 +230,18 @@ static irqreturn_t sps30_trigger_handler(int irq, void *p)
+ struct iio_dev *indio_dev = pf->indio_dev;
+ struct sps30_state *state = iio_priv(indio_dev);
+ int ret;
+- s32 data[4 + 2]; /* PM1, PM2P5, PM4, PM10, timestamp */
++ struct {
++ s32 data[4]; /* PM1, PM2P5, PM4, PM10 */
++ s64 ts;
++ } scan;
+
+ mutex_lock(&state->lock);
+- ret = sps30_do_meas(state, data, 4);
++ ret = sps30_do_meas(state, scan.data, ARRAY_SIZE(scan.data));
+ mutex_unlock(&state->lock);
+ if (ret)
+ goto err;
+
+- iio_push_to_buffers_with_timestamp(indio_dev, data,
++ iio_push_to_buffers_with_timestamp(indio_dev, &scan,
+ iio_get_time_ns(indio_dev));
+ err:
+ iio_trigger_notify_done(indio_dev->trig);
+diff --git a/drivers/iio/light/vcnl4000.c b/drivers/iio/light/vcnl4000.c
+index ec803c1e81df..5d476f174a90 100644
+--- a/drivers/iio/light/vcnl4000.c
++++ b/drivers/iio/light/vcnl4000.c
+@@ -219,7 +219,6 @@ static int vcnl4000_measure(struct vcnl4000_data *data, u8 req_mask,
+ u8 rdy_mask, u8 data_reg, int *val)
+ {
+ int tries = 20;
+- __be16 buf;
+ int ret;
+
+ mutex_lock(&data->vcnl4000_lock);
+@@ -246,13 +245,12 @@ static int vcnl4000_measure(struct vcnl4000_data *data, u8 req_mask,
+ goto fail;
+ }
+
+- ret = i2c_smbus_read_i2c_block_data(data->client,
+- data_reg, sizeof(buf), (u8 *) &buf);
++ ret = i2c_smbus_read_word_swapped(data->client, data_reg);
+ if (ret < 0)
+ goto fail;
+
+ mutex_unlock(&data->vcnl4000_lock);
+- *val = be16_to_cpu(buf);
++ *val = ret;
+
+ return 0;
+
+diff --git a/drivers/nvmem/qfprom.c b/drivers/nvmem/qfprom.c
+index d057f1bfb2e9..8a91717600be 100644
+--- a/drivers/nvmem/qfprom.c
++++ b/drivers/nvmem/qfprom.c
+@@ -27,25 +27,11 @@ static int qfprom_reg_read(void *context,
+ return 0;
+ }
+
+-static int qfprom_reg_write(void *context,
+- unsigned int reg, void *_val, size_t bytes)
+-{
+- struct qfprom_priv *priv = context;
+- u8 *val = _val;
+- int i = 0, words = bytes;
+-
+- while (words--)
+- writeb(*val++, priv->base + reg + i++);
+-
+- return 0;
+-}
+-
+ static struct nvmem_config econfig = {
+ .name = "qfprom",
+ .stride = 1,
+ .word_size = 1,
+ .reg_read = qfprom_reg_read,
+- .reg_write = qfprom_reg_write,
+ };
+
+ static int qfprom_probe(struct platform_device *pdev)
+diff --git a/drivers/staging/rtl8712/wifi.h b/drivers/staging/rtl8712/wifi.h
+index be731f1a2209..91b65731fcaa 100644
+--- a/drivers/staging/rtl8712/wifi.h
++++ b/drivers/staging/rtl8712/wifi.h
+@@ -440,7 +440,7 @@ static inline unsigned char *get_hdr_bssid(unsigned char *pframe)
+ /* block-ack parameters */
+ #define IEEE80211_ADDBA_PARAM_POLICY_MASK 0x0002
+ #define IEEE80211_ADDBA_PARAM_TID_MASK 0x003C
+-#define IEEE80211_ADDBA_PARAM_BUF_SIZE_MASK 0xFFA0
++#define IEEE80211_ADDBA_PARAM_BUF_SIZE_MASK 0xFFC0
+ #define IEEE80211_DELBA_PARAM_TID_MASK 0xF000
+ #define IEEE80211_DELBA_PARAM_INITIATOR_MASK 0x0800
+
+@@ -532,13 +532,6 @@ struct ieee80211_ht_addt_info {
+ #define IEEE80211_HT_IE_NON_GF_STA_PRSNT 0x0004
+ #define IEEE80211_HT_IE_NON_HT_STA_PRSNT 0x0010
+
+-/* block-ack parameters */
+-#define IEEE80211_ADDBA_PARAM_POLICY_MASK 0x0002
+-#define IEEE80211_ADDBA_PARAM_TID_MASK 0x003C
+-#define IEEE80211_ADDBA_PARAM_BUF_SIZE_MASK 0xFFA0
+-#define IEEE80211_DELBA_PARAM_TID_MASK 0xF000
+-#define IEEE80211_DELBA_PARAM_INITIATOR_MASK 0x0800
+-
+ /*
+ * A-PMDU buffer sizes
+ * According to IEEE802.11n spec size varies from 8K to 64K (in powers of 2)
+diff --git a/drivers/tty/hvc/hvc_console.c b/drivers/tty/hvc/hvc_console.c
+index 436cc51c92c3..cdcc64ea2554 100644
+--- a/drivers/tty/hvc/hvc_console.c
++++ b/drivers/tty/hvc/hvc_console.c
+@@ -371,15 +371,14 @@ static int hvc_open(struct tty_struct *tty, struct file * filp)
+ * tty fields and return the kref reference.
+ */
+ if (rc) {
+- tty_port_tty_set(&hp->port, NULL);
+- tty->driver_data = NULL;
+- tty_port_put(&hp->port);
+ printk(KERN_ERR "hvc_open: request_irq failed with rc %d.\n", rc);
+- } else
++ } else {
+ /* We are ready... raise DTR/RTS */
+ if (C_BAUD(tty))
+ if (hp->ops->dtr_rts)
+ hp->ops->dtr_rts(hp, 1);
++ tty_port_set_initialized(&hp->port, true);
++ }
+
+ /* Force wakeup of the polling thread */
+ hvc_kick();
+@@ -389,22 +388,12 @@ static int hvc_open(struct tty_struct *tty, struct file * filp)
+
+ static void hvc_close(struct tty_struct *tty, struct file * filp)
+ {
+- struct hvc_struct *hp;
++ struct hvc_struct *hp = tty->driver_data;
+ unsigned long flags;
+
+ if (tty_hung_up_p(filp))
+ return;
+
+- /*
+- * No driver_data means that this close was issued after a failed
+- * hvc_open by the tty layer's release_dev() function and we can just
+- * exit cleanly because the kref reference wasn't made.
+- */
+- if (!tty->driver_data)
+- return;
+-
+- hp = tty->driver_data;
+-
+ spin_lock_irqsave(&hp->port.lock, flags);
+
+ if (--hp->port.count == 0) {
+@@ -412,6 +401,9 @@ static void hvc_close(struct tty_struct *tty, struct file * filp)
+ /* We are done with the tty pointer now. */
+ tty_port_tty_set(&hp->port, NULL);
+
++ if (!tty_port_initialized(&hp->port))
++ return;
++
+ if (C_HUPCL(tty))
+ if (hp->ops->dtr_rts)
+ hp->ops->dtr_rts(hp, 0);
+@@ -428,6 +420,7 @@ static void hvc_close(struct tty_struct *tty, struct file * filp)
+ * waking periodically to check chars_in_buffer().
+ */
+ tty_wait_until_sent(tty, HVC_CLOSE_WAIT);
++ tty_port_set_initialized(&hp->port, false);
+ } else {
+ if (hp->port.count < 0)
+ printk(KERN_ERR "hvc_close %X: oops, count is %d\n",
+diff --git a/drivers/tty/serial/8250/Kconfig b/drivers/tty/serial/8250/Kconfig
+index af0688156dd0..8195a31519ea 100644
+--- a/drivers/tty/serial/8250/Kconfig
++++ b/drivers/tty/serial/8250/Kconfig
+@@ -63,6 +63,7 @@ config SERIAL_8250_PNP
+ config SERIAL_8250_16550A_VARIANTS
+ bool "Support for variants of the 16550A serial port"
+ depends on SERIAL_8250
++ default !X86
+ help
+ The 8250 driver can probe for many variants of the venerable 16550A
+ serial port. Doing so takes additional time at boot.
+diff --git a/drivers/tty/vt/keyboard.c b/drivers/tty/vt/keyboard.c
+index 15d33fa0c925..568b2171f335 100644
+--- a/drivers/tty/vt/keyboard.c
++++ b/drivers/tty/vt/keyboard.c
+@@ -127,7 +127,11 @@ static DEFINE_SPINLOCK(func_buf_lock); /* guard 'func_buf' and friends */
+ static unsigned long key_down[BITS_TO_LONGS(KEY_CNT)]; /* keyboard key bitmap */
+ static unsigned char shift_down[NR_SHIFT]; /* shift state counters.. */
+ static bool dead_key_next;
+-static int npadch = -1; /* -1 or number assembled on pad */
++
++/* Handles a number being assembled on the number pad */
++static bool npadch_active;
++static unsigned int npadch_value;
++
+ static unsigned int diacr;
+ static char rep; /* flag telling character repeat */
+
+@@ -845,12 +849,12 @@ static void k_shift(struct vc_data *vc, unsigned char value, char up_flag)
+ shift_state &= ~(1 << value);
+
+ /* kludge */
+- if (up_flag && shift_state != old_state && npadch != -1) {
++ if (up_flag && shift_state != old_state && npadch_active) {
+ if (kbd->kbdmode == VC_UNICODE)
+- to_utf8(vc, npadch);
++ to_utf8(vc, npadch_value);
+ else
+- put_queue(vc, npadch & 0xff);
+- npadch = -1;
++ put_queue(vc, npadch_value & 0xff);
++ npadch_active = false;
+ }
+ }
+
+@@ -868,7 +872,7 @@ static void k_meta(struct vc_data *vc, unsigned char value, char up_flag)
+
+ static void k_ascii(struct vc_data *vc, unsigned char value, char up_flag)
+ {
+- int base;
++ unsigned int base;
+
+ if (up_flag)
+ return;
+@@ -882,10 +886,12 @@ static void k_ascii(struct vc_data *vc, unsigned char value, char up_flag)
+ base = 16;
+ }
+
+- if (npadch == -1)
+- npadch = value;
+- else
+- npadch = npadch * base + value;
++ if (!npadch_active) {
++ npadch_value = 0;
++ npadch_active = true;
++ }
++
++ npadch_value = npadch_value * base + value;
+ }
+
+ static void k_lock(struct vc_data *vc, unsigned char value, char up_flag)
+diff --git a/drivers/usb/class/cdc-acm.c b/drivers/usb/class/cdc-acm.c
+index ded8d93834ca..f67088bb8218 100644
+--- a/drivers/usb/class/cdc-acm.c
++++ b/drivers/usb/class/cdc-acm.c
+@@ -584,7 +584,7 @@ static void acm_softint(struct work_struct *work)
+ }
+
+ if (test_and_clear_bit(ACM_ERROR_DELAY, &acm->flags)) {
+- for (i = 0; i < ACM_NR; i++)
++ for (i = 0; i < acm->rx_buflimit; i++)
+ if (test_and_clear_bit(i, &acm->urbs_in_error_delay))
+ acm_submit_read_urb(acm, i, GFP_NOIO);
+ }
+diff --git a/drivers/usb/musb/jz4740.c b/drivers/usb/musb/jz4740.c
+index e64dd30e80e7..c4fe1f4cd17a 100644
+--- a/drivers/usb/musb/jz4740.c
++++ b/drivers/usb/musb/jz4740.c
+@@ -30,11 +30,11 @@ static irqreturn_t jz4740_musb_interrupt(int irq, void *__hci)
+ irqreturn_t retval = IRQ_NONE, retval_dma = IRQ_NONE;
+ struct musb *musb = __hci;
+
+- spin_lock_irqsave(&musb->lock, flags);
+-
+ if (IS_ENABLED(CONFIG_USB_INVENTRA_DMA) && musb->dma_controller)
+ retval_dma = dma_controller_irq(irq, musb->dma_controller);
+
++ spin_lock_irqsave(&musb->lock, flags);
++
+ musb->int_usb = musb_readb(musb->mregs, MUSB_INTRUSB);
+ musb->int_tx = musb_readw(musb->mregs, MUSB_INTRTX);
+ musb->int_rx = musb_readw(musb->mregs, MUSB_INTRRX);
+diff --git a/drivers/usb/musb/musb_core.c b/drivers/usb/musb/musb_core.c
+index d590110539ab..48178aeccf5b 100644
+--- a/drivers/usb/musb/musb_core.c
++++ b/drivers/usb/musb/musb_core.c
+@@ -2877,6 +2877,13 @@ static int musb_resume(struct device *dev)
+ musb_enable_interrupts(musb);
+ musb_platform_enable(musb);
+
++ /* session might be disabled in suspend */
++ if (musb->port_mode == MUSB_HOST &&
++ !(musb->ops->quirks & MUSB_PRESERVE_SESSION)) {
++ devctl |= MUSB_DEVCTL_SESSION;
++ musb_writeb(musb->mregs, MUSB_DEVCTL, devctl);
++ }
++
+ spin_lock_irqsave(&musb->lock, flags);
+ error = musb_run_resume_work(musb);
+ if (error)
+diff --git a/drivers/usb/musb/musb_debugfs.c b/drivers/usb/musb/musb_debugfs.c
+index 7b6281ab62ed..30a89aa8a3e7 100644
+--- a/drivers/usb/musb/musb_debugfs.c
++++ b/drivers/usb/musb/musb_debugfs.c
+@@ -168,6 +168,11 @@ static ssize_t musb_test_mode_write(struct file *file,
+ u8 test;
+ char buf[24];
+
++ memset(buf, 0x00, sizeof(buf));
++
++ if (copy_from_user(buf, ubuf, min_t(size_t, sizeof(buf) - 1, count)))
++ return -EFAULT;
++
+ pm_runtime_get_sync(musb->controller);
+ test = musb_readb(musb->mregs, MUSB_TESTMODE);
+ if (test) {
+@@ -176,11 +181,6 @@ static ssize_t musb_test_mode_write(struct file *file,
+ goto ret;
+ }
+
+- memset(buf, 0x00, sizeof(buf));
+-
+- if (copy_from_user(buf, ubuf, min_t(size_t, sizeof(buf) - 1, count)))
+- return -EFAULT;
+-
+ if (strstarts(buf, "force host full-speed"))
+ test = MUSB_TEST_FORCE_HOST | MUSB_TEST_FORCE_FS;
+
+diff --git a/drivers/usb/serial/ch341.c b/drivers/usb/serial/ch341.c
+index c5ecdcd51ffc..89675ee29645 100644
+--- a/drivers/usb/serial/ch341.c
++++ b/drivers/usb/serial/ch341.c
+@@ -73,6 +73,8 @@
+ #define CH341_LCR_CS6 0x01
+ #define CH341_LCR_CS5 0x00
+
++#define CH341_QUIRK_LIMITED_PRESCALER BIT(0)
++
+ static const struct usb_device_id id_table[] = {
+ { USB_DEVICE(0x4348, 0x5523) },
+ { USB_DEVICE(0x1a86, 0x7523) },
+@@ -87,6 +89,7 @@ struct ch341_private {
+ u8 mcr;
+ u8 msr;
+ u8 lcr;
++ unsigned long quirks;
+ };
+
+ static void ch341_set_termios(struct tty_struct *tty,
+@@ -159,9 +162,11 @@ static const speed_t ch341_min_rates[] = {
+ * 2 <= div <= 256 if fact = 0, or
+ * 9 <= div <= 256 if fact = 1
+ */
+-static int ch341_get_divisor(speed_t speed)
++static int ch341_get_divisor(struct ch341_private *priv)
+ {
+ unsigned int fact, div, clk_div;
++ speed_t speed = priv->baud_rate;
++ bool force_fact0 = false;
+ int ps;
+
+ /*
+@@ -187,8 +192,12 @@ static int ch341_get_divisor(speed_t speed)
+ clk_div = CH341_CLK_DIV(ps, fact);
+ div = CH341_CLKRATE / (clk_div * speed);
+
++ /* Some devices require a lower base clock if ps < 3. */
++ if (ps < 3 && (priv->quirks & CH341_QUIRK_LIMITED_PRESCALER))
++ force_fact0 = true;
++
+ /* Halve base clock (fact = 0) if required. */
+- if (div < 9 || div > 255) {
++ if (div < 9 || div > 255 || force_fact0) {
+ div /= 2;
+ clk_div *= 2;
+ fact = 0;
+@@ -227,7 +236,7 @@ static int ch341_set_baudrate_lcr(struct usb_device *dev,
+ if (!priv->baud_rate)
+ return -EINVAL;
+
+- val = ch341_get_divisor(priv->baud_rate);
++ val = ch341_get_divisor(priv);
+ if (val < 0)
+ return -EINVAL;
+
+@@ -308,6 +317,54 @@ out: kfree(buffer);
+ return r;
+ }
+
++static int ch341_detect_quirks(struct usb_serial_port *port)
++{
++ struct ch341_private *priv = usb_get_serial_port_data(port);
++ struct usb_device *udev = port->serial->dev;
++ const unsigned int size = 2;
++ unsigned long quirks = 0;
++ char *buffer;
++ int r;
++
++ buffer = kmalloc(size, GFP_KERNEL);
++ if (!buffer)
++ return -ENOMEM;
++
++ /*
++ * A subset of CH34x devices does not support all features. The
++ * prescaler is limited and there is no support for sending a RS232
++ * break condition. A read failure when trying to set up the latter is
++ * used to detect these devices.
++ */
++ r = usb_control_msg(udev, usb_rcvctrlpipe(udev, 0), CH341_REQ_READ_REG,
++ USB_TYPE_VENDOR | USB_RECIP_DEVICE | USB_DIR_IN,
++ CH341_REG_BREAK, 0, buffer, size, DEFAULT_TIMEOUT);
++ if (r == -EPIPE) {
++ dev_dbg(&port->dev, "break control not supported\n");
++ quirks = CH341_QUIRK_LIMITED_PRESCALER;
++ r = 0;
++ goto out;
++ }
++
++ if (r != size) {
++ if (r >= 0)
++ r = -EIO;
++ dev_err(&port->dev, "failed to read break control: %d\n", r);
++ goto out;
++ }
++
++ r = 0;
++out:
++ kfree(buffer);
++
++ if (quirks) {
++ dev_dbg(&port->dev, "enabling quirk flags: 0x%02lx\n", quirks);
++ priv->quirks |= quirks;
++ }
++
++ return r;
++}
++
+ static int ch341_port_probe(struct usb_serial_port *port)
+ {
+ struct ch341_private *priv;
+@@ -330,6 +387,11 @@ static int ch341_port_probe(struct usb_serial_port *port)
+ goto error;
+
+ usb_set_serial_port_data(port, priv);
++
++ r = ch341_detect_quirks(port);
++ if (r < 0)
++ goto error;
++
+ return 0;
+
+ error: kfree(priv);
+diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
+index 8bfffca3e4ae..254a8bbeea67 100644
+--- a/drivers/usb/serial/option.c
++++ b/drivers/usb/serial/option.c
+@@ -1157,6 +1157,10 @@ static const struct usb_device_id option_ids[] = {
+ { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_CC864_SINGLE) },
+ { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_DE910_DUAL) },
+ { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_UE910_V2) },
++ { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1031, 0xff), /* Telit LE910C1-EUX */
++ .driver_info = NCTRL(0) | RSVD(3) },
++ { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1033, 0xff), /* Telit LE910C1-EUX (ECM) */
++ .driver_info = NCTRL(0) },
+ { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_LE922_USBCFG0),
+ .driver_info = RSVD(0) | RSVD(1) | NCTRL(2) | RSVD(3) },
+ { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_LE922_USBCFG1),
+diff --git a/drivers/usb/serial/qcserial.c b/drivers/usb/serial/qcserial.c
+index ce0401d3137f..d147feae83e6 100644
+--- a/drivers/usb/serial/qcserial.c
++++ b/drivers/usb/serial/qcserial.c
+@@ -173,6 +173,7 @@ static const struct usb_device_id id_table[] = {
+ {DEVICE_SWI(0x413c, 0x81b3)}, /* Dell Wireless 5809e Gobi(TM) 4G LTE Mobile Broadband Card (rev3) */
+ {DEVICE_SWI(0x413c, 0x81b5)}, /* Dell Wireless 5811e QDL */
+ {DEVICE_SWI(0x413c, 0x81b6)}, /* Dell Wireless 5811e QDL */
++ {DEVICE_SWI(0x413c, 0x81cb)}, /* Dell Wireless 5816e QDL */
+ {DEVICE_SWI(0x413c, 0x81cc)}, /* Dell Wireless 5816e */
+ {DEVICE_SWI(0x413c, 0x81cf)}, /* Dell Wireless 5819 */
+ {DEVICE_SWI(0x413c, 0x81d0)}, /* Dell Wireless 5819 */
+diff --git a/drivers/usb/serial/usb_wwan.c b/drivers/usb/serial/usb_wwan.c
+index 13be21aad2f4..4b9845807bee 100644
+--- a/drivers/usb/serial/usb_wwan.c
++++ b/drivers/usb/serial/usb_wwan.c
+@@ -270,6 +270,10 @@ static void usb_wwan_indat_callback(struct urb *urb)
+ if (status) {
+ dev_dbg(dev, "%s: nonzero status: %d on endpoint %02x.\n",
+ __func__, status, endpoint);
++
++ /* don't resubmit on fatal errors */
++ if (status == -ESHUTDOWN || status == -ENOENT)
++ return;
+ } else {
+ if (urb->actual_length) {
+ tty_insert_flip_string(&port->port, data,
+diff --git a/include/linux/mod_devicetable.h b/include/linux/mod_devicetable.h
+index 4c2ddd0941a7..0754b8d71262 100644
+--- a/include/linux/mod_devicetable.h
++++ b/include/linux/mod_devicetable.h
+@@ -663,6 +663,7 @@ struct x86_cpu_id {
+ __u16 vendor;
+ __u16 family;
+ __u16 model;
++ __u16 steppings;
+ __u16 feature; /* bit index */
+ kernel_ulong_t driver_data;
+ };
+@@ -671,6 +672,7 @@ struct x86_cpu_id {
+ #define X86_VENDOR_ANY 0xffff
+ #define X86_FAMILY_ANY 0
+ #define X86_MODEL_ANY 0
++#define X86_STEPPING_ANY 0
+ #define X86_FEATURE_ANY 0 /* Same as FPU, you can't test for that */
+
+ /*
+diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
+index ece7e13f6e4a..cc2095607c74 100644
+--- a/kernel/events/uprobes.c
++++ b/kernel/events/uprobes.c
+@@ -867,10 +867,6 @@ static int prepare_uprobe(struct uprobe *uprobe, struct file *file,
+ if (ret)
+ goto out;
+
+- /* uprobe_write_opcode() assumes we don't cross page boundary */
+- BUG_ON((uprobe->offset & ~PAGE_MASK) +
+- UPROBE_SWBP_INSN_SIZE > PAGE_SIZE);
+-
+ smp_wmb(); /* pairs with the smp_rmb() in handle_swbp() */
+ set_bit(UPROBE_COPY_INSN, &uprobe->flags);
+
+@@ -1166,6 +1162,15 @@ static int __uprobe_register(struct inode *inode, loff_t offset,
+ if (offset > i_size_read(inode))
+ return -EINVAL;
+
++ /*
++ * This ensures that copy_from_page(), copy_to_page() and
++ * __update_ref_ctr() can't cross page boundary.
++ */
++ if (!IS_ALIGNED(offset, UPROBE_SWBP_INSN_SIZE))
++ return -EINVAL;
++ if (!IS_ALIGNED(ref_ctr_offset, sizeof(short)))
++ return -EINVAL;
++
+ retry:
+ uprobe = alloc_uprobe(inode, offset, ref_ctr_offset);
+ if (!uprobe)
+@@ -2014,6 +2019,9 @@ static int is_trap_at_addr(struct mm_struct *mm, unsigned long vaddr)
+ uprobe_opcode_t opcode;
+ int result;
+
++ if (WARN_ON_ONCE(!IS_ALIGNED(vaddr, UPROBE_SWBP_INSN_SIZE)))
++ return -EINVAL;
++
+ pagefault_disable();
+ result = __get_user(opcode, (uprobe_opcode_t __user *)vaddr);
+ pagefault_enable();
* [gentoo-commits] proj/linux-patches:5.7 commit in: /
@ 2020-06-17 16:41 Mike Pagano
From: Mike Pagano @ 2020-06-17 16:41 UTC (permalink / raw
To: gentoo-commits
commit: c0976f629d96e2e4621977f5a875f189f81c73dc
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Jun 17 16:41:45 2020 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Jun 17 16:41:45 2020 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=c0976f62
Linux patch 5.7.3
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1002_linux-5.7.3.patch | 5486 ++++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 5490 insertions(+)
diff --git a/0000_README b/0000_README
index f0fc6ef..f77851e 100644
--- a/0000_README
+++ b/0000_README
@@ -51,6 +51,10 @@ Patch: 1001_linux-5.7.2.patch
From: http://www.kernel.org
Desc: Linux 5.7.2
+Patch: 1002_linux-5.7.3.patch
+From: http://www.kernel.org
+Desc: Linux 5.7.3
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1002_linux-5.7.3.patch b/1002_linux-5.7.3.patch
new file mode 100644
index 0000000..0be58c3
--- /dev/null
+++ b/1002_linux-5.7.3.patch
@@ -0,0 +1,5486 @@
+diff --git a/Documentation/lzo.txt b/Documentation/lzo.txt
+index ca983328976b..f65b51523014 100644
+--- a/Documentation/lzo.txt
++++ b/Documentation/lzo.txt
+@@ -159,11 +159,15 @@ Byte sequences
+ distance = 16384 + (H << 14) + D
+ state = S (copy S literals after this block)
+ End of stream is reached if distance == 16384
++ In version 1 only, to prevent ambiguity with the RLE case when
++ ((distance & 0x803f) == 0x803f) && (261 <= length <= 264), the
++ compressor must not emit block copies where distance and length
++ meet these conditions.
+
+ In version 1 only, this instruction is also used to encode a run of
+- zeros if distance = 0xbfff, i.e. H = 1 and the D bits are all 1.
++ zeros if distance = 0xbfff, i.e. H = 1 and the D bits are all 1.
+ In this case, it is followed by a fourth byte, X.
+- run length = ((X << 3) | (0 0 0 0 0 L L L)) + 4.
++ run length = ((X << 3) | (0 0 0 0 0 L L L)) + 4
+
+ 0 0 1 L L L L L (32..63)
+ Copy of small block within 16kB distance (preferably less than 34B)
+diff --git a/Makefile b/Makefile
+index a6dda75d18cd..a2ce556f4347 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 7
+-SUBLEVEL = 2
++SUBLEVEL = 3
+ EXTRAVERSION =
+ NAME = Kleptomaniac Octopus
+
+diff --git a/arch/arm/boot/dts/at91-sama5d2_ptc_ek.dts b/arch/arm/boot/dts/at91-sama5d2_ptc_ek.dts
+index 1c24ac8019ba..772809c54c1f 100644
+--- a/arch/arm/boot/dts/at91-sama5d2_ptc_ek.dts
++++ b/arch/arm/boot/dts/at91-sama5d2_ptc_ek.dts
+@@ -125,8 +125,6 @@
+ bus-width = <8>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_sdmmc0_default>;
+- non-removable;
+- mmc-ddr-1_8v;
+ status = "okay";
+ };
+
+diff --git a/arch/arm64/include/asm/acpi.h b/arch/arm64/include/asm/acpi.h
+index b263e239cb59..a45366c3909b 100644
+--- a/arch/arm64/include/asm/acpi.h
++++ b/arch/arm64/include/asm/acpi.h
+@@ -12,6 +12,7 @@
+ #include <linux/efi.h>
+ #include <linux/memblock.h>
+ #include <linux/psci.h>
++#include <linux/stddef.h>
+
+ #include <asm/cputype.h>
+ #include <asm/io.h>
+@@ -31,14 +32,14 @@
+ * is therefore used to delimit the MADT GICC structure minimum length
+ * appropriately.
+ */
+-#define ACPI_MADT_GICC_MIN_LENGTH ACPI_OFFSET( \
++#define ACPI_MADT_GICC_MIN_LENGTH offsetof( \
+ struct acpi_madt_generic_interrupt, efficiency_class)
+
+ #define BAD_MADT_GICC_ENTRY(entry, end) \
+ (!(entry) || (entry)->header.length < ACPI_MADT_GICC_MIN_LENGTH || \
+ (unsigned long)(entry) + (entry)->header.length > (end))
+
+-#define ACPI_MADT_GICC_SPE (ACPI_OFFSET(struct acpi_madt_generic_interrupt, \
++#define ACPI_MADT_GICC_SPE (offsetof(struct acpi_madt_generic_interrupt, \
+ spe_interrupt) + sizeof(u16))
+
+ /* Basic configuration for ACPI */
+diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
+index a30b4eec7cb4..977843e4d5fb 100644
+--- a/arch/arm64/include/asm/kvm_emulate.h
++++ b/arch/arm64/include/asm/kvm_emulate.h
+@@ -112,12 +112,6 @@ static inline void vcpu_ptrauth_disable(struct kvm_vcpu *vcpu)
+ vcpu->arch.hcr_el2 &= ~(HCR_API | HCR_APK);
+ }
+
+-static inline void vcpu_ptrauth_setup_lazy(struct kvm_vcpu *vcpu)
+-{
+- if (vcpu_has_ptrauth(vcpu))
+- vcpu_ptrauth_disable(vcpu);
+-}
+-
+ static inline unsigned long vcpu_get_vsesr(struct kvm_vcpu *vcpu)
+ {
+ return vcpu->arch.vsesr_el2;
+diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
+index 32c8a675e5a4..26fca93cd697 100644
+--- a/arch/arm64/include/asm/kvm_host.h
++++ b/arch/arm64/include/asm/kvm_host.h
+@@ -405,8 +405,10 @@ void vcpu_write_sys_reg(struct kvm_vcpu *vcpu, u64 val, int reg);
+ * CP14 and CP15 live in the same array, as they are backed by the
+ * same system registers.
+ */
+-#define vcpu_cp14(v,r) ((v)->arch.ctxt.copro[(r)])
+-#define vcpu_cp15(v,r) ((v)->arch.ctxt.copro[(r)])
++#define CPx_BIAS IS_ENABLED(CONFIG_CPU_BIG_ENDIAN)
++
++#define vcpu_cp14(v,r) ((v)->arch.ctxt.copro[(r) ^ CPx_BIAS])
++#define vcpu_cp15(v,r) ((v)->arch.ctxt.copro[(r) ^ CPx_BIAS])
+
+ struct kvm_vm_stat {
+ ulong remote_tlb_flush;
+diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
+index aacfc55de44c..e0a4bcdb9451 100644
+--- a/arch/arm64/kvm/handle_exit.c
++++ b/arch/arm64/kvm/handle_exit.c
+@@ -162,31 +162,16 @@ static int handle_sve(struct kvm_vcpu *vcpu, struct kvm_run *run)
+ return 1;
+ }
+
+-#define __ptrauth_save_key(regs, key) \
+-({ \
+- regs[key ## KEYLO_EL1] = read_sysreg_s(SYS_ ## key ## KEYLO_EL1); \
+- regs[key ## KEYHI_EL1] = read_sysreg_s(SYS_ ## key ## KEYHI_EL1); \
+-})
+-
+ /*
+ * Handle the guest trying to use a ptrauth instruction, or trying to access a
+ * ptrauth register.
+ */
+ void kvm_arm_vcpu_ptrauth_trap(struct kvm_vcpu *vcpu)
+ {
+- struct kvm_cpu_context *ctxt;
+-
+- if (vcpu_has_ptrauth(vcpu)) {
++ if (vcpu_has_ptrauth(vcpu))
+ vcpu_ptrauth_enable(vcpu);
+- ctxt = vcpu->arch.host_cpu_context;
+- __ptrauth_save_key(ctxt->sys_regs, APIA);
+- __ptrauth_save_key(ctxt->sys_regs, APIB);
+- __ptrauth_save_key(ctxt->sys_regs, APDA);
+- __ptrauth_save_key(ctxt->sys_regs, APDB);
+- __ptrauth_save_key(ctxt->sys_regs, APGA);
+- } else {
++ else
+ kvm_inject_undefined(vcpu);
+- }
+ }
+
+ /*
+diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
+index 51db934702b6..bfd68cd4fc54 100644
+--- a/arch/arm64/kvm/sys_regs.c
++++ b/arch/arm64/kvm/sys_regs.c
+@@ -1305,10 +1305,16 @@ static bool access_clidr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
+ static bool access_csselr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
+ const struct sys_reg_desc *r)
+ {
++ int reg = r->reg;
++
++ /* See the 32bit mapping in kvm_host.h */
++ if (p->is_aarch32)
++ reg = r->reg / 2;
++
+ if (p->is_write)
+- vcpu_write_sys_reg(vcpu, p->regval, r->reg);
++ vcpu_write_sys_reg(vcpu, p->regval, reg);
+ else
+- p->regval = vcpu_read_sys_reg(vcpu, r->reg);
++ p->regval = vcpu_read_sys_reg(vcpu, reg);
+ return true;
+ }
+
+diff --git a/arch/mips/include/asm/kvm_host.h b/arch/mips/include/asm/kvm_host.h
+index 2c343c346b79..caa2b936125c 100644
+--- a/arch/mips/include/asm/kvm_host.h
++++ b/arch/mips/include/asm/kvm_host.h
+@@ -274,8 +274,12 @@ enum emulation_result {
+ #define MIPS3_PG_SHIFT 6
+ #define MIPS3_PG_FRAME 0x3fffffc0
+
++#if defined(CONFIG_64BIT)
++#define VPN2_MASK GENMASK(cpu_vmbits - 1, 13)
++#else
+ #define VPN2_MASK 0xffffe000
+-#define KVM_ENTRYHI_ASID MIPS_ENTRYHI_ASID
++#endif
++#define KVM_ENTRYHI_ASID cpu_asid_mask(&boot_cpu_data)
+ #define TLB_IS_GLOBAL(x) ((x).tlb_lo[0] & (x).tlb_lo[1] & ENTRYLO_G)
+ #define TLB_VPN2(x) ((x).tlb_hi & VPN2_MASK)
+ #define TLB_ASID(x) ((x).tlb_hi & KVM_ENTRYHI_ASID)
+diff --git a/arch/powerpc/mm/ptdump/ptdump.c b/arch/powerpc/mm/ptdump/ptdump.c
+index d92bb8ea229c..34248641fe58 100644
+--- a/arch/powerpc/mm/ptdump/ptdump.c
++++ b/arch/powerpc/mm/ptdump/ptdump.c
+@@ -60,6 +60,7 @@ struct pg_state {
+ unsigned long start_address;
+ unsigned long start_pa;
+ unsigned long last_pa;
++ unsigned long page_size;
+ unsigned int level;
+ u64 current_flags;
+ bool check_wx;
+@@ -157,9 +158,9 @@ static void dump_addr(struct pg_state *st, unsigned long addr)
+ #endif
+
+ pt_dump_seq_printf(st->seq, REG "-" REG " ", st->start_address, addr - 1);
+- if (st->start_pa == st->last_pa && st->start_address + PAGE_SIZE != addr) {
++ if (st->start_pa == st->last_pa && st->start_address + st->page_size != addr) {
+ pt_dump_seq_printf(st->seq, "[" REG "]", st->start_pa);
+- delta = PAGE_SIZE >> 10;
++ delta = st->page_size >> 10;
+ } else {
+ pt_dump_seq_printf(st->seq, " " REG " ", st->start_pa);
+ delta = (addr - st->start_address) >> 10;
+@@ -190,7 +191,7 @@ static void note_prot_wx(struct pg_state *st, unsigned long addr)
+ }
+
+ static void note_page(struct pg_state *st, unsigned long addr,
+- unsigned int level, u64 val)
++ unsigned int level, u64 val, unsigned long page_size)
+ {
+ u64 flag = val & pg_level[level].mask;
+ u64 pa = val & PTE_RPN_MASK;
+@@ -202,6 +203,7 @@ static void note_page(struct pg_state *st, unsigned long addr,
+ st->start_address = addr;
+ st->start_pa = pa;
+ st->last_pa = pa;
++ st->page_size = page_size;
+ pt_dump_seq_printf(st->seq, "---[ %s ]---\n", st->marker->name);
+ /*
+ * Dump the section of virtual memory when:
+@@ -213,7 +215,7 @@ static void note_page(struct pg_state *st, unsigned long addr,
+ */
+ } else if (flag != st->current_flags || level != st->level ||
+ addr >= st->marker[1].start_address ||
+- (pa != st->last_pa + PAGE_SIZE &&
++ (pa != st->last_pa + st->page_size &&
+ (pa != st->start_pa || st->start_pa != st->last_pa))) {
+
+ /* Check the PTE flags */
+@@ -241,6 +243,7 @@ static void note_page(struct pg_state *st, unsigned long addr,
+ st->start_address = addr;
+ st->start_pa = pa;
+ st->last_pa = pa;
++ st->page_size = page_size;
+ st->current_flags = flag;
+ st->level = level;
+ } else {
+@@ -256,7 +259,7 @@ static void walk_pte(struct pg_state *st, pmd_t *pmd, unsigned long start)
+
+ for (i = 0; i < PTRS_PER_PTE; i++, pte++) {
+ addr = start + i * PAGE_SIZE;
+- note_page(st, addr, 4, pte_val(*pte));
++ note_page(st, addr, 4, pte_val(*pte), PAGE_SIZE);
+
+ }
+ }
+@@ -273,7 +276,7 @@ static void walk_pmd(struct pg_state *st, pud_t *pud, unsigned long start)
+ /* pmd exists */
+ walk_pte(st, pmd, addr);
+ else
+- note_page(st, addr, 3, pmd_val(*pmd));
++ note_page(st, addr, 3, pmd_val(*pmd), PMD_SIZE);
+ }
+ }
+
+@@ -289,7 +292,7 @@ static void walk_pud(struct pg_state *st, pgd_t *pgd, unsigned long start)
+ /* pud exists */
+ walk_pmd(st, pud, addr);
+ else
+- note_page(st, addr, 2, pud_val(*pud));
++ note_page(st, addr, 2, pud_val(*pud), PUD_SIZE);
+ }
+ }
+
+@@ -308,7 +311,7 @@ static void walk_pagetables(struct pg_state *st)
+ /* pgd exists */
+ walk_pud(st, pgd, addr);
+ else
+- note_page(st, addr, 1, pgd_val(*pgd));
++ note_page(st, addr, 1, pgd_val(*pgd), PGDIR_SIZE);
+ }
+ }
+
+@@ -363,7 +366,7 @@ static int ptdump_show(struct seq_file *m, void *v)
+
+ /* Traverse kernel page tables */
+ walk_pagetables(&st);
+- note_page(&st, 0, 0, 0);
++ note_page(&st, 0, 0, 0, 0);
+ return 0;
+ }
+
+diff --git a/arch/powerpc/sysdev/xive/common.c b/arch/powerpc/sysdev/xive/common.c
+index b294f70f1a67..261cff60df14 100644
+--- a/arch/powerpc/sysdev/xive/common.c
++++ b/arch/powerpc/sysdev/xive/common.c
+@@ -19,6 +19,7 @@
+ #include <linux/slab.h>
+ #include <linux/spinlock.h>
+ #include <linux/msi.h>
++#include <linux/vmalloc.h>
+
+ #include <asm/debugfs.h>
+ #include <asm/prom.h>
+@@ -1017,12 +1018,16 @@ EXPORT_SYMBOL_GPL(is_xive_irq);
+ void xive_cleanup_irq_data(struct xive_irq_data *xd)
+ {
+ if (xd->eoi_mmio) {
++ unmap_kernel_range((unsigned long)xd->eoi_mmio,
++ 1u << xd->esb_shift);
+ iounmap(xd->eoi_mmio);
+ if (xd->eoi_mmio == xd->trig_mmio)
+ xd->trig_mmio = NULL;
+ xd->eoi_mmio = NULL;
+ }
+ if (xd->trig_mmio) {
++ unmap_kernel_range((unsigned long)xd->trig_mmio,
++ 1u << xd->esb_shift);
+ iounmap(xd->trig_mmio);
+ xd->trig_mmio = NULL;
+ }
+diff --git a/arch/s390/pci/pci_clp.c b/arch/s390/pci/pci_clp.c
+index ea794ae755ae..179bcecefdee 100644
+--- a/arch/s390/pci/pci_clp.c
++++ b/arch/s390/pci/pci_clp.c
+@@ -309,14 +309,13 @@ out:
+
+ int clp_disable_fh(struct zpci_dev *zdev)
+ {
+- u32 fh = zdev->fh;
+ int rc;
+
+ if (!zdev_enabled(zdev))
+ return 0;
+
+ rc = clp_set_pci_fn(zdev, 0, CLP_SET_DISABLE_PCI_FN);
+- zpci_dbg(3, "dis fid:%x, fh:%x, rc:%d\n", zdev->fid, fh, rc);
++ zpci_dbg(3, "dis fid:%x, fh:%x, rc:%d\n", zdev->fid, zdev->fh, rc);
+ return rc;
+ }
+
+diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
+index 332954cccece..ca35c8b5ee10 100644
+--- a/arch/x86/events/intel/core.c
++++ b/arch/x86/events/intel/core.c
+@@ -1892,8 +1892,8 @@ static __initconst const u64 tnt_hw_cache_extra_regs
+
+ static struct extra_reg intel_tnt_extra_regs[] __read_mostly = {
+ /* must define OFFCORE_RSP_X first, see intel_fixup_er() */
+- INTEL_UEVENT_EXTRA_REG(0x01b7, MSR_OFFCORE_RSP_0, 0xffffff9fffull, RSP_0),
+- INTEL_UEVENT_EXTRA_REG(0x02b7, MSR_OFFCORE_RSP_1, 0xffffff9fffull, RSP_1),
++ INTEL_UEVENT_EXTRA_REG(0x01b7, MSR_OFFCORE_RSP_0, 0x800ff0ffffff9fffull, RSP_0),
++ INTEL_UEVENT_EXTRA_REG(0x02b7, MSR_OFFCORE_RSP_1, 0xff0ffffff9fffull, RSP_1),
+ EVENT_EXTRA_END
+ };
+
+diff --git a/arch/x86/include/asm/set_memory.h b/arch/x86/include/asm/set_memory.h
+index ec2c0a094b5d..5948218f35c5 100644
+--- a/arch/x86/include/asm/set_memory.h
++++ b/arch/x86/include/asm/set_memory.h
+@@ -86,28 +86,35 @@ int set_direct_map_default_noflush(struct page *page);
+ extern int kernel_set_to_readonly;
+
+ #ifdef CONFIG_X86_64
+-static inline int set_mce_nospec(unsigned long pfn)
++/*
++ * Prevent speculative access to the page by either unmapping
++ * it (if we do not require access to any part of the page) or
++ * marking it uncacheable (if we want to try to retrieve data
++ * from non-poisoned lines in the page).
++ */
++static inline int set_mce_nospec(unsigned long pfn, bool unmap)
+ {
+ unsigned long decoy_addr;
+ int rc;
+
+ /*
+- * Mark the linear address as UC to make sure we don't log more
+- * errors because of speculative access to the page.
+ * We would like to just call:
+- * set_memory_uc((unsigned long)pfn_to_kaddr(pfn), 1);
++ * set_memory_XX((unsigned long)pfn_to_kaddr(pfn), 1);
+ * but doing that would radically increase the odds of a
+ * speculative access to the poison page because we'd have
+ * the virtual address of the kernel 1:1 mapping sitting
+ * around in registers.
+ * Instead we get tricky. We create a non-canonical address
+ * that looks just like the one we want, but has bit 63 flipped.
+- * This relies on set_memory_uc() properly sanitizing any __pa()
++ * This relies on set_memory_XX() properly sanitizing any __pa()
+ * results with __PHYSICAL_MASK or PTE_PFN_MASK.
+ */
+ decoy_addr = (pfn << PAGE_SHIFT) + (PAGE_OFFSET ^ BIT(63));
+
+- rc = set_memory_uc(decoy_addr, 1);
++ if (unmap)
++ rc = set_memory_np(decoy_addr, 1);
++ else
++ rc = set_memory_uc(decoy_addr, 1);
+ if (rc)
+ pr_warn("Could not invalidate pfn=0x%lx from 1:1 map\n", pfn);
+ return rc;
+diff --git a/arch/x86/include/asm/vdso/gettimeofday.h b/arch/x86/include/asm/vdso/gettimeofday.h
+index 9a6dc9b4ec99..fb81fea99093 100644
+--- a/arch/x86/include/asm/vdso/gettimeofday.h
++++ b/arch/x86/include/asm/vdso/gettimeofday.h
+@@ -271,6 +271,24 @@ static __always_inline const struct vdso_data *__arch_get_vdso_data(void)
+ return __vdso_data;
+ }
+
++static inline bool arch_vdso_clocksource_ok(const struct vdso_data *vd)
++{
++ return true;
++}
++#define vdso_clocksource_ok arch_vdso_clocksource_ok
++
++/*
++ * Clocksource read value validation to handle PV and HyperV clocksources
++ * which can be invalidated asynchronously and indicate invalidation by
++ * returning U64_MAX, which can be effectively tested by checking for a
++ * negative value after casting it to s64.
++ */
++static inline bool arch_vdso_cycles_ok(u64 cycles)
++{
++ return (s64)cycles >= 0;
++}
++#define vdso_cycles_ok arch_vdso_cycles_ok
++
+ /*
+ * x86 specific delta calculation.
+ *
+diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
+index 547ad7bbf0e0..8a1bdda895a4 100644
+--- a/arch/x86/kernel/cpu/amd.c
++++ b/arch/x86/kernel/cpu/amd.c
+@@ -1142,8 +1142,7 @@ static const int amd_erratum_383[] =
+
+ /* #1054: Instructions Retired Performance Counter May Be Inaccurate */
+ static const int amd_erratum_1054[] =
+- AMD_OSVW_ERRATUM(0, AMD_MODEL_RANGE(0x17, 0, 0, 0x2f, 0xf));
+-
++ AMD_LEGACY_ERRATUM(AMD_MODEL_RANGE(0x17, 0, 0, 0x2f, 0xf));
+
+ static bool cpu_has_amd_erratum(struct cpuinfo_x86 *cpu, const int *erratum)
+ {
+diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
+index 56978cb06149..b53dcff21438 100644
+--- a/arch/x86/kernel/cpu/bugs.c
++++ b/arch/x86/kernel/cpu/bugs.c
+@@ -588,7 +588,9 @@ early_param("nospectre_v1", nospectre_v1_cmdline);
+ static enum spectre_v2_mitigation spectre_v2_enabled __ro_after_init =
+ SPECTRE_V2_NONE;
+
+-static enum spectre_v2_user_mitigation spectre_v2_user __ro_after_init =
++static enum spectre_v2_user_mitigation spectre_v2_user_stibp __ro_after_init =
++ SPECTRE_V2_USER_NONE;
++static enum spectre_v2_user_mitigation spectre_v2_user_ibpb __ro_after_init =
+ SPECTRE_V2_USER_NONE;
+
+ #ifdef CONFIG_RETPOLINE
+@@ -734,15 +736,6 @@ spectre_v2_user_select_mitigation(enum spectre_v2_mitigation_cmd v2_cmd)
+ break;
+ }
+
+- /*
+- * At this point, an STIBP mode other than "off" has been set.
+- * If STIBP support is not being forced, check if STIBP always-on
+- * is preferred.
+- */
+- if (mode != SPECTRE_V2_USER_STRICT &&
+- boot_cpu_has(X86_FEATURE_AMD_STIBP_ALWAYS_ON))
+- mode = SPECTRE_V2_USER_STRICT_PREFERRED;
+-
+ /* Initialize Indirect Branch Prediction Barrier */
+ if (boot_cpu_has(X86_FEATURE_IBPB)) {
+ setup_force_cpu_cap(X86_FEATURE_USE_IBPB);
+@@ -765,23 +758,36 @@ spectre_v2_user_select_mitigation(enum spectre_v2_mitigation_cmd v2_cmd)
+ pr_info("mitigation: Enabling %s Indirect Branch Prediction Barrier\n",
+ static_key_enabled(&switch_mm_always_ibpb) ?
+ "always-on" : "conditional");
++
++ spectre_v2_user_ibpb = mode;
+ }
+
+- /* If enhanced IBRS is enabled no STIBP required */
+- if (spectre_v2_enabled == SPECTRE_V2_IBRS_ENHANCED)
++ /*
++ * If enhanced IBRS is enabled or SMT impossible, STIBP is not
++ * required.
++ */
++ if (!smt_possible || spectre_v2_enabled == SPECTRE_V2_IBRS_ENHANCED)
+ return;
+
+ /*
+- * If SMT is not possible or STIBP is not available clear the STIBP
+- * mode.
++ * At this point, an STIBP mode other than "off" has been set.
++ * If STIBP support is not being forced, check if STIBP always-on
++ * is preferred.
+ */
+- if (!smt_possible || !boot_cpu_has(X86_FEATURE_STIBP))
++ if (mode != SPECTRE_V2_USER_STRICT &&
++ boot_cpu_has(X86_FEATURE_AMD_STIBP_ALWAYS_ON))
++ mode = SPECTRE_V2_USER_STRICT_PREFERRED;
++
++ /*
++ * If STIBP is not available, clear the STIBP mode.
++ */
++ if (!boot_cpu_has(X86_FEATURE_STIBP))
+ mode = SPECTRE_V2_USER_NONE;
++
++ spectre_v2_user_stibp = mode;
++
+ set_mode:
+- spectre_v2_user = mode;
+- /* Only print the STIBP mode when SMT possible */
+- if (smt_possible)
+- pr_info("%s\n", spectre_v2_user_strings[mode]);
++ pr_info("%s\n", spectre_v2_user_strings[mode]);
+ }
+
+ static const char * const spectre_v2_strings[] = {
+@@ -1014,7 +1020,7 @@ void cpu_bugs_smt_update(void)
+ {
+ mutex_lock(&spec_ctrl_mutex);
+
+- switch (spectre_v2_user) {
++ switch (spectre_v2_user_stibp) {
+ case SPECTRE_V2_USER_NONE:
+ break;
+ case SPECTRE_V2_USER_STRICT:
+@@ -1257,14 +1263,19 @@ static int ib_prctl_set(struct task_struct *task, unsigned long ctrl)
+ {
+ switch (ctrl) {
+ case PR_SPEC_ENABLE:
+- if (spectre_v2_user == SPECTRE_V2_USER_NONE)
++ if (spectre_v2_user_ibpb == SPECTRE_V2_USER_NONE &&
++ spectre_v2_user_stibp == SPECTRE_V2_USER_NONE)
+ return 0;
+ /*
+ * Indirect branch speculation is always disabled in strict
+- * mode.
++ * mode. It can neither be enabled if it was force-disabled
++ * by a previous prctl call.
++
+ */
+- if (spectre_v2_user == SPECTRE_V2_USER_STRICT ||
+- spectre_v2_user == SPECTRE_V2_USER_STRICT_PREFERRED)
++ if (spectre_v2_user_ibpb == SPECTRE_V2_USER_STRICT ||
++ spectre_v2_user_stibp == SPECTRE_V2_USER_STRICT ||
++ spectre_v2_user_stibp == SPECTRE_V2_USER_STRICT_PREFERRED ||
++ task_spec_ib_force_disable(task))
+ return -EPERM;
+ task_clear_spec_ib_disable(task);
+ task_update_spec_tif(task);
+@@ -1275,10 +1286,12 @@ static int ib_prctl_set(struct task_struct *task, unsigned long ctrl)
+ * Indirect branch speculation is always allowed when
+ * mitigation is force disabled.
+ */
+- if (spectre_v2_user == SPECTRE_V2_USER_NONE)
++ if (spectre_v2_user_ibpb == SPECTRE_V2_USER_NONE &&
++ spectre_v2_user_stibp == SPECTRE_V2_USER_NONE)
+ return -EPERM;
+- if (spectre_v2_user == SPECTRE_V2_USER_STRICT ||
+- spectre_v2_user == SPECTRE_V2_USER_STRICT_PREFERRED)
++ if (spectre_v2_user_ibpb == SPECTRE_V2_USER_STRICT ||
++ spectre_v2_user_stibp == SPECTRE_V2_USER_STRICT ||
++ spectre_v2_user_stibp == SPECTRE_V2_USER_STRICT_PREFERRED)
+ return 0;
+ task_set_spec_ib_disable(task);
+ if (ctrl == PR_SPEC_FORCE_DISABLE)
+@@ -1309,7 +1322,8 @@ void arch_seccomp_spec_mitigate(struct task_struct *task)
+ {
+ if (ssb_mode == SPEC_STORE_BYPASS_SECCOMP)
+ ssb_prctl_set(task, PR_SPEC_FORCE_DISABLE);
+- if (spectre_v2_user == SPECTRE_V2_USER_SECCOMP)
++ if (spectre_v2_user_ibpb == SPECTRE_V2_USER_SECCOMP ||
++ spectre_v2_user_stibp == SPECTRE_V2_USER_SECCOMP)
+ ib_prctl_set(task, PR_SPEC_FORCE_DISABLE);
+ }
+ #endif
+@@ -1340,22 +1354,24 @@ static int ib_prctl_get(struct task_struct *task)
+ if (!boot_cpu_has_bug(X86_BUG_SPECTRE_V2))
+ return PR_SPEC_NOT_AFFECTED;
+
+- switch (spectre_v2_user) {
+- case SPECTRE_V2_USER_NONE:
++ if (spectre_v2_user_ibpb == SPECTRE_V2_USER_NONE &&
++ spectre_v2_user_stibp == SPECTRE_V2_USER_NONE)
+ return PR_SPEC_ENABLE;
+- case SPECTRE_V2_USER_PRCTL:
+- case SPECTRE_V2_USER_SECCOMP:
++ else if (spectre_v2_user_ibpb == SPECTRE_V2_USER_STRICT ||
++ spectre_v2_user_stibp == SPECTRE_V2_USER_STRICT ||
++ spectre_v2_user_stibp == SPECTRE_V2_USER_STRICT_PREFERRED)
++ return PR_SPEC_DISABLE;
++ else if (spectre_v2_user_ibpb == SPECTRE_V2_USER_PRCTL ||
++ spectre_v2_user_ibpb == SPECTRE_V2_USER_SECCOMP ||
++ spectre_v2_user_stibp == SPECTRE_V2_USER_PRCTL ||
++ spectre_v2_user_stibp == SPECTRE_V2_USER_SECCOMP) {
+ if (task_spec_ib_force_disable(task))
+ return PR_SPEC_PRCTL | PR_SPEC_FORCE_DISABLE;
+ if (task_spec_ib_disable(task))
+ return PR_SPEC_PRCTL | PR_SPEC_DISABLE;
+ return PR_SPEC_PRCTL | PR_SPEC_ENABLE;
+- case SPECTRE_V2_USER_STRICT:
+- case SPECTRE_V2_USER_STRICT_PREFERRED:
+- return PR_SPEC_DISABLE;
+- default:
++ } else
+ return PR_SPEC_NOT_AFFECTED;
+- }
+ }
+
+ int arch_prctl_spec_ctrl_get(struct task_struct *task, unsigned long which)
+@@ -1594,7 +1610,7 @@ static char *stibp_state(void)
+ if (spectre_v2_enabled == SPECTRE_V2_IBRS_ENHANCED)
+ return "";
+
+- switch (spectre_v2_user) {
++ switch (spectre_v2_user_stibp) {
+ case SPECTRE_V2_USER_NONE:
+ return ", STIBP: disabled";
+ case SPECTRE_V2_USER_STRICT:
+diff --git a/arch/x86/kernel/cpu/mce/core.c b/arch/x86/kernel/cpu/mce/core.c
+index 54165f3569e8..c1a480a27164 100644
+--- a/arch/x86/kernel/cpu/mce/core.c
++++ b/arch/x86/kernel/cpu/mce/core.c
+@@ -529,6 +529,13 @@ bool mce_is_memory_error(struct mce *m)
+ }
+ EXPORT_SYMBOL_GPL(mce_is_memory_error);
+
++static bool whole_page(struct mce *m)
++{
++ if (!mca_cfg.ser || !(m->status & MCI_STATUS_MISCV))
++ return true;
++ return MCI_MISC_ADDR_LSB(m->misc) >= PAGE_SHIFT;
++}
++
+ bool mce_is_correctable(struct mce *m)
+ {
+ if (m->cpuvendor == X86_VENDOR_AMD && m->status & MCI_STATUS_DEFERRED)
+@@ -600,7 +607,7 @@ static int uc_decode_notifier(struct notifier_block *nb, unsigned long val,
+
+ pfn = mce->addr >> PAGE_SHIFT;
+ if (!memory_failure(pfn, 0))
+- set_mce_nospec(pfn);
++ set_mce_nospec(pfn, whole_page(mce));
+
+ return NOTIFY_OK;
+ }
+@@ -1098,7 +1105,7 @@ static int do_memory_failure(struct mce *m)
+ if (ret)
+ pr_err("Memory error not recovered");
+ else
+- set_mce_nospec(m->addr >> PAGE_SHIFT);
++ set_mce_nospec(m->addr >> PAGE_SHIFT, whole_page(m));
+ return ret;
+ }
+
+diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
+index 35638f1c5791..8f4533c1a4ec 100644
+--- a/arch/x86/kernel/process.c
++++ b/arch/x86/kernel/process.c
+@@ -545,28 +545,20 @@ static __always_inline void __speculation_ctrl_update(unsigned long tifp,
+
+ lockdep_assert_irqs_disabled();
+
+- /*
+- * If TIF_SSBD is different, select the proper mitigation
+- * method. Note that if SSBD mitigation is disabled or permanentely
+- * enabled this branch can't be taken because nothing can set
+- * TIF_SSBD.
+- */
+- if (tif_diff & _TIF_SSBD) {
+- if (static_cpu_has(X86_FEATURE_VIRT_SSBD)) {
++ /* Handle change of TIF_SSBD depending on the mitigation method. */
++ if (static_cpu_has(X86_FEATURE_VIRT_SSBD)) {
++ if (tif_diff & _TIF_SSBD)
+ amd_set_ssb_virt_state(tifn);
+- } else if (static_cpu_has(X86_FEATURE_LS_CFG_SSBD)) {
++ } else if (static_cpu_has(X86_FEATURE_LS_CFG_SSBD)) {
++ if (tif_diff & _TIF_SSBD)
+ amd_set_core_ssb_state(tifn);
+- } else if (static_cpu_has(X86_FEATURE_SPEC_CTRL_SSBD) ||
+- static_cpu_has(X86_FEATURE_AMD_SSBD)) {
+- msr |= ssbd_tif_to_spec_ctrl(tifn);
+- updmsr = true;
+- }
++ } else if (static_cpu_has(X86_FEATURE_SPEC_CTRL_SSBD) ||
++ static_cpu_has(X86_FEATURE_AMD_SSBD)) {
++ updmsr |= !!(tif_diff & _TIF_SSBD);
++ msr |= ssbd_tif_to_spec_ctrl(tifn);
+ }
+
+- /*
+- * Only evaluate TIF_SPEC_IB if conditional STIBP is enabled,
+- * otherwise avoid the MSR write.
+- */
++ /* Only evaluate TIF_SPEC_IB if conditional STIBP is enabled. */
+ if (IS_ENABLED(CONFIG_SMP) &&
+ static_branch_unlikely(&switch_to_cond_stibp)) {
+ updmsr |= !!(tif_diff & _TIF_SPEC_IB);
+diff --git a/arch/x86/kernel/reboot.c b/arch/x86/kernel/reboot.c
+index 3ca43be4f9cf..8b8cebfd3298 100644
+--- a/arch/x86/kernel/reboot.c
++++ b/arch/x86/kernel/reboot.c
+@@ -197,6 +197,14 @@ static const struct dmi_system_id reboot_dmi_table[] __initconst = {
+ DMI_MATCH(DMI_PRODUCT_NAME, "MacBook5"),
+ },
+ },
++ { /* Handle problems with rebooting on Apple MacBook6,1 */
++ .callback = set_pci_reboot,
++ .ident = "Apple MacBook6,1",
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "Apple Inc."),
++ DMI_MATCH(DMI_PRODUCT_NAME, "MacBook6,1"),
++ },
++ },
+ { /* Handle problems with rebooting on Apple MacBookPro5 */
+ .callback = set_pci_reboot,
+ .ident = "Apple MacBookPro5",
+diff --git a/arch/x86/kernel/time.c b/arch/x86/kernel/time.c
+index 106e7f87f534..f39572982635 100644
+--- a/arch/x86/kernel/time.c
++++ b/arch/x86/kernel/time.c
+@@ -25,10 +25,6 @@
+ #include <asm/hpet.h>
+ #include <asm/time.h>
+
+-#ifdef CONFIG_X86_64
+-__visible volatile unsigned long jiffies __cacheline_aligned_in_smp = INITIAL_JIFFIES;
+-#endif
+-
+ unsigned long profile_pc(struct pt_regs *regs)
+ {
+ unsigned long pc = instruction_pointer(regs);
+diff --git a/arch/x86/kernel/vmlinux.lds.S b/arch/x86/kernel/vmlinux.lds.S
+index 1bf7e312361f..7c35556c7827 100644
+--- a/arch/x86/kernel/vmlinux.lds.S
++++ b/arch/x86/kernel/vmlinux.lds.S
+@@ -40,13 +40,13 @@ OUTPUT_FORMAT(CONFIG_OUTPUT_FORMAT)
+ #ifdef CONFIG_X86_32
+ OUTPUT_ARCH(i386)
+ ENTRY(phys_startup_32)
+-jiffies = jiffies_64;
+ #else
+ OUTPUT_ARCH(i386:x86-64)
+ ENTRY(phys_startup_64)
+-jiffies_64 = jiffies;
+ #endif
+
++jiffies = jiffies_64;
++
+ #if defined(CONFIG_X86_64)
+ /*
+ * On 64-bit, align RODATA to 2MB so we retain large page mappings for
+diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
+index 8071952e9cf2..92d056954194 100644
+--- a/arch/x86/kvm/mmu/mmu.c
++++ b/arch/x86/kvm/mmu/mmu.c
+@@ -335,6 +335,8 @@ void kvm_mmu_set_mmio_spte_mask(u64 mmio_mask, u64 mmio_value, u64 access_mask)
+ {
+ BUG_ON((u64)(unsigned)access_mask != access_mask);
+ BUG_ON((mmio_mask & mmio_value) != mmio_value);
++ WARN_ON(mmio_value & (shadow_nonpresent_or_rsvd_mask << shadow_nonpresent_or_rsvd_mask_len));
++ WARN_ON(mmio_value & shadow_nonpresent_or_rsvd_lower_gfn_mask);
+ shadow_mmio_value = mmio_value | SPTE_MMIO_MASK;
+ shadow_mmio_mask = mmio_mask | SPTE_SPECIAL_MASK;
+ shadow_mmio_access_mask = access_mask;
+@@ -583,16 +585,15 @@ static void kvm_mmu_reset_all_pte_masks(void)
+ * the most significant bits of legal physical address space.
+ */
+ shadow_nonpresent_or_rsvd_mask = 0;
+- low_phys_bits = boot_cpu_data.x86_cache_bits;
+- if (boot_cpu_data.x86_cache_bits <
+- 52 - shadow_nonpresent_or_rsvd_mask_len) {
++ low_phys_bits = boot_cpu_data.x86_phys_bits;
++ if (boot_cpu_has_bug(X86_BUG_L1TF) &&
++ !WARN_ON_ONCE(boot_cpu_data.x86_cache_bits >=
++ 52 - shadow_nonpresent_or_rsvd_mask_len)) {
++ low_phys_bits = boot_cpu_data.x86_cache_bits
++ - shadow_nonpresent_or_rsvd_mask_len;
+ shadow_nonpresent_or_rsvd_mask =
+- rsvd_bits(boot_cpu_data.x86_cache_bits -
+- shadow_nonpresent_or_rsvd_mask_len,
+- boot_cpu_data.x86_cache_bits - 1);
+- low_phys_bits -= shadow_nonpresent_or_rsvd_mask_len;
+- } else
+- WARN_ON_ONCE(boot_cpu_has_bug(X86_BUG_L1TF));
++ rsvd_bits(low_phys_bits, boot_cpu_data.x86_cache_bits - 1);
++ }
+
+ shadow_nonpresent_or_rsvd_lower_gfn_mask =
+ GENMASK_ULL(low_phys_bits - 1, PAGE_SHIFT);
+@@ -6142,25 +6143,16 @@ static void kvm_set_mmio_spte_mask(void)
+ u64 mask;
+
+ /*
+- * Set the reserved bits and the present bit of an paging-structure
+- * entry to generate page fault with PFER.RSV = 1.
+- */
+-
+- /*
+- * Mask the uppermost physical address bit, which would be reserved as
+- * long as the supported physical address width is less than 52.
++ * Set a reserved PA bit in MMIO SPTEs to generate page faults with
++ * PFEC.RSVD=1 on MMIO accesses. 64-bit PTEs (PAE, x86-64, and EPT
++ * paging) support a maximum of 52 bits of PA, i.e. if the CPU supports
++ * 52-bit physical addresses then there are no reserved PA bits in the
++ * PTEs and so the reserved PA approach must be disabled.
+ */
+- mask = 1ull << 51;
+-
+- /* Set the present bit. */
+- mask |= 1ull;
+-
+- /*
+- * If reserved bit is not supported, clear the present bit to disable
+- * mmio page fault.
+- */
+- if (shadow_phys_bits == 52)
+- mask &= ~1ull;
++ if (shadow_phys_bits < 52)
++ mask = BIT_ULL(51) | PT_PRESENT_MASK;
++ else
++ mask = 0;
+
+ kvm_mmu_set_mmio_spte_mask(mask, mask, ACC_WRITE_MASK | ACC_USER_MASK);
+ }
+diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
+index 9a2a62e5afeb..c2a31a2a3cef 100644
+--- a/arch/x86/kvm/svm/nested.c
++++ b/arch/x86/kvm/svm/nested.c
+@@ -150,7 +150,7 @@ static void copy_vmcb_control_area(struct vmcb *dst_vmcb, struct vmcb *from_vmcb
+ dst->iopm_base_pa = from->iopm_base_pa;
+ dst->msrpm_base_pa = from->msrpm_base_pa;
+ dst->tsc_offset = from->tsc_offset;
+- dst->asid = from->asid;
++ /* asid not copied, it is handled manually for svm->vmcb. */
+ dst->tlb_ctl = from->tlb_ctl;
+ dst->int_ctl = from->int_ctl;
+ dst->int_vector = from->int_vector;
+@@ -834,8 +834,8 @@ int nested_svm_exit_special(struct vcpu_svm *svm)
+ return NESTED_EXIT_HOST;
+ break;
+ case SVM_EXIT_EXCP_BASE + PF_VECTOR:
+- /* When we're shadowing, trap PFs, but not async PF */
+- if (!npt_enabled && svm->vcpu.arch.apf.host_apf_reason == 0)
++ /* Trap async PF even if not shadowing */
++ if (!npt_enabled || svm->vcpu.arch.apf.host_apf_reason)
+ return NESTED_EXIT_HOST;
+ break;
+ default:
+diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
+index e44f33c82332..bd3ea83ca223 100644
+--- a/arch/x86/kvm/vmx/nested.c
++++ b/arch/x86/kvm/vmx/nested.c
+@@ -303,7 +303,7 @@ static void vmx_switch_vmcs(struct kvm_vcpu *vcpu, struct loaded_vmcs *vmcs)
+ cpu = get_cpu();
+ prev = vmx->loaded_vmcs;
+ vmx->loaded_vmcs = vmcs;
+- vmx_vcpu_load_vmcs(vcpu, cpu);
++ vmx_vcpu_load_vmcs(vcpu, cpu, prev);
+ vmx_sync_vmcs_host_state(vmx, prev);
+ put_cpu();
+
+@@ -5577,7 +5577,7 @@ bool nested_vmx_exit_reflected(struct kvm_vcpu *vcpu, u32 exit_reason)
+ vmcs_read32(VM_EXIT_INTR_ERROR_CODE),
+ KVM_ISA_VMX);
+
+- switch (exit_reason) {
++ switch ((u16)exit_reason) {
+ case EXIT_REASON_EXCEPTION_NMI:
+ if (is_nmi(intr_info))
+ return false;
+diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
+index 89c766fad889..d7aa0dfab8bb 100644
+--- a/arch/x86/kvm/vmx/vmx.c
++++ b/arch/x86/kvm/vmx/vmx.c
+@@ -1306,10 +1306,12 @@ after_clear_sn:
+ pi_set_on(pi_desc);
+ }
+
+-void vmx_vcpu_load_vmcs(struct kvm_vcpu *vcpu, int cpu)
++void vmx_vcpu_load_vmcs(struct kvm_vcpu *vcpu, int cpu,
++ struct loaded_vmcs *buddy)
+ {
+ struct vcpu_vmx *vmx = to_vmx(vcpu);
+ bool already_loaded = vmx->loaded_vmcs->cpu == cpu;
++ struct vmcs *prev;
+
+ if (!already_loaded) {
+ loaded_vmcs_clear(vmx->loaded_vmcs);
+@@ -1328,10 +1330,18 @@ void vmx_vcpu_load_vmcs(struct kvm_vcpu *vcpu, int cpu)
+ local_irq_enable();
+ }
+
+- if (per_cpu(current_vmcs, cpu) != vmx->loaded_vmcs->vmcs) {
++ prev = per_cpu(current_vmcs, cpu);
++ if (prev != vmx->loaded_vmcs->vmcs) {
+ per_cpu(current_vmcs, cpu) = vmx->loaded_vmcs->vmcs;
+ vmcs_load(vmx->loaded_vmcs->vmcs);
+- indirect_branch_prediction_barrier();
++
++ /*
++ * No indirect branch prediction barrier needed when switching
++ * the active VMCS within a guest, e.g. on nested VM-Enter.
++ * The L1 VMM can protect itself with retpolines, IBPB or IBRS.
++ */
++ if (!buddy || WARN_ON_ONCE(buddy->vmcs != prev))
++ indirect_branch_prediction_barrier();
+ }
+
+ if (!already_loaded) {
+@@ -1368,7 +1378,7 @@ void vmx_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
+ {
+ struct vcpu_vmx *vmx = to_vmx(vcpu);
+
+- vmx_vcpu_load_vmcs(vcpu, cpu);
++ vmx_vcpu_load_vmcs(vcpu, cpu, NULL);
+
+ vmx_vcpu_pi_load(vcpu, cpu);
+
+@@ -7138,6 +7148,9 @@ static __init void vmx_set_cpu_caps(void)
+ /* CPUID 0x80000001 */
+ if (!cpu_has_vmx_rdtscp())
+ kvm_cpu_cap_clear(X86_FEATURE_RDTSCP);
++
++ if (vmx_waitpkg_supported())
++ kvm_cpu_cap_check_and_set(X86_FEATURE_WAITPKG);
+ }
+
+ static void vmx_request_immediate_exit(struct kvm_vcpu *vcpu)
+diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
+index aab9df55336e..6bd7e552c534 100644
+--- a/arch/x86/kvm/vmx/vmx.h
++++ b/arch/x86/kvm/vmx/vmx.h
+@@ -317,7 +317,8 @@ struct kvm_vmx {
+ };
+
+ bool nested_vmx_allowed(struct kvm_vcpu *vcpu);
+-void vmx_vcpu_load_vmcs(struct kvm_vcpu *vcpu, int cpu);
++void vmx_vcpu_load_vmcs(struct kvm_vcpu *vcpu, int cpu,
++ struct loaded_vmcs *buddy);
+ void vmx_vcpu_load(struct kvm_vcpu *vcpu, int cpu);
+ int allocate_vpid(void);
+ void free_vpid(int vpid);
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index c17e6eb9ad43..97c5a92146f9 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -4586,7 +4586,7 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
+
+ if (kvm_state.flags &
+ ~(KVM_STATE_NESTED_RUN_PENDING | KVM_STATE_NESTED_GUEST_MODE
+- | KVM_STATE_NESTED_EVMCS))
++ | KVM_STATE_NESTED_EVMCS | KVM_STATE_NESTED_MTF_PENDING))
+ break;
+
+ /* nested_run_pending implies guest_mode. */
+@@ -5242,6 +5242,10 @@ static void kvm_init_msr_list(void)
+ if (!kvm_cpu_cap_has(X86_FEATURE_RDTSCP))
+ continue;
+ break;
++ case MSR_IA32_UMWAIT_CONTROL:
++ if (!kvm_cpu_cap_has(X86_FEATURE_WAITPKG))
++ continue;
++ break;
+ case MSR_IA32_RTIT_CTL:
+ case MSR_IA32_RTIT_STATUS:
+ if (!kvm_cpu_cap_has(X86_FEATURE_INTEL_PT))
+@@ -6919,7 +6923,7 @@ restart:
+ if (!ctxt->have_exception ||
+ exception_type(ctxt->exception.vector) == EXCPT_TRAP) {
+ kvm_rip_write(vcpu, ctxt->eip);
+- if (r && ctxt->tf)
++ if (r && (ctxt->tf || (vcpu->guest_debug & KVM_GUESTDBG_SINGLESTEP)))
+ r = kvm_vcpu_do_singlestep(vcpu);
+ if (kvm_x86_ops.update_emulated_instruction)
+ kvm_x86_ops.update_emulated_instruction(vcpu);
+@@ -8150,9 +8154,8 @@ static void vcpu_load_eoi_exitmap(struct kvm_vcpu *vcpu)
+ kvm_x86_ops.load_eoi_exitmap(vcpu, eoi_exit_bitmap);
+ }
+
+-int kvm_arch_mmu_notifier_invalidate_range(struct kvm *kvm,
+- unsigned long start, unsigned long end,
+- bool blockable)
++void kvm_arch_mmu_notifier_invalidate_range(struct kvm *kvm,
++ unsigned long start, unsigned long end)
+ {
+ unsigned long apic_address;
+
+@@ -8163,8 +8166,6 @@ int kvm_arch_mmu_notifier_invalidate_range(struct kvm *kvm,
+ apic_address = gfn_to_hva(kvm, APIC_DEFAULT_PHYS_BASE >> PAGE_SHIFT);
+ if (start <= apic_address && apic_address < end)
+ kvm_make_all_cpus_request(kvm, KVM_REQ_APIC_PAGE_RELOAD);
+-
+- return 0;
+ }
+
+ void kvm_vcpu_reload_apic_access_page(struct kvm_vcpu *vcpu)
+diff --git a/arch/x86/mm/dump_pagetables.c b/arch/x86/mm/dump_pagetables.c
+index 69309cd56fdf..33093fdedb02 100644
+--- a/arch/x86/mm/dump_pagetables.c
++++ b/arch/x86/mm/dump_pagetables.c
+@@ -249,10 +249,22 @@ static void note_wx(struct pg_state *st, unsigned long addr)
+ (void *)st->start_address);
+ }
+
+-static inline pgprotval_t effective_prot(pgprotval_t prot1, pgprotval_t prot2)
++static void effective_prot(struct ptdump_state *pt_st, int level, u64 val)
+ {
+- return (prot1 & prot2 & (_PAGE_USER | _PAGE_RW)) |
+- ((prot1 | prot2) & _PAGE_NX);
++ struct pg_state *st = container_of(pt_st, struct pg_state, ptdump);
++ pgprotval_t prot = val & PTE_FLAGS_MASK;
++ pgprotval_t effective;
++
++ if (level > 0) {
++ pgprotval_t higher_prot = st->prot_levels[level - 1];
++
++ effective = (higher_prot & prot & (_PAGE_USER | _PAGE_RW)) |
++ ((higher_prot | prot) & _PAGE_NX);
++ } else {
++ effective = prot;
++ }
++
++ st->prot_levels[level] = effective;
+ }
+
+ /*
+@@ -270,16 +282,10 @@ static void note_page(struct ptdump_state *pt_st, unsigned long addr, int level,
+ struct seq_file *m = st->seq;
+
+ new_prot = val & PTE_FLAGS_MASK;
+-
+- if (level > 0) {
+- new_eff = effective_prot(st->prot_levels[level - 1],
+- new_prot);
+- } else {
+- new_eff = new_prot;
+- }
+-
+- if (level >= 0)
+- st->prot_levels[level] = new_eff;
++ if (!val)
++ new_eff = 0;
++ else
++ new_eff = st->prot_levels[level];
+
+ /*
+ * If we have a "break" in the series, we need to flush the state that
+@@ -374,6 +380,7 @@ static void ptdump_walk_pgd_level_core(struct seq_file *m,
+ struct pg_state st = {
+ .ptdump = {
+ .note_page = note_page,
++ .effective_prot = effective_prot,
+ .range = ptdump_ranges
+ },
+ .level = -1,
+diff --git a/arch/x86/pci/fixup.c b/arch/x86/pci/fixup.c
+index e723559c386a..0c67a5a94de3 100644
+--- a/arch/x86/pci/fixup.c
++++ b/arch/x86/pci/fixup.c
+@@ -572,6 +572,10 @@ DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x2fc0, pci_invalid_bar);
+ DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x6f60, pci_invalid_bar);
+ DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x6fa0, pci_invalid_bar);
+ DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x6fc0, pci_invalid_bar);
++DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0xa1ec, pci_invalid_bar);
++DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0xa1ed, pci_invalid_bar);
++DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0xa26c, pci_invalid_bar);
++DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0xa26d, pci_invalid_bar);
+
+ /*
+ * Device [1022:7808]
+diff --git a/crypto/algapi.c b/crypto/algapi.c
+index f8b4dc161c02..f1e6ccaff853 100644
+--- a/crypto/algapi.c
++++ b/crypto/algapi.c
+@@ -403,7 +403,7 @@ static void crypto_wait_for_test(struct crypto_larval *larval)
+ err = wait_for_completion_killable(&larval->completion);
+ WARN_ON(err);
+ if (!err)
+- crypto_probing_notify(CRYPTO_MSG_ALG_LOADED, larval);
++ crypto_notify(CRYPTO_MSG_ALG_LOADED, larval);
+
+ out:
+ crypto_larval_kill(&larval->alg);
+diff --git a/crypto/drbg.c b/crypto/drbg.c
+index b6929eb5f565..04379ca624cd 100644
+--- a/crypto/drbg.c
++++ b/crypto/drbg.c
+@@ -1294,8 +1294,10 @@ static inline int drbg_alloc_state(struct drbg_state *drbg)
+ if (IS_ENABLED(CONFIG_CRYPTO_FIPS)) {
+ drbg->prev = kzalloc(drbg_sec_strength(drbg->core->flags),
+ GFP_KERNEL);
+- if (!drbg->prev)
++ if (!drbg->prev) {
++ ret = -ENOMEM;
+ goto fini;
++ }
+ drbg->fips_primed = false;
+ }
+
+diff --git a/drivers/acpi/cppc_acpi.c b/drivers/acpi/cppc_acpi.c
+index 8b2e89c20c11..067067bc03d4 100644
+--- a/drivers/acpi/cppc_acpi.c
++++ b/drivers/acpi/cppc_acpi.c
+@@ -846,6 +846,7 @@ int acpi_cppc_processor_probe(struct acpi_processor *pr)
+ "acpi_cppc");
+ if (ret) {
+ per_cpu(cpc_desc_ptr, pr->id) = NULL;
++ kobject_put(&cpc_ptr->kobj);
+ goto out_free;
+ }
+
+diff --git a/drivers/acpi/device_pm.c b/drivers/acpi/device_pm.c
+index 5832bc10aca8..95e200b618bd 100644
+--- a/drivers/acpi/device_pm.c
++++ b/drivers/acpi/device_pm.c
+@@ -186,7 +186,7 @@ int acpi_device_set_power(struct acpi_device *device, int state)
+ * possibly drop references to the power resources in use.
+ */
+ state = ACPI_STATE_D3_HOT;
+- /* If _PR3 is not available, use D3hot as the target state. */
++ /* If D3cold is not supported, use D3hot as the target state. */
+ if (!device->power.states[ACPI_STATE_D3_COLD].flags.valid)
+ target_state = state;
+ } else if (!device->power.states[state].flags.valid) {
+diff --git a/drivers/acpi/evged.c b/drivers/acpi/evged.c
+index aba0d0027586..6d7a522952bf 100644
+--- a/drivers/acpi/evged.c
++++ b/drivers/acpi/evged.c
+@@ -79,6 +79,8 @@ static acpi_status acpi_ged_request_interrupt(struct acpi_resource *ares,
+ struct resource r;
+ struct acpi_resource_irq *p = &ares->data.irq;
+ struct acpi_resource_extended_irq *pext = &ares->data.extended_irq;
++ char ev_name[5];
++ u8 trigger;
+
+ if (ares->type == ACPI_RESOURCE_TYPE_END_TAG)
+ return AE_OK;
+@@ -87,14 +89,28 @@ static acpi_status acpi_ged_request_interrupt(struct acpi_resource *ares,
+ dev_err(dev, "unable to parse IRQ resource\n");
+ return AE_ERROR;
+ }
+- if (ares->type == ACPI_RESOURCE_TYPE_IRQ)
++ if (ares->type == ACPI_RESOURCE_TYPE_IRQ) {
+ gsi = p->interrupts[0];
+- else
++ trigger = p->triggering;
++ } else {
+ gsi = pext->interrupts[0];
++ trigger = pext->triggering;
++ }
+
+ irq = r.start;
+
+- if (ACPI_FAILURE(acpi_get_handle(handle, "_EVT", &evt_handle))) {
++ switch (gsi) {
++ case 0 ... 255:
++ sprintf(ev_name, "_%c%02hhX",
++ trigger == ACPI_EDGE_SENSITIVE ? 'E' : 'L', gsi);
++
++ if (ACPI_SUCCESS(acpi_get_handle(handle, ev_name, &evt_handle)))
++ break;
++ /* fall through */
++ default:
++ if (ACPI_SUCCESS(acpi_get_handle(handle, "_EVT", &evt_handle)))
++ break;
++
+ dev_err(dev, "cannot locate _EVT method\n");
+ return AE_ERROR;
+ }
+diff --git a/drivers/acpi/scan.c b/drivers/acpi/scan.c
+index 6d3448895382..1b255e98de4d 100644
+--- a/drivers/acpi/scan.c
++++ b/drivers/acpi/scan.c
+@@ -919,12 +919,9 @@ static void acpi_bus_init_power_state(struct acpi_device *device, int state)
+
+ if (buffer.length && package
+ && package->type == ACPI_TYPE_PACKAGE
+- && package->package.count) {
+- int err = acpi_extract_power_resources(package, 0,
+- &ps->resources);
+- if (!err)
+- device->power.flags.power_resources = 1;
+- }
++ && package->package.count)
++ acpi_extract_power_resources(package, 0, &ps->resources);
++
+ ACPI_FREE(buffer.pointer);
+ }
+
+@@ -971,14 +968,27 @@ static void acpi_bus_get_power_flags(struct acpi_device *device)
+ acpi_bus_init_power_state(device, i);
+
+ INIT_LIST_HEAD(&device->power.states[ACPI_STATE_D3_COLD].resources);
+- if (!list_empty(&device->power.states[ACPI_STATE_D3_HOT].resources))
+- device->power.states[ACPI_STATE_D3_COLD].flags.valid = 1;
+
+- /* Set defaults for D0 and D3hot states (always valid) */
++ /* Set the defaults for D0 and D3hot (always supported). */
+ device->power.states[ACPI_STATE_D0].flags.valid = 1;
+ device->power.states[ACPI_STATE_D0].power = 100;
+ device->power.states[ACPI_STATE_D3_HOT].flags.valid = 1;
+
++ /*
++ * Use power resources only if the D0 list of them is populated, because
++ * some platforms may provide _PR3 only to indicate D3cold support and
++ * in those cases the power resources list returned by it may be bogus.
++ */
++ if (!list_empty(&device->power.states[ACPI_STATE_D0].resources)) {
++ device->power.flags.power_resources = 1;
++ /*
++ * D3cold is supported if the D3hot list of power resources is
++ * not empty.
++ */
++ if (!list_empty(&device->power.states[ACPI_STATE_D3_HOT].resources))
++ device->power.states[ACPI_STATE_D3_COLD].flags.valid = 1;
++ }
++
+ if (acpi_bus_init_power(device))
+ device->flags.power_manageable = 0;
+ }
+diff --git a/drivers/acpi/sysfs.c b/drivers/acpi/sysfs.c
+index c60d2c6d31d6..3a89909b50a6 100644
+--- a/drivers/acpi/sysfs.c
++++ b/drivers/acpi/sysfs.c
+@@ -993,8 +993,10 @@ void acpi_sysfs_add_hotplug_profile(struct acpi_hotplug_profile *hotplug,
+
+ error = kobject_init_and_add(&hotplug->kobj,
+ &acpi_hotplug_profile_ktype, hotplug_kobj, "%s", name);
+- if (error)
++ if (error) {
++ kobject_put(&hotplug->kobj);
+ goto err_out;
++ }
+
+ kobject_uevent(&hotplug->kobj, KOBJ_ADD);
+ return;
+diff --git a/drivers/base/core.c b/drivers/base/core.c
+index 0cad34f1eede..213106ed8a56 100644
+--- a/drivers/base/core.c
++++ b/drivers/base/core.c
+@@ -643,9 +643,17 @@ static void device_links_missing_supplier(struct device *dev)
+ {
+ struct device_link *link;
+
+- list_for_each_entry(link, &dev->links.suppliers, c_node)
+- if (link->status == DL_STATE_CONSUMER_PROBE)
++ list_for_each_entry(link, &dev->links.suppliers, c_node) {
++ if (link->status != DL_STATE_CONSUMER_PROBE)
++ continue;
++
++ if (link->supplier->links.status == DL_DEV_DRIVER_BOUND) {
+ WRITE_ONCE(link->status, DL_STATE_AVAILABLE);
++ } else {
++ WARN_ON(!(link->flags & DL_FLAG_SYNC_STATE_ONLY));
++ WRITE_ONCE(link->status, DL_STATE_DORMANT);
++ }
++ }
+ }
+
+ /**
+@@ -684,11 +692,11 @@ int device_links_check_suppliers(struct device *dev)
+ device_links_write_lock();
+
+ list_for_each_entry(link, &dev->links.suppliers, c_node) {
+- if (!(link->flags & DL_FLAG_MANAGED) ||
+- link->flags & DL_FLAG_SYNC_STATE_ONLY)
++ if (!(link->flags & DL_FLAG_MANAGED))
+ continue;
+
+- if (link->status != DL_STATE_AVAILABLE) {
++ if (link->status != DL_STATE_AVAILABLE &&
++ !(link->flags & DL_FLAG_SYNC_STATE_ONLY)) {
+ device_links_missing_supplier(dev);
+ ret = -EPROBE_DEFER;
+ break;
+@@ -949,11 +957,21 @@ static void __device_links_no_driver(struct device *dev)
+ if (!(link->flags & DL_FLAG_MANAGED))
+ continue;
+
+- if (link->flags & DL_FLAG_AUTOREMOVE_CONSUMER)
++ if (link->flags & DL_FLAG_AUTOREMOVE_CONSUMER) {
+ device_link_drop_managed(link);
+- else if (link->status == DL_STATE_CONSUMER_PROBE ||
+- link->status == DL_STATE_ACTIVE)
++ continue;
++ }
++
++ if (link->status != DL_STATE_CONSUMER_PROBE &&
++ link->status != DL_STATE_ACTIVE)
++ continue;
++
++ if (link->supplier->links.status == DL_DEV_DRIVER_BOUND) {
+ WRITE_ONCE(link->status, DL_STATE_AVAILABLE);
++ } else {
++ WARN_ON(!(link->flags & DL_FLAG_SYNC_STATE_ONLY));
++ WRITE_ONCE(link->status, DL_STATE_DORMANT);
++ }
+ }
+
+ dev->links.status = DL_DEV_NO_DRIVER;
+diff --git a/drivers/block/floppy.c b/drivers/block/floppy.c
+index c3daa64cb52c..975cd0a6baa1 100644
+--- a/drivers/block/floppy.c
++++ b/drivers/block/floppy.c
+@@ -2938,17 +2938,17 @@ static blk_status_t floppy_queue_rq(struct blk_mq_hw_ctx *hctx,
+ (unsigned long long) current_req->cmd_flags))
+ return BLK_STS_IOERR;
+
+- spin_lock_irq(&floppy_lock);
+- list_add_tail(&bd->rq->queuelist, &floppy_reqs);
+- spin_unlock_irq(&floppy_lock);
+-
+ if (test_and_set_bit(0, &fdc_busy)) {
+ /* fdc busy, this new request will be treated when the
+ current one is done */
+ is_alive(__func__, "old request running");
+- return BLK_STS_OK;
++ return BLK_STS_RESOURCE;
+ }
+
++ spin_lock_irq(&floppy_lock);
++ list_add_tail(&bd->rq->queuelist, &floppy_reqs);
++ spin_unlock_irq(&floppy_lock);
++
+ command_status = FD_COMMAND_NONE;
+ __reschedule_timeout(MAXTIMEOUT, "fd_request");
+ set_fdc(0);
+diff --git a/drivers/char/agp/intel-gtt.c b/drivers/char/agp/intel-gtt.c
+index 66a62d17a3f5..3d42fc4290bc 100644
+--- a/drivers/char/agp/intel-gtt.c
++++ b/drivers/char/agp/intel-gtt.c
+@@ -846,6 +846,7 @@ void intel_gtt_insert_page(dma_addr_t addr,
+ unsigned int flags)
+ {
+ intel_private.driver->write_entry(addr, pg, flags);
++ readl(intel_private.gtt + pg);
+ if (intel_private.driver->chipset_flush)
+ intel_private.driver->chipset_flush();
+ }
+@@ -871,7 +872,7 @@ void intel_gtt_insert_sg_entries(struct sg_table *st,
+ j++;
+ }
+ }
+- wmb();
++ readl(intel_private.gtt + j - 1);
+ if (intel_private.driver->chipset_flush)
+ intel_private.driver->chipset_flush();
+ }
+@@ -1105,6 +1106,7 @@ static void i9xx_cleanup(void)
+
+ static void i9xx_chipset_flush(void)
+ {
++ wmb();
+ if (intel_private.i9xx_flush_page)
+ writel(1, intel_private.i9xx_flush_page);
+ }
+diff --git a/drivers/clk/clk.c b/drivers/clk/clk.c
+index 2dfb30b963c4..407f6919604c 100644
+--- a/drivers/clk/clk.c
++++ b/drivers/clk/clk.c
+@@ -114,7 +114,11 @@ static int clk_pm_runtime_get(struct clk_core *core)
+ return 0;
+
+ ret = pm_runtime_get_sync(core->dev);
+- return ret < 0 ? ret : 0;
++ if (ret < 0) {
++ pm_runtime_put_noidle(core->dev);
++ return ret;
++ }
++ return 0;
+ }
+
+ static void clk_pm_runtime_put(struct clk_core *core)
+diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c
+index 045f9fe157ce..d03f250f68e4 100644
+--- a/drivers/cpufreq/cpufreq.c
++++ b/drivers/cpufreq/cpufreq.c
+@@ -2535,26 +2535,27 @@ EXPORT_SYMBOL_GPL(cpufreq_update_limits);
+ static int cpufreq_boost_set_sw(int state)
+ {
+ struct cpufreq_policy *policy;
+- int ret = -EINVAL;
+
+ for_each_active_policy(policy) {
++ int ret;
++
+ if (!policy->freq_table)
+- continue;
++ return -ENXIO;
+
+ ret = cpufreq_frequency_table_cpuinfo(policy,
+ policy->freq_table);
+ if (ret) {
+ pr_err("%s: Policy frequency update failed\n",
+ __func__);
+- break;
++ return ret;
+ }
+
+ ret = freq_qos_update_request(policy->max_freq_req, policy->max);
+ if (ret < 0)
+- break;
++ return ret;
+ }
+
+- return ret;
++ return 0;
+ }
+
+ int cpufreq_boost_trigger_state(int state)
+diff --git a/drivers/crypto/cavium/nitrox/nitrox_main.c b/drivers/crypto/cavium/nitrox/nitrox_main.c
+index e91be9b8b083..eeba262bd458 100644
+--- a/drivers/crypto/cavium/nitrox/nitrox_main.c
++++ b/drivers/crypto/cavium/nitrox/nitrox_main.c
+@@ -278,7 +278,7 @@ static void nitrox_remove_from_devlist(struct nitrox_device *ndev)
+
+ struct nitrox_device *nitrox_get_first_device(void)
+ {
+- struct nitrox_device *ndev = NULL;
++ struct nitrox_device *ndev;
+
+ mutex_lock(&devlist_lock);
+ list_for_each_entry(ndev, &ndevlist, list) {
+@@ -286,7 +286,7 @@ struct nitrox_device *nitrox_get_first_device(void)
+ break;
+ }
+ mutex_unlock(&devlist_lock);
+- if (!ndev)
++ if (&ndev->list == &ndevlist)
+ return NULL;
+
+ refcount_inc(&ndev->refcnt);
+diff --git a/drivers/crypto/virtio/virtio_crypto_algs.c b/drivers/crypto/virtio/virtio_crypto_algs.c
+index fd045e64972a..cb8a6ea2a4bc 100644
+--- a/drivers/crypto/virtio/virtio_crypto_algs.c
++++ b/drivers/crypto/virtio/virtio_crypto_algs.c
+@@ -350,13 +350,18 @@ __virtio_crypto_skcipher_do_req(struct virtio_crypto_sym_request *vc_sym_req,
+ int err;
+ unsigned long flags;
+ struct scatterlist outhdr, iv_sg, status_sg, **sgs;
+- int i;
+ u64 dst_len;
+ unsigned int num_out = 0, num_in = 0;
+ int sg_total;
+ uint8_t *iv;
++ struct scatterlist *sg;
+
+ src_nents = sg_nents_for_len(req->src, req->cryptlen);
++ if (src_nents < 0) {
++ pr_err("Invalid number of src SG.\n");
++ return src_nents;
++ }
++
+ dst_nents = sg_nents(req->dst);
+
+ pr_debug("virtio_crypto: Number of sgs (src_nents: %d, dst_nents: %d)\n",
+@@ -402,6 +407,7 @@ __virtio_crypto_skcipher_do_req(struct virtio_crypto_sym_request *vc_sym_req,
+ goto free;
+ }
+
++ dst_len = min_t(unsigned int, req->cryptlen, dst_len);
+ pr_debug("virtio_crypto: src_len: %u, dst_len: %llu\n",
+ req->cryptlen, dst_len);
+
+@@ -442,12 +448,12 @@ __virtio_crypto_skcipher_do_req(struct virtio_crypto_sym_request *vc_sym_req,
+ vc_sym_req->iv = iv;
+
+ /* Source data */
+- for (i = 0; i < src_nents; i++)
+- sgs[num_out++] = &req->src[i];
++ for (sg = req->src; src_nents; sg = sg_next(sg), src_nents--)
++ sgs[num_out++] = sg;
+
+ /* Destination data */
+- for (i = 0; i < dst_nents; i++)
+- sgs[num_out + num_in++] = &req->dst[i];
++ for (sg = req->dst; sg; sg = sg_next(sg))
++ sgs[num_out + num_in++] = sg;
+
+ /* Status */
+ sg_init_one(&status_sg, &vc_req->status, sizeof(vc_req->status));
+@@ -577,10 +583,11 @@ static void virtio_crypto_skcipher_finalize_req(
+ scatterwalk_map_and_copy(req->iv, req->dst,
+ req->cryptlen - AES_BLOCK_SIZE,
+ AES_BLOCK_SIZE, 0);
+- crypto_finalize_skcipher_request(vc_sym_req->base.dataq->engine,
+- req, err);
+ kzfree(vc_sym_req->iv);
+ virtcrypto_clear_request(&vc_sym_req->base);
++
++ crypto_finalize_skcipher_request(vc_sym_req->base.dataq->engine,
++ req, err);
+ }
+
+ static struct virtio_crypto_algo virtio_crypto_algs[] = { {
+diff --git a/drivers/edac/i10nm_base.c b/drivers/edac/i10nm_base.c
+index df08de963d10..916d37f0503b 100644
+--- a/drivers/edac/i10nm_base.c
++++ b/drivers/edac/i10nm_base.c
+@@ -161,7 +161,7 @@ static int i10nm_get_dimm_config(struct mem_ctl_info *mci)
+ mtr, mcddrtcfg, imc->mc, i, j);
+
+ if (IS_DIMM_PRESENT(mtr))
+- ndimms += skx_get_dimm_info(mtr, 0, dimm,
++ ndimms += skx_get_dimm_info(mtr, 0, 0, dimm,
+ imc, i, j);
+ else if (IS_NVDIMM_PRESENT(mcddrtcfg, j))
+ ndimms += skx_get_nvdimm_info(dimm, imc, i, j,
+diff --git a/drivers/edac/skx_base.c b/drivers/edac/skx_base.c
+index 46a3a3440f5e..a51954bc488c 100644
+--- a/drivers/edac/skx_base.c
++++ b/drivers/edac/skx_base.c
+@@ -163,27 +163,23 @@ static const struct x86_cpu_id skx_cpuids[] = {
+ };
+ MODULE_DEVICE_TABLE(x86cpu, skx_cpuids);
+
+-#define SKX_GET_MTMTR(dev, reg) \
+- pci_read_config_dword((dev), 0x87c, &(reg))
+-
+-static bool skx_check_ecc(struct pci_dev *pdev)
++static bool skx_check_ecc(u32 mcmtr)
+ {
+- u32 mtmtr;
+-
+- SKX_GET_MTMTR(pdev, mtmtr);
+-
+- return !!GET_BITFIELD(mtmtr, 2, 2);
++ return !!GET_BITFIELD(mcmtr, 2, 2);
+ }
+
+ static int skx_get_dimm_config(struct mem_ctl_info *mci)
+ {
+ struct skx_pvt *pvt = mci->pvt_info;
++ u32 mtr, mcmtr, amap, mcddrtcfg;
+ struct skx_imc *imc = pvt->imc;
+- u32 mtr, amap, mcddrtcfg;
+ struct dimm_info *dimm;
+ int i, j;
+ int ndimms;
+
++ /* Only the mcmtr on the first channel is effective */
++ pci_read_config_dword(imc->chan[0].cdev, 0x87c, &mcmtr);
++
+ for (i = 0; i < SKX_NUM_CHANNELS; i++) {
+ ndimms = 0;
+ pci_read_config_dword(imc->chan[i].cdev, 0x8C, &amap);
+@@ -193,14 +189,14 @@ static int skx_get_dimm_config(struct mem_ctl_info *mci)
+ pci_read_config_dword(imc->chan[i].cdev,
+ 0x80 + 4 * j, &mtr);
+ if (IS_DIMM_PRESENT(mtr)) {
+- ndimms += skx_get_dimm_info(mtr, amap, dimm, imc, i, j);
++ ndimms += skx_get_dimm_info(mtr, mcmtr, amap, dimm, imc, i, j);
+ } else if (IS_NVDIMM_PRESENT(mcddrtcfg, j)) {
+ ndimms += skx_get_nvdimm_info(dimm, imc, i, j,
+ EDAC_MOD_STR);
+ nvdimm_count++;
+ }
+ }
+- if (ndimms && !skx_check_ecc(imc->chan[0].cdev)) {
++ if (ndimms && !skx_check_ecc(mcmtr)) {
+ skx_printk(KERN_ERR, "ECC is disabled on imc %d\n", imc->mc);
+ return -ENODEV;
+ }
+diff --git a/drivers/edac/skx_common.c b/drivers/edac/skx_common.c
+index 99bbaf629b8d..412c651bef26 100644
+--- a/drivers/edac/skx_common.c
++++ b/drivers/edac/skx_common.c
+@@ -304,7 +304,7 @@ static int skx_get_dimm_attr(u32 reg, int lobit, int hibit, int add,
+ #define numrow(reg) skx_get_dimm_attr(reg, 2, 4, 12, 1, 6, "rows")
+ #define numcol(reg) skx_get_dimm_attr(reg, 0, 1, 10, 0, 2, "cols")
+
+-int skx_get_dimm_info(u32 mtr, u32 amap, struct dimm_info *dimm,
++int skx_get_dimm_info(u32 mtr, u32 mcmtr, u32 amap, struct dimm_info *dimm,
+ struct skx_imc *imc, int chan, int dimmno)
+ {
+ int banks = 16, ranks, rows, cols, npages;
+@@ -324,8 +324,8 @@ int skx_get_dimm_info(u32 mtr, u32 amap, struct dimm_info *dimm,
+ imc->mc, chan, dimmno, size, npages,
+ banks, 1 << ranks, rows, cols);
+
+- imc->chan[chan].dimms[dimmno].close_pg = GET_BITFIELD(mtr, 0, 0);
+- imc->chan[chan].dimms[dimmno].bank_xor_enable = GET_BITFIELD(mtr, 9, 9);
++ imc->chan[chan].dimms[dimmno].close_pg = GET_BITFIELD(mcmtr, 0, 0);
++ imc->chan[chan].dimms[dimmno].bank_xor_enable = GET_BITFIELD(mcmtr, 9, 9);
+ imc->chan[chan].dimms[dimmno].fine_grain_bank = GET_BITFIELD(amap, 0, 0);
+ imc->chan[chan].dimms[dimmno].rowbits = rows;
+ imc->chan[chan].dimms[dimmno].colbits = cols;
+diff --git a/drivers/edac/skx_common.h b/drivers/edac/skx_common.h
+index 60d1ea669afd..319f9b2f1f89 100644
+--- a/drivers/edac/skx_common.h
++++ b/drivers/edac/skx_common.h
+@@ -128,7 +128,7 @@ int skx_get_all_bus_mappings(unsigned int did, int off, enum type,
+
+ int skx_get_hi_lo(unsigned int did, int off[], u64 *tolm, u64 *tohm);
+
+-int skx_get_dimm_info(u32 mtr, u32 amap, struct dimm_info *dimm,
++int skx_get_dimm_info(u32 mtr, u32 mcmtr, u32 amap, struct dimm_info *dimm,
+ struct skx_imc *imc, int chan, int dimmno);
+
+ int skx_get_nvdimm_info(struct dimm_info *dimm, struct skx_imc *imc,
+diff --git a/drivers/firmware/efi/efivars.c b/drivers/firmware/efi/efivars.c
+index 78ad1ba8c987..26528a46d99e 100644
+--- a/drivers/firmware/efi/efivars.c
++++ b/drivers/firmware/efi/efivars.c
+@@ -522,8 +522,10 @@ efivar_create_sysfs_entry(struct efivar_entry *new_var)
+ ret = kobject_init_and_add(&new_var->kobj, &efivar_ktype,
+ NULL, "%s", short_name);
+ kfree(short_name);
+- if (ret)
++ if (ret) {
++ kobject_put(&new_var->kobj);
+ return ret;
++ }
+
+ kobject_uevent(&new_var->kobj, KOBJ_ADD);
+ if (efivar_entry_add(new_var, &efivar_sysfs_list)) {
+diff --git a/drivers/firmware/imx/imx-scu.c b/drivers/firmware/imx/imx-scu.c
+index f71eaa5bf52d..b3da2e193ad2 100644
+--- a/drivers/firmware/imx/imx-scu.c
++++ b/drivers/firmware/imx/imx-scu.c
+@@ -38,6 +38,7 @@ struct imx_sc_ipc {
+ struct device *dev;
+ struct mutex lock;
+ struct completion done;
++ bool fast_ipc;
+
+ /* temporarily store the SCU msg */
+ u32 *msg;
+@@ -115,6 +116,7 @@ static void imx_scu_rx_callback(struct mbox_client *c, void *msg)
+ struct imx_sc_ipc *sc_ipc = sc_chan->sc_ipc;
+ struct imx_sc_rpc_msg *hdr;
+ u32 *data = msg;
++ int i;
+
+ if (!sc_ipc->msg) {
+ dev_warn(sc_ipc->dev, "unexpected rx idx %d 0x%08x, ignore!\n",
+@@ -122,6 +124,19 @@ static void imx_scu_rx_callback(struct mbox_client *c, void *msg)
+ return;
+ }
+
++ if (sc_ipc->fast_ipc) {
++ hdr = msg;
++ sc_ipc->rx_size = hdr->size;
++ sc_ipc->msg[0] = *data++;
++
++ for (i = 1; i < sc_ipc->rx_size; i++)
++ sc_ipc->msg[i] = *data++;
++
++ complete(&sc_ipc->done);
++
++ return;
++ }
++
+ if (sc_chan->idx == 0) {
+ hdr = msg;
+ sc_ipc->rx_size = hdr->size;
+@@ -143,20 +158,22 @@ static void imx_scu_rx_callback(struct mbox_client *c, void *msg)
+
+ static int imx_scu_ipc_write(struct imx_sc_ipc *sc_ipc, void *msg)
+ {
+- struct imx_sc_rpc_msg *hdr = msg;
++ struct imx_sc_rpc_msg hdr = *(struct imx_sc_rpc_msg *)msg;
+ struct imx_sc_chan *sc_chan;
+ u32 *data = msg;
+ int ret;
++ int size;
+ int i;
+
+ /* Check size */
+- if (hdr->size > IMX_SC_RPC_MAX_MSG)
++ if (hdr.size > IMX_SC_RPC_MAX_MSG)
+ return -EINVAL;
+
+- dev_dbg(sc_ipc->dev, "RPC SVC %u FUNC %u SIZE %u\n", hdr->svc,
+- hdr->func, hdr->size);
++ dev_dbg(sc_ipc->dev, "RPC SVC %u FUNC %u SIZE %u\n", hdr.svc,
++ hdr.func, hdr.size);
+
+- for (i = 0; i < hdr->size; i++) {
++ size = sc_ipc->fast_ipc ? 1 : hdr.size;
++ for (i = 0; i < size; i++) {
+ sc_chan = &sc_ipc->chans[i % 4];
+
+ /*
+@@ -168,8 +185,10 @@ static int imx_scu_ipc_write(struct imx_sc_ipc *sc_ipc, void *msg)
+ * Wait for tx_done before every send to ensure that no
+ * queueing happens at the mailbox channel level.
+ */
+- wait_for_completion(&sc_chan->tx_done);
+- reinit_completion(&sc_chan->tx_done);
++ if (!sc_ipc->fast_ipc) {
++ wait_for_completion(&sc_chan->tx_done);
++ reinit_completion(&sc_chan->tx_done);
++ }
+
+ ret = mbox_send_message(sc_chan->ch, &data[i]);
+ if (ret < 0)
+@@ -246,6 +265,8 @@ static int imx_scu_probe(struct platform_device *pdev)
+ struct imx_sc_chan *sc_chan;
+ struct mbox_client *cl;
+ char *chan_name;
++ struct of_phandle_args args;
++ int num_channel;
+ int ret;
+ int i;
+
+@@ -253,11 +274,20 @@ static int imx_scu_probe(struct platform_device *pdev)
+ if (!sc_ipc)
+ return -ENOMEM;
+
+- for (i = 0; i < SCU_MU_CHAN_NUM; i++) {
+- if (i < 4)
++ ret = of_parse_phandle_with_args(pdev->dev.of_node, "mboxes",
++ "#mbox-cells", 0, &args);
++ if (ret)
++ return ret;
++
++ sc_ipc->fast_ipc = of_device_is_compatible(args.np, "fsl,imx8-mu-scu");
++
++ num_channel = sc_ipc->fast_ipc ? 2 : SCU_MU_CHAN_NUM;
++ for (i = 0; i < num_channel; i++) {
++ if (i < num_channel / 2)
+ chan_name = kasprintf(GFP_KERNEL, "tx%d", i);
+ else
+- chan_name = kasprintf(GFP_KERNEL, "rx%d", i - 4);
++ chan_name = kasprintf(GFP_KERNEL, "rx%d",
++ i - num_channel / 2);
+
+ if (!chan_name)
+ return -ENOMEM;
+@@ -269,13 +299,15 @@ static int imx_scu_probe(struct platform_device *pdev)
+ cl->knows_txdone = true;
+ cl->rx_callback = imx_scu_rx_callback;
+
+- /* Initial tx_done completion as "done" */
+- cl->tx_done = imx_scu_tx_done;
+- init_completion(&sc_chan->tx_done);
+- complete(&sc_chan->tx_done);
++ if (!sc_ipc->fast_ipc) {
++ /* Initial tx_done completion as "done" */
++ cl->tx_done = imx_scu_tx_done;
++ init_completion(&sc_chan->tx_done);
++ complete(&sc_chan->tx_done);
++ }
+
+ sc_chan->sc_ipc = sc_ipc;
+- sc_chan->idx = i % 4;
++ sc_chan->idx = i % (num_channel / 2);
+ sc_chan->ch = mbox_request_channel_byname(cl, chan_name);
+ if (IS_ERR(sc_chan->ch)) {
+ ret = PTR_ERR(sc_chan->ch);
+diff --git a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
+index 7ffd7afeb7a5..f80cf6ac20c5 100644
+--- a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
++++ b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
+@@ -598,6 +598,14 @@ static int i915_gem_userptr_get_pages(struct drm_i915_gem_object *obj)
+ GFP_KERNEL |
+ __GFP_NORETRY |
+ __GFP_NOWARN);
++ /*
++ * Using __get_user_pages_fast() with a read-only
++ * access is questionable. A read-only page may be
++ * COW-broken, and then this might end up giving
++ * the wrong side of the COW..
++ *
++ * We may or may not care.
++ */
+ if (pvec) /* defer to worker if malloc fails */
+ pinned = __get_user_pages_fast(obj->userptr.ptr,
+ num_pages,
+diff --git a/drivers/gpu/drm/vkms/vkms_drv.h b/drivers/gpu/drm/vkms/vkms_drv.h
+index eda04ffba7b1..f4036bb0b9a8 100644
+--- a/drivers/gpu/drm/vkms/vkms_drv.h
++++ b/drivers/gpu/drm/vkms/vkms_drv.h
+@@ -117,11 +117,6 @@ struct drm_plane *vkms_plane_init(struct vkms_device *vkmsdev,
+ enum drm_plane_type type, int index);
+
+ /* Gem stuff */
+-struct drm_gem_object *vkms_gem_create(struct drm_device *dev,
+- struct drm_file *file,
+- u32 *handle,
+- u64 size);
+-
+ vm_fault_t vkms_gem_fault(struct vm_fault *vmf);
+
+ int vkms_dumb_create(struct drm_file *file, struct drm_device *dev,
+diff --git a/drivers/gpu/drm/vkms/vkms_gem.c b/drivers/gpu/drm/vkms/vkms_gem.c
+index 2e01186fb943..c541fec57566 100644
+--- a/drivers/gpu/drm/vkms/vkms_gem.c
++++ b/drivers/gpu/drm/vkms/vkms_gem.c
+@@ -97,10 +97,10 @@ vm_fault_t vkms_gem_fault(struct vm_fault *vmf)
+ return ret;
+ }
+
+-struct drm_gem_object *vkms_gem_create(struct drm_device *dev,
+- struct drm_file *file,
+- u32 *handle,
+- u64 size)
++static struct drm_gem_object *vkms_gem_create(struct drm_device *dev,
++ struct drm_file *file,
++ u32 *handle,
++ u64 size)
+ {
+ struct vkms_gem_object *obj;
+ int ret;
+@@ -113,7 +113,6 @@ struct drm_gem_object *vkms_gem_create(struct drm_device *dev,
+ return ERR_CAST(obj);
+
+ ret = drm_gem_handle_create(file, &obj->gem, handle);
+- drm_gem_object_put_unlocked(&obj->gem);
+ if (ret)
+ return ERR_PTR(ret);
+
+@@ -142,6 +141,8 @@ int vkms_dumb_create(struct drm_file *file, struct drm_device *dev,
+ args->size = gem_obj->size;
+ args->pitch = pitch;
+
++ drm_gem_object_put_unlocked(gem_obj);
++
+ DRM_DEBUG_DRIVER("Created object of size %lld\n", size);
+
+ return 0;
+diff --git a/drivers/infiniband/core/uverbs_main.c b/drivers/infiniband/core/uverbs_main.c
+index 1bab8de14757..b94572e9c24f 100644
+--- a/drivers/infiniband/core/uverbs_main.c
++++ b/drivers/infiniband/core/uverbs_main.c
+@@ -296,6 +296,8 @@ static __poll_t ib_uverbs_event_poll(struct ib_uverbs_event_queue *ev_queue,
+ spin_lock_irq(&ev_queue->lock);
+ if (!list_empty(&ev_queue->event_list))
+ pollflags = EPOLLIN | EPOLLRDNORM;
++ else if (ev_queue->is_closed)
++ pollflags = EPOLLERR;
+ spin_unlock_irq(&ev_queue->lock);
+
+ return pollflags;
+diff --git a/drivers/media/common/videobuf2/videobuf2-dma-contig.c b/drivers/media/common/videobuf2/videobuf2-dma-contig.c
+index d3a3ee5b597b..f4b4a7c135eb 100644
+--- a/drivers/media/common/videobuf2/videobuf2-dma-contig.c
++++ b/drivers/media/common/videobuf2/videobuf2-dma-contig.c
+@@ -726,9 +726,8 @@ EXPORT_SYMBOL_GPL(vb2_dma_contig_memops);
+ int vb2_dma_contig_set_max_seg_size(struct device *dev, unsigned int size)
+ {
+ if (!dev->dma_parms) {
+- dev->dma_parms = kzalloc(sizeof(*dev->dma_parms), GFP_KERNEL);
+- if (!dev->dma_parms)
+- return -ENOMEM;
++ dev_err(dev, "Failed to set max_seg_size: dma_parms is NULL\n");
++ return -ENODEV;
+ }
+ if (dma_get_max_seg_size(dev) < size)
+ return dma_set_max_seg_size(dev, size);
+@@ -737,21 +736,6 @@ int vb2_dma_contig_set_max_seg_size(struct device *dev, unsigned int size)
+ }
+ EXPORT_SYMBOL_GPL(vb2_dma_contig_set_max_seg_size);
+
+-/*
+- * vb2_dma_contig_clear_max_seg_size() - release resources for DMA parameters
+- * @dev: device for configuring DMA parameters
+- *
+- * This function releases resources allocated to configure DMA parameters
+- * (see vb2_dma_contig_set_max_seg_size() function). It should be called from
+- * device drivers on driver remove.
+- */
+-void vb2_dma_contig_clear_max_seg_size(struct device *dev)
+-{
+- kfree(dev->dma_parms);
+- dev->dma_parms = NULL;
+-}
+-EXPORT_SYMBOL_GPL(vb2_dma_contig_clear_max_seg_size);
+-
+ MODULE_DESCRIPTION("DMA-contig memory handling routines for videobuf2");
+ MODULE_AUTHOR("Pawel Osciak <pawel@osciak.com>");
+ MODULE_LICENSE("GPL");
+diff --git a/drivers/mmc/core/sdio.c b/drivers/mmc/core/sdio.c
+index ebb387aa5158..20eed28ea60d 100644
+--- a/drivers/mmc/core/sdio.c
++++ b/drivers/mmc/core/sdio.c
+@@ -584,7 +584,7 @@ try_again:
+ */
+ err = mmc_send_io_op_cond(host, ocr, &rocr);
+ if (err)
+- goto err;
++ return err;
+
+ /*
+ * For SPI, enable CRC as appropriate.
+@@ -592,17 +592,15 @@ try_again:
+ if (mmc_host_is_spi(host)) {
+ err = mmc_spi_set_crc(host, use_spi_crc);
+ if (err)
+- goto err;
++ return err;
+ }
+
+ /*
+ * Allocate card structure.
+ */
+ card = mmc_alloc_card(host, NULL);
+- if (IS_ERR(card)) {
+- err = PTR_ERR(card);
+- goto err;
+- }
++ if (IS_ERR(card))
++ return PTR_ERR(card);
+
+ if ((rocr & R4_MEMORY_PRESENT) &&
+ mmc_sd_get_cid(host, ocr & rocr, card->raw_cid, NULL) == 0) {
+@@ -610,19 +608,15 @@ try_again:
+
+ if (oldcard && (oldcard->type != MMC_TYPE_SD_COMBO ||
+ memcmp(card->raw_cid, oldcard->raw_cid, sizeof(card->raw_cid)) != 0)) {
+- mmc_remove_card(card);
+- pr_debug("%s: Perhaps the card was replaced\n",
+- mmc_hostname(host));
+- return -ENOENT;
++ err = -ENOENT;
++ goto mismatch;
+ }
+ } else {
+ card->type = MMC_TYPE_SDIO;
+
+ if (oldcard && oldcard->type != MMC_TYPE_SDIO) {
+- mmc_remove_card(card);
+- pr_debug("%s: Perhaps the card was replaced\n",
+- mmc_hostname(host));
+- return -ENOENT;
++ err = -ENOENT;
++ goto mismatch;
+ }
+ }
+
+@@ -677,7 +671,7 @@ try_again:
+ if (!oldcard && card->type == MMC_TYPE_SD_COMBO) {
+ err = mmc_sd_get_csd(host, card);
+ if (err)
+- return err;
++ goto remove;
+
+ mmc_decode_cid(card);
+ }
+@@ -704,7 +698,12 @@ try_again:
+ mmc_set_timing(card->host, MMC_TIMING_SD_HS);
+ }
+
+- goto finish;
++ if (oldcard)
++ mmc_remove_card(card);
++ else
++ host->card = card;
++
++ return 0;
+ }
+
+ /*
+@@ -718,9 +717,8 @@ try_again:
+ /* Retry init sequence, but without R4_18V_PRESENT. */
+ retries = 0;
+ goto try_again;
+- } else {
+- goto remove;
+ }
++ return err;
+ }
+
+ /*
+@@ -731,16 +729,14 @@ try_again:
+ goto remove;
+
+ if (oldcard) {
+- int same = (card->cis.vendor == oldcard->cis.vendor &&
+- card->cis.device == oldcard->cis.device);
+- mmc_remove_card(card);
+- if (!same) {
+- pr_debug("%s: Perhaps the card was replaced\n",
+- mmc_hostname(host));
+- return -ENOENT;
++ if (card->cis.vendor == oldcard->cis.vendor &&
++ card->cis.device == oldcard->cis.device) {
++ mmc_remove_card(card);
++ card = oldcard;
++ } else {
++ err = -ENOENT;
++ goto mismatch;
+ }
+-
+- card = oldcard;
+ }
+ card->ocr = ocr_card;
+ mmc_fixup_device(card, sdio_fixup_methods);
+@@ -801,16 +797,15 @@ try_again:
+ err = -EINVAL;
+ goto remove;
+ }
+-finish:
+- if (!oldcard)
+- host->card = card;
++
++ host->card = card;
+ return 0;
+
++mismatch:
++ pr_debug("%s: Perhaps the card was replaced\n", mmc_hostname(host));
+ remove:
+- if (!oldcard)
++ if (oldcard != card)
+ mmc_remove_card(card);
+-
+-err:
+ return err;
+ }
+
+diff --git a/drivers/mmc/host/mmci_stm32_sdmmc.c b/drivers/mmc/host/mmci_stm32_sdmmc.c
+index d33e62bd6153..cca7b3b3f618 100644
+--- a/drivers/mmc/host/mmci_stm32_sdmmc.c
++++ b/drivers/mmc/host/mmci_stm32_sdmmc.c
+@@ -188,6 +188,9 @@ static int sdmmc_idma_start(struct mmci_host *host, unsigned int *datactrl)
+ static void sdmmc_idma_finalize(struct mmci_host *host, struct mmc_data *data)
+ {
+ writel_relaxed(0, host->base + MMCI_STM32_IDMACTRLR);
++
++ if (!data->host_cookie)
++ sdmmc_idma_unprep_data(host, data, 0);
+ }
+
+ static void mmci_sdmmc_set_clkreg(struct mmci_host *host, unsigned int desired)
+diff --git a/drivers/mmc/host/sdhci-msm.c b/drivers/mmc/host/sdhci-msm.c
+index a8bcb3f16aa4..87de46b6ed07 100644
+--- a/drivers/mmc/host/sdhci-msm.c
++++ b/drivers/mmc/host/sdhci-msm.c
+@@ -1129,6 +1129,12 @@ static int sdhci_msm_execute_tuning(struct mmc_host *mmc, u32 opcode)
+ /* Clock-Data-Recovery used to dynamically adjust RX sampling point */
+ msm_host->use_cdr = true;
+
++ /*
++ * Clear tuning_done flag before tuning to ensure proper
++ * HS400 settings.
++ */
++ msm_host->tuning_done = 0;
++
+ /*
+ * For HS400 tuning in HS200 timing requires:
+ * - select MCLK/2 in VENDOR_SPEC
+diff --git a/drivers/mmc/host/sdhci-of-at91.c b/drivers/mmc/host/sdhci-of-at91.c
+index c79bff5e2280..117ad7232767 100644
+--- a/drivers/mmc/host/sdhci-of-at91.c
++++ b/drivers/mmc/host/sdhci-of-at91.c
+@@ -120,9 +120,12 @@ static void sdhci_at91_reset(struct sdhci_host *host, u8 mask)
+ || mmc_gpio_get_cd(host->mmc) >= 0)
+ sdhci_at91_set_force_card_detect(host);
+
+- if (priv->cal_always_on && (mask & SDHCI_RESET_ALL))
+- sdhci_writel(host, SDMMC_CALCR_ALWYSON | SDMMC_CALCR_EN,
++ if (priv->cal_always_on && (mask & SDHCI_RESET_ALL)) {
++ u32 calcr = sdhci_readl(host, SDMMC_CALCR);
++
++ sdhci_writel(host, calcr | SDMMC_CALCR_ALWYSON | SDMMC_CALCR_EN,
+ SDMMC_CALCR);
++ }
+ }
+
+ static const struct sdhci_ops sdhci_at91_sama5d2_ops = {
+diff --git a/drivers/mmc/host/tmio_mmc_core.c b/drivers/mmc/host/tmio_mmc_core.c
+index 9520bd94cf43..98be09c5e3ff 100644
+--- a/drivers/mmc/host/tmio_mmc_core.c
++++ b/drivers/mmc/host/tmio_mmc_core.c
+@@ -1231,12 +1231,14 @@ void tmio_mmc_host_remove(struct tmio_mmc_host *host)
+ cancel_work_sync(&host->done);
+ cancel_delayed_work_sync(&host->delayed_reset_work);
+ tmio_mmc_release_dma(host);
++ tmio_mmc_disable_mmc_irqs(host, TMIO_MASK_ALL);
+
+- pm_runtime_dont_use_autosuspend(&pdev->dev);
+ if (host->native_hotplug)
+ pm_runtime_put_noidle(&pdev->dev);
+- pm_runtime_put_sync(&pdev->dev);
++
+ pm_runtime_disable(&pdev->dev);
++ pm_runtime_dont_use_autosuspend(&pdev->dev);
++ pm_runtime_put_noidle(&pdev->dev);
+ }
+ EXPORT_SYMBOL_GPL(tmio_mmc_host_remove);
+
+diff --git a/drivers/mmc/host/uniphier-sd.c b/drivers/mmc/host/uniphier-sd.c
+index a1683c49cb90..f82baf99fd69 100644
+--- a/drivers/mmc/host/uniphier-sd.c
++++ b/drivers/mmc/host/uniphier-sd.c
+@@ -610,11 +610,6 @@ static int uniphier_sd_probe(struct platform_device *pdev)
+ }
+ }
+
+- ret = devm_request_irq(dev, irq, tmio_mmc_irq, IRQF_SHARED,
+- dev_name(dev), host);
+- if (ret)
+- goto free_host;
+-
+ if (priv->caps & UNIPHIER_SD_CAP_EXTENDED_IP)
+ host->dma_ops = &uniphier_sd_internal_dma_ops;
+ else
+@@ -642,8 +637,15 @@ static int uniphier_sd_probe(struct platform_device *pdev)
+ if (ret)
+ goto free_host;
+
++ ret = devm_request_irq(dev, irq, tmio_mmc_irq, IRQF_SHARED,
++ dev_name(dev), host);
++ if (ret)
++ goto remove_host;
++
+ return 0;
+
++remove_host:
++ tmio_mmc_host_remove(host);
+ free_host:
+ tmio_mmc_host_free(host);
+
+diff --git a/drivers/net/dsa/qca8k.c b/drivers/net/dsa/qca8k.c
+index 9f4205b4439b..d2b5ab403e06 100644
+--- a/drivers/net/dsa/qca8k.c
++++ b/drivers/net/dsa/qca8k.c
+@@ -1079,8 +1079,7 @@ qca8k_sw_probe(struct mdio_device *mdiodev)
+ if (id != QCA8K_ID_QCA8337)
+ return -ENODEV;
+
+- priv->ds = devm_kzalloc(&mdiodev->dev, sizeof(*priv->ds),
+- QCA8K_NUM_PORTS);
++ priv->ds = devm_kzalloc(&mdiodev->dev, sizeof(*priv->ds), GFP_KERNEL);
+ if (!priv->ds)
+ return -ENOMEM;
+
+diff --git a/drivers/net/ethernet/amazon/ena/ena_netdev.c b/drivers/net/ethernet/amazon/ena/ena_netdev.c
+index 2cc765df8da3..15ce93be05ea 100644
+--- a/drivers/net/ethernet/amazon/ena/ena_netdev.c
++++ b/drivers/net/ethernet/amazon/ena/ena_netdev.c
+@@ -355,7 +355,7 @@ error_unmap_dma:
+ ena_unmap_tx_buff(xdp_ring, tx_info);
+ tx_info->xdpf = NULL;
+ error_drop_packet:
+-
++ __free_page(tx_info->xdp_rx_page);
+ return NETDEV_TX_OK;
+ }
+
+@@ -1638,11 +1638,9 @@ static int ena_clean_rx_irq(struct ena_ring *rx_ring, struct napi_struct *napi,
+ &next_to_clean);
+
+ if (unlikely(!skb)) {
+- if (xdp_verdict == XDP_TX) {
++ if (xdp_verdict == XDP_TX)
+ ena_free_rx_page(rx_ring,
+ &rx_ring->rx_buffer_info[rx_ring->ena_bufs[0].req_id]);
+- res_budget--;
+- }
+ for (i = 0; i < ena_rx_ctx.descs; i++) {
+ rx_ring->free_ids[next_to_clean] =
+ rx_ring->ena_bufs[i].req_id;
+@@ -1650,8 +1648,10 @@ static int ena_clean_rx_irq(struct ena_ring *rx_ring, struct napi_struct *napi,
+ ENA_RX_RING_IDX_NEXT(next_to_clean,
+ rx_ring->ring_size);
+ }
+- if (xdp_verdict == XDP_TX || xdp_verdict == XDP_DROP)
++ if (xdp_verdict != XDP_PASS) {
++ res_budget--;
+ continue;
++ }
+ break;
+ }
+
+diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c
+index 36290a8e2a84..67933079aeea 100644
+--- a/drivers/net/ethernet/cadence/macb_main.c
++++ b/drivers/net/ethernet/cadence/macb_main.c
+@@ -2558,19 +2558,21 @@ static int macb_open(struct net_device *dev)
+
+ err = macb_phylink_connect(bp);
+ if (err)
+- goto pm_exit;
++ goto napi_exit;
+
+ netif_tx_start_all_queues(dev);
+
+ if (bp->ptp_info)
+ bp->ptp_info->ptp_init(dev);
+
+-pm_exit:
+- if (err) {
+- pm_runtime_put_sync(&bp->pdev->dev);
+- return err;
+- }
+ return 0;
++
++napi_exit:
++ for (q = 0, queue = bp->queues; q < bp->num_queues; ++q, ++queue)
++ napi_disable(&queue->napi);
++pm_exit:
++ pm_runtime_put_sync(&bp->pdev->dev);
++ return err;
+ }
+
+ static int macb_close(struct net_device *dev)
+diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
+index 51889770958d..43b44a1e8f69 100644
+--- a/drivers/net/ethernet/marvell/mvneta.c
++++ b/drivers/net/ethernet/marvell/mvneta.c
+@@ -451,11 +451,17 @@ struct mvneta_pcpu_port {
+ u32 cause_rx_tx;
+ };
+
++enum {
++ __MVNETA_DOWN,
++};
++
+ struct mvneta_port {
+ u8 id;
+ struct mvneta_pcpu_port __percpu *ports;
+ struct mvneta_pcpu_stats __percpu *stats;
+
++ unsigned long state;
++
+ int pkt_size;
+ void __iomem *base;
+ struct mvneta_rx_queue *rxqs;
+@@ -2112,6 +2118,9 @@ mvneta_xdp_xmit(struct net_device *dev, int num_frame,
+ struct netdev_queue *nq;
+ u32 ret;
+
++ if (unlikely(test_bit(__MVNETA_DOWN, &pp->state)))
++ return -ENETDOWN;
++
+ if (unlikely(flags & ~XDP_XMIT_FLAGS_MASK))
+ return -EINVAL;
+
+@@ -3562,12 +3571,16 @@ static void mvneta_start_dev(struct mvneta_port *pp)
+
+ phylink_start(pp->phylink);
+ netif_tx_start_all_queues(pp->dev);
++
++ clear_bit(__MVNETA_DOWN, &pp->state);
+ }
+
+ static void mvneta_stop_dev(struct mvneta_port *pp)
+ {
+ unsigned int cpu;
+
++ set_bit(__MVNETA_DOWN, &pp->state);
++
+ phylink_stop(pp->phylink);
+
+ if (!pp->neta_armada3700) {
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/devlink.c b/drivers/net/ethernet/mellanox/mlx5/core/devlink.c
+index e94f0c4d74a7..a99fe4b02b9b 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/devlink.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/devlink.c
+@@ -283,7 +283,6 @@ int mlx5_devlink_register(struct devlink *devlink, struct device *dev)
+ goto params_reg_err;
+ mlx5_devlink_set_params_init_values(devlink);
+ devlink_params_publish(devlink);
+- devlink_reload_enable(devlink);
+ return 0;
+
+ params_reg_err:
+@@ -293,7 +292,6 @@ params_reg_err:
+
+ void mlx5_devlink_unregister(struct devlink *devlink)
+ {
+- devlink_reload_disable(devlink);
+ devlink_params_unregister(devlink, mlx5_devlink_params,
+ ARRAY_SIZE(mlx5_devlink_params));
+ devlink_unregister(devlink);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c
+index 4eb305af0106..153d6eb19d3c 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c
+@@ -320,21 +320,21 @@ mlx5_tc_ct_parse_mangle_to_mod_act(struct flow_action_entry *act,
+
+ case FLOW_ACT_MANGLE_HDR_TYPE_IP6:
+ MLX5_SET(set_action_in, modact, length, 0);
+- if (offset == offsetof(struct ipv6hdr, saddr))
++ if (offset == offsetof(struct ipv6hdr, saddr) + 12)
+ field = MLX5_ACTION_IN_FIELD_OUT_SIPV6_31_0;
+- else if (offset == offsetof(struct ipv6hdr, saddr) + 4)
+- field = MLX5_ACTION_IN_FIELD_OUT_SIPV6_63_32;
+ else if (offset == offsetof(struct ipv6hdr, saddr) + 8)
++ field = MLX5_ACTION_IN_FIELD_OUT_SIPV6_63_32;
++ else if (offset == offsetof(struct ipv6hdr, saddr) + 4)
+ field = MLX5_ACTION_IN_FIELD_OUT_SIPV6_95_64;
+- else if (offset == offsetof(struct ipv6hdr, saddr) + 12)
++ else if (offset == offsetof(struct ipv6hdr, saddr))
+ field = MLX5_ACTION_IN_FIELD_OUT_SIPV6_127_96;
+- else if (offset == offsetof(struct ipv6hdr, daddr))
++ else if (offset == offsetof(struct ipv6hdr, daddr) + 12)
+ field = MLX5_ACTION_IN_FIELD_OUT_DIPV6_31_0;
+- else if (offset == offsetof(struct ipv6hdr, daddr) + 4)
+- field = MLX5_ACTION_IN_FIELD_OUT_DIPV6_63_32;
+ else if (offset == offsetof(struct ipv6hdr, daddr) + 8)
++ field = MLX5_ACTION_IN_FIELD_OUT_DIPV6_63_32;
++ else if (offset == offsetof(struct ipv6hdr, daddr) + 4)
+ field = MLX5_ACTION_IN_FIELD_OUT_DIPV6_95_64;
+- else if (offset == offsetof(struct ipv6hdr, daddr) + 12)
++ else if (offset == offsetof(struct ipv6hdr, daddr))
+ field = MLX5_ACTION_IN_FIELD_OUT_DIPV6_127_96;
+ else
+ return -EOPNOTSUPP;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/setup.c b/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/setup.c
+index c28cbae42331..2c80205dc939 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/setup.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/setup.c
+@@ -152,6 +152,10 @@ void mlx5e_close_xsk(struct mlx5e_channel *c)
+ mlx5e_close_cq(&c->xskicosq.cq);
+ mlx5e_close_xdpsq(&c->xsksq);
+ mlx5e_close_cq(&c->xsksq.cq);
++
++ memset(&c->xskrq, 0, sizeof(c->xskrq));
++ memset(&c->xsksq, 0, sizeof(c->xsksq));
++ memset(&c->xskicosq, 0, sizeof(c->xskicosq));
+ }
+
+ void mlx5e_activate_xsk(struct mlx5e_channel *c)
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/health.c b/drivers/net/ethernet/mellanox/mlx5/core/health.c
+index f99e1752d4e5..e22b7ae11275 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/health.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/health.c
+@@ -193,15 +193,23 @@ static bool reset_fw_if_needed(struct mlx5_core_dev *dev)
+
+ void mlx5_enter_error_state(struct mlx5_core_dev *dev, bool force)
+ {
++ bool err_detected = false;
++
++ /* Mark the device as fatal in order to abort FW commands */
++ if ((check_fatal_sensors(dev) || force) &&
++ dev->state == MLX5_DEVICE_STATE_UP) {
++ dev->state = MLX5_DEVICE_STATE_INTERNAL_ERROR;
++ err_detected = true;
++ }
+ mutex_lock(&dev->intf_state_mutex);
+- if (dev->state == MLX5_DEVICE_STATE_INTERNAL_ERROR)
+- goto unlock;
++ if (!err_detected && dev->state == MLX5_DEVICE_STATE_INTERNAL_ERROR)
++ goto unlock;/* a previous error is still being handled */
+ if (dev->state == MLX5_DEVICE_STATE_UNINITIALIZED) {
+ dev->state = MLX5_DEVICE_STATE_INTERNAL_ERROR;
+ goto unlock;
+ }
+
+- if (check_fatal_sensors(dev) || force) {
++ if (check_fatal_sensors(dev) || force) { /* protected state setting */
+ dev->state = MLX5_DEVICE_STATE_INTERNAL_ERROR;
+ mlx5_cmd_flush(dev);
+ }
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/main.c b/drivers/net/ethernet/mellanox/mlx5/core/main.c
+index 17f818a54090..980f6b833cbe 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/main.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/main.c
+@@ -795,6 +795,11 @@ err_disable:
+
+ static void mlx5_pci_close(struct mlx5_core_dev *dev)
+ {
++ /* health work might still be active, and it needs pci bar in
++ * order to know the NIC state. Therefore, drain the health WQ
++ * before removing the pci bars
++ */
++ mlx5_drain_health_wq(dev);
+ iounmap(dev->iseg);
+ pci_clear_master(dev->pdev);
+ release_bar(dev->pdev);
+@@ -1368,6 +1373,7 @@ static int init_one(struct pci_dev *pdev, const struct pci_device_id *id)
+ dev_err(&pdev->dev, "mlx5_crdump_enable failed with error code %d\n", err);
+
+ pci_save_state(pdev);
++ devlink_reload_enable(devlink);
+ return 0;
+
+ err_load_one:
+@@ -1385,6 +1391,7 @@ static void remove_one(struct pci_dev *pdev)
+ struct mlx5_core_dev *dev = pci_get_drvdata(pdev);
+ struct devlink *devlink = priv_to_devlink(dev);
+
++ devlink_reload_disable(devlink);
+ mlx5_crdump_disable(dev);
+ mlx5_devlink_unregister(devlink);
+
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/core_thermal.c b/drivers/net/ethernet/mellanox/mlxsw/core_thermal.c
+index ce0a6837daa3..05f8d5a92862 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/core_thermal.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/core_thermal.c
+@@ -391,8 +391,7 @@ static int mlxsw_thermal_set_trip_hyst(struct thermal_zone_device *tzdev,
+ static int mlxsw_thermal_trend_get(struct thermal_zone_device *tzdev,
+ int trip, enum thermal_trend *trend)
+ {
+- struct mlxsw_thermal_module *tz = tzdev->devdata;
+- struct mlxsw_thermal *thermal = tz->parent;
++ struct mlxsw_thermal *thermal = tzdev->devdata;
+
+ if (trip < 0 || trip >= MLXSW_THERMAL_NUM_TRIPS)
+ return -EINVAL;
+@@ -593,6 +592,22 @@ mlxsw_thermal_module_trip_hyst_set(struct thermal_zone_device *tzdev, int trip,
+ return 0;
+ }
+
++static int mlxsw_thermal_module_trend_get(struct thermal_zone_device *tzdev,
++ int trip, enum thermal_trend *trend)
++{
++ struct mlxsw_thermal_module *tz = tzdev->devdata;
++ struct mlxsw_thermal *thermal = tz->parent;
++
++ if (trip < 0 || trip >= MLXSW_THERMAL_NUM_TRIPS)
++ return -EINVAL;
++
++ if (tzdev == thermal->tz_highest_dev)
++ return 1;
++
++ *trend = THERMAL_TREND_STABLE;
++ return 0;
++}
++
+ static struct thermal_zone_device_ops mlxsw_thermal_module_ops = {
+ .bind = mlxsw_thermal_module_bind,
+ .unbind = mlxsw_thermal_module_unbind,
+@@ -604,7 +619,7 @@ static struct thermal_zone_device_ops mlxsw_thermal_module_ops = {
+ .set_trip_temp = mlxsw_thermal_module_trip_temp_set,
+ .get_trip_hyst = mlxsw_thermal_module_trip_hyst_get,
+ .set_trip_hyst = mlxsw_thermal_module_trip_hyst_set,
+- .get_trend = mlxsw_thermal_trend_get,
++ .get_trend = mlxsw_thermal_module_trend_get,
+ };
+
+ static int mlxsw_thermal_gearbox_temp_get(struct thermal_zone_device *tzdev,
+@@ -643,7 +658,7 @@ static struct thermal_zone_device_ops mlxsw_thermal_gearbox_ops = {
+ .set_trip_temp = mlxsw_thermal_module_trip_temp_set,
+ .get_trip_hyst = mlxsw_thermal_module_trip_hyst_get,
+ .set_trip_hyst = mlxsw_thermal_module_trip_hyst_set,
+- .get_trend = mlxsw_thermal_trend_get,
++ .get_trend = mlxsw_thermal_module_trend_get,
+ };
+
+ static int mlxsw_thermal_get_max_state(struct thermal_cooling_device *cdev,
+diff --git a/drivers/net/ethernet/pensando/ionic/ionic_lif.c b/drivers/net/ethernet/pensando/ionic/ionic_lif.c
+index f8a9c1bcffc9..7aa037c3fe02 100644
+--- a/drivers/net/ethernet/pensando/ionic/ionic_lif.c
++++ b/drivers/net/ethernet/pensando/ionic/ionic_lif.c
+@@ -105,7 +105,7 @@ static void ionic_link_status_check(struct ionic_lif *lif)
+ netif_carrier_on(netdev);
+ }
+
+- if (netif_running(lif->netdev))
++ if (lif->netdev->flags & IFF_UP && netif_running(lif->netdev))
+ ionic_start_queues(lif);
+ } else {
+ if (netif_carrier_ok(netdev)) {
+@@ -113,7 +113,7 @@ static void ionic_link_status_check(struct ionic_lif *lif)
+ netif_carrier_off(netdev);
+ }
+
+- if (netif_running(lif->netdev))
++ if (lif->netdev->flags & IFF_UP && netif_running(lif->netdev))
+ ionic_stop_queues(lif);
+ }
+
+diff --git a/drivers/net/ethernet/ti/am65-cpsw-nuss.c b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
+index 88f52a2f85b3..3e4388e6b5fa 100644
+--- a/drivers/net/ethernet/ti/am65-cpsw-nuss.c
++++ b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
+@@ -1804,7 +1804,7 @@ MODULE_DEVICE_TABLE(of, am65_cpsw_nuss_of_mtable);
+
+ static int am65_cpsw_nuss_probe(struct platform_device *pdev)
+ {
+- struct cpsw_ale_params ale_params;
++ struct cpsw_ale_params ale_params = { 0 };
+ const struct of_device_id *of_id;
+ struct device *dev = &pdev->dev;
+ struct am65_cpsw_common *common;
+diff --git a/drivers/net/ethernet/ti/cpsw_ale.c b/drivers/net/ethernet/ti/cpsw_ale.c
+index 8dc6be11b2ff..9ad872bfae3a 100644
+--- a/drivers/net/ethernet/ti/cpsw_ale.c
++++ b/drivers/net/ethernet/ti/cpsw_ale.c
+@@ -604,10 +604,44 @@ void cpsw_ale_set_unreg_mcast(struct cpsw_ale *ale, int unreg_mcast_mask,
+ }
+ }
+
++static void cpsw_ale_vlan_set_unreg_mcast(struct cpsw_ale *ale, u32 *ale_entry,
++ int allmulti)
++{
++ int unreg_mcast;
++
++ unreg_mcast =
++ cpsw_ale_get_vlan_unreg_mcast(ale_entry,
++ ale->vlan_field_bits);
++ if (allmulti)
++ unreg_mcast |= ALE_PORT_HOST;
++ else
++ unreg_mcast &= ~ALE_PORT_HOST;
++ cpsw_ale_set_vlan_unreg_mcast(ale_entry, unreg_mcast,
++ ale->vlan_field_bits);
++}
++
++static void
++cpsw_ale_vlan_set_unreg_mcast_idx(struct cpsw_ale *ale, u32 *ale_entry,
++ int allmulti)
++{
++ int unreg_mcast;
++ int idx;
++
++ idx = cpsw_ale_get_vlan_unreg_mcast_idx(ale_entry);
++
++ unreg_mcast = readl(ale->params.ale_regs + ALE_VLAN_MASK_MUX(idx));
++
++ if (allmulti)
++ unreg_mcast |= ALE_PORT_HOST;
++ else
++ unreg_mcast &= ~ALE_PORT_HOST;
++
++ writel(unreg_mcast, ale->params.ale_regs + ALE_VLAN_MASK_MUX(idx));
++}
++
+ void cpsw_ale_set_allmulti(struct cpsw_ale *ale, int allmulti, int port)
+ {
+ u32 ale_entry[ALE_ENTRY_WORDS];
+- int unreg_mcast = 0;
+ int type, idx;
+
+ for (idx = 0; idx < ale->params.ale_entries; idx++) {
+@@ -624,15 +658,12 @@ void cpsw_ale_set_allmulti(struct cpsw_ale *ale, int allmulti, int port)
+ if (port != -1 && !(vlan_members & BIT(port)))
+ continue;
+
+- unreg_mcast =
+- cpsw_ale_get_vlan_unreg_mcast(ale_entry,
+- ale->vlan_field_bits);
+- if (allmulti)
+- unreg_mcast |= ALE_PORT_HOST;
++ if (!ale->params.nu_switch_ale)
++ cpsw_ale_vlan_set_unreg_mcast(ale, ale_entry, allmulti);
+ else
+- unreg_mcast &= ~ALE_PORT_HOST;
+- cpsw_ale_set_vlan_unreg_mcast(ale_entry, unreg_mcast,
+- ale->vlan_field_bits);
++ cpsw_ale_vlan_set_unreg_mcast_idx(ale, ale_entry,
++ allmulti);
++
+ cpsw_ale_write(ale, idx, ale_entry);
+ }
+ }
+diff --git a/drivers/net/net_failover.c b/drivers/net/net_failover.c
+index b16a1221d19b..fb182bec8f06 100644
+--- a/drivers/net/net_failover.c
++++ b/drivers/net/net_failover.c
+@@ -61,7 +61,8 @@ static int net_failover_open(struct net_device *dev)
+ return 0;
+
+ err_standby_open:
+- dev_close(primary_dev);
++ if (primary_dev)
++ dev_close(primary_dev);
+ err_primary_open:
+ netif_tx_disable(dev);
+ return err;
+diff --git a/drivers/net/tun.c b/drivers/net/tun.c
+index 44889eba1dbc..b984733c6c31 100644
+--- a/drivers/net/tun.c
++++ b/drivers/net/tun.c
+@@ -1871,8 +1871,11 @@ drop:
+ skb->dev = tun->dev;
+ break;
+ case IFF_TAP:
+- if (!frags)
+- skb->protocol = eth_type_trans(skb, tun->dev);
++ if (frags && !pskb_may_pull(skb, ETH_HLEN)) {
++ err = -ENOMEM;
++ goto drop;
++ }
++ skb->protocol = eth_type_trans(skb, tun->dev);
+ break;
+ }
+
+@@ -1929,9 +1932,12 @@ drop:
+ }
+
+ if (frags) {
++ u32 headlen;
++
+ /* Exercise flow dissector code path. */
+- u32 headlen = eth_get_headlen(tun->dev, skb->data,
+- skb_headlen(skb));
++ skb_push(skb, ETH_HLEN);
++ headlen = eth_get_headlen(tun->dev, skb->data,
++ skb_headlen(skb));
+
+ if (unlikely(headlen > skb_headlen(skb))) {
+ this_cpu_inc(tun->pcpu_stats->rx_dropped);
+diff --git a/drivers/net/vxlan.c b/drivers/net/vxlan.c
+index a5b415fed11e..779e56c43d27 100644
+--- a/drivers/net/vxlan.c
++++ b/drivers/net/vxlan.c
+@@ -1924,6 +1924,10 @@ static struct sk_buff *vxlan_na_create(struct sk_buff *request,
+ ns_olen = request->len - skb_network_offset(request) -
+ sizeof(struct ipv6hdr) - sizeof(*ns);
+ for (i = 0; i < ns_olen-1; i += (ns->opt[i+1]<<3)) {
++ if (!ns->opt[i + 1]) {
++ kfree_skb(reply);
++ return NULL;
++ }
+ if (ns->opt[i] == ND_OPT_SOURCE_LL_ADDR) {
+ daddr = ns->opt + i + sizeof(struct nd_opt_hdr);
+ break;
+diff --git a/drivers/net/wireless/ath/ath9k/hif_usb.c b/drivers/net/wireless/ath/ath9k/hif_usb.c
+index dd0c32379375..4ed21dad6a8e 100644
+--- a/drivers/net/wireless/ath/ath9k/hif_usb.c
++++ b/drivers/net/wireless/ath/ath9k/hif_usb.c
+@@ -612,6 +612,11 @@ static void ath9k_hif_usb_rx_stream(struct hif_device_usb *hif_dev,
+ hif_dev->remain_skb = nskb;
+ spin_unlock(&hif_dev->rx_lock);
+ } else {
++ if (pool_index == MAX_PKT_NUM_IN_TRANSFER) {
++ dev_err(&hif_dev->udev->dev,
++ "ath9k_htc: over RX MAX_PKT_NUM\n");
++ goto err;
++ }
+ nskb = __dev_alloc_skb(pkt_len + 32, GFP_ATOMIC);
+ if (!nskb) {
+ dev_err(&hif_dev->udev->dev,
+@@ -638,9 +643,9 @@ err:
+
+ static void ath9k_hif_usb_rx_cb(struct urb *urb)
+ {
+- struct sk_buff *skb = (struct sk_buff *) urb->context;
+- struct hif_device_usb *hif_dev =
+- usb_get_intfdata(usb_ifnum_to_if(urb->dev, 0));
++ struct rx_buf *rx_buf = (struct rx_buf *)urb->context;
++ struct hif_device_usb *hif_dev = rx_buf->hif_dev;
++ struct sk_buff *skb = rx_buf->skb;
+ int ret;
+
+ if (!skb)
+@@ -680,14 +685,15 @@ resubmit:
+ return;
+ free:
+ kfree_skb(skb);
++ kfree(rx_buf);
+ }
+
+ static void ath9k_hif_usb_reg_in_cb(struct urb *urb)
+ {
+- struct sk_buff *skb = (struct sk_buff *) urb->context;
++ struct rx_buf *rx_buf = (struct rx_buf *)urb->context;
++ struct hif_device_usb *hif_dev = rx_buf->hif_dev;
++ struct sk_buff *skb = rx_buf->skb;
+ struct sk_buff *nskb;
+- struct hif_device_usb *hif_dev =
+- usb_get_intfdata(usb_ifnum_to_if(urb->dev, 0));
+ int ret;
+
+ if (!skb)
+@@ -745,6 +751,7 @@ resubmit:
+ return;
+ free:
+ kfree_skb(skb);
++ kfree(rx_buf);
+ urb->context = NULL;
+ }
+
+@@ -790,7 +797,7 @@ static int ath9k_hif_usb_alloc_tx_urbs(struct hif_device_usb *hif_dev)
+ init_usb_anchor(&hif_dev->mgmt_submitted);
+
+ for (i = 0; i < MAX_TX_URB_NUM; i++) {
+- tx_buf = kzalloc(sizeof(struct tx_buf), GFP_KERNEL);
++ tx_buf = kzalloc(sizeof(*tx_buf), GFP_KERNEL);
+ if (!tx_buf)
+ goto err;
+
+@@ -827,8 +834,9 @@ static void ath9k_hif_usb_dealloc_rx_urbs(struct hif_device_usb *hif_dev)
+
+ static int ath9k_hif_usb_alloc_rx_urbs(struct hif_device_usb *hif_dev)
+ {
+- struct urb *urb = NULL;
++ struct rx_buf *rx_buf = NULL;
+ struct sk_buff *skb = NULL;
++ struct urb *urb = NULL;
+ int i, ret;
+
+ init_usb_anchor(&hif_dev->rx_submitted);
+@@ -836,6 +844,12 @@ static int ath9k_hif_usb_alloc_rx_urbs(struct hif_device_usb *hif_dev)
+
+ for (i = 0; i < MAX_RX_URB_NUM; i++) {
+
++ rx_buf = kzalloc(sizeof(*rx_buf), GFP_KERNEL);
++ if (!rx_buf) {
++ ret = -ENOMEM;
++ goto err_rxb;
++ }
++
+ /* Allocate URB */
+ urb = usb_alloc_urb(0, GFP_KERNEL);
+ if (urb == NULL) {
+@@ -850,11 +864,14 @@ static int ath9k_hif_usb_alloc_rx_urbs(struct hif_device_usb *hif_dev)
+ goto err_skb;
+ }
+
++ rx_buf->hif_dev = hif_dev;
++ rx_buf->skb = skb;
++
+ usb_fill_bulk_urb(urb, hif_dev->udev,
+ usb_rcvbulkpipe(hif_dev->udev,
+ USB_WLAN_RX_PIPE),
+ skb->data, MAX_RX_BUF_SIZE,
+- ath9k_hif_usb_rx_cb, skb);
++ ath9k_hif_usb_rx_cb, rx_buf);
+
+ /* Anchor URB */
+ usb_anchor_urb(urb, &hif_dev->rx_submitted);
+@@ -880,6 +897,8 @@ err_submit:
+ err_skb:
+ usb_free_urb(urb);
+ err_urb:
++ kfree(rx_buf);
++err_rxb:
+ ath9k_hif_usb_dealloc_rx_urbs(hif_dev);
+ return ret;
+ }
+@@ -891,14 +910,21 @@ static void ath9k_hif_usb_dealloc_reg_in_urbs(struct hif_device_usb *hif_dev)
+
+ static int ath9k_hif_usb_alloc_reg_in_urbs(struct hif_device_usb *hif_dev)
+ {
+- struct urb *urb = NULL;
++ struct rx_buf *rx_buf = NULL;
+ struct sk_buff *skb = NULL;
++ struct urb *urb = NULL;
+ int i, ret;
+
+ init_usb_anchor(&hif_dev->reg_in_submitted);
+
+ for (i = 0; i < MAX_REG_IN_URB_NUM; i++) {
+
++ rx_buf = kzalloc(sizeof(*rx_buf), GFP_KERNEL);
++ if (!rx_buf) {
++ ret = -ENOMEM;
++ goto err_rxb;
++ }
++
+ /* Allocate URB */
+ urb = usb_alloc_urb(0, GFP_KERNEL);
+ if (urb == NULL) {
+@@ -913,11 +939,14 @@ static int ath9k_hif_usb_alloc_reg_in_urbs(struct hif_device_usb *hif_dev)
+ goto err_skb;
+ }
+
++ rx_buf->hif_dev = hif_dev;
++ rx_buf->skb = skb;
++
+ usb_fill_int_urb(urb, hif_dev->udev,
+ usb_rcvintpipe(hif_dev->udev,
+ USB_REG_IN_PIPE),
+ skb->data, MAX_REG_IN_BUF_SIZE,
+- ath9k_hif_usb_reg_in_cb, skb, 1);
++ ath9k_hif_usb_reg_in_cb, rx_buf, 1);
+
+ /* Anchor URB */
+ usb_anchor_urb(urb, &hif_dev->reg_in_submitted);
+@@ -943,6 +972,8 @@ err_submit:
+ err_skb:
+ usb_free_urb(urb);
+ err_urb:
++ kfree(rx_buf);
++err_rxb:
+ ath9k_hif_usb_dealloc_reg_in_urbs(hif_dev);
+ return ret;
+ }
+@@ -973,7 +1004,7 @@ err:
+ return -ENOMEM;
+ }
+
+-static void ath9k_hif_usb_dealloc_urbs(struct hif_device_usb *hif_dev)
++void ath9k_hif_usb_dealloc_urbs(struct hif_device_usb *hif_dev)
+ {
+ usb_kill_anchored_urbs(&hif_dev->regout_submitted);
+ ath9k_hif_usb_dealloc_reg_in_urbs(hif_dev);
+@@ -1341,8 +1372,9 @@ static void ath9k_hif_usb_disconnect(struct usb_interface *interface)
+
+ if (hif_dev->flags & HIF_USB_READY) {
+ ath9k_htc_hw_deinit(hif_dev->htc_handle, unplugged);
+- ath9k_htc_hw_free(hif_dev->htc_handle);
+ ath9k_hif_usb_dev_deinit(hif_dev);
++ ath9k_destoy_wmi(hif_dev->htc_handle->drv_priv);
++ ath9k_htc_hw_free(hif_dev->htc_handle);
+ }
+
+ usb_set_intfdata(interface, NULL);
+diff --git a/drivers/net/wireless/ath/ath9k/hif_usb.h b/drivers/net/wireless/ath/ath9k/hif_usb.h
+index 7846916aa01d..5985aa15ca93 100644
+--- a/drivers/net/wireless/ath/ath9k/hif_usb.h
++++ b/drivers/net/wireless/ath/ath9k/hif_usb.h
+@@ -86,6 +86,11 @@ struct tx_buf {
+ struct list_head list;
+ };
+
++struct rx_buf {
++ struct sk_buff *skb;
++ struct hif_device_usb *hif_dev;
++};
++
+ #define HIF_USB_TX_STOP BIT(0)
+ #define HIF_USB_TX_FLUSH BIT(1)
+
+@@ -133,5 +138,6 @@ struct hif_device_usb {
+
+ int ath9k_hif_usb_init(void);
+ void ath9k_hif_usb_exit(void);
++void ath9k_hif_usb_dealloc_urbs(struct hif_device_usb *hif_dev);
+
+ #endif /* HTC_USB_H */
+diff --git a/drivers/net/wireless/ath/ath9k/htc_drv_init.c b/drivers/net/wireless/ath/ath9k/htc_drv_init.c
+index d961095ab01f..40a065028ebe 100644
+--- a/drivers/net/wireless/ath/ath9k/htc_drv_init.c
++++ b/drivers/net/wireless/ath/ath9k/htc_drv_init.c
+@@ -931,8 +931,9 @@ err_init:
+ int ath9k_htc_probe_device(struct htc_target *htc_handle, struct device *dev,
+ u16 devid, char *product, u32 drv_info)
+ {
+- struct ieee80211_hw *hw;
++ struct hif_device_usb *hif_dev;
+ struct ath9k_htc_priv *priv;
++ struct ieee80211_hw *hw;
+ int ret;
+
+ hw = ieee80211_alloc_hw(sizeof(struct ath9k_htc_priv), &ath9k_htc_ops);
+@@ -967,7 +968,10 @@ int ath9k_htc_probe_device(struct htc_target *htc_handle, struct device *dev,
+ return 0;
+
+ err_init:
+- ath9k_deinit_wmi(priv);
++ ath9k_stop_wmi(priv);
++ hif_dev = (struct hif_device_usb *)htc_handle->hif_dev;
++ ath9k_hif_usb_dealloc_urbs(hif_dev);
++ ath9k_destoy_wmi(priv);
+ err_free:
+ ieee80211_free_hw(hw);
+ return ret;
+@@ -982,7 +986,7 @@ void ath9k_htc_disconnect_device(struct htc_target *htc_handle, bool hotunplug)
+ htc_handle->drv_priv->ah->ah_flags |= AH_UNPLUGGED;
+
+ ath9k_deinit_device(htc_handle->drv_priv);
+- ath9k_deinit_wmi(htc_handle->drv_priv);
++ ath9k_stop_wmi(htc_handle->drv_priv);
+ ieee80211_free_hw(htc_handle->drv_priv->hw);
+ }
+ }
+diff --git a/drivers/net/wireless/ath/ath9k/htc_drv_txrx.c b/drivers/net/wireless/ath/ath9k/htc_drv_txrx.c
+index 9cec5c216e1f..118e5550b10c 100644
+--- a/drivers/net/wireless/ath/ath9k/htc_drv_txrx.c
++++ b/drivers/net/wireless/ath/ath9k/htc_drv_txrx.c
+@@ -999,9 +999,9 @@ static bool ath9k_rx_prepare(struct ath9k_htc_priv *priv,
+ * which are not PHY_ERROR (short radar pulses have a length of 3)
+ */
+ if (unlikely(!rs_datalen || (rs_datalen < 10 && !is_phyerr))) {
+- ath_warn(common,
+- "Short RX data len, dropping (dlen: %d)\n",
+- rs_datalen);
++ ath_dbg(common, ANY,
++ "Short RX data len, dropping (dlen: %d)\n",
++ rs_datalen);
+ goto rx_next;
+ }
+
+diff --git a/drivers/net/wireless/ath/ath9k/htc_hst.c b/drivers/net/wireless/ath/ath9k/htc_hst.c
+index d091c8ebdcf0..d2e062eaf561 100644
+--- a/drivers/net/wireless/ath/ath9k/htc_hst.c
++++ b/drivers/net/wireless/ath/ath9k/htc_hst.c
+@@ -113,6 +113,9 @@ static void htc_process_conn_rsp(struct htc_target *target,
+
+ if (svc_rspmsg->status == HTC_SERVICE_SUCCESS) {
+ epid = svc_rspmsg->endpoint_id;
++ if (epid < 0 || epid >= ENDPOINT_MAX)
++ return;
++
+ service_id = be16_to_cpu(svc_rspmsg->service_id);
+ max_msglen = be16_to_cpu(svc_rspmsg->max_msg_len);
+ endpoint = &target->endpoint[epid];
+@@ -170,7 +173,6 @@ static int htc_config_pipe_credits(struct htc_target *target)
+ time_left = wait_for_completion_timeout(&target->cmd_wait, HZ);
+ if (!time_left) {
+ dev_err(target->dev, "HTC credit config timeout\n");
+- kfree_skb(skb);
+ return -ETIMEDOUT;
+ }
+
+@@ -206,7 +208,6 @@ static int htc_setup_complete(struct htc_target *target)
+ time_left = wait_for_completion_timeout(&target->cmd_wait, HZ);
+ if (!time_left) {
+ dev_err(target->dev, "HTC start timeout\n");
+- kfree_skb(skb);
+ return -ETIMEDOUT;
+ }
+
+@@ -279,7 +280,6 @@ int htc_connect_service(struct htc_target *target,
+ if (!time_left) {
+ dev_err(target->dev, "Service connection timeout for: %d\n",
+ service_connreq->service_id);
+- kfree_skb(skb);
+ return -ETIMEDOUT;
+ }
+
+diff --git a/drivers/net/wireless/ath/ath9k/wmi.c b/drivers/net/wireless/ath/ath9k/wmi.c
+index cdc146091194..e7a3127395be 100644
+--- a/drivers/net/wireless/ath/ath9k/wmi.c
++++ b/drivers/net/wireless/ath/ath9k/wmi.c
+@@ -112,14 +112,17 @@ struct wmi *ath9k_init_wmi(struct ath9k_htc_priv *priv)
+ return wmi;
+ }
+
+-void ath9k_deinit_wmi(struct ath9k_htc_priv *priv)
++void ath9k_stop_wmi(struct ath9k_htc_priv *priv)
+ {
+ struct wmi *wmi = priv->wmi;
+
+ mutex_lock(&wmi->op_mutex);
+ wmi->stopped = true;
+ mutex_unlock(&wmi->op_mutex);
++}
+
++void ath9k_destoy_wmi(struct ath9k_htc_priv *priv)
++{
+ kfree(priv->wmi);
+ }
+
+@@ -336,7 +339,6 @@ int ath9k_wmi_cmd(struct wmi *wmi, enum wmi_cmd_id cmd_id,
+ ath_dbg(common, WMI, "Timeout waiting for WMI command: %s\n",
+ wmi_cmd_to_name(cmd_id));
+ mutex_unlock(&wmi->op_mutex);
+- kfree_skb(skb);
+ return -ETIMEDOUT;
+ }
+
+diff --git a/drivers/net/wireless/ath/ath9k/wmi.h b/drivers/net/wireless/ath/ath9k/wmi.h
+index 380175d5ecd7..d8b912206232 100644
+--- a/drivers/net/wireless/ath/ath9k/wmi.h
++++ b/drivers/net/wireless/ath/ath9k/wmi.h
+@@ -179,7 +179,6 @@ struct wmi {
+ };
+
+ struct wmi *ath9k_init_wmi(struct ath9k_htc_priv *priv);
+-void ath9k_deinit_wmi(struct ath9k_htc_priv *priv);
+ int ath9k_wmi_connect(struct htc_target *htc, struct wmi *wmi,
+ enum htc_endpoint_id *wmi_ctrl_epid);
+ int ath9k_wmi_cmd(struct wmi *wmi, enum wmi_cmd_id cmd_id,
+@@ -189,6 +188,8 @@ int ath9k_wmi_cmd(struct wmi *wmi, enum wmi_cmd_id cmd_id,
+ void ath9k_wmi_event_tasklet(unsigned long data);
+ void ath9k_fatal_work(struct work_struct *work);
+ void ath9k_wmi_event_drain(struct ath9k_htc_priv *priv);
++void ath9k_stop_wmi(struct ath9k_htc_priv *priv);
++void ath9k_destoy_wmi(struct ath9k_htc_priv *priv);
+
+ #define WMI_CMD(_wmi_cmd) \
+ do { \
+diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
+index 595fcf59843f..6d3234f75692 100644
+--- a/drivers/pci/pci.c
++++ b/drivers/pci/pci.c
+@@ -4673,10 +4673,10 @@ static bool pcie_wait_for_link_delay(struct pci_dev *pdev, bool active,
+
+ /*
+ * Some controllers might not implement link active reporting. In this
+- * case, we wait for 1000 + 100 ms.
++ * case, we wait for 1000 ms + any delay requested by the caller.
+ */
+ if (!pdev->link_active_reporting) {
+- msleep(1100);
++ msleep(timeout + delay);
+ return true;
+ }
+
+diff --git a/drivers/platform/x86/sony-laptop.c b/drivers/platform/x86/sony-laptop.c
+index 51309f7ceede..e5a1b5533408 100644
+--- a/drivers/platform/x86/sony-laptop.c
++++ b/drivers/platform/x86/sony-laptop.c
+@@ -757,33 +757,6 @@ static union acpi_object *__call_snc_method(acpi_handle handle, char *method,
+ return result;
+ }
+
+-static int sony_nc_int_call(acpi_handle handle, char *name, int *value,
+- int *result)
+-{
+- union acpi_object *object = NULL;
+- if (value) {
+- u64 v = *value;
+- object = __call_snc_method(handle, name, &v);
+- } else
+- object = __call_snc_method(handle, name, NULL);
+-
+- if (!object)
+- return -EINVAL;
+-
+- if (object->type != ACPI_TYPE_INTEGER) {
+- pr_warn("Invalid acpi_object: expected 0x%x got 0x%x\n",
+- ACPI_TYPE_INTEGER, object->type);
+- kfree(object);
+- return -EINVAL;
+- }
+-
+- if (result)
+- *result = object->integer.value;
+-
+- kfree(object);
+- return 0;
+-}
+-
+ #define MIN(a, b) (a > b ? b : a)
+ static int sony_nc_buffer_call(acpi_handle handle, char *name, u64 *value,
+ void *buffer, size_t buflen)
+@@ -795,17 +768,20 @@ static int sony_nc_buffer_call(acpi_handle handle, char *name, u64 *value,
+ if (!object)
+ return -EINVAL;
+
+- if (object->type == ACPI_TYPE_BUFFER) {
++ if (!buffer) {
++ /* do nothing */
++ } else if (object->type == ACPI_TYPE_BUFFER) {
+ len = MIN(buflen, object->buffer.length);
++ memset(buffer, 0, buflen);
+ memcpy(buffer, object->buffer.pointer, len);
+
+ } else if (object->type == ACPI_TYPE_INTEGER) {
+ len = MIN(buflen, sizeof(object->integer.value));
++ memset(buffer, 0, buflen);
+ memcpy(buffer, &object->integer.value, len);
+
+ } else {
+- pr_warn("Invalid acpi_object: expected 0x%x got 0x%x\n",
+- ACPI_TYPE_BUFFER, object->type);
++ pr_warn("Unexpected acpi_object: 0x%x\n", object->type);
+ ret = -EINVAL;
+ }
+
+@@ -813,6 +789,23 @@ static int sony_nc_buffer_call(acpi_handle handle, char *name, u64 *value,
+ return ret;
+ }
+
++static int sony_nc_int_call(acpi_handle handle, char *name, int *value, int
++ *result)
++{
++ int ret;
++
++ if (value) {
++ u64 v = *value;
++
++ ret = sony_nc_buffer_call(handle, name, &v, result,
++ sizeof(*result));
++ } else {
++ ret = sony_nc_buffer_call(handle, name, NULL, result,
++ sizeof(*result));
++ }
++ return ret;
++}
++
+ struct sony_nc_handles {
+ u16 cap[0x10];
+ struct device_attribute devattr;
+@@ -2295,7 +2288,12 @@ static void sony_nc_thermal_cleanup(struct platform_device *pd)
+ #ifdef CONFIG_PM_SLEEP
+ static void sony_nc_thermal_resume(void)
+ {
+- unsigned int status = sony_nc_thermal_mode_get();
++ int status;
++
++ if (!th_handle)
++ return;
++
++ status = sony_nc_thermal_mode_get();
+
+ if (status != th_handle->mode)
+ sony_nc_thermal_mode_set(th_handle->mode);
+diff --git a/drivers/remoteproc/remoteproc_core.c b/drivers/remoteproc/remoteproc_core.c
+index e12a54e67588..be15aace9b3c 100644
+--- a/drivers/remoteproc/remoteproc_core.c
++++ b/drivers/remoteproc/remoteproc_core.c
+@@ -517,7 +517,7 @@ static int rproc_handle_vdev(struct rproc *rproc, struct fw_rsc_vdev *rsc,
+
+ /* Initialise vdev subdevice */
+ snprintf(name, sizeof(name), "vdev%dbuffer", rvdev->index);
+- rvdev->dev.parent = rproc->dev.parent;
++ rvdev->dev.parent = &rproc->dev;
+ rvdev->dev.dma_pfn_offset = rproc->dev.parent->dma_pfn_offset;
+ rvdev->dev.release = rproc_rvdev_release;
+ dev_set_name(&rvdev->dev, "%s#%s", dev_name(rvdev->dev.parent), name);
+diff --git a/drivers/remoteproc/remoteproc_virtio.c b/drivers/remoteproc/remoteproc_virtio.c
+index e61d738d9b47..44187fe43677 100644
+--- a/drivers/remoteproc/remoteproc_virtio.c
++++ b/drivers/remoteproc/remoteproc_virtio.c
+@@ -376,6 +376,18 @@ int rproc_add_virtio_dev(struct rproc_vdev *rvdev, int id)
+ goto out;
+ }
+ }
++ } else {
++ struct device_node *np = rproc->dev.parent->of_node;
++
++ /*
++ * If we don't have dedicated buffer, just attempt to re-assign
++ * the reserved memory from our parent. A default memory-region
++ * at index 0 from the parent's memory-regions is assigned for
++ * the rvdev dev to allocate from. Failure is non-critical and
++ * the allocations will fall back to global pools, so don't
++ * check return value either.
++ */
++ of_reserved_mem_device_init_by_idx(dev, np, 0);
+ }
+
+ /* Allocate virtio device */
+diff --git a/drivers/scsi/lpfc/lpfc_ct.c b/drivers/scsi/lpfc/lpfc_ct.c
+index 2aa578d20f8c..7fce73c39c1c 100644
+--- a/drivers/scsi/lpfc/lpfc_ct.c
++++ b/drivers/scsi/lpfc/lpfc_ct.c
+@@ -462,7 +462,6 @@ lpfc_prep_node_fc4type(struct lpfc_vport *vport, uint32_t Did, uint8_t fc4_type)
+ struct lpfc_nodelist *ndlp;
+
+ if ((vport->port_type != LPFC_NPIV_PORT) ||
+- (fc4_type == FC_TYPE_FCP) ||
+ !(vport->ct_flags & FC_CT_RFF_ID) || !vport->cfg_restrict_login) {
+
+ ndlp = lpfc_setup_disc_node(vport, Did);
+diff --git a/drivers/scsi/megaraid/megaraid_sas.h b/drivers/scsi/megaraid/megaraid_sas.h
+index 83d8c4cb1ad5..98827363bc49 100644
+--- a/drivers/scsi/megaraid/megaraid_sas.h
++++ b/drivers/scsi/megaraid/megaraid_sas.h
+@@ -511,7 +511,7 @@ union MR_PROGRESS {
+ */
+ struct MR_PD_PROGRESS {
+ struct {
+-#ifndef MFI_BIG_ENDIAN
++#ifndef __BIG_ENDIAN_BITFIELD
+ u32 rbld:1;
+ u32 patrol:1;
+ u32 clear:1;
+@@ -537,7 +537,7 @@ struct MR_PD_PROGRESS {
+ };
+
+ struct {
+-#ifndef MFI_BIG_ENDIAN
++#ifndef __BIG_ENDIAN_BITFIELD
+ u32 rbld:1;
+ u32 patrol:1;
+ u32 clear:1;
+diff --git a/drivers/scsi/megaraid/megaraid_sas_fusion.c b/drivers/scsi/megaraid/megaraid_sas_fusion.c
+index b2ad96564484..03a6c86475c8 100644
+--- a/drivers/scsi/megaraid/megaraid_sas_fusion.c
++++ b/drivers/scsi/megaraid/megaraid_sas_fusion.c
+@@ -4238,6 +4238,7 @@ void megasas_refire_mgmt_cmd(struct megasas_instance *instance,
+ struct fusion_context *fusion;
+ struct megasas_cmd *cmd_mfi;
+ union MEGASAS_REQUEST_DESCRIPTOR_UNION *req_desc;
++ struct MPI2_RAID_SCSI_IO_REQUEST *scsi_io_req;
+ u16 smid;
+ bool refire_cmd = 0;
+ u8 result;
+@@ -4305,6 +4306,11 @@ void megasas_refire_mgmt_cmd(struct megasas_instance *instance,
+ result = COMPLETE_CMD;
+ }
+
++ scsi_io_req = (struct MPI2_RAID_SCSI_IO_REQUEST *)
++ cmd_fusion->io_request;
++ if (scsi_io_req->Function == MPI2_FUNCTION_SCSI_TASK_MGMT)
++ result = RETURN_CMD;
++
+ switch (result) {
+ case REFIRE_CMD:
+ megasas_fire_cmd_fusion(instance, req_desc);
+@@ -4533,7 +4539,6 @@ megasas_issue_tm(struct megasas_instance *instance, u16 device_handle,
+ if (!timeleft) {
+ dev_err(&instance->pdev->dev,
+ "task mgmt type 0x%x timed out\n", type);
+- cmd_mfi->flags |= DRV_DCMD_SKIP_REFIRE;
+ mutex_unlock(&instance->reset_mutex);
+ rc = megasas_reset_fusion(instance->host, MFI_IO_TIMEOUT_OCR);
+ mutex_lock(&instance->reset_mutex);
+diff --git a/drivers/scsi/megaraid/megaraid_sas_fusion.h b/drivers/scsi/megaraid/megaraid_sas_fusion.h
+index d57ecc7f88d8..30de4b01f703 100644
+--- a/drivers/scsi/megaraid/megaraid_sas_fusion.h
++++ b/drivers/scsi/megaraid/megaraid_sas_fusion.h
+@@ -774,7 +774,7 @@ struct MR_SPAN_BLOCK_INFO {
+ struct MR_CPU_AFFINITY_MASK {
+ union {
+ struct {
+-#ifndef MFI_BIG_ENDIAN
++#ifndef __BIG_ENDIAN_BITFIELD
+ u8 hw_path:1;
+ u8 cpu0:1;
+ u8 cpu1:1;
+@@ -866,7 +866,7 @@ struct MR_LD_RAID {
+ __le16 seqNum;
+
+ struct {
+-#ifndef MFI_BIG_ENDIAN
++#ifndef __BIG_ENDIAN_BITFIELD
+ u32 ldSyncRequired:1;
+ u32 regTypeReqOnReadIsValid:1;
+ u32 isEPD:1;
+@@ -889,7 +889,7 @@ struct {
+ /* 0x30 - 0x33, Logical block size for the LD */
+ u32 logical_block_length;
+ struct {
+-#ifndef MFI_BIG_ENDIAN
++#ifndef __BIG_ENDIAN_BITFIELD
+ /* 0x34, P_I_EXPONENT from READ CAPACITY 16 */
+ u32 ld_pi_exp:4;
+ /* 0x34, LOGICAL BLOCKS PER PHYSICAL
+diff --git a/drivers/spi/spi-bcm-qspi.c b/drivers/spi/spi-bcm-qspi.c
+index 23d295f36c80..c64be5e8fb8a 100644
+--- a/drivers/spi/spi-bcm-qspi.c
++++ b/drivers/spi/spi-bcm-qspi.c
+@@ -670,7 +670,7 @@ static void read_from_hw(struct bcm_qspi *qspi, int slots)
+ if (buf)
+ buf[tp.byte] = read_rxram_slot_u8(qspi, slot);
+ dev_dbg(&qspi->pdev->dev, "RD %02x\n",
+- buf ? buf[tp.byte] : 0xff);
++ buf ? buf[tp.byte] : 0x0);
+ } else {
+ u16 *buf = tp.trans->rx_buf;
+
+@@ -678,7 +678,7 @@ static void read_from_hw(struct bcm_qspi *qspi, int slots)
+ buf[tp.byte / 2] = read_rxram_slot_u16(qspi,
+ slot);
+ dev_dbg(&qspi->pdev->dev, "RD %04x\n",
+- buf ? buf[tp.byte] : 0xffff);
++ buf ? buf[tp.byte / 2] : 0x0);
+ }
+
+ update_qspi_trans_byte_count(qspi, &tp,
+@@ -733,13 +733,13 @@ static int write_to_hw(struct bcm_qspi *qspi, struct spi_device *spi)
+ while (!tstatus && slot < MSPI_NUM_CDRAM) {
+ if (tp.trans->bits_per_word <= 8) {
+ const u8 *buf = tp.trans->tx_buf;
+- u8 val = buf ? buf[tp.byte] : 0xff;
++ u8 val = buf ? buf[tp.byte] : 0x00;
+
+ write_txram_slot_u8(qspi, slot, val);
+ dev_dbg(&qspi->pdev->dev, "WR %02x\n", val);
+ } else {
+ const u16 *buf = tp.trans->tx_buf;
+- u16 val = buf ? buf[tp.byte / 2] : 0xffff;
++ u16 val = buf ? buf[tp.byte / 2] : 0x0000;
+
+ write_txram_slot_u16(qspi, slot, val);
+ dev_dbg(&qspi->pdev->dev, "WR %04x\n", val);
+@@ -1222,6 +1222,11 @@ int bcm_qspi_probe(struct platform_device *pdev,
+ }
+
+ qspi = spi_master_get_devdata(master);
++
++ qspi->clk = devm_clk_get_optional(&pdev->dev, NULL);
++ if (IS_ERR(qspi->clk))
++ return PTR_ERR(qspi->clk);
++
+ qspi->pdev = pdev;
+ qspi->trans_pos.trans = NULL;
+ qspi->trans_pos.byte = 0;
+@@ -1335,13 +1340,6 @@ int bcm_qspi_probe(struct platform_device *pdev,
+ qspi->soc_intc = NULL;
+ }
+
+- qspi->clk = devm_clk_get(&pdev->dev, NULL);
+- if (IS_ERR(qspi->clk)) {
+- dev_warn(dev, "unable to get clock\n");
+- ret = PTR_ERR(qspi->clk);
+- goto qspi_probe_err;
+- }
+-
+ ret = clk_prepare_enable(qspi->clk);
+ if (ret) {
+ dev_err(dev, "failed to prepare clock\n");
+diff --git a/drivers/spi/spi-bcm2835.c b/drivers/spi/spi-bcm2835.c
+index 11c235879bb7..fd887a6492f4 100644
+--- a/drivers/spi/spi-bcm2835.c
++++ b/drivers/spi/spi-bcm2835.c
+@@ -1347,7 +1347,7 @@ static int bcm2835_spi_probe(struct platform_device *pdev)
+ goto out_dma_release;
+ }
+
+- err = devm_spi_register_controller(&pdev->dev, ctlr);
++ err = spi_register_controller(ctlr);
+ if (err) {
+ dev_err(&pdev->dev, "could not register SPI controller: %d\n",
+ err);
+@@ -1374,6 +1374,8 @@ static int bcm2835_spi_remove(struct platform_device *pdev)
+
+ bcm2835_debugfs_remove(bs);
+
++ spi_unregister_controller(ctlr);
++
+ /* Clear FIFOs, and disable the HW block */
+ bcm2835_wr(bs, BCM2835_SPI_CS,
+ BCM2835_SPI_CS_CLEAR_RX | BCM2835_SPI_CS_CLEAR_TX);
+diff --git a/drivers/spi/spi-bcm2835aux.c b/drivers/spi/spi-bcm2835aux.c
+index a2162ff56a12..c331efd6e86b 100644
+--- a/drivers/spi/spi-bcm2835aux.c
++++ b/drivers/spi/spi-bcm2835aux.c
+@@ -569,7 +569,7 @@ static int bcm2835aux_spi_probe(struct platform_device *pdev)
+ goto out_clk_disable;
+ }
+
+- err = devm_spi_register_master(&pdev->dev, master);
++ err = spi_register_master(master);
+ if (err) {
+ dev_err(&pdev->dev, "could not register SPI master: %d\n", err);
+ goto out_clk_disable;
+@@ -593,6 +593,8 @@ static int bcm2835aux_spi_remove(struct platform_device *pdev)
+
+ bcm2835aux_debugfs_remove(bs);
+
++ spi_unregister_master(master);
++
+ bcm2835aux_spi_reset_hw(bs);
+
+ /* disable the HW block by releasing the clock */
+diff --git a/drivers/spi/spi-dw.c b/drivers/spi/spi-dw.c
+index 31e3f866d11a..dbf9b8d5cebe 100644
+--- a/drivers/spi/spi-dw.c
++++ b/drivers/spi/spi-dw.c
+@@ -128,12 +128,20 @@ void dw_spi_set_cs(struct spi_device *spi, bool enable)
+ {
+ struct dw_spi *dws = spi_controller_get_devdata(spi->controller);
+ struct chip_data *chip = spi_get_ctldata(spi);
++ bool cs_high = !!(spi->mode & SPI_CS_HIGH);
+
+ /* Chip select logic is inverted from spi_set_cs() */
+ if (chip && chip->cs_control)
+ chip->cs_control(!enable);
+
+- if (!enable)
++ /*
++ * DW SPI controller demands any native CS being set in order to
++ * proceed with data transfer. So in order to activate the SPI
++ * communications we must set a corresponding bit in the Slave
++ * Enable register no matter whether the SPI core is configured to
++ * support active-high or active-low CS level.
++ */
++ if (cs_high == enable)
+ dw_writel(dws, DW_SPI_SER, BIT(spi->chip_select));
+ else if (dws->cs_override)
+ dw_writel(dws, DW_SPI_SER, 0);
+@@ -526,7 +534,7 @@ int dw_spi_add_host(struct device *dev, struct dw_spi *dws)
+ }
+ }
+
+- ret = devm_spi_register_controller(dev, master);
++ ret = spi_register_controller(master);
+ if (ret) {
+ dev_err(&master->dev, "problem registering spi master\n");
+ goto err_dma_exit;
+@@ -550,6 +558,8 @@ void dw_spi_remove_host(struct dw_spi *dws)
+ {
+ dw_spi_debugfs_remove(dws);
+
++ spi_unregister_controller(dws->master);
++
+ if (dws->dma_ops && dws->dma_ops->dma_exit)
+ dws->dma_ops->dma_exit(dws);
+
+diff --git a/drivers/spi/spi-pxa2xx.c b/drivers/spi/spi-pxa2xx.c
+index 73d2a65d0b6e..f6e87344a36c 100644
+--- a/drivers/spi/spi-pxa2xx.c
++++ b/drivers/spi/spi-pxa2xx.c
+@@ -1884,7 +1884,7 @@ static int pxa2xx_spi_probe(struct platform_device *pdev)
+
+ /* Register with the SPI framework */
+ platform_set_drvdata(pdev, drv_data);
+- status = devm_spi_register_controller(&pdev->dev, controller);
++ status = spi_register_controller(controller);
+ if (status != 0) {
+ dev_err(&pdev->dev, "problem registering spi controller\n");
+ goto out_error_pm_runtime_enabled;
+@@ -1893,7 +1893,6 @@ static int pxa2xx_spi_probe(struct platform_device *pdev)
+ return status;
+
+ out_error_pm_runtime_enabled:
+- pm_runtime_put_noidle(&pdev->dev);
+ pm_runtime_disable(&pdev->dev);
+
+ out_error_clock_enabled:
+@@ -1916,6 +1915,8 @@ static int pxa2xx_spi_remove(struct platform_device *pdev)
+
+ pm_runtime_get_sync(&pdev->dev);
+
++ spi_unregister_controller(drv_data->controller);
++
+ /* Disable the SSP at the peripheral and SOC level */
+ pxa2xx_spi_write(drv_data, SSCR0, 0);
+ clk_disable_unprepare(ssp->clk);
+diff --git a/drivers/spi/spi.c b/drivers/spi/spi.c
+index c92c89467e7e..7067e4c44400 100644
+--- a/drivers/spi/spi.c
++++ b/drivers/spi/spi.c
+@@ -2760,6 +2760,8 @@ void spi_unregister_controller(struct spi_controller *ctlr)
+ struct spi_controller *found;
+ int id = ctlr->bus_num;
+
++ device_for_each_child(&ctlr->dev, NULL, __unregister);
++
+ /* First make sure that this controller was ever added */
+ mutex_lock(&board_lock);
+ found = idr_find(&spi_master_idr, id);
+@@ -2772,7 +2774,6 @@ void spi_unregister_controller(struct spi_controller *ctlr)
+ list_del(&ctlr->list);
+ mutex_unlock(&board_lock);
+
+- device_for_each_child(&ctlr->dev, NULL, __unregister);
+ device_unregister(&ctlr->dev);
+ /* free bus id */
+ mutex_lock(&board_lock);
+diff --git a/drivers/staging/mt7621-pci/pci-mt7621.c b/drivers/staging/mt7621-pci/pci-mt7621.c
+index f58e3a51fc71..b9d460a9c041 100644
+--- a/drivers/staging/mt7621-pci/pci-mt7621.c
++++ b/drivers/staging/mt7621-pci/pci-mt7621.c
+@@ -502,17 +502,25 @@ static void mt7621_pcie_init_ports(struct mt7621_pcie *pcie)
+
+ mt7621_pcie_reset_ep_deassert(pcie);
+
++ tmp = NULL;
+ list_for_each_entry(port, &pcie->ports, list) {
+ u32 slot = port->slot;
+
+ if (!mt7621_pcie_port_is_linkup(port)) {
+ dev_err(dev, "pcie%d no card, disable it (RST & CLK)\n",
+ slot);
+- if (slot != 1)
+- phy_power_off(port->phy);
+ mt7621_control_assert(port);
+ mt7621_pcie_port_clk_disable(port);
+ port->enabled = false;
++
++ if (slot == 0) {
++ tmp = port;
++ continue;
++ }
++
++ if (slot == 1 && tmp && !tmp->enabled)
++ phy_power_off(tmp->phy);
++
+ }
+ }
+ }
+diff --git a/drivers/staging/wfx/main.c b/drivers/staging/wfx/main.c
+index 3c4c240229ad..8f19bd0fd2a1 100644
+--- a/drivers/staging/wfx/main.c
++++ b/drivers/staging/wfx/main.c
+@@ -466,7 +466,6 @@ int wfx_probe(struct wfx_dev *wdev)
+
+ err2:
+ ieee80211_unregister_hw(wdev->hw);
+- ieee80211_free_hw(wdev->hw);
+ err1:
+ wfx_bh_unregister(wdev);
+ return err;
+diff --git a/drivers/tty/serial/amba-pl011.c b/drivers/tty/serial/amba-pl011.c
+index 2296bb0f9578..458fc3d9d48c 100644
+--- a/drivers/tty/serial/amba-pl011.c
++++ b/drivers/tty/serial/amba-pl011.c
+@@ -2575,6 +2575,7 @@ static int pl011_setup_port(struct device *dev, struct uart_amba_port *uap,
+ uap->port.has_sysrq = IS_ENABLED(CONFIG_SERIAL_AMBA_PL011_CONSOLE);
+ uap->port.flags = UPF_BOOT_AUTOCONF;
+ uap->port.line = index;
++ spin_lock_init(&uap->port.lock);
+
+ amba_ports[index] = uap;
+
+diff --git a/drivers/tty/serial/imx.c b/drivers/tty/serial/imx.c
+index f4d68109bc8b..d5979a8bdc40 100644
+--- a/drivers/tty/serial/imx.c
++++ b/drivers/tty/serial/imx.c
+@@ -2398,6 +2398,9 @@ static int imx_uart_probe(struct platform_device *pdev)
+ }
+ }
+
++ /* We need to initialize lock even for non-registered console */
++ spin_lock_init(&sport->port.lock);
++
+ imx_uart_ports[sport->port.line] = sport;
+
+ platform_set_drvdata(pdev, sport);
+diff --git a/drivers/video/fbdev/vt8500lcdfb.c b/drivers/video/fbdev/vt8500lcdfb.c
+index f744479dc7df..c61476247ba8 100644
+--- a/drivers/video/fbdev/vt8500lcdfb.c
++++ b/drivers/video/fbdev/vt8500lcdfb.c
+@@ -230,6 +230,7 @@ static int vt8500lcd_blank(int blank, struct fb_info *info)
+ info->fix.visual == FB_VISUAL_STATIC_PSEUDOCOLOR)
+ for (i = 0; i < 256; i++)
+ vt8500lcd_setcolreg(i, 0, 0, 0, 0, info);
++ fallthrough;
+ case FB_BLANK_UNBLANK:
+ if (info->fix.visual == FB_VISUAL_PSEUDOCOLOR ||
+ info->fix.visual == FB_VISUAL_STATIC_PSEUDOCOLOR)
+diff --git a/drivers/video/fbdev/w100fb.c b/drivers/video/fbdev/w100fb.c
+index 2d6e2738b792..d96ab28f8ce4 100644
+--- a/drivers/video/fbdev/w100fb.c
++++ b/drivers/video/fbdev/w100fb.c
+@@ -588,6 +588,7 @@ static void w100fb_restore_vidmem(struct w100fb_par *par)
+ memsize=par->mach->mem->size;
+ memcpy_toio(remapped_fbuf + (W100_FB_BASE-MEM_WINDOW_BASE), par->saved_extmem, memsize);
+ vfree(par->saved_extmem);
++ par->saved_extmem = NULL;
+ }
+ if (par->saved_intmem) {
+ memsize=MEM_INT_SIZE;
+@@ -596,6 +597,7 @@ static void w100fb_restore_vidmem(struct w100fb_par *par)
+ else
+ memcpy_toio(remapped_fbuf + (W100_FB_BASE-MEM_WINDOW_BASE), par->saved_intmem, memsize);
+ vfree(par->saved_intmem);
++ par->saved_intmem = NULL;
+ }
+ }
+
+diff --git a/drivers/watchdog/imx_sc_wdt.c b/drivers/watchdog/imx_sc_wdt.c
+index 60a32469f7de..e9ee22a7cb45 100644
+--- a/drivers/watchdog/imx_sc_wdt.c
++++ b/drivers/watchdog/imx_sc_wdt.c
+@@ -175,6 +175,11 @@ static int imx_sc_wdt_probe(struct platform_device *pdev)
+ wdog->timeout = DEFAULT_TIMEOUT;
+
+ watchdog_init_timeout(wdog, 0, dev);
++
++ ret = imx_sc_wdt_set_timeout(wdog, wdog->timeout);
++ if (ret)
++ return ret;
++
+ watchdog_stop_on_reboot(wdog);
+ watchdog_stop_on_unregister(wdog);
+
+diff --git a/drivers/xen/pvcalls-back.c b/drivers/xen/pvcalls-back.c
+index cf4ce3e9358d..41a18ece029a 100644
+--- a/drivers/xen/pvcalls-back.c
++++ b/drivers/xen/pvcalls-back.c
+@@ -1088,7 +1088,8 @@ static void set_backend_state(struct xenbus_device *dev,
+ case XenbusStateInitialised:
+ switch (state) {
+ case XenbusStateConnected:
+- backend_connect(dev);
++ if (backend_connect(dev))
++ return;
+ xenbus_switch_state(dev, XenbusStateConnected);
+ break;
+ case XenbusStateClosing:
+diff --git a/fs/aio.c b/fs/aio.c
+index 5f3d3d814928..6483f9274d5e 100644
+--- a/fs/aio.c
++++ b/fs/aio.c
+@@ -176,6 +176,7 @@ struct fsync_iocb {
+ struct file *file;
+ struct work_struct work;
+ bool datasync;
++ struct cred *creds;
+ };
+
+ struct poll_iocb {
+@@ -1589,8 +1590,11 @@ static int aio_write(struct kiocb *req, const struct iocb *iocb,
+ static void aio_fsync_work(struct work_struct *work)
+ {
+ struct aio_kiocb *iocb = container_of(work, struct aio_kiocb, fsync.work);
++ const struct cred *old_cred = override_creds(iocb->fsync.creds);
+
+ iocb->ki_res.res = vfs_fsync(iocb->fsync.file, iocb->fsync.datasync);
++ revert_creds(old_cred);
++ put_cred(iocb->fsync.creds);
+ iocb_put(iocb);
+ }
+
+@@ -1604,6 +1608,10 @@ static int aio_fsync(struct fsync_iocb *req, const struct iocb *iocb,
+ if (unlikely(!req->file->f_op->fsync))
+ return -EINVAL;
+
++ req->creds = prepare_creds();
++ if (!req->creds)
++ return -ENOMEM;
++
+ req->datasync = datasync;
+ INIT_WORK(&req->work, aio_fsync_work);
+ schedule_work(&req->work);
+diff --git a/fs/cifs/cifsfs.c b/fs/cifs/cifsfs.c
+index c31f362fa098..e4a6d9d10b92 100644
+--- a/fs/cifs/cifsfs.c
++++ b/fs/cifs/cifsfs.c
+@@ -621,7 +621,7 @@ cifs_show_options(struct seq_file *s, struct dentry *root)
+ seq_printf(s, ",actimeo=%lu", cifs_sb->actimeo / HZ);
+
+ if (tcon->ses->chan_max > 1)
+- seq_printf(s, ",multichannel,max_channel=%zu",
++ seq_printf(s, ",multichannel,max_channels=%zu",
+ tcon->ses->chan_max);
+
+ return 0;
+diff --git a/fs/cifs/smb2pdu.c b/fs/cifs/smb2pdu.c
+index b30aa3cdd845..cdad4d933bce 100644
+--- a/fs/cifs/smb2pdu.c
++++ b/fs/cifs/smb2pdu.c
+@@ -2922,7 +2922,9 @@ SMB2_ioctl_init(struct cifs_tcon *tcon, struct smb_rqst *rqst,
+ * response size smaller.
+ */
+ req->MaxOutputResponse = cpu_to_le32(max_response_size);
+-
++ req->sync_hdr.CreditCharge =
++ cpu_to_le16(DIV_ROUND_UP(max(indatalen, max_response_size),
++ SMB2_MAX_BUFFER_SIZE));
+ if (is_fsctl)
+ req->Flags = cpu_to_le32(SMB2_0_IOCTL_IS_FSCTL);
+ else
+diff --git a/fs/exfat/file.c b/fs/exfat/file.c
+index c9db8eb0cfc3..5b4ddff18731 100644
+--- a/fs/exfat/file.c
++++ b/fs/exfat/file.c
+@@ -170,11 +170,11 @@ int __exfat_truncate(struct inode *inode, loff_t new_size)
+
+ /* File size should be zero if there is no cluster allocated */
+ if (ei->start_clu == EXFAT_EOF_CLUSTER) {
+- ep->dentry.stream.valid_size = 0;
+- ep->dentry.stream.size = 0;
++ ep2->dentry.stream.valid_size = 0;
++ ep2->dentry.stream.size = 0;
+ } else {
+- ep->dentry.stream.valid_size = cpu_to_le64(new_size);
+- ep->dentry.stream.size = ep->dentry.stream.valid_size;
++ ep2->dentry.stream.valid_size = cpu_to_le64(new_size);
++ ep2->dentry.stream.size = ep->dentry.stream.valid_size;
+ }
+
+ if (new_size == 0) {
+diff --git a/fs/exfat/super.c b/fs/exfat/super.c
+index a846ff555656..c1b1ed306a48 100644
+--- a/fs/exfat/super.c
++++ b/fs/exfat/super.c
+@@ -273,9 +273,8 @@ static int exfat_parse_param(struct fs_context *fc, struct fs_parameter *param)
+ break;
+ case Opt_charset:
+ exfat_free_iocharset(sbi);
+- opts->iocharset = kstrdup(param->string, GFP_KERNEL);
+- if (!opts->iocharset)
+- return -ENOMEM;
++ opts->iocharset = param->string;
++ param->string = NULL;
+ break;
+ case Opt_errors:
+ opts->errors = result.uint_32;
+@@ -630,7 +629,12 @@ static int exfat_get_tree(struct fs_context *fc)
+
+ static void exfat_free(struct fs_context *fc)
+ {
+- kfree(fc->s_fs_info);
++ struct exfat_sb_info *sbi = fc->s_fs_info;
++
++ if (sbi) {
++ exfat_free_iocharset(sbi);
++ kfree(sbi);
++ }
+ }
+
+ static const struct fs_context_operations exfat_context_ops = {
+diff --git a/fs/fat/inode.c b/fs/fat/inode.c
+index 71946da84388..bf8e04e25f35 100644
+--- a/fs/fat/inode.c
++++ b/fs/fat/inode.c
+@@ -1520,6 +1520,12 @@ static int fat_read_bpb(struct super_block *sb, struct fat_boot_sector *b,
+ goto out;
+ }
+
++ if (bpb->fat_fat_length == 0 && bpb->fat32_length == 0) {
++ if (!silent)
++ fat_msg(sb, KERN_ERR, "bogus number of FAT sectors");
++ goto out;
++ }
++
+ error = 0;
+
+ out:
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+index bb25e3997d41..f071505e3430 100644
+--- a/fs/io_uring.c
++++ b/fs/io_uring.c
+@@ -2038,6 +2038,10 @@ static bool io_file_supports_async(struct file *file, int rw)
+ if (S_ISREG(mode) && file->f_op != &io_uring_fops)
+ return true;
+
++ /* any ->read/write should understand O_NONBLOCK */
++ if (file->f_flags & O_NONBLOCK)
++ return true;
++
+ if (!(file->f_mode & FMODE_NOWAIT))
+ return false;
+
+@@ -2080,8 +2084,7 @@ static int io_prep_rw(struct io_kiocb *req, const struct io_uring_sqe *sqe,
+ kiocb->ki_ioprio = get_current_ioprio();
+
+ /* don't allow async punt if RWF_NOWAIT was requested */
+- if ((kiocb->ki_flags & IOCB_NOWAIT) ||
+- (req->file->f_flags & O_NONBLOCK))
++ if (kiocb->ki_flags & IOCB_NOWAIT)
+ req->flags |= REQ_F_NOWAIT;
+
+ if (force_nonblock)
+@@ -2333,8 +2336,14 @@ static ssize_t __io_iov_buffer_select(struct io_kiocb *req, struct iovec *iov,
+ static ssize_t io_iov_buffer_select(struct io_kiocb *req, struct iovec *iov,
+ bool needs_lock)
+ {
+- if (req->flags & REQ_F_BUFFER_SELECTED)
++ if (req->flags & REQ_F_BUFFER_SELECTED) {
++ struct io_buffer *kbuf;
++
++ kbuf = (struct io_buffer *) (unsigned long) req->rw.addr;
++ iov[0].iov_base = u64_to_user_ptr(kbuf->addr);
++ iov[0].iov_len = kbuf->len;
+ return 0;
++ }
+ if (!req->rw.len)
+ return 0;
+ else if (req->rw.len > 1)
+@@ -2716,7 +2725,8 @@ copy_iov:
+ if (ret)
+ goto out_free;
+ /* any defer here is final, must blocking retry */
+- if (!file_can_poll(req->file))
++ if (!(req->flags & REQ_F_NOWAIT) &&
++ !file_can_poll(req->file))
+ req->flags |= REQ_F_MUST_PUNT;
+ return -EAGAIN;
+ }
+@@ -7087,8 +7097,8 @@ static int io_sqe_buffer_register(struct io_ring_ctx *ctx, void __user *arg,
+
+ ret = 0;
+ if (!pages || nr_pages > got_pages) {
+- kfree(vmas);
+- kfree(pages);
++ kvfree(vmas);
++ kvfree(pages);
+ pages = kvmalloc_array(nr_pages, sizeof(struct page *),
+ GFP_KERNEL);
+ vmas = kvmalloc_array(nr_pages,
+@@ -7390,7 +7400,7 @@ static void io_uring_cancel_files(struct io_ring_ctx *ctx,
+ * all we had, then we're done with this request.
+ */
+ if (refcount_sub_and_test(2, &cancel_req->refs)) {
+- io_put_req(cancel_req);
++ io_free_req(cancel_req);
+ finish_wait(&ctx->inflight_wait, &wait);
+ continue;
+ }
+diff --git a/fs/nilfs2/segment.c b/fs/nilfs2/segment.c
+index 445eef41bfaf..91b58c897f92 100644
+--- a/fs/nilfs2/segment.c
++++ b/fs/nilfs2/segment.c
+@@ -2780,6 +2780,8 @@ int nilfs_attach_log_writer(struct super_block *sb, struct nilfs_root *root)
+ if (!nilfs->ns_writer)
+ return -ENOMEM;
+
++ inode_attach_wb(nilfs->ns_bdev->bd_inode, NULL);
++
+ err = nilfs_segctor_start_thread(nilfs->ns_writer);
+ if (err) {
+ kfree(nilfs->ns_writer);
+diff --git a/fs/notify/fanotify/fanotify.c b/fs/notify/fanotify/fanotify.c
+index c18459cea6f4..29a9de57c34c 100644
+--- a/fs/notify/fanotify/fanotify.c
++++ b/fs/notify/fanotify/fanotify.c
+@@ -232,6 +232,10 @@ static u32 fanotify_group_event_mask(struct fsnotify_group *group,
+ if (!fsnotify_iter_should_report_type(iter_info, type))
+ continue;
+ mark = iter_info->marks[type];
++
++ /* Apply ignore mask regardless of ISDIR and ON_CHILD flags */
++ marks_ignored_mask |= mark->ignored_mask;
++
+ /*
+ * If the event is on dir and this mark doesn't care about
+ * events on dir, don't send it!
+@@ -249,7 +253,6 @@ static u32 fanotify_group_event_mask(struct fsnotify_group *group,
+ continue;
+
+ marks_mask |= mark->mask;
+- marks_ignored_mask |= mark->ignored_mask;
+ }
+
+ test_mask = event_mask & marks_mask & ~marks_ignored_mask;
+diff --git a/fs/overlayfs/copy_up.c b/fs/overlayfs/copy_up.c
+index 9709cf22cab3..07e0d1961e96 100644
+--- a/fs/overlayfs/copy_up.c
++++ b/fs/overlayfs/copy_up.c
+@@ -47,7 +47,7 @@ int ovl_copy_xattr(struct dentry *old, struct dentry *new)
+ {
+ ssize_t list_size, size, value_size = 0;
+ char *buf, *name, *value = NULL;
+- int uninitialized_var(error);
++ int error = 0;
+ size_t slen;
+
+ if (!(old->d_inode->i_opflags & IOP_XATTR) ||
+diff --git a/fs/overlayfs/overlayfs.h b/fs/overlayfs/overlayfs.h
+index e6f3670146ed..64039f36c54d 100644
+--- a/fs/overlayfs/overlayfs.h
++++ b/fs/overlayfs/overlayfs.h
+@@ -355,6 +355,9 @@ int ovl_check_fb_len(struct ovl_fb *fb, int fb_len);
+
+ static inline int ovl_check_fh_len(struct ovl_fh *fh, int fh_len)
+ {
++ if (fh_len < sizeof(struct ovl_fh))
++ return -EINVAL;
++
+ return ovl_check_fb_len(&fh->fb, fh_len - OVL_FH_WIRE_OFFSET);
+ }
+
+diff --git a/fs/proc/inode.c b/fs/proc/inode.c
+index fb4cace9ea41..8f507f9f6d3a 100644
+--- a/fs/proc/inode.c
++++ b/fs/proc/inode.c
+@@ -599,7 +599,7 @@ const struct inode_operations proc_link_inode_operations = {
+
+ struct inode *proc_get_inode(struct super_block *sb, struct proc_dir_entry *de)
+ {
+- struct inode *inode = new_inode_pseudo(sb);
++ struct inode *inode = new_inode(sb);
+
+ if (inode) {
+ inode->i_ino = de->low_ino;
+diff --git a/fs/proc/self.c b/fs/proc/self.c
+index 57c0a1047250..32af065397f8 100644
+--- a/fs/proc/self.c
++++ b/fs/proc/self.c
+@@ -43,7 +43,7 @@ int proc_setup_self(struct super_block *s)
+ inode_lock(root_inode);
+ self = d_alloc_name(s->s_root, "self");
+ if (self) {
+- struct inode *inode = new_inode_pseudo(s);
++ struct inode *inode = new_inode(s);
+ if (inode) {
+ inode->i_ino = self_inum;
+ inode->i_mtime = inode->i_atime = inode->i_ctime = current_time(inode);
+diff --git a/fs/proc/thread_self.c b/fs/proc/thread_self.c
+index f61ae53533f5..fac9e50b33a6 100644
+--- a/fs/proc/thread_self.c
++++ b/fs/proc/thread_self.c
+@@ -43,7 +43,7 @@ int proc_setup_thread_self(struct super_block *s)
+ inode_lock(root_inode);
+ thread_self = d_alloc_name(s->s_root, "thread-self");
+ if (thread_self) {
+- struct inode *inode = new_inode_pseudo(s);
++ struct inode *inode = new_inode(s);
+ if (inode) {
+ inode->i_ino = thread_self_inum;
+ inode->i_mtime = inode->i_atime = inode->i_ctime = current_time(inode);
+diff --git a/include/linux/elfnote.h b/include/linux/elfnote.h
+index 594d4e78654f..69b136e4dd2b 100644
+--- a/include/linux/elfnote.h
++++ b/include/linux/elfnote.h
+@@ -54,7 +54,7 @@
+ .popsection ;
+
+ #define ELFNOTE(name, type, desc) \
+- ELFNOTE_START(name, type, "") \
++ ELFNOTE_START(name, type, "a") \
+ desc ; \
+ ELFNOTE_END
+
+diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
+index 131cc1527d68..92efa39ea3d7 100644
+--- a/include/linux/kvm_host.h
++++ b/include/linux/kvm_host.h
+@@ -1406,8 +1406,8 @@ static inline long kvm_arch_vcpu_async_ioctl(struct file *filp,
+ }
+ #endif /* CONFIG_HAVE_KVM_VCPU_ASYNC_IOCTL */
+
+-int kvm_arch_mmu_notifier_invalidate_range(struct kvm *kvm,
+- unsigned long start, unsigned long end, bool blockable);
++void kvm_arch_mmu_notifier_invalidate_range(struct kvm *kvm,
++ unsigned long start, unsigned long end);
+
+ #ifdef CONFIG_HAVE_KVM_VCPU_RUN_PID_CHANGE
+ int kvm_arch_vcpu_run_pid_change(struct kvm_vcpu *vcpu);
+diff --git a/include/linux/mm.h b/include/linux/mm.h
+index f3fe7371855c..465e8ad671f8 100644
+--- a/include/linux/mm.h
++++ b/include/linux/mm.h
+@@ -781,6 +781,7 @@ static inline void *kvcalloc(size_t n, size_t size, gfp_t flags)
+ }
+
+ extern void kvfree(const void *addr);
++extern void kvfree_sensitive(const void *addr, size_t len);
+
+ /*
+ * Mapcount of compound page as a whole, does not include mapped sub-pages.
+diff --git a/include/linux/padata.h b/include/linux/padata.h
+index a0d8b41850b2..693cae9bfe66 100644
+--- a/include/linux/padata.h
++++ b/include/linux/padata.h
+@@ -139,7 +139,8 @@ struct padata_shell {
+ /**
+ * struct padata_instance - The overall control structure.
+ *
+- * @node: Used by CPU hotplug.
++ * @cpu_online_node: Linkage for CPU online callback.
++ * @cpu_dead_node: Linkage for CPU offline callback.
+ * @parallel_wq: The workqueue used for parallel work.
+ * @serial_wq: The workqueue used for serial work.
+ * @pslist: List of padata_shell objects attached to this instance.
+@@ -150,7 +151,8 @@ struct padata_shell {
+ * @flags: padata flags.
+ */
+ struct padata_instance {
+- struct hlist_node node;
++ struct hlist_node cpu_online_node;
++ struct hlist_node cpu_dead_node;
+ struct workqueue_struct *parallel_wq;
+ struct workqueue_struct *serial_wq;
+ struct list_head pslist;
+diff --git a/include/linux/ptdump.h b/include/linux/ptdump.h
+index a67065c403c3..ac01502763bf 100644
+--- a/include/linux/ptdump.h
++++ b/include/linux/ptdump.h
+@@ -14,6 +14,7 @@ struct ptdump_state {
+ /* level is 0:PGD to 4:PTE, or -1 if unknown */
+ void (*note_page)(struct ptdump_state *st, unsigned long addr,
+ int level, unsigned long val);
++ void (*effective_prot)(struct ptdump_state *st, int level, u64 val);
+ const struct ptdump_range *range;
+ };
+
+diff --git a/include/linux/set_memory.h b/include/linux/set_memory.h
+index 86281ac7c305..860e0f843c12 100644
+--- a/include/linux/set_memory.h
++++ b/include/linux/set_memory.h
+@@ -26,7 +26,7 @@ static inline int set_direct_map_default_noflush(struct page *page)
+ #endif
+
+ #ifndef set_mce_nospec
+-static inline int set_mce_nospec(unsigned long pfn)
++static inline int set_mce_nospec(unsigned long pfn, bool unmap)
+ {
+ return 0;
+ }
+diff --git a/include/media/videobuf2-dma-contig.h b/include/media/videobuf2-dma-contig.h
+index 5604818d137e..5be313cbf7d7 100644
+--- a/include/media/videobuf2-dma-contig.h
++++ b/include/media/videobuf2-dma-contig.h
+@@ -25,7 +25,7 @@ vb2_dma_contig_plane_dma_addr(struct vb2_buffer *vb, unsigned int plane_no)
+ }
+
+ int vb2_dma_contig_set_max_seg_size(struct device *dev, unsigned int size);
+-void vb2_dma_contig_clear_max_seg_size(struct device *dev);
++static inline void vb2_dma_contig_clear_max_seg_size(struct device *dev) { }
+
+ extern const struct vb2_mem_ops vb2_dma_contig_memops;
+
+diff --git a/include/net/inet_hashtables.h b/include/net/inet_hashtables.h
+index ad64ba6a057f..92560974ea67 100644
+--- a/include/net/inet_hashtables.h
++++ b/include/net/inet_hashtables.h
+@@ -185,6 +185,12 @@ static inline spinlock_t *inet_ehash_lockp(
+
+ int inet_ehash_locks_alloc(struct inet_hashinfo *hashinfo);
+
++static inline void inet_hashinfo2_free_mod(struct inet_hashinfo *h)
++{
++ kfree(h->lhash2);
++ h->lhash2 = NULL;
++}
++
+ static inline void inet_ehash_locks_free(struct inet_hashinfo *hashinfo)
+ {
+ kvfree(hashinfo->ehash_locks);
+diff --git a/kernel/events/core.c b/kernel/events/core.c
+index 633b4ae72ed5..1dd91f960839 100644
+--- a/kernel/events/core.c
++++ b/kernel/events/core.c
+@@ -95,11 +95,11 @@ static void remote_function(void *data)
+ * @info: the function call argument
+ *
+ * Calls the function @func when the task is currently running. This might
+- * be on the current CPU, which just calls the function directly
++ * be on the current CPU, which just calls the function directly. This will
++ * retry due to any failures in smp_call_function_single(), such as if the
++ * task_cpu() goes offline concurrently.
+ *
+- * returns: @func return value, or
+- * -ESRCH - when the process isn't running
+- * -EAGAIN - when the process moved away
++ * returns @func return value or -ESRCH when the process isn't running
+ */
+ static int
+ task_function_call(struct task_struct *p, remote_function_f func, void *info)
+@@ -112,11 +112,16 @@ task_function_call(struct task_struct *p, remote_function_f func, void *info)
+ };
+ int ret;
+
+- do {
+- ret = smp_call_function_single(task_cpu(p), remote_function, &data, 1);
+- if (!ret)
+- ret = data.ret;
+- } while (ret == -EAGAIN);
++ for (;;) {
++ ret = smp_call_function_single(task_cpu(p), remote_function,
++ &data, 1);
++ ret = !ret ? data.ret : -EAGAIN;
++
++ if (ret != -EAGAIN)
++ break;
++
++ cond_resched();
++ }
+
+ return ret;
+ }
+diff --git a/kernel/padata.c b/kernel/padata.c
+index a6afa12fb75e..aae789896616 100644
+--- a/kernel/padata.c
++++ b/kernel/padata.c
+@@ -703,7 +703,7 @@ static int padata_cpu_online(unsigned int cpu, struct hlist_node *node)
+ struct padata_instance *pinst;
+ int ret;
+
+- pinst = hlist_entry_safe(node, struct padata_instance, node);
++ pinst = hlist_entry_safe(node, struct padata_instance, cpu_online_node);
+ if (!pinst_has_cpu(pinst, cpu))
+ return 0;
+
+@@ -718,7 +718,7 @@ static int padata_cpu_dead(unsigned int cpu, struct hlist_node *node)
+ struct padata_instance *pinst;
+ int ret;
+
+- pinst = hlist_entry_safe(node, struct padata_instance, node);
++ pinst = hlist_entry_safe(node, struct padata_instance, cpu_dead_node);
+ if (!pinst_has_cpu(pinst, cpu))
+ return 0;
+
+@@ -734,8 +734,9 @@ static enum cpuhp_state hp_online;
+ static void __padata_free(struct padata_instance *pinst)
+ {
+ #ifdef CONFIG_HOTPLUG_CPU
+- cpuhp_state_remove_instance_nocalls(CPUHP_PADATA_DEAD, &pinst->node);
+- cpuhp_state_remove_instance_nocalls(hp_online, &pinst->node);
++ cpuhp_state_remove_instance_nocalls(CPUHP_PADATA_DEAD,
++ &pinst->cpu_dead_node);
++ cpuhp_state_remove_instance_nocalls(hp_online, &pinst->cpu_online_node);
+ #endif
+
+ WARN_ON(!list_empty(&pinst->pslist));
+@@ -939,9 +940,10 @@ static struct padata_instance *padata_alloc(const char *name,
+ mutex_init(&pinst->lock);
+
+ #ifdef CONFIG_HOTPLUG_CPU
+- cpuhp_state_add_instance_nocalls_cpuslocked(hp_online, &pinst->node);
++ cpuhp_state_add_instance_nocalls_cpuslocked(hp_online,
++ &pinst->cpu_online_node);
+ cpuhp_state_add_instance_nocalls_cpuslocked(CPUHP_PADATA_DEAD,
+- &pinst->node);
++ &pinst->cpu_dead_node);
+ #endif
+
+ put_online_cpus();
+diff --git a/lib/bitmap.c b/lib/bitmap.c
+index 89260aa342d6..972eb01f4d0b 100644
+--- a/lib/bitmap.c
++++ b/lib/bitmap.c
+@@ -740,8 +740,9 @@ int bitmap_parse(const char *start, unsigned int buflen,
+ int chunks = BITS_TO_U32(nmaskbits);
+ u32 *bitmap = (u32 *)maskp;
+ int unset_bit;
++ int chunk;
+
+- while (1) {
++ for (chunk = 0; ; chunk++) {
+ end = bitmap_find_region_reverse(start, end);
+ if (start > end)
+ break;
+@@ -749,7 +750,11 @@ int bitmap_parse(const char *start, unsigned int buflen,
+ if (!chunks--)
+ return -EOVERFLOW;
+
+- end = bitmap_get_x32_reverse(start, end, bitmap++);
++#if defined(CONFIG_64BIT) && defined(__BIG_ENDIAN)
++ end = bitmap_get_x32_reverse(start, end, &bitmap[chunk ^ 1]);
++#else
++ end = bitmap_get_x32_reverse(start, end, &bitmap[chunk]);
++#endif
+ if (IS_ERR(end))
+ return PTR_ERR(end);
+ }
+diff --git a/lib/lzo/lzo1x_compress.c b/lib/lzo/lzo1x_compress.c
+index 717c940112f9..8ad5ba2b86e2 100644
+--- a/lib/lzo/lzo1x_compress.c
++++ b/lib/lzo/lzo1x_compress.c
+@@ -268,6 +268,19 @@ m_len_done:
+ *op++ = (M4_MARKER | ((m_off >> 11) & 8)
+ | (m_len - 2));
+ else {
++ if (unlikely(((m_off & 0x403f) == 0x403f)
++ && (m_len >= 261)
++ && (m_len <= 264))
++ && likely(bitstream_version)) {
++ // Under lzo-rle, block copies
++ // for 261 <= length <= 264 and
++ // (distance & 0x80f3) == 0x80f3
++ // can result in ambiguous
++ // output. Adjust length
++ // to 260 to prevent ambiguity.
++ ip -= m_len - 260;
++ m_len = 260;
++ }
+ m_len -= M4_MAX_LEN;
+ *op++ = (M4_MARKER | ((m_off >> 11) & 8));
+ while (unlikely(m_len > 255)) {
+diff --git a/mm/gup.c b/mm/gup.c
+index 87a6a59fe667..43cce23aea89 100644
+--- a/mm/gup.c
++++ b/mm/gup.c
+@@ -382,13 +382,22 @@ static int follow_pfn_pte(struct vm_area_struct *vma, unsigned long address,
+ }
+
+ /*
+- * FOLL_FORCE can write to even unwritable pte's, but only
+- * after we've gone through a COW cycle and they are dirty.
++ * FOLL_FORCE or a forced COW break can write even to unwritable pte's,
++ * but only after we've gone through a COW cycle and they are dirty.
+ */
+ static inline bool can_follow_write_pte(pte_t pte, unsigned int flags)
+ {
+- return pte_write(pte) ||
+- ((flags & FOLL_FORCE) && (flags & FOLL_COW) && pte_dirty(pte));
++ return pte_write(pte) || ((flags & FOLL_COW) && pte_dirty(pte));
++}
++
++/*
++ * A (separate) COW fault might break the page the other way and
++ * get_user_pages() would return the page from what is now the wrong
++ * VM. So we need to force a COW break at GUP time even for reads.
++ */
++static inline bool should_force_cow_break(struct vm_area_struct *vma, unsigned int flags)
++{
++ return is_cow_mapping(vma->vm_flags) && (flags & (FOLL_GET | FOLL_PIN));
+ }
+
+ static struct page *follow_page_pte(struct vm_area_struct *vma,
+@@ -1066,9 +1075,11 @@ static long __get_user_pages(struct task_struct *tsk, struct mm_struct *mm,
+ goto out;
+ }
+ if (is_vm_hugetlb_page(vma)) {
++ if (should_force_cow_break(vma, foll_flags))
++ foll_flags |= FOLL_WRITE;
+ i = follow_hugetlb_page(mm, vma, pages, vmas,
+ &start, &nr_pages, i,
+- gup_flags, locked);
++ foll_flags, locked);
+ if (locked && *locked == 0) {
+ /*
+ * We've got a VM_FAULT_RETRY
+@@ -1082,6 +1093,10 @@ static long __get_user_pages(struct task_struct *tsk, struct mm_struct *mm,
+ continue;
+ }
+ }
++
++ if (should_force_cow_break(vma, foll_flags))
++ foll_flags |= FOLL_WRITE;
++
+ retry:
+ /*
+ * If we have a pending SIGKILL, don't keep faulting pages and
+@@ -2674,6 +2689,10 @@ static bool gup_fast_permitted(unsigned long start, unsigned long end)
+ *
+ * If the architecture does not support this function, simply return with no
+ * pages pinned.
++ *
++ * Careful, careful! COW breaking can go either way, so a non-write
++ * access can get ambiguous page results. If you call this function without
++ * 'write' set, you'd better be sure that you're ok with that ambiguity.
+ */
+ int __get_user_pages_fast(unsigned long start, int nr_pages, int write,
+ struct page **pages)
+@@ -2709,6 +2728,12 @@ int __get_user_pages_fast(unsigned long start, int nr_pages, int write,
+ *
+ * We do not adopt an rcu_read_lock(.) here as we also want to
+ * block IPIs that come from THPs splitting.
++ *
++ * NOTE! We allow read-only gup_fast() here, but you'd better be
++ * careful about possible COW pages. You'll get _a_ COW page, but
++ * not necessarily the one you intended to get depending on what
++ * COW event happens after this. COW may break the page copy in a
++ * random direction.
+ */
+
+ if (IS_ENABLED(CONFIG_HAVE_FAST_GUP) &&
+@@ -2766,10 +2791,17 @@ static int internal_get_user_pages_fast(unsigned long start, int nr_pages,
+ if (unlikely(!access_ok((void __user *)start, len)))
+ return -EFAULT;
+
++ /*
++ * The FAST_GUP case requires FOLL_WRITE even for pure reads,
++ * because get_user_pages() may need to cause an early COW in
++ * order to avoid confusing the normal COW routines. So only
++ * targets that are already writable are safe to do by just
++ * looking at the page tables.
++ */
+ if (IS_ENABLED(CONFIG_HAVE_FAST_GUP) &&
+ gup_fast_permitted(start, end)) {
+ local_irq_disable();
+- gup_pgd_range(addr, end, gup_flags, pages, &nr_pinned);
++ gup_pgd_range(addr, end, gup_flags | FOLL_WRITE, pages, &nr_pinned);
+ local_irq_enable();
+ ret = nr_pinned;
+ }
+diff --git a/mm/huge_memory.c b/mm/huge_memory.c
+index 6ecd1045113b..11fe0b4dbe67 100644
+--- a/mm/huge_memory.c
++++ b/mm/huge_memory.c
+@@ -1515,13 +1515,12 @@ out_unlock:
+ }
+
+ /*
+- * FOLL_FORCE can write to even unwritable pmd's, but only
+- * after we've gone through a COW cycle and they are dirty.
++ * FOLL_FORCE or a forced COW break can write even to unwritable pmd's,
++ * but only after we've gone through a COW cycle and they are dirty.
+ */
+ static inline bool can_follow_write_pmd(pmd_t pmd, unsigned int flags)
+ {
+- return pmd_write(pmd) ||
+- ((flags & FOLL_FORCE) && (flags & FOLL_COW) && pmd_dirty(pmd));
++ return pmd_write(pmd) || ((flags & FOLL_COW) && pmd_dirty(pmd));
+ }
+
+ struct page *follow_trans_huge_pmd(struct vm_area_struct *vma,
+diff --git a/mm/ptdump.c b/mm/ptdump.c
+index 26208d0d03b7..f4ce916f5602 100644
+--- a/mm/ptdump.c
++++ b/mm/ptdump.c
+@@ -36,6 +36,9 @@ static int ptdump_pgd_entry(pgd_t *pgd, unsigned long addr,
+ return note_kasan_page_table(walk, addr);
+ #endif
+
++ if (st->effective_prot)
++ st->effective_prot(st, 0, pgd_val(val));
++
+ if (pgd_leaf(val))
+ st->note_page(st, addr, 0, pgd_val(val));
+
+@@ -53,6 +56,9 @@ static int ptdump_p4d_entry(p4d_t *p4d, unsigned long addr,
+ return note_kasan_page_table(walk, addr);
+ #endif
+
++ if (st->effective_prot)
++ st->effective_prot(st, 1, p4d_val(val));
++
+ if (p4d_leaf(val))
+ st->note_page(st, addr, 1, p4d_val(val));
+
+@@ -70,6 +76,9 @@ static int ptdump_pud_entry(pud_t *pud, unsigned long addr,
+ return note_kasan_page_table(walk, addr);
+ #endif
+
++ if (st->effective_prot)
++ st->effective_prot(st, 2, pud_val(val));
++
+ if (pud_leaf(val))
+ st->note_page(st, addr, 2, pud_val(val));
+
+@@ -87,6 +96,8 @@ static int ptdump_pmd_entry(pmd_t *pmd, unsigned long addr,
+ return note_kasan_page_table(walk, addr);
+ #endif
+
++ if (st->effective_prot)
++ st->effective_prot(st, 3, pmd_val(val));
+ if (pmd_leaf(val))
+ st->note_page(st, addr, 3, pmd_val(val));
+
+@@ -97,8 +108,12 @@ static int ptdump_pte_entry(pte_t *pte, unsigned long addr,
+ unsigned long next, struct mm_walk *walk)
+ {
+ struct ptdump_state *st = walk->private;
++ pte_t val = READ_ONCE(*pte);
++
++ if (st->effective_prot)
++ st->effective_prot(st, 4, pte_val(val));
+
+- st->note_page(st, addr, 4, pte_val(READ_ONCE(*pte)));
++ st->note_page(st, addr, 4, pte_val(val));
+
+ return 0;
+ }
+diff --git a/mm/slab_common.c b/mm/slab_common.c
+index 23c7500eea7d..9e72ba224175 100644
+--- a/mm/slab_common.c
++++ b/mm/slab_common.c
+@@ -1303,7 +1303,8 @@ void __init create_kmalloc_caches(slab_flags_t flags)
+ kmalloc_caches[KMALLOC_DMA][i] = create_kmalloc_cache(
+ kmalloc_info[i].name[KMALLOC_DMA],
+ kmalloc_info[i].size,
+- SLAB_CACHE_DMA | flags, 0, 0);
++ SLAB_CACHE_DMA | flags, 0,
++ kmalloc_info[i].size);
+ }
+ }
+ #endif
+diff --git a/mm/slub.c b/mm/slub.c
+index b762450fc9f0..63bd39c47643 100644
+--- a/mm/slub.c
++++ b/mm/slub.c
+@@ -5809,8 +5809,10 @@ static int sysfs_slab_add(struct kmem_cache *s)
+
+ s->kobj.kset = kset;
+ err = kobject_init_and_add(&s->kobj, &slab_ktype, NULL, "%s", name);
+- if (err)
++ if (err) {
++ kobject_put(&s->kobj);
+ goto out;
++ }
+
+ err = sysfs_create_group(&s->kobj, &slab_attr_group);
+ if (err)
+diff --git a/mm/util.c b/mm/util.c
+index 988d11e6c17c..dc1c877d5481 100644
+--- a/mm/util.c
++++ b/mm/util.c
+@@ -604,6 +604,24 @@ void kvfree(const void *addr)
+ }
+ EXPORT_SYMBOL(kvfree);
+
++/**
++ * kvfree_sensitive - Free a data object containing sensitive information.
++ * @addr: address of the data object to be freed.
++ * @len: length of the data object.
++ *
++ * Use the special memzero_explicit() function to clear the content of a
++ * kvmalloc'ed object containing sensitive data to make sure that the
++ * compiler won't optimize out the data clearing.
++ */
++void kvfree_sensitive(const void *addr, size_t len)
++{
++ if (likely(!ZERO_OR_NULL_PTR(addr))) {
++ memzero_explicit((void *)addr, len);
++ kvfree(addr);
++ }
++}
++EXPORT_SYMBOL(kvfree_sensitive);
++
+ static inline void *__page_rmapping(struct page *page)
+ {
+ unsigned long mapping;
+diff --git a/net/bridge/br_arp_nd_proxy.c b/net/bridge/br_arp_nd_proxy.c
+index 37908561a64b..b18cdf03edb3 100644
+--- a/net/bridge/br_arp_nd_proxy.c
++++ b/net/bridge/br_arp_nd_proxy.c
+@@ -276,6 +276,10 @@ static void br_nd_send(struct net_bridge *br, struct net_bridge_port *p,
+ ns_olen = request->len - (skb_network_offset(request) +
+ sizeof(struct ipv6hdr)) - sizeof(*ns);
+ for (i = 0; i < ns_olen - 1; i += (ns->opt[i + 1] << 3)) {
++ if (!ns->opt[i + 1]) {
++ kfree_skb(reply);
++ return;
++ }
+ if (ns->opt[i] == ND_OPT_SOURCE_LL_ADDR) {
+ daddr = ns->opt + i + sizeof(struct nd_opt_hdr);
+ break;
+diff --git a/net/dccp/proto.c b/net/dccp/proto.c
+index 4af8a98fe784..c13b6609474b 100644
+--- a/net/dccp/proto.c
++++ b/net/dccp/proto.c
+@@ -1139,14 +1139,14 @@ static int __init dccp_init(void)
+ inet_hashinfo_init(&dccp_hashinfo);
+ rc = inet_hashinfo2_init_mod(&dccp_hashinfo);
+ if (rc)
+- goto out_fail;
++ goto out_free_percpu;
+ rc = -ENOBUFS;
+ dccp_hashinfo.bind_bucket_cachep =
+ kmem_cache_create("dccp_bind_bucket",
+ sizeof(struct inet_bind_bucket), 0,
+ SLAB_HWCACHE_ALIGN, NULL);
+ if (!dccp_hashinfo.bind_bucket_cachep)
+- goto out_free_percpu;
++ goto out_free_hashinfo2;
+
+ /*
+ * Size and allocate the main established and bind bucket
+@@ -1242,6 +1242,8 @@ out_free_dccp_ehash:
+ free_pages((unsigned long)dccp_hashinfo.ehash, ehash_order);
+ out_free_bind_bucket_cachep:
+ kmem_cache_destroy(dccp_hashinfo.bind_bucket_cachep);
++out_free_hashinfo2:
++ inet_hashinfo2_free_mod(&dccp_hashinfo);
+ out_free_percpu:
+ percpu_counter_destroy(&dccp_orphan_count);
+ out_fail:
+@@ -1265,6 +1267,7 @@ static void __exit dccp_fini(void)
+ kmem_cache_destroy(dccp_hashinfo.bind_bucket_cachep);
+ dccp_ackvec_exit();
+ dccp_sysctl_exit();
++ inet_hashinfo2_free_mod(&dccp_hashinfo);
+ percpu_counter_destroy(&dccp_orphan_count);
+ }
+
+diff --git a/net/ipv6/ipv6_sockglue.c b/net/ipv6/ipv6_sockglue.c
+index 18d05403d3b5..5af97b4f5df3 100644
+--- a/net/ipv6/ipv6_sockglue.c
++++ b/net/ipv6/ipv6_sockglue.c
+@@ -183,14 +183,15 @@ static int do_ipv6_setsockopt(struct sock *sk, int level, int optname,
+ retv = -EBUSY;
+ break;
+ }
+- }
+- if (sk->sk_protocol == IPPROTO_TCP &&
+- sk->sk_prot != &tcpv6_prot) {
+- retv = -EBUSY;
++ } else if (sk->sk_protocol == IPPROTO_TCP) {
++ if (sk->sk_prot != &tcpv6_prot) {
++ retv = -EBUSY;
++ break;
++ }
++ } else {
+ break;
+ }
+- if (sk->sk_protocol != IPPROTO_TCP)
+- break;
++
+ if (sk->sk_state != TCP_ESTABLISHED) {
+ retv = -ENOTCONN;
+ break;
+diff --git a/net/mptcp/options.c b/net/mptcp/options.c
+index 7793b6011fa7..1c20dd14b2aa 100644
+--- a/net/mptcp/options.c
++++ b/net/mptcp/options.c
+@@ -273,6 +273,8 @@ static void mptcp_parse_option(const struct sk_buff *skb,
+ if (opsize != TCPOLEN_MPTCP_RM_ADDR_BASE)
+ break;
+
++ ptr++;
++
+ mp_opt->rm_addr = 1;
+ mp_opt->rm_id = *ptr++;
+ pr_debug("RM_ADDR: id=%d", mp_opt->rm_id);
+diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
+index 34dd0e278a82..4bf4f629975d 100644
+--- a/net/mptcp/protocol.c
++++ b/net/mptcp/protocol.c
+@@ -357,6 +357,27 @@ void mptcp_subflow_eof(struct sock *sk)
+ sock_hold(sk);
+ }
+
++static void mptcp_check_for_eof(struct mptcp_sock *msk)
++{
++ struct mptcp_subflow_context *subflow;
++ struct sock *sk = (struct sock *)msk;
++ int receivers = 0;
++
++ mptcp_for_each_subflow(msk, subflow)
++ receivers += !subflow->rx_eof;
++
++ if (!receivers && !(sk->sk_shutdown & RCV_SHUTDOWN)) {
++ /* hopefully temporary hack: propagate shutdown status
++ * to msk, when all subflows agree on it
++ */
++ sk->sk_shutdown |= RCV_SHUTDOWN;
++
++ smp_mb__before_atomic(); /* SHUTDOWN must be visible first */
++ set_bit(MPTCP_DATA_READY, &msk->flags);
++ sk->sk_data_ready(sk);
++ }
++}
++
+ static void mptcp_stop_timer(struct sock *sk)
+ {
+ struct inet_connection_sock *icsk = inet_csk(sk);
+@@ -933,6 +954,9 @@ fallback:
+ break;
+ }
+
++ if (test_and_clear_bit(MPTCP_WORK_EOF, &msk->flags))
++ mptcp_check_for_eof(msk);
++
+ if (sk->sk_shutdown & RCV_SHUTDOWN)
+ break;
+
+@@ -1070,27 +1094,6 @@ static unsigned int mptcp_sync_mss(struct sock *sk, u32 pmtu)
+ return 0;
+ }
+
+-static void mptcp_check_for_eof(struct mptcp_sock *msk)
+-{
+- struct mptcp_subflow_context *subflow;
+- struct sock *sk = (struct sock *)msk;
+- int receivers = 0;
+-
+- mptcp_for_each_subflow(msk, subflow)
+- receivers += !subflow->rx_eof;
+-
+- if (!receivers && !(sk->sk_shutdown & RCV_SHUTDOWN)) {
+- /* hopefully temporary hack: propagate shutdown status
+- * to msk, when all subflows agree on it
+- */
+- sk->sk_shutdown |= RCV_SHUTDOWN;
+-
+- smp_mb__before_atomic(); /* SHUTDOWN must be visible first */
+- set_bit(MPTCP_DATA_READY, &msk->flags);
+- sk->sk_data_ready(sk);
+- }
+-}
+-
+ static void mptcp_worker(struct work_struct *work)
+ {
+ struct mptcp_sock *msk = container_of(work, struct mptcp_sock, work);
+diff --git a/net/mptcp/subflow.c b/net/mptcp/subflow.c
+index 8968b2c065e7..e6feb05a93dc 100644
+--- a/net/mptcp/subflow.c
++++ b/net/mptcp/subflow.c
+@@ -393,6 +393,7 @@ static void mptcp_sock_destruct(struct sock *sk)
+ sock_orphan(sk);
+ }
+
++ mptcp_token_destroy(mptcp_sk(sk)->token);
+ inet_sock_destruct(sk);
+ }
+
+diff --git a/net/netlink/genetlink.c b/net/netlink/genetlink.c
+index 9f357aa22b94..bcbba0bef1c2 100644
+--- a/net/netlink/genetlink.c
++++ b/net/netlink/genetlink.c
+@@ -513,15 +513,58 @@ static void genl_family_rcv_msg_attrs_free(const struct genl_family *family,
+ kfree(attrbuf);
+ }
+
+-static int genl_lock_start(struct netlink_callback *cb)
++struct genl_start_context {
++ const struct genl_family *family;
++ struct nlmsghdr *nlh;
++ struct netlink_ext_ack *extack;
++ const struct genl_ops *ops;
++ int hdrlen;
++};
++
++static int genl_start(struct netlink_callback *cb)
+ {
+- const struct genl_ops *ops = genl_dumpit_info(cb)->ops;
++ struct genl_start_context *ctx = cb->data;
++ const struct genl_ops *ops = ctx->ops;
++ struct genl_dumpit_info *info;
++ struct nlattr **attrs = NULL;
+ int rc = 0;
+
++ if (ops->validate & GENL_DONT_VALIDATE_DUMP)
++ goto no_attrs;
++
++ if (ctx->nlh->nlmsg_len < nlmsg_msg_size(ctx->hdrlen))
++ return -EINVAL;
++
++ attrs = genl_family_rcv_msg_attrs_parse(ctx->family, ctx->nlh, ctx->extack,
++ ops, ctx->hdrlen,
++ GENL_DONT_VALIDATE_DUMP_STRICT,
++ true);
++ if (IS_ERR(attrs))
++ return PTR_ERR(attrs);
++
++no_attrs:
++ info = genl_dumpit_info_alloc();
++ if (!info) {
++ kfree(attrs);
++ return -ENOMEM;
++ }
++ info->family = ctx->family;
++ info->ops = ops;
++ info->attrs = attrs;
++
++ cb->data = info;
+ if (ops->start) {
+- genl_lock();
++ if (!ctx->family->parallel_ops)
++ genl_lock();
+ rc = ops->start(cb);
+- genl_unlock();
++ if (!ctx->family->parallel_ops)
++ genl_unlock();
++ }
++
++ if (rc) {
++ kfree(attrs);
++ genl_dumpit_info_free(info);
++ cb->data = NULL;
+ }
+ return rc;
+ }
+@@ -548,7 +591,7 @@ static int genl_lock_done(struct netlink_callback *cb)
+ rc = ops->done(cb);
+ genl_unlock();
+ }
+- genl_family_rcv_msg_attrs_free(info->family, info->attrs, true);
++ genl_family_rcv_msg_attrs_free(info->family, info->attrs, false);
+ genl_dumpit_info_free(info);
+ return rc;
+ }
+@@ -573,43 +616,23 @@ static int genl_family_rcv_msg_dumpit(const struct genl_family *family,
+ const struct genl_ops *ops,
+ int hdrlen, struct net *net)
+ {
+- struct genl_dumpit_info *info;
+- struct nlattr **attrs = NULL;
++ struct genl_start_context ctx;
+ int err;
+
+ if (!ops->dumpit)
+ return -EOPNOTSUPP;
+
+- if (ops->validate & GENL_DONT_VALIDATE_DUMP)
+- goto no_attrs;
+-
+- if (nlh->nlmsg_len < nlmsg_msg_size(hdrlen))
+- return -EINVAL;
+-
+- attrs = genl_family_rcv_msg_attrs_parse(family, nlh, extack,
+- ops, hdrlen,
+- GENL_DONT_VALIDATE_DUMP_STRICT,
+- true);
+- if (IS_ERR(attrs))
+- return PTR_ERR(attrs);
+-
+-no_attrs:
+- /* Allocate dumpit info. It is going to be freed by done() callback. */
+- info = genl_dumpit_info_alloc();
+- if (!info) {
+- genl_family_rcv_msg_attrs_free(family, attrs, true);
+- return -ENOMEM;
+- }
+-
+- info->family = family;
+- info->ops = ops;
+- info->attrs = attrs;
++ ctx.family = family;
++ ctx.nlh = nlh;
++ ctx.extack = extack;
++ ctx.ops = ops;
++ ctx.hdrlen = hdrlen;
+
+ if (!family->parallel_ops) {
+ struct netlink_dump_control c = {
+ .module = family->module,
+- .data = info,
+- .start = genl_lock_start,
++ .data = &ctx,
++ .start = genl_start,
+ .dump = genl_lock_dumpit,
+ .done = genl_lock_done,
+ };
+@@ -617,12 +640,11 @@ no_attrs:
+ genl_unlock();
+ err = __netlink_dump_start(net->genl_sock, skb, nlh, &c);
+ genl_lock();
+-
+ } else {
+ struct netlink_dump_control c = {
+ .module = family->module,
+- .data = info,
+- .start = ops->start,
++ .data = &ctx,
++ .start = genl_start,
+ .dump = ops->dumpit,
+ .done = genl_parallel_done,
+ };
+diff --git a/net/sched/sch_generic.c b/net/sched/sch_generic.c
+index 2efd5b61acef..9763da6daa9f 100644
+--- a/net/sched/sch_generic.c
++++ b/net/sched/sch_generic.c
+@@ -464,6 +464,7 @@ void __netdev_watchdog_up(struct net_device *dev)
+ dev_hold(dev);
+ }
+ }
++EXPORT_SYMBOL_GPL(__netdev_watchdog_up);
+
+ static void dev_watchdog_up(struct net_device *dev)
+ {
+diff --git a/net/tipc/msg.c b/net/tipc/msg.c
+index 4d0e0bdd997b..3ad411884e6c 100644
+--- a/net/tipc/msg.c
++++ b/net/tipc/msg.c
+@@ -221,7 +221,7 @@ int tipc_msg_append(struct tipc_msg *_hdr, struct msghdr *m, int dlen,
+ accounted = skb ? msg_blocks(buf_msg(skb)) : 0;
+ total = accounted;
+
+- while (rem) {
++ do {
+ if (!skb || skb->len >= mss) {
+ prev = skb;
+ skb = tipc_buf_acquire(mss, GFP_KERNEL);
+@@ -249,7 +249,7 @@ int tipc_msg_append(struct tipc_msg *_hdr, struct msghdr *m, int dlen,
+ skb_put(skb, cpy);
+ rem -= cpy;
+ total += msg_blocks(hdr) - curr;
+- }
++ } while (rem);
+ return total - accounted;
+ }
+
+diff --git a/security/keys/internal.h b/security/keys/internal.h
+index 6d0ca48ae9a5..153d35c20d3d 100644
+--- a/security/keys/internal.h
++++ b/security/keys/internal.h
+@@ -350,15 +350,4 @@ static inline void key_check(const struct key *key)
+ #define key_check(key) do {} while(0)
+
+ #endif
+-
+-/*
+- * Helper function to clear and free a kvmalloc'ed memory object.
+- */
+-static inline void __kvzfree(const void *addr, size_t len)
+-{
+- if (addr) {
+- memset((void *)addr, 0, len);
+- kvfree(addr);
+- }
+-}
+ #endif /* _INTERNAL_H */
+diff --git a/security/keys/keyctl.c b/security/keys/keyctl.c
+index 5e01192e222a..edde63a63007 100644
+--- a/security/keys/keyctl.c
++++ b/security/keys/keyctl.c
+@@ -142,10 +142,7 @@ SYSCALL_DEFINE5(add_key, const char __user *, _type,
+
+ key_ref_put(keyring_ref);
+ error3:
+- if (payload) {
+- memzero_explicit(payload, plen);
+- kvfree(payload);
+- }
++ kvfree_sensitive(payload, plen);
+ error2:
+ kfree(description);
+ error:
+@@ -360,7 +357,7 @@ long keyctl_update_key(key_serial_t id,
+
+ key_ref_put(key_ref);
+ error2:
+- __kvzfree(payload, plen);
++ kvfree_sensitive(payload, plen);
+ error:
+ return ret;
+ }
+@@ -914,7 +911,7 @@ can_read_key:
+ */
+ if (ret > key_data_len) {
+ if (unlikely(key_data))
+- __kvzfree(key_data, key_data_len);
++ kvfree_sensitive(key_data, key_data_len);
+ key_data_len = ret;
+ continue; /* Allocate buffer */
+ }
+@@ -923,7 +920,7 @@ can_read_key:
+ ret = -EFAULT;
+ break;
+ }
+- __kvzfree(key_data, key_data_len);
++ kvfree_sensitive(key_data, key_data_len);
+
+ key_put_out:
+ key_put(key);
+@@ -1225,10 +1222,7 @@ long keyctl_instantiate_key_common(key_serial_t id,
+ keyctl_change_reqkey_auth(NULL);
+
+ error2:
+- if (payload) {
+- memzero_explicit(payload, plen);
+- kvfree(payload);
+- }
++ kvfree_sensitive(payload, plen);
+ error:
+ return ret;
+ }
+diff --git a/security/smack/smack.h b/security/smack/smack.h
+index 62529f382942..335d2411abe4 100644
+--- a/security/smack/smack.h
++++ b/security/smack/smack.h
+@@ -148,7 +148,6 @@ struct smk_net4addr {
+ struct smack_known *smk_label; /* label */
+ };
+
+-#if IS_ENABLED(CONFIG_IPV6)
+ /*
+ * An entry in the table identifying IPv6 hosts.
+ */
+@@ -159,9 +158,7 @@ struct smk_net6addr {
+ int smk_masks; /* mask size */
+ struct smack_known *smk_label; /* label */
+ };
+-#endif /* CONFIG_IPV6 */
+
+-#ifdef SMACK_IPV6_PORT_LABELING
+ /*
+ * An entry in the table identifying ports.
+ */
+@@ -174,7 +171,6 @@ struct smk_port_label {
+ short smk_sock_type; /* Socket type */
+ short smk_can_reuse;
+ };
+-#endif /* SMACK_IPV6_PORT_LABELING */
+
+ struct smack_known_list_elem {
+ struct list_head list;
+@@ -335,9 +331,7 @@ extern struct smack_known smack_known_web;
+ extern struct mutex smack_known_lock;
+ extern struct list_head smack_known_list;
+ extern struct list_head smk_net4addr_list;
+-#if IS_ENABLED(CONFIG_IPV6)
+ extern struct list_head smk_net6addr_list;
+-#endif /* CONFIG_IPV6 */
+
+ extern struct mutex smack_onlycap_lock;
+ extern struct list_head smack_onlycap_list;
+diff --git a/security/smack/smack_lsm.c b/security/smack/smack_lsm.c
+index 8c61d175e195..14bf2f4aea3b 100644
+--- a/security/smack/smack_lsm.c
++++ b/security/smack/smack_lsm.c
+@@ -50,10 +50,8 @@
+ #define SMK_RECEIVING 1
+ #define SMK_SENDING 2
+
+-#ifdef SMACK_IPV6_PORT_LABELING
+-DEFINE_MUTEX(smack_ipv6_lock);
++static DEFINE_MUTEX(smack_ipv6_lock);
+ static LIST_HEAD(smk_ipv6_port_list);
+-#endif
+ static struct kmem_cache *smack_inode_cache;
+ struct kmem_cache *smack_rule_cache;
+ int smack_enabled;
+@@ -2320,7 +2318,6 @@ static struct smack_known *smack_ipv4host_label(struct sockaddr_in *sip)
+ return NULL;
+ }
+
+-#if IS_ENABLED(CONFIG_IPV6)
+ /*
+ * smk_ipv6_localhost - Check for local ipv6 host address
+ * @sip: the address
+@@ -2388,7 +2385,6 @@ static struct smack_known *smack_ipv6host_label(struct sockaddr_in6 *sip)
+
+ return NULL;
+ }
+-#endif /* CONFIG_IPV6 */
+
+ /**
+ * smack_netlabel - Set the secattr on a socket
+@@ -2477,7 +2473,6 @@ static int smack_netlabel_send(struct sock *sk, struct sockaddr_in *sap)
+ return smack_netlabel(sk, sk_lbl);
+ }
+
+-#if IS_ENABLED(CONFIG_IPV6)
+ /**
+ * smk_ipv6_check - check Smack access
+ * @subject: subject Smack label
+@@ -2510,7 +2505,6 @@ static int smk_ipv6_check(struct smack_known *subject,
+ rc = smk_bu_note("IPv6 check", subject, object, MAY_WRITE, rc);
+ return rc;
+ }
+-#endif /* CONFIG_IPV6 */
+
+ #ifdef SMACK_IPV6_PORT_LABELING
+ /**
+@@ -2599,6 +2593,7 @@ static void smk_ipv6_port_label(struct socket *sock, struct sockaddr *address)
+ mutex_unlock(&smack_ipv6_lock);
+ return;
+ }
++#endif
+
+ /**
+ * smk_ipv6_port_check - check Smack port access
+@@ -2661,7 +2656,6 @@ static int smk_ipv6_port_check(struct sock *sk, struct sockaddr_in6 *address,
+
+ return smk_ipv6_check(skp, object, address, act);
+ }
+-#endif /* SMACK_IPV6_PORT_LABELING */
+
+ /**
+ * smack_inode_setsecurity - set smack xattrs
+@@ -2836,24 +2830,21 @@ static int smack_socket_connect(struct socket *sock, struct sockaddr *sap,
+ return 0;
+ if (IS_ENABLED(CONFIG_IPV6) && sap->sa_family == AF_INET6) {
+ struct sockaddr_in6 *sip = (struct sockaddr_in6 *)sap;
+-#ifdef SMACK_IPV6_SECMARK_LABELING
+- struct smack_known *rsp;
+-#endif
++ struct smack_known *rsp = NULL;
+
+ if (addrlen < SIN6_LEN_RFC2133)
+ return 0;
+-#ifdef SMACK_IPV6_SECMARK_LABELING
+- rsp = smack_ipv6host_label(sip);
++ if (__is_defined(SMACK_IPV6_SECMARK_LABELING))
++ rsp = smack_ipv6host_label(sip);
+ if (rsp != NULL) {
+ struct socket_smack *ssp = sock->sk->sk_security;
+
+ rc = smk_ipv6_check(ssp->smk_out, rsp, sip,
+ SMK_CONNECTING);
+ }
+-#endif
+-#ifdef SMACK_IPV6_PORT_LABELING
+- rc = smk_ipv6_port_check(sock->sk, sip, SMK_CONNECTING);
+-#endif
++ if (__is_defined(SMACK_IPV6_PORT_LABELING))
++ rc = smk_ipv6_port_check(sock->sk, sip, SMK_CONNECTING);
++
+ return rc;
+ }
+ if (sap->sa_family != AF_INET || addrlen < sizeof(struct sockaddr_in))
+diff --git a/security/smack/smackfs.c b/security/smack/smackfs.c
+index e3e05c04dbd1..c21b656b3263 100644
+--- a/security/smack/smackfs.c
++++ b/security/smack/smackfs.c
+@@ -878,11 +878,21 @@ static ssize_t smk_set_cipso(struct file *file, const char __user *buf,
+ else
+ rule += strlen(skp->smk_known) + 1;
+
++ if (rule > data + count) {
++ rc = -EOVERFLOW;
++ goto out;
++ }
++
+ ret = sscanf(rule, "%d", &maplevel);
+ if (ret != 1 || maplevel > SMACK_CIPSO_MAXLEVEL)
+ goto out;
+
+ rule += SMK_DIGITLEN;
++ if (rule > data + count) {
++ rc = -EOVERFLOW;
++ goto out;
++ }
++
+ ret = sscanf(rule, "%d", &catlen);
+ if (ret != 1 || catlen > SMACK_CIPSO_MAXCATNUM)
+ goto out;
+diff --git a/sound/core/pcm_native.c b/sound/core/pcm_native.c
+index aef860256278..eeab8850ed76 100644
+--- a/sound/core/pcm_native.c
++++ b/sound/core/pcm_native.c
+@@ -138,6 +138,16 @@ void snd_pcm_stream_lock_irq(struct snd_pcm_substream *substream)
+ }
+ EXPORT_SYMBOL_GPL(snd_pcm_stream_lock_irq);
+
++static void snd_pcm_stream_lock_nested(struct snd_pcm_substream *substream)
++{
++ struct snd_pcm_group *group = &substream->self_group;
++
++ if (substream->pcm->nonatomic)
++ mutex_lock_nested(&group->mutex, SINGLE_DEPTH_NESTING);
++ else
++ spin_lock_nested(&group->lock, SINGLE_DEPTH_NESTING);
++}
++
+ /**
+ * snd_pcm_stream_unlock_irq - Unlock the PCM stream
+ * @substream: PCM substream
+@@ -2166,6 +2176,12 @@ static int snd_pcm_link(struct snd_pcm_substream *substream, int fd)
+ }
+ pcm_file = f.file->private_data;
+ substream1 = pcm_file->substream;
++
++ if (substream == substream1) {
++ res = -EINVAL;
++ goto _badf;
++ }
++
+ group = kzalloc(sizeof(*group), GFP_KERNEL);
+ if (!group) {
+ res = -ENOMEM;
+@@ -2194,7 +2210,7 @@ static int snd_pcm_link(struct snd_pcm_substream *substream, int fd)
+ snd_pcm_stream_unlock_irq(substream);
+
+ snd_pcm_group_lock_irq(target_group, nonatomic);
+- snd_pcm_stream_lock(substream1);
++ snd_pcm_stream_lock_nested(substream1);
+ snd_pcm_group_assign(substream1, target_group);
+ refcount_inc(&target_group->refs);
+ snd_pcm_stream_unlock(substream1);
+@@ -2210,7 +2226,7 @@ static int snd_pcm_link(struct snd_pcm_substream *substream, int fd)
+
+ static void relink_to_local(struct snd_pcm_substream *substream)
+ {
+- snd_pcm_stream_lock(substream);
++ snd_pcm_stream_lock_nested(substream);
+ snd_pcm_group_assign(substream, &substream->self_group);
+ snd_pcm_stream_unlock(substream);
+ }
+diff --git a/sound/firewire/fireface/ff-protocol-latter.c b/sound/firewire/fireface/ff-protocol-latter.c
+index 0e4c3a9ed5e4..76ae568489ef 100644
+--- a/sound/firewire/fireface/ff-protocol-latter.c
++++ b/sound/firewire/fireface/ff-protocol-latter.c
+@@ -107,18 +107,18 @@ static int latter_allocate_resources(struct snd_ff *ff, unsigned int rate)
+ int err;
+
+ // Set the number of data blocks transferred in a second.
+- if (rate % 32000 == 0)
+- code = 0x00;
++ if (rate % 48000 == 0)
++ code = 0x04;
+ else if (rate % 44100 == 0)
+ code = 0x02;
+- else if (rate % 48000 == 0)
+- code = 0x04;
++ else if (rate % 32000 == 0)
++ code = 0x00;
+ else
+ return -EINVAL;
+
+ if (rate >= 64000 && rate < 128000)
+ code |= 0x08;
+- else if (rate >= 128000 && rate < 192000)
++ else if (rate >= 128000)
+ code |= 0x10;
+
+ reg = cpu_to_le32(code);
+@@ -140,7 +140,7 @@ static int latter_allocate_resources(struct snd_ff *ff, unsigned int rate)
+ if (curr_rate == rate)
+ break;
+ }
+- if (count == 10)
++ if (count > 10)
+ return -ETIMEDOUT;
+
+ for (i = 0; i < ARRAY_SIZE(amdtp_rate_table); ++i) {
+diff --git a/sound/firewire/fireface/ff-stream.c b/sound/firewire/fireface/ff-stream.c
+index 63b79c4a5405..5452115c0ef9 100644
+--- a/sound/firewire/fireface/ff-stream.c
++++ b/sound/firewire/fireface/ff-stream.c
+@@ -184,7 +184,6 @@ int snd_ff_stream_start_duplex(struct snd_ff *ff, unsigned int rate)
+ */
+ if (!amdtp_stream_running(&ff->rx_stream)) {
+ int spd = fw_parent_device(ff->unit)->max_speed;
+- unsigned int ir_delay_cycle;
+
+ err = ff->spec->protocol->begin_session(ff, rate);
+ if (err < 0)
+@@ -200,14 +199,7 @@ int snd_ff_stream_start_duplex(struct snd_ff *ff, unsigned int rate)
+ if (err < 0)
+ goto error;
+
+- // The device postpones start of transmission mostly for several
+- // cycles after receiving packets firstly.
+- if (ff->spec->protocol == &snd_ff_protocol_ff800)
+- ir_delay_cycle = 800; // = 100 msec
+- else
+- ir_delay_cycle = 16; // = 2 msec
+-
+- err = amdtp_domain_start(&ff->domain, ir_delay_cycle);
++ err = amdtp_domain_start(&ff->domain, 0);
+ if (err < 0)
+ goto error;
+
+diff --git a/sound/isa/es1688/es1688.c b/sound/isa/es1688/es1688.c
+index ff3a05ad99c0..64610571a5e1 100644
+--- a/sound/isa/es1688/es1688.c
++++ b/sound/isa/es1688/es1688.c
+@@ -267,8 +267,10 @@ static int snd_es968_pnp_detect(struct pnp_card_link *pcard,
+ return error;
+ }
+ error = snd_es1688_probe(card, dev);
+- if (error < 0)
++ if (error < 0) {
++ snd_card_free(card);
+ return error;
++ }
+ pnp_set_card_drvdata(pcard, card);
+ snd_es968_pnp_is_probed = 1;
+ return 0;
+diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c
+index 0310193ea1bd..41a03c61a74b 100644
+--- a/sound/pci/hda/hda_intel.c
++++ b/sound/pci/hda/hda_intel.c
+@@ -2662,6 +2662,9 @@ static const struct pci_device_id azx_ids[] = {
+ { PCI_DEVICE(0x1002, 0xab20),
+ .driver_data = AZX_DRIVER_ATIHDMI_NS | AZX_DCAPS_PRESET_ATI_HDMI_NS |
+ AZX_DCAPS_PM_RUNTIME },
++ { PCI_DEVICE(0x1002, 0xab28),
++ .driver_data = AZX_DRIVER_ATIHDMI_NS | AZX_DCAPS_PRESET_ATI_HDMI_NS |
++ AZX_DCAPS_PM_RUNTIME },
+ { PCI_DEVICE(0x1002, 0xab38),
+ .driver_data = AZX_DRIVER_ATIHDMI_NS | AZX_DCAPS_PRESET_ATI_HDMI_NS |
+ AZX_DCAPS_PM_RUNTIME },
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index e62d58872b6e..2c4575909441 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -8124,6 +8124,12 @@ static const struct snd_hda_pin_quirk alc269_pin_fixup_tbl[] = {
+ ALC225_STANDARD_PINS,
+ {0x12, 0xb7a60130},
+ {0x17, 0x90170110}),
++ SND_HDA_PIN_QUIRK(0x10ec0623, 0x17aa, "Lenovo", ALC283_FIXUP_HEADSET_MIC,
++ {0x14, 0x01014010},
++ {0x17, 0x90170120},
++ {0x18, 0x02a11030},
++ {0x19, 0x02a1103f},
++ {0x21, 0x0221101f}),
+ {}
+ };
+
+diff --git a/sound/soc/codecs/max9867.c b/sound/soc/codecs/max9867.c
+index 8600c5439e1e..2e4aa23b5a60 100644
+--- a/sound/soc/codecs/max9867.c
++++ b/sound/soc/codecs/max9867.c
+@@ -46,13 +46,13 @@ static const SNDRV_CTL_TLVD_DECLARE_DB_RANGE(max9867_micboost_tlv,
+
+ static const struct snd_kcontrol_new max9867_snd_controls[] = {
+ SOC_DOUBLE_R_TLV("Master Playback Volume", MAX9867_LEFTVOL,
+- MAX9867_RIGHTVOL, 0, 41, 1, max9867_master_tlv),
++ MAX9867_RIGHTVOL, 0, 40, 1, max9867_master_tlv),
+ SOC_DOUBLE_R_TLV("Line Capture Volume", MAX9867_LEFTLINELVL,
+ MAX9867_RIGHTLINELVL, 0, 15, 1, max9867_line_tlv),
+ SOC_DOUBLE_R_TLV("Mic Capture Volume", MAX9867_LEFTMICGAIN,
+ MAX9867_RIGHTMICGAIN, 0, 20, 1, max9867_mic_tlv),
+ SOC_DOUBLE_R_TLV("Mic Boost Capture Volume", MAX9867_LEFTMICGAIN,
+- MAX9867_RIGHTMICGAIN, 5, 4, 0, max9867_micboost_tlv),
++ MAX9867_RIGHTMICGAIN, 5, 3, 0, max9867_micboost_tlv),
+ SOC_SINGLE("Digital Sidetone Volume", MAX9867_SIDETONE, 0, 31, 1),
+ SOC_SINGLE_TLV("Digital Playback Volume", MAX9867_DACLEVEL, 0, 15, 1,
+ max9867_dac_tlv),
+diff --git a/sound/soc/codecs/tlv320adcx140.c b/sound/soc/codecs/tlv320adcx140.c
+index 38897568ee96..0f713efde046 100644
+--- a/sound/soc/codecs/tlv320adcx140.c
++++ b/sound/soc/codecs/tlv320adcx140.c
+@@ -511,11 +511,11 @@ static const struct snd_soc_dapm_route adcx140_audio_map[] = {
+ static const struct snd_kcontrol_new adcx140_snd_controls[] = {
+ SOC_SINGLE_TLV("Analog CH1 Mic Gain Volume", ADCX140_CH1_CFG1, 2, 42, 0,
+ adc_tlv),
+- SOC_SINGLE_TLV("Analog CH2 Mic Gain Volume", ADCX140_CH1_CFG2, 2, 42, 0,
++ SOC_SINGLE_TLV("Analog CH2 Mic Gain Volume", ADCX140_CH2_CFG1, 2, 42, 0,
+ adc_tlv),
+- SOC_SINGLE_TLV("Analog CH3 Mic Gain Volume", ADCX140_CH1_CFG3, 2, 42, 0,
++ SOC_SINGLE_TLV("Analog CH3 Mic Gain Volume", ADCX140_CH3_CFG1, 2, 42, 0,
+ adc_tlv),
+- SOC_SINGLE_TLV("Analog CH4 Mic Gain Volume", ADCX140_CH1_CFG4, 2, 42, 0,
++ SOC_SINGLE_TLV("Analog CH4 Mic Gain Volume", ADCX140_CH4_CFG1, 2, 42, 0,
+ adc_tlv),
+
+ SOC_SINGLE_TLV("DRE Threshold", ADCX140_DRE_CFG0, 4, 9, 0,
+diff --git a/sound/usb/card.c b/sound/usb/card.c
+index fd6fd1726ea0..359f7a04be1c 100644
+--- a/sound/usb/card.c
++++ b/sound/usb/card.c
+@@ -843,9 +843,6 @@ static int usb_audio_suspend(struct usb_interface *intf, pm_message_t message)
+ if (chip == (void *)-1L)
+ return 0;
+
+- chip->autosuspended = !!PMSG_IS_AUTO(message);
+- if (!chip->autosuspended)
+- snd_power_change_state(chip->card, SNDRV_CTL_POWER_D3hot);
+ if (!chip->num_suspended_intf++) {
+ list_for_each_entry(as, &chip->pcm_list, list) {
+ snd_usb_pcm_suspend(as);
+@@ -858,6 +855,11 @@ static int usb_audio_suspend(struct usb_interface *intf, pm_message_t message)
+ snd_usb_mixer_suspend(mixer);
+ }
+
++ if (!PMSG_IS_AUTO(message) && !chip->system_suspend) {
++ snd_power_change_state(chip->card, SNDRV_CTL_POWER_D3hot);
++ chip->system_suspend = chip->num_suspended_intf;
++ }
++
+ return 0;
+ }
+
+@@ -871,10 +873,10 @@ static int __usb_audio_resume(struct usb_interface *intf, bool reset_resume)
+
+ if (chip == (void *)-1L)
+ return 0;
+- if (--chip->num_suspended_intf)
+- return 0;
+
+ atomic_inc(&chip->active); /* avoid autopm */
++ if (chip->num_suspended_intf > 1)
++ goto out;
+
+ list_for_each_entry(as, &chip->pcm_list, list) {
+ err = snd_usb_pcm_resume(as);
+@@ -896,9 +898,12 @@ static int __usb_audio_resume(struct usb_interface *intf, bool reset_resume)
+ snd_usbmidi_resume(p);
+ }
+
+- if (!chip->autosuspended)
++ out:
++ if (chip->num_suspended_intf == chip->system_suspend) {
+ snd_power_change_state(chip->card, SNDRV_CTL_POWER_D0);
+- chip->autosuspended = 0;
++ chip->system_suspend = 0;
++ }
++ chip->num_suspended_intf--;
+
+ err_out:
+ atomic_dec(&chip->active); /* allow autopm after this point */
+diff --git a/sound/usb/quirks-table.h b/sound/usb/quirks-table.h
+index eb89902a83be..0bf370d89556 100644
+--- a/sound/usb/quirks-table.h
++++ b/sound/usb/quirks-table.h
+@@ -25,6 +25,26 @@
+ .idProduct = prod, \
+ .bInterfaceClass = USB_CLASS_VENDOR_SPEC
+
++/* HP Thunderbolt Dock Audio Headset */
++{
++ USB_DEVICE(0x03f0, 0x0269),
++ .driver_info = (unsigned long) &(const struct snd_usb_audio_quirk) {
++ .vendor_name = "HP",
++ .product_name = "Thunderbolt Dock Audio Headset",
++ .profile_name = "HP-Thunderbolt-Dock-Audio-Headset",
++ .ifnum = QUIRK_NO_INTERFACE
++ }
++},
++/* HP Thunderbolt Dock Audio Module */
++{
++ USB_DEVICE(0x03f0, 0x0567),
++ .driver_info = (unsigned long) &(const struct snd_usb_audio_quirk) {
++ .vendor_name = "HP",
++ .product_name = "Thunderbolt Dock Audio Module",
++ .profile_name = "HP-Thunderbolt-Dock-Audio-Module",
++ .ifnum = QUIRK_NO_INTERFACE
++ }
++},
+ /* FTDI devices */
+ {
+ USB_DEVICE(0x0403, 0xb8d8),
+diff --git a/sound/usb/usbaudio.h b/sound/usb/usbaudio.h
+index 1c892c7f14d7..e0ebfb25fbd5 100644
+--- a/sound/usb/usbaudio.h
++++ b/sound/usb/usbaudio.h
+@@ -26,7 +26,7 @@ struct snd_usb_audio {
+ struct usb_interface *pm_intf;
+ u32 usb_id;
+ struct mutex mutex;
+- unsigned int autosuspended:1;
++ unsigned int system_suspend;
+ atomic_t active;
+ atomic_t shutdown;
+ atomic_t usage_count;
+diff --git a/tools/perf/util/probe-event.c b/tools/perf/util/probe-event.c
+index eea132f512b0..c6bcf5709564 100644
+--- a/tools/perf/util/probe-event.c
++++ b/tools/perf/util/probe-event.c
+@@ -1765,8 +1765,7 @@ int parse_probe_trace_command(const char *cmd, struct probe_trace_event *tev)
+ fmt1_str = strtok_r(argv0_str, ":", &fmt);
+ fmt2_str = strtok_r(NULL, "/", &fmt);
+ fmt3_str = strtok_r(NULL, " \t", &fmt);
+- if (fmt1_str == NULL || strlen(fmt1_str) != 1 || fmt2_str == NULL
+- || fmt3_str == NULL) {
++ if (fmt1_str == NULL || fmt2_str == NULL || fmt3_str == NULL) {
+ semantic_error("Failed to parse event name: %s\n", argv[0]);
+ ret = -EINVAL;
+ goto out;
+diff --git a/tools/testing/selftests/ftrace/test.d/ftrace/tracing-error-log.tc b/tools/testing/selftests/ftrace/test.d/ftrace/tracing-error-log.tc
+index 021c03fd885d..23465823532b 100644
+--- a/tools/testing/selftests/ftrace/test.d/ftrace/tracing-error-log.tc
++++ b/tools/testing/selftests/ftrace/test.d/ftrace/tracing-error-log.tc
+@@ -14,6 +14,8 @@ if [ ! -f set_event ]; then
+ exit_unsupported
+ fi
+
++[ -f error_log ] || exit_unsupported
++
+ ftrace_errlog_check 'event filter parse error' '((sig >= 10 && sig < 15) || dsig ^== 17) && comm != bash' 'events/signal/signal_generate/filter'
+
+ exit 0
+diff --git a/tools/testing/selftests/net/rxtimestamp.c b/tools/testing/selftests/net/rxtimestamp.c
+index 6dee9e636a95..422e7761254d 100644
+--- a/tools/testing/selftests/net/rxtimestamp.c
++++ b/tools/testing/selftests/net/rxtimestamp.c
+@@ -115,6 +115,7 @@ static struct option long_options[] = {
+ { "tcp", no_argument, 0, 't' },
+ { "udp", no_argument, 0, 'u' },
+ { "ip", no_argument, 0, 'i' },
++ { NULL, 0, NULL, 0 },
+ };
+
+ static int next_port = 19999;
+diff --git a/tools/testing/selftests/tc-testing/tc-tests/filters/tests.json b/tools/testing/selftests/tc-testing/tc-tests/filters/tests.json
+index 8877f7b2b809..12aa4bc1f6a0 100644
+--- a/tools/testing/selftests/tc-testing/tc-tests/filters/tests.json
++++ b/tools/testing/selftests/tc-testing/tc-tests/filters/tests.json
+@@ -32,7 +32,7 @@
+ "setup": [
+ "$TC qdisc add dev $DEV2 ingress"
+ ],
+- "cmdUnderTest": "$TC filter add dev $DEV2 protocol ip pref 1 parent ffff: handle 0xffffffff flower action ok",
++ "cmdUnderTest": "$TC filter add dev $DEV2 protocol ip pref 1 ingress handle 0xffffffff flower action ok",
+ "expExitCode": "0",
+ "verifyCmd": "$TC filter show dev $DEV2 ingress",
+ "matchPattern": "filter protocol ip pref 1 flower.*handle 0xffffffff",
+@@ -77,9 +77,9 @@
+ },
+ "setup": [
+ "$TC qdisc add dev $DEV2 ingress",
+- "$TC filter add dev $DEV2 protocol ip prio 1 parent ffff: flower dst_mac e4:11:22:11:4a:51 src_mac e4:11:22:11:4a:50 ip_proto tcp src_ip 1.1.1.1 dst_ip 2.2.2.2 action drop"
++ "$TC filter add dev $DEV2 protocol ip prio 1 ingress flower dst_mac e4:11:22:11:4a:51 src_mac e4:11:22:11:4a:50 ip_proto tcp src_ip 1.1.1.1 dst_ip 2.2.2.2 action drop"
+ ],
+- "cmdUnderTest": "$TC filter add dev $DEV2 protocol ip prio 1 parent ffff: flower dst_mac e4:11:22:11:4a:51 src_mac e4:11:22:11:4a:50 ip_proto tcp src_ip 1.1.1.1 dst_ip 2.2.2.2 action drop",
++ "cmdUnderTest": "$TC filter add dev $DEV2 protocol ip prio 1 ingress flower dst_mac e4:11:22:11:4a:51 src_mac e4:11:22:11:4a:50 ip_proto tcp src_ip 1.1.1.1 dst_ip 2.2.2.2 action drop",
+ "expExitCode": "2",
+ "verifyCmd": "$TC -s filter show dev $DEV2 ingress",
+ "matchPattern": "filter protocol ip pref 1 flower chain 0 handle",
+diff --git a/tools/testing/selftests/tc-testing/tdc_batch.py b/tools/testing/selftests/tc-testing/tdc_batch.py
+index 6a2bd2cf528e..995f66ce43eb 100755
+--- a/tools/testing/selftests/tc-testing/tdc_batch.py
++++ b/tools/testing/selftests/tc-testing/tdc_batch.py
+@@ -72,21 +72,21 @@ mac_prefix = args.mac_prefix
+
+ def format_add_filter(device, prio, handle, skip, src_mac, dst_mac,
+ share_action):
+- return ("filter add dev {} {} protocol ip parent ffff: handle {} "
++ return ("filter add dev {} {} protocol ip ingress handle {} "
+ " flower {} src_mac {} dst_mac {} action drop {}".format(
+ device, prio, handle, skip, src_mac, dst_mac, share_action))
+
+
+ def format_rep_filter(device, prio, handle, skip, src_mac, dst_mac,
+ share_action):
+- return ("filter replace dev {} {} protocol ip parent ffff: handle {} "
++ return ("filter replace dev {} {} protocol ip ingress handle {} "
+ " flower {} src_mac {} dst_mac {} action drop {}".format(
+ device, prio, handle, skip, src_mac, dst_mac, share_action))
+
+
+ def format_del_filter(device, prio, handle, skip, src_mac, dst_mac,
+ share_action):
+- return ("filter del dev {} {} protocol ip parent ffff: handle {} "
++ return ("filter del dev {} {} protocol ip ingress handle {} "
+ "flower".format(device, prio, handle))
+
+
+diff --git a/virt/kvm/arm/aarch32.c b/virt/kvm/arm/aarch32.c
+index 0a356aa91aa1..40a62a99fbf8 100644
+--- a/virt/kvm/arm/aarch32.c
++++ b/virt/kvm/arm/aarch32.c
+@@ -33,6 +33,26 @@ static const u8 return_offsets[8][2] = {
+ [7] = { 4, 4 }, /* FIQ, unused */
+ };
+
++static bool pre_fault_synchronize(struct kvm_vcpu *vcpu)
++{
++ preempt_disable();
++ if (vcpu->arch.sysregs_loaded_on_cpu) {
++ kvm_arch_vcpu_put(vcpu);
++ return true;
++ }
++
++ preempt_enable();
++ return false;
++}
++
++static void post_fault_synchronize(struct kvm_vcpu *vcpu, bool loaded)
++{
++ if (loaded) {
++ kvm_arch_vcpu_load(vcpu, smp_processor_id());
++ preempt_enable();
++ }
++}
++
+ /*
+ * When an exception is taken, most CPSR fields are left unchanged in the
+ * handler. However, some are explicitly overridden (e.g. M[4:0]).
+@@ -155,7 +175,10 @@ static void prepare_fault32(struct kvm_vcpu *vcpu, u32 mode, u32 vect_offset)
+
+ void kvm_inject_undef32(struct kvm_vcpu *vcpu)
+ {
++ bool loaded = pre_fault_synchronize(vcpu);
++
+ prepare_fault32(vcpu, PSR_AA32_MODE_UND, 4);
++ post_fault_synchronize(vcpu, loaded);
+ }
+
+ /*
+@@ -168,6 +191,9 @@ static void inject_abt32(struct kvm_vcpu *vcpu, bool is_pabt,
+ u32 vect_offset;
+ u32 *far, *fsr;
+ bool is_lpae;
++ bool loaded;
++
++ loaded = pre_fault_synchronize(vcpu);
+
+ if (is_pabt) {
+ vect_offset = 12;
+@@ -191,6 +217,8 @@ static void inject_abt32(struct kvm_vcpu *vcpu, bool is_pabt,
+ /* no need to shuffle FS[4] into DFSR[10] as its 0 */
+ *fsr = DFSR_FSC_EXTABT_nLPAE;
+ }
++
++ post_fault_synchronize(vcpu, loaded);
+ }
+
+ void kvm_inject_dabt32(struct kvm_vcpu *vcpu, unsigned long addr)
+diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
+index 48d0ec44ad77..3d7e8fdeebcd 100644
+--- a/virt/kvm/arm/arm.c
++++ b/virt/kvm/arm/arm.c
+@@ -332,6 +332,12 @@ void kvm_arch_vcpu_unblocking(struct kvm_vcpu *vcpu)
+ preempt_enable();
+ }
+
++#define __ptrauth_save_key(regs, key) \
++({ \
++ regs[key ## KEYLO_EL1] = read_sysreg_s(SYS_ ## key ## KEYLO_EL1); \
++ regs[key ## KEYHI_EL1] = read_sysreg_s(SYS_ ## key ## KEYHI_EL1); \
++})
++
+ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
+ {
+ int *last_ran;
+@@ -365,7 +371,17 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
+ else
+ vcpu_set_wfx_traps(vcpu);
+
+- vcpu_ptrauth_setup_lazy(vcpu);
++ if (vcpu_has_ptrauth(vcpu)) {
++ struct kvm_cpu_context *ctxt = vcpu->arch.host_cpu_context;
++
++ __ptrauth_save_key(ctxt->sys_regs, APIA);
++ __ptrauth_save_key(ctxt->sys_regs, APIB);
++ __ptrauth_save_key(ctxt->sys_regs, APDA);
++ __ptrauth_save_key(ctxt->sys_regs, APDB);
++ __ptrauth_save_key(ctxt->sys_regs, APGA);
++
++ vcpu_ptrauth_disable(vcpu);
++ }
+ }
+
+ void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
+diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
+index 731c1e517716..77aa91fb08d2 100644
+--- a/virt/kvm/kvm_main.c
++++ b/virt/kvm/kvm_main.c
+@@ -155,10 +155,9 @@ static void kvm_uevent_notify_change(unsigned int type, struct kvm *kvm);
+ static unsigned long long kvm_createvm_count;
+ static unsigned long long kvm_active_vms;
+
+-__weak int kvm_arch_mmu_notifier_invalidate_range(struct kvm *kvm,
+- unsigned long start, unsigned long end, bool blockable)
++__weak void kvm_arch_mmu_notifier_invalidate_range(struct kvm *kvm,
++ unsigned long start, unsigned long end)
+ {
+- return 0;
+ }
+
+ bool kvm_is_zone_device_pfn(kvm_pfn_t pfn)
+@@ -384,6 +383,18 @@ static inline struct kvm *mmu_notifier_to_kvm(struct mmu_notifier *mn)
+ return container_of(mn, struct kvm, mmu_notifier);
+ }
+
++static void kvm_mmu_notifier_invalidate_range(struct mmu_notifier *mn,
++ struct mm_struct *mm,
++ unsigned long start, unsigned long end)
++{
++ struct kvm *kvm = mmu_notifier_to_kvm(mn);
++ int idx;
++
++ idx = srcu_read_lock(&kvm->srcu);
++ kvm_arch_mmu_notifier_invalidate_range(kvm, start, end);
++ srcu_read_unlock(&kvm->srcu, idx);
++}
++
+ static void kvm_mmu_notifier_change_pte(struct mmu_notifier *mn,
+ struct mm_struct *mm,
+ unsigned long address,
+@@ -408,7 +419,6 @@ static int kvm_mmu_notifier_invalidate_range_start(struct mmu_notifier *mn,
+ {
+ struct kvm *kvm = mmu_notifier_to_kvm(mn);
+ int need_tlb_flush = 0, idx;
+- int ret;
+
+ idx = srcu_read_lock(&kvm->srcu);
+ spin_lock(&kvm->mmu_lock);
+@@ -425,14 +435,9 @@ static int kvm_mmu_notifier_invalidate_range_start(struct mmu_notifier *mn,
+ kvm_flush_remote_tlbs(kvm);
+
+ spin_unlock(&kvm->mmu_lock);
+-
+- ret = kvm_arch_mmu_notifier_invalidate_range(kvm, range->start,
+- range->end,
+- mmu_notifier_range_blockable(range));
+-
+ srcu_read_unlock(&kvm->srcu, idx);
+
+- return ret;
++ return 0;
+ }
+
+ static void kvm_mmu_notifier_invalidate_range_end(struct mmu_notifier *mn,
+@@ -538,6 +543,7 @@ static void kvm_mmu_notifier_release(struct mmu_notifier *mn,
+ }
+
+ static const struct mmu_notifier_ops kvm_mmu_notifier_ops = {
++ .invalidate_range = kvm_mmu_notifier_invalidate_range,
+ .invalidate_range_start = kvm_mmu_notifier_invalidate_range_start,
+ .invalidate_range_end = kvm_mmu_notifier_invalidate_range_end,
+ .clear_flush_young = kvm_mmu_notifier_clear_flush_young,
^ permalink raw reply related [flat|nested] 25+ messages in thread
* [gentoo-commits] proj/linux-patches:5.7 commit in: /
@ 2020-06-18 17:33 Mike Pagano
0 siblings, 0 replies; 25+ messages in thread
From: Mike Pagano @ 2020-06-18 17:33 UTC (permalink / raw)
To: gentoo-commits
commit: 28de17ab0edba34c093a469ad76bb3b51af116da
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Jun 18 17:33:16 2020 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Jun 18 17:33:16 2020 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=28de17ab
Linux patch 5.7.4
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 ++++
1003_linux-5.7.4.patch | 49 +++++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 53 insertions(+)
diff --git a/0000_README b/0000_README
index f77851e..0cde4ba 100644
--- a/0000_README
+++ b/0000_README
@@ -55,6 +55,10 @@ Patch: 1002_linux-5.7.3.patch
From: http://www.kernel.org
Desc: Linux 5.7.3
+Patch: 1003_linux-5.7.4.patch
+From: http://www.kernel.org
+Desc: Linux 5.7.4
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1003_linux-5.7.4.patch b/1003_linux-5.7.4.patch
new file mode 100644
index 0000000..915786e
--- /dev/null
+++ b/1003_linux-5.7.4.patch
@@ -0,0 +1,49 @@
+diff --git a/Makefile b/Makefile
+index a2ce556f4347..64da771d4ac5 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 7
+-SUBLEVEL = 3
++SUBLEVEL = 4
+ EXTRAVERSION =
+ NAME = Kleptomaniac Octopus
+
+diff --git a/lib/vdso/gettimeofday.c b/lib/vdso/gettimeofday.c
+index a2909af4b924..3bb82a6cc5aa 100644
+--- a/lib/vdso/gettimeofday.c
++++ b/lib/vdso/gettimeofday.c
+@@ -38,6 +38,13 @@ static inline bool vdso_clocksource_ok(const struct vdso_data *vd)
+ }
+ #endif
+
++#ifndef vdso_cycles_ok
++static inline bool vdso_cycles_ok(u64 cycles)
++{
++ return true;
++}
++#endif
++
+ #ifdef CONFIG_TIME_NS
+ static int do_hres_timens(const struct vdso_data *vdns, clockid_t clk,
+ struct __kernel_timespec *ts)
+@@ -62,6 +69,8 @@ static int do_hres_timens(const struct vdso_data *vdns, clockid_t clk,
+ return -1;
+
+ cycles = __arch_get_hw_counter(vd->clock_mode);
++ if (unlikely(!vdso_cycles_ok(cycles)))
++ return -1;
+ ns = vdso_ts->nsec;
+ last = vd->cycle_last;
+ ns += vdso_calc_delta(cycles, last, vd->mask, vd->mult);
+@@ -130,6 +139,8 @@ static __always_inline int do_hres(const struct vdso_data *vd, clockid_t clk,
+ return -1;
+
+ cycles = __arch_get_hw_counter(vd->clock_mode);
++ if (unlikely(!vdso_cycles_ok(cycles)))
++ return -1;
+ ns = vdso_ts->nsec;
+ last = vd->cycle_last;
+ ns += vdso_calc_delta(cycles, last, vd->mask, vd->mult);
* [gentoo-commits] proj/linux-patches:5.7 commit in: /
@ 2020-06-22 11:25 Mike Pagano
0 siblings, 0 replies; 25+ messages in thread
From: Mike Pagano @ 2020-06-22 11:25 UTC (permalink / raw)
To: gentoo-commits
commit: a69c813a7c34e3bf105d7643dceda941de45a8be
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Mon Jun 22 11:25:29 2020 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Mon Jun 22 11:25:29 2020 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=a69c813a
Linux patch 5.7.5
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1004_linux-5.7.5.patch | 14678 +++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 14682 insertions(+)
diff --git a/0000_README b/0000_README
index 0cde4ba..eab26a2 100644
--- a/0000_README
+++ b/0000_README
@@ -59,6 +59,10 @@ Patch: 1003_linux-5.7.4.patch
From: http://www.kernel.org
Desc: Linux 5.7.4
+Patch: 1004_linux-5.7.5.patch
+From: http://www.kernel.org
+Desc: Linux 5.7.5
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1004_linux-5.7.5.patch b/1004_linux-5.7.5.patch
new file mode 100644
index 0000000..17438eb
--- /dev/null
+++ b/1004_linux-5.7.5.patch
@@ -0,0 +1,14678 @@
+diff --git a/Documentation/devicetree/bindings/display/mediatek/mediatek,dpi.txt b/Documentation/devicetree/bindings/display/mediatek/mediatek,dpi.txt
+index 58914cf681b8..77def4456706 100644
+--- a/Documentation/devicetree/bindings/display/mediatek/mediatek,dpi.txt
++++ b/Documentation/devicetree/bindings/display/mediatek/mediatek,dpi.txt
+@@ -17,6 +17,9 @@ Required properties:
+ Documentation/devicetree/bindings/graph.txt. This port should be connected
+ to the input port of an attached HDMI or LVDS encoder chip.
+
++Optional properties:
++- pinctrl-names: Contain "default" and "sleep".
++
+ Example:
+
+ dpi0: dpi@1401d000 {
+@@ -27,6 +30,9 @@ dpi0: dpi@1401d000 {
+ <&mmsys CLK_MM_DPI_ENGINE>,
+ <&apmixedsys CLK_APMIXED_TVDPLL>;
+ clock-names = "pixel", "engine", "pll";
++ pinctrl-names = "default", "sleep";
++ pinctrl-0 = <&dpi_pin_func>;
++ pinctrl-1 = <&dpi_pin_idle>;
+
+ port {
+ dpi0_out: endpoint {
+diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
+index efbbe570aa9b..750d005a75bc 100644
+--- a/Documentation/virt/kvm/api.rst
++++ b/Documentation/virt/kvm/api.rst
+@@ -5067,9 +5067,11 @@ EOI was received.
+ #define KVM_EXIT_HYPERV_SYNIC 1
+ #define KVM_EXIT_HYPERV_HCALL 2
+ __u32 type;
++ __u32 pad1;
+ union {
+ struct {
+ __u32 msr;
++ __u32 pad2;
+ __u64 control;
+ __u64 evt_page;
+ __u64 msg_page;
+diff --git a/Makefile b/Makefile
+index 64da771d4ac5..c48d489f82bc 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 7
+-SUBLEVEL = 4
++SUBLEVEL = 5
+ EXTRAVERSION =
+ NAME = Kleptomaniac Octopus
+
+@@ -608,12 +608,8 @@ KBUILD_MODULES :=
+ KBUILD_BUILTIN := 1
+
+ # If we have only "make modules", don't compile built-in objects.
+-# When we're building modules with modversions, we need to consider
+-# the built-in objects during the descend as well, in order to
+-# make sure the checksums are up to date before we record them.
+-
+ ifeq ($(MAKECMDGOALS),modules)
+- KBUILD_BUILTIN := $(if $(CONFIG_MODVERSIONS),1)
++ KBUILD_BUILTIN :=
+ endif
+
+ # If we have "make <whatever> modules", compile modules
+@@ -1315,6 +1311,13 @@ ifdef CONFIG_MODULES
+
+ all: modules
+
++# When we're building modules with modversions, we need to consider
++# the built-in objects during the descend as well, in order to
++# make sure the checksums are up to date before we record them.
++ifdef CONFIG_MODVERSIONS
++ KBUILD_BUILTIN := 1
++endif
++
+ # Build modules
+ #
+ # A module can be listed more than once in obj-m resulting in
+diff --git a/arch/alpha/include/asm/io.h b/arch/alpha/include/asm/io.h
+index d1ed5a8133c5..e6225cf40de5 100644
+--- a/arch/alpha/include/asm/io.h
++++ b/arch/alpha/include/asm/io.h
+@@ -310,14 +310,18 @@ static inline int __is_mmio(const volatile void __iomem *addr)
+ #if IO_CONCAT(__IO_PREFIX,trivial_io_bw)
+ extern inline unsigned int ioread8(void __iomem *addr)
+ {
+- unsigned int ret = IO_CONCAT(__IO_PREFIX,ioread8)(addr);
++ unsigned int ret;
++ mb();
++ ret = IO_CONCAT(__IO_PREFIX,ioread8)(addr);
+ mb();
+ return ret;
+ }
+
+ extern inline unsigned int ioread16(void __iomem *addr)
+ {
+- unsigned int ret = IO_CONCAT(__IO_PREFIX,ioread16)(addr);
++ unsigned int ret;
++ mb();
++ ret = IO_CONCAT(__IO_PREFIX,ioread16)(addr);
+ mb();
+ return ret;
+ }
+@@ -358,7 +362,9 @@ extern inline void outw(u16 b, unsigned long port)
+ #if IO_CONCAT(__IO_PREFIX,trivial_io_lq)
+ extern inline unsigned int ioread32(void __iomem *addr)
+ {
+- unsigned int ret = IO_CONCAT(__IO_PREFIX,ioread32)(addr);
++ unsigned int ret;
++ mb();
++ ret = IO_CONCAT(__IO_PREFIX,ioread32)(addr);
+ mb();
+ return ret;
+ }
+@@ -403,14 +409,18 @@ extern inline void __raw_writew(u16 b, volatile void __iomem *addr)
+
+ extern inline u8 readb(const volatile void __iomem *addr)
+ {
+- u8 ret = __raw_readb(addr);
++ u8 ret;
++ mb();
++ ret = __raw_readb(addr);
+ mb();
+ return ret;
+ }
+
+ extern inline u16 readw(const volatile void __iomem *addr)
+ {
+- u16 ret = __raw_readw(addr);
++ u16 ret;
++ mb();
++ ret = __raw_readw(addr);
+ mb();
+ return ret;
+ }
+@@ -451,14 +461,18 @@ extern inline void __raw_writeq(u64 b, volatile void __iomem *addr)
+
+ extern inline u32 readl(const volatile void __iomem *addr)
+ {
+- u32 ret = __raw_readl(addr);
++ u32 ret;
++ mb();
++ ret = __raw_readl(addr);
+ mb();
+ return ret;
+ }
+
+ extern inline u64 readq(const volatile void __iomem *addr)
+ {
+- u64 ret = __raw_readq(addr);
++ u64 ret;
++ mb();
++ ret = __raw_readq(addr);
+ mb();
+ return ret;
+ }
+@@ -487,14 +501,44 @@ extern inline void writeq(u64 b, volatile void __iomem *addr)
+ #define outb_p outb
+ #define outw_p outw
+ #define outl_p outl
+-#define readb_relaxed(addr) __raw_readb(addr)
+-#define readw_relaxed(addr) __raw_readw(addr)
+-#define readl_relaxed(addr) __raw_readl(addr)
+-#define readq_relaxed(addr) __raw_readq(addr)
+-#define writeb_relaxed(b, addr) __raw_writeb(b, addr)
+-#define writew_relaxed(b, addr) __raw_writew(b, addr)
+-#define writel_relaxed(b, addr) __raw_writel(b, addr)
+-#define writeq_relaxed(b, addr) __raw_writeq(b, addr)
++
++extern u8 readb_relaxed(const volatile void __iomem *addr);
++extern u16 readw_relaxed(const volatile void __iomem *addr);
++extern u32 readl_relaxed(const volatile void __iomem *addr);
++extern u64 readq_relaxed(const volatile void __iomem *addr);
++
++#if IO_CONCAT(__IO_PREFIX,trivial_io_bw)
++extern inline u8 readb_relaxed(const volatile void __iomem *addr)
++{
++ mb();
++ return __raw_readb(addr);
++}
++
++extern inline u16 readw_relaxed(const volatile void __iomem *addr)
++{
++ mb();
++ return __raw_readw(addr);
++}
++#endif
++
++#if IO_CONCAT(__IO_PREFIX,trivial_io_lq)
++extern inline u32 readl_relaxed(const volatile void __iomem *addr)
++{
++ mb();
++ return __raw_readl(addr);
++}
++
++extern inline u64 readq_relaxed(const volatile void __iomem *addr)
++{
++ mb();
++ return __raw_readq(addr);
++}
++#endif
++
++#define writeb_relaxed writeb
++#define writew_relaxed writew
++#define writel_relaxed writel
++#define writeq_relaxed writeq
+
+ /*
+ * String version of IO memory access ops:
+diff --git a/arch/alpha/kernel/io.c b/arch/alpha/kernel/io.c
+index c025a3e5e357..938de13adfbf 100644
+--- a/arch/alpha/kernel/io.c
++++ b/arch/alpha/kernel/io.c
+@@ -16,21 +16,27 @@
+ unsigned int
+ ioread8(void __iomem *addr)
+ {
+- unsigned int ret = IO_CONCAT(__IO_PREFIX,ioread8)(addr);
++ unsigned int ret;
++ mb();
++ ret = IO_CONCAT(__IO_PREFIX,ioread8)(addr);
+ mb();
+ return ret;
+ }
+
+ unsigned int ioread16(void __iomem *addr)
+ {
+- unsigned int ret = IO_CONCAT(__IO_PREFIX,ioread16)(addr);
++ unsigned int ret;
++ mb();
++ ret = IO_CONCAT(__IO_PREFIX,ioread16)(addr);
+ mb();
+ return ret;
+ }
+
+ unsigned int ioread32(void __iomem *addr)
+ {
+- unsigned int ret = IO_CONCAT(__IO_PREFIX,ioread32)(addr);
++ unsigned int ret;
++ mb();
++ ret = IO_CONCAT(__IO_PREFIX,ioread32)(addr);
+ mb();
+ return ret;
+ }
+@@ -148,28 +154,36 @@ EXPORT_SYMBOL(__raw_writeq);
+
+ u8 readb(const volatile void __iomem *addr)
+ {
+- u8 ret = __raw_readb(addr);
++ u8 ret;
++ mb();
++ ret = __raw_readb(addr);
+ mb();
+ return ret;
+ }
+
+ u16 readw(const volatile void __iomem *addr)
+ {
+- u16 ret = __raw_readw(addr);
++ u16 ret;
++ mb();
++ ret = __raw_readw(addr);
+ mb();
+ return ret;
+ }
+
+ u32 readl(const volatile void __iomem *addr)
+ {
+- u32 ret = __raw_readl(addr);
++ u32 ret;
++ mb();
++ ret = __raw_readl(addr);
+ mb();
+ return ret;
+ }
+
+ u64 readq(const volatile void __iomem *addr)
+ {
+- u64 ret = __raw_readq(addr);
++ u64 ret;
++ mb();
++ ret = __raw_readq(addr);
+ mb();
+ return ret;
+ }
+@@ -207,6 +221,38 @@ EXPORT_SYMBOL(writew);
+ EXPORT_SYMBOL(writel);
+ EXPORT_SYMBOL(writeq);
+
++/*
++ * The _relaxed functions must be ordered w.r.t. each other, but they don't
++ * have to be ordered w.r.t. other memory accesses.
++ */
++u8 readb_relaxed(const volatile void __iomem *addr)
++{
++ mb();
++ return __raw_readb(addr);
++}
++
++u16 readw_relaxed(const volatile void __iomem *addr)
++{
++ mb();
++ return __raw_readw(addr);
++}
++
++u32 readl_relaxed(const volatile void __iomem *addr)
++{
++ mb();
++ return __raw_readl(addr);
++}
++
++u64 readq_relaxed(const volatile void __iomem *addr)
++{
++ mb();
++ return __raw_readq(addr);
++}
++
++EXPORT_SYMBOL(readb_relaxed);
++EXPORT_SYMBOL(readw_relaxed);
++EXPORT_SYMBOL(readl_relaxed);
++EXPORT_SYMBOL(readq_relaxed);
+
+ /*
+ * Read COUNT 8-bit bytes from port PORT into memory starting at SRC.
+diff --git a/arch/arm/boot/compressed/.gitignore b/arch/arm/boot/compressed/.gitignore
+index db05c6ef3e31..60606b0f378d 100644
+--- a/arch/arm/boot/compressed/.gitignore
++++ b/arch/arm/boot/compressed/.gitignore
+@@ -7,12 +7,3 @@ hyp-stub.S
+ piggy_data
+ vmlinux
+ vmlinux.lds
+-
+-# borrowed libfdt files
+-fdt.c
+-fdt.h
+-fdt_ro.c
+-fdt_rw.c
+-fdt_wip.c
+-libfdt.h
+-libfdt_internal.h
+diff --git a/arch/arm/boot/compressed/Makefile b/arch/arm/boot/compressed/Makefile
+index 9c11e7490292..00602a6fba04 100644
+--- a/arch/arm/boot/compressed/Makefile
++++ b/arch/arm/boot/compressed/Makefile
+@@ -76,29 +76,30 @@ compress-$(CONFIG_KERNEL_LZMA) = lzma
+ compress-$(CONFIG_KERNEL_XZ) = xzkern
+ compress-$(CONFIG_KERNEL_LZ4) = lz4
+
+-# Borrowed libfdt files for the ATAG compatibility mode
+-
+-libfdt := fdt_rw.c fdt_ro.c fdt_wip.c fdt.c
+-libfdt_hdrs := fdt.h libfdt.h libfdt_internal.h
+-
+-libfdt_objs := $(addsuffix .o, $(basename $(libfdt)))
+-
+-$(addprefix $(obj)/,$(libfdt) $(libfdt_hdrs)): $(obj)/%: $(srctree)/scripts/dtc/libfdt/%
+- $(call cmd,shipped)
+-
+-$(addprefix $(obj)/,$(libfdt_objs) atags_to_fdt.o): \
+- $(addprefix $(obj)/,$(libfdt_hdrs))
++libfdt_objs := fdt_rw.o fdt_ro.o fdt_wip.o fdt.o
+
+ ifeq ($(CONFIG_ARM_ATAG_DTB_COMPAT),y)
+ OBJS += $(libfdt_objs) atags_to_fdt.o
+ endif
+
++# -fstack-protector-strong triggers protection checks in this code,
++# but it is being used too early to link to meaningful stack_chk logic.
++nossp-flags-$(CONFIG_CC_HAS_STACKPROTECTOR_NONE) := -fno-stack-protector
++$(foreach o, $(libfdt_objs) atags_to_fdt.o, \
++ $(eval CFLAGS_$(o) := -I $(srctree)/scripts/dtc/libfdt $(nossp-flags-y)))
++
++# These were previously generated C files. When you are building the kernel
++# with O=, make sure to remove the stale files in the output tree. Otherwise,
++# the build system wrongly compiles the stale ones.
++ifdef building_out_of_srctree
++$(shell rm -f $(addprefix $(obj)/, fdt_rw.c fdt_ro.c fdt_wip.c fdt.c))
++endif
++
+ targets := vmlinux vmlinux.lds piggy_data piggy.o \
+ lib1funcs.o ashldi3.o bswapsdi2.o \
+ head.o $(OBJS)
+
+-clean-files += piggy_data lib1funcs.S ashldi3.S bswapsdi2.S \
+- $(libfdt) $(libfdt_hdrs) hyp-stub.S
++clean-files += piggy_data lib1funcs.S ashldi3.S bswapsdi2.S hyp-stub.S
+
+ KBUILD_CFLAGS += -DDISABLE_BRANCH_PROFILING
+
+@@ -107,15 +108,6 @@ ORIG_CFLAGS := $(KBUILD_CFLAGS)
+ KBUILD_CFLAGS = $(subst -pg, , $(ORIG_CFLAGS))
+ endif
+
+-# -fstack-protector-strong triggers protection checks in this code,
+-# but it is being used too early to link to meaningful stack_chk logic.
+-nossp-flags-$(CONFIG_CC_HAS_STACKPROTECTOR_NONE) := -fno-stack-protector
+-CFLAGS_atags_to_fdt.o := $(nossp-flags-y)
+-CFLAGS_fdt.o := $(nossp-flags-y)
+-CFLAGS_fdt_ro.o := $(nossp-flags-y)
+-CFLAGS_fdt_rw.o := $(nossp-flags-y)
+-CFLAGS_fdt_wip.o := $(nossp-flags-y)
+-
+ ccflags-y := -fpic $(call cc-option,-mno-single-pic-base,) -fno-builtin \
+ -I$(obj) $(DISABLE_ARM_SSP_PER_TASK_PLUGIN)
+ asflags-y := -DZIMAGE
+diff --git a/arch/arm/boot/compressed/atags_to_fdt.c b/arch/arm/boot/compressed/atags_to_fdt.c
+index 64c49747f8a3..8452753efebe 100644
+--- a/arch/arm/boot/compressed/atags_to_fdt.c
++++ b/arch/arm/boot/compressed/atags_to_fdt.c
+@@ -1,4 +1,5 @@
+ // SPDX-License-Identifier: GPL-2.0
++#include <linux/libfdt_env.h>
+ #include <asm/setup.h>
+ #include <libfdt.h>
+
+diff --git a/arch/arm/boot/compressed/fdt.c b/arch/arm/boot/compressed/fdt.c
+new file mode 100644
+index 000000000000..f8ea7a201ab1
+--- /dev/null
++++ b/arch/arm/boot/compressed/fdt.c
+@@ -0,0 +1,2 @@
++// SPDX-License-Identifier: GPL-2.0-only
++#include "../../../../lib/fdt.c"
+diff --git a/arch/arm/boot/compressed/fdt_ro.c b/arch/arm/boot/compressed/fdt_ro.c
+new file mode 100644
+index 000000000000..93970a4ad5ae
+--- /dev/null
++++ b/arch/arm/boot/compressed/fdt_ro.c
+@@ -0,0 +1,2 @@
++// SPDX-License-Identifier: GPL-2.0-only
++#include "../../../../lib/fdt_ro.c"
+diff --git a/arch/arm/boot/compressed/fdt_rw.c b/arch/arm/boot/compressed/fdt_rw.c
+new file mode 100644
+index 000000000000..f7c6b8b7e01c
+--- /dev/null
++++ b/arch/arm/boot/compressed/fdt_rw.c
+@@ -0,0 +1,2 @@
++// SPDX-License-Identifier: GPL-2.0-only
++#include "../../../../lib/fdt_rw.c"
+diff --git a/arch/arm/boot/compressed/fdt_wip.c b/arch/arm/boot/compressed/fdt_wip.c
+new file mode 100644
+index 000000000000..048d2c7a088d
+--- /dev/null
++++ b/arch/arm/boot/compressed/fdt_wip.c
+@@ -0,0 +1,2 @@
++// SPDX-License-Identifier: GPL-2.0-only
++#include "../../../../lib/fdt_wip.c"
+diff --git a/arch/arm/boot/compressed/libfdt_env.h b/arch/arm/boot/compressed/libfdt_env.h
+deleted file mode 100644
+index 6a0f1f524466..000000000000
+--- a/arch/arm/boot/compressed/libfdt_env.h
++++ /dev/null
+@@ -1,24 +0,0 @@
+-/* SPDX-License-Identifier: GPL-2.0 */
+-#ifndef _ARM_LIBFDT_ENV_H
+-#define _ARM_LIBFDT_ENV_H
+-
+-#include <linux/limits.h>
+-#include <linux/types.h>
+-#include <linux/string.h>
+-#include <asm/byteorder.h>
+-
+-#define INT32_MAX S32_MAX
+-#define UINT32_MAX U32_MAX
+-
+-typedef __be16 fdt16_t;
+-typedef __be32 fdt32_t;
+-typedef __be64 fdt64_t;
+-
+-#define fdt16_to_cpu(x) be16_to_cpu(x)
+-#define cpu_to_fdt16(x) cpu_to_be16(x)
+-#define fdt32_to_cpu(x) be32_to_cpu(x)
+-#define cpu_to_fdt32(x) cpu_to_be32(x)
+-#define fdt64_to_cpu(x) be64_to_cpu(x)
+-#define cpu_to_fdt64(x) cpu_to_be64(x)
+-
+-#endif
+diff --git a/arch/arm/boot/dts/at91-sama5d2_ptc_ek.dts b/arch/arm/boot/dts/at91-sama5d2_ptc_ek.dts
+index 772809c54c1f..b803fa1f2039 100644
+--- a/arch/arm/boot/dts/at91-sama5d2_ptc_ek.dts
++++ b/arch/arm/boot/dts/at91-sama5d2_ptc_ek.dts
+@@ -40,7 +40,7 @@
+
+ ahb {
+ usb0: gadget@300000 {
+- atmel,vbus-gpio = <&pioA PIN_PA27 GPIO_ACTIVE_HIGH>;
++ atmel,vbus-gpio = <&pioA PIN_PB11 GPIO_ACTIVE_HIGH>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_usba_vbus>;
+ status = "okay";
+diff --git a/arch/arm/boot/dts/exynos4412-galaxy-s3.dtsi b/arch/arm/boot/dts/exynos4412-galaxy-s3.dtsi
+index 44f97546dd0a..f910aa924bfb 100644
+--- a/arch/arm/boot/dts/exynos4412-galaxy-s3.dtsi
++++ b/arch/arm/boot/dts/exynos4412-galaxy-s3.dtsi
+@@ -68,7 +68,7 @@
+
+ i2c_cm36651: i2c-gpio-2 {
+ compatible = "i2c-gpio";
+- gpios = <&gpf0 0 GPIO_ACTIVE_LOW>, <&gpf0 1 GPIO_ACTIVE_LOW>;
++ gpios = <&gpf0 0 GPIO_ACTIVE_HIGH>, <&gpf0 1 GPIO_ACTIVE_HIGH>;
+ i2c-gpio,delay-us = <2>;
+ #address-cells = <1>;
+ #size-cells = <0>;
+diff --git a/arch/arm/boot/dts/s5pv210-aries.dtsi b/arch/arm/boot/dts/s5pv210-aries.dtsi
+index 8ff70b856334..d419b77201f7 100644
+--- a/arch/arm/boot/dts/s5pv210-aries.dtsi
++++ b/arch/arm/boot/dts/s5pv210-aries.dtsi
+@@ -454,6 +454,7 @@
+ pinctrl-names = "default";
+ cap-sd-highspeed;
+ cap-mmc-highspeed;
++ keep-power-in-suspend;
+
+ mmc-pwrseq = <&wifi_pwrseq>;
+ non-removable;
+diff --git a/arch/arm/mach-tegra/tegra.c b/arch/arm/mach-tegra/tegra.c
+index f1ce2857a251..b620b0651157 100644
+--- a/arch/arm/mach-tegra/tegra.c
++++ b/arch/arm/mach-tegra/tegra.c
+@@ -107,8 +107,8 @@ static const char * const tegra_dt_board_compat[] = {
+ };
+
+ DT_MACHINE_START(TEGRA_DT, "NVIDIA Tegra SoC (Flattened Device Tree)")
+- .l2c_aux_val = 0x3c400001,
+- .l2c_aux_mask = 0xc20fc3fe,
++ .l2c_aux_val = 0x3c400000,
++ .l2c_aux_mask = 0xc20fc3ff,
+ .smp = smp_ops(tegra_smp_ops),
+ .map_io = tegra_map_common_io,
+ .init_early = tegra_init_early,
+diff --git a/arch/arm/mm/proc-macros.S b/arch/arm/mm/proc-macros.S
+index 5461d589a1e2..60ac7c5999a9 100644
+--- a/arch/arm/mm/proc-macros.S
++++ b/arch/arm/mm/proc-macros.S
+@@ -5,6 +5,7 @@
+ * VMA_VM_FLAGS
+ * VM_EXEC
+ */
++#include <linux/const.h>
+ #include <asm/asm-offsets.h>
+ #include <asm/thread_info.h>
+
+@@ -30,7 +31,7 @@
+ * act_mm - get current->active_mm
+ */
+ .macro act_mm, rd
+- bic \rd, sp, #8128
++ bic \rd, sp, #(THREAD_SIZE - 1) & ~63
+ bic \rd, \rd, #63
+ ldr \rd, [\rd, #TI_TASK]
+ .if (TSK_ACTIVE_MM > IMM12_MASK)
+diff --git a/arch/arm64/include/asm/cacheflush.h b/arch/arm64/include/asm/cacheflush.h
+index e6cca3d4acf7..ce50c1f1f1ea 100644
+--- a/arch/arm64/include/asm/cacheflush.h
++++ b/arch/arm64/include/asm/cacheflush.h
+@@ -79,7 +79,7 @@ static inline void flush_icache_range(unsigned long start, unsigned long end)
+ * IPI all online CPUs so that they undergo a context synchronization
+ * event and are forced to refetch the new instructions.
+ */
+-#ifdef CONFIG_KGDB
++
+ /*
+ * KGDB performs cache maintenance with interrupts disabled, so we
+ * will deadlock trying to IPI the secondary CPUs. In theory, we can
+@@ -89,9 +89,9 @@ static inline void flush_icache_range(unsigned long start, unsigned long end)
+ * the patching operation, so we don't need extra IPIs here anyway.
+ * In which case, add a KGDB-specific bodge and return early.
+ */
+- if (kgdb_connected && irqs_disabled())
++ if (in_dbg_master())
+ return;
+-#endif
++
+ kick_all_cpus_sync();
+ }
+
+diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
+index 538c85e62f86..25f56df7ed9a 100644
+--- a/arch/arm64/include/asm/pgtable.h
++++ b/arch/arm64/include/asm/pgtable.h
+@@ -457,6 +457,7 @@ extern pgd_t init_pg_dir[PTRS_PER_PGD];
+ extern pgd_t init_pg_end[];
+ extern pgd_t swapper_pg_dir[PTRS_PER_PGD];
+ extern pgd_t idmap_pg_dir[PTRS_PER_PGD];
++extern pgd_t idmap_pg_end[];
+ extern pgd_t tramp_pg_dir[PTRS_PER_PGD];
+
+ extern void set_swapper_pgd(pgd_t *pgdp, pgd_t pgd);
+diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
+index 57a91032b4c2..32f5ecbec0ea 100644
+--- a/arch/arm64/kernel/head.S
++++ b/arch/arm64/kernel/head.S
+@@ -394,13 +394,19 @@ SYM_FUNC_START_LOCAL(__create_page_tables)
+
+ /*
+ * Since the page tables have been populated with non-cacheable
+- * accesses (MMU disabled), invalidate the idmap and swapper page
+- * tables again to remove any speculatively loaded cache lines.
++ * accesses (MMU disabled), invalidate those tables again to
++ * remove any speculatively loaded cache lines.
+ */
++ dmb sy
++
+ adrp x0, idmap_pg_dir
++ adrp x1, idmap_pg_end
++ sub x1, x1, x0
++ bl __inval_dcache_area
++
++ adrp x0, init_pg_dir
+ adrp x1, init_pg_end
+ sub x1, x1, x0
+- dmb sy
+ bl __inval_dcache_area
+
+ ret x28
+diff --git a/arch/arm64/kernel/insn.c b/arch/arm64/kernel/insn.c
+index 4a9e773a177f..cc2f3d901c91 100644
+--- a/arch/arm64/kernel/insn.c
++++ b/arch/arm64/kernel/insn.c
+@@ -1535,16 +1535,10 @@ static u32 aarch64_encode_immediate(u64 imm,
+ u32 insn)
+ {
+ unsigned int immr, imms, n, ones, ror, esz, tmp;
+- u64 mask = ~0UL;
+-
+- /* Can't encode full zeroes or full ones */
+- if (!imm || !~imm)
+- return AARCH64_BREAK_FAULT;
++ u64 mask;
+
+ switch (variant) {
+ case AARCH64_INSN_VARIANT_32BIT:
+- if (upper_32_bits(imm))
+- return AARCH64_BREAK_FAULT;
+ esz = 32;
+ break;
+ case AARCH64_INSN_VARIANT_64BIT:
+@@ -1556,6 +1550,12 @@ static u32 aarch64_encode_immediate(u64 imm,
+ return AARCH64_BREAK_FAULT;
+ }
+
++ mask = GENMASK(esz - 1, 0);
++
++ /* Can't encode full zeroes, full ones, or value wider than the mask */
++ if (!imm || imm == mask || imm & ~mask)
++ return AARCH64_BREAK_FAULT;
++
+ /*
+ * Inverse of Replicate(). Try to spot a repeating pattern
+ * with a pow2 stride.
+diff --git a/arch/arm64/kernel/machine_kexec_file.c b/arch/arm64/kernel/machine_kexec_file.c
+index b40c3b0def92..5ebb21b859b4 100644
+--- a/arch/arm64/kernel/machine_kexec_file.c
++++ b/arch/arm64/kernel/machine_kexec_file.c
+@@ -284,7 +284,7 @@ int load_other_segments(struct kimage *image,
+ image->arch.elf_headers_sz = headers_sz;
+
+ pr_debug("Loaded elf core header at 0x%lx bufsz=0x%lx memsz=0x%lx\n",
+- image->arch.elf_headers_mem, headers_sz, headers_sz);
++ image->arch.elf_headers_mem, kbuf.bufsz, kbuf.memsz);
+ }
+
+ /* load initrd */
+@@ -305,7 +305,7 @@ int load_other_segments(struct kimage *image,
+ initrd_load_addr = kbuf.mem;
+
+ pr_debug("Loaded initrd at 0x%lx bufsz=0x%lx memsz=0x%lx\n",
+- initrd_load_addr, initrd_len, initrd_len);
++ initrd_load_addr, kbuf.bufsz, kbuf.memsz);
+ }
+
+ /* load dtb */
+@@ -332,7 +332,7 @@ int load_other_segments(struct kimage *image,
+ image->arch.dtb_mem = kbuf.mem;
+
+ pr_debug("Loaded dtb at 0x%lx bufsz=0x%lx memsz=0x%lx\n",
+- kbuf.mem, dtb_len, dtb_len);
++ kbuf.mem, kbuf.bufsz, kbuf.memsz);
+
+ return 0;
+
+diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
+index 497f9675071d..94402aaf5f5c 100644
+--- a/arch/arm64/kernel/vmlinux.lds.S
++++ b/arch/arm64/kernel/vmlinux.lds.S
+@@ -139,6 +139,7 @@ SECTIONS
+
+ idmap_pg_dir = .;
+ . += IDMAP_DIR_SIZE;
++ idmap_pg_end = .;
+
+ #ifdef CONFIG_UNMAP_KERNEL_AT_EL0
+ tramp_pg_dir = .;
+diff --git a/arch/m68k/include/asm/mac_via.h b/arch/m68k/include/asm/mac_via.h
+index de1470c4d829..1149251ea58d 100644
+--- a/arch/m68k/include/asm/mac_via.h
++++ b/arch/m68k/include/asm/mac_via.h
+@@ -257,6 +257,7 @@ extern int rbv_present,via_alt_mapping;
+
+ struct irq_desc;
+
++extern void via_l2_flush(int writeback);
+ extern void via_register_interrupts(void);
+ extern void via_irq_enable(int);
+ extern void via_irq_disable(int);
+diff --git a/arch/m68k/mac/config.c b/arch/m68k/mac/config.c
+index 611f73bfc87c..d0126ab01360 100644
+--- a/arch/m68k/mac/config.c
++++ b/arch/m68k/mac/config.c
+@@ -59,7 +59,6 @@ extern void iop_preinit(void);
+ extern void iop_init(void);
+ extern void via_init(void);
+ extern void via_init_clock(irq_handler_t func);
+-extern void via_flush_cache(void);
+ extern void oss_init(void);
+ extern void psc_init(void);
+ extern void baboon_init(void);
+@@ -130,21 +129,6 @@ int __init mac_parse_bootinfo(const struct bi_record *record)
+ return unknown;
+ }
+
+-/*
+- * Flip into 24bit mode for an instant - flushes the L2 cache card. We
+- * have to disable interrupts for this. Our IRQ handlers will crap
+- * themselves if they take an IRQ in 24bit mode!
+- */
+-
+-static void mac_cache_card_flush(int writeback)
+-{
+- unsigned long flags;
+-
+- local_irq_save(flags);
+- via_flush_cache();
+- local_irq_restore(flags);
+-}
+-
+ void __init config_mac(void)
+ {
+ if (!MACH_IS_MAC)
+@@ -175,9 +159,8 @@ void __init config_mac(void)
+ * not.
+ */
+
+- if (macintosh_config->ident == MAC_MODEL_IICI
+- || macintosh_config->ident == MAC_MODEL_IIFX)
+- mach_l2_flush = mac_cache_card_flush;
++ if (macintosh_config->ident == MAC_MODEL_IICI)
++ mach_l2_flush = via_l2_flush;
+ }
+
+
+diff --git a/arch/m68k/mac/via.c b/arch/m68k/mac/via.c
+index 3c2cfcb74982..1f0fad2a98a0 100644
+--- a/arch/m68k/mac/via.c
++++ b/arch/m68k/mac/via.c
+@@ -294,10 +294,14 @@ void via_debug_dump(void)
+ * the system into 24-bit mode for an instant.
+ */
+
+-void via_flush_cache(void)
++void via_l2_flush(int writeback)
+ {
++ unsigned long flags;
++
++ local_irq_save(flags);
+ via2[gBufB] &= ~VIA2B_vMode32;
+ via2[gBufB] |= VIA2B_vMode32;
++ local_irq_restore(flags);
+ }
+
+ /*
+diff --git a/arch/mips/Makefile b/arch/mips/Makefile
+index e1c44aed8156..b6ee29e4565a 100644
+--- a/arch/mips/Makefile
++++ b/arch/mips/Makefile
+@@ -288,12 +288,23 @@ ifdef CONFIG_64BIT
+ endif
+ endif
+
++# When linking a 32-bit executable the LLVM linker cannot cope with a
++# 32-bit load address that has been sign-extended to 64 bits. Simply
++# remove the upper 32 bits then, as it is safe to do so with other
++# linkers.
++ifdef CONFIG_64BIT
++ load-ld = $(load-y)
++else
++ load-ld = $(subst 0xffffffff,0x,$(load-y))
++endif
++
+ KBUILD_AFLAGS += $(cflags-y)
+ KBUILD_CFLAGS += $(cflags-y)
+-KBUILD_CPPFLAGS += -DVMLINUX_LOAD_ADDRESS=$(load-y)
++KBUILD_CPPFLAGS += -DVMLINUX_LOAD_ADDRESS=$(load-y) -DLINKER_LOAD_ADDRESS=$(load-ld)
+ KBUILD_CPPFLAGS += -DDATAOFFSET=$(if $(dataoffset-y),$(dataoffset-y),0)
+
+ bootvars-y = VMLINUX_LOAD_ADDRESS=$(load-y) \
++ LINKER_LOAD_ADDRESS=$(load-ld) \
+ VMLINUX_ENTRY_ADDRESS=$(entry-y) \
+ PLATFORM="$(platform-y)" \
+ ITS_INPUTS="$(its-y)"
+diff --git a/arch/mips/boot/compressed/Makefile b/arch/mips/boot/compressed/Makefile
+index 0df0ee8a298d..6e56caef69f0 100644
+--- a/arch/mips/boot/compressed/Makefile
++++ b/arch/mips/boot/compressed/Makefile
+@@ -90,7 +90,7 @@ ifneq ($(zload-y),)
+ VMLINUZ_LOAD_ADDRESS := $(zload-y)
+ else
+ VMLINUZ_LOAD_ADDRESS = $(shell $(obj)/calc_vmlinuz_load_addr \
+- $(obj)/vmlinux.bin $(VMLINUX_LOAD_ADDRESS))
++ $(obj)/vmlinux.bin $(LINKER_LOAD_ADDRESS))
+ endif
+ UIMAGE_LOADADDR = $(VMLINUZ_LOAD_ADDRESS)
+
+diff --git a/arch/mips/configs/loongson3_defconfig b/arch/mips/configs/loongson3_defconfig
+index 51675f5000d6..b0c24bd292b2 100644
+--- a/arch/mips/configs/loongson3_defconfig
++++ b/arch/mips/configs/loongson3_defconfig
+@@ -229,7 +229,7 @@ CONFIG_MEDIA_CAMERA_SUPPORT=y
+ CONFIG_MEDIA_USB_SUPPORT=y
+ CONFIG_USB_VIDEO_CLASS=m
+ CONFIG_DRM=y
+-CONFIG_DRM_RADEON=y
++CONFIG_DRM_RADEON=m
+ CONFIG_FB_RADEON=y
+ CONFIG_LCD_CLASS_DEVICE=y
+ CONFIG_LCD_PLATFORM=m
+diff --git a/arch/mips/include/asm/cpu-features.h b/arch/mips/include/asm/cpu-features.h
+index de44c92b1c1f..d4e120464d41 100644
+--- a/arch/mips/include/asm/cpu-features.h
++++ b/arch/mips/include/asm/cpu-features.h
+@@ -288,10 +288,12 @@
+ # define cpu_has_mips32r6 __isa_ge_or_flag(6, MIPS_CPU_ISA_M32R6)
+ #endif
+ #ifndef cpu_has_mips64r1
+-# define cpu_has_mips64r1 __isa_range_or_flag(1, 6, MIPS_CPU_ISA_M64R1)
++# define cpu_has_mips64r1 (cpu_has_64bits && \
++ __isa_range_or_flag(1, 6, MIPS_CPU_ISA_M64R1))
+ #endif
+ #ifndef cpu_has_mips64r2
+-# define cpu_has_mips64r2 __isa_range_or_flag(2, 6, MIPS_CPU_ISA_M64R2)
++# define cpu_has_mips64r2 (cpu_has_64bits && \
++ __isa_range_or_flag(2, 6, MIPS_CPU_ISA_M64R2))
+ #endif
+ #ifndef cpu_has_mips64r6
+ # define cpu_has_mips64r6 __isa_ge_and_flag(6, MIPS_CPU_ISA_M64R6)
+diff --git a/arch/mips/include/asm/mipsregs.h b/arch/mips/include/asm/mipsregs.h
+index 796fe47cfd17..274c2bf0d4a1 100644
+--- a/arch/mips/include/asm/mipsregs.h
++++ b/arch/mips/include/asm/mipsregs.h
+@@ -753,7 +753,7 @@
+
+ /* MAAR bit definitions */
+ #define MIPS_MAAR_VH (_U64CAST_(1) << 63)
+-#define MIPS_MAAR_ADDR ((BIT_ULL(BITS_PER_LONG - 12) - 1) << 12)
++#define MIPS_MAAR_ADDR GENMASK_ULL(55, 12)
+ #define MIPS_MAAR_ADDR_SHIFT 12
+ #define MIPS_MAAR_S (_ULCAST_(1) << 1)
+ #define MIPS_MAAR_VL (_ULCAST_(1) << 0)
+diff --git a/arch/mips/kernel/genex.S b/arch/mips/kernel/genex.S
+index 0a43c9125267..5b7c67a3f78f 100644
+--- a/arch/mips/kernel/genex.S
++++ b/arch/mips/kernel/genex.S
+@@ -476,20 +476,20 @@ NESTED(nmi_handler, PT_SIZE, sp)
+ .endm
+
+ .macro __build_clear_fpe
++ CLI
++ TRACE_IRQS_OFF
+ .set push
+ /* gas fails to assemble cfc1 for some archs (octeon).*/ \
+ .set mips1
+ SET_HARDFLOAT
+ cfc1 a1, fcr31
+ .set pop
+- CLI
+- TRACE_IRQS_OFF
+ .endm
+
+ .macro __build_clear_msa_fpe
+- _cfcmsa a1, MSA_CSR
+ CLI
+ TRACE_IRQS_OFF
++ _cfcmsa a1, MSA_CSR
+ .endm
+
+ .macro __build_clear_ade
+diff --git a/arch/mips/kernel/mips-cm.c b/arch/mips/kernel/mips-cm.c
+index cdb93ed91cde..361bfc91a0e6 100644
+--- a/arch/mips/kernel/mips-cm.c
++++ b/arch/mips/kernel/mips-cm.c
+@@ -119,9 +119,9 @@ static char *cm2_causes[32] = {
+ "COH_RD_ERR", "MMIO_WR_ERR", "MMIO_RD_ERR", "0x07",
+ "0x08", "0x09", "0x0a", "0x0b",
+ "0x0c", "0x0d", "0x0e", "0x0f",
+- "0x10", "0x11", "0x12", "0x13",
+- "0x14", "0x15", "0x16", "INTVN_WR_ERR",
+- "INTVN_RD_ERR", "0x19", "0x1a", "0x1b",
++ "0x10", "INTVN_WR_ERR", "INTVN_RD_ERR", "0x13",
++ "0x14", "0x15", "0x16", "0x17",
++ "0x18", "0x19", "0x1a", "0x1b",
+ "0x1c", "0x1d", "0x1e", "0x1f"
+ };
+
+diff --git a/arch/mips/kernel/setup.c b/arch/mips/kernel/setup.c
+index 10bef8f78e7c..573509e0f2d4 100644
+--- a/arch/mips/kernel/setup.c
++++ b/arch/mips/kernel/setup.c
+@@ -702,7 +702,17 @@ static void __init arch_mem_init(char **cmdline_p)
+ memblock_reserve(crashk_res.start, resource_size(&crashk_res));
+ #endif
+ device_tree_init();
++
++ /*
++ * In order to reduce the possibility of kernel panic when failed to
++ * get IO TLB memory under CONFIG_SWIOTLB, it is better to allocate
++ * low memory as small as possible before plat_swiotlb_setup(), so
++ * make sparse_init() using top-down allocation.
++ */
++ memblock_set_bottom_up(false);
+ sparse_init();
++ memblock_set_bottom_up(true);
++
+ plat_swiotlb_setup();
+
+ dma_contiguous_reserve(PFN_PHYS(max_low_pfn));
+diff --git a/arch/mips/kernel/time.c b/arch/mips/kernel/time.c
+index 37e9413a393d..caa01457dce6 100644
+--- a/arch/mips/kernel/time.c
++++ b/arch/mips/kernel/time.c
+@@ -18,12 +18,82 @@
+ #include <linux/smp.h>
+ #include <linux/spinlock.h>
+ #include <linux/export.h>
++#include <linux/cpufreq.h>
++#include <linux/delay.h>
+
+ #include <asm/cpu-features.h>
+ #include <asm/cpu-type.h>
+ #include <asm/div64.h>
+ #include <asm/time.h>
+
++#ifdef CONFIG_CPU_FREQ
++
++static DEFINE_PER_CPU(unsigned long, pcp_lpj_ref);
++static DEFINE_PER_CPU(unsigned long, pcp_lpj_ref_freq);
++static unsigned long glb_lpj_ref;
++static unsigned long glb_lpj_ref_freq;
++
++static int cpufreq_callback(struct notifier_block *nb,
++ unsigned long val, void *data)
++{
++ struct cpufreq_freqs *freq = data;
++ struct cpumask *cpus = freq->policy->cpus;
++ unsigned long lpj;
++ int cpu;
++
++ /*
++ * Skip lpj numbers adjustment if the CPU-freq transition is safe for
++ * the loops delay. (Is this possible?)
++ */
++ if (freq->flags & CPUFREQ_CONST_LOOPS)
++ return NOTIFY_OK;
++
++ /* Save the initial values of the lpjes for future scaling. */
++ if (!glb_lpj_ref) {
++ glb_lpj_ref = boot_cpu_data.udelay_val;
++ glb_lpj_ref_freq = freq->old;
++
++ for_each_online_cpu(cpu) {
++ per_cpu(pcp_lpj_ref, cpu) =
++ cpu_data[cpu].udelay_val;
++ per_cpu(pcp_lpj_ref_freq, cpu) = freq->old;
++ }
++ }
++
++ /*
++ * Adjust global lpj variable and per-CPU udelay_val number in
++ * accordance with the new CPU frequency.
++ */
++ if ((val == CPUFREQ_PRECHANGE && freq->old < freq->new) ||
++ (val == CPUFREQ_POSTCHANGE && freq->old > freq->new)) {
++ loops_per_jiffy = cpufreq_scale(glb_lpj_ref,
++ glb_lpj_ref_freq,
++ freq->new);
++
++ for_each_cpu(cpu, cpus) {
++ lpj = cpufreq_scale(per_cpu(pcp_lpj_ref, cpu),
++ per_cpu(pcp_lpj_ref_freq, cpu),
++ freq->new);
++ cpu_data[cpu].udelay_val = (unsigned int)lpj;
++ }
++ }
++
++ return NOTIFY_OK;
++}
++
++static struct notifier_block cpufreq_notifier = {
++ .notifier_call = cpufreq_callback,
++};
++
++static int __init register_cpufreq_notifier(void)
++{
++ return cpufreq_register_notifier(&cpufreq_notifier,
++ CPUFREQ_TRANSITION_NOTIFIER);
++}
++core_initcall(register_cpufreq_notifier);
++
++#endif /* CONFIG_CPU_FREQ */
++
+ /*
+ * forward reference
+ */
+diff --git a/arch/mips/kernel/vmlinux.lds.S b/arch/mips/kernel/vmlinux.lds.S
+index a5f00ec73ea6..f185a85a27c1 100644
+--- a/arch/mips/kernel/vmlinux.lds.S
++++ b/arch/mips/kernel/vmlinux.lds.S
+@@ -55,7 +55,7 @@ SECTIONS
+ /* . = 0xa800000000300000; */
+ . = 0xffffffff80300000;
+ #endif
+- . = VMLINUX_LOAD_ADDRESS;
++ . = LINKER_LOAD_ADDRESS;
+ /* read-only */
+ _text = .; /* Text and read-only data */
+ .text : {
+diff --git a/arch/mips/loongson2ef/common/init.c b/arch/mips/loongson2ef/common/init.c
+index 45512178be77..ce3f02f75e2a 100644
+--- a/arch/mips/loongson2ef/common/init.c
++++ b/arch/mips/loongson2ef/common/init.c
+@@ -19,10 +19,10 @@ unsigned long __maybe_unused _loongson_addrwincfg_base;
+ static void __init mips_nmi_setup(void)
+ {
+ void *base;
+- extern char except_vec_nmi;
++ extern char except_vec_nmi[];
+
+ base = (void *)(CAC_BASE + 0x380);
+- memcpy(base, &except_vec_nmi, 0x80);
++ memcpy(base, except_vec_nmi, 0x80);
+ flush_icache_range((unsigned long)base, (unsigned long)base + 0x80);
+ }
+
+diff --git a/arch/mips/loongson64/init.c b/arch/mips/loongson64/init.c
+index da38944471f4..86c5e93258ce 100644
+--- a/arch/mips/loongson64/init.c
++++ b/arch/mips/loongson64/init.c
+@@ -17,10 +17,10 @@
+ static void __init mips_nmi_setup(void)
+ {
+ void *base;
+- extern char except_vec_nmi;
++ extern char except_vec_nmi[];
+
+ base = (void *)(CAC_BASE + 0x380);
+- memcpy(base, &except_vec_nmi, 0x80);
++ memcpy(base, except_vec_nmi, 0x80);
+ flush_icache_range((unsigned long)base, (unsigned long)base + 0x80);
+ }
+
+diff --git a/arch/mips/mm/dma-noncoherent.c b/arch/mips/mm/dma-noncoherent.c
+index fcea92d95d86..563c2c0d0c81 100644
+--- a/arch/mips/mm/dma-noncoherent.c
++++ b/arch/mips/mm/dma-noncoherent.c
+@@ -33,6 +33,7 @@ static inline bool cpu_needs_post_dma_flush(void)
+ case CPU_R10000:
+ case CPU_R12000:
+ case CPU_BMIPS5000:
++ case CPU_LOONGSON2EF:
+ return true;
+ default:
+ /*
+diff --git a/arch/mips/mti-malta/malta-init.c b/arch/mips/mti-malta/malta-init.c
+index ff2c1d809538..893af377aacc 100644
+--- a/arch/mips/mti-malta/malta-init.c
++++ b/arch/mips/mti-malta/malta-init.c
+@@ -90,24 +90,24 @@ static void __init console_config(void)
+ static void __init mips_nmi_setup(void)
+ {
+ void *base;
+- extern char except_vec_nmi;
++ extern char except_vec_nmi[];
+
+ base = cpu_has_veic ?
+ (void *)(CAC_BASE + 0xa80) :
+ (void *)(CAC_BASE + 0x380);
+- memcpy(base, &except_vec_nmi, 0x80);
++ memcpy(base, except_vec_nmi, 0x80);
+ flush_icache_range((unsigned long)base, (unsigned long)base + 0x80);
+ }
+
+ static void __init mips_ejtag_setup(void)
+ {
+ void *base;
+- extern char except_vec_ejtag_debug;
++ extern char except_vec_ejtag_debug[];
+
+ base = cpu_has_veic ?
+ (void *)(CAC_BASE + 0xa00) :
+ (void *)(CAC_BASE + 0x300);
+- memcpy(base, &except_vec_ejtag_debug, 0x80);
++ memcpy(base, except_vec_ejtag_debug, 0x80);
+ flush_icache_range((unsigned long)base, (unsigned long)base + 0x80);
+ }
+
+diff --git a/arch/mips/pistachio/init.c b/arch/mips/pistachio/init.c
+index a09a5da38e6b..558995ed6fe8 100644
+--- a/arch/mips/pistachio/init.c
++++ b/arch/mips/pistachio/init.c
+@@ -83,12 +83,12 @@ phys_addr_t mips_cdmm_phys_base(void)
+ static void __init mips_nmi_setup(void)
+ {
+ void *base;
+- extern char except_vec_nmi;
++ extern char except_vec_nmi[];
+
+ base = cpu_has_veic ?
+ (void *)(CAC_BASE + 0xa80) :
+ (void *)(CAC_BASE + 0x380);
+- memcpy(base, &except_vec_nmi, 0x80);
++ memcpy(base, except_vec_nmi, 0x80);
+ flush_icache_range((unsigned long)base,
+ (unsigned long)base + 0x80);
+ }
+@@ -96,12 +96,12 @@ static void __init mips_nmi_setup(void)
+ static void __init mips_ejtag_setup(void)
+ {
+ void *base;
+- extern char except_vec_ejtag_debug;
++ extern char except_vec_ejtag_debug[];
+
+ base = cpu_has_veic ?
+ (void *)(CAC_BASE + 0xa00) :
+ (void *)(CAC_BASE + 0x300);
+- memcpy(base, &except_vec_ejtag_debug, 0x80);
++ memcpy(base, except_vec_ejtag_debug, 0x80);
+ flush_icache_range((unsigned long)base,
+ (unsigned long)base + 0x80);
+ }
+diff --git a/arch/mips/tools/elf-entry.c b/arch/mips/tools/elf-entry.c
+index adde79ce7fc0..dbd14ff05b4c 100644
+--- a/arch/mips/tools/elf-entry.c
++++ b/arch/mips/tools/elf-entry.c
+@@ -51,11 +51,14 @@ int main(int argc, const char *argv[])
+ nread = fread(&hdr, 1, sizeof(hdr), file);
+ if (nread != sizeof(hdr)) {
+ perror("Unable to read input file");
++ fclose(file);
+ return EXIT_FAILURE;
+ }
+
+- if (memcmp(hdr.ehdr32.e_ident, ELFMAG, SELFMAG))
++ if (memcmp(hdr.ehdr32.e_ident, ELFMAG, SELFMAG)) {
++ fclose(file);
+ die("Input is not an ELF\n");
++ }
+
+ switch (hdr.ehdr32.e_ident[EI_CLASS]) {
+ case ELFCLASS32:
+@@ -67,6 +70,7 @@ int main(int argc, const char *argv[])
+ entry = be32toh(hdr.ehdr32.e_entry);
+ break;
+ default:
++ fclose(file);
+ die("Invalid ELF encoding\n");
+ }
+
+@@ -83,14 +87,17 @@ int main(int argc, const char *argv[])
+ entry = be64toh(hdr.ehdr64.e_entry);
+ break;
+ default:
++ fclose(file);
+ die("Invalid ELF encoding\n");
+ }
+ break;
+
+ default:
++ fclose(file);
+ die("Invalid ELF class\n");
+ }
+
+ printf("0x%016" PRIx64 "\n", entry);
++ fclose(file);
+ return EXIT_SUCCESS;
+ }
+diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
+index b29d7cb38368..62aca9efbbbe 100644
+--- a/arch/powerpc/Kconfig
++++ b/arch/powerpc/Kconfig
+@@ -170,8 +170,8 @@ config PPC
+ select HAVE_ARCH_AUDITSYSCALL
+ select HAVE_ARCH_HUGE_VMAP if PPC_BOOK3S_64 && PPC_RADIX_MMU
+ select HAVE_ARCH_JUMP_LABEL
+- select HAVE_ARCH_KASAN if PPC32
+- select HAVE_ARCH_KASAN_VMALLOC if PPC32
++ select HAVE_ARCH_KASAN if PPC32 && PPC_PAGE_SHIFT <= 14
++ select HAVE_ARCH_KASAN_VMALLOC if PPC32 && PPC_PAGE_SHIFT <= 14
+ select HAVE_ARCH_KGDB
+ select HAVE_ARCH_MMAP_RND_BITS
+ select HAVE_ARCH_MMAP_RND_COMPAT_BITS if COMPAT
+diff --git a/arch/powerpc/include/asm/book3s/32/kup.h b/arch/powerpc/include/asm/book3s/32/kup.h
+index db0a1c281587..a41cfc7cc669 100644
+--- a/arch/powerpc/include/asm/book3s/32/kup.h
++++ b/arch/powerpc/include/asm/book3s/32/kup.h
+@@ -2,6 +2,7 @@
+ #ifndef _ASM_POWERPC_BOOK3S_32_KUP_H
+ #define _ASM_POWERPC_BOOK3S_32_KUP_H
+
++#include <asm/bug.h>
+ #include <asm/book3s/32/mmu-hash.h>
+
+ #ifdef __ASSEMBLY__
+@@ -75,7 +76,7 @@
+
+ .macro kuap_check current, gpr
+ #ifdef CONFIG_PPC_KUAP_DEBUG
+- lwz \gpr, KUAP(thread)
++ lwz \gpr, THREAD + KUAP(\current)
+ 999: twnei \gpr, 0
+ EMIT_BUG_ENTRY 999b, __FILE__, __LINE__, (BUGFLAG_WARNING | BUGFLAG_ONCE)
+ #endif
+diff --git a/arch/powerpc/include/asm/fadump-internal.h b/arch/powerpc/include/asm/fadump-internal.h
+index c814a2b55389..8d61c8f3fec4 100644
+--- a/arch/powerpc/include/asm/fadump-internal.h
++++ b/arch/powerpc/include/asm/fadump-internal.h
+@@ -64,12 +64,14 @@ struct fadump_memory_range {
+ };
+
+ /* fadump memory ranges info */
++#define RNG_NAME_SZ 16
+ struct fadump_mrange_info {
+- char name[16];
++ char name[RNG_NAME_SZ];
+ struct fadump_memory_range *mem_ranges;
+ u32 mem_ranges_sz;
+ u32 mem_range_cnt;
+ u32 max_mem_ranges;
++ bool is_static;
+ };
+
+ /* Platform specific callback functions */
+diff --git a/arch/powerpc/include/asm/kasan.h b/arch/powerpc/include/asm/kasan.h
+index fbff9ff9032e..4769bbf7173a 100644
+--- a/arch/powerpc/include/asm/kasan.h
++++ b/arch/powerpc/include/asm/kasan.h
+@@ -23,18 +23,14 @@
+
+ #define KASAN_SHADOW_OFFSET ASM_CONST(CONFIG_KASAN_SHADOW_OFFSET)
+
+-#define KASAN_SHADOW_END 0UL
+-
+-#define KASAN_SHADOW_SIZE (KASAN_SHADOW_END - KASAN_SHADOW_START)
++#define KASAN_SHADOW_END (-(-KASAN_SHADOW_START >> KASAN_SHADOW_SCALE_SHIFT))
+
+ #ifdef CONFIG_KASAN
+ void kasan_early_init(void);
+-void kasan_mmu_init(void);
+ void kasan_init(void);
+ void kasan_late_init(void);
+ #else
+ static inline void kasan_init(void) { }
+-static inline void kasan_mmu_init(void) { }
+ static inline void kasan_late_init(void) { }
+ #endif
+
+diff --git a/arch/powerpc/kernel/dt_cpu_ftrs.c b/arch/powerpc/kernel/dt_cpu_ftrs.c
+index 36bc0d5c4f3a..fca4d7ff22b9 100644
+--- a/arch/powerpc/kernel/dt_cpu_ftrs.c
++++ b/arch/powerpc/kernel/dt_cpu_ftrs.c
+@@ -346,6 +346,14 @@ static int __init feat_enable_dscr(struct dt_cpu_feature *f)
+ {
+ u64 lpcr;
+
++ /*
++ * Linux relies on FSCR[DSCR] being clear, so that we can take the
++ * facility unavailable interrupt and track the task's usage of DSCR.
++ * See facility_unavailable_exception().
++ * Clear the bit here so that feat_enable() doesn't set it.
++ */
++ f->fscr_bit_nr = -1;
++
+ feat_enable(f);
+
+ lpcr = mfspr(SPRN_LPCR);
+diff --git a/arch/powerpc/kernel/fadump.c b/arch/powerpc/kernel/fadump.c
+index 59e60a9a9f5c..78ab9a6ee6ac 100644
+--- a/arch/powerpc/kernel/fadump.c
++++ b/arch/powerpc/kernel/fadump.c
+@@ -40,8 +40,17 @@ struct kobject *fadump_kobj;
+
+ #ifndef CONFIG_PRESERVE_FA_DUMP
+ static DEFINE_MUTEX(fadump_mutex);
+-struct fadump_mrange_info crash_mrange_info = { "crash", NULL, 0, 0, 0 };
+-struct fadump_mrange_info reserved_mrange_info = { "reserved", NULL, 0, 0, 0 };
++struct fadump_mrange_info crash_mrange_info = { "crash", NULL, 0, 0, 0, false };
++
++#define RESERVED_RNGS_SZ 16384 /* 16K - 128 entries */
++#define RESERVED_RNGS_CNT (RESERVED_RNGS_SZ / \
++ sizeof(struct fadump_memory_range))
++static struct fadump_memory_range rngs[RESERVED_RNGS_CNT];
++struct fadump_mrange_info reserved_mrange_info = { "reserved", rngs,
++ RESERVED_RNGS_SZ, 0,
++ RESERVED_RNGS_CNT, true };
++
++static void __init early_init_dt_scan_reserved_ranges(unsigned long node);
+
+ #ifdef CONFIG_CMA
+ static struct cma *fadump_cma;
+@@ -110,6 +119,11 @@ static int __init fadump_cma_init(void) { return 1; }
+ int __init early_init_dt_scan_fw_dump(unsigned long node, const char *uname,
+ int depth, void *data)
+ {
++ if (depth == 0) {
++ early_init_dt_scan_reserved_ranges(node);
++ return 0;
++ }
++
+ if (depth != 1)
+ return 0;
+
+@@ -431,10 +445,72 @@ static int __init fadump_get_boot_mem_regions(void)
+ return ret;
+ }
+
++/*
++ * Returns true, if the given range overlaps with reserved memory ranges
++ * starting at idx. Also, updates idx to index of overlapping memory range
++ * with the given memory range.
++ * False, otherwise.
++ */
++static bool overlaps_reserved_ranges(u64 base, u64 end, int *idx)
++{
++ bool ret = false;
++ int i;
++
++ for (i = *idx; i < reserved_mrange_info.mem_range_cnt; i++) {
++ u64 rbase = reserved_mrange_info.mem_ranges[i].base;
++ u64 rend = rbase + reserved_mrange_info.mem_ranges[i].size;
++
++ if (end <= rbase)
++ break;
++
++ if ((end > rbase) && (base < rend)) {
++ *idx = i;
++ ret = true;
++ break;
++ }
++ }
++
++ return ret;
++}
++
++/*
++ * Locate a suitable memory area to reserve memory for FADump. While at it,
++ * lookup reserved-ranges & avoid overlap with them, as they are used by F/W.
++ */
++static u64 __init fadump_locate_reserve_mem(u64 base, u64 size)
++{
++ struct fadump_memory_range *mrngs;
++ phys_addr_t mstart, mend;
++ int idx = 0;
++ u64 i, ret = 0;
++
++ mrngs = reserved_mrange_info.mem_ranges;
++ for_each_free_mem_range(i, NUMA_NO_NODE, MEMBLOCK_NONE,
++ &mstart, &mend, NULL) {
++ pr_debug("%llu) mstart: %llx, mend: %llx, base: %llx\n",
++ i, mstart, mend, base);
++
++ if (mstart > base)
++ base = PAGE_ALIGN(mstart);
++
++ while ((mend > base) && ((mend - base) >= size)) {
++ if (!overlaps_reserved_ranges(base, base+size, &idx)) {
++ ret = base;
++ goto out;
++ }
++
++ base = mrngs[idx].base + mrngs[idx].size;
++ base = PAGE_ALIGN(base);
++ }
++ }
++
++out:
++ return ret;
++}
++
+ int __init fadump_reserve_mem(void)
+ {
+- u64 base, size, mem_boundary, bootmem_min, align = PAGE_SIZE;
+- bool is_memblock_bottom_up = memblock_bottom_up();
++ u64 base, size, mem_boundary, bootmem_min;
+ int ret = 1;
+
+ if (!fw_dump.fadump_enabled)
+@@ -455,9 +531,9 @@ int __init fadump_reserve_mem(void)
+ PAGE_ALIGN(fadump_calculate_reserve_size());
+ #ifdef CONFIG_CMA
+ if (!fw_dump.nocma) {
+- align = FADUMP_CMA_ALIGNMENT;
+ fw_dump.boot_memory_size =
+- ALIGN(fw_dump.boot_memory_size, align);
++ ALIGN(fw_dump.boot_memory_size,
++ FADUMP_CMA_ALIGNMENT);
+ }
+ #endif
+
+@@ -525,13 +601,9 @@ int __init fadump_reserve_mem(void)
+ * Reserve memory at an offset closer to bottom of the RAM to
+ * minimize the impact of memory hot-remove operation.
+ */
+- memblock_set_bottom_up(true);
+- base = memblock_find_in_range(base, mem_boundary, size, align);
++ base = fadump_locate_reserve_mem(base, size);
+
+- /* Restore the previous allocation mode */
+- memblock_set_bottom_up(is_memblock_bottom_up);
+-
+- if (!base) {
++ if (!base || (base + size > mem_boundary)) {
+ pr_err("Failed to find memory chunk for reservation!\n");
+ goto error_out;
+ }
+@@ -728,10 +800,14 @@ void fadump_free_cpu_notes_buf(void)
+
+ static void fadump_free_mem_ranges(struct fadump_mrange_info *mrange_info)
+ {
++ if (mrange_info->is_static) {
++ mrange_info->mem_range_cnt = 0;
++ return;
++ }
++
+ kfree(mrange_info->mem_ranges);
+- mrange_info->mem_ranges = NULL;
+- mrange_info->mem_ranges_sz = 0;
+- mrange_info->max_mem_ranges = 0;
++ memset((void *)((u64)mrange_info + RNG_NAME_SZ), 0,
++ (sizeof(struct fadump_mrange_info) - RNG_NAME_SZ));
+ }
+
+ /*
+@@ -788,6 +864,12 @@ static inline int fadump_add_mem_range(struct fadump_mrange_info *mrange_info,
+ if (mrange_info->mem_range_cnt == mrange_info->max_mem_ranges) {
+ int ret;
+
++ if (mrange_info->is_static) {
++ pr_err("Reached array size limit for %s memory ranges\n",
++ mrange_info->name);
++ return -ENOSPC;
++ }
++
+ ret = fadump_alloc_mem_ranges(mrange_info);
+ if (ret)
+ return ret;
+@@ -1204,20 +1286,19 @@ static void sort_and_merge_mem_ranges(struct fadump_mrange_info *mrange_info)
+ * Scan reserved-ranges to consider them while reserving/releasing
+ * memory for FADump.
+ */
+-static inline int fadump_scan_reserved_mem_ranges(void)
++static void __init early_init_dt_scan_reserved_ranges(unsigned long node)
+ {
+- struct device_node *root;
+ const __be32 *prop;
+ int len, ret = -1;
+ unsigned long i;
+
+- root = of_find_node_by_path("/");
+- if (!root)
+- return ret;
++ /* reserved-ranges already scanned */
++ if (reserved_mrange_info.mem_range_cnt != 0)
++ return;
+
+- prop = of_get_property(root, "reserved-ranges", &len);
++ prop = of_get_flat_dt_prop(node, "reserved-ranges", &len);
+ if (!prop)
+- return ret;
++ return;
+
+ /*
+ * Each reserved range is an (address,size) pair, 2 cells each,
+@@ -1239,7 +1320,8 @@ static inline int fadump_scan_reserved_mem_ranges(void)
+ }
+ }
+
+- return ret;
++ /* Compact reserved ranges */
++ sort_and_merge_mem_ranges(&reserved_mrange_info);
+ }
+
+ /*
+@@ -1253,32 +1335,21 @@ static void fadump_release_memory(u64 begin, u64 end)
+ u64 ra_start, ra_end, tstart;
+ int i, ret;
+
+- fadump_scan_reserved_mem_ranges();
+-
+ ra_start = fw_dump.reserve_dump_area_start;
+ ra_end = ra_start + fw_dump.reserve_dump_area_size;
+
+ /*
+- * Add reserved dump area to reserved ranges list
+- * and exclude all these ranges while releasing memory.
++ * If reserved ranges array limit is hit, overwrite the last reserved
++ * memory range with reserved dump area to ensure it is excluded from
++ * the memory being released (reused for next FADump registration).
+ */
+- ret = fadump_add_mem_range(&reserved_mrange_info, ra_start, ra_end);
+- if (ret != 0) {
+- /*
+- * Not enough memory to setup reserved ranges but the system is
+- * running shortage of memory. So, release all the memory except
+- * Reserved dump area (reused for next fadump registration).
+- */
+- if (begin < ra_end && end > ra_start) {
+- if (begin < ra_start)
+- fadump_release_reserved_area(begin, ra_start);
+- if (end > ra_end)
+- fadump_release_reserved_area(ra_end, end);
+- } else
+- fadump_release_reserved_area(begin, end);
++ if (reserved_mrange_info.mem_range_cnt ==
++ reserved_mrange_info.max_mem_ranges)
++ reserved_mrange_info.mem_range_cnt--;
+
++ ret = fadump_add_mem_range(&reserved_mrange_info, ra_start, ra_end);
++ if (ret != 0)
+ return;
+- }
+
+ /* Get the reserved ranges list in order first. */
+ sort_and_merge_mem_ranges(&reserved_mrange_info);
+diff --git a/arch/powerpc/kernel/prom.c b/arch/powerpc/kernel/prom.c
+index 6620f37abe73..e13e96e665e0 100644
+--- a/arch/powerpc/kernel/prom.c
++++ b/arch/powerpc/kernel/prom.c
+@@ -685,6 +685,23 @@ static void __init tm_init(void)
+ static void tm_init(void) { }
+ #endif /* CONFIG_PPC_TRANSACTIONAL_MEM */
+
++#ifdef CONFIG_PPC64
++static void __init save_fscr_to_task(void)
++{
++ /*
++ * Ensure the init_task (pid 0, aka swapper) uses the value of FSCR we
++ * have configured via the device tree features or via __init_FSCR().
++ * That value will then be propagated to pid 1 (init) and all future
++ * processes.
++ */
++ if (early_cpu_has_feature(CPU_FTR_ARCH_207S))
++ init_task.thread.fscr = mfspr(SPRN_FSCR);
++}
++#else
++static inline void save_fscr_to_task(void) {};
++#endif
++
++
+ void __init early_init_devtree(void *params)
+ {
+ phys_addr_t limit;
+@@ -773,6 +790,8 @@ void __init early_init_devtree(void *params)
+ BUG();
+ }
+
++ save_fscr_to_task();
++
+ #if defined(CONFIG_SMP) && defined(CONFIG_PPC64)
+ /* We'll later wait for secondaries to check in; there are
+ * NCPUS-1 non-boot CPUs :-)
+diff --git a/arch/powerpc/mm/init_32.c b/arch/powerpc/mm/init_32.c
+index 872df48ae41b..a6991ef8727d 100644
+--- a/arch/powerpc/mm/init_32.c
++++ b/arch/powerpc/mm/init_32.c
+@@ -170,8 +170,6 @@ void __init MMU_init(void)
+ btext_unmap();
+ #endif
+
+- kasan_mmu_init();
+-
+ setup_kup();
+
+ /* Shortly after that, the entire linear mapping will be available */
+diff --git a/arch/powerpc/mm/kasan/kasan_init_32.c b/arch/powerpc/mm/kasan/kasan_init_32.c
+index cbcad369fcb2..59e49c0e8154 100644
+--- a/arch/powerpc/mm/kasan/kasan_init_32.c
++++ b/arch/powerpc/mm/kasan/kasan_init_32.c
+@@ -132,7 +132,7 @@ static void __init kasan_unmap_early_shadow_vmalloc(void)
+ flush_tlb_kernel_range(k_start, k_end);
+ }
+
+-void __init kasan_mmu_init(void)
++static void __init kasan_mmu_init(void)
+ {
+ int ret;
+ struct memblock_region *reg;
+@@ -160,6 +160,8 @@ void __init kasan_mmu_init(void)
+
+ void __init kasan_init(void)
+ {
++ kasan_mmu_init();
++
+ kasan_remap_early_shadow_ro();
+
+ clear_page(kasan_early_shadow_page);
+diff --git a/arch/powerpc/mm/pgtable_32.c b/arch/powerpc/mm/pgtable_32.c
+index f62de06e3d07..033a53c77ef1 100644
+--- a/arch/powerpc/mm/pgtable_32.c
++++ b/arch/powerpc/mm/pgtable_32.c
+@@ -169,7 +169,7 @@ void mark_initmem_nx(void)
+ unsigned long numpages = PFN_UP((unsigned long)_einittext) -
+ PFN_DOWN((unsigned long)_sinittext);
+
+- if (v_block_mapped((unsigned long)_stext + 1))
++ if (v_block_mapped((unsigned long)_sinittext))
+ mmu_mark_initmem_nx();
+ else
+ change_page_attr(page, numpages, PAGE_KERNEL);
+@@ -181,7 +181,7 @@ void mark_rodata_ro(void)
+ struct page *page;
+ unsigned long numpages;
+
+- if (v_block_mapped((unsigned long)_sinittext)) {
++ if (v_block_mapped((unsigned long)_stext + 1)) {
+ mmu_mark_rodata_ro();
+ ptdump_check_wx();
+ return;
+diff --git a/arch/powerpc/platforms/cell/spufs/file.c b/arch/powerpc/platforms/cell/spufs/file.c
+index c0f950a3f4e1..f4a4dfb191e7 100644
+--- a/arch/powerpc/platforms/cell/spufs/file.c
++++ b/arch/powerpc/platforms/cell/spufs/file.c
+@@ -1978,8 +1978,9 @@ static ssize_t __spufs_mbox_info_read(struct spu_context *ctx,
+ static ssize_t spufs_mbox_info_read(struct file *file, char __user *buf,
+ size_t len, loff_t *pos)
+ {
+- int ret;
+ struct spu_context *ctx = file->private_data;
++ u32 stat, data;
++ int ret;
+
+ if (!access_ok(buf, len))
+ return -EFAULT;
+@@ -1988,11 +1989,16 @@ static ssize_t spufs_mbox_info_read(struct file *file, char __user *buf,
+ if (ret)
+ return ret;
+ spin_lock(&ctx->csa.register_lock);
+- ret = __spufs_mbox_info_read(ctx, buf, len, pos);
++ stat = ctx->csa.prob.mb_stat_R;
++ data = ctx->csa.prob.pu_mb_R;
+ spin_unlock(&ctx->csa.register_lock);
+ spu_release_saved(ctx);
+
+- return ret;
++ /* EOF if there's no entry in the mbox */
++ if (!(stat & 0x0000ff))
++ return 0;
++
++ return simple_read_from_buffer(buf, len, pos, &data, sizeof(data));
+ }
+
+ static const struct file_operations spufs_mbox_info_fops = {
+@@ -2019,6 +2025,7 @@ static ssize_t spufs_ibox_info_read(struct file *file, char __user *buf,
+ size_t len, loff_t *pos)
+ {
+ struct spu_context *ctx = file->private_data;
++ u32 stat, data;
+ int ret;
+
+ if (!access_ok(buf, len))
+@@ -2028,11 +2035,16 @@ static ssize_t spufs_ibox_info_read(struct file *file, char __user *buf,
+ if (ret)
+ return ret;
+ spin_lock(&ctx->csa.register_lock);
+- ret = __spufs_ibox_info_read(ctx, buf, len, pos);
++ stat = ctx->csa.prob.mb_stat_R;
++ data = ctx->csa.priv2.puint_mb_R;
+ spin_unlock(&ctx->csa.register_lock);
+ spu_release_saved(ctx);
+
+- return ret;
++ /* EOF if there's no entry in the ibox */
++ if (!(stat & 0xff0000))
++ return 0;
++
++ return simple_read_from_buffer(buf, len, pos, &data, sizeof(data));
+ }
+
+ static const struct file_operations spufs_ibox_info_fops = {
+@@ -2041,6 +2053,11 @@ static const struct file_operations spufs_ibox_info_fops = {
+ .llseek = generic_file_llseek,
+ };
+
++static size_t spufs_wbox_info_cnt(struct spu_context *ctx)
++{
++ return (4 - ((ctx->csa.prob.mb_stat_R & 0x00ff00) >> 8)) * sizeof(u32);
++}
++
+ static ssize_t __spufs_wbox_info_read(struct spu_context *ctx,
+ char __user *buf, size_t len, loff_t *pos)
+ {
+@@ -2049,7 +2066,7 @@ static ssize_t __spufs_wbox_info_read(struct spu_context *ctx,
+ u32 wbox_stat;
+
+ wbox_stat = ctx->csa.prob.mb_stat_R;
+- cnt = 4 - ((wbox_stat & 0x00ff00) >> 8);
++ cnt = spufs_wbox_info_cnt(ctx);
+ for (i = 0; i < cnt; i++) {
+ data[i] = ctx->csa.spu_mailbox_data[i];
+ }
+@@ -2062,7 +2079,8 @@ static ssize_t spufs_wbox_info_read(struct file *file, char __user *buf,
+ size_t len, loff_t *pos)
+ {
+ struct spu_context *ctx = file->private_data;
+- int ret;
++ u32 data[ARRAY_SIZE(ctx->csa.spu_mailbox_data)];
++ int ret, count;
+
+ if (!access_ok(buf, len))
+ return -EFAULT;
+@@ -2071,11 +2089,13 @@ static ssize_t spufs_wbox_info_read(struct file *file, char __user *buf,
+ if (ret)
+ return ret;
+ spin_lock(&ctx->csa.register_lock);
+- ret = __spufs_wbox_info_read(ctx, buf, len, pos);
++ count = spufs_wbox_info_cnt(ctx);
++ memcpy(&data, &ctx->csa.spu_mailbox_data, sizeof(data));
+ spin_unlock(&ctx->csa.register_lock);
+ spu_release_saved(ctx);
+
+- return ret;
++ return simple_read_from_buffer(buf, len, pos, &data,
++ count * sizeof(u32));
+ }
+
+ static const struct file_operations spufs_wbox_info_fops = {
+@@ -2084,27 +2104,33 @@ static const struct file_operations spufs_wbox_info_fops = {
+ .llseek = generic_file_llseek,
+ };
+
+-static ssize_t __spufs_dma_info_read(struct spu_context *ctx,
+- char __user *buf, size_t len, loff_t *pos)
++static void spufs_get_dma_info(struct spu_context *ctx,
++ struct spu_dma_info *info)
+ {
+- struct spu_dma_info info;
+- struct mfc_cq_sr *qp, *spuqp;
+ int i;
+
+- info.dma_info_type = ctx->csa.priv2.spu_tag_status_query_RW;
+- info.dma_info_mask = ctx->csa.lscsa->tag_mask.slot[0];
+- info.dma_info_status = ctx->csa.spu_chnldata_RW[24];
+- info.dma_info_stall_and_notify = ctx->csa.spu_chnldata_RW[25];
+- info.dma_info_atomic_command_status = ctx->csa.spu_chnldata_RW[27];
++ info->dma_info_type = ctx->csa.priv2.spu_tag_status_query_RW;
++ info->dma_info_mask = ctx->csa.lscsa->tag_mask.slot[0];
++ info->dma_info_status = ctx->csa.spu_chnldata_RW[24];
++ info->dma_info_stall_and_notify = ctx->csa.spu_chnldata_RW[25];
++ info->dma_info_atomic_command_status = ctx->csa.spu_chnldata_RW[27];
+ for (i = 0; i < 16; i++) {
+- qp = &info.dma_info_command_data[i];
+- spuqp = &ctx->csa.priv2.spuq[i];
++ struct mfc_cq_sr *qp = &info->dma_info_command_data[i];
++ struct mfc_cq_sr *spuqp = &ctx->csa.priv2.spuq[i];
+
+ qp->mfc_cq_data0_RW = spuqp->mfc_cq_data0_RW;
+ qp->mfc_cq_data1_RW = spuqp->mfc_cq_data1_RW;
+ qp->mfc_cq_data2_RW = spuqp->mfc_cq_data2_RW;
+ qp->mfc_cq_data3_RW = spuqp->mfc_cq_data3_RW;
+ }
++}
++
++static ssize_t __spufs_dma_info_read(struct spu_context *ctx,
++ char __user *buf, size_t len, loff_t *pos)
++{
++ struct spu_dma_info info;
++
++ spufs_get_dma_info(ctx, &info);
+
+ return simple_read_from_buffer(buf, len, pos, &info,
+ sizeof info);
+@@ -2114,6 +2140,7 @@ static ssize_t spufs_dma_info_read(struct file *file, char __user *buf,
+ size_t len, loff_t *pos)
+ {
+ struct spu_context *ctx = file->private_data;
++ struct spu_dma_info info;
+ int ret;
+
+ if (!access_ok(buf, len))
+@@ -2123,11 +2150,12 @@ static ssize_t spufs_dma_info_read(struct file *file, char __user *buf,
+ if (ret)
+ return ret;
+ spin_lock(&ctx->csa.register_lock);
+- ret = __spufs_dma_info_read(ctx, buf, len, pos);
++ spufs_get_dma_info(ctx, &info);
+ spin_unlock(&ctx->csa.register_lock);
+ spu_release_saved(ctx);
+
+- return ret;
++ return simple_read_from_buffer(buf, len, pos, &info,
++ sizeof(info));
+ }
+
+ static const struct file_operations spufs_dma_info_fops = {
+@@ -2136,13 +2164,31 @@ static const struct file_operations spufs_dma_info_fops = {
+ .llseek = no_llseek,
+ };
+
++static void spufs_get_proxydma_info(struct spu_context *ctx,
++ struct spu_proxydma_info *info)
++{
++ int i;
++
++ info->proxydma_info_type = ctx->csa.prob.dma_querytype_RW;
++ info->proxydma_info_mask = ctx->csa.prob.dma_querymask_RW;
++ info->proxydma_info_status = ctx->csa.prob.dma_tagstatus_R;
++
++ for (i = 0; i < 8; i++) {
++ struct mfc_cq_sr *qp = &info->proxydma_info_command_data[i];
++ struct mfc_cq_sr *puqp = &ctx->csa.priv2.puq[i];
++
++ qp->mfc_cq_data0_RW = puqp->mfc_cq_data0_RW;
++ qp->mfc_cq_data1_RW = puqp->mfc_cq_data1_RW;
++ qp->mfc_cq_data2_RW = puqp->mfc_cq_data2_RW;
++ qp->mfc_cq_data3_RW = puqp->mfc_cq_data3_RW;
++ }
++}
++
+ static ssize_t __spufs_proxydma_info_read(struct spu_context *ctx,
+ char __user *buf, size_t len, loff_t *pos)
+ {
+ struct spu_proxydma_info info;
+- struct mfc_cq_sr *qp, *puqp;
+ int ret = sizeof info;
+- int i;
+
+ if (len < ret)
+ return -EINVAL;
+@@ -2150,18 +2196,7 @@ static ssize_t __spufs_proxydma_info_read(struct spu_context *ctx,
+ if (!access_ok(buf, len))
+ return -EFAULT;
+
+- info.proxydma_info_type = ctx->csa.prob.dma_querytype_RW;
+- info.proxydma_info_mask = ctx->csa.prob.dma_querymask_RW;
+- info.proxydma_info_status = ctx->csa.prob.dma_tagstatus_R;
+- for (i = 0; i < 8; i++) {
+- qp = &info.proxydma_info_command_data[i];
+- puqp = &ctx->csa.priv2.puq[i];
+-
+- qp->mfc_cq_data0_RW = puqp->mfc_cq_data0_RW;
+- qp->mfc_cq_data1_RW = puqp->mfc_cq_data1_RW;
+- qp->mfc_cq_data2_RW = puqp->mfc_cq_data2_RW;
+- qp->mfc_cq_data3_RW = puqp->mfc_cq_data3_RW;
+- }
++ spufs_get_proxydma_info(ctx, &info);
+
+ return simple_read_from_buffer(buf, len, pos, &info,
+ sizeof info);
+@@ -2171,17 +2206,19 @@ static ssize_t spufs_proxydma_info_read(struct file *file, char __user *buf,
+ size_t len, loff_t *pos)
+ {
+ struct spu_context *ctx = file->private_data;
++ struct spu_proxydma_info info;
+ int ret;
+
+ ret = spu_acquire_saved(ctx);
+ if (ret)
+ return ret;
+ spin_lock(&ctx->csa.register_lock);
+- ret = __spufs_proxydma_info_read(ctx, buf, len, pos);
++ spufs_get_proxydma_info(ctx, &info);
+ spin_unlock(&ctx->csa.register_lock);
+ spu_release_saved(ctx);
+
+- return ret;
++ return simple_read_from_buffer(buf, len, pos, &info,
++ sizeof(info));
+ }
+
+ static const struct file_operations spufs_proxydma_info_fops = {
+diff --git a/arch/powerpc/platforms/powernv/smp.c b/arch/powerpc/platforms/powernv/smp.c
+index 13e251699346..b2ba3e95bda7 100644
+--- a/arch/powerpc/platforms/powernv/smp.c
++++ b/arch/powerpc/platforms/powernv/smp.c
+@@ -167,7 +167,6 @@ static void pnv_smp_cpu_kill_self(void)
+ /* Standard hot unplug procedure */
+
+ idle_task_exit();
+- current->active_mm = NULL; /* for sanity */
+ cpu = smp_processor_id();
+ DBG("CPU%d offline\n", cpu);
+ generic_set_cpu_dead(cpu);
+diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c
+index 736de6c8739f..fdc772f57edc 100644
+--- a/arch/riscv/mm/init.c
++++ b/arch/riscv/mm/init.c
+@@ -479,17 +479,6 @@ static void __init setup_vm_final(void)
+ csr_write(CSR_SATP, PFN_DOWN(__pa_symbol(swapper_pg_dir)) | SATP_MODE);
+ local_flush_tlb_all();
+ }
+-
+-void free_initmem(void)
+-{
+- unsigned long init_begin = (unsigned long)__init_begin;
+- unsigned long init_end = (unsigned long)__init_end;
+-
+- /* Make the region as non-execuatble. */
+- set_memory_nx(init_begin, (init_end - init_begin) >> PAGE_SHIFT);
+- free_initmem_default(POISON_FREE_INITMEM);
+-}
+-
+ #else
+ asmlinkage void __init setup_vm(uintptr_t dtb_pa)
+ {
+diff --git a/arch/riscv/net/bpf_jit_comp32.c b/arch/riscv/net/bpf_jit_comp32.c
+index 302934177760..11083d4d5f2d 100644
+--- a/arch/riscv/net/bpf_jit_comp32.c
++++ b/arch/riscv/net/bpf_jit_comp32.c
+@@ -770,12 +770,13 @@ static int emit_bpf_tail_call(int insn, struct rv_jit_context *ctx)
+ emit_bcc(BPF_JGE, lo(idx_reg), RV_REG_T1, off, ctx);
+
+ /*
+- * if ((temp_tcc = tcc - 1) < 0)
++ * temp_tcc = tcc - 1;
++ * if (tcc < 0)
+ * goto out;
+ */
+ emit(rv_addi(RV_REG_T1, RV_REG_TCC, -1), ctx);
+ off = (tc_ninsn - (ctx->ninsns - start_insn)) << 2;
+- emit_bcc(BPF_JSLT, RV_REG_T1, RV_REG_ZERO, off, ctx);
++ emit_bcc(BPF_JSLT, RV_REG_TCC, RV_REG_ZERO, off, ctx);
+
+ /*
+ * prog = array->ptrs[index];
+diff --git a/arch/s390/net/bpf_jit_comp.c b/arch/s390/net/bpf_jit_comp.c
+index 8d2134136290..0f37a1b635f8 100644
+--- a/arch/s390/net/bpf_jit_comp.c
++++ b/arch/s390/net/bpf_jit_comp.c
+@@ -594,7 +594,7 @@ static void bpf_jit_epilogue(struct bpf_jit *jit, u32 stack_depth)
+ * stack space for the large switch statement.
+ */
+ static noinline int bpf_jit_insn(struct bpf_jit *jit, struct bpf_prog *fp,
+- int i, bool extra_pass)
++ int i, bool extra_pass, u32 stack_depth)
+ {
+ struct bpf_insn *insn = &fp->insnsi[i];
+ u32 dst_reg = insn->dst_reg;
+@@ -1207,7 +1207,7 @@ static noinline int bpf_jit_insn(struct bpf_jit *jit, struct bpf_prog *fp,
+ */
+
+ if (jit->seen & SEEN_STACK)
+- off = STK_OFF_TCCNT + STK_OFF + fp->aux->stack_depth;
++ off = STK_OFF_TCCNT + STK_OFF + stack_depth;
+ else
+ off = STK_OFF_TCCNT;
+ /* lhi %w0,1 */
+@@ -1249,7 +1249,7 @@ static noinline int bpf_jit_insn(struct bpf_jit *jit, struct bpf_prog *fp,
+ /*
+ * Restore registers before calling function
+ */
+- save_restore_regs(jit, REGS_RESTORE, fp->aux->stack_depth);
++ save_restore_regs(jit, REGS_RESTORE, stack_depth);
+
+ /*
+ * goto *(prog->bpf_func + tail_call_start);
+@@ -1519,7 +1519,7 @@ static int bpf_set_addr(struct bpf_jit *jit, int i)
+ * Compile eBPF program into s390x code
+ */
+ static int bpf_jit_prog(struct bpf_jit *jit, struct bpf_prog *fp,
+- bool extra_pass)
++ bool extra_pass, u32 stack_depth)
+ {
+ int i, insn_count, lit32_size, lit64_size;
+
+@@ -1527,18 +1527,18 @@ static int bpf_jit_prog(struct bpf_jit *jit, struct bpf_prog *fp,
+ jit->lit64 = jit->lit64_start;
+ jit->prg = 0;
+
+- bpf_jit_prologue(jit, fp->aux->stack_depth);
++ bpf_jit_prologue(jit, stack_depth);
+ if (bpf_set_addr(jit, 0) < 0)
+ return -1;
+ for (i = 0; i < fp->len; i += insn_count) {
+- insn_count = bpf_jit_insn(jit, fp, i, extra_pass);
++ insn_count = bpf_jit_insn(jit, fp, i, extra_pass, stack_depth);
+ if (insn_count < 0)
+ return -1;
+ /* Next instruction address */
+ if (bpf_set_addr(jit, i + insn_count) < 0)
+ return -1;
+ }
+- bpf_jit_epilogue(jit, fp->aux->stack_depth);
++ bpf_jit_epilogue(jit, stack_depth);
+
+ lit32_size = jit->lit32 - jit->lit32_start;
+ lit64_size = jit->lit64 - jit->lit64_start;
+@@ -1569,6 +1569,7 @@ struct s390_jit_data {
+ */
+ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *fp)
+ {
++ u32 stack_depth = round_up(fp->aux->stack_depth, 8);
+ struct bpf_prog *tmp, *orig_fp = fp;
+ struct bpf_binary_header *header;
+ struct s390_jit_data *jit_data;
+@@ -1621,7 +1622,7 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *fp)
+ * - 3: Calculate program size and addrs arrray
+ */
+ for (pass = 1; pass <= 3; pass++) {
+- if (bpf_jit_prog(&jit, fp, extra_pass)) {
++ if (bpf_jit_prog(&jit, fp, extra_pass, stack_depth)) {
+ fp = orig_fp;
+ goto free_addrs;
+ }
+@@ -1635,7 +1636,7 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *fp)
+ goto free_addrs;
+ }
+ skip_init_ctx:
+- if (bpf_jit_prog(&jit, fp, extra_pass)) {
++ if (bpf_jit_prog(&jit, fp, extra_pass, stack_depth)) {
+ bpf_jit_binary_free(header);
+ fp = orig_fp;
+ goto free_addrs;
+diff --git a/arch/sparc/kernel/ptrace_32.c b/arch/sparc/kernel/ptrace_32.c
+index 16b50afe7b52..60f7205ebe40 100644
+--- a/arch/sparc/kernel/ptrace_32.c
++++ b/arch/sparc/kernel/ptrace_32.c
+@@ -46,82 +46,79 @@ enum sparc_regset {
+ REGSET_FP,
+ };
+
++static int regwindow32_get(struct task_struct *target,
++ const struct pt_regs *regs,
++ u32 *uregs)
++{
++ unsigned long reg_window = regs->u_regs[UREG_I6];
++ int size = 16 * sizeof(u32);
++
++ if (target == current) {
++ if (copy_from_user(uregs, (void __user *)reg_window, size))
++ return -EFAULT;
++ } else {
++ if (access_process_vm(target, reg_window, uregs, size,
++ FOLL_FORCE) != size)
++ return -EFAULT;
++ }
++ return 0;
++}
++
++static int regwindow32_set(struct task_struct *target,
++ const struct pt_regs *regs,
++ u32 *uregs)
++{
++ unsigned long reg_window = regs->u_regs[UREG_I6];
++ int size = 16 * sizeof(u32);
++
++ if (target == current) {
++ if (copy_to_user((void __user *)reg_window, uregs, size))
++ return -EFAULT;
++ } else {
++ if (access_process_vm(target, reg_window, uregs, size,
++ FOLL_FORCE | FOLL_WRITE) != size)
++ return -EFAULT;
++ }
++ return 0;
++}
++
+ static int genregs32_get(struct task_struct *target,
+ const struct user_regset *regset,
+ unsigned int pos, unsigned int count,
+ void *kbuf, void __user *ubuf)
+ {
+ const struct pt_regs *regs = target->thread.kregs;
+- unsigned long __user *reg_window;
+- unsigned long *k = kbuf;
+- unsigned long __user *u = ubuf;
+- unsigned long reg;
++ u32 uregs[16];
++ int ret;
+
+ if (target == current)
+ flush_user_windows();
+
+- pos /= sizeof(reg);
+- count /= sizeof(reg);
+-
+- if (kbuf) {
+- for (; count > 0 && pos < 16; count--)
+- *k++ = regs->u_regs[pos++];
+-
+- reg_window = (unsigned long __user *) regs->u_regs[UREG_I6];
+- reg_window -= 16;
+- for (; count > 0 && pos < 32; count--) {
+- if (get_user(*k++, &reg_window[pos++]))
+- return -EFAULT;
+- }
+- } else {
+- for (; count > 0 && pos < 16; count--) {
+- if (put_user(regs->u_regs[pos++], u++))
+- return -EFAULT;
+- }
+-
+- reg_window = (unsigned long __user *) regs->u_regs[UREG_I6];
+- reg_window -= 16;
+- for (; count > 0 && pos < 32; count--) {
+- if (get_user(reg, &reg_window[pos++]) ||
+- put_user(reg, u++))
+- return -EFAULT;
+- }
+- }
+- while (count > 0) {
+- switch (pos) {
+- case 32: /* PSR */
+- reg = regs->psr;
+- break;
+- case 33: /* PC */
+- reg = regs->pc;
+- break;
+- case 34: /* NPC */
+- reg = regs->npc;
+- break;
+- case 35: /* Y */
+- reg = regs->y;
+- break;
+- case 36: /* WIM */
+- case 37: /* TBR */
+- reg = 0;
+- break;
+- default:
+- goto finish;
+- }
++ ret = user_regset_copyout(&pos, &count, &kbuf, &ubuf,
++ regs->u_regs,
++ 0, 16 * sizeof(u32));
++ if (ret || !count)
++ return ret;
+
+- if (kbuf)
+- *k++ = reg;
+- else if (put_user(reg, u++))
++ if (pos < 32 * sizeof(u32)) {
++ if (regwindow32_get(target, regs, uregs))
+ return -EFAULT;
+- pos++;
+- count--;
++ ret = user_regset_copyout(&pos, &count, &kbuf, &ubuf,
++ uregs,
++ 16 * sizeof(u32), 32 * sizeof(u32));
++ if (ret || !count)
++ return ret;
+ }
+-finish:
+- pos *= sizeof(reg);
+- count *= sizeof(reg);
+
+- return user_regset_copyout_zero(&pos, &count, &kbuf, &ubuf,
+- 38 * sizeof(reg), -1);
++ uregs[0] = regs->psr;
++ uregs[1] = regs->pc;
++ uregs[2] = regs->npc;
++ uregs[3] = regs->y;
++ uregs[4] = 0; /* WIM */
++ uregs[5] = 0; /* TBR */
++ return user_regset_copyout(&pos, &count, &kbuf, &ubuf,
++ uregs,
++ 32 * sizeof(u32), 38 * sizeof(u32));
+ }
+
+ static int genregs32_set(struct task_struct *target,
+@@ -130,82 +127,53 @@ static int genregs32_set(struct task_struct *target,
+ const void *kbuf, const void __user *ubuf)
+ {
+ struct pt_regs *regs = target->thread.kregs;
+- unsigned long __user *reg_window;
+- const unsigned long *k = kbuf;
+- const unsigned long __user *u = ubuf;
+- unsigned long reg;
++ u32 uregs[16];
++ u32 psr;
++ int ret;
+
+ if (target == current)
+ flush_user_windows();
+
+- pos /= sizeof(reg);
+- count /= sizeof(reg);
+-
+- if (kbuf) {
+- for (; count > 0 && pos < 16; count--)
+- regs->u_regs[pos++] = *k++;
+-
+- reg_window = (unsigned long __user *) regs->u_regs[UREG_I6];
+- reg_window -= 16;
+- for (; count > 0 && pos < 32; count--) {
+- if (put_user(*k++, &reg_window[pos++]))
+- return -EFAULT;
+- }
+- } else {
+- for (; count > 0 && pos < 16; count--) {
+- if (get_user(reg, u++))
+- return -EFAULT;
+- regs->u_regs[pos++] = reg;
+- }
+-
+- reg_window = (unsigned long __user *) regs->u_regs[UREG_I6];
+- reg_window -= 16;
+- for (; count > 0 && pos < 32; count--) {
+- if (get_user(reg, u++) ||
+- put_user(reg, &reg_window[pos++]))
+- return -EFAULT;
+- }
+- }
+- while (count > 0) {
+- unsigned long psr;
++ ret = user_regset_copyin(&pos, &count, &kbuf, &ubuf,
++ regs->u_regs,
++ 0, 16 * sizeof(u32));
++ if (ret || !count)
++ return ret;
+
+- if (kbuf)
+- reg = *k++;
+- else if (get_user(reg, u++))
++ if (pos < 32 * sizeof(u32)) {
++ if (regwindow32_get(target, regs, uregs))
+ return -EFAULT;
+-
+- switch (pos) {
+- case 32: /* PSR */
+- psr = regs->psr;
+- psr &= ~(PSR_ICC | PSR_SYSCALL);
+- psr |= (reg & (PSR_ICC | PSR_SYSCALL));
+- regs->psr = psr;
+- break;
+- case 33: /* PC */
+- regs->pc = reg;
+- break;
+- case 34: /* NPC */
+- regs->npc = reg;
+- break;
+- case 35: /* Y */
+- regs->y = reg;
+- break;
+- case 36: /* WIM */
+- case 37: /* TBR */
+- break;
+- default:
+- goto finish;
+- }
+-
+- pos++;
+- count--;
++ ret = user_regset_copyin(&pos, &count, &kbuf, &ubuf,
++ uregs,
++ 16 * sizeof(u32), 32 * sizeof(u32));
++ if (ret)
++ return ret;
++ if (regwindow32_set(target, regs, uregs))
++ return -EFAULT;
++ if (!count)
++ return 0;
+ }
+-finish:
+- pos *= sizeof(reg);
+- count *= sizeof(reg);
+-
++ ret = user_regset_copyin(&pos, &count, &kbuf, &ubuf,
++ &psr,
++ 32 * sizeof(u32), 33 * sizeof(u32));
++ if (ret)
++ return ret;
++ regs->psr = (regs->psr & ~(PSR_ICC | PSR_SYSCALL)) |
++ (psr & (PSR_ICC | PSR_SYSCALL));
++ if (!count)
++ return 0;
++ ret = user_regset_copyin(&pos, &count, &kbuf, &ubuf,
++ &regs->pc,
++ 33 * sizeof(u32), 34 * sizeof(u32));
++ if (ret || !count)
++ return ret;
++ ret = user_regset_copyin(&pos, &count, &kbuf, &ubuf,
++ &regs->y,
++ 34 * sizeof(u32), 35 * sizeof(u32));
++ if (ret || !count)
++ return ret;
+ return user_regset_copyin_ignore(&pos, &count, &kbuf, &ubuf,
+- 38 * sizeof(reg), -1);
++ 35 * sizeof(u32), 38 * sizeof(u32));
+ }
+
+ static int fpregs32_get(struct task_struct *target,
+diff --git a/arch/sparc/kernel/ptrace_64.c b/arch/sparc/kernel/ptrace_64.c
+index c9d41a96468f..3f5930bfab06 100644
+--- a/arch/sparc/kernel/ptrace_64.c
++++ b/arch/sparc/kernel/ptrace_64.c
+@@ -572,19 +572,13 @@ static int genregs32_get(struct task_struct *target,
+ for (; count > 0 && pos < 32; count--) {
+ if (access_process_vm(target,
+ (unsigned long)
+- &reg_window[pos],
++ &reg_window[pos++],
+ &reg, sizeof(reg),
+ FOLL_FORCE)
+ != sizeof(reg))
+ return -EFAULT;
+- if (access_process_vm(target,
+- (unsigned long) u,
+- &reg, sizeof(reg),
+- FOLL_FORCE | FOLL_WRITE)
+- != sizeof(reg))
++ if (put_user(reg, u++))
+ return -EFAULT;
+- pos++;
+- u++;
+ }
+ }
+ }
+@@ -684,12 +678,7 @@ static int genregs32_set(struct task_struct *target,
+ }
+ } else {
+ for (; count > 0 && pos < 32; count--) {
+- if (access_process_vm(target,
+- (unsigned long)
+- u,
+- &reg, sizeof(reg),
+- FOLL_FORCE)
+- != sizeof(reg))
++ if (get_user(reg, u++))
+ return -EFAULT;
+ if (access_process_vm(target,
+ (unsigned long)
+diff --git a/arch/x86/boot/compressed/head_32.S b/arch/x86/boot/compressed/head_32.S
+index ab3307036ba4..03557f2174bf 100644
+--- a/arch/x86/boot/compressed/head_32.S
++++ b/arch/x86/boot/compressed/head_32.S
+@@ -49,16 +49,17 @@
+ * Position Independent Executable (PIE) so that linker won't optimize
+ * R_386_GOT32X relocation to its fixed symbol address. Older
+ * linkers generate R_386_32 relocations against locally defined symbols,
+- * _bss, _ebss, _got and _egot, in PIE. It isn't wrong, just less
++ * _bss, _ebss, _got, _egot and _end, in PIE. It isn't wrong, just less
+ * optimal than R_386_RELATIVE. But the x86 kernel fails to properly handle
+ * R_386_32 relocations when relocating the kernel. To generate
+- * R_386_RELATIVE relocations, we mark _bss, _ebss, _got and _egot as
++ * R_386_RELATIVE relocations, we mark _bss, _ebss, _got, _egot and _end as
+ * hidden:
+ */
+ .hidden _bss
+ .hidden _ebss
+ .hidden _got
+ .hidden _egot
++ .hidden _end
+
+ __HEAD
+ SYM_FUNC_START(startup_32)
+diff --git a/arch/x86/boot/compressed/head_64.S b/arch/x86/boot/compressed/head_64.S
+index 4f7e6b84be07..76d1d64d51e3 100644
+--- a/arch/x86/boot/compressed/head_64.S
++++ b/arch/x86/boot/compressed/head_64.S
+@@ -42,6 +42,7 @@
+ .hidden _ebss
+ .hidden _got
+ .hidden _egot
++ .hidden _end
+
+ __HEAD
+ .code32
+diff --git a/arch/x86/include/asm/smap.h b/arch/x86/include/asm/smap.h
+index 27c47d183f4b..8b58d6975d5d 100644
+--- a/arch/x86/include/asm/smap.h
++++ b/arch/x86/include/asm/smap.h
+@@ -57,8 +57,10 @@ static __always_inline unsigned long smap_save(void)
+ {
+ unsigned long flags;
+
+- asm volatile (ALTERNATIVE("", "pushf; pop %0; " __ASM_CLAC,
+- X86_FEATURE_SMAP)
++ asm volatile ("# smap_save\n\t"
++ ALTERNATIVE("jmp 1f", "", X86_FEATURE_SMAP)
++ "pushf; pop %0; " __ASM_CLAC "\n\t"
++ "1:"
+ : "=rm" (flags) : : "memory", "cc");
+
+ return flags;
+@@ -66,7 +68,10 @@ static __always_inline unsigned long smap_save(void)
+
+ static __always_inline void smap_restore(unsigned long flags)
+ {
+- asm volatile (ALTERNATIVE("", "push %0; popf", X86_FEATURE_SMAP)
++ asm volatile ("# smap_restore\n\t"
++ ALTERNATIVE("jmp 1f", "", X86_FEATURE_SMAP)
++ "push %0; popf\n\t"
++ "1:"
+ : : "g" (flags) : "memory", "cc");
+ }
+
+diff --git a/arch/x86/kernel/amd_nb.c b/arch/x86/kernel/amd_nb.c
+index b6b3297851f3..18f6b7c4bd79 100644
+--- a/arch/x86/kernel/amd_nb.c
++++ b/arch/x86/kernel/amd_nb.c
+@@ -18,9 +18,11 @@
+ #define PCI_DEVICE_ID_AMD_17H_ROOT 0x1450
+ #define PCI_DEVICE_ID_AMD_17H_M10H_ROOT 0x15d0
+ #define PCI_DEVICE_ID_AMD_17H_M30H_ROOT 0x1480
++#define PCI_DEVICE_ID_AMD_17H_M60H_ROOT 0x1630
+ #define PCI_DEVICE_ID_AMD_17H_DF_F4 0x1464
+ #define PCI_DEVICE_ID_AMD_17H_M10H_DF_F4 0x15ec
+ #define PCI_DEVICE_ID_AMD_17H_M30H_DF_F4 0x1494
++#define PCI_DEVICE_ID_AMD_17H_M60H_DF_F4 0x144c
+ #define PCI_DEVICE_ID_AMD_17H_M70H_DF_F4 0x1444
+ #define PCI_DEVICE_ID_AMD_19H_DF_F4 0x1654
+
+@@ -33,6 +35,7 @@ static const struct pci_device_id amd_root_ids[] = {
+ { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_17H_ROOT) },
+ { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_17H_M10H_ROOT) },
+ { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_17H_M30H_ROOT) },
++ { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_17H_M60H_ROOT) },
+ {}
+ };
+
+@@ -50,6 +53,7 @@ static const struct pci_device_id amd_nb_misc_ids[] = {
+ { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_17H_DF_F3) },
+ { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_17H_M10H_DF_F3) },
+ { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_17H_M30H_DF_F3) },
++ { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_17H_M60H_DF_F3) },
+ { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_CNB17H_F3) },
+ { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_17H_M70H_DF_F3) },
+ { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_19H_DF_F3) },
+@@ -65,6 +69,7 @@ static const struct pci_device_id amd_nb_link_ids[] = {
+ { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_17H_DF_F4) },
+ { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_17H_M10H_DF_F4) },
+ { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_17H_M30H_DF_F4) },
++ { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_17H_M60H_DF_F4) },
+ { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_17H_M70H_DF_F4) },
+ { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_19H_DF_F4) },
+ { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_CNB17H_F4) },
+diff --git a/arch/x86/kernel/irq_64.c b/arch/x86/kernel/irq_64.c
+index 12df3a4abfdd..6b32ab009c19 100644
+--- a/arch/x86/kernel/irq_64.c
++++ b/arch/x86/kernel/irq_64.c
+@@ -43,7 +43,7 @@ static int map_irq_stack(unsigned int cpu)
+ pages[i] = pfn_to_page(pa >> PAGE_SHIFT);
+ }
+
+- va = vmap(pages, IRQ_STACK_SIZE / PAGE_SIZE, GFP_KERNEL, PAGE_KERNEL);
++ va = vmap(pages, IRQ_STACK_SIZE / PAGE_SIZE, VM_MAP, PAGE_KERNEL);
+ if (!va)
+ return -ENOMEM;
+
+diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
+index 1bba16c5742b..a573a3e63f02 100644
+--- a/arch/x86/mm/init.c
++++ b/arch/x86/mm/init.c
+@@ -121,8 +121,6 @@ __ref void *alloc_low_pages(unsigned int num)
+ } else {
+ pfn = pgt_buf_end;
+ pgt_buf_end += num;
+- printk(KERN_DEBUG "BRK [%#010lx, %#010lx] PGTABLE\n",
+- pfn << PAGE_SHIFT, (pgt_buf_end << PAGE_SHIFT) - 1);
+ }
+
+ for (i = 0; i < num; i++) {
+diff --git a/block/blk-iocost.c b/block/blk-iocost.c
+index 7c1fe605d0d6..ef193389fffe 100644
+--- a/block/blk-iocost.c
++++ b/block/blk-iocost.c
+@@ -1543,19 +1543,39 @@ skip_surplus_transfers:
+ if (rq_wait_pct > RQ_WAIT_BUSY_PCT ||
+ missed_ppm[READ] > ppm_rthr ||
+ missed_ppm[WRITE] > ppm_wthr) {
++ /* clearly missing QoS targets, slow down vrate */
+ ioc->busy_level = max(ioc->busy_level, 0);
+ ioc->busy_level++;
+ } else if (rq_wait_pct <= RQ_WAIT_BUSY_PCT * UNBUSY_THR_PCT / 100 &&
+ missed_ppm[READ] <= ppm_rthr * UNBUSY_THR_PCT / 100 &&
+ missed_ppm[WRITE] <= ppm_wthr * UNBUSY_THR_PCT / 100) {
+- /* take action iff there is contention */
+- if (nr_shortages && !nr_lagging) {
++ /* QoS targets are being met with >25% margin */
++ if (nr_shortages) {
++ /*
++ * We're throttling while the device has spare
++ * capacity. If vrate was being slowed down, stop.
++ */
+ ioc->busy_level = min(ioc->busy_level, 0);
+- /* redistribute surpluses first */
+- if (!nr_surpluses)
++
++ /*
++ * If there are IOs spanning multiple periods, wait
++ * them out before pushing the device harder. If
++ * there are surpluses, let redistribution work it
++ * out first.
++ */
++ if (!nr_lagging && !nr_surpluses)
+ ioc->busy_level--;
++ } else {
++ /*
++ * Nobody is being throttled and the users aren't
++ * issuing enough IOs to saturate the device. We
++ * simply don't know how close the device is to
++ * saturation. Coast.
++ */
++ ioc->busy_level = 0;
+ }
+ } else {
++ /* inside the hysterisis margin, we're good */
+ ioc->busy_level = 0;
+ }
+
+diff --git a/block/blk-mq.c b/block/blk-mq.c
+index a7785df2c944..98a702761e2c 100644
+--- a/block/blk-mq.c
++++ b/block/blk-mq.c
+@@ -2521,18 +2521,6 @@ static void blk_mq_map_swqueue(struct request_queue *q)
+ * If the cpu isn't present, the cpu is mapped to first hctx.
+ */
+ for_each_possible_cpu(i) {
+- hctx_idx = set->map[HCTX_TYPE_DEFAULT].mq_map[i];
+- /* unmapped hw queue can be remapped after CPU topo changed */
+- if (!set->tags[hctx_idx] &&
+- !__blk_mq_alloc_rq_map(set, hctx_idx)) {
+- /*
+- * If tags initialization fail for some hctx,
+- * that hctx won't be brought online. In this
+- * case, remap the current ctx to hctx[0] which
+- * is guaranteed to always have tags allocated
+- */
+- set->map[HCTX_TYPE_DEFAULT].mq_map[i] = 0;
+- }
+
+ ctx = per_cpu_ptr(q->queue_ctx, i);
+ for (j = 0; j < set->nr_maps; j++) {
+@@ -2541,6 +2529,18 @@ static void blk_mq_map_swqueue(struct request_queue *q)
+ HCTX_TYPE_DEFAULT, i);
+ continue;
+ }
++ hctx_idx = set->map[j].mq_map[i];
++ /* unmapped hw queue can be remapped after CPU topo changed */
++ if (!set->tags[hctx_idx] &&
++ !__blk_mq_alloc_rq_map(set, hctx_idx)) {
++ /*
++ * If tags initialization fail for some hctx,
++ * that hctx won't be brought online. In this
++ * case, remap the current ctx to hctx[0] which
++ * is guaranteed to always have tags allocated
++ */
++ set->map[j].mq_map[i] = 0;
++ }
+
+ hctx = blk_mq_map_queue_type(q, j, i);
+ ctx->hctxs[j] = hctx;
+@@ -3353,8 +3353,8 @@ static void __blk_mq_update_nr_hw_queues(struct blk_mq_tag_set *set,
+
+ prev_nr_hw_queues = set->nr_hw_queues;
+ set->nr_hw_queues = nr_hw_queues;
+- blk_mq_update_queue_map(set);
+ fallback:
++ blk_mq_update_queue_map(set);
+ list_for_each_entry(q, &set->tag_list, tag_set_list) {
+ blk_mq_realloc_hw_ctxs(set, q);
+ if (q->nr_hw_queues != set->nr_hw_queues) {
+diff --git a/block/blk.h b/block/blk.h
+index 0a94ec68af32..151f86932547 100644
+--- a/block/blk.h
++++ b/block/blk.h
+@@ -470,9 +470,11 @@ static inline sector_t part_nr_sects_read(struct hd_struct *part)
+ static inline void part_nr_sects_write(struct hd_struct *part, sector_t size)
+ {
+ #if BITS_PER_LONG==32 && defined(CONFIG_SMP)
++ preempt_disable();
+ write_seqcount_begin(&part->nr_sects_seq);
+ part->nr_sects = size;
+ write_seqcount_end(&part->nr_sects_seq);
++ preempt_enable();
+ #elif BITS_PER_LONG==32 && defined(CONFIG_PREEMPTION)
+ preempt_disable();
+ part->nr_sects = size;
+diff --git a/crypto/blake2b_generic.c b/crypto/blake2b_generic.c
+index 1d262374fa4e..0ffd8d92e308 100644
+--- a/crypto/blake2b_generic.c
++++ b/crypto/blake2b_generic.c
+@@ -129,7 +129,9 @@ static void blake2b_compress(struct blake2b_state *S,
+ ROUND(9);
+ ROUND(10);
+ ROUND(11);
+-
++#ifdef CONFIG_CC_IS_CLANG
++#pragma nounroll /* https://bugs.llvm.org/show_bug.cgi?id=45803 */
++#endif
+ for (i = 0; i < 8; ++i)
+ S->h[i] = S->h[i] ^ v[i] ^ v[i + 8];
+ }
+diff --git a/drivers/acpi/acpica/dsfield.c b/drivers/acpi/acpica/dsfield.c
+index c901f5aec739..5725baec60f3 100644
+--- a/drivers/acpi/acpica/dsfield.c
++++ b/drivers/acpi/acpica/dsfield.c
+@@ -514,13 +514,20 @@ acpi_ds_create_field(union acpi_parse_object *op,
+ info.region_node = region_node;
+
+ status = acpi_ds_get_field_names(&info, walk_state, arg->common.next);
++ if (ACPI_FAILURE(status)) {
++ return_ACPI_STATUS(status);
++ }
++
+ if (info.region_node->object->region.space_id ==
+- ACPI_ADR_SPACE_PLATFORM_COMM
+- && !(region_node->object->field.internal_pcc_buffer =
+- ACPI_ALLOCATE_ZEROED(info.region_node->object->region.
+- length))) {
+- return_ACPI_STATUS(AE_NO_MEMORY);
++ ACPI_ADR_SPACE_PLATFORM_COMM) {
++ region_node->object->field.internal_pcc_buffer =
++ ACPI_ALLOCATE_ZEROED(info.region_node->object->region.
++ length);
++ if (!region_node->object->field.internal_pcc_buffer) {
++ return_ACPI_STATUS(AE_NO_MEMORY);
++ }
+ }
++
+ return_ACPI_STATUS(status);
+ }
+
+diff --git a/drivers/acpi/arm64/iort.c b/drivers/acpi/arm64/iort.c
+index 7d04424189df..ec04435a7cea 100644
+--- a/drivers/acpi/arm64/iort.c
++++ b/drivers/acpi/arm64/iort.c
+@@ -414,6 +414,7 @@ static struct acpi_iort_node *iort_node_get_id(struct acpi_iort_node *node,
+ static int iort_get_id_mapping_index(struct acpi_iort_node *node)
+ {
+ struct acpi_iort_smmu_v3 *smmu;
++ struct acpi_iort_pmcg *pmcg;
+
+ switch (node->type) {
+ case ACPI_IORT_NODE_SMMU_V3:
+@@ -441,6 +442,10 @@ static int iort_get_id_mapping_index(struct acpi_iort_node *node)
+
+ return smmu->id_mapping_index;
+ case ACPI_IORT_NODE_PMCG:
++ pmcg = (struct acpi_iort_pmcg *)node->node_data;
++ if (pmcg->overflow_gsiv || node->mapping_count == 0)
++ return -EINVAL;
++
+ return 0;
+ default:
+ return -EINVAL;
+diff --git a/drivers/acpi/evged.c b/drivers/acpi/evged.c
+index 6d7a522952bf..ccd900690b6f 100644
+--- a/drivers/acpi/evged.c
++++ b/drivers/acpi/evged.c
+@@ -94,7 +94,7 @@ static acpi_status acpi_ged_request_interrupt(struct acpi_resource *ares,
+ trigger = p->triggering;
+ } else {
+ gsi = pext->interrupts[0];
+- trigger = p->triggering;
++ trigger = pext->triggering;
+ }
+
+ irq = r.start;
+diff --git a/drivers/acpi/video_detect.c b/drivers/acpi/video_detect.c
+index b4994e50608d..2499d7e3c710 100644
+--- a/drivers/acpi/video_detect.c
++++ b/drivers/acpi/video_detect.c
+@@ -361,6 +361,16 @@ static const struct dmi_system_id video_detect_dmi_table[] = {
+ DMI_MATCH(DMI_BOARD_NAME, "JV50"),
+ },
+ },
++ {
++ /* https://bugzilla.kernel.org/show_bug.cgi?id=207835 */
++ .callback = video_detect_force_native,
++ .ident = "Acer TravelMate 5735Z",
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
++ DMI_MATCH(DMI_PRODUCT_NAME, "TravelMate 5735Z"),
++ DMI_MATCH(DMI_BOARD_NAME, "BA51_MV"),
++ },
++ },
+
+ /*
+ * Desktops which falsely report a backlight and which our heuristics
+diff --git a/drivers/base/swnode.c b/drivers/base/swnode.c
+index de8d3543e8fe..770b1f47a625 100644
+--- a/drivers/base/swnode.c
++++ b/drivers/base/swnode.c
+@@ -712,17 +712,18 @@ EXPORT_SYMBOL_GPL(software_node_register_nodes);
+ * @nodes: Zero terminated array of software nodes to be unregistered
+ *
+ * Unregister multiple software nodes at once.
++ *
++ * NOTE: Be careful using this call if the nodes had parent pointers set up in
++ * them before registering. If so, it is wiser to remove the nodes
++ * individually, in the correct order (child before parent) instead of relying
++ * on the sequential order of the list of nodes in the array.
+ */
+ void software_node_unregister_nodes(const struct software_node *nodes)
+ {
+- struct swnode *swnode;
+ int i;
+
+- for (i = 0; nodes[i].name; i++) {
+- swnode = software_node_to_swnode(&nodes[i]);
+- if (swnode)
+- fwnode_remove_software_node(&swnode->fwnode);
+- }
++ for (i = 0; nodes[i].name; i++)
++ software_node_unregister(&nodes[i]);
+ }
+ EXPORT_SYMBOL_GPL(software_node_unregister_nodes);
+
+@@ -741,6 +742,20 @@ int software_node_register(const struct software_node *node)
+ }
+ EXPORT_SYMBOL_GPL(software_node_register);
+
++/**
++ * software_node_unregister - Unregister static software node
++ * @node: The software node to be unregistered
++ */
++void software_node_unregister(const struct software_node *node)
++{
++ struct swnode *swnode;
++
++ swnode = software_node_to_swnode(node);
++ if (swnode)
++ fwnode_remove_software_node(&swnode->fwnode);
++}
++EXPORT_SYMBOL_GPL(software_node_unregister);
++
+ struct fwnode_handle *
+ fwnode_create_software_node(const struct property_entry *properties,
+ const struct fwnode_handle *parent)
+diff --git a/drivers/bluetooth/btbcm.c b/drivers/bluetooth/btbcm.c
+index 1f498f358f60..e1377934507c 100644
+--- a/drivers/bluetooth/btbcm.c
++++ b/drivers/bluetooth/btbcm.c
+@@ -380,6 +380,7 @@ static const struct bcm_subver_table bcm_uart_subver_table[] = {
+ { 0x410e, "BCM43341B0" }, /* 002.001.014 */
+ { 0x4204, "BCM2076B1" }, /* 002.002.004 */
+ { 0x4406, "BCM4324B3" }, /* 002.004.006 */
++ { 0x4606, "BCM4324B5" }, /* 002.006.006 */
+ { 0x6109, "BCM4335C0" }, /* 003.001.009 */
+ { 0x610c, "BCM4354" }, /* 003.001.012 */
+ { 0x2122, "BCM4343A0" }, /* 001.001.034 */
+@@ -395,6 +396,7 @@ static const struct bcm_subver_table bcm_uart_subver_table[] = {
+ };
+
+ static const struct bcm_subver_table bcm_usb_subver_table[] = {
++ { 0x2105, "BCM20703A1" }, /* 001.001.005 */
+ { 0x210b, "BCM43142A0" }, /* 001.001.011 */
+ { 0x2112, "BCM4314A0" }, /* 001.001.018 */
+ { 0x2118, "BCM20702A0" }, /* 001.001.024 */
+diff --git a/drivers/bluetooth/btmtkuart.c b/drivers/bluetooth/btmtkuart.c
+index e11169ad8247..8a81fbca5c9d 100644
+--- a/drivers/bluetooth/btmtkuart.c
++++ b/drivers/bluetooth/btmtkuart.c
+@@ -1015,7 +1015,7 @@ static int btmtkuart_probe(struct serdev_device *serdev)
+ if (btmtkuart_is_standalone(bdev)) {
+ err = clk_prepare_enable(bdev->osc);
+ if (err < 0)
+- return err;
++ goto err_hci_free_dev;
+
+ if (bdev->boot) {
+ gpiod_set_value_cansleep(bdev->boot, 1);
+@@ -1028,10 +1028,8 @@ static int btmtkuart_probe(struct serdev_device *serdev)
+
+ /* Power on */
+ err = regulator_enable(bdev->vcc);
+- if (err < 0) {
+- clk_disable_unprepare(bdev->osc);
+- return err;
+- }
++ if (err < 0)
++ goto err_clk_disable_unprepare;
+
+ /* Reset if the reset-gpios is available otherwise the board
+ * -level design should be guaranteed.
+@@ -1063,7 +1061,6 @@ static int btmtkuart_probe(struct serdev_device *serdev)
+ err = hci_register_dev(hdev);
+ if (err < 0) {
+ dev_err(&serdev->dev, "Can't register HCI device\n");
+- hci_free_dev(hdev);
+ goto err_regulator_disable;
+ }
+
+@@ -1072,6 +1069,11 @@ static int btmtkuart_probe(struct serdev_device *serdev)
+ err_regulator_disable:
+ if (btmtkuart_is_standalone(bdev))
+ regulator_disable(bdev->vcc);
++err_clk_disable_unprepare:
++ if (btmtkuart_is_standalone(bdev))
++ clk_disable_unprepare(bdev->osc);
++err_hci_free_dev:
++ hci_free_dev(hdev);
+
+ return err;
+ }
+diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c
+index 3bdec42c9612..3d9313c746f3 100644
+--- a/drivers/bluetooth/btusb.c
++++ b/drivers/bluetooth/btusb.c
+@@ -58,6 +58,7 @@ static struct usb_driver btusb_driver;
+ #define BTUSB_CW6622 0x100000
+ #define BTUSB_MEDIATEK 0x200000
+ #define BTUSB_WIDEBAND_SPEECH 0x400000
++#define BTUSB_VALID_LE_STATES 0x800000
+
+ static const struct usb_device_id btusb_table[] = {
+ /* Generic Bluetooth USB device */
+diff --git a/drivers/bluetooth/hci_bcm.c b/drivers/bluetooth/hci_bcm.c
+index b236cb11c0dc..19e4587f366c 100644
+--- a/drivers/bluetooth/hci_bcm.c
++++ b/drivers/bluetooth/hci_bcm.c
+@@ -118,6 +118,7 @@ struct bcm_device {
+ u32 oper_speed;
+ int irq;
+ bool irq_active_low;
++ bool irq_acquired;
+
+ #ifdef CONFIG_PM
+ struct hci_uart *hu;
+@@ -333,6 +334,8 @@ static int bcm_request_irq(struct bcm_data *bcm)
+ goto unlock;
+ }
+
++ bdev->irq_acquired = true;
++
+ device_init_wakeup(bdev->dev, true);
+
+ pm_runtime_set_autosuspend_delay(bdev->dev,
+@@ -514,7 +517,7 @@ static int bcm_close(struct hci_uart *hu)
+ }
+
+ if (bdev) {
+- if (IS_ENABLED(CONFIG_PM) && bdev->irq > 0) {
++ if (IS_ENABLED(CONFIG_PM) && bdev->irq_acquired) {
+ devm_free_irq(bdev->dev, bdev->irq, bdev);
+ device_init_wakeup(bdev->dev, false);
+ pm_runtime_disable(bdev->dev);
+@@ -1153,7 +1156,8 @@ static int bcm_of_probe(struct bcm_device *bdev)
+ device_property_read_u8_array(bdev->dev, "brcm,bt-pcm-int-params",
+ bdev->pcm_int_params, 5);
+ bdev->irq = of_irq_get_byname(bdev->dev->of_node, "host-wakeup");
+-
++ bdev->irq_active_low = irq_get_trigger_type(bdev->irq)
++ & (IRQ_TYPE_EDGE_FALLING | IRQ_TYPE_LEVEL_LOW);
+ return 0;
+ }
+
+diff --git a/drivers/bluetooth/hci_qca.c b/drivers/bluetooth/hci_qca.c
+index 439392b1c043..0b1036e5e963 100644
+--- a/drivers/bluetooth/hci_qca.c
++++ b/drivers/bluetooth/hci_qca.c
+@@ -1953,8 +1953,9 @@ static void qca_serdev_remove(struct serdev_device *serdev)
+
+ static int __maybe_unused qca_suspend(struct device *dev)
+ {
+- struct hci_dev *hdev = container_of(dev, struct hci_dev, dev);
+- struct hci_uart *hu = hci_get_drvdata(hdev);
++ struct serdev_device *serdev = to_serdev_device(dev);
++ struct qca_serdev *qcadev = serdev_device_get_drvdata(serdev);
++ struct hci_uart *hu = &qcadev->serdev_hu;
+ struct qca_data *qca = hu->priv;
+ unsigned long flags;
+ int ret = 0;
+@@ -2033,8 +2034,9 @@ error:
+
+ static int __maybe_unused qca_resume(struct device *dev)
+ {
+- struct hci_dev *hdev = container_of(dev, struct hci_dev, dev);
+- struct hci_uart *hu = hci_get_drvdata(hdev);
++ struct serdev_device *serdev = to_serdev_device(dev);
++ struct qca_serdev *qcadev = serdev_device_get_drvdata(serdev);
++ struct hci_uart *hu = &qcadev->serdev_hu;
+ struct qca_data *qca = hu->priv;
+
+ clear_bit(QCA_SUSPENDING, &qca->flags);
+diff --git a/drivers/clk/mediatek/clk-mux.c b/drivers/clk/mediatek/clk-mux.c
+index 76f9cd039195..14e127e9a740 100644
+--- a/drivers/clk/mediatek/clk-mux.c
++++ b/drivers/clk/mediatek/clk-mux.c
+@@ -160,7 +160,7 @@ struct clk *mtk_clk_register_mux(const struct mtk_mux *mux,
+ spinlock_t *lock)
+ {
+ struct mtk_clk_mux *clk_mux;
+- struct clk_init_data init;
++ struct clk_init_data init = {};
+ struct clk *clk;
+
+ clk_mux = kzalloc(sizeof(*clk_mux), GFP_KERNEL);
+diff --git a/drivers/clocksource/Kconfig b/drivers/clocksource/Kconfig
+index f2142e6bbea3..f225c27b70f7 100644
+--- a/drivers/clocksource/Kconfig
++++ b/drivers/clocksource/Kconfig
+@@ -709,6 +709,7 @@ config MICROCHIP_PIT64B
+ bool "Microchip PIT64B support"
+ depends on OF || COMPILE_TEST
+ select CLKSRC_MMIO
++ select TIMER_OF
+ help
+ This option enables Microchip PIT64B timer for Atmel
+ based system. It supports the oneshot, the periodic
+diff --git a/drivers/clocksource/dw_apb_timer.c b/drivers/clocksource/dw_apb_timer.c
+index b207a77b0831..f5f24a95ee82 100644
+--- a/drivers/clocksource/dw_apb_timer.c
++++ b/drivers/clocksource/dw_apb_timer.c
+@@ -222,7 +222,8 @@ static int apbt_next_event(unsigned long delta,
+ /**
+ * dw_apb_clockevent_init() - use an APB timer as a clock_event_device
+ *
+- * @cpu: The CPU the events will be targeted at.
++ * @cpu: The CPU the events will be targeted at or -1 if CPU affiliation
++ * isn't required.
+ * @name: The name used for the timer and the IRQ for it.
+ * @rating: The rating to give the timer.
+ * @base: I/O base for the timer registers.
+@@ -257,7 +258,7 @@ dw_apb_clockevent_init(int cpu, const char *name, unsigned rating,
+ dw_ced->ced.max_delta_ticks = 0x7fffffff;
+ dw_ced->ced.min_delta_ns = clockevent_delta2ns(5000, &dw_ced->ced);
+ dw_ced->ced.min_delta_ticks = 5000;
+- dw_ced->ced.cpumask = cpumask_of(cpu);
++ dw_ced->ced.cpumask = cpu < 0 ? cpu_possible_mask : cpumask_of(cpu);
+ dw_ced->ced.features = CLOCK_EVT_FEAT_PERIODIC |
+ CLOCK_EVT_FEAT_ONESHOT | CLOCK_EVT_FEAT_DYNIRQ;
+ dw_ced->ced.set_state_shutdown = apbt_shutdown;
+diff --git a/drivers/clocksource/dw_apb_timer_of.c b/drivers/clocksource/dw_apb_timer_of.c
+index 8c28b127759f..6921b91b61ef 100644
+--- a/drivers/clocksource/dw_apb_timer_of.c
++++ b/drivers/clocksource/dw_apb_timer_of.c
+@@ -147,10 +147,6 @@ static int num_called;
+ static int __init dw_apb_timer_init(struct device_node *timer)
+ {
+ switch (num_called) {
+- case 0:
+- pr_debug("%s: found clockevent timer\n", __func__);
+- add_clockevent(timer);
+- break;
+ case 1:
+ pr_debug("%s: found clocksource timer\n", __func__);
+ add_clocksource(timer);
+@@ -161,6 +157,8 @@ static int __init dw_apb_timer_init(struct device_node *timer)
+ #endif
+ break;
+ default:
++ pr_debug("%s: found clockevent timer\n", __func__);
++ add_clockevent(timer);
+ break;
+ }
+
+diff --git a/drivers/clocksource/timer-versatile.c b/drivers/clocksource/timer-versatile.c
+index e4ebb656d005..f5d017b31afa 100644
+--- a/drivers/clocksource/timer-versatile.c
++++ b/drivers/clocksource/timer-versatile.c
+@@ -6,6 +6,7 @@
+
+ #include <linux/clocksource.h>
+ #include <linux/io.h>
++#include <linux/of.h>
+ #include <linux/of_address.h>
+ #include <linux/sched_clock.h>
+
+@@ -22,6 +23,8 @@ static int __init versatile_sched_clock_init(struct device_node *node)
+ {
+ void __iomem *base = of_iomap(node, 0);
+
++ of_node_clear_flag(node, OF_POPULATED);
++
+ if (!base)
+ return -ENXIO;
+
+diff --git a/drivers/cpufreq/qcom-cpufreq-nvmem.c b/drivers/cpufreq/qcom-cpufreq-nvmem.c
+index a1b8238872a2..d06b37822c3d 100644
+--- a/drivers/cpufreq/qcom-cpufreq-nvmem.c
++++ b/drivers/cpufreq/qcom-cpufreq-nvmem.c
+@@ -277,7 +277,7 @@ static int qcom_cpufreq_probe(struct platform_device *pdev)
+ if (!np)
+ return -ENOENT;
+
+- ret = of_device_is_compatible(np, "operating-points-v2-qcom-cpu");
++ ret = of_device_is_compatible(np, "operating-points-v2-kryo-cpu");
+ if (!ret) {
+ of_node_put(np);
+ return -ENOENT;
+diff --git a/drivers/cpuidle/cpuidle-psci.c b/drivers/cpuidle/cpuidle-psci.c
+index bae9140a65a5..d0fb585073c6 100644
+--- a/drivers/cpuidle/cpuidle-psci.c
++++ b/drivers/cpuidle/cpuidle-psci.c
+@@ -58,6 +58,10 @@ static int psci_enter_domain_idle_state(struct cpuidle_device *dev,
+ u32 state;
+ int ret;
+
++ ret = cpu_pm_enter();
++ if (ret)
++ return -1;
++
+ /* Do runtime PM to manage a hierarchical CPU toplogy. */
+ pm_runtime_put_sync_suspend(pd_dev);
+
+@@ -65,10 +69,12 @@ static int psci_enter_domain_idle_state(struct cpuidle_device *dev,
+ if (!state)
+ state = states[idx];
+
+- ret = psci_enter_state(idx, state);
++ ret = psci_cpu_suspend_enter(state) ? -1 : idx;
+
+ pm_runtime_get_sync(pd_dev);
+
++ cpu_pm_exit();
++
+ /* Clear the domain state to start fresh when back from idle. */
+ psci_set_domain_state(0);
+ return ret;
+diff --git a/drivers/cpuidle/sysfs.c b/drivers/cpuidle/sysfs.c
+index cdeedbf02646..55107565b319 100644
+--- a/drivers/cpuidle/sysfs.c
++++ b/drivers/cpuidle/sysfs.c
+@@ -515,7 +515,7 @@ static int cpuidle_add_state_sysfs(struct cpuidle_device *device)
+ ret = kobject_init_and_add(&kobj->kobj, &ktype_state_cpuidle,
+ &kdev->kobj, "state%d", i);
+ if (ret) {
+- kfree(kobj);
++ kobject_put(&kobj->kobj);
+ goto error_state;
+ }
+ cpuidle_add_s2idle_attr_group(kobj);
+@@ -646,7 +646,7 @@ static int cpuidle_add_driver_sysfs(struct cpuidle_device *dev)
+ ret = kobject_init_and_add(&kdrv->kobj, &ktype_driver_cpuidle,
+ &kdev->kobj, "driver");
+ if (ret) {
+- kfree(kdrv);
++ kobject_put(&kdrv->kobj);
+ return ret;
+ }
+
+@@ -740,7 +740,7 @@ int cpuidle_add_sysfs(struct cpuidle_device *dev)
+ error = kobject_init_and_add(&kdev->kobj, &ktype_cpuidle, &cpu_dev->kobj,
+ "cpuidle");
+ if (error) {
+- kfree(kdev);
++ kobject_put(&kdev->kobj);
+ return error;
+ }
+
+diff --git a/drivers/crypto/ccp/Kconfig b/drivers/crypto/ccp/Kconfig
+index e0a8bd15aa74..32268e239bf1 100644
+--- a/drivers/crypto/ccp/Kconfig
++++ b/drivers/crypto/ccp/Kconfig
+@@ -10,10 +10,9 @@ config CRYPTO_DEV_CCP_DD
+ config CRYPTO_DEV_SP_CCP
+ bool "Cryptographic Coprocessor device"
+ default y
+- depends on CRYPTO_DEV_CCP_DD
++ depends on CRYPTO_DEV_CCP_DD && DMADEVICES
+ select HW_RANDOM
+ select DMA_ENGINE
+- select DMADEVICES
+ select CRYPTO_SHA1
+ select CRYPTO_SHA256
+ help
+diff --git a/drivers/crypto/chelsio/chcr_algo.c b/drivers/crypto/chelsio/chcr_algo.c
+index c29b80dd30d8..6c2cd36048ea 100644
+--- a/drivers/crypto/chelsio/chcr_algo.c
++++ b/drivers/crypto/chelsio/chcr_algo.c
+@@ -1054,8 +1054,8 @@ static unsigned int adjust_ctr_overflow(u8 *iv, u32 bytes)
+ u32 temp = be32_to_cpu(*--b);
+
+ temp = ~temp;
+- c = (u64)temp + 1; // No of block can processed withou overflow
+- if ((bytes / AES_BLOCK_SIZE) > c)
++ c = (u64)temp + 1; // No of block can processed without overflow
++ if ((bytes / AES_BLOCK_SIZE) >= c)
+ bytes = c * AES_BLOCK_SIZE;
+ return bytes;
+ }
+@@ -1158,15 +1158,16 @@ static int chcr_final_cipher_iv(struct skcipher_request *req,
+ static int chcr_handle_cipher_resp(struct skcipher_request *req,
+ unsigned char *input, int err)
+ {
++ struct chcr_skcipher_req_ctx *reqctx = skcipher_request_ctx(req);
+ struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+- struct chcr_context *ctx = c_ctx(tfm);
+- struct uld_ctx *u_ctx = ULD_CTX(c_ctx(tfm));
+- struct ablk_ctx *ablkctx = ABLK_CTX(c_ctx(tfm));
+- struct sk_buff *skb;
+ struct cpl_fw6_pld *fw6_pld = (struct cpl_fw6_pld *)input;
+- struct chcr_skcipher_req_ctx *reqctx = skcipher_request_ctx(req);
+- struct cipher_wr_param wrparam;
++ struct ablk_ctx *ablkctx = ABLK_CTX(c_ctx(tfm));
++ struct uld_ctx *u_ctx = ULD_CTX(c_ctx(tfm));
+ struct chcr_dev *dev = c_ctx(tfm)->dev;
++ struct chcr_context *ctx = c_ctx(tfm);
++ struct adapter *adap = padap(ctx->dev);
++ struct cipher_wr_param wrparam;
++ struct sk_buff *skb;
+ int bytes;
+
+ if (err)
+@@ -1197,6 +1198,8 @@ static int chcr_handle_cipher_resp(struct skcipher_request *req,
+ if (unlikely(bytes == 0)) {
+ chcr_cipher_dma_unmap(&ULD_CTX(c_ctx(tfm))->lldi.pdev->dev,
+ req);
++ memcpy(req->iv, reqctx->init_iv, IV);
++ atomic_inc(&adap->chcr_stats.fallback);
+ err = chcr_cipher_fallback(ablkctx->sw_cipher,
+ req->base.flags,
+ req->src,
+@@ -1248,20 +1251,28 @@ static int process_cipher(struct skcipher_request *req,
+ struct sk_buff **skb,
+ unsigned short op_type)
+ {
++ struct chcr_skcipher_req_ctx *reqctx = skcipher_request_ctx(req);
+ struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+ unsigned int ivsize = crypto_skcipher_ivsize(tfm);
+- struct chcr_skcipher_req_ctx *reqctx = skcipher_request_ctx(req);
+ struct ablk_ctx *ablkctx = ABLK_CTX(c_ctx(tfm));
++ struct adapter *adap = padap(c_ctx(tfm)->dev);
+ struct cipher_wr_param wrparam;
+ int bytes, err = -EINVAL;
++ int subtype;
+
+ reqctx->processed = 0;
+ reqctx->partial_req = 0;
+ if (!req->iv)
+ goto error;
++ subtype = get_cryptoalg_subtype(tfm);
+ if ((ablkctx->enckey_len == 0) || (ivsize > AES_BLOCK_SIZE) ||
+ (req->cryptlen == 0) ||
+ (req->cryptlen % crypto_skcipher_blocksize(tfm))) {
++ if (req->cryptlen == 0 && subtype != CRYPTO_ALG_SUB_TYPE_XTS)
++ goto fallback;
++ else if (req->cryptlen % crypto_skcipher_blocksize(tfm) &&
++ subtype == CRYPTO_ALG_SUB_TYPE_XTS)
++ goto fallback;
+ pr_err("AES: Invalid value of Key Len %d nbytes %d IV Len %d\n",
+ ablkctx->enckey_len, req->cryptlen, ivsize);
+ goto error;
+@@ -1302,12 +1313,10 @@ static int process_cipher(struct skcipher_request *req,
+ } else {
+ bytes = req->cryptlen;
+ }
+- if (get_cryptoalg_subtype(tfm) ==
+- CRYPTO_ALG_SUB_TYPE_CTR) {
++ if (subtype == CRYPTO_ALG_SUB_TYPE_CTR) {
+ bytes = adjust_ctr_overflow(req->iv, bytes);
+ }
+- if (get_cryptoalg_subtype(tfm) ==
+- CRYPTO_ALG_SUB_TYPE_CTR_RFC3686) {
++ if (subtype == CRYPTO_ALG_SUB_TYPE_CTR_RFC3686) {
+ memcpy(reqctx->iv, ablkctx->nonce, CTR_RFC3686_NONCE_SIZE);
+ memcpy(reqctx->iv + CTR_RFC3686_NONCE_SIZE, req->iv,
+ CTR_RFC3686_IV_SIZE);
+@@ -1315,20 +1324,25 @@ static int process_cipher(struct skcipher_request *req,
+ /* initialize counter portion of counter block */
+ *(__be32 *)(reqctx->iv + CTR_RFC3686_NONCE_SIZE +
+ CTR_RFC3686_IV_SIZE) = cpu_to_be32(1);
++ memcpy(reqctx->init_iv, reqctx->iv, IV);
+
+ } else {
+
+ memcpy(reqctx->iv, req->iv, IV);
++ memcpy(reqctx->init_iv, req->iv, IV);
+ }
+ if (unlikely(bytes == 0)) {
+ chcr_cipher_dma_unmap(&ULD_CTX(c_ctx(tfm))->lldi.pdev->dev,
+ req);
++fallback: atomic_inc(&adap->chcr_stats.fallback);
+ err = chcr_cipher_fallback(ablkctx->sw_cipher,
+ req->base.flags,
+ req->src,
+ req->dst,
+ req->cryptlen,
+- reqctx->iv,
++ subtype ==
++ CRYPTO_ALG_SUB_TYPE_CTR_RFC3686 ?
++ reqctx->iv : req->iv,
+ op_type);
+ goto error;
+ }
+@@ -1443,6 +1457,7 @@ static int chcr_device_init(struct chcr_context *ctx)
+ if (!ctx->dev) {
+ u_ctx = assign_chcr_device();
+ if (!u_ctx) {
++ err = -ENXIO;
+ pr_err("chcr device assignment fails\n");
+ goto out;
+ }
+@@ -2910,7 +2925,7 @@ static void fill_sec_cpl_for_aead(struct cpl_tx_sec_pdu *sec_cpl,
+ unsigned int mac_mode = CHCR_SCMD_AUTH_MODE_CBCMAC;
+ unsigned int rx_channel_id = reqctx->rxqidx / ctx->rxq_perchan;
+ unsigned int ccm_xtra;
+- unsigned char tag_offset = 0, auth_offset = 0;
++ unsigned int tag_offset = 0, auth_offset = 0;
+ unsigned int assoclen;
+
+ if (get_aead_subtype(tfm) == CRYPTO_ALG_SUB_TYPE_AEAD_RFC4309)
+diff --git a/drivers/crypto/chelsio/chcr_crypto.h b/drivers/crypto/chelsio/chcr_crypto.h
+index 542bebae001f..b3fdbdc25acb 100644
+--- a/drivers/crypto/chelsio/chcr_crypto.h
++++ b/drivers/crypto/chelsio/chcr_crypto.h
+@@ -302,6 +302,7 @@ struct chcr_skcipher_req_ctx {
+ unsigned int op;
+ u16 imm;
+ u8 iv[CHCR_MAX_CRYPTO_IV_LEN];
++ u8 init_iv[CHCR_MAX_CRYPTO_IV_LEN];
+ u16 txqidx;
+ u16 rxqidx;
+ };
+diff --git a/drivers/crypto/stm32/stm32-crc32.c b/drivers/crypto/stm32/stm32-crc32.c
+index 8e92e4ac79f1..10304511f9b4 100644
+--- a/drivers/crypto/stm32/stm32-crc32.c
++++ b/drivers/crypto/stm32/stm32-crc32.c
+@@ -28,8 +28,10 @@
+
+ /* Registers values */
+ #define CRC_CR_RESET BIT(0)
+-#define CRC_CR_REVERSE (BIT(7) | BIT(6) | BIT(5))
+-#define CRC_INIT_DEFAULT 0xFFFFFFFF
++#define CRC_CR_REV_IN_WORD (BIT(6) | BIT(5))
++#define CRC_CR_REV_IN_BYTE BIT(5)
++#define CRC_CR_REV_OUT BIT(7)
++#define CRC32C_INIT_DEFAULT 0xFFFFFFFF
+
+ #define CRC_AUTOSUSPEND_DELAY 50
+
+@@ -38,8 +40,6 @@ struct stm32_crc {
+ struct device *dev;
+ void __iomem *regs;
+ struct clk *clk;
+- u8 pending_data[sizeof(u32)];
+- size_t nb_pending_bytes;
+ };
+
+ struct stm32_crc_list {
+@@ -59,14 +59,13 @@ struct stm32_crc_ctx {
+
+ struct stm32_crc_desc_ctx {
+ u32 partial; /* crc32c: partial in first 4 bytes of that struct */
+- struct stm32_crc *crc;
+ };
+
+ static int stm32_crc32_cra_init(struct crypto_tfm *tfm)
+ {
+ struct stm32_crc_ctx *mctx = crypto_tfm_ctx(tfm);
+
+- mctx->key = CRC_INIT_DEFAULT;
++ mctx->key = 0;
+ mctx->poly = CRC32_POLY_LE;
+ return 0;
+ }
+@@ -75,7 +74,7 @@ static int stm32_crc32c_cra_init(struct crypto_tfm *tfm)
+ {
+ struct stm32_crc_ctx *mctx = crypto_tfm_ctx(tfm);
+
+- mctx->key = CRC_INIT_DEFAULT;
++ mctx->key = CRC32C_INIT_DEFAULT;
+ mctx->poly = CRC32C_POLY_LE;
+ return 0;
+ }
+@@ -92,32 +91,42 @@ static int stm32_crc_setkey(struct crypto_shash *tfm, const u8 *key,
+ return 0;
+ }
+
++static struct stm32_crc *stm32_crc_get_next_crc(void)
++{
++ struct stm32_crc *crc;
++
++ spin_lock_bh(&crc_list.lock);
++ crc = list_first_entry(&crc_list.dev_list, struct stm32_crc, list);
++ if (crc)
++ list_move_tail(&crc->list, &crc_list.dev_list);
++ spin_unlock_bh(&crc_list.lock);
++
++ return crc;
++}
++
+ static int stm32_crc_init(struct shash_desc *desc)
+ {
+ struct stm32_crc_desc_ctx *ctx = shash_desc_ctx(desc);
+ struct stm32_crc_ctx *mctx = crypto_shash_ctx(desc->tfm);
+ struct stm32_crc *crc;
+
+- spin_lock_bh(&crc_list.lock);
+- list_for_each_entry(crc, &crc_list.dev_list, list) {
+- ctx->crc = crc;
+- break;
+- }
+- spin_unlock_bh(&crc_list.lock);
++ crc = stm32_crc_get_next_crc();
++ if (!crc)
++ return -ENODEV;
+
+- pm_runtime_get_sync(ctx->crc->dev);
++ pm_runtime_get_sync(crc->dev);
+
+ /* Reset, set key, poly and configure in bit reverse mode */
+- writel_relaxed(bitrev32(mctx->key), ctx->crc->regs + CRC_INIT);
+- writel_relaxed(bitrev32(mctx->poly), ctx->crc->regs + CRC_POL);
+- writel_relaxed(CRC_CR_RESET | CRC_CR_REVERSE, ctx->crc->regs + CRC_CR);
++ writel_relaxed(bitrev32(mctx->key), crc->regs + CRC_INIT);
++ writel_relaxed(bitrev32(mctx->poly), crc->regs + CRC_POL);
++ writel_relaxed(CRC_CR_RESET | CRC_CR_REV_IN_WORD | CRC_CR_REV_OUT,
++ crc->regs + CRC_CR);
+
+ /* Store partial result */
+- ctx->partial = readl_relaxed(ctx->crc->regs + CRC_DR);
+- ctx->crc->nb_pending_bytes = 0;
++ ctx->partial = readl_relaxed(crc->regs + CRC_DR);
+
+- pm_runtime_mark_last_busy(ctx->crc->dev);
+- pm_runtime_put_autosuspend(ctx->crc->dev);
++ pm_runtime_mark_last_busy(crc->dev);
++ pm_runtime_put_autosuspend(crc->dev);
+
+ return 0;
+ }
+@@ -126,31 +135,49 @@ static int stm32_crc_update(struct shash_desc *desc, const u8 *d8,
+ unsigned int length)
+ {
+ struct stm32_crc_desc_ctx *ctx = shash_desc_ctx(desc);
+- struct stm32_crc *crc = ctx->crc;
+- u32 *d32;
+- unsigned int i;
++ struct stm32_crc_ctx *mctx = crypto_shash_ctx(desc->tfm);
++ struct stm32_crc *crc;
++
++ crc = stm32_crc_get_next_crc();
++ if (!crc)
++ return -ENODEV;
+
+ pm_runtime_get_sync(crc->dev);
+
+- if (unlikely(crc->nb_pending_bytes)) {
+- while (crc->nb_pending_bytes != sizeof(u32) && length) {
+- /* Fill in pending data */
+- crc->pending_data[crc->nb_pending_bytes++] = *(d8++);
++ /*
++ * Restore previously calculated CRC for this context as init value
++ * Restore polynomial configuration
++ * Configure in register for word input data,
++ * Configure out register in reversed bit mode data.
++ */
++ writel_relaxed(bitrev32(ctx->partial), crc->regs + CRC_INIT);
++ writel_relaxed(bitrev32(mctx->poly), crc->regs + CRC_POL);
++ writel_relaxed(CRC_CR_RESET | CRC_CR_REV_IN_WORD | CRC_CR_REV_OUT,
++ crc->regs + CRC_CR);
++
++ if (d8 != PTR_ALIGN(d8, sizeof(u32))) {
++ /* Configure for byte data */
++ writel_relaxed(CRC_CR_REV_IN_BYTE | CRC_CR_REV_OUT,
++ crc->regs + CRC_CR);
++ while (d8 != PTR_ALIGN(d8, sizeof(u32)) && length) {
++ writeb_relaxed(*d8++, crc->regs + CRC_DR);
+ length--;
+ }
+-
+- if (crc->nb_pending_bytes == sizeof(u32)) {
+- /* Process completed pending data */
+- writel_relaxed(*(u32 *)crc->pending_data,
+- crc->regs + CRC_DR);
+- crc->nb_pending_bytes = 0;
+- }
++ /* Configure for word data */
++ writel_relaxed(CRC_CR_REV_IN_WORD | CRC_CR_REV_OUT,
++ crc->regs + CRC_CR);
+ }
+
+- d32 = (u32 *)d8;
+- for (i = 0; i < length >> 2; i++)
+- /* Process 32 bits data */
+- writel_relaxed(*(d32++), crc->regs + CRC_DR);
++ for (; length >= sizeof(u32); d8 += sizeof(u32), length -= sizeof(u32))
++ writel_relaxed(*((u32 *)d8), crc->regs + CRC_DR);
++
++ if (length) {
++ /* Configure for byte data */
++ writel_relaxed(CRC_CR_REV_IN_BYTE | CRC_CR_REV_OUT,
++ crc->regs + CRC_CR);
++ while (length--)
++ writeb_relaxed(*d8++, crc->regs + CRC_DR);
++ }
+
+ /* Store partial result */
+ ctx->partial = readl_relaxed(crc->regs + CRC_DR);
+@@ -158,22 +185,6 @@ static int stm32_crc_update(struct shash_desc *desc, const u8 *d8,
+ pm_runtime_mark_last_busy(crc->dev);
+ pm_runtime_put_autosuspend(crc->dev);
+
+- /* Check for pending data (non 32 bits) */
+- length &= 3;
+- if (likely(!length))
+- return 0;
+-
+- if ((crc->nb_pending_bytes + length) >= sizeof(u32)) {
+- /* Shall not happen */
+- dev_err(crc->dev, "Pending data overflow\n");
+- return -EINVAL;
+- }
+-
+- d8 = (const u8 *)d32;
+- for (i = 0; i < length; i++)
+- /* Store pending data */
+- crc->pending_data[crc->nb_pending_bytes++] = *(d8++);
+-
+ return 0;
+ }
+
+@@ -202,6 +213,8 @@ static int stm32_crc_digest(struct shash_desc *desc, const u8 *data,
+ return stm32_crc_init(desc) ?: stm32_crc_finup(desc, data, length, out);
+ }
+
++static unsigned int refcnt;
++static DEFINE_MUTEX(refcnt_lock);
+ static struct shash_alg algs[] = {
+ /* CRC-32 */
+ {
+@@ -292,12 +305,18 @@ static int stm32_crc_probe(struct platform_device *pdev)
+ list_add(&crc->list, &crc_list.dev_list);
+ spin_unlock(&crc_list.lock);
+
+- ret = crypto_register_shashes(algs, ARRAY_SIZE(algs));
+- if (ret) {
+- dev_err(dev, "Failed to register\n");
+- clk_disable_unprepare(crc->clk);
+- return ret;
++ mutex_lock(&refcnt_lock);
++ if (!refcnt) {
++ ret = crypto_register_shashes(algs, ARRAY_SIZE(algs));
++ if (ret) {
++ mutex_unlock(&refcnt_lock);
++ dev_err(dev, "Failed to register\n");
++ clk_disable_unprepare(crc->clk);
++ return ret;
++ }
+ }
++ refcnt++;
++ mutex_unlock(&refcnt_lock);
+
+ dev_info(dev, "Initialized\n");
+
+@@ -318,7 +337,10 @@ static int stm32_crc_remove(struct platform_device *pdev)
+ list_del(&crc->list);
+ spin_unlock(&crc_list.lock);
+
+- crypto_unregister_shashes(algs, ARRAY_SIZE(algs));
++ mutex_lock(&refcnt_lock);
++ if (!--refcnt)
++ crypto_unregister_shashes(algs, ARRAY_SIZE(algs));
++ mutex_unlock(&refcnt_lock);
+
+ pm_runtime_disable(crc->dev);
+ pm_runtime_put_noidle(crc->dev);
+diff --git a/drivers/edac/amd64_edac.c b/drivers/edac/amd64_edac.c
+index f91f3bc1e0b2..4e9994de0b90 100644
+--- a/drivers/edac/amd64_edac.c
++++ b/drivers/edac/amd64_edac.c
+@@ -2319,6 +2319,16 @@ static struct amd64_family_type family_types[] = {
+ .dbam_to_cs = f17_addr_mask_to_cs_size,
+ }
+ },
++ [F17_M60H_CPUS] = {
++ .ctl_name = "F17h_M60h",
++ .f0_id = PCI_DEVICE_ID_AMD_17H_M60H_DF_F0,
++ .f6_id = PCI_DEVICE_ID_AMD_17H_M60H_DF_F6,
++ .max_mcs = 2,
++ .ops = {
++ .early_channel_count = f17_early_channel_count,
++ .dbam_to_cs = f17_addr_mask_to_cs_size,
++ }
++ },
+ [F17_M70H_CPUS] = {
+ .ctl_name = "F17h_M70h",
+ .f0_id = PCI_DEVICE_ID_AMD_17H_M70H_DF_F0,
+@@ -3357,6 +3367,10 @@ static struct amd64_family_type *per_family_init(struct amd64_pvt *pvt)
+ fam_type = &family_types[F17_M30H_CPUS];
+ pvt->ops = &family_types[F17_M30H_CPUS].ops;
+ break;
++ } else if (pvt->model >= 0x60 && pvt->model <= 0x6f) {
++ fam_type = &family_types[F17_M60H_CPUS];
++ pvt->ops = &family_types[F17_M60H_CPUS].ops;
++ break;
+ } else if (pvt->model >= 0x70 && pvt->model <= 0x7f) {
+ fam_type = &family_types[F17_M70H_CPUS];
+ pvt->ops = &family_types[F17_M70H_CPUS].ops;
+diff --git a/drivers/edac/amd64_edac.h b/drivers/edac/amd64_edac.h
+index abbf3c274d74..52b5d03eeba0 100644
+--- a/drivers/edac/amd64_edac.h
++++ b/drivers/edac/amd64_edac.h
+@@ -120,6 +120,8 @@
+ #define PCI_DEVICE_ID_AMD_17H_M10H_DF_F6 0x15ee
+ #define PCI_DEVICE_ID_AMD_17H_M30H_DF_F0 0x1490
+ #define PCI_DEVICE_ID_AMD_17H_M30H_DF_F6 0x1496
++#define PCI_DEVICE_ID_AMD_17H_M60H_DF_F0 0x1448
++#define PCI_DEVICE_ID_AMD_17H_M60H_DF_F6 0x144e
+ #define PCI_DEVICE_ID_AMD_17H_M70H_DF_F0 0x1440
+ #define PCI_DEVICE_ID_AMD_17H_M70H_DF_F6 0x1446
+ #define PCI_DEVICE_ID_AMD_19H_DF_F0 0x1650
+@@ -293,6 +295,7 @@ enum amd_families {
+ F17_CPUS,
+ F17_M10H_CPUS,
+ F17_M30H_CPUS,
++ F17_M60H_CPUS,
+ F17_M70H_CPUS,
+ F19_CPUS,
+ NUM_FAMILIES,
+diff --git a/drivers/firmware/efi/libstub/Makefile b/drivers/firmware/efi/libstub/Makefile
+index 094eabdecfe6..d85016553f14 100644
+--- a/drivers/firmware/efi/libstub/Makefile
++++ b/drivers/firmware/efi/libstub/Makefile
+@@ -30,6 +30,7 @@ KBUILD_CFLAGS := $(cflags-y) -DDISABLE_BRANCH_PROFILING \
+ -D__NO_FORTIFY \
+ $(call cc-option,-ffreestanding) \
+ $(call cc-option,-fno-stack-protector) \
++ $(call cc-option,-fno-addrsig) \
+ -D__DISABLE_EXPORTS
+
+ GCOV_PROFILE := n
+diff --git a/drivers/firmware/efi/libstub/randomalloc.c b/drivers/firmware/efi/libstub/randomalloc.c
+index 4578f59e160c..6200dfa650f5 100644
+--- a/drivers/firmware/efi/libstub/randomalloc.c
++++ b/drivers/firmware/efi/libstub/randomalloc.c
+@@ -74,6 +74,8 @@ efi_status_t efi_random_alloc(unsigned long size,
+ if (align < EFI_ALLOC_ALIGN)
+ align = EFI_ALLOC_ALIGN;
+
++ size = round_up(size, EFI_ALLOC_ALIGN);
++
+ /* count the suitable slots in each memory map entry */
+ for (map_offset = 0; map_offset < map_size; map_offset += desc_size) {
+ efi_memory_desc_t *md = (void *)memory_map + map_offset;
+@@ -109,7 +111,7 @@ efi_status_t efi_random_alloc(unsigned long size,
+ }
+
+ target = round_up(md->phys_addr, align) + target_slot * align;
+- pages = round_up(size, EFI_PAGE_SIZE) / EFI_PAGE_SIZE;
++ pages = size / EFI_PAGE_SIZE;
+
+ status = efi_bs_call(allocate_pages, EFI_ALLOCATE_ADDRESS,
+ EFI_LOADER_DATA, pages, &target);
+diff --git a/drivers/gnss/sirf.c b/drivers/gnss/sirf.c
+index effed3a8d398..2ecb1d3e8eeb 100644
+--- a/drivers/gnss/sirf.c
++++ b/drivers/gnss/sirf.c
+@@ -439,14 +439,18 @@ static int sirf_probe(struct serdev_device *serdev)
+
+ data->on_off = devm_gpiod_get_optional(dev, "sirf,onoff",
+ GPIOD_OUT_LOW);
+- if (IS_ERR(data->on_off))
++ if (IS_ERR(data->on_off)) {
++ ret = PTR_ERR(data->on_off);
+ goto err_put_device;
++ }
+
+ if (data->on_off) {
+ data->wakeup = devm_gpiod_get_optional(dev, "sirf,wakeup",
+ GPIOD_IN);
+- if (IS_ERR(data->wakeup))
++ if (IS_ERR(data->wakeup)) {
++ ret = PTR_ERR(data->wakeup);
+ goto err_put_device;
++ }
+
+ ret = regulator_enable(data->vcc);
+ if (ret)
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
+index 4277125a79ee..32f36c940abb 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
+@@ -161,16 +161,17 @@ void amdgpu_gem_object_close(struct drm_gem_object *obj,
+
+ struct amdgpu_bo_list_entry vm_pd;
+ struct list_head list, duplicates;
++ struct dma_fence *fence = NULL;
+ struct ttm_validate_buffer tv;
+ struct ww_acquire_ctx ticket;
+ struct amdgpu_bo_va *bo_va;
+- int r;
++ long r;
+
+ INIT_LIST_HEAD(&list);
+ INIT_LIST_HEAD(&duplicates);
+
+ tv.bo = &bo->tbo;
+- tv.num_shared = 1;
++ tv.num_shared = 2;
+ list_add(&tv.head, &list);
+
+ amdgpu_vm_get_pd_bo(vm, &list, &vm_pd);
+@@ -178,28 +179,34 @@ void amdgpu_gem_object_close(struct drm_gem_object *obj,
+ r = ttm_eu_reserve_buffers(&ticket, &list, false, &duplicates);
+ if (r) {
+ dev_err(adev->dev, "leaking bo va because "
+- "we fail to reserve bo (%d)\n", r);
++ "we fail to reserve bo (%ld)\n", r);
+ return;
+ }
+ bo_va = amdgpu_vm_bo_find(vm, bo);
+- if (bo_va && --bo_va->ref_count == 0) {
+- amdgpu_vm_bo_rmv(adev, bo_va);
+-
+- if (amdgpu_vm_ready(vm)) {
+- struct dma_fence *fence = NULL;
++ if (!bo_va || --bo_va->ref_count)
++ goto out_unlock;
+
+- r = amdgpu_vm_clear_freed(adev, vm, &fence);
+- if (unlikely(r)) {
+- dev_err(adev->dev, "failed to clear page "
+- "tables on GEM object close (%d)\n", r);
+- }
++ amdgpu_vm_bo_rmv(adev, bo_va);
++ if (!amdgpu_vm_ready(vm))
++ goto out_unlock;
+
+- if (fence) {
+- amdgpu_bo_fence(bo, fence, true);
+- dma_fence_put(fence);
+- }
+- }
++ fence = dma_resv_get_excl(bo->tbo.base.resv);
++ if (fence) {
++ amdgpu_bo_fence(bo, fence, true);
++ fence = NULL;
+ }
++
++ r = amdgpu_vm_clear_freed(adev, vm, &fence);
++ if (r || !fence)
++ goto out_unlock;
++
++ amdgpu_bo_fence(bo, fence, true);
++ dma_fence_put(fence);
++
++out_unlock:
++ if (unlikely(r < 0))
++ dev_err(adev->dev, "failed to clear page "
++ "tables on GEM object close (%ld)\n", r);
+ ttm_eu_backoff_reservation(&ticket, &list);
+ }
+
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c
+index abe94a55ecad..532f4d908b8d 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c
+@@ -383,6 +383,15 @@ static ssize_t amdgpu_set_dpm_forced_performance_level(struct device *dev,
+ return count;
+ }
+
++ if (adev->asic_type == CHIP_RAVEN) {
++ if (adev->rev_id < 8) {
++ if (current_level != AMD_DPM_FORCED_LEVEL_MANUAL && level == AMD_DPM_FORCED_LEVEL_MANUAL)
++ amdgpu_gfx_off_ctrl(adev, false);
++ else if (current_level == AMD_DPM_FORCED_LEVEL_MANUAL && level != AMD_DPM_FORCED_LEVEL_MANUAL)
++ amdgpu_gfx_off_ctrl(adev, true);
++ }
++ }
++
+ /* profile_exit setting is valid only when current mode is in profile mode */
+ if (!(current_level & (AMD_DPM_FORCED_LEVEL_PROFILE_STANDARD |
+ AMD_DPM_FORCED_LEVEL_PROFILE_MIN_SCLK |
+@@ -444,8 +453,11 @@ static ssize_t amdgpu_get_pp_num_states(struct device *dev,
+ ret = smu_get_power_num_states(&adev->smu, &data);
+ if (ret)
+ return ret;
+- } else if (adev->powerplay.pp_funcs->get_pp_num_states)
++ } else if (adev->powerplay.pp_funcs->get_pp_num_states) {
+ amdgpu_dpm_get_pp_num_states(adev, &data);
++ } else {
++ memset(&data, 0, sizeof(data));
++ }
+
+ pm_runtime_mark_last_busy(ddev->dev);
+ pm_runtime_put_autosuspend(ddev->dev);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+index 6d9252a27916..06242096973c 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+@@ -2996,10 +2996,17 @@ int amdgpu_vm_make_compute(struct amdgpu_device *adev, struct amdgpu_vm *vm,
+ !amdgpu_gmc_vram_full_visible(&adev->gmc)),
+ "CPU update of VM recommended only for large BAR system\n");
+
+- if (vm->use_cpu_for_update)
++ if (vm->use_cpu_for_update) {
++ /* Sync with last SDMA update/clear before switching to CPU */
++ r = amdgpu_bo_sync_wait(vm->root.base.bo,
++ AMDGPU_FENCE_OWNER_UNDEFINED, true);
++ if (r)
++ goto free_idr;
++
+ vm->update_funcs = &amdgpu_vm_cpu_funcs;
+- else
++ } else {
+ vm->update_funcs = &amdgpu_vm_sdma_funcs;
++ }
+ dma_fence_put(vm->last_update);
+ vm->last_update = NULL;
+ vm->is_compute_context = true;
+diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn10/rv1_clk_mgr_vbios_smu.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn10/rv1_clk_mgr_vbios_smu.c
+index 97b7f32294fd..c320b7af7d34 100644
+--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn10/rv1_clk_mgr_vbios_smu.c
++++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn10/rv1_clk_mgr_vbios_smu.c
+@@ -97,9 +97,6 @@ int rv1_vbios_smu_set_dispclk(struct clk_mgr_internal *clk_mgr, int requested_di
+ VBIOSSMC_MSG_SetDispclkFreq,
+ requested_dispclk_khz / 1000);
+
+- /* Actual dispclk set is returned in the parameter register */
+- actual_dispclk_set_mhz = REG_READ(MP1_SMN_C2PMSG_83) * 1000;
+-
+ if (!IS_FPGA_MAXIMUS_DC(dc->ctx->dce_environment)) {
+ if (dmcu && dmcu->funcs->is_dmcu_initialized(dmcu)) {
+ if (clk_mgr->dfs_bypass_disp_clk != actual_dispclk_set_mhz)
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link_hwss.c b/drivers/gpu/drm/amd/display/dc/core/dc_link_hwss.c
+index 51e0ee6e7695..6590f51caefa 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc_link_hwss.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc_link_hwss.c
+@@ -400,7 +400,7 @@ static bool dp_set_dsc_on_rx(struct pipe_ctx *pipe_ctx, bool enable)
+ struct dc_stream_state *stream = pipe_ctx->stream;
+ bool result = false;
+
+- if (IS_FPGA_MAXIMUS_DC(dc->ctx->dce_environment))
++ if (dc_is_virtual_signal(stream->signal) || IS_FPGA_MAXIMUS_DC(dc->ctx->dce_environment))
+ result = true;
+ else
+ result = dm_helpers_dp_write_dsc_enable(dc->ctx, stream, enable);
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_optc.c b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_optc.c
+index 17d96ec6acd8..ec0ab42becba 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_optc.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_optc.c
+@@ -299,6 +299,7 @@ void optc1_set_vtg_params(struct timing_generator *optc,
+ uint32_t asic_blank_end;
+ uint32_t v_init;
+ uint32_t v_fp2 = 0;
++ int32_t vertical_line_start;
+
+ struct optc *optc1 = DCN10TG_FROM_TG(optc);
+
+@@ -315,8 +316,9 @@ void optc1_set_vtg_params(struct timing_generator *optc,
+ patched_crtc_timing.v_border_top;
+
+ /* if VSTARTUP is before VSYNC, FP2 is the offset, otherwise 0 */
+- if (optc1->vstartup_start > asic_blank_end)
+- v_fp2 = optc1->vstartup_start - asic_blank_end;
++ vertical_line_start = asic_blank_end - optc1->vstartup_start + 1;
++ if (vertical_line_start < 0)
++ v_fp2 = -vertical_line_start;
+
+ /* Interlace */
+ if (REG(OTG_INTERLACE_CONTROL)) {
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c
+index a023a4d59f41..c4fa13e4eaf9 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c
+@@ -1478,8 +1478,11 @@ static void dcn20_program_pipe(
+ if (pipe_ctx->update_flags.bits.odm)
+ hws->funcs.update_odm(dc, context, pipe_ctx);
+
+- if (pipe_ctx->update_flags.bits.enable)
++ if (pipe_ctx->update_flags.bits.enable) {
+ dcn20_enable_plane(dc, pipe_ctx, context);
++ if (dc->res_pool->hubbub->funcs->force_wm_propagate_to_pipes)
++ dc->res_pool->hubbub->funcs->force_wm_propagate_to_pipes(dc->res_pool->hubbub);
++ }
+
+ if (pipe_ctx->update_flags.raw || pipe_ctx->plane_state->update_flags.raw || pipe_ctx->stream->update_flags.raw)
+ dcn20_update_dchubp_dpp(dc, pipe_ctx, context);
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c
+index e4348e3b6389..2719cdecc1cb 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c
+@@ -2597,19 +2597,24 @@ int dcn20_validate_apply_pipe_split_flags(
+
+ /* Avoid split loop looks for lowest voltage level that allows most unsplit pipes possible */
+ if (avoid_split) {
++ int max_mpc_comb = context->bw_ctx.dml.vba.maxMpcComb;
++
+ for (i = 0, pipe_idx = 0; i < dc->res_pool->pipe_count; i++) {
+ if (!context->res_ctx.pipe_ctx[i].stream)
+ continue;
+
+ for (vlevel_split = vlevel; vlevel <= context->bw_ctx.dml.soc.num_states; vlevel++)
+- if (context->bw_ctx.dml.vba.NoOfDPP[vlevel][0][pipe_idx] == 1)
++ if (context->bw_ctx.dml.vba.NoOfDPP[vlevel][0][pipe_idx] == 1 &&
++ context->bw_ctx.dml.vba.ModeSupport[vlevel][0])
+ break;
+ /* Impossible to not split this pipe */
+ if (vlevel > context->bw_ctx.dml.soc.num_states)
+ vlevel = vlevel_split;
++ else
++ max_mpc_comb = 0;
+ pipe_idx++;
+ }
+- context->bw_ctx.dml.vba.maxMpcComb = 0;
++ context->bw_ctx.dml.vba.maxMpcComb = max_mpc_comb;
+ }
+
+ /* Split loop sets which pipe should be split based on dml outputs and dc flags */
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_resource.c b/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_resource.c
+index a721bb401ef0..6d1736cf5c12 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_resource.c
+@@ -1694,12 +1694,8 @@ static int dcn21_populate_dml_pipes_from_context(
+ {
+ uint32_t pipe_cnt = dcn20_populate_dml_pipes_from_context(dc, context, pipes);
+ int i;
+- struct resource_context *res_ctx = &context->res_ctx;
+
+- for (i = 0; i < dc->res_pool->pipe_count; i++) {
+-
+- if (!res_ctx->pipe_ctx[i].stream)
+- continue;
++ for (i = 0; i < pipe_cnt; i++) {
+
+ pipes[i].pipe.src.hostvm = 1;
+ pipes[i].pipe.src.gpuvm = 1;
+diff --git a/drivers/gpu/drm/amd/display/dc/inc/hw/dchubbub.h b/drivers/gpu/drm/amd/display/dc/inc/hw/dchubbub.h
+index f5dd0cc73c63..47a566d82d6e 100644
+--- a/drivers/gpu/drm/amd/display/dc/inc/hw/dchubbub.h
++++ b/drivers/gpu/drm/amd/display/dc/inc/hw/dchubbub.h
+@@ -144,6 +144,8 @@ struct hubbub_funcs {
+ void (*allow_self_refresh_control)(struct hubbub *hubbub, bool allow);
+
+ void (*apply_DEDCN21_147_wa)(struct hubbub *hubbub);
++
++ void (*force_wm_propagate_to_pipes)(struct hubbub *hubbub);
+ };
+
+ struct hubbub {
+diff --git a/drivers/gpu/drm/ast/ast_mode.c b/drivers/gpu/drm/ast/ast_mode.c
+index cdd6c46d6557..7a9f20a2fd30 100644
+--- a/drivers/gpu/drm/ast/ast_mode.c
++++ b/drivers/gpu/drm/ast/ast_mode.c
+@@ -881,6 +881,17 @@ static const struct drm_crtc_helper_funcs ast_crtc_helper_funcs = {
+ .atomic_disable = ast_crtc_helper_atomic_disable,
+ };
+
++static void ast_crtc_reset(struct drm_crtc *crtc)
++{
++ struct ast_crtc_state *ast_state =
++ kzalloc(sizeof(*ast_state), GFP_KERNEL);
++
++ if (crtc->state)
++ crtc->funcs->atomic_destroy_state(crtc, crtc->state);
++
++ __drm_atomic_helper_crtc_reset(crtc, &ast_state->base);
++}
++
+ static void ast_crtc_destroy(struct drm_crtc *crtc)
+ {
+ drm_crtc_cleanup(crtc);
+@@ -919,7 +930,7 @@ static void ast_crtc_atomic_destroy_state(struct drm_crtc *crtc,
+ }
+
+ static const struct drm_crtc_funcs ast_crtc_funcs = {
+- .reset = drm_atomic_helper_crtc_reset,
++ .reset = ast_crtc_reset,
+ .set_config = drm_crtc_helper_set_config,
+ .gamma_set = drm_atomic_helper_legacy_gamma_set,
+ .destroy = ast_crtc_destroy,
+diff --git a/drivers/gpu/drm/bridge/adv7511/adv7511_audio.c b/drivers/gpu/drm/bridge/adv7511/adv7511_audio.c
+index a428185be2c1..d05b3033b510 100644
+--- a/drivers/gpu/drm/bridge/adv7511/adv7511_audio.c
++++ b/drivers/gpu/drm/bridge/adv7511/adv7511_audio.c
+@@ -19,13 +19,15 @@ static void adv7511_calc_cts_n(unsigned int f_tmds, unsigned int fs,
+ {
+ switch (fs) {
+ case 32000:
+- *n = 4096;
++ case 48000:
++ case 96000:
++ case 192000:
++ *n = fs * 128 / 1000;
+ break;
+ case 44100:
+- *n = 6272;
+- break;
+- case 48000:
+- *n = 6144;
++ case 88200:
++ case 176400:
++ *n = fs * 128 / 900;
+ break;
+ }
+
+diff --git a/drivers/gpu/drm/bridge/panel.c b/drivers/gpu/drm/bridge/panel.c
+index 8461ee8304ba..7a3df0f319f3 100644
+--- a/drivers/gpu/drm/bridge/panel.c
++++ b/drivers/gpu/drm/bridge/panel.c
+@@ -166,7 +166,7 @@ static const struct drm_bridge_funcs panel_bridge_bridge_funcs = {
+ *
+ * The connector type is set to @panel->connector_type, which must be set to a
+ * known type. Calling this function with a panel whose connector type is
+- * DRM_MODE_CONNECTOR_Unknown will return NULL.
++ * DRM_MODE_CONNECTOR_Unknown will return ERR_PTR(-EINVAL).
+ *
+ * See devm_drm_panel_bridge_add() for an automatically managed version of this
+ * function.
+@@ -174,7 +174,7 @@ static const struct drm_bridge_funcs panel_bridge_bridge_funcs = {
+ struct drm_bridge *drm_panel_bridge_add(struct drm_panel *panel)
+ {
+ if (WARN_ON(panel->connector_type == DRM_MODE_CONNECTOR_Unknown))
+- return NULL;
++ return ERR_PTR(-EINVAL);
+
+ return drm_panel_bridge_add_typed(panel, panel->connector_type);
+ }
+@@ -265,7 +265,7 @@ struct drm_bridge *devm_drm_panel_bridge_add(struct device *dev,
+ struct drm_panel *panel)
+ {
+ if (WARN_ON(panel->connector_type == DRM_MODE_CONNECTOR_Unknown))
+- return NULL;
++ return ERR_PTR(-EINVAL);
+
+ return devm_drm_panel_bridge_add_typed(dev, panel,
+ panel->connector_type);
+diff --git a/drivers/gpu/drm/bridge/tc358768.c b/drivers/gpu/drm/bridge/tc358768.c
+index 1b39e8d37834..6650fe4cfc20 100644
+--- a/drivers/gpu/drm/bridge/tc358768.c
++++ b/drivers/gpu/drm/bridge/tc358768.c
+@@ -178,6 +178,8 @@ static int tc358768_clear_error(struct tc358768_priv *priv)
+
+ static void tc358768_write(struct tc358768_priv *priv, u32 reg, u32 val)
+ {
++ /* work around https://gcc.gnu.org/bugzilla/show_bug.cgi?id=81715 */
++ int tmpval = val;
+ size_t count = 2;
+
+ if (priv->error)
+@@ -187,7 +189,7 @@ static void tc358768_write(struct tc358768_priv *priv, u32 reg, u32 val)
+ if (reg < 0x100 || reg >= 0x600)
+ count = 1;
+
+- priv->error = regmap_bulk_write(priv->regmap, reg, &val, count);
++ priv->error = regmap_bulk_write(priv->regmap, reg, &tmpval, count);
+ }
+
+ static void tc358768_read(struct tc358768_priv *priv, u32 reg, u32 *val)
+diff --git a/drivers/gpu/drm/drm_dp_helper.c b/drivers/gpu/drm/drm_dp_helper.c
+index c6fbe6e6bc9d..41f0e797ce8c 100644
+--- a/drivers/gpu/drm/drm_dp_helper.c
++++ b/drivers/gpu/drm/drm_dp_helper.c
+@@ -1313,6 +1313,7 @@ static const struct edid_quirk edid_quirk_list[] = {
+ { MFG(0x06, 0xaf), PROD_ID(0xeb, 0x41), BIT(DP_QUIRK_FORCE_DPCD_BACKLIGHT) },
+ { MFG(0x4d, 0x10), PROD_ID(0xc7, 0x14), BIT(DP_QUIRK_FORCE_DPCD_BACKLIGHT) },
+ { MFG(0x4d, 0x10), PROD_ID(0xe6, 0x14), BIT(DP_QUIRK_FORCE_DPCD_BACKLIGHT) },
++ { MFG(0x4c, 0x83), PROD_ID(0x47, 0x41), BIT(DP_QUIRK_FORCE_DPCD_BACKLIGHT) },
+ };
+
+ #undef MFG
+diff --git a/drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_de.c b/drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_de.c
+index 55b46a7150a5..cc70e836522f 100644
+--- a/drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_de.c
++++ b/drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_de.c
+@@ -94,6 +94,10 @@ static int hibmc_plane_atomic_check(struct drm_plane *plane,
+ return -EINVAL;
+ }
+
++ if (state->fb->pitches[0] % 128 != 0) {
++ DRM_DEBUG_ATOMIC("wrong stride with 128-byte aligned\n");
++ return -EINVAL;
++ }
+ return 0;
+ }
+
+@@ -119,11 +123,8 @@ static void hibmc_plane_atomic_update(struct drm_plane *plane,
+ writel(gpu_addr, priv->mmio + HIBMC_CRT_FB_ADDRESS);
+
+ reg = state->fb->width * (state->fb->format->cpp[0]);
+- /* now line_pad is 16 */
+- reg = PADDING(16, reg);
+
+- line_l = state->fb->width * state->fb->format->cpp[0];
+- line_l = PADDING(16, line_l);
++ line_l = state->fb->pitches[0];
+ writel(HIBMC_FIELD(HIBMC_CRT_FB_WIDTH_WIDTH, reg) |
+ HIBMC_FIELD(HIBMC_CRT_FB_WIDTH_OFFS, line_l),
+ priv->mmio + HIBMC_CRT_FB_WIDTH);
+diff --git a/drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_drv.c b/drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_drv.c
+index 222356a4f9a8..79a180ae4509 100644
+--- a/drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_drv.c
++++ b/drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_drv.c
+@@ -94,7 +94,7 @@ static int hibmc_kms_init(struct hibmc_drm_private *priv)
+ priv->dev->mode_config.max_height = 1200;
+
+ priv->dev->mode_config.fb_base = priv->fb_base;
+- priv->dev->mode_config.preferred_depth = 24;
++ priv->dev->mode_config.preferred_depth = 32;
+ priv->dev->mode_config.prefer_shadow = 1;
+
+ priv->dev->mode_config.funcs = (void *)&hibmc_mode_funcs;
+@@ -307,7 +307,7 @@ static int hibmc_load(struct drm_device *dev)
+ /* reset all the states of crtc/plane/encoder/connector */
+ drm_mode_config_reset(dev);
+
+- ret = drm_fbdev_generic_setup(dev, 16);
++ ret = drm_fbdev_generic_setup(dev, dev->mode_config.preferred_depth);
+ if (ret) {
+ DRM_ERROR("failed to initialize fbdev: %d\n", ret);
+ goto err;
+diff --git a/drivers/gpu/drm/hisilicon/hibmc/hibmc_ttm.c b/drivers/gpu/drm/hisilicon/hibmc/hibmc_ttm.c
+index 99397ac3b363..322bd542e89d 100644
+--- a/drivers/gpu/drm/hisilicon/hibmc/hibmc_ttm.c
++++ b/drivers/gpu/drm/hisilicon/hibmc/hibmc_ttm.c
+@@ -50,7 +50,7 @@ void hibmc_mm_fini(struct hibmc_drm_private *hibmc)
+ int hibmc_dumb_create(struct drm_file *file, struct drm_device *dev,
+ struct drm_mode_create_dumb *args)
+ {
+- return drm_gem_vram_fill_create_dumb(file, dev, 0, 16, args);
++ return drm_gem_vram_fill_create_dumb(file, dev, 0, 128, args);
+ }
+
+ const struct drm_mode_config_funcs hibmc_mode_funcs = {
+diff --git a/drivers/gpu/drm/mcde/mcde_dsi.c b/drivers/gpu/drm/mcde/mcde_dsi.c
+index 7af5ebb0c436..e705afc08c4e 100644
+--- a/drivers/gpu/drm/mcde/mcde_dsi.c
++++ b/drivers/gpu/drm/mcde/mcde_dsi.c
+@@ -1073,10 +1073,9 @@ static int mcde_dsi_bind(struct device *dev, struct device *master,
+ panel = NULL;
+
+ bridge = of_drm_find_bridge(child);
+- if (IS_ERR(bridge)) {
+- dev_err(dev, "failed to find bridge (%ld)\n",
+- PTR_ERR(bridge));
+- return PTR_ERR(bridge);
++ if (!bridge) {
++ dev_err(dev, "failed to find bridge\n");
++ return -EINVAL;
+ }
+ }
+ }
+diff --git a/drivers/gpu/drm/mediatek/mtk_dpi.c b/drivers/gpu/drm/mediatek/mtk_dpi.c
+index 4f0ce4cd5b8c..2994c63ea279 100644
+--- a/drivers/gpu/drm/mediatek/mtk_dpi.c
++++ b/drivers/gpu/drm/mediatek/mtk_dpi.c
+@@ -10,7 +10,9 @@
+ #include <linux/kernel.h>
+ #include <linux/of.h>
+ #include <linux/of_device.h>
++#include <linux/of_gpio.h>
+ #include <linux/of_graph.h>
++#include <linux/pinctrl/consumer.h>
+ #include <linux/platform_device.h>
+ #include <linux/types.h>
+
+@@ -74,6 +76,9 @@ struct mtk_dpi {
+ enum mtk_dpi_out_yc_map yc_map;
+ enum mtk_dpi_out_bit_num bit_num;
+ enum mtk_dpi_out_channel_swap channel_swap;
++ struct pinctrl *pinctrl;
++ struct pinctrl_state *pins_gpio;
++ struct pinctrl_state *pins_dpi;
+ int refcount;
+ };
+
+@@ -379,6 +384,9 @@ static void mtk_dpi_power_off(struct mtk_dpi *dpi)
+ if (--dpi->refcount != 0)
+ return;
+
++ if (dpi->pinctrl && dpi->pins_gpio)
++ pinctrl_select_state(dpi->pinctrl, dpi->pins_gpio);
++
+ mtk_dpi_disable(dpi);
+ clk_disable_unprepare(dpi->pixel_clk);
+ clk_disable_unprepare(dpi->engine_clk);
+@@ -403,6 +411,9 @@ static int mtk_dpi_power_on(struct mtk_dpi *dpi)
+ goto err_pixel;
+ }
+
++ if (dpi->pinctrl && dpi->pins_dpi)
++ pinctrl_select_state(dpi->pinctrl, dpi->pins_dpi);
++
+ mtk_dpi_enable(dpi);
+ return 0;
+
+@@ -705,6 +716,26 @@ static int mtk_dpi_probe(struct platform_device *pdev)
+ dpi->dev = dev;
+ dpi->conf = (struct mtk_dpi_conf *)of_device_get_match_data(dev);
+
++ dpi->pinctrl = devm_pinctrl_get(&pdev->dev);
++ if (IS_ERR(dpi->pinctrl)) {
++ dpi->pinctrl = NULL;
++ dev_dbg(&pdev->dev, "Cannot find pinctrl!\n");
++ }
++ if (dpi->pinctrl) {
++ dpi->pins_gpio = pinctrl_lookup_state(dpi->pinctrl, "sleep");
++ if (IS_ERR(dpi->pins_gpio)) {
++ dpi->pins_gpio = NULL;
++ dev_dbg(&pdev->dev, "Cannot find pinctrl idle!\n");
++ }
++ if (dpi->pins_gpio)
++ pinctrl_select_state(dpi->pinctrl, dpi->pins_gpio);
++
++ dpi->pins_dpi = pinctrl_lookup_state(dpi->pinctrl, "default");
++ if (IS_ERR(dpi->pins_dpi)) {
++ dpi->pins_dpi = NULL;
++ dev_dbg(&pdev->dev, "Cannot find pinctrl active!\n");
++ }
++ }
+ mem = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ dpi->regs = devm_ioremap_resource(dev, mem);
+ if (IS_ERR(dpi->regs)) {
+diff --git a/drivers/gpu/drm/rcar-du/rcar_du_plane.c b/drivers/gpu/drm/rcar-du/rcar_du_plane.c
+index c6430027169f..a0021fc25b27 100644
+--- a/drivers/gpu/drm/rcar-du/rcar_du_plane.c
++++ b/drivers/gpu/drm/rcar-du/rcar_du_plane.c
+@@ -785,13 +785,15 @@ int rcar_du_planes_init(struct rcar_du_group *rgrp)
+
+ drm_plane_create_alpha_property(&plane->plane);
+
+- if (type == DRM_PLANE_TYPE_PRIMARY)
+- continue;
+-
+- drm_object_attach_property(&plane->plane.base,
+- rcdu->props.colorkey,
+- RCAR_DU_COLORKEY_NONE);
+- drm_plane_create_zpos_property(&plane->plane, 1, 1, 7);
++ if (type == DRM_PLANE_TYPE_PRIMARY) {
++ drm_plane_create_zpos_immutable_property(&plane->plane,
++ 0);
++ } else {
++ drm_object_attach_property(&plane->plane.base,
++ rcdu->props.colorkey,
++ RCAR_DU_COLORKEY_NONE);
++ drm_plane_create_zpos_property(&plane->plane, 1, 1, 7);
++ }
+ }
+
+ return 0;
+diff --git a/drivers/gpu/drm/rcar-du/rcar_du_vsp.c b/drivers/gpu/drm/rcar-du/rcar_du_vsp.c
+index 5e4faf258c31..f1a81c9b184d 100644
+--- a/drivers/gpu/drm/rcar-du/rcar_du_vsp.c
++++ b/drivers/gpu/drm/rcar-du/rcar_du_vsp.c
+@@ -392,12 +392,14 @@ int rcar_du_vsp_init(struct rcar_du_vsp *vsp, struct device_node *np,
+ drm_plane_helper_add(&plane->plane,
+ &rcar_du_vsp_plane_helper_funcs);
+
+- if (type == DRM_PLANE_TYPE_PRIMARY)
+- continue;
+-
+- drm_plane_create_alpha_property(&plane->plane);
+- drm_plane_create_zpos_property(&plane->plane, 1, 1,
+- vsp->num_planes - 1);
++ if (type == DRM_PLANE_TYPE_PRIMARY) {
++ drm_plane_create_zpos_immutable_property(&plane->plane,
++ 0);
++ } else {
++ drm_plane_create_alpha_property(&plane->plane);
++ drm_plane_create_zpos_property(&plane->plane, 1, 1,
++ vsp->num_planes - 1);
++ }
+ }
+
+ return 0;
+diff --git a/drivers/hv/connection.c b/drivers/hv/connection.c
+index 74e77de89b4f..f4bd306d2cef 100644
+--- a/drivers/hv/connection.c
++++ b/drivers/hv/connection.c
+@@ -69,7 +69,6 @@ MODULE_PARM_DESC(max_version,
+ int vmbus_negotiate_version(struct vmbus_channel_msginfo *msginfo, u32 version)
+ {
+ int ret = 0;
+- unsigned int cur_cpu;
+ struct vmbus_channel_initiate_contact *msg;
+ unsigned long flags;
+
+@@ -102,24 +101,7 @@ int vmbus_negotiate_version(struct vmbus_channel_msginfo *msginfo, u32 version)
+
+ msg->monitor_page1 = virt_to_phys(vmbus_connection.monitor_pages[0]);
+ msg->monitor_page2 = virt_to_phys(vmbus_connection.monitor_pages[1]);
+- /*
+- * We want all channel messages to be delivered on CPU 0.
+- * This has been the behavior pre-win8. This is not
+- * perf issue and having all channel messages delivered on CPU 0
+- * would be ok.
+- * For post win8 hosts, we support receiving channel messagges on
+- * all the CPUs. This is needed for kexec to work correctly where
+- * the CPU attempting to connect may not be CPU 0.
+- */
+- if (version >= VERSION_WIN8_1) {
+- cur_cpu = get_cpu();
+- msg->target_vcpu = hv_cpu_number_to_vp_number(cur_cpu);
+- vmbus_connection.connect_cpu = cur_cpu;
+- put_cpu();
+- } else {
+- msg->target_vcpu = 0;
+- vmbus_connection.connect_cpu = 0;
+- }
++ msg->target_vcpu = hv_cpu_number_to_vp_number(VMBUS_CONNECT_CPU);
+
+ /*
+ * Add to list before we send the request since we may
+diff --git a/drivers/hv/hv.c b/drivers/hv/hv.c
+index 533c8b82b344..3a5648aa5599 100644
+--- a/drivers/hv/hv.c
++++ b/drivers/hv/hv.c
+@@ -245,6 +245,13 @@ int hv_synic_cleanup(unsigned int cpu)
+ bool channel_found = false;
+ unsigned long flags;
+
++ /*
++ * Hyper-V does not provide a way to change the connect CPU once
++ * it is set; we must prevent the connect CPU from going offline.
++ */
++ if (cpu == VMBUS_CONNECT_CPU)
++ return -EBUSY;
++
+ /*
+ * Search for channels which are bound to the CPU we're about to
+ * cleanup. In case we find one and vmbus is still connected we need to
+diff --git a/drivers/hv/hyperv_vmbus.h b/drivers/hv/hyperv_vmbus.h
+index 70b30e223a57..67fb1edcbf52 100644
+--- a/drivers/hv/hyperv_vmbus.h
++++ b/drivers/hv/hyperv_vmbus.h
+@@ -212,12 +212,13 @@ enum vmbus_connect_state {
+
+ #define MAX_SIZE_CHANNEL_MESSAGE HV_MESSAGE_PAYLOAD_BYTE_COUNT
+
+-struct vmbus_connection {
+- /*
+- * CPU on which the initial host contact was made.
+- */
+- int connect_cpu;
++/*
++ * The CPU that Hyper-V will interrupt for VMBUS messages, such as
++ * CHANNELMSG_OFFERCHANNEL and CHANNELMSG_RESCIND_CHANNELOFFER.
++ */
++#define VMBUS_CONNECT_CPU 0
+
++struct vmbus_connection {
+ u32 msg_conn_id;
+
+ atomic_t offer_in_progress;
+diff --git a/drivers/hv/vmbus_drv.c b/drivers/hv/vmbus_drv.c
+index e06c6b9555cf..ec173da45b42 100644
+--- a/drivers/hv/vmbus_drv.c
++++ b/drivers/hv/vmbus_drv.c
+@@ -1098,14 +1098,28 @@ void vmbus_on_msg_dpc(unsigned long data)
+ /*
+ * If we are handling the rescind message;
+ * schedule the work on the global work queue.
++ *
++ * The OFFER message and the RESCIND message should
++ * not be handled by the same serialized work queue,
++ * because the OFFER handler may call vmbus_open(),
++ * which tries to open the channel by sending an
++ * OPEN_CHANNEL message to the host and waits for
++ * the host's response; however, if the host has
++ * rescinded the channel before it receives the
++ * OPEN_CHANNEL message, the host just silently
++ * ignores the OPEN_CHANNEL message; as a result,
++ * the guest's OFFER handler hangs for ever, if we
++ * handle the RESCIND message in the same serialized
++ * work queue: the RESCIND handler can not start to
++ * run before the OFFER handler finishes.
+ */
+- schedule_work_on(vmbus_connection.connect_cpu,
++ schedule_work_on(VMBUS_CONNECT_CPU,
+ &ctx->work);
+ break;
+
+ case CHANNELMSG_OFFERCHANNEL:
+ atomic_inc(&vmbus_connection.offer_in_progress);
+- queue_work_on(vmbus_connection.connect_cpu,
++ queue_work_on(VMBUS_CONNECT_CPU,
+ vmbus_connection.work_queue,
+ &ctx->work);
+ break;
+@@ -1152,7 +1166,7 @@ static void vmbus_force_channel_rescinded(struct vmbus_channel *channel)
+
+ INIT_WORK(&ctx->work, vmbus_onmessage_work);
+
+- queue_work_on(vmbus_connection.connect_cpu,
++ queue_work_on(VMBUS_CONNECT_CPU,
+ vmbus_connection.work_queue,
+ &ctx->work);
+ }
+diff --git a/drivers/hwmon/k10temp.c b/drivers/hwmon/k10temp.c
+index 9915578533bb..8f12995ec133 100644
+--- a/drivers/hwmon/k10temp.c
++++ b/drivers/hwmon/k10temp.c
+@@ -632,6 +632,7 @@ static const struct pci_device_id k10temp_id_table[] = {
+ { PCI_VDEVICE(AMD, PCI_DEVICE_ID_AMD_17H_DF_F3) },
+ { PCI_VDEVICE(AMD, PCI_DEVICE_ID_AMD_17H_M10H_DF_F3) },
+ { PCI_VDEVICE(AMD, PCI_DEVICE_ID_AMD_17H_M30H_DF_F3) },
++ { PCI_VDEVICE(AMD, PCI_DEVICE_ID_AMD_17H_M60H_DF_F3) },
+ { PCI_VDEVICE(AMD, PCI_DEVICE_ID_AMD_17H_M70H_DF_F3) },
+ { PCI_VDEVICE(HYGON, PCI_DEVICE_ID_AMD_17H_DF_F3) },
+ {}
+diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
+index 0182cff2c7ac..11ed871dd255 100644
+--- a/drivers/iommu/intel-iommu.c
++++ b/drivers/iommu/intel-iommu.c
+@@ -2545,7 +2545,7 @@ dmar_search_domain_by_dev_info(int segment, int bus, int devfn)
+ struct device_domain_info *info;
+
+ list_for_each_entry(info, &device_domain_list, global)
+- if (info->iommu->segment == segment && info->bus == bus &&
++ if (info->segment == segment && info->bus == bus &&
+ info->devfn == devfn)
+ return info;
+
+@@ -2582,6 +2582,12 @@ static int domain_setup_first_level(struct intel_iommu *iommu,
+ flags);
+ }
+
++static bool dev_is_real_dma_subdevice(struct device *dev)
++{
++ return dev && dev_is_pci(dev) &&
++ pci_real_dma_dev(to_pci_dev(dev)) != to_pci_dev(dev);
++}
++
+ static struct dmar_domain *dmar_insert_one_dev_info(struct intel_iommu *iommu,
+ int bus, int devfn,
+ struct device *dev,
+@@ -2596,8 +2602,18 @@ static struct dmar_domain *dmar_insert_one_dev_info(struct intel_iommu *iommu,
+ if (!info)
+ return NULL;
+
+- info->bus = bus;
+- info->devfn = devfn;
++ if (!dev_is_real_dma_subdevice(dev)) {
++ info->bus = bus;
++ info->devfn = devfn;
++ info->segment = iommu->segment;
++ } else {
++ struct pci_dev *pdev = to_pci_dev(dev);
++
++ info->bus = pdev->bus->number;
++ info->devfn = pdev->devfn;
++ info->segment = pci_domain_nr(pdev->bus);
++ }
++
+ info->ats_supported = info->pasid_supported = info->pri_supported = 0;
+ info->ats_enabled = info->pasid_enabled = info->pri_enabled = 0;
+ info->ats_qdep = 0;
+@@ -2637,7 +2653,8 @@ static struct dmar_domain *dmar_insert_one_dev_info(struct intel_iommu *iommu,
+
+ if (!found) {
+ struct device_domain_info *info2;
+- info2 = dmar_search_domain_by_dev_info(iommu->segment, bus, devfn);
++ info2 = dmar_search_domain_by_dev_info(info->segment, info->bus,
++ info->devfn);
+ if (info2) {
+ found = info2->domain;
+ info2->dev = dev;
+@@ -5286,7 +5303,8 @@ static void __dmar_remove_one_dev_info(struct device_domain_info *info)
+ PASID_RID2PASID);
+
+ iommu_disable_dev_iotlb(info);
+- domain_context_clear(iommu, info->dev);
++ if (!dev_is_real_dma_subdevice(info->dev))
++ domain_context_clear(iommu, info->dev);
+ intel_pasid_free_table(info->dev);
+ }
+
+diff --git a/drivers/irqchip/irq-sifive-plic.c b/drivers/irqchip/irq-sifive-plic.c
+index d0a71febdadc..eb44816b8fad 100644
+--- a/drivers/irqchip/irq-sifive-plic.c
++++ b/drivers/irqchip/irq-sifive-plic.c
+@@ -76,6 +76,7 @@ struct plic_handler {
+ void __iomem *enable_base;
+ struct plic_priv *priv;
+ };
++static bool plic_cpuhp_setup_done;
+ static DEFINE_PER_CPU(struct plic_handler, plic_handlers);
+
+ static inline void plic_toggle(struct plic_handler *handler,
+@@ -176,9 +177,12 @@ static struct irq_chip plic_chip = {
+ static int plic_irqdomain_map(struct irq_domain *d, unsigned int irq,
+ irq_hw_number_t hwirq)
+ {
++ struct plic_priv *priv = d->host_data;
++
+ irq_domain_set_info(d, irq, hwirq, &plic_chip, d->host_data,
+ handle_fasteoi_irq, NULL, NULL);
+ irq_set_noprobe(irq);
++ irq_set_affinity(irq, &priv->lmask);
+ return 0;
+ }
+
+@@ -282,6 +286,7 @@ static int __init plic_init(struct device_node *node,
+ int error = 0, nr_contexts, nr_handlers = 0, i;
+ u32 nr_irqs;
+ struct plic_priv *priv;
++ struct plic_handler *handler;
+
+ priv = kzalloc(sizeof(*priv), GFP_KERNEL);
+ if (!priv)
+@@ -312,7 +317,6 @@ static int __init plic_init(struct device_node *node,
+
+ for (i = 0; i < nr_contexts; i++) {
+ struct of_phandle_args parent;
+- struct plic_handler *handler;
+ irq_hw_number_t hwirq;
+ int cpu, hartid;
+
+@@ -366,9 +370,18 @@ done:
+ nr_handlers++;
+ }
+
+- cpuhp_setup_state(CPUHP_AP_IRQ_SIFIVE_PLIC_STARTING,
++ /*
++ * We can have multiple PLIC instances so setup cpuhp state only
++ * when context handler for current/boot CPU is present.
++ */
++ handler = this_cpu_ptr(&plic_handlers);
++ if (handler->present && !plic_cpuhp_setup_done) {
++ cpuhp_setup_state(CPUHP_AP_IRQ_SIFIVE_PLIC_STARTING,
+ "irqchip/sifive/plic:starting",
+ plic_starting_cpu, plic_dying_cpu);
++ plic_cpuhp_setup_done = true;
++ }
++
+ pr_info("mapped %d interrupts with %d handlers for %d contexts.\n",
+ nr_irqs, nr_handlers, nr_contexts);
+ set_handle_irq(plic_handle_irq);
+diff --git a/drivers/macintosh/windfarm_pm112.c b/drivers/macintosh/windfarm_pm112.c
+index 4150301a89a5..e8377ce0a95a 100644
+--- a/drivers/macintosh/windfarm_pm112.c
++++ b/drivers/macintosh/windfarm_pm112.c
+@@ -132,14 +132,6 @@ static int create_cpu_loop(int cpu)
+ s32 tmax;
+ int fmin;
+
+- /* Get PID params from the appropriate SAT */
+- hdr = smu_sat_get_sdb_partition(chip, 0xC8 + core, NULL);
+- if (hdr == NULL) {
+- printk(KERN_WARNING"windfarm: can't get CPU PID fan config\n");
+- return -EINVAL;
+- }
+- piddata = (struct smu_sdbp_cpupiddata *)&hdr[1];
+-
+ /* Get FVT params to get Tmax; if not found, assume default */
+ hdr = smu_sat_get_sdb_partition(chip, 0xC4 + core, NULL);
+ if (hdr) {
+@@ -152,6 +144,16 @@ static int create_cpu_loop(int cpu)
+ if (tmax < cpu_all_tmax)
+ cpu_all_tmax = tmax;
+
++ kfree(hdr);
++
++ /* Get PID params from the appropriate SAT */
++ hdr = smu_sat_get_sdb_partition(chip, 0xC8 + core, NULL);
++ if (hdr == NULL) {
++ printk(KERN_WARNING"windfarm: can't get CPU PID fan config\n");
++ return -EINVAL;
++ }
++ piddata = (struct smu_sdbp_cpupiddata *)&hdr[1];
++
+ /*
+ * Darwin has a minimum fan speed of 1000 rpm for the 4-way and
+ * 515 for the 2-way. That appears to be overkill, so for now,
+@@ -174,6 +176,9 @@ static int create_cpu_loop(int cpu)
+ pid.min = fmin;
+
+ wf_cpu_pid_init(&cpu_pid[cpu], &pid);
++
++ kfree(hdr);
++
+ return 0;
+ }
+
+diff --git a/drivers/md/bcache/request.c b/drivers/md/bcache/request.c
+index 71a90fbec314..77d1a2697517 100644
+--- a/drivers/md/bcache/request.c
++++ b/drivers/md/bcache/request.c
+@@ -1372,7 +1372,6 @@ void bch_flash_dev_request_init(struct bcache_device *d)
+ {
+ struct gendisk *g = d->disk;
+
+- g->queue->make_request_fn = flash_dev_make_request;
+ g->queue->backing_dev_info->congested_fn = flash_dev_congested;
+ d->cache_miss = flash_dev_cache_miss;
+ d->ioctl = flash_dev_ioctl;
+diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
+index d98354fa28e3..4d8bf731b118 100644
+--- a/drivers/md/bcache/super.c
++++ b/drivers/md/bcache/super.c
+@@ -797,7 +797,9 @@ static void bcache_device_free(struct bcache_device *d)
+ bcache_device_detach(d);
+
+ if (disk) {
+- if (disk->flags & GENHD_FL_UP)
++ bool disk_added = (disk->flags & GENHD_FL_UP) != 0;
++
++ if (disk_added)
+ del_gendisk(disk);
+
+ if (disk->queue)
+@@ -805,7 +807,8 @@ static void bcache_device_free(struct bcache_device *d)
+
+ ida_simple_remove(&bcache_device_idx,
+ first_minor_to_idx(disk->first_minor));
+- put_disk(disk);
++ if (disk_added)
++ put_disk(disk);
+ }
+
+ bioset_exit(&d->bio_split);
+diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c
+index 3df90daba89e..a1dcb8675484 100644
+--- a/drivers/md/dm-crypt.c
++++ b/drivers/md/dm-crypt.c
+@@ -3274,7 +3274,7 @@ static void crypt_io_hints(struct dm_target *ti, struct queue_limits *limits)
+ limits->max_segment_size = PAGE_SIZE;
+
+ limits->logical_block_size =
+- max_t(unsigned short, limits->logical_block_size, cc->sector_size);
++ max_t(unsigned, limits->logical_block_size, cc->sector_size);
+ limits->physical_block_size =
+ max_t(unsigned, limits->physical_block_size, cc->sector_size);
+ limits->io_min = max_t(unsigned, limits->io_min, cc->sector_size);
+diff --git a/drivers/md/md.c b/drivers/md/md.c
+index 271e8a587354..41eead9cbee9 100644
+--- a/drivers/md/md.c
++++ b/drivers/md/md.c
+@@ -7752,7 +7752,8 @@ static int md_open(struct block_device *bdev, fmode_t mode)
+ */
+ mddev_put(mddev);
+ /* Wait until bdev->bd_disk is definitely gone */
+- flush_workqueue(md_misc_wq);
++ if (work_pending(&mddev->del_work))
++ flush_workqueue(md_misc_wq);
+ /* Then retry the open from the top */
+ return -ERESTARTSYS;
+ }
+diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
+index ba00e9877f02..190dd70db514 100644
+--- a/drivers/md/raid5.c
++++ b/drivers/md/raid5.c
+@@ -2228,14 +2228,19 @@ static int grow_stripes(struct r5conf *conf, int num)
+ * of the P and Q blocks.
+ */
+ static int scribble_alloc(struct raid5_percpu *percpu,
+- int num, int cnt, gfp_t flags)
++ int num, int cnt)
+ {
+ size_t obj_size =
+ sizeof(struct page *) * (num+2) +
+ sizeof(addr_conv_t) * (num+2);
+ void *scribble;
+
+- scribble = kvmalloc_array(cnt, obj_size, flags);
++ /*
++ * If here is in raid array suspend context, it is in memalloc noio
++ * context as well, there is no potential recursive memory reclaim
++ * I/Os with the GFP_KERNEL flag.
++ */
++ scribble = kvmalloc_array(cnt, obj_size, GFP_KERNEL);
+ if (!scribble)
+ return -ENOMEM;
+
+@@ -2267,8 +2272,7 @@ static int resize_chunks(struct r5conf *conf, int new_disks, int new_sectors)
+
+ percpu = per_cpu_ptr(conf->percpu, cpu);
+ err = scribble_alloc(percpu, new_disks,
+- new_sectors / STRIPE_SECTORS,
+- GFP_NOIO);
++ new_sectors / STRIPE_SECTORS);
+ if (err)
+ break;
+ }
+@@ -6759,8 +6763,7 @@ static int alloc_scratch_buffer(struct r5conf *conf, struct raid5_percpu *percpu
+ conf->previous_raid_disks),
+ max(conf->chunk_sectors,
+ conf->prev_chunk_sectors)
+- / STRIPE_SECTORS,
+- GFP_KERNEL)) {
++ / STRIPE_SECTORS)) {
+ free_scratch_buffer(conf, percpu);
+ return -ENOMEM;
+ }
+diff --git a/drivers/media/cec/cec-adap.c b/drivers/media/cec/cec-adap.c
+index 6c95dc471d4c..6a04d19a96b2 100644
+--- a/drivers/media/cec/cec-adap.c
++++ b/drivers/media/cec/cec-adap.c
+@@ -1734,6 +1734,10 @@ int __cec_s_log_addrs(struct cec_adapter *adap,
+ unsigned j;
+
+ log_addrs->log_addr[i] = CEC_LOG_ADDR_INVALID;
++ if (log_addrs->log_addr_type[i] > CEC_LOG_ADDR_TYPE_UNREGISTERED) {
++ dprintk(1, "unknown logical address type\n");
++ return -EINVAL;
++ }
+ if (type_mask & (1 << log_addrs->log_addr_type[i])) {
+ dprintk(1, "duplicate logical address type\n");
+ return -EINVAL;
+@@ -1754,10 +1758,6 @@ int __cec_s_log_addrs(struct cec_adapter *adap,
+ dprintk(1, "invalid primary device type\n");
+ return -EINVAL;
+ }
+- if (log_addrs->log_addr_type[i] > CEC_LOG_ADDR_TYPE_UNREGISTERED) {
+- dprintk(1, "unknown logical address type\n");
+- return -EINVAL;
+- }
+ for (j = 0; j < feature_sz; j++) {
+ if ((features[j] & 0x80) == 0) {
+ if (op_is_dev_features)
+diff --git a/drivers/media/dvb-frontends/m88ds3103.c b/drivers/media/dvb-frontends/m88ds3103.c
+index d2c28dcf6b42..abddab02db9e 100644
+--- a/drivers/media/dvb-frontends/m88ds3103.c
++++ b/drivers/media/dvb-frontends/m88ds3103.c
+@@ -980,6 +980,8 @@ static int m88ds3103_set_frontend(struct dvb_frontend *fe)
+ goto err;
+
+ ret = m88ds3103_update_bits(dev, 0xc9, 0x08, 0x08);
++ if (ret)
++ goto err;
+ }
+
+ dev_dbg(&client->dev, "carrier offset=%d\n",
+diff --git a/drivers/media/i2c/imx219.c b/drivers/media/i2c/imx219.c
+index cb03bdec1f9c..86e0564bfb4f 100644
+--- a/drivers/media/i2c/imx219.c
++++ b/drivers/media/i2c/imx219.c
+@@ -781,7 +781,7 @@ static int imx219_enum_frame_size(struct v4l2_subdev *sd,
+ if (fse->index >= ARRAY_SIZE(supported_modes))
+ return -EINVAL;
+
+- if (fse->code != imx219_get_format_code(imx219, imx219->fmt.code))
++ if (fse->code != imx219_get_format_code(imx219, fse->code))
+ return -EINVAL;
+
+ fse->min_width = supported_modes[fse->index].width;
+diff --git a/drivers/media/i2c/ov5640.c b/drivers/media/i2c/ov5640.c
+index 854031f0b64a..2fe4a7ac0592 100644
+--- a/drivers/media/i2c/ov5640.c
++++ b/drivers/media/i2c/ov5640.c
+@@ -3093,8 +3093,8 @@ static int ov5640_probe(struct i2c_client *client)
+ free_ctrls:
+ v4l2_ctrl_handler_free(&sensor->ctrls.handler);
+ entity_cleanup:
+- mutex_destroy(&sensor->lock);
+ media_entity_cleanup(&sensor->sd.entity);
++ mutex_destroy(&sensor->lock);
+ return ret;
+ }
+
+@@ -3104,9 +3104,9 @@ static int ov5640_remove(struct i2c_client *client)
+ struct ov5640_dev *sensor = to_ov5640_dev(sd);
+
+ v4l2_async_unregister_subdev(&sensor->sd);
+- mutex_destroy(&sensor->lock);
+ media_entity_cleanup(&sensor->sd.entity);
+ v4l2_ctrl_handler_free(&sensor->ctrls.handler);
++ mutex_destroy(&sensor->lock);
+
+ return 0;
+ }
+diff --git a/drivers/media/platform/qcom/venus/core.c b/drivers/media/platform/qcom/venus/core.c
+index 194b10b98767..13fa5076314c 100644
+--- a/drivers/media/platform/qcom/venus/core.c
++++ b/drivers/media/platform/qcom/venus/core.c
+@@ -242,10 +242,6 @@ static int venus_probe(struct platform_device *pdev)
+ if (ret)
+ return ret;
+
+- ret = icc_set_bw(core->cpucfg_path, 0, kbps_to_icc(1000));
+- if (ret)
+- return ret;
+-
+ ret = hfi_create(core, &venus_core_ops);
+ if (ret)
+ return ret;
+@@ -350,6 +346,10 @@ static __maybe_unused int venus_runtime_suspend(struct device *dev)
+ if (ret)
+ return ret;
+
++ ret = icc_set_bw(core->cpucfg_path, 0, 0);
++ if (ret)
++ return ret;
++
+ if (pm_ops->core_power)
+ ret = pm_ops->core_power(dev, POWER_OFF);
+
+@@ -368,6 +368,10 @@ static __maybe_unused int venus_runtime_resume(struct device *dev)
+ return ret;
+ }
+
++ ret = icc_set_bw(core->cpucfg_path, 0, kbps_to_icc(1000));
++ if (ret)
++ return ret;
++
+ return hfi_core_resume(core, false);
+ }
+
+diff --git a/drivers/media/platform/rcar-fcp.c b/drivers/media/platform/rcar-fcp.c
+index 43c78620c9d8..5c6b00737fe7 100644
+--- a/drivers/media/platform/rcar-fcp.c
++++ b/drivers/media/platform/rcar-fcp.c
+@@ -8,6 +8,7 @@
+ */
+
+ #include <linux/device.h>
++#include <linux/dma-mapping.h>
+ #include <linux/list.h>
+ #include <linux/module.h>
+ #include <linux/mod_devicetable.h>
+@@ -21,6 +22,7 @@
+ struct rcar_fcp_device {
+ struct list_head list;
+ struct device *dev;
++ struct device_dma_parameters dma_parms;
+ };
+
+ static LIST_HEAD(fcp_devices);
+@@ -136,6 +138,9 @@ static int rcar_fcp_probe(struct platform_device *pdev)
+
+ fcp->dev = &pdev->dev;
+
++ fcp->dev->dma_parms = &fcp->dma_parms;
++ dma_set_max_seg_size(fcp->dev, DMA_BIT_MASK(32));
++
+ pm_runtime_enable(&pdev->dev);
+
+ mutex_lock(&fcp_lock);
+diff --git a/drivers/media/platform/sunxi/sun8i-di/sun8i-di.c b/drivers/media/platform/sunxi/sun8i-di/sun8i-di.c
+index d78f6593ddd1..ba5d07886607 100644
+--- a/drivers/media/platform/sunxi/sun8i-di/sun8i-di.c
++++ b/drivers/media/platform/sunxi/sun8i-di/sun8i-di.c
+@@ -941,7 +941,7 @@ static int deinterlace_runtime_resume(struct device *device)
+ if (ret) {
+ dev_err(dev->dev, "Failed to enable bus clock\n");
+
+- goto err_exlusive_rate;
++ goto err_exclusive_rate;
+ }
+
+ ret = clk_prepare_enable(dev->mod_clk);
+@@ -969,14 +969,14 @@ static int deinterlace_runtime_resume(struct device *device)
+
+ return 0;
+
+-err_exlusive_rate:
+- clk_rate_exclusive_put(dev->mod_clk);
+ err_ram_clk:
+ clk_disable_unprepare(dev->ram_clk);
+ err_mod_clk:
+ clk_disable_unprepare(dev->mod_clk);
+ err_bus_clk:
+ clk_disable_unprepare(dev->bus_clk);
++err_exclusive_rate:
++ clk_rate_exclusive_put(dev->mod_clk);
+
+ return ret;
+ }
+diff --git a/drivers/media/platform/vicodec/vicodec-core.c b/drivers/media/platform/vicodec/vicodec-core.c
+index 30ced1c21387..e879290727ef 100644
+--- a/drivers/media/platform/vicodec/vicodec-core.c
++++ b/drivers/media/platform/vicodec/vicodec-core.c
+@@ -2114,16 +2114,19 @@ static int vicodec_probe(struct platform_device *pdev)
+
+ platform_set_drvdata(pdev, dev);
+
+- if (register_instance(dev, &dev->stateful_enc,
+- "stateful-encoder", true))
++ ret = register_instance(dev, &dev->stateful_enc, "stateful-encoder",
++ true);
++ if (ret)
+ goto unreg_dev;
+
+- if (register_instance(dev, &dev->stateful_dec,
+- "stateful-decoder", false))
++ ret = register_instance(dev, &dev->stateful_dec, "stateful-decoder",
++ false);
++ if (ret)
+ goto unreg_sf_enc;
+
+- if (register_instance(dev, &dev->stateless_dec,
+- "stateless-decoder", false))
++ ret = register_instance(dev, &dev->stateless_dec, "stateless-decoder",
++ false);
++ if (ret)
+ goto unreg_sf_dec;
+
+ #ifdef CONFIG_MEDIA_CONTROLLER
+diff --git a/drivers/media/tuners/si2157.c b/drivers/media/tuners/si2157.c
+index 898e0f9f8b70..20487b25fbe1 100644
+--- a/drivers/media/tuners/si2157.c
++++ b/drivers/media/tuners/si2157.c
+@@ -75,24 +75,23 @@ static int si2157_init(struct dvb_frontend *fe)
+ struct si2157_cmd cmd;
+ const struct firmware *fw;
+ const char *fw_name;
+- unsigned int uitmp, chip_id;
++ unsigned int chip_id, xtal_trim;
+
+ dev_dbg(&client->dev, "\n");
+
+- /* Returned IF frequency is garbage when firmware is not running */
+- memcpy(cmd.args, "\x15\x00\x06\x07", 4);
++ /* Try to get Xtal trim property, to verify tuner still running */
++ memcpy(cmd.args, "\x15\x00\x04\x02", 4);
+ cmd.wlen = 4;
+ cmd.rlen = 4;
+ ret = si2157_cmd_execute(client, &cmd);
+- if (ret)
+- goto err;
+
+- uitmp = cmd.args[2] << 0 | cmd.args[3] << 8;
+- dev_dbg(&client->dev, "if_frequency kHz=%u\n", uitmp);
++ xtal_trim = cmd.args[2] | (cmd.args[3] << 8);
+
+- if (uitmp == dev->if_frequency / 1000)
++ if (ret == 0 && xtal_trim < 16)
+ goto warm;
+
++ dev->if_frequency = 0; /* we no longer know current tuner state */
++
+ /* power up */
+ if (dev->chiptype == SI2157_CHIPTYPE_SI2146) {
+ memcpy(cmd.args, "\xc0\x05\x01\x00\x00\x0b\x00\x00\x01", 9);
+diff --git a/drivers/media/usb/dvb-usb/dibusb-mb.c b/drivers/media/usb/dvb-usb/dibusb-mb.c
+index d4ea72bf09c5..5131c8d4c632 100644
+--- a/drivers/media/usb/dvb-usb/dibusb-mb.c
++++ b/drivers/media/usb/dvb-usb/dibusb-mb.c
+@@ -81,7 +81,7 @@ static int dibusb_tuner_probe_and_attach(struct dvb_usb_adapter *adap)
+
+ if (i2c_transfer(&adap->dev->i2c_adap, msg, 2) != 2) {
+ err("tuner i2c write failed.");
+- ret = -EREMOTEIO;
++ return -EREMOTEIO;
+ }
+
+ if (adap->fe_adap[0].fe->ops.i2c_gate_ctrl)
+diff --git a/drivers/media/v4l2-core/v4l2-ctrls.c b/drivers/media/v4l2-core/v4l2-ctrls.c
+index 93d33d1db4e8..452edd06d67d 100644
+--- a/drivers/media/v4l2-core/v4l2-ctrls.c
++++ b/drivers/media/v4l2-core/v4l2-ctrls.c
+@@ -3794,7 +3794,8 @@ s32 v4l2_ctrl_g_ctrl(struct v4l2_ctrl *ctrl)
+ struct v4l2_ext_control c;
+
+ /* It's a driver bug if this happens. */
+- WARN_ON(!ctrl->is_int);
++ if (WARN_ON(!ctrl->is_int))
++ return 0;
+ c.value = 0;
+ get_ctrl(ctrl, &c);
+ return c.value;
+@@ -3806,7 +3807,8 @@ s64 v4l2_ctrl_g_ctrl_int64(struct v4l2_ctrl *ctrl)
+ struct v4l2_ext_control c;
+
+ /* It's a driver bug if this happens. */
+- WARN_ON(ctrl->is_ptr || ctrl->type != V4L2_CTRL_TYPE_INTEGER64);
++ if (WARN_ON(ctrl->is_ptr || ctrl->type != V4L2_CTRL_TYPE_INTEGER64))
++ return 0;
+ c.value64 = 0;
+ get_ctrl(ctrl, &c);
+ return c.value64;
+@@ -4215,7 +4217,8 @@ int __v4l2_ctrl_s_ctrl(struct v4l2_ctrl *ctrl, s32 val)
+ lockdep_assert_held(ctrl->handler->lock);
+
+ /* It's a driver bug if this happens. */
+- WARN_ON(!ctrl->is_int);
++ if (WARN_ON(!ctrl->is_int))
++ return -EINVAL;
+ ctrl->val = val;
+ return set_ctrl(NULL, ctrl, 0);
+ }
+@@ -4226,7 +4229,8 @@ int __v4l2_ctrl_s_ctrl_int64(struct v4l2_ctrl *ctrl, s64 val)
+ lockdep_assert_held(ctrl->handler->lock);
+
+ /* It's a driver bug if this happens. */
+- WARN_ON(ctrl->is_ptr || ctrl->type != V4L2_CTRL_TYPE_INTEGER64);
++ if (WARN_ON(ctrl->is_ptr || ctrl->type != V4L2_CTRL_TYPE_INTEGER64))
++ return -EINVAL;
+ *ctrl->p_new.p_s64 = val;
+ return set_ctrl(NULL, ctrl, 0);
+ }
+@@ -4237,7 +4241,8 @@ int __v4l2_ctrl_s_ctrl_string(struct v4l2_ctrl *ctrl, const char *s)
+ lockdep_assert_held(ctrl->handler->lock);
+
+ /* It's a driver bug if this happens. */
+- WARN_ON(ctrl->type != V4L2_CTRL_TYPE_STRING);
++ if (WARN_ON(ctrl->type != V4L2_CTRL_TYPE_STRING))
++ return -EINVAL;
+ strscpy(ctrl->p_new.p_char, s, ctrl->maximum + 1);
+ return set_ctrl(NULL, ctrl, 0);
+ }
+@@ -4249,7 +4254,8 @@ int __v4l2_ctrl_s_ctrl_area(struct v4l2_ctrl *ctrl,
+ lockdep_assert_held(ctrl->handler->lock);
+
+ /* It's a driver bug if this happens. */
+- WARN_ON(ctrl->type != V4L2_CTRL_TYPE_AREA);
++ if (WARN_ON(ctrl->type != V4L2_CTRL_TYPE_AREA))
++ return -EINVAL;
+ *ctrl->p_new.p_area = *area;
+ return set_ctrl(NULL, ctrl, 0);
+ }
+diff --git a/drivers/memory/samsung/exynos5422-dmc.c b/drivers/memory/samsung/exynos5422-dmc.c
+index 81a1b1d01683..22a43d662833 100644
+--- a/drivers/memory/samsung/exynos5422-dmc.c
++++ b/drivers/memory/samsung/exynos5422-dmc.c
+@@ -1091,7 +1091,7 @@ static int create_timings_aligned(struct exynos5_dmc *dmc, u32 *reg_timing_row,
+ /* power related timings */
+ val = dmc->timings->tFAW / clk_period_ps;
+ val += dmc->timings->tFAW % clk_period_ps ? 1 : 0;
+- val = max(val, dmc->min_tck->tXP);
++ val = max(val, dmc->min_tck->tFAW);
+ reg = &timing_power[0];
+ *reg_timing_power |= TIMING_VAL2REG(reg, val);
+
+diff --git a/drivers/mmc/host/meson-mx-sdio.c b/drivers/mmc/host/meson-mx-sdio.c
+index 2e58743d83bb..3813b544f571 100644
+--- a/drivers/mmc/host/meson-mx-sdio.c
++++ b/drivers/mmc/host/meson-mx-sdio.c
+@@ -246,6 +246,9 @@ static void meson_mx_mmc_request_done(struct meson_mx_mmc_host *host)
+
+ mrq = host->mrq;
+
++ if (host->cmd->error)
++ meson_mx_mmc_soft_reset(host);
++
+ host->mrq = NULL;
+ host->cmd = NULL;
+
+diff --git a/drivers/mmc/host/mmci.c b/drivers/mmc/host/mmci.c
+index 647567def612..a69d6a0c2e15 100644
+--- a/drivers/mmc/host/mmci.c
++++ b/drivers/mmc/host/mmci.c
+@@ -1861,31 +1861,17 @@ static int mmci_get_cd(struct mmc_host *mmc)
+ static int mmci_sig_volt_switch(struct mmc_host *mmc, struct mmc_ios *ios)
+ {
+ struct mmci_host *host = mmc_priv(mmc);
+- int ret = 0;
+-
+- if (!IS_ERR(mmc->supply.vqmmc)) {
++ int ret;
+
+- switch (ios->signal_voltage) {
+- case MMC_SIGNAL_VOLTAGE_330:
+- ret = regulator_set_voltage(mmc->supply.vqmmc,
+- 2700000, 3600000);
+- break;
+- case MMC_SIGNAL_VOLTAGE_180:
+- ret = regulator_set_voltage(mmc->supply.vqmmc,
+- 1700000, 1950000);
+- break;
+- case MMC_SIGNAL_VOLTAGE_120:
+- ret = regulator_set_voltage(mmc->supply.vqmmc,
+- 1100000, 1300000);
+- break;
+- }
++ ret = mmc_regulator_set_vqmmc(mmc, ios);
+
+- if (!ret && host->ops && host->ops->post_sig_volt_switch)
+- ret = host->ops->post_sig_volt_switch(host, ios);
++ if (!ret && host->ops && host->ops->post_sig_volt_switch)
++ ret = host->ops->post_sig_volt_switch(host, ios);
++ else if (ret)
++ ret = 0;
+
+- if (ret)
+- dev_warn(mmc_dev(mmc), "Voltage switch failed\n");
+- }
++ if (ret < 0)
++ dev_warn(mmc_dev(mmc), "Voltage switch failed\n");
+
+ return ret;
+ }
+diff --git a/drivers/mmc/host/mmci_stm32_sdmmc.c b/drivers/mmc/host/mmci_stm32_sdmmc.c
+index cca7b3b3f618..2965b1c062e1 100644
+--- a/drivers/mmc/host/mmci_stm32_sdmmc.c
++++ b/drivers/mmc/host/mmci_stm32_sdmmc.c
+@@ -522,6 +522,7 @@ void sdmmc_variant_init(struct mmci_host *host)
+ struct sdmmc_dlyb *dlyb;
+
+ host->ops = &sdmmc_variant_ops;
++ host->pwr_reg = readl_relaxed(host->base + MMCIPOWER);
+
+ base_dlyb = devm_of_iomap(mmc_dev(host->mmc), np, 1, NULL);
+ if (IS_ERR(base_dlyb))
+diff --git a/drivers/mmc/host/owl-mmc.c b/drivers/mmc/host/owl-mmc.c
+index 01ffe51f413d..5e20c099fe03 100644
+--- a/drivers/mmc/host/owl-mmc.c
++++ b/drivers/mmc/host/owl-mmc.c
+@@ -92,6 +92,8 @@
+ #define OWL_SD_STATE_RC16ER BIT(1)
+ #define OWL_SD_STATE_CRC7ER BIT(0)
+
++#define OWL_CMD_TIMEOUT_MS 30000
++
+ struct owl_mmc_host {
+ struct device *dev;
+ struct reset_control *reset;
+@@ -172,6 +174,7 @@ static void owl_mmc_send_cmd(struct owl_mmc_host *owl_host,
+ struct mmc_command *cmd,
+ struct mmc_data *data)
+ {
++ unsigned long timeout;
+ u32 mode, state, resp[2];
+ u32 cmd_rsp_mask = 0;
+
+@@ -239,7 +242,10 @@ static void owl_mmc_send_cmd(struct owl_mmc_host *owl_host,
+ if (data)
+ return;
+
+- if (!wait_for_completion_timeout(&owl_host->sdc_complete, 30 * HZ)) {
++ timeout = msecs_to_jiffies(cmd->busy_timeout ? cmd->busy_timeout :
++ OWL_CMD_TIMEOUT_MS);
++
++ if (!wait_for_completion_timeout(&owl_host->sdc_complete, timeout)) {
+ dev_err(owl_host->dev, "CMD interrupt timeout\n");
+ cmd->error = -ETIMEDOUT;
+ return;
+diff --git a/drivers/mmc/host/sdhci-esdhc-imx.c b/drivers/mmc/host/sdhci-esdhc-imx.c
+index 5ec8e4bf1ac7..a514b9ea9460 100644
+--- a/drivers/mmc/host/sdhci-esdhc-imx.c
++++ b/drivers/mmc/host/sdhci-esdhc-imx.c
+@@ -89,7 +89,7 @@
+ #define ESDHC_STD_TUNING_EN (1 << 24)
+ /* NOTE: the minimum valid tuning start tap for mx6sl is 1 */
+ #define ESDHC_TUNING_START_TAP_DEFAULT 0x1
+-#define ESDHC_TUNING_START_TAP_MASK 0xff
++#define ESDHC_TUNING_START_TAP_MASK 0x7f
+ #define ESDHC_TUNING_STEP_MASK 0x00070000
+ #define ESDHC_TUNING_STEP_SHIFT 16
+
+diff --git a/drivers/mmc/host/sdhci-msm.c b/drivers/mmc/host/sdhci-msm.c
+index 87de46b6ed07..9ec733403027 100644
+--- a/drivers/mmc/host/sdhci-msm.c
++++ b/drivers/mmc/host/sdhci-msm.c
+@@ -1888,7 +1888,9 @@ static const struct sdhci_ops sdhci_msm_ops = {
+ static const struct sdhci_pltfm_data sdhci_msm_pdata = {
+ .quirks = SDHCI_QUIRK_BROKEN_CARD_DETECTION |
+ SDHCI_QUIRK_SINGLE_POWER_WRITE |
+- SDHCI_QUIRK_CAP_CLOCK_BASE_BROKEN,
++ SDHCI_QUIRK_CAP_CLOCK_BASE_BROKEN |
++ SDHCI_QUIRK_MULTIBLOCK_READ_ACMD12,
++
+ .quirks2 = SDHCI_QUIRK2_PRESET_VALUE_BROKEN,
+ .ops = &sdhci_msm_ops,
+ };
+diff --git a/drivers/mmc/host/sdhci.c b/drivers/mmc/host/sdhci.c
+index e368f2dabf20..5dcdda5918cb 100644
+--- a/drivers/mmc/host/sdhci.c
++++ b/drivers/mmc/host/sdhci.c
+@@ -634,9 +634,13 @@ static int sdhci_pre_dma_transfer(struct sdhci_host *host,
+ }
+ if (mmc_get_dma_dir(data) == DMA_TO_DEVICE) {
+ /* Copy the data to the bounce buffer */
+- sg_copy_to_buffer(data->sg, data->sg_len,
+- host->bounce_buffer,
+- length);
++ if (host->ops->copy_to_bounce_buffer) {
++ host->ops->copy_to_bounce_buffer(host,
++ data, length);
++ } else {
++ sg_copy_to_buffer(data->sg, data->sg_len,
++ host->bounce_buffer, length);
++ }
+ }
+ /* Switch ownership to the DMA */
+ dma_sync_single_for_device(host->mmc->parent,
+diff --git a/drivers/mmc/host/sdhci.h b/drivers/mmc/host/sdhci.h
+index 79dffbb731d3..1bf4f1d91951 100644
+--- a/drivers/mmc/host/sdhci.h
++++ b/drivers/mmc/host/sdhci.h
+@@ -653,6 +653,9 @@ struct sdhci_ops {
+ void (*voltage_switch)(struct sdhci_host *host);
+ void (*adma_write_desc)(struct sdhci_host *host, void **desc,
+ dma_addr_t addr, int len, unsigned int cmd);
++ void (*copy_to_bounce_buffer)(struct sdhci_host *host,
++ struct mmc_data *data,
++ unsigned int length);
+ void (*request_done)(struct sdhci_host *host,
+ struct mmc_request *mrq);
+ };
+diff --git a/drivers/mmc/host/via-sdmmc.c b/drivers/mmc/host/via-sdmmc.c
+index e48bddd95ce6..ef95bce50889 100644
+--- a/drivers/mmc/host/via-sdmmc.c
++++ b/drivers/mmc/host/via-sdmmc.c
+@@ -319,6 +319,8 @@ struct via_crdr_mmc_host {
+ /* some devices need a very long delay for power to stabilize */
+ #define VIA_CRDR_QUIRK_300MS_PWRDELAY 0x0001
+
++#define VIA_CMD_TIMEOUT_MS 1000
++
+ static const struct pci_device_id via_ids[] = {
+ {PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_9530,
+ PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0,},
+@@ -551,14 +553,17 @@ static void via_sdc_send_command(struct via_crdr_mmc_host *host,
+ {
+ void __iomem *addrbase;
+ struct mmc_data *data;
++ unsigned int timeout_ms;
+ u32 cmdctrl = 0;
+
+ WARN_ON(host->cmd);
+
+ data = cmd->data;
+- mod_timer(&host->timer, jiffies + HZ);
+ host->cmd = cmd;
+
++ timeout_ms = cmd->busy_timeout ? cmd->busy_timeout : VIA_CMD_TIMEOUT_MS;
++ mod_timer(&host->timer, jiffies + msecs_to_jiffies(timeout_ms));
++
+ /*Command index*/
+ cmdctrl = cmd->opcode << 8;
+
+diff --git a/drivers/mtd/nand/raw/brcmnand/brcmnand.c b/drivers/mtd/nand/raw/brcmnand/brcmnand.c
+index 8f9ffb46a09f..52402aa7b4d3 100644
+--- a/drivers/mtd/nand/raw/brcmnand/brcmnand.c
++++ b/drivers/mtd/nand/raw/brcmnand/brcmnand.c
+@@ -1116,11 +1116,14 @@ static int brcmnand_hamming_ooblayout_free(struct mtd_info *mtd, int section,
+ if (!section) {
+ /*
+ * Small-page NAND use byte 6 for BBI while large-page
+- * NAND use byte 0.
++ * NAND use bytes 0 and 1.
+ */
+- if (cfg->page_size > 512)
+- oobregion->offset++;
+- oobregion->length--;
++ if (cfg->page_size > 512) {
++ oobregion->offset += 2;
++ oobregion->length -= 2;
++ } else {
++ oobregion->length--;
++ }
+ }
+ }
+
+diff --git a/drivers/mtd/nand/raw/diskonchip.c b/drivers/mtd/nand/raw/diskonchip.c
+index c2a391ad2c35..baabc6633557 100644
+--- a/drivers/mtd/nand/raw/diskonchip.c
++++ b/drivers/mtd/nand/raw/diskonchip.c
+@@ -1609,13 +1609,10 @@ static int __init doc_probe(unsigned long physadr)
+ numchips = doc2001_init(mtd);
+
+ if ((ret = nand_scan(nand, numchips)) || (ret = doc->late_init(mtd))) {
+- /* DBB note: i believe nand_release is necessary here, as
++ /* DBB note: i believe nand_cleanup is necessary here, as
+ buffers may have been allocated in nand_base. Check with
+ Thomas. FIX ME! */
+- /* nand_release will call mtd_device_unregister, but we
+- haven't yet added it. This is handled without incident by
+- mtd_device_unregister, as far as I can tell. */
+- nand_release(nand);
++ nand_cleanup(nand);
+ goto fail;
+ }
+
+diff --git a/drivers/mtd/nand/raw/ingenic/ingenic_nand_drv.c b/drivers/mtd/nand/raw/ingenic/ingenic_nand_drv.c
+index 935c4902ada7..6e90c2d5cb3a 100644
+--- a/drivers/mtd/nand/raw/ingenic/ingenic_nand_drv.c
++++ b/drivers/mtd/nand/raw/ingenic/ingenic_nand_drv.c
+@@ -376,7 +376,7 @@ static int ingenic_nand_init_chip(struct platform_device *pdev,
+
+ ret = mtd_device_register(mtd, NULL, 0);
+ if (ret) {
+- nand_release(chip);
++ nand_cleanup(chip);
+ return ret;
+ }
+
+diff --git a/drivers/mtd/nand/raw/mtk_nand.c b/drivers/mtd/nand/raw/mtk_nand.c
+index ef149e8b26d0..c22d993849a9 100644
+--- a/drivers/mtd/nand/raw/mtk_nand.c
++++ b/drivers/mtd/nand/raw/mtk_nand.c
+@@ -1419,7 +1419,7 @@ static int mtk_nfc_nand_chip_init(struct device *dev, struct mtk_nfc *nfc,
+ ret = mtd_device_register(mtd, NULL, 0);
+ if (ret) {
+ dev_err(dev, "mtd parse partition error\n");
+- nand_release(nand);
++ nand_cleanup(nand);
+ return ret;
+ }
+
+diff --git a/drivers/mtd/nand/raw/nand_base.c b/drivers/mtd/nand/raw/nand_base.c
+index c24e5e2ba130..4f1bb862b62f 100644
+--- a/drivers/mtd/nand/raw/nand_base.c
++++ b/drivers/mtd/nand/raw/nand_base.c
+@@ -736,8 +736,14 @@ EXPORT_SYMBOL_GPL(nand_soft_waitrdy);
+ int nand_gpio_waitrdy(struct nand_chip *chip, struct gpio_desc *gpiod,
+ unsigned long timeout_ms)
+ {
+- /* Wait until R/B pin indicates chip is ready or timeout occurs */
+- timeout_ms = jiffies + msecs_to_jiffies(timeout_ms);
++
++ /*
++ * Wait until R/B pin indicates chip is ready or timeout occurs.
++ * +1 below is necessary because if we are now in the last fraction
++ * of jiffy and msecs_to_jiffies is 1 then we will wait only that
++ * small jiffy fraction - possibly leading to false timeout.
++ */
++ timeout_ms = jiffies + msecs_to_jiffies(timeout_ms) + 1;
+ do {
+ if (gpiod_get_value_cansleep(gpiod))
+ return 0;
+diff --git a/drivers/mtd/nand/raw/nand_onfi.c b/drivers/mtd/nand/raw/nand_onfi.c
+index 0b879bd0a68c..8fe8d7bdd203 100644
+--- a/drivers/mtd/nand/raw/nand_onfi.c
++++ b/drivers/mtd/nand/raw/nand_onfi.c
+@@ -173,7 +173,7 @@ int nand_onfi_detect(struct nand_chip *chip)
+ }
+
+ if (onfi_crc16(ONFI_CRC_BASE, (u8 *)&p[i], 254) ==
+- le16_to_cpu(p->crc)) {
++ le16_to_cpu(p[i].crc)) {
+ if (i)
+ memcpy(p, &p[i], sizeof(*p));
+ break;
+diff --git a/drivers/mtd/nand/raw/orion_nand.c b/drivers/mtd/nand/raw/orion_nand.c
+index d27b39a7223c..a3dcdf25f5f2 100644
+--- a/drivers/mtd/nand/raw/orion_nand.c
++++ b/drivers/mtd/nand/raw/orion_nand.c
+@@ -180,7 +180,7 @@ static int __init orion_nand_probe(struct platform_device *pdev)
+ mtd->name = "orion_nand";
+ ret = mtd_device_register(mtd, board->parts, board->nr_parts);
+ if (ret) {
+- nand_release(nc);
++ nand_cleanup(nc);
+ goto no_dev;
+ }
+
+diff --git a/drivers/mtd/nand/raw/oxnas_nand.c b/drivers/mtd/nand/raw/oxnas_nand.c
+index c43cb4d92d3d..0429d218fd9f 100644
+--- a/drivers/mtd/nand/raw/oxnas_nand.c
++++ b/drivers/mtd/nand/raw/oxnas_nand.c
+@@ -140,10 +140,8 @@ static int oxnas_nand_probe(struct platform_device *pdev)
+ goto err_release_child;
+
+ err = mtd_device_register(mtd, NULL, 0);
+- if (err) {
+- nand_release(chip);
+- goto err_release_child;
+- }
++ if (err)
++ goto err_cleanup_nand;
+
+ oxnas->chips[nchips] = chip;
+ ++nchips;
+@@ -159,6 +157,8 @@ static int oxnas_nand_probe(struct platform_device *pdev)
+
+ return 0;
+
++err_cleanup_nand:
++ nand_cleanup(chip);
+ err_release_child:
+ of_node_put(nand_np);
+ err_clk_unprepare:
+diff --git a/drivers/mtd/nand/raw/pasemi_nand.c b/drivers/mtd/nand/raw/pasemi_nand.c
+index 9cfe7395172a..066ff6dc9a23 100644
+--- a/drivers/mtd/nand/raw/pasemi_nand.c
++++ b/drivers/mtd/nand/raw/pasemi_nand.c
+@@ -146,7 +146,7 @@ static int pasemi_nand_probe(struct platform_device *ofdev)
+ if (mtd_device_register(pasemi_nand_mtd, NULL, 0)) {
+ dev_err(dev, "Unable to register MTD device\n");
+ err = -ENODEV;
+- goto out_lpc;
++ goto out_cleanup_nand;
+ }
+
+ dev_info(dev, "PA Semi NAND flash at %pR, control at I/O %x\n", &res,
+@@ -154,6 +154,8 @@ static int pasemi_nand_probe(struct platform_device *ofdev)
+
+ return 0;
+
++ out_cleanup_nand:
++ nand_cleanup(chip);
+ out_lpc:
+ release_region(lpcctl, 4);
+ out_ior:
+diff --git a/drivers/mtd/nand/raw/plat_nand.c b/drivers/mtd/nand/raw/plat_nand.c
+index dc0f3074ddbf..3a495b233443 100644
+--- a/drivers/mtd/nand/raw/plat_nand.c
++++ b/drivers/mtd/nand/raw/plat_nand.c
+@@ -92,7 +92,7 @@ static int plat_nand_probe(struct platform_device *pdev)
+ if (!err)
+ return err;
+
+- nand_release(&data->chip);
++ nand_cleanup(&data->chip);
+ out:
+ if (pdata->ctrl.remove)
+ pdata->ctrl.remove(pdev);
+diff --git a/drivers/mtd/nand/raw/sharpsl.c b/drivers/mtd/nand/raw/sharpsl.c
+index b47a9eaff89b..d8c52a016080 100644
+--- a/drivers/mtd/nand/raw/sharpsl.c
++++ b/drivers/mtd/nand/raw/sharpsl.c
+@@ -183,7 +183,7 @@ static int sharpsl_nand_probe(struct platform_device *pdev)
+ return 0;
+
+ err_add:
+- nand_release(this);
++ nand_cleanup(this);
+
+ err_scan:
+ iounmap(sharpsl->io);
+diff --git a/drivers/mtd/nand/raw/socrates_nand.c b/drivers/mtd/nand/raw/socrates_nand.c
+index 20f40c0e812c..7c94fc51a611 100644
+--- a/drivers/mtd/nand/raw/socrates_nand.c
++++ b/drivers/mtd/nand/raw/socrates_nand.c
+@@ -169,7 +169,7 @@ static int socrates_nand_probe(struct platform_device *ofdev)
+ if (!res)
+ return res;
+
+- nand_release(nand_chip);
++ nand_cleanup(nand_chip);
+
+ out:
+ iounmap(host->io_base);
+diff --git a/drivers/mtd/nand/raw/sunxi_nand.c b/drivers/mtd/nand/raw/sunxi_nand.c
+index 5f3e40b79fb1..c1a76ba9f61c 100644
+--- a/drivers/mtd/nand/raw/sunxi_nand.c
++++ b/drivers/mtd/nand/raw/sunxi_nand.c
+@@ -2003,7 +2003,7 @@ static int sunxi_nand_chip_init(struct device *dev, struct sunxi_nfc *nfc,
+ ret = mtd_device_register(mtd, NULL, 0);
+ if (ret) {
+ dev_err(dev, "failed to register mtd device: %d\n", ret);
+- nand_release(nand);
++ nand_cleanup(nand);
+ return ret;
+ }
+
+diff --git a/drivers/mtd/nand/raw/tmio_nand.c b/drivers/mtd/nand/raw/tmio_nand.c
+index db030f1701ee..4e9a6d94f6e8 100644
+--- a/drivers/mtd/nand/raw/tmio_nand.c
++++ b/drivers/mtd/nand/raw/tmio_nand.c
+@@ -448,7 +448,7 @@ static int tmio_probe(struct platform_device *dev)
+ if (!retval)
+ return retval;
+
+- nand_release(nand_chip);
++ nand_cleanup(nand_chip);
+
+ err_irq:
+ tmio_hw_stop(dev, tmio);
+diff --git a/drivers/mtd/nand/raw/xway_nand.c b/drivers/mtd/nand/raw/xway_nand.c
+index 834f794816a9..018311dc8fe1 100644
+--- a/drivers/mtd/nand/raw/xway_nand.c
++++ b/drivers/mtd/nand/raw/xway_nand.c
+@@ -210,7 +210,7 @@ static int xway_nand_probe(struct platform_device *pdev)
+
+ err = mtd_device_register(mtd, NULL, 0);
+ if (err)
+- nand_release(&data->chip);
++ nand_cleanup(&data->chip);
+
+ return err;
+ }
+diff --git a/drivers/net/dsa/sja1105/sja1105_ethtool.c b/drivers/net/dsa/sja1105/sja1105_ethtool.c
+index d742ffcbfce9..709f035055c5 100644
+--- a/drivers/net/dsa/sja1105/sja1105_ethtool.c
++++ b/drivers/net/dsa/sja1105/sja1105_ethtool.c
+@@ -421,92 +421,96 @@ static char sja1105pqrs_extra_port_stats[][ETH_GSTRING_LEN] = {
+ void sja1105_get_ethtool_stats(struct dsa_switch *ds, int port, u64 *data)
+ {
+ struct sja1105_private *priv = ds->priv;
+- struct sja1105_port_status status;
++ struct sja1105_port_status *status;
+ int rc, i, k = 0;
+
+- memset(&status, 0, sizeof(status));
++ status = kzalloc(sizeof(*status), GFP_KERNEL);
++ if (!status)
++ goto out;
+
+- rc = sja1105_port_status_get(priv, &status, port);
++ rc = sja1105_port_status_get(priv, status, port);
+ if (rc < 0) {
+ dev_err(ds->dev, "Failed to read port %d counters: %d\n",
+ port, rc);
+- return;
++ goto out;
+ }
+ memset(data, 0, ARRAY_SIZE(sja1105_port_stats) * sizeof(u64));
+- data[k++] = status.mac.n_runt;
+- data[k++] = status.mac.n_soferr;
+- data[k++] = status.mac.n_alignerr;
+- data[k++] = status.mac.n_miierr;
+- data[k++] = status.mac.typeerr;
+- data[k++] = status.mac.sizeerr;
+- data[k++] = status.mac.tctimeout;
+- data[k++] = status.mac.priorerr;
+- data[k++] = status.mac.nomaster;
+- data[k++] = status.mac.memov;
+- data[k++] = status.mac.memerr;
+- data[k++] = status.mac.invtyp;
+- data[k++] = status.mac.intcyov;
+- data[k++] = status.mac.domerr;
+- data[k++] = status.mac.pcfbagdrop;
+- data[k++] = status.mac.spcprior;
+- data[k++] = status.mac.ageprior;
+- data[k++] = status.mac.portdrop;
+- data[k++] = status.mac.lendrop;
+- data[k++] = status.mac.bagdrop;
+- data[k++] = status.mac.policeerr;
+- data[k++] = status.mac.drpnona664err;
+- data[k++] = status.mac.spcerr;
+- data[k++] = status.mac.agedrp;
+- data[k++] = status.hl1.n_n664err;
+- data[k++] = status.hl1.n_vlanerr;
+- data[k++] = status.hl1.n_unreleased;
+- data[k++] = status.hl1.n_sizeerr;
+- data[k++] = status.hl1.n_crcerr;
+- data[k++] = status.hl1.n_vlnotfound;
+- data[k++] = status.hl1.n_ctpolerr;
+- data[k++] = status.hl1.n_polerr;
+- data[k++] = status.hl1.n_rxfrm;
+- data[k++] = status.hl1.n_rxbyte;
+- data[k++] = status.hl1.n_txfrm;
+- data[k++] = status.hl1.n_txbyte;
+- data[k++] = status.hl2.n_qfull;
+- data[k++] = status.hl2.n_part_drop;
+- data[k++] = status.hl2.n_egr_disabled;
+- data[k++] = status.hl2.n_not_reach;
++ data[k++] = status->mac.n_runt;
++ data[k++] = status->mac.n_soferr;
++ data[k++] = status->mac.n_alignerr;
++ data[k++] = status->mac.n_miierr;
++ data[k++] = status->mac.typeerr;
++ data[k++] = status->mac.sizeerr;
++ data[k++] = status->mac.tctimeout;
++ data[k++] = status->mac.priorerr;
++ data[k++] = status->mac.nomaster;
++ data[k++] = status->mac.memov;
++ data[k++] = status->mac.memerr;
++ data[k++] = status->mac.invtyp;
++ data[k++] = status->mac.intcyov;
++ data[k++] = status->mac.domerr;
++ data[k++] = status->mac.pcfbagdrop;
++ data[k++] = status->mac.spcprior;
++ data[k++] = status->mac.ageprior;
++ data[k++] = status->mac.portdrop;
++ data[k++] = status->mac.lendrop;
++ data[k++] = status->mac.bagdrop;
++ data[k++] = status->mac.policeerr;
++ data[k++] = status->mac.drpnona664err;
++ data[k++] = status->mac.spcerr;
++ data[k++] = status->mac.agedrp;
++ data[k++] = status->hl1.n_n664err;
++ data[k++] = status->hl1.n_vlanerr;
++ data[k++] = status->hl1.n_unreleased;
++ data[k++] = status->hl1.n_sizeerr;
++ data[k++] = status->hl1.n_crcerr;
++ data[k++] = status->hl1.n_vlnotfound;
++ data[k++] = status->hl1.n_ctpolerr;
++ data[k++] = status->hl1.n_polerr;
++ data[k++] = status->hl1.n_rxfrm;
++ data[k++] = status->hl1.n_rxbyte;
++ data[k++] = status->hl1.n_txfrm;
++ data[k++] = status->hl1.n_txbyte;
++ data[k++] = status->hl2.n_qfull;
++ data[k++] = status->hl2.n_part_drop;
++ data[k++] = status->hl2.n_egr_disabled;
++ data[k++] = status->hl2.n_not_reach;
+
+ if (priv->info->device_id == SJA1105E_DEVICE_ID ||
+ priv->info->device_id == SJA1105T_DEVICE_ID)
+- return;
++ goto out;
+
+ memset(data + k, 0, ARRAY_SIZE(sja1105pqrs_extra_port_stats) *
+ sizeof(u64));
+ for (i = 0; i < 8; i++) {
+- data[k++] = status.hl2.qlevel_hwm[i];
+- data[k++] = status.hl2.qlevel[i];
++ data[k++] = status->hl2.qlevel_hwm[i];
++ data[k++] = status->hl2.qlevel[i];
+ }
+- data[k++] = status.ether.n_drops_nolearn;
+- data[k++] = status.ether.n_drops_noroute;
+- data[k++] = status.ether.n_drops_ill_dtag;
+- data[k++] = status.ether.n_drops_dtag;
+- data[k++] = status.ether.n_drops_sotag;
+- data[k++] = status.ether.n_drops_sitag;
+- data[k++] = status.ether.n_drops_utag;
+- data[k++] = status.ether.n_tx_bytes_1024_2047;
+- data[k++] = status.ether.n_tx_bytes_512_1023;
+- data[k++] = status.ether.n_tx_bytes_256_511;
+- data[k++] = status.ether.n_tx_bytes_128_255;
+- data[k++] = status.ether.n_tx_bytes_65_127;
+- data[k++] = status.ether.n_tx_bytes_64;
+- data[k++] = status.ether.n_tx_mcast;
+- data[k++] = status.ether.n_tx_bcast;
+- data[k++] = status.ether.n_rx_bytes_1024_2047;
+- data[k++] = status.ether.n_rx_bytes_512_1023;
+- data[k++] = status.ether.n_rx_bytes_256_511;
+- data[k++] = status.ether.n_rx_bytes_128_255;
+- data[k++] = status.ether.n_rx_bytes_65_127;
+- data[k++] = status.ether.n_rx_bytes_64;
+- data[k++] = status.ether.n_rx_mcast;
+- data[k++] = status.ether.n_rx_bcast;
++ data[k++] = status->ether.n_drops_nolearn;
++ data[k++] = status->ether.n_drops_noroute;
++ data[k++] = status->ether.n_drops_ill_dtag;
++ data[k++] = status->ether.n_drops_dtag;
++ data[k++] = status->ether.n_drops_sotag;
++ data[k++] = status->ether.n_drops_sitag;
++ data[k++] = status->ether.n_drops_utag;
++ data[k++] = status->ether.n_tx_bytes_1024_2047;
++ data[k++] = status->ether.n_tx_bytes_512_1023;
++ data[k++] = status->ether.n_tx_bytes_256_511;
++ data[k++] = status->ether.n_tx_bytes_128_255;
++ data[k++] = status->ether.n_tx_bytes_65_127;
++ data[k++] = status->ether.n_tx_bytes_64;
++ data[k++] = status->ether.n_tx_mcast;
++ data[k++] = status->ether.n_tx_bcast;
++ data[k++] = status->ether.n_rx_bytes_1024_2047;
++ data[k++] = status->ether.n_rx_bytes_512_1023;
++ data[k++] = status->ether.n_rx_bytes_256_511;
++ data[k++] = status->ether.n_rx_bytes_128_255;
++ data[k++] = status->ether.n_rx_bytes_65_127;
++ data[k++] = status->ether.n_rx_bytes_64;
++ data[k++] = status->ether.n_rx_mcast;
++ data[k++] = status->ether.n_rx_bcast;
++out:
++ kfree(status);
+ }
+
+ void sja1105_get_strings(struct dsa_switch *ds, int port,
+diff --git a/drivers/net/ethernet/allwinner/sun4i-emac.c b/drivers/net/ethernet/allwinner/sun4i-emac.c
+index 18d3b4340bd4..b3b8a8010142 100644
+--- a/drivers/net/ethernet/allwinner/sun4i-emac.c
++++ b/drivers/net/ethernet/allwinner/sun4i-emac.c
+@@ -417,7 +417,7 @@ static void emac_timeout(struct net_device *dev, unsigned int txqueue)
+ /* Hardware start transmission.
+ * Send a packet to media from the upper layer.
+ */
+-static int emac_start_xmit(struct sk_buff *skb, struct net_device *dev)
++static netdev_tx_t emac_start_xmit(struct sk_buff *skb, struct net_device *dev)
+ {
+ struct emac_board_info *db = netdev_priv(dev);
+ unsigned long channel;
+@@ -425,7 +425,7 @@ static int emac_start_xmit(struct sk_buff *skb, struct net_device *dev)
+
+ channel = db->tx_fifo_stat & 3;
+ if (channel == 3)
+- return 1;
++ return NETDEV_TX_BUSY;
+
+ channel = (channel == 1 ? 1 : 0);
+
+diff --git a/drivers/net/ethernet/amazon/ena/ena_com.c b/drivers/net/ethernet/amazon/ena/ena_com.c
+index a250046b8e18..07b0f396d3c2 100644
+--- a/drivers/net/ethernet/amazon/ena/ena_com.c
++++ b/drivers/net/ethernet/amazon/ena/ena_com.c
+@@ -2345,6 +2345,9 @@ int ena_com_get_hash_function(struct ena_com_dev *ena_dev,
+ rss->hash_key;
+ int rc;
+
++ if (unlikely(!func))
++ return -EINVAL;
++
+ rc = ena_com_get_feature_ex(ena_dev, &get_resp,
+ ENA_ADMIN_RSS_HASH_FUNCTION,
+ rss->hash_key_dma_addr,
+@@ -2357,8 +2360,7 @@ int ena_com_get_hash_function(struct ena_com_dev *ena_dev,
+ if (rss->hash_func)
+ rss->hash_func--;
+
+- if (func)
+- *func = rss->hash_func;
++ *func = rss->hash_func;
+
+ if (key)
+ memcpy(key, hash_key->key, (size_t)(hash_key->keys_num) << 2);
+diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_nic.c b/drivers/net/ethernet/aquantia/atlantic/aq_nic.c
+index a369705a786a..e5391e0b84f8 100644
+--- a/drivers/net/ethernet/aquantia/atlantic/aq_nic.c
++++ b/drivers/net/ethernet/aquantia/atlantic/aq_nic.c
+@@ -764,6 +764,9 @@ int aq_nic_get_regs(struct aq_nic_s *self, struct ethtool_regs *regs, void *p)
+ u32 *regs_buff = p;
+ int err = 0;
+
++ if (unlikely(!self->aq_hw_ops->hw_get_regs))
++ return -EOPNOTSUPP;
++
+ regs->version = 1;
+
+ err = self->aq_hw_ops->hw_get_regs(self->aq_hw,
+@@ -778,6 +781,9 @@ err_exit:
+
+ int aq_nic_get_regs_count(struct aq_nic_s *self)
+ {
++ if (unlikely(!self->aq_hw_ops->hw_get_regs))
++ return 0;
++
+ return self->aq_nic_cfg.aq_hw_caps->mac_regs_count;
+ }
+
+diff --git a/drivers/net/ethernet/broadcom/genet/bcmgenet.c b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
+index 79636c78127c..38bdfd4b46f0 100644
+--- a/drivers/net/ethernet/broadcom/genet/bcmgenet.c
++++ b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
+@@ -70,6 +70,9 @@
+ #define GENET_RDMA_REG_OFF (priv->hw_params->rdma_offset + \
+ TOTAL_DESC * DMA_DESC_SIZE)
+
++/* Forward declarations */
++static void bcmgenet_set_rx_mode(struct net_device *dev);
++
+ static inline void bcmgenet_writel(u32 value, void __iomem *offset)
+ {
+ /* MIPS chips strapped for BE will automagically configure the
+@@ -2803,6 +2806,7 @@ static void bcmgenet_netif_start(struct net_device *dev)
+ struct bcmgenet_priv *priv = netdev_priv(dev);
+
+ /* Start the network engine */
++ bcmgenet_set_rx_mode(dev);
+ bcmgenet_enable_rx_napi(priv);
+
+ umac_enable_set(priv, CMD_TX_EN | CMD_RX_EN, true);
+diff --git a/drivers/net/ethernet/broadcom/genet/bcmgenet.h b/drivers/net/ethernet/broadcom/genet/bcmgenet.h
+index daf8fb2c39b6..c3bfe97f2e5c 100644
+--- a/drivers/net/ethernet/broadcom/genet/bcmgenet.h
++++ b/drivers/net/ethernet/broadcom/genet/bcmgenet.h
+@@ -14,6 +14,7 @@
+ #include <linux/if_vlan.h>
+ #include <linux/phy.h>
+ #include <linux/dim.h>
++#include <linux/ethtool.h>
+
+ /* total number of Buffer Descriptors, same for Rx/Tx */
+ #define TOTAL_DESC 256
+@@ -676,6 +677,7 @@ struct bcmgenet_priv {
+ /* WOL */
+ struct clk *clk_wol;
+ u32 wolopts;
++ u8 sopass[SOPASS_MAX];
+
+ struct bcmgenet_mib_counters mib;
+
+diff --git a/drivers/net/ethernet/broadcom/genet/bcmgenet_wol.c b/drivers/net/ethernet/broadcom/genet/bcmgenet_wol.c
+index c9a43695b182..597c0498689a 100644
+--- a/drivers/net/ethernet/broadcom/genet/bcmgenet_wol.c
++++ b/drivers/net/ethernet/broadcom/genet/bcmgenet_wol.c
+@@ -41,18 +41,13 @@
+ void bcmgenet_get_wol(struct net_device *dev, struct ethtool_wolinfo *wol)
+ {
+ struct bcmgenet_priv *priv = netdev_priv(dev);
+- u32 reg;
+
+ wol->supported = WAKE_MAGIC | WAKE_MAGICSECURE;
+ wol->wolopts = priv->wolopts;
+ memset(wol->sopass, 0, sizeof(wol->sopass));
+
+- if (wol->wolopts & WAKE_MAGICSECURE) {
+- reg = bcmgenet_umac_readl(priv, UMAC_MPD_PW_MS);
+- put_unaligned_be16(reg, &wol->sopass[0]);
+- reg = bcmgenet_umac_readl(priv, UMAC_MPD_PW_LS);
+- put_unaligned_be32(reg, &wol->sopass[2]);
+- }
++ if (wol->wolopts & WAKE_MAGICSECURE)
++ memcpy(wol->sopass, priv->sopass, sizeof(priv->sopass));
+ }
+
+ /* ethtool function - set WOL (Wake on LAN) settings.
+@@ -62,7 +57,6 @@ int bcmgenet_set_wol(struct net_device *dev, struct ethtool_wolinfo *wol)
+ {
+ struct bcmgenet_priv *priv = netdev_priv(dev);
+ struct device *kdev = &priv->pdev->dev;
+- u32 reg;
+
+ if (!device_can_wakeup(kdev))
+ return -ENOTSUPP;
+@@ -70,17 +64,8 @@ int bcmgenet_set_wol(struct net_device *dev, struct ethtool_wolinfo *wol)
+ if (wol->wolopts & ~(WAKE_MAGIC | WAKE_MAGICSECURE))
+ return -EINVAL;
+
+- reg = bcmgenet_umac_readl(priv, UMAC_MPD_CTRL);
+- if (wol->wolopts & WAKE_MAGICSECURE) {
+- bcmgenet_umac_writel(priv, get_unaligned_be16(&wol->sopass[0]),
+- UMAC_MPD_PW_MS);
+- bcmgenet_umac_writel(priv, get_unaligned_be32(&wol->sopass[2]),
+- UMAC_MPD_PW_LS);
+- reg |= MPD_PW_EN;
+- } else {
+- reg &= ~MPD_PW_EN;
+- }
+- bcmgenet_umac_writel(priv, reg, UMAC_MPD_CTRL);
++ if (wol->wolopts & WAKE_MAGICSECURE)
++ memcpy(priv->sopass, wol->sopass, sizeof(priv->sopass));
+
+ /* Flag the device and relevant IRQ as wakeup capable */
+ if (wol->wolopts) {
+@@ -120,6 +105,14 @@ static int bcmgenet_poll_wol_status(struct bcmgenet_priv *priv)
+ return retries;
+ }
+
++static void bcmgenet_set_mpd_password(struct bcmgenet_priv *priv)
++{
++ bcmgenet_umac_writel(priv, get_unaligned_be16(&priv->sopass[0]),
++ UMAC_MPD_PW_MS);
++ bcmgenet_umac_writel(priv, get_unaligned_be32(&priv->sopass[2]),
++ UMAC_MPD_PW_LS);
++}
++
+ int bcmgenet_wol_power_down_cfg(struct bcmgenet_priv *priv,
+ enum bcmgenet_power_mode mode)
+ {
+@@ -144,13 +137,17 @@ int bcmgenet_wol_power_down_cfg(struct bcmgenet_priv *priv,
+
+ reg = bcmgenet_umac_readl(priv, UMAC_MPD_CTRL);
+ reg |= MPD_EN;
++ if (priv->wolopts & WAKE_MAGICSECURE) {
++ bcmgenet_set_mpd_password(priv);
++ reg |= MPD_PW_EN;
++ }
+ bcmgenet_umac_writel(priv, reg, UMAC_MPD_CTRL);
+
+ /* Do not leave UniMAC in MPD mode only */
+ retries = bcmgenet_poll_wol_status(priv);
+ if (retries < 0) {
+ reg = bcmgenet_umac_readl(priv, UMAC_MPD_CTRL);
+- reg &= ~MPD_EN;
++ reg &= ~(MPD_EN | MPD_PW_EN);
+ bcmgenet_umac_writel(priv, reg, UMAC_MPD_CTRL);
+ return retries;
+ }
+@@ -189,7 +186,7 @@ void bcmgenet_wol_power_up_cfg(struct bcmgenet_priv *priv,
+ reg = bcmgenet_umac_readl(priv, UMAC_MPD_CTRL);
+ if (!(reg & MPD_EN))
+ return; /* already powered up so skip the rest */
+- reg &= ~MPD_EN;
++ reg &= ~(MPD_EN | MPD_PW_EN);
+ bcmgenet_umac_writel(priv, reg, UMAC_MPD_CTRL);
+
+ /* Disable CRC Forward */
+diff --git a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
+index d97c320a2dc0..569e06d2bab2 100644
+--- a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
++++ b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
+@@ -2018,7 +2018,7 @@ static int dpaa2_eth_setup_tc(struct net_device *net_dev,
+ int i;
+
+ if (type != TC_SETUP_QDISC_MQPRIO)
+- return -EINVAL;
++ return -EOPNOTSUPP;
+
+ mqprio->hw = TC_MQPRIO_HW_OFFLOAD_TCS;
+ num_queues = dpaa2_eth_queue_count(priv);
+@@ -2030,7 +2030,7 @@ static int dpaa2_eth_setup_tc(struct net_device *net_dev,
+ if (num_tc > dpaa2_eth_tc_count(priv)) {
+ netdev_err(net_dev, "Max %d traffic classes supported\n",
+ dpaa2_eth_tc_count(priv));
+- return -EINVAL;
++ return -EOPNOTSUPP;
+ }
+
+ if (!num_tc) {
+diff --git a/drivers/net/ethernet/intel/e1000/e1000_main.c b/drivers/net/ethernet/intel/e1000/e1000_main.c
+index 0d51cbc88028..05bc6e216bca 100644
+--- a/drivers/net/ethernet/intel/e1000/e1000_main.c
++++ b/drivers/net/ethernet/intel/e1000/e1000_main.c
+@@ -3136,8 +3136,9 @@ static netdev_tx_t e1000_xmit_frame(struct sk_buff *skb,
+ hdr_len = skb_transport_offset(skb) + tcp_hdrlen(skb);
+ if (skb->data_len && hdr_len == len) {
+ switch (hw->mac_type) {
++ case e1000_82544: {
+ unsigned int pull_size;
+- case e1000_82544:
++
+ /* Make sure we have room to chop off 4 bytes,
+ * and that the end alignment will work out to
+ * this hardware's requirements
+@@ -3158,6 +3159,7 @@ static netdev_tx_t e1000_xmit_frame(struct sk_buff *skb,
+ }
+ len = skb_headlen(skb);
+ break;
++ }
+ default:
+ /* do nothing */
+ break;
+diff --git a/drivers/net/ethernet/intel/e1000e/e1000.h b/drivers/net/ethernet/intel/e1000e/e1000.h
+index 37a2314d3e6b..944abd5eae11 100644
+--- a/drivers/net/ethernet/intel/e1000e/e1000.h
++++ b/drivers/net/ethernet/intel/e1000e/e1000.h
+@@ -576,7 +576,6 @@ static inline u32 __er32(struct e1000_hw *hw, unsigned long reg)
+
+ #define er32(reg) __er32(hw, E1000_##reg)
+
+-s32 __ew32_prepare(struct e1000_hw *hw);
+ void __ew32(struct e1000_hw *hw, unsigned long reg, u32 val);
+
+ #define ew32(reg, val) __ew32(hw, E1000_##reg, (val))
+diff --git a/drivers/net/ethernet/intel/e1000e/netdev.c b/drivers/net/ethernet/intel/e1000e/netdev.c
+index 177c6da80c57..df3d50e759de 100644
+--- a/drivers/net/ethernet/intel/e1000e/netdev.c
++++ b/drivers/net/ethernet/intel/e1000e/netdev.c
+@@ -119,14 +119,12 @@ static const struct e1000_reg_info e1000_reg_info_tbl[] = {
+ * has bit 24 set while ME is accessing MAC CSR registers, wait if it is set
+ * and try again a number of times.
+ **/
+-s32 __ew32_prepare(struct e1000_hw *hw)
++static void __ew32_prepare(struct e1000_hw *hw)
+ {
+ s32 i = E1000_ICH_FWSM_PCIM2PCI_COUNT;
+
+ while ((er32(FWSM) & E1000_ICH_FWSM_PCIM2PCI) && --i)
+ udelay(50);
+-
+- return i;
+ }
+
+ void __ew32(struct e1000_hw *hw, unsigned long reg, u32 val)
+@@ -607,11 +605,11 @@ static void e1000e_update_rdt_wa(struct e1000_ring *rx_ring, unsigned int i)
+ {
+ struct e1000_adapter *adapter = rx_ring->adapter;
+ struct e1000_hw *hw = &adapter->hw;
+- s32 ret_val = __ew32_prepare(hw);
+
++ __ew32_prepare(hw);
+ writel(i, rx_ring->tail);
+
+- if (unlikely(!ret_val && (i != readl(rx_ring->tail)))) {
++ if (unlikely(i != readl(rx_ring->tail))) {
+ u32 rctl = er32(RCTL);
+
+ ew32(RCTL, rctl & ~E1000_RCTL_EN);
+@@ -624,11 +622,11 @@ static void e1000e_update_tdt_wa(struct e1000_ring *tx_ring, unsigned int i)
+ {
+ struct e1000_adapter *adapter = tx_ring->adapter;
+ struct e1000_hw *hw = &adapter->hw;
+- s32 ret_val = __ew32_prepare(hw);
+
++ __ew32_prepare(hw);
+ writel(i, tx_ring->tail);
+
+- if (unlikely(!ret_val && (i != readl(tx_ring->tail)))) {
++ if (unlikely(i != readl(tx_ring->tail))) {
+ u32 tctl = er32(TCTL);
+
+ ew32(TCTL, tctl & ~E1000_TCTL_EN);
+@@ -5294,6 +5292,10 @@ static void e1000_watchdog_task(struct work_struct *work)
+ /* oops */
+ break;
+ }
++ if (hw->mac.type == e1000_pch_spt) {
++ netdev->features &= ~NETIF_F_TSO;
++ netdev->features &= ~NETIF_F_TSO6;
++ }
+ }
+
+ /* enable transmits in the hardware, need to do this
+diff --git a/drivers/net/ethernet/intel/ice/ice.h b/drivers/net/ethernet/intel/ice/ice.h
+index 5c11448bfbb3..020ee167f73a 100644
+--- a/drivers/net/ethernet/intel/ice/ice.h
++++ b/drivers/net/ethernet/intel/ice/ice.h
+@@ -366,7 +366,7 @@ struct ice_pf {
+ struct ice_sw *first_sw; /* first switch created by firmware */
+ /* Virtchnl/SR-IOV config info */
+ struct ice_vf *vf;
+- int num_alloc_vfs; /* actual number of VFs allocated */
++ u16 num_alloc_vfs; /* actual number of VFs allocated */
+ u16 num_vfs_supported; /* num VFs supported for this PF */
+ u16 num_qps_per_vf;
+ u16 num_msix_per_vf;
+diff --git a/drivers/net/ethernet/intel/ice/ice_common.c b/drivers/net/ethernet/intel/ice/ice_common.c
+index 2c0d8fd3d5cd..09b374590ffc 100644
+--- a/drivers/net/ethernet/intel/ice/ice_common.c
++++ b/drivers/net/ethernet/intel/ice/ice_common.c
+@@ -322,6 +322,7 @@ ice_aq_get_link_info(struct ice_port_info *pi, bool ena_lse,
+ static enum ice_status ice_init_fltr_mgmt_struct(struct ice_hw *hw)
+ {
+ struct ice_switch_info *sw;
++ enum ice_status status;
+
+ hw->switch_info = devm_kzalloc(ice_hw_to_dev(hw),
+ sizeof(*hw->switch_info), GFP_KERNEL);
+@@ -332,7 +333,12 @@ static enum ice_status ice_init_fltr_mgmt_struct(struct ice_hw *hw)
+
+ INIT_LIST_HEAD(&sw->vsi_list_map_head);
+
+- return ice_init_def_sw_recp(hw);
++ status = ice_init_def_sw_recp(hw);
++ if (status) {
++ devm_kfree(ice_hw_to_dev(hw), hw->switch_info);
++ return status;
++ }
++ return 0;
+ }
+
+ /**
+diff --git a/drivers/net/ethernet/intel/ice/ice_controlq.c b/drivers/net/ethernet/intel/ice/ice_controlq.c
+index dd946866d7b8..cc29a16f41f7 100644
+--- a/drivers/net/ethernet/intel/ice/ice_controlq.c
++++ b/drivers/net/ethernet/intel/ice/ice_controlq.c
+@@ -199,7 +199,9 @@ unwind_alloc_rq_bufs:
+ cq->rq.r.rq_bi[i].pa = 0;
+ cq->rq.r.rq_bi[i].size = 0;
+ }
++ cq->rq.r.rq_bi = NULL;
+ devm_kfree(ice_hw_to_dev(hw), cq->rq.dma_head);
++ cq->rq.dma_head = NULL;
+
+ return ICE_ERR_NO_MEMORY;
+ }
+@@ -245,7 +247,9 @@ unwind_alloc_sq_bufs:
+ cq->sq.r.sq_bi[i].pa = 0;
+ cq->sq.r.sq_bi[i].size = 0;
+ }
++ cq->sq.r.sq_bi = NULL;
+ devm_kfree(ice_hw_to_dev(hw), cq->sq.dma_head);
++ cq->sq.dma_head = NULL;
+
+ return ICE_ERR_NO_MEMORY;
+ }
+@@ -304,6 +308,28 @@ ice_cfg_rq_regs(struct ice_hw *hw, struct ice_ctl_q_info *cq)
+ return 0;
+ }
+
++#define ICE_FREE_CQ_BUFS(hw, qi, ring) \
++do { \
++ int i; \
++ /* free descriptors */ \
++ if ((qi)->ring.r.ring##_bi) \
++ for (i = 0; i < (qi)->num_##ring##_entries; i++) \
++ if ((qi)->ring.r.ring##_bi[i].pa) { \
++ dmam_free_coherent(ice_hw_to_dev(hw), \
++ (qi)->ring.r.ring##_bi[i].size, \
++ (qi)->ring.r.ring##_bi[i].va, \
++ (qi)->ring.r.ring##_bi[i].pa); \
++ (qi)->ring.r.ring##_bi[i].va = NULL;\
++ (qi)->ring.r.ring##_bi[i].pa = 0;\
++ (qi)->ring.r.ring##_bi[i].size = 0;\
++ } \
++ /* free the buffer info list */ \
++ if ((qi)->ring.cmd_buf) \
++ devm_kfree(ice_hw_to_dev(hw), (qi)->ring.cmd_buf); \
++ /* free DMA head */ \
++ devm_kfree(ice_hw_to_dev(hw), (qi)->ring.dma_head); \
++} while (0)
++
+ /**
+ * ice_init_sq - main initialization routine for Control ATQ
+ * @hw: pointer to the hardware structure
+@@ -357,6 +383,7 @@ static enum ice_status ice_init_sq(struct ice_hw *hw, struct ice_ctl_q_info *cq)
+ goto init_ctrlq_exit;
+
+ init_ctrlq_free_rings:
++ ICE_FREE_CQ_BUFS(hw, cq, sq);
+ ice_free_cq_ring(hw, &cq->sq);
+
+ init_ctrlq_exit:
+@@ -416,33 +443,13 @@ static enum ice_status ice_init_rq(struct ice_hw *hw, struct ice_ctl_q_info *cq)
+ goto init_ctrlq_exit;
+
+ init_ctrlq_free_rings:
++ ICE_FREE_CQ_BUFS(hw, cq, rq);
+ ice_free_cq_ring(hw, &cq->rq);
+
+ init_ctrlq_exit:
+ return ret_code;
+ }
+
+-#define ICE_FREE_CQ_BUFS(hw, qi, ring) \
+-do { \
+- int i; \
+- /* free descriptors */ \
+- for (i = 0; i < (qi)->num_##ring##_entries; i++) \
+- if ((qi)->ring.r.ring##_bi[i].pa) { \
+- dmam_free_coherent(ice_hw_to_dev(hw), \
+- (qi)->ring.r.ring##_bi[i].size,\
+- (qi)->ring.r.ring##_bi[i].va,\
+- (qi)->ring.r.ring##_bi[i].pa);\
+- (qi)->ring.r.ring##_bi[i].va = NULL; \
+- (qi)->ring.r.ring##_bi[i].pa = 0; \
+- (qi)->ring.r.ring##_bi[i].size = 0; \
+- } \
+- /* free the buffer info list */ \
+- if ((qi)->ring.cmd_buf) \
+- devm_kfree(ice_hw_to_dev(hw), (qi)->ring.cmd_buf); \
+- /* free DMA head */ \
+- devm_kfree(ice_hw_to_dev(hw), (qi)->ring.dma_head); \
+-} while (0)
+-
+ /**
+ * ice_shutdown_sq - shutdown the Control ATQ
+ * @hw: pointer to the hardware structure
+diff --git a/drivers/net/ethernet/intel/ice/ice_ethtool.c b/drivers/net/ethernet/intel/ice/ice_ethtool.c
+index 593fb37bd59e..153e3565e313 100644
+--- a/drivers/net/ethernet/intel/ice/ice_ethtool.c
++++ b/drivers/net/ethernet/intel/ice/ice_ethtool.c
+@@ -3171,10 +3171,6 @@ ice_get_channels(struct net_device *dev, struct ethtool_channels *ch)
+ struct ice_vsi *vsi = np->vsi;
+ struct ice_pf *pf = vsi->back;
+
+- /* check to see if VSI is active */
+- if (test_bit(__ICE_DOWN, vsi->state))
+- return;
+-
+ /* report maximum channels */
+ ch->max_rx = ice_get_max_rxq(pf);
+ ch->max_tx = ice_get_max_txq(pf);
+diff --git a/drivers/net/ethernet/intel/ice/ice_flex_pipe.c b/drivers/net/ethernet/intel/ice/ice_flex_pipe.c
+index 42bac3ec5526..abfec38bb483 100644
+--- a/drivers/net/ethernet/intel/ice/ice_flex_pipe.c
++++ b/drivers/net/ethernet/intel/ice/ice_flex_pipe.c
+@@ -2962,8 +2962,10 @@ ice_add_prof(struct ice_hw *hw, enum ice_block blk, u64 id, u8 ptypes[],
+
+ /* add profile info */
+ prof = devm_kzalloc(ice_hw_to_dev(hw), sizeof(*prof), GFP_KERNEL);
+- if (!prof)
++ if (!prof) {
++ status = ICE_ERR_NO_MEMORY;
+ goto err_ice_add_prof;
++ }
+
+ prof->profile_cookie = id;
+ prof->prof_id = prof_id;
+@@ -3703,8 +3705,10 @@ ice_add_prof_id_vsig(struct ice_hw *hw, enum ice_block blk, u16 vsig, u64 hdl,
+ t->tcam[i].prof_id,
+ t->tcam[i].ptg, vsig, 0, 0,
+ vl_msk, dc_msk, nm_msk);
+- if (status)
++ if (status) {
++ devm_kfree(ice_hw_to_dev(hw), p);
+ goto err_ice_add_prof_id_vsig;
++ }
+
+ /* log change */
+ list_add(&p->list_entry, chg);
+diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
+index 5b190c257124..69e50331e08e 100644
+--- a/drivers/net/ethernet/intel/ice/ice_main.c
++++ b/drivers/net/ethernet/intel/ice/ice_main.c
+@@ -1898,6 +1898,9 @@ free_qmap:
+ for (i = 0; i < vsi->tc_cfg.numtc; i++)
+ max_txqs[i] = vsi->num_txq;
+
++ /* change number of XDP Tx queues to 0 */
++ vsi->num_xdp_txq = 0;
++
+ return ice_cfg_vsi_lan(vsi->port_info, vsi->idx, vsi->tc_cfg.ena_tc,
+ max_txqs);
+ }
+@@ -3123,7 +3126,7 @@ static char *ice_get_opt_fw_name(struct ice_pf *pf)
+ if (!opt_fw_filename)
+ return NULL;
+
+- snprintf(opt_fw_filename, NAME_MAX, "%sice-%016llX.pkg",
++ snprintf(opt_fw_filename, NAME_MAX, "%sice-%016llx.pkg",
+ ICE_DDP_PKG_PATH, dsn);
+
+ return opt_fw_filename;
+@@ -3295,7 +3298,7 @@ ice_probe(struct pci_dev *pdev, const struct pci_device_id __always_unused *ent)
+ if (err) {
+ dev_err(dev, "ice_init_interrupt_scheme failed: %d\n", err);
+ err = -EIO;
+- goto err_init_interrupt_unroll;
++ goto err_init_vsi_unroll;
+ }
+
+ /* Driver is mostly up */
+@@ -3384,6 +3387,7 @@ err_msix_misc_unroll:
+ ice_free_irq_msix_misc(pf);
+ err_init_interrupt_unroll:
+ ice_clear_interrupt_scheme(pf);
++err_init_vsi_unroll:
+ devm_kfree(dev, pf->vsi);
+ err_init_pf_unroll:
+ ice_deinit_pf(pf);
+diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c
+index 15191a325918..f1fdb4d4c826 100644
+--- a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c
++++ b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c
+@@ -10,10 +10,11 @@
+ * @pf: pointer to the PF structure
+ * @vf_id: the ID of the VF to check
+ */
+-static int ice_validate_vf_id(struct ice_pf *pf, int vf_id)
++static int ice_validate_vf_id(struct ice_pf *pf, u16 vf_id)
+ {
++ /* vf_id range is only valid for 0-255, and should always be unsigned */
+ if (vf_id >= pf->num_alloc_vfs) {
+- dev_err(ice_pf_to_dev(pf), "Invalid VF ID: %d\n", vf_id);
++ dev_err(ice_pf_to_dev(pf), "Invalid VF ID: %u\n", vf_id);
+ return -EINVAL;
+ }
+ return 0;
+@@ -27,7 +28,7 @@ static int ice_validate_vf_id(struct ice_pf *pf, int vf_id)
+ static int ice_check_vf_init(struct ice_pf *pf, struct ice_vf *vf)
+ {
+ if (!test_bit(ICE_VF_STATE_INIT, vf->vf_states)) {
+- dev_err(ice_pf_to_dev(pf), "VF ID: %d in reset. Try again.\n",
++ dev_err(ice_pf_to_dev(pf), "VF ID: %u in reset. Try again.\n",
+ vf->vf_id);
+ return -EBUSY;
+ }
+@@ -337,7 +338,7 @@ void ice_free_vfs(struct ice_pf *pf)
+ * before this function ever gets called.
+ */
+ if (!pci_vfs_assigned(pf->pdev)) {
+- int vf_id;
++ unsigned int vf_id;
+
+ /* Acknowledge VFLR for all VFs. Without this, VFs will fail to
+ * work correctly when SR-IOV gets re-enabled.
+@@ -368,9 +369,9 @@ static void ice_trigger_vf_reset(struct ice_vf *vf, bool is_vflr, bool is_pfr)
+ {
+ struct ice_pf *pf = vf->pf;
+ u32 reg, reg_idx, bit_idx;
++ unsigned int vf_abs_id, i;
+ struct device *dev;
+ struct ice_hw *hw;
+- int vf_abs_id, i;
+
+ dev = ice_pf_to_dev(pf);
+ hw = &pf->hw;
+@@ -418,7 +419,7 @@ static void ice_trigger_vf_reset(struct ice_vf *vf, bool is_vflr, bool is_pfr)
+ if ((reg & VF_TRANS_PENDING_M) == 0)
+ break;
+
+- dev_err(dev, "VF %d PCI transactions stuck\n", vf->vf_id);
++ dev_err(dev, "VF %u PCI transactions stuck\n", vf->vf_id);
+ udelay(ICE_PCI_CIAD_WAIT_DELAY_US);
+ }
+ }
+@@ -1483,7 +1484,7 @@ int ice_sriov_configure(struct pci_dev *pdev, int num_vfs)
+ void ice_process_vflr_event(struct ice_pf *pf)
+ {
+ struct ice_hw *hw = &pf->hw;
+- int vf_id;
++ unsigned int vf_id;
+ u32 reg;
+
+ if (!test_and_clear_bit(__ICE_VFLR_EVENT_PENDING, pf->state) ||
+@@ -1524,7 +1525,7 @@ static void ice_vc_reset_vf(struct ice_vf *vf)
+ */
+ static struct ice_vf *ice_get_vf_from_pfq(struct ice_pf *pf, u16 pfq)
+ {
+- int vf_id;
++ unsigned int vf_id;
+
+ ice_for_each_vf(pf, vf_id) {
+ struct ice_vf *vf = &pf->vf[vf_id];
+@@ -2117,6 +2118,52 @@ static bool ice_vc_validate_vqs_bitmaps(struct virtchnl_queue_select *vqs)
+ return true;
+ }
+
++/**
++ * ice_vf_ena_txq_interrupt - enable Tx queue interrupt via QINT_TQCTL
++ * @vsi: VSI of the VF to configure
++ * @q_idx: VF queue index used to determine the queue in the PF's space
++ */
++static void ice_vf_ena_txq_interrupt(struct ice_vsi *vsi, u32 q_idx)
++{
++ struct ice_hw *hw = &vsi->back->hw;
++ u32 pfq = vsi->txq_map[q_idx];
++ u32 reg;
++
++ reg = rd32(hw, QINT_TQCTL(pfq));
++
++ /* MSI-X index 0 in the VF's space is always for the OICR, which means
++ * this is most likely a poll mode VF driver, so don't enable an
++ * interrupt that was never configured via VIRTCHNL_OP_CONFIG_IRQ_MAP
++ */
++ if (!(reg & QINT_TQCTL_MSIX_INDX_M))
++ return;
++
++ wr32(hw, QINT_TQCTL(pfq), reg | QINT_TQCTL_CAUSE_ENA_M);
++}
++
++/**
++ * ice_vf_ena_rxq_interrupt - enable Rx queue interrupt via QINT_RQCTL
++ * @vsi: VSI of the VF to configure
++ * @q_idx: VF queue index used to determine the queue in the PF's space
++ */
++static void ice_vf_ena_rxq_interrupt(struct ice_vsi *vsi, u32 q_idx)
++{
++ struct ice_hw *hw = &vsi->back->hw;
++ u32 pfq = vsi->rxq_map[q_idx];
++ u32 reg;
++
++ reg = rd32(hw, QINT_RQCTL(pfq));
++
++ /* MSI-X index 0 in the VF's space is always for the OICR, which means
++ * this is most likely a poll mode VF driver, so don't enable an
++ * interrupt that was never configured via VIRTCHNL_OP_CONFIG_IRQ_MAP
++ */
++ if (!(reg & QINT_RQCTL_MSIX_INDX_M))
++ return;
++
++ wr32(hw, QINT_RQCTL(pfq), reg | QINT_RQCTL_CAUSE_ENA_M);
++}
++
+ /**
+ * ice_vc_ena_qs_msg
+ * @vf: pointer to the VF info
+@@ -2177,6 +2224,7 @@ static int ice_vc_ena_qs_msg(struct ice_vf *vf, u8 *msg)
+ goto error_param;
+ }
+
++ ice_vf_ena_rxq_interrupt(vsi, vf_q_id);
+ set_bit(vf_q_id, vf->rxq_ena);
+ }
+
+@@ -2192,6 +2240,7 @@ static int ice_vc_ena_qs_msg(struct ice_vf *vf, u8 *msg)
+ if (test_bit(vf_q_id, vf->txq_ena))
+ continue;
+
++ ice_vf_ena_txq_interrupt(vsi, vf_q_id);
+ set_bit(vf_q_id, vf->txq_ena);
+ }
+
+diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.h b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.h
+index 3f9464269bd2..62875704cecf 100644
+--- a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.h
++++ b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.h
+@@ -64,7 +64,7 @@ struct ice_mdd_vf_events {
+ struct ice_vf {
+ struct ice_pf *pf;
+
+- s16 vf_id; /* VF ID in the PF space */
++ u16 vf_id; /* VF ID in the PF space */
+ u16 lan_vsi_idx; /* index into PF struct */
+ /* first vector index of this VF in the PF space */
+ int first_vector_idx;
+diff --git a/drivers/net/ethernet/intel/igb/igb_ethtool.c b/drivers/net/ethernet/intel/igb/igb_ethtool.c
+index 39d3b76a6f5d..2cd003c5ad43 100644
+--- a/drivers/net/ethernet/intel/igb/igb_ethtool.c
++++ b/drivers/net/ethernet/intel/igb/igb_ethtool.c
+@@ -143,7 +143,8 @@ static int igb_get_link_ksettings(struct net_device *netdev,
+ u32 speed;
+ u32 supported, advertising;
+
+- status = rd32(E1000_STATUS);
++ status = pm_runtime_suspended(&adapter->pdev->dev) ?
++ 0 : rd32(E1000_STATUS);
+ if (hw->phy.media_type == e1000_media_type_copper) {
+
+ supported = (SUPPORTED_10baseT_Half |
+diff --git a/drivers/net/ethernet/intel/igc/igc_main.c b/drivers/net/ethernet/intel/igc/igc_main.c
+index 69fa1ce1f927..c7020ff2f490 100644
+--- a/drivers/net/ethernet/intel/igc/igc_main.c
++++ b/drivers/net/ethernet/intel/igc/igc_main.c
+@@ -2325,7 +2325,9 @@ static void igc_configure(struct igc_adapter *adapter)
+ igc_setup_mrqc(adapter);
+ igc_setup_rctl(adapter);
+
++ igc_set_default_mac_filter(adapter);
+ igc_nfc_filter_restore(adapter);
++
+ igc_configure_tx(adapter);
+ igc_configure_rx(adapter);
+
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_common.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_common.c
+index 0bd1294ba517..39c5e6fdb72c 100644
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_common.c
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_common.c
+@@ -2243,7 +2243,7 @@ s32 ixgbe_fc_enable_generic(struct ixgbe_hw *hw)
+ }
+
+ /* Configure pause time (2 TCs per register) */
+- reg = hw->fc.pause_time * 0x00010001;
++ reg = hw->fc.pause_time * 0x00010001U;
+ for (i = 0; i < (MAX_TRAFFIC_CLASS / 2); i++)
+ IXGBE_WRITE_REG(hw, IXGBE_FCTTV(i), reg);
+
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
+index 718931d951bc..ea6834bae04c 100644
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
+@@ -2254,7 +2254,8 @@ static void ixgbe_rx_buffer_flip(struct ixgbe_ring *rx_ring,
+ rx_buffer->page_offset ^= truesize;
+ #else
+ unsigned int truesize = ring_uses_build_skb(rx_ring) ?
+- SKB_DATA_ALIGN(IXGBE_SKB_PAD + size) :
++ SKB_DATA_ALIGN(IXGBE_SKB_PAD + size) +
++ SKB_DATA_ALIGN(sizeof(struct skb_shared_info)) :
+ SKB_DATA_ALIGN(size);
+
+ rx_buffer->page_offset += truesize;
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
+index 411e5ea1031e..64786568af0d 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
+@@ -1856,13 +1856,17 @@ static int otx2_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ num_vec = pci_msix_vec_count(pdev);
+ hw->irq_name = devm_kmalloc_array(&hw->pdev->dev, num_vec, NAME_SIZE,
+ GFP_KERNEL);
+- if (!hw->irq_name)
++ if (!hw->irq_name) {
++ err = -ENOMEM;
+ goto err_free_netdev;
++ }
+
+ hw->affinity_mask = devm_kcalloc(&hw->pdev->dev, num_vec,
+ sizeof(cpumask_var_t), GFP_KERNEL);
+- if (!hw->affinity_mask)
++ if (!hw->affinity_mask) {
++ err = -ENOMEM;
+ goto err_free_netdev;
++ }
+
+ /* Map CSRs */
+ pf->reg_base = pcim_iomap(pdev, PCI_CFG_REG_BAR_NUM, 0);
+diff --git a/drivers/net/ethernet/mellanox/mlx4/crdump.c b/drivers/net/ethernet/mellanox/mlx4/crdump.c
+index 73eae80e1cb7..ac5468b77488 100644
+--- a/drivers/net/ethernet/mellanox/mlx4/crdump.c
++++ b/drivers/net/ethernet/mellanox/mlx4/crdump.c
+@@ -197,6 +197,7 @@ int mlx4_crdump_collect(struct mlx4_dev *dev)
+ err = devlink_region_snapshot_id_get(devlink, &id);
+ if (err) {
+ mlx4_err(dev, "crdump: devlink get snapshot id err %d\n", err);
++ iounmap(cr_space);
+ return err;
+ }
+
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c
+index 153d6eb19d3c..470282daed19 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c
+@@ -1132,7 +1132,7 @@ mlx5_tc_ct_flow_offload(struct mlx5e_priv *priv,
+ {
+ bool clear_action = attr->ct_attr.ct_action & TCA_CT_ACT_CLEAR;
+ struct mlx5_tc_ct_priv *ct_priv = mlx5_tc_ct_get_ct_priv(priv);
+- struct mlx5_flow_handle *rule;
++ struct mlx5_flow_handle *rule = ERR_PTR(-EINVAL);
+ int err;
+
+ if (!ct_priv)
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
+index e2beb89c1832..b69957be653a 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
+@@ -1501,6 +1501,7 @@ out:
+
+ #ifdef CONFIG_MLX5_CORE_IPOIB
+
++#define MLX5_IB_GRH_SGID_OFFSET 8
+ #define MLX5_IB_GRH_DGID_OFFSET 24
+ #define MLX5_GID_SIZE 16
+
+@@ -1514,6 +1515,7 @@ static inline void mlx5i_complete_rx_cqe(struct mlx5e_rq *rq,
+ struct net_device *netdev;
+ struct mlx5e_priv *priv;
+ char *pseudo_header;
++ u32 flags_rqpn;
+ u32 qpn;
+ u8 *dgid;
+ u8 g;
+@@ -1535,7 +1537,8 @@ static inline void mlx5i_complete_rx_cqe(struct mlx5e_rq *rq,
+ tstamp = &priv->tstamp;
+ stats = &priv->channel_stats[rq->ix].rq;
+
+- g = (be32_to_cpu(cqe->flags_rqpn) >> 28) & 3;
++ flags_rqpn = be32_to_cpu(cqe->flags_rqpn);
++ g = (flags_rqpn >> 28) & 3;
+ dgid = skb->data + MLX5_IB_GRH_DGID_OFFSET;
+ if ((!g) || dgid[0] != 0xff)
+ skb->pkt_type = PACKET_HOST;
+@@ -1544,9 +1547,15 @@ static inline void mlx5i_complete_rx_cqe(struct mlx5e_rq *rq,
+ else
+ skb->pkt_type = PACKET_MULTICAST;
+
+- /* TODO: IB/ipoib: Allow mcast packets from other VFs
+- * 68996a6e760e5c74654723eeb57bf65628ae87f4
++	/* Drop packets that this interface sent, i.e. multicast packets
++ * that the HCA has replicated.
+ */
++ if (g && (qpn == (flags_rqpn & 0xffffff)) &&
++ (memcmp(netdev->dev_addr + 4, skb->data + MLX5_IB_GRH_SGID_OFFSET,
++ MLX5_GID_SIZE) == 0)) {
++ skb->dev = NULL;
++ return;
++ }
+
+ skb_pull(skb, MLX5_IB_GRH_BYTES);
+
+diff --git a/drivers/net/ethernet/mscc/ocelot_ace.c b/drivers/net/ethernet/mscc/ocelot_ace.c
+index 3bd286044480..8a2f7d13ef6d 100644
+--- a/drivers/net/ethernet/mscc/ocelot_ace.c
++++ b/drivers/net/ethernet/mscc/ocelot_ace.c
+@@ -706,13 +706,114 @@ ocelot_ace_rule_get_rule_index(struct ocelot_acl_block *block, int index)
+ return NULL;
+ }
+
++/* If @on=false, then SNAP, ARP, IP and OAM frames will not match on keys based
++ * on destination and source MAC addresses, but only on higher-level protocol
++ * information. The only frame types to match on keys containing MAC addresses
++ * in this case are non-SNAP, non-ARP, non-IP and non-OAM frames.
++ *
++ * If @on=true, then the above frame types (SNAP, ARP, IP and OAM) will match
++ * on MAC_ETYPE keys such as destination and source MAC on this ingress port.
++ * However the setting has the side effect of making these frames not matching
++ * on any _other_ keys than MAC_ETYPE ones.
++ */
++static void ocelot_match_all_as_mac_etype(struct ocelot *ocelot, int port,
++ bool on)
++{
++ u32 val = 0;
++
++ if (on)
++ val = ANA_PORT_VCAP_S2_CFG_S2_SNAP_DIS(3) |
++ ANA_PORT_VCAP_S2_CFG_S2_ARP_DIS(3) |
++ ANA_PORT_VCAP_S2_CFG_S2_IP_TCPUDP_DIS(3) |
++ ANA_PORT_VCAP_S2_CFG_S2_IP_OTHER_DIS(3) |
++ ANA_PORT_VCAP_S2_CFG_S2_OAM_DIS(3);
++
++ ocelot_rmw_gix(ocelot, val,
++ ANA_PORT_VCAP_S2_CFG_S2_SNAP_DIS_M |
++ ANA_PORT_VCAP_S2_CFG_S2_ARP_DIS_M |
++ ANA_PORT_VCAP_S2_CFG_S2_IP_TCPUDP_DIS_M |
++ ANA_PORT_VCAP_S2_CFG_S2_IP_OTHER_DIS_M |
++ ANA_PORT_VCAP_S2_CFG_S2_OAM_DIS_M,
++ ANA_PORT_VCAP_S2_CFG, port);
++}
++
++static bool ocelot_ace_is_problematic_mac_etype(struct ocelot_ace_rule *ace)
++{
++ if (ace->type != OCELOT_ACE_TYPE_ETYPE)
++ return false;
++ if (ether_addr_to_u64(ace->frame.etype.dmac.value) &
++ ether_addr_to_u64(ace->frame.etype.dmac.mask))
++ return true;
++ if (ether_addr_to_u64(ace->frame.etype.smac.value) &
++ ether_addr_to_u64(ace->frame.etype.smac.mask))
++ return true;
++ return false;
++}
++
++static bool ocelot_ace_is_problematic_non_mac_etype(struct ocelot_ace_rule *ace)
++{
++ if (ace->type == OCELOT_ACE_TYPE_SNAP)
++ return true;
++ if (ace->type == OCELOT_ACE_TYPE_ARP)
++ return true;
++ if (ace->type == OCELOT_ACE_TYPE_IPV4)
++ return true;
++ if (ace->type == OCELOT_ACE_TYPE_IPV6)
++ return true;
++ return false;
++}
++
++static bool ocelot_exclusive_mac_etype_ace_rules(struct ocelot *ocelot,
++ struct ocelot_ace_rule *ace)
++{
++ struct ocelot_acl_block *block = &ocelot->acl_block;
++ struct ocelot_ace_rule *tmp;
++ unsigned long port;
++ int i;
++
++ if (ocelot_ace_is_problematic_mac_etype(ace)) {
++ /* Search for any non-MAC_ETYPE rules on the port */
++ for (i = 0; i < block->count; i++) {
++ tmp = ocelot_ace_rule_get_rule_index(block, i);
++ if (tmp->ingress_port_mask & ace->ingress_port_mask &&
++ ocelot_ace_is_problematic_non_mac_etype(tmp))
++ return false;
++ }
++
++ for_each_set_bit(port, &ace->ingress_port_mask,
++ ocelot->num_phys_ports)
++ ocelot_match_all_as_mac_etype(ocelot, port, true);
++ } else if (ocelot_ace_is_problematic_non_mac_etype(ace)) {
++ /* Search for any MAC_ETYPE rules on the port */
++ for (i = 0; i < block->count; i++) {
++ tmp = ocelot_ace_rule_get_rule_index(block, i);
++ if (tmp->ingress_port_mask & ace->ingress_port_mask &&
++ ocelot_ace_is_problematic_mac_etype(tmp))
++ return false;
++ }
++
++ for_each_set_bit(port, &ace->ingress_port_mask,
++ ocelot->num_phys_ports)
++ ocelot_match_all_as_mac_etype(ocelot, port, false);
++ }
++
++ return true;
++}
++
+ int ocelot_ace_rule_offload_add(struct ocelot *ocelot,
+- struct ocelot_ace_rule *rule)
++ struct ocelot_ace_rule *rule,
++ struct netlink_ext_ack *extack)
+ {
+ struct ocelot_acl_block *block = &ocelot->acl_block;
+ struct ocelot_ace_rule *ace;
+ int i, index;
+
++ if (!ocelot_exclusive_mac_etype_ace_rules(ocelot, rule)) {
++ NL_SET_ERR_MSG_MOD(extack,
++ "Cannot mix MAC_ETYPE with non-MAC_ETYPE rules");
++ return -EBUSY;
++ }
++
+ /* Add rule to the linked list */
+ ocelot_ace_rule_add(ocelot, block, rule);
+
+diff --git a/drivers/net/ethernet/mscc/ocelot_ace.h b/drivers/net/ethernet/mscc/ocelot_ace.h
+index 29d22c566786..099e177f2617 100644
+--- a/drivers/net/ethernet/mscc/ocelot_ace.h
++++ b/drivers/net/ethernet/mscc/ocelot_ace.h
+@@ -194,7 +194,7 @@ struct ocelot_ace_rule {
+
+ enum ocelot_ace_action action;
+ struct ocelot_ace_stats stats;
+- u16 ingress_port_mask;
++ unsigned long ingress_port_mask;
+
+ enum ocelot_vcap_bit dmac_mc;
+ enum ocelot_vcap_bit dmac_bc;
+@@ -215,7 +215,8 @@ struct ocelot_ace_rule {
+ };
+
+ int ocelot_ace_rule_offload_add(struct ocelot *ocelot,
+- struct ocelot_ace_rule *rule);
++ struct ocelot_ace_rule *rule,
++ struct netlink_ext_ack *extack);
+ int ocelot_ace_rule_offload_del(struct ocelot *ocelot,
+ struct ocelot_ace_rule *rule);
+ int ocelot_ace_rule_stats_update(struct ocelot *ocelot,
+diff --git a/drivers/net/ethernet/mscc/ocelot_flower.c b/drivers/net/ethernet/mscc/ocelot_flower.c
+index 341923311fec..954cb67eeaa2 100644
+--- a/drivers/net/ethernet/mscc/ocelot_flower.c
++++ b/drivers/net/ethernet/mscc/ocelot_flower.c
+@@ -205,7 +205,7 @@ int ocelot_cls_flower_replace(struct ocelot *ocelot, int port,
+ return ret;
+ }
+
+- return ocelot_ace_rule_offload_add(ocelot, ace);
++ return ocelot_ace_rule_offload_add(ocelot, ace, f->common.extack);
+ }
+ EXPORT_SYMBOL_GPL(ocelot_cls_flower_replace);
+
+diff --git a/drivers/net/ethernet/nxp/lpc_eth.c b/drivers/net/ethernet/nxp/lpc_eth.c
+index d20cf03a3ea0..311454d9b0bc 100644
+--- a/drivers/net/ethernet/nxp/lpc_eth.c
++++ b/drivers/net/ethernet/nxp/lpc_eth.c
+@@ -823,7 +823,8 @@ static int lpc_mii_init(struct netdata_local *pldat)
+ if (err)
+ goto err_out_unregister_bus;
+
+- if (lpc_mii_probe(pldat->ndev) != 0)
++ err = lpc_mii_probe(pldat->ndev);
++ if (err)
+ goto err_out_unregister_bus;
+
+ return 0;
+diff --git a/drivers/net/ethernet/qlogic/qede/qede.h b/drivers/net/ethernet/qlogic/qede/qede.h
+index 234c6f30effb..234c7e35ee1e 100644
+--- a/drivers/net/ethernet/qlogic/qede/qede.h
++++ b/drivers/net/ethernet/qlogic/qede/qede.h
+@@ -574,12 +574,14 @@ int qede_add_tc_flower_fltr(struct qede_dev *edev, __be16 proto,
+ #define RX_RING_SIZE ((u16)BIT(RX_RING_SIZE_POW))
+ #define NUM_RX_BDS_MAX (RX_RING_SIZE - 1)
+ #define NUM_RX_BDS_MIN 128
++#define NUM_RX_BDS_KDUMP_MIN 63
+ #define NUM_RX_BDS_DEF ((u16)BIT(10) - 1)
+
+ #define TX_RING_SIZE_POW 13
+ #define TX_RING_SIZE ((u16)BIT(TX_RING_SIZE_POW))
+ #define NUM_TX_BDS_MAX (TX_RING_SIZE - 1)
+ #define NUM_TX_BDS_MIN 128
++#define NUM_TX_BDS_KDUMP_MIN 63
+ #define NUM_TX_BDS_DEF NUM_TX_BDS_MAX
+
+ #define QEDE_MIN_PKT_LEN 64
+diff --git a/drivers/net/ethernet/qlogic/qede/qede_main.c b/drivers/net/ethernet/qlogic/qede/qede_main.c
+index 34fa3917eb33..1a83d1fd8ccd 100644
+--- a/drivers/net/ethernet/qlogic/qede/qede_main.c
++++ b/drivers/net/ethernet/qlogic/qede/qede_main.c
+@@ -29,6 +29,7 @@
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ */
++#include <linux/crash_dump.h>
+ #include <linux/module.h>
+ #include <linux/pci.h>
+ #include <linux/version.h>
+@@ -707,8 +708,14 @@ static struct qede_dev *qede_alloc_etherdev(struct qed_dev *cdev,
+ edev->dp_module = dp_module;
+ edev->dp_level = dp_level;
+ edev->ops = qed_ops;
+- edev->q_num_rx_buffers = NUM_RX_BDS_DEF;
+- edev->q_num_tx_buffers = NUM_TX_BDS_DEF;
++
++ if (is_kdump_kernel()) {
++ edev->q_num_rx_buffers = NUM_RX_BDS_KDUMP_MIN;
++ edev->q_num_tx_buffers = NUM_TX_BDS_KDUMP_MIN;
++ } else {
++ edev->q_num_rx_buffers = NUM_RX_BDS_DEF;
++ edev->q_num_tx_buffers = NUM_TX_BDS_DEF;
++ }
+
+ DP_INFO(edev, "Allocated netdev with %d tx queues and %d rx queues\n",
+ info->num_queues, info->num_queues);
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-intel.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-intel.c
+index 2e4aaedb93f5..d163c4b43da0 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-intel.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-intel.c
+@@ -252,6 +252,7 @@ static void common_default_data(struct plat_stmmacenet_data *plat)
+ static int intel_mgbe_common_data(struct pci_dev *pdev,
+ struct plat_stmmacenet_data *plat)
+ {
++ int ret;
+ int i;
+
+ plat->clk_csr = 5;
+@@ -324,7 +325,12 @@ static int intel_mgbe_common_data(struct pci_dev *pdev,
+ dev_warn(&pdev->dev, "Fail to register stmmac-clk\n");
+ plat->stmmac_clk = NULL;
+ }
+- clk_prepare_enable(plat->stmmac_clk);
++
++ ret = clk_prepare_enable(plat->stmmac_clk);
++ if (ret) {
++ clk_unregister_fixed_rate(plat->stmmac_clk);
++ return ret;
++ }
+
+ /* Set default value for multicast hash bins */
+ plat->multicast_filter_bins = HASH_TABLE_SIZE;
+@@ -657,7 +663,13 @@ static int intel_eth_pci_probe(struct pci_dev *pdev,
+ res.wol_irq = pdev->irq;
+ res.irq = pdev->irq;
+
+- return stmmac_dvr_probe(&pdev->dev, plat, &res);
++ ret = stmmac_dvr_probe(&pdev->dev, plat, &res);
++ if (ret) {
++ clk_disable_unprepare(plat->stmmac_clk);
++ clk_unregister_fixed_rate(plat->stmmac_clk);
++ }
++
++ return ret;
+ }
+
+ /**
+@@ -675,8 +687,8 @@ static void intel_eth_pci_remove(struct pci_dev *pdev)
+
+ stmmac_dvr_remove(&pdev->dev);
+
+- if (priv->plat->stmmac_clk)
+- clk_unregister_fixed_rate(priv->plat->stmmac_clk);
++ clk_disable_unprepare(priv->plat->stmmac_clk);
++ clk_unregister_fixed_rate(priv->plat->stmmac_clk);
+
+ for (i = 0; i < PCI_STD_NUM_BARS; i++) {
+ if (pci_resource_len(pdev, i) == 0)
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_pci.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_pci.c
+index 3fb21f7ac9fb..272cb47af9f2 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_pci.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_pci.c
+@@ -217,15 +217,10 @@ static int stmmac_pci_probe(struct pci_dev *pdev,
+ */
+ static void stmmac_pci_remove(struct pci_dev *pdev)
+ {
+- struct net_device *ndev = dev_get_drvdata(&pdev->dev);
+- struct stmmac_priv *priv = netdev_priv(ndev);
+ int i;
+
+ stmmac_dvr_remove(&pdev->dev);
+
+- if (priv->plat->stmmac_clk)
+- clk_unregister_fixed_rate(priv->plat->stmmac_clk);
+-
+ for (i = 0; i < PCI_STD_NUM_BARS; i++) {
+ if (pci_resource_len(pdev, i) == 0)
+ continue;
+diff --git a/drivers/net/ethernet/ti/davinci_mdio.c b/drivers/net/ethernet/ti/davinci_mdio.c
+index 38b7f6d35759..702fdc393da0 100644
+--- a/drivers/net/ethernet/ti/davinci_mdio.c
++++ b/drivers/net/ethernet/ti/davinci_mdio.c
+@@ -397,6 +397,8 @@ static int davinci_mdio_probe(struct platform_device *pdev)
+ data->dev = dev;
+
+ res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
++ if (!res)
++ return -EINVAL;
+ data->regs = devm_ioremap(dev, res->start, resource_size(res));
+ if (!data->regs)
+ return -ENOMEM;
+diff --git a/drivers/net/ethernet/ti/k3-cppi-desc-pool.c b/drivers/net/ethernet/ti/k3-cppi-desc-pool.c
+index ad7cfc1316ce..38cc12f9f133 100644
+--- a/drivers/net/ethernet/ti/k3-cppi-desc-pool.c
++++ b/drivers/net/ethernet/ti/k3-cppi-desc-pool.c
+@@ -64,8 +64,8 @@ k3_cppi_desc_pool_create_name(struct device *dev, size_t size,
+ return ERR_PTR(-ENOMEM);
+
+ pool->gen_pool = gen_pool_create(ilog2(pool->desc_size), -1);
+- if (IS_ERR(pool->gen_pool)) {
+- ret = PTR_ERR(pool->gen_pool);
++ if (!pool->gen_pool) {
++ ret = -ENOMEM;
+ dev_err(pool->dev, "pool create failed %d\n", ret);
+ kfree_const(pool_name);
+ goto gen_pool_create_fail;
+diff --git a/drivers/net/ipa/gsi.c b/drivers/net/ipa/gsi.c
+index 8d9ca1c335e8..043a675e1be1 100644
+--- a/drivers/net/ipa/gsi.c
++++ b/drivers/net/ipa/gsi.c
+@@ -238,11 +238,6 @@ static void gsi_irq_ieob_enable(struct gsi *gsi, u32 evt_ring_id)
+ iowrite32(val, gsi->virt + GSI_CNTXT_SRC_IEOB_IRQ_MSK_OFFSET);
+ }
+
+-static void gsi_isr_ieob_clear(struct gsi *gsi, u32 mask)
+-{
+- iowrite32(mask, gsi->virt + GSI_CNTXT_SRC_IEOB_IRQ_CLR_OFFSET);
+-}
+-
+ static void gsi_irq_ieob_disable(struct gsi *gsi, u32 evt_ring_id)
+ {
+ u32 val;
+@@ -756,7 +751,6 @@ static void gsi_channel_deprogram(struct gsi_channel *channel)
+ int gsi_channel_start(struct gsi *gsi, u32 channel_id)
+ {
+ struct gsi_channel *channel = &gsi->channel[channel_id];
+- u32 evt_ring_id = channel->evt_ring_id;
+ int ret;
+
+ mutex_lock(&gsi->mutex);
+@@ -765,9 +759,6 @@ int gsi_channel_start(struct gsi *gsi, u32 channel_id)
+
+ mutex_unlock(&gsi->mutex);
+
+- /* Clear the channel's event ring interrupt in case it's pending */
+- gsi_isr_ieob_clear(gsi, BIT(evt_ring_id));
+-
+ gsi_channel_thaw(channel);
+
+ return ret;
+@@ -1071,7 +1062,7 @@ static void gsi_isr_ieob(struct gsi *gsi)
+ u32 event_mask;
+
+ event_mask = ioread32(gsi->virt + GSI_CNTXT_SRC_IEOB_IRQ_OFFSET);
+- gsi_isr_ieob_clear(gsi, event_mask);
++ iowrite32(event_mask, gsi->virt + GSI_CNTXT_SRC_IEOB_IRQ_CLR_OFFSET);
+
+ while (event_mask) {
+ u32 evt_ring_id = __ffs(event_mask);
+diff --git a/drivers/net/macvlan.c b/drivers/net/macvlan.c
+index 0482adc9916b..e900ebb94499 100644
+--- a/drivers/net/macvlan.c
++++ b/drivers/net/macvlan.c
+@@ -447,6 +447,10 @@ static rx_handler_result_t macvlan_handle_frame(struct sk_buff **pskb)
+ int ret;
+ rx_handler_result_t handle_res;
+
++ /* Packets from dev_loopback_xmit() do not have L2 header, bail out */
++ if (unlikely(skb->pkt_type == PACKET_LOOPBACK))
++ return RX_HANDLER_PASS;
++
+ port = macvlan_port_get_rcu(skb->dev);
+ if (is_multicast_ether_addr(eth->h_dest)) {
+ unsigned int hash;
+diff --git a/drivers/net/veth.c b/drivers/net/veth.c
+index aece0e5eec8c..d5691bb84448 100644
+--- a/drivers/net/veth.c
++++ b/drivers/net/veth.c
+@@ -564,13 +564,15 @@ static struct sk_buff *veth_xdp_rcv_one(struct veth_rq *rq,
+ struct veth_stats *stats)
+ {
+ void *hard_start = frame->data - frame->headroom;
+- void *head = hard_start - sizeof(struct xdp_frame);
+ int len = frame->len, delta = 0;
+ struct xdp_frame orig_frame;
+ struct bpf_prog *xdp_prog;
+ unsigned int headroom;
+ struct sk_buff *skb;
+
++ /* bpf_xdp_adjust_head() assures BPF cannot access xdp_frame area */
++ hard_start -= sizeof(struct xdp_frame);
++
+ rcu_read_lock();
+ xdp_prog = rcu_dereference(rq->xdp_prog);
+ if (likely(xdp_prog)) {
+@@ -592,7 +594,6 @@ static struct sk_buff *veth_xdp_rcv_one(struct veth_rq *rq,
+ break;
+ case XDP_TX:
+ orig_frame = *frame;
+- xdp.data_hard_start = head;
+ xdp.rxq->mem = frame->mem;
+ if (unlikely(veth_xdp_tx(rq, &xdp, bq) < 0)) {
+ trace_xdp_exception(rq->dev, xdp_prog, act);
+@@ -605,7 +606,6 @@ static struct sk_buff *veth_xdp_rcv_one(struct veth_rq *rq,
+ goto xdp_xmit;
+ case XDP_REDIRECT:
+ orig_frame = *frame;
+- xdp.data_hard_start = head;
+ xdp.rxq->mem = frame->mem;
+ if (xdp_do_redirect(rq->dev, &xdp, xdp_prog)) {
+ frame = &orig_frame;
+@@ -629,7 +629,7 @@ static struct sk_buff *veth_xdp_rcv_one(struct veth_rq *rq,
+ rcu_read_unlock();
+
+ headroom = sizeof(struct xdp_frame) + frame->headroom - delta;
+- skb = veth_build_skb(head, headroom, len, 0);
++ skb = veth_build_skb(hard_start, headroom, len, 0);
+ if (!skb) {
+ xdp_return_frame(frame);
+ stats->rx_drops++;
+diff --git a/drivers/net/vmxnet3/vmxnet3_ethtool.c b/drivers/net/vmxnet3/vmxnet3_ethtool.c
+index 6528940ce5f3..b53bb8bcd47f 100644
+--- a/drivers/net/vmxnet3/vmxnet3_ethtool.c
++++ b/drivers/net/vmxnet3/vmxnet3_ethtool.c
+@@ -700,6 +700,8 @@ vmxnet3_get_rss(struct net_device *netdev, u32 *p, u8 *key, u8 *hfunc)
+ *hfunc = ETH_RSS_HASH_TOP;
+ if (!p)
+ return 0;
++ if (n > UPT1_RSS_MAX_IND_TABLE_SIZE)
++ return 0;
+ while (n--)
+ p[n] = rssConf->indTable[n];
+ return 0;
+diff --git a/drivers/net/wireless/ath/ath10k/bmi.c b/drivers/net/wireless/ath/ath10k/bmi.c
+index ea908107581d..5b6db6e66f65 100644
+--- a/drivers/net/wireless/ath/ath10k/bmi.c
++++ b/drivers/net/wireless/ath/ath10k/bmi.c
+@@ -380,6 +380,7 @@ static int ath10k_bmi_lz_data_large(struct ath10k *ar, const void *buffer, u32 l
+ NULL, NULL);
+ if (ret) {
+ ath10k_warn(ar, "unable to write to the device\n");
++ kfree(cmd);
+ return ret;
+ }
+
+diff --git a/drivers/net/wireless/ath/ath10k/htt.h b/drivers/net/wireless/ath/ath10k/htt.h
+index 4a12564fc30e..c5ac5b277017 100644
+--- a/drivers/net/wireless/ath/ath10k/htt.h
++++ b/drivers/net/wireless/ath/ath10k/htt.h
+@@ -2035,6 +2035,7 @@ struct ath10k_htt_tx_ops {
+ int (*htt_h2t_aggr_cfg_msg)(struct ath10k_htt *htt,
+ u8 max_subfrms_ampdu,
+ u8 max_subfrms_amsdu);
++ void (*htt_flush_tx)(struct ath10k_htt *htt);
+ };
+
+ static inline int ath10k_htt_send_rx_ring_cfg(struct ath10k_htt *htt)
+@@ -2074,6 +2075,12 @@ static inline int ath10k_htt_tx(struct ath10k_htt *htt,
+ return htt->tx_ops->htt_tx(htt, txmode, msdu);
+ }
+
++static inline void ath10k_htt_flush_tx(struct ath10k_htt *htt)
++{
++ if (htt->tx_ops->htt_flush_tx)
++ htt->tx_ops->htt_flush_tx(htt);
++}
++
+ static inline int ath10k_htt_alloc_txbuff(struct ath10k_htt *htt)
+ {
+ if (!htt->tx_ops->htt_alloc_txbuff)
+diff --git a/drivers/net/wireless/ath/ath10k/htt_tx.c b/drivers/net/wireless/ath/ath10k/htt_tx.c
+index e9d12ea708b6..517ee2af2231 100644
+--- a/drivers/net/wireless/ath/ath10k/htt_tx.c
++++ b/drivers/net/wireless/ath/ath10k/htt_tx.c
+@@ -529,9 +529,14 @@ void ath10k_htt_tx_destroy(struct ath10k_htt *htt)
+ htt->tx_mem_allocated = false;
+ }
+
+-void ath10k_htt_tx_stop(struct ath10k_htt *htt)
++static void ath10k_htt_flush_tx_queue(struct ath10k_htt *htt)
+ {
+ idr_for_each(&htt->pending_tx, ath10k_htt_tx_clean_up_pending, htt->ar);
++}
++
++void ath10k_htt_tx_stop(struct ath10k_htt *htt)
++{
++ ath10k_htt_flush_tx_queue(htt);
+ idr_destroy(&htt->pending_tx);
+ }
+
+@@ -1784,6 +1789,7 @@ static const struct ath10k_htt_tx_ops htt_tx_ops_hl = {
+ .htt_send_frag_desc_bank_cfg = ath10k_htt_send_frag_desc_bank_cfg_32,
+ .htt_tx = ath10k_htt_tx_hl,
+ .htt_h2t_aggr_cfg_msg = ath10k_htt_h2t_aggr_cfg_msg_32,
++ .htt_flush_tx = ath10k_htt_flush_tx_queue,
+ };
+
+ void ath10k_htt_set_tx_ops(struct ath10k_htt *htt)
+diff --git a/drivers/net/wireless/ath/ath10k/mac.c b/drivers/net/wireless/ath/ath10k/mac.c
+index 2d03b8dd3b8c..7b60d8d6bfa9 100644
+--- a/drivers/net/wireless/ath/ath10k/mac.c
++++ b/drivers/net/wireless/ath/ath10k/mac.c
+@@ -3921,6 +3921,9 @@ void ath10k_mgmt_over_wmi_tx_work(struct work_struct *work)
+ if (ret) {
+ ath10k_warn(ar, "failed to transmit management frame by ref via WMI: %d\n",
+ ret);
++ /* remove this msdu from idr tracking */
++ ath10k_wmi_cleanup_mgmt_tx_send(ar, skb);
++
+ dma_unmap_single(ar->dev, paddr, skb->len,
+ DMA_TO_DEVICE);
+ ieee80211_free_txskb(ar->hw, skb);
+@@ -7190,6 +7193,7 @@ static void ath10k_flush(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
+ ath10k_wmi_peer_flush(ar, arvif->vdev_id,
+ arvif->bssid, bitmap);
+ }
++ ath10k_htt_flush_tx(&ar->htt);
+ }
+ return;
+ }
+@@ -8919,7 +8923,6 @@ int ath10k_mac_register(struct ath10k *ar)
+ ar->hw->wiphy->max_scan_ie_len = WLAN_SCAN_PARAMS_MAX_IE_LEN;
+
+ if (test_bit(WMI_SERVICE_NLO, ar->wmi.svc_map)) {
+- ar->hw->wiphy->max_sched_scan_reqs = 1;
+ ar->hw->wiphy->max_sched_scan_ssids = WMI_PNO_MAX_SUPP_NETWORKS;
+ ar->hw->wiphy->max_match_sets = WMI_PNO_MAX_SUPP_NETWORKS;
+ ar->hw->wiphy->max_sched_scan_ie_len = WMI_PNO_MAX_IE_LENGTH;
+diff --git a/drivers/net/wireless/ath/ath10k/pci.c b/drivers/net/wireless/ath/ath10k/pci.c
+index ded7a220a4aa..cd1c5d60261f 100644
+--- a/drivers/net/wireless/ath/ath10k/pci.c
++++ b/drivers/net/wireless/ath/ath10k/pci.c
+@@ -2074,6 +2074,7 @@ static void ath10k_pci_hif_stop(struct ath10k *ar)
+ ath10k_pci_irq_sync(ar);
+ napi_synchronize(&ar->napi);
+ napi_disable(&ar->napi);
++ cancel_work_sync(&ar_pci->dump_work);
+
+ /* Most likely the device has HTT Rx ring configured. The only way to
+ * prevent the device from accessing (and possible corrupting) host
+diff --git a/drivers/net/wireless/ath/ath10k/qmi.c b/drivers/net/wireless/ath/ath10k/qmi.c
+index 85dce43c5439..7abdef8d6b9b 100644
+--- a/drivers/net/wireless/ath/ath10k/qmi.c
++++ b/drivers/net/wireless/ath/ath10k/qmi.c
+@@ -961,7 +961,16 @@ static void ath10k_qmi_del_server(struct qmi_handle *qmi_hdl,
+ container_of(qmi_hdl, struct ath10k_qmi, qmi_hdl);
+
+ qmi->fw_ready = false;
+- ath10k_qmi_driver_event_post(qmi, ATH10K_QMI_EVENT_SERVER_EXIT, NULL);
++
++ /*
++ * The del_server event is to be processed only if coming from
++ * the qmi server. The qmi infrastructure sends del_server, when
++ * any client releases the qmi handle. In this case do not process
++ * this del_server event.
++ */
++ if (qmi->state == ATH10K_QMI_STATE_INIT_DONE)
++ ath10k_qmi_driver_event_post(qmi, ATH10K_QMI_EVENT_SERVER_EXIT,
++ NULL);
+ }
+
+ static struct qmi_ops ath10k_qmi_ops = {
+@@ -1091,6 +1100,7 @@ int ath10k_qmi_init(struct ath10k *ar, u32 msa_size)
+ if (ret)
+ goto err_qmi_lookup;
+
++ qmi->state = ATH10K_QMI_STATE_INIT_DONE;
+ return 0;
+
+ err_qmi_lookup:
+@@ -1109,6 +1119,7 @@ int ath10k_qmi_deinit(struct ath10k *ar)
+ struct ath10k_snoc *ar_snoc = ath10k_snoc_priv(ar);
+ struct ath10k_qmi *qmi = ar_snoc->qmi;
+
++ qmi->state = ATH10K_QMI_STATE_DEINIT;
+ qmi_handle_release(&qmi->qmi_hdl);
+ cancel_work_sync(&qmi->event_work);
+ destroy_workqueue(qmi->event_wq);
+diff --git a/drivers/net/wireless/ath/ath10k/qmi.h b/drivers/net/wireless/ath/ath10k/qmi.h
+index dc257375f161..b59720524224 100644
+--- a/drivers/net/wireless/ath/ath10k/qmi.h
++++ b/drivers/net/wireless/ath/ath10k/qmi.h
+@@ -83,6 +83,11 @@ struct ath10k_qmi_driver_event {
+ void *data;
+ };
+
++enum ath10k_qmi_state {
++ ATH10K_QMI_STATE_INIT_DONE,
++ ATH10K_QMI_STATE_DEINIT,
++};
++
+ struct ath10k_qmi {
+ struct ath10k *ar;
+ struct qmi_handle qmi_hdl;
+@@ -105,6 +110,7 @@ struct ath10k_qmi {
+ char fw_build_timestamp[MAX_TIMESTAMP_LEN + 1];
+ struct ath10k_qmi_cal_data cal_data[MAX_NUM_CAL_V01];
+ bool msa_fixed_perm;
++ enum ath10k_qmi_state state;
+ };
+
+ int ath10k_qmi_wlan_enable(struct ath10k *ar,
+diff --git a/drivers/net/wireless/ath/ath10k/txrx.c b/drivers/net/wireless/ath/ath10k/txrx.c
+index 39abf8b12903..f46b9083bbf1 100644
+--- a/drivers/net/wireless/ath/ath10k/txrx.c
++++ b/drivers/net/wireless/ath/ath10k/txrx.c
+@@ -84,9 +84,11 @@ int ath10k_txrx_tx_unref(struct ath10k_htt *htt,
+ wake_up(&htt->empty_tx_wq);
+ spin_unlock_bh(&htt->tx_lock);
+
++ rcu_read_lock();
+ if (txq && txq->sta && skb_cb->airtime_est)
+ ieee80211_sta_register_airtime(txq->sta, txq->tid,
+ skb_cb->airtime_est, 0);
++ rcu_read_unlock();
+
+ if (ar->bus_param.dev_type != ATH10K_DEV_TYPE_HL)
+ dma_unmap_single(dev, skb_cb->paddr, msdu->len, DMA_TO_DEVICE);
+diff --git a/drivers/net/wireless/ath/ath10k/wmi-ops.h b/drivers/net/wireless/ath/ath10k/wmi-ops.h
+index 1491c25518bb..edccabc667e8 100644
+--- a/drivers/net/wireless/ath/ath10k/wmi-ops.h
++++ b/drivers/net/wireless/ath/ath10k/wmi-ops.h
+@@ -133,6 +133,7 @@ struct wmi_ops {
+ struct sk_buff *(*gen_mgmt_tx_send)(struct ath10k *ar,
+ struct sk_buff *skb,
+ dma_addr_t paddr);
++ int (*cleanup_mgmt_tx_send)(struct ath10k *ar, struct sk_buff *msdu);
+ struct sk_buff *(*gen_dbglog_cfg)(struct ath10k *ar, u64 module_enable,
+ u32 log_level);
+ struct sk_buff *(*gen_pktlog_enable)(struct ath10k *ar, u32 filter);
+@@ -441,6 +442,15 @@ ath10k_wmi_get_txbf_conf_scheme(struct ath10k *ar)
+ return ar->wmi.ops->get_txbf_conf_scheme(ar);
+ }
+
++static inline int
++ath10k_wmi_cleanup_mgmt_tx_send(struct ath10k *ar, struct sk_buff *msdu)
++{
++ if (!ar->wmi.ops->cleanup_mgmt_tx_send)
++ return -EOPNOTSUPP;
++
++ return ar->wmi.ops->cleanup_mgmt_tx_send(ar, msdu);
++}
++
+ static inline int
+ ath10k_wmi_mgmt_tx_send(struct ath10k *ar, struct sk_buff *msdu,
+ dma_addr_t paddr)
+diff --git a/drivers/net/wireless/ath/ath10k/wmi-tlv.c b/drivers/net/wireless/ath/ath10k/wmi-tlv.c
+index 4e68debda9bf..4a3e169965ae 100644
+--- a/drivers/net/wireless/ath/ath10k/wmi-tlv.c
++++ b/drivers/net/wireless/ath/ath10k/wmi-tlv.c
+@@ -2897,6 +2897,18 @@ ath10k_wmi_tlv_op_gen_request_stats(struct ath10k *ar, u32 stats_mask)
+ return skb;
+ }
+
++static int
++ath10k_wmi_tlv_op_cleanup_mgmt_tx_send(struct ath10k *ar,
++ struct sk_buff *msdu)
++{
++ struct ath10k_skb_cb *cb = ATH10K_SKB_CB(msdu);
++ struct ath10k_wmi *wmi = &ar->wmi;
++
++ idr_remove(&wmi->mgmt_pending_tx, cb->msdu_id);
++
++ return 0;
++}
++
+ static int
+ ath10k_wmi_mgmt_tx_alloc_msdu_id(struct ath10k *ar, struct sk_buff *skb,
+ dma_addr_t paddr)
+@@ -2971,6 +2983,8 @@ ath10k_wmi_tlv_op_gen_mgmt_tx_send(struct ath10k *ar, struct sk_buff *msdu,
+ if (desc_id < 0)
+ goto err_free_skb;
+
++ cb->msdu_id = desc_id;
++
+ ptr = (void *)skb->data;
+ tlv = ptr;
+ tlv->tag = __cpu_to_le16(WMI_TLV_TAG_STRUCT_MGMT_TX_CMD);
+@@ -4419,6 +4433,7 @@ static const struct wmi_ops wmi_tlv_ops = {
+ .gen_force_fw_hang = ath10k_wmi_tlv_op_gen_force_fw_hang,
+ /* .gen_mgmt_tx = not implemented; HTT is used */
+ .gen_mgmt_tx_send = ath10k_wmi_tlv_op_gen_mgmt_tx_send,
++ .cleanup_mgmt_tx_send = ath10k_wmi_tlv_op_cleanup_mgmt_tx_send,
+ .gen_dbglog_cfg = ath10k_wmi_tlv_op_gen_dbglog_cfg,
+ .gen_pktlog_enable = ath10k_wmi_tlv_op_gen_pktlog_enable,
+ .gen_pktlog_disable = ath10k_wmi_tlv_op_gen_pktlog_disable,
+diff --git a/drivers/net/wireless/ath/ath11k/dp.c b/drivers/net/wireless/ath/ath11k/dp.c
+index 50350f77b309..2f35d325f7a5 100644
+--- a/drivers/net/wireless/ath/ath11k/dp.c
++++ b/drivers/net/wireless/ath/ath11k/dp.c
+@@ -909,8 +909,10 @@ int ath11k_dp_alloc(struct ath11k_base *ab)
+ dp->tx_ring[i].tx_status_head = 0;
+ dp->tx_ring[i].tx_status_tail = DP_TX_COMP_RING_SIZE - 1;
+ dp->tx_ring[i].tx_status = kmalloc(size, GFP_KERNEL);
+- if (!dp->tx_ring[i].tx_status)
++ if (!dp->tx_ring[i].tx_status) {
++ ret = -ENOMEM;
+ goto fail_cmn_srng_cleanup;
++ }
+ }
+
+ for (i = 0; i < HAL_DSCP_TID_MAP_TBL_NUM_ENTRIES_MAX; i++)
+diff --git a/drivers/net/wireless/ath/ath11k/dp_rx.c b/drivers/net/wireless/ath/ath11k/dp_rx.c
+index f74a0e74bf3e..007bb73d6c61 100644
+--- a/drivers/net/wireless/ath/ath11k/dp_rx.c
++++ b/drivers/net/wireless/ath/ath11k/dp_rx.c
+@@ -892,7 +892,7 @@ int ath11k_peer_rx_tid_setup(struct ath11k *ar, const u8 *peer_mac, int vdev_id,
+ else
+ hw_desc_sz = ath11k_hal_reo_qdesc_size(DP_BA_WIN_SZ_MAX, tid);
+
+- vaddr = kzalloc(hw_desc_sz + HAL_LINK_DESC_ALIGN - 1, GFP_KERNEL);
++ vaddr = kzalloc(hw_desc_sz + HAL_LINK_DESC_ALIGN - 1, GFP_ATOMIC);
+ if (!vaddr) {
+ spin_unlock_bh(&ab->base_lock);
+ return -ENOMEM;
+@@ -2265,6 +2265,7 @@ static int ath11k_dp_rx_process_msdu(struct ath11k *ar,
+ struct ieee80211_hdr *hdr;
+ struct sk_buff *last_buf;
+ u8 l3_pad_bytes;
++ u8 *hdr_status;
+ u16 msdu_len;
+ int ret;
+
+@@ -2293,8 +2294,13 @@ static int ath11k_dp_rx_process_msdu(struct ath11k *ar,
+ skb_pull(msdu, HAL_RX_DESC_SIZE);
+ } else if (!rxcb->is_continuation) {
+ if ((msdu_len + HAL_RX_DESC_SIZE) > DP_RX_BUFFER_SIZE) {
++ hdr_status = ath11k_dp_rx_h_80211_hdr(rx_desc);
+ ret = -EINVAL;
+ ath11k_warn(ar->ab, "invalid msdu len %u\n", msdu_len);
++ ath11k_dbg_dump(ar->ab, ATH11K_DBG_DATA, NULL, "", hdr_status,
++ sizeof(struct ieee80211_hdr));
++ ath11k_dbg_dump(ar->ab, ATH11K_DBG_DATA, NULL, "", rx_desc,
++ sizeof(struct hal_rx_desc));
+ goto free_out;
+ }
+ skb_put(msdu, HAL_RX_DESC_SIZE + l3_pad_bytes + msdu_len);
+@@ -3389,6 +3395,7 @@ ath11k_dp_process_rx_err_buf(struct ath11k *ar, u32 *ring_desc, int buf_id, bool
+ struct sk_buff *msdu;
+ struct ath11k_skb_rxcb *rxcb;
+ struct hal_rx_desc *rx_desc;
++ u8 *hdr_status;
+ u16 msdu_len;
+
+ spin_lock_bh(&rx_ring->idr_lock);
+@@ -3426,6 +3433,17 @@ ath11k_dp_process_rx_err_buf(struct ath11k *ar, u32 *ring_desc, int buf_id, bool
+
+ rx_desc = (struct hal_rx_desc *)msdu->data;
+ msdu_len = ath11k_dp_rx_h_msdu_start_msdu_len(rx_desc);
++ if ((msdu_len + HAL_RX_DESC_SIZE) > DP_RX_BUFFER_SIZE) {
++ hdr_status = ath11k_dp_rx_h_80211_hdr(rx_desc);
++ ath11k_warn(ar->ab, "invalid msdu leng %u", msdu_len);
++ ath11k_dbg_dump(ar->ab, ATH11K_DBG_DATA, NULL, "", hdr_status,
++ sizeof(struct ieee80211_hdr));
++ ath11k_dbg_dump(ar->ab, ATH11K_DBG_DATA, NULL, "", rx_desc,
++ sizeof(struct hal_rx_desc));
++ dev_kfree_skb_any(msdu);
++ goto exit;
++ }
++
+ skb_put(msdu, HAL_RX_DESC_SIZE + msdu_len);
+
+ if (ath11k_dp_rx_frag_h_mpdu(ar, msdu, ring_desc)) {
+diff --git a/drivers/net/wireless/ath/ath11k/thermal.c b/drivers/net/wireless/ath/ath11k/thermal.c
+index 259dddbda2c7..5a7e150c621b 100644
+--- a/drivers/net/wireless/ath/ath11k/thermal.c
++++ b/drivers/net/wireless/ath/ath11k/thermal.c
+@@ -174,9 +174,12 @@ int ath11k_thermal_register(struct ath11k_base *sc)
+ if (IS_ERR(cdev)) {
+ ath11k_err(sc, "failed to setup thermal device result: %ld\n",
+ PTR_ERR(cdev));
+- return -EINVAL;
++ ret = -EINVAL;
++ goto err_thermal_destroy;
+ }
+
++ ar->thermal.cdev = cdev;
++
+ ret = sysfs_create_link(&ar->hw->wiphy->dev.kobj, &cdev->device.kobj,
+ "cooling_device");
+ if (ret) {
+@@ -184,7 +187,6 @@ int ath11k_thermal_register(struct ath11k_base *sc)
+ goto err_thermal_destroy;
+ }
+
+- ar->thermal.cdev = cdev;
+ if (!IS_REACHABLE(CONFIG_HWMON))
+ return 0;
+
+diff --git a/drivers/net/wireless/ath/ath11k/wmi.c b/drivers/net/wireless/ath/ath11k/wmi.c
+index e7ce36966d6a..73beca6d6b5f 100644
+--- a/drivers/net/wireless/ath/ath11k/wmi.c
++++ b/drivers/net/wireless/ath/ath11k/wmi.c
+@@ -2779,7 +2779,7 @@ int ath11k_wmi_send_bss_color_change_enable_cmd(struct ath11k *ar, u32 vdev_id,
+ ret = ath11k_wmi_cmd_send(wmi, skb,
+ WMI_BSS_COLOR_CHANGE_ENABLE_CMDID);
+ if (ret) {
+- ath11k_warn(ab, "Failed to send WMI_TWT_DIeABLE_CMDID");
++ ath11k_warn(ab, "Failed to send WMI_BSS_COLOR_CHANGE_ENABLE_CMDID");
+ dev_kfree_skb(skb);
+ }
+ return ret;
+@@ -3740,8 +3740,9 @@ static int wmi_process_mgmt_tx_comp(struct ath11k *ar, u32 desc_id,
+
+ ieee80211_tx_status_irqsafe(ar->hw, msdu);
+
+- WARN_ON_ONCE(atomic_read(&ar->num_pending_mgmt_tx) == 0);
+- atomic_dec(&ar->num_pending_mgmt_tx);
++ /* WARN when we received this event without doing any mgmt tx */
++ if (atomic_dec_if_positive(&ar->num_pending_mgmt_tx) < 0)
++ WARN_ON_ONCE(1);
+
+ return 0;
+ }
+diff --git a/drivers/net/wireless/ath/carl9170/fw.c b/drivers/net/wireless/ath/carl9170/fw.c
+index 51934d191f33..1ab09e1c9ec5 100644
+--- a/drivers/net/wireless/ath/carl9170/fw.c
++++ b/drivers/net/wireless/ath/carl9170/fw.c
+@@ -338,9 +338,7 @@ static int carl9170_fw(struct ar9170 *ar, const __u8 *data, size_t len)
+ ar->hw->wiphy->interface_modes |= BIT(NL80211_IFTYPE_ADHOC);
+
+ if (SUPP(CARL9170FW_WLANTX_CAB)) {
+- if_comb_types |=
+- BIT(NL80211_IFTYPE_AP) |
+- BIT(NL80211_IFTYPE_P2P_GO);
++ if_comb_types |= BIT(NL80211_IFTYPE_AP);
+
+ #ifdef CONFIG_MAC80211_MESH
+ if_comb_types |=
+diff --git a/drivers/net/wireless/ath/carl9170/main.c b/drivers/net/wireless/ath/carl9170/main.c
+index 5914926a5c5b..816929fb5b14 100644
+--- a/drivers/net/wireless/ath/carl9170/main.c
++++ b/drivers/net/wireless/ath/carl9170/main.c
+@@ -582,11 +582,10 @@ static int carl9170_init_interface(struct ar9170 *ar,
+ ar->disable_offload |= ((vif->type != NL80211_IFTYPE_STATION) &&
+ (vif->type != NL80211_IFTYPE_AP));
+
+- /* While the driver supports HW offload in a single
+- * P2P client configuration, it doesn't support HW
+- * offload in the favourit, concurrent P2P GO+CLIENT
+- * configuration. Hence, HW offload will always be
+- * disabled for P2P.
++ /* The driver used to have P2P GO+CLIENT support,
++ * but since this was dropped and we don't know if
++ * there are any gremlins lurking in the shadows,
++ * so best we keep HW offload disabled for P2P.
+ */
+ ar->disable_offload |= vif->p2p;
+
+@@ -639,18 +638,6 @@ static int carl9170_op_add_interface(struct ieee80211_hw *hw,
+ if (vif->type == NL80211_IFTYPE_STATION)
+ break;
+
+- /* P2P GO [master] use-case
+- * Because the P2P GO station is selected dynamically
+- * by all participating peers of a WIFI Direct network,
+- * the driver has be able to change the main interface
+- * operating mode on the fly.
+- */
+- if (main_vif->p2p && vif->p2p &&
+- vif->type == NL80211_IFTYPE_AP) {
+- old_main = main_vif;
+- break;
+- }
+-
+ err = -EBUSY;
+ rcu_read_unlock();
+
+diff --git a/drivers/net/wireless/ath/wcn36xx/main.c b/drivers/net/wireless/ath/wcn36xx/main.c
+index e49c306e0eef..702b689c06df 100644
+--- a/drivers/net/wireless/ath/wcn36xx/main.c
++++ b/drivers/net/wireless/ath/wcn36xx/main.c
+@@ -1339,7 +1339,7 @@ static int wcn36xx_probe(struct platform_device *pdev)
+ if (addr && ret != ETH_ALEN) {
+ wcn36xx_err("invalid local-mac-address\n");
+ ret = -EINVAL;
+- goto out_wq;
++ goto out_destroy_ept;
+ } else if (addr) {
+ wcn36xx_info("mac address: %pM\n", addr);
+ SET_IEEE80211_PERM_ADDR(wcn->hw, addr);
+@@ -1347,7 +1347,7 @@ static int wcn36xx_probe(struct platform_device *pdev)
+
+ ret = wcn36xx_platform_get_resources(wcn, pdev);
+ if (ret)
+- goto out_wq;
++ goto out_destroy_ept;
+
+ wcn36xx_init_ieee80211(wcn);
+ ret = ieee80211_register_hw(wcn->hw);
+@@ -1359,6 +1359,8 @@ static int wcn36xx_probe(struct platform_device *pdev)
+ out_unmap:
+ iounmap(wcn->ccu_base);
+ iounmap(wcn->dxe_base);
++out_destroy_ept:
++ rpmsg_destroy_ept(wcn->smd_channel);
+ out_wq:
+ ieee80211_free_hw(hw);
+ out_err:
+diff --git a/drivers/net/wireless/broadcom/b43/main.c b/drivers/net/wireless/broadcom/b43/main.c
+index 39da1a4c30ac..3ad94dad2d89 100644
+--- a/drivers/net/wireless/broadcom/b43/main.c
++++ b/drivers/net/wireless/broadcom/b43/main.c
+@@ -5569,7 +5569,7 @@ static struct b43_wl *b43_wireless_init(struct b43_bus_dev *dev)
+ /* fill hw info */
+ ieee80211_hw_set(hw, RX_INCLUDES_FCS);
+ ieee80211_hw_set(hw, SIGNAL_DBM);
+-
++ ieee80211_hw_set(hw, MFP_CAPABLE);
+ hw->wiphy->interface_modes =
+ BIT(NL80211_IFTYPE_AP) |
+ BIT(NL80211_IFTYPE_MESH_POINT) |
+diff --git a/drivers/net/wireless/broadcom/b43legacy/main.c b/drivers/net/wireless/broadcom/b43legacy/main.c
+index 8b6b657c4b85..5208a39fd6f7 100644
+--- a/drivers/net/wireless/broadcom/b43legacy/main.c
++++ b/drivers/net/wireless/broadcom/b43legacy/main.c
+@@ -3801,6 +3801,7 @@ static int b43legacy_wireless_init(struct ssb_device *dev)
+ /* fill hw info */
+ ieee80211_hw_set(hw, RX_INCLUDES_FCS);
+ ieee80211_hw_set(hw, SIGNAL_DBM);
++ ieee80211_hw_set(hw, MFP_CAPABLE); /* Allow WPA3 in software */
+
+ hw->wiphy->interface_modes =
+ BIT(NL80211_IFTYPE_AP) |
+diff --git a/drivers/net/wireless/broadcom/b43legacy/xmit.c b/drivers/net/wireless/broadcom/b43legacy/xmit.c
+index e9b23c2e5bd4..efd63f4ce74f 100644
+--- a/drivers/net/wireless/broadcom/b43legacy/xmit.c
++++ b/drivers/net/wireless/broadcom/b43legacy/xmit.c
+@@ -558,6 +558,7 @@ void b43legacy_rx(struct b43legacy_wldev *dev,
+ default:
+ b43legacywarn(dev->wl, "Unexpected value for chanstat (0x%X)\n",
+ chanstat);
++ goto drop;
+ }
+
+ memcpy(IEEE80211_SKB_RXCB(skb), &status, sizeof(status));
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c
+index 2ba165330038..bacd762cdf3e 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c
+@@ -1819,6 +1819,10 @@ brcmf_set_key_mgmt(struct net_device *ndev, struct cfg80211_connect_params *sme)
+ switch (sme->crypto.akm_suites[0]) {
+ case WLAN_AKM_SUITE_SAE:
+ val = WPA3_AUTH_SAE_PSK;
++ if (sme->crypto.sae_pwd) {
++ brcmf_dbg(INFO, "using SAE offload\n");
++ profile->use_fwsup = BRCMF_PROFILE_FWSUP_SAE;
++ }
+ break;
+ default:
+ bphy_err(drvr, "invalid cipher group (%d)\n",
+@@ -2104,11 +2108,6 @@ brcmf_cfg80211_connect(struct wiphy *wiphy, struct net_device *ndev,
+ goto done;
+ }
+
+- if (sme->crypto.sae_pwd) {
+- brcmf_dbg(INFO, "using SAE offload\n");
+- profile->use_fwsup = BRCMF_PROFILE_FWSUP_SAE;
+- }
+-
+ if (sme->crypto.psk &&
+ profile->use_fwsup != BRCMF_PROFILE_FWSUP_SAE) {
+ if (WARN_ON(profile->use_fwsup != BRCMF_PROFILE_FWSUP_NONE)) {
+@@ -5495,7 +5494,8 @@ static bool brcmf_is_linkup(struct brcmf_cfg80211_vif *vif,
+ u32 event = e->event_code;
+ u32 status = e->status;
+
+- if (vif->profile.use_fwsup == BRCMF_PROFILE_FWSUP_PSK &&
++ if ((vif->profile.use_fwsup == BRCMF_PROFILE_FWSUP_PSK ||
++ vif->profile.use_fwsup == BRCMF_PROFILE_FWSUP_SAE) &&
+ event == BRCMF_E_PSK_SUP &&
+ status == BRCMF_E_STATUS_FWSUP_COMPLETED)
+ set_bit(BRCMF_VIF_STATUS_EAP_SUCCESS, &vif->sme_state);
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/feature.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/feature.c
+index 5da0dda0d899..0dcefbd0c000 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/feature.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/feature.c
+@@ -285,13 +285,14 @@ void brcmf_feat_attach(struct brcmf_pub *drvr)
+ if (!err)
+ ifp->drvr->feat_flags |= BIT(BRCMF_FEAT_SCAN_RANDOM_MAC);
+
++ brcmf_feat_iovar_int_get(ifp, BRCMF_FEAT_FWSUP, "sup_wpa");
++
+ if (drvr->settings->feature_disable) {
+ brcmf_dbg(INFO, "Features: 0x%02x, disable: 0x%02x\n",
+ ifp->drvr->feat_flags,
+ drvr->settings->feature_disable);
+ ifp->drvr->feat_flags &= ~drvr->settings->feature_disable;
+ }
+- brcmf_feat_iovar_int_get(ifp, BRCMF_FEAT_FWSUP, "sup_wpa");
+
+ brcmf_feat_firmware_overrides(drvr);
+
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/debugfs.c b/drivers/net/wireless/intel/iwlwifi/mvm/debugfs.c
+index 3beef8d077b8..8fae7e707374 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/debugfs.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/debugfs.c
+@@ -5,10 +5,9 @@
+ *
+ * GPL LICENSE SUMMARY
+ *
+- * Copyright(c) 2012 - 2014 Intel Corporation. All rights reserved.
+ * Copyright(c) 2013 - 2015 Intel Mobile Communications GmbH
+ * Copyright(c) 2016 - 2017 Intel Deutschland GmbH
+- * Copyright(c) 2018 - 2019 Intel Corporation
++ * Copyright(c) 2012 - 2014, 2018 - 2020 Intel Corporation
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of version 2 of the GNU General Public License as
+@@ -28,10 +27,9 @@
+ *
+ * BSD LICENSE
+ *
+- * Copyright(c) 2012 - 2014 Intel Corporation. All rights reserved.
+ * Copyright(c) 2013 - 2015 Intel Mobile Communications GmbH
+ * Copyright(c) 2016 - 2017 Intel Deutschland GmbH
+- * Copyright(c) 2018 - 2019 Intel Corporation
++ * Copyright(c) 2012 - 2014, 2018 - 2020 Intel Corporation
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+@@ -481,6 +479,11 @@ static ssize_t iwl_dbgfs_amsdu_len_write(struct ieee80211_sta *sta,
+ if (kstrtou16(buf, 0, &amsdu_len))
+ return -EINVAL;
+
++ /* only change from debug set <-> debug unset */
++ if ((amsdu_len && mvmsta->orig_amsdu_len) ||
++ (!amsdu_len && !mvmsta->orig_amsdu_len))
++ return -EBUSY;
++
+ if (amsdu_len) {
+ mvmsta->orig_amsdu_len = sta->max_amsdu_len;
+ sta->max_amsdu_len = amsdu_len;
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c b/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
+index 7aa1350b093e..cf3c46c9b1ee 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
+@@ -1209,14 +1209,13 @@ void __iwl_mvm_mac_stop(struct iwl_mvm *mvm)
+ */
+ flush_work(&mvm->roc_done_wk);
+
++ iwl_mvm_rm_aux_sta(mvm);
++
+ iwl_mvm_stop_device(mvm);
+
+ iwl_mvm_async_handlers_purge(mvm);
+ /* async_handlers_list is empty and will stay empty: HW is stopped */
+
+- /* the fw is stopped, the aux sta is dead: clean up driver state */
+- iwl_mvm_del_aux_sta(mvm);
+-
+ /*
+ * Clear IN_HW_RESTART and HW_RESTART_REQUESTED flag when stopping the
+ * hw (as restart_complete() won't be called in this case) and mac80211
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/rs-fw.c b/drivers/net/wireless/intel/iwlwifi/mvm/rs-fw.c
+index 15d11fb72aca..6f4d241d47e9 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/rs-fw.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/rs-fw.c
+@@ -369,14 +369,15 @@ void iwl_mvm_tlc_update_notif(struct iwl_mvm *mvm,
+ u16 size = le32_to_cpu(notif->amsdu_size);
+ int i;
+
+- /*
+- * In debug sta->max_amsdu_len < size
+- * so also check with orig_amsdu_len which holds the original
+- * data before debugfs changed the value
+- */
+- if (WARN_ON(sta->max_amsdu_len < size &&
+- mvmsta->orig_amsdu_len < size))
++ if (sta->max_amsdu_len < size) {
++ /*
++ * In debug sta->max_amsdu_len < size
++ * so also check with orig_amsdu_len which holds the
++ * original data before debugfs changed the value
++ */
++ WARN_ON(mvmsta->orig_amsdu_len < size);
+ goto out;
++ }
+
+ mvmsta->amsdu_enabled = le32_to_cpu(notif->amsdu_enabled);
+ mvmsta->max_amsdu_len = size;
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/sta.c b/drivers/net/wireless/intel/iwlwifi/mvm/sta.c
+index 56ae72debb96..07ca8c91499d 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/sta.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/sta.c
+@@ -2080,16 +2080,24 @@ int iwl_mvm_rm_snif_sta(struct iwl_mvm *mvm, struct ieee80211_vif *vif)
+ return ret;
+ }
+
+-void iwl_mvm_dealloc_snif_sta(struct iwl_mvm *mvm)
++int iwl_mvm_rm_aux_sta(struct iwl_mvm *mvm)
+ {
+- iwl_mvm_dealloc_int_sta(mvm, &mvm->snif_sta);
+-}
++ int ret;
+
+-void iwl_mvm_del_aux_sta(struct iwl_mvm *mvm)
+-{
+ lockdep_assert_held(&mvm->mutex);
+
++ iwl_mvm_disable_txq(mvm, NULL, mvm->aux_queue, IWL_MAX_TID_COUNT, 0);
++ ret = iwl_mvm_rm_sta_common(mvm, mvm->aux_sta.sta_id);
++ if (ret)
++ IWL_WARN(mvm, "Failed sending remove station\n");
+ iwl_mvm_dealloc_int_sta(mvm, &mvm->aux_sta);
++
++ return ret;
++}
++
++void iwl_mvm_dealloc_snif_sta(struct iwl_mvm *mvm)
++{
++ iwl_mvm_dealloc_int_sta(mvm, &mvm->snif_sta);
+ }
+
+ /*
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/sta.h b/drivers/net/wireless/intel/iwlwifi/mvm/sta.h
+index 8d70093847cb..da2d1ac01229 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/sta.h
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/sta.h
+@@ -8,7 +8,7 @@
+ * Copyright(c) 2012 - 2014 Intel Corporation. All rights reserved.
+ * Copyright(c) 2013 - 2014 Intel Mobile Communications GmbH
+ * Copyright(c) 2015 - 2016 Intel Deutschland GmbH
+- * Copyright(c) 2018 - 2019 Intel Corporation
++ * Copyright(c) 2018 - 2020 Intel Corporation
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of version 2 of the GNU General Public License as
+@@ -31,7 +31,7 @@
+ * Copyright(c) 2012 - 2014 Intel Corporation. All rights reserved.
+ * Copyright(c) 2013 - 2014 Intel Mobile Communications GmbH
+ * Copyright(c) 2015 - 2016 Intel Deutschland GmbH
+- * Copyright(c) 2018 - 2019 Intel Corporation
++ * Copyright(c) 2018 - 2020 Intel Corporation
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+@@ -541,7 +541,7 @@ int iwl_mvm_sta_tx_agg(struct iwl_mvm *mvm, struct ieee80211_sta *sta,
+ int tid, u8 queue, bool start);
+
+ int iwl_mvm_add_aux_sta(struct iwl_mvm *mvm);
+-void iwl_mvm_del_aux_sta(struct iwl_mvm *mvm);
++int iwl_mvm_rm_aux_sta(struct iwl_mvm *mvm);
+
+ int iwl_mvm_alloc_bcast_sta(struct iwl_mvm *mvm, struct ieee80211_vif *vif);
+ int iwl_mvm_send_add_bcast_sta(struct iwl_mvm *mvm, struct ieee80211_vif *vif);
+diff --git a/drivers/net/wireless/marvell/libertas_tf/if_usb.c b/drivers/net/wireless/marvell/libertas_tf/if_usb.c
+index 25ac9db35dbf..bedc09215088 100644
+--- a/drivers/net/wireless/marvell/libertas_tf/if_usb.c
++++ b/drivers/net/wireless/marvell/libertas_tf/if_usb.c
+@@ -247,10 +247,10 @@ static void if_usb_disconnect(struct usb_interface *intf)
+
+ lbtf_deb_enter(LBTF_DEB_MAIN);
+
+- if_usb_reset_device(priv);
+-
+- if (priv)
++ if (priv) {
++ if_usb_reset_device(priv);
+ lbtf_remove_card(priv);
++ }
+
+ /* Unlink and free urb */
+ if_usb_free(cardp);
+diff --git a/drivers/net/wireless/marvell/mwifiex/cfg80211.c b/drivers/net/wireless/marvell/mwifiex/cfg80211.c
+index 1566d2197906..12bfd653a405 100644
+--- a/drivers/net/wireless/marvell/mwifiex/cfg80211.c
++++ b/drivers/net/wireless/marvell/mwifiex/cfg80211.c
+@@ -1496,7 +1496,8 @@ mwifiex_cfg80211_dump_station(struct wiphy *wiphy, struct net_device *dev,
+ int idx, u8 *mac, struct station_info *sinfo)
+ {
+ struct mwifiex_private *priv = mwifiex_netdev_get_priv(dev);
+- static struct mwifiex_sta_node *node;
++ struct mwifiex_sta_node *node;
++ int i;
+
+ if ((GET_BSS_ROLE(priv) == MWIFIEX_BSS_ROLE_STA) &&
+ priv->media_connected && idx == 0) {
+@@ -1506,13 +1507,10 @@ mwifiex_cfg80211_dump_station(struct wiphy *wiphy, struct net_device *dev,
+ mwifiex_send_cmd(priv, HOST_CMD_APCMD_STA_LIST,
+ HostCmd_ACT_GEN_GET, 0, NULL, true);
+
+- if (node && (&node->list == &priv->sta_list)) {
+- node = NULL;
+- return -ENOENT;
+- }
+-
+- node = list_prepare_entry(node, &priv->sta_list, list);
+- list_for_each_entry_continue(node, &priv->sta_list, list) {
++ i = 0;
++ list_for_each_entry(node, &priv->sta_list, list) {
++ if (i++ != idx)
++ continue;
+ ether_addr_copy(mac, node->mac_addr);
+ return mwifiex_dump_station_info(priv, node, sinfo);
+ }
+diff --git a/drivers/net/wireless/mediatek/mt76/agg-rx.c b/drivers/net/wireless/mediatek/mt76/agg-rx.c
+index f77f03530259..acdbe6f8248d 100644
+--- a/drivers/net/wireless/mediatek/mt76/agg-rx.c
++++ b/drivers/net/wireless/mediatek/mt76/agg-rx.c
+@@ -152,8 +152,8 @@ void mt76_rx_aggr_reorder(struct sk_buff *skb, struct sk_buff_head *frames)
+ struct ieee80211_sta *sta;
+ struct mt76_rx_tid *tid;
+ bool sn_less;
+- u16 seqno, head, size;
+- u8 ackp, idx;
++ u16 seqno, head, size, idx;
++ u8 ackp;
+
+ __skb_queue_tail(frames, skb);
+
+@@ -239,7 +239,7 @@ out:
+ }
+
+ int mt76_rx_aggr_start(struct mt76_dev *dev, struct mt76_wcid *wcid, u8 tidno,
+- u16 ssn, u8 size)
++ u16 ssn, u16 size)
+ {
+ struct mt76_rx_tid *tid;
+
+@@ -264,7 +264,7 @@ EXPORT_SYMBOL_GPL(mt76_rx_aggr_start);
+
+ static void mt76_rx_aggr_shutdown(struct mt76_dev *dev, struct mt76_rx_tid *tid)
+ {
+- u8 size = tid->size;
++ u16 size = tid->size;
+ int i;
+
+ spin_lock_bh(&tid->lock);
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76.h b/drivers/net/wireless/mediatek/mt76/mt76.h
+index 8e4759bc8f59..37641ad14d49 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76.h
++++ b/drivers/net/wireless/mediatek/mt76/mt76.h
+@@ -241,8 +241,8 @@ struct mt76_rx_tid {
+ struct delayed_work reorder_work;
+
+ u16 head;
+- u8 size;
+- u8 nframes;
++ u16 size;
++ u16 nframes;
+
+ u8 num;
+
+@@ -788,7 +788,7 @@ int mt76_get_survey(struct ieee80211_hw *hw, int idx,
+ void mt76_set_stream_caps(struct mt76_dev *dev, bool vht);
+
+ int mt76_rx_aggr_start(struct mt76_dev *dev, struct mt76_wcid *wcid, u8 tid,
+- u16 ssn, u8 size);
++ u16 ssn, u16 size);
+ void mt76_rx_aggr_stop(struct mt76_dev *dev, struct mt76_wcid *wcid, u8 tid);
+
+ void mt76_wcid_key_setup(struct mt76_dev *dev, struct mt76_wcid *wcid,
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7615/mac.c b/drivers/net/wireless/mediatek/mt76/mt7615/mac.c
+index a27a6d164009..f66b76ff2978 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7615/mac.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7615/mac.c
+@@ -666,24 +666,27 @@ mt7615_txp_skb_unmap_fw(struct mt76_dev *dev, struct mt7615_fw_txp *txp)
+ static void
+ mt7615_txp_skb_unmap_hw(struct mt76_dev *dev, struct mt7615_hw_txp *txp)
+ {
++ u32 last_mask;
+ int i;
+
++ last_mask = is_mt7663(dev) ? MT_TXD_LEN_LAST : MT_TXD_LEN_MSDU_LAST;
++
+ for (i = 0; i < ARRAY_SIZE(txp->ptr); i++) {
+ struct mt7615_txp_ptr *ptr = &txp->ptr[i];
+ bool last;
+ u16 len;
+
+ len = le16_to_cpu(ptr->len0);
+- last = len & MT_TXD_LEN_MSDU_LAST;
+- len &= ~MT_TXD_LEN_MSDU_LAST;
++ last = len & last_mask;
++ len &= MT_TXD_LEN_MASK;
+ dma_unmap_single(dev->dev, le32_to_cpu(ptr->buf0), len,
+ DMA_TO_DEVICE);
+ if (last)
+ break;
+
+ len = le16_to_cpu(ptr->len1);
+- last = len & MT_TXD_LEN_MSDU_LAST;
+- len &= ~MT_TXD_LEN_MSDU_LAST;
++ last = len & last_mask;
++ len &= MT_TXD_LEN_MASK;
+ dma_unmap_single(dev->dev, le32_to_cpu(ptr->buf1), len,
+ DMA_TO_DEVICE);
+ if (last)
+@@ -1098,21 +1101,26 @@ mt7615_write_hw_txp(struct mt7615_dev *dev, struct mt76_tx_info *tx_info,
+ {
+ struct mt7615_hw_txp *txp = txp_ptr;
+ struct mt7615_txp_ptr *ptr = &txp->ptr[0];
+- int nbuf = tx_info->nbuf - 1;
+- int i;
++ int i, nbuf = tx_info->nbuf - 1;
++ u32 last_mask;
+
+ tx_info->buf[0].len = MT_TXD_SIZE + sizeof(*txp);
+ tx_info->nbuf = 1;
+
+ txp->msdu_id[0] = cpu_to_le16(id | MT_MSDU_ID_VALID);
+
++ if (is_mt7663(&dev->mt76))
++ last_mask = MT_TXD_LEN_LAST;
++ else
++ last_mask = MT_TXD_LEN_AMSDU_LAST |
++ MT_TXD_LEN_MSDU_LAST;
++
+ for (i = 0; i < nbuf; i++) {
++ u16 len = tx_info->buf[i + 1].len & MT_TXD_LEN_MASK;
+ u32 addr = tx_info->buf[i + 1].addr;
+- u16 len = tx_info->buf[i + 1].len;
+
+ if (i == nbuf - 1)
+- len |= MT_TXD_LEN_MSDU_LAST |
+- MT_TXD_LEN_AMSDU_LAST;
++ len |= last_mask;
+
+ if (i & 1) {
+ ptr->buf1 = cpu_to_le32(addr);
+@@ -1574,8 +1582,14 @@ void mt7615_mac_cca_stats_reset(struct mt7615_phy *phy)
+ {
+ struct mt7615_dev *dev = phy->dev;
+ bool ext_phy = phy != &dev->phy;
+- u32 reg = MT_WF_PHY_R0_PHYMUX_5(ext_phy);
++ u32 reg;
++
++ if (is_mt7663(&dev->mt76))
++ reg = MT7663_WF_PHY_R0_PHYMUX_5;
++ else
++ reg = MT_WF_PHY_R0_PHYMUX_5(ext_phy);
+
++ /* reset PD and MDRDY counters */
+ mt76_clear(dev, reg, GENMASK(22, 20));
+ mt76_set(dev, reg, BIT(22) | BIT(20));
+ }
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7615/mac.h b/drivers/net/wireless/mediatek/mt76/mt7615/mac.h
+index e0b89257db90..d3da40df7f32 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7615/mac.h
++++ b/drivers/net/wireless/mediatek/mt76/mt7615/mac.h
+@@ -252,8 +252,11 @@ enum tx_phy_bandwidth {
+
+ #define MT_MSDU_ID_VALID BIT(15)
+
++#define MT_TXD_LEN_MASK GENMASK(11, 0)
+ #define MT_TXD_LEN_MSDU_LAST BIT(14)
+ #define MT_TXD_LEN_AMSDU_LAST BIT(15)
++/* mt7663 */
++#define MT_TXD_LEN_LAST BIT(15)
+
+ struct mt7615_txp_ptr {
+ __le32 buf0;
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7615/main.c b/drivers/net/wireless/mediatek/mt76/mt7615/main.c
+index 6586176c29af..f92ac9a916fc 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7615/main.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7615/main.c
+@@ -218,6 +218,25 @@ static void mt7615_remove_interface(struct ieee80211_hw *hw,
+ spin_unlock_bh(&dev->sta_poll_lock);
+ }
+
++static void mt7615_init_dfs_state(struct mt7615_phy *phy)
++{
++ struct mt76_phy *mphy = phy->mt76;
++ struct ieee80211_hw *hw = mphy->hw;
++ struct cfg80211_chan_def *chandef = &hw->conf.chandef;
++
++ if (hw->conf.flags & IEEE80211_CONF_OFFCHANNEL)
++ return;
++
++ if (!(chandef->chan->flags & IEEE80211_CHAN_RADAR))
++ return;
++
++ if (mphy->chandef.chan->center_freq == chandef->chan->center_freq &&
++ mphy->chandef.width == chandef->width)
++ return;
++
++ phy->dfs_state = -1;
++}
++
+ static int mt7615_set_channel(struct mt7615_phy *phy)
+ {
+ struct mt7615_dev *dev = phy->dev;
+@@ -229,7 +248,7 @@ static int mt7615_set_channel(struct mt7615_phy *phy)
+ mutex_lock(&dev->mt76.mutex);
+ set_bit(MT76_RESET, &phy->mt76->state);
+
+- phy->dfs_state = -1;
++ mt7615_init_dfs_state(phy);
+ mt76_set_channel(phy->mt76);
+
+ ret = mt7615_mcu_set_chan_info(phy, MCU_EXT_CMD_CHANNEL_SWITCH);
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7615/mcu.c b/drivers/net/wireless/mediatek/mt76/mt7615/mcu.c
+index 610cfa918c7b..29a7aaabb6da 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7615/mcu.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7615/mcu.c
+@@ -823,8 +823,11 @@ mt7615_mcu_wtbl_generic_tlv(struct sk_buff *skb, struct ieee80211_vif *vif,
+ generic = (struct wtbl_generic *)tlv;
+
+ if (sta) {
++ if (vif->type == NL80211_IFTYPE_STATION)
++ generic->partial_aid = cpu_to_le16(vif->bss_conf.aid);
++ else
++ generic->partial_aid = cpu_to_le16(sta->aid);
+ memcpy(generic->peer_addr, sta->addr, ETH_ALEN);
+- generic->partial_aid = cpu_to_le16(sta->aid);
+ generic->muar_idx = mvif->omac_idx;
+ generic->qos = sta->wme;
+ } else {
+@@ -1523,16 +1526,20 @@ static void mt7622_trigger_hif_int(struct mt7615_dev *dev, bool en)
+
+ static int mt7615_driver_own(struct mt7615_dev *dev)
+ {
++ struct mt76_dev *mdev = &dev->mt76;
+ u32 addr;
+
+- addr = is_mt7663(&dev->mt76) ? MT_CONN_HIF_ON_LPCTL : MT_CFG_LPCR_HOST;
++ addr = is_mt7663(mdev) ? MT_PCIE_DOORBELL_PUSH : MT_CFG_LPCR_HOST;
+ mt76_wr(dev, addr, MT_CFG_LPCR_HOST_DRV_OWN);
+
+ mt7622_trigger_hif_int(dev, true);
++
++ addr = is_mt7663(mdev) ? MT_CONN_HIF_ON_LPCTL : MT_CFG_LPCR_HOST;
+ if (!mt76_poll_msec(dev, addr, MT_CFG_LPCR_HOST_FW_OWN, 0, 3000)) {
+ dev_err(dev->mt76.dev, "Timeout for driver own\n");
+ return -EIO;
+ }
++
+ mt7622_trigger_hif_int(dev, false);
+
+ return 0;
+@@ -1547,9 +1554,8 @@ static int mt7615_firmware_own(struct mt7615_dev *dev)
+
+ mt76_wr(dev, addr, MT_CFG_LPCR_HOST_FW_OWN);
+
+- if (is_mt7622(&dev->mt76) &&
+- !mt76_poll_msec(dev, MT_CFG_LPCR_HOST,
+- MT_CFG_LPCR_HOST_FW_OWN,
++ if (!is_mt7615(&dev->mt76) &&
++ !mt76_poll_msec(dev, addr, MT_CFG_LPCR_HOST_FW_OWN,
+ MT_CFG_LPCR_HOST_FW_OWN, 3000)) {
+ dev_err(dev->mt76.dev, "Timeout for firmware own\n");
+ return -EIO;
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7615/regs.h b/drivers/net/wireless/mediatek/mt76/mt7615/regs.h
+index 1e0d95b917e1..de0ef165c0ba 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7615/regs.h
++++ b/drivers/net/wireless/mediatek/mt76/mt7615/regs.h
+@@ -65,6 +65,7 @@ enum mt7615_reg_base {
+ #define MT_HIF2_BASE 0xf0000
+ #define MT_HIF2(ofs) (MT_HIF2_BASE + (ofs))
+ #define MT_PCIE_IRQ_ENABLE MT_HIF2(0x188)
++#define MT_PCIE_DOORBELL_PUSH MT_HIF2(0x1484)
+
+ #define MT_CFG_LPCR_HOST MT_HIF(0x1f0)
+ #define MT_CFG_LPCR_HOST_FW_OWN BIT(0)
+@@ -151,6 +152,7 @@ enum mt7615_reg_base {
+ #define MT_WF_PHY_WF2_RFCTRL0_LPBCN_EN BIT(9)
+
+ #define MT_WF_PHY_R0_PHYMUX_5(_phy) MT_WF_PHY(0x0614 + ((_phy) << 9))
++#define MT7663_WF_PHY_R0_PHYMUX_5 MT_WF_PHY(0x0414)
+
+ #define MT_WF_PHY_R0_PHYCTRL_STS0(_phy) MT_WF_PHY(0x020c + ((_phy) << 9))
+ #define MT_WF_PHYCTRL_STAT_PD_OFDM GENMASK(31, 16)
+diff --git a/drivers/net/wireless/realtek/rtlwifi/usb.c b/drivers/net/wireless/realtek/rtlwifi/usb.c
+index 348b0072cdd6..c66c6dc00378 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/usb.c
++++ b/drivers/net/wireless/realtek/rtlwifi/usb.c
+@@ -881,10 +881,8 @@ static struct urb *_rtl_usb_tx_urb_setup(struct ieee80211_hw *hw,
+
+ WARN_ON(NULL == skb);
+ _urb = usb_alloc_urb(0, GFP_ATOMIC);
+- if (!_urb) {
+- kfree_skb(skb);
++ if (!_urb)
+ return NULL;
+- }
+ _rtl_install_trx_info(rtlusb, skb, ep_num);
+ usb_fill_bulk_urb(_urb, rtlusb->udev, usb_sndbulkpipe(rtlusb->udev,
+ ep_num), skb->data, skb->len, _rtl_tx_complete, skb);
+@@ -898,7 +896,6 @@ static void _rtl_usb_transmit(struct ieee80211_hw *hw, struct sk_buff *skb,
+ struct rtl_usb *rtlusb = rtl_usbdev(rtl_usbpriv(hw));
+ u32 ep_num;
+ struct urb *_urb = NULL;
+- struct sk_buff *_skb = NULL;
+
+ WARN_ON(NULL == rtlusb->usb_tx_aggregate_hdl);
+ if (unlikely(IS_USB_STOP(rtlusb))) {
+@@ -907,8 +904,7 @@ static void _rtl_usb_transmit(struct ieee80211_hw *hw, struct sk_buff *skb,
+ return;
+ }
+ ep_num = rtlusb->ep_map.ep_mapping[qnum];
+- _skb = skb;
+- _urb = _rtl_usb_tx_urb_setup(hw, _skb, ep_num);
++ _urb = _rtl_usb_tx_urb_setup(hw, skb, ep_num);
+ if (unlikely(!_urb)) {
+ pr_err("Can't allocate urb. Drop skb!\n");
+ kfree_skb(skb);
+diff --git a/drivers/net/wireless/realtek/rtw88/pci.c b/drivers/net/wireless/realtek/rtw88/pci.c
+index 1af87eb2e53a..d735f3127fe8 100644
+--- a/drivers/net/wireless/realtek/rtw88/pci.c
++++ b/drivers/net/wireless/realtek/rtw88/pci.c
+@@ -1091,6 +1091,7 @@ static int rtw_pci_io_mapping(struct rtw_dev *rtwdev,
+ len = pci_resource_len(pdev, bar_id);
+ rtwpci->mmap = pci_iomap(pdev, bar_id, len);
+ if (!rtwpci->mmap) {
++ pci_release_regions(pdev);
+ rtw_err(rtwdev, "failed to map pci memory\n");
+ return -ENOMEM;
+ }
+diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
+index f3c037f5a9ba..7b4cbe2c6954 100644
+--- a/drivers/nvme/host/core.c
++++ b/drivers/nvme/host/core.c
+@@ -1027,6 +1027,19 @@ void nvme_stop_keep_alive(struct nvme_ctrl *ctrl)
+ }
+ EXPORT_SYMBOL_GPL(nvme_stop_keep_alive);
+
++/*
++ * In NVMe 1.0 the CNS field was just a binary controller or namespace
++ * flag, thus sending any new CNS opcodes has a big chance of not working.
++ * Qemu unfortunately had that bug after reporting a 1.1 version compliance
++ * (but not for any later version).
++ */
++static bool nvme_ctrl_limited_cns(struct nvme_ctrl *ctrl)
++{
++ if (ctrl->quirks & NVME_QUIRK_IDENTIFY_CNS)
++ return ctrl->vs < NVME_VS(1, 2, 0);
++ return ctrl->vs < NVME_VS(1, 1, 0);
++}
++
+ static int nvme_identify_ctrl(struct nvme_ctrl *dev, struct nvme_id_ctrl **id)
+ {
+ struct nvme_command c = { };
+@@ -3815,8 +3828,7 @@ static void nvme_scan_work(struct work_struct *work)
+
+ mutex_lock(&ctrl->scan_lock);
+ nn = le32_to_cpu(id->nn);
+- if (ctrl->vs >= NVME_VS(1, 1, 0) &&
+- !(ctrl->quirks & NVME_QUIRK_IDENTIFY_CNS)) {
++ if (!nvme_ctrl_limited_cns(ctrl)) {
+ if (!nvme_scan_ns_list(ctrl, nn))
+ goto out_free_id;
+ }
+diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c
+index 7dfc4a2ecf1e..5ef4a84c442a 100644
+--- a/drivers/nvme/host/fc.c
++++ b/drivers/nvme/host/fc.c
+@@ -1771,7 +1771,7 @@ nvme_fc_init_request(struct blk_mq_tag_set *set, struct request *rq,
+ res = __nvme_fc_init_request(ctrl, queue, &op->op, rq, queue->rqcnt++);
+ if (res)
+ return res;
+- op->op.fcp_req.first_sgl = &op->sgl[0];
++ op->op.fcp_req.first_sgl = op->sgl;
+ op->op.fcp_req.private = &op->priv[0];
+ nvme_req(rq)->ctrl = &ctrl->ctrl;
+ return res;
+diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
+index cc46e250fcac..076bdd90c922 100644
+--- a/drivers/nvme/host/pci.c
++++ b/drivers/nvme/host/pci.c
+@@ -68,14 +68,30 @@ static int io_queue_depth = 1024;
+ module_param_cb(io_queue_depth, &io_queue_depth_ops, &io_queue_depth, 0644);
+ MODULE_PARM_DESC(io_queue_depth, "set io queue depth, should >= 2");
+
++static int io_queue_count_set(const char *val, const struct kernel_param *kp)
++{
++ unsigned int n;
++ int ret;
++
++ ret = kstrtouint(val, 10, &n);
++ if (ret != 0 || n > num_possible_cpus())
++ return -EINVAL;
++ return param_set_uint(val, kp);
++}
++
++static const struct kernel_param_ops io_queue_count_ops = {
++ .set = io_queue_count_set,
++ .get = param_get_uint,
++};
++
+ static unsigned int write_queues;
+-module_param(write_queues, uint, 0644);
++module_param_cb(write_queues, &io_queue_count_ops, &write_queues, 0644);
+ MODULE_PARM_DESC(write_queues,
+ "Number of queues to use for writes. If not set, reads and writes "
+ "will share a queue set.");
+
+ static unsigned int poll_queues;
+-module_param(poll_queues, uint, 0644);
++module_param_cb(poll_queues, &io_queue_count_ops, &poll_queues, 0644);
+ MODULE_PARM_DESC(poll_queues, "Number of queues to use for polled IO.");
+
+ struct nvme_dev;
+@@ -128,6 +144,9 @@ struct nvme_dev {
+ dma_addr_t host_mem_descs_dma;
+ struct nvme_host_mem_buf_desc *host_mem_descs;
+ void **host_mem_desc_bufs;
++ unsigned int nr_allocated_queues;
++ unsigned int nr_write_queues;
++ unsigned int nr_poll_queues;
+ };
+
+ static int io_queue_depth_set(const char *val, const struct kernel_param *kp)
+@@ -209,25 +228,14 @@ struct nvme_iod {
+ struct scatterlist *sg;
+ };
+
+-static unsigned int max_io_queues(void)
+-{
+- return num_possible_cpus() + write_queues + poll_queues;
+-}
+-
+-static unsigned int max_queue_count(void)
++static inline unsigned int nvme_dbbuf_size(struct nvme_dev *dev)
+ {
+- /* IO queues + admin queue */
+- return 1 + max_io_queues();
+-}
+-
+-static inline unsigned int nvme_dbbuf_size(u32 stride)
+-{
+- return (max_queue_count() * 8 * stride);
++ return dev->nr_allocated_queues * 8 * dev->db_stride;
+ }
+
+ static int nvme_dbbuf_dma_alloc(struct nvme_dev *dev)
+ {
+- unsigned int mem_size = nvme_dbbuf_size(dev->db_stride);
++ unsigned int mem_size = nvme_dbbuf_size(dev);
+
+ if (dev->dbbuf_dbs)
+ return 0;
+@@ -252,7 +260,7 @@ static int nvme_dbbuf_dma_alloc(struct nvme_dev *dev)
+
+ static void nvme_dbbuf_dma_free(struct nvme_dev *dev)
+ {
+- unsigned int mem_size = nvme_dbbuf_size(dev->db_stride);
++ unsigned int mem_size = nvme_dbbuf_size(dev);
+
+ if (dev->dbbuf_dbs) {
+ dma_free_coherent(dev->dev, mem_size,
+@@ -2003,7 +2011,7 @@ static int nvme_setup_host_mem(struct nvme_dev *dev)
+ static void nvme_calc_irq_sets(struct irq_affinity *affd, unsigned int nrirqs)
+ {
+ struct nvme_dev *dev = affd->priv;
+- unsigned int nr_read_queues;
++ unsigned int nr_read_queues, nr_write_queues = dev->nr_write_queues;
+
+ /*
+ * If there is no interrupt available for queues, ensure that
+@@ -2019,12 +2027,12 @@ static void nvme_calc_irq_sets(struct irq_affinity *affd, unsigned int nrirqs)
+ if (!nrirqs) {
+ nrirqs = 1;
+ nr_read_queues = 0;
+- } else if (nrirqs == 1 || !write_queues) {
++ } else if (nrirqs == 1 || !nr_write_queues) {
+ nr_read_queues = 0;
+- } else if (write_queues >= nrirqs) {
++ } else if (nr_write_queues >= nrirqs) {
+ nr_read_queues = 1;
+ } else {
+- nr_read_queues = nrirqs - write_queues;
++ nr_read_queues = nrirqs - nr_write_queues;
+ }
+
+ dev->io_queues[HCTX_TYPE_DEFAULT] = nrirqs - nr_read_queues;
+@@ -2048,7 +2056,7 @@ static int nvme_setup_irqs(struct nvme_dev *dev, unsigned int nr_io_queues)
+ * Poll queues don't need interrupts, but we need at least one IO
+ * queue left over for non-polled IO.
+ */
+- this_p_queues = poll_queues;
++ this_p_queues = dev->nr_poll_queues;
+ if (this_p_queues >= nr_io_queues) {
+ this_p_queues = nr_io_queues - 1;
+ irq_queues = 1;
+@@ -2078,14 +2086,25 @@ static void nvme_disable_io_queues(struct nvme_dev *dev)
+ __nvme_disable_io_queues(dev, nvme_admin_delete_cq);
+ }
+
++static unsigned int nvme_max_io_queues(struct nvme_dev *dev)
++{
++ return num_possible_cpus() + dev->nr_write_queues + dev->nr_poll_queues;
++}
++
+ static int nvme_setup_io_queues(struct nvme_dev *dev)
+ {
+ struct nvme_queue *adminq = &dev->queues[0];
+ struct pci_dev *pdev = to_pci_dev(dev->dev);
+- int result, nr_io_queues;
++ unsigned int nr_io_queues;
+ unsigned long size;
++ int result;
+
+- nr_io_queues = max_io_queues();
++ /*
++ * Sample the module parameters once at reset time so that we have
++ * stable values to work with.
++ */
++ dev->nr_write_queues = write_queues;
++ dev->nr_poll_queues = poll_queues;
+
+ /*
+ * If tags are shared with admin queue (Apple bug), then
+@@ -2093,6 +2112,9 @@ static int nvme_setup_io_queues(struct nvme_dev *dev)
+ */
+ if (dev->ctrl.quirks & NVME_QUIRK_SHARED_TAGS)
+ nr_io_queues = 1;
++ else
++ nr_io_queues = min(nvme_max_io_queues(dev),
++ dev->nr_allocated_queues - 1);
+
+ result = nvme_set_queue_count(&dev->ctrl, &nr_io_queues);
+ if (result < 0)
+@@ -2767,8 +2789,11 @@ static int nvme_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ if (!dev)
+ return -ENOMEM;
+
+- dev->queues = kcalloc_node(max_queue_count(), sizeof(struct nvme_queue),
+- GFP_KERNEL, node);
++ dev->nr_write_queues = write_queues;
++ dev->nr_poll_queues = poll_queues;
++ dev->nr_allocated_queues = nvme_max_io_queues(dev) + 1;
++ dev->queues = kcalloc_node(dev->nr_allocated_queues,
++ sizeof(struct nvme_queue), GFP_KERNEL, node);
+ if (!dev->queues)
+ goto free;
+
+@@ -3131,8 +3156,6 @@ static int __init nvme_init(void)
+ BUILD_BUG_ON(sizeof(struct nvme_delete_queue) != 64);
+ BUILD_BUG_ON(IRQ_AFFINITY_MAX_SETS < 2);
+
+- write_queues = min(write_queues, num_possible_cpus());
+- poll_queues = min(poll_queues, num_possible_cpus());
+ return pci_register_driver(&nvme_driver);
+ }
+
+diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
+index c15a92163c1f..4862fa962011 100644
+--- a/drivers/nvme/host/tcp.c
++++ b/drivers/nvme/host/tcp.c
+@@ -794,11 +794,11 @@ static void nvme_tcp_data_ready(struct sock *sk)
+ {
+ struct nvme_tcp_queue *queue;
+
+- read_lock(&sk->sk_callback_lock);
++ read_lock_bh(&sk->sk_callback_lock);
+ queue = sk->sk_user_data;
+ if (likely(queue && queue->rd_enabled))
+ queue_work_on(queue->io_cpu, nvme_tcp_wq, &queue->io_work);
+- read_unlock(&sk->sk_callback_lock);
++ read_unlock_bh(&sk->sk_callback_lock);
+ }
+
+ static void nvme_tcp_write_space(struct sock *sk)
+diff --git a/drivers/nvme/target/core.c b/drivers/nvme/target/core.c
+index b685f99d56a1..aa5ca222c6f5 100644
+--- a/drivers/nvme/target/core.c
++++ b/drivers/nvme/target/core.c
+@@ -157,14 +157,12 @@ static void nvmet_async_events_process(struct nvmet_ctrl *ctrl, u16 status)
+
+ static void nvmet_async_events_free(struct nvmet_ctrl *ctrl)
+ {
+- struct nvmet_req *req;
++ struct nvmet_async_event *aen, *tmp;
+
+ mutex_lock(&ctrl->lock);
+- while (ctrl->nr_async_event_cmds) {
+- req = ctrl->async_event_cmds[--ctrl->nr_async_event_cmds];
+- mutex_unlock(&ctrl->lock);
+- nvmet_req_complete(req, NVME_SC_INTERNAL | NVME_SC_DNR);
+- mutex_lock(&ctrl->lock);
++ list_for_each_entry_safe(aen, tmp, &ctrl->async_events, entry) {
++ list_del(&aen->entry);
++ kfree(aen);
+ }
+ mutex_unlock(&ctrl->lock);
+ }
+@@ -764,10 +762,8 @@ void nvmet_sq_destroy(struct nvmet_sq *sq)
+ * If this is the admin queue, complete all AERs so that our
+ * queue doesn't have outstanding requests on it.
+ */
+- if (ctrl && ctrl->sqs && ctrl->sqs[0] == sq) {
++ if (ctrl && ctrl->sqs && ctrl->sqs[0] == sq)
+ nvmet_async_events_process(ctrl, status);
+- nvmet_async_events_free(ctrl);
+- }
+ percpu_ref_kill_and_confirm(&sq->ref, nvmet_confirm_sq);
+ wait_for_completion(&sq->confirm_done);
+ wait_for_completion(&sq->free_done);
+@@ -1357,6 +1353,7 @@ static void nvmet_ctrl_free(struct kref *ref)
+
+ ida_simple_remove(&cntlid_ida, ctrl->cntlid);
+
++ nvmet_async_events_free(ctrl);
+ kfree(ctrl->sqs);
+ kfree(ctrl->cqs);
+ kfree(ctrl->changed_ns_list);
+diff --git a/drivers/pci/probe.c b/drivers/pci/probe.c
+index 77b8a145c39b..c7e3a8267521 100644
+--- a/drivers/pci/probe.c
++++ b/drivers/pci/probe.c
+@@ -1822,7 +1822,7 @@ int pci_setup_device(struct pci_dev *dev)
+ /* Device class may be changed after fixup */
+ class = dev->class >> 8;
+
+- if (dev->non_compliant_bars) {
++ if (dev->non_compliant_bars && !dev->mmio_always_on) {
+ pci_read_config_word(dev, PCI_COMMAND, &cmd);
+ if (cmd & (PCI_COMMAND_IO | PCI_COMMAND_MEMORY)) {
+ pci_info(dev, "device has non-compliant BARs; disabling IO/MEM decoding\n");
+@@ -1934,13 +1934,33 @@ static void pci_configure_mps(struct pci_dev *dev)
+ struct pci_dev *bridge = pci_upstream_bridge(dev);
+ int mps, mpss, p_mps, rc;
+
+- if (!pci_is_pcie(dev) || !bridge || !pci_is_pcie(bridge))
++ if (!pci_is_pcie(dev))
+ return;
+
+ /* MPS and MRRS fields are of type 'RsvdP' for VFs, short-circuit out */
+ if (dev->is_virtfn)
+ return;
+
++ /*
++ * For Root Complex Integrated Endpoints, program the maximum
++ * supported value unless limited by the PCIE_BUS_PEER2PEER case.
++ */
++ if (pci_pcie_type(dev) == PCI_EXP_TYPE_RC_END) {
++ if (pcie_bus_config == PCIE_BUS_PEER2PEER)
++ mps = 128;
++ else
++ mps = 128 << dev->pcie_mpss;
++ rc = pcie_set_mps(dev, mps);
++ if (rc) {
++ pci_warn(dev, "can't set Max Payload Size to %d; if necessary, use \"pci=pcie_bus_safe\" and report a bug\n",
++ mps);
++ }
++ return;
++ }
++
++ if (!bridge || !pci_is_pcie(bridge))
++ return;
++
+ mps = pcie_get_mps(dev);
+ p_mps = pcie_get_mps(bridge);
+
+diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
+index ca9ed5774eb1..5067562924f0 100644
+--- a/drivers/pci/quirks.c
++++ b/drivers/pci/quirks.c
+@@ -4682,6 +4682,20 @@ static int pci_quirk_mf_endpoint_acs(struct pci_dev *dev, u16 acs_flags)
+ PCI_ACS_CR | PCI_ACS_UF | PCI_ACS_DT);
+ }
+
++static int pci_quirk_rciep_acs(struct pci_dev *dev, u16 acs_flags)
++{
++ /*
++ * Intel RCiEP's are required to allow p2p only on translated
++ * addresses. Refer to Intel VT-d specification, r3.1, sec 3.16,
++ * "Root-Complex Peer to Peer Considerations".
++ */
++ if (pci_pcie_type(dev) != PCI_EXP_TYPE_RC_END)
++ return -ENOTTY;
++
++ return pci_acs_ctrl_enabled(acs_flags,
++ PCI_ACS_SV | PCI_ACS_RR | PCI_ACS_CR | PCI_ACS_UF);
++}
++
+ static int pci_quirk_brcm_acs(struct pci_dev *dev, u16 acs_flags)
+ {
+ /*
+@@ -4764,6 +4778,7 @@ static const struct pci_dev_acs_enabled {
+ /* I219 */
+ { PCI_VENDOR_ID_INTEL, 0x15b7, pci_quirk_mf_endpoint_acs },
+ { PCI_VENDOR_ID_INTEL, 0x15b8, pci_quirk_mf_endpoint_acs },
++ { PCI_VENDOR_ID_INTEL, PCI_ANY_ID, pci_quirk_rciep_acs },
+ /* QCOM QDF2xxx root ports */
+ { PCI_VENDOR_ID_QCOM, 0x0400, pci_quirk_qcom_rp_acs },
+ { PCI_VENDOR_ID_QCOM, 0x0401, pci_quirk_qcom_rp_acs },
+@@ -5129,13 +5144,25 @@ static void quirk_intel_qat_vf_cap(struct pci_dev *pdev)
+ }
+ DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x443, quirk_intel_qat_vf_cap);
+
+-/* FLR may cause some 82579 devices to hang */
+-static void quirk_intel_no_flr(struct pci_dev *dev)
++/*
++ * FLR may cause the following devices to hang:
++ *
++ * AMD Starship/Matisse HD Audio Controller 0x1487
++ * AMD Starship USB 3.0 Host Controller 0x148c
++ * AMD Matisse USB 3.0 Host Controller 0x149c
++ * Intel 82579LM Gigabit Ethernet Controller 0x1502
++ * Intel 82579V Gigabit Ethernet Controller 0x1503
++ *
++ */
++static void quirk_no_flr(struct pci_dev *dev)
+ {
+ dev->dev_flags |= PCI_DEV_FLAGS_NO_FLR_RESET;
+ }
+-DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x1502, quirk_intel_no_flr);
+-DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x1503, quirk_intel_no_flr);
++DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_AMD, 0x1487, quirk_no_flr);
++DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_AMD, 0x148c, quirk_no_flr);
++DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_AMD, 0x149c, quirk_no_flr);
++DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x1502, quirk_no_flr);
++DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x1503, quirk_no_flr);
+
+ static void quirk_no_ext_tags(struct pci_dev *pdev)
+ {
+@@ -5568,6 +5595,19 @@ static void pci_fixup_no_d0_pme(struct pci_dev *dev)
+ }
+ DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ASMEDIA, 0x2142, pci_fixup_no_d0_pme);
+
++/*
++ * Device [12d8:0x400e] and [12d8:0x400f]
++ * These devices advertise PME# support in all power states but don't
++ * reliably assert it.
++ */
++static void pci_fixup_no_pme(struct pci_dev *dev)
++{
++ pci_info(dev, "PME# is unreliable, disabling it\n");
++ dev->pme_support = 0;
++}
++DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_PERICOM, 0x400e, pci_fixup_no_pme);
++DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_PERICOM, 0x400f, pci_fixup_no_pme);
++
+ static void apex_pci_fixup_class(struct pci_dev *pdev)
+ {
+ pdev->class = (PCI_CLASS_SYSTEM_OTHER << 8) | pdev->class;
+diff --git a/drivers/perf/arm_smmuv3_pmu.c b/drivers/perf/arm_smmuv3_pmu.c
+index f01a57e5a5f3..48e28ef93a70 100644
+--- a/drivers/perf/arm_smmuv3_pmu.c
++++ b/drivers/perf/arm_smmuv3_pmu.c
+@@ -814,7 +814,7 @@ static int smmu_pmu_probe(struct platform_device *pdev)
+ if (err) {
+ dev_err(dev, "Error %d registering hotplug, PMU @%pa\n",
+ err, &res_0->start);
+- return err;
++ goto out_clear_affinity;
+ }
+
+ err = perf_pmu_register(&smmu_pmu->pmu, name, -1);
+@@ -833,6 +833,8 @@ static int smmu_pmu_probe(struct platform_device *pdev)
+
+ out_unregister:
+ cpuhp_state_remove_instance_nocalls(cpuhp_state_num, &smmu_pmu->node);
++out_clear_affinity:
++ irq_set_affinity_hint(smmu_pmu->irq, NULL);
+ return err;
+ }
+
+@@ -842,6 +844,7 @@ static int smmu_pmu_remove(struct platform_device *pdev)
+
+ perf_pmu_unregister(&smmu_pmu->pmu);
+ cpuhp_state_remove_instance_nocalls(cpuhp_state_num, &smmu_pmu->node);
++ irq_set_affinity_hint(smmu_pmu->irq, NULL);
+
+ return 0;
+ }
+diff --git a/drivers/perf/hisilicon/hisi_uncore_hha_pmu.c b/drivers/perf/hisilicon/hisi_uncore_hha_pmu.c
+index 6a1dd72d8abb..e5af9d7e6e14 100644
+--- a/drivers/perf/hisilicon/hisi_uncore_hha_pmu.c
++++ b/drivers/perf/hisilicon/hisi_uncore_hha_pmu.c
+@@ -283,7 +283,7 @@ static struct attribute *hisi_hha_pmu_events_attr[] = {
+ HISI_PMU_EVENT_ATTR(rx_wbip, 0x05),
+ HISI_PMU_EVENT_ATTR(rx_wtistash, 0x11),
+ HISI_PMU_EVENT_ATTR(rd_ddr_64b, 0x1c),
+- HISI_PMU_EVENT_ATTR(wr_dr_64b, 0x1d),
++ HISI_PMU_EVENT_ATTR(wr_ddr_64b, 0x1d),
+ HISI_PMU_EVENT_ATTR(rd_ddr_128b, 0x1e),
+ HISI_PMU_EVENT_ATTR(wr_ddr_128b, 0x1f),
+ HISI_PMU_EVENT_ATTR(spill_num, 0x20),
+diff --git a/drivers/pinctrl/samsung/pinctrl-exynos.c b/drivers/pinctrl/samsung/pinctrl-exynos.c
+index 0599f5127b01..84501c785473 100644
+--- a/drivers/pinctrl/samsung/pinctrl-exynos.c
++++ b/drivers/pinctrl/samsung/pinctrl-exynos.c
+@@ -40,6 +40,8 @@ struct exynos_irq_chip {
+ u32 eint_pend;
+ u32 eint_wake_mask_value;
+ u32 eint_wake_mask_reg;
++ void (*set_eint_wakeup_mask)(struct samsung_pinctrl_drv_data *drvdata,
++ struct exynos_irq_chip *irq_chip);
+ };
+
+ static inline struct exynos_irq_chip *to_exynos_irq_chip(struct irq_chip *chip)
+@@ -265,6 +267,7 @@ struct exynos_eint_gpio_save {
+ u32 eint_con;
+ u32 eint_fltcon0;
+ u32 eint_fltcon1;
++ u32 eint_mask;
+ };
+
+ /*
+@@ -342,6 +345,47 @@ static int exynos_wkup_irq_set_wake(struct irq_data *irqd, unsigned int on)
+ return 0;
+ }
+
++static void
++exynos_pinctrl_set_eint_wakeup_mask(struct samsung_pinctrl_drv_data *drvdata,
++ struct exynos_irq_chip *irq_chip)
++{
++ struct regmap *pmu_regs;
++
++ if (!drvdata->retention_ctrl || !drvdata->retention_ctrl->priv) {
++ dev_warn(drvdata->dev,
++ "No retention data configured bank with external wakeup interrupt. Wake-up mask will not be set.\n");
++ return;
++ }
++
++ pmu_regs = drvdata->retention_ctrl->priv;
++ dev_info(drvdata->dev,
++ "Setting external wakeup interrupt mask: 0x%x\n",
++ irq_chip->eint_wake_mask_value);
++
++ regmap_write(pmu_regs, irq_chip->eint_wake_mask_reg,
++ irq_chip->eint_wake_mask_value);
++}
++
++static void
++s5pv210_pinctrl_set_eint_wakeup_mask(struct samsung_pinctrl_drv_data *drvdata,
++ struct exynos_irq_chip *irq_chip)
++
++{
++ void __iomem *clk_base;
++
++ if (!drvdata->retention_ctrl || !drvdata->retention_ctrl->priv) {
++ dev_warn(drvdata->dev,
++ "No retention data configured bank with external wakeup interrupt. Wake-up mask will not be set.\n");
++ return;
++ }
++
++
++ clk_base = (void __iomem *) drvdata->retention_ctrl->priv;
++
++ __raw_writel(irq_chip->eint_wake_mask_value,
++ clk_base + irq_chip->eint_wake_mask_reg);
++}
++
+ /*
+ * irq_chip for wakeup interrupts
+ */
+@@ -360,8 +404,9 @@ static const struct exynos_irq_chip s5pv210_wkup_irq_chip __initconst = {
+ .eint_mask = EXYNOS_WKUP_EMASK_OFFSET,
+ .eint_pend = EXYNOS_WKUP_EPEND_OFFSET,
+ .eint_wake_mask_value = EXYNOS_EINT_WAKEUP_MASK_DISABLED,
+- /* Only difference with exynos4210_wkup_irq_chip: */
++ /* Only differences with exynos4210_wkup_irq_chip: */
+ .eint_wake_mask_reg = S5PV210_EINT_WAKEUP_MASK,
++ .set_eint_wakeup_mask = s5pv210_pinctrl_set_eint_wakeup_mask,
+ };
+
+ static const struct exynos_irq_chip exynos4210_wkup_irq_chip __initconst = {
+@@ -380,6 +425,7 @@ static const struct exynos_irq_chip exynos4210_wkup_irq_chip __initconst = {
+ .eint_pend = EXYNOS_WKUP_EPEND_OFFSET,
+ .eint_wake_mask_value = EXYNOS_EINT_WAKEUP_MASK_DISABLED,
+ .eint_wake_mask_reg = EXYNOS_EINT_WAKEUP_MASK,
++ .set_eint_wakeup_mask = exynos_pinctrl_set_eint_wakeup_mask,
+ };
+
+ static const struct exynos_irq_chip exynos7_wkup_irq_chip __initconst = {
+@@ -398,6 +444,7 @@ static const struct exynos_irq_chip exynos7_wkup_irq_chip __initconst = {
+ .eint_pend = EXYNOS7_WKUP_EPEND_OFFSET,
+ .eint_wake_mask_value = EXYNOS_EINT_WAKEUP_MASK_DISABLED,
+ .eint_wake_mask_reg = EXYNOS5433_EINT_WAKEUP_MASK,
++ .set_eint_wakeup_mask = exynos_pinctrl_set_eint_wakeup_mask,
+ };
+
+ /* list of external wakeup controllers supported */
+@@ -574,27 +621,6 @@ int exynos_eint_wkup_init(struct samsung_pinctrl_drv_data *d)
+ return 0;
+ }
+
+-static void
+-exynos_pinctrl_set_eint_wakeup_mask(struct samsung_pinctrl_drv_data *drvdata,
+- struct exynos_irq_chip *irq_chip)
+-{
+- struct regmap *pmu_regs;
+-
+- if (!drvdata->retention_ctrl || !drvdata->retention_ctrl->priv) {
+- dev_warn(drvdata->dev,
+- "No retention data configured bank with external wakeup interrupt. Wake-up mask will not be set.\n");
+- return;
+- }
+-
+- pmu_regs = drvdata->retention_ctrl->priv;
+- dev_info(drvdata->dev,
+- "Setting external wakeup interrupt mask: 0x%x\n",
+- irq_chip->eint_wake_mask_value);
+-
+- regmap_write(pmu_regs, irq_chip->eint_wake_mask_reg,
+- irq_chip->eint_wake_mask_value);
+-}
+-
+ static void exynos_pinctrl_suspend_bank(
+ struct samsung_pinctrl_drv_data *drvdata,
+ struct samsung_pin_bank *bank)
+@@ -608,10 +634,13 @@ static void exynos_pinctrl_suspend_bank(
+ + 2 * bank->eint_offset);
+ save->eint_fltcon1 = readl(regs + EXYNOS_GPIO_EFLTCON_OFFSET
+ + 2 * bank->eint_offset + 4);
++ save->eint_mask = readl(regs + bank->irq_chip->eint_mask
++ + bank->eint_offset);
+
+ pr_debug("%s: save con %#010x\n", bank->name, save->eint_con);
+ pr_debug("%s: save fltcon0 %#010x\n", bank->name, save->eint_fltcon0);
+ pr_debug("%s: save fltcon1 %#010x\n", bank->name, save->eint_fltcon1);
++ pr_debug("%s: save mask %#010x\n", bank->name, save->eint_mask);
+ }
+
+ void exynos_pinctrl_suspend(struct samsung_pinctrl_drv_data *drvdata)
+@@ -626,8 +655,8 @@ void exynos_pinctrl_suspend(struct samsung_pinctrl_drv_data *drvdata)
+ else if (bank->eint_type == EINT_TYPE_WKUP) {
+ if (!irq_chip) {
+ irq_chip = bank->irq_chip;
+- exynos_pinctrl_set_eint_wakeup_mask(drvdata,
+- irq_chip);
++ irq_chip->set_eint_wakeup_mask(drvdata,
++ irq_chip);
+ } else if (bank->irq_chip != irq_chip) {
+ dev_warn(drvdata->dev,
+ "More than one external wakeup interrupt chip configured (bank: %s). This is not supported by hardware nor by driver.\n",
+@@ -653,6 +682,9 @@ static void exynos_pinctrl_resume_bank(
+ pr_debug("%s: fltcon1 %#010x => %#010x\n", bank->name,
+ readl(regs + EXYNOS_GPIO_EFLTCON_OFFSET
+ + 2 * bank->eint_offset + 4), save->eint_fltcon1);
++ pr_debug("%s: mask %#010x => %#010x\n", bank->name,
++ readl(regs + bank->irq_chip->eint_mask
++ + bank->eint_offset), save->eint_mask);
+
+ writel(save->eint_con, regs + EXYNOS_GPIO_ECON_OFFSET
+ + bank->eint_offset);
+@@ -660,6 +692,8 @@ static void exynos_pinctrl_resume_bank(
+ + 2 * bank->eint_offset);
+ writel(save->eint_fltcon1, regs + EXYNOS_GPIO_EFLTCON_OFFSET
+ + 2 * bank->eint_offset + 4);
++ writel(save->eint_mask, regs + bank->irq_chip->eint_mask
++ + bank->eint_offset);
+ }
+
+ void exynos_pinctrl_resume(struct samsung_pinctrl_drv_data *drvdata)
+diff --git a/drivers/platform/x86/asus-wmi.c b/drivers/platform/x86/asus-wmi.c
+index bb7c529d7d16..cd212ee210e2 100644
+--- a/drivers/platform/x86/asus-wmi.c
++++ b/drivers/platform/x86/asus-wmi.c
+@@ -116,6 +116,8 @@ struct bios_args {
+ u32 arg0;
+ u32 arg1;
+ u32 arg2; /* At least TUF Gaming series uses 3 dword input buffer. */
++ u32 arg4;
++ u32 arg5;
+ } __packed;
+
+ /*
+diff --git a/drivers/platform/x86/dell-laptop.c b/drivers/platform/x86/dell-laptop.c
+index f8d3e3bd1bb5..5e9c2296931c 100644
+--- a/drivers/platform/x86/dell-laptop.c
++++ b/drivers/platform/x86/dell-laptop.c
+@@ -2204,10 +2204,13 @@ static int __init dell_init(void)
+
+ dell_laptop_register_notifier(&dell_laptop_notifier);
+
+- micmute_led_cdev.brightness = ledtrig_audio_get(LED_AUDIO_MICMUTE);
+- ret = led_classdev_register(&platform_device->dev, &micmute_led_cdev);
+- if (ret < 0)
+- goto fail_led;
++ if (dell_smbios_find_token(GLOBAL_MIC_MUTE_DISABLE) &&
++ dell_smbios_find_token(GLOBAL_MIC_MUTE_ENABLE)) {
++ micmute_led_cdev.brightness = ledtrig_audio_get(LED_AUDIO_MICMUTE);
++ ret = led_classdev_register(&platform_device->dev, &micmute_led_cdev);
++ if (ret < 0)
++ goto fail_led;
++ }
+
+ if (acpi_video_get_backlight_type() != acpi_backlight_vendor)
+ return 0;
+diff --git a/drivers/platform/x86/hp-wmi.c b/drivers/platform/x86/hp-wmi.c
+index a881b709af25..a44a2ec33287 100644
+--- a/drivers/platform/x86/hp-wmi.c
++++ b/drivers/platform/x86/hp-wmi.c
+@@ -461,8 +461,14 @@ static ssize_t postcode_show(struct device *dev, struct device_attribute *attr,
+ static ssize_t als_store(struct device *dev, struct device_attribute *attr,
+ const char *buf, size_t count)
+ {
+- u32 tmp = simple_strtoul(buf, NULL, 10);
+- int ret = hp_wmi_perform_query(HPWMI_ALS_QUERY, HPWMI_WRITE, &tmp,
++ u32 tmp;
++ int ret;
++
++ ret = kstrtou32(buf, 10, &tmp);
++ if (ret)
++ return ret;
++
++ ret = hp_wmi_perform_query(HPWMI_ALS_QUERY, HPWMI_WRITE, &tmp,
+ sizeof(tmp), sizeof(tmp));
+ if (ret)
+ return ret < 0 ? ret : -EINVAL;
+diff --git a/drivers/platform/x86/intel-hid.c b/drivers/platform/x86/intel-hid.c
+index cc7dd4d87cce..9ee79b74311c 100644
+--- a/drivers/platform/x86/intel-hid.c
++++ b/drivers/platform/x86/intel-hid.c
+@@ -79,6 +79,13 @@ static const struct dmi_system_id button_array_table[] = {
+ DMI_MATCH(DMI_PRODUCT_NAME, "Wacom MobileStudio Pro 16"),
+ },
+ },
++ {
++ .ident = "HP Spectre x2 (2015)",
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "HP"),
++ DMI_MATCH(DMI_PRODUCT_NAME, "HP Spectre x2 Detachable"),
++ },
++ },
+ { }
+ };
+
+diff --git a/drivers/platform/x86/intel-vbtn.c b/drivers/platform/x86/intel-vbtn.c
+index b5880936d785..a05b80955dcd 100644
+--- a/drivers/platform/x86/intel-vbtn.c
++++ b/drivers/platform/x86/intel-vbtn.c
+@@ -40,28 +40,51 @@ static const struct key_entry intel_vbtn_keymap[] = {
+ { KE_IGNORE, 0xC7, { KEY_VOLUMEDOWN } }, /* volume-down key release */
+ { KE_KEY, 0xC8, { KEY_ROTATE_LOCK_TOGGLE } }, /* rotate-lock key press */
+ { KE_KEY, 0xC9, { KEY_ROTATE_LOCK_TOGGLE } }, /* rotate-lock key release */
++};
++
++static const struct key_entry intel_vbtn_switchmap[] = {
+ { KE_SW, 0xCA, { .sw = { SW_DOCK, 1 } } }, /* Docked */
+ { KE_SW, 0xCB, { .sw = { SW_DOCK, 0 } } }, /* Undocked */
+ { KE_SW, 0xCC, { .sw = { SW_TABLET_MODE, 1 } } }, /* Tablet */
+ { KE_SW, 0xCD, { .sw = { SW_TABLET_MODE, 0 } } }, /* Laptop */
+- { KE_END },
+ };
+
++#define KEYMAP_LEN \
++ (ARRAY_SIZE(intel_vbtn_keymap) + ARRAY_SIZE(intel_vbtn_switchmap) + 1)
++
+ struct intel_vbtn_priv {
++ struct key_entry keymap[KEYMAP_LEN];
+ struct input_dev *input_dev;
++ bool has_switches;
+ bool wakeup_mode;
+ };
+
+ static int intel_vbtn_input_setup(struct platform_device *device)
+ {
+ struct intel_vbtn_priv *priv = dev_get_drvdata(&device->dev);
+- int ret;
++ int ret, keymap_len = 0;
++
++ if (true) {
++ memcpy(&priv->keymap[keymap_len], intel_vbtn_keymap,
++ ARRAY_SIZE(intel_vbtn_keymap) *
++ sizeof(struct key_entry));
++ keymap_len += ARRAY_SIZE(intel_vbtn_keymap);
++ }
++
++ if (priv->has_switches) {
++ memcpy(&priv->keymap[keymap_len], intel_vbtn_switchmap,
++ ARRAY_SIZE(intel_vbtn_switchmap) *
++ sizeof(struct key_entry));
++ keymap_len += ARRAY_SIZE(intel_vbtn_switchmap);
++ }
++
++ priv->keymap[keymap_len].type = KE_END;
+
+ priv->input_dev = devm_input_allocate_device(&device->dev);
+ if (!priv->input_dev)
+ return -ENOMEM;
+
+- ret = sparse_keymap_setup(priv->input_dev, intel_vbtn_keymap, NULL);
++ ret = sparse_keymap_setup(priv->input_dev, priv->keymap, NULL);
+ if (ret)
+ return ret;
+
+@@ -116,31 +139,40 @@ out_unknown:
+
+ static void detect_tablet_mode(struct platform_device *device)
+ {
+- const char *chassis_type = dmi_get_system_info(DMI_CHASSIS_TYPE);
+ struct intel_vbtn_priv *priv = dev_get_drvdata(&device->dev);
+ acpi_handle handle = ACPI_HANDLE(&device->dev);
+- struct acpi_buffer vgbs_output = { ACPI_ALLOCATE_BUFFER, NULL };
+- union acpi_object *obj;
++ unsigned long long vgbs;
+ acpi_status status;
+ int m;
+
+- if (!(chassis_type && strcmp(chassis_type, "31") == 0))
+- goto out;
+-
+- status = acpi_evaluate_object(handle, "VGBS", NULL, &vgbs_output);
++ status = acpi_evaluate_integer(handle, "VGBS", NULL, &vgbs);
+ if (ACPI_FAILURE(status))
+- goto out;
+-
+- obj = vgbs_output.pointer;
+- if (!(obj && obj->type == ACPI_TYPE_INTEGER))
+- goto out;
++ return;
+
+- m = !(obj->integer.value & TABLET_MODE_FLAG);
++ m = !(vgbs & TABLET_MODE_FLAG);
+ input_report_switch(priv->input_dev, SW_TABLET_MODE, m);
+- m = (obj->integer.value & DOCK_MODE_FLAG) ? 1 : 0;
++ m = (vgbs & DOCK_MODE_FLAG) ? 1 : 0;
+ input_report_switch(priv->input_dev, SW_DOCK, m);
+-out:
+- kfree(vgbs_output.pointer);
++}
++
++static bool intel_vbtn_has_switches(acpi_handle handle)
++{
++ const char *chassis_type = dmi_get_system_info(DMI_CHASSIS_TYPE);
++ unsigned long long vgbs;
++ acpi_status status;
++
++ /*
++ * Some normal laptops have a VGBS method despite being non-convertible
++ * and their VGBS method always returns 0, causing detect_tablet_mode()
++ * to report SW_TABLET_MODE=1 to userspace, which causes issues.
++ * These laptops have a DMI chassis_type of 9 ("Laptop"), do not report
++ * switches on any devices with a DMI chassis_type of 9.
++ */
++ if (chassis_type && strcmp(chassis_type, "9") == 0)
++ return false;
++
++ status = acpi_evaluate_integer(handle, "VGBS", NULL, &vgbs);
++ return ACPI_SUCCESS(status);
+ }
+
+ static int intel_vbtn_probe(struct platform_device *device)
+@@ -161,13 +193,16 @@ static int intel_vbtn_probe(struct platform_device *device)
+ return -ENOMEM;
+ dev_set_drvdata(&device->dev, priv);
+
++ priv->has_switches = intel_vbtn_has_switches(handle);
++
+ err = intel_vbtn_input_setup(device);
+ if (err) {
+ pr_err("Failed to setup Intel Virtual Button\n");
+ return err;
+ }
+
+- detect_tablet_mode(device);
++ if (priv->has_switches)
++ detect_tablet_mode(device);
+
+ status = acpi_install_notify_handler(handle,
+ ACPI_DEVICE_NOTIFY,
+diff --git a/drivers/power/reset/vexpress-poweroff.c b/drivers/power/reset/vexpress-poweroff.c
+index 90cbaa8341e3..0bf9ab8653ae 100644
+--- a/drivers/power/reset/vexpress-poweroff.c
++++ b/drivers/power/reset/vexpress-poweroff.c
+@@ -143,6 +143,7 @@ static struct platform_driver vexpress_reset_driver = {
+ .driver = {
+ .name = "vexpress-reset",
+ .of_match_table = vexpress_reset_of_match,
++ .suppress_bind_attrs = true,
+ },
+ };
+
+diff --git a/drivers/power/supply/power_supply_hwmon.c b/drivers/power/supply/power_supply_hwmon.c
+index 75cf861ba492..2e7e2b73b012 100644
+--- a/drivers/power/supply/power_supply_hwmon.c
++++ b/drivers/power/supply/power_supply_hwmon.c
+@@ -144,7 +144,7 @@ static int power_supply_hwmon_read_string(struct device *dev,
+ u32 attr, int channel,
+ const char **str)
+ {
+- *str = channel ? "temp" : "temp ambient";
++ *str = channel ? "temp ambient" : "temp";
+ return 0;
+ }
+
+@@ -304,7 +304,7 @@ int power_supply_add_hwmon_sysfs(struct power_supply *psy)
+ goto error;
+ }
+
+- ret = devm_add_action(dev, power_supply_hwmon_bitmap_free,
++ ret = devm_add_action_or_reset(dev, power_supply_hwmon_bitmap_free,
+ psyhw->props);
+ if (ret)
+ goto error;
+diff --git a/drivers/pwm/pwm-jz4740.c b/drivers/pwm/pwm-jz4740.c
+index 3cd5c054ad9a..4fe9d99ac9a9 100644
+--- a/drivers/pwm/pwm-jz4740.c
++++ b/drivers/pwm/pwm-jz4740.c
+@@ -158,11 +158,11 @@ static int jz4740_pwm_apply(struct pwm_chip *chip, struct pwm_device *pwm,
+ /* Calculate period value */
+ tmp = (unsigned long long)rate * state->period;
+ do_div(tmp, NSEC_PER_SEC);
+- period = (unsigned long)tmp;
++ period = tmp;
+
+ /* Calculate duty value */
+- tmp = (unsigned long long)period * state->duty_cycle;
+- do_div(tmp, state->period);
++ tmp = (unsigned long long)rate * state->duty_cycle;
++ do_div(tmp, NSEC_PER_SEC);
+ duty = period - tmp;
+
+ if (duty >= period)
+diff --git a/drivers/pwm/pwm-lpss.c b/drivers/pwm/pwm-lpss.c
+index 75bbfe5f3bc2..9d965ffe66d1 100644
+--- a/drivers/pwm/pwm-lpss.c
++++ b/drivers/pwm/pwm-lpss.c
+@@ -158,7 +158,6 @@ static int pwm_lpss_apply(struct pwm_chip *chip, struct pwm_device *pwm,
+ return 0;
+ }
+
+-/* This function gets called once from pwmchip_add to get the initial state */
+ static void pwm_lpss_get_state(struct pwm_chip *chip, struct pwm_device *pwm,
+ struct pwm_state *state)
+ {
+@@ -167,6 +166,8 @@ static void pwm_lpss_get_state(struct pwm_chip *chip, struct pwm_device *pwm,
+ unsigned long long base_unit, freq, on_time_div;
+ u32 ctrl;
+
++ pm_runtime_get_sync(chip->dev);
++
+ base_unit_range = BIT(lpwm->info->base_unit_bits);
+
+ ctrl = pwm_lpss_read(pwm);
+@@ -187,8 +188,7 @@ static void pwm_lpss_get_state(struct pwm_chip *chip, struct pwm_device *pwm,
+ state->polarity = PWM_POLARITY_NORMAL;
+ state->enabled = !!(ctrl & PWM_ENABLE);
+
+- if (state->enabled)
+- pm_runtime_get(chip->dev);
++ pm_runtime_put(chip->dev);
+ }
+
+ static const struct pwm_ops pwm_lpss_ops = {
+@@ -202,7 +202,8 @@ struct pwm_lpss_chip *pwm_lpss_probe(struct device *dev, struct resource *r,
+ {
+ struct pwm_lpss_chip *lpwm;
+ unsigned long c;
+- int ret;
++ int i, ret;
++ u32 ctrl;
+
+ if (WARN_ON(info->npwm > MAX_PWMS))
+ return ERR_PTR(-ENODEV);
+@@ -232,6 +233,12 @@ struct pwm_lpss_chip *pwm_lpss_probe(struct device *dev, struct resource *r,
+ return ERR_PTR(ret);
+ }
+
++ for (i = 0; i < lpwm->info->npwm; i++) {
++ ctrl = pwm_lpss_read(&lpwm->chip.pwms[i]);
++ if (ctrl & PWM_ENABLE)
++ pm_runtime_get(dev);
++ }
++
+ return lpwm;
+ }
+ EXPORT_SYMBOL_GPL(pwm_lpss_probe);
+diff --git a/drivers/regulator/qcom-rpmh-regulator.c b/drivers/regulator/qcom-rpmh-regulator.c
+index c86ad40015ce..c88cfa8952d6 100644
+--- a/drivers/regulator/qcom-rpmh-regulator.c
++++ b/drivers/regulator/qcom-rpmh-regulator.c
+@@ -832,11 +832,11 @@ static const struct rpmh_vreg_init_data pm8150_vreg_data[] = {
+ RPMH_VREG("ldo10", "ldo%s10", &pmic5_pldo, "vdd-l2-l10"),
+ RPMH_VREG("ldo11", "ldo%s11", &pmic5_nldo, "vdd-l1-l8-l11"),
+ RPMH_VREG("ldo12", "ldo%s12", &pmic5_pldo_lv, "vdd-l7-l12-l14-l15"),
+- RPMH_VREG("ldo13", "ldo%s13", &pmic5_pldo, "vdd-l13-l6-l17"),
++ RPMH_VREG("ldo13", "ldo%s13", &pmic5_pldo, "vdd-l13-l16-l17"),
+ RPMH_VREG("ldo14", "ldo%s14", &pmic5_pldo_lv, "vdd-l7-l12-l14-l15"),
+ RPMH_VREG("ldo15", "ldo%s15", &pmic5_pldo_lv, "vdd-l7-l12-l14-l15"),
+- RPMH_VREG("ldo16", "ldo%s16", &pmic5_pldo, "vdd-l13-l6-l17"),
+- RPMH_VREG("ldo17", "ldo%s17", &pmic5_pldo, "vdd-l13-l6-l17"),
++ RPMH_VREG("ldo16", "ldo%s16", &pmic5_pldo, "vdd-l13-l16-l17"),
++ RPMH_VREG("ldo17", "ldo%s17", &pmic5_pldo, "vdd-l13-l16-l17"),
+ RPMH_VREG("ldo18", "ldo%s18", &pmic5_nldo, "vdd-l3-l4-l5-l18"),
+ {},
+ };
+@@ -857,7 +857,7 @@ static const struct rpmh_vreg_init_data pm8150l_vreg_data[] = {
+ RPMH_VREG("ldo5", "ldo%s5", &pmic5_pldo, "vdd-l4-l5-l6"),
+ RPMH_VREG("ldo6", "ldo%s6", &pmic5_pldo, "vdd-l4-l5-l6"),
+ RPMH_VREG("ldo7", "ldo%s7", &pmic5_pldo, "vdd-l7-l11"),
+- RPMH_VREG("ldo8", "ldo%s8", &pmic5_pldo_lv, "vdd-l1-l8-l11"),
++ RPMH_VREG("ldo8", "ldo%s8", &pmic5_pldo_lv, "vdd-l1-l8"),
+ RPMH_VREG("ldo9", "ldo%s9", &pmic5_pldo, "vdd-l9-l10"),
+ RPMH_VREG("ldo10", "ldo%s10", &pmic5_pldo, "vdd-l9-l10"),
+ RPMH_VREG("ldo11", "ldo%s11", &pmic5_pldo, "vdd-l7-l11"),
+diff --git a/drivers/soc/fsl/dpio/qbman-portal.c b/drivers/soc/fsl/dpio/qbman-portal.c
+index 804b8ba9bf5c..23a1377971f4 100644
+--- a/drivers/soc/fsl/dpio/qbman-portal.c
++++ b/drivers/soc/fsl/dpio/qbman-portal.c
+@@ -669,6 +669,7 @@ int qbman_swp_enqueue_multiple_direct(struct qbman_swp *s,
+ eqcr_ci = s->eqcr.ci;
+ p = s->addr_cena + QBMAN_CENA_SWP_EQCR_CI;
+ s->eqcr.ci = qbman_read_register(s, QBMAN_CINH_SWP_EQCR_CI);
++ s->eqcr.ci &= full_mask;
+
+ s->eqcr.available = qm_cyc_diff(s->eqcr.pi_ring_size,
+ eqcr_ci, s->eqcr.ci);
+diff --git a/drivers/soc/tegra/Kconfig b/drivers/soc/tegra/Kconfig
+index 3693532949b8..6bc603d0b9d9 100644
+--- a/drivers/soc/tegra/Kconfig
++++ b/drivers/soc/tegra/Kconfig
+@@ -133,6 +133,7 @@ config SOC_TEGRA_FLOWCTRL
+
+ config SOC_TEGRA_PMC
+ bool
++ select GENERIC_PINCONF
+
+ config SOC_TEGRA_POWERGATE_BPMP
+ def_bool y
+diff --git a/drivers/spi/spi-dw-mid.c b/drivers/spi/spi-dw-mid.c
+index 0d86c37e0aeb..23cebdeb67e2 100644
+--- a/drivers/spi/spi-dw-mid.c
++++ b/drivers/spi/spi-dw-mid.c
+@@ -147,6 +147,7 @@ static struct dma_async_tx_descriptor *dw_spi_dma_prepare_tx(struct dw_spi *dws,
+ if (!xfer->tx_buf)
+ return NULL;
+
++ memset(&txconf, 0, sizeof(txconf));
+ txconf.direction = DMA_MEM_TO_DEV;
+ txconf.dst_addr = dws->dma_addr;
+ txconf.dst_maxburst = 16;
+@@ -193,6 +194,7 @@ static struct dma_async_tx_descriptor *dw_spi_dma_prepare_rx(struct dw_spi *dws,
+ if (!xfer->rx_buf)
+ return NULL;
+
++ memset(&rxconf, 0, sizeof(rxconf));
+ rxconf.direction = DMA_DEV_TO_MEM;
+ rxconf.src_addr = dws->dma_addr;
+ rxconf.src_maxburst = 16;
+@@ -218,19 +220,23 @@ static struct dma_async_tx_descriptor *dw_spi_dma_prepare_rx(struct dw_spi *dws,
+
+ static int mid_spi_dma_setup(struct dw_spi *dws, struct spi_transfer *xfer)
+ {
+- u16 dma_ctrl = 0;
++ u16 imr = 0, dma_ctrl = 0;
+
+ dw_writel(dws, DW_SPI_DMARDLR, 0xf);
+ dw_writel(dws, DW_SPI_DMATDLR, 0x10);
+
+- if (xfer->tx_buf)
++ if (xfer->tx_buf) {
+ dma_ctrl |= SPI_DMA_TDMAE;
+- if (xfer->rx_buf)
++ imr |= SPI_INT_TXOI;
++ }
++ if (xfer->rx_buf) {
+ dma_ctrl |= SPI_DMA_RDMAE;
++ imr |= SPI_INT_RXUI | SPI_INT_RXOI;
++ }
+ dw_writel(dws, DW_SPI_DMACR, dma_ctrl);
+
+ /* Set the interrupt mask */
+- spi_umask_intr(dws, SPI_INT_TXOI | SPI_INT_RXUI | SPI_INT_RXOI);
++ spi_umask_intr(dws, imr);
+
+ dws->transfer_handler = dma_transfer;
+
+@@ -260,7 +266,7 @@ static int mid_spi_dma_transfer(struct dw_spi *dws, struct spi_transfer *xfer)
+ dma_async_issue_pending(dws->txchan);
+ }
+
+- return 0;
++ return 1;
+ }
+
+ static void mid_spi_dma_stop(struct dw_spi *dws)
+diff --git a/drivers/spi/spi-dw.c b/drivers/spi/spi-dw.c
+index dbf9b8d5cebe..c86c4bbb102e 100644
+--- a/drivers/spi/spi-dw.c
++++ b/drivers/spi/spi-dw.c
+@@ -381,11 +381,8 @@ static int dw_spi_transfer_one(struct spi_controller *master,
+
+ spi_enable_chip(dws, 1);
+
+- if (dws->dma_mapped) {
+- ret = dws->dma_ops->dma_transfer(dws, transfer);
+- if (ret < 0)
+- return ret;
+- }
++ if (dws->dma_mapped)
++ return dws->dma_ops->dma_transfer(dws, transfer);
+
+ if (chip->poll_mode)
+ return poll_transfer(dws);
+@@ -531,6 +528,7 @@ int dw_spi_add_host(struct device *dev, struct dw_spi *dws)
+ dws->dma_inited = 0;
+ } else {
+ master->can_dma = dws->dma_ops->can_dma;
++ master->flags |= SPI_CONTROLLER_MUST_TX;
+ }
+ }
+
+diff --git a/drivers/spi/spi-fsl-dspi.c b/drivers/spi/spi-fsl-dspi.c
+index 50e41f66a2d7..2e9f9adc5900 100644
+--- a/drivers/spi/spi-fsl-dspi.c
++++ b/drivers/spi/spi-fsl-dspi.c
+@@ -246,13 +246,33 @@ struct fsl_dspi {
+
+ static void dspi_native_host_to_dev(struct fsl_dspi *dspi, u32 *txdata)
+ {
+- memcpy(txdata, dspi->tx, dspi->oper_word_size);
++ switch (dspi->oper_word_size) {
++ case 1:
++ *txdata = *(u8 *)dspi->tx;
++ break;
++ case 2:
++ *txdata = *(u16 *)dspi->tx;
++ break;
++ case 4:
++ *txdata = *(u32 *)dspi->tx;
++ break;
++ }
+ dspi->tx += dspi->oper_word_size;
+ }
+
+ static void dspi_native_dev_to_host(struct fsl_dspi *dspi, u32 rxdata)
+ {
+- memcpy(dspi->rx, &rxdata, dspi->oper_word_size);
++ switch (dspi->oper_word_size) {
++ case 1:
++ *(u8 *)dspi->rx = rxdata;
++ break;
++ case 2:
++ *(u16 *)dspi->rx = rxdata;
++ break;
++ case 4:
++ *(u32 *)dspi->rx = rxdata;
++ break;
++ }
+ dspi->rx += dspi->oper_word_size;
+ }
+
+diff --git a/drivers/spi/spi-mem.c b/drivers/spi/spi-mem.c
+index adaa0c49f966..9a86cc27fcc0 100644
+--- a/drivers/spi/spi-mem.c
++++ b/drivers/spi/spi-mem.c
+@@ -108,15 +108,17 @@ static int spi_check_buswidth_req(struct spi_mem *mem, u8 buswidth, bool tx)
+ return 0;
+
+ case 2:
+- if ((tx && (mode & (SPI_TX_DUAL | SPI_TX_QUAD))) ||
+- (!tx && (mode & (SPI_RX_DUAL | SPI_RX_QUAD))))
++ if ((tx &&
++ (mode & (SPI_TX_DUAL | SPI_TX_QUAD | SPI_TX_OCTAL))) ||
++ (!tx &&
++ (mode & (SPI_RX_DUAL | SPI_RX_QUAD | SPI_RX_OCTAL))))
+ return 0;
+
+ break;
+
+ case 4:
+- if ((tx && (mode & SPI_TX_QUAD)) ||
+- (!tx && (mode & SPI_RX_QUAD)))
++ if ((tx && (mode & (SPI_TX_QUAD | SPI_TX_OCTAL))) ||
++ (!tx && (mode & (SPI_RX_QUAD | SPI_RX_OCTAL))))
+ return 0;
+
+ break;
+diff --git a/drivers/spi/spi-mux.c b/drivers/spi/spi-mux.c
+index 4f94c9127fc1..cc9ef371db14 100644
+--- a/drivers/spi/spi-mux.c
++++ b/drivers/spi/spi-mux.c
+@@ -51,6 +51,10 @@ static int spi_mux_select(struct spi_device *spi)
+ struct spi_mux_priv *priv = spi_controller_get_devdata(spi->controller);
+ int ret;
+
++ ret = mux_control_select(priv->mux, spi->chip_select);
++ if (ret)
++ return ret;
++
+ if (priv->current_cs == spi->chip_select)
+ return 0;
+
+@@ -62,10 +66,6 @@ static int spi_mux_select(struct spi_device *spi)
+ priv->spi->mode = spi->mode;
+ priv->spi->bits_per_word = spi->bits_per_word;
+
+- ret = mux_control_select(priv->mux, spi->chip_select);
+- if (ret)
+- return ret;
+-
+ priv->current_cs = spi->chip_select;
+
+ return 0;
+diff --git a/drivers/spi/spi-pxa2xx.c b/drivers/spi/spi-pxa2xx.c
+index f6e87344a36c..6721910e5f2a 100644
+--- a/drivers/spi/spi-pxa2xx.c
++++ b/drivers/spi/spi-pxa2xx.c
+@@ -150,6 +150,7 @@ static const struct lpss_config lpss_platforms[] = {
+ .tx_threshold_hi = 48,
+ .cs_sel_shift = 8,
+ .cs_sel_mask = 3 << 8,
++ .cs_clk_stays_gated = true,
+ },
+ { /* LPSS_CNL_SSP */
+ .offset = 0x200,
+diff --git a/drivers/spi/spi.c b/drivers/spi/spi.c
+index 7067e4c44400..299384c91917 100644
+--- a/drivers/spi/spi.c
++++ b/drivers/spi/spi.c
+@@ -2111,6 +2111,7 @@ static int acpi_spi_add_resource(struct acpi_resource *ares, void *data)
+ }
+
+ lookup->max_speed_hz = sb->connection_speed;
++ lookup->bits_per_word = sb->data_bit_length;
+
+ if (sb->clock_phase == ACPI_SPI_SECOND_PHASE)
+ lookup->mode |= SPI_CPHA;
+diff --git a/drivers/staging/android/ion/ion_heap.c b/drivers/staging/android/ion/ion_heap.c
+index 473b465724f1..0755b11348ed 100644
+--- a/drivers/staging/android/ion/ion_heap.c
++++ b/drivers/staging/android/ion/ion_heap.c
+@@ -99,12 +99,12 @@ int ion_heap_map_user(struct ion_heap *heap, struct ion_buffer *buffer,
+
+ static int ion_heap_clear_pages(struct page **pages, int num, pgprot_t pgprot)
+ {
+- void *addr = vm_map_ram(pages, num, -1, pgprot);
++ void *addr = vmap(pages, num, VM_MAP, pgprot);
+
+ if (!addr)
+ return -ENOMEM;
+ memset(addr, 0, PAGE_SIZE * num);
+- vm_unmap_ram(addr, num);
++ vunmap(addr);
+
+ return 0;
+ }
+diff --git a/drivers/staging/greybus/sdio.c b/drivers/staging/greybus/sdio.c
+index 68c5718be827..c4b16bb5c1a4 100644
+--- a/drivers/staging/greybus/sdio.c
++++ b/drivers/staging/greybus/sdio.c
+@@ -411,6 +411,7 @@ static int gb_sdio_command(struct gb_sdio_host *host, struct mmc_command *cmd)
+ struct gb_sdio_command_request request = {0};
+ struct gb_sdio_command_response response;
+ struct mmc_data *data = host->mrq->data;
++ unsigned int timeout_ms;
+ u8 cmd_flags;
+ u8 cmd_type;
+ int i;
+@@ -469,9 +470,12 @@ static int gb_sdio_command(struct gb_sdio_host *host, struct mmc_command *cmd)
+ request.data_blksz = cpu_to_le16(data->blksz);
+ }
+
+- ret = gb_operation_sync(host->connection, GB_SDIO_TYPE_COMMAND,
+- &request, sizeof(request), &response,
+- sizeof(response));
++ timeout_ms = cmd->busy_timeout ? cmd->busy_timeout :
++ GB_OPERATION_TIMEOUT_DEFAULT;
++
++ ret = gb_operation_sync_timeout(host->connection, GB_SDIO_TYPE_COMMAND,
++ &request, sizeof(request), &response,
++ sizeof(response), timeout_ms);
+ if (ret < 0)
+ goto out;
+
+diff --git a/drivers/staging/media/imx/imx-media-utils.c b/drivers/staging/media/imx/imx-media-utils.c
+index fae981698c49..00a71f01786c 100644
+--- a/drivers/staging/media/imx/imx-media-utils.c
++++ b/drivers/staging/media/imx/imx-media-utils.c
+@@ -9,12 +9,9 @@
+
+ /*
+ * List of supported pixel formats for the subdevs.
+- *
+- * In all of these tables, the non-mbus formats (with no
+- * mbus codes) must all fall at the end of the table.
+ */
+-
+-static const struct imx_media_pixfmt yuv_formats[] = {
++static const struct imx_media_pixfmt pixel_formats[] = {
++ /*** YUV formats start here ***/
+ {
+ .fourcc = V4L2_PIX_FMT_UYVY,
+ .codes = {
+@@ -31,12 +28,7 @@ static const struct imx_media_pixfmt yuv_formats[] = {
+ },
+ .cs = IPUV3_COLORSPACE_YUV,
+ .bpp = 16,
+- },
+- /***
+- * non-mbus YUV formats start here. NOTE! when adding non-mbus
+- * formats, NUM_NON_MBUS_YUV_FORMATS must be updated below.
+- ***/
+- {
++ }, {
+ .fourcc = V4L2_PIX_FMT_YUV420,
+ .cs = IPUV3_COLORSPACE_YUV,
+ .bpp = 12,
+@@ -62,13 +54,7 @@ static const struct imx_media_pixfmt yuv_formats[] = {
+ .bpp = 16,
+ .planar = true,
+ },
+-};
+-
+-#define NUM_NON_MBUS_YUV_FORMATS 5
+-#define NUM_YUV_FORMATS ARRAY_SIZE(yuv_formats)
+-#define NUM_MBUS_YUV_FORMATS (NUM_YUV_FORMATS - NUM_NON_MBUS_YUV_FORMATS)
+-
+-static const struct imx_media_pixfmt rgb_formats[] = {
++ /*** RGB formats start here ***/
+ {
+ .fourcc = V4L2_PIX_FMT_RGB565,
+ .codes = {MEDIA_BUS_FMT_RGB565_2X8_LE},
+@@ -83,12 +69,28 @@ static const struct imx_media_pixfmt rgb_formats[] = {
+ },
+ .cs = IPUV3_COLORSPACE_RGB,
+ .bpp = 24,
++ }, {
++ .fourcc = V4L2_PIX_FMT_BGR24,
++ .cs = IPUV3_COLORSPACE_RGB,
++ .bpp = 24,
+ }, {
+ .fourcc = V4L2_PIX_FMT_XRGB32,
+ .codes = {MEDIA_BUS_FMT_ARGB8888_1X32},
+ .cs = IPUV3_COLORSPACE_RGB,
+ .bpp = 32,
+ .ipufmt = true,
++ }, {
++ .fourcc = V4L2_PIX_FMT_XBGR32,
++ .cs = IPUV3_COLORSPACE_RGB,
++ .bpp = 32,
++ }, {
++ .fourcc = V4L2_PIX_FMT_BGRX32,
++ .cs = IPUV3_COLORSPACE_RGB,
++ .bpp = 32,
++ }, {
++ .fourcc = V4L2_PIX_FMT_RGBX32,
++ .cs = IPUV3_COLORSPACE_RGB,
++ .bpp = 32,
+ },
+ /*** raw bayer and grayscale formats start here ***/
+ {
+@@ -182,33 +184,8 @@ static const struct imx_media_pixfmt rgb_formats[] = {
+ .bpp = 16,
+ .bayer = true,
+ },
+- /***
+- * non-mbus RGB formats start here. NOTE! when adding non-mbus
+- * formats, NUM_NON_MBUS_RGB_FORMATS must be updated below.
+- ***/
+- {
+- .fourcc = V4L2_PIX_FMT_BGR24,
+- .cs = IPUV3_COLORSPACE_RGB,
+- .bpp = 24,
+- }, {
+- .fourcc = V4L2_PIX_FMT_XBGR32,
+- .cs = IPUV3_COLORSPACE_RGB,
+- .bpp = 32,
+- }, {
+- .fourcc = V4L2_PIX_FMT_BGRX32,
+- .cs = IPUV3_COLORSPACE_RGB,
+- .bpp = 32,
+- }, {
+- .fourcc = V4L2_PIX_FMT_RGBX32,
+- .cs = IPUV3_COLORSPACE_RGB,
+- .bpp = 32,
+- },
+ };
+
+-#define NUM_NON_MBUS_RGB_FORMATS 2
+-#define NUM_RGB_FORMATS ARRAY_SIZE(rgb_formats)
+-#define NUM_MBUS_RGB_FORMATS (NUM_RGB_FORMATS - NUM_NON_MBUS_RGB_FORMATS)
+-
+ static const struct imx_media_pixfmt ipu_yuv_formats[] = {
+ {
+ .fourcc = V4L2_PIX_FMT_YUV32,
+@@ -246,21 +223,24 @@ static void init_mbus_colorimetry(struct v4l2_mbus_framefmt *mbus,
+ mbus->ycbcr_enc);
+ }
+
+-static const
+-struct imx_media_pixfmt *__find_format(u32 fourcc,
+- u32 code,
+- bool allow_non_mbus,
+- bool allow_bayer,
+- const struct imx_media_pixfmt *array,
+- u32 array_size)
++static const struct imx_media_pixfmt *find_format(u32 fourcc,
++ u32 code,
++ enum codespace_sel cs_sel,
++ bool allow_non_mbus,
++ bool allow_bayer)
+ {
+- const struct imx_media_pixfmt *fmt;
+- int i, j;
++ unsigned int i;
+
+- for (i = 0; i < array_size; i++) {
+- fmt = &array[i];
++ for (i = 0; i < ARRAY_SIZE(pixel_formats); i++) {
++ const struct imx_media_pixfmt *fmt = &pixel_formats[i];
++ enum codespace_sel fmt_cs_sel;
++ unsigned int j;
++
++ fmt_cs_sel = (fmt->cs == IPUV3_COLORSPACE_YUV) ?
++ CS_SEL_YUV : CS_SEL_RGB;
+
+- if ((!allow_non_mbus && !fmt->codes[0]) ||
++ if ((cs_sel != CS_SEL_ANY && fmt_cs_sel != cs_sel) ||
++ (!allow_non_mbus && !fmt->codes[0]) ||
+ (!allow_bayer && fmt->bayer))
+ continue;
+
+@@ -270,39 +250,13 @@ struct imx_media_pixfmt *__find_format(u32 fourcc,
+ if (!code)
+ continue;
+
+- for (j = 0; fmt->codes[j]; j++) {
++ for (j = 0; j < ARRAY_SIZE(fmt->codes) && fmt->codes[j]; j++) {
+ if (code == fmt->codes[j])
+ return fmt;
+ }
+ }
+- return NULL;
+-}
+-
+-static const struct imx_media_pixfmt *find_format(u32 fourcc,
+- u32 code,
+- enum codespace_sel cs_sel,
+- bool allow_non_mbus,
+- bool allow_bayer)
+-{
+- const struct imx_media_pixfmt *ret;
+
+- switch (cs_sel) {
+- case CS_SEL_YUV:
+- return __find_format(fourcc, code, allow_non_mbus, allow_bayer,
+- yuv_formats, NUM_YUV_FORMATS);
+- case CS_SEL_RGB:
+- return __find_format(fourcc, code, allow_non_mbus, allow_bayer,
+- rgb_formats, NUM_RGB_FORMATS);
+- case CS_SEL_ANY:
+- ret = __find_format(fourcc, code, allow_non_mbus, allow_bayer,
+- yuv_formats, NUM_YUV_FORMATS);
+- if (ret)
+- return ret;
+- return __find_format(fourcc, code, allow_non_mbus, allow_bayer,
+- rgb_formats, NUM_RGB_FORMATS);
+- default:
+- return NULL;
+- }
++ return NULL;
+ }
+
+ static int enum_format(u32 *fourcc, u32 *code, u32 index,
+@@ -310,61 +264,42 @@ static int enum_format(u32 *fourcc, u32 *code, u32 index,
+ bool allow_non_mbus,
+ bool allow_bayer)
+ {
+- const struct imx_media_pixfmt *fmt;
+- u32 mbus_yuv_sz = NUM_MBUS_YUV_FORMATS;
+- u32 mbus_rgb_sz = NUM_MBUS_RGB_FORMATS;
+- u32 yuv_sz = NUM_YUV_FORMATS;
+- u32 rgb_sz = NUM_RGB_FORMATS;
++ unsigned int i;
+
+- switch (cs_sel) {
+- case CS_SEL_YUV:
+- if (index >= yuv_sz ||
+- (!allow_non_mbus && index >= mbus_yuv_sz))
+- return -EINVAL;
+- fmt = &yuv_formats[index];
+- break;
+- case CS_SEL_RGB:
+- if (index >= rgb_sz ||
+- (!allow_non_mbus && index >= mbus_rgb_sz))
+- return -EINVAL;
+- fmt = &rgb_formats[index];
+- if (!allow_bayer && fmt->bayer)
+- return -EINVAL;
+- break;
+- case CS_SEL_ANY:
+- if (!allow_non_mbus) {
+- if (index >= mbus_yuv_sz) {
+- index -= mbus_yuv_sz;
+- if (index >= mbus_rgb_sz)
+- return -EINVAL;
+- fmt = &rgb_formats[index];
+- if (!allow_bayer && fmt->bayer)
+- return -EINVAL;
+- } else {
+- fmt = &yuv_formats[index];
+- }
+- } else {
+- if (index >= yuv_sz + rgb_sz)
+- return -EINVAL;
+- if (index >= yuv_sz) {
+- fmt = &rgb_formats[index - yuv_sz];
+- if (!allow_bayer && fmt->bayer)
+- return -EINVAL;
+- } else {
+- fmt = &yuv_formats[index];
++ for (i = 0; i < ARRAY_SIZE(pixel_formats); i++) {
++ const struct imx_media_pixfmt *fmt = &pixel_formats[i];
++ enum codespace_sel fmt_cs_sel;
++ unsigned int j;
++
++ fmt_cs_sel = (fmt->cs == IPUV3_COLORSPACE_YUV) ?
++ CS_SEL_YUV : CS_SEL_RGB;
++
++ if ((cs_sel != CS_SEL_ANY && fmt_cs_sel != cs_sel) ||
++ (!allow_non_mbus && !fmt->codes[0]) ||
++ (!allow_bayer && fmt->bayer))
++ continue;
++
++ if (fourcc && index == 0) {
++ *fourcc = fmt->fourcc;
++ return 0;
++ }
++
++ if (!code) {
++ index--;
++ continue;
++ }
++
++ for (j = 0; j < ARRAY_SIZE(fmt->codes) && fmt->codes[j]; j++) {
++ if (index == 0) {
++ *code = fmt->codes[j];
++ return 0;
+ }
++
++ index--;
+ }
+- break;
+- default:
+- return -EINVAL;
+ }
+
+- if (fourcc)
+- *fourcc = fmt->fourcc;
+- if (code)
+- *code = fmt->codes[0];
+-
+- return 0;
++ return -EINVAL;
+ }
+
+ const struct imx_media_pixfmt *
+diff --git a/drivers/staging/media/imx/imx7-mipi-csis.c b/drivers/staging/media/imx/imx7-mipi-csis.c
+index fbc1a924652a..6318f0aebb4b 100644
+--- a/drivers/staging/media/imx/imx7-mipi-csis.c
++++ b/drivers/staging/media/imx/imx7-mipi-csis.c
+@@ -669,28 +669,6 @@ static int mipi_csis_init_cfg(struct v4l2_subdev *mipi_sd,
+ return 0;
+ }
+
+-static struct csis_pix_format const *
+-mipi_csis_try_format(struct v4l2_subdev *mipi_sd, struct v4l2_mbus_framefmt *mf)
+-{
+- struct csi_state *state = mipi_sd_to_csis_state(mipi_sd);
+- struct csis_pix_format const *csis_fmt;
+-
+- csis_fmt = find_csis_format(mf->code);
+- if (!csis_fmt)
+- csis_fmt = &mipi_csis_formats[0];
+-
+- v4l_bound_align_image(&mf->width, 1, CSIS_MAX_PIX_WIDTH,
+- csis_fmt->pix_width_alignment,
+- &mf->height, 1, CSIS_MAX_PIX_HEIGHT, 1,
+- 0);
+-
+- state->format_mbus.code = csis_fmt->code;
+- state->format_mbus.width = mf->width;
+- state->format_mbus.height = mf->height;
+-
+- return csis_fmt;
+-}
+-
+ static struct v4l2_mbus_framefmt *
+ mipi_csis_get_format(struct csi_state *state,
+ struct v4l2_subdev_pad_config *cfg,
+@@ -703,53 +681,67 @@ mipi_csis_get_format(struct csi_state *state,
+ return &state->format_mbus;
+ }
+
+-static int mipi_csis_set_fmt(struct v4l2_subdev *mipi_sd,
++static int mipi_csis_get_fmt(struct v4l2_subdev *mipi_sd,
+ struct v4l2_subdev_pad_config *cfg,
+ struct v4l2_subdev_format *sdformat)
+ {
+ struct csi_state *state = mipi_sd_to_csis_state(mipi_sd);
+- struct csis_pix_format const *csis_fmt;
+ struct v4l2_mbus_framefmt *fmt;
+
+- if (sdformat->pad >= CSIS_PADS_NUM)
+- return -EINVAL;
+-
+- fmt = mipi_csis_get_format(state, cfg, sdformat->which, sdformat->pad);
+-
+ mutex_lock(&state->lock);
+- if (sdformat->pad == CSIS_PAD_SOURCE) {
+- sdformat->format = *fmt;
+- goto unlock;
+- }
+-
+- csis_fmt = mipi_csis_try_format(mipi_sd, &sdformat->format);
+-
++ fmt = mipi_csis_get_format(state, cfg, sdformat->which, sdformat->pad);
+ sdformat->format = *fmt;
+-
+- if (csis_fmt && sdformat->which == V4L2_SUBDEV_FORMAT_ACTIVE)
+- state->csis_fmt = csis_fmt;
+- else
+- cfg->try_fmt = sdformat->format;
+-
+-unlock:
+ mutex_unlock(&state->lock);
+
+ return 0;
+ }
+
+-static int mipi_csis_get_fmt(struct v4l2_subdev *mipi_sd,
++static int mipi_csis_set_fmt(struct v4l2_subdev *mipi_sd,
+ struct v4l2_subdev_pad_config *cfg,
+ struct v4l2_subdev_format *sdformat)
+ {
+ struct csi_state *state = mipi_sd_to_csis_state(mipi_sd);
++ struct csis_pix_format const *csis_fmt;
+ struct v4l2_mbus_framefmt *fmt;
+
+- mutex_lock(&state->lock);
++ /*
++ * The CSIS can't transcode in any way, the source format can't be
++ * modified.
++ */
++ if (sdformat->pad == CSIS_PAD_SOURCE)
++ return mipi_csis_get_fmt(mipi_sd, cfg, sdformat);
++
++ if (sdformat->pad != CSIS_PAD_SINK)
++ return -EINVAL;
+
+ fmt = mipi_csis_get_format(state, cfg, sdformat->which, sdformat->pad);
+
++ mutex_lock(&state->lock);
++
++ /* Validate the media bus code and clamp the size. */
++ csis_fmt = find_csis_format(sdformat->format.code);
++ if (!csis_fmt)
++ csis_fmt = &mipi_csis_formats[0];
++
++ fmt->code = csis_fmt->code;
++ fmt->width = sdformat->format.width;
++ fmt->height = sdformat->format.height;
++
++ v4l_bound_align_image(&fmt->width, 1, CSIS_MAX_PIX_WIDTH,
++ csis_fmt->pix_width_alignment,
++ &fmt->height, 1, CSIS_MAX_PIX_HEIGHT, 1, 0);
++
+ sdformat->format = *fmt;
+
++ /* Propagate the format from sink to source. */
++ fmt = mipi_csis_get_format(state, cfg, sdformat->which,
++ CSIS_PAD_SOURCE);
++ *fmt = sdformat->format;
++
++ /* Store the CSIS format descriptor for active formats. */
++ if (sdformat->which == V4L2_SUBDEV_FORMAT_ACTIVE)
++ state->csis_fmt = csis_fmt;
++
+ mutex_unlock(&state->lock);
+
+ return 0;
+diff --git a/drivers/staging/media/ipu3/ipu3-mmu.c b/drivers/staging/media/ipu3/ipu3-mmu.c
+index 5f3ff964f3e7..cb9bf5fb29a5 100644
+--- a/drivers/staging/media/ipu3/ipu3-mmu.c
++++ b/drivers/staging/media/ipu3/ipu3-mmu.c
+@@ -174,8 +174,10 @@ static u32 *imgu_mmu_get_l2pt(struct imgu_mmu *mmu, u32 l1pt_idx)
+ spin_lock_irqsave(&mmu->lock, flags);
+
+ l2pt = mmu->l2pts[l1pt_idx];
+- if (l2pt)
+- goto done;
++ if (l2pt) {
++ spin_unlock_irqrestore(&mmu->lock, flags);
++ return l2pt;
++ }
+
+ spin_unlock_irqrestore(&mmu->lock, flags);
+
+@@ -190,8 +192,9 @@ static u32 *imgu_mmu_get_l2pt(struct imgu_mmu *mmu, u32 l1pt_idx)
+
+ l2pt = mmu->l2pts[l1pt_idx];
+ if (l2pt) {
++ spin_unlock_irqrestore(&mmu->lock, flags);
+ imgu_mmu_free_page_table(new_l2pt);
+- goto done;
++ return l2pt;
+ }
+
+ l2pt = new_l2pt;
+@@ -200,7 +203,6 @@ static u32 *imgu_mmu_get_l2pt(struct imgu_mmu *mmu, u32 l1pt_idx)
+ pteval = IPU3_ADDR2PTE(virt_to_phys(new_l2pt));
+ mmu->l1pt[l1pt_idx] = pteval;
+
+-done:
+ spin_unlock_irqrestore(&mmu->lock, flags);
+ return l2pt;
+ }
+diff --git a/drivers/staging/media/ipu3/ipu3-v4l2.c b/drivers/staging/media/ipu3/ipu3-v4l2.c
+index 09c8ede1457c..db8b5d13631a 100644
+--- a/drivers/staging/media/ipu3/ipu3-v4l2.c
++++ b/drivers/staging/media/ipu3/ipu3-v4l2.c
+@@ -367,8 +367,10 @@ static void imgu_vb2_buf_queue(struct vb2_buffer *vb)
+
+ vb2_set_plane_payload(vb, 0, need_bytes);
+
++ mutex_lock(&imgu->streaming_lock);
+ if (imgu->streaming)
+ imgu_queue_buffers(imgu, false, node->pipe);
++ mutex_unlock(&imgu->streaming_lock);
+
+ dev_dbg(&imgu->pci_dev->dev, "%s for pipe %u node %u", __func__,
+ node->pipe, node->id);
+@@ -468,10 +470,13 @@ static int imgu_vb2_start_streaming(struct vb2_queue *vq, unsigned int count)
+ dev_dbg(dev, "%s node name %s pipe %u id %u", __func__,
+ node->name, node->pipe, node->id);
+
++ mutex_lock(&imgu->streaming_lock);
+ if (imgu->streaming) {
+ r = -EBUSY;
++ mutex_unlock(&imgu->streaming_lock);
+ goto fail_return_bufs;
+ }
++ mutex_unlock(&imgu->streaming_lock);
+
+ if (!node->enabled) {
+ dev_err(dev, "IMGU node is not enabled");
+@@ -498,9 +503,11 @@ static int imgu_vb2_start_streaming(struct vb2_queue *vq, unsigned int count)
+
+ /* Start streaming of the whole pipeline now */
+ dev_dbg(dev, "IMGU streaming is ready to start");
++ mutex_lock(&imgu->streaming_lock);
+ r = imgu_s_stream(imgu, true);
+ if (!r)
+ imgu->streaming = true;
++ mutex_unlock(&imgu->streaming_lock);
+
+ return 0;
+
+@@ -532,6 +539,7 @@ static void imgu_vb2_stop_streaming(struct vb2_queue *vq)
+ dev_err(&imgu->pci_dev->dev,
+ "failed to stop subdev streaming\n");
+
++ mutex_lock(&imgu->streaming_lock);
+ /* Was this the first node with streaming disabled? */
+ if (imgu->streaming && imgu_all_nodes_streaming(imgu, node)) {
+ /* Yes, really stop streaming now */
+@@ -542,6 +550,8 @@ static void imgu_vb2_stop_streaming(struct vb2_queue *vq)
+ }
+
+ imgu_return_all_buffers(imgu, node, VB2_BUF_STATE_ERROR);
++ mutex_unlock(&imgu->streaming_lock);
++
+ media_pipeline_stop(&node->vdev.entity);
+ }
+
+diff --git a/drivers/staging/media/ipu3/ipu3.c b/drivers/staging/media/ipu3/ipu3.c
+index 4d53aad31483..ee1bba6bdcac 100644
+--- a/drivers/staging/media/ipu3/ipu3.c
++++ b/drivers/staging/media/ipu3/ipu3.c
+@@ -261,6 +261,7 @@ int imgu_queue_buffers(struct imgu_device *imgu, bool initial, unsigned int pipe
+
+ ivb = list_first_entry(&imgu_pipe->nodes[node].buffers,
+ struct imgu_vb2_buffer, list);
++ list_del(&ivb->list);
+ vb = &ivb->vbb.vb2_buf;
+ r = imgu_css_set_parameters(&imgu->css, pipe,
+ vb2_plane_vaddr(vb, 0));
+@@ -274,7 +275,6 @@ int imgu_queue_buffers(struct imgu_device *imgu, bool initial, unsigned int pipe
+ vb2_buffer_done(vb, VB2_BUF_STATE_DONE);
+ dev_dbg(&imgu->pci_dev->dev,
+ "queue user parameters %d to css.", vb->index);
+- list_del(&ivb->list);
+ } else if (imgu_pipe->queue_enabled[node]) {
+ struct imgu_css_buffer *buf =
+ imgu_queue_getbuf(imgu, node, pipe);
+@@ -675,6 +675,7 @@ static int imgu_pci_probe(struct pci_dev *pci_dev,
+ return r;
+
+ mutex_init(&imgu->lock);
++ mutex_init(&imgu->streaming_lock);
+ atomic_set(&imgu->qbuf_barrier, 0);
+ init_waitqueue_head(&imgu->buf_drain_wq);
+
+@@ -738,6 +739,7 @@ out_mmu_exit:
+ out_css_powerdown:
+ imgu_css_set_powerdown(&pci_dev->dev, imgu->base);
+ out_mutex_destroy:
++ mutex_destroy(&imgu->streaming_lock);
+ mutex_destroy(&imgu->lock);
+
+ return r;
+@@ -755,6 +757,7 @@ static void imgu_pci_remove(struct pci_dev *pci_dev)
+ imgu_css_set_powerdown(&pci_dev->dev, imgu->base);
+ imgu_dmamap_exit(imgu);
+ imgu_mmu_exit(imgu->mmu);
++ mutex_destroy(&imgu->streaming_lock);
+ mutex_destroy(&imgu->lock);
+ }
+
+diff --git a/drivers/staging/media/ipu3/ipu3.h b/drivers/staging/media/ipu3/ipu3.h
+index 73b123b2b8a2..8cd6a0077d99 100644
+--- a/drivers/staging/media/ipu3/ipu3.h
++++ b/drivers/staging/media/ipu3/ipu3.h
+@@ -146,6 +146,10 @@ struct imgu_device {
+ * vid_buf.list and css->queue
+ */
+ struct mutex lock;
++
++ /* Lock to protect writes to streaming flag in this struct */
++ struct mutex streaming_lock;
++
+ /* Forbid streaming and buffer queuing during system suspend. */
+ atomic_t qbuf_barrier;
+ /* Indicate if system suspend take place while imgu is streaming. */
+diff --git a/drivers/staging/media/sunxi/cedrus/cedrus_dec.c b/drivers/staging/media/sunxi/cedrus/cedrus_dec.c
+index 4a2fc33a1d79..58c48e4fdfe9 100644
+--- a/drivers/staging/media/sunxi/cedrus/cedrus_dec.c
++++ b/drivers/staging/media/sunxi/cedrus/cedrus_dec.c
+@@ -74,6 +74,8 @@ void cedrus_device_run(void *priv)
+
+ v4l2_m2m_buf_copy_metadata(run.src, run.dst, true);
+
++ cedrus_dst_format_set(dev, &ctx->dst_fmt);
++
+ dev->dec_ops[ctx->current_codec]->setup(ctx, &run);
+
+ /* Complete request(s) controls if needed. */
+diff --git a/drivers/staging/media/sunxi/cedrus/cedrus_video.c b/drivers/staging/media/sunxi/cedrus/cedrus_video.c
+index 15cf1f10221b..ed3f511f066f 100644
+--- a/drivers/staging/media/sunxi/cedrus/cedrus_video.c
++++ b/drivers/staging/media/sunxi/cedrus/cedrus_video.c
+@@ -273,7 +273,6 @@ static int cedrus_s_fmt_vid_cap(struct file *file, void *priv,
+ struct v4l2_format *f)
+ {
+ struct cedrus_ctx *ctx = cedrus_file2ctx(file);
+- struct cedrus_dev *dev = ctx->dev;
+ struct vb2_queue *vq;
+ int ret;
+
+@@ -287,8 +286,6 @@ static int cedrus_s_fmt_vid_cap(struct file *file, void *priv,
+
+ ctx->dst_fmt = f->fmt.pix;
+
+- cedrus_dst_format_set(dev, &ctx->dst_fmt);
+-
+ return 0;
+ }
+
+diff --git a/drivers/tty/serial/8250/8250_core.c b/drivers/tty/serial/8250/8250_core.c
+index 45d9117cab68..9548d3f8fc8e 100644
+--- a/drivers/tty/serial/8250/8250_core.c
++++ b/drivers/tty/serial/8250/8250_core.c
+@@ -1040,7 +1040,7 @@ int serial8250_register_8250_port(struct uart_8250_port *up)
+ gpios = mctrl_gpio_init(&uart->port, 0);
+ if (IS_ERR(gpios)) {
+ ret = PTR_ERR(gpios);
+- goto out_unlock;
++ goto err;
+ } else {
+ uart->gpios = gpios;
+ }
+@@ -1089,8 +1089,10 @@ int serial8250_register_8250_port(struct uart_8250_port *up)
+ serial8250_apply_quirks(uart);
+ ret = uart_add_one_port(&serial8250_reg,
+ &uart->port);
+- if (ret == 0)
+- ret = uart->port.line;
++ if (ret)
++ goto err;
++
++ ret = uart->port.line;
+ } else {
+ dev_info(uart->port.dev,
+ "skipping CIR port at 0x%lx / 0x%llx, IRQ %d\n",
+@@ -1112,10 +1114,14 @@ int serial8250_register_8250_port(struct uart_8250_port *up)
+ }
+ }
+
+-out_unlock:
+ mutex_unlock(&serial_mutex);
+
+ return ret;
++
++err:
++ uart->port.dev = NULL;
++ mutex_unlock(&serial_mutex);
++ return ret;
+ }
+ EXPORT_SYMBOL(serial8250_register_8250_port);
+
+diff --git a/drivers/tty/serial/8250/8250_pci.c b/drivers/tty/serial/8250/8250_pci.c
+index 0804469ff052..1a74d511b02a 100644
+--- a/drivers/tty/serial/8250/8250_pci.c
++++ b/drivers/tty/serial/8250/8250_pci.c
+@@ -1869,12 +1869,6 @@ pci_moxa_setup(struct serial_private *priv,
+ #define PCIE_DEVICE_ID_WCH_CH384_4S 0x3470
+ #define PCIE_DEVICE_ID_WCH_CH382_2S 0x3253
+
+-#define PCI_VENDOR_ID_PERICOM 0x12D8
+-#define PCI_DEVICE_ID_PERICOM_PI7C9X7951 0x7951
+-#define PCI_DEVICE_ID_PERICOM_PI7C9X7952 0x7952
+-#define PCI_DEVICE_ID_PERICOM_PI7C9X7954 0x7954
+-#define PCI_DEVICE_ID_PERICOM_PI7C9X7958 0x7958
+-
+ #define PCI_VENDOR_ID_ACCESIO 0x494f
+ #define PCI_DEVICE_ID_ACCESIO_PCIE_COM_2SDB 0x1051
+ #define PCI_DEVICE_ID_ACCESIO_MPCIE_COM_2S 0x1053
+diff --git a/drivers/tty/serial/kgdboc.c b/drivers/tty/serial/kgdboc.c
+index c9f94fa82be4..151256f70d37 100644
+--- a/drivers/tty/serial/kgdboc.c
++++ b/drivers/tty/serial/kgdboc.c
+@@ -20,6 +20,7 @@
+ #include <linux/vt_kern.h>
+ #include <linux/input.h>
+ #include <linux/module.h>
++#include <linux/platform_device.h>
+
+ #define MAX_CONFIG_LEN 40
+
+@@ -27,6 +28,7 @@ static struct kgdb_io kgdboc_io_ops;
+
+ /* -1 = init not run yet, 0 = unconfigured, 1 = configured. */
+ static int configured = -1;
++static DEFINE_MUTEX(config_mutex);
+
+ static char config[MAX_CONFIG_LEN];
+ static struct kparam_string kps = {
+@@ -38,6 +40,8 @@ static int kgdboc_use_kms; /* 1 if we use kernel mode switching */
+ static struct tty_driver *kgdb_tty_driver;
+ static int kgdb_tty_line;
+
++static struct platform_device *kgdboc_pdev;
++
+ #ifdef CONFIG_KDB_KEYBOARD
+ static int kgdboc_reset_connect(struct input_handler *handler,
+ struct input_dev *dev,
+@@ -133,11 +137,13 @@ static void kgdboc_unregister_kbd(void)
+
+ static void cleanup_kgdboc(void)
+ {
++ if (configured != 1)
++ return;
++
+ if (kgdb_unregister_nmi_console())
+ return;
+ kgdboc_unregister_kbd();
+- if (configured == 1)
+- kgdb_unregister_io_module(&kgdboc_io_ops);
++ kgdb_unregister_io_module(&kgdboc_io_ops);
+ }
+
+ static int configure_kgdboc(void)
+@@ -198,20 +204,79 @@ nmi_con_failed:
+ kgdb_unregister_io_module(&kgdboc_io_ops);
+ noconfig:
+ kgdboc_unregister_kbd();
+- config[0] = 0;
+ configured = 0;
+- cleanup_kgdboc();
+
+ return err;
+ }
+
++static int kgdboc_probe(struct platform_device *pdev)
++{
++ int ret = 0;
++
++ mutex_lock(&config_mutex);
++ if (configured != 1) {
++ ret = configure_kgdboc();
++
++ /* Convert "no device" to "defer" so we'll keep trying */
++ if (ret == -ENODEV)
++ ret = -EPROBE_DEFER;
++ }
++ mutex_unlock(&config_mutex);
++
++ return ret;
++}
++
++static struct platform_driver kgdboc_platform_driver = {
++ .probe = kgdboc_probe,
++ .driver = {
++ .name = "kgdboc",
++ .suppress_bind_attrs = true,
++ },
++};
++
+ static int __init init_kgdboc(void)
+ {
+- /* Already configured? */
+- if (configured == 1)
++ int ret;
++
++ /*
++ * kgdboc is a little bit of an odd "platform_driver". It can be
++ * up and running long before the platform_driver object is
++ * created and thus doesn't actually store anything in it. There's
++ * only one instance of kgdb so anything is stored as global state.
++ * The platform_driver is only created so that we can leverage the
++ * kernel's mechanisms (like -EPROBE_DEFER) to call us when our
++ * underlying tty is ready. Here we init our platform driver and
++ * then create the single kgdboc instance.
++ */
++ ret = platform_driver_register(&kgdboc_platform_driver);
++ if (ret)
++ return ret;
++
++ kgdboc_pdev = platform_device_alloc("kgdboc", PLATFORM_DEVID_NONE);
++ if (!kgdboc_pdev) {
++ ret = -ENOMEM;
++ goto err_did_register;
++ }
++
++ ret = platform_device_add(kgdboc_pdev);
++ if (!ret)
+ return 0;
+
+- return configure_kgdboc();
++ platform_device_put(kgdboc_pdev);
++
++err_did_register:
++ platform_driver_unregister(&kgdboc_platform_driver);
++ return ret;
++}
++
++static void exit_kgdboc(void)
++{
++ mutex_lock(&config_mutex);
++ cleanup_kgdboc();
++ mutex_unlock(&config_mutex);
++
++ platform_device_unregister(kgdboc_pdev);
++ platform_driver_unregister(&kgdboc_platform_driver);
+ }
+
+ static int kgdboc_get_char(void)
+@@ -234,24 +299,20 @@ static int param_set_kgdboc_var(const char *kmessage,
+ const struct kernel_param *kp)
+ {
+ size_t len = strlen(kmessage);
++ int ret = 0;
+
+ if (len >= MAX_CONFIG_LEN) {
+ pr_err("config string too long\n");
+ return -ENOSPC;
+ }
+
+- /* Only copy in the string if the init function has not run yet */
+- if (configured < 0) {
+- strcpy(config, kmessage);
+- return 0;
+- }
+-
+ if (kgdb_connected) {
+ pr_err("Cannot reconfigure while KGDB is connected.\n");
+-
+ return -EBUSY;
+ }
+
++ mutex_lock(&config_mutex);
++
+ strcpy(config, kmessage);
+ /* Chop out \n char as a result of echo */
+ if (len && config[len - 1] == '\n')
+@@ -260,8 +321,30 @@ static int param_set_kgdboc_var(const char *kmessage,
+ if (configured == 1)
+ cleanup_kgdboc();
+
+- /* Go and configure with the new params. */
+- return configure_kgdboc();
++ /*
++ * Configure with the new params as long as init already ran.
++ * Note that we can get called before init if someone loads us
++ * with "modprobe kgdboc kgdboc=..." or if they happen to use the
++ * the odd syntax of "kgdboc.kgdboc=..." on the kernel command.
++ */
++ if (configured >= 0)
++ ret = configure_kgdboc();
++
++ /*
++ * If we couldn't configure then clear out the config. Note that
++ * specifying an invalid config on the kernel command line vs.
++ * through sysfs have slightly different behaviors. If we fail
++ * to configure what was specified on the kernel command line
++ * we'll leave it in the 'config' and return -EPROBE_DEFER from
++ * our probe. When specified through sysfs userspace is
++ * responsible for loading the tty driver before setting up.
++ */
++ if (ret)
++ config[0] = '\0';
++
++ mutex_unlock(&config_mutex);
++
++ return ret;
+ }
+
+ static int dbg_restore_graphics;
+@@ -324,15 +407,8 @@ __setup("kgdboc=", kgdboc_option_setup);
+ /* This is only available if kgdboc is a built in for early debugging */
+ static int __init kgdboc_early_init(char *opt)
+ {
+- /* save the first character of the config string because the
+- * init routine can destroy it.
+- */
+- char save_ch;
+-
+ kgdboc_option_setup(opt);
+- save_ch = config[0];
+- init_kgdboc();
+- config[0] = save_ch;
++ configure_kgdboc();
+ return 0;
+ }
+
+@@ -340,7 +416,7 @@ early_param("ekgdboc", kgdboc_early_init);
+ #endif /* CONFIG_KGDB_SERIAL_CONSOLE */
+
+ module_init(init_kgdboc);
+-module_exit(cleanup_kgdboc);
++module_exit(exit_kgdboc);
+ module_param_call(kgdboc, param_set_kgdboc_var, param_get_string, &kps, 0644);
+ MODULE_PARM_DESC(kgdboc, "<serial_device>[,baud]");
+ MODULE_DESCRIPTION("KGDB Console TTY Driver");
+diff --git a/drivers/usb/musb/mediatek.c b/drivers/usb/musb/mediatek.c
+index 6196b0e8d77d..eebeadd26946 100644
+--- a/drivers/usb/musb/mediatek.c
++++ b/drivers/usb/musb/mediatek.c
+@@ -208,6 +208,12 @@ static irqreturn_t generic_interrupt(int irq, void *__hci)
+ musb->int_rx = musb_clearw(musb->mregs, MUSB_INTRRX);
+ musb->int_tx = musb_clearw(musb->mregs, MUSB_INTRTX);
+
++ if ((musb->int_usb & MUSB_INTR_RESET) && !is_host_active(musb)) {
++ /* ep0 FADDR must be 0 when (re)entering peripheral mode */
++ musb_ep_select(musb->mregs, 0);
++ musb_writeb(musb->mregs, MUSB_FADDR, 0);
++ }
++
+ if (musb->int_usb || musb->int_tx || musb->int_rx)
+ retval = musb_interrupt(musb);
+
+diff --git a/drivers/virtio/virtio_balloon.c b/drivers/virtio/virtio_balloon.c
+index 51086a5afdd4..1f157d2f4952 100644
+--- a/drivers/virtio/virtio_balloon.c
++++ b/drivers/virtio/virtio_balloon.c
+@@ -1107,11 +1107,18 @@ static int virtballoon_restore(struct virtio_device *vdev)
+
+ static int virtballoon_validate(struct virtio_device *vdev)
+ {
+- /* Tell the host whether we care about poisoned pages. */
++ /*
++ * Inform the hypervisor that our pages are poisoned or
++ * initialized. If we cannot do that then we should disable
++ * page reporting as it could potentially change the contents
++ * of our free pages.
++ */
+ if (!want_init_on_free() &&
+ (IS_ENABLED(CONFIG_PAGE_POISONING_NO_SANITY) ||
+ !page_poisoning_enabled()))
+ __virtio_clear_bit(vdev, VIRTIO_BALLOON_F_PAGE_POISON);
++ else if (!virtio_has_feature(vdev, VIRTIO_BALLOON_F_PAGE_POISON))
++ __virtio_clear_bit(vdev, VIRTIO_BALLOON_F_REPORTING);
+
+ __virtio_clear_bit(vdev, VIRTIO_F_IOMMU_PLATFORM);
+ return 0;
+diff --git a/drivers/w1/masters/omap_hdq.c b/drivers/w1/masters/omap_hdq.c
+index aa09f8527776..a6484700f3b3 100644
+--- a/drivers/w1/masters/omap_hdq.c
++++ b/drivers/w1/masters/omap_hdq.c
+@@ -54,10 +54,10 @@ MODULE_PARM_DESC(w1_id, "1-wire id for the slave detection in HDQ mode");
+ struct hdq_data {
+ struct device *dev;
+ void __iomem *hdq_base;
+- /* lock status update */
++ /* lock read/write/break operations */
+ struct mutex hdq_mutex;
++ /* interrupt status and a lock for it */
+ u8 hdq_irqstatus;
+- /* device lock */
+ spinlock_t hdq_spinlock;
+ /* mode: 0-HDQ 1-W1 */
+ int mode;
+@@ -120,13 +120,18 @@ static int hdq_wait_for_flag(struct hdq_data *hdq_data, u32 offset,
+ }
+
+ /* Clear saved irqstatus after using an interrupt */
+-static void hdq_reset_irqstatus(struct hdq_data *hdq_data)
++static u8 hdq_reset_irqstatus(struct hdq_data *hdq_data, u8 bits)
+ {
+ unsigned long irqflags;
++ u8 status;
+
+ spin_lock_irqsave(&hdq_data->hdq_spinlock, irqflags);
+- hdq_data->hdq_irqstatus = 0;
++ status = hdq_data->hdq_irqstatus;
++ /* this is a read-modify-write */
++ hdq_data->hdq_irqstatus &= ~bits;
+ spin_unlock_irqrestore(&hdq_data->hdq_spinlock, irqflags);
++
++ return status;
+ }
+
+ /* write out a byte and fill *status with HDQ_INT_STATUS */
+@@ -135,6 +140,12 @@ static int hdq_write_byte(struct hdq_data *hdq_data, u8 val, u8 *status)
+ int ret;
+ u8 tmp_status;
+
++ ret = mutex_lock_interruptible(&hdq_data->hdq_mutex);
++ if (ret < 0) {
++ ret = -EINTR;
++ goto rtn;
++ }
++
+ *status = 0;
+
+ hdq_reg_out(hdq_data, OMAP_HDQ_TX_DATA, val);
+@@ -144,18 +155,19 @@ static int hdq_write_byte(struct hdq_data *hdq_data, u8 val, u8 *status)
+ OMAP_HDQ_CTRL_STATUS_DIR | OMAP_HDQ_CTRL_STATUS_GO);
+ /* wait for the TXCOMPLETE bit */
+ ret = wait_event_timeout(hdq_wait_queue,
+- hdq_data->hdq_irqstatus, OMAP_HDQ_TIMEOUT);
++ (hdq_data->hdq_irqstatus & OMAP_HDQ_INT_STATUS_TXCOMPLETE),
++ OMAP_HDQ_TIMEOUT);
++ *status = hdq_reset_irqstatus(hdq_data, OMAP_HDQ_INT_STATUS_TXCOMPLETE);
+ if (ret == 0) {
+ dev_dbg(hdq_data->dev, "TX wait elapsed\n");
+ ret = -ETIMEDOUT;
+ goto out;
+ }
+
+- *status = hdq_data->hdq_irqstatus;
+ /* check irqstatus */
+ if (!(*status & OMAP_HDQ_INT_STATUS_TXCOMPLETE)) {
+ dev_dbg(hdq_data->dev, "timeout waiting for"
+- " TXCOMPLETE/RXCOMPLETE, %x", *status);
++ " TXCOMPLETE/RXCOMPLETE, %x\n", *status);
+ ret = -ETIMEDOUT;
+ goto out;
+ }
+@@ -166,11 +178,12 @@ static int hdq_write_byte(struct hdq_data *hdq_data, u8 val, u8 *status)
+ OMAP_HDQ_FLAG_CLEAR, &tmp_status);
+ if (ret) {
+ dev_dbg(hdq_data->dev, "timeout waiting GO bit"
+- " return to zero, %x", tmp_status);
++ " return to zero, %x\n", tmp_status);
+ }
+
+ out:
+- hdq_reset_irqstatus(hdq_data);
++ mutex_unlock(&hdq_data->hdq_mutex);
++rtn:
+ return ret;
+ }
+
+@@ -181,9 +194,9 @@ static irqreturn_t hdq_isr(int irq, void *_hdq)
+ unsigned long irqflags;
+
+ spin_lock_irqsave(&hdq_data->hdq_spinlock, irqflags);
+- hdq_data->hdq_irqstatus = hdq_reg_in(hdq_data, OMAP_HDQ_INT_STATUS);
++ hdq_data->hdq_irqstatus |= hdq_reg_in(hdq_data, OMAP_HDQ_INT_STATUS);
+ spin_unlock_irqrestore(&hdq_data->hdq_spinlock, irqflags);
+- dev_dbg(hdq_data->dev, "hdq_isr: %x", hdq_data->hdq_irqstatus);
++ dev_dbg(hdq_data->dev, "hdq_isr: %x\n", hdq_data->hdq_irqstatus);
+
+ if (hdq_data->hdq_irqstatus &
+ (OMAP_HDQ_INT_STATUS_TXCOMPLETE | OMAP_HDQ_INT_STATUS_RXCOMPLETE
+@@ -238,18 +251,19 @@ static int omap_hdq_break(struct hdq_data *hdq_data)
+
+ /* wait for the TIMEOUT bit */
+ ret = wait_event_timeout(hdq_wait_queue,
+- hdq_data->hdq_irqstatus, OMAP_HDQ_TIMEOUT);
++ (hdq_data->hdq_irqstatus & OMAP_HDQ_INT_STATUS_TIMEOUT),
++ OMAP_HDQ_TIMEOUT);
++ tmp_status = hdq_reset_irqstatus(hdq_data, OMAP_HDQ_INT_STATUS_TIMEOUT);
+ if (ret == 0) {
+ dev_dbg(hdq_data->dev, "break wait elapsed\n");
+ ret = -EINTR;
+ goto out;
+ }
+
+- tmp_status = hdq_data->hdq_irqstatus;
+ /* check irqstatus */
+ if (!(tmp_status & OMAP_HDQ_INT_STATUS_TIMEOUT)) {
+- dev_dbg(hdq_data->dev, "timeout waiting for TIMEOUT, %x",
+- tmp_status);
++ dev_dbg(hdq_data->dev, "timeout waiting for TIMEOUT, %x\n",
++ tmp_status);
+ ret = -ETIMEDOUT;
+ goto out;
+ }
+@@ -275,10 +289,9 @@ static int omap_hdq_break(struct hdq_data *hdq_data)
+ &tmp_status);
+ if (ret)
+ dev_dbg(hdq_data->dev, "timeout waiting INIT&GO bits"
+- " return to zero, %x", tmp_status);
++ " return to zero, %x\n", tmp_status);
+
+ out:
+- hdq_reset_irqstatus(hdq_data);
+ mutex_unlock(&hdq_data->hdq_mutex);
+ rtn:
+ return ret;
+@@ -309,12 +322,15 @@ static int hdq_read_byte(struct hdq_data *hdq_data, u8 *val)
+ */
+ wait_event_timeout(hdq_wait_queue,
+ (hdq_data->hdq_irqstatus
+- & OMAP_HDQ_INT_STATUS_RXCOMPLETE),
++ & (OMAP_HDQ_INT_STATUS_RXCOMPLETE |
++ OMAP_HDQ_INT_STATUS_TIMEOUT)),
+ OMAP_HDQ_TIMEOUT);
+-
++ status = hdq_reset_irqstatus(hdq_data,
++ OMAP_HDQ_INT_STATUS_RXCOMPLETE |
++ OMAP_HDQ_INT_STATUS_TIMEOUT);
+ hdq_reg_merge(hdq_data, OMAP_HDQ_CTRL_STATUS, 0,
+ OMAP_HDQ_CTRL_STATUS_DIR);
+- status = hdq_data->hdq_irqstatus;
++
+ /* check irqstatus */
+ if (!(status & OMAP_HDQ_INT_STATUS_RXCOMPLETE)) {
+ dev_dbg(hdq_data->dev, "timeout waiting for"
+@@ -322,11 +338,12 @@ static int hdq_read_byte(struct hdq_data *hdq_data, u8 *val)
+ ret = -ETIMEDOUT;
+ goto out;
+ }
++ } else { /* interrupt had occurred before hdq_read_byte was called */
++ hdq_reset_irqstatus(hdq_data, OMAP_HDQ_INT_STATUS_RXCOMPLETE);
+ }
+ /* the data is ready. Read it in! */
+ *val = hdq_reg_in(hdq_data, OMAP_HDQ_RX_DATA);
+ out:
+- hdq_reset_irqstatus(hdq_data);
+ mutex_unlock(&hdq_data->hdq_mutex);
+ rtn:
+ return ret;
+@@ -367,15 +384,15 @@ static u8 omap_w1_triplet(void *_hdq, u8 bdir)
+ (hdq_data->hdq_irqstatus
+ & OMAP_HDQ_INT_STATUS_RXCOMPLETE),
+ OMAP_HDQ_TIMEOUT);
++ /* Must clear irqstatus for another RXCOMPLETE interrupt */
++ hdq_reset_irqstatus(hdq_data, OMAP_HDQ_INT_STATUS_RXCOMPLETE);
++
+ if (err == 0) {
+ dev_dbg(hdq_data->dev, "RX wait elapsed\n");
+ goto out;
+ }
+ id_bit = (hdq_reg_in(_hdq, OMAP_HDQ_RX_DATA) & 0x01);
+
+- /* Must clear irqstatus for another RXCOMPLETE interrupt */
+- hdq_reset_irqstatus(hdq_data);
+-
+ /* read comp_bit */
+ hdq_reg_merge(_hdq, OMAP_HDQ_CTRL_STATUS,
+ ctrl | OMAP_HDQ_CTRL_STATUS_DIR, mask);
+@@ -383,6 +400,9 @@ static u8 omap_w1_triplet(void *_hdq, u8 bdir)
+ (hdq_data->hdq_irqstatus
+ & OMAP_HDQ_INT_STATUS_RXCOMPLETE),
+ OMAP_HDQ_TIMEOUT);
++ /* Must clear irqstatus for another RXCOMPLETE interrupt */
++ hdq_reset_irqstatus(hdq_data, OMAP_HDQ_INT_STATUS_RXCOMPLETE);
++
+ if (err == 0) {
+ dev_dbg(hdq_data->dev, "RX wait elapsed\n");
+ goto out;
+@@ -409,6 +429,9 @@ static u8 omap_w1_triplet(void *_hdq, u8 bdir)
+ (hdq_data->hdq_irqstatus
+ & OMAP_HDQ_INT_STATUS_TXCOMPLETE),
+ OMAP_HDQ_TIMEOUT);
++ /* Must clear irqstatus for another TXCOMPLETE interrupt */
++ hdq_reset_irqstatus(hdq_data, OMAP_HDQ_INT_STATUS_TXCOMPLETE);
++
+ if (err == 0) {
+ dev_dbg(hdq_data->dev, "TX wait elapsed\n");
+ goto out;
+@@ -418,7 +441,6 @@ static u8 omap_w1_triplet(void *_hdq, u8 bdir)
+ OMAP_HDQ_CTRL_STATUS_SINGLE);
+
+ out:
+- hdq_reset_irqstatus(hdq_data);
+ mutex_unlock(&hdq_data->hdq_mutex);
+ rtn:
+ pm_runtime_mark_last_busy(hdq_data->dev);
+@@ -464,7 +486,7 @@ static u8 omap_w1_read_byte(void *_hdq)
+
+ ret = hdq_read_byte(hdq_data, &val);
+ if (ret)
+- ret = -1;
++ val = -1;
+
+ pm_runtime_mark_last_busy(hdq_data->dev);
+ pm_runtime_put_autosuspend(hdq_data->dev);
+diff --git a/fs/btrfs/block-group.c b/fs/btrfs/block-group.c
+index 696f47103cfc..233c5663f233 100644
+--- a/fs/btrfs/block-group.c
++++ b/fs/btrfs/block-group.c
+@@ -1175,7 +1175,7 @@ struct btrfs_trans_handle *btrfs_start_trans_remove_block_group(
+ free_extent_map(em);
+
+ return btrfs_start_transaction_fallback_global_rsv(fs_info->extent_root,
+- num_items, 1);
++ num_items);
+ }
+
+ /*
+diff --git a/fs/btrfs/block-rsv.c b/fs/btrfs/block-rsv.c
+index 27efec8f7c5b..dbba53e712e6 100644
+--- a/fs/btrfs/block-rsv.c
++++ b/fs/btrfs/block-rsv.c
+@@ -5,6 +5,7 @@
+ #include "block-rsv.h"
+ #include "space-info.h"
+ #include "transaction.h"
++#include "block-group.h"
+
+ /*
+ * HOW DO BLOCK RESERVES WORK
+@@ -405,6 +406,8 @@ void btrfs_update_global_block_rsv(struct btrfs_fs_info *fs_info)
+ else
+ block_rsv->full = 0;
+
++ if (block_rsv->size >= sinfo->total_bytes)
++ sinfo->force_alloc = CHUNK_ALLOC_FORCE;
+ spin_unlock(&block_rsv->lock);
+ spin_unlock(&sinfo->lock);
+ }
+diff --git a/fs/btrfs/ctree.h b/fs/btrfs/ctree.h
+index 8aa7b9dac405..196d4511f812 100644
+--- a/fs/btrfs/ctree.h
++++ b/fs/btrfs/ctree.h
+@@ -1146,6 +1146,9 @@ struct btrfs_root {
+ /* Record pairs of swapped blocks for qgroup */
+ struct btrfs_qgroup_swapped_blocks swapped_blocks;
+
++ /* Used only by log trees, when logging csum items */
++ struct extent_io_tree log_csum_range;
++
+ #ifdef CONFIG_BTRFS_FS_RUN_SANITY_TESTS
+ u64 alloc_bytenr;
+ #endif
+@@ -2512,6 +2515,7 @@ enum btrfs_reserve_flush_enum {
+ BTRFS_RESERVE_FLUSH_LIMIT,
+ BTRFS_RESERVE_FLUSH_EVICT,
+ BTRFS_RESERVE_FLUSH_ALL,
++ BTRFS_RESERVE_FLUSH_ALL_STEAL,
+ };
+
+ enum btrfs_flush_state {
+diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
+index d10c7be10f3b..91def9fd9456 100644
+--- a/fs/btrfs/disk-io.c
++++ b/fs/btrfs/disk-io.c
+@@ -1137,9 +1137,12 @@ static void __setup_root(struct btrfs_root *root, struct btrfs_fs_info *fs_info,
+ root->log_transid = 0;
+ root->log_transid_committed = -1;
+ root->last_log_commit = 0;
+- if (!dummy)
++ if (!dummy) {
+ extent_io_tree_init(fs_info, &root->dirty_log_pages,
+ IO_TREE_ROOT_DIRTY_LOG_PAGES, NULL);
++ extent_io_tree_init(fs_info, &root->log_csum_range,
++ IO_TREE_LOG_CSUM_RANGE, NULL);
++ }
+
+ memset(&root->root_key, 0, sizeof(root->root_key));
+ memset(&root->root_item, 0, sizeof(root->root_item));
+diff --git a/fs/btrfs/extent-io-tree.h b/fs/btrfs/extent-io-tree.h
+index b4a7bad3e82e..b6561455b3c4 100644
+--- a/fs/btrfs/extent-io-tree.h
++++ b/fs/btrfs/extent-io-tree.h
+@@ -44,6 +44,7 @@ enum {
+ IO_TREE_TRANS_DIRTY_PAGES,
+ IO_TREE_ROOT_DIRTY_LOG_PAGES,
+ IO_TREE_INODE_FILE_EXTENT,
++ IO_TREE_LOG_CSUM_RANGE,
+ IO_TREE_SELFTEST,
+ };
+
+diff --git a/fs/btrfs/file-item.c b/fs/btrfs/file-item.c
+index b618ad5339ba..a88a8bf4b12c 100644
+--- a/fs/btrfs/file-item.c
++++ b/fs/btrfs/file-item.c
+@@ -887,10 +887,12 @@ again:
+ nritems = btrfs_header_nritems(path->nodes[0]);
+ if (!nritems || (path->slots[0] >= nritems - 1)) {
+ ret = btrfs_next_leaf(root, path);
+- if (ret == 1)
++ if (ret < 0) {
++ goto out;
++ } else if (ret > 0) {
+ found_next = 1;
+- if (ret != 0)
+ goto insert;
++ }
+ slot = path->slots[0];
+ }
+ btrfs_item_key_to_cpu(path->nodes[0], &found_key, slot);
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index 320d1062068d..66dd919fc723 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -49,6 +49,7 @@
+ #include "qgroup.h"
+ #include "delalloc-space.h"
+ #include "block-group.h"
++#include "space-info.h"
+
+ struct btrfs_iget_args {
+ struct btrfs_key *location;
+@@ -1142,7 +1143,7 @@ out_unlock:
+ */
+ if (extent_reserved) {
+ extent_clear_unlock_delalloc(inode, start,
+- start + cur_alloc_size,
++ start + cur_alloc_size - 1,
+ locked_page,
+ clear_bits,
+ page_ops);
+@@ -1355,6 +1356,66 @@ static noinline int csum_exist_in_range(struct btrfs_fs_info *fs_info,
+ return 1;
+ }
+
++static int fallback_to_cow(struct inode *inode, struct page *locked_page,
++ const u64 start, const u64 end,
++ int *page_started, unsigned long *nr_written)
++{
++ const bool is_space_ino = btrfs_is_free_space_inode(BTRFS_I(inode));
++ const u64 range_bytes = end + 1 - start;
++ struct extent_io_tree *io_tree = &BTRFS_I(inode)->io_tree;
++ u64 range_start = start;
++ u64 count;
++
++ /*
++ * If EXTENT_NORESERVE is set it means that when the buffered write was
++ * made we did not have enough available data space and therefore we did
++ * not reserve data space for it, since we thought we could do NOCOW for
++ * respective file range (either there is prealloc extent or the inode
++ * has the NOCOW bit set).
++ *
++ * However when we need to fallback to COW mode (because for example the
++ * block group for the corresponding extent was turned to RO mode by a
++ * scrub or relocation) we need to do the following:
++ *
++ * 1) We increment the bytes_may_use counter of the data space info.
++ * If COW succeeds, it allocates a new data extent and after doing
++ * that it decrements the space info's bytes_may_use counter and
++ * increments its bytes_reserved counter by the same amount (we do
++ * this at btrfs_add_reserved_bytes()). So we need to increment the
++ * bytes_may_use counter to compensate (when space is reserved at
++ * buffered write time, the bytes_may_use counter is incremented);
++ *
++ * 2) We clear the EXTENT_NORESERVE bit from the range. We do this so
++ * that if the COW path fails for any reason, it decrements (through
++ * extent_clear_unlock_delalloc()) the bytes_may_use counter of the
++ * data space info, which we incremented in the step above.
++ *
++ * If we need to fallback to cow and the inode corresponds to a free
++ * space cache inode, we must also increment bytes_may_use of the data
++ * space_info for the same reason. Space caches always get a prealloc
++ * extent for them, however scrub or balance may have set the block
++ * group that contains that extent to RO mode.
++ */
++ count = count_range_bits(io_tree, &range_start, end, range_bytes,
++ EXTENT_NORESERVE, 0);
++ if (count > 0 || is_space_ino) {
++ const u64 bytes = is_space_ino ? range_bytes : count;
++ struct btrfs_fs_info *fs_info = BTRFS_I(inode)->root->fs_info;
++ struct btrfs_space_info *sinfo = fs_info->data_sinfo;
++
++ spin_lock(&sinfo->lock);
++ btrfs_space_info_update_bytes_may_use(fs_info, sinfo, bytes);
++ spin_unlock(&sinfo->lock);
++
++ if (count > 0)
++ clear_extent_bit(io_tree, start, end, EXTENT_NORESERVE,
++ 0, 0, NULL);
++ }
++
++ return cow_file_range(inode, locked_page, start, end, page_started,
++ nr_written, 1);
++}
++
+ /*
+ * when nowcow writeback call back. This checks for snapshots or COW copies
+ * of the extents that exist in the file, and COWs the file as required.
+@@ -1602,9 +1663,9 @@ out_check:
+ * NOCOW, following one which needs to be COW'ed
+ */
+ if (cow_start != (u64)-1) {
+- ret = cow_file_range(inode, locked_page,
+- cow_start, found_key.offset - 1,
+- page_started, nr_written, 1);
++ ret = fallback_to_cow(inode, locked_page, cow_start,
++ found_key.offset - 1,
++ page_started, nr_written);
+ if (ret) {
+ if (nocow)
+ btrfs_dec_nocow_writers(fs_info,
+@@ -1693,8 +1754,8 @@ out_check:
+
+ if (cow_start != (u64)-1) {
+ cur_offset = end;
+- ret = cow_file_range(inode, locked_page, cow_start, end,
+- page_started, nr_written, 1);
++ ret = fallback_to_cow(inode, locked_page, cow_start, end,
++ page_started, nr_written);
+ if (ret)
+ goto error;
+ }
+@@ -3618,7 +3679,7 @@ static struct btrfs_trans_handle *__unlink_start_trans(struct inode *dir)
+ * 1 for the inode ref
+ * 1 for the inode
+ */
+- return btrfs_start_transaction_fallback_global_rsv(root, 5, 5);
++ return btrfs_start_transaction_fallback_global_rsv(root, 5);
+ }
+
+ static int btrfs_unlink(struct inode *dir, struct dentry *dentry)
+@@ -7939,7 +8000,6 @@ static int btrfs_submit_direct_hook(struct btrfs_dio_private *dip)
+
+ /* bio split */
+ ASSERT(geom.len <= INT_MAX);
+- atomic_inc(&dip->pending_bios);
+ do {
+ clone_len = min_t(int, submit_len, geom.len);
+
+@@ -7989,7 +8049,8 @@ submit:
+ if (!status)
+ return 0;
+
+- bio_put(bio);
++ if (bio != orig_bio)
++ bio_put(bio);
+ out_err:
+ dip->errors = 1;
+ /*
+@@ -8030,7 +8091,7 @@ static void btrfs_submit_direct(struct bio *dio_bio, struct inode *inode,
+ bio->bi_private = dip;
+ dip->orig_bio = bio;
+ dip->dio_bio = dio_bio;
+- atomic_set(&dip->pending_bios, 0);
++ atomic_set(&dip->pending_bios, 1);
+ io_bio = btrfs_io_bio(bio);
+ io_bio->logical = file_offset;
+
+diff --git a/fs/btrfs/qgroup.c b/fs/btrfs/qgroup.c
+index c3888fb367e7..5bd4089ad0e1 100644
+--- a/fs/btrfs/qgroup.c
++++ b/fs/btrfs/qgroup.c
+@@ -2622,6 +2622,7 @@ int btrfs_qgroup_inherit(struct btrfs_trans_handle *trans, u64 srcid,
+ struct btrfs_root *quota_root;
+ struct btrfs_qgroup *srcgroup;
+ struct btrfs_qgroup *dstgroup;
++ bool need_rescan = false;
+ u32 level_size = 0;
+ u64 nums;
+
+@@ -2765,6 +2766,13 @@ int btrfs_qgroup_inherit(struct btrfs_trans_handle *trans, u64 srcid,
+ goto unlock;
+ }
+ ++i_qgroups;
++
++ /*
++ * If we're doing a snapshot, and adding the snapshot to a new
++ * qgroup, the numbers are guaranteed to be incorrect.
++ */
++ if (srcid)
++ need_rescan = true;
+ }
+
+ for (i = 0; i < inherit->num_ref_copies; ++i, i_qgroups += 2) {
+@@ -2784,6 +2792,9 @@ int btrfs_qgroup_inherit(struct btrfs_trans_handle *trans, u64 srcid,
+
+ dst->rfer = src->rfer - level_size;
+ dst->rfer_cmpr = src->rfer_cmpr - level_size;
++
++ /* Manually tweaking numbers certainly needs a rescan */
++ need_rescan = true;
+ }
+ for (i = 0; i < inherit->num_excl_copies; ++i, i_qgroups += 2) {
+ struct btrfs_qgroup *src;
+@@ -2802,6 +2813,7 @@ int btrfs_qgroup_inherit(struct btrfs_trans_handle *trans, u64 srcid,
+
+ dst->excl = src->excl + level_size;
+ dst->excl_cmpr = src->excl_cmpr + level_size;
++ need_rescan = true;
+ }
+
+ unlock:
+@@ -2809,6 +2821,8 @@ unlock:
+ out:
+ if (!committing)
+ mutex_unlock(&fs_info->qgroup_ioctl_lock);
++ if (need_rescan)
++ fs_info->qgroup_flags |= BTRFS_QGROUP_STATUS_FLAG_INCONSISTENT;
+ return ret;
+ }
+
+diff --git a/fs/btrfs/relocation.c b/fs/btrfs/relocation.c
+index 03bc7134e8cb..157452a5e110 100644
+--- a/fs/btrfs/relocation.c
++++ b/fs/btrfs/relocation.c
+@@ -2624,12 +2624,10 @@ again:
+ reloc_root = list_entry(reloc_roots.next,
+ struct btrfs_root, root_list);
+
++ root = read_fs_root(fs_info, reloc_root->root_key.offset);
+ if (btrfs_root_refs(&reloc_root->root_item) > 0) {
+- root = read_fs_root(fs_info,
+- reloc_root->root_key.offset);
+ BUG_ON(IS_ERR(root));
+ BUG_ON(root->reloc_root != reloc_root);
+-
+ ret = merge_reloc_root(rc, root);
+ btrfs_put_root(root);
+ if (ret) {
+@@ -2639,6 +2637,14 @@ again:
+ goto out;
+ }
+ } else {
++ if (!IS_ERR(root)) {
++ if (root->reloc_root == reloc_root) {
++ root->reloc_root = NULL;
++ btrfs_put_root(reloc_root);
++ }
++ btrfs_put_root(root);
++ }
++
+ list_del_init(&reloc_root->root_list);
+ /* Don't forget to queue this reloc root for cleanup */
+ list_add_tail(&reloc_root->reloc_dirty_list,
+diff --git a/fs/btrfs/scrub.c b/fs/btrfs/scrub.c
+index adaf8ab694d5..7c50ac5b6876 100644
+--- a/fs/btrfs/scrub.c
++++ b/fs/btrfs/scrub.c
+@@ -3046,7 +3046,8 @@ out:
+ static noinline_for_stack int scrub_stripe(struct scrub_ctx *sctx,
+ struct map_lookup *map,
+ struct btrfs_device *scrub_dev,
+- int num, u64 base, u64 length)
++ int num, u64 base, u64 length,
++ struct btrfs_block_group *cache)
+ {
+ struct btrfs_path *path, *ppath;
+ struct btrfs_fs_info *fs_info = sctx->fs_info;
+@@ -3284,6 +3285,20 @@ static noinline_for_stack int scrub_stripe(struct scrub_ctx *sctx,
+ break;
+ }
+
++ /*
++ * If our block group was removed in the meanwhile, just
++ * stop scrubbing since there is no point in continuing.
++ * Continuing would prevent reusing its device extents
++ * for new block groups for a long time.
++ */
++ spin_lock(&cache->lock);
++ if (cache->removed) {
++ spin_unlock(&cache->lock);
++ ret = 0;
++ goto out;
++ }
++ spin_unlock(&cache->lock);
++
+ extent = btrfs_item_ptr(l, slot,
+ struct btrfs_extent_item);
+ flags = btrfs_extent_flags(l, extent);
+@@ -3457,7 +3472,7 @@ static noinline_for_stack int scrub_chunk(struct scrub_ctx *sctx,
+ if (map->stripes[i].dev->bdev == scrub_dev->bdev &&
+ map->stripes[i].physical == dev_offset) {
+ ret = scrub_stripe(sctx, map, scrub_dev, i,
+- chunk_offset, length);
++ chunk_offset, length, cache);
+ if (ret)
+ goto out;
+ }
+@@ -3554,6 +3569,23 @@ int scrub_enumerate_chunks(struct scrub_ctx *sctx,
+ if (!cache)
+ goto skip;
+
++ /*
++ * Make sure that while we are scrubbing the corresponding block
++ * group doesn't get its logical address and its device extents
++ * reused for another block group, which can possibly be of a
++ * different type and different profile. We do this to prevent
++ * false error detections and crashes due to bogus attempts to
++ * repair extents.
++ */
++ spin_lock(&cache->lock);
++ if (cache->removed) {
++ spin_unlock(&cache->lock);
++ btrfs_put_block_group(cache);
++ goto skip;
++ }
++ btrfs_get_block_group_trimming(cache);
++ spin_unlock(&cache->lock);
++
+ /*
+ * we need call btrfs_inc_block_group_ro() with scrubs_paused,
+ * to avoid deadlock caused by:
+@@ -3609,6 +3641,7 @@ int scrub_enumerate_chunks(struct scrub_ctx *sctx,
+ } else {
+ btrfs_warn(fs_info,
+ "failed setting block group ro: %d", ret);
++ btrfs_put_block_group_trimming(cache);
+ btrfs_put_block_group(cache);
+ scrub_pause_off(fs_info);
+ break;
+@@ -3695,6 +3728,7 @@ int scrub_enumerate_chunks(struct scrub_ctx *sctx,
+ spin_unlock(&cache->lock);
+ }
+
++ btrfs_put_block_group_trimming(cache);
+ btrfs_put_block_group(cache);
+ if (ret)
+ break;
+diff --git a/fs/btrfs/send.c b/fs/btrfs/send.c
+index c5f41bd86765..4f3b8d2bb56b 100644
+--- a/fs/btrfs/send.c
++++ b/fs/btrfs/send.c
+@@ -23,6 +23,7 @@
+ #include "btrfs_inode.h"
+ #include "transaction.h"
+ #include "compression.h"
++#include "xattr.h"
+
+ /*
+ * Maximum number of references an extent can have in order for us to attempt to
+@@ -4545,6 +4546,10 @@ static int __process_new_xattr(int num, struct btrfs_key *di_key,
+ struct fs_path *p;
+ struct posix_acl_xattr_header dummy_acl;
+
++ /* Capabilities are emitted by finish_inode_if_needed */
++ if (!strncmp(name, XATTR_NAME_CAPS, name_len))
++ return 0;
++
+ p = fs_path_alloc();
+ if (!p)
+ return -ENOMEM;
+@@ -5107,6 +5112,64 @@ static int send_extent_data(struct send_ctx *sctx,
+ return 0;
+ }
+
++/*
++ * Search for a capability xattr related to sctx->cur_ino. If the capability is
++ * found, call send_set_xattr function to emit it.
++ *
++ * Return 0 if there isn't a capability, or when the capability was emitted
++ * successfully, or < 0 if an error occurred.
++ */
++static int send_capabilities(struct send_ctx *sctx)
++{
++ struct fs_path *fspath = NULL;
++ struct btrfs_path *path;
++ struct btrfs_dir_item *di;
++ struct extent_buffer *leaf;
++ unsigned long data_ptr;
++ char *buf = NULL;
++ int buf_len;
++ int ret = 0;
++
++ path = alloc_path_for_send();
++ if (!path)
++ return -ENOMEM;
++
++ di = btrfs_lookup_xattr(NULL, sctx->send_root, path, sctx->cur_ino,
++ XATTR_NAME_CAPS, strlen(XATTR_NAME_CAPS), 0);
++ if (!di) {
++ /* There is no xattr for this inode */
++ goto out;
++ } else if (IS_ERR(di)) {
++ ret = PTR_ERR(di);
++ goto out;
++ }
++
++ leaf = path->nodes[0];
++ buf_len = btrfs_dir_data_len(leaf, di);
++
++ fspath = fs_path_alloc();
++ buf = kmalloc(buf_len, GFP_KERNEL);
++ if (!fspath || !buf) {
++ ret = -ENOMEM;
++ goto out;
++ }
++
++ ret = get_cur_path(sctx, sctx->cur_ino, sctx->cur_inode_gen, fspath);
++ if (ret < 0)
++ goto out;
++
++ data_ptr = (unsigned long)(di + 1) + btrfs_dir_name_len(leaf, di);
++ read_extent_buffer(leaf, buf, data_ptr, buf_len);
++
++ ret = send_set_xattr(sctx, fspath, XATTR_NAME_CAPS,
++ strlen(XATTR_NAME_CAPS), buf, buf_len);
++out:
++ kfree(buf);
++ fs_path_free(fspath);
++ btrfs_free_path(path);
++ return ret;
++}
++
+ static int clone_range(struct send_ctx *sctx,
+ struct clone_root *clone_root,
+ const u64 disk_byte,
+@@ -5972,6 +6035,10 @@ static int finish_inode_if_needed(struct send_ctx *sctx, int at_end)
+ goto out;
+ }
+
++ ret = send_capabilities(sctx);
++ if (ret < 0)
++ goto out;
++
+ /*
+ * If other directory inodes depended on our current directory
+ * inode's move/rename, now do their move/rename operations.
+diff --git a/fs/btrfs/space-info.c b/fs/btrfs/space-info.c
+index ff17a4420358..eee6748c49e4 100644
+--- a/fs/btrfs/space-info.c
++++ b/fs/btrfs/space-info.c
+@@ -626,6 +626,7 @@ static int may_commit_transaction(struct btrfs_fs_info *fs_info,
+ struct reserve_ticket *ticket = NULL;
+ struct btrfs_block_rsv *delayed_rsv = &fs_info->delayed_block_rsv;
+ struct btrfs_block_rsv *delayed_refs_rsv = &fs_info->delayed_refs_rsv;
++ struct btrfs_block_rsv *trans_rsv = &fs_info->trans_block_rsv;
+ struct btrfs_trans_handle *trans;
+ u64 bytes_needed;
+ u64 reclaim_bytes = 0;
+@@ -688,6 +689,11 @@ static int may_commit_transaction(struct btrfs_fs_info *fs_info,
+ spin_lock(&delayed_refs_rsv->lock);
+ reclaim_bytes += delayed_refs_rsv->reserved;
+ spin_unlock(&delayed_refs_rsv->lock);
++
++ spin_lock(&trans_rsv->lock);
++ reclaim_bytes += trans_rsv->reserved;
++ spin_unlock(&trans_rsv->lock);
++
+ if (reclaim_bytes >= bytes_needed)
+ goto commit;
+ bytes_needed -= reclaim_bytes;
+@@ -856,6 +862,34 @@ static inline int need_do_async_reclaim(struct btrfs_fs_info *fs_info,
+ !test_bit(BTRFS_FS_STATE_REMOUNTING, &fs_info->fs_state));
+ }
+
++static bool steal_from_global_rsv(struct btrfs_fs_info *fs_info,
++ struct btrfs_space_info *space_info,
++ struct reserve_ticket *ticket)
++{
++ struct btrfs_block_rsv *global_rsv = &fs_info->global_block_rsv;
++ u64 min_bytes;
++
++ if (global_rsv->space_info != space_info)
++ return false;
++
++ spin_lock(&global_rsv->lock);
++ min_bytes = div_factor(global_rsv->size, 5);
++ if (global_rsv->reserved < min_bytes + ticket->bytes) {
++ spin_unlock(&global_rsv->lock);
++ return false;
++ }
++ global_rsv->reserved -= ticket->bytes;
++ ticket->bytes = 0;
++ list_del_init(&ticket->list);
++ wake_up(&ticket->wait);
++ space_info->tickets_id++;
++ if (global_rsv->reserved < global_rsv->size)
++ global_rsv->full = 0;
++ spin_unlock(&global_rsv->lock);
++
++ return true;
++}
++
+ /*
+ * maybe_fail_all_tickets - we've exhausted our flushing, start failing tickets
+ * @fs_info - fs_info for this fs
+@@ -888,6 +922,10 @@ static bool maybe_fail_all_tickets(struct btrfs_fs_info *fs_info,
+ ticket = list_first_entry(&space_info->tickets,
+ struct reserve_ticket, list);
+
++ if (ticket->steal &&
++ steal_from_global_rsv(fs_info, space_info, ticket))
++ return true;
++
+ /*
+ * may_commit_transaction will avoid committing the transaction
+ * if it doesn't feel like the space reclaimed by the commit
+@@ -1104,6 +1142,7 @@ static int handle_reserve_ticket(struct btrfs_fs_info *fs_info,
+
+ switch (flush) {
+ case BTRFS_RESERVE_FLUSH_ALL:
++ case BTRFS_RESERVE_FLUSH_ALL_STEAL:
+ wait_reserve_ticket(fs_info, space_info, ticket);
+ break;
+ case BTRFS_RESERVE_FLUSH_LIMIT:
+@@ -1203,7 +1242,9 @@ static int __reserve_metadata_bytes(struct btrfs_fs_info *fs_info,
+ ticket.error = 0;
+ space_info->reclaim_size += ticket.bytes;
+ init_waitqueue_head(&ticket.wait);
+- if (flush == BTRFS_RESERVE_FLUSH_ALL) {
++ ticket.steal = (flush == BTRFS_RESERVE_FLUSH_ALL_STEAL);
++ if (flush == BTRFS_RESERVE_FLUSH_ALL ||
++ flush == BTRFS_RESERVE_FLUSH_ALL_STEAL) {
+ list_add_tail(&ticket.list, &space_info->tickets);
+ if (!space_info->flush) {
+ space_info->flush = 1;
+diff --git a/fs/btrfs/space-info.h b/fs/btrfs/space-info.h
+index 0a5001ef1481..c3c64019950a 100644
+--- a/fs/btrfs/space-info.h
++++ b/fs/btrfs/space-info.h
+@@ -78,6 +78,7 @@ struct btrfs_space_info {
+ struct reserve_ticket {
+ u64 bytes;
+ int error;
++ bool steal;
+ struct list_head list;
+ wait_queue_head_t wait;
+ };
+diff --git a/fs/btrfs/transaction.c b/fs/btrfs/transaction.c
+index 2d5498136e5e..96eb313a5080 100644
+--- a/fs/btrfs/transaction.c
++++ b/fs/btrfs/transaction.c
+@@ -21,6 +21,7 @@
+ #include "dev-replace.h"
+ #include "qgroup.h"
+ #include "block-group.h"
++#include "space-info.h"
+
+ #define BTRFS_ROOT_TRANS_TAG 0
+
+@@ -523,6 +524,7 @@ start_transaction(struct btrfs_root *root, unsigned int num_items,
+ u64 num_bytes = 0;
+ u64 qgroup_reserved = 0;
+ bool reloc_reserved = false;
++ bool do_chunk_alloc = false;
+ int ret;
+
+ /* Send isn't supposed to start transactions. */
+@@ -563,7 +565,8 @@ start_transaction(struct btrfs_root *root, unsigned int num_items,
+ * refill that amount for whatever is missing in the reserve.
+ */
+ num_bytes = btrfs_calc_insert_metadata_size(fs_info, num_items);
+- if (delayed_refs_rsv->full == 0) {
++ if (flush == BTRFS_RESERVE_FLUSH_ALL &&
++ delayed_refs_rsv->full == 0) {
+ delayed_refs_bytes = num_bytes;
+ num_bytes <<= 1;
+ }
+@@ -584,6 +587,9 @@ start_transaction(struct btrfs_root *root, unsigned int num_items,
+ delayed_refs_bytes);
+ num_bytes -= delayed_refs_bytes;
+ }
++
++ if (rsv->space_info->force_alloc)
++ do_chunk_alloc = true;
+ } else if (num_items == 0 && flush == BTRFS_RESERVE_FLUSH_ALL &&
+ !delayed_refs_rsv->full) {
+ /*
+@@ -665,6 +671,19 @@ got_it:
+ if (!current->journal_info)
+ current->journal_info = h;
+
++ /*
++ * If the space_info is marked ALLOC_FORCE then we'll get upgraded to
++ * ALLOC_FORCE the first run through, and then we won't allocate for
++ * anybody else who races in later. We don't care about the return
++ * value here.
++ */
++ if (do_chunk_alloc && num_bytes) {
++ u64 flags = h->block_rsv->space_info->flags;
++
++ btrfs_chunk_alloc(h, btrfs_get_alloc_profile(fs_info, flags),
++ CHUNK_ALLOC_NO_FORCE);
++ }
++
+ /*
+ * btrfs_record_root_in_trans() needs to alloc new extents, and may
+ * call btrfs_join_transaction() while we're also starting a
+@@ -699,43 +718,10 @@ struct btrfs_trans_handle *btrfs_start_transaction(struct btrfs_root *root,
+
+ struct btrfs_trans_handle *btrfs_start_transaction_fallback_global_rsv(
+ struct btrfs_root *root,
+- unsigned int num_items,
+- int min_factor)
++ unsigned int num_items)
+ {
+- struct btrfs_fs_info *fs_info = root->fs_info;
+- struct btrfs_trans_handle *trans;
+- u64 num_bytes;
+- int ret;
+-
+- /*
+- * We have two callers: unlink and block group removal. The
+- * former should succeed even if we will temporarily exceed
+- * quota and the latter operates on the extent root so
+- * qgroup enforcement is ignored anyway.
+- */
+- trans = start_transaction(root, num_items, TRANS_START,
+- BTRFS_RESERVE_FLUSH_ALL, false);
+- if (!IS_ERR(trans) || PTR_ERR(trans) != -ENOSPC)
+- return trans;
+-
+- trans = btrfs_start_transaction(root, 0);
+- if (IS_ERR(trans))
+- return trans;
+-
+- num_bytes = btrfs_calc_insert_metadata_size(fs_info, num_items);
+- ret = btrfs_cond_migrate_bytes(fs_info, &fs_info->trans_block_rsv,
+- num_bytes, min_factor);
+- if (ret) {
+- btrfs_end_transaction(trans);
+- return ERR_PTR(ret);
+- }
+-
+- trans->block_rsv = &fs_info->trans_block_rsv;
+- trans->bytes_reserved = num_bytes;
+- trace_btrfs_space_reservation(fs_info, "transaction",
+- trans->transid, num_bytes, 1);
+-
+- return trans;
++ return start_transaction(root, num_items, TRANS_START,
++ BTRFS_RESERVE_FLUSH_ALL_STEAL, false);
+ }
+
+ struct btrfs_trans_handle *btrfs_join_transaction(struct btrfs_root *root)
+diff --git a/fs/btrfs/transaction.h b/fs/btrfs/transaction.h
+index 31ae8d273065..bf102e64bfb2 100644
+--- a/fs/btrfs/transaction.h
++++ b/fs/btrfs/transaction.h
+@@ -193,8 +193,7 @@ struct btrfs_trans_handle *btrfs_start_transaction(struct btrfs_root *root,
+ unsigned int num_items);
+ struct btrfs_trans_handle *btrfs_start_transaction_fallback_global_rsv(
+ struct btrfs_root *root,
+- unsigned int num_items,
+- int min_factor);
++ unsigned int num_items);
+ struct btrfs_trans_handle *btrfs_join_transaction(struct btrfs_root *root);
+ struct btrfs_trans_handle *btrfs_join_transaction_spacecache(struct btrfs_root *root);
+ struct btrfs_trans_handle *btrfs_join_transaction_nostart(struct btrfs_root *root);
+diff --git a/fs/btrfs/tree-log.c b/fs/btrfs/tree-log.c
+index 02ebdd9edc19..ea72b9d54ec8 100644
+--- a/fs/btrfs/tree-log.c
++++ b/fs/btrfs/tree-log.c
+@@ -3299,6 +3299,7 @@ static void free_log_tree(struct btrfs_trans_handle *trans,
+
+ clear_extent_bits(&log->dirty_log_pages, 0, (u64)-1,
+ EXTENT_DIRTY | EXTENT_NEW | EXTENT_NEED_WAIT);
++ extent_io_tree_release(&log->log_csum_range);
+ btrfs_put_root(log);
+ }
+
+@@ -3916,8 +3917,20 @@ static int log_csums(struct btrfs_trans_handle *trans,
+ struct btrfs_root *log_root,
+ struct btrfs_ordered_sum *sums)
+ {
++ const u64 lock_end = sums->bytenr + sums->len - 1;
++ struct extent_state *cached_state = NULL;
+ int ret;
+
++ /*
++ * Serialize logging for checksums. This is to avoid racing with the
++ * same checksum being logged by another task that is logging another
++ * file which happens to refer to the same extent as well. Such races
++ * can leave checksum items in the log with overlapping ranges.
++ */
++ ret = lock_extent_bits(&log_root->log_csum_range, sums->bytenr,
++ lock_end, &cached_state);
++ if (ret)
++ return ret;
+ /*
+ * Due to extent cloning, we might have logged a csum item that covers a
+ * subrange of a cloned extent, and later we can end up logging a csum
+@@ -3928,10 +3941,13 @@ static int log_csums(struct btrfs_trans_handle *trans,
+ * trim and adjust) any existing csum items in the log for this range.
+ */
+ ret = btrfs_del_csums(trans, log_root, sums->bytenr, sums->len);
+- if (ret)
+- return ret;
++ if (!ret)
++ ret = btrfs_csum_file_blocks(trans, log_root, sums);
+
+- return btrfs_csum_file_blocks(trans, log_root, sums);
++ unlock_extent_cached(&log_root->log_csum_range, sums->bytenr, lock_end,
++ &cached_state);
++
++ return ret;
+ }
+
+ static noinline int copy_items(struct btrfs_trans_handle *trans,
+diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
+index c1909e5f4506..21c7d3d87827 100644
+--- a/fs/btrfs/volumes.c
++++ b/fs/btrfs/volumes.c
+@@ -1042,6 +1042,8 @@ again:
+ &device->dev_state)) {
+ if (!test_bit(BTRFS_DEV_STATE_REPLACE_TGT,
+ &device->dev_state) &&
++ !test_bit(BTRFS_DEV_STATE_MISSING,
++ &device->dev_state) &&
+ (!latest_dev ||
+ device->generation > latest_dev->generation)) {
+ latest_dev = device;
+@@ -2663,8 +2665,18 @@ int btrfs_init_new_device(struct btrfs_fs_info *fs_info, const char *device_path
+ ret = btrfs_commit_transaction(trans);
+ }
+
+- /* Update ctime/mtime for libblkid */
++ /*
++ * Now that we have written a new super block to this device, check all
++ * other fs_devices list if device_path alienates any other scanned
++ * device.
++ * We can ignore the return value as it typically returns -EINVAL and
++ * only succeeds if the device was an alien.
++ */
++ btrfs_forget_devices(device_path);
++
++ /* Update ctime/mtime for blkid or udev */
+ update_dev_time(device_path);
++
+ return ret;
+
+ error_sysfs:
+diff --git a/fs/ext4/ext4_extents.h b/fs/ext4/ext4_extents.h
+index 1c216fcc202a..44e59881a1f0 100644
+--- a/fs/ext4/ext4_extents.h
++++ b/fs/ext4/ext4_extents.h
+@@ -170,10 +170,13 @@ struct partial_cluster {
+ (EXT_FIRST_EXTENT((__hdr__)) + le16_to_cpu((__hdr__)->eh_entries) - 1)
+ #define EXT_LAST_INDEX(__hdr__) \
+ (EXT_FIRST_INDEX((__hdr__)) + le16_to_cpu((__hdr__)->eh_entries) - 1)
+-#define EXT_MAX_EXTENT(__hdr__) \
+- (EXT_FIRST_EXTENT((__hdr__)) + le16_to_cpu((__hdr__)->eh_max) - 1)
++#define EXT_MAX_EXTENT(__hdr__) \
++ ((le16_to_cpu((__hdr__)->eh_max)) ? \
++ ((EXT_FIRST_EXTENT((__hdr__)) + le16_to_cpu((__hdr__)->eh_max) - 1)) \
++ : 0)
+ #define EXT_MAX_INDEX(__hdr__) \
+- (EXT_FIRST_INDEX((__hdr__)) + le16_to_cpu((__hdr__)->eh_max) - 1)
++ ((le16_to_cpu((__hdr__)->eh_max)) ? \
++ ((EXT_FIRST_INDEX((__hdr__)) + le16_to_cpu((__hdr__)->eh_max) - 1)) : 0)
+
+ static inline struct ext4_extent_header *ext_inode_hdr(struct inode *inode)
+ {
+diff --git a/fs/ext4/fsync.c b/fs/ext4/fsync.c
+index e10206e7f4bb..093c359952cd 100644
+--- a/fs/ext4/fsync.c
++++ b/fs/ext4/fsync.c
+@@ -44,30 +44,28 @@
+ */
+ static int ext4_sync_parent(struct inode *inode)
+ {
+- struct dentry *dentry = NULL;
+- struct inode *next;
++ struct dentry *dentry, *next;
+ int ret = 0;
+
+ if (!ext4_test_inode_state(inode, EXT4_STATE_NEWENTRY))
+ return 0;
+- inode = igrab(inode);
++ dentry = d_find_any_alias(inode);
++ if (!dentry)
++ return 0;
+ while (ext4_test_inode_state(inode, EXT4_STATE_NEWENTRY)) {
+ ext4_clear_inode_state(inode, EXT4_STATE_NEWENTRY);
+- dentry = d_find_any_alias(inode);
+- if (!dentry)
+- break;
+- next = igrab(d_inode(dentry->d_parent));
++
++ next = dget_parent(dentry);
+ dput(dentry);
+- if (!next)
+- break;
+- iput(inode);
+- inode = next;
++ dentry = next;
++ inode = dentry->d_inode;
++
+ /*
+ * The directory inode may have gone through rmdir by now. But
+ * the inode itself and its blocks are still allocated (we hold
+- * a reference to the inode so it didn't go through
+- * ext4_evict_inode()) and so we are safe to flush metadata
+- * blocks and the inode.
++ * a reference to the inode via its dentry), so it didn't go
++ * through ext4_evict_inode() and so we are safe to flush
++ * metadata blocks and the inode.
+ */
+ ret = sync_mapping_buffers(inode->i_mapping);
+ if (ret)
+@@ -76,7 +74,7 @@ static int ext4_sync_parent(struct inode *inode)
+ if (ret)
+ break;
+ }
+- iput(inode);
++ dput(dentry);
+ return ret;
+ }
+
+diff --git a/fs/ext4/ialloc.c b/fs/ext4/ialloc.c
+index 4b8c9a9bdf0c..011bcb8c4770 100644
+--- a/fs/ext4/ialloc.c
++++ b/fs/ext4/ialloc.c
+@@ -1246,6 +1246,7 @@ struct inode *ext4_orphan_get(struct super_block *sb, unsigned long ino)
+ ext4_error_err(sb, -err,
+ "couldn't read orphan inode %lu (err %d)",
+ ino, err);
++ brelse(bitmap_bh);
+ return inode;
+ }
+
+diff --git a/fs/ext4/xattr.c b/fs/ext4/xattr.c
+index 21df43a25328..01ba66373e97 100644
+--- a/fs/ext4/xattr.c
++++ b/fs/ext4/xattr.c
+@@ -1800,8 +1800,11 @@ ext4_xattr_block_find(struct inode *inode, struct ext4_xattr_info *i,
+ if (EXT4_I(inode)->i_file_acl) {
+ /* The inode already has an extended attribute block. */
+ bs->bh = ext4_sb_bread(sb, EXT4_I(inode)->i_file_acl, REQ_PRIO);
+- if (IS_ERR(bs->bh))
+- return PTR_ERR(bs->bh);
++ if (IS_ERR(bs->bh)) {
++ error = PTR_ERR(bs->bh);
++ bs->bh = NULL;
++ return error;
++ }
+ ea_bdebug(bs->bh, "b_count=%d, refcount=%d",
+ atomic_read(&(bs->bh->b_count)),
+ le32_to_cpu(BHDR(bs->bh)->h_refcount));
+diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
+index ba470d5687fe..7c5dd7f666a0 100644
+--- a/fs/f2fs/f2fs.h
++++ b/fs/f2fs/f2fs.h
+@@ -139,6 +139,7 @@ struct f2fs_mount_info {
+ int fs_mode; /* fs mode: LFS or ADAPTIVE */
+ int bggc_mode; /* bggc mode: off, on or sync */
+ bool test_dummy_encryption; /* test dummy encryption */
++ block_t unusable_cap_perc; /* percentage for cap */
+ block_t unusable_cap; /* Amount of space allowed to be
+ * unusable when disabling checkpoint
+ */
+diff --git a/fs/f2fs/inline.c b/fs/f2fs/inline.c
+index 4167e5408151..59a4b7ff11e1 100644
+--- a/fs/f2fs/inline.c
++++ b/fs/f2fs/inline.c
+@@ -559,12 +559,12 @@ int f2fs_try_convert_inline_dir(struct inode *dir, struct dentry *dentry)
+ ipage = f2fs_get_node_page(sbi, dir->i_ino);
+ if (IS_ERR(ipage)) {
+ err = PTR_ERR(ipage);
+- goto out;
++ goto out_fname;
+ }
+
+ if (f2fs_has_enough_room(dir, ipage, &fname)) {
+ f2fs_put_page(ipage, 1);
+- goto out;
++ goto out_fname;
+ }
+
+ inline_dentry = inline_data_addr(dir, ipage);
+@@ -572,6 +572,8 @@ int f2fs_try_convert_inline_dir(struct inode *dir, struct dentry *dentry)
+ err = do_convert_inline_dir(dir, ipage, inline_dentry);
+ if (!err)
+ f2fs_put_page(ipage, 1);
++out_fname:
++ fscrypt_free_filename(&fname);
+ out:
+ f2fs_unlock_op(sbi);
+ return err;
+diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
+index f2dfc21c6abb..56ccb8323e21 100644
+--- a/fs/f2fs/super.c
++++ b/fs/f2fs/super.c
+@@ -284,6 +284,22 @@ static inline void limit_reserve_root(struct f2fs_sb_info *sbi)
+ F2FS_OPTION(sbi).s_resgid));
+ }
+
++static inline void adjust_unusable_cap_perc(struct f2fs_sb_info *sbi)
++{
++ if (!F2FS_OPTION(sbi).unusable_cap_perc)
++ return;
++
++ if (F2FS_OPTION(sbi).unusable_cap_perc == 100)
++ F2FS_OPTION(sbi).unusable_cap = sbi->user_block_count;
++ else
++ F2FS_OPTION(sbi).unusable_cap = (sbi->user_block_count / 100) *
++ F2FS_OPTION(sbi).unusable_cap_perc;
++
++ f2fs_info(sbi, "Adjust unusable cap for checkpoint=disable = %u / %u%%",
++ F2FS_OPTION(sbi).unusable_cap,
++ F2FS_OPTION(sbi).unusable_cap_perc);
++}
++
+ static void init_once(void *foo)
+ {
+ struct f2fs_inode_info *fi = (struct f2fs_inode_info *) foo;
+@@ -795,12 +811,7 @@ static int parse_options(struct super_block *sb, char *options)
+ return -EINVAL;
+ if (arg < 0 || arg > 100)
+ return -EINVAL;
+- if (arg == 100)
+- F2FS_OPTION(sbi).unusable_cap =
+- sbi->user_block_count;
+- else
+- F2FS_OPTION(sbi).unusable_cap =
+- (sbi->user_block_count / 100) * arg;
++ F2FS_OPTION(sbi).unusable_cap_perc = arg;
+ set_opt(sbi, DISABLE_CHECKPOINT);
+ break;
+ case Opt_checkpoint_disable_cap:
+@@ -1845,6 +1856,7 @@ skip:
+ (test_opt(sbi, POSIX_ACL) ? SB_POSIXACL : 0);
+
+ limit_reserve_root(sbi);
++ adjust_unusable_cap_perc(sbi);
+ *flags = (*flags & ~SB_LAZYTIME) | (sb->s_flags & SB_LAZYTIME);
+ return 0;
+ restore_gc:
+@@ -3521,6 +3533,7 @@ try_onemore:
+ sbi->reserved_blocks = 0;
+ sbi->current_reserved_blocks = 0;
+ limit_reserve_root(sbi);
++ adjust_unusable_cap_perc(sbi);
+
+ for (i = 0; i < NR_INODE_TYPE; i++) {
+ INIT_LIST_HEAD(&sbi->inode_list[i]);
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+index f071505e3430..2698e9b08490 100644
+--- a/fs/io_uring.c
++++ b/fs/io_uring.c
+@@ -4106,27 +4106,6 @@ struct io_poll_table {
+ int error;
+ };
+
+-static void __io_queue_proc(struct io_poll_iocb *poll, struct io_poll_table *pt,
+- struct wait_queue_head *head)
+-{
+- if (unlikely(poll->head)) {
+- pt->error = -EINVAL;
+- return;
+- }
+-
+- pt->error = 0;
+- poll->head = head;
+- add_wait_queue(head, &poll->wait);
+-}
+-
+-static void io_async_queue_proc(struct file *file, struct wait_queue_head *head,
+- struct poll_table_struct *p)
+-{
+- struct io_poll_table *pt = container_of(p, struct io_poll_table, pt);
+-
+- __io_queue_proc(&pt->req->apoll->poll, pt, head);
+-}
+-
+ static int __io_async_wake(struct io_kiocb *req, struct io_poll_iocb *poll,
+ __poll_t mask, task_work_func_t func)
+ {
+@@ -4180,6 +4159,144 @@ static bool io_poll_rewait(struct io_kiocb *req, struct io_poll_iocb *poll)
+ return false;
+ }
+
++static void io_poll_remove_double(struct io_kiocb *req)
++{
++ struct io_poll_iocb *poll = (struct io_poll_iocb *) req->io;
++
++ lockdep_assert_held(&req->ctx->completion_lock);
++
++ if (poll && poll->head) {
++ struct wait_queue_head *head = poll->head;
++
++ spin_lock(&head->lock);
++ list_del_init(&poll->wait.entry);
++ if (poll->wait.private)
++ refcount_dec(&req->refs);
++ poll->head = NULL;
++ spin_unlock(&head->lock);
++ }
++}
++
++static void io_poll_complete(struct io_kiocb *req, __poll_t mask, int error)
++{
++ struct io_ring_ctx *ctx = req->ctx;
++
++ io_poll_remove_double(req);
++ req->poll.done = true;
++ io_cqring_fill_event(req, error ? error : mangle_poll(mask));
++ io_commit_cqring(ctx);
++}
++
++static void io_poll_task_handler(struct io_kiocb *req, struct io_kiocb **nxt)
++{
++ struct io_ring_ctx *ctx = req->ctx;
++
++ if (io_poll_rewait(req, &req->poll)) {
++ spin_unlock_irq(&ctx->completion_lock);
++ return;
++ }
++
++ hash_del(&req->hash_node);
++ io_poll_complete(req, req->result, 0);
++ req->flags |= REQ_F_COMP_LOCKED;
++ io_put_req_find_next(req, nxt);
++ spin_unlock_irq(&ctx->completion_lock);
++
++ io_cqring_ev_posted(ctx);
++}
++
++static void io_poll_task_func(struct callback_head *cb)
++{
++ struct io_kiocb *req = container_of(cb, struct io_kiocb, task_work);
++ struct io_kiocb *nxt = NULL;
++
++ io_poll_task_handler(req, &nxt);
++ if (nxt) {
++ struct io_ring_ctx *ctx = nxt->ctx;
++
++ mutex_lock(&ctx->uring_lock);
++ __io_queue_sqe(nxt, NULL);
++ mutex_unlock(&ctx->uring_lock);
++ }
++}
++
++static int io_poll_double_wake(struct wait_queue_entry *wait, unsigned mode,
++ int sync, void *key)
++{
++ struct io_kiocb *req = wait->private;
++ struct io_poll_iocb *poll = (struct io_poll_iocb *) req->io;
++ __poll_t mask = key_to_poll(key);
++
++ /* for instances that support it check for an event match first: */
++ if (mask && !(mask & poll->events))
++ return 0;
++
++ if (req->poll.head) {
++ bool done;
++
++ spin_lock(&req->poll.head->lock);
++ done = list_empty(&req->poll.wait.entry);
++ if (!done)
++ list_del_init(&req->poll.wait.entry);
++ spin_unlock(&req->poll.head->lock);
++ if (!done)
++ __io_async_wake(req, poll, mask, io_poll_task_func);
++ }
++ refcount_dec(&req->refs);
++ return 1;
++}
++
++static void io_init_poll_iocb(struct io_poll_iocb *poll, __poll_t events,
++ wait_queue_func_t wake_func)
++{
++ poll->head = NULL;
++ poll->done = false;
++ poll->canceled = false;
++ poll->events = events;
++ INIT_LIST_HEAD(&poll->wait.entry);
++ init_waitqueue_func_entry(&poll->wait, wake_func);
++}
++
++static void __io_queue_proc(struct io_poll_iocb *poll, struct io_poll_table *pt,
++ struct wait_queue_head *head)
++{
++ struct io_kiocb *req = pt->req;
++
++ /*
++ * If poll->head is already set, it's because the file being polled
++ * uses multiple waitqueues for poll handling (eg one for read, one
++ * for write). Setup a separate io_poll_iocb if this happens.
++ */
++ if (unlikely(poll->head)) {
++ /* already have a 2nd entry, fail a third attempt */
++ if (req->io) {
++ pt->error = -EINVAL;
++ return;
++ }
++ poll = kmalloc(sizeof(*poll), GFP_ATOMIC);
++ if (!poll) {
++ pt->error = -ENOMEM;
++ return;
++ }
++ io_init_poll_iocb(poll, req->poll.events, io_poll_double_wake);
++ refcount_inc(&req->refs);
++ poll->wait.private = req;
++ req->io = (void *) poll;
++ }
++
++ pt->error = 0;
++ poll->head = head;
++ add_wait_queue(head, &poll->wait);
++}
++
++static void io_async_queue_proc(struct file *file, struct wait_queue_head *head,
++ struct poll_table_struct *p)
++{
++ struct io_poll_table *pt = container_of(p, struct io_poll_table, pt);
++
++ __io_queue_proc(&pt->req->apoll->poll, pt, head);
++}
++
+ static void io_async_task_func(struct callback_head *cb)
+ {
+ struct io_kiocb *req = container_of(cb, struct io_kiocb, task_work);
+@@ -4255,18 +4372,13 @@ static __poll_t __io_arm_poll_handler(struct io_kiocb *req,
+ bool cancel = false;
+
+ poll->file = req->file;
+- poll->head = NULL;
+- poll->done = poll->canceled = false;
+- poll->events = mask;
++ io_init_poll_iocb(poll, mask, wake_func);
++ poll->wait.private = req;
+
+ ipt->pt._key = mask;
+ ipt->req = req;
+ ipt->error = -EINVAL;
+
+- INIT_LIST_HEAD(&poll->wait.entry);
+- init_waitqueue_func_entry(&poll->wait, wake_func);
+- poll->wait.private = req;
+-
+ mask = vfs_poll(req->file, &ipt->pt) & poll->events;
+
+ spin_lock_irq(&ctx->completion_lock);
+@@ -4297,6 +4409,7 @@ static bool io_arm_poll_handler(struct io_kiocb *req)
+ struct async_poll *apoll;
+ struct io_poll_table ipt;
+ __poll_t mask, ret;
++ bool had_io;
+
+ if (!req->file || !file_can_poll(req->file))
+ return false;
+@@ -4311,6 +4424,7 @@ static bool io_arm_poll_handler(struct io_kiocb *req)
+
+ req->flags |= REQ_F_POLLED;
+ memcpy(&apoll->work, &req->work, sizeof(req->work));
++ had_io = req->io != NULL;
+
+ get_task_struct(current);
+ req->task = current;
+@@ -4330,7 +4444,9 @@ static bool io_arm_poll_handler(struct io_kiocb *req)
+ io_async_wake);
+ if (ret) {
+ ipt.error = 0;
+- apoll->poll.done = true;
++ /* only remove double add if we did it here */
++ if (!had_io)
++ io_poll_remove_double(req);
+ spin_unlock_irq(&ctx->completion_lock);
+ memcpy(&req->work, &apoll->work, sizeof(req->work));
+ kfree(apoll);
+@@ -4354,32 +4470,32 @@ static bool __io_poll_remove_one(struct io_kiocb *req,
+ do_complete = true;
+ }
+ spin_unlock(&poll->head->lock);
++ hash_del(&req->hash_node);
+ return do_complete;
+ }
+
+ static bool io_poll_remove_one(struct io_kiocb *req)
+ {
+- struct async_poll *apoll = NULL;
+ bool do_complete;
+
+ if (req->opcode == IORING_OP_POLL_ADD) {
++ io_poll_remove_double(req);
+ do_complete = __io_poll_remove_one(req, &req->poll);
+ } else {
+- apoll = req->apoll;
++ struct async_poll *apoll = req->apoll;
++
+ /* non-poll requests have submit ref still */
+- do_complete = __io_poll_remove_one(req, &req->apoll->poll);
+- if (do_complete)
++ do_complete = __io_poll_remove_one(req, &apoll->poll);
++ if (do_complete) {
+ io_put_req(req);
+- }
+-
+- hash_del(&req->hash_node);
+-
+- if (do_complete && apoll) {
+- /*
+- * restore ->work because we need to call io_req_work_drop_env.
+- */
+- memcpy(&req->work, &apoll->work, sizeof(req->work));
+- kfree(apoll);
++ /*
++ * restore ->work because we will call
++ * io_req_work_drop_env below when dropping the
++ * final reference.
++ */
++ memcpy(&req->work, &apoll->work, sizeof(req->work));
++ kfree(apoll);
++ }
+ }
+
+ if (do_complete) {
+@@ -4464,49 +4580,6 @@ static int io_poll_remove(struct io_kiocb *req)
+ return 0;
+ }
+
+-static void io_poll_complete(struct io_kiocb *req, __poll_t mask, int error)
+-{
+- struct io_ring_ctx *ctx = req->ctx;
+-
+- req->poll.done = true;
+- io_cqring_fill_event(req, error ? error : mangle_poll(mask));
+- io_commit_cqring(ctx);
+-}
+-
+-static void io_poll_task_handler(struct io_kiocb *req, struct io_kiocb **nxt)
+-{
+- struct io_ring_ctx *ctx = req->ctx;
+- struct io_poll_iocb *poll = &req->poll;
+-
+- if (io_poll_rewait(req, poll)) {
+- spin_unlock_irq(&ctx->completion_lock);
+- return;
+- }
+-
+- hash_del(&req->hash_node);
+- io_poll_complete(req, req->result, 0);
+- req->flags |= REQ_F_COMP_LOCKED;
+- io_put_req_find_next(req, nxt);
+- spin_unlock_irq(&ctx->completion_lock);
+-
+- io_cqring_ev_posted(ctx);
+-}
+-
+-static void io_poll_task_func(struct callback_head *cb)
+-{
+- struct io_kiocb *req = container_of(cb, struct io_kiocb, task_work);
+- struct io_kiocb *nxt = NULL;
+-
+- io_poll_task_handler(req, &nxt);
+- if (nxt) {
+- struct io_ring_ctx *ctx = nxt->ctx;
+-
+- mutex_lock(&ctx->uring_lock);
+- __io_queue_sqe(nxt, NULL);
+- mutex_unlock(&ctx->uring_lock);
+- }
+-}
+-
+ static int io_poll_wake(struct wait_queue_entry *wait, unsigned mode, int sync,
+ void *key)
+ {
+@@ -7404,10 +7477,11 @@ static void io_uring_cancel_files(struct io_ring_ctx *ctx,
+ finish_wait(&ctx->inflight_wait, &wait);
+ continue;
+ }
++ } else {
++ io_wq_cancel_work(ctx->io_wq, &cancel_req->work);
++ io_put_req(cancel_req);
+ }
+
+- io_wq_cancel_work(ctx->io_wq, &cancel_req->work);
+- io_put_req(cancel_req);
+ schedule();
+ finish_wait(&ctx->inflight_wait, &wait);
+ }
+diff --git a/fs/jbd2/transaction.c b/fs/jbd2/transaction.c
+index 3dccc23cf010..e91aad3637a2 100644
+--- a/fs/jbd2/transaction.c
++++ b/fs/jbd2/transaction.c
+@@ -541,17 +541,24 @@ handle_t *jbd2_journal_start(journal_t *journal, int nblocks)
+ }
+ EXPORT_SYMBOL(jbd2_journal_start);
+
+-static void __jbd2_journal_unreserve_handle(handle_t *handle)
++static void __jbd2_journal_unreserve_handle(handle_t *handle, transaction_t *t)
+ {
+ journal_t *journal = handle->h_journal;
+
+ WARN_ON(!handle->h_reserved);
+ sub_reserved_credits(journal, handle->h_total_credits);
++ if (t)
++ atomic_sub(handle->h_total_credits, &t->t_outstanding_credits);
+ }
+
+ void jbd2_journal_free_reserved(handle_t *handle)
+ {
+- __jbd2_journal_unreserve_handle(handle);
++ journal_t *journal = handle->h_journal;
++
++ /* Get j_state_lock to pin running transaction if it exists */
++ read_lock(&journal->j_state_lock);
++ __jbd2_journal_unreserve_handle(handle, journal->j_running_transaction);
++ read_unlock(&journal->j_state_lock);
+ jbd2_free_handle(handle);
+ }
+ EXPORT_SYMBOL(jbd2_journal_free_reserved);
+@@ -722,7 +729,8 @@ static void stop_this_handle(handle_t *handle)
+ atomic_sub(handle->h_total_credits,
+ &transaction->t_outstanding_credits);
+ if (handle->h_rsv_handle)
+- __jbd2_journal_unreserve_handle(handle->h_rsv_handle);
++ __jbd2_journal_unreserve_handle(handle->h_rsv_handle,
++ transaction);
+ if (atomic_dec_and_test(&transaction->t_updates))
+ wake_up(&journal->j_wait_updates);
+
+diff --git a/fs/xfs/kmem.h b/fs/xfs/kmem.h
+index 6143117770e9..11623489b769 100644
+--- a/fs/xfs/kmem.h
++++ b/fs/xfs/kmem.h
+@@ -19,6 +19,7 @@ typedef unsigned __bitwise xfs_km_flags_t;
+ #define KM_NOFS ((__force xfs_km_flags_t)0x0004u)
+ #define KM_MAYFAIL ((__force xfs_km_flags_t)0x0008u)
+ #define KM_ZERO ((__force xfs_km_flags_t)0x0010u)
++#define KM_NOLOCKDEP ((__force xfs_km_flags_t)0x0020u)
+
+ /*
+ * We use a special process flag to avoid recursive callbacks into
+@@ -30,7 +31,7 @@ kmem_flags_convert(xfs_km_flags_t flags)
+ {
+ gfp_t lflags;
+
+- BUG_ON(flags & ~(KM_NOFS|KM_MAYFAIL|KM_ZERO));
++ BUG_ON(flags & ~(KM_NOFS | KM_MAYFAIL | KM_ZERO | KM_NOLOCKDEP));
+
+ lflags = GFP_KERNEL | __GFP_NOWARN;
+ if (flags & KM_NOFS)
+@@ -49,6 +50,9 @@ kmem_flags_convert(xfs_km_flags_t flags)
+ if (flags & KM_ZERO)
+ lflags |= __GFP_ZERO;
+
++ if (flags & KM_NOLOCKDEP)
++ lflags |= __GFP_NOLOCKDEP;
++
+ return lflags;
+ }
+
+diff --git a/fs/xfs/libxfs/xfs_attr_leaf.c b/fs/xfs/libxfs/xfs_attr_leaf.c
+index 863444e2dda7..5d0b55281f9d 100644
+--- a/fs/xfs/libxfs/xfs_attr_leaf.c
++++ b/fs/xfs/libxfs/xfs_attr_leaf.c
+@@ -308,14 +308,6 @@ xfs_attr3_leaf_verify(
+ if (fa)
+ return fa;
+
+- /*
+- * In recovery there is a transient state where count == 0 is valid
+- * because we may have transitioned an empty shortform attr to a leaf
+- * if the attr didn't fit in shortform.
+- */
+- if (!xfs_log_in_recovery(mp) && ichdr.count == 0)
+- return __this_address;
+-
+ /*
+ * firstused is the block offset of the first name info structure.
+ * Make sure it doesn't go off the block or crash into the header.
+@@ -331,6 +323,13 @@ xfs_attr3_leaf_verify(
+ (char *)bp->b_addr + ichdr.firstused)
+ return __this_address;
+
++ /*
++ * NOTE: This verifier historically failed empty leaf buffers because
++ * we expect the fork to be in another format. Empty attr fork format
++ * conversions are possible during xattr set, however, and format
++ * conversion is not atomic with the xattr set that triggers it. We
++ * cannot assume leaf blocks are non-empty until that is addressed.
++ */
+ buf_end = (char *)bp->b_addr + mp->m_attr_geo->blksize;
+ for (i = 0, ent = entries; i < ichdr.count; ent++, i++) {
+ fa = xfs_attr3_leaf_verify_entry(mp, buf_end, leaf, &ichdr,
+@@ -489,7 +488,7 @@ xfs_attr_copy_value(
+ }
+
+ if (!args->value) {
+- args->value = kmem_alloc_large(valuelen, 0);
++ args->value = kmem_alloc_large(valuelen, KM_NOLOCKDEP);
+ if (!args->value)
+ return -ENOMEM;
+ }
+diff --git a/fs/xfs/xfs_bmap_util.c b/fs/xfs/xfs_bmap_util.c
+index 4f800f7fe888..cc23a3e23e2d 100644
+--- a/fs/xfs/xfs_bmap_util.c
++++ b/fs/xfs/xfs_bmap_util.c
+@@ -1606,7 +1606,7 @@ xfs_swap_extents(
+ if (xfs_inode_has_cow_data(tip)) {
+ error = xfs_reflink_cancel_cow_range(tip, 0, NULLFILEOFF, true);
+ if (error)
+- return error;
++ goto out_unlock;
+ }
+
+ /*
+diff --git a/fs/xfs/xfs_buf.c b/fs/xfs/xfs_buf.c
+index 9ec3eaf1c618..afa73a19caa1 100644
+--- a/fs/xfs/xfs_buf.c
++++ b/fs/xfs/xfs_buf.c
+@@ -1197,8 +1197,10 @@ xfs_buf_ioend(
+ bp->b_ops->verify_read(bp);
+ }
+
+- if (!bp->b_error)
++ if (!bp->b_error) {
++ bp->b_flags &= ~XBF_WRITE_FAIL;
+ bp->b_flags |= XBF_DONE;
++ }
+
+ if (bp->b_iodone)
+ (*(bp->b_iodone))(bp);
+@@ -1258,7 +1260,7 @@ xfs_bwrite(
+
+ bp->b_flags |= XBF_WRITE;
+ bp->b_flags &= ~(XBF_ASYNC | XBF_READ | _XBF_DELWRI_Q |
+- XBF_WRITE_FAIL | XBF_DONE);
++ XBF_DONE);
+
+ error = xfs_buf_submit(bp);
+ if (error)
+@@ -1983,7 +1985,7 @@ xfs_buf_delwri_submit_buffers(
+ * synchronously. Otherwise, drop the buffer from the delwri
+ * queue and submit async.
+ */
+- bp->b_flags &= ~(_XBF_DELWRI_Q | XBF_WRITE_FAIL);
++ bp->b_flags &= ~_XBF_DELWRI_Q;
+ bp->b_flags |= XBF_WRITE;
+ if (wait_list) {
+ bp->b_flags &= ~XBF_ASYNC;
+diff --git a/fs/xfs/xfs_dquot.c b/fs/xfs/xfs_dquot.c
+index af2c8e5ceea0..265feb62290d 100644
+--- a/fs/xfs/xfs_dquot.c
++++ b/fs/xfs/xfs_dquot.c
+@@ -1116,13 +1116,12 @@ xfs_qm_dqflush(
+ dqb = bp->b_addr + dqp->q_bufoffset;
+ ddqp = &dqb->dd_diskdq;
+
+- /*
+- * A simple sanity check in case we got a corrupted dquot.
+- */
+- fa = xfs_dqblk_verify(mp, dqb, be32_to_cpu(ddqp->d_id), 0);
++ /* sanity check the in-core structure before we flush */
++ fa = xfs_dquot_verify(mp, &dqp->q_core, be32_to_cpu(dqp->q_core.d_id),
++ 0);
+ if (fa) {
+ xfs_alert(mp, "corrupt dquot ID 0x%x in memory at %pS",
+- be32_to_cpu(ddqp->d_id), fa);
++ be32_to_cpu(dqp->q_core.d_id), fa);
+ xfs_buf_relse(bp);
+ xfs_dqfunlock(dqp);
+ xfs_force_shutdown(mp, SHUTDOWN_CORRUPT_INCORE);
+diff --git a/include/linux/intel-iommu.h b/include/linux/intel-iommu.h
+index 980234ae0312..de23fb95fe91 100644
+--- a/include/linux/intel-iommu.h
++++ b/include/linux/intel-iommu.h
+@@ -571,6 +571,7 @@ struct device_domain_info {
+ struct list_head auxiliary_domains; /* auxiliary domains
+ * attached to this device
+ */
++ u32 segment; /* PCI segment number */
+ u8 bus; /* PCI bus number */
+ u8 devfn; /* PCI devfn number */
+ u16 pfsid; /* SRIOV physical function source ID */
+diff --git a/include/linux/kgdb.h b/include/linux/kgdb.h
+index b072aeb1fd78..4d6fe87fd38f 100644
+--- a/include/linux/kgdb.h
++++ b/include/linux/kgdb.h
+@@ -323,7 +323,7 @@ extern void gdbstub_exit(int status);
+ extern int kgdb_single_step;
+ extern atomic_t kgdb_active;
+ #define in_dbg_master() \
+- (raw_smp_processor_id() == atomic_read(&kgdb_active))
++ (irqs_disabled() && (smp_processor_id() == atomic_read(&kgdb_active)))
+ extern bool dbg_is_early;
+ extern void __init dbg_late_init(void);
+ extern void kgdb_panic(const char *msg);
+diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
+index 1b9de7d220fb..6196fd16b7f4 100644
+--- a/include/linux/mmzone.h
++++ b/include/linux/mmzone.h
+@@ -678,6 +678,8 @@ typedef struct pglist_data {
+ /*
+ * Must be held any time you expect node_start_pfn,
+ * node_present_pages, node_spanned_pages or nr_zones to stay constant.
++ * Also synchronizes pgdat->first_deferred_pfn during deferred page
++ * init.
+ *
+ * pgdat_resize_lock() and pgdat_resize_unlock() are provided to
+ * manipulate node_size_lock without checking for CONFIG_MEMORY_HOTPLUG
+diff --git a/include/linux/pci_ids.h b/include/linux/pci_ids.h
+index 1dfc4e1dcb94..0ad57693f392 100644
+--- a/include/linux/pci_ids.h
++++ b/include/linux/pci_ids.h
+@@ -550,6 +550,7 @@
+ #define PCI_DEVICE_ID_AMD_17H_DF_F3 0x1463
+ #define PCI_DEVICE_ID_AMD_17H_M10H_DF_F3 0x15eb
+ #define PCI_DEVICE_ID_AMD_17H_M30H_DF_F3 0x1493
++#define PCI_DEVICE_ID_AMD_17H_M60H_DF_F3 0x144b
+ #define PCI_DEVICE_ID_AMD_17H_M70H_DF_F3 0x1443
+ #define PCI_DEVICE_ID_AMD_19H_DF_F3 0x1653
+ #define PCI_DEVICE_ID_AMD_CNB17H_F3 0x1703
+@@ -1832,6 +1833,12 @@
+ #define PCI_VENDOR_ID_NVIDIA_SGS 0x12d2
+ #define PCI_DEVICE_ID_NVIDIA_SGS_RIVA128 0x0018
+
++#define PCI_VENDOR_ID_PERICOM 0x12D8
++#define PCI_DEVICE_ID_PERICOM_PI7C9X7951 0x7951
++#define PCI_DEVICE_ID_PERICOM_PI7C9X7952 0x7952
++#define PCI_DEVICE_ID_PERICOM_PI7C9X7954 0x7954
++#define PCI_DEVICE_ID_PERICOM_PI7C9X7958 0x7958
++
+ #define PCI_SUBVENDOR_ID_CHASE_PCIFAST 0x12E0
+ #define PCI_SUBDEVICE_ID_CHASE_PCIFAST4 0x0031
+ #define PCI_SUBDEVICE_ID_CHASE_PCIFAST8 0x0021
+diff --git a/include/linux/property.h b/include/linux/property.h
+index d86de017c689..0d4099b4ce1f 100644
+--- a/include/linux/property.h
++++ b/include/linux/property.h
+@@ -441,6 +441,7 @@ int software_node_register_nodes(const struct software_node *nodes);
+ void software_node_unregister_nodes(const struct software_node *nodes);
+
+ int software_node_register(const struct software_node *node);
++void software_node_unregister(const struct software_node *node);
+
+ int software_node_notify(struct device *dev, unsigned long action);
+
+diff --git a/include/linux/sched/mm.h b/include/linux/sched/mm.h
+index c49257a3b510..a132d875d351 100644
+--- a/include/linux/sched/mm.h
++++ b/include/linux/sched/mm.h
+@@ -49,6 +49,8 @@ static inline void mmdrop(struct mm_struct *mm)
+ __mmdrop(mm);
+ }
+
++void mmdrop(struct mm_struct *mm);
++
+ /*
+ * This has to be called after a get_task_mm()/mmget_not_zero()
+ * followed by taking the mmap_sem for writing before modifying the
+diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
+index 3000c526f552..7e737a94bc63 100644
+--- a/include/linux/skbuff.h
++++ b/include/linux/skbuff.h
+@@ -3945,6 +3945,14 @@ static inline void __skb_incr_checksum_unnecessary(struct sk_buff *skb)
+ }
+ }
+
++static inline void __skb_reset_checksum_unnecessary(struct sk_buff *skb)
++{
++ if (skb->ip_summed == CHECKSUM_UNNECESSARY) {
++ skb->ip_summed = CHECKSUM_NONE;
++ skb->csum_level = 0;
++ }
++}
++
+ /* Check if we need to perform checksum complete validation.
+ *
+ * Returns true if checksum complete is needed, false otherwise
+diff --git a/include/linux/skmsg.h b/include/linux/skmsg.h
+index ad31c9fb7158..08674cd14d5a 100644
+--- a/include/linux/skmsg.h
++++ b/include/linux/skmsg.h
+@@ -437,4 +437,12 @@ static inline void psock_progs_drop(struct sk_psock_progs *progs)
+ psock_set_prog(&progs->skb_verdict, NULL);
+ }
+
++int sk_psock_tls_strp_read(struct sk_psock *psock, struct sk_buff *skb);
++
++static inline bool sk_psock_strp_enabled(struct sk_psock *psock)
++{
++ if (!psock)
++ return false;
++ return psock->parser.enabled;
++}
+ #endif /* _LINUX_SKMSG_H */
+diff --git a/include/linux/string.h b/include/linux/string.h
+index 6dfbb2efa815..9b7a0632e87a 100644
+--- a/include/linux/string.h
++++ b/include/linux/string.h
+@@ -272,6 +272,31 @@ void __read_overflow3(void) __compiletime_error("detected read beyond size of ob
+ void __write_overflow(void) __compiletime_error("detected write beyond size of object passed as 1st parameter");
+
+ #if !defined(__NO_FORTIFY) && defined(__OPTIMIZE__) && defined(CONFIG_FORTIFY_SOURCE)
++
++#ifdef CONFIG_KASAN
++extern void *__underlying_memchr(const void *p, int c, __kernel_size_t size) __RENAME(memchr);
++extern int __underlying_memcmp(const void *p, const void *q, __kernel_size_t size) __RENAME(memcmp);
++extern void *__underlying_memcpy(void *p, const void *q, __kernel_size_t size) __RENAME(memcpy);
++extern void *__underlying_memmove(void *p, const void *q, __kernel_size_t size) __RENAME(memmove);
++extern void *__underlying_memset(void *p, int c, __kernel_size_t size) __RENAME(memset);
++extern char *__underlying_strcat(char *p, const char *q) __RENAME(strcat);
++extern char *__underlying_strcpy(char *p, const char *q) __RENAME(strcpy);
++extern __kernel_size_t __underlying_strlen(const char *p) __RENAME(strlen);
++extern char *__underlying_strncat(char *p, const char *q, __kernel_size_t count) __RENAME(strncat);
++extern char *__underlying_strncpy(char *p, const char *q, __kernel_size_t size) __RENAME(strncpy);
++#else
++#define __underlying_memchr __builtin_memchr
++#define __underlying_memcmp __builtin_memcmp
++#define __underlying_memcpy __builtin_memcpy
++#define __underlying_memmove __builtin_memmove
++#define __underlying_memset __builtin_memset
++#define __underlying_strcat __builtin_strcat
++#define __underlying_strcpy __builtin_strcpy
++#define __underlying_strlen __builtin_strlen
++#define __underlying_strncat __builtin_strncat
++#define __underlying_strncpy __builtin_strncpy
++#endif
++
+ __FORTIFY_INLINE char *strncpy(char *p, const char *q, __kernel_size_t size)
+ {
+ size_t p_size = __builtin_object_size(p, 0);
+@@ -279,14 +304,14 @@ __FORTIFY_INLINE char *strncpy(char *p, const char *q, __kernel_size_t size)
+ __write_overflow();
+ if (p_size < size)
+ fortify_panic(__func__);
+- return __builtin_strncpy(p, q, size);
++ return __underlying_strncpy(p, q, size);
+ }
+
+ __FORTIFY_INLINE char *strcat(char *p, const char *q)
+ {
+ size_t p_size = __builtin_object_size(p, 0);
+ if (p_size == (size_t)-1)
+- return __builtin_strcat(p, q);
++ return __underlying_strcat(p, q);
+ if (strlcat(p, q, p_size) >= p_size)
+ fortify_panic(__func__);
+ return p;
+@@ -300,7 +325,7 @@ __FORTIFY_INLINE __kernel_size_t strlen(const char *p)
+ /* Work around gcc excess stack consumption issue */
+ if (p_size == (size_t)-1 ||
+ (__builtin_constant_p(p[p_size - 1]) && p[p_size - 1] == '\0'))
+- return __builtin_strlen(p);
++ return __underlying_strlen(p);
+ ret = strnlen(p, p_size);
+ if (p_size <= ret)
+ fortify_panic(__func__);
+@@ -333,7 +358,7 @@ __FORTIFY_INLINE size_t strlcpy(char *p, const char *q, size_t size)
+ __write_overflow();
+ if (len >= p_size)
+ fortify_panic(__func__);
+- __builtin_memcpy(p, q, len);
++ __underlying_memcpy(p, q, len);
+ p[len] = '\0';
+ }
+ return ret;
+@@ -346,12 +371,12 @@ __FORTIFY_INLINE char *strncat(char *p, const char *q, __kernel_size_t count)
+ size_t p_size = __builtin_object_size(p, 0);
+ size_t q_size = __builtin_object_size(q, 0);
+ if (p_size == (size_t)-1 && q_size == (size_t)-1)
+- return __builtin_strncat(p, q, count);
++ return __underlying_strncat(p, q, count);
+ p_len = strlen(p);
+ copy_len = strnlen(q, count);
+ if (p_size < p_len + copy_len + 1)
+ fortify_panic(__func__);
+- __builtin_memcpy(p + p_len, q, copy_len);
++ __underlying_memcpy(p + p_len, q, copy_len);
+ p[p_len + copy_len] = '\0';
+ return p;
+ }
+@@ -363,7 +388,7 @@ __FORTIFY_INLINE void *memset(void *p, int c, __kernel_size_t size)
+ __write_overflow();
+ if (p_size < size)
+ fortify_panic(__func__);
+- return __builtin_memset(p, c, size);
++ return __underlying_memset(p, c, size);
+ }
+
+ __FORTIFY_INLINE void *memcpy(void *p, const void *q, __kernel_size_t size)
+@@ -378,7 +403,7 @@ __FORTIFY_INLINE void *memcpy(void *p, const void *q, __kernel_size_t size)
+ }
+ if (p_size < size || q_size < size)
+ fortify_panic(__func__);
+- return __builtin_memcpy(p, q, size);
++ return __underlying_memcpy(p, q, size);
+ }
+
+ __FORTIFY_INLINE void *memmove(void *p, const void *q, __kernel_size_t size)
+@@ -393,7 +418,7 @@ __FORTIFY_INLINE void *memmove(void *p, const void *q, __kernel_size_t size)
+ }
+ if (p_size < size || q_size < size)
+ fortify_panic(__func__);
+- return __builtin_memmove(p, q, size);
++ return __underlying_memmove(p, q, size);
+ }
+
+ extern void *__real_memscan(void *, int, __kernel_size_t) __RENAME(memscan);
+@@ -419,7 +444,7 @@ __FORTIFY_INLINE int memcmp(const void *p, const void *q, __kernel_size_t size)
+ }
+ if (p_size < size || q_size < size)
+ fortify_panic(__func__);
+- return __builtin_memcmp(p, q, size);
++ return __underlying_memcmp(p, q, size);
+ }
+
+ __FORTIFY_INLINE void *memchr(const void *p, int c, __kernel_size_t size)
+@@ -429,7 +454,7 @@ __FORTIFY_INLINE void *memchr(const void *p, int c, __kernel_size_t size)
+ __read_overflow();
+ if (p_size < size)
+ fortify_panic(__func__);
+- return __builtin_memchr(p, c, size);
++ return __underlying_memchr(p, c, size);
+ }
+
+ void *__real_memchr_inv(const void *s, int c, size_t n) __RENAME(memchr_inv);
+@@ -460,11 +485,22 @@ __FORTIFY_INLINE char *strcpy(char *p, const char *q)
+ size_t p_size = __builtin_object_size(p, 0);
+ size_t q_size = __builtin_object_size(q, 0);
+ if (p_size == (size_t)-1 && q_size == (size_t)-1)
+- return __builtin_strcpy(p, q);
++ return __underlying_strcpy(p, q);
+ memcpy(p, q, strlen(q) + 1);
+ return p;
+ }
+
++/* Don't use these outside the FORTIFY_SOURCE implementation */
++#undef __underlying_memchr
++#undef __underlying_memcmp
++#undef __underlying_memcpy
++#undef __underlying_memmove
++#undef __underlying_memset
++#undef __underlying_strcat
++#undef __underlying_strcpy
++#undef __underlying_strlen
++#undef __underlying_strncat
++#undef __underlying_strncpy
+ #endif
+
+ /**
+diff --git a/include/linux/sunrpc/gss_api.h b/include/linux/sunrpc/gss_api.h
+index bc07e51f20d1..bf4ac8a0268c 100644
+--- a/include/linux/sunrpc/gss_api.h
++++ b/include/linux/sunrpc/gss_api.h
+@@ -84,6 +84,7 @@ struct pf_desc {
+ u32 service;
+ char *name;
+ char *auth_domain_name;
++ struct auth_domain *domain;
+ bool datatouch;
+ };
+
+diff --git a/include/linux/sunrpc/svcauth_gss.h b/include/linux/sunrpc/svcauth_gss.h
+index ca39a388dc22..f09c82b0a7ae 100644
+--- a/include/linux/sunrpc/svcauth_gss.h
++++ b/include/linux/sunrpc/svcauth_gss.h
+@@ -20,7 +20,8 @@ int gss_svc_init(void);
+ void gss_svc_shutdown(void);
+ int gss_svc_init_net(struct net *net);
+ void gss_svc_shutdown_net(struct net *net);
+-int svcauth_gss_register_pseudoflavor(u32 pseudoflavor, char * name);
++struct auth_domain *svcauth_gss_register_pseudoflavor(u32 pseudoflavor,
++ char *name);
+ u32 svcauth_gss_flavor(struct auth_domain *dom);
+
+ #endif /* _LINUX_SUNRPC_SVCAUTH_GSS_H */
+diff --git a/include/net/bluetooth/hci.h b/include/net/bluetooth/hci.h
+index 5f60e135aeb6..25c2e5ee81dc 100644
+--- a/include/net/bluetooth/hci.h
++++ b/include/net/bluetooth/hci.h
+@@ -214,6 +214,15 @@ enum {
+ * This quirk must be set before hci_register_dev is called.
+ */
+ HCI_QUIRK_WIDEBAND_SPEECH_SUPPORTED,
++
++ /* When this quirk is set, the controller has validated that
++ * LE states reported through the HCI_LE_READ_SUPPORTED_STATES are
++ * valid. This mechanism is necessary as many controllers have
++ * been seen as having trouble initiating a connectable
++ * advertisement despite the state combination being reported as
++ * supported.
++ */
++ HCI_QUIRK_VALID_LE_STATES,
+ };
+
+ /* HCI device flags */
+diff --git a/include/net/tls.h b/include/net/tls.h
+index 18cd4f418464..ca5f7f437289 100644
+--- a/include/net/tls.h
++++ b/include/net/tls.h
+@@ -571,6 +571,15 @@ static inline bool tls_sw_has_ctx_tx(const struct sock *sk)
+ return !!tls_sw_ctx_tx(ctx);
+ }
+
++static inline bool tls_sw_has_ctx_rx(const struct sock *sk)
++{
++ struct tls_context *ctx = tls_get_ctx(sk);
++
++ if (!ctx)
++ return false;
++ return !!tls_sw_ctx_rx(ctx);
++}
++
+ void tls_sw_write_space(struct sock *sk, struct tls_context *ctx);
+ void tls_device_write_space(struct sock *sk, struct tls_context *ctx);
+
+diff --git a/include/trace/events/btrfs.h b/include/trace/events/btrfs.h
+index bcbc763b8814..360b0f9d2220 100644
+--- a/include/trace/events/btrfs.h
++++ b/include/trace/events/btrfs.h
+@@ -89,6 +89,7 @@ TRACE_DEFINE_ENUM(COMMIT_TRANS);
+ { IO_TREE_TRANS_DIRTY_PAGES, "TRANS_DIRTY_PAGES" }, \
+ { IO_TREE_ROOT_DIRTY_LOG_PAGES, "ROOT_DIRTY_LOG_PAGES" }, \
+ { IO_TREE_INODE_FILE_EXTENT, "INODE_FILE_EXTENT" }, \
++ { IO_TREE_LOG_CSUM_RANGE, "LOG_CSUM_RANGE" }, \
+ { IO_TREE_SELFTEST, "SELFTEST" })
+
+ #define BTRFS_GROUP_FLAGS \
+diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
+index f9b7fdd951e4..c01de7924e97 100644
+--- a/include/uapi/linux/bpf.h
++++ b/include/uapi/linux/bpf.h
+@@ -1589,6 +1589,13 @@ union bpf_attr {
+ * Grow or shrink the room for data in the packet associated to
+ * *skb* by *len_diff*, and according to the selected *mode*.
+ *
++ * By default, the helper will reset any offloaded checksum
++ * indicator of the skb to CHECKSUM_NONE. This can be avoided
++ * by the following flag:
++ *
++ * * **BPF_F_ADJ_ROOM_NO_CSUM_RESET**: Do not reset offloaded
++ * checksum data of the skb to CHECKSUM_NONE.
++ *
+ * There are two supported modes at this time:
+ *
+ * * **BPF_ADJ_ROOM_MAC**: Adjust room at the mac layer
+@@ -3235,6 +3242,7 @@ enum {
+ BPF_F_ADJ_ROOM_ENCAP_L3_IPV6 = (1ULL << 2),
+ BPF_F_ADJ_ROOM_ENCAP_L4_GRE = (1ULL << 3),
+ BPF_F_ADJ_ROOM_ENCAP_L4_UDP = (1ULL << 4),
++ BPF_F_ADJ_ROOM_NO_CSUM_RESET = (1ULL << 5),
+ };
+
+ enum {
+diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
+index 428c7dde6b4b..9cdc5356f542 100644
+--- a/include/uapi/linux/kvm.h
++++ b/include/uapi/linux/kvm.h
+@@ -189,9 +189,11 @@ struct kvm_hyperv_exit {
+ #define KVM_EXIT_HYPERV_SYNIC 1
+ #define KVM_EXIT_HYPERV_HCALL 2
+ __u32 type;
++ __u32 pad1;
+ union {
+ struct {
+ __u32 msr;
++ __u32 pad2;
+ __u64 control;
+ __u64 evt_page;
+ __u64 msg_page;
+diff --git a/kernel/audit.c b/kernel/audit.c
+index 87f31bf1f0a0..f711f424a28a 100644
+--- a/kernel/audit.c
++++ b/kernel/audit.c
+@@ -880,7 +880,7 @@ main_queue:
+ return 0;
+ }
+
+-int audit_send_list(void *_dest)
++int audit_send_list_thread(void *_dest)
+ {
+ struct audit_netlink_list *dest = _dest;
+ struct sk_buff *skb;
+@@ -924,19 +924,30 @@ out_kfree_skb:
+ return NULL;
+ }
+
++static void audit_free_reply(struct audit_reply *reply)
++{
++ if (!reply)
++ return;
++
++ if (reply->skb)
++ kfree_skb(reply->skb);
++ if (reply->net)
++ put_net(reply->net);
++ kfree(reply);
++}
++
+ static int audit_send_reply_thread(void *arg)
+ {
+ struct audit_reply *reply = (struct audit_reply *)arg;
+- struct sock *sk = audit_get_sk(reply->net);
+
+ audit_ctl_lock();
+ audit_ctl_unlock();
+
+ /* Ignore failure. It'll only happen if the sender goes away,
+ because our timeout is set to infinite. */
+- netlink_unicast(sk, reply->skb, reply->portid, 0);
+- put_net(reply->net);
+- kfree(reply);
++ netlink_unicast(audit_get_sk(reply->net), reply->skb, reply->portid, 0);
++ reply->skb = NULL;
++ audit_free_reply(reply);
+ return 0;
+ }
+
+@@ -950,35 +961,32 @@ static int audit_send_reply_thread(void *arg)
+ * @payload: payload data
+ * @size: payload size
+ *
+- * Allocates an skb, builds the netlink message, and sends it to the port id.
+- * No failure notifications.
++ * Allocates a skb, builds the netlink message, and sends it to the port id.
+ */
+ static void audit_send_reply(struct sk_buff *request_skb, int seq, int type, int done,
+ int multi, const void *payload, int size)
+ {
+- struct net *net = sock_net(NETLINK_CB(request_skb).sk);
+- struct sk_buff *skb;
+ struct task_struct *tsk;
+- struct audit_reply *reply = kmalloc(sizeof(struct audit_reply),
+- GFP_KERNEL);
++ struct audit_reply *reply;
+
++ reply = kzalloc(sizeof(*reply), GFP_KERNEL);
+ if (!reply)
+ return;
+
+- skb = audit_make_reply(seq, type, done, multi, payload, size);
+- if (!skb)
+- goto out;
+-
+- reply->net = get_net(net);
++ reply->skb = audit_make_reply(seq, type, done, multi, payload, size);
++ if (!reply->skb)
++ goto err;
++ reply->net = get_net(sock_net(NETLINK_CB(request_skb).sk));
+ reply->portid = NETLINK_CB(request_skb).portid;
+- reply->skb = skb;
+
+ tsk = kthread_run(audit_send_reply_thread, reply, "audit_send_reply");
+- if (!IS_ERR(tsk))
+- return;
+- kfree_skb(skb);
+-out:
+- kfree(reply);
++ if (IS_ERR(tsk))
++ goto err;
++
++ return;
++
++err:
++ audit_free_reply(reply);
+ }
+
+ /*
+diff --git a/kernel/audit.h b/kernel/audit.h
+index 2eed4d231624..f0233dc40b17 100644
+--- a/kernel/audit.h
++++ b/kernel/audit.h
+@@ -229,7 +229,7 @@ struct audit_netlink_list {
+ struct sk_buff_head q;
+ };
+
+-int audit_send_list(void *_dest);
++int audit_send_list_thread(void *_dest);
+
+ extern int selinux_audit_rule_update(void);
+
+diff --git a/kernel/auditfilter.c b/kernel/auditfilter.c
+index 026e34da4ace..a10e2997aa6c 100644
+--- a/kernel/auditfilter.c
++++ b/kernel/auditfilter.c
+@@ -1161,11 +1161,8 @@ int audit_rule_change(int type, int seq, void *data, size_t datasz)
+ */
+ int audit_list_rules_send(struct sk_buff *request_skb, int seq)
+ {
+- u32 portid = NETLINK_CB(request_skb).portid;
+- struct net *net = sock_net(NETLINK_CB(request_skb).sk);
+ struct task_struct *tsk;
+ struct audit_netlink_list *dest;
+- int err = 0;
+
+ /* We can't just spew out the rules here because we might fill
+ * the available socket buffer space and deadlock waiting for
+@@ -1173,25 +1170,26 @@ int audit_list_rules_send(struct sk_buff *request_skb, int seq)
+ * happen if we're actually running in the context of auditctl
+ * trying to _send_ the stuff */
+
+- dest = kmalloc(sizeof(struct audit_netlink_list), GFP_KERNEL);
++ dest = kmalloc(sizeof(*dest), GFP_KERNEL);
+ if (!dest)
+ return -ENOMEM;
+- dest->net = get_net(net);
+- dest->portid = portid;
++ dest->net = get_net(sock_net(NETLINK_CB(request_skb).sk));
++ dest->portid = NETLINK_CB(request_skb).portid;
+ skb_queue_head_init(&dest->q);
+
+ mutex_lock(&audit_filter_mutex);
+ audit_list_rules(seq, &dest->q);
+ mutex_unlock(&audit_filter_mutex);
+
+- tsk = kthread_run(audit_send_list, dest, "audit_send_list");
++ tsk = kthread_run(audit_send_list_thread, dest, "audit_send_list");
+ if (IS_ERR(tsk)) {
+ skb_queue_purge(&dest->q);
++ put_net(dest->net);
+ kfree(dest);
+- err = PTR_ERR(tsk);
++ return PTR_ERR(tsk);
+ }
+
+- return err;
++ return 0;
+ }
+
+ int audit_comparator(u32 left, u32 op, u32 right)
+diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
+index 4e6dee19a668..5e52765161f9 100644
+--- a/kernel/bpf/syscall.c
++++ b/kernel/bpf/syscall.c
+@@ -1468,7 +1468,8 @@ static int map_lookup_and_delete_elem(union bpf_attr *attr)
+ map = __bpf_map_get(f);
+ if (IS_ERR(map))
+ return PTR_ERR(map);
+- if (!(map_get_sys_perms(map, f) & FMODE_CAN_WRITE)) {
++ if (!(map_get_sys_perms(map, f) & FMODE_CAN_READ) ||
++ !(map_get_sys_perms(map, f) & FMODE_CAN_WRITE)) {
+ err = -EPERM;
+ goto err_put;
+ }
+diff --git a/kernel/cpu.c b/kernel/cpu.c
+index 2371292f30b0..244d30544377 100644
+--- a/kernel/cpu.c
++++ b/kernel/cpu.c
+@@ -3,6 +3,7 @@
+ *
+ * This code is licenced under the GPL.
+ */
++#include <linux/sched/mm.h>
+ #include <linux/proc_fs.h>
+ #include <linux/smp.h>
+ #include <linux/init.h>
+@@ -564,6 +565,21 @@ static int bringup_cpu(unsigned int cpu)
+ return bringup_wait_for_ap(cpu);
+ }
+
++static int finish_cpu(unsigned int cpu)
++{
++ struct task_struct *idle = idle_thread_get(cpu);
++ struct mm_struct *mm = idle->active_mm;
++
++ /*
++ * idle_task_exit() will have switched to &init_mm, now
++ * clean up any remaining active_mm state.
++ */
++ if (mm != &init_mm)
++ idle->active_mm = &init_mm;
++ mmdrop(mm);
++ return 0;
++}
++
+ /*
+ * Hotplug state machine related functions
+ */
+@@ -1549,7 +1565,7 @@ static struct cpuhp_step cpuhp_hp_states[] = {
+ [CPUHP_BRINGUP_CPU] = {
+ .name = "cpu:bringup",
+ .startup.single = bringup_cpu,
+- .teardown.single = NULL,
++ .teardown.single = finish_cpu,
+ .cant_stop = true,
+ },
+ /* Final state before CPU kills itself */
+diff --git a/kernel/cpu_pm.c b/kernel/cpu_pm.c
+index cbca6879ab7d..44a259338e33 100644
+--- a/kernel/cpu_pm.c
++++ b/kernel/cpu_pm.c
+@@ -80,7 +80,7 @@ EXPORT_SYMBOL_GPL(cpu_pm_unregister_notifier);
+ */
+ int cpu_pm_enter(void)
+ {
+- int nr_calls;
++ int nr_calls = 0;
+ int ret = 0;
+
+ ret = cpu_pm_notify(CPU_PM_ENTER, -1, &nr_calls);
+@@ -131,7 +131,7 @@ EXPORT_SYMBOL_GPL(cpu_pm_exit);
+ */
+ int cpu_cluster_pm_enter(void)
+ {
+- int nr_calls;
++ int nr_calls = 0;
+ int ret = 0;
+
+ ret = cpu_pm_notify(CPU_CLUSTER_PM_ENTER, -1, &nr_calls);
+diff --git a/kernel/debug/debug_core.c b/kernel/debug/debug_core.c
+index 2b7c9b67931d..d47c7d6656cd 100644
+--- a/kernel/debug/debug_core.c
++++ b/kernel/debug/debug_core.c
+@@ -532,6 +532,7 @@ static int kgdb_reenter_check(struct kgdb_state *ks)
+
+ if (exception_level > 1) {
+ dump_stack();
++ kgdb_io_module_registered = false;
+ panic("Recursive entry to debugger");
+ }
+
+@@ -668,6 +669,8 @@ return_normal:
+ if (kgdb_skipexception(ks->ex_vector, ks->linux_regs))
+ goto kgdb_restore;
+
++ atomic_inc(&ignore_console_lock_warning);
++
+ /* Call the I/O driver's pre_exception routine */
+ if (dbg_io_ops->pre_exception)
+ dbg_io_ops->pre_exception();
+@@ -740,6 +743,8 @@ cpu_master_loop:
+ if (dbg_io_ops->post_exception)
+ dbg_io_ops->post_exception();
+
++ atomic_dec(&ignore_console_lock_warning);
++
+ if (!kgdb_single_step) {
+ raw_spin_unlock(&dbg_slave_lock);
+ /* Wait till all the CPUs have quit from the debugger. */
+diff --git a/kernel/exit.c b/kernel/exit.c
+index ce2a75bc0ade..d56fe51bdf07 100644
+--- a/kernel/exit.c
++++ b/kernel/exit.c
+@@ -708,8 +708,12 @@ void __noreturn do_exit(long code)
+ struct task_struct *tsk = current;
+ int group_dead;
+
+- profile_task_exit(tsk);
+- kcov_task_exit(tsk);
++ /*
++ * We can get here from a kernel oops, sometimes with preemption off.
++ * Start by checking for critical errors.
++ * Then fix up important state like USER_DS and preemption.
++ * Then do everything else.
++ */
+
+ WARN_ON(blk_needs_flush_plug(tsk));
+
+@@ -727,6 +731,16 @@ void __noreturn do_exit(long code)
+ */
+ set_fs(USER_DS);
+
++ if (unlikely(in_atomic())) {
++ pr_info("note: %s[%d] exited with preempt_count %d\n",
++ current->comm, task_pid_nr(current),
++ preempt_count());
++ preempt_count_set(PREEMPT_ENABLED);
++ }
++
++ profile_task_exit(tsk);
++ kcov_task_exit(tsk);
++
+ ptrace_event(PTRACE_EVENT_EXIT, code);
+
+ validate_creds_for_do_exit(tsk);
+@@ -744,13 +758,6 @@ void __noreturn do_exit(long code)
+
+ exit_signals(tsk); /* sets PF_EXITING */
+
+- if (unlikely(in_atomic())) {
+- pr_info("note: %s[%d] exited with preempt_count %d\n",
+- current->comm, task_pid_nr(current),
+- preempt_count());
+- preempt_count_set(PREEMPT_ENABLED);
+- }
+-
+ /* sync mm's RSS info before statistics gathering */
+ if (tsk->mm)
+ sync_mm_rss(tsk->mm);
+diff --git a/kernel/sched/core.c b/kernel/sched/core.c
+index 9a2fbf98fd6f..5eccfb816d23 100644
+--- a/kernel/sched/core.c
++++ b/kernel/sched/core.c
+@@ -6190,13 +6190,14 @@ void idle_task_exit(void)
+ struct mm_struct *mm = current->active_mm;
+
+ BUG_ON(cpu_online(smp_processor_id()));
++ BUG_ON(current != this_rq()->idle);
+
+ if (mm != &init_mm) {
+ switch_mm(mm, &init_mm, current);
+- current->active_mm = &init_mm;
+ finish_arch_post_lock_switch();
+ }
+- mmdrop(mm);
++
++ /* finish_cpu(), as ran on the BP, will clean up the active_mm state */
+ }
+
+ /*
+@@ -7385,6 +7386,8 @@ static DEFINE_MUTEX(cfs_constraints_mutex);
+
+ const u64 max_cfs_quota_period = 1 * NSEC_PER_SEC; /* 1s */
+ static const u64 min_cfs_quota_period = 1 * NSEC_PER_MSEC; /* 1ms */
++/* More than 203 days if BW_SHIFT equals 20. */
++static const u64 max_cfs_runtime = MAX_BW * NSEC_PER_USEC;
+
+ static int __cfs_schedulable(struct task_group *tg, u64 period, u64 runtime);
+
+@@ -7412,6 +7415,12 @@ static int tg_set_cfs_bandwidth(struct task_group *tg, u64 period, u64 quota)
+ if (period > max_cfs_quota_period)
+ return -EINVAL;
+
++ /*
++ * Bound quota to defend quota against overflow during bandwidth shift.
++ */
++ if (quota != RUNTIME_INF && quota > max_cfs_runtime)
++ return -EINVAL;
++
+ /*
+ * Prevent race between setting of cfs_rq->runtime_enabled and
+ * unthrottle_offline_cfs_rqs().
+diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
+index da3e5b54715b..2ae7e30ccb33 100644
+--- a/kernel/sched/fair.c
++++ b/kernel/sched/fair.c
+@@ -5170,6 +5170,8 @@ static enum hrtimer_restart sched_cfs_period_timer(struct hrtimer *timer)
+ if (!overrun)
+ break;
+
++ idle = do_sched_cfs_period_timer(cfs_b, overrun, flags);
++
+ if (++count > 3) {
+ u64 new, old = ktime_to_ns(cfs_b->period);
+
+@@ -5199,8 +5201,6 @@ static enum hrtimer_restart sched_cfs_period_timer(struct hrtimer *timer)
+ /* reset count so we don't come right back in here */
+ count = 0;
+ }
+-
+- idle = do_sched_cfs_period_timer(cfs_b, overrun, flags);
+ }
+ if (idle)
+ cfs_b->period_active = 0;
+diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
+index df11d88c9895..6d60ba21ed29 100644
+--- a/kernel/sched/rt.c
++++ b/kernel/sched/rt.c
+@@ -9,6 +9,8 @@
+
+ int sched_rr_timeslice = RR_TIMESLICE;
+ int sysctl_sched_rr_timeslice = (MSEC_PER_SEC / HZ) * RR_TIMESLICE;
++/* More than 4 hours if BW_SHIFT equals 20. */
++static const u64 max_rt_runtime = MAX_BW;
+
+ static int do_sched_rt_period_timer(struct rt_bandwidth *rt_b, int overrun);
+
+@@ -2585,6 +2587,12 @@ static int tg_set_rt_bandwidth(struct task_group *tg,
+ if (rt_period == 0)
+ return -EINVAL;
+
++ /*
++ * Bound quota to defend quota against overflow during bandwidth shift.
++ */
++ if (rt_runtime != RUNTIME_INF && rt_runtime > max_rt_runtime)
++ return -EINVAL;
++
+ mutex_lock(&rt_constraints_mutex);
+ err = __rt_schedulable(tg, rt_period, rt_runtime);
+ if (err)
+@@ -2702,7 +2710,9 @@ static int sched_rt_global_validate(void)
+ return -EINVAL;
+
+ if ((sysctl_sched_rt_runtime != RUNTIME_INF) &&
+- (sysctl_sched_rt_runtime > sysctl_sched_rt_period))
++ ((sysctl_sched_rt_runtime > sysctl_sched_rt_period) ||
++ ((u64)sysctl_sched_rt_runtime *
++ NSEC_PER_USEC > max_rt_runtime)))
+ return -EINVAL;
+
+ return 0;
+diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
+index db3a57675ccf..1f58677a8f23 100644
+--- a/kernel/sched/sched.h
++++ b/kernel/sched/sched.h
+@@ -1918,6 +1918,8 @@ extern void init_dl_inactive_task_timer(struct sched_dl_entity *dl_se);
+ #define BW_SHIFT 20
+ #define BW_UNIT (1 << BW_SHIFT)
+ #define RATIO_SHIFT 8
++#define MAX_BW_BITS (64 - BW_SHIFT)
++#define MAX_BW ((1ULL << MAX_BW_BITS) - 1)
+ unsigned long to_ratio(u64 period, u64 runtime);
+
+ extern void init_entity_runnable_average(struct sched_entity *se);
+diff --git a/kernel/time/clocksource.c b/kernel/time/clocksource.c
+index 7cb09c4cf21c..02441ead3c3b 100644
+--- a/kernel/time/clocksource.c
++++ b/kernel/time/clocksource.c
+@@ -928,14 +928,12 @@ int __clocksource_register_scale(struct clocksource *cs, u32 scale, u32 freq)
+
+ clocksource_arch_init(cs);
+
+-#ifdef CONFIG_GENERIC_VDSO_CLOCK_MODE
+ if (cs->vdso_clock_mode < 0 ||
+ cs->vdso_clock_mode >= VDSO_CLOCKMODE_MAX) {
+ pr_warn("clocksource %s registered with invalid VDSO mode %d. Disabling VDSO support.\n",
+ cs->name, cs->vdso_clock_mode);
+ cs->vdso_clock_mode = VDSO_CLOCKMODE_NONE;
+ }
+-#endif
+
+ /* Initialize mult/shift and max_idle_ns */
+ __clocksource_update_freq_scale(cs, scale, freq);
+diff --git a/lib/Kconfig.ubsan b/lib/Kconfig.ubsan
+index 929211039bac..27bcc2568c95 100644
+--- a/lib/Kconfig.ubsan
++++ b/lib/Kconfig.ubsan
+@@ -63,7 +63,7 @@ config UBSAN_SANITIZE_ALL
+ config UBSAN_ALIGNMENT
+ bool "Enable checks for pointers alignment"
+ default !HAVE_EFFICIENT_UNALIGNED_ACCESS
+- depends on !X86 || !COMPILE_TEST
++ depends on !UBSAN_TRAP
+ help
+ This option enables the check of unaligned memory accesses.
+ Enabling this option on architectures that support unaligned
+diff --git a/lib/mpi/longlong.h b/lib/mpi/longlong.h
+index 891e1c3549c4..afbd99987cf8 100644
+--- a/lib/mpi/longlong.h
++++ b/lib/mpi/longlong.h
+@@ -653,7 +653,7 @@ do { \
+ ************** MIPS/64 **************
+ ***************************************/
+ #if (defined(__mips) && __mips >= 3) && W_TYPE_SIZE == 64
+-#if defined(__mips_isa_rev) && __mips_isa_rev >= 6
++#if defined(__mips_isa_rev) && __mips_isa_rev >= 6 && defined(CONFIG_CC_IS_GCC)
+ /*
+ * GCC ends up emitting a __multi3 intrinsic call for MIPS64r6 with the plain C
+ * code below, so we special case MIPS64r6 until the compiler can do better.
+diff --git a/lib/test_kasan.c b/lib/test_kasan.c
+index e3087d90e00d..dc2c6a51d11a 100644
+--- a/lib/test_kasan.c
++++ b/lib/test_kasan.c
+@@ -23,6 +23,14 @@
+
+ #include <asm/page.h>
+
++/*
++ * We assign some test results to these globals to make sure the tests
++ * are not eliminated as dead code.
++ */
++
++int kasan_int_result;
++void *kasan_ptr_result;
++
+ /*
+ * Note: test functions are marked noinline so that their names appear in
+ * reports.
+@@ -622,7 +630,7 @@ static noinline void __init kasan_memchr(void)
+ if (!ptr)
+ return;
+
+- memchr(ptr, '1', size + 1);
++ kasan_ptr_result = memchr(ptr, '1', size + 1);
+ kfree(ptr);
+ }
+
+@@ -638,7 +646,7 @@ static noinline void __init kasan_memcmp(void)
+ return;
+
+ memset(arr, 0, sizeof(arr));
+- memcmp(ptr, arr, size+1);
++ kasan_int_result = memcmp(ptr, arr, size + 1);
+ kfree(ptr);
+ }
+
+@@ -661,22 +669,22 @@ static noinline void __init kasan_strings(void)
+ * will likely point to zeroed byte.
+ */
+ ptr += 16;
+- strchr(ptr, '1');
++ kasan_ptr_result = strchr(ptr, '1');
+
+ pr_info("use-after-free in strrchr\n");
+- strrchr(ptr, '1');
++ kasan_ptr_result = strrchr(ptr, '1');
+
+ pr_info("use-after-free in strcmp\n");
+- strcmp(ptr, "2");
++ kasan_int_result = strcmp(ptr, "2");
+
+ pr_info("use-after-free in strncmp\n");
+- strncmp(ptr, "2", 1);
++ kasan_int_result = strncmp(ptr, "2", 1);
+
+ pr_info("use-after-free in strlen\n");
+- strlen(ptr);
++ kasan_int_result = strlen(ptr);
+
+ pr_info("use-after-free in strnlen\n");
+- strnlen(ptr, 1);
++ kasan_int_result = strnlen(ptr, 1);
+ }
+
+ static noinline void __init kasan_bitops(void)
+@@ -743,11 +751,12 @@ static noinline void __init kasan_bitops(void)
+ __test_and_change_bit(BITS_PER_LONG + BITS_PER_BYTE, bits);
+
+ pr_info("out-of-bounds in test_bit\n");
+- (void)test_bit(BITS_PER_LONG + BITS_PER_BYTE, bits);
++ kasan_int_result = test_bit(BITS_PER_LONG + BITS_PER_BYTE, bits);
+
+ #if defined(clear_bit_unlock_is_negative_byte)
+ pr_info("out-of-bounds in clear_bit_unlock_is_negative_byte\n");
+- clear_bit_unlock_is_negative_byte(BITS_PER_LONG + BITS_PER_BYTE, bits);
++ kasan_int_result = clear_bit_unlock_is_negative_byte(BITS_PER_LONG +
++ BITS_PER_BYTE, bits);
+ #endif
+ kfree(bits);
+ }
+diff --git a/lib/test_printf.c b/lib/test_printf.c
+index 6b1622f4d7c2..fc63b8959d42 100644
+--- a/lib/test_printf.c
++++ b/lib/test_printf.c
+@@ -637,7 +637,9 @@ static void __init fwnode_pointer(void)
+ test(second_name, "%pfwP", software_node_fwnode(&softnodes[1]));
+ test(third_name, "%pfwP", software_node_fwnode(&softnodes[2]));
+
+- software_node_unregister_nodes(softnodes);
++ software_node_unregister(&softnodes[2]);
++ software_node_unregister(&softnodes[1]);
++ software_node_unregister(&softnodes[0]);
+ }
+
+ static void __init
+diff --git a/mm/huge_memory.c b/mm/huge_memory.c
+index 11fe0b4dbe67..dddc863b3cbc 100644
+--- a/mm/huge_memory.c
++++ b/mm/huge_memory.c
+@@ -2385,6 +2385,8 @@ void __split_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
+ {
+ spinlock_t *ptl;
+ struct mmu_notifier_range range;
++ bool was_locked = false;
++ pmd_t _pmd;
+
+ mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma, vma->vm_mm,
+ address & HPAGE_PMD_MASK,
+@@ -2397,11 +2399,32 @@ void __split_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
+ * pmd against. Otherwise we can end up replacing wrong page.
+ */
+ VM_BUG_ON(freeze && !page);
+- if (page && page != pmd_page(*pmd))
+- goto out;
++ if (page) {
++ VM_WARN_ON_ONCE(!PageLocked(page));
++ was_locked = true;
++ if (page != pmd_page(*pmd))
++ goto out;
++ }
+
++repeat:
+ if (pmd_trans_huge(*pmd)) {
+- page = pmd_page(*pmd);
++ if (!page) {
++ page = pmd_page(*pmd);
++ if (unlikely(!trylock_page(page))) {
++ get_page(page);
++ _pmd = *pmd;
++ spin_unlock(ptl);
++ lock_page(page);
++ spin_lock(ptl);
++ if (unlikely(!pmd_same(*pmd, _pmd))) {
++ unlock_page(page);
++ put_page(page);
++ page = NULL;
++ goto repeat;
++ }
++ put_page(page);
++ }
++ }
+ if (PageMlocked(page))
+ clear_page_mlock(page);
+ } else if (!(pmd_devmap(*pmd) || is_pmd_migration_entry(*pmd)))
+@@ -2409,6 +2432,8 @@ void __split_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
+ __split_huge_pmd_locked(vma, pmd, range.start, freeze);
+ out:
+ spin_unlock(ptl);
++ if (!was_locked && page)
++ unlock_page(page);
+ /*
+ * No need to double call mmu_notifier->invalidate_range() callback.
+ * They are 3 cases to consider inside __split_huge_pmd_locked():
+diff --git a/mm/page_alloc.c b/mm/page_alloc.c
+index 13cc653122b7..d0c0d9364aa6 100644
+--- a/mm/page_alloc.c
++++ b/mm/page_alloc.c
+@@ -1692,7 +1692,6 @@ static void __init deferred_free_pages(unsigned long pfn,
+ } else if (!(pfn & nr_pgmask)) {
+ deferred_free_range(pfn - nr_free, nr_free);
+ nr_free = 1;
+- touch_nmi_watchdog();
+ } else {
+ nr_free++;
+ }
+@@ -1722,7 +1721,6 @@ static unsigned long __init deferred_init_pages(struct zone *zone,
+ continue;
+ } else if (!page || !(pfn & nr_pgmask)) {
+ page = pfn_to_page(pfn);
+- touch_nmi_watchdog();
+ } else {
+ page++;
+ }
+@@ -1845,6 +1843,13 @@ static int __init deferred_init_memmap(void *data)
+ BUG_ON(pgdat->first_deferred_pfn > pgdat_end_pfn(pgdat));
+ pgdat->first_deferred_pfn = ULONG_MAX;
+
++ /*
++ * Once we unlock here, the zone cannot be grown anymore, thus if an
++ * interrupt thread must allocate this early in boot, zone must be
++ * pre-grown prior to start of deferred page initialization.
++ */
++ pgdat_resize_unlock(pgdat, &flags);
++
+ /* Only the highest zone is deferred so find it */
+ for (zid = 0; zid < MAX_NR_ZONES; zid++) {
+ zone = pgdat->node_zones + zid;
+@@ -1862,11 +1867,11 @@ static int __init deferred_init_memmap(void *data)
+ * that we can avoid introducing any issues with the buddy
+ * allocator.
+ */
+- while (spfn < epfn)
++ while (spfn < epfn) {
+ nr_pages += deferred_init_maxorder(&i, zone, &spfn, &epfn);
++ cond_resched();
++ }
+ zone_empty:
+- pgdat_resize_unlock(pgdat, &flags);
+-
+ /* Sanity check that the next zone really is unpopulated */
+ WARN_ON(++zid < MAX_NR_ZONES && populated_zone(++zone));
+
+@@ -1908,17 +1913,6 @@ deferred_grow_zone(struct zone *zone, unsigned int order)
+
+ pgdat_resize_lock(pgdat, &flags);
+
+- /*
+- * If deferred pages have been initialized while we were waiting for
+- * the lock, return true, as the zone was grown. The caller will retry
+- * this zone. We won't return to this function since the caller also
+- * has this static branch.
+- */
+- if (!static_branch_unlikely(&deferred_pages)) {
+- pgdat_resize_unlock(pgdat, &flags);
+- return true;
+- }
+-
+ /*
+ * If someone grew this zone while we were waiting for spinlock, return
+ * true, as there might be enough pages already.
+@@ -1947,6 +1941,7 @@ deferred_grow_zone(struct zone *zone, unsigned int order)
+ first_deferred_pfn = spfn;
+
+ nr_pages += deferred_init_maxorder(&i, zone, &spfn, &epfn);
++ touch_nmi_watchdog();
+
+ /* We should only stop along section boundaries */
+ if ((first_deferred_pfn ^ spfn) < PAGES_PER_SECTION)
+diff --git a/net/batman-adv/bat_v_elp.c b/net/batman-adv/bat_v_elp.c
+index 1e3172db7492..955e0b8960d6 100644
+--- a/net/batman-adv/bat_v_elp.c
++++ b/net/batman-adv/bat_v_elp.c
+@@ -127,20 +127,7 @@ static u32 batadv_v_elp_get_throughput(struct batadv_hardif_neigh_node *neigh)
+ rtnl_lock();
+ ret = __ethtool_get_link_ksettings(hard_iface->net_dev, &link_settings);
+ rtnl_unlock();
+-
+- /* Virtual interface drivers such as tun / tap interfaces, VLAN, etc
+- * tend to initialize the interface throughput with some value for the
+- * sake of having a throughput number to export via ethtool. This
+- * exported throughput leaves batman-adv to conclude the interface
+- * throughput is genuine (reflecting reality), thus no measurements
+- * are necessary.
+- *
+- * Based on the observation that those interface types also tend to set
+- * the link auto-negotiation to 'off', batman-adv shall check this
+- * setting to differentiate between genuine link throughput information
+- * and placeholders installed by virtual interfaces.
+- */
+- if (ret == 0 && link_settings.base.autoneg == AUTONEG_ENABLE) {
++ if (ret == 0) {
+ /* link characteristics might change over time */
+ if (link_settings.base.duplex == DUPLEX_FULL)
+ hard_iface->bat_v.flags |= BATADV_FULL_DUPLEX;
+diff --git a/net/bluetooth/hci_event.c b/net/bluetooth/hci_event.c
+index 0a591be8b0ae..b11f8d391ad8 100644
+--- a/net/bluetooth/hci_event.c
++++ b/net/bluetooth/hci_event.c
+@@ -4292,6 +4292,7 @@ static void hci_sync_conn_complete_evt(struct hci_dev *hdev,
+ case 0x11: /* Unsupported Feature or Parameter Value */
+ case 0x1c: /* SCO interval rejected */
+ case 0x1a: /* Unsupported Remote Feature */
++ case 0x1e: /* Invalid LMP Parameters */
+ case 0x1f: /* Unspecified error */
+ case 0x20: /* Unsupported LMP Parameter value */
+ if (conn->out) {
+diff --git a/net/core/filter.c b/net/core/filter.c
+index 5cc9276f1023..11b97c31bca5 100644
+--- a/net/core/filter.c
++++ b/net/core/filter.c
+@@ -3124,7 +3124,8 @@ static int bpf_skb_net_shrink(struct sk_buff *skb, u32 off, u32 len_diff,
+ {
+ int ret;
+
+- if (flags & ~BPF_F_ADJ_ROOM_FIXED_GSO)
++ if (unlikely(flags & ~(BPF_F_ADJ_ROOM_FIXED_GSO |
++ BPF_F_ADJ_ROOM_NO_CSUM_RESET)))
+ return -EINVAL;
+
+ if (skb_is_gso(skb) && !skb_is_gso_tcp(skb)) {
+@@ -3174,7 +3175,8 @@ BPF_CALL_4(bpf_skb_adjust_room, struct sk_buff *, skb, s32, len_diff,
+ u32 off;
+ int ret;
+
+- if (unlikely(flags & ~BPF_F_ADJ_ROOM_MASK))
++ if (unlikely(flags & ~(BPF_F_ADJ_ROOM_MASK |
++ BPF_F_ADJ_ROOM_NO_CSUM_RESET)))
+ return -EINVAL;
+ if (unlikely(len_diff_abs > 0xfffU))
+ return -EFAULT;
+@@ -3202,6 +3204,8 @@ BPF_CALL_4(bpf_skb_adjust_room, struct sk_buff *, skb, s32, len_diff,
+
+ ret = shrink ? bpf_skb_net_shrink(skb, off, len_diff_abs, flags) :
+ bpf_skb_net_grow(skb, off, len_diff_abs, flags);
++ if (!ret && !(flags & BPF_F_ADJ_ROOM_NO_CSUM_RESET))
++ __skb_reset_checksum_unnecessary(skb);
+
+ bpf_compute_data_pointers(skb);
+ return ret;
+diff --git a/net/core/skmsg.c b/net/core/skmsg.c
+index c479372f2cd2..351afbf6bfba 100644
+--- a/net/core/skmsg.c
++++ b/net/core/skmsg.c
+@@ -7,6 +7,7 @@
+
+ #include <net/sock.h>
+ #include <net/tcp.h>
++#include <net/tls.h>
+
+ static bool sk_msg_try_coalesce_ok(struct sk_msg *msg, int elem_first_coalesce)
+ {
+@@ -682,13 +683,75 @@ static struct sk_psock *sk_psock_from_strp(struct strparser *strp)
+ return container_of(parser, struct sk_psock, parser);
+ }
+
+-static void sk_psock_verdict_apply(struct sk_psock *psock,
+- struct sk_buff *skb, int verdict)
++static void sk_psock_skb_redirect(struct sk_psock *psock, struct sk_buff *skb)
+ {
+ struct sk_psock *psock_other;
+ struct sock *sk_other;
+ bool ingress;
+
++ sk_other = tcp_skb_bpf_redirect_fetch(skb);
++ if (unlikely(!sk_other)) {
++ kfree_skb(skb);
++ return;
++ }
++ psock_other = sk_psock(sk_other);
++ if (!psock_other || sock_flag(sk_other, SOCK_DEAD) ||
++ !sk_psock_test_state(psock_other, SK_PSOCK_TX_ENABLED)) {
++ kfree_skb(skb);
++ return;
++ }
++
++ ingress = tcp_skb_bpf_ingress(skb);
++ if ((!ingress && sock_writeable(sk_other)) ||
++ (ingress &&
++ atomic_read(&sk_other->sk_rmem_alloc) <=
++ sk_other->sk_rcvbuf)) {
++ if (!ingress)
++ skb_set_owner_w(skb, sk_other);
++ skb_queue_tail(&psock_other->ingress_skb, skb);
++ schedule_work(&psock_other->work);
++ } else {
++ kfree_skb(skb);
++ }
++}
++
++static void sk_psock_tls_verdict_apply(struct sk_psock *psock,
++ struct sk_buff *skb, int verdict)
++{
++ switch (verdict) {
++ case __SK_REDIRECT:
++ sk_psock_skb_redirect(psock, skb);
++ break;
++ case __SK_PASS:
++ case __SK_DROP:
++ default:
++ break;
++ }
++}
++
++int sk_psock_tls_strp_read(struct sk_psock *psock, struct sk_buff *skb)
++{
++ struct bpf_prog *prog;
++ int ret = __SK_PASS;
++
++ rcu_read_lock();
++ prog = READ_ONCE(psock->progs.skb_verdict);
++ if (likely(prog)) {
++ tcp_skb_bpf_redirect_clear(skb);
++ ret = sk_psock_bpf_run(psock, prog, skb);
++ ret = sk_psock_map_verd(ret, tcp_skb_bpf_redirect_fetch(skb));
++ }
++ rcu_read_unlock();
++ sk_psock_tls_verdict_apply(psock, skb, ret);
++ return ret;
++}
++EXPORT_SYMBOL_GPL(sk_psock_tls_strp_read);
++
++static void sk_psock_verdict_apply(struct sk_psock *psock,
++ struct sk_buff *skb, int verdict)
++{
++ struct sock *sk_other;
++
+ switch (verdict) {
+ case __SK_PASS:
+ sk_other = psock->sk;
+@@ -707,25 +770,8 @@ static void sk_psock_verdict_apply(struct sk_psock *psock,
+ }
+ goto out_free;
+ case __SK_REDIRECT:
+- sk_other = tcp_skb_bpf_redirect_fetch(skb);
+- if (unlikely(!sk_other))
+- goto out_free;
+- psock_other = sk_psock(sk_other);
+- if (!psock_other || sock_flag(sk_other, SOCK_DEAD) ||
+- !sk_psock_test_state(psock_other, SK_PSOCK_TX_ENABLED))
+- goto out_free;
+- ingress = tcp_skb_bpf_ingress(skb);
+- if ((!ingress && sock_writeable(sk_other)) ||
+- (ingress &&
+- atomic_read(&sk_other->sk_rmem_alloc) <=
+- sk_other->sk_rcvbuf)) {
+- if (!ingress)
+- skb_set_owner_w(skb, sk_other);
+- skb_queue_tail(&psock_other->ingress_skb, skb);
+- schedule_work(&psock_other->work);
+- break;
+- }
+- /* fall-through */
++ sk_psock_skb_redirect(psock, skb);
++ break;
+ case __SK_DROP:
+ /* fall-through */
+ default:
+@@ -779,9 +825,13 @@ static void sk_psock_strp_data_ready(struct sock *sk)
+ rcu_read_lock();
+ psock = sk_psock(sk);
+ if (likely(psock)) {
+- write_lock_bh(&sk->sk_callback_lock);
+- strp_data_ready(&psock->parser.strp);
+- write_unlock_bh(&sk->sk_callback_lock);
++ if (tls_sw_has_ctx_rx(sk)) {
++ psock->parser.saved_data_ready(sk);
++ } else {
++ write_lock_bh(&sk->sk_callback_lock);
++ strp_data_ready(&psock->parser.strp);
++ write_unlock_bh(&sk->sk_callback_lock);
++ }
+ }
+ rcu_read_unlock();
+ }
+diff --git a/net/netfilter/nft_nat.c b/net/netfilter/nft_nat.c
+index 8b44a4de5329..bb49a217635e 100644
+--- a/net/netfilter/nft_nat.c
++++ b/net/netfilter/nft_nat.c
+@@ -129,7 +129,7 @@ static int nft_nat_init(const struct nft_ctx *ctx, const struct nft_expr *expr,
+ priv->type = NF_NAT_MANIP_DST;
+ break;
+ default:
+- return -EINVAL;
++ return -EOPNOTSUPP;
+ }
+
+ if (tb[NFTA_NAT_FAMILY] == NULL)
+@@ -196,7 +196,7 @@ static int nft_nat_init(const struct nft_ctx *ctx, const struct nft_expr *expr,
+ if (tb[NFTA_NAT_FLAGS]) {
+ priv->flags = ntohl(nla_get_be32(tb[NFTA_NAT_FLAGS]));
+ if (priv->flags & ~NF_NAT_RANGE_MASK)
+- return -EINVAL;
++ return -EOPNOTSUPP;
+ }
+
+ return nf_ct_netns_get(ctx->net, family);
+diff --git a/net/sunrpc/auth_gss/gss_mech_switch.c b/net/sunrpc/auth_gss/gss_mech_switch.c
+index 69316ab1b9fa..fae632da1058 100644
+--- a/net/sunrpc/auth_gss/gss_mech_switch.c
++++ b/net/sunrpc/auth_gss/gss_mech_switch.c
+@@ -37,6 +37,8 @@ gss_mech_free(struct gss_api_mech *gm)
+
+ for (i = 0; i < gm->gm_pf_num; i++) {
+ pf = &gm->gm_pfs[i];
++ if (pf->domain)
++ auth_domain_put(pf->domain);
+ kfree(pf->auth_domain_name);
+ pf->auth_domain_name = NULL;
+ }
+@@ -59,6 +61,7 @@ make_auth_domain_name(char *name)
+ static int
+ gss_mech_svc_setup(struct gss_api_mech *gm)
+ {
++ struct auth_domain *dom;
+ struct pf_desc *pf;
+ int i, status;
+
+@@ -68,10 +71,13 @@ gss_mech_svc_setup(struct gss_api_mech *gm)
+ status = -ENOMEM;
+ if (pf->auth_domain_name == NULL)
+ goto out;
+- status = svcauth_gss_register_pseudoflavor(pf->pseudoflavor,
+- pf->auth_domain_name);
+- if (status)
++ dom = svcauth_gss_register_pseudoflavor(
++ pf->pseudoflavor, pf->auth_domain_name);
++ if (IS_ERR(dom)) {
++ status = PTR_ERR(dom);
+ goto out;
++ }
++ pf->domain = dom;
+ }
+ return 0;
+ out:
+diff --git a/net/sunrpc/auth_gss/svcauth_gss.c b/net/sunrpc/auth_gss/svcauth_gss.c
+index 50d93c49ef1a..46027d0c903f 100644
+--- a/net/sunrpc/auth_gss/svcauth_gss.c
++++ b/net/sunrpc/auth_gss/svcauth_gss.c
+@@ -809,7 +809,7 @@ u32 svcauth_gss_flavor(struct auth_domain *dom)
+
+ EXPORT_SYMBOL_GPL(svcauth_gss_flavor);
+
+-int
++struct auth_domain *
+ svcauth_gss_register_pseudoflavor(u32 pseudoflavor, char * name)
+ {
+ struct gss_domain *new;
+@@ -826,21 +826,23 @@ svcauth_gss_register_pseudoflavor(u32 pseudoflavor, char * name)
+ new->h.flavour = &svcauthops_gss;
+ new->pseudoflavor = pseudoflavor;
+
+- stat = 0;
+ test = auth_domain_lookup(name, &new->h);
+- if (test != &new->h) { /* Duplicate registration */
++ if (test != &new->h) {
++ pr_warn("svc: duplicate registration of gss pseudo flavour %s.\n",
++ name);
++ stat = -EADDRINUSE;
+ auth_domain_put(test);
+- kfree(new->h.name);
+- goto out_free_dom;
++ goto out_free_name;
+ }
+- return 0;
++ return test;
+
++out_free_name:
++ kfree(new->h.name);
+ out_free_dom:
+ kfree(new);
+ out:
+- return stat;
++ return ERR_PTR(stat);
+ }
+-
+ EXPORT_SYMBOL_GPL(svcauth_gss_register_pseudoflavor);
+
+ static inline int
+diff --git a/net/tls/tls_sw.c b/net/tls/tls_sw.c
+index 8c2763eb6aae..24f64bc0de18 100644
+--- a/net/tls/tls_sw.c
++++ b/net/tls/tls_sw.c
+@@ -1742,6 +1742,7 @@ int tls_sw_recvmsg(struct sock *sk,
+ long timeo;
+ bool is_kvec = iov_iter_is_kvec(&msg->msg_iter);
+ bool is_peek = flags & MSG_PEEK;
++ bool bpf_strp_enabled;
+ int num_async = 0;
+ int pending;
+
+@@ -1752,6 +1753,7 @@ int tls_sw_recvmsg(struct sock *sk,
+
+ psock = sk_psock_get(sk);
+ lock_sock(sk);
++ bpf_strp_enabled = sk_psock_strp_enabled(psock);
+
+ /* Process pending decrypted records. It must be non-zero-copy */
+ err = process_rx_list(ctx, msg, &control, &cmsg, 0, len, false,
+@@ -1805,11 +1807,12 @@ int tls_sw_recvmsg(struct sock *sk,
+
+ if (to_decrypt <= len && !is_kvec && !is_peek &&
+ ctx->control == TLS_RECORD_TYPE_DATA &&
+- prot->version != TLS_1_3_VERSION)
++ prot->version != TLS_1_3_VERSION &&
++ !bpf_strp_enabled)
+ zc = true;
+
+ /* Do not use async mode if record is non-data */
+- if (ctx->control == TLS_RECORD_TYPE_DATA)
++ if (ctx->control == TLS_RECORD_TYPE_DATA && !bpf_strp_enabled)
+ async_capable = ctx->async_capable;
+ else
+ async_capable = false;
+@@ -1859,6 +1862,19 @@ int tls_sw_recvmsg(struct sock *sk,
+ goto pick_next_record;
+
+ if (!zc) {
++ if (bpf_strp_enabled) {
++ err = sk_psock_tls_strp_read(psock, skb);
++ if (err != __SK_PASS) {
++ rxm->offset = rxm->offset + rxm->full_len;
++ rxm->full_len = 0;
++ if (err == __SK_DROP)
++ consume_skb(skb);
++ ctx->recv_pkt = NULL;
++ __strp_unpause(&ctx->strp);
++ continue;
++ }
++ }
++
+ if (rxm->full_len > len) {
+ retain_skb = true;
+ chunk = len;
+diff --git a/scripts/sphinx-pre-install b/scripts/sphinx-pre-install
+index fa3fb05cd54b..09b38ee38ce8 100755
+--- a/scripts/sphinx-pre-install
++++ b/scripts/sphinx-pre-install
+@@ -557,7 +557,8 @@ sub give_gentoo_hints()
+ "media-fonts/dejavu", 2) if ($pdf);
+
+ if ($pdf) {
+- check_missing_file(["/usr/share/fonts/noto-cjk/NotoSansCJKsc-Regular.otf"],
++ check_missing_file(["/usr/share/fonts/noto-cjk/NotoSansCJKsc-Regular.otf",
++ "/usr/share/fonts/noto-cjk/NotoSerifCJK-Regular.ttc"],
+ "media-fonts/noto-cjk", 2);
+ }
+
+@@ -572,10 +573,10 @@ sub give_gentoo_hints()
+ my $portage_imagemagick = "/etc/portage/package.use/imagemagick";
+ my $portage_cairo = "/etc/portage/package.use/graphviz";
+
+- if (qx(cat $portage_imagemagick) ne "$imagemagick\n") {
++ if (qx(grep imagemagick $portage_imagemagick 2>/dev/null) eq "") {
+ printf("\tsudo su -c 'echo \"$imagemagick\" > $portage_imagemagick'\n")
+ }
+- if (qx(cat $portage_cairo) ne "$cairo\n") {
++ if (qx(grep graphviz $portage_cairo 2>/dev/null) eq "") {
+ printf("\tsudo su -c 'echo \"$cairo\" > $portage_cairo'\n");
+ }
+
+diff --git a/security/integrity/evm/evm_crypto.c b/security/integrity/evm/evm_crypto.c
+index 764b896cd628..168c3b78ac47 100644
+--- a/security/integrity/evm/evm_crypto.c
++++ b/security/integrity/evm/evm_crypto.c
+@@ -241,7 +241,7 @@ static int evm_calc_hmac_or_hash(struct dentry *dentry,
+
+ /* Portable EVM signatures must include an IMA hash */
+ if (type == EVM_XATTR_PORTABLE_DIGSIG && !ima_present)
+- return -EPERM;
++ error = -EPERM;
+ out:
+ kfree(xattr_value);
+ kfree(desc);
+diff --git a/security/integrity/ima/ima.h b/security/integrity/ima/ima.h
+index 64317d95363e..495e28bd488e 100644
+--- a/security/integrity/ima/ima.h
++++ b/security/integrity/ima/ima.h
+@@ -36,7 +36,7 @@ enum tpm_pcrs { TPM_PCR0 = 0, TPM_PCR8 = 8 };
+ #define IMA_DIGEST_SIZE SHA1_DIGEST_SIZE
+ #define IMA_EVENT_NAME_LEN_MAX 255
+
+-#define IMA_HASH_BITS 9
++#define IMA_HASH_BITS 10
+ #define IMA_MEASURE_HTABLE_SIZE (1 << IMA_HASH_BITS)
+
+ #define IMA_TEMPLATE_FIELD_ID_MAX_LEN 16
+@@ -52,6 +52,7 @@ extern int ima_policy_flag;
+ extern int ima_hash_algo;
+ extern int ima_appraise;
+ extern struct tpm_chip *ima_tpm_chip;
++extern const char boot_aggregate_name[];
+
+ /* IMA event related data */
+ struct ima_event_data {
+@@ -140,7 +141,7 @@ int ima_calc_buffer_hash(const void *buf, loff_t len,
+ int ima_calc_field_array_hash(struct ima_field_data *field_data,
+ struct ima_template_desc *desc, int num_fields,
+ struct ima_digest_data *hash);
+-int __init ima_calc_boot_aggregate(struct ima_digest_data *hash);
++int ima_calc_boot_aggregate(struct ima_digest_data *hash);
+ void ima_add_violation(struct file *file, const unsigned char *filename,
+ struct integrity_iint_cache *iint,
+ const char *op, const char *cause);
+@@ -175,9 +176,10 @@ struct ima_h_table {
+ };
+ extern struct ima_h_table ima_htable;
+
+-static inline unsigned long ima_hash_key(u8 *digest)
++static inline unsigned int ima_hash_key(u8 *digest)
+ {
+- return hash_long(*digest, IMA_HASH_BITS);
++ /* there is no point in taking a hash of part of a digest */
++ return (digest[0] | digest[1] << 8) % IMA_MEASURE_HTABLE_SIZE;
+ }
+
+ #define __ima_hooks(hook) \
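The ima.h hunk grows the hash table from 512 to 1024 buckets and replaces `hash_long()` over a single digest byte (which could only ever reach 256 buckets) with a plain modulo over the first two digest bytes. A small Python sketch of the new key computation:

```python
IMA_HASH_BITS = 10
IMA_MEASURE_HTABLE_SIZE = 1 << IMA_HASH_BITS  # 1024 buckets

def ima_hash_key(digest: bytes) -> int:
    # A digest is already uniformly distributed, so there is no point in
    # hashing it again: two bytes give 16 bits, reduced by modulo.
    return (digest[0] | digest[1] << 8) % IMA_MEASURE_HTABLE_SIZE
```

The old code's single input byte left three quarters of a 1024-entry table permanently empty; two bytes cover the whole table.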
+diff --git a/security/integrity/ima/ima_crypto.c b/security/integrity/ima/ima_crypto.c
+index 88b5e288f241..fb27174806ba 100644
+--- a/security/integrity/ima/ima_crypto.c
++++ b/security/integrity/ima/ima_crypto.c
+@@ -645,7 +645,7 @@ int ima_calc_buffer_hash(const void *buf, loff_t len,
+ return calc_buffer_shash(buf, len, hash);
+ }
+
+-static void __init ima_pcrread(u32 idx, struct tpm_digest *d)
++static void ima_pcrread(u32 idx, struct tpm_digest *d)
+ {
+ if (!ima_tpm_chip)
+ return;
+@@ -655,18 +655,29 @@ static void __init ima_pcrread(u32 idx, struct tpm_digest *d)
+ }
+
+ /*
+- * Calculate the boot aggregate hash
++ * The boot_aggregate is a cumulative hash over TPM registers 0 - 7. With
++ * TPM 1.2 the boot_aggregate was based on reading the SHA1 PCRs, but with
++ * TPM 2.0 hash agility, TPM chips could support multiple TPM PCR banks,
++ * allowing firmware to configure and enable different banks.
++ *
++ * Knowing which TPM bank is read to calculate the boot_aggregate digest
++ * needs to be conveyed to a verifier. For this reason, use the same
++ * hash algorithm for reading the TPM PCRs as for calculating the boot
++ * aggregate digest as stored in the measurement list.
+ */
+-static int __init ima_calc_boot_aggregate_tfm(char *digest,
+- struct crypto_shash *tfm)
++static int ima_calc_boot_aggregate_tfm(char *digest, u16 alg_id,
++ struct crypto_shash *tfm)
+ {
+- struct tpm_digest d = { .alg_id = TPM_ALG_SHA1, .digest = {0} };
++ struct tpm_digest d = { .alg_id = alg_id, .digest = {0} };
+ int rc;
+ u32 i;
+ SHASH_DESC_ON_STACK(shash, tfm);
+
+ shash->tfm = tfm;
+
++ pr_devel("calculating the boot-aggregate based on TPM bank: %04x\n",
++ d.alg_id);
++
+ rc = crypto_shash_init(shash);
+ if (rc != 0)
+ return rc;
+@@ -675,24 +686,48 @@ static int __init ima_calc_boot_aggregate_tfm(char *digest,
+ for (i = TPM_PCR0; i < TPM_PCR8; i++) {
+ ima_pcrread(i, &d);
+ /* now accumulate with current aggregate */
+- rc = crypto_shash_update(shash, d.digest, TPM_DIGEST_SIZE);
++ rc = crypto_shash_update(shash, d.digest,
++ crypto_shash_digestsize(tfm));
+ }
+ if (!rc)
+ crypto_shash_final(shash, digest);
+ return rc;
+ }
+
+-int __init ima_calc_boot_aggregate(struct ima_digest_data *hash)
++int ima_calc_boot_aggregate(struct ima_digest_data *hash)
+ {
+ struct crypto_shash *tfm;
+- int rc;
++ u16 crypto_id, alg_id;
++ int rc, i, bank_idx = -1;
++
++ for (i = 0; i < ima_tpm_chip->nr_allocated_banks; i++) {
++ crypto_id = ima_tpm_chip->allocated_banks[i].crypto_id;
++ if (crypto_id == hash->algo) {
++ bank_idx = i;
++ break;
++ }
++
++ if (crypto_id == HASH_ALGO_SHA256)
++ bank_idx = i;
++
++ if (bank_idx == -1 && crypto_id == HASH_ALGO_SHA1)
++ bank_idx = i;
++ }
++
++ if (bank_idx == -1) {
++ pr_err("No suitable TPM algorithm for boot aggregate\n");
++ return 0;
++ }
++
++ hash->algo = ima_tpm_chip->allocated_banks[bank_idx].crypto_id;
+
+ tfm = ima_alloc_tfm(hash->algo);
+ if (IS_ERR(tfm))
+ return PTR_ERR(tfm);
+
+ hash->length = crypto_shash_digestsize(tfm);
+- rc = ima_calc_boot_aggregate_tfm(hash->digest, tfm);
++ alg_id = ima_tpm_chip->allocated_banks[bank_idx].alg_id;
++ rc = ima_calc_boot_aggregate_tfm(hash->digest, alg_id, tfm);
+
+ ima_free_tfm(tfm);
+
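The bank-selection loop added to `ima_calc_boot_aggregate()` prefers the configured IMA hash algorithm, then SHA256, then SHA1. A Python sketch of the same preference order (the list-of-strings representation of the allocated banks is a simplification of the chip structure):

```python
def pick_boot_aggregate_bank(banks, ima_algo):
    """banks: crypto ids of the chip's allocated PCR banks, in order.
    Returns the index of the preferred bank, or -1 if none is usable."""
    bank_idx = -1
    for i, crypto_id in enumerate(banks):
        if crypto_id == ima_algo:
            return i                  # exact match wins immediately
        if crypto_id == "sha256":
            bank_idx = i              # sha256 overrides an earlier sha1
        if bank_idx == -1 and crypto_id == "sha1":
            bank_idx = i              # sha1 only as a last resort
    return bank_idx
```

Note that, as in the C loop, SHA256 overrides a previously remembered SHA1 bank, but SHA1 never overrides SHA256.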
+diff --git a/security/integrity/ima/ima_init.c b/security/integrity/ima/ima_init.c
+index 567468188a61..4902fe7bd570 100644
+--- a/security/integrity/ima/ima_init.c
++++ b/security/integrity/ima/ima_init.c
+@@ -19,13 +19,13 @@
+ #include "ima.h"
+
+ /* name for boot aggregate entry */
+-static const char boot_aggregate_name[] = "boot_aggregate";
++const char boot_aggregate_name[] = "boot_aggregate";
+ struct tpm_chip *ima_tpm_chip;
+
+ /* Add the boot aggregate to the IMA measurement list and extend
+ * the PCR register.
+ *
+- * Calculate the boot aggregate, a SHA1 over tpm registers 0-7,
++ * Calculate the boot aggregate, a hash over tpm registers 0-7,
+ * assuming a TPM chip exists, and zeroes if the TPM chip does not
+ * exist. Add the boot aggregate measurement to the measurement
+ * list and extend the PCR register.
+@@ -49,15 +49,27 @@ static int __init ima_add_boot_aggregate(void)
+ int violation = 0;
+ struct {
+ struct ima_digest_data hdr;
+- char digest[TPM_DIGEST_SIZE];
++ char digest[TPM_MAX_DIGEST_SIZE];
+ } hash;
+
+ memset(iint, 0, sizeof(*iint));
+ memset(&hash, 0, sizeof(hash));
+ iint->ima_hash = &hash.hdr;
+- iint->ima_hash->algo = HASH_ALGO_SHA1;
+- iint->ima_hash->length = SHA1_DIGEST_SIZE;
+-
++ iint->ima_hash->algo = ima_hash_algo;
++ iint->ima_hash->length = hash_digest_size[ima_hash_algo];
++
++ /*
++ * With TPM 2.0 hash agility, TPM chips could support multiple TPM
++ * PCR banks, allowing firmware to configure and enable different
++ * banks. The SHA1 bank is not necessarily enabled.
++ *
++ * Use the same hash algorithm for reading the TPM PCRs as for
++ * calculating the boot aggregate digest. Preference is given to
++ * the configured IMA default hash algorithm. Otherwise, use the
++ * TCG required banks - SHA256 for TPM 2.0, SHA1 for TPM 1.2.
++ * Ultimately select SHA1 also for TPM 2.0 if the SHA256 PCR bank
++ * is not found.
++ */
+ if (ima_tpm_chip) {
+ result = ima_calc_boot_aggregate(&hash.hdr);
+ if (result < 0) {
+diff --git a/security/integrity/ima/ima_main.c b/security/integrity/ima/ima_main.c
+index 9d0abedeae77..f96f151294e6 100644
+--- a/security/integrity/ima/ima_main.c
++++ b/security/integrity/ima/ima_main.c
+@@ -792,6 +792,9 @@ static int __init init_ima(void)
+ error = ima_init();
+ }
+
++ if (error)
++ return error;
++
+ error = register_blocking_lsm_notifier(&ima_lsm_policy_notifier);
+ if (error)
+ pr_warn("Couldn't register LSM notifier, error %d\n", error);
+diff --git a/security/integrity/ima/ima_policy.c b/security/integrity/ima/ima_policy.c
+index c334e0dc6083..e493063a3c34 100644
+--- a/security/integrity/ima/ima_policy.c
++++ b/security/integrity/ima/ima_policy.c
+@@ -204,7 +204,7 @@ static struct ima_rule_entry *arch_policy_entry __ro_after_init;
+ static LIST_HEAD(ima_default_rules);
+ static LIST_HEAD(ima_policy_rules);
+ static LIST_HEAD(ima_temp_rules);
+-static struct list_head *ima_rules;
++static struct list_head *ima_rules = &ima_default_rules;
+
+ /* Pre-allocated buffer used for matching keyrings. */
+ static char *ima_keyrings;
+@@ -644,9 +644,12 @@ static void add_rules(struct ima_rule_entry *entries, int count,
+ list_add_tail(&entry->list, &ima_policy_rules);
+ }
+ if (entries[i].action == APPRAISE) {
+- temp_ima_appraise |= ima_appraise_flag(entries[i].func);
+- if (entries[i].func == POLICY_CHECK)
+- temp_ima_appraise |= IMA_APPRAISE_POLICY;
++ if (entries != build_appraise_rules)
++ temp_ima_appraise |=
++ ima_appraise_flag(entries[i].func);
++ else
++ build_ima_appraise |=
++ ima_appraise_flag(entries[i].func);
+ }
+ }
+ }
+@@ -765,7 +768,6 @@ void __init ima_init_policy(void)
+ ARRAY_SIZE(default_appraise_rules),
+ IMA_DEFAULT_POLICY);
+
+- ima_rules = &ima_default_rules;
+ ima_update_policy_flag();
+ }
+
+diff --git a/security/integrity/ima/ima_template_lib.c b/security/integrity/ima/ima_template_lib.c
+index 9cd1e50f3ccc..635c6ac05050 100644
+--- a/security/integrity/ima/ima_template_lib.c
++++ b/security/integrity/ima/ima_template_lib.c
+@@ -286,6 +286,24 @@ int ima_eventdigest_init(struct ima_event_data *event_data,
+ goto out;
+ }
+
++ if ((const char *)event_data->filename == boot_aggregate_name) {
++ if (ima_tpm_chip) {
++ hash.hdr.algo = HASH_ALGO_SHA1;
++ result = ima_calc_boot_aggregate(&hash.hdr);
++
++ /* algo can change depending on available PCR banks */
++ if (!result && hash.hdr.algo != HASH_ALGO_SHA1)
++ result = -EINVAL;
++
++ if (result < 0)
++ memset(&hash, 0, sizeof(hash));
++ }
++
++ cur_digest = hash.hdr.digest;
++ cur_digestsize = hash_digest_size[HASH_ALGO_SHA1];
++ goto out;
++ }
++
+ if (!event_data->file) /* missing info to re-calculate the digest */
+ return -EINVAL;
+
+diff --git a/security/lockdown/lockdown.c b/security/lockdown/lockdown.c
+index 5a952617a0eb..87cbdc64d272 100644
+--- a/security/lockdown/lockdown.c
++++ b/security/lockdown/lockdown.c
+@@ -150,7 +150,7 @@ static int __init lockdown_secfs_init(void)
+ {
+ struct dentry *dentry;
+
+- dentry = securityfs_create_file("lockdown", 0600, NULL, NULL,
++ dentry = securityfs_create_file("lockdown", 0644, NULL, NULL,
+ &lockdown_ops);
+ return PTR_ERR_OR_ZERO(dentry);
+ }
+diff --git a/security/selinux/ss/policydb.c b/security/selinux/ss/policydb.c
+index c21b922e5ebe..1a4f74e7a267 100644
+--- a/security/selinux/ss/policydb.c
++++ b/security/selinux/ss/policydb.c
+@@ -2504,6 +2504,7 @@ int policydb_read(struct policydb *p, void *fp)
+ if (rc)
+ goto bad;
+
++ rc = -ENOMEM;
+ p->type_attr_map_array = kvcalloc(p->p_types.nprim,
+ sizeof(*p->type_attr_map_array),
+ GFP_KERNEL);
+diff --git a/tools/cgroup/iocost_monitor.py b/tools/cgroup/iocost_monitor.py
+index 9d8e9613008a..103605f5be8c 100644
+--- a/tools/cgroup/iocost_monitor.py
++++ b/tools/cgroup/iocost_monitor.py
+@@ -112,14 +112,14 @@ class IocStat:
+
+ def dict(self, now):
+ return { 'device' : devname,
+- 'timestamp' : str(now),
+- 'enabled' : str(int(self.enabled)),
+- 'running' : str(int(self.running)),
+- 'period_ms' : str(self.period_ms),
+- 'period_at' : str(self.period_at),
+- 'period_vtime_at' : str(self.vperiod_at),
+- 'busy_level' : str(self.busy_level),
+- 'vrate_pct' : str(self.vrate_pct), }
++ 'timestamp' : now,
++ 'enabled' : self.enabled,
++ 'running' : self.running,
++ 'period_ms' : self.period_ms,
++ 'period_at' : self.period_at,
++ 'period_vtime_at' : self.vperiod_at,
++ 'busy_level' : self.busy_level,
++ 'vrate_pct' : self.vrate_pct, }
+
+ def table_preamble_str(self):
+ state = ('RUN' if self.running else 'IDLE') if self.enabled else 'OFF'
+@@ -179,19 +179,19 @@ class IocgStat:
+
+ def dict(self, now, path):
+ out = { 'cgroup' : path,
+- 'timestamp' : str(now),
+- 'is_active' : str(int(self.is_active)),
+- 'weight' : str(self.weight),
+- 'weight_active' : str(self.active),
+- 'weight_inuse' : str(self.inuse),
+- 'hweight_active_pct' : str(self.hwa_pct),
+- 'hweight_inuse_pct' : str(self.hwi_pct),
+- 'inflight_pct' : str(self.inflight_pct),
+- 'debt_ms' : str(self.debt_ms),
+- 'use_delay' : str(self.use_delay),
+- 'delay_ms' : str(self.delay_ms),
+- 'usage_pct' : str(self.usage),
+- 'address' : str(hex(self.address)) }
++ 'timestamp' : now,
++ 'is_active' : self.is_active,
++ 'weight' : self.weight,
++ 'weight_active' : self.active,
++ 'weight_inuse' : self.inuse,
++ 'hweight_active_pct' : self.hwa_pct,
++ 'hweight_inuse_pct' : self.hwi_pct,
++ 'inflight_pct' : self.inflight_pct,
++ 'debt_ms' : self.debt_ms,
++ 'use_delay' : self.use_delay,
++ 'delay_ms' : self.delay_ms,
++ 'usage_pct' : self.usage,
++ 'address' : self.address }
+ for i in range(len(self.usages)):
+ out[f'usage_pct_{i}'] = str(self.usages[i])
+ return out
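The iocost_monitor.py hunks drop the `str()` wrappers so the dictionaries serialize with native JSON types instead of everything-as-string. A small illustration of the difference (values are made up):

```python
import json

# Old style: every numeric/boolean field stringified before serialization.
stringly = {'enabled': str(int(True)), 'period_ms': str(50)}
# New style: native types, so JSON consumers get real numbers/booleans.
native = {'enabled': True, 'period_ms': 50}

old_json = json.dumps(stringly)
new_json = json.dumps(native)
```

Downstream tools reading the monitor's JSON output no longer need to cast `"1"` back to a boolean or `"50"` back to an integer.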
+diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
+index 7bbf1b65be10..ad77cf9bb37e 100644
+--- a/tools/include/uapi/linux/bpf.h
++++ b/tools/include/uapi/linux/bpf.h
+@@ -1589,6 +1589,13 @@ union bpf_attr {
+ * Grow or shrink the room for data in the packet associated to
+ * *skb* by *len_diff*, and according to the selected *mode*.
+ *
++ * By default, the helper will reset any offloaded checksum
++ * indicator of the skb to CHECKSUM_NONE. This can be avoided
++ * by the following flag:
++ *
++ * * **BPF_F_ADJ_ROOM_NO_CSUM_RESET**: Do not reset offloaded
++ * checksum data of the skb to CHECKSUM_NONE.
++ *
+ * There are two supported modes at this time:
+ *
+ * * **BPF_ADJ_ROOM_MAC**: Adjust room at the mac layer
+@@ -3235,6 +3242,7 @@ enum {
+ BPF_F_ADJ_ROOM_ENCAP_L3_IPV6 = (1ULL << 2),
+ BPF_F_ADJ_ROOM_ENCAP_L4_GRE = (1ULL << 3),
+ BPF_F_ADJ_ROOM_ENCAP_L4_UDP = (1ULL << 4),
++ BPF_F_ADJ_ROOM_NO_CSUM_RESET = (1ULL << 5),
+ };
+
+ enum {
+diff --git a/tools/lib/api/fs/fs.c b/tools/lib/api/fs/fs.c
+index 027b18f7ed8c..82f53d81a7a7 100644
+--- a/tools/lib/api/fs/fs.c
++++ b/tools/lib/api/fs/fs.c
+@@ -90,6 +90,7 @@ struct fs {
+ const char * const *mounts;
+ char path[PATH_MAX];
+ bool found;
++ bool checked;
+ long magic;
+ };
+
+@@ -111,31 +112,37 @@ static struct fs fs__entries[] = {
+ .name = "sysfs",
+ .mounts = sysfs__fs_known_mountpoints,
+ .magic = SYSFS_MAGIC,
++ .checked = false,
+ },
+ [FS__PROCFS] = {
+ .name = "proc",
+ .mounts = procfs__known_mountpoints,
+ .magic = PROC_SUPER_MAGIC,
++ .checked = false,
+ },
+ [FS__DEBUGFS] = {
+ .name = "debugfs",
+ .mounts = debugfs__known_mountpoints,
+ .magic = DEBUGFS_MAGIC,
++ .checked = false,
+ },
+ [FS__TRACEFS] = {
+ .name = "tracefs",
+ .mounts = tracefs__known_mountpoints,
+ .magic = TRACEFS_MAGIC,
++ .checked = false,
+ },
+ [FS__HUGETLBFS] = {
+ .name = "hugetlbfs",
+ .mounts = hugetlbfs__known_mountpoints,
+ .magic = HUGETLBFS_MAGIC,
++ .checked = false,
+ },
+ [FS__BPF_FS] = {
+ .name = "bpf",
+ .mounts = bpf_fs__known_mountpoints,
+ .magic = BPF_FS_MAGIC,
++ .checked = false,
+ },
+ };
+
+@@ -158,6 +165,7 @@ static bool fs__read_mounts(struct fs *fs)
+ }
+
+ fclose(fp);
++ fs->checked = true;
+ return fs->found = found;
+ }
+
+@@ -220,6 +228,7 @@ static bool fs__env_override(struct fs *fs)
+ return false;
+
+ fs->found = true;
++ fs->checked = true;
+ strncpy(fs->path, override_path, sizeof(fs->path) - 1);
+ fs->path[sizeof(fs->path) - 1] = '\0';
+ return true;
+@@ -246,6 +255,14 @@ static const char *fs__mountpoint(int idx)
+ if (fs->found)
+ return (const char *)fs->path;
+
++	/* The mount point was already checked and did not exist, so
++	 * return NULL to avoid scanning again.
++ * This makes the found and not found paths cost equivalent
++ * in case of multiple calls.
++ */
++ if (fs->checked)
++ return NULL;
++
+ return fs__get_mountpoint(fs);
+ }
+
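The fs.c change adds a `checked` flag so that a *negative* lookup result is cached too: a missing mount point is scanned once, not on every call. A Python sketch of that memoization pattern (class and method names are illustrative, not from the tools/lib/api code):

```python
class MountPoint:
    """Cache both found and not-found results of an expensive scan."""

    def __init__(self, scan):
        self._scan = scan     # expensive lookup; returns path or None
        self.found = None
        self.checked = False

    def get(self):
        if self.found is not None:
            return self.found     # positive result cached
        if self.checked:
            return None           # negative result cached: don't rescan
        self.checked = True
        self.found = self._scan()
        return self.found
```

This makes the found and not-found paths cost-equivalent across repeated calls, which is exactly what the comment in the hunk describes.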
+diff --git a/tools/lib/api/fs/fs.h b/tools/lib/api/fs/fs.h
+index 936edb95e1f3..aa222ca30311 100644
+--- a/tools/lib/api/fs/fs.h
++++ b/tools/lib/api/fs/fs.h
+@@ -18,6 +18,18 @@
+ const char *name##__mount(void); \
+ bool name##__configured(void); \
+
++/*
++ * The xxxx__mountpoint() entry points find the first matching mount point for each
++ * filesystems listed below, where xxxx is the filesystem type.
++ *
++ * The interface is as follows:
++ *
++ * - If a mount point is found on first call, it is cached and used for all
++ * subsequent calls.
++ *
++ * - If a mount point is not found, NULL is returned on first call and all
++ * subsequent calls.
++ */
+ FS(sysfs)
+ FS(procfs)
+ FS(debugfs)
+diff --git a/tools/lib/bpf/hashmap.c b/tools/lib/bpf/hashmap.c
+index 54c30c802070..cffb96202e0d 100644
+--- a/tools/lib/bpf/hashmap.c
++++ b/tools/lib/bpf/hashmap.c
+@@ -59,7 +59,14 @@ struct hashmap *hashmap__new(hashmap_hash_fn hash_fn,
+
+ void hashmap__clear(struct hashmap *map)
+ {
++ struct hashmap_entry *cur, *tmp;
++ int bkt;
++
++ hashmap__for_each_entry_safe(map, cur, tmp, bkt) {
++ free(cur);
++ }
+ free(map->buckets);
++ map->buckets = NULL;
+ map->cap = map->cap_bits = map->sz = 0;
+ }
+
+diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
+index 8f480e29a6b0..0c5b4fb553fb 100644
+--- a/tools/lib/bpf/libbpf.c
++++ b/tools/lib/bpf/libbpf.c
+@@ -3482,107 +3482,111 @@ bpf_object__populate_internal_map(struct bpf_object *obj, struct bpf_map *map)
+ return 0;
+ }
+
++static void bpf_map__destroy(struct bpf_map *map);
++
++static int bpf_object__create_map(struct bpf_object *obj, struct bpf_map *map)
++{
++ struct bpf_create_map_attr create_attr;
++ struct bpf_map_def *def = &map->def;
++
++ memset(&create_attr, 0, sizeof(create_attr));
++
++ if (obj->caps.name)
++ create_attr.name = map->name;
++ create_attr.map_ifindex = map->map_ifindex;
++ create_attr.map_type = def->type;
++ create_attr.map_flags = def->map_flags;
++ create_attr.key_size = def->key_size;
++ create_attr.value_size = def->value_size;
++
++ if (def->type == BPF_MAP_TYPE_PERF_EVENT_ARRAY && !def->max_entries) {
++ int nr_cpus;
++
++ nr_cpus = libbpf_num_possible_cpus();
++ if (nr_cpus < 0) {
++ pr_warn("map '%s': failed to determine number of system CPUs: %d\n",
++ map->name, nr_cpus);
++ return nr_cpus;
++ }
++ pr_debug("map '%s': setting size to %d\n", map->name, nr_cpus);
++ create_attr.max_entries = nr_cpus;
++ } else {
++ create_attr.max_entries = def->max_entries;
++ }
++
++ if (bpf_map__is_struct_ops(map))
++ create_attr.btf_vmlinux_value_type_id =
++ map->btf_vmlinux_value_type_id;
++
++ create_attr.btf_fd = 0;
++ create_attr.btf_key_type_id = 0;
++ create_attr.btf_value_type_id = 0;
++ if (obj->btf && !bpf_map_find_btf_info(obj, map)) {
++ create_attr.btf_fd = btf__fd(obj->btf);
++ create_attr.btf_key_type_id = map->btf_key_type_id;
++ create_attr.btf_value_type_id = map->btf_value_type_id;
++ }
++
++ map->fd = bpf_create_map_xattr(&create_attr);
++ if (map->fd < 0 && (create_attr.btf_key_type_id ||
++ create_attr.btf_value_type_id)) {
++ char *cp, errmsg[STRERR_BUFSIZE];
++ int err = -errno;
++
++ cp = libbpf_strerror_r(err, errmsg, sizeof(errmsg));
++ pr_warn("Error in bpf_create_map_xattr(%s):%s(%d). Retrying without BTF.\n",
++ map->name, cp, err);
++ create_attr.btf_fd = 0;
++ create_attr.btf_key_type_id = 0;
++ create_attr.btf_value_type_id = 0;
++ map->btf_key_type_id = 0;
++ map->btf_value_type_id = 0;
++ map->fd = bpf_create_map_xattr(&create_attr);
++ }
++
++ if (map->fd < 0)
++ return -errno;
++
++ return 0;
++}
++
+ static int
+ bpf_object__create_maps(struct bpf_object *obj)
+ {
+- struct bpf_create_map_attr create_attr = {};
+- int nr_cpus = 0;
+- unsigned int i;
++ struct bpf_map *map;
++ char *cp, errmsg[STRERR_BUFSIZE];
++ unsigned int i, j;
+ int err;
+
+ for (i = 0; i < obj->nr_maps; i++) {
+- struct bpf_map *map = &obj->maps[i];
+- struct bpf_map_def *def = &map->def;
+- char *cp, errmsg[STRERR_BUFSIZE];
+- int *pfd = &map->fd;
++ map = &obj->maps[i];
+
+ if (map->pin_path) {
+ err = bpf_object__reuse_map(map);
+ if (err) {
+- pr_warn("error reusing pinned map %s\n",
++ pr_warn("map '%s': error reusing pinned map\n",
+ map->name);
+- return err;
++ goto err_out;
+ }
+ }
+
+ if (map->fd >= 0) {
+- pr_debug("skip map create (preset) %s: fd=%d\n",
++ pr_debug("map '%s': skipping creation (preset fd=%d)\n",
+ map->name, map->fd);
+ continue;
+ }
+
+- if (obj->caps.name)
+- create_attr.name = map->name;
+- create_attr.map_ifindex = map->map_ifindex;
+- create_attr.map_type = def->type;
+- create_attr.map_flags = def->map_flags;
+- create_attr.key_size = def->key_size;
+- create_attr.value_size = def->value_size;
+- if (def->type == BPF_MAP_TYPE_PERF_EVENT_ARRAY &&
+- !def->max_entries) {
+- if (!nr_cpus)
+- nr_cpus = libbpf_num_possible_cpus();
+- if (nr_cpus < 0) {
+- pr_warn("failed to determine number of system CPUs: %d\n",
+- nr_cpus);
+- err = nr_cpus;
+- goto err_out;
+- }
+- pr_debug("map '%s': setting size to %d\n",
+- map->name, nr_cpus);
+- create_attr.max_entries = nr_cpus;
+- } else {
+- create_attr.max_entries = def->max_entries;
+- }
+- create_attr.btf_fd = 0;
+- create_attr.btf_key_type_id = 0;
+- create_attr.btf_value_type_id = 0;
+- if (bpf_map_type__is_map_in_map(def->type) &&
+- map->inner_map_fd >= 0)
+- create_attr.inner_map_fd = map->inner_map_fd;
+- if (bpf_map__is_struct_ops(map))
+- create_attr.btf_vmlinux_value_type_id =
+- map->btf_vmlinux_value_type_id;
+-
+- if (obj->btf && !bpf_map_find_btf_info(obj, map)) {
+- create_attr.btf_fd = btf__fd(obj->btf);
+- create_attr.btf_key_type_id = map->btf_key_type_id;
+- create_attr.btf_value_type_id = map->btf_value_type_id;
+- }
+-
+- *pfd = bpf_create_map_xattr(&create_attr);
+- if (*pfd < 0 && (create_attr.btf_key_type_id ||
+- create_attr.btf_value_type_id)) {
+- err = -errno;
+- cp = libbpf_strerror_r(err, errmsg, sizeof(errmsg));
+- pr_warn("Error in bpf_create_map_xattr(%s):%s(%d). Retrying without BTF.\n",
+- map->name, cp, err);
+- create_attr.btf_fd = 0;
+- create_attr.btf_key_type_id = 0;
+- create_attr.btf_value_type_id = 0;
+- map->btf_key_type_id = 0;
+- map->btf_value_type_id = 0;
+- *pfd = bpf_create_map_xattr(&create_attr);
+- }
+-
+- if (*pfd < 0) {
+- size_t j;
++ err = bpf_object__create_map(obj, map);
++ if (err)
++ goto err_out;
+
+- err = -errno;
+-err_out:
+- cp = libbpf_strerror_r(err, errmsg, sizeof(errmsg));
+- pr_warn("failed to create map (name: '%s'): %s(%d)\n",
+- map->name, cp, err);
+- pr_perm_msg(err);
+- for (j = 0; j < i; j++)
+- zclose(obj->maps[j].fd);
+- return err;
+- }
++ pr_debug("map '%s': created successfully, fd=%d\n", map->name,
++ map->fd);
+
+ if (bpf_map__is_internal(map)) {
+ err = bpf_object__populate_internal_map(obj, map);
+ if (err < 0) {
+- zclose(*pfd);
++ zclose(map->fd);
+ goto err_out;
+ }
+ }
+@@ -3590,16 +3594,23 @@ err_out:
+ if (map->pin_path && !map->pinned) {
+ err = bpf_map__pin(map, NULL);
+ if (err) {
+- pr_warn("failed to auto-pin map name '%s' at '%s'\n",
+- map->name, map->pin_path);
+- return err;
++ pr_warn("map '%s': failed to auto-pin at '%s': %d\n",
++ map->name, map->pin_path, err);
++ zclose(map->fd);
++ goto err_out;
+ }
+ }
+-
+- pr_debug("created map %s: fd=%d\n", map->name, *pfd);
+ }
+
+ return 0;
++
++err_out:
++ cp = libbpf_strerror_r(err, errmsg, sizeof(errmsg));
++ pr_warn("map '%s': failed to create: %s(%d)\n", map->name, cp, err);
++ pr_perm_msg(err);
++ for (j = 0; j < i; j++)
++ zclose(obj->maps[j].fd);
++ return err;
+ }
+
+ static int
+@@ -5955,6 +5966,32 @@ int bpf_object__pin(struct bpf_object *obj, const char *path)
+ return 0;
+ }
+
++static void bpf_map__destroy(struct bpf_map *map)
++{
++ if (map->clear_priv)
++ map->clear_priv(map, map->priv);
++ map->priv = NULL;
++ map->clear_priv = NULL;
++
++ if (map->mmaped) {
++ munmap(map->mmaped, bpf_map_mmap_sz(map));
++ map->mmaped = NULL;
++ }
++
++ if (map->st_ops) {
++ zfree(&map->st_ops->data);
++ zfree(&map->st_ops->progs);
++ zfree(&map->st_ops->kern_func_off);
++ zfree(&map->st_ops);
++ }
++
++ zfree(&map->name);
++ zfree(&map->pin_path);
++
++ if (map->fd >= 0)
++ zclose(map->fd);
++}
++
+ void bpf_object__close(struct bpf_object *obj)
+ {
+ size_t i;
+@@ -5970,29 +6007,8 @@ void bpf_object__close(struct bpf_object *obj)
+ btf__free(obj->btf);
+ btf_ext__free(obj->btf_ext);
+
+- for (i = 0; i < obj->nr_maps; i++) {
+- struct bpf_map *map = &obj->maps[i];
+-
+- if (map->clear_priv)
+- map->clear_priv(map, map->priv);
+- map->priv = NULL;
+- map->clear_priv = NULL;
+-
+- if (map->mmaped) {
+- munmap(map->mmaped, bpf_map_mmap_sz(map));
+- map->mmaped = NULL;
+- }
+-
+- if (map->st_ops) {
+- zfree(&map->st_ops->data);
+- zfree(&map->st_ops->progs);
+- zfree(&map->st_ops->kern_func_off);
+- zfree(&map->st_ops);
+- }
+-
+- zfree(&map->name);
+- zfree(&map->pin_path);
+- }
++ for (i = 0; i < obj->nr_maps; i++)
++ bpf_map__destroy(&obj->maps[i]);
+
+ zfree(&obj->kconfig);
+ zfree(&obj->externs);
+@@ -6672,6 +6688,7 @@ int libbpf_find_vmlinux_btf_id(const char *name,
+ enum bpf_attach_type attach_type)
+ {
+ struct btf *btf;
++ int err;
+
+ btf = libbpf_find_kernel_btf();
+ if (IS_ERR(btf)) {
+@@ -6679,7 +6696,9 @@ int libbpf_find_vmlinux_btf_id(const char *name,
+ return -EINVAL;
+ }
+
+- return __find_vmlinux_btf_id(btf, name, attach_type);
++ err = __find_vmlinux_btf_id(btf, name, attach_type);
++ btf__free(btf);
++ return err;
+ }
+
+ static int libbpf_find_prog_btf_id(const char *name, __u32 attach_prog_fd)
+@@ -7790,9 +7809,12 @@ void perf_buffer__free(struct perf_buffer *pb)
+ if (!pb)
+ return;
+ if (pb->cpu_bufs) {
+- for (i = 0; i < pb->cpu_cnt && pb->cpu_bufs[i]; i++) {
++ for (i = 0; i < pb->cpu_cnt; i++) {
+ struct perf_cpu_buf *cpu_buf = pb->cpu_bufs[i];
+
++ if (!cpu_buf)
++ continue;
++
+ bpf_map_delete_elem(pb->map_fd, &cpu_buf->map_key);
+ perf_buffer__free_cpu_buf(pb, cpu_buf);
+ }
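The perf_buffer__free hunk changes the loop condition: the old `i < cpu_cnt && cpu_bufs[i]` stopped at the first NULL slot, leaking every buffer after a hole, while the new loop walks the full array and skips holes. A Python sketch of the fixed traversal:

```python
def free_cpu_bufs(cpu_bufs, release):
    """Release every non-empty slot; holes (None) no longer terminate
    the walk, they are simply skipped."""
    for buf in cpu_bufs:
        if buf is None:
            continue
        release(buf)
```

With a sparse per-CPU array (e.g. offline CPUs leave gaps), the old early-exit form would have released only the prefix before the first gap.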
+diff --git a/tools/lib/perf/evlist.c b/tools/lib/perf/evlist.c
+index 5b9f2ca50591..62130d28652d 100644
+--- a/tools/lib/perf/evlist.c
++++ b/tools/lib/perf/evlist.c
+@@ -125,6 +125,7 @@ static void perf_evlist__purge(struct perf_evlist *evlist)
+ void perf_evlist__exit(struct perf_evlist *evlist)
+ {
+ perf_cpu_map__put(evlist->cpus);
++ perf_cpu_map__put(evlist->all_cpus);
+ perf_thread_map__put(evlist->threads);
+ evlist->cpus = NULL;
+ evlist->threads = NULL;
+diff --git a/tools/objtool/check.c b/tools/objtool/check.c
+index 3c6da70e6084..5a867a469ba5 100644
+--- a/tools/objtool/check.c
++++ b/tools/objtool/check.c
+@@ -916,6 +916,12 @@ static int add_special_section_alts(struct objtool_file *file)
+ }
+
+ if (special_alt->group) {
++ if (!special_alt->orig_len) {
++ WARN_FUNC("empty alternative entry",
++ orig_insn->sec, orig_insn->offset);
++ continue;
++ }
++
+ ret = handle_group_alt(file, special_alt, orig_insn,
+ &new_insn);
+ if (ret)
+diff --git a/tools/perf/builtin-probe.c b/tools/perf/builtin-probe.c
+index 70548df2abb9..6b1507566770 100644
+--- a/tools/perf/builtin-probe.c
++++ b/tools/perf/builtin-probe.c
+@@ -364,6 +364,9 @@ static int perf_add_probe_events(struct perf_probe_event *pevs, int npevs)
+
+ for (k = 0; k < pev->ntevs; k++) {
+ struct probe_trace_event *tev = &pev->tevs[k];
++ /* Skipped events have no event name */
++ if (!tev->event)
++ continue;
+
+ /* We use tev's name for showing new events */
+ show_perf_probe_event(tev->group, tev->event, pev,
+diff --git a/tools/perf/util/dso.c b/tools/perf/util/dso.c
+index 91f21239608b..88b05f08f768 100644
+--- a/tools/perf/util/dso.c
++++ b/tools/perf/util/dso.c
+@@ -47,6 +47,7 @@ char dso__symtab_origin(const struct dso *dso)
+ [DSO_BINARY_TYPE__BUILD_ID_CACHE_DEBUGINFO] = 'D',
+ [DSO_BINARY_TYPE__FEDORA_DEBUGINFO] = 'f',
+ [DSO_BINARY_TYPE__UBUNTU_DEBUGINFO] = 'u',
++ [DSO_BINARY_TYPE__MIXEDUP_UBUNTU_DEBUGINFO] = 'x',
+ [DSO_BINARY_TYPE__OPENEMBEDDED_DEBUGINFO] = 'o',
+ [DSO_BINARY_TYPE__BUILDID_DEBUGINFO] = 'b',
+ [DSO_BINARY_TYPE__SYSTEM_PATH_DSO] = 'd',
+@@ -129,6 +130,21 @@ int dso__read_binary_type_filename(const struct dso *dso,
+ snprintf(filename + len, size - len, "%s", dso->long_name);
+ break;
+
++ case DSO_BINARY_TYPE__MIXEDUP_UBUNTU_DEBUGINFO:
++ /*
++ * Ubuntu can mix up /usr/lib with /lib, putting debuginfo in
++ * /usr/lib/debug/lib when it is expected to be in
++ * /usr/lib/debug/usr/lib
++ */
++ if (strlen(dso->long_name) < 9 ||
++ strncmp(dso->long_name, "/usr/lib/", 9)) {
++ ret = -1;
++ break;
++ }
++ len = __symbol__join_symfs(filename, size, "/usr/lib/debug");
++ snprintf(filename + len, size - len, "%s", dso->long_name + 4);
++ break;
++
+ case DSO_BINARY_TYPE__OPENEMBEDDED_DEBUGINFO:
+ {
+ const char *last_slash;
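The new `DSO_BINARY_TYPE__MIXEDUP_UBUNTU_DEBUGINFO` case probes for debuginfo that Ubuntu placed under `/usr/lib/debug/lib` instead of `/usr/lib/debug/usr/lib`: `long_name + 4` drops the leading `/usr`. A Python sketch of the path construction (helper name and the `symfs` parameter are illustrative):

```python
def mixedup_ubuntu_debuginfo_path(long_name, symfs="/"):
    """For a DSO at /usr/lib/X, return the mixed-up debuginfo candidate
    /usr/lib/debug/lib/X, or None when the prefix does not apply."""
    if not long_name.startswith("/usr/lib/"):
        return None
    # long_name[4:] strips "/usr", leaving "/lib/X"
    return symfs.rstrip("/") + "/usr/lib/debug" + long_name[4:]
```

So `/usr/lib/libfoo.so` maps to `/usr/lib/debug/lib/libfoo.so`, matching the comment in the hunk.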
+diff --git a/tools/perf/util/dso.h b/tools/perf/util/dso.h
+index 2db64b79617a..6e69b4fc0522 100644
+--- a/tools/perf/util/dso.h
++++ b/tools/perf/util/dso.h
+@@ -30,6 +30,7 @@ enum dso_binary_type {
+ DSO_BINARY_TYPE__BUILD_ID_CACHE_DEBUGINFO,
+ DSO_BINARY_TYPE__FEDORA_DEBUGINFO,
+ DSO_BINARY_TYPE__UBUNTU_DEBUGINFO,
++ DSO_BINARY_TYPE__MIXEDUP_UBUNTU_DEBUGINFO,
+ DSO_BINARY_TYPE__BUILDID_DEBUGINFO,
+ DSO_BINARY_TYPE__SYSTEM_PATH_DSO,
+ DSO_BINARY_TYPE__GUEST_KMODULE,
+diff --git a/tools/perf/util/probe-event.c b/tools/perf/util/probe-event.c
+index c6bcf5709564..a08f373d3305 100644
+--- a/tools/perf/util/probe-event.c
++++ b/tools/perf/util/probe-event.c
+@@ -102,7 +102,7 @@ void exit_probe_symbol_maps(void)
+ symbol__exit();
+ }
+
+-static struct ref_reloc_sym *kernel_get_ref_reloc_sym(void)
++static struct ref_reloc_sym *kernel_get_ref_reloc_sym(struct map **pmap)
+ {
+ /* kmap->ref_reloc_sym should be set if host_machine is initialized */
+ struct kmap *kmap;
+@@ -114,6 +114,10 @@ static struct ref_reloc_sym *kernel_get_ref_reloc_sym(void)
+ kmap = map__kmap(map);
+ if (!kmap)
+ return NULL;
++
++ if (pmap)
++ *pmap = map;
++
+ return kmap->ref_reloc_sym;
+ }
+
+@@ -125,7 +129,7 @@ static int kernel_get_symbol_address_by_name(const char *name, u64 *addr,
+ struct map *map;
+
+ /* ref_reloc_sym is just a label. Need a special fix*/
+- reloc_sym = kernel_get_ref_reloc_sym();
++ reloc_sym = kernel_get_ref_reloc_sym(NULL);
+ if (reloc_sym && strcmp(name, reloc_sym->name) == 0)
+ *addr = (reloc) ? reloc_sym->addr : reloc_sym->unrelocated_addr;
+ else {
+@@ -232,21 +236,22 @@ static void clear_probe_trace_events(struct probe_trace_event *tevs, int ntevs)
+ static bool kprobe_blacklist__listed(unsigned long address);
+ static bool kprobe_warn_out_range(const char *symbol, unsigned long address)
+ {
+- u64 etext_addr = 0;
+- int ret;
+-
+- /* Get the address of _etext for checking non-probable text symbol */
+- ret = kernel_get_symbol_address_by_name("_etext", &etext_addr,
+- false, false);
++ struct map *map;
++ bool ret = false;
+
+- if (ret == 0 && etext_addr < address)
+- pr_warning("%s is out of .text, skip it.\n", symbol);
+- else if (kprobe_blacklist__listed(address))
++ map = kernel_get_module_map(NULL);
++ if (map) {
++ ret = address <= map->start || map->end < address;
++ if (ret)
++ pr_warning("%s is out of .text, skip it.\n", symbol);
++ map__put(map);
++ }
++ if (!ret && kprobe_blacklist__listed(address)) {
+ pr_warning("%s is blacklisted function, skip it.\n", symbol);
+- else
+- return false;
++ ret = true;
++ }
+
+- return true;
++ return ret;
+ }
+
+ /*
+@@ -745,6 +750,7 @@ post_process_kernel_probe_trace_events(struct probe_trace_event *tevs,
+ int ntevs)
+ {
+ struct ref_reloc_sym *reloc_sym;
++ struct map *map;
+ char *tmp;
+ int i, skipped = 0;
+
+@@ -753,7 +759,7 @@ post_process_kernel_probe_trace_events(struct probe_trace_event *tevs,
+ return post_process_offline_probe_trace_events(tevs, ntevs,
+ symbol_conf.vmlinux_name);
+
+- reloc_sym = kernel_get_ref_reloc_sym();
++ reloc_sym = kernel_get_ref_reloc_sym(&map);
+ if (!reloc_sym) {
+ pr_warning("Relocated base symbol is not found!\n");
+ return -EINVAL;
+@@ -764,9 +770,13 @@ post_process_kernel_probe_trace_events(struct probe_trace_event *tevs,
+ continue;
+ if (tevs[i].point.retprobe && !kretprobe_offset_is_supported())
+ continue;
+- /* If we found a wrong one, mark it by NULL symbol */
++ /*
++ * If we found a wrong one, mark it by NULL symbol.
++ * Since addresses in debuginfo is same as objdump, we need
++ * to convert it to addresses on memory.
++ */
+ if (kprobe_warn_out_range(tevs[i].point.symbol,
+- tevs[i].point.address)) {
++ map__objdump_2mem(map, tevs[i].point.address))) {
+ tmp = NULL;
+ skipped++;
+ } else {
+@@ -2935,7 +2945,7 @@ static int find_probe_trace_events_from_map(struct perf_probe_event *pev,
+ /* Note that the symbols in the kmodule are not relocated */
+ if (!pev->uprobes && !pev->target &&
+ (!pp->retprobe || kretprobe_offset_is_supported())) {
+- reloc_sym = kernel_get_ref_reloc_sym();
++ reloc_sym = kernel_get_ref_reloc_sym(NULL);
+ if (!reloc_sym) {
+ pr_warning("Relocated base symbol is not found!\n");
+ ret = -EINVAL;
+diff --git a/tools/perf/util/probe-finder.c b/tools/perf/util/probe-finder.c
+index e4cff49384f4..55924255c535 100644
+--- a/tools/perf/util/probe-finder.c
++++ b/tools/perf/util/probe-finder.c
+@@ -101,6 +101,7 @@ enum dso_binary_type distro_dwarf_types[] = {
+ DSO_BINARY_TYPE__UBUNTU_DEBUGINFO,
+ DSO_BINARY_TYPE__OPENEMBEDDED_DEBUGINFO,
+ DSO_BINARY_TYPE__BUILDID_DEBUGINFO,
++ DSO_BINARY_TYPE__MIXEDUP_UBUNTU_DEBUGINFO,
+ DSO_BINARY_TYPE__NOT_FOUND,
+ };
+
+diff --git a/tools/perf/util/symbol.c b/tools/perf/util/symbol.c
+index 26bc6a0096ce..f28eb3e92c7f 100644
+--- a/tools/perf/util/symbol.c
++++ b/tools/perf/util/symbol.c
+@@ -79,6 +79,7 @@ static enum dso_binary_type binary_type_symtab[] = {
+ DSO_BINARY_TYPE__SYSTEM_PATH_KMODULE,
+ DSO_BINARY_TYPE__SYSTEM_PATH_KMODULE_COMP,
+ DSO_BINARY_TYPE__OPENEMBEDDED_DEBUGINFO,
++ DSO_BINARY_TYPE__MIXEDUP_UBUNTU_DEBUGINFO,
+ DSO_BINARY_TYPE__NOT_FOUND,
+ };
+
+@@ -1209,6 +1210,7 @@ int maps__merge_in(struct maps *kmaps, struct map *new_map)
+
+ m->end = old_map->start;
+ list_add_tail(&m->node, &merged);
++ new_map->pgoff += old_map->end - new_map->start;
+ new_map->start = old_map->end;
+ }
+ } else {
+@@ -1229,6 +1231,7 @@ int maps__merge_in(struct maps *kmaps, struct map *new_map)
+ * |new......| -> |new...|
+ * |old....| -> |old....|
+ */
++ new_map->pgoff += old_map->end - new_map->start;
+ new_map->start = old_map->end;
+ }
+ }
+@@ -1515,6 +1518,7 @@ static bool dso__is_compatible_symtab_type(struct dso *dso, bool kmod,
+ case DSO_BINARY_TYPE__SYSTEM_PATH_DSO:
+ case DSO_BINARY_TYPE__FEDORA_DEBUGINFO:
+ case DSO_BINARY_TYPE__UBUNTU_DEBUGINFO:
++ case DSO_BINARY_TYPE__MIXEDUP_UBUNTU_DEBUGINFO:
+ case DSO_BINARY_TYPE__BUILDID_DEBUGINFO:
+ case DSO_BINARY_TYPE__OPENEMBEDDED_DEBUGINFO:
+ return !kmod && dso->kernel == DSO_TYPE_USER;
+diff --git a/tools/power/x86/intel-speed-select/isst-config.c b/tools/power/x86/intel-speed-select/isst-config.c
+index b73763489410..3688f1101ec4 100644
+--- a/tools/power/x86/intel-speed-select/isst-config.c
++++ b/tools/power/x86/intel-speed-select/isst-config.c
+@@ -1169,6 +1169,7 @@ static void dump_clx_n_config_for_cpu(int cpu, void *arg1, void *arg2,
+
+ ctdp_level = &clx_n_pkg_dev.ctdp_level[0];
+ pbf_info = &ctdp_level->pbf_info;
++ clx_n_pkg_dev.processed = 1;
+ isst_ctdp_display_information(cpu, outf, tdp_level, &clx_n_pkg_dev);
+ free_cpu_set(ctdp_level->core_cpumask);
+ free_cpu_set(pbf_info->core_cpumask);
+diff --git a/tools/testing/selftests/bpf/.gitignore b/tools/testing/selftests/bpf/.gitignore
+index c30079c86998..35a577ca0226 100644
+--- a/tools/testing/selftests/bpf/.gitignore
++++ b/tools/testing/selftests/bpf/.gitignore
+@@ -39,4 +39,4 @@ test_cpp
+ /no_alu32
+ /bpf_gcc
+ /tools
+-
++/runqslower
+diff --git a/tools/testing/selftests/bpf/Makefile b/tools/testing/selftests/bpf/Makefile
+index 7729892e0b04..af139d0e2e0c 100644
+--- a/tools/testing/selftests/bpf/Makefile
++++ b/tools/testing/selftests/bpf/Makefile
+@@ -141,7 +141,8 @@ VMLINUX_BTF := $(abspath $(firstword $(wildcard $(VMLINUX_BTF_PATHS))))
+ $(OUTPUT)/runqslower: $(BPFOBJ)
+ $(Q)$(MAKE) $(submake_extras) -C $(TOOLSDIR)/bpf/runqslower \
+ OUTPUT=$(SCRATCH_DIR)/ VMLINUX_BTF=$(VMLINUX_BTF) \
+- BPFOBJ=$(BPFOBJ) BPF_INCLUDE=$(INCLUDE_DIR)
++ BPFOBJ=$(BPFOBJ) BPF_INCLUDE=$(INCLUDE_DIR) && \
++ cp $(SCRATCH_DIR)/runqslower $@
+
+ $(TEST_GEN_PROGS) $(TEST_GEN_PROGS_EXTENDED): $(OUTPUT)/test_stub.o $(BPFOBJ)
+
+@@ -263,6 +264,7 @@ TRUNNER_BPF_OBJS := $$(patsubst %.c,$$(TRUNNER_OUTPUT)/%.o, $$(TRUNNER_BPF_SRCS)
+ TRUNNER_BPF_SKELS := $$(patsubst %.c,$$(TRUNNER_OUTPUT)/%.skel.h, \
+ $$(filter-out $(SKEL_BLACKLIST), \
+ $$(TRUNNER_BPF_SRCS)))
++TEST_GEN_FILES += $$(TRUNNER_BPF_OBJS)
+
+ # Evaluate rules now with extra TRUNNER_XXX variables above already defined
+ $$(eval $$(call DEFINE_TEST_RUNNER_RULES,$1,$2))
+@@ -323,7 +325,7 @@ $(TRUNNER_TEST_OBJS): $(TRUNNER_OUTPUT)/%.test.o: \
+ $(TRUNNER_BPF_SKELS) \
+ $$(BPFOBJ) | $(TRUNNER_OUTPUT)
+ $$(call msg,TEST-OBJ,$(TRUNNER_BINARY),$$@)
+- cd $$(@D) && $$(CC) $$(CFLAGS) -c $(CURDIR)/$$< $$(LDLIBS) -o $$(@F)
++ cd $$(@D) && $$(CC) -I. $$(CFLAGS) -c $(CURDIR)/$$< $$(LDLIBS) -o $$(@F)
+
+ $(TRUNNER_EXTRA_OBJS): $(TRUNNER_OUTPUT)/%.o: \
+ %.c \
+diff --git a/tools/testing/selftests/bpf/config b/tools/testing/selftests/bpf/config
+index 60e3ae5d4e48..2118e23ac07a 100644
+--- a/tools/testing/selftests/bpf/config
++++ b/tools/testing/selftests/bpf/config
+@@ -25,6 +25,7 @@ CONFIG_XDP_SOCKETS=y
+ CONFIG_FTRACE_SYSCALLS=y
+ CONFIG_IPV6_TUNNEL=y
+ CONFIG_IPV6_GRE=y
++CONFIG_IPV6_SEG6_BPF=y
+ CONFIG_NET_FOU=m
+ CONFIG_NET_FOU_IP_TUNNELS=y
+ CONFIG_IPV6_FOU=m
+@@ -37,3 +38,4 @@ CONFIG_IPV6_SIT=m
+ CONFIG_BPF_JIT=y
+ CONFIG_BPF_LSM=y
+ CONFIG_SECURITY=y
++CONFIG_LIRC=y
+diff --git a/tools/testing/selftests/bpf/prog_tests/core_reloc.c b/tools/testing/selftests/bpf/prog_tests/core_reloc.c
+index 31e177adbdf1..084ed26a7d78 100644
+--- a/tools/testing/selftests/bpf/prog_tests/core_reloc.c
++++ b/tools/testing/selftests/bpf/prog_tests/core_reloc.c
+@@ -392,7 +392,7 @@ static struct core_reloc_test_case test_cases[] = {
+ .input = STRUCT_TO_CHAR_PTR(core_reloc_existence___minimal) {
+ .a = 42,
+ },
+- .input_len = sizeof(struct core_reloc_existence),
++ .input_len = sizeof(struct core_reloc_existence___minimal),
+ .output = STRUCT_TO_CHAR_PTR(core_reloc_existence_output) {
+ .a_exists = 1,
+ .b_exists = 0,
+diff --git a/tools/testing/selftests/bpf/prog_tests/flow_dissector.c b/tools/testing/selftests/bpf/prog_tests/flow_dissector.c
+index 92563898867c..9f3634c9971d 100644
+--- a/tools/testing/selftests/bpf/prog_tests/flow_dissector.c
++++ b/tools/testing/selftests/bpf/prog_tests/flow_dissector.c
+@@ -523,6 +523,7 @@ void test_flow_dissector(void)
+ CHECK_ATTR(err, tests[i].name, "bpf_map_delete_elem %d\n", err);
+ }
+
++ close(tap_fd);
+ bpf_prog_detach(prog_fd, BPF_FLOW_DISSECTOR);
+ bpf_object__close(obj);
+ }
+diff --git a/tools/testing/selftests/bpf/prog_tests/ns_current_pid_tgid.c b/tools/testing/selftests/bpf/prog_tests/ns_current_pid_tgid.c
+index 542240e16564..e74dc501b27f 100644
+--- a/tools/testing/selftests/bpf/prog_tests/ns_current_pid_tgid.c
++++ b/tools/testing/selftests/bpf/prog_tests/ns_current_pid_tgid.c
+@@ -80,9 +80,6 @@ void test_ns_current_pid_tgid(void)
+ "User pid/tgid %llu BPF pid/tgid %llu\n", id, bss.pid_tgid))
+ goto cleanup;
+ cleanup:
+- if (!link) {
+- bpf_link__destroy(link);
+- link = NULL;
+- }
++ bpf_link__destroy(link);
+ bpf_object__close(obj);
+ }
+diff --git a/tools/testing/selftests/bpf/test_align.c b/tools/testing/selftests/bpf/test_align.c
+index 0262f7b374f9..c9c9bdce9d6d 100644
+--- a/tools/testing/selftests/bpf/test_align.c
++++ b/tools/testing/selftests/bpf/test_align.c
+@@ -359,15 +359,15 @@ static struct bpf_align_test tests[] = {
+ * is still (4n), fixed offset is not changed.
+ * Also, we create a new reg->id.
+ */
+- {29, "R5_w=pkt(id=4,off=18,r=0,umax_value=2040,var_off=(0x0; 0x7fc))"},
++ {29, "R5_w=pkt(id=4,off=18,r=0,umax_value=2040,var_off=(0x0; 0x7fc)"},
+ /* At the time the word size load is performed from R5,
+ * its total fixed offset is NET_IP_ALIGN + reg->off (18)
+ * which is 20. Then the variable offset is (4n), so
+ * the total offset is 4-byte aligned and meets the
+ * load's requirements.
+ */
+- {33, "R4=pkt(id=4,off=22,r=22,umax_value=2040,var_off=(0x0; 0x7fc))"},
+- {33, "R5=pkt(id=4,off=18,r=22,umax_value=2040,var_off=(0x0; 0x7fc))"},
++ {33, "R4=pkt(id=4,off=22,r=22,umax_value=2040,var_off=(0x0; 0x7fc)"},
++ {33, "R5=pkt(id=4,off=18,r=22,umax_value=2040,var_off=(0x0; 0x7fc)"},
+ },
+ },
+ {
+@@ -410,15 +410,15 @@ static struct bpf_align_test tests[] = {
+ /* Adding 14 makes R6 be (4n+2) */
+ {9, "R6_w=inv(id=0,umin_value=14,umax_value=1034,var_off=(0x2; 0x7fc))"},
+ /* Packet pointer has (4n+2) offset */
+- {11, "R5_w=pkt(id=1,off=0,r=0,umin_value=14,umax_value=1034,var_off=(0x2; 0x7fc))"},
+- {13, "R4=pkt(id=1,off=4,r=0,umin_value=14,umax_value=1034,var_off=(0x2; 0x7fc))"},
++ {11, "R5_w=pkt(id=1,off=0,r=0,umin_value=14,umax_value=1034,var_off=(0x2; 0x7fc)"},
++ {13, "R4=pkt(id=1,off=4,r=0,umin_value=14,umax_value=1034,var_off=(0x2; 0x7fc)"},
+ /* At the time the word size load is performed from R5,
+ * its total fixed offset is NET_IP_ALIGN + reg->off (0)
+ * which is 2. Then the variable offset is (4n+2), so
+ * the total offset is 4-byte aligned and meets the
+ * load's requirements.
+ */
+- {15, "R5=pkt(id=1,off=0,r=4,umin_value=14,umax_value=1034,var_off=(0x2; 0x7fc))"},
++ {15, "R5=pkt(id=1,off=0,r=4,umin_value=14,umax_value=1034,var_off=(0x2; 0x7fc)"},
+ /* Newly read value in R6 was shifted left by 2, so has
+ * known alignment of 4.
+ */
+@@ -426,15 +426,15 @@ static struct bpf_align_test tests[] = {
+ /* Added (4n) to packet pointer's (4n+2) var_off, giving
+ * another (4n+2).
+ */
+- {19, "R5_w=pkt(id=2,off=0,r=0,umin_value=14,umax_value=2054,var_off=(0x2; 0xffc))"},
+- {21, "R4=pkt(id=2,off=4,r=0,umin_value=14,umax_value=2054,var_off=(0x2; 0xffc))"},
++ {19, "R5_w=pkt(id=2,off=0,r=0,umin_value=14,umax_value=2054,var_off=(0x2; 0xffc)"},
++ {21, "R4=pkt(id=2,off=4,r=0,umin_value=14,umax_value=2054,var_off=(0x2; 0xffc)"},
+ /* At the time the word size load is performed from R5,
+ * its total fixed offset is NET_IP_ALIGN + reg->off (0)
+ * which is 2. Then the variable offset is (4n+2), so
+ * the total offset is 4-byte aligned and meets the
+ * load's requirements.
+ */
+- {23, "R5=pkt(id=2,off=0,r=4,umin_value=14,umax_value=2054,var_off=(0x2; 0xffc))"},
++ {23, "R5=pkt(id=2,off=0,r=4,umin_value=14,umax_value=2054,var_off=(0x2; 0xffc)"},
+ },
+ },
+ {
+@@ -469,16 +469,16 @@ static struct bpf_align_test tests[] = {
+ .matches = {
+ {4, "R5_w=pkt_end(id=0,off=0,imm=0)"},
+ /* (ptr - ptr) << 2 == unknown, (4n) */
+- {6, "R5_w=inv(id=0,smax_value=9223372036854775804,umax_value=18446744073709551612,var_off=(0x0; 0xfffffffffffffffc))"},
++ {6, "R5_w=inv(id=0,smax_value=9223372036854775804,umax_value=18446744073709551612,var_off=(0x0; 0xfffffffffffffffc)"},
+ /* (4n) + 14 == (4n+2). We blow our bounds, because
+ * the add could overflow.
+ */
+- {7, "R5_w=inv(id=0,var_off=(0x2; 0xfffffffffffffffc))"},
++ {7, "R5_w=inv(id=0,smin_value=-9223372036854775806,smax_value=9223372036854775806,umin_value=2,umax_value=18446744073709551614,var_off=(0x2; 0xfffffffffffffffc)"},
+ /* Checked s>=0 */
+- {9, "R5=inv(id=0,umin_value=2,umax_value=9223372036854775806,var_off=(0x2; 0x7ffffffffffffffc))"},
++ {9, "R5=inv(id=0,umin_value=2,umax_value=9223372034707292158,var_off=(0x2; 0x7fffffff7ffffffc)"},
+ /* packet pointer + nonnegative (4n+2) */
+- {11, "R6_w=pkt(id=1,off=0,r=0,umin_value=2,umax_value=9223372036854775806,var_off=(0x2; 0x7ffffffffffffffc))"},
+- {13, "R4_w=pkt(id=1,off=4,r=0,umin_value=2,umax_value=9223372036854775806,var_off=(0x2; 0x7ffffffffffffffc))"},
++ {11, "R6_w=pkt(id=1,off=0,r=0,umin_value=2,umax_value=9223372034707292158,var_off=(0x2; 0x7fffffff7ffffffc)"},
++ {13, "R4_w=pkt(id=1,off=4,r=0,umin_value=2,umax_value=9223372034707292158,var_off=(0x2; 0x7fffffff7ffffffc)"},
+ /* NET_IP_ALIGN + (4n+2) == (4n), alignment is fine.
+ * We checked the bounds, but it might have been able
+ * to overflow if the packet pointer started in the
+@@ -486,7 +486,7 @@ static struct bpf_align_test tests[] = {
+ * So we did not get a 'range' on R6, and the access
+ * attempt will fail.
+ */
+- {15, "R6_w=pkt(id=1,off=0,r=0,umin_value=2,umax_value=9223372036854775806,var_off=(0x2; 0x7ffffffffffffffc))"},
++ {15, "R6_w=pkt(id=1,off=0,r=0,umin_value=2,umax_value=9223372034707292158,var_off=(0x2; 0x7fffffff7ffffffc)"},
+ }
+ },
+ {
+@@ -528,7 +528,7 @@ static struct bpf_align_test tests[] = {
+ /* New unknown value in R7 is (4n) */
+ {11, "R7_w=inv(id=0,umax_value=1020,var_off=(0x0; 0x3fc))"},
+ /* Subtracting it from R6 blows our unsigned bounds */
+- {12, "R6=inv(id=0,smin_value=-1006,smax_value=1034,var_off=(0x2; 0xfffffffffffffffc))"},
++ {12, "R6=inv(id=0,smin_value=-1006,smax_value=1034,umin_value=2,umax_value=18446744073709551614,var_off=(0x2; 0xfffffffffffffffc)"},
+ /* Checked s>= 0 */
+ {14, "R6=inv(id=0,umin_value=2,umax_value=1034,var_off=(0x2; 0x7fc))"},
+ /* At the time the word size load is performed from R5,
+@@ -537,7 +537,8 @@ static struct bpf_align_test tests[] = {
+ * the total offset is 4-byte aligned and meets the
+ * load's requirements.
+ */
+- {20, "R5=pkt(id=1,off=0,r=4,umin_value=2,umax_value=1034,var_off=(0x2; 0x7fc))"},
++ {20, "R5=pkt(id=1,off=0,r=4,umin_value=2,umax_value=1034,var_off=(0x2; 0x7fc)"},
++
+ },
+ },
+ {
+@@ -579,18 +580,18 @@ static struct bpf_align_test tests[] = {
+ /* Adding 14 makes R6 be (4n+2) */
+ {11, "R6_w=inv(id=0,umin_value=14,umax_value=74,var_off=(0x2; 0x7c))"},
+ /* Subtracting from packet pointer overflows ubounds */
+- {13, "R5_w=pkt(id=1,off=0,r=8,umin_value=18446744073709551542,umax_value=18446744073709551602,var_off=(0xffffffffffffff82; 0x7c))"},
++ {13, "R5_w=pkt(id=1,off=0,r=8,umin_value=18446744073709551542,umax_value=18446744073709551602,var_off=(0xffffffffffffff82; 0x7c)"},
+ /* New unknown value in R7 is (4n), >= 76 */
+ {15, "R7_w=inv(id=0,umin_value=76,umax_value=1096,var_off=(0x0; 0x7fc))"},
+ /* Adding it to packet pointer gives nice bounds again */
+- {16, "R5_w=pkt(id=2,off=0,r=0,umin_value=2,umax_value=1082,var_off=(0x2; 0x7fc))"},
++ {16, "R5_w=pkt(id=2,off=0,r=0,umin_value=2,umax_value=1082,var_off=(0x2; 0xfffffffc)"},
+ /* At the time the word size load is performed from R5,
+ * its total fixed offset is NET_IP_ALIGN + reg->off (0)
+ * which is 2. Then the variable offset is (4n+2), so
+ * the total offset is 4-byte aligned and meets the
+ * load's requirements.
+ */
+- {20, "R5=pkt(id=2,off=0,r=4,umin_value=2,umax_value=1082,var_off=(0x2; 0x7fc))"},
++ {20, "R5=pkt(id=2,off=0,r=4,umin_value=2,umax_value=1082,var_off=(0x2; 0xfffffffc)"},
+ },
+ },
+ };
+diff --git a/tools/testing/selftests/bpf/test_progs.c b/tools/testing/selftests/bpf/test_progs.c
+index b521e0a512b6..93970ec1c9e9 100644
+--- a/tools/testing/selftests/bpf/test_progs.c
++++ b/tools/testing/selftests/bpf/test_progs.c
+@@ -351,6 +351,7 @@ int extract_build_id(char *build_id, size_t size)
+ len = size;
+ memcpy(build_id, line, len);
+ build_id[len] = '\0';
++ free(line);
+ return 0;
+ err:
+ fclose(fp);
+@@ -420,6 +421,18 @@ static int libbpf_print_fn(enum libbpf_print_level level,
+ return 0;
+ }
+
++static void free_str_set(const struct str_set *set)
++{
++ int i;
++
++ if (!set)
++ return;
++
++ for (i = 0; i < set->cnt; i++)
++ free((void *)set->strs[i]);
++ free(set->strs);
++}
++
+ static int parse_str_list(const char *s, struct str_set *set)
+ {
+ char *input, *state = NULL, *next, **tmp, **strs = NULL;
+@@ -756,11 +769,11 @@ int main(int argc, char **argv)
+ fprintf(stdout, "Summary: %d/%d PASSED, %d SKIPPED, %d FAILED\n",
+ env.succ_cnt, env.sub_succ_cnt, env.skip_cnt, env.fail_cnt);
+
+- free(env.test_selector.blacklist.strs);
+- free(env.test_selector.whitelist.strs);
++ free_str_set(&env.test_selector.blacklist);
++ free_str_set(&env.test_selector.whitelist);
+ free(env.test_selector.num_set);
+- free(env.subtest_selector.blacklist.strs);
+- free(env.subtest_selector.whitelist.strs);
++ free_str_set(&env.subtest_selector.blacklist);
++ free_str_set(&env.subtest_selector.whitelist);
+ free(env.subtest_selector.num_set);
+
+ return env.fail_cnt ? EXIT_FAILURE : EXIT_SUCCESS;
* [gentoo-commits] proj/linux-patches:5.7 commit in: /
@ 2020-06-24 16:49 Mike Pagano
From: Mike Pagano @ 2020-06-24 16:49 UTC (permalink / raw
To: gentoo-commits
commit: 92f5b75384daf7cc5cc9582f0a8b4f2a208e35eb
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Jun 24 16:48:56 2020 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Jun 24 16:48:56 2020 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=92f5b753
Linux patch 5.7.6
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1005_linux-5.7.6.patch | 20870 +++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 20874 insertions(+)
diff --git a/0000_README b/0000_README
index eab26a2..1d59c5b 100644
--- a/0000_README
+++ b/0000_README
@@ -63,6 +63,10 @@ Patch: 1004_linux-5.7.5.patch
From: http://www.kernel.org
Desc: Linux 5.7.5
+Patch: 1005_linux-5.7.6.patch
+From: http://www.kernel.org
+Desc: Linux 5.7.6
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1005_linux-5.7.6.patch b/1005_linux-5.7.6.patch
new file mode 100644
index 0000000..9939e08
--- /dev/null
+++ b/1005_linux-5.7.6.patch
@@ -0,0 +1,20870 @@
+diff --git a/Makefile b/Makefile
+index c48d489f82bc..f928cd1dfdc1 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 7
+-SUBLEVEL = 5
++SUBLEVEL = 6
+ EXTRAVERSION =
+ NAME = Kleptomaniac Octopus
+
+diff --git a/arch/arm/boot/dts/aspeed-bmc-facebook-tiogapass.dts b/arch/arm/boot/dts/aspeed-bmc-facebook-tiogapass.dts
+index 5d7cbd9164d4..669980c690f9 100644
+--- a/arch/arm/boot/dts/aspeed-bmc-facebook-tiogapass.dts
++++ b/arch/arm/boot/dts/aspeed-bmc-facebook-tiogapass.dts
+@@ -112,13 +112,13 @@
+ &kcs2 {
+ // BMC KCS channel 2
+ status = "okay";
+- kcs_addr = <0xca8>;
++ aspeed,lpc-io-reg = <0xca8>;
+ };
+
+ &kcs3 {
+ // BMC KCS channel 3
+ status = "okay";
+- kcs_addr = <0xca2>;
++ aspeed,lpc-io-reg = <0xca2>;
+ };
+
+ &mac0 {
+diff --git a/arch/arm/boot/dts/aspeed-g5.dtsi b/arch/arm/boot/dts/aspeed-g5.dtsi
+index f12ec04d3cbc..bc92d3db7b78 100644
+--- a/arch/arm/boot/dts/aspeed-g5.dtsi
++++ b/arch/arm/boot/dts/aspeed-g5.dtsi
+@@ -426,22 +426,22 @@
+ #size-cells = <1>;
+ ranges = <0x0 0x0 0x80>;
+
+- kcs1: kcs1@0 {
+- compatible = "aspeed,ast2500-kcs-bmc";
++ kcs1: kcs@24 {
++ compatible = "aspeed,ast2500-kcs-bmc-v2";
++ reg = <0x24 0x1>, <0x30 0x1>, <0x3c 0x1>;
+ interrupts = <8>;
+- kcs_chan = <1>;
+ status = "disabled";
+ };
+- kcs2: kcs2@0 {
+- compatible = "aspeed,ast2500-kcs-bmc";
++ kcs2: kcs@28 {
++ compatible = "aspeed,ast2500-kcs-bmc-v2";
++ reg = <0x28 0x1>, <0x34 0x1>, <0x40 0x1>;
+ interrupts = <8>;
+- kcs_chan = <2>;
+ status = "disabled";
+ };
+- kcs3: kcs3@0 {
+- compatible = "aspeed,ast2500-kcs-bmc";
++ kcs3: kcs@2c {
++ compatible = "aspeed,ast2500-kcs-bmc-v2";
++ reg = <0x2c 0x1>, <0x38 0x1>, <0x44 0x1>;
+ interrupts = <8>;
+- kcs_chan = <3>;
+ status = "disabled";
+ };
+ };
+@@ -455,10 +455,10 @@
+ #size-cells = <1>;
+ ranges = <0x0 0x80 0x1e0>;
+
+- kcs4: kcs4@0 {
+- compatible = "aspeed,ast2500-kcs-bmc";
++ kcs4: kcs@94 {
++ compatible = "aspeed,ast2500-kcs-bmc-v2";
++ reg = <0x94 0x1>, <0x98 0x1>, <0x9c 0x1>;
+ interrupts = <8>;
+- kcs_chan = <4>;
+ status = "disabled";
+ };
+
+diff --git a/arch/arm/boot/dts/aspeed-g6.dtsi b/arch/arm/boot/dts/aspeed-g6.dtsi
+index 0a29b3b57a9d..a2d2ac720a51 100644
+--- a/arch/arm/boot/dts/aspeed-g6.dtsi
++++ b/arch/arm/boot/dts/aspeed-g6.dtsi
+@@ -65,6 +65,7 @@
+ <GIC_PPI 10 (GIC_CPU_MASK_SIMPLE(2) | IRQ_TYPE_LEVEL_LOW)>;
+ clocks = <&syscon ASPEED_CLK_HPLL>;
+ arm,cpu-registers-not-fw-configured;
++ always-on;
+ };
+
+ ahb {
+@@ -368,6 +369,7 @@
+ <&gic GIC_SPI 23 IRQ_TYPE_LEVEL_HIGH>;
+ clocks = <&syscon ASPEED_CLK_APB1>;
+ clock-names = "PCLK";
++ status = "disabled";
+ };
+
+ uart1: serial@1e783000 {
+@@ -433,22 +435,23 @@
+ #size-cells = <1>;
+ ranges = <0x0 0x0 0x80>;
+
+- kcs1: kcs1@0 {
+- compatible = "aspeed,ast2600-kcs-bmc";
++ kcs1: kcs@24 {
++ compatible = "aspeed,ast2500-kcs-bmc-v2";
++ reg = <0x24 0x1>, <0x30 0x1>, <0x3c 0x1>;
+ interrupts = <GIC_SPI 138 IRQ_TYPE_LEVEL_HIGH>;
+ kcs_chan = <1>;
+ status = "disabled";
+ };
+- kcs2: kcs2@0 {
+- compatible = "aspeed,ast2600-kcs-bmc";
++ kcs2: kcs@28 {
++ compatible = "aspeed,ast2500-kcs-bmc-v2";
++ reg = <0x28 0x1>, <0x34 0x1>, <0x40 0x1>;
+ interrupts = <GIC_SPI 139 IRQ_TYPE_LEVEL_HIGH>;
+- kcs_chan = <2>;
+ status = "disabled";
+ };
+- kcs3: kcs3@0 {
+- compatible = "aspeed,ast2600-kcs-bmc";
++ kcs3: kcs@2c {
++ compatible = "aspeed,ast2500-kcs-bmc-v2";
++ reg = <0x2c 0x1>, <0x38 0x1>, <0x44 0x1>;
+ interrupts = <GIC_SPI 140 IRQ_TYPE_LEVEL_HIGH>;
+- kcs_chan = <3>;
+ status = "disabled";
+ };
+ };
+@@ -462,10 +465,10 @@
+ #size-cells = <1>;
+ ranges = <0x0 0x80 0x1e0>;
+
+- kcs4: kcs4@0 {
+- compatible = "aspeed,ast2600-kcs-bmc";
++ kcs4: kcs@94 {
++ compatible = "aspeed,ast2500-kcs-bmc-v2";
++ reg = <0x94 0x1>, <0x98 0x1>, <0x9c 0x1>;
+ interrupts = <GIC_SPI 141 IRQ_TYPE_LEVEL_HIGH>;
+- kcs_chan = <4>;
+ status = "disabled";
+ };
+
+diff --git a/arch/arm/boot/dts/bcm2835-common.dtsi b/arch/arm/boot/dts/bcm2835-common.dtsi
+index 2b1d9d4c0cde..4119271c979d 100644
+--- a/arch/arm/boot/dts/bcm2835-common.dtsi
++++ b/arch/arm/boot/dts/bcm2835-common.dtsi
+@@ -130,7 +130,6 @@
+ compatible = "brcm,bcm2835-v3d";
+ reg = <0x7ec00000 0x1000>;
+ interrupts = <1 10>;
+- power-domains = <&pm BCM2835_POWER_DOMAIN_GRAFX_V3D>;
+ };
+
+ vc4: gpu {
+diff --git a/arch/arm/boot/dts/bcm2835-rpi-common.dtsi b/arch/arm/boot/dts/bcm2835-rpi-common.dtsi
+new file mode 100644
+index 000000000000..8a55b6cded59
+--- /dev/null
++++ b/arch/arm/boot/dts/bcm2835-rpi-common.dtsi
+@@ -0,0 +1,12 @@
++// SPDX-License-Identifier: GPL-2.0
++/*
++ * This include file covers the common peripherals and configuration between
++ * bcm2835, bcm2836 and bcm2837 implementations that interact with RPi's
++ * firmware interface.
++ */
++
++#include <dt-bindings/power/raspberrypi-power.h>
++
++&v3d {
++ power-domains = <&power RPI_POWER_DOMAIN_V3D>;
++};
+diff --git a/arch/arm/boot/dts/bcm2835.dtsi b/arch/arm/boot/dts/bcm2835.dtsi
+index 53bf4579cc22..0549686134ea 100644
+--- a/arch/arm/boot/dts/bcm2835.dtsi
++++ b/arch/arm/boot/dts/bcm2835.dtsi
+@@ -1,6 +1,7 @@
+ // SPDX-License-Identifier: GPL-2.0
+ #include "bcm283x.dtsi"
+ #include "bcm2835-common.dtsi"
++#include "bcm2835-rpi-common.dtsi"
+
+ / {
+ compatible = "brcm,bcm2835";
+diff --git a/arch/arm/boot/dts/bcm2836.dtsi b/arch/arm/boot/dts/bcm2836.dtsi
+index 82d6c4662ae4..b390006aef79 100644
+--- a/arch/arm/boot/dts/bcm2836.dtsi
++++ b/arch/arm/boot/dts/bcm2836.dtsi
+@@ -1,6 +1,7 @@
+ // SPDX-License-Identifier: GPL-2.0
+ #include "bcm283x.dtsi"
+ #include "bcm2835-common.dtsi"
++#include "bcm2835-rpi-common.dtsi"
+
+ / {
+ compatible = "brcm,bcm2836";
+diff --git a/arch/arm/boot/dts/bcm2837.dtsi b/arch/arm/boot/dts/bcm2837.dtsi
+index 9e95fee78e19..0199ec98cd61 100644
+--- a/arch/arm/boot/dts/bcm2837.dtsi
++++ b/arch/arm/boot/dts/bcm2837.dtsi
+@@ -1,5 +1,6 @@
+ #include "bcm283x.dtsi"
+ #include "bcm2835-common.dtsi"
++#include "bcm2835-rpi-common.dtsi"
+
+ / {
+ compatible = "brcm,bcm2837";
+diff --git a/arch/arm/boot/dts/r8a7743.dtsi b/arch/arm/boot/dts/r8a7743.dtsi
+index e8b340bb99bc..fff123753b85 100644
+--- a/arch/arm/boot/dts/r8a7743.dtsi
++++ b/arch/arm/boot/dts/r8a7743.dtsi
+@@ -338,7 +338,7 @@
+ #thermal-sensor-cells = <0>;
+ };
+
+- ipmmu_sy0: mmu@e6280000 {
++ ipmmu_sy0: iommu@e6280000 {
+ compatible = "renesas,ipmmu-r8a7743",
+ "renesas,ipmmu-vmsa";
+ reg = <0 0xe6280000 0 0x1000>;
+@@ -348,7 +348,7 @@
+ status = "disabled";
+ };
+
+- ipmmu_sy1: mmu@e6290000 {
++ ipmmu_sy1: iommu@e6290000 {
+ compatible = "renesas,ipmmu-r8a7743",
+ "renesas,ipmmu-vmsa";
+ reg = <0 0xe6290000 0 0x1000>;
+@@ -357,7 +357,7 @@
+ status = "disabled";
+ };
+
+- ipmmu_ds: mmu@e6740000 {
++ ipmmu_ds: iommu@e6740000 {
+ compatible = "renesas,ipmmu-r8a7743",
+ "renesas,ipmmu-vmsa";
+ reg = <0 0xe6740000 0 0x1000>;
+@@ -367,7 +367,7 @@
+ status = "disabled";
+ };
+
+- ipmmu_mp: mmu@ec680000 {
++ ipmmu_mp: iommu@ec680000 {
+ compatible = "renesas,ipmmu-r8a7743",
+ "renesas,ipmmu-vmsa";
+ reg = <0 0xec680000 0 0x1000>;
+@@ -376,7 +376,7 @@
+ status = "disabled";
+ };
+
+- ipmmu_mx: mmu@fe951000 {
++ ipmmu_mx: iommu@fe951000 {
+ compatible = "renesas,ipmmu-r8a7743",
+ "renesas,ipmmu-vmsa";
+ reg = <0 0xfe951000 0 0x1000>;
+@@ -386,7 +386,7 @@
+ status = "disabled";
+ };
+
+- ipmmu_gp: mmu@e62a0000 {
++ ipmmu_gp: iommu@e62a0000 {
+ compatible = "renesas,ipmmu-r8a7743",
+ "renesas,ipmmu-vmsa";
+ reg = <0 0xe62a0000 0 0x1000>;
+diff --git a/arch/arm/boot/dts/r8a7744.dtsi b/arch/arm/boot/dts/r8a7744.dtsi
+index def840b8b2d3..5050ac19041d 100644
+--- a/arch/arm/boot/dts/r8a7744.dtsi
++++ b/arch/arm/boot/dts/r8a7744.dtsi
+@@ -338,7 +338,7 @@
+ #thermal-sensor-cells = <0>;
+ };
+
+- ipmmu_sy0: mmu@e6280000 {
++ ipmmu_sy0: iommu@e6280000 {
+ compatible = "renesas,ipmmu-r8a7744",
+ "renesas,ipmmu-vmsa";
+ reg = <0 0xe6280000 0 0x1000>;
+@@ -348,7 +348,7 @@
+ status = "disabled";
+ };
+
+- ipmmu_sy1: mmu@e6290000 {
++ ipmmu_sy1: iommu@e6290000 {
+ compatible = "renesas,ipmmu-r8a7744",
+ "renesas,ipmmu-vmsa";
+ reg = <0 0xe6290000 0 0x1000>;
+@@ -357,7 +357,7 @@
+ status = "disabled";
+ };
+
+- ipmmu_ds: mmu@e6740000 {
++ ipmmu_ds: iommu@e6740000 {
+ compatible = "renesas,ipmmu-r8a7744",
+ "renesas,ipmmu-vmsa";
+ reg = <0 0xe6740000 0 0x1000>;
+@@ -367,7 +367,7 @@
+ status = "disabled";
+ };
+
+- ipmmu_mp: mmu@ec680000 {
++ ipmmu_mp: iommu@ec680000 {
+ compatible = "renesas,ipmmu-r8a7744",
+ "renesas,ipmmu-vmsa";
+ reg = <0 0xec680000 0 0x1000>;
+@@ -376,7 +376,7 @@
+ status = "disabled";
+ };
+
+- ipmmu_mx: mmu@fe951000 {
++ ipmmu_mx: iommu@fe951000 {
+ compatible = "renesas,ipmmu-r8a7744",
+ "renesas,ipmmu-vmsa";
+ reg = <0 0xfe951000 0 0x1000>;
+@@ -386,7 +386,7 @@
+ status = "disabled";
+ };
+
+- ipmmu_gp: mmu@e62a0000 {
++ ipmmu_gp: iommu@e62a0000 {
+ compatible = "renesas,ipmmu-r8a7744",
+ "renesas,ipmmu-vmsa";
+ reg = <0 0xe62a0000 0 0x1000>;
+diff --git a/arch/arm/boot/dts/r8a7745.dtsi b/arch/arm/boot/dts/r8a7745.dtsi
+index 7ab58d8bb740..b0d1fc24e97e 100644
+--- a/arch/arm/boot/dts/r8a7745.dtsi
++++ b/arch/arm/boot/dts/r8a7745.dtsi
+@@ -302,7 +302,7 @@
+ resets = <&cpg 407>;
+ };
+
+- ipmmu_sy0: mmu@e6280000 {
++ ipmmu_sy0: iommu@e6280000 {
+ compatible = "renesas,ipmmu-r8a7745",
+ "renesas,ipmmu-vmsa";
+ reg = <0 0xe6280000 0 0x1000>;
+@@ -312,7 +312,7 @@
+ status = "disabled";
+ };
+
+- ipmmu_sy1: mmu@e6290000 {
++ ipmmu_sy1: iommu@e6290000 {
+ compatible = "renesas,ipmmu-r8a7745",
+ "renesas,ipmmu-vmsa";
+ reg = <0 0xe6290000 0 0x1000>;
+@@ -321,7 +321,7 @@
+ status = "disabled";
+ };
+
+- ipmmu_ds: mmu@e6740000 {
++ ipmmu_ds: iommu@e6740000 {
+ compatible = "renesas,ipmmu-r8a7745",
+ "renesas,ipmmu-vmsa";
+ reg = <0 0xe6740000 0 0x1000>;
+@@ -331,7 +331,7 @@
+ status = "disabled";
+ };
+
+- ipmmu_mp: mmu@ec680000 {
++ ipmmu_mp: iommu@ec680000 {
+ compatible = "renesas,ipmmu-r8a7745",
+ "renesas,ipmmu-vmsa";
+ reg = <0 0xec680000 0 0x1000>;
+@@ -340,7 +340,7 @@
+ status = "disabled";
+ };
+
+- ipmmu_mx: mmu@fe951000 {
++ ipmmu_mx: iommu@fe951000 {
+ compatible = "renesas,ipmmu-r8a7745",
+ "renesas,ipmmu-vmsa";
+ reg = <0 0xfe951000 0 0x1000>;
+@@ -350,7 +350,7 @@
+ status = "disabled";
+ };
+
+- ipmmu_gp: mmu@e62a0000 {
++ ipmmu_gp: iommu@e62a0000 {
+ compatible = "renesas,ipmmu-r8a7745",
+ "renesas,ipmmu-vmsa";
+ reg = <0 0xe62a0000 0 0x1000>;
+diff --git a/arch/arm/boot/dts/r8a7790.dtsi b/arch/arm/boot/dts/r8a7790.dtsi
+index e5ef9fd4284a..166d5566229d 100644
+--- a/arch/arm/boot/dts/r8a7790.dtsi
++++ b/arch/arm/boot/dts/r8a7790.dtsi
+@@ -427,7 +427,7 @@
+ #thermal-sensor-cells = <0>;
+ };
+
+- ipmmu_sy0: mmu@e6280000 {
++ ipmmu_sy0: iommu@e6280000 {
+ compatible = "renesas,ipmmu-r8a7790",
+ "renesas,ipmmu-vmsa";
+ reg = <0 0xe6280000 0 0x1000>;
+@@ -437,7 +437,7 @@
+ status = "disabled";
+ };
+
+- ipmmu_sy1: mmu@e6290000 {
++ ipmmu_sy1: iommu@e6290000 {
+ compatible = "renesas,ipmmu-r8a7790",
+ "renesas,ipmmu-vmsa";
+ reg = <0 0xe6290000 0 0x1000>;
+@@ -446,7 +446,7 @@
+ status = "disabled";
+ };
+
+- ipmmu_ds: mmu@e6740000 {
++ ipmmu_ds: iommu@e6740000 {
+ compatible = "renesas,ipmmu-r8a7790",
+ "renesas,ipmmu-vmsa";
+ reg = <0 0xe6740000 0 0x1000>;
+@@ -456,7 +456,7 @@
+ status = "disabled";
+ };
+
+- ipmmu_mp: mmu@ec680000 {
++ ipmmu_mp: iommu@ec680000 {
+ compatible = "renesas,ipmmu-r8a7790",
+ "renesas,ipmmu-vmsa";
+ reg = <0 0xec680000 0 0x1000>;
+@@ -465,7 +465,7 @@
+ status = "disabled";
+ };
+
+- ipmmu_mx: mmu@fe951000 {
++ ipmmu_mx: iommu@fe951000 {
+ compatible = "renesas,ipmmu-r8a7790",
+ "renesas,ipmmu-vmsa";
+ reg = <0 0xfe951000 0 0x1000>;
+@@ -475,7 +475,7 @@
+ status = "disabled";
+ };
+
+- ipmmu_rt: mmu@ffc80000 {
++ ipmmu_rt: iommu@ffc80000 {
+ compatible = "renesas,ipmmu-r8a7790",
+ "renesas,ipmmu-vmsa";
+ reg = <0 0xffc80000 0 0x1000>;
+diff --git a/arch/arm/boot/dts/r8a7791.dtsi b/arch/arm/boot/dts/r8a7791.dtsi
+index 6e5bd86731cd..09e47cc17765 100644
+--- a/arch/arm/boot/dts/r8a7791.dtsi
++++ b/arch/arm/boot/dts/r8a7791.dtsi
+@@ -350,7 +350,7 @@
+ #thermal-sensor-cells = <0>;
+ };
+
+- ipmmu_sy0: mmu@e6280000 {
++ ipmmu_sy0: iommu@e6280000 {
+ compatible = "renesas,ipmmu-r8a7791",
+ "renesas,ipmmu-vmsa";
+ reg = <0 0xe6280000 0 0x1000>;
+@@ -360,7 +360,7 @@
+ status = "disabled";
+ };
+
+- ipmmu_sy1: mmu@e6290000 {
++ ipmmu_sy1: iommu@e6290000 {
+ compatible = "renesas,ipmmu-r8a7791",
+ "renesas,ipmmu-vmsa";
+ reg = <0 0xe6290000 0 0x1000>;
+@@ -369,7 +369,7 @@
+ status = "disabled";
+ };
+
+- ipmmu_ds: mmu@e6740000 {
++ ipmmu_ds: iommu@e6740000 {
+ compatible = "renesas,ipmmu-r8a7791",
+ "renesas,ipmmu-vmsa";
+ reg = <0 0xe6740000 0 0x1000>;
+@@ -379,7 +379,7 @@
+ status = "disabled";
+ };
+
+- ipmmu_mp: mmu@ec680000 {
++ ipmmu_mp: iommu@ec680000 {
+ compatible = "renesas,ipmmu-r8a7791",
+ "renesas,ipmmu-vmsa";
+ reg = <0 0xec680000 0 0x1000>;
+@@ -388,7 +388,7 @@
+ status = "disabled";
+ };
+
+- ipmmu_mx: mmu@fe951000 {
++ ipmmu_mx: iommu@fe951000 {
+ compatible = "renesas,ipmmu-r8a7791",
+ "renesas,ipmmu-vmsa";
+ reg = <0 0xfe951000 0 0x1000>;
+@@ -398,7 +398,7 @@
+ status = "disabled";
+ };
+
+- ipmmu_rt: mmu@ffc80000 {
++ ipmmu_rt: iommu@ffc80000 {
+ compatible = "renesas,ipmmu-r8a7791",
+ "renesas,ipmmu-vmsa";
+ reg = <0 0xffc80000 0 0x1000>;
+@@ -407,7 +407,7 @@
+ status = "disabled";
+ };
+
+- ipmmu_gp: mmu@e62a0000 {
++ ipmmu_gp: iommu@e62a0000 {
+ compatible = "renesas,ipmmu-r8a7791",
+ "renesas,ipmmu-vmsa";
+ reg = <0 0xe62a0000 0 0x1000>;
+diff --git a/arch/arm/boot/dts/r8a7793.dtsi b/arch/arm/boot/dts/r8a7793.dtsi
+index dadbda16161b..1b62a7e06b42 100644
+--- a/arch/arm/boot/dts/r8a7793.dtsi
++++ b/arch/arm/boot/dts/r8a7793.dtsi
+@@ -336,7 +336,7 @@
+ #thermal-sensor-cells = <0>;
+ };
+
+- ipmmu_sy0: mmu@e6280000 {
++ ipmmu_sy0: iommu@e6280000 {
+ compatible = "renesas,ipmmu-r8a7793",
+ "renesas,ipmmu-vmsa";
+ reg = <0 0xe6280000 0 0x1000>;
+@@ -346,7 +346,7 @@
+ status = "disabled";
+ };
+
+- ipmmu_sy1: mmu@e6290000 {
++ ipmmu_sy1: iommu@e6290000 {
+ compatible = "renesas,ipmmu-r8a7793",
+ "renesas,ipmmu-vmsa";
+ reg = <0 0xe6290000 0 0x1000>;
+@@ -355,7 +355,7 @@
+ status = "disabled";
+ };
+
+- ipmmu_ds: mmu@e6740000 {
++ ipmmu_ds: iommu@e6740000 {
+ compatible = "renesas,ipmmu-r8a7793",
+ "renesas,ipmmu-vmsa";
+ reg = <0 0xe6740000 0 0x1000>;
+@@ -365,7 +365,7 @@
+ status = "disabled";
+ };
+
+- ipmmu_mp: mmu@ec680000 {
++ ipmmu_mp: iommu@ec680000 {
+ compatible = "renesas,ipmmu-r8a7793",
+ "renesas,ipmmu-vmsa";
+ reg = <0 0xec680000 0 0x1000>;
+@@ -374,7 +374,7 @@
+ status = "disabled";
+ };
+
+- ipmmu_mx: mmu@fe951000 {
++ ipmmu_mx: iommu@fe951000 {
+ compatible = "renesas,ipmmu-r8a7793",
+ "renesas,ipmmu-vmsa";
+ reg = <0 0xfe951000 0 0x1000>;
+@@ -384,7 +384,7 @@
+ status = "disabled";
+ };
+
+- ipmmu_rt: mmu@ffc80000 {
++ ipmmu_rt: iommu@ffc80000 {
+ compatible = "renesas,ipmmu-r8a7793",
+ "renesas,ipmmu-vmsa";
+ reg = <0 0xffc80000 0 0x1000>;
+@@ -393,7 +393,7 @@
+ status = "disabled";
+ };
+
+- ipmmu_gp: mmu@e62a0000 {
++ ipmmu_gp: iommu@e62a0000 {
+ compatible = "renesas,ipmmu-r8a7793",
+ "renesas,ipmmu-vmsa";
+ reg = <0 0xe62a0000 0 0x1000>;
+diff --git a/arch/arm/boot/dts/r8a7794.dtsi b/arch/arm/boot/dts/r8a7794.dtsi
+index 2c9e7a1ebfec..8d7f8798628a 100644
+--- a/arch/arm/boot/dts/r8a7794.dtsi
++++ b/arch/arm/boot/dts/r8a7794.dtsi
+@@ -290,7 +290,7 @@
+ resets = <&cpg 407>;
+ };
+
+- ipmmu_sy0: mmu@e6280000 {
++ ipmmu_sy0: iommu@e6280000 {
+ compatible = "renesas,ipmmu-r8a7794",
+ "renesas,ipmmu-vmsa";
+ reg = <0 0xe6280000 0 0x1000>;
+@@ -300,7 +300,7 @@
+ status = "disabled";
+ };
+
+- ipmmu_sy1: mmu@e6290000 {
++ ipmmu_sy1: iommu@e6290000 {
+ compatible = "renesas,ipmmu-r8a7794",
+ "renesas,ipmmu-vmsa";
+ reg = <0 0xe6290000 0 0x1000>;
+@@ -309,7 +309,7 @@
+ status = "disabled";
+ };
+
+- ipmmu_ds: mmu@e6740000 {
++ ipmmu_ds: iommu@e6740000 {
+ compatible = "renesas,ipmmu-r8a7794",
+ "renesas,ipmmu-vmsa";
+ reg = <0 0xe6740000 0 0x1000>;
+@@ -319,7 +319,7 @@
+ status = "disabled";
+ };
+
+- ipmmu_mp: mmu@ec680000 {
++ ipmmu_mp: iommu@ec680000 {
+ compatible = "renesas,ipmmu-r8a7794",
+ "renesas,ipmmu-vmsa";
+ reg = <0 0xec680000 0 0x1000>;
+@@ -328,7 +328,7 @@
+ status = "disabled";
+ };
+
+- ipmmu_mx: mmu@fe951000 {
++ ipmmu_mx: iommu@fe951000 {
+ compatible = "renesas,ipmmu-r8a7794",
+ "renesas,ipmmu-vmsa";
+ reg = <0 0xfe951000 0 0x1000>;
+@@ -338,7 +338,7 @@
+ status = "disabled";
+ };
+
+- ipmmu_gp: mmu@e62a0000 {
++ ipmmu_gp: iommu@e62a0000 {
+ compatible = "renesas,ipmmu-r8a7794",
+ "renesas,ipmmu-vmsa";
+ reg = <0 0xe62a0000 0 0x1000>;
+diff --git a/arch/arm/boot/dts/stm32mp157a-avenger96.dts b/arch/arm/boot/dts/stm32mp157a-avenger96.dts
+index 425175f7d83c..081037b510bc 100644
+--- a/arch/arm/boot/dts/stm32mp157a-avenger96.dts
++++ b/arch/arm/boot/dts/stm32mp157a-avenger96.dts
+@@ -92,6 +92,9 @@
+ #address-cells = <1>;
+ #size-cells = <0>;
+ compatible = "snps,dwmac-mdio";
++ reset-gpios = <&gpioz 2 GPIO_ACTIVE_LOW>;
++ reset-delay-us = <1000>;
++
+ phy0: ethernet-phy@7 {
+ reg = <7>;
+ };
+diff --git a/arch/arm/boot/dts/sun8i-h2-plus-bananapi-m2-zero.dts b/arch/arm/boot/dts/sun8i-h2-plus-bananapi-m2-zero.dts
+index d277d043031b..4c6704e4c57e 100644
+--- a/arch/arm/boot/dts/sun8i-h2-plus-bananapi-m2-zero.dts
++++ b/arch/arm/boot/dts/sun8i-h2-plus-bananapi-m2-zero.dts
+@@ -31,7 +31,7 @@
+
+ pwr_led {
+ label = "bananapi-m2-zero:red:pwr";
+- gpios = <&r_pio 0 10 GPIO_ACTIVE_HIGH>; /* PL10 */
++ gpios = <&r_pio 0 10 GPIO_ACTIVE_LOW>; /* PL10 */
+ default-state = "on";
+ };
+ };
+diff --git a/arch/arm/boot/dts/vexpress-v2m-rs1.dtsi b/arch/arm/boot/dts/vexpress-v2m-rs1.dtsi
+index 5c183483ec3b..8010cdcdb37a 100644
+--- a/arch/arm/boot/dts/vexpress-v2m-rs1.dtsi
++++ b/arch/arm/boot/dts/vexpress-v2m-rs1.dtsi
+@@ -31,7 +31,7 @@
+ #interrupt-cells = <1>;
+ ranges;
+
+- nor_flash: flash@0,00000000 {
++ nor_flash: flash@0 {
+ compatible = "arm,vexpress-flash", "cfi-flash";
+ reg = <0 0x00000000 0x04000000>,
+ <4 0x00000000 0x04000000>;
+@@ -41,13 +41,13 @@
+ };
+ };
+
+- psram@1,00000000 {
++ psram@100000000 {
+ compatible = "arm,vexpress-psram", "mtd-ram";
+ reg = <1 0x00000000 0x02000000>;
+ bank-width = <4>;
+ };
+
+- ethernet@2,02000000 {
++ ethernet@202000000 {
+ compatible = "smsc,lan9118", "smsc,lan9115";
+ reg = <2 0x02000000 0x10000>;
+ interrupts = <15>;
+@@ -59,14 +59,14 @@
+ vddvario-supply = <&v2m_fixed_3v3>;
+ };
+
+- usb@2,03000000 {
++ usb@203000000 {
+ compatible = "nxp,usb-isp1761";
+ reg = <2 0x03000000 0x20000>;
+ interrupts = <16>;
+ port1-otg;
+ };
+
+- iofpga@3,00000000 {
++ iofpga@300000000 {
+ compatible = "simple-bus";
+ #address-cells = <1>;
+ #size-cells = <1>;
+diff --git a/arch/arm/mach-davinci/board-dm644x-evm.c b/arch/arm/mach-davinci/board-dm644x-evm.c
+index 3461d12bbfc0..a5d3708fedf6 100644
+--- a/arch/arm/mach-davinci/board-dm644x-evm.c
++++ b/arch/arm/mach-davinci/board-dm644x-evm.c
+@@ -655,19 +655,6 @@ static struct i2c_board_info __initdata i2c_info[] = {
+ },
+ };
+
+-/* Fixed regulator support */
+-static struct regulator_consumer_supply fixed_supplies_3_3v[] = {
+- /* Baseboard 3.3V: 5V -> TPS54310PWP -> 3.3V */
+- REGULATOR_SUPPLY("AVDD", "1-001b"),
+- REGULATOR_SUPPLY("DRVDD", "1-001b"),
+-};
+-
+-static struct regulator_consumer_supply fixed_supplies_1_8v[] = {
+- /* Baseboard 1.8V: 5V -> TPS54310PWP -> 1.8V */
+- REGULATOR_SUPPLY("IOVDD", "1-001b"),
+- REGULATOR_SUPPLY("DVDD", "1-001b"),
+-};
+-
+ #define DM644X_I2C_SDA_PIN GPIO_TO_PIN(2, 12)
+ #define DM644X_I2C_SCL_PIN GPIO_TO_PIN(2, 11)
+
+@@ -700,6 +687,19 @@ static void __init evm_init_i2c(void)
+ }
+ #endif
+
++/* Fixed regulator support */
++static struct regulator_consumer_supply fixed_supplies_3_3v[] = {
++ /* Baseboard 3.3V: 5V -> TPS54310PWP -> 3.3V */
++ REGULATOR_SUPPLY("AVDD", "1-001b"),
++ REGULATOR_SUPPLY("DRVDD", "1-001b"),
++};
++
++static struct regulator_consumer_supply fixed_supplies_1_8v[] = {
++ /* Baseboard 1.8V: 5V -> TPS54310PWP -> 1.8V */
++ REGULATOR_SUPPLY("IOVDD", "1-001b"),
++ REGULATOR_SUPPLY("DVDD", "1-001b"),
++};
++
+ #define VENC_STD_ALL (V4L2_STD_NTSC | V4L2_STD_PAL)
+
+ /* venc standard timings */
+diff --git a/arch/arm/mach-integrator/Kconfig b/arch/arm/mach-integrator/Kconfig
+index 982eabc36163..2406cab73835 100644
+--- a/arch/arm/mach-integrator/Kconfig
++++ b/arch/arm/mach-integrator/Kconfig
+@@ -4,6 +4,8 @@ menuconfig ARCH_INTEGRATOR
+ depends on ARCH_MULTI_V4T || ARCH_MULTI_V5 || ARCH_MULTI_V6
+ select ARM_AMBA
+ select COMMON_CLK_VERSATILE
++ select CMA
++ select DMA_CMA
+ select HAVE_TCM
+ select ICST
+ select MFD_SYSCON
+@@ -35,14 +37,13 @@ config INTEGRATOR_IMPD1
+ select ARM_VIC
+ select GPIO_PL061
+ select GPIOLIB
++ select REGULATOR
++ select REGULATOR_FIXED_VOLTAGE
+ help
+ The IM-PD1 is an add-on logic module for the Integrator which
+ allows ARM(R) Ltd PrimeCells to be developed and evaluated.
+ The IM-PD1 can be found on the Integrator/PP2 platform.
+
+- To compile this driver as a module, choose M here: the
+- module will be called impd1.
+-
+ config INTEGRATOR_CM7TDMI
+ bool "Integrator/CM7TDMI core module"
+ depends on ARCH_INTEGRATOR_AP
+diff --git a/arch/arm64/Kconfig.platforms b/arch/arm64/Kconfig.platforms
+index 55d70cfe0f9e..3c7e310fd8bf 100644
+--- a/arch/arm64/Kconfig.platforms
++++ b/arch/arm64/Kconfig.platforms
+@@ -248,7 +248,7 @@ config ARCH_TEGRA
+ This enables support for the NVIDIA Tegra SoC family.
+
+ config ARCH_SPRD
+- tristate "Spreadtrum SoC platform"
++ bool "Spreadtrum SoC platform"
+ help
+ Support for Spreadtrum ARM based SoCs
+
+diff --git a/arch/arm64/boot/dts/amlogic/meson-axg.dtsi b/arch/arm64/boot/dts/amlogic/meson-axg.dtsi
+index aace3d32a3df..8e6281c685fa 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-axg.dtsi
++++ b/arch/arm64/boot/dts/amlogic/meson-axg.dtsi
+@@ -1735,18 +1735,18 @@
+ };
+
+ sram: sram@fffc0000 {
+- compatible = "amlogic,meson-axg-sram", "mmio-sram";
++ compatible = "mmio-sram";
+ reg = <0x0 0xfffc0000 0x0 0x20000>;
+ #address-cells = <1>;
+ #size-cells = <1>;
+ ranges = <0 0x0 0xfffc0000 0x20000>;
+
+- cpu_scp_lpri: scp-shmem@13000 {
++ cpu_scp_lpri: scp-sram@13000 {
+ compatible = "amlogic,meson-axg-scp-shmem";
+ reg = <0x13000 0x400>;
+ };
+
+- cpu_scp_hpri: scp-shmem@13400 {
++ cpu_scp_hpri: scp-sram@13400 {
+ compatible = "amlogic,meson-axg-scp-shmem";
+ reg = <0x13400 0x400>;
+ };
+diff --git a/arch/arm64/boot/dts/amlogic/meson-g12b-ugoos-am6.dts b/arch/arm64/boot/dts/amlogic/meson-g12b-ugoos-am6.dts
+index 06c5430eb92d..fdaacfd96b97 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-g12b-ugoos-am6.dts
++++ b/arch/arm64/boot/dts/amlogic/meson-g12b-ugoos-am6.dts
+@@ -14,7 +14,7 @@
+ #include <dt-bindings/sound/meson-g12a-tohdmitx.h>
+
+ / {
+- compatible = "ugoos,am6", "amlogic,g12b";
++ compatible = "ugoos,am6", "amlogic,s922x", "amlogic,g12b";
+ model = "Ugoos AM6";
+
+ aliases {
+diff --git a/arch/arm64/boot/dts/amlogic/meson-gx-libretech-pc.dtsi b/arch/arm64/boot/dts/amlogic/meson-gx-libretech-pc.dtsi
+index 248b018c83d5..b1da36fdeac6 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-gx-libretech-pc.dtsi
++++ b/arch/arm64/boot/dts/amlogic/meson-gx-libretech-pc.dtsi
+@@ -96,14 +96,14 @@
+ leds {
+ compatible = "gpio-leds";
+
+- green {
++ led-green {
+ color = <LED_COLOR_ID_GREEN>;
+ function = LED_FUNCTION_DISK_ACTIVITY;
+ gpios = <&gpio_ao GPIOAO_9 GPIO_ACTIVE_HIGH>;
+ linux,default-trigger = "disk-activity";
+ };
+
+- blue {
++ led-blue {
+ color = <LED_COLOR_ID_BLUE>;
+ function = LED_FUNCTION_STATUS;
+ gpios = <&gpio GPIODV_28 GPIO_ACTIVE_HIGH>;
+diff --git a/arch/arm64/boot/dts/amlogic/meson-gx.dtsi b/arch/arm64/boot/dts/amlogic/meson-gx.dtsi
+index 03f79fe045b7..e2bb68ec8502 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-gx.dtsi
++++ b/arch/arm64/boot/dts/amlogic/meson-gx.dtsi
+@@ -398,20 +398,20 @@
+ };
+
+ sram: sram@c8000000 {
+- compatible = "amlogic,meson-gx-sram", "amlogic,meson-gxbb-sram", "mmio-sram";
++ compatible = "mmio-sram";
+ reg = <0x0 0xc8000000 0x0 0x14000>;
+
+ #address-cells = <1>;
+ #size-cells = <1>;
+ ranges = <0 0x0 0xc8000000 0x14000>;
+
+- cpu_scp_lpri: scp-shmem@0 {
+- compatible = "amlogic,meson-gx-scp-shmem", "amlogic,meson-gxbb-scp-shmem";
++ cpu_scp_lpri: scp-sram@0 {
++ compatible = "amlogic,meson-gxbb-scp-shmem";
+ reg = <0x13000 0x400>;
+ };
+
+- cpu_scp_hpri: scp-shmem@200 {
+- compatible = "amlogic,meson-gx-scp-shmem", "amlogic,meson-gxbb-scp-shmem";
++ cpu_scp_hpri: scp-sram@200 {
++ compatible = "amlogic,meson-gxbb-scp-shmem";
+ reg = <0x13400 0x400>;
+ };
+ };
+diff --git a/arch/arm64/boot/dts/amlogic/meson-gxbb-kii-pro.dts b/arch/arm64/boot/dts/amlogic/meson-gxbb-kii-pro.dts
+index 6c9cc45fb417..e8394a8269ee 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-gxbb-kii-pro.dts
++++ b/arch/arm64/boot/dts/amlogic/meson-gxbb-kii-pro.dts
+@@ -11,7 +11,7 @@
+ #include <dt-bindings/input/input.h>
+ #include <dt-bindings/leds/common.h>
+ / {
+- compatible = "videostrong,kii-pro", "amlogic,p201", "amlogic,s905", "amlogic,meson-gxbb";
++ compatible = "videostrong,kii-pro", "amlogic,meson-gxbb";
+ model = "Videostrong KII Pro";
+
+ leds {
+diff --git a/arch/arm64/boot/dts/amlogic/meson-gxbb-nanopi-k2.dts b/arch/arm64/boot/dts/amlogic/meson-gxbb-nanopi-k2.dts
+index d6ca684e0e61..7be3e354093b 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-gxbb-nanopi-k2.dts
++++ b/arch/arm64/boot/dts/amlogic/meson-gxbb-nanopi-k2.dts
+@@ -29,7 +29,7 @@
+ leds {
+ compatible = "gpio-leds";
+
+- stat {
++ led-stat {
+ label = "nanopi-k2:blue:stat";
+ gpios = <&gpio_ao GPIOAO_13 GPIO_ACTIVE_HIGH>;
+ default-state = "on";
+diff --git a/arch/arm64/boot/dts/amlogic/meson-gxbb-nexbox-a95x.dts b/arch/arm64/boot/dts/amlogic/meson-gxbb-nexbox-a95x.dts
+index 65ec7dea828c..67d901ed2fa3 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-gxbb-nexbox-a95x.dts
++++ b/arch/arm64/boot/dts/amlogic/meson-gxbb-nexbox-a95x.dts
+@@ -31,7 +31,7 @@
+
+ leds {
+ compatible = "gpio-leds";
+- blue {
++ led-blue {
+ label = "a95x:system-status";
+ gpios = <&gpio_ao GPIOAO_13 GPIO_ACTIVE_LOW>;
+ linux,default-trigger = "heartbeat";
+diff --git a/arch/arm64/boot/dts/amlogic/meson-gxbb-odroidc2.dts b/arch/arm64/boot/dts/amlogic/meson-gxbb-odroidc2.dts
+index b46ef985bb44..70fcfb7b0683 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-gxbb-odroidc2.dts
++++ b/arch/arm64/boot/dts/amlogic/meson-gxbb-odroidc2.dts
+@@ -49,7 +49,7 @@
+
+ leds {
+ compatible = "gpio-leds";
+- blue {
++ led-blue {
+ label = "c2:blue:alive";
+ gpios = <&gpio_ao GPIOAO_13 GPIO_ACTIVE_LOW>;
+ linux,default-trigger = "heartbeat";
+diff --git a/arch/arm64/boot/dts/amlogic/meson-gxbb-vega-s95.dtsi b/arch/arm64/boot/dts/amlogic/meson-gxbb-vega-s95.dtsi
+index 45cb83625951..222ee8069cfa 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-gxbb-vega-s95.dtsi
++++ b/arch/arm64/boot/dts/amlogic/meson-gxbb-vega-s95.dtsi
+@@ -20,7 +20,7 @@
+ leds {
+ compatible = "gpio-leds";
+
+- blue {
++ led-blue {
+ label = "vega-s95:blue:on";
+ gpios = <&gpio_ao GPIOAO_13 GPIO_ACTIVE_HIGH>;
+ default-state = "on";
+diff --git a/arch/arm64/boot/dts/amlogic/meson-gxbb-wetek-play2.dts b/arch/arm64/boot/dts/amlogic/meson-gxbb-wetek-play2.dts
+index 1d32d1f6d032..2ab8a3d10079 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-gxbb-wetek-play2.dts
++++ b/arch/arm64/boot/dts/amlogic/meson-gxbb-wetek-play2.dts
+@@ -14,13 +14,13 @@
+ model = "WeTek Play 2";
+
+ leds {
+- wifi {
++ led-wifi {
+ label = "wetek-play:wifi-status";
+ gpios = <&gpio GPIODV_26 GPIO_ACTIVE_HIGH>;
+ default-state = "off";
+ };
+
+- ethernet {
++ led-ethernet {
+ label = "wetek-play:ethernet-status";
+ gpios = <&gpio GPIODV_27 GPIO_ACTIVE_HIGH>;
+ default-state = "off";
+diff --git a/arch/arm64/boot/dts/amlogic/meson-gxbb-wetek.dtsi b/arch/arm64/boot/dts/amlogic/meson-gxbb-wetek.dtsi
+index dee51cf95223..d6133af09d64 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-gxbb-wetek.dtsi
++++ b/arch/arm64/boot/dts/amlogic/meson-gxbb-wetek.dtsi
+@@ -25,7 +25,7 @@
+ leds {
+ compatible = "gpio-leds";
+
+- system {
++ led-system {
+ label = "wetek-play:system-status";
+ gpios = <&gpio_ao GPIOAO_13 GPIO_ACTIVE_HIGH>;
+ default-state = "on";
+diff --git a/arch/arm64/boot/dts/amlogic/meson-gxl-s905x-libretech-cc.dts b/arch/arm64/boot/dts/amlogic/meson-gxl-s905x-libretech-cc.dts
+index e8348b2728db..a4a71c13891b 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-gxl-s905x-libretech-cc.dts
++++ b/arch/arm64/boot/dts/amlogic/meson-gxl-s905x-libretech-cc.dts
+@@ -54,14 +54,14 @@
+ leds {
+ compatible = "gpio-leds";
+
+- system {
++ led-system {
+ label = "librecomputer:system-status";
+ gpios = <&gpio GPIODV_24 GPIO_ACTIVE_HIGH>;
+ default-state = "on";
+ panic-indicator;
+ };
+
+- blue {
++ led-blue {
+ label = "librecomputer:blue";
+ gpios = <&gpio_ao GPIOAO_2 GPIO_ACTIVE_HIGH>;
+ linux,default-trigger = "heartbeat";
+diff --git a/arch/arm64/boot/dts/amlogic/meson-gxm-rbox-pro.dts b/arch/arm64/boot/dts/amlogic/meson-gxm-rbox-pro.dts
+index 420a88e9a195..c89c9f846fb1 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-gxm-rbox-pro.dts
++++ b/arch/arm64/boot/dts/amlogic/meson-gxm-rbox-pro.dts
+@@ -36,13 +36,13 @@
+ leds {
+ compatible = "gpio-leds";
+
+- blue {
++ led-blue {
+ label = "rbox-pro:blue:on";
+ gpios = <&gpio_ao GPIOAO_9 GPIO_ACTIVE_HIGH>;
+ default-state = "on";
+ };
+
+- red {
++ led-red {
+ label = "rbox-pro:red:standby";
+ gpios = <&gpio GPIODV_28 GPIO_ACTIVE_HIGH>;
+ default-state = "off";
+diff --git a/arch/arm64/boot/dts/amlogic/meson-khadas-vim3.dtsi b/arch/arm64/boot/dts/amlogic/meson-khadas-vim3.dtsi
+index 094ecf2222bb..1ef1e3672b96 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-khadas-vim3.dtsi
++++ b/arch/arm64/boot/dts/amlogic/meson-khadas-vim3.dtsi
+@@ -39,13 +39,13 @@
+ leds {
+ compatible = "gpio-leds";
+
+- white {
++ led-white {
+ label = "vim3:white:sys";
+ gpios = <&gpio_ao GPIOAO_4 GPIO_ACTIVE_LOW>;
+ linux,default-trigger = "heartbeat";
+ };
+
+- red {
++ led-red {
+ label = "vim3:red";
+ gpios = <&gpio_expander 5 GPIO_ACTIVE_LOW>;
+ };
+diff --git a/arch/arm64/boot/dts/amlogic/meson-sm1-sei610.dts b/arch/arm64/boot/dts/amlogic/meson-sm1-sei610.dts
+index dfb2438851c0..5ab139a34c01 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-sm1-sei610.dts
++++ b/arch/arm64/boot/dts/amlogic/meson-sm1-sei610.dts
+@@ -104,7 +104,7 @@
+ leds {
+ compatible = "gpio-leds";
+
+- bluetooth {
++ led-bluetooth {
+ label = "sei610:blue:bt";
+ gpios = <&gpio GPIOC_7 (GPIO_ACTIVE_LOW | GPIO_OPEN_DRAIN)>;
+ default-state = "off";
+diff --git a/arch/arm64/boot/dts/arm/foundation-v8-gicv2.dtsi b/arch/arm64/boot/dts/arm/foundation-v8-gicv2.dtsi
+index 15fe81738e94..dfb23dfc0b0f 100644
+--- a/arch/arm64/boot/dts/arm/foundation-v8-gicv2.dtsi
++++ b/arch/arm64/boot/dts/arm/foundation-v8-gicv2.dtsi
+@@ -8,7 +8,7 @@
+ gic: interrupt-controller@2c001000 {
+ compatible = "arm,cortex-a15-gic", "arm,cortex-a9-gic";
+ #interrupt-cells = <3>;
+- #address-cells = <2>;
++ #address-cells = <1>;
+ interrupt-controller;
+ reg = <0x0 0x2c001000 0 0x1000>,
+ <0x0 0x2c002000 0 0x2000>,
+diff --git a/arch/arm64/boot/dts/arm/foundation-v8-gicv3.dtsi b/arch/arm64/boot/dts/arm/foundation-v8-gicv3.dtsi
+index f2c75c756039..906f51935b36 100644
+--- a/arch/arm64/boot/dts/arm/foundation-v8-gicv3.dtsi
++++ b/arch/arm64/boot/dts/arm/foundation-v8-gicv3.dtsi
+@@ -8,9 +8,9 @@
+ gic: interrupt-controller@2f000000 {
+ compatible = "arm,gic-v3";
+ #interrupt-cells = <3>;
+- #address-cells = <2>;
+- #size-cells = <2>;
+- ranges;
++ #address-cells = <1>;
++ #size-cells = <1>;
++ ranges = <0x0 0x0 0x2f000000 0x100000>;
+ interrupt-controller;
+ reg = <0x0 0x2f000000 0x0 0x10000>,
+ <0x0 0x2f100000 0x0 0x200000>,
+@@ -22,7 +22,7 @@
+ its: its@2f020000 {
+ compatible = "arm,gic-v3-its";
+ msi-controller;
+- reg = <0x0 0x2f020000 0x0 0x20000>;
++ reg = <0x20000 0x20000>;
+ };
+ };
+ };
+diff --git a/arch/arm64/boot/dts/arm/foundation-v8.dtsi b/arch/arm64/boot/dts/arm/foundation-v8.dtsi
+index 12f039fa3dad..e2da63f78298 100644
+--- a/arch/arm64/boot/dts/arm/foundation-v8.dtsi
++++ b/arch/arm64/boot/dts/arm/foundation-v8.dtsi
+@@ -107,51 +107,51 @@
+
+ #interrupt-cells = <1>;
+ interrupt-map-mask = <0 0 63>;
+- interrupt-map = <0 0 0 &gic 0 0 GIC_SPI 0 IRQ_TYPE_LEVEL_HIGH>,
+- <0 0 1 &gic 0 0 GIC_SPI 1 IRQ_TYPE_LEVEL_HIGH>,
+- <0 0 2 &gic 0 0 GIC_SPI 2 IRQ_TYPE_LEVEL_HIGH>,
+- <0 0 3 &gic 0 0 GIC_SPI 3 IRQ_TYPE_LEVEL_HIGH>,
+- <0 0 4 &gic 0 0 GIC_SPI 4 IRQ_TYPE_LEVEL_HIGH>,
+- <0 0 5 &gic 0 0 GIC_SPI 5 IRQ_TYPE_LEVEL_HIGH>,
+- <0 0 6 &gic 0 0 GIC_SPI 6 IRQ_TYPE_LEVEL_HIGH>,
+- <0 0 7 &gic 0 0 GIC_SPI 7 IRQ_TYPE_LEVEL_HIGH>,
+- <0 0 8 &gic 0 0 GIC_SPI 8 IRQ_TYPE_LEVEL_HIGH>,
+- <0 0 9 &gic 0 0 GIC_SPI 9 IRQ_TYPE_LEVEL_HIGH>,
+- <0 0 10 &gic 0 0 GIC_SPI 10 IRQ_TYPE_LEVEL_HIGH>,
+- <0 0 11 &gic 0 0 GIC_SPI 11 IRQ_TYPE_LEVEL_HIGH>,
+- <0 0 12 &gic 0 0 GIC_SPI 12 IRQ_TYPE_LEVEL_HIGH>,
+- <0 0 13 &gic 0 0 GIC_SPI 13 IRQ_TYPE_LEVEL_HIGH>,
+- <0 0 14 &gic 0 0 GIC_SPI 14 IRQ_TYPE_LEVEL_HIGH>,
+- <0 0 15 &gic 0 0 GIC_SPI 15 IRQ_TYPE_LEVEL_HIGH>,
+- <0 0 16 &gic 0 0 GIC_SPI 16 IRQ_TYPE_LEVEL_HIGH>,
+- <0 0 17 &gic 0 0 GIC_SPI 17 IRQ_TYPE_LEVEL_HIGH>,
+- <0 0 18 &gic 0 0 GIC_SPI 18 IRQ_TYPE_LEVEL_HIGH>,
+- <0 0 19 &gic 0 0 GIC_SPI 19 IRQ_TYPE_LEVEL_HIGH>,
+- <0 0 20 &gic 0 0 GIC_SPI 20 IRQ_TYPE_LEVEL_HIGH>,
+- <0 0 21 &gic 0 0 GIC_SPI 21 IRQ_TYPE_LEVEL_HIGH>,
+- <0 0 22 &gic 0 0 GIC_SPI 22 IRQ_TYPE_LEVEL_HIGH>,
+- <0 0 23 &gic 0 0 GIC_SPI 23 IRQ_TYPE_LEVEL_HIGH>,
+- <0 0 24 &gic 0 0 GIC_SPI 24 IRQ_TYPE_LEVEL_HIGH>,
+- <0 0 25 &gic 0 0 GIC_SPI 25 IRQ_TYPE_LEVEL_HIGH>,
+- <0 0 26 &gic 0 0 GIC_SPI 26 IRQ_TYPE_LEVEL_HIGH>,
+- <0 0 27 &gic 0 0 GIC_SPI 27 IRQ_TYPE_LEVEL_HIGH>,
+- <0 0 28 &gic 0 0 GIC_SPI 28 IRQ_TYPE_LEVEL_HIGH>,
+- <0 0 29 &gic 0 0 GIC_SPI 29 IRQ_TYPE_LEVEL_HIGH>,
+- <0 0 30 &gic 0 0 GIC_SPI 30 IRQ_TYPE_LEVEL_HIGH>,
+- <0 0 31 &gic 0 0 GIC_SPI 31 IRQ_TYPE_LEVEL_HIGH>,
+- <0 0 32 &gic 0 0 GIC_SPI 32 IRQ_TYPE_LEVEL_HIGH>,
+- <0 0 33 &gic 0 0 GIC_SPI 33 IRQ_TYPE_LEVEL_HIGH>,
+- <0 0 34 &gic 0 0 GIC_SPI 34 IRQ_TYPE_LEVEL_HIGH>,
+- <0 0 35 &gic 0 0 GIC_SPI 35 IRQ_TYPE_LEVEL_HIGH>,
+- <0 0 36 &gic 0 0 GIC_SPI 36 IRQ_TYPE_LEVEL_HIGH>,
+- <0 0 37 &gic 0 0 GIC_SPI 37 IRQ_TYPE_LEVEL_HIGH>,
+- <0 0 38 &gic 0 0 GIC_SPI 38 IRQ_TYPE_LEVEL_HIGH>,
+- <0 0 39 &gic 0 0 GIC_SPI 39 IRQ_TYPE_LEVEL_HIGH>,
+- <0 0 40 &gic 0 0 GIC_SPI 40 IRQ_TYPE_LEVEL_HIGH>,
+- <0 0 41 &gic 0 0 GIC_SPI 41 IRQ_TYPE_LEVEL_HIGH>,
+- <0 0 42 &gic 0 0 GIC_SPI 42 IRQ_TYPE_LEVEL_HIGH>;
+-
+- ethernet@2,02000000 {
++ interrupt-map = <0 0 0 &gic 0 GIC_SPI 0 IRQ_TYPE_LEVEL_HIGH>,
++ <0 0 1 &gic 0 GIC_SPI 1 IRQ_TYPE_LEVEL_HIGH>,
++ <0 0 2 &gic 0 GIC_SPI 2 IRQ_TYPE_LEVEL_HIGH>,
++ <0 0 3 &gic 0 GIC_SPI 3 IRQ_TYPE_LEVEL_HIGH>,
++ <0 0 4 &gic 0 GIC_SPI 4 IRQ_TYPE_LEVEL_HIGH>,
++ <0 0 5 &gic 0 GIC_SPI 5 IRQ_TYPE_LEVEL_HIGH>,
++ <0 0 6 &gic 0 GIC_SPI 6 IRQ_TYPE_LEVEL_HIGH>,
++ <0 0 7 &gic 0 GIC_SPI 7 IRQ_TYPE_LEVEL_HIGH>,
++ <0 0 8 &gic 0 GIC_SPI 8 IRQ_TYPE_LEVEL_HIGH>,
++ <0 0 9 &gic 0 GIC_SPI 9 IRQ_TYPE_LEVEL_HIGH>,
++ <0 0 10 &gic 0 GIC_SPI 10 IRQ_TYPE_LEVEL_HIGH>,
++ <0 0 11 &gic 0 GIC_SPI 11 IRQ_TYPE_LEVEL_HIGH>,
++ <0 0 12 &gic 0 GIC_SPI 12 IRQ_TYPE_LEVEL_HIGH>,
++ <0 0 13 &gic 0 GIC_SPI 13 IRQ_TYPE_LEVEL_HIGH>,
++ <0 0 14 &gic 0 GIC_SPI 14 IRQ_TYPE_LEVEL_HIGH>,
++ <0 0 15 &gic 0 GIC_SPI 15 IRQ_TYPE_LEVEL_HIGH>,
++ <0 0 16 &gic 0 GIC_SPI 16 IRQ_TYPE_LEVEL_HIGH>,
++ <0 0 17 &gic 0 GIC_SPI 17 IRQ_TYPE_LEVEL_HIGH>,
++ <0 0 18 &gic 0 GIC_SPI 18 IRQ_TYPE_LEVEL_HIGH>,
++ <0 0 19 &gic 0 GIC_SPI 19 IRQ_TYPE_LEVEL_HIGH>,
++ <0 0 20 &gic 0 GIC_SPI 20 IRQ_TYPE_LEVEL_HIGH>,
++ <0 0 21 &gic 0 GIC_SPI 21 IRQ_TYPE_LEVEL_HIGH>,
++ <0 0 22 &gic 0 GIC_SPI 22 IRQ_TYPE_LEVEL_HIGH>,
++ <0 0 23 &gic 0 GIC_SPI 23 IRQ_TYPE_LEVEL_HIGH>,
++ <0 0 24 &gic 0 GIC_SPI 24 IRQ_TYPE_LEVEL_HIGH>,
++ <0 0 25 &gic 0 GIC_SPI 25 IRQ_TYPE_LEVEL_HIGH>,
++ <0 0 26 &gic 0 GIC_SPI 26 IRQ_TYPE_LEVEL_HIGH>,
++ <0 0 27 &gic 0 GIC_SPI 27 IRQ_TYPE_LEVEL_HIGH>,
++ <0 0 28 &gic 0 GIC_SPI 28 IRQ_TYPE_LEVEL_HIGH>,
++ <0 0 29 &gic 0 GIC_SPI 29 IRQ_TYPE_LEVEL_HIGH>,
++ <0 0 30 &gic 0 GIC_SPI 30 IRQ_TYPE_LEVEL_HIGH>,
++ <0 0 31 &gic 0 GIC_SPI 31 IRQ_TYPE_LEVEL_HIGH>,
++ <0 0 32 &gic 0 GIC_SPI 32 IRQ_TYPE_LEVEL_HIGH>,
++ <0 0 33 &gic 0 GIC_SPI 33 IRQ_TYPE_LEVEL_HIGH>,
++ <0 0 34 &gic 0 GIC_SPI 34 IRQ_TYPE_LEVEL_HIGH>,
++ <0 0 35 &gic 0 GIC_SPI 35 IRQ_TYPE_LEVEL_HIGH>,
++ <0 0 36 &gic 0 GIC_SPI 36 IRQ_TYPE_LEVEL_HIGH>,
++ <0 0 37 &gic 0 GIC_SPI 37 IRQ_TYPE_LEVEL_HIGH>,
++ <0 0 38 &gic 0 GIC_SPI 38 IRQ_TYPE_LEVEL_HIGH>,
++ <0 0 39 &gic 0 GIC_SPI 39 IRQ_TYPE_LEVEL_HIGH>,
++ <0 0 40 &gic 0 GIC_SPI 40 IRQ_TYPE_LEVEL_HIGH>,
++ <0 0 41 &gic 0 GIC_SPI 41 IRQ_TYPE_LEVEL_HIGH>,
++ <0 0 42 &gic 0 GIC_SPI 42 IRQ_TYPE_LEVEL_HIGH>;
++
++ ethernet@202000000 {
+ compatible = "smsc,lan91c111";
+ reg = <2 0x02000000 0x10000>;
+ interrupts = <15>;
+@@ -178,7 +178,7 @@
+ clock-output-names = "v2m:refclk32khz";
+ };
+
+- iofpga@3,00000000 {
++ iofpga@300000000 {
+ compatible = "simple-bus";
+ #address-cells = <1>;
+ #size-cells = <1>;
+diff --git a/arch/arm64/boot/dts/arm/juno-base.dtsi b/arch/arm64/boot/dts/arm/juno-base.dtsi
+index f5889281545f..59b6ac0b828a 100644
+--- a/arch/arm64/boot/dts/arm/juno-base.dtsi
++++ b/arch/arm64/boot/dts/arm/juno-base.dtsi
+@@ -74,35 +74,35 @@
+ <0x0 0x2c02f000 0 0x2000>,
+ <0x0 0x2c04f000 0 0x2000>,
+ <0x0 0x2c06f000 0 0x2000>;
+- #address-cells = <2>;
++ #address-cells = <1>;
+ #interrupt-cells = <3>;
+- #size-cells = <2>;
++ #size-cells = <1>;
+ interrupt-controller;
+ interrupts = <GIC_PPI 9 (GIC_CPU_MASK_SIMPLE(6) | IRQ_TYPE_LEVEL_HIGH)>;
+- ranges = <0 0 0 0x2c1c0000 0 0x40000>;
++ ranges = <0 0 0x2c1c0000 0x40000>;
+
+ v2m_0: v2m@0 {
+ compatible = "arm,gic-v2m-frame";
+ msi-controller;
+- reg = <0 0 0 0x10000>;
++ reg = <0 0x10000>;
+ };
+
+ v2m@10000 {
+ compatible = "arm,gic-v2m-frame";
+ msi-controller;
+- reg = <0 0x10000 0 0x10000>;
++ reg = <0x10000 0x10000>;
+ };
+
+ v2m@20000 {
+ compatible = "arm,gic-v2m-frame";
+ msi-controller;
+- reg = <0 0x20000 0 0x10000>;
++ reg = <0x20000 0x10000>;
+ };
+
+ v2m@30000 {
+ compatible = "arm,gic-v2m-frame";
+ msi-controller;
+- reg = <0 0x30000 0 0x10000>;
++ reg = <0x30000 0x10000>;
+ };
+ };
+
+@@ -546,10 +546,10 @@
+ <0x42000000 0x40 0x00000000 0x40 0x00000000 0x1 0x00000000>;
+ #interrupt-cells = <1>;
+ interrupt-map-mask = <0 0 0 7>;
+- interrupt-map = <0 0 0 1 &gic 0 0 GIC_SPI 136 IRQ_TYPE_LEVEL_HIGH>,
+- <0 0 0 2 &gic 0 0 GIC_SPI 137 IRQ_TYPE_LEVEL_HIGH>,
+- <0 0 0 3 &gic 0 0 GIC_SPI 138 IRQ_TYPE_LEVEL_HIGH>,
+- <0 0 0 4 &gic 0 0 GIC_SPI 139 IRQ_TYPE_LEVEL_HIGH>;
++ interrupt-map = <0 0 0 1 &gic 0 GIC_SPI 136 IRQ_TYPE_LEVEL_HIGH>,
++ <0 0 0 2 &gic 0 GIC_SPI 137 IRQ_TYPE_LEVEL_HIGH>,
++ <0 0 0 3 &gic 0 GIC_SPI 138 IRQ_TYPE_LEVEL_HIGH>,
++ <0 0 0 4 &gic 0 GIC_SPI 139 IRQ_TYPE_LEVEL_HIGH>;
+ msi-parent = <&v2m_0>;
+ status = "disabled";
+ iommu-map-mask = <0x0>; /* RC has no means to output PCI RID */
+@@ -813,19 +813,19 @@
+
+ #interrupt-cells = <1>;
+ interrupt-map-mask = <0 0 15>;
+- interrupt-map = <0 0 0 &gic 0 0 GIC_SPI 68 IRQ_TYPE_LEVEL_HIGH>,
+- <0 0 1 &gic 0 0 GIC_SPI 69 IRQ_TYPE_LEVEL_HIGH>,
+- <0 0 2 &gic 0 0 GIC_SPI 70 IRQ_TYPE_LEVEL_HIGH>,
+- <0 0 3 &gic 0 0 GIC_SPI 160 IRQ_TYPE_LEVEL_HIGH>,
+- <0 0 4 &gic 0 0 GIC_SPI 161 IRQ_TYPE_LEVEL_HIGH>,
+- <0 0 5 &gic 0 0 GIC_SPI 162 IRQ_TYPE_LEVEL_HIGH>,
+- <0 0 6 &gic 0 0 GIC_SPI 163 IRQ_TYPE_LEVEL_HIGH>,
+- <0 0 7 &gic 0 0 GIC_SPI 164 IRQ_TYPE_LEVEL_HIGH>,
+- <0 0 8 &gic 0 0 GIC_SPI 165 IRQ_TYPE_LEVEL_HIGH>,
+- <0 0 9 &gic 0 0 GIC_SPI 166 IRQ_TYPE_LEVEL_HIGH>,
+- <0 0 10 &gic 0 0 GIC_SPI 167 IRQ_TYPE_LEVEL_HIGH>,
+- <0 0 11 &gic 0 0 GIC_SPI 168 IRQ_TYPE_LEVEL_HIGH>,
+- <0 0 12 &gic 0 0 GIC_SPI 169 IRQ_TYPE_LEVEL_HIGH>;
++ interrupt-map = <0 0 0 &gic 0 GIC_SPI 68 IRQ_TYPE_LEVEL_HIGH>,
++ <0 0 1 &gic 0 GIC_SPI 69 IRQ_TYPE_LEVEL_HIGH>,
++ <0 0 2 &gic 0 GIC_SPI 70 IRQ_TYPE_LEVEL_HIGH>,
++ <0 0 3 &gic 0 GIC_SPI 160 IRQ_TYPE_LEVEL_HIGH>,
++ <0 0 4 &gic 0 GIC_SPI 161 IRQ_TYPE_LEVEL_HIGH>,
++ <0 0 5 &gic 0 GIC_SPI 162 IRQ_TYPE_LEVEL_HIGH>,
++ <0 0 6 &gic 0 GIC_SPI 163 IRQ_TYPE_LEVEL_HIGH>,
++ <0 0 7 &gic 0 GIC_SPI 164 IRQ_TYPE_LEVEL_HIGH>,
++ <0 0 8 &gic 0 GIC_SPI 165 IRQ_TYPE_LEVEL_HIGH>,
++ <0 0 9 &gic 0 GIC_SPI 166 IRQ_TYPE_LEVEL_HIGH>,
++ <0 0 10 &gic 0 GIC_SPI 167 IRQ_TYPE_LEVEL_HIGH>,
++ <0 0 11 &gic 0 GIC_SPI 168 IRQ_TYPE_LEVEL_HIGH>,
++ <0 0 12 &gic 0 GIC_SPI 169 IRQ_TYPE_LEVEL_HIGH>;
+ };
+
+ site2: tlx@60000000 {
+@@ -835,6 +835,6 @@
+ ranges = <0 0 0x60000000 0x10000000>;
+ #interrupt-cells = <1>;
+ interrupt-map-mask = <0 0>;
+- interrupt-map = <0 0 &gic 0 0 GIC_SPI 168 IRQ_TYPE_LEVEL_HIGH>;
++ interrupt-map = <0 0 &gic 0 GIC_SPI 168 IRQ_TYPE_LEVEL_HIGH>;
+ };
+ };
+diff --git a/arch/arm64/boot/dts/arm/juno-motherboard.dtsi b/arch/arm64/boot/dts/arm/juno-motherboard.dtsi
+index e3983ded3c3c..d5cefddde08c 100644
+--- a/arch/arm64/boot/dts/arm/juno-motherboard.dtsi
++++ b/arch/arm64/boot/dts/arm/juno-motherboard.dtsi
+@@ -103,7 +103,7 @@
+ };
+ };
+
+- flash@0,00000000 {
++ flash@0 {
+ /* 2 * 32MiB NOR Flash memory mounted on CS0 */
+ compatible = "arm,vexpress-flash", "cfi-flash";
+ reg = <0 0x00000000 0x04000000>;
+@@ -120,7 +120,7 @@
+ };
+ };
+
+- ethernet@2,00000000 {
++ ethernet@200000000 {
+ compatible = "smsc,lan9118", "smsc,lan9115";
+ reg = <2 0x00000000 0x10000>;
+ interrupts = <3>;
+@@ -133,7 +133,7 @@
+ vddvario-supply = <&mb_fixed_3v3>;
+ };
+
+- iofpga@3,00000000 {
++ iofpga@300000000 {
+ compatible = "simple-bus";
+ #address-cells = <1>;
+ #size-cells = <1>;
+diff --git a/arch/arm64/boot/dts/arm/rtsm_ve-motherboard-rs2.dtsi b/arch/arm64/boot/dts/arm/rtsm_ve-motherboard-rs2.dtsi
+index 60703b5763c6..350cbf17e8b4 100644
+--- a/arch/arm64/boot/dts/arm/rtsm_ve-motherboard-rs2.dtsi
++++ b/arch/arm64/boot/dts/arm/rtsm_ve-motherboard-rs2.dtsi
+@@ -9,7 +9,7 @@
+ motherboard {
+ arm,v2m-memory-map = "rs2";
+
+- iofpga@3,00000000 {
++ iofpga@300000000 {
+ virtio-p9@140000 {
+ compatible = "virtio,mmio";
+ reg = <0x140000 0x200>;
+diff --git a/arch/arm64/boot/dts/arm/rtsm_ve-motherboard.dtsi b/arch/arm64/boot/dts/arm/rtsm_ve-motherboard.dtsi
+index e333c8d2d0e4..d1bfa62ca073 100644
+--- a/arch/arm64/boot/dts/arm/rtsm_ve-motherboard.dtsi
++++ b/arch/arm64/boot/dts/arm/rtsm_ve-motherboard.dtsi
+@@ -17,14 +17,14 @@
+ #interrupt-cells = <1>;
+ ranges;
+
+- flash@0,00000000 {
++ flash@0 {
+ compatible = "arm,vexpress-flash", "cfi-flash";
+ reg = <0 0x00000000 0x04000000>,
+ <4 0x00000000 0x04000000>;
+ bank-width = <4>;
+ };
+
+- ethernet@2,02000000 {
++ ethernet@202000000 {
+ compatible = "smsc,lan91c111";
+ reg = <2 0x02000000 0x10000>;
+ interrupts = <15>;
+@@ -51,7 +51,7 @@
+ clock-output-names = "v2m:refclk32khz";
+ };
+
+- iofpga@3,00000000 {
++ iofpga@300000000 {
+ compatible = "simple-bus";
+ #address-cells = <1>;
+ #size-cells = <1>;
+diff --git a/arch/arm64/boot/dts/marvell/armada-3720-db.dts b/arch/arm64/boot/dts/marvell/armada-3720-db.dts
+index f2cc00594d64..3e5789f37206 100644
+--- a/arch/arm64/boot/dts/marvell/armada-3720-db.dts
++++ b/arch/arm64/boot/dts/marvell/armada-3720-db.dts
+@@ -128,6 +128,9 @@
+
+ /* CON15(V2.0)/CON17(V1.4) : PCIe / CON15(V2.0)/CON12(V1.4) :mini-PCIe */
+ &pcie0 {
++ pinctrl-names = "default";
++ pinctrl-0 = <&pcie_reset_pins &pcie_clkreq_pins>;
++ reset-gpios = <&gpiosb 3 GPIO_ACTIVE_LOW>;
+ status = "okay";
+ };
+
+diff --git a/arch/arm64/boot/dts/marvell/armada-3720-espressobin.dtsi b/arch/arm64/boot/dts/marvell/armada-3720-espressobin.dtsi
+index 42e992f9c8a5..c92ad664cb0e 100644
+--- a/arch/arm64/boot/dts/marvell/armada-3720-espressobin.dtsi
++++ b/arch/arm64/boot/dts/marvell/armada-3720-espressobin.dtsi
+@@ -47,6 +47,7 @@
+ phys = <&comphy1 0>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&pcie_reset_pins &pcie_clkreq_pins>;
++ reset-gpios = <&gpiosb 3 GPIO_ACTIVE_LOW>;
+ };
+
+ /* J6 */
+diff --git a/arch/arm64/boot/dts/marvell/armada-3720-turris-mox.dts b/arch/arm64/boot/dts/marvell/armada-3720-turris-mox.dts
+index bb42d1e6a4e9..1452c821f8c0 100644
+--- a/arch/arm64/boot/dts/marvell/armada-3720-turris-mox.dts
++++ b/arch/arm64/boot/dts/marvell/armada-3720-turris-mox.dts
+@@ -95,7 +95,7 @@
+ };
+
+ sfp: sfp {
+- compatible = "sff,sfp+";
++ compatible = "sff,sfp";
+ i2c-bus = <&i2c0>;
+ los-gpio = <&moxtet_sfp 0 GPIO_ACTIVE_HIGH>;
+ tx-fault-gpio = <&moxtet_sfp 1 GPIO_ACTIVE_HIGH>;
+@@ -128,10 +128,6 @@
+ };
+ };
+
+-&pcie_reset_pins {
+- function = "gpio";
+-};
+-
+ &pcie0 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&pcie_reset_pins &pcie_clkreq_pins>;
+@@ -179,6 +175,8 @@
+ marvell,pad-type = "sd";
+ vqmmc-supply = <&vsdio_reg>;
+ mmc-pwrseq = <&sdhci1_pwrseq>;
++ /* forbid SDR104 for FCC purposes */
++ sdhci-caps-mask = <0x2 0x0>;
+ status = "okay";
+ };
+
+diff --git a/arch/arm64/boot/dts/marvell/armada-37xx.dtsi b/arch/arm64/boot/dts/marvell/armada-37xx.dtsi
+index 000c135e39b7..7909c146eabf 100644
+--- a/arch/arm64/boot/dts/marvell/armada-37xx.dtsi
++++ b/arch/arm64/boot/dts/marvell/armada-37xx.dtsi
+@@ -317,7 +317,7 @@
+
+ pcie_reset_pins: pcie-reset-pins {
+ groups = "pcie1";
+- function = "pcie";
++ function = "gpio";
+ };
+
+ pcie_clkreq_pins: pcie-clkreq-pins {
+diff --git a/arch/arm64/boot/dts/mediatek/mt8173.dtsi b/arch/arm64/boot/dts/mediatek/mt8173.dtsi
+index d819e44d94a8..6ad1053afd27 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8173.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt8173.dtsi
+@@ -242,21 +242,21 @@
+ cpu_on = <0x84000003>;
+ };
+
+- clk26m: oscillator@0 {
++ clk26m: oscillator0 {
+ compatible = "fixed-clock";
+ #clock-cells = <0>;
+ clock-frequency = <26000000>;
+ clock-output-names = "clk26m";
+ };
+
+- clk32k: oscillator@1 {
++ clk32k: oscillator1 {
+ compatible = "fixed-clock";
+ #clock-cells = <0>;
+ clock-frequency = <32000>;
+ clock-output-names = "clk32k";
+ };
+
+- cpum_ck: oscillator@2 {
++ cpum_ck: oscillator2 {
+ compatible = "fixed-clock";
+ #clock-cells = <0>;
+ clock-frequency = <0>;
+@@ -272,19 +272,19 @@
+ sustainable-power = <1500>; /* milliwatts */
+
+ trips {
+- threshold: trip-point@0 {
++ threshold: trip-point0 {
+ temperature = <68000>;
+ hysteresis = <2000>;
+ type = "passive";
+ };
+
+- target: trip-point@1 {
++ target: trip-point1 {
+ temperature = <85000>;
+ hysteresis = <2000>;
+ type = "passive";
+ };
+
+- cpu_crit: cpu_crit@0 {
++ cpu_crit: cpu_crit0 {
+ temperature = <115000>;
+ hysteresis = <2000>;
+ type = "critical";
+@@ -292,13 +292,13 @@
+ };
+
+ cooling-maps {
+- map@0 {
++ map0 {
+ trip = <&target>;
+ cooling-device = <&cpu0 0 0>,
+ <&cpu1 0 0>;
+ contribution = <3072>;
+ };
+- map@1 {
++ map1 {
+ trip = <&target>;
+ cooling-device = <&cpu2 0 0>,
+ <&cpu3 0 0>;
+@@ -312,7 +312,7 @@
+ #address-cells = <2>;
+ #size-cells = <2>;
+ ranges;
+- vpu_dma_reserved: vpu_dma_mem_region {
++ vpu_dma_reserved: vpu_dma_mem_region@b7000000 {
+ compatible = "shared-dma-pool";
+ reg = <0 0xb7000000 0 0x500000>;
+ alignment = <0x1000>;
+@@ -365,7 +365,7 @@
+ reg = <0 0x10005000 0 0x1000>;
+ };
+
+- pio: pinctrl@10005000 {
++ pio: pinctrl@1000b000 {
+ compatible = "mediatek,mt8173-pinctrl";
+ reg = <0 0x1000b000 0 0x1000>;
+ mediatek,pctl-regmap = <&syscfg_pctl_a>;
+@@ -572,7 +572,7 @@
+ status = "disabled";
+ };
+
+- gic: interrupt-controller@10220000 {
++ gic: interrupt-controller@10221000 {
+ compatible = "arm,gic-400";
+ #interrupt-cells = <3>;
+ interrupt-parent = <&gic>;
+diff --git a/arch/arm64/boot/dts/nvidia/tegra194-p2888.dtsi b/arch/arm64/boot/dts/nvidia/tegra194-p2888.dtsi
+index 623f7d7d216b..8e3136dfdd62 100644
+--- a/arch/arm64/boot/dts/nvidia/tegra194-p2888.dtsi
++++ b/arch/arm64/boot/dts/nvidia/tegra194-p2888.dtsi
+@@ -33,7 +33,7 @@
+
+ phy-reset-gpios = <&gpio TEGRA194_MAIN_GPIO(G, 5) GPIO_ACTIVE_LOW>;
+ phy-handle = <&phy>;
+- phy-mode = "rgmii";
++ phy-mode = "rgmii-id";
+
+ mdio {
+ #address-cells = <1>;
+diff --git a/arch/arm64/boot/dts/nvidia/tegra194.dtsi b/arch/arm64/boot/dts/nvidia/tegra194.dtsi
+index f4ede86e32b4..3c928360f4ed 100644
+--- a/arch/arm64/boot/dts/nvidia/tegra194.dtsi
++++ b/arch/arm64/boot/dts/nvidia/tegra194.dtsi
+@@ -1387,7 +1387,7 @@
+
+ bus-range = <0x0 0xff>;
+ ranges = <0x81000000 0x0 0x30100000 0x0 0x30100000 0x0 0x00100000 /* downstream I/O (1MB) */
+- 0xc2000000 0x12 0x00000000 0x12 0x00000000 0x0 0x30000000 /* prefetchable memory (768MB) */
++ 0xc3000000 0x12 0x00000000 0x12 0x00000000 0x0 0x30000000 /* prefetchable memory (768MB) */
+ 0x82000000 0x0 0x40000000 0x12 0x30000000 0x0 0x10000000>; /* non-prefetchable memory (256MB) */
+ };
+
+@@ -1432,7 +1432,7 @@
+
+ bus-range = <0x0 0xff>;
+ ranges = <0x81000000 0x0 0x32100000 0x0 0x32100000 0x0 0x00100000 /* downstream I/O (1MB) */
+- 0xc2000000 0x12 0x40000000 0x12 0x40000000 0x0 0x30000000 /* prefetchable memory (768MB) */
++ 0xc3000000 0x12 0x40000000 0x12 0x40000000 0x0 0x30000000 /* prefetchable memory (768MB) */
+ 0x82000000 0x0 0x40000000 0x12 0x70000000 0x0 0x10000000>; /* non-prefetchable memory (256MB) */
+ };
+
+@@ -1477,7 +1477,7 @@
+
+ bus-range = <0x0 0xff>;
+ ranges = <0x81000000 0x0 0x34100000 0x0 0x34100000 0x0 0x00100000 /* downstream I/O (1MB) */
+- 0xc2000000 0x12 0x80000000 0x12 0x80000000 0x0 0x30000000 /* prefetchable memory (768MB) */
++ 0xc3000000 0x12 0x80000000 0x12 0x80000000 0x0 0x30000000 /* prefetchable memory (768MB) */
+ 0x82000000 0x0 0x40000000 0x12 0xb0000000 0x0 0x10000000>; /* non-prefetchable memory (256MB) */
+ };
+
+@@ -1522,7 +1522,7 @@
+
+ bus-range = <0x0 0xff>;
+ ranges = <0x81000000 0x0 0x36100000 0x0 0x36100000 0x0 0x00100000 /* downstream I/O (1MB) */
+- 0xc2000000 0x14 0x00000000 0x14 0x00000000 0x3 0x40000000 /* prefetchable memory (13GB) */
++ 0xc3000000 0x14 0x00000000 0x14 0x00000000 0x3 0x40000000 /* prefetchable memory (13GB) */
+ 0x82000000 0x0 0x40000000 0x17 0x40000000 0x0 0xc0000000>; /* non-prefetchable memory (3GB) */
+ };
+
+@@ -1567,7 +1567,7 @@
+
+ bus-range = <0x0 0xff>;
+ ranges = <0x81000000 0x0 0x38100000 0x0 0x38100000 0x0 0x00100000 /* downstream I/O (1MB) */
+- 0xc2000000 0x18 0x00000000 0x18 0x00000000 0x3 0x40000000 /* prefetchable memory (13GB) */
++ 0xc3000000 0x18 0x00000000 0x18 0x00000000 0x3 0x40000000 /* prefetchable memory (13GB) */
+ 0x82000000 0x0 0x40000000 0x1b 0x40000000 0x0 0xc0000000>; /* non-prefetchable memory (3GB) */
+ };
+
+@@ -1616,7 +1616,7 @@
+
+ bus-range = <0x0 0xff>;
+ ranges = <0x81000000 0x0 0x3a100000 0x0 0x3a100000 0x0 0x00100000 /* downstream I/O (1MB) */
+- 0xc2000000 0x1c 0x00000000 0x1c 0x00000000 0x3 0x40000000 /* prefetchable memory (13GB) */
++ 0xc3000000 0x1c 0x00000000 0x1c 0x00000000 0x3 0x40000000 /* prefetchable memory (13GB) */
+ 0x82000000 0x0 0x40000000 0x1f 0x40000000 0x0 0xc0000000>; /* non-prefetchable memory (3GB) */
+ };
+
+diff --git a/arch/arm64/boot/dts/qcom/apq8096-db820c.dtsi b/arch/arm64/boot/dts/qcom/apq8096-db820c.dtsi
+index c4abbccf2bed..eaa1eb70b455 100644
+--- a/arch/arm64/boot/dts/qcom/apq8096-db820c.dtsi
++++ b/arch/arm64/boot/dts/qcom/apq8096-db820c.dtsi
+@@ -117,16 +117,6 @@
+ regulator-max-microvolt = <3700000>;
+ };
+
+- vreg_s8a_l3a_input: vreg-s8a-l3a-input {
+- compatible = "regulator-fixed";
+- regulator-name = "vreg_s8a_l3a_input";
+- regulator-always-on;
+- regulator-boot-on;
+-
+- regulator-min-microvolt = <0>;
+- regulator-max-microvolt = <0>;
+- };
+-
+ wlan_en: wlan-en-1-8v {
+ pinctrl-names = "default";
+ pinctrl-0 = <&wlan_en_gpios>;
+@@ -705,14 +695,14 @@
+ vdd_s11-supply = <&vph_pwr>;
+ vdd_s12-supply = <&vph_pwr>;
+ vdd_l2_l26_l28-supply = <&vreg_s3a_1p3>;
+- vdd_l3_l11-supply = <&vreg_s8a_l3a_input>;
++ vdd_l3_l11-supply = <&vreg_s3a_1p3>;
+ vdd_l4_l27_l31-supply = <&vreg_s3a_1p3>;
+ vdd_l5_l7-supply = <&vreg_s5a_2p15>;
+ vdd_l6_l12_l32-supply = <&vreg_s5a_2p15>;
+ vdd_l8_l16_l30-supply = <&vph_pwr>;
+ vdd_l14_l15-supply = <&vreg_s5a_2p15>;
+ vdd_l25-supply = <&vreg_s3a_1p3>;
+- vdd_lvs1_2-supply = <&vreg_s4a_1p8>;
++ vdd_lvs1_lvs2-supply = <&vreg_s4a_1p8>;
+
+ vreg_s3a_1p3: s3 {
+ regulator-name = "vreg_s3a_1p3";
+diff --git a/arch/arm64/boot/dts/qcom/msm8916.dtsi b/arch/arm64/boot/dts/qcom/msm8916.dtsi
+index a88a15f2352b..5548d7b5096c 100644
+--- a/arch/arm64/boot/dts/qcom/msm8916.dtsi
++++ b/arch/arm64/boot/dts/qcom/msm8916.dtsi
+@@ -261,7 +261,7 @@
+ thermal-sensors = <&tsens 4>;
+
+ trips {
+- cpu2_3_alert0: trip-point@0 {
++ cpu2_3_alert0: trip-point0 {
+ temperature = <75000>;
+ hysteresis = <2000>;
+ type = "passive";
+@@ -291,7 +291,7 @@
+ thermal-sensors = <&tsens 2>;
+
+ trips {
+- gpu_alert0: trip-point@0 {
++ gpu_alert0: trip-point0 {
+ temperature = <75000>;
+ hysteresis = <2000>;
+ type = "passive";
+@@ -311,7 +311,7 @@
+ thermal-sensors = <&tsens 1>;
+
+ trips {
+- cam_alert0: trip-point@0 {
++ cam_alert0: trip-point0 {
+ temperature = <75000>;
+ hysteresis = <2000>;
+ type = "hot";
+@@ -326,7 +326,7 @@
+ thermal-sensors = <&tsens 0>;
+
+ trips {
+- modem_alert0: trip-point@0 {
++ modem_alert0: trip-point0 {
+ temperature = <85000>;
+ hysteresis = <2000>;
+ type = "hot";
+diff --git a/arch/arm64/boot/dts/qcom/msm8996.dtsi b/arch/arm64/boot/dts/qcom/msm8996.dtsi
+index 98634d5c4440..d22c364b520a 100644
+--- a/arch/arm64/boot/dts/qcom/msm8996.dtsi
++++ b/arch/arm64/boot/dts/qcom/msm8996.dtsi
+@@ -989,16 +989,16 @@
+ "csi_clk_mux",
+ "vfe0",
+ "vfe1";
+- interrupts = <GIC_SPI 78 0>,
+- <GIC_SPI 79 0>,
+- <GIC_SPI 80 0>,
+- <GIC_SPI 296 0>,
+- <GIC_SPI 297 0>,
+- <GIC_SPI 298 0>,
+- <GIC_SPI 299 0>,
+- <GIC_SPI 309 0>,
+- <GIC_SPI 314 0>,
+- <GIC_SPI 315 0>;
++ interrupts = <GIC_SPI 78 IRQ_TYPE_EDGE_RISING>,
++ <GIC_SPI 79 IRQ_TYPE_EDGE_RISING>,
++ <GIC_SPI 80 IRQ_TYPE_EDGE_RISING>,
++ <GIC_SPI 296 IRQ_TYPE_EDGE_RISING>,
++ <GIC_SPI 297 IRQ_TYPE_EDGE_RISING>,
++ <GIC_SPI 298 IRQ_TYPE_EDGE_RISING>,
++ <GIC_SPI 299 IRQ_TYPE_EDGE_RISING>,
++ <GIC_SPI 309 IRQ_TYPE_EDGE_RISING>,
++ <GIC_SPI 314 IRQ_TYPE_EDGE_RISING>,
++ <GIC_SPI 315 IRQ_TYPE_EDGE_RISING>;
+ interrupt-names = "csiphy0",
+ "csiphy1",
+ "csiphy2",
+diff --git a/arch/arm64/boot/dts/qcom/pm8150.dtsi b/arch/arm64/boot/dts/qcom/pm8150.dtsi
+index b6e304748a57..c0b197458665 100644
+--- a/arch/arm64/boot/dts/qcom/pm8150.dtsi
++++ b/arch/arm64/boot/dts/qcom/pm8150.dtsi
+@@ -73,18 +73,8 @@
+ reg = <0xc000>;
+ gpio-controller;
+ #gpio-cells = <2>;
+- interrupts = <0x0 0xc0 0x0 IRQ_TYPE_NONE>,
+- <0x0 0xc1 0x0 IRQ_TYPE_NONE>,
+- <0x0 0xc2 0x0 IRQ_TYPE_NONE>,
+- <0x0 0xc3 0x0 IRQ_TYPE_NONE>,
+- <0x0 0xc4 0x0 IRQ_TYPE_NONE>,
+- <0x0 0xc5 0x0 IRQ_TYPE_NONE>,
+- <0x0 0xc6 0x0 IRQ_TYPE_NONE>,
+- <0x0 0xc7 0x0 IRQ_TYPE_NONE>,
+- <0x0 0xc8 0x0 IRQ_TYPE_NONE>,
+- <0x0 0xc9 0x0 IRQ_TYPE_NONE>,
+- <0x0 0xca 0x0 IRQ_TYPE_NONE>,
+- <0x0 0xcb 0x0 IRQ_TYPE_NONE>;
++ interrupt-controller;
++ #interrupt-cells = <2>;
+ };
+ };
+
+diff --git a/arch/arm64/boot/dts/qcom/pm8150b.dtsi b/arch/arm64/boot/dts/qcom/pm8150b.dtsi
+index 322379d5c31f..40b5d75a4a1d 100644
+--- a/arch/arm64/boot/dts/qcom/pm8150b.dtsi
++++ b/arch/arm64/boot/dts/qcom/pm8150b.dtsi
+@@ -62,18 +62,8 @@
+ reg = <0xc000>;
+ gpio-controller;
+ #gpio-cells = <2>;
+- interrupts = <0x2 0xc0 0x0 IRQ_TYPE_NONE>,
+- <0x2 0xc1 0x0 IRQ_TYPE_NONE>,
+- <0x2 0xc2 0x0 IRQ_TYPE_NONE>,
+- <0x2 0xc3 0x0 IRQ_TYPE_NONE>,
+- <0x2 0xc4 0x0 IRQ_TYPE_NONE>,
+- <0x2 0xc5 0x0 IRQ_TYPE_NONE>,
+- <0x2 0xc6 0x0 IRQ_TYPE_NONE>,
+- <0x2 0xc7 0x0 IRQ_TYPE_NONE>,
+- <0x2 0xc8 0x0 IRQ_TYPE_NONE>,
+- <0x2 0xc9 0x0 IRQ_TYPE_NONE>,
+- <0x2 0xca 0x0 IRQ_TYPE_NONE>,
+- <0x2 0xcb 0x0 IRQ_TYPE_NONE>;
++ interrupt-controller;
++ #interrupt-cells = <2>;
+ };
+ };
+
+diff --git a/arch/arm64/boot/dts/qcom/pm8150l.dtsi b/arch/arm64/boot/dts/qcom/pm8150l.dtsi
+index eb0e9a090e42..cf05e0685d10 100644
+--- a/arch/arm64/boot/dts/qcom/pm8150l.dtsi
++++ b/arch/arm64/boot/dts/qcom/pm8150l.dtsi
+@@ -56,18 +56,8 @@
+ reg = <0xc000>;
+ gpio-controller;
+ #gpio-cells = <2>;
+- interrupts = <0x4 0xc0 0x0 IRQ_TYPE_NONE>,
+- <0x4 0xc1 0x0 IRQ_TYPE_NONE>,
+- <0x4 0xc2 0x0 IRQ_TYPE_NONE>,
+- <0x4 0xc3 0x0 IRQ_TYPE_NONE>,
+- <0x4 0xc4 0x0 IRQ_TYPE_NONE>,
+- <0x4 0xc5 0x0 IRQ_TYPE_NONE>,
+- <0x4 0xc6 0x0 IRQ_TYPE_NONE>,
+- <0x4 0xc7 0x0 IRQ_TYPE_NONE>,
+- <0x4 0xc8 0x0 IRQ_TYPE_NONE>,
+- <0x4 0xc9 0x0 IRQ_TYPE_NONE>,
+- <0x4 0xca 0x0 IRQ_TYPE_NONE>,
+- <0x4 0xcb 0x0 IRQ_TYPE_NONE>;
++ interrupt-controller;
++ #interrupt-cells = <2>;
+ };
+ };
+
+diff --git a/arch/arm64/boot/dts/qcom/sc7180.dtsi b/arch/arm64/boot/dts/qcom/sc7180.dtsi
+index 998f101ad623..eea92b314fc6 100644
+--- a/arch/arm64/boot/dts/qcom/sc7180.dtsi
++++ b/arch/arm64/boot/dts/qcom/sc7180.dtsi
+@@ -1657,8 +1657,7 @@
+ pdc: interrupt-controller@b220000 {
+ compatible = "qcom,sc7180-pdc", "qcom,pdc";
+ reg = <0 0x0b220000 0 0x30000>;
+- qcom,pdc-ranges = <0 480 15>, <17 497 98>,
+- <119 634 4>, <124 639 1>;
++ qcom,pdc-ranges = <0 480 94>, <94 609 31>, <125 63 1>;
+ #interrupt-cells = <2>;
+ interrupt-parent = <&intc>;
+ interrupt-controller;
+diff --git a/arch/arm64/boot/dts/qcom/sdm850-lenovo-yoga-c630.dts b/arch/arm64/boot/dts/qcom/sdm850-lenovo-yoga-c630.dts
+index 51a670ad15b2..4b9860a2c8eb 100644
+--- a/arch/arm64/boot/dts/qcom/sdm850-lenovo-yoga-c630.dts
++++ b/arch/arm64/boot/dts/qcom/sdm850-lenovo-yoga-c630.dts
+@@ -577,3 +577,14 @@
+ };
+ };
+ };
++
++&wifi {
++ status = "okay";
++
++ vdd-0.8-cx-mx-supply = <&vreg_l5a_0p8>;
++ vdd-1.8-xo-supply = <&vreg_l7a_1p8>;
++ vdd-1.3-rfa-supply = <&vreg_l17a_1p3>;
++ vdd-3.3-ch0-supply = <&vreg_l25a_3p3>;
++
++ qcom,snoc-host-cap-8bit-quirk;
++};
+diff --git a/arch/arm64/boot/dts/qcom/sm8250.dtsi b/arch/arm64/boot/dts/qcom/sm8250.dtsi
+index 891d83b2afea..2a7eaefd221d 100644
+--- a/arch/arm64/boot/dts/qcom/sm8250.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm8250.dtsi
+@@ -314,8 +314,8 @@
+ };
+
+ pdc: interrupt-controller@b220000 {
+- compatible = "qcom,sm8250-pdc";
+- reg = <0x0b220000 0x30000>, <0x17c000f0 0x60>;
++ compatible = "qcom,sm8250-pdc", "qcom,pdc";
++ reg = <0 0x0b220000 0 0x30000>, <0 0x17c000f0 0 0x60>;
+ qcom,pdc-ranges = <0 480 94>, <94 609 31>,
+ <125 63 1>, <126 716 12>;
+ #interrupt-cells = <2>;
+diff --git a/arch/arm64/boot/dts/realtek/rtd1293-ds418j.dts b/arch/arm64/boot/dts/realtek/rtd1293-ds418j.dts
+index b2dd583146b4..b2e44c6c2d22 100644
+--- a/arch/arm64/boot/dts/realtek/rtd1293-ds418j.dts
++++ b/arch/arm64/boot/dts/realtek/rtd1293-ds418j.dts
+@@ -1,6 +1,6 @@
+ // SPDX-License-Identifier: (GPL-2.0-or-later OR BSD-2-Clause)
+ /*
+- * Copyright (c) 2017 Andreas Färber
++ * Copyright (c) 2017-2019 Andreas Färber
+ */
+
+ /dts-v1/;
+@@ -11,9 +11,9 @@
+ compatible = "synology,ds418j", "realtek,rtd1293";
+ model = "Synology DiskStation DS418j";
+
+- memory@0 {
++ memory@1f000 {
+ device_type = "memory";
+- reg = <0x0 0x40000000>;
++ reg = <0x1f000 0x3ffe1000>; /* boot ROM to 1 GiB */
+ };
+
+ aliases {
+diff --git a/arch/arm64/boot/dts/realtek/rtd1293.dtsi b/arch/arm64/boot/dts/realtek/rtd1293.dtsi
+index bd4e22723f7b..2d92b56ac94d 100644
+--- a/arch/arm64/boot/dts/realtek/rtd1293.dtsi
++++ b/arch/arm64/boot/dts/realtek/rtd1293.dtsi
+@@ -36,16 +36,20 @@
+ timer {
+ compatible = "arm,armv8-timer";
+ interrupts = <GIC_PPI 13
+- (GIC_CPU_MASK_RAW(0xf) | IRQ_TYPE_LEVEL_LOW)>,
++ (GIC_CPU_MASK_SIMPLE(2) | IRQ_TYPE_LEVEL_LOW)>,
+ <GIC_PPI 14
+- (GIC_CPU_MASK_RAW(0xf) | IRQ_TYPE_LEVEL_LOW)>,
++ (GIC_CPU_MASK_SIMPLE(2) | IRQ_TYPE_LEVEL_LOW)>,
+ <GIC_PPI 11
+- (GIC_CPU_MASK_RAW(0xf) | IRQ_TYPE_LEVEL_LOW)>,
++ (GIC_CPU_MASK_SIMPLE(2) | IRQ_TYPE_LEVEL_LOW)>,
+ <GIC_PPI 10
+- (GIC_CPU_MASK_RAW(0xf) | IRQ_TYPE_LEVEL_LOW)>;
++ (GIC_CPU_MASK_SIMPLE(2) | IRQ_TYPE_LEVEL_LOW)>;
+ };
+ };
+
+ &arm_pmu {
+ interrupt-affinity = <&cpu0>, <&cpu1>;
+ };
++
++&gic {
++ interrupts = <GIC_PPI 9 (GIC_CPU_MASK_SIMPLE(2) | IRQ_TYPE_LEVEL_LOW)>;
++};
+diff --git a/arch/arm64/boot/dts/realtek/rtd1295-mele-v9.dts b/arch/arm64/boot/dts/realtek/rtd1295-mele-v9.dts
+index bd584e99fff9..cf4a57c012a8 100644
+--- a/arch/arm64/boot/dts/realtek/rtd1295-mele-v9.dts
++++ b/arch/arm64/boot/dts/realtek/rtd1295-mele-v9.dts
+@@ -1,5 +1,5 @@
+ /*
+- * Copyright (c) 2017 Andreas Färber
++ * Copyright (c) 2017-2019 Andreas Färber
+ *
+ * SPDX-License-Identifier: (GPL-2.0+ OR MIT)
+ */
+@@ -12,9 +12,9 @@
+ compatible = "mele,v9", "realtek,rtd1295";
+ model = "MeLE V9";
+
+- memory@0 {
++ memory@1f000 {
+ device_type = "memory";
+- reg = <0x0 0x80000000>;
++ reg = <0x1f000 0x7ffe1000>; /* boot ROM to 2 GiB */
+ };
+
+ aliases {
+diff --git a/arch/arm64/boot/dts/realtek/rtd1295-probox2-ava.dts b/arch/arm64/boot/dts/realtek/rtd1295-probox2-ava.dts
+index 8e2b0e75298a..14161c3f304d 100644
+--- a/arch/arm64/boot/dts/realtek/rtd1295-probox2-ava.dts
++++ b/arch/arm64/boot/dts/realtek/rtd1295-probox2-ava.dts
+@@ -1,5 +1,5 @@
+ /*
+- * Copyright (c) 2017 Andreas Färber
++ * Copyright (c) 2017-2019 Andreas Färber
+ *
+ * SPDX-License-Identifier: (GPL-2.0+ OR MIT)
+ */
+@@ -12,9 +12,9 @@
+ compatible = "probox2,ava", "realtek,rtd1295";
+ model = "PROBOX2 AVA";
+
+- memory@0 {
++ memory@1f000 {
+ device_type = "memory";
+- reg = <0x0 0x80000000>;
++ reg = <0x1f000 0x7ffe1000>; /* boot ROM to 2 GiB */
+ };
+
+ aliases {
+diff --git a/arch/arm64/boot/dts/realtek/rtd1295-zidoo-x9s.dts b/arch/arm64/boot/dts/realtek/rtd1295-zidoo-x9s.dts
+index e98e508b9514..4beb37bb9522 100644
+--- a/arch/arm64/boot/dts/realtek/rtd1295-zidoo-x9s.dts
++++ b/arch/arm64/boot/dts/realtek/rtd1295-zidoo-x9s.dts
+@@ -11,9 +11,9 @@
+ compatible = "zidoo,x9s", "realtek,rtd1295";
+ model = "Zidoo X9S";
+
+- memory@0 {
++ memory@1f000 {
+ device_type = "memory";
+- reg = <0x0 0x80000000>;
++ reg = <0x1f000 0x7ffe1000>; /* boot ROM to 2 GiB */
+ };
+
+ aliases {
+diff --git a/arch/arm64/boot/dts/realtek/rtd1295.dtsi b/arch/arm64/boot/dts/realtek/rtd1295.dtsi
+index 93f0e1d97721..1402abe80ea1 100644
+--- a/arch/arm64/boot/dts/realtek/rtd1295.dtsi
++++ b/arch/arm64/boot/dts/realtek/rtd1295.dtsi
+@@ -2,7 +2,7 @@
+ /*
+ * Realtek RTD1295 SoC
+ *
+- * Copyright (c) 2016-2017 Andreas Färber
++ * Copyright (c) 2016-2019 Andreas Färber
+ */
+
+ #include "rtd129x.dtsi"
+@@ -47,27 +47,16 @@
+ };
+ };
+
+- reserved-memory {
+- #address-cells = <1>;
+- #size-cells = <1>;
+- ranges;
+-
+- tee@10100000 {
+- reg = <0x10100000 0xf00000>;
+- no-map;
+- };
+- };
+-
+ timer {
+ compatible = "arm,armv8-timer";
+ interrupts = <GIC_PPI 13
+- (GIC_CPU_MASK_RAW(0xf) | IRQ_TYPE_LEVEL_LOW)>,
++ (GIC_CPU_MASK_SIMPLE(4) | IRQ_TYPE_LEVEL_LOW)>,
+ <GIC_PPI 14
+- (GIC_CPU_MASK_RAW(0xf) | IRQ_TYPE_LEVEL_LOW)>,
++ (GIC_CPU_MASK_SIMPLE(4) | IRQ_TYPE_LEVEL_LOW)>,
+ <GIC_PPI 11
+- (GIC_CPU_MASK_RAW(0xf) | IRQ_TYPE_LEVEL_LOW)>,
++ (GIC_CPU_MASK_SIMPLE(4) | IRQ_TYPE_LEVEL_LOW)>,
+ <GIC_PPI 10
+- (GIC_CPU_MASK_RAW(0xf) | IRQ_TYPE_LEVEL_LOW)>;
++ (GIC_CPU_MASK_SIMPLE(4) | IRQ_TYPE_LEVEL_LOW)>;
+ };
+ };
+
+diff --git a/arch/arm64/boot/dts/realtek/rtd1296-ds418.dts b/arch/arm64/boot/dts/realtek/rtd1296-ds418.dts
+index 5a051a52bf88..cc706d13da8b 100644
+--- a/arch/arm64/boot/dts/realtek/rtd1296-ds418.dts
++++ b/arch/arm64/boot/dts/realtek/rtd1296-ds418.dts
+@@ -11,9 +11,9 @@
+ compatible = "synology,ds418", "realtek,rtd1296";
+ model = "Synology DiskStation DS418";
+
+- memory@0 {
++ memory@1f000 {
+ device_type = "memory";
+- reg = <0x0 0x80000000>;
++ reg = <0x1f000 0x7ffe1000>; /* boot ROM to 2 GiB */
+ };
+
+ aliases {
+diff --git a/arch/arm64/boot/dts/realtek/rtd1296.dtsi b/arch/arm64/boot/dts/realtek/rtd1296.dtsi
+index 0f9e59cac086..fb864a139c97 100644
+--- a/arch/arm64/boot/dts/realtek/rtd1296.dtsi
++++ b/arch/arm64/boot/dts/realtek/rtd1296.dtsi
+@@ -50,13 +50,13 @@
+ timer {
+ compatible = "arm,armv8-timer";
+ interrupts = <GIC_PPI 13
+- (GIC_CPU_MASK_RAW(0xf) | IRQ_TYPE_LEVEL_LOW)>,
++ (GIC_CPU_MASK_SIMPLE(4) | IRQ_TYPE_LEVEL_LOW)>,
+ <GIC_PPI 14
+- (GIC_CPU_MASK_RAW(0xf) | IRQ_TYPE_LEVEL_LOW)>,
++ (GIC_CPU_MASK_SIMPLE(4) | IRQ_TYPE_LEVEL_LOW)>,
+ <GIC_PPI 11
+- (GIC_CPU_MASK_RAW(0xf) | IRQ_TYPE_LEVEL_LOW)>,
++ (GIC_CPU_MASK_SIMPLE(4) | IRQ_TYPE_LEVEL_LOW)>,
+ <GIC_PPI 10
+- (GIC_CPU_MASK_RAW(0xf) | IRQ_TYPE_LEVEL_LOW)>;
++ (GIC_CPU_MASK_SIMPLE(4) | IRQ_TYPE_LEVEL_LOW)>;
+ };
+ };
+
+diff --git a/arch/arm64/boot/dts/realtek/rtd129x.dtsi b/arch/arm64/boot/dts/realtek/rtd129x.dtsi
+index 4433114476f5..b63d0c03597a 100644
+--- a/arch/arm64/boot/dts/realtek/rtd129x.dtsi
++++ b/arch/arm64/boot/dts/realtek/rtd129x.dtsi
+@@ -2,14 +2,12 @@
+ /*
+ * Realtek RTD1293/RTD1295/RTD1296 SoC
+ *
+- * Copyright (c) 2016-2017 Andreas Färber
++ * Copyright (c) 2016-2019 Andreas Färber
+ */
+
+-/memreserve/ 0x0000000000000000 0x0000000000030000;
+-/memreserve/ 0x000000000001f000 0x0000000000001000;
+-/memreserve/ 0x0000000000030000 0x00000000000d0000;
++/memreserve/ 0x0000000000000000 0x000000000001f000;
++/memreserve/ 0x000000000001f000 0x00000000000e1000;
+ /memreserve/ 0x0000000001b00000 0x00000000004be000;
+-/memreserve/ 0x0000000001ffe000 0x0000000000004000;
+
+ #include <dt-bindings/interrupt-controller/arm-gic.h>
+ #include <dt-bindings/reset/realtek,rtd1295.h>
+@@ -19,6 +17,25 @@
+ #address-cells = <1>;
+ #size-cells = <1>;
+
++ reserved-memory {
++ #address-cells = <1>;
++ #size-cells = <1>;
++ ranges;
++
++ rpc_comm: rpc@1f000 {
++ reg = <0x1f000 0x1000>;
++ };
++
++ rpc_ringbuf: rpc@1ffe000 {
++ reg = <0x1ffe000 0x4000>;
++ };
++
++ tee: tee@10100000 {
++ reg = <0x10100000 0xf00000>;
++ no-map;
++ };
++ };
++
+ arm_pmu: arm-pmu {
+ compatible = "arm,cortex-a53-pmu";
+ interrupts = <GIC_SPI 48 IRQ_TYPE_LEVEL_HIGH>;
+@@ -35,8 +52,9 @@
+ compatible = "simple-bus";
+ #address-cells = <1>;
+ #size-cells = <1>;
+- /* Exclude up to 2 GiB of RAM */
+- ranges = <0x80000000 0x80000000 0x80000000>;
++ ranges = <0x00000000 0x00000000 0x0001f000>, /* boot ROM */
++ /* Exclude up to 2 GiB of RAM */
++ <0x80000000 0x80000000 0x80000000>;
+
+ reset1: reset-controller@98000000 {
+ compatible = "snps,dw-low-reset";
+diff --git a/arch/arm64/boot/dts/renesas/r8a774a1.dtsi b/arch/arm64/boot/dts/renesas/r8a774a1.dtsi
+index 79023433a740..a603d947970e 100644
+--- a/arch/arm64/boot/dts/renesas/r8a774a1.dtsi
++++ b/arch/arm64/boot/dts/renesas/r8a774a1.dtsi
+@@ -1000,7 +1000,7 @@
+ <&ipmmu_ds1 30>, <&ipmmu_ds1 31>;
+ };
+
+- ipmmu_ds0: mmu@e6740000 {
++ ipmmu_ds0: iommu@e6740000 {
+ compatible = "renesas,ipmmu-r8a774a1";
+ reg = <0 0xe6740000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 0>;
+@@ -1008,7 +1008,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_ds1: mmu@e7740000 {
++ ipmmu_ds1: iommu@e7740000 {
+ compatible = "renesas,ipmmu-r8a774a1";
+ reg = <0 0xe7740000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 1>;
+@@ -1016,7 +1016,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_hc: mmu@e6570000 {
++ ipmmu_hc: iommu@e6570000 {
+ compatible = "renesas,ipmmu-r8a774a1";
+ reg = <0 0xe6570000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 2>;
+@@ -1024,7 +1024,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_mm: mmu@e67b0000 {
++ ipmmu_mm: iommu@e67b0000 {
+ compatible = "renesas,ipmmu-r8a774a1";
+ reg = <0 0xe67b0000 0 0x1000>;
+ interrupts = <GIC_SPI 196 IRQ_TYPE_LEVEL_HIGH>,
+@@ -1033,7 +1033,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_mp: mmu@ec670000 {
++ ipmmu_mp: iommu@ec670000 {
+ compatible = "renesas,ipmmu-r8a774a1";
+ reg = <0 0xec670000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 4>;
+@@ -1041,7 +1041,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_pv0: mmu@fd800000 {
++ ipmmu_pv0: iommu@fd800000 {
+ compatible = "renesas,ipmmu-r8a774a1";
+ reg = <0 0xfd800000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 5>;
+@@ -1049,7 +1049,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_pv1: mmu@fd950000 {
++ ipmmu_pv1: iommu@fd950000 {
+ compatible = "renesas,ipmmu-r8a774a1";
+ reg = <0 0xfd950000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 6>;
+@@ -1057,7 +1057,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_vc0: mmu@fe6b0000 {
++ ipmmu_vc0: iommu@fe6b0000 {
+ compatible = "renesas,ipmmu-r8a774a1";
+ reg = <0 0xfe6b0000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 8>;
+@@ -1065,7 +1065,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_vi0: mmu@febd0000 {
++ ipmmu_vi0: iommu@febd0000 {
+ compatible = "renesas,ipmmu-r8a774a1";
+ reg = <0 0xfebd0000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 9>;
+diff --git a/arch/arm64/boot/dts/renesas/r8a774b1.dtsi b/arch/arm64/boot/dts/renesas/r8a774b1.dtsi
+index 3137f735974b..1e51855c7cd3 100644
+--- a/arch/arm64/boot/dts/renesas/r8a774b1.dtsi
++++ b/arch/arm64/boot/dts/renesas/r8a774b1.dtsi
+@@ -874,7 +874,7 @@
+ <&ipmmu_ds1 30>, <&ipmmu_ds1 31>;
+ };
+
+- ipmmu_ds0: mmu@e6740000 {
++ ipmmu_ds0: iommu@e6740000 {
+ compatible = "renesas,ipmmu-r8a774b1";
+ reg = <0 0xe6740000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 0>;
+@@ -882,7 +882,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_ds1: mmu@e7740000 {
++ ipmmu_ds1: iommu@e7740000 {
+ compatible = "renesas,ipmmu-r8a774b1";
+ reg = <0 0xe7740000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 1>;
+@@ -890,7 +890,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_hc: mmu@e6570000 {
++ ipmmu_hc: iommu@e6570000 {
+ compatible = "renesas,ipmmu-r8a774b1";
+ reg = <0 0xe6570000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 2>;
+@@ -898,7 +898,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_mm: mmu@e67b0000 {
++ ipmmu_mm: iommu@e67b0000 {
+ compatible = "renesas,ipmmu-r8a774b1";
+ reg = <0 0xe67b0000 0 0x1000>;
+ interrupts = <GIC_SPI 196 IRQ_TYPE_LEVEL_HIGH>,
+@@ -907,7 +907,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_mp: mmu@ec670000 {
++ ipmmu_mp: iommu@ec670000 {
+ compatible = "renesas,ipmmu-r8a774b1";
+ reg = <0 0xec670000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 4>;
+@@ -915,7 +915,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_pv0: mmu@fd800000 {
++ ipmmu_pv0: iommu@fd800000 {
+ compatible = "renesas,ipmmu-r8a774b1";
+ reg = <0 0xfd800000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 6>;
+@@ -923,7 +923,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_vc0: mmu@fe6b0000 {
++ ipmmu_vc0: iommu@fe6b0000 {
+ compatible = "renesas,ipmmu-r8a774b1";
+ reg = <0 0xfe6b0000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 12>;
+@@ -931,7 +931,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_vi0: mmu@febd0000 {
++ ipmmu_vi0: iommu@febd0000 {
+ compatible = "renesas,ipmmu-r8a774b1";
+ reg = <0 0xfebd0000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 14>;
+@@ -939,7 +939,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_vp0: mmu@fe990000 {
++ ipmmu_vp0: iommu@fe990000 {
+ compatible = "renesas,ipmmu-r8a774b1";
+ reg = <0 0xfe990000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 16>;
+diff --git a/arch/arm64/boot/dts/renesas/r8a774c0.dtsi b/arch/arm64/boot/dts/renesas/r8a774c0.dtsi
+index 22785cbddff5..5c72a7efbb03 100644
+--- a/arch/arm64/boot/dts/renesas/r8a774c0.dtsi
++++ b/arch/arm64/boot/dts/renesas/r8a774c0.dtsi
+@@ -847,7 +847,7 @@
+ <&ipmmu_ds1 30>, <&ipmmu_ds1 31>;
+ };
+
+- ipmmu_ds0: mmu@e6740000 {
++ ipmmu_ds0: iommu@e6740000 {
+ compatible = "renesas,ipmmu-r8a774c0";
+ reg = <0 0xe6740000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 0>;
+@@ -855,7 +855,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_ds1: mmu@e7740000 {
++ ipmmu_ds1: iommu@e7740000 {
+ compatible = "renesas,ipmmu-r8a774c0";
+ reg = <0 0xe7740000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 1>;
+@@ -863,7 +863,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_hc: mmu@e6570000 {
++ ipmmu_hc: iommu@e6570000 {
+ compatible = "renesas,ipmmu-r8a774c0";
+ reg = <0 0xe6570000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 2>;
+@@ -871,7 +871,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_mm: mmu@e67b0000 {
++ ipmmu_mm: iommu@e67b0000 {
+ compatible = "renesas,ipmmu-r8a774c0";
+ reg = <0 0xe67b0000 0 0x1000>;
+ interrupts = <GIC_SPI 196 IRQ_TYPE_LEVEL_HIGH>,
+@@ -880,7 +880,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_mp: mmu@ec670000 {
++ ipmmu_mp: iommu@ec670000 {
+ compatible = "renesas,ipmmu-r8a774c0";
+ reg = <0 0xec670000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 4>;
+@@ -888,7 +888,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_pv0: mmu@fd800000 {
++ ipmmu_pv0: iommu@fd800000 {
+ compatible = "renesas,ipmmu-r8a774c0";
+ reg = <0 0xfd800000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 6>;
+@@ -896,7 +896,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_vc0: mmu@fe6b0000 {
++ ipmmu_vc0: iommu@fe6b0000 {
+ compatible = "renesas,ipmmu-r8a774c0";
+ reg = <0 0xfe6b0000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 12>;
+@@ -904,7 +904,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_vi0: mmu@febd0000 {
++ ipmmu_vi0: iommu@febd0000 {
+ compatible = "renesas,ipmmu-r8a774c0";
+ reg = <0 0xfebd0000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 14>;
+@@ -912,7 +912,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_vp0: mmu@fe990000 {
++ ipmmu_vp0: iommu@fe990000 {
+ compatible = "renesas,ipmmu-r8a774c0";
+ reg = <0 0xfe990000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 16>;
+diff --git a/arch/arm64/boot/dts/renesas/r8a77950.dtsi b/arch/arm64/boot/dts/renesas/r8a77950.dtsi
+index 3975eecd50c4..d716c4386ae9 100644
+--- a/arch/arm64/boot/dts/renesas/r8a77950.dtsi
++++ b/arch/arm64/boot/dts/renesas/r8a77950.dtsi
+@@ -77,7 +77,7 @@
+ /delete-node/ dma-controller@e6460000;
+ /delete-node/ dma-controller@e6470000;
+
+- ipmmu_mp1: mmu@ec680000 {
++ ipmmu_mp1: iommu@ec680000 {
+ compatible = "renesas,ipmmu-r8a7795";
+ reg = <0 0xec680000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 5>;
+@@ -85,7 +85,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_sy: mmu@e7730000 {
++ ipmmu_sy: iommu@e7730000 {
+ compatible = "renesas,ipmmu-r8a7795";
+ reg = <0 0xe7730000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 8>;
+@@ -93,11 +93,11 @@
+ #iommu-cells = <1>;
+ };
+
+- /delete-node/ mmu@fd950000;
+- /delete-node/ mmu@fd960000;
+- /delete-node/ mmu@fd970000;
+- /delete-node/ mmu@febe0000;
+- /delete-node/ mmu@fe980000;
++ /delete-node/ iommu@fd950000;
++ /delete-node/ iommu@fd960000;
++ /delete-node/ iommu@fd970000;
++ /delete-node/ iommu@febe0000;
++ /delete-node/ iommu@fe980000;
+
+ xhci1: usb@ee040000 {
+ compatible = "renesas,xhci-r8a7795", "renesas,rcar-gen3-xhci";
+diff --git a/arch/arm64/boot/dts/renesas/r8a77951.dtsi b/arch/arm64/boot/dts/renesas/r8a77951.dtsi
+index 52229546454c..61d67d9714ab 100644
+--- a/arch/arm64/boot/dts/renesas/r8a77951.dtsi
++++ b/arch/arm64/boot/dts/renesas/r8a77951.dtsi
+@@ -1073,7 +1073,7 @@
+ <&ipmmu_ds1 30>, <&ipmmu_ds1 31>;
+ };
+
+- ipmmu_ds0: mmu@e6740000 {
++ ipmmu_ds0: iommu@e6740000 {
+ compatible = "renesas,ipmmu-r8a7795";
+ reg = <0 0xe6740000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 0>;
+@@ -1081,7 +1081,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_ds1: mmu@e7740000 {
++ ipmmu_ds1: iommu@e7740000 {
+ compatible = "renesas,ipmmu-r8a7795";
+ reg = <0 0xe7740000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 1>;
+@@ -1089,7 +1089,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_hc: mmu@e6570000 {
++ ipmmu_hc: iommu@e6570000 {
+ compatible = "renesas,ipmmu-r8a7795";
+ reg = <0 0xe6570000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 2>;
+@@ -1097,7 +1097,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_ir: mmu@ff8b0000 {
++ ipmmu_ir: iommu@ff8b0000 {
+ compatible = "renesas,ipmmu-r8a7795";
+ reg = <0 0xff8b0000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 3>;
+@@ -1105,7 +1105,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_mm: mmu@e67b0000 {
++ ipmmu_mm: iommu@e67b0000 {
+ compatible = "renesas,ipmmu-r8a7795";
+ reg = <0 0xe67b0000 0 0x1000>;
+ interrupts = <GIC_SPI 196 IRQ_TYPE_LEVEL_HIGH>,
+@@ -1114,7 +1114,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_mp0: mmu@ec670000 {
++ ipmmu_mp0: iommu@ec670000 {
+ compatible = "renesas,ipmmu-r8a7795";
+ reg = <0 0xec670000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 4>;
+@@ -1122,7 +1122,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_pv0: mmu@fd800000 {
++ ipmmu_pv0: iommu@fd800000 {
+ compatible = "renesas,ipmmu-r8a7795";
+ reg = <0 0xfd800000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 6>;
+@@ -1130,7 +1130,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_pv1: mmu@fd950000 {
++ ipmmu_pv1: iommu@fd950000 {
+ compatible = "renesas,ipmmu-r8a7795";
+ reg = <0 0xfd950000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 7>;
+@@ -1138,7 +1138,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_pv2: mmu@fd960000 {
++ ipmmu_pv2: iommu@fd960000 {
+ compatible = "renesas,ipmmu-r8a7795";
+ reg = <0 0xfd960000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 8>;
+@@ -1146,7 +1146,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_pv3: mmu@fd970000 {
++ ipmmu_pv3: iommu@fd970000 {
+ compatible = "renesas,ipmmu-r8a7795";
+ reg = <0 0xfd970000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 9>;
+@@ -1154,7 +1154,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_rt: mmu@ffc80000 {
++ ipmmu_rt: iommu@ffc80000 {
+ compatible = "renesas,ipmmu-r8a7795";
+ reg = <0 0xffc80000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 10>;
+@@ -1162,7 +1162,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_vc0: mmu@fe6b0000 {
++ ipmmu_vc0: iommu@fe6b0000 {
+ compatible = "renesas,ipmmu-r8a7795";
+ reg = <0 0xfe6b0000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 12>;
+@@ -1170,7 +1170,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_vc1: mmu@fe6f0000 {
++ ipmmu_vc1: iommu@fe6f0000 {
+ compatible = "renesas,ipmmu-r8a7795";
+ reg = <0 0xfe6f0000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 13>;
+@@ -1178,7 +1178,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_vi0: mmu@febd0000 {
++ ipmmu_vi0: iommu@febd0000 {
+ compatible = "renesas,ipmmu-r8a7795";
+ reg = <0 0xfebd0000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 14>;
+@@ -1186,7 +1186,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_vi1: mmu@febe0000 {
++ ipmmu_vi1: iommu@febe0000 {
+ compatible = "renesas,ipmmu-r8a7795";
+ reg = <0 0xfebe0000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 15>;
+@@ -1194,7 +1194,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_vp0: mmu@fe990000 {
++ ipmmu_vp0: iommu@fe990000 {
+ compatible = "renesas,ipmmu-r8a7795";
+ reg = <0 0xfe990000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 16>;
+@@ -1202,7 +1202,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_vp1: mmu@fe980000 {
++ ipmmu_vp1: iommu@fe980000 {
+ compatible = "renesas,ipmmu-r8a7795";
+ reg = <0 0xfe980000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 17>;
+diff --git a/arch/arm64/boot/dts/renesas/r8a77960.dtsi b/arch/arm64/boot/dts/renesas/r8a77960.dtsi
+index 31282367d3ac..33bf62acffbb 100644
+--- a/arch/arm64/boot/dts/renesas/r8a77960.dtsi
++++ b/arch/arm64/boot/dts/renesas/r8a77960.dtsi
+@@ -997,7 +997,7 @@
+ <&ipmmu_ds1 30>, <&ipmmu_ds1 31>;
+ };
+
+- ipmmu_ds0: mmu@e6740000 {
++ ipmmu_ds0: iommu@e6740000 {
+ compatible = "renesas,ipmmu-r8a7796";
+ reg = <0 0xe6740000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 0>;
+@@ -1005,7 +1005,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_ds1: mmu@e7740000 {
++ ipmmu_ds1: iommu@e7740000 {
+ compatible = "renesas,ipmmu-r8a7796";
+ reg = <0 0xe7740000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 1>;
+@@ -1013,7 +1013,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_hc: mmu@e6570000 {
++ ipmmu_hc: iommu@e6570000 {
+ compatible = "renesas,ipmmu-r8a7796";
+ reg = <0 0xe6570000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 2>;
+@@ -1021,7 +1021,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_ir: mmu@ff8b0000 {
++ ipmmu_ir: iommu@ff8b0000 {
+ compatible = "renesas,ipmmu-r8a7796";
+ reg = <0 0xff8b0000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 3>;
+@@ -1029,7 +1029,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_mm: mmu@e67b0000 {
++ ipmmu_mm: iommu@e67b0000 {
+ compatible = "renesas,ipmmu-r8a7796";
+ reg = <0 0xe67b0000 0 0x1000>;
+ interrupts = <GIC_SPI 196 IRQ_TYPE_LEVEL_HIGH>,
+@@ -1038,7 +1038,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_mp: mmu@ec670000 {
++ ipmmu_mp: iommu@ec670000 {
+ compatible = "renesas,ipmmu-r8a7796";
+ reg = <0 0xec670000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 4>;
+@@ -1046,7 +1046,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_pv0: mmu@fd800000 {
++ ipmmu_pv0: iommu@fd800000 {
+ compatible = "renesas,ipmmu-r8a7796";
+ reg = <0 0xfd800000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 5>;
+@@ -1054,7 +1054,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_pv1: mmu@fd950000 {
++ ipmmu_pv1: iommu@fd950000 {
+ compatible = "renesas,ipmmu-r8a7796";
+ reg = <0 0xfd950000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 6>;
+@@ -1062,7 +1062,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_rt: mmu@ffc80000 {
++ ipmmu_rt: iommu@ffc80000 {
+ compatible = "renesas,ipmmu-r8a7796";
+ reg = <0 0xffc80000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 7>;
+@@ -1070,7 +1070,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_vc0: mmu@fe6b0000 {
++ ipmmu_vc0: iommu@fe6b0000 {
+ compatible = "renesas,ipmmu-r8a7796";
+ reg = <0 0xfe6b0000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 8>;
+@@ -1078,7 +1078,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_vi0: mmu@febd0000 {
++ ipmmu_vi0: iommu@febd0000 {
+ compatible = "renesas,ipmmu-r8a7796";
+ reg = <0 0xfebd0000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 9>;
+diff --git a/arch/arm64/boot/dts/renesas/r8a77965.dtsi b/arch/arm64/boot/dts/renesas/r8a77965.dtsi
+index d82dd4e67b62..6f7ab39fd282 100644
+--- a/arch/arm64/boot/dts/renesas/r8a77965.dtsi
++++ b/arch/arm64/boot/dts/renesas/r8a77965.dtsi
+@@ -867,7 +867,7 @@
+ <&ipmmu_ds1 30>, <&ipmmu_ds1 31>;
+ };
+
+- ipmmu_ds0: mmu@e6740000 {
++ ipmmu_ds0: iommu@e6740000 {
+ compatible = "renesas,ipmmu-r8a77965";
+ reg = <0 0xe6740000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 0>;
+@@ -875,7 +875,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_ds1: mmu@e7740000 {
++ ipmmu_ds1: iommu@e7740000 {
+ compatible = "renesas,ipmmu-r8a77965";
+ reg = <0 0xe7740000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 1>;
+@@ -883,7 +883,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_hc: mmu@e6570000 {
++ ipmmu_hc: iommu@e6570000 {
+ compatible = "renesas,ipmmu-r8a77965";
+ reg = <0 0xe6570000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 2>;
+@@ -891,7 +891,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_mm: mmu@e67b0000 {
++ ipmmu_mm: iommu@e67b0000 {
+ compatible = "renesas,ipmmu-r8a77965";
+ reg = <0 0xe67b0000 0 0x1000>;
+ interrupts = <GIC_SPI 196 IRQ_TYPE_LEVEL_HIGH>,
+@@ -900,7 +900,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_mp: mmu@ec670000 {
++ ipmmu_mp: iommu@ec670000 {
+ compatible = "renesas,ipmmu-r8a77965";
+ reg = <0 0xec670000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 4>;
+@@ -908,7 +908,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_pv0: mmu@fd800000 {
++ ipmmu_pv0: iommu@fd800000 {
+ compatible = "renesas,ipmmu-r8a77965";
+ reg = <0 0xfd800000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 6>;
+@@ -916,7 +916,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_rt: mmu@ffc80000 {
++ ipmmu_rt: iommu@ffc80000 {
+ compatible = "renesas,ipmmu-r8a77965";
+ reg = <0 0xffc80000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 10>;
+@@ -924,7 +924,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_vc0: mmu@fe6b0000 {
++ ipmmu_vc0: iommu@fe6b0000 {
+ compatible = "renesas,ipmmu-r8a77965";
+ reg = <0 0xfe6b0000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 12>;
+@@ -932,7 +932,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_vi0: mmu@febd0000 {
++ ipmmu_vi0: iommu@febd0000 {
+ compatible = "renesas,ipmmu-r8a77965";
+ reg = <0 0xfebd0000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 14>;
+@@ -940,7 +940,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_vp0: mmu@fe990000 {
++ ipmmu_vp0: iommu@fe990000 {
+ compatible = "renesas,ipmmu-r8a77965";
+ reg = <0 0xfe990000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 16>;
+diff --git a/arch/arm64/boot/dts/renesas/r8a77970.dtsi b/arch/arm64/boot/dts/renesas/r8a77970.dtsi
+index a009c0ebc8b4..bd95ecb1b40d 100644
+--- a/arch/arm64/boot/dts/renesas/r8a77970.dtsi
++++ b/arch/arm64/boot/dts/renesas/r8a77970.dtsi
+@@ -985,7 +985,7 @@
+ <&ipmmu_ds1 22>, <&ipmmu_ds1 23>;
+ };
+
+- ipmmu_ds1: mmu@e7740000 {
++ ipmmu_ds1: iommu@e7740000 {
+ compatible = "renesas,ipmmu-r8a77970";
+ reg = <0 0xe7740000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 0>;
+@@ -993,7 +993,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_ir: mmu@ff8b0000 {
++ ipmmu_ir: iommu@ff8b0000 {
+ compatible = "renesas,ipmmu-r8a77970";
+ reg = <0 0xff8b0000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 3>;
+@@ -1001,7 +1001,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_mm: mmu@e67b0000 {
++ ipmmu_mm: iommu@e67b0000 {
+ compatible = "renesas,ipmmu-r8a77970";
+ reg = <0 0xe67b0000 0 0x1000>;
+ interrupts = <GIC_SPI 196 IRQ_TYPE_LEVEL_HIGH>,
+@@ -1010,7 +1010,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_rt: mmu@ffc80000 {
++ ipmmu_rt: iommu@ffc80000 {
+ compatible = "renesas,ipmmu-r8a77970";
+ reg = <0 0xffc80000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 7>;
+@@ -1018,7 +1018,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_vi0: mmu@febd0000 {
++ ipmmu_vi0: iommu@febd0000 {
+ compatible = "renesas,ipmmu-r8a77970";
+ reg = <0 0xfebd0000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 9>;
+diff --git a/arch/arm64/boot/dts/renesas/r8a77980.dtsi b/arch/arm64/boot/dts/renesas/r8a77980.dtsi
+index d672b320bc14..387e6d99f2f3 100644
+--- a/arch/arm64/boot/dts/renesas/r8a77980.dtsi
++++ b/arch/arm64/boot/dts/renesas/r8a77980.dtsi
+@@ -1266,7 +1266,7 @@
+ status = "disabled";
+ };
+
+- ipmmu_ds1: mmu@e7740000 {
++ ipmmu_ds1: iommu@e7740000 {
+ compatible = "renesas,ipmmu-r8a77980";
+ reg = <0 0xe7740000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 0>;
+@@ -1274,7 +1274,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_ir: mmu@ff8b0000 {
++ ipmmu_ir: iommu@ff8b0000 {
+ compatible = "renesas,ipmmu-r8a77980";
+ reg = <0 0xff8b0000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 3>;
+@@ -1282,7 +1282,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_mm: mmu@e67b0000 {
++ ipmmu_mm: iommu@e67b0000 {
+ compatible = "renesas,ipmmu-r8a77980";
+ reg = <0 0xe67b0000 0 0x1000>;
+ interrupts = <GIC_SPI 196 IRQ_TYPE_LEVEL_HIGH>,
+@@ -1291,7 +1291,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_rt: mmu@ffc80000 {
++ ipmmu_rt: iommu@ffc80000 {
+ compatible = "renesas,ipmmu-r8a77980";
+ reg = <0 0xffc80000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 10>;
+@@ -1299,7 +1299,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_vc0: mmu@fe990000 {
++ ipmmu_vc0: iommu@fe990000 {
+ compatible = "renesas,ipmmu-r8a77980";
+ reg = <0 0xfe990000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 12>;
+@@ -1307,7 +1307,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_vi0: mmu@febd0000 {
++ ipmmu_vi0: iommu@febd0000 {
+ compatible = "renesas,ipmmu-r8a77980";
+ reg = <0 0xfebd0000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 14>;
+@@ -1315,7 +1315,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_vip0: mmu@e7b00000 {
++ ipmmu_vip0: iommu@e7b00000 {
+ compatible = "renesas,ipmmu-r8a77980";
+ reg = <0 0xe7b00000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 4>;
+@@ -1323,7 +1323,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_vip1: mmu@e7960000 {
++ ipmmu_vip1: iommu@e7960000 {
+ compatible = "renesas,ipmmu-r8a77980";
+ reg = <0 0xe7960000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 11>;
+diff --git a/arch/arm64/boot/dts/renesas/r8a77990.dtsi b/arch/arm64/boot/dts/renesas/r8a77990.dtsi
+index 1543f18e834f..cd11f24744d4 100644
+--- a/arch/arm64/boot/dts/renesas/r8a77990.dtsi
++++ b/arch/arm64/boot/dts/renesas/r8a77990.dtsi
+@@ -817,7 +817,7 @@
+ <&ipmmu_ds1 30>, <&ipmmu_ds1 31>;
+ };
+
+- ipmmu_ds0: mmu@e6740000 {
++ ipmmu_ds0: iommu@e6740000 {
+ compatible = "renesas,ipmmu-r8a77990";
+ reg = <0 0xe6740000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 0>;
+@@ -825,7 +825,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_ds1: mmu@e7740000 {
++ ipmmu_ds1: iommu@e7740000 {
+ compatible = "renesas,ipmmu-r8a77990";
+ reg = <0 0xe7740000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 1>;
+@@ -833,7 +833,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_hc: mmu@e6570000 {
++ ipmmu_hc: iommu@e6570000 {
+ compatible = "renesas,ipmmu-r8a77990";
+ reg = <0 0xe6570000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 2>;
+@@ -841,7 +841,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_mm: mmu@e67b0000 {
++ ipmmu_mm: iommu@e67b0000 {
+ compatible = "renesas,ipmmu-r8a77990";
+ reg = <0 0xe67b0000 0 0x1000>;
+ interrupts = <GIC_SPI 196 IRQ_TYPE_LEVEL_HIGH>,
+@@ -850,7 +850,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_mp: mmu@ec670000 {
++ ipmmu_mp: iommu@ec670000 {
+ compatible = "renesas,ipmmu-r8a77990";
+ reg = <0 0xec670000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 4>;
+@@ -858,7 +858,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_pv0: mmu@fd800000 {
++ ipmmu_pv0: iommu@fd800000 {
+ compatible = "renesas,ipmmu-r8a77990";
+ reg = <0 0xfd800000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 6>;
+@@ -866,7 +866,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_rt: mmu@ffc80000 {
++ ipmmu_rt: iommu@ffc80000 {
+ compatible = "renesas,ipmmu-r8a77990";
+ reg = <0 0xffc80000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 10>;
+@@ -874,7 +874,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_vc0: mmu@fe6b0000 {
++ ipmmu_vc0: iommu@fe6b0000 {
+ compatible = "renesas,ipmmu-r8a77990";
+ reg = <0 0xfe6b0000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 12>;
+@@ -882,7 +882,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_vi0: mmu@febd0000 {
++ ipmmu_vi0: iommu@febd0000 {
+ compatible = "renesas,ipmmu-r8a77990";
+ reg = <0 0xfebd0000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 14>;
+@@ -890,7 +890,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_vp0: mmu@fe990000 {
++ ipmmu_vp0: iommu@fe990000 {
+ compatible = "renesas,ipmmu-r8a77990";
+ reg = <0 0xfe990000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 16>;
+diff --git a/arch/arm64/boot/dts/renesas/r8a77995.dtsi b/arch/arm64/boot/dts/renesas/r8a77995.dtsi
+index e8d2290fe79d..e5617ec0f49c 100644
+--- a/arch/arm64/boot/dts/renesas/r8a77995.dtsi
++++ b/arch/arm64/boot/dts/renesas/r8a77995.dtsi
+@@ -507,7 +507,7 @@
+ <&ipmmu_ds1 22>, <&ipmmu_ds1 23>;
+ };
+
+- ipmmu_ds0: mmu@e6740000 {
++ ipmmu_ds0: iommu@e6740000 {
+ compatible = "renesas,ipmmu-r8a77995";
+ reg = <0 0xe6740000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 0>;
+@@ -515,7 +515,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_ds1: mmu@e7740000 {
++ ipmmu_ds1: iommu@e7740000 {
+ compatible = "renesas,ipmmu-r8a77995";
+ reg = <0 0xe7740000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 1>;
+@@ -523,7 +523,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_hc: mmu@e6570000 {
++ ipmmu_hc: iommu@e6570000 {
+ compatible = "renesas,ipmmu-r8a77995";
+ reg = <0 0xe6570000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 2>;
+@@ -531,7 +531,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_mm: mmu@e67b0000 {
++ ipmmu_mm: iommu@e67b0000 {
+ compatible = "renesas,ipmmu-r8a77995";
+ reg = <0 0xe67b0000 0 0x1000>;
+ interrupts = <GIC_SPI 196 IRQ_TYPE_LEVEL_HIGH>,
+@@ -540,7 +540,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_mp: mmu@ec670000 {
++ ipmmu_mp: iommu@ec670000 {
+ compatible = "renesas,ipmmu-r8a77995";
+ reg = <0 0xec670000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 4>;
+@@ -548,7 +548,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_pv0: mmu@fd800000 {
++ ipmmu_pv0: iommu@fd800000 {
+ compatible = "renesas,ipmmu-r8a77995";
+ reg = <0 0xfd800000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 6>;
+@@ -556,7 +556,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_rt: mmu@ffc80000 {
++ ipmmu_rt: iommu@ffc80000 {
+ compatible = "renesas,ipmmu-r8a77995";
+ reg = <0 0xffc80000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 10>;
+@@ -564,7 +564,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_vc0: mmu@fe6b0000 {
++ ipmmu_vc0: iommu@fe6b0000 {
+ compatible = "renesas,ipmmu-r8a77995";
+ reg = <0 0xfe6b0000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 12>;
+@@ -572,7 +572,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_vi0: mmu@febd0000 {
++ ipmmu_vi0: iommu@febd0000 {
+ compatible = "renesas,ipmmu-r8a77995";
+ reg = <0 0xfebd0000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 14>;
+@@ -580,7 +580,7 @@
+ #iommu-cells = <1>;
+ };
+
+- ipmmu_vp0: mmu@fe990000 {
++ ipmmu_vp0: iommu@fe990000 {
+ compatible = "renesas,ipmmu-r8a77995";
+ reg = <0 0xfe990000 0 0x1000>;
+ renesas,ipmmu-main = <&ipmmu_mm 16>;
+diff --git a/arch/arm64/kernel/ftrace.c b/arch/arm64/kernel/ftrace.c
+index 8618faa82e6d..86a5cf9bc19a 100644
+--- a/arch/arm64/kernel/ftrace.c
++++ b/arch/arm64/kernel/ftrace.c
+@@ -69,7 +69,8 @@ static struct plt_entry *get_ftrace_plt(struct module *mod, unsigned long addr)
+
+ if (addr == FTRACE_ADDR)
+ return &plt[FTRACE_PLT_IDX];
+- if (addr == FTRACE_REGS_ADDR && IS_ENABLED(CONFIG_FTRACE_WITH_REGS))
++ if (addr == FTRACE_REGS_ADDR &&
++ IS_ENABLED(CONFIG_DYNAMIC_FTRACE_WITH_REGS))
+ return &plt[FTRACE_REGS_PLT_IDX];
+ #endif
+ return NULL;
+diff --git a/arch/arm64/kernel/hw_breakpoint.c b/arch/arm64/kernel/hw_breakpoint.c
+index 0b727edf4104..af234a1e08b7 100644
+--- a/arch/arm64/kernel/hw_breakpoint.c
++++ b/arch/arm64/kernel/hw_breakpoint.c
+@@ -730,6 +730,27 @@ static u64 get_distance_from_watchpoint(unsigned long addr, u64 val,
+ return 0;
+ }
+
++static int watchpoint_report(struct perf_event *wp, unsigned long addr,
++ struct pt_regs *regs)
++{
++ int step = is_default_overflow_handler(wp);
++ struct arch_hw_breakpoint *info = counter_arch_bp(wp);
++
++ info->trigger = addr;
++
++ /*
++ * If we triggered a user watchpoint from a uaccess routine, then
++ * handle the stepping ourselves since userspace really can't help
++ * us with this.
++ */
++ if (!user_mode(regs) && info->ctrl.privilege == AARCH64_BREAKPOINT_EL0)
++ step = 1;
++ else
++ perf_bp_event(wp, regs);
++
++ return step;
++}
++
+ static int watchpoint_handler(unsigned long addr, unsigned int esr,
+ struct pt_regs *regs)
+ {
+@@ -739,7 +760,6 @@ static int watchpoint_handler(unsigned long addr, unsigned int esr,
+ u64 val;
+ struct perf_event *wp, **slots;
+ struct debug_info *debug_info;
+- struct arch_hw_breakpoint *info;
+ struct arch_hw_breakpoint_ctrl ctrl;
+
+ slots = this_cpu_ptr(wp_on_reg);
+@@ -777,25 +797,13 @@ static int watchpoint_handler(unsigned long addr, unsigned int esr,
+ if (dist != 0)
+ continue;
+
+- info = counter_arch_bp(wp);
+- info->trigger = addr;
+- perf_bp_event(wp, regs);
+-
+- /* Do we need to handle the stepping? */
+- if (is_default_overflow_handler(wp))
+- step = 1;
++ step = watchpoint_report(wp, addr, regs);
+ }
+- if (min_dist > 0 && min_dist != -1) {
+- /* No exact match found. */
+- wp = slots[closest_match];
+- info = counter_arch_bp(wp);
+- info->trigger = addr;
+- perf_bp_event(wp, regs);
+
+- /* Do we need to handle the stepping? */
+- if (is_default_overflow_handler(wp))
+- step = 1;
+- }
++ /* No exact match found? */
++ if (min_dist > 0 && min_dist != -1)
++ step = watchpoint_report(slots[closest_match], addr, regs);
++
+ rcu_read_unlock();
+
+ if (!step)
+diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
+index e42727e3568e..3f9010167468 100644
+--- a/arch/arm64/mm/init.c
++++ b/arch/arm64/mm/init.c
+@@ -458,11 +458,6 @@ void __init arm64_memblock_init(void)
+ high_memory = __va(memblock_end_of_DRAM() - 1) + 1;
+
+ dma_contiguous_reserve(arm64_dma32_phys_limit);
+-
+-#ifdef CONFIG_ARM64_4K_PAGES
+- hugetlb_cma_reserve(PUD_SHIFT - PAGE_SHIFT);
+-#endif
+-
+ }
+
+ void __init bootmem_init(void)
+@@ -478,6 +473,16 @@ void __init bootmem_init(void)
+ min_low_pfn = min;
+
+ arm64_numa_init();
++
++ /*
++ * must be done after arm64_numa_init() which calls numa_init() to
++ * initialize node_online_map that gets used in hugetlb_cma_reserve()
++ * while allocating required CMA size across online nodes.
++ */
++#ifdef CONFIG_ARM64_4K_PAGES
++ hugetlb_cma_reserve(PUD_SHIFT - PAGE_SHIFT);
++#endif
++
+ /*
+ * Sparsemem tries to allocate bootmem in memory_present(), so must be
+ * done after the fixed reservations.
+diff --git a/arch/m68k/coldfire/pci.c b/arch/m68k/coldfire/pci.c
+index 62b0eb6cf69a..84eab0f5e00a 100644
+--- a/arch/m68k/coldfire/pci.c
++++ b/arch/m68k/coldfire/pci.c
+@@ -216,8 +216,10 @@ static int __init mcf_pci_init(void)
+
+ /* Keep a virtual mapping to IO/config space active */
+ iospace = (unsigned long) ioremap(PCI_IO_PA, PCI_IO_SIZE);
+- if (iospace == 0)
++ if (iospace == 0) {
++ pci_free_host_bridge(bridge);
+ return -ENODEV;
++ }
+ pr_info("Coldfire: PCI IO/config window mapped to 0x%x\n",
+ (u32) iospace);
+
+diff --git a/arch/openrisc/kernel/entry.S b/arch/openrisc/kernel/entry.S
+index e4a78571f883..c6481cfc5220 100644
+--- a/arch/openrisc/kernel/entry.S
++++ b/arch/openrisc/kernel/entry.S
+@@ -1166,13 +1166,13 @@ ENTRY(__sys_clone)
+ l.movhi r29,hi(sys_clone)
+ l.ori r29,r29,lo(sys_clone)
+ l.j _fork_save_extra_regs_and_call
+- l.addi r7,r1,0
++ l.nop
+
+ ENTRY(__sys_fork)
+ l.movhi r29,hi(sys_fork)
+ l.ori r29,r29,lo(sys_fork)
+ l.j _fork_save_extra_regs_and_call
+- l.addi r3,r1,0
++ l.nop
+
+ ENTRY(sys_rt_sigreturn)
+ l.jal _sys_rt_sigreturn
+diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
+index 62aca9efbbbe..310957b988e3 100644
+--- a/arch/powerpc/Kconfig
++++ b/arch/powerpc/Kconfig
+@@ -773,6 +773,7 @@ config THREAD_SHIFT
+ range 13 15
+ default "15" if PPC_256K_PAGES
+ default "14" if PPC64
++ default "14" if KASAN
+ default "13"
+ help
+ Used to define the stack size. The default is almost always what you
+diff --git a/arch/powerpc/configs/adder875_defconfig b/arch/powerpc/configs/adder875_defconfig
+index f55e23cb176c..5326bc739279 100644
+--- a/arch/powerpc/configs/adder875_defconfig
++++ b/arch/powerpc/configs/adder875_defconfig
+@@ -10,7 +10,6 @@ CONFIG_EXPERT=y
+ # CONFIG_BLK_DEV_BSG is not set
+ CONFIG_PARTITION_ADVANCED=y
+ CONFIG_PPC_ADDER875=y
+-CONFIG_8xx_COPYBACK=y
+ CONFIG_GEN_RTC=y
+ CONFIG_HZ_1000=y
+ # CONFIG_SECCOMP is not set
+diff --git a/arch/powerpc/configs/ep88xc_defconfig b/arch/powerpc/configs/ep88xc_defconfig
+index 0e2e5e81a359..f5c3e72da719 100644
+--- a/arch/powerpc/configs/ep88xc_defconfig
++++ b/arch/powerpc/configs/ep88xc_defconfig
+@@ -12,7 +12,6 @@ CONFIG_EXPERT=y
+ # CONFIG_BLK_DEV_BSG is not set
+ CONFIG_PARTITION_ADVANCED=y
+ CONFIG_PPC_EP88XC=y
+-CONFIG_8xx_COPYBACK=y
+ CONFIG_GEN_RTC=y
+ CONFIG_HZ_100=y
+ # CONFIG_SECCOMP is not set
+diff --git a/arch/powerpc/configs/mpc866_ads_defconfig b/arch/powerpc/configs/mpc866_ads_defconfig
+index 5320735395e7..5c56d36cdfc5 100644
+--- a/arch/powerpc/configs/mpc866_ads_defconfig
++++ b/arch/powerpc/configs/mpc866_ads_defconfig
+@@ -12,7 +12,6 @@ CONFIG_EXPERT=y
+ # CONFIG_BLK_DEV_BSG is not set
+ CONFIG_PARTITION_ADVANCED=y
+ CONFIG_MPC86XADS=y
+-CONFIG_8xx_COPYBACK=y
+ CONFIG_GEN_RTC=y
+ CONFIG_HZ_1000=y
+ CONFIG_MATH_EMULATION=y
+diff --git a/arch/powerpc/configs/mpc885_ads_defconfig b/arch/powerpc/configs/mpc885_ads_defconfig
+index 82a008c04eae..949ff9ccda5e 100644
+--- a/arch/powerpc/configs/mpc885_ads_defconfig
++++ b/arch/powerpc/configs/mpc885_ads_defconfig
+@@ -11,7 +11,6 @@ CONFIG_EXPERT=y
+ # CONFIG_VM_EVENT_COUNTERS is not set
+ # CONFIG_BLK_DEV_BSG is not set
+ CONFIG_PARTITION_ADVANCED=y
+-CONFIG_8xx_COPYBACK=y
+ CONFIG_GEN_RTC=y
+ CONFIG_HZ_100=y
+ # CONFIG_SECCOMP is not set
+diff --git a/arch/powerpc/configs/tqm8xx_defconfig b/arch/powerpc/configs/tqm8xx_defconfig
+index eda8bfb2d0a3..77857d513022 100644
+--- a/arch/powerpc/configs/tqm8xx_defconfig
++++ b/arch/powerpc/configs/tqm8xx_defconfig
+@@ -15,7 +15,6 @@ CONFIG_MODULE_SRCVERSION_ALL=y
+ # CONFIG_BLK_DEV_BSG is not set
+ CONFIG_PARTITION_ADVANCED=y
+ CONFIG_TQM8XX=y
+-CONFIG_8xx_COPYBACK=y
+ # CONFIG_8xx_CPU15 is not set
+ CONFIG_GEN_RTC=y
+ CONFIG_HZ_100=y
+diff --git a/arch/powerpc/include/asm/book3s/64/kup-radix.h b/arch/powerpc/include/asm/book3s/64/kup-radix.h
+index 3bcef989a35d..101d60f16d46 100644
+--- a/arch/powerpc/include/asm/book3s/64/kup-radix.h
++++ b/arch/powerpc/include/asm/book3s/64/kup-radix.h
+@@ -16,7 +16,9 @@
+ #ifdef CONFIG_PPC_KUAP
+ BEGIN_MMU_FTR_SECTION_NESTED(67)
+ ld \gpr, STACK_REGS_KUAP(r1)
++ isync
+ mtspr SPRN_AMR, \gpr
++ /* No isync required, see kuap_restore_amr() */
+ END_MMU_FTR_SECTION_NESTED_IFSET(MMU_FTR_RADIX_KUAP, 67)
+ #endif
+ .endm
+@@ -62,8 +64,15 @@
+
+ static inline void kuap_restore_amr(struct pt_regs *regs)
+ {
+- if (mmu_has_feature(MMU_FTR_RADIX_KUAP))
++ if (mmu_has_feature(MMU_FTR_RADIX_KUAP)) {
++ isync();
+ mtspr(SPRN_AMR, regs->kuap);
++ /*
++ * No isync required here because we are about to RFI back to
++ * previous context before any user accesses would be made,
++ * which is a CSI.
++ */
++ }
+ }
+
+ static inline void kuap_check_amr(void)
+diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h
+index 368b136517e0..2838b98bc6df 100644
+--- a/arch/powerpc/include/asm/book3s/64/pgtable.h
++++ b/arch/powerpc/include/asm/book3s/64/pgtable.h
+@@ -998,10 +998,25 @@ extern struct page *pgd_page(pgd_t pgd);
+ #define pud_page_vaddr(pud) __va(pud_val(pud) & ~PUD_MASKED_BITS)
+ #define pgd_page_vaddr(pgd) __va(pgd_val(pgd) & ~PGD_MASKED_BITS)
+
+-#define pgd_index(address) (((address) >> (PGDIR_SHIFT)) & (PTRS_PER_PGD - 1))
+-#define pud_index(address) (((address) >> (PUD_SHIFT)) & (PTRS_PER_PUD - 1))
+-#define pmd_index(address) (((address) >> (PMD_SHIFT)) & (PTRS_PER_PMD - 1))
+-#define pte_index(address) (((address) >> (PAGE_SHIFT)) & (PTRS_PER_PTE - 1))
++static inline unsigned long pgd_index(unsigned long address)
++{
++ return (address >> PGDIR_SHIFT) & (PTRS_PER_PGD - 1);
++}
++
++static inline unsigned long pud_index(unsigned long address)
++{
++ return (address >> PUD_SHIFT) & (PTRS_PER_PUD - 1);
++}
++
++static inline unsigned long pmd_index(unsigned long address)
++{
++ return (address >> PMD_SHIFT) & (PTRS_PER_PMD - 1);
++}
++
++static inline unsigned long pte_index(unsigned long address)
++{
++ return (address >> PAGE_SHIFT) & (PTRS_PER_PTE - 1);
++}
+
+ /*
+ * Find an entry in a page-table-directory. We combine the address region
+diff --git a/arch/powerpc/include/asm/nohash/32/mmu-8xx.h b/arch/powerpc/include/asm/nohash/32/mmu-8xx.h
+index 76af5b0cb16e..26b7cee34dfe 100644
+--- a/arch/powerpc/include/asm/nohash/32/mmu-8xx.h
++++ b/arch/powerpc/include/asm/nohash/32/mmu-8xx.h
+@@ -19,7 +19,6 @@
+ #define MI_RSV4I 0x08000000 /* Reserve 4 TLB entries */
+ #define MI_PPCS 0x02000000 /* Use MI_RPN prob/priv state */
+ #define MI_IDXMASK 0x00001f00 /* TLB index to be loaded */
+-#define MI_RESETVAL 0x00000000 /* Value of register at reset */
+
+ /* These are the Ks and Kp from the PowerPC books. For proper operation,
+ * Ks = 0, Kp = 1.
+@@ -95,7 +94,6 @@
+ #define MD_TWAM 0x04000000 /* Use 4K page hardware assist */
+ #define MD_PPCS 0x02000000 /* Use MI_RPN prob/priv state */
+ #define MD_IDXMASK 0x00001f00 /* TLB index to be loaded */
+-#define MD_RESETVAL 0x04000000 /* Value of register at reset */
+
+ #define SPRN_M_CASID 793 /* Address space ID (context) to match */
+ #define MC_ASIDMASK 0x0000000f /* Bits used for ASID value */
+diff --git a/arch/powerpc/include/asm/processor.h b/arch/powerpc/include/asm/processor.h
+index eedcbfb9a6ff..c220cb9eccad 100644
+--- a/arch/powerpc/include/asm/processor.h
++++ b/arch/powerpc/include/asm/processor.h
+@@ -301,7 +301,6 @@ struct thread_struct {
+ #else
+ #define INIT_THREAD { \
+ .ksp = INIT_SP, \
+- .regs = (struct pt_regs *)INIT_SP - 1, /* XXX bogus, I think */ \
+ .addr_limit = KERNEL_DS, \
+ .fpexc_mode = 0, \
+ .fscr = FSCR_TAR | FSCR_EBB \
+diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S
+index ebeebab74b56..d9ddce40bed8 100644
+--- a/arch/powerpc/kernel/exceptions-64s.S
++++ b/arch/powerpc/kernel/exceptions-64s.S
+@@ -270,7 +270,7 @@ BEGIN_FTR_SECTION
+ END_FTR_SECTION_IFSET(CPU_FTR_CFAR)
+ .endif
+
+- ld r10,PACA_EXGEN+EX_CTR(r13)
++ ld r10,IAREA+EX_CTR(r13)
+ mtctr r10
+ BEGIN_FTR_SECTION
+ ld r10,IAREA+EX_PPR(r13)
+@@ -298,7 +298,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_HAS_PPR)
+
+ .if IKVM_SKIP
+ 89: mtocrf 0x80,r9
+- ld r10,PACA_EXGEN+EX_CTR(r13)
++ ld r10,IAREA+EX_CTR(r13)
+ mtctr r10
+ ld r9,IAREA+EX_R9(r13)
+ ld r10,IAREA+EX_R10(r13)
+@@ -1117,11 +1117,30 @@ END_FTR_SECTION_IFSET(CPU_FTR_HVMODE)
+ li r10,MSR_RI
+ mtmsrd r10,1
+
++ /*
++ * Set IRQS_ALL_DISABLED and save PACAIRQHAPPENED (see
++ * system_reset_common)
++ */
++ li r10,IRQS_ALL_DISABLED
++ stb r10,PACAIRQSOFTMASK(r13)
++ lbz r10,PACAIRQHAPPENED(r13)
++ std r10,RESULT(r1)
++ ori r10,r10,PACA_IRQ_HARD_DIS
++ stb r10,PACAIRQHAPPENED(r13)
++
+ addi r3,r1,STACK_FRAME_OVERHEAD
+ bl machine_check_early
+ std r3,RESULT(r1) /* Save result */
+ ld r12,_MSR(r1)
+
++ /*
++ * Restore soft mask settings.
++ */
++ ld r10,RESULT(r1)
++ stb r10,PACAIRQHAPPENED(r13)
++ ld r10,SOFTE(r1)
++ stb r10,PACAIRQSOFTMASK(r13)
++
+ #ifdef CONFIG_PPC_P7_NAP
+ /*
+ * Check if thread was in power saving mode. We come here when any
+@@ -1225,17 +1244,19 @@ EXC_COMMON_BEGIN(machine_check_idle_common)
+ bl machine_check_queue_event
+
+ /*
+- * We have not used any non-volatile GPRs here, and as a rule
+- * most exception code including machine check does not.
+- * Therefore PACA_NAPSTATELOST does not need to be set. Idle
+- * wakeup will restore volatile registers.
++ * GPR-loss wakeups are relatively straightforward, because the
++ * idle sleep code has saved all non-volatile registers on its
++ * own stack, and r1 in PACAR1.
+ *
+- * Load the original SRR1 into r3 for pnv_powersave_wakeup_mce.
++ * For no-loss wakeups the r1 and lr registers used by the
++ * early machine check handler have to be restored first. r2 is
++ * the kernel TOC, so no need to restore it.
+ *
+ * Then decrement MCE nesting after finishing with the stack.
+ */
+ ld r3,_MSR(r1)
+ ld r4,_LINK(r1)
++ ld r1,GPR1(r1)
+
+ lhz r11,PACA_IN_MCE(r13)
+ subi r11,r11,1
+@@ -1244,7 +1265,7 @@ EXC_COMMON_BEGIN(machine_check_idle_common)
+ mtlr r4
+ rlwinm r10,r3,47-31,30,31
+ cmpwi cr1,r10,2
+- bltlr cr1 /* no state loss, return to idle caller */
++ bltlr cr1 /* no state loss, return to idle caller with r3=SRR1 */
+ b idle_return_gpr_loss
+ #endif
+
+diff --git a/arch/powerpc/kernel/head_64.S b/arch/powerpc/kernel/head_64.S
+index ddfbd02140d9..0e05a9a47a4b 100644
+--- a/arch/powerpc/kernel/head_64.S
++++ b/arch/powerpc/kernel/head_64.S
+@@ -947,15 +947,8 @@ start_here_multiplatform:
+ std r0,0(r4)
+ #endif
+
+- /* The following gets the stack set up with the regs */
+- /* pointing to the real addr of the kernel stack. This is */
+- /* all done to support the C function call below which sets */
+- /* up the htab. This is done because we have relocated the */
+- /* kernel but are still running in real mode. */
+-
+- LOAD_REG_ADDR(r3,init_thread_union)
+-
+ /* set up a stack pointer */
++ LOAD_REG_ADDR(r3,init_thread_union)
+ LOAD_REG_IMMEDIATE(r1,THREAD_SIZE)
+ add r1,r3,r1
+ li r0,0
+diff --git a/arch/powerpc/kernel/head_8xx.S b/arch/powerpc/kernel/head_8xx.S
+index 073a651787df..905205c79a25 100644
+--- a/arch/powerpc/kernel/head_8xx.S
++++ b/arch/powerpc/kernel/head_8xx.S
+@@ -779,10 +779,7 @@ start_here:
+ initial_mmu:
+ li r8, 0
+ mtspr SPRN_MI_CTR, r8 /* remove PINNED ITLB entries */
+- lis r10, MD_RESETVAL@h
+-#ifndef CONFIG_8xx_COPYBACK
+- oris r10, r10, MD_WTDEF@h
+-#endif
++ lis r10, MD_TWAM@h
+ mtspr SPRN_MD_CTR, r10 /* remove PINNED DTLB entries */
+
+ tlbia /* Invalidate all TLB entries */
+@@ -857,17 +854,7 @@ initial_mmu:
+ mtspr SPRN_DC_CST, r8
+ lis r8, IDC_ENABLE@h
+ mtspr SPRN_IC_CST, r8
+-#ifdef CONFIG_8xx_COPYBACK
+- mtspr SPRN_DC_CST, r8
+-#else
+- /* For a debug option, I left this here to easily enable
+- * the write through cache mode
+- */
+- lis r8, DC_SFWT@h
+ mtspr SPRN_DC_CST, r8
+- lis r8, IDC_ENABLE@h
+- mtspr SPRN_DC_CST, r8
+-#endif
+ /* Disable debug mode entry on breakpoints */
+ mfspr r8, SPRN_DER
+ #ifdef CONFIG_PERF_EVENTS
+diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
+index 9c21288f8645..774476be591b 100644
+--- a/arch/powerpc/kernel/process.c
++++ b/arch/powerpc/kernel/process.c
+@@ -1241,29 +1241,31 @@ struct task_struct *__switch_to(struct task_struct *prev,
+ static void show_instructions(struct pt_regs *regs)
+ {
+ int i;
++ unsigned long nip = regs->nip;
+ unsigned long pc = regs->nip - (NR_INSN_TO_PRINT * 3 / 4 * sizeof(int));
+
+ printk("Instruction dump:");
+
++ /*
++ * If we were executing with the MMU off for instructions, adjust pc
++ * rather than printing XXXXXXXX.
++ */
++ if (!IS_ENABLED(CONFIG_BOOKE) && !(regs->msr & MSR_IR)) {
++ pc = (unsigned long)phys_to_virt(pc);
++ nip = (unsigned long)phys_to_virt(regs->nip);
++ }
++
+ for (i = 0; i < NR_INSN_TO_PRINT; i++) {
+ int instr;
+
+ if (!(i % 8))
+ pr_cont("\n");
+
+-#if !defined(CONFIG_BOOKE)
+- /* If executing with the IMMU off, adjust pc rather
+- * than print XXXXXXXX.
+- */
+- if (!(regs->msr & MSR_IR))
+- pc = (unsigned long)phys_to_virt(pc);
+-#endif
+-
+ if (!__kernel_text_address(pc) ||
+ probe_kernel_address((const void *)pc, instr)) {
+ pr_cont("XXXXXXXX ");
+ } else {
+- if (regs->nip == pc)
++ if (nip == pc)
+ pr_cont("<%08x> ", instr);
+ else
+ pr_cont("%08x ", instr);
+diff --git a/arch/powerpc/kexec/core.c b/arch/powerpc/kexec/core.c
+index 078fe3d76feb..56da5eb2b923 100644
+--- a/arch/powerpc/kexec/core.c
++++ b/arch/powerpc/kexec/core.c
+@@ -115,11 +115,12 @@ void machine_kexec(struct kimage *image)
+
+ void __init reserve_crashkernel(void)
+ {
+- unsigned long long crash_size, crash_base;
++ unsigned long long crash_size, crash_base, total_mem_sz;
+ int ret;
+
++ total_mem_sz = memory_limit ? memory_limit : memblock_phys_mem_size();
+ /* use common parsing */
+- ret = parse_crashkernel(boot_command_line, memblock_phys_mem_size(),
++ ret = parse_crashkernel(boot_command_line, total_mem_sz,
+ &crash_size, &crash_base);
+ if (ret == 0 && crash_size > 0) {
+ crashk_res.start = crash_base;
+@@ -178,6 +179,7 @@ void __init reserve_crashkernel(void)
+ /* Crash kernel trumps memory limit */
+ if (memory_limit && memory_limit <= crashk_res.end) {
+ memory_limit = crashk_res.end + 1;
++ total_mem_sz = memory_limit;
+ printk("Adjusted memory limit for crashkernel, now 0x%llx\n",
+ memory_limit);
+ }
+@@ -186,7 +188,7 @@ void __init reserve_crashkernel(void)
+ "for crashkernel (System RAM: %ldMB)\n",
+ (unsigned long)(crash_size >> 20),
+ (unsigned long)(crashk_res.start >> 20),
+- (unsigned long)(memblock_phys_mem_size() >> 20));
++ (unsigned long)(total_mem_sz >> 20));
+
+ if (!memblock_is_region_memory(crashk_res.start, crash_size) ||
+ memblock_reserve(crashk_res.start, crash_size)) {
+diff --git a/arch/powerpc/kvm/book3s_64_mmu_radix.c b/arch/powerpc/kvm/book3s_64_mmu_radix.c
+index aa12cd4078b3..bc6c1aa3d0e9 100644
+--- a/arch/powerpc/kvm/book3s_64_mmu_radix.c
++++ b/arch/powerpc/kvm/book3s_64_mmu_radix.c
+@@ -353,7 +353,13 @@ static struct kmem_cache *kvm_pmd_cache;
+
+ static pte_t *kvmppc_pte_alloc(void)
+ {
+- return kmem_cache_alloc(kvm_pte_cache, GFP_KERNEL);
++ pte_t *pte;
++
++ pte = kmem_cache_alloc(kvm_pte_cache, GFP_KERNEL);
++ /* pmd_populate() will only reference _pa(pte). */
++ kmemleak_ignore(pte);
++
++ return pte;
+ }
+
+ static void kvmppc_pte_free(pte_t *ptep)
+@@ -363,7 +369,13 @@ static void kvmppc_pte_free(pte_t *ptep)
+
+ static pmd_t *kvmppc_pmd_alloc(void)
+ {
+- return kmem_cache_alloc(kvm_pmd_cache, GFP_KERNEL);
++ pmd_t *pmd;
++
++ pmd = kmem_cache_alloc(kvm_pmd_cache, GFP_KERNEL);
++ /* pud_populate() will only reference _pa(pmd). */
++ kmemleak_ignore(pmd);
++
++ return pmd;
+ }
+
+ static void kvmppc_pmd_free(pmd_t *pmdp)
+diff --git a/arch/powerpc/kvm/book3s_64_vio.c b/arch/powerpc/kvm/book3s_64_vio.c
+index 50555ad1db93..1a529df0ab44 100644
+--- a/arch/powerpc/kvm/book3s_64_vio.c
++++ b/arch/powerpc/kvm/book3s_64_vio.c
+@@ -73,6 +73,7 @@ extern void kvm_spapr_tce_release_iommu_group(struct kvm *kvm,
+ struct kvmppc_spapr_tce_iommu_table *stit, *tmp;
+ struct iommu_table_group *table_group = NULL;
+
++ rcu_read_lock();
+ list_for_each_entry_rcu(stt, &kvm->arch.spapr_tce_tables, list) {
+
+ table_group = iommu_group_get_iommudata(grp);
+@@ -87,7 +88,9 @@ extern void kvm_spapr_tce_release_iommu_group(struct kvm *kvm,
+ kref_put(&stit->kref, kvm_spapr_tce_liobn_put);
+ }
+ }
++ cond_resched_rcu();
+ }
++ rcu_read_unlock();
+ }
+
+ extern long kvm_spapr_tce_attach_iommu_group(struct kvm *kvm, int tablefd,
+@@ -105,12 +108,14 @@ extern long kvm_spapr_tce_attach_iommu_group(struct kvm *kvm, int tablefd,
+ if (!f.file)
+ return -EBADF;
+
++ rcu_read_lock();
+ list_for_each_entry_rcu(stt, &kvm->arch.spapr_tce_tables, list) {
+ if (stt == f.file->private_data) {
+ found = true;
+ break;
+ }
+ }
++ rcu_read_unlock();
+
+ fdput(f);
+
+@@ -143,6 +148,7 @@ extern long kvm_spapr_tce_attach_iommu_group(struct kvm *kvm, int tablefd,
+ if (!tbl)
+ return -EINVAL;
+
++ rcu_read_lock();
+ list_for_each_entry_rcu(stit, &stt->iommu_tables, next) {
+ if (tbl != stit->tbl)
+ continue;
+@@ -150,14 +156,17 @@ extern long kvm_spapr_tce_attach_iommu_group(struct kvm *kvm, int tablefd,
+ if (!kref_get_unless_zero(&stit->kref)) {
+ /* stit is being destroyed */
+ iommu_tce_table_put(tbl);
++ rcu_read_unlock();
+ return -ENOTTY;
+ }
+ /*
+ * The table is already known to this KVM, we just increased
+ * its KVM reference counter and can return.
+ */
++ rcu_read_unlock();
+ return 0;
+ }
++ rcu_read_unlock();
+
+ stit = kzalloc(sizeof(*stit), GFP_KERNEL);
+ if (!stit) {
+@@ -365,18 +374,19 @@ static long kvmppc_tce_validate(struct kvmppc_spapr_tce_table *stt,
+ if (kvmppc_tce_to_ua(stt->kvm, tce, &ua))
+ return H_TOO_HARD;
+
++ rcu_read_lock();
+ list_for_each_entry_rcu(stit, &stt->iommu_tables, next) {
+ unsigned long hpa = 0;
+ struct mm_iommu_table_group_mem_t *mem;
+ long shift = stit->tbl->it_page_shift;
+
+ mem = mm_iommu_lookup(stt->kvm->mm, ua, 1ULL << shift);
+- if (!mem)
+- return H_TOO_HARD;
+-
+- if (mm_iommu_ua_to_hpa(mem, ua, shift, &hpa))
++ if (!mem || mm_iommu_ua_to_hpa(mem, ua, shift, &hpa)) {
++ rcu_read_unlock();
+ return H_TOO_HARD;
++ }
+ }
++ rcu_read_unlock();
+
+ return H_SUCCESS;
+ }
+diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
+index 93493f0cbfe8..ee581cde4878 100644
+--- a/arch/powerpc/kvm/book3s_hv.c
++++ b/arch/powerpc/kvm/book3s_hv.c
+@@ -1099,9 +1099,14 @@ int kvmppc_pseries_do_hcall(struct kvm_vcpu *vcpu)
+ ret = kvmppc_h_svm_init_done(vcpu->kvm);
+ break;
+ case H_SVM_INIT_ABORT:
+- ret = H_UNSUPPORTED;
+- if (kvmppc_get_srr1(vcpu) & MSR_S)
+- ret = kvmppc_h_svm_init_abort(vcpu->kvm);
++ /*
++ * Even if that call is made by the Ultravisor, the SSR1 value
++ * is the guest context one, with the secure bit clear as it has
++ * not yet been secured. So we can't check it here.
++ * Instead the kvm->arch.secure_guest flag is checked inside
++ * kvmppc_h_svm_init_abort().
++ */
++ ret = kvmppc_h_svm_init_abort(vcpu->kvm);
+ break;
+
+ default:
+diff --git a/arch/powerpc/mm/book3s32/mmu.c b/arch/powerpc/mm/book3s32/mmu.c
+index 39ba53ca5bb5..a9b2cbc74797 100644
+--- a/arch/powerpc/mm/book3s32/mmu.c
++++ b/arch/powerpc/mm/book3s32/mmu.c
+@@ -187,6 +187,7 @@ void mmu_mark_initmem_nx(void)
+ int i;
+ unsigned long base = (unsigned long)_stext - PAGE_OFFSET;
+ unsigned long top = (unsigned long)_etext - PAGE_OFFSET;
++ unsigned long border = (unsigned long)__init_begin - PAGE_OFFSET;
+ unsigned long size;
+
+ if (IS_ENABLED(CONFIG_PPC_BOOK3S_601))
+@@ -201,9 +202,10 @@ void mmu_mark_initmem_nx(void)
+ size = block_size(base, top);
+ size = max(size, 128UL << 10);
+ if ((top - base) > size) {
+- if (strict_kernel_rwx_enabled())
+- pr_warn("Kernel _etext not properly aligned\n");
+ size <<= 1;
++ if (strict_kernel_rwx_enabled() && base + size > border)
++ pr_warn("Some RW data is getting mapped X. "
++ "Adjust CONFIG_DATA_SHIFT to avoid that.\n");
+ }
+ setibat(i++, PAGE_OFFSET + base, base, size, PAGE_KERNEL_TEXT);
+ base += size;
+diff --git a/arch/powerpc/mm/book3s64/radix_tlb.c b/arch/powerpc/mm/book3s64/radix_tlb.c
+index 758ade2c2b6e..b5cc9b23cf02 100644
+--- a/arch/powerpc/mm/book3s64/radix_tlb.c
++++ b/arch/powerpc/mm/book3s64/radix_tlb.c
+@@ -884,9 +884,7 @@ is_local:
+ if (IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE)) {
+ hstart = (start + PMD_SIZE - 1) & PMD_MASK;
+ hend = end & PMD_MASK;
+- if (hstart == hend)
+- hflush = false;
+- else
++ if (hstart < hend)
+ hflush = true;
+ }
+
+diff --git a/arch/powerpc/mm/kasan/kasan_init_32.c b/arch/powerpc/mm/kasan/kasan_init_32.c
+index 59e49c0e8154..b7c287adfd59 100644
+--- a/arch/powerpc/mm/kasan/kasan_init_32.c
++++ b/arch/powerpc/mm/kasan/kasan_init_32.c
+@@ -76,15 +76,14 @@ static int __init kasan_init_region(void *start, size_t size)
+ return ret;
+
+ block = memblock_alloc(k_end - k_start, PAGE_SIZE);
++ if (!block)
++ return -ENOMEM;
+
+ for (k_cur = k_start & PAGE_MASK; k_cur < k_end; k_cur += PAGE_SIZE) {
+ pmd_t *pmd = pmd_ptr_k(k_cur);
+ void *va = block + k_cur - k_start;
+ pte_t pte = pfn_pte(PHYS_PFN(__pa(va)), PAGE_KERNEL);
+
+- if (!va)
+- return -ENOMEM;
+-
+ __set_pte_at(&init_mm, k_cur, pte_offset_kernel(pmd, k_cur), pte, 0);
+ }
+ flush_tlb_kernel_range(k_start, k_end);
+diff --git a/arch/powerpc/mm/ptdump/shared.c b/arch/powerpc/mm/ptdump/shared.c
+index f7ed2f187cb0..784f8df17f73 100644
+--- a/arch/powerpc/mm/ptdump/shared.c
++++ b/arch/powerpc/mm/ptdump/shared.c
+@@ -30,6 +30,11 @@ static const struct flag_info flag_array[] = {
+ .val = _PAGE_PRESENT,
+ .set = "present",
+ .clear = " ",
++ }, {
++ .mask = _PAGE_COHERENT,
++ .val = _PAGE_COHERENT,
++ .set = "coherent",
++ .clear = " ",
+ }, {
+ .mask = _PAGE_GUARDED,
+ .val = _PAGE_GUARDED,
+diff --git a/arch/powerpc/perf/hv-24x7.c b/arch/powerpc/perf/hv-24x7.c
+index 573e0b309c0c..48e8f4b17b91 100644
+--- a/arch/powerpc/perf/hv-24x7.c
++++ b/arch/powerpc/perf/hv-24x7.c
+@@ -1400,16 +1400,6 @@ static void h_24x7_event_read(struct perf_event *event)
+ h24x7hw = &get_cpu_var(hv_24x7_hw);
+ h24x7hw->events[i] = event;
+ put_cpu_var(h24x7hw);
+- /*
+- * Clear the event count so we can compute the _change_
+- * in the 24x7 raw counter value at the end of the txn.
+- *
+- * Note that we could alternatively read the 24x7 value
+- * now and save its value in event->hw.prev_count. But
+- * that would require issuing a hcall, which would then
+- * defeat the purpose of using the txn interface.
+- */
+- local64_set(&event->count, 0);
+ }
+
+ put_cpu_var(hv_24x7_reqb);
+diff --git a/arch/powerpc/platforms/4xx/pci.c b/arch/powerpc/platforms/4xx/pci.c
+index e6e2adcc7b64..c13d64c3b019 100644
+--- a/arch/powerpc/platforms/4xx/pci.c
++++ b/arch/powerpc/platforms/4xx/pci.c
+@@ -1242,7 +1242,7 @@ static void __init ppc460sx_pciex_check_link(struct ppc4xx_pciex_port *port)
+ if (mbase == NULL) {
+ printk(KERN_ERR "%pOF: Can't map internal config space !",
+ port->node);
+- goto done;
++ return;
+ }
+
+ while (attempt && (0 == (in_le32(mbase + PECFG_460SX_DLLSTA)
+@@ -1252,9 +1252,7 @@ static void __init ppc460sx_pciex_check_link(struct ppc4xx_pciex_port *port)
+ }
+ if (attempt)
+ port->link = 1;
+-done:
+ iounmap(mbase);
+-
+ }
+
+ static struct ppc4xx_pciex_hwops ppc460sx_pcie_hwops __initdata = {
+diff --git a/arch/powerpc/platforms/8xx/Kconfig b/arch/powerpc/platforms/8xx/Kconfig
+index e0fe670f06f6..b37de62d7e7f 100644
+--- a/arch/powerpc/platforms/8xx/Kconfig
++++ b/arch/powerpc/platforms/8xx/Kconfig
+@@ -98,15 +98,6 @@ menu "MPC8xx CPM Options"
+ # 8xx specific questions.
+ comment "Generic MPC8xx Options"
+
+-config 8xx_COPYBACK
+- bool "Copy-Back Data Cache (else Writethrough)"
+- help
+- Saying Y here will cause the cache on an MPC8xx processor to be used
+- in Copy-Back mode. If you say N here, it is used in Writethrough
+- mode.
+-
+- If in doubt, say Y here.
+-
+ config 8xx_GPIO
+ bool "GPIO API Support"
+ select GPIOLIB
+diff --git a/arch/powerpc/platforms/powernv/opal.c b/arch/powerpc/platforms/powernv/opal.c
+index 2b3dfd0b6cdd..d95954ad4c0a 100644
+--- a/arch/powerpc/platforms/powernv/opal.c
++++ b/arch/powerpc/platforms/powernv/opal.c
+@@ -811,6 +811,10 @@ static int opal_add_one_export(struct kobject *parent, const char *export_name,
+ goto out;
+
+ attr = kzalloc(sizeof(*attr), GFP_KERNEL);
++ if (!attr) {
++ rc = -ENOMEM;
++ goto out;
++ }
+ name = kstrdup(export_name, GFP_KERNEL);
+ if (!name) {
+ rc = -ENOMEM;
+diff --git a/arch/powerpc/platforms/ps3/mm.c b/arch/powerpc/platforms/ps3/mm.c
+index 423be34f0f5f..f42fe4e86ce5 100644
+--- a/arch/powerpc/platforms/ps3/mm.c
++++ b/arch/powerpc/platforms/ps3/mm.c
+@@ -200,13 +200,14 @@ void ps3_mm_vas_destroy(void)
+ {
+ int result;
+
+- DBG("%s:%d: map.vas_id = %llu\n", __func__, __LINE__, map.vas_id);
+-
+ if (map.vas_id) {
+ result = lv1_select_virtual_address_space(0);
+- BUG_ON(result);
+- result = lv1_destruct_virtual_address_space(map.vas_id);
+- BUG_ON(result);
++ result += lv1_destruct_virtual_address_space(map.vas_id);
++
++ if (result) {
++ lv1_panic(0);
++ }
++
+ map.vas_id = 0;
+ }
+ }
+@@ -304,19 +305,20 @@ static void ps3_mm_region_destroy(struct mem_region *r)
+ int result;
+
+ if (!r->destroy) {
+- pr_info("%s:%d: Not destroying high region: %llxh %llxh\n",
+- __func__, __LINE__, r->base, r->size);
+ return;
+ }
+
+- DBG("%s:%d: r->base = %llxh\n", __func__, __LINE__, r->base);
+-
+ if (r->base) {
+ result = lv1_release_memory(r->base);
+- BUG_ON(result);
++
++ if (result) {
++ lv1_panic(0);
++ }
++
+ r->size = r->base = r->offset = 0;
+ map.total = map.rm.size;
+ }
++
+ ps3_mm_set_repository_highmem(NULL);
+ }
+
+diff --git a/arch/powerpc/platforms/pseries/ras.c b/arch/powerpc/platforms/pseries/ras.c
+index 1d1da639b8b7..16ba5c542e55 100644
+--- a/arch/powerpc/platforms/pseries/ras.c
++++ b/arch/powerpc/platforms/pseries/ras.c
+@@ -395,10 +395,11 @@ static irqreturn_t ras_error_interrupt(int irq, void *dev_id)
+ /*
+ * Some versions of FWNMI place the buffer inside the 4kB page starting at
+ * 0x7000. Other versions place it inside the rtas buffer. We check both.
++ * Minimum size of the buffer is 16 bytes.
+ */
+ #define VALID_FWNMI_BUFFER(A) \
+- ((((A) >= 0x7000) && ((A) < 0x7ff0)) || \
+- (((A) >= rtas.base) && ((A) < (rtas.base + rtas.size - 16))))
++ ((((A) >= 0x7000) && ((A) <= 0x8000 - 16)) || \
++ (((A) >= rtas.base) && ((A) <= (rtas.base + rtas.size - 16))))
+
+ static inline struct rtas_error_log *fwnmi_get_errlog(void)
+ {
+diff --git a/arch/s390/Kconfig b/arch/s390/Kconfig
+index 2167bce993ff..ae01be202204 100644
+--- a/arch/s390/Kconfig
++++ b/arch/s390/Kconfig
+@@ -462,6 +462,7 @@ config NUMA
+
+ config NODES_SHIFT
+ int
++ depends on NEED_MULTIPLE_NODES
+ default "1"
+
+ config SCHED_SMT
+diff --git a/arch/s390/include/asm/syscall.h b/arch/s390/include/asm/syscall.h
+index f073292e9fdb..d9d5de0f67ff 100644
+--- a/arch/s390/include/asm/syscall.h
++++ b/arch/s390/include/asm/syscall.h
+@@ -33,7 +33,17 @@ static inline void syscall_rollback(struct task_struct *task,
+ static inline long syscall_get_error(struct task_struct *task,
+ struct pt_regs *regs)
+ {
+- return IS_ERR_VALUE(regs->gprs[2]) ? regs->gprs[2] : 0;
++ unsigned long error = regs->gprs[2];
++#ifdef CONFIG_COMPAT
++ if (test_tsk_thread_flag(task, TIF_31BIT)) {
++ /*
++ * Sign-extend the value so (int)-EFOO becomes (long)-EFOO
++ * and will match correctly in comparisons.
++ */
++ error = (long)(int)error;
++ }
++#endif
++ return IS_ERR_VALUE(error) ? error : 0;
+ }
+
+ static inline long syscall_get_return_value(struct task_struct *task,
+diff --git a/arch/sh/include/asm/io.h b/arch/sh/include/asm/io.h
+index 39c9ead489e5..b42228906eaf 100644
+--- a/arch/sh/include/asm/io.h
++++ b/arch/sh/include/asm/io.h
+@@ -328,7 +328,7 @@ __ioremap_mode(phys_addr_t offset, unsigned long size, pgprot_t prot)
+ #else
+ #define __ioremap(offset, size, prot) ((void __iomem *)(offset))
+ #define __ioremap_mode(offset, size, prot) ((void __iomem *)(offset))
+-#define iounmap(addr) do { } while (0)
++static inline void iounmap(void __iomem *addr) {}
+ #endif /* CONFIG_MMU */
+
+ static inline void __iomem *ioremap(phys_addr_t offset, unsigned long size)
+diff --git a/arch/sparc/mm/srmmu.c b/arch/sparc/mm/srmmu.c
+index a8c2f2615fc6..ecc9e8786d57 100644
+--- a/arch/sparc/mm/srmmu.c
++++ b/arch/sparc/mm/srmmu.c
+@@ -383,7 +383,6 @@ pgtable_t pte_alloc_one(struct mm_struct *mm)
+ return NULL;
+ page = pfn_to_page(__nocache_pa(pte) >> PAGE_SHIFT);
+ if (!pgtable_pte_page_ctor(page)) {
+- __free_page(page);
+ return NULL;
+ }
+ return page;
+diff --git a/arch/um/drivers/Makefile b/arch/um/drivers/Makefile
+index a290821e355c..2a249f619467 100644
+--- a/arch/um/drivers/Makefile
++++ b/arch/um/drivers/Makefile
+@@ -18,9 +18,9 @@ ubd-objs := ubd_kern.o ubd_user.o
+ port-objs := port_kern.o port_user.o
+ harddog-objs := harddog_kern.o harddog_user.o
+
+-LDFLAGS_pcap.o := -r $(shell $(CC) $(KBUILD_CFLAGS) -print-file-name=libpcap.a)
++LDFLAGS_pcap.o = $(shell $(CC) $(KBUILD_CFLAGS) -print-file-name=libpcap.a)
+
+-LDFLAGS_vde.o := -r $(shell $(CC) $(CFLAGS) -print-file-name=libvdeplug.a)
++LDFLAGS_vde.o = $(shell $(CC) $(CFLAGS) -print-file-name=libvdeplug.a)
+
+ targets := pcap_kern.o pcap_user.o vde_kern.o vde_user.o
+
+diff --git a/arch/unicore32/lib/Makefile b/arch/unicore32/lib/Makefile
+index 098981a01841..5af06645b8f0 100644
+--- a/arch/unicore32/lib/Makefile
++++ b/arch/unicore32/lib/Makefile
+@@ -10,12 +10,12 @@ lib-y += strncpy_from_user.o strnlen_user.o
+ lib-y += clear_user.o copy_page.o
+ lib-y += copy_from_user.o copy_to_user.o
+
+-GNU_LIBC_A := $(shell $(CC) $(KBUILD_CFLAGS) -print-file-name=libc.a)
++GNU_LIBC_A = $(shell $(CC) $(KBUILD_CFLAGS) -print-file-name=libc.a)
+ GNU_LIBC_A_OBJS := memchr.o memcpy.o memmove.o memset.o
+ GNU_LIBC_A_OBJS += strchr.o strrchr.o
+ GNU_LIBC_A_OBJS += rawmemchr.o # needed by strrchr.o
+
+-GNU_LIBGCC_A := $(shell $(CC) $(KBUILD_CFLAGS) -print-file-name=libgcc.a)
++GNU_LIBGCC_A = $(shell $(CC) $(KBUILD_CFLAGS) -print-file-name=libgcc.a)
+ GNU_LIBGCC_A_OBJS := _ashldi3.o _ashrdi3.o _lshrdi3.o
+ GNU_LIBGCC_A_OBJS += _divsi3.o _modsi3.o _ucmpdi2.o _umodsi3.o _udivsi3.o
+
+diff --git a/arch/x86/kernel/apic/apic.c b/arch/x86/kernel/apic/apic.c
+index e53dda210cd7..21d2f1de1057 100644
+--- a/arch/x86/kernel/apic/apic.c
++++ b/arch/x86/kernel/apic/apic.c
+@@ -2093,7 +2093,7 @@ void __init init_apic_mappings(void)
+ unsigned int new_apicid;
+
+ if (apic_validate_deadline_timer())
+- pr_debug("TSC deadline timer available\n");
++ pr_info("TSC deadline timer available\n");
+
+ if (x2apic_mode) {
+ boot_cpu_physical_apicid = read_apic_id();
+diff --git a/arch/x86/kernel/cpu/mce/dev-mcelog.c b/arch/x86/kernel/cpu/mce/dev-mcelog.c
+index d089567a9ce8..bcb379b2fd42 100644
+--- a/arch/x86/kernel/cpu/mce/dev-mcelog.c
++++ b/arch/x86/kernel/cpu/mce/dev-mcelog.c
+@@ -343,7 +343,7 @@ static __init int dev_mcelog_init_device(void)
+ if (!mcelog)
+ return -ENOMEM;
+
+- strncpy(mcelog->signature, MCE_LOG_SIGNATURE, sizeof(mcelog->signature));
++ memcpy(mcelog->signature, MCE_LOG_SIGNATURE, sizeof(mcelog->signature));
+ mcelog->len = mce_log_len;
+ mcelog->recordlen = sizeof(struct mce);
+
+diff --git a/arch/x86/kernel/idt.c b/arch/x86/kernel/idt.c
+index 87ef69a72c52..7bb4c3cbf4dc 100644
+--- a/arch/x86/kernel/idt.c
++++ b/arch/x86/kernel/idt.c
+@@ -318,7 +318,11 @@ void __init idt_setup_apic_and_irq_gates(void)
+
+ #ifdef CONFIG_X86_LOCAL_APIC
+ for_each_clear_bit_from(i, system_vectors, NR_VECTORS) {
+- set_bit(i, system_vectors);
++ /*
++ * Don't set the non assigned system vectors in the
++ * system_vectors bitmap. Otherwise they show up in
++ * /proc/interrupts.
++ */
+ entry = spurious_entries_start + 8 * (i - FIRST_SYSTEM_VECTOR);
+ set_intr_gate(i, entry);
+ }
+diff --git a/arch/x86/kernel/kprobes/core.c b/arch/x86/kernel/kprobes/core.c
+index 4d7022a740ab..a12adbe1559d 100644
+--- a/arch/x86/kernel/kprobes/core.c
++++ b/arch/x86/kernel/kprobes/core.c
+@@ -753,16 +753,11 @@ asm(
+ NOKPROBE_SYMBOL(kretprobe_trampoline);
+ STACK_FRAME_NON_STANDARD(kretprobe_trampoline);
+
+-static struct kprobe kretprobe_kprobe = {
+- .addr = (void *)kretprobe_trampoline,
+-};
+-
+ /*
+ * Called from kretprobe_trampoline
+ */
+ __used __visible void *trampoline_handler(struct pt_regs *regs)
+ {
+- struct kprobe_ctlblk *kcb;
+ struct kretprobe_instance *ri = NULL;
+ struct hlist_head *head, empty_rp;
+ struct hlist_node *tmp;
+@@ -772,16 +767,12 @@ __used __visible void *trampoline_handler(struct pt_regs *regs)
+ void *frame_pointer;
+ bool skipped = false;
+
+- preempt_disable();
+-
+ /*
+ * Set a dummy kprobe for avoiding kretprobe recursion.
+ * Since kretprobe never run in kprobe handler, kprobe must not
+ * be running at this point.
+ */
+- kcb = get_kprobe_ctlblk();
+- __this_cpu_write(current_kprobe, &kretprobe_kprobe);
+- kcb->kprobe_status = KPROBE_HIT_ACTIVE;
++ kprobe_busy_begin();
+
+ INIT_HLIST_HEAD(&empty_rp);
+ kretprobe_hash_lock(current, &head, &flags);
+@@ -857,7 +848,7 @@ __used __visible void *trampoline_handler(struct pt_regs *regs)
+ __this_cpu_write(current_kprobe, &ri->rp->kp);
+ ri->ret_addr = correct_ret_addr;
+ ri->rp->handler(ri, regs);
+- __this_cpu_write(current_kprobe, &kretprobe_kprobe);
++ __this_cpu_write(current_kprobe, &kprobe_busy);
+ }
+
+ recycle_rp_inst(ri, &empty_rp);
+@@ -873,8 +864,7 @@ __used __visible void *trampoline_handler(struct pt_regs *regs)
+
+ kretprobe_hash_unlock(current, &flags);
+
+- __this_cpu_write(current_kprobe, NULL);
+- preempt_enable();
++ kprobe_busy_end();
+
+ hlist_for_each_entry_safe(ri, tmp, &empty_rp, hlist) {
+ hlist_del(&ri->hlist);
+diff --git a/arch/x86/purgatory/Makefile b/arch/x86/purgatory/Makefile
+index fb4ee5444379..9733d1cc791d 100644
+--- a/arch/x86/purgatory/Makefile
++++ b/arch/x86/purgatory/Makefile
+@@ -17,7 +17,10 @@ CFLAGS_sha256.o := -D__DISABLE_EXPORTS
+ LDFLAGS_purgatory.ro := -e purgatory_start -r --no-undefined -nostdlib -z nodefaultlib
+ targets += purgatory.ro
+
++# Sanitizer, etc. runtimes are unavailable and cannot be linked here.
++GCOV_PROFILE := n
+ KASAN_SANITIZE := n
++UBSAN_SANITIZE := n
+ KCOV_INSTRUMENT := n
+
+ # These are adjustments to the compiler flags used for objects that
+@@ -25,7 +28,7 @@ KCOV_INSTRUMENT := n
+
+ PURGATORY_CFLAGS_REMOVE := -mcmodel=kernel
+ PURGATORY_CFLAGS := -mcmodel=large -ffreestanding -fno-zero-initialized-in-bss
+-PURGATORY_CFLAGS += $(DISABLE_STACKLEAK_PLUGIN)
++PURGATORY_CFLAGS += $(DISABLE_STACKLEAK_PLUGIN) -DDISABLE_BRANCH_PROFILING
+
+ # Default KBUILD_CFLAGS can have -pg option set when FTRACE is enabled. That
+ # in turn leaves some undefined symbols like __fentry__ in purgatory and not
+diff --git a/crypto/algboss.c b/crypto/algboss.c
+index 535f1f87e6c1..5ebccbd6b74e 100644
+--- a/crypto/algboss.c
++++ b/crypto/algboss.c
+@@ -178,8 +178,6 @@ static int cryptomgr_schedule_probe(struct crypto_larval *larval)
+ if (IS_ERR(thread))
+ goto err_put_larval;
+
+- wait_for_completion_interruptible(&larval->completion);
+-
+ return NOTIFY_STOP;
+
+ err_put_larval:
+diff --git a/crypto/algif_skcipher.c b/crypto/algif_skcipher.c
+index e2c8ab408bed..4c3bdffe0c3a 100644
+--- a/crypto/algif_skcipher.c
++++ b/crypto/algif_skcipher.c
+@@ -74,14 +74,10 @@ static int _skcipher_recvmsg(struct socket *sock, struct msghdr *msg,
+ return PTR_ERR(areq);
+
+ /* convert iovecs of output buffers into RX SGL */
+- err = af_alg_get_rsgl(sk, msg, flags, areq, -1, &len);
++ err = af_alg_get_rsgl(sk, msg, flags, areq, ctx->used, &len);
+ if (err)
+ goto free;
+
+- /* Process only as much RX buffers for which we have TX data */
+- if (len > ctx->used)
+- len = ctx->used;
+-
+ /*
+ * If more buffers are to be expected to be processed, process only
+ * full block size buffers.
+diff --git a/drivers/ata/libata-core.c b/drivers/ata/libata-core.c
+index beca5f91bb4c..e74c8fe2a5fd 100644
+--- a/drivers/ata/libata-core.c
++++ b/drivers/ata/libata-core.c
+@@ -42,7 +42,6 @@
+ #include <linux/workqueue.h>
+ #include <linux/scatterlist.h>
+ #include <linux/io.h>
+-#include <linux/async.h>
+ #include <linux/log2.h>
+ #include <linux/slab.h>
+ #include <linux/glob.h>
+@@ -5778,7 +5777,7 @@ int ata_host_register(struct ata_host *host, struct scsi_host_template *sht)
+ /* perform each probe asynchronously */
+ for (i = 0; i < host->n_ports; i++) {
+ struct ata_port *ap = host->ports[i];
+- async_schedule(async_port_probe, ap);
++ ap->cookie = async_schedule(async_port_probe, ap);
+ }
+
+ return 0;
+@@ -5920,11 +5919,11 @@ void ata_host_detach(struct ata_host *host)
+ {
+ int i;
+
+- /* Ensure ata_port probe has completed */
+- async_synchronize_full();
+-
+- for (i = 0; i < host->n_ports; i++)
++ for (i = 0; i < host->n_ports; i++) {
++ /* Ensure ata_port probe has completed */
++ async_synchronize_cookie(host->ports[i]->cookie + 1);
+ ata_port_detach(host->ports[i]);
++ }
+
+ /* the host is dead now, dissociate ACPI */
+ ata_acpi_dissociate(host);
+diff --git a/drivers/base/platform.c b/drivers/base/platform.c
+index b27d0f6c18c9..f5d485166fd3 100644
+--- a/drivers/base/platform.c
++++ b/drivers/base/platform.c
+@@ -851,6 +851,8 @@ int __init_or_module __platform_driver_probe(struct platform_driver *drv,
+ /* temporary section violation during probe() */
+ drv->probe = probe;
+ retval = code = __platform_driver_register(drv, module);
++ if (retval)
++ return retval;
+
+ /*
+ * Fixup that section violation, being paranoid about code scanning
+diff --git a/drivers/block/ps3disk.c b/drivers/block/ps3disk.c
+index c5c6487a19d5..7b55811c2a81 100644
+--- a/drivers/block/ps3disk.c
++++ b/drivers/block/ps3disk.c
+@@ -454,7 +454,6 @@ static int ps3disk_probe(struct ps3_system_bus_device *_dev)
+ queue->queuedata = dev;
+
+ blk_queue_max_hw_sectors(queue, dev->bounce_size >> 9);
+- blk_queue_segment_boundary(queue, -1UL);
+ blk_queue_dma_alignment(queue, dev->blk_size-1);
+ blk_queue_logical_block_size(queue, dev->blk_size);
+
+diff --git a/drivers/bus/mhi/core/main.c b/drivers/bus/mhi/core/main.c
+index 97e06cc586e4..8be3d0fb0614 100644
+--- a/drivers/bus/mhi/core/main.c
++++ b/drivers/bus/mhi/core/main.c
+@@ -513,7 +513,10 @@ static int parse_xfer_event(struct mhi_controller *mhi_cntrl,
+ mhi_cntrl->unmap_single(mhi_cntrl, buf_info);
+
+ result.buf_addr = buf_info->cb_buf;
+- result.bytes_xferd = xfer_len;
++
++ /* truncate to buf len if xfer_len is larger */
++ result.bytes_xferd =
++ min_t(u16, xfer_len, buf_info->len);
+ mhi_del_ring_element(mhi_cntrl, buf_ring);
+ mhi_del_ring_element(mhi_cntrl, tre_ring);
+ local_rp = tre_ring->rp;
+@@ -597,7 +600,9 @@ static int parse_rsc_event(struct mhi_controller *mhi_cntrl,
+
+ result.transaction_status = (ev_code == MHI_EV_CC_OVERFLOW) ?
+ -EOVERFLOW : 0;
+- result.bytes_xferd = xfer_len;
++
++ /* truncate to buf len if xfer_len is larger */
++ result.bytes_xferd = min_t(u16, xfer_len, buf_info->len);
+ result.buf_addr = buf_info->cb_buf;
+ result.dir = mhi_chan->dir;
+
+diff --git a/drivers/char/ipmi/ipmi_msghandler.c b/drivers/char/ipmi/ipmi_msghandler.c
+index c48d8f086382..9afd220cd824 100644
+--- a/drivers/char/ipmi/ipmi_msghandler.c
++++ b/drivers/char/ipmi/ipmi_msghandler.c
+@@ -33,6 +33,7 @@
+ #include <linux/workqueue.h>
+ #include <linux/uuid.h>
+ #include <linux/nospec.h>
++#include <linux/vmalloc.h>
+
+ #define IPMI_DRIVER_VERSION "39.2"
+
+@@ -1153,7 +1154,7 @@ static void free_user_work(struct work_struct *work)
+ remove_work);
+
+ cleanup_srcu_struct(&user->release_barrier);
+- kfree(user);
++ vfree(user);
+ }
+
+ int ipmi_create_user(unsigned int if_num,
+@@ -1185,7 +1186,7 @@ int ipmi_create_user(unsigned int if_num,
+ if (rv)
+ return rv;
+
+- new_user = kmalloc(sizeof(*new_user), GFP_KERNEL);
++ new_user = vzalloc(sizeof(*new_user));
+ if (!new_user)
+ return -ENOMEM;
+
+@@ -1232,7 +1233,7 @@ int ipmi_create_user(unsigned int if_num,
+
+ out_kfree:
+ srcu_read_unlock(&ipmi_interfaces_srcu, index);
+- kfree(new_user);
++ vfree(new_user);
+ return rv;
+ }
+ EXPORT_SYMBOL(ipmi_create_user);
+diff --git a/drivers/char/mem.c b/drivers/char/mem.c
+index 43dd0891ca1e..31cae88a730b 100644
+--- a/drivers/char/mem.c
++++ b/drivers/char/mem.c
+@@ -31,11 +31,15 @@
+ #include <linux/uio.h>
+ #include <linux/uaccess.h>
+ #include <linux/security.h>
++#include <linux/pseudo_fs.h>
++#include <uapi/linux/magic.h>
++#include <linux/mount.h>
+
+ #ifdef CONFIG_IA64
+ # include <linux/efi.h>
+ #endif
+
++#define DEVMEM_MINOR 1
+ #define DEVPORT_MINOR 4
+
+ static inline unsigned long size_inside_page(unsigned long start,
+@@ -805,12 +809,64 @@ static loff_t memory_lseek(struct file *file, loff_t offset, int orig)
+ return ret;
+ }
+
++static struct inode *devmem_inode;
++
++#ifdef CONFIG_IO_STRICT_DEVMEM
++void revoke_devmem(struct resource *res)
++{
++ struct inode *inode = READ_ONCE(devmem_inode);
++
++ /*
++ * Check that the initialization has completed. Losing the race
++ * is ok because it means drivers are claiming resources before
++ * the fs_initcall level of init and prevent /dev/mem from
++ * establishing mappings.
++ */
++ if (!inode)
++ return;
++
++ /*
++ * The expectation is that the driver has successfully marked
++ * the resource busy by this point, so devmem_is_allowed()
++ * should start returning false, however for performance this
++ * does not iterate the entire resource range.
++ */
++ if (devmem_is_allowed(PHYS_PFN(res->start)) &&
++ devmem_is_allowed(PHYS_PFN(res->end))) {
++ /*
++ * *cringe* iomem=relaxed says "go ahead, what's the
++ * worst that can happen?"
++ */
++ return;
++ }
++
++ unmap_mapping_range(inode->i_mapping, res->start, resource_size(res), 1);
++}
++#endif
++
+ static int open_port(struct inode *inode, struct file *filp)
+ {
++ int rc;
++
+ if (!capable(CAP_SYS_RAWIO))
+ return -EPERM;
+
+- return security_locked_down(LOCKDOWN_DEV_MEM);
++ rc = security_locked_down(LOCKDOWN_DEV_MEM);
++ if (rc)
++ return rc;
++
++ if (iminor(inode) != DEVMEM_MINOR)
++ return 0;
++
++ /*
++ * Use a unified address space to have a single point to manage
++ * revocations when drivers want to take over a /dev/mem mapped
++ * range.
++ */
++ inode->i_mapping = devmem_inode->i_mapping;
++ filp->f_mapping = inode->i_mapping;
++
++ return 0;
+ }
+
+ #define zero_lseek null_lseek
+@@ -885,7 +941,7 @@ static const struct memdev {
+ fmode_t fmode;
+ } devlist[] = {
+ #ifdef CONFIG_DEVMEM
+- [1] = { "mem", 0, &mem_fops, FMODE_UNSIGNED_OFFSET },
++ [DEVMEM_MINOR] = { "mem", 0, &mem_fops, FMODE_UNSIGNED_OFFSET },
+ #endif
+ #ifdef CONFIG_DEVKMEM
+ [2] = { "kmem", 0, &kmem_fops, FMODE_UNSIGNED_OFFSET },
+@@ -939,6 +995,45 @@ static char *mem_devnode(struct device *dev, umode_t *mode)
+
+ static struct class *mem_class;
+
++static int devmem_fs_init_fs_context(struct fs_context *fc)
++{
++ return init_pseudo(fc, DEVMEM_MAGIC) ? 0 : -ENOMEM;
++}
++
++static struct file_system_type devmem_fs_type = {
++ .name = "devmem",
++ .owner = THIS_MODULE,
++ .init_fs_context = devmem_fs_init_fs_context,
++ .kill_sb = kill_anon_super,
++};
++
++static int devmem_init_inode(void)
++{
++ static struct vfsmount *devmem_vfs_mount;
++ static int devmem_fs_cnt;
++ struct inode *inode;
++ int rc;
++
++ rc = simple_pin_fs(&devmem_fs_type, &devmem_vfs_mount, &devmem_fs_cnt);
++ if (rc < 0) {
++ pr_err("Cannot mount /dev/mem pseudo filesystem: %d\n", rc);
++ return rc;
++ }
++
++ inode = alloc_anon_inode(devmem_vfs_mount->mnt_sb);
++ if (IS_ERR(inode)) {
++ rc = PTR_ERR(inode);
++ pr_err("Cannot allocate inode for /dev/mem: %d\n", rc);
++ simple_release_fs(&devmem_vfs_mount, &devmem_fs_cnt);
++ return rc;
++ }
++
++ /* publish /dev/mem initialized */
++ WRITE_ONCE(devmem_inode, inode);
++
++ return 0;
++}
++
+ static int __init chr_dev_init(void)
+ {
+ int minor;
+@@ -960,6 +1055,8 @@ static int __init chr_dev_init(void)
+ */
+ if ((minor == DEVPORT_MINOR) && !arch_has_dev_port())
+ continue;
++ if ((minor == DEVMEM_MINOR) && devmem_init_inode() != 0)
++ continue;
+
+ device_create(mem_class, NULL, MKDEV(MEM_MAJOR, minor),
+ NULL, devlist[minor].name);
+diff --git a/drivers/clk/Makefile b/drivers/clk/Makefile
+index f4169cc2fd31..60e811d3f226 100644
+--- a/drivers/clk/Makefile
++++ b/drivers/clk/Makefile
+@@ -105,7 +105,7 @@ obj-$(CONFIG_CLK_SIFIVE) += sifive/
+ obj-$(CONFIG_ARCH_SIRF) += sirf/
+ obj-$(CONFIG_ARCH_SOCFPGA) += socfpga/
+ obj-$(CONFIG_PLAT_SPEAR) += spear/
+-obj-$(CONFIG_ARCH_SPRD) += sprd/
++obj-y += sprd/
+ obj-$(CONFIG_ARCH_STI) += st/
+ obj-$(CONFIG_ARCH_STRATIX10) += socfpga/
+ obj-$(CONFIG_ARCH_SUNXI) += sunxi/
+diff --git a/drivers/clk/bcm/clk-bcm2835.c b/drivers/clk/bcm/clk-bcm2835.c
+index ded13ccf768e..7c845c293af0 100644
+--- a/drivers/clk/bcm/clk-bcm2835.c
++++ b/drivers/clk/bcm/clk-bcm2835.c
+@@ -1448,13 +1448,13 @@ static struct clk_hw *bcm2835_register_clock(struct bcm2835_cprman *cprman,
+ return &clock->hw;
+ }
+
+-static struct clk *bcm2835_register_gate(struct bcm2835_cprman *cprman,
++static struct clk_hw *bcm2835_register_gate(struct bcm2835_cprman *cprman,
+ const struct bcm2835_gate_data *data)
+ {
+- return clk_register_gate(cprman->dev, data->name, data->parent,
+- CLK_IGNORE_UNUSED | CLK_SET_RATE_GATE,
+- cprman->regs + data->ctl_reg,
+- CM_GATE_BIT, 0, &cprman->regs_lock);
++ return clk_hw_register_gate(cprman->dev, data->name, data->parent,
++ CLK_IGNORE_UNUSED | CLK_SET_RATE_GATE,
++ cprman->regs + data->ctl_reg,
++ CM_GATE_BIT, 0, &cprman->regs_lock);
+ }
+
+ typedef struct clk_hw *(*bcm2835_clk_register)(struct bcm2835_cprman *cprman,
+diff --git a/drivers/clk/clk-ast2600.c b/drivers/clk/clk-ast2600.c
+index 392d01705b97..99afc949925f 100644
+--- a/drivers/clk/clk-ast2600.c
++++ b/drivers/clk/clk-ast2600.c
+@@ -642,14 +642,22 @@ static const u32 ast2600_a0_axi_ahb_div_table[] = {
+ 2, 2, 3, 5,
+ };
+
+-static const u32 ast2600_a1_axi_ahb_div_table[] = {
+- 4, 6, 2, 4,
++static const u32 ast2600_a1_axi_ahb_div0_tbl[] = {
++ 3, 2, 3, 4,
++};
++
++static const u32 ast2600_a1_axi_ahb_div1_tbl[] = {
++ 3, 4, 6, 8,
++};
++
++static const u32 ast2600_a1_axi_ahb200_tbl[] = {
++ 3, 4, 3, 4, 2, 2, 2, 2,
+ };
+
+ static void __init aspeed_g6_cc(struct regmap *map)
+ {
+ struct clk_hw *hw;
+- u32 val, div, chip_id, axi_div, ahb_div;
++ u32 val, div, divbits, chip_id, axi_div, ahb_div;
+
+ clk_hw_register_fixed_rate(NULL, "clkin", NULL, 0, 25000000);
+
+@@ -679,11 +687,22 @@ static void __init aspeed_g6_cc(struct regmap *map)
+ else
+ axi_div = 2;
+
++ divbits = (val >> 11) & 0x3;
+ regmap_read(map, ASPEED_G6_SILICON_REV, &chip_id);
+- if (chip_id & BIT(16))
+- ahb_div = ast2600_a1_axi_ahb_div_table[(val >> 11) & 0x3];
+- else
++ if (chip_id & BIT(16)) {
++ if (!divbits) {
++ ahb_div = ast2600_a1_axi_ahb200_tbl[(val >> 8) & 0x3];
++ if (val & BIT(16))
++ ahb_div *= 2;
++ } else {
++ if (val & BIT(16))
++ ahb_div = ast2600_a1_axi_ahb_div1_tbl[divbits];
++ else
++ ahb_div = ast2600_a1_axi_ahb_div0_tbl[divbits];
++ }
++ } else {
+ ahb_div = ast2600_a0_axi_ahb_div_table[(val >> 11) & 0x3];
++ }
+
+ hw = clk_hw_register_fixed_factor(NULL, "ahb", "hpll", 0, 1, axi_div * ahb_div);
+ aspeed_g6_clk_data->hws[ASPEED_CLK_AHB] = hw;
+diff --git a/drivers/clk/meson/meson8b.c b/drivers/clk/meson/meson8b.c
+index 34a70c4b4899..11f6b868cf2b 100644
+--- a/drivers/clk/meson/meson8b.c
++++ b/drivers/clk/meson/meson8b.c
+@@ -1077,7 +1077,7 @@ static struct clk_regmap meson8b_vid_pll_in_sel = {
+ * Meson8m2: vid2_pll
+ */
+ .parent_hws = (const struct clk_hw *[]) {
+- &meson8b_hdmi_pll_dco.hw
++ &meson8b_hdmi_pll_lvds_out.hw
+ },
+ .num_parents = 1,
+ .flags = CLK_SET_RATE_PARENT,
+@@ -1213,7 +1213,7 @@ static struct clk_regmap meson8b_vclk_in_en = {
+
+ static struct clk_regmap meson8b_vclk_div1_gate = {
+ .data = &(struct clk_regmap_gate_data){
+- .offset = HHI_VID_CLK_DIV,
++ .offset = HHI_VID_CLK_CNTL,
+ .bit_idx = 0,
+ },
+ .hw.init = &(struct clk_init_data){
+@@ -1243,7 +1243,7 @@ static struct clk_fixed_factor meson8b_vclk_div2_div = {
+
+ static struct clk_regmap meson8b_vclk_div2_div_gate = {
+ .data = &(struct clk_regmap_gate_data){
+- .offset = HHI_VID_CLK_DIV,
++ .offset = HHI_VID_CLK_CNTL,
+ .bit_idx = 1,
+ },
+ .hw.init = &(struct clk_init_data){
+@@ -1273,7 +1273,7 @@ static struct clk_fixed_factor meson8b_vclk_div4_div = {
+
+ static struct clk_regmap meson8b_vclk_div4_div_gate = {
+ .data = &(struct clk_regmap_gate_data){
+- .offset = HHI_VID_CLK_DIV,
++ .offset = HHI_VID_CLK_CNTL,
+ .bit_idx = 2,
+ },
+ .hw.init = &(struct clk_init_data){
+@@ -1303,7 +1303,7 @@ static struct clk_fixed_factor meson8b_vclk_div6_div = {
+
+ static struct clk_regmap meson8b_vclk_div6_div_gate = {
+ .data = &(struct clk_regmap_gate_data){
+- .offset = HHI_VID_CLK_DIV,
++ .offset = HHI_VID_CLK_CNTL,
+ .bit_idx = 3,
+ },
+ .hw.init = &(struct clk_init_data){
+@@ -1333,7 +1333,7 @@ static struct clk_fixed_factor meson8b_vclk_div12_div = {
+
+ static struct clk_regmap meson8b_vclk_div12_div_gate = {
+ .data = &(struct clk_regmap_gate_data){
+- .offset = HHI_VID_CLK_DIV,
++ .offset = HHI_VID_CLK_CNTL,
+ .bit_idx = 4,
+ },
+ .hw.init = &(struct clk_init_data){
+@@ -1918,6 +1918,13 @@ static struct clk_regmap meson8b_mali = {
+ },
+ };
+
++static const struct reg_sequence meson8m2_gp_pll_init_regs[] = {
++ { .reg = HHI_GP_PLL_CNTL2, .def = 0x59c88000 },
++ { .reg = HHI_GP_PLL_CNTL3, .def = 0xca463823 },
++ { .reg = HHI_GP_PLL_CNTL4, .def = 0x0286a027 },
++ { .reg = HHI_GP_PLL_CNTL5, .def = 0x00003000 },
++};
++
+ static const struct pll_params_table meson8m2_gp_pll_params_table[] = {
+ PLL_PARAMS(182, 3),
+ { /* sentinel */ },
+@@ -1951,6 +1958,8 @@ static struct clk_regmap meson8m2_gp_pll_dco = {
+ .width = 1,
+ },
+ .table = meson8m2_gp_pll_params_table,
++ .init_regs = meson8m2_gp_pll_init_regs,
++ .init_count = ARRAY_SIZE(meson8m2_gp_pll_init_regs),
+ },
+ .hw.init = &(struct clk_init_data){
+ .name = "gp_pll_dco",
+@@ -3506,54 +3515,87 @@ static struct clk_regmap *const meson8b_clk_regmaps[] = {
+ static const struct meson8b_clk_reset_line {
+ u32 reg;
+ u8 bit_idx;
++ bool active_low;
+ } meson8b_clk_reset_bits[] = {
+ [CLKC_RESET_L2_CACHE_SOFT_RESET] = {
+- .reg = HHI_SYS_CPU_CLK_CNTL0, .bit_idx = 30
++ .reg = HHI_SYS_CPU_CLK_CNTL0,
++ .bit_idx = 30,
++ .active_low = false,
+ },
+ [CLKC_RESET_AXI_64_TO_128_BRIDGE_A5_SOFT_RESET] = {
+- .reg = HHI_SYS_CPU_CLK_CNTL0, .bit_idx = 29
++ .reg = HHI_SYS_CPU_CLK_CNTL0,
++ .bit_idx = 29,
++ .active_low = false,
+ },
+ [CLKC_RESET_SCU_SOFT_RESET] = {
+- .reg = HHI_SYS_CPU_CLK_CNTL0, .bit_idx = 28
++ .reg = HHI_SYS_CPU_CLK_CNTL0,
++ .bit_idx = 28,
++ .active_low = false,
+ },
+ [CLKC_RESET_CPU3_SOFT_RESET] = {
+- .reg = HHI_SYS_CPU_CLK_CNTL0, .bit_idx = 27
++ .reg = HHI_SYS_CPU_CLK_CNTL0,
++ .bit_idx = 27,
++ .active_low = false,
+ },
+ [CLKC_RESET_CPU2_SOFT_RESET] = {
+- .reg = HHI_SYS_CPU_CLK_CNTL0, .bit_idx = 26
++ .reg = HHI_SYS_CPU_CLK_CNTL0,
++ .bit_idx = 26,
++ .active_low = false,
+ },
+ [CLKC_RESET_CPU1_SOFT_RESET] = {
+- .reg = HHI_SYS_CPU_CLK_CNTL0, .bit_idx = 25
++ .reg = HHI_SYS_CPU_CLK_CNTL0,
++ .bit_idx = 25,
++ .active_low = false,
+ },
+ [CLKC_RESET_CPU0_SOFT_RESET] = {
+- .reg = HHI_SYS_CPU_CLK_CNTL0, .bit_idx = 24
++ .reg = HHI_SYS_CPU_CLK_CNTL0,
++ .bit_idx = 24,
++ .active_low = false,
+ },
+ [CLKC_RESET_A5_GLOBAL_RESET] = {
+- .reg = HHI_SYS_CPU_CLK_CNTL0, .bit_idx = 18
++ .reg = HHI_SYS_CPU_CLK_CNTL0,
++ .bit_idx = 18,
++ .active_low = false,
+ },
+ [CLKC_RESET_A5_AXI_SOFT_RESET] = {
+- .reg = HHI_SYS_CPU_CLK_CNTL0, .bit_idx = 17
++ .reg = HHI_SYS_CPU_CLK_CNTL0,
++ .bit_idx = 17,
++ .active_low = false,
+ },
+ [CLKC_RESET_A5_ABP_SOFT_RESET] = {
+- .reg = HHI_SYS_CPU_CLK_CNTL0, .bit_idx = 16
++ .reg = HHI_SYS_CPU_CLK_CNTL0,
++ .bit_idx = 16,
++ .active_low = false,
+ },
+ [CLKC_RESET_AXI_64_TO_128_BRIDGE_MMC_SOFT_RESET] = {
+- .reg = HHI_SYS_CPU_CLK_CNTL1, .bit_idx = 30
++ .reg = HHI_SYS_CPU_CLK_CNTL1,
++ .bit_idx = 30,
++ .active_low = false,
+ },
+ [CLKC_RESET_VID_CLK_CNTL_SOFT_RESET] = {
+- .reg = HHI_VID_CLK_CNTL, .bit_idx = 15
++ .reg = HHI_VID_CLK_CNTL,
++ .bit_idx = 15,
++ .active_low = false,
+ },
+ [CLKC_RESET_VID_DIVIDER_CNTL_SOFT_RESET_POST] = {
+- .reg = HHI_VID_DIVIDER_CNTL, .bit_idx = 7
++ .reg = HHI_VID_DIVIDER_CNTL,
++ .bit_idx = 7,
++ .active_low = false,
+ },
+ [CLKC_RESET_VID_DIVIDER_CNTL_SOFT_RESET_PRE] = {
+- .reg = HHI_VID_DIVIDER_CNTL, .bit_idx = 3
++ .reg = HHI_VID_DIVIDER_CNTL,
++ .bit_idx = 3,
++ .active_low = false,
+ },
+ [CLKC_RESET_VID_DIVIDER_CNTL_RESET_N_POST] = {
+- .reg = HHI_VID_DIVIDER_CNTL, .bit_idx = 1
++ .reg = HHI_VID_DIVIDER_CNTL,
++ .bit_idx = 1,
++ .active_low = true,
+ },
+ [CLKC_RESET_VID_DIVIDER_CNTL_RESET_N_PRE] = {
+- .reg = HHI_VID_DIVIDER_CNTL, .bit_idx = 0
++ .reg = HHI_VID_DIVIDER_CNTL,
++ .bit_idx = 0,
++ .active_low = true,
+ },
+ };
+
+@@ -3562,22 +3604,22 @@ static int meson8b_clk_reset_update(struct reset_controller_dev *rcdev,
+ {
+ struct meson8b_clk_reset *meson8b_clk_reset =
+ container_of(rcdev, struct meson8b_clk_reset, reset);
+- unsigned long flags;
+ const struct meson8b_clk_reset_line *reset;
++ unsigned int value = 0;
++ unsigned long flags;
+
+ if (id >= ARRAY_SIZE(meson8b_clk_reset_bits))
+ return -EINVAL;
+
+ reset = &meson8b_clk_reset_bits[id];
+
++ if (assert != reset->active_low)
++ value = BIT(reset->bit_idx);
++
+ spin_lock_irqsave(&meson_clk_lock, flags);
+
+- if (assert)
+- regmap_update_bits(meson8b_clk_reset->regmap, reset->reg,
+- BIT(reset->bit_idx), BIT(reset->bit_idx));
+- else
+- regmap_update_bits(meson8b_clk_reset->regmap, reset->reg,
+- BIT(reset->bit_idx), 0);
++ regmap_update_bits(meson8b_clk_reset->regmap, reset->reg,
++ BIT(reset->bit_idx), value);
+
+ spin_unlock_irqrestore(&meson_clk_lock, flags);
+
+diff --git a/drivers/clk/meson/meson8b.h b/drivers/clk/meson/meson8b.h
+index c889fbeec30f..c91fb07fcb65 100644
+--- a/drivers/clk/meson/meson8b.h
++++ b/drivers/clk/meson/meson8b.h
+@@ -20,6 +20,10 @@
+ * [0] http://dn.odroid.com/S805/Datasheet/S805_Datasheet%20V0.8%2020150126.pdf
+ */
+ #define HHI_GP_PLL_CNTL 0x40 /* 0x10 offset in data sheet */
++#define HHI_GP_PLL_CNTL2 0x44 /* 0x11 offset in data sheet */
++#define HHI_GP_PLL_CNTL3 0x48 /* 0x12 offset in data sheet */
++#define HHI_GP_PLL_CNTL4 0x4C /* 0x13 offset in data sheet */
++#define HHI_GP_PLL_CNTL5 0x50 /* 0x14 offset in data sheet */
+ #define HHI_VIID_CLK_DIV 0x128 /* 0x4a offset in data sheet */
+ #define HHI_VIID_CLK_CNTL 0x12c /* 0x4b offset in data sheet */
+ #define HHI_GCLK_MPEG0 0x140 /* 0x50 offset in data sheet */
+diff --git a/drivers/clk/qcom/gcc-msm8916.c b/drivers/clk/qcom/gcc-msm8916.c
+index 4e329a7baf2b..17e4a5a2a9fd 100644
+--- a/drivers/clk/qcom/gcc-msm8916.c
++++ b/drivers/clk/qcom/gcc-msm8916.c
+@@ -260,7 +260,7 @@ static struct clk_pll gpll0 = {
+ .l_reg = 0x21004,
+ .m_reg = 0x21008,
+ .n_reg = 0x2100c,
+- .config_reg = 0x21014,
++ .config_reg = 0x21010,
+ .mode_reg = 0x21000,
+ .status_reg = 0x2101c,
+ .status_bit = 17,
+@@ -287,7 +287,7 @@ static struct clk_pll gpll1 = {
+ .l_reg = 0x20004,
+ .m_reg = 0x20008,
+ .n_reg = 0x2000c,
+- .config_reg = 0x20014,
++ .config_reg = 0x20010,
+ .mode_reg = 0x20000,
+ .status_reg = 0x2001c,
+ .status_bit = 17,
+@@ -314,7 +314,7 @@ static struct clk_pll gpll2 = {
+ .l_reg = 0x4a004,
+ .m_reg = 0x4a008,
+ .n_reg = 0x4a00c,
+- .config_reg = 0x4a014,
++ .config_reg = 0x4a010,
+ .mode_reg = 0x4a000,
+ .status_reg = 0x4a01c,
+ .status_bit = 17,
+@@ -341,7 +341,7 @@ static struct clk_pll bimc_pll = {
+ .l_reg = 0x23004,
+ .m_reg = 0x23008,
+ .n_reg = 0x2300c,
+- .config_reg = 0x23014,
++ .config_reg = 0x23010,
+ .mode_reg = 0x23000,
+ .status_reg = 0x2301c,
+ .status_bit = 17,
+diff --git a/drivers/clk/renesas/renesas-cpg-mssr.c b/drivers/clk/renesas/renesas-cpg-mssr.c
+index a2663fbbd7a5..d6a53c99b114 100644
+--- a/drivers/clk/renesas/renesas-cpg-mssr.c
++++ b/drivers/clk/renesas/renesas-cpg-mssr.c
+@@ -812,7 +812,8 @@ static int cpg_mssr_suspend_noirq(struct device *dev)
+ /* Save module registers with bits under our control */
+ for (reg = 0; reg < ARRAY_SIZE(priv->smstpcr_saved); reg++) {
+ if (priv->smstpcr_saved[reg].mask)
+- priv->smstpcr_saved[reg].val =
++ priv->smstpcr_saved[reg].val = priv->stbyctrl ?
++ readb(priv->base + STBCR(reg)) :
+ readl(priv->base + SMSTPCR(reg));
+ }
+
+@@ -872,8 +873,9 @@ static int cpg_mssr_resume_noirq(struct device *dev)
+ }
+
+ if (!i)
+- dev_warn(dev, "Failed to enable SMSTP %p[0x%x]\n",
+- priv->base + SMSTPCR(reg), oldval & mask);
++ dev_warn(dev, "Failed to enable %s%u[0x%x]\n",
++ priv->stbyctrl ? "STB" : "SMSTP", reg,
++ oldval & mask);
+ }
+
+ return 0;
+diff --git a/drivers/clk/samsung/clk-exynos5420.c b/drivers/clk/samsung/clk-exynos5420.c
+index c9e5a1fb6653..edb2363c735a 100644
+--- a/drivers/clk/samsung/clk-exynos5420.c
++++ b/drivers/clk/samsung/clk-exynos5420.c
+@@ -540,7 +540,7 @@ static const struct samsung_div_clock exynos5800_div_clks[] __initconst = {
+
+ static const struct samsung_gate_clock exynos5800_gate_clks[] __initconst = {
+ GATE(CLK_ACLK550_CAM, "aclk550_cam", "mout_user_aclk550_cam",
+- GATE_BUS_TOP, 24, 0, 0),
++ GATE_BUS_TOP, 24, CLK_IS_CRITICAL, 0),
+ GATE(CLK_ACLK432_SCALER, "aclk432_scaler", "mout_user_aclk432_scaler",
+ GATE_BUS_TOP, 27, CLK_IS_CRITICAL, 0),
+ };
+@@ -943,25 +943,25 @@ static const struct samsung_gate_clock exynos5x_gate_clks[] __initconst = {
+ GATE(0, "aclk300_jpeg", "mout_user_aclk300_jpeg",
+ GATE_BUS_TOP, 4, CLK_IGNORE_UNUSED, 0),
+ GATE(0, "aclk333_432_isp0", "mout_user_aclk333_432_isp0",
+- GATE_BUS_TOP, 5, 0, 0),
++ GATE_BUS_TOP, 5, CLK_IS_CRITICAL, 0),
+ GATE(0, "aclk300_gscl", "mout_user_aclk300_gscl",
+ GATE_BUS_TOP, 6, CLK_IS_CRITICAL, 0),
+ GATE(0, "aclk333_432_gscl", "mout_user_aclk333_432_gscl",
+ GATE_BUS_TOP, 7, CLK_IGNORE_UNUSED, 0),
+ GATE(0, "aclk333_432_isp", "mout_user_aclk333_432_isp",
+- GATE_BUS_TOP, 8, 0, 0),
++ GATE_BUS_TOP, 8, CLK_IS_CRITICAL, 0),
+ GATE(CLK_PCLK66_GPIO, "pclk66_gpio", "mout_user_pclk66_gpio",
+ GATE_BUS_TOP, 9, CLK_IGNORE_UNUSED, 0),
+ GATE(0, "aclk66_psgen", "mout_user_aclk66_psgen",
+ GATE_BUS_TOP, 10, CLK_IGNORE_UNUSED, 0),
+ GATE(0, "aclk266_isp", "mout_user_aclk266_isp",
+- GATE_BUS_TOP, 13, 0, 0),
++ GATE_BUS_TOP, 13, CLK_IS_CRITICAL, 0),
+ GATE(0, "aclk166", "mout_user_aclk166",
+ GATE_BUS_TOP, 14, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK333, "aclk333", "mout_user_aclk333",
+ GATE_BUS_TOP, 15, CLK_IS_CRITICAL, 0),
+ GATE(0, "aclk400_isp", "mout_user_aclk400_isp",
+- GATE_BUS_TOP, 16, 0, 0),
++ GATE_BUS_TOP, 16, CLK_IS_CRITICAL, 0),
+ GATE(0, "aclk400_mscl", "mout_user_aclk400_mscl",
+ GATE_BUS_TOP, 17, CLK_IS_CRITICAL, 0),
+ GATE(0, "aclk200_disp1", "mout_user_aclk200_disp1",
+@@ -1161,8 +1161,10 @@ static const struct samsung_gate_clock exynos5x_gate_clks[] __initconst = {
+ GATE_IP_GSCL1, 3, 0, 0),
+ GATE(CLK_SMMU_FIMCL1, "smmu_fimcl1", "dout_gscl_blk_333",
+ GATE_IP_GSCL1, 4, 0, 0),
+- GATE(CLK_GSCL_WA, "gscl_wa", "sclk_gscl_wa", GATE_IP_GSCL1, 12, 0, 0),
+- GATE(CLK_GSCL_WB, "gscl_wb", "sclk_gscl_wb", GATE_IP_GSCL1, 13, 0, 0),
++ GATE(CLK_GSCL_WA, "gscl_wa", "sclk_gscl_wa", GATE_IP_GSCL1, 12,
++ CLK_IS_CRITICAL, 0),
++ GATE(CLK_GSCL_WB, "gscl_wb", "sclk_gscl_wb", GATE_IP_GSCL1, 13,
++ CLK_IS_CRITICAL, 0),
+ GATE(CLK_SMMU_FIMCL3, "smmu_fimcl3,", "dout_gscl_blk_333",
+ GATE_IP_GSCL1, 16, 0, 0),
+ GATE(CLK_FIMC_LITE3, "fimc_lite3", "aclk333_432_gscl",
+diff --git a/drivers/clk/samsung/clk-exynos5433.c b/drivers/clk/samsung/clk-exynos5433.c
+index 4b1aa9382ad2..6f29ecd0442e 100644
+--- a/drivers/clk/samsung/clk-exynos5433.c
++++ b/drivers/clk/samsung/clk-exynos5433.c
+@@ -1706,7 +1706,8 @@ static const struct samsung_gate_clock peric_gate_clks[] __initconst = {
+ GATE(CLK_SCLK_PCM1, "sclk_pcm1", "sclk_pcm1_peric",
+ ENABLE_SCLK_PERIC, 7, CLK_SET_RATE_PARENT, 0),
+ GATE(CLK_SCLK_I2S1, "sclk_i2s1", "sclk_i2s1_peric",
+- ENABLE_SCLK_PERIC, 6, CLK_SET_RATE_PARENT, 0),
++ ENABLE_SCLK_PERIC, 6,
++ CLK_SET_RATE_PARENT | CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_SCLK_SPI2, "sclk_spi2", "sclk_spi2_peric", ENABLE_SCLK_PERIC,
+ 5, CLK_SET_RATE_PARENT, 0),
+ GATE(CLK_SCLK_SPI1, "sclk_spi1", "sclk_spi1_peric", ENABLE_SCLK_PERIC,
+diff --git a/drivers/clk/sprd/pll.c b/drivers/clk/sprd/pll.c
+index 15791484388f..13a322b2535a 100644
+--- a/drivers/clk/sprd/pll.c
++++ b/drivers/clk/sprd/pll.c
+@@ -106,7 +106,7 @@ static unsigned long _sprd_pll_recalc_rate(const struct sprd_pll *pll,
+
+ cfg = kcalloc(regs_num, sizeof(*cfg), GFP_KERNEL);
+ if (!cfg)
+- return -ENOMEM;
++ return parent_rate;
+
+ for (i = 0; i < regs_num; i++)
+ cfg[i] = sprd_pll_read(pll, i);
+diff --git a/drivers/clk/st/clk-flexgen.c b/drivers/clk/st/clk-flexgen.c
+index 4413b6e04a8e..55873d4b7603 100644
+--- a/drivers/clk/st/clk-flexgen.c
++++ b/drivers/clk/st/clk-flexgen.c
+@@ -375,6 +375,7 @@ static void __init st_of_flexgen_setup(struct device_node *np)
+ break;
+ }
+
++ flex_flags &= ~CLK_IS_CRITICAL;
+ of_clk_detect_critical(np, i, &flex_flags);
+
+ /*
+diff --git a/drivers/clk/sunxi/clk-sunxi.c b/drivers/clk/sunxi/clk-sunxi.c
+index 27201fd26e44..e1aa1fbac48a 100644
+--- a/drivers/clk/sunxi/clk-sunxi.c
++++ b/drivers/clk/sunxi/clk-sunxi.c
+@@ -90,7 +90,7 @@ static void sun6i_a31_get_pll1_factors(struct factors_request *req)
+ * Round down the frequency to the closest multiple of either
+ * 6 or 16
+ */
+- u32 round_freq_6 = round_down(freq_mhz, 6);
++ u32 round_freq_6 = rounddown(freq_mhz, 6);
+ u32 round_freq_16 = round_down(freq_mhz, 16);
+
+ if (round_freq_6 > round_freq_16)
+diff --git a/drivers/clk/ti/composite.c b/drivers/clk/ti/composite.c
+index 6a89936ba03a..eaa43575cfa5 100644
+--- a/drivers/clk/ti/composite.c
++++ b/drivers/clk/ti/composite.c
+@@ -196,6 +196,7 @@ cleanup:
+ if (!cclk->comp_clks[i])
+ continue;
+ list_del(&cclk->comp_clks[i]->link);
++ kfree(cclk->comp_clks[i]->parent_names);
+ kfree(cclk->comp_clks[i]);
+ }
+
+diff --git a/drivers/clk/zynqmp/clkc.c b/drivers/clk/zynqmp/clkc.c
+index 10e89f23880b..b66c3a62233a 100644
+--- a/drivers/clk/zynqmp/clkc.c
++++ b/drivers/clk/zynqmp/clkc.c
+@@ -558,7 +558,7 @@ static struct clk_hw *zynqmp_register_clk_topology(int clk_id, char *clk_name,
+ {
+ int j;
+ u32 num_nodes, clk_dev_id;
+- char *clk_out = NULL;
++ char *clk_out[MAX_NODES];
+ struct clock_topology *nodes;
+ struct clk_hw *hw = NULL;
+
+@@ -572,16 +572,16 @@ static struct clk_hw *zynqmp_register_clk_topology(int clk_id, char *clk_name,
+ * Intermediate clock names are postfixed with type of clock.
+ */
+ if (j != (num_nodes - 1)) {
+- clk_out = kasprintf(GFP_KERNEL, "%s%s", clk_name,
++ clk_out[j] = kasprintf(GFP_KERNEL, "%s%s", clk_name,
+ clk_type_postfix[nodes[j].type]);
+ } else {
+- clk_out = kasprintf(GFP_KERNEL, "%s", clk_name);
++ clk_out[j] = kasprintf(GFP_KERNEL, "%s", clk_name);
+ }
+
+ if (!clk_topology[nodes[j].type])
+ continue;
+
+- hw = (*clk_topology[nodes[j].type])(clk_out, clk_dev_id,
++ hw = (*clk_topology[nodes[j].type])(clk_out[j], clk_dev_id,
+ parent_names,
+ num_parents,
+ &nodes[j]);
+@@ -590,9 +590,12 @@ static struct clk_hw *zynqmp_register_clk_topology(int clk_id, char *clk_name,
+ __func__, clk_dev_id, clk_name,
+ PTR_ERR(hw));
+
+- parent_names[0] = clk_out;
++ parent_names[0] = clk_out[j];
+ }
+- kfree(clk_out);
++
++ for (j = 0; j < num_nodes; j++)
++ kfree(clk_out[j]);
++
+ return hw;
+ }
+
+diff --git a/drivers/clk/zynqmp/divider.c b/drivers/clk/zynqmp/divider.c
+index 4be2cc76aa2e..9bc4f9409aea 100644
+--- a/drivers/clk/zynqmp/divider.c
++++ b/drivers/clk/zynqmp/divider.c
+@@ -111,23 +111,30 @@ static unsigned long zynqmp_clk_divider_recalc_rate(struct clk_hw *hw,
+
+ static void zynqmp_get_divider2_val(struct clk_hw *hw,
+ unsigned long rate,
+- unsigned long parent_rate,
+ struct zynqmp_clk_divider *divider,
+ int *bestdiv)
+ {
+ int div1;
+ int div2;
+ long error = LONG_MAX;
+- struct clk_hw *parent_hw = clk_hw_get_parent(hw);
+- struct zynqmp_clk_divider *pdivider = to_zynqmp_clk_divider(parent_hw);
++ unsigned long div1_prate;
++ struct clk_hw *div1_parent_hw;
++ struct clk_hw *div2_parent_hw = clk_hw_get_parent(hw);
++ struct zynqmp_clk_divider *pdivider =
++ to_zynqmp_clk_divider(div2_parent_hw);
+
+ if (!pdivider)
+ return;
+
++ div1_parent_hw = clk_hw_get_parent(div2_parent_hw);
++ if (!div1_parent_hw)
++ return;
++
++ div1_prate = clk_hw_get_rate(div1_parent_hw);
+ *bestdiv = 1;
+ for (div1 = 1; div1 <= pdivider->max_div;) {
+ for (div2 = 1; div2 <= divider->max_div;) {
+- long new_error = ((parent_rate / div1) / div2) - rate;
++ long new_error = ((div1_prate / div1) / div2) - rate;
+
+ if (abs(new_error) < abs(error)) {
+ *bestdiv = div2;
+@@ -192,7 +199,7 @@ static long zynqmp_clk_divider_round_rate(struct clk_hw *hw,
+ */
+ if (div_type == TYPE_DIV2 &&
+ (clk_hw_get_flags(hw) & CLK_SET_RATE_PARENT)) {
+- zynqmp_get_divider2_val(hw, rate, *prate, divider, &bestdiv);
++ zynqmp_get_divider2_val(hw, rate, divider, &bestdiv);
+ }
+
+ if ((clk_hw_get_flags(hw) & CLK_SET_RATE_PARENT) && divider->is_frac)
+diff --git a/drivers/crypto/hisilicon/sgl.c b/drivers/crypto/hisilicon/sgl.c
+index 0e8c7e324fb4..725a739800b0 100644
+--- a/drivers/crypto/hisilicon/sgl.c
++++ b/drivers/crypto/hisilicon/sgl.c
+@@ -66,7 +66,8 @@ struct hisi_acc_sgl_pool *hisi_acc_create_sgl_pool(struct device *dev,
+
+ sgl_size = sizeof(struct acc_hw_sge) * sge_nr +
+ sizeof(struct hisi_acc_hw_sgl);
+- block_size = PAGE_SIZE * (1 << (MAX_ORDER - 1));
++ block_size = 1 << (PAGE_SHIFT + MAX_ORDER <= 32 ?
++ PAGE_SHIFT + MAX_ORDER - 1 : 31);
+ sgl_num_per_block = block_size / sgl_size;
+ block_num = count / sgl_num_per_block;
+ remain_sgl = count % sgl_num_per_block;
+diff --git a/drivers/crypto/marvell/octeontx/otx_cptvf_algs.c b/drivers/crypto/marvell/octeontx/otx_cptvf_algs.c
+index 06202bcffb33..a370c99ecf4c 100644
+--- a/drivers/crypto/marvell/octeontx/otx_cptvf_algs.c
++++ b/drivers/crypto/marvell/octeontx/otx_cptvf_algs.c
+@@ -118,6 +118,9 @@ static void otx_cpt_aead_callback(int status, void *arg1, void *arg2)
+ struct otx_cpt_req_info *cpt_req;
+ struct pci_dev *pdev;
+
++ if (!cpt_info)
++ goto complete;
++
+ cpt_req = cpt_info->req;
+ if (!status) {
+ /*
+@@ -129,10 +132,10 @@ static void otx_cpt_aead_callback(int status, void *arg1, void *arg2)
+ !cpt_req->is_enc)
+ status = validate_hmac_cipher_null(cpt_req);
+ }
+- if (cpt_info) {
+- pdev = cpt_info->pdev;
+- do_request_cleanup(pdev, cpt_info);
+- }
++ pdev = cpt_info->pdev;
++ do_request_cleanup(pdev, cpt_info);
++
++complete:
+ if (areq)
+ areq->complete(areq, status);
+ }
+diff --git a/drivers/crypto/omap-sham.c b/drivers/crypto/omap-sham.c
+index e4072cd38585..a82a3596dca3 100644
+--- a/drivers/crypto/omap-sham.c
++++ b/drivers/crypto/omap-sham.c
+@@ -169,8 +169,6 @@ struct omap_sham_hmac_ctx {
+ };
+
+ struct omap_sham_ctx {
+- struct omap_sham_dev *dd;
+-
+ unsigned long flags;
+
+ /* fallback stuff */
+@@ -751,8 +749,15 @@ static int omap_sham_align_sgs(struct scatterlist *sg,
+ int offset = rctx->offset;
+ int bufcnt = rctx->bufcnt;
+
+- if (!sg || !sg->length || !nbytes)
++ if (!sg || !sg->length || !nbytes) {
++ if (bufcnt) {
++ sg_init_table(rctx->sgl, 1);
++ sg_set_buf(rctx->sgl, rctx->dd->xmit_buf, bufcnt);
++ rctx->sg = rctx->sgl;
++ }
++
+ return 0;
++ }
+
+ new_len = nbytes;
+
+@@ -896,7 +901,7 @@ static int omap_sham_prepare_request(struct ahash_request *req, bool update)
+ if (hash_later < 0)
+ hash_later = 0;
+
+- if (hash_later) {
++ if (hash_later && hash_later <= rctx->buflen) {
+ scatterwalk_map_and_copy(rctx->buffer,
+ req->src,
+ req->nbytes - hash_later,
+@@ -926,27 +931,35 @@ static int omap_sham_update_dma_stop(struct omap_sham_dev *dd)
+ return 0;
+ }
+
++struct omap_sham_dev *omap_sham_find_dev(struct omap_sham_reqctx *ctx)
++{
++ struct omap_sham_dev *dd;
++
++ if (ctx->dd)
++ return ctx->dd;
++
++ spin_lock_bh(&sham.lock);
++ dd = list_first_entry(&sham.dev_list, struct omap_sham_dev, list);
++ list_move_tail(&dd->list, &sham.dev_list);
++ ctx->dd = dd;
++ spin_unlock_bh(&sham.lock);
++
++ return dd;
++}
++
+ static int omap_sham_init(struct ahash_request *req)
+ {
+ struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
+ struct omap_sham_ctx *tctx = crypto_ahash_ctx(tfm);
+ struct omap_sham_reqctx *ctx = ahash_request_ctx(req);
+- struct omap_sham_dev *dd = NULL, *tmp;
++ struct omap_sham_dev *dd;
+ int bs = 0;
+
+- spin_lock_bh(&sham.lock);
+- if (!tctx->dd) {
+- list_for_each_entry(tmp, &sham.dev_list, list) {
+- dd = tmp;
+- break;
+- }
+- tctx->dd = dd;
+- } else {
+- dd = tctx->dd;
+- }
+- spin_unlock_bh(&sham.lock);
++ ctx->dd = NULL;
+
+- ctx->dd = dd;
++ dd = omap_sham_find_dev(ctx);
++ if (!dd)
++ return -ENODEV;
+
+ ctx->flags = 0;
+
+@@ -1216,8 +1229,7 @@ err1:
+ static int omap_sham_enqueue(struct ahash_request *req, unsigned int op)
+ {
+ struct omap_sham_reqctx *ctx = ahash_request_ctx(req);
+- struct omap_sham_ctx *tctx = crypto_tfm_ctx(req->base.tfm);
+- struct omap_sham_dev *dd = tctx->dd;
++ struct omap_sham_dev *dd = ctx->dd;
+
+ ctx->op = op;
+
+@@ -1227,7 +1239,7 @@ static int omap_sham_enqueue(struct ahash_request *req, unsigned int op)
+ static int omap_sham_update(struct ahash_request *req)
+ {
+ struct omap_sham_reqctx *ctx = ahash_request_ctx(req);
+- struct omap_sham_dev *dd = ctx->dd;
++ struct omap_sham_dev *dd = omap_sham_find_dev(ctx);
+
+ if (!req->nbytes)
+ return 0;
+@@ -1331,21 +1343,8 @@ static int omap_sham_setkey(struct crypto_ahash *tfm, const u8 *key,
+ struct omap_sham_hmac_ctx *bctx = tctx->base;
+ int bs = crypto_shash_blocksize(bctx->shash);
+ int ds = crypto_shash_digestsize(bctx->shash);
+- struct omap_sham_dev *dd = NULL, *tmp;
+ int err, i;
+
+- spin_lock_bh(&sham.lock);
+- if (!tctx->dd) {
+- list_for_each_entry(tmp, &sham.dev_list, list) {
+- dd = tmp;
+- break;
+- }
+- tctx->dd = dd;
+- } else {
+- dd = tctx->dd;
+- }
+- spin_unlock_bh(&sham.lock);
+-
+ err = crypto_shash_setkey(tctx->fallback, key, keylen);
+ if (err)
+ return err;
+@@ -1363,7 +1362,7 @@ static int omap_sham_setkey(struct crypto_ahash *tfm, const u8 *key,
+
+ memset(bctx->ipad + keylen, 0, bs - keylen);
+
+- if (!test_bit(FLAGS_AUTO_XOR, &dd->flags)) {
++ if (!test_bit(FLAGS_AUTO_XOR, &sham.flags)) {
+ memcpy(bctx->opad, bctx->ipad, bs);
+
+ for (i = 0; i < bs; i++) {
+@@ -2167,6 +2166,7 @@ static int omap_sham_probe(struct platform_device *pdev)
+ }
+
+ dd->flags |= dd->pdata->flags;
++ sham.flags |= dd->pdata->flags;
+
+ pm_runtime_use_autosuspend(dev);
+ pm_runtime_set_autosuspend_delay(dev, DEFAULT_AUTOSUSPEND_DELAY);
+@@ -2194,6 +2194,9 @@ static int omap_sham_probe(struct platform_device *pdev)
+ spin_unlock(&sham.lock);
+
+ for (i = 0; i < dd->pdata->algs_info_size; i++) {
++ if (dd->pdata->algs_info[i].registered)
++ break;
++
+ for (j = 0; j < dd->pdata->algs_info[i].size; j++) {
+ struct ahash_alg *alg;
+
+@@ -2245,9 +2248,11 @@ static int omap_sham_remove(struct platform_device *pdev)
+ list_del(&dd->list);
+ spin_unlock(&sham.lock);
+ for (i = dd->pdata->algs_info_size - 1; i >= 0; i--)
+- for (j = dd->pdata->algs_info[i].registered - 1; j >= 0; j--)
++ for (j = dd->pdata->algs_info[i].registered - 1; j >= 0; j--) {
+ crypto_unregister_ahash(
+ &dd->pdata->algs_info[i].algs_list[j]);
++ dd->pdata->algs_info[i].registered--;
++ }
+ tasklet_kill(&dd->done_task);
+ pm_runtime_disable(&pdev->dev);
+
+diff --git a/drivers/extcon/extcon-adc-jack.c b/drivers/extcon/extcon-adc-jack.c
+index ad02dc6747a4..0317b614b680 100644
+--- a/drivers/extcon/extcon-adc-jack.c
++++ b/drivers/extcon/extcon-adc-jack.c
+@@ -124,7 +124,7 @@ static int adc_jack_probe(struct platform_device *pdev)
+ for (i = 0; data->adc_conditions[i].id != EXTCON_NONE; i++);
+ data->num_conditions = i;
+
+- data->chan = iio_channel_get(&pdev->dev, pdata->consumer_channel);
++ data->chan = devm_iio_channel_get(&pdev->dev, pdata->consumer_channel);
+ if (IS_ERR(data->chan))
+ return PTR_ERR(data->chan);
+
+@@ -164,7 +164,6 @@ static int adc_jack_remove(struct platform_device *pdev)
+
+ free_irq(data->irq, data);
+ cancel_work_sync(&data->handler.work);
+- iio_channel_release(data->chan);
+
+ return 0;
+ }
+diff --git a/drivers/firmware/imx/imx-scu.c b/drivers/firmware/imx/imx-scu.c
+index b3da2e193ad2..176ddd151375 100644
+--- a/drivers/firmware/imx/imx-scu.c
++++ b/drivers/firmware/imx/imx-scu.c
+@@ -314,6 +314,7 @@ static int imx_scu_probe(struct platform_device *pdev)
+ if (ret != -EPROBE_DEFER)
+ dev_err(dev, "Failed to request mbox chan %s ret %d\n",
+ chan_name, ret);
++ kfree(chan_name);
+ return ret;
+ }
+
+diff --git a/drivers/firmware/qcom_scm.c b/drivers/firmware/qcom_scm.c
+index 059bb0fbae9e..4701487573f7 100644
+--- a/drivers/firmware/qcom_scm.c
++++ b/drivers/firmware/qcom_scm.c
+@@ -6,7 +6,6 @@
+ #include <linux/init.h>
+ #include <linux/cpumask.h>
+ #include <linux/export.h>
+-#include <linux/dma-direct.h>
+ #include <linux/dma-mapping.h>
+ #include <linux/module.h>
+ #include <linux/types.h>
+@@ -806,8 +805,7 @@ int qcom_scm_assign_mem(phys_addr_t mem_addr, size_t mem_sz,
+ struct qcom_scm_mem_map_info *mem_to_map;
+ phys_addr_t mem_to_map_phys;
+ phys_addr_t dest_phys;
+- phys_addr_t ptr_phys;
+- dma_addr_t ptr_dma;
++ dma_addr_t ptr_phys;
+ size_t mem_to_map_sz;
+ size_t dest_sz;
+ size_t src_sz;
+@@ -824,10 +822,9 @@ int qcom_scm_assign_mem(phys_addr_t mem_addr, size_t mem_sz,
+ ptr_sz = ALIGN(src_sz, SZ_64) + ALIGN(mem_to_map_sz, SZ_64) +
+ ALIGN(dest_sz, SZ_64);
+
+- ptr = dma_alloc_coherent(__scm->dev, ptr_sz, &ptr_dma, GFP_KERNEL);
++ ptr = dma_alloc_coherent(__scm->dev, ptr_sz, &ptr_phys, GFP_KERNEL);
+ if (!ptr)
+ return -ENOMEM;
+- ptr_phys = dma_to_phys(__scm->dev, ptr_dma);
+
+ /* Fill source vmid detail */
+ src = ptr;
+@@ -855,7 +852,7 @@ int qcom_scm_assign_mem(phys_addr_t mem_addr, size_t mem_sz,
+
+ ret = __qcom_scm_assign_mem(__scm->dev, mem_to_map_phys, mem_to_map_sz,
+ ptr_phys, src_sz, dest_phys, dest_sz);
+- dma_free_coherent(__scm->dev, ptr_sz, ptr, ptr_dma);
++ dma_free_coherent(__scm->dev, ptr_sz, ptr, ptr_phys);
+ if (ret) {
+ dev_err(__scm->dev,
+ "Assign memory protection call failed %d\n", ret);
+diff --git a/drivers/fpga/dfl-afu-dma-region.c b/drivers/fpga/dfl-afu-dma-region.c
+index 62f924489db5..5942343a5d6e 100644
+--- a/drivers/fpga/dfl-afu-dma-region.c
++++ b/drivers/fpga/dfl-afu-dma-region.c
+@@ -61,10 +61,10 @@ static int afu_dma_pin_pages(struct dfl_feature_platform_data *pdata,
+ region->pages);
+ if (pinned < 0) {
+ ret = pinned;
+- goto put_pages;
++ goto free_pages;
+ } else if (pinned != npages) {
+ ret = -EFAULT;
+- goto free_pages;
++ goto put_pages;
+ }
+
+ dev_dbg(dev, "%d pages pinned\n", pinned);
+diff --git a/drivers/gpio/gpio-dwapb.c b/drivers/gpio/gpio-dwapb.c
+index 92e127e74813..ed6061b5cca1 100644
+--- a/drivers/gpio/gpio-dwapb.c
++++ b/drivers/gpio/gpio-dwapb.c
+@@ -49,7 +49,9 @@
+ #define GPIO_EXT_PORTC 0x58
+ #define GPIO_EXT_PORTD 0x5c
+
++#define DWAPB_DRIVER_NAME "gpio-dwapb"
+ #define DWAPB_MAX_PORTS 4
++
+ #define GPIO_EXT_PORT_STRIDE 0x04 /* register stride 32 bits */
+ #define GPIO_SWPORT_DR_STRIDE 0x0c /* register stride 3*32 bits */
+ #define GPIO_SWPORT_DDR_STRIDE 0x0c /* register stride 3*32 bits */
+@@ -398,7 +400,7 @@ static void dwapb_configure_irqs(struct dwapb_gpio *gpio,
+ return;
+
+ err = irq_alloc_domain_generic_chips(gpio->domain, ngpio, 2,
+- "gpio-dwapb", handle_level_irq,
++ DWAPB_DRIVER_NAME, handle_level_irq,
+ IRQ_NOREQUEST, 0,
+ IRQ_GC_INIT_NESTED_LOCK);
+ if (err) {
+@@ -455,7 +457,7 @@ static void dwapb_configure_irqs(struct dwapb_gpio *gpio,
+ */
+ err = devm_request_irq(gpio->dev, pp->irq[0],
+ dwapb_irq_handler_mfd,
+- IRQF_SHARED, "gpio-dwapb-mfd", gpio);
++ IRQF_SHARED, DWAPB_DRIVER_NAME, gpio);
+ if (err) {
+ dev_err(gpio->dev, "error requesting IRQ\n");
+ irq_domain_remove(gpio->domain);
+@@ -533,26 +535,33 @@ static int dwapb_gpio_add_port(struct dwapb_gpio *gpio,
+ dwapb_configure_irqs(gpio, port, pp);
+
+ err = gpiochip_add_data(&port->gc, port);
+- if (err)
++ if (err) {
+ dev_err(gpio->dev, "failed to register gpiochip for port%d\n",
+ port->idx);
+- else
+- port->is_registered = true;
++ return err;
++ }
+
+ /* Add GPIO-signaled ACPI event support */
+- if (pp->has_irq)
+- acpi_gpiochip_request_interrupts(&port->gc);
++ acpi_gpiochip_request_interrupts(&port->gc);
+
+- return err;
++ port->is_registered = true;
++
++ return 0;
+ }
+
+ static void dwapb_gpio_unregister(struct dwapb_gpio *gpio)
+ {
+ unsigned int m;
+
+- for (m = 0; m < gpio->nr_ports; ++m)
+- if (gpio->ports[m].is_registered)
+- gpiochip_remove(&gpio->ports[m].gc);
++ for (m = 0; m < gpio->nr_ports; ++m) {
++ struct dwapb_gpio_port *port = &gpio->ports[m];
++
++ if (!port->is_registered)
++ continue;
++
++ acpi_gpiochip_free_interrupts(&port->gc);
++ gpiochip_remove(&port->gc);
++ }
+ }
+
+ static struct dwapb_platform_data *
+@@ -836,7 +845,7 @@ static SIMPLE_DEV_PM_OPS(dwapb_gpio_pm_ops, dwapb_gpio_suspend,
+
+ static struct platform_driver dwapb_gpio_driver = {
+ .driver = {
+- .name = "gpio-dwapb",
++ .name = DWAPB_DRIVER_NAME,
+ .pm = &dwapb_gpio_pm_ops,
+ .of_match_table = of_match_ptr(dwapb_of_match),
+ .acpi_match_table = ACPI_PTR(dwapb_acpi_match),
+@@ -850,3 +859,4 @@ module_platform_driver(dwapb_gpio_driver);
+ MODULE_LICENSE("GPL");
+ MODULE_AUTHOR("Jamie Iles");
+ MODULE_DESCRIPTION("Synopsys DesignWare APB GPIO driver");
++MODULE_ALIAS("platform:" DWAPB_DRIVER_NAME);
+diff --git a/drivers/gpio/gpio-mlxbf2.c b/drivers/gpio/gpio-mlxbf2.c
+index da570e63589d..cc0dd8593a4b 100644
+--- a/drivers/gpio/gpio-mlxbf2.c
++++ b/drivers/gpio/gpio-mlxbf2.c
+@@ -110,8 +110,8 @@ static int mlxbf2_gpio_get_lock_res(struct platform_device *pdev)
+ }
+
+ yu_arm_gpio_lock_param.io = devm_ioremap(dev, res->start, size);
+- if (IS_ERR(yu_arm_gpio_lock_param.io))
+- ret = PTR_ERR(yu_arm_gpio_lock_param.io);
++ if (!yu_arm_gpio_lock_param.io)
++ ret = -ENOMEM;
+
+ exit:
+ mutex_unlock(yu_arm_gpio_lock_param.lock);
+diff --git a/drivers/gpio/gpio-pca953x.c b/drivers/gpio/gpio-pca953x.c
+index 4269ea9a817e..01011a780688 100644
+--- a/drivers/gpio/gpio-pca953x.c
++++ b/drivers/gpio/gpio-pca953x.c
+@@ -307,8 +307,22 @@ static const struct regmap_config pca953x_i2c_regmap = {
+ .volatile_reg = pca953x_volatile_register,
+
+ .cache_type = REGCACHE_RBTREE,
+- /* REVISIT: should be 0x7f but some 24 bit chips use REG_ADDR_AI */
+- .max_register = 0xff,
++ .max_register = 0x7f,
++};
++
++static const struct regmap_config pca953x_ai_i2c_regmap = {
++ .reg_bits = 8,
++ .val_bits = 8,
++
++ .read_flag_mask = REG_ADDR_AI,
++ .write_flag_mask = REG_ADDR_AI,
++
++ .readable_reg = pca953x_readable_register,
++ .writeable_reg = pca953x_writeable_register,
++ .volatile_reg = pca953x_volatile_register,
++
++ .cache_type = REGCACHE_RBTREE,
++ .max_register = 0x7f,
+ };
+
+ static u8 pca953x_recalc_addr(struct pca953x_chip *chip, int reg, int off,
+@@ -319,18 +333,6 @@ static u8 pca953x_recalc_addr(struct pca953x_chip *chip, int reg, int off,
+ int pinctrl = (reg & PCAL_PINCTRL_MASK) << 1;
+ u8 regaddr = pinctrl | addr | (off / BANK_SZ);
+
+- /* Single byte read doesn't need AI bit set. */
+- if (!addrinc)
+- return regaddr;
+-
+- /* Chips with 24 and more GPIOs always support Auto Increment */
+- if (write && NBANK(chip) > 2)
+- regaddr |= REG_ADDR_AI;
+-
+- /* PCA9575 needs address-increment on multi-byte writes */
+- if (PCA_CHIP_TYPE(chip->driver_data) == PCA957X_TYPE)
+- regaddr |= REG_ADDR_AI;
+-
+ return regaddr;
+ }
+
+@@ -863,6 +865,7 @@ static int pca953x_probe(struct i2c_client *client,
+ int ret;
+ u32 invert = 0;
+ struct regulator *reg;
++ const struct regmap_config *regmap_config;
+
+ chip = devm_kzalloc(&client->dev, sizeof(*chip), GFP_KERNEL);
+ if (chip == NULL)
+@@ -925,7 +928,17 @@ static int pca953x_probe(struct i2c_client *client,
+
+ i2c_set_clientdata(client, chip);
+
+- chip->regmap = devm_regmap_init_i2c(client, &pca953x_i2c_regmap);
++ pca953x_setup_gpio(chip, chip->driver_data & PCA_GPIO_MASK);
++
++ if (NBANK(chip) > 2 || PCA_CHIP_TYPE(chip->driver_data) == PCA957X_TYPE) {
++ dev_info(&client->dev, "using AI\n");
++ regmap_config = &pca953x_ai_i2c_regmap;
++ } else {
++ dev_info(&client->dev, "using no AI\n");
++ regmap_config = &pca953x_i2c_regmap;
++ }
++
++ chip->regmap = devm_regmap_init_i2c(client, regmap_config);
+ if (IS_ERR(chip->regmap)) {
+ ret = PTR_ERR(chip->regmap);
+ goto err_exit;
+@@ -956,7 +969,6 @@ static int pca953x_probe(struct i2c_client *client,
+ /* initialize cached registers from their original values.
+ * we can't share this chip with another i2c master.
+ */
+- pca953x_setup_gpio(chip, chip->driver_data & PCA_GPIO_MASK);
+
+ if (PCA_CHIP_TYPE(chip->driver_data) == PCA953X_TYPE) {
+ chip->regs = &pca953x_regs;
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_priv.h b/drivers/gpu/drm/amd/amdkfd/kfd_priv.h
+index c24cad3c64ed..f7cfb8180b71 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_priv.h
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_priv.h
+@@ -40,6 +40,7 @@
+ #include <drm/drm_file.h>
+ #include <drm/drm_drv.h>
+ #include <drm/drm_device.h>
++#include <drm/drm_ioctl.h>
+ #include <kgd_kfd_interface.h>
+ #include <linux/swap.h>
+
+@@ -1053,7 +1054,7 @@ static inline int kfd_devcgroup_check_permission(struct kfd_dev *kfd)
+ #if defined(CONFIG_CGROUP_DEVICE) || defined(CONFIG_CGROUP_BPF)
+ struct drm_device *ddev = kfd->ddev;
+
+- return devcgroup_check_permission(DEVCG_DEV_CHAR, ddev->driver->major,
++ return devcgroup_check_permission(DEVCG_DEV_CHAR, DRM_MAJOR,
+ ddev->render->index,
+ DEVCG_ACC_WRITE | DEVCG_ACC_READ);
+ #else
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index 7fc15b82fe48..f9f02e08054b 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -1334,7 +1334,7 @@ static int dm_late_init(void *handle)
+ unsigned int linear_lut[16];
+ int i;
+ struct dmcu *dmcu = adev->dm.dc->res_pool->dmcu;
+- bool ret = false;
++ bool ret;
+
+ for (i = 0; i < 16; i++)
+ linear_lut[i] = 0xFFFF * i / 15;
+@@ -1350,13 +1350,10 @@ static int dm_late_init(void *handle)
+ */
+ params.min_abm_backlight = 0x28F;
+
+- /* todo will enable for navi10 */
+- if (adev->asic_type <= CHIP_RAVEN) {
+- ret = dmcu_load_iram(dmcu, params);
++ ret = dmcu_load_iram(dmcu, params);
+
+- if (!ret)
+- return -EINVAL;
+- }
++ if (!ret)
++ return -EINVAL;
+
+ return detect_mst_link_for_all_connectors(adev->ddev);
+ }
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc.c b/drivers/gpu/drm/amd/display/dc/core/dc.c
+index 47431ca6986d..4acaf4be8a81 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc.c
+@@ -1011,9 +1011,17 @@ static void program_timing_sync(
+ }
+ }
+
+- /* set first pipe with plane as master */
++ /* set first unblanked pipe as master */
+ for (j = 0; j < group_size; j++) {
+- if (pipe_set[j]->plane_state) {
++ bool is_blanked;
++
++ if (pipe_set[j]->stream_res.opp->funcs->dpg_is_blanked)
++ is_blanked =
++ pipe_set[j]->stream_res.opp->funcs->dpg_is_blanked(pipe_set[j]->stream_res.opp);
++ else
++ is_blanked =
++ pipe_set[j]->stream_res.tg->funcs->is_blanked(pipe_set[j]->stream_res.tg);
++ if (!is_blanked) {
+ if (j == 0)
+ break;
+
+@@ -1034,9 +1042,17 @@ static void program_timing_sync(
+ status->timing_sync_info.master = false;
+
+ }
+- /* remove any other pipes with plane as they have already been synced */
++ /* remove any other unblanked pipes as they have already been synced */
+ for (j = j + 1; j < group_size; j++) {
+- if (pipe_set[j]->plane_state) {
++ bool is_blanked;
++
++ if (pipe_set[j]->stream_res.opp->funcs->dpg_is_blanked)
++ is_blanked =
++ pipe_set[j]->stream_res.opp->funcs->dpg_is_blanked(pipe_set[j]->stream_res.opp);
++ else
++ is_blanked =
++ pipe_set[j]->stream_res.tg->funcs->is_blanked(pipe_set[j]->stream_res.tg);
++ if (!is_blanked) {
+ group_size--;
+ pipe_set[j] = pipe_set[group_size];
+ j--;
+@@ -2517,6 +2533,12 @@ void dc_commit_updates_for_stream(struct dc *dc,
+
+ copy_stream_update_to_stream(dc, context, stream, stream_update);
+
++ if (!dc->res_pool->funcs->validate_bandwidth(dc, context, false)) {
++ DC_ERROR("Mode validation failed for stream update!\n");
++ dc_release_state(context);
++ return;
++ }
++
+ commit_planes_for_stream(
+ dc,
+ srf_updates,
+diff --git a/drivers/gpu/drm/amd/display/modules/color/color_gamma.c b/drivers/gpu/drm/amd/display/modules/color/color_gamma.c
+index cac09d500fda..e89694eb90b4 100644
+--- a/drivers/gpu/drm/amd/display/modules/color/color_gamma.c
++++ b/drivers/gpu/drm/amd/display/modules/color/color_gamma.c
+@@ -843,7 +843,7 @@ static bool build_regamma(struct pwl_float_data_ex *rgb_regamma,
+ pow_buffer_ptr = -1; // reset back to no optimize
+ ret = true;
+ release:
+- kfree(coeff);
++ kvfree(coeff);
+ return ret;
+ }
+
+diff --git a/drivers/gpu/drm/amd/powerplay/smumgr/ci_smumgr.c b/drivers/gpu/drm/amd/powerplay/smumgr/ci_smumgr.c
+index 868e2d5f6e62..7c3e903230ca 100644
+--- a/drivers/gpu/drm/amd/powerplay/smumgr/ci_smumgr.c
++++ b/drivers/gpu/drm/amd/powerplay/smumgr/ci_smumgr.c
+@@ -239,7 +239,7 @@ static void ci_initialize_power_tune_defaults(struct pp_hwmgr *hwmgr)
+
+ switch (dev_id) {
+ case 0x67BA:
+- case 0x66B1:
++ case 0x67B1:
+ smu_data->power_tune_defaults = &defaults_hawaii_pro;
+ break;
+ case 0x67B8:
+diff --git a/drivers/gpu/drm/ast/ast_mode.c b/drivers/gpu/drm/ast/ast_mode.c
+index 7a9f20a2fd30..e7ba0b6f46d8 100644
+--- a/drivers/gpu/drm/ast/ast_mode.c
++++ b/drivers/gpu/drm/ast/ast_mode.c
+@@ -226,6 +226,7 @@ static void ast_set_vbios_color_reg(struct ast_private *ast,
+ case 3:
+ case 4:
+ color_index = TrueCModeIndex;
++ break;
+ default:
+ return;
+ }
+@@ -801,6 +802,9 @@ static int ast_crtc_helper_atomic_check(struct drm_crtc *crtc,
+ return -EINVAL;
+ }
+
++ if (!state->enable)
++ return 0; /* no mode checks if CRTC is being disabled */
++
+ ast_state = to_ast_crtc_state(state);
+
+ format = ast_state->format;
+diff --git a/drivers/gpu/drm/drm_connector.c b/drivers/gpu/drm/drm_connector.c
+index 644f0ad10671..ac9fd96c4c66 100644
+--- a/drivers/gpu/drm/drm_connector.c
++++ b/drivers/gpu/drm/drm_connector.c
+@@ -27,6 +27,7 @@
+ #include <drm/drm_print.h>
+ #include <drm/drm_drv.h>
+ #include <drm/drm_file.h>
++#include <drm/drm_sysfs.h>
+
+ #include <linux/uaccess.h>
+
+@@ -523,6 +524,10 @@ int drm_connector_register(struct drm_connector *connector)
+ drm_mode_object_register(connector->dev, &connector->base);
+
+ connector->registration_state = DRM_CONNECTOR_REGISTERED;
++
++ /* Let userspace know we have a new connector */
++ drm_sysfs_hotplug_event(connector->dev);
++
+ goto unlock;
+
+ err_debugfs:
+diff --git a/drivers/gpu/drm/drm_dp_mst_topology.c b/drivers/gpu/drm/drm_dp_mst_topology.c
+index 9d89ebf3a749..abb1f358ec6d 100644
+--- a/drivers/gpu/drm/drm_dp_mst_topology.c
++++ b/drivers/gpu/drm/drm_dp_mst_topology.c
+@@ -27,6 +27,7 @@
+ #include <linux/kernel.h>
+ #include <linux/sched.h>
+ #include <linux/seq_file.h>
++#include <linux/iopoll.h>
+
+ #if IS_ENABLED(CONFIG_DRM_DEBUG_DP_MST_TOPOLOGY_REFS)
+ #include <linux/stacktrace.h>
+@@ -4448,6 +4449,17 @@ fail:
+ return ret;
+ }
+
++static int do_get_act_status(struct drm_dp_aux *aux)
++{
++ int ret;
++ u8 status;
++
++ ret = drm_dp_dpcd_readb(aux, DP_PAYLOAD_TABLE_UPDATE_STATUS, &status);
++ if (ret < 0)
++ return ret;
++
++ return status;
++}
+
+ /**
+ * drm_dp_check_act_status() - Check ACT handled status.
+@@ -4457,33 +4469,29 @@ fail:
+ */
+ int drm_dp_check_act_status(struct drm_dp_mst_topology_mgr *mgr)
+ {
+- u8 status;
+- int ret;
+- int count = 0;
+-
+- do {
+- ret = drm_dp_dpcd_readb(mgr->aux, DP_PAYLOAD_TABLE_UPDATE_STATUS, &status);
+-
+- if (ret < 0) {
+- DRM_DEBUG_KMS("failed to read payload table status %d\n", ret);
+- goto fail;
+- }
+-
+- if (status & DP_PAYLOAD_ACT_HANDLED)
+- break;
+- count++;
+- udelay(100);
+-
+- } while (count < 30);
+-
+- if (!(status & DP_PAYLOAD_ACT_HANDLED)) {
+- DRM_DEBUG_KMS("failed to get ACT bit %d after %d retries\n", status, count);
+- ret = -EINVAL;
+- goto fail;
++ /*
++ * There doesn't seem to be any recommended retry count or timeout in
++ * the MST specification. Since some hubs have been observed to take
++ * over 1 second to update their payload allocations under certain
++ * conditions, we use a rather large timeout value.
++ */
++ const int timeout_ms = 3000;
++ int ret, status;
++
++ ret = readx_poll_timeout(do_get_act_status, mgr->aux, status,
++ status & DP_PAYLOAD_ACT_HANDLED || status < 0,
++ 200, timeout_ms * USEC_PER_MSEC);
++ if (ret < 0 && status >= 0) {
++ DRM_DEBUG_KMS("Failed to get ACT after %dms, last status: %02x\n",
++ timeout_ms, status);
++ return -EINVAL;
++ } else if (status < 0) {
++ DRM_DEBUG_KMS("Failed to read payload table status: %d\n",
++ status);
++ return status;
+ }
++
+ return 0;
+-fail:
+- return ret;
+ }
+ EXPORT_SYMBOL(drm_dp_check_act_status);
+
+diff --git a/drivers/gpu/drm/drm_encoder_slave.c b/drivers/gpu/drm/drm_encoder_slave.c
+index cf804389f5ec..d50a7884e69e 100644
+--- a/drivers/gpu/drm/drm_encoder_slave.c
++++ b/drivers/gpu/drm/drm_encoder_slave.c
+@@ -84,7 +84,7 @@ int drm_i2c_encoder_init(struct drm_device *dev,
+
+ err = encoder_drv->encoder_init(client, dev, encoder);
+ if (err)
+- goto fail_unregister;
++ goto fail_module_put;
+
+ if (info->platform_data)
+ encoder->slave_funcs->set_config(&encoder->base,
+@@ -92,9 +92,10 @@ int drm_i2c_encoder_init(struct drm_device *dev,
+
+ return 0;
+
++fail_module_put:
++ module_put(module);
+ fail_unregister:
+ i2c_unregister_device(client);
+- module_put(module);
+ fail:
+ return err;
+ }
+diff --git a/drivers/gpu/drm/drm_sysfs.c b/drivers/gpu/drm/drm_sysfs.c
+index 939f0032aab1..f0336c804639 100644
+--- a/drivers/gpu/drm/drm_sysfs.c
++++ b/drivers/gpu/drm/drm_sysfs.c
+@@ -291,9 +291,6 @@ int drm_sysfs_connector_add(struct drm_connector *connector)
+ return PTR_ERR(connector->kdev);
+ }
+
+- /* Let userspace know we have a new connector */
+- drm_sysfs_hotplug_event(dev);
+-
+ if (connector->ddc)
+ return sysfs_create_link(&connector->kdev->kobj,
+ &connector->ddc->dev.kobj, "ddc");
+diff --git a/drivers/gpu/drm/i915/display/intel_ddi.c b/drivers/gpu/drm/i915/display/intel_ddi.c
+index 52db7852827b..647412da733e 100644
+--- a/drivers/gpu/drm/i915/display/intel_ddi.c
++++ b/drivers/gpu/drm/i915/display/intel_ddi.c
+@@ -2866,7 +2866,7 @@ icl_program_mg_dp_mode(struct intel_digital_port *intel_dig_port,
+ ln1 = intel_de_read(dev_priv, MG_DP_MODE(1, tc_port));
+ }
+
+- ln0 &= ~(MG_DP_MODE_CFG_DP_X1_MODE | MG_DP_MODE_CFG_DP_X1_MODE);
++ ln0 &= ~(MG_DP_MODE_CFG_DP_X1_MODE | MG_DP_MODE_CFG_DP_X2_MODE);
+ ln1 &= ~(MG_DP_MODE_CFG_DP_X1_MODE | MG_DP_MODE_CFG_DP_X2_MODE);
+
+ /* DPPATC */
+diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
+index a2fafd4499f2..5e228d202e4d 100644
+--- a/drivers/gpu/drm/i915/display/intel_dp.c
++++ b/drivers/gpu/drm/i915/display/intel_dp.c
+@@ -1343,8 +1343,7 @@ intel_dp_aux_xfer(struct intel_dp *intel_dp,
+ bool is_tc_port = intel_phy_is_tc(i915, phy);
+ i915_reg_t ch_ctl, ch_data[5];
+ u32 aux_clock_divider;
+- enum intel_display_power_domain aux_domain =
+- intel_aux_power_domain(intel_dig_port);
++ enum intel_display_power_domain aux_domain;
+ intel_wakeref_t aux_wakeref;
+ intel_wakeref_t pps_wakeref;
+ int i, ret, recv_bytes;
+@@ -1359,6 +1358,8 @@ intel_dp_aux_xfer(struct intel_dp *intel_dp,
+ if (is_tc_port)
+ intel_tc_port_lock(intel_dig_port);
+
++ aux_domain = intel_aux_power_domain(intel_dig_port);
++
+ aux_wakeref = intel_display_power_get(i915, aux_domain);
+ pps_wakeref = pps_lock(intel_dp);
+
+diff --git a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
+index 5d5d7eef3f43..7aff3514d97a 100644
+--- a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
++++ b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
+@@ -39,7 +39,6 @@ static int shmem_get_pages(struct drm_i915_gem_object *obj)
+ unsigned long last_pfn = 0; /* suppress gcc warning */
+ unsigned int max_segment = i915_sg_segment_size();
+ unsigned int sg_page_sizes;
+- struct pagevec pvec;
+ gfp_t noreclaim;
+ int ret;
+
+@@ -192,13 +191,17 @@ err_sg:
+ sg_mark_end(sg);
+ err_pages:
+ mapping_clear_unevictable(mapping);
+- pagevec_init(&pvec);
+- for_each_sgt_page(page, sgt_iter, st) {
+- if (!pagevec_add(&pvec, page))
++ if (sg != st->sgl) {
++ struct pagevec pvec;
++
++ pagevec_init(&pvec);
++ for_each_sgt_page(page, sgt_iter, st) {
++ if (!pagevec_add(&pvec, page))
++ check_release_pagevec(&pvec);
++ }
++ if (pagevec_count(&pvec))
+ check_release_pagevec(&pvec);
+ }
+- if (pagevec_count(&pvec))
+- check_release_pagevec(&pvec);
+ sg_free_table(st);
+ kfree(st);
+
+diff --git a/drivers/gpu/drm/i915/gt/intel_engine_cs.c b/drivers/gpu/drm/i915/gt/intel_engine_cs.c
+index 883a9b7fe88d..55b9165e7533 100644
+--- a/drivers/gpu/drm/i915/gt/intel_engine_cs.c
++++ b/drivers/gpu/drm/i915/gt/intel_engine_cs.c
+@@ -639,7 +639,7 @@ static int engine_setup_common(struct intel_engine_cs *engine)
+ struct measure_breadcrumb {
+ struct i915_request rq;
+ struct intel_ring ring;
+- u32 cs[1024];
++ u32 cs[2048];
+ };
+
+ static int measure_breadcrumb_dw(struct intel_context *ce)
+@@ -661,6 +661,8 @@ static int measure_breadcrumb_dw(struct intel_context *ce)
+
+ frame->ring.vaddr = frame->cs;
+ frame->ring.size = sizeof(frame->cs);
++ frame->ring.wrap =
++ BITS_PER_TYPE(frame->ring.size) - ilog2(frame->ring.size);
+ frame->ring.effective_size = frame->ring.size;
+ intel_ring_update_space(&frame->ring);
+ frame->rq.ring = &frame->ring;
+diff --git a/drivers/gpu/drm/i915/gt/intel_lrc.c b/drivers/gpu/drm/i915/gt/intel_lrc.c
+index 2dfaddb8811e..ba82193b4e31 100644
+--- a/drivers/gpu/drm/i915/gt/intel_lrc.c
++++ b/drivers/gpu/drm/i915/gt/intel_lrc.c
+@@ -972,6 +972,13 @@ __unwind_incomplete_requests(struct intel_engine_cs *engine)
+ list_move(&rq->sched.link, pl);
+ set_bit(I915_FENCE_FLAG_PQUEUE, &rq->fence.flags);
+
++ /* Check in case we rollback so far we wrap [size/2] */
++ if (intel_ring_direction(rq->ring,
++ intel_ring_wrap(rq->ring,
++ rq->tail),
++ rq->ring->tail) > 0)
++ rq->context->lrc.desc |= CTX_DESC_FORCE_RESTORE;
++
+ active = rq;
+ } else {
+ struct intel_engine_cs *owner = rq->context->engine;
+@@ -1383,8 +1390,9 @@ static u64 execlists_update_context(struct i915_request *rq)
+ * HW has a tendency to ignore us rewinding the TAIL to the end of
+ * an earlier request.
+ */
++ GEM_BUG_ON(ce->lrc_reg_state[CTX_RING_TAIL] != rq->ring->tail);
++ prev = rq->ring->tail;
+ tail = intel_ring_set_tail(rq->ring, rq->tail);
+- prev = ce->lrc_reg_state[CTX_RING_TAIL];
+ if (unlikely(intel_ring_direction(rq->ring, tail, prev) <= 0))
+ desc |= CTX_DESC_FORCE_RESTORE;
+ ce->lrc_reg_state[CTX_RING_TAIL] = tail;
+@@ -4213,6 +4221,14 @@ static int gen12_emit_flush_render(struct i915_request *request,
+ return 0;
+ }
+
++static void assert_request_valid(struct i915_request *rq)
++{
++ struct intel_ring *ring __maybe_unused = rq->ring;
++
++ /* Can we unwind this request without appearing to go forwards? */
++ GEM_BUG_ON(intel_ring_direction(ring, rq->wa_tail, rq->head) <= 0);
++}
++
+ /*
+ * Reserve space for 2 NOOPs at the end of each request to be
+ * used as a workaround for not being allowed to do lite
+@@ -4225,6 +4241,9 @@ static u32 *gen8_emit_wa_tail(struct i915_request *request, u32 *cs)
+ *cs++ = MI_NOOP;
+ request->wa_tail = intel_ring_offset(request, cs);
+
++ /* Check that entire request is less than half the ring */
++ assert_request_valid(request);
++
+ return cs;
+ }
+
+diff --git a/drivers/gpu/drm/i915/gt/intel_ring.c b/drivers/gpu/drm/i915/gt/intel_ring.c
+index 8cda1b7e17ba..bdb324167ef3 100644
+--- a/drivers/gpu/drm/i915/gt/intel_ring.c
++++ b/drivers/gpu/drm/i915/gt/intel_ring.c
+@@ -315,3 +315,7 @@ int intel_ring_cacheline_align(struct i915_request *rq)
+ GEM_BUG_ON(rq->ring->emit & (CACHELINE_BYTES - 1));
+ return 0;
+ }
++
++#if IS_ENABLED(CONFIG_DRM_I915_SELFTEST)
++#include "selftest_ring.c"
++#endif
+diff --git a/drivers/gpu/drm/i915/gt/intel_workarounds.c b/drivers/gpu/drm/i915/gt/intel_workarounds.c
+index 5176ad1a3976..bb100872cd07 100644
+--- a/drivers/gpu/drm/i915/gt/intel_workarounds.c
++++ b/drivers/gpu/drm/i915/gt/intel_workarounds.c
+@@ -178,6 +178,12 @@ wa_write_or(struct i915_wa_list *wal, i915_reg_t reg, u32 set)
+ wa_write_masked_or(wal, reg, set, set);
+ }
+
++static void
++wa_write_clr(struct i915_wa_list *wal, i915_reg_t reg, u32 clr)
++{
++ wa_write_masked_or(wal, reg, clr, 0);
++}
++
+ static void
+ wa_masked_en(struct i915_wa_list *wal, i915_reg_t reg, u32 val)
+ {
+@@ -697,6 +703,227 @@ int intel_engine_emit_ctx_wa(struct i915_request *rq)
+ return 0;
+ }
+
++static void
++gen4_gt_workarounds_init(struct drm_i915_private *i915,
++ struct i915_wa_list *wal)
++{
++ /* WaDisable_RenderCache_OperationalFlush:gen4,ilk */
++ wa_masked_dis(wal, CACHE_MODE_0, RC_OP_FLUSH_ENABLE);
++}
++
++static void
++g4x_gt_workarounds_init(struct drm_i915_private *i915, struct i915_wa_list *wal)
++{
++ gen4_gt_workarounds_init(i915, wal);
++
++ /* WaDisableRenderCachePipelinedFlush:g4x,ilk */
++ wa_masked_en(wal, CACHE_MODE_0, CM0_PIPELINED_RENDER_FLUSH_DISABLE);
++}
++
++static void
++ilk_gt_workarounds_init(struct drm_i915_private *i915, struct i915_wa_list *wal)
++{
++ g4x_gt_workarounds_init(i915, wal);
++
++ wa_masked_en(wal, _3D_CHICKEN2, _3D_CHICKEN2_WM_READ_PIPELINED);
++}
++
++static void
++snb_gt_workarounds_init(struct drm_i915_private *i915, struct i915_wa_list *wal)
++{
++ /* WaDisableHiZPlanesWhenMSAAEnabled:snb */
++ wa_masked_en(wal,
++ _3D_CHICKEN,
++ _3D_CHICKEN_HIZ_PLANE_DISABLE_MSAA_4X_SNB);
++
++ /* WaDisable_RenderCache_OperationalFlush:snb */
++ wa_masked_dis(wal, CACHE_MODE_0, RC_OP_FLUSH_ENABLE);
++
++ /*
++ * BSpec recommends 8x4 when MSAA is used,
++ * however in practice 16x4 seems fastest.
++ *
++ * Note that PS/WM thread counts depend on the WIZ hashing
++ * disable bit, which we don't touch here, but it's good
++ * to keep in mind (see 3DSTATE_PS and 3DSTATE_WM).
++ */
++ wa_add(wal,
++ GEN6_GT_MODE, 0,
++ _MASKED_FIELD(GEN6_WIZ_HASHING_MASK, GEN6_WIZ_HASHING_16x4),
++ GEN6_WIZ_HASHING_16x4);
++
++ wa_masked_dis(wal, CACHE_MODE_0, CM0_STC_EVICT_DISABLE_LRA_SNB);
++
++ wa_masked_en(wal,
++ _3D_CHICKEN3,
++ /* WaStripsFansDisableFastClipPerformanceFix:snb */
++ _3D_CHICKEN3_SF_DISABLE_FASTCLIP_CULL |
++ /*
++ * Bspec says:
++ * "This bit must be set if 3DSTATE_CLIP clip mode is set
++ * to normal and 3DSTATE_SF number of SF output attributes
++ * is more than 16."
++ */
++ _3D_CHICKEN3_SF_DISABLE_PIPELINED_ATTR_FETCH);
++}
++
++static void
++ivb_gt_workarounds_init(struct drm_i915_private *i915, struct i915_wa_list *wal)
++{
++ /* WaDisableEarlyCull:ivb */
++ wa_masked_en(wal, _3D_CHICKEN3, _3D_CHICKEN_SF_DISABLE_OBJEND_CULL);
++
++ /* WaDisablePSDDualDispatchEnable:ivb */
++ if (IS_IVB_GT1(i915))
++ wa_masked_en(wal,
++ GEN7_HALF_SLICE_CHICKEN1,
++ GEN7_PSD_SINGLE_PORT_DISPATCH_ENABLE);
++
++ /* WaDisable_RenderCache_OperationalFlush:ivb */
++ wa_masked_dis(wal, CACHE_MODE_0_GEN7, RC_OP_FLUSH_ENABLE);
++
++ /* Apply the WaDisableRHWOOptimizationForRenderHang:ivb workaround. */
++ wa_masked_dis(wal,
++ GEN7_COMMON_SLICE_CHICKEN1,
++ GEN7_CSC1_RHWO_OPT_DISABLE_IN_RCC);
++
++ /* WaApplyL3ControlAndL3ChickenMode:ivb */
++ wa_write(wal, GEN7_L3CNTLREG1, GEN7_WA_FOR_GEN7_L3_CONTROL);
++ wa_write(wal, GEN7_L3_CHICKEN_MODE_REGISTER, GEN7_WA_L3_CHICKEN_MODE);
++
++ /* WaForceL3Serialization:ivb */
++ wa_write_clr(wal, GEN7_L3SQCREG4, L3SQ_URB_READ_CAM_MATCH_DISABLE);
++
++ /*
++ * WaVSThreadDispatchOverride:ivb,vlv
++ *
++ * This actually overrides the dispatch
++ * mode for all thread types.
++ */
++ wa_write_masked_or(wal, GEN7_FF_THREAD_MODE,
++ GEN7_FF_SCHED_MASK,
++ GEN7_FF_TS_SCHED_HW |
++ GEN7_FF_VS_SCHED_HW |
++ GEN7_FF_DS_SCHED_HW);
++
++ if (0) { /* causes HiZ corruption on ivb:gt1 */
++ /* enable HiZ Raw Stall Optimization */
++ wa_masked_dis(wal, CACHE_MODE_0_GEN7, HIZ_RAW_STALL_OPT_DISABLE);
++ }
++
++ /* WaDisable4x2SubspanOptimization:ivb */
++ wa_masked_en(wal, CACHE_MODE_1, PIXEL_SUBSPAN_COLLECT_OPT_DISABLE);
++
++ /*
++ * BSpec recommends 8x4 when MSAA is used,
++ * however in practice 16x4 seems fastest.
++ *
++ * Note that PS/WM thread counts depend on the WIZ hashing
++ * disable bit, which we don't touch here, but it's good
++ * to keep in mind (see 3DSTATE_PS and 3DSTATE_WM).
++ */
++ wa_add(wal, GEN7_GT_MODE, 0,
++ _MASKED_FIELD(GEN6_WIZ_HASHING_MASK, GEN6_WIZ_HASHING_16x4),
++ GEN6_WIZ_HASHING_16x4);
++}
++
++static void
++vlv_gt_workarounds_init(struct drm_i915_private *i915, struct i915_wa_list *wal)
++{
++ /* WaDisableEarlyCull:vlv */
++ wa_masked_en(wal, _3D_CHICKEN3, _3D_CHICKEN_SF_DISABLE_OBJEND_CULL);
++
++ /* WaPsdDispatchEnable:vlv */
++ /* WaDisablePSDDualDispatchEnable:vlv */
++ wa_masked_en(wal,
++ GEN7_HALF_SLICE_CHICKEN1,
++ GEN7_MAX_PS_THREAD_DEP |
++ GEN7_PSD_SINGLE_PORT_DISPATCH_ENABLE);
++
++ /* WaDisable_RenderCache_OperationalFlush:vlv */
++ wa_masked_dis(wal, CACHE_MODE_0_GEN7, RC_OP_FLUSH_ENABLE);
++
++ /* WaForceL3Serialization:vlv */
++ wa_write_clr(wal, GEN7_L3SQCREG4, L3SQ_URB_READ_CAM_MATCH_DISABLE);
++
++ /*
++ * WaVSThreadDispatchOverride:ivb,vlv
++ *
++ * This actually overrides the dispatch
++ * mode for all thread types.
++ */
++ wa_write_masked_or(wal,
++ GEN7_FF_THREAD_MODE,
++ GEN7_FF_SCHED_MASK,
++ GEN7_FF_TS_SCHED_HW |
++ GEN7_FF_VS_SCHED_HW |
++ GEN7_FF_DS_SCHED_HW);
++
++ /*
++ * BSpec says this must be set, even though
++ * WaDisable4x2SubspanOptimization isn't listed for VLV.
++ */
++ wa_masked_en(wal, CACHE_MODE_1, PIXEL_SUBSPAN_COLLECT_OPT_DISABLE);
++
++ /*
++ * BSpec recommends 8x4 when MSAA is used,
++ * however in practice 16x4 seems fastest.
++ *
++ * Note that PS/WM thread counts depend on the WIZ hashing
++ * disable bit, which we don't touch here, but it's good
++ * to keep in mind (see 3DSTATE_PS and 3DSTATE_WM).
++ */
++ wa_add(wal, GEN7_GT_MODE, 0,
++ _MASKED_FIELD(GEN6_WIZ_HASHING_MASK, GEN6_WIZ_HASHING_16x4),
++ GEN6_WIZ_HASHING_16x4);
++
++ /*
++ * WaIncreaseL3CreditsForVLVB0:vlv
++ * This is the hardware default actually.
++ */
++ wa_write(wal, GEN7_L3SQCREG1, VLV_B0_WA_L3SQCREG1_VALUE);
++}
++
++static void
++hsw_gt_workarounds_init(struct drm_i915_private *i915, struct i915_wa_list *wal)
++{
++ /* L3 caching of data atomics doesn't work -- disable it. */
++ wa_write(wal, HSW_SCRATCH1, HSW_SCRATCH1_L3_DATA_ATOMICS_DISABLE);
++
++ wa_add(wal,
++ HSW_ROW_CHICKEN3, 0,
++ _MASKED_BIT_ENABLE(HSW_ROW_CHICKEN3_L3_GLOBAL_ATOMICS_DISABLE),
++ 0 /* XXX does this reg exist? */);
++
++ /* WaVSRefCountFullforceMissDisable:hsw */
++ wa_write_clr(wal, GEN7_FF_THREAD_MODE, GEN7_FF_VS_REF_CNT_FFME);
++
++ wa_masked_dis(wal,
++ CACHE_MODE_0_GEN7,
++ /* WaDisable_RenderCache_OperationalFlush:hsw */
++ RC_OP_FLUSH_ENABLE |
++ /* enable HiZ Raw Stall Optimization */
++ HIZ_RAW_STALL_OPT_DISABLE);
++
++ /* WaDisable4x2SubspanOptimization:hsw */
++ wa_masked_en(wal, CACHE_MODE_1, PIXEL_SUBSPAN_COLLECT_OPT_DISABLE);
++
++ /*
++ * BSpec recommends 8x4 when MSAA is used,
++ * however in practice 16x4 seems fastest.
++ *
++ * Note that PS/WM thread counts depend on the WIZ hashing
++ * disable bit, which we don't touch here, but it's good
++ * to keep in mind (see 3DSTATE_PS and 3DSTATE_WM).
++ */
++ wa_add(wal, GEN7_GT_MODE, 0,
++ _MASKED_FIELD(GEN6_WIZ_HASHING_MASK, GEN6_WIZ_HASHING_16x4),
++ GEN6_WIZ_HASHING_16x4);
++
++ /* WaSampleCChickenBitEnable:hsw */
++ wa_masked_en(wal, HALF_SLICE_CHICKEN3, HSW_SAMPLE_C_PERFORMANCE);
++}
++
+ static void
+ gen9_gt_workarounds_init(struct drm_i915_private *i915, struct i915_wa_list *wal)
+ {
+@@ -974,6 +1201,20 @@ gt_init_workarounds(struct drm_i915_private *i915, struct i915_wa_list *wal)
+ bxt_gt_workarounds_init(i915, wal);
+ else if (IS_SKYLAKE(i915))
+ skl_gt_workarounds_init(i915, wal);
++ else if (IS_HASWELL(i915))
++ hsw_gt_workarounds_init(i915, wal);
++ else if (IS_VALLEYVIEW(i915))
++ vlv_gt_workarounds_init(i915, wal);
++ else if (IS_IVYBRIDGE(i915))
++ ivb_gt_workarounds_init(i915, wal);
++ else if (IS_GEN(i915, 6))
++ snb_gt_workarounds_init(i915, wal);
++ else if (IS_GEN(i915, 5))
++ ilk_gt_workarounds_init(i915, wal);
++ else if (IS_G4X(i915))
++ g4x_gt_workarounds_init(i915, wal);
++ else if (IS_GEN(i915, 4))
++ gen4_gt_workarounds_init(i915, wal);
+ else if (INTEL_GEN(i915) <= 8)
+ return;
+ else
+@@ -1379,12 +1620,6 @@ rcs_engine_wa_init(struct intel_engine_cs *engine, struct i915_wa_list *wal)
+ GEN7_FF_THREAD_MODE,
+ GEN12_FF_TESSELATION_DOP_GATE_DISABLE);
+
+- /*
+- * Wa_1409085225:tgl
+- * Wa_14010229206:tgl
+- */
+- wa_masked_en(wal, GEN9_ROW_CHICKEN4, GEN12_DISABLE_TDL_PUSH);
+-
+ /* Wa_1408615072:tgl */
+ wa_write_or(wal, UNSLICE_UNIT_LEVEL_CLKGATE2,
+ VSUNIT_CLKGATE_DIS_TGL);
+@@ -1402,6 +1637,12 @@ rcs_engine_wa_init(struct intel_engine_cs *engine, struct i915_wa_list *wal)
+ wa_masked_en(wal,
+ GEN9_CS_DEBUG_MODE1,
+ FF_DOP_CLOCK_GATE_DISABLE);
++
++ /*
++ * Wa_1409085225:tgl
++ * Wa_14010229206:tgl
++ */
++ wa_masked_en(wal, GEN9_ROW_CHICKEN4, GEN12_DISABLE_TDL_PUSH);
+ }
+
+ if (IS_GEN(i915, 11)) {
+diff --git a/drivers/gpu/drm/i915/gt/selftest_mocs.c b/drivers/gpu/drm/i915/gt/selftest_mocs.c
+index 8831ffee2061..63f87d8608c3 100644
+--- a/drivers/gpu/drm/i915/gt/selftest_mocs.c
++++ b/drivers/gpu/drm/i915/gt/selftest_mocs.c
+@@ -18,6 +18,20 @@ struct live_mocs {
+ void *vaddr;
+ };
+
++static struct intel_context *mocs_context_create(struct intel_engine_cs *engine)
++{
++ struct intel_context *ce;
++
++ ce = intel_context_create(engine);
++ if (IS_ERR(ce))
++ return ce;
++
++ /* We build large requests to read the registers from the ring */
++ ce->ring = __intel_context_ring_size(SZ_16K);
++
++ return ce;
++}
++
+ static int request_add_sync(struct i915_request *rq, int err)
+ {
+ i915_request_get(rq);
+@@ -301,7 +315,7 @@ static int live_mocs_clean(void *arg)
+ for_each_engine(engine, gt, id) {
+ struct intel_context *ce;
+
+- ce = intel_context_create(engine);
++ ce = mocs_context_create(engine);
+ if (IS_ERR(ce)) {
+ err = PTR_ERR(ce);
+ break;
+@@ -395,7 +409,7 @@ static int live_mocs_reset(void *arg)
+ for_each_engine(engine, gt, id) {
+ struct intel_context *ce;
+
+- ce = intel_context_create(engine);
++ ce = mocs_context_create(engine);
+ if (IS_ERR(ce)) {
+ err = PTR_ERR(ce);
+ break;
+diff --git a/drivers/gpu/drm/i915/gt/selftest_ring.c b/drivers/gpu/drm/i915/gt/selftest_ring.c
+new file mode 100644
+index 000000000000..2a8c534dc125
+--- /dev/null
++++ b/drivers/gpu/drm/i915/gt/selftest_ring.c
+@@ -0,0 +1,110 @@
++// SPDX-License-Identifier: GPL-2.0
++/*
++ * Copyright © 2020 Intel Corporation
++ */
++
++static struct intel_ring *mock_ring(unsigned long sz)
++{
++ struct intel_ring *ring;
++
++ ring = kzalloc(sizeof(*ring) + sz, GFP_KERNEL);
++ if (!ring)
++ return NULL;
++
++ kref_init(&ring->ref);
++ ring->size = sz;
++ ring->wrap = BITS_PER_TYPE(ring->size) - ilog2(sz);
++ ring->effective_size = sz;
++ ring->vaddr = (void *)(ring + 1);
++ atomic_set(&ring->pin_count, 1);
++
++ intel_ring_update_space(ring);
++
++ return ring;
++}
++
++static void mock_ring_free(struct intel_ring *ring)
++{
++ kfree(ring);
++}
++
++static int check_ring_direction(struct intel_ring *ring,
++ u32 next, u32 prev,
++ int expected)
++{
++ int result;
++
++ result = intel_ring_direction(ring, next, prev);
++ if (result < 0)
++ result = -1;
++ else if (result > 0)
++ result = 1;
++
++ if (result != expected) {
++ pr_err("intel_ring_direction(%u, %u):%d != %d\n",
++ next, prev, result, expected);
++ return -EINVAL;
++ }
++
++ return 0;
++}
++
++static int check_ring_step(struct intel_ring *ring, u32 x, u32 step)
++{
++ u32 prev = x, next = intel_ring_wrap(ring, x + step);
++ int err = 0;
++
++ err |= check_ring_direction(ring, next, next, 0);
++ err |= check_ring_direction(ring, prev, prev, 0);
++ err |= check_ring_direction(ring, next, prev, 1);
++ err |= check_ring_direction(ring, prev, next, -1);
++
++ return err;
++}
++
++static int check_ring_offset(struct intel_ring *ring, u32 x, u32 step)
++{
++ int err = 0;
++
++ err |= check_ring_step(ring, x, step);
++ err |= check_ring_step(ring, intel_ring_wrap(ring, x + 1), step);
++ err |= check_ring_step(ring, intel_ring_wrap(ring, x - 1), step);
++
++ return err;
++}
++
++static int igt_ring_direction(void *dummy)
++{
++ struct intel_ring *ring;
++ unsigned int half = 2048;
++ int step, err = 0;
++
++ ring = mock_ring(2 * half);
++ if (!ring)
++ return -ENOMEM;
++
++ GEM_BUG_ON(ring->size != 2 * half);
++
++ /* Precision of wrap detection is limited to ring->size / 2 */
++ for (step = 1; step < half; step <<= 1) {
++ err |= check_ring_offset(ring, 0, step);
++ err |= check_ring_offset(ring, half, step);
++ }
++ err |= check_ring_step(ring, 0, half - 64);
++
++ /* And check unwrapped handling for good measure */
++ err |= check_ring_offset(ring, 0, 2 * half + 64);
++ err |= check_ring_offset(ring, 3 * half, 1);
++
++ mock_ring_free(ring);
++ return err;
++}
++
++int intel_ring_mock_selftests(void)
++{
++ static const struct i915_subtest tests[] = {
++ SUBTEST(igt_ring_direction),
++ };
++
++ return i915_subtests(tests, NULL);
++}
+diff --git a/drivers/gpu/drm/i915/i915_cmd_parser.c b/drivers/gpu/drm/i915/i915_cmd_parser.c
+index 189b573d02be..372354d33f55 100644
+--- a/drivers/gpu/drm/i915/i915_cmd_parser.c
++++ b/drivers/gpu/drm/i915/i915_cmd_parser.c
+@@ -572,6 +572,9 @@ struct drm_i915_reg_descriptor {
+ #define REG32(_reg, ...) \
+ { .addr = (_reg), __VA_ARGS__ }
+
++#define REG32_IDX(_reg, idx) \
++ { .addr = _reg(idx) }
++
+ /*
+ * Convenience macro for adding 64-bit registers.
+ *
+@@ -669,6 +672,7 @@ static const struct drm_i915_reg_descriptor gen9_blt_regs[] = {
+ REG64_IDX(RING_TIMESTAMP, BSD_RING_BASE),
+ REG32(BCS_SWCTRL),
+ REG64_IDX(RING_TIMESTAMP, BLT_RING_BASE),
++ REG32_IDX(RING_CTX_TIMESTAMP, BLT_RING_BASE),
+ REG64_IDX(BCS_GPR, 0),
+ REG64_IDX(BCS_GPR, 1),
+ REG64_IDX(BCS_GPR, 2),
+diff --git a/drivers/gpu/drm/i915/i915_irq.c b/drivers/gpu/drm/i915/i915_irq.c
+index 8a2b83807ffc..bd042725a678 100644
+--- a/drivers/gpu/drm/i915/i915_irq.c
++++ b/drivers/gpu/drm/i915/i915_irq.c
+@@ -3092,6 +3092,7 @@ static void gen11_hpd_irq_setup(struct drm_i915_private *dev_priv)
+
+ val = I915_READ(GEN11_DE_HPD_IMR);
+ val &= ~hotplug_irqs;
++ val |= ~enabled_irqs & hotplug_irqs;
+ I915_WRITE(GEN11_DE_HPD_IMR, val);
+ POSTING_READ(GEN11_DE_HPD_IMR);
+
+diff --git a/drivers/gpu/drm/i915/i915_reg.h b/drivers/gpu/drm/i915/i915_reg.h
+index 6e12000c4b6b..a41be9357d15 100644
+--- a/drivers/gpu/drm/i915/i915_reg.h
++++ b/drivers/gpu/drm/i915/i915_reg.h
+@@ -7819,7 +7819,7 @@ enum {
+
+ /* GEN7 chicken */
+ #define GEN7_COMMON_SLICE_CHICKEN1 _MMIO(0x7010)
+- #define GEN7_CSC1_RHWO_OPT_DISABLE_IN_RCC ((1 << 10) | (1 << 26))
++ #define GEN7_CSC1_RHWO_OPT_DISABLE_IN_RCC (1 << 10)
+ #define GEN9_RHWO_OPTIMIZATION_DISABLE (1 << 14)
+
+ #define COMMON_SLICE_CHICKEN2 _MMIO(0x7014)
+diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
+index a52986a9e7a6..20c1683fda24 100644
+--- a/drivers/gpu/drm/i915/intel_pm.c
++++ b/drivers/gpu/drm/i915/intel_pm.c
+@@ -6593,16 +6593,6 @@ static void ilk_init_clock_gating(struct drm_i915_private *dev_priv)
+ I915_WRITE(ILK_DISPLAY_CHICKEN2,
+ I915_READ(ILK_DISPLAY_CHICKEN2) |
+ ILK_ELPIN_409_SELECT);
+- I915_WRITE(_3D_CHICKEN2,
+- _3D_CHICKEN2_WM_READ_PIPELINED << 16 |
+- _3D_CHICKEN2_WM_READ_PIPELINED);
+-
+- /* WaDisableRenderCachePipelinedFlush:ilk */
+- I915_WRITE(CACHE_MODE_0,
+- _MASKED_BIT_ENABLE(CM0_PIPELINED_RENDER_FLUSH_DISABLE));
+-
+- /* WaDisable_RenderCache_OperationalFlush:ilk */
+- I915_WRITE(CACHE_MODE_0, _MASKED_BIT_DISABLE(RC_OP_FLUSH_ENABLE));
+
+ g4x_disable_trickle_feed(dev_priv);
+
+@@ -6665,27 +6655,6 @@ static void gen6_init_clock_gating(struct drm_i915_private *dev_priv)
+ I915_READ(ILK_DISPLAY_CHICKEN2) |
+ ILK_ELPIN_409_SELECT);
+
+- /* WaDisableHiZPlanesWhenMSAAEnabled:snb */
+- I915_WRITE(_3D_CHICKEN,
+- _MASKED_BIT_ENABLE(_3D_CHICKEN_HIZ_PLANE_DISABLE_MSAA_4X_SNB));
+-
+- /* WaDisable_RenderCache_OperationalFlush:snb */
+- I915_WRITE(CACHE_MODE_0, _MASKED_BIT_DISABLE(RC_OP_FLUSH_ENABLE));
+-
+- /*
+- * BSpec recoomends 8x4 when MSAA is used,
+- * however in practice 16x4 seems fastest.
+- *
+- * Note that PS/WM thread counts depend on the WIZ hashing
+- * disable bit, which we don't touch here, but it's good
+- * to keep in mind (see 3DSTATE_PS and 3DSTATE_WM).
+- */
+- I915_WRITE(GEN6_GT_MODE,
+- _MASKED_FIELD(GEN6_WIZ_HASHING_MASK, GEN6_WIZ_HASHING_16x4));
+-
+- I915_WRITE(CACHE_MODE_0,
+- _MASKED_BIT_DISABLE(CM0_STC_EVICT_DISABLE_LRA_SNB));
+-
+ I915_WRITE(GEN6_UCGCTL1,
+ I915_READ(GEN6_UCGCTL1) |
+ GEN6_BLBUNIT_CLOCK_GATE_DISABLE |
+@@ -6708,18 +6677,6 @@ static void gen6_init_clock_gating(struct drm_i915_private *dev_priv)
+ GEN6_RCPBUNIT_CLOCK_GATE_DISABLE |
+ GEN6_RCCUNIT_CLOCK_GATE_DISABLE);
+
+- /* WaStripsFansDisableFastClipPerformanceFix:snb */
+- I915_WRITE(_3D_CHICKEN3,
+- _MASKED_BIT_ENABLE(_3D_CHICKEN3_SF_DISABLE_FASTCLIP_CULL));
+-
+- /*
+- * Bspec says:
+- * "This bit must be set if 3DSTATE_CLIP clip mode is set to normal and
+- * 3DSTATE_SF number of SF output attributes is more than 16."
+- */
+- I915_WRITE(_3D_CHICKEN3,
+- _MASKED_BIT_ENABLE(_3D_CHICKEN3_SF_DISABLE_PIPELINED_ATTR_FETCH));
+-
+ /*
+ * According to the spec the following bits should be
+ * set in order to enable memory self-refresh and fbc:
+@@ -6749,24 +6706,6 @@ static void gen6_init_clock_gating(struct drm_i915_private *dev_priv)
+ gen6_check_mch_setup(dev_priv);
+ }
+
+-static void gen7_setup_fixed_func_scheduler(struct drm_i915_private *dev_priv)
+-{
+- u32 reg = I915_READ(GEN7_FF_THREAD_MODE);
+-
+- /*
+- * WaVSThreadDispatchOverride:ivb,vlv
+- *
+- * This actually overrides the dispatch
+- * mode for all thread types.
+- */
+- reg &= ~GEN7_FF_SCHED_MASK;
+- reg |= GEN7_FF_TS_SCHED_HW;
+- reg |= GEN7_FF_VS_SCHED_HW;
+- reg |= GEN7_FF_DS_SCHED_HW;
+-
+- I915_WRITE(GEN7_FF_THREAD_MODE, reg);
+-}
+-
+ static void lpt_init_clock_gating(struct drm_i915_private *dev_priv)
+ {
+ /*
+@@ -6992,45 +6931,10 @@ static void bdw_init_clock_gating(struct drm_i915_private *dev_priv)
+
+ static void hsw_init_clock_gating(struct drm_i915_private *dev_priv)
+ {
+- /* L3 caching of data atomics doesn't work -- disable it. */
+- I915_WRITE(HSW_SCRATCH1, HSW_SCRATCH1_L3_DATA_ATOMICS_DISABLE);
+- I915_WRITE(HSW_ROW_CHICKEN3,
+- _MASKED_BIT_ENABLE(HSW_ROW_CHICKEN3_L3_GLOBAL_ATOMICS_DISABLE));
+-
+ /* This is required by WaCatErrorRejectionIssue:hsw */
+ I915_WRITE(GEN7_SQ_CHICKEN_MBCUNIT_CONFIG,
+- I915_READ(GEN7_SQ_CHICKEN_MBCUNIT_CONFIG) |
+- GEN7_SQ_CHICKEN_MBCUNIT_SQINTMOB);
+-
+- /* WaVSRefCountFullforceMissDisable:hsw */
+- I915_WRITE(GEN7_FF_THREAD_MODE,
+- I915_READ(GEN7_FF_THREAD_MODE) & ~GEN7_FF_VS_REF_CNT_FFME);
+-
+- /* WaDisable_RenderCache_OperationalFlush:hsw */
+- I915_WRITE(CACHE_MODE_0_GEN7, _MASKED_BIT_DISABLE(RC_OP_FLUSH_ENABLE));
+-
+- /* enable HiZ Raw Stall Optimization */
+- I915_WRITE(CACHE_MODE_0_GEN7,
+- _MASKED_BIT_DISABLE(HIZ_RAW_STALL_OPT_DISABLE));
+-
+- /* WaDisable4x2SubspanOptimization:hsw */
+- I915_WRITE(CACHE_MODE_1,
+- _MASKED_BIT_ENABLE(PIXEL_SUBSPAN_COLLECT_OPT_DISABLE));
+-
+- /*
+- * BSpec recommends 8x4 when MSAA is used,
+- * however in practice 16x4 seems fastest.
+- *
+- * Note that PS/WM thread counts depend on the WIZ hashing
+- * disable bit, which we don't touch here, but it's good
+- * to keep in mind (see 3DSTATE_PS and 3DSTATE_WM).
+- */
+- I915_WRITE(GEN7_GT_MODE,
+- _MASKED_FIELD(GEN6_WIZ_HASHING_MASK, GEN6_WIZ_HASHING_16x4));
+-
+- /* WaSampleCChickenBitEnable:hsw */
+- I915_WRITE(HALF_SLICE_CHICKEN3,
+- _MASKED_BIT_ENABLE(HSW_SAMPLE_C_PERFORMANCE));
++ I915_READ(GEN7_SQ_CHICKEN_MBCUNIT_CONFIG) |
++ GEN7_SQ_CHICKEN_MBCUNIT_SQINTMOB);
+
+ /* WaSwitchSolVfFArbitrationPriority:hsw */
+ I915_WRITE(GAM_ECOCHK, I915_READ(GAM_ECOCHK) | HSW_ECOCHK_ARB_PRIO_SOL);
+@@ -7044,32 +6948,11 @@ static void ivb_init_clock_gating(struct drm_i915_private *dev_priv)
+
+ I915_WRITE(ILK_DSPCLK_GATE_D, ILK_VRHUNIT_CLOCK_GATE_DISABLE);
+
+- /* WaDisableEarlyCull:ivb */
+- I915_WRITE(_3D_CHICKEN3,
+- _MASKED_BIT_ENABLE(_3D_CHICKEN_SF_DISABLE_OBJEND_CULL));
+-
+ /* WaDisableBackToBackFlipFix:ivb */
+ I915_WRITE(IVB_CHICKEN3,
+ CHICKEN3_DGMG_REQ_OUT_FIX_DISABLE |
+ CHICKEN3_DGMG_DONE_FIX_DISABLE);
+
+- /* WaDisablePSDDualDispatchEnable:ivb */
+- if (IS_IVB_GT1(dev_priv))
+- I915_WRITE(GEN7_HALF_SLICE_CHICKEN1,
+- _MASKED_BIT_ENABLE(GEN7_PSD_SINGLE_PORT_DISPATCH_ENABLE));
+-
+- /* WaDisable_RenderCache_OperationalFlush:ivb */
+- I915_WRITE(CACHE_MODE_0_GEN7, _MASKED_BIT_DISABLE(RC_OP_FLUSH_ENABLE));
+-
+- /* Apply the WaDisableRHWOOptimizationForRenderHang:ivb workaround. */
+- I915_WRITE(GEN7_COMMON_SLICE_CHICKEN1,
+- GEN7_CSC1_RHWO_OPT_DISABLE_IN_RCC);
+-
+- /* WaApplyL3ControlAndL3ChickenMode:ivb */
+- I915_WRITE(GEN7_L3CNTLREG1,
+- GEN7_WA_FOR_GEN7_L3_CONTROL);
+- I915_WRITE(GEN7_L3_CHICKEN_MODE_REGISTER,
+- GEN7_WA_L3_CHICKEN_MODE);
+ if (IS_IVB_GT1(dev_priv))
+ I915_WRITE(GEN7_ROW_CHICKEN2,
+ _MASKED_BIT_ENABLE(DOP_CLOCK_GATING_DISABLE));
+@@ -7081,10 +6964,6 @@ static void ivb_init_clock_gating(struct drm_i915_private *dev_priv)
+ _MASKED_BIT_ENABLE(DOP_CLOCK_GATING_DISABLE));
+ }
+
+- /* WaForceL3Serialization:ivb */
+- I915_WRITE(GEN7_L3SQCREG4, I915_READ(GEN7_L3SQCREG4) &
+- ~L3SQ_URB_READ_CAM_MATCH_DISABLE);
+-
+ /*
+ * According to the spec, bit 13 (RCZUNIT) must be set on IVB.
+ * This implements the WaDisableRCZUnitClockGating:ivb workaround.
+@@ -7099,29 +6978,6 @@ static void ivb_init_clock_gating(struct drm_i915_private *dev_priv)
+
+ g4x_disable_trickle_feed(dev_priv);
+
+- gen7_setup_fixed_func_scheduler(dev_priv);
+-
+- if (0) { /* causes HiZ corruption on ivb:gt1 */
+- /* enable HiZ Raw Stall Optimization */
+- I915_WRITE(CACHE_MODE_0_GEN7,
+- _MASKED_BIT_DISABLE(HIZ_RAW_STALL_OPT_DISABLE));
+- }
+-
+- /* WaDisable4x2SubspanOptimization:ivb */
+- I915_WRITE(CACHE_MODE_1,
+- _MASKED_BIT_ENABLE(PIXEL_SUBSPAN_COLLECT_OPT_DISABLE));
+-
+- /*
+- * BSpec recommends 8x4 when MSAA is used,
+- * however in practice 16x4 seems fastest.
+- *
+- * Note that PS/WM thread counts depend on the WIZ hashing
+- * disable bit, which we don't touch here, but it's good
+- * to keep in mind (see 3DSTATE_PS and 3DSTATE_WM).
+- */
+- I915_WRITE(GEN7_GT_MODE,
+- _MASKED_FIELD(GEN6_WIZ_HASHING_MASK, GEN6_WIZ_HASHING_16x4));
+-
+ snpcr = I915_READ(GEN6_MBCUNIT_SNPCR);
+ snpcr &= ~GEN6_MBC_SNPCR_MASK;
+ snpcr |= GEN6_MBC_SNPCR_MED;
+@@ -7135,28 +6991,11 @@ static void ivb_init_clock_gating(struct drm_i915_private *dev_priv)
+
+ static void vlv_init_clock_gating(struct drm_i915_private *dev_priv)
+ {
+- /* WaDisableEarlyCull:vlv */
+- I915_WRITE(_3D_CHICKEN3,
+- _MASKED_BIT_ENABLE(_3D_CHICKEN_SF_DISABLE_OBJEND_CULL));
+-
+ /* WaDisableBackToBackFlipFix:vlv */
+ I915_WRITE(IVB_CHICKEN3,
+ CHICKEN3_DGMG_REQ_OUT_FIX_DISABLE |
+ CHICKEN3_DGMG_DONE_FIX_DISABLE);
+
+- /* WaPsdDispatchEnable:vlv */
+- /* WaDisablePSDDualDispatchEnable:vlv */
+- I915_WRITE(GEN7_HALF_SLICE_CHICKEN1,
+- _MASKED_BIT_ENABLE(GEN7_MAX_PS_THREAD_DEP |
+- GEN7_PSD_SINGLE_PORT_DISPATCH_ENABLE));
+-
+- /* WaDisable_RenderCache_OperationalFlush:vlv */
+- I915_WRITE(CACHE_MODE_0_GEN7, _MASKED_BIT_DISABLE(RC_OP_FLUSH_ENABLE));
+-
+- /* WaForceL3Serialization:vlv */
+- I915_WRITE(GEN7_L3SQCREG4, I915_READ(GEN7_L3SQCREG4) &
+- ~L3SQ_URB_READ_CAM_MATCH_DISABLE);
+-
+ /* WaDisableDopClockGating:vlv */
+ I915_WRITE(GEN7_ROW_CHICKEN2,
+ _MASKED_BIT_ENABLE(DOP_CLOCK_GATING_DISABLE));
+@@ -7166,8 +7005,6 @@ static void vlv_init_clock_gating(struct drm_i915_private *dev_priv)
+ I915_READ(GEN7_SQ_CHICKEN_MBCUNIT_CONFIG) |
+ GEN7_SQ_CHICKEN_MBCUNIT_SQINTMOB);
+
+- gen7_setup_fixed_func_scheduler(dev_priv);
+-
+ /*
+ * According to the spec, bit 13 (RCZUNIT) must be set on IVB.
+ * This implements the WaDisableRCZUnitClockGating:vlv workaround.
+@@ -7181,30 +7018,6 @@ static void vlv_init_clock_gating(struct drm_i915_private *dev_priv)
+ I915_WRITE(GEN7_UCGCTL4,
+ I915_READ(GEN7_UCGCTL4) | GEN7_L3BANK2X_CLOCK_GATE_DISABLE);
+
+- /*
+- * BSpec says this must be set, even though
+- * WaDisable4x2SubspanOptimization isn't listed for VLV.
+- */
+- I915_WRITE(CACHE_MODE_1,
+- _MASKED_BIT_ENABLE(PIXEL_SUBSPAN_COLLECT_OPT_DISABLE));
+-
+- /*
+- * BSpec recommends 8x4 when MSAA is used,
+- * however in practice 16x4 seems fastest.
+- *
+- * Note that PS/WM thread counts depend on the WIZ hashing
+- * disable bit, which we don't touch here, but it's good
+- * to keep in mind (see 3DSTATE_PS and 3DSTATE_WM).
+- */
+- I915_WRITE(GEN7_GT_MODE,
+- _MASKED_FIELD(GEN6_WIZ_HASHING_MASK, GEN6_WIZ_HASHING_16x4));
+-
+- /*
+- * WaIncreaseL3CreditsForVLVB0:vlv
+- * This is the hardware default actually.
+- */
+- I915_WRITE(GEN7_L3SQCREG1, VLV_B0_WA_L3SQCREG1_VALUE);
+-
+ /*
+ * WaDisableVLVClockGating_VBIIssue:vlv
+ * Disable clock gating on th GCFG unit to prevent a delay
+@@ -7257,13 +7070,6 @@ static void g4x_init_clock_gating(struct drm_i915_private *dev_priv)
+ dspclk_gate |= DSSUNIT_CLOCK_GATE_DISABLE;
+ I915_WRITE(DSPCLK_GATE_D, dspclk_gate);
+
+- /* WaDisableRenderCachePipelinedFlush */
+- I915_WRITE(CACHE_MODE_0,
+- _MASKED_BIT_ENABLE(CM0_PIPELINED_RENDER_FLUSH_DISABLE));
+-
+- /* WaDisable_RenderCache_OperationalFlush:g4x */
+- I915_WRITE(CACHE_MODE_0, _MASKED_BIT_DISABLE(RC_OP_FLUSH_ENABLE));
+-
+ g4x_disable_trickle_feed(dev_priv);
+ }
+
+@@ -7279,11 +7085,6 @@ static void i965gm_init_clock_gating(struct drm_i915_private *dev_priv)
+ intel_uncore_write(uncore,
+ MI_ARB_STATE,
+ _MASKED_BIT_ENABLE(MI_ARB_DISPLAY_TRICKLE_FEED_DISABLE));
+-
+- /* WaDisable_RenderCache_OperationalFlush:gen4 */
+- intel_uncore_write(uncore,
+- CACHE_MODE_0,
+- _MASKED_BIT_DISABLE(RC_OP_FLUSH_ENABLE));
+ }
+
+ static void i965g_init_clock_gating(struct drm_i915_private *dev_priv)
+@@ -7296,9 +7097,6 @@ static void i965g_init_clock_gating(struct drm_i915_private *dev_priv)
+ I915_WRITE(RENCLK_GATE_D2, 0);
+ I915_WRITE(MI_ARB_STATE,
+ _MASKED_BIT_ENABLE(MI_ARB_DISPLAY_TRICKLE_FEED_DISABLE));
+-
+- /* WaDisable_RenderCache_OperationalFlush:gen4 */
+- I915_WRITE(CACHE_MODE_0, _MASKED_BIT_DISABLE(RC_OP_FLUSH_ENABLE));
+ }
+
+ static void gen3_init_clock_gating(struct drm_i915_private *dev_priv)
+diff --git a/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h b/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h
+index 5b39bab4da1d..86baed226b53 100644
+--- a/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h
++++ b/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h
+@@ -20,6 +20,7 @@ selftest(fence, i915_sw_fence_mock_selftests)
+ selftest(scatterlist, scatterlist_mock_selftests)
+ selftest(syncmap, i915_syncmap_mock_selftests)
+ selftest(uncore, intel_uncore_mock_selftests)
++selftest(ring, intel_ring_mock_selftests)
+ selftest(engine, intel_engine_cs_mock_selftests)
+ selftest(timelines, intel_timeline_mock_selftests)
+ selftest(requests, i915_request_mock_selftests)
+diff --git a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
+index 724024a2243a..662d02289533 100644
+--- a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
++++ b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
+@@ -1404,6 +1404,10 @@ static unsigned long a5xx_gpu_busy(struct msm_gpu *gpu)
+ {
+ u64 busy_cycles, busy_time;
+
++ /* Only read the gpu busy if the hardware is already active */
++ if (pm_runtime_get_if_in_use(&gpu->pdev->dev) == 0)
++ return 0;
++
+ busy_cycles = gpu_read64(gpu, REG_A5XX_RBBM_PERFCTR_RBBM_0_LO,
+ REG_A5XX_RBBM_PERFCTR_RBBM_0_HI);
+
+@@ -1412,6 +1416,8 @@ static unsigned long a5xx_gpu_busy(struct msm_gpu *gpu)
+
+ gpu->devfreq.busy_cycles = busy_cycles;
+
++ pm_runtime_put(&gpu->pdev->dev);
++
+ if (WARN_ON(busy_time > ~0LU))
+ return ~0LU;
+
+diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
+index c4e71abbdd53..34607a98cc7c 100644
+--- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
++++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
+@@ -108,6 +108,13 @@ static void __a6xx_gmu_set_freq(struct a6xx_gmu *gmu, int index)
+ struct msm_gpu *gpu = &adreno_gpu->base;
+ int ret;
+
++ /*
++ * This can get called from devfreq while the hardware is idle. Don't
++ * bring up the power if it isn't already active
++ */
++ if (pm_runtime_get_if_in_use(gmu->dev) == 0)
++ return;
++
+ gmu_write(gmu, REG_A6XX_GMU_DCVS_ACK_OPTION, 0);
+
+ gmu_write(gmu, REG_A6XX_GMU_DCVS_PERF_SETTING,
+@@ -134,6 +141,7 @@ static void __a6xx_gmu_set_freq(struct a6xx_gmu *gmu, int index)
+ * for now leave it at max so that the performance is nominal.
+ */
+ icc_set_bw(gpu->icc_path, 0, MBps_to_icc(7216));
++ pm_runtime_put(gmu->dev);
+ }
+
+ void a6xx_gmu_set_freq(struct msm_gpu *gpu, unsigned long freq)
+diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+index 68af24150de5..2c09d2c21773 100644
+--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
++++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+@@ -810,6 +810,11 @@ static unsigned long a6xx_gpu_busy(struct msm_gpu *gpu)
+ struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu);
+ u64 busy_cycles, busy_time;
+
++
++ /* Only read the gpu busy if the hardware is already active */
++ if (pm_runtime_get_if_in_use(a6xx_gpu->gmu.dev) == 0)
++ return 0;
++
+ busy_cycles = gmu_read64(&a6xx_gpu->gmu,
+ REG_A6XX_GMU_CX_GMU_POWER_COUNTER_XOCLK_0_L,
+ REG_A6XX_GMU_CX_GMU_POWER_COUNTER_XOCLK_0_H);
+@@ -819,6 +824,8 @@ static unsigned long a6xx_gpu_busy(struct msm_gpu *gpu)
+
+ gpu->devfreq.busy_cycles = busy_cycles;
+
++ pm_runtime_put(a6xx_gpu->gmu.dev);
++
+ if (WARN_ON(busy_time > ~0LU))
+ return ~0LU;
+
+diff --git a/drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c b/drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c
+index 47b989834af1..c23a2fa13fb9 100644
+--- a/drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c
++++ b/drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c
+@@ -943,7 +943,8 @@ static int mdp5_init(struct platform_device *pdev, struct drm_device *dev)
+
+ return 0;
+ fail:
+- mdp5_destroy(pdev);
++ if (mdp5_kms)
++ mdp5_destroy(pdev);
+ return ret;
+ }
+
+diff --git a/drivers/gpu/drm/msm/msm_rd.c b/drivers/gpu/drm/msm/msm_rd.c
+index 732f65df5c4f..fea30e7aa9e8 100644
+--- a/drivers/gpu/drm/msm/msm_rd.c
++++ b/drivers/gpu/drm/msm/msm_rd.c
+@@ -29,8 +29,6 @@
+ * or shader programs (if not emitted inline in cmdstream).
+ */
+
+-#ifdef CONFIG_DEBUG_FS
+-
+ #include <linux/circ_buf.h>
+ #include <linux/debugfs.h>
+ #include <linux/kfifo.h>
+@@ -47,6 +45,8 @@ bool rd_full = false;
+ MODULE_PARM_DESC(rd_full, "If true, $debugfs/.../rd will snapshot all buffer contents");
+ module_param_named(rd_full, rd_full, bool, 0600);
+
++#ifdef CONFIG_DEBUG_FS
++
+ enum rd_sect_type {
+ RD_NONE,
+ RD_TEST, /* ascii text */
+diff --git a/drivers/gpu/drm/nouveau/dispnv50/disp.c b/drivers/gpu/drm/nouveau/dispnv50/disp.c
+index 6be9df1820c5..2625ed84fc44 100644
+--- a/drivers/gpu/drm/nouveau/dispnv50/disp.c
++++ b/drivers/gpu/drm/nouveau/dispnv50/disp.c
+@@ -482,15 +482,16 @@ nv50_dac_create(struct drm_connector *connector, struct dcb_output *dcbe)
+ * audio component binding for ELD notification
+ */
+ static void
+-nv50_audio_component_eld_notify(struct drm_audio_component *acomp, int port)
++nv50_audio_component_eld_notify(struct drm_audio_component *acomp, int port,
++ int dev_id)
+ {
+ if (acomp && acomp->audio_ops && acomp->audio_ops->pin_eld_notify)
+ acomp->audio_ops->pin_eld_notify(acomp->audio_ops->audio_ptr,
+- port, -1);
++ port, dev_id);
+ }
+
+ static int
+-nv50_audio_component_get_eld(struct device *kdev, int port, int pipe,
++nv50_audio_component_get_eld(struct device *kdev, int port, int dev_id,
+ bool *enabled, unsigned char *buf, int max_bytes)
+ {
+ struct drm_device *drm_dev = dev_get_drvdata(kdev);
+@@ -506,7 +507,8 @@ nv50_audio_component_get_eld(struct device *kdev, int port, int pipe,
+ nv_encoder = nouveau_encoder(encoder);
+ nv_connector = nouveau_encoder_connector_get(nv_encoder);
+ nv_crtc = nouveau_crtc(encoder->crtc);
+- if (!nv_connector || !nv_crtc || nv_crtc->index != port)
++ if (!nv_connector || !nv_crtc || nv_encoder->or != port ||
++ nv_crtc->index != dev_id)
+ continue;
+ *enabled = drm_detect_monitor_audio(nv_connector->edid);
+ if (*enabled) {
+@@ -600,7 +602,8 @@ nv50_audio_disable(struct drm_encoder *encoder, struct nouveau_crtc *nv_crtc)
+
+ nvif_mthd(&disp->disp->object, 0, &args, sizeof(args));
+
+- nv50_audio_component_eld_notify(drm->audio.component, nv_crtc->index);
++ nv50_audio_component_eld_notify(drm->audio.component, nv_encoder->or,
++ nv_crtc->index);
+ }
+
+ static void
+@@ -634,7 +637,8 @@ nv50_audio_enable(struct drm_encoder *encoder, struct drm_display_mode *mode)
+ nvif_mthd(&disp->disp->object, 0, &args,
+ sizeof(args.base) + drm_eld_size(args.data));
+
+- nv50_audio_component_eld_notify(drm->audio.component, nv_crtc->index);
++ nv50_audio_component_eld_notify(drm->audio.component, nv_encoder->or,
++ nv_crtc->index);
+ }
+
+ /******************************************************************************
+diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/disp/hdmigm200.c b/drivers/gpu/drm/nouveau/nvkm/engine/disp/hdmigm200.c
+index 9b16a08eb4d9..bf6d41fb0c9f 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/engine/disp/hdmigm200.c
++++ b/drivers/gpu/drm/nouveau/nvkm/engine/disp/hdmigm200.c
+@@ -27,10 +27,10 @@ void
+ gm200_hdmi_scdc(struct nvkm_ior *ior, int head, u8 scdc)
+ {
+ struct nvkm_device *device = ior->disp->engine.subdev.device;
+- const u32 hoff = head * 0x800;
++ const u32 soff = nv50_ior_base(ior);
+ const u32 ctrl = scdc & 0x3;
+
+- nvkm_mask(device, 0x61c5bc + hoff, 0x00000003, ctrl);
++ nvkm_mask(device, 0x61c5bc + soff, 0x00000003, ctrl);
+
+ ior->tmds.high_speed = !!(scdc & 0x2);
+ }
+diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/gr/gk20a.c b/drivers/gpu/drm/nouveau/nvkm/engine/gr/gk20a.c
+index 4209b24a46d7..bf6b65257852 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/engine/gr/gk20a.c
++++ b/drivers/gpu/drm/nouveau/nvkm/engine/gr/gk20a.c
+@@ -341,7 +341,7 @@ gk20a_gr_load(struct gf100_gr *gr, int ver, const struct gf100_gr_fwif *fwif)
+
+ static const struct gf100_gr_fwif
+ gk20a_gr_fwif[] = {
+- { -1, gk20a_gr_load, &gk20a_gr },
++ { 0, gk20a_gr_load, &gk20a_gr },
+ {}
+ };
+
+diff --git a/drivers/gpu/drm/qxl/qxl_kms.c b/drivers/gpu/drm/qxl/qxl_kms.c
+index 70b20ee4741a..41ef6a9ca8cc 100644
+--- a/drivers/gpu/drm/qxl/qxl_kms.c
++++ b/drivers/gpu/drm/qxl/qxl_kms.c
+@@ -218,7 +218,7 @@ int qxl_device_init(struct qxl_device *qdev,
+ &(qdev->ram_header->cursor_ring_hdr),
+ sizeof(struct qxl_command),
+ QXL_CURSOR_RING_SIZE,
+- qdev->io_base + QXL_IO_NOTIFY_CMD,
++ qdev->io_base + QXL_IO_NOTIFY_CURSOR,
+ false,
+ &qdev->cursor_event);
+
+diff --git a/drivers/gpu/drm/sun4i/sun4i_hdmi.h b/drivers/gpu/drm/sun4i/sun4i_hdmi.h
+index 7ad3f06c127e..00ca35f07ba5 100644
+--- a/drivers/gpu/drm/sun4i/sun4i_hdmi.h
++++ b/drivers/gpu/drm/sun4i/sun4i_hdmi.h
+@@ -148,7 +148,7 @@
+ #define SUN4I_HDMI_DDC_CMD_IMPLICIT_WRITE 3
+
+ #define SUN4I_HDMI_DDC_CLK_REG 0x528
+-#define SUN4I_HDMI_DDC_CLK_M(m) (((m) & 0x7) << 3)
++#define SUN4I_HDMI_DDC_CLK_M(m) (((m) & 0xf) << 3)
+ #define SUN4I_HDMI_DDC_CLK_N(n) ((n) & 0x7)
+
+ #define SUN4I_HDMI_DDC_LINE_CTRL_REG 0x540
+diff --git a/drivers/gpu/drm/sun4i/sun4i_hdmi_ddc_clk.c b/drivers/gpu/drm/sun4i/sun4i_hdmi_ddc_clk.c
+index 2ff780114106..12430b9d4e93 100644
+--- a/drivers/gpu/drm/sun4i/sun4i_hdmi_ddc_clk.c
++++ b/drivers/gpu/drm/sun4i/sun4i_hdmi_ddc_clk.c
+@@ -33,7 +33,7 @@ static unsigned long sun4i_ddc_calc_divider(unsigned long rate,
+ unsigned long best_rate = 0;
+ u8 best_m = 0, best_n = 0, _m, _n;
+
+- for (_m = 0; _m < 8; _m++) {
++ for (_m = 0; _m < 16; _m++) {
+ for (_n = 0; _n < 8; _n++) {
+ unsigned long tmp_rate;
+
+diff --git a/drivers/hid/hid-ids.h b/drivers/hid/hid-ids.h
+index 1c71a1aa76b2..f03f1cc913ce 100644
+--- a/drivers/hid/hid-ids.h
++++ b/drivers/hid/hid-ids.h
+@@ -1157,6 +1157,9 @@
+ #define USB_DEVICE_ID_TPV_OPTICAL_TOUCHSCREEN_8882 0x8882
+ #define USB_DEVICE_ID_TPV_OPTICAL_TOUCHSCREEN_8883 0x8883
+
++#define USB_VENDOR_ID_TRUST 0x145f
++#define USB_DEVICE_ID_TRUST_PANORA_TABLET 0x0212
++
+ #define USB_VENDOR_ID_TURBOX 0x062a
+ #define USB_DEVICE_ID_TURBOX_KEYBOARD 0x0201
+ #define USB_DEVICE_ID_ASUS_MD_5110 0x5110
+diff --git a/drivers/hid/hid-quirks.c b/drivers/hid/hid-quirks.c
+index e4cb543de0cd..ca8b5c261c7c 100644
+--- a/drivers/hid/hid-quirks.c
++++ b/drivers/hid/hid-quirks.c
+@@ -168,6 +168,7 @@ static const struct hid_device_id hid_quirks[] = {
+ { HID_USB_DEVICE(USB_VENDOR_ID_TOUCHPACK, USB_DEVICE_ID_TOUCHPACK_RTS), HID_QUIRK_MULTI_INPUT },
+ { HID_USB_DEVICE(USB_VENDOR_ID_TPV, USB_DEVICE_ID_TPV_OPTICAL_TOUCHSCREEN_8882), HID_QUIRK_NOGET },
+ { HID_USB_DEVICE(USB_VENDOR_ID_TPV, USB_DEVICE_ID_TPV_OPTICAL_TOUCHSCREEN_8883), HID_QUIRK_NOGET },
++ { HID_USB_DEVICE(USB_VENDOR_ID_TRUST, USB_DEVICE_ID_TRUST_PANORA_TABLET), HID_QUIRK_MULTI_INPUT | HID_QUIRK_HIDINPUT_FORCE },
+ { HID_USB_DEVICE(USB_VENDOR_ID_TURBOX, USB_DEVICE_ID_TURBOX_KEYBOARD), HID_QUIRK_NOGET },
+ { HID_USB_DEVICE(USB_VENDOR_ID_UCLOGIC, USB_DEVICE_ID_UCLOGIC_TABLET_KNA5), HID_QUIRK_MULTI_INPUT },
+ { HID_USB_DEVICE(USB_VENDOR_ID_UCLOGIC, USB_DEVICE_ID_UCLOGIC_TABLET_TWA60), HID_QUIRK_MULTI_INPUT },
+diff --git a/drivers/hid/intel-ish-hid/ishtp-fw-loader.c b/drivers/hid/intel-ish-hid/ishtp-fw-loader.c
+index aa2dbed30fc3..6cf59fd26ad7 100644
+--- a/drivers/hid/intel-ish-hid/ishtp-fw-loader.c
++++ b/drivers/hid/intel-ish-hid/ishtp-fw-loader.c
+@@ -480,6 +480,7 @@ static int ish_query_loader_prop(struct ishtp_cl_data *client_data,
+ sizeof(ldr_xfer_query_resp));
+ if (rv < 0) {
+ client_data->flag_retry = true;
++ *fw_info = (struct shim_fw_info){};
+ return rv;
+ }
+
+@@ -489,6 +490,7 @@ static int ish_query_loader_prop(struct ishtp_cl_data *client_data,
+ "data size %d is not equal to size of loader_xfer_query_response %zu\n",
+ rv, sizeof(struct loader_xfer_query_response));
+ client_data->flag_retry = true;
++ *fw_info = (struct shim_fw_info){};
+ return -EMSGSIZE;
+ }
+
+diff --git a/drivers/hwtracing/coresight/coresight-etm4x.c b/drivers/hwtracing/coresight/coresight-etm4x.c
+index a90d757f7043..a6d6c7a3abcb 100644
+--- a/drivers/hwtracing/coresight/coresight-etm4x.c
++++ b/drivers/hwtracing/coresight/coresight-etm4x.c
+@@ -1527,6 +1527,7 @@ static int etm4_probe(struct amba_device *adev, const struct amba_id *id)
+ return 0;
+
+ err_arch_supported:
++ etmdrvdata[drvdata->cpu] = NULL;
+ if (--etm4_count == 0) {
+ etm4_cpu_pm_unregister();
+
+diff --git a/drivers/hwtracing/coresight/coresight-platform.c b/drivers/hwtracing/coresight/coresight-platform.c
+index 43418a2126ff..471f34e40c74 100644
+--- a/drivers/hwtracing/coresight/coresight-platform.c
++++ b/drivers/hwtracing/coresight/coresight-platform.c
+@@ -87,6 +87,7 @@ static void of_coresight_get_ports_legacy(const struct device_node *node,
+ int *nr_inport, int *nr_outport)
+ {
+ struct device_node *ep = NULL;
++ struct of_endpoint endpoint;
+ int in = 0, out = 0;
+
+ do {
+@@ -94,10 +95,16 @@ static void of_coresight_get_ports_legacy(const struct device_node *node,
+ if (!ep)
+ break;
+
+- if (of_coresight_legacy_ep_is_input(ep))
+- in++;
+- else
+- out++;
++ if (of_graph_parse_endpoint(ep, &endpoint))
++ continue;
++
++ if (of_coresight_legacy_ep_is_input(ep)) {
++ in = (endpoint.port + 1 > in) ?
++ endpoint.port + 1 : in;
++ } else {
++ out = (endpoint.port + 1) > out ?
++ endpoint.port + 1 : out;
++ }
+
+ } while (ep);
+
+@@ -137,9 +144,16 @@ of_coresight_count_ports(struct device_node *port_parent)
+ {
+ int i = 0;
+ struct device_node *ep = NULL;
++ struct of_endpoint endpoint;
++
++ while ((ep = of_graph_get_next_endpoint(port_parent, ep))) {
++ /* Defer error handling to parsing */
++ if (of_graph_parse_endpoint(ep, &endpoint))
++ continue;
++ if (endpoint.port + 1 > i)
++ i = endpoint.port + 1;
++ }
+
+- while ((ep = of_graph_get_next_endpoint(port_parent, ep)))
+- i++;
+ return i;
+ }
+
+@@ -191,14 +205,12 @@ static int of_coresight_get_cpu(struct device *dev)
+ * Parses the local port, remote device name and the remote port.
+ *
+ * Returns :
+- * 1 - If the parsing is successful and a connection record
+- * was created for an output connection.
+ * 0 - If the parsing completed without any fatal errors.
+ * -Errno - Fatal error, abort the scanning.
+ */
+ static int of_coresight_parse_endpoint(struct device *dev,
+ struct device_node *ep,
+- struct coresight_connection *conn)
++ struct coresight_platform_data *pdata)
+ {
+ int ret = 0;
+ struct of_endpoint endpoint, rendpoint;
+@@ -206,6 +218,7 @@ static int of_coresight_parse_endpoint(struct device *dev,
+ struct device_node *rep = NULL;
+ struct device *rdev = NULL;
+ struct fwnode_handle *rdev_fwnode;
++ struct coresight_connection *conn;
+
+ do {
+ /* Parse the local port details */
+@@ -232,6 +245,13 @@ static int of_coresight_parse_endpoint(struct device *dev,
+ break;
+ }
+
++ conn = &pdata->conns[endpoint.port];
++ if (conn->child_fwnode) {
++ dev_warn(dev, "Duplicate output port %d\n",
++ endpoint.port);
++ ret = -EINVAL;
++ break;
++ }
+ conn->outport = endpoint.port;
+ /*
+ * Hold the refcount to the target device. This could be
+@@ -244,7 +264,6 @@ static int of_coresight_parse_endpoint(struct device *dev,
+ conn->child_fwnode = fwnode_handle_get(rdev_fwnode);
+ conn->child_port = rendpoint.port;
+ /* Connection record updated */
+- ret = 1;
+ } while (0);
+
+ of_node_put(rparent);
+@@ -258,7 +277,6 @@ static int of_get_coresight_platform_data(struct device *dev,
+ struct coresight_platform_data *pdata)
+ {
+ int ret = 0;
+- struct coresight_connection *conn;
+ struct device_node *ep = NULL;
+ const struct device_node *parent = NULL;
+ bool legacy_binding = false;
+@@ -287,8 +305,6 @@ static int of_get_coresight_platform_data(struct device *dev,
+ dev_warn_once(dev, "Uses obsolete Coresight DT bindings\n");
+ }
+
+- conn = pdata->conns;
+-
+ /* Iterate through each output port to discover topology */
+ while ((ep = of_graph_get_next_endpoint(parent, ep))) {
+ /*
+@@ -300,15 +316,9 @@ static int of_get_coresight_platform_data(struct device *dev,
+ if (legacy_binding && of_coresight_legacy_ep_is_input(ep))
+ continue;
+
+- ret = of_coresight_parse_endpoint(dev, ep, conn);
+- switch (ret) {
+- case 1:
+- conn++; /* Fall through */
+- case 0:
+- break;
+- default:
++ ret = of_coresight_parse_endpoint(dev, ep, pdata);
++ if (ret)
+ return ret;
+- }
+ }
+
+ return 0;
+@@ -647,6 +657,16 @@ static int acpi_coresight_parse_link(struct acpi_device *adev,
+ * coresight_remove_match().
+ */
+ conn->child_fwnode = fwnode_handle_get(&r_adev->fwnode);
++ } else if (dir == ACPI_CORESIGHT_LINK_SLAVE) {
++ /*
++ * We are only interested in the port number
++ * for the input ports at this component.
++ * Store the port number in child_port.
++ */
++ conn->child_port = fields[0].integer.value;
++ } else {
++ /* Invalid direction */
++ return -EINVAL;
+ }
+
+ return dir;
+@@ -692,10 +712,20 @@ static int acpi_coresight_parse_graph(struct acpi_device *adev,
+ return dir;
+
+ if (dir == ACPI_CORESIGHT_LINK_MASTER) {
+- pdata->nr_outport++;
++ if (ptr->outport > pdata->nr_outport)
++ pdata->nr_outport = ptr->outport;
+ ptr++;
+ } else {
+- pdata->nr_inport++;
++ WARN_ON(pdata->nr_inport == ptr->child_port);
++ /*
++ * We do not track input port connections for a device.
++ * However we need the highest port number described,
++ * which can be recorded now and reuse this connection
++ * record for an output connection. Hence, do not move
++ * the ptr for input connections
++ */
++ if (ptr->child_port > pdata->nr_inport)
++ pdata->nr_inport = ptr->child_port;
+ }
+ }
+
+@@ -704,8 +734,13 @@ static int acpi_coresight_parse_graph(struct acpi_device *adev,
+ return rc;
+
+ /* Copy the connection information to the final location */
+- for (i = 0; i < pdata->nr_outport; i++)
+- pdata->conns[i] = conns[i];
++ for (i = 0; conns + i < ptr; i++) {
++ int port = conns[i].outport;
++
++ /* Duplicate output port */
++ WARN_ON(pdata->conns[port].child_fwnode);
++ pdata->conns[port] = conns[i];
++ }
+
+ devm_kfree(&adev->dev, conns);
+ return 0;
+diff --git a/drivers/hwtracing/coresight/coresight-tmc-etf.c b/drivers/hwtracing/coresight/coresight-tmc-etf.c
+index d0cc3985b72a..36cce2bfb744 100644
+--- a/drivers/hwtracing/coresight/coresight-tmc-etf.c
++++ b/drivers/hwtracing/coresight/coresight-tmc-etf.c
+@@ -596,13 +596,6 @@ int tmc_read_prepare_etb(struct tmc_drvdata *drvdata)
+ goto out;
+ }
+
+- /* There is no point in reading a TMC in HW FIFO mode */
+- mode = readl_relaxed(drvdata->base + TMC_MODE);
+- if (mode != TMC_MODE_CIRCULAR_BUFFER) {
+- ret = -EINVAL;
+- goto out;
+- }
+-
+ /* Don't interfere if operated from Perf */
+ if (drvdata->mode == CS_MODE_PERF) {
+ ret = -EINVAL;
+@@ -616,8 +609,15 @@ int tmc_read_prepare_etb(struct tmc_drvdata *drvdata)
+ }
+
+ /* Disable the TMC if need be */
+- if (drvdata->mode == CS_MODE_SYSFS)
++ if (drvdata->mode == CS_MODE_SYSFS) {
++ /* There is no point in reading a TMC in HW FIFO mode */
++ mode = readl_relaxed(drvdata->base + TMC_MODE);
++ if (mode != TMC_MODE_CIRCULAR_BUFFER) {
++ ret = -EINVAL;
++ goto out;
++ }
+ __tmc_etb_disable_hw(drvdata);
++ }
+
+ drvdata->reading = true;
+ out:
+diff --git a/drivers/hwtracing/coresight/coresight.c b/drivers/hwtracing/coresight/coresight.c
+index c71553c09f8e..8f5e62f02444 100644
+--- a/drivers/hwtracing/coresight/coresight.c
++++ b/drivers/hwtracing/coresight/coresight.c
+@@ -1053,6 +1053,9 @@ static int coresight_orphan_match(struct device *dev, void *data)
+ for (i = 0; i < i_csdev->pdata->nr_outport; i++) {
+ conn = &i_csdev->pdata->conns[i];
+
++ /* Skip the port if FW doesn't describe it */
++ if (!conn->child_fwnode)
++ continue;
+ /* We have found at least one orphan connection */
+ if (conn->child_dev == NULL) {
+ /* Does it match this newly added device? */
+@@ -1091,6 +1094,8 @@ static void coresight_fixup_device_conns(struct coresight_device *csdev)
+ for (i = 0; i < csdev->pdata->nr_outport; i++) {
+ struct coresight_connection *conn = &csdev->pdata->conns[i];
+
++ if (!conn->child_fwnode)
++ continue;
+ conn->child_dev =
+ coresight_find_csdev_by_fwnode(conn->child_fwnode);
+ if (!conn->child_dev)
+@@ -1118,7 +1123,7 @@ static int coresight_remove_match(struct device *dev, void *data)
+ for (i = 0; i < iterator->pdata->nr_outport; i++) {
+ conn = &iterator->pdata->conns[i];
+
+- if (conn->child_dev == NULL)
++ if (conn->child_dev == NULL || conn->child_fwnode == NULL)
+ continue;
+
+ if (csdev->dev.fwnode == conn->child_fwnode) {
+diff --git a/drivers/i2c/busses/i2c-icy.c b/drivers/i2c/busses/i2c-icy.c
+index 271470f4d8a9..66c9923fc766 100644
+--- a/drivers/i2c/busses/i2c-icy.c
++++ b/drivers/i2c/busses/i2c-icy.c
+@@ -43,6 +43,7 @@
+ #include <linux/i2c.h>
+ #include <linux/i2c-algo-pcf.h>
+
++#include <asm/amigahw.h>
+ #include <asm/amigaints.h>
+ #include <linux/zorro.h>
+
+diff --git a/drivers/i2c/busses/i2c-piix4.c b/drivers/i2c/busses/i2c-piix4.c
+index 30ded6422e7b..69740a4ff1db 100644
+--- a/drivers/i2c/busses/i2c-piix4.c
++++ b/drivers/i2c/busses/i2c-piix4.c
+@@ -977,7 +977,8 @@ static int piix4_probe(struct pci_dev *dev, const struct pci_device_id *id)
+ }
+
+ if (dev->vendor == PCI_VENDOR_ID_AMD &&
+- dev->device == PCI_DEVICE_ID_AMD_HUDSON2_SMBUS) {
++ (dev->device == PCI_DEVICE_ID_AMD_HUDSON2_SMBUS ||
++ dev->device == PCI_DEVICE_ID_AMD_KERNCZ_SMBUS)) {
+ retval = piix4_setup_sb800(dev, id, 1);
+ }
+
+diff --git a/drivers/i2c/busses/i2c-pxa.c b/drivers/i2c/busses/i2c-pxa.c
+index 466e4f681d7a..f537a37ac1d5 100644
+--- a/drivers/i2c/busses/i2c-pxa.c
++++ b/drivers/i2c/busses/i2c-pxa.c
+@@ -311,11 +311,10 @@ static void i2c_pxa_scream_blue_murder(struct pxa_i2c *i2c, const char *why)
+ dev_err(dev, "IBMR: %08x IDBR: %08x ICR: %08x ISR: %08x\n",
+ readl(_IBMR(i2c)), readl(_IDBR(i2c)), readl(_ICR(i2c)),
+ readl(_ISR(i2c)));
+- dev_dbg(dev, "log: ");
++ dev_err(dev, "log:");
+ for (i = 0; i < i2c->irqlogidx; i++)
+- pr_debug("[%08x:%08x] ", i2c->isrlog[i], i2c->icrlog[i]);
+-
+- pr_debug("\n");
++ pr_cont(" [%03x:%05x]", i2c->isrlog[i], i2c->icrlog[i]);
++ pr_cont("\n");
+ }
+
+ #else /* ifdef DEBUG */
+@@ -747,11 +746,9 @@ static inline void i2c_pxa_stop_message(struct pxa_i2c *i2c)
+ {
+ u32 icr;
+
+- /*
+- * Clear the STOP and ACK flags
+- */
++ /* Clear the START, STOP, ACK, TB and MA flags */
+ icr = readl(_ICR(i2c));
+- icr &= ~(ICR_STOP | ICR_ACKNAK);
++ icr &= ~(ICR_START | ICR_STOP | ICR_ACKNAK | ICR_TB | ICR_MA);
+ writel(icr, _ICR(i2c));
+ }
+
+diff --git a/drivers/iio/buffer/industrialio-buffer-dmaengine.c b/drivers/iio/buffer/industrialio-buffer-dmaengine.c
+index b129693af0fd..94da3b1ca3a2 100644
+--- a/drivers/iio/buffer/industrialio-buffer-dmaengine.c
++++ b/drivers/iio/buffer/industrialio-buffer-dmaengine.c
+@@ -134,7 +134,7 @@ static ssize_t iio_dmaengine_buffer_get_length_align(struct device *dev,
+ struct dmaengine_buffer *dmaengine_buffer =
+ iio_buffer_to_dmaengine_buffer(indio_dev->buffer);
+
+- return sprintf(buf, "%u\n", dmaengine_buffer->align);
++ return sprintf(buf, "%zu\n", dmaengine_buffer->align);
+ }
+
+ static IIO_DEVICE_ATTR(length_align_bytes, 0444,
+diff --git a/drivers/iio/light/gp2ap002.c b/drivers/iio/light/gp2ap002.c
+index b7ef16b28280..7a2679bdc987 100644
+--- a/drivers/iio/light/gp2ap002.c
++++ b/drivers/iio/light/gp2ap002.c
+@@ -158,6 +158,9 @@ static irqreturn_t gp2ap002_prox_irq(int irq, void *d)
+ int val;
+ int ret;
+
++ if (!gp2ap002->enabled)
++ goto err_retrig;
++
+ ret = regmap_read(gp2ap002->map, GP2AP002_PROX, &val);
+ if (ret) {
+ dev_err(gp2ap002->dev, "error reading proximity\n");
+@@ -247,6 +250,8 @@ static int gp2ap002_read_raw(struct iio_dev *indio_dev,
+ struct gp2ap002 *gp2ap002 = iio_priv(indio_dev);
+ int ret;
+
++ pm_runtime_get_sync(gp2ap002->dev);
++
+ switch (mask) {
+ case IIO_CHAN_INFO_RAW:
+ switch (chan->type) {
+@@ -255,13 +260,21 @@ static int gp2ap002_read_raw(struct iio_dev *indio_dev,
+ if (ret < 0)
+ return ret;
+ *val = ret;
+- return IIO_VAL_INT;
++ ret = IIO_VAL_INT;
++ goto out;
+ default:
+- return -EINVAL;
++ ret = -EINVAL;
++ goto out;
+ }
+ default:
+- return -EINVAL;
++ ret = -EINVAL;
+ }
++
++out:
++ pm_runtime_mark_last_busy(gp2ap002->dev);
++ pm_runtime_put_autosuspend(gp2ap002->dev);
++
++ return ret;
+ }
+
+ static int gp2ap002_init(struct gp2ap002 *gp2ap002)
+diff --git a/drivers/iio/pressure/bmp280-core.c b/drivers/iio/pressure/bmp280-core.c
+index 29c209cc1108..973264a088f9 100644
+--- a/drivers/iio/pressure/bmp280-core.c
++++ b/drivers/iio/pressure/bmp280-core.c
+@@ -271,6 +271,8 @@ static u32 bmp280_compensate_humidity(struct bmp280_data *data,
+ + (s32)2097152) * calib->H2 + 8192) >> 14);
+ var -= ((((var >> 15) * (var >> 15)) >> 7) * (s32)calib->H1) >> 4;
+
++ var = clamp_val(var, 0, 419430400);
++
+ return var >> 12;
+ };
+
+@@ -713,7 +715,7 @@ static int bmp180_measure(struct bmp280_data *data, u8 ctrl_meas)
+ unsigned int ctrl;
+
+ if (data->use_eoc)
+- init_completion(&data->done);
++ reinit_completion(&data->done);
+
+ ret = regmap_write(data->regmap, BMP280_REG_CTRL_MEAS, ctrl_meas);
+ if (ret)
+@@ -969,6 +971,9 @@ static int bmp085_fetch_eoc_irq(struct device *dev,
+ "trying to enforce it\n");
+ irq_trig = IRQF_TRIGGER_RISING;
+ }
++
++ init_completion(&data->done);
++
+ ret = devm_request_threaded_irq(dev,
+ irq,
+ bmp085_eoc_irq,
+diff --git a/drivers/infiniband/core/cm.c b/drivers/infiniband/core/cm.c
+index 17f14e0eafe4..1c2bf18cda9f 100644
+--- a/drivers/infiniband/core/cm.c
++++ b/drivers/infiniband/core/cm.c
+@@ -1076,7 +1076,9 @@ retest:
+ case IB_CM_REP_SENT:
+ case IB_CM_MRA_REP_RCVD:
+ ib_cancel_mad(cm_id_priv->av.port->mad_agent, cm_id_priv->msg);
+- /* Fall through */
++ cm_send_rej_locked(cm_id_priv, IB_CM_REJ_CONSUMER_DEFINED, NULL,
++ 0, NULL, 0);
++ goto retest;
+ case IB_CM_MRA_REQ_SENT:
+ case IB_CM_REP_RCVD:
+ case IB_CM_MRA_REP_SENT:
+diff --git a/drivers/infiniband/core/cma_configfs.c b/drivers/infiniband/core/cma_configfs.c
+index c672a4978bfd..3c1e2ca564fe 100644
+--- a/drivers/infiniband/core/cma_configfs.c
++++ b/drivers/infiniband/core/cma_configfs.c
+@@ -322,8 +322,21 @@ fail:
+ return ERR_PTR(err);
+ }
+
++static void drop_cma_dev(struct config_group *cgroup, struct config_item *item)
++{
++ struct config_group *group =
++ container_of(item, struct config_group, cg_item);
++ struct cma_dev_group *cma_dev_group =
++ container_of(group, struct cma_dev_group, device_group);
++
++ configfs_remove_default_groups(&cma_dev_group->ports_group);
++ configfs_remove_default_groups(&cma_dev_group->device_group);
++ config_item_put(item);
++}
++
+ static struct configfs_group_operations cma_subsys_group_ops = {
+ .make_group = make_cma_dev,
++ .drop_item = drop_cma_dev,
+ };
+
+ static const struct config_item_type cma_subsys_type = {
+diff --git a/drivers/infiniband/core/sysfs.c b/drivers/infiniband/core/sysfs.c
+index 087682e6969e..defe9cd4c5ee 100644
+--- a/drivers/infiniband/core/sysfs.c
++++ b/drivers/infiniband/core/sysfs.c
+@@ -1058,8 +1058,7 @@ static int add_port(struct ib_core_device *coredev, int port_num)
+ coredev->ports_kobj,
+ "%d", port_num);
+ if (ret) {
+- kfree(p);
+- return ret;
++ goto err_put;
+ }
+
+ p->gid_attr_group = kzalloc(sizeof(*p->gid_attr_group), GFP_KERNEL);
+@@ -1072,8 +1071,7 @@ static int add_port(struct ib_core_device *coredev, int port_num)
+ ret = kobject_init_and_add(&p->gid_attr_group->kobj, &gid_attr_type,
+ &p->kobj, "gid_attrs");
+ if (ret) {
+- kfree(p->gid_attr_group);
+- goto err_put;
++ goto err_put_gid_attrs;
+ }
+
+ if (device->ops.process_mad && is_full_dev) {
+@@ -1404,8 +1402,10 @@ int ib_port_register_module_stat(struct ib_device *device, u8 port_num,
+
+ ret = kobject_init_and_add(kobj, ktype, &port->kobj, "%s",
+ name);
+- if (ret)
++ if (ret) {
++ kobject_put(kobj);
+ return ret;
++ }
+ }
+
+ return 0;
+diff --git a/drivers/infiniband/core/uverbs_cmd.c b/drivers/infiniband/core/uverbs_cmd.c
+index 060b4ebbd2ba..d6e9cc94dd90 100644
+--- a/drivers/infiniband/core/uverbs_cmd.c
++++ b/drivers/infiniband/core/uverbs_cmd.c
+@@ -2959,6 +2959,7 @@ static int ib_uverbs_ex_create_wq(struct uverbs_attr_bundle *attrs)
+ wq_init_attr.event_handler = ib_uverbs_wq_event_handler;
+ wq_init_attr.create_flags = cmd.create_flags;
+ INIT_LIST_HEAD(&obj->uevent.event_list);
++ obj->uevent.uobject.user_handle = cmd.user_handle;
+
+ wq = pd->device->ops.create_wq(pd, &wq_init_attr, &attrs->driver_udata);
+ if (IS_ERR(wq)) {
+@@ -2976,8 +2977,6 @@ static int ib_uverbs_ex_create_wq(struct uverbs_attr_bundle *attrs)
+ atomic_set(&wq->usecnt, 0);
+ atomic_inc(&pd->usecnt);
+ atomic_inc(&cq->usecnt);
+- wq->uobject = obj;
+- obj->uevent.uobject.object = wq;
+
+ memset(&resp, 0, sizeof(resp));
+ resp.wq_handle = obj->uevent.uobject.id;
+diff --git a/drivers/infiniband/hw/cxgb4/device.c b/drivers/infiniband/hw/cxgb4/device.c
+index 599340c1f0b8..541dbcf22d0e 100644
+--- a/drivers/infiniband/hw/cxgb4/device.c
++++ b/drivers/infiniband/hw/cxgb4/device.c
+@@ -953,6 +953,7 @@ void c4iw_dealloc(struct uld_ctx *ctx)
+ static void c4iw_remove(struct uld_ctx *ctx)
+ {
+ pr_debug("c4iw_dev %p\n", ctx->dev);
++ debugfs_remove_recursive(ctx->dev->debugfs_root);
+ c4iw_unregister_device(ctx->dev);
+ c4iw_dealloc(ctx);
+ }
+diff --git a/drivers/infiniband/hw/efa/efa_com_cmd.c b/drivers/infiniband/hw/efa/efa_com_cmd.c
+index eea5574a62e8..69f842c92ff6 100644
+--- a/drivers/infiniband/hw/efa/efa_com_cmd.c
++++ b/drivers/infiniband/hw/efa/efa_com_cmd.c
+@@ -388,7 +388,7 @@ static int efa_com_get_feature_ex(struct efa_com_dev *edev,
+
+ if (control_buff_size)
+ EFA_SET(&get_cmd.aq_common_descriptor.flags,
+- EFA_ADMIN_AQ_COMMON_DESC_CTRL_DATA_INDIRECT, 1);
++ EFA_ADMIN_AQ_COMMON_DESC_CTRL_DATA, 1);
+
+ efa_com_set_dma_addr(control_buf_dma_addr,
+ &get_cmd.control_buffer.address.mem_addr_high,
+@@ -540,7 +540,7 @@ static int efa_com_set_feature_ex(struct efa_com_dev *edev,
+ if (control_buff_size) {
+ set_cmd->aq_common_descriptor.flags = 0;
+ EFA_SET(&set_cmd->aq_common_descriptor.flags,
+- EFA_ADMIN_AQ_COMMON_DESC_CTRL_DATA_INDIRECT, 1);
++ EFA_ADMIN_AQ_COMMON_DESC_CTRL_DATA, 1);
+ efa_com_set_dma_addr(control_buf_dma_addr,
+ &set_cmd->control_buffer.address.mem_addr_high,
+ &set_cmd->control_buffer.address.mem_addr_low);
+diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+index c3316672b70e..f9fa80ae5560 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
++++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+@@ -1349,34 +1349,26 @@ static int hns_roce_query_pf_resource(struct hns_roce_dev *hr_dev)
+ static int hns_roce_query_pf_timer_resource(struct hns_roce_dev *hr_dev)
+ {
+ struct hns_roce_pf_timer_res_a *req_a;
+- struct hns_roce_cmq_desc desc[2];
+- int ret, i;
++ struct hns_roce_cmq_desc desc;
++ int ret;
+
+- for (i = 0; i < 2; i++) {
+- hns_roce_cmq_setup_basic_desc(&desc[i],
+- HNS_ROCE_OPC_QUERY_PF_TIMER_RES,
+- true);
++ hns_roce_cmq_setup_basic_desc(&desc, HNS_ROCE_OPC_QUERY_PF_TIMER_RES,
++ true);
+
+- if (i == 0)
+- desc[i].flag |= cpu_to_le16(HNS_ROCE_CMD_FLAG_NEXT);
+- else
+- desc[i].flag &= ~cpu_to_le16(HNS_ROCE_CMD_FLAG_NEXT);
+- }
+-
+- ret = hns_roce_cmq_send(hr_dev, desc, 2);
++ ret = hns_roce_cmq_send(hr_dev, &desc, 1);
+ if (ret)
+ return ret;
+
+- req_a = (struct hns_roce_pf_timer_res_a *)desc[0].data;
++ req_a = (struct hns_roce_pf_timer_res_a *)desc.data;
+
+ hr_dev->caps.qpc_timer_bt_num =
+- roce_get_field(req_a->qpc_timer_bt_idx_num,
+- PF_RES_DATA_1_PF_QPC_TIMER_BT_NUM_M,
+- PF_RES_DATA_1_PF_QPC_TIMER_BT_NUM_S);
++ roce_get_field(req_a->qpc_timer_bt_idx_num,
++ PF_RES_DATA_1_PF_QPC_TIMER_BT_NUM_M,
++ PF_RES_DATA_1_PF_QPC_TIMER_BT_NUM_S);
+ hr_dev->caps.cqc_timer_bt_num =
+- roce_get_field(req_a->cqc_timer_bt_idx_num,
+- PF_RES_DATA_2_PF_CQC_TIMER_BT_NUM_M,
+- PF_RES_DATA_2_PF_CQC_TIMER_BT_NUM_S);
++ roce_get_field(req_a->cqc_timer_bt_idx_num,
++ PF_RES_DATA_2_PF_CQC_TIMER_BT_NUM_M,
++ PF_RES_DATA_2_PF_CQC_TIMER_BT_NUM_S);
+
+ return 0;
+ }
+@@ -4639,7 +4631,7 @@ static int hns_roce_v2_query_qp(struct ib_qp *ibqp, struct ib_qp_attr *qp_attr,
+ qp_attr->path_mig_state = IB_MIG_ARMED;
+ qp_attr->ah_attr.type = RDMA_AH_ATTR_TYPE_ROCE;
+ if (hr_qp->ibqp.qp_type == IB_QPT_UD)
+- qp_attr->qkey = V2_QKEY_VAL;
++ qp_attr->qkey = le32_to_cpu(context.qkey_xrcd);
+
+ qp_attr->rq_psn = roce_get_field(context.byte_108_rx_reqepsn,
+ V2_QPC_BYTE_108_RX_REQ_EPSN_M,
+diff --git a/drivers/infiniband/hw/mlx5/devx.c b/drivers/infiniband/hw/mlx5/devx.c
+index 46e1ab771f10..ed10e2f32aab 100644
+--- a/drivers/infiniband/hw/mlx5/devx.c
++++ b/drivers/infiniband/hw/mlx5/devx.c
+@@ -494,6 +494,10 @@ static u64 devx_get_obj_id(const void *in)
+ obj_id = get_enc_obj_id(MLX5_CMD_OP_CREATE_QP,
+ MLX5_GET(rst2init_qp_in, in, qpn));
+ break;
++ case MLX5_CMD_OP_INIT2INIT_QP:
++ obj_id = get_enc_obj_id(MLX5_CMD_OP_CREATE_QP,
++ MLX5_GET(init2init_qp_in, in, qpn));
++ break;
+ case MLX5_CMD_OP_INIT2RTR_QP:
+ obj_id = get_enc_obj_id(MLX5_CMD_OP_CREATE_QP,
+ MLX5_GET(init2rtr_qp_in, in, qpn));
+@@ -819,6 +823,7 @@ static bool devx_is_obj_modify_cmd(const void *in)
+ case MLX5_CMD_OP_SET_L2_TABLE_ENTRY:
+ case MLX5_CMD_OP_RST2INIT_QP:
+ case MLX5_CMD_OP_INIT2RTR_QP:
++ case MLX5_CMD_OP_INIT2INIT_QP:
+ case MLX5_CMD_OP_RTR2RTS_QP:
+ case MLX5_CMD_OP_RTS2RTS_QP:
+ case MLX5_CMD_OP_SQERR2RTS_QP:
+diff --git a/drivers/infiniband/hw/mlx5/srq.c b/drivers/infiniband/hw/mlx5/srq.c
+index b1a8a9175040..6d1ff13d2283 100644
+--- a/drivers/infiniband/hw/mlx5/srq.c
++++ b/drivers/infiniband/hw/mlx5/srq.c
+@@ -310,12 +310,18 @@ int mlx5_ib_create_srq(struct ib_srq *ib_srq,
+ srq->msrq.event = mlx5_ib_srq_event;
+ srq->ibsrq.ext.xrc.srq_num = srq->msrq.srqn;
+
+- if (udata)
+- if (ib_copy_to_udata(udata, &srq->msrq.srqn, sizeof(__u32))) {
++ if (udata) {
++ struct mlx5_ib_create_srq_resp resp = {
++ .srqn = srq->msrq.srqn,
++ };
++
++ if (ib_copy_to_udata(udata, &resp, min(udata->outlen,
++ sizeof(resp)))) {
+ mlx5_ib_dbg(dev, "copy to user failed\n");
+ err = -EFAULT;
+ goto err_core;
+ }
++ }
+
+ init_attr->attr.max_wr = srq->msrq.max - 1;
+
+diff --git a/drivers/infiniband/ulp/srpt/ib_srpt.c b/drivers/infiniband/ulp/srpt/ib_srpt.c
+index 98552749d71c..fcf982c60db6 100644
+--- a/drivers/infiniband/ulp/srpt/ib_srpt.c
++++ b/drivers/infiniband/ulp/srpt/ib_srpt.c
+@@ -610,6 +610,11 @@ static int srpt_refresh_port(struct srpt_port *sport)
+ dev_name(&sport->sdev->device->dev), sport->port,
+ PTR_ERR(sport->mad_agent));
+ sport->mad_agent = NULL;
++ memset(&port_modify, 0, sizeof(port_modify));
++ port_modify.clr_port_cap_mask = IB_PORT_DEVICE_MGMT_SUP;
++ ib_modify_port(sport->sdev->device, sport->port, 0,
++ &port_modify);
++
+ }
+ }
+
+@@ -633,9 +638,8 @@ static void srpt_unregister_mad_agent(struct srpt_device *sdev)
+ for (i = 1; i <= sdev->device->phys_port_cnt; i++) {
+ sport = &sdev->port[i - 1];
+ WARN_ON(sport->port != i);
+- if (ib_modify_port(sdev->device, i, 0, &port_modify) < 0)
+- pr_err("disabling MAD processing failed.\n");
+ if (sport->mad_agent) {
++ ib_modify_port(sdev->device, i, 0, &port_modify);
+ ib_unregister_mad_agent(sport->mad_agent);
+ sport->mad_agent = NULL;
+ }
+diff --git a/drivers/input/serio/i8042-ppcio.h b/drivers/input/serio/i8042-ppcio.h
+deleted file mode 100644
+index 391f94d9e47d..000000000000
+--- a/drivers/input/serio/i8042-ppcio.h
++++ /dev/null
+@@ -1,57 +0,0 @@
+-/* SPDX-License-Identifier: GPL-2.0-only */
+-#ifndef _I8042_PPCIO_H
+-#define _I8042_PPCIO_H
+-
+-
+-#if defined(CONFIG_WALNUT)
+-
+-#define I8042_KBD_IRQ 25
+-#define I8042_AUX_IRQ 26
+-
+-#define I8042_KBD_PHYS_DESC "walnutps2/serio0"
+-#define I8042_AUX_PHYS_DESC "walnutps2/serio1"
+-#define I8042_MUX_PHYS_DESC "walnutps2/serio%d"
+-
+-extern void *kb_cs;
+-extern void *kb_data;
+-
+-#define I8042_COMMAND_REG (*(int *)kb_cs)
+-#define I8042_DATA_REG (*(int *)kb_data)
+-
+-static inline int i8042_read_data(void)
+-{
+- return readb(kb_data);
+-}
+-
+-static inline int i8042_read_status(void)
+-{
+- return readb(kb_cs);
+-}
+-
+-static inline void i8042_write_data(int val)
+-{
+- writeb(val, kb_data);
+-}
+-
+-static inline void i8042_write_command(int val)
+-{
+- writeb(val, kb_cs);
+-}
+-
+-static inline int i8042_platform_init(void)
+-{
+- i8042_reset = I8042_RESET_ALWAYS;
+- return 0;
+-}
+-
+-static inline void i8042_platform_exit(void)
+-{
+-}
+-
+-#else
+-
+-#include "i8042-io.h"
+-
+-#endif
+-
+-#endif /* _I8042_PPCIO_H */
+diff --git a/drivers/input/serio/i8042.h b/drivers/input/serio/i8042.h
+index 38dc27ad3c18..eb376700dfff 100644
+--- a/drivers/input/serio/i8042.h
++++ b/drivers/input/serio/i8042.h
+@@ -17,8 +17,6 @@
+ #include "i8042-ip22io.h"
+ #elif defined(CONFIG_SNI_RM)
+ #include "i8042-snirm.h"
+-#elif defined(CONFIG_PPC)
+-#include "i8042-ppcio.h"
+ #elif defined(CONFIG_SPARC)
+ #include "i8042-sparcio.h"
+ #elif defined(CONFIG_X86) || defined(CONFIG_IA64)
+diff --git a/drivers/input/touchscreen/edt-ft5x06.c b/drivers/input/touchscreen/edt-ft5x06.c
+index d2587724c52a..9b8450794a8a 100644
+--- a/drivers/input/touchscreen/edt-ft5x06.c
++++ b/drivers/input/touchscreen/edt-ft5x06.c
+@@ -938,19 +938,25 @@ static void edt_ft5x06_ts_get_defaults(struct device *dev,
+
+ error = device_property_read_u32(dev, "offset", &val);
+ if (!error) {
+- edt_ft5x06_register_write(tsdata, reg_addr->reg_offset, val);
++ if (reg_addr->reg_offset != NO_REGISTER)
++ edt_ft5x06_register_write(tsdata,
++ reg_addr->reg_offset, val);
+ tsdata->offset = val;
+ }
+
+ error = device_property_read_u32(dev, "offset-x", &val);
+ if (!error) {
+- edt_ft5x06_register_write(tsdata, reg_addr->reg_offset_x, val);
++ if (reg_addr->reg_offset_x != NO_REGISTER)
++ edt_ft5x06_register_write(tsdata,
++ reg_addr->reg_offset_x, val);
+ tsdata->offset_x = val;
+ }
+
+ error = device_property_read_u32(dev, "offset-y", &val);
+ if (!error) {
+- edt_ft5x06_register_write(tsdata, reg_addr->reg_offset_y, val);
++ if (reg_addr->reg_offset_y != NO_REGISTER)
++ edt_ft5x06_register_write(tsdata,
++ reg_addr->reg_offset_y, val);
+ tsdata->offset_y = val;
+ }
+ }
+diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c
+index 82508730feb7..af21d24a09e8 100644
+--- a/drivers/iommu/arm-smmu-v3.c
++++ b/drivers/iommu/arm-smmu-v3.c
+@@ -171,6 +171,8 @@
+ #define ARM_SMMU_PRIQ_IRQ_CFG1 0xd8
+ #define ARM_SMMU_PRIQ_IRQ_CFG2 0xdc
+
++#define ARM_SMMU_REG_SZ 0xe00
++
+ /* Common MSI config fields */
+ #define MSI_CFG0_ADDR_MASK GENMASK_ULL(51, 2)
+ #define MSI_CFG2_SH GENMASK(5, 4)
+@@ -628,6 +630,7 @@ struct arm_smmu_strtab_cfg {
+ struct arm_smmu_device {
+ struct device *dev;
+ void __iomem *base;
++ void __iomem *page1;
+
+ #define ARM_SMMU_FEAT_2_LVL_STRTAB (1 << 0)
+ #define ARM_SMMU_FEAT_2_LVL_CDTAB (1 << 1)
+@@ -733,9 +736,8 @@ static struct arm_smmu_option_prop arm_smmu_options[] = {
+ static inline void __iomem *arm_smmu_page1_fixup(unsigned long offset,
+ struct arm_smmu_device *smmu)
+ {
+- if ((offset > SZ_64K) &&
+- (smmu->options & ARM_SMMU_OPT_PAGE0_REGS_ONLY))
+- offset -= SZ_64K;
++ if (offset > SZ_64K)
++ return smmu->page1 + offset - SZ_64K;
+
+ return smmu->base + offset;
+ }
+@@ -4021,6 +4023,18 @@ err_reset_pci_ops: __maybe_unused;
+ return err;
+ }
+
++static void __iomem *arm_smmu_ioremap(struct device *dev, resource_size_t start,
++ resource_size_t size)
++{
++ struct resource res = {
++ .flags = IORESOURCE_MEM,
++ .start = start,
++ .end = start + size - 1,
++ };
++
++ return devm_ioremap_resource(dev, &res);
++}
++
+ static int arm_smmu_device_probe(struct platform_device *pdev)
+ {
+ int irq, ret;
+@@ -4056,10 +4070,23 @@ static int arm_smmu_device_probe(struct platform_device *pdev)
+ }
+ ioaddr = res->start;
+
+- smmu->base = devm_ioremap_resource(dev, res);
++ /*
++ * Don't map the IMPLEMENTATION DEFINED regions, since they may contain
++ * the PMCG registers which are reserved by the PMU driver.
++ */
++ smmu->base = arm_smmu_ioremap(dev, ioaddr, ARM_SMMU_REG_SZ);
+ if (IS_ERR(smmu->base))
+ return PTR_ERR(smmu->base);
+
++ if (arm_smmu_resource_size(smmu) > SZ_64K) {
++ smmu->page1 = arm_smmu_ioremap(dev, ioaddr + SZ_64K,
++ ARM_SMMU_REG_SZ);
++ if (IS_ERR(smmu->page1))
++ return PTR_ERR(smmu->page1);
++ } else {
++ smmu->page1 = smmu->base;
++ }
++
+ /* Interrupt lines */
+
+ irq = platform_get_irq_byname_optional(pdev, "combined");
+diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
+index 11ed871dd255..fde7aba49b74 100644
+--- a/drivers/iommu/intel-iommu.c
++++ b/drivers/iommu/intel-iommu.c
+@@ -2518,9 +2518,6 @@ struct dmar_domain *find_domain(struct device *dev)
+ if (unlikely(attach_deferred(dev) || iommu_dummy(dev)))
+ return NULL;
+
+- if (dev_is_pci(dev))
+- dev = &pci_real_dma_dev(to_pci_dev(dev))->dev;
+-
+ /* No lock here, assumes no domain exit in normal case */
+ info = dev->archdata.iommu;
+ if (likely(info))
+diff --git a/drivers/mailbox/imx-mailbox.c b/drivers/mailbox/imx-mailbox.c
+index 7906624a731c..478308fb82cc 100644
+--- a/drivers/mailbox/imx-mailbox.c
++++ b/drivers/mailbox/imx-mailbox.c
+@@ -66,6 +66,8 @@ struct imx_mu_priv {
+ struct clk *clk;
+ int irq;
+
++ u32 xcr;
++
+ bool side_b;
+ };
+
+@@ -374,7 +376,7 @@ static struct mbox_chan *imx_mu_scu_xlate(struct mbox_controller *mbox,
+ break;
+ default:
+ dev_err(mbox->dev, "Invalid chan type: %d\n", type);
+- return NULL;
++ return ERR_PTR(-EINVAL);
+ }
+
+ if (chan >= mbox->num_chans) {
+@@ -558,12 +560,45 @@ static const struct of_device_id imx_mu_dt_ids[] = {
+ };
+ MODULE_DEVICE_TABLE(of, imx_mu_dt_ids);
+
++static int imx_mu_suspend_noirq(struct device *dev)
++{
++ struct imx_mu_priv *priv = dev_get_drvdata(dev);
++
++ priv->xcr = imx_mu_read(priv, priv->dcfg->xCR);
++
++ return 0;
++}
++
++static int imx_mu_resume_noirq(struct device *dev)
++{
++ struct imx_mu_priv *priv = dev_get_drvdata(dev);
++
++ /*
++ * ONLY restore MU when context lost, the TIE could
++ * be set during noirq resume as there is MU data
++ * communication going on, and restore the saved
++ * value will overwrite the TIE and cause MU data
++ * send failed, may lead to system freeze. This issue
++ * is observed by testing freeze mode suspend.
++ */
++ if (!imx_mu_read(priv, priv->dcfg->xCR))
++ imx_mu_write(priv, priv->xcr, priv->dcfg->xCR);
++
++ return 0;
++}
++
++static const struct dev_pm_ops imx_mu_pm_ops = {
++ SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(imx_mu_suspend_noirq,
++ imx_mu_resume_noirq)
++};
++
+ static struct platform_driver imx_mu_driver = {
+ .probe = imx_mu_probe,
+ .remove = imx_mu_remove,
+ .driver = {
+ .name = "imx_mu",
+ .of_match_table = imx_mu_dt_ids,
++ .pm = &imx_mu_pm_ops,
+ },
+ };
+ module_platform_driver(imx_mu_driver);
+diff --git a/drivers/mailbox/zynqmp-ipi-mailbox.c b/drivers/mailbox/zynqmp-ipi-mailbox.c
+index 86887c9a349a..f9cc674ba9b7 100644
+--- a/drivers/mailbox/zynqmp-ipi-mailbox.c
++++ b/drivers/mailbox/zynqmp-ipi-mailbox.c
+@@ -504,10 +504,9 @@ static int zynqmp_ipi_mbox_probe(struct zynqmp_ipi_mbox *ipi_mbox,
+ mchan->req_buf_size = resource_size(&res);
+ mchan->req_buf = devm_ioremap(mdev, res.start,
+ mchan->req_buf_size);
+- if (IS_ERR(mchan->req_buf)) {
++ if (!mchan->req_buf) {
+ dev_err(mdev, "Unable to map IPI buffer I/O memory\n");
+- ret = PTR_ERR(mchan->req_buf);
+- return ret;
++ return -ENOMEM;
+ }
+ } else if (ret != -ENODEV) {
+ dev_err(mdev, "Unmatched resource %s, %d.\n", name, ret);
+@@ -520,10 +519,9 @@ static int zynqmp_ipi_mbox_probe(struct zynqmp_ipi_mbox *ipi_mbox,
+ mchan->resp_buf_size = resource_size(&res);
+ mchan->resp_buf = devm_ioremap(mdev, res.start,
+ mchan->resp_buf_size);
+- if (IS_ERR(mchan->resp_buf)) {
++ if (!mchan->resp_buf) {
+ dev_err(mdev, "Unable to map IPI buffer I/O memory\n");
+- ret = PTR_ERR(mchan->resp_buf);
+- return ret;
++ return -ENOMEM;
+ }
+ } else if (ret != -ENODEV) {
+ dev_err(mdev, "Unmatched resource %s.\n", name);
+@@ -543,10 +541,9 @@ static int zynqmp_ipi_mbox_probe(struct zynqmp_ipi_mbox *ipi_mbox,
+ mchan->req_buf_size = resource_size(&res);
+ mchan->req_buf = devm_ioremap(mdev, res.start,
+ mchan->req_buf_size);
+- if (IS_ERR(mchan->req_buf)) {
++ if (!mchan->req_buf) {
+ dev_err(mdev, "Unable to map IPI buffer I/O memory\n");
+- ret = PTR_ERR(mchan->req_buf);
+- return ret;
++ return -ENOMEM;
+ }
+ } else if (ret != -ENODEV) {
+ dev_err(mdev, "Unmatched resource %s.\n", name);
+@@ -559,10 +556,9 @@ static int zynqmp_ipi_mbox_probe(struct zynqmp_ipi_mbox *ipi_mbox,
+ mchan->resp_buf_size = resource_size(&res);
+ mchan->resp_buf = devm_ioremap(mdev, res.start,
+ mchan->resp_buf_size);
+- if (IS_ERR(mchan->resp_buf)) {
++ if (!mchan->resp_buf) {
+ dev_err(mdev, "Unable to map IPI buffer I/O memory\n");
+- ret = PTR_ERR(mchan->resp_buf);
+- return ret;
++ return -ENOMEM;
+ }
+ } else if (ret != -ENODEV) {
+ dev_err(mdev, "Unmatched resource %s.\n", name);
+diff --git a/drivers/md/bcache/btree.c b/drivers/md/bcache/btree.c
+index 72856e5f23a3..fd1f288fd801 100644
+--- a/drivers/md/bcache/btree.c
++++ b/drivers/md/bcache/btree.c
+@@ -1389,7 +1389,7 @@ static int btree_gc_coalesce(struct btree *b, struct btree_op *op,
+ if (__set_blocks(n1, n1->keys + n2->keys,
+ block_bytes(b->c)) >
+ btree_blocks(new_nodes[i]))
+- goto out_nocoalesce;
++ goto out_unlock_nocoalesce;
+
+ keys = n2->keys;
+ /* Take the key of the node we're getting rid of */
+@@ -1418,7 +1418,7 @@ static int btree_gc_coalesce(struct btree *b, struct btree_op *op,
+
+ if (__bch_keylist_realloc(&keylist,
+ bkey_u64s(&new_nodes[i]->key)))
+- goto out_nocoalesce;
++ goto out_unlock_nocoalesce;
+
+ bch_btree_node_write(new_nodes[i], &cl);
+ bch_keylist_add(&keylist, &new_nodes[i]->key);
+@@ -1464,6 +1464,10 @@ static int btree_gc_coalesce(struct btree *b, struct btree_op *op,
+ /* Invalidated our iterator */
+ return -EINTR;
+
++out_unlock_nocoalesce:
++ for (i = 0; i < nodes; i++)
++ mutex_unlock(&new_nodes[i]->write_lock);
++
+ out_nocoalesce:
+ closure_sync(&cl);
+
+diff --git a/drivers/md/dm-mpath.c b/drivers/md/dm-mpath.c
+index 3e500098132f..e0c800cf87a9 100644
+--- a/drivers/md/dm-mpath.c
++++ b/drivers/md/dm-mpath.c
+@@ -1918,7 +1918,7 @@ static int multipath_prepare_ioctl(struct dm_target *ti,
+ int r;
+
+ current_pgpath = READ_ONCE(m->current_pgpath);
+- if (!current_pgpath)
++ if (!current_pgpath || !test_bit(MPATHF_QUEUE_IO, &m->flags))
+ current_pgpath = choose_pgpath(m, 0);
+
+ if (current_pgpath) {
+diff --git a/drivers/md/dm-zoned-metadata.c b/drivers/md/dm-zoned-metadata.c
+index 369de15c4e80..61b7d7b7e5a6 100644
+--- a/drivers/md/dm-zoned-metadata.c
++++ b/drivers/md/dm-zoned-metadata.c
+@@ -1554,7 +1554,7 @@ static struct dm_zone *dmz_get_rnd_zone_for_reclaim(struct dmz_metadata *zmd)
+ return dzone;
+ }
+
+- return ERR_PTR(-EBUSY);
++ return NULL;
+ }
+
+ /*
+@@ -1574,7 +1574,7 @@ static struct dm_zone *dmz_get_seq_zone_for_reclaim(struct dmz_metadata *zmd)
+ return zone;
+ }
+
+- return ERR_PTR(-EBUSY);
++ return NULL;
+ }
+
+ /*
+diff --git a/drivers/md/dm-zoned-reclaim.c b/drivers/md/dm-zoned-reclaim.c
+index e7ace908a9b7..d50817320e8e 100644
+--- a/drivers/md/dm-zoned-reclaim.c
++++ b/drivers/md/dm-zoned-reclaim.c
+@@ -349,8 +349,8 @@ static int dmz_do_reclaim(struct dmz_reclaim *zrc)
+
+ /* Get a data zone */
+ dzone = dmz_get_zone_for_reclaim(zmd);
+- if (IS_ERR(dzone))
+- return PTR_ERR(dzone);
++ if (!dzone)
++ return -EBUSY;
+
+ start = jiffies;
+
+diff --git a/drivers/media/platform/s5p-mfc/s5p_mfc.c b/drivers/media/platform/s5p-mfc/s5p_mfc.c
+index 5c2a23b953a4..eba2b9f040df 100644
+--- a/drivers/media/platform/s5p-mfc/s5p_mfc.c
++++ b/drivers/media/platform/s5p-mfc/s5p_mfc.c
+@@ -1089,6 +1089,10 @@ static struct device *s5p_mfc_alloc_memdev(struct device *dev,
+ child->coherent_dma_mask = dev->coherent_dma_mask;
+ child->dma_mask = dev->dma_mask;
+ child->release = s5p_mfc_memdev_release;
++ child->dma_parms = devm_kzalloc(dev, sizeof(*child->dma_parms),
++ GFP_KERNEL);
++ if (!child->dma_parms)
++ goto err;
+
+ /*
+ * The memdevs are not proper OF platform devices, so in order for them
+@@ -1104,7 +1108,7 @@ static struct device *s5p_mfc_alloc_memdev(struct device *dev,
+ return child;
+ device_del(child);
+ }
+-
++err:
+ put_device(child);
+ return NULL;
+ }
+diff --git a/drivers/media/v4l2-core/v4l2-ctrls.c b/drivers/media/v4l2-core/v4l2-ctrls.c
+index 452edd06d67d..99fd377f9b81 100644
+--- a/drivers/media/v4l2-core/v4l2-ctrls.c
++++ b/drivers/media/v4l2-core/v4l2-ctrls.c
+@@ -1825,7 +1825,7 @@ static int std_validate_compound(const struct v4l2_ctrl *ctrl, u32 idx,
+ sizeof(p_hevc_pps->row_height_minus1));
+
+ p_hevc_pps->flags &=
+- ~V4L2_HEVC_PPS_FLAG_PPS_LOOP_FILTER_ACROSS_SLICES_ENABLED;
++ ~V4L2_HEVC_PPS_FLAG_LOOP_FILTER_ACROSS_TILES_ENABLED;
+ }
+
+ if (p_hevc_pps->flags &
+diff --git a/drivers/mfd/stmfx.c b/drivers/mfd/stmfx.c
+index 857991cb3cbb..711979afd90a 100644
+--- a/drivers/mfd/stmfx.c
++++ b/drivers/mfd/stmfx.c
+@@ -287,14 +287,21 @@ static int stmfx_irq_init(struct i2c_client *client)
+
+ ret = regmap_write(stmfx->map, STMFX_REG_IRQ_OUT_PIN, irqoutpin);
+ if (ret)
+- return ret;
++ goto irq_exit;
+
+ ret = devm_request_threaded_irq(stmfx->dev, client->irq,
+ NULL, stmfx_irq_handler,
+ irqtrigger | IRQF_ONESHOT,
+ "stmfx", stmfx);
+ if (ret)
+- stmfx_irq_exit(client);
++ goto irq_exit;
++
++ stmfx->irq = client->irq;
++
++ return 0;
++
++irq_exit:
++ stmfx_irq_exit(client);
+
+ return ret;
+ }
+@@ -481,6 +488,8 @@ static int stmfx_suspend(struct device *dev)
+ if (ret)
+ return ret;
+
++ disable_irq(stmfx->irq);
++
+ if (stmfx->vdd)
+ return regulator_disable(stmfx->vdd);
+
+@@ -501,6 +510,13 @@ static int stmfx_resume(struct device *dev)
+ }
+ }
+
++ /* Reset STMFX - supply has been stopped during suspend */
++ ret = stmfx_chip_reset(stmfx);
++ if (ret) {
++ dev_err(stmfx->dev, "Failed to reset chip: %d\n", ret);
++ return ret;
++ }
++
+ ret = regmap_raw_write(stmfx->map, STMFX_REG_SYS_CTRL,
+ &stmfx->bkp_sysctrl, sizeof(stmfx->bkp_sysctrl));
+ if (ret)
+@@ -517,6 +533,8 @@ static int stmfx_resume(struct device *dev)
+ if (ret)
+ return ret;
+
++ enable_irq(stmfx->irq);
++
+ return 0;
+ }
+ #endif
+diff --git a/drivers/mfd/wcd934x.c b/drivers/mfd/wcd934x.c
+index 90341f3c6810..da910302d51a 100644
+--- a/drivers/mfd/wcd934x.c
++++ b/drivers/mfd/wcd934x.c
+@@ -280,7 +280,6 @@ static void wcd934x_slim_remove(struct slim_device *sdev)
+
+ regulator_bulk_disable(WCD934X_MAX_SUPPLY, ddata->supplies);
+ mfd_remove_devices(&sdev->dev);
+- kfree(ddata);
+ }
+
+ static const struct slim_device_id wcd934x_slim_id[] = {
+diff --git a/drivers/mfd/wm8994-core.c b/drivers/mfd/wm8994-core.c
+index 1e9fe7d92597..737dede4a95c 100644
+--- a/drivers/mfd/wm8994-core.c
++++ b/drivers/mfd/wm8994-core.c
+@@ -690,3 +690,4 @@ module_i2c_driver(wm8994_i2c_driver);
+ MODULE_DESCRIPTION("Core support for the WM8994 audio CODEC");
+ MODULE_LICENSE("GPL");
+ MODULE_AUTHOR("Mark Brown <broonie@opensource.wolfsonmicro.com>");
++MODULE_SOFTDEP("pre: wm8994_regulator");
+diff --git a/drivers/misc/fastrpc.c b/drivers/misc/fastrpc.c
+index e3e085e33d46..7939c55daceb 100644
+--- a/drivers/misc/fastrpc.c
++++ b/drivers/misc/fastrpc.c
+@@ -904,6 +904,7 @@ static int fastrpc_invoke_send(struct fastrpc_session_ctx *sctx,
+ struct fastrpc_channel_ctx *cctx;
+ struct fastrpc_user *fl = ctx->fl;
+ struct fastrpc_msg *msg = &ctx->msg;
++ int ret;
+
+ cctx = fl->cctx;
+ msg->pid = fl->tgid;
+@@ -919,7 +920,13 @@ static int fastrpc_invoke_send(struct fastrpc_session_ctx *sctx,
+ msg->size = roundup(ctx->msg_sz, PAGE_SIZE);
+ fastrpc_context_get(ctx);
+
+- return rpmsg_send(cctx->rpdev->ept, (void *)msg, sizeof(*msg));
++ ret = rpmsg_send(cctx->rpdev->ept, (void *)msg, sizeof(*msg));
++
++ if (ret)
++ fastrpc_context_put(ctx);
++
++ return ret;
++
+ }
+
+ static int fastrpc_internal_invoke(struct fastrpc_user *fl, u32 kernel,
+@@ -1613,8 +1620,10 @@ static int fastrpc_rpmsg_probe(struct rpmsg_device *rpdev)
+ domains[domain_id]);
+ data->miscdev.fops = &fastrpc_fops;
+ err = misc_register(&data->miscdev);
+- if (err)
++ if (err) {
++ kfree(data);
+ return err;
++ }
+
+ kref_init(&data->refcount);
+
+diff --git a/drivers/misc/habanalabs/device.c b/drivers/misc/habanalabs/device.c
+index aef4de36b7aa..6d9c298e02c7 100644
+--- a/drivers/misc/habanalabs/device.c
++++ b/drivers/misc/habanalabs/device.c
+@@ -718,7 +718,7 @@ disable_device:
+ return rc;
+ }
+
+-static void device_kill_open_processes(struct hl_device *hdev)
++static int device_kill_open_processes(struct hl_device *hdev)
+ {
+ u16 pending_total, pending_cnt;
+ struct hl_fpriv *hpriv;
+@@ -771,9 +771,7 @@ static void device_kill_open_processes(struct hl_device *hdev)
+ ssleep(1);
+ }
+
+- if (!list_empty(&hdev->fpriv_list))
+- dev_crit(hdev->dev,
+- "Going to hard reset with open user contexts\n");
++ return list_empty(&hdev->fpriv_list) ? 0 : -EBUSY;
+ }
+
+ static void device_hard_reset_pending(struct work_struct *work)
+@@ -894,7 +892,12 @@ again:
+ * process can't really exit until all its CSs are done, which
+ * is what we do in cs rollback
+ */
+- device_kill_open_processes(hdev);
++ rc = device_kill_open_processes(hdev);
++ if (rc) {
++ dev_crit(hdev->dev,
++ "Failed to kill all open processes, stopping hard reset\n");
++ goto out_err;
++ }
+
+ /* Flush the Event queue workers to make sure no other thread is
+ * reading or writing to registers during the reset
+@@ -1375,7 +1378,9 @@ void hl_device_fini(struct hl_device *hdev)
+ * can't really exit until all its CSs are done, which is what we
+ * do in cs rollback
+ */
+- device_kill_open_processes(hdev);
++ rc = device_kill_open_processes(hdev);
++ if (rc)
++ dev_crit(hdev->dev, "Failed to kill all open processes\n");
+
+ hl_cb_pool_fini(hdev);
+
+diff --git a/drivers/misc/habanalabs/habanalabs.h b/drivers/misc/habanalabs/habanalabs.h
+index 31ebcf9458fe..a6dd8e6ca594 100644
+--- a/drivers/misc/habanalabs/habanalabs.h
++++ b/drivers/misc/habanalabs/habanalabs.h
+@@ -23,7 +23,7 @@
+
+ #define HL_MMAP_CB_MASK (0x8000000000000000ull >> PAGE_SHIFT)
+
+-#define HL_PENDING_RESET_PER_SEC 5
++#define HL_PENDING_RESET_PER_SEC 30
+
+ #define HL_DEVICE_TIMEOUT_USEC 1000000 /* 1 s */
+
+diff --git a/drivers/misc/xilinx_sdfec.c b/drivers/misc/xilinx_sdfec.c
+index 71bbaa56bdb5..e2766aad9e14 100644
+--- a/drivers/misc/xilinx_sdfec.c
++++ b/drivers/misc/xilinx_sdfec.c
+@@ -602,10 +602,10 @@ static int xsdfec_table_write(struct xsdfec_dev *xsdfec, u32 offset,
+ const u32 depth)
+ {
+ u32 reg = 0;
+- u32 res;
+- u32 n, i;
++ int res, i, nr_pages;
++ u32 n;
+ u32 *addr = NULL;
+- struct page *page[MAX_NUM_PAGES];
++ struct page *pages[MAX_NUM_PAGES];
+
+ /*
+ * Writes that go beyond the length of
+@@ -622,15 +622,22 @@ static int xsdfec_table_write(struct xsdfec_dev *xsdfec, u32 offset,
+ if ((len * XSDFEC_REG_WIDTH_JUMP) % PAGE_SIZE)
+ n += 1;
+
+- res = get_user_pages_fast((unsigned long)src_ptr, n, 0, page);
+- if (res < n) {
+- for (i = 0; i < res; i++)
+- put_page(page[i]);
++ if (WARN_ON_ONCE(n > INT_MAX))
++ return -EINVAL;
++
++ nr_pages = n;
++
++ res = get_user_pages_fast((unsigned long)src_ptr, nr_pages, 0, pages);
++ if (res < nr_pages) {
++ if (res > 0) {
++ for (i = 0; i < res; i++)
++ put_page(pages[i]);
++ }
+ return -EINVAL;
+ }
+
+- for (i = 0; i < n; i++) {
+- addr = kmap(page[i]);
++ for (i = 0; i < nr_pages; i++) {
++ addr = kmap(pages[i]);
+ do {
+ xsdfec_regwrite(xsdfec,
+ base_addr + ((offset + reg) *
+@@ -639,7 +646,7 @@ static int xsdfec_table_write(struct xsdfec_dev *xsdfec, u32 offset,
+ reg++;
+ } while ((reg < len) &&
+ ((reg * XSDFEC_REG_WIDTH_JUMP) % PAGE_SIZE));
+- put_page(page[i]);
++ put_page(pages[i]);
+ }
+ return reg;
+ }
+diff --git a/drivers/net/bareudp.c b/drivers/net/bareudp.c
+index efd1a1d1f35e..5d3c691a1c66 100644
+--- a/drivers/net/bareudp.c
++++ b/drivers/net/bareudp.c
+@@ -552,6 +552,8 @@ static int bareudp_validate(struct nlattr *tb[], struct nlattr *data[],
+ static int bareudp2info(struct nlattr *data[], struct bareudp_conf *conf,
+ struct netlink_ext_ack *extack)
+ {
++ memset(conf, 0, sizeof(*conf));
++
+ if (!data[IFLA_BAREUDP_PORT]) {
+ NL_SET_ERR_MSG(extack, "port not specified");
+ return -EINVAL;
+diff --git a/drivers/net/dsa/lantiq_gswip.c b/drivers/net/dsa/lantiq_gswip.c
+index cf6fa8fede33..521ebc072903 100644
+--- a/drivers/net/dsa/lantiq_gswip.c
++++ b/drivers/net/dsa/lantiq_gswip.c
+@@ -1452,7 +1452,8 @@ static void gswip_phylink_validate(struct dsa_switch *ds, int port,
+
+ unsupported:
+ bitmap_zero(supported, __ETHTOOL_LINK_MODE_MASK_NBITS);
+- dev_err(ds->dev, "Unsupported interface: %d\n", state->interface);
++ dev_err(ds->dev, "Unsupported interface '%s' for port %d\n",
++ phy_modes(state->interface), port);
+ return;
+ }
+
+diff --git a/drivers/net/dsa/sja1105/sja1105_ptp.c b/drivers/net/dsa/sja1105/sja1105_ptp.c
+index bc0e47c1dbb9..177134596458 100644
+--- a/drivers/net/dsa/sja1105/sja1105_ptp.c
++++ b/drivers/net/dsa/sja1105/sja1105_ptp.c
+@@ -891,16 +891,16 @@ void sja1105_ptp_txtstamp_skb(struct dsa_switch *ds, int port,
+
+ mutex_lock(&ptp_data->lock);
+
+- rc = sja1105_ptpclkval_read(priv, &ticks, NULL);
++ rc = sja1105_ptpegr_ts_poll(ds, port, &ts);
+ if (rc < 0) {
+- dev_err(ds->dev, "Failed to read PTP clock: %d\n", rc);
++ dev_err(ds->dev, "timed out polling for tstamp\n");
+ kfree_skb(skb);
+ goto out;
+ }
+
+- rc = sja1105_ptpegr_ts_poll(ds, port, &ts);
++ rc = sja1105_ptpclkval_read(priv, &ticks, NULL);
+ if (rc < 0) {
+- dev_err(ds->dev, "timed out polling for tstamp\n");
++ dev_err(ds->dev, "Failed to read PTP clock: %d\n", rc);
+ kfree_skb(skb);
+ goto out;
+ }
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+index 58e0d9a781e9..19c4a0a5727a 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+@@ -10014,7 +10014,7 @@ static void bnxt_timer(struct timer_list *t)
+ struct bnxt *bp = from_timer(bp, t, timer);
+ struct net_device *dev = bp->dev;
+
+- if (!netif_running(dev))
++ if (!netif_running(dev) || !test_bit(BNXT_STATE_OPEN, &bp->state))
+ return;
+
+ if (atomic_read(&bp->intr_sem) != 0)
+@@ -12097,19 +12097,9 @@ static int bnxt_resume(struct device *device)
+ goto resume_exit;
+ }
+
+- if (bnxt_hwrm_queue_qportcfg(bp)) {
+- rc = -ENODEV;
++ rc = bnxt_hwrm_func_qcaps(bp);
++ if (rc)
+ goto resume_exit;
+- }
+-
+- if (bp->hwrm_spec_code >= 0x10803) {
+- if (bnxt_alloc_ctx_mem(bp)) {
+- rc = -ENODEV;
+- goto resume_exit;
+- }
+- }
+- if (BNXT_NEW_RM(bp))
+- bnxt_hwrm_func_resc_qcaps(bp, false);
+
+ if (bnxt_hwrm_func_drv_rgtr(bp, NULL, 0, false)) {
+ rc = -ENODEV;
+@@ -12125,6 +12115,8 @@ static int bnxt_resume(struct device *device)
+
+ resume_exit:
+ bnxt_ulp_start(bp, rc);
++ if (!rc)
++ bnxt_reenable_sriov(bp);
+ rtnl_unlock();
+ return rc;
+ }
+@@ -12168,6 +12160,9 @@ static pci_ers_result_t bnxt_io_error_detected(struct pci_dev *pdev,
+ bnxt_close(netdev);
+
+ pci_disable_device(pdev);
++ bnxt_free_ctx_mem(bp);
++ kfree(bp->ctx);
++ bp->ctx = NULL;
+ rtnl_unlock();
+
+ /* Request a slot slot reset. */
+@@ -12201,12 +12196,16 @@ static pci_ers_result_t bnxt_io_slot_reset(struct pci_dev *pdev)
+ pci_set_master(pdev);
+
+ err = bnxt_hwrm_func_reset(bp);
+- if (!err && netif_running(netdev))
+- err = bnxt_open(netdev);
+-
+- if (!err)
+- result = PCI_ERS_RESULT_RECOVERED;
++ if (!err) {
++ err = bnxt_hwrm_func_qcaps(bp);
++ if (!err && netif_running(netdev))
++ err = bnxt_open(netdev);
++ }
+ bnxt_ulp_start(bp, err);
++ if (!err) {
++ bnxt_reenable_sriov(bp);
++ result = PCI_ERS_RESULT_RECOVERED;
++ }
+ }
+
+ if (result != PCI_ERS_RESULT_RECOVERED) {
+diff --git a/drivers/net/ethernet/cavium/octeon/octeon_mgmt.c b/drivers/net/ethernet/cavium/octeon/octeon_mgmt.c
+index 9d868403d86c..cbaa1924afbe 100644
+--- a/drivers/net/ethernet/cavium/octeon/octeon_mgmt.c
++++ b/drivers/net/ethernet/cavium/octeon/octeon_mgmt.c
+@@ -234,6 +234,11 @@ static void octeon_mgmt_rx_fill_ring(struct net_device *netdev)
+
+ /* Put it in the ring. */
+ p->rx_ring[p->rx_next_fill] = re.d64;
++ /* Make sure there is no reorder of filling the ring and ringing
++ * the bell
++ */
++ wmb();
++
+ dma_sync_single_for_device(p->dev, p->rx_ring_handle,
+ ring_size_to_bytes(OCTEON_MGMT_RX_RING_SIZE),
+ DMA_BIDIRECTIONAL);
+diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c
+index 197dc5b2c090..1b4d04e4474b 100644
+--- a/drivers/net/ethernet/ibm/ibmvnic.c
++++ b/drivers/net/ethernet/ibm/ibmvnic.c
+@@ -5184,6 +5184,9 @@ static int ibmvnic_remove(struct vio_dev *dev)
+ adapter->state = VNIC_REMOVING;
+ spin_unlock_irqrestore(&adapter->state_lock, flags);
+
++ flush_work(&adapter->ibmvnic_reset);
++ flush_delayed_work(&adapter->ibmvnic_delayed_reset);
++
+ rtnl_lock();
+ unregister_netdevice(netdev);
+
+diff --git a/drivers/net/ethernet/intel/e1000e/netdev.c b/drivers/net/ethernet/intel/e1000e/netdev.c
+index df3d50e759de..5e388d4a97a1 100644
+--- a/drivers/net/ethernet/intel/e1000e/netdev.c
++++ b/drivers/net/ethernet/intel/e1000e/netdev.c
+@@ -6518,11 +6518,17 @@ static int __e1000_shutdown(struct pci_dev *pdev, bool runtime)
+ struct net_device *netdev = pci_get_drvdata(pdev);
+ struct e1000_adapter *adapter = netdev_priv(netdev);
+ struct e1000_hw *hw = &adapter->hw;
+- u32 ctrl, ctrl_ext, rctl, status;
+- /* Runtime suspend should only enable wakeup for link changes */
+- u32 wufc = runtime ? E1000_WUFC_LNKC : adapter->wol;
++ u32 ctrl, ctrl_ext, rctl, status, wufc;
+ int retval = 0;
+
++ /* Runtime suspend should only enable wakeup for link changes */
++ if (runtime)
++ wufc = E1000_WUFC_LNKC;
++ else if (device_may_wakeup(&pdev->dev))
++ wufc = adapter->wol;
++ else
++ wufc = 0;
++
+ status = er32(STATUS);
+ if (status & E1000_STATUS_LU)
+ wufc &= ~E1000_WUFC_LNKC;
+@@ -6579,7 +6585,7 @@ static int __e1000_shutdown(struct pci_dev *pdev, bool runtime)
+ if (adapter->hw.phy.type == e1000_phy_igp_3) {
+ e1000e_igp3_phy_powerdown_workaround_ich8lan(&adapter->hw);
+ } else if (hw->mac.type >= e1000_pch_lpt) {
+- if (!(wufc & (E1000_WUFC_EX | E1000_WUFC_MC | E1000_WUFC_BC)))
++ if (wufc && !(wufc & (E1000_WUFC_EX | E1000_WUFC_MC | E1000_WUFC_BC)))
+ /* ULP does not support wake from unicast, multicast
+ * or broadcast.
+ */
+diff --git a/drivers/net/ethernet/intel/iavf/iavf.h b/drivers/net/ethernet/intel/iavf/iavf.h
+index bcd11b4b29df..2d4ce6fdba1a 100644
+--- a/drivers/net/ethernet/intel/iavf/iavf.h
++++ b/drivers/net/ethernet/intel/iavf/iavf.h
+@@ -87,6 +87,10 @@ struct iavf_vsi {
+ #define IAVF_HLUT_ARRAY_SIZE ((IAVF_VFQF_HLUT_MAX_INDEX + 1) * 4)
+ #define IAVF_MBPS_DIVISOR 125000 /* divisor to convert to Mbps */
+
++#define IAVF_VIRTCHNL_VF_RESOURCE_SIZE (sizeof(struct virtchnl_vf_resource) + \
++ (IAVF_MAX_VF_VSI * \
++ sizeof(struct virtchnl_vsi_resource)))
++
+ /* MAX_MSIX_Q_VECTORS of these are allocated,
+ * but we only use one per queue-specific vector.
+ */
+@@ -306,6 +310,14 @@ struct iavf_adapter {
+ bool netdev_registered;
+ bool link_up;
+ enum virtchnl_link_speed link_speed;
++ /* This is only populated if the VIRTCHNL_VF_CAP_ADV_LINK_SPEED is set
++ * in vf_res->vf_cap_flags. Use ADV_LINK_SUPPORT macro to determine if
++ * this field is valid. This field should be used going forward and the
++ * enum virtchnl_link_speed above should be considered the legacy way of
++ * storing/communicating link speeds.
++ */
++ u32 link_speed_mbps;
++
+ enum virtchnl_ops current_op;
+ #define CLIENT_ALLOWED(_a) ((_a)->vf_res ? \
+ (_a)->vf_res->vf_cap_flags & \
+@@ -322,6 +334,8 @@ struct iavf_adapter {
+ VIRTCHNL_VF_OFFLOAD_RSS_PF)))
+ #define VLAN_ALLOWED(_a) ((_a)->vf_res->vf_cap_flags & \
+ VIRTCHNL_VF_OFFLOAD_VLAN)
++#define ADV_LINK_SUPPORT(_a) ((_a)->vf_res->vf_cap_flags & \
++ VIRTCHNL_VF_CAP_ADV_LINK_SPEED)
+ struct virtchnl_vf_resource *vf_res; /* incl. all VSIs */
+ struct virtchnl_vsi_resource *vsi_res; /* our LAN VSI */
+ struct virtchnl_version_info pf_version;
+diff --git a/drivers/net/ethernet/intel/iavf/iavf_ethtool.c b/drivers/net/ethernet/intel/iavf/iavf_ethtool.c
+index 2c39d46b6138..40a3fc7c5ea5 100644
+--- a/drivers/net/ethernet/intel/iavf/iavf_ethtool.c
++++ b/drivers/net/ethernet/intel/iavf/iavf_ethtool.c
+@@ -278,7 +278,18 @@ static int iavf_get_link_ksettings(struct net_device *netdev,
+ ethtool_link_ksettings_zero_link_mode(cmd, supported);
+ cmd->base.autoneg = AUTONEG_DISABLE;
+ cmd->base.port = PORT_NONE;
+- /* Set speed and duplex */
++ cmd->base.duplex = DUPLEX_FULL;
++
++ if (ADV_LINK_SUPPORT(adapter)) {
++ if (adapter->link_speed_mbps &&
++ adapter->link_speed_mbps < U32_MAX)
++ cmd->base.speed = adapter->link_speed_mbps;
++ else
++ cmd->base.speed = SPEED_UNKNOWN;
++
++ return 0;
++ }
++
+ switch (adapter->link_speed) {
+ case IAVF_LINK_SPEED_40GB:
+ cmd->base.speed = SPEED_40000;
+@@ -306,7 +317,6 @@ static int iavf_get_link_ksettings(struct net_device *netdev,
+ default:
+ break;
+ }
+- cmd->base.duplex = DUPLEX_FULL;
+
+ return 0;
+ }
+diff --git a/drivers/net/ethernet/intel/iavf/iavf_main.c b/drivers/net/ethernet/intel/iavf/iavf_main.c
+index 2050649848ba..a21ae74bcd1b 100644
+--- a/drivers/net/ethernet/intel/iavf/iavf_main.c
++++ b/drivers/net/ethernet/intel/iavf/iavf_main.c
+@@ -1756,17 +1756,17 @@ static int iavf_init_get_resources(struct iavf_adapter *adapter)
+ struct net_device *netdev = adapter->netdev;
+ struct pci_dev *pdev = adapter->pdev;
+ struct iavf_hw *hw = &adapter->hw;
+- int err = 0, bufsz;
++ int err;
+
+ WARN_ON(adapter->state != __IAVF_INIT_GET_RESOURCES);
+ /* aq msg sent, awaiting reply */
+ if (!adapter->vf_res) {
+- bufsz = sizeof(struct virtchnl_vf_resource) +
+- (IAVF_MAX_VF_VSI *
+- sizeof(struct virtchnl_vsi_resource));
+- adapter->vf_res = kzalloc(bufsz, GFP_KERNEL);
+- if (!adapter->vf_res)
++ adapter->vf_res = kzalloc(IAVF_VIRTCHNL_VF_RESOURCE_SIZE,
++ GFP_KERNEL);
++ if (!adapter->vf_res) {
++ err = -ENOMEM;
+ goto err;
++ }
+ }
+ err = iavf_get_vf_config(adapter);
+ if (err == IAVF_ERR_ADMIN_QUEUE_NO_WORK) {
+@@ -2036,7 +2036,7 @@ static void iavf_disable_vf(struct iavf_adapter *adapter)
+ iavf_reset_interrupt_capability(adapter);
+ iavf_free_queues(adapter);
+ iavf_free_q_vectors(adapter);
+- kfree(adapter->vf_res);
++ memset(adapter->vf_res, 0, IAVF_VIRTCHNL_VF_RESOURCE_SIZE);
+ iavf_shutdown_adminq(&adapter->hw);
+ adapter->netdev->flags &= ~IFF_UP;
+ clear_bit(__IAVF_IN_CRITICAL_TASK, &adapter->crit_section);
+@@ -2487,6 +2487,16 @@ static int iavf_validate_tx_bandwidth(struct iavf_adapter *adapter,
+ {
+ int speed = 0, ret = 0;
+
++ if (ADV_LINK_SUPPORT(adapter)) {
++ if (adapter->link_speed_mbps < U32_MAX) {
++ speed = adapter->link_speed_mbps;
++ goto validate_bw;
++ } else {
++ dev_err(&adapter->pdev->dev, "Unknown link speed\n");
++ return -EINVAL;
++ }
++ }
++
+ switch (adapter->link_speed) {
+ case IAVF_LINK_SPEED_40GB:
+ speed = 40000;
+@@ -2510,6 +2520,7 @@ static int iavf_validate_tx_bandwidth(struct iavf_adapter *adapter,
+ break;
+ }
+
++validate_bw:
+ if (max_tx_rate > speed) {
+ dev_err(&adapter->pdev->dev,
+ "Invalid tx rate specified\n");
+diff --git a/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c b/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c
+index d58374c2c33d..ca79bec4ebd9 100644
+--- a/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c
++++ b/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c
+@@ -139,7 +139,8 @@ int iavf_send_vf_config_msg(struct iavf_adapter *adapter)
+ VIRTCHNL_VF_OFFLOAD_ENCAP |
+ VIRTCHNL_VF_OFFLOAD_ENCAP_CSUM |
+ VIRTCHNL_VF_OFFLOAD_REQ_QUEUES |
+- VIRTCHNL_VF_OFFLOAD_ADQ;
++ VIRTCHNL_VF_OFFLOAD_ADQ |
++ VIRTCHNL_VF_CAP_ADV_LINK_SPEED;
+
+ adapter->current_op = VIRTCHNL_OP_GET_VF_RESOURCES;
+ adapter->aq_required &= ~IAVF_FLAG_AQ_GET_CONFIG;
+@@ -891,6 +892,8 @@ void iavf_disable_vlan_stripping(struct iavf_adapter *adapter)
+ iavf_send_pf_msg(adapter, VIRTCHNL_OP_DISABLE_VLAN_STRIPPING, NULL, 0);
+ }
+
++#define IAVF_MAX_SPEED_STRLEN 13
++
+ /**
+ * iavf_print_link_message - print link up or down
+ * @adapter: adapter structure
+@@ -900,37 +903,99 @@ void iavf_disable_vlan_stripping(struct iavf_adapter *adapter)
+ static void iavf_print_link_message(struct iavf_adapter *adapter)
+ {
+ struct net_device *netdev = adapter->netdev;
+- char *speed = "Unknown ";
++ int link_speed_mbps;
++ char *speed;
+
+ if (!adapter->link_up) {
+ netdev_info(netdev, "NIC Link is Down\n");
+ return;
+ }
+
++ speed = kcalloc(1, IAVF_MAX_SPEED_STRLEN, GFP_KERNEL);
++ if (!speed)
++ return;
++
++ if (ADV_LINK_SUPPORT(adapter)) {
++ link_speed_mbps = adapter->link_speed_mbps;
++ goto print_link_msg;
++ }
++
+ switch (adapter->link_speed) {
+ case IAVF_LINK_SPEED_40GB:
+- speed = "40 G";
++ link_speed_mbps = SPEED_40000;
+ break;
+ case IAVF_LINK_SPEED_25GB:
+- speed = "25 G";
++ link_speed_mbps = SPEED_25000;
+ break;
+ case IAVF_LINK_SPEED_20GB:
+- speed = "20 G";
++ link_speed_mbps = SPEED_20000;
+ break;
+ case IAVF_LINK_SPEED_10GB:
+- speed = "10 G";
++ link_speed_mbps = SPEED_10000;
+ break;
+ case IAVF_LINK_SPEED_1GB:
+- speed = "1000 M";
++ link_speed_mbps = SPEED_1000;
+ break;
+ case IAVF_LINK_SPEED_100MB:
+- speed = "100 M";
++ link_speed_mbps = SPEED_100;
+ break;
+ default:
++ link_speed_mbps = SPEED_UNKNOWN;
+ break;
+ }
+
+- netdev_info(netdev, "NIC Link is Up %sbps Full Duplex\n", speed);
++print_link_msg:
++ if (link_speed_mbps > SPEED_1000) {
++ if (link_speed_mbps == SPEED_2500)
++ snprintf(speed, IAVF_MAX_SPEED_STRLEN, "2.5 Gbps");
++ else
++ /* convert to Gbps inline */
++ snprintf(speed, IAVF_MAX_SPEED_STRLEN, "%d %s",
++ link_speed_mbps / 1000, "Gbps");
++ } else if (link_speed_mbps == SPEED_UNKNOWN) {
++ snprintf(speed, IAVF_MAX_SPEED_STRLEN, "%s", "Unknown Mbps");
++ } else {
++ snprintf(speed, IAVF_MAX_SPEED_STRLEN, "%u %s",
++ link_speed_mbps, "Mbps");
++ }
++
++ netdev_info(netdev, "NIC Link is Up Speed is %s Full Duplex\n", speed);
++ kfree(speed);
++}
++
++/**
++ * iavf_get_vpe_link_status
++ * @adapter: adapter structure
++ * @vpe: virtchnl_pf_event structure
++ *
++ * Helper function for determining the link status
++ **/
++static bool
++iavf_get_vpe_link_status(struct iavf_adapter *adapter,
++ struct virtchnl_pf_event *vpe)
++{
++ if (ADV_LINK_SUPPORT(adapter))
++ return vpe->event_data.link_event_adv.link_status;
++ else
++ return vpe->event_data.link_event.link_status;
++}
++
++/**
++ * iavf_set_adapter_link_speed_from_vpe
++ * @adapter: adapter structure for which we are setting the link speed
++ * @vpe: virtchnl_pf_event structure that contains the link speed we are setting
++ *
++ * Helper function for setting iavf_adapter link speed
++ **/
++static void
++iavf_set_adapter_link_speed_from_vpe(struct iavf_adapter *adapter,
++ struct virtchnl_pf_event *vpe)
++{
++ if (ADV_LINK_SUPPORT(adapter))
++ adapter->link_speed_mbps =
++ vpe->event_data.link_event_adv.link_speed;
++ else
++ adapter->link_speed = vpe->event_data.link_event.link_speed;
+ }
+
+ /**
+@@ -1160,12 +1225,11 @@ void iavf_virtchnl_completion(struct iavf_adapter *adapter,
+ if (v_opcode == VIRTCHNL_OP_EVENT) {
+ struct virtchnl_pf_event *vpe =
+ (struct virtchnl_pf_event *)msg;
+- bool link_up = vpe->event_data.link_event.link_status;
++ bool link_up = iavf_get_vpe_link_status(adapter, vpe);
+
+ switch (vpe->event) {
+ case VIRTCHNL_EVENT_LINK_CHANGE:
+- adapter->link_speed =
+- vpe->event_data.link_event.link_speed;
++ iavf_set_adapter_link_speed_from_vpe(adapter, vpe);
+
+ /* we've already got the right link status, bail */
+ if (adapter->link_up == link_up)
+diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
+index 2b5dad2ec650..b7b553602ea9 100644
+--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
++++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
+@@ -5983,8 +5983,8 @@ static int mvpp2_remove(struct platform_device *pdev)
+ {
+ struct mvpp2 *priv = platform_get_drvdata(pdev);
+ struct fwnode_handle *fwnode = pdev->dev.fwnode;
++ int i = 0, poolnum = MVPP2_BM_POOLS_NUM;
+ struct fwnode_handle *port_fwnode;
+- int i = 0;
+
+ mvpp2_dbgfs_cleanup(priv);
+
+@@ -5998,7 +5998,10 @@ static int mvpp2_remove(struct platform_device *pdev)
+
+ destroy_workqueue(priv->stats_queue);
+
+- for (i = 0; i < MVPP2_BM_POOLS_NUM; i++) {
++ if (priv->percpu_pools)
++ poolnum = mvpp2_get_nrxqs(priv) * 2;
++
++ for (i = 0; i < poolnum; i++) {
+ struct mvpp2_bm_pool *bm_pool = &priv->bm_pools[i];
+
+ mvpp2_bm_pool_destroy(&pdev->dev, priv, bm_pool);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_send.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_send.c
+index 18719acb7e54..eff8bb64899d 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_send.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_send.c
+@@ -181,7 +181,7 @@ static struct mlx5dr_qp *dr_create_rc_qp(struct mlx5_core_dev *mdev,
+ in, pas));
+
+ err = mlx5_core_create_qp(mdev, &dr_qp->mqp, in, inlen);
+- kfree(in);
++ kvfree(in);
+
+ if (err) {
+ mlx5_core_warn(mdev, " Can't create QP\n");
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum.c
+index 6b39978acd07..3e4199246a18 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum.c
+@@ -990,8 +990,10 @@ int __mlxsw_sp_port_headroom_set(struct mlxsw_sp_port *mlxsw_sp_port, int mtu,
+
+ lossy = !(pfc || pause_en);
+ thres_cells = mlxsw_sp_pg_buf_threshold_get(mlxsw_sp, mtu);
++ mlxsw_sp_port_headroom_8x_adjust(mlxsw_sp_port, &thres_cells);
+ delay_cells = mlxsw_sp_pg_buf_delay_get(mlxsw_sp, mtu, delay,
+ pfc, pause_en);
++ mlxsw_sp_port_headroom_8x_adjust(mlxsw_sp_port, &delay_cells);
+ total_cells = thres_cells + delay_cells;
+
+ taken_headroom_cells += total_cells;
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum.h b/drivers/net/ethernet/mellanox/mlxsw/spectrum.h
+index ca56e72cb4b7..e28ecb84b816 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum.h
++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum.h
+@@ -395,6 +395,19 @@ mlxsw_sp_port_vlan_find_by_vid(const struct mlxsw_sp_port *mlxsw_sp_port,
+ return NULL;
+ }
+
++static inline void
++mlxsw_sp_port_headroom_8x_adjust(const struct mlxsw_sp_port *mlxsw_sp_port,
++ u16 *p_size)
++{
++ /* Ports with eight lanes use two headroom buffers between which the
++ * configured headroom size is split. Therefore, multiply the calculated
++ * headroom size by two.
++ */
++ if (mlxsw_sp_port->mapping.width != 8)
++ return;
++ *p_size *= 2;
++}
++
+ enum mlxsw_sp_flood_type {
+ MLXSW_SP_FLOOD_TYPE_UC,
+ MLXSW_SP_FLOOD_TYPE_BC,
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_buffers.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_buffers.c
+index 968f0902e4fe..19bf0768ed78 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_buffers.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_buffers.c
+@@ -312,6 +312,7 @@ static int mlxsw_sp_port_pb_init(struct mlxsw_sp_port *mlxsw_sp_port)
+
+ if (i == MLXSW_SP_PB_UNUSED)
+ continue;
++ mlxsw_sp_port_headroom_8x_adjust(mlxsw_sp_port, &size);
+ mlxsw_reg_pbmc_lossy_buffer_pack(pbmc_pl, i, size);
+ }
+ mlxsw_reg_pbmc_lossy_buffer_pack(pbmc_pl,
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_span.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_span.c
+index 9fb2e9d93929..7c5032f9c8ff 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_span.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_span.c
+@@ -776,6 +776,7 @@ mlxsw_sp_span_port_buffsize_update(struct mlxsw_sp_port *mlxsw_sp_port, u16 mtu)
+ speed = 0;
+
+ buffsize = mlxsw_sp_span_buffsize_get(mlxsw_sp, speed, mtu);
++ mlxsw_sp_port_headroom_8x_adjust(mlxsw_sp_port, (u16 *) &buffsize);
+ mlxsw_reg_sbib_pack(sbib_pl, mlxsw_sp_port->local_port, buffsize);
+ return mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(sbib), sbib_pl);
+ }
+diff --git a/drivers/net/geneve.c b/drivers/net/geneve.c
+index 6b461be1820b..75266580b586 100644
+--- a/drivers/net/geneve.c
++++ b/drivers/net/geneve.c
+@@ -987,9 +987,10 @@ static netdev_tx_t geneve_xmit(struct sk_buff *skb, struct net_device *dev)
+ if (geneve->collect_md) {
+ info = skb_tunnel_info(skb);
+ if (unlikely(!info || !(info->mode & IP_TUNNEL_INFO_TX))) {
+- err = -EINVAL;
+ netdev_dbg(dev, "no tunnel metadata\n");
+- goto tx_error;
++ dev_kfree_skb(skb);
++ dev->stats.tx_dropped++;
++ return NETDEV_TX_OK;
+ }
+ } else {
+ info = &geneve->info;
+@@ -1006,7 +1007,7 @@ static netdev_tx_t geneve_xmit(struct sk_buff *skb, struct net_device *dev)
+
+ if (likely(!err))
+ return NETDEV_TX_OK;
+-tx_error:
++
+ dev_kfree_skb(skb);
+
+ if (err == -ELOOP)
+diff --git a/drivers/net/hamradio/yam.c b/drivers/net/hamradio/yam.c
+index 71cdef9fb56b..5ab53e9942f3 100644
+--- a/drivers/net/hamradio/yam.c
++++ b/drivers/net/hamradio/yam.c
+@@ -1133,6 +1133,7 @@ static int __init yam_init_driver(void)
+ err = register_netdev(dev);
+ if (err) {
+ printk(KERN_WARNING "yam: cannot register net device %s\n", dev->name);
++ free_netdev(dev);
+ goto error;
+ }
+ yam_devs[i] = dev;
+diff --git a/drivers/net/ipa/ipa_endpoint.c b/drivers/net/ipa/ipa_endpoint.c
+index a21534f1462f..1d823ac0f6d6 100644
+--- a/drivers/net/ipa/ipa_endpoint.c
++++ b/drivers/net/ipa/ipa_endpoint.c
+@@ -669,10 +669,12 @@ static void ipa_endpoint_init_seq(struct ipa_endpoint *endpoint)
+ u32 seq_type = endpoint->seq_type;
+ u32 val = 0;
+
++ /* Sequencer type is made up of four nibbles */
+ val |= u32_encode_bits(seq_type & 0xf, HPS_SEQ_TYPE_FMASK);
+ val |= u32_encode_bits((seq_type >> 4) & 0xf, DPS_SEQ_TYPE_FMASK);
+- /* HPS_REP_SEQ_TYPE is 0 */
+- /* DPS_REP_SEQ_TYPE is 0 */
++ /* The second two apply to replicated packets */
++ val |= u32_encode_bits((seq_type >> 8) & 0xf, HPS_REP_SEQ_TYPE_FMASK);
++ val |= u32_encode_bits((seq_type >> 12) & 0xf, DPS_REP_SEQ_TYPE_FMASK);
+
+ iowrite32(val, endpoint->ipa->reg_virt + offset);
+ }
+diff --git a/drivers/net/ipa/ipa_reg.h b/drivers/net/ipa/ipa_reg.h
+index 3b8106aa277a..0a688d8c1d7c 100644
+--- a/drivers/net/ipa/ipa_reg.h
++++ b/drivers/net/ipa/ipa_reg.h
+@@ -455,6 +455,8 @@ enum ipa_mode {
+ * second packet processing pass + no decipher + microcontroller
+ * @IPA_SEQ_DMA_DEC: DMA + cipher/decipher
+ * @IPA_SEQ_DMA_COMP_DECOMP: DMA + compression/decompression
++ * @IPA_SEQ_PKT_PROCESS_NO_DEC_NO_UCP_DMAP:
++ * packet processing + no decipher + no uCP + HPS REP DMA parser
+ * @IPA_SEQ_INVALID: invalid sequencer type
+ *
+ * The values defined here are broken into 4-bit nibbles that are written
+diff --git a/drivers/net/phy/dp83867.c b/drivers/net/phy/dp83867.c
+index b55e3c0403ed..ddac79960ea7 100644
+--- a/drivers/net/phy/dp83867.c
++++ b/drivers/net/phy/dp83867.c
+@@ -488,7 +488,7 @@ static int dp83867_verify_rgmii_cfg(struct phy_device *phydev)
+ return 0;
+ }
+
+-#ifdef CONFIG_OF_MDIO
++#if IS_ENABLED(CONFIG_OF_MDIO)
+ static int dp83867_of_init(struct phy_device *phydev)
+ {
+ struct dp83867_private *dp83867 = phydev->priv;
+diff --git a/drivers/net/phy/marvell.c b/drivers/net/phy/marvell.c
+index 7fc8e10c5f33..a435f7352cfb 100644
+--- a/drivers/net/phy/marvell.c
++++ b/drivers/net/phy/marvell.c
+@@ -337,7 +337,7 @@ static int m88e1101_config_aneg(struct phy_device *phydev)
+ return marvell_config_aneg(phydev);
+ }
+
+-#ifdef CONFIG_OF_MDIO
++#if IS_ENABLED(CONFIG_OF_MDIO)
+ /* Set and/or override some configuration registers based on the
+ * marvell,reg-init property stored in the of_node for the phydev.
+ *
+diff --git a/drivers/net/phy/mdio_bus.c b/drivers/net/phy/mdio_bus.c
+index 7a4eb3f2cb74..a1a4dee2a033 100644
+--- a/drivers/net/phy/mdio_bus.c
++++ b/drivers/net/phy/mdio_bus.c
+@@ -757,6 +757,7 @@ EXPORT_SYMBOL(mdiobus_scan);
+
+ static void mdiobus_stats_acct(struct mdio_bus_stats *stats, bool op, int ret)
+ {
++ preempt_disable();
+ u64_stats_update_begin(&stats->syncp);
+
+ u64_stats_inc(&stats->transfers);
+@@ -771,6 +772,7 @@ static void mdiobus_stats_acct(struct mdio_bus_stats *stats, bool op, int ret)
+ u64_stats_inc(&stats->writes);
+ out:
+ u64_stats_update_end(&stats->syncp);
++ preempt_enable();
+ }
+
+ /**
+diff --git a/drivers/net/phy/mscc/mscc.h b/drivers/net/phy/mscc/mscc.h
+index 414e3b31bb1f..132f9bf49198 100644
+--- a/drivers/net/phy/mscc/mscc.h
++++ b/drivers/net/phy/mscc/mscc.h
+@@ -375,7 +375,7 @@ struct vsc8531_private {
+ #endif
+ };
+
+-#ifdef CONFIG_OF_MDIO
++#if IS_ENABLED(CONFIG_OF_MDIO)
+ struct vsc8531_edge_rate_table {
+ u32 vddmac;
+ u32 slowdown[8];
+diff --git a/drivers/net/phy/mscc/mscc_main.c b/drivers/net/phy/mscc/mscc_main.c
+index c8aa6d905d8e..485a4f8a6a9a 100644
+--- a/drivers/net/phy/mscc/mscc_main.c
++++ b/drivers/net/phy/mscc/mscc_main.c
+@@ -98,7 +98,7 @@ static const struct vsc85xx_hw_stat vsc8584_hw_stats[] = {
+ },
+ };
+
+-#ifdef CONFIG_OF_MDIO
++#if IS_ENABLED(CONFIG_OF_MDIO)
+ static const struct vsc8531_edge_rate_table edge_table[] = {
+ {MSCC_VDDMAC_3300, { 0, 2, 4, 7, 10, 17, 29, 53} },
+ {MSCC_VDDMAC_2500, { 0, 3, 6, 10, 14, 23, 37, 63} },
+@@ -382,7 +382,7 @@ out_unlock:
+ mutex_unlock(&phydev->lock);
+ }
+
+-#ifdef CONFIG_OF_MDIO
++#if IS_ENABLED(CONFIG_OF_MDIO)
+ static int vsc85xx_edge_rate_magic_get(struct phy_device *phydev)
+ {
+ u32 vdd, sd;
+diff --git a/drivers/ntb/core.c b/drivers/ntb/core.c
+index 2581ab724c34..f8f75a504a58 100644
+--- a/drivers/ntb/core.c
++++ b/drivers/ntb/core.c
+@@ -214,10 +214,8 @@ int ntb_default_port_number(struct ntb_dev *ntb)
+ case NTB_TOPO_B2B_DSD:
+ return NTB_PORT_SEC_DSD;
+ default:
+- break;
++ return 0;
+ }
+-
+- return -EINVAL;
+ }
+ EXPORT_SYMBOL(ntb_default_port_number);
+
+@@ -240,10 +238,8 @@ int ntb_default_peer_port_number(struct ntb_dev *ntb, int pidx)
+ case NTB_TOPO_B2B_DSD:
+ return NTB_PORT_PRI_USD;
+ default:
+- break;
++ return 0;
+ }
+-
+- return -EINVAL;
+ }
+ EXPORT_SYMBOL(ntb_default_peer_port_number);
+
+@@ -315,4 +311,3 @@ static void __exit ntb_driver_exit(void)
+ bus_unregister(&ntb_bus);
+ }
+ module_exit(ntb_driver_exit);
+-
+diff --git a/drivers/ntb/test/ntb_perf.c b/drivers/ntb/test/ntb_perf.c
+index 972f6d984f6d..528751803419 100644
+--- a/drivers/ntb/test/ntb_perf.c
++++ b/drivers/ntb/test/ntb_perf.c
+@@ -159,6 +159,8 @@ struct perf_peer {
+ /* NTB connection setup service */
+ struct work_struct service;
+ unsigned long sts;
++
++ struct completion init_comp;
+ };
+ #define to_peer_service(__work) \
+ container_of(__work, struct perf_peer, service)
+@@ -547,6 +549,7 @@ static int perf_setup_outbuf(struct perf_peer *peer)
+
+ /* Initialization is finally done */
+ set_bit(PERF_STS_DONE, &peer->sts);
++ complete_all(&peer->init_comp);
+
+ return 0;
+ }
+@@ -557,7 +560,7 @@ static void perf_free_inbuf(struct perf_peer *peer)
+ return;
+
+ (void)ntb_mw_clear_trans(peer->perf->ntb, peer->pidx, peer->gidx);
+- dma_free_coherent(&peer->perf->ntb->dev, peer->inbuf_size,
++ dma_free_coherent(&peer->perf->ntb->pdev->dev, peer->inbuf_size,
+ peer->inbuf, peer->inbuf_xlat);
+ peer->inbuf = NULL;
+ }
+@@ -586,8 +589,9 @@ static int perf_setup_inbuf(struct perf_peer *peer)
+
+ perf_free_inbuf(peer);
+
+- peer->inbuf = dma_alloc_coherent(&perf->ntb->dev, peer->inbuf_size,
+- &peer->inbuf_xlat, GFP_KERNEL);
++ peer->inbuf = dma_alloc_coherent(&perf->ntb->pdev->dev,
++ peer->inbuf_size, &peer->inbuf_xlat,
++ GFP_KERNEL);
+ if (!peer->inbuf) {
+ dev_err(&perf->ntb->dev, "Failed to alloc inbuf of %pa\n",
+ &peer->inbuf_size);
+@@ -637,6 +641,7 @@ static void perf_service_work(struct work_struct *work)
+ perf_setup_outbuf(peer);
+
+ if (test_and_clear_bit(PERF_CMD_CLEAR, &peer->sts)) {
++ init_completion(&peer->init_comp);
+ clear_bit(PERF_STS_DONE, &peer->sts);
+ if (test_bit(0, &peer->perf->busy_flag) &&
+ peer == peer->perf->test_peer) {
+@@ -653,7 +658,7 @@ static int perf_init_service(struct perf_ctx *perf)
+ {
+ u64 mask;
+
+- if (ntb_peer_mw_count(perf->ntb) < perf->pcnt + 1) {
++ if (ntb_peer_mw_count(perf->ntb) < perf->pcnt) {
+ dev_err(&perf->ntb->dev, "Not enough memory windows\n");
+ return -EINVAL;
+ }
+@@ -1083,8 +1088,9 @@ static int perf_submit_test(struct perf_peer *peer)
+ struct perf_thread *pthr;
+ int tidx, ret;
+
+- if (!test_bit(PERF_STS_DONE, &peer->sts))
+- return -ENOLINK;
++ ret = wait_for_completion_interruptible(&peer->init_comp);
++ if (ret < 0)
++ return ret;
+
+ if (test_and_set_bit_lock(0, &perf->busy_flag))
+ return -EBUSY;
+@@ -1455,10 +1461,21 @@ static int perf_init_peers(struct perf_ctx *perf)
+ peer->gidx = pidx;
+ }
+ INIT_WORK(&peer->service, perf_service_work);
++ init_completion(&peer->init_comp);
+ }
+ if (perf->gidx == -1)
+ perf->gidx = pidx;
+
++ /*
++ * Hardware with only two ports may not have unique port
++ * numbers. In this case, the gidxs should all be zero.
++ */
++ if (perf->pcnt == 1 && ntb_port_number(perf->ntb) == 0 &&
++ ntb_peer_port_number(perf->ntb, 0) == 0) {
++ perf->gidx = 0;
++ perf->peers[0].gidx = 0;
++ }
++
+ for (pidx = 0; pidx < perf->pcnt; pidx++) {
+ ret = perf_setup_peer_mw(&perf->peers[pidx]);
+ if (ret)
+@@ -1554,4 +1571,3 @@ static void __exit perf_exit(void)
+ destroy_workqueue(perf_wq);
+ }
+ module_exit(perf_exit);
+-
+diff --git a/drivers/ntb/test/ntb_pingpong.c b/drivers/ntb/test/ntb_pingpong.c
+index 04dd46647db3..2164e8492772 100644
+--- a/drivers/ntb/test/ntb_pingpong.c
++++ b/drivers/ntb/test/ntb_pingpong.c
+@@ -121,15 +121,14 @@ static int pp_find_next_peer(struct pp_ctx *pp)
+ link = ntb_link_is_up(pp->ntb, NULL, NULL);
+
+ /* Find next available peer */
+- if (link & pp->nmask) {
++ if (link & pp->nmask)
+ pidx = __ffs64(link & pp->nmask);
+- out_db = BIT_ULL(pidx + 1);
+- } else if (link & pp->pmask) {
++ else if (link & pp->pmask)
+ pidx = __ffs64(link & pp->pmask);
+- out_db = BIT_ULL(pidx);
+- } else {
++ else
+ return -ENODEV;
+- }
++
++ out_db = BIT_ULL(ntb_peer_port_number(pp->ntb, pidx));
+
+ spin_lock(&pp->lock);
+ pp->out_pidx = pidx;
+@@ -303,7 +302,7 @@ static void pp_init_flds(struct pp_ctx *pp)
+ break;
+ }
+
+- pp->in_db = BIT_ULL(pidx);
++ pp->in_db = BIT_ULL(lport);
+ pp->pmask = GENMASK_ULL(pidx, 0) >> 1;
+ pp->nmask = GENMASK_ULL(pcnt - 1, pidx);
+
+@@ -432,4 +431,3 @@ static void __exit pp_exit(void)
+ debugfs_remove_recursive(pp_dbgfs_topdir);
+ }
+ module_exit(pp_exit);
+-
+diff --git a/drivers/ntb/test/ntb_tool.c b/drivers/ntb/test/ntb_tool.c
+index 69da758fe64c..b7bf3f863d79 100644
+--- a/drivers/ntb/test/ntb_tool.c
++++ b/drivers/ntb/test/ntb_tool.c
+@@ -504,7 +504,7 @@ static ssize_t tool_peer_link_read(struct file *filep, char __user *ubuf,
+ buf[1] = '\n';
+ buf[2] = '\0';
+
+- return simple_read_from_buffer(ubuf, size, offp, buf, 3);
++ return simple_read_from_buffer(ubuf, size, offp, buf, 2);
+ }
+
+ static TOOL_FOPS_RDWR(tool_peer_link_fops,
+@@ -590,7 +590,7 @@ static int tool_setup_mw(struct tool_ctx *tc, int pidx, int widx,
+ inmw->size = min_t(resource_size_t, req_size, size);
+ inmw->size = round_up(inmw->size, addr_align);
+ inmw->size = round_up(inmw->size, size_align);
+- inmw->mm_base = dma_alloc_coherent(&tc->ntb->dev, inmw->size,
++ inmw->mm_base = dma_alloc_coherent(&tc->ntb->pdev->dev, inmw->size,
+ &inmw->dma_base, GFP_KERNEL);
+ if (!inmw->mm_base)
+ return -ENOMEM;
+@@ -612,7 +612,7 @@ static int tool_setup_mw(struct tool_ctx *tc, int pidx, int widx,
+ return 0;
+
+ err_free_dma:
+- dma_free_coherent(&tc->ntb->dev, inmw->size, inmw->mm_base,
++ dma_free_coherent(&tc->ntb->pdev->dev, inmw->size, inmw->mm_base,
+ inmw->dma_base);
+ inmw->mm_base = NULL;
+ inmw->dma_base = 0;
+@@ -629,7 +629,7 @@ static void tool_free_mw(struct tool_ctx *tc, int pidx, int widx)
+
+ if (inmw->mm_base != NULL) {
+ ntb_mw_clear_trans(tc->ntb, pidx, widx);
+- dma_free_coherent(&tc->ntb->dev, inmw->size,
++ dma_free_coherent(&tc->ntb->pdev->dev, inmw->size,
+ inmw->mm_base, inmw->dma_base);
+ }
+
+@@ -1690,4 +1690,3 @@ static void __exit tool_exit(void)
+ debugfs_remove_recursive(tool_dbgfs_topdir);
+ }
+ module_exit(tool_exit);
+-
+diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c
+index 5ef4a84c442a..564e3f220ac7 100644
+--- a/drivers/nvme/host/fc.c
++++ b/drivers/nvme/host/fc.c
+@@ -2300,10 +2300,11 @@ nvme_fc_start_fcp_op(struct nvme_fc_ctrl *ctrl, struct nvme_fc_queue *queue,
+ opstate = atomic_xchg(&op->state, FCPOP_STATE_COMPLETE);
+ __nvme_fc_fcpop_chk_teardowns(ctrl, op, opstate);
+
+- if (!(op->flags & FCOP_FLAGS_AEN))
++ if (!(op->flags & FCOP_FLAGS_AEN)) {
+ nvme_fc_unmap_data(ctrl, op->rq, op);
++ nvme_cleanup_cmd(op->rq);
++ }
+
+- nvme_cleanup_cmd(op->rq);
+ nvme_fc_ctrl_put(ctrl);
+
+ if (ctrl->rport->remoteport.port_state == FC_OBJSTATE_ONLINE &&
+diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
+index 076bdd90c922..4ad629eb3bc6 100644
+--- a/drivers/nvme/host/pci.c
++++ b/drivers/nvme/host/pci.c
+@@ -2958,9 +2958,15 @@ static int nvme_suspend(struct device *dev)
+ * the PCI bus layer to put it into D3 in order to take the PCIe link
+ * down, so as to allow the platform to achieve its minimum low-power
+ * state (which may not be possible if the link is up).
++ *
++ * If a host memory buffer is enabled, shut down the device as the NVMe
++ * specification allows the device to access the host memory buffer in
++ * host DRAM from all power states, but hosts will fail access to DRAM
++ * during S3.
+ */
+ if (pm_suspend_via_firmware() || !ctrl->npss ||
+ !pcie_aspm_enabled(pdev) ||
++ ndev->nr_host_mem_descs ||
+ (ndev->ctrl.quirks & NVME_QUIRK_SIMPLE_SUSPEND))
+ return nvme_disable_prepare_reset(ndev, true);
+
+diff --git a/drivers/nvmem/core.c b/drivers/nvmem/core.c
+index 05c6ae4b0b97..a8300202a7fb 100644
+--- a/drivers/nvmem/core.c
++++ b/drivers/nvmem/core.c
+@@ -66,6 +66,30 @@ static LIST_HEAD(nvmem_lookup_list);
+
+ static BLOCKING_NOTIFIER_HEAD(nvmem_notifier);
+
++static int nvmem_reg_read(struct nvmem_device *nvmem, unsigned int offset,
++ void *val, size_t bytes)
++{
++ if (nvmem->reg_read)
++ return nvmem->reg_read(nvmem->priv, offset, val, bytes);
++
++ return -EINVAL;
++}
++
++static int nvmem_reg_write(struct nvmem_device *nvmem, unsigned int offset,
++ void *val, size_t bytes)
++{
++ int ret;
++
++ if (nvmem->reg_write) {
++ gpiod_set_value_cansleep(nvmem->wp_gpio, 0);
++ ret = nvmem->reg_write(nvmem->priv, offset, val, bytes);
++ gpiod_set_value_cansleep(nvmem->wp_gpio, 1);
++ return ret;
++ }
++
++ return -EINVAL;
++}
++
+ #ifdef CONFIG_NVMEM_SYSFS
+ static const char * const nvmem_type_str[] = {
+ [NVMEM_TYPE_UNKNOWN] = "Unknown",
+@@ -122,7 +146,7 @@ static ssize_t bin_attr_nvmem_read(struct file *filp, struct kobject *kobj,
+ if (!nvmem->reg_read)
+ return -EPERM;
+
+- rc = nvmem->reg_read(nvmem->priv, pos, buf, count);
++ rc = nvmem_reg_read(nvmem, pos, buf, count);
+
+ if (rc)
+ return rc;
+@@ -159,7 +183,7 @@ static ssize_t bin_attr_nvmem_write(struct file *filp, struct kobject *kobj,
+ if (!nvmem->reg_write)
+ return -EPERM;
+
+- rc = nvmem->reg_write(nvmem->priv, pos, buf, count);
++ rc = nvmem_reg_write(nvmem, pos, buf, count);
+
+ if (rc)
+ return rc;
+@@ -311,30 +335,6 @@ static void nvmem_sysfs_remove_compat(struct nvmem_device *nvmem,
+
+ #endif /* CONFIG_NVMEM_SYSFS */
+
+-static int nvmem_reg_read(struct nvmem_device *nvmem, unsigned int offset,
+- void *val, size_t bytes)
+-{
+- if (nvmem->reg_read)
+- return nvmem->reg_read(nvmem->priv, offset, val, bytes);
+-
+- return -EINVAL;
+-}
+-
+-static int nvmem_reg_write(struct nvmem_device *nvmem, unsigned int offset,
+- void *val, size_t bytes)
+-{
+- int ret;
+-
+- if (nvmem->reg_write) {
+- gpiod_set_value_cansleep(nvmem->wp_gpio, 0);
+- ret = nvmem->reg_write(nvmem->priv, offset, val, bytes);
+- gpiod_set_value_cansleep(nvmem->wp_gpio, 1);
+- return ret;
+- }
+-
+- return -EINVAL;
+-}
+-
+ static void nvmem_release(struct device *dev)
+ {
+ struct nvmem_device *nvmem = to_nvmem_device(dev);
+diff --git a/drivers/of/kobj.c b/drivers/of/kobj.c
+index c72eef988041..a32e60b024b8 100644
+--- a/drivers/of/kobj.c
++++ b/drivers/of/kobj.c
+@@ -134,8 +134,6 @@ int __of_attach_node_sysfs(struct device_node *np)
+ if (!name)
+ return -ENOMEM;
+
+- of_node_get(np);
+-
+ rc = kobject_add(&np->kobj, parent, "%s", name);
+ kfree(name);
+ if (rc)
+@@ -144,6 +142,7 @@ int __of_attach_node_sysfs(struct device_node *np)
+ for_each_property_of_node(np, pp)
+ __of_add_property_sysfs(np, pp);
+
++ of_node_get(np);
+ return 0;
+ }
+
+diff --git a/drivers/of/property.c b/drivers/of/property.c
+index b4916dcc9e72..6dc542af5a70 100644
+--- a/drivers/of/property.c
++++ b/drivers/of/property.c
+@@ -1045,8 +1045,20 @@ static int of_link_to_phandle(struct device *dev, struct device_node *sup_np,
+ * Find the device node that contains the supplier phandle. It may be
+ * @sup_np or it may be an ancestor of @sup_np.
+ */
+- while (sup_np && !of_find_property(sup_np, "compatible", NULL))
++ while (sup_np) {
++
++ /* Don't allow linking to a disabled supplier */
++ if (!of_device_is_available(sup_np)) {
++ of_node_put(sup_np);
++ sup_np = NULL;
++ }
++
++ if (of_find_property(sup_np, "compatible", NULL))
++ break;
++
+ sup_np = of_get_next_parent(sup_np);
++ }
++
+ if (!sup_np) {
+ dev_dbg(dev, "Not linking to %pOFP - No device\n", tmp_np);
+ return -ENODEV;
+@@ -1296,7 +1308,7 @@ static int of_link_to_suppliers(struct device *dev,
+ if (of_link_property(dev, con_np, p->name))
+ ret = -ENODEV;
+
+- for_each_child_of_node(con_np, child)
++ for_each_available_child_of_node(con_np, child)
+ if (of_link_to_suppliers(dev, child) && !ret)
+ ret = -EAGAIN;
+
+diff --git a/drivers/pci/controller/dwc/pci-dra7xx.c b/drivers/pci/controller/dwc/pci-dra7xx.c
+index 3b0e58f2de58..6184ebc9392d 100644
+--- a/drivers/pci/controller/dwc/pci-dra7xx.c
++++ b/drivers/pci/controller/dwc/pci-dra7xx.c
+@@ -840,7 +840,6 @@ static int __init dra7xx_pcie_probe(struct platform_device *pdev)
+ struct phy **phy;
+ struct device_link **link;
+ void __iomem *base;
+- struct resource *res;
+ struct dw_pcie *pci;
+ struct dra7xx_pcie *dra7xx;
+ struct device *dev = &pdev->dev;
+@@ -877,10 +876,9 @@ static int __init dra7xx_pcie_probe(struct platform_device *pdev)
+ return irq;
+ }
+
+- res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "ti_conf");
+- base = devm_ioremap(dev, res->start, resource_size(res));
+- if (!base)
+- return -ENOMEM;
++ base = devm_platform_ioremap_resource_byname(pdev, "ti_conf");
++ if (IS_ERR(base))
++ return PTR_ERR(base);
+
+ phy_count = of_property_count_strings(np, "phy-names");
+ if (phy_count < 0) {
+diff --git a/drivers/pci/controller/dwc/pci-meson.c b/drivers/pci/controller/dwc/pci-meson.c
+index 3715dceca1bf..ca59ba9e0ecd 100644
+--- a/drivers/pci/controller/dwc/pci-meson.c
++++ b/drivers/pci/controller/dwc/pci-meson.c
+@@ -289,11 +289,11 @@ static void meson_pcie_init_dw(struct meson_pcie *mp)
+ meson_cfg_writel(mp, val, PCIE_CFG0);
+
+ val = meson_elb_readl(mp, PCIE_PORT_LINK_CTRL_OFF);
+- val &= ~LINK_CAPABLE_MASK;
++ val &= ~(LINK_CAPABLE_MASK | FAST_LINK_MODE);
+ meson_elb_writel(mp, val, PCIE_PORT_LINK_CTRL_OFF);
+
+ val = meson_elb_readl(mp, PCIE_PORT_LINK_CTRL_OFF);
+- val |= LINK_CAPABLE_X1 | FAST_LINK_MODE;
++ val |= LINK_CAPABLE_X1;
+ meson_elb_writel(mp, val, PCIE_PORT_LINK_CTRL_OFF);
+
+ val = meson_elb_readl(mp, PCIE_GEN2_CTRL_OFF);
+diff --git a/drivers/pci/controller/dwc/pcie-designware-host.c b/drivers/pci/controller/dwc/pcie-designware-host.c
+index 395feb8ca051..3c43311bb95c 100644
+--- a/drivers/pci/controller/dwc/pcie-designware-host.c
++++ b/drivers/pci/controller/dwc/pcie-designware-host.c
+@@ -264,6 +264,8 @@ int dw_pcie_allocate_domains(struct pcie_port *pp)
+ return -ENOMEM;
+ }
+
++ irq_domain_update_bus_token(pp->irq_domain, DOMAIN_BUS_NEXUS);
++
+ pp->msi_domain = pci_msi_create_irq_domain(fwnode,
+ &dw_pcie_msi_domain_info,
+ pp->irq_domain);
+diff --git a/drivers/pci/controller/pci-aardvark.c b/drivers/pci/controller/pci-aardvark.c
+index 2a20b649f40c..2ecc79c03ade 100644
+--- a/drivers/pci/controller/pci-aardvark.c
++++ b/drivers/pci/controller/pci-aardvark.c
+@@ -9,6 +9,7 @@
+ */
+
+ #include <linux/delay.h>
++#include <linux/gpio.h>
+ #include <linux/interrupt.h>
+ #include <linux/irq.h>
+ #include <linux/irqdomain.h>
+@@ -18,6 +19,7 @@
+ #include <linux/platform_device.h>
+ #include <linux/msi.h>
+ #include <linux/of_address.h>
++#include <linux/of_gpio.h>
+ #include <linux/of_pci.h>
+
+ #include "../pci.h"
+@@ -40,6 +42,7 @@
+ #define PCIE_CORE_LINK_CTRL_STAT_REG 0xd0
+ #define PCIE_CORE_LINK_L0S_ENTRY BIT(0)
+ #define PCIE_CORE_LINK_TRAINING BIT(5)
++#define PCIE_CORE_LINK_SPEED_SHIFT 16
+ #define PCIE_CORE_LINK_WIDTH_SHIFT 20
+ #define PCIE_CORE_ERR_CAPCTL_REG 0x118
+ #define PCIE_CORE_ERR_CAPCTL_ECRC_CHK_TX BIT(5)
+@@ -201,7 +204,9 @@ struct advk_pcie {
+ struct mutex msi_used_lock;
+ u16 msi_msg;
+ int root_bus_nr;
++ int link_gen;
+ struct pci_bridge_emul bridge;
++ struct gpio_desc *reset_gpio;
+ };
+
+ static inline void advk_writel(struct advk_pcie *pcie, u32 val, u64 reg)
+@@ -225,20 +230,16 @@ static int advk_pcie_link_up(struct advk_pcie *pcie)
+
+ static int advk_pcie_wait_for_link(struct advk_pcie *pcie)
+ {
+- struct device *dev = &pcie->pdev->dev;
+ int retries;
+
+ /* check if the link is up or not */
+ for (retries = 0; retries < LINK_WAIT_MAX_RETRIES; retries++) {
+- if (advk_pcie_link_up(pcie)) {
+- dev_info(dev, "link up\n");
++ if (advk_pcie_link_up(pcie))
+ return 0;
+- }
+
+ usleep_range(LINK_WAIT_USLEEP_MIN, LINK_WAIT_USLEEP_MAX);
+ }
+
+- dev_err(dev, "link never came up\n");
+ return -ETIMEDOUT;
+ }
+
+@@ -253,10 +254,110 @@ static void advk_pcie_wait_for_retrain(struct advk_pcie *pcie)
+ }
+ }
+
++static int advk_pcie_train_at_gen(struct advk_pcie *pcie, int gen)
++{
++ int ret, neg_gen;
++ u32 reg;
++
++ /* Setup link speed */
++ reg = advk_readl(pcie, PCIE_CORE_CTRL0_REG);
++ reg &= ~PCIE_GEN_SEL_MSK;
++ if (gen == 3)
++ reg |= SPEED_GEN_3;
++ else if (gen == 2)
++ reg |= SPEED_GEN_2;
++ else
++ reg |= SPEED_GEN_1;
++ advk_writel(pcie, reg, PCIE_CORE_CTRL0_REG);
++
++ /*
++ * Enable link training. This is not needed in every call to this
++ * function, just once suffices, but it does not break anything either.
++ */
++ reg = advk_readl(pcie, PCIE_CORE_CTRL0_REG);
++ reg |= LINK_TRAINING_EN;
++ advk_writel(pcie, reg, PCIE_CORE_CTRL0_REG);
++
++ /*
++ * Start link training immediately after enabling it.
++ * This solves problems for some buggy cards.
++ */
++ reg = advk_readl(pcie, PCIE_CORE_LINK_CTRL_STAT_REG);
++ reg |= PCIE_CORE_LINK_TRAINING;
++ advk_writel(pcie, reg, PCIE_CORE_LINK_CTRL_STAT_REG);
++
++ ret = advk_pcie_wait_for_link(pcie);
++ if (ret)
++ return ret;
++
++ reg = advk_readl(pcie, PCIE_CORE_LINK_CTRL_STAT_REG);
++ neg_gen = (reg >> PCIE_CORE_LINK_SPEED_SHIFT) & 0xf;
++
++ return neg_gen;
++}
++
++static void advk_pcie_train_link(struct advk_pcie *pcie)
++{
++ struct device *dev = &pcie->pdev->dev;
++ int neg_gen = -1, gen;
++
++ /*
++ * Try link training at link gen specified by device tree property
++ * 'max-link-speed'. If this fails, iteratively train at lower gen.
++ */
++ for (gen = pcie->link_gen; gen > 0; --gen) {
++ neg_gen = advk_pcie_train_at_gen(pcie, gen);
++ if (neg_gen > 0)
++ break;
++ }
++
++ if (neg_gen < 0)
++ goto err;
++
++ /*
++ * After successful training if negotiated gen is lower than requested,
++ * train again on negotiated gen. This solves some stability issues for
++ * some buggy gen1 cards.
++ */
++ if (neg_gen < gen) {
++ gen = neg_gen;
++ neg_gen = advk_pcie_train_at_gen(pcie, gen);
++ }
++
++ if (neg_gen == gen) {
++ dev_info(dev, "link up at gen %i\n", gen);
++ return;
++ }
++
++err:
++ dev_err(dev, "link never came up\n");
++}
++
++static void advk_pcie_issue_perst(struct advk_pcie *pcie)
++{
++ u32 reg;
++
++ if (!pcie->reset_gpio)
++ return;
++
++ /* PERST does not work for some cards when link training is enabled */
++ reg = advk_readl(pcie, PCIE_CORE_CTRL0_REG);
++ reg &= ~LINK_TRAINING_EN;
++ advk_writel(pcie, reg, PCIE_CORE_CTRL0_REG);
++
++ /* 10ms delay is needed for some cards */
++ dev_info(&pcie->pdev->dev, "issuing PERST via reset GPIO for 10ms\n");
++ gpiod_set_value_cansleep(pcie->reset_gpio, 1);
++ usleep_range(10000, 11000);
++ gpiod_set_value_cansleep(pcie->reset_gpio, 0);
++}
++
+ static void advk_pcie_setup_hw(struct advk_pcie *pcie)
+ {
+ u32 reg;
+
++ advk_pcie_issue_perst(pcie);
++
+ /* Set to Direct mode */
+ reg = advk_readl(pcie, CTRL_CONFIG_REG);
+ reg &= ~(CTRL_MODE_MASK << CTRL_MODE_SHIFT);
+@@ -288,23 +389,12 @@ static void advk_pcie_setup_hw(struct advk_pcie *pcie)
+ PCIE_CORE_CTRL2_TD_ENABLE;
+ advk_writel(pcie, reg, PCIE_CORE_CTRL2_REG);
+
+- /* Set GEN2 */
+- reg = advk_readl(pcie, PCIE_CORE_CTRL0_REG);
+- reg &= ~PCIE_GEN_SEL_MSK;
+- reg |= SPEED_GEN_2;
+- advk_writel(pcie, reg, PCIE_CORE_CTRL0_REG);
+-
+ /* Set lane X1 */
+ reg = advk_readl(pcie, PCIE_CORE_CTRL0_REG);
+ reg &= ~LANE_CNT_MSK;
+ reg |= LANE_COUNT_1;
+ advk_writel(pcie, reg, PCIE_CORE_CTRL0_REG);
+
+- /* Enable link training */
+- reg = advk_readl(pcie, PCIE_CORE_CTRL0_REG);
+- reg |= LINK_TRAINING_EN;
+- advk_writel(pcie, reg, PCIE_CORE_CTRL0_REG);
+-
+ /* Enable MSI */
+ reg = advk_readl(pcie, PCIE_CORE_CTRL2_REG);
+ reg |= PCIE_CORE_CTRL2_MSI_ENABLE;
+@@ -340,22 +430,14 @@ static void advk_pcie_setup_hw(struct advk_pcie *pcie)
+
+ /*
+ * PERST# signal could have been asserted by pinctrl subsystem before
+- * probe() callback has been called, making the endpoint going into
++ * probe() callback has been called or issued explicitly by reset gpio
++ * function advk_pcie_issue_perst(), making the endpoint going into
+ * fundamental reset. As required by PCI Express spec a delay for at
+ * least 100ms after such a reset before link training is needed.
+ */
+ msleep(PCI_PM_D3COLD_WAIT);
+
+- /* Start link training */
+- reg = advk_readl(pcie, PCIE_CORE_LINK_CTRL_STAT_REG);
+- reg |= PCIE_CORE_LINK_TRAINING;
+- advk_writel(pcie, reg, PCIE_CORE_LINK_CTRL_STAT_REG);
+-
+- advk_pcie_wait_for_link(pcie);
+-
+- reg = PCIE_CORE_LINK_L0S_ENTRY |
+- (1 << PCIE_CORE_LINK_WIDTH_SHIFT);
+- advk_writel(pcie, reg, PCIE_CORE_LINK_CTRL_STAT_REG);
++ advk_pcie_train_link(pcie);
+
+ reg = advk_readl(pcie, PCIE_CORE_CMD_STATUS_REG);
+ reg |= PCIE_CORE_CMD_MEM_ACCESS_EN |
+@@ -989,6 +1071,28 @@ static int advk_pcie_probe(struct platform_device *pdev)
+ }
+ pcie->root_bus_nr = bus->start;
+
++ pcie->reset_gpio = devm_gpiod_get_from_of_node(dev, dev->of_node,
++ "reset-gpios", 0,
++ GPIOD_OUT_LOW,
++ "pcie1-reset");
++ ret = PTR_ERR_OR_ZERO(pcie->reset_gpio);
++ if (ret) {
++ if (ret == -ENOENT) {
++ pcie->reset_gpio = NULL;
++ } else {
++ if (ret != -EPROBE_DEFER)
++ dev_err(dev, "Failed to get reset-gpio: %i\n",
++ ret);
++ return ret;
++ }
++ }
++
++ ret = of_pci_get_max_link_speed(dev->of_node);
++ if (ret <= 0 || ret > 3)
++ pcie->link_gen = 3;
++ else
++ pcie->link_gen = ret;
++
+ advk_pcie_setup_hw(pcie);
+
+ advk_sw_pci_bridge_init(pcie);
+diff --git a/drivers/pci/controller/pci-v3-semi.c b/drivers/pci/controller/pci-v3-semi.c
+index bd05221f5a22..ddcb4571a79b 100644
+--- a/drivers/pci/controller/pci-v3-semi.c
++++ b/drivers/pci/controller/pci-v3-semi.c
+@@ -720,7 +720,7 @@ static int v3_pci_probe(struct platform_device *pdev)
+ int irq;
+ int ret;
+
+- host = pci_alloc_host_bridge(sizeof(*v3));
++ host = devm_pci_alloc_host_bridge(dev, sizeof(*v3));
+ if (!host)
+ return -ENOMEM;
+
+diff --git a/drivers/pci/controller/pcie-brcmstb.c b/drivers/pci/controller/pcie-brcmstb.c
+index 6d79d14527a6..2297910bf6e4 100644
+--- a/drivers/pci/controller/pcie-brcmstb.c
++++ b/drivers/pci/controller/pcie-brcmstb.c
+@@ -54,11 +54,11 @@
+
+ #define PCIE_MISC_CPU_2_PCIE_MEM_WIN0_LO 0x400c
+ #define PCIE_MEM_WIN0_LO(win) \
+- PCIE_MISC_CPU_2_PCIE_MEM_WIN0_LO + ((win) * 4)
++ PCIE_MISC_CPU_2_PCIE_MEM_WIN0_LO + ((win) * 8)
+
+ #define PCIE_MISC_CPU_2_PCIE_MEM_WIN0_HI 0x4010
+ #define PCIE_MEM_WIN0_HI(win) \
+- PCIE_MISC_CPU_2_PCIE_MEM_WIN0_HI + ((win) * 4)
++ PCIE_MISC_CPU_2_PCIE_MEM_WIN0_HI + ((win) * 8)
+
+ #define PCIE_MISC_RC_BAR1_CONFIG_LO 0x402c
+ #define PCIE_MISC_RC_BAR1_CONFIG_LO_SIZE_MASK 0x1f
+@@ -697,6 +697,7 @@ static int brcm_pcie_setup(struct brcm_pcie *pcie)
+
+ /* Reset the bridge */
+ brcm_pcie_bridge_sw_init_set(pcie, 1);
++ brcm_pcie_perst_set(pcie, 1);
+
+ usleep_range(100, 200);
+
+diff --git a/drivers/pci/controller/pcie-rcar.c b/drivers/pci/controller/pcie-rcar.c
+index 759c6542c5c8..1bae6a4abaae 100644
+--- a/drivers/pci/controller/pcie-rcar.c
++++ b/drivers/pci/controller/pcie-rcar.c
+@@ -333,11 +333,12 @@ static struct pci_ops rcar_pcie_ops = {
+ };
+
+ static void rcar_pcie_setup_window(int win, struct rcar_pcie *pcie,
+- struct resource *res)
++ struct resource_entry *window)
+ {
+ /* Setup PCIe address space mappings for each resource */
+ resource_size_t size;
+ resource_size_t res_start;
++ struct resource *res = window->res;
+ u32 mask;
+
+ rcar_pci_write_reg(pcie, 0x00000000, PCIEPTCTLR(win));
+@@ -351,9 +352,9 @@ static void rcar_pcie_setup_window(int win, struct rcar_pcie *pcie,
+ rcar_pci_write_reg(pcie, mask << 7, PCIEPAMR(win));
+
+ if (res->flags & IORESOURCE_IO)
+- res_start = pci_pio_to_address(res->start);
++ res_start = pci_pio_to_address(res->start) - window->offset;
+ else
+- res_start = res->start;
++ res_start = res->start - window->offset;
+
+ rcar_pci_write_reg(pcie, upper_32_bits(res_start), PCIEPAUR(win));
+ rcar_pci_write_reg(pcie, lower_32_bits(res_start) & ~0x7F,
+@@ -382,7 +383,7 @@ static int rcar_pcie_setup(struct list_head *resource, struct rcar_pcie *pci)
+ switch (resource_type(res)) {
+ case IORESOURCE_IO:
+ case IORESOURCE_MEM:
+- rcar_pcie_setup_window(i, pci, res);
++ rcar_pcie_setup_window(i, pci, win);
+ i++;
+ break;
+ case IORESOURCE_BUS:
+diff --git a/drivers/pci/controller/vmd.c b/drivers/pci/controller/vmd.c
+index dac91d60701d..e386d4eac407 100644
+--- a/drivers/pci/controller/vmd.c
++++ b/drivers/pci/controller/vmd.c
+@@ -445,9 +445,11 @@ static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features)
+ if (!membar2)
+ return -ENOMEM;
+ offset[0] = vmd->dev->resource[VMD_MEMBAR1].start -
+- readq(membar2 + MB2_SHADOW_OFFSET);
++ (readq(membar2 + MB2_SHADOW_OFFSET) &
++ PCI_BASE_ADDRESS_MEM_MASK);
+ offset[1] = vmd->dev->resource[VMD_MEMBAR2].start -
+- readq(membar2 + MB2_SHADOW_OFFSET + 8);
++ (readq(membar2 + MB2_SHADOW_OFFSET + 8) &
++ PCI_BASE_ADDRESS_MEM_MASK);
+ pci_iounmap(vmd->dev, membar2);
+ }
+ }
+diff --git a/drivers/pci/endpoint/functions/pci-epf-test.c b/drivers/pci/endpoint/functions/pci-epf-test.c
+index 60330f3e3751..c89a9561439f 100644
+--- a/drivers/pci/endpoint/functions/pci-epf-test.c
++++ b/drivers/pci/endpoint/functions/pci-epf-test.c
+@@ -187,6 +187,9 @@ static int pci_epf_test_init_dma_chan(struct pci_epf_test *epf_test)
+ */
+ static void pci_epf_test_clean_dma_chan(struct pci_epf_test *epf_test)
+ {
++ if (!epf_test->dma_supported)
++ return;
++
+ dma_release_channel(epf_test->dma_chan);
+ epf_test->dma_chan = NULL;
+ }
+diff --git a/drivers/pci/pci-bridge-emul.c b/drivers/pci/pci-bridge-emul.c
+index 4f4f54bc732e..faa414655f33 100644
+--- a/drivers/pci/pci-bridge-emul.c
++++ b/drivers/pci/pci-bridge-emul.c
+@@ -185,8 +185,8 @@ static const struct pci_bridge_reg_behavior pcie_cap_regs_behavior[] = {
+ * RO, the rest is reserved
+ */
+ .w1c = GENMASK(19, 16),
+- .ro = GENMASK(20, 19),
+- .rsvd = GENMASK(31, 21),
++ .ro = GENMASK(21, 20),
++ .rsvd = GENMASK(31, 22),
+ },
+
+ [PCI_EXP_LNKCAP / 4] = {
+@@ -226,7 +226,7 @@ static const struct pci_bridge_reg_behavior pcie_cap_regs_behavior[] = {
+ PCI_EXP_SLTSTA_CC | PCI_EXP_SLTSTA_DLLSC) << 16,
+ .ro = (PCI_EXP_SLTSTA_MRLSS | PCI_EXP_SLTSTA_PDS |
+ PCI_EXP_SLTSTA_EIS) << 16,
+- .rsvd = GENMASK(15, 12) | (GENMASK(15, 9) << 16),
++ .rsvd = GENMASK(15, 13) | (GENMASK(15, 9) << 16),
+ },
+
+ [PCI_EXP_RTCTL / 4] = {
+diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
+index 6d3234f75692..809f2584e338 100644
+--- a/drivers/pci/pci.c
++++ b/drivers/pci/pci.c
+@@ -4660,7 +4660,8 @@ static int pci_pm_reset(struct pci_dev *dev, int probe)
+ * pcie_wait_for_link_delay - Wait until link is active or inactive
+ * @pdev: Bridge device
+ * @active: waiting for active or inactive?
+- * @delay: Delay to wait after link has become active (in ms)
++ * @delay: Delay to wait after link has become active (in ms). Specify %0
++ * for no delay.
+ *
+ * Use this to wait till link becomes active or inactive.
+ */
+@@ -4701,7 +4702,7 @@ static bool pcie_wait_for_link_delay(struct pci_dev *pdev, bool active,
+ msleep(10);
+ timeout -= 10;
+ }
+- if (active && ret)
++ if (active && ret && delay)
+ msleep(delay);
+ else if (ret != active)
+ pci_info(pdev, "Data Link Layer Link Active not %s in 1000 msec\n",
+@@ -4822,17 +4823,28 @@ void pci_bridge_wait_for_secondary_bus(struct pci_dev *dev)
+ if (!pcie_downstream_port(dev))
+ return;
+
+- if (pcie_get_speed_cap(dev) <= PCIE_SPEED_5_0GT) {
+- pci_dbg(dev, "waiting %d ms for downstream link\n", delay);
+- msleep(delay);
+- } else {
+- pci_dbg(dev, "waiting %d ms for downstream link, after activation\n",
+- delay);
+- if (!pcie_wait_for_link_delay(dev, true, delay)) {
++ /*
++ * Per PCIe r5.0, sec 6.6.1, for downstream ports that support
++ * speeds > 5 GT/s, we must wait for link training to complete
++ * before the mandatory delay.
++ *
++ * We can only tell when link training completes via DLL Link
++ * Active, which is required for downstream ports that support
++ * speeds > 5 GT/s (sec 7.5.3.6). Unfortunately some common
++ * devices do not implement Link Active reporting even when it's
++ * required, so we'll check for that directly instead of checking
++ * the supported link speed. We assume devices without Link Active
++ * reporting can train in 100 ms regardless of speed.
++ */
++ if (dev->link_active_reporting) {
++ pci_dbg(dev, "waiting for link to train\n");
++ if (!pcie_wait_for_link_delay(dev, true, 0)) {
+ /* Did not train, no need to wait any further */
+ return;
+ }
+ }
++ pci_dbg(child, "waiting %d ms to become accessible\n", delay);
++ msleep(delay);
+
+ if (!pci_device_is_present(child)) {
+ pci_dbg(child, "waiting additional %d ms to become accessible\n", delay);
+diff --git a/drivers/pci/pcie/aspm.c b/drivers/pci/pcie/aspm.c
+index 2378ed692534..b17e5ffd31b1 100644
+--- a/drivers/pci/pcie/aspm.c
++++ b/drivers/pci/pcie/aspm.c
+@@ -628,16 +628,6 @@ static void pcie_aspm_cap_init(struct pcie_link_state *link, int blacklist)
+
+ /* Setup initial capable state. Will be updated later */
+ link->aspm_capable = link->aspm_support;
+- /*
+- * If the downstream component has pci bridge function, don't
+- * do ASPM for now.
+- */
+- list_for_each_entry(child, &linkbus->devices, bus_list) {
+- if (pci_pcie_type(child) == PCI_EXP_TYPE_PCI_BRIDGE) {
+- link->aspm_disable = ASPM_STATE_ALL;
+- break;
+- }
+- }
+
+ /* Get and check endpoint acceptable latencies */
+ list_for_each_entry(child, &linkbus->devices, bus_list) {
+diff --git a/drivers/pci/pcie/ptm.c b/drivers/pci/pcie/ptm.c
+index 9361f3aa26ab..357a454cafa0 100644
+--- a/drivers/pci/pcie/ptm.c
++++ b/drivers/pci/pcie/ptm.c
+@@ -39,10 +39,6 @@ void pci_ptm_init(struct pci_dev *dev)
+ if (!pci_is_pcie(dev))
+ return;
+
+- pos = pci_find_ext_capability(dev, PCI_EXT_CAP_ID_PTM);
+- if (!pos)
+- return;
+-
+ /*
+ * Enable PTM only on interior devices (root ports, switch ports,
+ * etc.) on the assumption that it causes no link traffic until an
+@@ -52,6 +48,23 @@ void pci_ptm_init(struct pci_dev *dev)
+ pci_pcie_type(dev) == PCI_EXP_TYPE_RC_END))
+ return;
+
++ /*
++ * Switch Downstream Ports are not permitted to have a PTM
++ * capability; their PTM behavior is controlled by the Upstream
++ * Port (PCIe r5.0, sec 7.9.16).
++ */
++ ups = pci_upstream_bridge(dev);
++ if (pci_pcie_type(dev) == PCI_EXP_TYPE_DOWNSTREAM &&
++ ups && ups->ptm_enabled) {
++ dev->ptm_granularity = ups->ptm_granularity;
++ dev->ptm_enabled = 1;
++ return;
++ }
++
++ pos = pci_find_ext_capability(dev, PCI_EXT_CAP_ID_PTM);
++ if (!pos)
++ return;
++
+ pci_read_config_dword(dev, pos + PCI_PTM_CAP, &cap);
+ local_clock = (cap & PCI_PTM_GRANULARITY_MASK) >> 8;
+
+@@ -61,7 +74,6 @@ void pci_ptm_init(struct pci_dev *dev)
+ * the spec recommendation (PCIe r3.1, sec 7.32.3), select the
+ * furthest upstream Time Source as the PTM Root.
+ */
+- ups = pci_upstream_bridge(dev);
+ if (ups && ups->ptm_enabled) {
+ ctrl = PCI_PTM_CTRL_ENABLE;
+ if (ups->ptm_granularity == 0)
+diff --git a/drivers/pci/probe.c b/drivers/pci/probe.c
+index c7e3a8267521..b59a4b0f5f16 100644
+--- a/drivers/pci/probe.c
++++ b/drivers/pci/probe.c
+@@ -909,9 +909,10 @@ static int pci_register_host_bridge(struct pci_host_bridge *bridge)
+ goto free;
+
+ err = device_register(&bridge->dev);
+- if (err)
++ if (err) {
+ put_device(&bridge->dev);
+-
++ goto free;
++ }
+ bus->bridge = get_device(&bridge->dev);
+ device_enable_async_suspend(bus->bridge);
+ pci_set_bus_of_node(bus);
+diff --git a/drivers/pci/setup-res.c b/drivers/pci/setup-res.c
+index d8ca40a97693..d21fa04fa44d 100644
+--- a/drivers/pci/setup-res.c
++++ b/drivers/pci/setup-res.c
+@@ -439,10 +439,11 @@ int pci_resize_resource(struct pci_dev *dev, int resno, int size)
+ res->end = res->start + pci_rebar_size_to_bytes(size) - 1;
+
+ /* Check if the new config works by trying to assign everything. */
+- ret = pci_reassign_bridge_resources(dev->bus->self, res->flags);
+- if (ret)
+- goto error_resize;
+-
++ if (dev->bus->self) {
++ ret = pci_reassign_bridge_resources(dev->bus->self, res->flags);
++ if (ret)
++ goto error_resize;
++ }
+ return 0;
+
+ error_resize:
+diff --git a/drivers/perf/hisilicon/hisi_uncore_l3c_pmu.c b/drivers/perf/hisilicon/hisi_uncore_l3c_pmu.c
+index 1151e99b241c..479de4be99eb 100644
+--- a/drivers/perf/hisilicon/hisi_uncore_l3c_pmu.c
++++ b/drivers/perf/hisilicon/hisi_uncore_l3c_pmu.c
+@@ -35,7 +35,7 @@
+ /* L3C has 8-counters */
+ #define L3C_NR_COUNTERS 0x8
+
+-#define L3C_PERF_CTRL_EN 0x20000
++#define L3C_PERF_CTRL_EN 0x10000
+ #define L3C_EVTYPE_NONE 0xff
+
+ /*
+diff --git a/drivers/phy/broadcom/phy-bcm-sr-usb.c b/drivers/phy/broadcom/phy-bcm-sr-usb.c
+index fe6c58910e4c..7c7862b4f41f 100644
+--- a/drivers/phy/broadcom/phy-bcm-sr-usb.c
++++ b/drivers/phy/broadcom/phy-bcm-sr-usb.c
+@@ -16,8 +16,6 @@ enum bcm_usb_phy_version {
+ };
+
+ enum bcm_usb_phy_reg {
+- PLL_NDIV_FRAC,
+- PLL_NDIV_INT,
+ PLL_CTRL,
+ PHY_CTRL,
+ PHY_PLL_CTRL,
+@@ -31,18 +29,11 @@ static const u8 bcm_usb_combo_phy_ss[] = {
+ };
+
+ static const u8 bcm_usb_combo_phy_hs[] = {
+- [PLL_NDIV_FRAC] = 0x04,
+- [PLL_NDIV_INT] = 0x08,
+ [PLL_CTRL] = 0x0c,
+ [PHY_CTRL] = 0x10,
+ };
+
+-#define HSPLL_NDIV_INT_VAL 0x13
+-#define HSPLL_NDIV_FRAC_VAL 0x1005
+-
+ static const u8 bcm_usb_hs_phy[] = {
+- [PLL_NDIV_FRAC] = 0x0,
+- [PLL_NDIV_INT] = 0x4,
+ [PLL_CTRL] = 0x8,
+ [PHY_CTRL] = 0xc,
+ };
+@@ -52,7 +43,6 @@ enum pll_ctrl_bits {
+ SSPLL_SUSPEND_EN,
+ PLL_SEQ_START,
+ PLL_LOCK,
+- PLL_PDIV,
+ };
+
+ static const u8 u3pll_ctrl[] = {
+@@ -66,29 +56,17 @@ static const u8 u3pll_ctrl[] = {
+ #define HSPLL_PDIV_VAL 0x1
+
+ static const u8 u2pll_ctrl[] = {
+- [PLL_PDIV] = 1,
+ [PLL_RESETB] = 5,
+ [PLL_LOCK] = 6,
+ };
+
+ enum bcm_usb_phy_ctrl_bits {
+ CORERDY,
+- AFE_LDO_PWRDWNB,
+- AFE_PLL_PWRDWNB,
+- AFE_BG_PWRDWNB,
+- PHY_ISO,
+ PHY_RESETB,
+ PHY_PCTL,
+ };
+
+ #define PHY_PCTL_MASK 0xffff
+-/*
+- * 0x0806 of PCTL_VAL has below bits set
+- * BIT-8 : refclk divider 1
+- * BIT-3:2: device mode; mode is not effect
+- * BIT-1: soft reset active low
+- */
+-#define HSPHY_PCTL_VAL 0x0806
+ #define SSPHY_PCTL_VAL 0x0006
+
+ static const u8 u3phy_ctrl[] = {
+@@ -98,10 +76,6 @@ static const u8 u3phy_ctrl[] = {
+
+ static const u8 u2phy_ctrl[] = {
+ [CORERDY] = 0,
+- [AFE_LDO_PWRDWNB] = 1,
+- [AFE_PLL_PWRDWNB] = 2,
+- [AFE_BG_PWRDWNB] = 3,
+- [PHY_ISO] = 4,
+ [PHY_RESETB] = 5,
+ [PHY_PCTL] = 6,
+ };
+@@ -186,38 +160,13 @@ static int bcm_usb_hs_phy_init(struct bcm_usb_phy_cfg *phy_cfg)
+ int ret = 0;
+ void __iomem *regs = phy_cfg->regs;
+ const u8 *offset;
+- u32 rd_data;
+
+ offset = phy_cfg->offset;
+
+- writel(HSPLL_NDIV_INT_VAL, regs + offset[PLL_NDIV_INT]);
+- writel(HSPLL_NDIV_FRAC_VAL, regs + offset[PLL_NDIV_FRAC]);
+-
+- rd_data = readl(regs + offset[PLL_CTRL]);
+- rd_data &= ~(HSPLL_PDIV_MASK << u2pll_ctrl[PLL_PDIV]);
+- rd_data |= (HSPLL_PDIV_VAL << u2pll_ctrl[PLL_PDIV]);
+- writel(rd_data, regs + offset[PLL_CTRL]);
+-
+- /* Set Core Ready high */
+- bcm_usb_reg32_setbits(regs + offset[PHY_CTRL],
+- BIT(u2phy_ctrl[CORERDY]));
+-
+- /* Maximum timeout for Core Ready done */
+- msleep(30);
+-
++ bcm_usb_reg32_clrbits(regs + offset[PLL_CTRL],
++ BIT(u2pll_ctrl[PLL_RESETB]));
+ bcm_usb_reg32_setbits(regs + offset[PLL_CTRL],
+ BIT(u2pll_ctrl[PLL_RESETB]));
+- bcm_usb_reg32_setbits(regs + offset[PHY_CTRL],
+- BIT(u2phy_ctrl[PHY_RESETB]));
+-
+-
+- rd_data = readl(regs + offset[PHY_CTRL]);
+- rd_data &= ~(PHY_PCTL_MASK << u2phy_ctrl[PHY_PCTL]);
+- rd_data |= (HSPHY_PCTL_VAL << u2phy_ctrl[PHY_PCTL]);
+- writel(rd_data, regs + offset[PHY_CTRL]);
+-
+- /* Maximum timeout for PLL reset done */
+- msleep(30);
+
+ ret = bcm_usb_pll_lock_check(regs + offset[PLL_CTRL],
+ BIT(u2pll_ctrl[PLL_LOCK]));
+diff --git a/drivers/phy/cadence/phy-cadence-sierra.c b/drivers/phy/cadence/phy-cadence-sierra.c
+index a5c08e5bd2bf..faed652b73f7 100644
+--- a/drivers/phy/cadence/phy-cadence-sierra.c
++++ b/drivers/phy/cadence/phy-cadence-sierra.c
+@@ -685,10 +685,10 @@ static struct cdns_reg_pairs cdns_usb_cmn_regs_ext_ssc[] = {
+ static struct cdns_reg_pairs cdns_usb_ln_regs_ext_ssc[] = {
+ {0xFE0A, SIERRA_DET_STANDEC_A_PREG},
+ {0x000F, SIERRA_DET_STANDEC_B_PREG},
+- {0x00A5, SIERRA_DET_STANDEC_C_PREG},
++ {0x55A5, SIERRA_DET_STANDEC_C_PREG},
+ {0x69ad, SIERRA_DET_STANDEC_D_PREG},
+ {0x0241, SIERRA_DET_STANDEC_E_PREG},
+- {0x0010, SIERRA_PSM_LANECAL_DLY_A1_RESETS_PREG},
++ {0x0110, SIERRA_PSM_LANECAL_DLY_A1_RESETS_PREG},
+ {0x0014, SIERRA_PSM_A0IN_TMR_PREG},
+ {0xCF00, SIERRA_PSM_DIAG_PREG},
+ {0x001F, SIERRA_PSC_TX_A0_PREG},
+@@ -696,7 +696,7 @@ static struct cdns_reg_pairs cdns_usb_ln_regs_ext_ssc[] = {
+ {0x0003, SIERRA_PSC_TX_A2_PREG},
+ {0x0003, SIERRA_PSC_TX_A3_PREG},
+ {0x0FFF, SIERRA_PSC_RX_A0_PREG},
+- {0x0619, SIERRA_PSC_RX_A1_PREG},
++ {0x0003, SIERRA_PSC_RX_A1_PREG},
+ {0x0003, SIERRA_PSC_RX_A2_PREG},
+ {0x0001, SIERRA_PSC_RX_A3_PREG},
+ {0x0001, SIERRA_PLLCTRL_SUBRATE_PREG},
+@@ -705,19 +705,19 @@ static struct cdns_reg_pairs cdns_usb_ln_regs_ext_ssc[] = {
+ {0x00CA, SIERRA_CLKPATH_BIASTRIM_PREG},
+ {0x2512, SIERRA_DFE_BIASTRIM_PREG},
+ {0x0000, SIERRA_DRVCTRL_ATTEN_PREG},
+- {0x873E, SIERRA_CLKPATHCTRL_TMR_PREG},
+- {0x03CF, SIERRA_RX_CREQ_FLTR_A_MODE1_PREG},
+- {0x01CE, SIERRA_RX_CREQ_FLTR_A_MODE0_PREG},
++ {0x823E, SIERRA_CLKPATHCTRL_TMR_PREG},
++ {0x078F, SIERRA_RX_CREQ_FLTR_A_MODE1_PREG},
++ {0x078F, SIERRA_RX_CREQ_FLTR_A_MODE0_PREG},
+ {0x7B3C, SIERRA_CREQ_CCLKDET_MODE01_PREG},
+- {0x033F, SIERRA_RX_CTLE_MAINTENANCE_PREG},
++ {0x023C, SIERRA_RX_CTLE_MAINTENANCE_PREG},
+ {0x3232, SIERRA_CREQ_FSMCLK_SEL_PREG},
+ {0x0000, SIERRA_CREQ_EQ_CTRL_PREG},
+- {0x8000, SIERRA_CREQ_SPARE_PREG},
++ {0x0000, SIERRA_CREQ_SPARE_PREG},
+ {0xCC44, SIERRA_CREQ_EQ_OPEN_EYE_THRESH_PREG},
+- {0x8453, SIERRA_CTLELUT_CTRL_PREG},
+- {0x4110, SIERRA_DFE_ECMP_RATESEL_PREG},
+- {0x4110, SIERRA_DFE_SMP_RATESEL_PREG},
+- {0x0002, SIERRA_DEQ_PHALIGN_CTRL},
++ {0x8452, SIERRA_CTLELUT_CTRL_PREG},
++ {0x4121, SIERRA_DFE_ECMP_RATESEL_PREG},
++ {0x4121, SIERRA_DFE_SMP_RATESEL_PREG},
++ {0x0003, SIERRA_DEQ_PHALIGN_CTRL},
+ {0x3200, SIERRA_DEQ_CONCUR_CTRL1_PREG},
+ {0x5064, SIERRA_DEQ_CONCUR_CTRL2_PREG},
+ {0x0030, SIERRA_DEQ_EPIPWR_CTRL2_PREG},
+@@ -725,7 +725,7 @@ static struct cdns_reg_pairs cdns_usb_ln_regs_ext_ssc[] = {
+ {0x5A5A, SIERRA_DEQ_ERRCMP_CTRL_PREG},
+ {0x02F5, SIERRA_DEQ_OFFSET_CTRL_PREG},
+ {0x02F5, SIERRA_DEQ_GAIN_CTRL_PREG},
+- {0x9A8A, SIERRA_DEQ_VGATUNE_CTRL_PREG},
++ {0x9999, SIERRA_DEQ_VGATUNE_CTRL_PREG},
+ {0x0014, SIERRA_DEQ_GLUT0},
+ {0x0014, SIERRA_DEQ_GLUT1},
+ {0x0014, SIERRA_DEQ_GLUT2},
+@@ -772,6 +772,7 @@ static struct cdns_reg_pairs cdns_usb_ln_regs_ext_ssc[] = {
+ {0x000F, SIERRA_LFPSFILT_NS_PREG},
+ {0x0009, SIERRA_LFPSFILT_RD_PREG},
+ {0x0001, SIERRA_LFPSFILT_MP_PREG},
++ {0x6013, SIERRA_SIGDET_SUPPORT_PREG},
+ {0x8013, SIERRA_SDFILT_H2L_A_PREG},
+ {0x8009, SIERRA_SDFILT_L2H_PREG},
+ {0x0024, SIERRA_RXBUFFER_CTLECTRL_PREG},
+diff --git a/drivers/phy/ti/phy-j721e-wiz.c b/drivers/phy/ti/phy-j721e-wiz.c
+index 7b51045df783..c8e4ff341cef 100644
+--- a/drivers/phy/ti/phy-j721e-wiz.c
++++ b/drivers/phy/ti/phy-j721e-wiz.c
+@@ -794,8 +794,10 @@ static int wiz_probe(struct platform_device *pdev)
+ }
+
+ base = devm_ioremap(dev, res.start, resource_size(&res));
+- if (!base)
++ if (!base) {
++ ret = -ENOMEM;
+ goto err_addr_to_resource;
++ }
+
+ regmap = devm_regmap_init_mmio(dev, base, &wiz_regmap_config);
+ if (IS_ERR(regmap)) {
+@@ -812,6 +814,7 @@ static int wiz_probe(struct platform_device *pdev)
+
+ if (num_lanes > WIZ_MAX_LANES) {
+ dev_err(dev, "Cannot support %d lanes\n", num_lanes);
++ ret = -ENODEV;
+ goto err_addr_to_resource;
+ }
+
+@@ -897,6 +900,7 @@ static int wiz_probe(struct platform_device *pdev)
+ serdes_pdev = of_platform_device_create(child_node, NULL, dev);
+ if (!serdes_pdev) {
+ dev_WARN(dev, "Unable to create SERDES platform device\n");
++ ret = -ENOMEM;
+ goto err_pdev_create;
+ }
+ wiz->serdes_pdev = serdes_pdev;
+diff --git a/drivers/pinctrl/bcm/pinctrl-bcm281xx.c b/drivers/pinctrl/bcm/pinctrl-bcm281xx.c
+index f690fc5cd688..71e666178300 100644
+--- a/drivers/pinctrl/bcm/pinctrl-bcm281xx.c
++++ b/drivers/pinctrl/bcm/pinctrl-bcm281xx.c
+@@ -1406,7 +1406,7 @@ static int __init bcm281xx_pinctrl_probe(struct platform_device *pdev)
+ pdata->reg_base = devm_platform_ioremap_resource(pdev, 0);
+ if (IS_ERR(pdata->reg_base)) {
+ dev_err(&pdev->dev, "Failed to ioremap MEM resource\n");
+- return -ENODEV;
++ return PTR_ERR(pdata->reg_base);
+ }
+
+ /* Initialize the dynamic part of pinctrl_desc */
+diff --git a/drivers/pinctrl/freescale/pinctrl-imx.c b/drivers/pinctrl/freescale/pinctrl-imx.c
+index 9f42036c5fbb..1f81569c7ae3 100644
+--- a/drivers/pinctrl/freescale/pinctrl-imx.c
++++ b/drivers/pinctrl/freescale/pinctrl-imx.c
+@@ -774,16 +774,6 @@ static int imx_pinctrl_probe_dt(struct platform_device *pdev,
+ return 0;
+ }
+
+-/*
+- * imx_free_resources() - free memory used by this driver
+- * @info: info driver instance
+- */
+-static void imx_free_resources(struct imx_pinctrl *ipctl)
+-{
+- if (ipctl->pctl)
+- pinctrl_unregister(ipctl->pctl);
+-}
+-
+ int imx_pinctrl_probe(struct platform_device *pdev,
+ const struct imx_pinctrl_soc_info *info)
+ {
+@@ -874,23 +864,18 @@ int imx_pinctrl_probe(struct platform_device *pdev,
+ &ipctl->pctl);
+ if (ret) {
+ dev_err(&pdev->dev, "could not register IMX pinctrl driver\n");
+- goto free;
++ return ret;
+ }
+
+ ret = imx_pinctrl_probe_dt(pdev, ipctl);
+ if (ret) {
+ dev_err(&pdev->dev, "fail to probe dt properties\n");
+- goto free;
++ return ret;
+ }
+
+ dev_info(&pdev->dev, "initialized IMX pinctrl driver\n");
+
+ return pinctrl_enable(ipctl->pctl);
+-
+-free:
+- imx_free_resources(ipctl);
+-
+- return ret;
+ }
+
+ static int __maybe_unused imx_pinctrl_suspend(struct device *dev)
+diff --git a/drivers/pinctrl/freescale/pinctrl-imx1-core.c b/drivers/pinctrl/freescale/pinctrl-imx1-core.c
+index c00d0022d311..421f7d1886e5 100644
+--- a/drivers/pinctrl/freescale/pinctrl-imx1-core.c
++++ b/drivers/pinctrl/freescale/pinctrl-imx1-core.c
+@@ -638,7 +638,6 @@ int imx1_pinctrl_core_probe(struct platform_device *pdev,
+
+ ret = of_platform_populate(pdev->dev.of_node, NULL, NULL, &pdev->dev);
+ if (ret) {
+- pinctrl_unregister(ipctl->pctl);
+ dev_err(&pdev->dev, "Failed to populate subdevices\n");
+ return ret;
+ }
+diff --git a/drivers/pinctrl/pinctrl-at91-pio4.c b/drivers/pinctrl/pinctrl-at91-pio4.c
+index 694912409fd9..54222ccddfb1 100644
+--- a/drivers/pinctrl/pinctrl-at91-pio4.c
++++ b/drivers/pinctrl/pinctrl-at91-pio4.c
+@@ -1019,7 +1019,7 @@ static int atmel_pinctrl_probe(struct platform_device *pdev)
+
+ atmel_pioctrl->reg_base = devm_platform_ioremap_resource(pdev, 0);
+ if (IS_ERR(atmel_pioctrl->reg_base))
+- return -EINVAL;
++ return PTR_ERR(atmel_pioctrl->reg_base);
+
+ atmel_pioctrl->clk = devm_clk_get(dev, NULL);
+ if (IS_ERR(atmel_pioctrl->clk)) {
+diff --git a/drivers/pinctrl/pinctrl-ocelot.c b/drivers/pinctrl/pinctrl-ocelot.c
+index ed8eac6c1494..b1bf46ec207f 100644
+--- a/drivers/pinctrl/pinctrl-ocelot.c
++++ b/drivers/pinctrl/pinctrl-ocelot.c
+@@ -714,11 +714,12 @@ static void ocelot_irq_handler(struct irq_desc *desc)
+ struct irq_chip *parent_chip = irq_desc_get_chip(desc);
+ struct gpio_chip *chip = irq_desc_get_handler_data(desc);
+ struct ocelot_pinctrl *info = gpiochip_get_data(chip);
++ unsigned int id_reg = OCELOT_GPIO_INTR_IDENT * info->stride;
+ unsigned int reg = 0, irq, i;
+ unsigned long irqs;
+
+ for (i = 0; i < info->stride; i++) {
+-	regmap_read(info->map, OCELOT_GPIO_INTR_IDENT + 4 * i, &reg);
++	regmap_read(info->map, id_reg + 4 * i, &reg);
+ if (!reg)
+ continue;
+
+@@ -751,21 +752,21 @@ static int ocelot_gpiochip_register(struct platform_device *pdev,
+ gc->of_node = info->dev->of_node;
+ gc->label = "ocelot-gpio";
+
+- irq = irq_of_parse_and_map(pdev->dev.of_node, 0);
+- if (irq <= 0)
+- return irq;
+-
+- girq = &gc->irq;
+- girq->chip = &ocelot_irqchip;
+- girq->parent_handler = ocelot_irq_handler;
+- girq->num_parents = 1;
+- girq->parents = devm_kcalloc(&pdev->dev, 1, sizeof(*girq->parents),
+- GFP_KERNEL);
+- if (!girq->parents)
+- return -ENOMEM;
+- girq->parents[0] = irq;
+- girq->default_type = IRQ_TYPE_NONE;
+- girq->handler = handle_edge_irq;
++ irq = irq_of_parse_and_map(gc->of_node, 0);
++ if (irq) {
++ girq = &gc->irq;
++ girq->chip = &ocelot_irqchip;
++ girq->parent_handler = ocelot_irq_handler;
++ girq->num_parents = 1;
++ girq->parents = devm_kcalloc(&pdev->dev, 1,
++ sizeof(*girq->parents),
++ GFP_KERNEL);
++ if (!girq->parents)
++ return -ENOMEM;
++ girq->parents[0] = irq;
++ girq->default_type = IRQ_TYPE_NONE;
++ girq->handler = handle_edge_irq;
++ }
+
+ ret = devm_gpiochip_add_data(&pdev->dev, gc, info);
+ if (ret)
+diff --git a/drivers/pinctrl/pinctrl-rockchip.c b/drivers/pinctrl/pinctrl-rockchip.c
+index 098951346339..d7869b636889 100644
+--- a/drivers/pinctrl/pinctrl-rockchip.c
++++ b/drivers/pinctrl/pinctrl-rockchip.c
+@@ -508,8 +508,8 @@ static int rockchip_dt_node_to_map(struct pinctrl_dev *pctldev,
+ }
+
+ map_num += grp->npins;
+- new_map = devm_kcalloc(pctldev->dev, map_num, sizeof(*new_map),
+- GFP_KERNEL);
++
++ new_map = kcalloc(map_num, sizeof(*new_map), GFP_KERNEL);
+ if (!new_map)
+ return -ENOMEM;
+
+@@ -519,7 +519,7 @@ static int rockchip_dt_node_to_map(struct pinctrl_dev *pctldev,
+ /* create mux map */
+ parent = of_get_parent(np);
+ if (!parent) {
+- devm_kfree(pctldev->dev, new_map);
++ kfree(new_map);
+ return -EINVAL;
+ }
+ new_map[0].type = PIN_MAP_TYPE_MUX_GROUP;
+@@ -546,6 +546,7 @@ static int rockchip_dt_node_to_map(struct pinctrl_dev *pctldev,
+ static void rockchip_dt_free_map(struct pinctrl_dev *pctldev,
+ struct pinctrl_map *map, unsigned num_maps)
+ {
++ kfree(map);
+ }
+
+ static const struct pinctrl_ops rockchip_pctrl_ops = {
+diff --git a/drivers/pinctrl/pinctrl-rza1.c b/drivers/pinctrl/pinctrl-rza1.c
+index da2d8365c690..ff4a7fb518bb 100644
+--- a/drivers/pinctrl/pinctrl-rza1.c
++++ b/drivers/pinctrl/pinctrl-rza1.c
+@@ -418,7 +418,7 @@ static const struct rza1_bidir_entry rza1l_bidir_entries[RZA1_NPORTS] = {
+ };
+
+ static const struct rza1_swio_entry rza1l_swio_entries[] = {
+- [0] = { ARRAY_SIZE(rza1h_swio_pins), rza1h_swio_pins },
++ [0] = { ARRAY_SIZE(rza1l_swio_pins), rza1l_swio_pins },
+ };
+
+ /* RZ/A1L (r7s72102x) pinmux flags table */
+diff --git a/drivers/pinctrl/qcom/pinctrl-ipq6018.c b/drivers/pinctrl/qcom/pinctrl-ipq6018.c
+index 38c33a778cb8..ec50a3b4bd16 100644
+--- a/drivers/pinctrl/qcom/pinctrl-ipq6018.c
++++ b/drivers/pinctrl/qcom/pinctrl-ipq6018.c
+@@ -367,7 +367,8 @@ static const char * const wci20_groups[] = {
+
+ static const char * const qpic_pad_groups[] = {
+ "gpio0", "gpio1", "gpio2", "gpio3", "gpio4", "gpio9", "gpio10",
+- "gpio11", "gpio17",
++ "gpio11", "gpio17", "gpio15", "gpio12", "gpio13", "gpio14", "gpio5",
++ "gpio6", "gpio7", "gpio8",
+ };
+
+ static const char * const burn0_groups[] = {
+diff --git a/drivers/pinctrl/sirf/pinctrl-sirf.c b/drivers/pinctrl/sirf/pinctrl-sirf.c
+index 1ebcb957c654..63a287d5795f 100644
+--- a/drivers/pinctrl/sirf/pinctrl-sirf.c
++++ b/drivers/pinctrl/sirf/pinctrl-sirf.c
+@@ -794,13 +794,17 @@ static int sirfsoc_gpio_probe(struct device_node *np)
+ return -ENODEV;
+
+ sgpio = devm_kzalloc(&pdev->dev, sizeof(*sgpio), GFP_KERNEL);
+- if (!sgpio)
+- return -ENOMEM;
++ if (!sgpio) {
++ err = -ENOMEM;
++ goto out_put_device;
++ }
+ spin_lock_init(&sgpio->lock);
+
+ regs = of_iomap(np, 0);
+- if (!regs)
+- return -ENOMEM;
++ if (!regs) {
++ err = -ENOMEM;
++ goto out_put_device;
++ }
+
+ sgpio->chip.gc.request = sirfsoc_gpio_request;
+ sgpio->chip.gc.free = sirfsoc_gpio_free;
+@@ -824,8 +828,10 @@ static int sirfsoc_gpio_probe(struct device_node *np)
+ girq->parents = devm_kcalloc(&pdev->dev, SIRFSOC_GPIO_NO_OF_BANKS,
+ sizeof(*girq->parents),
+ GFP_KERNEL);
+- if (!girq->parents)
+- return -ENOMEM;
++ if (!girq->parents) {
++ err = -ENOMEM;
++ goto out_put_device;
++ }
+ for (i = 0; i < SIRFSOC_GPIO_NO_OF_BANKS; i++) {
+ bank = &sgpio->sgpio_bank[i];
+ spin_lock_init(&bank->lock);
+@@ -868,6 +874,8 @@ out_no_range:
+ gpiochip_remove(&sgpio->chip.gc);
+ out:
+ iounmap(regs);
++out_put_device:
++ put_device(&pdev->dev);
+ return err;
+ }
+
+diff --git a/drivers/power/supply/Kconfig b/drivers/power/supply/Kconfig
+index f3424fdce341..d37ec0d03237 100644
+--- a/drivers/power/supply/Kconfig
++++ b/drivers/power/supply/Kconfig
+@@ -577,7 +577,7 @@ config CHARGER_BQ24257
+ tristate "TI BQ24250/24251/24257 battery charger driver"
+ depends on I2C
+ depends on GPIOLIB || COMPILE_TEST
+- depends on REGMAP_I2C
++ select REGMAP_I2C
+ help
+ Say Y to enable support for the TI BQ24250, BQ24251, and BQ24257 battery
+ chargers.
+diff --git a/drivers/power/supply/lp8788-charger.c b/drivers/power/supply/lp8788-charger.c
+index 84a206f42a8e..e7931ffb7151 100644
+--- a/drivers/power/supply/lp8788-charger.c
++++ b/drivers/power/supply/lp8788-charger.c
+@@ -572,27 +572,14 @@ static void lp8788_setup_adc_channel(struct device *dev,
+ return;
+
+ /* ADC channel for battery voltage */
+- chan = iio_channel_get(dev, pdata->adc_vbatt);
++ chan = devm_iio_channel_get(dev, pdata->adc_vbatt);
+ pchg->chan[LP8788_VBATT] = IS_ERR(chan) ? NULL : chan;
+
+ /* ADC channel for battery temperature */
+- chan = iio_channel_get(dev, pdata->adc_batt_temp);
++ chan = devm_iio_channel_get(dev, pdata->adc_batt_temp);
+ pchg->chan[LP8788_BATT_TEMP] = IS_ERR(chan) ? NULL : chan;
+ }
+
+-static void lp8788_release_adc_channel(struct lp8788_charger *pchg)
+-{
+- int i;
+-
+- for (i = 0; i < LP8788_NUM_CHG_ADC; i++) {
+- if (!pchg->chan[i])
+- continue;
+-
+- iio_channel_release(pchg->chan[i]);
+- pchg->chan[i] = NULL;
+- }
+-}
+-
+ static ssize_t lp8788_show_charger_status(struct device *dev,
+ struct device_attribute *attr, char *buf)
+ {
+@@ -735,7 +722,6 @@ static int lp8788_charger_remove(struct platform_device *pdev)
+ flush_work(&pchg->charger_work);
+ lp8788_irq_unregister(pdev, pchg);
+ lp8788_psy_unregister(pchg);
+- lp8788_release_adc_channel(pchg);
+
+ return 0;
+ }
+diff --git a/drivers/power/supply/smb347-charger.c b/drivers/power/supply/smb347-charger.c
+index c1d124b8be0c..d102921b3ab2 100644
+--- a/drivers/power/supply/smb347-charger.c
++++ b/drivers/power/supply/smb347-charger.c
+@@ -1138,6 +1138,7 @@ static bool smb347_volatile_reg(struct device *dev, unsigned int reg)
+ switch (reg) {
+ case IRQSTAT_A:
+ case IRQSTAT_C:
++ case IRQSTAT_D:
+ case IRQSTAT_E:
+ case IRQSTAT_F:
+ case STAT_A:
+diff --git a/drivers/pwm/core.c b/drivers/pwm/core.c
+index 9973c442b455..6b3cbc0490c6 100644
+--- a/drivers/pwm/core.c
++++ b/drivers/pwm/core.c
+@@ -121,7 +121,7 @@ static int pwm_device_request(struct pwm_device *pwm, const char *label)
+ pwm->chip->ops->get_state(pwm->chip, pwm, &pwm->state);
+ trace_pwm_get(pwm, &pwm->state);
+
+- if (IS_ENABLED(PWM_DEBUG))
++ if (IS_ENABLED(CONFIG_PWM_DEBUG))
+ pwm->last = pwm->state;
+ }
+
+diff --git a/drivers/pwm/pwm-img.c b/drivers/pwm/pwm-img.c
+index c9e57bd109fb..599a0f66a384 100644
+--- a/drivers/pwm/pwm-img.c
++++ b/drivers/pwm/pwm-img.c
+@@ -129,8 +129,10 @@ static int img_pwm_config(struct pwm_chip *chip, struct pwm_device *pwm,
+ duty = DIV_ROUND_UP(timebase * duty_ns, period_ns);
+
+ ret = pm_runtime_get_sync(chip->dev);
+- if (ret < 0)
++ if (ret < 0) {
++ pm_runtime_put_autosuspend(chip->dev);
+ return ret;
++ }
+
+ val = img_pwm_readl(pwm_chip, PWM_CTRL_CFG);
+ val &= ~(PWM_CTRL_CFG_DIV_MASK << PWM_CTRL_CFG_DIV_SHIFT(pwm->hwpwm));
+@@ -331,8 +333,10 @@ static int img_pwm_remove(struct platform_device *pdev)
+ int ret;
+
+ ret = pm_runtime_get_sync(&pdev->dev);
+- if (ret < 0)
++ if (ret < 0) {
++ pm_runtime_put(&pdev->dev);
+ return ret;
++ }
+
+ for (i = 0; i < pwm_chip->chip.npwm; i++) {
+ val = img_pwm_readl(pwm_chip, PWM_CTRL_CFG);
+diff --git a/drivers/pwm/pwm-imx27.c b/drivers/pwm/pwm-imx27.c
+index a6e40d4c485f..732a6f3701e8 100644
+--- a/drivers/pwm/pwm-imx27.c
++++ b/drivers/pwm/pwm-imx27.c
+@@ -150,13 +150,12 @@ static void pwm_imx27_get_state(struct pwm_chip *chip,
+
+ prescaler = MX3_PWMCR_PRESCALER_GET(val);
+ pwm_clk = clk_get_rate(imx->clk_per);
+- pwm_clk = DIV_ROUND_CLOSEST_ULL(pwm_clk, prescaler);
+ val = readl(imx->mmio_base + MX3_PWMPR);
+ period = val >= MX3_PWMPR_MAX ? MX3_PWMPR_MAX : val;
+
+ /* PWMOUT (Hz) = PWMCLK / (PWMPR + 2) */
+- tmp = NSEC_PER_SEC * (u64)(period + 2);
+- state->period = DIV_ROUND_CLOSEST_ULL(tmp, pwm_clk);
++ tmp = NSEC_PER_SEC * (u64)(period + 2) * prescaler;
++ state->period = DIV_ROUND_UP_ULL(tmp, pwm_clk);
+
+ /*
+ * PWMSAR can be read only if PWM is enabled. If the PWM is disabled,
+@@ -167,8 +166,8 @@ static void pwm_imx27_get_state(struct pwm_chip *chip,
+ else
+ val = imx->duty_cycle;
+
+- tmp = NSEC_PER_SEC * (u64)(val);
+- state->duty_cycle = DIV_ROUND_CLOSEST_ULL(tmp, pwm_clk);
++ tmp = NSEC_PER_SEC * (u64)(val) * prescaler;
++ state->duty_cycle = DIV_ROUND_UP_ULL(tmp, pwm_clk);
+
+ pwm_imx27_clk_disable_unprepare(imx);
+ }
+@@ -220,22 +219,23 @@ static int pwm_imx27_apply(struct pwm_chip *chip, struct pwm_device *pwm,
+ struct pwm_imx27_chip *imx = to_pwm_imx27_chip(chip);
+ struct pwm_state cstate;
+ unsigned long long c;
++ unsigned long long clkrate;
+ int ret;
+ u32 cr;
+
+ pwm_get_state(pwm, &cstate);
+
+- c = clk_get_rate(imx->clk_per);
+- c *= state->period;
++ clkrate = clk_get_rate(imx->clk_per);
++ c = clkrate * state->period;
+
+- do_div(c, 1000000000);
++ do_div(c, NSEC_PER_SEC);
+ period_cycles = c;
+
+ prescale = period_cycles / 0x10000 + 1;
+
+ period_cycles /= prescale;
+- c = (unsigned long long)period_cycles * state->duty_cycle;
+- do_div(c, state->period);
++ c = clkrate * state->duty_cycle;
++ do_div(c, NSEC_PER_SEC * prescale);
+ duty_cycles = c;
+
+ /*
+diff --git a/drivers/remoteproc/mtk_scp.c b/drivers/remoteproc/mtk_scp.c
+index 2bead57c9cf9..ac13e7b046a6 100644
+--- a/drivers/remoteproc/mtk_scp.c
++++ b/drivers/remoteproc/mtk_scp.c
+@@ -132,8 +132,8 @@ static int scp_ipi_init(struct mtk_scp *scp)
+ (struct mtk_share_obj __iomem *)(scp->sram_base + recv_offset);
+ scp->send_buf =
+ (struct mtk_share_obj __iomem *)(scp->sram_base + send_offset);
+- memset_io(scp->recv_buf, 0, sizeof(scp->recv_buf));
+- memset_io(scp->send_buf, 0, sizeof(scp->send_buf));
++ memset_io(scp->recv_buf, 0, sizeof(*scp->recv_buf));
++ memset_io(scp->send_buf, 0, sizeof(*scp->send_buf));
+
+ return 0;
+ }
+diff --git a/drivers/remoteproc/qcom_q6v5_mss.c b/drivers/remoteproc/qcom_q6v5_mss.c
+index 5475d4f808a8..629abcee2c1d 100644
+--- a/drivers/remoteproc/qcom_q6v5_mss.c
++++ b/drivers/remoteproc/qcom_q6v5_mss.c
+@@ -69,13 +69,9 @@
+ #define AXI_HALTREQ_REG 0x0
+ #define AXI_HALTACK_REG 0x4
+ #define AXI_IDLE_REG 0x8
+-#define NAV_AXI_HALTREQ_BIT BIT(0)
+-#define NAV_AXI_HALTACK_BIT BIT(1)
+-#define NAV_AXI_IDLE_BIT BIT(2)
+ #define AXI_GATING_VALID_OVERRIDE BIT(0)
+
+ #define HALT_ACK_TIMEOUT_US 100000
+-#define NAV_HALT_ACK_TIMEOUT_US 200
+
+ /* QDSP6SS_RESET */
+ #define Q6SS_STOP_CORE BIT(0)
+@@ -143,7 +139,7 @@ struct rproc_hexagon_res {
+ int version;
+ bool need_mem_protection;
+ bool has_alt_reset;
+- bool has_halt_nav;
++ bool has_spare_reg;
+ };
+
+ struct q6v5 {
+@@ -154,13 +150,11 @@ struct q6v5 {
+ void __iomem *rmb_base;
+
+ struct regmap *halt_map;
+- struct regmap *halt_nav_map;
+ struct regmap *conn_map;
+
+ u32 halt_q6;
+ u32 halt_modem;
+ u32 halt_nc;
+- u32 halt_nav;
+ u32 conn_box;
+
+ struct reset_control *mss_restart;
+@@ -206,7 +200,7 @@ struct q6v5 {
+ struct qcom_sysmon *sysmon;
+ bool need_mem_protection;
+ bool has_alt_reset;
+- bool has_halt_nav;
++ bool has_spare_reg;
+ int mpss_perm;
+ int mba_perm;
+ const char *hexagon_mdt_image;
+@@ -427,21 +421,19 @@ static int q6v5_reset_assert(struct q6v5 *qproc)
+ reset_control_assert(qproc->pdc_reset);
+ ret = reset_control_reset(qproc->mss_restart);
+ reset_control_deassert(qproc->pdc_reset);
+- } else if (qproc->has_halt_nav) {
++ } else if (qproc->has_spare_reg) {
+ /*
+ * When the AXI pipeline is being reset with the Q6 modem partly
+ * operational there is possibility of AXI valid signal to
+ * glitch, leading to spurious transactions and Q6 hangs. A work
+ * around is employed by asserting the AXI_GATING_VALID_OVERRIDE
+- * BIT before triggering Q6 MSS reset. Both the HALTREQ and
+- * AXI_GATING_VALID_OVERRIDE are withdrawn post MSS assert
+- * followed by a MSS deassert, while holding the PDC reset.
++ * BIT before triggering Q6 MSS reset. AXI_GATING_VALID_OVERRIDE
++ * is withdrawn post MSS assert followed by a MSS deassert,
++ * while holding the PDC reset.
+ */
+ reset_control_assert(qproc->pdc_reset);
+ regmap_update_bits(qproc->conn_map, qproc->conn_box,
+ AXI_GATING_VALID_OVERRIDE, 1);
+- regmap_update_bits(qproc->halt_nav_map, qproc->halt_nav,
+- NAV_AXI_HALTREQ_BIT, 0);
+ reset_control_assert(qproc->mss_restart);
+ reset_control_deassert(qproc->pdc_reset);
+ regmap_update_bits(qproc->conn_map, qproc->conn_box,
+@@ -464,7 +456,7 @@ static int q6v5_reset_deassert(struct q6v5 *qproc)
+ ret = reset_control_reset(qproc->mss_restart);
+ writel(0, qproc->rmb_base + RMB_MBA_ALT_RESET);
+ reset_control_deassert(qproc->pdc_reset);
+- } else if (qproc->has_halt_nav) {
++ } else if (qproc->has_spare_reg) {
+ ret = reset_control_reset(qproc->mss_restart);
+ } else {
+ ret = reset_control_deassert(qproc->mss_restart);
+@@ -761,32 +753,6 @@ static void q6v5proc_halt_axi_port(struct q6v5 *qproc,
+ regmap_write(halt_map, offset + AXI_HALTREQ_REG, 0);
+ }
+
+-static void q6v5proc_halt_nav_axi_port(struct q6v5 *qproc,
+- struct regmap *halt_map,
+- u32 offset)
+-{
+- unsigned int val;
+- int ret;
+-
+- /* Check if we're already idle */
+- ret = regmap_read(halt_map, offset, &val);
+- if (!ret && (val & NAV_AXI_IDLE_BIT))
+- return;
+-
+- /* Assert halt request */
+- regmap_update_bits(halt_map, offset, NAV_AXI_HALTREQ_BIT,
+- NAV_AXI_HALTREQ_BIT);
+-
+- /* Wait for halt ack*/
+- regmap_read_poll_timeout(halt_map, offset, val,
+- (val & NAV_AXI_HALTACK_BIT),
+- 5, NAV_HALT_ACK_TIMEOUT_US);
+-
+- ret = regmap_read(halt_map, offset, &val);
+- if (ret || !(val & NAV_AXI_IDLE_BIT))
+- dev_err(qproc->dev, "port failed halt\n");
+-}
+-
+ static int q6v5_mpss_init_image(struct q6v5 *qproc, const struct firmware *fw)
+ {
+ unsigned long dma_attrs = DMA_ATTR_FORCE_CONTIGUOUS;
+@@ -951,9 +917,6 @@ static int q6v5_mba_load(struct q6v5 *qproc)
+ halt_axi_ports:
+ q6v5proc_halt_axi_port(qproc, qproc->halt_map, qproc->halt_q6);
+ q6v5proc_halt_axi_port(qproc, qproc->halt_map, qproc->halt_modem);
+- if (qproc->has_halt_nav)
+- q6v5proc_halt_nav_axi_port(qproc, qproc->halt_nav_map,
+- qproc->halt_nav);
+ q6v5proc_halt_axi_port(qproc, qproc->halt_map, qproc->halt_nc);
+
+ reclaim_mba:
+@@ -1001,9 +964,6 @@ static void q6v5_mba_reclaim(struct q6v5 *qproc)
+
+ q6v5proc_halt_axi_port(qproc, qproc->halt_map, qproc->halt_q6);
+ q6v5proc_halt_axi_port(qproc, qproc->halt_map, qproc->halt_modem);
+- if (qproc->has_halt_nav)
+- q6v5proc_halt_nav_axi_port(qproc, qproc->halt_nav_map,
+- qproc->halt_nav);
+ q6v5proc_halt_axi_port(qproc, qproc->halt_map, qproc->halt_nc);
+ if (qproc->version == MSS_MSM8996) {
+ /*
+@@ -1156,7 +1116,13 @@ static int q6v5_mpss_load(struct q6v5 *qproc)
+ goto release_firmware;
+ }
+
+- ptr = qproc->mpss_region + offset;
++ ptr = ioremap_wc(qproc->mpss_phys + offset, phdr->p_memsz);
++ if (!ptr) {
++ dev_err(qproc->dev,
++ "unable to map memory region: %pa+%zx-%x\n",
++ &qproc->mpss_phys, offset, phdr->p_memsz);
++ goto release_firmware;
++ }
+
+ if (phdr->p_filesz && phdr->p_offset < fw->size) {
+ /* Firmware is large enough to be non-split */
+@@ -1165,6 +1131,7 @@ static int q6v5_mpss_load(struct q6v5 *qproc)
+ "failed to load segment %d from truncated file %s\n",
+ i, fw_name);
+ ret = -EINVAL;
++ iounmap(ptr);
+ goto release_firmware;
+ }
+
+@@ -1175,6 +1142,7 @@ static int q6v5_mpss_load(struct q6v5 *qproc)
+ ret = request_firmware(&seg_fw, fw_name, qproc->dev);
+ if (ret) {
+ dev_err(qproc->dev, "failed to load %s\n", fw_name);
++ iounmap(ptr);
+ goto release_firmware;
+ }
+
+@@ -1187,6 +1155,7 @@ static int q6v5_mpss_load(struct q6v5 *qproc)
+ memset(ptr + phdr->p_filesz, 0,
+ phdr->p_memsz - phdr->p_filesz);
+ }
++ iounmap(ptr);
+ size += phdr->p_memsz;
+
+ code_length = readl(qproc->rmb_base + RMB_PMI_CODE_LENGTH_REG);
+@@ -1236,7 +1205,8 @@ static void qcom_q6v5_dump_segment(struct rproc *rproc,
+ int ret = 0;
+ struct q6v5 *qproc = rproc->priv;
+ unsigned long mask = BIT((unsigned long)segment->priv);
+- void *ptr = rproc_da_to_va(rproc, segment->da, segment->size);
++ int offset = segment->da - qproc->mpss_reloc;
++ void *ptr = NULL;
+
+ /* Unlock mba before copying segments */
+ if (!qproc->dump_mba_loaded) {
+@@ -1250,10 +1220,15 @@ static void qcom_q6v5_dump_segment(struct rproc *rproc,
+ }
+ }
+
+- if (!ptr || ret)
+- memset(dest, 0xff, segment->size);
+- else
++ if (!ret)
++ ptr = ioremap_wc(qproc->mpss_phys + offset, segment->size);
++
++ if (ptr) {
+ memcpy(dest, ptr, segment->size);
++ iounmap(ptr);
++ } else {
++ memset(dest, 0xff, segment->size);
++ }
+
+ qproc->dump_segment_mask |= mask;
+
+@@ -1432,36 +1407,12 @@ static int q6v5_init_mem(struct q6v5 *qproc, struct platform_device *pdev)
+ qproc->halt_modem = args.args[1];
+ qproc->halt_nc = args.args[2];
+
+- if (qproc->has_halt_nav) {
+- struct platform_device *nav_pdev;
+-
++ if (qproc->has_spare_reg) {
+ ret = of_parse_phandle_with_fixed_args(pdev->dev.of_node,
+- "qcom,halt-nav-regs",
++ "qcom,spare-regs",
+ 1, 0, &args);
+ if (ret < 0) {
+- dev_err(&pdev->dev, "failed to parse halt-nav-regs\n");
+- return -EINVAL;
+- }
+-
+- nav_pdev = of_find_device_by_node(args.np);
+- of_node_put(args.np);
+- if (!nav_pdev) {
+- dev_err(&pdev->dev, "failed to get mss clock device\n");
+- return -EPROBE_DEFER;
+- }
+-
+- qproc->halt_nav_map = dev_get_regmap(&nav_pdev->dev, NULL);
+- if (!qproc->halt_nav_map) {
+- dev_err(&pdev->dev, "failed to get map from device\n");
+- return -EINVAL;
+- }
+- qproc->halt_nav = args.args[0];
+-
+- ret = of_parse_phandle_with_fixed_args(pdev->dev.of_node,
+- "qcom,halt-nav-regs",
+- 1, 1, &args);
+- if (ret < 0) {
+- dev_err(&pdev->dev, "failed to parse halt-nav-regs\n");
++ dev_err(&pdev->dev, "failed to parse spare-regs\n");
+ return -EINVAL;
+ }
+
+@@ -1547,7 +1498,7 @@ static int q6v5_init_reset(struct q6v5 *qproc)
+ return PTR_ERR(qproc->mss_restart);
+ }
+
+- if (qproc->has_alt_reset || qproc->has_halt_nav) {
++ if (qproc->has_alt_reset || qproc->has_spare_reg) {
+ qproc->pdc_reset = devm_reset_control_get_exclusive(qproc->dev,
+ "pdc_reset");
+ if (IS_ERR(qproc->pdc_reset)) {
+@@ -1595,12 +1546,6 @@ static int q6v5_alloc_memory_region(struct q6v5 *qproc)
+
+ qproc->mpss_phys = qproc->mpss_reloc = r.start;
+ qproc->mpss_size = resource_size(&r);
+- qproc->mpss_region = devm_ioremap_wc(qproc->dev, qproc->mpss_phys, qproc->mpss_size);
+- if (!qproc->mpss_region) {
+- dev_err(qproc->dev, "unable to map memory region: %pa+%zx\n",
+- &r.start, qproc->mpss_size);
+- return -EBUSY;
+- }
+
+ return 0;
+ }
+@@ -1679,7 +1624,7 @@ static int q6v5_probe(struct platform_device *pdev)
+
+ platform_set_drvdata(pdev, qproc);
+
+- qproc->has_halt_nav = desc->has_halt_nav;
++ qproc->has_spare_reg = desc->has_spare_reg;
+ ret = q6v5_init_mem(qproc, pdev);
+ if (ret)
+ goto free_rproc;
+@@ -1828,8 +1773,6 @@ static const struct rproc_hexagon_res sc7180_mss = {
+ .active_clk_names = (char*[]){
+ "mnoc_axi",
+ "nav",
+- "mss_nav",
+- "mss_crypto",
+ NULL
+ },
+ .active_pd_names = (char*[]){
+@@ -1844,7 +1787,7 @@ static const struct rproc_hexagon_res sc7180_mss = {
+ },
+ .need_mem_protection = true,
+ .has_alt_reset = false,
+- .has_halt_nav = true,
++ .has_spare_reg = true,
+ .version = MSS_SC7180,
+ };
+
+@@ -1879,7 +1822,7 @@ static const struct rproc_hexagon_res sdm845_mss = {
+ },
+ .need_mem_protection = true,
+ .has_alt_reset = true,
+- .has_halt_nav = false,
++ .has_spare_reg = false,
+ .version = MSS_SDM845,
+ };
+
+@@ -1906,7 +1849,7 @@ static const struct rproc_hexagon_res msm8998_mss = {
+ },
+ .need_mem_protection = true,
+ .has_alt_reset = false,
+- .has_halt_nav = false,
++ .has_spare_reg = false,
+ .version = MSS_MSM8998,
+ };
+
+@@ -1936,7 +1879,7 @@ static const struct rproc_hexagon_res msm8996_mss = {
+ },
+ .need_mem_protection = true,
+ .has_alt_reset = false,
+- .has_halt_nav = false,
++ .has_spare_reg = false,
+ .version = MSS_MSM8996,
+ };
+
+@@ -1969,7 +1912,7 @@ static const struct rproc_hexagon_res msm8916_mss = {
+ },
+ .need_mem_protection = false,
+ .has_alt_reset = false,
+- .has_halt_nav = false,
++ .has_spare_reg = false,
+ .version = MSS_MSM8916,
+ };
+
+@@ -2010,7 +1953,7 @@ static const struct rproc_hexagon_res msm8974_mss = {
+ },
+ .need_mem_protection = false,
+ .has_alt_reset = false,
+- .has_halt_nav = false,
++ .has_spare_reg = false,
+ .version = MSS_MSM8974,
+ };
+
+diff --git a/drivers/remoteproc/remoteproc_core.c b/drivers/remoteproc/remoteproc_core.c
+index be15aace9b3c..8f79cfd2e467 100644
+--- a/drivers/remoteproc/remoteproc_core.c
++++ b/drivers/remoteproc/remoteproc_core.c
+@@ -2053,6 +2053,7 @@ struct rproc *rproc_alloc(struct device *dev, const char *name,
+ rproc->dev.type = &rproc_type;
+ rproc->dev.class = &rproc_class;
+ rproc->dev.driver_data = rproc;
++ idr_init(&rproc->notifyids);
+
+ /* Assign a unique device index and name */
+ rproc->index = ida_simple_get(&rproc_dev_index, 0, 0, GFP_KERNEL);
+@@ -2078,8 +2079,6 @@ struct rproc *rproc_alloc(struct device *dev, const char *name,
+
+ mutex_init(&rproc->lock);
+
+- idr_init(&rproc->notifyids);
+-
+ INIT_LIST_HEAD(&rproc->carveouts);
+ INIT_LIST_HEAD(&rproc->mappings);
+ INIT_LIST_HEAD(&rproc->traces);
+diff --git a/drivers/rtc/rtc-mc13xxx.c b/drivers/rtc/rtc-mc13xxx.c
+index afce2c0b4bd6..d6802e6191cb 100644
+--- a/drivers/rtc/rtc-mc13xxx.c
++++ b/drivers/rtc/rtc-mc13xxx.c
+@@ -308,8 +308,10 @@ static int __init mc13xxx_rtc_probe(struct platform_device *pdev)
+ mc13xxx_unlock(mc13xxx);
+
+ ret = rtc_register_device(priv->rtc);
+- if (ret)
++ if (ret) {
++ mc13xxx_lock(mc13xxx);
+ goto err_irq_request;
++ }
+
+ return 0;
+
+diff --git a/drivers/rtc/rtc-rc5t619.c b/drivers/rtc/rtc-rc5t619.c
+index 24e386ecbc7e..dd1a20977478 100644
+--- a/drivers/rtc/rtc-rc5t619.c
++++ b/drivers/rtc/rtc-rc5t619.c
+@@ -356,10 +356,8 @@ static int rc5t619_rtc_probe(struct platform_device *pdev)
+ int err;
+
+ rtc = devm_kzalloc(dev, sizeof(*rtc), GFP_KERNEL);
+- if (IS_ERR(rtc)) {
+- err = PTR_ERR(rtc);
++ if (!rtc)
+ return -ENOMEM;
+- }
+
+ rtc->rn5t618 = rn5t618;
+
+diff --git a/drivers/rtc/rtc-rv3028.c b/drivers/rtc/rtc-rv3028.c
+index a0ddc86c975a..ec84db0b3d7a 100644
+--- a/drivers/rtc/rtc-rv3028.c
++++ b/drivers/rtc/rtc-rv3028.c
+@@ -755,6 +755,8 @@ static int rv3028_probe(struct i2c_client *client)
+ return -ENOMEM;
+
+ 	rv3028->regmap = devm_regmap_init_i2c(client, &regmap_config);
++ if (IS_ERR(rv3028->regmap))
++ return PTR_ERR(rv3028->regmap);
+
+ i2c_set_clientdata(client, rv3028);
+
+diff --git a/drivers/s390/cio/qdio.h b/drivers/s390/cio/qdio.h
+index b8453b594679..a2afd7bc100b 100644
+--- a/drivers/s390/cio/qdio.h
++++ b/drivers/s390/cio/qdio.h
+@@ -364,7 +364,6 @@ static inline int multicast_outbound(struct qdio_q *q)
+ extern u64 last_ai_time;
+
+ /* prototypes for thin interrupt */
+-void qdio_setup_thinint(struct qdio_irq *irq_ptr);
+ int qdio_establish_thinint(struct qdio_irq *irq_ptr);
+ void qdio_shutdown_thinint(struct qdio_irq *irq_ptr);
+ void tiqdio_add_device(struct qdio_irq *irq_ptr);
+@@ -389,6 +388,7 @@ int qdio_setup_get_ssqd(struct qdio_irq *irq_ptr,
+ struct subchannel_id *schid,
+ struct qdio_ssqd_desc *data);
+ int qdio_setup_irq(struct qdio_irq *irq_ptr, struct qdio_initialize *init_data);
++void qdio_shutdown_irq(struct qdio_irq *irq);
+ void qdio_print_subchannel_info(struct qdio_irq *irq_ptr);
+ void qdio_release_memory(struct qdio_irq *irq_ptr);
+ int qdio_setup_init(void);
+diff --git a/drivers/s390/cio/qdio_main.c b/drivers/s390/cio/qdio_main.c
+index bcc3ab14e72d..80cc811bd2e0 100644
+--- a/drivers/s390/cio/qdio_main.c
++++ b/drivers/s390/cio/qdio_main.c
+@@ -1154,35 +1154,27 @@ int qdio_shutdown(struct ccw_device *cdev, int how)
+
+ /* cleanup subchannel */
+ spin_lock_irq(get_ccwdev_lock(cdev));
+-
++ qdio_set_state(irq_ptr, QDIO_IRQ_STATE_CLEANUP);
+ if (how & QDIO_FLAG_CLEANUP_USING_CLEAR)
+ rc = ccw_device_clear(cdev, QDIO_DOING_CLEANUP);
+ else
+ /* default behaviour is halt */
+ rc = ccw_device_halt(cdev, QDIO_DOING_CLEANUP);
++ spin_unlock_irq(get_ccwdev_lock(cdev));
+ if (rc) {
+ DBF_ERROR("%4x SHUTD ERR", irq_ptr->schid.sch_no);
+ DBF_ERROR("rc:%4d", rc);
+ goto no_cleanup;
+ }
+
+- qdio_set_state(irq_ptr, QDIO_IRQ_STATE_CLEANUP);
+- spin_unlock_irq(get_ccwdev_lock(cdev));
+ wait_event_interruptible_timeout(cdev->private->wait_q,
+ irq_ptr->state == QDIO_IRQ_STATE_INACTIVE ||
+ irq_ptr->state == QDIO_IRQ_STATE_ERR,
+ 10 * HZ);
+- spin_lock_irq(get_ccwdev_lock(cdev));
+
+ no_cleanup:
+ qdio_shutdown_thinint(irq_ptr);
+-
+- /* restore interrupt handler */
+- if ((void *)cdev->handler == (void *)qdio_int_handler) {
+- cdev->handler = irq_ptr->orig_handler;
+- cdev->private->intparm = 0;
+- }
+- spin_unlock_irq(get_ccwdev_lock(cdev));
++ qdio_shutdown_irq(irq_ptr);
+
+ qdio_set_state(irq_ptr, QDIO_IRQ_STATE_INACTIVE);
+ mutex_unlock(&irq_ptr->setup_mutex);
+@@ -1352,8 +1344,8 @@ int qdio_establish(struct ccw_device *cdev,
+
+ rc = qdio_establish_thinint(irq_ptr);
+ if (rc) {
++ qdio_shutdown_irq(irq_ptr);
+ mutex_unlock(&irq_ptr->setup_mutex);
+- qdio_shutdown(cdev, QDIO_FLAG_CLEANUP_USING_CLEAR);
+ return rc;
+ }
+
+@@ -1371,8 +1363,9 @@ int qdio_establish(struct ccw_device *cdev,
+ if (rc) {
+ DBF_ERROR("%4x est IO ERR", irq_ptr->schid.sch_no);
+ DBF_ERROR("rc:%4x", rc);
++ qdio_shutdown_thinint(irq_ptr);
++ qdio_shutdown_irq(irq_ptr);
+ mutex_unlock(&irq_ptr->setup_mutex);
+- qdio_shutdown(cdev, QDIO_FLAG_CLEANUP_USING_CLEAR);
+ return rc;
+ }
+
+diff --git a/drivers/s390/cio/qdio_setup.c b/drivers/s390/cio/qdio_setup.c
+index 3083edd61f0c..8edfa0982221 100644
+--- a/drivers/s390/cio/qdio_setup.c
++++ b/drivers/s390/cio/qdio_setup.c
+@@ -480,7 +480,6 @@ int qdio_setup_irq(struct qdio_irq *irq_ptr, struct qdio_initialize *init_data)
+ }
+
+ setup_qib(irq_ptr, init_data);
+- qdio_setup_thinint(irq_ptr);
+ set_impl_params(irq_ptr, init_data->qib_param_field_format,
+ init_data->qib_param_field,
+ init_data->input_slib_elements,
+@@ -491,6 +490,12 @@ int qdio_setup_irq(struct qdio_irq *irq_ptr, struct qdio_initialize *init_data)
+
+ /* qdr, qib, sls, slsbs, slibs, sbales are filled now */
+
++ /* set our IRQ handler */
++ spin_lock_irq(get_ccwdev_lock(cdev));
++ irq_ptr->orig_handler = cdev->handler;
++ cdev->handler = qdio_int_handler;
++ spin_unlock_irq(get_ccwdev_lock(cdev));
++
+ /* get qdio commands */
+ ciw = ccw_device_get_ciw(cdev, CIW_TYPE_EQUEUE);
+ if (!ciw) {
+@@ -506,12 +511,18 @@ int qdio_setup_irq(struct qdio_irq *irq_ptr, struct qdio_initialize *init_data)
+ }
+ irq_ptr->aqueue = *ciw;
+
+- /* set new interrupt handler */
++ return 0;
++}
++
++void qdio_shutdown_irq(struct qdio_irq *irq)
++{
++ struct ccw_device *cdev = irq->cdev;
++
++ /* restore IRQ handler */
+ spin_lock_irq(get_ccwdev_lock(cdev));
+- irq_ptr->orig_handler = cdev->handler;
+- cdev->handler = qdio_int_handler;
++ cdev->handler = irq->orig_handler;
++ cdev->private->intparm = 0;
+ spin_unlock_irq(get_ccwdev_lock(cdev));
+- return 0;
+ }
+
+ void qdio_print_subchannel_info(struct qdio_irq *irq_ptr)
+diff --git a/drivers/s390/cio/qdio_thinint.c b/drivers/s390/cio/qdio_thinint.c
+index ae50373617cd..0faa0ad21732 100644
+--- a/drivers/s390/cio/qdio_thinint.c
++++ b/drivers/s390/cio/qdio_thinint.c
+@@ -227,17 +227,19 @@ int __init tiqdio_register_thinints(void)
+
+ int qdio_establish_thinint(struct qdio_irq *irq_ptr)
+ {
++ int rc;
++
+ if (!is_thinint_irq(irq_ptr))
+ return 0;
+- return set_subchannel_ind(irq_ptr, 0);
+-}
+
+-void qdio_setup_thinint(struct qdio_irq *irq_ptr)
+-{
+- if (!is_thinint_irq(irq_ptr))
+- return;
+ irq_ptr->dsci = get_indicator();
+ DBF_HEX(&irq_ptr->dsci, sizeof(void *));
++
++ rc = set_subchannel_ind(irq_ptr, 0);
++ if (rc)
++ put_indicator(irq_ptr->dsci);
++
++ return rc;
+ }
+
+ void qdio_shutdown_thinint(struct qdio_irq *irq_ptr)
+diff --git a/drivers/scsi/arm/acornscsi.c b/drivers/scsi/arm/acornscsi.c
+index ddb52e7ba622..9a912fd0f70b 100644
+--- a/drivers/scsi/arm/acornscsi.c
++++ b/drivers/scsi/arm/acornscsi.c
+@@ -2911,8 +2911,10 @@ static int acornscsi_probe(struct expansion_card *ec, const struct ecard_id *id)
+
+ ashost->base = ecardm_iomap(ec, ECARD_RES_MEMC, 0, 0);
+ ashost->fast = ecardm_iomap(ec, ECARD_RES_IOCFAST, 0, 0);
+- if (!ashost->base || !ashost->fast)
++ if (!ashost->base || !ashost->fast) {
++ ret = -ENOMEM;
+ goto out_put;
++ }
+
+ host->irq = ec->irq;
+ ashost->host = host;
+diff --git a/drivers/scsi/cxgbi/cxgb3i/cxgb3i.c b/drivers/scsi/cxgbi/cxgb3i/cxgb3i.c
+index 524cdbcd29aa..ec7d01f6e2d5 100644
+--- a/drivers/scsi/cxgbi/cxgb3i/cxgb3i.c
++++ b/drivers/scsi/cxgbi/cxgb3i/cxgb3i.c
+@@ -959,6 +959,7 @@ static int init_act_open(struct cxgbi_sock *csk)
+ struct net_device *ndev = cdev->ports[csk->port_id];
+ struct cxgbi_hba *chba = cdev->hbas[csk->port_id];
+ struct sk_buff *skb = NULL;
++ int ret;
+
+ log_debug(1 << CXGBI_DBG_TOE | 1 << CXGBI_DBG_SOCK,
+ "csk 0x%p,%u,0x%lx.\n", csk, csk->state, csk->flags);
+@@ -979,16 +980,16 @@ static int init_act_open(struct cxgbi_sock *csk)
+ csk->atid = cxgb3_alloc_atid(t3dev, &t3_client, csk);
+ if (csk->atid < 0) {
+ pr_err("NO atid available.\n");
+- return -EINVAL;
++ ret = -EINVAL;
++ goto put_sock;
+ }
+ cxgbi_sock_set_flag(csk, CTPF_HAS_ATID);
+ cxgbi_sock_get(csk);
+
+ skb = alloc_wr(sizeof(struct cpl_act_open_req), 0, GFP_KERNEL);
+ if (!skb) {
+- cxgb3_free_atid(t3dev, csk->atid);
+- cxgbi_sock_put(csk);
+- return -ENOMEM;
++ ret = -ENOMEM;
++ goto free_atid;
+ }
+ skb->sk = (struct sock *)csk;
+ set_arp_failure_handler(skb, act_open_arp_failure);
+@@ -1010,6 +1011,15 @@ static int init_act_open(struct cxgbi_sock *csk)
+ cxgbi_sock_set_state(csk, CTP_ACTIVE_OPEN);
+ send_act_open_req(csk, skb, csk->l2t);
+ return 0;
++
++free_atid:
++ cxgb3_free_atid(t3dev, csk->atid);
++put_sock:
++ cxgbi_sock_put(csk);
++ l2t_release(t3dev, csk->l2t);
++ csk->l2t = NULL;
++
++ return ret;
+ }
+
+ cxgb3_cpl_handler_func cxgb3i_cpl_handlers[NUM_CPL_CMDS] = {
+diff --git a/drivers/scsi/hisi_sas/hisi_sas_main.c b/drivers/scsi/hisi_sas/hisi_sas_main.c
+index 9a6deb21fe4d..11caa4b0d797 100644
+--- a/drivers/scsi/hisi_sas/hisi_sas_main.c
++++ b/drivers/scsi/hisi_sas/hisi_sas_main.c
+@@ -898,8 +898,11 @@ void hisi_sas_phy_oob_ready(struct hisi_hba *hisi_hba, int phy_no)
+ struct hisi_sas_phy *phy = &hisi_hba->phy[phy_no];
+ struct device *dev = hisi_hba->dev;
+
++ dev_dbg(dev, "phy%d OOB ready\n", phy_no);
++ if (phy->phy_attached)
++ return;
++
+ if (!timer_pending(&phy->timer)) {
+- dev_dbg(dev, "phy%d OOB ready\n", phy_no);
+ phy->timer.expires = jiffies + HISI_SAS_WAIT_PHYUP_TIMEOUT * HZ;
+ add_timer(&phy->timer);
+ }
+diff --git a/drivers/scsi/ibmvscsi/ibmvscsi.c b/drivers/scsi/ibmvscsi/ibmvscsi.c
+index 59f0f1030c54..c5711c659b51 100644
+--- a/drivers/scsi/ibmvscsi/ibmvscsi.c
++++ b/drivers/scsi/ibmvscsi/ibmvscsi.c
+@@ -415,6 +415,8 @@ static int ibmvscsi_reenable_crq_queue(struct crq_queue *queue,
+ int rc = 0;
+ struct vio_dev *vdev = to_vio_dev(hostdata->dev);
+
++ set_adapter_info(hostdata);
++
+ /* Re-enable the CRQ */
+ do {
+ if (rc)
+diff --git a/drivers/scsi/iscsi_boot_sysfs.c b/drivers/scsi/iscsi_boot_sysfs.c
+index e4857b728033..a64abe38db2d 100644
+--- a/drivers/scsi/iscsi_boot_sysfs.c
++++ b/drivers/scsi/iscsi_boot_sysfs.c
+@@ -352,7 +352,7 @@ iscsi_boot_create_kobj(struct iscsi_boot_kset *boot_kset,
+ boot_kobj->kobj.kset = boot_kset->kset;
+ if (kobject_init_and_add(&boot_kobj->kobj, &iscsi_boot_ktype,
+ NULL, name, index)) {
+- kfree(boot_kobj);
++ kobject_put(&boot_kobj->kobj);
+ return NULL;
+ }
+ boot_kobj->data = data;
+diff --git a/drivers/scsi/lpfc/lpfc_els.c b/drivers/scsi/lpfc/lpfc_els.c
+index 80d1e661b0d4..35fbcb4d52eb 100644
+--- a/drivers/scsi/lpfc/lpfc_els.c
++++ b/drivers/scsi/lpfc/lpfc_els.c
+@@ -8514,6 +8514,8 @@ lpfc_els_unsol_buffer(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
+ spin_lock_irq(shost->host_lock);
+ if (ndlp->nlp_flag & NLP_IN_DEV_LOSS) {
+ spin_unlock_irq(shost->host_lock);
++ if (newnode)
++ lpfc_nlp_put(ndlp);
+ goto dropit;
+ }
+ spin_unlock_irq(shost->host_lock);
+diff --git a/drivers/scsi/mpt3sas/mpt3sas_base.c b/drivers/scsi/mpt3sas/mpt3sas_base.c
+index 663782bb790d..39d233262039 100644
+--- a/drivers/scsi/mpt3sas/mpt3sas_base.c
++++ b/drivers/scsi/mpt3sas/mpt3sas_base.c
+@@ -4915,7 +4915,9 @@ _base_release_memory_pools(struct MPT3SAS_ADAPTER *ioc)
+ }
+
+ kfree(ioc->hpr_lookup);
++ ioc->hpr_lookup = NULL;
+ kfree(ioc->internal_lookup);
++ ioc->internal_lookup = NULL;
+ if (ioc->chain_lookup) {
+ for (i = 0; i < ioc->scsiio_depth; i++) {
+ for (j = ioc->chains_per_prp_buffer;
+diff --git a/drivers/scsi/qedf/qedf.h b/drivers/scsi/qedf/qedf.h
+index f3f399fe10c8..0da4e16fb23a 100644
+--- a/drivers/scsi/qedf/qedf.h
++++ b/drivers/scsi/qedf/qedf.h
+@@ -355,6 +355,7 @@ struct qedf_ctx {
+ #define QEDF_GRCDUMP_CAPTURE 4
+ #define QEDF_IN_RECOVERY 5
+ #define QEDF_DBG_STOP_IO 6
++#define QEDF_PROBING 8
+ unsigned long flags; /* Miscellaneous state flags */
+ int fipvlan_retries;
+ u8 num_queues;
+diff --git a/drivers/scsi/qedf/qedf_main.c b/drivers/scsi/qedf/qedf_main.c
+index 5b19f5175c5c..3a7d03472922 100644
+--- a/drivers/scsi/qedf/qedf_main.c
++++ b/drivers/scsi/qedf/qedf_main.c
+@@ -3153,7 +3153,7 @@ static int __qedf_probe(struct pci_dev *pdev, int mode)
+ {
+ int rc = -EINVAL;
+ struct fc_lport *lport;
+- struct qedf_ctx *qedf;
++ struct qedf_ctx *qedf = NULL;
+ struct Scsi_Host *host;
+ bool is_vf = false;
+ struct qed_ll2_params params;
+@@ -3183,6 +3183,7 @@ static int __qedf_probe(struct pci_dev *pdev, int mode)
+
+ /* Initialize qedf_ctx */
+ qedf = lport_priv(lport);
++ set_bit(QEDF_PROBING, &qedf->flags);
+ qedf->lport = lport;
+ qedf->ctlr.lp = lport;
+ qedf->pdev = pdev;
+@@ -3206,9 +3207,12 @@ static int __qedf_probe(struct pci_dev *pdev, int mode)
+ } else {
+ /* Init pointers during recovery */
+ qedf = pci_get_drvdata(pdev);
++ set_bit(QEDF_PROBING, &qedf->flags);
+ lport = qedf->lport;
+ }
+
++ QEDF_INFO(&qedf->dbg_ctx, QEDF_LOG_DISC, "Probe started.\n");
++
+ host = lport->host;
+
+ /* Allocate mempool for qedf_io_work structs */
+@@ -3513,6 +3517,10 @@ static int __qedf_probe(struct pci_dev *pdev, int mode)
+ else
+ fc_fabric_login(lport);
+
++ QEDF_INFO(&qedf->dbg_ctx, QEDF_LOG_DISC, "Probe done.\n");
++
++ clear_bit(QEDF_PROBING, &qedf->flags);
++
+ /* All good */
+ return 0;
+
+@@ -3538,6 +3546,11 @@ err2:
+ err1:
+ scsi_host_put(lport->host);
+ err0:
++ if (qedf) {
++ QEDF_INFO(&qedf->dbg_ctx, QEDF_LOG_DISC, "Probe done.\n");
++
++ clear_bit(QEDF_PROBING, &qedf->flags);
++ }
+ return rc;
+ }
+
+@@ -3687,11 +3700,25 @@ void qedf_get_protocol_tlv_data(void *dev, void *data)
+ {
+ struct qedf_ctx *qedf = dev;
+ struct qed_mfw_tlv_fcoe *fcoe = data;
+- struct fc_lport *lport = qedf->lport;
+- struct Scsi_Host *host = lport->host;
+- struct fc_host_attrs *fc_host = shost_to_fc_host(host);
++ struct fc_lport *lport;
++ struct Scsi_Host *host;
++ struct fc_host_attrs *fc_host;
+ struct fc_host_statistics *hst;
+
++ if (!qedf) {
++ QEDF_ERR(NULL, "qedf is null.\n");
++ return;
++ }
++
++ if (test_bit(QEDF_PROBING, &qedf->flags)) {
++ QEDF_ERR(&qedf->dbg_ctx, "Function is still probing.\n");
++ return;
++ }
++
++ lport = qedf->lport;
++ host = lport->host;
++ fc_host = shost_to_fc_host(host);
++
+ /* Force a refresh of the fc_host stats including offload stats */
+ hst = qedf_fc_get_host_stats(host);
+
+diff --git a/drivers/scsi/qedi/qedi_iscsi.c b/drivers/scsi/qedi/qedi_iscsi.c
+index 1f4a5fb00a05..366c65b295a5 100644
+--- a/drivers/scsi/qedi/qedi_iscsi.c
++++ b/drivers/scsi/qedi/qedi_iscsi.c
+@@ -1001,7 +1001,8 @@ static void qedi_ep_disconnect(struct iscsi_endpoint *ep)
+ if (qedi_ep->state == EP_STATE_OFLDCONN_START)
+ goto ep_exit_recover;
+
+- flush_work(&qedi_ep->offload_work);
++ if (qedi_ep->state != EP_STATE_OFLDCONN_NONE)
++ flush_work(&qedi_ep->offload_work);
+
+ if (qedi_ep->conn) {
+ qedi_conn = qedi_ep->conn;
+@@ -1218,6 +1219,10 @@ static int qedi_set_path(struct Scsi_Host *shost, struct iscsi_path *path_data)
+ }
+
+ iscsi_cid = (u32)path_data->handle;
++ if (iscsi_cid >= qedi->max_active_conns) {
++ ret = -EINVAL;
++ goto set_path_exit;
++ }
+ qedi_ep = qedi->ep_tbl[iscsi_cid];
+ QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_INFO,
+ "iscsi_cid=0x%x, qedi_ep=%p\n", iscsi_cid, qedi_ep);
+diff --git a/drivers/scsi/qla2xxx/qla_os.c b/drivers/scsi/qla2xxx/qla_os.c
+index 1d9a4866f9a7..9179bb4caed8 100644
+--- a/drivers/scsi/qla2xxx/qla_os.c
++++ b/drivers/scsi/qla2xxx/qla_os.c
+@@ -6871,6 +6871,7 @@ qla2x00_do_dpc(void *data)
+
+ if (do_reset && !(test_and_set_bit(ABORT_ISP_ACTIVE,
+ &base_vha->dpc_flags))) {
++ base_vha->flags.online = 1;
+ ql_dbg(ql_dbg_dpc, base_vha, 0x4007,
+ "ISP abort scheduled.\n");
+ if (ha->isp_ops->abort_isp(base_vha)) {
+diff --git a/drivers/scsi/qla2xxx/tcm_qla2xxx.c b/drivers/scsi/qla2xxx/tcm_qla2xxx.c
+index 1f0a185b2a95..bf00ae16b487 100644
+--- a/drivers/scsi/qla2xxx/tcm_qla2xxx.c
++++ b/drivers/scsi/qla2xxx/tcm_qla2xxx.c
+@@ -949,6 +949,7 @@ static ssize_t tcm_qla2xxx_tpg_enable_store(struct config_item *item,
+
+ atomic_set(&tpg->lport_tpg_enabled, 0);
+ qlt_stop_phase1(vha->vha_tgt.qla_tgt);
++ qlt_stop_phase2(vha->vha_tgt.qla_tgt);
+ }
+
+ return count;
+@@ -1111,6 +1112,7 @@ static ssize_t tcm_qla2xxx_npiv_tpg_enable_store(struct config_item *item,
+
+ atomic_set(&tpg->lport_tpg_enabled, 0);
+ qlt_stop_phase1(vha->vha_tgt.qla_tgt);
++ qlt_stop_phase2(vha->vha_tgt.qla_tgt);
+ }
+
+ return count;
+diff --git a/drivers/scsi/scsi_error.c b/drivers/scsi/scsi_error.c
+index 978be1602f71..927b1e641842 100644
+--- a/drivers/scsi/scsi_error.c
++++ b/drivers/scsi/scsi_error.c
+@@ -1412,6 +1412,7 @@ static int scsi_eh_stu(struct Scsi_Host *shost,
+ sdev_printk(KERN_INFO, sdev,
+ "%s: skip START_UNIT, past eh deadline\n",
+ current->comm));
++ scsi_device_put(sdev);
+ break;
+ }
+ stu_scmd = NULL;
+@@ -1478,6 +1479,7 @@ static int scsi_eh_bus_device_reset(struct Scsi_Host *shost,
+ sdev_printk(KERN_INFO, sdev,
+ "%s: skip BDR, past eh deadline\n",
+ current->comm));
++ scsi_device_put(sdev);
+ break;
+ }
+ bdr_scmd = NULL;
+diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
+index 06c260f6cdae..b8b4366f1200 100644
+--- a/drivers/scsi/scsi_lib.c
++++ b/drivers/scsi/scsi_lib.c
+@@ -548,7 +548,7 @@ static void scsi_uninit_cmd(struct scsi_cmnd *cmd)
+ }
+ }
+
+-static void scsi_mq_free_sgtables(struct scsi_cmnd *cmd)
++static void scsi_free_sgtables(struct scsi_cmnd *cmd)
+ {
+ if (cmd->sdb.table.nents)
+ sg_free_table_chained(&cmd->sdb.table,
+@@ -560,7 +560,7 @@ static void scsi_mq_free_sgtables(struct scsi_cmnd *cmd)
+
+ static void scsi_mq_uninit_cmd(struct scsi_cmnd *cmd)
+ {
+- scsi_mq_free_sgtables(cmd);
++ scsi_free_sgtables(cmd);
+ scsi_uninit_cmd(cmd);
+ }
+
+@@ -1059,7 +1059,7 @@ blk_status_t scsi_init_io(struct scsi_cmnd *cmd)
+
+ return BLK_STS_OK;
+ out_free_sgtables:
+- scsi_mq_free_sgtables(cmd);
++ scsi_free_sgtables(cmd);
+ return ret;
+ }
+ EXPORT_SYMBOL(scsi_init_io);
+@@ -1190,6 +1190,7 @@ static blk_status_t scsi_setup_cmnd(struct scsi_device *sdev,
+ struct request *req)
+ {
+ struct scsi_cmnd *cmd = blk_mq_rq_to_pdu(req);
++ blk_status_t ret;
+
+ if (!blk_rq_bytes(req))
+ cmd->sc_data_direction = DMA_NONE;
+@@ -1199,9 +1200,14 @@ static blk_status_t scsi_setup_cmnd(struct scsi_device *sdev,
+ cmd->sc_data_direction = DMA_FROM_DEVICE;
+
+ if (blk_rq_is_scsi(req))
+- return scsi_setup_scsi_cmnd(sdev, req);
++ ret = scsi_setup_scsi_cmnd(sdev, req);
+ else
+- return scsi_setup_fs_cmnd(sdev, req);
++ ret = scsi_setup_fs_cmnd(sdev, req);
++
++ if (ret != BLK_STS_OK)
++ scsi_free_sgtables(cmd);
++
++ return ret;
+ }
+
+ static blk_status_t
+@@ -2859,8 +2865,10 @@ scsi_host_unblock(struct Scsi_Host *shost, int new_state)
+
+ shost_for_each_device(sdev, shost) {
+ ret = scsi_internal_device_unblock(sdev, new_state);
+- if (ret)
++ if (ret) {
++ scsi_device_put(sdev);
+ break;
++ }
+ }
+ return ret;
+ }
+diff --git a/drivers/scsi/scsi_transport_iscsi.c b/drivers/scsi/scsi_transport_iscsi.c
+index b2a803c51288..ea6d498fa923 100644
+--- a/drivers/scsi/scsi_transport_iscsi.c
++++ b/drivers/scsi/scsi_transport_iscsi.c
+@@ -1616,6 +1616,12 @@ static DECLARE_TRANSPORT_CLASS(iscsi_connection_class,
+ static struct sock *nls;
+ static DEFINE_MUTEX(rx_queue_mutex);
+
++/*
++ * conn_mutex protects the {start,bind,stop,destroy}_conn from racing
++ * against the kernel stop_connection recovery mechanism
++ */
++static DEFINE_MUTEX(conn_mutex);
++
+ static LIST_HEAD(sesslist);
+ static LIST_HEAD(sessdestroylist);
+ static DEFINE_SPINLOCK(sesslock);
+@@ -2445,6 +2451,32 @@ int iscsi_offload_mesg(struct Scsi_Host *shost,
+ }
+ EXPORT_SYMBOL_GPL(iscsi_offload_mesg);
+
++/*
++ * This can be called without the rx_queue_mutex, if invoked by the kernel
++ * stop work. But, in that case, it is guaranteed not to race with
++ * iscsi_destroy by conn_mutex.
++ */
++static void iscsi_if_stop_conn(struct iscsi_cls_conn *conn, int flag)
++{
++ /*
++ * It is important that this path doesn't rely on
++ * rx_queue_mutex, otherwise, a thread doing allocation on a
++ * start_session/start_connection could sleep waiting on a
++ * writeback to a failed iscsi device, that cannot be recovered
++ * because the lock is held. If we don't hold it here, the
++ * kernel stop_conn_work_fn has a chance to stop the broken
++ * session and resolve the allocation.
++ *
++ * Still, the user invoked .stop_conn() needs to be serialized
++ * with stop_conn_work_fn by a private mutex. Not pretty, but
++ * it works.
++ */
++ mutex_lock(&conn_mutex);
++ conn->transport->stop_conn(conn, flag);
++ mutex_unlock(&conn_mutex);
++
++}
++
+ static void stop_conn_work_fn(struct work_struct *work)
+ {
+ struct iscsi_cls_conn *conn, *tmp;
+@@ -2463,30 +2495,17 @@ static void stop_conn_work_fn(struct work_struct *work)
+ uint32_t sid = iscsi_conn_get_sid(conn);
+ struct iscsi_cls_session *session;
+
+- mutex_lock(&rx_queue_mutex);
+-
+ session = iscsi_session_lookup(sid);
+ if (session) {
+ if (system_state != SYSTEM_RUNNING) {
+ session->recovery_tmo = 0;
+- conn->transport->stop_conn(conn,
+- STOP_CONN_TERM);
++ iscsi_if_stop_conn(conn, STOP_CONN_TERM);
+ } else {
+- conn->transport->stop_conn(conn,
+- STOP_CONN_RECOVER);
++ iscsi_if_stop_conn(conn, STOP_CONN_RECOVER);
+ }
+ }
+
+ list_del_init(&conn->conn_list_err);
+-
+- mutex_unlock(&rx_queue_mutex);
+-
+- /* we don't want to hold rx_queue_mutex for too long,
+- * for instance if many conns failed at the same time,
+- * since this stall other iscsi maintenance operations.
+- * Give other users a chance to proceed.
+- */
+- cond_resched();
+ }
+ }
+
+@@ -2846,8 +2865,11 @@ iscsi_if_destroy_conn(struct iscsi_transport *transport, struct iscsi_uevent *ev
+ spin_unlock_irqrestore(&connlock, flags);
+
+ ISCSI_DBG_TRANS_CONN(conn, "Destroying transport conn\n");
++
++ mutex_lock(&conn_mutex);
+ if (transport->destroy_conn)
+ transport->destroy_conn(conn);
++ mutex_unlock(&conn_mutex);
+
+ return 0;
+ }
+@@ -3689,9 +3711,12 @@ iscsi_if_recv_msg(struct sk_buff *skb, struct nlmsghdr *nlh, uint32_t *group)
+ break;
+ }
+
++ mutex_lock(&conn_mutex);
+ ev->r.retcode = transport->bind_conn(session, conn,
+ ev->u.b_conn.transport_eph,
+ ev->u.b_conn.is_leading);
++ mutex_unlock(&conn_mutex);
++
+ if (ev->r.retcode || !transport->ep_connect)
+ break;
+
+@@ -3713,9 +3738,11 @@ iscsi_if_recv_msg(struct sk_buff *skb, struct nlmsghdr *nlh, uint32_t *group)
+ case ISCSI_UEVENT_START_CONN:
+ conn = iscsi_conn_lookup(ev->u.start_conn.sid, ev->u.start_conn.cid);
+ if (conn) {
++ mutex_lock(&conn_mutex);
+ ev->r.retcode = transport->start_conn(conn);
+ if (!ev->r.retcode)
+ conn->state = ISCSI_CONN_UP;
++ mutex_unlock(&conn_mutex);
+ }
+ else
+ err = -EINVAL;
+@@ -3723,17 +3750,20 @@ iscsi_if_recv_msg(struct sk_buff *skb, struct nlmsghdr *nlh, uint32_t *group)
+ case ISCSI_UEVENT_STOP_CONN:
+ conn = iscsi_conn_lookup(ev->u.stop_conn.sid, ev->u.stop_conn.cid);
+ if (conn)
+- transport->stop_conn(conn, ev->u.stop_conn.flag);
++ iscsi_if_stop_conn(conn, ev->u.stop_conn.flag);
+ else
+ err = -EINVAL;
+ break;
+ case ISCSI_UEVENT_SEND_PDU:
+ conn = iscsi_conn_lookup(ev->u.send_pdu.sid, ev->u.send_pdu.cid);
+- if (conn)
++ if (conn) {
++ mutex_lock(&conn_mutex);
+ ev->r.retcode = transport->send_pdu(conn,
+ (struct iscsi_hdr*)((char*)ev + sizeof(*ev)),
+ (char*)ev + sizeof(*ev) + ev->u.send_pdu.hdr_size,
+ ev->u.send_pdu.data_size);
++ mutex_unlock(&conn_mutex);
++ }
+ else
+ err = -EINVAL;
+ break;
+diff --git a/drivers/scsi/sr.c b/drivers/scsi/sr.c
+index d2fe3fa470f9..1e13c6a0f0ca 100644
+--- a/drivers/scsi/sr.c
++++ b/drivers/scsi/sr.c
+@@ -797,7 +797,7 @@ static int sr_probe(struct device *dev)
+ cd->cdi.disk = disk;
+
+ if (register_cdrom(&cd->cdi))
+- goto fail_put;
++ goto fail_minor;
+
+ /*
+ * Initialize block layer runtime PM stuffs before the
+@@ -815,8 +815,13 @@ static int sr_probe(struct device *dev)
+
+ return 0;
+
++fail_minor:
++ spin_lock(&sr_index_lock);
++ clear_bit(minor, sr_index_bits);
++ spin_unlock(&sr_index_lock);
+ fail_put:
+ put_disk(disk);
++ mutex_destroy(&cd->lock);
+ fail_free:
+ kfree(cd);
+ fail:
+diff --git a/drivers/scsi/ufs/ti-j721e-ufs.c b/drivers/scsi/ufs/ti-j721e-ufs.c
+index 5216d228cdd9..46bb905b4d6a 100644
+--- a/drivers/scsi/ufs/ti-j721e-ufs.c
++++ b/drivers/scsi/ufs/ti-j721e-ufs.c
+@@ -32,14 +32,14 @@ static int ti_j721e_ufs_probe(struct platform_device *pdev)
+ ret = pm_runtime_get_sync(dev);
+ if (ret < 0) {
+ pm_runtime_put_noidle(dev);
+- return ret;
++ goto disable_pm;
+ }
+
+ /* Select MPHY refclk frequency */
+ clk = devm_clk_get(dev, NULL);
+ if (IS_ERR(clk)) {
+ dev_err(dev, "Cannot claim MPHY clock.\n");
+- return PTR_ERR(clk);
++ goto clk_err;
+ }
+ clk_rate = clk_get_rate(clk);
+ if (clk_rate == 26000000)
+@@ -54,16 +54,23 @@ static int ti_j721e_ufs_probe(struct platform_device *pdev)
+ dev);
+ if (ret) {
+ dev_err(dev, "failed to populate child nodes %d\n", ret);
+- pm_runtime_put_sync(dev);
++ goto clk_err;
+ }
+
+ return ret;
++
++clk_err:
++ pm_runtime_put_sync(dev);
++disable_pm:
++ pm_runtime_disable(dev);
++ return ret;
+ }
+
+ static int ti_j721e_ufs_remove(struct platform_device *pdev)
+ {
+ of_platform_depopulate(&pdev->dev);
+ pm_runtime_put_sync(&pdev->dev);
++ pm_runtime_disable(&pdev->dev);
+
+ return 0;
+ }
+diff --git a/drivers/scsi/ufs/ufs-qcom.c b/drivers/scsi/ufs/ufs-qcom.c
+index 19aa5c44e0da..f938867301a0 100644
+--- a/drivers/scsi/ufs/ufs-qcom.c
++++ b/drivers/scsi/ufs/ufs-qcom.c
+@@ -1658,11 +1658,11 @@ static void ufs_qcom_dump_dbg_regs(struct ufs_hba *hba)
+
+ /* sleep a bit intermittently as we are dumping too much data */
+ ufs_qcom_print_hw_debug_reg_all(hba, NULL, ufs_qcom_dump_regs_wrapper);
+- usleep_range(1000, 1100);
++ udelay(1000);
+ ufs_qcom_testbus_read(hba);
+- usleep_range(1000, 1100);
++ udelay(1000);
+ ufs_qcom_print_unipro_testbus(hba);
+- usleep_range(1000, 1100);
++ udelay(1000);
+ }
+
+ /**
+diff --git a/drivers/scsi/ufs/ufs_bsg.c b/drivers/scsi/ufs/ufs_bsg.c
+index 53dd87628cbe..516a7f573942 100644
+--- a/drivers/scsi/ufs/ufs_bsg.c
++++ b/drivers/scsi/ufs/ufs_bsg.c
+@@ -106,8 +106,10 @@ static int ufs_bsg_request(struct bsg_job *job)
+ desc_op = bsg_request->upiu_req.qr.opcode;
+ ret = ufs_bsg_alloc_desc_buffer(hba, job, &desc_buff,
+ &desc_len, desc_op);
+- if (ret)
++ if (ret) {
++ pm_runtime_put_sync(hba->dev);
+ goto out;
++ }
+
+ /* fall through */
+ case UPIU_TRANSACTION_NOP_OUT:
+diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c
+index 698e8d20b4ba..52740b60d786 100644
+--- a/drivers/scsi/ufs/ufshcd.c
++++ b/drivers/scsi/ufs/ufshcd.c
+@@ -5098,7 +5098,6 @@ static int ufshcd_bkops_ctrl(struct ufs_hba *hba,
+ err = ufshcd_enable_auto_bkops(hba);
+ else
+ err = ufshcd_disable_auto_bkops(hba);
+- hba->urgent_bkops_lvl = curr_status;
+ out:
+ return err;
+ }
+diff --git a/drivers/slimbus/qcom-ngd-ctrl.c b/drivers/slimbus/qcom-ngd-ctrl.c
+index fc2575fef51b..7426b5884218 100644
+--- a/drivers/slimbus/qcom-ngd-ctrl.c
++++ b/drivers/slimbus/qcom-ngd-ctrl.c
+@@ -1361,7 +1361,6 @@ static int of_qcom_slim_ngd_register(struct device *parent,
+ ngd->pdev->driver_override = QCOM_SLIM_NGD_DRV_NAME;
+ ngd->pdev->dev.of_node = node;
+ ctrl->ngd = ngd;
+- platform_set_drvdata(ngd->pdev, ctrl);
+
+ platform_device_add(ngd->pdev);
+ ngd->base = ctrl->base + ngd->id * data->offset +
+@@ -1376,12 +1375,13 @@ static int of_qcom_slim_ngd_register(struct device *parent,
+
+ static int qcom_slim_ngd_probe(struct platform_device *pdev)
+ {
+- struct qcom_slim_ngd_ctrl *ctrl = platform_get_drvdata(pdev);
+ struct device *dev = &pdev->dev;
++ struct qcom_slim_ngd_ctrl *ctrl = dev_get_drvdata(dev->parent);
+ int ret;
+
+ ctrl->ctrl.dev = dev;
+
++ platform_set_drvdata(pdev, ctrl);
+ pm_runtime_use_autosuspend(dev);
+ pm_runtime_set_autosuspend_delay(dev, QCOM_SLIM_NGD_AUTOSUSPEND);
+ pm_runtime_set_suspended(dev);
+diff --git a/drivers/soundwire/slave.c b/drivers/soundwire/slave.c
+index aace57fae7f8..4bacdb187eab 100644
+--- a/drivers/soundwire/slave.c
++++ b/drivers/soundwire/slave.c
+@@ -68,6 +68,8 @@ static int sdw_slave_add(struct sdw_bus *bus,
+ list_del(&slave->node);
+ mutex_unlock(&bus->bus_lock);
+ put_device(&slave->dev);
++
++ return ret;
+ }
+ sdw_slave_debugfs_init(slave);
+
+diff --git a/drivers/staging/gasket/gasket_sysfs.c b/drivers/staging/gasket/gasket_sysfs.c
+index 5f0e089573a2..af26bc9f184a 100644
+--- a/drivers/staging/gasket/gasket_sysfs.c
++++ b/drivers/staging/gasket/gasket_sysfs.c
+@@ -339,6 +339,7 @@ void gasket_sysfs_put_attr(struct device *device,
+
+ dev_err(device, "Unable to put unknown attribute: %s\n",
+ attr->attr.attr.name);
++ put_mapping(mapping);
+ }
+ EXPORT_SYMBOL(gasket_sysfs_put_attr);
+
+@@ -372,6 +373,7 @@ ssize_t gasket_sysfs_register_store(struct device *device,
+ gasket_dev = mapping->gasket_dev;
+ if (!gasket_dev) {
+ dev_err(device, "Device driver may have been removed\n");
++ put_mapping(mapping);
+ return 0;
+ }
+
+diff --git a/drivers/staging/greybus/light.c b/drivers/staging/greybus/light.c
+index d6ba25f21d80..d2672b65c3f4 100644
+--- a/drivers/staging/greybus/light.c
++++ b/drivers/staging/greybus/light.c
+@@ -1026,7 +1026,8 @@ static int gb_lights_light_config(struct gb_lights *glights, u8 id)
+
+ light->channels_count = conf.channel_count;
+ light->name = kstrndup(conf.name, NAMES_MAX, GFP_KERNEL);
+-
++ if (!light->name)
++ return -ENOMEM;
+ light->channels = kcalloc(light->channels_count,
+ sizeof(struct gb_channel), GFP_KERNEL);
+ if (!light->channels)
+diff --git a/drivers/staging/mt7621-dts/mt7621.dtsi b/drivers/staging/mt7621-dts/mt7621.dtsi
+index 9e5cf68731bb..82aa93634eda 100644
+--- a/drivers/staging/mt7621-dts/mt7621.dtsi
++++ b/drivers/staging/mt7621-dts/mt7621.dtsi
+@@ -523,11 +523,10 @@
+ 0x01000000 0 0x00000000 0x1e160000 0 0x00010000 /* io space */
+ >;
+
+- #interrupt-cells = <1>;
+- interrupt-map-mask = <0xF0000 0 0 1>;
+- interrupt-map = <0x10000 0 0 1 &gic GIC_SHARED 4 IRQ_TYPE_LEVEL_HIGH>,
+- <0x20000 0 0 1 &gic GIC_SHARED 24 IRQ_TYPE_LEVEL_HIGH>,
+- <0x30000 0 0 1 &gic GIC_SHARED 25 IRQ_TYPE_LEVEL_HIGH>;
++ interrupt-parent = <&gic>;
++ interrupts = <GIC_SHARED 4 IRQ_TYPE_LEVEL_HIGH
++ GIC_SHARED 24 IRQ_TYPE_LEVEL_HIGH
++ GIC_SHARED 25 IRQ_TYPE_LEVEL_HIGH>;
+
+ status = "disabled";
+
+diff --git a/drivers/staging/mt7621-pci/pci-mt7621.c b/drivers/staging/mt7621-pci/pci-mt7621.c
+index b9d460a9c041..36207243a71b 100644
+--- a/drivers/staging/mt7621-pci/pci-mt7621.c
++++ b/drivers/staging/mt7621-pci/pci-mt7621.c
+@@ -97,6 +97,7 @@
+ * @pcie_rst: pointer to port reset control
+ * @gpio_rst: gpio reset
+ * @slot: port slot
++ * @irq: GIC irq
+ * @enabled: indicates if port is enabled
+ */
+ struct mt7621_pcie_port {
+@@ -107,6 +108,7 @@ struct mt7621_pcie_port {
+ struct reset_control *pcie_rst;
+ struct gpio_desc *gpio_rst;
+ u32 slot;
++ int irq;
+ bool enabled;
+ };
+
+@@ -120,6 +122,7 @@ struct mt7621_pcie_port {
+ * @dev: Pointer to PCIe device
+ * @io_map_base: virtual memory base address for io
+ * @ports: pointer to PCIe port information
++ * @irq_map: irq mapping info according pcie link status
+ * @resets_inverted: depends on chip revision
+ * reset lines are inverted.
+ */
+@@ -135,6 +138,7 @@ struct mt7621_pcie {
+ } offset;
+ unsigned long io_map_base;
+ struct list_head ports;
++ int irq_map[PCIE_P2P_MAX];
+ bool resets_inverted;
+ };
+
+@@ -279,6 +283,16 @@ static void setup_cm_memory_region(struct mt7621_pcie *pcie)
+ }
+ }
+
++static int mt7621_map_irq(const struct pci_dev *pdev, u8 slot, u8 pin)
++{
++ struct mt7621_pcie *pcie = pdev->bus->sysdata;
++ struct device *dev = pcie->dev;
++ int irq = pcie->irq_map[slot];
++
++ dev_info(dev, "bus=%d slot=%d irq=%d\n", pdev->bus->number, slot, irq);
++ return irq;
++}
++
+ static int mt7621_pci_parse_request_of_pci_ranges(struct mt7621_pcie *pcie)
+ {
+ struct device *dev = pcie->dev;
+@@ -330,6 +344,7 @@ static int mt7621_pcie_parse_port(struct mt7621_pcie *pcie,
+ {
+ struct mt7621_pcie_port *port;
+ struct device *dev = pcie->dev;
++ struct platform_device *pdev = to_platform_device(dev);
+ struct device_node *pnode = dev->of_node;
+ struct resource regs;
+ char name[10];
+@@ -371,6 +386,12 @@ static int mt7621_pcie_parse_port(struct mt7621_pcie *pcie,
+ port->slot = slot;
+ port->pcie = pcie;
+
++ port->irq = platform_get_irq(pdev, slot);
++ if (port->irq < 0) {
++ dev_err(dev, "Failed to get IRQ for PCIe%d\n", slot);
++ return -ENXIO;
++ }
++
+ INIT_LIST_HEAD(&port->list);
+ list_add_tail(&port->list, &pcie->ports);
+
+@@ -585,13 +606,15 @@ static int mt7621_pcie_init_virtual_bridges(struct mt7621_pcie *pcie)
+ {
+ u32 pcie_link_status = 0;
+ u32 n;
+- int i;
++ int i = 0;
+ u32 p2p_br_devnum[PCIE_P2P_MAX];
++ int irqs[PCIE_P2P_MAX];
+ struct mt7621_pcie_port *port;
+
+ list_for_each_entry(port, &pcie->ports, list) {
+ u32 slot = port->slot;
+
++ irqs[i++] = port->irq;
+ if (port->enabled)
+ pcie_link_status |= BIT(slot);
+ }
+@@ -614,6 +637,15 @@ static int mt7621_pcie_init_virtual_bridges(struct mt7621_pcie *pcie)
+ (p2p_br_devnum[1] << PCIE_P2P_BR_DEVNUM1_SHIFT) |
+ (p2p_br_devnum[2] << PCIE_P2P_BR_DEVNUM2_SHIFT));
+
++ /* Assign IRQs */
++ n = 0;
++ for (i = 0; i < PCIE_P2P_MAX; i++)
++ if (pcie_link_status & BIT(i))
++ pcie->irq_map[n++] = irqs[i];
++
++ for (i = n; i < PCIE_P2P_MAX; i++)
++ pcie->irq_map[i] = -1;
++
+ return 0;
+ }
+
+@@ -638,7 +670,7 @@ static int mt7621_pcie_register_host(struct pci_host_bridge *host,
+ host->busnr = pcie->busn.start;
+ host->dev.parent = pcie->dev;
+ host->ops = &mt7621_pci_ops;
+- host->map_irq = of_irq_parse_and_map_pci;
++ host->map_irq = mt7621_map_irq;
+ host->swizzle_irq = pci_common_swizzle;
+ host->sysdata = pcie;
+
+diff --git a/drivers/staging/sm750fb/sm750.c b/drivers/staging/sm750fb/sm750.c
+index 59568d18ce23..5b72aa81d94c 100644
+--- a/drivers/staging/sm750fb/sm750.c
++++ b/drivers/staging/sm750fb/sm750.c
+@@ -898,6 +898,7 @@ static int lynxfb_set_fbinfo(struct fb_info *info, int index)
+ fix->visual = FB_VISUAL_PSEUDOCOLOR;
+ break;
+ case 16:
++ case 24:
+ case 32:
+ fix->visual = FB_VISUAL_TRUECOLOR;
+ break;
+diff --git a/drivers/staging/wfx/bus_sdio.c b/drivers/staging/wfx/bus_sdio.c
+index dedc3ff58d3e..c2e4bd1e3b0a 100644
+--- a/drivers/staging/wfx/bus_sdio.c
++++ b/drivers/staging/wfx/bus_sdio.c
+@@ -156,7 +156,13 @@ static const struct hwbus_ops wfx_sdio_hwbus_ops = {
+ .align_size = wfx_sdio_align_size,
+ };
+
+-static const struct of_device_id wfx_sdio_of_match[];
++static const struct of_device_id wfx_sdio_of_match[] = {
++ { .compatible = "silabs,wfx-sdio" },
++ { .compatible = "silabs,wf200" },
++ { },
++};
++MODULE_DEVICE_TABLE(of, wfx_sdio_of_match);
++
+ static int wfx_sdio_probe(struct sdio_func *func,
+ const struct sdio_device_id *id)
+ {
+@@ -248,15 +254,6 @@ static const struct sdio_device_id wfx_sdio_ids[] = {
+ };
+ MODULE_DEVICE_TABLE(sdio, wfx_sdio_ids);
+
+-#ifdef CONFIG_OF
+-static const struct of_device_id wfx_sdio_of_match[] = {
+- { .compatible = "silabs,wfx-sdio" },
+- { .compatible = "silabs,wf200" },
+- { },
+-};
+-MODULE_DEVICE_TABLE(of, wfx_sdio_of_match);
+-#endif
+-
+ struct sdio_driver wfx_sdio_driver = {
+ .name = "wfx-sdio",
+ .id_table = wfx_sdio_ids,
+@@ -264,6 +261,6 @@ struct sdio_driver wfx_sdio_driver = {
+ .remove = wfx_sdio_remove,
+ .drv = {
+ .owner = THIS_MODULE,
+- .of_match_table = of_match_ptr(wfx_sdio_of_match),
++ .of_match_table = wfx_sdio_of_match,
+ }
+ };
+diff --git a/drivers/staging/wfx/debug.c b/drivers/staging/wfx/debug.c
+index 1164aba118a1..a73b5bbb578e 100644
+--- a/drivers/staging/wfx/debug.c
++++ b/drivers/staging/wfx/debug.c
+@@ -142,7 +142,7 @@ static int wfx_rx_stats_show(struct seq_file *seq, void *v)
+ mutex_lock(&wdev->rx_stats_lock);
+ seq_printf(seq, "Timestamp: %dus\n", st->date);
+ seq_printf(seq, "Low power clock: frequency %uHz, external %s\n",
+- st->pwr_clk_freq,
++ le32_to_cpu(st->pwr_clk_freq),
+ st->is_ext_pwr_clk ? "yes" : "no");
+ seq_printf(seq,
+ "Num. of frames: %d, PER (x10e4): %d, Throughput: %dKbps/s\n",
+@@ -152,9 +152,12 @@ static int wfx_rx_stats_show(struct seq_file *seq, void *v)
+ for (i = 0; i < ARRAY_SIZE(channel_names); i++) {
+ if (channel_names[i])
+ seq_printf(seq, "%5s %8d %8d %8d %8d %8d\n",
+- channel_names[i], st->nb_rx_by_rate[i],
+- st->per[i], st->rssi[i] / 100,
+- st->snr[i] / 100, st->cfo[i]);
++ channel_names[i],
++ le32_to_cpu(st->nb_rx_by_rate[i]),
++ le16_to_cpu(st->per[i]),
++ (s16)le16_to_cpu(st->rssi[i]) / 100,
++ (s16)le16_to_cpu(st->snr[i]) / 100,
++ (s16)le16_to_cpu(st->cfo[i]));
+ }
+ mutex_unlock(&wdev->rx_stats_lock);
+
+diff --git a/drivers/staging/wfx/hif_tx.c b/drivers/staging/wfx/hif_tx.c
+index 77bca43aca42..20b3045d7667 100644
+--- a/drivers/staging/wfx/hif_tx.c
++++ b/drivers/staging/wfx/hif_tx.c
+@@ -268,7 +268,7 @@ int hif_scan(struct wfx_vif *wvif, struct cfg80211_scan_request *req,
+ tmo_chan_bg = le32_to_cpu(body->max_channel_time) * USEC_PER_TU;
+ tmo_chan_fg = 512 * USEC_PER_TU + body->probe_delay;
+ tmo_chan_fg *= body->num_of_probe_requests;
+- tmo = chan_num * max(tmo_chan_bg, tmo_chan_fg);
++ tmo = chan_num * max(tmo_chan_bg, tmo_chan_fg) + 512 * USEC_PER_TU;
+
+ wfx_fill_header(hif, wvif->id, HIF_REQ_ID_START_SCAN, buf_len);
+ ret = wfx_cmd_send(wvif->wdev, hif, NULL, 0, false);
+diff --git a/drivers/staging/wfx/queue.c b/drivers/staging/wfx/queue.c
+index 39d9127ce4b9..8ae23681e29b 100644
+--- a/drivers/staging/wfx/queue.c
++++ b/drivers/staging/wfx/queue.c
+@@ -35,6 +35,7 @@ void wfx_tx_flush(struct wfx_dev *wdev)
+ if (wdev->chip_frozen)
+ return;
+
++ wfx_tx_lock(wdev);
+ mutex_lock(&wdev->hif_cmd.lock);
+ ret = wait_event_timeout(wdev->hif.tx_buffers_empty,
+ !wdev->hif.tx_buffers_used,
+@@ -47,6 +48,7 @@ void wfx_tx_flush(struct wfx_dev *wdev)
+ wdev->chip_frozen = 1;
+ }
+ mutex_unlock(&wdev->hif_cmd.lock);
++ wfx_tx_unlock(wdev);
+ }
+
+ void wfx_tx_lock_flush(struct wfx_dev *wdev)
+diff --git a/drivers/staging/wfx/sta.c b/drivers/staging/wfx/sta.c
+index 9d430346a58b..b4cd7cb1ce56 100644
+--- a/drivers/staging/wfx/sta.c
++++ b/drivers/staging/wfx/sta.c
+@@ -520,7 +520,9 @@ static void wfx_do_join(struct wfx_vif *wvif)
+ ssidie = ieee80211_bss_get_ie(bss, WLAN_EID_SSID);
+ if (ssidie) {
+ ssidlen = ssidie[1];
+- memcpy(ssid, &ssidie[2], ssidie[1]);
++ if (ssidlen > IEEE80211_MAX_SSID_LEN)
++ ssidlen = IEEE80211_MAX_SSID_LEN;
++ memcpy(ssid, &ssidie[2], ssidlen);
+ }
+ rcu_read_unlock();
+
+@@ -1047,7 +1049,6 @@ int wfx_add_interface(struct ieee80211_hw *hw, struct ieee80211_vif *vif)
+ init_completion(&wvif->scan_complete);
+ INIT_WORK(&wvif->scan_work, wfx_hw_scan_work);
+
+- INIT_WORK(&wvif->tx_policy_upload_work, wfx_tx_policy_upload_work);
+ mutex_unlock(&wdev->conf_mutex);
+
+ hif_set_macaddr(wvif, vif->addr);
+diff --git a/drivers/staging/wfx/sta.h b/drivers/staging/wfx/sta.h
+index cf99a8a74a81..ace845f9ed14 100644
+--- a/drivers/staging/wfx/sta.h
++++ b/drivers/staging/wfx/sta.h
+@@ -37,7 +37,7 @@ struct wfx_grp_addr_table {
+ struct wfx_sta_priv {
+ int link_id;
+ int vif_id;
+- u8 buffered[IEEE80211_NUM_TIDS];
++ int buffered[IEEE80211_NUM_TIDS];
+ // Ensure atomicity of "buffered" and calls to ieee80211_sta_set_buffered()
+ spinlock_t lock;
+ };
+diff --git a/drivers/staging/wilc1000/hif.c b/drivers/staging/wilc1000/hif.c
+index 6c7de2f8d3f2..d025a3093015 100644
+--- a/drivers/staging/wilc1000/hif.c
++++ b/drivers/staging/wilc1000/hif.c
+@@ -11,6 +11,8 @@
+
+ #define WILC_FALSE_FRMWR_CHANNEL 100
+
++#define WILC_SCAN_WID_LIST_SIZE 6
++
+ struct wilc_rcvd_mac_info {
+ u8 status;
+ };
+@@ -151,7 +153,7 @@ int wilc_scan(struct wilc_vif *vif, u8 scan_source, u8 scan_type,
+ void *user_arg, struct cfg80211_scan_request *request)
+ {
+ int result = 0;
+- struct wid wid_list[5];
++ struct wid wid_list[WILC_SCAN_WID_LIST_SIZE];
+ u32 index = 0;
+ u32 i, scan_timeout;
+ u8 *buffer;
+diff --git a/drivers/target/loopback/tcm_loop.c b/drivers/target/loopback/tcm_loop.c
+index 3305b47fdf53..16d5a4e117a2 100644
+--- a/drivers/target/loopback/tcm_loop.c
++++ b/drivers/target/loopback/tcm_loop.c
+@@ -545,32 +545,15 @@ static int tcm_loop_write_pending(struct se_cmd *se_cmd)
+ return 0;
+ }
+
+-static int tcm_loop_queue_data_in(struct se_cmd *se_cmd)
++static int tcm_loop_queue_data_or_status(const char *func,
++ struct se_cmd *se_cmd, u8 scsi_status)
+ {
+ struct tcm_loop_cmd *tl_cmd = container_of(se_cmd,
+ struct tcm_loop_cmd, tl_se_cmd);
+ struct scsi_cmnd *sc = tl_cmd->sc;
+
+ pr_debug("%s() called for scsi_cmnd: %p cdb: 0x%02x\n",
+- __func__, sc, sc->cmnd[0]);
+-
+- sc->result = SAM_STAT_GOOD;
+- set_host_byte(sc, DID_OK);
+- if ((se_cmd->se_cmd_flags & SCF_OVERFLOW_BIT) ||
+- (se_cmd->se_cmd_flags & SCF_UNDERFLOW_BIT))
+- scsi_set_resid(sc, se_cmd->residual_count);
+- sc->scsi_done(sc);
+- return 0;
+-}
+-
+-static int tcm_loop_queue_status(struct se_cmd *se_cmd)
+-{
+- struct tcm_loop_cmd *tl_cmd = container_of(se_cmd,
+- struct tcm_loop_cmd, tl_se_cmd);
+- struct scsi_cmnd *sc = tl_cmd->sc;
+-
+- pr_debug("%s() called for scsi_cmnd: %p cdb: 0x%02x\n",
+- __func__, sc, sc->cmnd[0]);
++ func, sc, sc->cmnd[0]);
+
+ if (se_cmd->sense_buffer &&
+ ((se_cmd->se_cmd_flags & SCF_TRANSPORT_TASK_SENSE) ||
+@@ -581,7 +564,7 @@ static int tcm_loop_queue_status(struct se_cmd *se_cmd)
+ sc->result = SAM_STAT_CHECK_CONDITION;
+ set_driver_byte(sc, DRIVER_SENSE);
+ } else
+- sc->result = se_cmd->scsi_status;
++ sc->result = scsi_status;
+
+ set_host_byte(sc, DID_OK);
+ if ((se_cmd->se_cmd_flags & SCF_OVERFLOW_BIT) ||
+@@ -591,6 +574,17 @@ static int tcm_loop_queue_status(struct se_cmd *se_cmd)
+ return 0;
+ }
+
++static int tcm_loop_queue_data_in(struct se_cmd *se_cmd)
++{
++ return tcm_loop_queue_data_or_status(__func__, se_cmd, SAM_STAT_GOOD);
++}
++
++static int tcm_loop_queue_status(struct se_cmd *se_cmd)
++{
++ return tcm_loop_queue_data_or_status(__func__,
++ se_cmd, se_cmd->scsi_status);
++}
++
+ static void tcm_loop_queue_tm_rsp(struct se_cmd *se_cmd)
+ {
+ struct tcm_loop_cmd *tl_cmd = container_of(se_cmd,
+diff --git a/drivers/target/target_core_user.c b/drivers/target/target_core_user.c
+index f769bb1e3735..b63a1e0c4aa6 100644
+--- a/drivers/target/target_core_user.c
++++ b/drivers/target/target_core_user.c
+@@ -882,41 +882,24 @@ static inline size_t tcmu_cmd_get_cmd_size(struct tcmu_cmd *tcmu_cmd,
+ return command_size;
+ }
+
+-static int tcmu_setup_cmd_timer(struct tcmu_cmd *tcmu_cmd, unsigned int tmo,
+- struct timer_list *timer)
++static void tcmu_setup_cmd_timer(struct tcmu_cmd *tcmu_cmd, unsigned int tmo,
++ struct timer_list *timer)
+ {
+- struct tcmu_dev *udev = tcmu_cmd->tcmu_dev;
+- int cmd_id;
+-
+- if (tcmu_cmd->cmd_id)
+- goto setup_timer;
+-
+- cmd_id = idr_alloc(&udev->commands, tcmu_cmd, 1, USHRT_MAX, GFP_NOWAIT);
+- if (cmd_id < 0) {
+- pr_err("tcmu: Could not allocate cmd id.\n");
+- return cmd_id;
+- }
+- tcmu_cmd->cmd_id = cmd_id;
+-
+- pr_debug("allocated cmd %u for dev %s tmo %lu\n", tcmu_cmd->cmd_id,
+- udev->name, tmo / MSEC_PER_SEC);
+-
+-setup_timer:
+ if (!tmo)
+- return 0;
++ return;
+
+ tcmu_cmd->deadline = round_jiffies_up(jiffies + msecs_to_jiffies(tmo));
+ if (!timer_pending(timer))
+ mod_timer(timer, tcmu_cmd->deadline);
+
+- return 0;
++ pr_debug("Timeout set up for cmd %p, dev = %s, tmo = %lu\n", tcmu_cmd,
++ tcmu_cmd->tcmu_dev->name, tmo / MSEC_PER_SEC);
+ }
+
+ static int add_to_qfull_queue(struct tcmu_cmd *tcmu_cmd)
+ {
+ struct tcmu_dev *udev = tcmu_cmd->tcmu_dev;
+ unsigned int tmo;
+- int ret;
+
+ /*
+ * For backwards compat if qfull_time_out is not set use
+@@ -931,13 +914,11 @@ static int add_to_qfull_queue(struct tcmu_cmd *tcmu_cmd)
+ else
+ tmo = TCMU_TIME_OUT;
+
+- ret = tcmu_setup_cmd_timer(tcmu_cmd, tmo, &udev->qfull_timer);
+- if (ret)
+- return ret;
++ tcmu_setup_cmd_timer(tcmu_cmd, tmo, &udev->qfull_timer);
+
+ list_add_tail(&tcmu_cmd->queue_entry, &udev->qfull_queue);
+- pr_debug("adding cmd %u on dev %s to ring space wait queue\n",
+- tcmu_cmd->cmd_id, udev->name);
++ pr_debug("adding cmd %p on dev %s to ring space wait queue\n",
++ tcmu_cmd, udev->name);
+ return 0;
+ }
+
+@@ -959,7 +940,7 @@ static int queue_cmd_ring(struct tcmu_cmd *tcmu_cmd, sense_reason_t *scsi_err)
+ struct tcmu_mailbox *mb;
+ struct tcmu_cmd_entry *entry;
+ struct iovec *iov;
+- int iov_cnt, ret;
++ int iov_cnt, cmd_id;
+ uint32_t cmd_head;
+ uint64_t cdb_off;
+ bool copy_to_data_area;
+@@ -1060,14 +1041,21 @@ static int queue_cmd_ring(struct tcmu_cmd *tcmu_cmd, sense_reason_t *scsi_err)
+ }
+ entry->req.iov_bidi_cnt = iov_cnt;
+
+- ret = tcmu_setup_cmd_timer(tcmu_cmd, udev->cmd_time_out,
+- &udev->cmd_timer);
+- if (ret) {
+- tcmu_cmd_free_data(tcmu_cmd, tcmu_cmd->dbi_cnt);
++ cmd_id = idr_alloc(&udev->commands, tcmu_cmd, 1, USHRT_MAX, GFP_NOWAIT);
++ if (cmd_id < 0) {
++ pr_err("tcmu: Could not allocate cmd id.\n");
+
++ tcmu_cmd_free_data(tcmu_cmd, tcmu_cmd->dbi_cnt);
+ *scsi_err = TCM_OUT_OF_RESOURCES;
+ return -1;
+ }
++ tcmu_cmd->cmd_id = cmd_id;
++
++ pr_debug("allocated cmd id %u for cmd %p dev %s\n", tcmu_cmd->cmd_id,
++ tcmu_cmd, udev->name);
++
++ tcmu_setup_cmd_timer(tcmu_cmd, udev->cmd_time_out, &udev->cmd_timer);
++
+ entry->hdr.cmd_id = tcmu_cmd->cmd_id;
+
+ /*
+@@ -1279,50 +1267,39 @@ static unsigned int tcmu_handle_completions(struct tcmu_dev *udev)
+ return handled;
+ }
+
+-static int tcmu_check_expired_cmd(int id, void *p, void *data)
++static void tcmu_check_expired_ring_cmd(struct tcmu_cmd *cmd)
+ {
+- struct tcmu_cmd *cmd = p;
+- struct tcmu_dev *udev = cmd->tcmu_dev;
+- u8 scsi_status;
+ struct se_cmd *se_cmd;
+- bool is_running;
+-
+- if (test_bit(TCMU_CMD_BIT_EXPIRED, &cmd->flags))
+- return 0;
+
+ if (!time_after(jiffies, cmd->deadline))
+- return 0;
++ return;
+
+- is_running = test_bit(TCMU_CMD_BIT_INFLIGHT, &cmd->flags);
++ set_bit(TCMU_CMD_BIT_EXPIRED, &cmd->flags);
++ list_del_init(&cmd->queue_entry);
+ se_cmd = cmd->se_cmd;
++ cmd->se_cmd = NULL;
+
+- if (is_running) {
+- /*
+- * If cmd_time_out is disabled but qfull is set deadline
+- * will only reflect the qfull timeout. Ignore it.
+- */
+- if (!udev->cmd_time_out)
+- return 0;
++ pr_debug("Timing out inflight cmd %u on dev %s.\n",
++ cmd->cmd_id, cmd->tcmu_dev->name);
+
+- set_bit(TCMU_CMD_BIT_EXPIRED, &cmd->flags);
+- /*
+- * target_complete_cmd will translate this to LUN COMM FAILURE
+- */
+- scsi_status = SAM_STAT_CHECK_CONDITION;
+- list_del_init(&cmd->queue_entry);
+- cmd->se_cmd = NULL;
+- } else {
+- list_del_init(&cmd->queue_entry);
+- idr_remove(&udev->commands, id);
+- tcmu_free_cmd(cmd);
+- scsi_status = SAM_STAT_TASK_SET_FULL;
+- }
++ target_complete_cmd(se_cmd, SAM_STAT_CHECK_CONDITION);
++}
+
+- pr_debug("Timing out cmd %u on dev %s that is %s.\n",
+- id, udev->name, is_running ? "inflight" : "queued");
++static void tcmu_check_expired_queue_cmd(struct tcmu_cmd *cmd)
++{
++ struct se_cmd *se_cmd;
+
+- target_complete_cmd(se_cmd, scsi_status);
+- return 0;
++ if (!time_after(jiffies, cmd->deadline))
++ return;
++
++ pr_debug("Timing out queued cmd %p on dev %s.\n",
++ cmd, cmd->tcmu_dev->name);
++
++ list_del_init(&cmd->queue_entry);
++ se_cmd = cmd->se_cmd;
++ tcmu_free_cmd(cmd);
++
++ target_complete_cmd(se_cmd, SAM_STAT_TASK_SET_FULL);
+ }
+
+ static void tcmu_device_timedout(struct tcmu_dev *udev)
+@@ -1407,16 +1384,15 @@ static struct se_device *tcmu_alloc_device(struct se_hba *hba, const char *name)
+ return &udev->se_dev;
+ }
+
+-static bool run_qfull_queue(struct tcmu_dev *udev, bool fail)
++static void run_qfull_queue(struct tcmu_dev *udev, bool fail)
+ {
+ struct tcmu_cmd *tcmu_cmd, *tmp_cmd;
+ LIST_HEAD(cmds);
+- bool drained = true;
+ sense_reason_t scsi_ret;
+ int ret;
+
+ if (list_empty(&udev->qfull_queue))
+- return true;
++ return;
+
+ pr_debug("running %s's cmdr queue forcefail %d\n", udev->name, fail);
+
+@@ -1425,11 +1401,10 @@ static bool run_qfull_queue(struct tcmu_dev *udev, bool fail)
+ list_for_each_entry_safe(tcmu_cmd, tmp_cmd, &cmds, queue_entry) {
+ list_del_init(&tcmu_cmd->queue_entry);
+
+- pr_debug("removing cmd %u on dev %s from queue\n",
+- tcmu_cmd->cmd_id, udev->name);
++ pr_debug("removing cmd %p on dev %s from queue\n",
++ tcmu_cmd, udev->name);
+
+ if (fail) {
+- idr_remove(&udev->commands, tcmu_cmd->cmd_id);
+ /*
+ * We were not able to even start the command, so
+ * fail with busy to allow a retry in case runner
+@@ -1444,10 +1419,8 @@ static bool run_qfull_queue(struct tcmu_dev *udev, bool fail)
+
+ ret = queue_cmd_ring(tcmu_cmd, &scsi_ret);
+ if (ret < 0) {
+- pr_debug("cmd %u on dev %s failed with %u\n",
+- tcmu_cmd->cmd_id, udev->name, scsi_ret);
+-
+- idr_remove(&udev->commands, tcmu_cmd->cmd_id);
++ pr_debug("cmd %p on dev %s failed with %u\n",
++ tcmu_cmd, udev->name, scsi_ret);
+ /*
+ * Ignore scsi_ret for now. target_complete_cmd
+ * drops it.
+@@ -1462,13 +1435,11 @@ static bool run_qfull_queue(struct tcmu_dev *udev, bool fail)
+ * the queue
+ */
+ list_splice_tail(&cmds, &udev->qfull_queue);
+- drained = false;
+ break;
+ }
+ }
+
+ tcmu_set_next_deadline(&udev->qfull_queue, &udev->qfull_timer);
+- return drained;
+ }
+
+ static int tcmu_irqcontrol(struct uio_info *info, s32 irq_on)
+@@ -1652,6 +1623,8 @@ static void tcmu_dev_kref_release(struct kref *kref)
+ if (tcmu_check_and_free_pending_cmd(cmd) != 0)
+ all_expired = false;
+ }
++ if (!list_empty(&udev->qfull_queue))
++ all_expired = false;
+ idr_destroy(&udev->commands);
+ WARN_ON(!all_expired);
+
+@@ -2037,9 +2010,6 @@ static void tcmu_reset_ring(struct tcmu_dev *udev, u8 err_level)
+ mutex_lock(&udev->cmdr_lock);
+
+ idr_for_each_entry(&udev->commands, cmd, i) {
+- if (!test_bit(TCMU_CMD_BIT_INFLIGHT, &cmd->flags))
+- continue;
+-
+ pr_debug("removing cmd %u on dev %s from ring (is expired %d)\n",
+ cmd->cmd_id, udev->name,
+ test_bit(TCMU_CMD_BIT_EXPIRED, &cmd->flags));
+@@ -2077,6 +2047,8 @@ static void tcmu_reset_ring(struct tcmu_dev *udev, u8 err_level)
+
+ del_timer(&udev->cmd_timer);
+
++ run_qfull_queue(udev, false);
++
+ mutex_unlock(&udev->cmdr_lock);
+ }
+
+@@ -2698,6 +2670,7 @@ static void find_free_blocks(void)
+ static void check_timedout_devices(void)
+ {
+ struct tcmu_dev *udev, *tmp_dev;
++ struct tcmu_cmd *cmd, *tmp_cmd;
+ LIST_HEAD(devs);
+
+ spin_lock_bh(&timed_out_udevs_lock);
+@@ -2708,9 +2681,24 @@ static void check_timedout_devices(void)
+ spin_unlock_bh(&timed_out_udevs_lock);
+
+ mutex_lock(&udev->cmdr_lock);
+- idr_for_each(&udev->commands, tcmu_check_expired_cmd, NULL);
+
+- tcmu_set_next_deadline(&udev->inflight_queue, &udev->cmd_timer);
++ /*
++ * If cmd_time_out is disabled but qfull is set deadline
++ * will only reflect the qfull timeout. Ignore it.
++ */
++ if (udev->cmd_time_out) {
++ list_for_each_entry_safe(cmd, tmp_cmd,
++ &udev->inflight_queue,
++ queue_entry) {
++ tcmu_check_expired_ring_cmd(cmd);
++ }
++ tcmu_set_next_deadline(&udev->inflight_queue,
++ &udev->cmd_timer);
++ }
++ list_for_each_entry_safe(cmd, tmp_cmd, &udev->qfull_queue,
++ queue_entry) {
++ tcmu_check_expired_queue_cmd(cmd);
++ }
+ tcmu_set_next_deadline(&udev->qfull_queue, &udev->qfull_timer);
+
+ mutex_unlock(&udev->cmdr_lock);
+diff --git a/drivers/thermal/ti-soc-thermal/ti-thermal-common.c b/drivers/thermal/ti-soc-thermal/ti-thermal-common.c
+index d3e959d01606..85776db4bf34 100644
+--- a/drivers/thermal/ti-soc-thermal/ti-thermal-common.c
++++ b/drivers/thermal/ti-soc-thermal/ti-thermal-common.c
+@@ -169,7 +169,7 @@ int ti_thermal_expose_sensor(struct ti_bandgap *bgp, int id,
+
+ data = ti_bandgap_get_sensor_data(bgp, id);
+
+- if (!data || IS_ERR(data))
++ if (!IS_ERR_OR_NULL(data))
+ data = ti_thermal_build_data(bgp, id);
+
+ if (!data)
+@@ -196,7 +196,7 @@ int ti_thermal_remove_sensor(struct ti_bandgap *bgp, int id)
+
+ data = ti_bandgap_get_sensor_data(bgp, id);
+
+- if (data && data->ti_thermal) {
++ if (!IS_ERR_OR_NULL(data) && data->ti_thermal) {
+ if (data->our_zone)
+ thermal_zone_device_unregister(data->ti_thermal);
+ }
+@@ -262,7 +262,7 @@ int ti_thermal_unregister_cpu_cooling(struct ti_bandgap *bgp, int id)
+
+ data = ti_bandgap_get_sensor_data(bgp, id);
+
+- if (data) {
++ if (!IS_ERR_OR_NULL(data)) {
+ cpufreq_cooling_unregister(data->cool_dev);
+ if (data->policy)
+ cpufreq_cpu_put(data->policy);
+diff --git a/drivers/tty/hvc/hvc_console.c b/drivers/tty/hvc/hvc_console.c
+index cdcc64ea2554..f8e43a6faea9 100644
+--- a/drivers/tty/hvc/hvc_console.c
++++ b/drivers/tty/hvc/hvc_console.c
+@@ -75,6 +75,8 @@ static LIST_HEAD(hvc_structs);
+ */
+ static DEFINE_MUTEX(hvc_structs_mutex);
+
++/* Mutex to serialize hvc_open */
++static DEFINE_MUTEX(hvc_open_mutex);
+ /*
+ * This value is used to assign a tty->index value to a hvc_struct based
+ * upon order of exposure via hvc_probe(), when we can not match it to
+@@ -346,16 +348,24 @@ static int hvc_install(struct tty_driver *driver, struct tty_struct *tty)
+ */
+ static int hvc_open(struct tty_struct *tty, struct file * filp)
+ {
+- struct hvc_struct *hp = tty->driver_data;
++ struct hvc_struct *hp;
+ unsigned long flags;
+ int rc = 0;
+
++ mutex_lock(&hvc_open_mutex);
++
++ hp = tty->driver_data;
++ if (!hp) {
++ rc = -EIO;
++ goto out;
++ }
++
+ spin_lock_irqsave(&hp->port.lock, flags);
+ /* Check and then increment for fast path open. */
+ if (hp->port.count++ > 0) {
+ spin_unlock_irqrestore(&hp->port.lock, flags);
+ hvc_kick();
+- return 0;
++ goto out;
+ } /* else count == 0 */
+ spin_unlock_irqrestore(&hp->port.lock, flags);
+
+@@ -383,6 +393,8 @@ static int hvc_open(struct tty_struct *tty, struct file * filp)
+ /* Force wakeup of the polling thread */
+ hvc_kick();
+
++out:
++ mutex_unlock(&hvc_open_mutex);
+ return rc;
+ }
+
+diff --git a/drivers/tty/n_gsm.c b/drivers/tty/n_gsm.c
+index d77ed82a4840..f189579db7c4 100644
+--- a/drivers/tty/n_gsm.c
++++ b/drivers/tty/n_gsm.c
+@@ -673,11 +673,10 @@ static struct gsm_msg *gsm_data_alloc(struct gsm_mux *gsm, u8 addr, int len,
+ * FIXME: lock against link layer control transmissions
+ */
+
+-static void gsm_data_kick(struct gsm_mux *gsm)
++static void gsm_data_kick(struct gsm_mux *gsm, struct gsm_dlci *dlci)
+ {
+ struct gsm_msg *msg, *nmsg;
+ int len;
+- int skip_sof = 0;
+
+ list_for_each_entry_safe(msg, nmsg, &gsm->tx_list, list) {
+ if (gsm->constipated && msg->addr)
+@@ -699,18 +698,23 @@ static void gsm_data_kick(struct gsm_mux *gsm)
+ print_hex_dump_bytes("gsm_data_kick: ",
+ DUMP_PREFIX_OFFSET,
+ gsm->txframe, len);
+-
+- if (gsm->output(gsm, gsm->txframe + skip_sof,
+- len - skip_sof) < 0)
++ if (gsm->output(gsm, gsm->txframe, len) < 0)
+ break;
+ /* FIXME: Can eliminate one SOF in many more cases */
+ gsm->tx_bytes -= msg->len;
+- /* For a burst of frames skip the extra SOF within the
+- burst */
+- skip_sof = 1;
+
+ list_del(&msg->list);
+ kfree(msg);
++
++ if (dlci) {
++ tty_port_tty_wakeup(&dlci->port);
++ } else {
++ int i = 0;
++
++ for (i = 0; i < NUM_DLCI; i++)
++ if (gsm->dlci[i])
++ tty_port_tty_wakeup(&gsm->dlci[i]->port);
++ }
+ }
+ }
+
+@@ -762,7 +766,7 @@ static void __gsm_data_queue(struct gsm_dlci *dlci, struct gsm_msg *msg)
+ /* Add to the actual output queue */
+ list_add_tail(&msg->list, &gsm->tx_list);
+ gsm->tx_bytes += msg->len;
+- gsm_data_kick(gsm);
++ gsm_data_kick(gsm, dlci);
+ }
+
+ /**
+@@ -1223,7 +1227,7 @@ static void gsm_control_message(struct gsm_mux *gsm, unsigned int command,
+ gsm_control_reply(gsm, CMD_FCON, NULL, 0);
+ /* Kick the link in case it is idling */
+ spin_lock_irqsave(&gsm->tx_lock, flags);
+- gsm_data_kick(gsm);
++ gsm_data_kick(gsm, NULL);
+ spin_unlock_irqrestore(&gsm->tx_lock, flags);
+ break;
+ case CMD_FCOFF:
+@@ -2545,7 +2549,7 @@ static void gsmld_write_wakeup(struct tty_struct *tty)
+ /* Queue poll */
+ clear_bit(TTY_DO_WRITE_WAKEUP, &tty->flags);
+ spin_lock_irqsave(&gsm->tx_lock, flags);
+- gsm_data_kick(gsm);
++ gsm_data_kick(gsm, NULL);
+ if (gsm->tx_bytes < TX_THRESH_LO) {
+ gsm_dlci_data_sweep(gsm);
+ }
+diff --git a/drivers/tty/serial/8250/8250_port.c b/drivers/tty/serial/8250/8250_port.c
+index f77bf820b7a3..4d83c85a7389 100644
+--- a/drivers/tty/serial/8250/8250_port.c
++++ b/drivers/tty/serial/8250/8250_port.c
+@@ -2615,6 +2615,8 @@ static unsigned int serial8250_get_baud_rate(struct uart_port *port,
+ struct ktermios *termios,
+ struct ktermios *old)
+ {
++ unsigned int tolerance = port->uartclk / 100;
++
+ /*
+ * Ask the core to calculate the divisor for us.
+ * Allow 1% tolerance at the upper limit so uart clks marginally
+@@ -2623,7 +2625,7 @@ static unsigned int serial8250_get_baud_rate(struct uart_port *port,
+ */
+ return uart_get_baud_rate(port, termios, old,
+ port->uartclk / 16 / UART_DIV_MAX,
+- port->uartclk);
++ (port->uartclk + tolerance) / 16);
+ }
+
+ void
+diff --git a/drivers/usb/cdns3/cdns3-ti.c b/drivers/usb/cdns3/cdns3-ti.c
+index 5685ba11480b..e701ab56b0a7 100644
+--- a/drivers/usb/cdns3/cdns3-ti.c
++++ b/drivers/usb/cdns3/cdns3-ti.c
+@@ -138,7 +138,7 @@ static int cdns_ti_probe(struct platform_device *pdev)
+ error = pm_runtime_get_sync(dev);
+ if (error < 0) {
+ dev_err(dev, "pm_runtime_get_sync failed: %d\n", error);
+- goto err_get;
++ goto err;
+ }
+
+ /* assert RESET */
+@@ -185,7 +185,6 @@ static int cdns_ti_probe(struct platform_device *pdev)
+
+ err:
+ pm_runtime_put_sync(data->dev);
+-err_get:
+ pm_runtime_disable(data->dev);
+
+ return error;
+diff --git a/drivers/usb/class/usblp.c b/drivers/usb/class/usblp.c
+index 0d8e3f3804a3..084c48c5848f 100644
+--- a/drivers/usb/class/usblp.c
++++ b/drivers/usb/class/usblp.c
+@@ -468,7 +468,8 @@ static int usblp_release(struct inode *inode, struct file *file)
+ usb_autopm_put_interface(usblp->intf);
+
+ if (!usblp->present) /* finish cleanup from disconnect */
+- usblp_cleanup(usblp);
++ usblp_cleanup(usblp); /* any URBs must be dead */
++
+ mutex_unlock(&usblp_mutex);
+ return 0;
+ }
+@@ -1375,9 +1376,11 @@ static void usblp_disconnect(struct usb_interface *intf)
+
+ usblp_unlink_urbs(usblp);
+ mutex_unlock(&usblp->mut);
++ usb_poison_anchored_urbs(&usblp->urbs);
+
+ if (!usblp->used)
+ usblp_cleanup(usblp);
++
+ mutex_unlock(&usblp_mutex);
+ }
+
+diff --git a/drivers/usb/dwc2/core_intr.c b/drivers/usb/dwc2/core_intr.c
+index 876ff31261d5..55f1d14fc414 100644
+--- a/drivers/usb/dwc2/core_intr.c
++++ b/drivers/usb/dwc2/core_intr.c
+@@ -416,10 +416,13 @@ static void dwc2_handle_wakeup_detected_intr(struct dwc2_hsotg *hsotg)
+ if (ret && (ret != -ENOTSUPP))
+ dev_err(hsotg->dev, "exit power_down failed\n");
+
++ /* Change to L0 state */
++ hsotg->lx_state = DWC2_L0;
+ call_gadget(hsotg, resume);
++ } else {
++ /* Change to L0 state */
++ hsotg->lx_state = DWC2_L0;
+ }
+- /* Change to L0 state */
+- hsotg->lx_state = DWC2_L0;
+ } else {
+ if (hsotg->params.power_down)
+ return;
+diff --git a/drivers/usb/dwc3/dwc3-meson-g12a.c b/drivers/usb/dwc3/dwc3-meson-g12a.c
+index b81d085bc534..eabb3bb6fcaa 100644
+--- a/drivers/usb/dwc3/dwc3-meson-g12a.c
++++ b/drivers/usb/dwc3/dwc3-meson-g12a.c
+@@ -505,7 +505,7 @@ static int dwc3_meson_g12a_probe(struct platform_device *pdev)
+ if (IS_ERR(priv->reset)) {
+ ret = PTR_ERR(priv->reset);
+ dev_err(dev, "failed to get device reset, err=%d\n", ret);
+- return ret;
++ goto err_disable_clks;
+ }
+
+ ret = reset_control_reset(priv->reset);
+@@ -525,7 +525,9 @@ static int dwc3_meson_g12a_probe(struct platform_device *pdev)
+ /* Get dr_mode */
+ priv->otg_mode = usb_get_dr_mode(dev);
+
+- dwc3_meson_g12a_usb_init(priv);
++ ret = dwc3_meson_g12a_usb_init(priv);
++ if (ret)
++ goto err_disable_clks;
+
+ /* Init PHYs */
+ for (i = 0 ; i < PHY_COUNT ; ++i) {
+diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c
+index 585cb3deea7a..de3b92680935 100644
+--- a/drivers/usb/dwc3/gadget.c
++++ b/drivers/usb/dwc3/gadget.c
+@@ -1220,6 +1220,8 @@ static void dwc3_prepare_trbs(struct dwc3_ep *dep)
+ }
+ }
+
++static void dwc3_gadget_ep_cleanup_cancelled_requests(struct dwc3_ep *dep);
++
+ static int __dwc3_gadget_kick_transfer(struct dwc3_ep *dep)
+ {
+ struct dwc3_gadget_ep_cmd_params params;
+@@ -1259,14 +1261,20 @@ static int __dwc3_gadget_kick_transfer(struct dwc3_ep *dep)
+
+ ret = dwc3_send_gadget_ep_cmd(dep, cmd, ¶ms);
+ if (ret < 0) {
+- /*
+- * FIXME we need to iterate over the list of requests
+- * here and stop, unmap, free and del each of the linked
+- * requests instead of what we do now.
+- */
+- if (req->trb)
+- memset(req->trb, 0, sizeof(struct dwc3_trb));
+- dwc3_gadget_del_and_unmap_request(dep, req, ret);
++ struct dwc3_request *tmp;
++
++ if (ret == -EAGAIN)
++ return ret;
++
++ dwc3_stop_active_transfer(dep, true, true);
++
++ list_for_each_entry_safe(req, tmp, &dep->started_list, list)
++ dwc3_gadget_move_cancelled_request(req);
++
++ /* If ep isn't started, then there's no end transfer pending */
++ if (!(dep->flags & DWC3_EP_END_TRANSFER_PENDING))
++ dwc3_gadget_ep_cleanup_cancelled_requests(dep);
++
+ return ret;
+ }
+
+@@ -1508,6 +1516,10 @@ static void dwc3_gadget_ep_skip_trbs(struct dwc3_ep *dep, struct dwc3_request *r
+ {
+ int i;
+
++ /* If req->trb is not set, then the request has not started */
++ if (!req->trb)
++ return;
++
+ /*
+ * If request was already started, this means we had to
+ * stop the transfer. With that we also need to ignore
+@@ -1598,6 +1610,8 @@ int __dwc3_gadget_ep_set_halt(struct dwc3_ep *dep, int value, int protocol)
+ {
+ struct dwc3_gadget_ep_cmd_params params;
+ struct dwc3 *dwc = dep->dwc;
++ struct dwc3_request *req;
++ struct dwc3_request *tmp;
+ int ret;
+
+ if (usb_endpoint_xfer_isoc(dep->endpoint.desc)) {
+@@ -1634,13 +1648,37 @@ int __dwc3_gadget_ep_set_halt(struct dwc3_ep *dep, int value, int protocol)
+ else
+ dep->flags |= DWC3_EP_STALL;
+ } else {
++ /*
++ * Don't issue CLEAR_STALL command to control endpoints. The
++ * controller automatically clears the STALL when it receives
++ * the SETUP token.
++ */
++ if (dep->number <= 1) {
++ dep->flags &= ~(DWC3_EP_STALL | DWC3_EP_WEDGE);
++ return 0;
++ }
+
+ ret = dwc3_send_clear_stall_ep_cmd(dep);
+- if (ret)
++ if (ret) {
+ dev_err(dwc->dev, "failed to clear STALL on %s\n",
+ dep->name);
+- else
+- dep->flags &= ~(DWC3_EP_STALL | DWC3_EP_WEDGE);
++ return ret;
++ }
++
++ dep->flags &= ~(DWC3_EP_STALL | DWC3_EP_WEDGE);
++
++ dwc3_stop_active_transfer(dep, true, true);
++
++ list_for_each_entry_safe(req, tmp, &dep->started_list, list)
++ dwc3_gadget_move_cancelled_request(req);
++
++ list_for_each_entry_safe(req, tmp, &dep->pending_list, list)
++ dwc3_gadget_move_cancelled_request(req);
++
++ if (!(dep->flags & DWC3_EP_END_TRANSFER_PENDING)) {
++ dep->flags &= ~DWC3_EP_DELAY_START;
++ dwc3_gadget_ep_cleanup_cancelled_requests(dep);
++ }
+ }
+
+ return ret;
+diff --git a/drivers/usb/gadget/composite.c b/drivers/usb/gadget/composite.c
+index cb4950cf1cdc..5c1eb96a5c57 100644
+--- a/drivers/usb/gadget/composite.c
++++ b/drivers/usb/gadget/composite.c
+@@ -96,40 +96,43 @@ function_descriptors(struct usb_function *f,
+ }
+
+ /**
+- * next_ep_desc() - advance to the next EP descriptor
++ * next_desc() - advance to the next desc_type descriptor
+ * @t: currect pointer within descriptor array
++ * @desc_type: descriptor type
+ *
+- * Return: next EP descriptor or NULL
++ * Return: next desc_type descriptor or NULL
+ *
+- * Iterate over @t until either EP descriptor found or
++ * Iterate over @t until either desc_type descriptor found or
+ * NULL (that indicates end of list) encountered
+ */
+ static struct usb_descriptor_header**
+-next_ep_desc(struct usb_descriptor_header **t)
++next_desc(struct usb_descriptor_header **t, u8 desc_type)
+ {
+ for (; *t; t++) {
+- if ((*t)->bDescriptorType == USB_DT_ENDPOINT)
++ if ((*t)->bDescriptorType == desc_type)
+ return t;
+ }
+ return NULL;
+ }
+
+ /*
+- * for_each_ep_desc()- iterate over endpoint descriptors in the
+- * descriptors list
+- * @start: pointer within descriptor array.
+- * @ep_desc: endpoint descriptor to use as the loop cursor
++ * for_each_desc() - iterate over desc_type descriptors in the
++ * descriptors list
++ * @start: pointer within descriptor array.
++ * @iter_desc: desc_type descriptor to use as the loop cursor
++ * @desc_type: wanted descriptr type
+ */
+-#define for_each_ep_desc(start, ep_desc) \
+- for (ep_desc = next_ep_desc(start); \
+- ep_desc; ep_desc = next_ep_desc(ep_desc+1))
++#define for_each_desc(start, iter_desc, desc_type) \
++ for (iter_desc = next_desc(start, desc_type); \
++ iter_desc; iter_desc = next_desc(iter_desc + 1, desc_type))
+
+ /**
+- * config_ep_by_speed() - configures the given endpoint
++ * config_ep_by_speed_and_alt() - configures the given endpoint
+ * according to gadget speed.
+ * @g: pointer to the gadget
+ * @f: usb function
+ * @_ep: the endpoint to configure
++ * @alt: alternate setting number
+ *
+ * Return: error code, 0 on success
+ *
+@@ -142,11 +145,13 @@ next_ep_desc(struct usb_descriptor_header **t)
+ * Note: the supplied function should hold all the descriptors
+ * for supported speeds
+ */
+-int config_ep_by_speed(struct usb_gadget *g,
+- struct usb_function *f,
+- struct usb_ep *_ep)
++int config_ep_by_speed_and_alt(struct usb_gadget *g,
++ struct usb_function *f,
++ struct usb_ep *_ep,
++ u8 alt)
+ {
+ struct usb_endpoint_descriptor *chosen_desc = NULL;
++ struct usb_interface_descriptor *int_desc = NULL;
+ struct usb_descriptor_header **speed_desc = NULL;
+
+ struct usb_ss_ep_comp_descriptor *comp_desc = NULL;
+@@ -182,8 +187,21 @@ int config_ep_by_speed(struct usb_gadget *g,
+ default:
+ speed_desc = f->fs_descriptors;
+ }
++
++ /* find correct alternate setting descriptor */
++ for_each_desc(speed_desc, d_spd, USB_DT_INTERFACE) {
++ int_desc = (struct usb_interface_descriptor *)*d_spd;
++
++ if (int_desc->bAlternateSetting == alt) {
++ speed_desc = d_spd;
++ goto intf_found;
++ }
++ }
++ return -EIO;
++
++intf_found:
+ /* find descriptors */
+- for_each_ep_desc(speed_desc, d_spd) {
++ for_each_desc(speed_desc, d_spd, USB_DT_ENDPOINT) {
+ chosen_desc = (struct usb_endpoint_descriptor *)*d_spd;
+ if (chosen_desc->bEndpointAddress == _ep->address)
+ goto ep_found;
+@@ -237,6 +255,32 @@ ep_found:
+ }
+ return 0;
+ }
++EXPORT_SYMBOL_GPL(config_ep_by_speed_and_alt);
++
++/**
++ * config_ep_by_speed() - configures the given endpoint
++ * according to gadget speed.
++ * @g: pointer to the gadget
++ * @f: usb function
++ * @_ep: the endpoint to configure
++ *
++ * Return: error code, 0 on success
++ *
++ * This function chooses the right descriptors for a given
++ * endpoint according to gadget speed and saves it in the
++ * endpoint desc field. If the endpoint already has a descriptor
++ * assigned to it - overwrites it with currently corresponding
++ * descriptor. The endpoint maxpacket field is updated according
++ * to the chosen descriptor.
++ * Note: the supplied function should hold all the descriptors
++ * for supported speeds
++ */
++int config_ep_by_speed(struct usb_gadget *g,
++ struct usb_function *f,
++ struct usb_ep *_ep)
++{
++ return config_ep_by_speed_and_alt(g, f, _ep, 0);
++}
+ EXPORT_SYMBOL_GPL(config_ep_by_speed);
+
+ /**
+diff --git a/drivers/usb/gadget/udc/core.c b/drivers/usb/gadget/udc/core.c
+index 9b11046480fe..2e28dde8376f 100644
+--- a/drivers/usb/gadget/udc/core.c
++++ b/drivers/usb/gadget/udc/core.c
+@@ -1297,6 +1297,8 @@ static void usb_gadget_remove_driver(struct usb_udc *udc)
+ kobject_uevent(&udc->dev.kobj, KOBJ_CHANGE);
+
+ usb_gadget_disconnect(udc->gadget);
++ if (udc->gadget->irq)
++ synchronize_irq(udc->gadget->irq);
+ udc->driver->unbind(udc->gadget);
+ usb_gadget_udc_stop(udc);
+
+diff --git a/drivers/usb/gadget/udc/lpc32xx_udc.c b/drivers/usb/gadget/udc/lpc32xx_udc.c
+index cb997b82c008..465d0b7c6522 100644
+--- a/drivers/usb/gadget/udc/lpc32xx_udc.c
++++ b/drivers/usb/gadget/udc/lpc32xx_udc.c
+@@ -1614,17 +1614,17 @@ static int lpc32xx_ep_enable(struct usb_ep *_ep,
+ const struct usb_endpoint_descriptor *desc)
+ {
+ struct lpc32xx_ep *ep = container_of(_ep, struct lpc32xx_ep, ep);
+- struct lpc32xx_udc *udc = ep->udc;
++ struct lpc32xx_udc *udc;
+ u16 maxpacket;
+ u32 tmp;
+ unsigned long flags;
+
+ /* Verify EP data */
+ if ((!_ep) || (!ep) || (!desc) ||
+- (desc->bDescriptorType != USB_DT_ENDPOINT)) {
+- dev_dbg(udc->dev, "bad ep or descriptor\n");
++ (desc->bDescriptorType != USB_DT_ENDPOINT))
+ return -EINVAL;
+- }
++
++ udc = ep->udc;
+ maxpacket = usb_endpoint_maxp(desc);
+ if ((maxpacket == 0) || (maxpacket > ep->maxpacket)) {
+ dev_dbg(udc->dev, "bad ep descriptor's packet size\n");
+@@ -1872,7 +1872,7 @@ static int lpc32xx_ep_dequeue(struct usb_ep *_ep, struct usb_request *_req)
+ static int lpc32xx_ep_set_halt(struct usb_ep *_ep, int value)
+ {
+ struct lpc32xx_ep *ep = container_of(_ep, struct lpc32xx_ep, ep);
+- struct lpc32xx_udc *udc = ep->udc;
++ struct lpc32xx_udc *udc;
+ unsigned long flags;
+
+ if ((!ep) || (ep->hwep_num <= 1))
+@@ -1882,6 +1882,7 @@ static int lpc32xx_ep_set_halt(struct usb_ep *_ep, int value)
+ if (ep->is_in)
+ return -EAGAIN;
+
++ udc = ep->udc;
+ spin_lock_irqsave(&udc->lock, flags);
+
+ if (value == 1) {
+diff --git a/drivers/usb/gadget/udc/m66592-udc.c b/drivers/usb/gadget/udc/m66592-udc.c
+index 75d16a8902e6..931e6362a13d 100644
+--- a/drivers/usb/gadget/udc/m66592-udc.c
++++ b/drivers/usb/gadget/udc/m66592-udc.c
+@@ -1667,7 +1667,7 @@ static int m66592_probe(struct platform_device *pdev)
+
+ err_add_udc:
+ m66592_free_request(&m66592->ep[0].ep, m66592->ep0_req);
+-
++ m66592->ep0_req = NULL;
+ clean_up3:
+ if (m66592->pdata->on_chip) {
+ clk_disable(m66592->clk);
+diff --git a/drivers/usb/gadget/udc/s3c2410_udc.c b/drivers/usb/gadget/udc/s3c2410_udc.c
+index 0507a2ca0f55..80002d97b59d 100644
+--- a/drivers/usb/gadget/udc/s3c2410_udc.c
++++ b/drivers/usb/gadget/udc/s3c2410_udc.c
+@@ -251,10 +251,6 @@ static void s3c2410_udc_done(struct s3c2410_ep *ep,
+ static void s3c2410_udc_nuke(struct s3c2410_udc *udc,
+ struct s3c2410_ep *ep, int status)
+ {
+- /* Sanity check */
+- if (&ep->queue == NULL)
+- return;
+-
+ while (!list_empty(&ep->queue)) {
+ struct s3c2410_request *req;
+ req = list_entry(ep->queue.next, struct s3c2410_request,
+diff --git a/drivers/usb/host/ehci-mxc.c b/drivers/usb/host/ehci-mxc.c
+index c9f91e6c72b6..7f65c86047dd 100644
+--- a/drivers/usb/host/ehci-mxc.c
++++ b/drivers/usb/host/ehci-mxc.c
+@@ -50,6 +50,8 @@ static int ehci_mxc_drv_probe(struct platform_device *pdev)
+ }
+
+ irq = platform_get_irq(pdev, 0);
++ if (irq < 0)
++ return irq;
+
+ hcd = usb_create_hcd(&ehci_mxc_hc_driver, dev, dev_name(dev));
+ if (!hcd)
+diff --git a/drivers/usb/host/ehci-platform.c b/drivers/usb/host/ehci-platform.c
+index e4fc3f66d43b..e9a49007cce4 100644
+--- a/drivers/usb/host/ehci-platform.c
++++ b/drivers/usb/host/ehci-platform.c
+@@ -455,6 +455,10 @@ static int ehci_platform_resume(struct device *dev)
+
+ ehci_resume(hcd, priv->reset_on_resume);
+
++ pm_runtime_disable(dev);
++ pm_runtime_set_active(dev);
++ pm_runtime_enable(dev);
++
+ if (priv->quirk_poll)
+ quirk_poll_init(priv);
+
+diff --git a/drivers/usb/host/ohci-platform.c b/drivers/usb/host/ohci-platform.c
+index 7addfc2cbadc..4a8456f12a73 100644
+--- a/drivers/usb/host/ohci-platform.c
++++ b/drivers/usb/host/ohci-platform.c
+@@ -299,6 +299,11 @@ static int ohci_platform_resume(struct device *dev)
+ }
+
+ ohci_resume(hcd, false);
++
++ pm_runtime_disable(dev);
++ pm_runtime_set_active(dev);
++ pm_runtime_enable(dev);
++
+ return 0;
+ }
+ #endif /* CONFIG_PM_SLEEP */
+diff --git a/drivers/usb/host/ohci-sm501.c b/drivers/usb/host/ohci-sm501.c
+index c158cda9e4b9..cff965240327 100644
+--- a/drivers/usb/host/ohci-sm501.c
++++ b/drivers/usb/host/ohci-sm501.c
+@@ -157,9 +157,10 @@ static int ohci_hcd_sm501_drv_probe(struct platform_device *pdev)
+ * the call to usb_hcd_setup_local_mem() below does just that.
+ */
+
+- if (usb_hcd_setup_local_mem(hcd, mem->start,
+- mem->start - mem->parent->start,
+- resource_size(mem)) < 0)
++ retval = usb_hcd_setup_local_mem(hcd, mem->start,
++ mem->start - mem->parent->start,
++ resource_size(mem));
++ if (retval < 0)
+ goto err5;
+ retval = usb_add_hcd(hcd, irq, IRQF_SHARED);
+ if (retval)
+diff --git a/drivers/usb/host/xhci-plat.c b/drivers/usb/host/xhci-plat.c
+index ea460b9682d5..ca82e2c61ddc 100644
+--- a/drivers/usb/host/xhci-plat.c
++++ b/drivers/usb/host/xhci-plat.c
+@@ -409,7 +409,15 @@ static int __maybe_unused xhci_plat_resume(struct device *dev)
+ if (ret)
+ return ret;
+
+- return xhci_resume(xhci, 0);
++ ret = xhci_resume(xhci, 0);
++ if (ret)
++ return ret;
++
++ pm_runtime_disable(dev);
++ pm_runtime_set_active(dev);
++ pm_runtime_enable(dev);
++
++ return 0;
+ }
+
+ static int __maybe_unused xhci_plat_runtime_suspend(struct device *dev)
+diff --git a/drivers/usb/roles/class.c b/drivers/usb/roles/class.c
+index 5b17709821df..27d92af29635 100644
+--- a/drivers/usb/roles/class.c
++++ b/drivers/usb/roles/class.c
+@@ -49,8 +49,10 @@ int usb_role_switch_set_role(struct usb_role_switch *sw, enum usb_role role)
+ mutex_lock(&sw->lock);
+
+ ret = sw->set(sw, role);
+- if (!ret)
++ if (!ret) {
+ sw->role = role;
++ kobject_uevent(&sw->dev.kobj, KOBJ_CHANGE);
++ }
+
+ mutex_unlock(&sw->lock);
+
+diff --git a/drivers/vfio/mdev/mdev_sysfs.c b/drivers/vfio/mdev/mdev_sysfs.c
+index 8ad14e5c02bf..917fd84c1c6f 100644
+--- a/drivers/vfio/mdev/mdev_sysfs.c
++++ b/drivers/vfio/mdev/mdev_sysfs.c
+@@ -110,7 +110,7 @@ static struct mdev_type *add_mdev_supported_type(struct mdev_parent *parent,
+ "%s-%s", dev_driver_string(parent->dev),
+ group->name);
+ if (ret) {
+- kfree(type);
++ kobject_put(&type->kobj);
+ return ERR_PTR(ret);
+ }
+
+diff --git a/drivers/vfio/pci/vfio_pci_config.c b/drivers/vfio/pci/vfio_pci_config.c
+index 90c0b80f8acf..814bcbe0dd4e 100644
+--- a/drivers/vfio/pci/vfio_pci_config.c
++++ b/drivers/vfio/pci/vfio_pci_config.c
+@@ -1462,7 +1462,12 @@ static int vfio_cap_init(struct vfio_pci_device *vdev)
+ if (ret)
+ return ret;
+
+- if (cap <= PCI_CAP_ID_MAX) {
++ /*
++ * ID 0 is a NULL capability, conflicting with our fake
++ * PCI_CAP_ID_BASIC. As it has no content, consider it
++ * hidden for now.
++ */
++ if (cap && cap <= PCI_CAP_ID_MAX) {
+ len = pci_cap_length[cap];
+ if (len == 0xFF) { /* Variable length */
+ len = vfio_cap_len(vdev, cap, pos);
+@@ -1728,8 +1733,11 @@ void vfio_config_free(struct vfio_pci_device *vdev)
+ vdev->vconfig = NULL;
+ kfree(vdev->pci_config_map);
+ vdev->pci_config_map = NULL;
+- kfree(vdev->msi_perm);
+- vdev->msi_perm = NULL;
++ if (vdev->msi_perm) {
++ free_perm_bits(vdev->msi_perm);
++ kfree(vdev->msi_perm);
++ vdev->msi_perm = NULL;
++ }
+ }
+
+ /*
+diff --git a/drivers/vhost/scsi.c b/drivers/vhost/scsi.c
+index c39952243fd3..8b104f76f324 100644
+--- a/drivers/vhost/scsi.c
++++ b/drivers/vhost/scsi.c
+@@ -2280,6 +2280,7 @@ static struct configfs_attribute *vhost_scsi_wwn_attrs[] = {
+ static const struct target_core_fabric_ops vhost_scsi_ops = {
+ .module = THIS_MODULE,
+ .fabric_name = "vhost",
++ .max_data_sg_nents = VHOST_SCSI_PREALLOC_SGLS,
+ .tpg_get_wwn = vhost_scsi_get_fabric_wwn,
+ .tpg_get_tag = vhost_scsi_get_tpgt,
+ .tpg_check_demo_mode = vhost_scsi_check_true,
+diff --git a/drivers/video/backlight/lp855x_bl.c b/drivers/video/backlight/lp855x_bl.c
+index f68920131a4a..e94932c69f54 100644
+--- a/drivers/video/backlight/lp855x_bl.c
++++ b/drivers/video/backlight/lp855x_bl.c
+@@ -456,7 +456,7 @@ static int lp855x_probe(struct i2c_client *cl, const struct i2c_device_id *id)
+ ret = regulator_enable(lp->enable);
+ if (ret < 0) {
+ dev_err(lp->dev, "failed to enable vddio: %d\n", ret);
+- return ret;
++ goto disable_supply;
+ }
+
+ /*
+@@ -471,24 +471,34 @@ static int lp855x_probe(struct i2c_client *cl, const struct i2c_device_id *id)
+ ret = lp855x_configure(lp);
+ if (ret) {
+ dev_err(lp->dev, "device config err: %d", ret);
+- return ret;
++ goto disable_vddio;
+ }
+
+ ret = lp855x_backlight_register(lp);
+ if (ret) {
+ dev_err(lp->dev,
+ "failed to register backlight. err: %d\n", ret);
+- return ret;
++ goto disable_vddio;
+ }
+
+ ret = sysfs_create_group(&lp->dev->kobj, &lp855x_attr_group);
+ if (ret) {
+ dev_err(lp->dev, "failed to register sysfs. err: %d\n", ret);
+- return ret;
++ goto disable_vddio;
+ }
+
+ backlight_update_status(lp->bl);
++
+ return 0;
++
++disable_vddio:
++ if (lp->enable)
++ regulator_disable(lp->enable);
++disable_supply:
++ if (lp->supply)
++ regulator_disable(lp->supply);
++
++ return ret;
+ }
+
+ static int lp855x_remove(struct i2c_client *cl)
+@@ -497,6 +507,8 @@ static int lp855x_remove(struct i2c_client *cl)
+
+ lp->bl->props.brightness = 0;
+ backlight_update_status(lp->bl);
++ if (lp->enable)
++ regulator_disable(lp->enable);
+ if (lp->supply)
+ regulator_disable(lp->supply);
+ sysfs_remove_group(&lp->dev->kobj, &lp855x_attr_group);
+diff --git a/drivers/watchdog/da9062_wdt.c b/drivers/watchdog/da9062_wdt.c
+index 0ad15d55071c..18dec438d518 100644
+--- a/drivers/watchdog/da9062_wdt.c
++++ b/drivers/watchdog/da9062_wdt.c
+@@ -58,11 +58,6 @@ static int da9062_wdt_update_timeout_register(struct da9062_watchdog *wdt,
+ unsigned int regval)
+ {
+ struct da9062 *chip = wdt->hw;
+- int ret;
+-
+- ret = da9062_reset_watchdog_timer(wdt);
+- if (ret)
+- return ret;
+
+ regmap_update_bits(chip->regmap,
+ DA9062AA_CONTROL_D,
+diff --git a/drivers/xen/cpu_hotplug.c b/drivers/xen/cpu_hotplug.c
+index ec975decb5de..b96b11e2b571 100644
+--- a/drivers/xen/cpu_hotplug.c
++++ b/drivers/xen/cpu_hotplug.c
+@@ -93,10 +93,8 @@ static int setup_cpu_watcher(struct notifier_block *notifier,
+ (void)register_xenbus_watch(&cpu_watch);
+
+ for_each_possible_cpu(cpu) {
+- if (vcpu_online(cpu) == 0) {
+- device_offline(get_cpu_device(cpu));
+- set_cpu_present(cpu, false);
+- }
++ if (vcpu_online(cpu) == 0)
++ disable_hotplug_cpu(cpu);
+ }
+
+ return NOTIFY_DONE;
+@@ -119,5 +117,5 @@ static int __init setup_vcpu_hotplug_event(void)
+ return 0;
+ }
+
+-arch_initcall(setup_vcpu_hotplug_event);
++late_initcall(setup_vcpu_hotplug_event);
+
+diff --git a/fs/afs/cmservice.c b/fs/afs/cmservice.c
+index 380ad5ace7cf..3a9b8b1f5f2b 100644
+--- a/fs/afs/cmservice.c
++++ b/fs/afs/cmservice.c
+@@ -305,8 +305,7 @@ static int afs_deliver_cb_callback(struct afs_call *call)
+ call->count = ntohl(call->tmp);
+ _debug("FID count: %u", call->count);
+ if (call->count > AFSCBMAX)
+- return afs_protocol_error(call, -EBADMSG,
+- afs_eproto_cb_fid_count);
++ return afs_protocol_error(call, afs_eproto_cb_fid_count);
+
+ call->buffer = kmalloc(array3_size(call->count, 3, 4),
+ GFP_KERNEL);
+@@ -351,8 +350,7 @@ static int afs_deliver_cb_callback(struct afs_call *call)
+ call->count2 = ntohl(call->tmp);
+ _debug("CB count: %u", call->count2);
+ if (call->count2 != call->count && call->count2 != 0)
+- return afs_protocol_error(call, -EBADMSG,
+- afs_eproto_cb_count);
++ return afs_protocol_error(call, afs_eproto_cb_count);
+ call->iter = &call->def_iter;
+ iov_iter_discard(&call->def_iter, READ, call->count2 * 3 * 4);
+ call->unmarshall++;
+@@ -672,8 +670,7 @@ static int afs_deliver_yfs_cb_callback(struct afs_call *call)
+ call->count = ntohl(call->tmp);
+ _debug("FID count: %u", call->count);
+ if (call->count > YFSCBMAX)
+- return afs_protocol_error(call, -EBADMSG,
+- afs_eproto_cb_fid_count);
++ return afs_protocol_error(call, afs_eproto_cb_fid_count);
+
+ size = array_size(call->count, sizeof(struct yfs_xdr_YFSFid));
+ call->buffer = kmalloc(size, GFP_KERNEL);
+diff --git a/fs/afs/dir.c b/fs/afs/dir.c
+index d1e1caa23c8b..3c486340b220 100644
+--- a/fs/afs/dir.c
++++ b/fs/afs/dir.c
+@@ -658,7 +658,8 @@ static struct inode *afs_do_lookup(struct inode *dir, struct dentry *dentry,
+
+ cookie->ctx.actor = afs_lookup_filldir;
+ cookie->name = dentry->d_name;
+- cookie->nr_fids = 1; /* slot 0 is saved for the fid we actually want */
++ cookie->nr_fids = 2; /* slot 0 is saved for the fid we actually want
++ * and slot 1 for the directory */
+
+ read_seqlock_excl(&dvnode->cb_lock);
+ dcbi = rcu_dereference_protected(dvnode->cb_interest,
+@@ -709,7 +710,11 @@ static struct inode *afs_do_lookup(struct inode *dir, struct dentry *dentry,
+ if (!cookie->inodes)
+ goto out_s;
+
+- for (i = 1; i < cookie->nr_fids; i++) {
++ cookie->fids[1] = dvnode->fid;
++ cookie->statuses[1].cb_break = afs_calc_vnode_cb_break(dvnode);
++ cookie->inodes[1] = igrab(&dvnode->vfs_inode);
++
++ for (i = 2; i < cookie->nr_fids; i++) {
+ scb = &cookie->statuses[i];
+
+ /* Find any inodes that already exist and get their
+diff --git a/fs/afs/fsclient.c b/fs/afs/fsclient.c
+index d2b3798c1932..7bca0c13d0c4 100644
+--- a/fs/afs/fsclient.c
++++ b/fs/afs/fsclient.c
+@@ -56,16 +56,15 @@ static void xdr_dump_bad(const __be32 *bp)
+ /*
+ * decode an AFSFetchStatus block
+ */
+-static int xdr_decode_AFSFetchStatus(const __be32 **_bp,
+- struct afs_call *call,
+- struct afs_status_cb *scb)
++static void xdr_decode_AFSFetchStatus(const __be32 **_bp,
++ struct afs_call *call,
++ struct afs_status_cb *scb)
+ {
+ const struct afs_xdr_AFSFetchStatus *xdr = (const void *)*_bp;
+ struct afs_file_status *status = &scb->status;
+ bool inline_error = (call->operation_ID == afs_FS_InlineBulkStatus);
+ u64 data_version, size;
+ u32 type, abort_code;
+- int ret;
+
+ abort_code = ntohl(xdr->abort_code);
+
+@@ -79,7 +78,7 @@ static int xdr_decode_AFSFetchStatus(const __be32 **_bp,
+ */
+ status->abort_code = abort_code;
+ scb->have_error = true;
+- goto good;
++ goto advance;
+ }
+
+ pr_warn("Unknown AFSFetchStatus version %u\n", ntohl(xdr->if_version));
+@@ -89,7 +88,7 @@ static int xdr_decode_AFSFetchStatus(const __be32 **_bp,
+ if (abort_code != 0 && inline_error) {
+ status->abort_code = abort_code;
+ scb->have_error = true;
+- goto good;
++ goto advance;
+ }
+
+ type = ntohl(xdr->type);
+@@ -125,15 +124,13 @@ static int xdr_decode_AFSFetchStatus(const __be32 **_bp,
+ data_version |= (u64)ntohl(xdr->data_version_hi) << 32;
+ status->data_version = data_version;
+ scb->have_status = true;
+-good:
+- ret = 0;
+ advance:
+ *_bp = (const void *)*_bp + sizeof(*xdr);
+- return ret;
++ return;
+
+ bad:
+ xdr_dump_bad(*_bp);
+- ret = afs_protocol_error(call, -EBADMSG, afs_eproto_bad_status);
++ afs_protocol_error(call, afs_eproto_bad_status);
+ goto advance;
+ }
+
+@@ -254,9 +251,7 @@ static int afs_deliver_fs_fetch_status_vnode(struct afs_call *call)
+
+ /* unmarshall the reply once we've received all of it */
+ bp = call->buffer;
+- ret = xdr_decode_AFSFetchStatus(&bp, call, call->out_scb);
+- if (ret < 0)
+- return ret;
++ xdr_decode_AFSFetchStatus(&bp, call, call->out_scb);
+ xdr_decode_AFSCallBack(&bp, call, call->out_scb);
+ xdr_decode_AFSVolSync(&bp, call->out_volsync);
+
+@@ -419,9 +414,7 @@ static int afs_deliver_fs_fetch_data(struct afs_call *call)
+ return ret;
+
+ bp = call->buffer;
+- ret = xdr_decode_AFSFetchStatus(&bp, call, call->out_scb);
+- if (ret < 0)
+- return ret;
++ xdr_decode_AFSFetchStatus(&bp, call, call->out_scb);
+ xdr_decode_AFSCallBack(&bp, call, call->out_scb);
+ xdr_decode_AFSVolSync(&bp, call->out_volsync);
+
+@@ -577,12 +570,8 @@ static int afs_deliver_fs_create_vnode(struct afs_call *call)
+ /* unmarshall the reply once we've received all of it */
+ bp = call->buffer;
+ xdr_decode_AFSFid(&bp, call->out_fid);
+- ret = xdr_decode_AFSFetchStatus(&bp, call, call->out_scb);
+- if (ret < 0)
+- return ret;
+- ret = xdr_decode_AFSFetchStatus(&bp, call, call->out_dir_scb);
+- if (ret < 0)
+- return ret;
++ xdr_decode_AFSFetchStatus(&bp, call, call->out_scb);
++ xdr_decode_AFSFetchStatus(&bp, call, call->out_dir_scb);
+ xdr_decode_AFSCallBack(&bp, call, call->out_scb);
+ xdr_decode_AFSVolSync(&bp, call->out_volsync);
+
+@@ -691,9 +680,7 @@ static int afs_deliver_fs_dir_status_and_vol(struct afs_call *call)
+
+ /* unmarshall the reply once we've received all of it */
+ bp = call->buffer;
+- ret = xdr_decode_AFSFetchStatus(&bp, call, call->out_dir_scb);
+- if (ret < 0)
+- return ret;
++ xdr_decode_AFSFetchStatus(&bp, call, call->out_dir_scb);
+ xdr_decode_AFSVolSync(&bp, call->out_volsync);
+
+ _leave(" = 0 [done]");
+@@ -784,12 +771,8 @@ static int afs_deliver_fs_link(struct afs_call *call)
+
+ /* unmarshall the reply once we've received all of it */
+ bp = call->buffer;
+- ret = xdr_decode_AFSFetchStatus(&bp, call, call->out_scb);
+- if (ret < 0)
+- return ret;
+- ret = xdr_decode_AFSFetchStatus(&bp, call, call->out_dir_scb);
+- if (ret < 0)
+- return ret;
++ xdr_decode_AFSFetchStatus(&bp, call, call->out_scb);
++ xdr_decode_AFSFetchStatus(&bp, call, call->out_dir_scb);
+ xdr_decode_AFSVolSync(&bp, call->out_volsync);
+
+ _leave(" = 0 [done]");
+@@ -878,12 +861,8 @@ static int afs_deliver_fs_symlink(struct afs_call *call)
+ /* unmarshall the reply once we've received all of it */
+ bp = call->buffer;
+ xdr_decode_AFSFid(&bp, call->out_fid);
+- ret = xdr_decode_AFSFetchStatus(&bp, call, call->out_scb);
+- if (ret < 0)
+- return ret;
+- ret = xdr_decode_AFSFetchStatus(&bp, call, call->out_dir_scb);
+- if (ret < 0)
+- return ret;
++ xdr_decode_AFSFetchStatus(&bp, call, call->out_scb);
++ xdr_decode_AFSFetchStatus(&bp, call, call->out_dir_scb);
+ xdr_decode_AFSVolSync(&bp, call->out_volsync);
+
+ _leave(" = 0 [done]");
+@@ -986,16 +965,12 @@ static int afs_deliver_fs_rename(struct afs_call *call)
+ if (ret < 0)
+ return ret;
+
++ bp = call->buffer;
+ /* If the two dirs are the same, we have two copies of the same status
+ * report, so we just decode it twice.
+ */
+- bp = call->buffer;
+- ret = xdr_decode_AFSFetchStatus(&bp, call, call->out_dir_scb);
+- if (ret < 0)
+- return ret;
+- ret = xdr_decode_AFSFetchStatus(&bp, call, call->out_scb);
+- if (ret < 0)
+- return ret;
++ xdr_decode_AFSFetchStatus(&bp, call, call->out_dir_scb);
++ xdr_decode_AFSFetchStatus(&bp, call, call->out_scb);
+ xdr_decode_AFSVolSync(&bp, call->out_volsync);
+
+ _leave(" = 0 [done]");
+@@ -1103,9 +1078,7 @@ static int afs_deliver_fs_store_data(struct afs_call *call)
+
+ /* unmarshall the reply once we've received all of it */
+ bp = call->buffer;
+- ret = xdr_decode_AFSFetchStatus(&bp, call, call->out_scb);
+- if (ret < 0)
+- return ret;
++ xdr_decode_AFSFetchStatus(&bp, call, call->out_scb);
+ xdr_decode_AFSVolSync(&bp, call->out_volsync);
+
+ _leave(" = 0 [done]");
+@@ -1283,9 +1256,7 @@ static int afs_deliver_fs_store_status(struct afs_call *call)
+
+ /* unmarshall the reply once we've received all of it */
+ bp = call->buffer;
+- ret = xdr_decode_AFSFetchStatus(&bp, call, call->out_scb);
+- if (ret < 0)
+- return ret;
++ xdr_decode_AFSFetchStatus(&bp, call, call->out_scb);
+ xdr_decode_AFSVolSync(&bp, call->out_volsync);
+
+ _leave(" = 0 [done]");
+@@ -1499,8 +1470,7 @@ static int afs_deliver_fs_get_volume_status(struct afs_call *call)
+ call->count = ntohl(call->tmp);
+ _debug("volname length: %u", call->count);
+ if (call->count >= AFSNAMEMAX)
+- return afs_protocol_error(call, -EBADMSG,
+- afs_eproto_volname_len);
++ return afs_protocol_error(call, afs_eproto_volname_len);
+ size = (call->count + 3) & ~3; /* It's padded */
+ afs_extract_to_buf(call, size);
+ call->unmarshall++;
+@@ -1529,8 +1499,7 @@ static int afs_deliver_fs_get_volume_status(struct afs_call *call)
+ call->count = ntohl(call->tmp);
+ _debug("offline msg length: %u", call->count);
+ if (call->count >= AFSNAMEMAX)
+- return afs_protocol_error(call, -EBADMSG,
+- afs_eproto_offline_msg_len);
++ return afs_protocol_error(call, afs_eproto_offline_msg_len);
+ size = (call->count + 3) & ~3; /* It's padded */
+ afs_extract_to_buf(call, size);
+ call->unmarshall++;
+@@ -1560,8 +1529,7 @@ static int afs_deliver_fs_get_volume_status(struct afs_call *call)
+ call->count = ntohl(call->tmp);
+ _debug("motd length: %u", call->count);
+ if (call->count >= AFSNAMEMAX)
+- return afs_protocol_error(call, -EBADMSG,
+- afs_eproto_motd_len);
++ return afs_protocol_error(call, afs_eproto_motd_len);
+ size = (call->count + 3) & ~3; /* It's padded */
+ afs_extract_to_buf(call, size);
+ call->unmarshall++;
+@@ -1954,9 +1922,7 @@ static int afs_deliver_fs_fetch_status(struct afs_call *call)
+
+ /* unmarshall the reply once we've received all of it */
+ bp = call->buffer;
+- ret = xdr_decode_AFSFetchStatus(&bp, call, call->out_scb);
+- if (ret < 0)
+- return ret;
++ xdr_decode_AFSFetchStatus(&bp, call, call->out_scb);
+ xdr_decode_AFSCallBack(&bp, call, call->out_scb);
+ xdr_decode_AFSVolSync(&bp, call->out_volsync);
+
+@@ -2045,8 +2011,7 @@ static int afs_deliver_fs_inline_bulk_status(struct afs_call *call)
+ tmp = ntohl(call->tmp);
+ _debug("status count: %u/%u", tmp, call->count2);
+ if (tmp != call->count2)
+- return afs_protocol_error(call, -EBADMSG,
+- afs_eproto_ibulkst_count);
++ return afs_protocol_error(call, afs_eproto_ibulkst_count);
+
+ call->count = 0;
+ call->unmarshall++;
+@@ -2062,10 +2027,7 @@ static int afs_deliver_fs_inline_bulk_status(struct afs_call *call)
+
+ bp = call->buffer;
+ scb = &call->out_scb[call->count];
+- ret = xdr_decode_AFSFetchStatus(&bp, call, scb);
+- if (ret < 0)
+- return ret;
+-
++ xdr_decode_AFSFetchStatus(&bp, call, scb);
+ call->count++;
+ if (call->count < call->count2)
+ goto more_counts;
+@@ -2085,8 +2047,7 @@ static int afs_deliver_fs_inline_bulk_status(struct afs_call *call)
+ tmp = ntohl(call->tmp);
+ _debug("CB count: %u", tmp);
+ if (tmp != call->count2)
+- return afs_protocol_error(call, -EBADMSG,
+- afs_eproto_ibulkst_cb_count);
++ return afs_protocol_error(call, afs_eproto_ibulkst_cb_count);
+ call->count = 0;
+ call->unmarshall++;
+ more_cbs:
+@@ -2243,9 +2204,7 @@ static int afs_deliver_fs_fetch_acl(struct afs_call *call)
+ return ret;
+
+ bp = call->buffer;
+- ret = xdr_decode_AFSFetchStatus(&bp, call, call->out_scb);
+- if (ret < 0)
+- return ret;
++ xdr_decode_AFSFetchStatus(&bp, call, call->out_scb);
+ xdr_decode_AFSVolSync(&bp, call->out_volsync);
+
+ call->unmarshall++;
+@@ -2326,9 +2285,7 @@ static int afs_deliver_fs_file_status_and_vol(struct afs_call *call)
+ return ret;
+
+ bp = call->buffer;
+- ret = xdr_decode_AFSFetchStatus(&bp, call, call->out_scb);
+- if (ret < 0)
+- return ret;
++ xdr_decode_AFSFetchStatus(&bp, call, call->out_scb);
+ xdr_decode_AFSVolSync(&bp, call->out_volsync);
+
+ _leave(" = 0 [done]");
+diff --git a/fs/afs/inode.c b/fs/afs/inode.c
+index 281470fe1183..d7b65fad6679 100644
+--- a/fs/afs/inode.c
++++ b/fs/afs/inode.c
+@@ -130,7 +130,7 @@ static int afs_inode_init_from_status(struct afs_vnode *vnode, struct key *key,
+ default:
+ dump_vnode(vnode, parent_vnode);
+ write_sequnlock(&vnode->cb_lock);
+- return afs_protocol_error(NULL, -EBADMSG, afs_eproto_file_type);
++ return afs_protocol_error(NULL, afs_eproto_file_type);
+ }
+
+ afs_set_i_size(vnode, status->size);
+@@ -170,6 +170,7 @@ static void afs_apply_status(struct afs_fs_cursor *fc,
+ struct timespec64 t;
+ umode_t mode;
+ bool data_changed = false;
++ bool change_size = false;
+
+ BUG_ON(test_bit(AFS_VNODE_UNSET, &vnode->flags));
+
+@@ -179,7 +180,7 @@ static void afs_apply_status(struct afs_fs_cursor *fc,
+ vnode->fid.vnode,
+ vnode->fid.unique,
+ status->type, vnode->status.type);
+- afs_protocol_error(NULL, -EBADMSG, afs_eproto_bad_status);
++ afs_protocol_error(NULL, afs_eproto_bad_status);
+ return;
+ }
+
+@@ -225,6 +226,7 @@ static void afs_apply_status(struct afs_fs_cursor *fc,
+ } else {
+ set_bit(AFS_VNODE_ZAP_DATA, &vnode->flags);
+ }
++ change_size = true;
+ } else if (vnode->status.type == AFS_FTYPE_DIR) {
+ /* Expected directory change is handled elsewhere so
+ * that we can locally edit the directory and save on a
+@@ -232,11 +234,19 @@ static void afs_apply_status(struct afs_fs_cursor *fc,
+ */
+ if (test_bit(AFS_VNODE_DIR_VALID, &vnode->flags))
+ data_changed = false;
++ change_size = true;
+ }
+
+ if (data_changed) {
+ inode_set_iversion_raw(&vnode->vfs_inode, status->data_version);
+- afs_set_i_size(vnode, status->size);
++
++ /* Only update the size if the data version jumped. If the
++ * file is being modified locally, then we might have our own
++ * idea of what the size should be that's not the same as
++ * what's on the server.
++ */
++ if (change_size)
++ afs_set_i_size(vnode, status->size);
+ }
+ }
+
+diff --git a/fs/afs/internal.h b/fs/afs/internal.h
+index 80255513e230..98e0cebd5e5e 100644
+--- a/fs/afs/internal.h
++++ b/fs/afs/internal.h
+@@ -161,6 +161,7 @@ struct afs_call {
+ bool upgrade; /* T to request service upgrade */
+ bool have_reply_time; /* T if have got reply_time */
+ bool intr; /* T if interruptible */
++ bool unmarshalling_error; /* T if an unmarshalling error occurred */
+ u16 service_id; /* Actual service ID (after upgrade) */
+ unsigned int debug_id; /* Trace ID */
+ u32 operation_ID; /* operation ID for an incoming call */
+@@ -1128,7 +1129,7 @@ extern void afs_flat_call_destructor(struct afs_call *);
+ extern void afs_send_empty_reply(struct afs_call *);
+ extern void afs_send_simple_reply(struct afs_call *, const void *, size_t);
+ extern int afs_extract_data(struct afs_call *, bool);
+-extern int afs_protocol_error(struct afs_call *, int, enum afs_eproto_cause);
++extern int afs_protocol_error(struct afs_call *, enum afs_eproto_cause);
+
+ static inline void afs_set_fc_call(struct afs_call *call, struct afs_fs_cursor *fc)
+ {
+diff --git a/fs/afs/misc.c b/fs/afs/misc.c
+index 52b19e9c1535..5334f1bd2bca 100644
+--- a/fs/afs/misc.c
++++ b/fs/afs/misc.c
+@@ -83,6 +83,7 @@ int afs_abort_to_error(u32 abort_code)
+ case UAENOLCK: return -ENOLCK;
+ case UAENOTEMPTY: return -ENOTEMPTY;
+ case UAELOOP: return -ELOOP;
++ case UAEOVERFLOW: return -EOVERFLOW;
+ case UAENOMEDIUM: return -ENOMEDIUM;
+ case UAEDQUOT: return -EDQUOT;
+
+diff --git a/fs/afs/proc.c b/fs/afs/proc.c
+index 468e1713bce1..6f34c84a0fd0 100644
+--- a/fs/afs/proc.c
++++ b/fs/afs/proc.c
+@@ -563,6 +563,7 @@ void afs_put_sysnames(struct afs_sysnames *sysnames)
+ if (sysnames->subs[i] != afs_init_sysname &&
+ sysnames->subs[i] != sysnames->blank)
+ kfree(sysnames->subs[i]);
++ kfree(sysnames);
+ }
+ }
+
+diff --git a/fs/afs/rxrpc.c b/fs/afs/rxrpc.c
+index 1ecc67da6c1a..e3c2655616dc 100644
+--- a/fs/afs/rxrpc.c
++++ b/fs/afs/rxrpc.c
+@@ -540,6 +540,8 @@ static void afs_deliver_to_call(struct afs_call *call)
+
+ ret = call->type->deliver(call);
+ state = READ_ONCE(call->state);
++ if (ret == 0 && call->unmarshalling_error)
++ ret = -EBADMSG;
+ switch (ret) {
+ case 0:
+ afs_queue_call_work(call);
+@@ -959,9 +961,11 @@ int afs_extract_data(struct afs_call *call, bool want_more)
+ /*
+ * Log protocol error production.
+ */
+-noinline int afs_protocol_error(struct afs_call *call, int error,
++noinline int afs_protocol_error(struct afs_call *call,
+ enum afs_eproto_cause cause)
+ {
+- trace_afs_protocol_error(call, error, cause);
+- return error;
++ trace_afs_protocol_error(call, cause);
++ if (call)
++ call->unmarshalling_error = true;
++ return -EBADMSG;
+ }
+diff --git a/fs/afs/vlclient.c b/fs/afs/vlclient.c
+index 516e9a3bb5b4..e64b002c3bb3 100644
+--- a/fs/afs/vlclient.c
++++ b/fs/afs/vlclient.c
+@@ -447,8 +447,7 @@ static int afs_deliver_yfsvl_get_endpoints(struct afs_call *call)
+ call->count2 = ntohl(*bp); /* Type or next count */
+
+ if (call->count > YFS_MAXENDPOINTS)
+- return afs_protocol_error(call, -EBADMSG,
+- afs_eproto_yvl_fsendpt_num);
++ return afs_protocol_error(call, afs_eproto_yvl_fsendpt_num);
+
+ alist = afs_alloc_addrlist(call->count, FS_SERVICE, AFS_FS_PORT);
+ if (!alist)
+@@ -468,8 +467,7 @@ static int afs_deliver_yfsvl_get_endpoints(struct afs_call *call)
+ size = sizeof(__be32) * (1 + 4 + 1);
+ break;
+ default:
+- return afs_protocol_error(call, -EBADMSG,
+- afs_eproto_yvl_fsendpt_type);
++ return afs_protocol_error(call, afs_eproto_yvl_fsendpt_type);
+ }
+
+ size += sizeof(__be32);
+@@ -487,21 +485,20 @@ static int afs_deliver_yfsvl_get_endpoints(struct afs_call *call)
+ switch (call->count2) {
+ case YFS_ENDPOINT_IPV4:
+ if (ntohl(bp[0]) != sizeof(__be32) * 2)
+- return afs_protocol_error(call, -EBADMSG,
+- afs_eproto_yvl_fsendpt4_len);
++ return afs_protocol_error(
++ call, afs_eproto_yvl_fsendpt4_len);
+ afs_merge_fs_addr4(alist, bp[1], ntohl(bp[2]));
+ bp += 3;
+ break;
+ case YFS_ENDPOINT_IPV6:
+ if (ntohl(bp[0]) != sizeof(__be32) * 5)
+- return afs_protocol_error(call, -EBADMSG,
+- afs_eproto_yvl_fsendpt6_len);
++ return afs_protocol_error(
++ call, afs_eproto_yvl_fsendpt6_len);
+ afs_merge_fs_addr6(alist, bp + 1, ntohl(bp[5]));
+ bp += 6;
+ break;
+ default:
+- return afs_protocol_error(call, -EBADMSG,
+- afs_eproto_yvl_fsendpt_type);
++ return afs_protocol_error(call, afs_eproto_yvl_fsendpt_type);
+ }
+
+ /* Got either the type of the next entry or the count of
+@@ -519,8 +516,7 @@ static int afs_deliver_yfsvl_get_endpoints(struct afs_call *call)
+ if (!call->count)
+ goto end;
+ if (call->count > YFS_MAXENDPOINTS)
+- return afs_protocol_error(call, -EBADMSG,
+- afs_eproto_yvl_vlendpt_type);
++ return afs_protocol_error(call, afs_eproto_yvl_vlendpt_type);
+
+ afs_extract_to_buf(call, 1 * sizeof(__be32));
+ call->unmarshall = 3;
+@@ -547,8 +543,7 @@ static int afs_deliver_yfsvl_get_endpoints(struct afs_call *call)
+ size = sizeof(__be32) * (1 + 4 + 1);
+ break;
+ default:
+- return afs_protocol_error(call, -EBADMSG,
+- afs_eproto_yvl_vlendpt_type);
++ return afs_protocol_error(call, afs_eproto_yvl_vlendpt_type);
+ }
+
+ if (call->count > 1)
+@@ -566,19 +561,18 @@ static int afs_deliver_yfsvl_get_endpoints(struct afs_call *call)
+ switch (call->count2) {
+ case YFS_ENDPOINT_IPV4:
+ if (ntohl(bp[0]) != sizeof(__be32) * 2)
+- return afs_protocol_error(call, -EBADMSG,
+- afs_eproto_yvl_vlendpt4_len);
++ return afs_protocol_error(
++ call, afs_eproto_yvl_vlendpt4_len);
+ bp += 3;
+ break;
+ case YFS_ENDPOINT_IPV6:
+ if (ntohl(bp[0]) != sizeof(__be32) * 5)
+- return afs_protocol_error(call, -EBADMSG,
+- afs_eproto_yvl_vlendpt6_len);
++ return afs_protocol_error(
++ call, afs_eproto_yvl_vlendpt6_len);
+ bp += 6;
+ break;
+ default:
+- return afs_protocol_error(call, -EBADMSG,
+- afs_eproto_yvl_vlendpt_type);
++ return afs_protocol_error(call, afs_eproto_yvl_vlendpt_type);
+ }
+
+ /* Got either the type of the next entry or the count of
+diff --git a/fs/afs/write.c b/fs/afs/write.c
+index cb76566763db..96b042af6248 100644
+--- a/fs/afs/write.c
++++ b/fs/afs/write.c
+@@ -194,11 +194,11 @@ int afs_write_end(struct file *file, struct address_space *mapping,
+
+ i_size = i_size_read(&vnode->vfs_inode);
+ if (maybe_i_size > i_size) {
+- spin_lock(&vnode->wb_lock);
++ write_seqlock(&vnode->cb_lock);
+ i_size = i_size_read(&vnode->vfs_inode);
+ if (maybe_i_size > i_size)
+ i_size_write(&vnode->vfs_inode, maybe_i_size);
+- spin_unlock(&vnode->wb_lock);
++ write_sequnlock(&vnode->cb_lock);
+ }
+
+ if (!PageUptodate(page)) {
+@@ -811,6 +811,7 @@ vm_fault_t afs_page_mkwrite(struct vm_fault *vmf)
+ vmf->page->index, priv);
+ SetPagePrivate(vmf->page);
+ set_page_private(vmf->page, priv);
++ file_update_time(file);
+
+ sb_end_pagefault(inode->i_sb);
+ return VM_FAULT_LOCKED;
+diff --git a/fs/afs/yfsclient.c b/fs/afs/yfsclient.c
+index fe413e7a5cf4..bf74c679c02b 100644
+--- a/fs/afs/yfsclient.c
++++ b/fs/afs/yfsclient.c
+@@ -179,21 +179,20 @@ static void xdr_dump_bad(const __be32 *bp)
+ /*
+ * Decode a YFSFetchStatus block
+ */
+-static int xdr_decode_YFSFetchStatus(const __be32 **_bp,
+- struct afs_call *call,
+- struct afs_status_cb *scb)
++static void xdr_decode_YFSFetchStatus(const __be32 **_bp,
++ struct afs_call *call,
++ struct afs_status_cb *scb)
+ {
+ const struct yfs_xdr_YFSFetchStatus *xdr = (const void *)*_bp;
+ struct afs_file_status *status = &scb->status;
+ u32 type;
+- int ret;
+
+ status->abort_code = ntohl(xdr->abort_code);
+ if (status->abort_code != 0) {
+ if (status->abort_code == VNOVNODE)
+ status->nlink = 0;
+ scb->have_error = true;
+- goto good;
++ goto advance;
+ }
+
+ type = ntohl(xdr->type);
+@@ -221,15 +220,13 @@ static int xdr_decode_YFSFetchStatus(const __be32 **_bp,
+ status->size = xdr_to_u64(xdr->size);
+ status->data_version = xdr_to_u64(xdr->data_version);
+ scb->have_status = true;
+-good:
+- ret = 0;
+ advance:
+ *_bp += xdr_size(xdr);
+- return ret;
++ return;
+
+ bad:
+ xdr_dump_bad(*_bp);
+- ret = afs_protocol_error(call, -EBADMSG, afs_eproto_bad_status);
++ afs_protocol_error(call, afs_eproto_bad_status);
+ goto advance;
+ }
+
+@@ -348,9 +345,7 @@ static int yfs_deliver_fs_status_cb_and_volsync(struct afs_call *call)
+
+ /* unmarshall the reply once we've received all of it */
+ bp = call->buffer;
+- ret = xdr_decode_YFSFetchStatus(&bp, call, call->out_scb);
+- if (ret < 0)
+- return ret;
++ xdr_decode_YFSFetchStatus(&bp, call, call->out_scb);
+ xdr_decode_YFSCallBack(&bp, call, call->out_scb);
+ xdr_decode_YFSVolSync(&bp, call->out_volsync);
+
+@@ -372,9 +367,7 @@ static int yfs_deliver_status_and_volsync(struct afs_call *call)
+ return ret;
+
+ bp = call->buffer;
+- ret = xdr_decode_YFSFetchStatus(&bp, call, call->out_scb);
+- if (ret < 0)
+- return ret;
++ xdr_decode_YFSFetchStatus(&bp, call, call->out_scb);
+ xdr_decode_YFSVolSync(&bp, call->out_volsync);
+
+ _leave(" = 0 [done]");
+@@ -534,9 +527,7 @@ static int yfs_deliver_fs_fetch_data64(struct afs_call *call)
+ return ret;
+
+ bp = call->buffer;
+- ret = xdr_decode_YFSFetchStatus(&bp, call, call->out_scb);
+- if (ret < 0)
+- return ret;
++ xdr_decode_YFSFetchStatus(&bp, call, call->out_scb);
+ xdr_decode_YFSCallBack(&bp, call, call->out_scb);
+ xdr_decode_YFSVolSync(&bp, call->out_volsync);
+
+@@ -644,12 +635,8 @@ static int yfs_deliver_fs_create_vnode(struct afs_call *call)
+ /* unmarshall the reply once we've received all of it */
+ bp = call->buffer;
+ xdr_decode_YFSFid(&bp, call->out_fid);
+- ret = xdr_decode_YFSFetchStatus(&bp, call, call->out_scb);
+- if (ret < 0)
+- return ret;
+- ret = xdr_decode_YFSFetchStatus(&bp, call, call->out_dir_scb);
+- if (ret < 0)
+- return ret;
++ xdr_decode_YFSFetchStatus(&bp, call, call->out_scb);
++ xdr_decode_YFSFetchStatus(&bp, call, call->out_dir_scb);
+ xdr_decode_YFSCallBack(&bp, call, call->out_scb);
+ xdr_decode_YFSVolSync(&bp, call->out_volsync);
+
+@@ -802,14 +789,9 @@ static int yfs_deliver_fs_remove_file2(struct afs_call *call)
+ return ret;
+
+ bp = call->buffer;
+- ret = xdr_decode_YFSFetchStatus(&bp, call, call->out_dir_scb);
+- if (ret < 0)
+- return ret;
+-
++ xdr_decode_YFSFetchStatus(&bp, call, call->out_dir_scb);
+ xdr_decode_YFSFid(&bp, &fid);
+- ret = xdr_decode_YFSFetchStatus(&bp, call, call->out_scb);
+- if (ret < 0)
+- return ret;
++ xdr_decode_YFSFetchStatus(&bp, call, call->out_scb);
+ /* Was deleted if vnode->status.abort_code == VNOVNODE. */
+
+ xdr_decode_YFSVolSync(&bp, call->out_volsync);
+@@ -889,10 +871,7 @@ static int yfs_deliver_fs_remove(struct afs_call *call)
+ return ret;
+
+ bp = call->buffer;
+- ret = xdr_decode_YFSFetchStatus(&bp, call, call->out_dir_scb);
+- if (ret < 0)
+- return ret;
+-
++ xdr_decode_YFSFetchStatus(&bp, call, call->out_dir_scb);
+ xdr_decode_YFSVolSync(&bp, call->out_volsync);
+ return 0;
+ }
+@@ -974,12 +953,8 @@ static int yfs_deliver_fs_link(struct afs_call *call)
+ return ret;
+
+ bp = call->buffer;
+- ret = xdr_decode_YFSFetchStatus(&bp, call, call->out_scb);
+- if (ret < 0)
+- return ret;
+- ret = xdr_decode_YFSFetchStatus(&bp, call, call->out_dir_scb);
+- if (ret < 0)
+- return ret;
++ xdr_decode_YFSFetchStatus(&bp, call, call->out_scb);
++ xdr_decode_YFSFetchStatus(&bp, call, call->out_dir_scb);
+ xdr_decode_YFSVolSync(&bp, call->out_volsync);
+ _leave(" = 0 [done]");
+ return 0;
+@@ -1061,12 +1036,8 @@ static int yfs_deliver_fs_symlink(struct afs_call *call)
+ /* unmarshall the reply once we've received all of it */
+ bp = call->buffer;
+ xdr_decode_YFSFid(&bp, call->out_fid);
+- ret = xdr_decode_YFSFetchStatus(&bp, call, call->out_scb);
+- if (ret < 0)
+- return ret;
+- ret = xdr_decode_YFSFetchStatus(&bp, call, call->out_dir_scb);
+- if (ret < 0)
+- return ret;
++ xdr_decode_YFSFetchStatus(&bp, call, call->out_scb);
++ xdr_decode_YFSFetchStatus(&bp, call, call->out_dir_scb);
+ xdr_decode_YFSVolSync(&bp, call->out_volsync);
+
+ _leave(" = 0 [done]");
+@@ -1154,13 +1125,11 @@ static int yfs_deliver_fs_rename(struct afs_call *call)
+ return ret;
+
+ bp = call->buffer;
+- ret = xdr_decode_YFSFetchStatus(&bp, call, call->out_dir_scb);
+- if (ret < 0)
+- return ret;
+- ret = xdr_decode_YFSFetchStatus(&bp, call, call->out_scb);
+- if (ret < 0)
+- return ret;
+-
++ /* If the two dirs are the same, we have two copies of the same status
++ * report, so we just decode it twice.
++ */
++ xdr_decode_YFSFetchStatus(&bp, call, call->out_dir_scb);
++ xdr_decode_YFSFetchStatus(&bp, call, call->out_scb);
+ xdr_decode_YFSVolSync(&bp, call->out_volsync);
+ _leave(" = 0 [done]");
+ return 0;
+@@ -1457,8 +1426,7 @@ static int yfs_deliver_fs_get_volume_status(struct afs_call *call)
+ call->count = ntohl(call->tmp);
+ _debug("volname length: %u", call->count);
+ if (call->count >= AFSNAMEMAX)
+- return afs_protocol_error(call, -EBADMSG,
+- afs_eproto_volname_len);
++ return afs_protocol_error(call, afs_eproto_volname_len);
+ size = (call->count + 3) & ~3; /* It's padded */
+ afs_extract_to_buf(call, size);
+ call->unmarshall++;
+@@ -1487,8 +1455,7 @@ static int yfs_deliver_fs_get_volume_status(struct afs_call *call)
+ call->count = ntohl(call->tmp);
+ _debug("offline msg length: %u", call->count);
+ if (call->count >= AFSNAMEMAX)
+- return afs_protocol_error(call, -EBADMSG,
+- afs_eproto_offline_msg_len);
++ return afs_protocol_error(call, afs_eproto_offline_msg_len);
+ size = (call->count + 3) & ~3; /* It's padded */
+ afs_extract_to_buf(call, size);
+ call->unmarshall++;
+@@ -1518,8 +1485,7 @@ static int yfs_deliver_fs_get_volume_status(struct afs_call *call)
+ call->count = ntohl(call->tmp);
+ _debug("motd length: %u", call->count);
+ if (call->count >= AFSNAMEMAX)
+- return afs_protocol_error(call, -EBADMSG,
+- afs_eproto_motd_len);
++ return afs_protocol_error(call, afs_eproto_motd_len);
+ size = (call->count + 3) & ~3; /* It's padded */
+ afs_extract_to_buf(call, size);
+ call->unmarshall++;
+@@ -1828,8 +1794,7 @@ static int yfs_deliver_fs_inline_bulk_status(struct afs_call *call)
+ tmp = ntohl(call->tmp);
+ _debug("status count: %u/%u", tmp, call->count2);
+ if (tmp != call->count2)
+- return afs_protocol_error(call, -EBADMSG,
+- afs_eproto_ibulkst_count);
++ return afs_protocol_error(call, afs_eproto_ibulkst_count);
+
+ call->count = 0;
+ call->unmarshall++;
+@@ -1845,9 +1810,7 @@ static int yfs_deliver_fs_inline_bulk_status(struct afs_call *call)
+
+ bp = call->buffer;
+ scb = &call->out_scb[call->count];
+- ret = xdr_decode_YFSFetchStatus(&bp, call, scb);
+- if (ret < 0)
+- return ret;
++ xdr_decode_YFSFetchStatus(&bp, call, scb);
+
+ call->count++;
+ if (call->count < call->count2)
+@@ -1868,8 +1831,7 @@ static int yfs_deliver_fs_inline_bulk_status(struct afs_call *call)
+ tmp = ntohl(call->tmp);
+ _debug("CB count: %u", tmp);
+ if (tmp != call->count2)
+- return afs_protocol_error(call, -EBADMSG,
+- afs_eproto_ibulkst_cb_count);
++ return afs_protocol_error(call, afs_eproto_ibulkst_cb_count);
+ call->count = 0;
+ call->unmarshall++;
+ more_cbs:
+@@ -2067,9 +2029,7 @@ static int yfs_deliver_fs_fetch_opaque_acl(struct afs_call *call)
+ bp = call->buffer;
+ yacl->inherit_flag = ntohl(*bp++);
+ yacl->num_cleaned = ntohl(*bp++);
+- ret = xdr_decode_YFSFetchStatus(&bp, call, call->out_scb);
+- if (ret < 0)
+- return ret;
++ xdr_decode_YFSFetchStatus(&bp, call, call->out_scb);
+ xdr_decode_YFSVolSync(&bp, call->out_volsync);
+
+ call->unmarshall++;
+diff --git a/fs/block_dev.c b/fs/block_dev.c
+index 93672c3f1c78..313aae95818e 100644
+--- a/fs/block_dev.c
++++ b/fs/block_dev.c
+@@ -1583,10 +1583,8 @@ static int __blkdev_get(struct block_device *bdev, fmode_t mode, int for_part)
+ */
+ if (!for_part) {
+ ret = devcgroup_inode_permission(bdev->bd_inode, perm);
+- if (ret != 0) {
+- bdput(bdev);
++ if (ret != 0)
+ return ret;
+- }
+ }
+
+ restart:
+@@ -1655,8 +1653,10 @@ static int __blkdev_get(struct block_device *bdev, fmode_t mode, int for_part)
+ goto out_clear;
+ BUG_ON(for_part);
+ ret = __blkdev_get(whole, mode, 1);
+- if (ret)
++ if (ret) {
++ bdput(whole);
+ goto out_clear;
++ }
+ bdev->bd_contains = whole;
+ bdev->bd_part = disk_get_part(disk, partno);
+ if (!(disk->flags & GENHD_FL_UP) ||
+@@ -1706,7 +1706,6 @@ static int __blkdev_get(struct block_device *bdev, fmode_t mode, int for_part)
+ disk_unblock_events(disk);
+ put_disk_and_module(disk);
+ out:
+- bdput(bdev);
+
+ return ret;
+ }
+@@ -1773,6 +1772,9 @@ int blkdev_get(struct block_device *bdev, fmode_t mode, void *holder)
+ bdput(whole);
+ }
+
++ if (res)
++ bdput(bdev);
++
+ return res;
+ }
+ EXPORT_SYMBOL(blkdev_get);
+diff --git a/fs/ceph/export.c b/fs/ceph/export.c
+index 79dc06881e78..e088843a7734 100644
+--- a/fs/ceph/export.c
++++ b/fs/ceph/export.c
+@@ -172,9 +172,16 @@ struct inode *ceph_lookup_inode(struct super_block *sb, u64 ino)
+ static struct dentry *__fh_to_dentry(struct super_block *sb, u64 ino)
+ {
+ struct inode *inode = __lookup_inode(sb, ino);
++ int err;
++
+ if (IS_ERR(inode))
+ return ERR_CAST(inode);
+- if (inode->i_nlink == 0) {
++ /* We need LINK caps to reliably check i_nlink */
++ err = ceph_do_getattr(inode, CEPH_CAP_LINK_SHARED, false);
++ if (err)
++ return ERR_PTR(err);
++ /* -ESTALE if inode as been unlinked and no file is open */
++ if ((inode->i_nlink == 0) && (atomic_read(&inode->i_count) == 1)) {
+ iput(inode);
+ return ERR_PTR(-ESTALE);
+ }
+diff --git a/fs/cifs/connect.c b/fs/cifs/connect.c
+index 28268ed461b8..47b9fbb70bf5 100644
+--- a/fs/cifs/connect.c
++++ b/fs/cifs/connect.c
+@@ -572,26 +572,26 @@ cifs_reconnect(struct TCP_Server_Info *server)
+ try_to_freeze();
+
+ mutex_lock(&server->srv_mutex);
++#ifdef CONFIG_CIFS_DFS_UPCALL
+ /*
+ * Set up next DFS target server (if any) for reconnect. If DFS
+ * feature is disabled, then we will retry last server we
+ * connected to before.
+ */
++ reconn_inval_dfs_target(server, cifs_sb, &tgt_list, &tgt_it);
++#endif
++ rc = reconn_set_ipaddr(server);
++ if (rc) {
++ cifs_dbg(FYI, "%s: failed to resolve hostname: %d\n",
++ __func__, rc);
++ }
++
+ if (cifs_rdma_enabled(server))
+ rc = smbd_reconnect(server);
+ else
+ rc = generic_ip_connect(server);
+ if (rc) {
+ cifs_dbg(FYI, "reconnect error %d\n", rc);
+-#ifdef CONFIG_CIFS_DFS_UPCALL
+- reconn_inval_dfs_target(server, cifs_sb, &tgt_list,
+- &tgt_it);
+-#endif
+- rc = reconn_set_ipaddr(server);
+- if (rc) {
+- cifs_dbg(FYI, "%s: failed to resolve hostname: %d\n",
+- __func__, rc);
+- }
+ mutex_unlock(&server->srv_mutex);
+ msleep(3000);
+ } else {
+diff --git a/fs/dlm/dlm_internal.h b/fs/dlm/dlm_internal.h
+index 416d9de35679..4311d01b02a8 100644
+--- a/fs/dlm/dlm_internal.h
++++ b/fs/dlm/dlm_internal.h
+@@ -97,7 +97,6 @@ do { \
+ __LINE__, __FILE__, #x, jiffies); \
+ {do} \
+ printk("\n"); \
+- BUG(); \
+ panic("DLM: Record message above and reboot.\n"); \
+ } \
+ }
+diff --git a/fs/ext4/acl.c b/fs/ext4/acl.c
+index 8c7bbf3e566d..470be69f19aa 100644
+--- a/fs/ext4/acl.c
++++ b/fs/ext4/acl.c
+@@ -256,7 +256,7 @@ retry:
+ if (!error && update_mode) {
+ inode->i_mode = mode;
+ inode->i_ctime = current_time(inode);
+- ext4_mark_inode_dirty(handle, inode);
++ error = ext4_mark_inode_dirty(handle, inode);
+ }
+ out_stop:
+ ext4_journal_stop(handle);
+diff --git a/fs/ext4/dir.c b/fs/ext4/dir.c
+index c654205f648d..1d82336b1cd4 100644
+--- a/fs/ext4/dir.c
++++ b/fs/ext4/dir.c
+@@ -675,6 +675,7 @@ static int ext4_d_compare(const struct dentry *dentry, unsigned int len,
+ struct qstr qstr = {.name = str, .len = len };
+ const struct dentry *parent = READ_ONCE(dentry->d_parent);
+ const struct inode *inode = READ_ONCE(parent->d_inode);
++ char strbuf[DNAME_INLINE_LEN];
+
+ if (!inode || !IS_CASEFOLDED(inode) ||
+ !EXT4_SB(inode->i_sb)->s_encoding) {
+@@ -683,6 +684,21 @@ static int ext4_d_compare(const struct dentry *dentry, unsigned int len,
+ return memcmp(str, name->name, len);
+ }
+
++ /*
++ * If the dentry name is stored in-line, then it may be concurrently
++ * modified by a rename. If this happens, the VFS will eventually retry
++ * the lookup, so it doesn't matter what ->d_compare() returns.
++ * However, it's unsafe to call utf8_strncasecmp() with an unstable
++ * string. Therefore, we have to copy the name into a temporary buffer.
++ */
++ if (len <= DNAME_INLINE_LEN - 1) {
++ memcpy(strbuf, str, len);
++ strbuf[len] = 0;
++ qstr.name = strbuf;
++ /* prevent compiler from optimizing out the temporary buffer */
++ barrier();
++ }
++
+ return ext4_ci_compare(inode, name, &qstr, false);
+ }
+
+diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h
+index ad2dbf6e4924..51a85b50033a 100644
+--- a/fs/ext4/ext4.h
++++ b/fs/ext4/ext4.h
+@@ -3354,7 +3354,7 @@ struct ext4_extent;
+ */
+ #define EXT_MAX_BLOCKS 0xffffffff
+
+-extern int ext4_ext_tree_init(handle_t *handle, struct inode *);
++extern void ext4_ext_tree_init(handle_t *handle, struct inode *inode);
+ extern int ext4_ext_index_trans_blocks(struct inode *inode, int extents);
+ extern int ext4_ext_map_blocks(handle_t *handle, struct inode *inode,
+ struct ext4_map_blocks *map, int flags);
+diff --git a/fs/ext4/ext4_jbd2.h b/fs/ext4/ext4_jbd2.h
+index 4b9002f0e84c..3bacf76d2609 100644
+--- a/fs/ext4/ext4_jbd2.h
++++ b/fs/ext4/ext4_jbd2.h
+@@ -222,7 +222,10 @@ ext4_mark_iloc_dirty(handle_t *handle,
+ int ext4_reserve_inode_write(handle_t *handle, struct inode *inode,
+ struct ext4_iloc *iloc);
+
+-int ext4_mark_inode_dirty(handle_t *handle, struct inode *inode);
++#define ext4_mark_inode_dirty(__h, __i) \
++ __ext4_mark_inode_dirty((__h), (__i), __func__, __LINE__)
++int __ext4_mark_inode_dirty(handle_t *handle, struct inode *inode,
++ const char *func, unsigned int line);
+
+ int ext4_expand_extra_isize(struct inode *inode,
+ unsigned int new_extra_isize,
+diff --git a/fs/ext4/extents.c b/fs/ext4/extents.c
+index 2b4b94542e34..d5453072eb63 100644
+--- a/fs/ext4/extents.c
++++ b/fs/ext4/extents.c
+@@ -816,7 +816,7 @@ ext4_ext_binsearch(struct inode *inode,
+
+ }
+
+-int ext4_ext_tree_init(handle_t *handle, struct inode *inode)
++void ext4_ext_tree_init(handle_t *handle, struct inode *inode)
+ {
+ struct ext4_extent_header *eh;
+
+@@ -826,7 +826,6 @@ int ext4_ext_tree_init(handle_t *handle, struct inode *inode)
+ eh->eh_magic = EXT4_EXT_MAGIC;
+ eh->eh_max = cpu_to_le16(ext4_ext_space_root(inode, 0));
+ ext4_mark_inode_dirty(handle, inode);
+- return 0;
+ }
+
+ struct ext4_ext_path *
+@@ -1319,7 +1318,7 @@ static int ext4_ext_grow_indepth(handle_t *handle, struct inode *inode,
+ ext4_idx_pblock(EXT_FIRST_INDEX(neh)));
+
+ le16_add_cpu(&neh->eh_depth, 1);
+- ext4_mark_inode_dirty(handle, inode);
++ err = ext4_mark_inode_dirty(handle, inode);
+ out:
+ brelse(bh);
+
+@@ -2828,7 +2827,7 @@ again:
+ * in use to avoid freeing it when removing blocks.
+ */
+ if (sbi->s_cluster_ratio > 1) {
+- pblk = ext4_ext_pblock(ex) + end - ee_block + 2;
++ pblk = ext4_ext_pblock(ex) + end - ee_block + 1;
+ partial.pclu = EXT4_B2C(sbi, pblk);
+ partial.state = nofree;
+ }
+@@ -4363,7 +4362,7 @@ static int ext4_alloc_file_blocks(struct file *file, ext4_lblk_t offset,
+ struct inode *inode = file_inode(file);
+ handle_t *handle;
+ int ret = 0;
+- int ret2 = 0;
++ int ret2 = 0, ret3 = 0;
+ int retries = 0;
+ int depth = 0;
+ struct ext4_map_blocks map;
+@@ -4423,10 +4422,11 @@ retry:
+ if (ext4_update_inode_size(inode, epos) & 0x1)
+ inode->i_mtime = inode->i_ctime;
+ }
+- ext4_mark_inode_dirty(handle, inode);
++ ret2 = ext4_mark_inode_dirty(handle, inode);
+ ext4_update_inode_fsync_trans(handle, inode, 1);
+- ret2 = ext4_journal_stop(handle);
+- if (ret2)
++ ret3 = ext4_journal_stop(handle);
++ ret2 = ret3 ? ret3 : ret2;
++ if (unlikely(ret2))
+ break;
+ }
+ if (ret == -ENOSPC &&
+@@ -4577,7 +4577,9 @@ static long ext4_zero_range(struct file *file, loff_t offset,
+ inode->i_mtime = inode->i_ctime = current_time(inode);
+ if (new_size)
+ ext4_update_inode_size(inode, new_size);
+- ext4_mark_inode_dirty(handle, inode);
++ ret = ext4_mark_inode_dirty(handle, inode);
++ if (unlikely(ret))
++ goto out_handle;
+
+ /* Zero out partial block at the edges of the range */
+ ret = ext4_zero_partial_blocks(handle, inode, offset, len);
+@@ -4587,6 +4589,7 @@ static long ext4_zero_range(struct file *file, loff_t offset,
+ if (file->f_flags & O_SYNC)
+ ext4_handle_sync(handle);
+
++out_handle:
+ ext4_journal_stop(handle);
+ out_mutex:
+ inode_unlock(inode);
+@@ -4700,8 +4703,7 @@ int ext4_convert_unwritten_extents(handle_t *handle, struct inode *inode,
+ loff_t offset, ssize_t len)
+ {
+ unsigned int max_blocks;
+- int ret = 0;
+- int ret2 = 0;
++ int ret = 0, ret2 = 0, ret3 = 0;
+ struct ext4_map_blocks map;
+ unsigned int blkbits = inode->i_blkbits;
+ unsigned int credits = 0;
+@@ -4734,9 +4736,13 @@ int ext4_convert_unwritten_extents(handle_t *handle, struct inode *inode,
+ "ext4_ext_map_blocks returned %d",
+ inode->i_ino, map.m_lblk,
+ map.m_len, ret);
+- ext4_mark_inode_dirty(handle, inode);
+- if (credits)
+- ret2 = ext4_journal_stop(handle);
++ ret2 = ext4_mark_inode_dirty(handle, inode);
++ if (credits) {
++ ret3 = ext4_journal_stop(handle);
++ if (unlikely(ret3))
++ ret2 = ret3;
++ }
++
+ if (ret <= 0 || ret2)
+ break;
+ }
+@@ -5304,7 +5310,7 @@ static int ext4_collapse_range(struct inode *inode, loff_t offset, loff_t len)
+ if (IS_SYNC(inode))
+ ext4_handle_sync(handle);
+ inode->i_mtime = inode->i_ctime = current_time(inode);
+- ext4_mark_inode_dirty(handle, inode);
++ ret = ext4_mark_inode_dirty(handle, inode);
+ ext4_update_inode_fsync_trans(handle, inode, 1);
+
+ out_stop:
+diff --git a/fs/ext4/file.c b/fs/ext4/file.c
+index 0d624250a62b..2a01e31a032c 100644
+--- a/fs/ext4/file.c
++++ b/fs/ext4/file.c
+@@ -287,6 +287,7 @@ static ssize_t ext4_handle_inode_extension(struct inode *inode, loff_t offset,
+ bool truncate = false;
+ u8 blkbits = inode->i_blkbits;
+ ext4_lblk_t written_blk, end_blk;
++ int ret;
+
+ /*
+ * Note that EXT4_I(inode)->i_disksize can get extended up to
+@@ -327,8 +328,14 @@ static ssize_t ext4_handle_inode_extension(struct inode *inode, loff_t offset,
+ goto truncate;
+ }
+
+- if (ext4_update_inode_size(inode, offset + written))
+- ext4_mark_inode_dirty(handle, inode);
++ if (ext4_update_inode_size(inode, offset + written)) {
++ ret = ext4_mark_inode_dirty(handle, inode);
++ if (unlikely(ret)) {
++ written = ret;
++ ext4_journal_stop(handle);
++ goto truncate;
++ }
++ }
+
+ /*
+ * We may need to truncate allocated but not written blocks beyond EOF.
+@@ -495,6 +502,12 @@ static ssize_t ext4_dio_write_iter(struct kiocb *iocb, struct iov_iter *from)
+ if (ret <= 0)
+ return ret;
+
++ /* if we're going to block and IOCB_NOWAIT is set, return -EAGAIN */
++ if ((iocb->ki_flags & IOCB_NOWAIT) && (unaligned_io || extend)) {
++ ret = -EAGAIN;
++ goto out;
++ }
++
+ offset = iocb->ki_pos;
+ count = ret;
+
+diff --git a/fs/ext4/indirect.c b/fs/ext4/indirect.c
+index 107f0043f67f..be2b66eb65f7 100644
+--- a/fs/ext4/indirect.c
++++ b/fs/ext4/indirect.c
+@@ -467,7 +467,9 @@ static int ext4_splice_branch(handle_t *handle,
+ /*
+ * OK, we spliced it into the inode itself on a direct block.
+ */
+- ext4_mark_inode_dirty(handle, ar->inode);
++ err = ext4_mark_inode_dirty(handle, ar->inode);
++ if (unlikely(err))
++ goto err_out;
+ jbd_debug(5, "splicing direct\n");
+ }
+ return err;
+diff --git a/fs/ext4/inline.c b/fs/ext4/inline.c
+index f35e289e17aa..c3a1ad2db122 100644
+--- a/fs/ext4/inline.c
++++ b/fs/ext4/inline.c
+@@ -1260,7 +1260,7 @@ out:
+ int ext4_try_add_inline_entry(handle_t *handle, struct ext4_filename *fname,
+ struct inode *dir, struct inode *inode)
+ {
+- int ret, inline_size, no_expand;
++ int ret, ret2, inline_size, no_expand;
+ void *inline_start;
+ struct ext4_iloc iloc;
+
+@@ -1314,7 +1314,9 @@ int ext4_try_add_inline_entry(handle_t *handle, struct ext4_filename *fname,
+
+ out:
+ ext4_write_unlock_xattr(dir, &no_expand);
+- ext4_mark_inode_dirty(handle, dir);
++ ret2 = ext4_mark_inode_dirty(handle, dir);
++ if (unlikely(ret2 && !ret))
++ ret = ret2;
+ brelse(iloc.bh);
+ return ret;
+ }
+diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
+index 2a4aae6acdcb..87430d276bcc 100644
+--- a/fs/ext4/inode.c
++++ b/fs/ext4/inode.c
+@@ -1296,7 +1296,7 @@ static int ext4_write_end(struct file *file,
+ * filesystems.
+ */
+ if (i_size_changed || inline_data)
+- ext4_mark_inode_dirty(handle, inode);
++ ret = ext4_mark_inode_dirty(handle, inode);
+
+ if (pos + len > inode->i_size && !verity && ext4_can_truncate(inode))
+ /* if we have allocated more blocks and copied
+@@ -3077,7 +3077,7 @@ static int ext4_da_write_end(struct file *file,
+ * new_i_size is less that inode->i_size
+ * bu greater than i_disksize.(hint delalloc)
+ */
+- ext4_mark_inode_dirty(handle, inode);
++ ret = ext4_mark_inode_dirty(handle, inode);
+ }
+ }
+
+@@ -3094,7 +3094,7 @@ static int ext4_da_write_end(struct file *file,
+ if (ret2 < 0)
+ ret = ret2;
+ ret2 = ext4_journal_stop(handle);
+- if (!ret)
++ if (unlikely(ret2 && !ret))
+ ret = ret2;
+
+ return ret ? ret : copied;
+@@ -3886,6 +3886,8 @@ int ext4_update_disksize_before_punch(struct inode *inode, loff_t offset,
+ loff_t len)
+ {
+ handle_t *handle;
++ int ret;
++
+ loff_t size = i_size_read(inode);
+
+ WARN_ON(!inode_is_locked(inode));
+@@ -3899,10 +3901,10 @@ int ext4_update_disksize_before_punch(struct inode *inode, loff_t offset,
+ if (IS_ERR(handle))
+ return PTR_ERR(handle);
+ ext4_update_i_disksize(inode, size);
+- ext4_mark_inode_dirty(handle, inode);
++ ret = ext4_mark_inode_dirty(handle, inode);
+ ext4_journal_stop(handle);
+
+- return 0;
++ return ret;
+ }
+
+ static void ext4_wait_dax_page(struct ext4_inode_info *ei)
+@@ -3954,7 +3956,7 @@ int ext4_punch_hole(struct inode *inode, loff_t offset, loff_t length)
+ loff_t first_block_offset, last_block_offset;
+ handle_t *handle;
+ unsigned int credits;
+- int ret = 0;
++ int ret = 0, ret2 = 0;
+
+ trace_ext4_punch_hole(inode, offset, length, 0);
+
+@@ -4077,7 +4079,9 @@ int ext4_punch_hole(struct inode *inode, loff_t offset, loff_t length)
+ ext4_handle_sync(handle);
+
+ inode->i_mtime = inode->i_ctime = current_time(inode);
+- ext4_mark_inode_dirty(handle, inode);
++ ret2 = ext4_mark_inode_dirty(handle, inode);
++ if (unlikely(ret2))
++ ret = ret2;
+ if (ret >= 0)
+ ext4_update_inode_fsync_trans(handle, inode, 1);
+ out_stop:
+@@ -4146,7 +4150,7 @@ int ext4_truncate(struct inode *inode)
+ {
+ struct ext4_inode_info *ei = EXT4_I(inode);
+ unsigned int credits;
+- int err = 0;
++ int err = 0, err2;
+ handle_t *handle;
+ struct address_space *mapping = inode->i_mapping;
+
+@@ -4234,7 +4238,9 @@ out_stop:
+ ext4_orphan_del(handle, inode);
+
+ inode->i_mtime = inode->i_ctime = current_time(inode);
+- ext4_mark_inode_dirty(handle, inode);
++ err2 = ext4_mark_inode_dirty(handle, inode);
++ if (unlikely(err2 && !err))
++ err = err2;
+ ext4_journal_stop(handle);
+
+ trace_ext4_truncate_exit(inode);
+@@ -5292,6 +5298,8 @@ int ext4_setattr(struct dentry *dentry, struct iattr *attr)
+ inode->i_gid = attr->ia_gid;
+ error = ext4_mark_inode_dirty(handle, inode);
+ ext4_journal_stop(handle);
++ if (unlikely(error))
++ return error;
+ }
+
+ if (attr->ia_valid & ATTR_SIZE) {
+@@ -5777,7 +5785,8 @@ out_unlock:
+ * Whenever the user wants stuff synced (sys_sync, sys_msync, sys_fsync)
+ * we start and wait on commits.
+ */
+-int ext4_mark_inode_dirty(handle_t *handle, struct inode *inode)
++int __ext4_mark_inode_dirty(handle_t *handle, struct inode *inode,
++ const char *func, unsigned int line)
+ {
+ struct ext4_iloc iloc;
+ struct ext4_sb_info *sbi = EXT4_SB(inode->i_sb);
+@@ -5787,13 +5796,18 @@ int ext4_mark_inode_dirty(handle_t *handle, struct inode *inode)
+ trace_ext4_mark_inode_dirty(inode, _RET_IP_);
+ err = ext4_reserve_inode_write(handle, inode, &iloc);
+ if (err)
+- return err;
++ goto out;
+
+ if (EXT4_I(inode)->i_extra_isize < sbi->s_want_extra_isize)
+ ext4_try_to_expand_extra_isize(inode, sbi->s_want_extra_isize,
+ iloc, handle);
+
+- return ext4_mark_iloc_dirty(handle, inode, &iloc);
++ err = ext4_mark_iloc_dirty(handle, inode, &iloc);
++out:
++ if (unlikely(err))
++ ext4_error_inode_err(inode, func, line, 0, err,
++ "mark_inode_dirty error");
++ return err;
+ }
+
+ /*
+diff --git a/fs/ext4/migrate.c b/fs/ext4/migrate.c
+index fb6520f37135..c5e3fc998211 100644
+--- a/fs/ext4/migrate.c
++++ b/fs/ext4/migrate.c
+@@ -287,7 +287,7 @@ static int free_ind_block(handle_t *handle, struct inode *inode, __le32 *i_data)
+ static int ext4_ext_swap_inode_data(handle_t *handle, struct inode *inode,
+ struct inode *tmp_inode)
+ {
+- int retval;
++ int retval, retval2 = 0;
+ __le32 i_data[3];
+ struct ext4_inode_info *ei = EXT4_I(inode);
+ struct ext4_inode_info *tmp_ei = EXT4_I(tmp_inode);
+@@ -342,7 +342,9 @@ static int ext4_ext_swap_inode_data(handle_t *handle, struct inode *inode,
+ * i_blocks when freeing the indirect meta-data blocks
+ */
+ retval = free_ind_block(handle, inode, i_data);
+- ext4_mark_inode_dirty(handle, inode);
++ retval2 = ext4_mark_inode_dirty(handle, inode);
++ if (unlikely(retval2 && !retval))
++ retval = retval2;
+
+ err_out:
+ return retval;
+@@ -601,7 +603,7 @@ int ext4_ind_migrate(struct inode *inode)
+ ext4_lblk_t start, end;
+ ext4_fsblk_t blk;
+ handle_t *handle;
+- int ret;
++ int ret, ret2 = 0;
+
+ if (!ext4_has_feature_extents(inode->i_sb) ||
+ (!ext4_test_inode_flag(inode, EXT4_INODE_EXTENTS)))
+@@ -655,7 +657,9 @@ int ext4_ind_migrate(struct inode *inode)
+ memset(ei->i_data, 0, sizeof(ei->i_data));
+ for (i = start; i <= end; i++)
+ ei->i_data[i] = cpu_to_le32(blk++);
+- ext4_mark_inode_dirty(handle, inode);
++ ret2 = ext4_mark_inode_dirty(handle, inode);
++ if (unlikely(ret2 && !ret))
++ ret = ret2;
+ errout:
+ ext4_journal_stop(handle);
+ up_write(&EXT4_I(inode)->i_data_sem);
+diff --git a/fs/ext4/namei.c b/fs/ext4/namei.c
+index a8aca4772aaa..56738b538ddf 100644
+--- a/fs/ext4/namei.c
++++ b/fs/ext4/namei.c
+@@ -1993,7 +1993,7 @@ static int add_dirent_to_buf(handle_t *handle, struct ext4_filename *fname,
+ {
+ unsigned int blocksize = dir->i_sb->s_blocksize;
+ int csum_size = 0;
+- int err;
++ int err, err2;
+
+ if (ext4_has_metadata_csum(inode->i_sb))
+ csum_size = sizeof(struct ext4_dir_entry_tail);
+@@ -2028,12 +2028,12 @@ static int add_dirent_to_buf(handle_t *handle, struct ext4_filename *fname,
+ dir->i_mtime = dir->i_ctime = current_time(dir);
+ ext4_update_dx_flag(dir);
+ inode_inc_iversion(dir);
+- ext4_mark_inode_dirty(handle, dir);
++ err2 = ext4_mark_inode_dirty(handle, dir);
+ BUFFER_TRACE(bh, "call ext4_handle_dirty_metadata");
+ err = ext4_handle_dirty_dirblock(handle, dir, bh);
+ if (err)
+ ext4_std_error(dir->i_sb, err);
+- return 0;
++ return err ? err : err2;
+ }
+
+ /*
+@@ -2223,7 +2223,9 @@ static int ext4_add_entry(handle_t *handle, struct dentry *dentry,
+ }
+ ext4_clear_inode_flag(dir, EXT4_INODE_INDEX);
+ dx_fallback++;
+- ext4_mark_inode_dirty(handle, dir);
++ retval = ext4_mark_inode_dirty(handle, dir);
++ if (unlikely(retval))
++ goto out;
+ }
+ blocks = dir->i_size >> sb->s_blocksize_bits;
+ for (block = 0; block < blocks; block++) {
+@@ -2576,12 +2578,12 @@ static int ext4_add_nondir(handle_t *handle,
+ struct inode *inode = *inodep;
+ int err = ext4_add_entry(handle, dentry, inode);
+ if (!err) {
+- ext4_mark_inode_dirty(handle, inode);
++ err = ext4_mark_inode_dirty(handle, inode);
+ if (IS_DIRSYNC(dir))
+ ext4_handle_sync(handle);
+ d_instantiate_new(dentry, inode);
+ *inodep = NULL;
+- return 0;
++ return err;
+ }
+ drop_nlink(inode);
+ ext4_orphan_add(handle, inode);
+@@ -2775,7 +2777,7 @@ static int ext4_mkdir(struct inode *dir, struct dentry *dentry, umode_t mode)
+ {
+ handle_t *handle;
+ struct inode *inode;
+- int err, credits, retries = 0;
++ int err, err2 = 0, credits, retries = 0;
+
+ if (EXT4_DIR_LINK_MAX(dir))
+ return -EMLINK;
+@@ -2808,7 +2810,9 @@ out_clear_inode:
+ clear_nlink(inode);
+ ext4_orphan_add(handle, inode);
+ unlock_new_inode(inode);
+- ext4_mark_inode_dirty(handle, inode);
++ err2 = ext4_mark_inode_dirty(handle, inode);
++ if (unlikely(err2))
++ err = err2;
+ ext4_journal_stop(handle);
+ iput(inode);
+ goto out_retry;
+@@ -3148,10 +3152,12 @@ static int ext4_rmdir(struct inode *dir, struct dentry *dentry)
+ inode->i_size = 0;
+ ext4_orphan_add(handle, inode);
+ inode->i_ctime = dir->i_ctime = dir->i_mtime = current_time(inode);
+- ext4_mark_inode_dirty(handle, inode);
++ retval = ext4_mark_inode_dirty(handle, inode);
++ if (retval)
++ goto end_rmdir;
+ ext4_dec_count(handle, dir);
+ ext4_update_dx_flag(dir);
+- ext4_mark_inode_dirty(handle, dir);
++ retval = ext4_mark_inode_dirty(handle, dir);
+
+ #ifdef CONFIG_UNICODE
+ /* VFS negative dentries are incompatible with Encoding and
+@@ -3221,7 +3227,9 @@ static int ext4_unlink(struct inode *dir, struct dentry *dentry)
+ goto end_unlink;
+ dir->i_ctime = dir->i_mtime = current_time(dir);
+ ext4_update_dx_flag(dir);
+- ext4_mark_inode_dirty(handle, dir);
++ retval = ext4_mark_inode_dirty(handle, dir);
++ if (retval)
++ goto end_unlink;
+ if (inode->i_nlink == 0)
+ ext4_warning_inode(inode, "Deleting file '%.*s' with no links",
+ dentry->d_name.len, dentry->d_name.name);
+@@ -3230,7 +3238,7 @@ static int ext4_unlink(struct inode *dir, struct dentry *dentry)
+ if (!inode->i_nlink)
+ ext4_orphan_add(handle, inode);
+ inode->i_ctime = current_time(inode);
+- ext4_mark_inode_dirty(handle, inode);
++ retval = ext4_mark_inode_dirty(handle, inode);
+
+ #ifdef CONFIG_UNICODE
+ /* VFS negative dentries are incompatible with Encoding and
+@@ -3419,7 +3427,7 @@ retry:
+
+ err = ext4_add_entry(handle, dentry, inode);
+ if (!err) {
+- ext4_mark_inode_dirty(handle, inode);
++ err = ext4_mark_inode_dirty(handle, inode);
+ /* this can happen only for tmpfile being
+ * linked the first time
+ */
+@@ -3531,7 +3539,7 @@ static int ext4_rename_dir_finish(handle_t *handle, struct ext4_renament *ent,
+ static int ext4_setent(handle_t *handle, struct ext4_renament *ent,
+ unsigned ino, unsigned file_type)
+ {
+- int retval;
++ int retval, retval2;
+
+ BUFFER_TRACE(ent->bh, "get write access");
+ retval = ext4_journal_get_write_access(handle, ent->bh);
+@@ -3543,19 +3551,19 @@ static int ext4_setent(handle_t *handle, struct ext4_renament *ent,
+ inode_inc_iversion(ent->dir);
+ ent->dir->i_ctime = ent->dir->i_mtime =
+ current_time(ent->dir);
+- ext4_mark_inode_dirty(handle, ent->dir);
++ retval = ext4_mark_inode_dirty(handle, ent->dir);
+ BUFFER_TRACE(ent->bh, "call ext4_handle_dirty_metadata");
+ if (!ent->inlined) {
+- retval = ext4_handle_dirty_dirblock(handle, ent->dir, ent->bh);
+- if (unlikely(retval)) {
+- ext4_std_error(ent->dir->i_sb, retval);
+- return retval;
++ retval2 = ext4_handle_dirty_dirblock(handle, ent->dir, ent->bh);
++ if (unlikely(retval2)) {
++ ext4_std_error(ent->dir->i_sb, retval2);
++ return retval2;
+ }
+ }
+ brelse(ent->bh);
+ ent->bh = NULL;
+
+- return 0;
++ return retval;
+ }
+
+ static int ext4_find_delete_entry(handle_t *handle, struct inode *dir,
+@@ -3790,7 +3798,9 @@ static int ext4_rename(struct inode *old_dir, struct dentry *old_dentry,
+ EXT4_FT_CHRDEV);
+ if (retval)
+ goto end_rename;
+- ext4_mark_inode_dirty(handle, whiteout);
++ retval = ext4_mark_inode_dirty(handle, whiteout);
++ if (unlikely(retval))
++ goto end_rename;
+ }
+ if (!new.bh) {
+ retval = ext4_add_entry(handle, new.dentry, old.inode);
+@@ -3811,7 +3821,9 @@ static int ext4_rename(struct inode *old_dir, struct dentry *old_dentry,
+ * rename.
+ */
+ old.inode->i_ctime = current_time(old.inode);
+- ext4_mark_inode_dirty(handle, old.inode);
++ retval = ext4_mark_inode_dirty(handle, old.inode);
++ if (unlikely(retval))
++ goto end_rename;
+
+ if (!whiteout) {
+ /*
+@@ -3840,12 +3852,18 @@ static int ext4_rename(struct inode *old_dir, struct dentry *old_dentry,
+ } else {
+ ext4_inc_count(handle, new.dir);
+ ext4_update_dx_flag(new.dir);
+- ext4_mark_inode_dirty(handle, new.dir);
++ retval = ext4_mark_inode_dirty(handle, new.dir);
++ if (unlikely(retval))
++ goto end_rename;
+ }
+ }
+- ext4_mark_inode_dirty(handle, old.dir);
++ retval = ext4_mark_inode_dirty(handle, old.dir);
++ if (unlikely(retval))
++ goto end_rename;
+ if (new.inode) {
+- ext4_mark_inode_dirty(handle, new.inode);
++ retval = ext4_mark_inode_dirty(handle, new.inode);
++ if (unlikely(retval))
++ goto end_rename;
+ if (!new.inode->i_nlink)
+ ext4_orphan_add(handle, new.inode);
+ }
+@@ -3979,8 +3997,12 @@ static int ext4_cross_rename(struct inode *old_dir, struct dentry *old_dentry,
+ ctime = current_time(old.inode);
+ old.inode->i_ctime = ctime;
+ new.inode->i_ctime = ctime;
+- ext4_mark_inode_dirty(handle, old.inode);
+- ext4_mark_inode_dirty(handle, new.inode);
++ retval = ext4_mark_inode_dirty(handle, old.inode);
++ if (unlikely(retval))
++ goto end_rename;
++ retval = ext4_mark_inode_dirty(handle, new.inode);
++ if (unlikely(retval))
++ goto end_rename;
+
+ if (old.dir_bh) {
+ retval = ext4_rename_dir_finish(handle, &old, new.dir->i_ino);
+diff --git a/fs/ext4/super.c b/fs/ext4/super.c
+index bf5fcb477f66..7318ca71b69e 100644
+--- a/fs/ext4/super.c
++++ b/fs/ext4/super.c
+@@ -522,9 +522,6 @@ static void ext4_handle_error(struct super_block *sb)
+ smp_wmb();
+ sb->s_flags |= SB_RDONLY;
+ } else if (test_opt(sb, ERRORS_PANIC)) {
+- if (EXT4_SB(sb)->s_journal &&
+- !(EXT4_SB(sb)->s_journal->j_flags & JBD2_REC_ERR))
+- return;
+ panic("EXT4-fs (device %s): panic forced after error\n",
+ sb->s_id);
+ }
+@@ -725,23 +722,20 @@ void __ext4_abort(struct super_block *sb, const char *function,
+ va_end(args);
+
+ if (sb_rdonly(sb) == 0) {
+- ext4_msg(sb, KERN_CRIT, "Remounting filesystem read-only");
+ EXT4_SB(sb)->s_mount_flags |= EXT4_MF_FS_ABORTED;
++ if (EXT4_SB(sb)->s_journal)
++ jbd2_journal_abort(EXT4_SB(sb)->s_journal, -EIO);
++
++ ext4_msg(sb, KERN_CRIT, "Remounting filesystem read-only");
+ /*
+ * Make sure updated value of ->s_mount_flags will be visible
+ * before ->s_flags update
+ */
+ smp_wmb();
+ sb->s_flags |= SB_RDONLY;
+- if (EXT4_SB(sb)->s_journal)
+- jbd2_journal_abort(EXT4_SB(sb)->s_journal, -EIO);
+ }
+- if (test_opt(sb, ERRORS_PANIC) && !system_going_down()) {
+- if (EXT4_SB(sb)->s_journal &&
+- !(EXT4_SB(sb)->s_journal->j_flags & JBD2_REC_ERR))
+- return;
++ if (test_opt(sb, ERRORS_PANIC) && !system_going_down())
+ panic("EXT4-fs panic from previous error\n");
+- }
+ }
+
+ void __ext4_msg(struct super_block *sb,
+@@ -2086,6 +2080,16 @@ static int handle_mount_opt(struct super_block *sb, char *opt, int token,
+ #endif
+ } else if (token == Opt_dax) {
+ #ifdef CONFIG_FS_DAX
++ if (is_remount && test_opt(sb, DAX)) {
++ ext4_msg(sb, KERN_ERR, "can't mount with "
++ "both data=journal and dax");
++ return -1;
++ }
++ if (is_remount && !(sbi->s_mount_opt & EXT4_MOUNT_DAX)) {
++ ext4_msg(sb, KERN_ERR, "can't change "
++ "dax mount option while remounting");
++ return -1;
++ }
+ ext4_msg(sb, KERN_WARNING,
+ "DAX enabled. Warning: EXPERIMENTAL, use at your own risk");
+ sbi->s_mount_opt |= m->mount_opt;
+@@ -2344,6 +2348,7 @@ static int ext4_setup_super(struct super_block *sb, struct ext4_super_block *es,
+ ext4_msg(sb, KERN_ERR, "revision level too high, "
+ "forcing read-only mode");
+ err = -EROFS;
++ goto done;
+ }
+ if (read_only)
+ goto done;
+@@ -5412,12 +5417,6 @@ static int ext4_remount(struct super_block *sb, int *flags, char *data)
+ err = -EINVAL;
+ goto restore_opts;
+ }
+- if (test_opt(sb, DAX)) {
+- ext4_msg(sb, KERN_ERR, "can't mount with "
+- "both data=journal and dax");
+- err = -EINVAL;
+- goto restore_opts;
+- }
+ } else if (test_opt(sb, DATA_FLAGS) == EXT4_MOUNT_ORDERED_DATA) {
+ if (test_opt(sb, JOURNAL_ASYNC_COMMIT)) {
+ ext4_msg(sb, KERN_ERR, "can't mount with "
+@@ -5433,12 +5432,6 @@ static int ext4_remount(struct super_block *sb, int *flags, char *data)
+ goto restore_opts;
+ }
+
+- if ((sbi->s_mount_opt ^ old_opts.s_mount_opt) & EXT4_MOUNT_DAX) {
+- ext4_msg(sb, KERN_WARNING, "warning: refusing change of "
+- "dax flag with busy inodes while remounting");
+- sbi->s_mount_opt ^= EXT4_MOUNT_DAX;
+- }
+-
+ if (sbi->s_mount_flags & EXT4_MF_FS_ABORTED)
+ ext4_abort(sb, EXT4_ERR_ESHUTDOWN, "Abort forced by user");
+
+@@ -5885,7 +5878,7 @@ static int ext4_quota_on(struct super_block *sb, int type, int format_id,
+ EXT4_I(inode)->i_flags |= EXT4_NOATIME_FL | EXT4_IMMUTABLE_FL;
+ inode_set_flags(inode, S_NOATIME | S_IMMUTABLE,
+ S_NOATIME | S_IMMUTABLE);
+- ext4_mark_inode_dirty(handle, inode);
++ err = ext4_mark_inode_dirty(handle, inode);
+ ext4_journal_stop(handle);
+ unlock_inode:
+ inode_unlock(inode);
+@@ -5987,12 +5980,14 @@ static int ext4_quota_off(struct super_block *sb, int type)
+ * this is not a hard failure and quotas are already disabled.
+ */
+ handle = ext4_journal_start(inode, EXT4_HT_QUOTA, 1);
+- if (IS_ERR(handle))
++ if (IS_ERR(handle)) {
++ err = PTR_ERR(handle);
+ goto out_unlock;
++ }
+ EXT4_I(inode)->i_flags &= ~(EXT4_NOATIME_FL | EXT4_IMMUTABLE_FL);
+ inode_set_flags(inode, 0, S_NOATIME | S_IMMUTABLE);
+ inode->i_mtime = inode->i_ctime = current_time(inode);
+- ext4_mark_inode_dirty(handle, inode);
++ err = ext4_mark_inode_dirty(handle, inode);
+ ext4_journal_stop(handle);
+ out_unlock:
+ inode_unlock(inode);
+@@ -6050,7 +6045,7 @@ static ssize_t ext4_quota_write(struct super_block *sb, int type,
+ {
+ struct inode *inode = sb_dqopt(sb)->files[type];
+ ext4_lblk_t blk = off >> EXT4_BLOCK_SIZE_BITS(sb);
+- int err, offset = off & (sb->s_blocksize - 1);
++ int err = 0, err2 = 0, offset = off & (sb->s_blocksize - 1);
+ int retries = 0;
+ struct buffer_head *bh;
+ handle_t *handle = journal_current_handle();
+@@ -6098,9 +6093,11 @@ out:
+ if (inode->i_size < off + len) {
+ i_size_write(inode, off + len);
+ EXT4_I(inode)->i_disksize = inode->i_size;
+- ext4_mark_inode_dirty(handle, inode);
++ err2 = ext4_mark_inode_dirty(handle, inode);
++ if (unlikely(err2 && !err))
++ err = err2;
+ }
+- return len;
++ return err ? err : len;
+ }
+ #endif
+
+diff --git a/fs/ext4/xattr.c b/fs/ext4/xattr.c
+index 01ba66373e97..9b29a40738ac 100644
+--- a/fs/ext4/xattr.c
++++ b/fs/ext4/xattr.c
+@@ -1327,7 +1327,7 @@ static int ext4_xattr_inode_write(handle_t *handle, struct inode *ea_inode,
+ int blocksize = ea_inode->i_sb->s_blocksize;
+ int max_blocks = (bufsize + blocksize - 1) >> ea_inode->i_blkbits;
+ int csize, wsize = 0;
+- int ret = 0;
++ int ret = 0, ret2 = 0;
+ int retries = 0;
+
+ retry:
+@@ -1385,7 +1385,9 @@ retry:
+ ext4_update_i_disksize(ea_inode, wsize);
+ inode_unlock(ea_inode);
+
+- ext4_mark_inode_dirty(handle, ea_inode);
++ ret2 = ext4_mark_inode_dirty(handle, ea_inode);
++ if (unlikely(ret2 && !ret))
++ ret = ret2;
+
+ out:
+ brelse(bh);
+diff --git a/fs/f2fs/checkpoint.c b/fs/f2fs/checkpoint.c
+index 852890b72d6a..448b3dc6f925 100644
+--- a/fs/f2fs/checkpoint.c
++++ b/fs/f2fs/checkpoint.c
+@@ -889,8 +889,8 @@ int f2fs_get_valid_checkpoint(struct f2fs_sb_info *sbi)
+ int i;
+ int err;
+
+- sbi->ckpt = f2fs_kzalloc(sbi, array_size(blk_size, cp_blks),
+- GFP_KERNEL);
++ sbi->ckpt = f2fs_kvzalloc(sbi, array_size(blk_size, cp_blks),
++ GFP_KERNEL);
+ if (!sbi->ckpt)
+ return -ENOMEM;
+ /*
+diff --git a/fs/f2fs/compress.c b/fs/f2fs/compress.c
+index df7b2d15eacd..a5b2e72174bb 100644
+--- a/fs/f2fs/compress.c
++++ b/fs/f2fs/compress.c
+@@ -236,7 +236,12 @@ static int lz4_init_compress_ctx(struct compress_ctx *cc)
+ if (!cc->private)
+ return -ENOMEM;
+
+- cc->clen = LZ4_compressBound(PAGE_SIZE << cc->log_cluster_size);
++ /*
++	 * we do not set cc->clen to LZ4_compressBound(inputsize) to
++	 * handle the worst-case compression, because the lz4 compressor
++	 * can manage the output budget properly.
++ */
++ cc->clen = cc->rlen - PAGE_SIZE - COMPRESS_HEADER_SIZE;
+ return 0;
+ }
+
+@@ -252,11 +257,9 @@ static int lz4_compress_pages(struct compress_ctx *cc)
+
+ len = LZ4_compress_default(cc->rbuf, cc->cbuf->cdata, cc->rlen,
+ cc->clen, cc->private);
+- if (!len) {
+- printk_ratelimited("%sF2FS-fs (%s): lz4 compress failed\n",
+- KERN_ERR, F2FS_I_SB(cc->inode)->sb->s_id);
+- return -EIO;
+- }
++ if (!len)
++ return -EAGAIN;
++
+ cc->clen = len;
+ return 0;
+ }
+@@ -366,6 +369,13 @@ static int zstd_compress_pages(struct compress_ctx *cc)
+ return -EIO;
+ }
+
++ /*
++	 * compressed data remains in the intermediate buffer because
++	 * there is no more space in cbuf.cdata
++ */
++ if (ret)
++ return -EAGAIN;
++
+ cc->clen = outbuf.pos;
+ return 0;
+ }
+diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
+index cdf2f626bea7..10491ae1cb85 100644
+--- a/fs/f2fs/data.c
++++ b/fs/f2fs/data.c
+@@ -2130,16 +2130,16 @@ submit_and_realloc:
+ page->index, for_write);
+ if (IS_ERR(bio)) {
+ ret = PTR_ERR(bio);
+- bio = NULL;
+ dic->failed = true;
+ if (refcount_sub_and_test(dic->nr_cpages - i,
+- &dic->ref))
++ &dic->ref)) {
+ f2fs_decompress_end_io(dic->rpages,
+ cc->cluster_size, true,
+ false);
+- f2fs_free_dic(dic);
++ f2fs_free_dic(dic);
++ }
+ f2fs_put_dnode(&dn);
+- *bio_ret = bio;
++ *bio_ret = NULL;
+ return ret;
+ }
+ }
+diff --git a/fs/f2fs/dir.c b/fs/f2fs/dir.c
+index 44bfc464df78..54e90dbb09e7 100644
+--- a/fs/f2fs/dir.c
++++ b/fs/f2fs/dir.c
+@@ -107,36 +107,28 @@ static struct f2fs_dir_entry *find_in_block(struct inode *dir,
+ /*
+ * Test whether a case-insensitive directory entry matches the filename
+ * being searched for.
+- *
+- * Returns: 0 if the directory entry matches, more than 0 if it
+- * doesn't match or less than zero on error.
+ */
+-int f2fs_ci_compare(const struct inode *parent, const struct qstr *name,
+- const struct qstr *entry, bool quick)
++static bool f2fs_match_ci_name(const struct inode *dir, const struct qstr *name,
++ const struct qstr *entry, bool quick)
+ {
+- const struct f2fs_sb_info *sbi = F2FS_SB(parent->i_sb);
++ const struct f2fs_sb_info *sbi = F2FS_SB(dir->i_sb);
+ const struct unicode_map *um = sbi->s_encoding;
+- int ret;
++ int res;
+
+ if (quick)
+- ret = utf8_strncasecmp_folded(um, name, entry);
++ res = utf8_strncasecmp_folded(um, name, entry);
+ else
+- ret = utf8_strncasecmp(um, name, entry);
+-
+- if (ret < 0) {
+- /* Handle invalid character sequence as either an error
+- * or as an opaque byte sequence.
++ res = utf8_strncasecmp(um, name, entry);
++ if (res < 0) {
++ /*
++ * In strict mode, ignore invalid names. In non-strict mode,
++ * fall back to treating them as opaque byte sequences.
+ */
+- if (f2fs_has_strict_mode(sbi))
+- return -EINVAL;
+-
+- if (name->len != entry->len)
+- return 1;
+-
+- return !!memcmp(name->name, entry->name, name->len);
++ if (f2fs_has_strict_mode(sbi) || name->len != entry->len)
++ return false;
++ return !memcmp(name->name, entry->name, name->len);
+ }
+-
+- return ret;
++ return res == 0;
+ }
+
+ static void f2fs_fname_setup_ci_filename(struct inode *dir,
+@@ -188,10 +180,10 @@ static inline bool f2fs_match_name(struct f2fs_dentry_ptr *d,
+ if (cf_str->name) {
+ struct qstr cf = {.name = cf_str->name,
+ .len = cf_str->len};
+- return !f2fs_ci_compare(parent, &cf, &entry, true);
++ return f2fs_match_ci_name(parent, &cf, &entry, true);
+ }
+- return !f2fs_ci_compare(parent, fname->usr_fname, &entry,
+- false);
++ return f2fs_match_ci_name(parent, fname->usr_fname, &entry,
++ false);
+ }
+ #endif
+ if (fscrypt_match_name(fname, d->filename[bit_pos],
+@@ -1080,17 +1072,41 @@ const struct file_operations f2fs_dir_operations = {
+ static int f2fs_d_compare(const struct dentry *dentry, unsigned int len,
+ const char *str, const struct qstr *name)
+ {
+- struct qstr qstr = {.name = str, .len = len };
+ const struct dentry *parent = READ_ONCE(dentry->d_parent);
+- const struct inode *inode = READ_ONCE(parent->d_inode);
++ const struct inode *dir = READ_ONCE(parent->d_inode);
++ const struct f2fs_sb_info *sbi = F2FS_SB(dentry->d_sb);
++ struct qstr entry = QSTR_INIT(str, len);
++ char strbuf[DNAME_INLINE_LEN];
++ int res;
++
++ if (!dir || !IS_CASEFOLDED(dir))
++ goto fallback;
+
+- if (!inode || !IS_CASEFOLDED(inode)) {
+- if (len != name->len)
+- return -1;
+- return memcmp(str, name->name, len);
++ /*
++ * If the dentry name is stored in-line, then it may be concurrently
++ * modified by a rename. If this happens, the VFS will eventually retry
++ * the lookup, so it doesn't matter what ->d_compare() returns.
++ * However, it's unsafe to call utf8_strncasecmp() with an unstable
++ * string. Therefore, we have to copy the name into a temporary buffer.
++ */
++ if (len <= DNAME_INLINE_LEN - 1) {
++ memcpy(strbuf, str, len);
++ strbuf[len] = 0;
++ entry.name = strbuf;
++ /* prevent compiler from optimizing out the temporary buffer */
++ barrier();
+ }
+
+- return f2fs_ci_compare(inode, name, &qstr, false);
++ res = utf8_strncasecmp(sbi->s_encoding, name, &entry);
++ if (res >= 0)
++ return res;
++
++ if (f2fs_has_strict_mode(sbi))
++ return -EINVAL;
++fallback:
++ if (len != name->len)
++ return 1;
++ return !!memcmp(str, name->name, len);
+ }
+
+ static int f2fs_d_hash(const struct dentry *dentry, struct qstr *str)
+diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
+index 7c5dd7f666a0..5a0f95dfbac2 100644
+--- a/fs/f2fs/f2fs.h
++++ b/fs/f2fs/f2fs.h
+@@ -2936,18 +2936,12 @@ static inline bool f2fs_may_extent_tree(struct inode *inode)
+ static inline void *f2fs_kmalloc(struct f2fs_sb_info *sbi,
+ size_t size, gfp_t flags)
+ {
+- void *ret;
+-
+ if (time_to_inject(sbi, FAULT_KMALLOC)) {
+ f2fs_show_injection_info(sbi, FAULT_KMALLOC);
+ return NULL;
+ }
+
+- ret = kmalloc(size, flags);
+- if (ret)
+- return ret;
+-
+- return kvmalloc(size, flags);
++ return kmalloc(size, flags);
+ }
+
+ static inline void *f2fs_kzalloc(struct f2fs_sb_info *sbi,
+@@ -3107,11 +3101,6 @@ int f2fs_update_extension_list(struct f2fs_sb_info *sbi, const char *name,
+ bool hot, bool set);
+ struct dentry *f2fs_get_parent(struct dentry *child);
+
+-extern int f2fs_ci_compare(const struct inode *parent,
+- const struct qstr *name,
+- const struct qstr *entry,
+- bool quick);
+-
+ /*
+ * dir.c
+ */
+@@ -3656,7 +3645,7 @@ static inline int f2fs_build_stats(struct f2fs_sb_info *sbi) { return 0; }
+ static inline void f2fs_destroy_stats(struct f2fs_sb_info *sbi) { }
+ static inline void __init f2fs_create_root_stats(void) { }
+ static inline void f2fs_destroy_root_stats(void) { }
+-static inline void update_sit_info(struct f2fs_sb_info *sbi) {}
++static inline void f2fs_update_sit_info(struct f2fs_sb_info *sbi) {}
+ #endif
+
+ extern const struct file_operations f2fs_dir_operations;
+diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c
+index 6ab8f621a3c5..30b35915fa3a 100644
+--- a/fs/f2fs/file.c
++++ b/fs/f2fs/file.c
+@@ -2219,8 +2219,15 @@ static int f2fs_ioc_shutdown(struct file *filp, unsigned long arg)
+
+ if (in != F2FS_GOING_DOWN_FULLSYNC) {
+ ret = mnt_want_write_file(filp);
+- if (ret)
++ if (ret) {
++ if (ret == -EROFS) {
++ ret = 0;
++ f2fs_stop_checkpoint(sbi, false);
++ set_sbi_flag(sbi, SBI_IS_SHUTDOWN);
++ trace_f2fs_shutdown(sbi, in, ret);
++ }
+ return ret;
++ }
+ }
+
+ switch (in) {
+diff --git a/fs/f2fs/node.c b/fs/f2fs/node.c
+index ecbd6bd14a49..daf531e69b67 100644
+--- a/fs/f2fs/node.c
++++ b/fs/f2fs/node.c
+@@ -2928,7 +2928,7 @@ static int __get_nat_bitmaps(struct f2fs_sb_info *sbi)
+ return 0;
+
+ nm_i->nat_bits_blocks = F2FS_BLK_ALIGN((nat_bits_bytes << 1) + 8);
+- nm_i->nat_bits = f2fs_kzalloc(sbi,
++ nm_i->nat_bits = f2fs_kvzalloc(sbi,
+ nm_i->nat_bits_blocks << F2FS_BLKSIZE_BITS, GFP_KERNEL);
+ if (!nm_i->nat_bits)
+ return -ENOMEM;
+@@ -3061,9 +3061,9 @@ static int init_free_nid_cache(struct f2fs_sb_info *sbi)
+ int i;
+
+ nm_i->free_nid_bitmap =
+- f2fs_kzalloc(sbi, array_size(sizeof(unsigned char *),
+- nm_i->nat_blocks),
+- GFP_KERNEL);
++ f2fs_kvzalloc(sbi, array_size(sizeof(unsigned char *),
++ nm_i->nat_blocks),
++ GFP_KERNEL);
+ if (!nm_i->free_nid_bitmap)
+ return -ENOMEM;
+
+diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
+index 56ccb8323e21..4696c9cb47a5 100644
+--- a/fs/f2fs/super.c
++++ b/fs/f2fs/super.c
+@@ -1303,7 +1303,8 @@ static int f2fs_statfs_project(struct super_block *sb,
+ limit >>= sb->s_blocksize_bits;
+
+ if (limit && buf->f_blocks > limit) {
+- curblock = dquot->dq_dqb.dqb_curspace >> sb->s_blocksize_bits;
++ curblock = (dquot->dq_dqb.dqb_curspace +
++ dquot->dq_dqb.dqb_rsvspace) >> sb->s_blocksize_bits;
+ buf->f_blocks = limit;
+ buf->f_bfree = buf->f_bavail =
+ (buf->f_blocks > curblock) ?
+@@ -3038,7 +3039,7 @@ static int init_blkz_info(struct f2fs_sb_info *sbi, int devi)
+ if (nr_sectors & (bdev_zone_sectors(bdev) - 1))
+ FDEV(devi).nr_blkz++;
+
+- FDEV(devi).blkz_seq = f2fs_kzalloc(sbi,
++ FDEV(devi).blkz_seq = f2fs_kvzalloc(sbi,
+ BITS_TO_LONGS(FDEV(devi).nr_blkz)
+ * sizeof(unsigned long),
+ GFP_KERNEL);
+diff --git a/fs/fuse/dev.c b/fs/fuse/dev.c
+index 97eec7522bf2..5c155437a455 100644
+--- a/fs/fuse/dev.c
++++ b/fs/fuse/dev.c
+@@ -1977,8 +1977,9 @@ static ssize_t fuse_dev_splice_write(struct pipe_inode_info *pipe,
+ struct pipe_buffer *ibuf;
+ struct pipe_buffer *obuf;
+
+- BUG_ON(nbuf >= pipe->ring_size);
+- BUG_ON(tail == head);
++ if (WARN_ON(nbuf >= count || tail == head))
++ goto out_free;
++
+ ibuf = &pipe->bufs[tail & mask];
+ obuf = &bufs[nbuf];
+
+diff --git a/fs/fuse/file.c b/fs/fuse/file.c
+index 9d67b830fb7a..e3afceecaa6b 100644
+--- a/fs/fuse/file.c
++++ b/fs/fuse/file.c
+@@ -712,6 +712,7 @@ static ssize_t fuse_async_req_send(struct fuse_conn *fc,
+ spin_unlock(&io->lock);
+
+ ia->ap.args.end = fuse_aio_complete_req;
++ ia->ap.args.may_block = io->should_dirty;
+ err = fuse_simple_background(fc, &ia->ap.args, GFP_KERNEL);
+ if (err)
+ fuse_aio_complete_req(fc, &ia->ap.args, err);
+@@ -3279,13 +3280,11 @@ static ssize_t __fuse_copy_file_range(struct file *file_in, loff_t pos_in,
+ if (file_inode(file_in)->i_sb != file_inode(file_out)->i_sb)
+ return -EXDEV;
+
+- if (fc->writeback_cache) {
+- inode_lock(inode_in);
+- err = fuse_writeback_range(inode_in, pos_in, pos_in + len);
+- inode_unlock(inode_in);
+- if (err)
+- return err;
+- }
++ inode_lock(inode_in);
++ err = fuse_writeback_range(inode_in, pos_in, pos_in + len - 1);
++ inode_unlock(inode_in);
++ if (err)
++ return err;
+
+ inode_lock(inode_out);
+
+@@ -3293,11 +3292,27 @@ static ssize_t __fuse_copy_file_range(struct file *file_in, loff_t pos_in,
+ if (err)
+ goto out;
+
+- if (fc->writeback_cache) {
+- err = fuse_writeback_range(inode_out, pos_out, pos_out + len);
+- if (err)
+- goto out;
+- }
++ /*
++ * Write out dirty pages in the destination file before sending the COPY
++ * request to userspace. After the request is completed, truncate off
++ * pages (including partial ones) from the cache that have been copied,
++ * since these contain stale data at that point.
++ *
++ * This should be mostly correct, but if the COPY writes to partial
++ * pages (at the start or end) and the parts not covered by the COPY are
++ * written through a memory map after calling fuse_writeback_range(),
++ * then these partial page modifications will be lost on truncation.
++ *
++ * It is unlikely that someone would rely on such mixed style
++	 * modifications. Yet this does give fewer guarantees than if the
++	 * copying were performed with write(2).
++ *
++	 * To fix this, an i_mmap_sem style lock could be used to prevent new
++ * faults while the copy is ongoing.
++ */
++ err = fuse_writeback_range(inode_out, pos_out, pos_out + len - 1);
++ if (err)
++ goto out;
+
+ if (is_unstable)
+ set_bit(FUSE_I_SIZE_UNSTABLE, &fi_out->state);
+@@ -3318,6 +3333,10 @@ static ssize_t __fuse_copy_file_range(struct file *file_in, loff_t pos_in,
+ if (err)
+ goto out;
+
++ truncate_inode_pages_range(inode_out->i_mapping,
++ ALIGN_DOWN(pos_out, PAGE_SIZE),
++ ALIGN(pos_out + outarg.size, PAGE_SIZE) - 1);
++
+ if (fc->writeback_cache) {
+ fuse_write_update_size(inode_out, pos_out + outarg.size);
+ file_update_time(file_out);
+diff --git a/fs/fuse/fuse_i.h b/fs/fuse/fuse_i.h
+index ca344bf71404..d7cde216fc87 100644
+--- a/fs/fuse/fuse_i.h
++++ b/fs/fuse/fuse_i.h
+@@ -249,6 +249,7 @@ struct fuse_args {
+ bool out_argvar:1;
+ bool page_zeroing:1;
+ bool page_replace:1;
++ bool may_block:1;
+ struct fuse_in_arg in_args[3];
+ struct fuse_arg out_args[2];
+ void (*end)(struct fuse_conn *fc, struct fuse_args *args, int error);
+diff --git a/fs/fuse/virtio_fs.c b/fs/fuse/virtio_fs.c
+index bade74768903..0c6ef5d3c6ab 100644
+--- a/fs/fuse/virtio_fs.c
++++ b/fs/fuse/virtio_fs.c
+@@ -60,6 +60,12 @@ struct virtio_fs_forget {
+ struct virtio_fs_forget_req req;
+ };
+
++struct virtio_fs_req_work {
++ struct fuse_req *req;
++ struct virtio_fs_vq *fsvq;
++ struct work_struct done_work;
++};
++
+ static int virtio_fs_enqueue_req(struct virtio_fs_vq *fsvq,
+ struct fuse_req *req, bool in_flight);
+
+@@ -485,19 +491,67 @@ static void copy_args_from_argbuf(struct fuse_args *args, struct fuse_req *req)
+ }
+
+ /* Work function for request completion */
++static void virtio_fs_request_complete(struct fuse_req *req,
++ struct virtio_fs_vq *fsvq)
++{
++ struct fuse_pqueue *fpq = &fsvq->fud->pq;
++ struct fuse_conn *fc = fsvq->fud->fc;
++ struct fuse_args *args;
++ struct fuse_args_pages *ap;
++ unsigned int len, i, thislen;
++ struct page *page;
++
++ /*
++ * TODO verify that server properly follows FUSE protocol
++ * (oh.uniq, oh.len)
++ */
++ args = req->args;
++ copy_args_from_argbuf(args, req);
++
++ if (args->out_pages && args->page_zeroing) {
++ len = args->out_args[args->out_numargs - 1].size;
++ ap = container_of(args, typeof(*ap), args);
++ for (i = 0; i < ap->num_pages; i++) {
++ thislen = ap->descs[i].length;
++ if (len < thislen) {
++ WARN_ON(ap->descs[i].offset);
++ page = ap->pages[i];
++ zero_user_segment(page, len, thislen);
++ len = 0;
++ } else {
++ len -= thislen;
++ }
++ }
++ }
++
++ spin_lock(&fpq->lock);
++ clear_bit(FR_SENT, &req->flags);
++ spin_unlock(&fpq->lock);
++
++ fuse_request_end(fc, req);
++ spin_lock(&fsvq->lock);
++ dec_in_flight_req(fsvq);
++ spin_unlock(&fsvq->lock);
++}
++
++static void virtio_fs_complete_req_work(struct work_struct *work)
++{
++ struct virtio_fs_req_work *w =
++ container_of(work, typeof(*w), done_work);
++
++ virtio_fs_request_complete(w->req, w->fsvq);
++ kfree(w);
++}
++
+ static void virtio_fs_requests_done_work(struct work_struct *work)
+ {
+ struct virtio_fs_vq *fsvq = container_of(work, struct virtio_fs_vq,
+ done_work);
+ struct fuse_pqueue *fpq = &fsvq->fud->pq;
+- struct fuse_conn *fc = fsvq->fud->fc;
+ struct virtqueue *vq = fsvq->vq;
+ struct fuse_req *req;
+- struct fuse_args_pages *ap;
+ struct fuse_req *next;
+- struct fuse_args *args;
+- unsigned int len, i, thislen;
+- struct page *page;
++ unsigned int len;
+ LIST_HEAD(reqs);
+
+ /* Collect completed requests off the virtqueue */
+@@ -515,38 +569,20 @@ static void virtio_fs_requests_done_work(struct work_struct *work)
+
+ /* End requests */
+ list_for_each_entry_safe(req, next, &reqs, list) {
+- /*
+- * TODO verify that server properly follows FUSE protocol
+- * (oh.uniq, oh.len)
+- */
+- args = req->args;
+- copy_args_from_argbuf(args, req);
+-
+- if (args->out_pages && args->page_zeroing) {
+- len = args->out_args[args->out_numargs - 1].size;
+- ap = container_of(args, typeof(*ap), args);
+- for (i = 0; i < ap->num_pages; i++) {
+- thislen = ap->descs[i].length;
+- if (len < thislen) {
+- WARN_ON(ap->descs[i].offset);
+- page = ap->pages[i];
+- zero_user_segment(page, len, thislen);
+- len = 0;
+- } else {
+- len -= thislen;
+- }
+- }
+- }
+-
+- spin_lock(&fpq->lock);
+- clear_bit(FR_SENT, &req->flags);
+ list_del_init(&req->list);
+- spin_unlock(&fpq->lock);
+
+- fuse_request_end(fc, req);
+- spin_lock(&fsvq->lock);
+- dec_in_flight_req(fsvq);
+- spin_unlock(&fsvq->lock);
++ /* blocking async request completes in a worker context */
++ if (req->args->may_block) {
++ struct virtio_fs_req_work *w;
++
++ w = kzalloc(sizeof(*w), GFP_NOFS | __GFP_NOFAIL);
++ INIT_WORK(&w->done_work, virtio_fs_complete_req_work);
++ w->fsvq = fsvq;
++ w->req = req;
++ schedule_work(&w->done_work);
++ } else {
++ virtio_fs_request_complete(req, fsvq);
++ }
+ }
+ }
+
+diff --git a/fs/gfs2/log.c b/fs/gfs2/log.c
+index 0644e58c6191..b7a5221bea7d 100644
+--- a/fs/gfs2/log.c
++++ b/fs/gfs2/log.c
+@@ -1003,8 +1003,10 @@ out:
+ * @new: New transaction to be merged
+ */
+
+-static void gfs2_merge_trans(struct gfs2_trans *old, struct gfs2_trans *new)
++static void gfs2_merge_trans(struct gfs2_sbd *sdp, struct gfs2_trans *new)
+ {
++ struct gfs2_trans *old = sdp->sd_log_tr;
++
+ WARN_ON_ONCE(!test_bit(TR_ATTACHED, &old->tr_flags));
+
+ old->tr_num_buf_new += new->tr_num_buf_new;
+@@ -1016,6 +1018,11 @@ static void gfs2_merge_trans(struct gfs2_trans *old, struct gfs2_trans *new)
+
+ list_splice_tail_init(&new->tr_databuf, &old->tr_databuf);
+ list_splice_tail_init(&new->tr_buf, &old->tr_buf);
++
++ spin_lock(&sdp->sd_ail_lock);
++ list_splice_tail_init(&new->tr_ail1_list, &old->tr_ail1_list);
++ list_splice_tail_init(&new->tr_ail2_list, &old->tr_ail2_list);
++ spin_unlock(&sdp->sd_ail_lock);
+ }
+
+ static void log_refund(struct gfs2_sbd *sdp, struct gfs2_trans *tr)
+@@ -1027,7 +1034,7 @@ static void log_refund(struct gfs2_sbd *sdp, struct gfs2_trans *tr)
+ gfs2_log_lock(sdp);
+
+ if (sdp->sd_log_tr) {
+- gfs2_merge_trans(sdp->sd_log_tr, tr);
++ gfs2_merge_trans(sdp, tr);
+ } else if (tr->tr_num_buf_new || tr->tr_num_databuf_new) {
+ gfs2_assert_withdraw(sdp, test_bit(TR_ALLOCED, &tr->tr_flags));
+ sdp->sd_log_tr = tr;
+diff --git a/fs/gfs2/ops_fstype.c b/fs/gfs2/ops_fstype.c
+index e2b69ffcc6a8..094f5fe7c009 100644
+--- a/fs/gfs2/ops_fstype.c
++++ b/fs/gfs2/ops_fstype.c
+@@ -880,7 +880,7 @@ fail:
+ }
+
+ static const match_table_t nolock_tokens = {
+- { Opt_jid, "jid=%d\n", },
++ { Opt_jid, "jid=%d", },
+ { Opt_err, NULL },
+ };
+
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+index 2698e9b08490..1829be7f63a3 100644
+--- a/fs/io_uring.c
++++ b/fs/io_uring.c
+@@ -513,7 +513,6 @@ enum {
+ REQ_F_INFLIGHT_BIT,
+ REQ_F_CUR_POS_BIT,
+ REQ_F_NOWAIT_BIT,
+- REQ_F_IOPOLL_COMPLETED_BIT,
+ REQ_F_LINK_TIMEOUT_BIT,
+ REQ_F_TIMEOUT_BIT,
+ REQ_F_ISREG_BIT,
+@@ -556,8 +555,6 @@ enum {
+ REQ_F_CUR_POS = BIT(REQ_F_CUR_POS_BIT),
+ /* must not punt to workers */
+ REQ_F_NOWAIT = BIT(REQ_F_NOWAIT_BIT),
+- /* polled IO has completed */
+- REQ_F_IOPOLL_COMPLETED = BIT(REQ_F_IOPOLL_COMPLETED_BIT),
+ /* has linked timeout */
+ REQ_F_LINK_TIMEOUT = BIT(REQ_F_LINK_TIMEOUT_BIT),
+ /* timeout request */
+@@ -618,6 +615,8 @@ struct io_kiocb {
+ int cflags;
+ bool needs_fixed_file;
+ u8 opcode;
++ /* polled IO has completed */
++ u8 iopoll_completed;
+
+ u16 buf_index;
+
+@@ -1691,6 +1690,18 @@ static int io_put_kbuf(struct io_kiocb *req)
+ return cflags;
+ }
+
++static void io_iopoll_queue(struct list_head *again)
++{
++ struct io_kiocb *req;
++
++ do {
++ req = list_first_entry(again, struct io_kiocb, list);
++ list_del(&req->list);
++ refcount_inc(&req->refs);
++ io_queue_async_work(req);
++ } while (!list_empty(again));
++}
++
+ /*
+ * Find and free completed poll iocbs
+ */
+@@ -1699,12 +1710,21 @@ static void io_iopoll_complete(struct io_ring_ctx *ctx, unsigned int *nr_events,
+ {
+ struct req_batch rb;
+ struct io_kiocb *req;
++ LIST_HEAD(again);
++
++ /* order with ->result store in io_complete_rw_iopoll() */
++ smp_rmb();
+
+ rb.to_free = rb.need_iter = 0;
+ while (!list_empty(done)) {
+ int cflags = 0;
+
+ req = list_first_entry(done, struct io_kiocb, list);
++ if (READ_ONCE(req->result) == -EAGAIN) {
++ req->iopoll_completed = 0;
++ list_move_tail(&req->list, &again);
++ continue;
++ }
+ list_del(&req->list);
+
+ if (req->flags & REQ_F_BUFFER_SELECTED)
+@@ -1722,18 +1742,9 @@ static void io_iopoll_complete(struct io_ring_ctx *ctx, unsigned int *nr_events,
+ if (ctx->flags & IORING_SETUP_SQPOLL)
+ io_cqring_ev_posted(ctx);
+ io_free_req_many(ctx, &rb);
+-}
+
+-static void io_iopoll_queue(struct list_head *again)
+-{
+- struct io_kiocb *req;
+-
+- do {
+- req = list_first_entry(again, struct io_kiocb, list);
+- list_del(&req->list);
+- refcount_inc(&req->refs);
+- io_queue_async_work(req);
+- } while (!list_empty(again));
++ if (!list_empty(&again))
++ io_iopoll_queue(&again);
+ }
+
+ static int io_do_iopoll(struct io_ring_ctx *ctx, unsigned int *nr_events,
+@@ -1741,7 +1752,6 @@ static int io_do_iopoll(struct io_ring_ctx *ctx, unsigned int *nr_events,
+ {
+ struct io_kiocb *req, *tmp;
+ LIST_HEAD(done);
+- LIST_HEAD(again);
+ bool spin;
+ int ret;
+
+@@ -1760,20 +1770,13 @@ static int io_do_iopoll(struct io_ring_ctx *ctx, unsigned int *nr_events,
+ * If we find a request that requires polling, break out
+ * and complete those lists first, if we have entries there.
+ */
+- if (req->flags & REQ_F_IOPOLL_COMPLETED) {
++ if (READ_ONCE(req->iopoll_completed)) {
+ list_move_tail(&req->list, &done);
+ continue;
+ }
+ if (!list_empty(&done))
+ break;
+
+- if (req->result == -EAGAIN) {
+- list_move_tail(&req->list, &again);
+- continue;
+- }
+- if (!list_empty(&again))
+- break;
+-
+ ret = kiocb->ki_filp->f_op->iopoll(kiocb, spin);
+ if (ret < 0)
+ break;
+@@ -1786,9 +1789,6 @@ static int io_do_iopoll(struct io_ring_ctx *ctx, unsigned int *nr_events,
+ if (!list_empty(&done))
+ io_iopoll_complete(ctx, nr_events, &done);
+
+- if (!list_empty(&again))
+- io_iopoll_queue(&again);
+-
+ return ret;
+ }
+
+@@ -1937,11 +1937,15 @@ static void io_complete_rw_iopoll(struct kiocb *kiocb, long res, long res2)
+ if (kiocb->ki_flags & IOCB_WRITE)
+ kiocb_end_write(req);
+
+- if (res != req->result)
++ if (res != -EAGAIN && res != req->result)
+ req_set_fail_links(req);
+- req->result = res;
+- if (res != -EAGAIN)
+- req->flags |= REQ_F_IOPOLL_COMPLETED;
++
++ WRITE_ONCE(req->result, res);
++ /* order with io_poll_complete() checking ->result */
++ if (res != -EAGAIN) {
++ smp_wmb();
++ WRITE_ONCE(req->iopoll_completed, 1);
++ }
+ }
+
+ /*
+@@ -1974,7 +1978,7 @@ static void io_iopoll_req_issued(struct io_kiocb *req)
+ * For fast devices, IO may have already completed. If it has, add
+ * it to the front so we find it first.
+ */
+- if (req->flags & REQ_F_IOPOLL_COMPLETED)
++ if (READ_ONCE(req->iopoll_completed))
+ list_add(&req->list, &ctx->poll_list);
+ else
+ list_add_tail(&req->list, &ctx->poll_list);
+@@ -2098,6 +2102,7 @@ static int io_prep_rw(struct io_kiocb *req, const struct io_uring_sqe *sqe,
+ kiocb->ki_flags |= IOCB_HIPRI;
+ kiocb->ki_complete = io_complete_rw_iopoll;
+ req->result = 0;
++ req->iopoll_completed = 0;
+ } else {
+ if (kiocb->ki_flags & IOCB_HIPRI)
+ return -EINVAL;
+@@ -2609,8 +2614,8 @@ copy_iov:
+ }
+ }
+ out_free:
+- kfree(iovec);
+- req->flags &= ~REQ_F_NEED_CLEANUP;
++ if (!(req->flags & REQ_F_NEED_CLEANUP))
++ kfree(iovec);
+ return ret;
+ }
+
+@@ -2732,8 +2737,8 @@ copy_iov:
+ }
+ }
+ out_free:
+- req->flags &= ~REQ_F_NEED_CLEANUP;
+- kfree(iovec);
++ if (!(req->flags & REQ_F_NEED_CLEANUP))
++ kfree(iovec);
+ return ret;
+ }
+
+@@ -4297,6 +4302,28 @@ static void io_async_queue_proc(struct file *file, struct wait_queue_head *head,
+ __io_queue_proc(&pt->req->apoll->poll, pt, head);
+ }
+
++static void io_sq_thread_drop_mm(struct io_ring_ctx *ctx)
++{
++ struct mm_struct *mm = current->mm;
++
++ if (mm) {
++ unuse_mm(mm);
++ mmput(mm);
++ }
++}
++
++static int io_sq_thread_acquire_mm(struct io_ring_ctx *ctx,
++ struct io_kiocb *req)
++{
++ if (io_op_defs[req->opcode].needs_mm && !current->mm) {
++ if (unlikely(!mmget_not_zero(ctx->sqo_mm)))
++ return -EFAULT;
++ use_mm(ctx->sqo_mm);
++ }
++
++ return 0;
++}
++
+ static void io_async_task_func(struct callback_head *cb)
+ {
+ struct io_kiocb *req = container_of(cb, struct io_kiocb, task_work);
+@@ -4328,12 +4355,17 @@ static void io_async_task_func(struct callback_head *cb)
+ if (canceled) {
+ kfree(apoll);
+ io_cqring_ev_posted(ctx);
++end_req:
+ req_set_fail_links(req);
+ io_double_put_req(req);
+ return;
+ }
+
+ __set_current_state(TASK_RUNNING);
++ if (io_sq_thread_acquire_mm(ctx, req)) {
++ io_cqring_add_event(req, -EFAULT);
++ goto end_req;
++ }
+ mutex_lock(&ctx->uring_lock);
+ __io_queue_sqe(req, NULL);
+ mutex_unlock(&ctx->uring_lock);
+@@ -5892,11 +5924,8 @@ static int io_init_req(struct io_ring_ctx *ctx, struct io_kiocb *req,
+ if (unlikely(req->opcode >= IORING_OP_LAST))
+ return -EINVAL;
+
+- if (io_op_defs[req->opcode].needs_mm && !current->mm) {
+- if (unlikely(!mmget_not_zero(ctx->sqo_mm)))
+- return -EFAULT;
+- use_mm(ctx->sqo_mm);
+- }
++ if (unlikely(io_sq_thread_acquire_mm(ctx, req)))
++ return -EFAULT;
+
+ sqe_flags = READ_ONCE(sqe->flags);
+ /* enforce forwards compatibility on users */
+@@ -6006,16 +6035,6 @@ fail_req:
+ return submitted;
+ }
+
+-static inline void io_sq_thread_drop_mm(struct io_ring_ctx *ctx)
+-{
+- struct mm_struct *mm = current->mm;
+-
+- if (mm) {
+- unuse_mm(mm);
+- mmput(mm);
+- }
+-}
+-
+ static int io_sq_thread(void *data)
+ {
+ struct io_ring_ctx *ctx = data;
+@@ -7385,7 +7404,17 @@ static void io_ring_exit_work(struct work_struct *work)
+ if (ctx->rings)
+ io_cqring_overflow_flush(ctx, true);
+
+- wait_for_completion(&ctx->completions[0]);
++ /*
++ * If we're doing polled IO and end up having requests being
++ * submitted async (out-of-line), then completions can come in while
++ * we're waiting for refs to drop. We need to reap these manually,
++ * as nobody else will be looking for them.
++ */
++ while (!wait_for_completion_timeout(&ctx->completions[0], HZ/20)) {
++ io_iopoll_reap_events(ctx);
++ if (ctx->rings)
++ io_cqring_overflow_flush(ctx, true);
++ }
+ io_ring_ctx_free(ctx);
+ }
+
+diff --git a/fs/jbd2/journal.c b/fs/jbd2/journal.c
+index a49d0e670ddf..e4944436e733 100644
+--- a/fs/jbd2/journal.c
++++ b/fs/jbd2/journal.c
+@@ -1140,6 +1140,7 @@ static journal_t *journal_init_common(struct block_device *bdev,
+ init_waitqueue_head(&journal->j_wait_commit);
+ init_waitqueue_head(&journal->j_wait_updates);
+ init_waitqueue_head(&journal->j_wait_reserved);
++ mutex_init(&journal->j_abort_mutex);
+ mutex_init(&journal->j_barrier);
+ mutex_init(&journal->j_checkpoint_mutex);
+ spin_lock_init(&journal->j_revoke_lock);
+@@ -1402,7 +1403,8 @@ static int jbd2_write_superblock(journal_t *journal, int write_flags)
+ printk(KERN_ERR "JBD2: Error %d detected when updating "
+ "journal superblock for %s.\n", ret,
+ journal->j_devname);
+- jbd2_journal_abort(journal, ret);
++ if (!is_journal_aborted(journal))
++ jbd2_journal_abort(journal, ret);
+ }
+
+ return ret;
+@@ -2153,6 +2155,13 @@ void jbd2_journal_abort(journal_t *journal, int errno)
+ {
+ transaction_t *transaction;
+
++ /*
++ * Lock the aborting procedure until everything is done, this avoid
++ * races between filesystem's error handling flow (e.g. ext4_abort()),
++ * ensure panic after the error info is written into journal's
++ * superblock.
++ */
++ mutex_lock(&journal->j_abort_mutex);
+ /*
+ * ESHUTDOWN always takes precedence because a file system check
+ * caused by any other journal abort error is not required after
+@@ -2167,6 +2176,7 @@ void jbd2_journal_abort(journal_t *journal, int errno)
+ journal->j_errno = errno;
+ jbd2_journal_update_sb_errno(journal);
+ }
++ mutex_unlock(&journal->j_abort_mutex);
+ return;
+ }
+
+@@ -2188,10 +2198,7 @@ void jbd2_journal_abort(journal_t *journal, int errno)
+ * layer could realise that a filesystem check is needed.
+ */
+ jbd2_journal_update_sb_errno(journal);
+-
+- write_lock(&journal->j_state_lock);
+- journal->j_flags |= JBD2_REC_ERR;
+- write_unlock(&journal->j_state_lock);
++ mutex_unlock(&journal->j_abort_mutex);
+ }
+
+ /**
+diff --git a/fs/nfs/direct.c b/fs/nfs/direct.c
+index a57e7c72c7f4..d49b1d197908 100644
+--- a/fs/nfs/direct.c
++++ b/fs/nfs/direct.c
+@@ -731,6 +731,8 @@ static void nfs_direct_write_completion(struct nfs_pgio_header *hdr)
+ nfs_list_remove_request(req);
+ if (request_commit) {
+ kref_get(&req->wb_kref);
++ memcpy(&req->wb_verf, &hdr->verf.verifier,
++ sizeof(req->wb_verf));
+ nfs_mark_request_commit(req, hdr->lseg, &cinfo,
+ hdr->ds_commit_idx);
+ }
+diff --git a/fs/nfs/inode.c b/fs/nfs/inode.c
+index b9d0921cb4fe..0bf1f835de01 100644
+--- a/fs/nfs/inode.c
++++ b/fs/nfs/inode.c
+@@ -833,6 +833,8 @@ int nfs_getattr(const struct path *path, struct kstat *stat,
+ do_update |= cache_validity & NFS_INO_INVALID_ATIME;
+ if (request_mask & (STATX_CTIME|STATX_MTIME))
+ do_update |= cache_validity & NFS_INO_REVAL_PAGECACHE;
++ if (request_mask & STATX_BLOCKS)
++ do_update |= cache_validity & NFS_INO_INVALID_BLOCKS;
+ if (do_update) {
+ /* Update the attribute cache */
+ if (!(server->flags & NFS_MOUNT_NOAC))
+@@ -1764,7 +1766,8 @@ out_noforce:
+ status = nfs_post_op_update_inode_locked(inode, fattr,
+ NFS_INO_INVALID_CHANGE
+ | NFS_INO_INVALID_CTIME
+- | NFS_INO_INVALID_MTIME);
++ | NFS_INO_INVALID_MTIME
++ | NFS_INO_INVALID_BLOCKS);
+ return status;
+ }
+
+@@ -1871,7 +1874,8 @@ static int nfs_update_inode(struct inode *inode, struct nfs_fattr *fattr)
+ nfsi->cache_validity &= ~(NFS_INO_INVALID_ATTR
+ | NFS_INO_INVALID_ATIME
+ | NFS_INO_REVAL_FORCED
+- | NFS_INO_REVAL_PAGECACHE);
++ | NFS_INO_REVAL_PAGECACHE
++ | NFS_INO_INVALID_BLOCKS);
+
+ /* Do atomic weak cache consistency updates */
+ nfs_wcc_update_inode(inode, fattr);
+@@ -2033,8 +2037,12 @@ static int nfs_update_inode(struct inode *inode, struct nfs_fattr *fattr)
+ inode->i_blocks = nfs_calc_block_size(fattr->du.nfs3.used);
+ } else if (fattr->valid & NFS_ATTR_FATTR_BLOCKS_USED)
+ inode->i_blocks = fattr->du.nfs2.blocks;
+- else
++ else {
++ nfsi->cache_validity |= save_cache_validity &
++ (NFS_INO_INVALID_BLOCKS
++ | NFS_INO_REVAL_FORCED);
+ cache_revalidated = false;
++ }
+
+ /* Update attrtimeo value if we're out of the unstable period */
+ if (attr_changed) {
+diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
+index 9056f3dd380e..e32717fd1169 100644
+--- a/fs/nfs/nfs4proc.c
++++ b/fs/nfs/nfs4proc.c
+@@ -7909,7 +7909,7 @@ nfs4_bind_one_conn_to_session_done(struct rpc_task *task, void *calldata)
+ }
+
+ static const struct rpc_call_ops nfs4_bind_one_conn_to_session_ops = {
+- .rpc_call_done = &nfs4_bind_one_conn_to_session_done,
++ .rpc_call_done = nfs4_bind_one_conn_to_session_done,
+ };
+
+ /*
+diff --git a/fs/nfsd/cache.h b/fs/nfsd/cache.h
+index 10ec5ecdf117..65c331f75e9c 100644
+--- a/fs/nfsd/cache.h
++++ b/fs/nfsd/cache.h
+@@ -78,6 +78,8 @@ enum {
+ /* Checksum this amount of the request */
+ #define RC_CSUMLEN (256U)
+
++int nfsd_drc_slab_create(void);
++void nfsd_drc_slab_free(void);
+ int nfsd_reply_cache_init(struct nfsd_net *);
+ void nfsd_reply_cache_shutdown(struct nfsd_net *);
+ int nfsd_cache_lookup(struct svc_rqst *);
+diff --git a/fs/nfsd/netns.h b/fs/nfsd/netns.h
+index 09aa545825bd..9217cb64bf0e 100644
+--- a/fs/nfsd/netns.h
++++ b/fs/nfsd/netns.h
+@@ -139,7 +139,6 @@ struct nfsd_net {
+ * Duplicate reply cache
+ */
+ struct nfsd_drc_bucket *drc_hashtbl;
+- struct kmem_cache *drc_slab;
+
+ /* max number of entries allowed in the cache */
+ unsigned int max_drc_entries;
+diff --git a/fs/nfsd/nfs4callback.c b/fs/nfsd/nfs4callback.c
+index 5cf91322de0f..07e0c6f6322f 100644
+--- a/fs/nfsd/nfs4callback.c
++++ b/fs/nfsd/nfs4callback.c
+@@ -1301,6 +1301,8 @@ static void nfsd4_process_cb_update(struct nfsd4_callback *cb)
+ err = setup_callback_client(clp, &conn, ses);
+ if (err) {
+ nfsd4_mark_cb_down(clp, err);
++ if (c)
++ svc_xprt_put(c->cn_xprt);
+ return;
+ }
+ }
+diff --git a/fs/nfsd/nfscache.c b/fs/nfsd/nfscache.c
+index 96352ab7bd81..4a258065188e 100644
+--- a/fs/nfsd/nfscache.c
++++ b/fs/nfsd/nfscache.c
+@@ -36,6 +36,8 @@ struct nfsd_drc_bucket {
+ spinlock_t cache_lock;
+ };
+
++static struct kmem_cache *drc_slab;
++
+ static int nfsd_cache_append(struct svc_rqst *rqstp, struct kvec *vec);
+ static unsigned long nfsd_reply_cache_count(struct shrinker *shrink,
+ struct shrink_control *sc);
+@@ -95,7 +97,7 @@ nfsd_reply_cache_alloc(struct svc_rqst *rqstp, __wsum csum,
+ {
+ struct svc_cacherep *rp;
+
+- rp = kmem_cache_alloc(nn->drc_slab, GFP_KERNEL);
++ rp = kmem_cache_alloc(drc_slab, GFP_KERNEL);
+ if (rp) {
+ rp->c_state = RC_UNUSED;
+ rp->c_type = RC_NOCACHE;
+@@ -129,7 +131,7 @@ nfsd_reply_cache_free_locked(struct nfsd_drc_bucket *b, struct svc_cacherep *rp,
+ atomic_dec(&nn->num_drc_entries);
+ nn->drc_mem_usage -= sizeof(*rp);
+ }
+- kmem_cache_free(nn->drc_slab, rp);
++ kmem_cache_free(drc_slab, rp);
+ }
+
+ static void
+@@ -141,6 +143,18 @@ nfsd_reply_cache_free(struct nfsd_drc_bucket *b, struct svc_cacherep *rp,
+ spin_unlock(&b->cache_lock);
+ }
+
++int nfsd_drc_slab_create(void)
++{
++ drc_slab = kmem_cache_create("nfsd_drc",
++ sizeof(struct svc_cacherep), 0, 0, NULL);
++ return drc_slab ? 0: -ENOMEM;
++}
++
++void nfsd_drc_slab_free(void)
++{
++ kmem_cache_destroy(drc_slab);
++}
++
+ int nfsd_reply_cache_init(struct nfsd_net *nn)
+ {
+ unsigned int hashsize;
+@@ -159,18 +173,13 @@ int nfsd_reply_cache_init(struct nfsd_net *nn)
+ if (status)
+ goto out_nomem;
+
+- nn->drc_slab = kmem_cache_create("nfsd_drc",
+- sizeof(struct svc_cacherep), 0, 0, NULL);
+- if (!nn->drc_slab)
+- goto out_shrinker;
+-
+ nn->drc_hashtbl = kcalloc(hashsize,
+ sizeof(*nn->drc_hashtbl), GFP_KERNEL);
+ if (!nn->drc_hashtbl) {
+ nn->drc_hashtbl = vzalloc(array_size(hashsize,
+ sizeof(*nn->drc_hashtbl)));
+ if (!nn->drc_hashtbl)
+- goto out_slab;
++ goto out_shrinker;
+ }
+
+ for (i = 0; i < hashsize; i++) {
+@@ -180,8 +189,6 @@ int nfsd_reply_cache_init(struct nfsd_net *nn)
+ nn->drc_hashsize = hashsize;
+
+ return 0;
+-out_slab:
+- kmem_cache_destroy(nn->drc_slab);
+ out_shrinker:
+ unregister_shrinker(&nn->nfsd_reply_cache_shrinker);
+ out_nomem:
+@@ -209,8 +216,6 @@ void nfsd_reply_cache_shutdown(struct nfsd_net *nn)
+ nn->drc_hashtbl = NULL;
+ nn->drc_hashsize = 0;
+
+- kmem_cache_destroy(nn->drc_slab);
+- nn->drc_slab = NULL;
+ }
+
+ /*
+@@ -464,8 +469,7 @@ found_entry:
+ rtn = RC_REPLY;
+ break;
+ default:
+- printk(KERN_WARNING "nfsd: bad repcache type %d\n", rp->c_type);
+- nfsd_reply_cache_free_locked(b, rp, nn);
++ WARN_ONCE(1, "nfsd: bad repcache type %d\n", rp->c_type);
+ }
+
+ goto out;
+diff --git a/fs/nfsd/nfsctl.c b/fs/nfsd/nfsctl.c
+index 3bb2db947d29..71687d99b090 100644
+--- a/fs/nfsd/nfsctl.c
++++ b/fs/nfsd/nfsctl.c
+@@ -1533,6 +1533,9 @@ static int __init init_nfsd(void)
+ goto out_free_slabs;
+ nfsd_fault_inject_init(); /* nfsd fault injection controls */
+ nfsd_stat_init(); /* Statistics */
++ retval = nfsd_drc_slab_create();
++ if (retval)
++ goto out_free_stat;
+ nfsd_lockd_init(); /* lockd->nfsd callbacks */
+ retval = create_proc_exports_entry();
+ if (retval)
+@@ -1546,6 +1549,8 @@ out_free_all:
+ remove_proc_entry("fs/nfs", NULL);
+ out_free_lockd:
+ nfsd_lockd_shutdown();
++ nfsd_drc_slab_free();
++out_free_stat:
+ nfsd_stat_shutdown();
+ nfsd_fault_inject_cleanup();
+ nfsd4_exit_pnfs();
+@@ -1560,6 +1565,7 @@ out_unregister_pernet:
+
+ static void __exit exit_nfsd(void)
+ {
++ nfsd_drc_slab_free();
+ remove_proc_entry("fs/nfs/exports", NULL);
+ remove_proc_entry("fs/nfs", NULL);
+ nfsd_stat_shutdown();
+diff --git a/fs/proc/bootconfig.c b/fs/proc/bootconfig.c
+index 9955d75c0585..ad31ec4ad627 100644
+--- a/fs/proc/bootconfig.c
++++ b/fs/proc/bootconfig.c
+@@ -26,8 +26,9 @@ static int boot_config_proc_show(struct seq_file *m, void *v)
+ static int __init copy_xbc_key_value_list(char *dst, size_t size)
+ {
+ struct xbc_node *leaf, *vnode;
+- const char *val;
+ char *key, *end = dst + size;
++ const char *val;
++ char q;
+ int ret = 0;
+
+ key = kzalloc(XBC_KEYLEN_MAX, GFP_KERNEL);
+@@ -41,16 +42,20 @@ static int __init copy_xbc_key_value_list(char *dst, size_t size)
+ break;
+ dst += ret;
+ vnode = xbc_node_get_child(leaf);
+- if (vnode && xbc_node_is_array(vnode)) {
++ if (vnode) {
+ xbc_array_for_each_value(vnode, val) {
+- ret = snprintf(dst, rest(dst, end), "\"%s\"%s",
+- val, vnode->next ? ", " : "\n");
++ if (strchr(val, '"'))
++ q = '\'';
++ else
++ q = '"';
++ ret = snprintf(dst, rest(dst, end), "%c%s%c%s",
++ q, val, q, vnode->next ? ", " : "\n");
+ if (ret < 0)
+ goto out;
+ dst += ret;
+ }
+ } else {
+- ret = snprintf(dst, rest(dst, end), "\"%s\"\n", val);
++ ret = snprintf(dst, rest(dst, end), "\"\"\n");
+ if (ret < 0)
+ break;
+ dst += ret;
+diff --git a/fs/xfs/xfs_inode.c b/fs/xfs/xfs_inode.c
+index d1772786af29..8845faa8161a 100644
+--- a/fs/xfs/xfs_inode.c
++++ b/fs/xfs/xfs_inode.c
+@@ -2639,8 +2639,10 @@ xfs_ifree_cluster(
+ error = xfs_trans_get_buf(tp, mp->m_ddev_targp, blkno,
+ mp->m_bsize * igeo->blocks_per_cluster,
+ XBF_UNMAPPED, &bp);
+- if (error)
++ if (error) {
++ xfs_perag_put(pag);
+ return error;
++ }
+
+ /*
+ * This buffer may not have been correctly initialised as we
+diff --git a/include/linux/bitops.h b/include/linux/bitops.h
+index 9acf654f0b19..99f2ac30b1d9 100644
+--- a/include/linux/bitops.h
++++ b/include/linux/bitops.h
+@@ -72,7 +72,7 @@ static inline int get_bitmask_order(unsigned int count)
+
+ static __always_inline unsigned long hweight_long(unsigned long w)
+ {
+- return sizeof(w) == 4 ? hweight32(w) : hweight64(w);
++ return sizeof(w) == 4 ? hweight32(w) : hweight64((__u64)w);
+ }
+
+ /**
+diff --git a/include/linux/coresight.h b/include/linux/coresight.h
+index 193cc9dbf448..09f0565a5de3 100644
+--- a/include/linux/coresight.h
++++ b/include/linux/coresight.h
+@@ -100,10 +100,12 @@ union coresight_dev_subtype {
+ };
+
+ /**
+- * struct coresight_platform_data - data harvested from the DT specification
+- * @nr_inport: number of input ports for this component.
+- * @nr_outport: number of output ports for this component.
+- * @conns: Array of nr_outport connections from this component
++ * struct coresight_platform_data - data harvested from the firmware
++ * specification.
++ *
++ * @nr_inport: Number of elements for the input connections.
++ * @nr_outport: Number of elements for the output connections.
++ * @conns: Sparse array of nr_outport connections from this component.
+ */
+ struct coresight_platform_data {
+ int nr_inport;
+diff --git a/include/linux/ioport.h b/include/linux/ioport.h
+index a9b9170b5dd2..6c3eca90cbc4 100644
+--- a/include/linux/ioport.h
++++ b/include/linux/ioport.h
+@@ -301,5 +301,11 @@ struct resource *devm_request_free_mem_region(struct device *dev,
+ struct resource *request_free_mem_region(struct resource *base,
+ unsigned long size, const char *name);
+
++#ifdef CONFIG_IO_STRICT_DEVMEM
++void revoke_devmem(struct resource *res);
++#else
++static inline void revoke_devmem(struct resource *res) { };
++#endif
++
+ #endif /* __ASSEMBLY__ */
+ #endif /* _LINUX_IOPORT_H */
+diff --git a/include/linux/jbd2.h b/include/linux/jbd2.h
+index f613d8529863..d56128df2aff 100644
+--- a/include/linux/jbd2.h
++++ b/include/linux/jbd2.h
+@@ -765,6 +765,11 @@ struct journal_s
+ */
+ int j_errno;
+
++ /**
++ * @j_abort_mutex: Lock the whole aborting procedure.
++ */
++ struct mutex j_abort_mutex;
++
+ /**
+ * @j_sb_buffer: The first part of the superblock buffer.
+ */
+@@ -1247,7 +1252,6 @@ JBD2_FEATURE_INCOMPAT_FUNCS(csum3, CSUM_V3)
+ #define JBD2_ABORT_ON_SYNCDATA_ERR 0x040 /* Abort the journal on file
+ * data write error in ordered
+ * mode */
+-#define JBD2_REC_ERR 0x080 /* The errno in the sb has been recorded */
+
+ /*
+ * Function declarations for the journaling transaction and buffer
+diff --git a/include/linux/kprobes.h b/include/linux/kprobes.h
+index 04bdaf01112c..645fd401c856 100644
+--- a/include/linux/kprobes.h
++++ b/include/linux/kprobes.h
+@@ -350,6 +350,10 @@ static inline struct kprobe_ctlblk *get_kprobe_ctlblk(void)
+ return this_cpu_ptr(&kprobe_ctlblk);
+ }
+
++extern struct kprobe kprobe_busy;
++void kprobe_busy_begin(void);
++void kprobe_busy_end(void);
++
+ kprobe_opcode_t *kprobe_lookup_name(const char *name, unsigned int offset);
+ int register_kprobe(struct kprobe *p);
+ void unregister_kprobe(struct kprobe *p);
+diff --git a/include/linux/libata.h b/include/linux/libata.h
+index cffa4714bfa8..ae6dfc107ea8 100644
+--- a/include/linux/libata.h
++++ b/include/linux/libata.h
+@@ -22,6 +22,7 @@
+ #include <linux/acpi.h>
+ #include <linux/cdrom.h>
+ #include <linux/sched.h>
++#include <linux/async.h>
+
+ /*
+ * Define if arch has non-standard setup. This is a _PCI_ standard
+@@ -872,6 +873,8 @@ struct ata_port {
+ struct timer_list fastdrain_timer;
+ unsigned long fastdrain_cnt;
+
++ async_cookie_t cookie;
++
+ int em_message_type;
+ void *private_data;
+
+diff --git a/include/linux/mfd/stmfx.h b/include/linux/mfd/stmfx.h
+index 3c67983678ec..744dce63946e 100644
+--- a/include/linux/mfd/stmfx.h
++++ b/include/linux/mfd/stmfx.h
+@@ -109,6 +109,7 @@ struct stmfx {
+ struct device *dev;
+ struct regmap *map;
+ struct regulator *vdd;
++ int irq;
+ struct irq_domain *irq_domain;
+ struct mutex lock; /* IRQ bus lock */
+ u8 irq_src;
+diff --git a/include/linux/nfs_fs.h b/include/linux/nfs_fs.h
+index 73eda45f1cfd..6ee9119acc5d 100644
+--- a/include/linux/nfs_fs.h
++++ b/include/linux/nfs_fs.h
+@@ -230,6 +230,7 @@ struct nfs4_copy_state {
+ #define NFS_INO_INVALID_OTHER BIT(12) /* other attrs are invalid */
+ #define NFS_INO_DATA_INVAL_DEFER \
+ BIT(13) /* Deferred cache invalidation */
++#define NFS_INO_INVALID_BLOCKS BIT(14) /* cached blocks are invalid */
+
+ #define NFS_INO_INVALID_ATTR (NFS_INO_INVALID_CHANGE \
+ | NFS_INO_INVALID_CTIME \
+diff --git a/include/linux/usb/composite.h b/include/linux/usb/composite.h
+index 8675e145ea8b..2040696d75b6 100644
+--- a/include/linux/usb/composite.h
++++ b/include/linux/usb/composite.h
+@@ -249,6 +249,9 @@ int usb_function_activate(struct usb_function *);
+
+ int usb_interface_id(struct usb_configuration *, struct usb_function *);
+
++int config_ep_by_speed_and_alt(struct usb_gadget *g, struct usb_function *f,
++ struct usb_ep *_ep, u8 alt);
++
+ int config_ep_by_speed(struct usb_gadget *g, struct usb_function *f,
+ struct usb_ep *_ep);
+
+diff --git a/include/linux/usb/gadget.h b/include/linux/usb/gadget.h
+index 9411c08a5c7e..73a6113322c6 100644
+--- a/include/linux/usb/gadget.h
++++ b/include/linux/usb/gadget.h
+@@ -373,6 +373,7 @@ struct usb_gadget_ops {
+ * @connected: True if gadget is connected.
+ * @lpm_capable: If the gadget max_speed is FULL or HIGH, this flag
+ * indicates that it supports LPM as per the LPM ECN & errata.
++ * @irq: the interrupt number for device controller.
+ *
+ * Gadgets have a mostly-portable "gadget driver" implementing device
+ * functions, handling all usb configurations and interfaces. Gadget
+@@ -427,6 +428,7 @@ struct usb_gadget {
+ unsigned deactivated:1;
+ unsigned connected:1;
+ unsigned lpm_capable:1;
++ int irq;
+ };
+ #define work_to_gadget(w) (container_of((w), struct usb_gadget, work))
+
+diff --git a/include/sound/soc.h b/include/sound/soc.h
+index 946f88a6c63d..8e480efeda2a 100644
+--- a/include/sound/soc.h
++++ b/include/sound/soc.h
+@@ -790,9 +790,6 @@ struct snd_soc_dai_link {
+ const struct snd_soc_pcm_stream *params;
+ unsigned int num_params;
+
+- struct snd_soc_dapm_widget *playback_widget;
+- struct snd_soc_dapm_widget *capture_widget;
+-
+ unsigned int dai_fmt; /* format to set on init */
+
+ enum snd_soc_dpcm_trigger trigger[2]; /* trigger type for DPCM */
+@@ -1156,6 +1153,9 @@ struct snd_soc_pcm_runtime {
+ struct snd_soc_dai **cpu_dais;
+ unsigned int num_cpus;
+
++ struct snd_soc_dapm_widget *playback_widget;
++ struct snd_soc_dapm_widget *capture_widget;
++
+ struct delayed_work delayed_work;
+ void (*close_delayed_work_func)(struct snd_soc_pcm_runtime *rtd);
+ #ifdef CONFIG_DEBUG_FS
+@@ -1177,7 +1177,7 @@ struct snd_soc_pcm_runtime {
+ #define asoc_rtd_to_codec(rtd, n) (rtd)->dais[n + (rtd)->num_cpus]
+
+ #define for_each_rtd_components(rtd, i, component) \
+- for ((i) = 0; \
++ for ((i) = 0, component = NULL; \
+ ((i) < rtd->num_components) && ((component) = rtd->components[i]);\
+ (i)++)
+ #define for_each_rtd_cpu_dais(rtd, i, dai) \
+diff --git a/include/trace/events/afs.h b/include/trace/events/afs.h
+index c612cabbc378..93eddd32bd74 100644
+--- a/include/trace/events/afs.h
++++ b/include/trace/events/afs.h
+@@ -988,24 +988,22 @@ TRACE_EVENT(afs_edit_dir,
+ );
+
+ TRACE_EVENT(afs_protocol_error,
+- TP_PROTO(struct afs_call *call, int error, enum afs_eproto_cause cause),
++ TP_PROTO(struct afs_call *call, enum afs_eproto_cause cause),
+
+- TP_ARGS(call, error, cause),
++ TP_ARGS(call, cause),
+
+ TP_STRUCT__entry(
+ __field(unsigned int, call )
+- __field(int, error )
+ __field(enum afs_eproto_cause, cause )
+ ),
+
+ TP_fast_assign(
+ __entry->call = call ? call->debug_id : 0;
+- __entry->error = error;
+ __entry->cause = cause;
+ ),
+
+- TP_printk("c=%08x r=%d %s",
+- __entry->call, __entry->error,
++ TP_printk("c=%08x %s",
++ __entry->call,
+ __print_symbolic(__entry->cause, afs_eproto_causes))
+ );
+
+diff --git a/include/uapi/linux/magic.h b/include/uapi/linux/magic.h
+index d78064007b17..f3956fc11de6 100644
+--- a/include/uapi/linux/magic.h
++++ b/include/uapi/linux/magic.h
+@@ -94,6 +94,7 @@
+ #define BALLOON_KVM_MAGIC 0x13661366
+ #define ZSMALLOC_MAGIC 0x58295829
+ #define DMA_BUF_MAGIC 0x444d4142 /* "DMAB" */
++#define DEVMEM_MAGIC 0x454d444d /* "DMEM" */
+ #define Z3FOLD_MAGIC 0x33
+ #define PPC_CMM_MAGIC 0xc7571590
+
+diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
+index 5e52765161f9..c8acc8f37583 100644
+--- a/kernel/bpf/syscall.c
++++ b/kernel/bpf/syscall.c
+@@ -2924,6 +2924,7 @@ static struct bpf_insn *bpf_insn_prepare_dump(const struct bpf_prog *prog)
+ struct bpf_insn *insns;
+ u32 off, type;
+ u64 imm;
++ u8 code;
+ int i;
+
+ insns = kmemdup(prog->insnsi, bpf_prog_insn_size(prog),
+@@ -2932,21 +2933,27 @@ static struct bpf_insn *bpf_insn_prepare_dump(const struct bpf_prog *prog)
+ return insns;
+
+ for (i = 0; i < prog->len; i++) {
+- if (insns[i].code == (BPF_JMP | BPF_TAIL_CALL)) {
++ code = insns[i].code;
++
++ if (code == (BPF_JMP | BPF_TAIL_CALL)) {
+ insns[i].code = BPF_JMP | BPF_CALL;
+ insns[i].imm = BPF_FUNC_tail_call;
+ /* fall-through */
+ }
+- if (insns[i].code == (BPF_JMP | BPF_CALL) ||
+- insns[i].code == (BPF_JMP | BPF_CALL_ARGS)) {
+- if (insns[i].code == (BPF_JMP | BPF_CALL_ARGS))
++ if (code == (BPF_JMP | BPF_CALL) ||
++ code == (BPF_JMP | BPF_CALL_ARGS)) {
++ if (code == (BPF_JMP | BPF_CALL_ARGS))
+ insns[i].code = BPF_JMP | BPF_CALL;
+ if (!bpf_dump_raw_ok())
+ insns[i].imm = 0;
+ continue;
+ }
++ if (BPF_CLASS(code) == BPF_LDX && BPF_MODE(code) == BPF_PROBE_MEM) {
++ insns[i].code = BPF_LDX | BPF_SIZE(code) | BPF_MEM;
++ continue;
++ }
+
+- if (insns[i].code != (BPF_LD | BPF_IMM | BPF_DW))
++ if (code != (BPF_LD | BPF_IMM | BPF_DW))
+ continue;
+
+ imm = ((u64)insns[i + 1].imm << 32) | (u32)insns[i].imm;
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index efe14cf24bc6..739d9ba3ba6b 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -7366,7 +7366,7 @@ static int check_btf_func(struct bpf_verifier_env *env,
+ const struct btf *btf;
+ void __user *urecord;
+ u32 prev_offset = 0;
+- int ret = 0;
++ int ret = -ENOMEM;
+
+ nfuncs = attr->func_info_cnt;
+ if (!nfuncs)
+diff --git a/kernel/kprobes.c b/kernel/kprobes.c
+index 2625c241ac00..195ecb955fcc 100644
+--- a/kernel/kprobes.c
++++ b/kernel/kprobes.c
+@@ -586,11 +586,12 @@ static void kprobe_optimizer(struct work_struct *work)
+ mutex_unlock(&module_mutex);
+ mutex_unlock(&text_mutex);
+ cpus_read_unlock();
+- mutex_unlock(&kprobe_mutex);
+
+ /* Step 5: Kick optimizer again if needed */
+ if (!list_empty(&optimizing_list) || !list_empty(&unoptimizing_list))
+ kick_kprobe_optimizer();
++
++ mutex_unlock(&kprobe_mutex);
+ }
+
+ /* Wait for completing optimization and unoptimization */
+@@ -1236,6 +1237,26 @@ __releases(hlist_lock)
+ }
+ NOKPROBE_SYMBOL(kretprobe_table_unlock);
+
++struct kprobe kprobe_busy = {
++ .addr = (void *) get_kprobe,
++};
++
++void kprobe_busy_begin(void)
++{
++ struct kprobe_ctlblk *kcb;
++
++ preempt_disable();
++ __this_cpu_write(current_kprobe, &kprobe_busy);
++ kcb = get_kprobe_ctlblk();
++ kcb->kprobe_status = KPROBE_HIT_ACTIVE;
++}
++
++void kprobe_busy_end(void)
++{
++ __this_cpu_write(current_kprobe, NULL);
++ preempt_enable();
++}
++
+ /*
+ * This function is called from finish_task_switch when task tk becomes dead,
+ * so that we can recycle any function-return probe instances associated
+@@ -1253,6 +1274,8 @@ void kprobe_flush_task(struct task_struct *tk)
+ /* Early boot. kretprobe_table_locks not yet initialized. */
+ return;
+
++ kprobe_busy_begin();
++
+ INIT_HLIST_HEAD(&empty_rp);
+ hash = hash_ptr(tk, KPROBE_HASH_BITS);
+ head = &kretprobe_inst_table[hash];
+@@ -1266,6 +1289,8 @@ void kprobe_flush_task(struct task_struct *tk)
+ hlist_del(&ri->hlist);
+ kfree(ri);
+ }
++
++ kprobe_busy_end();
+ }
+ NOKPROBE_SYMBOL(kprobe_flush_task);
+
+diff --git a/kernel/resource.c b/kernel/resource.c
+index 76036a41143b..841737bbda9e 100644
+--- a/kernel/resource.c
++++ b/kernel/resource.c
+@@ -1126,6 +1126,7 @@ struct resource * __request_region(struct resource *parent,
+ {
+ DECLARE_WAITQUEUE(wait, current);
+ struct resource *res = alloc_resource(GFP_KERNEL);
++ struct resource *orig_parent = parent;
+
+ if (!res)
+ return NULL;
+@@ -1176,6 +1177,10 @@ struct resource * __request_region(struct resource *parent,
+ break;
+ }
+ write_unlock(&resource_lock);
++
++ if (res && orig_parent == &iomem_resource)
++ revoke_devmem(res);
++
+ return res;
+ }
+ EXPORT_SYMBOL(__request_region);
+diff --git a/kernel/trace/blktrace.c b/kernel/trace/blktrace.c
+index ca39dc3230cb..35610a4be4a9 100644
+--- a/kernel/trace/blktrace.c
++++ b/kernel/trace/blktrace.c
+@@ -995,8 +995,10 @@ static void blk_add_trace_split(void *ignore,
+
+ __blk_add_trace(bt, bio->bi_iter.bi_sector,
+ bio->bi_iter.bi_size, bio_op(bio), bio->bi_opf,
+- BLK_TA_SPLIT, bio->bi_status, sizeof(rpdu),
+- &rpdu, blk_trace_bio_get_cgid(q, bio));
++ BLK_TA_SPLIT,
++ blk_status_to_errno(bio->bi_status),
++ sizeof(rpdu), &rpdu,
++ blk_trace_bio_get_cgid(q, bio));
+ }
+ rcu_read_unlock();
+ }
+@@ -1033,7 +1035,8 @@ static void blk_add_trace_bio_remap(void *ignore,
+ r.sector_from = cpu_to_be64(from);
+
+ __blk_add_trace(bt, bio->bi_iter.bi_sector, bio->bi_iter.bi_size,
+- bio_op(bio), bio->bi_opf, BLK_TA_REMAP, bio->bi_status,
++ bio_op(bio), bio->bi_opf, BLK_TA_REMAP,
++ blk_status_to_errno(bio->bi_status),
+ sizeof(r), &r, blk_trace_bio_get_cgid(q, bio));
+ rcu_read_unlock();
+ }
+@@ -1253,21 +1256,10 @@ static inline __u16 t_error(const struct trace_entry *ent)
+
+ static __u64 get_pdu_int(const struct trace_entry *ent, bool has_cg)
+ {
+- const __u64 *val = pdu_start(ent, has_cg);
++ const __be64 *val = pdu_start(ent, has_cg);
+ return be64_to_cpu(*val);
+ }
+
+-static void get_pdu_remap(const struct trace_entry *ent,
+- struct blk_io_trace_remap *r, bool has_cg)
+-{
+- const struct blk_io_trace_remap *__r = pdu_start(ent, has_cg);
+- __u64 sector_from = __r->sector_from;
+-
+- r->device_from = be32_to_cpu(__r->device_from);
+- r->device_to = be32_to_cpu(__r->device_to);
+- r->sector_from = be64_to_cpu(sector_from);
+-}
+-
+ typedef void (blk_log_action_t) (struct trace_iterator *iter, const char *act,
+ bool has_cg);
+
+@@ -1407,13 +1399,13 @@ static void blk_log_with_error(struct trace_seq *s,
+
+ static void blk_log_remap(struct trace_seq *s, const struct trace_entry *ent, bool has_cg)
+ {
+- struct blk_io_trace_remap r = { .device_from = 0, };
++ const struct blk_io_trace_remap *__r = pdu_start(ent, has_cg);
+
+- get_pdu_remap(ent, &r, has_cg);
+ trace_seq_printf(s, "%llu + %u <- (%d,%d) %llu\n",
+ t_sector(ent), t_sec(ent),
+- MAJOR(r.device_from), MINOR(r.device_from),
+- (unsigned long long)r.sector_from);
++ MAJOR(be32_to_cpu(__r->device_from)),
++ MINOR(be32_to_cpu(__r->device_from)),
++ be64_to_cpu(__r->sector_from));
+ }
+
+ static void blk_log_plug(struct trace_seq *s, const struct trace_entry *ent, bool has_cg)
+diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
+index 4eb1d004d5f2..7fb2f4c1bc49 100644
+--- a/kernel/trace/trace.h
++++ b/kernel/trace/trace.h
+@@ -61,6 +61,9 @@ enum trace_type {
+ #undef __field_desc
+ #define __field_desc(type, container, item)
+
++#undef __field_packed
++#define __field_packed(type, container, item)
++
+ #undef __array
+ #define __array(type, item, size) type item[size];
+
+diff --git a/kernel/trace/trace_entries.h b/kernel/trace/trace_entries.h
+index a523da0dae0a..18c4a58aff79 100644
+--- a/kernel/trace/trace_entries.h
++++ b/kernel/trace/trace_entries.h
+@@ -78,8 +78,8 @@ FTRACE_ENTRY_PACKED(funcgraph_entry, ftrace_graph_ent_entry,
+
+ F_STRUCT(
+ __field_struct( struct ftrace_graph_ent, graph_ent )
+- __field_desc( unsigned long, graph_ent, func )
+- __field_desc( int, graph_ent, depth )
++ __field_packed( unsigned long, graph_ent, func )
++ __field_packed( int, graph_ent, depth )
+ ),
+
+ F_printk("--> %ps (%d)", (void *)__entry->func, __entry->depth)
+@@ -92,11 +92,11 @@ FTRACE_ENTRY_PACKED(funcgraph_exit, ftrace_graph_ret_entry,
+
+ F_STRUCT(
+ __field_struct( struct ftrace_graph_ret, ret )
+- __field_desc( unsigned long, ret, func )
+- __field_desc( unsigned long, ret, overrun )
+- __field_desc( unsigned long long, ret, calltime)
+- __field_desc( unsigned long long, ret, rettime )
+- __field_desc( int, ret, depth )
++ __field_packed( unsigned long, ret, func )
++ __field_packed( unsigned long, ret, overrun )
++ __field_packed( unsigned long long, ret, calltime)
++ __field_packed( unsigned long long, ret, rettime )
++ __field_packed( int, ret, depth )
+ ),
+
+ F_printk("<-- %ps (%d) (start: %llx end: %llx) over: %d",
+diff --git a/kernel/trace/trace_export.c b/kernel/trace/trace_export.c
+index 77ce5a3b6773..70d3d0a09053 100644
+--- a/kernel/trace/trace_export.c
++++ b/kernel/trace/trace_export.c
+@@ -45,6 +45,9 @@ static int ftrace_event_register(struct trace_event_call *call,
+ #undef __field_desc
+ #define __field_desc(type, container, item) type item;
+
++#undef __field_packed
++#define __field_packed(type, container, item) type item;
++
+ #undef __array
+ #define __array(type, item, size) type item[size];
+
+@@ -85,6 +88,13 @@ static void __always_unused ____ftrace_check_##name(void) \
+ .size = sizeof(_type), .align = __alignof__(_type), \
+ is_signed_type(_type), .filter_type = _filter_type },
+
++
++#undef __field_ext_packed
++#define __field_ext_packed(_type, _item, _filter_type) { \
++ .type = #_type, .name = #_item, \
++ .size = sizeof(_type), .align = 1, \
++ is_signed_type(_type), .filter_type = _filter_type },
++
+ #undef __field
+ #define __field(_type, _item) __field_ext(_type, _item, FILTER_OTHER)
+
+@@ -94,6 +104,9 @@ static void __always_unused ____ftrace_check_##name(void) \
+ #undef __field_desc
+ #define __field_desc(_type, _container, _item) __field_ext(_type, _item, FILTER_OTHER)
+
++#undef __field_packed
++#define __field_packed(_type, _container, _item) __field_ext_packed(_type, _item, FILTER_OTHER)
++
+ #undef __array
+ #define __array(_type, _item, _len) { \
+ .type = #_type"["__stringify(_len)"]", .name = #_item, \
+@@ -129,6 +142,9 @@ static struct trace_event_fields ftrace_event_fields_##name[] = { \
+ #undef __field_desc
+ #define __field_desc(type, container, item)
+
++#undef __field_packed
++#define __field_packed(type, container, item)
++
+ #undef __array
+ #define __array(type, item, len)
+
+diff --git a/kernel/trace/trace_kprobe.c b/kernel/trace/trace_kprobe.c
+index 35989383ae11..8eeb95e04bf5 100644
+--- a/kernel/trace/trace_kprobe.c
++++ b/kernel/trace/trace_kprobe.c
+@@ -1629,7 +1629,7 @@ int bpf_get_kprobe_info(const struct perf_event *event, u32 *fd_type,
+ if (perf_type_tracepoint)
+ tk = find_trace_kprobe(pevent, group);
+ else
+- tk = event->tp_event->data;
++ tk = trace_kprobe_primary_from_call(event->tp_event);
+ if (!tk)
+ return -EINVAL;
+
+diff --git a/kernel/trace/trace_probe.c b/kernel/trace/trace_probe.c
+index ab8b6436d53f..f98d6d94cbbf 100644
+--- a/kernel/trace/trace_probe.c
++++ b/kernel/trace/trace_probe.c
+@@ -639,8 +639,8 @@ static int traceprobe_parse_probe_arg_body(char *arg, ssize_t *size,
+ ret = -EINVAL;
+ goto fail;
+ }
+- if ((code->op == FETCH_OP_IMM || code->op == FETCH_OP_COMM) ||
+- parg->count) {
++ if ((code->op == FETCH_OP_IMM || code->op == FETCH_OP_COMM ||
++ code->op == FETCH_OP_DATA) || parg->count) {
+ /*
+ * IMM, DATA and COMM is pointing actual address, those
+ * must be kept, and if parg->count != 0, this is an
+diff --git a/kernel/trace/trace_uprobe.c b/kernel/trace/trace_uprobe.c
+index 2a8e8e9c1c75..fdd47f99b18f 100644
+--- a/kernel/trace/trace_uprobe.c
++++ b/kernel/trace/trace_uprobe.c
+@@ -1412,7 +1412,7 @@ int bpf_get_uprobe_info(const struct perf_event *event, u32 *fd_type,
+ if (perf_type_tracepoint)
+ tu = find_probe_event(pevent, group);
+ else
+- tu = event->tp_event->data;
++ tu = trace_uprobe_primary_from_call(event->tp_event);
+ if (!tu)
+ return -EINVAL;
+
+diff --git a/lib/zlib_inflate/inffast.c b/lib/zlib_inflate/inffast.c
+index 2c13ecc5bb2c..ed1f3df27260 100644
+--- a/lib/zlib_inflate/inffast.c
++++ b/lib/zlib_inflate/inffast.c
+@@ -10,17 +10,6 @@
+
+ #ifndef ASMINF
+
+-/* Allow machine dependent optimization for post-increment or pre-increment.
+- Based on testing to date,
+- Pre-increment preferred for:
+- - PowerPC G3 (Adler)
+- - MIPS R5000 (Randers-Pehrson)
+- Post-increment preferred for:
+- - none
+- No measurable difference:
+- - Pentium III (Anderson)
+- - M68060 (Nikl)
+- */
+ union uu {
+ unsigned short us;
+ unsigned char b[2];
+@@ -38,16 +27,6 @@ get_unaligned16(const unsigned short *p)
+ return mm.us;
+ }
+
+-#ifdef POSTINC
+-# define OFF 0
+-# define PUP(a) *(a)++
+-# define UP_UNALIGNED(a) get_unaligned16((a)++)
+-#else
+-# define OFF 1
+-# define PUP(a) *++(a)
+-# define UP_UNALIGNED(a) get_unaligned16(++(a))
+-#endif
+-
+ /*
+ Decode literal, length, and distance codes and write out the resulting
+ literal and match bytes until either not enough input or output is
+@@ -115,9 +94,9 @@ void inflate_fast(z_streamp strm, unsigned start)
+
+ /* copy state to local variables */
+ state = (struct inflate_state *)strm->state;
+- in = strm->next_in - OFF;
++ in = strm->next_in;
+ last = in + (strm->avail_in - 5);
+- out = strm->next_out - OFF;
++ out = strm->next_out;
+ beg = out - (start - strm->avail_out);
+ end = out + (strm->avail_out - 257);
+ #ifdef INFLATE_STRICT
+@@ -138,9 +117,9 @@ void inflate_fast(z_streamp strm, unsigned start)
+ input data or output space */
+ do {
+ if (bits < 15) {
+- hold += (unsigned long)(PUP(in)) << bits;
++ hold += (unsigned long)(*in++) << bits;
+ bits += 8;
+- hold += (unsigned long)(PUP(in)) << bits;
++ hold += (unsigned long)(*in++) << bits;
+ bits += 8;
+ }
+ this = lcode[hold & lmask];
+@@ -150,14 +129,14 @@ void inflate_fast(z_streamp strm, unsigned start)
+ bits -= op;
+ op = (unsigned)(this.op);
+ if (op == 0) { /* literal */
+- PUP(out) = (unsigned char)(this.val);
++ *out++ = (unsigned char)(this.val);
+ }
+ else if (op & 16) { /* length base */
+ len = (unsigned)(this.val);
+ op &= 15; /* number of extra bits */
+ if (op) {
+ if (bits < op) {
+- hold += (unsigned long)(PUP(in)) << bits;
++ hold += (unsigned long)(*in++) << bits;
+ bits += 8;
+ }
+ len += (unsigned)hold & ((1U << op) - 1);
+@@ -165,9 +144,9 @@ void inflate_fast(z_streamp strm, unsigned start)
+ bits -= op;
+ }
+ if (bits < 15) {
+- hold += (unsigned long)(PUP(in)) << bits;
++ hold += (unsigned long)(*in++) << bits;
+ bits += 8;
+- hold += (unsigned long)(PUP(in)) << bits;
++ hold += (unsigned long)(*in++) << bits;
+ bits += 8;
+ }
+ this = dcode[hold & dmask];
+@@ -180,10 +159,10 @@ void inflate_fast(z_streamp strm, unsigned start)
+ dist = (unsigned)(this.val);
+ op &= 15; /* number of extra bits */
+ if (bits < op) {
+- hold += (unsigned long)(PUP(in)) << bits;
++ hold += (unsigned long)(*in++) << bits;
+ bits += 8;
+ if (bits < op) {
+- hold += (unsigned long)(PUP(in)) << bits;
++ hold += (unsigned long)(*in++) << bits;
+ bits += 8;
+ }
+ }
+@@ -205,13 +184,13 @@ void inflate_fast(z_streamp strm, unsigned start)
+ state->mode = BAD;
+ break;
+ }
+- from = window - OFF;
++ from = window;
+ if (write == 0) { /* very common case */
+ from += wsize - op;
+ if (op < len) { /* some from window */
+ len -= op;
+ do {
+- PUP(out) = PUP(from);
++ *out++ = *from++;
+ } while (--op);
+ from = out - dist; /* rest from output */
+ }
+@@ -222,14 +201,14 @@ void inflate_fast(z_streamp strm, unsigned start)
+ if (op < len) { /* some from end of window */
+ len -= op;
+ do {
+- PUP(out) = PUP(from);
++ *out++ = *from++;
+ } while (--op);
+- from = window - OFF;
++ from = window;
+ if (write < len) { /* some from start of window */
+ op = write;
+ len -= op;
+ do {
+- PUP(out) = PUP(from);
++ *out++ = *from++;
+ } while (--op);
+ from = out - dist; /* rest from output */
+ }
+@@ -240,21 +219,21 @@ void inflate_fast(z_streamp strm, unsigned start)
+ if (op < len) { /* some from window */
+ len -= op;
+ do {
+- PUP(out) = PUP(from);
++ *out++ = *from++;
+ } while (--op);
+ from = out - dist; /* rest from output */
+ }
+ }
+ while (len > 2) {
+- PUP(out) = PUP(from);
+- PUP(out) = PUP(from);
+- PUP(out) = PUP(from);
++ *out++ = *from++;
++ *out++ = *from++;
++ *out++ = *from++;
+ len -= 3;
+ }
+ if (len) {
+- PUP(out) = PUP(from);
++ *out++ = *from++;
+ if (len > 1)
+- PUP(out) = PUP(from);
++ *out++ = *from++;
+ }
+ }
+ else {
+@@ -264,29 +243,29 @@ void inflate_fast(z_streamp strm, unsigned start)
+ from = out - dist; /* copy direct from output */
+ /* minimum length is three */
+ /* Align out addr */
+- if (!((long)(out - 1 + OFF) & 1)) {
+- PUP(out) = PUP(from);
++ if (!((long)(out - 1) & 1)) {
++ *out++ = *from++;
+ len--;
+ }
+- sout = (unsigned short *)(out - OFF);
++ sout = (unsigned short *)(out);
+ if (dist > 2) {
+ unsigned short *sfrom;
+
+- sfrom = (unsigned short *)(from - OFF);
++ sfrom = (unsigned short *)(from);
+ loops = len >> 1;
+ do
+ #ifdef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
+- PUP(sout) = PUP(sfrom);
++ *sout++ = *sfrom++;
+ #else
+- PUP(sout) = UP_UNALIGNED(sfrom);
++ *sout++ = get_unaligned16(sfrom++);
+ #endif
+ while (--loops);
+- out = (unsigned char *)sout + OFF;
+- from = (unsigned char *)sfrom + OFF;
++ out = (unsigned char *)sout;
++ from = (unsigned char *)sfrom;
+ } else { /* dist == 1 or dist == 2 */
+ unsigned short pat16;
+
+- pat16 = *(sout-1+OFF);
++ pat16 = *(sout-1);
+ if (dist == 1) {
+ union uu mm;
+ /* copy one char pattern to both bytes */
+@@ -296,12 +275,12 @@ void inflate_fast(z_streamp strm, unsigned start)
+ }
+ loops = len >> 1;
+ do
+- PUP(sout) = pat16;
++ *sout++ = pat16;
+ while (--loops);
+- out = (unsigned char *)sout + OFF;
++ out = (unsigned char *)sout;
+ }
+ if (len & 1)
+- PUP(out) = PUP(from);
++ *out++ = *from++;
+ }
+ }
+ else if ((op & 64) == 0) { /* 2nd level distance code */
+@@ -336,8 +315,8 @@ void inflate_fast(z_streamp strm, unsigned start)
+ hold &= (1U << bits) - 1;
+
+ /* update state and return */
+- strm->next_in = in + OFF;
+- strm->next_out = out + OFF;
++ strm->next_in = in;
++ strm->next_out = out;
+ strm->avail_in = (unsigned)(in < last ? 5 + (last - in) : 5 - (in - last));
+ strm->avail_out = (unsigned)(out < end ?
+ 257 + (end - out) : 257 - (out - end));
+diff --git a/net/core/dev.c b/net/core/dev.c
+index 2d8aceee4284..93a279ab4e97 100644
+--- a/net/core/dev.c
++++ b/net/core/dev.c
+@@ -79,6 +79,7 @@
+ #include <linux/sched.h>
+ #include <linux/sched/mm.h>
+ #include <linux/mutex.h>
++#include <linux/rwsem.h>
+ #include <linux/string.h>
+ #include <linux/mm.h>
+ #include <linux/socket.h>
+@@ -194,7 +195,7 @@ static DEFINE_SPINLOCK(napi_hash_lock);
+ static unsigned int napi_gen_id = NR_CPUS;
+ static DEFINE_READ_MOSTLY_HASHTABLE(napi_hash, 8);
+
+-static seqcount_t devnet_rename_seq;
++static DECLARE_RWSEM(devnet_rename_sem);
+
+ static inline void dev_base_seq_inc(struct net *net)
+ {
+@@ -930,33 +931,28 @@ EXPORT_SYMBOL(dev_get_by_napi_id);
+ * @net: network namespace
+ * @name: a pointer to the buffer where the name will be stored.
+ * @ifindex: the ifindex of the interface to get the name from.
+- *
+- * The use of raw_seqcount_begin() and cond_resched() before
+- * retrying is required as we want to give the writers a chance
+- * to complete when CONFIG_PREEMPTION is not set.
+ */
+ int netdev_get_name(struct net *net, char *name, int ifindex)
+ {
+ struct net_device *dev;
+- unsigned int seq;
++ int ret;
+
+-retry:
+- seq = raw_seqcount_begin(&devnet_rename_seq);
++ down_read(&devnet_rename_sem);
+ rcu_read_lock();
++
+ dev = dev_get_by_index_rcu(net, ifindex);
+ if (!dev) {
+- rcu_read_unlock();
+- return -ENODEV;
++ ret = -ENODEV;
++ goto out;
+ }
+
+ strcpy(name, dev->name);
+- rcu_read_unlock();
+- if (read_seqcount_retry(&devnet_rename_seq, seq)) {
+- cond_resched();
+- goto retry;
+- }
+
+- return 0;
++ ret = 0;
++out:
++ rcu_read_unlock();
++ up_read(&devnet_rename_sem);
++ return ret;
+ }
+
+ /**
+@@ -1228,10 +1224,10 @@ int dev_change_name(struct net_device *dev, const char *newname)
+ likely(!(dev->priv_flags & IFF_LIVE_RENAME_OK)))
+ return -EBUSY;
+
+- write_seqcount_begin(&devnet_rename_seq);
++ down_write(&devnet_rename_sem);
+
+ if (strncmp(newname, dev->name, IFNAMSIZ) == 0) {
+- write_seqcount_end(&devnet_rename_seq);
++ up_write(&devnet_rename_sem);
+ return 0;
+ }
+
+@@ -1239,7 +1235,7 @@ int dev_change_name(struct net_device *dev, const char *newname)
+
+ err = dev_get_valid_name(net, dev, newname);
+ if (err < 0) {
+- write_seqcount_end(&devnet_rename_seq);
++ up_write(&devnet_rename_sem);
+ return err;
+ }
+
+@@ -1254,11 +1250,11 @@ rollback:
+ if (ret) {
+ memcpy(dev->name, oldname, IFNAMSIZ);
+ dev->name_assign_type = old_assign_type;
+- write_seqcount_end(&devnet_rename_seq);
++ up_write(&devnet_rename_sem);
+ return ret;
+ }
+
+- write_seqcount_end(&devnet_rename_seq);
++ up_write(&devnet_rename_sem);
+
+ netdev_adjacent_rename_links(dev, oldname);
+
+@@ -1279,7 +1275,7 @@ rollback:
+ /* err >= 0 after dev_alloc_name() or stores the first errno */
+ if (err >= 0) {
+ err = ret;
+- write_seqcount_begin(&devnet_rename_seq);
++ down_write(&devnet_rename_sem);
+ memcpy(dev->name, oldname, IFNAMSIZ);
+ memcpy(oldname, newname, IFNAMSIZ);
+ dev->name_assign_type = old_assign_type;
+diff --git a/net/core/filter.c b/net/core/filter.c
+index 11b97c31bca5..9512a9772d69 100644
+--- a/net/core/filter.c
++++ b/net/core/filter.c
+@@ -1766,25 +1766,27 @@ BPF_CALL_5(bpf_skb_load_bytes_relative, const struct sk_buff *, skb,
+ u32, offset, void *, to, u32, len, u32, start_header)
+ {
+ u8 *end = skb_tail_pointer(skb);
+- u8 *net = skb_network_header(skb);
+- u8 *mac = skb_mac_header(skb);
+- u8 *ptr;
++ u8 *start, *ptr;
+
+- if (unlikely(offset > 0xffff || len > (end - mac)))
++ if (unlikely(offset > 0xffff))
+ goto err_clear;
+
+ switch (start_header) {
+ case BPF_HDR_START_MAC:
+- ptr = mac + offset;
++ if (unlikely(!skb_mac_header_was_set(skb)))
++ goto err_clear;
++ start = skb_mac_header(skb);
+ break;
+ case BPF_HDR_START_NET:
+- ptr = net + offset;
++ start = skb_network_header(skb);
+ break;
+ default:
+ goto err_clear;
+ }
+
+- if (likely(ptr >= mac && ptr + len <= end)) {
++ ptr = start + offset;
++
++ if (likely(ptr + len <= end)) {
+ memcpy(to, ptr, len);
+ return 0;
+ }
+diff --git a/net/core/sock_map.c b/net/core/sock_map.c
+index b08dfae10f88..591457fcbd02 100644
+--- a/net/core/sock_map.c
++++ b/net/core/sock_map.c
+@@ -417,10 +417,7 @@ static int sock_map_get_next_key(struct bpf_map *map, void *key, void *next)
+ return 0;
+ }
+
+-static bool sock_map_redirect_allowed(const struct sock *sk)
+-{
+- return sk->sk_state != TCP_LISTEN;
+-}
++static bool sock_map_redirect_allowed(const struct sock *sk);
+
+ static int sock_map_update_common(struct bpf_map *map, u32 idx,
+ struct sock *sk, u64 flags)
+@@ -501,6 +498,11 @@ static bool sk_is_udp(const struct sock *sk)
+ sk->sk_protocol == IPPROTO_UDP;
+ }
+
++static bool sock_map_redirect_allowed(const struct sock *sk)
++{
++ return sk_is_tcp(sk) && sk->sk_state != TCP_LISTEN;
++}
++
+ static bool sock_map_sk_is_suitable(const struct sock *sk)
+ {
+ return sk_is_tcp(sk) || sk_is_udp(sk);
+@@ -982,11 +984,15 @@ static struct bpf_map *sock_hash_alloc(union bpf_attr *attr)
+ err = -EINVAL;
+ goto free_htab;
+ }
++ err = bpf_map_charge_init(&htab->map.memory, cost);
++ if (err)
++ goto free_htab;
+
+ htab->buckets = bpf_map_area_alloc(htab->buckets_num *
+ sizeof(struct bpf_htab_bucket),
+ htab->map.numa_node);
+ if (!htab->buckets) {
++ bpf_map_charge_finish(&htab->map.memory);
+ err = -ENOMEM;
+ goto free_htab;
+ }
+@@ -1006,6 +1012,7 @@ static void sock_hash_free(struct bpf_map *map)
+ {
+ struct bpf_htab *htab = container_of(map, struct bpf_htab, map);
+ struct bpf_htab_bucket *bucket;
++ struct hlist_head unlink_list;
+ struct bpf_htab_elem *elem;
+ struct hlist_node *node;
+ int i;
+@@ -1017,13 +1024,32 @@ static void sock_hash_free(struct bpf_map *map)
+ synchronize_rcu();
+ for (i = 0; i < htab->buckets_num; i++) {
+ bucket = sock_hash_select_bucket(htab, i);
+- hlist_for_each_entry_safe(elem, node, &bucket->head, node) {
+- hlist_del_rcu(&elem->node);
++
++ /* We are racing with sock_hash_delete_from_link to
++ * enter the spin-lock critical section. Every socket on
++ * the list is still linked to sockhash. Since link
++ * exists, psock exists and holds a ref to socket. That
++ * lets us to grab a socket ref too.
++ */
++ raw_spin_lock_bh(&bucket->lock);
++ hlist_for_each_entry(elem, &bucket->head, node)
++ sock_hold(elem->sk);
++ hlist_move_list(&bucket->head, &unlink_list);
++ raw_spin_unlock_bh(&bucket->lock);
++
++ /* Process removed entries out of atomic context to
++ * block for socket lock before deleting the psock's
++ * link to sockhash.
++ */
++ hlist_for_each_entry_safe(elem, node, &unlink_list, node) {
++ hlist_del(&elem->node);
+ lock_sock(elem->sk);
+ rcu_read_lock();
+ sock_map_unref(elem->sk, elem);
+ rcu_read_unlock();
+ release_sock(elem->sk);
++ sock_put(elem->sk);
++ sock_hash_free_elem(htab, elem);
+ }
+ }
+
+diff --git a/net/ipv4/tcp_bpf.c b/net/ipv4/tcp_bpf.c
+index 629aaa9a1eb9..7aa68f4aae6c 100644
+--- a/net/ipv4/tcp_bpf.c
++++ b/net/ipv4/tcp_bpf.c
+@@ -64,6 +64,9 @@ int __tcp_bpf_recvmsg(struct sock *sk, struct sk_psock *psock,
+ } while (i != msg_rx->sg.end);
+
+ if (unlikely(peek)) {
++ if (msg_rx == list_last_entry(&psock->ingress_msg,
++ struct sk_msg, list))
++ break;
+ msg_rx = list_next_entry(msg_rx, list);
+ continue;
+ }
+@@ -242,6 +245,9 @@ static int tcp_bpf_wait_data(struct sock *sk, struct sk_psock *psock,
+ DEFINE_WAIT_FUNC(wait, woken_wake_function);
+ int ret = 0;
+
++ if (sk->sk_shutdown & RCV_SHUTDOWN)
++ return 1;
++
+ if (!timeo)
+ return ret;
+
+diff --git a/net/netfilter/nft_set_pipapo.c b/net/netfilter/nft_set_pipapo.c
+index 8b5acc6910fd..8c04388296b0 100644
+--- a/net/netfilter/nft_set_pipapo.c
++++ b/net/netfilter/nft_set_pipapo.c
+@@ -1242,7 +1242,9 @@ static int nft_pipapo_insert(const struct net *net, const struct nft_set *set,
+ end += NFT_PIPAPO_GROUPS_PADDED_SIZE(f);
+ }
+
+- if (!*this_cpu_ptr(m->scratch) || bsize_max > m->bsize_max) {
++ if (!*get_cpu_ptr(m->scratch) || bsize_max > m->bsize_max) {
++ put_cpu_ptr(m->scratch);
++
+ err = pipapo_realloc_scratch(m, bsize_max);
+ if (err)
+ return err;
+@@ -1250,6 +1252,8 @@ static int nft_pipapo_insert(const struct net *net, const struct nft_set *set,
+ this_cpu_write(nft_pipapo_scratch_index, false);
+
+ m->bsize_max = bsize_max;
++ } else {
++ put_cpu_ptr(m->scratch);
+ }
+
+ *ext2 = &e->ext;
+diff --git a/net/netfilter/nft_set_rbtree.c b/net/netfilter/nft_set_rbtree.c
+index 62f416bc0579..b6aad3fc46c3 100644
+--- a/net/netfilter/nft_set_rbtree.c
++++ b/net/netfilter/nft_set_rbtree.c
+@@ -271,12 +271,14 @@ static int __nft_rbtree_insert(const struct net *net, const struct nft_set *set,
+
+ if (nft_rbtree_interval_start(new)) {
+ if (nft_rbtree_interval_end(rbe) &&
+- nft_set_elem_active(&rbe->ext, genmask))
++ nft_set_elem_active(&rbe->ext, genmask) &&
++ !nft_set_elem_expired(&rbe->ext))
+ overlap = false;
+ } else {
+ overlap = nft_rbtree_interval_end(rbe) &&
+ nft_set_elem_active(&rbe->ext,
+- genmask);
++ genmask) &&
++ !nft_set_elem_expired(&rbe->ext);
+ }
+ } else if (d > 0) {
+ p = &parent->rb_right;
+@@ -284,9 +286,11 @@ static int __nft_rbtree_insert(const struct net *net, const struct nft_set *set,
+ if (nft_rbtree_interval_end(new)) {
+ overlap = nft_rbtree_interval_end(rbe) &&
+ nft_set_elem_active(&rbe->ext,
+- genmask);
++ genmask) &&
++ !nft_set_elem_expired(&rbe->ext);
+ } else if (nft_rbtree_interval_end(rbe) &&
+- nft_set_elem_active(&rbe->ext, genmask)) {
++ nft_set_elem_active(&rbe->ext, genmask) &&
++ !nft_set_elem_expired(&rbe->ext)) {
+ overlap = true;
+ }
+ } else {
+@@ -294,15 +298,18 @@ static int __nft_rbtree_insert(const struct net *net, const struct nft_set *set,
+ nft_rbtree_interval_start(new)) {
+ p = &parent->rb_left;
+
+- if (nft_set_elem_active(&rbe->ext, genmask))
++ if (nft_set_elem_active(&rbe->ext, genmask) &&
++ !nft_set_elem_expired(&rbe->ext))
+ overlap = false;
+ } else if (nft_rbtree_interval_start(rbe) &&
+ nft_rbtree_interval_end(new)) {
+ p = &parent->rb_right;
+
+- if (nft_set_elem_active(&rbe->ext, genmask))
++ if (nft_set_elem_active(&rbe->ext, genmask) &&
++ !nft_set_elem_expired(&rbe->ext))
+ overlap = false;
+- } else if (nft_set_elem_active(&rbe->ext, genmask)) {
++ } else if (nft_set_elem_active(&rbe->ext, genmask) &&
++ !nft_set_elem_expired(&rbe->ext)) {
+ *ext = &rbe->ext;
+ return -EEXIST;
+ } else {
+diff --git a/net/rxrpc/proc.c b/net/rxrpc/proc.c
+index 8b179e3c802a..543afd9bd664 100644
+--- a/net/rxrpc/proc.c
++++ b/net/rxrpc/proc.c
+@@ -68,7 +68,7 @@ static int rxrpc_call_seq_show(struct seq_file *seq, void *v)
+ "Proto Local "
+ " Remote "
+ " SvID ConnID CallID End Use State Abort "
+- " UserID TxSeq TW RxSeq RW RxSerial RxTimo\n");
++ " DebugId TxSeq TW RxSeq RW RxSerial RxTimo\n");
+ return 0;
+ }
+
+@@ -100,7 +100,7 @@ static int rxrpc_call_seq_show(struct seq_file *seq, void *v)
+ rx_hard_ack = READ_ONCE(call->rx_hard_ack);
+ seq_printf(seq,
+ "UDP %-47.47s %-47.47s %4x %08x %08x %s %3u"
+- " %-8.8s %08x %lx %08x %02x %08x %02x %08x %06lx\n",
++ " %-8.8s %08x %08x %08x %02x %08x %02x %08x %06lx\n",
+ lbuff,
+ rbuff,
+ call->service_id,
+@@ -110,7 +110,7 @@ static int rxrpc_call_seq_show(struct seq_file *seq, void *v)
+ atomic_read(&call->usage),
+ rxrpc_call_states[call->state],
+ call->abort_code,
+- call->user_call_ID,
++ call->debug_id,
+ tx_hard_ack, READ_ONCE(call->tx_top) - tx_hard_ack,
+ rx_hard_ack, READ_ONCE(call->rx_top) - rx_hard_ack,
+ call->rx_serial,
+diff --git a/net/sunrpc/addr.c b/net/sunrpc/addr.c
+index 8b4d72b1a066..010dcb876f9d 100644
+--- a/net/sunrpc/addr.c
++++ b/net/sunrpc/addr.c
+@@ -82,11 +82,11 @@ static size_t rpc_ntop6(const struct sockaddr *sap,
+
+ rc = snprintf(scopebuf, sizeof(scopebuf), "%c%u",
+ IPV6_SCOPE_DELIMITER, sin6->sin6_scope_id);
+- if (unlikely((size_t)rc > sizeof(scopebuf)))
++ if (unlikely((size_t)rc >= sizeof(scopebuf)))
+ return 0;
+
+ len += rc;
+- if (unlikely(len > buflen))
++ if (unlikely(len >= buflen))
+ return 0;
+
+ strcat(buf, scopebuf);
+diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
+index c350108aa38d..a4676107fad0 100644
+--- a/net/xdp/xsk.c
++++ b/net/xdp/xsk.c
+@@ -397,10 +397,8 @@ static int xsk_generic_xmit(struct sock *sk)
+
+ len = desc.len;
+ skb = sock_alloc_send_skb(sk, len, 1, &err);
+- if (unlikely(!skb)) {
+- err = -EAGAIN;
++ if (unlikely(!skb))
+ goto out;
+- }
+
+ skb_put(skb, len);
+ addr = desc.addr;
+diff --git a/samples/ftrace/sample-trace-array.c b/samples/ftrace/sample-trace-array.c
+index d523450d73eb..6aba02a31c96 100644
+--- a/samples/ftrace/sample-trace-array.c
++++ b/samples/ftrace/sample-trace-array.c
+@@ -6,6 +6,7 @@
+ #include <linux/timer.h>
+ #include <linux/err.h>
+ #include <linux/jiffies.h>
++#include <linux/workqueue.h>
+
+ /*
+ * Any file that uses trace points, must include the header.
+@@ -20,6 +21,16 @@ struct trace_array *tr;
+ static void mytimer_handler(struct timer_list *unused);
+ static struct task_struct *simple_tsk;
+
++static void trace_work_fn(struct work_struct *work)
++{
++ /*
++ * Disable tracing for event "sample_event".
++ */
++ trace_array_set_clr_event(tr, "sample-subsystem", "sample_event",
++ false);
++}
++static DECLARE_WORK(trace_work, trace_work_fn);
++
+ /*
+ * mytimer: Timer setup to disable tracing for event "sample_event". This
+ * timer is only for the purposes of the sample module to demonstrate access of
+@@ -29,11 +40,7 @@ static DEFINE_TIMER(mytimer, mytimer_handler);
+
+ static void mytimer_handler(struct timer_list *unused)
+ {
+- /*
+- * Disable tracing for event "sample_event".
+- */
+- trace_array_set_clr_event(tr, "sample-subsystem", "sample_event",
+- false);
++ schedule_work(&trace_work);
+ }
+
+ static void simple_thread_func(int count)
+@@ -76,6 +83,7 @@ static int simple_thread(void *arg)
+ simple_thread_func(count++);
+
+ del_timer(&mytimer);
++ cancel_work_sync(&trace_work);
+
+ /*
+ * trace_array_put() decrements the reference counter associated with
+@@ -107,8 +115,12 @@ static int __init sample_trace_array_init(void)
+ trace_printk_init_buffers();
+
+ simple_tsk = kthread_run(simple_thread, NULL, "sample-instance");
+- if (IS_ERR(simple_tsk))
++ if (IS_ERR(simple_tsk)) {
++ trace_array_put(tr);
++ trace_array_destroy(tr);
+ return -1;
++ }
++
+ return 0;
+ }
+
+diff --git a/scripts/Makefile.modpost b/scripts/Makefile.modpost
+index 957eed6a17a5..33aaa572f686 100644
+--- a/scripts/Makefile.modpost
++++ b/scripts/Makefile.modpost
+@@ -66,7 +66,7 @@ __modpost:
+
+ else
+
+-MODPOST += $(subst -i,-n,$(filter -i,$(MAKEFLAGS))) -s -T - \
++MODPOST += -s -T - \
+ $(if $(KBUILD_NSDEPS),-d $(MODULES_NSDEPS))
+
+ ifeq ($(KBUILD_EXTMOD),)
+@@ -82,6 +82,11 @@ include $(if $(wildcard $(KBUILD_EXTMOD)/Kbuild), \
+ $(KBUILD_EXTMOD)/Kbuild, $(KBUILD_EXTMOD)/Makefile)
+ endif
+
++# 'make -i -k' ignores compile errors, and builds as many modules as possible.
++ifneq ($(findstring i,$(filter-out --%,$(MAKEFLAGS))),)
++MODPOST += -n
++endif
++
+ # find all modules listed in modules.order
+ modules := $(sort $(shell cat $(MODORDER)))
+
+diff --git a/scripts/headers_install.sh b/scripts/headers_install.sh
+index a07668a5c36b..94a833597a88 100755
+--- a/scripts/headers_install.sh
++++ b/scripts/headers_install.sh
+@@ -64,7 +64,7 @@ configs=$(sed -e '
+ d
+ ' $OUTFILE)
+
+-# The entries in the following list are not warned.
++# The entries in the following list do not result in an error.
+ # Please do not add a new entry. This list is only for existing ones.
+ # The list will be reduced gradually, and deleted eventually. (hopefully)
+ #
+@@ -98,18 +98,19 @@ include/uapi/linux/raw.h:CONFIG_MAX_RAW_DEVS
+
+ for c in $configs
+ do
+- warn=1
++ leak_error=1
+
+ for ignore in $config_leak_ignores
+ do
+ if echo "$INFILE:$c" | grep -q "$ignore$"; then
+- warn=
++ leak_error=
+ break
+ fi
+ done
+
+- if [ "$warn" = 1 ]; then
+- echo "warning: $INFILE: leak $c to user-space" >&2
++ if [ "$leak_error" = 1 ]; then
++ echo "error: $INFILE: leak $c to user-space" >&2
++ exit 1
+ fi
+ done
+
+diff --git a/scripts/mksysmap b/scripts/mksysmap
+index a35acc0d0b82..9aa23d15862a 100755
+--- a/scripts/mksysmap
++++ b/scripts/mksysmap
+@@ -41,4 +41,4 @@
+ # so we just ignore them to let readprofile continue to work.
+ # (At least sparc64 has __crc_ in the middle).
+
+-$NM -n $1 | grep -v '\( [aNUw] \)\|\(__crc_\)\|\( \$[adt]\)\|\( .L\)' > $2
++$NM -n $1 | grep -v '\( [aNUw] \)\|\(__crc_\)\|\( \$[adt]\)\|\( \.L\)' > $2
+diff --git a/security/apparmor/domain.c b/security/apparmor/domain.c
+index a84ef030fbd7..4cfa58c07778 100644
+--- a/security/apparmor/domain.c
++++ b/security/apparmor/domain.c
+@@ -929,7 +929,8 @@ int apparmor_bprm_set_creds(struct linux_binprm *bprm)
+ * aways results in a further reduction of permissions.
+ */
+ if ((bprm->unsafe & LSM_UNSAFE_NO_NEW_PRIVS) &&
+- !unconfined(label) && !aa_label_is_subset(new, ctx->nnp)) {
++ !unconfined(label) &&
++ !aa_label_is_unconfined_subset(new, ctx->nnp)) {
+ error = -EPERM;
+ info = "no new privs";
+ goto audit;
+@@ -1207,7 +1208,7 @@ int aa_change_hat(const char *hats[], int count, u64 token, int flags)
+ * reduce restrictions.
+ */
+ if (task_no_new_privs(current) && !unconfined(label) &&
+- !aa_label_is_subset(new, ctx->nnp)) {
++ !aa_label_is_unconfined_subset(new, ctx->nnp)) {
+ /* not an apparmor denial per se, so don't log it */
+ AA_DEBUG("no_new_privs - change_hat denied");
+ error = -EPERM;
+@@ -1228,7 +1229,7 @@ int aa_change_hat(const char *hats[], int count, u64 token, int flags)
+ * reduce restrictions.
+ */
+ if (task_no_new_privs(current) && !unconfined(label) &&
+- !aa_label_is_subset(previous, ctx->nnp)) {
++ !aa_label_is_unconfined_subset(previous, ctx->nnp)) {
+ /* not an apparmor denial per se, so don't log it */
+ AA_DEBUG("no_new_privs - change_hat denied");
+ error = -EPERM;
+@@ -1423,7 +1424,7 @@ check:
+ * reduce restrictions.
+ */
+ if (task_no_new_privs(current) && !unconfined(label) &&
+- !aa_label_is_subset(new, ctx->nnp)) {
++ !aa_label_is_unconfined_subset(new, ctx->nnp)) {
+ /* not an apparmor denial per se, so don't log it */
+ AA_DEBUG("no_new_privs - change_hat denied");
+ error = -EPERM;
+diff --git a/security/apparmor/include/label.h b/security/apparmor/include/label.h
+index 47942c4ba7ca..255764ab06e2 100644
+--- a/security/apparmor/include/label.h
++++ b/security/apparmor/include/label.h
+@@ -281,6 +281,7 @@ bool aa_label_init(struct aa_label *label, int size, gfp_t gfp);
+ struct aa_label *aa_label_alloc(int size, struct aa_proxy *proxy, gfp_t gfp);
+
+ bool aa_label_is_subset(struct aa_label *set, struct aa_label *sub);
++bool aa_label_is_unconfined_subset(struct aa_label *set, struct aa_label *sub);
+ struct aa_profile *__aa_label_next_not_in_set(struct label_it *I,
+ struct aa_label *set,
+ struct aa_label *sub);
+diff --git a/security/apparmor/label.c b/security/apparmor/label.c
+index 470693239e64..5f324d63ceaa 100644
+--- a/security/apparmor/label.c
++++ b/security/apparmor/label.c
+@@ -550,6 +550,39 @@ bool aa_label_is_subset(struct aa_label *set, struct aa_label *sub)
+ return __aa_label_next_not_in_set(&i, set, sub) == NULL;
+ }
+
++/**
++ * aa_label_is_unconfined_subset - test if @sub is a subset of @set
++ * @set: label to test against
++ * @sub: label to test if is subset of @set
++ *
++ * This checks for subset but taking into account unconfined. IF
++ * @sub contains an unconfined profile that does not have a matching
++ * unconfined in @set then this will not cause the test to fail.
++ * Conversely we don't care about an unconfined in @set that is not in
++ * @sub
++ *
++ * Returns: true if @sub is special_subset of @set
++ * else false
++ */
++bool aa_label_is_unconfined_subset(struct aa_label *set, struct aa_label *sub)
++{
++ struct label_it i = { };
++ struct aa_profile *p;
++
++ AA_BUG(!set);
++ AA_BUG(!sub);
++
++ if (sub == set)
++ return true;
++
++ do {
++ p = __aa_label_next_not_in_set(&i, set, sub);
++ if (p && !profile_unconfined(p))
++ break;
++ } while (p);
++
++ return p == NULL;
++}
+
+
+ /**
+@@ -1531,13 +1564,13 @@ static const char *label_modename(struct aa_ns *ns, struct aa_label *label,
+
+ label_for_each(i, label, profile) {
+ if (aa_ns_visible(ns, profile->ns, flags & FLAG_VIEW_SUBNS)) {
+- if (profile->mode == APPARMOR_UNCONFINED)
++ count++;
++ if (profile == profile->ns->unconfined)
+ /* special case unconfined so stacks with
+ * unconfined don't report as mixed. ie.
+ * profile_foo//&:ns1:unconfined (mixed)
+ */
+ continue;
+- count++;
+ if (mode == -1)
+ mode = profile->mode;
+ else if (mode != profile->mode)
+diff --git a/security/apparmor/lsm.c b/security/apparmor/lsm.c
+index b621ad74f54a..66a8504c8bea 100644
+--- a/security/apparmor/lsm.c
++++ b/security/apparmor/lsm.c
+@@ -804,7 +804,12 @@ static void apparmor_sk_clone_security(const struct sock *sk,
+ struct aa_sk_ctx *ctx = SK_CTX(sk);
+ struct aa_sk_ctx *new = SK_CTX(newsk);
+
++ if (new->label)
++ aa_put_label(new->label);
+ new->label = aa_get_label(ctx->label);
++
++ if (new->peer)
++ aa_put_label(new->peer);
+ new->peer = aa_get_label(ctx->peer);
+ }
+
+diff --git a/security/selinux/ss/conditional.c b/security/selinux/ss/conditional.c
+index da94a1b4bfda..0cc7cdd58465 100644
+--- a/security/selinux/ss/conditional.c
++++ b/security/selinux/ss/conditional.c
+@@ -27,6 +27,9 @@ static int cond_evaluate_expr(struct policydb *p, struct cond_expr *expr)
+ int s[COND_EXPR_MAXDEPTH];
+ int sp = -1;
+
++ if (expr->len == 0)
++ return -1;
++
+ for (i = 0; i < expr->len; i++) {
+ struct cond_expr_node *node = &expr->nodes[i];
+
+@@ -392,27 +395,19 @@ static int cond_read_node(struct policydb *p, struct cond_node *node, void *fp)
+
+ rc = next_entry(buf, fp, sizeof(u32) * 2);
+ if (rc)
+- goto err;
++ return rc;
+
+ expr->expr_type = le32_to_cpu(buf[0]);
+ expr->bool = le32_to_cpu(buf[1]);
+
+- if (!expr_node_isvalid(p, expr)) {
+- rc = -EINVAL;
+- goto err;
+- }
++ if (!expr_node_isvalid(p, expr))
++ return -EINVAL;
+ }
+
+ rc = cond_read_av_list(p, fp, &node->true_list, NULL);
+ if (rc)
+- goto err;
+- rc = cond_read_av_list(p, fp, &node->false_list, &node->true_list);
+- if (rc)
+- goto err;
+- return 0;
+-err:
+- cond_node_destroy(node);
+- return rc;
++ return rc;
++ return cond_read_av_list(p, fp, &node->false_list, &node->true_list);
+ }
+
+ int cond_read_list(struct policydb *p, void *fp)
+diff --git a/security/selinux/ss/services.c b/security/selinux/ss/services.c
+index 8ad34fd031d1..77e591fce919 100644
+--- a/security/selinux/ss/services.c
++++ b/security/selinux/ss/services.c
+@@ -2923,8 +2923,12 @@ err:
+ if (*names) {
+ for (i = 0; i < *len; i++)
+ kfree((*names)[i]);
++ kfree(*names);
+ }
+ kfree(*values);
++ *len = 0;
++ *names = NULL;
++ *values = NULL;
+ goto out;
+ }
+
+diff --git a/sound/firewire/amdtp-am824.c b/sound/firewire/amdtp-am824.c
+index 67d735e9a6a4..fea92e148790 100644
+--- a/sound/firewire/amdtp-am824.c
++++ b/sound/firewire/amdtp-am824.c
+@@ -82,7 +82,8 @@ int amdtp_am824_set_parameters(struct amdtp_stream *s, unsigned int rate,
+ if (err < 0)
+ return err;
+
+- s->ctx_data.rx.fdf = AMDTP_FDF_AM824 | s->sfc;
++ if (s->direction == AMDTP_OUT_STREAM)
++ s->ctx_data.rx.fdf = AMDTP_FDF_AM824 | s->sfc;
+
+ p->pcm_channels = pcm_channels;
+ p->midi_ports = midi_ports;
+diff --git a/sound/isa/wavefront/wavefront_synth.c b/sound/isa/wavefront/wavefront_synth.c
+index c5b1d5900eed..d6420d224d09 100644
+--- a/sound/isa/wavefront/wavefront_synth.c
++++ b/sound/isa/wavefront/wavefront_synth.c
+@@ -1171,7 +1171,10 @@ wavefront_send_alias (snd_wavefront_t *dev, wavefront_patch_info *header)
+ "alias for %d\n",
+ header->number,
+ header->hdr.a.OriginalSample);
+-
++
++ if (header->number >= WF_MAX_SAMPLE)
++ return -EINVAL;
++
+ munge_int32 (header->number, &alias_hdr[0], 2);
+ munge_int32 (header->hdr.a.OriginalSample, &alias_hdr[2], 2);
+ munge_int32 (*((unsigned int *)&header->hdr.a.sampleStartOffset),
+@@ -1202,6 +1205,9 @@ wavefront_send_multisample (snd_wavefront_t *dev, wavefront_patch_info *header)
+ int num_samples;
+ unsigned char *msample_hdr;
+
++ if (header->number >= WF_MAX_SAMPLE)
++ return -EINVAL;
++
+ msample_hdr = kmalloc(WF_MSAMPLE_BYTES, GFP_KERNEL);
+ if (! msample_hdr)
+ return -ENOMEM;
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 2c4575909441..e057ecb5a904 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -81,6 +81,7 @@ struct alc_spec {
+
+ /* mute LED for HP laptops, see alc269_fixup_mic_mute_hook() */
+ int mute_led_polarity;
++ int micmute_led_polarity;
+ hda_nid_t mute_led_nid;
+ hda_nid_t cap_mute_led_nid;
+
+@@ -4080,11 +4081,9 @@ static void alc269_fixup_hp_mute_led_mic3(struct hda_codec *codec,
+
+ /* update LED status via GPIO */
+ static void alc_update_gpio_led(struct hda_codec *codec, unsigned int mask,
+- bool enabled)
++ int polarity, bool enabled)
+ {
+- struct alc_spec *spec = codec->spec;
+-
+- if (spec->mute_led_polarity)
++ if (polarity)
+ enabled = !enabled;
+ alc_update_gpio_data(codec, mask, !enabled); /* muted -> LED on */
+ }
+@@ -4095,7 +4094,8 @@ static void alc_fixup_gpio_mute_hook(void *private_data, int enabled)
+ struct hda_codec *codec = private_data;
+ struct alc_spec *spec = codec->spec;
+
+- alc_update_gpio_led(codec, spec->gpio_mute_led_mask, enabled);
++ alc_update_gpio_led(codec, spec->gpio_mute_led_mask,
++ spec->mute_led_polarity, enabled);
+ }
+
+ /* turn on/off mic-mute LED via GPIO per capture hook */
+@@ -4104,6 +4104,7 @@ static void alc_gpio_micmute_update(struct hda_codec *codec)
+ struct alc_spec *spec = codec->spec;
+
+ alc_update_gpio_led(codec, spec->gpio_mic_led_mask,
++ spec->micmute_led_polarity,
+ spec->gen.micmute_led.led_value);
+ }
+
+@@ -5808,7 +5809,8 @@ static void alc280_hp_gpio4_automute_hook(struct hda_codec *codec,
+
+ snd_hda_gen_hp_automute(codec, jack);
+ /* mute_led_polarity is set to 0, so we pass inverted value here */
+- alc_update_gpio_led(codec, 0x10, !spec->gen.hp_jack_present);
++ alc_update_gpio_led(codec, 0x10, spec->mute_led_polarity,
++ !spec->gen.hp_jack_present);
+ }
+
+ /* Manage GPIOs for HP EliteBook Folio 9480m.
+diff --git a/sound/soc/codecs/Kconfig b/sound/soc/codecs/Kconfig
+index e60e0b6a689c..8a66f23a7b05 100644
+--- a/sound/soc/codecs/Kconfig
++++ b/sound/soc/codecs/Kconfig
+@@ -1136,10 +1136,13 @@ config SND_SOC_RT5677_SPI
+ config SND_SOC_RT5682
+ tristate
+ depends on I2C || SOUNDWIRE
++ depends on SOUNDWIRE || !SOUNDWIRE
++ depends on I2C || !I2C
+
+ config SND_SOC_RT5682_SDW
+ tristate "Realtek RT5682 Codec - SDW"
+ depends on SOUNDWIRE
++ depends on I2C || !I2C
+ select SND_SOC_RT5682
+ select REGMAP_SOUNDWIRE
+
+@@ -1620,19 +1623,19 @@ config SND_SOC_WM9090
+
+ config SND_SOC_WM9705
+ tristate
+- depends on SND_SOC_AC97_BUS
++ depends on SND_SOC_AC97_BUS || AC97_BUS_NEW
+ select REGMAP_AC97
+ select AC97_BUS_COMPAT if AC97_BUS_NEW
+
+ config SND_SOC_WM9712
+ tristate
+- depends on SND_SOC_AC97_BUS
++ depends on SND_SOC_AC97_BUS || AC97_BUS_NEW
+ select REGMAP_AC97
+ select AC97_BUS_COMPAT if AC97_BUS_NEW
+
+ config SND_SOC_WM9713
+ tristate
+- depends on SND_SOC_AC97_BUS
++ depends on SND_SOC_AC97_BUS || AC97_BUS_NEW
+ select REGMAP_AC97
+ select AC97_BUS_COMPAT if AC97_BUS_NEW
+
+diff --git a/sound/soc/codecs/max98373.c b/sound/soc/codecs/max98373.c
+index cae1def8902d..96718e3a1ad0 100644
+--- a/sound/soc/codecs/max98373.c
++++ b/sound/soc/codecs/max98373.c
+@@ -850,8 +850,8 @@ static int max98373_resume(struct device *dev)
+ {
+ struct max98373_priv *max98373 = dev_get_drvdata(dev);
+
+- max98373_reset(max98373, dev);
+ regcache_cache_only(max98373->regmap, false);
++ max98373_reset(max98373, dev);
+ regcache_sync(max98373->regmap);
+ return 0;
+ }
+diff --git a/sound/soc/codecs/rt1308-sdw.c b/sound/soc/codecs/rt1308-sdw.c
+index a5a7e46de246..a7f45191364d 100644
+--- a/sound/soc/codecs/rt1308-sdw.c
++++ b/sound/soc/codecs/rt1308-sdw.c
+@@ -482,6 +482,9 @@ static int rt1308_set_sdw_stream(struct snd_soc_dai *dai, void *sdw_stream,
+ {
+ struct sdw_stream_data *stream;
+
++ if (!sdw_stream)
++ return 0;
++
+ stream = kzalloc(sizeof(*stream), GFP_KERNEL);
+ if (!stream)
+ return -ENOMEM;
+diff --git a/sound/soc/codecs/rt5645.c b/sound/soc/codecs/rt5645.c
+index 6ba1849a77b0..e2e1d5b03b38 100644
+--- a/sound/soc/codecs/rt5645.c
++++ b/sound/soc/codecs/rt5645.c
+@@ -3625,6 +3625,12 @@ static const struct rt5645_platform_data asus_t100ha_platform_data = {
+ .inv_jd1_1 = true,
+ };
+
++static const struct rt5645_platform_data asus_t101ha_platform_data = {
++ .dmic1_data_pin = RT5645_DMIC_DATA_IN2N,
++ .dmic2_data_pin = RT5645_DMIC2_DISABLE,
++ .jd_mode = 3,
++};
++
+ static const struct rt5645_platform_data lenovo_ideapad_miix_310_pdata = {
+ .jd_mode = 3,
+ .in2_diff = true,
+@@ -3708,6 +3714,14 @@ static const struct dmi_system_id dmi_platform_data[] = {
+ },
+ .driver_data = (void *)&asus_t100ha_platform_data,
+ },
++ {
++ .ident = "ASUS T101HA",
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
++ DMI_MATCH(DMI_PRODUCT_NAME, "T101HA"),
++ },
++ .driver_data = (void *)&asus_t101ha_platform_data,
++ },
+ {
+ .ident = "MINIX Z83-4",
+ .matches = {
+diff --git a/sound/soc/codecs/rt5682.c b/sound/soc/codecs/rt5682.c
+index d36f560ad7a8..c4892af14850 100644
+--- a/sound/soc/codecs/rt5682.c
++++ b/sound/soc/codecs/rt5682.c
+@@ -2958,6 +2958,9 @@ static int rt5682_set_sdw_stream(struct snd_soc_dai *dai, void *sdw_stream,
+ {
+ struct sdw_stream_data *stream;
+
++ if (!sdw_stream)
++ return 0;
++
+ stream = kzalloc(sizeof(*stream), GFP_KERNEL);
+ if (!stream)
+ return -ENOMEM;
+diff --git a/sound/soc/codecs/rt700.c b/sound/soc/codecs/rt700.c
+index ff68f0e4f629..687ac2153666 100644
+--- a/sound/soc/codecs/rt700.c
++++ b/sound/soc/codecs/rt700.c
+@@ -860,6 +860,9 @@ static int rt700_set_sdw_stream(struct snd_soc_dai *dai, void *sdw_stream,
+ {
+ struct sdw_stream_data *stream;
+
++ if (!sdw_stream)
++ return 0;
++
+ stream = kzalloc(sizeof(*stream), GFP_KERNEL);
+ if (!stream)
+ return -ENOMEM;
+diff --git a/sound/soc/codecs/rt711.c b/sound/soc/codecs/rt711.c
+index 2daed7692a3b..65b59dbfb43c 100644
+--- a/sound/soc/codecs/rt711.c
++++ b/sound/soc/codecs/rt711.c
+@@ -906,6 +906,9 @@ static int rt711_set_sdw_stream(struct snd_soc_dai *dai, void *sdw_stream,
+ {
+ struct sdw_stream_data *stream;
+
++ if (!sdw_stream)
++ return 0;
++
+ stream = kzalloc(sizeof(*stream), GFP_KERNEL);
+ if (!stream)
+ return -ENOMEM;
+diff --git a/sound/soc/codecs/rt715.c b/sound/soc/codecs/rt715.c
+index 2cbc57b16b13..099c8bd20006 100644
+--- a/sound/soc/codecs/rt715.c
++++ b/sound/soc/codecs/rt715.c
+@@ -530,6 +530,9 @@ static int rt715_set_sdw_stream(struct snd_soc_dai *dai, void *sdw_stream,
+
+ struct sdw_stream_data *stream;
+
++ if (!sdw_stream)
++ return 0;
++
+ stream = kzalloc(sizeof(*stream), GFP_KERNEL);
+ if (!stream)
+ return -ENOMEM;
+diff --git a/sound/soc/fsl/fsl_asrc_dma.c b/sound/soc/fsl/fsl_asrc_dma.c
+index e7178817d7a7..1ee10eafe3e6 100644
+--- a/sound/soc/fsl/fsl_asrc_dma.c
++++ b/sound/soc/fsl/fsl_asrc_dma.c
+@@ -252,6 +252,7 @@ static int fsl_asrc_dma_hw_params(struct snd_soc_component *component,
+ ret = dmaengine_slave_config(pair->dma_chan[dir], &config_be);
+ if (ret) {
+ dev_err(dev, "failed to config DMA channel for Back-End\n");
++ dma_release_channel(pair->dma_chan[dir]);
+ return ret;
+ }
+
+diff --git a/sound/soc/fsl/fsl_esai.c b/sound/soc/fsl/fsl_esai.c
+index c7a49d03463a..84290be778f0 100644
+--- a/sound/soc/fsl/fsl_esai.c
++++ b/sound/soc/fsl/fsl_esai.c
+@@ -87,6 +87,10 @@ static irqreturn_t esai_isr(int irq, void *devid)
+ if ((saisr & (ESAI_SAISR_TUE | ESAI_SAISR_ROE)) &&
+ esai_priv->reset_at_xrun) {
+ dev_dbg(&pdev->dev, "reset module for xrun\n");
++ regmap_update_bits(esai_priv->regmap, REG_ESAI_TCR,
++ ESAI_xCR_xEIE_MASK, 0);
++ regmap_update_bits(esai_priv->regmap, REG_ESAI_RCR,
++ ESAI_xCR_xEIE_MASK, 0);
+ tasklet_schedule(&esai_priv->task);
+ }
+
+diff --git a/sound/soc/img/img-i2s-in.c b/sound/soc/img/img-i2s-in.c
+index a495d1050d49..e30b66b94bf6 100644
+--- a/sound/soc/img/img-i2s-in.c
++++ b/sound/soc/img/img-i2s-in.c
+@@ -482,6 +482,7 @@ static int img_i2s_in_probe(struct platform_device *pdev)
+ if (IS_ERR(rst)) {
+ if (PTR_ERR(rst) == -EPROBE_DEFER) {
+ ret = -EPROBE_DEFER;
++ pm_runtime_put(&pdev->dev);
+ goto err_suspend;
+ }
+
+diff --git a/sound/soc/intel/boards/bytcr_rt5640.c b/sound/soc/intel/boards/bytcr_rt5640.c
+index 08f4ae964b02..5c1a5e2aff6f 100644
+--- a/sound/soc/intel/boards/bytcr_rt5640.c
++++ b/sound/soc/intel/boards/bytcr_rt5640.c
+@@ -742,6 +742,30 @@ static const struct dmi_system_id byt_rt5640_quirk_table[] = {
+ BYT_RT5640_SSP0_AIF1 |
+ BYT_RT5640_MCLK_EN),
+ },
++ { /* Toshiba Encore WT8-A */
++ .matches = {
++ DMI_EXACT_MATCH(DMI_SYS_VENDOR, "TOSHIBA"),
++ DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "TOSHIBA WT8-A"),
++ },
++ .driver_data = (void *)(BYT_RT5640_DMIC1_MAP |
++ BYT_RT5640_JD_SRC_JD2_IN4N |
++ BYT_RT5640_OVCD_TH_2000UA |
++ BYT_RT5640_OVCD_SF_0P75 |
++ BYT_RT5640_JD_NOT_INV |
++ BYT_RT5640_MCLK_EN),
++ },
++ { /* Toshiba Encore WT10-A */
++ .matches = {
++ DMI_EXACT_MATCH(DMI_SYS_VENDOR, "TOSHIBA"),
++ DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "TOSHIBA WT10-A-103"),
++ },
++ .driver_data = (void *)(BYT_RT5640_DMIC1_MAP |
++ BYT_RT5640_JD_SRC_JD1_IN4P |
++ BYT_RT5640_OVCD_TH_2000UA |
++ BYT_RT5640_OVCD_SF_0P75 |
++ BYT_RT5640_SSP0_AIF2 |
++ BYT_RT5640_MCLK_EN),
++ },
+ { /* Catch-all for generic Insyde tablets, must be last */
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "Insyde"),
+diff --git a/sound/soc/meson/axg-fifo.c b/sound/soc/meson/axg-fifo.c
+index 2e9b56b29d31..b2e867113226 100644
+--- a/sound/soc/meson/axg-fifo.c
++++ b/sound/soc/meson/axg-fifo.c
+@@ -249,7 +249,7 @@ int axg_fifo_pcm_open(struct snd_soc_component *component,
+ /* Enable pclk to access registers and clock the fifo ip */
+ ret = clk_prepare_enable(fifo->pclk);
+ if (ret)
+- return ret;
++ goto free_irq;
+
+ /* Setup status2 so it reports the memory pointer */
+ regmap_update_bits(fifo->map, FIFO_CTRL1,
+@@ -269,8 +269,14 @@ int axg_fifo_pcm_open(struct snd_soc_component *component,
+ /* Take memory arbitror out of reset */
+ ret = reset_control_deassert(fifo->arb);
+ if (ret)
+- clk_disable_unprepare(fifo->pclk);
++ goto free_clk;
++
++ return 0;
+
++free_clk:
++ clk_disable_unprepare(fifo->pclk);
++free_irq:
++ free_irq(fifo->irq, ss);
+ return ret;
+ }
+ EXPORT_SYMBOL_GPL(axg_fifo_pcm_open);
+diff --git a/sound/soc/meson/meson-card-utils.c b/sound/soc/meson/meson-card-utils.c
+index 2ca8c98e204f..5a4a91c88734 100644
+--- a/sound/soc/meson/meson-card-utils.c
++++ b/sound/soc/meson/meson-card-utils.c
+@@ -49,19 +49,26 @@ int meson_card_reallocate_links(struct snd_soc_card *card,
+ links = krealloc(priv->card.dai_link,
+ num_links * sizeof(*priv->card.dai_link),
+ GFP_KERNEL | __GFP_ZERO);
++ if (!links)
++ goto err_links;
++
+ ldata = krealloc(priv->link_data,
+ num_links * sizeof(*priv->link_data),
+ GFP_KERNEL | __GFP_ZERO);
+-
+- if (!links || !ldata) {
+- dev_err(priv->card.dev, "failed to allocate links\n");
+- return -ENOMEM;
+- }
++ if (!ldata)
++ goto err_ldata;
+
+ priv->card.dai_link = links;
+ priv->link_data = ldata;
+ priv->card.num_links = num_links;
+ return 0;
++
++err_ldata:
++ kfree(links);
++err_links:
++ dev_err(priv->card.dev, "failed to allocate links\n");
++ return -ENOMEM;
++
+ }
+ EXPORT_SYMBOL_GPL(meson_card_reallocate_links);
+
+diff --git a/sound/soc/qcom/qdsp6/q6asm-dai.c b/sound/soc/qcom/qdsp6/q6asm-dai.c
+index 125af00bba53..4640804aab7f 100644
+--- a/sound/soc/qcom/qdsp6/q6asm-dai.c
++++ b/sound/soc/qcom/qdsp6/q6asm-dai.c
+@@ -176,7 +176,7 @@ static const struct snd_compr_codec_caps q6asm_compr_caps = {
+ };
+
+ static void event_handler(uint32_t opcode, uint32_t token,
+- uint32_t *payload, void *priv)
++ void *payload, void *priv)
+ {
+ struct q6asm_dai_rtd *prtd = priv;
+ struct snd_pcm_substream *substream = prtd->substream;
+@@ -490,7 +490,7 @@ static int q6asm_dai_hw_params(struct snd_soc_component *component,
+ }
+
+ static void compress_event_handler(uint32_t opcode, uint32_t token,
+- uint32_t *payload, void *priv)
++ void *payload, void *priv)
+ {
+ struct q6asm_dai_rtd *prtd = priv;
+ struct snd_compr_stream *substream = prtd->cstream;
+diff --git a/sound/soc/sh/rcar/gen.c b/sound/soc/sh/rcar/gen.c
+index af19010b9d88..8bd49c8a9517 100644
+--- a/sound/soc/sh/rcar/gen.c
++++ b/sound/soc/sh/rcar/gen.c
+@@ -224,6 +224,14 @@ static int rsnd_gen2_probe(struct rsnd_priv *priv)
+ RSND_GEN_S_REG(SSI_SYS_STATUS5, 0x884),
+ RSND_GEN_S_REG(SSI_SYS_STATUS6, 0x888),
+ RSND_GEN_S_REG(SSI_SYS_STATUS7, 0x88c),
++ RSND_GEN_S_REG(SSI_SYS_INT_ENABLE0, 0x850),
++ RSND_GEN_S_REG(SSI_SYS_INT_ENABLE1, 0x854),
++ RSND_GEN_S_REG(SSI_SYS_INT_ENABLE2, 0x858),
++ RSND_GEN_S_REG(SSI_SYS_INT_ENABLE3, 0x85c),
++ RSND_GEN_S_REG(SSI_SYS_INT_ENABLE4, 0x890),
++ RSND_GEN_S_REG(SSI_SYS_INT_ENABLE5, 0x894),
++ RSND_GEN_S_REG(SSI_SYS_INT_ENABLE6, 0x898),
++ RSND_GEN_S_REG(SSI_SYS_INT_ENABLE7, 0x89c),
+ RSND_GEN_S_REG(HDMI0_SEL, 0x9e0),
+ RSND_GEN_S_REG(HDMI1_SEL, 0x9e4),
+
+diff --git a/sound/soc/sh/rcar/rsnd.h b/sound/soc/sh/rcar/rsnd.h
+index ea6cbaa9743e..d47608ff5fac 100644
+--- a/sound/soc/sh/rcar/rsnd.h
++++ b/sound/soc/sh/rcar/rsnd.h
+@@ -189,6 +189,14 @@ enum rsnd_reg {
+ SSI_SYS_STATUS5,
+ SSI_SYS_STATUS6,
+ SSI_SYS_STATUS7,
++ SSI_SYS_INT_ENABLE0,
++ SSI_SYS_INT_ENABLE1,
++ SSI_SYS_INT_ENABLE2,
++ SSI_SYS_INT_ENABLE3,
++ SSI_SYS_INT_ENABLE4,
++ SSI_SYS_INT_ENABLE5,
++ SSI_SYS_INT_ENABLE6,
++ SSI_SYS_INT_ENABLE7,
+ HDMI0_SEL,
+ HDMI1_SEL,
+ SSI9_BUSIF0_MODE,
+@@ -237,6 +245,7 @@ enum rsnd_reg {
+ #define SSI9_BUSIF_ADINR(i) (SSI9_BUSIF0_ADINR + (i))
+ #define SSI9_BUSIF_DALIGN(i) (SSI9_BUSIF0_DALIGN + (i))
+ #define SSI_SYS_STATUS(i) (SSI_SYS_STATUS0 + (i))
++#define SSI_SYS_INT_ENABLE(i) (SSI_SYS_INT_ENABLE0 + (i))
+
+
+ struct rsnd_priv;
+diff --git a/sound/soc/sh/rcar/ssi.c b/sound/soc/sh/rcar/ssi.c
+index 4a7d3413917f..47d5ddb526f2 100644
+--- a/sound/soc/sh/rcar/ssi.c
++++ b/sound/soc/sh/rcar/ssi.c
+@@ -372,6 +372,9 @@ static void rsnd_ssi_config_init(struct rsnd_mod *mod,
+ u32 wsr = ssi->wsr;
+ int width;
+ int is_tdm, is_tdm_split;
++ int id = rsnd_mod_id(mod);
++ int i;
++ u32 sys_int_enable = 0;
+
+ is_tdm = rsnd_runtime_is_tdm(io);
+ is_tdm_split = rsnd_runtime_is_tdm_split(io);
+@@ -447,6 +450,38 @@ static void rsnd_ssi_config_init(struct rsnd_mod *mod,
+ cr_mode = DIEN; /* PIO : enable Data interrupt */
+ }
+
++ /* enable busif buffer over/under run interrupt. */
++ if (is_tdm || is_tdm_split) {
++ switch (id) {
++ case 0:
++ case 1:
++ case 2:
++ case 3:
++ case 4:
++ for (i = 0; i < 4; i++) {
++ sys_int_enable = rsnd_mod_read(mod,
++ SSI_SYS_INT_ENABLE(i * 2));
++ sys_int_enable |= 0xf << (id * 4);
++ rsnd_mod_write(mod,
++ SSI_SYS_INT_ENABLE(i * 2),
++ sys_int_enable);
++ }
++
++ break;
++ case 9:
++ for (i = 0; i < 4; i++) {
++ sys_int_enable = rsnd_mod_read(mod,
++ SSI_SYS_INT_ENABLE((i * 2) + 1));
++ sys_int_enable |= 0xf << 4;
++ rsnd_mod_write(mod,
++ SSI_SYS_INT_ENABLE((i * 2) + 1),
++ sys_int_enable);
++ }
++
++ break;
++ }
++ }
++
+ init_end:
+ ssi->cr_own = cr_own;
+ ssi->cr_mode = cr_mode;
+@@ -496,6 +531,13 @@ static int rsnd_ssi_quit(struct rsnd_mod *mod,
+ {
+ struct rsnd_ssi *ssi = rsnd_mod_to_ssi(mod);
+ struct device *dev = rsnd_priv_to_dev(priv);
++ int is_tdm, is_tdm_split;
++ int id = rsnd_mod_id(mod);
++ int i;
++ u32 sys_int_enable = 0;
++
++ is_tdm = rsnd_runtime_is_tdm(io);
++ is_tdm_split = rsnd_runtime_is_tdm_split(io);
+
+ if (!rsnd_ssi_is_run_mods(mod, io))
+ return 0;
+@@ -517,6 +559,38 @@ static int rsnd_ssi_quit(struct rsnd_mod *mod,
+ ssi->wsr = 0;
+ }
+
++ /* disable busif buffer over/under run interrupt. */
++ if (is_tdm || is_tdm_split) {
++ switch (id) {
++ case 0:
++ case 1:
++ case 2:
++ case 3:
++ case 4:
++ for (i = 0; i < 4; i++) {
++ sys_int_enable = rsnd_mod_read(mod,
++ SSI_SYS_INT_ENABLE(i * 2));
++ sys_int_enable &= ~(0xf << (id * 4));
++ rsnd_mod_write(mod,
++ SSI_SYS_INT_ENABLE(i * 2),
++ sys_int_enable);
++ }
++
++ break;
++ case 9:
++ for (i = 0; i < 4; i++) {
++ sys_int_enable = rsnd_mod_read(mod,
++ SSI_SYS_INT_ENABLE((i * 2) + 1));
++ sys_int_enable &= ~(0xf << 4);
++ rsnd_mod_write(mod,
++ SSI_SYS_INT_ENABLE((i * 2) + 1),
++ sys_int_enable);
++ }
++
++ break;
++ }
++ }
++
+ return 0;
+ }
+
+@@ -622,6 +696,11 @@ static int rsnd_ssi_irq(struct rsnd_mod *mod,
+ int enable)
+ {
+ u32 val = 0;
++ int is_tdm, is_tdm_split;
++ int id = rsnd_mod_id(mod);
++
++ is_tdm = rsnd_runtime_is_tdm(io);
++ is_tdm_split = rsnd_runtime_is_tdm_split(io);
+
+ if (rsnd_is_gen1(priv))
+ return 0;
+@@ -635,6 +714,19 @@ static int rsnd_ssi_irq(struct rsnd_mod *mod,
+ if (enable)
+ val = rsnd_ssi_is_dma_mode(mod) ? 0x0e000000 : 0x0f000000;
+
++ if (is_tdm || is_tdm_split) {
++ switch (id) {
++ case 0:
++ case 1:
++ case 2:
++ case 3:
++ case 4:
++ case 9:
++ val |= 0x0000ff00;
++ break;
++ }
++ }
++
+ rsnd_mod_write(mod, SSI_INT_ENABLE, val);
+
+ return 0;
+@@ -651,6 +743,12 @@ static void __rsnd_ssi_interrupt(struct rsnd_mod *mod,
+ u32 status;
+ bool elapsed = false;
+ bool stop = false;
++ int id = rsnd_mod_id(mod);
++ int i;
++ int is_tdm, is_tdm_split;
++
++ is_tdm = rsnd_runtime_is_tdm(io);
++ is_tdm_split = rsnd_runtime_is_tdm_split(io);
+
+ spin_lock(&priv->lock);
+
+@@ -672,6 +770,53 @@ static void __rsnd_ssi_interrupt(struct rsnd_mod *mod,
+ stop = true;
+ }
+
++ status = 0;
++
++ if (is_tdm || is_tdm_split) {
++ switch (id) {
++ case 0:
++ case 1:
++ case 2:
++ case 3:
++ case 4:
++ for (i = 0; i < 4; i++) {
++ status = rsnd_mod_read(mod,
++ SSI_SYS_STATUS(i * 2));
++ status &= 0xf << (id * 4);
++
++ if (status) {
++ rsnd_dbg_irq_status(dev,
++ "%s err status : 0x%08x\n",
++ rsnd_mod_name(mod), status);
++ rsnd_mod_write(mod,
++ SSI_SYS_STATUS(i * 2),
++ 0xf << (id * 4));
++ stop = true;
++ break;
++ }
++ }
++ break;
++ case 9:
++ for (i = 0; i < 4; i++) {
++ status = rsnd_mod_read(mod,
++ SSI_SYS_STATUS((i * 2) + 1));
++ status &= 0xf << 4;
++
++ if (status) {
++ rsnd_dbg_irq_status(dev,
++ "%s err status : 0x%08x\n",
++ rsnd_mod_name(mod), status);
++ rsnd_mod_write(mod,
++ SSI_SYS_STATUS((i * 2) + 1),
++ 0xf << 4);
++ stop = true;
++ break;
++ }
++ }
++ break;
++ }
++ }
++
+ rsnd_ssi_status_clear(mod);
+ rsnd_ssi_interrupt_out:
+ spin_unlock(&priv->lock);
+diff --git a/sound/soc/soc-core.c b/sound/soc/soc-core.c
+index 843b8b1c89d4..e5433e8fcf19 100644
+--- a/sound/soc/soc-core.c
++++ b/sound/soc/soc-core.c
+@@ -1720,9 +1720,25 @@ match:
+ dai_link->platforms->name = component->name;
+
+ /* convert non BE into BE */
+- dai_link->no_pcm = 1;
+- dai_link->dpcm_playback = 1;
+- dai_link->dpcm_capture = 1;
++ if (!dai_link->no_pcm) {
++ dai_link->no_pcm = 1;
++
++ if (dai_link->dpcm_playback)
++ dev_warn(card->dev,
++ "invalid configuration, dailink %s has flags no_pcm=0 and dpcm_playback=1\n",
++ dai_link->name);
++ if (dai_link->dpcm_capture)
++ dev_warn(card->dev,
++ "invalid configuration, dailink %s has flags no_pcm=0 and dpcm_capture=1\n",
++ dai_link->name);
++
++ /* convert normal link into DPCM one */
++ if (!(dai_link->dpcm_playback ||
++ dai_link->dpcm_capture)) {
++ dai_link->dpcm_playback = !dai_link->capture_only;
++ dai_link->dpcm_capture = !dai_link->playback_only;
++ }
++ }
+
+ /* override any BE fixups */
+ dai_link->be_hw_params_fixup =
+diff --git a/sound/soc/soc-dapm.c b/sound/soc/soc-dapm.c
+index e2632841b321..c0aa64ff8e32 100644
+--- a/sound/soc/soc-dapm.c
++++ b/sound/soc/soc-dapm.c
+@@ -4340,16 +4340,16 @@ static void dapm_connect_dai_pair(struct snd_soc_card *card,
+ codec = codec_dai->playback_widget;
+
+ if (playback_cpu && codec) {
+- if (dai_link->params && !dai_link->playback_widget) {
++ if (dai_link->params && !rtd->playback_widget) {
+ substream = streams[SNDRV_PCM_STREAM_PLAYBACK].substream;
+ dai = snd_soc_dapm_new_dai(card, substream, "playback");
+ if (IS_ERR(dai))
+ goto capture;
+- dai_link->playback_widget = dai;
++ rtd->playback_widget = dai;
+ }
+
+ dapm_connect_dai_routes(&card->dapm, cpu_dai, playback_cpu,
+- dai_link->playback_widget,
++ rtd->playback_widget,
+ codec_dai, codec);
+ }
+
+@@ -4358,16 +4358,16 @@ capture:
+ codec = codec_dai->capture_widget;
+
+ if (codec && capture_cpu) {
+- if (dai_link->params && !dai_link->capture_widget) {
++ if (dai_link->params && !rtd->capture_widget) {
+ substream = streams[SNDRV_PCM_STREAM_CAPTURE].substream;
+ dai = snd_soc_dapm_new_dai(card, substream, "capture");
+ if (IS_ERR(dai))
+ return;
+- dai_link->capture_widget = dai;
++ rtd->capture_widget = dai;
+ }
+
+ dapm_connect_dai_routes(&card->dapm, codec_dai, codec,
+- dai_link->capture_widget,
++ rtd->capture_widget,
+ cpu_dai, capture_cpu);
+ }
+ }
+diff --git a/sound/soc/soc-pcm.c b/sound/soc/soc-pcm.c
+index 1f302de44052..39ce61c5b874 100644
+--- a/sound/soc/soc-pcm.c
++++ b/sound/soc/soc-pcm.c
+@@ -2908,20 +2908,44 @@ int soc_new_pcm(struct snd_soc_pcm_runtime *rtd, int num)
+ struct snd_pcm *pcm;
+ char new_name[64];
+ int ret = 0, playback = 0, capture = 0;
++ int stream;
+ int i;
+
++ if (rtd->dai_link->dynamic && rtd->num_cpus > 1) {
++ dev_err(rtd->dev,
++ "DPCM doesn't support Multi CPU for Front-Ends yet\n");
++ return -EINVAL;
++ }
++
+ if (rtd->dai_link->dynamic || rtd->dai_link->no_pcm) {
+- cpu_dai = asoc_rtd_to_cpu(rtd, 0);
+- if (rtd->num_cpus > 1) {
+- dev_err(rtd->dev,
+- "DPCM doesn't support Multi CPU yet\n");
+- return -EINVAL;
++ if (rtd->dai_link->dpcm_playback) {
++ stream = SNDRV_PCM_STREAM_PLAYBACK;
++
++ for_each_rtd_cpu_dais(rtd, i, cpu_dai)
++ if (!snd_soc_dai_stream_valid(cpu_dai,
++ stream)) {
++ dev_err(rtd->card->dev,
++ "CPU DAI %s for rtd %s does not support playback\n",
++ cpu_dai->name,
++ rtd->dai_link->stream_name);
++ return -EINVAL;
++ }
++ playback = 1;
++ }
++ if (rtd->dai_link->dpcm_capture) {
++ stream = SNDRV_PCM_STREAM_CAPTURE;
++
++ for_each_rtd_cpu_dais(rtd, i, cpu_dai)
++ if (!snd_soc_dai_stream_valid(cpu_dai,
++ stream)) {
++ dev_err(rtd->card->dev,
++ "CPU DAI %s for rtd %s does not support capture\n",
++ cpu_dai->name,
++ rtd->dai_link->stream_name);
++ return -EINVAL;
++ }
++ capture = 1;
+ }
+-
+- playback = rtd->dai_link->dpcm_playback &&
+- snd_soc_dai_stream_valid(cpu_dai, SNDRV_PCM_STREAM_PLAYBACK);
+- capture = rtd->dai_link->dpcm_capture &&
+- snd_soc_dai_stream_valid(cpu_dai, SNDRV_PCM_STREAM_CAPTURE);
+ } else {
+ /* Adapt stream for codec2codec links */
+ int cpu_capture = rtd->dai_link->params ?
+diff --git a/sound/soc/sof/control.c b/sound/soc/sof/control.c
+index dfc412e2d956..6d63768d42aa 100644
+--- a/sound/soc/sof/control.c
++++ b/sound/soc/sof/control.c
+@@ -19,8 +19,8 @@ static void update_mute_led(struct snd_sof_control *scontrol,
+ struct snd_kcontrol *kcontrol,
+ struct snd_ctl_elem_value *ucontrol)
+ {
+- unsigned int temp = 0;
+- unsigned int mask;
++ int temp = 0;
++ int mask;
+ int i;
+
+ mask = 1U << snd_ctl_get_ioffidx(kcontrol, &ucontrol->id);
+diff --git a/sound/soc/sof/core.c b/sound/soc/sof/core.c
+index 91acfae7935c..74b438216216 100644
+--- a/sound/soc/sof/core.c
++++ b/sound/soc/sof/core.c
+@@ -176,6 +176,7 @@ static int sof_probe_continue(struct snd_sof_dev *sdev)
+ /* init the IPC */
+ sdev->ipc = snd_sof_ipc_init(sdev);
+ if (!sdev->ipc) {
++ ret = -ENOMEM;
+ dev_err(sdev->dev, "error: failed to init DSP IPC %d\n", ret);
+ goto ipc_err;
+ }
+diff --git a/sound/soc/sof/imx/Kconfig b/sound/soc/sof/imx/Kconfig
+index bae4f7bf5f75..812749064ca8 100644
+--- a/sound/soc/sof/imx/Kconfig
++++ b/sound/soc/sof/imx/Kconfig
+@@ -14,7 +14,7 @@ if SND_SOC_SOF_IMX_TOPLEVEL
+ config SND_SOC_SOF_IMX8_SUPPORT
+ bool "SOF support for i.MX8"
+ depends on IMX_SCU
+- depends on IMX_DSP
++ select IMX_DSP
+ help
+ This adds support for Sound Open Firmware for NXP i.MX8 platforms
+ Say Y if you have such a device.
+diff --git a/sound/soc/sof/intel/hda-codec.c b/sound/soc/sof/intel/hda-codec.c
+index 3041fbbb010a..ea021db697b8 100644
+--- a/sound/soc/sof/intel/hda-codec.c
++++ b/sound/soc/sof/intel/hda-codec.c
+@@ -24,19 +24,44 @@
+ #define IDISP_VID_INTEL 0x80860000
+
+ /* load the legacy HDA codec driver */
+-static int hda_codec_load_module(struct hda_codec *codec)
++static int request_codec_module(struct hda_codec *codec)
+ {
+ #ifdef MODULE
+ char alias[MODULE_NAME_LEN];
+- const char *module = alias;
++ const char *mod = NULL;
+
+- snd_hdac_codec_modalias(&codec->core, alias, sizeof(alias));
+- dev_dbg(&codec->core.dev, "loading codec module: %s\n", module);
+- request_module(module);
++ switch (codec->probe_id) {
++ case HDA_CODEC_ID_GENERIC:
++#if IS_MODULE(CONFIG_SND_HDA_GENERIC)
++ mod = "snd-hda-codec-generic";
+ #endif
++ break;
++ default:
++ snd_hdac_codec_modalias(&codec->core, alias, sizeof(alias));
++ mod = alias;
++ break;
++ }
++
++ if (mod) {
++ dev_dbg(&codec->core.dev, "loading codec module: %s\n", mod);
++ request_module(mod);
++ }
++#endif /* MODULE */
+ return device_attach(hda_codec_dev(codec));
+ }
+
++static int hda_codec_load_module(struct hda_codec *codec)
++{
++ int ret = request_codec_module(codec);
++
++ if (ret <= 0) {
++ codec->probe_id = HDA_CODEC_ID_GENERIC;
++ ret = request_codec_module(codec);
++ }
++
++ return ret;
++}
++
+ /* enable controller wake up event for all codecs with jack connectors */
+ void hda_codec_jack_wake_enable(struct snd_sof_dev *sdev)
+ {
+@@ -78,6 +103,13 @@ void hda_codec_jack_check(struct snd_sof_dev *sdev) {}
+ EXPORT_SYMBOL_NS(hda_codec_jack_wake_enable, SND_SOC_SOF_HDA_AUDIO_CODEC);
+ EXPORT_SYMBOL_NS(hda_codec_jack_check, SND_SOC_SOF_HDA_AUDIO_CODEC);
+
++#if IS_ENABLED(CONFIG_SND_HDA_GENERIC)
++#define is_generic_config(bus) \
++ ((bus)->modelname && !strcmp((bus)->modelname, "generic"))
++#else
++#define is_generic_config(x) 0
++#endif
++
+ /* probe individual codec */
+ static int hda_codec_probe(struct snd_sof_dev *sdev, int address,
+ bool hda_codec_use_common_hdmi)
+@@ -87,6 +119,7 @@ static int hda_codec_probe(struct snd_sof_dev *sdev, int address,
+ #endif
+ struct hda_bus *hbus = sof_to_hbus(sdev);
+ struct hdac_device *hdev;
++ struct hda_codec *codec;
+ u32 hda_cmd = (address << 28) | (AC_NODE_ROOT << 20) |
+ (AC_VERB_PARAMETERS << 8) | AC_PAR_VENDOR_ID;
+ u32 resp = -1;
+@@ -108,6 +141,7 @@ static int hda_codec_probe(struct snd_sof_dev *sdev, int address,
+
+ hda_priv->codec.bus = hbus;
+ hdev = &hda_priv->codec.core;
++ codec = &hda_priv->codec;
+
+ ret = snd_hdac_ext_bus_device_init(&hbus->core, address, hdev);
+ if (ret < 0)
+@@ -122,6 +156,11 @@ static int hda_codec_probe(struct snd_sof_dev *sdev, int address,
+ hda_priv->need_display_power = true;
+ }
+
++ if (is_generic_config(hbus))
++ codec->probe_id = HDA_CODEC_ID_GENERIC;
++ else
++ codec->probe_id = 0;
++
+ /*
+ * if common HDMI codec driver is not used, codec load
+ * is skipped here and hdac_hdmi is used instead
+@@ -129,7 +168,7 @@ static int hda_codec_probe(struct snd_sof_dev *sdev, int address,
+ if (hda_codec_use_common_hdmi ||
+ (resp & 0xFFFF0000) != IDISP_VID_INTEL) {
+ hdev->type = HDA_DEV_LEGACY;
+- ret = hda_codec_load_module(&hda_priv->codec);
++ ret = hda_codec_load_module(codec);
+ /*
+ * handle ret==0 (no driver bound) as an error, but pass
+ * other return codes without modification
+diff --git a/sound/soc/sof/nocodec.c b/sound/soc/sof/nocodec.c
+index 2233146386cc..71cf5f9db79d 100644
+--- a/sound/soc/sof/nocodec.c
++++ b/sound/soc/sof/nocodec.c
+@@ -52,8 +52,10 @@ static int sof_nocodec_bes_setup(struct device *dev,
+ links[i].platforms->name = dev_name(dev);
+ links[i].codecs->dai_name = "snd-soc-dummy-dai";
+ links[i].codecs->name = "snd-soc-dummy";
+- links[i].dpcm_playback = 1;
+- links[i].dpcm_capture = 1;
++ if (ops->drv[i].playback.channels_min)
++ links[i].dpcm_playback = 1;
++ if (ops->drv[i].capture.channels_min)
++ links[i].dpcm_capture = 1;
+ }
+
+ card->dai_link = links;
+diff --git a/sound/soc/sof/pm.c b/sound/soc/sof/pm.c
+index c410822d9920..01d83ddc16ba 100644
+--- a/sound/soc/sof/pm.c
++++ b/sound/soc/sof/pm.c
+@@ -90,7 +90,10 @@ static int sof_resume(struct device *dev, bool runtime_resume)
+ int ret;
+
+ /* do nothing if dsp resume callbacks are not set */
+- if (!sof_ops(sdev)->resume || !sof_ops(sdev)->runtime_resume)
++ if (!runtime_resume && !sof_ops(sdev)->resume)
++ return 0;
++
++ if (runtime_resume && !sof_ops(sdev)->runtime_resume)
+ return 0;
+
+ /* DSP was never successfully started, nothing to resume */
+@@ -175,7 +178,10 @@ static int sof_suspend(struct device *dev, bool runtime_suspend)
+ int ret;
+
+ /* do nothing if dsp suspend callback is not set */
+- if (!sof_ops(sdev)->suspend)
++ if (!runtime_suspend && !sof_ops(sdev)->suspend)
++ return 0;
++
++ if (runtime_suspend && !sof_ops(sdev)->runtime_suspend)
+ return 0;
+
+ if (sdev->fw_state != SOF_FW_BOOT_COMPLETE)
+diff --git a/sound/soc/sof/sof-audio.h b/sound/soc/sof/sof-audio.h
+index bf65f31af858..875a5fc13297 100644
+--- a/sound/soc/sof/sof-audio.h
++++ b/sound/soc/sof/sof-audio.h
+@@ -56,7 +56,7 @@ struct snd_sof_pcm {
+ struct snd_sof_led_control {
+ unsigned int use_led;
+ unsigned int direction;
+- unsigned int led_value;
++ int led_value;
+ };
+
+ /* ALSA SOF Kcontrol device */
+diff --git a/sound/soc/sof/topology.c b/sound/soc/sof/topology.c
+index fe8ba3e05e08..ab2b69de1d4d 100644
+--- a/sound/soc/sof/topology.c
++++ b/sound/soc/sof/topology.c
+@@ -1203,6 +1203,8 @@ static int sof_control_load(struct snd_soc_component *scomp, int index,
+ return ret;
+ }
+
++ scontrol->led_ctl.led_value = -1;
++
+ dobj->private = scontrol;
+ list_add(&scontrol->list, &sdev->kcontrol_list);
+ return ret;
+diff --git a/sound/soc/tegra/tegra_wm8903.c b/sound/soc/tegra/tegra_wm8903.c
+index 9b5651502f12..3aca354f9e08 100644
+--- a/sound/soc/tegra/tegra_wm8903.c
++++ b/sound/soc/tegra/tegra_wm8903.c
+@@ -177,6 +177,7 @@ static int tegra_wm8903_init(struct snd_soc_pcm_runtime *rtd)
+ struct snd_soc_component *component = codec_dai->component;
+ struct snd_soc_card *card = rtd->card;
+ struct tegra_wm8903 *machine = snd_soc_card_get_drvdata(card);
++ int shrt = 0;
+
+ if (gpio_is_valid(machine->gpio_hp_det)) {
+ tegra_wm8903_hp_jack_gpio.gpio = machine->gpio_hp_det;
+@@ -189,12 +190,15 @@ static int tegra_wm8903_init(struct snd_soc_pcm_runtime *rtd)
+ &tegra_wm8903_hp_jack_gpio);
+ }
+
++ if (of_property_read_bool(card->dev->of_node, "nvidia,headset"))
++ shrt = SND_JACK_MICROPHONE;
++
+ snd_soc_card_jack_new(rtd->card, "Mic Jack", SND_JACK_MICROPHONE,
+ &tegra_wm8903_mic_jack,
+ tegra_wm8903_mic_jack_pins,
+ ARRAY_SIZE(tegra_wm8903_mic_jack_pins));
+ wm8903_mic_detect(component, &tegra_wm8903_mic_jack, SND_JACK_MICROPHONE,
+- 0);
++ shrt);
+
+ snd_soc_dapm_force_enable_pin(&card->dapm, "MICBIAS");
+
+diff --git a/sound/soc/ti/davinci-mcasp.c b/sound/soc/ti/davinci-mcasp.c
+index 734ffe925c4d..7a7db743dc5b 100644
+--- a/sound/soc/ti/davinci-mcasp.c
++++ b/sound/soc/ti/davinci-mcasp.c
+@@ -1896,8 +1896,10 @@ static int davinci_mcasp_get_dma_type(struct davinci_mcasp *mcasp)
+ PTR_ERR(chan));
+ return PTR_ERR(chan);
+ }
+- if (WARN_ON(!chan->device || !chan->device->dev))
++ if (WARN_ON(!chan->device || !chan->device->dev)) {
++ dma_release_channel(chan);
+ return -EINVAL;
++ }
+
+ if (chan->device->dev->of_node)
+ ret = of_property_read_string(chan->device->dev->of_node,
+diff --git a/sound/soc/ti/omap-mcbsp.c b/sound/soc/ti/omap-mcbsp.c
+index 3d41ca2238d4..4f33ddb7b441 100644
+--- a/sound/soc/ti/omap-mcbsp.c
++++ b/sound/soc/ti/omap-mcbsp.c
+@@ -686,7 +686,7 @@ static int omap_mcbsp_init(struct platform_device *pdev)
+ mcbsp->dma_data[1].addr = omap_mcbsp_dma_reg_params(mcbsp,
+ SNDRV_PCM_STREAM_CAPTURE);
+
+- mcbsp->fclk = clk_get(&pdev->dev, "fck");
++ mcbsp->fclk = devm_clk_get(&pdev->dev, "fck");
+ if (IS_ERR(mcbsp->fclk)) {
+ ret = PTR_ERR(mcbsp->fclk);
+ dev_err(mcbsp->dev, "unable to get fck: %d\n", ret);
+@@ -711,7 +711,7 @@ static int omap_mcbsp_init(struct platform_device *pdev)
+ if (ret) {
+ dev_err(mcbsp->dev,
+ "Unable to create additional controls\n");
+- goto err_thres;
++ return ret;
+ }
+ }
+
+@@ -724,8 +724,6 @@ static int omap_mcbsp_init(struct platform_device *pdev)
+ err_st:
+ if (mcbsp->pdata->buffer_size)
+ sysfs_remove_group(&mcbsp->dev->kobj, &additional_attr_group);
+-err_thres:
+- clk_put(mcbsp->fclk);
+ return ret;
+ }
+
+@@ -1442,8 +1440,6 @@ static int asoc_mcbsp_remove(struct platform_device *pdev)
+
+ omap_mcbsp_st_cleanup(pdev);
+
+- clk_put(mcbsp->fclk);
+-
+ return 0;
+ }
+
+diff --git a/sound/soc/ux500/mop500.c b/sound/soc/ux500/mop500.c
+index 2873e8e6f02b..cdae1190b930 100644
+--- a/sound/soc/ux500/mop500.c
++++ b/sound/soc/ux500/mop500.c
+@@ -63,10 +63,11 @@ static void mop500_of_node_put(void)
+ {
+ int i;
+
+- for (i = 0; i < 2; i++) {
++ for (i = 0; i < 2; i++)
+ of_node_put(mop500_dai_links[i].cpus->of_node);
+- of_node_put(mop500_dai_links[i].codecs->of_node);
+- }
++
++ /* Both links use the same codec, which is refcounted only once */
++ of_node_put(mop500_dai_links[0].codecs->of_node);
+ }
+
+ static int mop500_of_probe(struct platform_device *pdev,
+@@ -81,7 +82,9 @@ static int mop500_of_probe(struct platform_device *pdev,
+
+ if (!(msp_np[0] && msp_np[1] && codec_np)) {
+ dev_err(&pdev->dev, "Phandle missing or invalid\n");
+- mop500_of_node_put();
++ for (i = 0; i < 2; i++)
++ of_node_put(msp_np[i]);
++ of_node_put(codec_np);
+ return -EINVAL;
+ }
+
+diff --git a/sound/usb/card.h b/sound/usb/card.h
+index 395403a2d33f..d6219fba9699 100644
+--- a/sound/usb/card.h
++++ b/sound/usb/card.h
+@@ -84,6 +84,10 @@ struct snd_usb_endpoint {
+ dma_addr_t sync_dma; /* DMA address of syncbuf */
+
+ unsigned int pipe; /* the data i/o pipe */
++ unsigned int framesize[2]; /* small/large frame sizes in samples */
++ unsigned int sample_rem; /* remainder from division fs/fps */
++ unsigned int sample_accum; /* sample accumulator */
++ unsigned int fps; /* frames per second */
+ unsigned int freqn; /* nominal sampling rate in fs/fps in Q16.16 format */
+ unsigned int freqm; /* momentary sampling rate in fs/fps in Q16.16 format */
+ int freqshift; /* how much to shift the feedback value to get Q16.16 */
+@@ -104,6 +108,7 @@ struct snd_usb_endpoint {
+ int iface, altsetting;
+ int skip_packets; /* quirks for devices to ignore the first n packets
+ in a stream */
++ bool is_implicit_feedback; /* This endpoint is used as implicit feedback */
+
+ spinlock_t lock;
+ struct list_head list;
+diff --git a/sound/usb/endpoint.c b/sound/usb/endpoint.c
+index 4a9a2f6ef5a4..9bea7d3f99f8 100644
+--- a/sound/usb/endpoint.c
++++ b/sound/usb/endpoint.c
+@@ -124,12 +124,12 @@ int snd_usb_endpoint_implicit_feedback_sink(struct snd_usb_endpoint *ep)
+
+ /*
+ * For streaming based on information derived from sync endpoints,
+- * prepare_outbound_urb_sizes() will call next_packet_size() to
++ * prepare_outbound_urb_sizes() will call slave_next_packet_size() to
+ * determine the number of samples to be sent in the next packet.
+ *
+- * For implicit feedback, next_packet_size() is unused.
++ * For implicit feedback, slave_next_packet_size() is unused.
+ */
+-int snd_usb_endpoint_next_packet_size(struct snd_usb_endpoint *ep)
++int snd_usb_endpoint_slave_next_packet_size(struct snd_usb_endpoint *ep)
+ {
+ unsigned long flags;
+ int ret;
+@@ -146,6 +146,29 @@ int snd_usb_endpoint_next_packet_size(struct snd_usb_endpoint *ep)
+ return ret;
+ }
+
++/*
++ * For adaptive and synchronous endpoints, prepare_outbound_urb_sizes()
++ * will call next_packet_size() to determine the number of samples to be
++ * sent in the next packet.
++ */
++int snd_usb_endpoint_next_packet_size(struct snd_usb_endpoint *ep)
++{
++ int ret;
++
++ if (ep->fill_max)
++ return ep->maxframesize;
++
++ ep->sample_accum += ep->sample_rem;
++ if (ep->sample_accum >= ep->fps) {
++ ep->sample_accum -= ep->fps;
++ ret = ep->framesize[1];
++ } else {
++ ret = ep->framesize[0];
++ }
++
++ return ret;
++}
++
+ static void retire_outbound_urb(struct snd_usb_endpoint *ep,
+ struct snd_urb_ctx *urb_ctx)
+ {
+@@ -190,6 +213,8 @@ static void prepare_silent_urb(struct snd_usb_endpoint *ep,
+
+ if (ctx->packet_size[i])
+ counts = ctx->packet_size[i];
++ else if (ep->sync_master)
++ counts = snd_usb_endpoint_slave_next_packet_size(ep);
+ else
+ counts = snd_usb_endpoint_next_packet_size(ep);
+
+@@ -321,17 +346,17 @@ static void queue_pending_output_urbs(struct snd_usb_endpoint *ep)
+ ep->next_packet_read_pos %= MAX_URBS;
+
+ /* take URB out of FIFO */
+- if (!list_empty(&ep->ready_playback_urbs))
++ if (!list_empty(&ep->ready_playback_urbs)) {
+ ctx = list_first_entry(&ep->ready_playback_urbs,
+ struct snd_urb_ctx, ready_list);
++ list_del_init(&ctx->ready_list);
++ }
+ }
+ spin_unlock_irqrestore(&ep->lock, flags);
+
+ if (ctx == NULL)
+ return;
+
+- list_del_init(&ctx->ready_list);
+-
+ /* copy over the length information */
+ for (i = 0; i < packet->packets; i++)
+ ctx->packet_size[i] = packet->packet_size[i];
+@@ -497,6 +522,8 @@ struct snd_usb_endpoint *snd_usb_add_endpoint(struct snd_usb_audio *chip,
+
+ list_add_tail(&ep->list, &chip->ep_list);
+
++ ep->is_implicit_feedback = 0;
++
+ __exit_unlock:
+ mutex_unlock(&chip->mutex);
+
+@@ -596,6 +623,178 @@ static void release_urbs(struct snd_usb_endpoint *ep, int force)
+ ep->nurbs = 0;
+ }
+
++/*
++ * Check data endpoint for format differences
++ */
++static bool check_ep_params(struct snd_usb_endpoint *ep,
++ snd_pcm_format_t pcm_format,
++ unsigned int channels,
++ unsigned int period_bytes,
++ unsigned int frames_per_period,
++ unsigned int periods_per_buffer,
++ struct audioformat *fmt,
++ struct snd_usb_endpoint *sync_ep)
++{
++ unsigned int maxsize, minsize, packs_per_ms, max_packs_per_urb;
++ unsigned int max_packs_per_period, urbs_per_period, urb_packs;
++ unsigned int max_urbs;
++ int frame_bits = snd_pcm_format_physical_width(pcm_format) * channels;
++ int tx_length_quirk = (ep->chip->tx_length_quirk &&
++ usb_pipeout(ep->pipe));
++ bool ret = 1;
++
++ if (pcm_format == SNDRV_PCM_FORMAT_DSD_U16_LE && fmt->dsd_dop) {
++ /*
++ * When operating in DSD DOP mode, the size of a sample frame
++ * in hardware differs from the actual physical format width
++ * because we need to make room for the DOP markers.
++ */
++ frame_bits += channels << 3;
++ }
++
++ ret = ret && (ep->datainterval == fmt->datainterval);
++ ret = ret && (ep->stride == frame_bits >> 3);
++
++ switch (pcm_format) {
++ case SNDRV_PCM_FORMAT_U8:
++ ret = ret && (ep->silence_value == 0x80);
++ break;
++ case SNDRV_PCM_FORMAT_DSD_U8:
++ case SNDRV_PCM_FORMAT_DSD_U16_LE:
++ case SNDRV_PCM_FORMAT_DSD_U32_LE:
++ case SNDRV_PCM_FORMAT_DSD_U16_BE:
++ case SNDRV_PCM_FORMAT_DSD_U32_BE:
++ ret = ret && (ep->silence_value == 0x69);
++ break;
++ default:
++ ret = ret && (ep->silence_value == 0);
++ }
++
++ /* assume max. frequency is 50% higher than nominal */
++ ret = ret && (ep->freqmax == ep->freqn + (ep->freqn >> 1));
++ /* Round up freqmax to nearest integer in order to calculate maximum
++ * packet size, which must represent a whole number of frames.
++ * This is accomplished by adding 0x0.ffff before converting the
++ * Q16.16 format into integer.
++ * In order to accurately calculate the maximum packet size when
++ * the data interval is more than 1 (i.e. ep->datainterval > 0),
++ * multiply by the data interval prior to rounding. For instance,
++ * a freqmax of 41 kHz will result in a max packet size of 6 (5.125)
++ * frames with a data interval of 1, but 11 (10.25) frames with a
++ * data interval of 2.
++ * (ep->freqmax << ep->datainterval overflows at 8.192 MHz for the
++ * maximum datainterval value of 3, at USB full speed, higher for
++ * USB high speed, noting that ep->freqmax is in units of
++ * frames per packet in Q16.16 format.)
++ */
++ maxsize = (((ep->freqmax << ep->datainterval) + 0xffff) >> 16) *
++ (frame_bits >> 3);
++ if (tx_length_quirk)
++ maxsize += sizeof(__le32); /* Space for length descriptor */
++ /* but wMaxPacketSize might reduce this */
++ if (ep->maxpacksize && ep->maxpacksize < maxsize) {
++ /* whatever fits into a max. size packet */
++ unsigned int data_maxsize = maxsize = ep->maxpacksize;
++
++ if (tx_length_quirk)
++ /* Need to remove the length descriptor to calc freq */
++ data_maxsize -= sizeof(__le32);
++ ret = ret && (ep->freqmax == (data_maxsize / (frame_bits >> 3))
++ << (16 - ep->datainterval));
++ }
++
++ if (ep->fill_max)
++ ret = ret && (ep->curpacksize == ep->maxpacksize);
++ else
++ ret = ret && (ep->curpacksize == maxsize);
++
++ if (snd_usb_get_speed(ep->chip->dev) != USB_SPEED_FULL) {
++ packs_per_ms = 8 >> ep->datainterval;
++ max_packs_per_urb = MAX_PACKS_HS;
++ } else {
++ packs_per_ms = 1;
++ max_packs_per_urb = MAX_PACKS;
++ }
++ if (sync_ep && !snd_usb_endpoint_implicit_feedback_sink(ep))
++ max_packs_per_urb = min(max_packs_per_urb,
++ 1U << sync_ep->syncinterval);
++ max_packs_per_urb = max(1u, max_packs_per_urb >> ep->datainterval);
++
++ /*
++ * Capture endpoints need to use small URBs because there's no way
++ * to tell in advance where the next period will end, and we don't
++ * want the next URB to complete much after the period ends.
++ *
++ * Playback endpoints with implicit sync must use the same parameters
++ * as their corresponding capture endpoint.
++ */
++ if (usb_pipein(ep->pipe) ||
++ snd_usb_endpoint_implicit_feedback_sink(ep)) {
++
++ urb_packs = packs_per_ms;
++ /*
++ * Wireless devices can poll at a max rate of once per 4ms.
++ * For dataintervals less than 5, increase the packet count to
++ * allow the host controller to use bursting to fill in the
++ * gaps.
++ */
++ if (snd_usb_get_speed(ep->chip->dev) == USB_SPEED_WIRELESS) {
++ int interval = ep->datainterval;
++
++ while (interval < 5) {
++ urb_packs <<= 1;
++ ++interval;
++ }
++ }
++ /* make capture URBs <= 1 ms and smaller than a period */
++ urb_packs = min(max_packs_per_urb, urb_packs);
++ while (urb_packs > 1 && urb_packs * maxsize >= period_bytes)
++ urb_packs >>= 1;
++ ret = ret && (ep->nurbs == MAX_URBS);
++
++ /*
++ * Playback endpoints without implicit sync are adjusted so that
++ * a period fits as evenly as possible in the smallest number of
++ * URBs. The total number of URBs is adjusted to the size of the
++ * ALSA buffer, subject to the MAX_URBS and MAX_QUEUE limits.
++ */
++ } else {
++ /* determine how small a packet can be */
++ minsize = (ep->freqn >> (16 - ep->datainterval)) *
++ (frame_bits >> 3);
++ /* with sync from device, assume it can be 12% lower */
++ if (sync_ep)
++ minsize -= minsize >> 3;
++ minsize = max(minsize, 1u);
++
++ /* how many packets will contain an entire ALSA period? */
++ max_packs_per_period = DIV_ROUND_UP(period_bytes, minsize);
++
++ /* how many URBs will contain a period? */
++ urbs_per_period = DIV_ROUND_UP(max_packs_per_period,
++ max_packs_per_urb);
++ /* how many packets are needed in each URB? */
++ urb_packs = DIV_ROUND_UP(max_packs_per_period, urbs_per_period);
++
++ /* limit the number of frames in a single URB */
++ ret = ret && (ep->max_urb_frames ==
++ DIV_ROUND_UP(frames_per_period, urbs_per_period));
++
++ /* try to use enough URBs to contain an entire ALSA buffer */
++ max_urbs = min((unsigned) MAX_URBS,
++ MAX_QUEUE * packs_per_ms / urb_packs);
++ ret = ret && (ep->nurbs == min(max_urbs,
++ urbs_per_period * periods_per_buffer));
++ }
++
++ ret = ret && (ep->datainterval == fmt->datainterval);
++ ret = ret && (ep->maxpacksize == fmt->maxpacksize);
++ ret = ret &&
++ (ep->fill_max == !!(fmt->attributes & UAC_EP_CS_ATTR_FILL_MAX));
++
++ return ret;
++}
++
+ /*
+ * configure a data endpoint
+ */
+@@ -861,10 +1060,23 @@ int snd_usb_endpoint_set_params(struct snd_usb_endpoint *ep,
+ int err;
+
+ if (ep->use_count != 0) {
+- usb_audio_warn(ep->chip,
+- "Unable to change format on ep #%x: already in use\n",
+- ep->ep_num);
+- return -EBUSY;
++ bool check = ep->is_implicit_feedback &&
++ check_ep_params(ep, pcm_format,
++ channels, period_bytes,
++ period_frames, buffer_periods,
++ fmt, sync_ep);
++
++ if (!check) {
++ usb_audio_warn(ep->chip,
++ "Unable to change format on ep #%x: already in use\n",
++ ep->ep_num);
++ return -EBUSY;
++ }
++
++ usb_audio_dbg(ep->chip,
++ "Ep #%x already in use as implicit feedback but format not changed\n",
++ ep->ep_num);
++ return 0;
+ }
+
+ /* release old buffers, if any */
+@@ -874,10 +1086,17 @@ int snd_usb_endpoint_set_params(struct snd_usb_endpoint *ep,
+ ep->maxpacksize = fmt->maxpacksize;
+ ep->fill_max = !!(fmt->attributes & UAC_EP_CS_ATTR_FILL_MAX);
+
+- if (snd_usb_get_speed(ep->chip->dev) == USB_SPEED_FULL)
++ if (snd_usb_get_speed(ep->chip->dev) == USB_SPEED_FULL) {
+ ep->freqn = get_usb_full_speed_rate(rate);
+- else
++ ep->fps = 1000;
++ } else {
+ ep->freqn = get_usb_high_speed_rate(rate);
++ ep->fps = 8000;
++ }
++
++ ep->sample_rem = rate % ep->fps;
++ ep->framesize[0] = rate / ep->fps;
++ ep->framesize[1] = (rate + (ep->fps - 1)) / ep->fps;
+
+ /* calculate the frequency in 16.16 format */
+ ep->freqm = ep->freqn;
+@@ -936,6 +1155,7 @@ int snd_usb_endpoint_start(struct snd_usb_endpoint *ep)
+ ep->active_mask = 0;
+ ep->unlink_mask = 0;
+ ep->phase = 0;
++ ep->sample_accum = 0;
+
+ snd_usb_endpoint_start_quirk(ep);
+
+diff --git a/sound/usb/endpoint.h b/sound/usb/endpoint.h
+index 63a39d4fa8d8..d23fa0a8c11b 100644
+--- a/sound/usb/endpoint.h
++++ b/sound/usb/endpoint.h
+@@ -28,6 +28,7 @@ void snd_usb_endpoint_release(struct snd_usb_endpoint *ep);
+ void snd_usb_endpoint_free(struct snd_usb_endpoint *ep);
+
+ int snd_usb_endpoint_implicit_feedback_sink(struct snd_usb_endpoint *ep);
++int snd_usb_endpoint_slave_next_packet_size(struct snd_usb_endpoint *ep);
+ int snd_usb_endpoint_next_packet_size(struct snd_usb_endpoint *ep);
+
+ void snd_usb_handle_sync_urb(struct snd_usb_endpoint *ep,
+diff --git a/sound/usb/mixer_quirks.c b/sound/usb/mixer_quirks.c
+index a5f65a9a0254..aad2683ff793 100644
+--- a/sound/usb/mixer_quirks.c
++++ b/sound/usb/mixer_quirks.c
+@@ -2185,6 +2185,421 @@ static int snd_rme_controls_create(struct usb_mixer_interface *mixer)
+ return 0;
+ }
+
++/*
++ * RME Babyface Pro (FS)
++ *
++ * These devices expose a couple of DSP functions via requests to EP0.
++ * Switches are available via control registers, while routing is controlled
++ * by controlling the volume on each possible crossing point.
++ * Volume control is linear, from -inf (dec. 0) to +6dB (dec. 65536) with
++ * 0dB being at dec. 32768.
++ */
++enum {
++ SND_BBFPRO_CTL_REG1 = 0,
++ SND_BBFPRO_CTL_REG2
++};
++
++#define SND_BBFPRO_CTL_REG_MASK 1
++#define SND_BBFPRO_CTL_IDX_MASK 0xff
++#define SND_BBFPRO_CTL_IDX_SHIFT 1
++#define SND_BBFPRO_CTL_VAL_MASK 1
++#define SND_BBFPRO_CTL_VAL_SHIFT 9
++#define SND_BBFPRO_CTL_REG1_CLK_MASTER 0
++#define SND_BBFPRO_CTL_REG1_CLK_OPTICAL 1
++#define SND_BBFPRO_CTL_REG1_SPDIF_PRO 7
++#define SND_BBFPRO_CTL_REG1_SPDIF_EMPH 8
++#define SND_BBFPRO_CTL_REG1_SPDIF_OPTICAL 10
++#define SND_BBFPRO_CTL_REG2_48V_AN1 0
++#define SND_BBFPRO_CTL_REG2_48V_AN2 1
++#define SND_BBFPRO_CTL_REG2_SENS_IN3 2
++#define SND_BBFPRO_CTL_REG2_SENS_IN4 3
++#define SND_BBFPRO_CTL_REG2_PAD_AN1 4
++#define SND_BBFPRO_CTL_REG2_PAD_AN2 5
++
++#define SND_BBFPRO_MIXER_IDX_MASK 0x1ff
++#define SND_BBFPRO_MIXER_VAL_MASK 0x3ffff
++#define SND_BBFPRO_MIXER_VAL_SHIFT 9
++#define SND_BBFPRO_MIXER_VAL_MIN 0 // -inf
++#define SND_BBFPRO_MIXER_VAL_MAX 65536 // +6dB
++
++#define SND_BBFPRO_USBREQ_CTL_REG1 0x10
++#define SND_BBFPRO_USBREQ_CTL_REG2 0x17
++#define SND_BBFPRO_USBREQ_MIXER 0x12
++
++static int snd_bbfpro_ctl_update(struct usb_mixer_interface *mixer, u8 reg,
++ u8 index, u8 value)
++{
++ int err;
++ u16 usb_req, usb_idx, usb_val;
++ struct snd_usb_audio *chip = mixer->chip;
++
++ err = snd_usb_lock_shutdown(chip);
++ if (err < 0)
++ return err;
++
++ if (reg == SND_BBFPRO_CTL_REG1) {
++ usb_req = SND_BBFPRO_USBREQ_CTL_REG1;
++ if (index == SND_BBFPRO_CTL_REG1_CLK_OPTICAL) {
++ usb_idx = 3;
++ usb_val = value ? 3 : 0;
++ } else {
++ usb_idx = 1 << index;
++ usb_val = value ? usb_idx : 0;
++ }
++ } else {
++ usb_req = SND_BBFPRO_USBREQ_CTL_REG2;
++ usb_idx = 1 << index;
++ usb_val = value ? usb_idx : 0;
++ }
++
++ err = snd_usb_ctl_msg(chip->dev,
++ usb_sndctrlpipe(chip->dev, 0), usb_req,
++ USB_DIR_OUT | USB_TYPE_VENDOR | USB_RECIP_DEVICE,
++ usb_val, usb_idx, 0, 0);
++
++ snd_usb_unlock_shutdown(chip);
++ return err;
++}
++
++static int snd_bbfpro_ctl_get(struct snd_kcontrol *kcontrol,
++ struct snd_ctl_elem_value *ucontrol)
++{
++ u8 reg, idx, val;
++ int pv;
++
++ pv = kcontrol->private_value;
++ reg = pv & SND_BBFPRO_CTL_REG_MASK;
++ idx = (pv >> SND_BBFPRO_CTL_IDX_SHIFT) & SND_BBFPRO_CTL_IDX_MASK;
++ val = kcontrol->private_value >> SND_BBFPRO_CTL_VAL_SHIFT;
++
++ if ((reg == SND_BBFPRO_CTL_REG1 &&
++ idx == SND_BBFPRO_CTL_REG1_CLK_OPTICAL) ||
++ (reg == SND_BBFPRO_CTL_REG2 &&
++ (idx == SND_BBFPRO_CTL_REG2_SENS_IN3 ||
++ idx == SND_BBFPRO_CTL_REG2_SENS_IN4))) {
++ ucontrol->value.enumerated.item[0] = val;
++ } else {
++ ucontrol->value.integer.value[0] = val;
++ }
++ return 0;
++}
++
++static int snd_bbfpro_ctl_info(struct snd_kcontrol *kcontrol,
++ struct snd_ctl_elem_info *uinfo)
++{
++ u8 reg, idx;
++ int pv;
++
++ pv = kcontrol->private_value;
++ reg = pv & SND_BBFPRO_CTL_REG_MASK;
++ idx = (pv >> SND_BBFPRO_CTL_IDX_SHIFT) & SND_BBFPRO_CTL_IDX_MASK;
++
++ if (reg == SND_BBFPRO_CTL_REG1 &&
++ idx == SND_BBFPRO_CTL_REG1_CLK_OPTICAL) {
++ static const char * const texts[2] = {
++ "AutoSync",
++ "Internal"
++ };
++ return snd_ctl_enum_info(uinfo, 1, 2, texts);
++ } else if (reg == SND_BBFPRO_CTL_REG2 &&
++ (idx == SND_BBFPRO_CTL_REG2_SENS_IN3 ||
++ idx == SND_BBFPRO_CTL_REG2_SENS_IN4)) {
++ static const char * const texts[2] = {
++ "-10dBV",
++ "+4dBu"
++ };
++ return snd_ctl_enum_info(uinfo, 1, 2, texts);
++ }
++
++ uinfo->count = 1;
++ uinfo->value.integer.min = 0;
++ uinfo->value.integer.max = 1;
++ uinfo->type = SNDRV_CTL_ELEM_TYPE_BOOLEAN;
++ return 0;
++}
++
++static int snd_bbfpro_ctl_put(struct snd_kcontrol *kcontrol,
++ struct snd_ctl_elem_value *ucontrol)
++{
++ int err;
++ u8 reg, idx;
++ int old_value, pv, val;
++
++ struct usb_mixer_elem_list *list = snd_kcontrol_chip(kcontrol);
++ struct usb_mixer_interface *mixer = list->mixer;
++
++ pv = kcontrol->private_value;
++ reg = pv & SND_BBFPRO_CTL_REG_MASK;
++ idx = (pv >> SND_BBFPRO_CTL_IDX_SHIFT) & SND_BBFPRO_CTL_IDX_MASK;
++ old_value = (pv >> SND_BBFPRO_CTL_VAL_SHIFT) & SND_BBFPRO_CTL_VAL_MASK;
++
++ if ((reg == SND_BBFPRO_CTL_REG1 &&
++ idx == SND_BBFPRO_CTL_REG1_CLK_OPTICAL) ||
++ (reg == SND_BBFPRO_CTL_REG2 &&
++ (idx == SND_BBFPRO_CTL_REG2_SENS_IN3 ||
++ idx == SND_BBFPRO_CTL_REG2_SENS_IN4))) {
++ val = ucontrol->value.enumerated.item[0];
++ } else {
++ val = ucontrol->value.integer.value[0];
++ }
++
++ if (val > 1)
++ return -EINVAL;
++
++ if (val == old_value)
++ return 0;
++
++ kcontrol->private_value = reg
++ | ((idx & SND_BBFPRO_CTL_IDX_MASK) << SND_BBFPRO_CTL_IDX_SHIFT)
++ | ((val & SND_BBFPRO_CTL_VAL_MASK) << SND_BBFPRO_CTL_VAL_SHIFT);
++
++ err = snd_bbfpro_ctl_update(mixer, reg, idx, val);
++ return err < 0 ? err : 1;
++}
++
++static int snd_bbfpro_ctl_resume(struct usb_mixer_elem_list *list)
++{
++ u8 reg, idx;
++ int value, pv;
++
++ pv = list->kctl->private_value;
++ reg = pv & SND_BBFPRO_CTL_REG_MASK;
++ idx = (pv >> SND_BBFPRO_CTL_IDX_SHIFT) & SND_BBFPRO_CTL_IDX_MASK;
++ value = (pv >> SND_BBFPRO_CTL_VAL_SHIFT) & SND_BBFPRO_CTL_VAL_MASK;
++
++ return snd_bbfpro_ctl_update(list->mixer, reg, idx, value);
++}
++
++static int snd_bbfpro_vol_update(struct usb_mixer_interface *mixer, u16 index,
++ u32 value)
++{
++ struct snd_usb_audio *chip = mixer->chip;
++ int err;
++ u16 idx;
++ u16 usb_idx, usb_val;
++ u32 v;
++
++ err = snd_usb_lock_shutdown(chip);
++ if (err < 0)
++ return err;
++
++ idx = index & SND_BBFPRO_MIXER_IDX_MASK;
++ // 18 bit linear volume, split so 2 bits end up in index.
++ v = value & SND_BBFPRO_MIXER_VAL_MASK;
++ usb_idx = idx | (v & 0x3) << 14;
++ usb_val = (v >> 2) & 0xffff;
++
++ err = snd_usb_ctl_msg(chip->dev,
++ usb_sndctrlpipe(chip->dev, 0),
++ SND_BBFPRO_USBREQ_MIXER,
++ USB_DIR_OUT | USB_TYPE_VENDOR |
++ USB_RECIP_DEVICE,
++ usb_val, usb_idx, 0, 0);
++
++ snd_usb_unlock_shutdown(chip);
++ return err;
++}
++
++static int snd_bbfpro_vol_get(struct snd_kcontrol *kcontrol,
++ struct snd_ctl_elem_value *ucontrol)
++{
++ ucontrol->value.integer.value[0] =
++ kcontrol->private_value >> SND_BBFPRO_MIXER_VAL_SHIFT;
++ return 0;
++}
++
++static int snd_bbfpro_vol_info(struct snd_kcontrol *kcontrol,
++ struct snd_ctl_elem_info *uinfo)
++{
++ uinfo->type = SNDRV_CTL_ELEM_TYPE_INTEGER;
++ uinfo->count = 1;
++ uinfo->value.integer.min = SND_BBFPRO_MIXER_VAL_MIN;
++ uinfo->value.integer.max = SND_BBFPRO_MIXER_VAL_MAX;
++ return 0;
++}
++
++static int snd_bbfpro_vol_put(struct snd_kcontrol *kcontrol,
++ struct snd_ctl_elem_value *ucontrol)
++{
++ int err;
++ u16 idx;
++ u32 new_val, old_value, uvalue;
++ struct usb_mixer_elem_list *list = snd_kcontrol_chip(kcontrol);
++ struct usb_mixer_interface *mixer = list->mixer;
++
++ uvalue = ucontrol->value.integer.value[0];
++ idx = kcontrol->private_value & SND_BBFPRO_MIXER_IDX_MASK;
++ old_value = kcontrol->private_value >> SND_BBFPRO_MIXER_VAL_SHIFT;
++
++ if (uvalue > SND_BBFPRO_MIXER_VAL_MAX)
++ return -EINVAL;
++
++ if (uvalue == old_value)
++ return 0;
++
++ new_val = uvalue & SND_BBFPRO_MIXER_VAL_MASK;
++
++ kcontrol->private_value = idx
++ | (new_val << SND_BBFPRO_MIXER_VAL_SHIFT);
++
++ err = snd_bbfpro_vol_update(mixer, idx, new_val);
++ return err < 0 ? err : 1;
++}
++
++static int snd_bbfpro_vol_resume(struct usb_mixer_elem_list *list)
++{
++ int pv = list->kctl->private_value;
++ u16 idx = pv & SND_BBFPRO_MIXER_IDX_MASK;
++ u32 val = (pv >> SND_BBFPRO_MIXER_VAL_SHIFT)
++ & SND_BBFPRO_MIXER_VAL_MASK;
++ return snd_bbfpro_vol_update(list->mixer, idx, val);
++}
++
++// Predefine elements
++static const struct snd_kcontrol_new snd_bbfpro_ctl_control = {
++ .iface = SNDRV_CTL_ELEM_IFACE_MIXER,
++ .access = SNDRV_CTL_ELEM_ACCESS_READWRITE,
++ .index = 0,
++ .info = snd_bbfpro_ctl_info,
++ .get = snd_bbfpro_ctl_get,
++ .put = snd_bbfpro_ctl_put
++};
++
++static const struct snd_kcontrol_new snd_bbfpro_vol_control = {
++ .iface = SNDRV_CTL_ELEM_IFACE_MIXER,
++ .access = SNDRV_CTL_ELEM_ACCESS_READWRITE,
++ .index = 0,
++ .info = snd_bbfpro_vol_info,
++ .get = snd_bbfpro_vol_get,
++ .put = snd_bbfpro_vol_put
++};
++
++static int snd_bbfpro_ctl_add(struct usb_mixer_interface *mixer, u8 reg,
++ u8 index, char *name)
++{
++ struct snd_kcontrol_new knew = snd_bbfpro_ctl_control;
++
++ knew.name = name;
++ knew.private_value = (reg & SND_BBFPRO_CTL_REG_MASK)
++ | ((index & SND_BBFPRO_CTL_IDX_MASK)
++ << SND_BBFPRO_CTL_IDX_SHIFT);
++
++ return add_single_ctl_with_resume(mixer, 0, snd_bbfpro_ctl_resume,
++ &knew, NULL);
++}
++
++static int snd_bbfpro_vol_add(struct usb_mixer_interface *mixer, u16 index,
++ char *name)
++{
++ struct snd_kcontrol_new knew = snd_bbfpro_vol_control;
++
++ knew.name = name;
++ knew.private_value = index & SND_BBFPRO_MIXER_IDX_MASK;
++
++ return add_single_ctl_with_resume(mixer, 0, snd_bbfpro_vol_resume,
++ &knew, NULL);
++}
++
++static int snd_bbfpro_controls_create(struct usb_mixer_interface *mixer)
++{
++ int err, i, o;
++ char name[48];
++
++ static const char * const input[] = {
++ "AN1", "AN2", "IN3", "IN4", "AS1", "AS2", "ADAT3",
++ "ADAT4", "ADAT5", "ADAT6", "ADAT7", "ADAT8"};
++
++ static const char * const output[] = {
++ "AN1", "AN2", "PH3", "PH4", "AS1", "AS2", "ADAT3", "ADAT4",
++ "ADAT5", "ADAT6", "ADAT7", "ADAT8"};
++
++ for (o = 0 ; o < 12 ; ++o) {
++ for (i = 0 ; i < 12 ; ++i) {
++ // Line routing
++ snprintf(name, sizeof(name),
++ "%s-%s-%s Playback Volume",
++ (i < 2 ? "Mic" : "Line"),
++ input[i], output[o]);
++ err = snd_bbfpro_vol_add(mixer, (26 * o + i), name);
++ if (err < 0)
++ return err;
++
++ // PCM routing... yes, it is output remapping
++ snprintf(name, sizeof(name),
++ "PCM-%s-%s Playback Volume",
++ output[i], output[o]);
++ err = snd_bbfpro_vol_add(mixer, (26 * o + 12 + i),
++ name);
++ if (err < 0)
++ return err;
++ }
++ }
++
++ // Control Reg 1
++ err = snd_bbfpro_ctl_add(mixer, SND_BBFPRO_CTL_REG1,
++ SND_BBFPRO_CTL_REG1_CLK_OPTICAL,
++ "Sample Clock Source");
++ if (err < 0)
++ return err;
++
++ err = snd_bbfpro_ctl_add(mixer, SND_BBFPRO_CTL_REG1,
++ SND_BBFPRO_CTL_REG1_SPDIF_PRO,
++ "IEC958 Pro Mask");
++ if (err < 0)
++ return err;
++
++ err = snd_bbfpro_ctl_add(mixer, SND_BBFPRO_CTL_REG1,
++ SND_BBFPRO_CTL_REG1_SPDIF_EMPH,
++ "IEC958 Emphasis");
++ if (err < 0)
++ return err;
++
++ err = snd_bbfpro_ctl_add(mixer, SND_BBFPRO_CTL_REG1,
++ SND_BBFPRO_CTL_REG1_SPDIF_OPTICAL,
++ "IEC958 Switch");
++ if (err < 0)
++ return err;
++
++ // Control Reg 2
++ err = snd_bbfpro_ctl_add(mixer, SND_BBFPRO_CTL_REG2,
++ SND_BBFPRO_CTL_REG2_48V_AN1,
++ "Mic-AN1 48V");
++ if (err < 0)
++ return err;
++
++ err = snd_bbfpro_ctl_add(mixer, SND_BBFPRO_CTL_REG2,
++ SND_BBFPRO_CTL_REG2_48V_AN2,
++ "Mic-AN2 48V");
++ if (err < 0)
++ return err;
++
++ err = snd_bbfpro_ctl_add(mixer, SND_BBFPRO_CTL_REG2,
++ SND_BBFPRO_CTL_REG2_SENS_IN3,
++ "Line-IN3 Sens.");
++ if (err < 0)
++ return err;
++
++ err = snd_bbfpro_ctl_add(mixer, SND_BBFPRO_CTL_REG2,
++ SND_BBFPRO_CTL_REG2_SENS_IN4,
++ "Line-IN4 Sens.");
++ if (err < 0)
++ return err;
++
++ err = snd_bbfpro_ctl_add(mixer, SND_BBFPRO_CTL_REG2,
++ SND_BBFPRO_CTL_REG2_PAD_AN1,
++ "Mic-AN1 PAD");
++ if (err < 0)
++ return err;
++
++ err = snd_bbfpro_ctl_add(mixer, SND_BBFPRO_CTL_REG2,
++ SND_BBFPRO_CTL_REG2_PAD_AN2,
++ "Mic-AN2 PAD");
++ if (err < 0)
++ return err;
++
++ return 0;
++}
++
+ int snd_usb_mixer_apply_create_quirk(struct usb_mixer_interface *mixer)
+ {
+ int err = 0;
+@@ -2286,6 +2701,9 @@ int snd_usb_mixer_apply_create_quirk(struct usb_mixer_interface *mixer)
+ case USB_ID(0x0194f, 0x010c): /* Presonus Studio 1810c */
+ err = snd_sc1810_init_mixer(mixer);
+ break;
++ case USB_ID(0x2a39, 0x3fb0): /* RME Babyface Pro FS */
++ err = snd_bbfpro_controls_create(mixer);
++ break;
+ }
+
+ return err;
+diff --git a/sound/usb/pcm.c b/sound/usb/pcm.c
+index a4e4064f9aee..d61c2f1095b5 100644
+--- a/sound/usb/pcm.c
++++ b/sound/usb/pcm.c
+@@ -404,6 +404,8 @@ add_sync_ep:
+ if (!subs->sync_endpoint)
+ return -EINVAL;
+
++ subs->sync_endpoint->is_implicit_feedback = 1;
++
+ subs->data_endpoint->sync_master = subs->sync_endpoint;
+
+ return 1;
+@@ -502,12 +504,15 @@ static int set_sync_endpoint(struct snd_usb_substream *subs,
+ implicit_fb ?
+ SND_USB_ENDPOINT_TYPE_DATA :
+ SND_USB_ENDPOINT_TYPE_SYNC);
++
+ if (!subs->sync_endpoint) {
+ if (is_playback && attr == USB_ENDPOINT_SYNC_NONE)
+ return 0;
+ return -EINVAL;
+ }
+
++ subs->sync_endpoint->is_implicit_feedback = implicit_fb;
++
+ subs->data_endpoint->sync_master = subs->sync_endpoint;
+
+ return 0;
+@@ -1579,6 +1584,8 @@ static void prepare_playback_urb(struct snd_usb_substream *subs,
+ for (i = 0; i < ctx->packets; i++) {
+ if (ctx->packet_size[i])
+ counts = ctx->packet_size[i];
++ else if (ep->sync_master)
++ counts = snd_usb_endpoint_slave_next_packet_size(ep);
+ else
+ counts = snd_usb_endpoint_next_packet_size(ep);
+
+diff --git a/tools/bootconfig/main.c b/tools/bootconfig/main.c
+index 0efaf45f7367..e0878f5f74b1 100644
+--- a/tools/bootconfig/main.c
++++ b/tools/bootconfig/main.c
+@@ -14,13 +14,18 @@
+ #include <linux/kernel.h>
+ #include <linux/bootconfig.h>
+
+-static int xbc_show_array(struct xbc_node *node)
++static int xbc_show_value(struct xbc_node *node)
+ {
+ const char *val;
++ char q;
+ int i = 0;
+
+ xbc_array_for_each_value(node, val) {
+- printf("\"%s\"%s", val, node->next ? ", " : ";\n");
++ if (strchr(val, '"'))
++ q = '\'';
++ else
++ q = '"';
++ printf("%c%s%c%s", q, val, q, node->next ? ", " : ";\n");
+ i++;
+ }
+ return i;
+@@ -48,10 +53,7 @@ static void xbc_show_compact_tree(void)
+ continue;
+ } else if (cnode && xbc_node_is_value(cnode)) {
+ printf("%s = ", xbc_node_get_data(node));
+- if (cnode->next)
+- xbc_show_array(cnode);
+- else
+- printf("\"%s\";\n", xbc_node_get_data(cnode));
++ xbc_show_value(cnode);
+ } else {
+ printf("%s;\n", xbc_node_get_data(node));
+ }
+@@ -205,11 +207,13 @@ int show_xbc(const char *path)
+ }
+
+ ret = load_xbc_from_initrd(fd, &buf);
+- if (ret < 0)
++ if (ret < 0) {
+ pr_err("Failed to load a boot config from initrd: %d\n", ret);
+- else
+- xbc_show_compact_tree();
+-
++ goto out;
++ }
++ xbc_show_compact_tree();
++ ret = 0;
++out:
+ close(fd);
+ free(buf);
+
+diff --git a/tools/bpf/bpftool/gen.c b/tools/bpf/bpftool/gen.c
+index f8113b3646f5..f5960b48c861 100644
+--- a/tools/bpf/bpftool/gen.c
++++ b/tools/bpf/bpftool/gen.c
+@@ -225,6 +225,7 @@ static int codegen(const char *template, ...)
+ } else {
+ p_err("unrecognized character at pos %td in template '%s'",
+ src - template - 1, template);
++ free(s);
+ return -EINVAL;
+ }
+ }
+@@ -235,6 +236,7 @@ static int codegen(const char *template, ...)
+ if (*src != '\t') {
+ p_err("not enough tabs at pos %td in template '%s'",
+ src - template - 1, template);
++ free(s);
+ return -EINVAL;
+ }
+ }
+diff --git a/tools/lib/bpf/btf_dump.c b/tools/lib/bpf/btf_dump.c
+index 0c28ee82834b..653dbbe2e366 100644
+--- a/tools/lib/bpf/btf_dump.c
++++ b/tools/lib/bpf/btf_dump.c
+@@ -1137,6 +1137,20 @@ static void btf_dump_emit_mods(struct btf_dump *d, struct id_stack *decl_stack)
+ }
+ }
+
++static void btf_dump_drop_mods(struct btf_dump *d, struct id_stack *decl_stack)
++{
++ const struct btf_type *t;
++ __u32 id;
++
++ while (decl_stack->cnt) {
++ id = decl_stack->ids[decl_stack->cnt - 1];
++ t = btf__type_by_id(d->btf, id);
++ if (!btf_is_mod(t))
++ return;
++ decl_stack->cnt--;
++ }
++}
++
+ static void btf_dump_emit_name(const struct btf_dump *d,
+ const char *name, bool last_was_ptr)
+ {
+@@ -1235,14 +1249,7 @@ static void btf_dump_emit_type_chain(struct btf_dump *d,
+ * a const/volatile modifier for array, so we are
+ * going to silently skip them here.
+ */
+- while (decls->cnt) {
+- next_id = decls->ids[decls->cnt - 1];
+- next_t = btf__type_by_id(d->btf, next_id);
+- if (btf_is_mod(next_t))
+- decls->cnt--;
+- else
+- break;
+- }
++ btf_dump_drop_mods(d, decls);
+
+ if (decls->cnt == 0) {
+ btf_dump_emit_name(d, fname, last_was_ptr);
+@@ -1270,7 +1277,15 @@ static void btf_dump_emit_type_chain(struct btf_dump *d,
+ __u16 vlen = btf_vlen(t);
+ int i;
+
+- btf_dump_emit_mods(d, decls);
++ /*
++ * GCC emits extra volatile qualifier for
++ * __attribute__((noreturn)) function pointers. Clang
++ * doesn't do it. It's a GCC quirk for backwards
++ * compatibility with code written for GCC <2.5. So,
++ * similarly to extra qualifiers for array, just drop
++ * them, instead of handling them.
++ */
++ btf_dump_drop_mods(d, decls);
+ if (decls->cnt) {
+ btf_dump_printf(d, " (");
+ btf_dump_emit_type_chain(d, decls, fname, lvl);
+diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
+index 0c5b4fb553fb..c417cff2cdaf 100644
+--- a/tools/lib/bpf/libbpf.c
++++ b/tools/lib/bpf/libbpf.c
+@@ -3455,10 +3455,6 @@ bpf_object__populate_internal_map(struct bpf_object *obj, struct bpf_map *map)
+ char *cp, errmsg[STRERR_BUFSIZE];
+ int err, zero = 0;
+
+- /* kernel already zero-initializes .bss map. */
+- if (map_type == LIBBPF_MAP_BSS)
+- return 0;
+-
+ err = bpf_map_update_elem(map->fd, &zero, map->mmaped, 0);
+ if (err) {
+ err = -errno;
+diff --git a/tools/perf/builtin-report.c b/tools/perf/builtin-report.c
+index 26d8fc27e427..fc7855262162 100644
+--- a/tools/perf/builtin-report.c
++++ b/tools/perf/builtin-report.c
+@@ -476,8 +476,7 @@ static size_t hists__fprintf_nr_sample_events(struct hists *hists, struct report
+ if (rep->time_str)
+ ret += fprintf(fp, " (time slices: %s)", rep->time_str);
+
+- if (symbol_conf.show_ref_callgraph &&
+- strstr(evname, "call-graph=no")) {
++ if (symbol_conf.show_ref_callgraph && evname && strstr(evname, "call-graph=no")) {
+ ret += fprintf(fp, ", show reference callgraph");
+ }
+
+diff --git a/tools/perf/util/parse-events.y b/tools/perf/util/parse-events.y
+index 94f8bcd83582..9a41247c602b 100644
+--- a/tools/perf/util/parse-events.y
++++ b/tools/perf/util/parse-events.y
+@@ -348,7 +348,7 @@ PE_PMU_EVENT_PRE '-' PE_PMU_EVENT_SUF sep_dc
+ struct list_head *list;
+ char pmu_name[128];
+
+- snprintf(&pmu_name, 128, "%s-%s", $1, $3);
++ snprintf(pmu_name, sizeof(pmu_name), "%s-%s", $1, $3);
+ free($1);
+ free($3);
+ if (parse_events_multi_pmu_add(_parse_state, pmu_name, &list) < 0)
+diff --git a/tools/perf/util/probe-event.c b/tools/perf/util/probe-event.c
+index a08f373d3305..df713a5d1e26 100644
+--- a/tools/perf/util/probe-event.c
++++ b/tools/perf/util/probe-event.c
+@@ -1575,7 +1575,7 @@ static int parse_perf_probe_arg(char *str, struct perf_probe_arg *arg)
+ }
+
+ tmp = strchr(str, '@');
+- if (tmp && tmp != str && strcmp(tmp + 1, "user")) { /* user attr */
++ if (tmp && tmp != str && !strcmp(tmp + 1, "user")) { /* user attr */
+ if (!user_access_is_supported()) {
+ semantic_error("ftrace does not support user access\n");
+ return -EINVAL;
+@@ -1995,7 +1995,10 @@ static int __synthesize_probe_trace_arg_ref(struct probe_trace_arg_ref *ref,
+ if (depth < 0)
+ return depth;
+ }
+- err = strbuf_addf(buf, "%+ld(", ref->offset);
++ if (ref->user_access)
++ err = strbuf_addf(buf, "%s%ld(", "+u", ref->offset);
++ else
++ err = strbuf_addf(buf, "%+ld(", ref->offset);
+ return (err < 0) ? err : depth;
+ }
+
+diff --git a/tools/perf/util/probe-file.c b/tools/perf/util/probe-file.c
+index 8c852948513e..064b63a6a3f3 100644
+--- a/tools/perf/util/probe-file.c
++++ b/tools/perf/util/probe-file.c
+@@ -1044,7 +1044,7 @@ static struct {
+ DEFINE_TYPE(FTRACE_README_PROBE_TYPE_X, "*type: * x8/16/32/64,*"),
+ DEFINE_TYPE(FTRACE_README_KRETPROBE_OFFSET, "*place (kretprobe): *"),
+ DEFINE_TYPE(FTRACE_README_UPROBE_REF_CTR, "*ref_ctr_offset*"),
+- DEFINE_TYPE(FTRACE_README_USER_ACCESS, "*[u]<offset>*"),
++ DEFINE_TYPE(FTRACE_README_USER_ACCESS, "*u]<offset>*"),
+ DEFINE_TYPE(FTRACE_README_MULTIPROBE_EVENT, "*Create/append/*"),
+ DEFINE_TYPE(FTRACE_README_IMMEDIATE_VALUE, "*\\imm-value,*"),
+ };
+diff --git a/tools/perf/util/stat-display.c b/tools/perf/util/stat-display.c
+index 9e757d18d713..cf393c3eea23 100644
+--- a/tools/perf/util/stat-display.c
++++ b/tools/perf/util/stat-display.c
+@@ -671,7 +671,7 @@ static void print_aggr(struct perf_stat_config *config,
+ int s;
+ bool first;
+
+- if (!(config->aggr_map || config->aggr_get_id))
++ if (!config->aggr_map || !config->aggr_get_id)
+ return;
+
+ aggr_update_shadow(config, evlist);
+@@ -1172,7 +1172,7 @@ static void print_percore(struct perf_stat_config *config,
+ int s;
+ bool first = true;
+
+- if (!(config->aggr_map || config->aggr_get_id))
++ if (!config->aggr_map || !config->aggr_get_id)
+ return;
+
+ if (config->percore_show_thread)
+diff --git a/tools/testing/selftests/bpf/prog_tests/skeleton.c b/tools/testing/selftests/bpf/prog_tests/skeleton.c
+index 9264a2736018..fa153cf67b1b 100644
+--- a/tools/testing/selftests/bpf/prog_tests/skeleton.c
++++ b/tools/testing/selftests/bpf/prog_tests/skeleton.c
+@@ -15,6 +15,8 @@ void test_skeleton(void)
+ int duration = 0, err;
+ struct test_skeleton* skel;
+ struct test_skeleton__bss *bss;
++ struct test_skeleton__data *data;
++ struct test_skeleton__rodata *rodata;
+ struct test_skeleton__kconfig *kcfg;
+
+ skel = test_skeleton__open();
+@@ -24,13 +26,45 @@ void test_skeleton(void)
+ if (CHECK(skel->kconfig, "skel_kconfig", "kconfig is mmaped()!\n"))
+ goto cleanup;
+
++ bss = skel->bss;
++ data = skel->data;
++ rodata = skel->rodata;
++
++ /* validate values are pre-initialized correctly */
++ CHECK(data->in1 != -1, "in1", "got %d != exp %d\n", data->in1, -1);
++ CHECK(data->out1 != -1, "out1", "got %d != exp %d\n", data->out1, -1);
++ CHECK(data->in2 != -1, "in2", "got %lld != exp %lld\n", data->in2, -1LL);
++ CHECK(data->out2 != -1, "out2", "got %lld != exp %lld\n", data->out2, -1LL);
++
++ CHECK(bss->in3 != 0, "in3", "got %d != exp %d\n", bss->in3, 0);
++ CHECK(bss->out3 != 0, "out3", "got %d != exp %d\n", bss->out3, 0);
++ CHECK(bss->in4 != 0, "in4", "got %lld != exp %lld\n", bss->in4, 0LL);
++ CHECK(bss->out4 != 0, "out4", "got %lld != exp %lld\n", bss->out4, 0LL);
++
++ CHECK(rodata->in6 != 0, "in6", "got %d != exp %d\n", rodata->in6, 0);
++ CHECK(bss->out6 != 0, "out6", "got %d != exp %d\n", bss->out6, 0);
++
++ /* validate we can pre-setup global variables, even in .bss */
++ data->in1 = 10;
++ data->in2 = 11;
++ bss->in3 = 12;
++ bss->in4 = 13;
++ rodata->in6 = 14;
++
+ err = test_skeleton__load(skel);
+ if (CHECK(err, "skel_load", "failed to load skeleton: %d\n", err))
+ goto cleanup;
+
+- bss = skel->bss;
+- bss->in1 = 1;
+- bss->in2 = 2;
++ /* validate pre-setup values are still there */
++ CHECK(data->in1 != 10, "in1", "got %d != exp %d\n", data->in1, 10);
++ CHECK(data->in2 != 11, "in2", "got %lld != exp %lld\n", data->in2, 11LL);
++ CHECK(bss->in3 != 12, "in3", "got %d != exp %d\n", bss->in3, 12);
++ CHECK(bss->in4 != 13, "in4", "got %lld != exp %lld\n", bss->in4, 13LL);
++ CHECK(rodata->in6 != 14, "in6", "got %d != exp %d\n", rodata->in6, 14);
++
++ /* now set new values and attach to get them into outX variables */
++ data->in1 = 1;
++ data->in2 = 2;
+ bss->in3 = 3;
+ bss->in4 = 4;
+ bss->in5.a = 5;
+@@ -44,14 +78,15 @@ void test_skeleton(void)
+ /* trigger tracepoint */
+ usleep(1);
+
+- CHECK(bss->out1 != 1, "res1", "got %d != exp %d\n", bss->out1, 1);
+- CHECK(bss->out2 != 2, "res2", "got %lld != exp %d\n", bss->out2, 2);
++ CHECK(data->out1 != 1, "res1", "got %d != exp %d\n", data->out1, 1);
++ CHECK(data->out2 != 2, "res2", "got %lld != exp %d\n", data->out2, 2);
+ CHECK(bss->out3 != 3, "res3", "got %d != exp %d\n", (int)bss->out3, 3);
+ CHECK(bss->out4 != 4, "res4", "got %lld != exp %d\n", bss->out4, 4);
+ CHECK(bss->handler_out5.a != 5, "res5", "got %d != exp %d\n",
+ bss->handler_out5.a, 5);
+ CHECK(bss->handler_out5.b != 6, "res6", "got %lld != exp %d\n",
+ bss->handler_out5.b, 6);
++ CHECK(bss->out6 != 14, "res7", "got %d != exp %d\n", bss->out6, 14);
+
+ CHECK(bss->bpf_syscall != kcfg->CONFIG_BPF_SYSCALL, "ext1",
+ "got %d != exp %d\n", bss->bpf_syscall, kcfg->CONFIG_BPF_SYSCALL);
+diff --git a/tools/testing/selftests/bpf/progs/test_skeleton.c b/tools/testing/selftests/bpf/progs/test_skeleton.c
+index de03a90f78ca..77ae86f44db5 100644
+--- a/tools/testing/selftests/bpf/progs/test_skeleton.c
++++ b/tools/testing/selftests/bpf/progs/test_skeleton.c
+@@ -10,16 +10,26 @@ struct s {
+ long long b;
+ } __attribute__((packed));
+
+-int in1 = 0;
+-long long in2 = 0;
++/* .data section */
++int in1 = -1;
++long long in2 = -1;
++
++/* .bss section */
+ char in3 = '\0';
+ long long in4 __attribute__((aligned(64))) = 0;
+ struct s in5 = {};
+
+-long long out2 = 0;
++/* .rodata section */
++const volatile int in6 = 0;
++
++/* .data section */
++int out1 = -1;
++long long out2 = -1;
++
++/* .bss section */
+ char out3 = 0;
+ long long out4 = 0;
+-int out1 = 0;
++int out6 = 0;
+
+ extern bool CONFIG_BPF_SYSCALL __kconfig;
+ extern int LINUX_KERNEL_VERSION __kconfig;
+@@ -36,6 +46,7 @@ int handler(const void *ctx)
+ out3 = in3;
+ out4 = in4;
+ out5 = in5;
++ out6 = in6;
+
+ bpf_syscall = CONFIG_BPF_SYSCALL;
+ kern_ver = LINUX_KERNEL_VERSION;
+diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
+index 42f4f49f2a48..2c85b9dd86f5 100644
+--- a/tools/testing/selftests/kvm/Makefile
++++ b/tools/testing/selftests/kvm/Makefile
+@@ -80,7 +80,11 @@ LIBKVM += $(LIBKVM_$(UNAME_M))
+ INSTALL_HDR_PATH = $(top_srcdir)/usr
+ LINUX_HDR_PATH = $(INSTALL_HDR_PATH)/include/
+ LINUX_TOOL_INCLUDE = $(top_srcdir)/tools/include
++ifeq ($(ARCH),x86_64)
++LINUX_TOOL_ARCH_INCLUDE = $(top_srcdir)/tools/arch/x86/include
++else
+ LINUX_TOOL_ARCH_INCLUDE = $(top_srcdir)/tools/arch/$(ARCH)/include
++endif
+ CFLAGS += -Wall -Wstrict-prototypes -Wuninitialized -O2 -g -std=gnu99 \
+ -fno-stack-protector -fno-PIE -I$(LINUX_TOOL_INCLUDE) \
+ -I$(LINUX_TOOL_ARCH_INCLUDE) -I$(LINUX_HDR_PATH) -Iinclude \
+diff --git a/tools/testing/selftests/net/timestamping.c b/tools/testing/selftests/net/timestamping.c
+index aca3491174a1..f4bb4fef0f39 100644
+--- a/tools/testing/selftests/net/timestamping.c
++++ b/tools/testing/selftests/net/timestamping.c
+@@ -313,10 +313,16 @@ int main(int argc, char **argv)
+ int val;
+ socklen_t len;
+ struct timeval next;
++ size_t if_len;
+
+ if (argc < 2)
+ usage(0);
+ interface = argv[1];
++ if_len = strlen(interface);
++ if (if_len >= IFNAMSIZ) {
++ printf("interface name exceeds IFNAMSIZ\n");
++ exit(1);
++ }
+
+ for (i = 2; i < argc; i++) {
+ if (!strcasecmp(argv[i], "SO_TIMESTAMP"))
+@@ -350,12 +356,12 @@ int main(int argc, char **argv)
+ bail("socket");
+
+ memset(&device, 0, sizeof(device));
+- strncpy(device.ifr_name, interface, sizeof(device.ifr_name));
++ memcpy(device.ifr_name, interface, if_len + 1);
+ if (ioctl(sock, SIOCGIFADDR, &device) < 0)
+ bail("getting interface IP address");
+
+ memset(&hwtstamp, 0, sizeof(hwtstamp));
+- strncpy(hwtstamp.ifr_name, interface, sizeof(hwtstamp.ifr_name));
++ memcpy(hwtstamp.ifr_name, interface, if_len + 1);
+ hwtstamp.ifr_data = (void *)&hwconfig;
+ memset(&hwconfig, 0, sizeof(hwconfig));
+ hwconfig.tx_type =
+diff --git a/tools/testing/selftests/ntb/ntb_test.sh b/tools/testing/selftests/ntb/ntb_test.sh
+index 9c60337317c6..020137b61407 100755
+--- a/tools/testing/selftests/ntb/ntb_test.sh
++++ b/tools/testing/selftests/ntb/ntb_test.sh
+@@ -241,7 +241,7 @@ function get_files_count()
+ split_remote $LOC
+
+ if [[ "$REMOTE" == "" ]]; then
+- echo $(ls -1 "$LOC"/${NAME}* 2>/dev/null | wc -l)
++ echo $(ls -1 "$VPATH"/${NAME}* 2>/dev/null | wc -l)
+ else
+ echo $(ssh "$REMOTE" "ls -1 \"$VPATH\"/${NAME}* | \
+ wc -l" 2> /dev/null)
+diff --git a/tools/testing/selftests/timens/clock_nanosleep.c b/tools/testing/selftests/timens/clock_nanosleep.c
+index 8e7b7c72ef65..72d41b955fb2 100644
+--- a/tools/testing/selftests/timens/clock_nanosleep.c
++++ b/tools/testing/selftests/timens/clock_nanosleep.c
+@@ -119,7 +119,7 @@ int main(int argc, char *argv[])
+
+ ksft_set_plan(4);
+
+- check_config_posix_timers();
++ check_supported_timers();
+
+ if (unshare_timens())
+ return 1;
+diff --git a/tools/testing/selftests/timens/timens.c b/tools/testing/selftests/timens/timens.c
+index 098be7c83be3..52b6a1185f52 100644
+--- a/tools/testing/selftests/timens/timens.c
++++ b/tools/testing/selftests/timens/timens.c
+@@ -155,7 +155,7 @@ int main(int argc, char *argv[])
+
+ nscheck();
+
+- check_config_posix_timers();
++ check_supported_timers();
+
+ ksft_set_plan(ARRAY_SIZE(clocks) * 2);
+
+diff --git a/tools/testing/selftests/timens/timens.h b/tools/testing/selftests/timens/timens.h
+index e09e7e39bc52..d4fc52d47146 100644
+--- a/tools/testing/selftests/timens/timens.h
++++ b/tools/testing/selftests/timens/timens.h
+@@ -14,15 +14,26 @@
+ #endif
+
+ static int config_posix_timers = true;
++static int config_alarm_timers = true;
+
+-static inline void check_config_posix_timers(void)
++static inline void check_supported_timers(void)
+ {
++ struct timespec ts;
++
+ if (timer_create(-1, 0, 0) == -1 && errno == ENOSYS)
+ config_posix_timers = false;
++
++ if (clock_gettime(CLOCK_BOOTTIME_ALARM, &ts) == -1 && errno == EINVAL)
++ config_alarm_timers = false;
+ }
+
+ static inline bool check_skip(int clockid)
+ {
++ if (!config_alarm_timers && clockid == CLOCK_BOOTTIME_ALARM) {
++ ksft_test_result_skip("CLOCK_BOOTTIME_ALARM isn't supported\n");
++ return true;
++ }
++
+ if (config_posix_timers)
+ return false;
+
+diff --git a/tools/testing/selftests/timens/timer.c b/tools/testing/selftests/timens/timer.c
+index 96dba11ebe44..5e7f0051bd7b 100644
+--- a/tools/testing/selftests/timens/timer.c
++++ b/tools/testing/selftests/timens/timer.c
+@@ -22,6 +22,9 @@ int run_test(int clockid, struct timespec now)
+ timer_t fd;
+ int i;
+
++ if (check_skip(clockid))
++ return 0;
++
+ for (i = 0; i < 2; i++) {
+ struct sigevent sevp = {.sigev_notify = SIGEV_NONE};
+ int flags = 0;
+@@ -74,6 +77,8 @@ int main(int argc, char *argv[])
+
+ nscheck();
+
++ check_supported_timers();
++
+ ksft_set_plan(3);
+
+ clock_gettime(CLOCK_MONOTONIC, &mtime_now);
+diff --git a/tools/testing/selftests/timens/timerfd.c b/tools/testing/selftests/timens/timerfd.c
+index eff1ec5ff215..9edd43d6b2c1 100644
+--- a/tools/testing/selftests/timens/timerfd.c
++++ b/tools/testing/selftests/timens/timerfd.c
+@@ -28,6 +28,9 @@ int run_test(int clockid, struct timespec now)
+ long long elapsed;
+ int fd, i;
+
++ if (check_skip(clockid))
++ return 0;
++
+ if (tclock_gettime(clockid, &now))
+ return pr_perror("clock_gettime(%d)", clockid);
+
+@@ -81,6 +84,8 @@ int main(int argc, char *argv[])
+
+ nscheck();
+
++ check_supported_timers();
++
+ ksft_set_plan(3);
+
+ clock_gettime(CLOCK_MONOTONIC, &mtime_now);
+diff --git a/tools/testing/selftests/x86/protection_keys.c b/tools/testing/selftests/x86/protection_keys.c
+index 480995bceefa..47191af46617 100644
+--- a/tools/testing/selftests/x86/protection_keys.c
++++ b/tools/testing/selftests/x86/protection_keys.c
+@@ -24,6 +24,7 @@
+ #define _GNU_SOURCE
+ #include <errno.h>
+ #include <linux/futex.h>
++#include <time.h>
+ #include <sys/time.h>
+ #include <sys/syscall.h>
+ #include <string.h>
+@@ -612,10 +613,10 @@ int alloc_random_pkey(void)
+ int nr_alloced = 0;
+ int random_index;
+ memset(alloced_pkeys, 0, sizeof(alloced_pkeys));
++ srand((unsigned int)time(NULL));
+
+ /* allocate every possible key and make a note of which ones we got */
+ max_nr_pkey_allocs = NR_PKEYS;
+- max_nr_pkey_allocs = 1;
+ for (i = 0; i < max_nr_pkey_allocs; i++) {
+ int new_pkey = alloc_pkey();
+ if (new_pkey < 0)
* [gentoo-commits] proj/linux-patches:5.7 commit in: /
@ 2020-06-29 17:32 Mike Pagano
0 siblings, 0 replies; 25+ messages in thread
From: Mike Pagano @ 2020-06-29 17:32 UTC (permalink / raw
To: gentoo-commits
commit: d4662d3ac3d6470f051a0923b24deeced928878a
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Mon Jun 29 17:31:54 2020 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Mon Jun 29 17:31:54 2020 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=d4662d3a
Update CPU optimization patch for gcc 9.1+
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
5012_enable-cpu-optimizations-for-gcc91.patch | 51 ++++++++++++++++-----------
1 file changed, 30 insertions(+), 21 deletions(-)
diff --git a/5012_enable-cpu-optimizations-for-gcc91.patch b/5012_enable-cpu-optimizations-for-gcc91.patch
index 049ec12..2f16153 100644
--- a/5012_enable-cpu-optimizations-for-gcc91.patch
+++ b/5012_enable-cpu-optimizations-for-gcc91.patch
@@ -42,14 +42,18 @@ It also offers to compile passing the 'native' option which, "selects the CPU
to generate code for at compilation time by determining the processor type of
the compiling machine. Using -march=native enables all instruction subsets
supported by the local machine and will produce code optimized for the local
-machine under the constraints of the selected instruction set."[3]
+machine under the constraints of the selected instruction set."[2]
+
+Do NOT try using the 'native' option on AMD Piledriver, Steamroller, or
+Excavator CPUs (-march=bdver{2,3,4} flag). The build will error out due to the
+kernel's objtool issue with these.[3a,b]
MINOR NOTES
This patch also changes 'atom' to 'bonnell' in accordance with the gcc v4.9
changes. Note that upstream is using the deprecated 'match=atom' flags when I
-believe it should use the newer 'march=bonnell' flag for atom processors.[2]
+believe it should use the newer 'march=bonnell' flag for atom processors.[4]
-It is not recommended to compile on Atom-CPUs with the 'native' option.[4] The
+It is not recommended to compile on Atom-CPUs with the 'native' option.[5] The
recommendation is to use the 'atom' option instead.
BENEFITS
@@ -61,21 +65,23 @@ https://github.com/graysky2/kernel_gcc_patch
REQUIREMENTS
linux version >=5.7
-gcc version >=9.1
+gcc version >=9.1 and <10
ACKNOWLEDGMENTS
-This patch builds on the seminal work by Jeroen.[5]
+This patch builds on the seminal work by Jeroen.[6]
REFERENCES
-1. https://gcc.gnu.org/gcc-4.9/changes.html
-2. https://bugzilla.kernel.org/show_bug.cgi?id=77461
-3. https://gcc.gnu.org/onlinedocs/gcc/x86-Options.html
-4. https://github.com/graysky2/kernel_gcc_patch/issues/15
-5. http://www.linuxforge.net/docs/linux/linux-gcc.php
+1. https://gcc.gnu.org/gcc-4.9/changes.html
+2. https://gcc.gnu.org/onlinedocs/gcc/x86-Options.html
+3a. https://gcc.gnu.org/bugzilla/show_bug.cgi?id=95671#c11
+3b. https://github.com/graysky2/kernel_gcc_patch/issues/55
+4. https://bugzilla.kernel.org/show_bug.cgi?id=77461
+5. https://github.com/graysky2/kernel_gcc_patch/issues/15
+6. http://www.linuxforge.net/docs/linux/linux-gcc.php
---- a/arch/x86/include/asm/vermagic.h 2019-12-15 18:16:08.000000000 -0500
-+++ b/arch/x86/include/asm/vermagic.h 2019-12-17 14:03:55.968871551 -0500
-@@ -27,6 +27,36 @@ struct mod_arch_specific {
+--- a/arch/x86/include/asm/vermagic.h 2020-06-10 14:21:45.000000000 -0400
++++ b/arch/x86/include/asm/vermagic.h 2020-06-15 10:44:10.437477053 -0400
+@@ -17,6 +17,36 @@
#define MODULE_PROC_FAMILY "586MMX "
#elif defined CONFIG_MCORE2
#define MODULE_PROC_FAMILY "CORE2 "
@@ -112,7 +118,7 @@ REFERENCES
#elif defined CONFIG_MATOM
#define MODULE_PROC_FAMILY "ATOM "
#elif defined CONFIG_M686
-@@ -45,6 +75,28 @@ struct mod_arch_specific {
+@@ -35,6 +65,28 @@
#define MODULE_PROC_FAMILY "K7 "
#elif defined CONFIG_MK8
#define MODULE_PROC_FAMILY "K8 "
@@ -141,8 +147,8 @@ REFERENCES
#elif defined CONFIG_MELAN
#define MODULE_PROC_FAMILY "ELAN "
#elif defined CONFIG_MCRUSOE
---- a/arch/x86/Kconfig.cpu 2019-12-15 18:16:08.000000000 -0500
-+++ b/arch/x86/Kconfig.cpu 2019-12-17 14:09:03.805642284 -0500
+--- a/arch/x86/Kconfig.cpu 2020-06-10 14:21:45.000000000 -0400
++++ b/arch/x86/Kconfig.cpu 2020-06-15 10:44:10.437477053 -0400
@@ -123,6 +123,7 @@ config MPENTIUMM
config MPENTIUM4
bool "Pentium-4/Celeron(P4-based)/Pentium-4 M/older Xeon"
@@ -524,9 +530,9 @@ REFERENCES
config X86_MINIMUM_CPU_FAMILY
int
---- a/arch/x86/Makefile 2019-12-15 18:16:08.000000000 -0500
-+++ b/arch/x86/Makefile 2019-12-17 14:03:55.972204960 -0500
-@@ -119,13 +119,53 @@ else
+--- a/arch/x86/Makefile 2020-06-10 14:21:45.000000000 -0400
++++ b/arch/x86/Makefile 2020-06-15 10:44:35.608035680 -0400
+@@ -119,13 +119,56 @@ else
KBUILD_CFLAGS += $(call cc-option,-mskip-rax-setup)
# FIXME - should be integrated in Makefile.cpu (Makefile_32.cpu)
@@ -539,8 +545,11 @@ REFERENCES
+ cflags-$(CONFIG_MJAGUAR) += $(call cc-option,-march=btver2)
+ cflags-$(CONFIG_MBULLDOZER) += $(call cc-option,-march=bdver1)
+ cflags-$(CONFIG_MPILEDRIVER) += $(call cc-option,-march=bdver2)
++ cflags-$(CONFIG_MPILEDRIVER) += $(call cc-option,-mno-tbm)
+ cflags-$(CONFIG_MSTEAMROLLER) += $(call cc-option,-march=bdver3)
++ cflags-$(CONFIG_MSTEAMROLLER) += $(call cc-option,-mno-tbm)
+ cflags-$(CONFIG_MEXCAVATOR) += $(call cc-option,-march=bdver4)
++ cflags-$(CONFIG_MEXCAVATOR) += $(call cc-option,-mno-tbm)
+ cflags-$(CONFIG_MZEN) += $(call cc-option,-march=znver1)
+ cflags-$(CONFIG_MZEN2) += $(call cc-option,-march=znver2)
cflags-$(CONFIG_MPSC) += $(call cc-option,-march=nocona)
@@ -583,8 +592,8 @@ REFERENCES
cflags-$(CONFIG_GENERIC_CPU) += $(call cc-option,-mtune=generic)
KBUILD_CFLAGS += $(cflags-y)
---- a/arch/x86/Makefile_32.cpu 2019-12-15 18:16:08.000000000 -0500
-+++ b/arch/x86/Makefile_32.cpu 2019-12-17 14:03:55.972204960 -0500
+--- a/arch/x86/Makefile_32.cpu 2020-06-10 14:21:45.000000000 -0400
++++ b/arch/x86/Makefile_32.cpu 2020-06-15 10:44:10.437477053 -0400
@@ -24,7 +24,19 @@ cflags-$(CONFIG_MK6) += -march=k6
# Please note, that patches that add -march=athlon-xp and friends are pointless.
# They make zero difference whatsosever to performance at this time.
* [gentoo-commits] proj/linux-patches:5.7 commit in: /
@ 2020-06-29 17:37 Mike Pagano
0 siblings, 0 replies; 25+ messages in thread
From: Mike Pagano @ 2020-06-29 17:37 UTC (permalink / raw
To: gentoo-commits
commit: 73182aa4280c7a5906ee99d67cb3104e799d5b97
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Mon Jun 29 17:36:48 2020 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Mon Jun 29 17:36:48 2020 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=73182aa4
Kernel patch enables gcc >= v10.1 optimizations for additional CPUs
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
5013_enable-cpu-optimizations-for-gcc10.patch | 671 ++++++++++++++++++++++++++
2 files changed, 675 insertions(+)
diff --git a/0000_README b/0000_README
index 1d59c5b..916b8cc 100644
--- a/0000_README
+++ b/0000_README
@@ -134,3 +134,7 @@ Desc: .gitignore: add ZSTD-compressed files
Patch: 5012_enable-cpu-optimizations-for-gcc91.patch
From: https://github.com/graysky2/kernel_gcc_patch/
Desc: Kernel patch enables gcc >= v9.1 optimizations for additional CPUs.
+
+Patch: 5013_enable-cpu-optimizations-for-gcc10.patch
+From: https://github.com/graysky2/kernel_gcc_patch/
+Desc: Kernel patch enables gcc = v10.1+ optimizations for additional CPUs.
diff --git a/5013_enable-cpu-optimizations-for-gcc10.patch b/5013_enable-cpu-optimizations-for-gcc10.patch
new file mode 100644
index 0000000..13c251b
--- /dev/null
+++ b/5013_enable-cpu-optimizations-for-gcc10.patch
@@ -0,0 +1,671 @@
+WARNING
+This patch works with gcc versions 10.1+ and with kernel version 5.7+ and should
+NOT be applied when compiling on older versions of gcc due to key name changes
+of the march flags introduced with the version 4.9 release of gcc.[1]
+
+Use the older version of this patch hosted on the same github for older
+versions of gcc.
+
+FEATURES
+This patch adds additional CPU options to the Linux kernel accessible under:
+ Processor type and features --->
+ Processor family --->
+
+The expanded microarchitectures include:
+* AMD Improved K8-family
+* AMD K10-family
+* AMD Family 10h (Barcelona)
+* AMD Family 14h (Bobcat)
+* AMD Family 16h (Jaguar)
+* AMD Family 15h (Bulldozer)
+* AMD Family 15h (Piledriver)
+* AMD Family 15h (Steamroller)
+* AMD Family 15h (Excavator)
+* AMD Family 17h (Zen)
+* AMD Family 17h (Zen 2)
+* Intel Silvermont low-power processors
+* Intel Goldmont low-power processors (Apollo Lake and Denverton)
+* Intel Goldmont Plus low-power processors (Gemini Lake)
+* Intel 1st Gen Core i3/i5/i7 (Nehalem)
+* Intel 1.5 Gen Core i3/i5/i7 (Westmere)
+* Intel 2nd Gen Core i3/i5/i7 (Sandybridge)
+* Intel 3rd Gen Core i3/i5/i7 (Ivybridge)
+* Intel 4th Gen Core i3/i5/i7 (Haswell)
+* Intel 5th Gen Core i3/i5/i7 (Broadwell)
+* Intel 6th Gen Core i3/i5/i7 (Skylake)
+* Intel 6th Gen Core i7/i9 (Skylake X)
+* Intel 8th Gen Core i3/i5/i7 (Cannon Lake)
+* Intel 10th Gen Core i7/i9 (Ice Lake)
+* Intel Xeon (Cascade Lake)
+* Intel Xeon (Cooper Lake)
+* Intel 3rd Gen 10nm++ i3/i5/i7/i9-family (Tiger Lake)
+
+It also offers to compile passing the 'native' option which, "selects the CPU
+to generate code for at compilation time by determining the processor type of
+the compiling machine. Using -march=native enables all instruction subsets
+supported by the local machine and will produce code optimized for the local
+machine under the constraints of the selected instruction set."[2]
+
+Do NOT try using the 'native' option on AMD Piledriver, Steamroller, or
+Excavator CPUs (-march=bdver{2,3,4} flag). The build will error out due to the
+kernel's objtool issue with these.[3a,b]
+
+MINOR NOTES
+This patch also changes 'atom' to 'bonnell' in accordance with the gcc v4.9
+changes. Note that upstream is using the deprecated 'match=atom' flags when I
+believe it should use the newer 'march=bonnell' flag for atom processors.[4]
+
+It is not recommended to compile on Atom-CPUs with the 'native' option.[5] The
+recommendation is to use the 'atom' option instead.
+
+BENEFITS
+Small but real speed increases are measurable using a make endpoint comparing
+a generic kernel to one built with one of the respective microarchs.
+
+See the following experimental evidence supporting this statement:
+https://github.com/graysky2/kernel_gcc_patch
+
+REQUIREMENTS
+linux version >=5.7
+gcc version >=10.1
+
+ACKNOWLEDGMENTS
+This patch builds on the seminal work by Jeroen.[6]
+
+REFERENCES
+1. https://gcc.gnu.org/gcc-4.9/changes.html
+2. https://gcc.gnu.org/onlinedocs/gcc/x86-Options.html
+3a. https://gcc.gnu.org/bugzilla/show_bug.cgi?id=95671#c11
+3b. https://github.com/graysky2/kernel_gcc_patch/issues/55
+4. https://bugzilla.kernel.org/show_bug.cgi?id=77461
+5. https://github.com/graysky2/kernel_gcc_patch/issues/15
+6. http://www.linuxforge.net/docs/linux/linux-gcc.php
+
+--- a/arch/x86/include/asm/vermagic.h 2020-06-10 14:21:45.000000000 -0400
++++ b/arch/x86/include/asm/vermagic.h 2020-06-15 10:12:15.577746073 -0400
+@@ -17,6 +17,40 @@
+ #define MODULE_PROC_FAMILY "586MMX "
+ #elif defined CONFIG_MCORE2
+ #define MODULE_PROC_FAMILY "CORE2 "
++#elif defined CONFIG_MNATIVE
++#define MODULE_PROC_FAMILY "NATIVE "
++#elif defined CONFIG_MNEHALEM
++#define MODULE_PROC_FAMILY "NEHALEM "
++#elif defined CONFIG_MWESTMERE
++#define MODULE_PROC_FAMILY "WESTMERE "
++#elif defined CONFIG_MSILVERMONT
++#define MODULE_PROC_FAMILY "SILVERMONT "
++#elif defined CONFIG_MGOLDMONT
++#define MODULE_PROC_FAMILY "GOLDMONT "
++#elif defined CONFIG_MGOLDMONTPLUS
++#define MODULE_PROC_FAMILY "GOLDMONTPLUS "
++#elif defined CONFIG_MSANDYBRIDGE
++#define MODULE_PROC_FAMILY "SANDYBRIDGE "
++#elif defined CONFIG_MIVYBRIDGE
++#define MODULE_PROC_FAMILY "IVYBRIDGE "
++#elif defined CONFIG_MHASWELL
++#define MODULE_PROC_FAMILY "HASWELL "
++#elif defined CONFIG_MBROADWELL
++#define MODULE_PROC_FAMILY "BROADWELL "
++#elif defined CONFIG_MSKYLAKE
++#define MODULE_PROC_FAMILY "SKYLAKE "
++#elif defined CONFIG_MSKYLAKEX
++#define MODULE_PROC_FAMILY "SKYLAKEX "
++#elif defined CONFIG_MCANNONLAKE
++#define MODULE_PROC_FAMILY "CANNONLAKE "
++#elif defined CONFIG_MICELAKE
++#define MODULE_PROC_FAMILY "ICELAKE "
++#elif defined CONFIG_MCASCADELAKE
++#define MODULE_PROC_FAMILY "CASCADELAKE "
++#elif defined CONFIG_MCOOPERLAKE
++#define MODULE_PROC_FAMILY "COOPERLAKE "
++#elif defined CONFIG_MTIGERLAKE
++#define MODULE_PROC_FAMILY "TIGERLAKE "
+ #elif defined CONFIG_MATOM
+ #define MODULE_PROC_FAMILY "ATOM "
+ #elif defined CONFIG_M686
+@@ -35,6 +69,28 @@
+ #define MODULE_PROC_FAMILY "K7 "
+ #elif defined CONFIG_MK8
+ #define MODULE_PROC_FAMILY "K8 "
++#elif defined CONFIG_MK8SSE3
++#define MODULE_PROC_FAMILY "K8SSE3 "
++#elif defined CONFIG_MK10
++#define MODULE_PROC_FAMILY "K10 "
++#elif defined CONFIG_MBARCELONA
++#define MODULE_PROC_FAMILY "BARCELONA "
++#elif defined CONFIG_MBOBCAT
++#define MODULE_PROC_FAMILY "BOBCAT "
++#elif defined CONFIG_MBULLDOZER
++#define MODULE_PROC_FAMILY "BULLDOZER "
++#elif defined CONFIG_MPILEDRIVER
++#define MODULE_PROC_FAMILY "PILEDRIVER "
++#elif defined CONFIG_MSTEAMROLLER
++#define MODULE_PROC_FAMILY "STEAMROLLER "
++#elif defined CONFIG_MJAGUAR
++#define MODULE_PROC_FAMILY "JAGUAR "
++#elif defined CONFIG_MEXCAVATOR
++#define MODULE_PROC_FAMILY "EXCAVATOR "
++#elif defined CONFIG_MZEN
++#define MODULE_PROC_FAMILY "ZEN "
++#elif defined CONFIG_MZEN2
++#define MODULE_PROC_FAMILY "ZEN2 "
+ #elif defined CONFIG_MELAN
+ #define MODULE_PROC_FAMILY "ELAN "
+ #elif defined CONFIG_MCRUSOE
+--- a/arch/x86/Kconfig.cpu 2020-06-10 14:21:45.000000000 -0400
++++ b/arch/x86/Kconfig.cpu 2020-06-15 10:12:15.577746073 -0400
+@@ -123,6 +123,7 @@ config MPENTIUMM
+ config MPENTIUM4
+ bool "Pentium-4/Celeron(P4-based)/Pentium-4 M/older Xeon"
+ depends on X86_32
++ select X86_P6_NOP
+ ---help---
+ Select this for Intel Pentium 4 chips. This includes the
+ Pentium 4, Pentium D, P4-based Celeron and Xeon, and
+@@ -155,9 +156,8 @@ config MPENTIUM4
+ -Paxville
+ -Dempsey
+
+-
+ config MK6
+- bool "K6/K6-II/K6-III"
++ bool "AMD K6/K6-II/K6-III"
+ depends on X86_32
+ ---help---
+ Select this for an AMD K6-family processor. Enables use of
+@@ -165,7 +165,7 @@ config MK6
+ flags to GCC.
+
+ config MK7
+- bool "Athlon/Duron/K7"
++ bool "AMD Athlon/Duron/K7"
+ depends on X86_32
+ ---help---
+ Select this for an AMD Athlon K7-family processor. Enables use of
+@@ -173,12 +173,90 @@ config MK7
+ flags to GCC.
+
+ config MK8
+- bool "Opteron/Athlon64/Hammer/K8"
++ bool "AMD Opteron/Athlon64/Hammer/K8"
+ ---help---
+ Select this for an AMD Opteron or Athlon64 Hammer-family processor.
+ Enables use of some extended instructions, and passes appropriate
+ optimization flags to GCC.
+
++config MK8SSE3
++ bool "AMD Opteron/Athlon64/Hammer/K8 with SSE3"
++ ---help---
++ Select this for improved AMD Opteron or Athlon64 Hammer-family processors.
++ Enables use of some extended instructions, and passes appropriate
++ optimization flags to GCC.
++
++config MK10
++ bool "AMD 61xx/7x50/PhenomX3/X4/II/K10"
++ ---help---
++ Select this for an AMD 61xx Eight-Core Magny-Cours, Athlon X2 7x50,
++ Phenom X3/X4/II, Athlon II X2/X3/X4, or Turion II-family processor.
++ Enables use of some extended instructions, and passes appropriate
++ optimization flags to GCC.
++
++config MBARCELONA
++ bool "AMD Barcelona"
++ ---help---
++ Select this for AMD Family 10h Barcelona processors.
++
++ Enables -march=barcelona
++
++config MBOBCAT
++ bool "AMD Bobcat"
++ ---help---
++ Select this for AMD Family 14h Bobcat processors.
++
++ Enables -march=btver1
++
++config MJAGUAR
++ bool "AMD Jaguar"
++ ---help---
++ Select this for AMD Family 16h Jaguar processors.
++
++ Enables -march=btver2
++
++config MBULLDOZER
++ bool "AMD Bulldozer"
++ ---help---
++ Select this for AMD Family 15h Bulldozer processors.
++
++ Enables -march=bdver1
++
++config MPILEDRIVER
++ bool "AMD Piledriver"
++ ---help---
++ Select this for AMD Family 15h Piledriver processors.
++
++ Enables -march=bdver2
++
++config MSTEAMROLLER
++ bool "AMD Steamroller"
++ ---help---
++ Select this for AMD Family 15h Steamroller processors.
++
++ Enables -march=bdver3
++
++config MEXCAVATOR
++ bool "AMD Excavator"
++ ---help---
++ Select this for AMD Family 15h Excavator processors.
++
++ Enables -march=bdver4
++
++config MZEN
++ bool "AMD Zen"
++ ---help---
++ Select this for AMD Family 17h Zen processors.
++
++ Enables -march=znver1
++
++config MZEN2
++ bool "AMD Zen 2"
++ ---help---
++ Select this for AMD Family 17h Zen 2 processors.
++
++ Enables -march=znver2
++
+ config MCRUSOE
+ bool "Crusoe"
+ depends on X86_32
+@@ -260,6 +338,7 @@ config MVIAC7
+
+ config MPSC
+ bool "Intel P4 / older Netburst based Xeon"
++ select X86_P6_NOP
+ depends on X86_64
+ ---help---
+ Optimize for Intel Pentium 4, Pentium D and older Nocona/Dempsey
+@@ -269,8 +348,19 @@ config MPSC
+ using the cpu family field
+ in /proc/cpuinfo. Family 15 is an older Xeon, Family 6 a newer one.
+
++config MATOM
++ bool "Intel Atom"
++ select X86_P6_NOP
++ ---help---
++
++ Select this for the Intel Atom platform. Intel Atom CPUs have an
++ in-order pipelining architecture and thus can benefit from
++ accordingly optimized code. Use a recent GCC with specific Atom
++ support in order to fully benefit from selecting this option.
++
+ config MCORE2
+- bool "Core 2/newer Xeon"
++ bool "Intel Core 2"
++ select X86_P6_NOP
+ ---help---
+
+ Select this for Intel Core 2 and newer Core 2 Xeons (Xeon 51xx and
+@@ -278,14 +368,151 @@ config MCORE2
+ family in /proc/cpuinfo. Newer ones have 6 and older ones 15
+ (not a typo)
+
+-config MATOM
+- bool "Intel Atom"
++ Enables -march=core2
++
++config MNEHALEM
++ bool "Intel Nehalem"
++ select X86_P6_NOP
+ ---help---
+
+- Select this for the Intel Atom platform. Intel Atom CPUs have an
+- in-order pipelining architecture and thus can benefit from
+- accordingly optimized code. Use a recent GCC with specific Atom
+- support in order to fully benefit from selecting this option.
++ Select this for 1st Gen Core processors in the Nehalem family.
++
++ Enables -march=nehalem
++
++config MWESTMERE
++ bool "Intel Westmere"
++ select X86_P6_NOP
++ ---help---
++
++ Select this for the Intel Westmere formerly Nehalem-C family.
++
++ Enables -march=westmere
++
++config MSILVERMONT
++ bool "Intel Silvermont"
++ select X86_P6_NOP
++ ---help---
++
++ Select this for the Intel Silvermont platform.
++
++ Enables -march=silvermont
++
++config MGOLDMONT
++ bool "Intel Goldmont"
++ select X86_P6_NOP
++ ---help---
++
++ Select this for the Intel Goldmont platform including Apollo Lake and Denverton.
++
++ Enables -march=goldmont
++
++config MGOLDMONTPLUS
++ bool "Intel Goldmont Plus"
++ select X86_P6_NOP
++ ---help---
++
++ Select this for the Intel Goldmont Plus platform including Gemini Lake.
++
++ Enables -march=goldmont-plus
++
++config MSANDYBRIDGE
++ bool "Intel Sandy Bridge"
++ select X86_P6_NOP
++ ---help---
++
++ Select this for 2nd Gen Core processors in the Sandy Bridge family.
++
++ Enables -march=sandybridge
++
++config MIVYBRIDGE
++ bool "Intel Ivy Bridge"
++ select X86_P6_NOP
++ ---help---
++
++ Select this for 3rd Gen Core processors in the Ivy Bridge family.
++
++ Enables -march=ivybridge
++
++config MHASWELL
++ bool "Intel Haswell"
++ select X86_P6_NOP
++ ---help---
++
++ Select this for 4th Gen Core processors in the Haswell family.
++
++ Enables -march=haswell
++
++config MBROADWELL
++ bool "Intel Broadwell"
++ select X86_P6_NOP
++ ---help---
++
++ Select this for 5th Gen Core processors in the Broadwell family.
++
++ Enables -march=broadwell
++
++config MSKYLAKE
++ bool "Intel Skylake"
++ select X86_P6_NOP
++ ---help---
++
++ Select this for 6th Gen Core processors in the Skylake family.
++
++ Enables -march=skylake
++
++config MSKYLAKEX
++ bool "Intel Skylake X"
++ select X86_P6_NOP
++ ---help---
++
++ Select this for 6th Gen Core processors in the Skylake X family.
++
++ Enables -march=skylake-avx512
++
++config MCANNONLAKE
++ bool "Intel Cannon Lake"
++ select X86_P6_NOP
++ ---help---
++
++ Select this for 8th Gen Core processors
++
++ Enables -march=cannonlake
++
++config MICELAKE
++ bool "Intel Ice Lake"
++ select X86_P6_NOP
++ ---help---
++
++ Select this for 10th Gen Core processors in the Ice Lake family.
++
++ Enables -march=icelake-client
++
++config MCASCADELAKE
++ bool "Intel Cascade Lake"
++ select X86_P6_NOP
++ ---help---
++
++ Select this for Xeon processors in the Cascade Lake family.
++
++ Enables -march=cascadelake
++
++config MCOOPERLAKE
++ bool "Intel Cooper Lake"
++ select X86_P6_NOP
++ ---help---
++
++ Select this for Xeon processors in the Cooper Lake family.
++
++ Enables -march=cooperlake
++
++config MTIGERLAKE
++ bool "Intel Tiger Lake"
++ select X86_P6_NOP
++ ---help---
++
++ Select this for third-generation 10 nm process processors in the Tiger Lake family.
++
++ Enables -march=tigerlake
+
+ config GENERIC_CPU
+ bool "Generic-x86-64"
+@@ -294,6 +521,19 @@ config GENERIC_CPU
+ Generic x86-64 CPU.
+ Run equally well on all x86-64 CPUs.
+
++config MNATIVE
++ bool "Native optimizations autodetected by GCC"
++ ---help---
++
++ GCC 4.2 and above support -march=native, which automatically detects
++ the optimum settings to use based on your processor. -march=native
++ also detects and applies additional settings beyond -march specific
++ to your CPU, (eg. -msse4). Unless you have a specific reason not to
++ (e.g. distcc cross-compiling), you should probably be using
++ -march=native rather than anything listed below.
++
++ Enables -march=native
++
+ endchoice
+
+ config X86_GENERIC
+@@ -318,7 +558,7 @@ config X86_INTERNODE_CACHE_SHIFT
+ config X86_L1_CACHE_SHIFT
+ int
+ default "7" if MPENTIUM4 || MPSC
+- default "6" if MK7 || MK8 || MPENTIUMM || MCORE2 || MATOM || MVIAC7 || X86_GENERIC || GENERIC_CPU
++ default "6" if MK7 || MK8 || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MJAGUAR || MPENTIUMM || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MNATIVE || MATOM || MVIAC7 || X86_GENERIC || GENERIC_CPU
+ default "4" if MELAN || M486SX || M486 || MGEODEGX1
+ default "5" if MWINCHIP3D || MWINCHIPC6 || MCRUSOE || MEFFICEON || MCYRIXIII || MK6 || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || M586 || MVIAC3_2 || MGEODE_LX
+
+@@ -336,35 +576,36 @@ config X86_ALIGNMENT_16
+
+ config X86_INTEL_USERCOPY
+ def_bool y
+- depends on MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M586MMX || X86_GENERIC || MK8 || MK7 || MEFFICEON || MCORE2
++ depends on MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M586MMX || X86_GENERIC || MK8 || MK8SSE3 || MK7 || MEFFICEON || MCORE2 || MK10 || MBARCELONA || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MNATIVE
+
+ config X86_USE_PPRO_CHECKSUM
+ def_bool y
+- depends on MWINCHIP3D || MWINCHIPC6 || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MK8 || MVIAC3_2 || MVIAC7 || MEFFICEON || MGEODE_LX || MCORE2 || MATOM
++ depends on MWINCHIP3D || MWINCHIPC6 || MCYRIXIII || MK7 || MK6 || MK10 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MK8 || MK8SSE3 || MVIAC3_2 || MVIAC7 || MEFFICEON || MGEODE_LX || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MATOM || MNATIVE
+
+ config X86_USE_3DNOW
+ def_bool y
+ depends on (MCYRIXIII || MK7 || MGEODE_LX) && !UML
+
+-#
+-# P6_NOPs are a relatively minor optimization that require a family >=
+-# 6 processor, except that it is broken on certain VIA chips.
+-# Furthermore, AMD chips prefer a totally different sequence of NOPs
+-# (which work on all CPUs). In addition, it looks like Virtual PC
+-# does not understand them.
+-#
+-# As a result, disallow these if we're not compiling for X86_64 (these
+-# NOPs do work on all x86-64 capable chips); the list of processors in
+-# the right-hand clause are the cores that benefit from this optimization.
+-#
+ config X86_P6_NOP
+- def_bool y
+- depends on X86_64
+- depends on (MCORE2 || MPENTIUM4 || MPSC)
++ default n
++ bool "Support for P6_NOPs on Intel chips"
++ depends on (MCORE2 || MPENTIUM4 || MPSC || MATOM || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MNATIVE)
++ ---help---
++ P6_NOPs are a relatively minor optimization that require a family >=
++ 6 processor, except that it is broken on certain VIA chips.
++ Furthermore, AMD chips prefer a totally different sequence of NOPs
++ (which work on all CPUs). In addition, it looks like Virtual PC
++ does not understand them.
++
++ As a result, disallow these if we're not compiling for X86_64 (these
++ NOPs do work on all x86-64 capable chips); the list of processors in
++ the right-hand clause are the cores that benefit from this optimization.
++
++ Say Y if you have Intel CPU newer than Pentium Pro, N otherwise.
+
+ config X86_TSC
+ def_bool y
+- depends on (MWINCHIP3D || MCRUSOE || MEFFICEON || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || MK8 || MVIAC3_2 || MVIAC7 || MGEODEGX1 || MGEODE_LX || MCORE2 || MATOM) || X86_64
++ depends on (MWINCHIP3D || MCRUSOE || MEFFICEON || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || MK8 || MK8SSE3 || MVIAC3_2 || MVIAC7 || MGEODEGX1 || MGEODE_LX || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MNATIVE || MATOM) || X86_64
+
+ config X86_CMPXCHG64
+ def_bool y
+@@ -374,7 +615,7 @@ config X86_CMPXCHG64
+ # generates cmov.
+ config X86_CMOV
+ def_bool y
+- depends on (MK8 || MK7 || MCORE2 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MCRUSOE || MEFFICEON || X86_64 || MATOM || MGEODE_LX)
++ depends on (MK8 || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MJAGUAR || MK7 || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MCRUSOE || MEFFICEON || X86_64 || MNATIVE || MATOM || MGEODE_LX)
+
+ config X86_MINIMUM_CPU_FAMILY
+ int
+--- a/arch/x86/Makefile 2020-06-10 14:21:45.000000000 -0400
++++ b/arch/x86/Makefile 2020-06-15 10:16:01.605959969 -0400
+@@ -119,13 +119,60 @@ else
+ KBUILD_CFLAGS += $(call cc-option,-mskip-rax-setup)
+
+ # FIXME - should be integrated in Makefile.cpu (Makefile_32.cpu)
++ cflags-$(CONFIG_MNATIVE) += $(call cc-option,-march=native)
+ cflags-$(CONFIG_MK8) += $(call cc-option,-march=k8)
++ cflags-$(CONFIG_MK8SSE3) += $(call cc-option,-march=k8-sse3,-mtune=k8)
++ cflags-$(CONFIG_MK10) += $(call cc-option,-march=amdfam10)
++ cflags-$(CONFIG_MBARCELONA) += $(call cc-option,-march=barcelona)
++ cflags-$(CONFIG_MBOBCAT) += $(call cc-option,-march=btver1)
++ cflags-$(CONFIG_MJAGUAR) += $(call cc-option,-march=btver2)
++ cflags-$(CONFIG_MBULLDOZER) += $(call cc-option,-march=bdver1)
++ cflags-$(CONFIG_MPILEDRIVER) += $(call cc-option,-march=bdver2)
++ cflags-$(CONFIG_MPILEDRIVER) += $(call cc-option,-mno-tbm)
++ cflags-$(CONFIG_MSTEAMROLLER) += $(call cc-option,-march=bdver3)
++ cflags-$(CONFIG_MSTEAMROLLER) += $(call cc-option,-mno-tbm)
++ cflags-$(CONFIG_MEXCAVATOR) += $(call cc-option,-march=bdver4)
++ cflags-$(CONFIG_MEXCAVATOR) += $(call cc-option,-mno-tbm)
++ cflags-$(CONFIG_MZEN) += $(call cc-option,-march=znver1)
++ cflags-$(CONFIG_MZEN2) += $(call cc-option,-march=znver2)
+ cflags-$(CONFIG_MPSC) += $(call cc-option,-march=nocona)
+
+ cflags-$(CONFIG_MCORE2) += \
+- $(call cc-option,-march=core2,$(call cc-option,-mtune=generic))
+- cflags-$(CONFIG_MATOM) += $(call cc-option,-march=atom) \
+- $(call cc-option,-mtune=atom,$(call cc-option,-mtune=generic))
++ $(call cc-option,-march=core2,$(call cc-option,-mtune=core2))
++ cflags-$(CONFIG_MNEHALEM) += \
++ $(call cc-option,-march=nehalem,$(call cc-option,-mtune=nehalem))
++ cflags-$(CONFIG_MWESTMERE) += \
++ $(call cc-option,-march=westmere,$(call cc-option,-mtune=westmere))
++ cflags-$(CONFIG_MSILVERMONT) += \
++ $(call cc-option,-march=silvermont,$(call cc-option,-mtune=silvermont))
++ cflags-$(CONFIG_MGOLDMONT) += \
++ $(call cc-option,-march=goldmont,$(call cc-option,-mtune=goldmont))
++ cflags-$(CONFIG_MGOLDMONTPLUS) += \
++ $(call cc-option,-march=goldmont-plus,$(call cc-option,-mtune=goldmont-plus))
++ cflags-$(CONFIG_MSANDYBRIDGE) += \
++ $(call cc-option,-march=sandybridge,$(call cc-option,-mtune=sandybridge))
++ cflags-$(CONFIG_MIVYBRIDGE) += \
++ $(call cc-option,-march=ivybridge,$(call cc-option,-mtune=ivybridge))
++ cflags-$(CONFIG_MHASWELL) += \
++ $(call cc-option,-march=haswell,$(call cc-option,-mtune=haswell))
++ cflags-$(CONFIG_MBROADWELL) += \
++ $(call cc-option,-march=broadwell,$(call cc-option,-mtune=broadwell))
++ cflags-$(CONFIG_MSKYLAKE) += \
++ $(call cc-option,-march=skylake,$(call cc-option,-mtune=skylake))
++ cflags-$(CONFIG_MSKYLAKEX) += \
++ $(call cc-option,-march=skylake-avx512,$(call cc-option,-mtune=skylake-avx512))
++ cflags-$(CONFIG_MCANNONLAKE) += \
++ $(call cc-option,-march=cannonlake,$(call cc-option,-mtune=cannonlake))
++ cflags-$(CONFIG_MICELAKE) += \
++ $(call cc-option,-march=icelake-client,$(call cc-option,-mtune=icelake-client))
++ cflags-$(CONFIG_MCASCADELAKE) += \
++ $(call cc-option,-march=cascadelake,$(call cc-option,-mtune=cascadelake))
++ cflags-$(CONFIG_MCOOPERLAKE) += \
++ $(call cc-option,-march=cooperlake,$(call cc-option,-mtune=cooperlake))
++ cflags-$(CONFIG_MTIGERLAKE) += \
++ $(call cc-option,-march=tigerlake,$(call cc-option,-mtune=tigerlake))
++ cflags-$(CONFIG_MATOM) += $(call cc-option,-march=bonnell) \
++ $(call cc-option,-mtune=bonnell,$(call cc-option,-mtune=generic))
+ cflags-$(CONFIG_GENERIC_CPU) += $(call cc-option,-mtune=generic)
+ KBUILD_CFLAGS += $(cflags-y)
+
+--- a/arch/x86/Makefile_32.cpu 2020-06-10 14:21:45.000000000 -0400
++++ b/arch/x86/Makefile_32.cpu 2020-06-15 10:12:15.577746073 -0400
+@@ -24,7 +24,19 @@ cflags-$(CONFIG_MK6) += -march=k6
+ # Please note, that patches that add -march=athlon-xp and friends are pointless.
+ # They make zero difference whatsosever to performance at this time.
+ cflags-$(CONFIG_MK7) += -march=athlon
++cflags-$(CONFIG_MNATIVE) += $(call cc-option,-march=native)
+ cflags-$(CONFIG_MK8) += $(call cc-option,-march=k8,-march=athlon)
++cflags-$(CONFIG_MK8SSE3) += $(call cc-option,-march=k8-sse3,-march=athlon)
++cflags-$(CONFIG_MK10) += $(call cc-option,-march=amdfam10,-march=athlon)
++cflags-$(CONFIG_MBARCELONA) += $(call cc-option,-march=barcelona,-march=athlon)
++cflags-$(CONFIG_MBOBCAT) += $(call cc-option,-march=btver1,-march=athlon)
++cflags-$(CONFIG_MJAGUAR) += $(call cc-option,-march=btver2,-march=athlon)
++cflags-$(CONFIG_MBULLDOZER) += $(call cc-option,-march=bdver1,-march=athlon)
++cflags-$(CONFIG_MPILEDRIVER) += $(call cc-option,-march=bdver2,-march=athlon)
++cflags-$(CONFIG_MSTEAMROLLER) += $(call cc-option,-march=bdver3,-march=athlon)
++cflags-$(CONFIG_MEXCAVATOR) += $(call cc-option,-march=bdver4,-march=athlon)
++cflags-$(CONFIG_MZEN) += $(call cc-option,-march=znver1,-march=athlon)
++cflags-$(CONFIG_MZEN2) += $(call cc-option,-march=znver2,-march=athlon)
+ cflags-$(CONFIG_MCRUSOE) += -march=i686 -falign-functions=0 -falign-jumps=0 -falign-loops=0
+ cflags-$(CONFIG_MEFFICEON) += -march=i686 $(call tune,pentium3) -falign-functions=0 -falign-jumps=0 -falign-loops=0
+ cflags-$(CONFIG_MWINCHIPC6) += $(call cc-option,-march=winchip-c6,-march=i586)
+@@ -33,8 +45,24 @@ cflags-$(CONFIG_MCYRIXIII) += $(call cc-
+ cflags-$(CONFIG_MVIAC3_2) += $(call cc-option,-march=c3-2,-march=i686)
+ cflags-$(CONFIG_MVIAC7) += -march=i686
+ cflags-$(CONFIG_MCORE2) += -march=i686 $(call tune,core2)
+-cflags-$(CONFIG_MATOM) += $(call cc-option,-march=atom,$(call cc-option,-march=core2,-march=i686)) \
+- $(call cc-option,-mtune=atom,$(call cc-option,-mtune=generic))
++cflags-$(CONFIG_MNEHALEM) += -march=i686 $(call tune,nehalem)
++cflags-$(CONFIG_MWESTMERE) += -march=i686 $(call tune,westmere)
++cflags-$(CONFIG_MSILVERMONT) += -march=i686 $(call tune,silvermont)
++cflags-$(CONFIG_MGOLDMONT) += -march=i686 $(call tune,goldmont)
++cflags-$(CONFIG_MGOLDMONTPLUS) += -march=i686 $(call tune,goldmont-plus)
++cflags-$(CONFIG_MSANDYBRIDGE) += -march=i686 $(call tune,sandybridge)
++cflags-$(CONFIG_MIVYBRIDGE) += -march=i686 $(call tune,ivybridge)
++cflags-$(CONFIG_MHASWELL) += -march=i686 $(call tune,haswell)
++cflags-$(CONFIG_MBROADWELL) += -march=i686 $(call tune,broadwell)
++cflags-$(CONFIG_MSKYLAKE) += -march=i686 $(call tune,skylake)
++cflags-$(CONFIG_MSKYLAKEX) += -march=i686 $(call tune,skylake-avx512)
++cflags-$(CONFIG_MCANNONLAKE) += -march=i686 $(call tune,cannonlake)
++cflags-$(CONFIG_MICELAKE) += -march=i686 $(call tune,icelake-client)
++cflags-$(CONFIG_MCASCADELAKE) += -march=i686 $(call tune,cascadelake)
++cflags-$(CONFIG_MCOOPERLAKE) += -march=i686 $(call tune,cooperlake)
++cflags-$(CONFIG_MTIGERLAKE) += -march=i686 $(call tune,tigerlake)
++cflags-$(CONFIG_MATOM) += $(call cc-option,-march=bonnell,$(call cc-option,-march=core2,-march=i686)) \
++ $(call cc-option,-mtune=bonnell,$(call cc-option,-mtune=generic))
+
+ # AMD Elan support
+ cflags-$(CONFIG_MELAN) += -march=i486
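The Kconfig and Makefile hunks above pair each new CPU symbol with a GCC `-march=` flag through Kbuild's `cc-option` helper, which emits its first argument only if the compiler accepts it and otherwise falls back to the second. A minimal shell sketch of that behavior (the `cc_option` and `march_for` functions are hypothetical stand-ins, not part of the patch; symbol and flag names are taken from the hunks above):

```shell
#!/bin/sh
# Hypothetical stand-in for Kbuild's $(call cc-option,flag,fallback):
# print the flag if the compiler accepts it, else print the fallback.
cc_option() {
  if ${CC:-gcc} -Werror "$1" -E -x c /dev/null -o /dev/null 2>/dev/null; then
    printf '%s\n' "$1"
  else
    printf '%s\n' "$2"
  fi
}

# Lookup mirroring a few of the cflags-$(CONFIG_...) lines in the patch.
march_for() {
  case "$1" in
    MNATIVE)   echo "-march=native" ;;
    MZEN2)     echo "-march=znver2" ;;
    MSKYLAKEX) echo "-march=skylake-avx512" ;;
    MICELAKE)  echo "-march=icelake-client" ;;
    *)         echo "-mtune=generic" ;;   # GENERIC_CPU fallback
  esac
}

march_for MZEN2
```

On a compiler too old to know a given `-march=` value, `cc_option "-march=znver2" "-march=znver1"` would print the fallback, which is the same graceful degradation the patch relies on for older GCC releases.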
* [gentoo-commits] proj/linux-patches:5.7 commit in: /
@ 2020-07-01 12:24 Mike Pagano
0 siblings, 0 replies; 25+ messages in thread
From: Mike Pagano @ 2020-07-01 12:24 UTC
To: gentoo-commits
commit: 16fbe10b9bcf30d335432166d62c2fb674105770
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Jul 1 12:24:19 2020 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Jul 1 12:24:19 2020 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=16fbe10b
Linux patch 5.7.7
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1006_linux-5.7.7.patch | 8627 ++++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 8631 insertions(+)
diff --git a/0000_README b/0000_README
index 916b8cc..4fdfe73 100644
--- a/0000_README
+++ b/0000_README
@@ -67,6 +67,10 @@ Patch: 1005_linux-5.7.6.patch
From: http://www.kernel.org
Desc: Linux 5.7.6
+Patch: 1006_linux-5.7.7.patch
+From: http://www.kernel.org
+Desc: Linux 5.7.7
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1006_linux-5.7.7.patch b/1006_linux-5.7.7.patch
new file mode 100644
index 0000000..ec7b58c
--- /dev/null
+++ b/1006_linux-5.7.7.patch
@@ -0,0 +1,8627 @@
+diff --git a/Makefile b/Makefile
+index f928cd1dfdc1..5a5e329d9241 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 7
+-SUBLEVEL = 6
++SUBLEVEL = 7
+ EXTRAVERSION =
+ NAME = Kleptomaniac Octopus
+
+diff --git a/arch/arm/boot/dts/am335x-pocketbeagle.dts b/arch/arm/boot/dts/am335x-pocketbeagle.dts
+index 4da719098028..f0b222201b86 100644
+--- a/arch/arm/boot/dts/am335x-pocketbeagle.dts
++++ b/arch/arm/boot/dts/am335x-pocketbeagle.dts
+@@ -88,7 +88,6 @@
+ AM33XX_PADCONF(AM335X_PIN_MMC0_DAT3, PIN_INPUT_PULLUP, MUX_MODE0)
+ AM33XX_PADCONF(AM335X_PIN_MMC0_CMD, PIN_INPUT_PULLUP, MUX_MODE0)
+ AM33XX_PADCONF(AM335X_PIN_MMC0_CLK, PIN_INPUT_PULLUP, MUX_MODE0)
+- AM33XX_PADCONF(AM335X_PIN_MCASP0_ACLKR, PIN_INPUT, MUX_MODE4) /* (B12) mcasp0_aclkr.mmc0_sdwp */
+ >;
+ };
+
+diff --git a/arch/arm/boot/dts/am33xx.dtsi b/arch/arm/boot/dts/am33xx.dtsi
+index a35f5052d76f..ed6634d34c3c 100644
+--- a/arch/arm/boot/dts/am33xx.dtsi
++++ b/arch/arm/boot/dts/am33xx.dtsi
+@@ -335,7 +335,7 @@
+ <0x47400010 0x4>;
+ reg-names = "rev", "sysc";
+ ti,sysc-mask = <(SYSC_OMAP4_FREEEMU |
+- SYSC_OMAP2_SOFTRESET)>;
++ SYSC_OMAP4_SOFTRESET)>;
+ ti,sysc-midle = <SYSC_IDLE_FORCE>,
+ <SYSC_IDLE_NO>,
+ <SYSC_IDLE_SMART>;
+@@ -347,7 +347,7 @@
+ clock-names = "fck";
+ #address-cells = <1>;
+ #size-cells = <1>;
+- ranges = <0x0 0x47400000 0x5000>;
++ ranges = <0x0 0x47400000 0x8000>;
+
+ usb0_phy: usb-phy@1300 {
+ compatible = "ti,am335x-usb-phy";
+diff --git a/arch/arm/boot/dts/bcm-nsp.dtsi b/arch/arm/boot/dts/bcm-nsp.dtsi
+index da6d70f09ef1..3175266ede64 100644
+--- a/arch/arm/boot/dts/bcm-nsp.dtsi
++++ b/arch/arm/boot/dts/bcm-nsp.dtsi
+@@ -200,7 +200,7 @@
+ status = "disabled";
+ };
+
+- dma@20000 {
++ dma: dma@20000 {
+ compatible = "arm,pl330", "arm,primecell";
+ reg = <0x20000 0x1000>;
+ interrupts = <GIC_SPI 47 IRQ_TYPE_LEVEL_HIGH>,
+@@ -215,6 +215,8 @@
+ clocks = <&iprocslow>;
+ clock-names = "apb_pclk";
+ #dma-cells = <1>;
++ dma-coherent;
++ status = "disabled";
+ };
+
+ sdio: sdhci@21000 {
+@@ -257,10 +259,10 @@
+ status = "disabled";
+ };
+
+- mailbox: mailbox@25000 {
++ mailbox: mailbox@25c00 {
+ compatible = "brcm,iproc-fa2-mbox";
+- reg = <0x25000 0x445>;
+- interrupts = <GIC_SPI 150 IRQ_TYPE_LEVEL_HIGH>;
++ reg = <0x25c00 0x400>;
++ interrupts = <GIC_SPI 151 IRQ_TYPE_LEVEL_HIGH>;
+ #mbox-cells = <1>;
+ brcm,rx-status-len = <32>;
+ brcm,use-bcm-hdr;
+diff --git a/arch/arm/boot/dts/bcm47094-luxul-xwc-2000.dts b/arch/arm/boot/dts/bcm47094-luxul-xwc-2000.dts
+index 334325390aed..29bbecd36f65 100644
+--- a/arch/arm/boot/dts/bcm47094-luxul-xwc-2000.dts
++++ b/arch/arm/boot/dts/bcm47094-luxul-xwc-2000.dts
+@@ -17,6 +17,7 @@
+ };
+
+ memory {
++ device_type = "memory";
+ reg = <0x00000000 0x08000000
+ 0x88000000 0x18000000>;
+ };
+diff --git a/arch/arm/boot/dts/bcm958522er.dts b/arch/arm/boot/dts/bcm958522er.dts
+index 8c388eb8a08f..7be4c4e628e0 100644
+--- a/arch/arm/boot/dts/bcm958522er.dts
++++ b/arch/arm/boot/dts/bcm958522er.dts
+@@ -58,6 +58,10 @@
+
+ /* USB 3 support needed to be complete */
+
++&dma {
++ status = "okay";
++};
++
+ &amac0 {
+ status = "okay";
+ };
+diff --git a/arch/arm/boot/dts/bcm958525er.dts b/arch/arm/boot/dts/bcm958525er.dts
+index c339771bb22e..e58ed7e95346 100644
+--- a/arch/arm/boot/dts/bcm958525er.dts
++++ b/arch/arm/boot/dts/bcm958525er.dts
+@@ -58,6 +58,10 @@
+
+ /* USB 3 support needed to be complete */
+
++&dma {
++ status = "okay";
++};
++
+ &amac0 {
+ status = "okay";
+ };
+diff --git a/arch/arm/boot/dts/bcm958525xmc.dts b/arch/arm/boot/dts/bcm958525xmc.dts
+index 1c72ec8288de..716da62f5788 100644
+--- a/arch/arm/boot/dts/bcm958525xmc.dts
++++ b/arch/arm/boot/dts/bcm958525xmc.dts
+@@ -58,6 +58,10 @@
+
+ /* XHCI support needed to be complete */
+
++&dma {
++ status = "okay";
++};
++
+ &amac0 {
+ status = "okay";
+ };
+diff --git a/arch/arm/boot/dts/bcm958622hr.dts b/arch/arm/boot/dts/bcm958622hr.dts
+index 96a021cebd97..a49c2fd21f4a 100644
+--- a/arch/arm/boot/dts/bcm958622hr.dts
++++ b/arch/arm/boot/dts/bcm958622hr.dts
+@@ -58,6 +58,10 @@
+
+ /* USB 3 and SLIC support needed to be complete */
+
++&dma {
++ status = "okay";
++};
++
+ &amac0 {
+ status = "okay";
+ };
+diff --git a/arch/arm/boot/dts/bcm958623hr.dts b/arch/arm/boot/dts/bcm958623hr.dts
+index b2c7f21d471e..dd6dff6452b8 100644
+--- a/arch/arm/boot/dts/bcm958623hr.dts
++++ b/arch/arm/boot/dts/bcm958623hr.dts
+@@ -58,6 +58,10 @@
+
+ /* USB 3 and SLIC support needed to be complete */
+
++&dma {
++ status = "okay";
++};
++
+ &amac0 {
+ status = "okay";
+ };
+diff --git a/arch/arm/boot/dts/bcm958625hr.dts b/arch/arm/boot/dts/bcm958625hr.dts
+index 536fb24f38bb..a71371b4065e 100644
+--- a/arch/arm/boot/dts/bcm958625hr.dts
++++ b/arch/arm/boot/dts/bcm958625hr.dts
+@@ -69,6 +69,10 @@
+ status = "okay";
+ };
+
++&dma {
++ status = "okay";
++};
++
+ &amac0 {
+ status = "okay";
+ };
+diff --git a/arch/arm/boot/dts/bcm958625k.dts b/arch/arm/boot/dts/bcm958625k.dts
+index 3fcca12d83c2..7b84b54436ed 100644
+--- a/arch/arm/boot/dts/bcm958625k.dts
++++ b/arch/arm/boot/dts/bcm958625k.dts
+@@ -48,6 +48,10 @@
+ };
+ };
+
++&dma {
++ status = "okay";
++};
++
+ &amac0 {
+ status = "okay";
+ };
+diff --git a/arch/arm/boot/dts/imx6ul-kontron-n6x1x-s.dtsi b/arch/arm/boot/dts/imx6ul-kontron-n6x1x-s.dtsi
+index f05e91841202..53a25fba34f6 100644
+--- a/arch/arm/boot/dts/imx6ul-kontron-n6x1x-s.dtsi
++++ b/arch/arm/boot/dts/imx6ul-kontron-n6x1x-s.dtsi
+@@ -232,13 +232,6 @@
+ status = "okay";
+ };
+
+-&wdog1 {
+- pinctrl-names = "default";
+- pinctrl-0 = <&pinctrl_wdog>;
+- fsl,ext-reset-output;
+- status = "okay";
+-};
+-
+ &iomuxc {
+ pinctrl-0 = <&pinctrl_reset_out &pinctrl_gpio>;
+
+@@ -409,10 +402,4 @@
+ MX6UL_PAD_NAND_DATA03__USDHC2_DATA3 0x170f9
+ >;
+ };
+-
+- pinctrl_wdog: wdoggrp {
+- fsl,pins = <
+- MX6UL_PAD_GPIO1_IO09__WDOG1_WDOG_ANY 0x30b0
+- >;
+- };
+ };
+diff --git a/arch/arm/boot/dts/imx6ul-kontron-n6x1x-som-common.dtsi b/arch/arm/boot/dts/imx6ul-kontron-n6x1x-som-common.dtsi
+index a17af4d9bfdf..61ba21a605a8 100644
+--- a/arch/arm/boot/dts/imx6ul-kontron-n6x1x-som-common.dtsi
++++ b/arch/arm/boot/dts/imx6ul-kontron-n6x1x-som-common.dtsi
+@@ -57,6 +57,13 @@
+ status = "okay";
+ };
+
++&wdog1 {
++ pinctrl-names = "default";
++ pinctrl-0 = <&pinctrl_wdog>;
++ fsl,ext-reset-output;
++ status = "okay";
++};
++
+ &iomuxc {
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_reset_out>;
+@@ -106,4 +113,10 @@
+ MX6UL_PAD_SNVS_TAMPER9__GPIO5_IO09 0x1b0b0
+ >;
+ };
++
++ pinctrl_wdog: wdoggrp {
++ fsl,pins = <
++ MX6UL_PAD_GPIO1_IO09__WDOG1_WDOG_ANY 0x18b0
++ >;
++ };
+ };
+diff --git a/arch/arm/boot/dts/omap4-duovero-parlor.dts b/arch/arm/boot/dts/omap4-duovero-parlor.dts
+index 8047e8cdb3af..4548d87534e3 100644
+--- a/arch/arm/boot/dts/omap4-duovero-parlor.dts
++++ b/arch/arm/boot/dts/omap4-duovero-parlor.dts
+@@ -139,7 +139,7 @@
+ ethernet@gpmc {
+ reg = <5 0 0xff>;
+ interrupt-parent = <&gpio2>;
+- interrupts = <12 IRQ_TYPE_EDGE_FALLING>; /* gpio_44 */
++ interrupts = <12 IRQ_TYPE_LEVEL_LOW>; /* gpio_44 */
+
+ phy-mode = "mii";
+
+diff --git a/arch/arm/mach-bcm/Kconfig b/arch/arm/mach-bcm/Kconfig
+index 6aa938b949db..1df0ee01ee02 100644
+--- a/arch/arm/mach-bcm/Kconfig
++++ b/arch/arm/mach-bcm/Kconfig
+@@ -53,6 +53,7 @@ config ARCH_BCM_NSP
+ select ARM_ERRATA_754322
+ select ARM_ERRATA_775420
+ select ARM_ERRATA_764369 if SMP
++ select ARM_TIMER_SP804
+ select THERMAL
+ select THERMAL_OF
+ help
+diff --git a/arch/arm/mach-imx/pm-imx5.c b/arch/arm/mach-imx/pm-imx5.c
+index f057df813f83..e9962b48e30c 100644
+--- a/arch/arm/mach-imx/pm-imx5.c
++++ b/arch/arm/mach-imx/pm-imx5.c
+@@ -295,14 +295,14 @@ static int __init imx_suspend_alloc_ocram(
+ if (!ocram_pool) {
+ pr_warn("%s: ocram pool unavailable!\n", __func__);
+ ret = -ENODEV;
+- goto put_node;
++ goto put_device;
+ }
+
+ ocram_base = gen_pool_alloc(ocram_pool, size);
+ if (!ocram_base) {
+ pr_warn("%s: unable to alloc ocram!\n", __func__);
+ ret = -ENOMEM;
+- goto put_node;
++ goto put_device;
+ }
+
+ phys = gen_pool_virt_to_phys(ocram_pool, ocram_base);
+@@ -312,6 +312,8 @@ static int __init imx_suspend_alloc_ocram(
+ if (virt_out)
+ *virt_out = virt;
+
++put_device:
++ put_device(&pdev->dev);
+ put_node:
+ of_node_put(node);
+
+diff --git a/arch/arm/mach-omap2/omap_hwmod.c b/arch/arm/mach-omap2/omap_hwmod.c
+index 82706af307de..c630457bb228 100644
+--- a/arch/arm/mach-omap2/omap_hwmod.c
++++ b/arch/arm/mach-omap2/omap_hwmod.c
+@@ -3489,7 +3489,7 @@ static const struct omap_hwmod_reset dra7_reset_quirks[] = {
+ };
+
+ static const struct omap_hwmod_reset omap_reset_quirks[] = {
+- { .match = "dss", .len = 3, .reset = omap_dss_reset, },
++ { .match = "dss_core", .len = 8, .reset = omap_dss_reset, },
+ { .match = "hdq1w", .len = 5, .reset = omap_hdq1w_reset, },
+ { .match = "i2c", .len = 3, .reset = omap_i2c_reset, },
+ { .match = "wd_timer", .len = 8, .reset = omap2_wd_timer_reset, },
+diff --git a/arch/arm64/boot/dts/freescale/imx8mm-evk.dts b/arch/arm64/boot/dts/freescale/imx8mm-evk.dts
+index 951e14a3de0e..22aed2806fda 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mm-evk.dts
++++ b/arch/arm64/boot/dts/freescale/imx8mm-evk.dts
+@@ -196,7 +196,7 @@
+
+ ldo1_reg: LDO1 {
+ regulator-name = "LDO1";
+- regulator-min-microvolt = <3000000>;
++ regulator-min-microvolt = <1600000>;
+ regulator-max-microvolt = <3300000>;
+ regulator-boot-on;
+ regulator-always-on;
+@@ -204,7 +204,7 @@
+
+ ldo2_reg: LDO2 {
+ regulator-name = "LDO2";
+- regulator-min-microvolt = <900000>;
++ regulator-min-microvolt = <800000>;
+ regulator-max-microvolt = <900000>;
+ regulator-boot-on;
+ regulator-always-on;
+diff --git a/arch/arm64/boot/dts/freescale/imx8mn-ddr4-evk.dts b/arch/arm64/boot/dts/freescale/imx8mn-ddr4-evk.dts
+index 2497eebb5739..fe49dbc535e1 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mn-ddr4-evk.dts
++++ b/arch/arm64/boot/dts/freescale/imx8mn-ddr4-evk.dts
+@@ -101,7 +101,7 @@
+
+ ldo1_reg: LDO1 {
+ regulator-name = "LDO1";
+- regulator-min-microvolt = <3000000>;
++ regulator-min-microvolt = <1600000>;
+ regulator-max-microvolt = <3300000>;
+ regulator-boot-on;
+ regulator-always-on;
+@@ -109,7 +109,7 @@
+
+ ldo2_reg: LDO2 {
+ regulator-name = "LDO2";
+- regulator-min-microvolt = <900000>;
++ regulator-min-microvolt = <800000>;
+ regulator-max-microvolt = <900000>;
+ regulator-boot-on;
+ regulator-always-on;
+diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c
+index 94289d126993..c12186f8ab7a 100644
+--- a/arch/arm64/kernel/fpsimd.c
++++ b/arch/arm64/kernel/fpsimd.c
+@@ -338,7 +338,7 @@ static unsigned int find_supported_vector_length(unsigned int vl)
+ return sve_vl_from_vq(__bit_to_vq(bit));
+ }
+
+-#ifdef CONFIG_SYSCTL
++#if defined(CONFIG_ARM64_SVE) && defined(CONFIG_SYSCTL)
+
+ static int sve_proc_do_default_vl(struct ctl_table *table, int write,
+ void __user *buffer, size_t *lenp,
+@@ -384,9 +384,9 @@ static int __init sve_sysctl_init(void)
+ return 0;
+ }
+
+-#else /* ! CONFIG_SYSCTL */
++#else /* ! (CONFIG_ARM64_SVE && CONFIG_SYSCTL) */
+ static int __init sve_sysctl_init(void) { return 0; }
+-#endif /* ! CONFIG_SYSCTL */
++#endif /* ! (CONFIG_ARM64_SVE && CONFIG_SYSCTL) */
+
+ #define ZREG(sve_state, vq, n) ((char *)(sve_state) + \
+ (SVE_SIG_ZREG_OFFSET(vq, n) - SVE_SIG_REGS_OFFSET))
+diff --git a/arch/arm64/kernel/perf_regs.c b/arch/arm64/kernel/perf_regs.c
+index 0bbac612146e..666b225aeb3a 100644
+--- a/arch/arm64/kernel/perf_regs.c
++++ b/arch/arm64/kernel/perf_regs.c
+@@ -15,15 +15,34 @@ u64 perf_reg_value(struct pt_regs *regs, int idx)
+ return 0;
+
+ /*
+- * Compat (i.e. 32 bit) mode:
+- * - PC has been set in the pt_regs struct in kernel_entry,
+- * - Handle SP and LR here.
++ * Our handling of compat tasks (PERF_SAMPLE_REGS_ABI_32) is weird, but
++ * we're stuck with it for ABI compatability reasons.
++ *
++ * For a 32-bit consumer inspecting a 32-bit task, then it will look at
++ * the first 16 registers (see arch/arm/include/uapi/asm/perf_regs.h).
++ * These correspond directly to a prefix of the registers saved in our
++ * 'struct pt_regs', with the exception of the PC, so we copy that down
++ * (x15 corresponds to SP_hyp in the architecture).
++ *
++ * So far, so good.
++ *
++ * The oddity arises when a 64-bit consumer looks at a 32-bit task and
++ * asks for registers beyond PERF_REG_ARM_MAX. In this case, we return
++ * SP_usr, LR_usr and PC in the positions where the AArch64 SP, LR and
++ * PC registers would normally live. The initial idea was to allow a
++ * 64-bit unwinder to unwind a 32-bit task and, although it's not clear
++ * how well that works in practice, somebody might be relying on it.
++ *
++ * At the time we make a sample, we don't know whether the consumer is
++ * 32-bit or 64-bit, so we have to cater for both possibilities.
+ */
+ if (compat_user_mode(regs)) {
+ if ((u32)idx == PERF_REG_ARM64_SP)
+ return regs->compat_sp;
+ if ((u32)idx == PERF_REG_ARM64_LR)
+ return regs->compat_lr;
++ if (idx == 15)
++ return regs->pc;
+ }
+
+ if ((u32)idx == PERF_REG_ARM64_SP)
+diff --git a/arch/powerpc/mm/nohash/kaslr_booke.c b/arch/powerpc/mm/nohash/kaslr_booke.c
+index 4a75f2d9bf0e..bce0e5349978 100644
+--- a/arch/powerpc/mm/nohash/kaslr_booke.c
++++ b/arch/powerpc/mm/nohash/kaslr_booke.c
+@@ -14,6 +14,7 @@
+ #include <linux/memblock.h>
+ #include <linux/libfdt.h>
+ #include <linux/crash_core.h>
++#include <asm/cacheflush.h>
+ #include <asm/pgalloc.h>
+ #include <asm/prom.h>
+ #include <asm/kdump.h>
+diff --git a/arch/riscv/include/asm/cmpxchg.h b/arch/riscv/include/asm/cmpxchg.h
+index d969bab4a26b..262e5bbb2776 100644
+--- a/arch/riscv/include/asm/cmpxchg.h
++++ b/arch/riscv/include/asm/cmpxchg.h
+@@ -179,7 +179,7 @@
+ " bnez %1, 0b\n" \
+ "1:\n" \
+ : "=&r" (__ret), "=&r" (__rc), "+A" (*__ptr) \
+- : "rJ" (__old), "rJ" (__new) \
++ : "rJ" ((long)__old), "rJ" (__new) \
+ : "memory"); \
+ break; \
+ case 8: \
+@@ -224,7 +224,7 @@
+ RISCV_ACQUIRE_BARRIER \
+ "1:\n" \
+ : "=&r" (__ret), "=&r" (__rc), "+A" (*__ptr) \
+- : "rJ" (__old), "rJ" (__new) \
++ : "rJ" ((long)__old), "rJ" (__new) \
+ : "memory"); \
+ break; \
+ case 8: \
+@@ -270,7 +270,7 @@
+ " bnez %1, 0b\n" \
+ "1:\n" \
+ : "=&r" (__ret), "=&r" (__rc), "+A" (*__ptr) \
+- : "rJ" (__old), "rJ" (__new) \
++ : "rJ" ((long)__old), "rJ" (__new) \
+ : "memory"); \
+ break; \
+ case 8: \
+@@ -316,7 +316,7 @@
+ " fence rw, rw\n" \
+ "1:\n" \
+ : "=&r" (__ret), "=&r" (__rc), "+A" (*__ptr) \
+- : "rJ" (__old), "rJ" (__new) \
++ : "rJ" ((long)__old), "rJ" (__new) \
+ : "memory"); \
+ break; \
+ case 8: \
+diff --git a/arch/riscv/kernel/sys_riscv.c b/arch/riscv/kernel/sys_riscv.c
+index f3619f59d85c..12f8a7fce78b 100644
+--- a/arch/riscv/kernel/sys_riscv.c
++++ b/arch/riscv/kernel/sys_riscv.c
+@@ -8,6 +8,7 @@
+ #include <linux/syscalls.h>
+ #include <asm/unistd.h>
+ #include <asm/cacheflush.h>
++#include <asm-generic/mman-common.h>
+
+ static long riscv_sys_mmap(unsigned long addr, unsigned long len,
+ unsigned long prot, unsigned long flags,
+@@ -16,6 +17,11 @@ static long riscv_sys_mmap(unsigned long addr, unsigned long len,
+ {
+ if (unlikely(offset & (~PAGE_MASK >> page_shift_offset)))
+ return -EINVAL;
++
++ if ((prot & PROT_WRITE) && (prot & PROT_EXEC))
++ if (unlikely(!(prot & PROT_READ)))
++ return -EINVAL;
++
+ return ksys_mmap_pgoff(addr, len, prot, flags, fd,
+ offset >> (PAGE_SHIFT - page_shift_offset));
+ }
+diff --git a/arch/s390/include/asm/vdso.h b/arch/s390/include/asm/vdso.h
+index 3bcfdeb01395..0cd085cdeb4f 100644
+--- a/arch/s390/include/asm/vdso.h
++++ b/arch/s390/include/asm/vdso.h
+@@ -36,6 +36,7 @@ struct vdso_data {
+ __u32 tk_shift; /* Shift used for xtime_nsec 0x60 */
+ __u32 ts_dir; /* TOD steering direction 0x64 */
+ __u64 ts_end; /* TOD steering end 0x68 */
++ __u32 hrtimer_res; /* hrtimer resolution 0x70 */
+ };
+
+ struct vdso_per_cpu_data {
+diff --git a/arch/s390/kernel/asm-offsets.c b/arch/s390/kernel/asm-offsets.c
+index e80f0e6f5972..46f84cb0d552 100644
+--- a/arch/s390/kernel/asm-offsets.c
++++ b/arch/s390/kernel/asm-offsets.c
+@@ -76,6 +76,7 @@ int main(void)
+ OFFSET(__VDSO_TK_SHIFT, vdso_data, tk_shift);
+ OFFSET(__VDSO_TS_DIR, vdso_data, ts_dir);
+ OFFSET(__VDSO_TS_END, vdso_data, ts_end);
++ OFFSET(__VDSO_CLOCK_REALTIME_RES, vdso_data, hrtimer_res);
+ OFFSET(__VDSO_ECTG_BASE, vdso_per_cpu_data, ectg_timer_base);
+ OFFSET(__VDSO_ECTG_USER, vdso_per_cpu_data, ectg_user_time);
+ OFFSET(__VDSO_GETCPU_VAL, vdso_per_cpu_data, getcpu_val);
+@@ -86,7 +87,6 @@ int main(void)
+ DEFINE(__CLOCK_REALTIME_COARSE, CLOCK_REALTIME_COARSE);
+ DEFINE(__CLOCK_MONOTONIC_COARSE, CLOCK_MONOTONIC_COARSE);
+ DEFINE(__CLOCK_THREAD_CPUTIME_ID, CLOCK_THREAD_CPUTIME_ID);
+- DEFINE(__CLOCK_REALTIME_RES, MONOTONIC_RES_NSEC);
+ DEFINE(__CLOCK_COARSE_RES, LOW_RES_NSEC);
+ BLANK();
+ /* idle data offsets */
+diff --git a/arch/s390/kernel/entry.S b/arch/s390/kernel/entry.S
+index 3ae64914bd14..9584e743102b 100644
+--- a/arch/s390/kernel/entry.S
++++ b/arch/s390/kernel/entry.S
+@@ -368,9 +368,9 @@ ENTRY(system_call)
+ jnz .Lsysc_nr_ok
+ # svc 0: system call number in %r1
+ llgfr %r1,%r1 # clear high word in r1
++ sth %r1,__PT_INT_CODE+2(%r11)
+ cghi %r1,NR_syscalls
+ jnl .Lsysc_nr_ok
+- sth %r1,__PT_INT_CODE+2(%r11)
+ slag %r8,%r1,3
+ .Lsysc_nr_ok:
+ xc __SF_BACKCHAIN(8,%r15),__SF_BACKCHAIN(%r15)
+diff --git a/arch/s390/kernel/ptrace.c b/arch/s390/kernel/ptrace.c
+index 58faa12542a1..e007224b65bb 100644
+--- a/arch/s390/kernel/ptrace.c
++++ b/arch/s390/kernel/ptrace.c
+@@ -324,6 +324,25 @@ static inline void __poke_user_per(struct task_struct *child,
+ child->thread.per_user.end = data;
+ }
+
++static void fixup_int_code(struct task_struct *child, addr_t data)
++{
++ struct pt_regs *regs = task_pt_regs(child);
++ int ilc = regs->int_code >> 16;
++ u16 insn;
++
++ if (ilc > 6)
++ return;
++
++ if (ptrace_access_vm(child, regs->psw.addr - (regs->int_code >> 16),
++ &insn, sizeof(insn), FOLL_FORCE) != sizeof(insn))
++ return;
++
++ /* double check that tracee stopped on svc instruction */
++ if ((insn >> 8) != 0xa)
++ return;
++
++ regs->int_code = 0x20000 | (data & 0xffff);
++}
+ /*
+ * Write a word to the user area of a process at location addr. This
+ * operation does have an additional problem compared to peek_user.
+@@ -335,7 +354,9 @@ static int __poke_user(struct task_struct *child, addr_t addr, addr_t data)
+ struct user *dummy = NULL;
+ addr_t offset;
+
++
+ if (addr < (addr_t) &dummy->regs.acrs) {
++ struct pt_regs *regs = task_pt_regs(child);
+ /*
+ * psw and gprs are stored on the stack
+ */
+@@ -353,7 +374,11 @@ static int __poke_user(struct task_struct *child, addr_t addr, addr_t data)
+ /* Invalid addressing mode bits */
+ return -EINVAL;
+ }
+- *(addr_t *)((addr_t) &task_pt_regs(child)->psw + addr) = data;
++
++ if (test_pt_regs_flag(regs, PIF_SYSCALL) &&
++ addr == offsetof(struct user, regs.gprs[2]))
++ fixup_int_code(child, data);
++ *(addr_t *)((addr_t) &regs->psw + addr) = data;
+
+ } else if (addr < (addr_t) (&dummy->regs.orig_gpr2)) {
+ /*
+@@ -719,6 +744,10 @@ static int __poke_user_compat(struct task_struct *child,
+ regs->psw.mask = (regs->psw.mask & ~PSW_MASK_BA) |
+ (__u64)(tmp & PSW32_ADDR_AMODE);
+ } else {
++
++ if (test_pt_regs_flag(regs, PIF_SYSCALL) &&
++ addr == offsetof(struct compat_user, regs.gprs[2]))
++ fixup_int_code(child, data);
+ /* gpr 0-15 */
++ *(__u32*)((addr_t) &regs->psw + addr*2 + 4) = tmp;
+ }
+@@ -838,40 +867,66 @@ long compat_arch_ptrace(struct task_struct *child, compat_long_t request,
+ asmlinkage long do_syscall_trace_enter(struct pt_regs *regs)
+ {
+ unsigned long mask = -1UL;
++ long ret = -1;
++
++ if (is_compat_task())
++ mask = 0xffffffff;
+
+ /*
+ * The sysc_tracesys code in entry.S stored the system
+ * call number to gprs[2].
+ */
+ if (test_thread_flag(TIF_SYSCALL_TRACE) &&
+- (tracehook_report_syscall_entry(regs) ||
+- regs->gprs[2] >= NR_syscalls)) {
++ tracehook_report_syscall_entry(regs)) {
+ /*
+- * Tracing decided this syscall should not happen or the
+- * debugger stored an invalid system call number. Skip
++ * Tracing decided this syscall should not happen. Skip
+ * the system call and the system call restart handling.
+ */
+- clear_pt_regs_flag(regs, PIF_SYSCALL);
+- return -1;
++ goto skip;
+ }
+
++#ifdef CONFIG_SECCOMP
+ /* Do the secure computing check after ptrace. */
+- if (secure_computing()) {
+- /* seccomp failures shouldn't expose any additional code. */
+- return -1;
++ if (unlikely(test_thread_flag(TIF_SECCOMP))) {
++ struct seccomp_data sd;
++
++ if (is_compat_task()) {
++ sd.instruction_pointer = regs->psw.addr & 0x7fffffff;
++ sd.arch = AUDIT_ARCH_S390;
++ } else {
++ sd.instruction_pointer = regs->psw.addr;
++ sd.arch = AUDIT_ARCH_S390X;
++ }
++
++ sd.nr = regs->int_code & 0xffff;
++ sd.args[0] = regs->orig_gpr2 & mask;
++ sd.args[1] = regs->gprs[3] & mask;
++ sd.args[2] = regs->gprs[4] & mask;
++ sd.args[3] = regs->gprs[5] & mask;
++ sd.args[4] = regs->gprs[6] & mask;
++ sd.args[5] = regs->gprs[7] & mask;
++
++ if (__secure_computing(&sd) == -1)
++ goto skip;
+ }
++#endif /* CONFIG_SECCOMP */
+
+ if (unlikely(test_thread_flag(TIF_SYSCALL_TRACEPOINT)))
+- trace_sys_enter(regs, regs->gprs[2]);
++ trace_sys_enter(regs, regs->int_code & 0xffff);
+
+- if (is_compat_task())
+- mask = 0xffffffff;
+
+- audit_syscall_entry(regs->gprs[2], regs->orig_gpr2 & mask,
++ audit_syscall_entry(regs->int_code & 0xffff, regs->orig_gpr2 & mask,
+ regs->gprs[3] &mask, regs->gprs[4] &mask,
+ regs->gprs[5] &mask);
+
++ if ((signed long)regs->gprs[2] >= NR_syscalls) {
++ regs->gprs[2] = -ENOSYS;
++ ret = -ENOSYS;
++ }
+ return regs->gprs[2];
++skip:
++ clear_pt_regs_flag(regs, PIF_SYSCALL);
++ return ret;
+ }
+
+ asmlinkage void do_syscall_trace_exit(struct pt_regs *regs)
+diff --git a/arch/s390/kernel/time.c b/arch/s390/kernel/time.c
+index f9d070d016e3..b1113b519432 100644
+--- a/arch/s390/kernel/time.c
++++ b/arch/s390/kernel/time.c
+@@ -301,6 +301,7 @@ void update_vsyscall(struct timekeeper *tk)
+
+ vdso_data->tk_mult = tk->tkr_mono.mult;
+ vdso_data->tk_shift = tk->tkr_mono.shift;
++ vdso_data->hrtimer_res = hrtimer_resolution;
+ smp_wmb();
+ ++vdso_data->tb_update_count;
+ }
+diff --git a/arch/s390/kernel/vdso64/Makefile b/arch/s390/kernel/vdso64/Makefile
+index bec19e7e6e1c..4a66a1cb919b 100644
+--- a/arch/s390/kernel/vdso64/Makefile
++++ b/arch/s390/kernel/vdso64/Makefile
+@@ -18,8 +18,8 @@ KBUILD_AFLAGS_64 += -m64 -s
+
+ KBUILD_CFLAGS_64 := $(filter-out -m64,$(KBUILD_CFLAGS))
+ KBUILD_CFLAGS_64 += -m64 -fPIC -shared -fno-common -fno-builtin
+-KBUILD_CFLAGS_64 += -nostdlib -Wl,-soname=linux-vdso64.so.1 \
+- -Wl,--hash-style=both
++ldflags-y := -fPIC -shared -nostdlib -soname=linux-vdso64.so.1 \
++ --hash-style=both --build-id -T
+
+ $(targets:%=$(obj)/%.dbg): KBUILD_CFLAGS = $(KBUILD_CFLAGS_64)
+ $(targets:%=$(obj)/%.dbg): KBUILD_AFLAGS = $(KBUILD_AFLAGS_64)
+@@ -37,8 +37,8 @@ KASAN_SANITIZE := n
+ $(obj)/vdso64_wrapper.o : $(obj)/vdso64.so
+
+ # link rule for the .so file, .lds has to be first
+-$(obj)/vdso64.so.dbg: $(src)/vdso64.lds $(obj-vdso64) FORCE
+- $(call if_changed,vdso64ld)
++$(obj)/vdso64.so.dbg: $(obj)/vdso64.lds $(obj-vdso64) FORCE
++ $(call if_changed,ld)
+
+ # strip rule for the .so file
+ $(obj)/%.so: OBJCOPYFLAGS := -S
+@@ -50,8 +50,6 @@ $(obj-vdso64): %.o: %.S FORCE
+ $(call if_changed_dep,vdso64as)
+
+ # actual build commands
+-quiet_cmd_vdso64ld = VDSO64L $@
+- cmd_vdso64ld = $(CC) $(c_flags) -Wl,-T $(filter %.lds %.o,$^) -o $@
+ quiet_cmd_vdso64as = VDSO64A $@
+ cmd_vdso64as = $(CC) $(a_flags) -c -o $@ $<
+
+diff --git a/arch/s390/kernel/vdso64/clock_getres.S b/arch/s390/kernel/vdso64/clock_getres.S
+index 081435398e0a..0c79caa32b59 100644
+--- a/arch/s390/kernel/vdso64/clock_getres.S
++++ b/arch/s390/kernel/vdso64/clock_getres.S
+@@ -17,12 +17,14 @@
+ .type __kernel_clock_getres,@function
+ __kernel_clock_getres:
+ CFI_STARTPROC
+- larl %r1,4f
++ larl %r1,3f
++ lg %r0,0(%r1)
+ cghi %r2,__CLOCK_REALTIME_COARSE
+ je 0f
+ cghi %r2,__CLOCK_MONOTONIC_COARSE
+ je 0f
+- larl %r1,3f
++ larl %r1,_vdso_data
++ llgf %r0,__VDSO_CLOCK_REALTIME_RES(%r1)
+ cghi %r2,__CLOCK_REALTIME
+ je 0f
+ cghi %r2,__CLOCK_MONOTONIC
+@@ -36,7 +38,6 @@ __kernel_clock_getres:
+ jz 2f
+ 0: ltgr %r3,%r3
+ jz 1f /* res == NULL */
+- lg %r0,0(%r1)
+ xc 0(8,%r3),0(%r3) /* set tp->tv_sec to zero */
+ stg %r0,8(%r3) /* store tp->tv_usec */
+ 1: lghi %r2,0
+@@ -45,6 +46,5 @@ __kernel_clock_getres:
+ svc 0
+ br %r14
+ CFI_ENDPROC
+-3: .quad __CLOCK_REALTIME_RES
+-4: .quad __CLOCK_COARSE_RES
++3: .quad __CLOCK_COARSE_RES
+ .size __kernel_clock_getres,.-__kernel_clock_getres
+diff --git a/arch/sparc/kernel/ptrace_32.c b/arch/sparc/kernel/ptrace_32.c
+index 60f7205ebe40..646dd58169ec 100644
+--- a/arch/sparc/kernel/ptrace_32.c
++++ b/arch/sparc/kernel/ptrace_32.c
+@@ -168,12 +168,17 @@ static int genregs32_set(struct task_struct *target,
+ if (ret || !count)
+ return ret;
+ ret = user_regset_copyin(&pos, &count, &kbuf, &ubuf,
+- &regs->y,
++ &regs->npc,
+ 34 * sizeof(u32), 35 * sizeof(u32));
+ if (ret || !count)
+ return ret;
++ ret = user_regset_copyin(&pos, &count, &kbuf, &ubuf,
++ &regs->y,
++ 35 * sizeof(u32), 36 * sizeof(u32));
++ if (ret || !count)
++ return ret;
+ return user_regset_copyin_ignore(&pos, &count, &kbuf, &ubuf,
+- 35 * sizeof(u32), 38 * sizeof(u32));
++ 36 * sizeof(u32), 38 * sizeof(u32));
+ }
+
+ static int fpregs32_get(struct task_struct *target,
+diff --git a/arch/x86/boot/compressed/head_64.S b/arch/x86/boot/compressed/head_64.S
+index 76d1d64d51e3..41f792208622 100644
+--- a/arch/x86/boot/compressed/head_64.S
++++ b/arch/x86/boot/compressed/head_64.S
+@@ -213,7 +213,6 @@ SYM_FUNC_START(startup_32)
+ * We place all of the values on our mini stack so lret can
+ * used to perform that far jump.
+ */
+- pushl $__KERNEL_CS
+ leal startup_64(%ebp), %eax
+ #ifdef CONFIG_EFI_MIXED
+ movl efi32_boot_args(%ebp), %edi
+@@ -224,11 +223,20 @@ SYM_FUNC_START(startup_32)
+ movl efi32_boot_args+8(%ebp), %edx // saved bootparams pointer
+ cmpl $0, %edx
+ jnz 1f
++ /*
++ * efi_pe_entry uses MS calling convention, which requires 32 bytes of
++ * shadow space on the stack even if all arguments are passed in
++ * registers. We also need an additional 8 bytes for the space that
++ * would be occupied by the return address, and this also results in
++ * the correct stack alignment for entry.
++ */
++ subl $40, %esp
+ leal efi_pe_entry(%ebp), %eax
+ movl %edi, %ecx // MS calling convention
+ movl %esi, %edx
+ 1:
+ #endif
++ pushl $__KERNEL_CS
+ pushl %eax
+
+ /* Enter paged protected Mode, activating Long Mode */
+@@ -776,6 +784,7 @@ SYM_DATA_LOCAL(boot_heap, .fill BOOT_HEAP_SIZE, 1, 0)
+
+ SYM_DATA_START_LOCAL(boot_stack)
+ .fill BOOT_STACK_SIZE, 1, 0
++ .balign 16
+ SYM_DATA_END_LABEL(boot_stack, SYM_L_LOCAL, boot_stack_end)
+
+ /*
+diff --git a/arch/x86/include/asm/cpu.h b/arch/x86/include/asm/cpu.h
+index dd17c2da1af5..da78ccbd493b 100644
+--- a/arch/x86/include/asm/cpu.h
++++ b/arch/x86/include/asm/cpu.h
+@@ -58,4 +58,9 @@ static inline bool handle_guest_split_lock(unsigned long ip)
+ return false;
+ }
+ #endif
++#ifdef CONFIG_IA32_FEAT_CTL
++void init_ia32_feat_ctl(struct cpuinfo_x86 *c);
++#else
++static inline void init_ia32_feat_ctl(struct cpuinfo_x86 *c) {}
++#endif
+ #endif /* _ASM_X86_CPU_H */
+diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
+index 0a6b35353fc7..86e2e0272c57 100644
+--- a/arch/x86/include/asm/kvm_host.h
++++ b/arch/x86/include/asm/kvm_host.h
+@@ -1195,7 +1195,7 @@ struct kvm_x86_ops {
+ void (*enable_log_dirty_pt_masked)(struct kvm *kvm,
+ struct kvm_memory_slot *slot,
+ gfn_t offset, unsigned long mask);
+- int (*write_log_dirty)(struct kvm_vcpu *vcpu);
++ int (*write_log_dirty)(struct kvm_vcpu *vcpu, gpa_t l2_gpa);
+
+ /* pmu operations of sub-arch */
+ const struct kvm_pmu_ops *pmu_ops;
+diff --git a/arch/x86/include/asm/mwait.h b/arch/x86/include/asm/mwait.h
+index b809f117f3f4..9d5252c9685c 100644
+--- a/arch/x86/include/asm/mwait.h
++++ b/arch/x86/include/asm/mwait.h
+@@ -23,8 +23,6 @@
+ #define MWAITX_MAX_LOOPS ((u32)-1)
+ #define MWAITX_DISABLE_CSTATES 0xf0
+
+-u32 get_umwait_control_msr(void);
+-
+ static inline void __monitor(const void *eax, unsigned long ecx,
+ unsigned long edx)
+ {
+diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
+index 3bcf27caf6c9..c4e8fd709cf6 100644
+--- a/arch/x86/include/asm/processor.h
++++ b/arch/x86/include/asm/processor.h
+@@ -113,9 +113,10 @@ struct cpuinfo_x86 {
+ /* in KB - valid for CPUS which support this call: */
+ unsigned int x86_cache_size;
+ int x86_cache_alignment; /* In bytes */
+- /* Cache QoS architectural values: */
++ /* Cache QoS architectural values, valid only on the BSP: */
+ int x86_cache_max_rmid; /* max index */
+ int x86_cache_occ_scale; /* scale to bytes */
++ int x86_cache_mbm_width_offset;
+ int x86_power;
+ unsigned long loops_per_jiffy;
+ /* cpuid returned max cores value: */
+diff --git a/arch/x86/include/asm/resctrl_sched.h b/arch/x86/include/asm/resctrl_sched.h
+index f6b7fe2833cc..c8a27cbbdae2 100644
+--- a/arch/x86/include/asm/resctrl_sched.h
++++ b/arch/x86/include/asm/resctrl_sched.h
+@@ -84,9 +84,12 @@ static inline void resctrl_sched_in(void)
+ __resctrl_sched_in();
+ }
+
++void resctrl_cpu_detect(struct cpuinfo_x86 *c);
++
+ #else
+
+ static inline void resctrl_sched_in(void) {}
++static inline void resctrl_cpu_detect(struct cpuinfo_x86 *c) {}
+
+ #endif /* CONFIG_X86_CPU_RESCTRL */
+
+diff --git a/arch/x86/kernel/cpu/centaur.c b/arch/x86/kernel/cpu/centaur.c
+index 426792565d86..c5cf336e5077 100644
+--- a/arch/x86/kernel/cpu/centaur.c
++++ b/arch/x86/kernel/cpu/centaur.c
+@@ -3,6 +3,7 @@
+ #include <linux/sched.h>
+ #include <linux/sched/clock.h>
+
++#include <asm/cpu.h>
+ #include <asm/cpufeature.h>
+ #include <asm/e820/api.h>
+ #include <asm/mtrr.h>
+diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
+index 8293ee514975..c669a5756bdf 100644
+--- a/arch/x86/kernel/cpu/common.c
++++ b/arch/x86/kernel/cpu/common.c
+@@ -56,6 +56,7 @@
+ #include <asm/intel-family.h>
+ #include <asm/cpu_device_id.h>
+ #include <asm/uv/uv.h>
++#include <asm/resctrl_sched.h>
+
+ #include "cpu.h"
+
+@@ -347,6 +348,9 @@ out:
+ cr4_clear_bits(X86_CR4_UMIP);
+ }
+
++/* These bits should not change their value after CPU init is finished. */
++static const unsigned long cr4_pinned_mask =
++ X86_CR4_SMEP | X86_CR4_SMAP | X86_CR4_UMIP | X86_CR4_FSGSBASE;
+ static DEFINE_STATIC_KEY_FALSE_RO(cr_pinning);
+ static unsigned long cr4_pinned_bits __ro_after_init;
+
+@@ -371,20 +375,20 @@ EXPORT_SYMBOL(native_write_cr0);
+
+ void native_write_cr4(unsigned long val)
+ {
+- unsigned long bits_missing = 0;
++ unsigned long bits_changed = 0;
+
+ set_register:
+ asm volatile("mov %0,%%cr4": "+r" (val), "+m" (cr4_pinned_bits));
+
+ if (static_branch_likely(&cr_pinning)) {
+- if (unlikely((val & cr4_pinned_bits) != cr4_pinned_bits)) {
+- bits_missing = ~val & cr4_pinned_bits;
+- val |= bits_missing;
++ if (unlikely((val & cr4_pinned_mask) != cr4_pinned_bits)) {
++ bits_changed = (val & cr4_pinned_mask) ^ cr4_pinned_bits;
++ val = (val & ~cr4_pinned_mask) | cr4_pinned_bits;
+ goto set_register;
+ }
+- /* Warn after we've set the missing bits. */
+- WARN_ONCE(bits_missing, "CR4 bits went missing: %lx!?\n",
+- bits_missing);
++ /* Warn after we've corrected the changed bits. */
++ WARN_ONCE(bits_changed, "pinned CR4 bits changed: 0x%lx!?\n",
++ bits_changed);
+ }
+ }
+ EXPORT_SYMBOL(native_write_cr4);
+@@ -396,7 +400,7 @@ void cr4_init(void)
+ if (boot_cpu_has(X86_FEATURE_PCID))
+ cr4 |= X86_CR4_PCIDE;
+ if (static_branch_likely(&cr_pinning))
+- cr4 |= cr4_pinned_bits;
++ cr4 = (cr4 & ~cr4_pinned_mask) | cr4_pinned_bits;
+
+ __write_cr4(cr4);
+
+@@ -411,10 +415,7 @@ void cr4_init(void)
+ */
+ static void __init setup_cr_pinning(void)
+ {
+- unsigned long mask;
+-
+- mask = (X86_CR4_SMEP | X86_CR4_SMAP | X86_CR4_UMIP);
+- cr4_pinned_bits = this_cpu_read(cpu_tlbstate.cr4) & mask;
++ cr4_pinned_bits = this_cpu_read(cpu_tlbstate.cr4) & cr4_pinned_mask;
+ static_key_enable(&cr_pinning.key);
+ }
+
+@@ -854,30 +855,6 @@ static void init_speculation_control(struct cpuinfo_x86 *c)
+ }
+ }
+
+-static void init_cqm(struct cpuinfo_x86 *c)
+-{
+- if (!cpu_has(c, X86_FEATURE_CQM_LLC)) {
+- c->x86_cache_max_rmid = -1;
+- c->x86_cache_occ_scale = -1;
+- return;
+- }
+-
+- /* will be overridden if occupancy monitoring exists */
+- c->x86_cache_max_rmid = cpuid_ebx(0xf);
+-
+- if (cpu_has(c, X86_FEATURE_CQM_OCCUP_LLC) ||
+- cpu_has(c, X86_FEATURE_CQM_MBM_TOTAL) ||
+- cpu_has(c, X86_FEATURE_CQM_MBM_LOCAL)) {
+- u32 eax, ebx, ecx, edx;
+-
+- /* QoS sub-leaf, EAX=0Fh, ECX=1 */
+- cpuid_count(0xf, 1, &eax, &ebx, &ecx, &edx);
+-
+- c->x86_cache_max_rmid = ecx;
+- c->x86_cache_occ_scale = ebx;
+- }
+-}
+-
+ void get_cpu_cap(struct cpuinfo_x86 *c)
+ {
+ u32 eax, ebx, ecx, edx;
+@@ -945,7 +922,7 @@ void get_cpu_cap(struct cpuinfo_x86 *c)
+
+ init_scattered_cpuid_features(c);
+ init_speculation_control(c);
+- init_cqm(c);
++ resctrl_cpu_detect(c);
+
+ /*
+ * Clear/Set all flags overridden by options, after probe.
+diff --git a/arch/x86/kernel/cpu/cpu.h b/arch/x86/kernel/cpu/cpu.h
+index fb538fccd24c..9d033693519a 100644
+--- a/arch/x86/kernel/cpu/cpu.h
++++ b/arch/x86/kernel/cpu/cpu.h
+@@ -81,8 +81,4 @@ extern void update_srbds_msr(void);
+
+ extern u64 x86_read_arch_cap_msr(void);
+
+-#ifdef CONFIG_IA32_FEAT_CTL
+-void init_ia32_feat_ctl(struct cpuinfo_x86 *c);
+-#endif
+-
+ #endif /* ARCH_X86_CPU_H */
+diff --git a/arch/x86/kernel/cpu/resctrl/core.c b/arch/x86/kernel/cpu/resctrl/core.c
+index d8cc5223b7ce..c1551541c7a5 100644
+--- a/arch/x86/kernel/cpu/resctrl/core.c
++++ b/arch/x86/kernel/cpu/resctrl/core.c
+@@ -958,6 +958,35 @@ static __init void rdt_init_res_defs(void)
+
+ static enum cpuhp_state rdt_online;
+
++void resctrl_cpu_detect(struct cpuinfo_x86 *c)
++{
++ if (!cpu_has(c, X86_FEATURE_CQM_LLC)) {
++ c->x86_cache_max_rmid = -1;
++ c->x86_cache_occ_scale = -1;
++ c->x86_cache_mbm_width_offset = -1;
++ return;
++ }
++
++ /* will be overridden if occupancy monitoring exists */
++ c->x86_cache_max_rmid = cpuid_ebx(0xf);
++
++ if (cpu_has(c, X86_FEATURE_CQM_OCCUP_LLC) ||
++ cpu_has(c, X86_FEATURE_CQM_MBM_TOTAL) ||
++ cpu_has(c, X86_FEATURE_CQM_MBM_LOCAL)) {
++ u32 eax, ebx, ecx, edx;
++
++ /* QoS sub-leaf, EAX=0Fh, ECX=1 */
++ cpuid_count(0xf, 1, &eax, &ebx, &ecx, &edx);
++
++ c->x86_cache_max_rmid = ecx;
++ c->x86_cache_occ_scale = ebx;
++ c->x86_cache_mbm_width_offset = eax & 0xff;
++
++ if (c->x86_vendor == X86_VENDOR_AMD && !c->x86_cache_mbm_width_offset)
++ c->x86_cache_mbm_width_offset = MBM_CNTR_WIDTH_OFFSET_AMD;
++ }
++}
++
+ static int __init resctrl_late_init(void)
+ {
+ struct rdt_resource *r;
+diff --git a/arch/x86/kernel/cpu/resctrl/internal.h b/arch/x86/kernel/cpu/resctrl/internal.h
+index 3dd13f3a8b23..096386475714 100644
+--- a/arch/x86/kernel/cpu/resctrl/internal.h
++++ b/arch/x86/kernel/cpu/resctrl/internal.h
+@@ -37,6 +37,7 @@
+ #define MBA_IS_LINEAR 0x4
+ #define MBA_MAX_MBPS U32_MAX
+ #define MAX_MBA_BW_AMD 0x800
++#define MBM_CNTR_WIDTH_OFFSET_AMD 20
+
+ #define RMID_VAL_ERROR BIT_ULL(63)
+ #define RMID_VAL_UNAVAIL BIT_ULL(62)
+diff --git a/arch/x86/kernel/cpu/resctrl/rdtgroup.c b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
+index 5a359d9fcc05..29a3878ab3c0 100644
+--- a/arch/x86/kernel/cpu/resctrl/rdtgroup.c
++++ b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
+@@ -1117,6 +1117,7 @@ static int rdt_cdp_peer_get(struct rdt_resource *r, struct rdt_domain *d,
+ _d_cdp = rdt_find_domain(_r_cdp, d->id, NULL);
+ if (WARN_ON(IS_ERR_OR_NULL(_d_cdp))) {
+ _r_cdp = NULL;
++ _d_cdp = NULL;
+ ret = -EINVAL;
+ }
+
+diff --git a/arch/x86/kernel/cpu/umwait.c b/arch/x86/kernel/cpu/umwait.c
+index 300e3fd5ade3..ec8064c0ae03 100644
+--- a/arch/x86/kernel/cpu/umwait.c
++++ b/arch/x86/kernel/cpu/umwait.c
+@@ -18,12 +18,6 @@
+ */
+ static u32 umwait_control_cached = UMWAIT_CTRL_VAL(100000, UMWAIT_C02_ENABLE);
+
+-u32 get_umwait_control_msr(void)
+-{
+- return umwait_control_cached;
+-}
+-EXPORT_SYMBOL_GPL(get_umwait_control_msr);
+-
+ /*
+ * Cache the original IA32_UMWAIT_CONTROL MSR value which is configured by
+ * hardware or BIOS before kernel boot.
+diff --git a/arch/x86/kernel/cpu/zhaoxin.c b/arch/x86/kernel/cpu/zhaoxin.c
+index df1358ba622b..05fa4ef63490 100644
+--- a/arch/x86/kernel/cpu/zhaoxin.c
++++ b/arch/x86/kernel/cpu/zhaoxin.c
+@@ -2,6 +2,7 @@
+ #include <linux/sched.h>
+ #include <linux/sched/clock.h>
+
++#include <asm/cpu.h>
+ #include <asm/cpufeature.h>
+
+ #include "cpu.h"
+diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
+index 9af25c97612a..8967e320a978 100644
+--- a/arch/x86/kvm/lapic.c
++++ b/arch/x86/kvm/lapic.c
+@@ -2512,6 +2512,7 @@ int kvm_apic_set_state(struct kvm_vcpu *vcpu, struct kvm_lapic_state *s)
+ }
+ memcpy(vcpu->arch.apic->regs, s->regs, sizeof(*s));
+
++ apic->vcpu->kvm->arch.apic_map_dirty = true;
+ kvm_recalculate_apic_map(vcpu->kvm);
+ kvm_apic_set_version(vcpu);
+
+diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
+index 8a3b1bce722a..d0e3b1b6845b 100644
+--- a/arch/x86/kvm/mmu.h
++++ b/arch/x86/kvm/mmu.h
+@@ -222,7 +222,7 @@ void kvm_mmu_gfn_disallow_lpage(struct kvm_memory_slot *slot, gfn_t gfn);
+ void kvm_mmu_gfn_allow_lpage(struct kvm_memory_slot *slot, gfn_t gfn);
+ bool kvm_mmu_slot_gfn_write_protect(struct kvm *kvm,
+ struct kvm_memory_slot *slot, u64 gfn);
+-int kvm_arch_write_log_dirty(struct kvm_vcpu *vcpu);
++int kvm_arch_write_log_dirty(struct kvm_vcpu *vcpu, gpa_t l2_gpa);
+
+ int kvm_mmu_post_init_vm(struct kvm *kvm);
+ void kvm_mmu_pre_destroy_vm(struct kvm *kvm);
+diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
+index 92d056954194..eb27ab47d607 100644
+--- a/arch/x86/kvm/mmu/mmu.c
++++ b/arch/x86/kvm/mmu/mmu.c
+@@ -1746,10 +1746,10 @@ void kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm,
+ * Emulate arch specific page modification logging for the
+ * nested hypervisor
+ */
+-int kvm_arch_write_log_dirty(struct kvm_vcpu *vcpu)
++int kvm_arch_write_log_dirty(struct kvm_vcpu *vcpu, gpa_t l2_gpa)
+ {
+ if (kvm_x86_ops.write_log_dirty)
+- return kvm_x86_ops.write_log_dirty(vcpu);
++ return kvm_x86_ops.write_log_dirty(vcpu, l2_gpa);
+
+ return 0;
+ }
+diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
+index 9bdf9b7d9a96..7098f843eabd 100644
+--- a/arch/x86/kvm/mmu/paging_tmpl.h
++++ b/arch/x86/kvm/mmu/paging_tmpl.h
+@@ -235,7 +235,7 @@ static inline unsigned FNAME(gpte_access)(u64 gpte)
+ static int FNAME(update_accessed_dirty_bits)(struct kvm_vcpu *vcpu,
+ struct kvm_mmu *mmu,
+ struct guest_walker *walker,
+- int write_fault)
++ gpa_t addr, int write_fault)
+ {
+ unsigned level, index;
+ pt_element_t pte, orig_pte;
+@@ -260,7 +260,7 @@ static int FNAME(update_accessed_dirty_bits)(struct kvm_vcpu *vcpu,
+ !(pte & PT_GUEST_DIRTY_MASK)) {
+ trace_kvm_mmu_set_dirty_bit(table_gfn, index, sizeof(pte));
+ #if PTTYPE == PTTYPE_EPT
+- if (kvm_arch_write_log_dirty(vcpu))
++ if (kvm_arch_write_log_dirty(vcpu, addr))
+ return -EINVAL;
+ #endif
+ pte |= PT_GUEST_DIRTY_MASK;
+@@ -457,7 +457,8 @@ retry_walk:
+ (PT_GUEST_DIRTY_SHIFT - PT_GUEST_ACCESSED_SHIFT);
+
+ if (unlikely(!accessed_dirty)) {
+- ret = FNAME(update_accessed_dirty_bits)(vcpu, mmu, walker, write_fault);
++ ret = FNAME(update_accessed_dirty_bits)(vcpu, mmu, walker,
++ addr, write_fault);
+ if (unlikely(ret < 0))
+ goto error;
+ else if (ret)
+diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
+index d7aa0dfab8bb..390ec34e4b4f 100644
+--- a/arch/x86/kvm/vmx/vmx.c
++++ b/arch/x86/kvm/vmx/vmx.c
+@@ -6467,23 +6467,6 @@ static void atomic_switch_perf_msrs(struct vcpu_vmx *vmx)
+ msrs[i].host, false);
+ }
+
+-static void atomic_switch_umwait_control_msr(struct vcpu_vmx *vmx)
+-{
+- u32 host_umwait_control;
+-
+- if (!vmx_has_waitpkg(vmx))
+- return;
+-
+- host_umwait_control = get_umwait_control_msr();
+-
+- if (vmx->msr_ia32_umwait_control != host_umwait_control)
+- add_atomic_switch_msr(vmx, MSR_IA32_UMWAIT_CONTROL,
+- vmx->msr_ia32_umwait_control,
+- host_umwait_control, false);
+- else
+- clear_atomic_switch_msr(vmx, MSR_IA32_UMWAIT_CONTROL);
+-}
+-
+ static void vmx_update_hv_timer(struct kvm_vcpu *vcpu)
+ {
+ struct vcpu_vmx *vmx = to_vmx(vcpu);
+@@ -6575,9 +6558,7 @@ static void vmx_vcpu_run(struct kvm_vcpu *vcpu)
+
+ pt_guest_enter(vmx);
+
+- if (vcpu_to_pmu(vcpu)->version)
+- atomic_switch_perf_msrs(vmx);
+- atomic_switch_umwait_control_msr(vmx);
++ atomic_switch_perf_msrs(vmx);
+
+ if (enable_preemption_timer)
+ vmx_update_hv_timer(vcpu);
+@@ -7334,11 +7315,11 @@ static void vmx_flush_log_dirty(struct kvm *kvm)
+ kvm_flush_pml_buffers(kvm);
+ }
+
+-static int vmx_write_pml_buffer(struct kvm_vcpu *vcpu)
++static int vmx_write_pml_buffer(struct kvm_vcpu *vcpu, gpa_t gpa)
+ {
+ struct vmcs12 *vmcs12;
+ struct vcpu_vmx *vmx = to_vmx(vcpu);
+- gpa_t gpa, dst;
++ gpa_t dst;
+
+ if (is_guest_mode(vcpu)) {
+ WARN_ON_ONCE(vmx->nested.pml_full);
+@@ -7357,7 +7338,7 @@ static int vmx_write_pml_buffer(struct kvm_vcpu *vcpu)
+ return 1;
+ }
+
+- gpa = vmcs_read64(GUEST_PHYSICAL_ADDRESS) & ~0xFFFull;
++ gpa &= ~0xFFFull;
+ dst = vmcs12->pml_address + sizeof(u64) * vmcs12->guest_pml_index;
+
+ if (kvm_write_guest_page(vcpu->kvm, gpa_to_gfn(dst), &gpa,
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index 97c5a92146f9..5f08eeac16c8 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -2784,7 +2784,7 @@ int kvm_set_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
+ return kvm_mtrr_set_msr(vcpu, msr, data);
+ case MSR_IA32_APICBASE:
+ return kvm_set_apic_base(vcpu, msr_info);
+- case APIC_BASE_MSR ... APIC_BASE_MSR + 0x3ff:
++ case APIC_BASE_MSR ... APIC_BASE_MSR + 0xff:
+ return kvm_x2apic_msr_write(vcpu, msr, data);
+ case MSR_IA32_TSCDEADLINE:
+ kvm_set_lapic_tscdeadline_msr(vcpu, data);
+@@ -3112,7 +3112,7 @@ int kvm_get_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
+ case MSR_IA32_APICBASE:
+ msr_info->data = kvm_get_apic_base(vcpu);
+ break;
+- case APIC_BASE_MSR ... APIC_BASE_MSR + 0x3ff:
++ case APIC_BASE_MSR ... APIC_BASE_MSR + 0xff:
+ return kvm_x2apic_msr_read(vcpu, msr_info->index, &msr_info->data);
+ case MSR_IA32_TSCDEADLINE:
+ msr_info->data = kvm_get_lapic_tscdeadline_msr(vcpu);
+diff --git a/arch/x86/lib/usercopy_64.c b/arch/x86/lib/usercopy_64.c
+index fff28c6f73a2..b0dfac3d3df7 100644
+--- a/arch/x86/lib/usercopy_64.c
++++ b/arch/x86/lib/usercopy_64.c
+@@ -24,6 +24,7 @@ unsigned long __clear_user(void __user *addr, unsigned long size)
+ asm volatile(
+ " testq %[size8],%[size8]\n"
+ " jz 4f\n"
++ " .align 16\n"
+ "0: movq $0,(%[dst])\n"
+ " addq $8,%[dst]\n"
+ " decl %%ecx ; jnz 0b\n"
+diff --git a/arch/x86/power/cpu.c b/arch/x86/power/cpu.c
+index aaff9ed7ff45..b0d3c5ca6d80 100644
+--- a/arch/x86/power/cpu.c
++++ b/arch/x86/power/cpu.c
+@@ -193,6 +193,8 @@ static void fix_processor_context(void)
+ */
+ static void notrace __restore_processor_state(struct saved_context *ctxt)
+ {
++ struct cpuinfo_x86 *c;
++
+ if (ctxt->misc_enable_saved)
+ wrmsrl(MSR_IA32_MISC_ENABLE, ctxt->misc_enable);
+ /*
+@@ -263,6 +265,10 @@ static void notrace __restore_processor_state(struct saved_context *ctxt)
+ mtrr_bp_restore();
+ perf_restore_debug_store();
+ msr_restore_context(ctxt);
++
++ c = &cpu_data(smp_processor_id());
++ if (cpu_has(c, X86_FEATURE_MSR_IA32_FEAT_CTL))
++ init_ia32_feat_ctl(c);
+ }
+
+ /* Needed by apm.c */
+diff --git a/block/bio-integrity.c b/block/bio-integrity.c
+index bf62c25cde8f..ae07dd78e951 100644
+--- a/block/bio-integrity.c
++++ b/block/bio-integrity.c
+@@ -278,7 +278,6 @@ bool bio_integrity_prep(struct bio *bio)
+
+ if (ret == 0) {
+ printk(KERN_ERR "could not attach integrity payload\n");
+- kfree(buf);
+ status = BLK_STS_RESOURCE;
+ goto err_end_io;
+ }
+diff --git a/block/blk-mq.c b/block/blk-mq.c
+index 98a702761e2c..8f580e66691b 100644
+--- a/block/blk-mq.c
++++ b/block/blk-mq.c
+@@ -3328,7 +3328,9 @@ static void __blk_mq_update_nr_hw_queues(struct blk_mq_tag_set *set,
+
+ if (set->nr_maps == 1 && nr_hw_queues > nr_cpu_ids)
+ nr_hw_queues = nr_cpu_ids;
+- if (nr_hw_queues < 1 || nr_hw_queues == set->nr_hw_queues)
++ if (nr_hw_queues < 1)
++ return;
++ if (set->nr_maps == 1 && nr_hw_queues == set->nr_hw_queues)
+ return;
+
+ list_for_each_entry(q, &set->tag_list, tag_set_list)
+diff --git a/drivers/acpi/acpi_configfs.c b/drivers/acpi/acpi_configfs.c
+index ece8c1a921cc..88c8af455ea3 100644
+--- a/drivers/acpi/acpi_configfs.c
++++ b/drivers/acpi/acpi_configfs.c
+@@ -11,6 +11,7 @@
+ #include <linux/module.h>
+ #include <linux/configfs.h>
+ #include <linux/acpi.h>
++#include <linux/security.h>
+
+ #include "acpica/accommon.h"
+ #include "acpica/actables.h"
+@@ -28,7 +29,10 @@ static ssize_t acpi_table_aml_write(struct config_item *cfg,
+ {
+ const struct acpi_table_header *header = data;
+ struct acpi_table *table;
+- int ret;
++ int ret = security_locked_down(LOCKDOWN_ACPI_TABLES);
++
++ if (ret)
++ return ret;
+
+ table = container_of(cfg, struct acpi_table, cfg);
+
+diff --git a/drivers/acpi/sysfs.c b/drivers/acpi/sysfs.c
+index 3a89909b50a6..76c668c05fa0 100644
+--- a/drivers/acpi/sysfs.c
++++ b/drivers/acpi/sysfs.c
+@@ -938,13 +938,13 @@ static void __exit interrupt_stats_exit(void)
+ }
+
+ static ssize_t
+-acpi_show_profile(struct device *dev, struct device_attribute *attr,
++acpi_show_profile(struct kobject *kobj, struct kobj_attribute *attr,
+ char *buf)
+ {
+ return sprintf(buf, "%d\n", acpi_gbl_FADT.preferred_profile);
+ }
+
+-static const struct device_attribute pm_profile_attr =
++static const struct kobj_attribute pm_profile_attr =
+ __ATTR(pm_profile, S_IRUGO, acpi_show_profile, NULL);
+
+ static ssize_t hotplug_enabled_show(struct kobject *kobj,
+diff --git a/drivers/android/binder.c b/drivers/android/binder.c
+index e47c8a4c83db..f50c5f182bb5 100644
+--- a/drivers/android/binder.c
++++ b/drivers/android/binder.c
+@@ -4686,8 +4686,15 @@ static struct binder_thread *binder_get_thread(struct binder_proc *proc)
+
+ static void binder_free_proc(struct binder_proc *proc)
+ {
++ struct binder_device *device;
++
+ BUG_ON(!list_empty(&proc->todo));
+ BUG_ON(!list_empty(&proc->delivered_death));
++ device = container_of(proc->context, struct binder_device, context);
++ if (refcount_dec_and_test(&device->ref)) {
++ kfree(proc->context->name);
++ kfree(device);
++ }
+ binder_alloc_deferred_release(&proc->alloc);
+ put_task_struct(proc->tsk);
+ binder_stats_deleted(BINDER_STAT_PROC);
+@@ -5406,7 +5413,6 @@ static int binder_node_release(struct binder_node *node, int refs)
+ static void binder_deferred_release(struct binder_proc *proc)
+ {
+ struct binder_context *context = proc->context;
+- struct binder_device *device;
+ struct rb_node *n;
+ int threads, nodes, incoming_refs, outgoing_refs, active_transactions;
+
+@@ -5423,12 +5429,6 @@ static void binder_deferred_release(struct binder_proc *proc)
+ context->binder_context_mgr_node = NULL;
+ }
+ mutex_unlock(&context->context_mgr_node_lock);
+- device = container_of(proc->context, struct binder_device, context);
+- if (refcount_dec_and_test(&device->ref)) {
+- kfree(context->name);
+- kfree(device);
+- }
+- proc->context = NULL;
+ binder_inner_proc_lock(proc);
+ /*
+ * Make sure proc stays alive after we
+diff --git a/drivers/ata/libata-scsi.c b/drivers/ata/libata-scsi.c
+index 36e588d88b95..c10deb87015b 100644
+--- a/drivers/ata/libata-scsi.c
++++ b/drivers/ata/libata-scsi.c
+@@ -3692,12 +3692,13 @@ static unsigned int ata_scsi_mode_select_xlat(struct ata_queued_cmd *qc)
+ {
+ struct scsi_cmnd *scmd = qc->scsicmd;
+ const u8 *cdb = scmd->cmnd;
+- const u8 *p;
+ u8 pg, spg;
+ unsigned six_byte, pg_len, hdr_len, bd_len;
+ int len;
+ u16 fp = (u16)-1;
+ u8 bp = 0xff;
++ u8 buffer[64];
++ const u8 *p = buffer;
+
+ VPRINTK("ENTER\n");
+
+@@ -3731,12 +3732,14 @@ static unsigned int ata_scsi_mode_select_xlat(struct ata_queued_cmd *qc)
+ if (!scsi_sg_count(scmd) || scsi_sglist(scmd)->length < len)
+ goto invalid_param_len;
+
+- p = page_address(sg_page(scsi_sglist(scmd)));
+-
+ /* Move past header and block descriptors. */
+ if (len < hdr_len)
+ goto invalid_param_len;
+
++ if (!sg_copy_to_buffer(scsi_sglist(scmd), scsi_sg_count(scmd),
++ buffer, sizeof(buffer)))
++ goto invalid_param_len;
++
+ if (six_byte)
+ bd_len = p[3];
+ else
+diff --git a/drivers/ata/sata_rcar.c b/drivers/ata/sata_rcar.c
+index 980aacdbcf3b..141ac600b64c 100644
+--- a/drivers/ata/sata_rcar.c
++++ b/drivers/ata/sata_rcar.c
+@@ -907,7 +907,7 @@ static int sata_rcar_probe(struct platform_device *pdev)
+ pm_runtime_enable(dev);
+ ret = pm_runtime_get_sync(dev);
+ if (ret < 0)
+- goto err_pm_disable;
++ goto err_pm_put;
+
+ host = ata_host_alloc(dev, 1);
+ if (!host) {
+@@ -937,7 +937,6 @@ static int sata_rcar_probe(struct platform_device *pdev)
+
+ err_pm_put:
+ pm_runtime_put(dev);
+-err_pm_disable:
+ pm_runtime_disable(dev);
+ return ret;
+ }
+@@ -991,8 +990,10 @@ static int sata_rcar_resume(struct device *dev)
+ int ret;
+
+ ret = pm_runtime_get_sync(dev);
+- if (ret < 0)
++ if (ret < 0) {
++ pm_runtime_put(dev);
+ return ret;
++ }
+
+ if (priv->type == RCAR_GEN3_SATA) {
+ sata_rcar_init_module(priv);
+@@ -1017,8 +1018,10 @@ static int sata_rcar_restore(struct device *dev)
+ int ret;
+
+ ret = pm_runtime_get_sync(dev);
+- if (ret < 0)
++ if (ret < 0) {
++ pm_runtime_put(dev);
+ return ret;
++ }
+
+ sata_rcar_setup_port(host);
+
+diff --git a/drivers/base/regmap/regmap.c b/drivers/base/regmap/regmap.c
+index 59f911e57719..508bbd6ea439 100644
+--- a/drivers/base/regmap/regmap.c
++++ b/drivers/base/regmap/regmap.c
+@@ -1356,6 +1356,7 @@ void regmap_exit(struct regmap *map)
+ if (map->hwlock)
+ hwspin_lock_free(map->hwlock);
+ kfree_const(map->name);
++ kfree(map->patch);
+ kfree(map);
+ }
+ EXPORT_SYMBOL_GPL(regmap_exit);
+diff --git a/drivers/block/loop.c b/drivers/block/loop.c
+index da693e6a834e..418bb4621255 100644
+--- a/drivers/block/loop.c
++++ b/drivers/block/loop.c
+@@ -1289,7 +1289,7 @@ loop_set_status(struct loop_device *lo, const struct loop_info64 *info)
+ if (lo->lo_offset != info->lo_offset ||
+ lo->lo_sizelimit != info->lo_sizelimit) {
+ sync_blockdev(lo->lo_device);
+- kill_bdev(lo->lo_device);
++ invalidate_bdev(lo->lo_device);
+ }
+
+ /* I/O need to be drained during transfer transition */
+@@ -1320,7 +1320,7 @@ loop_set_status(struct loop_device *lo, const struct loop_info64 *info)
+
+ if (lo->lo_offset != info->lo_offset ||
+ lo->lo_sizelimit != info->lo_sizelimit) {
+- /* kill_bdev should have truncated all the pages */
++ /* invalidate_bdev should have truncated all the pages */
+ if (lo->lo_device->bd_inode->i_mapping->nrpages) {
+ err = -EAGAIN;
+ pr_warn("%s: loop%d (%s) has still dirty pages (nrpages=%lu)\n",
+@@ -1565,11 +1565,11 @@ static int loop_set_block_size(struct loop_device *lo, unsigned long arg)
+ return 0;
+
+ sync_blockdev(lo->lo_device);
+- kill_bdev(lo->lo_device);
++ invalidate_bdev(lo->lo_device);
+
+ blk_mq_freeze_queue(lo->lo_queue);
+
+- /* kill_bdev should have truncated all the pages */
++ /* invalidate_bdev should have truncated all the pages */
+ if (lo->lo_device->bd_inode->i_mapping->nrpages) {
+ err = -EAGAIN;
+ pr_warn("%s: loop%d (%s) has still dirty pages (nrpages=%lu)\n",
+diff --git a/drivers/bus/ti-sysc.c b/drivers/bus/ti-sysc.c
+index e5f5f48d69d2..db9541f38505 100644
+--- a/drivers/bus/ti-sysc.c
++++ b/drivers/bus/ti-sysc.c
+@@ -221,6 +221,35 @@ static u32 sysc_read_sysstatus(struct sysc *ddata)
+ return sysc_read(ddata, offset);
+ }
+
++/* Poll on reset status */
++static int sysc_wait_softreset(struct sysc *ddata)
++{
++ u32 sysc_mask, syss_done, rstval;
++ int syss_offset, error = 0;
++
++ syss_offset = ddata->offsets[SYSC_SYSSTATUS];
++ sysc_mask = BIT(ddata->cap->regbits->srst_shift);
++
++ if (ddata->cfg.quirks & SYSS_QUIRK_RESETDONE_INVERTED)
++ syss_done = 0;
++ else
++ syss_done = ddata->cfg.syss_mask;
++
++ if (syss_offset >= 0) {
++ error = readx_poll_timeout(sysc_read_sysstatus, ddata, rstval,
++ (rstval & ddata->cfg.syss_mask) ==
++ syss_done,
++ 100, MAX_MODULE_SOFTRESET_WAIT);
++
++ } else if (ddata->cfg.quirks & SYSC_QUIRK_RESET_STATUS) {
++ error = readx_poll_timeout(sysc_read_sysconfig, ddata, rstval,
++ !(rstval & sysc_mask),
++ 100, MAX_MODULE_SOFTRESET_WAIT);
++ }
++
++ return error;
++}
++
+ static int sysc_add_named_clock_from_child(struct sysc *ddata,
+ const char *name,
+ const char *optfck_name)
+@@ -925,18 +954,47 @@ static int sysc_enable_module(struct device *dev)
+ struct sysc *ddata;
+ const struct sysc_regbits *regbits;
+ u32 reg, idlemodes, best_mode;
++ int error;
+
+ ddata = dev_get_drvdata(dev);
++
++ /*
++ * Some modules like DSS reset automatically on idle. Enable optional
++ * reset clocks and wait for OCP softreset to complete.
++ */
++ if (ddata->cfg.quirks & SYSC_QUIRK_OPT_CLKS_IN_RESET) {
++ error = sysc_enable_opt_clocks(ddata);
++ if (error) {
++ dev_err(ddata->dev,
++ "Optional clocks failed for enable: %i\n",
++ error);
++ return error;
++ }
++ }
++ error = sysc_wait_softreset(ddata);
++ if (error)
++ dev_warn(ddata->dev, "OCP softreset timed out\n");
++ if (ddata->cfg.quirks & SYSC_QUIRK_OPT_CLKS_IN_RESET)
++ sysc_disable_opt_clocks(ddata);
++
++ /*
++ * Some subsystem private interconnects, like DSS top level module,
++ * need only the automatic OCP softreset handling with no sysconfig
++ * register bits to configure.
++ */
+ if (ddata->offsets[SYSC_SYSCONFIG] == -ENODEV)
+ return 0;
+
+ regbits = ddata->cap->regbits;
+ reg = sysc_read(ddata, ddata->offsets[SYSC_SYSCONFIG]);
+
+- /* Set CLOCKACTIVITY, we only use it for ick */
++ /*
++ * Set CLOCKACTIVITY, we only use it for ick. And we only configure it
++ * based on the SYSC_QUIRK_USE_CLOCKACT flag, not based on the hardware
++ * capabilities. See the old HWMOD_SET_DEFAULT_CLOCKACT flag.
++ */
+ if (regbits->clkact_shift >= 0 &&
+- (ddata->cfg.quirks & SYSC_QUIRK_USE_CLOCKACT ||
+- ddata->cfg.sysc_val & BIT(regbits->clkact_shift)))
++ (ddata->cfg.quirks & SYSC_QUIRK_USE_CLOCKACT))
+ reg |= SYSC_CLOCACT_ICK << regbits->clkact_shift;
+
+ /* Set SIDLE mode */
+@@ -991,6 +1049,9 @@ set_autoidle:
+ sysc_write_sysconfig(ddata, reg);
+ }
+
++ /* Flush posted write */
++ sysc_read(ddata, ddata->offsets[SYSC_SYSCONFIG]);
++
+ if (ddata->module_enable_quirk)
+ ddata->module_enable_quirk(ddata);
+
+@@ -1071,6 +1132,9 @@ set_sidle:
+ reg |= 1 << regbits->autoidle_shift;
+ sysc_write_sysconfig(ddata, reg);
+
++ /* Flush posted write */
++ sysc_read(ddata, ddata->offsets[SYSC_SYSCONFIG]);
++
+ return 0;
+ }
+
+@@ -1488,7 +1552,7 @@ static u32 sysc_quirk_dispc(struct sysc *ddata, int dispc_offset,
+ bool lcd_en, digit_en, lcd2_en = false, lcd3_en = false;
+ const int lcd_en_mask = BIT(0), digit_en_mask = BIT(1);
+ int manager_count;
+- bool framedonetv_irq;
++ bool framedonetv_irq = true;
+ u32 val, irq_mask = 0;
+
+ switch (sysc_soc->soc) {
+@@ -1505,6 +1569,7 @@ static u32 sysc_quirk_dispc(struct sysc *ddata, int dispc_offset,
+ break;
+ case SOC_AM4:
+ manager_count = 1;
++ framedonetv_irq = false;
+ break;
+ case SOC_UNKNOWN:
+ default:
+@@ -1822,11 +1887,10 @@ static int sysc_legacy_init(struct sysc *ddata)
+ */
+ static int sysc_reset(struct sysc *ddata)
+ {
+- int sysc_offset, syss_offset, sysc_val, rstval, error = 0;
+- u32 sysc_mask, syss_done;
++ int sysc_offset, sysc_val, error;
++ u32 sysc_mask;
+
+ sysc_offset = ddata->offsets[SYSC_SYSCONFIG];
+- syss_offset = ddata->offsets[SYSC_SYSSTATUS];
+
+ if (ddata->legacy_mode ||
+ ddata->cap->regbits->srst_shift < 0 ||
+@@ -1835,11 +1899,6 @@ static int sysc_reset(struct sysc *ddata)
+
+ sysc_mask = BIT(ddata->cap->regbits->srst_shift);
+
+- if (ddata->cfg.quirks & SYSS_QUIRK_RESETDONE_INVERTED)
+- syss_done = 0;
+- else
+- syss_done = ddata->cfg.syss_mask;
+-
+ if (ddata->pre_reset_quirk)
+ ddata->pre_reset_quirk(ddata);
+
+@@ -1856,18 +1915,9 @@ static int sysc_reset(struct sysc *ddata)
+ if (ddata->post_reset_quirk)
+ ddata->post_reset_quirk(ddata);
+
+- /* Poll on reset status */
+- if (syss_offset >= 0) {
+- error = readx_poll_timeout(sysc_read_sysstatus, ddata, rstval,
+- (rstval & ddata->cfg.syss_mask) ==
+- syss_done,
+- 100, MAX_MODULE_SOFTRESET_WAIT);
+-
+- } else if (ddata->cfg.quirks & SYSC_QUIRK_RESET_STATUS) {
+- error = readx_poll_timeout(sysc_read_sysconfig, ddata, rstval,
+- !(rstval & sysc_mask),
+- 100, MAX_MODULE_SOFTRESET_WAIT);
+- }
++ error = sysc_wait_softreset(ddata);
++ if (error)
++ dev_warn(ddata->dev, "OCP softreset timed out\n");
+
+ if (ddata->reset_done_quirk)
+ ddata->reset_done_quirk(ddata);
+diff --git a/drivers/char/hw_random/ks-sa-rng.c b/drivers/char/hw_random/ks-sa-rng.c
+index e2330e757f1f..001617033d6a 100644
+--- a/drivers/char/hw_random/ks-sa-rng.c
++++ b/drivers/char/hw_random/ks-sa-rng.c
+@@ -244,6 +244,7 @@ static int ks_sa_rng_probe(struct platform_device *pdev)
+ ret = pm_runtime_get_sync(dev);
+ if (ret < 0) {
+ dev_err(dev, "Failed to enable SA power-domain\n");
++ pm_runtime_put_noidle(dev);
+ pm_runtime_disable(dev);
+ return ret;
+ }
+diff --git a/drivers/clk/sifive/fu540-prci.c b/drivers/clk/sifive/fu540-prci.c
+index 6282ee2f361c..a8901f90a61a 100644
+--- a/drivers/clk/sifive/fu540-prci.c
++++ b/drivers/clk/sifive/fu540-prci.c
+@@ -586,7 +586,10 @@ static int sifive_fu540_prci_probe(struct platform_device *pdev)
+ struct __prci_data *pd;
+ int r;
+
+- pd = devm_kzalloc(dev, sizeof(*pd), GFP_KERNEL);
++ pd = devm_kzalloc(dev,
++ struct_size(pd, hw_clks.hws,
++ ARRAY_SIZE(__prci_init_clocks)),
++ GFP_KERNEL);
+ if (!pd)
+ return -ENOMEM;
+
+diff --git a/drivers/edac/amd64_edac.c b/drivers/edac/amd64_edac.c
+index 4e9994de0b90..0d89c3e473bd 100644
+--- a/drivers/edac/amd64_edac.c
++++ b/drivers/edac/amd64_edac.c
+@@ -272,6 +272,8 @@ static int get_scrub_rate(struct mem_ctl_info *mci)
+
+ if (pvt->model == 0x60)
+ amd64_read_pci_cfg(pvt->F2, F15H_M60H_SCRCTRL, &scrubval);
++ else
++ amd64_read_pci_cfg(pvt->F3, SCRCTRL, &scrubval);
+ } else {
+ amd64_read_pci_cfg(pvt->F3, SCRCTRL, &scrubval);
+ }
+diff --git a/drivers/firmware/efi/esrt.c b/drivers/firmware/efi/esrt.c
+index e3d692696583..d5915272141f 100644
+--- a/drivers/firmware/efi/esrt.c
++++ b/drivers/firmware/efi/esrt.c
+@@ -181,7 +181,7 @@ static int esre_create_sysfs_entry(void *esre, int entry_num)
+ rc = kobject_init_and_add(&entry->kobj, &esre1_ktype, NULL,
+ "entry%d", entry_num);
+ if (rc) {
+- kfree(entry);
++ kobject_put(&entry->kobj);
+ return rc;
+ }
+ }
+diff --git a/drivers/firmware/efi/libstub/file.c b/drivers/firmware/efi/libstub/file.c
+index ea66b1f16a79..f1c4faf58c76 100644
+--- a/drivers/firmware/efi/libstub/file.c
++++ b/drivers/firmware/efi/libstub/file.c
+@@ -104,12 +104,20 @@ static int find_file_option(const efi_char16_t *cmdline, int cmdline_len,
+ if (!found)
+ return 0;
+
++ /* Skip any leading slashes */
++ while (cmdline[i] == L'/' || cmdline[i] == L'\\')
++ i++;
++
+ while (--result_len > 0 && i < cmdline_len) {
+- if (cmdline[i] == L'\0' ||
+- cmdline[i] == L'\n' ||
+- cmdline[i] == L' ')
++ efi_char16_t c = cmdline[i++];
++
++ if (c == L'\0' || c == L'\n' || c == L' ')
+ break;
+- *result++ = cmdline[i++];
++ else if (c == L'/')
++ /* Replace UNIX dir separators with EFI standard ones */
++ *result++ = L'\\';
++ else
++ *result++ = c;
+ }
+ *result = L'\0';
+ return i;
+diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c
+index d2840c2f6286..1dc57079933c 100644
+--- a/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c
+@@ -1261,8 +1261,12 @@ static int sdma_v5_0_sw_fini(void *handle)
+ struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+ int i;
+
+- for (i = 0; i < adev->sdma.num_instances; i++)
++ for (i = 0; i < adev->sdma.num_instances; i++) {
++ if (adev->sdma.instance[i].fw != NULL)
++ release_firmware(adev->sdma.instance[i].fw);
++
+ amdgpu_ring_fini(&adev->sdma.instance[i].ring);
++ }
+
+ return 0;
+ }
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_process.c b/drivers/gpu/drm/amd/amdkfd/kfd_process.c
+index fe0cd49d4ea7..d8c74aa4e565 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_process.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_process.c
+@@ -396,6 +396,7 @@ struct kfd_process *kfd_create_process(struct file *filep)
+ (int)process->lead_thread->pid);
+ if (ret) {
+ pr_warn("Creating procfs pid directory failed");
++ kobject_put(process->kobj);
+ goto out;
+ }
+
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c
+index 0461fecd68db..11491ae1effc 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c
+@@ -1017,7 +1017,6 @@ static const struct {
+ {"link_settings", &dp_link_settings_debugfs_fops},
+ {"phy_settings", &dp_phy_settings_debugfs_fop},
+ {"test_pattern", &dp_phy_test_pattern_fops},
+- {"output_bpc", &output_bpc_fops},
+ {"vrr_range", &vrr_range_fops},
+ {"sdp_message", &sdp_message_fops},
+ {"aux_dpcd_address", &dp_dpcd_address_debugfs_fops},
+@@ -1090,6 +1089,9 @@ void connector_debugfs_init(struct amdgpu_dm_connector *connector)
+ debugfs_create_file_unsafe("force_yuv420_output", 0644, dir, connector,
+ &force_yuv420_output_fops);
+
++ debugfs_create_file("output_bpc", 0644, dir, connector,
++ &output_bpc_fops);
++
+ connector->debugfs_dpcd_address = 0;
+ connector->debugfs_dpcd_size = 0;
+
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_hdcp.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_hdcp.c
+index dcf84a61de37..949d10ef8304 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_hdcp.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_hdcp.c
+@@ -510,8 +510,10 @@ static ssize_t srm_data_read(struct file *filp, struct kobject *kobj, struct bin
+
+ srm = psp_get_srm(work->hdcp.config.psp.handle, &srm_version, &srm_size);
+
+- if (!srm)
+- return -EINVAL;
++ if (!srm) {
++ ret = -EINVAL;
++ goto ret;
++ }
+
+ if (pos >= srm_size)
+ ret = 0;
+diff --git a/drivers/gpu/drm/amd/display/modules/color/color_gamma.c b/drivers/gpu/drm/amd/display/modules/color/color_gamma.c
+index e89694eb90b4..700f0039df7b 100644
+--- a/drivers/gpu/drm/amd/display/modules/color/color_gamma.c
++++ b/drivers/gpu/drm/amd/display/modules/color/color_gamma.c
+@@ -1777,7 +1777,7 @@ bool calculate_user_regamma_ramp(struct dc_transfer_func *output_tf,
+
+ kfree(rgb_regamma);
+ rgb_regamma_alloc_fail:
+- kvfree(rgb_user);
++ kfree(rgb_user);
+ rgb_user_alloc_fail:
+ return ret;
+ }
+diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c
+index a9771de4d17e..c7be39a00d43 100644
+--- a/drivers/gpu/drm/drm_fb_helper.c
++++ b/drivers/gpu/drm/drm_fb_helper.c
+@@ -227,18 +227,9 @@ int drm_fb_helper_debug_leave(struct fb_info *info)
+ }
+ EXPORT_SYMBOL(drm_fb_helper_debug_leave);
+
+-/**
+- * drm_fb_helper_restore_fbdev_mode_unlocked - restore fbdev configuration
+- * @fb_helper: driver-allocated fbdev helper, can be NULL
+- *
+- * This should be called from driver's drm &drm_driver.lastclose callback
+- * when implementing an fbcon on top of kms using this helper. This ensures that
+- * the user isn't greeted with a black screen when e.g. X dies.
+- *
+- * RETURNS:
+- * Zero if everything went ok, negative error code otherwise.
+- */
+-int drm_fb_helper_restore_fbdev_mode_unlocked(struct drm_fb_helper *fb_helper)
++static int
++__drm_fb_helper_restore_fbdev_mode_unlocked(struct drm_fb_helper *fb_helper,
++ bool force)
+ {
+ bool do_delayed;
+ int ret;
+@@ -250,7 +241,16 @@ int drm_fb_helper_restore_fbdev_mode_unlocked(struct drm_fb_helper *fb_helper)
+ return 0;
+
+ mutex_lock(&fb_helper->lock);
+- ret = drm_client_modeset_commit(&fb_helper->client);
++ if (force) {
++ /*
++ * Yes this is the _locked version which expects the master lock
++ * to be held. But for forced restores we're intentionally
++ * racing here, see drm_fb_helper_set_par().
++ */
++ ret = drm_client_modeset_commit_locked(&fb_helper->client);
++ } else {
++ ret = drm_client_modeset_commit(&fb_helper->client);
++ }
+
+ do_delayed = fb_helper->delayed_hotplug;
+ if (do_delayed)
+@@ -262,6 +262,22 @@ int drm_fb_helper_restore_fbdev_mode_unlocked(struct drm_fb_helper *fb_helper)
+
+ return ret;
+ }
++
++/**
++ * drm_fb_helper_restore_fbdev_mode_unlocked - restore fbdev configuration
++ * @fb_helper: driver-allocated fbdev helper, can be NULL
++ *
++ * This should be called from driver's drm &drm_driver.lastclose callback
++ * when implementing an fbcon on top of kms using this helper. This ensures that
++ * the user isn't greeted with a black screen when e.g. X dies.
++ *
++ * RETURNS:
++ * Zero if everything went ok, negative error code otherwise.
++ */
++int drm_fb_helper_restore_fbdev_mode_unlocked(struct drm_fb_helper *fb_helper)
++{
++ return __drm_fb_helper_restore_fbdev_mode_unlocked(fb_helper, false);
++}
+ EXPORT_SYMBOL(drm_fb_helper_restore_fbdev_mode_unlocked);
+
+ #ifdef CONFIG_MAGIC_SYSRQ
+@@ -1310,6 +1326,7 @@ int drm_fb_helper_set_par(struct fb_info *info)
+ {
+ struct drm_fb_helper *fb_helper = info->par;
+ struct fb_var_screeninfo *var = &info->var;
++ bool force;
+
+ if (oops_in_progress)
+ return -EBUSY;
+@@ -1319,7 +1336,25 @@ int drm_fb_helper_set_par(struct fb_info *info)
+ return -EINVAL;
+ }
+
+- drm_fb_helper_restore_fbdev_mode_unlocked(fb_helper);
++ /*
++ * Normally we want to make sure that a kms master takes precedence over
++ * fbdev, to avoid fbdev flickering and occasionally stealing the
++ * display status. But Xorg first sets the vt back to text mode using
++ * the KDSET IOCTL with KD_TEXT, and only after that drops the master
++ * status when exiting.
++ *
++ * In the past this was caught by drm_fb_helper_lastclose(), but on
++ * modern systems where logind always keeps a drm fd open to orchestrate
++ * the vt switching, this doesn't work.
++ *
++ * To not break the userspace ABI we have this special case here, which
++ * is only used for the above case. Everything else uses the normal
++ * commit function, which ensures that we never steal the display from
++ * an active drm master.
++ */
++ force = var->activate & FB_ACTIVATE_KD_TEXT;
++
++ __drm_fb_helper_restore_fbdev_mode_unlocked(fb_helper, force);
+
+ return 0;
+ }
+diff --git a/drivers/gpu/drm/panel/panel-simple.c b/drivers/gpu/drm/panel/panel-simple.c
+index 3ad828eaefe1..db91b3c031a1 100644
+--- a/drivers/gpu/drm/panel/panel-simple.c
++++ b/drivers/gpu/drm/panel/panel-simple.c
+@@ -2297,6 +2297,7 @@ static const struct panel_desc logicpd_type_28 = {
+ .bus_format = MEDIA_BUS_FMT_RGB888_1X24,
+ .bus_flags = DRM_BUS_FLAG_DE_HIGH | DRM_BUS_FLAG_PIXDATA_DRIVE_POSEDGE |
+ DRM_BUS_FLAG_SYNC_DRIVE_NEGEDGE,
++ .connector_type = DRM_MODE_CONNECTOR_DPI,
+ };
+
+ static const struct panel_desc mitsubishi_aa070mc01 = {
+@@ -2465,6 +2466,7 @@ static const struct panel_desc newhaven_nhd_43_480272ef_atxl = {
+ .bus_format = MEDIA_BUS_FMT_RGB888_1X24,
+ .bus_flags = DRM_BUS_FLAG_DE_HIGH | DRM_BUS_FLAG_PIXDATA_DRIVE_POSEDGE |
+ DRM_BUS_FLAG_SYNC_DRIVE_POSEDGE,
++ .connector_type = DRM_MODE_CONNECTOR_DPI,
+ };
+
+ static const struct display_timing nlt_nl192108ac18_02d_timing = {
+diff --git a/drivers/gpu/drm/radeon/ni_dpm.c b/drivers/gpu/drm/radeon/ni_dpm.c
+index b57c37ddd164..c7fbb7932f37 100644
+--- a/drivers/gpu/drm/radeon/ni_dpm.c
++++ b/drivers/gpu/drm/radeon/ni_dpm.c
+@@ -2127,7 +2127,7 @@ static int ni_init_smc_spll_table(struct radeon_device *rdev)
+ if (clk_s & ~(SMC_NISLANDS_SPLL_DIV_TABLE_CLKS_MASK >> SMC_NISLANDS_SPLL_DIV_TABLE_CLKS_SHIFT))
+ ret = -EINVAL;
+
+- if (clk_s & ~(SMC_NISLANDS_SPLL_DIV_TABLE_CLKS_MASK >> SMC_NISLANDS_SPLL_DIV_TABLE_CLKS_SHIFT))
++ if (fb_div & ~(SMC_NISLANDS_SPLL_DIV_TABLE_FBDIV_MASK >> SMC_NISLANDS_SPLL_DIV_TABLE_FBDIV_SHIFT))
+ ret = -EINVAL;
+
+ if (clk_v & ~(SMC_NISLANDS_SPLL_DIV_TABLE_CLKV_MASK >> SMC_NISLANDS_SPLL_DIV_TABLE_CLKV_SHIFT))
+diff --git a/drivers/gpu/drm/rcar-du/Kconfig b/drivers/gpu/drm/rcar-du/Kconfig
+index 0919f1f159a4..f65d1489dc50 100644
+--- a/drivers/gpu/drm/rcar-du/Kconfig
++++ b/drivers/gpu/drm/rcar-du/Kconfig
+@@ -31,6 +31,7 @@ config DRM_RCAR_DW_HDMI
+ config DRM_RCAR_LVDS
+ tristate "R-Car DU LVDS Encoder Support"
+ depends on DRM && DRM_BRIDGE && OF
++ select DRM_KMS_HELPER
+ select DRM_PANEL
+ select OF_FLATTREE
+ select OF_OVERLAY
+diff --git a/drivers/i2c/busses/i2c-fsi.c b/drivers/i2c/busses/i2c-fsi.c
+index e0c256922d4f..977d6f524649 100644
+--- a/drivers/i2c/busses/i2c-fsi.c
++++ b/drivers/i2c/busses/i2c-fsi.c
+@@ -98,7 +98,7 @@
+ #define I2C_STAT_DAT_REQ BIT(25)
+ #define I2C_STAT_CMD_COMP BIT(24)
+ #define I2C_STAT_STOP_ERR BIT(23)
+-#define I2C_STAT_MAX_PORT GENMASK(19, 16)
++#define I2C_STAT_MAX_PORT GENMASK(22, 16)
+ #define I2C_STAT_ANY_INT BIT(15)
+ #define I2C_STAT_SCL_IN BIT(11)
+ #define I2C_STAT_SDA_IN BIT(10)
+diff --git a/drivers/i2c/busses/i2c-tegra.c b/drivers/i2c/busses/i2c-tegra.c
+index 4c4d17ddc96b..7c88611c732c 100644
+--- a/drivers/i2c/busses/i2c-tegra.c
++++ b/drivers/i2c/busses/i2c-tegra.c
+@@ -1769,14 +1769,9 @@ static int tegra_i2c_remove(struct platform_device *pdev)
+ static int __maybe_unused tegra_i2c_suspend(struct device *dev)
+ {
+ struct tegra_i2c_dev *i2c_dev = dev_get_drvdata(dev);
+- int err;
+
+ i2c_mark_adapter_suspended(&i2c_dev->adapter);
+
+- err = pm_runtime_force_suspend(dev);
+- if (err < 0)
+- return err;
+-
+ return 0;
+ }
+
+@@ -1797,10 +1792,6 @@ static int __maybe_unused tegra_i2c_resume(struct device *dev)
+ if (err)
+ return err;
+
+- err = pm_runtime_force_resume(dev);
+- if (err < 0)
+- return err;
+-
+ i2c_mark_adapter_resumed(&i2c_dev->adapter);
+
+ return 0;
+diff --git a/drivers/i2c/i2c-core-smbus.c b/drivers/i2c/i2c-core-smbus.c
+index b34d2ff06931..bbb70a8a411e 100644
+--- a/drivers/i2c/i2c-core-smbus.c
++++ b/drivers/i2c/i2c-core-smbus.c
+@@ -495,6 +495,13 @@ static s32 i2c_smbus_xfer_emulated(struct i2c_adapter *adapter, u16 addr,
+ break;
+ case I2C_SMBUS_BLOCK_DATA:
+ case I2C_SMBUS_BLOCK_PROC_CALL:
++ if (msg[1].buf[0] > I2C_SMBUS_BLOCK_MAX) {
++ dev_err(&adapter->dev,
++ "Invalid block size returned: %d\n",
++ msg[1].buf[0]);
++ status = -EPROTO;
++ goto cleanup;
++ }
+ for (i = 0; i < msg[1].buf[0] + 1; i++)
+ data->block[i] = msg[1].buf[i];
+ break;
+diff --git a/drivers/infiniband/core/cma.c b/drivers/infiniband/core/cma.c
+index 26e6f7df247b..12ada58c96a9 100644
+--- a/drivers/infiniband/core/cma.c
++++ b/drivers/infiniband/core/cma.c
+@@ -1619,6 +1619,8 @@ static struct rdma_id_private *cma_find_listener(
+ {
+ struct rdma_id_private *id_priv, *id_priv_dev;
+
++ lockdep_assert_held(&lock);
++
+ if (!bind_list)
+ return ERR_PTR(-EINVAL);
+
+@@ -1665,6 +1667,7 @@ cma_ib_id_from_event(struct ib_cm_id *cm_id,
+ }
+ }
+
++ mutex_lock(&lock);
+ /*
+ * Net namespace might be getting deleted while route lookup,
+ * cm_id lookup is in progress. Therefore, perform netdevice
+@@ -1706,6 +1709,7 @@ cma_ib_id_from_event(struct ib_cm_id *cm_id,
+ id_priv = cma_find_listener(bind_list, cm_id, ib_event, req, *net_dev);
+ err:
+ rcu_read_unlock();
++ mutex_unlock(&lock);
+ if (IS_ERR(id_priv) && *net_dev) {
+ dev_put(*net_dev);
+ *net_dev = NULL;
+@@ -2481,6 +2485,8 @@ static void cma_listen_on_dev(struct rdma_id_private *id_priv,
+ struct net *net = id_priv->id.route.addr.dev_addr.net;
+ int ret;
+
++ lockdep_assert_held(&lock);
++
+ if (cma_family(id_priv) == AF_IB && !rdma_cap_ib_cm(cma_dev->device, 1))
+ return;
+
+@@ -3308,6 +3314,8 @@ static void cma_bind_port(struct rdma_bind_list *bind_list,
+ u64 sid, mask;
+ __be16 port;
+
++ lockdep_assert_held(&lock);
++
+ addr = cma_src_addr(id_priv);
+ port = htons(bind_list->port);
+
+@@ -3336,6 +3344,8 @@ static int cma_alloc_port(enum rdma_ucm_port_space ps,
+ struct rdma_bind_list *bind_list;
+ int ret;
+
++ lockdep_assert_held(&lock);
++
+ bind_list = kzalloc(sizeof *bind_list, GFP_KERNEL);
+ if (!bind_list)
+ return -ENOMEM;
+@@ -3362,6 +3372,8 @@ static int cma_port_is_unique(struct rdma_bind_list *bind_list,
+ struct sockaddr *saddr = cma_src_addr(id_priv);
+ __be16 dport = cma_port(daddr);
+
++ lockdep_assert_held(&lock);
++
+ hlist_for_each_entry(cur_id, &bind_list->owners, node) {
+ struct sockaddr *cur_daddr = cma_dst_addr(cur_id);
+ struct sockaddr *cur_saddr = cma_src_addr(cur_id);
+@@ -3401,6 +3413,8 @@ static int cma_alloc_any_port(enum rdma_ucm_port_space ps,
+ unsigned int rover;
+ struct net *net = id_priv->id.route.addr.dev_addr.net;
+
++ lockdep_assert_held(&lock);
++
+ inet_get_local_port_range(net, &low, &high);
+ remaining = (high - low) + 1;
+ rover = prandom_u32() % remaining + low;
+@@ -3448,6 +3462,8 @@ static int cma_check_port(struct rdma_bind_list *bind_list,
+ struct rdma_id_private *cur_id;
+ struct sockaddr *addr, *cur_addr;
+
++ lockdep_assert_held(&lock);
++
+ addr = cma_src_addr(id_priv);
+ hlist_for_each_entry(cur_id, &bind_list->owners, node) {
+ if (id_priv == cur_id)
+@@ -3478,6 +3494,8 @@ static int cma_use_port(enum rdma_ucm_port_space ps,
+ unsigned short snum;
+ int ret;
+
++ lockdep_assert_held(&lock);
++
+ snum = ntohs(cma_port(cma_src_addr(id_priv)));
+ if (snum < PROT_SOCK && !capable(CAP_NET_BIND_SERVICE))
+ return -EACCES;
+diff --git a/drivers/infiniband/core/mad.c b/drivers/infiniband/core/mad.c
+index c54db13fa9b0..049c9cdc10de 100644
+--- a/drivers/infiniband/core/mad.c
++++ b/drivers/infiniband/core/mad.c
+@@ -639,10 +639,10 @@ static void unregister_mad_agent(struct ib_mad_agent_private *mad_agent_priv)
+ xa_erase(&ib_mad_clients, mad_agent_priv->agent.hi_tid);
+
+ flush_workqueue(port_priv->wq);
+- ib_cancel_rmpp_recvs(mad_agent_priv);
+
+ deref_mad_agent(mad_agent_priv);
+ wait_for_completion(&mad_agent_priv->comp);
++ ib_cancel_rmpp_recvs(mad_agent_priv);
+
+ ib_mad_agent_security_cleanup(&mad_agent_priv->agent);
+
+@@ -2941,6 +2941,7 @@ static int ib_mad_post_receive_mads(struct ib_mad_qp_info *qp_info,
+ DMA_FROM_DEVICE);
+ if (unlikely(ib_dma_mapping_error(qp_info->port_priv->device,
+ sg_list.addr))) {
++ kfree(mad_priv);
+ ret = -ENOMEM;
+ break;
+ }
+diff --git a/drivers/infiniband/core/rdma_core.c b/drivers/infiniband/core/rdma_core.c
+index e0a5e897e4b1..75bcbc625616 100644
+--- a/drivers/infiniband/core/rdma_core.c
++++ b/drivers/infiniband/core/rdma_core.c
+@@ -459,40 +459,46 @@ static struct ib_uobject *
+ alloc_begin_fd_uobject(const struct uverbs_api_object *obj,
+ struct uverbs_attr_bundle *attrs)
+ {
+- const struct uverbs_obj_fd_type *fd_type =
+- container_of(obj->type_attrs, struct uverbs_obj_fd_type, type);
++ const struct uverbs_obj_fd_type *fd_type;
+ int new_fd;
+- struct ib_uobject *uobj;
++ struct ib_uobject *uobj, *ret;
+ struct file *filp;
+
++ uobj = alloc_uobj(attrs, obj);
++ if (IS_ERR(uobj))
++ return uobj;
++
++ fd_type =
++ container_of(obj->type_attrs, struct uverbs_obj_fd_type, type);
+ if (WARN_ON(fd_type->fops->release != &uverbs_uobject_fd_release &&
+- fd_type->fops->release != &uverbs_async_event_release))
+- return ERR_PTR(-EINVAL);
++ fd_type->fops->release != &uverbs_async_event_release)) {
++ ret = ERR_PTR(-EINVAL);
++ goto err_fd;
++ }
+
+ new_fd = get_unused_fd_flags(O_CLOEXEC);
+- if (new_fd < 0)
+- return ERR_PTR(new_fd);
+-
+- uobj = alloc_uobj(attrs, obj);
+- if (IS_ERR(uobj))
++ if (new_fd < 0) {
++ ret = ERR_PTR(new_fd);
+ goto err_fd;
++ }
+
+ /* Note that uverbs_uobject_fd_release() is called during abort */
+ filp = anon_inode_getfile(fd_type->name, fd_type->fops, NULL,
+ fd_type->flags);
+ if (IS_ERR(filp)) {
+- uverbs_uobject_put(uobj);
+- uobj = ERR_CAST(filp);
+- goto err_fd;
++ ret = ERR_CAST(filp);
++ goto err_getfile;
+ }
+ uobj->object = filp;
+
+ uobj->id = new_fd;
+ return uobj;
+
+-err_fd:
++err_getfile:
+ put_unused_fd(new_fd);
+- return uobj;
++err_fd:
++ uverbs_uobject_put(uobj);
++ return ret;
+ }
+
+ struct ib_uobject *rdma_alloc_begin_uobject(const struct uverbs_api_object *obj,
+diff --git a/drivers/infiniband/hw/efa/efa_verbs.c b/drivers/infiniband/hw/efa/efa_verbs.c
+index 5c57098a4aee..3420c7742486 100644
+--- a/drivers/infiniband/hw/efa/efa_verbs.c
++++ b/drivers/infiniband/hw/efa/efa_verbs.c
+@@ -209,6 +209,7 @@ int efa_query_device(struct ib_device *ibdev,
+ props->max_send_sge = dev_attr->max_sq_sge;
+ props->max_recv_sge = dev_attr->max_rq_sge;
+ props->max_sge_rd = dev_attr->max_wr_rdma_sge;
++ props->max_pkeys = 1;
+
+ if (udata && udata->outlen) {
+ resp.max_sq_sge = dev_attr->max_sq_sge;
+diff --git a/drivers/infiniband/hw/hfi1/debugfs.c b/drivers/infiniband/hw/hfi1/debugfs.c
+index 4633a0ce1a8c..2ced236e1553 100644
+--- a/drivers/infiniband/hw/hfi1/debugfs.c
++++ b/drivers/infiniband/hw/hfi1/debugfs.c
+@@ -985,15 +985,10 @@ static ssize_t qsfp2_debugfs_read(struct file *file, char __user *buf,
+ static int __i2c_debugfs_open(struct inode *in, struct file *fp, u32 target)
+ {
+ struct hfi1_pportdata *ppd;
+- int ret;
+
+ ppd = private2ppd(fp);
+
+- ret = acquire_chip_resource(ppd->dd, i2c_target(target), 0);
+- if (ret) /* failed - release the module */
+- module_put(THIS_MODULE);
+-
+- return ret;
++ return acquire_chip_resource(ppd->dd, i2c_target(target), 0);
+ }
+
+ static int i2c1_debugfs_open(struct inode *in, struct file *fp)
+@@ -1013,7 +1008,6 @@ static int __i2c_debugfs_release(struct inode *in, struct file *fp, u32 target)
+ ppd = private2ppd(fp);
+
+ release_chip_resource(ppd->dd, i2c_target(target));
+- module_put(THIS_MODULE);
+
+ return 0;
+ }
+@@ -1031,18 +1025,10 @@ static int i2c2_debugfs_release(struct inode *in, struct file *fp)
+ static int __qsfp_debugfs_open(struct inode *in, struct file *fp, u32 target)
+ {
+ struct hfi1_pportdata *ppd;
+- int ret;
+-
+- if (!try_module_get(THIS_MODULE))
+- return -ENODEV;
+
+ ppd = private2ppd(fp);
+
+- ret = acquire_chip_resource(ppd->dd, i2c_target(target), 0);
+- if (ret) /* failed - release the module */
+- module_put(THIS_MODULE);
+-
+- return ret;
++ return acquire_chip_resource(ppd->dd, i2c_target(target), 0);
+ }
+
+ static int qsfp1_debugfs_open(struct inode *in, struct file *fp)
+@@ -1062,7 +1048,6 @@ static int __qsfp_debugfs_release(struct inode *in, struct file *fp, u32 target)
+ ppd = private2ppd(fp);
+
+ release_chip_resource(ppd->dd, i2c_target(target));
+- module_put(THIS_MODULE);
+
+ return 0;
+ }
+diff --git a/drivers/infiniband/hw/qedr/qedr_iw_cm.c b/drivers/infiniband/hw/qedr/qedr_iw_cm.c
+index 792eecd206b6..97fc7dd353b0 100644
+--- a/drivers/infiniband/hw/qedr/qedr_iw_cm.c
++++ b/drivers/infiniband/hw/qedr/qedr_iw_cm.c
+@@ -150,8 +150,17 @@ qedr_iw_issue_event(void *context,
+ if (params->cm_info) {
+ event.ird = params->cm_info->ird;
+ event.ord = params->cm_info->ord;
+- event.private_data_len = params->cm_info->private_data_len;
+- event.private_data = (void *)params->cm_info->private_data;
++ /* Only connect_request and reply have valid private data
++ * the rest of the events this may be left overs from
++ * connection establishment. CONNECT_REQUEST is issued via
++ * qedr_iw_mpa_request
++ */
++ if (event_type == IW_CM_EVENT_CONNECT_REPLY) {
++ event.private_data_len =
++ params->cm_info->private_data_len;
++ event.private_data =
++ (void *)params->cm_info->private_data;
++ }
+ }
+
+ if (ep->cm_id)
+diff --git a/drivers/infiniband/sw/rdmavt/qp.c b/drivers/infiniband/sw/rdmavt/qp.c
+index 500a7ee04c44..ca29954a54ac 100644
+--- a/drivers/infiniband/sw/rdmavt/qp.c
++++ b/drivers/infiniband/sw/rdmavt/qp.c
+@@ -1196,7 +1196,7 @@ struct ib_qp *rvt_create_qp(struct ib_pd *ibpd,
+ err = alloc_ud_wq_attr(qp, rdi->dparms.node);
+ if (err) {
+ ret = (ERR_PTR(err));
+- goto bail_driver_priv;
++ goto bail_rq_rvt;
+ }
+
+ err = alloc_qpn(rdi, &rdi->qp_dev->qpn_table,
+@@ -1300,9 +1300,11 @@ bail_qpn:
+ rvt_free_qpn(&rdi->qp_dev->qpn_table, qp->ibqp.qp_num);
+
+ bail_rq_wq:
+- rvt_free_rq(&qp->r_rq);
+ free_ud_wq_attr(qp);
+
++bail_rq_rvt:
++ rvt_free_rq(&qp->r_rq);
++
+ bail_driver_priv:
+ rdi->driver_f.qp_priv_free(rdi, qp);
+
+diff --git a/drivers/infiniband/sw/siw/siw_qp_rx.c b/drivers/infiniband/sw/siw/siw_qp_rx.c
+index 650520244ed0..7271d705f4b0 100644
+--- a/drivers/infiniband/sw/siw/siw_qp_rx.c
++++ b/drivers/infiniband/sw/siw/siw_qp_rx.c
+@@ -139,7 +139,8 @@ static int siw_rx_pbl(struct siw_rx_stream *srx, int *pbl_idx,
+ break;
+
+ bytes = min(bytes, len);
+- if (siw_rx_kva(srx, (void *)buf_addr, bytes) == bytes) {
++ if (siw_rx_kva(srx, (void *)(uintptr_t)buf_addr, bytes) ==
++ bytes) {
+ copied += bytes;
+ offset += bytes;
+ len -= bytes;
+diff --git a/drivers/iommu/dmar.c b/drivers/iommu/dmar.c
+index f77dae7ba7d4..7df5621bba8d 100644
+--- a/drivers/iommu/dmar.c
++++ b/drivers/iommu/dmar.c
+@@ -898,7 +898,8 @@ int __init detect_intel_iommu(void)
+ if (!ret)
+ ret = dmar_walk_dmar_table((struct acpi_table_dmar *)dmar_tbl,
+ &validate_drhd_cb);
+- if (!ret && !no_iommu && !iommu_detected && !dmar_disabled) {
++ if (!ret && !no_iommu && !iommu_detected &&
++ (!dmar_disabled || dmar_platform_optin())) {
+ iommu_detected = 1;
+ /* Make sure ACS will be enabled */
+ pci_request_acs();
+diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
+index fde7aba49b74..34b2ed91cf4d 100644
+--- a/drivers/iommu/intel-iommu.c
++++ b/drivers/iommu/intel-iommu.c
+@@ -634,6 +634,12 @@ struct intel_iommu *domain_get_iommu(struct dmar_domain *domain)
+ return g_iommus[iommu_id];
+ }
+
++static inline bool iommu_paging_structure_coherency(struct intel_iommu *iommu)
++{
++ return sm_supported(iommu) ?
++ ecap_smpwc(iommu->ecap) : ecap_coherent(iommu->ecap);
++}
++
+ static void domain_update_iommu_coherency(struct dmar_domain *domain)
+ {
+ struct dmar_drhd_unit *drhd;
+@@ -645,7 +651,7 @@ static void domain_update_iommu_coherency(struct dmar_domain *domain)
+
+ for_each_domain_iommu(i, domain) {
+ found = true;
+- if (!ecap_coherent(g_iommus[i]->ecap)) {
++ if (!iommu_paging_structure_coherency(g_iommus[i])) {
+ domain->iommu_coherency = 0;
+ break;
+ }
+@@ -656,7 +662,7 @@ static void domain_update_iommu_coherency(struct dmar_domain *domain)
+ /* No hardware attached; use lowest common denominator */
+ rcu_read_lock();
+ for_each_active_iommu(iommu, drhd) {
+- if (!ecap_coherent(iommu->ecap)) {
++ if (!iommu_paging_structure_coherency(iommu)) {
+ domain->iommu_coherency = 0;
+ break;
+ }
+@@ -943,7 +949,7 @@ static struct dma_pte *pfn_to_dma_pte(struct dmar_domain *domain,
+ domain_flush_cache(domain, tmp_page, VTD_PAGE_SIZE);
+ pteval = ((uint64_t)virt_to_dma_pfn(tmp_page) << VTD_PAGE_SHIFT) | DMA_PTE_READ | DMA_PTE_WRITE;
+ if (domain_use_first_level(domain))
+- pteval |= DMA_FL_PTE_XD;
++ pteval |= DMA_FL_PTE_XD | DMA_FL_PTE_US;
+ if (cmpxchg64(&pte->val, 0ULL, pteval))
+ /* Someone else set it while we were thinking; use theirs. */
+ free_pgtable_page(tmp_page);
+@@ -2034,7 +2040,6 @@ static inline void
+ context_set_sm_rid2pasid(struct context_entry *context, unsigned long pasid)
+ {
+ context->hi |= pasid & ((1 << 20) - 1);
+- context->hi |= (1 << 20);
+ }
+
+ /*
+@@ -2178,7 +2183,8 @@ static int domain_context_mapping_one(struct dmar_domain *domain,
+
+ context_set_fault_enable(context);
+ context_set_present(context);
+- domain_flush_cache(domain, context, sizeof(*context));
++ if (!ecap_coherent(iommu->ecap))
++ clflush_cache_range(context, sizeof(*context));
+
+ /*
+ * It's a non-present to present mapping. If hardware doesn't cache
+@@ -2326,7 +2332,7 @@ static int __domain_mapping(struct dmar_domain *domain, unsigned long iov_pfn,
+
+ attr = prot & (DMA_PTE_READ | DMA_PTE_WRITE | DMA_PTE_SNP);
+ if (domain_use_first_level(domain))
+- attr |= DMA_FL_PTE_PRESENT | DMA_FL_PTE_XD;
++ attr |= DMA_FL_PTE_PRESENT | DMA_FL_PTE_XD | DMA_FL_PTE_US;
+
+ if (!sg) {
+ sg_res = nr_pages;
+diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
+index 4d8bf731b118..a2e5a0fcd7d5 100644
+--- a/drivers/md/bcache/super.c
++++ b/drivers/md/bcache/super.c
+@@ -819,7 +819,8 @@ static void bcache_device_free(struct bcache_device *d)
+ }
+
+ static int bcache_device_init(struct bcache_device *d, unsigned int block_size,
+- sector_t sectors, make_request_fn make_request_fn)
++ sector_t sectors, make_request_fn make_request_fn,
++ struct block_device *cached_bdev)
+ {
+ struct request_queue *q;
+ const size_t max_stripes = min_t(size_t, INT_MAX,
+@@ -885,6 +886,21 @@ static int bcache_device_init(struct bcache_device *d, unsigned int block_size,
+ q->limits.io_min = block_size;
+ q->limits.logical_block_size = block_size;
+ q->limits.physical_block_size = block_size;
++
++ if (q->limits.logical_block_size > PAGE_SIZE && cached_bdev) {
++ /*
++ * This should only happen with BCACHE_SB_VERSION_BDEV.
++ * Block/page size is checked for BCACHE_SB_VERSION_CDEV.
++ */
++ pr_info("%s: sb/logical block size (%u) greater than page size "
++ "(%lu) falling back to device logical block size (%u)",
++ d->disk->disk_name, q->limits.logical_block_size,
++ PAGE_SIZE, bdev_logical_block_size(cached_bdev));
++
++ /* This also adjusts physical block size/min io size if needed */
++ blk_queue_logical_block_size(q, bdev_logical_block_size(cached_bdev));
++ }
++
+ blk_queue_flag_set(QUEUE_FLAG_NONROT, d->disk->queue);
+ blk_queue_flag_clear(QUEUE_FLAG_ADD_RANDOM, d->disk->queue);
+ blk_queue_flag_set(QUEUE_FLAG_DISCARD, d->disk->queue);
+@@ -1342,7 +1358,7 @@ static int cached_dev_init(struct cached_dev *dc, unsigned int block_size)
+
+ ret = bcache_device_init(&dc->disk, block_size,
+ dc->bdev->bd_part->nr_sects - dc->sb.data_offset,
+- cached_dev_make_request);
++ cached_dev_make_request, dc->bdev);
+ if (ret)
+ return ret;
+
+@@ -1455,7 +1471,7 @@ static int flash_dev_run(struct cache_set *c, struct uuid_entry *u)
+ kobject_init(&d->kobj, &bch_flash_dev_ktype);
+
+ if (bcache_device_init(d, block_bytes(c), u->sectors,
+- flash_dev_make_request))
++ flash_dev_make_request, NULL))
+ goto err;
+
+ bcache_device_attach(d, c, u - c->uuids);
+diff --git a/drivers/md/dm-writecache.c b/drivers/md/dm-writecache.c
+index 613c171b1b6d..5cc94f57421c 100644
+--- a/drivers/md/dm-writecache.c
++++ b/drivers/md/dm-writecache.c
+@@ -286,6 +286,8 @@ static int persistent_memory_claim(struct dm_writecache *wc)
+ while (daa-- && i < p) {
+ pages[i++] = pfn_t_to_page(pfn);
+ pfn.val++;
++ if (!(i & 15))
++ cond_resched();
+ }
+ } while (i < p);
+ wc->memory_map = vmap(pages, p, VM_MAP, PAGE_KERNEL);
+@@ -857,6 +859,8 @@ static void writecache_discard(struct dm_writecache *wc, sector_t start, sector_
+ writecache_wait_for_ios(wc, WRITE);
+ discarded_something = true;
+ }
++ if (!writecache_entry_is_committed(wc, e))
++ wc->uncommitted_blocks--;
+ writecache_free_entry(wc, e);
+ }
+
+diff --git a/drivers/misc/mei/hw-me-regs.h b/drivers/misc/mei/hw-me-regs.h
+index 9392934e3a06..7becfc768bbc 100644
+--- a/drivers/misc/mei/hw-me-regs.h
++++ b/drivers/misc/mei/hw-me-regs.h
+@@ -94,6 +94,7 @@
+ #define MEI_DEV_ID_JSP_N 0x4DE0 /* Jasper Lake Point N */
+
+ #define MEI_DEV_ID_TGP_LP 0xA0E0 /* Tiger Lake Point LP */
++#define MEI_DEV_ID_TGP_H 0x43E0 /* Tiger Lake Point H */
+
+ #define MEI_DEV_ID_MCC 0x4B70 /* Mule Creek Canyon (EHL) */
+ #define MEI_DEV_ID_MCC_4 0x4B75 /* Mule Creek Canyon 4 (EHL) */
+@@ -107,6 +108,8 @@
+ # define PCI_CFG_HFS_1_D0I3_MSK 0x80000000
+ #define PCI_CFG_HFS_2 0x48
+ #define PCI_CFG_HFS_3 0x60
++# define PCI_CFG_HFS_3_FW_SKU_MSK 0x00000070
++# define PCI_CFG_HFS_3_FW_SKU_SPS 0x00000060
+ #define PCI_CFG_HFS_4 0x64
+ #define PCI_CFG_HFS_5 0x68
+ #define PCI_CFG_HFS_6 0x6C
+diff --git a/drivers/misc/mei/hw-me.c b/drivers/misc/mei/hw-me.c
+index f620442addf5..7649710a2ab9 100644
+--- a/drivers/misc/mei/hw-me.c
++++ b/drivers/misc/mei/hw-me.c
+@@ -1366,7 +1366,7 @@ static bool mei_me_fw_type_nm(struct pci_dev *pdev)
+ #define MEI_CFG_FW_NM \
+ .quirk_probe = mei_me_fw_type_nm
+
+-static bool mei_me_fw_type_sps(struct pci_dev *pdev)
++static bool mei_me_fw_type_sps_4(struct pci_dev *pdev)
+ {
+ u32 reg;
+ unsigned int devfn;
+@@ -1382,7 +1382,36 @@ static bool mei_me_fw_type_sps(struct pci_dev *pdev)
+ return (reg & 0xf0000) == 0xf0000;
+ }
+
+-#define MEI_CFG_FW_SPS \
++#define MEI_CFG_FW_SPS_4 \
++ .quirk_probe = mei_me_fw_type_sps_4
++
++/**
++ * mei_me_fw_sku_sps() - check for sps sku
++ *
++ * Read ME FW Status register to check for SPS Firmware.
++ * The SPS FW is only signaled in pci function 0
++ *
++ * @pdev: pci device
++ *
++ * Return: true in case of SPS firmware
++ */
++static bool mei_me_fw_type_sps(struct pci_dev *pdev)
++{
++ u32 reg;
++ u32 fw_type;
++ unsigned int devfn;
++
++ devfn = PCI_DEVFN(PCI_SLOT(pdev->devfn), 0);
++	pci_bus_read_config_dword(pdev->bus, devfn, PCI_CFG_HFS_3, &reg);
++ trace_mei_pci_cfg_read(&pdev->dev, "PCI_CFG_HFS_3", PCI_CFG_HFS_3, reg);
++ fw_type = (reg & PCI_CFG_HFS_3_FW_SKU_MSK);
++
++ dev_dbg(&pdev->dev, "fw type is %d\n", fw_type);
++
++ return fw_type == PCI_CFG_HFS_3_FW_SKU_SPS;
++}
++
++#define MEI_CFG_FW_SPS \
+ .quirk_probe = mei_me_fw_type_sps
+
+ #define MEI_CFG_FW_VER_SUPP \
+@@ -1452,10 +1481,17 @@ static const struct mei_cfg mei_me_pch8_cfg = {
+ };
+
+ /* PCH8 Lynx Point with quirk for SPS Firmware exclusion */
+-static const struct mei_cfg mei_me_pch8_sps_cfg = {
++static const struct mei_cfg mei_me_pch8_sps_4_cfg = {
+ MEI_CFG_PCH8_HFS,
+ MEI_CFG_FW_VER_SUPP,
+- MEI_CFG_FW_SPS,
++ MEI_CFG_FW_SPS_4,
++};
++
++/* LBG with quirk for SPS (4.0) Firmware exclusion */
++static const struct mei_cfg mei_me_pch12_sps_4_cfg = {
++ MEI_CFG_PCH8_HFS,
++ MEI_CFG_FW_VER_SUPP,
++ MEI_CFG_FW_SPS_4,
+ };
+
+ /* Cannon Lake and newer devices */
+@@ -1465,8 +1501,18 @@ static const struct mei_cfg mei_me_pch12_cfg = {
+ MEI_CFG_DMA_128,
+ };
+
+-/* LBG with quirk for SPS Firmware exclusion */
++/* Cannon Lake with quirk for SPS 5.0 and newer Firmware exclusion */
+ static const struct mei_cfg mei_me_pch12_sps_cfg = {
++ MEI_CFG_PCH8_HFS,
++ MEI_CFG_FW_VER_SUPP,
++ MEI_CFG_DMA_128,
++ MEI_CFG_FW_SPS,
++};
++
++/* Cannon Lake with quirk for SPS 5.0 and newer Firmware exclusion
++ * w/o DMA support
++ */
++static const struct mei_cfg mei_me_pch12_nodma_sps_cfg = {
+ MEI_CFG_PCH8_HFS,
+ MEI_CFG_FW_VER_SUPP,
+ MEI_CFG_FW_SPS,
+@@ -1480,6 +1526,15 @@ static const struct mei_cfg mei_me_pch15_cfg = {
+ MEI_CFG_TRC,
+ };
+
++/* Tiger Lake with quirk for SPS 5.0 and newer Firmware exclusion */
++static const struct mei_cfg mei_me_pch15_sps_cfg = {
++ MEI_CFG_PCH8_HFS,
++ MEI_CFG_FW_VER_SUPP,
++ MEI_CFG_DMA_128,
++ MEI_CFG_TRC,
++ MEI_CFG_FW_SPS,
++};
++
+ /*
+ * mei_cfg_list - A list of platform platform specific configurations.
+ * Note: has to be synchronized with enum mei_cfg_idx.
+@@ -1492,10 +1547,13 @@ static const struct mei_cfg *const mei_cfg_list[] = {
+ [MEI_ME_PCH7_CFG] = &mei_me_pch7_cfg,
+ [MEI_ME_PCH_CPT_PBG_CFG] = &mei_me_pch_cpt_pbg_cfg,
+ [MEI_ME_PCH8_CFG] = &mei_me_pch8_cfg,
+- [MEI_ME_PCH8_SPS_CFG] = &mei_me_pch8_sps_cfg,
++ [MEI_ME_PCH8_SPS_4_CFG] = &mei_me_pch8_sps_4_cfg,
+ [MEI_ME_PCH12_CFG] = &mei_me_pch12_cfg,
++ [MEI_ME_PCH12_SPS_4_CFG] = &mei_me_pch12_sps_4_cfg,
+ [MEI_ME_PCH12_SPS_CFG] = &mei_me_pch12_sps_cfg,
++ [MEI_ME_PCH12_SPS_NODMA_CFG] = &mei_me_pch12_nodma_sps_cfg,
+ [MEI_ME_PCH15_CFG] = &mei_me_pch15_cfg,
++ [MEI_ME_PCH15_SPS_CFG] = &mei_me_pch15_sps_cfg,
+ };
+
+ const struct mei_cfg *mei_me_get_cfg(kernel_ulong_t idx)
+diff --git a/drivers/misc/mei/hw-me.h b/drivers/misc/mei/hw-me.h
+index b6b94e211464..6a8973649c49 100644
+--- a/drivers/misc/mei/hw-me.h
++++ b/drivers/misc/mei/hw-me.h
+@@ -1,6 +1,6 @@
+ /* SPDX-License-Identifier: GPL-2.0 */
+ /*
+- * Copyright (c) 2012-2019, Intel Corporation. All rights reserved.
++ * Copyright (c) 2012-2020, Intel Corporation. All rights reserved.
+ * Intel Management Engine Interface (Intel MEI) Linux driver
+ */
+
+@@ -76,14 +76,20 @@ struct mei_me_hw {
+ * with quirk for Node Manager exclusion.
+ * @MEI_ME_PCH8_CFG: Platform Controller Hub Gen8 and newer
+ * client platforms.
+- * @MEI_ME_PCH8_SPS_CFG: Platform Controller Hub Gen8 and newer
++ * @MEI_ME_PCH8_SPS_4_CFG: Platform Controller Hub Gen8 and newer
+ * servers platforms with quirk for
+ * SPS firmware exclusion.
+ * @MEI_ME_PCH12_CFG: Platform Controller Hub Gen12 and newer
+- * @MEI_ME_PCH12_SPS_CFG: Platform Controller Hub Gen12 and newer
++ * @MEI_ME_PCH12_SPS_4_CFG:Platform Controller Hub Gen12 up to 4.0
++ * servers platforms with quirk for
++ * SPS firmware exclusion.
++ * @MEI_ME_PCH12_SPS_CFG: Platform Controller Hub Gen12 5.0 and newer
+ * servers platforms with quirk for
+ * SPS firmware exclusion.
+ * @MEI_ME_PCH15_CFG: Platform Controller Hub Gen15 and newer
++ * @MEI_ME_PCH15_SPS_CFG: Platform Controller Hub Gen15 and newer
++ * servers platforms with quirk for
++ * SPS firmware exclusion.
+ * @MEI_ME_NUM_CFG: Upper Sentinel.
+ */
+ enum mei_cfg_idx {
+@@ -94,10 +100,13 @@ enum mei_cfg_idx {
+ MEI_ME_PCH7_CFG,
+ MEI_ME_PCH_CPT_PBG_CFG,
+ MEI_ME_PCH8_CFG,
+- MEI_ME_PCH8_SPS_CFG,
++ MEI_ME_PCH8_SPS_4_CFG,
+ MEI_ME_PCH12_CFG,
++ MEI_ME_PCH12_SPS_4_CFG,
+ MEI_ME_PCH12_SPS_CFG,
++ MEI_ME_PCH12_SPS_NODMA_CFG,
+ MEI_ME_PCH15_CFG,
++ MEI_ME_PCH15_SPS_CFG,
+ MEI_ME_NUM_CFG,
+ };
+
+diff --git a/drivers/misc/mei/pci-me.c b/drivers/misc/mei/pci-me.c
+index a1ed375fed37..81e759674c1b 100644
+--- a/drivers/misc/mei/pci-me.c
++++ b/drivers/misc/mei/pci-me.c
+@@ -59,18 +59,18 @@ static const struct pci_device_id mei_me_pci_tbl[] = {
+ {MEI_PCI_DEVICE(MEI_DEV_ID_PPT_1, MEI_ME_PCH7_CFG)},
+ {MEI_PCI_DEVICE(MEI_DEV_ID_PPT_2, MEI_ME_PCH7_CFG)},
+ {MEI_PCI_DEVICE(MEI_DEV_ID_PPT_3, MEI_ME_PCH7_CFG)},
+- {MEI_PCI_DEVICE(MEI_DEV_ID_LPT_H, MEI_ME_PCH8_SPS_CFG)},
+- {MEI_PCI_DEVICE(MEI_DEV_ID_LPT_W, MEI_ME_PCH8_SPS_CFG)},
++ {MEI_PCI_DEVICE(MEI_DEV_ID_LPT_H, MEI_ME_PCH8_SPS_4_CFG)},
++ {MEI_PCI_DEVICE(MEI_DEV_ID_LPT_W, MEI_ME_PCH8_SPS_4_CFG)},
+ {MEI_PCI_DEVICE(MEI_DEV_ID_LPT_LP, MEI_ME_PCH8_CFG)},
+- {MEI_PCI_DEVICE(MEI_DEV_ID_LPT_HR, MEI_ME_PCH8_SPS_CFG)},
++ {MEI_PCI_DEVICE(MEI_DEV_ID_LPT_HR, MEI_ME_PCH8_SPS_4_CFG)},
+ {MEI_PCI_DEVICE(MEI_DEV_ID_WPT_LP, MEI_ME_PCH8_CFG)},
+ {MEI_PCI_DEVICE(MEI_DEV_ID_WPT_LP_2, MEI_ME_PCH8_CFG)},
+
+ {MEI_PCI_DEVICE(MEI_DEV_ID_SPT, MEI_ME_PCH8_CFG)},
+ {MEI_PCI_DEVICE(MEI_DEV_ID_SPT_2, MEI_ME_PCH8_CFG)},
+- {MEI_PCI_DEVICE(MEI_DEV_ID_SPT_H, MEI_ME_PCH8_SPS_CFG)},
+- {MEI_PCI_DEVICE(MEI_DEV_ID_SPT_H_2, MEI_ME_PCH8_SPS_CFG)},
+- {MEI_PCI_DEVICE(MEI_DEV_ID_LBG, MEI_ME_PCH12_SPS_CFG)},
++ {MEI_PCI_DEVICE(MEI_DEV_ID_SPT_H, MEI_ME_PCH8_SPS_4_CFG)},
++ {MEI_PCI_DEVICE(MEI_DEV_ID_SPT_H_2, MEI_ME_PCH8_SPS_4_CFG)},
++ {MEI_PCI_DEVICE(MEI_DEV_ID_LBG, MEI_ME_PCH12_SPS_4_CFG)},
+
+ {MEI_PCI_DEVICE(MEI_DEV_ID_BXT_M, MEI_ME_PCH8_CFG)},
+ {MEI_PCI_DEVICE(MEI_DEV_ID_APL_I, MEI_ME_PCH8_CFG)},
+@@ -84,8 +84,8 @@ static const struct pci_device_id mei_me_pci_tbl[] = {
+
+ {MEI_PCI_DEVICE(MEI_DEV_ID_CNP_LP, MEI_ME_PCH12_CFG)},
+ {MEI_PCI_DEVICE(MEI_DEV_ID_CNP_LP_3, MEI_ME_PCH8_CFG)},
+- {MEI_PCI_DEVICE(MEI_DEV_ID_CNP_H, MEI_ME_PCH12_CFG)},
+- {MEI_PCI_DEVICE(MEI_DEV_ID_CNP_H_3, MEI_ME_PCH8_CFG)},
++ {MEI_PCI_DEVICE(MEI_DEV_ID_CNP_H, MEI_ME_PCH12_SPS_CFG)},
++ {MEI_PCI_DEVICE(MEI_DEV_ID_CNP_H_3, MEI_ME_PCH12_SPS_NODMA_CFG)},
+
+ {MEI_PCI_DEVICE(MEI_DEV_ID_CMP_LP, MEI_ME_PCH12_CFG)},
+ {MEI_PCI_DEVICE(MEI_DEV_ID_CMP_LP_3, MEI_ME_PCH8_CFG)},
+@@ -96,6 +96,7 @@ static const struct pci_device_id mei_me_pci_tbl[] = {
+ {MEI_PCI_DEVICE(MEI_DEV_ID_ICP_LP, MEI_ME_PCH12_CFG)},
+
+ {MEI_PCI_DEVICE(MEI_DEV_ID_TGP_LP, MEI_ME_PCH15_CFG)},
++ {MEI_PCI_DEVICE(MEI_DEV_ID_TGP_H, MEI_ME_PCH15_SPS_CFG)},
+
+ {MEI_PCI_DEVICE(MEI_DEV_ID_JSP_N, MEI_ME_PCH15_CFG)},
+
+diff --git a/drivers/net/bareudp.c b/drivers/net/bareudp.c
+index 5d3c691a1c66..3dd46cd55114 100644
+--- a/drivers/net/bareudp.c
++++ b/drivers/net/bareudp.c
+@@ -572,6 +572,9 @@ static int bareudp2info(struct nlattr *data[], struct bareudp_conf *conf,
+ if (data[IFLA_BAREUDP_SRCPORT_MIN])
+ conf->sport_min = nla_get_u16(data[IFLA_BAREUDP_SRCPORT_MIN]);
+
++ if (data[IFLA_BAREUDP_MULTIPROTO_MODE])
++ conf->multi_proto_mode = true;
++
+ return 0;
+ }
+
+diff --git a/drivers/net/dsa/bcm_sf2.c b/drivers/net/dsa/bcm_sf2.c
+index c7ac63f41918..946e41f020a5 100644
+--- a/drivers/net/dsa/bcm_sf2.c
++++ b/drivers/net/dsa/bcm_sf2.c
+@@ -1147,6 +1147,8 @@ static int bcm_sf2_sw_probe(struct platform_device *pdev)
+ set_bit(0, priv->cfp.used);
+ set_bit(0, priv->cfp.unique);
+
++ /* Balance of_node_put() done by of_find_node_by_name() */
++ of_node_get(dn);
+ ports = of_find_node_by_name(dn, "ports");
+ if (ports) {
+ bcm_sf2_identify_ports(priv, ports);
+diff --git a/drivers/net/ethernet/atheros/alx/main.c b/drivers/net/ethernet/atheros/alx/main.c
+index b9b4edb913c1..9b7f1af5f574 100644
+--- a/drivers/net/ethernet/atheros/alx/main.c
++++ b/drivers/net/ethernet/atheros/alx/main.c
+@@ -1249,8 +1249,12 @@ out_disable_adv_intr:
+
+ static void __alx_stop(struct alx_priv *alx)
+ {
+- alx_halt(alx);
+ alx_free_irq(alx);
++
++ cancel_work_sync(&alx->link_check_wk);
++ cancel_work_sync(&alx->reset_wk);
++
++ alx_halt(alx);
+ alx_free_rings(alx);
+ alx_free_napis(alx);
+ }
+@@ -1855,9 +1859,6 @@ static void alx_remove(struct pci_dev *pdev)
+ struct alx_priv *alx = pci_get_drvdata(pdev);
+ struct alx_hw *hw = &alx->hw;
+
+- cancel_work_sync(&alx->link_check_wk);
+- cancel_work_sync(&alx->reset_wk);
+-
+ /* restore permanent mac address */
+ alx_set_macaddr(hw, hw->perm_addr);
+
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+index 19c4a0a5727a..b6fb5a1709c0 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+@@ -6293,6 +6293,7 @@ int bnxt_hwrm_set_coal(struct bnxt *bp)
+
+ static void bnxt_hwrm_stat_ctx_free(struct bnxt *bp)
+ {
++ struct hwrm_stat_ctx_clr_stats_input req0 = {0};
+ struct hwrm_stat_ctx_free_input req = {0};
+ int i;
+
+@@ -6302,6 +6303,7 @@ static void bnxt_hwrm_stat_ctx_free(struct bnxt *bp)
+ if (BNXT_CHIP_TYPE_NITRO_A0(bp))
+ return;
+
++ bnxt_hwrm_cmd_hdr_init(bp, &req0, HWRM_STAT_CTX_CLR_STATS, -1, -1);
+ bnxt_hwrm_cmd_hdr_init(bp, &req, HWRM_STAT_CTX_FREE, -1, -1);
+
+ mutex_lock(&bp->hwrm_cmd_lock);
+@@ -6311,7 +6313,11 @@ static void bnxt_hwrm_stat_ctx_free(struct bnxt *bp)
+
+ if (cpr->hw_stats_ctx_id != INVALID_STATS_CTX_ID) {
+ req.stat_ctx_id = cpu_to_le32(cpr->hw_stats_ctx_id);
+-
++ if (BNXT_FW_MAJ(bp) <= 20) {
++ req0.stat_ctx_id = req.stat_ctx_id;
++ _hwrm_send_message(bp, &req0, sizeof(req0),
++ HWRM_CMD_TIMEOUT);
++ }
+ _hwrm_send_message(bp, &req, sizeof(req),
+ HWRM_CMD_TIMEOUT);
+
+@@ -6953,7 +6959,8 @@ static int __bnxt_hwrm_func_qcaps(struct bnxt *bp)
+ bp->fw_cap |= BNXT_FW_CAP_ERR_RECOVER_RELOAD;
+
+ bp->tx_push_thresh = 0;
+- if (flags & FUNC_QCAPS_RESP_FLAGS_PUSH_MODE_SUPPORTED)
++ if ((flags & FUNC_QCAPS_RESP_FLAGS_PUSH_MODE_SUPPORTED) &&
++ BNXT_FW_MAJ(bp) > 217)
+ bp->tx_push_thresh = BNXT_TX_PUSH_THRESH;
+
+ hw_resc->max_rsscos_ctxs = le16_to_cpu(resp->max_rsscos_ctx);
+@@ -7217,8 +7224,9 @@ static int __bnxt_hwrm_ver_get(struct bnxt *bp, bool silent)
+ static int bnxt_hwrm_ver_get(struct bnxt *bp)
+ {
+ struct hwrm_ver_get_output *resp = bp->hwrm_cmd_resp_addr;
++ u16 fw_maj, fw_min, fw_bld, fw_rsv;
+ u32 dev_caps_cfg, hwrm_ver;
+- int rc;
++ int rc, len;
+
+ bp->hwrm_max_req_len = HWRM_MAX_REQ_LEN;
+ mutex_lock(&bp->hwrm_cmd_lock);
+@@ -7250,9 +7258,22 @@ static int bnxt_hwrm_ver_get(struct bnxt *bp)
+ resp->hwrm_intf_maj_8b, resp->hwrm_intf_min_8b,
+ resp->hwrm_intf_upd_8b);
+
+- snprintf(bp->fw_ver_str, BC_HWRM_STR_LEN, "%d.%d.%d.%d",
+- resp->hwrm_fw_maj_8b, resp->hwrm_fw_min_8b,
+- resp->hwrm_fw_bld_8b, resp->hwrm_fw_rsvd_8b);
++ fw_maj = le16_to_cpu(resp->hwrm_fw_major);
++ if (bp->hwrm_spec_code > 0x10803 && fw_maj) {
++ fw_min = le16_to_cpu(resp->hwrm_fw_minor);
++ fw_bld = le16_to_cpu(resp->hwrm_fw_build);
++ fw_rsv = le16_to_cpu(resp->hwrm_fw_patch);
++ len = FW_VER_STR_LEN;
++ } else {
++ fw_maj = resp->hwrm_fw_maj_8b;
++ fw_min = resp->hwrm_fw_min_8b;
++ fw_bld = resp->hwrm_fw_bld_8b;
++ fw_rsv = resp->hwrm_fw_rsvd_8b;
++ len = BC_HWRM_STR_LEN;
++ }
++ bp->fw_ver_code = BNXT_FW_VER_CODE(fw_maj, fw_min, fw_bld, fw_rsv);
++ snprintf(bp->fw_ver_str, len, "%d.%d.%d.%d", fw_maj, fw_min, fw_bld,
++ fw_rsv);
+
+ if (strlen(resp->active_pkg_name)) {
+ int fw_ver_len = strlen(bp->fw_ver_str);
+@@ -11863,7 +11884,8 @@ static int bnxt_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
+ dev->ethtool_ops = &bnxt_ethtool_ops;
+ pci_set_drvdata(pdev, dev);
+
+- bnxt_vpd_read_info(bp);
++ if (BNXT_PF(bp))
++ bnxt_vpd_read_info(bp);
+
+ rc = bnxt_alloc_hwrm_resources(bp);
+ if (rc)
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.h b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
+index 3d39638521d6..23ee433db864 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.h
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
+@@ -1729,6 +1729,11 @@ struct bnxt {
+ #define PHY_VER_STR_LEN (FW_VER_STR_LEN - BC_HWRM_STR_LEN)
+ char fw_ver_str[FW_VER_STR_LEN];
+ char hwrm_ver_supp[FW_VER_STR_LEN];
++ u64 fw_ver_code;
++#define BNXT_FW_VER_CODE(maj, min, bld, rsv) \
++ ((u64)(maj) << 48 | (u64)(min) << 32 | (u64)(bld) << 16 | (rsv))
++#define BNXT_FW_MAJ(bp) ((bp)->fw_ver_code >> 48)
++
+ __be16 vxlan_port;
+ u8 vxlan_port_cnt;
+ __le16 vxlan_fw_dst_port_id;
+diff --git a/drivers/net/ethernet/broadcom/genet/bcmgenet.c b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
+index 38bdfd4b46f0..dde1c23c8e39 100644
+--- a/drivers/net/ethernet/broadcom/genet/bcmgenet.c
++++ b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
+@@ -1520,11 +1520,6 @@ static netdev_tx_t bcmgenet_xmit(struct sk_buff *skb, struct net_device *dev)
+ goto out;
+ }
+
+- if (skb_padto(skb, ETH_ZLEN)) {
+- ret = NETDEV_TX_OK;
+- goto out;
+- }
+-
+ /* Retain how many bytes will be sent on the wire, without TSB inserted
+ * by transmit checksum offload
+ */
+@@ -1571,6 +1566,9 @@ static netdev_tx_t bcmgenet_xmit(struct sk_buff *skb, struct net_device *dev)
+ len_stat = (size << DMA_BUFLENGTH_SHIFT) |
+ (priv->hw_params->qtag_mask << DMA_TX_QTAG_SHIFT);
+
++ /* Note: if we ever change from DMA_TX_APPEND_CRC below we
++ * will need to restore software padding of "runt" packets
++ */
+ if (!i) {
+ len_stat |= DMA_TX_APPEND_CRC | DMA_SOP;
+ if (skb->ip_summed == CHECKSUM_PARTIAL)
+diff --git a/drivers/net/ethernet/broadcom/tg3.c b/drivers/net/ethernet/broadcom/tg3.c
+index ff98a82b7bc4..d71ce7634ac1 100644
+--- a/drivers/net/ethernet/broadcom/tg3.c
++++ b/drivers/net/ethernet/broadcom/tg3.c
+@@ -18170,8 +18170,8 @@ static pci_ers_result_t tg3_io_error_detected(struct pci_dev *pdev,
+
+ rtnl_lock();
+
+- /* We probably don't have netdev yet */
+- if (!netdev || !netif_running(netdev))
++ /* Could be second call or maybe we don't have netdev yet */
++ if (!netdev || tp->pcierr_recovery || !netif_running(netdev))
+ goto done;
+
+ /* We needn't recover from permanent error */
+diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c
+index 67933079aeea..52582e8ed90e 100644
+--- a/drivers/net/ethernet/cadence/macb_main.c
++++ b/drivers/net/ethernet/cadence/macb_main.c
+@@ -2558,7 +2558,7 @@ static int macb_open(struct net_device *dev)
+
+ err = macb_phylink_connect(bp);
+ if (err)
+- goto napi_exit;
++ goto reset_hw;
+
+ netif_tx_start_all_queues(dev);
+
+@@ -2567,9 +2567,11 @@ static int macb_open(struct net_device *dev)
+
+ return 0;
+
+-napi_exit:
++reset_hw:
++ macb_reset_hw(bp);
+ for (q = 0, queue = bp->queues; q < bp->num_queues; ++q, ++queue)
+ napi_disable(&queue->napi);
++ macb_free_consistent(bp);
+ pm_exit:
+ pm_runtime_put_sync(&bp->pdev->dev);
+ return err;
+@@ -3760,15 +3762,9 @@ static int macb_init(struct platform_device *pdev)
+
+ static struct sifive_fu540_macb_mgmt *mgmt;
+
+-/* Initialize and start the Receiver and Transmit subsystems */
+-static int at91ether_start(struct net_device *dev)
++static int at91ether_alloc_coherent(struct macb *lp)
+ {
+- struct macb *lp = netdev_priv(dev);
+ struct macb_queue *q = &lp->queues[0];
+- struct macb_dma_desc *desc;
+- dma_addr_t addr;
+- u32 ctl;
+- int i;
+
+ q->rx_ring = dma_alloc_coherent(&lp->pdev->dev,
+ (AT91ETHER_MAX_RX_DESCR *
+@@ -3790,6 +3786,43 @@ static int at91ether_start(struct net_device *dev)
+ return -ENOMEM;
+ }
+
++ return 0;
++}
++
++static void at91ether_free_coherent(struct macb *lp)
++{
++ struct macb_queue *q = &lp->queues[0];
++
++ if (q->rx_ring) {
++ dma_free_coherent(&lp->pdev->dev,
++ AT91ETHER_MAX_RX_DESCR *
++ macb_dma_desc_get_size(lp),
++ q->rx_ring, q->rx_ring_dma);
++ q->rx_ring = NULL;
++ }
++
++ if (q->rx_buffers) {
++ dma_free_coherent(&lp->pdev->dev,
++ AT91ETHER_MAX_RX_DESCR *
++ AT91ETHER_MAX_RBUFF_SZ,
++ q->rx_buffers, q->rx_buffers_dma);
++ q->rx_buffers = NULL;
++ }
++}
++
++/* Initialize and start the Receiver and Transmit subsystems */
++static int at91ether_start(struct macb *lp)
++{
++ struct macb_queue *q = &lp->queues[0];
++ struct macb_dma_desc *desc;
++ dma_addr_t addr;
++ u32 ctl;
++ int i, ret;
++
++ ret = at91ether_alloc_coherent(lp);
++ if (ret)
++ return ret;
++
+ addr = q->rx_buffers_dma;
+ for (i = 0; i < AT91ETHER_MAX_RX_DESCR; i++) {
+ desc = macb_rx_desc(q, i);
+@@ -3811,9 +3844,39 @@ static int at91ether_start(struct net_device *dev)
+ ctl = macb_readl(lp, NCR);
+ macb_writel(lp, NCR, ctl | MACB_BIT(RE) | MACB_BIT(TE));
+
++ /* Enable MAC interrupts */
++ macb_writel(lp, IER, MACB_BIT(RCOMP) |
++ MACB_BIT(RXUBR) |
++ MACB_BIT(ISR_TUND) |
++ MACB_BIT(ISR_RLE) |
++ MACB_BIT(TCOMP) |
++ MACB_BIT(ISR_ROVR) |
++ MACB_BIT(HRESP));
++
+ return 0;
+ }
+
++static void at91ether_stop(struct macb *lp)
++{
++ u32 ctl;
++
++ /* Disable MAC interrupts */
++ macb_writel(lp, IDR, MACB_BIT(RCOMP) |
++ MACB_BIT(RXUBR) |
++ MACB_BIT(ISR_TUND) |
++ MACB_BIT(ISR_RLE) |
++ MACB_BIT(TCOMP) |
++ MACB_BIT(ISR_ROVR) |
++ MACB_BIT(HRESP));
++
++ /* Disable Receiver and Transmitter */
++ ctl = macb_readl(lp, NCR);
++ macb_writel(lp, NCR, ctl & ~(MACB_BIT(TE) | MACB_BIT(RE)));
++
++ /* Free resources. */
++ at91ether_free_coherent(lp);
++}
++
+ /* Open the ethernet interface */
+ static int at91ether_open(struct net_device *dev)
+ {
+@@ -3833,63 +3896,36 @@ static int at91ether_open(struct net_device *dev)
+
+ macb_set_hwaddr(lp);
+
+- ret = at91ether_start(dev);
++ ret = at91ether_start(lp);
+ if (ret)
+- return ret;
+-
+- /* Enable MAC interrupts */
+- macb_writel(lp, IER, MACB_BIT(RCOMP) |
+- MACB_BIT(RXUBR) |
+- MACB_BIT(ISR_TUND) |
+- MACB_BIT(ISR_RLE) |
+- MACB_BIT(TCOMP) |
+- MACB_BIT(ISR_ROVR) |
+- MACB_BIT(HRESP));
++ goto pm_exit;
+
+ ret = macb_phylink_connect(lp);
+ if (ret)
+- return ret;
++ goto stop;
+
+ netif_start_queue(dev);
+
+ return 0;
++
++stop:
++ at91ether_stop(lp);
++pm_exit:
++ pm_runtime_put_sync(&lp->pdev->dev);
++ return ret;
+ }
+
+ /* Close the interface */
+ static int at91ether_close(struct net_device *dev)
+ {
+ struct macb *lp = netdev_priv(dev);
+- struct macb_queue *q = &lp->queues[0];
+- u32 ctl;
+-
+- /* Disable Receiver and Transmitter */
+- ctl = macb_readl(lp, NCR);
+- macb_writel(lp, NCR, ctl & ~(MACB_BIT(TE) | MACB_BIT(RE)));
+-
+- /* Disable MAC interrupts */
+- macb_writel(lp, IDR, MACB_BIT(RCOMP) |
+- MACB_BIT(RXUBR) |
+- MACB_BIT(ISR_TUND) |
+- MACB_BIT(ISR_RLE) |
+- MACB_BIT(TCOMP) |
+- MACB_BIT(ISR_ROVR) |
+- MACB_BIT(HRESP));
+
+ netif_stop_queue(dev);
+
+ phylink_stop(lp->phylink);
+ phylink_disconnect_phy(lp->phylink);
+
+- dma_free_coherent(&lp->pdev->dev,
+- AT91ETHER_MAX_RX_DESCR *
+- macb_dma_desc_get_size(lp),
+- q->rx_ring, q->rx_ring_dma);
+- q->rx_ring = NULL;
+-
+- dma_free_coherent(&lp->pdev->dev,
+- AT91ETHER_MAX_RX_DESCR * AT91ETHER_MAX_RBUFF_SZ,
+- q->rx_buffers, q->rx_buffers_dma);
+- q->rx_buffers = NULL;
++ at91ether_stop(lp);
+
+ return pm_runtime_put(&lp->pdev->dev);
+ }
+diff --git a/drivers/net/ethernet/chelsio/cxgb4/l2t.c b/drivers/net/ethernet/chelsio/cxgb4/l2t.c
+index 72b37a66c7d8..0ed20a9cca14 100644
+--- a/drivers/net/ethernet/chelsio/cxgb4/l2t.c
++++ b/drivers/net/ethernet/chelsio/cxgb4/l2t.c
+@@ -502,41 +502,20 @@ u64 cxgb4_select_ntuple(struct net_device *dev,
+ }
+ EXPORT_SYMBOL(cxgb4_select_ntuple);
+
+-/*
+- * Called when address resolution fails for an L2T entry to handle packets
+- * on the arpq head. If a packet specifies a failure handler it is invoked,
+- * otherwise the packet is sent to the device.
+- */
+-static void handle_failed_resolution(struct adapter *adap, struct l2t_entry *e)
+-{
+- struct sk_buff *skb;
+-
+- while ((skb = __skb_dequeue(&e->arpq)) != NULL) {
+- const struct l2t_skb_cb *cb = L2T_SKB_CB(skb);
+-
+- spin_unlock(&e->lock);
+- if (cb->arp_err_handler)
+- cb->arp_err_handler(cb->handle, skb);
+- else
+- t4_ofld_send(adap, skb);
+- spin_lock(&e->lock);
+- }
+-}
+-
+ /*
+ * Called when the host's neighbor layer makes a change to some entry that is
+ * loaded into the HW L2 table.
+ */
+ void t4_l2t_update(struct adapter *adap, struct neighbour *neigh)
+ {
+- struct l2t_entry *e;
+- struct sk_buff_head *arpq = NULL;
+- struct l2t_data *d = adap->l2t;
+ unsigned int addr_len = neigh->tbl->key_len;
+ u32 *addr = (u32 *) neigh->primary_key;
+- int ifidx = neigh->dev->ifindex;
+- int hash = addr_hash(d, addr, addr_len, ifidx);
++ int hash, ifidx = neigh->dev->ifindex;
++ struct sk_buff_head *arpq = NULL;
++ struct l2t_data *d = adap->l2t;
++ struct l2t_entry *e;
+
++ hash = addr_hash(d, addr, addr_len, ifidx);
+ read_lock_bh(&d->lock);
+ for (e = d->l2tab[hash].first; e; e = e->next)
+ if (!addreq(e, addr) && e->ifindex == ifidx) {
+@@ -569,8 +548,25 @@ void t4_l2t_update(struct adapter *adap, struct neighbour *neigh)
+ write_l2e(adap, e, 0);
+ }
+
+- if (arpq)
+- handle_failed_resolution(adap, e);
++ if (arpq) {
++ struct sk_buff *skb;
++
++ /* Called when address resolution fails for an L2T
++ * entry to handle packets on the arpq head. If a
++ * packet specifies a failure handler it is invoked,
++ * otherwise the packet is sent to the device.
++ */
++ while ((skb = __skb_dequeue(&e->arpq)) != NULL) {
++ const struct l2t_skb_cb *cb = L2T_SKB_CB(skb);
++
++ spin_unlock(&e->lock);
++ if (cb->arp_err_handler)
++ cb->arp_err_handler(cb->handle, skb);
++ else
++ t4_ofld_send(adap, skb);
++ spin_lock(&e->lock);
++ }
++ }
+ spin_unlock_bh(&e->lock);
+ }
+
+diff --git a/drivers/net/ethernet/chelsio/cxgb4/sge.c b/drivers/net/ethernet/chelsio/cxgb4/sge.c
+index 6516c45864b3..db8106d9d6ed 100644
+--- a/drivers/net/ethernet/chelsio/cxgb4/sge.c
++++ b/drivers/net/ethernet/chelsio/cxgb4/sge.c
+@@ -1425,12 +1425,10 @@ static netdev_tx_t cxgb4_eth_xmit(struct sk_buff *skb, struct net_device *dev)
+
+ qidx = skb_get_queue_mapping(skb);
+ if (ptp_enabled) {
+- spin_lock(&adap->ptp_lock);
+ if (!(adap->ptp_tx_skb)) {
+ skb_shinfo(skb)->tx_flags |= SKBTX_IN_PROGRESS;
+ adap->ptp_tx_skb = skb_get(skb);
+ } else {
+- spin_unlock(&adap->ptp_lock);
+ goto out_free;
+ }
+ q = &adap->sge.ptptxq;
+@@ -1444,11 +1442,8 @@ static netdev_tx_t cxgb4_eth_xmit(struct sk_buff *skb, struct net_device *dev)
+
+ #ifdef CONFIG_CHELSIO_T4_FCOE
+ ret = cxgb_fcoe_offload(skb, adap, pi, &cntrl);
+- if (unlikely(ret == -ENOTSUPP)) {
+- if (ptp_enabled)
+- spin_unlock(&adap->ptp_lock);
++ if (unlikely(ret == -EOPNOTSUPP))
+ goto out_free;
+- }
+ #endif /* CONFIG_CHELSIO_T4_FCOE */
+
+ chip_ver = CHELSIO_CHIP_VERSION(adap->params.chip);
+@@ -1461,8 +1456,6 @@ static netdev_tx_t cxgb4_eth_xmit(struct sk_buff *skb, struct net_device *dev)
+ dev_err(adap->pdev_dev,
+ "%s: Tx ring %u full while queue awake!\n",
+ dev->name, qidx);
+- if (ptp_enabled)
+- spin_unlock(&adap->ptp_lock);
+ return NETDEV_TX_BUSY;
+ }
+
+@@ -1481,8 +1474,6 @@ static netdev_tx_t cxgb4_eth_xmit(struct sk_buff *skb, struct net_device *dev)
+ unlikely(cxgb4_map_skb(adap->pdev_dev, skb, sgl_sdesc->addr) < 0)) {
+ memset(sgl_sdesc->addr, 0, sizeof(sgl_sdesc->addr));
+ q->mapping_err++;
+- if (ptp_enabled)
+- spin_unlock(&adap->ptp_lock);
+ goto out_free;
+ }
+
+@@ -1630,8 +1621,6 @@ static netdev_tx_t cxgb4_eth_xmit(struct sk_buff *skb, struct net_device *dev)
+ txq_advance(&q->q, ndesc);
+
+ cxgb4_ring_tx_db(adap, &q->q, ndesc);
+- if (ptp_enabled)
+- spin_unlock(&adap->ptp_lock);
+ return NETDEV_TX_OK;
+
+ out_free:
+@@ -2365,6 +2354,16 @@ netdev_tx_t t4_start_xmit(struct sk_buff *skb, struct net_device *dev)
+ if (unlikely(qid >= pi->nqsets))
+ return cxgb4_ethofld_xmit(skb, dev);
+
++ if (is_ptp_enabled(skb, dev)) {
++ struct adapter *adap = netdev2adap(dev);
++ netdev_tx_t ret;
++
++ spin_lock(&adap->ptp_lock);
++ ret = cxgb4_eth_xmit(skb, dev);
++ spin_unlock(&adap->ptp_lock);
++ return ret;
++ }
++
+ return cxgb4_eth_xmit(skb, dev);
+ }
+
+diff --git a/drivers/net/ethernet/freescale/enetc/enetc.c b/drivers/net/ethernet/freescale/enetc/enetc.c
+index ccf2611f4a20..4486a0db8ef0 100644
+--- a/drivers/net/ethernet/freescale/enetc/enetc.c
++++ b/drivers/net/ethernet/freescale/enetc/enetc.c
+@@ -266,7 +266,7 @@ static irqreturn_t enetc_msix(int irq, void *data)
+ /* disable interrupts */
+ enetc_wr_reg(v->rbier, 0);
+
+- for_each_set_bit(i, &v->tx_rings_map, v->count_tx_rings)
++ for_each_set_bit(i, &v->tx_rings_map, ENETC_MAX_NUM_TXQS)
+ enetc_wr_reg(v->tbier_base + ENETC_BDR_OFF(i), 0);
+
+ napi_schedule_irqoff(&v->napi);
+@@ -302,7 +302,7 @@ static int enetc_poll(struct napi_struct *napi, int budget)
+ /* enable interrupts */
+ enetc_wr_reg(v->rbier, ENETC_RBIER_RXTIE);
+
+- for_each_set_bit(i, &v->tx_rings_map, v->count_tx_rings)
++ for_each_set_bit(i, &v->tx_rings_map, ENETC_MAX_NUM_TXQS)
+ enetc_wr_reg(v->tbier_base + ENETC_BDR_OFF(i),
+ ENETC_TBIER_TXTIE);
+
+diff --git a/drivers/net/ethernet/ibm/ibmveth.c b/drivers/net/ethernet/ibm/ibmveth.c
+index 96d36ae5049e..c5c732601e35 100644
+--- a/drivers/net/ethernet/ibm/ibmveth.c
++++ b/drivers/net/ethernet/ibm/ibmveth.c
+@@ -1715,7 +1715,7 @@ static int ibmveth_probe(struct vio_dev *dev, const struct vio_device_id *id)
+ }
+
+ netdev->min_mtu = IBMVETH_MIN_MTU;
+- netdev->max_mtu = ETH_MAX_MTU;
++ netdev->max_mtu = ETH_MAX_MTU - IBMVETH_BUFF_OH;
+
+ memcpy(netdev->dev_addr, mac_addr_p, ETH_ALEN);
+
+diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c
+index 1b4d04e4474b..2baf7b3ff4cb 100644
+--- a/drivers/net/ethernet/ibm/ibmvnic.c
++++ b/drivers/net/ethernet/ibm/ibmvnic.c
+@@ -842,12 +842,13 @@ static int ibmvnic_login(struct net_device *netdev)
+ struct ibmvnic_adapter *adapter = netdev_priv(netdev);
+ unsigned long timeout = msecs_to_jiffies(30000);
+ int retry_count = 0;
++ int retries = 10;
+ bool retry;
+ int rc;
+
+ do {
+ retry = false;
+- if (retry_count > IBMVNIC_MAX_QUEUES) {
++ if (retry_count > retries) {
+ netdev_warn(netdev, "Login attempts exceeded\n");
+ return -1;
+ }
+@@ -862,11 +863,23 @@ static int ibmvnic_login(struct net_device *netdev)
+
+ if (!wait_for_completion_timeout(&adapter->init_done,
+ timeout)) {
+- netdev_warn(netdev, "Login timed out\n");
+- return -1;
++ netdev_warn(netdev, "Login timed out, retrying...\n");
++ retry = true;
++ adapter->init_done_rc = 0;
++ retry_count++;
++ continue;
+ }
+
+- if (adapter->init_done_rc == PARTIALSUCCESS) {
++ if (adapter->init_done_rc == ABORTED) {
++ netdev_warn(netdev, "Login aborted, retrying...\n");
++ retry = true;
++ adapter->init_done_rc = 0;
++ retry_count++;
++ /* FW or device may be busy, so
++ * wait a bit before retrying login
++ */
++ msleep(500);
++ } else if (adapter->init_done_rc == PARTIALSUCCESS) {
+ retry_count++;
+ release_sub_crqs(adapter, 1);
+
+diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
+index b7b553602ea9..24f4d8e0da98 100644
+--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
++++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
+@@ -1544,7 +1544,7 @@ static void mvpp2_read_stats(struct mvpp2_port *port)
+ for (q = 0; q < port->ntxqs; q++)
+ for (i = 0; i < ARRAY_SIZE(mvpp2_ethtool_txq_regs); i++)
+ *pstats++ += mvpp2_read_index(port->priv,
+- MVPP22_CTRS_TX_CTR(port->id, i),
++ MVPP22_CTRS_TX_CTR(port->id, q),
+ mvpp2_ethtool_txq_regs[i].offset);
+
+ /* Rxqs are numbered from 0 from the user standpoint, but not from the
+@@ -1553,7 +1553,7 @@ static void mvpp2_read_stats(struct mvpp2_port *port)
+ for (q = 0; q < port->nrxqs; q++)
+ for (i = 0; i < ARRAY_SIZE(mvpp2_ethtool_rxq_regs); i++)
+ *pstats++ += mvpp2_read_index(port->priv,
+- port->first_rxq + i,
++ port->first_rxq + q,
+ mvpp2_ethtool_rxq_regs[i].offset);
+ }
+
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum.c
+index 3e4199246a18..d9a2267aeaea 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum.c
+@@ -990,10 +990,10 @@ int __mlxsw_sp_port_headroom_set(struct mlxsw_sp_port *mlxsw_sp_port, int mtu,
+
+ lossy = !(pfc || pause_en);
+ thres_cells = mlxsw_sp_pg_buf_threshold_get(mlxsw_sp, mtu);
+- mlxsw_sp_port_headroom_8x_adjust(mlxsw_sp_port, &thres_cells);
++ thres_cells = mlxsw_sp_port_headroom_8x_adjust(mlxsw_sp_port, thres_cells);
+ delay_cells = mlxsw_sp_pg_buf_delay_get(mlxsw_sp, mtu, delay,
+ pfc, pause_en);
+- mlxsw_sp_port_headroom_8x_adjust(mlxsw_sp_port, &delay_cells);
++ delay_cells = mlxsw_sp_port_headroom_8x_adjust(mlxsw_sp_port, delay_cells);
+ total_cells = thres_cells + delay_cells;
+
+ taken_headroom_cells += total_cells;
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum.h b/drivers/net/ethernet/mellanox/mlxsw/spectrum.h
+index e28ecb84b816..6b2e4e730b18 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum.h
++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum.h
+@@ -395,17 +395,15 @@ mlxsw_sp_port_vlan_find_by_vid(const struct mlxsw_sp_port *mlxsw_sp_port,
+ return NULL;
+ }
+
+-static inline void
++static inline u32
+ mlxsw_sp_port_headroom_8x_adjust(const struct mlxsw_sp_port *mlxsw_sp_port,
+- u16 *p_size)
++ u32 size_cells)
+ {
+ /* Ports with eight lanes use two headroom buffers between which the
+ * configured headroom size is split. Therefore, multiply the calculated
+ * headroom size by two.
+ */
+- if (mlxsw_sp_port->mapping.width != 8)
+- return;
+- *p_size *= 2;
++ return mlxsw_sp_port->mapping.width == 8 ? 2 * size_cells : size_cells;
+ }
+
+ enum mlxsw_sp_flood_type {
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_buffers.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_buffers.c
+index 19bf0768ed78..2fb2cbd4f229 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_buffers.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_buffers.c
+@@ -312,7 +312,7 @@ static int mlxsw_sp_port_pb_init(struct mlxsw_sp_port *mlxsw_sp_port)
+
+ if (i == MLXSW_SP_PB_UNUSED)
+ continue;
+- mlxsw_sp_port_headroom_8x_adjust(mlxsw_sp_port, &size);
++ size = mlxsw_sp_port_headroom_8x_adjust(mlxsw_sp_port, size);
+ mlxsw_reg_pbmc_lossy_buffer_pack(pbmc_pl, i, size);
+ }
+ mlxsw_reg_pbmc_lossy_buffer_pack(pbmc_pl,
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_span.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_span.c
+index 7c5032f9c8ff..76242c70d41a 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_span.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_span.c
+@@ -776,7 +776,7 @@ mlxsw_sp_span_port_buffsize_update(struct mlxsw_sp_port *mlxsw_sp_port, u16 mtu)
+ speed = 0;
+
+ buffsize = mlxsw_sp_span_buffsize_get(mlxsw_sp, speed, mtu);
+- mlxsw_sp_port_headroom_8x_adjust(mlxsw_sp_port, (u16 *) &buffsize);
++ buffsize = mlxsw_sp_port_headroom_8x_adjust(mlxsw_sp_port, buffsize);
+ mlxsw_reg_sbib_pack(sbib_pl, mlxsw_sp_port->local_port, buffsize);
+ return mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(sbib), sbib_pl);
+ }
+diff --git a/drivers/net/ethernet/pensando/ionic/ionic_lif.c b/drivers/net/ethernet/pensando/ionic/ionic_lif.c
+index 7aa037c3fe02..790d4854b8ef 100644
+--- a/drivers/net/ethernet/pensando/ionic/ionic_lif.c
++++ b/drivers/net/ethernet/pensando/ionic/ionic_lif.c
+@@ -1653,6 +1653,14 @@ int ionic_open(struct net_device *netdev)
+ if (err)
+ goto err_out;
+
++ err = netif_set_real_num_tx_queues(netdev, lif->nxqs);
++ if (err)
++ goto err_txrx_deinit;
++
++ err = netif_set_real_num_rx_queues(netdev, lif->nxqs);
++ if (err)
++ goto err_txrx_deinit;
++
+ /* don't start the queues until we have link */
+ if (netif_carrier_ok(netdev)) {
+ err = ionic_start_queues(lif);
+@@ -1674,8 +1682,8 @@ static void ionic_stop_queues(struct ionic_lif *lif)
+ if (!test_and_clear_bit(IONIC_LIF_F_UP, lif->state))
+ return;
+
+- ionic_txrx_disable(lif);
+ netif_tx_disable(lif->netdev);
++ ionic_txrx_disable(lif);
+ }
+
+ int ionic_stop(struct net_device *netdev)
+@@ -1941,18 +1949,19 @@ int ionic_reset_queues(struct ionic_lif *lif)
+ bool running;
+ int err = 0;
+
+- /* Put off the next watchdog timeout */
+- netif_trans_update(lif->netdev);
+-
+ err = ionic_wait_for_bit(lif, IONIC_LIF_F_QUEUE_RESET);
+ if (err)
+ return err;
+
+ running = netif_running(lif->netdev);
+- if (running)
++ if (running) {
++ netif_device_detach(lif->netdev);
+ err = ionic_stop(lif->netdev);
+- if (!err && running)
++ }
++ if (!err && running) {
+ ionic_open(lif->netdev);
++ netif_device_attach(lif->netdev);
++ }
+
+ clear_bit(IONIC_LIF_F_QUEUE_RESET, lif->state);
+
+diff --git a/drivers/net/ethernet/qlogic/qed/qed_cxt.c b/drivers/net/ethernet/qlogic/qed/qed_cxt.c
+index 1a636bad717d..aeed8939f410 100644
+--- a/drivers/net/ethernet/qlogic/qed/qed_cxt.c
++++ b/drivers/net/ethernet/qlogic/qed/qed_cxt.c
+@@ -270,7 +270,7 @@ static void qed_cxt_qm_iids(struct qed_hwfn *p_hwfn,
+ vf_tids += segs[NUM_TASK_PF_SEGMENTS].count;
+ }
+
+- iids->vf_cids += vf_cids * p_mngr->vf_count;
++ iids->vf_cids = vf_cids;
+ iids->tids += vf_tids * p_mngr->vf_count;
+
+ DP_VERBOSE(p_hwfn, QED_MSG_ILT,
+@@ -442,6 +442,20 @@ static struct qed_ilt_cli_blk *qed_cxt_set_blk(struct qed_ilt_cli_blk *p_blk)
+ return p_blk;
+ }
+
++static void qed_cxt_ilt_blk_reset(struct qed_hwfn *p_hwfn)
++{
++ struct qed_ilt_client_cfg *clients = p_hwfn->p_cxt_mngr->clients;
++ u32 cli_idx, blk_idx;
++
++ for (cli_idx = 0; cli_idx < MAX_ILT_CLIENTS; cli_idx++) {
++ for (blk_idx = 0; blk_idx < ILT_CLI_PF_BLOCKS; blk_idx++)
++ clients[cli_idx].pf_blks[blk_idx].total_size = 0;
++
++ for (blk_idx = 0; blk_idx < ILT_CLI_VF_BLOCKS; blk_idx++)
++ clients[cli_idx].vf_blks[blk_idx].total_size = 0;
++ }
++}
++
+ int qed_cxt_cfg_ilt_compute(struct qed_hwfn *p_hwfn, u32 *line_count)
+ {
+ struct qed_cxt_mngr *p_mngr = p_hwfn->p_cxt_mngr;
+@@ -461,6 +475,11 @@ int qed_cxt_cfg_ilt_compute(struct qed_hwfn *p_hwfn, u32 *line_count)
+
+ p_mngr->pf_start_line = RESC_START(p_hwfn, QED_ILT);
+
++ /* Reset all ILT blocks at the beginning of ILT computing in order
++ * to prevent memory allocation for irrelevant blocks afterwards.
++ */
++ qed_cxt_ilt_blk_reset(p_hwfn);
++
+ DP_VERBOSE(p_hwfn, QED_MSG_ILT,
+ "hwfn [%d] - Set context manager starting line to be 0x%08x\n",
+ p_hwfn->my_id, p_hwfn->p_cxt_mngr->pf_start_line);
+diff --git a/drivers/net/ethernet/qlogic/qed/qed_debug.c b/drivers/net/ethernet/qlogic/qed/qed_debug.c
+index f4eebaabb6d0..3e56b6056b47 100644
+--- a/drivers/net/ethernet/qlogic/qed/qed_debug.c
++++ b/drivers/net/ethernet/qlogic/qed/qed_debug.c
+@@ -5568,7 +5568,8 @@ static const char * const s_status_str[] = {
+
+ /* DBG_STATUS_INVALID_FILTER_TRIGGER_DWORDS */
+ "The filter/trigger constraint dword offsets are not enabled for recording",
+-
++ /* DBG_STATUS_NO_MATCHING_FRAMING_MODE */
++ "No matching framing mode",
+
+ /* DBG_STATUS_VFC_READ_ERROR */
+ "Error reading from VFC",
+diff --git a/drivers/net/ethernet/qlogic/qed/qed_dev.c b/drivers/net/ethernet/qlogic/qed/qed_dev.c
+index 38a65b984e47..9b00988fb77e 100644
+--- a/drivers/net/ethernet/qlogic/qed/qed_dev.c
++++ b/drivers/net/ethernet/qlogic/qed/qed_dev.c
+@@ -1368,6 +1368,8 @@ static void qed_dbg_user_data_free(struct qed_hwfn *p_hwfn)
+
+ void qed_resc_free(struct qed_dev *cdev)
+ {
++ struct qed_rdma_info *rdma_info;
++ struct qed_hwfn *p_hwfn;
+ int i;
+
+ if (IS_VF(cdev)) {
+@@ -1385,7 +1387,8 @@ void qed_resc_free(struct qed_dev *cdev)
+ qed_llh_free(cdev);
+
+ for_each_hwfn(cdev, i) {
+- struct qed_hwfn *p_hwfn = &cdev->hwfns[i];
++ p_hwfn = cdev->hwfns + i;
++ rdma_info = p_hwfn->p_rdma_info;
+
+ qed_cxt_mngr_free(p_hwfn);
+ qed_qm_info_free(p_hwfn);
+@@ -1404,8 +1407,10 @@ void qed_resc_free(struct qed_dev *cdev)
+ qed_ooo_free(p_hwfn);
+ }
+
+- if (QED_IS_RDMA_PERSONALITY(p_hwfn))
++ if (QED_IS_RDMA_PERSONALITY(p_hwfn) && rdma_info) {
++ qed_spq_unregister_async_cb(p_hwfn, rdma_info->proto);
+ qed_rdma_info_free(p_hwfn);
++ }
+
+ qed_iov_free(p_hwfn);
+ qed_l2_free(p_hwfn);
+diff --git a/drivers/net/ethernet/qlogic/qed/qed_iwarp.c b/drivers/net/ethernet/qlogic/qed/qed_iwarp.c
+index d2fe61a5cf56..5409a2da6106 100644
+--- a/drivers/net/ethernet/qlogic/qed/qed_iwarp.c
++++ b/drivers/net/ethernet/qlogic/qed/qed_iwarp.c
+@@ -2836,8 +2836,6 @@ int qed_iwarp_stop(struct qed_hwfn *p_hwfn)
+ if (rc)
+ return rc;
+
+- qed_spq_unregister_async_cb(p_hwfn, PROTOCOLID_IWARP);
+-
+ return qed_iwarp_ll2_stop(p_hwfn);
+ }
+
+diff --git a/drivers/net/ethernet/qlogic/qed/qed_roce.c b/drivers/net/ethernet/qlogic/qed/qed_roce.c
+index 37e70562a964..f15c26ef8870 100644
+--- a/drivers/net/ethernet/qlogic/qed/qed_roce.c
++++ b/drivers/net/ethernet/qlogic/qed/qed_roce.c
+@@ -113,7 +113,6 @@ void qed_roce_stop(struct qed_hwfn *p_hwfn)
+ break;
+ }
+ }
+- qed_spq_unregister_async_cb(p_hwfn, PROTOCOLID_ROCE);
+ }
+
+ static void qed_rdma_copy_gids(struct qed_rdma_qp *qp, __le32 *src_gid,
+diff --git a/drivers/net/ethernet/qlogic/qed/qed_vf.c b/drivers/net/ethernet/qlogic/qed/qed_vf.c
+index 856051f50eb7..adc2c8f3d48e 100644
+--- a/drivers/net/ethernet/qlogic/qed/qed_vf.c
++++ b/drivers/net/ethernet/qlogic/qed/qed_vf.c
+@@ -81,12 +81,17 @@ static void qed_vf_pf_req_end(struct qed_hwfn *p_hwfn, int req_status)
+ mutex_unlock(&(p_hwfn->vf_iov_info->mutex));
+ }
+
++#define QED_VF_CHANNEL_USLEEP_ITERATIONS 90
++#define QED_VF_CHANNEL_USLEEP_DELAY 100
++#define QED_VF_CHANNEL_MSLEEP_ITERATIONS 10
++#define QED_VF_CHANNEL_MSLEEP_DELAY 25
++
+ static int qed_send_msg2pf(struct qed_hwfn *p_hwfn, u8 *done, u32 resp_size)
+ {
+ union vfpf_tlvs *p_req = p_hwfn->vf_iov_info->vf2pf_request;
+ struct ustorm_trigger_vf_zone trigger;
+ struct ustorm_vf_zone *zone_data;
+- int rc = 0, time = 100;
++ int iter, rc = 0;
+
+ zone_data = (struct ustorm_vf_zone *)PXP_VF_BAR0_START_USDM_ZONE_B;
+
+@@ -126,11 +131,19 @@ static int qed_send_msg2pf(struct qed_hwfn *p_hwfn, u8 *done, u32 resp_size)
+ REG_WR(p_hwfn, (uintptr_t)&zone_data->trigger, *((u32 *)&trigger));
+
+ /* When PF would be done with the response, it would write back to the
+- * `done' address. Poll until then.
++ * `done' address from a coherent DMA zone. Poll until then.
+ */
+- while ((!*done) && time) {
+- msleep(25);
+- time--;
++
++ iter = QED_VF_CHANNEL_USLEEP_ITERATIONS;
++ while (!*done && iter--) {
++ udelay(QED_VF_CHANNEL_USLEEP_DELAY);
++ dma_rmb();
++ }
++
++ iter = QED_VF_CHANNEL_MSLEEP_ITERATIONS;
++ while (!*done && iter--) {
++ msleep(QED_VF_CHANNEL_MSLEEP_DELAY);
++ dma_rmb();
+ }
+
+ if (!*done) {
+diff --git a/drivers/net/ethernet/qlogic/qede/qede_main.c b/drivers/net/ethernet/qlogic/qede/qede_main.c
+index 1a83d1fd8ccd..26eb58e7e076 100644
+--- a/drivers/net/ethernet/qlogic/qede/qede_main.c
++++ b/drivers/net/ethernet/qlogic/qede/qede_main.c
+@@ -1158,7 +1158,7 @@ static int __qede_probe(struct pci_dev *pdev, u32 dp_module, u8 dp_level,
+
+ /* PTP not supported on VFs */
+ if (!is_vf)
+- qede_ptp_enable(edev, (mode == QEDE_PROBE_NORMAL));
++ qede_ptp_enable(edev);
+
+ edev->ops->register_ops(cdev, &qede_ll_ops, edev);
+
+@@ -1247,6 +1247,7 @@ static void __qede_remove(struct pci_dev *pdev, enum qede_remove_mode mode)
+ if (system_state == SYSTEM_POWER_OFF)
+ return;
+ qed_ops->common->remove(cdev);
++ edev->cdev = NULL;
+
+ /* Since this can happen out-of-sync with other flows,
+ * don't release the netdevice until after slowpath stop
+diff --git a/drivers/net/ethernet/qlogic/qede/qede_ptp.c b/drivers/net/ethernet/qlogic/qede/qede_ptp.c
+index 4c7f7a7fc151..cd5841a9415e 100644
+--- a/drivers/net/ethernet/qlogic/qede/qede_ptp.c
++++ b/drivers/net/ethernet/qlogic/qede/qede_ptp.c
+@@ -412,6 +412,7 @@ void qede_ptp_disable(struct qede_dev *edev)
+ if (ptp->tx_skb) {
+ dev_kfree_skb_any(ptp->tx_skb);
+ ptp->tx_skb = NULL;
++ clear_bit_unlock(QEDE_FLAGS_PTP_TX_IN_PRORGESS, &edev->flags);
+ }
+
+ /* Disable PTP in HW */
+@@ -423,7 +424,7 @@ void qede_ptp_disable(struct qede_dev *edev)
+ edev->ptp = NULL;
+ }
+
+-static int qede_ptp_init(struct qede_dev *edev, bool init_tc)
++static int qede_ptp_init(struct qede_dev *edev)
+ {
+ struct qede_ptp *ptp;
+ int rc;
+@@ -444,25 +445,19 @@ static int qede_ptp_init(struct qede_dev *edev, bool init_tc)
+ /* Init work queue for Tx timestamping */
+ INIT_WORK(&ptp->work, qede_ptp_task);
+
+- /* Init cyclecounter and timecounter. This is done only in the first
+- * load. If done in every load, PTP application will fail when doing
+- * unload / load (e.g. MTU change) while it is running.
+- */
+- if (init_tc) {
+- memset(&ptp->cc, 0, sizeof(ptp->cc));
+- ptp->cc.read = qede_ptp_read_cc;
+- ptp->cc.mask = CYCLECOUNTER_MASK(64);
+- ptp->cc.shift = 0;
+- ptp->cc.mult = 1;
+-
+- timecounter_init(&ptp->tc, &ptp->cc,
+- ktime_to_ns(ktime_get_real()));
+- }
++ /* Init cyclecounter and timecounter */
++ memset(&ptp->cc, 0, sizeof(ptp->cc));
++ ptp->cc.read = qede_ptp_read_cc;
++ ptp->cc.mask = CYCLECOUNTER_MASK(64);
++ ptp->cc.shift = 0;
++ ptp->cc.mult = 1;
+
+- return rc;
++ timecounter_init(&ptp->tc, &ptp->cc, ktime_to_ns(ktime_get_real()));
++
++ return 0;
+ }
+
+-int qede_ptp_enable(struct qede_dev *edev, bool init_tc)
++int qede_ptp_enable(struct qede_dev *edev)
+ {
+ struct qede_ptp *ptp;
+ int rc;
+@@ -483,7 +478,7 @@ int qede_ptp_enable(struct qede_dev *edev, bool init_tc)
+
+ edev->ptp = ptp;
+
+- rc = qede_ptp_init(edev, init_tc);
++ rc = qede_ptp_init(edev);
+ if (rc)
+ goto err1;
+
+diff --git a/drivers/net/ethernet/qlogic/qede/qede_ptp.h b/drivers/net/ethernet/qlogic/qede/qede_ptp.h
+index 691a14c4b2c5..89c7f3cf3ee2 100644
+--- a/drivers/net/ethernet/qlogic/qede/qede_ptp.h
++++ b/drivers/net/ethernet/qlogic/qede/qede_ptp.h
+@@ -41,7 +41,7 @@ void qede_ptp_rx_ts(struct qede_dev *edev, struct sk_buff *skb);
+ void qede_ptp_tx_ts(struct qede_dev *edev, struct sk_buff *skb);
+ int qede_ptp_hw_ts(struct qede_dev *edev, struct ifreq *req);
+ void qede_ptp_disable(struct qede_dev *edev);
+-int qede_ptp_enable(struct qede_dev *edev, bool init_tc);
++int qede_ptp_enable(struct qede_dev *edev);
+ int qede_ptp_get_ts_info(struct qede_dev *edev, struct ethtool_ts_info *ts);
+
+ static inline void qede_ptp_record_rx_ts(struct qede_dev *edev,
+diff --git a/drivers/net/ethernet/qlogic/qede/qede_rdma.c b/drivers/net/ethernet/qlogic/qede/qede_rdma.c
+index 2d873ae8a234..668ccc9d49f8 100644
+--- a/drivers/net/ethernet/qlogic/qede/qede_rdma.c
++++ b/drivers/net/ethernet/qlogic/qede/qede_rdma.c
+@@ -105,6 +105,7 @@ static void qede_rdma_destroy_wq(struct qede_dev *edev)
+
+ qede_rdma_cleanup_event(edev);
+ destroy_workqueue(edev->rdma_info.rdma_wq);
++ edev->rdma_info.rdma_wq = NULL;
+ }
+
+ int qede_rdma_dev_add(struct qede_dev *edev, bool recovery)
+@@ -325,7 +326,7 @@ static void qede_rdma_add_event(struct qede_dev *edev,
+ if (edev->rdma_info.exp_recovery)
+ return;
+
+- if (!edev->rdma_info.qedr_dev)
++ if (!edev->rdma_info.qedr_dev || !edev->rdma_info.rdma_wq)
+ return;
+
+ /* We don't want the cleanup flow to start while we're allocating and
+diff --git a/drivers/net/ethernet/realtek/r8169_main.c b/drivers/net/ethernet/realtek/r8169_main.c
+index c51b48dc3639..7bda2671bd5b 100644
+--- a/drivers/net/ethernet/realtek/r8169_main.c
++++ b/drivers/net/ethernet/realtek/r8169_main.c
+@@ -2192,8 +2192,11 @@ static void rtl_release_firmware(struct rtl8169_private *tp)
+ void r8169_apply_firmware(struct rtl8169_private *tp)
+ {
+ /* TODO: release firmware if rtl_fw_write_firmware signals failure. */
+- if (tp->rtl_fw)
++ if (tp->rtl_fw) {
+ rtl_fw_write_firmware(tp, tp->rtl_fw);
++ /* At least one firmware doesn't reset tp->ocp_base. */
++ tp->ocp_base = OCP_STD_PHY_BASE;
++ }
+ }
+
+ static void rtl8168_config_eee_mac(struct rtl8169_private *tp)
+diff --git a/drivers/net/ethernet/rocker/rocker_main.c b/drivers/net/ethernet/rocker/rocker_main.c
+index 7585cd2270ba..fc99e7118e49 100644
+--- a/drivers/net/ethernet/rocker/rocker_main.c
++++ b/drivers/net/ethernet/rocker/rocker_main.c
+@@ -647,10 +647,10 @@ static int rocker_dma_rings_init(struct rocker *rocker)
+ err_dma_event_ring_bufs_alloc:
+ rocker_dma_ring_destroy(rocker, &rocker->event_ring);
+ err_dma_event_ring_create:
++ rocker_dma_cmd_ring_waits_free(rocker);
++err_dma_cmd_ring_waits_alloc:
+ rocker_dma_ring_bufs_free(rocker, &rocker->cmd_ring,
+ PCI_DMA_BIDIRECTIONAL);
+-err_dma_cmd_ring_waits_alloc:
+- rocker_dma_cmd_ring_waits_free(rocker);
+ err_dma_cmd_ring_bufs_alloc:
+ rocker_dma_ring_destroy(rocker, &rocker->cmd_ring);
+ return err;
+diff --git a/drivers/net/ethernet/socionext/netsec.c b/drivers/net/ethernet/socionext/netsec.c
+index a5a0fb60193a..5a70c49bf454 100644
+--- a/drivers/net/ethernet/socionext/netsec.c
++++ b/drivers/net/ethernet/socionext/netsec.c
+@@ -1038,8 +1038,9 @@ static int netsec_process_rx(struct netsec_priv *priv, int budget)
+ skb->ip_summed = CHECKSUM_UNNECESSARY;
+
+ next:
+- if ((skb && napi_gro_receive(&priv->napi, skb) != GRO_DROP) ||
+- xdp_result) {
++ if (skb)
++ napi_gro_receive(&priv->napi, skb);
++ if (skb || xdp_result) {
+ ndev->stats.rx_packets++;
+ ndev->stats.rx_bytes += xdp.data_end - xdp.data;
+ }
+diff --git a/drivers/net/geneve.c b/drivers/net/geneve.c
+index 75266580b586..4661ef865807 100644
+--- a/drivers/net/geneve.c
++++ b/drivers/net/geneve.c
+@@ -1649,6 +1649,7 @@ static int geneve_changelink(struct net_device *dev, struct nlattr *tb[],
+ geneve->collect_md = metadata;
+ geneve->use_udp6_rx_checksums = use_udp6_rx_checksums;
+ geneve->ttl_inherit = ttl_inherit;
++ geneve->df = df;
+ geneve_unquiesce(geneve, gs4, gs6);
+
+ return 0;
+diff --git a/drivers/net/phy/Kconfig b/drivers/net/phy/Kconfig
+index 3fa33d27eeba..d140e3c93fe3 100644
+--- a/drivers/net/phy/Kconfig
++++ b/drivers/net/phy/Kconfig
+@@ -461,8 +461,7 @@ config MICROCHIP_T1_PHY
+ config MICROSEMI_PHY
+ tristate "Microsemi PHYs"
+ depends on MACSEC || MACSEC=n
+- select CRYPTO_AES
+- select CRYPTO_ECB
++ select CRYPTO_LIB_AES if MACSEC
+ ---help---
+ Currently supports VSC8514, VSC8530, VSC8531, VSC8540 and VSC8541 PHYs
+
+diff --git a/drivers/net/phy/mscc/mscc_macsec.c b/drivers/net/phy/mscc/mscc_macsec.c
+index b4d3dc4068e2..d53ca884b5c9 100644
+--- a/drivers/net/phy/mscc/mscc_macsec.c
++++ b/drivers/net/phy/mscc/mscc_macsec.c
+@@ -10,7 +10,7 @@
+ #include <linux/phy.h>
+ #include <dt-bindings/net/mscc-phy-vsc8531.h>
+
+-#include <crypto/skcipher.h>
++#include <crypto/aes.h>
+
+ #include <net/macsec.h>
+
+@@ -500,39 +500,17 @@ static u32 vsc8584_macsec_flow_context_id(struct macsec_flow *flow)
+ static int vsc8584_macsec_derive_key(const u8 key[MACSEC_KEYID_LEN],
+ u16 key_len, u8 hkey[16])
+ {
+- struct crypto_skcipher *tfm = crypto_alloc_skcipher("ecb(aes)", 0, 0);
+- struct skcipher_request *req = NULL;
+- struct scatterlist src, dst;
+- DECLARE_CRYPTO_WAIT(wait);
+- u32 input[4] = {0};
++ const u8 input[AES_BLOCK_SIZE] = {0};
++ struct crypto_aes_ctx ctx;
+ int ret;
+
+- if (IS_ERR(tfm))
+- return PTR_ERR(tfm);
+-
+- req = skcipher_request_alloc(tfm, GFP_KERNEL);
+- if (!req) {
+- ret = -ENOMEM;
+- goto out;
+- }
+-
+- skcipher_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG |
+- CRYPTO_TFM_REQ_MAY_SLEEP, crypto_req_done,
+- &wait);
+- ret = crypto_skcipher_setkey(tfm, key, key_len);
+- if (ret < 0)
+- goto out;
+-
+- sg_init_one(&src, input, 16);
+- sg_init_one(&dst, hkey, 16);
+- skcipher_request_set_crypt(req, &src, &dst, 16, NULL);
+-
+- ret = crypto_wait_req(crypto_skcipher_encrypt(req), &wait);
++ ret = aes_expandkey(&ctx, key, key_len);
++ if (ret)
++ return ret;
+
+-out:
+- skcipher_request_free(req);
+- crypto_free_skcipher(tfm);
+- return ret;
++ aes_encrypt(&ctx, hkey, input);
++ memzero_explicit(&ctx, sizeof(ctx));
++ return 0;
+ }
+
+ static int vsc8584_macsec_transformation(struct phy_device *phydev,
+diff --git a/drivers/net/phy/phy_device.c b/drivers/net/phy/phy_device.c
+index 697c74deb222..0881b4b92363 100644
+--- a/drivers/net/phy/phy_device.c
++++ b/drivers/net/phy/phy_device.c
+@@ -798,8 +798,10 @@ static int get_phy_id(struct mii_bus *bus, int addr, u32 *phy_id,
+
+ /* Grab the bits from PHYIR2, and put them in the lower half */
+ phy_reg = mdiobus_read(bus, addr, MII_PHYSID2);
+- if (phy_reg < 0)
+- return -EIO;
++ if (phy_reg < 0) {
++ /* returning -ENODEV doesn't stop bus scanning */
++ return (phy_reg == -EIO || phy_reg == -ENODEV) ? -ENODEV : -EIO;
++ }
+
+ *phy_id |= phy_reg;
+
+diff --git a/drivers/net/phy/phylink.c b/drivers/net/phy/phylink.c
+index 34ca12aec61b..ac38bead1cd2 100644
+--- a/drivers/net/phy/phylink.c
++++ b/drivers/net/phy/phylink.c
+@@ -1480,6 +1480,8 @@ int phylink_ethtool_set_pauseparam(struct phylink *pl,
+ struct ethtool_pauseparam *pause)
+ {
+ struct phylink_link_state *config = &pl->link_config;
++ bool manual_changed;
++ int pause_state;
+
+ ASSERT_RTNL();
+
+@@ -1494,15 +1496,15 @@ int phylink_ethtool_set_pauseparam(struct phylink *pl,
+ !pause->autoneg && pause->rx_pause != pause->tx_pause)
+ return -EINVAL;
+
+- mutex_lock(&pl->state_mutex);
+- config->pause = 0;
++ pause_state = 0;
+ if (pause->autoneg)
+- config->pause |= MLO_PAUSE_AN;
++ pause_state |= MLO_PAUSE_AN;
+ if (pause->rx_pause)
+- config->pause |= MLO_PAUSE_RX;
++ pause_state |= MLO_PAUSE_RX;
+ if (pause->tx_pause)
+- config->pause |= MLO_PAUSE_TX;
++ pause_state |= MLO_PAUSE_TX;
+
++ mutex_lock(&pl->state_mutex);
+ /*
+ * See the comments for linkmode_set_pause(), wrt the deficiencies
+ * with the current implementation. A solution to this issue would
+@@ -1519,18 +1521,35 @@ int phylink_ethtool_set_pauseparam(struct phylink *pl,
+ linkmode_set_pause(config->advertising, pause->tx_pause,
+ pause->rx_pause);
+
+- /* If we have a PHY, phylib will call our link state function if the
+- * mode has changed, which will trigger a resolve and update the MAC
+- * configuration.
++ manual_changed = (config->pause ^ pause_state) & MLO_PAUSE_AN ||
++ (!(pause_state & MLO_PAUSE_AN) &&
++ (config->pause ^ pause_state) & MLO_PAUSE_TXRX_MASK);
++
++ config->pause = pause_state;
++
++ if (!pl->phydev && !test_bit(PHYLINK_DISABLE_STOPPED,
++ &pl->phylink_disable_state))
++ phylink_pcs_config(pl, true, &pl->link_config);
++
++ mutex_unlock(&pl->state_mutex);
++
++ /* If we have a PHY, a change of the pause frame advertisement will
++ * cause phylib to renegotiate (if AN is enabled) which will in turn
++ * call our phylink_phy_change() and trigger a resolve. Note that
++ * we can't hold our state mutex while calling phy_set_asym_pause().
+ */
+- if (pl->phydev) {
++ if (pl->phydev)
+ phy_set_asym_pause(pl->phydev, pause->rx_pause,
+ pause->tx_pause);
+- } else if (!test_bit(PHYLINK_DISABLE_STOPPED,
+- &pl->phylink_disable_state)) {
+- phylink_pcs_config(pl, true, &pl->link_config);
++
++ /* If the manual pause settings changed, make sure we trigger a
++ * resolve to update their state; we can not guarantee that the
++ * link will cycle.
++ */
++ if (manual_changed) {
++ pl->mac_link_dropped = true;
++ phylink_run_resolve(pl);
+ }
+- mutex_unlock(&pl->state_mutex);
+
+ return 0;
+ }
+diff --git a/drivers/net/phy/smsc.c b/drivers/net/phy/smsc.c
+index 93da7d3d0954..74568ae16125 100644
+--- a/drivers/net/phy/smsc.c
++++ b/drivers/net/phy/smsc.c
+@@ -122,10 +122,13 @@ static int lan87xx_read_status(struct phy_device *phydev)
+ if (rc < 0)
+ return rc;
+
+- /* Wait max 640 ms to detect energy */
+- phy_read_poll_timeout(phydev, MII_LAN83C185_CTRL_STATUS, rc,
+- rc & MII_LAN83C185_ENERGYON, 10000,
+- 640000, true);
++ /* Wait max 640 ms to detect energy and the timeout is not
++ * an actual error.
++ */
++ read_poll_timeout(phy_read, rc,
++ rc & MII_LAN83C185_ENERGYON || rc < 0,
++ 10000, 640000, true, phydev,
++ MII_LAN83C185_CTRL_STATUS);
+ if (rc < 0)
+ return rc;
+
+diff --git a/drivers/net/usb/ax88179_178a.c b/drivers/net/usb/ax88179_178a.c
+index 93044cf1417a..1fe4cc28d154 100644
+--- a/drivers/net/usb/ax88179_178a.c
++++ b/drivers/net/usb/ax88179_178a.c
+@@ -1414,10 +1414,10 @@ static int ax88179_rx_fixup(struct usbnet *dev, struct sk_buff *skb)
+ }
+
+ if (pkt_cnt == 0) {
+- /* Skip IP alignment psudo header */
+- skb_pull(skb, 2);
+ skb->len = pkt_len;
+- skb_set_tail_pointer(skb, pkt_len);
++ /* Skip IP alignment pseudo header */
++ skb_pull(skb, 2);
++ skb_set_tail_pointer(skb, skb->len);
+ skb->truesize = pkt_len + sizeof(struct sk_buff);
+ ax88179_rx_checksum(skb, pkt_hdr);
+ return 1;
+@@ -1426,8 +1426,9 @@ static int ax88179_rx_fixup(struct usbnet *dev, struct sk_buff *skb)
+ ax_skb = skb_clone(skb, GFP_ATOMIC);
+ if (ax_skb) {
+ ax_skb->len = pkt_len;
+- ax_skb->data = skb->data + 2;
+- skb_set_tail_pointer(ax_skb, pkt_len);
++ /* Skip IP alignment pseudo header */
++ skb_pull(ax_skb, 2);
++ skb_set_tail_pointer(ax_skb, ax_skb->len);
+ ax_skb->truesize = pkt_len + sizeof(struct sk_buff);
+ ax88179_rx_checksum(ax_skb, pkt_hdr);
+ usbnet_skb_return(dev, ax_skb);
+diff --git a/drivers/net/wireguard/device.c b/drivers/net/wireguard/device.c
+index 3ac3f8570ca1..a8f151b1b5fa 100644
+--- a/drivers/net/wireguard/device.c
++++ b/drivers/net/wireguard/device.c
+@@ -45,17 +45,18 @@ static int wg_open(struct net_device *dev)
+ if (dev_v6)
+ dev_v6->cnf.addr_gen_mode = IN6_ADDR_GEN_MODE_NONE;
+
++ mutex_lock(&wg->device_update_lock);
+ ret = wg_socket_init(wg, wg->incoming_port);
+ if (ret < 0)
+- return ret;
+- mutex_lock(&wg->device_update_lock);
++ goto out;
+ list_for_each_entry(peer, &wg->peer_list, peer_list) {
+ wg_packet_send_staged_packets(peer);
+ if (peer->persistent_keepalive_interval)
+ wg_packet_send_keepalive(peer);
+ }
++out:
+ mutex_unlock(&wg->device_update_lock);
+- return 0;
++ return ret;
+ }
+
+ #ifdef CONFIG_PM_SLEEP
+@@ -225,6 +226,7 @@ static void wg_destruct(struct net_device *dev)
+ list_del(&wg->device_list);
+ rtnl_unlock();
+ mutex_lock(&wg->device_update_lock);
++ rcu_assign_pointer(wg->creating_net, NULL);
+ wg->incoming_port = 0;
+ wg_socket_reinit(wg, NULL, NULL);
+ /* The final references are cleared in the below calls to destroy_workqueue. */
+@@ -240,13 +242,11 @@ static void wg_destruct(struct net_device *dev)
+ skb_queue_purge(&wg->incoming_handshakes);
+ free_percpu(dev->tstats);
+ free_percpu(wg->incoming_handshakes_worker);
+- if (wg->have_creating_net_ref)
+- put_net(wg->creating_net);
+ kvfree(wg->index_hashtable);
+ kvfree(wg->peer_hashtable);
+ mutex_unlock(&wg->device_update_lock);
+
+- pr_debug("%s: Interface deleted\n", dev->name);
++ pr_debug("%s: Interface destroyed\n", dev->name);
+ free_netdev(dev);
+ }
+
+@@ -292,7 +292,7 @@ static int wg_newlink(struct net *src_net, struct net_device *dev,
+ struct wg_device *wg = netdev_priv(dev);
+ int ret = -ENOMEM;
+
+- wg->creating_net = src_net;
++ rcu_assign_pointer(wg->creating_net, src_net);
+ init_rwsem(&wg->static_identity.lock);
+ mutex_init(&wg->socket_update_lock);
+ mutex_init(&wg->device_update_lock);
+@@ -393,30 +393,26 @@ static struct rtnl_link_ops link_ops __read_mostly = {
+ .newlink = wg_newlink,
+ };
+
+-static int wg_netdevice_notification(struct notifier_block *nb,
+- unsigned long action, void *data)
++static void wg_netns_pre_exit(struct net *net)
+ {
+- struct net_device *dev = ((struct netdev_notifier_info *)data)->dev;
+- struct wg_device *wg = netdev_priv(dev);
+-
+- ASSERT_RTNL();
+-
+- if (action != NETDEV_REGISTER || dev->netdev_ops != &netdev_ops)
+- return 0;
++ struct wg_device *wg;
+
+- if (dev_net(dev) == wg->creating_net && wg->have_creating_net_ref) {
+- put_net(wg->creating_net);
+- wg->have_creating_net_ref = false;
+- } else if (dev_net(dev) != wg->creating_net &&
+- !wg->have_creating_net_ref) {
+- wg->have_creating_net_ref = true;
+- get_net(wg->creating_net);
++ rtnl_lock();
++ list_for_each_entry(wg, &device_list, device_list) {
++ if (rcu_access_pointer(wg->creating_net) == net) {
++ pr_debug("%s: Creating namespace exiting\n", wg->dev->name);
++ netif_carrier_off(wg->dev);
++ mutex_lock(&wg->device_update_lock);
++ rcu_assign_pointer(wg->creating_net, NULL);
++ wg_socket_reinit(wg, NULL, NULL);
++ mutex_unlock(&wg->device_update_lock);
++ }
+ }
+- return 0;
++ rtnl_unlock();
+ }
+
+-static struct notifier_block netdevice_notifier = {
+- .notifier_call = wg_netdevice_notification
++static struct pernet_operations pernet_ops = {
++ .pre_exit = wg_netns_pre_exit
+ };
+
+ int __init wg_device_init(void)
+@@ -429,18 +425,18 @@ int __init wg_device_init(void)
+ return ret;
+ #endif
+
+- ret = register_netdevice_notifier(&netdevice_notifier);
++ ret = register_pernet_device(&pernet_ops);
+ if (ret)
+ goto error_pm;
+
+ ret = rtnl_link_register(&link_ops);
+ if (ret)
+- goto error_netdevice;
++ goto error_pernet;
+
+ return 0;
+
+-error_netdevice:
+- unregister_netdevice_notifier(&netdevice_notifier);
++error_pernet:
++ unregister_pernet_device(&pernet_ops);
+ error_pm:
+ #ifdef CONFIG_PM_SLEEP
+ unregister_pm_notifier(&pm_notifier);
+@@ -451,7 +447,7 @@ error_pm:
+ void wg_device_uninit(void)
+ {
+ rtnl_link_unregister(&link_ops);
+- unregister_netdevice_notifier(&netdevice_notifier);
++ unregister_pernet_device(&pernet_ops);
+ #ifdef CONFIG_PM_SLEEP
+ unregister_pm_notifier(&pm_notifier);
+ #endif
+diff --git a/drivers/net/wireguard/device.h b/drivers/net/wireguard/device.h
+index b15a8be9d816..4d0144e16947 100644
+--- a/drivers/net/wireguard/device.h
++++ b/drivers/net/wireguard/device.h
+@@ -40,7 +40,7 @@ struct wg_device {
+ struct net_device *dev;
+ struct crypt_queue encrypt_queue, decrypt_queue;
+ struct sock __rcu *sock4, *sock6;
+- struct net *creating_net;
++ struct net __rcu *creating_net;
+ struct noise_static_identity static_identity;
+ struct workqueue_struct *handshake_receive_wq, *handshake_send_wq;
+ struct workqueue_struct *packet_crypt_wq;
+@@ -56,7 +56,6 @@ struct wg_device {
+ unsigned int num_peers, device_update_gen;
+ u32 fwmark;
+ u16 incoming_port;
+- bool have_creating_net_ref;
+ };
+
+ int wg_device_init(void);
+diff --git a/drivers/net/wireguard/netlink.c b/drivers/net/wireguard/netlink.c
+index 802099c8828a..20a4f3c0a0a1 100644
+--- a/drivers/net/wireguard/netlink.c
++++ b/drivers/net/wireguard/netlink.c
+@@ -511,11 +511,15 @@ static int wg_set_device(struct sk_buff *skb, struct genl_info *info)
+ if (flags & ~__WGDEVICE_F_ALL)
+ goto out;
+
+- ret = -EPERM;
+- if ((info->attrs[WGDEVICE_A_LISTEN_PORT] ||
+- info->attrs[WGDEVICE_A_FWMARK]) &&
+- !ns_capable(wg->creating_net->user_ns, CAP_NET_ADMIN))
+- goto out;
++ if (info->attrs[WGDEVICE_A_LISTEN_PORT] || info->attrs[WGDEVICE_A_FWMARK]) {
++ struct net *net;
++ rcu_read_lock();
++ net = rcu_dereference(wg->creating_net);
++ ret = !net || !ns_capable(net->user_ns, CAP_NET_ADMIN) ? -EPERM : 0;
++ rcu_read_unlock();
++ if (ret)
++ goto out;
++ }
+
+ ++wg->device_update_gen;
+
+diff --git a/drivers/net/wireguard/receive.c b/drivers/net/wireguard/receive.c
+index 91438144e4f7..9b2ab6fc91cd 100644
+--- a/drivers/net/wireguard/receive.c
++++ b/drivers/net/wireguard/receive.c
+@@ -414,14 +414,8 @@ static void wg_packet_consume_data_done(struct wg_peer *peer,
+ if (unlikely(routed_peer != peer))
+ goto dishonest_packet_peer;
+
+- if (unlikely(napi_gro_receive(&peer->napi, skb) == GRO_DROP)) {
+- ++dev->stats.rx_dropped;
+- net_dbg_ratelimited("%s: Failed to give packet to userspace from peer %llu (%pISpfsc)\n",
+- dev->name, peer->internal_id,
+- &peer->endpoint.addr);
+- } else {
+- update_rx_stats(peer, message_data_len(len_before_trim));
+- }
++ napi_gro_receive(&peer->napi, skb);
++ update_rx_stats(peer, message_data_len(len_before_trim));
+ return;
+
+ dishonest_packet_peer:
+diff --git a/drivers/net/wireguard/socket.c b/drivers/net/wireguard/socket.c
+index f9018027fc13..c33e2c81635f 100644
+--- a/drivers/net/wireguard/socket.c
++++ b/drivers/net/wireguard/socket.c
+@@ -347,6 +347,7 @@ static void set_sock_opts(struct socket *sock)
+
+ int wg_socket_init(struct wg_device *wg, u16 port)
+ {
++ struct net *net;
+ int ret;
+ struct udp_tunnel_sock_cfg cfg = {
+ .sk_user_data = wg,
+@@ -371,37 +372,47 @@ int wg_socket_init(struct wg_device *wg, u16 port)
+ };
+ #endif
+
++ rcu_read_lock();
++ net = rcu_dereference(wg->creating_net);
++ net = net ? maybe_get_net(net) : NULL;
++ rcu_read_unlock();
++ if (unlikely(!net))
++ return -ENONET;
++
+ #if IS_ENABLED(CONFIG_IPV6)
+ retry:
+ #endif
+
+- ret = udp_sock_create(wg->creating_net, &port4, &new4);
++ ret = udp_sock_create(net, &port4, &new4);
+ if (ret < 0) {
+ pr_err("%s: Could not create IPv4 socket\n", wg->dev->name);
+- return ret;
++ goto out;
+ }
+ set_sock_opts(new4);
+- setup_udp_tunnel_sock(wg->creating_net, new4, &cfg);
++ setup_udp_tunnel_sock(net, new4, &cfg);
+
+ #if IS_ENABLED(CONFIG_IPV6)
+ if (ipv6_mod_enabled()) {
+ port6.local_udp_port = inet_sk(new4->sk)->inet_sport;
+- ret = udp_sock_create(wg->creating_net, &port6, &new6);
++ ret = udp_sock_create(net, &port6, &new6);
+ if (ret < 0) {
+ udp_tunnel_sock_release(new4);
+ if (ret == -EADDRINUSE && !port && retries++ < 100)
+ goto retry;
+ pr_err("%s: Could not create IPv6 socket\n",
+ wg->dev->name);
+- return ret;
++ goto out;
+ }
+ set_sock_opts(new6);
+- setup_udp_tunnel_sock(wg->creating_net, new6, &cfg);
++ setup_udp_tunnel_sock(net, new6, &cfg);
+ }
+ #endif
+
+ wg_socket_reinit(wg, new4->sk, new6 ? new6->sk : NULL);
+- return 0;
++ ret = 0;
++out:
++ put_net(net);
++ return ret;
+ }
+
+ void wg_socket_reinit(struct wg_device *wg, struct sock *new4,
+diff --git a/drivers/net/wireless/ath/wil6210/txrx.c b/drivers/net/wireless/ath/wil6210/txrx.c
+index bc8c15fb609d..080e5aa60bea 100644
+--- a/drivers/net/wireless/ath/wil6210/txrx.c
++++ b/drivers/net/wireless/ath/wil6210/txrx.c
+@@ -897,7 +897,6 @@ static void wil_rx_handle_eapol(struct wil6210_vif *vif, struct sk_buff *skb)
+ void wil_netif_rx(struct sk_buff *skb, struct net_device *ndev, int cid,
+ struct wil_net_stats *stats, bool gro)
+ {
+- gro_result_t rc = GRO_NORMAL;
+ struct wil6210_vif *vif = ndev_to_vif(ndev);
+ struct wil6210_priv *wil = ndev_to_wil(ndev);
+ struct wireless_dev *wdev = vif_to_wdev(vif);
+@@ -908,22 +907,16 @@ void wil_netif_rx(struct sk_buff *skb, struct net_device *ndev, int cid,
+ */
+ int mcast = is_multicast_ether_addr(da);
+ struct sk_buff *xmit_skb = NULL;
+- static const char * const gro_res_str[] = {
+- [GRO_MERGED] = "GRO_MERGED",
+- [GRO_MERGED_FREE] = "GRO_MERGED_FREE",
+- [GRO_HELD] = "GRO_HELD",
+- [GRO_NORMAL] = "GRO_NORMAL",
+- [GRO_DROP] = "GRO_DROP",
+- [GRO_CONSUMED] = "GRO_CONSUMED",
+- };
+
+ if (wdev->iftype == NL80211_IFTYPE_STATION) {
+ sa = wil_skb_get_sa(skb);
+ if (mcast && ether_addr_equal(sa, ndev->dev_addr)) {
+ /* mcast packet looped back to us */
+- rc = GRO_DROP;
+ dev_kfree_skb(skb);
+- goto stats;
++ ndev->stats.rx_dropped++;
++ stats->rx_dropped++;
++ wil_dbg_txrx(wil, "Rx drop %d bytes\n", len);
++ return;
+ }
+ } else if (wdev->iftype == NL80211_IFTYPE_AP && !vif->ap_isolate) {
+ if (mcast) {
+@@ -967,26 +960,16 @@ void wil_netif_rx(struct sk_buff *skb, struct net_device *ndev, int cid,
+ wil_rx_handle_eapol(vif, skb);
+
+ if (gro)
+- rc = napi_gro_receive(&wil->napi_rx, skb);
++ napi_gro_receive(&wil->napi_rx, skb);
+ else
+ netif_rx_ni(skb);
+- wil_dbg_txrx(wil, "Rx complete %d bytes => %s\n",
+- len, gro_res_str[rc]);
+- }
+-stats:
+- /* statistics. rc set to GRO_NORMAL for AP bridging */
+- if (unlikely(rc == GRO_DROP)) {
+- ndev->stats.rx_dropped++;
+- stats->rx_dropped++;
+- wil_dbg_txrx(wil, "Rx drop %d bytes\n", len);
+- } else {
+- ndev->stats.rx_packets++;
+- stats->rx_packets++;
+- ndev->stats.rx_bytes += len;
+- stats->rx_bytes += len;
+- if (mcast)
+- ndev->stats.multicast++;
+ }
++ ndev->stats.rx_packets++;
++ stats->rx_packets++;
++ ndev->stats.rx_bytes += len;
++ stats->rx_bytes += len;
++ if (mcast)
++ ndev->stats.multicast++;
+ }
+
+ void wil_netif_rx_any(struct sk_buff *skb, struct net_device *ndev)
+diff --git a/drivers/nvdimm/region_devs.c b/drivers/nvdimm/region_devs.c
+index ccbb5b43b8b2..4502f9c4708d 100644
+--- a/drivers/nvdimm/region_devs.c
++++ b/drivers/nvdimm/region_devs.c
+@@ -679,18 +679,8 @@ static umode_t region_visible(struct kobject *kobj, struct attribute *a, int n)
+ return a->mode;
+ }
+
+- if (a == &dev_attr_align.attr) {
+- int i;
+-
+- for (i = 0; i < nd_region->ndr_mappings; i++) {
+- struct nd_mapping *nd_mapping = &nd_region->mapping[i];
+- struct nvdimm *nvdimm = nd_mapping->nvdimm;
+-
+- if (test_bit(NDD_LABELING, &nvdimm->flags))
+- return a->mode;
+- }
+- return 0;
+- }
++ if (a == &dev_attr_align.attr)
++ return a->mode;
+
+ if (a != &dev_attr_set_cookie.attr
+ && a != &dev_attr_available_size.attr)
+diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
+index 54603bd3e02d..17f172cf456a 100644
+--- a/drivers/nvme/host/multipath.c
++++ b/drivers/nvme/host/multipath.c
+@@ -409,11 +409,10 @@ static void nvme_mpath_set_live(struct nvme_ns *ns)
+ {
+ struct nvme_ns_head *head = ns->head;
+
+- lockdep_assert_held(&ns->head->lock);
+-
+ if (!head->disk)
+ return;
+
++ mutex_lock(&head->lock);
+ if (!(head->disk->flags & GENHD_FL_UP))
+ device_add_disk(&head->subsys->dev, head->disk,
+ nvme_ns_id_attr_groups);
+@@ -426,9 +425,10 @@ static void nvme_mpath_set_live(struct nvme_ns *ns)
+ __nvme_find_path(head, node);
+ srcu_read_unlock(&head->srcu, srcu_idx);
+ }
++ mutex_unlock(&head->lock);
+
+- synchronize_srcu(&ns->head->srcu);
+- kblockd_schedule_work(&ns->head->requeue_work);
++ synchronize_srcu(&head->srcu);
++ kblockd_schedule_work(&head->requeue_work);
+ }
+
+ static int nvme_parse_ana_log(struct nvme_ctrl *ctrl, void *data,
+@@ -483,14 +483,12 @@ static inline bool nvme_state_is_live(enum nvme_ana_state state)
+ static void nvme_update_ns_ana_state(struct nvme_ana_group_desc *desc,
+ struct nvme_ns *ns)
+ {
+- mutex_lock(&ns->head->lock);
+ ns->ana_grpid = le32_to_cpu(desc->grpid);
+ ns->ana_state = desc->state;
+ clear_bit(NVME_NS_ANA_PENDING, &ns->flags);
+
+ if (nvme_state_is_live(ns->ana_state))
+ nvme_mpath_set_live(ns);
+- mutex_unlock(&ns->head->lock);
+ }
+
+ static int nvme_update_ana_state(struct nvme_ctrl *ctrl,
+@@ -661,10 +659,8 @@ void nvme_mpath_add_disk(struct nvme_ns *ns, struct nvme_id_ns *id)
+ nvme_parse_ana_log(ns->ctrl, ns, nvme_set_ns_ana_state);
+ mutex_unlock(&ns->ctrl->ana_lock);
+ } else {
+- mutex_lock(&ns->head->lock);
+ ns->ana_state = NVME_ANA_OPTIMIZED;
+ nvme_mpath_set_live(ns);
+- mutex_unlock(&ns->head->lock);
+ }
+ }
+
+diff --git a/drivers/nvme/target/core.c b/drivers/nvme/target/core.c
+index aa5ca222c6f5..96deaf348466 100644
+--- a/drivers/nvme/target/core.c
++++ b/drivers/nvme/target/core.c
+@@ -129,30 +129,41 @@ static u32 nvmet_async_event_result(struct nvmet_async_event *aen)
+ return aen->event_type | (aen->event_info << 8) | (aen->log_page << 16);
+ }
+
+-static void nvmet_async_events_process(struct nvmet_ctrl *ctrl, u16 status)
++static void nvmet_async_events_failall(struct nvmet_ctrl *ctrl)
+ {
+- struct nvmet_async_event *aen;
++ u16 status = NVME_SC_INTERNAL | NVME_SC_DNR;
+ struct nvmet_req *req;
+
+- while (1) {
++ mutex_lock(&ctrl->lock);
++ while (ctrl->nr_async_event_cmds) {
++ req = ctrl->async_event_cmds[--ctrl->nr_async_event_cmds];
++ mutex_unlock(&ctrl->lock);
++ nvmet_req_complete(req, status);
+ mutex_lock(&ctrl->lock);
+- aen = list_first_entry_or_null(&ctrl->async_events,
+- struct nvmet_async_event, entry);
+- if (!aen || !ctrl->nr_async_event_cmds) {
+- mutex_unlock(&ctrl->lock);
+- break;
+- }
++ }
++ mutex_unlock(&ctrl->lock);
++}
++
++static void nvmet_async_events_process(struct nvmet_ctrl *ctrl)
++{
++ struct nvmet_async_event *aen;
++ struct nvmet_req *req;
+
++ mutex_lock(&ctrl->lock);
++ while (ctrl->nr_async_event_cmds && !list_empty(&ctrl->async_events)) {
++ aen = list_first_entry(&ctrl->async_events,
++ struct nvmet_async_event, entry);
+ req = ctrl->async_event_cmds[--ctrl->nr_async_event_cmds];
+- if (status == 0)
+- nvmet_set_result(req, nvmet_async_event_result(aen));
++ nvmet_set_result(req, nvmet_async_event_result(aen));
+
+ list_del(&aen->entry);
+ kfree(aen);
+
+ mutex_unlock(&ctrl->lock);
+- nvmet_req_complete(req, status);
++ nvmet_req_complete(req, 0);
++ mutex_lock(&ctrl->lock);
+ }
++ mutex_unlock(&ctrl->lock);
+ }
+
+ static void nvmet_async_events_free(struct nvmet_ctrl *ctrl)
+@@ -172,7 +183,7 @@ static void nvmet_async_event_work(struct work_struct *work)
+ struct nvmet_ctrl *ctrl =
+ container_of(work, struct nvmet_ctrl, async_event_work);
+
+- nvmet_async_events_process(ctrl, 0);
++ nvmet_async_events_process(ctrl);
+ }
+
+ void nvmet_add_async_event(struct nvmet_ctrl *ctrl, u8 event_type,
+@@ -755,7 +766,6 @@ static void nvmet_confirm_sq(struct percpu_ref *ref)
+
+ void nvmet_sq_destroy(struct nvmet_sq *sq)
+ {
+- u16 status = NVME_SC_INTERNAL | NVME_SC_DNR;
+ struct nvmet_ctrl *ctrl = sq->ctrl;
+
+ /*
+@@ -763,7 +773,7 @@ void nvmet_sq_destroy(struct nvmet_sq *sq)
+ * queue doesn't have outstanding requests on it.
+ */
+ if (ctrl && ctrl->sqs && ctrl->sqs[0] == sq)
+- nvmet_async_events_process(ctrl, status);
++ nvmet_async_events_failall(ctrl);
+ percpu_ref_kill_and_confirm(&sq->ref, nvmet_confirm_sq);
+ wait_for_completion(&sq->confirm_done);
+ wait_for_completion(&sq->free_done);
+diff --git a/drivers/of/of_mdio.c b/drivers/of/of_mdio.c
+index 9f982c0627a0..95a3bb2e5eab 100644
+--- a/drivers/of/of_mdio.c
++++ b/drivers/of/of_mdio.c
+@@ -303,10 +303,15 @@ int of_mdiobus_register(struct mii_bus *mdio, struct device_node *np)
+ child, addr);
+
+ if (of_mdiobus_child_is_phy(child)) {
++ /* -ENODEV is the return code that PHYLIB has
++ * standardized on to indicate that bus
++ * scanning should continue.
++ */
+ rc = of_mdiobus_register_phy(mdio, child, addr);
+- if (rc && rc != -ENODEV)
++ if (!rc)
++ break;
++ if (rc != -ENODEV)
+ goto unregister;
+- break;
+ }
+ }
+ }
+diff --git a/drivers/pinctrl/qcom/pinctrl-spmi-gpio.c b/drivers/pinctrl/qcom/pinctrl-spmi-gpio.c
+index fe0be8a6ebb7..092a48e4dff5 100644
+--- a/drivers/pinctrl/qcom/pinctrl-spmi-gpio.c
++++ b/drivers/pinctrl/qcom/pinctrl-spmi-gpio.c
+@@ -170,6 +170,7 @@ struct pmic_gpio_state {
+ struct regmap *map;
+ struct pinctrl_dev *ctrl;
+ struct gpio_chip chip;
++ struct irq_chip irq;
+ };
+
+ static const struct pinconf_generic_params pmic_gpio_bindings[] = {
+@@ -917,16 +918,6 @@ static int pmic_gpio_populate(struct pmic_gpio_state *state,
+ return 0;
+ }
+
+-static struct irq_chip pmic_gpio_irq_chip = {
+- .name = "spmi-gpio",
+- .irq_ack = irq_chip_ack_parent,
+- .irq_mask = irq_chip_mask_parent,
+- .irq_unmask = irq_chip_unmask_parent,
+- .irq_set_type = irq_chip_set_type_parent,
+- .irq_set_wake = irq_chip_set_wake_parent,
+- .flags = IRQCHIP_MASK_ON_SUSPEND,
+-};
+-
+ static int pmic_gpio_domain_translate(struct irq_domain *domain,
+ struct irq_fwspec *fwspec,
+ unsigned long *hwirq,
+@@ -1053,8 +1044,16 @@ static int pmic_gpio_probe(struct platform_device *pdev)
+ if (!parent_domain)
+ return -ENXIO;
+
++ state->irq.name = "spmi-gpio",
++ state->irq.irq_ack = irq_chip_ack_parent,
++ state->irq.irq_mask = irq_chip_mask_parent,
++ state->irq.irq_unmask = irq_chip_unmask_parent,
++ state->irq.irq_set_type = irq_chip_set_type_parent,
++ state->irq.irq_set_wake = irq_chip_set_wake_parent,
++ state->irq.flags = IRQCHIP_MASK_ON_SUSPEND,
++
+ girq = &state->chip.irq;
+- girq->chip = &pmic_gpio_irq_chip;
++ girq->chip = &state->irq;
+ girq->default_type = IRQ_TYPE_NONE;
+ girq->handler = handle_level_irq;
+ girq->fwnode = of_node_to_fwnode(state->dev->of_node);
+diff --git a/drivers/pinctrl/tegra/pinctrl-tegra.c b/drivers/pinctrl/tegra/pinctrl-tegra.c
+index 21661f6490d6..195cfe557511 100644
+--- a/drivers/pinctrl/tegra/pinctrl-tegra.c
++++ b/drivers/pinctrl/tegra/pinctrl-tegra.c
+@@ -731,8 +731,8 @@ static int tegra_pinctrl_resume(struct device *dev)
+ }
+
+ const struct dev_pm_ops tegra_pinctrl_pm = {
+- .suspend = &tegra_pinctrl_suspend,
+- .resume = &tegra_pinctrl_resume
++ .suspend_noirq = &tegra_pinctrl_suspend,
++ .resume_noirq = &tegra_pinctrl_resume
+ };
+
+ static bool tegra_pinctrl_gpio_node_has_range(struct tegra_pmx *pmx)
+diff --git a/drivers/regulator/da9063-regulator.c b/drivers/regulator/da9063-regulator.c
+index e1d6c8f6d40b..fe65b5acaf28 100644
+--- a/drivers/regulator/da9063-regulator.c
++++ b/drivers/regulator/da9063-regulator.c
+@@ -512,7 +512,6 @@ static const struct da9063_regulator_info da9063_regulator_info[] = {
+ },
+ {
+ DA9063_LDO(DA9063, LDO9, 950, 50, 3600),
+- .suspend = BFIELD(DA9063_REG_LDO9_CONT, DA9063_VLDO9_SEL),
+ },
+ {
+ DA9063_LDO(DA9063, LDO11, 900, 50, 3600),
+diff --git a/drivers/regulator/pfuze100-regulator.c b/drivers/regulator/pfuze100-regulator.c
+index 689537927f6f..4c8e8b472287 100644
+--- a/drivers/regulator/pfuze100-regulator.c
++++ b/drivers/regulator/pfuze100-regulator.c
+@@ -209,6 +209,19 @@ static const struct regulator_ops pfuze100_swb_regulator_ops = {
+
+ };
+
++static const struct regulator_ops pfuze3000_sw_regulator_ops = {
++ .enable = regulator_enable_regmap,
++ .disable = regulator_disable_regmap,
++ .is_enabled = regulator_is_enabled_regmap,
++ .list_voltage = regulator_list_voltage_table,
++ .map_voltage = regulator_map_voltage_ascend,
++ .set_voltage_sel = regulator_set_voltage_sel_regmap,
++ .get_voltage_sel = regulator_get_voltage_sel_regmap,
++ .set_voltage_time_sel = regulator_set_voltage_time_sel,
++ .set_ramp_delay = pfuze100_set_ramp_delay,
++
++};
++
+ #define PFUZE100_FIXED_REG(_chip, _name, base, voltage) \
+ [_chip ## _ ## _name] = { \
+ .desc = { \
+@@ -318,23 +331,28 @@ static const struct regulator_ops pfuze100_swb_regulator_ops = {
+ .stby_mask = 0x20, \
+ }
+
+-
+-#define PFUZE3000_SW2_REG(_chip, _name, base, min, max, step) { \
+- .desc = { \
+- .name = #_name,\
+- .n_voltages = ((max) - (min)) / (step) + 1, \
+- .ops = &pfuze100_sw_regulator_ops, \
+- .type = REGULATOR_VOLTAGE, \
+- .id = _chip ## _ ## _name, \
+- .owner = THIS_MODULE, \
+- .min_uV = (min), \
+- .uV_step = (step), \
+- .vsel_reg = (base) + PFUZE100_VOL_OFFSET, \
+- .vsel_mask = 0x7, \
+- }, \
+- .stby_reg = (base) + PFUZE100_STANDBY_OFFSET, \
+- .stby_mask = 0x7, \
+-}
++/* No linar case for the some switches of PFUZE3000 */
++#define PFUZE3000_SW_REG(_chip, _name, base, mask, voltages) \
++ [_chip ## _ ## _name] = { \
++ .desc = { \
++ .name = #_name, \
++ .n_voltages = ARRAY_SIZE(voltages), \
++ .ops = &pfuze3000_sw_regulator_ops, \
++ .type = REGULATOR_VOLTAGE, \
++ .id = _chip ## _ ## _name, \
++ .owner = THIS_MODULE, \
++ .volt_table = voltages, \
++ .vsel_reg = (base) + PFUZE100_VOL_OFFSET, \
++ .vsel_mask = (mask), \
++ .enable_reg = (base) + PFUZE100_MODE_OFFSET, \
++ .enable_mask = 0xf, \
++ .enable_val = 0x8, \
++ .enable_time = 500, \
++ }, \
++ .stby_reg = (base) + PFUZE100_STANDBY_OFFSET, \
++ .stby_mask = (mask), \
++ .sw_reg = true, \
++ }
+
+ #define PFUZE3000_SW3_REG(_chip, _name, base, min, max, step) { \
+ .desc = { \
+@@ -391,9 +409,9 @@ static struct pfuze_regulator pfuze200_regulators[] = {
+ };
+
+ static struct pfuze_regulator pfuze3000_regulators[] = {
+- PFUZE100_SWB_REG(PFUZE3000, SW1A, PFUZE100_SW1ABVOL, 0x1f, pfuze3000_sw1a),
++ PFUZE3000_SW_REG(PFUZE3000, SW1A, PFUZE100_SW1ABVOL, 0x1f, pfuze3000_sw1a),
+ PFUZE100_SW_REG(PFUZE3000, SW1B, PFUZE100_SW1CVOL, 700000, 1475000, 25000),
+- PFUZE100_SWB_REG(PFUZE3000, SW2, PFUZE100_SW2VOL, 0x7, pfuze3000_sw2lo),
++ PFUZE3000_SW_REG(PFUZE3000, SW2, PFUZE100_SW2VOL, 0x7, pfuze3000_sw2lo),
+ PFUZE3000_SW3_REG(PFUZE3000, SW3, PFUZE100_SW3AVOL, 900000, 1650000, 50000),
+ PFUZE100_SWB_REG(PFUZE3000, SWBST, PFUZE100_SWBSTCON1, 0x3, pfuze100_swbst),
+ PFUZE100_SWB_REG(PFUZE3000, VSNVS, PFUZE100_VSNVSVOL, 0x7, pfuze100_vsnvs),
+@@ -407,8 +425,8 @@ static struct pfuze_regulator pfuze3000_regulators[] = {
+ };
+
+ static struct pfuze_regulator pfuze3001_regulators[] = {
+- PFUZE100_SWB_REG(PFUZE3001, SW1, PFUZE100_SW1ABVOL, 0x1f, pfuze3000_sw1a),
+- PFUZE100_SWB_REG(PFUZE3001, SW2, PFUZE100_SW2VOL, 0x7, pfuze3000_sw2lo),
++ PFUZE3000_SW_REG(PFUZE3001, SW1, PFUZE100_SW1ABVOL, 0x1f, pfuze3000_sw1a),
++ PFUZE3000_SW_REG(PFUZE3001, SW2, PFUZE100_SW2VOL, 0x7, pfuze3000_sw2lo),
+ PFUZE3000_SW3_REG(PFUZE3001, SW3, PFUZE100_SW3AVOL, 900000, 1650000, 50000),
+ PFUZE100_SWB_REG(PFUZE3001, VSNVS, PFUZE100_VSNVSVOL, 0x7, pfuze100_vsnvs),
+ PFUZE100_VGEN_REG(PFUZE3001, VLDO1, PFUZE100_VGEN1VOL, 1800000, 3300000, 100000),
+diff --git a/drivers/s390/net/qeth_core_main.c b/drivers/s390/net/qeth_core_main.c
+index 569966bdc513..60d675fefac7 100644
+--- a/drivers/s390/net/qeth_core_main.c
++++ b/drivers/s390/net/qeth_core_main.c
+@@ -4265,9 +4265,6 @@ static int qeth_setadpparms_set_access_ctrl_cb(struct qeth_card *card,
+ int fallback = *(int *)reply->param;
+
+ QETH_CARD_TEXT(card, 4, "setaccb");
+- if (cmd->hdr.return_code)
+- return -EIO;
+- qeth_setadpparms_inspect_rc(cmd);
+
+ access_ctrl_req = &cmd->data.setadapterparms.data.set_access_ctrl;
+ QETH_CARD_TEXT_(card, 2, "rc=%d",
+@@ -4277,7 +4274,7 @@ static int qeth_setadpparms_set_access_ctrl_cb(struct qeth_card *card,
+ QETH_DBF_MESSAGE(3, "ERR:SET_ACCESS_CTRL(%#x) on device %x: %#x\n",
+ access_ctrl_req->subcmd_code, CARD_DEVID(card),
+ cmd->data.setadapterparms.hdr.return_code);
+- switch (cmd->data.setadapterparms.hdr.return_code) {
++ switch (qeth_setadpparms_inspect_rc(cmd)) {
+ case SET_ACCESS_CTRL_RC_SUCCESS:
+ if (card->options.isolation == ISOLATION_MODE_NONE) {
+ dev_info(&card->gdev->dev,
+diff --git a/drivers/s390/scsi/zfcp_erp.c b/drivers/s390/scsi/zfcp_erp.c
+index 3d0bc000f500..c621e8f0897f 100644
+--- a/drivers/s390/scsi/zfcp_erp.c
++++ b/drivers/s390/scsi/zfcp_erp.c
+@@ -576,7 +576,10 @@ static void zfcp_erp_strategy_check_fsfreq(struct zfcp_erp_action *act)
+ ZFCP_STATUS_ERP_TIMEDOUT)) {
+ req->status |= ZFCP_STATUS_FSFREQ_DISMISSED;
+ zfcp_dbf_rec_run("erscf_1", act);
+- req->erp_action = NULL;
++ /* lock-free concurrent access with
++ * zfcp_erp_timeout_handler()
++ */
++ WRITE_ONCE(req->erp_action, NULL);
+ }
+ if (act->status & ZFCP_STATUS_ERP_TIMEDOUT)
+ zfcp_dbf_rec_run("erscf_2", act);
+@@ -612,8 +615,14 @@ void zfcp_erp_notify(struct zfcp_erp_action *erp_action, unsigned long set_mask)
+ void zfcp_erp_timeout_handler(struct timer_list *t)
+ {
+ struct zfcp_fsf_req *fsf_req = from_timer(fsf_req, t, timer);
+- struct zfcp_erp_action *act = fsf_req->erp_action;
++ struct zfcp_erp_action *act;
+
++ if (fsf_req->status & ZFCP_STATUS_FSFREQ_DISMISSED)
++ return;
++ /* lock-free concurrent access with zfcp_erp_strategy_check_fsfreq() */
++ act = READ_ONCE(fsf_req->erp_action);
++ if (!act)
++ return;
+ zfcp_erp_notify(act, ZFCP_STATUS_ERP_TIMEDOUT);
+ }
+
+diff --git a/drivers/scsi/lpfc/lpfc_init.c b/drivers/scsi/lpfc/lpfc_init.c
+index 4104bdcdbb6f..70be1f5de873 100644
+--- a/drivers/scsi/lpfc/lpfc_init.c
++++ b/drivers/scsi/lpfc/lpfc_init.c
+@@ -11895,7 +11895,8 @@ lpfc_sli4_hba_unset(struct lpfc_hba *phba)
+ lpfc_sli4_xri_exchange_busy_wait(phba);
+
+ /* per-phba callback de-registration for hotplug event */
+- lpfc_cpuhp_remove(phba);
++ if (phba->pport)
++ lpfc_cpuhp_remove(phba);
+
+ /* Disable PCI subsystem interrupt */
+ lpfc_sli4_disable_intr(phba);
+diff --git a/drivers/scsi/qla2xxx/qla_gs.c b/drivers/scsi/qla2xxx/qla_gs.c
+index 42c3ad27f1cb..df670fba2ab8 100644
+--- a/drivers/scsi/qla2xxx/qla_gs.c
++++ b/drivers/scsi/qla2xxx/qla_gs.c
+@@ -3496,7 +3496,9 @@ void qla24xx_async_gnnft_done(scsi_qla_host_t *vha, srb_t *sp)
+ qla2x00_clear_loop_id(fcport);
+ fcport->flags |= FCF_FABRIC_DEVICE;
+ } else if (fcport->d_id.b24 != rp->id.b24 ||
+- fcport->scan_needed) {
++ (fcport->scan_needed &&
++ fcport->port_type != FCT_INITIATOR &&
++ fcport->port_type != FCT_NVME_INITIATOR)) {
+ qlt_schedule_sess_for_deletion(fcport);
+ }
+ fcport->d_id.b24 = rp->id.b24;
+diff --git a/drivers/soc/imx/soc-imx8m.c b/drivers/soc/imx/soc-imx8m.c
+index 719e1f189ebf..ada0d8804d84 100644
+--- a/drivers/soc/imx/soc-imx8m.c
++++ b/drivers/soc/imx/soc-imx8m.c
+@@ -22,6 +22,8 @@
+ #define OCOTP_UID_LOW 0x410
+ #define OCOTP_UID_HIGH 0x420
+
++#define IMX8MP_OCOTP_UID_OFFSET 0x10
++
+ /* Same as ANADIG_DIGPROG_IMX7D */
+ #define ANADIG_DIGPROG_IMX8MM 0x800
+
+@@ -88,6 +90,8 @@ static void __init imx8mm_soc_uid(void)
+ {
+ void __iomem *ocotp_base;
+ struct device_node *np;
++ u32 offset = of_machine_is_compatible("fsl,imx8mp") ?
++ IMX8MP_OCOTP_UID_OFFSET : 0;
+
+ np = of_find_compatible_node(NULL, NULL, "fsl,imx8mm-ocotp");
+ if (!np)
+@@ -96,9 +100,9 @@ static void __init imx8mm_soc_uid(void)
+ ocotp_base = of_iomap(np, 0);
+ WARN_ON(!ocotp_base);
+
+- soc_uid = readl_relaxed(ocotp_base + OCOTP_UID_HIGH);
++ soc_uid = readl_relaxed(ocotp_base + OCOTP_UID_HIGH + offset);
+ soc_uid <<= 32;
+- soc_uid |= readl_relaxed(ocotp_base + OCOTP_UID_LOW);
++ soc_uid |= readl_relaxed(ocotp_base + OCOTP_UID_LOW + offset);
+
+ iounmap(ocotp_base);
+ of_node_put(np);
+diff --git a/drivers/spi/spi-fsl-dspi.c b/drivers/spi/spi-fsl-dspi.c
+index 2e9f9adc5900..88176eaca448 100644
+--- a/drivers/spi/spi-fsl-dspi.c
++++ b/drivers/spi/spi-fsl-dspi.c
+@@ -584,14 +584,14 @@ static void dspi_release_dma(struct fsl_dspi *dspi)
+ return;
+
+ if (dma->chan_tx) {
+- dma_unmap_single(dma->chan_tx->device->dev, dma->tx_dma_phys,
+- dma_bufsize, DMA_TO_DEVICE);
++ dma_free_coherent(dma->chan_tx->device->dev, dma_bufsize,
++ dma->tx_dma_buf, dma->tx_dma_phys);
+ dma_release_channel(dma->chan_tx);
+ }
+
+ if (dma->chan_rx) {
+- dma_unmap_single(dma->chan_rx->device->dev, dma->rx_dma_phys,
+- dma_bufsize, DMA_FROM_DEVICE);
++ dma_free_coherent(dma->chan_rx->device->dev, dma_bufsize,
++ dma->rx_dma_buf, dma->rx_dma_phys);
+ dma_release_channel(dma->chan_rx);
+ }
+ }
+diff --git a/drivers/staging/rtl8723bs/core/rtw_wlan_util.c b/drivers/staging/rtl8723bs/core/rtw_wlan_util.c
+index 110338dbe372..cc60f6a28d70 100644
+--- a/drivers/staging/rtl8723bs/core/rtw_wlan_util.c
++++ b/drivers/staging/rtl8723bs/core/rtw_wlan_util.c
+@@ -1830,12 +1830,14 @@ int update_sta_support_rate(struct adapter *padapter, u8 *pvar_ie, uint var_ie_l
+ pIE = (struct ndis_80211_var_ie *)rtw_get_ie(pvar_ie, _SUPPORTEDRATES_IE_, &ie_len, var_ie_len);
+ if (!pIE)
+ return _FAIL;
++ if (ie_len > sizeof(pmlmeinfo->FW_sta_info[cam_idx].SupportedRates))
++ return _FAIL;
+
+ memcpy(pmlmeinfo->FW_sta_info[cam_idx].SupportedRates, pIE->data, ie_len);
+ supportRateNum = ie_len;
+
+ pIE = (struct ndis_80211_var_ie *)rtw_get_ie(pvar_ie, _EXT_SUPPORTEDRATES_IE_, &ie_len, var_ie_len);
+- if (pIE)
++ if (pIE && (ie_len <= sizeof(pmlmeinfo->FW_sta_info[cam_idx].SupportedRates) - supportRateNum))
+ memcpy((pmlmeinfo->FW_sta_info[cam_idx].SupportedRates + supportRateNum), pIE->data, ie_len);
+
+ return _SUCCESS;
+diff --git a/drivers/tty/hvc/hvc_console.c b/drivers/tty/hvc/hvc_console.c
+index f8e43a6faea9..cdcc64ea2554 100644
+--- a/drivers/tty/hvc/hvc_console.c
++++ b/drivers/tty/hvc/hvc_console.c
+@@ -75,8 +75,6 @@ static LIST_HEAD(hvc_structs);
+ */
+ static DEFINE_MUTEX(hvc_structs_mutex);
+
+-/* Mutex to serialize hvc_open */
+-static DEFINE_MUTEX(hvc_open_mutex);
+ /*
+ * This value is used to assign a tty->index value to a hvc_struct based
+ * upon order of exposure via hvc_probe(), when we can not match it to
+@@ -348,24 +346,16 @@ static int hvc_install(struct tty_driver *driver, struct tty_struct *tty)
+ */
+ static int hvc_open(struct tty_struct *tty, struct file * filp)
+ {
+- struct hvc_struct *hp;
++ struct hvc_struct *hp = tty->driver_data;
+ unsigned long flags;
+ int rc = 0;
+
+- mutex_lock(&hvc_open_mutex);
+-
+- hp = tty->driver_data;
+- if (!hp) {
+- rc = -EIO;
+- goto out;
+- }
+-
+ spin_lock_irqsave(&hp->port.lock, flags);
+ /* Check and then increment for fast path open. */
+ if (hp->port.count++ > 0) {
+ spin_unlock_irqrestore(&hp->port.lock, flags);
+ hvc_kick();
+- goto out;
++ return 0;
+ } /* else count == 0 */
+ spin_unlock_irqrestore(&hp->port.lock, flags);
+
+@@ -393,8 +383,6 @@ static int hvc_open(struct tty_struct *tty, struct file * filp)
+ /* Force wakeup of the polling thread */
+ hvc_kick();
+
+-out:
+- mutex_unlock(&hvc_open_mutex);
+ return rc;
+ }
+
+diff --git a/drivers/usb/cdns3/ep0.c b/drivers/usb/cdns3/ep0.c
+index e71240b386b4..da4c5eb03d7e 100644
+--- a/drivers/usb/cdns3/ep0.c
++++ b/drivers/usb/cdns3/ep0.c
+@@ -327,7 +327,8 @@ static int cdns3_ep0_feature_handle_device(struct cdns3_device *priv_dev,
+ if (!set || (tmode & 0xff) != 0)
+ return -EINVAL;
+
+- switch (tmode >> 8) {
++ tmode >>= 8;
++ switch (tmode) {
+ case TEST_J:
+ case TEST_K:
+ case TEST_SE0_NAK:
+@@ -711,15 +712,17 @@ static int cdns3_gadget_ep0_queue(struct usb_ep *ep,
+ int ret = 0;
+ u8 zlp = 0;
+
++ spin_lock_irqsave(&priv_dev->lock, flags);
+ trace_cdns3_ep0_queue(priv_dev, request);
+
+ /* cancel the request if controller receive new SETUP packet. */
+- if (cdns3_check_new_setup(priv_dev))
++ if (cdns3_check_new_setup(priv_dev)) {
++ spin_unlock_irqrestore(&priv_dev->lock, flags);
+ return -ECONNRESET;
++ }
+
+ /* send STATUS stage. Should be called only for SET_CONFIGURATION */
+ if (priv_dev->ep0_stage == CDNS3_STATUS_STAGE) {
+- spin_lock_irqsave(&priv_dev->lock, flags);
+ cdns3_select_ep(priv_dev, 0x00);
+
+ erdy_sent = !priv_dev->hw_configured_flag;
+@@ -744,7 +747,6 @@ static int cdns3_gadget_ep0_queue(struct usb_ep *ep,
+ return 0;
+ }
+
+- spin_lock_irqsave(&priv_dev->lock, flags);
+ if (!list_empty(&priv_ep->pending_req_list)) {
+ dev_err(priv_dev->dev,
+ "can't handle multiple requests for ep0\n");
+diff --git a/drivers/usb/cdns3/trace.h b/drivers/usb/cdns3/trace.h
+index 8d121e207fd8..755c56582257 100644
+--- a/drivers/usb/cdns3/trace.h
++++ b/drivers/usb/cdns3/trace.h
+@@ -156,7 +156,7 @@ DECLARE_EVENT_CLASS(cdns3_log_ep0_irq,
+ __dynamic_array(char, str, CDNS3_MSG_MAX)
+ ),
+ TP_fast_assign(
+- __entry->ep_dir = priv_dev->ep0_data_dir;
++ __entry->ep_dir = priv_dev->selected_ep;
+ __entry->ep_sts = ep_sts;
+ ),
+ TP_printk("%s", cdns3_decode_ep0_irq(__get_str(str),
+diff --git a/drivers/usb/class/cdc-acm.c b/drivers/usb/class/cdc-acm.c
+index f67088bb8218..d5187b50fc82 100644
+--- a/drivers/usb/class/cdc-acm.c
++++ b/drivers/usb/class/cdc-acm.c
+@@ -1689,6 +1689,8 @@ static int acm_pre_reset(struct usb_interface *intf)
+
+ static const struct usb_device_id acm_ids[] = {
+ /* quirky and broken devices */
++ { USB_DEVICE(0x0424, 0x274e), /* Microchip Technology, Inc. (formerly SMSC) */
++ .driver_info = DISABLE_ECHO, }, /* DISABLE ECHO in termios flag */
+ { USB_DEVICE(0x076d, 0x0006), /* Denso Cradle CU-321 */
+ .driver_info = NO_UNION_NORMAL, },/* has no union descriptor */
+ { USB_DEVICE(0x17ef, 0x7000), /* Lenovo USB modem */
+diff --git a/drivers/usb/core/quirks.c b/drivers/usb/core/quirks.c
+index 3e8efe759c3e..e0b77674869c 100644
+--- a/drivers/usb/core/quirks.c
++++ b/drivers/usb/core/quirks.c
+@@ -218,11 +218,12 @@ static const struct usb_device_id usb_quirk_list[] = {
+ /* Logitech HD Webcam C270 */
+ { USB_DEVICE(0x046d, 0x0825), .driver_info = USB_QUIRK_RESET_RESUME },
+
+- /* Logitech HD Pro Webcams C920, C920-C, C925e and C930e */
++ /* Logitech HD Pro Webcams C920, C920-C, C922, C925e and C930e */
+ { USB_DEVICE(0x046d, 0x082d), .driver_info = USB_QUIRK_DELAY_INIT },
+ { USB_DEVICE(0x046d, 0x0841), .driver_info = USB_QUIRK_DELAY_INIT },
+ { USB_DEVICE(0x046d, 0x0843), .driver_info = USB_QUIRK_DELAY_INIT },
+ { USB_DEVICE(0x046d, 0x085b), .driver_info = USB_QUIRK_DELAY_INIT },
++ { USB_DEVICE(0x046d, 0x085c), .driver_info = USB_QUIRK_DELAY_INIT },
+
+ /* Logitech ConferenceCam CC3000e */
+ { USB_DEVICE(0x046d, 0x0847), .driver_info = USB_QUIRK_DELAY_INIT },
+diff --git a/drivers/usb/dwc2/gadget.c b/drivers/usb/dwc2/gadget.c
+index 12b98b466287..7faf5f8c056d 100644
+--- a/drivers/usb/dwc2/gadget.c
++++ b/drivers/usb/dwc2/gadget.c
+@@ -4920,12 +4920,6 @@ int dwc2_gadget_init(struct dwc2_hsotg *hsotg)
+ epnum, 0);
+ }
+
+- ret = usb_add_gadget_udc(dev, &hsotg->gadget);
+- if (ret) {
+- dwc2_hsotg_ep_free_request(&hsotg->eps_out[0]->ep,
+- hsotg->ctrl_req);
+- return ret;
+- }
+ dwc2_hsotg_dump(hsotg);
+
+ return 0;
+diff --git a/drivers/usb/dwc2/platform.c b/drivers/usb/dwc2/platform.c
+index 69972750e161..5684c4781af9 100644
+--- a/drivers/usb/dwc2/platform.c
++++ b/drivers/usb/dwc2/platform.c
+@@ -536,6 +536,17 @@ static int dwc2_driver_probe(struct platform_device *dev)
+ if (hsotg->dr_mode == USB_DR_MODE_PERIPHERAL)
+ dwc2_lowlevel_hw_disable(hsotg);
+
++#if IS_ENABLED(CONFIG_USB_DWC2_PERIPHERAL) || \
++ IS_ENABLED(CONFIG_USB_DWC2_DUAL_ROLE)
++ /* Postponed adding a new gadget to the udc class driver list */
++ if (hsotg->gadget_enabled) {
++ retval = usb_add_gadget_udc(hsotg->dev, &hsotg->gadget);
++ if (retval) {
++ dwc2_hsotg_remove(hsotg);
++ goto error_init;
++ }
++ }
++#endif /* CONFIG_USB_DWC2_PERIPHERAL || CONFIG_USB_DWC2_DUAL_ROLE */
+ return 0;
+
+ error_init:
+diff --git a/drivers/usb/dwc3/dwc3-exynos.c b/drivers/usb/dwc3/dwc3-exynos.c
+index 48b68b6f0dc8..90bb022737da 100644
+--- a/drivers/usb/dwc3/dwc3-exynos.c
++++ b/drivers/usb/dwc3/dwc3-exynos.c
+@@ -162,12 +162,6 @@ static const struct dwc3_exynos_driverdata exynos5250_drvdata = {
+ .suspend_clk_idx = -1,
+ };
+
+-static const struct dwc3_exynos_driverdata exynos5420_drvdata = {
+- .clk_names = { "usbdrd30", "usbdrd30_susp_clk"},
+- .num_clks = 2,
+- .suspend_clk_idx = 1,
+-};
+-
+ static const struct dwc3_exynos_driverdata exynos5433_drvdata = {
+ .clk_names = { "aclk", "susp_clk", "pipe_pclk", "phyclk" },
+ .num_clks = 4,
+@@ -184,9 +178,6 @@ static const struct of_device_id exynos_dwc3_match[] = {
+ {
+ .compatible = "samsung,exynos5250-dwusb3",
+ .data = &exynos5250_drvdata,
+- }, {
+- .compatible = "samsung,exynos5420-dwusb3",
+- .data = &exynos5420_drvdata,
+ }, {
+ .compatible = "samsung,exynos5433-dwusb3",
+ .data = &exynos5433_drvdata,
+diff --git a/drivers/usb/gadget/udc/mv_udc_core.c b/drivers/usb/gadget/udc/mv_udc_core.c
+index cafde053788b..80a1b52c656e 100644
+--- a/drivers/usb/gadget/udc/mv_udc_core.c
++++ b/drivers/usb/gadget/udc/mv_udc_core.c
+@@ -2313,7 +2313,8 @@ static int mv_udc_probe(struct platform_device *pdev)
+ return 0;
+
+ err_create_workqueue:
+- destroy_workqueue(udc->qwork);
++ if (udc->qwork)
++ destroy_workqueue(udc->qwork);
+ err_destroy_dma:
+ dma_pool_destroy(udc->dtd_pool);
+ err_free_dma:
+diff --git a/drivers/usb/host/ehci-exynos.c b/drivers/usb/host/ehci-exynos.c
+index a4e9abcbdc4f..1a9b7572e17f 100644
+--- a/drivers/usb/host/ehci-exynos.c
++++ b/drivers/usb/host/ehci-exynos.c
+@@ -203,9 +203,8 @@ static int exynos_ehci_probe(struct platform_device *pdev)
+ hcd->rsrc_len = resource_size(res);
+
+ irq = platform_get_irq(pdev, 0);
+- if (!irq) {
+- dev_err(&pdev->dev, "Failed to get IRQ\n");
+- err = -ENODEV;
++ if (irq < 0) {
++ err = irq;
+ goto fail_io;
+ }
+
+diff --git a/drivers/usb/host/ehci-pci.c b/drivers/usb/host/ehci-pci.c
+index 1a48ab1bd3b2..7ff2cbdcd0b2 100644
+--- a/drivers/usb/host/ehci-pci.c
++++ b/drivers/usb/host/ehci-pci.c
+@@ -216,6 +216,13 @@ static int ehci_pci_setup(struct usb_hcd *hcd)
+ ehci_info(ehci, "applying MosChip frame-index workaround\n");
+ ehci->frame_index_bug = 1;
+ break;
++ case PCI_VENDOR_ID_HUAWEI:
++ /* Synopsys HC bug */
++ if (pdev->device == 0xa239) {
++ ehci_info(ehci, "applying Synopsys HC workaround\n");
++ ehci->has_synopsys_hc_bug = 1;
++ }
++ break;
+ }
+
+ /* optional debug port, normally in the first BAR */
+diff --git a/drivers/usb/host/ohci-sm501.c b/drivers/usb/host/ohci-sm501.c
+index cff965240327..b91d50da6127 100644
+--- a/drivers/usb/host/ohci-sm501.c
++++ b/drivers/usb/host/ohci-sm501.c
+@@ -191,6 +191,7 @@ static int ohci_hcd_sm501_drv_remove(struct platform_device *pdev)
+ struct resource *mem;
+
+ usb_remove_hcd(hcd);
++ iounmap(hcd->regs);
+ release_mem_region(hcd->rsrc_start, hcd->rsrc_len);
+ usb_put_hcd(hcd);
+ mem = platform_get_resource(pdev, IORESOURCE_MEM, 1);
+diff --git a/drivers/usb/host/xhci-mtk.c b/drivers/usb/host/xhci-mtk.c
+index bfbdb3ceed29..4311d4c9b68d 100644
+--- a/drivers/usb/host/xhci-mtk.c
++++ b/drivers/usb/host/xhci-mtk.c
+@@ -587,6 +587,9 @@ static int xhci_mtk_remove(struct platform_device *dev)
+ struct xhci_hcd *xhci = hcd_to_xhci(hcd);
+ struct usb_hcd *shared_hcd = xhci->shared_hcd;
+
++ pm_runtime_put_noidle(&dev->dev);
++ pm_runtime_disable(&dev->dev);
++
+ usb_remove_hcd(shared_hcd);
+ xhci->shared_hcd = NULL;
+ device_init_wakeup(&dev->dev, false);
+@@ -597,8 +600,6 @@ static int xhci_mtk_remove(struct platform_device *dev)
+ xhci_mtk_sch_exit(mtk);
+ xhci_mtk_clks_disable(mtk);
+ xhci_mtk_ldos_disable(mtk);
+- pm_runtime_put_sync(&dev->dev);
+- pm_runtime_disable(&dev->dev);
+
+ return 0;
+ }
+diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c
+index bee5deccc83d..ed468eed299c 100644
+--- a/drivers/usb/host/xhci.c
++++ b/drivers/usb/host/xhci.c
+@@ -1430,6 +1430,7 @@ static int xhci_check_maxpacket(struct xhci_hcd *xhci, unsigned int slot_id,
+ xhci->devs[slot_id]->out_ctx, ep_index);
+
+ ep_ctx = xhci_get_ep_ctx(xhci, command->in_ctx, ep_index);
++ ep_ctx->ep_info &= cpu_to_le32(~EP_STATE_MASK);/* must clear */
+ ep_ctx->ep_info2 &= cpu_to_le32(~MAX_PACKET_MASK);
+ ep_ctx->ep_info2 |= cpu_to_le32(MAX_PACKET(max_packet_size));
+
+@@ -4390,6 +4391,9 @@ static int xhci_set_usb2_hardware_lpm(struct usb_hcd *hcd,
+ int hird, exit_latency;
+ int ret;
+
++ if (xhci->quirks & XHCI_HW_LPM_DISABLE)
++ return -EPERM;
++
+ if (hcd->speed >= HCD_USB3 || !xhci->hw_lpm_support ||
+ !udev->lpm_capable)
+ return -EPERM;
+@@ -4412,7 +4416,7 @@ static int xhci_set_usb2_hardware_lpm(struct usb_hcd *hcd,
+ xhci_dbg(xhci, "%s port %d USB2 hardware LPM\n",
+ enable ? "enable" : "disable", port_num + 1);
+
+- if (enable && !(xhci->quirks & XHCI_HW_LPM_DISABLE)) {
++ if (enable) {
+ /* Host supports BESL timeout instead of HIRD */
+ if (udev->usb2_hw_lpm_besl_capable) {
+ /* if device doesn't have a preferred BESL value use a
+@@ -4471,6 +4475,9 @@ static int xhci_set_usb2_hardware_lpm(struct usb_hcd *hcd,
+ mutex_lock(hcd->bandwidth_mutex);
+ xhci_change_max_exit_latency(xhci, udev, 0);
+ mutex_unlock(hcd->bandwidth_mutex);
++ readl_poll_timeout(ports[port_num]->addr, pm_val,
++ (pm_val & PORT_PLS_MASK) == XDEV_U0,
++ 100, 10000);
+ return 0;
+ }
+ }
+diff --git a/drivers/usb/host/xhci.h b/drivers/usb/host/xhci.h
+index 86cfefdd6632..c80710e47476 100644
+--- a/drivers/usb/host/xhci.h
++++ b/drivers/usb/host/xhci.h
+@@ -716,7 +716,7 @@ struct xhci_ep_ctx {
+ * 4 - TRB error
+ * 5-7 - reserved
+ */
+-#define EP_STATE_MASK (0xf)
++#define EP_STATE_MASK (0x7)
+ #define EP_STATE_DISABLED 0
+ #define EP_STATE_RUNNING 1
+ #define EP_STATE_HALTED 2
+diff --git a/drivers/usb/renesas_usbhs/fifo.c b/drivers/usb/renesas_usbhs/fifo.c
+index 01c6a48c41bc..ac9a81ae8216 100644
+--- a/drivers/usb/renesas_usbhs/fifo.c
++++ b/drivers/usb/renesas_usbhs/fifo.c
+@@ -803,7 +803,8 @@ static int __usbhsf_dma_map_ctrl(struct usbhs_pkt *pkt, int map)
+ return info->dma_map_ctrl(chan->device->dev, pkt, map);
+ }
+
+-static void usbhsf_dma_complete(void *arg);
++static void usbhsf_dma_complete(void *arg,
++ const struct dmaengine_result *result);
+ static void usbhsf_dma_xfer_preparing(struct usbhs_pkt *pkt)
+ {
+ struct usbhs_pipe *pipe = pkt->pipe;
+@@ -813,6 +814,7 @@ static void usbhsf_dma_xfer_preparing(struct usbhs_pkt *pkt)
+ struct dma_chan *chan;
+ struct device *dev = usbhs_priv_to_dev(priv);
+ enum dma_transfer_direction dir;
++ dma_cookie_t cookie;
+
+ fifo = usbhs_pipe_to_fifo(pipe);
+ if (!fifo)
+@@ -827,11 +829,11 @@ static void usbhsf_dma_xfer_preparing(struct usbhs_pkt *pkt)
+ if (!desc)
+ return;
+
+- desc->callback = usbhsf_dma_complete;
+- desc->callback_param = pipe;
++ desc->callback_result = usbhsf_dma_complete;
++ desc->callback_param = pkt;
+
+- pkt->cookie = dmaengine_submit(desc);
+- if (pkt->cookie < 0) {
++ cookie = dmaengine_submit(desc);
++ if (cookie < 0) {
+ dev_err(dev, "Failed to submit dma descriptor\n");
+ return;
+ }
+@@ -1152,12 +1154,10 @@ static size_t usbhs_dma_calc_received_size(struct usbhs_pkt *pkt,
+ struct dma_chan *chan, int dtln)
+ {
+ struct usbhs_pipe *pipe = pkt->pipe;
+- struct dma_tx_state state;
+ size_t received_size;
+ int maxp = usbhs_pipe_get_maxpacket(pipe);
+
+- dmaengine_tx_status(chan, pkt->cookie, &state);
+- received_size = pkt->length - state.residue;
++ received_size = pkt->length - pkt->dma_result->residue;
+
+ if (dtln) {
+ received_size -= USBHS_USB_DMAC_XFER_SIZE;
+@@ -1363,13 +1363,16 @@ static int usbhsf_irq_ready(struct usbhs_priv *priv,
+ return 0;
+ }
+
+-static void usbhsf_dma_complete(void *arg)
++static void usbhsf_dma_complete(void *arg,
++ const struct dmaengine_result *result)
+ {
+- struct usbhs_pipe *pipe = arg;
++ struct usbhs_pkt *pkt = arg;
++ struct usbhs_pipe *pipe = pkt->pipe;
+ struct usbhs_priv *priv = usbhs_pipe_to_priv(pipe);
+ struct device *dev = usbhs_priv_to_dev(priv);
+ int ret;
+
++ pkt->dma_result = result;
+ ret = usbhsf_pkt_handler(pipe, USBHSF_PKT_DMA_DONE);
+ if (ret < 0)
+ dev_err(dev, "dma_complete run_error %d : %d\n",
+diff --git a/drivers/usb/renesas_usbhs/fifo.h b/drivers/usb/renesas_usbhs/fifo.h
+index c3d3cc35cee0..4a7dc23ce3d3 100644
+--- a/drivers/usb/renesas_usbhs/fifo.h
++++ b/drivers/usb/renesas_usbhs/fifo.h
+@@ -50,7 +50,7 @@ struct usbhs_pkt {
+ struct usbhs_pkt *pkt);
+ struct work_struct work;
+ dma_addr_t dma;
+- dma_cookie_t cookie;
++ const struct dmaengine_result *dma_result;
+ void *buf;
+ int length;
+ int trans;
+diff --git a/drivers/usb/typec/mux/intel_pmc_mux.c b/drivers/usb/typec/mux/intel_pmc_mux.c
+index c22e5c4bbf1a..e35508f5e128 100644
+--- a/drivers/usb/typec/mux/intel_pmc_mux.c
++++ b/drivers/usb/typec/mux/intel_pmc_mux.c
+@@ -129,7 +129,8 @@ pmc_usb_mux_dp_hpd(struct pmc_usb_port *port, struct typec_mux_state *state)
+ msg[0] = PMC_USB_DP_HPD;
+ msg[0] |= port->usb3_port << PMC_USB_MSG_USB3_PORT_SHIFT;
+
+- msg[1] = PMC_USB_DP_HPD_IRQ;
++ if (data->status & DP_STATUS_IRQ_HPD)
++ msg[1] = PMC_USB_DP_HPD_IRQ;
+
+ if (data->status & DP_STATUS_HPD_STATE)
+ msg[1] |= PMC_USB_DP_HPD_LVL;
+@@ -142,6 +143,7 @@ pmc_usb_mux_dp(struct pmc_usb_port *port, struct typec_mux_state *state)
+ {
+ struct typec_displayport_data *data = state->data;
+ struct altmode_req req = { };
++ int ret;
+
+ if (data->status & DP_STATUS_IRQ_HPD)
+ return pmc_usb_mux_dp_hpd(port, state);
+@@ -161,7 +163,14 @@ pmc_usb_mux_dp(struct pmc_usb_port *port, struct typec_mux_state *state)
+ if (data->status & DP_STATUS_HPD_STATE)
+ req.mode_data |= PMC_USB_ALTMODE_HPD_HIGH;
+
+- return pmc_usb_command(port, (void *)&req, sizeof(req));
++ ret = pmc_usb_command(port, (void *)&req, sizeof(req));
++ if (ret)
++ return ret;
++
++ if (data->status & DP_STATUS_HPD_STATE)
++ return pmc_usb_mux_dp_hpd(port, state);
++
++ return 0;
+ }
+
+ static int
+diff --git a/drivers/usb/typec/tcpm/tcpci_rt1711h.c b/drivers/usb/typec/tcpm/tcpci_rt1711h.c
+index 017389021b96..b56a0880a044 100644
+--- a/drivers/usb/typec/tcpm/tcpci_rt1711h.c
++++ b/drivers/usb/typec/tcpm/tcpci_rt1711h.c
+@@ -179,26 +179,6 @@ out:
+ return tcpci_irq(chip->tcpci);
+ }
+
+-static int rt1711h_init_alert(struct rt1711h_chip *chip,
+- struct i2c_client *client)
+-{
+- int ret;
+-
+- /* Disable chip interrupts before requesting irq */
+- ret = rt1711h_write16(chip, TCPC_ALERT_MASK, 0);
+- if (ret < 0)
+- return ret;
+-
+- ret = devm_request_threaded_irq(chip->dev, client->irq, NULL,
+- rt1711h_irq,
+- IRQF_ONESHOT | IRQF_TRIGGER_LOW,
+- dev_name(chip->dev), chip);
+- if (ret < 0)
+- return ret;
+- enable_irq_wake(client->irq);
+- return 0;
+-}
+-
+ static int rt1711h_sw_reset(struct rt1711h_chip *chip)
+ {
+ int ret;
+@@ -260,7 +240,8 @@ static int rt1711h_probe(struct i2c_client *client,
+ if (ret < 0)
+ return ret;
+
+- ret = rt1711h_init_alert(chip, client);
++ /* Disable chip interrupts before requesting irq */
++ ret = rt1711h_write16(chip, TCPC_ALERT_MASK, 0);
+ if (ret < 0)
+ return ret;
+
+@@ -271,6 +252,14 @@ static int rt1711h_probe(struct i2c_client *client,
+ if (IS_ERR_OR_NULL(chip->tcpci))
+ return PTR_ERR(chip->tcpci);
+
++ ret = devm_request_threaded_irq(chip->dev, client->irq, NULL,
++ rt1711h_irq,
++ IRQF_ONESHOT | IRQF_TRIGGER_LOW,
++ dev_name(chip->dev), chip);
++ if (ret < 0)
++ return ret;
++ enable_irq_wake(client->irq);
++
+ return 0;
+ }
+
+diff --git a/drivers/video/fbdev/core/fbcon.c b/drivers/video/fbdev/core/fbcon.c
+index 9d28a8e3328f..e2a490c5ae08 100644
+--- a/drivers/video/fbdev/core/fbcon.c
++++ b/drivers/video/fbdev/core/fbcon.c
+@@ -2402,7 +2402,8 @@ static int fbcon_blank(struct vc_data *vc, int blank, int mode_switch)
+ ops->graphics = 1;
+
+ if (!blank) {
+- var.activate = FB_ACTIVATE_NOW | FB_ACTIVATE_FORCE;
++ var.activate = FB_ACTIVATE_NOW | FB_ACTIVATE_FORCE |
++ FB_ACTIVATE_KD_TEXT;
+ fb_set_var(info, &var);
+ ops->graphics = 0;
+ ops->var = info->var;
+diff --git a/fs/afs/cell.c b/fs/afs/cell.c
+index 78ba5f932287..296b489861a9 100644
+--- a/fs/afs/cell.c
++++ b/fs/afs/cell.c
+@@ -154,10 +154,17 @@ static struct afs_cell *afs_alloc_cell(struct afs_net *net,
+ return ERR_PTR(-ENOMEM);
+ }
+
++ cell->name = kmalloc(namelen + 1, GFP_KERNEL);
++ if (!cell->name) {
++ kfree(cell);
++ return ERR_PTR(-ENOMEM);
++ }
++
+ cell->net = net;
+ cell->name_len = namelen;
+ for (i = 0; i < namelen; i++)
+ cell->name[i] = tolower(name[i]);
++ cell->name[i] = 0;
+
+ atomic_set(&cell->usage, 2);
+ INIT_WORK(&cell->manager, afs_manage_cell);
+@@ -203,6 +210,7 @@ parse_failed:
+ if (ret == -EINVAL)
+ printk(KERN_ERR "kAFS: bad VL server IP address\n");
+ error:
++ kfree(cell->name);
+ kfree(cell);
+ _leave(" = %d", ret);
+ return ERR_PTR(ret);
+@@ -483,6 +491,7 @@ static void afs_cell_destroy(struct rcu_head *rcu)
+
+ afs_put_vlserverlist(cell->net, rcu_access_pointer(cell->vl_servers));
+ key_put(cell->anonymous_key);
++ kfree(cell->name);
+ kfree(cell);
+
+ _leave(" [destroyed]");
+diff --git a/fs/afs/internal.h b/fs/afs/internal.h
+index 98e0cebd5e5e..c67a9767397d 100644
+--- a/fs/afs/internal.h
++++ b/fs/afs/internal.h
+@@ -397,7 +397,7 @@ struct afs_cell {
+ struct afs_vlserver_list __rcu *vl_servers;
+
+ u8 name_len; /* Length of name */
+- char name[64 + 1]; /* Cell name, case-flattened and NUL-padded */
++ char *name; /* Cell name, case-flattened and NUL-padded */
+ };
+
+ /*
+diff --git a/fs/btrfs/block-group.c b/fs/btrfs/block-group.c
+index 233c5663f233..0c17f18b4794 100644
+--- a/fs/btrfs/block-group.c
++++ b/fs/btrfs/block-group.c
+@@ -916,7 +916,7 @@ int btrfs_remove_block_group(struct btrfs_trans_handle *trans,
+ path = btrfs_alloc_path();
+ if (!path) {
+ ret = -ENOMEM;
+- goto out_put_group;
++ goto out;
+ }
+
+ /*
+@@ -954,7 +954,7 @@ int btrfs_remove_block_group(struct btrfs_trans_handle *trans,
+ ret = btrfs_orphan_add(trans, BTRFS_I(inode));
+ if (ret) {
+ btrfs_add_delayed_iput(inode);
+- goto out_put_group;
++ goto out;
+ }
+ clear_nlink(inode);
+ /* One for the block groups ref */
+@@ -977,13 +977,13 @@ int btrfs_remove_block_group(struct btrfs_trans_handle *trans,
+
+ ret = btrfs_search_slot(trans, tree_root, &key, path, -1, 1);
+ if (ret < 0)
+- goto out_put_group;
++ goto out;
+ if (ret > 0)
+ btrfs_release_path(path);
+ if (ret == 0) {
+ ret = btrfs_del_item(trans, tree_root, path);
+ if (ret)
+- goto out_put_group;
++ goto out;
+ btrfs_release_path(path);
+ }
+
+@@ -992,6 +992,9 @@ int btrfs_remove_block_group(struct btrfs_trans_handle *trans,
+ &fs_info->block_group_cache_tree);
+ RB_CLEAR_NODE(&block_group->cache_node);
+
++ /* Once for the block groups rbtree */
++ btrfs_put_block_group(block_group);
++
+ if (fs_info->first_logical_byte == block_group->start)
+ fs_info->first_logical_byte = (u64)-1;
+ spin_unlock(&fs_info->block_group_cache_lock);
+@@ -1102,10 +1105,7 @@ int btrfs_remove_block_group(struct btrfs_trans_handle *trans,
+
+ ret = remove_block_group_free_space(trans, block_group);
+ if (ret)
+- goto out_put_group;
+-
+- /* Once for the block groups rbtree */
+- btrfs_put_block_group(block_group);
++ goto out;
+
+ ret = btrfs_search_slot(trans, root, &key, path, -1, 1);
+ if (ret > 0)
+@@ -1128,10 +1128,9 @@ int btrfs_remove_block_group(struct btrfs_trans_handle *trans,
+ free_extent_map(em);
+ }
+
+-out_put_group:
++out:
+ /* Once for the lookup reference */
+ btrfs_put_block_group(block_group);
+-out:
+ if (remove_rsv)
+ btrfs_delayed_refs_rsv_release(fs_info, 1);
+ btrfs_free_path(path);
+diff --git a/fs/btrfs/ctree.h b/fs/btrfs/ctree.h
+index 196d4511f812..09e6dff8a8f8 100644
+--- a/fs/btrfs/ctree.h
++++ b/fs/btrfs/ctree.h
+@@ -988,6 +988,8 @@ enum {
+ BTRFS_ROOT_DEAD_RELOC_TREE,
+ /* Mark dead root stored on device whose cleanup needs to be resumed */
+ BTRFS_ROOT_DEAD_TREE,
++ /* The root has a log tree. Used only for subvolume roots. */
++ BTRFS_ROOT_HAS_LOG_TREE,
+ };
+
+ /*
+diff --git a/fs/btrfs/file.c b/fs/btrfs/file.c
+index 719e68ab552c..52d565ff66e2 100644
+--- a/fs/btrfs/file.c
++++ b/fs/btrfs/file.c
+@@ -1912,13 +1912,26 @@ static ssize_t btrfs_file_write_iter(struct kiocb *iocb,
+ pos = iocb->ki_pos;
+ count = iov_iter_count(from);
+ if (iocb->ki_flags & IOCB_NOWAIT) {
++ size_t nocow_bytes = count;
++
+ /*
+ * We will allocate space in case nodatacow is not set,
+ * so bail
+ */
+ if (!(BTRFS_I(inode)->flags & (BTRFS_INODE_NODATACOW |
+ BTRFS_INODE_PREALLOC)) ||
+- check_can_nocow(BTRFS_I(inode), pos, &count) <= 0) {
++ check_can_nocow(BTRFS_I(inode), pos, &nocow_bytes) <= 0) {
++ inode_unlock(inode);
++ return -EAGAIN;
++ }
++ /* check_can_nocow() locks the snapshot lock on success */
++ btrfs_drew_write_unlock(&root->snapshot_lock);
++ /*
++ * There are holes in the range or parts of the range that must
++ * be COWed (shared extents, RO block groups, etc), so just bail
++ * out.
++ */
++ if (nocow_bytes < count) {
+ inode_unlock(inode);
+ return -EAGAIN;
+ }
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index 66dd919fc723..6aa200e373c8 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -985,6 +985,7 @@ static noinline int cow_file_range(struct inode *inode,
+ u64 num_bytes;
+ unsigned long ram_size;
+ u64 cur_alloc_size = 0;
++ u64 min_alloc_size;
+ u64 blocksize = fs_info->sectorsize;
+ struct btrfs_key ins;
+ struct extent_map *em;
+@@ -1035,10 +1036,26 @@ static noinline int cow_file_range(struct inode *inode,
+ btrfs_drop_extent_cache(BTRFS_I(inode), start,
+ start + num_bytes - 1, 0);
+
++ /*
++ * Relocation relies on the relocated extents to have exactly the same
++ * size as the original extents. Normally writeback for relocation data
++ * extents follows a NOCOW path because relocation preallocates the
++ * extents. However, due to an operation such as scrub turning a block
++ * group to RO mode, it may fallback to COW mode, so we must make sure
++ * an extent allocated during COW has exactly the requested size and can
++ * not be split into smaller extents, otherwise relocation breaks and
++ * fails during the stage where it updates the bytenr of file extent
++ * items.
++ */
++ if (root->root_key.objectid == BTRFS_DATA_RELOC_TREE_OBJECTID)
++ min_alloc_size = num_bytes;
++ else
++ min_alloc_size = fs_info->sectorsize;
++
+ while (num_bytes > 0) {
+ cur_alloc_size = num_bytes;
+ ret = btrfs_reserve_extent(root, cur_alloc_size, cur_alloc_size,
+- fs_info->sectorsize, 0, alloc_hint,
++ min_alloc_size, 0, alloc_hint,
+ &ins, 1, 1);
+ if (ret < 0)
+ goto out_unlock;
+@@ -1361,6 +1378,8 @@ static int fallback_to_cow(struct inode *inode, struct page *locked_page,
+ int *page_started, unsigned long *nr_written)
+ {
+ const bool is_space_ino = btrfs_is_free_space_inode(BTRFS_I(inode));
++ const bool is_reloc_ino = (BTRFS_I(inode)->root->root_key.objectid ==
++ BTRFS_DATA_RELOC_TREE_OBJECTID);
+ const u64 range_bytes = end + 1 - start;
+ struct extent_io_tree *io_tree = &BTRFS_I(inode)->io_tree;
+ u64 range_start = start;
+@@ -1391,18 +1410,23 @@ static int fallback_to_cow(struct inode *inode, struct page *locked_page,
+ * data space info, which we incremented in the step above.
+ *
+ * If we need to fallback to cow and the inode corresponds to a free
+- * space cache inode, we must also increment bytes_may_use of the data
+- * space_info for the same reason. Space caches always get a prealloc
++ * space cache inode or an inode of the data relocation tree, we must
++ * also increment bytes_may_use of the data space_info for the same
++ * reason. Space caches and relocated data extents always get a prealloc
+ * extent for them, however scrub or balance may have set the block
+- * group that contains that extent to RO mode.
++ * group that contains that extent to RO mode and therefore force COW
++ * when starting writeback.
+ */
+ count = count_range_bits(io_tree, &range_start, end, range_bytes,
+ EXTENT_NORESERVE, 0);
+- if (count > 0 || is_space_ino) {
+- const u64 bytes = is_space_ino ? range_bytes : count;
++ if (count > 0 || is_space_ino || is_reloc_ino) {
++ u64 bytes = count;
+ struct btrfs_fs_info *fs_info = BTRFS_I(inode)->root->fs_info;
+ struct btrfs_space_info *sinfo = fs_info->data_sinfo;
+
++ if (is_space_ino || is_reloc_ino)
++ bytes = range_bytes;
++
+ spin_lock(&sinfo->lock);
+ btrfs_space_info_update_bytes_may_use(fs_info, sinfo, bytes);
+ spin_unlock(&sinfo->lock);
+@@ -8238,9 +8262,6 @@ static ssize_t btrfs_direct_IO(struct kiocb *iocb, struct iov_iter *iter)
+ dio_data.overwrite = 1;
+ inode_unlock(inode);
+ relock = true;
+- } else if (iocb->ki_flags & IOCB_NOWAIT) {
+- ret = -EAGAIN;
+- goto out;
+ }
+ ret = btrfs_delalloc_reserve_space(inode, &data_reserved,
+ offset, count);
+diff --git a/fs/btrfs/tree-log.c b/fs/btrfs/tree-log.c
+index ea72b9d54ec8..bdfc42149448 100644
+--- a/fs/btrfs/tree-log.c
++++ b/fs/btrfs/tree-log.c
+@@ -169,6 +169,7 @@ static int start_log_trans(struct btrfs_trans_handle *trans,
+ if (ret)
+ goto out;
+
++ set_bit(BTRFS_ROOT_HAS_LOG_TREE, &root->state);
+ clear_bit(BTRFS_ROOT_MULTI_LOG_TASKS, &root->state);
+ root->log_start_pid = current->pid;
+ }
+@@ -195,6 +196,9 @@ static int join_running_log_trans(struct btrfs_root *root)
+ {
+ int ret = -ENOENT;
+
++ if (!test_bit(BTRFS_ROOT_HAS_LOG_TREE, &root->state))
++ return ret;
++
+ mutex_lock(&root->log_mutex);
+ if (root->log_root) {
+ ret = 0;
+@@ -3312,6 +3316,7 @@ int btrfs_free_log(struct btrfs_trans_handle *trans, struct btrfs_root *root)
+ if (root->log_root) {
+ free_log_tree(trans, root->log_root);
+ root->log_root = NULL;
++ clear_bit(BTRFS_ROOT_HAS_LOG_TREE, &root->state);
+ }
+ return 0;
+ }
+diff --git a/fs/cifs/smb2ops.c b/fs/cifs/smb2ops.c
+index f829f4165d38..6fc69c3b2749 100644
+--- a/fs/cifs/smb2ops.c
++++ b/fs/cifs/smb2ops.c
+@@ -759,6 +759,7 @@ int open_shroot(unsigned int xid, struct cifs_tcon *tcon,
+ /* close extra handle outside of crit sec */
+ SMB2_close(xid, tcon, fid.persistent_fid, fid.volatile_fid);
+ }
++ rc = 0;
+ goto oshr_free;
+ }
+
+@@ -3144,6 +3145,11 @@ static long smb3_zero_range(struct file *file, struct cifs_tcon *tcon,
+ trace_smb3_zero_enter(xid, cfile->fid.persistent_fid, tcon->tid,
+ ses->Suid, offset, len);
+
++ /*
++ * We zero the range through ioctl, so we need to remove the page caches
++ * first, otherwise the data may be inconsistent with the server.
++ */
++ truncate_pagecache_range(inode, offset, offset + len - 1);
+
+ /* if file not oplocked can't be sure whether asking to extend size */
+ if (!CIFS_CACHE_READ(cifsi))
+@@ -3210,6 +3216,12 @@ static long smb3_punch_hole(struct file *file, struct cifs_tcon *tcon,
+ return rc;
+ }
+
++ /*
++ * We implement the punch hole through ioctl, so we need to remove the page
++ * caches first, otherwise the data may be inconsistent with the server.
++ */
++ truncate_pagecache_range(inode, offset, offset + len - 1);
++
+ cifs_dbg(FYI, "Offset %lld len %lld\n", offset, len);
+
+ fsctl_buf.FileOffset = cpu_to_le64(offset);
+diff --git a/fs/erofs/zdata.h b/fs/erofs/zdata.h
+index 7824f5563a55..9b66c28b3ae9 100644
+--- a/fs/erofs/zdata.h
++++ b/fs/erofs/zdata.h
+@@ -144,22 +144,22 @@ static inline void z_erofs_onlinepage_init(struct page *page)
+ static inline void z_erofs_onlinepage_fixup(struct page *page,
+ uintptr_t index, bool down)
+ {
+- unsigned long *p, o, v, id;
+-repeat:
+- p = &page_private(page);
+- o = READ_ONCE(*p);
++ union z_erofs_onlinepage_converter u = { .v = &page_private(page) };
++ int orig, orig_index, val;
+
+- id = o >> Z_EROFS_ONLINEPAGE_INDEX_SHIFT;
+- if (id) {
++repeat:
++ orig = atomic_read(u.o);
++ orig_index = orig >> Z_EROFS_ONLINEPAGE_INDEX_SHIFT;
++ if (orig_index) {
+ if (!index)
+ return;
+
+- DBG_BUGON(id != index);
++ DBG_BUGON(orig_index != index);
+ }
+
+- v = (index << Z_EROFS_ONLINEPAGE_INDEX_SHIFT) |
+- ((o & Z_EROFS_ONLINEPAGE_COUNT_MASK) + (unsigned int)down);
+- if (cmpxchg(p, o, v) != o)
++ val = (index << Z_EROFS_ONLINEPAGE_INDEX_SHIFT) |
++ ((orig & Z_EROFS_ONLINEPAGE_COUNT_MASK) + (unsigned int)down);
++ if (atomic_cmpxchg(u.o, orig, val) != orig)
+ goto repeat;
+ }
+
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+index 1829be7f63a3..4ab1728de247 100644
+--- a/fs/io_uring.c
++++ b/fs/io_uring.c
+@@ -1942,10 +1942,8 @@ static void io_complete_rw_iopoll(struct kiocb *kiocb, long res, long res2)
+
+ WRITE_ONCE(req->result, res);
+ /* order with io_poll_complete() checking ->result */
+- if (res != -EAGAIN) {
+- smp_wmb();
+- WRITE_ONCE(req->iopoll_completed, 1);
+- }
++ smp_wmb();
++ WRITE_ONCE(req->iopoll_completed, 1);
+ }
+
+ /*
+@@ -5425,9 +5423,6 @@ static int io_issue_sqe(struct io_kiocb *req, const struct io_uring_sqe *sqe,
+ if ((ctx->flags & IORING_SETUP_IOPOLL) && req->file) {
+ const bool in_async = io_wq_current_is_worker();
+
+- if (req->result == -EAGAIN)
+- return -EAGAIN;
+-
+ /* workqueue context doesn't hold uring_lock, grab it now */
+ if (in_async)
+ mutex_lock(&ctx->uring_lock);
+diff --git a/fs/nfs/direct.c b/fs/nfs/direct.c
+index d49b1d197908..f0c3f0123131 100644
+--- a/fs/nfs/direct.c
++++ b/fs/nfs/direct.c
+@@ -267,8 +267,6 @@ static void nfs_direct_complete(struct nfs_direct_req *dreq)
+ {
+ struct inode *inode = dreq->inode;
+
+- inode_dio_end(inode);
+-
+ if (dreq->iocb) {
+ long res = (long) dreq->error;
+ if (dreq->count != 0) {
+@@ -280,7 +278,10 @@ static void nfs_direct_complete(struct nfs_direct_req *dreq)
+
+ complete(&dreq->completion);
+
++ igrab(inode);
+ nfs_direct_req_release(dreq);
++ inode_dio_end(inode);
++ iput(inode);
+ }
+
+ static void nfs_direct_read_completion(struct nfs_pgio_header *hdr)
+@@ -410,8 +411,10 @@ static ssize_t nfs_direct_read_schedule_iovec(struct nfs_direct_req *dreq,
+ * generic layer handle the completion.
+ */
+ if (requested_bytes == 0) {
+- inode_dio_end(inode);
++ igrab(inode);
+ nfs_direct_req_release(dreq);
++ inode_dio_end(inode);
++ iput(inode);
+ return result < 0 ? result : -EIO;
+ }
+
+@@ -864,8 +867,10 @@ static ssize_t nfs_direct_write_schedule_iovec(struct nfs_direct_req *dreq,
+ * generic layer handle the completion.
+ */
+ if (requested_bytes == 0) {
+- inode_dio_end(inode);
++ igrab(inode);
+ nfs_direct_req_release(dreq);
++ inode_dio_end(inode);
++ iput(inode);
+ return result < 0 ? result : -EIO;
+ }
+
+diff --git a/fs/nfs/file.c b/fs/nfs/file.c
+index f96367a2463e..ccd6c1637b27 100644
+--- a/fs/nfs/file.c
++++ b/fs/nfs/file.c
+@@ -83,6 +83,7 @@ nfs_file_release(struct inode *inode, struct file *filp)
+ dprintk("NFS: release(%pD2)\n", filp);
+
+ nfs_inc_stats(inode, NFSIOS_VFSRELEASE);
++ inode_dio_wait(inode);
+ nfs_file_clear_open_context(filp);
+ return 0;
+ }
+diff --git a/fs/nfs/flexfilelayout/flexfilelayout.c b/fs/nfs/flexfilelayout/flexfilelayout.c
+index 7d399f72ebbb..de03e440b7ee 100644
+--- a/fs/nfs/flexfilelayout/flexfilelayout.c
++++ b/fs/nfs/flexfilelayout/flexfilelayout.c
+@@ -907,9 +907,8 @@ retry:
+ goto out_mds;
+
+ /* Use a direct mapping of ds_idx to pgio mirror_idx */
+- if (WARN_ON_ONCE(pgio->pg_mirror_count !=
+- FF_LAYOUT_MIRROR_COUNT(pgio->pg_lseg)))
+- goto out_mds;
++ if (pgio->pg_mirror_count != FF_LAYOUT_MIRROR_COUNT(pgio->pg_lseg))
++ goto out_eagain;
+
+ for (i = 0; i < pgio->pg_mirror_count; i++) {
+ mirror = FF_LAYOUT_COMP(pgio->pg_lseg, i);
+@@ -931,7 +930,10 @@ retry:
+ (NFS_MOUNT_SOFT|NFS_MOUNT_SOFTERR))
+ pgio->pg_maxretrans = io_maxretrans;
+ return;
+-
++out_eagain:
++ pnfs_generic_pg_cleanup(pgio);
++ pgio->pg_error = -EAGAIN;
++ return;
+ out_mds:
+ trace_pnfs_mds_fallback_pg_init_write(pgio->pg_inode,
+ 0, NFS4_MAX_UINT64, IOMODE_RW,
+@@ -941,6 +943,7 @@ out_mds:
+ pgio->pg_lseg = NULL;
+ pgio->pg_maxretrans = 0;
+ nfs_pageio_reset_write_mds(pgio);
++ pgio->pg_error = -EAGAIN;
+ }
+
+ static unsigned int
+diff --git a/fs/ocfs2/dlmglue.c b/fs/ocfs2/dlmglue.c
+index 152a0fc4e905..751bc4dc7466 100644
+--- a/fs/ocfs2/dlmglue.c
++++ b/fs/ocfs2/dlmglue.c
+@@ -689,6 +689,12 @@ static void ocfs2_nfs_sync_lock_res_init(struct ocfs2_lock_res *res,
+ &ocfs2_nfs_sync_lops, osb);
+ }
+
++static void ocfs2_nfs_sync_lock_init(struct ocfs2_super *osb)
++{
++ ocfs2_nfs_sync_lock_res_init(&osb->osb_nfs_sync_lockres, osb);
++ init_rwsem(&osb->nfs_sync_rwlock);
++}
++
+ void ocfs2_trim_fs_lock_res_init(struct ocfs2_super *osb)
+ {
+ struct ocfs2_lock_res *lockres = &osb->osb_trim_fs_lockres;
+@@ -2855,6 +2861,11 @@ int ocfs2_nfs_sync_lock(struct ocfs2_super *osb, int ex)
+ if (ocfs2_is_hard_readonly(osb))
+ return -EROFS;
+
++ if (ex)
++ down_write(&osb->nfs_sync_rwlock);
++ else
++ down_read(&osb->nfs_sync_rwlock);
++
+ if (ocfs2_mount_local(osb))
+ return 0;
+
+@@ -2873,6 +2884,10 @@ void ocfs2_nfs_sync_unlock(struct ocfs2_super *osb, int ex)
+ if (!ocfs2_mount_local(osb))
+ ocfs2_cluster_unlock(osb, lockres,
+ ex ? LKM_EXMODE : LKM_PRMODE);
++ if (ex)
++ up_write(&osb->nfs_sync_rwlock);
++ else
++ up_read(&osb->nfs_sync_rwlock);
+ }
+
+ int ocfs2_trim_fs_lock(struct ocfs2_super *osb,
+@@ -3340,7 +3355,7 @@ int ocfs2_dlm_init(struct ocfs2_super *osb)
+ local:
+ ocfs2_super_lock_res_init(&osb->osb_super_lockres, osb);
+ ocfs2_rename_lock_res_init(&osb->osb_rename_lockres, osb);
+- ocfs2_nfs_sync_lock_res_init(&osb->osb_nfs_sync_lockres, osb);
++ ocfs2_nfs_sync_lock_init(osb);
+ ocfs2_orphan_scan_lock_res_init(&osb->osb_orphan_scan.os_lockres, osb);
+
+ osb->cconn = conn;
+diff --git a/fs/ocfs2/ocfs2.h b/fs/ocfs2/ocfs2.h
+index 9150cfa4df7d..9461bd3e1c0c 100644
+--- a/fs/ocfs2/ocfs2.h
++++ b/fs/ocfs2/ocfs2.h
+@@ -394,6 +394,7 @@ struct ocfs2_super
+ struct ocfs2_lock_res osb_super_lockres;
+ struct ocfs2_lock_res osb_rename_lockres;
+ struct ocfs2_lock_res osb_nfs_sync_lockres;
++ struct rw_semaphore nfs_sync_rwlock;
+ struct ocfs2_lock_res osb_trim_fs_lockres;
+ struct mutex obs_trim_fs_mutex;
+ struct ocfs2_dlm_debug *osb_dlm_debug;
+diff --git a/fs/ocfs2/ocfs2_fs.h b/fs/ocfs2/ocfs2_fs.h
+index 0dd8c41bafd4..19137c6d087b 100644
+--- a/fs/ocfs2/ocfs2_fs.h
++++ b/fs/ocfs2/ocfs2_fs.h
+@@ -290,7 +290,7 @@
+ #define OCFS2_MAX_SLOTS 255
+
+ /* Slot map indicator for an empty slot */
+-#define OCFS2_INVALID_SLOT -1
++#define OCFS2_INVALID_SLOT ((u16)-1)
+
+ #define OCFS2_VOL_UUID_LEN 16
+ #define OCFS2_MAX_VOL_LABEL_LEN 64
+@@ -326,8 +326,8 @@ struct ocfs2_system_inode_info {
+ enum {
+ BAD_BLOCK_SYSTEM_INODE = 0,
+ GLOBAL_INODE_ALLOC_SYSTEM_INODE,
++#define OCFS2_FIRST_ONLINE_SYSTEM_INODE GLOBAL_INODE_ALLOC_SYSTEM_INODE
+ SLOT_MAP_SYSTEM_INODE,
+-#define OCFS2_FIRST_ONLINE_SYSTEM_INODE SLOT_MAP_SYSTEM_INODE
+ HEARTBEAT_SYSTEM_INODE,
+ GLOBAL_BITMAP_SYSTEM_INODE,
+ USER_QUOTA_SYSTEM_INODE,
+diff --git a/fs/ocfs2/suballoc.c b/fs/ocfs2/suballoc.c
+index 4836becb7578..45745cc3408a 100644
+--- a/fs/ocfs2/suballoc.c
++++ b/fs/ocfs2/suballoc.c
+@@ -2825,9 +2825,12 @@ int ocfs2_test_inode_bit(struct ocfs2_super *osb, u64 blkno, int *res)
+ goto bail;
+ }
+
+- inode_alloc_inode =
+- ocfs2_get_system_file_inode(osb, INODE_ALLOC_SYSTEM_INODE,
+- suballoc_slot);
++ if (suballoc_slot == (u16)OCFS2_INVALID_SLOT)
++ inode_alloc_inode = ocfs2_get_system_file_inode(osb,
++ GLOBAL_INODE_ALLOC_SYSTEM_INODE, suballoc_slot);
++ else
++ inode_alloc_inode = ocfs2_get_system_file_inode(osb,
++ INODE_ALLOC_SYSTEM_INODE, suballoc_slot);
+ if (!inode_alloc_inode) {
+ /* the error code could be inaccurate, but we are not able to
+ * get the correct one. */
+diff --git a/include/linux/intel-iommu.h b/include/linux/intel-iommu.h
+index de23fb95fe91..64a5335046b0 100644
+--- a/include/linux/intel-iommu.h
++++ b/include/linux/intel-iommu.h
+@@ -40,6 +40,7 @@
+ #define DMA_PTE_SNP BIT_ULL(11)
+
+ #define DMA_FL_PTE_PRESENT BIT_ULL(0)
++#define DMA_FL_PTE_US BIT_ULL(2)
+ #define DMA_FL_PTE_XD BIT_ULL(63)
+
+ #define CONTEXT_TT_MULTI_LEVEL 0
+diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
+index 130a668049ab..36c7ad24d54d 100644
+--- a/include/linux/netdevice.h
++++ b/include/linux/netdevice.h
+@@ -3125,7 +3125,7 @@ static inline int dev_recursion_level(void)
+ return this_cpu_read(softnet_data.xmit.recursion);
+ }
+
+-#define XMIT_RECURSION_LIMIT 10
++#define XMIT_RECURSION_LIMIT 8
+ static inline bool dev_xmit_recursion(void)
+ {
+ return unlikely(__this_cpu_read(softnet_data.xmit.recursion) >
+diff --git a/include/linux/qed/qed_chain.h b/include/linux/qed/qed_chain.h
+index 733fad7dfbed..6d15040c642c 100644
+--- a/include/linux/qed/qed_chain.h
++++ b/include/linux/qed/qed_chain.h
+@@ -207,28 +207,34 @@ static inline u32 qed_chain_get_cons_idx_u32(struct qed_chain *p_chain)
+
+ static inline u16 qed_chain_get_elem_left(struct qed_chain *p_chain)
+ {
++ u16 elem_per_page = p_chain->elem_per_page;
++ u32 prod = p_chain->u.chain16.prod_idx;
++ u32 cons = p_chain->u.chain16.cons_idx;
+ u16 used;
+
+- used = (u16) (((u32)0x10000 +
+- (u32)p_chain->u.chain16.prod_idx) -
+- (u32)p_chain->u.chain16.cons_idx);
++ if (prod < cons)
++ prod += (u32)U16_MAX + 1;
++
++ used = (u16)(prod - cons);
+ if (p_chain->mode == QED_CHAIN_MODE_NEXT_PTR)
+- used -= p_chain->u.chain16.prod_idx / p_chain->elem_per_page -
+- p_chain->u.chain16.cons_idx / p_chain->elem_per_page;
++ used -= prod / elem_per_page - cons / elem_per_page;
+
+ return (u16)(p_chain->capacity - used);
+ }
+
+ static inline u32 qed_chain_get_elem_left_u32(struct qed_chain *p_chain)
+ {
++ u16 elem_per_page = p_chain->elem_per_page;
++ u64 prod = p_chain->u.chain32.prod_idx;
++ u64 cons = p_chain->u.chain32.cons_idx;
+ u32 used;
+
+- used = (u32) (((u64)0x100000000ULL +
+- (u64)p_chain->u.chain32.prod_idx) -
+- (u64)p_chain->u.chain32.cons_idx);
++ if (prod < cons)
++ prod += (u64)U32_MAX + 1;
++
++ used = (u32)(prod - cons);
+ if (p_chain->mode == QED_CHAIN_MODE_NEXT_PTR)
+- used -= p_chain->u.chain32.prod_idx / p_chain->elem_per_page -
+- p_chain->u.chain32.cons_idx / p_chain->elem_per_page;
++ used -= (u32)(prod / elem_per_page - cons / elem_per_page);
+
+ return p_chain->capacity - used;
+ }
+diff --git a/include/linux/syscalls.h b/include/linux/syscalls.h
+index 1815065d52f3..1b0c7813197b 100644
+--- a/include/linux/syscalls.h
++++ b/include/linux/syscalls.h
+@@ -1358,7 +1358,7 @@ static inline long ksys_lchown(const char __user *filename, uid_t user,
+
+ extern long do_sys_ftruncate(unsigned int fd, loff_t length, int small);
+
+-static inline long ksys_ftruncate(unsigned int fd, unsigned long length)
++static inline long ksys_ftruncate(unsigned int fd, loff_t length)
+ {
+ return do_sys_ftruncate(fd, length, 1);
+ }
+diff --git a/include/linux/tpm_eventlog.h b/include/linux/tpm_eventlog.h
+index c253461b1c4e..96d36b7a1344 100644
+--- a/include/linux/tpm_eventlog.h
++++ b/include/linux/tpm_eventlog.h
+@@ -81,6 +81,8 @@ struct tcg_efi_specid_event_algs {
+ u16 digest_size;
+ } __packed;
+
++#define TCG_SPECID_SIG "Spec ID Event03"
++
+ struct tcg_efi_specid_event_head {
+ u8 signature[16];
+ u32 platform_class;
+@@ -171,6 +173,7 @@ static inline int __calc_tpm2_event_size(struct tcg_pcr_event2_head *event,
+ int i;
+ int j;
+ u32 count, event_type;
++ const u8 zero_digest[sizeof(event_header->digest)] = {0};
+
+ marker = event;
+ marker_start = marker;
+@@ -198,10 +201,19 @@ static inline int __calc_tpm2_event_size(struct tcg_pcr_event2_head *event,
+ count = READ_ONCE(event->count);
+ event_type = READ_ONCE(event->event_type);
+
++ /* Verify that it's the log header */
++ if (event_header->pcr_idx != 0 ||
++ event_header->event_type != NO_ACTION ||
++ memcmp(event_header->digest, zero_digest, sizeof(zero_digest))) {
++ size = 0;
++ goto out;
++ }
++
+ efispecid = (struct tcg_efi_specid_event_head *)event_header->event;
+
+ /* Check if event is malformed. */
+- if (count > efispecid->num_algs) {
++ if (memcmp(efispecid->signature, TCG_SPECID_SIG,
++ sizeof(TCG_SPECID_SIG)) || count > efispecid->num_algs) {
+ size = 0;
+ goto out;
+ }
+diff --git a/include/net/sctp/constants.h b/include/net/sctp/constants.h
+index 15b4d9aec7ff..122d9e2d8dfd 100644
+--- a/include/net/sctp/constants.h
++++ b/include/net/sctp/constants.h
+@@ -353,11 +353,13 @@ enum {
+ ipv4_is_anycast_6to4(a))
+
+ /* Flags used for the bind address copy functions. */
+-#define SCTP_ADDR6_ALLOWED 0x00000001 /* IPv6 address is allowed by
++#define SCTP_ADDR4_ALLOWED 0x00000001 /* IPv4 address is allowed by
+ local sock family */
+-#define SCTP_ADDR4_PEERSUPP 0x00000002 /* IPv4 address is supported by
++#define SCTP_ADDR6_ALLOWED 0x00000002 /* IPv6 address is allowed by
++ local sock family */
++#define SCTP_ADDR4_PEERSUPP 0x00000004 /* IPv4 address is supported by
+ peer */
+-#define SCTP_ADDR6_PEERSUPP 0x00000004 /* IPv6 address is supported by
++#define SCTP_ADDR6_PEERSUPP 0x00000008 /* IPv6 address is supported by
+ peer */
+
+ /* Reasons to retransmit. */
+diff --git a/include/net/sock.h b/include/net/sock.h
+index 3e8c6d4b4b59..46423e86dba5 100644
+--- a/include/net/sock.h
++++ b/include/net/sock.h
+@@ -1846,7 +1846,6 @@ static inline int sk_rx_queue_get(const struct sock *sk)
+
+ static inline void sk_set_socket(struct sock *sk, struct socket *sock)
+ {
+- sk_tx_queue_clear(sk);
+ sk->sk_socket = sock;
+ }
+
+diff --git a/include/net/xfrm.h b/include/net/xfrm.h
+index 8f71c111e65a..03024701c79f 100644
+--- a/include/net/xfrm.h
++++ b/include/net/xfrm.h
+@@ -1013,6 +1013,7 @@ struct xfrm_offload {
+ #define XFRM_GRO 32
+ #define XFRM_ESP_NO_TRAILER 64
+ #define XFRM_DEV_RESUME 128
++#define XFRM_XMIT 256
+
+ __u32 status;
+ #define CRYPTO_SUCCESS 1
+diff --git a/include/uapi/linux/fb.h b/include/uapi/linux/fb.h
+index b6aac7ee1f67..4c14e8be7267 100644
+--- a/include/uapi/linux/fb.h
++++ b/include/uapi/linux/fb.h
+@@ -205,6 +205,7 @@ struct fb_bitfield {
+ #define FB_ACTIVATE_ALL 64 /* change all VCs on this fb */
+ #define FB_ACTIVATE_FORCE 128 /* force apply even when no change*/
+ #define FB_ACTIVATE_INV_MODE 256 /* invalidate videomode */
++#define FB_ACTIVATE_KD_TEXT 512 /* for KDSET vt ioctl */
+
+ #define FB_ACCELF_TEXT 1 /* (OBSOLETE) see fb_info.flags and vc_mode */
+
+diff --git a/kernel/bpf/cgroup.c b/kernel/bpf/cgroup.c
+index cb305e71e7de..25aebd21c15b 100644
+--- a/kernel/bpf/cgroup.c
++++ b/kernel/bpf/cgroup.c
+@@ -1240,16 +1240,23 @@ static bool __cgroup_bpf_prog_array_is_empty(struct cgroup *cgrp,
+
+ static int sockopt_alloc_buf(struct bpf_sockopt_kern *ctx, int max_optlen)
+ {
+- if (unlikely(max_optlen > PAGE_SIZE) || max_optlen < 0)
++ if (unlikely(max_optlen < 0))
+ return -EINVAL;
+
++ if (unlikely(max_optlen > PAGE_SIZE)) {
++ /* We don't expose optvals that are greater than PAGE_SIZE
++ * to the BPF program.
++ */
++ max_optlen = PAGE_SIZE;
++ }
++
+ ctx->optval = kzalloc(max_optlen, GFP_USER);
+ if (!ctx->optval)
+ return -ENOMEM;
+
+ ctx->optval_end = ctx->optval + max_optlen;
+
+- return 0;
++ return max_optlen;
+ }
+
+ static void sockopt_free_buf(struct bpf_sockopt_kern *ctx)
+@@ -1283,13 +1290,13 @@ int __cgroup_bpf_run_filter_setsockopt(struct sock *sk, int *level,
+ */
+ max_optlen = max_t(int, 16, *optlen);
+
+- ret = sockopt_alloc_buf(&ctx, max_optlen);
+- if (ret)
+- return ret;
++ max_optlen = sockopt_alloc_buf(&ctx, max_optlen);
++ if (max_optlen < 0)
++ return max_optlen;
+
+ ctx.optlen = *optlen;
+
+- if (copy_from_user(ctx.optval, optval, *optlen) != 0) {
++ if (copy_from_user(ctx.optval, optval, min(*optlen, max_optlen)) != 0) {
+ ret = -EFAULT;
+ goto out;
+ }
+@@ -1317,8 +1324,14 @@ int __cgroup_bpf_run_filter_setsockopt(struct sock *sk, int *level,
+ /* export any potential modifications */
+ *level = ctx.level;
+ *optname = ctx.optname;
+- *optlen = ctx.optlen;
+- *kernel_optval = ctx.optval;
++
++ /* optlen == 0 from BPF indicates that we should
++ * use original userspace data.
++ */
++ if (ctx.optlen != 0) {
++ *optlen = ctx.optlen;
++ *kernel_optval = ctx.optval;
++ }
+ }
+
+ out:
+@@ -1350,12 +1363,12 @@ int __cgroup_bpf_run_filter_getsockopt(struct sock *sk, int level,
+ __cgroup_bpf_prog_array_is_empty(cgrp, BPF_CGROUP_GETSOCKOPT))
+ return retval;
+
+- ret = sockopt_alloc_buf(&ctx, max_optlen);
+- if (ret)
+- return ret;
+-
+ ctx.optlen = max_optlen;
+
++ max_optlen = sockopt_alloc_buf(&ctx, max_optlen);
++ if (max_optlen < 0)
++ return max_optlen;
++
+ if (!retval) {
+ /* If kernel getsockopt finished successfully,
+ * copy whatever was returned to the user back
+@@ -1369,10 +1382,8 @@ int __cgroup_bpf_run_filter_getsockopt(struct sock *sk, int level,
+ goto out;
+ }
+
+- if (ctx.optlen > max_optlen)
+- ctx.optlen = max_optlen;
+-
+- if (copy_from_user(ctx.optval, optval, ctx.optlen) != 0) {
++ if (copy_from_user(ctx.optval, optval,
++ min(ctx.optlen, max_optlen)) != 0) {
+ ret = -EFAULT;
+ goto out;
+ }
+@@ -1401,10 +1412,12 @@ int __cgroup_bpf_run_filter_getsockopt(struct sock *sk, int level,
+ goto out;
+ }
+
+- if (copy_to_user(optval, ctx.optval, ctx.optlen) ||
+- put_user(ctx.optlen, optlen)) {
+- ret = -EFAULT;
+- goto out;
++ if (ctx.optlen != 0) {
++ if (copy_to_user(optval, ctx.optval, ctx.optlen) ||
++ put_user(ctx.optlen, optlen)) {
++ ret = -EFAULT;
++ goto out;
++ }
+ }
+
+ ret = ctx.retval;
+diff --git a/kernel/bpf/devmap.c b/kernel/bpf/devmap.c
+index 58bdca5d978a..badf382bbd36 100644
+--- a/kernel/bpf/devmap.c
++++ b/kernel/bpf/devmap.c
+@@ -85,12 +85,13 @@ static DEFINE_PER_CPU(struct list_head, dev_flush_list);
+ static DEFINE_SPINLOCK(dev_map_lock);
+ static LIST_HEAD(dev_map_list);
+
+-static struct hlist_head *dev_map_create_hash(unsigned int entries)
++static struct hlist_head *dev_map_create_hash(unsigned int entries,
++ int numa_node)
+ {
+ int i;
+ struct hlist_head *hash;
+
+- hash = kmalloc_array(entries, sizeof(*hash), GFP_KERNEL);
++ hash = bpf_map_area_alloc(entries * sizeof(*hash), numa_node);
+ if (hash != NULL)
+ for (i = 0; i < entries; i++)
+ INIT_HLIST_HEAD(&hash[i]);
+@@ -138,7 +139,8 @@ static int dev_map_init_map(struct bpf_dtab *dtab, union bpf_attr *attr)
+ return -EINVAL;
+
+ if (attr->map_type == BPF_MAP_TYPE_DEVMAP_HASH) {
+- dtab->dev_index_head = dev_map_create_hash(dtab->n_buckets);
++ dtab->dev_index_head = dev_map_create_hash(dtab->n_buckets,
++ dtab->map.numa_node);
+ if (!dtab->dev_index_head)
+ goto free_charge;
+
+@@ -223,7 +225,7 @@ static void dev_map_free(struct bpf_map *map)
+ }
+ }
+
+- kfree(dtab->dev_index_head);
++ bpf_map_area_free(dtab->dev_index_head);
+ } else {
+ for (i = 0; i < dtab->map.max_entries; i++) {
+ struct bpf_dtab_netdev *dev;
+diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
+index 8f4bbdaf965e..2270930f36f8 100644
+--- a/kernel/dma/direct.c
++++ b/kernel/dma/direct.c
+@@ -124,6 +124,7 @@ void *dma_direct_alloc_pages(struct device *dev, size_t size,
+ {
+ struct page *page;
+ void *ret;
++ int err;
+
+ if (IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) &&
+ dma_alloc_need_uncached(dev, attrs) &&
+@@ -160,6 +161,12 @@ void *dma_direct_alloc_pages(struct device *dev, size_t size,
+ __builtin_return_address(0));
+ if (!ret)
+ goto out_free_pages;
++ if (force_dma_unencrypted(dev)) {
++ err = set_memory_decrypted((unsigned long)ret,
++ 1 << get_order(size));
++ if (err)
++ goto out_free_pages;
++ }
+ memset(ret, 0, size);
+ goto done;
+ }
+@@ -176,8 +183,12 @@ void *dma_direct_alloc_pages(struct device *dev, size_t size,
+ }
+
+ ret = page_address(page);
+- if (force_dma_unencrypted(dev))
+- set_memory_decrypted((unsigned long)ret, 1 << get_order(size));
++ if (force_dma_unencrypted(dev)) {
++ err = set_memory_decrypted((unsigned long)ret,
++ 1 << get_order(size));
++ if (err)
++ goto out_free_pages;
++ }
+
+ memset(ret, 0, size);
+
+@@ -186,7 +197,7 @@ void *dma_direct_alloc_pages(struct device *dev, size_t size,
+ arch_dma_prep_coherent(page, size);
+ ret = arch_dma_set_uncached(ret, size);
+ if (IS_ERR(ret))
+- goto out_free_pages;
++ goto out_encrypt_pages;
+ }
+ done:
+ if (force_dma_unencrypted(dev))
+@@ -194,6 +205,15 @@ done:
+ else
+ *dma_handle = phys_to_dma(dev, page_to_phys(page));
+ return ret;
++
++out_encrypt_pages:
++ if (force_dma_unencrypted(dev)) {
++ err = set_memory_encrypted((unsigned long)page_address(page),
++ 1 << get_order(size));
++ /* If memory cannot be re-encrypted, it must be leaked */
++ if (err)
++ return NULL;
++ }
+ out_free_pages:
+ dma_free_contiguous(dev, page, size);
+ return NULL;
+diff --git a/kernel/kprobes.c b/kernel/kprobes.c
+index 195ecb955fcc..950a5cfd262c 100644
+--- a/kernel/kprobes.c
++++ b/kernel/kprobes.c
+@@ -326,7 +326,8 @@ struct kprobe *get_kprobe(void *addr)
+ struct kprobe *p;
+
+ head = &kprobe_table[hash_ptr(addr, KPROBE_HASH_BITS)];
+- hlist_for_each_entry_rcu(p, head, hlist) {
++ hlist_for_each_entry_rcu(p, head, hlist,
++ lockdep_is_held(&kprobe_mutex)) {
+ if (p->addr == addr)
+ return p;
+ }
+diff --git a/kernel/sched/core.c b/kernel/sched/core.c
+index 5eccfb816d23..f2618ade8047 100644
+--- a/kernel/sched/core.c
++++ b/kernel/sched/core.c
+@@ -4461,7 +4461,8 @@ void rt_mutex_setprio(struct task_struct *p, struct task_struct *pi_task)
+ */
+ if (dl_prio(prio)) {
+ if (!dl_prio(p->normal_prio) ||
+- (pi_task && dl_entity_preempt(&pi_task->dl, &p->dl))) {
++ (pi_task && dl_prio(pi_task->prio) &&
++ dl_entity_preempt(&pi_task->dl, &p->dl))) {
+ p->dl.dl_boosted = 1;
+ queue_flag |= ENQUEUE_REPLENISH;
+ } else
+diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
+index 504d2f51b0d6..f63f337c7147 100644
+--- a/kernel/sched/deadline.c
++++ b/kernel/sched/deadline.c
+@@ -2692,6 +2692,7 @@ void __dl_clear_params(struct task_struct *p)
+ dl_se->dl_bw = 0;
+ dl_se->dl_density = 0;
+
++ dl_se->dl_boosted = 0;
+ dl_se->dl_throttled = 0;
+ dl_se->dl_yielded = 0;
+ dl_se->dl_non_contending = 0;
+diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
+index 2ae7e30ccb33..5725199b32dc 100644
+--- a/kernel/sched/fair.c
++++ b/kernel/sched/fair.c
+@@ -807,7 +807,7 @@ void post_init_entity_util_avg(struct task_struct *p)
+ }
+ }
+
+- sa->runnable_avg = cpu_scale;
++ sa->runnable_avg = sa->util_avg;
+
+ if (p->sched_class != &fair_sched_class) {
+ /*
+diff --git a/kernel/trace/blktrace.c b/kernel/trace/blktrace.c
+index 35610a4be4a9..085fceca3377 100644
+--- a/kernel/trace/blktrace.c
++++ b/kernel/trace/blktrace.c
+@@ -3,6 +3,9 @@
+ * Copyright (C) 2006 Jens Axboe <axboe@kernel.dk>
+ *
+ */
++
++#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
++
+ #include <linux/kernel.h>
+ #include <linux/blkdev.h>
+ #include <linux/blktrace_api.h>
+@@ -494,6 +497,16 @@ static int do_blk_trace_setup(struct request_queue *q, char *name, dev_t dev,
+ */
+ strreplace(buts->name, '/', '_');
+
++ /*
++ * bdev can be NULL, as with scsi-generic, this is a helpful as
++ * we can be.
++ */
++ if (q->blk_trace) {
++ pr_warn("Concurrent blktraces are not allowed on %s\n",
++ buts->name);
++ return -EBUSY;
++ }
++
+ bt = kzalloc(sizeof(*bt), GFP_KERNEL);
+ if (!bt)
+ return -ENOMEM;
+diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
+index b8e1ca48be50..00867ff82412 100644
+--- a/kernel/trace/ring_buffer.c
++++ b/kernel/trace/ring_buffer.c
+@@ -2427,7 +2427,7 @@ rb_update_event(struct ring_buffer_per_cpu *cpu_buffer,
+ if (unlikely(info->add_timestamp)) {
+ bool abs = ring_buffer_time_stamp_abs(cpu_buffer->buffer);
+
+- event = rb_add_time_stamp(event, info->delta, abs);
++ event = rb_add_time_stamp(event, abs ? info->delta : delta, abs);
+ length -= RB_LEN_TIME_EXTEND;
+ delta = 0;
+ }
+diff --git a/kernel/trace/trace_boot.c b/kernel/trace/trace_boot.c
+index 9de29bb45a27..fdc5abc00bf8 100644
+--- a/kernel/trace/trace_boot.c
++++ b/kernel/trace/trace_boot.c
+@@ -101,12 +101,16 @@ trace_boot_add_kprobe_event(struct xbc_node *node, const char *event)
+ kprobe_event_cmd_init(&cmd, buf, MAX_BUF_LEN);
+
+ ret = kprobe_event_gen_cmd_start(&cmd, event, val);
+- if (ret)
++ if (ret) {
++ pr_err("Failed to generate probe: %s\n", buf);
+ break;
++ }
+
+ ret = kprobe_event_gen_cmd_end(&cmd);
+- if (ret)
++ if (ret) {
+ pr_err("Failed to add probe: %s\n", buf);
++ break;
++ }
+ }
+
+ return ret;
+diff --git a/kernel/trace/trace_events_trigger.c b/kernel/trace/trace_events_trigger.c
+index 3a74736da363..f725802160c0 100644
+--- a/kernel/trace/trace_events_trigger.c
++++ b/kernel/trace/trace_events_trigger.c
+@@ -216,11 +216,17 @@ static int event_trigger_regex_open(struct inode *inode, struct file *file)
+
+ int trigger_process_regex(struct trace_event_file *file, char *buff)
+ {
+- char *command, *next = buff;
++ char *command, *next;
+ struct event_command *p;
+ int ret = -EINVAL;
+
++ next = buff = skip_spaces(buff);
+ command = strsep(&next, ": \t");
++ if (next) {
++ next = skip_spaces(next);
++ if (!*next)
++ next = NULL;
++ }
+ command = (command[0] != '!') ? command : command + 1;
+
+ mutex_lock(&trigger_cmd_mutex);
+@@ -630,8 +636,14 @@ event_trigger_callback(struct event_command *cmd_ops,
+ int ret;
+
+ /* separate the trigger from the filter (t:n [if filter]) */
+- if (param && isdigit(param[0]))
++ if (param && isdigit(param[0])) {
+ trigger = strsep(¶m, " \t");
++ if (param) {
++ param = skip_spaces(param);
++ if (!*param)
++ param = NULL;
++ }
++ }
+
+ trigger_ops = cmd_ops->get_trigger_ops(cmd, trigger);
+
+@@ -1368,6 +1380,11 @@ int event_enable_trigger_func(struct event_command *cmd_ops,
+ trigger = strsep(¶m, " \t");
+ if (!trigger)
+ return -EINVAL;
++ if (param) {
++ param = skip_spaces(param);
++ if (!*param)
++ param = NULL;
++ }
+
+ system = strsep(&trigger, ":");
+ if (!trigger)
+diff --git a/lib/test_objagg.c b/lib/test_objagg.c
+index 72c1abfa154d..da137939a410 100644
+--- a/lib/test_objagg.c
++++ b/lib/test_objagg.c
+@@ -979,10 +979,10 @@ err_check_expect_stats2:
+ err_world2_obj_get:
+ for (i--; i >= 0; i--)
+ world_obj_put(&world2, objagg, hints_case->key_ids[i]);
+- objagg_hints_put(hints);
+- objagg_destroy(objagg2);
+ i = hints_case->key_ids_count;
++ objagg_destroy(objagg2);
+ err_check_expect_hints_stats:
++ objagg_hints_put(hints);
+ err_hints_get:
+ err_check_expect_stats:
+ err_world_obj_get:
+diff --git a/mm/compaction.c b/mm/compaction.c
+index 46f0fcc93081..65b568e19582 100644
+--- a/mm/compaction.c
++++ b/mm/compaction.c
+@@ -2318,15 +2318,26 @@ static enum compact_result compact_zone_order(struct zone *zone, int order,
+ .page = NULL,
+ };
+
+- current->capture_control = &capc;
++ /*
++ * Make sure the structs are really initialized before we expose the
++ * capture control, in case we are interrupted and the interrupt handler
++ * frees a page.
++ */
++ barrier();
++ WRITE_ONCE(current->capture_control, &capc);
+
+ ret = compact_zone(&cc, &capc);
+
+ VM_BUG_ON(!list_empty(&cc.freepages));
+ VM_BUG_ON(!list_empty(&cc.migratepages));
+
+- *capture = capc.page;
+- current->capture_control = NULL;
++ /*
++ * Make sure we hide capture control first before we read the captured
++ * page pointer, otherwise an interrupt could free and capture a page
++ * and we would leak it.
++ */
++ WRITE_ONCE(current->capture_control, NULL);
++ *capture = READ_ONCE(capc.page);
+
+ return ret;
+ }
+diff --git a/mm/memcontrol.c b/mm/memcontrol.c
+index a3b97f103966..ef0e291a8cf4 100644
+--- a/mm/memcontrol.c
++++ b/mm/memcontrol.c
+@@ -2790,8 +2790,10 @@ static void memcg_schedule_kmem_cache_create(struct mem_cgroup *memcg,
+ return;
+
+ cw = kmalloc(sizeof(*cw), GFP_NOWAIT | __GFP_NOWARN);
+- if (!cw)
++ if (!cw) {
++ css_put(&memcg->css);
+ return;
++ }
+
+ cw->memcg = memcg;
+ cw->cachep = cachep;
+@@ -6349,11 +6351,16 @@ static unsigned long effective_protection(unsigned long usage,
+ * We're using unprotected memory for the weight so that if
+ * some cgroups DO claim explicit protection, we don't protect
+ * the same bytes twice.
++ *
++ * Check both usage and parent_usage against the respective
++ * protected values. One should imply the other, but they
++ * aren't read atomically - make sure the division is sane.
+ */
+ if (!(cgrp_dfl_root.flags & CGRP_ROOT_MEMORY_RECURSIVE_PROT))
+ return ep;
+-
+- if (parent_effective > siblings_protected && usage > protected) {
++ if (parent_effective > siblings_protected &&
++ parent_usage > siblings_protected &&
++ usage > protected) {
+ unsigned long unclaimed;
+
+ unclaimed = parent_effective - siblings_protected;
+diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
+index fc0aad0bc1f5..744a3ea284b7 100644
+--- a/mm/memory_hotplug.c
++++ b/mm/memory_hotplug.c
+@@ -468,11 +468,20 @@ void __ref remove_pfn_range_from_zone(struct zone *zone,
+ unsigned long start_pfn,
+ unsigned long nr_pages)
+ {
++ const unsigned long end_pfn = start_pfn + nr_pages;
+ struct pglist_data *pgdat = zone->zone_pgdat;
+- unsigned long flags;
++ unsigned long pfn, cur_nr_pages, flags;
+
+ /* Poison struct pages because they are now uninitialized again. */
+- page_init_poison(pfn_to_page(start_pfn), sizeof(struct page) * nr_pages);
++ for (pfn = start_pfn; pfn < end_pfn; pfn += cur_nr_pages) {
++ cond_resched();
++
++ /* Select all remaining pages up to the next section boundary */
++ cur_nr_pages =
++ min(end_pfn - pfn, SECTION_ALIGN_UP(pfn + 1) - pfn);
++ page_init_poison(pfn_to_page(pfn),
++ sizeof(struct page) * cur_nr_pages);
++ }
+
+ #ifdef CONFIG_ZONE_DEVICE
+ /*
+diff --git a/mm/slab.h b/mm/slab.h
+index 207c83ef6e06..74f7e09a7cfd 100644
+--- a/mm/slab.h
++++ b/mm/slab.h
+@@ -348,7 +348,7 @@ static __always_inline int memcg_charge_slab(struct page *page,
+ gfp_t gfp, int order,
+ struct kmem_cache *s)
+ {
+- unsigned int nr_pages = 1 << order;
++ int nr_pages = 1 << order;
+ struct mem_cgroup *memcg;
+ struct lruvec *lruvec;
+ int ret;
+@@ -388,7 +388,7 @@ out:
+ static __always_inline void memcg_uncharge_slab(struct page *page, int order,
+ struct kmem_cache *s)
+ {
+- unsigned int nr_pages = 1 << order;
++ int nr_pages = 1 << order;
+ struct mem_cgroup *memcg;
+ struct lruvec *lruvec;
+
+diff --git a/mm/slab_common.c b/mm/slab_common.c
+index 9e72ba224175..37d48a56431d 100644
+--- a/mm/slab_common.c
++++ b/mm/slab_common.c
+@@ -1726,7 +1726,7 @@ void kzfree(const void *p)
+ if (unlikely(ZERO_OR_NULL_PTR(mem)))
+ return;
+ ks = ksize(mem);
+- memset(mem, 0, ks);
++ memzero_explicit(mem, ks);
+ kfree(mem);
+ }
+ EXPORT_SYMBOL(kzfree);
+diff --git a/net/bridge/br_private.h b/net/bridge/br_private.h
+index 1f97703a52ff..18430f79ac37 100644
+--- a/net/bridge/br_private.h
++++ b/net/bridge/br_private.h
+@@ -217,8 +217,8 @@ struct net_bridge_port_group {
+ struct rcu_head rcu;
+ struct timer_list timer;
+ struct br_ip addr;
++ unsigned char eth_addr[ETH_ALEN] __aligned(2);
+ unsigned char flags;
+- unsigned char eth_addr[ETH_ALEN];
+ };
+
+ struct net_bridge_mdb_entry {
+diff --git a/net/core/dev.c b/net/core/dev.c
+index 93a279ab4e97..c9ee5d80d5ea 100644
+--- a/net/core/dev.c
++++ b/net/core/dev.c
+@@ -4109,10 +4109,12 @@ int dev_direct_xmit(struct sk_buff *skb, u16 queue_id)
+
+ local_bh_disable();
+
++ dev_xmit_recursion_inc();
+ HARD_TX_LOCK(dev, txq, smp_processor_id());
+ if (!netif_xmit_frozen_or_drv_stopped(txq))
+ ret = netdev_start_xmit(skb, dev, txq, false);
+ HARD_TX_UNLOCK(dev, txq);
++ dev_xmit_recursion_dec();
+
+ local_bh_enable();
+
+@@ -9435,6 +9437,13 @@ int register_netdevice(struct net_device *dev)
+ rcu_barrier();
+
+ dev->reg_state = NETREG_UNREGISTERED;
++ /* We should put the kobject that hold in
++ * netdev_unregister_kobject(), otherwise
++ * the net device cannot be freed when
++ * driver calls free_netdev(), because the
++ * kobject is being hold.
++ */
++ kobject_put(&dev->dev.kobj);
+ }
+ /*
+ * Prevent userspace races by waiting until the network
+diff --git a/net/core/sock.c b/net/core/sock.c
+index b714162213ae..afe4a62adf8f 100644
+--- a/net/core/sock.c
++++ b/net/core/sock.c
+@@ -707,7 +707,7 @@ bool sk_mc_loop(struct sock *sk)
+ return inet6_sk(sk)->mc_loop;
+ #endif
+ }
+- WARN_ON(1);
++ WARN_ON_ONCE(1);
+ return true;
+ }
+ EXPORT_SYMBOL(sk_mc_loop);
+@@ -1678,6 +1678,7 @@ struct sock *sk_alloc(struct net *net, int family, gfp_t priority,
+ cgroup_sk_alloc(&sk->sk_cgrp_data);
+ sock_update_classid(&sk->sk_cgrp_data);
+ sock_update_netprioidx(&sk->sk_cgrp_data);
++ sk_tx_queue_clear(sk);
+ }
+
+ return sk;
+@@ -1901,6 +1902,7 @@ struct sock *sk_clone_lock(const struct sock *sk, const gfp_t priority)
+ */
+ sk_refcnt_debug_inc(newsk);
+ sk_set_socket(newsk, NULL);
++ sk_tx_queue_clear(newsk);
+ RCU_INIT_POINTER(newsk->sk_wq, NULL);
+
+ if (newsk->sk_prot->sockets_allocated)
+diff --git a/net/ethtool/common.c b/net/ethtool/common.c
+index 423e640e3876..aaecfc916a4d 100644
+--- a/net/ethtool/common.c
++++ b/net/ethtool/common.c
+@@ -40,9 +40,11 @@ const char netdev_features_strings[NETDEV_FEATURE_COUNT][ETH_GSTRING_LEN] = {
+ [NETIF_F_GSO_UDP_TUNNEL_BIT] = "tx-udp_tnl-segmentation",
+ [NETIF_F_GSO_UDP_TUNNEL_CSUM_BIT] = "tx-udp_tnl-csum-segmentation",
+ [NETIF_F_GSO_PARTIAL_BIT] = "tx-gso-partial",
++ [NETIF_F_GSO_TUNNEL_REMCSUM_BIT] = "tx-tunnel-remcsum-segmentation",
+ [NETIF_F_GSO_SCTP_BIT] = "tx-sctp-segmentation",
+ [NETIF_F_GSO_ESP_BIT] = "tx-esp-segmentation",
+ [NETIF_F_GSO_UDP_L4_BIT] = "tx-udp-segmentation",
++ [NETIF_F_GSO_FRAGLIST_BIT] = "tx-gso-list",
+
+ [NETIF_F_FCOE_CRC_BIT] = "tx-checksum-fcoe-crc",
+ [NETIF_F_SCTP_CRC_BIT] = "tx-checksum-sctp",
+diff --git a/net/ethtool/ioctl.c b/net/ethtool/ioctl.c
+index 89d0b1827aaf..d3eeeb26396c 100644
+--- a/net/ethtool/ioctl.c
++++ b/net/ethtool/ioctl.c
+@@ -2957,7 +2957,7 @@ ethtool_rx_flow_rule_create(const struct ethtool_rx_flow_spec_input *input)
+ sizeof(match->mask.ipv6.dst));
+ }
+ if (memcmp(v6_m_spec->ip6src, &zero_addr, sizeof(zero_addr)) ||
+- memcmp(v6_m_spec->ip6src, &zero_addr, sizeof(zero_addr))) {
++ memcmp(v6_m_spec->ip6dst, &zero_addr, sizeof(zero_addr))) {
+ match->dissector.used_keys |=
+ BIT(FLOW_DISSECTOR_KEY_IPV6_ADDRS);
+ match->dissector.offset[FLOW_DISSECTOR_KEY_IPV6_ADDRS] =
+diff --git a/net/ipv4/fib_semantics.c b/net/ipv4/fib_semantics.c
+index 55ca2e521828..871c035be31f 100644
+--- a/net/ipv4/fib_semantics.c
++++ b/net/ipv4/fib_semantics.c
+@@ -1109,7 +1109,7 @@ static int fib_check_nh_v4_gw(struct net *net, struct fib_nh *nh, u32 table,
+ if (fl4.flowi4_scope < RT_SCOPE_LINK)
+ fl4.flowi4_scope = RT_SCOPE_LINK;
+
+- if (table)
++ if (table && table != RT_TABLE_MAIN)
+ tbl = fib_get_table(net, table);
+
+ if (tbl)
+diff --git a/net/ipv4/ip_tunnel.c b/net/ipv4/ip_tunnel.c
+index cd4b84310d92..a0b4dc54f8a6 100644
+--- a/net/ipv4/ip_tunnel.c
++++ b/net/ipv4/ip_tunnel.c
+@@ -85,9 +85,10 @@ struct ip_tunnel *ip_tunnel_lookup(struct ip_tunnel_net *itn,
+ __be32 remote, __be32 local,
+ __be32 key)
+ {
+- unsigned int hash;
+ struct ip_tunnel *t, *cand = NULL;
+ struct hlist_head *head;
++ struct net_device *ndev;
++ unsigned int hash;
+
+ hash = ip_tunnel_hash(key, remote);
+ head = &itn->tunnels[hash];
+@@ -162,8 +163,9 @@ struct ip_tunnel *ip_tunnel_lookup(struct ip_tunnel_net *itn,
+ if (t && t->dev->flags & IFF_UP)
+ return t;
+
+- if (itn->fb_tunnel_dev && itn->fb_tunnel_dev->flags & IFF_UP)
+- return netdev_priv(itn->fb_tunnel_dev);
++ ndev = READ_ONCE(itn->fb_tunnel_dev);
++ if (ndev && ndev->flags & IFF_UP)
++ return netdev_priv(ndev);
+
+ return NULL;
+ }
+@@ -1245,9 +1247,9 @@ void ip_tunnel_uninit(struct net_device *dev)
+ struct ip_tunnel_net *itn;
+
+ itn = net_generic(net, tunnel->ip_tnl_net_id);
+- /* fb_tunnel_dev will be unregisted in net-exit call. */
+- if (itn->fb_tunnel_dev != dev)
+- ip_tunnel_del(itn, netdev_priv(dev));
++ ip_tunnel_del(itn, netdev_priv(dev));
++ if (itn->fb_tunnel_dev == dev)
++ WRITE_ONCE(itn->fb_tunnel_dev, NULL);
+
+ dst_cache_reset(&tunnel->dst_cache);
+ }
+diff --git a/net/ipv4/tcp_cubic.c b/net/ipv4/tcp_cubic.c
+index 8f8eefd3a3ce..c7bf5b26bf0c 100644
+--- a/net/ipv4/tcp_cubic.c
++++ b/net/ipv4/tcp_cubic.c
+@@ -432,10 +432,9 @@ static void hystart_update(struct sock *sk, u32 delay)
+
+ if (hystart_detect & HYSTART_DELAY) {
+ /* obtain the minimum delay of more than sampling packets */
++ if (ca->curr_rtt > delay)
++ ca->curr_rtt = delay;
+ if (ca->sample_cnt < HYSTART_MIN_SAMPLES) {
+- if (ca->curr_rtt > delay)
+- ca->curr_rtt = delay;
+-
+ ca->sample_cnt++;
+ } else {
+ if (ca->curr_rtt > ca->delay_min +
+diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
+index 29c6fc8c7716..1fa009999f57 100644
+--- a/net/ipv4/tcp_input.c
++++ b/net/ipv4/tcp_input.c
+@@ -261,7 +261,8 @@ static void tcp_ecn_accept_cwr(struct sock *sk, const struct sk_buff *skb)
+ * cwnd may be very low (even just 1 packet), so we should ACK
+ * immediately.
+ */
+- inet_csk(sk)->icsk_ack.pending |= ICSK_ACK_NOW;
++ if (TCP_SKB_CB(skb)->seq != TCP_SKB_CB(skb)->end_seq)
++ inet_csk(sk)->icsk_ack.pending |= ICSK_ACK_NOW;
+ }
+ }
+
+@@ -3683,6 +3684,15 @@ static int tcp_ack(struct sock *sk, const struct sk_buff *skb, int flag)
+ tcp_in_ack_event(sk, ack_ev_flags);
+ }
+
++ /* This is a deviation from RFC3168 since it states that:
++ * "When the TCP data sender is ready to set the CWR bit after reducing
++ * the congestion window, it SHOULD set the CWR bit only on the first
++ * new data packet that it transmits."
++ * We accept CWR on pure ACKs to be more robust
++ * with widely-deployed TCP implementations that do this.
++ */
++ tcp_ecn_accept_cwr(sk, skb);
++
+ /* We passed data and got it acked, remove any soft error
+ * log. Something worked...
+ */
+@@ -4593,7 +4603,11 @@ static void tcp_data_queue_ofo(struct sock *sk, struct sk_buff *skb)
+ if (tcp_ooo_try_coalesce(sk, tp->ooo_last_skb,
+ skb, &fragstolen)) {
+ coalesce_done:
+- tcp_grow_window(sk, skb);
++ /* For non sack flows, do not grow window to force DUPACK
++ * and trigger fast retransmit.
++ */
++ if (tcp_is_sack(tp))
++ tcp_grow_window(sk, skb);
+ kfree_skb_partial(skb, fragstolen);
+ skb = NULL;
+ goto add_sack;
+@@ -4677,7 +4691,11 @@ add_sack:
+ tcp_sack_new_ofo_skb(sk, seq, end_seq);
+ end:
+ if (skb) {
+- tcp_grow_window(sk, skb);
++ /* For non sack flows, do not grow window to force DUPACK
++ * and trigger fast retransmit.
++ */
++ if (tcp_is_sack(tp))
++ tcp_grow_window(sk, skb);
+ skb_condense(skb);
+ skb_set_owner_r(skb, sk);
+ }
+@@ -4780,8 +4798,6 @@ static void tcp_data_queue(struct sock *sk, struct sk_buff *skb)
+ skb_dst_drop(skb);
+ __skb_pull(skb, tcp_hdr(skb)->doff * 4);
+
+- tcp_ecn_accept_cwr(sk, skb);
+-
+ tp->rx_opt.dsack = 0;
+
+ /* Queue data for delivery to the user.
+diff --git a/net/ipv6/ip6_gre.c b/net/ipv6/ip6_gre.c
+index 781ca8c07a0d..6532bde82b40 100644
+--- a/net/ipv6/ip6_gre.c
++++ b/net/ipv6/ip6_gre.c
+@@ -127,6 +127,7 @@ static struct ip6_tnl *ip6gre_tunnel_lookup(struct net_device *dev,
+ gre_proto == htons(ETH_P_ERSPAN2)) ?
+ ARPHRD_ETHER : ARPHRD_IP6GRE;
+ int score, cand_score = 4;
++ struct net_device *ndev;
+
+ for_each_ip_tunnel_rcu(t, ign->tunnels_r_l[h0 ^ h1]) {
+ if (!ipv6_addr_equal(local, &t->parms.laddr) ||
+@@ -238,9 +239,9 @@ static struct ip6_tnl *ip6gre_tunnel_lookup(struct net_device *dev,
+ if (t && t->dev->flags & IFF_UP)
+ return t;
+
+- dev = ign->fb_tunnel_dev;
+- if (dev && dev->flags & IFF_UP)
+- return netdev_priv(dev);
++ ndev = READ_ONCE(ign->fb_tunnel_dev);
++ if (ndev && ndev->flags & IFF_UP)
++ return netdev_priv(ndev);
+
+ return NULL;
+ }
+@@ -413,6 +414,8 @@ static void ip6gre_tunnel_uninit(struct net_device *dev)
+
+ ip6gre_tunnel_unlink_md(ign, t);
+ ip6gre_tunnel_unlink(ign, t);
++ if (ign->fb_tunnel_dev == dev)
++ WRITE_ONCE(ign->fb_tunnel_dev, NULL);
+ dst_cache_reset(&t->dst_cache);
+ dev_put(dev);
+ }
+diff --git a/net/ipv6/mcast.c b/net/ipv6/mcast.c
+index eaa4c2cc2fbb..c875c9b6edbe 100644
+--- a/net/ipv6/mcast.c
++++ b/net/ipv6/mcast.c
+@@ -2618,6 +2618,7 @@ void ipv6_mc_destroy_dev(struct inet6_dev *idev)
+ idev->mc_list = i->next;
+
+ write_unlock_bh(&idev->lock);
++ ip6_mc_clear_src(i);
+ ma_put(i);
+ write_lock_bh(&idev->lock);
+ }
+diff --git a/net/mptcp/options.c b/net/mptcp/options.c
+index 1c20dd14b2aa..2430bbfa3405 100644
+--- a/net/mptcp/options.c
++++ b/net/mptcp/options.c
+@@ -336,9 +336,7 @@ bool mptcp_syn_options(struct sock *sk, const struct sk_buff *skb,
+ */
+ subflow->snd_isn = TCP_SKB_CB(skb)->end_seq;
+ if (subflow->request_mptcp) {
+- pr_debug("local_key=%llu", subflow->local_key);
+ opts->suboptions = OPTION_MPTCP_MPC_SYN;
+- opts->sndr_key = subflow->local_key;
+ *size = TCPOLEN_MPTCP_MPC_SYN;
+ return true;
+ } else if (subflow->request_join) {
+diff --git a/net/mptcp/subflow.c b/net/mptcp/subflow.c
+index e6feb05a93dc..db3e4e74e785 100644
+--- a/net/mptcp/subflow.c
++++ b/net/mptcp/subflow.c
+@@ -1015,8 +1015,10 @@ int mptcp_subflow_create_socket(struct sock *sk, struct socket **new_sock)
+ err = tcp_set_ulp(sf->sk, "mptcp");
+ release_sock(sf->sk);
+
+- if (err)
++ if (err) {
++ sock_release(sf);
+ return err;
++ }
+
+ /* the newly created socket really belongs to the owning MPTCP master
+ * socket, even if for additional subflows the allocation is performed
+diff --git a/net/netfilter/ipset/ip_set_core.c b/net/netfilter/ipset/ip_set_core.c
+index 340cb955af25..56621d6bfd29 100644
+--- a/net/netfilter/ipset/ip_set_core.c
++++ b/net/netfilter/ipset/ip_set_core.c
+@@ -460,6 +460,8 @@ ip_set_elem_len(struct ip_set *set, struct nlattr *tb[], size_t len,
+ for (id = 0; id < IPSET_EXT_ID_MAX; id++) {
+ if (!add_extension(id, cadt_flags, tb))
+ continue;
++ if (align < ip_set_extensions[id].align)
++ align = ip_set_extensions[id].align;
+ len = ALIGN(len, ip_set_extensions[id].align);
+ set->offset[id] = len;
+ set->extensions |= ip_set_extensions[id].type;
+diff --git a/net/netlink/genetlink.c b/net/netlink/genetlink.c
+index bcbba0bef1c2..9c1c27f3a089 100644
+--- a/net/netlink/genetlink.c
++++ b/net/netlink/genetlink.c
+@@ -474,8 +474,7 @@ genl_family_rcv_msg_attrs_parse(const struct genl_family *family,
+ struct netlink_ext_ack *extack,
+ const struct genl_ops *ops,
+ int hdrlen,
+- enum genl_validate_flags no_strict_flag,
+- bool parallel)
++ enum genl_validate_flags no_strict_flag)
+ {
+ enum netlink_validation validate = ops->validate & no_strict_flag ?
+ NL_VALIDATE_LIBERAL :
+@@ -486,7 +485,7 @@ genl_family_rcv_msg_attrs_parse(const struct genl_family *family,
+ if (!family->maxattr)
+ return NULL;
+
+- if (parallel) {
++ if (family->parallel_ops) {
+ attrbuf = kmalloc_array(family->maxattr + 1,
+ sizeof(struct nlattr *), GFP_KERNEL);
+ if (!attrbuf)
+@@ -498,7 +497,7 @@ genl_family_rcv_msg_attrs_parse(const struct genl_family *family,
+ err = __nlmsg_parse(nlh, hdrlen, attrbuf, family->maxattr,
+ family->policy, validate, extack);
+ if (err) {
+- if (parallel)
++ if (family->parallel_ops)
+ kfree(attrbuf);
+ return ERR_PTR(err);
+ }
+@@ -506,10 +505,9 @@ genl_family_rcv_msg_attrs_parse(const struct genl_family *family,
+ }
+
+ static void genl_family_rcv_msg_attrs_free(const struct genl_family *family,
+- struct nlattr **attrbuf,
+- bool parallel)
++ struct nlattr **attrbuf)
+ {
+- if (parallel)
++ if (family->parallel_ops)
+ kfree(attrbuf);
+ }
+
+@@ -537,15 +535,14 @@ static int genl_start(struct netlink_callback *cb)
+
+ attrs = genl_family_rcv_msg_attrs_parse(ctx->family, ctx->nlh, ctx->extack,
+ ops, ctx->hdrlen,
+- GENL_DONT_VALIDATE_DUMP_STRICT,
+- true);
++ GENL_DONT_VALIDATE_DUMP_STRICT);
+ if (IS_ERR(attrs))
+ return PTR_ERR(attrs);
+
+ no_attrs:
+ info = genl_dumpit_info_alloc();
+ if (!info) {
+- kfree(attrs);
++ genl_family_rcv_msg_attrs_free(ctx->family, attrs);
+ return -ENOMEM;
+ }
+ info->family = ctx->family;
+@@ -562,7 +559,7 @@ no_attrs:
+ }
+
+ if (rc) {
+- kfree(attrs);
++ genl_family_rcv_msg_attrs_free(info->family, info->attrs);
+ genl_dumpit_info_free(info);
+ cb->data = NULL;
+ }
+@@ -591,7 +588,7 @@ static int genl_lock_done(struct netlink_callback *cb)
+ rc = ops->done(cb);
+ genl_unlock();
+ }
+- genl_family_rcv_msg_attrs_free(info->family, info->attrs, false);
++ genl_family_rcv_msg_attrs_free(info->family, info->attrs);
+ genl_dumpit_info_free(info);
+ return rc;
+ }
+@@ -604,7 +601,7 @@ static int genl_parallel_done(struct netlink_callback *cb)
+
+ if (ops->done)
+ rc = ops->done(cb);
+- genl_family_rcv_msg_attrs_free(info->family, info->attrs, true);
++ genl_family_rcv_msg_attrs_free(info->family, info->attrs);
+ genl_dumpit_info_free(info);
+ return rc;
+ }
+@@ -671,8 +668,7 @@ static int genl_family_rcv_msg_doit(const struct genl_family *family,
+
+ attrbuf = genl_family_rcv_msg_attrs_parse(family, nlh, extack,
+ ops, hdrlen,
+- GENL_DONT_VALIDATE_STRICT,
+- family->parallel_ops);
++ GENL_DONT_VALIDATE_STRICT);
+ if (IS_ERR(attrbuf))
+ return PTR_ERR(attrbuf);
+
+@@ -698,7 +694,7 @@ static int genl_family_rcv_msg_doit(const struct genl_family *family,
+ family->post_doit(ops, skb, &info);
+
+ out:
+- genl_family_rcv_msg_attrs_free(family, attrbuf, family->parallel_ops);
++ genl_family_rcv_msg_attrs_free(family, attrbuf);
+
+ return err;
+ }
+diff --git a/net/openvswitch/actions.c b/net/openvswitch/actions.c
+index fc0efd8833c8..2611657f40ca 100644
+--- a/net/openvswitch/actions.c
++++ b/net/openvswitch/actions.c
+@@ -1169,9 +1169,10 @@ static int execute_check_pkt_len(struct datapath *dp, struct sk_buff *skb,
+ struct sw_flow_key *key,
+ const struct nlattr *attr, bool last)
+ {
++ struct ovs_skb_cb *ovs_cb = OVS_CB(skb);
+ const struct nlattr *actions, *cpl_arg;
++ int len, max_len, rem = nla_len(attr);
+ const struct check_pkt_len_arg *arg;
+- int rem = nla_len(attr);
+ bool clone_flow_key;
+
+ /* The first netlink attribute in 'attr' is always
+@@ -1180,7 +1181,11 @@ static int execute_check_pkt_len(struct datapath *dp, struct sk_buff *skb,
+ cpl_arg = nla_data(attr);
+ arg = nla_data(cpl_arg);
+
+- if (skb->len <= arg->pkt_len) {
++ len = ovs_cb->mru ? ovs_cb->mru + skb->mac_len : skb->len;
++ max_len = arg->pkt_len;
++
++ if ((skb_is_gso(skb) && skb_gso_validate_mac_len(skb, max_len)) ||
++ len <= max_len) {
+ /* Second netlink attribute in 'attr' is always
+ * 'OVS_CHECK_PKT_LEN_ATTR_ACTIONS_IF_LESS_EQUAL'.
+ */
+diff --git a/net/rxrpc/call_accept.c b/net/rxrpc/call_accept.c
+index b7611cc159e5..032ed76c0166 100644
+--- a/net/rxrpc/call_accept.c
++++ b/net/rxrpc/call_accept.c
+@@ -22,6 +22,11 @@
+ #include <net/ip.h>
+ #include "ar-internal.h"
+
++static void rxrpc_dummy_notify(struct sock *sk, struct rxrpc_call *call,
++ unsigned long user_call_ID)
++{
++}
++
+ /*
+ * Preallocate a single service call, connection and peer and, if possible,
+ * give them a user ID and attach the user's side of the ID to them.
+@@ -228,6 +233,8 @@ void rxrpc_discard_prealloc(struct rxrpc_sock *rx)
+ if (rx->discard_new_call) {
+ _debug("discard %lx", call->user_call_ID);
+ rx->discard_new_call(call, call->user_call_ID);
++ if (call->notify_rx)
++ call->notify_rx = rxrpc_dummy_notify;
+ rxrpc_put_call(call, rxrpc_call_put_kernel);
+ }
+ rxrpc_call_completed(call);
+diff --git a/net/rxrpc/input.c b/net/rxrpc/input.c
+index 3be4177baf70..22dec6049e1b 100644
+--- a/net/rxrpc/input.c
++++ b/net/rxrpc/input.c
+@@ -723,13 +723,12 @@ static void rxrpc_input_ackinfo(struct rxrpc_call *call, struct sk_buff *skb,
+ ntohl(ackinfo->rxMTU), ntohl(ackinfo->maxMTU),
+ rwind, ntohl(ackinfo->jumbo_max));
+
++ if (rwind > RXRPC_RXTX_BUFF_SIZE - 1)
++ rwind = RXRPC_RXTX_BUFF_SIZE - 1;
+ if (call->tx_winsize != rwind) {
+- if (rwind > RXRPC_RXTX_BUFF_SIZE - 1)
+- rwind = RXRPC_RXTX_BUFF_SIZE - 1;
+ if (rwind > call->tx_winsize)
+ wake = true;
+- trace_rxrpc_rx_rwind_change(call, sp->hdr.serial,
+- ntohl(ackinfo->rwind), wake);
++ trace_rxrpc_rx_rwind_change(call, sp->hdr.serial, rwind, wake);
+ call->tx_winsize = rwind;
+ }
+
+diff --git a/net/sched/sch_cake.c b/net/sched/sch_cake.c
+index 1496e87cd07b..9475fa81ea7f 100644
+--- a/net/sched/sch_cake.c
++++ b/net/sched/sch_cake.c
+@@ -1514,32 +1514,51 @@ static unsigned int cake_drop(struct Qdisc *sch, struct sk_buff **to_free)
+ return idx + (tin << 16);
+ }
+
+-static u8 cake_handle_diffserv(struct sk_buff *skb, u16 wash)
++static u8 cake_handle_diffserv(struct sk_buff *skb, bool wash)
+ {
+- int wlen = skb_network_offset(skb);
++ const int offset = skb_network_offset(skb);
++ u16 *buf, buf_;
+ u8 dscp;
+
+ switch (tc_skb_protocol(skb)) {
+ case htons(ETH_P_IP):
+- wlen += sizeof(struct iphdr);
+- if (!pskb_may_pull(skb, wlen) ||
+- skb_try_make_writable(skb, wlen))
++ buf = skb_header_pointer(skb, offset, sizeof(buf_), &buf_);
++ if (unlikely(!buf))
+ return 0;
+
+- dscp = ipv4_get_dsfield(ip_hdr(skb)) >> 2;
+- if (wash && dscp)
++ /* ToS is in the second byte of iphdr */
++ dscp = ipv4_get_dsfield((struct iphdr *)buf) >> 2;
++
++ if (wash && dscp) {
++ const int wlen = offset + sizeof(struct iphdr);
++
++ if (!pskb_may_pull(skb, wlen) ||
++ skb_try_make_writable(skb, wlen))
++ return 0;
++
+ ipv4_change_dsfield(ip_hdr(skb), INET_ECN_MASK, 0);
++ }
++
+ return dscp;
+
+ case htons(ETH_P_IPV6):
+- wlen += sizeof(struct ipv6hdr);
+- if (!pskb_may_pull(skb, wlen) ||
+- skb_try_make_writable(skb, wlen))
++ buf = skb_header_pointer(skb, offset, sizeof(buf_), &buf_);
++ if (unlikely(!buf))
+ return 0;
+
+- dscp = ipv6_get_dsfield(ipv6_hdr(skb)) >> 2;
+- if (wash && dscp)
++ /* Traffic class is in the first and second bytes of ipv6hdr */
++ dscp = ipv6_get_dsfield((struct ipv6hdr *)buf) >> 2;
++
++ if (wash && dscp) {
++ const int wlen = offset + sizeof(struct ipv6hdr);
++
++ if (!pskb_may_pull(skb, wlen) ||
++ skb_try_make_writable(skb, wlen))
++ return 0;
++
+ ipv6_change_dsfield(ipv6_hdr(skb), INET_ECN_MASK, 0);
++ }
++
+ return dscp;
+
+ case htons(ETH_P_ARP):
+@@ -1556,14 +1575,17 @@ static struct cake_tin_data *cake_select_tin(struct Qdisc *sch,
+ {
+ struct cake_sched_data *q = qdisc_priv(sch);
+ u32 tin, mark;
++ bool wash;
+ u8 dscp;
+
+ /* Tin selection: Default to diffserv-based selection, allow overriding
+- * using firewall marks or skb->priority.
++ * using firewall marks or skb->priority. Call DSCP parsing early if
++ * wash is enabled, otherwise defer to below to skip unneeded parsing.
+ */
+- dscp = cake_handle_diffserv(skb,
+- q->rate_flags & CAKE_FLAG_WASH);
+ mark = (skb->mark & q->fwmark_mask) >> q->fwmark_shft;
++ wash = !!(q->rate_flags & CAKE_FLAG_WASH);
++ if (wash)
++ dscp = cake_handle_diffserv(skb, wash);
+
+ if (q->tin_mode == CAKE_DIFFSERV_BESTEFFORT)
+ tin = 0;
+@@ -1577,6 +1599,8 @@ static struct cake_tin_data *cake_select_tin(struct Qdisc *sch,
+ tin = q->tin_order[TC_H_MIN(skb->priority) - 1];
+
+ else {
++ if (!wash)
++ dscp = cake_handle_diffserv(skb, wash);
+ tin = q->tin_index[dscp];
+
+ if (unlikely(tin >= q->tin_cnt))
+@@ -2654,7 +2678,7 @@ static int cake_init(struct Qdisc *sch, struct nlattr *opt,
+ qdisc_watchdog_init(&q->watchdog, sch);
+
+ if (opt) {
+- int err = cake_change(sch, opt, extack);
++ err = cake_change(sch, opt, extack);
+
+ if (err)
+ return err;
+@@ -2971,7 +2995,7 @@ static int cake_dump_class_stats(struct Qdisc *sch, unsigned long cl,
+ PUT_STAT_S32(BLUE_TIMER_US,
+ ktime_to_us(
+ ktime_sub(now,
+- flow->cvars.blue_timer)));
++ flow->cvars.blue_timer)));
+ }
+ if (flow->cvars.dropping) {
+ PUT_STAT_S32(DROP_NEXT_US,
+diff --git a/net/sctp/associola.c b/net/sctp/associola.c
+index 437079a4883d..732bc9a45190 100644
+--- a/net/sctp/associola.c
++++ b/net/sctp/associola.c
+@@ -1565,12 +1565,15 @@ void sctp_assoc_rwnd_decrease(struct sctp_association *asoc, unsigned int len)
+ int sctp_assoc_set_bind_addr_from_ep(struct sctp_association *asoc,
+ enum sctp_scope scope, gfp_t gfp)
+ {
++ struct sock *sk = asoc->base.sk;
+ int flags;
+
+ /* Use scoping rules to determine the subset of addresses from
+ * the endpoint.
+ */
+- flags = (PF_INET6 == asoc->base.sk->sk_family) ? SCTP_ADDR6_ALLOWED : 0;
++ flags = (PF_INET6 == sk->sk_family) ? SCTP_ADDR6_ALLOWED : 0;
++ if (!inet_v6_ipv6only(sk))
++ flags |= SCTP_ADDR4_ALLOWED;
+ if (asoc->peer.ipv4_address)
+ flags |= SCTP_ADDR4_PEERSUPP;
+ if (asoc->peer.ipv6_address)
+diff --git a/net/sctp/bind_addr.c b/net/sctp/bind_addr.c
+index 53bc61537f44..701c5a4e441d 100644
+--- a/net/sctp/bind_addr.c
++++ b/net/sctp/bind_addr.c
+@@ -461,6 +461,7 @@ static int sctp_copy_one_addr(struct net *net, struct sctp_bind_addr *dest,
+ * well as the remote peer.
+ */
+ if ((((AF_INET == addr->sa.sa_family) &&
++ (flags & SCTP_ADDR4_ALLOWED) &&
+ (flags & SCTP_ADDR4_PEERSUPP))) ||
+ (((AF_INET6 == addr->sa.sa_family) &&
+ (flags & SCTP_ADDR6_ALLOWED) &&
+diff --git a/net/sctp/protocol.c b/net/sctp/protocol.c
+index 092d1afdee0d..cde29f3c7fb3 100644
+--- a/net/sctp/protocol.c
++++ b/net/sctp/protocol.c
+@@ -148,7 +148,8 @@ int sctp_copy_local_addr_list(struct net *net, struct sctp_bind_addr *bp,
+ * sock as well as the remote peer.
+ */
+ if (addr->a.sa.sa_family == AF_INET &&
+- !(copy_flags & SCTP_ADDR4_PEERSUPP))
++ (!(copy_flags & SCTP_ADDR4_ALLOWED) ||
++ !(copy_flags & SCTP_ADDR4_PEERSUPP)))
+ continue;
+ if (addr->a.sa.sa_family == AF_INET6 &&
+ (!(copy_flags & SCTP_ADDR6_ALLOWED) ||
+diff --git a/net/sunrpc/rpc_pipe.c b/net/sunrpc/rpc_pipe.c
+index 39e14d5edaf1..e9d0953522f0 100644
+--- a/net/sunrpc/rpc_pipe.c
++++ b/net/sunrpc/rpc_pipe.c
+@@ -1317,6 +1317,7 @@ rpc_gssd_dummy_populate(struct dentry *root, struct rpc_pipe *pipe_data)
+ q.len = strlen(gssd_dummy_clnt_dir[0].name);
+ clnt_dentry = d_hash_and_lookup(gssd_dentry, &q);
+ if (!clnt_dentry) {
++ __rpc_depopulate(gssd_dentry, gssd_dummy_clnt_dir, 0, 1);
+ pipe_dentry = ERR_PTR(-ENOENT);
+ goto out;
+ }
+diff --git a/net/sunrpc/xdr.c b/net/sunrpc/xdr.c
+index 6f7d82fb1eb0..be11d672b5b9 100644
+--- a/net/sunrpc/xdr.c
++++ b/net/sunrpc/xdr.c
+@@ -1118,6 +1118,7 @@ xdr_buf_subsegment(struct xdr_buf *buf, struct xdr_buf *subbuf,
+ base = 0;
+ } else {
+ base -= buf->head[0].iov_len;
++ subbuf->head[0].iov_base = buf->head[0].iov_base;
+ subbuf->head[0].iov_len = 0;
+ }
+
+@@ -1130,6 +1131,8 @@ xdr_buf_subsegment(struct xdr_buf *buf, struct xdr_buf *subbuf,
+ base = 0;
+ } else {
+ base -= buf->page_len;
++ subbuf->pages = buf->pages;
++ subbuf->page_base = 0;
+ subbuf->page_len = 0;
+ }
+
+@@ -1141,6 +1144,7 @@ xdr_buf_subsegment(struct xdr_buf *buf, struct xdr_buf *subbuf,
+ base = 0;
+ } else {
+ base -= buf->tail[0].iov_len;
++ subbuf->tail[0].iov_base = buf->tail[0].iov_base;
+ subbuf->tail[0].iov_len = 0;
+ }
+
+diff --git a/net/sunrpc/xprtrdma/rpc_rdma.c b/net/sunrpc/xprtrdma/rpc_rdma.c
+index 3c627dc685cc..57118e342c8e 100644
+--- a/net/sunrpc/xprtrdma/rpc_rdma.c
++++ b/net/sunrpc/xprtrdma/rpc_rdma.c
+@@ -1349,8 +1349,7 @@ rpcrdma_decode_error(struct rpcrdma_xprt *r_xprt, struct rpcrdma_rep *rep,
+ be32_to_cpup(p), be32_to_cpu(rep->rr_xid));
+ }
+
+- r_xprt->rx_stats.bad_reply_count++;
+- return -EREMOTEIO;
++ return -EIO;
+ }
+
+ /* Perform XID lookup, reconstruction of the RPC reply, and
+@@ -1387,13 +1386,11 @@ out:
+ spin_unlock(&xprt->queue_lock);
+ return;
+
+-/* If the incoming reply terminated a pending RPC, the next
+- * RPC call will post a replacement receive buffer as it is
+- * being marshaled.
+- */
+ out_badheader:
+ trace_xprtrdma_reply_hdr(rep);
+ r_xprt->rx_stats.bad_reply_count++;
++ rqst->rq_task->tk_status = status;
++ status = 0;
+ goto out;
+ }
+
+diff --git a/net/xfrm/xfrm_device.c b/net/xfrm/xfrm_device.c
+index f50d1f97cf8e..626096bd0d29 100644
+--- a/net/xfrm/xfrm_device.c
++++ b/net/xfrm/xfrm_device.c
+@@ -108,7 +108,7 @@ struct sk_buff *validate_xmit_xfrm(struct sk_buff *skb, netdev_features_t featur
+ struct xfrm_offload *xo = xfrm_offload(skb);
+ struct sec_path *sp;
+
+- if (!xo)
++ if (!xo || (xo->flags & XFRM_XMIT))
+ return skb;
+
+ if (!(features & NETIF_F_HW_ESP))
+@@ -129,6 +129,8 @@ struct sk_buff *validate_xmit_xfrm(struct sk_buff *skb, netdev_features_t featur
+ return skb;
+ }
+
++ xo->flags |= XFRM_XMIT;
++
+ if (skb_is_gso(skb)) {
+ struct net_device *dev = skb->dev;
+
+diff --git a/samples/bpf/xdp_monitor_user.c b/samples/bpf/xdp_monitor_user.c
+index dd558cbb2309..ef53b93db573 100644
+--- a/samples/bpf/xdp_monitor_user.c
++++ b/samples/bpf/xdp_monitor_user.c
+@@ -509,11 +509,8 @@ static void *alloc_rec_per_cpu(int record_size)
+ {
+ unsigned int nr_cpus = bpf_num_possible_cpus();
+ void *array;
+- size_t size;
+
+- size = record_size * nr_cpus;
+- array = malloc(size);
+- memset(array, 0, size);
++ array = calloc(nr_cpus, record_size);
+ if (!array) {
+ fprintf(stderr, "Mem alloc error (nr_cpus:%u)\n", nr_cpus);
+ exit(EXIT_FAIL_MEM);
+@@ -528,8 +525,7 @@ static struct stats_record *alloc_stats_record(void)
+ int i;
+
+ /* Alloc main stats_record structure */
+- rec = malloc(sizeof(*rec));
+- memset(rec, 0, sizeof(*rec));
++ rec = calloc(1, sizeof(*rec));
+ if (!rec) {
+ fprintf(stderr, "Mem alloc error\n");
+ exit(EXIT_FAIL_MEM);
+diff --git a/samples/bpf/xdp_redirect_cpu_kern.c b/samples/bpf/xdp_redirect_cpu_kern.c
+index 313a8fe6d125..2baf8db1f7e7 100644
+--- a/samples/bpf/xdp_redirect_cpu_kern.c
++++ b/samples/bpf/xdp_redirect_cpu_kern.c
+@@ -15,7 +15,7 @@
+ #include <bpf/bpf_helpers.h>
+ #include "hash_func01.h"
+
+-#define MAX_CPUS 64 /* WARNING - sync with _user.c */
++#define MAX_CPUS NR_CPUS
+
+ /* Special map type that can XDP_REDIRECT frames to another CPU */
+ struct {
+diff --git a/samples/bpf/xdp_redirect_cpu_user.c b/samples/bpf/xdp_redirect_cpu_user.c
+index 15bdf047a222..e86fed5cdb92 100644
+--- a/samples/bpf/xdp_redirect_cpu_user.c
++++ b/samples/bpf/xdp_redirect_cpu_user.c
+@@ -13,6 +13,7 @@ static const char *__doc__ =
+ #include <unistd.h>
+ #include <locale.h>
+ #include <sys/resource.h>
++#include <sys/sysinfo.h>
+ #include <getopt.h>
+ #include <net/if.h>
+ #include <time.h>
+@@ -24,8 +25,6 @@ static const char *__doc__ =
+ #include <arpa/inet.h>
+ #include <linux/if_link.h>
+
+-#define MAX_CPUS 64 /* WARNING - sync with _kern.c */
+-
+ /* How many xdp_progs are defined in _kern.c */
+ #define MAX_PROG 6
+
+@@ -40,6 +39,7 @@ static char *ifname;
+ static __u32 prog_id;
+
+ static __u32 xdp_flags = XDP_FLAGS_UPDATE_IF_NOEXIST;
++static int n_cpus;
+ static int cpu_map_fd;
+ static int rx_cnt_map_fd;
+ static int redirect_err_cnt_map_fd;
+@@ -170,7 +170,7 @@ struct stats_record {
+ struct record redir_err;
+ struct record kthread;
+ struct record exception;
+- struct record enq[MAX_CPUS];
++ struct record enq[];
+ };
+
+ static bool map_collect_percpu(int fd, __u32 key, struct record *rec)
+@@ -210,11 +210,8 @@ static struct datarec *alloc_record_per_cpu(void)
+ {
+ unsigned int nr_cpus = bpf_num_possible_cpus();
+ struct datarec *array;
+- size_t size;
+
+- size = sizeof(struct datarec) * nr_cpus;
+- array = malloc(size);
+- memset(array, 0, size);
++ array = calloc(nr_cpus, sizeof(struct datarec));
+ if (!array) {
+ fprintf(stderr, "Mem alloc error (nr_cpus:%u)\n", nr_cpus);
+ exit(EXIT_FAIL_MEM);
+@@ -225,19 +222,20 @@ static struct datarec *alloc_record_per_cpu(void)
+ static struct stats_record *alloc_stats_record(void)
+ {
+ struct stats_record *rec;
+- int i;
++ int i, size;
+
+- rec = malloc(sizeof(*rec));
+- memset(rec, 0, sizeof(*rec));
++ size = sizeof(*rec) + n_cpus * sizeof(struct record);
++ rec = malloc(size);
+ if (!rec) {
+ fprintf(stderr, "Mem alloc error\n");
+ exit(EXIT_FAIL_MEM);
+ }
++ memset(rec, 0, size);
+ rec->rx_cnt.cpu = alloc_record_per_cpu();
+ rec->redir_err.cpu = alloc_record_per_cpu();
+ rec->kthread.cpu = alloc_record_per_cpu();
+ rec->exception.cpu = alloc_record_per_cpu();
+- for (i = 0; i < MAX_CPUS; i++)
++ for (i = 0; i < n_cpus; i++)
+ rec->enq[i].cpu = alloc_record_per_cpu();
+
+ return rec;
+@@ -247,7 +245,7 @@ static void free_stats_record(struct stats_record *r)
+ {
+ int i;
+
+- for (i = 0; i < MAX_CPUS; i++)
++ for (i = 0; i < n_cpus; i++)
+ free(r->enq[i].cpu);
+ free(r->exception.cpu);
+ free(r->kthread.cpu);
+@@ -350,7 +348,7 @@ static void stats_print(struct stats_record *stats_rec,
+ }
+
+ /* cpumap enqueue stats */
+- for (to_cpu = 0; to_cpu < MAX_CPUS; to_cpu++) {
++ for (to_cpu = 0; to_cpu < n_cpus; to_cpu++) {
+ char *fmt = "%-15s %3d:%-3d %'-14.0f %'-11.0f %'-10.2f %s\n";
+ char *fm2 = "%-15s %3s:%-3d %'-14.0f %'-11.0f %'-10.2f %s\n";
+ char *errstr = "";
+@@ -475,7 +473,7 @@ static void stats_collect(struct stats_record *rec)
+ map_collect_percpu(fd, 1, &rec->redir_err);
+
+ fd = cpumap_enqueue_cnt_map_fd;
+- for (i = 0; i < MAX_CPUS; i++)
++ for (i = 0; i < n_cpus; i++)
+ map_collect_percpu(fd, i, &rec->enq[i]);
+
+ fd = cpumap_kthread_cnt_map_fd;
+@@ -549,10 +547,10 @@ static int create_cpu_entry(__u32 cpu, __u32 queue_size,
+ */
+ static void mark_cpus_unavailable(void)
+ {
+- __u32 invalid_cpu = MAX_CPUS;
++ __u32 invalid_cpu = n_cpus;
+ int ret, i;
+
+- for (i = 0; i < MAX_CPUS; i++) {
++ for (i = 0; i < n_cpus; i++) {
+ ret = bpf_map_update_elem(cpus_available_map_fd, &i,
+ &invalid_cpu, 0);
+ if (ret) {
+@@ -688,6 +686,8 @@ int main(int argc, char **argv)
+ int prog_fd;
+ __u32 qsize;
+
++ n_cpus = get_nprocs_conf();
++
+ /* Notice: choosing he queue size is very important with the
+ * ixgbe driver, because it's driver page recycling trick is
+ * dependend on pages being returned quickly. The number of
+@@ -757,7 +757,7 @@ int main(int argc, char **argv)
+ case 'c':
+ /* Add multiple CPUs */
+ add_cpu = strtoul(optarg, NULL, 0);
+- if (add_cpu >= MAX_CPUS) {
++ if (add_cpu >= n_cpus) {
+ fprintf(stderr,
+ "--cpu nr too large for cpumap err(%d):%s\n",
+ errno, strerror(errno));
+diff --git a/samples/bpf/xdp_rxq_info_user.c b/samples/bpf/xdp_rxq_info_user.c
+index 4fe47502ebed..caa4e7ffcfc7 100644
+--- a/samples/bpf/xdp_rxq_info_user.c
++++ b/samples/bpf/xdp_rxq_info_user.c
+@@ -198,11 +198,8 @@ static struct datarec *alloc_record_per_cpu(void)
+ {
+ unsigned int nr_cpus = bpf_num_possible_cpus();
+ struct datarec *array;
+- size_t size;
+
+- size = sizeof(struct datarec) * nr_cpus;
+- array = malloc(size);
+- memset(array, 0, size);
++ array = calloc(nr_cpus, sizeof(struct datarec));
+ if (!array) {
+ fprintf(stderr, "Mem alloc error (nr_cpus:%u)\n", nr_cpus);
+ exit(EXIT_FAIL_MEM);
+@@ -214,11 +211,8 @@ static struct record *alloc_record_per_rxq(void)
+ {
+ unsigned int nr_rxqs = bpf_map__def(rx_queue_index_map)->max_entries;
+ struct record *array;
+- size_t size;
+
+- size = sizeof(struct record) * nr_rxqs;
+- array = malloc(size);
+- memset(array, 0, size);
++ array = calloc(nr_rxqs, sizeof(struct record));
+ if (!array) {
+ fprintf(stderr, "Mem alloc error (nr_rxqs:%u)\n", nr_rxqs);
+ exit(EXIT_FAIL_MEM);
+@@ -232,8 +226,7 @@ static struct stats_record *alloc_stats_record(void)
+ struct stats_record *rec;
+ int i;
+
+- rec = malloc(sizeof(*rec));
+- memset(rec, 0, sizeof(*rec));
++ rec = calloc(1, sizeof(struct stats_record));
+ if (!rec) {
+ fprintf(stderr, "Mem alloc error\n");
+ exit(EXIT_FAIL_MEM);
+diff --git a/scripts/Kbuild.include b/scripts/Kbuild.include
+index 6cabf20ce66a..fe427f7fcfb3 100644
+--- a/scripts/Kbuild.include
++++ b/scripts/Kbuild.include
+@@ -86,20 +86,21 @@ cc-cross-prefix = $(firstword $(foreach c, $(1), \
+ $(if $(shell command -v -- $(c)gcc 2>/dev/null), $(c))))
+
+ # output directory for tests below
+-TMPOUT := $(if $(KBUILD_EXTMOD),$(firstword $(KBUILD_EXTMOD))/)
++TMPOUT = $(if $(KBUILD_EXTMOD),$(firstword $(KBUILD_EXTMOD))/).tmp_$$$$
+
+ # try-run
+ # Usage: option = $(call try-run, $(CC)...-o "$$TMP",option-ok,otherwise)
+ # Exit code chooses option. "$$TMP" serves as a temporary file and is
+ # automatically cleaned up.
+ try-run = $(shell set -e; \
+- TMP="$(TMPOUT).$$$$.tmp"; \
+- TMPO="$(TMPOUT).$$$$.o"; \
++ TMP=$(TMPOUT)/tmp; \
++ TMPO=$(TMPOUT)/tmp.o; \
++ mkdir -p $(TMPOUT); \
++ trap "rm -rf $(TMPOUT)" EXIT; \
+ if ($(1)) >/dev/null 2>&1; \
+ then echo "$(2)"; \
+ else echo "$(3)"; \
+- fi; \
+- rm -f "$$TMP" "$$TMPO")
++ fi)
+
+ # as-option
+ # Usage: cflags-y += $(call as-option,-Wa$(comma)-isa=foo,)
+diff --git a/scripts/recordmcount.h b/scripts/recordmcount.h
+index 74eab03e31d4..f9b19524da11 100644
+--- a/scripts/recordmcount.h
++++ b/scripts/recordmcount.h
+@@ -29,6 +29,11 @@
+ #undef has_rel_mcount
+ #undef tot_relsize
+ #undef get_mcountsym
++#undef find_symtab
++#undef get_shnum
++#undef set_shnum
++#undef get_shstrndx
++#undef get_symindex
+ #undef get_sym_str_and_relp
+ #undef do_func
+ #undef Elf_Addr
+@@ -58,6 +63,11 @@
+ # define __has_rel_mcount __has64_rel_mcount
+ # define has_rel_mcount has64_rel_mcount
+ # define tot_relsize tot64_relsize
++# define find_symtab find_symtab64
++# define get_shnum get_shnum64
++# define set_shnum set_shnum64
++# define get_shstrndx get_shstrndx64
++# define get_symindex get_symindex64
+ # define get_sym_str_and_relp get_sym_str_and_relp_64
+ # define do_func do64
+ # define get_mcountsym get_mcountsym_64
+@@ -91,6 +101,11 @@
+ # define __has_rel_mcount __has32_rel_mcount
+ # define has_rel_mcount has32_rel_mcount
+ # define tot_relsize tot32_relsize
++# define find_symtab find_symtab32
++# define get_shnum get_shnum32
++# define set_shnum set_shnum32
++# define get_shstrndx get_shstrndx32
++# define get_symindex get_symindex32
+ # define get_sym_str_and_relp get_sym_str_and_relp_32
+ # define do_func do32
+ # define get_mcountsym get_mcountsym_32
+@@ -173,6 +188,67 @@ static int MIPS_is_fake_mcount(Elf_Rel const *rp)
+ return is_fake;
+ }
+
++static unsigned int get_symindex(Elf_Sym const *sym, Elf32_Word const *symtab,
++ Elf32_Word const *symtab_shndx)
++{
++ unsigned long offset;
++ int index;
++
++ if (sym->st_shndx != SHN_XINDEX)
++ return w2(sym->st_shndx);
++
++ offset = (unsigned long)sym - (unsigned long)symtab;
++ index = offset / sizeof(*sym);
++
++ return w(symtab_shndx[index]);
++}
++
++static unsigned int get_shnum(Elf_Ehdr const *ehdr, Elf_Shdr const *shdr0)
++{
++ if (shdr0 && !ehdr->e_shnum)
++ return w(shdr0->sh_size);
++
++ return w2(ehdr->e_shnum);
++}
++
++static void set_shnum(Elf_Ehdr *ehdr, Elf_Shdr *shdr0, unsigned int new_shnum)
++{
++ if (new_shnum >= SHN_LORESERVE) {
++ ehdr->e_shnum = 0;
++ shdr0->sh_size = w(new_shnum);
++ } else
++ ehdr->e_shnum = w2(new_shnum);
++}
++
++static int get_shstrndx(Elf_Ehdr const *ehdr, Elf_Shdr const *shdr0)
++{
++ if (ehdr->e_shstrndx != SHN_XINDEX)
++ return w2(ehdr->e_shstrndx);
++
++ return w(shdr0->sh_link);
++}
++
++static void find_symtab(Elf_Ehdr *const ehdr, Elf_Shdr const *shdr0,
++ unsigned const nhdr, Elf32_Word **symtab,
++ Elf32_Word **symtab_shndx)
++{
++ Elf_Shdr const *relhdr;
++ unsigned k;
++
++ *symtab = NULL;
++ *symtab_shndx = NULL;
++
++ for (relhdr = shdr0, k = nhdr; k; --k, ++relhdr) {
++ if (relhdr->sh_type == SHT_SYMTAB)
++ *symtab = (void *)ehdr + relhdr->sh_offset;
++ else if (relhdr->sh_type == SHT_SYMTAB_SHNDX)
++ *symtab_shndx = (void *)ehdr + relhdr->sh_offset;
++
++ if (*symtab && *symtab_shndx)
++ break;
++ }
++}
++
+ /* Append the new shstrtab, Elf_Shdr[], __mcount_loc and its relocations. */
+ static int append_func(Elf_Ehdr *const ehdr,
+ Elf_Shdr *const shstr,
+@@ -188,10 +264,12 @@ static int append_func(Elf_Ehdr *const ehdr,
+ char const *mc_name = (sizeof(Elf_Rela) == rel_entsize)
+ ? ".rela__mcount_loc"
+ : ".rel__mcount_loc";
+- unsigned const old_shnum = w2(ehdr->e_shnum);
+ uint_t const old_shoff = _w(ehdr->e_shoff);
+ uint_t const old_shstr_sh_size = _w(shstr->sh_size);
+ uint_t const old_shstr_sh_offset = _w(shstr->sh_offset);
++ Elf_Shdr *const shdr0 = (Elf_Shdr *)(old_shoff + (void *)ehdr);
++ unsigned int const old_shnum = get_shnum(ehdr, shdr0);
++ unsigned int const new_shnum = 2 + old_shnum; /* {.rel,}__mcount_loc */
+ uint_t t = 1 + strlen(mc_name) + _w(shstr->sh_size);
+ uint_t new_e_shoff;
+
+@@ -201,6 +279,8 @@ static int append_func(Elf_Ehdr *const ehdr,
+ t += (_align & -t); /* word-byte align */
+ new_e_shoff = t;
+
++ set_shnum(ehdr, shdr0, new_shnum);
++
+ /* body for new shstrtab */
+ if (ulseek(sb.st_size, SEEK_SET) < 0)
+ return -1;
+@@ -255,7 +335,6 @@ static int append_func(Elf_Ehdr *const ehdr,
+ return -1;
+
+ ehdr->e_shoff = _w(new_e_shoff);
+- ehdr->e_shnum = w2(2 + w2(ehdr->e_shnum)); /* {.rel,}__mcount_loc */
+ if (ulseek(0, SEEK_SET) < 0)
+ return -1;
+ if (uwrite(ehdr, sizeof(*ehdr)) < 0)
+@@ -434,6 +513,8 @@ static int find_secsym_ndx(unsigned const txtndx,
+ uint_t *const recvalp,
+ unsigned int *sym_index,
+ Elf_Shdr const *const symhdr,
++ Elf32_Word const *symtab,
++ Elf32_Word const *symtab_shndx,
+ Elf_Ehdr const *const ehdr)
+ {
+ Elf_Sym const *const sym0 = (Elf_Sym const *)(_w(symhdr->sh_offset)
+@@ -445,7 +526,7 @@ static int find_secsym_ndx(unsigned const txtndx,
+ for (symp = sym0, t = nsym; t; --t, ++symp) {
+ unsigned int const st_bind = ELF_ST_BIND(symp->st_info);
+
+- if (txtndx == w2(symp->st_shndx)
++ if (txtndx == get_symindex(symp, symtab, symtab_shndx)
+ /* avoid STB_WEAK */
+ && (STB_LOCAL == st_bind || STB_GLOBAL == st_bind)) {
+ /* function symbols on ARM have quirks, avoid them */
+@@ -516,21 +597,23 @@ static unsigned tot_relsize(Elf_Shdr const *const shdr0,
+ return totrelsz;
+ }
+
+-
+ /* Overall supervision for Elf32 ET_REL file. */
+ static int do_func(Elf_Ehdr *const ehdr, char const *const fname,
+ unsigned const reltype)
+ {
+ Elf_Shdr *const shdr0 = (Elf_Shdr *)(_w(ehdr->e_shoff)
+ + (void *)ehdr);
+- unsigned const nhdr = w2(ehdr->e_shnum);
+- Elf_Shdr *const shstr = &shdr0[w2(ehdr->e_shstrndx)];
++ unsigned const nhdr = get_shnum(ehdr, shdr0);
++ Elf_Shdr *const shstr = &shdr0[get_shstrndx(ehdr, shdr0)];
+ char const *const shstrtab = (char const *)(_w(shstr->sh_offset)
+ + (void *)ehdr);
+
+ Elf_Shdr const *relhdr;
+ unsigned k;
+
++ Elf32_Word *symtab;
++ Elf32_Word *symtab_shndx;
++
+ /* Upper bound on space: assume all relevant relocs are for mcount. */
+ unsigned totrelsz;
+
+@@ -561,6 +644,8 @@ static int do_func(Elf_Ehdr *const ehdr, char const *const fname,
+ return -1;
+ }
+
++ find_symtab(ehdr, shdr0, nhdr, &symtab, &symtab_shndx);
++
+ for (relhdr = shdr0, k = nhdr; k; --k, ++relhdr) {
+ char const *const txtname = has_rel_mcount(relhdr, shdr0,
+ shstrtab, fname);
+@@ -577,6 +662,7 @@ static int do_func(Elf_Ehdr *const ehdr, char const *const fname,
+ result = find_secsym_ndx(w(relhdr->sh_info), txtname,
+ &recval, &recsym,
+ &shdr0[symsec_sh_link],
++ symtab, symtab_shndx,
+ ehdr);
+ if (result)
+ goto out;
+diff --git a/sound/pci/hda/patch_hdmi.c b/sound/pci/hda/patch_hdmi.c
+index 93760a3564cf..137d655fed8f 100644
+--- a/sound/pci/hda/patch_hdmi.c
++++ b/sound/pci/hda/patch_hdmi.c
+@@ -4145,6 +4145,11 @@ HDA_CODEC_ENTRY(0x10de0095, "GPU 95 HDMI/DP", patch_nvhdmi),
+ HDA_CODEC_ENTRY(0x10de0097, "GPU 97 HDMI/DP", patch_nvhdmi),
+ HDA_CODEC_ENTRY(0x10de0098, "GPU 98 HDMI/DP", patch_nvhdmi),
+ HDA_CODEC_ENTRY(0x10de0099, "GPU 99 HDMI/DP", patch_nvhdmi),
++HDA_CODEC_ENTRY(0x10de009a, "GPU 9a HDMI/DP", patch_nvhdmi),
++HDA_CODEC_ENTRY(0x10de009d, "GPU 9d HDMI/DP", patch_nvhdmi),
++HDA_CODEC_ENTRY(0x10de009e, "GPU 9e HDMI/DP", patch_nvhdmi),
++HDA_CODEC_ENTRY(0x10de009f, "GPU 9f HDMI/DP", patch_nvhdmi),
++HDA_CODEC_ENTRY(0x10de00a0, "GPU a0 HDMI/DP", patch_nvhdmi),
+ HDA_CODEC_ENTRY(0x10de8001, "MCP73 HDMI", patch_nvhdmi_2ch),
+ HDA_CODEC_ENTRY(0x10de8067, "MCP67/68 HDMI", patch_nvhdmi_2ch),
+ HDA_CODEC_ENTRY(0x11069f80, "VX900 HDMI/DP", patch_via_hdmi),
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index e057ecb5a904..cb689878ba20 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -2460,6 +2460,7 @@ static const struct snd_pci_quirk alc882_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x1458, 0xa0b8, "Gigabyte AZ370-Gaming", ALC1220_FIXUP_GB_DUAL_CODECS),
+ SND_PCI_QUIRK(0x1458, 0xa0cd, "Gigabyte X570 Aorus Master", ALC1220_FIXUP_CLEVO_P950),
+ SND_PCI_QUIRK(0x1458, 0xa0ce, "Gigabyte X570 Aorus Xtreme", ALC1220_FIXUP_CLEVO_P950),
++ SND_PCI_QUIRK(0x1462, 0x11f7, "MSI-GE63", ALC1220_FIXUP_CLEVO_P950),
+ SND_PCI_QUIRK(0x1462, 0x1228, "MSI-GP63", ALC1220_FIXUP_CLEVO_P950),
+ SND_PCI_QUIRK(0x1462, 0x1275, "MSI-GL63", ALC1220_FIXUP_CLEVO_P950),
+ SND_PCI_QUIRK(0x1462, 0x1276, "MSI-GL73", ALC1220_FIXUP_CLEVO_P950),
+@@ -7435,6 +7436,8 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x103c, 0x83b9, "HP Spectre x360", ALC269_FIXUP_HP_MUTE_LED_MIC3),
+ SND_PCI_QUIRK(0x103c, 0x8497, "HP Envy x360", ALC269_FIXUP_HP_MUTE_LED_MIC3),
+ SND_PCI_QUIRK(0x103c, 0x84e7, "HP Pavilion 15", ALC269_FIXUP_HP_MUTE_LED_MIC3),
++ SND_PCI_QUIRK(0x103c, 0x869d, "HP", ALC236_FIXUP_HP_MUTE_LED),
++ SND_PCI_QUIRK(0x103c, 0x8729, "HP", ALC285_FIXUP_HP_GPIO_LED),
+ SND_PCI_QUIRK(0x103c, 0x8736, "HP", ALC285_FIXUP_HP_GPIO_LED),
+ SND_PCI_QUIRK(0x103c, 0x877a, "HP", ALC285_FIXUP_HP_MUTE_LED),
+ SND_PCI_QUIRK(0x103c, 0x877d, "HP", ALC236_FIXUP_HP_MUTE_LED),
+diff --git a/sound/soc/fsl/fsl_ssi.c b/sound/soc/fsl/fsl_ssi.c
+index bad89b0d129e..1a2fa7f18142 100644
+--- a/sound/soc/fsl/fsl_ssi.c
++++ b/sound/soc/fsl/fsl_ssi.c
+@@ -678,8 +678,9 @@ static int fsl_ssi_set_bclk(struct snd_pcm_substream *substream,
+ struct regmap *regs = ssi->regs;
+ u32 pm = 999, div2, psr, stccr, mask, afreq, factor, i;
+ unsigned long clkrate, baudrate, tmprate;
+- unsigned int slots = params_channels(hw_params);
+- unsigned int slot_width = 32;
++ unsigned int channels = params_channels(hw_params);
++ unsigned int slot_width = params_width(hw_params);
++ unsigned int slots = 2;
+ u64 sub, savesub = 100000;
+ unsigned int freq;
+ bool baudclk_is_used;
+@@ -688,10 +689,14 @@ static int fsl_ssi_set_bclk(struct snd_pcm_substream *substream,
+ /* Override slots and slot_width if being specifically set... */
+ if (ssi->slots)
+ slots = ssi->slots;
+- /* ...but keep 32 bits if slots is 2 -- I2S Master mode */
+- if (ssi->slot_width && slots != 2)
++ if (ssi->slot_width)
+ slot_width = ssi->slot_width;
+
++ /* ...but force 32 bits for stereo audio using I2S Master Mode */
++ if (channels == 2 &&
++ (ssi->i2s_net & SSI_SCR_I2S_MODE_MASK) == SSI_SCR_I2S_MODE_MASTER)
++ slot_width = 32;
++
+ /* Generate bit clock based on the slot number and slot width */
+ freq = slots * slot_width * params_rate(hw_params);
+
+diff --git a/sound/soc/qcom/common.c b/sound/soc/qcom/common.c
+index 6c20bdd850f3..8ada4ecba847 100644
+--- a/sound/soc/qcom/common.c
++++ b/sound/soc/qcom/common.c
+@@ -4,6 +4,7 @@
+
+ #include <linux/module.h>
+ #include "common.h"
++#include "qdsp6/q6afe.h"
+
+ int qcom_snd_parse_of(struct snd_soc_card *card)
+ {
+@@ -101,6 +102,15 @@ int qcom_snd_parse_of(struct snd_soc_card *card)
+ }
+ link->no_pcm = 1;
+ link->ignore_pmdown_time = 1;
++
++ if (q6afe_is_rx_port(link->id)) {
++ link->dpcm_playback = 1;
++ link->dpcm_capture = 0;
++ } else {
++ link->dpcm_playback = 0;
++ link->dpcm_capture = 1;
++ }
++
+ } else {
+ dlc = devm_kzalloc(dev, sizeof(*dlc), GFP_KERNEL);
+ if (!dlc)
+@@ -113,12 +123,12 @@ int qcom_snd_parse_of(struct snd_soc_card *card)
+ link->codecs->dai_name = "snd-soc-dummy-dai";
+ link->codecs->name = "snd-soc-dummy";
+ link->dynamic = 1;
++ link->dpcm_playback = 1;
++ link->dpcm_capture = 1;
+ }
+
+ link->ignore_suspend = 1;
+ link->nonatomic = 1;
+- link->dpcm_playback = 1;
+- link->dpcm_capture = 1;
+ link->stream_name = link->name;
+ link++;
+
+diff --git a/sound/soc/qcom/qdsp6/q6afe.c b/sound/soc/qcom/qdsp6/q6afe.c
+index e0945f7a58c8..0ce4eb60f984 100644
+--- a/sound/soc/qcom/qdsp6/q6afe.c
++++ b/sound/soc/qcom/qdsp6/q6afe.c
+@@ -800,6 +800,14 @@ int q6afe_get_port_id(int index)
+ }
+ EXPORT_SYMBOL_GPL(q6afe_get_port_id);
+
++int q6afe_is_rx_port(int index)
++{
++ if (index < 0 || index >= AFE_PORT_MAX)
++ return -EINVAL;
++
++ return port_maps[index].is_rx;
++}
++EXPORT_SYMBOL_GPL(q6afe_is_rx_port);
+ static int afe_apr_send_pkt(struct q6afe *afe, struct apr_pkt *pkt,
+ struct q6afe_port *port)
+ {
+diff --git a/sound/soc/qcom/qdsp6/q6afe.h b/sound/soc/qcom/qdsp6/q6afe.h
+index c7ed5422baff..1a0f80a14afe 100644
+--- a/sound/soc/qcom/qdsp6/q6afe.h
++++ b/sound/soc/qcom/qdsp6/q6afe.h
+@@ -198,6 +198,7 @@ int q6afe_port_start(struct q6afe_port *port);
+ int q6afe_port_stop(struct q6afe_port *port);
+ void q6afe_port_put(struct q6afe_port *port);
+ int q6afe_get_port_id(int index);
++int q6afe_is_rx_port(int index);
+ void q6afe_hdmi_port_prepare(struct q6afe_port *port,
+ struct q6afe_hdmi_cfg *cfg);
+ void q6afe_slim_port_prepare(struct q6afe_port *port,
+diff --git a/sound/soc/qcom/qdsp6/q6asm.c b/sound/soc/qcom/qdsp6/q6asm.c
+index 0e0e8f7a460a..ae4b2cabdf2d 100644
+--- a/sound/soc/qcom/qdsp6/q6asm.c
++++ b/sound/soc/qcom/qdsp6/q6asm.c
+@@ -25,6 +25,7 @@
+ #define ASM_STREAM_CMD_FLUSH 0x00010BCE
+ #define ASM_SESSION_CMD_PAUSE 0x00010BD3
+ #define ASM_DATA_CMD_EOS 0x00010BDB
++#define ASM_DATA_EVENT_RENDERED_EOS 0x00010C1C
+ #define ASM_NULL_POPP_TOPOLOGY 0x00010C68
+ #define ASM_STREAM_CMD_FLUSH_READBUFS 0x00010C09
+ #define ASM_STREAM_CMD_SET_ENCDEC_PARAM 0x00010C10
+@@ -622,9 +623,6 @@ static int32_t q6asm_stream_callback(struct apr_device *adev,
+ case ASM_SESSION_CMD_SUSPEND:
+ client_event = ASM_CLIENT_EVENT_CMD_SUSPEND_DONE;
+ break;
+- case ASM_DATA_CMD_EOS:
+- client_event = ASM_CLIENT_EVENT_CMD_EOS_DONE;
+- break;
+ case ASM_STREAM_CMD_FLUSH:
+ client_event = ASM_CLIENT_EVENT_CMD_FLUSH_DONE;
+ break;
+@@ -727,6 +725,9 @@ static int32_t q6asm_stream_callback(struct apr_device *adev,
+ spin_unlock_irqrestore(&ac->lock, flags);
+ }
+
++ break;
++ case ASM_DATA_EVENT_RENDERED_EOS:
++ client_event = ASM_CLIENT_EVENT_CMD_EOS_DONE;
+ break;
+ }
+
+diff --git a/sound/soc/rockchip/rockchip_pdm.c b/sound/soc/rockchip/rockchip_pdm.c
+index 7cd42fcfcf38..1707414cfa92 100644
+--- a/sound/soc/rockchip/rockchip_pdm.c
++++ b/sound/soc/rockchip/rockchip_pdm.c
+@@ -590,8 +590,10 @@ static int rockchip_pdm_resume(struct device *dev)
+ int ret;
+
+ ret = pm_runtime_get_sync(dev);
+- if (ret < 0)
++ if (ret < 0) {
++ pm_runtime_put(dev);
+ return ret;
++ }
+
+ ret = regcache_sync(pdm->regmap);
+
+diff --git a/sound/soc/soc-pcm.c b/sound/soc/soc-pcm.c
+index 39ce61c5b874..fde097a7aad3 100644
+--- a/sound/soc/soc-pcm.c
++++ b/sound/soc/soc-pcm.c
+@@ -2749,15 +2749,15 @@ static int soc_dpcm_fe_runtime_update(struct snd_soc_pcm_runtime *fe, int new)
+ int count, paths;
+ int ret;
+
++ if (!fe->dai_link->dynamic)
++ return 0;
++
+ if (fe->num_cpus > 1) {
+ dev_err(fe->dev,
+ "%s doesn't support Multi CPU yet\n", __func__);
+ return -EINVAL;
+ }
+
+- if (!fe->dai_link->dynamic)
+- return 0;
+-
+ /* only check active links */
+ if (!fe->cpu_dai->active)
+ return 0;
+diff --git a/sound/usb/format.c b/sound/usb/format.c
+index 5ffb457cc88c..1b28d01d1f4c 100644
+--- a/sound/usb/format.c
++++ b/sound/usb/format.c
+@@ -394,8 +394,9 @@ skip_rate:
+ return nr_rates;
+ }
+
+-/* Line6 Helix series don't support the UAC2_CS_RANGE usb function
+- * call. Return a static table of known clock rates.
++/* Line6 Helix series and the Rode Rodecaster Pro don't support the
++ * UAC2_CS_RANGE usb function call. Return a static table of known
++ * clock rates.
+ */
+ static int line6_parse_audio_format_rates_quirk(struct snd_usb_audio *chip,
+ struct audioformat *fp)
+@@ -408,6 +409,7 @@ static int line6_parse_audio_format_rates_quirk(struct snd_usb_audio *chip,
+ case USB_ID(0x0e41, 0x4248): /* Line6 Helix >= fw 2.82 */
+ case USB_ID(0x0e41, 0x4249): /* Line6 Helix Rack >= fw 2.82 */
+ case USB_ID(0x0e41, 0x424a): /* Line6 Helix LT >= fw 2.82 */
++ case USB_ID(0x19f7, 0x0011): /* Rode Rodecaster Pro */
+ return set_fixed_rate(fp, 48000, SNDRV_PCM_RATE_48000);
+ }
+
+diff --git a/sound/usb/mixer.c b/sound/usb/mixer.c
+index 15769f266790..eab0fd4fd7c3 100644
+--- a/sound/usb/mixer.c
++++ b/sound/usb/mixer.c
+@@ -581,8 +581,9 @@ static int check_matrix_bitmap(unsigned char *bmap,
+ * if failed, give up and free the control instance.
+ */
+
+-int snd_usb_mixer_add_control(struct usb_mixer_elem_list *list,
+- struct snd_kcontrol *kctl)
++int snd_usb_mixer_add_list(struct usb_mixer_elem_list *list,
++ struct snd_kcontrol *kctl,
++ bool is_std_info)
+ {
+ struct usb_mixer_interface *mixer = list->mixer;
+ int err;
+@@ -596,6 +597,7 @@ int snd_usb_mixer_add_control(struct usb_mixer_elem_list *list,
+ return err;
+ }
+ list->kctl = kctl;
++ list->is_std_info = is_std_info;
+ list->next_id_elem = mixer->id_elems[list->id];
+ mixer->id_elems[list->id] = list;
+ return 0;
+@@ -3234,8 +3236,11 @@ void snd_usb_mixer_notify_id(struct usb_mixer_interface *mixer, int unitid)
+ unitid = delegate_notify(mixer, unitid, NULL, NULL);
+
+ for_each_mixer_elem(list, mixer, unitid) {
+- struct usb_mixer_elem_info *info =
+- mixer_elem_list_to_info(list);
++ struct usb_mixer_elem_info *info;
++
++ if (!list->is_std_info)
++ continue;
++ info = mixer_elem_list_to_info(list);
+ /* invalidate cache, so the value is read from the device */
+ info->cached = 0;
+ snd_ctl_notify(mixer->chip->card, SNDRV_CTL_EVENT_MASK_VALUE,
+@@ -3315,6 +3320,8 @@ static void snd_usb_mixer_interrupt_v2(struct usb_mixer_interface *mixer,
+
+ if (!list->kctl)
+ continue;
++ if (!list->is_std_info)
++ continue;
+
+ info = mixer_elem_list_to_info(list);
+ if (count > 1 && info->control != control)
+diff --git a/sound/usb/mixer.h b/sound/usb/mixer.h
+index 41ec9dc4139b..c29e27ac43a7 100644
+--- a/sound/usb/mixer.h
++++ b/sound/usb/mixer.h
+@@ -66,6 +66,7 @@ struct usb_mixer_elem_list {
+ struct usb_mixer_elem_list *next_id_elem; /* list of controls with same id */
+ struct snd_kcontrol *kctl;
+ unsigned int id;
++ bool is_std_info;
+ usb_mixer_elem_dump_func_t dump;
+ usb_mixer_elem_resume_func_t resume;
+ };
+@@ -103,8 +104,12 @@ void snd_usb_mixer_notify_id(struct usb_mixer_interface *mixer, int unitid);
+ int snd_usb_mixer_set_ctl_value(struct usb_mixer_elem_info *cval,
+ int request, int validx, int value_set);
+
+-int snd_usb_mixer_add_control(struct usb_mixer_elem_list *list,
+- struct snd_kcontrol *kctl);
++int snd_usb_mixer_add_list(struct usb_mixer_elem_list *list,
++ struct snd_kcontrol *kctl,
++ bool is_std_info);
++
++#define snd_usb_mixer_add_control(list, kctl) \
++ snd_usb_mixer_add_list(list, kctl, true)
+
+ void snd_usb_mixer_elem_init_std(struct usb_mixer_elem_list *list,
+ struct usb_mixer_interface *mixer,
+diff --git a/sound/usb/mixer_quirks.c b/sound/usb/mixer_quirks.c
+index aad2683ff793..260607144f56 100644
+--- a/sound/usb/mixer_quirks.c
++++ b/sound/usb/mixer_quirks.c
+@@ -158,7 +158,8 @@ static int add_single_ctl_with_resume(struct usb_mixer_interface *mixer,
+ return -ENOMEM;
+ }
+ kctl->private_free = snd_usb_mixer_elem_free;
+- return snd_usb_mixer_add_control(list, kctl);
++ /* don't use snd_usb_mixer_add_control() here, this is a special list element */
++ return snd_usb_mixer_add_list(list, kctl, false);
+ }
+
+ /*
+diff --git a/sound/usb/pcm.c b/sound/usb/pcm.c
+index d61c2f1095b5..39aec83f8aca 100644
+--- a/sound/usb/pcm.c
++++ b/sound/usb/pcm.c
+@@ -367,6 +367,7 @@ static int set_sync_ep_implicit_fb_quirk(struct snd_usb_substream *subs,
+ ifnum = 0;
+ goto add_sync_ep_from_ifnum;
+ case USB_ID(0x07fd, 0x0008): /* MOTU M Series */
++ case USB_ID(0x31e9, 0x0002): /* Solid State Logic SSL2+ */
+ ep = 0x81;
+ ifnum = 2;
+ goto add_sync_ep_from_ifnum;
+@@ -1782,6 +1783,7 @@ static int snd_usb_substream_capture_trigger(struct snd_pcm_substream *substream
+ return 0;
+ case SNDRV_PCM_TRIGGER_STOP:
+ stop_endpoints(subs);
++ subs->data_endpoint->retire_data_urb = NULL;
+ subs->running = 0;
+ return 0;
+ case SNDRV_PCM_TRIGGER_PAUSE_PUSH:
+diff --git a/sound/usb/quirks.c b/sound/usb/quirks.c
+index d8a765be5dfe..d7d900ebcf37 100644
+--- a/sound/usb/quirks.c
++++ b/sound/usb/quirks.c
+@@ -1505,6 +1505,7 @@ bool snd_usb_get_sample_rate_quirk(struct snd_usb_audio *chip)
+ static bool is_itf_usb_dsd_dac(unsigned int id)
+ {
+ switch (id) {
++ case USB_ID(0x154e, 0x1002): /* Denon DCD-1500RE */
+ case USB_ID(0x154e, 0x1003): /* Denon DA-300USB */
+ case USB_ID(0x154e, 0x3005): /* Marantz HD-DAC1 */
+ case USB_ID(0x154e, 0x3006): /* Marantz SA-14S1 */
+@@ -1646,6 +1647,14 @@ void snd_usb_ctl_msg_quirk(struct usb_device *dev, unsigned int pipe,
+ chip->usb_id == USB_ID(0x0951, 0x16ad)) &&
+ (requesttype & USB_TYPE_MASK) == USB_TYPE_CLASS)
+ usleep_range(1000, 2000);
++
++ /*
++ * Samsung USBC Headset (AKG) need a tiny delay after each
++ * class compliant request. (Model number: AAM625R or AAM627R)
++ */
++ if (chip->usb_id == USB_ID(0x04e8, 0xa051) &&
++ (requesttype & USB_TYPE_MASK) == USB_TYPE_CLASS)
++ usleep_range(5000, 6000);
+ }
+
+ /*
+@@ -1843,6 +1852,7 @@ struct registration_quirk {
+ static const struct registration_quirk registration_quirks[] = {
+ REG_QUIRK_ENTRY(0x0951, 0x16d8, 2), /* Kingston HyperX AMP */
+ REG_QUIRK_ENTRY(0x0951, 0x16ed, 2), /* Kingston HyperX Cloud Alpha S */
++ REG_QUIRK_ENTRY(0x0951, 0x16ea, 2), /* Kingston HyperX Cloud Flight S */
+ { 0 } /* terminator */
+ };
+
+diff --git a/tools/testing/selftests/bpf/progs/bpf_cubic.c b/tools/testing/selftests/bpf/progs/bpf_cubic.c
+index 7897c8f4d363..ef574087f1e1 100644
+--- a/tools/testing/selftests/bpf/progs/bpf_cubic.c
++++ b/tools/testing/selftests/bpf/progs/bpf_cubic.c
+@@ -480,10 +480,9 @@ static __always_inline void hystart_update(struct sock *sk, __u32 delay)
+
+ if (hystart_detect & HYSTART_DELAY) {
+ /* obtain the minimum delay of more than sampling packets */
++ if (ca->curr_rtt > delay)
++ ca->curr_rtt = delay;
+ if (ca->sample_cnt < HYSTART_MIN_SAMPLES) {
+- if (ca->curr_rtt > delay)
+- ca->curr_rtt = delay;
+-
+ ca->sample_cnt++;
+ } else {
+ if (ca->curr_rtt > ca->delay_min +
+diff --git a/tools/testing/selftests/net/so_txtime.c b/tools/testing/selftests/net/so_txtime.c
+index 383bac05ac32..ceaad78e9667 100644
+--- a/tools/testing/selftests/net/so_txtime.c
++++ b/tools/testing/selftests/net/so_txtime.c
+@@ -15,8 +15,9 @@
+ #include <inttypes.h>
+ #include <linux/net_tstamp.h>
+ #include <linux/errqueue.h>
++#include <linux/if_ether.h>
+ #include <linux/ipv6.h>
+-#include <linux/tcp.h>
++#include <linux/udp.h>
+ #include <stdbool.h>
+ #include <stdlib.h>
+ #include <stdio.h>
+@@ -140,8 +141,8 @@ static void do_recv_errqueue_timeout(int fdt)
+ {
+ char control[CMSG_SPACE(sizeof(struct sock_extended_err)) +
+ CMSG_SPACE(sizeof(struct sockaddr_in6))] = {0};
+- char data[sizeof(struct ipv6hdr) +
+- sizeof(struct tcphdr) + 1];
++ char data[sizeof(struct ethhdr) + sizeof(struct ipv6hdr) +
++ sizeof(struct udphdr) + 1];
+ struct sock_extended_err *err;
+ struct msghdr msg = {0};
+ struct iovec iov = {0};
+@@ -159,6 +160,8 @@ static void do_recv_errqueue_timeout(int fdt)
+ msg.msg_controllen = sizeof(control);
+
+ while (1) {
++ const char *reason;
++
+ ret = recvmsg(fdt, &msg, MSG_ERRQUEUE);
+ if (ret == -1 && errno == EAGAIN)
+ break;
+@@ -176,14 +179,30 @@ static void do_recv_errqueue_timeout(int fdt)
+ err = (struct sock_extended_err *)CMSG_DATA(cm);
+ if (err->ee_origin != SO_EE_ORIGIN_TXTIME)
+ error(1, 0, "errqueue: origin 0x%x\n", err->ee_origin);
+- if (err->ee_code != ECANCELED)
+- error(1, 0, "errqueue: code 0x%x\n", err->ee_code);
++
++ switch (err->ee_errno) {
++ case ECANCELED:
++ if (err->ee_code != SO_EE_CODE_TXTIME_MISSED)
++ error(1, 0, "errqueue: unknown ECANCELED %u\n",
++ err->ee_code);
++ reason = "missed txtime";
++ break;
++ case EINVAL:
++ if (err->ee_code != SO_EE_CODE_TXTIME_INVALID_PARAM)
++ error(1, 0, "errqueue: unknown EINVAL %u\n",
++ err->ee_code);
++ reason = "invalid txtime";
++ break;
++ default:
++ error(1, 0, "errqueue: errno %u code %u\n",
++ err->ee_errno, err->ee_code);
++ };
+
+ tstamp = ((int64_t) err->ee_data) << 32 | err->ee_info;
+ tstamp -= (int64_t) glob_tstart;
+ tstamp /= 1000 * 1000;
+- fprintf(stderr, "send: pkt %c at %" PRId64 "ms dropped\n",
+- data[ret - 1], tstamp);
++ fprintf(stderr, "send: pkt %c at %" PRId64 "ms dropped: %s\n",
++ data[ret - 1], tstamp, reason);
+
+ msg.msg_flags = 0;
+ msg.msg_controllen = sizeof(control);
+diff --git a/tools/testing/selftests/powerpc/pmu/ebb/Makefile b/tools/testing/selftests/powerpc/pmu/ebb/Makefile
+index ca35dd8848b0..af3df79d8163 100644
+--- a/tools/testing/selftests/powerpc/pmu/ebb/Makefile
++++ b/tools/testing/selftests/powerpc/pmu/ebb/Makefile
+@@ -7,7 +7,7 @@ noarg:
+ # The EBB handler is 64-bit code and everything links against it
+ CFLAGS += -m64
+
+-TMPOUT = $(OUTPUT)/
++TMPOUT = $(OUTPUT)/TMPDIR/
+ # Toolchains may build PIE by default which breaks the assembly
+ no-pie-option := $(call try-run, echo 'int main() { return 0; }' | \
+ $(CC) -Werror $(KBUILD_CPPFLAGS) $(CC_OPTION_CFLAGS) -no-pie -x c - -o "$$TMP", -no-pie)
+diff --git a/tools/testing/selftests/wireguard/netns.sh b/tools/testing/selftests/wireguard/netns.sh
+index 17a1f53ceba0..d77f4829f1e0 100755
+--- a/tools/testing/selftests/wireguard/netns.sh
++++ b/tools/testing/selftests/wireguard/netns.sh
+@@ -587,9 +587,20 @@ ip0 link set wg0 up
+ kill $ncat_pid
+ ip0 link del wg0
+
++# Ensure there aren't circular reference loops
++ip1 link add wg1 type wireguard
++ip2 link add wg2 type wireguard
++ip1 link set wg1 netns $netns2
++ip2 link set wg2 netns $netns1
++pp ip netns delete $netns1
++pp ip netns delete $netns2
++pp ip netns add $netns1
++pp ip netns add $netns2
++
++sleep 2 # Wait for cleanup and grace periods
+ declare -A objects
+ while read -t 0.1 -r line 2>/dev/null || [[ $? -ne 142 ]]; do
+- [[ $line =~ .*(wg[0-9]+:\ [A-Z][a-z]+\ [0-9]+)\ .*(created|destroyed).* ]] || continue
++ [[ $line =~ .*(wg[0-9]+:\ [A-Z][a-z]+\ ?[0-9]*)\ .*(created|destroyed).* ]] || continue
+ objects["${BASH_REMATCH[1]}"]+="${BASH_REMATCH[2]}"
+ done < /dev/kmsg
+ alldeleted=1
* [gentoo-commits] proj/linux-patches:5.7 commit in: /
@ 2020-07-09 12:15 Mike Pagano
0 siblings, 0 replies; 25+ messages in thread
From: Mike Pagano @ 2020-07-09 12:15 UTC (permalink / raw
To: gentoo-commits
commit: 1d06c709e96c894be7ce445cad98142c8c24befd
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Jul 9 12:15:44 2020 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Jul 9 12:15:44 2020 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=1d06c709
Linux patch 5.7.8
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1007_linux-5.7.8.patch | 5012 ++++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 5016 insertions(+)
diff --git a/0000_README b/0000_README
index 4fdfe73..46bac07 100644
--- a/0000_README
+++ b/0000_README
@@ -71,6 +71,10 @@ Patch: 1006_linux-5.7.7.patch
From: http://www.kernel.org
Desc: Linux 5.7.7
+Patch: 1007_linux-5.7.8.patch
+From: http://www.kernel.org
+Desc: Linux 5.7.8
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1007_linux-5.7.8.patch b/1007_linux-5.7.8.patch
new file mode 100644
index 0000000..1e0b33e
--- /dev/null
+++ b/1007_linux-5.7.8.patch
@@ -0,0 +1,5012 @@
+diff --git a/Makefile b/Makefile
+index 5a5e329d9241..6163d607ca72 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 7
+-SUBLEVEL = 7
++SUBLEVEL = 8
+ EXTRAVERSION =
+ NAME = Kleptomaniac Octopus
+
+diff --git a/arch/mips/kernel/traps.c b/arch/mips/kernel/traps.c
+index 31968cbd6464..9f252d132b52 100644
+--- a/arch/mips/kernel/traps.c
++++ b/arch/mips/kernel/traps.c
+@@ -2121,6 +2121,7 @@ static void configure_status(void)
+
+ change_c0_status(ST0_CU|ST0_MX|ST0_RE|ST0_FR|ST0_BEV|ST0_TS|ST0_KX|ST0_SX|ST0_UX,
+ status_set);
++ back_to_back_c0_hazard();
+ }
+
+ unsigned int hwrena;
+diff --git a/arch/mips/lantiq/xway/sysctrl.c b/arch/mips/lantiq/xway/sysctrl.c
+index aa37545ebe8f..b10342018d19 100644
+--- a/arch/mips/lantiq/xway/sysctrl.c
++++ b/arch/mips/lantiq/xway/sysctrl.c
+@@ -514,8 +514,8 @@ void __init ltq_soc_init(void)
+ clkdev_add_pmu("1e10b308.eth", NULL, 0, 0, PMU_SWITCH |
+ PMU_PPE_DP | PMU_PPE_TC);
+ clkdev_add_pmu("1da00000.usif", "NULL", 1, 0, PMU_USIF);
+- clkdev_add_pmu("1e108000.gswip", "gphy0", 0, 0, PMU_GPHY);
+- clkdev_add_pmu("1e108000.gswip", "gphy1", 0, 0, PMU_GPHY);
++ clkdev_add_pmu("1e108000.switch", "gphy0", 0, 0, PMU_GPHY);
++ clkdev_add_pmu("1e108000.switch", "gphy1", 0, 0, PMU_GPHY);
+ clkdev_add_pmu("1e103100.deu", NULL, 1, 0, PMU_DEU);
+ clkdev_add_pmu("1e116000.mei", "afe", 1, 2, PMU_ANALOG_DSL_AFE);
+ clkdev_add_pmu("1e116000.mei", "dfe", 1, 0, PMU_DFE);
+@@ -538,8 +538,8 @@ void __init ltq_soc_init(void)
+ PMU_SWITCH | PMU_PPE_DPLUS | PMU_PPE_DPLUM |
+ PMU_PPE_EMA | PMU_PPE_TC | PMU_PPE_SLL01 |
+ PMU_PPE_QSB | PMU_PPE_TOP);
+- clkdev_add_pmu("1e108000.gswip", "gphy0", 0, 0, PMU_GPHY);
+- clkdev_add_pmu("1e108000.gswip", "gphy1", 0, 0, PMU_GPHY);
++ clkdev_add_pmu("1e108000.switch", "gphy0", 0, 0, PMU_GPHY);
++ clkdev_add_pmu("1e108000.switch", "gphy1", 0, 0, PMU_GPHY);
+ clkdev_add_pmu("1e103000.sdio", NULL, 1, 0, PMU_SDIO);
+ clkdev_add_pmu("1e103100.deu", NULL, 1, 0, PMU_DEU);
+ clkdev_add_pmu("1e116000.mei", "dfe", 1, 0, PMU_DFE);
+diff --git a/arch/powerpc/include/asm/kvm_book3s_64.h b/arch/powerpc/include/asm/kvm_book3s_64.h
+index 04b2b927bb5a..0431db7b82af 100644
+--- a/arch/powerpc/include/asm/kvm_book3s_64.h
++++ b/arch/powerpc/include/asm/kvm_book3s_64.h
+@@ -14,6 +14,7 @@
+ #include <asm/book3s/64/mmu-hash.h>
+ #include <asm/cpu_has_feature.h>
+ #include <asm/ppc-opcode.h>
++#include <asm/pte-walk.h>
+
+ #ifdef CONFIG_PPC_PSERIES
+ static inline bool kvmhv_on_pseries(void)
+@@ -634,6 +635,28 @@ extern void kvmhv_remove_nest_rmap_range(struct kvm *kvm,
+ unsigned long gpa, unsigned long hpa,
+ unsigned long nbytes);
+
++static inline pte_t *
++find_kvm_secondary_pte_unlocked(struct kvm *kvm, unsigned long ea,
++ unsigned *hshift)
++{
++ pte_t *pte;
++
++ pte = __find_linux_pte(kvm->arch.pgtable, ea, NULL, hshift);
++ return pte;
++}
++
++static inline pte_t *find_kvm_secondary_pte(struct kvm *kvm, unsigned long ea,
++ unsigned *hshift)
++{
++ pte_t *pte;
++
++ VM_WARN(!spin_is_locked(&kvm->mmu_lock),
++ "%s called with kvm mmu_lock not held \n", __func__);
++ pte = __find_linux_pte(kvm->arch.pgtable, ea, NULL, hshift);
++
++ return pte;
++}
++
+ #endif /* CONFIG_KVM_BOOK3S_HV_POSSIBLE */
+
+ #endif /* __ASM_KVM_BOOK3S_64_H__ */
+diff --git a/arch/powerpc/kvm/book3s_64_mmu_radix.c b/arch/powerpc/kvm/book3s_64_mmu_radix.c
+index bc6c1aa3d0e9..d4e532a63f08 100644
+--- a/arch/powerpc/kvm/book3s_64_mmu_radix.c
++++ b/arch/powerpc/kvm/book3s_64_mmu_radix.c
+@@ -993,11 +993,11 @@ int kvm_unmap_radix(struct kvm *kvm, struct kvm_memory_slot *memslot,
+ return 0;
+ }
+
+- ptep = __find_linux_pte(kvm->arch.pgtable, gpa, NULL, &shift);
++ ptep = find_kvm_secondary_pte(kvm, gpa, &shift);
+ if (ptep && pte_present(*ptep))
+ kvmppc_unmap_pte(kvm, ptep, gpa, shift, memslot,
+ kvm->arch.lpid);
+- return 0;
++ return 0;
+ }
+
+ /* Called with kvm->mmu_lock held */
+@@ -1013,7 +1013,7 @@ int kvm_age_radix(struct kvm *kvm, struct kvm_memory_slot *memslot,
+ if (kvm->arch.secure_guest & KVMPPC_SECURE_INIT_DONE)
+ return ref;
+
+- ptep = __find_linux_pte(kvm->arch.pgtable, gpa, NULL, &shift);
++ ptep = find_kvm_secondary_pte(kvm, gpa, &shift);
+ if (ptep && pte_present(*ptep) && pte_young(*ptep)) {
+ old = kvmppc_radix_update_pte(kvm, ptep, _PAGE_ACCESSED, 0,
+ gpa, shift);
+@@ -1040,7 +1040,7 @@ int kvm_test_age_radix(struct kvm *kvm, struct kvm_memory_slot *memslot,
+ if (kvm->arch.secure_guest & KVMPPC_SECURE_INIT_DONE)
+ return ref;
+
+- ptep = __find_linux_pte(kvm->arch.pgtable, gpa, NULL, &shift);
++ ptep = find_kvm_secondary_pte(kvm, gpa, &shift);
+ if (ptep && pte_present(*ptep) && pte_young(*ptep))
+ ref = 1;
+ return ref;
+@@ -1052,7 +1052,7 @@ static int kvm_radix_test_clear_dirty(struct kvm *kvm,
+ {
+ unsigned long gfn = memslot->base_gfn + pagenum;
+ unsigned long gpa = gfn << PAGE_SHIFT;
+- pte_t *ptep;
++ pte_t *ptep, pte;
+ unsigned int shift;
+ int ret = 0;
+ unsigned long old, *rmapp;
+@@ -1060,12 +1060,35 @@ static int kvm_radix_test_clear_dirty(struct kvm *kvm,
+ if (kvm->arch.secure_guest & KVMPPC_SECURE_INIT_DONE)
+ return ret;
+
+- ptep = __find_linux_pte(kvm->arch.pgtable, gpa, NULL, &shift);
+- if (ptep && pte_present(*ptep) && pte_dirty(*ptep)) {
+- ret = 1;
+- if (shift)
+- ret = 1 << (shift - PAGE_SHIFT);
++ /*
++ * For performance reasons we don't hold kvm->mmu_lock while walking the
++ * partition scoped table.
++ */
++ ptep = find_kvm_secondary_pte_unlocked(kvm, gpa, &shift);
++ if (!ptep)
++ return 0;
++
++ pte = READ_ONCE(*ptep);
++ if (pte_present(pte) && pte_dirty(pte)) {
+ spin_lock(&kvm->mmu_lock);
++ /*
++ * Recheck the pte again
++ */
++ if (pte_val(pte) != pte_val(*ptep)) {
++ /*
++ * We have KVM_MEM_LOG_DIRTY_PAGES enabled. Hence we can
++ * only find PAGE_SIZE pte entries here. We can continue
++ * to use the pte addr returned by above page table
++ * walk.
++ */
++ if (!pte_present(*ptep) || !pte_dirty(*ptep)) {
++ spin_unlock(&kvm->mmu_lock);
++ return 0;
++ }
++ }
++
++ ret = 1;
++ VM_BUG_ON(shift);
+ old = kvmppc_radix_update_pte(kvm, ptep, _PAGE_DIRTY, 0,
+ gpa, shift);
+ kvmppc_radix_tlbie_page(kvm, gpa, shift, kvm->arch.lpid);
+@@ -1121,7 +1144,7 @@ void kvmppc_radix_flush_memslot(struct kvm *kvm,
+ gpa = memslot->base_gfn << PAGE_SHIFT;
+ spin_lock(&kvm->mmu_lock);
+ for (n = memslot->npages; n; --n) {
+- ptep = __find_linux_pte(kvm->arch.pgtable, gpa, NULL, &shift);
++ ptep = find_kvm_secondary_pte(kvm, gpa, &shift);
+ if (ptep && pte_present(*ptep))
+ kvmppc_unmap_pte(kvm, ptep, gpa, shift, memslot,
+ kvm->arch.lpid);
+diff --git a/arch/powerpc/kvm/book3s_hv_nested.c b/arch/powerpc/kvm/book3s_hv_nested.c
+index dc97e5be76f6..7f1fc5db13ea 100644
+--- a/arch/powerpc/kvm/book3s_hv_nested.c
++++ b/arch/powerpc/kvm/book3s_hv_nested.c
+@@ -1362,7 +1362,7 @@ static long int __kvmhv_nested_page_fault(struct kvm_run *run,
+ /* See if can find translation in our partition scoped tables for L1 */
+ pte = __pte(0);
+ spin_lock(&kvm->mmu_lock);
+- pte_p = __find_linux_pte(kvm->arch.pgtable, gpa, NULL, &shift);
++ pte_p = find_kvm_secondary_pte(kvm, gpa, &shift);
+ if (!shift)
+ shift = PAGE_SHIFT;
+ if (pte_p)
+diff --git a/arch/s390/kernel/debug.c b/arch/s390/kernel/debug.c
+index 6d321f5f101d..7184d55d87aa 100644
+--- a/arch/s390/kernel/debug.c
++++ b/arch/s390/kernel/debug.c
+@@ -198,9 +198,10 @@ static debug_entry_t ***debug_areas_alloc(int pages_per_area, int nr_areas)
+ if (!areas)
+ goto fail_malloc_areas;
+ for (i = 0; i < nr_areas; i++) {
++ /* GFP_NOWARN to avoid user triggerable WARN, we handle fails */
+ areas[i] = kmalloc_array(pages_per_area,
+ sizeof(debug_entry_t *),
+- GFP_KERNEL);
++ GFP_KERNEL | __GFP_NOWARN);
+ if (!areas[i])
+ goto fail_malloc_areas2;
+ for (j = 0; j < pages_per_area; j++) {
+diff --git a/arch/x86/kernel/cpu/intel.c b/arch/x86/kernel/cpu/intel.c
+index a19a680542ce..19b6c42739fc 100644
+--- a/arch/x86/kernel/cpu/intel.c
++++ b/arch/x86/kernel/cpu/intel.c
+@@ -48,6 +48,13 @@ enum split_lock_detect_state {
+ static enum split_lock_detect_state sld_state __ro_after_init = sld_off;
+ static u64 msr_test_ctrl_cache __ro_after_init;
+
++/*
++ * With a name like MSR_TEST_CTL it should go without saying, but don't touch
++ * MSR_TEST_CTL unless the CPU is one of the whitelisted models. Writing it
++ * on CPUs that do not support SLD can cause fireworks, even when writing '0'.
++ */
++static bool cpu_model_supports_sld __ro_after_init;
++
+ /*
+ * Processors which have self-snooping capability can handle conflicting
+ * memory type across CPUs by snooping its own cache. However, there exists
+@@ -1064,7 +1071,8 @@ static void sld_update_msr(bool on)
+
+ static void split_lock_init(void)
+ {
+- split_lock_verify_msr(sld_state != sld_off);
++ if (cpu_model_supports_sld)
++ split_lock_verify_msr(sld_state != sld_off);
+ }
+
+ static void split_lock_warn(unsigned long ip)
+@@ -1167,5 +1175,6 @@ void __init cpu_set_core_cap_bits(struct cpuinfo_x86 *c)
+ return;
+ }
+
++ cpu_model_supports_sld = true;
+ split_lock_setup();
+ }
+diff --git a/crypto/af_alg.c b/crypto/af_alg.c
+index b1cd3535c525..28fc323e3fe3 100644
+--- a/crypto/af_alg.c
++++ b/crypto/af_alg.c
+@@ -128,21 +128,15 @@ EXPORT_SYMBOL_GPL(af_alg_release);
+ void af_alg_release_parent(struct sock *sk)
+ {
+ struct alg_sock *ask = alg_sk(sk);
+- unsigned int nokey = ask->nokey_refcnt;
+- bool last = nokey && !ask->refcnt;
++ unsigned int nokey = atomic_read(&ask->nokey_refcnt);
+
+ sk = ask->parent;
+ ask = alg_sk(sk);
+
+- local_bh_disable();
+- bh_lock_sock(sk);
+- ask->nokey_refcnt -= nokey;
+- if (!last)
+- last = !--ask->refcnt;
+- bh_unlock_sock(sk);
+- local_bh_enable();
++ if (nokey)
++ atomic_dec(&ask->nokey_refcnt);
+
+- if (last)
++ if (atomic_dec_and_test(&ask->refcnt))
+ sock_put(sk);
+ }
+ EXPORT_SYMBOL_GPL(af_alg_release_parent);
+@@ -187,7 +181,7 @@ static int alg_bind(struct socket *sock, struct sockaddr *uaddr, int addr_len)
+
+ err = -EBUSY;
+ lock_sock(sk);
+- if (ask->refcnt | ask->nokey_refcnt)
++ if (atomic_read(&ask->refcnt))
+ goto unlock;
+
+ swap(ask->type, type);
+@@ -236,7 +230,7 @@ static int alg_setsockopt(struct socket *sock, int level, int optname,
+ int err = -EBUSY;
+
+ lock_sock(sk);
+- if (ask->refcnt)
++ if (atomic_read(&ask->refcnt) != atomic_read(&ask->nokey_refcnt))
+ goto unlock;
+
+ type = ask->type;
+@@ -301,12 +295,14 @@ int af_alg_accept(struct sock *sk, struct socket *newsock, bool kern)
+ if (err)
+ goto unlock;
+
+- if (nokey || !ask->refcnt++)
++ if (atomic_inc_return_relaxed(&ask->refcnt) == 1)
+ sock_hold(sk);
+- ask->nokey_refcnt += nokey;
++ if (nokey) {
++ atomic_inc(&ask->nokey_refcnt);
++ atomic_set(&alg_sk(sk2)->nokey_refcnt, 1);
++ }
+ alg_sk(sk2)->parent = sk;
+ alg_sk(sk2)->type = type;
+- alg_sk(sk2)->nokey_refcnt = nokey;
+
+ newsock->ops = type->ops;
+ newsock->state = SS_CONNECTED;
+diff --git a/crypto/algif_aead.c b/crypto/algif_aead.c
+index eb1910b6d434..0ae000a61c7f 100644
+--- a/crypto/algif_aead.c
++++ b/crypto/algif_aead.c
+@@ -384,7 +384,7 @@ static int aead_check_key(struct socket *sock)
+ struct alg_sock *ask = alg_sk(sk);
+
+ lock_sock(sk);
+- if (ask->refcnt)
++ if (!atomic_read(&ask->nokey_refcnt))
+ goto unlock_child;
+
+ psk = ask->parent;
+@@ -396,11 +396,8 @@ static int aead_check_key(struct socket *sock)
+ if (crypto_aead_get_flags(tfm->aead) & CRYPTO_TFM_NEED_KEY)
+ goto unlock;
+
+- if (!pask->refcnt++)
+- sock_hold(psk);
+-
+- ask->refcnt = 1;
+- sock_put(psk);
++ atomic_dec(&pask->nokey_refcnt);
++ atomic_set(&ask->nokey_refcnt, 0);
+
+ err = 0;
+
+diff --git a/crypto/algif_hash.c b/crypto/algif_hash.c
+index da1ffa4f7f8d..e71727c25a7d 100644
+--- a/crypto/algif_hash.c
++++ b/crypto/algif_hash.c
+@@ -301,7 +301,7 @@ static int hash_check_key(struct socket *sock)
+ struct alg_sock *ask = alg_sk(sk);
+
+ lock_sock(sk);
+- if (ask->refcnt)
++ if (!atomic_read(&ask->nokey_refcnt))
+ goto unlock_child;
+
+ psk = ask->parent;
+@@ -313,11 +313,8 @@ static int hash_check_key(struct socket *sock)
+ if (crypto_ahash_get_flags(tfm) & CRYPTO_TFM_NEED_KEY)
+ goto unlock;
+
+- if (!pask->refcnt++)
+- sock_hold(psk);
+-
+- ask->refcnt = 1;
+- sock_put(psk);
++ atomic_dec(&pask->nokey_refcnt);
++ atomic_set(&ask->nokey_refcnt, 0);
+
+ err = 0;
+
+diff --git a/crypto/algif_skcipher.c b/crypto/algif_skcipher.c
+index 4c3bdffe0c3a..ec5567c87a6d 100644
+--- a/crypto/algif_skcipher.c
++++ b/crypto/algif_skcipher.c
+@@ -211,7 +211,7 @@ static int skcipher_check_key(struct socket *sock)
+ struct alg_sock *ask = alg_sk(sk);
+
+ lock_sock(sk);
+- if (ask->refcnt)
++ if (!atomic_read(&ask->nokey_refcnt))
+ goto unlock_child;
+
+ psk = ask->parent;
+@@ -223,11 +223,8 @@ static int skcipher_check_key(struct socket *sock)
+ if (crypto_skcipher_get_flags(tfm) & CRYPTO_TFM_NEED_KEY)
+ goto unlock;
+
+- if (!pask->refcnt++)
+- sock_hold(psk);
+-
+- ask->refcnt = 1;
+- sock_put(psk);
++ atomic_dec(&pask->nokey_refcnt);
++ atomic_set(&ask->nokey_refcnt, 0);
+
+ err = 0;
+
+diff --git a/drivers/acpi/fan.c b/drivers/acpi/fan.c
+index 873e039ad4b7..62873388b24f 100644
+--- a/drivers/acpi/fan.c
++++ b/drivers/acpi/fan.c
+@@ -25,8 +25,8 @@ static int acpi_fan_remove(struct platform_device *pdev);
+
+ static const struct acpi_device_id fan_device_ids[] = {
+ {"PNP0C0B", 0},
+- {"INT1044", 0},
+ {"INT3404", 0},
++ {"INTC1044", 0},
+ {"", 0},
+ };
+ MODULE_DEVICE_TABLE(acpi, fan_device_ids);
+diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
+index 9d21bf0f155e..980df853ee49 100644
+--- a/drivers/block/virtio_blk.c
++++ b/drivers/block/virtio_blk.c
+@@ -878,6 +878,7 @@ out_put_disk:
+ put_disk(vblk->disk);
+ out_free_vq:
+ vdev->config->del_vqs(vdev);
++ kfree(vblk->vqs);
+ out_free_vblk:
+ kfree(vblk);
+ out_free_index:
+diff --git a/drivers/char/tpm/tpm-dev-common.c b/drivers/char/tpm/tpm-dev-common.c
+index 87f449340202..1784530b8387 100644
+--- a/drivers/char/tpm/tpm-dev-common.c
++++ b/drivers/char/tpm/tpm-dev-common.c
+@@ -189,15 +189,6 @@ ssize_t tpm_common_write(struct file *file, const char __user *buf,
+ goto out;
+ }
+
+- /* atomic tpm command send and result receive. We only hold the ops
+- * lock during this period so that the tpm can be unregistered even if
+- * the char dev is held open.
+- */
+- if (tpm_try_get_ops(priv->chip)) {
+- ret = -EPIPE;
+- goto out;
+- }
+-
+ priv->response_length = 0;
+ priv->response_read = false;
+ *off = 0;
+@@ -211,11 +202,19 @@ ssize_t tpm_common_write(struct file *file, const char __user *buf,
+ if (file->f_flags & O_NONBLOCK) {
+ priv->command_enqueued = true;
+ queue_work(tpm_dev_wq, &priv->async_work);
+- tpm_put_ops(priv->chip);
+ mutex_unlock(&priv->buffer_mutex);
+ return size;
+ }
+
++ /* atomic tpm command send and result receive. We only hold the ops
++ * lock during this period so that the tpm can be unregistered even if
++ * the char dev is held open.
++ */
++ if (tpm_try_get_ops(priv->chip)) {
++ ret = -EPIPE;
++ goto out;
++ }
++
+ ret = tpm_dev_transmit(priv->chip, priv->space, priv->data_buffer,
+ sizeof(priv->data_buffer));
+ tpm_put_ops(priv->chip);
+diff --git a/drivers/char/tpm/tpm_ibmvtpm.c b/drivers/char/tpm/tpm_ibmvtpm.c
+index 09fe45246b8c..994385bf37c0 100644
+--- a/drivers/char/tpm/tpm_ibmvtpm.c
++++ b/drivers/char/tpm/tpm_ibmvtpm.c
+@@ -683,13 +683,6 @@ static int tpm_ibmvtpm_probe(struct vio_dev *vio_dev,
+ if (rc)
+ goto init_irq_cleanup;
+
+- if (!strcmp(id->compat, "IBM,vtpm20")) {
+- chip->flags |= TPM_CHIP_FLAG_TPM2;
+- rc = tpm2_get_cc_attrs_tbl(chip);
+- if (rc)
+- goto init_irq_cleanup;
+- }
+-
+ if (!wait_event_timeout(ibmvtpm->crq_queue.wq,
+ ibmvtpm->rtce_buf != NULL,
+ HZ)) {
+@@ -697,6 +690,13 @@ static int tpm_ibmvtpm_probe(struct vio_dev *vio_dev,
+ goto init_irq_cleanup;
+ }
+
++ if (!strcmp(id->compat, "IBM,vtpm20")) {
++ chip->flags |= TPM_CHIP_FLAG_TPM2;
++ rc = tpm2_get_cc_attrs_tbl(chip);
++ if (rc)
++ goto init_irq_cleanup;
++ }
++
+ return tpm_chip_register(chip);
+ init_irq_cleanup:
+ do {
+diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
+index 07df88f2e305..e782aaaf3e1f 100644
+--- a/drivers/dma-buf/dma-buf.c
++++ b/drivers/dma-buf/dma-buf.c
+@@ -54,37 +54,11 @@ static char *dmabuffs_dname(struct dentry *dentry, char *buffer, int buflen)
+ dentry->d_name.name, ret > 0 ? name : "");
+ }
+
+-static const struct dentry_operations dma_buf_dentry_ops = {
+- .d_dname = dmabuffs_dname,
+-};
+-
+-static struct vfsmount *dma_buf_mnt;
+-
+-static int dma_buf_fs_init_context(struct fs_context *fc)
+-{
+- struct pseudo_fs_context *ctx;
+-
+- ctx = init_pseudo(fc, DMA_BUF_MAGIC);
+- if (!ctx)
+- return -ENOMEM;
+- ctx->dops = &dma_buf_dentry_ops;
+- return 0;
+-}
+-
+-static struct file_system_type dma_buf_fs_type = {
+- .name = "dmabuf",
+- .init_fs_context = dma_buf_fs_init_context,
+- .kill_sb = kill_anon_super,
+-};
+-
+-static int dma_buf_release(struct inode *inode, struct file *file)
++static void dma_buf_release(struct dentry *dentry)
+ {
+ struct dma_buf *dmabuf;
+
+- if (!is_dma_buf_file(file))
+- return -EINVAL;
+-
+- dmabuf = file->private_data;
++ dmabuf = dentry->d_fsdata;
+
+ BUG_ON(dmabuf->vmapping_counter);
+
+@@ -110,9 +84,32 @@ static int dma_buf_release(struct inode *inode, struct file *file)
+ module_put(dmabuf->owner);
+ kfree(dmabuf->name);
+ kfree(dmabuf);
++}
++
++static const struct dentry_operations dma_buf_dentry_ops = {
++ .d_dname = dmabuffs_dname,
++ .d_release = dma_buf_release,
++};
++
++static struct vfsmount *dma_buf_mnt;
++
++static int dma_buf_fs_init_context(struct fs_context *fc)
++{
++ struct pseudo_fs_context *ctx;
++
++ ctx = init_pseudo(fc, DMA_BUF_MAGIC);
++ if (!ctx)
++ return -ENOMEM;
++ ctx->dops = &dma_buf_dentry_ops;
+ return 0;
+ }
+
++static struct file_system_type dma_buf_fs_type = {
++ .name = "dmabuf",
++ .init_fs_context = dma_buf_fs_init_context,
++ .kill_sb = kill_anon_super,
++};
++
+ static int dma_buf_mmap_internal(struct file *file, struct vm_area_struct *vma)
+ {
+ struct dma_buf *dmabuf;
+@@ -412,7 +409,6 @@ static void dma_buf_show_fdinfo(struct seq_file *m, struct file *file)
+ }
+
+ static const struct file_operations dma_buf_fops = {
+- .release = dma_buf_release,
+ .mmap = dma_buf_mmap_internal,
+ .llseek = dma_buf_llseek,
+ .poll = dma_buf_poll,
+diff --git a/drivers/firmware/efi/Kconfig b/drivers/firmware/efi/Kconfig
+index 613828d3f106..168935b3afa1 100644
+--- a/drivers/firmware/efi/Kconfig
++++ b/drivers/firmware/efi/Kconfig
+@@ -267,3 +267,14 @@ config EFI_EARLYCON
+ depends on SERIAL_EARLYCON && !ARM && !IA64
+ select FONT_SUPPORT
+ select ARCH_USE_MEMREMAP_PROT
++
++config EFI_CUSTOM_SSDT_OVERLAYS
++ bool "Load custom ACPI SSDT overlay from an EFI variable"
++ depends on EFI_VARS && ACPI
++ default ACPI_TABLE_UPGRADE
++ help
++ Allow loading of an ACPI SSDT overlay from an EFI variable specified
++ by a kernel command line option.
++
++ See Documentation/admin-guide/acpi/ssdt-overlays.rst for more
++ information.
+diff --git a/drivers/firmware/efi/efi.c b/drivers/firmware/efi/efi.c
+index 4e3055238f31..20a7ba47a792 100644
+--- a/drivers/firmware/efi/efi.c
++++ b/drivers/firmware/efi/efi.c
+@@ -189,7 +189,7 @@ static void generic_ops_unregister(void)
+ efivars_unregister(&generic_efivars);
+ }
+
+-#if IS_ENABLED(CONFIG_ACPI)
++#ifdef CONFIG_EFI_CUSTOM_SSDT_OVERLAYS
+ #define EFIVAR_SSDT_NAME_MAX 16
+ static char efivar_ssdt[EFIVAR_SSDT_NAME_MAX] __initdata;
+ static int __init efivar_ssdt_setup(char *str)
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_atomfirmware.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_atomfirmware.c
+index 58f9d8c3a17a..44f927641b89 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_atomfirmware.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_atomfirmware.c
+@@ -204,6 +204,7 @@ amdgpu_atomfirmware_get_vram_info(struct amdgpu_device *adev,
+ (mode_info->atom_context->bios + data_offset);
+ switch (crev) {
+ case 11:
++ case 12:
+ mem_channel_number = igp_info->v11.umachannelnumber;
+ /* channel width is 64 */
+ if (vram_width)
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+index affde2de2a0d..59288653412d 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+@@ -4091,6 +4091,8 @@ int amdgpu_device_gpu_recover(struct amdgpu_device *adev,
+ need_full_reset = job_signaled = false;
+ INIT_LIST_HEAD(&device_list);
+
++ amdgpu_ras_set_error_query_ready(adev, false);
++
+ dev_info(adev->dev, "GPU %s begin!\n",
+ (in_ras_intr && !use_baco) ? "jobs stop":"reset");
+
+@@ -4147,6 +4149,7 @@ int amdgpu_device_gpu_recover(struct amdgpu_device *adev,
+ /* block all schedulers and reset given job's ring */
+ list_for_each_entry(tmp_adev, device_list_handle, gmc.xgmi.head) {
+ if (tmp_adev != adev) {
++ amdgpu_ras_set_error_query_ready(tmp_adev, false);
+ amdgpu_device_lock_adev(tmp_adev, false);
+ if (!amdgpu_sriov_vf(tmp_adev))
+ amdgpu_amdkfd_pre_reset(tmp_adev);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c
+index 532f4d908b8d..96b8feb77b15 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c
+@@ -2590,7 +2590,7 @@ static ssize_t amdgpu_hwmon_show_sclk(struct device *dev,
+ if (r)
+ return r;
+
+- return snprintf(buf, PAGE_SIZE, "%d\n", sclk * 10 * 1000);
++ return snprintf(buf, PAGE_SIZE, "%u\n", sclk * 10 * 1000);
+ }
+
+ static ssize_t amdgpu_hwmon_show_sclk_label(struct device *dev,
+@@ -2622,7 +2622,7 @@ static ssize_t amdgpu_hwmon_show_mclk(struct device *dev,
+ if (r)
+ return r;
+
+- return snprintf(buf, PAGE_SIZE, "%d\n", mclk * 10 * 1000);
++ return snprintf(buf, PAGE_SIZE, "%u\n", mclk * 10 * 1000);
+ }
+
+ static ssize_t amdgpu_hwmon_show_mclk_label(struct device *dev,
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
+index ab379b44679c..cd18596b47d3 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
+@@ -80,6 +80,20 @@ atomic_t amdgpu_ras_in_intr = ATOMIC_INIT(0);
+ static bool amdgpu_ras_check_bad_page(struct amdgpu_device *adev,
+ uint64_t addr);
+
++void amdgpu_ras_set_error_query_ready(struct amdgpu_device *adev, bool ready)
++{
++ if (adev && amdgpu_ras_get_context(adev))
++ amdgpu_ras_get_context(adev)->error_query_ready = ready;
++}
++
++bool amdgpu_ras_get_error_query_ready(struct amdgpu_device *adev)
++{
++ if (adev && amdgpu_ras_get_context(adev))
++ return amdgpu_ras_get_context(adev)->error_query_ready;
++
++ return false;
++}
++
+ static ssize_t amdgpu_ras_debugfs_read(struct file *f, char __user *buf,
+ size_t size, loff_t *pos)
+ {
+@@ -281,7 +295,7 @@ static ssize_t amdgpu_ras_debugfs_ctrl_write(struct file *f, const char __user *
+ struct ras_debug_if data;
+ int ret = 0;
+
+- if (amdgpu_ras_intr_triggered()) {
++ if (!amdgpu_ras_get_error_query_ready(adev)) {
+ DRM_WARN("RAS WARN: error injection currently inaccessible\n");
+ return size;
+ }
+@@ -399,7 +413,7 @@ static ssize_t amdgpu_ras_sysfs_read(struct device *dev,
+ .head = obj->head,
+ };
+
+- if (amdgpu_ras_intr_triggered())
++ if (!amdgpu_ras_get_error_query_ready(obj->adev))
+ return snprintf(buf, PAGE_SIZE,
+ "Query currently inaccessible\n");
+
+@@ -1430,9 +1444,10 @@ static void amdgpu_ras_do_recovery(struct work_struct *work)
+ struct amdgpu_hive_info *hive = amdgpu_get_xgmi_hive(adev, false);
+
+ /* Build list of devices to query RAS related errors */
+- if (hive && adev->gmc.xgmi.num_physical_nodes > 1) {
++ if (hive && adev->gmc.xgmi.num_physical_nodes > 1)
+ device_list_handle = &hive->device_list;
+- } else {
++ else {
++ INIT_LIST_HEAD(&device_list);
+ list_add_tail(&adev->gmc.xgmi.head, &device_list);
+ device_list_handle = &device_list;
+ }
+@@ -1896,8 +1911,10 @@ int amdgpu_ras_late_init(struct amdgpu_device *adev,
+ }
+
+ /* in resume phase, no need to create ras fs node */
+- if (adev->in_suspend || adev->in_gpu_reset)
++ if (adev->in_suspend || adev->in_gpu_reset) {
++ amdgpu_ras_set_error_query_ready(adev, true);
+ return 0;
++ }
+
+ if (ih_info->cb) {
+ r = amdgpu_ras_interrupt_add_handler(adev, ih_info);
+@@ -1909,6 +1926,8 @@ int amdgpu_ras_late_init(struct amdgpu_device *adev,
+ if (r)
+ goto sysfs;
+
++ amdgpu_ras_set_error_query_ready(adev, true);
++
+ return 0;
+ cleanup:
+ amdgpu_ras_sysfs_remove(adev, ras_block);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.h
+index 55c3eceb390d..e7df5d8429f8 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.h
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.h
+@@ -334,6 +334,8 @@ struct amdgpu_ras {
+ uint32_t flags;
+ bool reboot;
+ struct amdgpu_ras_eeprom_control eeprom_control;
++
++ bool error_query_ready;
+ };
+
+ struct ras_fs_data {
+@@ -629,4 +631,6 @@ static inline void amdgpu_ras_intr_cleared(void)
+
+ void amdgpu_ras_global_ras_isr(struct amdgpu_device *adev);
+
++void amdgpu_ras_set_error_query_ready(struct amdgpu_device *adev, bool ready);
++
+ #endif
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index f9f02e08054b..69b1f61928ef 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -3797,8 +3797,7 @@ static void update_stream_scaling_settings(const struct drm_display_mode *mode,
+
+ static enum dc_color_depth
+ convert_color_depth_from_display_info(const struct drm_connector *connector,
+- const struct drm_connector_state *state,
+- bool is_y420)
++ bool is_y420, int requested_bpc)
+ {
+ uint8_t bpc;
+
+@@ -3818,10 +3817,7 @@ convert_color_depth_from_display_info(const struct drm_connector *connector,
+ bpc = bpc ? bpc : 8;
+ }
+
+- if (!state)
+- state = connector->state;
+-
+- if (state) {
++ if (requested_bpc > 0) {
+ /*
+ * Cap display bpc based on the user requested value.
+ *
+@@ -3830,7 +3826,7 @@ convert_color_depth_from_display_info(const struct drm_connector *connector,
+ * or if this was called outside of atomic check, so it
+ * can't be used directly.
+ */
+- bpc = min(bpc, state->max_requested_bpc);
++ bpc = min_t(u8, bpc, requested_bpc);
+
+ /* Round down to the nearest even number. */
+ bpc = bpc - (bpc & 1);
+@@ -3952,7 +3948,8 @@ static void fill_stream_properties_from_drm_display_mode(
+ const struct drm_display_mode *mode_in,
+ const struct drm_connector *connector,
+ const struct drm_connector_state *connector_state,
+- const struct dc_stream_state *old_stream)
++ const struct dc_stream_state *old_stream,
++ int requested_bpc)
+ {
+ struct dc_crtc_timing *timing_out = &stream->timing;
+ const struct drm_display_info *info = &connector->display_info;
+@@ -3982,8 +3979,9 @@ static void fill_stream_properties_from_drm_display_mode(
+
+ timing_out->timing_3d_format = TIMING_3D_FORMAT_NONE;
+ timing_out->display_color_depth = convert_color_depth_from_display_info(
+- connector, connector_state,
+- (timing_out->pixel_encoding == PIXEL_ENCODING_YCBCR420));
++ connector,
++ (timing_out->pixel_encoding == PIXEL_ENCODING_YCBCR420),
++ requested_bpc);
+ timing_out->scan_type = SCANNING_TYPE_NODATA;
+ timing_out->hdmi_vic = 0;
+
+@@ -4189,7 +4187,8 @@ static struct dc_stream_state *
+ create_stream_for_sink(struct amdgpu_dm_connector *aconnector,
+ const struct drm_display_mode *drm_mode,
+ const struct dm_connector_state *dm_state,
+- const struct dc_stream_state *old_stream)
++ const struct dc_stream_state *old_stream,
++ int requested_bpc)
+ {
+ struct drm_display_mode *preferred_mode = NULL;
+ struct drm_connector *drm_connector;
+@@ -4274,10 +4273,10 @@ create_stream_for_sink(struct amdgpu_dm_connector *aconnector,
+ */
+ if (!scale || mode_refresh != preferred_refresh)
+ fill_stream_properties_from_drm_display_mode(stream,
+- &mode, &aconnector->base, con_state, NULL);
++ &mode, &aconnector->base, con_state, NULL, requested_bpc);
+ else
+ fill_stream_properties_from_drm_display_mode(stream,
+- &mode, &aconnector->base, con_state, old_stream);
++ &mode, &aconnector->base, con_state, old_stream, requested_bpc);
+
+ stream->timing.flags.DSC = 0;
+
+@@ -4800,16 +4799,55 @@ static void handle_edid_mgmt(struct amdgpu_dm_connector *aconnector)
+ create_eml_sink(aconnector);
+ }
+
++static struct dc_stream_state *
++create_validate_stream_for_sink(struct amdgpu_dm_connector *aconnector,
++ const struct drm_display_mode *drm_mode,
++ const struct dm_connector_state *dm_state,
++ const struct dc_stream_state *old_stream)
++{
++ struct drm_connector *connector = &aconnector->base;
++ struct amdgpu_device *adev = connector->dev->dev_private;
++ struct dc_stream_state *stream;
++ const struct drm_connector_state *drm_state = dm_state ? &dm_state->base : NULL;
++ int requested_bpc = drm_state ? drm_state->max_requested_bpc : 8;
++ enum dc_status dc_result = DC_OK;
++
++ do {
++ stream = create_stream_for_sink(aconnector, drm_mode,
++ dm_state, old_stream,
++ requested_bpc);
++ if (stream == NULL) {
++ DRM_ERROR("Failed to create stream for sink!\n");
++ break;
++ }
++
++ dc_result = dc_validate_stream(adev->dm.dc, stream);
++
++ if (dc_result != DC_OK) {
++ DRM_DEBUG_KMS("Mode %dx%d (clk %d) failed DC validation with error %d\n",
++ drm_mode->hdisplay,
++ drm_mode->vdisplay,
++ drm_mode->clock,
++ dc_result);
++
++ dc_stream_release(stream);
++ stream = NULL;
++ requested_bpc -= 2; /* lower bpc to retry validation */
++ }
++
++ } while (stream == NULL && requested_bpc >= 6);
++
++ return stream;
++}
++
+ enum drm_mode_status amdgpu_dm_connector_mode_valid(struct drm_connector *connector,
+ struct drm_display_mode *mode)
+ {
+ int result = MODE_ERROR;
+ struct dc_sink *dc_sink;
+- struct amdgpu_device *adev = connector->dev->dev_private;
+ /* TODO: Unhardcode stream count */
+ struct dc_stream_state *stream;
+ struct amdgpu_dm_connector *aconnector = to_amdgpu_dm_connector(connector);
+- enum dc_status dc_result = DC_OK;
+
+ if ((mode->flags & DRM_MODE_FLAG_INTERLACE) ||
+ (mode->flags & DRM_MODE_FLAG_DBLSCAN))
+@@ -4830,24 +4868,11 @@ enum drm_mode_status amdgpu_dm_connector_mode_valid(struct drm_connector *connec
+ goto fail;
+ }
+
+- stream = create_stream_for_sink(aconnector, mode, NULL, NULL);
+- if (stream == NULL) {
+- DRM_ERROR("Failed to create stream for sink!\n");
+- goto fail;
+- }
+-
+- dc_result = dc_validate_stream(adev->dm.dc, stream);
+-
+- if (dc_result == DC_OK)
++ stream = create_validate_stream_for_sink(aconnector, mode, NULL, NULL);
++ if (stream) {
++ dc_stream_release(stream);
+ result = MODE_OK;
+- else
+- DRM_DEBUG_KMS("Mode %dx%d (clk %d) failed DC validation with error %d\n",
+- mode->hdisplay,
+- mode->vdisplay,
+- mode->clock,
+- dc_result);
+-
+- dc_stream_release(stream);
++ }
+
+ fail:
+ /* TODO: error handling*/
+@@ -5170,10 +5195,12 @@ static int dm_encoder_helper_atomic_check(struct drm_encoder *encoder,
+ return 0;
+
+ if (!state->duplicated) {
++ int max_bpc = conn_state->max_requested_bpc;
+ is_y420 = drm_mode_is_420_also(&connector->display_info, adjusted_mode) &&
+ aconnector->force_yuv420_output;
+- color_depth = convert_color_depth_from_display_info(connector, conn_state,
+- is_y420);
++ color_depth = convert_color_depth_from_display_info(connector,
++ is_y420,
++ max_bpc);
+ bpp = convert_dc_color_depth_into_bpc(color_depth) * 3;
+ clock = adjusted_mode->clock;
+ dm_new_connector_state->pbn = drm_dp_calc_pbn_mode(clock, bpp, false);
+@@ -7589,10 +7616,10 @@ static int dm_update_crtc_state(struct amdgpu_display_manager *dm,
+ if (!drm_atomic_crtc_needs_modeset(new_crtc_state))
+ goto skip_modeset;
+
+- new_stream = create_stream_for_sink(aconnector,
+- &new_crtc_state->mode,
+- dm_new_conn_state,
+- dm_old_crtc_state->stream);
++ new_stream = create_validate_stream_for_sink(aconnector,
++ &new_crtc_state->mode,
++ dm_new_conn_state,
++ dm_old_crtc_state->stream);
+
+ /*
+ * we can have no stream on ACTION_SET if a display
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc.c b/drivers/gpu/drm/amd/display/dc/core/dc.c
+index 4acaf4be8a81..c825d383f0f1 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc.c
+@@ -2533,10 +2533,12 @@ void dc_commit_updates_for_stream(struct dc *dc,
+
+ copy_stream_update_to_stream(dc, context, stream, stream_update);
+
+- if (!dc->res_pool->funcs->validate_bandwidth(dc, context, false)) {
+- DC_ERROR("Mode validation failed for stream update!\n");
+- dc_release_state(context);
+- return;
++ if (update_type > UPDATE_TYPE_FAST) {
++ if (!dc->res_pool->funcs->validate_bandwidth(dc, context, false)) {
++ DC_ERROR("Mode validation failed for stream update!\n");
++ dc_release_state(context);
++ return;
++ }
+ }
+
+ commit_planes_for_stream(
+diff --git a/drivers/gpu/drm/amd/powerplay/smumgr/vega20_smumgr.c b/drivers/gpu/drm/amd/powerplay/smumgr/vega20_smumgr.c
+index 16aa171971d3..f1e7024c508c 100644
+--- a/drivers/gpu/drm/amd/powerplay/smumgr/vega20_smumgr.c
++++ b/drivers/gpu/drm/amd/powerplay/smumgr/vega20_smumgr.c
+@@ -508,9 +508,11 @@ static int vega20_smu_init(struct pp_hwmgr *hwmgr)
+ priv->smu_tables.entry[TABLE_ACTIVITY_MONITOR_COEFF].version = 0x01;
+ priv->smu_tables.entry[TABLE_ACTIVITY_MONITOR_COEFF].size = sizeof(DpmActivityMonitorCoeffInt_t);
+
+- ret = smu_v11_0_i2c_eeprom_control_init(&adev->pm.smu_i2c);
+- if (ret)
+- goto err4;
++ if (adev->psp.ras.ras) {
++ ret = smu_v11_0_i2c_eeprom_control_init(&adev->pm.smu_i2c);
++ if (ret)
++ goto err4;
++ }
+
+ return 0;
+
+@@ -546,7 +548,8 @@ static int vega20_smu_fini(struct pp_hwmgr *hwmgr)
+ (struct vega20_smumgr *)(hwmgr->smu_backend);
+ struct amdgpu_device *adev = hwmgr->adev;
+
+- smu_v11_0_i2c_eeprom_control_fini(&adev->pm.smu_i2c);
++ if (adev->psp.ras.ras)
++ smu_v11_0_i2c_eeprom_control_fini(&adev->pm.smu_i2c);
+
+ if (priv) {
+ amdgpu_bo_free_kernel(&priv->smu_tables.entry[TABLE_PPTABLE].handle,
+diff --git a/drivers/gpu/drm/i915/gt/intel_timeline.c b/drivers/gpu/drm/i915/gt/intel_timeline.c
+index 08b56d7ab4f4..92da746f01c1 100644
+--- a/drivers/gpu/drm/i915/gt/intel_timeline.c
++++ b/drivers/gpu/drm/i915/gt/intel_timeline.c
+@@ -119,6 +119,15 @@ static void __idle_hwsp_free(struct intel_timeline_hwsp *hwsp, int cacheline)
+ spin_unlock_irqrestore(>->hwsp_lock, flags);
+ }
+
++static void __rcu_cacheline_free(struct rcu_head *rcu)
++{
++ struct intel_timeline_cacheline *cl =
++ container_of(rcu, typeof(*cl), rcu);
++
++ i915_active_fini(&cl->active);
++ kfree(cl);
++}
++
+ static void __idle_cacheline_free(struct intel_timeline_cacheline *cl)
+ {
+ GEM_BUG_ON(!i915_active_is_idle(&cl->active));
+@@ -127,8 +136,7 @@ static void __idle_cacheline_free(struct intel_timeline_cacheline *cl)
+ i915_vma_put(cl->hwsp->vma);
+ __idle_hwsp_free(cl->hwsp, ptr_unmask_bits(cl->vaddr, CACHELINE_BITS));
+
+- i915_active_fini(&cl->active);
+- kfree_rcu(cl, rcu);
++ call_rcu(&cl->rcu, __rcu_cacheline_free);
+ }
+
+ __i915_active_call
+diff --git a/drivers/gpu/drm/i915/gt/shaders/README b/drivers/gpu/drm/i915/gt/shaders/README
+new file mode 100644
+index 000000000000..e7e96d7073c7
+--- /dev/null
++++ b/drivers/gpu/drm/i915/gt/shaders/README
+@@ -0,0 +1,46 @@
++ASM sources for auto generated shaders
++======================================
++
++The i915/gt/hsw_clear_kernel.c and i915/gt/ivb_clear_kernel.c files contain
++pre-compiled batch chunks that will clear any residual render cache during
++context switch.
++
++They are generated from their respective platform ASM files present on
++i915/gt/shaders/clear_kernel directory.
++
++The generated .c files should never be modified directly. Instead, any modification
++needs to be done on their respective ASM files and the build instructions below
++need to be followed.
++
++Building
++========
++
++Environment
++-----------
++
++IGT GPU tools scripts and Mesa's i965 instruction assembler tool are used
++for building.
++
++Please make sure your Mesa tool is compiled with "-Dtools=intel" and
++"-Ddri-drivers=i965", and run this script from the IGT source root directory.
++
++The instructions below assume:
++ * IGT GPU tools source code is located in your home directory (~) as ~/igt
++ * Mesa source code is located in your home directory (~) as ~/mesa
++ and built under the ~/mesa/build directory
++ * Linux kernel source code is under your home directory (~) as ~/linux
++
++Instructions
++------------
++
++~ $ cp ~/linux/drivers/gpu/drm/i915/gt/shaders/clear_kernel/ivb.asm \
++ ~/igt/lib/i915/shaders/clear_kernel/ivb.asm
++~ $ cd ~/igt
++igt $ ./scripts/generate_clear_kernel.sh -g ivb \
++ -m ~/mesa/build/src/intel/tools/i965_asm
++
++~ $ cp ~/linux/drivers/gpu/drm/i915/gt/shaders/clear_kernel/hsw.asm \
++ ~/igt/lib/i915/shaders/clear_kernel/hsw.asm
++~ $ cd ~/igt
++igt $ ./scripts/generate_clear_kernel.sh -g hsw \
++ -m ~/mesa/build/src/intel/tools/i965_asm
+\ No newline at end of file
+diff --git a/drivers/gpu/drm/i915/gt/shaders/clear_kernel/hsw.asm b/drivers/gpu/drm/i915/gt/shaders/clear_kernel/hsw.asm
+new file mode 100644
+index 000000000000..5fdf384bb621
+--- /dev/null
++++ b/drivers/gpu/drm/i915/gt/shaders/clear_kernel/hsw.asm
+@@ -0,0 +1,119 @@
++// SPDX-License-Identifier: MIT
++/*
++ * Copyright © 2020 Intel Corporation
++ */
++
++/*
++ * Kernel for PAVP buffer clear.
++ *
++ * 1. Clear all 64 GRF registers assigned to the kernel with designated value;
++ * 2. Write 32x16 block of all "0" to render target buffer which indirectly clears
++ * 512 bytes of Render Cache.
++ */
++
++/* Store designated "clear GRF" value */
++mov(1) f0.1<1>UW g1.2<0,1,0>UW { align1 1N };
++
++/**
++ * Curbe Format
++ *
++ * DW 1.0 - Block Offset to write Render Cache
++ * DW 1.1 [15:0] - Clear Word
++ * DW 1.2 - Delay iterations
++ * DW 1.3 - Enable Instrumentation (only for debug)
++ * DW 1.4 - Rsvd (intended for context ID)
++ * DW 1.5 - [31:16]:SliceCount, [15:0]:SubSlicePerSliceCount
++ * DW 1.6 - Rsvd MBZ (intended for Enable Wait on Total Thread Count)
++ * DW 1.7 - Rsvd MBZ (intended for Total Thread Count)
++ *
++ * Binding Table
++ *
++ * BTI 0: 2D Surface to help clear L3 (Render/Data Cache)
++ * BTI 1: Wait/Instrumentation Buffer
++ * Size : (SliceCount * SubSliceCount * 16 EUs/SubSlice) rows * (16 threads/EU) cols (Format R32_UINT)
++ * Expected to be initialized to 0 by driver/another kernel
++ * Layout:
++ * RowN: Histogram for EU-N: (SliceID*SubSlicePerSliceCount + SSID)*16 + EUID [assume max 16 EUs / SS]
++ * Col-k[DW-k]: Threads Executed on ThreadID-k for EU-N
++ */
++add(1) g1.2<1>UD g1.2<0,1,0>UD 0x00000001UD { align1 1N }; /* Loop count to delay kernel: Init to (g1.2 + 1) */
++cmp.z.f0.0(1) null<1>UD g1.3<0,1,0>UD 0x00000000UD { align1 1N };
++(+f0.0) jmpi(1) 352D { align1 WE_all 1N };
++
++/**
++ * State Register has info on where this thread is running
++ * IVB: sr0.0 :: [15:13]: MBZ, 12: HSID (Half-Slice ID), [11:8]EUID, [2:0] ThreadSlotID
++ * HSW: sr0.0 :: 15: MBZ, [14:13]: SliceID, 12: HSID (Half-Slice ID), [11:8]EUID, [2:0] ThreadSlotID
++ */
++mov(8) g3<1>UD 0x00000000UD { align1 1Q };
++shr(1) g3<1>D sr0<0,1,0>D 12D { align1 1N };
++and(1) g3<1>D g3<0,1,0>D 1D { align1 1N }; /* g3 has HSID */
++shr(1) g3.1<1>D sr0<0,1,0>D 13D { align1 1N };
++and(1) g3.1<1>D g3.1<0,1,0>D 3D { align1 1N }; /* g3.1 has sliceID */
++mul(1) g3.5<1>D g3.1<0,1,0>D g1.10<0,1,0>UW { align1 1N };
++add(1) g3<1>D g3<0,1,0>D g3.5<0,1,0>D { align1 1N }; /* g3 = sliceID * SubSlicePerSliceCount + HSID */
++shr(1) g3.2<1>D sr0<0,1,0>D 8D { align1 1N };
++and(1) g3.2<1>D g3.2<0,1,0>D 15D { align1 1N }; /* g3.2 = EUID */
++mul(1) g3.4<1>D g3<0,1,0>D 16D { align1 1N };
++add(1) g3.2<1>D g3.2<0,1,0>D g3.4<0,1,0>D { align1 1N }; /* g3.2 now points to EU row number (Y-pixel = V address ) in instrumentation surf */
++
++mov(8) g5<1>UD 0x00000000UD { align1 1Q };
++and(1) g3.3<1>D sr0<0,1,0>D 7D { align1 1N };
++mul(1) g3.3<1>D g3.3<0,1,0>D 4D { align1 1N };
++
++mov(8) g4<1>UD g0<8,8,1>UD { align1 1Q }; /* Initialize message header with g0 */
++mov(1) g4<1>UD g3.3<0,1,0>UD { align1 1N }; /* Block offset */
++mov(1) g4.1<1>UD g3.2<0,1,0>UD { align1 1N }; /* Block offset */
++mov(1) g4.2<1>UD 0x00000003UD { align1 1N }; /* Block size (1 row x 4 bytes) */
++and(1) g4.3<1>UD g4.3<0,1,0>UW 0xffffffffUD { align1 1N };
++
++/* Media block read to fetch current value at specified location in instrumentation buffer */
++sendc(8) g5<1>UD g4<8,8,1>F 0x02190001
++
++ render MsgDesc: media block read MsgCtrl = 0x0 Surface = 1 mlen 1 rlen 1 { align1 1Q };
++add(1) g5<1>D g5<0,1,0>D 1D { align1 1N };
++
++/* Media block write for updated value at specified location in instrumentation buffer */
++sendc(8) g5<1>UD g4<8,8,1>F 0x040a8001
++ render MsgDesc: media block write MsgCtrl = 0x0 Surface = 1 mlen 2 rlen 0 { align1 1Q };
++
++/* Delay thread for specified parameter */
++add.nz.f0.0(1) g1.2<1>UD g1.2<0,1,0>UD -1D { align1 1N };
++(+f0.0) jmpi(1) -32D { align1 WE_all 1N };
++
++/* Store designated "clear GRF" value */
++mov(1) f0.1<1>UW g1.2<0,1,0>UW { align1 1N };
++
++/* Initialize looping parameters */
++mov(1) a0<1>D 0D { align1 1N }; /* Initialize a0.0:w=0 */
++mov(1) a0.4<1>W 127W { align1 1N }; /* Loop count. Each loop contains 16 GRF's */
++
++/* Write 32x16 all "0" block */
++mov(8) g2<1>UD g0<8,8,1>UD { align1 1Q };
++mov(8) g127<1>UD g0<8,8,1>UD { align1 1Q };
++mov(2) g2<1>UD g1<2,2,1>UW { align1 1N };
++mov(1) g2.2<1>UD 0x000f000fUD { align1 1N }; /* Block size (16x16) */
++and(1) g2.3<1>UD g2.3<0,1,0>UW 0xffffffefUD { align1 1N };
++mov(16) g3<1>UD 0x00000000UD { align1 1H };
++mov(16) g4<1>UD 0x00000000UD { align1 1H };
++mov(16) g5<1>UD 0x00000000UD { align1 1H };
++mov(16) g6<1>UD 0x00000000UD { align1 1H };
++mov(16) g7<1>UD 0x00000000UD { align1 1H };
++mov(16) g8<1>UD 0x00000000UD { align1 1H };
++mov(16) g9<1>UD 0x00000000UD { align1 1H };
++mov(16) g10<1>UD 0x00000000UD { align1 1H };
++sendc(8) null<1>UD g2<8,8,1>F 0x120a8000
++ render MsgDesc: media block write MsgCtrl = 0x0 Surface = 0 mlen 9 rlen 0 { align1 1Q };
++add(1) g2<1>UD g1<0,1,0>UW 0x0010UW { align1 1N };
++sendc(8) null<1>UD g2<8,8,1>F 0x120a8000
++ render MsgDesc: media block write MsgCtrl = 0x0 Surface = 0 mlen 9 rlen 0 { align1 1Q };
++
++/* Now, clear all GRF registers */
++add.nz.f0.0(1) a0.4<1>W a0.4<0,1,0>W -1W { align1 1N };
++mov(16) g[a0]<1>UW f0.1<0,1,0>UW { align1 1H };
++add(1) a0<1>D a0<0,1,0>D 32D { align1 1N };
++(+f0.0) jmpi(1) -64D { align1 WE_all 1N };
++
++/* Terminate the thread */
++sendc(8) null<1>UD g127<8,8,1>F 0x82000010
++ thread_spawner MsgDesc: mlen 1 rlen 0 { align1 1Q EOT };
+diff --git a/drivers/gpu/drm/i915/gt/shaders/clear_kernel/ivb.asm b/drivers/gpu/drm/i915/gt/shaders/clear_kernel/ivb.asm
+new file mode 100644
+index 000000000000..97c7ac9e3854
+--- /dev/null
++++ b/drivers/gpu/drm/i915/gt/shaders/clear_kernel/ivb.asm
+@@ -0,0 +1,117 @@
++// SPDX-License-Identifier: MIT
++/*
++ * Copyright © 2020 Intel Corporation
++ */
++
++/*
++ * Kernel for PAVP buffer clear.
++ *
++ * 1. Clear all 64 GRF registers assigned to the kernel with designated value;
++ * 2. Write 32x16 block of all "0" to render target buffer which indirectly clears
++ * 512 bytes of Render Cache.
++ */
++
++/* Store designated "clear GRF" value */
++mov(1) f0.1<1>UW g1.2<0,1,0>UW { align1 1N };
++
++/**
++ * Curbe Format
++ *
++ * DW 1.0 - Block Offset to write Render Cache
++ * DW 1.1 [15:0] - Clear Word
++ * DW 1.2 - Delay iterations
++ * DW 1.3 - Enable Instrumentation (only for debug)
++ * DW 1.4 - Rsvd (intended for context ID)
++ * DW 1.5 - [31:16]:SliceCount, [15:0]:SubSlicePerSliceCount
++ * DW 1.6 - Rsvd MBZ (intended for Enable Wait on Total Thread Count)
++ * DW 1.7 - Rsvd MBZ (intended for Total Thread Count)
++ *
++ * Binding Table
++ *
++ * BTI 0: 2D Surface to help clear L3 (Render/Data Cache)
++ * BTI 1: Wait/Instrumentation Buffer
++ * Size : (SliceCount * SubSliceCount * 16 EUs/SubSlice) rows * (16 threads/EU) cols (Format R32_UINT)
++ * Expected to be initialized to 0 by driver/another kernel
++ * Layout :
++ * RowN: Histogram for EU-N: (SliceID*SubSlicePerSliceCount + SSID)*16 + EUID [assume max 16 EUs / SS]
++ * Col-k[DW-k]: Threads Executed on ThreadID-k for EU-N
++ */
++add(1) g1.2<1>UD g1.2<0,1,0>UD 0x00000001UD { align1 1N }; /* Loop count to delay kernel: Init to (g1.2 + 1) */
++cmp.z.f0.0(1) null<1>UD g1.3<0,1,0>UD 0x00000000UD { align1 1N };
++(+f0.0) jmpi(1) 44D { align1 WE_all 1N };
++
++/**
++ * State Register has info on where this thread is running
++ * IVB: sr0.0 :: [15:13]: MBZ, 12: HSID (Half-Slice ID), [11:8]EUID, [2:0] ThreadSlotID
++ * HSW: sr0.0 :: 15: MBZ, [14:13]: SliceID, 12: HSID (Half-Slice ID), [11:8]EUID, [2:0] ThreadSlotID
++ */
++mov(8) g3<1>UD 0x00000000UD { align1 1Q };
++shr(1) g3<1>D sr0<0,1,0>D 12D { align1 1N };
++and(1) g3<1>D g3<0,1,0>D 1D { align1 1N }; /* g3 has HSID */
++shr(1) g3.1<1>D sr0<0,1,0>D 13D { align1 1N };
++and(1) g3.1<1>D g3.1<0,1,0>D 3D { align1 1N }; /* g3.1 has sliceID */
++mul(1) g3.5<1>D g3.1<0,1,0>D g1.10<0,1,0>UW { align1 1N };
++add(1) g3<1>D g3<0,1,0>D g3.5<0,1,0>D { align1 1N }; /* g3 = sliceID * SubSlicePerSliceCount + HSID */
++shr(1) g3.2<1>D sr0<0,1,0>D 8D { align1 1N };
++and(1) g3.2<1>D g3.2<0,1,0>D 15D { align1 1N }; /* g3.2 = EUID */
++mul(1) g3.4<1>D g3<0,1,0>D 16D { align1 1N };
++add(1) g3.2<1>D g3.2<0,1,0>D g3.4<0,1,0>D { align1 1N }; /* g3.2 now points to EU row number (Y-pixel = V address ) in instrumentation surf */
++
++mov(8) g5<1>UD 0x00000000UD { align1 1Q };
++and(1) g3.3<1>D sr0<0,1,0>D 7D { align1 1N };
++mul(1) g3.3<1>D g3.3<0,1,0>D 4D { align1 1N };
++
++mov(8) g4<1>UD g0<8,8,1>UD { align1 1Q }; /* Initialize message header with g0 */
++mov(1) g4<1>UD g3.3<0,1,0>UD { align1 1N }; /* Block offset */
++mov(1) g4.1<1>UD g3.2<0,1,0>UD { align1 1N }; /* Block offset */
++mov(1) g4.2<1>UD 0x00000003UD { align1 1N }; /* Block size (1 row x 4 bytes) */
++and(1) g4.3<1>UD g4.3<0,1,0>UW 0xffffffffUD { align1 1N };
++
++/* Media block read to fetch current value at specified location in instrumentation buffer */
++sendc(8) g5<1>UD g4<8,8,1>F 0x02190001
++ render MsgDesc: media block read MsgCtrl = 0x0 Surface = 1 mlen 1 rlen 1 { align1 1Q };
++add(1) g5<1>D g5<0,1,0>D 1D { align1 1N };
++
++/* Media block write for updated value at specified location in instrumentation buffer */
++sendc(8) g5<1>UD g4<8,8,1>F 0x040a8001
++ render MsgDesc: media block write MsgCtrl = 0x0 Surface = 1 mlen 2 rlen 0 { align1 1Q };
++/* Delay thread for specified parameter */
++add.nz.f0.0(1) g1.2<1>UD g1.2<0,1,0>UD -1D { align1 1N };
++(+f0.0) jmpi(1) -4D { align1 WE_all 1N };
++
++/* Store designated "clear GRF" value */
++mov(1) f0.1<1>UW g1.2<0,1,0>UW { align1 1N };
++
++/* Initialize looping parameters */
++mov(1) a0<1>D 0D { align1 1N }; /* Initialize a0.0:w=0 */
++mov(1) a0.4<1>W 127W { align1 1N }; /* Loop count. Each loop clears 16 GRFs */
++
++/* Write 32x16 all "0" block */
++mov(8) g2<1>UD g0<8,8,1>UD { align1 1Q };
++mov(8) g127<1>UD g0<8,8,1>UD { align1 1Q };
++mov(2) g2<1>UD g1<2,2,1>UW { align1 1N };
++mov(1) g2.2<1>UD 0x000f000fUD { align1 1N }; /* Block size (16x16) */
++and(1) g2.3<1>UD g2.3<0,1,0>UW 0xffffffefUD { align1 1N };
++mov(16) g3<1>UD 0x00000000UD { align1 1H };
++mov(16) g4<1>UD 0x00000000UD { align1 1H };
++mov(16) g5<1>UD 0x00000000UD { align1 1H };
++mov(16) g6<1>UD 0x00000000UD { align1 1H };
++mov(16) g7<1>UD 0x00000000UD { align1 1H };
++mov(16) g8<1>UD 0x00000000UD { align1 1H };
++mov(16) g9<1>UD 0x00000000UD { align1 1H };
++mov(16) g10<1>UD 0x00000000UD { align1 1H };
++sendc(8) null<1>UD g2<8,8,1>F 0x120a8000
++ render MsgDesc: media block write MsgCtrl = 0x0 Surface = 0 mlen 9 rlen 0 { align1 1Q };
++add(1) g2<1>UD g1<0,1,0>UW 0x0010UW { align1 1N };
++sendc(8) null<1>UD g2<8,8,1>F 0x120a8000
++ render MsgDesc: media block write MsgCtrl = 0x0 Surface = 0 mlen 9 rlen 0 { align1 1Q };
++
++/* Now, clear all GRF registers */
++add.nz.f0.0(1) a0.4<1>W a0.4<0,1,0>W -1W { align1 1N };
++mov(16) g[a0]<1>UW f0.1<0,1,0>UW { align1 1H };
++add(1) a0<1>D a0<0,1,0>D 32D { align1 1N };
++(+f0.0) jmpi(1) -8D { align1 WE_all 1N };
++
++/* Terminate the thread */
++sendc(8) null<1>UD g127<8,8,1>F 0x82000010
++ thread_spawner MsgDesc: mlen 1 rlen 0 { align1 1Q EOT };
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
+index a1b79ee2bd9d..a2f6b688a976 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
+@@ -2173,7 +2173,7 @@ struct drm_encoder *dpu_encoder_init(struct drm_device *dev,
+
+ dpu_enc = devm_kzalloc(dev->dev, sizeof(*dpu_enc), GFP_KERNEL);
+ if (!dpu_enc)
+- return ERR_PTR(ENOMEM);
++ return ERR_PTR(-ENOMEM);
+
+ rc = drm_encoder_init(dev, &dpu_enc->base, &dpu_encoder_funcs,
+ drm_enc_mode, NULL);
+diff --git a/drivers/gpu/drm/sun4i/sun4i_hdmi_enc.c b/drivers/gpu/drm/sun4i/sun4i_hdmi_enc.c
+index 68d4644ac2dc..f07e0c32b93a 100644
+--- a/drivers/gpu/drm/sun4i/sun4i_hdmi_enc.c
++++ b/drivers/gpu/drm/sun4i/sun4i_hdmi_enc.c
+@@ -262,9 +262,8 @@ sun4i_hdmi_connector_detect(struct drm_connector *connector, bool force)
+ struct sun4i_hdmi *hdmi = drm_connector_to_sun4i_hdmi(connector);
+ unsigned long reg;
+
+- if (readl_poll_timeout(hdmi->base + SUN4I_HDMI_HPD_REG, reg,
+- reg & SUN4I_HDMI_HPD_HIGH,
+- 0, 500000)) {
++ reg = readl(hdmi->base + SUN4I_HDMI_HPD_REG);
++ if (reg & SUN4I_HDMI_HPD_HIGH) {
+ cec_phys_addr_invalidate(hdmi->cec_adap);
+ return connector_status_disconnected;
+ }
+diff --git a/drivers/hv/vmbus_drv.c b/drivers/hv/vmbus_drv.c
+index ec173da45b42..7469cfa72518 100644
+--- a/drivers/hv/vmbus_drv.c
++++ b/drivers/hv/vmbus_drv.c
+@@ -1328,7 +1328,7 @@ static void hv_kmsg_dump(struct kmsg_dumper *dumper,
+ * Write dump contents to the page. No need to synchronize; panic should
+ * be single-threaded.
+ */
+- kmsg_dump_get_buffer(dumper, true, hv_panic_page, HV_HYP_PAGE_SIZE,
++ kmsg_dump_get_buffer(dumper, false, hv_panic_page, HV_HYP_PAGE_SIZE,
+ &bytes_written);
+ if (bytes_written)
+ hyperv_report_panic_msg(panic_pa, bytes_written);
+diff --git a/drivers/hwmon/acpi_power_meter.c b/drivers/hwmon/acpi_power_meter.c
+index 0db8ef4fd6e1..a270b975e90b 100644
+--- a/drivers/hwmon/acpi_power_meter.c
++++ b/drivers/hwmon/acpi_power_meter.c
+@@ -883,7 +883,7 @@ static int acpi_power_meter_add(struct acpi_device *device)
+
+ res = setup_attrs(resource);
+ if (res)
+- goto exit_free;
++ goto exit_free_capability;
+
+ resource->hwmon_dev = hwmon_device_register(&device->dev);
+ if (IS_ERR(resource->hwmon_dev)) {
+@@ -896,6 +896,8 @@ static int acpi_power_meter_add(struct acpi_device *device)
+
+ exit_remove:
+ remove_attrs(resource);
++exit_free_capability:
++ free_capabilities(resource);
+ exit_free:
+ kfree(resource);
+ exit:
+diff --git a/drivers/hwmon/max6697.c b/drivers/hwmon/max6697.c
+index 743752a2467a..64122eb38060 100644
+--- a/drivers/hwmon/max6697.c
++++ b/drivers/hwmon/max6697.c
+@@ -38,8 +38,9 @@ static const u8 MAX6697_REG_CRIT[] = {
+ * Map device tree / platform data register bit map to chip bit map.
+ * Applies to alert register and over-temperature register.
+ */
+-#define MAX6697_MAP_BITS(reg) ((((reg) & 0x7e) >> 1) | \
++#define MAX6697_ALERT_MAP_BITS(reg) ((((reg) & 0x7e) >> 1) | \
+ (((reg) & 0x01) << 6) | ((reg) & 0x80))
++#define MAX6697_OVERT_MAP_BITS(reg) (((reg) >> 1) | (((reg) & 0x01) << 7))
+
+ #define MAX6697_REG_STAT(n) (0x44 + (n))
+
+@@ -562,12 +563,12 @@ static int max6697_init_chip(struct max6697_data *data,
+ return ret;
+
+ ret = i2c_smbus_write_byte_data(client, MAX6697_REG_ALERT_MASK,
+- MAX6697_MAP_BITS(pdata->alert_mask));
++ MAX6697_ALERT_MAP_BITS(pdata->alert_mask));
+ if (ret < 0)
+ return ret;
+
+ ret = i2c_smbus_write_byte_data(client, MAX6697_REG_OVERT_MASK,
+- MAX6697_MAP_BITS(pdata->over_temperature_mask));
++ MAX6697_OVERT_MAP_BITS(pdata->over_temperature_mask));
+ if (ret < 0)
+ return ret;
+
+diff --git a/drivers/hwmon/pmbus/pmbus_core.c b/drivers/hwmon/pmbus/pmbus_core.c
+index 8d321bf7d15b..e721a016f3e7 100644
+--- a/drivers/hwmon/pmbus/pmbus_core.c
++++ b/drivers/hwmon/pmbus/pmbus_core.c
+@@ -1869,7 +1869,7 @@ static int pmbus_add_fan_ctrl(struct i2c_client *client,
+ struct pmbus_sensor *sensor;
+
+ sensor = pmbus_add_sensor(data, "fan", "target", index, page,
+- PMBUS_VIRT_FAN_TARGET_1 + id, 0xff, PSC_FAN,
++ 0xff, PMBUS_VIRT_FAN_TARGET_1 + id, PSC_FAN,
+ false, false, true);
+
+ if (!sensor)
+@@ -1880,14 +1880,14 @@ static int pmbus_add_fan_ctrl(struct i2c_client *client,
+ return 0;
+
+ sensor = pmbus_add_sensor(data, "pwm", NULL, index, page,
+- PMBUS_VIRT_PWM_1 + id, 0xff, PSC_PWM,
++ 0xff, PMBUS_VIRT_PWM_1 + id, PSC_PWM,
+ false, false, true);
+
+ if (!sensor)
+ return -ENOMEM;
+
+ sensor = pmbus_add_sensor(data, "pwm", "enable", index, page,
+- PMBUS_VIRT_PWM_ENABLE_1 + id, 0xff, PSC_PWM,
++ 0xff, PMBUS_VIRT_PWM_ENABLE_1 + id, PSC_PWM,
+ true, false, false);
+
+ if (!sensor)
+@@ -1929,7 +1929,7 @@ static int pmbus_add_fan_attributes(struct i2c_client *client,
+ continue;
+
+ if (pmbus_add_sensor(data, "fan", "input", index,
+- page, pmbus_fan_registers[f], 0xff,
++ page, 0xff, pmbus_fan_registers[f],
+ PSC_FAN, true, true, true) == NULL)
+ return -ENOMEM;
+
+diff --git a/drivers/i2c/algos/i2c-algo-pca.c b/drivers/i2c/algos/i2c-algo-pca.c
+index 7f10312d1b88..388978775be0 100644
+--- a/drivers/i2c/algos/i2c-algo-pca.c
++++ b/drivers/i2c/algos/i2c-algo-pca.c
+@@ -314,7 +314,8 @@ static int pca_xfer(struct i2c_adapter *i2c_adap,
+ DEB2("BUS ERROR - SDA Stuck low\n");
+ pca_reset(adap);
+ goto out;
+- case 0x90: /* Bus error - SCL stuck low */
++ case 0x78: /* Bus error - SCL stuck low (PCA9665) */
++ case 0x90: /* Bus error - SCL stuck low (PCA9564) */
+ DEB2("BUS ERROR - SCL Stuck low\n");
+ pca_reset(adap);
+ goto out;
+diff --git a/drivers/i2c/busses/i2c-designware-platdrv.c b/drivers/i2c/busses/i2c-designware-platdrv.c
+index 5536673060cc..3a9c2cfbef97 100644
+--- a/drivers/i2c/busses/i2c-designware-platdrv.c
++++ b/drivers/i2c/busses/i2c-designware-platdrv.c
+@@ -234,6 +234,17 @@ static const u32 supported_speeds[] = {
+ I2C_MAX_STANDARD_MODE_FREQ,
+ };
+
++static const struct dmi_system_id dw_i2c_hwmon_class_dmi[] = {
++ {
++ .ident = "Qtechnology QT5222",
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "Qtechnology"),
++ DMI_MATCH(DMI_PRODUCT_NAME, "QT5222"),
++ },
++ },
++ { } /* terminate list */
++};
++
+ static int dw_i2c_plat_probe(struct platform_device *pdev)
+ {
+ struct dw_i2c_platform_data *pdata = dev_get_platdata(&pdev->dev);
+@@ -349,7 +360,8 @@ static int dw_i2c_plat_probe(struct platform_device *pdev)
+
+ adap = &dev->adapter;
+ adap->owner = THIS_MODULE;
+- adap->class = I2C_CLASS_DEPRECATED;
++ adap->class = dmi_check_system(dw_i2c_hwmon_class_dmi) ?
++ I2C_CLASS_HWMON : I2C_CLASS_DEPRECATED;
+ ACPI_COMPANION_SET(&adap->dev, ACPI_COMPANION(&pdev->dev));
+ adap->dev.of_node = pdev->dev.of_node;
+ adap->nr = -1;
+diff --git a/drivers/i2c/busses/i2c-mlxcpld.c b/drivers/i2c/busses/i2c-mlxcpld.c
+index 2fd717d8dd30..71d7bae2cbca 100644
+--- a/drivers/i2c/busses/i2c-mlxcpld.c
++++ b/drivers/i2c/busses/i2c-mlxcpld.c
+@@ -337,9 +337,9 @@ static int mlxcpld_i2c_wait_for_tc(struct mlxcpld_i2c_priv *priv)
+ if (priv->smbus_block && (val & MLXCPLD_I2C_SMBUS_BLK_BIT)) {
+ mlxcpld_i2c_read_comm(priv, MLXCPLD_LPCI2C_NUM_DAT_REG,
+ &datalen, 1);
+- if (unlikely(datalen > (I2C_SMBUS_BLOCK_MAX + 1))) {
++ if (unlikely(datalen > I2C_SMBUS_BLOCK_MAX)) {
+ dev_err(priv->dev, "Incorrect smbus block read message len\n");
+- return -E2BIG;
++ return -EPROTO;
+ }
+ } else {
+ datalen = priv->xfer.data_len;
+diff --git a/drivers/infiniband/core/counters.c b/drivers/infiniband/core/counters.c
+index 2257d7f7810f..738d1faf4bba 100644
+--- a/drivers/infiniband/core/counters.c
++++ b/drivers/infiniband/core/counters.c
+@@ -202,7 +202,7 @@ static int __rdma_counter_unbind_qp(struct ib_qp *qp)
+ return ret;
+ }
+
+-static void counter_history_stat_update(const struct rdma_counter *counter)
++static void counter_history_stat_update(struct rdma_counter *counter)
+ {
+ struct ib_device *dev = counter->device;
+ struct rdma_port_counter *port_counter;
+@@ -212,6 +212,8 @@ static void counter_history_stat_update(const struct rdma_counter *counter)
+ if (!port_counter->hstats)
+ return;
+
++ rdma_counter_query_stats(counter);
++
+ for (i = 0; i < counter->stats->num_counters; i++)
+ port_counter->hstats->value[i] += counter->stats->value[i];
+ }
+diff --git a/drivers/irqchip/irq-gic-v3-its.c b/drivers/irqchip/irq-gic-v3-its.c
+index 124251b0ccba..b3e16a06c13b 100644
+--- a/drivers/irqchip/irq-gic-v3-its.c
++++ b/drivers/irqchip/irq-gic-v3-its.c
+@@ -3681,10 +3681,10 @@ static void its_wait_vpt_parse_complete(void)
+ if (!gic_rdists->has_vpend_valid_dirty)
+ return;
+
+- WARN_ON_ONCE(readq_relaxed_poll_timeout(vlpi_base + GICR_VPENDBASER,
+- val,
+- !(val & GICR_VPENDBASER_Dirty),
+- 10, 500));
++ WARN_ON_ONCE(readq_relaxed_poll_timeout_atomic(vlpi_base + GICR_VPENDBASER,
++ val,
++ !(val & GICR_VPENDBASER_Dirty),
++ 10, 500));
+ }
+
+ static void its_vpe_schedule(struct its_vpe *vpe)
+diff --git a/drivers/irqchip/irq-gic.c b/drivers/irqchip/irq-gic.c
+index 30ab623343d3..882204d1ef4f 100644
+--- a/drivers/irqchip/irq-gic.c
++++ b/drivers/irqchip/irq-gic.c
+@@ -329,10 +329,8 @@ static int gic_irq_set_vcpu_affinity(struct irq_data *d, void *vcpu)
+ static int gic_set_affinity(struct irq_data *d, const struct cpumask *mask_val,
+ bool force)
+ {
+- void __iomem *reg = gic_dist_base(d) + GIC_DIST_TARGET + (gic_irq(d) & ~3);
+- unsigned int cpu, shift = (gic_irq(d) % 4) * 8;
+- u32 val, mask, bit;
+- unsigned long flags;
++ void __iomem *reg = gic_dist_base(d) + GIC_DIST_TARGET + gic_irq(d);
++ unsigned int cpu;
+
+ if (!force)
+ cpu = cpumask_any_and(mask_val, cpu_online_mask);
+@@ -342,13 +340,7 @@ static int gic_set_affinity(struct irq_data *d, const struct cpumask *mask_val,
+ if (cpu >= NR_GIC_CPU_IF || cpu >= nr_cpu_ids)
+ return -EINVAL;
+
+- gic_lock_irqsave(flags);
+- mask = 0xff << shift;
+- bit = gic_cpu_map[cpu] << shift;
+- val = readl_relaxed(reg) & ~mask;
+- writel_relaxed(val | bit, reg);
+- gic_unlock_irqrestore(flags);
+-
++ writeb_relaxed(gic_cpu_map[cpu], reg);
+ irq_data_update_effective_affinity(d, cpumask_of(cpu));
+
+ return IRQ_SET_MASK_OK_DONE;
+diff --git a/drivers/md/dm-zoned-target.c b/drivers/md/dm-zoned-target.c
+index f4f83d39b3dc..29881fea6acb 100644
+--- a/drivers/md/dm-zoned-target.c
++++ b/drivers/md/dm-zoned-target.c
+@@ -790,7 +790,7 @@ static int dmz_ctr(struct dm_target *ti, unsigned int argc, char **argv)
+ }
+
+ /* Set target (no write same support) */
+- ti->max_io_len = dev->zone_nr_sectors << 9;
++ ti->max_io_len = dev->zone_nr_sectors;
+ ti->num_flush_bios = 1;
+ ti->num_discard_bios = 1;
+ ti->num_write_zeroes_bios = 1;
+diff --git a/drivers/net/ethernet/chelsio/cxgb4/cudbg_lib.c b/drivers/net/ethernet/chelsio/cxgb4/cudbg_lib.c
+index 7b9cd69f9844..d8ab8e366818 100644
+--- a/drivers/net/ethernet/chelsio/cxgb4/cudbg_lib.c
++++ b/drivers/net/ethernet/chelsio/cxgb4/cudbg_lib.c
+@@ -1975,7 +1975,6 @@ int cudbg_collect_dump_context(struct cudbg_init *pdbg_init,
+ u8 mem_type[CTXT_INGRESS + 1] = { 0 };
+ struct cudbg_buffer temp_buff = { 0 };
+ struct cudbg_ch_cntxt *buff;
+- u64 *dst_off, *src_off;
+ u8 *ctx_buf;
+ u8 i, k;
+ int rc;
+@@ -2044,8 +2043,11 @@ int cudbg_collect_dump_context(struct cudbg_init *pdbg_init,
+ }
+
+ for (j = 0; j < max_ctx_qid; j++) {
++ __be64 *dst_off;
++ u64 *src_off;
++
+ src_off = (u64 *)(ctx_buf + j * SGE_CTXT_SIZE);
+- dst_off = (u64 *)buff->data;
++ dst_off = (__be64 *)buff->data;
+
+ /* The data is stored in 64-bit cpu order. Convert it
+ * to big endian before parsing.
+diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_filter.c b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_filter.c
+index 796555255207..7a7f61a8cdf4 100644
+--- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_filter.c
++++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_filter.c
+@@ -165,6 +165,9 @@ static void set_nat_params(struct adapter *adap, struct filter_entry *f,
+ unsigned int tid, bool dip, bool sip, bool dp,
+ bool sp)
+ {
++ u8 *nat_lp = (u8 *)&f->fs.nat_lport;
++ u8 *nat_fp = (u8 *)&f->fs.nat_fport;
++
+ if (dip) {
+ if (f->fs.type) {
+ set_tcb_field(adap, f, tid, TCB_SND_UNA_RAW_W,
+@@ -236,8 +239,9 @@ static void set_nat_params(struct adapter *adap, struct filter_entry *f,
+ }
+
+ set_tcb_field(adap, f, tid, TCB_PDU_HDR_LEN_W, WORD_MASK,
+- (dp ? f->fs.nat_lport : 0) |
+- (sp ? f->fs.nat_fport << 16 : 0), 1);
++ (dp ? (nat_lp[1] | nat_lp[0] << 8) : 0) |
++ (sp ? (nat_fp[1] << 16 | nat_fp[0] << 24) : 0),
++ 1);
+ }
+
+ /* Validate filter spec against configuration done on the card. */
+@@ -909,6 +913,9 @@ int set_filter_wr(struct adapter *adapter, int fidx)
+ fwr->fpm = htons(f->fs.mask.fport);
+
+ if (adapter->params.filter2_wr_support) {
++ u8 *nat_lp = (u8 *)&f->fs.nat_lport;
++ u8 *nat_fp = (u8 *)&f->fs.nat_fport;
++
+ fwr->natmode_to_ulp_type =
+ FW_FILTER2_WR_ULP_TYPE_V(f->fs.nat_mode ?
+ ULP_MODE_TCPDDP :
+@@ -916,8 +923,8 @@ int set_filter_wr(struct adapter *adapter, int fidx)
+ FW_FILTER2_WR_NATMODE_V(f->fs.nat_mode);
+ memcpy(fwr->newlip, f->fs.nat_lip, sizeof(fwr->newlip));
+ memcpy(fwr->newfip, f->fs.nat_fip, sizeof(fwr->newfip));
+- fwr->newlport = htons(f->fs.nat_lport);
+- fwr->newfport = htons(f->fs.nat_fport);
++ fwr->newlport = htons(nat_lp[1] | nat_lp[0] << 8);
++ fwr->newfport = htons(nat_fp[1] | nat_fp[0] << 8);
+ }
+
+ /* Mark the filter as "pending" and ship off the Filter Work Request.
+@@ -1105,16 +1112,16 @@ static bool is_addr_all_mask(u8 *ipmask, int family)
+ struct in_addr *addr;
+
+ addr = (struct in_addr *)ipmask;
+- if (addr->s_addr == 0xffffffff)
++ if (ntohl(addr->s_addr) == 0xffffffff)
+ return true;
+ } else if (family == AF_INET6) {
+ struct in6_addr *addr6;
+
+ addr6 = (struct in6_addr *)ipmask;
+- if (addr6->s6_addr32[0] == 0xffffffff &&
+- addr6->s6_addr32[1] == 0xffffffff &&
+- addr6->s6_addr32[2] == 0xffffffff &&
+- addr6->s6_addr32[3] == 0xffffffff)
++ if (ntohl(addr6->s6_addr32[0]) == 0xffffffff &&
++ ntohl(addr6->s6_addr32[1]) == 0xffffffff &&
++ ntohl(addr6->s6_addr32[2]) == 0xffffffff &&
++ ntohl(addr6->s6_addr32[3]) == 0xffffffff)
+ return true;
+ }
+ return false;
+diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
+index a70018f067aa..e8934c48f09b 100644
+--- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
++++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
+@@ -2604,7 +2604,7 @@ int cxgb4_create_server_filter(const struct net_device *dev, unsigned int stid,
+
+ /* Clear out filter specifications */
+ memset(&f->fs, 0, sizeof(struct ch_filter_specification));
+- f->fs.val.lport = cpu_to_be16(sport);
++ f->fs.val.lport = be16_to_cpu(sport);
+ f->fs.mask.lport = ~0;
+ val = (u8 *)&sip;
+ if ((val[0] | val[1] | val[2] | val[3]) != 0) {
+diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_tc_flower.c b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_tc_flower.c
+index 4a5fa9eba0b6..59b65d4db086 100644
+--- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_tc_flower.c
++++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_tc_flower.c
+@@ -58,10 +58,6 @@ static struct ch_tc_pedit_fields pedits[] = {
+ PEDIT_FIELDS(IP6_, DST_63_32, 4, nat_lip, 4),
+ PEDIT_FIELDS(IP6_, DST_95_64, 4, nat_lip, 8),
+ PEDIT_FIELDS(IP6_, DST_127_96, 4, nat_lip, 12),
+- PEDIT_FIELDS(TCP_, SPORT, 2, nat_fport, 0),
+- PEDIT_FIELDS(TCP_, DPORT, 2, nat_lport, 0),
+- PEDIT_FIELDS(UDP_, SPORT, 2, nat_fport, 0),
+- PEDIT_FIELDS(UDP_, DPORT, 2, nat_lport, 0),
+ };
+
+ static struct ch_tc_flower_entry *allocate_flower_entry(void)
+@@ -156,14 +152,14 @@ static void cxgb4_process_flow_match(struct net_device *dev,
+ struct flow_match_ports match;
+
+ flow_rule_match_ports(rule, &match);
+- fs->val.lport = cpu_to_be16(match.key->dst);
+- fs->mask.lport = cpu_to_be16(match.mask->dst);
+- fs->val.fport = cpu_to_be16(match.key->src);
+- fs->mask.fport = cpu_to_be16(match.mask->src);
++ fs->val.lport = be16_to_cpu(match.key->dst);
++ fs->mask.lport = be16_to_cpu(match.mask->dst);
++ fs->val.fport = be16_to_cpu(match.key->src);
++ fs->mask.fport = be16_to_cpu(match.mask->src);
+
+ /* also initialize nat_lport/fport to same values */
+- fs->nat_lport = cpu_to_be16(match.key->dst);
+- fs->nat_fport = cpu_to_be16(match.key->src);
++ fs->nat_lport = fs->val.lport;
++ fs->nat_fport = fs->val.fport;
+ }
+
+ if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_IP)) {
+@@ -354,12 +350,9 @@ static void process_pedit_field(struct ch_filter_specification *fs, u32 val,
+ switch (offset) {
+ case PEDIT_TCP_SPORT_DPORT:
+ if (~mask & PEDIT_TCP_UDP_SPORT_MASK)
+- offload_pedit(fs, cpu_to_be32(val) >> 16,
+- cpu_to_be32(mask) >> 16,
+- TCP_SPORT);
++ fs->nat_fport = val;
+ else
+- offload_pedit(fs, cpu_to_be32(val),
+- cpu_to_be32(mask), TCP_DPORT);
++ fs->nat_lport = val >> 16;
+ }
+ fs->nat_mode = NAT_MODE_ALL;
+ break;
+@@ -367,12 +360,9 @@ static void process_pedit_field(struct ch_filter_specification *fs, u32 val,
+ switch (offset) {
+ case PEDIT_UDP_SPORT_DPORT:
+ if (~mask & PEDIT_TCP_UDP_SPORT_MASK)
+- offload_pedit(fs, cpu_to_be32(val) >> 16,
+- cpu_to_be32(mask) >> 16,
+- UDP_SPORT);
++ fs->nat_fport = val;
+ else
+- offload_pedit(fs, cpu_to_be32(val),
+- cpu_to_be32(mask), UDP_DPORT);
++ fs->nat_lport = val >> 16;
+ }
+ fs->nat_mode = NAT_MODE_ALL;
+ }
+diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_tc_u32.c b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_tc_u32.c
+index 3f3c11e54d97..dede02505ceb 100644
+--- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_tc_u32.c
++++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_tc_u32.c
+@@ -48,7 +48,7 @@ static int fill_match_fields(struct adapter *adap,
+ bool next_header)
+ {
+ unsigned int i, j;
+- u32 val, mask;
++ __be32 val, mask;
+ int off, err;
+ bool found;
+
+@@ -228,7 +228,7 @@ int cxgb4_config_knode(struct net_device *dev, struct tc_cls_u32_offload *cls)
+ const struct cxgb4_next_header *next;
+ bool found = false;
+ unsigned int i, j;
+- u32 val, mask;
++ __be32 val, mask;
+ int off;
+
+ if (t->table[link_uhtid - 1].link_handle) {
+@@ -242,10 +242,10 @@ int cxgb4_config_knode(struct net_device *dev, struct tc_cls_u32_offload *cls)
+
+ /* Try to find matches that allow jumps to next header. */
+ for (i = 0; next[i].jump; i++) {
+- if (next[i].offoff != cls->knode.sel->offoff ||
+- next[i].shift != cls->knode.sel->offshift ||
+- next[i].mask != cls->knode.sel->offmask ||
+- next[i].offset != cls->knode.sel->off)
++ if (next[i].sel.offoff != cls->knode.sel->offoff ||
++ next[i].sel.offshift != cls->knode.sel->offshift ||
++ next[i].sel.offmask != cls->knode.sel->offmask ||
++ next[i].sel.off != cls->knode.sel->off)
+ continue;
+
+ /* Found a possible candidate. Find a key that
+@@ -257,9 +257,9 @@ int cxgb4_config_knode(struct net_device *dev, struct tc_cls_u32_offload *cls)
+ val = cls->knode.sel->keys[j].val;
+ mask = cls->knode.sel->keys[j].mask;
+
+- if (next[i].match_off == off &&
+- next[i].match_val == val &&
+- next[i].match_mask == mask) {
++ if (next[i].key.off == off &&
++ next[i].key.val == val &&
++ next[i].key.mask == mask) {
+ found = true;
+ break;
+ }
+diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_tc_u32_parse.h b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_tc_u32_parse.h
+index 125868c6770a..f59dd4b2ae6f 100644
+--- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_tc_u32_parse.h
++++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_tc_u32_parse.h
+@@ -38,12 +38,12 @@
+ struct cxgb4_match_field {
+ int off; /* Offset from the beginning of the header to match */
+ /* Fill the value/mask pair in the spec if matched */
+- int (*val)(struct ch_filter_specification *f, u32 val, u32 mask);
++ int (*val)(struct ch_filter_specification *f, __be32 val, __be32 mask);
+ };
+
+ /* IPv4 match fields */
+ static inline int cxgb4_fill_ipv4_tos(struct ch_filter_specification *f,
+- u32 val, u32 mask)
++ __be32 val, __be32 mask)
+ {
+ f->val.tos = (ntohl(val) >> 16) & 0x000000FF;
+ f->mask.tos = (ntohl(mask) >> 16) & 0x000000FF;
+@@ -52,7 +52,7 @@ static inline int cxgb4_fill_ipv4_tos(struct ch_filter_specification *f,
+ }
+
+ static inline int cxgb4_fill_ipv4_frag(struct ch_filter_specification *f,
+- u32 val, u32 mask)
++ __be32 val, __be32 mask)
+ {
+ u32 mask_val;
+ u8 frag_val;
+@@ -74,7 +74,7 @@ static inline int cxgb4_fill_ipv4_frag(struct ch_filter_specification *f,
+ }
+
+ static inline int cxgb4_fill_ipv4_proto(struct ch_filter_specification *f,
+- u32 val, u32 mask)
++ __be32 val, __be32 mask)
+ {
+ f->val.proto = (ntohl(val) >> 16) & 0x000000FF;
+ f->mask.proto = (ntohl(mask) >> 16) & 0x000000FF;
+@@ -83,7 +83,7 @@ static inline int cxgb4_fill_ipv4_proto(struct ch_filter_specification *f,
+ }
+
+ static inline int cxgb4_fill_ipv4_src_ip(struct ch_filter_specification *f,
+- u32 val, u32 mask)
++ __be32 val, __be32 mask)
+ {
+ memcpy(&f->val.fip[0], &val, sizeof(u32));
+ memcpy(&f->mask.fip[0], &mask, sizeof(u32));
+@@ -92,7 +92,7 @@ static inline int cxgb4_fill_ipv4_src_ip(struct ch_filter_specification *f,
+ }
+
+ static inline int cxgb4_fill_ipv4_dst_ip(struct ch_filter_specification *f,
+- u32 val, u32 mask)
++ __be32 val, __be32 mask)
+ {
+ memcpy(&f->val.lip[0], &val, sizeof(u32));
+ memcpy(&f->mask.lip[0], &mask, sizeof(u32));
+@@ -111,7 +111,7 @@ static const struct cxgb4_match_field cxgb4_ipv4_fields[] = {
+
+ /* IPv6 match fields */
+ static inline int cxgb4_fill_ipv6_tos(struct ch_filter_specification *f,
+- u32 val, u32 mask)
++ __be32 val, __be32 mask)
+ {
+ f->val.tos = (ntohl(val) >> 20) & 0x000000FF;
+ f->mask.tos = (ntohl(mask) >> 20) & 0x000000FF;
+@@ -120,7 +120,7 @@ static inline int cxgb4_fill_ipv6_tos(struct ch_filter_specification *f,
+ }
+
+ static inline int cxgb4_fill_ipv6_proto(struct ch_filter_specification *f,
+- u32 val, u32 mask)
++ __be32 val, __be32 mask)
+ {
+ f->val.proto = (ntohl(val) >> 8) & 0x000000FF;
+ f->mask.proto = (ntohl(mask) >> 8) & 0x000000FF;
+@@ -129,7 +129,7 @@ static inline int cxgb4_fill_ipv6_proto(struct ch_filter_specification *f,
+ }
+
+ static inline int cxgb4_fill_ipv6_src_ip0(struct ch_filter_specification *f,
+- u32 val, u32 mask)
++ __be32 val, __be32 mask)
+ {
+ memcpy(&f->val.fip[0], &val, sizeof(u32));
+ memcpy(&f->mask.fip[0], &mask, sizeof(u32));
+@@ -138,7 +138,7 @@ static inline int cxgb4_fill_ipv6_src_ip0(struct ch_filter_specification *f,
+ }
+
+ static inline int cxgb4_fill_ipv6_src_ip1(struct ch_filter_specification *f,
+- u32 val, u32 mask)
++ __be32 val, __be32 mask)
+ {
+ memcpy(&f->val.fip[4], &val, sizeof(u32));
+ memcpy(&f->mask.fip[4], &mask, sizeof(u32));
+@@ -147,7 +147,7 @@ static inline int cxgb4_fill_ipv6_src_ip1(struct ch_filter_specification *f,
+ }
+
+ static inline int cxgb4_fill_ipv6_src_ip2(struct ch_filter_specification *f,
+- u32 val, u32 mask)
++ __be32 val, __be32 mask)
+ {
+ memcpy(&f->val.fip[8], &val, sizeof(u32));
+ memcpy(&f->mask.fip[8], &mask, sizeof(u32));
+@@ -156,7 +156,7 @@ static inline int cxgb4_fill_ipv6_src_ip2(struct ch_filter_specification *f,
+ }
+
+ static inline int cxgb4_fill_ipv6_src_ip3(struct ch_filter_specification *f,
+- u32 val, u32 mask)
++ __be32 val, __be32 mask)
+ {
+ memcpy(&f->val.fip[12], &val, sizeof(u32));
+ memcpy(&f->mask.fip[12], &mask, sizeof(u32));
+@@ -165,7 +165,7 @@ static inline int cxgb4_fill_ipv6_src_ip3(struct ch_filter_specification *f,
+ }
+
+ static inline int cxgb4_fill_ipv6_dst_ip0(struct ch_filter_specification *f,
+- u32 val, u32 mask)
++ __be32 val, __be32 mask)
+ {
+ memcpy(&f->val.lip[0], &val, sizeof(u32));
+ memcpy(&f->mask.lip[0], &mask, sizeof(u32));
+@@ -174,7 +174,7 @@ static inline int cxgb4_fill_ipv6_dst_ip0(struct ch_filter_specification *f,
+ }
+
+ static inline int cxgb4_fill_ipv6_dst_ip1(struct ch_filter_specification *f,
+- u32 val, u32 mask)
++ __be32 val, __be32 mask)
+ {
+ memcpy(&f->val.lip[4], &val, sizeof(u32));
+ memcpy(&f->mask.lip[4], &mask, sizeof(u32));
+@@ -183,7 +183,7 @@ static inline int cxgb4_fill_ipv6_dst_ip1(struct ch_filter_specification *f,
+ }
+
+ static inline int cxgb4_fill_ipv6_dst_ip2(struct ch_filter_specification *f,
+- u32 val, u32 mask)
++ __be32 val, __be32 mask)
+ {
+ memcpy(&f->val.lip[8], &val, sizeof(u32));
+ memcpy(&f->mask.lip[8], &mask, sizeof(u32));
+@@ -192,7 +192,7 @@ static inline int cxgb4_fill_ipv6_dst_ip2(struct ch_filter_specification *f,
+ }
+
+ static inline int cxgb4_fill_ipv6_dst_ip3(struct ch_filter_specification *f,
+- u32 val, u32 mask)
++ __be32 val, __be32 mask)
+ {
+ memcpy(&f->val.lip[12], &val, sizeof(u32));
+ memcpy(&f->mask.lip[12], &mask, sizeof(u32));
+@@ -216,7 +216,7 @@ static const struct cxgb4_match_field cxgb4_ipv6_fields[] = {
+
+ /* TCP/UDP match */
+ static inline int cxgb4_fill_l4_ports(struct ch_filter_specification *f,
+- u32 val, u32 mask)
++ __be32 val, __be32 mask)
+ {
+ f->val.fport = ntohl(val) >> 16;
+ f->mask.fport = ntohl(mask) >> 16;
+@@ -237,19 +237,13 @@ static const struct cxgb4_match_field cxgb4_udp_fields[] = {
+ };
+
+ struct cxgb4_next_header {
+- unsigned int offset; /* Offset to next header */
+- /* offset, shift, and mask added to offset above
++ /* Offset, shift, and mask added to beginning of the header
+ * to get to next header. Useful when using a header
+ * field's value to jump to next header such as IHL field
+ * in IPv4 header.
+ */
+- unsigned int offoff;
+- u32 shift;
+- u32 mask;
+- /* match criteria to make this jump */
+- unsigned int match_off;
+- u32 match_val;
+- u32 match_mask;
++ struct tc_u32_sel sel;
++ struct tc_u32_key key;
+ /* location of jump to make */
+ const struct cxgb4_match_field *jump;
+ };
+@@ -258,26 +252,74 @@ struct cxgb4_next_header {
+ * IPv4 header.
+ */
+ static const struct cxgb4_next_header cxgb4_ipv4_jumps[] = {
+- { .offset = 0, .offoff = 0, .shift = 6, .mask = 0xF,
+- .match_off = 8, .match_val = 0x600, .match_mask = 0xFF00,
+- .jump = cxgb4_tcp_fields },
+- { .offset = 0, .offoff = 0, .shift = 6, .mask = 0xF,
+- .match_off = 8, .match_val = 0x1100, .match_mask = 0xFF00,
+- .jump = cxgb4_udp_fields },
+- { .jump = NULL }
++ {
++ /* TCP Jump */
++ .sel = {
++ .off = 0,
++ .offoff = 0,
++ .offshift = 6,
++ .offmask = cpu_to_be16(0x0f00),
++ },
++ .key = {
++ .off = 8,
++ .val = cpu_to_be32(0x00060000),
++ .mask = cpu_to_be32(0x00ff0000),
++ },
++ .jump = cxgb4_tcp_fields,
++ },
++ {
++ /* UDP Jump */
++ .sel = {
++ .off = 0,
++ .offoff = 0,
++ .offshift = 6,
++ .offmask = cpu_to_be16(0x0f00),
++ },
++ .key = {
++ .off = 8,
++ .val = cpu_to_be32(0x00110000),
++ .mask = cpu_to_be32(0x00ff0000),
++ },
++ .jump = cxgb4_udp_fields,
++ },
++ { .jump = NULL },
+ };
+
+ /* Accept a rule with a jump directly past the 40 Bytes of IPv6 fixed header
+ * to get to transport layer header.
+ */
+ static const struct cxgb4_next_header cxgb4_ipv6_jumps[] = {
+- { .offset = 0x28, .offoff = 0, .shift = 0, .mask = 0,
+- .match_off = 4, .match_val = 0x60000, .match_mask = 0xFF0000,
+- .jump = cxgb4_tcp_fields },
+- { .offset = 0x28, .offoff = 0, .shift = 0, .mask = 0,
+- .match_off = 4, .match_val = 0x110000, .match_mask = 0xFF0000,
+- .jump = cxgb4_udp_fields },
+- { .jump = NULL }
++ {
++ /* TCP Jump */
++ .sel = {
++ .off = 40,
++ .offoff = 0,
++ .offshift = 0,
++ .offmask = 0,
++ },
++ .key = {
++ .off = 4,
++ .val = cpu_to_be32(0x00000600),
++ .mask = cpu_to_be32(0x0000ff00),
++ },
++ .jump = cxgb4_tcp_fields,
++ },
++ {
++ /* UDP Jump */
++ .sel = {
++ .off = 40,
++ .offoff = 0,
++ .offshift = 0,
++ .offmask = 0,
++ },
++ .key = {
++ .off = 4,
++ .val = cpu_to_be32(0x00001100),
++ .mask = cpu_to_be32(0x0000ff00),
++ },
++ .jump = cxgb4_udp_fields,
++ },
++ { .jump = NULL },
+ };
+
+ struct cxgb4_link {
+diff --git a/drivers/net/ethernet/chelsio/cxgb4/sge.c b/drivers/net/ethernet/chelsio/cxgb4/sge.c
+index db8106d9d6ed..28ce9856a078 100644
+--- a/drivers/net/ethernet/chelsio/cxgb4/sge.c
++++ b/drivers/net/ethernet/chelsio/cxgb4/sge.c
+@@ -3300,7 +3300,7 @@ static noinline int t4_systim_to_hwstamp(struct adapter *adapter,
+
+ hwtstamps = skb_hwtstamps(skb);
+ memset(hwtstamps, 0, sizeof(*hwtstamps));
+- hwtstamps->hwtstamp = ns_to_ktime(be64_to_cpu(*((u64 *)data)));
++ hwtstamps->hwtstamp = ns_to_ktime(get_unaligned_be64(data));
+
+ return RX_PTP_PKT_SUC;
+ }
+diff --git a/drivers/net/ethernet/freescale/enetc/enetc.c b/drivers/net/ethernet/freescale/enetc/enetc.c
+index 4486a0db8ef0..a7e4274d3f40 100644
+--- a/drivers/net/ethernet/freescale/enetc/enetc.c
++++ b/drivers/net/ethernet/freescale/enetc/enetc.c
+@@ -756,6 +756,9 @@ void enetc_get_si_caps(struct enetc_si *si)
+
+ if (val & ENETC_SIPCAPR0_QBV)
+ si->hw_features |= ENETC_SI_F_QBV;
++
++ if (val & ENETC_SIPCAPR0_PSFP)
++ si->hw_features |= ENETC_SI_F_PSFP;
+ }
+
+ static int enetc_dma_alloc_bdr(struct enetc_bdr *r, size_t bd_size)
+@@ -1567,6 +1570,41 @@ static int enetc_set_rss(struct net_device *ndev, int en)
+ return 0;
+ }
+
++static int enetc_set_psfp(struct net_device *ndev, int en)
++{
++ struct enetc_ndev_priv *priv = netdev_priv(ndev);
++
++ if (en) {
++ priv->active_offloads |= ENETC_F_QCI;
++ enetc_get_max_cap(priv);
++ enetc_psfp_enable(&priv->si->hw);
++ } else {
++ priv->active_offloads &= ~ENETC_F_QCI;
++ memset(&priv->psfp_cap, 0, sizeof(struct psfp_cap));
++ enetc_psfp_disable(&priv->si->hw);
++ }
++
++ return 0;
++}
++
++static void enetc_enable_rxvlan(struct net_device *ndev, bool en)
++{
++ struct enetc_ndev_priv *priv = netdev_priv(ndev);
++ int i;
++
++ for (i = 0; i < priv->num_rx_rings; i++)
++ enetc_bdr_enable_rxvlan(&priv->si->hw, i, en);
++}
++
++static void enetc_enable_txvlan(struct net_device *ndev, bool en)
++{
++ struct enetc_ndev_priv *priv = netdev_priv(ndev);
++ int i;
++
++ for (i = 0; i < priv->num_tx_rings; i++)
++ enetc_bdr_enable_txvlan(&priv->si->hw, i, en);
++}
++
+ int enetc_set_features(struct net_device *ndev,
+ netdev_features_t features)
+ {
+@@ -1575,6 +1613,17 @@ int enetc_set_features(struct net_device *ndev,
+ if (changed & NETIF_F_RXHASH)
+ enetc_set_rss(ndev, !!(features & NETIF_F_RXHASH));
+
++ if (changed & NETIF_F_HW_VLAN_CTAG_RX)
++ enetc_enable_rxvlan(ndev,
++ !!(features & NETIF_F_HW_VLAN_CTAG_RX));
++
++ if (changed & NETIF_F_HW_VLAN_CTAG_TX)
++ enetc_enable_txvlan(ndev,
++ !!(features & NETIF_F_HW_VLAN_CTAG_TX));
++
++ if (changed & NETIF_F_HW_TC)
++ enetc_set_psfp(ndev, !!(features & NETIF_F_HW_TC));
++
+ return 0;
+ }
+
+diff --git a/drivers/net/ethernet/freescale/enetc/enetc.h b/drivers/net/ethernet/freescale/enetc/enetc.h
+index 56c43f35b633..2cfe877c3778 100644
+--- a/drivers/net/ethernet/freescale/enetc/enetc.h
++++ b/drivers/net/ethernet/freescale/enetc/enetc.h
+@@ -151,6 +151,7 @@ enum enetc_errata {
+ };
+
+ #define ENETC_SI_F_QBV BIT(0)
++#define ENETC_SI_F_PSFP BIT(1)
+
+ /* PCI IEP device data */
+ struct enetc_si {
+@@ -203,12 +204,20 @@ struct enetc_cls_rule {
+ };
+
+ #define ENETC_MAX_BDR_INT 2 /* fixed to max # of available cpus */
++struct psfp_cap {
++ u32 max_streamid;
++ u32 max_psfp_filter;
++ u32 max_psfp_gate;
++ u32 max_psfp_gatelist;
++ u32 max_psfp_meter;
++};
+
+ /* TODO: more hardware offloads */
+ enum enetc_active_offloads {
+ ENETC_F_RX_TSTAMP = BIT(0),
+ ENETC_F_TX_TSTAMP = BIT(1),
+ ENETC_F_QBV = BIT(2),
++ ENETC_F_QCI = BIT(3),
+ };
+
+ struct enetc_ndev_priv {
+@@ -231,6 +240,8 @@ struct enetc_ndev_priv {
+
+ struct enetc_cls_rule *cls_rules;
+
++ struct psfp_cap psfp_cap;
++
+ struct device_node *phy_node;
+ phy_interface_t if_mode;
+ };
+@@ -289,9 +300,46 @@ int enetc_setup_tc_taprio(struct net_device *ndev, void *type_data);
+ void enetc_sched_speed_set(struct net_device *ndev);
+ int enetc_setup_tc_cbs(struct net_device *ndev, void *type_data);
+ int enetc_setup_tc_txtime(struct net_device *ndev, void *type_data);
++
++static inline void enetc_get_max_cap(struct enetc_ndev_priv *priv)
++{
++ u32 reg;
++
++ reg = enetc_port_rd(&priv->si->hw, ENETC_PSIDCAPR);
++ priv->psfp_cap.max_streamid = reg & ENETC_PSIDCAPR_MSK;
++ /* Port stream filter capability */
++ reg = enetc_port_rd(&priv->si->hw, ENETC_PSFCAPR);
++ priv->psfp_cap.max_psfp_filter = reg & ENETC_PSFCAPR_MSK;
++ /* Port stream gate capability */
++ reg = enetc_port_rd(&priv->si->hw, ENETC_PSGCAPR);
++ priv->psfp_cap.max_psfp_gate = (reg & ENETC_PSGCAPR_SGIT_MSK);
++ priv->psfp_cap.max_psfp_gatelist = (reg & ENETC_PSGCAPR_GCL_MSK) >> 16;
++ /* Port flow meter capability */
++ reg = enetc_port_rd(&priv->si->hw, ENETC_PFMCAPR);
++ priv->psfp_cap.max_psfp_meter = reg & ENETC_PFMCAPR_MSK;
++}
++
++static inline void enetc_psfp_enable(struct enetc_hw *hw)
++{
++ enetc_wr(hw, ENETC_PPSFPMR, enetc_rd(hw, ENETC_PPSFPMR) |
++ ENETC_PPSFPMR_PSFPEN | ENETC_PPSFPMR_VS |
++ ENETC_PPSFPMR_PVC | ENETC_PPSFPMR_PVZC);
++}
++
++static inline void enetc_psfp_disable(struct enetc_hw *hw)
++{
++ enetc_wr(hw, ENETC_PPSFPMR, enetc_rd(hw, ENETC_PPSFPMR) &
++ ~ENETC_PPSFPMR_PSFPEN & ~ENETC_PPSFPMR_VS &
++ ~ENETC_PPSFPMR_PVC & ~ENETC_PPSFPMR_PVZC);
++}
+ #else
+ #define enetc_setup_tc_taprio(ndev, type_data) -EOPNOTSUPP
+ #define enetc_sched_speed_set(ndev) (void)0
+ #define enetc_setup_tc_cbs(ndev, type_data) -EOPNOTSUPP
+ #define enetc_setup_tc_txtime(ndev, type_data) -EOPNOTSUPP
++#define enetc_get_max_cap(p) \
++ memset(&((p)->psfp_cap), 0, sizeof(struct psfp_cap))
++
++#define enetc_psfp_enable(hw) (void)0
++#define enetc_psfp_disable(hw) (void)0
+ #endif
+diff --git a/drivers/net/ethernet/freescale/enetc/enetc_hw.h b/drivers/net/ethernet/freescale/enetc/enetc_hw.h
+index 2a6523136947..02efda266c46 100644
+--- a/drivers/net/ethernet/freescale/enetc/enetc_hw.h
++++ b/drivers/net/ethernet/freescale/enetc/enetc_hw.h
+@@ -19,6 +19,7 @@
+ #define ENETC_SICTR1 0x1c
+ #define ENETC_SIPCAPR0 0x20
+ #define ENETC_SIPCAPR0_QBV BIT(4)
++#define ENETC_SIPCAPR0_PSFP BIT(9)
+ #define ENETC_SIPCAPR0_RSS BIT(8)
+ #define ENETC_SIPCAPR1 0x24
+ #define ENETC_SITGTGR 0x30
+@@ -228,6 +229,15 @@ enum enetc_bdr_type {TX, RX};
+ #define ENETC_PM0_IFM_RLP (BIT(5) | BIT(11))
+ #define ENETC_PM0_IFM_RGAUTO (BIT(15) | ENETC_PMO_IFM_RG | BIT(1))
+ #define ENETC_PM0_IFM_XGMII BIT(12)
++#define ENETC_PSIDCAPR 0x1b08
++#define ENETC_PSIDCAPR_MSK GENMASK(15, 0)
++#define ENETC_PSFCAPR 0x1b18
++#define ENETC_PSFCAPR_MSK GENMASK(15, 0)
++#define ENETC_PSGCAPR 0x1b28
++#define ENETC_PSGCAPR_GCL_MSK GENMASK(18, 16)
++#define ENETC_PSGCAPR_SGIT_MSK GENMASK(15, 0)
++#define ENETC_PFMCAPR 0x1b38
++#define ENETC_PFMCAPR_MSK GENMASK(15, 0)
+
+ /* MAC counters */
+ #define ENETC_PM0_REOCT 0x8100
+@@ -521,22 +531,22 @@ struct enetc_msg_cmd_header {
+
+ /* Common H/W utility functions */
+
+-static inline void enetc_enable_rxvlan(struct enetc_hw *hw, int si_idx,
+- bool en)
++static inline void enetc_bdr_enable_rxvlan(struct enetc_hw *hw, int idx,
++ bool en)
+ {
+- u32 val = enetc_rxbdr_rd(hw, si_idx, ENETC_RBMR);
++ u32 val = enetc_rxbdr_rd(hw, idx, ENETC_RBMR);
+
+ val = (val & ~ENETC_RBMR_VTE) | (en ? ENETC_RBMR_VTE : 0);
+- enetc_rxbdr_wr(hw, si_idx, ENETC_RBMR, val);
++ enetc_rxbdr_wr(hw, idx, ENETC_RBMR, val);
+ }
+
+-static inline void enetc_enable_txvlan(struct enetc_hw *hw, int si_idx,
+- bool en)
++static inline void enetc_bdr_enable_txvlan(struct enetc_hw *hw, int idx,
++ bool en)
+ {
+- u32 val = enetc_txbdr_rd(hw, si_idx, ENETC_TBMR);
++ u32 val = enetc_txbdr_rd(hw, idx, ENETC_TBMR);
+
+ val = (val & ~ENETC_TBMR_VIH) | (en ? ENETC_TBMR_VIH : 0);
+- enetc_txbdr_wr(hw, si_idx, ENETC_TBMR, val);
++ enetc_txbdr_wr(hw, idx, ENETC_TBMR, val);
+ }
+
+ static inline void enetc_set_bdr_prio(struct enetc_hw *hw, int bdr_idx,
+@@ -621,3 +631,10 @@ struct enetc_cbd {
+ /* Port time specific departure */
+ #define ENETC_PTCTSDR(n) (0x1210 + 4 * (n))
+ #define ENETC_TSDE BIT(31)
++
++/* PSFP setting */
++#define ENETC_PPSFPMR 0x11b00
++#define ENETC_PPSFPMR_PSFPEN BIT(0)
++#define ENETC_PPSFPMR_VS BIT(1)
++#define ENETC_PPSFPMR_PVC BIT(2)
++#define ENETC_PPSFPMR_PVZC BIT(3)
+diff --git a/drivers/net/ethernet/freescale/enetc/enetc_pf.c b/drivers/net/ethernet/freescale/enetc/enetc_pf.c
+index 85e2b741df41..438648a06f2a 100644
+--- a/drivers/net/ethernet/freescale/enetc/enetc_pf.c
++++ b/drivers/net/ethernet/freescale/enetc/enetc_pf.c
+@@ -667,15 +667,6 @@ static int enetc_pf_set_features(struct net_device *ndev,
+ netdev_features_t features)
+ {
+ netdev_features_t changed = ndev->features ^ features;
+- struct enetc_ndev_priv *priv = netdev_priv(ndev);
+-
+- if (changed & NETIF_F_HW_VLAN_CTAG_RX)
+- enetc_enable_rxvlan(&priv->si->hw, 0,
+- !!(features & NETIF_F_HW_VLAN_CTAG_RX));
+-
+- if (changed & NETIF_F_HW_VLAN_CTAG_TX)
+- enetc_enable_txvlan(&priv->si->hw, 0,
+- !!(features & NETIF_F_HW_VLAN_CTAG_TX));
+
+ if (changed & NETIF_F_LOOPBACK)
+ enetc_set_loopback(ndev, !!(features & NETIF_F_LOOPBACK));
+@@ -739,6 +730,14 @@ static void enetc_pf_netdev_setup(struct enetc_si *si, struct net_device *ndev,
+ if (si->hw_features & ENETC_SI_F_QBV)
+ priv->active_offloads |= ENETC_F_QBV;
+
++ if (si->hw_features & ENETC_SI_F_PSFP) {
++ priv->active_offloads |= ENETC_F_QCI;
++ ndev->features |= NETIF_F_HW_TC;
++ ndev->hw_features |= NETIF_F_HW_TC;
++ enetc_get_max_cap(priv);
++ enetc_psfp_enable(&si->hw);
++ }
++
+ /* pick up primary MAC address from SI */
+ enetc_get_primary_mac_addr(&si->hw, ndev->dev_addr);
+ }
+diff --git a/drivers/net/usb/smsc95xx.c b/drivers/net/usb/smsc95xx.c
+index 355be77f4241..3cf4dc3433f9 100644
+--- a/drivers/net/usb/smsc95xx.c
++++ b/drivers/net/usb/smsc95xx.c
+@@ -1324,7 +1324,7 @@ static void smsc95xx_unbind(struct usbnet *dev, struct usb_interface *intf)
+ struct smsc95xx_priv *pdata = (struct smsc95xx_priv *)(dev->data[0]);
+
+ if (pdata) {
+- cancel_delayed_work(&pdata->carrier_check);
++ cancel_delayed_work_sync(&pdata->carrier_check);
+ netif_dbg(dev, ifdown, dev->net, "free pdata\n");
+ kfree(pdata);
+ pdata = NULL;
+diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
+index 7b4cbe2c6954..71d63ed62071 100644
+--- a/drivers/nvme/host/core.c
++++ b/drivers/nvme/host/core.c
+@@ -1120,10 +1120,16 @@ static int nvme_identify_ns_descs(struct nvme_ctrl *ctrl, unsigned nsid,
+ dev_warn(ctrl->device,
+ "Identify Descriptors failed (%d)\n", status);
+ /*
+- * Don't treat an error as fatal, as we potentially already
+- * have a NGUID or EUI-64.
++ * Don't treat non-retryable errors as fatal, as we potentially
++ * already have a NGUID or EUI-64. If we failed with DNR set,
++ * we want to silently ignore the error as we can still
++ * identify the device, but if the status has DNR set, we want
++ * to propagate the error back specifically for the disk
++ * revalidation flow to make sure we don't abandon the
++ * device just because of a temporal retry-able error (such
++ * as path of transport errors).
+ */
+- if (status > 0 && !(status & NVME_SC_DNR))
++ if (status > 0 && (status & NVME_SC_DNR))
+ status = 0;
+ goto free_data;
+ }
+@@ -1910,14 +1916,6 @@ static void __nvme_revalidate_disk(struct gendisk *disk, struct nvme_id_ns *id)
+ if (ns->head->disk) {
+ nvme_update_disk_info(ns->head->disk, ns, id);
+ blk_queue_stack_limits(ns->head->disk->queue, ns->queue);
+- if (bdi_cap_stable_pages_required(ns->queue->backing_dev_info)) {
+- struct backing_dev_info *info =
+- ns->head->disk->queue->backing_dev_info;
+-
+- info->capabilities |= BDI_CAP_STABLE_WRITES;
+- }
+-
+- revalidate_disk(ns->head->disk);
+ }
+ #endif
+ }
+diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
+index 17f172cf456a..36db7d2e6a89 100644
+--- a/drivers/nvme/host/multipath.c
++++ b/drivers/nvme/host/multipath.c
+@@ -3,6 +3,7 @@
+ * Copyright (c) 2017-2018 Christoph Hellwig.
+ */
+
++#include <linux/backing-dev.h>
+ #include <linux/moduleparam.h>
+ #include <trace/events/block.h>
+ #include "nvme.h"
+@@ -412,11 +413,11 @@ static void nvme_mpath_set_live(struct nvme_ns *ns)
+ if (!head->disk)
+ return;
+
+- mutex_lock(&head->lock);
+- if (!(head->disk->flags & GENHD_FL_UP))
++ if (!test_and_set_bit(NVME_NSHEAD_DISK_LIVE, &head->flags))
+ device_add_disk(&head->subsys->dev, head->disk,
+ nvme_ns_id_attr_groups);
+
++ mutex_lock(&head->lock);
+ if (nvme_path_is_optimized(ns)) {
+ int node, srcu_idx;
+
+@@ -638,30 +639,46 @@ static ssize_t ana_state_show(struct device *dev, struct device_attribute *attr,
+ }
+ DEVICE_ATTR_RO(ana_state);
+
+-static int nvme_set_ns_ana_state(struct nvme_ctrl *ctrl,
++static int nvme_lookup_ana_group_desc(struct nvme_ctrl *ctrl,
+ struct nvme_ana_group_desc *desc, void *data)
+ {
+- struct nvme_ns *ns = data;
++ struct nvme_ana_group_desc *dst = data;
+
+- if (ns->ana_grpid == le32_to_cpu(desc->grpid)) {
+- nvme_update_ns_ana_state(desc, ns);
+- return -ENXIO; /* just break out of the loop */
+- }
++ if (desc->grpid != dst->grpid)
++ return 0;
+
+- return 0;
++ *dst = *desc;
++ return -ENXIO; /* just break out of the loop */
+ }
+
+ void nvme_mpath_add_disk(struct nvme_ns *ns, struct nvme_id_ns *id)
+ {
+ if (nvme_ctrl_use_ana(ns->ctrl)) {
++ struct nvme_ana_group_desc desc = {
++ .grpid = id->anagrpid,
++ .state = 0,
++ };
++
+ mutex_lock(&ns->ctrl->ana_lock);
+ ns->ana_grpid = le32_to_cpu(id->anagrpid);
+- nvme_parse_ana_log(ns->ctrl, ns, nvme_set_ns_ana_state);
++ nvme_parse_ana_log(ns->ctrl, &desc, nvme_lookup_ana_group_desc);
+ mutex_unlock(&ns->ctrl->ana_lock);
++ if (desc.state) {
++ /* found the group desc: update */
++ nvme_update_ns_ana_state(&desc, ns);
++ }
+ } else {
+ ns->ana_state = NVME_ANA_OPTIMIZED;
+ nvme_mpath_set_live(ns);
+ }
++
++ if (bdi_cap_stable_pages_required(ns->queue->backing_dev_info)) {
++ struct gendisk *disk = ns->head->disk;
++
++ if (disk)
++ disk->queue->backing_dev_info->capabilities |=
++ BDI_CAP_STABLE_WRITES;
++ }
+ }
+
+ void nvme_mpath_remove_disk(struct nvme_ns_head *head)
+@@ -675,6 +692,14 @@ void nvme_mpath_remove_disk(struct nvme_ns_head *head)
+ kblockd_schedule_work(&head->requeue_work);
+ flush_work(&head->requeue_work);
+ blk_cleanup_queue(head->disk->queue);
++ if (!test_bit(NVME_NSHEAD_DISK_LIVE, &head->flags)) {
++ /*
++ * if device_add_disk wasn't called, prevent
++ * disk release to put a bogus reference on the
++ * request queue
++ */
++ head->disk->queue = NULL;
++ }
+ put_disk(head->disk);
+ }
+
+diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
+index 2e04a36296d9..719342600be6 100644
+--- a/drivers/nvme/host/nvme.h
++++ b/drivers/nvme/host/nvme.h
+@@ -359,6 +359,8 @@ struct nvme_ns_head {
+ spinlock_t requeue_lock;
+ struct work_struct requeue_work;
+ struct mutex lock;
++ unsigned long flags;
++#define NVME_NSHEAD_DISK_LIVE 0
+ struct nvme_ns __rcu *current_path[];
+ #endif
+ };
+diff --git a/drivers/scsi/qla2xxx/qla_init.c b/drivers/scsi/qla2xxx/qla_init.c
+index caa6b840e459..cfbb4294fb8b 100644
+--- a/drivers/scsi/qla2xxx/qla_init.c
++++ b/drivers/scsi/qla2xxx/qla_init.c
+@@ -5933,7 +5933,7 @@ qla2x00_find_all_fabric_devs(scsi_qla_host_t *vha)
+ break;
+ }
+
+- if (NVME_TARGET(vha->hw, fcport)) {
++ if (found && NVME_TARGET(vha->hw, fcport)) {
+ if (fcport->disc_state == DSC_DELETE_PEND) {
+ qla2x00_set_fcport_disc_state(fcport, DSC_GNL);
+ vha->fcport_count--;
+diff --git a/drivers/soc/ti/omap_prm.c b/drivers/soc/ti/omap_prm.c
+index 96c6f777519c..c9b3f9ebf0bb 100644
+--- a/drivers/soc/ti/omap_prm.c
++++ b/drivers/soc/ti/omap_prm.c
+@@ -256,10 +256,10 @@ static int omap_reset_deassert(struct reset_controller_dev *rcdev,
+ goto exit;
+
+ /* wait for the status to be set */
+- ret = readl_relaxed_poll_timeout(reset->prm->base +
+- reset->prm->data->rstst,
+- v, v & BIT(st_bit), 1,
+- OMAP_RESET_MAX_WAIT);
++ ret = readl_relaxed_poll_timeout_atomic(reset->prm->base +
++ reset->prm->data->rstst,
++ v, v & BIT(st_bit), 1,
++ OMAP_RESET_MAX_WAIT);
+ if (ret)
+ pr_err("%s: timedout waiting for %s:%lu\n", __func__,
+ reset->prm->data->name, id);
+diff --git a/drivers/spi/spi-fsl-dspi.c b/drivers/spi/spi-fsl-dspi.c
+index 88176eaca448..856a4a0edcc7 100644
+--- a/drivers/spi/spi-fsl-dspi.c
++++ b/drivers/spi/spi-fsl-dspi.c
+@@ -1105,6 +1105,8 @@ static int dspi_suspend(struct device *dev)
+ struct spi_controller *ctlr = dev_get_drvdata(dev);
+ struct fsl_dspi *dspi = spi_controller_get_devdata(ctlr);
+
++ if (dspi->irq)
++ disable_irq(dspi->irq);
+ spi_controller_suspend(ctlr);
+ clk_disable_unprepare(dspi->clk);
+
+@@ -1125,6 +1127,8 @@ static int dspi_resume(struct device *dev)
+ if (ret)
+ return ret;
+ spi_controller_resume(ctlr);
++ if (dspi->irq)
++ enable_irq(dspi->irq);
+
+ return 0;
+ }
+@@ -1381,8 +1385,8 @@ static int dspi_probe(struct platform_device *pdev)
+ goto poll_mode;
+ }
+
+- ret = devm_request_irq(&pdev->dev, dspi->irq, dspi_interrupt,
+- IRQF_SHARED, pdev->name, dspi);
++ ret = request_threaded_irq(dspi->irq, dspi_interrupt, NULL,
++ IRQF_SHARED, pdev->name, dspi);
+ if (ret < 0) {
+ dev_err(&pdev->dev, "Unable to attach DSPI interrupt\n");
+ goto out_clk_put;
+@@ -1396,7 +1400,7 @@ poll_mode:
+ ret = dspi_request_dma(dspi, res->start);
+ if (ret < 0) {
+ dev_err(&pdev->dev, "can't get dma channels\n");
+- goto out_clk_put;
++ goto out_free_irq;
+ }
+ }
+
+@@ -1411,11 +1415,14 @@ poll_mode:
+ ret = spi_register_controller(ctlr);
+ if (ret != 0) {
+ dev_err(&pdev->dev, "Problem registering DSPI ctlr\n");
+- goto out_clk_put;
++ goto out_free_irq;
+ }
+
+ return ret;
+
++out_free_irq:
++ if (dspi->irq)
++ free_irq(dspi->irq, dspi);
+ out_clk_put:
+ clk_disable_unprepare(dspi->clk);
+ out_ctlr_put:
+@@ -1431,6 +1438,8 @@ static int dspi_remove(struct platform_device *pdev)
+
+ /* Disconnect from the SPI framework */
+ dspi_release_dma(dspi);
++ if (dspi->irq)
++ free_irq(dspi->irq, dspi);
+ clk_disable_unprepare(dspi->clk);
+ spi_unregister_controller(dspi->ctlr);
+
+diff --git a/drivers/thermal/cpufreq_cooling.c b/drivers/thermal/cpufreq_cooling.c
+index e297e135c031..bdc887cb4b63 100644
+--- a/drivers/thermal/cpufreq_cooling.c
++++ b/drivers/thermal/cpufreq_cooling.c
+@@ -123,12 +123,12 @@ static u32 cpu_power_to_freq(struct cpufreq_cooling_device *cpufreq_cdev,
+ {
+ int i;
+
+- for (i = cpufreq_cdev->max_level - 1; i >= 0; i--) {
+- if (power > cpufreq_cdev->em->table[i].power)
++ for (i = cpufreq_cdev->max_level; i >= 0; i--) {
++ if (power >= cpufreq_cdev->em->table[i].power)
+ break;
+ }
+
+- return cpufreq_cdev->em->table[i + 1].frequency;
++ return cpufreq_cdev->em->table[i].frequency;
+ }
+
+ /**
+diff --git a/drivers/thermal/mtk_thermal.c b/drivers/thermal/mtk_thermal.c
+index 76e30603d4d5..6b7ef1993d7e 100644
+--- a/drivers/thermal/mtk_thermal.c
++++ b/drivers/thermal/mtk_thermal.c
+@@ -211,6 +211,9 @@ enum {
+ /* The total number of temperature sensors in the MT8183 */
+ #define MT8183_NUM_SENSORS 6
+
++/* The number of banks in the MT8183 */
++#define MT8183_NUM_ZONES 1
++
+ /* The number of sensing points per bank */
+ #define MT8183_NUM_SENSORS_PER_ZONE 6
+
+@@ -497,7 +500,7 @@ static const struct mtk_thermal_data mt7622_thermal_data = {
+ */
+ static const struct mtk_thermal_data mt8183_thermal_data = {
+ .auxadc_channel = MT8183_TEMP_AUXADC_CHANNEL,
+- .num_banks = MT8183_NUM_SENSORS_PER_ZONE,
++ .num_banks = MT8183_NUM_ZONES,
+ .num_sensors = MT8183_NUM_SENSORS,
+ .vts_index = mt8183_vts_index,
+ .cali_val = MT8183_CALIBRATION,
+diff --git a/drivers/thermal/rcar_gen3_thermal.c b/drivers/thermal/rcar_gen3_thermal.c
+index 58fe7c1ef00b..c48c5e9b8f20 100644
+--- a/drivers/thermal/rcar_gen3_thermal.c
++++ b/drivers/thermal/rcar_gen3_thermal.c
+@@ -167,7 +167,7 @@ static int rcar_gen3_thermal_get_temp(void *devdata, int *temp)
+ {
+ struct rcar_gen3_thermal_tsc *tsc = devdata;
+ int mcelsius, val;
+- u32 reg;
++ int reg;
+
+ /* Read register and convert to mili Celsius */
+ reg = rcar_gen3_thermal_read(tsc, REG_GEN3_TEMP) & CTEMP_MASK;
+diff --git a/drivers/thermal/sprd_thermal.c b/drivers/thermal/sprd_thermal.c
+index a340374e8c51..4cde70dcf655 100644
+--- a/drivers/thermal/sprd_thermal.c
++++ b/drivers/thermal/sprd_thermal.c
+@@ -348,8 +348,8 @@ static int sprd_thm_probe(struct platform_device *pdev)
+
+ thm->var_data = pdata;
+ thm->base = devm_platform_ioremap_resource(pdev, 0);
+- if (!thm->base)
+- return -ENOMEM;
++ if (IS_ERR(thm->base))
++ return PTR_ERR(thm->base);
+
+ thm->nr_sensors = of_get_child_count(np);
+ if (thm->nr_sensors == 0 || thm->nr_sensors > SPRD_THM_MAX_SENSOR) {
+diff --git a/drivers/usb/misc/usbtest.c b/drivers/usb/misc/usbtest.c
+index 98ada1a3425c..bae88893ee8e 100644
+--- a/drivers/usb/misc/usbtest.c
++++ b/drivers/usb/misc/usbtest.c
+@@ -2873,6 +2873,7 @@ static void usbtest_disconnect(struct usb_interface *intf)
+
+ usb_set_intfdata(intf, NULL);
+ dev_dbg(&intf->dev, "disconnect\n");
++ kfree(dev->buf);
+ kfree(dev);
+ }
+
+diff --git a/fs/btrfs/block-group.c b/fs/btrfs/block-group.c
+index 0c17f18b4794..1b1c86953008 100644
+--- a/fs/btrfs/block-group.c
++++ b/fs/btrfs/block-group.c
+@@ -863,11 +863,34 @@ static void clear_incompat_bg_bits(struct btrfs_fs_info *fs_info, u64 flags)
+ }
+ }
+
++static int remove_block_group_item(struct btrfs_trans_handle *trans,
++ struct btrfs_path *path,
++ struct btrfs_block_group *block_group)
++{
++ struct btrfs_fs_info *fs_info = trans->fs_info;
++ struct btrfs_root *root;
++ struct btrfs_key key;
++ int ret;
++
++ root = fs_info->extent_root;
++ key.objectid = block_group->start;
++ key.type = BTRFS_BLOCK_GROUP_ITEM_KEY;
++ key.offset = block_group->length;
++
++ ret = btrfs_search_slot(trans, root, &key, path, -1, 1);
++ if (ret > 0)
++ ret = -ENOENT;
++ if (ret < 0)
++ return ret;
++
++ ret = btrfs_del_item(trans, root, path);
++ return ret;
++}
++
+ int btrfs_remove_block_group(struct btrfs_trans_handle *trans,
+ u64 group_start, struct extent_map *em)
+ {
+ struct btrfs_fs_info *fs_info = trans->fs_info;
+- struct btrfs_root *root = fs_info->extent_root;
+ struct btrfs_path *path;
+ struct btrfs_block_group *block_group;
+ struct btrfs_free_cluster *cluster;
+@@ -1068,9 +1091,24 @@ int btrfs_remove_block_group(struct btrfs_trans_handle *trans,
+
+ spin_unlock(&block_group->space_info->lock);
+
+- key.objectid = block_group->start;
+- key.type = BTRFS_BLOCK_GROUP_ITEM_KEY;
+- key.offset = block_group->length;
++ /*
++ * Remove the free space for the block group from the free space tree
++ * and the block group's item from the extent tree before marking the
++ * block group as removed. This is to prevent races with tasks that
++ * freeze and unfreeze a block group, this task and another task
++ * allocating a new block group - the unfreeze task ends up removing
++ * the block group's extent map before the task calling this function
++ * deletes the block group item from the extent tree, allowing for
++ * another task to attempt to create another block group with the same
++ * item key (and failing with -EEXIST and a transaction abort).
++ */
++ ret = remove_block_group_free_space(trans, block_group);
++ if (ret)
++ goto out;
++
++ ret = remove_block_group_item(trans, path, block_group);
++ if (ret < 0)
++ goto out;
+
+ mutex_lock(&fs_info->chunk_mutex);
+ spin_lock(&block_group->lock);
+@@ -1103,20 +1141,6 @@ int btrfs_remove_block_group(struct btrfs_trans_handle *trans,
+
+ mutex_unlock(&fs_info->chunk_mutex);
+
+- ret = remove_block_group_free_space(trans, block_group);
+- if (ret)
+- goto out;
+-
+- ret = btrfs_search_slot(trans, root, &key, path, -1, 1);
+- if (ret > 0)
+- ret = -EIO;
+- if (ret < 0)
+- goto out;
+-
+- ret = btrfs_del_item(trans, root, path);
+- if (ret)
+- goto out;
+-
+ if (remove_em) {
+ struct extent_map_tree *em_tree;
+
+diff --git a/fs/btrfs/file.c b/fs/btrfs/file.c
+index 52d565ff66e2..93244934d4f9 100644
+--- a/fs/btrfs/file.c
++++ b/fs/btrfs/file.c
+@@ -1541,7 +1541,7 @@ lock_and_cleanup_extent_if_need(struct btrfs_inode *inode, struct page **pages,
+ }
+
+ static noinline int check_can_nocow(struct btrfs_inode *inode, loff_t pos,
+- size_t *write_bytes)
++ size_t *write_bytes, bool nowait)
+ {
+ struct btrfs_fs_info *fs_info = inode->root->fs_info;
+ struct btrfs_root *root = inode->root;
+@@ -1549,27 +1549,43 @@ static noinline int check_can_nocow(struct btrfs_inode *inode, loff_t pos,
+ u64 num_bytes;
+ int ret;
+
+- if (!btrfs_drew_try_write_lock(&root->snapshot_lock))
++ if (!nowait && !btrfs_drew_try_write_lock(&root->snapshot_lock))
+ return -EAGAIN;
+
+ lockstart = round_down(pos, fs_info->sectorsize);
+ lockend = round_up(pos + *write_bytes,
+ fs_info->sectorsize) - 1;
++ num_bytes = lockend - lockstart + 1;
+
+- btrfs_lock_and_flush_ordered_range(inode, lockstart,
+- lockend, NULL);
++ if (nowait) {
++ struct btrfs_ordered_extent *ordered;
++
++ if (!try_lock_extent(&inode->io_tree, lockstart, lockend))
++ return -EAGAIN;
++
++ ordered = btrfs_lookup_ordered_range(inode, lockstart,
++ num_bytes);
++ if (ordered) {
++ btrfs_put_ordered_extent(ordered);
++ ret = -EAGAIN;
++ goto out_unlock;
++ }
++ } else {
++ btrfs_lock_and_flush_ordered_range(inode, lockstart,
++ lockend, NULL);
++ }
+
+- num_bytes = lockend - lockstart + 1;
+ ret = can_nocow_extent(&inode->vfs_inode, lockstart, &num_bytes,
+ NULL, NULL, NULL);
+ if (ret <= 0) {
+ ret = 0;
+- btrfs_drew_write_unlock(&root->snapshot_lock);
++ if (!nowait)
++ btrfs_drew_write_unlock(&root->snapshot_lock);
+ } else {
+ *write_bytes = min_t(size_t, *write_bytes ,
+ num_bytes - pos + lockstart);
+ }
+-
++out_unlock:
+ unlock_extent(&inode->io_tree, lockstart, lockend);
+
+ return ret;
+@@ -1641,7 +1657,7 @@ static noinline ssize_t btrfs_buffered_write(struct kiocb *iocb,
+ if ((BTRFS_I(inode)->flags & (BTRFS_INODE_NODATACOW |
+ BTRFS_INODE_PREALLOC)) &&
+ check_can_nocow(BTRFS_I(inode), pos,
+- &write_bytes) > 0) {
++ &write_bytes, false) > 0) {
+ /*
+ * For nodata cow case, no need to reserve
+ * data space.
+@@ -1920,12 +1936,11 @@ static ssize_t btrfs_file_write_iter(struct kiocb *iocb,
+ */
+ if (!(BTRFS_I(inode)->flags & (BTRFS_INODE_NODATACOW |
+ BTRFS_INODE_PREALLOC)) ||
+- check_can_nocow(BTRFS_I(inode), pos, &nocow_bytes) <= 0) {
++ check_can_nocow(BTRFS_I(inode), pos, &nocow_bytes,
++ true) <= 0) {
+ inode_unlock(inode);
+ return -EAGAIN;
+ }
+- /* check_can_nocow() locks the snapshot lock on success */
+- btrfs_drew_write_unlock(&root->snapshot_lock);
+ /*
+ * There are holes in the range or parts of the range that must
+ * be COWed (shared extents, RO block groups, etc), so just bail
+diff --git a/fs/cifs/connect.c b/fs/cifs/connect.c
+index 47b9fbb70bf5..bda8615f8c33 100644
+--- a/fs/cifs/connect.c
++++ b/fs/cifs/connect.c
+@@ -5306,9 +5306,15 @@ cifs_construct_tcon(struct cifs_sb_info *cifs_sb, kuid_t fsuid)
+ vol_info->nocase = master_tcon->nocase;
+ vol_info->nohandlecache = master_tcon->nohandlecache;
+ vol_info->local_lease = master_tcon->local_lease;
++ vol_info->no_lease = master_tcon->no_lease;
++ vol_info->resilient = master_tcon->use_resilient;
++ vol_info->persistent = master_tcon->use_persistent;
++ vol_info->handle_timeout = master_tcon->handle_timeout;
+ vol_info->no_linux_ext = !master_tcon->unix_ext;
++ vol_info->linux_ext = master_tcon->posix_extensions;
+ vol_info->sectype = master_tcon->ses->sectype;
+ vol_info->sign = master_tcon->ses->sign;
++ vol_info->seal = master_tcon->seal;
+
+ rc = cifs_set_vol_auth(vol_info, master_tcon->ses);
+ if (rc) {
+@@ -5334,10 +5340,6 @@ cifs_construct_tcon(struct cifs_sb_info *cifs_sb, kuid_t fsuid)
+ goto out;
+ }
+
+- /* if new SMB3.11 POSIX extensions are supported do not remap / and \ */
+- if (tcon->posix_extensions)
+- cifs_sb->mnt_cifs_flags |= CIFS_MOUNT_POSIX_PATHS;
+-
+ if (cap_unix(ses))
+ reset_cifs_unix_caps(0, tcon, NULL, vol_info);
+
+diff --git a/fs/cifs/inode.c b/fs/cifs/inode.c
+index 5d2965a23730..430b0b125654 100644
+--- a/fs/cifs/inode.c
++++ b/fs/cifs/inode.c
+@@ -1855,6 +1855,7 @@ cifs_rename2(struct inode *source_dir, struct dentry *source_dentry,
+ FILE_UNIX_BASIC_INFO *info_buf_target;
+ unsigned int xid;
+ int rc, tmprc;
++ bool new_target = d_really_is_negative(target_dentry);
+
+ if (flags & ~RENAME_NOREPLACE)
+ return -EINVAL;
+@@ -1931,8 +1932,13 @@ cifs_rename2(struct inode *source_dir, struct dentry *source_dentry,
+ */
+
+ unlink_target:
+- /* Try unlinking the target dentry if it's not negative */
+- if (d_really_is_positive(target_dentry) && (rc == -EACCES || rc == -EEXIST)) {
++ /*
++ * If the target dentry was created during the rename, try
++ * unlinking it if it's not negative
++ */
++ if (new_target &&
++ d_really_is_positive(target_dentry) &&
++ (rc == -EACCES || rc == -EEXIST)) {
+ if (d_is_dir(target_dentry))
+ tmprc = cifs_rmdir(target_dir, target_dentry);
+ else
+diff --git a/fs/exfat/dir.c b/fs/exfat/dir.c
+index 4b91afb0f051..6db302d76d4c 100644
+--- a/fs/exfat/dir.c
++++ b/fs/exfat/dir.c
+@@ -314,7 +314,7 @@ const struct file_operations exfat_dir_operations = {
+ .llseek = generic_file_llseek,
+ .read = generic_read_dir,
+ .iterate = exfat_iterate,
+- .fsync = generic_file_fsync,
++ .fsync = exfat_file_fsync,
+ };
+
+ int exfat_alloc_new_dir(struct inode *inode, struct exfat_chain *clu)
+@@ -430,10 +430,12 @@ static void exfat_init_name_entry(struct exfat_dentry *ep,
+ ep->dentry.name.flags = 0x0;
+
+ for (i = 0; i < EXFAT_FILE_NAME_LEN; i++) {
+- ep->dentry.name.unicode_0_14[i] = cpu_to_le16(*uniname);
+- if (*uniname == 0x0)
+- break;
+- uniname++;
++ if (*uniname != 0x0) {
++ ep->dentry.name.unicode_0_14[i] = cpu_to_le16(*uniname);
++ uniname++;
++ } else {
++ ep->dentry.name.unicode_0_14[i] = 0x0;
++ }
+ }
+ }
+
+diff --git a/fs/exfat/exfat_fs.h b/fs/exfat/exfat_fs.h
+index d67fb8a6f770..d865050fa6cd 100644
+--- a/fs/exfat/exfat_fs.h
++++ b/fs/exfat/exfat_fs.h
+@@ -424,6 +424,7 @@ void exfat_truncate(struct inode *inode, loff_t size);
+ int exfat_setattr(struct dentry *dentry, struct iattr *attr);
+ int exfat_getattr(const struct path *path, struct kstat *stat,
+ unsigned int request_mask, unsigned int query_flags);
++int exfat_file_fsync(struct file *file, loff_t start, loff_t end, int datasync);
+
+ /* namei.c */
+ extern const struct dentry_operations exfat_dentry_ops;
+diff --git a/fs/exfat/file.c b/fs/exfat/file.c
+index 5b4ddff18731..b93aa9e6cb16 100644
+--- a/fs/exfat/file.c
++++ b/fs/exfat/file.c
+@@ -6,6 +6,7 @@
+ #include <linux/slab.h>
+ #include <linux/cred.h>
+ #include <linux/buffer_head.h>
++#include <linux/blkdev.h>
+
+ #include "exfat_raw.h"
+ #include "exfat_fs.h"
+@@ -347,12 +348,28 @@ out:
+ return error;
+ }
+
++int exfat_file_fsync(struct file *filp, loff_t start, loff_t end, int datasync)
++{
++ struct inode *inode = filp->f_mapping->host;
++ int err;
++
++ err = __generic_file_fsync(filp, start, end, datasync);
++ if (err)
++ return err;
++
++ err = sync_blockdev(inode->i_sb->s_bdev);
++ if (err)
++ return err;
++
++ return blkdev_issue_flush(inode->i_sb->s_bdev, GFP_KERNEL, NULL);
++}
++
+ const struct file_operations exfat_file_operations = {
+ .llseek = generic_file_llseek,
+ .read_iter = generic_file_read_iter,
+ .write_iter = generic_file_write_iter,
+ .mmap = generic_file_mmap,
+- .fsync = generic_file_fsync,
++ .fsync = exfat_file_fsync,
+ .splice_read = generic_file_splice_read,
+ .splice_write = iter_file_splice_write,
+ };
+diff --git a/fs/exfat/namei.c b/fs/exfat/namei.c
+index a2659a8a68a1..2c9c78317721 100644
+--- a/fs/exfat/namei.c
++++ b/fs/exfat/namei.c
+@@ -984,7 +984,6 @@ static int exfat_rmdir(struct inode *dir, struct dentry *dentry)
+ goto unlock;
+ }
+
+- exfat_set_vol_flags(sb, VOL_DIRTY);
+ exfat_chain_set(&clu_to_free, ei->start_clu,
+ EXFAT_B_TO_CLU_ROUND_UP(i_size_read(inode), sbi), ei->flags);
+
+@@ -1012,6 +1011,7 @@ static int exfat_rmdir(struct inode *dir, struct dentry *dentry)
+ num_entries++;
+ brelse(bh);
+
++ exfat_set_vol_flags(sb, VOL_DIRTY);
+ err = exfat_remove_entries(dir, &cdir, entry, 0, num_entries);
+ if (err) {
+ exfat_msg(sb, KERN_ERR,
+@@ -1089,10 +1089,14 @@ static int exfat_rename_file(struct inode *inode, struct exfat_chain *p_dir,
+
+ epold = exfat_get_dentry(sb, p_dir, oldentry + 1, &old_bh,
+ 					&sector_old);
++ if (!epold)
++ return -EIO;
+ epnew = exfat_get_dentry(sb, p_dir, newentry + 1, &new_bh,
+ 					&sector_new);
+- if (!epold || !epnew)
++ if (!epnew) {
++ brelse(old_bh);
+ return -EIO;
++ }
+
+ memcpy(epnew, epold, DENTRY_SIZE);
+ exfat_update_bh(sb, new_bh, sync);
+@@ -1173,10 +1177,14 @@ static int exfat_move_file(struct inode *inode, struct exfat_chain *p_olddir,
+
+ epmov = exfat_get_dentry(sb, p_olddir, oldentry + 1, &mov_bh,
+ 					&sector_mov);
++ if (!epmov)
++ return -EIO;
+ epnew = exfat_get_dentry(sb, p_newdir, newentry + 1, &new_bh,
+ 					&sector_new);
+- if (!epmov || !epnew)
++ if (!epnew) {
++ brelse(mov_bh);
+ return -EIO;
++ }
+
+ memcpy(epnew, epmov, DENTRY_SIZE);
+ exfat_update_bh(sb, new_bh, IS_DIRSYNC(inode));
+diff --git a/fs/exfat/super.c b/fs/exfat/super.c
+index c1b1ed306a48..e87980153398 100644
+--- a/fs/exfat/super.c
++++ b/fs/exfat/super.c
+@@ -637,10 +637,20 @@ static void exfat_free(struct fs_context *fc)
+ }
+ }
+
++static int exfat_reconfigure(struct fs_context *fc)
++{
++ fc->sb_flags |= SB_NODIRATIME;
++
++ /* volume flag will be updated in exfat_sync_fs */
++ sync_filesystem(fc->root->d_sb);
++ return 0;
++}
++
+ static const struct fs_context_operations exfat_context_ops = {
+ .parse_param = exfat_parse_param,
+ .get_tree = exfat_get_tree,
+ .free = exfat_free,
++ .reconfigure = exfat_reconfigure,
+ };
+
+ static int exfat_init_fs_context(struct fs_context *fc)
+diff --git a/fs/gfs2/log.c b/fs/gfs2/log.c
+index b7a5221bea7d..04882712cd66 100644
+--- a/fs/gfs2/log.c
++++ b/fs/gfs2/log.c
+@@ -987,6 +987,16 @@ void gfs2_log_flush(struct gfs2_sbd *sdp, struct gfs2_glock *gl, u32 flags)
+
+ out:
+ if (gfs2_withdrawn(sdp)) {
++ /**
++ * If the tr_list is empty, we're withdrawing during a log
++ * flush that targets a transaction, but the transaction was
++ * never queued onto any of the ail lists. Here we add it to
++ * ail1 just so that ail_drain() will find and free it.
++ */
++ spin_lock(&sdp->sd_ail_lock);
++ if (tr && list_empty(&tr->tr_list))
++ list_add(&tr->tr_list, &sdp->sd_ail1_list);
++ spin_unlock(&sdp->sd_ail_lock);
+ ail_drain(sdp); /* frees all transactions */
+ tr = NULL;
+ }
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+index 4ab1728de247..2be6ea010340 100644
+--- a/fs/io_uring.c
++++ b/fs/io_uring.c
+@@ -858,6 +858,7 @@ static int __io_sqe_files_update(struct io_ring_ctx *ctx,
+ struct io_uring_files_update *ip,
+ unsigned nr_args);
+ static int io_grab_files(struct io_kiocb *req);
++static void io_complete_rw_common(struct kiocb *kiocb, long res);
+ static void io_cleanup_req(struct io_kiocb *req);
+ static int io_file_get(struct io_submit_state *state, struct io_kiocb *req,
+ int fd, struct file **out_file, bool fixed);
+@@ -1697,6 +1698,14 @@ static void io_iopoll_queue(struct list_head *again)
+ do {
+ req = list_first_entry(again, struct io_kiocb, list);
+ list_del(&req->list);
++
++ /* shouldn't happen unless io_uring is dying, cancel reqs */
++ if (unlikely(!current->mm)) {
++ io_complete_rw_common(&req->rw.kiocb, -EAGAIN);
++ io_put_req(req);
++ continue;
++ }
++
+ refcount_inc(&req->refs);
+ io_queue_async_work(req);
+ } while (!list_empty(again));
+@@ -2748,6 +2757,8 @@ static int io_splice_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+
+ if (req->flags & REQ_F_NEED_CLEANUP)
+ return 0;
++ if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
++ return -EINVAL;
+
+ sp->file_in = NULL;
+ sp->off_in = READ_ONCE(sqe->splice_off_in);
+@@ -2910,6 +2921,8 @@ static int io_fallocate_prep(struct io_kiocb *req,
+ {
+ if (sqe->ioprio || sqe->buf_index || sqe->rw_flags)
+ return -EINVAL;
++ if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
++ return -EINVAL;
+
+ req->sync.off = READ_ONCE(sqe->off);
+ req->sync.len = READ_ONCE(sqe->addr);
+@@ -2935,6 +2948,8 @@ static int io_openat_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+ const char __user *fname;
+ int ret;
+
++ if (unlikely(req->ctx->flags & (IORING_SETUP_IOPOLL|IORING_SETUP_SQPOLL)))
++ return -EINVAL;
+ if (sqe->ioprio || sqe->buf_index)
+ return -EINVAL;
+ if (req->flags & REQ_F_FIXED_FILE)
+@@ -2968,6 +2983,8 @@ static int io_openat2_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+ size_t len;
+ int ret;
+
++ if (unlikely(req->ctx->flags & (IORING_SETUP_IOPOLL|IORING_SETUP_SQPOLL)))
++ return -EINVAL;
+ if (sqe->ioprio || sqe->buf_index)
+ return -EINVAL;
+ if (req->flags & REQ_F_FIXED_FILE)
+@@ -3207,6 +3224,8 @@ static int io_epoll_ctl_prep(struct io_kiocb *req,
+ #if defined(CONFIG_EPOLL)
+ if (sqe->ioprio || sqe->buf_index)
+ return -EINVAL;
++ if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
++ return -EINVAL;
+
+ req->epoll.epfd = READ_ONCE(sqe->fd);
+ req->epoll.op = READ_ONCE(sqe->len);
+@@ -3251,6 +3270,8 @@ static int io_madvise_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+ #if defined(CONFIG_ADVISE_SYSCALLS) && defined(CONFIG_MMU)
+ if (sqe->ioprio || sqe->buf_index || sqe->off)
+ return -EINVAL;
++ if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
++ return -EINVAL;
+
+ req->madvise.addr = READ_ONCE(sqe->addr);
+ req->madvise.len = READ_ONCE(sqe->len);
+@@ -3285,6 +3306,8 @@ static int io_fadvise_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+ {
+ if (sqe->ioprio || sqe->buf_index || sqe->addr)
+ return -EINVAL;
++ if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
++ return -EINVAL;
+
+ req->fadvise.offset = READ_ONCE(sqe->off);
+ req->fadvise.len = READ_ONCE(sqe->len);
+@@ -3322,6 +3345,8 @@ static int io_statx_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+ unsigned lookup_flags;
+ int ret;
+
++ if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
++ return -EINVAL;
+ if (sqe->ioprio || sqe->buf_index)
+ return -EINVAL;
+ if (req->flags & REQ_F_FIXED_FILE)
+@@ -3402,6 +3427,8 @@ static int io_close_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+ */
+ req->work.flags |= IO_WQ_WORK_NO_CANCEL;
+
++ if (unlikely(req->ctx->flags & (IORING_SETUP_IOPOLL|IORING_SETUP_SQPOLL)))
++ return -EINVAL;
+ if (sqe->ioprio || sqe->off || sqe->addr || sqe->len ||
+ sqe->rw_flags || sqe->buf_index)
+ return -EINVAL;
+@@ -4109,6 +4136,29 @@ struct io_poll_table {
+ int error;
+ };
+
++static int io_req_task_work_add(struct io_kiocb *req, struct callback_head *cb)
++{
++ struct task_struct *tsk = req->task;
++ struct io_ring_ctx *ctx = req->ctx;
++ int ret, notify = TWA_RESUME;
++
++ /*
++ * SQPOLL kernel thread doesn't need notification, just a wakeup.
++ * If we're not using an eventfd, then TWA_RESUME is always fine,
++ * as we won't have dependencies between request completions for
++ * other kernel wait conditions.
++ */
++ if (ctx->flags & IORING_SETUP_SQPOLL)
++ notify = 0;
++ else if (ctx->cq_ev_fd)
++ notify = TWA_SIGNAL;
++
++ ret = task_work_add(tsk, cb, notify);
++ if (!ret)
++ wake_up_process(tsk);
++ return ret;
++}
++
+ static int __io_async_wake(struct io_kiocb *req, struct io_poll_iocb *poll,
+ __poll_t mask, task_work_func_t func)
+ {
+@@ -4132,13 +4182,13 @@ static int __io_async_wake(struct io_kiocb *req, struct io_poll_iocb *poll,
+ * of executing it. We can't safely execute it anyway, as we may not
+ * have the needed state needed for it anyway.
+ */
+- ret = task_work_add(tsk, &req->task_work, true);
++ ret = io_req_task_work_add(req, &req->task_work);
+ if (unlikely(ret)) {
+ WRITE_ONCE(poll->canceled, true);
+ tsk = io_wq_get_task(req->ctx->io_wq);
+- task_work_add(tsk, &req->task_work, true);
++ task_work_add(tsk, &req->task_work, 0);
++ wake_up_process(tsk);
+ }
+- wake_up_process(tsk);
+ return 1;
+ }
+
+@@ -6066,7 +6116,7 @@ static int io_sq_thread(void *data)
+ * If submit got -EBUSY, flag us as needing the application
+ * to enter the kernel to reap and flush events.
+ */
+- if (!to_submit || ret == -EBUSY) {
++ if (!to_submit || ret == -EBUSY || need_resched()) {
+ /*
+ * Drop cur_mm before scheduling, we can't hold it for
+ * long periods (or over schedule()). Do this before
+@@ -6082,7 +6132,7 @@ static int io_sq_thread(void *data)
+ * more IO, we should wait for the application to
+ * reap events and wake us up.
+ */
+- if (!list_empty(&ctx->poll_list) ||
++ if (!list_empty(&ctx->poll_list) || need_resched() ||
+ (!time_after(jiffies, timeout) && ret != -EBUSY &&
+ !percpu_ref_is_dying(&ctx->refs))) {
+ if (current->task_works)
+@@ -6233,15 +6283,23 @@ static int io_cqring_wait(struct io_ring_ctx *ctx, int min_events,
+ do {
+ prepare_to_wait_exclusive(&ctx->wait, &iowq.wq,
+ TASK_INTERRUPTIBLE);
++ /* make sure we run task_work before checking for signals */
+ if (current->task_works)
+ task_work_run();
+- if (io_should_wake(&iowq, false))
+- break;
+- schedule();
+ if (signal_pending(current)) {
++ if (current->jobctl & JOBCTL_TASK_WORK) {
++ spin_lock_irq(&current->sighand->siglock);
++ current->jobctl &= ~JOBCTL_TASK_WORK;
++ recalc_sigpending();
++ spin_unlock_irq(&current->sighand->siglock);
++ continue;
++ }
+ ret = -EINTR;
+ break;
+ }
++ if (io_should_wake(&iowq, false))
++ break;
++ schedule();
+ } while (1);
+ finish_wait(&ctx->wait, &iowq.wq);
+
+diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
+index c107caa56525..bdfae3ba3953 100644
+--- a/fs/nfsd/nfs4state.c
++++ b/fs/nfsd/nfs4state.c
+@@ -7859,9 +7859,14 @@ nfs4_state_start_net(struct net *net)
+ struct nfsd_net *nn = net_generic(net, nfsd_net_id);
+ int ret;
+
+- ret = nfs4_state_create_net(net);
++ ret = get_nfsdfs(net);
+ if (ret)
+ return ret;
++ ret = nfs4_state_create_net(net);
++ if (ret) {
++ mntput(nn->nfsd_mnt);
++ return ret;
++ }
+ locks_start_grace(net, &nn->nfsd4_manager);
+ nfsd4_client_tracking_init(net);
+ if (nn->track_reclaim_completes && nn->reclaim_str_hashtbl_size == 0)
+@@ -7930,6 +7935,7 @@ nfs4_state_shutdown_net(struct net *net)
+
+ nfsd4_client_tracking_exit(net);
+ nfs4_state_destroy_net(net);
++ mntput(nn->nfsd_mnt);
+ }
+
+ void
+diff --git a/fs/nfsd/nfsctl.c b/fs/nfsd/nfsctl.c
+index 71687d99b090..f298aad41070 100644
+--- a/fs/nfsd/nfsctl.c
++++ b/fs/nfsd/nfsctl.c
+@@ -1335,6 +1335,7 @@ void nfsd_client_rmdir(struct dentry *dentry)
+ WARN_ON_ONCE(ret);
+ fsnotify_rmdir(dir, dentry);
+ d_delete(dentry);
++ dput(dentry);
+ inode_unlock(dir);
+ }
+
+@@ -1424,6 +1425,18 @@ static struct file_system_type nfsd_fs_type = {
+ };
+ MODULE_ALIAS_FS("nfsd");
+
++int get_nfsdfs(struct net *net)
++{
++ struct nfsd_net *nn = net_generic(net, nfsd_net_id);
++ struct vfsmount *mnt;
++
++ mnt = vfs_kern_mount(&nfsd_fs_type, SB_KERNMOUNT, "nfsd", NULL);
++ if (IS_ERR(mnt))
++ return PTR_ERR(mnt);
++ nn->nfsd_mnt = mnt;
++ return 0;
++}
++
+ #ifdef CONFIG_PROC_FS
+ static int create_proc_exports_entry(void)
+ {
+@@ -1451,7 +1464,6 @@ unsigned int nfsd_net_id;
+ static __net_init int nfsd_init_net(struct net *net)
+ {
+ int retval;
+- struct vfsmount *mnt;
+ struct nfsd_net *nn = net_generic(net, nfsd_net_id);
+
+ retval = nfsd_export_init(net);
+@@ -1478,16 +1490,8 @@ static __net_init int nfsd_init_net(struct net *net)
+ init_waitqueue_head(&nn->ntf_wq);
+ seqlock_init(&nn->boot_lock);
+
+- mnt = vfs_kern_mount(&nfsd_fs_type, SB_KERNMOUNT, "nfsd", NULL);
+- if (IS_ERR(mnt)) {
+- retval = PTR_ERR(mnt);
+- goto out_mount_err;
+- }
+- nn->nfsd_mnt = mnt;
+ return 0;
+
+-out_mount_err:
+- nfsd_reply_cache_shutdown(nn);
+ out_drc_error:
+ nfsd_idmap_shutdown(net);
+ out_idmap_error:
+@@ -1500,7 +1504,6 @@ static __net_exit void nfsd_exit_net(struct net *net)
+ {
+ struct nfsd_net *nn = net_generic(net, nfsd_net_id);
+
+- mntput(nn->nfsd_mnt);
+ nfsd_reply_cache_shutdown(nn);
+ nfsd_idmap_shutdown(net);
+ nfsd_export_shutdown(net);
+diff --git a/fs/nfsd/nfsd.h b/fs/nfsd/nfsd.h
+index 2ab5569126b8..b61de3cd69b7 100644
+--- a/fs/nfsd/nfsd.h
++++ b/fs/nfsd/nfsd.h
+@@ -88,6 +88,8 @@ int nfsd_pool_stats_release(struct inode *, struct file *);
+
+ void nfsd_destroy(struct net *net);
+
++int get_nfsdfs(struct net *);
++
+ struct nfsdfs_client {
+ struct kref cl_ref;
+ void (*cl_release)(struct kref *kref);
+@@ -98,6 +100,7 @@ struct dentry *nfsd_client_mkdir(struct nfsd_net *nn,
+ struct nfsdfs_client *ncl, u32 id, const struct tree_descr *);
+ void nfsd_client_rmdir(struct dentry *dentry);
+
++
+ #if defined(CONFIG_NFSD_V2_ACL) || defined(CONFIG_NFSD_V3_ACL)
+ #ifdef CONFIG_NFSD_V2_ACL
+ extern const struct svc_version nfsd_acl_version2;
+diff --git a/fs/nfsd/vfs.c b/fs/nfsd/vfs.c
+index 0aa02eb18bd3..8fa3e0ff3671 100644
+--- a/fs/nfsd/vfs.c
++++ b/fs/nfsd/vfs.c
+@@ -1225,6 +1225,9 @@ nfsd_create_locked(struct svc_rqst *rqstp, struct svc_fh *fhp,
+ iap->ia_mode = 0;
+ iap->ia_mode = (iap->ia_mode & S_IALLUGO) | type;
+
++ if (!IS_POSIXACL(dirp))
++ iap->ia_mode &= ~current_umask();
++
+ err = 0;
+ host_err = 0;
+ switch (type) {
+@@ -1457,6 +1460,9 @@ do_nfsd_create(struct svc_rqst *rqstp, struct svc_fh *fhp,
+ goto out;
+ }
+
++ if (!IS_POSIXACL(dirp))
++ iap->ia_mode &= ~current_umask();
++
+ host_err = vfs_create(dirp, dchild, iap->ia_mode, true);
+ if (host_err < 0) {
+ fh_drop_write(fhp);
+diff --git a/fs/xfs/xfs_log_cil.c b/fs/xfs/xfs_log_cil.c
+index b43f0e8f43f2..9ed90368ab31 100644
+--- a/fs/xfs/xfs_log_cil.c
++++ b/fs/xfs/xfs_log_cil.c
+@@ -671,7 +671,8 @@ xlog_cil_push_work(
+ /*
+ * Wake up any background push waiters now this context is being pushed.
+ */
+- wake_up_all(&ctx->push_wait);
++ if (ctx->space_used >= XLOG_CIL_BLOCKING_SPACE_LIMIT(log))
++ wake_up_all(&cil->xc_push_wait);
+
+ /*
+ * Check if we've anything to push. If there is nothing, then we don't
+@@ -743,13 +744,12 @@ xlog_cil_push_work(
+
+ /*
+ * initialise the new context and attach it to the CIL. Then attach
+- * the current context to the CIL committing lsit so it can be found
++ * the current context to the CIL committing list so it can be found
+ * during log forces to extract the commit lsn of the sequence that
+ * needs to be forced.
+ */
+ INIT_LIST_HEAD(&new_ctx->committing);
+ INIT_LIST_HEAD(&new_ctx->busy_extents);
+- init_waitqueue_head(&new_ctx->push_wait);
+ new_ctx->sequence = ctx->sequence + 1;
+ new_ctx->cil = cil;
+ cil->xc_ctx = new_ctx;
+@@ -937,7 +937,7 @@ xlog_cil_push_background(
+ if (cil->xc_ctx->space_used >= XLOG_CIL_BLOCKING_SPACE_LIMIT(log)) {
+ trace_xfs_log_cil_wait(log, cil->xc_ctx->ticket);
+ ASSERT(cil->xc_ctx->space_used < log->l_logsize);
+- xlog_wait(&cil->xc_ctx->push_wait, &cil->xc_push_lock);
++ xlog_wait(&cil->xc_push_wait, &cil->xc_push_lock);
+ return;
+ }
+
+@@ -1216,12 +1216,12 @@ xlog_cil_init(
+ INIT_LIST_HEAD(&cil->xc_committing);
+ spin_lock_init(&cil->xc_cil_lock);
+ spin_lock_init(&cil->xc_push_lock);
++ init_waitqueue_head(&cil->xc_push_wait);
+ init_rwsem(&cil->xc_ctx_lock);
+ init_waitqueue_head(&cil->xc_commit_wait);
+
+ INIT_LIST_HEAD(&ctx->committing);
+ INIT_LIST_HEAD(&ctx->busy_extents);
+- init_waitqueue_head(&ctx->push_wait);
+ ctx->sequence = 1;
+ ctx->cil = cil;
+ cil->xc_ctx = ctx;
+diff --git a/fs/xfs/xfs_log_priv.h b/fs/xfs/xfs_log_priv.h
+index ec22c7a3867f..75a62870b63a 100644
+--- a/fs/xfs/xfs_log_priv.h
++++ b/fs/xfs/xfs_log_priv.h
+@@ -240,7 +240,6 @@ struct xfs_cil_ctx {
+ struct xfs_log_vec *lv_chain; /* logvecs being pushed */
+ struct list_head iclog_entry;
+ struct list_head committing; /* ctx committing list */
+- wait_queue_head_t push_wait; /* background push throttle */
+ struct work_struct discard_endio_work;
+ };
+
+@@ -274,6 +273,7 @@ struct xfs_cil {
+ wait_queue_head_t xc_commit_wait;
+ xfs_lsn_t xc_current_sequence;
+ struct work_struct xc_push_work;
++ wait_queue_head_t xc_push_wait; /* background push throttle */
+ } ____cacheline_aligned_in_smp;
+
+ /*
+diff --git a/include/crypto/if_alg.h b/include/crypto/if_alg.h
+index 56527c85d122..088c1ded2714 100644
+--- a/include/crypto/if_alg.h
++++ b/include/crypto/if_alg.h
+@@ -29,8 +29,8 @@ struct alg_sock {
+
+ struct sock *parent;
+
+- unsigned int refcnt;
+- unsigned int nokey_refcnt;
++ atomic_t refcnt;
++ atomic_t nokey_refcnt;
+
+ const struct af_alg_type *type;
+ void *private;
+diff --git a/include/linux/lsm_hook_defs.h b/include/linux/lsm_hook_defs.h
+index 5616b2567aa7..c2d073c49bf8 100644
+--- a/include/linux/lsm_hook_defs.h
++++ b/include/linux/lsm_hook_defs.h
+@@ -149,7 +149,7 @@ LSM_HOOK(int, 0, inode_listsecurity, struct inode *inode, char *buffer,
+ size_t buffer_size)
+ LSM_HOOK(void, LSM_RET_VOID, inode_getsecid, struct inode *inode, u32 *secid)
+ LSM_HOOK(int, 0, inode_copy_up, struct dentry *src, struct cred **new)
+-LSM_HOOK(int, 0, inode_copy_up_xattr, const char *name)
++LSM_HOOK(int, -EOPNOTSUPP, inode_copy_up_xattr, const char *name)
+ LSM_HOOK(int, 0, kernfs_init_security, struct kernfs_node *kn_dir,
+ struct kernfs_node *kn)
+ LSM_HOOK(int, 0, file_permission, struct file *file, int mask)
+diff --git a/include/linux/sched/jobctl.h b/include/linux/sched/jobctl.h
+index fa067de9f1a9..d2b4204ba4d3 100644
+--- a/include/linux/sched/jobctl.h
++++ b/include/linux/sched/jobctl.h
+@@ -19,6 +19,7 @@ struct task_struct;
+ #define JOBCTL_TRAPPING_BIT 21 /* switching to TRACED */
+ #define JOBCTL_LISTENING_BIT 22 /* ptracer is listening for events */
+ #define JOBCTL_TRAP_FREEZE_BIT 23 /* trap for cgroup freezer */
++#define JOBCTL_TASK_WORK_BIT 24 /* set by TWA_SIGNAL */
+
+ #define JOBCTL_STOP_DEQUEUED (1UL << JOBCTL_STOP_DEQUEUED_BIT)
+ #define JOBCTL_STOP_PENDING (1UL << JOBCTL_STOP_PENDING_BIT)
+@@ -28,9 +29,10 @@ struct task_struct;
+ #define JOBCTL_TRAPPING (1UL << JOBCTL_TRAPPING_BIT)
+ #define JOBCTL_LISTENING (1UL << JOBCTL_LISTENING_BIT)
+ #define JOBCTL_TRAP_FREEZE (1UL << JOBCTL_TRAP_FREEZE_BIT)
++#define JOBCTL_TASK_WORK (1UL << JOBCTL_TASK_WORK_BIT)
+
+ #define JOBCTL_TRAP_MASK (JOBCTL_TRAP_STOP | JOBCTL_TRAP_NOTIFY)
+-#define JOBCTL_PENDING_MASK (JOBCTL_STOP_PENDING | JOBCTL_TRAP_MASK)
++#define JOBCTL_PENDING_MASK (JOBCTL_STOP_PENDING | JOBCTL_TRAP_MASK | JOBCTL_TASK_WORK)
+
+ extern bool task_set_jobctl_pending(struct task_struct *task, unsigned long mask);
+ extern void task_clear_jobctl_trapping(struct task_struct *task);
+diff --git a/include/linux/task_work.h b/include/linux/task_work.h
+index bd9a6a91c097..0fb93aafa478 100644
+--- a/include/linux/task_work.h
++++ b/include/linux/task_work.h
+@@ -13,7 +13,10 @@ init_task_work(struct callback_head *twork, task_work_func_t func)
+ twork->func = func;
+ }
+
+-int task_work_add(struct task_struct *task, struct callback_head *twork, bool);
++#define TWA_RESUME 1
++#define TWA_SIGNAL 2
++int task_work_add(struct task_struct *task, struct callback_head *twork, int);
++
+ struct callback_head *task_work_cancel(struct task_struct *, task_work_func_t);
+ void task_work_run(void);
+
+diff --git a/include/net/seg6.h b/include/net/seg6.h
+index 640724b35273..9d19c15e8545 100644
+--- a/include/net/seg6.h
++++ b/include/net/seg6.h
+@@ -57,7 +57,7 @@ extern void seg6_iptunnel_exit(void);
+ extern int seg6_local_init(void);
+ extern void seg6_local_exit(void);
+
+-extern bool seg6_validate_srh(struct ipv6_sr_hdr *srh, int len);
++extern bool seg6_validate_srh(struct ipv6_sr_hdr *srh, int len, bool reduced);
+ extern int seg6_do_srh_encap(struct sk_buff *skb, struct ipv6_sr_hdr *osrh,
+ int proto);
+ extern int seg6_do_srh_inline(struct sk_buff *skb, struct ipv6_sr_hdr *osrh);
+diff --git a/kernel/debug/debug_core.c b/kernel/debug/debug_core.c
+index d47c7d6656cd..9be6accf8fe3 100644
+--- a/kernel/debug/debug_core.c
++++ b/kernel/debug/debug_core.c
+@@ -577,6 +577,7 @@ static int kgdb_cpu_enter(struct kgdb_state *ks, struct pt_regs *regs,
+ arch_kgdb_ops.disable_hw_break(regs);
+
+ acquirelock:
++ rcu_read_lock();
+ /*
+ * Interrupts will be restored by the 'trap return' code, except when
+ * single stepping.
+@@ -636,6 +637,7 @@ return_normal:
+ atomic_dec(&slaves_in_kgdb);
+ dbg_touch_watchdogs();
+ local_irq_restore(flags);
++ rcu_read_unlock();
+ return 0;
+ }
+ cpu_relax();
+@@ -654,6 +656,7 @@ return_normal:
+ raw_spin_unlock(&dbg_master_lock);
+ dbg_touch_watchdogs();
+ local_irq_restore(flags);
++ rcu_read_unlock();
+
+ goto acquirelock;
+ }
+@@ -777,6 +780,7 @@ kgdb_restore:
+ raw_spin_unlock(&dbg_master_lock);
+ dbg_touch_watchdogs();
+ local_irq_restore(flags);
++ rcu_read_unlock();
+
+ return kgdb_info[cpu].ret_state;
+ }
+diff --git a/kernel/padata.c b/kernel/padata.c
+index aae789896616..859c77d22aa7 100644
+--- a/kernel/padata.c
++++ b/kernel/padata.c
+@@ -260,7 +260,7 @@ static void padata_reorder(struct parallel_data *pd)
+ *
+ * Ensure reorder queue is read after pd->lock is dropped so we see
+ * new objects from another task in padata_do_serial. Pairs with
+- * smp_mb__after_atomic in padata_do_serial.
++ * smp_mb in padata_do_serial.
+ */
+ smp_mb();
+
+@@ -342,7 +342,7 @@ void padata_do_serial(struct padata_priv *padata)
+ * with the trylock of pd->lock in padata_reorder. Pairs with smp_mb
+ * in padata_reorder.
+ */
+- smp_mb__after_atomic();
++ smp_mb();
+
+ padata_reorder(pd);
+ }
+diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c
+index 239970b991c0..0f4aaad236a9 100644
+--- a/kernel/sched/debug.c
++++ b/kernel/sched/debug.c
+@@ -258,7 +258,7 @@ sd_alloc_ctl_domain_table(struct sched_domain *sd)
+ set_table_entry(&table[2], "busy_factor", &sd->busy_factor, sizeof(int), 0644, proc_dointvec_minmax);
+ set_table_entry(&table[3], "imbalance_pct", &sd->imbalance_pct, sizeof(int), 0644, proc_dointvec_minmax);
+ set_table_entry(&table[4], "cache_nice_tries", &sd->cache_nice_tries, sizeof(int), 0644, proc_dointvec_minmax);
+- set_table_entry(&table[5], "flags", &sd->flags, sizeof(int), 0644, proc_dointvec_minmax);
++ set_table_entry(&table[5], "flags", &sd->flags, sizeof(int), 0444, proc_dointvec_minmax);
+ set_table_entry(&table[6], "max_newidle_lb_cost", &sd->max_newidle_lb_cost, sizeof(long), 0644, proc_doulongvec_minmax);
+ set_table_entry(&table[7], "name", sd->name, CORENAME_MAX_SIZE, 0444, proc_dostring);
+ /* &table[8] is terminator */
+diff --git a/kernel/signal.c b/kernel/signal.c
+index 284fc1600063..d5feb34b5e15 100644
+--- a/kernel/signal.c
++++ b/kernel/signal.c
+@@ -2529,9 +2529,6 @@ bool get_signal(struct ksignal *ksig)
+ struct signal_struct *signal = current->signal;
+ int signr;
+
+- if (unlikely(current->task_works))
+- task_work_run();
+-
+ if (unlikely(uprobe_deny_signal()))
+ return false;
+
+@@ -2544,6 +2541,13 @@ bool get_signal(struct ksignal *ksig)
+
+ relock:
+ spin_lock_irq(&sighand->siglock);
++ current->jobctl &= ~JOBCTL_TASK_WORK;
++ if (unlikely(current->task_works)) {
++ spin_unlock_irq(&sighand->siglock);
++ task_work_run();
++ goto relock;
++ }
++
+ /*
+ * Every stopped thread goes here after wakeup. Check to see if
+ * we should notify the parent, prepare_signal(SIGCONT) encodes
+diff --git a/kernel/task_work.c b/kernel/task_work.c
+index 825f28259a19..5c0848ca1287 100644
+--- a/kernel/task_work.c
++++ b/kernel/task_work.c
+@@ -25,9 +25,10 @@ static struct callback_head work_exited; /* all we need is ->next == NULL */
+ * 0 if succeeds or -ESRCH.
+ */
+ int
+-task_work_add(struct task_struct *task, struct callback_head *work, bool notify)
++task_work_add(struct task_struct *task, struct callback_head *work, int notify)
+ {
+ struct callback_head *head;
++ unsigned long flags;
+
+ do {
+ head = READ_ONCE(task->task_works);
+@@ -36,8 +37,19 @@ task_work_add(struct task_struct *task, struct callback_head *work, bool notify)
+ work->next = head;
+ } while (cmpxchg(&task->task_works, head, work) != head);
+
+- if (notify)
++ switch (notify) {
++ case TWA_RESUME:
+ set_notify_resume(task);
++ break;
++ case TWA_SIGNAL:
++ if (lock_task_sighand(task, &flags)) {
++ task->jobctl |= JOBCTL_TASK_WORK;
++ signal_wake_up(task, 0);
++ unlock_task_sighand(task, &flags);
++ }
++ break;
++ }
++
+ return 0;
+ }
+
+diff --git a/mm/cma.c b/mm/cma.c
+index 0463ad2ce06b..26ecff818881 100644
+--- a/mm/cma.c
++++ b/mm/cma.c
+@@ -339,13 +339,13 @@ int __init cma_declare_contiguous_nid(phys_addr_t base,
+ */
+ if (base < highmem_start && limit > highmem_start) {
+ addr = memblock_alloc_range_nid(size, alignment,
+- highmem_start, limit, nid, false);
++ highmem_start, limit, nid, true);
+ limit = highmem_start;
+ }
+
+ if (!addr) {
+ addr = memblock_alloc_range_nid(size, alignment, base,
+- limit, nid, false);
++ limit, nid, true);
+ if (!addr) {
+ ret = -ENOMEM;
+ goto err;
+diff --git a/mm/debug.c b/mm/debug.c
+index 2189357f0987..f2ede2df585a 100644
+--- a/mm/debug.c
++++ b/mm/debug.c
+@@ -110,13 +110,57 @@ void __dump_page(struct page *page, const char *reason)
+ else if (PageAnon(page))
+ type = "anon ";
+ else if (mapping) {
+- if (mapping->host && mapping->host->i_dentry.first) {
+- struct dentry *dentry;
+- dentry = container_of(mapping->host->i_dentry.first, struct dentry, d_u.d_alias);
+- pr_warn("%ps name:\"%pd\"\n", mapping->a_ops, dentry);
+- } else
+- pr_warn("%ps\n", mapping->a_ops);
++ const struct inode *host;
++ const struct address_space_operations *a_ops;
++ const struct hlist_node *dentry_first;
++ const struct dentry *dentry_ptr;
++ struct dentry dentry;
++
++ /*
++ * mapping can be invalid pointer and we don't want to crash
++ * accessing it, so probe everything depending on it carefully
++ */
++ if (probe_kernel_read_strict(&host, &mapping->host,
++ sizeof(struct inode *)) ||
++ probe_kernel_read_strict(&a_ops, &mapping->a_ops,
++ sizeof(struct address_space_operations *))) {
++ pr_warn("failed to read mapping->host or a_ops, mapping not a valid kernel address?\n");
++ goto out_mapping;
++ }
++
++ if (!host) {
++ pr_warn("mapping->a_ops:%ps\n", a_ops);
++ goto out_mapping;
++ }
++
++ if (probe_kernel_read_strict(&dentry_first,
++ &host->i_dentry.first, sizeof(struct hlist_node *))) {
++ pr_warn("mapping->a_ops:%ps with invalid mapping->host inode address %px\n",
++ a_ops, host);
++ goto out_mapping;
++ }
++
++ if (!dentry_first) {
++ pr_warn("mapping->a_ops:%ps\n", a_ops);
++ goto out_mapping;
++ }
++
++ dentry_ptr = container_of(dentry_first, struct dentry, d_u.d_alias);
++ if (probe_kernel_read_strict(&dentry, dentry_ptr,
++ sizeof(struct dentry))) {
++ pr_warn("mapping->aops:%ps with invalid mapping->host->i_dentry.first %px\n",
++ a_ops, dentry_ptr);
++ } else {
++ /*
++ * if dentry is corrupted, the %pd handler may still
++ * crash, but it's unlikely that we reach here with a
++ * corrupted struct page
++ */
++ pr_warn("mapping->aops:%ps dentry name:\"%pd\"\n",
++ a_ops, &dentry);
++ }
+ }
++out_mapping:
+ BUILD_BUG_ON(ARRAY_SIZE(pageflag_names) != __NR_PAGEFLAGS + 1);
+
+ pr_warn("%sflags: %#lx(%pGp)%s\n", type, page->flags, &page->flags,
+diff --git a/mm/hugetlb.c b/mm/hugetlb.c
+index bcabbe02192b..4f7cdc55fbe4 100644
+--- a/mm/hugetlb.c
++++ b/mm/hugetlb.c
+@@ -1594,7 +1594,7 @@ static struct address_space *_get_hugetlb_page_mapping(struct page *hpage)
+
+ /* Use first found vma */
+ pgoff_start = page_to_pgoff(hpage);
+- pgoff_end = pgoff_start + hpage_nr_pages(hpage) - 1;
++ pgoff_end = pgoff_start + pages_per_huge_page(page_hstate(hpage)) - 1;
+ anon_vma_interval_tree_foreach(avc, &anon_vma->rb_root,
+ pgoff_start, pgoff_end) {
+ struct vm_area_struct *vma = avc->vma;
+diff --git a/mm/slub.c b/mm/slub.c
+index 63bd39c47643..660f4324c097 100644
+--- a/mm/slub.c
++++ b/mm/slub.c
+@@ -679,6 +679,20 @@ static void slab_fix(struct kmem_cache *s, char *fmt, ...)
+ va_end(args);
+ }
+
++static bool freelist_corrupted(struct kmem_cache *s, struct page *page,
++ void *freelist, void *nextfree)
++{
++ if ((s->flags & SLAB_CONSISTENCY_CHECKS) &&
++ !check_valid_pointer(s, page, nextfree)) {
++ object_err(s, page, freelist, "Freechain corrupt");
++ freelist = NULL;
++ slab_fix(s, "Isolate corrupted freechain");
++ return true;
++ }
++
++ return false;
++}
++
+ static void print_trailer(struct kmem_cache *s, struct page *page, u8 *p)
+ {
+ unsigned int off; /* Offset of last byte */
+@@ -1410,6 +1424,11 @@ static inline void inc_slabs_node(struct kmem_cache *s, int node,
+ static inline void dec_slabs_node(struct kmem_cache *s, int node,
+ int objects) {}
+
++static bool freelist_corrupted(struct kmem_cache *s, struct page *page,
++ void *freelist, void *nextfree)
++{
++ return false;
++}
+ #endif /* CONFIG_SLUB_DEBUG */
+
+ /*
+@@ -2093,6 +2112,14 @@ static void deactivate_slab(struct kmem_cache *s, struct page *page,
+ void *prior;
+ unsigned long counters;
+
++ /*
++ * If 'nextfree' is invalid, it is possible that the object at
++ * 'freelist' is already corrupted. So isolate all objects
++ * starting at 'freelist'.
++ */
++ if (freelist_corrupted(s, page, freelist, nextfree))
++ break;
++
+ do {
+ prior = page->freelist;
+ counters = page->counters;
+@@ -5654,7 +5681,8 @@ static void memcg_propagate_slab_attrs(struct kmem_cache *s)
+ */
+ if (buffer)
+ buf = buffer;
+- else if (root_cache->max_attr_size < ARRAY_SIZE(mbuf))
++ else if (root_cache->max_attr_size < ARRAY_SIZE(mbuf) &&
++ !IS_ENABLED(CONFIG_SLUB_STATS))
+ buf = mbuf;
+ else {
+ buffer = (char *) get_zeroed_page(GFP_KERNEL);
+diff --git a/mm/swap_state.c b/mm/swap_state.c
+index ebed37bbf7a3..e3d36776c08b 100644
+--- a/mm/swap_state.c
++++ b/mm/swap_state.c
+@@ -23,6 +23,7 @@
+ #include <linux/huge_mm.h>
+
+ #include <asm/pgtable.h>
++#include "internal.h"
+
+ /*
+ * swapper_space is a fiction, retained to simplify the path through
+@@ -418,7 +419,8 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
+ /* May fail (-ENOMEM) if XArray node allocation failed. */
+ __SetPageLocked(new_page);
+ __SetPageSwapBacked(new_page);
+- err = add_to_swap_cache(new_page, entry, gfp_mask & GFP_KERNEL);
++ err = add_to_swap_cache(new_page, entry,
++ gfp_mask & GFP_RECLAIM_MASK);
+ if (likely(!err)) {
+ /* Initiate read into locked page */
+ SetPageWorkingset(new_page);
+diff --git a/net/core/filter.c b/net/core/filter.c
+index 9512a9772d69..45fa65a28983 100644
+--- a/net/core/filter.c
++++ b/net/core/filter.c
+@@ -4920,7 +4920,7 @@ static int bpf_push_seg6_encap(struct sk_buff *skb, u32 type, void *hdr, u32 len
+ int err;
+ struct ipv6_sr_hdr *srh = (struct ipv6_sr_hdr *)hdr;
+
+- if (!seg6_validate_srh(srh, len))
++ if (!seg6_validate_srh(srh, len, false))
+ return -EINVAL;
+
+ switch (type) {
+diff --git a/net/hsr/hsr_device.c b/net/hsr/hsr_device.c
+index fc7027314ad8..ef100cfd2ac1 100644
+--- a/net/hsr/hsr_device.c
++++ b/net/hsr/hsr_device.c
+@@ -341,7 +341,7 @@ static void hsr_announce(struct timer_list *t)
+ rcu_read_unlock();
+ }
+
+-static void hsr_del_ports(struct hsr_priv *hsr)
++void hsr_del_ports(struct hsr_priv *hsr)
+ {
+ struct hsr_port *port;
+
+@@ -358,31 +358,12 @@ static void hsr_del_ports(struct hsr_priv *hsr)
+ hsr_del_port(port);
+ }
+
+-/* This has to be called after all the readers are gone.
+- * Otherwise we would have to check the return value of
+- * hsr_port_get_hsr().
+- */
+-static void hsr_dev_destroy(struct net_device *hsr_dev)
+-{
+- struct hsr_priv *hsr = netdev_priv(hsr_dev);
+-
+- hsr_debugfs_term(hsr);
+- hsr_del_ports(hsr);
+-
+- del_timer_sync(&hsr->prune_timer);
+- del_timer_sync(&hsr->announce_timer);
+-
+- hsr_del_self_node(hsr);
+- hsr_del_nodes(&hsr->node_db);
+-}
+-
+ static const struct net_device_ops hsr_device_ops = {
+ .ndo_change_mtu = hsr_dev_change_mtu,
+ .ndo_open = hsr_dev_open,
+ .ndo_stop = hsr_dev_close,
+ .ndo_start_xmit = hsr_dev_xmit,
+ .ndo_fix_features = hsr_fix_features,
+- .ndo_uninit = hsr_dev_destroy,
+ };
+
+ static struct device_type hsr_type = {
+diff --git a/net/hsr/hsr_device.h b/net/hsr/hsr_device.h
+index a099d7de7e79..b8f9262ed101 100644
+--- a/net/hsr/hsr_device.h
++++ b/net/hsr/hsr_device.h
+@@ -11,6 +11,7 @@
+ #include <linux/netdevice.h>
+ #include "hsr_main.h"
+
++void hsr_del_ports(struct hsr_priv *hsr);
+ void hsr_dev_setup(struct net_device *dev);
+ int hsr_dev_finalize(struct net_device *hsr_dev, struct net_device *slave[2],
+ unsigned char multicast_spec, u8 protocol_version,
+@@ -18,5 +19,4 @@ int hsr_dev_finalize(struct net_device *hsr_dev, struct net_device *slave[2],
+ void hsr_check_carrier_and_operstate(struct hsr_priv *hsr);
+ bool is_hsr_master(struct net_device *dev);
+ int hsr_get_max_mtu(struct hsr_priv *hsr);
+-
+ #endif /* __HSR_DEVICE_H */
+diff --git a/net/hsr/hsr_main.c b/net/hsr/hsr_main.c
+index 26d6c39f24e1..144da15f0a81 100644
+--- a/net/hsr/hsr_main.c
++++ b/net/hsr/hsr_main.c
+@@ -6,6 +6,7 @@
+ */
+
+ #include <linux/netdevice.h>
++#include <net/rtnetlink.h>
+ #include <linux/rculist.h>
+ #include <linux/timer.h>
+ #include <linux/etherdevice.h>
+@@ -15,12 +16,23 @@
+ #include "hsr_framereg.h"
+ #include "hsr_slave.h"
+
++static bool hsr_slave_empty(struct hsr_priv *hsr)
++{
++ struct hsr_port *port;
++
++ hsr_for_each_port(hsr, port)
++ if (port->type != HSR_PT_MASTER)
++ return false;
++ return true;
++}
++
+ static int hsr_netdev_notify(struct notifier_block *nb, unsigned long event,
+ void *ptr)
+ {
+- struct net_device *dev;
+ struct hsr_port *port, *master;
++ struct net_device *dev;
+ struct hsr_priv *hsr;
++ LIST_HEAD(list_kill);
+ int mtu_max;
+ int res;
+
+@@ -85,8 +97,17 @@ static int hsr_netdev_notify(struct notifier_block *nb, unsigned long event,
+ master->dev->mtu = mtu_max;
+ break;
+ case NETDEV_UNREGISTER:
+- if (!is_hsr_master(dev))
++ if (!is_hsr_master(dev)) {
++ master = hsr_port_get_hsr(port->hsr, HSR_PT_MASTER);
+ hsr_del_port(port);
++ if (hsr_slave_empty(master->hsr)) {
++ const struct rtnl_link_ops *ops;
++
++ ops = master->dev->rtnl_link_ops;
++ ops->dellink(master->dev, &list_kill);
++ unregister_netdevice_many(&list_kill);
++ }
++ }
+ break;
+ case NETDEV_PRE_TYPE_CHANGE:
+ /* HSR works only on Ethernet devices. Refuse slave to change
+@@ -126,9 +147,9 @@ static int __init hsr_init(void)
+
+ static void __exit hsr_exit(void)
+ {
+- unregister_netdevice_notifier(&hsr_nb);
+ hsr_netlink_exit();
+ hsr_debugfs_remove_root();
++ unregister_netdevice_notifier(&hsr_nb);
+ }
+
+ module_init(hsr_init);
+diff --git a/net/hsr/hsr_netlink.c b/net/hsr/hsr_netlink.c
+index 1decb25f6764..6e14b7d22639 100644
+--- a/net/hsr/hsr_netlink.c
++++ b/net/hsr/hsr_netlink.c
+@@ -83,6 +83,22 @@ static int hsr_newlink(struct net *src_net, struct net_device *dev,
+ return hsr_dev_finalize(dev, link, multicast_spec, hsr_version, extack);
+ }
+
++static void hsr_dellink(struct net_device *dev, struct list_head *head)
++{
++ struct hsr_priv *hsr = netdev_priv(dev);
++
++ del_timer_sync(&hsr->prune_timer);
++ del_timer_sync(&hsr->announce_timer);
++
++ hsr_debugfs_term(hsr);
++ hsr_del_ports(hsr);
++
++ hsr_del_self_node(hsr);
++ hsr_del_nodes(&hsr->node_db);
++
++ unregister_netdevice_queue(dev, head);
++}
++
+ static int hsr_fill_info(struct sk_buff *skb, const struct net_device *dev)
+ {
+ struct hsr_priv *hsr = netdev_priv(dev);
+@@ -118,6 +134,7 @@ static struct rtnl_link_ops hsr_link_ops __read_mostly = {
+ .priv_size = sizeof(struct hsr_priv),
+ .setup = hsr_dev_setup,
+ .newlink = hsr_newlink,
++ .dellink = hsr_dellink,
+ .fill_info = hsr_fill_info,
+ };
+
+diff --git a/net/ipv6/ipv6_sockglue.c b/net/ipv6/ipv6_sockglue.c
+index 5af97b4f5df3..ff187fd2083f 100644
+--- a/net/ipv6/ipv6_sockglue.c
++++ b/net/ipv6/ipv6_sockglue.c
+@@ -458,7 +458,7 @@ static int do_ipv6_setsockopt(struct sock *sk, int level, int optname,
+ struct ipv6_sr_hdr *srh = (struct ipv6_sr_hdr *)
+ opt->srcrt;
+
+- if (!seg6_validate_srh(srh, optlen))
++ if (!seg6_validate_srh(srh, optlen, false))
+ goto sticky_done;
+ break;
+ }
+diff --git a/net/ipv6/seg6.c b/net/ipv6/seg6.c
+index 37b434293bda..d2f8138e5a73 100644
+--- a/net/ipv6/seg6.c
++++ b/net/ipv6/seg6.c
+@@ -25,7 +25,7 @@
+ #include <net/seg6_hmac.h>
+ #endif
+
+-bool seg6_validate_srh(struct ipv6_sr_hdr *srh, int len)
++bool seg6_validate_srh(struct ipv6_sr_hdr *srh, int len, bool reduced)
+ {
+ unsigned int tlv_offset;
+ int max_last_entry;
+@@ -37,13 +37,17 @@ bool seg6_validate_srh(struct ipv6_sr_hdr *srh, int len)
+ if (((srh->hdrlen + 1) << 3) != len)
+ return false;
+
+- max_last_entry = (srh->hdrlen / 2) - 1;
+-
+- if (srh->first_segment > max_last_entry)
++ if (!reduced && srh->segments_left > srh->first_segment) {
+ return false;
++ } else {
++ max_last_entry = (srh->hdrlen / 2) - 1;
+
+- if (srh->segments_left > srh->first_segment + 1)
+- return false;
++ if (srh->first_segment > max_last_entry)
++ return false;
++
++ if (srh->segments_left > srh->first_segment + 1)
++ return false;
++ }
+
+ tlv_offset = sizeof(*srh) + ((srh->first_segment + 1) << 4);
+
+diff --git a/net/ipv6/seg6_iptunnel.c b/net/ipv6/seg6_iptunnel.c
+index c7cbfeae94f5..e0e9f48ab14f 100644
+--- a/net/ipv6/seg6_iptunnel.c
++++ b/net/ipv6/seg6_iptunnel.c
+@@ -426,7 +426,7 @@ static int seg6_build_state(struct net *net, struct nlattr *nla,
+ }
+
+ /* verify that SRH is consistent */
+- if (!seg6_validate_srh(tuninfo->srh, tuninfo_len - sizeof(*tuninfo)))
++ if (!seg6_validate_srh(tuninfo->srh, tuninfo_len - sizeof(*tuninfo), false))
+ return -EINVAL;
+
+ newts = lwtunnel_state_alloc(tuninfo_len + sizeof(*slwt));
+diff --git a/net/ipv6/seg6_local.c b/net/ipv6/seg6_local.c
+index 52493423f329..eba23279912d 100644
+--- a/net/ipv6/seg6_local.c
++++ b/net/ipv6/seg6_local.c
+@@ -87,7 +87,7 @@ static struct ipv6_sr_hdr *get_srh(struct sk_buff *skb)
+ */
+ srh = (struct ipv6_sr_hdr *)(skb->data + srhoff);
+
+- if (!seg6_validate_srh(srh, len))
++ if (!seg6_validate_srh(srh, len, true))
+ return NULL;
+
+ return srh;
+@@ -495,7 +495,7 @@ bool seg6_bpf_has_valid_srh(struct sk_buff *skb)
+ return false;
+
+ srh->hdrlen = (u8)(srh_state->hdrlen >> 3);
+- if (!seg6_validate_srh(srh, (srh->hdrlen + 1) << 3))
++ if (!seg6_validate_srh(srh, (srh->hdrlen + 1) << 3, true))
+ return false;
+
+ srh_state->valid = true;
+@@ -670,7 +670,7 @@ static int parse_nla_srh(struct nlattr **attrs, struct seg6_local_lwt *slwt)
+ if (len < sizeof(*srh) + sizeof(struct in6_addr))
+ return -EINVAL;
+
+- if (!seg6_validate_srh(srh, len))
++ if (!seg6_validate_srh(srh, len, false))
+ return -EINVAL;
+
+ slwt->srh = kmemdup(srh, len, GFP_KERNEL);
+diff --git a/net/mptcp/subflow.c b/net/mptcp/subflow.c
+index db3e4e74e785..0112ead58fd8 100644
+--- a/net/mptcp/subflow.c
++++ b/net/mptcp/subflow.c
+@@ -424,22 +424,25 @@ static struct sock *subflow_syn_recv_sock(const struct sock *sk,
+ struct mptcp_subflow_context *listener = mptcp_subflow_ctx(sk);
+ struct mptcp_subflow_request_sock *subflow_req;
+ struct mptcp_options_received mp_opt;
+- bool fallback_is_fatal = false;
++ bool fallback, fallback_is_fatal;
+ struct sock *new_msk = NULL;
+- bool fallback = false;
+ struct sock *child;
+
+ pr_debug("listener=%p, req=%p, conn=%p", listener, req, listener->conn);
+
+- /* we need later a valid 'mp_capable' value even when options are not
+- * parsed
++ /* After child creation we must look for 'mp_capable' even when options
++ * are not parsed
+ */
+ mp_opt.mp_capable = 0;
+- if (tcp_rsk(req)->is_mptcp == 0)
++
++ /* hopefully temporary handling for MP_JOIN+syncookie */
++ subflow_req = mptcp_subflow_rsk(req);
++ fallback_is_fatal = subflow_req->mp_join;
++ fallback = !tcp_rsk(req)->is_mptcp;
++ if (fallback)
+ goto create_child;
+
+ /* if the sk is MP_CAPABLE, we try to fetch the client key */
+- subflow_req = mptcp_subflow_rsk(req);
+ if (subflow_req->mp_capable) {
+ if (TCP_SKB_CB(skb)->seq != subflow_req->ssn_offset + 1) {
+ /* here we can receive and accept an in-window,
+@@ -460,12 +463,11 @@ create_msk:
+ if (!new_msk)
+ fallback = true;
+ } else if (subflow_req->mp_join) {
+- fallback_is_fatal = true;
+ mptcp_get_options(skb, &mp_opt);
+ if (!mp_opt.mp_join ||
+ !subflow_hmac_valid(req, &mp_opt)) {
+ SUBFLOW_REQ_INC_STATS(req, MPTCP_MIB_JOINACKMAC);
+- return NULL;
++ fallback = true;
+ }
+ }
+
+diff --git a/net/rxrpc/call_event.c b/net/rxrpc/call_event.c
+index 2a65ac41055f..9ff85ee8337c 100644
+--- a/net/rxrpc/call_event.c
++++ b/net/rxrpc/call_event.c
+@@ -248,7 +248,18 @@ static void rxrpc_resend(struct rxrpc_call *call, unsigned long now_j)
+ if (anno_type != RXRPC_TX_ANNO_RETRANS)
+ continue;
+
++ /* We need to reset the retransmission state, but we need to do
++ * so before we drop the lock as a new ACK/NAK may come in and
++ * confuse things
++ */
++ annotation &= ~RXRPC_TX_ANNO_MASK;
++ annotation |= RXRPC_TX_ANNO_UNACK | RXRPC_TX_ANNO_RESENT;
++ call->rxtx_annotations[ix] = annotation;
++
+ skb = call->rxtx_buffer[ix];
++ if (!skb)
++ continue;
++
+ rxrpc_get_skb(skb, rxrpc_skb_got);
+ spin_unlock_bh(&call->lock);
+
+@@ -262,24 +273,6 @@ static void rxrpc_resend(struct rxrpc_call *call, unsigned long now_j)
+
+ rxrpc_free_skb(skb, rxrpc_skb_freed);
+ spin_lock_bh(&call->lock);
+-
+- /* We need to clear the retransmit state, but there are two
+- * things we need to be aware of: A new ACK/NAK might have been
+- * received and the packet might have been hard-ACK'd (in which
+- * case it will no longer be in the buffer).
+- */
+- if (after(seq, call->tx_hard_ack)) {
+- annotation = call->rxtx_annotations[ix];
+- anno_type = annotation & RXRPC_TX_ANNO_MASK;
+- if (anno_type == RXRPC_TX_ANNO_RETRANS ||
+- anno_type == RXRPC_TX_ANNO_NAK) {
+- annotation &= ~RXRPC_TX_ANNO_MASK;
+- annotation |= RXRPC_TX_ANNO_UNACK;
+- }
+- annotation |= RXRPC_TX_ANNO_RESENT;
+- call->rxtx_annotations[ix] = annotation;
+- }
+-
+ if (after(call->tx_hard_ack, seq))
+ seq = call->tx_hard_ack;
+ }
+diff --git a/net/tipc/msg.c b/net/tipc/msg.c
+index 3ad411884e6c..560d7a4c0fff 100644
+--- a/net/tipc/msg.c
++++ b/net/tipc/msg.c
+@@ -235,21 +235,18 @@ int tipc_msg_append(struct tipc_msg *_hdr, struct msghdr *m, int dlen,
+ msg_set_size(hdr, MIN_H_SIZE);
+ __skb_queue_tail(txq, skb);
+ total += 1;
+- if (prev)
+- msg_set_ack_required(buf_msg(prev), 0);
+- msg_set_ack_required(hdr, 1);
+ }
+ hdr = buf_msg(skb);
+ curr = msg_blocks(hdr);
+ mlen = msg_size(hdr);
+- cpy = min_t(int, rem, mss - mlen);
++ cpy = min_t(size_t, rem, mss - mlen);
+ if (cpy != copy_from_iter(skb->data + mlen, cpy, &m->msg_iter))
+ return -EFAULT;
+ msg_set_size(hdr, mlen + cpy);
+ skb_put(skb, cpy);
+ rem -= cpy;
+ total += msg_blocks(hdr) - curr;
+- } while (rem);
++ } while (rem > 0);
+ return total - accounted;
+ }
+
+diff --git a/net/tipc/msg.h b/net/tipc/msg.h
+index 871feadbbc19..a4e2029170b1 100644
+--- a/net/tipc/msg.h
++++ b/net/tipc/msg.h
+@@ -321,9 +321,19 @@ static inline int msg_ack_required(struct tipc_msg *m)
+ return msg_bits(m, 0, 18, 1);
+ }
+
+-static inline void msg_set_ack_required(struct tipc_msg *m, u32 d)
++static inline void msg_set_ack_required(struct tipc_msg *m)
+ {
+- msg_set_bits(m, 0, 18, 1, d);
++ msg_set_bits(m, 0, 18, 1, 1);
++}
++
++static inline int msg_nagle_ack(struct tipc_msg *m)
++{
++ return msg_bits(m, 0, 18, 1);
++}
++
++static inline void msg_set_nagle_ack(struct tipc_msg *m)
++{
++ msg_set_bits(m, 0, 18, 1, 1);
+ }
+
+ static inline bool msg_is_rcast(struct tipc_msg *m)
+diff --git a/net/tipc/socket.c b/net/tipc/socket.c
+index e370ad0edd76..f02f2abf6e3c 100644
+--- a/net/tipc/socket.c
++++ b/net/tipc/socket.c
+@@ -48,6 +48,8 @@
+ #include "group.h"
+ #include "trace.h"
+
++#define NAGLE_START_INIT 4
++#define NAGLE_START_MAX 1024
+ #define CONN_TIMEOUT_DEFAULT 8000 /* default connect timeout = 8s */
+ #define CONN_PROBING_INTV msecs_to_jiffies(3600000) /* [ms] => 1 h */
+ #define TIPC_FWD_MSG 1
+@@ -119,7 +121,10 @@ struct tipc_sock {
+ struct rcu_head rcu;
+ struct tipc_group *group;
+ u32 oneway;
++ u32 nagle_start;
+ u16 snd_backlog;
++ u16 msg_acc;
++ u16 pkt_cnt;
+ bool expect_ack;
+ bool nodelay;
+ bool group_is_open;
+@@ -143,7 +148,7 @@ static int tipc_sk_insert(struct tipc_sock *tsk);
+ static void tipc_sk_remove(struct tipc_sock *tsk);
+ static int __tipc_sendstream(struct socket *sock, struct msghdr *m, size_t dsz);
+ static int __tipc_sendmsg(struct socket *sock, struct msghdr *m, size_t dsz);
+-static void tipc_sk_push_backlog(struct tipc_sock *tsk);
++static void tipc_sk_push_backlog(struct tipc_sock *tsk, bool nagle_ack);
+
+ static const struct proto_ops packet_ops;
+ static const struct proto_ops stream_ops;
+@@ -474,6 +479,7 @@ static int tipc_sk_create(struct net *net, struct socket *sock,
+ tsk = tipc_sk(sk);
+ tsk->max_pkt = MAX_PKT_DEFAULT;
+ tsk->maxnagle = 0;
++ tsk->nagle_start = NAGLE_START_INIT;
+ INIT_LIST_HEAD(&tsk->publications);
+ INIT_LIST_HEAD(&tsk->cong_links);
+ msg = &tsk->phdr;
+@@ -541,7 +547,7 @@ static void __tipc_shutdown(struct socket *sock, int error)
+ !tsk_conn_cong(tsk)));
+
+ /* Push out delayed messages if in Nagle mode */
+- tipc_sk_push_backlog(tsk);
++ tipc_sk_push_backlog(tsk, false);
+ /* Remove pending SYN */
+ __skb_queue_purge(&sk->sk_write_queue);
+
+@@ -1252,14 +1258,37 @@ void tipc_sk_mcast_rcv(struct net *net, struct sk_buff_head *arrvq,
+ /* tipc_sk_push_backlog(): send accumulated buffers in socket write queue
+ * when socket is in Nagle mode
+ */
+-static void tipc_sk_push_backlog(struct tipc_sock *tsk)
++static void tipc_sk_push_backlog(struct tipc_sock *tsk, bool nagle_ack)
+ {
+ struct sk_buff_head *txq = &tsk->sk.sk_write_queue;
++ struct sk_buff *skb = skb_peek_tail(txq);
+ struct net *net = sock_net(&tsk->sk);
+ u32 dnode = tsk_peer_node(tsk);
+- struct sk_buff *skb = skb_peek(txq);
+ int rc;
+
++ if (nagle_ack) {
++ tsk->pkt_cnt += skb_queue_len(txq);
++ if (!tsk->pkt_cnt || tsk->msg_acc / tsk->pkt_cnt < 2) {
++ tsk->oneway = 0;
++ if (tsk->nagle_start < NAGLE_START_MAX)
++ tsk->nagle_start *= 2;
++ tsk->expect_ack = false;
++ pr_debug("tsk %10u: bad nagle %u -> %u, next start %u!\n",
++ tsk->portid, tsk->msg_acc, tsk->pkt_cnt,
++ tsk->nagle_start);
++ } else {
++ tsk->nagle_start = NAGLE_START_INIT;
++ if (skb) {
++ msg_set_ack_required(buf_msg(skb));
++ tsk->expect_ack = true;
++ } else {
++ tsk->expect_ack = false;
++ }
++ }
++ tsk->msg_acc = 0;
++ tsk->pkt_cnt = 0;
++ }
++
+ if (!skb || tsk->cong_link_cnt)
+ return;
+
+@@ -1267,9 +1296,10 @@ static void tipc_sk_push_backlog(struct tipc_sock *tsk)
+ if (msg_is_syn(buf_msg(skb)))
+ return;
+
++ if (tsk->msg_acc)
++ tsk->pkt_cnt += skb_queue_len(txq);
+ tsk->snt_unacked += tsk->snd_backlog;
+ tsk->snd_backlog = 0;
+- tsk->expect_ack = true;
+ rc = tipc_node_xmit(net, txq, dnode, tsk->portid);
+ if (rc == -ELINKCONG)
+ tsk->cong_link_cnt = 1;
+@@ -1322,8 +1352,7 @@ static void tipc_sk_conn_proto_rcv(struct tipc_sock *tsk, struct sk_buff *skb,
+ return;
+ } else if (mtyp == CONN_ACK) {
+ was_cong = tsk_conn_cong(tsk);
+- tsk->expect_ack = false;
+- tipc_sk_push_backlog(tsk);
++ tipc_sk_push_backlog(tsk, msg_nagle_ack(hdr));
+ tsk->snt_unacked -= msg_conn_ack(hdr);
+ if (tsk->peer_caps & TIPC_BLOCK_FLOWCTL)
+ tsk->snd_win = msg_adv_win(hdr);
+@@ -1516,6 +1545,7 @@ static int __tipc_sendstream(struct socket *sock, struct msghdr *m, size_t dlen)
+ struct tipc_sock *tsk = tipc_sk(sk);
+ struct tipc_msg *hdr = &tsk->phdr;
+ struct net *net = sock_net(sk);
++ struct sk_buff *skb;
+ u32 dnode = tsk_peer_node(tsk);
+ int maxnagle = tsk->maxnagle;
+ int maxpkt = tsk->max_pkt;
+@@ -1544,17 +1574,30 @@ static int __tipc_sendstream(struct socket *sock, struct msghdr *m, size_t dlen)
+ break;
+ send = min_t(size_t, dlen - sent, TIPC_MAX_USER_MSG_SIZE);
+ blocks = tsk->snd_backlog;
+- if (tsk->oneway++ >= 4 && send <= maxnagle) {
++ if (tsk->oneway++ >= tsk->nagle_start && maxnagle &&
++ send <= maxnagle) {
+ rc = tipc_msg_append(hdr, m, send, maxnagle, txq);
+ if (unlikely(rc < 0))
+ break;
+ blocks += rc;
++ tsk->msg_acc++;
+ if (blocks <= 64 && tsk->expect_ack) {
+ tsk->snd_backlog = blocks;
+ sent += send;
+ break;
++ } else if (blocks > 64) {
++ tsk->pkt_cnt += skb_queue_len(txq);
++ } else {
++ skb = skb_peek_tail(txq);
++ if (skb) {
++ msg_set_ack_required(buf_msg(skb));
++ tsk->expect_ack = true;
++ } else {
++ tsk->expect_ack = false;
++ }
++ tsk->msg_acc = 0;
++ tsk->pkt_cnt = 0;
+ }
+- tsk->expect_ack = true;
+ } else {
+ rc = tipc_msg_build(hdr, m, sent, send, maxpkt, txq);
+ if (unlikely(rc != send))
+@@ -2091,7 +2134,7 @@ static void tipc_sk_proto_rcv(struct sock *sk,
+ smp_wmb();
+ tsk->cong_link_cnt--;
+ wakeup = true;
+- tipc_sk_push_backlog(tsk);
++ tipc_sk_push_backlog(tsk, false);
+ break;
+ case GROUP_PROTOCOL:
+ tipc_group_proto_rcv(grp, &wakeup, hdr, inputq, xmitq);
+@@ -2180,7 +2223,7 @@ static bool tipc_sk_filter_connect(struct tipc_sock *tsk, struct sk_buff *skb,
+ return false;
+ case TIPC_ESTABLISHED:
+ if (!skb_queue_empty(&sk->sk_write_queue))
+- tipc_sk_push_backlog(tsk);
++ tipc_sk_push_backlog(tsk, false);
+ /* Accept only connection-based messages sent by peer */
+ if (likely(con_msg && !err && pport == oport &&
+ pnode == onode)) {
+@@ -2188,8 +2231,10 @@ static bool tipc_sk_filter_connect(struct tipc_sock *tsk, struct sk_buff *skb,
+ struct sk_buff *skb;
+
+ skb = tipc_sk_build_ack(tsk);
+- if (skb)
++ if (skb) {
++ msg_set_nagle_ack(buf_msg(skb));
+ __skb_queue_tail(xmitq, skb);
++ }
+ }
+ return true;
+ }
+diff --git a/samples/vfs/test-statx.c b/samples/vfs/test-statx.c
+index a3d68159fb51..507f09c38b49 100644
+--- a/samples/vfs/test-statx.c
++++ b/samples/vfs/test-statx.c
+@@ -23,6 +23,8 @@
+ #include <linux/fcntl.h>
+ #define statx foo
+ #define statx_timestamp foo_timestamp
++struct statx;
++struct statx_timestamp;
+ #include <sys/stat.h>
+ #undef statx
+ #undef statx_timestamp
+diff --git a/security/security.c b/security/security.c
+index 51de970fbb1e..8b4d342ade5e 100644
+--- a/security/security.c
++++ b/security/security.c
+@@ -1409,7 +1409,22 @@ EXPORT_SYMBOL(security_inode_copy_up);
+
+ int security_inode_copy_up_xattr(const char *name)
+ {
+- return call_int_hook(inode_copy_up_xattr, -EOPNOTSUPP, name);
++ struct security_hook_list *hp;
++ int rc;
++
++ /*
++ * The implementation can return 0 (accept the xattr), 1 (discard the
++ * xattr), -EOPNOTSUPP if it does not know anything about the xattr or
++ * any other error code incase of an error.
++ */
++ hlist_for_each_entry(hp,
++ &security_hook_heads.inode_copy_up_xattr, list) {
++ rc = hp->hook.inode_copy_up_xattr(name);
++ if (rc != LSM_RET_DEFAULT(inode_copy_up_xattr))
++ return rc;
++ }
++
++ return LSM_RET_DEFAULT(inode_copy_up_xattr);
+ }
+ EXPORT_SYMBOL(security_inode_copy_up_xattr);
+
+diff --git a/sound/usb/card.h b/sound/usb/card.h
+index d6219fba9699..f39f23e3525d 100644
+--- a/sound/usb/card.h
++++ b/sound/usb/card.h
+@@ -84,10 +84,6 @@ struct snd_usb_endpoint {
+ dma_addr_t sync_dma; /* DMA address of syncbuf */
+
+ unsigned int pipe; /* the data i/o pipe */
+- unsigned int framesize[2]; /* small/large frame sizes in samples */
+- unsigned int sample_rem; /* remainder from division fs/fps */
+- unsigned int sample_accum; /* sample accumulator */
+- unsigned int fps; /* frames per second */
+ unsigned int freqn; /* nominal sampling rate in fs/fps in Q16.16 format */
+ unsigned int freqm; /* momentary sampling rate in fs/fps in Q16.16 format */
+ int freqshift; /* how much to shift the feedback value to get Q16.16 */
+diff --git a/sound/usb/endpoint.c b/sound/usb/endpoint.c
+index 9bea7d3f99f8..87cc249a31b9 100644
+--- a/sound/usb/endpoint.c
++++ b/sound/usb/endpoint.c
+@@ -124,12 +124,12 @@ int snd_usb_endpoint_implicit_feedback_sink(struct snd_usb_endpoint *ep)
+
+ /*
+ * For streaming based on information derived from sync endpoints,
+- * prepare_outbound_urb_sizes() will call slave_next_packet_size() to
++ * prepare_outbound_urb_sizes() will call next_packet_size() to
+ * determine the number of samples to be sent in the next packet.
+ *
+- * For implicit feedback, slave_next_packet_size() is unused.
++ * For implicit feedback, next_packet_size() is unused.
+ */
+-int snd_usb_endpoint_slave_next_packet_size(struct snd_usb_endpoint *ep)
++int snd_usb_endpoint_next_packet_size(struct snd_usb_endpoint *ep)
+ {
+ unsigned long flags;
+ int ret;
+@@ -146,29 +146,6 @@ int snd_usb_endpoint_slave_next_packet_size(struct snd_usb_endpoint *ep)
+ return ret;
+ }
+
+-/*
+- * For adaptive and synchronous endpoints, prepare_outbound_urb_sizes()
+- * will call next_packet_size() to determine the number of samples to be
+- * sent in the next packet.
+- */
+-int snd_usb_endpoint_next_packet_size(struct snd_usb_endpoint *ep)
+-{
+- int ret;
+-
+- if (ep->fill_max)
+- return ep->maxframesize;
+-
+- ep->sample_accum += ep->sample_rem;
+- if (ep->sample_accum >= ep->fps) {
+- ep->sample_accum -= ep->fps;
+- ret = ep->framesize[1];
+- } else {
+- ret = ep->framesize[0];
+- }
+-
+- return ret;
+-}
+-
+ static void retire_outbound_urb(struct snd_usb_endpoint *ep,
+ struct snd_urb_ctx *urb_ctx)
+ {
+@@ -213,8 +190,6 @@ static void prepare_silent_urb(struct snd_usb_endpoint *ep,
+
+ if (ctx->packet_size[i])
+ counts = ctx->packet_size[i];
+- else if (ep->sync_master)
+- counts = snd_usb_endpoint_slave_next_packet_size(ep);
+ else
+ counts = snd_usb_endpoint_next_packet_size(ep);
+
+@@ -1086,17 +1061,10 @@ int snd_usb_endpoint_set_params(struct snd_usb_endpoint *ep,
+ ep->maxpacksize = fmt->maxpacksize;
+ ep->fill_max = !!(fmt->attributes & UAC_EP_CS_ATTR_FILL_MAX);
+
+- if (snd_usb_get_speed(ep->chip->dev) == USB_SPEED_FULL) {
++ if (snd_usb_get_speed(ep->chip->dev) == USB_SPEED_FULL)
+ ep->freqn = get_usb_full_speed_rate(rate);
+- ep->fps = 1000;
+- } else {
++ else
+ ep->freqn = get_usb_high_speed_rate(rate);
+- ep->fps = 8000;
+- }
+-
+- ep->sample_rem = rate % ep->fps;
+- ep->framesize[0] = rate / ep->fps;
+- ep->framesize[1] = (rate + (ep->fps - 1)) / ep->fps;
+
+ /* calculate the frequency in 16.16 format */
+ ep->freqm = ep->freqn;
+@@ -1155,7 +1123,6 @@ int snd_usb_endpoint_start(struct snd_usb_endpoint *ep)
+ ep->active_mask = 0;
+ ep->unlink_mask = 0;
+ ep->phase = 0;
+- ep->sample_accum = 0;
+
+ snd_usb_endpoint_start_quirk(ep);
+
+diff --git a/sound/usb/endpoint.h b/sound/usb/endpoint.h
+index d23fa0a8c11b..63a39d4fa8d8 100644
+--- a/sound/usb/endpoint.h
++++ b/sound/usb/endpoint.h
+@@ -28,7 +28,6 @@ void snd_usb_endpoint_release(struct snd_usb_endpoint *ep);
+ void snd_usb_endpoint_free(struct snd_usb_endpoint *ep);
+
+ int snd_usb_endpoint_implicit_feedback_sink(struct snd_usb_endpoint *ep);
+-int snd_usb_endpoint_slave_next_packet_size(struct snd_usb_endpoint *ep);
+ int snd_usb_endpoint_next_packet_size(struct snd_usb_endpoint *ep);
+
+ void snd_usb_handle_sync_urb(struct snd_usb_endpoint *ep,
+diff --git a/sound/usb/pcm.c b/sound/usb/pcm.c
+index 39aec83f8aca..c73efdf7545e 100644
+--- a/sound/usb/pcm.c
++++ b/sound/usb/pcm.c
+@@ -1585,8 +1585,6 @@ static void prepare_playback_urb(struct snd_usb_substream *subs,
+ for (i = 0; i < ctx->packets; i++) {
+ if (ctx->packet_size[i])
+ counts = ctx->packet_size[i];
+- else if (ep->sync_master)
+- counts = snd_usb_endpoint_slave_next_packet_size(ep);
+ else
+ counts = snd_usb_endpoint_next_packet_size(ep);
+
+diff --git a/tools/lib/traceevent/event-parse.c b/tools/lib/traceevent/event-parse.c
+index e1bd2a93c6db..010e60d5a081 100644
+--- a/tools/lib/traceevent/event-parse.c
++++ b/tools/lib/traceevent/event-parse.c
+@@ -1425,13 +1425,28 @@ static unsigned int type_size(const char *name)
+ return 0;
+ }
+
++static int append(char **buf, const char *delim, const char *str)
++{
++ char *new_buf;
++
++ new_buf = realloc(*buf, strlen(*buf) + strlen(delim) + strlen(str) + 1);
++ if (!new_buf)
++ return -1;
++ strcat(new_buf, delim);
++ strcat(new_buf, str);
++ *buf = new_buf;
++ return 0;
++}
++
+ static int event_read_fields(struct tep_event *event, struct tep_format_field **fields)
+ {
+ struct tep_format_field *field = NULL;
+ enum tep_event_type type;
+ char *token;
+ char *last_token;
++ char *delim = " ";
+ int count = 0;
++ int ret;
+
+ do {
+ unsigned int size_dynamic = 0;
+@@ -1490,24 +1505,51 @@ static int event_read_fields(struct tep_event *event, struct tep_format_field **
+ field->flags |= TEP_FIELD_IS_POINTER;
+
+ if (field->type) {
+- char *new_type;
+- new_type = realloc(field->type,
+- strlen(field->type) +
+- strlen(last_token) + 2);
+- if (!new_type) {
+- free(last_token);
+- goto fail;
+- }
+- field->type = new_type;
+- strcat(field->type, " ");
+- strcat(field->type, last_token);
++ ret = append(&field->type, delim, last_token);
+ free(last_token);
++ if (ret < 0)
++ goto fail;
+ } else
+ field->type = last_token;
+ last_token = token;
++ delim = " ";
+ continue;
+ }
+
++ /* Handle __attribute__((user)) */
++ if ((type == TEP_EVENT_DELIM) &&
++ strcmp("__attribute__", last_token) == 0 &&
++ token[0] == '(') {
++ int depth = 1;
++ int ret;
++
++ ret = append(&field->type, " ", last_token);
++ ret |= append(&field->type, "", "(");
++ if (ret < 0)
++ goto fail;
++
++ delim = " ";
++ while ((type = read_token(&token)) != TEP_EVENT_NONE) {
++ if (type == TEP_EVENT_DELIM) {
++ if (token[0] == '(')
++ depth++;
++ else if (token[0] == ')')
++ depth--;
++ if (!depth)
++ break;
++ ret = append(&field->type, "", token);
++ delim = "";
++ } else {
++ ret = append(&field->type, delim, token);
++ delim = " ";
++ }
++ if (ret < 0)
++ goto fail;
++ free(last_token);
++ last_token = token;
++ }
++ continue;
++ }
+ break;
+ }
+
+@@ -1523,8 +1565,6 @@ static int event_read_fields(struct tep_event *event, struct tep_format_field **
+ if (strcmp(token, "[") == 0) {
+ enum tep_event_type last_type = type;
+ char *brackets = token;
+- char *new_brackets;
+- int len;
+
+ field->flags |= TEP_FIELD_IS_ARRAY;
+
+@@ -1536,29 +1576,27 @@ static int event_read_fields(struct tep_event *event, struct tep_format_field **
+ field->arraylen = 0;
+
+ while (strcmp(token, "]") != 0) {
++ const char *delim;
++
+ if (last_type == TEP_EVENT_ITEM &&
+ type == TEP_EVENT_ITEM)
+- len = 2;
++ delim = " ";
+ else
+- len = 1;
++ delim = "";
++
+ last_type = type;
+
+- new_brackets = realloc(brackets,
+- strlen(brackets) +
+- strlen(token) + len);
+- if (!new_brackets) {
++ ret = append(&brackets, delim, token);
++ if (ret < 0) {
+ free(brackets);
+ goto fail;
+ }
+- brackets = new_brackets;
+- if (len == 2)
+- strcat(brackets, " ");
+- strcat(brackets, token);
+ /* We only care about the last token */
+ field->arraylen = strtoul(token, NULL, 0);
+ free_token(token);
+ type = read_token(&token);
+ if (type == TEP_EVENT_NONE) {
++ free(brackets);
+ do_warning_event(event, "failed to find token");
+ goto fail;
+ }
+@@ -1566,13 +1604,11 @@ static int event_read_fields(struct tep_event *event, struct tep_format_field **
+
+ free_token(token);
+
+- new_brackets = realloc(brackets, strlen(brackets) + 2);
+- if (!new_brackets) {
++ ret = append(&brackets, "", "]");
++ if (ret < 0) {
+ free(brackets);
+ goto fail;
+ }
+- brackets = new_brackets;
+- strcat(brackets, "]");
+
+ /* add brackets to type */
+
+@@ -1582,34 +1618,23 @@ static int event_read_fields(struct tep_event *event, struct tep_format_field **
+ * the format: type [] item;
+ */
+ if (type == TEP_EVENT_ITEM) {
+- char *new_type;
+- new_type = realloc(field->type,
+- strlen(field->type) +
+- strlen(field->name) +
+- strlen(brackets) + 2);
+- if (!new_type) {
++ ret = append(&field->type, " ", field->name);
++ if (ret < 0) {
+ free(brackets);
+ goto fail;
+ }
+- field->type = new_type;
+- strcat(field->type, " ");
+- strcat(field->type, field->name);
++ ret = append(&field->type, "", brackets);
++
+ size_dynamic = type_size(field->name);
+ free_token(field->name);
+- strcat(field->type, brackets);
+ field->name = field->alias = token;
+ type = read_token(&token);
+ } else {
+- char *new_type;
+- new_type = realloc(field->type,
+- strlen(field->type) +
+- strlen(brackets) + 1);
+- if (!new_type) {
++ ret = append(&field->type, "", brackets);
++ if (ret < 0) {
+ free(brackets);
+ goto fail;
+ }
+- field->type = new_type;
+- strcat(field->type, brackets);
+ }
+ free(brackets);
+ }
+@@ -2046,19 +2071,16 @@ process_op(struct tep_event *event, struct tep_print_arg *arg, char **tok)
+ /* could just be a type pointer */
+ if ((strcmp(arg->op.op, "*") == 0) &&
+ type == TEP_EVENT_DELIM && (strcmp(token, ")") == 0)) {
+- char *new_atom;
++ int ret;
+
+ if (left->type != TEP_PRINT_ATOM) {
+ do_warning_event(event, "bad pointer type");
+ goto out_free;
+ }
+- new_atom = realloc(left->atom.atom,
+- strlen(left->atom.atom) + 3);
+- if (!new_atom)
++ ret = append(&left->atom.atom, " ", "*");
++ if (ret < 0)
+ goto out_warn_free;
+
+- left->atom.atom = new_atom;
+- strcat(left->atom.atom, " *");
+ free(arg->op.op);
+ *arg = *left;
+ free(left);
+@@ -3151,18 +3173,15 @@ process_arg_token(struct tep_event *event, struct tep_print_arg *arg,
+ }
+ /* atoms can be more than one token long */
+ while (type == TEP_EVENT_ITEM) {
+- char *new_atom;
+- new_atom = realloc(atom,
+- strlen(atom) + strlen(token) + 2);
+- if (!new_atom) {
++ int ret;
++
++ ret = append(&atom, " ", token);
++ if (ret < 0) {
+ free(atom);
+ *tok = NULL;
+ free_token(token);
+ return TEP_EVENT_ERROR;
+ }
+- atom = new_atom;
+- strcat(atom, " ");
+- strcat(atom, token);
+ free_token(token);
+ type = read_token_item(&token);
+ }
+diff --git a/tools/testing/selftests/tpm2/test_smoke.sh b/tools/testing/selftests/tpm2/test_smoke.sh
+index 8155c2ea7ccb..a5e994a68d88 100755
+--- a/tools/testing/selftests/tpm2/test_smoke.sh
++++ b/tools/testing/selftests/tpm2/test_smoke.sh
+@@ -1,10 +1,5 @@
+-#!/bin/bash
++#!/bin/sh
+ # SPDX-License-Identifier: (GPL-2.0 OR BSD-3-Clause)
+
+ python -m unittest -v tpm2_tests.SmokeTest
+ python -m unittest -v tpm2_tests.AsyncTest
+-
+-CLEAR_CMD=$(which tpm2_clear)
+-if [ -n $CLEAR_CMD ]; then
+- tpm2_clear -T device
+-fi
+diff --git a/tools/testing/selftests/tpm2/test_space.sh b/tools/testing/selftests/tpm2/test_space.sh
+index a6f5e346635e..3ded3011b642 100755
+--- a/tools/testing/selftests/tpm2/test_space.sh
++++ b/tools/testing/selftests/tpm2/test_space.sh
+@@ -1,4 +1,4 @@
+-#!/bin/bash
++#!/bin/sh
+ # SPDX-License-Identifier: (GPL-2.0 OR BSD-3-Clause)
+
+ python -m unittest -v tpm2_tests.SpaceTest
* [gentoo-commits] proj/linux-patches:5.7 commit in: /
@ 2020-07-16 11:22 Mike Pagano
From: Mike Pagano @ 2020-07-16 11:22 UTC
To: gentoo-commits
commit: 57098bdf4beb664bb403c1daad43489aaa7bcc40
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Jul 16 11:22:25 2020 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Jul 16 11:22:25 2020 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=57098bdf
Linux patch 5.7.9
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1008_linux-5.7.9.patch | 7856 ++++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 7860 insertions(+)
diff --git a/0000_README b/0000_README
index 46bac07..527d714 100644
--- a/0000_README
+++ b/0000_README
@@ -75,6 +75,10 @@ Patch: 1007_linux-5.7.8.patch
From: http://www.kernel.org
Desc: Linux 5.7.8
+Patch: 1008_linux-5.7.9.patch
+From: http://www.kernel.org
+Desc: Linux 5.7.9
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1008_linux-5.7.9.patch b/1008_linux-5.7.9.patch
new file mode 100644
index 0000000..ff65b1b
--- /dev/null
+++ b/1008_linux-5.7.9.patch
@@ -0,0 +1,7856 @@
+diff --git a/Makefile b/Makefile
+index 6163d607ca72..fb3a747575b5 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 7
+-SUBLEVEL = 8
++SUBLEVEL = 9
+ EXTRAVERSION =
+ NAME = Kleptomaniac Octopus
+
+diff --git a/arch/arc/include/asm/elf.h b/arch/arc/include/asm/elf.h
+index c77a0e3671ac..0284ace0e1ab 100644
+--- a/arch/arc/include/asm/elf.h
++++ b/arch/arc/include/asm/elf.h
+@@ -19,7 +19,7 @@
+ #define R_ARC_32_PCREL 0x31
+
+ /*to set parameters in the core dumps */
+-#define ELF_ARCH EM_ARCOMPACT
++#define ELF_ARCH EM_ARC_INUSE
+ #define ELF_CLASS ELFCLASS32
+
+ #ifdef CONFIG_CPU_BIG_ENDIAN
+diff --git a/arch/arc/kernel/entry.S b/arch/arc/kernel/entry.S
+index 60406ec62eb8..ea00c8a17f07 100644
+--- a/arch/arc/kernel/entry.S
++++ b/arch/arc/kernel/entry.S
+@@ -165,7 +165,6 @@ END(EV_Extension)
+ tracesys:
+ ; save EFA in case tracer wants the PC of traced task
+ ; using ERET won't work since next-PC has already committed
+- lr r12, [efa]
+ GET_CURR_TASK_FIELD_PTR TASK_THREAD, r11
+ st r12, [r11, THREAD_FAULT_ADDR] ; thread.fault_address
+
+@@ -208,15 +207,9 @@ tracesys_exit:
+ ; Breakpoint TRAP
+ ; ---------------------------------------------
+ trap_with_param:
+-
+- ; stop_pc info by gdb needs this info
+- lr r0, [efa]
++ mov r0, r12 ; EFA in case ptracer/gdb wants stop_pc
+ mov r1, sp
+
+- ; Now that we have read EFA, it is safe to do "fake" rtie
+- ; and get out of CPU exception mode
+- FAKE_RET_FROM_EXCPN
+-
+ ; Save callee regs in case gdb wants to have a look
+ ; SP will grow up by size of CALLEE Reg-File
+ ; NOTE: clobbers r12
+@@ -243,6 +236,10 @@ ENTRY(EV_Trap)
+
+ EXCEPTION_PROLOGUE
+
++ lr r12, [efa]
++
++ FAKE_RET_FROM_EXCPN
++
+ ;============ TRAP 1 :breakpoints
+ ; Check ECR for trap with arg (PROLOGUE ensures r10 has ECR)
+ bmsk.f 0, r10, 7
+@@ -250,9 +247,6 @@ ENTRY(EV_Trap)
+
+ ;============ TRAP (no param): syscall top level
+
+- ; First return from Exception to pure K mode (Exception/IRQs renabled)
+- FAKE_RET_FROM_EXCPN
+-
+ ; If syscall tracing ongoing, invoke pre-post-hooks
+ GET_CURR_THR_INFO_FLAGS r10
+ btst r10, TIF_SYSCALL_TRACE
+diff --git a/arch/arm/boot/dts/motorola-cpcap-mapphone.dtsi b/arch/arm/boot/dts/motorola-cpcap-mapphone.dtsi
+index e39eee628afd..08a7d3ce383f 100644
+--- a/arch/arm/boot/dts/motorola-cpcap-mapphone.dtsi
++++ b/arch/arm/boot/dts/motorola-cpcap-mapphone.dtsi
+@@ -13,8 +13,10 @@
+ #interrupt-cells = <2>;
+ #address-cells = <1>;
+ #size-cells = <0>;
+- spi-max-frequency = <3000000>;
++ spi-max-frequency = <9600000>;
+ spi-cs-high;
++ spi-cpol;
++ spi-cpha;
+
+ cpcap_adc: adc {
+ compatible = "motorola,mapphone-cpcap-adc";
+diff --git a/arch/arm/mach-imx/pm-imx6.c b/arch/arm/mach-imx/pm-imx6.c
+index dd34dff13762..40c74b4c4d73 100644
+--- a/arch/arm/mach-imx/pm-imx6.c
++++ b/arch/arm/mach-imx/pm-imx6.c
+@@ -493,14 +493,14 @@ static int __init imx6q_suspend_init(const struct imx6_pm_socdata *socdata)
+ if (!ocram_pool) {
+ pr_warn("%s: ocram pool unavailable!\n", __func__);
+ ret = -ENODEV;
+- goto put_node;
++ goto put_device;
+ }
+
+ ocram_base = gen_pool_alloc(ocram_pool, MX6Q_SUSPEND_OCRAM_SIZE);
+ if (!ocram_base) {
+ pr_warn("%s: unable to alloc ocram!\n", __func__);
+ ret = -ENOMEM;
+- goto put_node;
++ goto put_device;
+ }
+
+ ocram_pbase = gen_pool_virt_to_phys(ocram_pool, ocram_base);
+@@ -523,7 +523,7 @@ static int __init imx6q_suspend_init(const struct imx6_pm_socdata *socdata)
+ ret = imx6_pm_get_base(&pm_info->mmdc_base, socdata->mmdc_compat);
+ if (ret) {
+ pr_warn("%s: failed to get mmdc base %d!\n", __func__, ret);
+- goto put_node;
++ goto put_device;
+ }
+
+ ret = imx6_pm_get_base(&pm_info->src_base, socdata->src_compat);
+@@ -570,7 +570,7 @@ static int __init imx6q_suspend_init(const struct imx6_pm_socdata *socdata)
+ &imx6_suspend,
+ MX6Q_SUSPEND_OCRAM_SIZE - sizeof(*pm_info));
+
+- goto put_node;
++ goto put_device;
+
+ pl310_cache_map_failed:
+ iounmap(pm_info->gpc_base.vbase);
+@@ -580,6 +580,8 @@ iomuxc_map_failed:
+ iounmap(pm_info->src_base.vbase);
+ src_map_failed:
+ iounmap(pm_info->mmdc_base.vbase);
++put_device:
++ put_device(&pdev->dev);
+ put_node:
+ of_node_put(node);
+
+diff --git a/arch/arm64/include/asm/arch_gicv3.h b/arch/arm64/include/asm/arch_gicv3.h
+index a358e97572c1..6647ae4f0231 100644
+--- a/arch/arm64/include/asm/arch_gicv3.h
++++ b/arch/arm64/include/asm/arch_gicv3.h
+@@ -109,7 +109,7 @@ static inline u32 gic_read_pmr(void)
+ return read_sysreg_s(SYS_ICC_PMR_EL1);
+ }
+
+-static inline void gic_write_pmr(u32 val)
++static __always_inline void gic_write_pmr(u32 val)
+ {
+ write_sysreg_s(val, SYS_ICC_PMR_EL1);
+ }
+diff --git a/arch/arm64/include/asm/arch_timer.h b/arch/arm64/include/asm/arch_timer.h
+index 7ae54d7d333a..9f0ec21d6327 100644
+--- a/arch/arm64/include/asm/arch_timer.h
++++ b/arch/arm64/include/asm/arch_timer.h
+@@ -58,6 +58,7 @@ struct arch_timer_erratum_workaround {
+ u64 (*read_cntvct_el0)(void);
+ int (*set_next_event_phys)(unsigned long, struct clock_event_device *);
+ int (*set_next_event_virt)(unsigned long, struct clock_event_device *);
++ bool disable_compat_vdso;
+ };
+
+ DECLARE_PER_CPU(const struct arch_timer_erratum_workaround *,
+diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
+index afe08251ff95..9e2d2f04d93b 100644
+--- a/arch/arm64/include/asm/cpufeature.h
++++ b/arch/arm64/include/asm/cpufeature.h
+@@ -668,7 +668,7 @@ static inline bool system_supports_generic_auth(void)
+ cpus_have_const_cap(ARM64_HAS_GENERIC_AUTH);
+ }
+
+-static inline bool system_uses_irq_prio_masking(void)
++static __always_inline bool system_uses_irq_prio_masking(void)
+ {
+ return IS_ENABLED(CONFIG_ARM64_PSEUDO_NMI) &&
+ cpus_have_const_cap(ARM64_HAS_IRQ_PRIO_MASKING);
+diff --git a/arch/arm64/include/asm/pgtable-prot.h b/arch/arm64/include/asm/pgtable-prot.h
+index 1305e28225fc..6ffcb290b8aa 100644
+--- a/arch/arm64/include/asm/pgtable-prot.h
++++ b/arch/arm64/include/asm/pgtable-prot.h
+@@ -56,7 +56,7 @@ extern bool arm64_use_ng_mappings;
+ #define PAGE_HYP __pgprot(_HYP_PAGE_DEFAULT | PTE_HYP | PTE_HYP_XN)
+ #define PAGE_HYP_EXEC __pgprot(_HYP_PAGE_DEFAULT | PTE_HYP | PTE_RDONLY)
+ #define PAGE_HYP_RO __pgprot(_HYP_PAGE_DEFAULT | PTE_HYP | PTE_RDONLY | PTE_HYP_XN)
+-#define PAGE_HYP_DEVICE __pgprot(PROT_DEVICE_nGnRE | PTE_HYP)
++#define PAGE_HYP_DEVICE __pgprot(_PROT_DEFAULT | PTE_ATTRINDX(MT_DEVICE_nGnRE) | PTE_HYP | PTE_HYP_XN)
+
+ #define PAGE_S2_MEMATTR(attr) \
+ ({ \
+diff --git a/arch/arm64/include/asm/vdso/clocksource.h b/arch/arm64/include/asm/vdso/clocksource.h
+index df6ea65c1dec..b054d9febfb5 100644
+--- a/arch/arm64/include/asm/vdso/clocksource.h
++++ b/arch/arm64/include/asm/vdso/clocksource.h
+@@ -2,7 +2,10 @@
+ #ifndef __ASM_VDSOCLOCKSOURCE_H
+ #define __ASM_VDSOCLOCKSOURCE_H
+
+-#define VDSO_ARCH_CLOCKMODES \
+- VDSO_CLOCKMODE_ARCHTIMER
++#define VDSO_ARCH_CLOCKMODES \
++ /* vdso clocksource for both 32 and 64bit tasks */ \
++ VDSO_CLOCKMODE_ARCHTIMER, \
++ /* vdso clocksource for 64bit tasks only */ \
++ VDSO_CLOCKMODE_ARCHTIMER_NOCOMPAT
+
+ #endif
+diff --git a/arch/arm64/include/asm/vdso/compat_gettimeofday.h b/arch/arm64/include/asm/vdso/compat_gettimeofday.h
+index b6907ae78e53..9a625e8947ff 100644
+--- a/arch/arm64/include/asm/vdso/compat_gettimeofday.h
++++ b/arch/arm64/include/asm/vdso/compat_gettimeofday.h
+@@ -111,7 +111,7 @@ static __always_inline u64 __arch_get_hw_counter(s32 clock_mode)
+ * update. Return something. Core will do another round and then
+ * see the mode change and fallback to the syscall.
+ */
+- if (clock_mode == VDSO_CLOCKMODE_NONE)
++ if (clock_mode != VDSO_CLOCKMODE_ARCHTIMER)
+ return 0;
+
+ /*
+@@ -152,6 +152,12 @@ static __always_inline const struct vdso_data *__arch_get_vdso_data(void)
+ return ret;
+ }
+
++static inline bool vdso_clocksource_ok(const struct vdso_data *vd)
++{
++ return vd->clock_mode == VDSO_CLOCKMODE_ARCHTIMER;
++}
++#define vdso_clocksource_ok vdso_clocksource_ok
++
+ #endif /* !__ASSEMBLY__ */
+
+ #endif /* __ASM_VDSO_GETTIMEOFDAY_H */
+diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
+index df56d2295d16..0f37045fafab 100644
+--- a/arch/arm64/kernel/cpu_errata.c
++++ b/arch/arm64/kernel/cpu_errata.c
+@@ -460,6 +460,8 @@ static const struct midr_range arm64_ssb_cpus[] = {
+ MIDR_ALL_VERSIONS(MIDR_CORTEX_A53),
+ MIDR_ALL_VERSIONS(MIDR_CORTEX_A55),
+ MIDR_ALL_VERSIONS(MIDR_BRAHMA_B53),
++ MIDR_ALL_VERSIONS(MIDR_QCOM_KRYO_3XX_SILVER),
++ MIDR_ALL_VERSIONS(MIDR_QCOM_KRYO_4XX_SILVER),
+ {},
+ };
+
+diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
+index 9fac745aa7bb..b0fb1d5bf223 100644
+--- a/arch/arm64/kernel/cpufeature.c
++++ b/arch/arm64/kernel/cpufeature.c
+@@ -1059,6 +1059,8 @@ static bool unmap_kernel_at_el0(const struct arm64_cpu_capabilities *entry,
+ MIDR_ALL_VERSIONS(MIDR_CORTEX_A73),
+ MIDR_ALL_VERSIONS(MIDR_HISI_TSV110),
+ MIDR_ALL_VERSIONS(MIDR_NVIDIA_CARMEL),
++ MIDR_ALL_VERSIONS(MIDR_QCOM_KRYO_3XX_SILVER),
++ MIDR_ALL_VERSIONS(MIDR_QCOM_KRYO_4XX_SILVER),
+ { /* sentinel */ }
+ };
+ char const *str = "kpti command line option";
+diff --git a/arch/arm64/kernel/kgdb.c b/arch/arm64/kernel/kgdb.c
+index 43119922341f..1a157ca33262 100644
+--- a/arch/arm64/kernel/kgdb.c
++++ b/arch/arm64/kernel/kgdb.c
+@@ -252,7 +252,7 @@ static int kgdb_step_brk_fn(struct pt_regs *regs, unsigned int esr)
+ if (!kgdb_single_step)
+ return DBG_HOOK_ERROR;
+
+- kgdb_handle_exception(1, SIGTRAP, 0, regs);
++ kgdb_handle_exception(0, SIGTRAP, 0, regs);
+ return DBG_HOOK_HANDLED;
+ }
+ NOKPROBE_SYMBOL(kgdb_step_brk_fn);
+diff --git a/arch/arm64/kvm/hyp-init.S b/arch/arm64/kvm/hyp-init.S
+index 6e6ed5581eed..e76c0e89d48e 100644
+--- a/arch/arm64/kvm/hyp-init.S
++++ b/arch/arm64/kvm/hyp-init.S
+@@ -136,11 +136,15 @@ SYM_CODE_START(__kvm_handle_stub_hvc)
+
+ 1: cmp x0, #HVC_RESET_VECTORS
+ b.ne 1f
+-reset:
++
+ /*
+- * Reset kvm back to the hyp stub. Do not clobber x0-x4 in
+- * case we coming via HVC_SOFT_RESTART.
++ * Set the HVC_RESET_VECTORS return code before entering the common
++ * path so that we do not clobber x0-x2 in case we are coming via
++ * HVC_SOFT_RESTART.
+ */
++ mov x0, xzr
++reset:
++ /* Reset kvm back to the hyp stub. */
+ mrs x5, sctlr_el2
+ mov_q x6, SCTLR_ELx_FLAGS
+ bic x5, x5, x6 // Clear SCTL_M and etc
+@@ -151,7 +155,6 @@ reset:
+ /* Install stub vectors */
+ adr_l x5, __hyp_stub_vectors
+ msr vbar_el2, x5
+- mov x0, xzr
+ eret
+
+ 1: /* Bad stub call */
+diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
+index 30b7ea680f66..ab76728e2742 100644
+--- a/arch/arm64/kvm/reset.c
++++ b/arch/arm64/kvm/reset.c
+@@ -258,7 +258,7 @@ static int kvm_vcpu_enable_ptrauth(struct kvm_vcpu *vcpu)
+ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
+ {
+ const struct kvm_regs *cpu_reset;
+- int ret = -EINVAL;
++ int ret;
+ bool loaded;
+
+ /* Reset PMU outside of the non-preemptible section */
+@@ -281,15 +281,19 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
+
+ if (test_bit(KVM_ARM_VCPU_PTRAUTH_ADDRESS, vcpu->arch.features) ||
+ test_bit(KVM_ARM_VCPU_PTRAUTH_GENERIC, vcpu->arch.features)) {
+- if (kvm_vcpu_enable_ptrauth(vcpu))
++ if (kvm_vcpu_enable_ptrauth(vcpu)) {
++ ret = -EINVAL;
+ goto out;
++ }
+ }
+
+ switch (vcpu->arch.target) {
+ default:
+ if (test_bit(KVM_ARM_VCPU_EL1_32BIT, vcpu->arch.features)) {
+- if (!cpu_has_32bit_el1())
++ if (!cpu_has_32bit_el1()) {
++ ret = -EINVAL;
+ goto out;
++ }
+ cpu_reset = &default_regs_reset32;
+ } else {
+ cpu_reset = &default_regs_reset;
+diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S
+index d9ddce40bed8..fd99d4feec7a 100644
+--- a/arch/powerpc/kernel/exceptions-64s.S
++++ b/arch/powerpc/kernel/exceptions-64s.S
+@@ -2547,7 +2547,7 @@ EXC_VIRT_NONE(0x5400, 0x100)
+ INT_DEFINE_BEGIN(denorm_exception)
+ IVEC=0x1500
+ IHSRR=1
+- IBRANCH_COMMON=0
++ IBRANCH_TO_COMMON=0
+ IKVM_REAL=1
+ INT_DEFINE_END(denorm_exception)
+
+diff --git a/arch/powerpc/kvm/book3s_64_mmu_radix.c b/arch/powerpc/kvm/book3s_64_mmu_radix.c
+index d4e532a63f08..2f27faf24b2c 100644
+--- a/arch/powerpc/kvm/book3s_64_mmu_radix.c
++++ b/arch/powerpc/kvm/book3s_64_mmu_radix.c
+@@ -40,7 +40,8 @@ unsigned long __kvmhv_copy_tofrom_guest_radix(int lpid, int pid,
+ /* Can't access quadrants 1 or 2 in non-HV mode, call the HV to do it */
+ if (kvmhv_on_pseries())
+ return plpar_hcall_norets(H_COPY_TOFROM_GUEST, lpid, pid, eaddr,
+- __pa(to), __pa(from), n);
++ (to != NULL) ? __pa(to): 0,
++ (from != NULL) ? __pa(from): 0, n);
+
+ quadrant = 1;
+ if (!pid)
+diff --git a/arch/s390/include/asm/kvm_host.h b/arch/s390/include/asm/kvm_host.h
+index d6bcd34f3ec3..ec65bc2bd084 100644
+--- a/arch/s390/include/asm/kvm_host.h
++++ b/arch/s390/include/asm/kvm_host.h
+@@ -31,12 +31,12 @@
+ #define KVM_USER_MEM_SLOTS 32
+
+ /*
+- * These seem to be used for allocating ->chip in the routing table,
+- * which we don't use. 4096 is an out-of-thin-air value. If we need
+- * to look at ->chip later on, we'll need to revisit this.
++ * These seem to be used for allocating ->chip in the routing table, which we
++ * don't use. 1 is as small as we can get to reduce the needed memory. If we
++ * need to look at ->chip later on, we'll need to revisit this.
+ */
+ #define KVM_NR_IRQCHIPS 1
+-#define KVM_IRQCHIP_NUM_PINS 4096
++#define KVM_IRQCHIP_NUM_PINS 1
+ #define KVM_HALT_POLL_NS_DEFAULT 50000
+
+ /* s390-specific vcpu->requests bit members */
+diff --git a/arch/s390/include/asm/uaccess.h b/arch/s390/include/asm/uaccess.h
+index a470f1fa9f2a..324438889fe1 100644
+--- a/arch/s390/include/asm/uaccess.h
++++ b/arch/s390/include/asm/uaccess.h
+@@ -276,6 +276,6 @@ static inline unsigned long __must_check clear_user(void __user *to, unsigned lo
+ }
+
+ int copy_to_user_real(void __user *dest, void *src, unsigned long count);
+-void s390_kernel_write(void *dst, const void *src, size_t size);
++void *s390_kernel_write(void *dst, const void *src, size_t size);
+
+ #endif /* __S390_UACCESS_H */
+diff --git a/arch/s390/kernel/early.c b/arch/s390/kernel/early.c
+index cd241ee66eff..078277231858 100644
+--- a/arch/s390/kernel/early.c
++++ b/arch/s390/kernel/early.c
+@@ -170,6 +170,8 @@ static noinline __init void setup_lowcore_early(void)
+ psw_t psw;
+
+ psw.mask = PSW_MASK_BASE | PSW_DEFAULT_KEY | PSW_MASK_EA | PSW_MASK_BA;
++ if (IS_ENABLED(CONFIG_KASAN))
++ psw.mask |= PSW_MASK_DAT;
+ psw.addr = (unsigned long) s390_base_ext_handler;
+ S390_lowcore.external_new_psw = psw;
+ psw.addr = (unsigned long) s390_base_pgm_handler;
+diff --git a/arch/s390/kernel/setup.c b/arch/s390/kernel/setup.c
+index 36445dd40fdb..cb10885f3d27 100644
+--- a/arch/s390/kernel/setup.c
++++ b/arch/s390/kernel/setup.c
+@@ -1107,6 +1107,7 @@ void __init setup_arch(char **cmdline_p)
+ if (IS_ENABLED(CONFIG_EXPOLINE_AUTO))
+ nospec_auto_detect();
+
++ jump_label_init();
+ parse_early_param();
+ #ifdef CONFIG_CRASH_DUMP
+ /* Deactivate elfcorehdr= kernel parameter */
+diff --git a/arch/s390/mm/hugetlbpage.c b/arch/s390/mm/hugetlbpage.c
+index 4632d4e26b66..720d4405160b 100644
+--- a/arch/s390/mm/hugetlbpage.c
++++ b/arch/s390/mm/hugetlbpage.c
+@@ -117,7 +117,7 @@ static inline pte_t __rste_to_pte(unsigned long rste)
+ _PAGE_YOUNG);
+ #ifdef CONFIG_MEM_SOFT_DIRTY
+ pte_val(pte) |= move_set_bit(rste, _SEGMENT_ENTRY_SOFT_DIRTY,
+- _PAGE_DIRTY);
++ _PAGE_SOFT_DIRTY);
+ #endif
+ pte_val(pte) |= move_set_bit(rste, _SEGMENT_ENTRY_NOEXEC,
+ _PAGE_NOEXEC);
+diff --git a/arch/s390/mm/maccess.c b/arch/s390/mm/maccess.c
+index de7ca4b6718f..1d17413b319a 100644
+--- a/arch/s390/mm/maccess.c
++++ b/arch/s390/mm/maccess.c
+@@ -55,19 +55,26 @@ static notrace long s390_kernel_write_odd(void *dst, const void *src, size_t siz
+ */
+ static DEFINE_SPINLOCK(s390_kernel_write_lock);
+
+-void notrace s390_kernel_write(void *dst, const void *src, size_t size)
++notrace void *s390_kernel_write(void *dst, const void *src, size_t size)
+ {
++ void *tmp = dst;
+ unsigned long flags;
+ long copied;
+
+ spin_lock_irqsave(&s390_kernel_write_lock, flags);
+- while (size) {
+- copied = s390_kernel_write_odd(dst, src, size);
+- dst += copied;
+- src += copied;
+- size -= copied;
++ if (!(flags & PSW_MASK_DAT)) {
++ memcpy(dst, src, size);
++ } else {
++ while (size) {
++ copied = s390_kernel_write_odd(tmp, src, size);
++ tmp += copied;
++ src += copied;
++ size -= copied;
++ }
+ }
+ spin_unlock_irqrestore(&s390_kernel_write_lock, flags);
++
++ return dst;
+ }
+
+ static int __no_sanitize_address __memcpy_real(void *dest, void *src, size_t count)
+diff --git a/arch/x86/events/Kconfig b/arch/x86/events/Kconfig
+index 9a7a1446cb3a..4a809c6cbd2f 100644
+--- a/arch/x86/events/Kconfig
++++ b/arch/x86/events/Kconfig
+@@ -10,11 +10,11 @@ config PERF_EVENTS_INTEL_UNCORE
+ available on NehalemEX and more modern processors.
+
+ config PERF_EVENTS_INTEL_RAPL
+- tristate "Intel rapl performance events"
+- depends on PERF_EVENTS && CPU_SUP_INTEL && PCI
++ tristate "Intel/AMD rapl performance events"
++ depends on PERF_EVENTS && (CPU_SUP_INTEL || CPU_SUP_AMD) && PCI
+ default y
+ ---help---
+- Include support for Intel rapl performance events for power
++ Include support for Intel and AMD rapl performance events for power
+ monitoring on modern processors.
+
+ config PERF_EVENTS_INTEL_CSTATE
+diff --git a/arch/x86/events/Makefile b/arch/x86/events/Makefile
+index 9e07f554333f..726e83c0a31a 100644
+--- a/arch/x86/events/Makefile
++++ b/arch/x86/events/Makefile
+@@ -1,5 +1,6 @@
+ # SPDX-License-Identifier: GPL-2.0-only
+ obj-y += core.o probe.o
++obj-$(CONFIG_PERF_EVENTS_INTEL_RAPL) += rapl.o
+ obj-y += amd/
+ obj-$(CONFIG_X86_LOCAL_APIC) += msr.o
+ obj-$(CONFIG_CPU_SUP_INTEL) += intel/
+diff --git a/arch/x86/events/intel/Makefile b/arch/x86/events/intel/Makefile
+index 3468b0c1dc7c..e67a5886336c 100644
+--- a/arch/x86/events/intel/Makefile
++++ b/arch/x86/events/intel/Makefile
+@@ -2,8 +2,6 @@
+ obj-$(CONFIG_CPU_SUP_INTEL) += core.o bts.o
+ obj-$(CONFIG_CPU_SUP_INTEL) += ds.o knc.o
+ obj-$(CONFIG_CPU_SUP_INTEL) += lbr.o p4.o p6.o pt.o
+-obj-$(CONFIG_PERF_EVENTS_INTEL_RAPL) += intel-rapl-perf.o
+-intel-rapl-perf-objs := rapl.o
+ obj-$(CONFIG_PERF_EVENTS_INTEL_UNCORE) += intel-uncore.o
+ intel-uncore-objs := uncore.o uncore_nhmex.o uncore_snb.o uncore_snbep.o
+ obj-$(CONFIG_PERF_EVENTS_INTEL_CSTATE) += intel-cstate.o
+diff --git a/arch/x86/events/intel/rapl.c b/arch/x86/events/intel/rapl.c
+deleted file mode 100644
+index a5dbd25852cb..000000000000
+--- a/arch/x86/events/intel/rapl.c
++++ /dev/null
+@@ -1,800 +0,0 @@
+-// SPDX-License-Identifier: GPL-2.0-only
+-/*
+- * Support Intel RAPL energy consumption counters
+- * Copyright (C) 2013 Google, Inc., Stephane Eranian
+- *
+- * Intel RAPL interface is specified in the IA-32 Manual Vol3b
+- * section 14.7.1 (September 2013)
+- *
+- * RAPL provides more controls than just reporting energy consumption
+- * however here we only expose the 3 energy consumption free running
+- * counters (pp0, pkg, dram).
+- *
+- * Each of those counters increments in a power unit defined by the
+- * RAPL_POWER_UNIT MSR. On SandyBridge, this unit is 1/(2^16) Joules
+- * but it can vary.
+- *
+- * Counter to rapl events mappings:
+- *
+- * pp0 counter: consumption of all physical cores (power plane 0)
+- * event: rapl_energy_cores
+- * perf code: 0x1
+- *
+- * pkg counter: consumption of the whole processor package
+- * event: rapl_energy_pkg
+- * perf code: 0x2
+- *
+- * dram counter: consumption of the dram domain (servers only)
+- * event: rapl_energy_dram
+- * perf code: 0x3
+- *
+- * gpu counter: consumption of the builtin-gpu domain (client only)
+- * event: rapl_energy_gpu
+- * perf code: 0x4
+- *
+- * psys counter: consumption of the builtin-psys domain (client only)
+- * event: rapl_energy_psys
+- * perf code: 0x5
+- *
+- * We manage those counters as free running (read-only). They may be
+- * use simultaneously by other tools, such as turbostat.
+- *
+- * The events only support system-wide mode counting. There is no
+- * sampling support because it does not make sense and is not
+- * supported by the RAPL hardware.
+- *
+- * Because we want to avoid floating-point operations in the kernel,
+- * the events are all reported in fixed point arithmetic (32.32).
+- * Tools must adjust the counts to convert them to Watts using
+- * the duration of the measurement. Tools may use a function such as
+- * ldexp(raw_count, -32);
+- */
+-
+-#define pr_fmt(fmt) "RAPL PMU: " fmt
+-
+-#include <linux/module.h>
+-#include <linux/slab.h>
+-#include <linux/perf_event.h>
+-#include <linux/nospec.h>
+-#include <asm/cpu_device_id.h>
+-#include <asm/intel-family.h>
+-#include "../perf_event.h"
+-#include "../probe.h"
+-
+-MODULE_LICENSE("GPL");
+-
+-/*
+- * RAPL energy status counters
+- */
+-enum perf_rapl_events {
+- PERF_RAPL_PP0 = 0, /* all cores */
+- PERF_RAPL_PKG, /* entire package */
+- PERF_RAPL_RAM, /* DRAM */
+- PERF_RAPL_PP1, /* gpu */
+- PERF_RAPL_PSYS, /* psys */
+-
+- PERF_RAPL_MAX,
+- NR_RAPL_DOMAINS = PERF_RAPL_MAX,
+-};
+-
+-static const char *const rapl_domain_names[NR_RAPL_DOMAINS] __initconst = {
+- "pp0-core",
+- "package",
+- "dram",
+- "pp1-gpu",
+- "psys",
+-};
+-
+-/*
+- * event code: LSB 8 bits, passed in attr->config
+- * any other bit is reserved
+- */
+-#define RAPL_EVENT_MASK 0xFFULL
+-
+-#define DEFINE_RAPL_FORMAT_ATTR(_var, _name, _format) \
+-static ssize_t __rapl_##_var##_show(struct kobject *kobj, \
+- struct kobj_attribute *attr, \
+- char *page) \
+-{ \
+- BUILD_BUG_ON(sizeof(_format) >= PAGE_SIZE); \
+- return sprintf(page, _format "\n"); \
+-} \
+-static struct kobj_attribute format_attr_##_var = \
+- __ATTR(_name, 0444, __rapl_##_var##_show, NULL)
+-
+-#define RAPL_CNTR_WIDTH 32
+-
+-#define RAPL_EVENT_ATTR_STR(_name, v, str) \
+-static struct perf_pmu_events_attr event_attr_##v = { \
+- .attr = __ATTR(_name, 0444, perf_event_sysfs_show, NULL), \
+- .id = 0, \
+- .event_str = str, \
+-};
+-
+-struct rapl_pmu {
+- raw_spinlock_t lock;
+- int n_active;
+- int cpu;
+- struct list_head active_list;
+- struct pmu *pmu;
+- ktime_t timer_interval;
+- struct hrtimer hrtimer;
+-};
+-
+-struct rapl_pmus {
+- struct pmu pmu;
+- unsigned int maxdie;
+- struct rapl_pmu *pmus[];
+-};
+-
+-struct rapl_model {
+- unsigned long events;
+- bool apply_quirk;
+-};
+-
+- /* 1/2^hw_unit Joule */
+-static int rapl_hw_unit[NR_RAPL_DOMAINS] __read_mostly;
+-static struct rapl_pmus *rapl_pmus;
+-static cpumask_t rapl_cpu_mask;
+-static unsigned int rapl_cntr_mask;
+-static u64 rapl_timer_ms;
+-static struct perf_msr rapl_msrs[];
+-
+-static inline struct rapl_pmu *cpu_to_rapl_pmu(unsigned int cpu)
+-{
+- unsigned int dieid = topology_logical_die_id(cpu);
+-
+- /*
+- * The unsigned check also catches the '-1' return value for non
+- * existent mappings in the topology map.
+- */
+- return dieid < rapl_pmus->maxdie ? rapl_pmus->pmus[dieid] : NULL;
+-}
+-
+-static inline u64 rapl_read_counter(struct perf_event *event)
+-{
+- u64 raw;
+- rdmsrl(event->hw.event_base, raw);
+- return raw;
+-}
+-
+-static inline u64 rapl_scale(u64 v, int cfg)
+-{
+- if (cfg > NR_RAPL_DOMAINS) {
+- pr_warn("Invalid domain %d, failed to scale data\n", cfg);
+- return v;
+- }
+- /*
+- * scale delta to smallest unit (1/2^32)
+- * users must then scale back: count * 1/(1e9*2^32) to get Joules
+- * or use ldexp(count, -32).
+- * Watts = Joules/Time delta
+- */
+- return v << (32 - rapl_hw_unit[cfg - 1]);
+-}
+-
+-static u64 rapl_event_update(struct perf_event *event)
+-{
+- struct hw_perf_event *hwc = &event->hw;
+- u64 prev_raw_count, new_raw_count;
+- s64 delta, sdelta;
+- int shift = RAPL_CNTR_WIDTH;
+-
+-again:
+- prev_raw_count = local64_read(&hwc->prev_count);
+- rdmsrl(event->hw.event_base, new_raw_count);
+-
+- if (local64_cmpxchg(&hwc->prev_count, prev_raw_count,
+- new_raw_count) != prev_raw_count) {
+- cpu_relax();
+- goto again;
+- }
+-
+- /*
+- * Now we have the new raw value and have updated the prev
+- * timestamp already. We can now calculate the elapsed delta
+- * (event-)time and add that to the generic event.
+- *
+- * Careful, not all hw sign-extends above the physical width
+- * of the count.
+- */
+- delta = (new_raw_count << shift) - (prev_raw_count << shift);
+- delta >>= shift;
+-
+- sdelta = rapl_scale(delta, event->hw.config);
+-
+- local64_add(sdelta, &event->count);
+-
+- return new_raw_count;
+-}
+-
+-static void rapl_start_hrtimer(struct rapl_pmu *pmu)
+-{
+- hrtimer_start(&pmu->hrtimer, pmu->timer_interval,
+- HRTIMER_MODE_REL_PINNED);
+-}
+-
+-static enum hrtimer_restart rapl_hrtimer_handle(struct hrtimer *hrtimer)
+-{
+- struct rapl_pmu *pmu = container_of(hrtimer, struct rapl_pmu, hrtimer);
+- struct perf_event *event;
+- unsigned long flags;
+-
+- if (!pmu->n_active)
+- return HRTIMER_NORESTART;
+-
+- raw_spin_lock_irqsave(&pmu->lock, flags);
+-
+- list_for_each_entry(event, &pmu->active_list, active_entry)
+- rapl_event_update(event);
+-
+- raw_spin_unlock_irqrestore(&pmu->lock, flags);
+-
+- hrtimer_forward_now(hrtimer, pmu->timer_interval);
+-
+- return HRTIMER_RESTART;
+-}
+-
+-static void rapl_hrtimer_init(struct rapl_pmu *pmu)
+-{
+- struct hrtimer *hr = &pmu->hrtimer;
+-
+- hrtimer_init(hr, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
+- hr->function = rapl_hrtimer_handle;
+-}
+-
+-static void __rapl_pmu_event_start(struct rapl_pmu *pmu,
+- struct perf_event *event)
+-{
+- if (WARN_ON_ONCE(!(event->hw.state & PERF_HES_STOPPED)))
+- return;
+-
+- event->hw.state = 0;
+-
+- list_add_tail(&event->active_entry, &pmu->active_list);
+-
+- local64_set(&event->hw.prev_count, rapl_read_counter(event));
+-
+- pmu->n_active++;
+- if (pmu->n_active == 1)
+- rapl_start_hrtimer(pmu);
+-}
+-
+-static void rapl_pmu_event_start(struct perf_event *event, int mode)
+-{
+- struct rapl_pmu *pmu = event->pmu_private;
+- unsigned long flags;
+-
+- raw_spin_lock_irqsave(&pmu->lock, flags);
+- __rapl_pmu_event_start(pmu, event);
+- raw_spin_unlock_irqrestore(&pmu->lock, flags);
+-}
+-
+-static void rapl_pmu_event_stop(struct perf_event *event, int mode)
+-{
+- struct rapl_pmu *pmu = event->pmu_private;
+- struct hw_perf_event *hwc = &event->hw;
+- unsigned long flags;
+-
+- raw_spin_lock_irqsave(&pmu->lock, flags);
+-
+- /* mark event as deactivated and stopped */
+- if (!(hwc->state & PERF_HES_STOPPED)) {
+- WARN_ON_ONCE(pmu->n_active <= 0);
+- pmu->n_active--;
+- if (pmu->n_active == 0)
+- hrtimer_cancel(&pmu->hrtimer);
+-
+- list_del(&event->active_entry);
+-
+- WARN_ON_ONCE(hwc->state & PERF_HES_STOPPED);
+- hwc->state |= PERF_HES_STOPPED;
+- }
+-
+- /* check if update of sw counter is necessary */
+- if ((mode & PERF_EF_UPDATE) && !(hwc->state & PERF_HES_UPTODATE)) {
+- /*
+- * Drain the remaining delta count out of a event
+- * that we are disabling:
+- */
+- rapl_event_update(event);
+- hwc->state |= PERF_HES_UPTODATE;
+- }
+-
+- raw_spin_unlock_irqrestore(&pmu->lock, flags);
+-}
+-
+-static int rapl_pmu_event_add(struct perf_event *event, int mode)
+-{
+- struct rapl_pmu *pmu = event->pmu_private;
+- struct hw_perf_event *hwc = &event->hw;
+- unsigned long flags;
+-
+- raw_spin_lock_irqsave(&pmu->lock, flags);
+-
+- hwc->state = PERF_HES_UPTODATE | PERF_HES_STOPPED;
+-
+- if (mode & PERF_EF_START)
+- __rapl_pmu_event_start(pmu, event);
+-
+- raw_spin_unlock_irqrestore(&pmu->lock, flags);
+-
+- return 0;
+-}
+-
+-static void rapl_pmu_event_del(struct perf_event *event, int flags)
+-{
+- rapl_pmu_event_stop(event, PERF_EF_UPDATE);
+-}
+-
+-static int rapl_pmu_event_init(struct perf_event *event)
+-{
+- u64 cfg = event->attr.config & RAPL_EVENT_MASK;
+- int bit, ret = 0;
+- struct rapl_pmu *pmu;
+-
+- /* only look at RAPL events */
+- if (event->attr.type != rapl_pmus->pmu.type)
+- return -ENOENT;
+-
+- /* check only supported bits are set */
+- if (event->attr.config & ~RAPL_EVENT_MASK)
+- return -EINVAL;
+-
+- if (event->cpu < 0)
+- return -EINVAL;
+-
+- event->event_caps |= PERF_EV_CAP_READ_ACTIVE_PKG;
+-
+- if (!cfg || cfg >= NR_RAPL_DOMAINS + 1)
+- return -EINVAL;
+-
+- cfg = array_index_nospec((long)cfg, NR_RAPL_DOMAINS + 1);
+- bit = cfg - 1;
+-
+- /* check event supported */
+- if (!(rapl_cntr_mask & (1 << bit)))
+- return -EINVAL;
+-
+- /* unsupported modes and filters */
+- if (event->attr.sample_period) /* no sampling */
+- return -EINVAL;
+-
+- /* must be done before validate_group */
+- pmu = cpu_to_rapl_pmu(event->cpu);
+- if (!pmu)
+- return -EINVAL;
+- event->cpu = pmu->cpu;
+- event->pmu_private = pmu;
+- event->hw.event_base = rapl_msrs[bit].msr;
+- event->hw.config = cfg;
+- event->hw.idx = bit;
+-
+- return ret;
+-}
+-
+-static void rapl_pmu_event_read(struct perf_event *event)
+-{
+- rapl_event_update(event);
+-}
+-
+-static ssize_t rapl_get_attr_cpumask(struct device *dev,
+- struct device_attribute *attr, char *buf)
+-{
+- return cpumap_print_to_pagebuf(true, buf, &rapl_cpu_mask);
+-}
+-
+-static DEVICE_ATTR(cpumask, S_IRUGO, rapl_get_attr_cpumask, NULL);
+-
+-static struct attribute *rapl_pmu_attrs[] = {
+- &dev_attr_cpumask.attr,
+- NULL,
+-};
+-
+-static struct attribute_group rapl_pmu_attr_group = {
+- .attrs = rapl_pmu_attrs,
+-};
+-
+-RAPL_EVENT_ATTR_STR(energy-cores, rapl_cores, "event=0x01");
+-RAPL_EVENT_ATTR_STR(energy-pkg , rapl_pkg, "event=0x02");
+-RAPL_EVENT_ATTR_STR(energy-ram , rapl_ram, "event=0x03");
+-RAPL_EVENT_ATTR_STR(energy-gpu , rapl_gpu, "event=0x04");
+-RAPL_EVENT_ATTR_STR(energy-psys, rapl_psys, "event=0x05");
+-
+-RAPL_EVENT_ATTR_STR(energy-cores.unit, rapl_cores_unit, "Joules");
+-RAPL_EVENT_ATTR_STR(energy-pkg.unit , rapl_pkg_unit, "Joules");
+-RAPL_EVENT_ATTR_STR(energy-ram.unit , rapl_ram_unit, "Joules");
+-RAPL_EVENT_ATTR_STR(energy-gpu.unit , rapl_gpu_unit, "Joules");
+-RAPL_EVENT_ATTR_STR(energy-psys.unit, rapl_psys_unit, "Joules");
+-
+-/*
+- * we compute in 0.23 nJ increments regardless of MSR
+- */
+-RAPL_EVENT_ATTR_STR(energy-cores.scale, rapl_cores_scale, "2.3283064365386962890625e-10");
+-RAPL_EVENT_ATTR_STR(energy-pkg.scale, rapl_pkg_scale, "2.3283064365386962890625e-10");
+-RAPL_EVENT_ATTR_STR(energy-ram.scale, rapl_ram_scale, "2.3283064365386962890625e-10");
+-RAPL_EVENT_ATTR_STR(energy-gpu.scale, rapl_gpu_scale, "2.3283064365386962890625e-10");
+-RAPL_EVENT_ATTR_STR(energy-psys.scale, rapl_psys_scale, "2.3283064365386962890625e-10");
+-
+-/*
+- * There are no default events, but we need to create
+- * "events" group (with empty attrs) before updating
+- * it with detected events.
+- */
+-static struct attribute *attrs_empty[] = {
+- NULL,
+-};
+-
+-static struct attribute_group rapl_pmu_events_group = {
+- .name = "events",
+- .attrs = attrs_empty,
+-};
+-
+-DEFINE_RAPL_FORMAT_ATTR(event, event, "config:0-7");
+-static struct attribute *rapl_formats_attr[] = {
+- &format_attr_event.attr,
+- NULL,
+-};
+-
+-static struct attribute_group rapl_pmu_format_group = {
+- .name = "format",
+- .attrs = rapl_formats_attr,
+-};
+-
+-static const struct attribute_group *rapl_attr_groups[] = {
+- &rapl_pmu_attr_group,
+- &rapl_pmu_format_group,
+- &rapl_pmu_events_group,
+- NULL,
+-};
+-
+-static struct attribute *rapl_events_cores[] = {
+- EVENT_PTR(rapl_cores),
+- EVENT_PTR(rapl_cores_unit),
+- EVENT_PTR(rapl_cores_scale),
+- NULL,
+-};
+-
+-static struct attribute_group rapl_events_cores_group = {
+- .name = "events",
+- .attrs = rapl_events_cores,
+-};
+-
+-static struct attribute *rapl_events_pkg[] = {
+- EVENT_PTR(rapl_pkg),
+- EVENT_PTR(rapl_pkg_unit),
+- EVENT_PTR(rapl_pkg_scale),
+- NULL,
+-};
+-
+-static struct attribute_group rapl_events_pkg_group = {
+- .name = "events",
+- .attrs = rapl_events_pkg,
+-};
+-
+-static struct attribute *rapl_events_ram[] = {
+- EVENT_PTR(rapl_ram),
+- EVENT_PTR(rapl_ram_unit),
+- EVENT_PTR(rapl_ram_scale),
+- NULL,
+-};
+-
+-static struct attribute_group rapl_events_ram_group = {
+- .name = "events",
+- .attrs = rapl_events_ram,
+-};
+-
+-static struct attribute *rapl_events_gpu[] = {
+- EVENT_PTR(rapl_gpu),
+- EVENT_PTR(rapl_gpu_unit),
+- EVENT_PTR(rapl_gpu_scale),
+- NULL,
+-};
+-
+-static struct attribute_group rapl_events_gpu_group = {
+- .name = "events",
+- .attrs = rapl_events_gpu,
+-};
+-
+-static struct attribute *rapl_events_psys[] = {
+- EVENT_PTR(rapl_psys),
+- EVENT_PTR(rapl_psys_unit),
+- EVENT_PTR(rapl_psys_scale),
+- NULL,
+-};
+-
+-static struct attribute_group rapl_events_psys_group = {
+- .name = "events",
+- .attrs = rapl_events_psys,
+-};
+-
+-static bool test_msr(int idx, void *data)
+-{
+- return test_bit(idx, (unsigned long *) data);
+-}
+-
+-static struct perf_msr rapl_msrs[] = {
+- [PERF_RAPL_PP0] = { MSR_PP0_ENERGY_STATUS, &rapl_events_cores_group, test_msr },
+- [PERF_RAPL_PKG] = { MSR_PKG_ENERGY_STATUS, &rapl_events_pkg_group, test_msr },
+- [PERF_RAPL_RAM] = { MSR_DRAM_ENERGY_STATUS, &rapl_events_ram_group, test_msr },
+- [PERF_RAPL_PP1] = { MSR_PP1_ENERGY_STATUS, &rapl_events_gpu_group, test_msr },
+- [PERF_RAPL_PSYS] = { MSR_PLATFORM_ENERGY_STATUS, &rapl_events_psys_group, test_msr },
+-};
+-
+-static int rapl_cpu_offline(unsigned int cpu)
+-{
+- struct rapl_pmu *pmu = cpu_to_rapl_pmu(cpu);
+- int target;
+-
+- /* Check if exiting cpu is used for collecting rapl events */
+- if (!cpumask_test_and_clear_cpu(cpu, &rapl_cpu_mask))
+- return 0;
+-
+- pmu->cpu = -1;
+- /* Find a new cpu to collect rapl events */
+- target = cpumask_any_but(topology_die_cpumask(cpu), cpu);
+-
+- /* Migrate rapl events to the new target */
+- if (target < nr_cpu_ids) {
+- cpumask_set_cpu(target, &rapl_cpu_mask);
+- pmu->cpu = target;
+- perf_pmu_migrate_context(pmu->pmu, cpu, target);
+- }
+- return 0;
+-}
+-
+-static int rapl_cpu_online(unsigned int cpu)
+-{
+- struct rapl_pmu *pmu = cpu_to_rapl_pmu(cpu);
+- int target;
+-
+- if (!pmu) {
+- pmu = kzalloc_node(sizeof(*pmu), GFP_KERNEL, cpu_to_node(cpu));
+- if (!pmu)
+- return -ENOMEM;
+-
+- raw_spin_lock_init(&pmu->lock);
+- INIT_LIST_HEAD(&pmu->active_list);
+- pmu->pmu = &rapl_pmus->pmu;
+- pmu->timer_interval = ms_to_ktime(rapl_timer_ms);
+- rapl_hrtimer_init(pmu);
+-
+- rapl_pmus->pmus[topology_logical_die_id(cpu)] = pmu;
+- }
+-
+- /*
+- * Check if there is an online cpu in the package which collects rapl
+- * events already.
+- */
+- target = cpumask_any_and(&rapl_cpu_mask, topology_die_cpumask(cpu));
+- if (target < nr_cpu_ids)
+- return 0;
+-
+- cpumask_set_cpu(cpu, &rapl_cpu_mask);
+- pmu->cpu = cpu;
+- return 0;
+-}
+-
+-static int rapl_check_hw_unit(bool apply_quirk)
+-{
+- u64 msr_rapl_power_unit_bits;
+- int i;
+-
+- /* protect rdmsrl() to handle virtualization */
+- if (rdmsrl_safe(MSR_RAPL_POWER_UNIT, &msr_rapl_power_unit_bits))
+- return -1;
+- for (i = 0; i < NR_RAPL_DOMAINS; i++)
+- rapl_hw_unit[i] = (msr_rapl_power_unit_bits >> 8) & 0x1FULL;
+-
+- /*
+- * DRAM domain on HSW server and KNL has fixed energy unit which can be
+- * different than the unit from power unit MSR. See
+- * "Intel Xeon Processor E5-1600 and E5-2600 v3 Product Families, V2
+- * of 2. Datasheet, September 2014, Reference Number: 330784-001 "
+- */
+- if (apply_quirk)
+- rapl_hw_unit[PERF_RAPL_RAM] = 16;
+-
+- /*
+- * Calculate the timer rate:
+- * Use reference of 200W for scaling the timeout to avoid counter
+- * overflows. 200W = 200 Joules/sec
+- * Divide interval by 2 to avoid lockstep (2 * 100)
+- * if hw unit is 32, then we use 2 ms 1/200/2
+- */
+- rapl_timer_ms = 2;
+- if (rapl_hw_unit[0] < 32) {
+- rapl_timer_ms = (1000 / (2 * 100));
+- rapl_timer_ms *= (1ULL << (32 - rapl_hw_unit[0] - 1));
+- }
+- return 0;
+-}
+-
+-static void __init rapl_advertise(void)
+-{
+- int i;
+-
+- pr_info("API unit is 2^-32 Joules, %d fixed counters, %llu ms ovfl timer\n",
+- hweight32(rapl_cntr_mask), rapl_timer_ms);
+-
+- for (i = 0; i < NR_RAPL_DOMAINS; i++) {
+- if (rapl_cntr_mask & (1 << i)) {
+- pr_info("hw unit of domain %s 2^-%d Joules\n",
+- rapl_domain_names[i], rapl_hw_unit[i]);
+- }
+- }
+-}
+-
+-static void cleanup_rapl_pmus(void)
+-{
+- int i;
+-
+- for (i = 0; i < rapl_pmus->maxdie; i++)
+- kfree(rapl_pmus->pmus[i]);
+- kfree(rapl_pmus);
+-}
+-
+-static const struct attribute_group *rapl_attr_update[] = {
+- &rapl_events_cores_group,
+- &rapl_events_pkg_group,
+- &rapl_events_ram_group,
+- &rapl_events_gpu_group,
+- &rapl_events_gpu_group,
+- NULL,
+-};
+-
+-static int __init init_rapl_pmus(void)
+-{
+- int maxdie = topology_max_packages() * topology_max_die_per_package();
+- size_t size;
+-
+- size = sizeof(*rapl_pmus) + maxdie * sizeof(struct rapl_pmu *);
+- rapl_pmus = kzalloc(size, GFP_KERNEL);
+- if (!rapl_pmus)
+- return -ENOMEM;
+-
+- rapl_pmus->maxdie = maxdie;
+- rapl_pmus->pmu.attr_groups = rapl_attr_groups;
+- rapl_pmus->pmu.attr_update = rapl_attr_update;
+- rapl_pmus->pmu.task_ctx_nr = perf_invalid_context;
+- rapl_pmus->pmu.event_init = rapl_pmu_event_init;
+- rapl_pmus->pmu.add = rapl_pmu_event_add;
+- rapl_pmus->pmu.del = rapl_pmu_event_del;
+- rapl_pmus->pmu.start = rapl_pmu_event_start;
+- rapl_pmus->pmu.stop = rapl_pmu_event_stop;
+- rapl_pmus->pmu.read = rapl_pmu_event_read;
+- rapl_pmus->pmu.module = THIS_MODULE;
+- rapl_pmus->pmu.capabilities = PERF_PMU_CAP_NO_EXCLUDE;
+- return 0;
+-}
+-
+-static struct rapl_model model_snb = {
+- .events = BIT(PERF_RAPL_PP0) |
+- BIT(PERF_RAPL_PKG) |
+- BIT(PERF_RAPL_PP1),
+- .apply_quirk = false,
+-};
+-
+-static struct rapl_model model_snbep = {
+- .events = BIT(PERF_RAPL_PP0) |
+- BIT(PERF_RAPL_PKG) |
+- BIT(PERF_RAPL_RAM),
+- .apply_quirk = false,
+-};
+-
+-static struct rapl_model model_hsw = {
+- .events = BIT(PERF_RAPL_PP0) |
+- BIT(PERF_RAPL_PKG) |
+- BIT(PERF_RAPL_RAM) |
+- BIT(PERF_RAPL_PP1),
+- .apply_quirk = false,
+-};
+-
+-static struct rapl_model model_hsx = {
+- .events = BIT(PERF_RAPL_PP0) |
+- BIT(PERF_RAPL_PKG) |
+- BIT(PERF_RAPL_RAM),
+- .apply_quirk = true,
+-};
+-
+-static struct rapl_model model_knl = {
+- .events = BIT(PERF_RAPL_PKG) |
+- BIT(PERF_RAPL_RAM),
+- .apply_quirk = true,
+-};
+-
+-static struct rapl_model model_skl = {
+- .events = BIT(PERF_RAPL_PP0) |
+- BIT(PERF_RAPL_PKG) |
+- BIT(PERF_RAPL_RAM) |
+- BIT(PERF_RAPL_PP1) |
+- BIT(PERF_RAPL_PSYS),
+- .apply_quirk = false,
+-};
+-
+-static const struct x86_cpu_id rapl_model_match[] __initconst = {
+- X86_MATCH_INTEL_FAM6_MODEL(SANDYBRIDGE, &model_snb),
+- X86_MATCH_INTEL_FAM6_MODEL(SANDYBRIDGE_X, &model_snbep),
+- X86_MATCH_INTEL_FAM6_MODEL(IVYBRIDGE, &model_snb),
+- X86_MATCH_INTEL_FAM6_MODEL(IVYBRIDGE_X, &model_snbep),
+- X86_MATCH_INTEL_FAM6_MODEL(HASWELL, &model_hsw),
+- X86_MATCH_INTEL_FAM6_MODEL(HASWELL_X, &model_hsx),
+- X86_MATCH_INTEL_FAM6_MODEL(HASWELL_L, &model_hsw),
+- X86_MATCH_INTEL_FAM6_MODEL(HASWELL_G, &model_hsw),
+- X86_MATCH_INTEL_FAM6_MODEL(BROADWELL, &model_hsw),
+- X86_MATCH_INTEL_FAM6_MODEL(BROADWELL_G, &model_hsw),
+- X86_MATCH_INTEL_FAM6_MODEL(BROADWELL_X, &model_hsx),
+- X86_MATCH_INTEL_FAM6_MODEL(BROADWELL_D, &model_hsx),
+- X86_MATCH_INTEL_FAM6_MODEL(XEON_PHI_KNL, &model_knl),
+- X86_MATCH_INTEL_FAM6_MODEL(XEON_PHI_KNM, &model_knl),
+- X86_MATCH_INTEL_FAM6_MODEL(SKYLAKE_L, &model_skl),
+- X86_MATCH_INTEL_FAM6_MODEL(SKYLAKE, &model_skl),
+- X86_MATCH_INTEL_FAM6_MODEL(SKYLAKE_X, &model_hsx),
+- X86_MATCH_INTEL_FAM6_MODEL(KABYLAKE_L, &model_skl),
+- X86_MATCH_INTEL_FAM6_MODEL(KABYLAKE, &model_skl),
+- X86_MATCH_INTEL_FAM6_MODEL(CANNONLAKE_L, &model_skl),
+- X86_MATCH_INTEL_FAM6_MODEL(ATOM_GOLDMONT, &model_hsw),
+- X86_MATCH_INTEL_FAM6_MODEL(ATOM_GOLDMONT_D, &model_hsw),
+- X86_MATCH_INTEL_FAM6_MODEL(ATOM_GOLDMONT_PLUS, &model_hsw),
+- X86_MATCH_INTEL_FAM6_MODEL(ICELAKE_L, &model_skl),
+- X86_MATCH_INTEL_FAM6_MODEL(ICELAKE, &model_skl),
+- X86_MATCH_INTEL_FAM6_MODEL(COMETLAKE_L, &model_skl),
+- X86_MATCH_INTEL_FAM6_MODEL(COMETLAKE, &model_skl),
+- {},
+-};
+-MODULE_DEVICE_TABLE(x86cpu, rapl_model_match);
+-
+-static int __init rapl_pmu_init(void)
+-{
+- const struct x86_cpu_id *id;
+- struct rapl_model *rm;
+- int ret;
+-
+- id = x86_match_cpu(rapl_model_match);
+- if (!id)
+- return -ENODEV;
+-
+- rm = (struct rapl_model *) id->driver_data;
+- rapl_cntr_mask = perf_msr_probe(rapl_msrs, PERF_RAPL_MAX,
+- false, (void *) &rm->events);
+-
+- ret = rapl_check_hw_unit(rm->apply_quirk);
+- if (ret)
+- return ret;
+-
+- ret = init_rapl_pmus();
+- if (ret)
+- return ret;
+-
+- /*
+- * Install callbacks. Core will call them for each online cpu.
+- */
+- ret = cpuhp_setup_state(CPUHP_AP_PERF_X86_RAPL_ONLINE,
+- "perf/x86/rapl:online",
+- rapl_cpu_online, rapl_cpu_offline);
+- if (ret)
+- goto out;
+-
+- ret = perf_pmu_register(&rapl_pmus->pmu, "power", -1);
+- if (ret)
+- goto out1;
+-
+- rapl_advertise();
+- return 0;
+-
+-out1:
+- cpuhp_remove_state(CPUHP_AP_PERF_X86_RAPL_ONLINE);
+-out:
+- pr_warn("Initialization failed (%d), disabled\n", ret);
+- cleanup_rapl_pmus();
+- return ret;
+-}
+-module_init(rapl_pmu_init);
+-
+-static void __exit intel_rapl_exit(void)
+-{
+- cpuhp_remove_state_nocalls(CPUHP_AP_PERF_X86_RAPL_ONLINE);
+- perf_pmu_unregister(&rapl_pmus->pmu);
+- cleanup_rapl_pmus();
+-}
+-module_exit(intel_rapl_exit);
+diff --git a/arch/x86/events/rapl.c b/arch/x86/events/rapl.c
+new file mode 100644
+index 000000000000..ece043fb7b49
+--- /dev/null
++++ b/arch/x86/events/rapl.c
+@@ -0,0 +1,803 @@
++// SPDX-License-Identifier: GPL-2.0-only
++/*
++ * Support Intel/AMD RAPL energy consumption counters
++ * Copyright (C) 2013 Google, Inc., Stephane Eranian
++ *
++ * Intel RAPL interface is specified in the IA-32 Manual Vol3b
++ * section 14.7.1 (September 2013)
++ *
++ * AMD RAPL interface for Fam17h is described in the public PPR:
++ * https://bugzilla.kernel.org/show_bug.cgi?id=206537
++ *
++ * RAPL provides more controls than just reporting energy consumption;
++ * however, here we only expose the free-running energy consumption
++ * counters (pp0, pkg, dram, gpu, psys).
++ *
++ * Each of those counters increments in a power unit defined by the
++ * RAPL_POWER_UNIT MSR. On SandyBridge, this unit is 1/(2^16) Joules
++ * but it can vary.
++ *
++ * Counter to rapl events mappings:
++ *
++ * pp0 counter: consumption of all physical cores (power plane 0)
++ * event: rapl_energy_cores
++ * perf code: 0x1
++ *
++ * pkg counter: consumption of the whole processor package
++ * event: rapl_energy_pkg
++ * perf code: 0x2
++ *
++ * dram counter: consumption of the dram domain (servers only)
++ * event: rapl_energy_dram
++ * perf code: 0x3
++ *
++ * gpu counter: consumption of the builtin-gpu domain (client only)
++ * event: rapl_energy_gpu
++ * perf code: 0x4
++ *
++ * psys counter: consumption of the builtin-psys domain (client only)
++ * event: rapl_energy_psys
++ * perf code: 0x5
++ *
++ * We manage those counters as free running (read-only). They may be
++ * used simultaneously by other tools, such as turbostat.
++ *
++ * The events only support system-wide mode counting. There is no
++ * sampling support because it does not make sense and is not
++ * supported by the RAPL hardware.
++ *
++ * Because we want to avoid floating-point operations in the kernel,
++ * the events are all reported in fixed point arithmetic (32.32).
++ * Tools must adjust the counts to convert them to Watts using
++ * the duration of the measurement. Tools may use a function such as
++ * ldexp(raw_count, -32);
++ */
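The 32.32 fixed-point convention described in the header can be checked with a short user-space sketch (the helper name below is hypothetical, not part of the driver): converting a raw count to Joules is just `ldexp(raw_count, -32)`, and the sysfs scale string `2.3283064365386962890625e-10` advertised later in this file is exactly 2^-32.

```c
#include <math.h>
#include <stdint.h>

/* Hypothetical user-space helper: convert a 32.32 fixed-point RAPL
 * count to Joules, as the header comment suggests via ldexp(). */
double rapl_count_to_joules(uint64_t raw_count)
{
	return ldexp((double)raw_count, -32);
}
```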
++
++#define pr_fmt(fmt) "RAPL PMU: " fmt
++
++#include <linux/module.h>
++#include <linux/slab.h>
++#include <linux/perf_event.h>
++#include <linux/nospec.h>
++#include <asm/cpu_device_id.h>
++#include <asm/intel-family.h>
++#include "perf_event.h"
++#include "probe.h"
++
++MODULE_LICENSE("GPL");
++
++/*
++ * RAPL energy status counters
++ */
++enum perf_rapl_events {
++ PERF_RAPL_PP0 = 0, /* all cores */
++ PERF_RAPL_PKG, /* entire package */
++ PERF_RAPL_RAM, /* DRAM */
++ PERF_RAPL_PP1, /* gpu */
++ PERF_RAPL_PSYS, /* psys */
++
++ PERF_RAPL_MAX,
++ NR_RAPL_DOMAINS = PERF_RAPL_MAX,
++};
++
++static const char *const rapl_domain_names[NR_RAPL_DOMAINS] __initconst = {
++ "pp0-core",
++ "package",
++ "dram",
++ "pp1-gpu",
++ "psys",
++};
++
++/*
++ * event code: LSB 8 bits, passed in attr->config
++ * any other bit is reserved
++ */
++#define RAPL_EVENT_MASK 0xFFULL
++
++#define DEFINE_RAPL_FORMAT_ATTR(_var, _name, _format) \
++static ssize_t __rapl_##_var##_show(struct kobject *kobj, \
++ struct kobj_attribute *attr, \
++ char *page) \
++{ \
++ BUILD_BUG_ON(sizeof(_format) >= PAGE_SIZE); \
++ return sprintf(page, _format "\n"); \
++} \
++static struct kobj_attribute format_attr_##_var = \
++ __ATTR(_name, 0444, __rapl_##_var##_show, NULL)
++
++#define RAPL_CNTR_WIDTH 32
++
++#define RAPL_EVENT_ATTR_STR(_name, v, str) \
++static struct perf_pmu_events_attr event_attr_##v = { \
++ .attr = __ATTR(_name, 0444, perf_event_sysfs_show, NULL), \
++ .id = 0, \
++ .event_str = str, \
++};
++
++struct rapl_pmu {
++ raw_spinlock_t lock;
++ int n_active;
++ int cpu;
++ struct list_head active_list;
++ struct pmu *pmu;
++ ktime_t timer_interval;
++ struct hrtimer hrtimer;
++};
++
++struct rapl_pmus {
++ struct pmu pmu;
++ unsigned int maxdie;
++ struct rapl_pmu *pmus[];
++};
++
++struct rapl_model {
++ unsigned long events;
++ bool apply_quirk;
++};
++
++/* 1/2^hw_unit Joule */
++static int rapl_hw_unit[NR_RAPL_DOMAINS] __read_mostly;
++static struct rapl_pmus *rapl_pmus;
++static cpumask_t rapl_cpu_mask;
++static unsigned int rapl_cntr_mask;
++static u64 rapl_timer_ms;
++static struct perf_msr rapl_msrs[];
++
++static inline struct rapl_pmu *cpu_to_rapl_pmu(unsigned int cpu)
++{
++ unsigned int dieid = topology_logical_die_id(cpu);
++
++ /*
++	 * The unsigned check also catches the '-1' return value for
++	 * non-existent mappings in the topology map.
++ */
++ return dieid < rapl_pmus->maxdie ? rapl_pmus->pmus[dieid] : NULL;
++}
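The unsigned comparison trick noted in `cpu_to_rapl_pmu()` can be shown stand-alone (function name assumed for illustration): a `-1` die id converted to `unsigned int` becomes `UINT_MAX`, so it can never pass a `< maxdie` bound check.

```c
#include <limits.h>

/* Mirrors the bound check in cpu_to_rapl_pmu(): converting a signed
 * die id to unsigned makes -1 wrap to UINT_MAX, which fails the
 * upper-bound test along with any other out-of-range value. */
int die_id_valid(int dieid, unsigned int maxdie)
{
	return (unsigned int)dieid < maxdie;
}
```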
++
++static inline u64 rapl_read_counter(struct perf_event *event)
++{
++ u64 raw;
++ rdmsrl(event->hw.event_base, raw);
++ return raw;
++}
++
++static inline u64 rapl_scale(u64 v, int cfg)
++{
++ if (cfg > NR_RAPL_DOMAINS) {
++ pr_warn("Invalid domain %d, failed to scale data\n", cfg);
++ return v;
++ }
++ /*
++ * scale delta to smallest unit (1/2^32)
++ * users must then scale back: count * 1/(1e9*2^32) to get Joules
++ * or use ldexp(count, -32).
++ * Watts = Joules/Time delta
++ */
++ return v << (32 - rapl_hw_unit[cfg - 1]);
++}
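As a sketch of what the shift in `rapl_scale()` does, under an assumed hardware unit: with `hw_unit = 16` (ticks of 1/2^16 J), each tick becomes 2^(32-16) = 65536 of the common 1/2^32 J units the PMU reports.

```c
#include <stdint.h>

/* Stand-alone version of the scaling step above: promote a delta
 * expressed in 1/2^hw_unit Joule ticks to the common 1/2^32 Joule
 * unit (assumes 0 < hw_unit <= 32, as the MSR field guarantees). */
uint64_t scale_to_2pow32(uint64_t delta, int hw_unit)
{
	return delta << (32 - hw_unit);
}
```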
++
++static u64 rapl_event_update(struct perf_event *event)
++{
++ struct hw_perf_event *hwc = &event->hw;
++ u64 prev_raw_count, new_raw_count;
++ s64 delta, sdelta;
++ int shift = RAPL_CNTR_WIDTH;
++
++again:
++ prev_raw_count = local64_read(&hwc->prev_count);
++ rdmsrl(event->hw.event_base, new_raw_count);
++
++ if (local64_cmpxchg(&hwc->prev_count, prev_raw_count,
++ new_raw_count) != prev_raw_count) {
++ cpu_relax();
++ goto again;
++ }
++
++ /*
++ * Now we have the new raw value and have updated the prev
++ * timestamp already. We can now calculate the elapsed delta
++ * (event-)time and add that to the generic event.
++ *
++ * Careful, not all hw sign-extends above the physical width
++ * of the count.
++ */
++ delta = (new_raw_count << shift) - (prev_raw_count << shift);
++ delta >>= shift;
++
++ sdelta = rapl_scale(delta, event->hw.config);
++
++ local64_add(sdelta, &event->count);
++
++ return new_raw_count;
++}
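The shift pair in `rapl_event_update()` is the usual wrap-safe delta idiom for a counter narrower than 64 bits. A stand-alone sketch, assuming a width of 32 to match `RAPL_CNTR_WIDTH` and relying on arithmetic right shift of signed values (implementation-defined in ISO C, but what gcc provides):

```c
#include <stdint.h>

/* Wrap-safe delta of a free-running counter of the given bit width,
 * mirroring the (new << shift) - (prev << shift) >> shift idiom. */
int64_t counter_delta(uint64_t prev, uint64_t new_count, int width)
{
	int shift = 64 - width;
	uint64_t diff = (new_count << shift) - (prev << shift);

	/* Shifting both values left discards stale high bits first, so
	 * a wraparound of the narrow counter still yields a small
	 * positive delta; the signed right shift restores the sign. */
	return (int64_t)diff >> shift;
}
```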
++
++static void rapl_start_hrtimer(struct rapl_pmu *pmu)
++{
++ hrtimer_start(&pmu->hrtimer, pmu->timer_interval,
++ HRTIMER_MODE_REL_PINNED);
++}
++
++static enum hrtimer_restart rapl_hrtimer_handle(struct hrtimer *hrtimer)
++{
++ struct rapl_pmu *pmu = container_of(hrtimer, struct rapl_pmu, hrtimer);
++ struct perf_event *event;
++ unsigned long flags;
++
++ if (!pmu->n_active)
++ return HRTIMER_NORESTART;
++
++ raw_spin_lock_irqsave(&pmu->lock, flags);
++
++ list_for_each_entry(event, &pmu->active_list, active_entry)
++ rapl_event_update(event);
++
++ raw_spin_unlock_irqrestore(&pmu->lock, flags);
++
++ hrtimer_forward_now(hrtimer, pmu->timer_interval);
++
++ return HRTIMER_RESTART;
++}
++
++static void rapl_hrtimer_init(struct rapl_pmu *pmu)
++{
++ struct hrtimer *hr = &pmu->hrtimer;
++
++ hrtimer_init(hr, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
++ hr->function = rapl_hrtimer_handle;
++}
++
++static void __rapl_pmu_event_start(struct rapl_pmu *pmu,
++ struct perf_event *event)
++{
++ if (WARN_ON_ONCE(!(event->hw.state & PERF_HES_STOPPED)))
++ return;
++
++ event->hw.state = 0;
++
++ list_add_tail(&event->active_entry, &pmu->active_list);
++
++ local64_set(&event->hw.prev_count, rapl_read_counter(event));
++
++ pmu->n_active++;
++ if (pmu->n_active == 1)
++ rapl_start_hrtimer(pmu);
++}
++
++static void rapl_pmu_event_start(struct perf_event *event, int mode)
++{
++ struct rapl_pmu *pmu = event->pmu_private;
++ unsigned long flags;
++
++ raw_spin_lock_irqsave(&pmu->lock, flags);
++ __rapl_pmu_event_start(pmu, event);
++ raw_spin_unlock_irqrestore(&pmu->lock, flags);
++}
++
++static void rapl_pmu_event_stop(struct perf_event *event, int mode)
++{
++ struct rapl_pmu *pmu = event->pmu_private;
++ struct hw_perf_event *hwc = &event->hw;
++ unsigned long flags;
++
++ raw_spin_lock_irqsave(&pmu->lock, flags);
++
++ /* mark event as deactivated and stopped */
++ if (!(hwc->state & PERF_HES_STOPPED)) {
++ WARN_ON_ONCE(pmu->n_active <= 0);
++ pmu->n_active--;
++ if (pmu->n_active == 0)
++ hrtimer_cancel(&pmu->hrtimer);
++
++ list_del(&event->active_entry);
++
++ WARN_ON_ONCE(hwc->state & PERF_HES_STOPPED);
++ hwc->state |= PERF_HES_STOPPED;
++ }
++
++ /* check if update of sw counter is necessary */
++ if ((mode & PERF_EF_UPDATE) && !(hwc->state & PERF_HES_UPTODATE)) {
++ /*
++		 * Drain the remaining delta count out of an event
++ * that we are disabling:
++ */
++ rapl_event_update(event);
++ hwc->state |= PERF_HES_UPTODATE;
++ }
++
++ raw_spin_unlock_irqrestore(&pmu->lock, flags);
++}
++
++static int rapl_pmu_event_add(struct perf_event *event, int mode)
++{
++ struct rapl_pmu *pmu = event->pmu_private;
++ struct hw_perf_event *hwc = &event->hw;
++ unsigned long flags;
++
++ raw_spin_lock_irqsave(&pmu->lock, flags);
++
++ hwc->state = PERF_HES_UPTODATE | PERF_HES_STOPPED;
++
++ if (mode & PERF_EF_START)
++ __rapl_pmu_event_start(pmu, event);
++
++ raw_spin_unlock_irqrestore(&pmu->lock, flags);
++
++ return 0;
++}
++
++static void rapl_pmu_event_del(struct perf_event *event, int flags)
++{
++ rapl_pmu_event_stop(event, PERF_EF_UPDATE);
++}
++
++static int rapl_pmu_event_init(struct perf_event *event)
++{
++ u64 cfg = event->attr.config & RAPL_EVENT_MASK;
++ int bit, ret = 0;
++ struct rapl_pmu *pmu;
++
++ /* only look at RAPL events */
++ if (event->attr.type != rapl_pmus->pmu.type)
++ return -ENOENT;
++
++ /* check only supported bits are set */
++ if (event->attr.config & ~RAPL_EVENT_MASK)
++ return -EINVAL;
++
++ if (event->cpu < 0)
++ return -EINVAL;
++
++ event->event_caps |= PERF_EV_CAP_READ_ACTIVE_PKG;
++
++ if (!cfg || cfg >= NR_RAPL_DOMAINS + 1)
++ return -EINVAL;
++
++ cfg = array_index_nospec((long)cfg, NR_RAPL_DOMAINS + 1);
++ bit = cfg - 1;
++
++ /* check event supported */
++ if (!(rapl_cntr_mask & (1 << bit)))
++ return -EINVAL;
++
++ /* unsupported modes and filters */
++ if (event->attr.sample_period) /* no sampling */
++ return -EINVAL;
++
++ /* must be done before validate_group */
++ pmu = cpu_to_rapl_pmu(event->cpu);
++ if (!pmu)
++ return -EINVAL;
++ event->cpu = pmu->cpu;
++ event->pmu_private = pmu;
++ event->hw.event_base = rapl_msrs[bit].msr;
++ event->hw.config = cfg;
++ event->hw.idx = bit;
++
++ return ret;
++}
++
++static void rapl_pmu_event_read(struct perf_event *event)
++{
++ rapl_event_update(event);
++}
++
++static ssize_t rapl_get_attr_cpumask(struct device *dev,
++ struct device_attribute *attr, char *buf)
++{
++ return cpumap_print_to_pagebuf(true, buf, &rapl_cpu_mask);
++}
++
++static DEVICE_ATTR(cpumask, S_IRUGO, rapl_get_attr_cpumask, NULL);
++
++static struct attribute *rapl_pmu_attrs[] = {
++ &dev_attr_cpumask.attr,
++ NULL,
++};
++
++static struct attribute_group rapl_pmu_attr_group = {
++ .attrs = rapl_pmu_attrs,
++};
++
++RAPL_EVENT_ATTR_STR(energy-cores, rapl_cores, "event=0x01");
++RAPL_EVENT_ATTR_STR(energy-pkg , rapl_pkg, "event=0x02");
++RAPL_EVENT_ATTR_STR(energy-ram , rapl_ram, "event=0x03");
++RAPL_EVENT_ATTR_STR(energy-gpu , rapl_gpu, "event=0x04");
++RAPL_EVENT_ATTR_STR(energy-psys, rapl_psys, "event=0x05");
++
++RAPL_EVENT_ATTR_STR(energy-cores.unit, rapl_cores_unit, "Joules");
++RAPL_EVENT_ATTR_STR(energy-pkg.unit , rapl_pkg_unit, "Joules");
++RAPL_EVENT_ATTR_STR(energy-ram.unit , rapl_ram_unit, "Joules");
++RAPL_EVENT_ATTR_STR(energy-gpu.unit , rapl_gpu_unit, "Joules");
++RAPL_EVENT_ATTR_STR(energy-psys.unit, rapl_psys_unit, "Joules");
++
++/*
++ * we compute in 0.23 nJ increments regardless of MSR
++ */
++RAPL_EVENT_ATTR_STR(energy-cores.scale, rapl_cores_scale, "2.3283064365386962890625e-10");
++RAPL_EVENT_ATTR_STR(energy-pkg.scale, rapl_pkg_scale, "2.3283064365386962890625e-10");
++RAPL_EVENT_ATTR_STR(energy-ram.scale, rapl_ram_scale, "2.3283064365386962890625e-10");
++RAPL_EVENT_ATTR_STR(energy-gpu.scale, rapl_gpu_scale, "2.3283064365386962890625e-10");
++RAPL_EVENT_ATTR_STR(energy-psys.scale, rapl_psys_scale, "2.3283064365386962890625e-10");
++
++/*
++ * There are no default events, but we need to create
++ * "events" group (with empty attrs) before updating
++ * it with detected events.
++ */
++static struct attribute *attrs_empty[] = {
++ NULL,
++};
++
++static struct attribute_group rapl_pmu_events_group = {
++ .name = "events",
++ .attrs = attrs_empty,
++};
++
++DEFINE_RAPL_FORMAT_ATTR(event, event, "config:0-7");
++static struct attribute *rapl_formats_attr[] = {
++ &format_attr_event.attr,
++ NULL,
++};
++
++static struct attribute_group rapl_pmu_format_group = {
++ .name = "format",
++ .attrs = rapl_formats_attr,
++};
++
++static const struct attribute_group *rapl_attr_groups[] = {
++ &rapl_pmu_attr_group,
++ &rapl_pmu_format_group,
++ &rapl_pmu_events_group,
++ NULL,
++};
++
++static struct attribute *rapl_events_cores[] = {
++ EVENT_PTR(rapl_cores),
++ EVENT_PTR(rapl_cores_unit),
++ EVENT_PTR(rapl_cores_scale),
++ NULL,
++};
++
++static struct attribute_group rapl_events_cores_group = {
++ .name = "events",
++ .attrs = rapl_events_cores,
++};
++
++static struct attribute *rapl_events_pkg[] = {
++ EVENT_PTR(rapl_pkg),
++ EVENT_PTR(rapl_pkg_unit),
++ EVENT_PTR(rapl_pkg_scale),
++ NULL,
++};
++
++static struct attribute_group rapl_events_pkg_group = {
++ .name = "events",
++ .attrs = rapl_events_pkg,
++};
++
++static struct attribute *rapl_events_ram[] = {
++ EVENT_PTR(rapl_ram),
++ EVENT_PTR(rapl_ram_unit),
++ EVENT_PTR(rapl_ram_scale),
++ NULL,
++};
++
++static struct attribute_group rapl_events_ram_group = {
++ .name = "events",
++ .attrs = rapl_events_ram,
++};
++
++static struct attribute *rapl_events_gpu[] = {
++ EVENT_PTR(rapl_gpu),
++ EVENT_PTR(rapl_gpu_unit),
++ EVENT_PTR(rapl_gpu_scale),
++ NULL,
++};
++
++static struct attribute_group rapl_events_gpu_group = {
++ .name = "events",
++ .attrs = rapl_events_gpu,
++};
++
++static struct attribute *rapl_events_psys[] = {
++ EVENT_PTR(rapl_psys),
++ EVENT_PTR(rapl_psys_unit),
++ EVENT_PTR(rapl_psys_scale),
++ NULL,
++};
++
++static struct attribute_group rapl_events_psys_group = {
++ .name = "events",
++ .attrs = rapl_events_psys,
++};
++
++static bool test_msr(int idx, void *data)
++{
++ return test_bit(idx, (unsigned long *) data);
++}
++
++static struct perf_msr rapl_msrs[] = {
++ [PERF_RAPL_PP0] = { MSR_PP0_ENERGY_STATUS, &rapl_events_cores_group, test_msr },
++ [PERF_RAPL_PKG] = { MSR_PKG_ENERGY_STATUS, &rapl_events_pkg_group, test_msr },
++ [PERF_RAPL_RAM] = { MSR_DRAM_ENERGY_STATUS, &rapl_events_ram_group, test_msr },
++ [PERF_RAPL_PP1] = { MSR_PP1_ENERGY_STATUS, &rapl_events_gpu_group, test_msr },
++ [PERF_RAPL_PSYS] = { MSR_PLATFORM_ENERGY_STATUS, &rapl_events_psys_group, test_msr },
++};
++
++static int rapl_cpu_offline(unsigned int cpu)
++{
++ struct rapl_pmu *pmu = cpu_to_rapl_pmu(cpu);
++ int target;
++
++ /* Check if exiting cpu is used for collecting rapl events */
++ if (!cpumask_test_and_clear_cpu(cpu, &rapl_cpu_mask))
++ return 0;
++
++ pmu->cpu = -1;
++ /* Find a new cpu to collect rapl events */
++ target = cpumask_any_but(topology_die_cpumask(cpu), cpu);
++
++ /* Migrate rapl events to the new target */
++ if (target < nr_cpu_ids) {
++ cpumask_set_cpu(target, &rapl_cpu_mask);
++ pmu->cpu = target;
++ perf_pmu_migrate_context(pmu->pmu, cpu, target);
++ }
++ return 0;
++}
++
++static int rapl_cpu_online(unsigned int cpu)
++{
++ struct rapl_pmu *pmu = cpu_to_rapl_pmu(cpu);
++ int target;
++
++ if (!pmu) {
++ pmu = kzalloc_node(sizeof(*pmu), GFP_KERNEL, cpu_to_node(cpu));
++ if (!pmu)
++ return -ENOMEM;
++
++ raw_spin_lock_init(&pmu->lock);
++ INIT_LIST_HEAD(&pmu->active_list);
++ pmu->pmu = &rapl_pmus->pmu;
++ pmu->timer_interval = ms_to_ktime(rapl_timer_ms);
++ rapl_hrtimer_init(pmu);
++
++ rapl_pmus->pmus[topology_logical_die_id(cpu)] = pmu;
++ }
++
++ /*
++ * Check if there is an online cpu in the package which collects rapl
++ * events already.
++ */
++ target = cpumask_any_and(&rapl_cpu_mask, topology_die_cpumask(cpu));
++ if (target < nr_cpu_ids)
++ return 0;
++
++ cpumask_set_cpu(cpu, &rapl_cpu_mask);
++ pmu->cpu = cpu;
++ return 0;
++}
++
++static int rapl_check_hw_unit(bool apply_quirk)
++{
++ u64 msr_rapl_power_unit_bits;
++ int i;
++
++ /* protect rdmsrl() to handle virtualization */
++ if (rdmsrl_safe(MSR_RAPL_POWER_UNIT, &msr_rapl_power_unit_bits))
++ return -1;
++ for (i = 0; i < NR_RAPL_DOMAINS; i++)
++ rapl_hw_unit[i] = (msr_rapl_power_unit_bits >> 8) & 0x1FULL;
++
++ /*
++ * DRAM domain on HSW server and KNL has fixed energy unit which can be
++ * different than the unit from power unit MSR. See
++ * "Intel Xeon Processor E5-1600 and E5-2600 v3 Product Families, V2
++ * of 2. Datasheet, September 2014, Reference Number: 330784-001 "
++ */
++ if (apply_quirk)
++ rapl_hw_unit[PERF_RAPL_RAM] = 16;
++
++ /*
++ * Calculate the timer rate:
++ * Use reference of 200W for scaling the timeout to avoid counter
++ * overflows. 200W = 200 Joules/sec
++ * Divide interval by 2 to avoid lockstep (2 * 100)
++ * if hw unit is 32, then we use 2 ms 1/200/2
++ */
++ rapl_timer_ms = 2;
++ if (rapl_hw_unit[0] < 32) {
++ rapl_timer_ms = (1000 / (2 * 100));
++ rapl_timer_ms *= (1ULL << (32 - rapl_hw_unit[0] - 1));
++ }
++ return 0;
++}
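The timer arithmetic in `rapl_check_hw_unit()` can be reproduced stand-alone to sanity-check the comment: a 2^-32 J unit (`hw_unit == 32`) keeps the 2 ms default, while a coarser 2^-16 J unit stretches the overflow interval to 5 ms * 2^15.

```c
#include <stdint.h>

/* Stand-alone copy of the overflow-timer computation: a 200 W
 * reference, the interval halved to avoid lockstep, scaled by the
 * counter's energy unit. */
uint64_t rapl_timer_interval_ms(int hw_unit)
{
	uint64_t ms = 2;	/* 2 ms when the unit is 2^-32 J */

	if (hw_unit < 32) {
		ms = 1000 / (2 * 100);
		ms *= 1ULL << (32 - hw_unit - 1);
	}
	return ms;
}
```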
++
++static void __init rapl_advertise(void)
++{
++ int i;
++
++ pr_info("API unit is 2^-32 Joules, %d fixed counters, %llu ms ovfl timer\n",
++ hweight32(rapl_cntr_mask), rapl_timer_ms);
++
++ for (i = 0; i < NR_RAPL_DOMAINS; i++) {
++ if (rapl_cntr_mask & (1 << i)) {
++ pr_info("hw unit of domain %s 2^-%d Joules\n",
++ rapl_domain_names[i], rapl_hw_unit[i]);
++ }
++ }
++}
++
++static void cleanup_rapl_pmus(void)
++{
++ int i;
++
++ for (i = 0; i < rapl_pmus->maxdie; i++)
++ kfree(rapl_pmus->pmus[i]);
++ kfree(rapl_pmus);
++}
++
++static const struct attribute_group *rapl_attr_update[] = {
++ &rapl_events_cores_group,
++ &rapl_events_pkg_group,
++ &rapl_events_ram_group,
++ &rapl_events_gpu_group,
++	&rapl_events_psys_group,
++ NULL,
++};
++
++static int __init init_rapl_pmus(void)
++{
++ int maxdie = topology_max_packages() * topology_max_die_per_package();
++ size_t size;
++
++ size = sizeof(*rapl_pmus) + maxdie * sizeof(struct rapl_pmu *);
++ rapl_pmus = kzalloc(size, GFP_KERNEL);
++ if (!rapl_pmus)
++ return -ENOMEM;
++
++ rapl_pmus->maxdie = maxdie;
++ rapl_pmus->pmu.attr_groups = rapl_attr_groups;
++ rapl_pmus->pmu.attr_update = rapl_attr_update;
++ rapl_pmus->pmu.task_ctx_nr = perf_invalid_context;
++ rapl_pmus->pmu.event_init = rapl_pmu_event_init;
++ rapl_pmus->pmu.add = rapl_pmu_event_add;
++ rapl_pmus->pmu.del = rapl_pmu_event_del;
++ rapl_pmus->pmu.start = rapl_pmu_event_start;
++ rapl_pmus->pmu.stop = rapl_pmu_event_stop;
++ rapl_pmus->pmu.read = rapl_pmu_event_read;
++ rapl_pmus->pmu.module = THIS_MODULE;
++ rapl_pmus->pmu.capabilities = PERF_PMU_CAP_NO_EXCLUDE;
++ return 0;
++}
++
++static struct rapl_model model_snb = {
++ .events = BIT(PERF_RAPL_PP0) |
++ BIT(PERF_RAPL_PKG) |
++ BIT(PERF_RAPL_PP1),
++ .apply_quirk = false,
++};
++
++static struct rapl_model model_snbep = {
++ .events = BIT(PERF_RAPL_PP0) |
++ BIT(PERF_RAPL_PKG) |
++ BIT(PERF_RAPL_RAM),
++ .apply_quirk = false,
++};
++
++static struct rapl_model model_hsw = {
++ .events = BIT(PERF_RAPL_PP0) |
++ BIT(PERF_RAPL_PKG) |
++ BIT(PERF_RAPL_RAM) |
++ BIT(PERF_RAPL_PP1),
++ .apply_quirk = false,
++};
++
++static struct rapl_model model_hsx = {
++ .events = BIT(PERF_RAPL_PP0) |
++ BIT(PERF_RAPL_PKG) |
++ BIT(PERF_RAPL_RAM),
++ .apply_quirk = true,
++};
++
++static struct rapl_model model_knl = {
++ .events = BIT(PERF_RAPL_PKG) |
++ BIT(PERF_RAPL_RAM),
++ .apply_quirk = true,
++};
++
++static struct rapl_model model_skl = {
++ .events = BIT(PERF_RAPL_PP0) |
++ BIT(PERF_RAPL_PKG) |
++ BIT(PERF_RAPL_RAM) |
++ BIT(PERF_RAPL_PP1) |
++ BIT(PERF_RAPL_PSYS),
++ .apply_quirk = false,
++};
++
++static const struct x86_cpu_id rapl_model_match[] __initconst = {
++ X86_MATCH_INTEL_FAM6_MODEL(SANDYBRIDGE, &model_snb),
++ X86_MATCH_INTEL_FAM6_MODEL(SANDYBRIDGE_X, &model_snbep),
++ X86_MATCH_INTEL_FAM6_MODEL(IVYBRIDGE, &model_snb),
++ X86_MATCH_INTEL_FAM6_MODEL(IVYBRIDGE_X, &model_snbep),
++ X86_MATCH_INTEL_FAM6_MODEL(HASWELL, &model_hsw),
++ X86_MATCH_INTEL_FAM6_MODEL(HASWELL_X, &model_hsx),
++ X86_MATCH_INTEL_FAM6_MODEL(HASWELL_L, &model_hsw),
++ X86_MATCH_INTEL_FAM6_MODEL(HASWELL_G, &model_hsw),
++ X86_MATCH_INTEL_FAM6_MODEL(BROADWELL, &model_hsw),
++ X86_MATCH_INTEL_FAM6_MODEL(BROADWELL_G, &model_hsw),
++ X86_MATCH_INTEL_FAM6_MODEL(BROADWELL_X, &model_hsx),
++ X86_MATCH_INTEL_FAM6_MODEL(BROADWELL_D, &model_hsx),
++ X86_MATCH_INTEL_FAM6_MODEL(XEON_PHI_KNL, &model_knl),
++ X86_MATCH_INTEL_FAM6_MODEL(XEON_PHI_KNM, &model_knl),
++ X86_MATCH_INTEL_FAM6_MODEL(SKYLAKE_L, &model_skl),
++ X86_MATCH_INTEL_FAM6_MODEL(SKYLAKE, &model_skl),
++ X86_MATCH_INTEL_FAM6_MODEL(SKYLAKE_X, &model_hsx),
++ X86_MATCH_INTEL_FAM6_MODEL(KABYLAKE_L, &model_skl),
++ X86_MATCH_INTEL_FAM6_MODEL(KABYLAKE, &model_skl),
++ X86_MATCH_INTEL_FAM6_MODEL(CANNONLAKE_L, &model_skl),
++ X86_MATCH_INTEL_FAM6_MODEL(ATOM_GOLDMONT, &model_hsw),
++ X86_MATCH_INTEL_FAM6_MODEL(ATOM_GOLDMONT_D, &model_hsw),
++ X86_MATCH_INTEL_FAM6_MODEL(ATOM_GOLDMONT_PLUS, &model_hsw),
++ X86_MATCH_INTEL_FAM6_MODEL(ICELAKE_L, &model_skl),
++ X86_MATCH_INTEL_FAM6_MODEL(ICELAKE, &model_skl),
++ X86_MATCH_INTEL_FAM6_MODEL(COMETLAKE_L, &model_skl),
++ X86_MATCH_INTEL_FAM6_MODEL(COMETLAKE, &model_skl),
++ {},
++};
++MODULE_DEVICE_TABLE(x86cpu, rapl_model_match);
++
++static int __init rapl_pmu_init(void)
++{
++ const struct x86_cpu_id *id;
++ struct rapl_model *rm;
++ int ret;
++
++ id = x86_match_cpu(rapl_model_match);
++ if (!id)
++ return -ENODEV;
++
++ rm = (struct rapl_model *) id->driver_data;
++ rapl_cntr_mask = perf_msr_probe(rapl_msrs, PERF_RAPL_MAX,
++ false, (void *) &rm->events);
++
++ ret = rapl_check_hw_unit(rm->apply_quirk);
++ if (ret)
++ return ret;
++
++ ret = init_rapl_pmus();
++ if (ret)
++ return ret;
++
++ /*
++ * Install callbacks. Core will call them for each online cpu.
++ */
++ ret = cpuhp_setup_state(CPUHP_AP_PERF_X86_RAPL_ONLINE,
++ "perf/x86/rapl:online",
++ rapl_cpu_online, rapl_cpu_offline);
++ if (ret)
++ goto out;
++
++ ret = perf_pmu_register(&rapl_pmus->pmu, "power", -1);
++ if (ret)
++ goto out1;
++
++ rapl_advertise();
++ return 0;
++
++out1:
++ cpuhp_remove_state(CPUHP_AP_PERF_X86_RAPL_ONLINE);
++out:
++ pr_warn("Initialization failed (%d), disabled\n", ret);
++ cleanup_rapl_pmus();
++ return ret;
++}
++module_init(rapl_pmu_init);
++
++static void __exit intel_rapl_exit(void)
++{
++ cpuhp_remove_state_nocalls(CPUHP_AP_PERF_X86_RAPL_ONLINE);
++ perf_pmu_unregister(&rapl_pmus->pmu);
++ cleanup_rapl_pmus();
++}
++module_exit(intel_rapl_exit);
+diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
+index c4e8fd709cf6..e38befea287f 100644
+--- a/arch/x86/include/asm/processor.h
++++ b/arch/x86/include/asm/processor.h
+@@ -370,7 +370,7 @@ struct x86_hw_tss {
+ #define IO_BITMAP_OFFSET_INVALID (__KERNEL_TSS_LIMIT + 1)
+
+ struct entry_stack {
+- unsigned long words[64];
++ char stack[PAGE_SIZE];
+ };
+
+ struct entry_stack_page {
+diff --git a/arch/x86/kvm/kvm_cache_regs.h b/arch/x86/kvm/kvm_cache_regs.h
+index 62558b9bdda7..a4b8277ae88e 100644
+--- a/arch/x86/kvm/kvm_cache_regs.h
++++ b/arch/x86/kvm/kvm_cache_regs.h
+@@ -7,7 +7,7 @@
+ #define KVM_POSSIBLE_CR0_GUEST_BITS X86_CR0_TS
+ #define KVM_POSSIBLE_CR4_GUEST_BITS \
+ (X86_CR4_PVI | X86_CR4_DE | X86_CR4_PCE | X86_CR4_OSFXSR \
+- | X86_CR4_OSXMMEXCPT | X86_CR4_LA57 | X86_CR4_PGE)
++ | X86_CR4_OSXMMEXCPT | X86_CR4_LA57 | X86_CR4_PGE | X86_CR4_TSD)
+
+ #define BUILD_KVM_GPR_ACCESSORS(lname, uname) \
+ static __always_inline unsigned long kvm_##lname##_read(struct kvm_vcpu *vcpu)\
+diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
+index eb27ab47d607..70cf2c1a1423 100644
+--- a/arch/x86/kvm/mmu/mmu.c
++++ b/arch/x86/kvm/mmu/mmu.c
+@@ -4484,7 +4484,7 @@ __reset_rsvds_bits_mask(struct kvm_vcpu *vcpu,
+ nonleaf_bit8_rsvd | rsvd_bits(7, 7) |
+ rsvd_bits(maxphyaddr, 51);
+ rsvd_check->rsvd_bits_mask[0][2] = exb_bit_rsvd |
+- nonleaf_bit8_rsvd | gbpages_bit_rsvd |
++ gbpages_bit_rsvd |
+ rsvd_bits(maxphyaddr, 51);
+ rsvd_check->rsvd_bits_mask[0][1] = exb_bit_rsvd |
+ rsvd_bits(maxphyaddr, 51);
+diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
+index 390ec34e4b4f..8fafcb2cd103 100644
+--- a/arch/x86/kvm/vmx/vmx.c
++++ b/arch/x86/kvm/vmx/vmx.c
+@@ -3932,6 +3932,8 @@ void vmx_set_constant_host_state(struct vcpu_vmx *vmx)
+
+ void set_cr4_guest_host_mask(struct vcpu_vmx *vmx)
+ {
++ BUILD_BUG_ON(KVM_CR4_GUEST_OWNED_BITS & ~KVM_POSSIBLE_CR4_GUEST_BITS);
++
+ vmx->vcpu.arch.cr4_guest_owned_bits = KVM_CR4_GUEST_OWNED_BITS;
+ if (enable_ept)
+ vmx->vcpu.arch.cr4_guest_owned_bits |= X86_CR4_PGE;
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index 5f08eeac16c8..738a558c915c 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -964,6 +964,8 @@ int kvm_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
+ if (is_long_mode(vcpu)) {
+ if (!(cr4 & X86_CR4_PAE))
+ return 1;
++ if ((cr4 ^ old_cr4) & X86_CR4_LA57)
++ return 1;
+ } else if (is_paging(vcpu) && (cr4 & X86_CR4_PAE)
+ && ((cr4 ^ old_cr4) & pdptr_bits)
+ && !load_pdptrs(vcpu, vcpu->arch.walk_mmu,
+diff --git a/block/bio-integrity.c b/block/bio-integrity.c
+index ae07dd78e951..c9dc2b17ce25 100644
+--- a/block/bio-integrity.c
++++ b/block/bio-integrity.c
+@@ -24,6 +24,18 @@ void blk_flush_integrity(void)
+ flush_workqueue(kintegrityd_wq);
+ }
+
++void __bio_integrity_free(struct bio_set *bs, struct bio_integrity_payload *bip)
++{
++ if (bs && mempool_initialized(&bs->bio_integrity_pool)) {
++ if (bip->bip_vec)
++ bvec_free(&bs->bvec_integrity_pool, bip->bip_vec,
++ bip->bip_slab);
++ mempool_free(bip, &bs->bio_integrity_pool);
++ } else {
++ kfree(bip);
++ }
++}
++
+ /**
+ * bio_integrity_alloc - Allocate integrity payload and attach it to bio
+ * @bio: bio to attach integrity metadata to
+@@ -75,7 +87,7 @@ struct bio_integrity_payload *bio_integrity_alloc(struct bio *bio,
+
+ return bip;
+ err:
+- mempool_free(bip, &bs->bio_integrity_pool);
++ __bio_integrity_free(bs, bip);
+ return ERR_PTR(-ENOMEM);
+ }
+ EXPORT_SYMBOL(bio_integrity_alloc);
+@@ -96,14 +108,7 @@ void bio_integrity_free(struct bio *bio)
+ kfree(page_address(bip->bip_vec->bv_page) +
+ bip->bip_vec->bv_offset);
+
+- if (bs && mempool_initialized(&bs->bio_integrity_pool)) {
+- bvec_free(&bs->bvec_integrity_pool, bip->bip_vec, bip->bip_slab);
+-
+- mempool_free(bip, &bs->bio_integrity_pool);
+- } else {
+- kfree(bip);
+- }
+-
++ __bio_integrity_free(bs, bip);
+ bio->bi_integrity = NULL;
+ bio->bi_opf &= ~REQ_INTEGRITY;
+ }
+diff --git a/block/blk-mq.c b/block/blk-mq.c
+index 8f580e66691b..0d533d084a5f 100644
+--- a/block/blk-mq.c
++++ b/block/blk-mq.c
+@@ -803,10 +803,10 @@ static bool blk_mq_rq_inflight(struct blk_mq_hw_ctx *hctx, struct request *rq,
+ void *priv, bool reserved)
+ {
+ /*
+- * If we find a request that is inflight and the queue matches,
++ * If we find a request that isn't idle and the queue matches,
+ * we know the queue is busy. Return false to stop the iteration.
+ */
+- if (rq->state == MQ_RQ_IN_FLIGHT && rq->q == hctx->queue) {
++ if (blk_mq_request_started(rq) && rq->q == hctx->queue) {
+ bool *busy = priv;
+
+ *busy = true;
+diff --git a/drivers/base/regmap/regmap.c b/drivers/base/regmap/regmap.c
+index 508bbd6ea439..320d23de02c2 100644
+--- a/drivers/base/regmap/regmap.c
++++ b/drivers/base/regmap/regmap.c
+@@ -17,6 +17,7 @@
+ #include <linux/delay.h>
+ #include <linux/log2.h>
+ #include <linux/hwspinlock.h>
++#include <asm/unaligned.h>
+
+ #define CREATE_TRACE_POINTS
+ #include "trace.h"
+@@ -249,22 +250,20 @@ static void regmap_format_8(void *buf, unsigned int val, unsigned int shift)
+
+ static void regmap_format_16_be(void *buf, unsigned int val, unsigned int shift)
+ {
+- __be16 *b = buf;
+-
+- b[0] = cpu_to_be16(val << shift);
++ put_unaligned_be16(val << shift, buf);
+ }
+
+ static void regmap_format_16_le(void *buf, unsigned int val, unsigned int shift)
+ {
+- __le16 *b = buf;
+-
+- b[0] = cpu_to_le16(val << shift);
++ put_unaligned_le16(val << shift, buf);
+ }
+
+ static void regmap_format_16_native(void *buf, unsigned int val,
+ unsigned int shift)
+ {
+- *(u16 *)buf = val << shift;
++ u16 v = val << shift;
++
++ memcpy(buf, &v, sizeof(v));
+ }
+
+ static void regmap_format_24(void *buf, unsigned int val, unsigned int shift)
+@@ -280,43 +279,39 @@ static void regmap_format_24(void *buf, unsigned int val, unsigned int shift)
+
+ static void regmap_format_32_be(void *buf, unsigned int val, unsigned int shift)
+ {
+- __be32 *b = buf;
+-
+- b[0] = cpu_to_be32(val << shift);
++ put_unaligned_be32(val << shift, buf);
+ }
+
+ static void regmap_format_32_le(void *buf, unsigned int val, unsigned int shift)
+ {
+- __le32 *b = buf;
+-
+- b[0] = cpu_to_le32(val << shift);
++ put_unaligned_le32(val << shift, buf);
+ }
+
+ static void regmap_format_32_native(void *buf, unsigned int val,
+ unsigned int shift)
+ {
+- *(u32 *)buf = val << shift;
++ u32 v = val << shift;
++
++ memcpy(buf, &v, sizeof(v));
+ }
+
+ #ifdef CONFIG_64BIT
+ static void regmap_format_64_be(void *buf, unsigned int val, unsigned int shift)
+ {
+- __be64 *b = buf;
+-
+- b[0] = cpu_to_be64((u64)val << shift);
++ put_unaligned_be64((u64) val << shift, buf);
+ }
+
+ static void regmap_format_64_le(void *buf, unsigned int val, unsigned int shift)
+ {
+- __le64 *b = buf;
+-
+- b[0] = cpu_to_le64((u64)val << shift);
++ put_unaligned_le64((u64) val << shift, buf);
+ }
+
+ static void regmap_format_64_native(void *buf, unsigned int val,
+ unsigned int shift)
+ {
+- *(u64 *)buf = (u64)val << shift;
++ u64 v = (u64) val << shift;
++
++ memcpy(buf, &v, sizeof(v));
+ }
+ #endif
+
+@@ -333,35 +328,34 @@ static unsigned int regmap_parse_8(const void *buf)
+
+ static unsigned int regmap_parse_16_be(const void *buf)
+ {
+- const __be16 *b = buf;
+-
+- return be16_to_cpu(b[0]);
++ return get_unaligned_be16(buf);
+ }
+
+ static unsigned int regmap_parse_16_le(const void *buf)
+ {
+- const __le16 *b = buf;
+-
+- return le16_to_cpu(b[0]);
++ return get_unaligned_le16(buf);
+ }
+
+ static void regmap_parse_16_be_inplace(void *buf)
+ {
+- __be16 *b = buf;
++ u16 v = get_unaligned_be16(buf);
+
+- b[0] = be16_to_cpu(b[0]);
++ memcpy(buf, &v, sizeof(v));
+ }
+
+ static void regmap_parse_16_le_inplace(void *buf)
+ {
+- __le16 *b = buf;
++ u16 v = get_unaligned_le16(buf);
+
+- b[0] = le16_to_cpu(b[0]);
++ memcpy(buf, &v, sizeof(v));
+ }
+
+ static unsigned int regmap_parse_16_native(const void *buf)
+ {
+- return *(u16 *)buf;
++ u16 v;
++
++ memcpy(&v, buf, sizeof(v));
++ return v;
+ }
+
+ static unsigned int regmap_parse_24(const void *buf)
+@@ -376,69 +370,67 @@ static unsigned int regmap_parse_24(const void *buf)
+
+ static unsigned int regmap_parse_32_be(const void *buf)
+ {
+- const __be32 *b = buf;
+-
+- return be32_to_cpu(b[0]);
++ return get_unaligned_be32(buf);
+ }
+
+ static unsigned int regmap_parse_32_le(const void *buf)
+ {
+- const __le32 *b = buf;
+-
+- return le32_to_cpu(b[0]);
++ return get_unaligned_le32(buf);
+ }
+
+ static void regmap_parse_32_be_inplace(void *buf)
+ {
+- __be32 *b = buf;
++ u32 v = get_unaligned_be32(buf);
+
+- b[0] = be32_to_cpu(b[0]);
++ memcpy(buf, &v, sizeof(v));
+ }
+
+ static void regmap_parse_32_le_inplace(void *buf)
+ {
+- __le32 *b = buf;
++ u32 v = get_unaligned_le32(buf);
+
+- b[0] = le32_to_cpu(b[0]);
++ memcpy(buf, &v, sizeof(v));
+ }
+
+ static unsigned int regmap_parse_32_native(const void *buf)
+ {
+- return *(u32 *)buf;
++ u32 v;
++
++ memcpy(&v, buf, sizeof(v));
++ return v;
+ }
+
+ #ifdef CONFIG_64BIT
+ static unsigned int regmap_parse_64_be(const void *buf)
+ {
+- const __be64 *b = buf;
+-
+- return be64_to_cpu(b[0]);
++ return get_unaligned_be64(buf);
+ }
+
+ static unsigned int regmap_parse_64_le(const void *buf)
+ {
+- const __le64 *b = buf;
+-
+- return le64_to_cpu(b[0]);
++ return get_unaligned_le64(buf);
+ }
+
+ static void regmap_parse_64_be_inplace(void *buf)
+ {
+- __be64 *b = buf;
++ u64 v = get_unaligned_be64(buf);
+
+- b[0] = be64_to_cpu(b[0]);
++ memcpy(buf, &v, sizeof(v));
+ }
+
+ static void regmap_parse_64_le_inplace(void *buf)
+ {
+- __le64 *b = buf;
++ u64 v = get_unaligned_le64(buf);
+
+- b[0] = le64_to_cpu(b[0]);
++ memcpy(buf, &v, sizeof(v));
+ }
+
+ static unsigned int regmap_parse_64_native(const void *buf)
+ {
+- return *(u64 *)buf;
++ u64 v;
++
++ memcpy(&v, buf, sizeof(v));
++ return v;
+ }
+ #endif
+
+diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
+index 43cff01a5a67..ce7e9f223b20 100644
+--- a/drivers/block/nbd.c
++++ b/drivers/block/nbd.c
+@@ -1033,25 +1033,26 @@ static int nbd_add_socket(struct nbd_device *nbd, unsigned long arg,
+ test_bit(NBD_RT_BOUND, &config->runtime_flags))) {
+ dev_err(disk_to_dev(nbd->disk),
+ "Device being setup by another task");
+- sockfd_put(sock);
+- return -EBUSY;
++ err = -EBUSY;
++ goto put_socket;
++ }
++
++ nsock = kzalloc(sizeof(*nsock), GFP_KERNEL);
++ if (!nsock) {
++ err = -ENOMEM;
++ goto put_socket;
+ }
+
+ socks = krealloc(config->socks, (config->num_connections + 1) *
+ sizeof(struct nbd_sock *), GFP_KERNEL);
+ if (!socks) {
+- sockfd_put(sock);
+- return -ENOMEM;
++ kfree(nsock);
++ err = -ENOMEM;
++ goto put_socket;
+ }
+
+ config->socks = socks;
+
+- nsock = kzalloc(sizeof(struct nbd_sock), GFP_KERNEL);
+- if (!nsock) {
+- sockfd_put(sock);
+- return -ENOMEM;
+- }
+-
+ nsock->fallback_index = -1;
+ nsock->dead = false;
+ mutex_init(&nsock->tx_lock);
+@@ -1063,6 +1064,10 @@ static int nbd_add_socket(struct nbd_device *nbd, unsigned long arg,
+ atomic_inc(&config->live_connections);
+
+ return 0;
++
++put_socket:
++ sockfd_put(sock);
++ return err;
+ }
+
+ static int nbd_reconnect_socket(struct nbd_device *nbd, unsigned long arg)
+diff --git a/drivers/clocksource/arm_arch_timer.c b/drivers/clocksource/arm_arch_timer.c
+index 2204a444e801..3cf4b402cdac 100644
+--- a/drivers/clocksource/arm_arch_timer.c
++++ b/drivers/clocksource/arm_arch_timer.c
+@@ -480,6 +480,14 @@ static const struct arch_timer_erratum_workaround ool_workarounds[] = {
+ .set_next_event_virt = erratum_set_next_event_tval_virt,
+ },
+ #endif
++#ifdef CONFIG_ARM64_ERRATUM_1418040
++ {
++ .match_type = ate_match_local_cap_id,
++ .id = (void *)ARM64_WORKAROUND_1418040,
++ .desc = "ARM erratum 1418040",
++ .disable_compat_vdso = true,
++ },
++#endif
+ };
+
+ typedef bool (*ate_match_fn_t)(const struct arch_timer_erratum_workaround *,
+@@ -566,6 +574,9 @@ void arch_timer_enable_workaround(const struct arch_timer_erratum_workaround *wa
+ if (wa->read_cntvct_el0) {
+ clocksource_counter.vdso_clock_mode = VDSO_CLOCKMODE_NONE;
+ vdso_default = VDSO_CLOCKMODE_NONE;
++ } else if (wa->disable_compat_vdso && vdso_default != VDSO_CLOCKMODE_NONE) {
++ vdso_default = VDSO_CLOCKMODE_ARCHTIMER_NOCOMPAT;
++ clocksource_counter.vdso_clock_mode = vdso_default;
+ }
+ }
+
+diff --git a/drivers/gpio/gpio-pca953x.c b/drivers/gpio/gpio-pca953x.c
+index 01011a780688..48bea0997e70 100644
+--- a/drivers/gpio/gpio-pca953x.c
++++ b/drivers/gpio/gpio-pca953x.c
+@@ -107,6 +107,84 @@ static const struct i2c_device_id pca953x_id[] = {
+ };
+ MODULE_DEVICE_TABLE(i2c, pca953x_id);
+
++#ifdef CONFIG_GPIO_PCA953X_IRQ
++
++#include <linux/dmi.h>
++#include <linux/gpio.h>
++#include <linux/list.h>
++
++static const struct dmi_system_id pca953x_dmi_acpi_irq_info[] = {
++ {
++ /*
++ * On Intel Galileo Gen 2 board the IRQ pin of one of
++ * the I²C GPIO expanders, which has GpioInt() resource,
++ * is provided as an absolute number instead of being
++ * relative. Since first controller (gpio-sch.c) and
++ * second (gpio-dwapb.c) are at the fixed bases, we may
++ * safely refer to the number in the global space to get
++ * an IRQ out of it.
++ */
++ .matches = {
++ DMI_EXACT_MATCH(DMI_BOARD_NAME, "GalileoGen2"),
++ },
++ },
++ {}
++};
++
++#ifdef CONFIG_ACPI
++static int pca953x_acpi_get_pin(struct acpi_resource *ares, void *data)
++{
++ struct acpi_resource_gpio *agpio;
++ int *pin = data;
++
++ if (acpi_gpio_get_irq_resource(ares, &agpio))
++ *pin = agpio->pin_table[0];
++ return 1;
++}
++
++static int pca953x_acpi_find_pin(struct device *dev)
++{
++ struct acpi_device *adev = ACPI_COMPANION(dev);
++ int pin = -ENOENT, ret;
++ LIST_HEAD(r);
++
++ ret = acpi_dev_get_resources(adev, &r, pca953x_acpi_get_pin, &pin);
++ acpi_dev_free_resource_list(&r);
++ if (ret < 0)
++ return ret;
++
++ return pin;
++}
++#else
++static inline int pca953x_acpi_find_pin(struct device *dev) { return -ENXIO; }
++#endif
++
++static int pca953x_acpi_get_irq(struct device *dev)
++{
++ int pin, ret;
++
++ pin = pca953x_acpi_find_pin(dev);
++ if (pin < 0)
++ return pin;
++
++ dev_info(dev, "Applying ACPI interrupt quirk (GPIO %d)\n", pin);
++
++ if (!gpio_is_valid(pin))
++ return -EINVAL;
++
++ ret = gpio_request(pin, "pca953x interrupt");
++ if (ret)
++ return ret;
++
++ ret = gpio_to_irq(pin);
++
++ /* When pin is used as an IRQ, no need to keep it requested */
++ gpio_free(pin);
++
++ return ret;
++}
++#endif
++
+ static const struct acpi_device_id pca953x_acpi_ids[] = {
+ { "INT3491", 16 | PCA953X_TYPE | PCA_LATCH_INT, },
+ { }
+@@ -613,8 +691,6 @@ static void pca953x_irq_bus_sync_unlock(struct irq_data *d)
+ DECLARE_BITMAP(reg_direction, MAX_LINE);
+ int level;
+
+- pca953x_read_regs(chip, chip->regs->direction, reg_direction);
+-
+ if (chip->driver_data & PCA_PCAL) {
+ /* Enable latch on interrupt-enabled inputs */
+ pca953x_write_regs(chip, PCAL953X_IN_LATCH, chip->irq_mask);
+@@ -625,7 +701,11 @@ static void pca953x_irq_bus_sync_unlock(struct irq_data *d)
+ pca953x_write_regs(chip, PCAL953X_INT_MASK, irq_mask);
+ }
+
++ /* Switch direction to input if needed */
++ pca953x_read_regs(chip, chip->regs->direction, reg_direction);
++
+ bitmap_or(irq_mask, chip->irq_trig_fall, chip->irq_trig_raise, gc->ngpio);
++ bitmap_complement(reg_direction, reg_direction, gc->ngpio);
+ bitmap_and(irq_mask, irq_mask, reg_direction, gc->ngpio);
+
+ /* Look for any newly setup interrupt */
+@@ -724,14 +804,16 @@ static irqreturn_t pca953x_irq_handler(int irq, void *devid)
+ struct gpio_chip *gc = &chip->gpio_chip;
+ DECLARE_BITMAP(pending, MAX_LINE);
+ int level;
++ bool ret;
+
+- if (!pca953x_irq_pending(chip, pending))
+- return IRQ_NONE;
++ mutex_lock(&chip->i2c_lock);
++ ret = pca953x_irq_pending(chip, pending);
++ mutex_unlock(&chip->i2c_lock);
+
+ for_each_set_bit(level, pending, gc->ngpio)
+ handle_nested_irq(irq_find_mapping(gc->irq.domain, level));
+
+- return IRQ_HANDLED;
++ return IRQ_RETVAL(ret);
+ }
+
+ static int pca953x_irq_setup(struct pca953x_chip *chip, int irq_base)
+@@ -742,6 +824,12 @@ static int pca953x_irq_setup(struct pca953x_chip *chip, int irq_base)
+ DECLARE_BITMAP(irq_stat, MAX_LINE);
+ int ret;
+
++ if (dmi_first_match(pca953x_dmi_acpi_irq_info)) {
++ ret = pca953x_acpi_get_irq(&client->dev);
++ if (ret > 0)
++ client->irq = ret;
++ }
++
+ if (!client->irq)
+ return 0;
+
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
+index 4981e443a884..2f0eff2c23c7 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
+@@ -36,7 +36,8 @@ static void amdgpu_job_timedout(struct drm_sched_job *s_job)
+
+ memset(&ti, 0, sizeof(struct amdgpu_task_info));
+
+- if (amdgpu_ring_soft_recovery(ring, job->vmid, s_job->s_fence->parent)) {
++ if (amdgpu_gpu_recovery &&
++ amdgpu_ring_soft_recovery(ring, job->vmid, s_job->s_fence->parent)) {
+ DRM_ERROR("ring %s timeout, but soft recovered\n",
+ s_job->sched->name);
+ return;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
+index deaa26808841..3c6f60c5b1a5 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
+@@ -370,6 +370,52 @@ static int psp_tmr_load(struct psp_context *psp)
+ return ret;
+ }
+
++static void psp_prep_tmr_unload_cmd_buf(struct psp_context *psp,
++ struct psp_gfx_cmd_resp *cmd)
++{
++ if (amdgpu_sriov_vf(psp->adev))
++ cmd->cmd_id = GFX_CMD_ID_DESTROY_VMR;
++ else
++ cmd->cmd_id = GFX_CMD_ID_DESTROY_TMR;
++}
++
++static int psp_tmr_unload(struct psp_context *psp)
++{
++ int ret;
++ struct psp_gfx_cmd_resp *cmd;
++
++ cmd = kzalloc(sizeof(struct psp_gfx_cmd_resp), GFP_KERNEL);
++ if (!cmd)
++ return -ENOMEM;
++
++ psp_prep_tmr_unload_cmd_buf(psp, cmd);
++ DRM_INFO("free PSP TMR buffer\n");
++
++ ret = psp_cmd_submit_buf(psp, NULL, cmd,
++ psp->fence_buf_mc_addr);
++
++ kfree(cmd);
++
++ return ret;
++}
++
++static int psp_tmr_terminate(struct psp_context *psp)
++{
++ int ret;
++ void *tmr_buf;
++ void **pptr;
++
++ ret = psp_tmr_unload(psp);
++ if (ret)
++ return ret;
++
++ /* free TMR memory buffer */
++ pptr = amdgpu_sriov_vf(psp->adev) ? &tmr_buf : NULL;
++ amdgpu_bo_free_kernel(&psp->tmr_bo, &psp->tmr_mc_addr, pptr);
++
++ return 0;
++}
++
+ static void psp_prep_asd_load_cmd_buf(struct psp_gfx_cmd_resp *cmd,
+ uint64_t asd_mc, uint32_t size)
+ {
+@@ -1575,8 +1621,6 @@ static int psp_hw_fini(void *handle)
+ {
+ struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+ struct psp_context *psp = &adev->psp;
+- void *tmr_buf;
+- void **pptr;
+
+ if (psp->adev->psp.ta_fw) {
+ psp_ras_terminate(psp);
+@@ -1586,10 +1630,9 @@ static int psp_hw_fini(void *handle)
+
+ psp_asd_unload(psp);
+
++ psp_tmr_terminate(psp);
+ psp_ring_destroy(psp, PSP_RING_TYPE__KM);
+
+- pptr = amdgpu_sriov_vf(psp->adev) ? &tmr_buf : NULL;
+- amdgpu_bo_free_kernel(&psp->tmr_bo, &psp->tmr_mc_addr, pptr);
+ amdgpu_bo_free_kernel(&psp->fw_pri_bo,
+ &psp->fw_pri_mc_addr, &psp->fw_pri_buf);
+ amdgpu_bo_free_kernel(&psp->fence_buf_bo,
+@@ -1636,6 +1679,18 @@ static int psp_suspend(void *handle)
+ }
+ }
+
++ ret = psp_tmr_terminate(psp);
++ if (ret) {
++ DRM_ERROR("Failed to terminate tmr\n");
++ return ret;
++ }
++
++ ret = psp_asd_unload(psp);
++ if (ret) {
++ DRM_ERROR("Failed to unload asd\n");
++ return ret;
++ }
++
+ ret = psp_ring_stop(psp, PSP_RING_TYPE__KM);
+ if (ret) {
+ DRM_ERROR("PSP ring stop failed\n");
+diff --git a/drivers/gpu/drm/drm_panel_orientation_quirks.c b/drivers/gpu/drm/drm_panel_orientation_quirks.c
+index ffd95bfeaa94..d00ea384dcbf 100644
+--- a/drivers/gpu/drm/drm_panel_orientation_quirks.c
++++ b/drivers/gpu/drm/drm_panel_orientation_quirks.c
+@@ -30,12 +30,6 @@ struct drm_dmi_panel_orientation_data {
+ int orientation;
+ };
+
+-static const struct drm_dmi_panel_orientation_data acer_s1003 = {
+- .width = 800,
+- .height = 1280,
+- .orientation = DRM_MODE_PANEL_ORIENTATION_RIGHT_UP,
+-};
+-
+ static const struct drm_dmi_panel_orientation_data asus_t100ha = {
+ .width = 800,
+ .height = 1280,
+@@ -114,13 +108,19 @@ static const struct dmi_system_id orientation_data[] = {
+ DMI_EXACT_MATCH(DMI_SYS_VENDOR, "Acer"),
+ DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "One S1003"),
+ },
+- .driver_data = (void *)&acer_s1003,
++ .driver_data = (void *)&lcd800x1280_rightside_up,
+ }, { /* Asus T100HA */
+ .matches = {
+ DMI_EXACT_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
+ DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "T100HAN"),
+ },
+ .driver_data = (void *)&asus_t100ha,
++ }, { /* Asus T101HA */
++ .matches = {
++ DMI_EXACT_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
++ DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "T101HA"),
++ },
++ .driver_data = (void *)&lcd800x1280_rightside_up,
+ }, { /* GPD MicroPC (generic strings, also match on bios date) */
+ .matches = {
+ DMI_EXACT_MATCH(DMI_SYS_VENDOR, "Default string"),
+diff --git a/drivers/gpu/drm/i915/gt/intel_context.c b/drivers/gpu/drm/i915/gt/intel_context.c
+index aea992e46c42..711380375fa1 100644
+--- a/drivers/gpu/drm/i915/gt/intel_context.c
++++ b/drivers/gpu/drm/i915/gt/intel_context.c
+@@ -201,25 +201,25 @@ static int __ring_active(struct intel_ring *ring)
+ {
+ int err;
+
+- err = i915_active_acquire(&ring->vma->active);
++ err = intel_ring_pin(ring);
+ if (err)
+ return err;
+
+- err = intel_ring_pin(ring);
++ err = i915_active_acquire(&ring->vma->active);
+ if (err)
+- goto err_active;
++ goto err_pin;
+
+ return 0;
+
+-err_active:
+- i915_active_release(&ring->vma->active);
++err_pin:
++ intel_ring_unpin(ring);
+ return err;
+ }
+
+ static void __ring_retire(struct intel_ring *ring)
+ {
+- intel_ring_unpin(ring);
+ i915_active_release(&ring->vma->active);
++ intel_ring_unpin(ring);
+ }
+
+ __i915_active_call
+diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c
+index 6ca797128aa1..4472b6eb3085 100644
+--- a/drivers/gpu/drm/i915/i915_debugfs.c
++++ b/drivers/gpu/drm/i915/i915_debugfs.c
+@@ -229,7 +229,7 @@ static int per_file_stats(int id, void *ptr, void *data)
+ struct file_stats *stats = data;
+ struct i915_vma *vma;
+
+- if (!kref_get_unless_zero(&obj->base.refcount))
++ if (IS_ERR_OR_NULL(obj) || !kref_get_unless_zero(&obj->base.refcount))
+ return 0;
+
+ stats->count++;
+diff --git a/drivers/gpu/drm/i915/i915_vma.c b/drivers/gpu/drm/i915/i915_vma.c
+index 2cd7a7e87c0a..1e4bd2d4f019 100644
+--- a/drivers/gpu/drm/i915/i915_vma.c
++++ b/drivers/gpu/drm/i915/i915_vma.c
+@@ -104,6 +104,7 @@ vma_create(struct drm_i915_gem_object *obj,
+ struct i915_address_space *vm,
+ const struct i915_ggtt_view *view)
+ {
++ struct i915_vma *pos = ERR_PTR(-E2BIG);
+ struct i915_vma *vma;
+ struct rb_node *rb, **p;
+
+@@ -184,7 +185,6 @@ vma_create(struct drm_i915_gem_object *obj,
+ rb = NULL;
+ p = &obj->vma.tree.rb_node;
+ while (*p) {
+- struct i915_vma *pos;
+ long cmp;
+
+ rb = *p;
+@@ -196,16 +196,12 @@ vma_create(struct drm_i915_gem_object *obj,
+ * and dispose of ours.
+ */
+ cmp = i915_vma_compare(pos, vm, view);
+- if (cmp == 0) {
+- spin_unlock(&obj->vma.lock);
+- i915_vma_free(vma);
+- return pos;
+- }
+-
+ if (cmp < 0)
+ p = &rb->rb_right;
+- else
++ else if (cmp > 0)
+ p = &rb->rb_left;
++ else
++ goto err_unlock;
+ }
+ rb_link_node(&vma->obj_node, rb, p);
+ rb_insert_color(&vma->obj_node, &obj->vma.tree);
+@@ -228,8 +224,9 @@ vma_create(struct drm_i915_gem_object *obj,
+ err_unlock:
+ spin_unlock(&obj->vma.lock);
+ err_vma:
++ i915_vm_put(vm);
+ i915_vma_free(vma);
+- return ERR_PTR(-E2BIG);
++ return pos;
+ }
+
+ static struct i915_vma *
+diff --git a/drivers/gpu/drm/mcde/mcde_drv.c b/drivers/gpu/drm/mcde/mcde_drv.c
+index f28cb7a576ba..1e7c5aa4d5e6 100644
+--- a/drivers/gpu/drm/mcde/mcde_drv.c
++++ b/drivers/gpu/drm/mcde/mcde_drv.c
+@@ -208,7 +208,6 @@ static int mcde_modeset_init(struct drm_device *drm)
+
+ drm_mode_config_reset(drm);
+ drm_kms_helper_poll_init(drm);
+- drm_fbdev_generic_setup(drm, 32);
+
+ return 0;
+
+@@ -275,6 +274,8 @@ static int mcde_drm_bind(struct device *dev)
+ if (ret < 0)
+ goto unbind;
+
++ drm_fbdev_generic_setup(drm, 32);
++
+ return 0;
+
+ unbind:
+diff --git a/drivers/gpu/drm/mediatek/mtk_drm_plane.c b/drivers/gpu/drm/mediatek/mtk_drm_plane.c
+index c2bd683a87c8..92141a19681b 100644
+--- a/drivers/gpu/drm/mediatek/mtk_drm_plane.c
++++ b/drivers/gpu/drm/mediatek/mtk_drm_plane.c
+@@ -164,6 +164,16 @@ static int mtk_plane_atomic_check(struct drm_plane *plane,
+ true, true);
+ }
+
++static void mtk_plane_atomic_disable(struct drm_plane *plane,
++ struct drm_plane_state *old_state)
++{
++ struct mtk_plane_state *state = to_mtk_plane_state(plane->state);
++
++ state->pending.enable = false;
++ wmb(); /* Make sure the above parameter is set before update */
++ state->pending.dirty = true;
++}
++
+ static void mtk_plane_atomic_update(struct drm_plane *plane,
+ struct drm_plane_state *old_state)
+ {
+@@ -178,6 +188,11 @@ static void mtk_plane_atomic_update(struct drm_plane *plane,
+ if (!crtc || WARN_ON(!fb))
+ return;
+
++ if (!plane->state->visible) {
++ mtk_plane_atomic_disable(plane, old_state);
++ return;
++ }
++
+ gem = fb->obj[0];
+ mtk_gem = to_mtk_gem_obj(gem);
+ addr = mtk_gem->dma_addr;
+@@ -200,16 +215,6 @@ static void mtk_plane_atomic_update(struct drm_plane *plane,
+ state->pending.dirty = true;
+ }
+
+-static void mtk_plane_atomic_disable(struct drm_plane *plane,
+- struct drm_plane_state *old_state)
+-{
+- struct mtk_plane_state *state = to_mtk_plane_state(plane->state);
+-
+- state->pending.enable = false;
+- wmb(); /* Make sure the above parameter is set before update */
+- state->pending.dirty = true;
+-}
+-
+ static const struct drm_plane_helper_funcs mtk_plane_helper_funcs = {
+ .prepare_fb = drm_gem_fb_prepare_fb,
+ .atomic_check = mtk_plane_atomic_check,
+diff --git a/drivers/gpu/drm/meson/meson_registers.h b/drivers/gpu/drm/meson/meson_registers.h
+index 8ea00546cd4e..049c4bfe2a3a 100644
+--- a/drivers/gpu/drm/meson/meson_registers.h
++++ b/drivers/gpu/drm/meson/meson_registers.h
+@@ -261,6 +261,12 @@
+ #define VIU_OSD_FIFO_DEPTH_VAL(val) ((val & 0x7f) << 12)
+ #define VIU_OSD_WORDS_PER_BURST(words) (((words & 0x4) >> 1) << 22)
+ #define VIU_OSD_FIFO_LIMITS(size) ((size & 0xf) << 24)
++#define VIU_OSD_BURST_LENGTH_24 (0x0 << 31 | 0x0 << 10)
++#define VIU_OSD_BURST_LENGTH_32 (0x0 << 31 | 0x1 << 10)
++#define VIU_OSD_BURST_LENGTH_48 (0x0 << 31 | 0x2 << 10)
++#define VIU_OSD_BURST_LENGTH_64 (0x0 << 31 | 0x3 << 10)
++#define VIU_OSD_BURST_LENGTH_96 (0x1 << 31 | 0x0 << 10)
++#define VIU_OSD_BURST_LENGTH_128 (0x1 << 31 | 0x1 << 10)
+
+ #define VD1_IF0_GEN_REG 0x1a50
+ #define VD1_IF0_CANVAS0 0x1a51
+diff --git a/drivers/gpu/drm/meson/meson_viu.c b/drivers/gpu/drm/meson/meson_viu.c
+index 304f8ff1339c..aede0c67a57f 100644
+--- a/drivers/gpu/drm/meson/meson_viu.c
++++ b/drivers/gpu/drm/meson/meson_viu.c
+@@ -411,13 +411,6 @@ void meson_viu_gxm_disable_osd1_afbc(struct meson_drm *priv)
+ priv->io_base + _REG(VIU_MISC_CTRL1));
+ }
+
+-static inline uint32_t meson_viu_osd_burst_length_reg(uint32_t length)
+-{
+- uint32_t val = (((length & 0x80) % 24) / 12);
+-
+- return (((val & 0x3) << 10) | (((val & 0x4) >> 2) << 31));
+-}
+-
+ void meson_viu_init(struct meson_drm *priv)
+ {
+ uint32_t reg;
+@@ -444,9 +437,9 @@ void meson_viu_init(struct meson_drm *priv)
+ VIU_OSD_FIFO_LIMITS(2); /* fifo_lim: 2*16=32 */
+
+ if (meson_vpu_is_compatible(priv, VPU_COMPATIBLE_G12A))
+- reg |= meson_viu_osd_burst_length_reg(32);
++ reg |= VIU_OSD_BURST_LENGTH_32;
+ else
+- reg |= meson_viu_osd_burst_length_reg(64);
++ reg |= VIU_OSD_BURST_LENGTH_64;
+
+ writel_relaxed(reg, priv->io_base + _REG(VIU_OSD1_FIFO_CTRL_STAT));
+ writel_relaxed(reg, priv->io_base + _REG(VIU_OSD2_FIFO_CTRL_STAT));
+diff --git a/drivers/gpu/drm/radeon/ci_dpm.c b/drivers/gpu/drm/radeon/ci_dpm.c
+index a9257bed3484..30b5a59353c5 100644
+--- a/drivers/gpu/drm/radeon/ci_dpm.c
++++ b/drivers/gpu/drm/radeon/ci_dpm.c
+@@ -5577,6 +5577,7 @@ static int ci_parse_power_table(struct radeon_device *rdev)
+ if (!rdev->pm.dpm.ps)
+ return -ENOMEM;
+ power_state_offset = (u8 *)state_array->states;
++ rdev->pm.dpm.num_ps = 0;
+ for (i = 0; i < state_array->ucNumEntries; i++) {
+ u8 *idx;
+ power_state = (union pplib_power_state *)power_state_offset;
+@@ -5586,10 +5587,8 @@ static int ci_parse_power_table(struct radeon_device *rdev)
+ if (!rdev->pm.power_state[i].clock_info)
+ return -EINVAL;
+ ps = kzalloc(sizeof(struct ci_ps), GFP_KERNEL);
+- if (ps == NULL) {
+- kfree(rdev->pm.dpm.ps);
++ if (ps == NULL)
+ return -ENOMEM;
+- }
+ rdev->pm.dpm.ps[i].ps_priv = ps;
+ ci_parse_pplib_non_clock_info(rdev, &rdev->pm.dpm.ps[i],
+ non_clock_info,
+@@ -5611,8 +5610,8 @@ static int ci_parse_power_table(struct radeon_device *rdev)
+ k++;
+ }
+ power_state_offset += 2 + power_state->v2.ucNumDPMLevels;
++ rdev->pm.dpm.num_ps = i + 1;
+ }
+- rdev->pm.dpm.num_ps = state_array->ucNumEntries;
+
+ /* fill in the vce power states */
+ for (i = 0; i < RADEON_MAX_VCE_LEVELS; i++) {
+diff --git a/drivers/gpu/drm/tegra/hub.c b/drivers/gpu/drm/tegra/hub.c
+index 8183e617bf6b..a2ef8f218d4e 100644
+--- a/drivers/gpu/drm/tegra/hub.c
++++ b/drivers/gpu/drm/tegra/hub.c
+@@ -149,7 +149,9 @@ int tegra_display_hub_prepare(struct tegra_display_hub *hub)
+ for (i = 0; i < hub->soc->num_wgrps; i++) {
+ struct tegra_windowgroup *wgrp = &hub->wgrps[i];
+
+- tegra_windowgroup_enable(wgrp);
++ /* Skip orphaned window group whose parent DC is disabled */
++ if (wgrp->parent)
++ tegra_windowgroup_enable(wgrp);
+ }
+
+ return 0;
+@@ -166,7 +168,9 @@ void tegra_display_hub_cleanup(struct tegra_display_hub *hub)
+ for (i = 0; i < hub->soc->num_wgrps; i++) {
+ struct tegra_windowgroup *wgrp = &hub->wgrps[i];
+
+- tegra_windowgroup_disable(wgrp);
++ /* Skip orphaned window group whose parent DC is disabled */
++ if (wgrp->parent)
++ tegra_windowgroup_disable(wgrp);
+ }
+ }
+
+diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c
+index 9e07c3f75156..ef5bc00c73e2 100644
+--- a/drivers/gpu/drm/ttm/ttm_bo.c
++++ b/drivers/gpu/drm/ttm/ttm_bo.c
+@@ -881,8 +881,10 @@ static int ttm_bo_add_move_fence(struct ttm_buffer_object *bo,
+ if (!fence)
+ return 0;
+
+- if (no_wait_gpu)
++ if (no_wait_gpu) {
++ dma_fence_put(fence);
+ return -EBUSY;
++ }
+
+ dma_resv_add_shared_fence(bo->base.resv, fence);
+
+diff --git a/drivers/gpu/drm/ttm/ttm_bo_vm.c b/drivers/gpu/drm/ttm/ttm_bo_vm.c
+index 0ad30b112982..72100b84c7a9 100644
+--- a/drivers/gpu/drm/ttm/ttm_bo_vm.c
++++ b/drivers/gpu/drm/ttm/ttm_bo_vm.c
+@@ -300,8 +300,10 @@ vm_fault_t ttm_bo_vm_fault_reserved(struct vm_fault *vmf,
+ break;
+ case -EBUSY:
+ case -ERESTARTSYS:
++ dma_fence_put(moving);
+ return VM_FAULT_NOPAGE;
+ default:
++ dma_fence_put(moving);
+ return VM_FAULT_SIGBUS;
+ }
+
+diff --git a/drivers/gpu/host1x/bus.c b/drivers/gpu/host1x/bus.c
+index 6a995db51d6d..e201f62d62c0 100644
+--- a/drivers/gpu/host1x/bus.c
++++ b/drivers/gpu/host1x/bus.c
+@@ -686,8 +686,17 @@ EXPORT_SYMBOL(host1x_driver_register_full);
+ */
+ void host1x_driver_unregister(struct host1x_driver *driver)
+ {
++ struct host1x *host1x;
++
+ driver_unregister(&driver->driver);
+
++ mutex_lock(&devices_lock);
++
++ list_for_each_entry(host1x, &devices, list)
++ host1x_detach_driver(host1x, driver);
++
++ mutex_unlock(&devices_lock);
++
+ mutex_lock(&drivers_lock);
+ list_del_init(&driver->list);
+ mutex_unlock(&drivers_lock);
+diff --git a/drivers/gpu/host1x/dev.c b/drivers/gpu/host1x/dev.c
+index d24344e91922..3c0f151847ba 100644
+--- a/drivers/gpu/host1x/dev.c
++++ b/drivers/gpu/host1x/dev.c
+@@ -468,11 +468,12 @@ static int host1x_probe(struct platform_device *pdev)
+
+ err = host1x_register(host);
+ if (err < 0)
+- goto deinit_intr;
++ goto deinit_debugfs;
+
+ return 0;
+
+-deinit_intr:
++deinit_debugfs:
++ host1x_debug_deinit(host);
+ host1x_intr_deinit(host);
+ deinit_syncpt:
+ host1x_syncpt_deinit(host);
+diff --git a/drivers/infiniband/core/sa_query.c b/drivers/infiniband/core/sa_query.c
+index 74e0058fcf9e..0c14ab2244d4 100644
+--- a/drivers/infiniband/core/sa_query.c
++++ b/drivers/infiniband/core/sa_query.c
+@@ -829,13 +829,20 @@ static int ib_nl_get_path_rec_attrs_len(ib_sa_comp_mask comp_mask)
+ return len;
+ }
+
+-static int ib_nl_send_msg(struct ib_sa_query *query, gfp_t gfp_mask)
++static int ib_nl_make_request(struct ib_sa_query *query, gfp_t gfp_mask)
+ {
+ struct sk_buff *skb = NULL;
+ struct nlmsghdr *nlh;
+ void *data;
+ struct ib_sa_mad *mad;
+ int len;
++ unsigned long flags;
++ unsigned long delay;
++ gfp_t gfp_flag;
++ int ret;
++
++ INIT_LIST_HEAD(&query->list);
++ query->seq = (u32)atomic_inc_return(&ib_nl_sa_request_seq);
+
+ mad = query->mad_buf->mad;
+ len = ib_nl_get_path_rec_attrs_len(mad->sa_hdr.comp_mask);
+@@ -860,36 +867,25 @@ static int ib_nl_send_msg(struct ib_sa_query *query, gfp_t gfp_mask)
+ /* Repair the nlmsg header length */
+ nlmsg_end(skb, nlh);
+
+- return rdma_nl_multicast(&init_net, skb, RDMA_NL_GROUP_LS, gfp_mask);
+-}
++ gfp_flag = ((gfp_mask & GFP_ATOMIC) == GFP_ATOMIC) ? GFP_ATOMIC :
++ GFP_NOWAIT;
+
+-static int ib_nl_make_request(struct ib_sa_query *query, gfp_t gfp_mask)
+-{
+- unsigned long flags;
+- unsigned long delay;
+- int ret;
++ spin_lock_irqsave(&ib_nl_request_lock, flags);
++ ret = rdma_nl_multicast(&init_net, skb, RDMA_NL_GROUP_LS, gfp_flag);
+
+- INIT_LIST_HEAD(&query->list);
+- query->seq = (u32)atomic_inc_return(&ib_nl_sa_request_seq);
++ if (ret)
++ goto out;
+
+- /* Put the request on the list first.*/
+- spin_lock_irqsave(&ib_nl_request_lock, flags);
++ /* Put the request on the list.*/
+ delay = msecs_to_jiffies(sa_local_svc_timeout_ms);
+ query->timeout = delay + jiffies;
+ list_add_tail(&query->list, &ib_nl_request_list);
+ /* Start the timeout if this is the only request */
+ if (ib_nl_request_list.next == &query->list)
+ queue_delayed_work(ib_nl_wq, &ib_nl_timed_work, delay);
+- spin_unlock_irqrestore(&ib_nl_request_lock, flags);
+
+- ret = ib_nl_send_msg(query, gfp_mask);
+- if (ret) {
+- ret = -EIO;
+- /* Remove the request */
+- spin_lock_irqsave(&ib_nl_request_lock, flags);
+- list_del(&query->list);
+- spin_unlock_irqrestore(&ib_nl_request_lock, flags);
+- }
++out:
++ spin_unlock_irqrestore(&ib_nl_request_lock, flags);
+
+ return ret;
+ }
+diff --git a/drivers/infiniband/hw/hfi1/init.c b/drivers/infiniband/hw/hfi1/init.c
+index 3759d9233a1c..498684551427 100644
+--- a/drivers/infiniband/hw/hfi1/init.c
++++ b/drivers/infiniband/hw/hfi1/init.c
+@@ -828,6 +828,29 @@ wq_error:
+ return -ENOMEM;
+ }
+
++/**
++ * destroy_workqueues - destroy per port workqueues
++ * @dd: the hfi1_ib device
++ */
++static void destroy_workqueues(struct hfi1_devdata *dd)
++{
++ int pidx;
++ struct hfi1_pportdata *ppd;
++
++ for (pidx = 0; pidx < dd->num_pports; ++pidx) {
++ ppd = dd->pport + pidx;
++
++ if (ppd->hfi1_wq) {
++ destroy_workqueue(ppd->hfi1_wq);
++ ppd->hfi1_wq = NULL;
++ }
++ if (ppd->link_wq) {
++ destroy_workqueue(ppd->link_wq);
++ ppd->link_wq = NULL;
++ }
++ }
++}
++
+ /**
+ * enable_general_intr() - Enable the IRQs that will be handled by the
+ * general interrupt handler.
+@@ -1101,15 +1124,10 @@ static void shutdown_device(struct hfi1_devdata *dd)
+ * We can't count on interrupts since we are stopping.
+ */
+ hfi1_quiet_serdes(ppd);
+-
+- if (ppd->hfi1_wq) {
+- destroy_workqueue(ppd->hfi1_wq);
+- ppd->hfi1_wq = NULL;
+- }
+- if (ppd->link_wq) {
+- destroy_workqueue(ppd->link_wq);
+- ppd->link_wq = NULL;
+- }
++ if (ppd->hfi1_wq)
++ flush_workqueue(ppd->hfi1_wq);
++ if (ppd->link_wq)
++ flush_workqueue(ppd->link_wq);
+ }
+ sdma_exit(dd);
+ }
+@@ -1757,6 +1775,7 @@ static void remove_one(struct pci_dev *pdev)
+ * clear dma engines, etc.
+ */
+ shutdown_device(dd);
++ destroy_workqueues(dd);
+
+ stop_timers(dd);
+
+diff --git a/drivers/infiniband/hw/hfi1/qp.c b/drivers/infiniband/hw/hfi1/qp.c
+index f8e733aa3bb8..acd4400b0092 100644
+--- a/drivers/infiniband/hw/hfi1/qp.c
++++ b/drivers/infiniband/hw/hfi1/qp.c
+@@ -381,7 +381,10 @@ bool _hfi1_schedule_send(struct rvt_qp *qp)
+ struct hfi1_ibport *ibp =
+ to_iport(qp->ibqp.device, qp->port_num);
+ struct hfi1_pportdata *ppd = ppd_from_ibp(ibp);
+- struct hfi1_devdata *dd = dd_from_ibdev(qp->ibqp.device);
++ struct hfi1_devdata *dd = ppd->dd;
++
++ if (dd->flags & HFI1_SHUTDOWN)
++ return true;
+
+ return iowait_schedule(&priv->s_iowait, ppd->hfi1_wq,
+ priv->s_sde ?
+diff --git a/drivers/infiniband/hw/hfi1/tid_rdma.c b/drivers/infiniband/hw/hfi1/tid_rdma.c
+index 8a2e0d9351e9..7c6fd720fb2e 100644
+--- a/drivers/infiniband/hw/hfi1/tid_rdma.c
++++ b/drivers/infiniband/hw/hfi1/tid_rdma.c
+@@ -5406,7 +5406,10 @@ static bool _hfi1_schedule_tid_send(struct rvt_qp *qp)
+ struct hfi1_ibport *ibp =
+ to_iport(qp->ibqp.device, qp->port_num);
+ struct hfi1_pportdata *ppd = ppd_from_ibp(ibp);
+- struct hfi1_devdata *dd = dd_from_ibdev(qp->ibqp.device);
++ struct hfi1_devdata *dd = ppd->dd;
++
++ if ((dd->flags & HFI1_SHUTDOWN))
++ return true;
+
+ return iowait_tid_schedule(&priv->s_iowait, ppd->hfi1_wq,
+ priv->s_sde ?
+diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
+index 6679756506e6..820e407b3e26 100644
+--- a/drivers/infiniband/hw/mlx5/main.c
++++ b/drivers/infiniband/hw/mlx5/main.c
+@@ -515,7 +515,7 @@ static int mlx5_query_port_roce(struct ib_device *device, u8 port_num,
+ mdev_port_num);
+ if (err)
+ goto out;
+- ext = MLX5_CAP_PCAM_FEATURE(dev->mdev, ptys_extended_ethernet);
++ ext = !!MLX5_GET_ETH_PROTO(ptys_reg, out, true, eth_proto_capability);
+ eth_prot_oper = MLX5_GET_ETH_PROTO(ptys_reg, out, ext, eth_proto_oper);
+
+ props->active_width = IB_WIDTH_4X;
+diff --git a/drivers/infiniband/sw/siw/siw_main.c b/drivers/infiniband/sw/siw/siw_main.c
+index 5cd40fb9e20c..634c4b371623 100644
+--- a/drivers/infiniband/sw/siw/siw_main.c
++++ b/drivers/infiniband/sw/siw/siw_main.c
+@@ -67,12 +67,13 @@ static int siw_device_register(struct siw_device *sdev, const char *name)
+ static int dev_id = 1;
+ int rv;
+
++ sdev->vendor_part_id = dev_id++;
++
+ rv = ib_register_device(base_dev, name);
+ if (rv) {
+ pr_warn("siw: device registration error %d\n", rv);
+ return rv;
+ }
+- sdev->vendor_part_id = dev_id++;
+
+ siw_dbg(base_dev, "HWaddr=%pM\n", sdev->netdev->dev_addr);
+
+diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
+index 34b2ed91cf4d..2acf2842c3bd 100644
+--- a/drivers/iommu/intel-iommu.c
++++ b/drivers/iommu/intel-iommu.c
+@@ -6206,6 +6206,23 @@ intel_iommu_domain_set_attr(struct iommu_domain *domain,
+ return ret;
+ }
+
++/*
++ * Check that the device does not live on an external facing PCI port that is
++ * marked as untrusted. Such devices should not be able to apply quirks and
++ * thus not be able to bypass the IOMMU restrictions.
++ */
++static bool risky_device(struct pci_dev *pdev)
++{
++ if (pdev->untrusted) {
++ pci_info(pdev,
++ "Skipping IOMMU quirk for dev [%04X:%04X] on untrusted PCI link\n",
++ pdev->vendor, pdev->device);
++ pci_info(pdev, "Please check with your BIOS/Platform vendor about this\n");
++ return true;
++ }
++ return false;
++}
++
+ const struct iommu_ops intel_iommu_ops = {
+ .capable = intel_iommu_capable,
+ .domain_alloc = intel_iommu_domain_alloc,
+@@ -6235,6 +6252,9 @@ const struct iommu_ops intel_iommu_ops = {
+
+ static void quirk_iommu_igfx(struct pci_dev *dev)
+ {
++ if (risky_device(dev))
++ return;
++
+ pci_info(dev, "Disabling IOMMU for graphics on this chipset\n");
+ dmar_map_gfx = 0;
+ }
+@@ -6276,6 +6296,9 @@ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x163D, quirk_iommu_igfx);
+
+ static void quirk_iommu_rwbf(struct pci_dev *dev)
+ {
++ if (risky_device(dev))
++ return;
++
+ /*
+ * Mobile 4 Series Chipset neglects to set RWBF capability,
+ * but needs it. Same seems to hold for the desktop versions.
+@@ -6306,6 +6329,9 @@ static void quirk_calpella_no_shadow_gtt(struct pci_dev *dev)
+ {
+ unsigned short ggc;
+
++ if (risky_device(dev))
++ return;
++
+ if (pci_read_config_word(dev, GGC, &ggc))
+ return;
+
+@@ -6339,6 +6365,12 @@ static void __init check_tylersburg_isoch(void)
+ pdev = pci_get_device(PCI_VENDOR_ID_INTEL, 0x3a3e, NULL);
+ if (!pdev)
+ return;
++
++ if (risky_device(pdev)) {
++ pci_dev_put(pdev);
++ return;
++ }
++
+ pci_dev_put(pdev);
+
+ /* System Management Registers. Might be hidden, in which case
+@@ -6348,6 +6380,11 @@ static void __init check_tylersburg_isoch(void)
+ if (!pdev)
+ return;
+
++ if (risky_device(pdev)) {
++ pci_dev_put(pdev);
++ return;
++ }
++
+ if (pci_read_config_dword(pdev, 0x188, &vtisochctrl)) {
+ pci_dev_put(pdev);
+ return;
+diff --git a/drivers/irqchip/irq-gic-v3-its.c b/drivers/irqchip/irq-gic-v3-its.c
+index b3e16a06c13b..b99e3105bf9f 100644
+--- a/drivers/irqchip/irq-gic-v3-its.c
++++ b/drivers/irqchip/irq-gic-v3-its.c
+@@ -3938,16 +3938,24 @@ static void its_vpe_4_1_deschedule(struct its_vpe *vpe,
+ u64 val;
+
+ if (info->req_db) {
++ unsigned long flags;
++
+ /*
+ * vPE is going to block: make the vPE non-resident with
+ * PendingLast clear and DB set. The GIC guarantees that if
+ * we read-back PendingLast clear, then a doorbell will be
+ * delivered when an interrupt comes.
++ *
++ * Note the locking used to deal with the update of
++ * pending_last from the doorbell interrupt handler that can
++ * run concurrently.
+ */
++ raw_spin_lock_irqsave(&vpe->vpe_lock, flags);
+ val = its_clear_vpend_valid(vlpi_base,
+ GICR_VPENDBASER_PendingLast,
+ GICR_VPENDBASER_4_1_DB);
+ vpe->pending_last = !!(val & GICR_VPENDBASER_PendingLast);
++ raw_spin_unlock_irqrestore(&vpe->vpe_lock, flags);
+ } else {
+ /*
+ * We're not blocking, so just make the vPE non-resident
+diff --git a/drivers/md/dm-writecache.c b/drivers/md/dm-writecache.c
+index 5cc94f57421c..00d774bdd2b1 100644
+--- a/drivers/md/dm-writecache.c
++++ b/drivers/md/dm-writecache.c
+@@ -2232,6 +2232,12 @@ invalid_optional:
+ }
+
+ if (WC_MODE_PMEM(wc)) {
++ if (!dax_synchronous(wc->ssd_dev->dax_dev)) {
++ r = -EOPNOTSUPP;
++ ti->error = "Asynchronous persistent memory not supported as pmem cache";
++ goto bad;
++ }
++
+ r = persistent_memory_claim(wc);
+ if (r) {
+ ti->error = "Unable to map persistent memory for cache";
+diff --git a/drivers/md/dm.c b/drivers/md/dm.c
+index db9e46114653..05333fc2f8d2 100644
+--- a/drivers/md/dm.c
++++ b/drivers/md/dm.c
+@@ -12,6 +12,7 @@
+ #include <linux/init.h>
+ #include <linux/module.h>
+ #include <linux/mutex.h>
++#include <linux/sched/mm.h>
+ #include <linux/sched/signal.h>
+ #include <linux/blkpg.h>
+ #include <linux/bio.h>
+@@ -2894,17 +2895,25 @@ EXPORT_SYMBOL_GPL(dm_internal_resume_fast);
+ int dm_kobject_uevent(struct mapped_device *md, enum kobject_action action,
+ unsigned cookie)
+ {
++ int r;
++ unsigned noio_flag;
+ char udev_cookie[DM_COOKIE_LENGTH];
+ char *envp[] = { udev_cookie, NULL };
+
++ noio_flag = memalloc_noio_save();
++
+ if (!cookie)
+- return kobject_uevent(&disk_to_dev(md->disk)->kobj, action);
++ r = kobject_uevent(&disk_to_dev(md->disk)->kobj, action);
+ else {
+ snprintf(udev_cookie, DM_COOKIE_LENGTH, "%s=%u",
+ DM_COOKIE_ENV_VAR_NAME, cookie);
+- return kobject_uevent_env(&disk_to_dev(md->disk)->kobj,
+- action, envp);
++ r = kobject_uevent_env(&disk_to_dev(md->disk)->kobj,
++ action, envp);
+ }
++
++ memalloc_noio_restore(noio_flag);
++
++ return r;
+ }
+
+ uint32_t dm_next_uevent_seq(struct mapped_device *md)
+diff --git a/drivers/message/fusion/mptscsih.c b/drivers/message/fusion/mptscsih.c
+index f0737c57ed5f..1491561d2e5c 100644
+--- a/drivers/message/fusion/mptscsih.c
++++ b/drivers/message/fusion/mptscsih.c
+@@ -118,8 +118,6 @@ int mptscsih_suspend(struct pci_dev *pdev, pm_message_t state);
+ int mptscsih_resume(struct pci_dev *pdev);
+ #endif
+
+-#define SNS_LEN(scp) SCSI_SENSE_BUFFERSIZE
+-
+
+ /*=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=*/
+ /*
+@@ -2422,7 +2420,7 @@ mptscsih_copy_sense_data(struct scsi_cmnd *sc, MPT_SCSI_HOST *hd, MPT_FRAME_HDR
+ /* Copy the sense received into the scsi command block. */
+ req_index = le16_to_cpu(mf->u.frame.hwhdr.msgctxu.fld.req_idx);
+ sense_data = ((u8 *)ioc->sense_buf_pool + (req_index * MPT_SENSE_BUFFER_ALLOC));
+- memcpy(sc->sense_buffer, sense_data, SNS_LEN(sc));
++ memcpy(sc->sense_buffer, sense_data, MPT_SENSE_BUFFER_ALLOC);
+
+ /* Log SMART data (asc = 0x5D, non-IM case only) if required.
+ */
+diff --git a/drivers/mmc/host/meson-gx-mmc.c b/drivers/mmc/host/meson-gx-mmc.c
+index 35400cf2a2e4..cfaf8e7e22ec 100644
+--- a/drivers/mmc/host/meson-gx-mmc.c
++++ b/drivers/mmc/host/meson-gx-mmc.c
+@@ -1143,9 +1143,11 @@ static int meson_mmc_probe(struct platform_device *pdev)
+
+ mmc->caps |= MMC_CAP_CMD23;
+ if (host->dram_access_quirk) {
++ /* Limit segments to 1 due to low available sram memory */
++ mmc->max_segs = 1;
+ /* Limit to the available sram memory */
+- mmc->max_segs = SD_EMMC_SRAM_DATA_BUF_LEN / mmc->max_blk_size;
+- mmc->max_blk_count = mmc->max_segs;
++ mmc->max_blk_count = SD_EMMC_SRAM_DATA_BUF_LEN /
++ mmc->max_blk_size;
+ } else {
+ mmc->max_blk_count = CMD_CFG_LENGTH_MASK;
+ mmc->max_segs = SD_EMMC_DESC_BUF_LEN /
+diff --git a/drivers/mmc/host/owl-mmc.c b/drivers/mmc/host/owl-mmc.c
+index 5e20c099fe03..df43f42855e2 100644
+--- a/drivers/mmc/host/owl-mmc.c
++++ b/drivers/mmc/host/owl-mmc.c
+@@ -689,7 +689,7 @@ MODULE_DEVICE_TABLE(of, owl_mmc_of_match);
+ static struct platform_driver owl_mmc_driver = {
+ .driver = {
+ .name = "owl_mmc",
+- .of_match_table = of_match_ptr(owl_mmc_of_match),
++ .of_match_table = owl_mmc_of_match,
+ },
+ .probe = owl_mmc_probe,
+ .remove = owl_mmc_remove,
+diff --git a/drivers/mtd/mtdcore.c b/drivers/mtd/mtdcore.c
+index 29d41003d6e0..f8317ccd8f2a 100644
+--- a/drivers/mtd/mtdcore.c
++++ b/drivers/mtd/mtdcore.c
+@@ -1235,8 +1235,8 @@ int mtd_panic_write(struct mtd_info *mtd, loff_t to, size_t len, size_t *retlen,
+ return -EROFS;
+ if (!len)
+ return 0;
+- if (!mtd->oops_panic_write)
+- mtd->oops_panic_write = true;
++ if (!master->oops_panic_write)
++ master->oops_panic_write = true;
+
+ return master->_panic_write(master, mtd_get_master_ofs(mtd, to), len,
+ retlen, buf);
+diff --git a/drivers/net/dsa/microchip/ksz8795.c b/drivers/net/dsa/microchip/ksz8795.c
+index 47d65b77caf7..7c17b0f705ec 100644
+--- a/drivers/net/dsa/microchip/ksz8795.c
++++ b/drivers/net/dsa/microchip/ksz8795.c
+@@ -1268,6 +1268,9 @@ static int ksz8795_switch_init(struct ksz_device *dev)
+ return -ENOMEM;
+ }
+
++ /* set the real number of ports */
++ dev->ds->num_ports = dev->port_cnt;
++
+ return 0;
+ }
+
+diff --git a/drivers/net/dsa/microchip/ksz9477.c b/drivers/net/dsa/microchip/ksz9477.c
+index 9a51b8a4de5d..8d15c3016024 100644
+--- a/drivers/net/dsa/microchip/ksz9477.c
++++ b/drivers/net/dsa/microchip/ksz9477.c
+@@ -1588,6 +1588,9 @@ static int ksz9477_switch_init(struct ksz_device *dev)
+ return -ENOMEM;
+ }
+
++ /* set the real number of ports */
++ dev->ds->num_ports = dev->port_cnt;
++
+ return 0;
+ }
+
+diff --git a/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_llh.c b/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_llh.c
+index d1f68fc16291..e6b1fb10ad91 100644
+--- a/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_llh.c
++++ b/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_llh.c
+@@ -1651,7 +1651,7 @@ void hw_atl_rpfl3l4_ipv6_src_addr_set(struct aq_hw_s *aq_hw, u8 location,
+ for (i = 0; i < 4; ++i)
+ aq_hw_write_reg(aq_hw,
+ HW_ATL_RPF_L3_SRCA_ADR(location + i),
+- ipv6_src[i]);
++ ipv6_src[3 - i]);
+ }
+
+ void hw_atl_rpfl3l4_ipv6_dest_addr_set(struct aq_hw_s *aq_hw, u8 location,
+@@ -1662,7 +1662,7 @@ void hw_atl_rpfl3l4_ipv6_dest_addr_set(struct aq_hw_s *aq_hw, u8 location,
+ for (i = 0; i < 4; ++i)
+ aq_hw_write_reg(aq_hw,
+ HW_ATL_RPF_L3_DSTA_ADR(location + i),
+- ipv6_dest[i]);
++ ipv6_dest[3 - i]);
+ }
+
+ u32 hw_atl_sem_ram_get(struct aq_hw_s *self)
+diff --git a/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_llh_internal.h b/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_llh_internal.h
+index 18de2f7b8959..a7590b9ea2df 100644
+--- a/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_llh_internal.h
++++ b/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_llh_internal.h
+@@ -1360,7 +1360,7 @@
+ */
+
+ /* Register address for bitfield pif_rpf_l3_da0_i[31:0] */
+-#define HW_ATL_RPF_L3_DSTA_ADR(filter) (0x000053B0 + (filter) * 0x4)
++#define HW_ATL_RPF_L3_DSTA_ADR(filter) (0x000053D0 + (filter) * 0x4)
+ /* Bitmask for bitfield l3_da0[1F:0] */
+ #define HW_ATL_RPF_L3_DSTA_MSK 0xFFFFFFFFu
+ /* Inverted bitmask for bitfield l3_da0[1F:0] */
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.c
+index cea2f9958a1d..2295f539a641 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.c
+@@ -396,6 +396,7 @@ static void bnxt_free_vf_resources(struct bnxt *bp)
+ }
+ }
+
++ bp->pf.active_vfs = 0;
+ kfree(bp->pf.vf);
+ bp->pf.vf = NULL;
+ }
+@@ -835,7 +836,6 @@ void bnxt_sriov_disable(struct bnxt *bp)
+
+ bnxt_free_vf_resources(bp);
+
+- bp->pf.active_vfs = 0;
+ /* Reclaim all resources for the PF. */
+ rtnl_lock();
+ bnxt_restore_pf_fw_resources(bp);
+diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c
+index 52582e8ed90e..f1f0976e7669 100644
+--- a/drivers/net/ethernet/cadence/macb_main.c
++++ b/drivers/net/ethernet/cadence/macb_main.c
+@@ -2821,11 +2821,13 @@ static void macb_get_wol(struct net_device *netdev, struct ethtool_wolinfo *wol)
+ {
+ struct macb *bp = netdev_priv(netdev);
+
+- wol->supported = 0;
+- wol->wolopts = 0;
+-
+- if (bp->wol & MACB_WOL_HAS_MAGIC_PACKET)
++ if (bp->wol & MACB_WOL_HAS_MAGIC_PACKET) {
+ phylink_ethtool_get_wol(bp->phylink, wol);
++ wol->supported |= WAKE_MAGIC;
++
++ if (bp->wol & MACB_WOL_ENABLED)
++ wol->wolopts |= WAKE_MAGIC;
++ }
+ }
+
+ static int macb_set_wol(struct net_device *netdev, struct ethtool_wolinfo *wol)
+@@ -2833,9 +2835,13 @@ static int macb_set_wol(struct net_device *netdev, struct ethtool_wolinfo *wol)
+ struct macb *bp = netdev_priv(netdev);
+ int ret;
+
++ /* Pass the order to phylink layer */
+ ret = phylink_ethtool_set_wol(bp->phylink, wol);
+- if (!ret)
+- return 0;
++ /* Don't manage WoL on MAC if handled by the PHY
++ * or if there's a failure in talking to the PHY
++ */
++ if (!ret || ret != -EOPNOTSUPP)
++ return ret;
+
+ if (!(bp->wol & MACB_WOL_HAS_MAGIC_PACKET) ||
+ (wol->wolopts & ~WAKE_MAGIC))
+@@ -4422,7 +4428,7 @@ static int macb_probe(struct platform_device *pdev)
+ bp->wol = 0;
+ if (of_get_property(np, "magic-packet", NULL))
+ bp->wol |= MACB_WOL_HAS_MAGIC_PACKET;
+- device_init_wakeup(&pdev->dev, bp->wol & MACB_WOL_HAS_MAGIC_PACKET);
++ device_set_wakeup_capable(&pdev->dev, bp->wol & MACB_WOL_HAS_MAGIC_PACKET);
+
+ spin_lock_init(&bp->lock);
+
+@@ -4598,10 +4604,10 @@ static int __maybe_unused macb_suspend(struct device *dev)
+ bp->pm_data.scrt2 = gem_readl_n(bp, ETHT, SCRT2_ETHT);
+ }
+
+- netif_carrier_off(netdev);
+ if (bp->ptp_info)
+ bp->ptp_info->ptp_remove(netdev);
+- pm_runtime_force_suspend(dev);
++ if (!device_may_wakeup(dev))
++ pm_runtime_force_suspend(dev);
+
+ return 0;
+ }
+@@ -4616,7 +4622,8 @@ static int __maybe_unused macb_resume(struct device *dev)
+ if (!netif_running(netdev))
+ return 0;
+
+- pm_runtime_force_resume(dev);
++ if (!device_may_wakeup(dev))
++ pm_runtime_force_resume(dev);
+
+ if (bp->wol & MACB_WOL_ENABLED) {
+ macb_writel(bp, IDR, MACB_BIT(WOL));
+@@ -4654,7 +4661,7 @@ static int __maybe_unused macb_runtime_suspend(struct device *dev)
+ struct net_device *netdev = dev_get_drvdata(dev);
+ struct macb *bp = netdev_priv(netdev);
+
+- if (!(device_may_wakeup(&bp->dev->dev))) {
++ if (!(device_may_wakeup(dev))) {
+ clk_disable_unprepare(bp->tx_clk);
+ clk_disable_unprepare(bp->hclk);
+ clk_disable_unprepare(bp->pclk);
+@@ -4670,7 +4677,7 @@ static int __maybe_unused macb_runtime_resume(struct device *dev)
+ struct net_device *netdev = dev_get_drvdata(dev);
+ struct macb *bp = netdev_priv(netdev);
+
+- if (!(device_may_wakeup(&bp->dev->dev))) {
++ if (!(device_may_wakeup(dev))) {
+ clk_prepare_enable(bp->pclk);
+ clk_prepare_enable(bp->hclk);
+ clk_prepare_enable(bp->tx_clk);
+diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_filter.c b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_filter.c
+index 7a7f61a8cdf4..d02d346629b3 100644
+--- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_filter.c
++++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_filter.c
+@@ -1112,16 +1112,16 @@ static bool is_addr_all_mask(u8 *ipmask, int family)
+ struct in_addr *addr;
+
+ addr = (struct in_addr *)ipmask;
+- if (ntohl(addr->s_addr) == 0xffffffff)
++ if (addr->s_addr == htonl(0xffffffff))
+ return true;
+ } else if (family == AF_INET6) {
+ struct in6_addr *addr6;
+
+ addr6 = (struct in6_addr *)ipmask;
+- if (ntohl(addr6->s6_addr32[0]) == 0xffffffff &&
+- ntohl(addr6->s6_addr32[1]) == 0xffffffff &&
+- ntohl(addr6->s6_addr32[2]) == 0xffffffff &&
+- ntohl(addr6->s6_addr32[3]) == 0xffffffff)
++ if (addr6->s6_addr32[0] == htonl(0xffffffff) &&
++ addr6->s6_addr32[1] == htonl(0xffffffff) &&
++ addr6->s6_addr32[2] == htonl(0xffffffff) &&
++ addr6->s6_addr32[3] == htonl(0xffffffff))
+ return true;
+ }
+ return false;
+diff --git a/drivers/net/ethernet/chelsio/cxgb4/t4_hw.c b/drivers/net/ethernet/chelsio/cxgb4/t4_hw.c
+index 2a3480fc1d91..9121cef2be2d 100644
+--- a/drivers/net/ethernet/chelsio/cxgb4/t4_hw.c
++++ b/drivers/net/ethernet/chelsio/cxgb4/t4_hw.c
+@@ -3493,7 +3493,7 @@ int t4_prep_fw(struct adapter *adap, struct fw_info *fw_info,
+ drv_fw = &fw_info->fw_hdr;
+
+ /* Read the header of the firmware on the card */
+- ret = -t4_read_flash(adap, FLASH_FW_START,
++ ret = t4_read_flash(adap, FLASH_FW_START,
+ sizeof(*card_fw) / sizeof(uint32_t),
+ (uint32_t *)card_fw, 1);
+ if (ret == 0) {
+@@ -3522,8 +3522,8 @@ int t4_prep_fw(struct adapter *adap, struct fw_info *fw_info,
+ should_install_fs_fw(adap, card_fw_usable,
+ be32_to_cpu(fs_fw->fw_ver),
+ be32_to_cpu(card_fw->fw_ver))) {
+- ret = -t4_fw_upgrade(adap, adap->mbox, fw_data,
+- fw_size, 0);
++ ret = t4_fw_upgrade(adap, adap->mbox, fw_data,
++ fw_size, 0);
+ if (ret != 0) {
+ dev_err(adap->pdev_dev,
+ "failed to install firmware: %d\n", ret);
+@@ -3554,7 +3554,7 @@ int t4_prep_fw(struct adapter *adap, struct fw_info *fw_info,
+ FW_HDR_FW_VER_MICRO_G(c), FW_HDR_FW_VER_BUILD_G(c),
+ FW_HDR_FW_VER_MAJOR_G(k), FW_HDR_FW_VER_MINOR_G(k),
+ FW_HDR_FW_VER_MICRO_G(k), FW_HDR_FW_VER_BUILD_G(k));
+- ret = EINVAL;
++ ret = -EINVAL;
+ goto bye;
+ }
+
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
+index da98fd7c8eca..3003eecd5263 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
+@@ -4153,9 +4153,8 @@ static void hns3_client_uninit(struct hnae3_handle *handle, bool reset)
+
+ hns3_put_ring_config(priv);
+
+- hns3_dbg_uninit(handle);
+-
+ out_netdev_free:
++ hns3_dbg_uninit(handle);
+ free_netdev(netdev);
+ }
+
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c b/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c
+index 28b81f24afa1..2a78805d531a 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c
+@@ -174,18 +174,21 @@ static void hns3_lb_check_skb_data(struct hns3_enet_ring *ring,
+ {
+ struct hns3_enet_tqp_vector *tqp_vector = ring->tqp_vector;
+ unsigned char *packet = skb->data;
++ u32 len = skb_headlen(skb);
+ u32 i;
+
+- for (i = 0; i < skb->len; i++)
++ len = min_t(u32, len, HNS3_NIC_LB_TEST_PACKET_SIZE);
++
++ for (i = 0; i < len; i++)
+ if (packet[i] != (unsigned char)(i & 0xff))
+ break;
+
+ /* The packet is correctly received */
+- if (i == skb->len)
++ if (i == HNS3_NIC_LB_TEST_PACKET_SIZE)
+ tqp_vector->rx_group.total_packets++;
+ else
+ print_hex_dump(KERN_ERR, "selftest:", DUMP_PREFIX_OFFSET, 16, 1,
+- skb->data, skb->len, true);
++ skb->data, len, true);
+
+ dev_kfree_skb_any(skb);
+ }
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+index a758f9ae32be..4de268a87958 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+@@ -9351,7 +9351,7 @@ retry:
+ set_bit(HCLGE_STATE_RST_HANDLING, &hdev->state);
+ hdev->reset_type = HNAE3_FLR_RESET;
+ ret = hclge_reset_prepare(hdev);
+- if (ret) {
++ if (ret || hdev->reset_pending) {
+ dev_err(&hdev->pdev->dev, "fail to prepare FLR, ret=%d\n",
+ ret);
+ if (hdev->reset_pending ||
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
+index e02d427131ee..e6cdd06925e6 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
+@@ -1527,6 +1527,11 @@ static int hclgevf_reset_prepare_wait(struct hclgevf_dev *hdev)
+ if (hdev->reset_type == HNAE3_VF_FUNC_RESET) {
+ hclgevf_build_send_msg(&send_msg, HCLGE_MBX_RESET, 0);
+ ret = hclgevf_send_mbx_msg(hdev, &send_msg, true, NULL, 0);
++ if (ret) {
++ dev_err(&hdev->pdev->dev,
++ "failed to assert VF reset, ret = %d\n", ret);
++ return ret;
++ }
+ hdev->rst_stats.vf_func_rst_cnt++;
+ }
+
+diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c
+index 2baf7b3ff4cb..0fd7eae25fe9 100644
+--- a/drivers/net/ethernet/ibm/ibmvnic.c
++++ b/drivers/net/ethernet/ibm/ibmvnic.c
+@@ -1971,13 +1971,18 @@ static int do_reset(struct ibmvnic_adapter *adapter,
+ release_sub_crqs(adapter, 1);
+ } else {
+ rc = ibmvnic_reset_crq(adapter);
+- if (!rc)
++ if (rc == H_CLOSED || rc == H_SUCCESS) {
+ rc = vio_enable_interrupts(adapter->vdev);
++ if (rc)
++ netdev_err(adapter->netdev,
++ "Reset failed to enable interrupts. rc=%d\n",
++ rc);
++ }
+ }
+
+ if (rc) {
+ netdev_err(adapter->netdev,
+- "Couldn't initialize crq. rc=%d\n", rc);
++ "Reset couldn't initialize crq. rc=%d\n", rc);
+ goto out;
+ }
+
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c
+index 2a037ec244b9..80dc5fcb82db 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_main.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_main.c
+@@ -439,11 +439,15 @@ static void i40e_get_netdev_stats_struct(struct net_device *netdev,
+ i40e_get_netdev_stats_struct_tx(ring, stats);
+
+ if (i40e_enabled_xdp_vsi(vsi)) {
+- ring++;
++ ring = READ_ONCE(vsi->xdp_rings[i]);
++ if (!ring)
++ continue;
+ i40e_get_netdev_stats_struct_tx(ring, stats);
+ }
+
+- ring++;
++ ring = READ_ONCE(vsi->rx_rings[i]);
++ if (!ring)
++ continue;
+ do {
+ start = u64_stats_fetch_begin_irq(&ring->syncp);
+ packets = ring->stats.packets;
+@@ -787,6 +791,8 @@ static void i40e_update_vsi_stats(struct i40e_vsi *vsi)
+ for (q = 0; q < vsi->num_queue_pairs; q++) {
+ /* locate Tx ring */
+ p = READ_ONCE(vsi->tx_rings[q]);
++ if (!p)
++ continue;
+
+ do {
+ start = u64_stats_fetch_begin_irq(&p->syncp);
+@@ -800,8 +806,11 @@ static void i40e_update_vsi_stats(struct i40e_vsi *vsi)
+ tx_linearize += p->tx_stats.tx_linearize;
+ tx_force_wb += p->tx_stats.tx_force_wb;
+
+- /* Rx queue is part of the same block as Tx queue */
+- p = &p[1];
++ /* locate Rx ring */
++ p = READ_ONCE(vsi->rx_rings[q]);
++ if (!p)
++ continue;
++
+ do {
+ start = u64_stats_fetch_begin_irq(&p->syncp);
+ packets = p->stats.packets;
+@@ -10816,10 +10825,10 @@ static void i40e_vsi_clear_rings(struct i40e_vsi *vsi)
+ if (vsi->tx_rings && vsi->tx_rings[0]) {
+ for (i = 0; i < vsi->alloc_queue_pairs; i++) {
+ kfree_rcu(vsi->tx_rings[i], rcu);
+- vsi->tx_rings[i] = NULL;
+- vsi->rx_rings[i] = NULL;
++ WRITE_ONCE(vsi->tx_rings[i], NULL);
++ WRITE_ONCE(vsi->rx_rings[i], NULL);
+ if (vsi->xdp_rings)
+- vsi->xdp_rings[i] = NULL;
++ WRITE_ONCE(vsi->xdp_rings[i], NULL);
+ }
+ }
+ }
+@@ -10853,7 +10862,7 @@ static int i40e_alloc_rings(struct i40e_vsi *vsi)
+ if (vsi->back->hw_features & I40E_HW_WB_ON_ITR_CAPABLE)
+ ring->flags = I40E_TXR_FLAGS_WB_ON_ITR;
+ ring->itr_setting = pf->tx_itr_default;
+- vsi->tx_rings[i] = ring++;
++ WRITE_ONCE(vsi->tx_rings[i], ring++);
+
+ if (!i40e_enabled_xdp_vsi(vsi))
+ goto setup_rx;
+@@ -10871,7 +10880,7 @@ static int i40e_alloc_rings(struct i40e_vsi *vsi)
+ ring->flags = I40E_TXR_FLAGS_WB_ON_ITR;
+ set_ring_xdp(ring);
+ ring->itr_setting = pf->tx_itr_default;
+- vsi->xdp_rings[i] = ring++;
++ WRITE_ONCE(vsi->xdp_rings[i], ring++);
+
+ setup_rx:
+ ring->queue_index = i;
+@@ -10884,7 +10893,7 @@ setup_rx:
+ ring->size = 0;
+ ring->dcb_tc = 0;
+ ring->itr_setting = pf->rx_itr_default;
+- vsi->rx_rings[i] = ring;
++ WRITE_ONCE(vsi->rx_rings[i], ring);
+ }
+
+ return 0;
+diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c
+index 2f256bf45efc..6dd839b32525 100644
+--- a/drivers/net/ethernet/intel/ice/ice_lib.c
++++ b/drivers/net/ethernet/intel/ice/ice_lib.c
+@@ -1063,7 +1063,7 @@ static void ice_vsi_clear_rings(struct ice_vsi *vsi)
+ for (i = 0; i < vsi->alloc_txq; i++) {
+ if (vsi->tx_rings[i]) {
+ kfree_rcu(vsi->tx_rings[i], rcu);
+- vsi->tx_rings[i] = NULL;
++ WRITE_ONCE(vsi->tx_rings[i], NULL);
+ }
+ }
+ }
+@@ -1071,7 +1071,7 @@ static void ice_vsi_clear_rings(struct ice_vsi *vsi)
+ for (i = 0; i < vsi->alloc_rxq; i++) {
+ if (vsi->rx_rings[i]) {
+ kfree_rcu(vsi->rx_rings[i], rcu);
+- vsi->rx_rings[i] = NULL;
++ WRITE_ONCE(vsi->rx_rings[i], NULL);
+ }
+ }
+ }
+@@ -1104,7 +1104,7 @@ static int ice_vsi_alloc_rings(struct ice_vsi *vsi)
+ ring->vsi = vsi;
+ ring->dev = dev;
+ ring->count = vsi->num_tx_desc;
+- vsi->tx_rings[i] = ring;
++ WRITE_ONCE(vsi->tx_rings[i], ring);
+ }
+
+ /* Allocate Rx rings */
+@@ -1123,7 +1123,7 @@ static int ice_vsi_alloc_rings(struct ice_vsi *vsi)
+ ring->netdev = vsi->netdev;
+ ring->dev = dev;
+ ring->count = vsi->num_rx_desc;
+- vsi->rx_rings[i] = ring;
++ WRITE_ONCE(vsi->rx_rings[i], ring);
+ }
+
+ return 0;
+diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
+index 69e50331e08e..7fd2ec63f128 100644
+--- a/drivers/net/ethernet/intel/ice/ice_main.c
++++ b/drivers/net/ethernet/intel/ice/ice_main.c
+@@ -1701,7 +1701,7 @@ static int ice_xdp_alloc_setup_rings(struct ice_vsi *vsi)
+ xdp_ring->netdev = NULL;
+ xdp_ring->dev = dev;
+ xdp_ring->count = vsi->num_tx_desc;
+- vsi->xdp_rings[i] = xdp_ring;
++ WRITE_ONCE(vsi->xdp_rings[i], xdp_ring);
+ if (ice_setup_tx_ring(xdp_ring))
+ goto free_xdp_rings;
+ ice_set_ring_xdp(xdp_ring);
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
+index fd9f5d41b594..2e35c5706cf1 100644
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
+@@ -921,7 +921,7 @@ static int ixgbe_alloc_q_vector(struct ixgbe_adapter *adapter,
+ ring->queue_index = txr_idx;
+
+ /* assign ring to adapter */
+- adapter->tx_ring[txr_idx] = ring;
++ WRITE_ONCE(adapter->tx_ring[txr_idx], ring);
+
+ /* update count and index */
+ txr_count--;
+@@ -948,7 +948,7 @@ static int ixgbe_alloc_q_vector(struct ixgbe_adapter *adapter,
+ set_ring_xdp(ring);
+
+ /* assign ring to adapter */
+- adapter->xdp_ring[xdp_idx] = ring;
++ WRITE_ONCE(adapter->xdp_ring[xdp_idx], ring);
+
+ /* update count and index */
+ xdp_count--;
+@@ -991,7 +991,7 @@ static int ixgbe_alloc_q_vector(struct ixgbe_adapter *adapter,
+ ring->queue_index = rxr_idx;
+
+ /* assign ring to adapter */
+- adapter->rx_ring[rxr_idx] = ring;
++ WRITE_ONCE(adapter->rx_ring[rxr_idx], ring);
+
+ /* update count and index */
+ rxr_count--;
+@@ -1020,13 +1020,13 @@ static void ixgbe_free_q_vector(struct ixgbe_adapter *adapter, int v_idx)
+
+ ixgbe_for_each_ring(ring, q_vector->tx) {
+ if (ring_is_xdp(ring))
+- adapter->xdp_ring[ring->queue_index] = NULL;
++ WRITE_ONCE(adapter->xdp_ring[ring->queue_index], NULL);
+ else
+- adapter->tx_ring[ring->queue_index] = NULL;
++ WRITE_ONCE(adapter->tx_ring[ring->queue_index], NULL);
+ }
+
+ ixgbe_for_each_ring(ring, q_vector->rx)
+- adapter->rx_ring[ring->queue_index] = NULL;
++ WRITE_ONCE(adapter->rx_ring[ring->queue_index], NULL);
+
+ adapter->q_vector[v_idx] = NULL;
+ napi_hash_del(&q_vector->napi);
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
+index ea6834bae04c..a32a072761aa 100644
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
+@@ -7065,7 +7065,10 @@ void ixgbe_update_stats(struct ixgbe_adapter *adapter)
+ }
+
+ for (i = 0; i < adapter->num_rx_queues; i++) {
+- struct ixgbe_ring *rx_ring = adapter->rx_ring[i];
++ struct ixgbe_ring *rx_ring = READ_ONCE(adapter->rx_ring[i]);
++
++ if (!rx_ring)
++ continue;
+ non_eop_descs += rx_ring->rx_stats.non_eop_descs;
+ alloc_rx_page += rx_ring->rx_stats.alloc_rx_page;
+ alloc_rx_page_failed += rx_ring->rx_stats.alloc_rx_page_failed;
+@@ -7086,15 +7089,20 @@ void ixgbe_update_stats(struct ixgbe_adapter *adapter)
+ packets = 0;
+ /* gather some stats to the adapter struct that are per queue */
+ for (i = 0; i < adapter->num_tx_queues; i++) {
+- struct ixgbe_ring *tx_ring = adapter->tx_ring[i];
++ struct ixgbe_ring *tx_ring = READ_ONCE(adapter->tx_ring[i]);
++
++ if (!tx_ring)
++ continue;
+ restart_queue += tx_ring->tx_stats.restart_queue;
+ tx_busy += tx_ring->tx_stats.tx_busy;
+ bytes += tx_ring->stats.bytes;
+ packets += tx_ring->stats.packets;
+ }
+ for (i = 0; i < adapter->num_xdp_queues; i++) {
+- struct ixgbe_ring *xdp_ring = adapter->xdp_ring[i];
++ struct ixgbe_ring *xdp_ring = READ_ONCE(adapter->xdp_ring[i]);
+
++ if (!xdp_ring)
++ continue;
+ restart_queue += xdp_ring->tx_stats.restart_queue;
+ tx_busy += xdp_ring->tx_stats.tx_busy;
+ bytes += xdp_ring->stats.bytes;
+diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
+index 43b44a1e8f69..cf26cf4e47aa 100644
+--- a/drivers/net/ethernet/marvell/mvneta.c
++++ b/drivers/net/ethernet/marvell/mvneta.c
+@@ -106,9 +106,11 @@
+ #define MVNETA_TX_IN_PRGRS BIT(1)
+ #define MVNETA_TX_FIFO_EMPTY BIT(8)
+ #define MVNETA_RX_MIN_FRAME_SIZE 0x247c
++/* Only exists on Armada XP and Armada 370 */
+ #define MVNETA_SERDES_CFG 0x24A0
+ #define MVNETA_SGMII_SERDES_PROTO 0x0cc7
+ #define MVNETA_QSGMII_SERDES_PROTO 0x0667
++#define MVNETA_HSGMII_SERDES_PROTO 0x1107
+ #define MVNETA_TYPE_PRIO 0x24bc
+ #define MVNETA_FORCE_UNI BIT(21)
+ #define MVNETA_TXQ_CMD_1 0x24e4
+@@ -3523,26 +3525,60 @@ static int mvneta_setup_txqs(struct mvneta_port *pp)
+ return 0;
+ }
+
+-static int mvneta_comphy_init(struct mvneta_port *pp)
++static int mvneta_comphy_init(struct mvneta_port *pp, phy_interface_t interface)
+ {
+ int ret;
+
+- if (!pp->comphy)
+- return 0;
+-
+- ret = phy_set_mode_ext(pp->comphy, PHY_MODE_ETHERNET,
+- pp->phy_interface);
++ ret = phy_set_mode_ext(pp->comphy, PHY_MODE_ETHERNET, interface);
+ if (ret)
+ return ret;
+
+ return phy_power_on(pp->comphy);
+ }
+
++static int mvneta_config_interface(struct mvneta_port *pp,
++ phy_interface_t interface)
++{
++ int ret = 0;
++
++ if (pp->comphy) {
++ if (interface == PHY_INTERFACE_MODE_SGMII ||
++ interface == PHY_INTERFACE_MODE_1000BASEX ||
++ interface == PHY_INTERFACE_MODE_2500BASEX) {
++ ret = mvneta_comphy_init(pp, interface);
++ }
++ } else {
++ switch (interface) {
++ case PHY_INTERFACE_MODE_QSGMII:
++ mvreg_write(pp, MVNETA_SERDES_CFG,
++ MVNETA_QSGMII_SERDES_PROTO);
++ break;
++
++ case PHY_INTERFACE_MODE_SGMII:
++ case PHY_INTERFACE_MODE_1000BASEX:
++ mvreg_write(pp, MVNETA_SERDES_CFG,
++ MVNETA_SGMII_SERDES_PROTO);
++ break;
++
++ case PHY_INTERFACE_MODE_2500BASEX:
++ mvreg_write(pp, MVNETA_SERDES_CFG,
++ MVNETA_HSGMII_SERDES_PROTO);
++ break;
++ default:
++ return -EINVAL;
++ }
++ }
++
++ pp->phy_interface = interface;
++
++ return ret;
++}
++
+ static void mvneta_start_dev(struct mvneta_port *pp)
+ {
+ int cpu;
+
+- WARN_ON(mvneta_comphy_init(pp));
++ WARN_ON(mvneta_config_interface(pp, pp->phy_interface));
+
+ mvneta_max_rx_size_set(pp, pp->pkt_size);
+ mvneta_txq_max_tx_size_set(pp, pp->pkt_size);
+@@ -3917,17 +3953,13 @@ static void mvneta_mac_config(struct phylink_config *config, unsigned int mode,
+ /* When at 2.5G, the link partner can send frames with shortened
+ * preambles.
+ */
+- if (state->speed == SPEED_2500)
++ if (state->interface == PHY_INTERFACE_MODE_2500BASEX)
+ new_ctrl4 |= MVNETA_GMAC4_SHORT_PREAMBLE_ENABLE;
+
+- if (pp->comphy && pp->phy_interface != state->interface &&
+- (state->interface == PHY_INTERFACE_MODE_SGMII ||
+- state->interface == PHY_INTERFACE_MODE_1000BASEX ||
+- state->interface == PHY_INTERFACE_MODE_2500BASEX)) {
+- pp->phy_interface = state->interface;
+-
+- WARN_ON(phy_power_off(pp->comphy));
+- WARN_ON(mvneta_comphy_init(pp));
++ if (pp->phy_interface != state->interface) {
++ if (pp->comphy)
++ WARN_ON(phy_power_off(pp->comphy));
++ WARN_ON(mvneta_config_interface(pp, state->interface));
+ }
+
+ if (new_ctrl0 != gmac_ctrl0)
+@@ -4971,20 +5003,10 @@ static void mvneta_conf_mbus_windows(struct mvneta_port *pp,
+ }
+
+ /* Power up the port */
+-static int mvneta_port_power_up(struct mvneta_port *pp, int phy_mode)
++static void mvneta_port_power_up(struct mvneta_port *pp, int phy_mode)
+ {
+ /* MAC Cause register should be cleared */
+ mvreg_write(pp, MVNETA_UNIT_INTR_CAUSE, 0);
+-
+- if (phy_mode == PHY_INTERFACE_MODE_QSGMII)
+- mvreg_write(pp, MVNETA_SERDES_CFG, MVNETA_QSGMII_SERDES_PROTO);
+- else if (phy_mode == PHY_INTERFACE_MODE_SGMII ||
+- phy_interface_mode_is_8023z(phy_mode))
+- mvreg_write(pp, MVNETA_SERDES_CFG, MVNETA_SGMII_SERDES_PROTO);
+- else if (!phy_interface_mode_is_rgmii(phy_mode))
+- return -EINVAL;
+-
+- return 0;
+ }
+
+ /* Device initialization routine */
+@@ -5170,11 +5192,7 @@ static int mvneta_probe(struct platform_device *pdev)
+ if (err < 0)
+ goto err_netdev;
+
+- err = mvneta_port_power_up(pp, phy_mode);
+- if (err < 0) {
+- dev_err(&pdev->dev, "can't power up port\n");
+- goto err_netdev;
+- }
++ mvneta_port_power_up(pp, phy_mode);
+
+ /* Armada3700 network controller does not support per-cpu
+ * operation, so only single NAPI should be initialized.
+@@ -5328,11 +5346,7 @@ static int mvneta_resume(struct device *device)
+ }
+ }
+ mvneta_defaults_set(pp);
+- err = mvneta_port_power_up(pp, pp->phy_interface);
+- if (err < 0) {
+- dev_err(device, "can't power up port\n");
+- return err;
+- }
++ mvneta_port_power_up(pp, pp->phy_interface);
+
+ netif_device_attach(dev);
+
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/port.c b/drivers/net/ethernet/mellanox/mlx5/core/en/port.c
+index 2a8950b3056f..3cf3e35053f7 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/port.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/port.c
+@@ -78,11 +78,26 @@ static const u32 mlx5e_ext_link_speed[MLX5E_EXT_LINK_MODES_NUMBER] = {
+ [MLX5E_400GAUI_8] = 400000,
+ };
+
++bool mlx5e_ptys_ext_supported(struct mlx5_core_dev *mdev)
++{
++ struct mlx5e_port_eth_proto eproto;
++ int err;
++
++ if (MLX5_CAP_PCAM_FEATURE(mdev, ptys_extended_ethernet))
++ return true;
++
++ err = mlx5_port_query_eth_proto(mdev, 1, true, &eproto);
++ if (err)
++ return false;
++
++ return !!eproto.cap;
++}
++
+ static void mlx5e_port_get_speed_arr(struct mlx5_core_dev *mdev,
+ const u32 **arr, u32 *size,
+ bool force_legacy)
+ {
+- bool ext = force_legacy ? false : MLX5_CAP_PCAM_FEATURE(mdev, ptys_extended_ethernet);
++ bool ext = force_legacy ? false : mlx5e_ptys_ext_supported(mdev);
+
+ *size = ext ? ARRAY_SIZE(mlx5e_ext_link_speed) :
+ ARRAY_SIZE(mlx5e_link_speed);
+@@ -177,7 +192,7 @@ int mlx5e_port_linkspeed(struct mlx5_core_dev *mdev, u32 *speed)
+ bool ext;
+ int err;
+
+- ext = MLX5_CAP_PCAM_FEATURE(mdev, ptys_extended_ethernet);
++ ext = mlx5e_ptys_ext_supported(mdev);
+ err = mlx5_port_query_eth_proto(mdev, 1, ext, &eproto);
+ if (err)
+ goto out;
+@@ -205,7 +220,7 @@ int mlx5e_port_max_linkspeed(struct mlx5_core_dev *mdev, u32 *speed)
+ int err;
+ int i;
+
+- ext = MLX5_CAP_PCAM_FEATURE(mdev, ptys_extended_ethernet);
++ ext = mlx5e_ptys_ext_supported(mdev);
+ err = mlx5_port_query_eth_proto(mdev, 1, ext, &eproto);
+ if (err)
+ return err;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/port.h b/drivers/net/ethernet/mellanox/mlx5/core/en/port.h
+index a2ddd446dd59..7a7defe60792 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/port.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/port.h
+@@ -54,7 +54,7 @@ int mlx5e_port_linkspeed(struct mlx5_core_dev *mdev, u32 *speed);
+ int mlx5e_port_max_linkspeed(struct mlx5_core_dev *mdev, u32 *speed);
+ u32 mlx5e_port_speed2linkmodes(struct mlx5_core_dev *mdev, u32 speed,
+ bool force_legacy);
+-
++bool mlx5e_ptys_ext_supported(struct mlx5_core_dev *mdev);
+ int mlx5e_port_query_pbmc(struct mlx5_core_dev *mdev, void *out);
+ int mlx5e_port_set_pbmc(struct mlx5_core_dev *mdev, void *in);
+ int mlx5e_port_query_priority2buffer(struct mlx5_core_dev *mdev, u8 *buffer);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c
+index 470282daed19..369a03771435 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c
+@@ -849,6 +849,7 @@ mlx5_tc_ct_flush_ft_entry(void *ptr, void *arg)
+ struct mlx5_ct_entry *entry = ptr;
+
+ mlx5_tc_ct_entry_del_rules(ct_priv, entry);
++ kfree(entry);
+ }
+
+ static void
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c b/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
+index bc290ae80a53..1c491acd48f3 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
+@@ -200,7 +200,7 @@ static void mlx5e_ethtool_get_speed_arr(struct mlx5_core_dev *mdev,
+ struct ptys2ethtool_config **arr,
+ u32 *size)
+ {
+- bool ext = MLX5_CAP_PCAM_FEATURE(mdev, ptys_extended_ethernet);
++ bool ext = mlx5e_ptys_ext_supported(mdev);
+
+ *arr = ext ? ptys2ext_ethtool_table : ptys2legacy_ethtool_table;
+ *size = ext ? ARRAY_SIZE(ptys2ext_ethtool_table) :
+@@ -883,7 +883,7 @@ static void get_lp_advertising(struct mlx5_core_dev *mdev, u32 eth_proto_lp,
+ struct ethtool_link_ksettings *link_ksettings)
+ {
+ unsigned long *lp_advertising = link_ksettings->link_modes.lp_advertising;
+- bool ext = MLX5_CAP_PCAM_FEATURE(mdev, ptys_extended_ethernet);
++ bool ext = mlx5e_ptys_ext_supported(mdev);
+
+ ptys2ethtool_adver_link(lp_advertising, eth_proto_lp, ext);
+ }
+@@ -913,7 +913,7 @@ int mlx5e_ethtool_get_link_ksettings(struct mlx5e_priv *priv,
+ __func__, err);
+ goto err_query_regs;
+ }
+- ext = MLX5_CAP_PCAM_FEATURE(mdev, ptys_extended_ethernet);
++ ext = !!MLX5_GET_ETH_PROTO(ptys_reg, out, true, eth_proto_capability);
+ eth_proto_cap = MLX5_GET_ETH_PROTO(ptys_reg, out, ext,
+ eth_proto_capability);
+ eth_proto_admin = MLX5_GET_ETH_PROTO(ptys_reg, out, ext,
+@@ -1066,7 +1066,7 @@ int mlx5e_ethtool_set_link_ksettings(struct mlx5e_priv *priv,
+ autoneg = link_ksettings->base.autoneg;
+ speed = link_ksettings->base.speed;
+
+- ext_supported = MLX5_CAP_PCAM_FEATURE(mdev, ptys_extended_ethernet);
++ ext_supported = mlx5e_ptys_ext_supported(mdev);
+ ext = ext_requested(autoneg, adver, ext_supported);
+ if (!ext_supported && ext)
+ return -EOPNOTSUPP;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+index bd8d0e096085..bc54913c5861 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+@@ -3076,9 +3076,6 @@ int mlx5e_open(struct net_device *netdev)
+ mlx5_set_port_admin_status(priv->mdev, MLX5_PORT_UP);
+ mutex_unlock(&priv->state_lock);
+
+- if (mlx5_vxlan_allowed(priv->mdev->vxlan))
+- udp_tunnel_get_rx_info(netdev);
+-
+ return err;
+ }
+
+@@ -5122,6 +5119,10 @@ static int mlx5e_init_nic_rx(struct mlx5e_priv *priv)
+ if (err)
+ goto err_destroy_flow_steering;
+
++#ifdef CONFIG_MLX5_EN_ARFS
++ priv->netdev->rx_cpu_rmap = mlx5_eq_table_get_rmap(priv->mdev);
++#endif
++
+ return 0;
+
+ err_destroy_flow_steering:
+@@ -5207,6 +5208,8 @@ static void mlx5e_nic_enable(struct mlx5e_priv *priv)
+ rtnl_lock();
+ if (netif_running(netdev))
+ mlx5e_open(netdev);
++ if (mlx5_vxlan_allowed(priv->mdev->vxlan))
++ udp_tunnel_get_rx_info(netdev);
+ netif_device_attach(netdev);
+ rtnl_unlock();
+ }
+@@ -5223,6 +5226,8 @@ static void mlx5e_nic_disable(struct mlx5e_priv *priv)
+ rtnl_lock();
+ if (netif_running(priv->netdev))
+ mlx5e_close(priv->netdev);
++ if (mlx5_vxlan_allowed(priv->mdev->vxlan))
++ udp_tunnel_drop_rx_info(priv->netdev);
+ netif_device_detach(priv->netdev);
+ rtnl_unlock();
+
+@@ -5295,10 +5300,6 @@ int mlx5e_netdev_init(struct net_device *netdev,
+ /* netdev init */
+ netif_carrier_off(netdev);
+
+-#ifdef CONFIG_MLX5_EN_ARFS
+- netdev->rx_cpu_rmap = mlx5_eq_table_get_rmap(mdev);
+-#endif
+-
+ return 0;
+
+ err_free_cpumask:
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/port.c b/drivers/net/ethernet/mellanox/mlx5/core/port.c
+index cc262b30aed5..dc589322940c 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/port.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/port.c
+@@ -293,7 +293,40 @@ static int mlx5_query_module_num(struct mlx5_core_dev *dev, int *module_num)
+ return 0;
+ }
+
+-static int mlx5_eeprom_page(int offset)
++static int mlx5_query_module_id(struct mlx5_core_dev *dev, int module_num,
++ u8 *module_id)
++{
++ u32 in[MLX5_ST_SZ_DW(mcia_reg)] = {};
++ u32 out[MLX5_ST_SZ_DW(mcia_reg)];
++ int err, status;
++ u8 *ptr;
++
++ MLX5_SET(mcia_reg, in, i2c_device_address, MLX5_I2C_ADDR_LOW);
++ MLX5_SET(mcia_reg, in, module, module_num);
++ MLX5_SET(mcia_reg, in, device_address, 0);
++ MLX5_SET(mcia_reg, in, page_number, 0);
++ MLX5_SET(mcia_reg, in, size, 1);
++ MLX5_SET(mcia_reg, in, l, 0);
++
++ err = mlx5_core_access_reg(dev, in, sizeof(in), out,
++ sizeof(out), MLX5_REG_MCIA, 0, 0);
++ if (err)
++ return err;
++
++ status = MLX5_GET(mcia_reg, out, status);
++ if (status) {
++ mlx5_core_err(dev, "query_mcia_reg failed: status: 0x%x\n",
++ status);
++ return -EIO;
++ }
++ ptr = MLX5_ADDR_OF(mcia_reg, out, dword_0);
++
++ *module_id = ptr[0];
++
++ return 0;
++}
++
++static int mlx5_qsfp_eeprom_page(u16 offset)
+ {
+ if (offset < MLX5_EEPROM_PAGE_LENGTH)
+ /* Addresses between 0-255 - page 00 */
+@@ -307,7 +340,7 @@ static int mlx5_eeprom_page(int offset)
+ MLX5_EEPROM_HIGH_PAGE_LENGTH);
+ }
+
+-static int mlx5_eeprom_high_page_offset(int page_num)
++static int mlx5_qsfp_eeprom_high_page_offset(int page_num)
+ {
+ if (!page_num) /* Page 0 always start from low page */
+ return 0;
+@@ -316,35 +349,62 @@ static int mlx5_eeprom_high_page_offset(int page_num)
+ return page_num * MLX5_EEPROM_HIGH_PAGE_LENGTH;
+ }
+
++static void mlx5_qsfp_eeprom_params_set(u16 *i2c_addr, int *page_num, u16 *offset)
++{
++ *i2c_addr = MLX5_I2C_ADDR_LOW;
++ *page_num = mlx5_qsfp_eeprom_page(*offset);
++ *offset -= mlx5_qsfp_eeprom_high_page_offset(*page_num);
++}
++
++static void mlx5_sfp_eeprom_params_set(u16 *i2c_addr, int *page_num, u16 *offset)
++{
++ *i2c_addr = MLX5_I2C_ADDR_LOW;
++ *page_num = 0;
++
++ if (*offset < MLX5_EEPROM_PAGE_LENGTH)
++ return;
++
++ *i2c_addr = MLX5_I2C_ADDR_HIGH;
++ *offset -= MLX5_EEPROM_PAGE_LENGTH;
++}
++
+ int mlx5_query_module_eeprom(struct mlx5_core_dev *dev,
+ u16 offset, u16 size, u8 *data)
+ {
+- int module_num, page_num, status, err;
++ int module_num, status, err, page_num = 0;
++ u32 in[MLX5_ST_SZ_DW(mcia_reg)] = {};
+ u32 out[MLX5_ST_SZ_DW(mcia_reg)];
+- u32 in[MLX5_ST_SZ_DW(mcia_reg)];
+- u16 i2c_addr;
+- void *ptr = MLX5_ADDR_OF(mcia_reg, out, dword_0);
++ u16 i2c_addr = 0;
++ u8 module_id;
++ void *ptr;
+
+ err = mlx5_query_module_num(dev, &module_num);
+ if (err)
+ return err;
+
+- memset(in, 0, sizeof(in));
+- size = min_t(int, size, MLX5_EEPROM_MAX_BYTES);
+-
+- /* Get the page number related to the given offset */
+- page_num = mlx5_eeprom_page(offset);
++ err = mlx5_query_module_id(dev, module_num, &module_id);
++ if (err)
++ return err;
+
+- /* Set the right offset according to the page number,
+- * For page_num > 0, relative offset is always >= 128 (high page).
+- */
+- offset -= mlx5_eeprom_high_page_offset(page_num);
++ switch (module_id) {
++ case MLX5_MODULE_ID_SFP:
++ mlx5_sfp_eeprom_params_set(&i2c_addr, &page_num, &offset);
++ break;
++ case MLX5_MODULE_ID_QSFP:
++ case MLX5_MODULE_ID_QSFP_PLUS:
++ case MLX5_MODULE_ID_QSFP28:
++ mlx5_qsfp_eeprom_params_set(&i2c_addr, &page_num, &offset);
++ break;
++ default:
++ mlx5_core_err(dev, "Module ID not recognized: 0x%x\n", module_id);
++ return -EINVAL;
++ }
+
+ if (offset + size > MLX5_EEPROM_PAGE_LENGTH)
+ /* Cross pages read, read until offset 256 in low page */
+ size -= offset + size - MLX5_EEPROM_PAGE_LENGTH;
+
+- i2c_addr = MLX5_I2C_ADDR_LOW;
++ size = min_t(int, size, MLX5_EEPROM_MAX_BYTES);
+
+ MLX5_SET(mcia_reg, in, l, 0);
+ MLX5_SET(mcia_reg, in, module, module_num);
+@@ -365,6 +425,7 @@ int mlx5_query_module_eeprom(struct mlx5_core_dev *dev,
+ return -EIO;
+ }
+
++ ptr = MLX5_ADDR_OF(mcia_reg, out, dword_0);
+ memcpy(data, ptr, size);
+
+ return size;
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/pci.c b/drivers/net/ethernet/mellanox/mlxsw/pci.c
+index fd0e97de44e7..c04ec1a92826 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/pci.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/pci.c
+@@ -1414,23 +1414,12 @@ static int mlxsw_pci_init(void *bus_priv, struct mlxsw_core *mlxsw_core,
+ u16 num_pages;
+ int err;
+
+- mutex_init(&mlxsw_pci->cmd.lock);
+- init_waitqueue_head(&mlxsw_pci->cmd.wait);
+-
+ mlxsw_pci->core = mlxsw_core;
+
+ mbox = mlxsw_cmd_mbox_alloc();
+ if (!mbox)
+ return -ENOMEM;
+
+- err = mlxsw_pci_mbox_alloc(mlxsw_pci, &mlxsw_pci->cmd.in_mbox);
+- if (err)
+- goto mbox_put;
+-
+- err = mlxsw_pci_mbox_alloc(mlxsw_pci, &mlxsw_pci->cmd.out_mbox);
+- if (err)
+- goto err_out_mbox_alloc;
+-
+ err = mlxsw_pci_sw_reset(mlxsw_pci, mlxsw_pci->id);
+ if (err)
+ goto err_sw_reset;
+@@ -1537,9 +1526,6 @@ err_query_fw:
+ mlxsw_pci_free_irq_vectors(mlxsw_pci);
+ err_alloc_irq:
+ err_sw_reset:
+- mlxsw_pci_mbox_free(mlxsw_pci, &mlxsw_pci->cmd.out_mbox);
+-err_out_mbox_alloc:
+- mlxsw_pci_mbox_free(mlxsw_pci, &mlxsw_pci->cmd.in_mbox);
+ mbox_put:
+ mlxsw_cmd_mbox_free(mbox);
+ return err;
+@@ -1553,8 +1539,6 @@ static void mlxsw_pci_fini(void *bus_priv)
+ mlxsw_pci_aqs_fini(mlxsw_pci);
+ mlxsw_pci_fw_area_fini(mlxsw_pci);
+ mlxsw_pci_free_irq_vectors(mlxsw_pci);
+- mlxsw_pci_mbox_free(mlxsw_pci, &mlxsw_pci->cmd.out_mbox);
+- mlxsw_pci_mbox_free(mlxsw_pci, &mlxsw_pci->cmd.in_mbox);
+ }
+
+ static struct mlxsw_pci_queue *
+@@ -1776,6 +1760,37 @@ static const struct mlxsw_bus mlxsw_pci_bus = {
+ .features = MLXSW_BUS_F_TXRX | MLXSW_BUS_F_RESET,
+ };
+
++static int mlxsw_pci_cmd_init(struct mlxsw_pci *mlxsw_pci)
++{
++ int err;
++
++ mutex_init(&mlxsw_pci->cmd.lock);
++ init_waitqueue_head(&mlxsw_pci->cmd.wait);
++
++ err = mlxsw_pci_mbox_alloc(mlxsw_pci, &mlxsw_pci->cmd.in_mbox);
++ if (err)
++ goto err_in_mbox_alloc;
++
++ err = mlxsw_pci_mbox_alloc(mlxsw_pci, &mlxsw_pci->cmd.out_mbox);
++ if (err)
++ goto err_out_mbox_alloc;
++
++ return 0;
++
++err_out_mbox_alloc:
++ mlxsw_pci_mbox_free(mlxsw_pci, &mlxsw_pci->cmd.in_mbox);
++err_in_mbox_alloc:
++ mutex_destroy(&mlxsw_pci->cmd.lock);
++ return err;
++}
++
++static void mlxsw_pci_cmd_fini(struct mlxsw_pci *mlxsw_pci)
++{
++ mlxsw_pci_mbox_free(mlxsw_pci, &mlxsw_pci->cmd.out_mbox);
++ mlxsw_pci_mbox_free(mlxsw_pci, &mlxsw_pci->cmd.in_mbox);
++ mutex_destroy(&mlxsw_pci->cmd.lock);
++}
++
+ static int mlxsw_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ {
+ const char *driver_name = pdev->driver->name;
+@@ -1831,6 +1846,10 @@ static int mlxsw_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ mlxsw_pci->pdev = pdev;
+ pci_set_drvdata(pdev, mlxsw_pci);
+
++ err = mlxsw_pci_cmd_init(mlxsw_pci);
++ if (err)
++ goto err_pci_cmd_init;
++
+ mlxsw_pci->bus_info.device_kind = driver_name;
+ mlxsw_pci->bus_info.device_name = pci_name(mlxsw_pci->pdev);
+ mlxsw_pci->bus_info.dev = &pdev->dev;
+@@ -1848,6 +1867,8 @@ static int mlxsw_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ return 0;
+
+ err_bus_device_register:
++ mlxsw_pci_cmd_fini(mlxsw_pci);
++err_pci_cmd_init:
+ iounmap(mlxsw_pci->hw_addr);
+ err_ioremap:
+ err_pci_resource_len_check:
+@@ -1865,6 +1886,7 @@ static void mlxsw_pci_remove(struct pci_dev *pdev)
+ struct mlxsw_pci *mlxsw_pci = pci_get_drvdata(pdev);
+
+ mlxsw_core_bus_device_unregister(mlxsw_pci->core, false);
++ mlxsw_pci_cmd_fini(mlxsw_pci);
+ iounmap(mlxsw_pci->hw_addr);
+ pci_release_regions(mlxsw_pci->pdev);
+ pci_disable_device(mlxsw_pci->pdev);
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c
+index d5bca1be3ef5..84b3d78a9dd8 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c
+@@ -6256,7 +6256,7 @@ static int mlxsw_sp_router_fib_event(struct notifier_block *nb,
+ }
+
+ fib_work = kzalloc(sizeof(*fib_work), GFP_ATOMIC);
+- if (WARN_ON(!fib_work))
++ if (!fib_work)
+ return NOTIFY_BAD;
+
+ fib_work->mlxsw_sp = router->mlxsw_sp;
+diff --git a/drivers/net/ethernet/pensando/ionic/ionic_ethtool.c b/drivers/net/ethernet/pensando/ionic/ionic_ethtool.c
+index 6996229facfd..22430fa911e2 100644
+--- a/drivers/net/ethernet/pensando/ionic/ionic_ethtool.c
++++ b/drivers/net/ethernet/pensando/ionic/ionic_ethtool.c
+@@ -464,12 +464,18 @@ static void ionic_get_ringparam(struct net_device *netdev,
+ ring->rx_pending = lif->nrxq_descs;
+ }
+
++static void ionic_set_ringsize(struct ionic_lif *lif, void *arg)
++{
++ struct ethtool_ringparam *ring = arg;
++
++ lif->ntxq_descs = ring->tx_pending;
++ lif->nrxq_descs = ring->rx_pending;
++}
++
+ static int ionic_set_ringparam(struct net_device *netdev,
+ struct ethtool_ringparam *ring)
+ {
+ struct ionic_lif *lif = netdev_priv(netdev);
+- bool running;
+- int err;
+
+ if (ring->rx_mini_pending || ring->rx_jumbo_pending) {
+ netdev_info(netdev, "Changing jumbo or mini descriptors not supported\n");
+@@ -487,22 +493,7 @@ static int ionic_set_ringparam(struct net_device *netdev,
+ ring->rx_pending == lif->nrxq_descs)
+ return 0;
+
+- err = ionic_wait_for_bit(lif, IONIC_LIF_F_QUEUE_RESET);
+- if (err)
+- return err;
+-
+- running = test_bit(IONIC_LIF_F_UP, lif->state);
+- if (running)
+- ionic_stop(netdev);
+-
+- lif->ntxq_descs = ring->tx_pending;
+- lif->nrxq_descs = ring->rx_pending;
+-
+- if (running)
+- ionic_open(netdev);
+- clear_bit(IONIC_LIF_F_QUEUE_RESET, lif->state);
+-
+- return 0;
++ return ionic_reset_queues(lif, ionic_set_ringsize, ring);
+ }
+
+ static void ionic_get_channels(struct net_device *netdev,
+@@ -517,12 +508,17 @@ static void ionic_get_channels(struct net_device *netdev,
+ ch->combined_count = lif->nxqs;
+ }
+
++static void ionic_set_queuecount(struct ionic_lif *lif, void *arg)
++{
++ struct ethtool_channels *ch = arg;
++
++ lif->nxqs = ch->combined_count;
++}
++
+ static int ionic_set_channels(struct net_device *netdev,
+ struct ethtool_channels *ch)
+ {
+ struct ionic_lif *lif = netdev_priv(netdev);
+- bool running;
+- int err;
+
+ if (!ch->combined_count || ch->other_count ||
+ ch->rx_count || ch->tx_count)
+@@ -531,21 +527,7 @@ static int ionic_set_channels(struct net_device *netdev,
+ if (ch->combined_count == lif->nxqs)
+ return 0;
+
+- err = ionic_wait_for_bit(lif, IONIC_LIF_F_QUEUE_RESET);
+- if (err)
+- return err;
+-
+- running = test_bit(IONIC_LIF_F_UP, lif->state);
+- if (running)
+- ionic_stop(netdev);
+-
+- lif->nxqs = ch->combined_count;
+-
+- if (running)
+- ionic_open(netdev);
+- clear_bit(IONIC_LIF_F_QUEUE_RESET, lif->state);
+-
+- return 0;
++ return ionic_reset_queues(lif, ionic_set_queuecount, ch);
+ }
+
+ static u32 ionic_get_priv_flags(struct net_device *netdev)
+diff --git a/drivers/net/ethernet/pensando/ionic/ionic_lif.c b/drivers/net/ethernet/pensando/ionic/ionic_lif.c
+index 790d4854b8ef..b591bec0301c 100644
+--- a/drivers/net/ethernet/pensando/ionic/ionic_lif.c
++++ b/drivers/net/ethernet/pensando/ionic/ionic_lif.c
+@@ -1301,7 +1301,7 @@ static int ionic_change_mtu(struct net_device *netdev, int new_mtu)
+ return err;
+
+ netdev->mtu = new_mtu;
+- err = ionic_reset_queues(lif);
++ err = ionic_reset_queues(lif, NULL, NULL);
+
+ return err;
+ }
+@@ -1313,7 +1313,7 @@ static void ionic_tx_timeout_work(struct work_struct *ws)
+ netdev_info(lif->netdev, "Tx Timeout recovery\n");
+
+ rtnl_lock();
+- ionic_reset_queues(lif);
++ ionic_reset_queues(lif, NULL, NULL);
+ rtnl_unlock();
+ }
+
+@@ -1944,7 +1944,7 @@ static const struct net_device_ops ionic_netdev_ops = {
+ .ndo_get_vf_stats = ionic_get_vf_stats,
+ };
+
+-int ionic_reset_queues(struct ionic_lif *lif)
++int ionic_reset_queues(struct ionic_lif *lif, ionic_reset_cb cb, void *arg)
+ {
+ bool running;
+ int err = 0;
+@@ -1957,12 +1957,19 @@ int ionic_reset_queues(struct ionic_lif *lif)
+ if (running) {
+ netif_device_detach(lif->netdev);
+ err = ionic_stop(lif->netdev);
++ if (err)
++ goto reset_out;
+ }
+- if (!err && running) {
+- ionic_open(lif->netdev);
++
++ if (cb)
++ cb(lif, arg);
++
++ if (running) {
++ err = ionic_open(lif->netdev);
+ netif_device_attach(lif->netdev);
+ }
+
++reset_out:
+ clear_bit(IONIC_LIF_F_QUEUE_RESET, lif->state);
+
+ return err;
+diff --git a/drivers/net/ethernet/pensando/ionic/ionic_lif.h b/drivers/net/ethernet/pensando/ionic/ionic_lif.h
+index 5d4ffda5c05f..2c65cf6300db 100644
+--- a/drivers/net/ethernet/pensando/ionic/ionic_lif.h
++++ b/drivers/net/ethernet/pensando/ionic/ionic_lif.h
+@@ -226,6 +226,8 @@ static inline u32 ionic_coal_hw_to_usec(struct ionic *ionic, u32 units)
+ return (units * div) / mult;
+ }
+
++typedef void (*ionic_reset_cb)(struct ionic_lif *lif, void *arg);
++
+ void ionic_link_status_check_request(struct ionic_lif *lif);
+ void ionic_lif_deferred_enqueue(struct ionic_deferred *def,
+ struct ionic_deferred_work *work);
+@@ -243,7 +245,7 @@ int ionic_lif_rss_config(struct ionic_lif *lif, u16 types,
+
+ int ionic_open(struct net_device *netdev);
+ int ionic_stop(struct net_device *netdev);
+-int ionic_reset_queues(struct ionic_lif *lif);
++int ionic_reset_queues(struct ionic_lif *lif, ionic_reset_cb cb, void *arg);
+
+ static inline void debug_stats_txq_post(struct ionic_qcq *qcq,
+ struct ionic_txq_desc *desc, bool dbell)
+diff --git a/drivers/net/ethernet/qlogic/qed/qed.h b/drivers/net/ethernet/qlogic/qed/qed.h
+index fa41bf08a589..58d6ef489d5b 100644
+--- a/drivers/net/ethernet/qlogic/qed/qed.h
++++ b/drivers/net/ethernet/qlogic/qed/qed.h
+@@ -880,6 +880,8 @@ struct qed_dev {
+ #endif
+ struct qed_dbg_feature dbg_features[DBG_FEATURE_NUM];
+ bool disable_ilt_dump;
++ bool dbg_bin_dump;
++
+ DECLARE_HASHTABLE(connections, 10);
+ const struct firmware *firmware;
+
+diff --git a/drivers/net/ethernet/qlogic/qed/qed_debug.c b/drivers/net/ethernet/qlogic/qed/qed_debug.c
+index 3e56b6056b47..25745b75daf3 100644
+--- a/drivers/net/ethernet/qlogic/qed/qed_debug.c
++++ b/drivers/net/ethernet/qlogic/qed/qed_debug.c
+@@ -7506,6 +7506,12 @@ static enum dbg_status format_feature(struct qed_hwfn *p_hwfn,
+ if (p_hwfn->cdev->dbg_params.print_data)
+ qed_dbg_print_feature(text_buf, text_size_bytes);
+
++ /* Just return the original binary buffer if requested */
++ if (p_hwfn->cdev->dbg_bin_dump) {
++ vfree(text_buf);
++ return DBG_STATUS_OK;
++ }
++
+ /* Free the old dump_buf and point the dump_buf to the newly allocagted
+ * and formatted text buffer.
+ */
+@@ -7733,7 +7739,9 @@ int qed_dbg_mcp_trace_size(struct qed_dev *cdev)
+ #define REGDUMP_HEADER_SIZE_SHIFT 0
+ #define REGDUMP_HEADER_SIZE_MASK 0xffffff
+ #define REGDUMP_HEADER_FEATURE_SHIFT 24
+-#define REGDUMP_HEADER_FEATURE_MASK 0x3f
++#define REGDUMP_HEADER_FEATURE_MASK 0x1f
++#define REGDUMP_HEADER_BIN_DUMP_SHIFT 29
++#define REGDUMP_HEADER_BIN_DUMP_MASK 0x1
+ #define REGDUMP_HEADER_OMIT_ENGINE_SHIFT 30
+ #define REGDUMP_HEADER_OMIT_ENGINE_MASK 0x1
+ #define REGDUMP_HEADER_ENGINE_SHIFT 31
+@@ -7771,6 +7779,7 @@ static u32 qed_calc_regdump_header(struct qed_dev *cdev,
+ feature, feature_size);
+
+ SET_FIELD(res, REGDUMP_HEADER_FEATURE, feature);
++ SET_FIELD(res, REGDUMP_HEADER_BIN_DUMP, 1);
+ SET_FIELD(res, REGDUMP_HEADER_OMIT_ENGINE, omit_engine);
+ SET_FIELD(res, REGDUMP_HEADER_ENGINE, engine);
+
+@@ -7794,6 +7803,7 @@ int qed_dbg_all_data(struct qed_dev *cdev, void *buffer)
+ omit_engine = 1;
+
+ mutex_lock(&qed_dbg_lock);
++ cdev->dbg_bin_dump = true;
+
+ org_engine = qed_get_debug_engine(cdev);
+ for (cur_engine = 0; cur_engine < cdev->num_hwfns; cur_engine++) {
+@@ -7931,6 +7941,10 @@ int qed_dbg_all_data(struct qed_dev *cdev, void *buffer)
+ DP_ERR(cdev, "qed_dbg_mcp_trace failed. rc = %d\n", rc);
+ }
+
++ /* Re-populate nvm attribute info */
++ qed_mcp_nvm_info_free(p_hwfn);
++ qed_mcp_nvm_info_populate(p_hwfn);
++
+ /* nvm cfg1 */
+ rc = qed_dbg_nvm_image(cdev,
+ (u8 *)buffer + offset +
+@@ -7993,6 +8007,7 @@ int qed_dbg_all_data(struct qed_dev *cdev, void *buffer)
+ QED_NVM_IMAGE_MDUMP, "QED_NVM_IMAGE_MDUMP", rc);
+ }
+
++ cdev->dbg_bin_dump = false;
+ mutex_unlock(&qed_dbg_lock);
+
+ return 0;
+diff --git a/drivers/net/ethernet/qlogic/qed/qed_dev.c b/drivers/net/ethernet/qlogic/qed/qed_dev.c
+index 9b00988fb77e..58913fe4f345 100644
+--- a/drivers/net/ethernet/qlogic/qed/qed_dev.c
++++ b/drivers/net/ethernet/qlogic/qed/qed_dev.c
+@@ -4466,12 +4466,6 @@ static int qed_get_dev_info(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt)
+ return 0;
+ }
+
+-static void qed_nvm_info_free(struct qed_hwfn *p_hwfn)
+-{
+- kfree(p_hwfn->nvm_info.image_att);
+- p_hwfn->nvm_info.image_att = NULL;
+-}
+-
+ static int qed_hw_prepare_single(struct qed_hwfn *p_hwfn,
+ void __iomem *p_regview,
+ void __iomem *p_doorbells,
+@@ -4556,7 +4550,7 @@ static int qed_hw_prepare_single(struct qed_hwfn *p_hwfn,
+ return rc;
+ err3:
+ if (IS_LEAD_HWFN(p_hwfn))
+- qed_nvm_info_free(p_hwfn);
++ qed_mcp_nvm_info_free(p_hwfn);
+ err2:
+ if (IS_LEAD_HWFN(p_hwfn))
+ qed_iov_free_hw_info(p_hwfn->cdev);
+@@ -4617,7 +4611,7 @@ int qed_hw_prepare(struct qed_dev *cdev,
+ if (rc) {
+ if (IS_PF(cdev)) {
+ qed_init_free(p_hwfn);
+- qed_nvm_info_free(p_hwfn);
++ qed_mcp_nvm_info_free(p_hwfn);
+ qed_mcp_free(p_hwfn);
+ qed_hw_hwfn_free(p_hwfn);
+ }
+@@ -4651,7 +4645,7 @@ void qed_hw_remove(struct qed_dev *cdev)
+
+ qed_iov_free_hw_info(cdev);
+
+- qed_nvm_info_free(p_hwfn);
++ qed_mcp_nvm_info_free(p_hwfn);
+ }
+
+ static void qed_chain_free_next_ptr(struct qed_dev *cdev,
+diff --git a/drivers/net/ethernet/qlogic/qed/qed_mcp.c b/drivers/net/ethernet/qlogic/qed/qed_mcp.c
+index 280527cc0578..99548d5b44ea 100644
+--- a/drivers/net/ethernet/qlogic/qed/qed_mcp.c
++++ b/drivers/net/ethernet/qlogic/qed/qed_mcp.c
+@@ -3151,6 +3151,13 @@ err0:
+ return rc;
+ }
+
++void qed_mcp_nvm_info_free(struct qed_hwfn *p_hwfn)
++{
++ kfree(p_hwfn->nvm_info.image_att);
++ p_hwfn->nvm_info.image_att = NULL;
++ p_hwfn->nvm_info.valid = false;
++}
++
+ int
+ qed_mcp_get_nvm_image_att(struct qed_hwfn *p_hwfn,
+ enum qed_nvm_images image_id,
+diff --git a/drivers/net/ethernet/qlogic/qed/qed_mcp.h b/drivers/net/ethernet/qlogic/qed/qed_mcp.h
+index 9c4c2763de8d..e38297383b00 100644
+--- a/drivers/net/ethernet/qlogic/qed/qed_mcp.h
++++ b/drivers/net/ethernet/qlogic/qed/qed_mcp.h
+@@ -1192,6 +1192,13 @@ void qed_mcp_read_ufp_config(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt);
+ */
+ int qed_mcp_nvm_info_populate(struct qed_hwfn *p_hwfn);
+
++/**
++ * @brief Delete nvm info shadow in the given hardware function
++ *
++ * @param p_hwfn
++ */
++void qed_mcp_nvm_info_free(struct qed_hwfn *p_hwfn);
++
+ /**
+ * @brief Get the engine affinity configuration.
+ *
+diff --git a/drivers/net/ethernet/qualcomm/rmnet/rmnet_config.c b/drivers/net/ethernet/qualcomm/rmnet/rmnet_config.c
+index 40efe60eff8d..fcdecddb2812 100644
+--- a/drivers/net/ethernet/qualcomm/rmnet/rmnet_config.c
++++ b/drivers/net/ethernet/qualcomm/rmnet/rmnet_config.c
+@@ -47,15 +47,23 @@ static int rmnet_unregister_real_device(struct net_device *real_dev)
+ return 0;
+ }
+
+-static int rmnet_register_real_device(struct net_device *real_dev)
++static int rmnet_register_real_device(struct net_device *real_dev,
++ struct netlink_ext_ack *extack)
+ {
+ struct rmnet_port *port;
+ int rc, entry;
+
+ ASSERT_RTNL();
+
+- if (rmnet_is_real_dev_registered(real_dev))
++ if (rmnet_is_real_dev_registered(real_dev)) {
++ port = rmnet_get_port_rtnl(real_dev);
++ if (port->rmnet_mode != RMNET_EPMODE_VND) {
++ NL_SET_ERR_MSG_MOD(extack, "bridge device already exists");
++ return -EINVAL;
++ }
++
+ return 0;
++ }
+
+ port = kzalloc(sizeof(*port), GFP_KERNEL);
+ if (!port)
+@@ -133,7 +141,7 @@ static int rmnet_newlink(struct net *src_net, struct net_device *dev,
+
+ mux_id = nla_get_u16(data[IFLA_RMNET_MUX_ID]);
+
+- err = rmnet_register_real_device(real_dev);
++ err = rmnet_register_real_device(real_dev, extack);
+ if (err)
+ goto err0;
+
+@@ -422,7 +430,7 @@ int rmnet_add_bridge(struct net_device *rmnet_dev,
+ }
+
+ if (port->rmnet_mode != RMNET_EPMODE_VND) {
+- NL_SET_ERR_MSG_MOD(extack, "bridge device already exists");
++ NL_SET_ERR_MSG_MOD(extack, "more than one bridge dev attached");
+ return -EINVAL;
+ }
+
+@@ -433,7 +441,7 @@ int rmnet_add_bridge(struct net_device *rmnet_dev,
+ return -EBUSY;
+ }
+
+- err = rmnet_register_real_device(slave_dev);
++ err = rmnet_register_real_device(slave_dev, extack);
+ if (err)
+ return -EBUSY;
+
+diff --git a/drivers/net/ipa/ipa_data-sdm845.c b/drivers/net/ipa/ipa_data-sdm845.c
+index 0d9c36e1e806..0917c5b028f6 100644
+--- a/drivers/net/ipa/ipa_data-sdm845.c
++++ b/drivers/net/ipa/ipa_data-sdm845.c
+@@ -44,7 +44,6 @@ static const struct ipa_gsi_endpoint_data ipa_gsi_endpoint_data[] = {
+ .endpoint = {
+ .seq_type = IPA_SEQ_INVALID,
+ .config = {
+- .checksum = true,
+ .aggregation = true,
+ .status_enable = true,
+ .rx = {
+diff --git a/drivers/net/ipa/ipa_qmi_msg.c b/drivers/net/ipa/ipa_qmi_msg.c
+index 03a1d0e55964..73413371e3d3 100644
+--- a/drivers/net/ipa/ipa_qmi_msg.c
++++ b/drivers/net/ipa/ipa_qmi_msg.c
+@@ -119,7 +119,7 @@ struct qmi_elem_info ipa_driver_init_complete_rsp_ei[] = {
+ sizeof_field(struct ipa_driver_init_complete_rsp,
+ rsp),
+ .tlv_type = 0x02,
+- .elem_size = offsetof(struct ipa_driver_init_complete_rsp,
++ .offset = offsetof(struct ipa_driver_init_complete_rsp,
+ rsp),
+ .ei_array = qmi_response_type_v01_ei,
+ },
+@@ -137,7 +137,7 @@ struct qmi_elem_info ipa_init_complete_ind_ei[] = {
+ sizeof_field(struct ipa_init_complete_ind,
+ status),
+ .tlv_type = 0x02,
+- .elem_size = offsetof(struct ipa_init_complete_ind,
++ .offset = offsetof(struct ipa_init_complete_ind,
+ status),
+ .ei_array = qmi_response_type_v01_ei,
+ },
+@@ -218,7 +218,7 @@ struct qmi_elem_info ipa_init_modem_driver_req_ei[] = {
+ sizeof_field(struct ipa_init_modem_driver_req,
+ platform_type_valid),
+ .tlv_type = 0x10,
+- .elem_size = offsetof(struct ipa_init_modem_driver_req,
++ .offset = offsetof(struct ipa_init_modem_driver_req,
+ platform_type_valid),
+ },
+ {
+diff --git a/drivers/net/usb/smsc95xx.c b/drivers/net/usb/smsc95xx.c
+index 3cf4dc3433f9..bb4ccbda031a 100644
+--- a/drivers/net/usb/smsc95xx.c
++++ b/drivers/net/usb/smsc95xx.c
+@@ -1287,11 +1287,14 @@ static int smsc95xx_bind(struct usbnet *dev, struct usb_interface *intf)
+
+ /* Init all registers */
+ ret = smsc95xx_reset(dev);
++ if (ret)
++ goto free_pdata;
+
+ /* detect device revision as different features may be available */
+ ret = smsc95xx_read_reg(dev, ID_REV, &val);
+ if (ret < 0)
+- return ret;
++ goto free_pdata;
++
+ val >>= 16;
+ pdata->chip_id = val;
+ pdata->mdix_ctrl = get_mdix_status(dev->net);
+@@ -1317,6 +1320,10 @@ static int smsc95xx_bind(struct usbnet *dev, struct usb_interface *intf)
+ schedule_delayed_work(&pdata->carrier_check, CARRIER_CHECK_DELAY);
+
+ return 0;
++
++free_pdata:
++ kfree(pdata);
++ return ret;
+ }
+
+ static void smsc95xx_unbind(struct usbnet *dev, struct usb_interface *intf)
+diff --git a/drivers/net/wireless/ath/ath9k/hif_usb.c b/drivers/net/wireless/ath/ath9k/hif_usb.c
+index 4ed21dad6a8e..6049d3766c64 100644
+--- a/drivers/net/wireless/ath/ath9k/hif_usb.c
++++ b/drivers/net/wireless/ath/ath9k/hif_usb.c
+@@ -643,9 +643,9 @@ err:
+
+ static void ath9k_hif_usb_rx_cb(struct urb *urb)
+ {
+- struct rx_buf *rx_buf = (struct rx_buf *)urb->context;
+- struct hif_device_usb *hif_dev = rx_buf->hif_dev;
+- struct sk_buff *skb = rx_buf->skb;
++ struct sk_buff *skb = (struct sk_buff *) urb->context;
++ struct hif_device_usb *hif_dev =
++ usb_get_intfdata(usb_ifnum_to_if(urb->dev, 0));
+ int ret;
+
+ if (!skb)
+@@ -685,15 +685,14 @@ resubmit:
+ return;
+ free:
+ kfree_skb(skb);
+- kfree(rx_buf);
+ }
+
+ static void ath9k_hif_usb_reg_in_cb(struct urb *urb)
+ {
+- struct rx_buf *rx_buf = (struct rx_buf *)urb->context;
+- struct hif_device_usb *hif_dev = rx_buf->hif_dev;
+- struct sk_buff *skb = rx_buf->skb;
++ struct sk_buff *skb = (struct sk_buff *) urb->context;
+ struct sk_buff *nskb;
++ struct hif_device_usb *hif_dev =
++ usb_get_intfdata(usb_ifnum_to_if(urb->dev, 0));
+ int ret;
+
+ if (!skb)
+@@ -751,7 +750,6 @@ resubmit:
+ return;
+ free:
+ kfree_skb(skb);
+- kfree(rx_buf);
+ urb->context = NULL;
+ }
+
+@@ -797,7 +795,7 @@ static int ath9k_hif_usb_alloc_tx_urbs(struct hif_device_usb *hif_dev)
+ init_usb_anchor(&hif_dev->mgmt_submitted);
+
+ for (i = 0; i < MAX_TX_URB_NUM; i++) {
+- tx_buf = kzalloc(sizeof(*tx_buf), GFP_KERNEL);
++ tx_buf = kzalloc(sizeof(struct tx_buf), GFP_KERNEL);
+ if (!tx_buf)
+ goto err;
+
+@@ -834,9 +832,8 @@ static void ath9k_hif_usb_dealloc_rx_urbs(struct hif_device_usb *hif_dev)
+
+ static int ath9k_hif_usb_alloc_rx_urbs(struct hif_device_usb *hif_dev)
+ {
+- struct rx_buf *rx_buf = NULL;
+- struct sk_buff *skb = NULL;
+ struct urb *urb = NULL;
++ struct sk_buff *skb = NULL;
+ int i, ret;
+
+ init_usb_anchor(&hif_dev->rx_submitted);
+@@ -844,12 +841,6 @@ static int ath9k_hif_usb_alloc_rx_urbs(struct hif_device_usb *hif_dev)
+
+ for (i = 0; i < MAX_RX_URB_NUM; i++) {
+
+- rx_buf = kzalloc(sizeof(*rx_buf), GFP_KERNEL);
+- if (!rx_buf) {
+- ret = -ENOMEM;
+- goto err_rxb;
+- }
+-
+ /* Allocate URB */
+ urb = usb_alloc_urb(0, GFP_KERNEL);
+ if (urb == NULL) {
+@@ -864,14 +855,11 @@ static int ath9k_hif_usb_alloc_rx_urbs(struct hif_device_usb *hif_dev)
+ goto err_skb;
+ }
+
+- rx_buf->hif_dev = hif_dev;
+- rx_buf->skb = skb;
+-
+ usb_fill_bulk_urb(urb, hif_dev->udev,
+ usb_rcvbulkpipe(hif_dev->udev,
+ USB_WLAN_RX_PIPE),
+ skb->data, MAX_RX_BUF_SIZE,
+- ath9k_hif_usb_rx_cb, rx_buf);
++ ath9k_hif_usb_rx_cb, skb);
+
+ /* Anchor URB */
+ usb_anchor_urb(urb, &hif_dev->rx_submitted);
+@@ -897,8 +885,6 @@ err_submit:
+ err_skb:
+ usb_free_urb(urb);
+ err_urb:
+- kfree(rx_buf);
+-err_rxb:
+ ath9k_hif_usb_dealloc_rx_urbs(hif_dev);
+ return ret;
+ }
+@@ -910,21 +896,14 @@ static void ath9k_hif_usb_dealloc_reg_in_urbs(struct hif_device_usb *hif_dev)
+
+ static int ath9k_hif_usb_alloc_reg_in_urbs(struct hif_device_usb *hif_dev)
+ {
+- struct rx_buf *rx_buf = NULL;
+- struct sk_buff *skb = NULL;
+ struct urb *urb = NULL;
++ struct sk_buff *skb = NULL;
+ int i, ret;
+
+ init_usb_anchor(&hif_dev->reg_in_submitted);
+
+ for (i = 0; i < MAX_REG_IN_URB_NUM; i++) {
+
+- rx_buf = kzalloc(sizeof(*rx_buf), GFP_KERNEL);
+- if (!rx_buf) {
+- ret = -ENOMEM;
+- goto err_rxb;
+- }
+-
+ /* Allocate URB */
+ urb = usb_alloc_urb(0, GFP_KERNEL);
+ if (urb == NULL) {
+@@ -939,14 +918,11 @@ static int ath9k_hif_usb_alloc_reg_in_urbs(struct hif_device_usb *hif_dev)
+ goto err_skb;
+ }
+
+- rx_buf->hif_dev = hif_dev;
+- rx_buf->skb = skb;
+-
+ usb_fill_int_urb(urb, hif_dev->udev,
+ usb_rcvintpipe(hif_dev->udev,
+ USB_REG_IN_PIPE),
+ skb->data, MAX_REG_IN_BUF_SIZE,
+- ath9k_hif_usb_reg_in_cb, rx_buf, 1);
++ ath9k_hif_usb_reg_in_cb, skb, 1);
+
+ /* Anchor URB */
+ usb_anchor_urb(urb, &hif_dev->reg_in_submitted);
+@@ -972,8 +948,6 @@ err_submit:
+ err_skb:
+ usb_free_urb(urb);
+ err_urb:
+- kfree(rx_buf);
+-err_rxb:
+ ath9k_hif_usb_dealloc_reg_in_urbs(hif_dev);
+ return ret;
+ }
+diff --git a/drivers/net/wireless/ath/ath9k/hif_usb.h b/drivers/net/wireless/ath/ath9k/hif_usb.h
+index 5985aa15ca93..a94e7e1c86e9 100644
+--- a/drivers/net/wireless/ath/ath9k/hif_usb.h
++++ b/drivers/net/wireless/ath/ath9k/hif_usb.h
+@@ -86,11 +86,6 @@ struct tx_buf {
+ struct list_head list;
+ };
+
+-struct rx_buf {
+- struct sk_buff *skb;
+- struct hif_device_usb *hif_dev;
+-};
+-
+ #define HIF_USB_TX_STOP BIT(0)
+ #define HIF_USB_TX_FLUSH BIT(1)
+
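The ath9k hunks above revert to passing the `sk_buff` itself as the URB completion context and recovering the owning device from the USB interface, dropping the intermediate `rx_buf` wrapper. A toy sketch of that context-passing shape (structs and the lookup are stand-ins, not the real USB API):

```c
#include <assert.h>
#include <stddef.h>

/* Stand-ins for sk_buff and hif_device_usb. */
struct skb { int len; };
struct device { struct skb *current_rx; };

/* Global lookup standing in for usb_get_intfdata(usb_ifnum_to_if(...)). */
static struct device *registered_dev;
static struct device *lookup_dev(void) { return registered_dev; }

/* Completion callback: the context is the payload itself; the owner
 * is derived rather than carried in an extra heap allocation. */
static void rx_cb(void *context)
{
    struct skb *skb = (struct skb *)context;
    struct device *dev = lookup_dev();

    if (!skb || !dev)
        return;
    dev->current_rx = skb;
}
```

The trade-off shown in the revert: one fewer allocation per URB (and no `kfree(rx_buf)` on every error path), at the cost of deriving the device pointer in the callback.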
+diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
+index cac8a930396a..1f9a45145d0d 100644
+--- a/drivers/nvme/host/rdma.c
++++ b/drivers/nvme/host/rdma.c
+@@ -443,7 +443,7 @@ static int nvme_rdma_create_queue_ib(struct nvme_rdma_queue *queue)
+ * Spread I/O queues completion vectors according their queue index.
+ * Admin queues can always go on completion vector 0.
+ */
+- comp_vector = idx == 0 ? idx : idx - 1;
++ comp_vector = (idx == 0 ? idx : idx - 1) % ibdev->num_comp_vectors;
+
+ /* Polling queues need direct cq polling context */
+ if (nvme_rdma_poll_queue(queue))
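The nvme-rdma fix above wraps the derived completion-vector index with a modulo so it can never exceed what the device exposes. The mapping is small enough to sketch directly:

```c
#include <assert.h>

/* Map a queue index to a completion vector: the admin queue (idx 0)
 * shares vector 0 with the first I/O queue, and the result is wrapped
 * so it never exceeds the vectors the device actually provides. */
static int comp_vector(int idx, int num_comp_vectors)
{
    return (idx == 0 ? idx : idx - 1) % num_comp_vectors;
}
```

Without the modulo, a controller configured with more queues than `num_comp_vectors` would ask for a vector the RDMA device does not have.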
+diff --git a/drivers/pinctrl/intel/pinctrl-baytrail.c b/drivers/pinctrl/intel/pinctrl-baytrail.c
+index 9b821c9cbd16..b033f9d13fb4 100644
+--- a/drivers/pinctrl/intel/pinctrl-baytrail.c
++++ b/drivers/pinctrl/intel/pinctrl-baytrail.c
+@@ -800,6 +800,21 @@ static void byt_gpio_disable_free(struct pinctrl_dev *pctl_dev,
+ pm_runtime_put(vg->dev);
+ }
+
++static void byt_gpio_direct_irq_check(struct intel_pinctrl *vg,
++ unsigned int offset)
++{
++ void __iomem *conf_reg = byt_gpio_reg(vg, offset, BYT_CONF0_REG);
++
++ /*
++ * Before making any direction modifications, do a check if gpio is set
++ * for direct IRQ. On Bay Trail, setting GPIO to output does not make
++ * sense, so let's at least inform the caller before they shoot
++ * themselves in the foot.
++ */
++ if (readl(conf_reg) & BYT_DIRECT_IRQ_EN)
++ dev_info_once(vg->dev, "Potential Error: Setting GPIO with direct_irq_en to output");
++}
++
+ static int byt_gpio_set_direction(struct pinctrl_dev *pctl_dev,
+ struct pinctrl_gpio_range *range,
+ unsigned int offset,
+@@ -807,7 +822,6 @@ static int byt_gpio_set_direction(struct pinctrl_dev *pctl_dev,
+ {
+ struct intel_pinctrl *vg = pinctrl_dev_get_drvdata(pctl_dev);
+ void __iomem *val_reg = byt_gpio_reg(vg, offset, BYT_VAL_REG);
+- void __iomem *conf_reg = byt_gpio_reg(vg, offset, BYT_CONF0_REG);
+ unsigned long flags;
+ u32 value;
+
+@@ -817,14 +831,8 @@ static int byt_gpio_set_direction(struct pinctrl_dev *pctl_dev,
+ value &= ~BYT_DIR_MASK;
+ if (input)
+ value |= BYT_OUTPUT_EN;
+- else if (readl(conf_reg) & BYT_DIRECT_IRQ_EN)
+- /*
+- * Before making any direction modifications, do a check if gpio
+- * is set for direct IRQ. On baytrail, setting GPIO to output
+- * does not make sense, so let's at least inform the caller before
+- * they shoot themselves in the foot.
+- */
+- dev_info_once(vg->dev, "Potential Error: Setting GPIO with direct_irq_en to output");
++ else
++ byt_gpio_direct_irq_check(vg, offset);
+
+ writel(value, val_reg);
+
+@@ -1165,19 +1173,50 @@ static int byt_gpio_get_direction(struct gpio_chip *chip, unsigned int offset)
+
+ static int byt_gpio_direction_input(struct gpio_chip *chip, unsigned int offset)
+ {
+- return pinctrl_gpio_direction_input(chip->base + offset);
++ struct intel_pinctrl *vg = gpiochip_get_data(chip);
++ void __iomem *val_reg = byt_gpio_reg(vg, offset, BYT_VAL_REG);
++ unsigned long flags;
++ u32 reg;
++
++ raw_spin_lock_irqsave(&byt_lock, flags);
++
++ reg = readl(val_reg);
++ reg &= ~BYT_DIR_MASK;
++ reg |= BYT_OUTPUT_EN;
++ writel(reg, val_reg);
++
++ raw_spin_unlock_irqrestore(&byt_lock, flags);
++ return 0;
+ }
+
++/*
++ * Note despite the temptation this MUST NOT be converted into a call to
++ * pinctrl_gpio_direction_output() + byt_gpio_set() that does not work this
++ * MUST be done as a single BYT_VAL_REG register write.
++ * See the commit message of the commit adding this comment for details.
++ */
+ static int byt_gpio_direction_output(struct gpio_chip *chip,
+ unsigned int offset, int value)
+ {
+- int ret = pinctrl_gpio_direction_output(chip->base + offset);
++ struct intel_pinctrl *vg = gpiochip_get_data(chip);
++ void __iomem *val_reg = byt_gpio_reg(vg, offset, BYT_VAL_REG);
++ unsigned long flags;
++ u32 reg;
+
+- if (ret)
+- return ret;
++ raw_spin_lock_irqsave(&byt_lock, flags);
++
++ byt_gpio_direct_irq_check(vg, offset);
+
+- byt_gpio_set(chip, offset, value);
++ reg = readl(val_reg);
++ reg &= ~BYT_DIR_MASK;
++ if (value)
++ reg |= BYT_LEVEL;
++ else
++ reg &= ~BYT_LEVEL;
+
++ writel(reg, val_reg);
++
++ raw_spin_unlock_irqrestore(&byt_lock, flags);
+ return 0;
+ }
+
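The comment added in the Bay Trail hunk stresses that direction and initial level must land in `BYT_VAL_REG` with a single write. A sketch of that read-modify-write shape (the bit values here are illustrative, not the real register layout):

```c
#include <assert.h>
#include <stdint.h>

#define BYT_DIR_MASK  0x3u   /* illustrative values, not the real ones */
#define BYT_LEVEL     0x4u

/* Set direction to output and the initial level in one register write,
 * mirroring the single BYT_VAL_REG update the patch insists on. */
static void direction_output(volatile uint32_t *val_reg, int value)
{
    uint32_t reg = *val_reg;

    reg &= ~BYT_DIR_MASK;            /* output: direction bits cleared */
    if (value)
        reg |= BYT_LEVEL;
    else
        reg &= ~BYT_LEVEL;
    *val_reg = reg;                  /* one write: no glitch window */
}
```

Splitting this into "set direction" then "set level" would briefly drive the old level on the pin, which is exactly what the commit comment warns against.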
+diff --git a/drivers/scsi/qla2xxx/qla_attr.c b/drivers/scsi/qla2xxx/qla_attr.c
+index 2c9e5ac24692..c4917f441b10 100644
+--- a/drivers/scsi/qla2xxx/qla_attr.c
++++ b/drivers/scsi/qla2xxx/qla_attr.c
+@@ -26,7 +26,8 @@ qla2x00_sysfs_read_fw_dump(struct file *filp, struct kobject *kobj,
+ struct qla_hw_data *ha = vha->hw;
+ int rval = 0;
+
+- if (!(ha->fw_dump_reading || ha->mctp_dump_reading))
++ if (!(ha->fw_dump_reading || ha->mctp_dump_reading ||
++ ha->mpi_fw_dump_reading))
+ return 0;
+
+ mutex_lock(&ha->optrom_mutex);
+@@ -42,6 +43,10 @@ qla2x00_sysfs_read_fw_dump(struct file *filp, struct kobject *kobj,
+ } else if (ha->mctp_dumped && ha->mctp_dump_reading) {
+ rval = memory_read_from_buffer(buf, count, &off, ha->mctp_dump,
+ MCTP_DUMP_SIZE);
++ } else if (ha->mpi_fw_dumped && ha->mpi_fw_dump_reading) {
++ rval = memory_read_from_buffer(buf, count, &off,
++ ha->mpi_fw_dump,
++ ha->mpi_fw_dump_len);
+ } else if (ha->fw_dump_reading) {
+ rval = memory_read_from_buffer(buf, count, &off, ha->fw_dump,
+ ha->fw_dump_len);
+@@ -103,7 +108,6 @@ qla2x00_sysfs_write_fw_dump(struct file *filp, struct kobject *kobj,
+ qla82xx_set_reset_owner(vha);
+ qla8044_idc_unlock(ha);
+ } else {
+- ha->fw_dump_mpi = 1;
+ qla2x00_system_error(vha);
+ }
+ break;
+@@ -137,6 +141,22 @@ qla2x00_sysfs_write_fw_dump(struct file *filp, struct kobject *kobj,
+ vha->host_no);
+ }
+ break;
++ case 8:
++ if (!ha->mpi_fw_dump_reading)
++ break;
++ ql_log(ql_log_info, vha, 0x70e7,
++ "MPI firmware dump cleared on (%ld).\n", vha->host_no);
++ ha->mpi_fw_dump_reading = 0;
++ ha->mpi_fw_dumped = 0;
++ break;
++ case 9:
++ if (ha->mpi_fw_dumped && !ha->mpi_fw_dump_reading) {
++ ha->mpi_fw_dump_reading = 1;
++ ql_log(ql_log_info, vha, 0x70e8,
++ "Raw MPI firmware dump ready for read on (%ld).\n",
++ vha->host_no);
++ }
++ break;
+ }
+ return count;
+ }
+@@ -706,7 +726,8 @@ qla2x00_sysfs_write_reset(struct file *filp, struct kobject *kobj,
+ scsi_unblock_requests(vha->host);
+ break;
+ case 0x2025d:
+- if (!IS_QLA81XX(ha) && !IS_QLA83XX(ha))
++ if (!IS_QLA81XX(ha) && !IS_QLA83XX(ha) &&
++ !IS_QLA27XX(ha) && !IS_QLA28XX(ha))
+ return -EPERM;
+
+ ql_log(ql_log_info, vha, 0x706f,
+@@ -724,6 +745,8 @@ qla2x00_sysfs_write_reset(struct file *filp, struct kobject *kobj,
+ qla83xx_idc_audit(vha, IDC_AUDIT_TIMESTAMP);
+ qla83xx_idc_unlock(vha, 0);
+ break;
++ } else if (IS_QLA27XX(ha) || IS_QLA28XX(ha)) {
++ qla27xx_reset_mpi(vha);
+ } else {
+ /* Make sure FC side is not in reset */
+ WARN_ON_ONCE(qla2x00_wait_for_hba_online(vha) !=
+@@ -737,6 +760,7 @@ qla2x00_sysfs_write_reset(struct file *filp, struct kobject *kobj,
+ scsi_unblock_requests(vha->host);
+ break;
+ }
++ break;
+ case 0x2025e:
+ if (!IS_P3P_TYPE(ha) || vha != base_vha) {
+ ql_log(ql_log_info, vha, 0x7071,
+diff --git a/drivers/scsi/qla2xxx/qla_def.h b/drivers/scsi/qla2xxx/qla_def.h
+index 47c7a56438b5..daa9e936887b 100644
+--- a/drivers/scsi/qla2xxx/qla_def.h
++++ b/drivers/scsi/qla2xxx/qla_def.h
+@@ -3223,6 +3223,7 @@ struct isp_operations {
+ uint32_t);
+
+ void (*fw_dump) (struct scsi_qla_host *, int);
++ void (*mpi_fw_dump)(struct scsi_qla_host *, int);
+
+ int (*beacon_on) (struct scsi_qla_host *);
+ int (*beacon_off) (struct scsi_qla_host *);
+@@ -3748,6 +3749,11 @@ struct qlt_hw_data {
+
+ #define LEAK_EXCHG_THRESH_HOLD_PERCENT 75 /* 75 percent */
+
++struct qla_hw_data_stat {
++ u32 num_fw_dump;
++ u32 num_mpi_reset;
++};
++
+ /*
+ * Qlogic host adapter specific data structure.
+ */
+@@ -4230,7 +4236,6 @@ struct qla_hw_data {
+ uint32_t fw_dump_len;
+ u32 fw_dump_alloc_len;
+ bool fw_dumped;
+- bool fw_dump_mpi;
+ unsigned long fw_dump_cap_flags;
+ #define RISC_PAUSE_CMPL 0
+ #define DMA_SHUTDOWN_CMPL 1
+@@ -4241,6 +4246,10 @@ struct qla_hw_data {
+ #define ISP_MBX_RDY 6
+ #define ISP_SOFT_RESET_CMPL 7
+ int fw_dump_reading;
++ void *mpi_fw_dump;
++ u32 mpi_fw_dump_len;
++ int mpi_fw_dump_reading:1;
++ int mpi_fw_dumped:1;
+ int prev_minidump_failed;
+ dma_addr_t eft_dma;
+ void *eft;
+@@ -4454,6 +4463,8 @@ struct qla_hw_data {
+ uint16_t last_zio_threshold;
+
+ #define DEFAULT_ZIO_THRESHOLD 5
++
++ struct qla_hw_data_stat stat;
+ };
+
+ struct active_regions {
+diff --git a/drivers/scsi/qla2xxx/qla_gbl.h b/drivers/scsi/qla2xxx/qla_gbl.h
+index 1b93f5b4d77d..b20c5fa122fb 100644
+--- a/drivers/scsi/qla2xxx/qla_gbl.h
++++ b/drivers/scsi/qla2xxx/qla_gbl.h
+@@ -173,6 +173,7 @@ extern int ql2xenablemsix;
+ extern int qla2xuseresexchforels;
+ extern int ql2xexlogins;
+ extern int ql2xdifbundlinginternalbuffers;
++extern int ql2xfulldump_on_mpifail;
+
+ extern int qla2x00_loop_reset(scsi_qla_host_t *);
+ extern void qla2x00_abort_all_cmds(scsi_qla_host_t *, int);
+@@ -645,6 +646,7 @@ extern void qla82xx_fw_dump(scsi_qla_host_t *, int);
+ extern void qla8044_fw_dump(scsi_qla_host_t *, int);
+
+ extern void qla27xx_fwdump(scsi_qla_host_t *, int);
++extern void qla27xx_mpi_fwdump(scsi_qla_host_t *, int);
+ extern ulong qla27xx_fwdt_calculate_dump_size(struct scsi_qla_host *, void *);
+ extern int qla27xx_fwdt_template_valid(void *);
+ extern ulong qla27xx_fwdt_template_size(void *);
+@@ -933,5 +935,6 @@ extern void qla24xx_process_purex_list(struct purex_list *);
+
+ /* nvme.c */
+ void qla_nvme_unregister_remote_port(struct fc_port *fcport);
++void qla27xx_reset_mpi(scsi_qla_host_t *vha);
+ void qla_handle_els_plogi_done(scsi_qla_host_t *vha, struct event_arg *ea);
+ #endif /* _QLA_GBL_H */
+diff --git a/drivers/scsi/qla2xxx/qla_init.c b/drivers/scsi/qla2xxx/qla_init.c
+index cfbb4294fb8b..53686246f566 100644
+--- a/drivers/scsi/qla2xxx/qla_init.c
++++ b/drivers/scsi/qla2xxx/qla_init.c
+@@ -3339,6 +3339,8 @@ qla2x00_alloc_fw_dump(scsi_qla_host_t *vha)
+ dump_size / 1024);
+
+ if (IS_QLA27XX(ha) || IS_QLA28XX(ha)) {
++ ha->mpi_fw_dump = (char *)fw_dump +
++ ha->fwdt[1].dump_size;
+ mutex_unlock(&ha->optrom_mutex);
+ return;
+ }
+diff --git a/drivers/scsi/qla2xxx/qla_isr.c b/drivers/scsi/qla2xxx/qla_isr.c
+index 8a78d395bbc8..4d9ec7ee59cc 100644
+--- a/drivers/scsi/qla2xxx/qla_isr.c
++++ b/drivers/scsi/qla2xxx/qla_isr.c
+@@ -756,6 +756,39 @@ qla2x00_find_fcport_by_nportid(scsi_qla_host_t *vha, port_id_t *id,
+ return NULL;
+ }
+
++/* Shall be called only on supported adapters. */
++static void
++qla27xx_handle_8200_aen(scsi_qla_host_t *vha, uint16_t *mb)
++{
++ struct qla_hw_data *ha = vha->hw;
++ bool reset_isp_needed = 0;
++
++ ql_log(ql_log_warn, vha, 0x02f0,
++ "MPI Heartbeat stop. MPI reset is%s needed. "
++ "MB0[%xh] MB1[%xh] MB2[%xh] MB3[%xh]\n",
++ mb[0] & BIT_8 ? "" : " not",
++ mb[0], mb[1], mb[2], mb[3]);
++
++ if ((mb[1] & BIT_8) == 0)
++ return;
++
++ ql_log(ql_log_warn, vha, 0x02f1,
++ "MPI Heartbeat stop. FW dump needed\n");
++
++ if (ql2xfulldump_on_mpifail) {
++ ha->isp_ops->fw_dump(vha, 1);
++ reset_isp_needed = 1;
++ }
++
++ ha->isp_ops->mpi_fw_dump(vha, 1);
++
++ if (reset_isp_needed) {
++ vha->hw->flags.fw_init_done = 0;
++ set_bit(ISP_ABORT_NEEDED, &vha->dpc_flags);
++ qla2xxx_wake_dpc(vha);
++ }
++}
++
+ /**
+ * qla2x00_async_event() - Process aynchronous events.
+ * @vha: SCSI driver HA context
+@@ -871,9 +904,9 @@ skip_rio:
+ "ISP System Error - mbx1=%xh mbx2=%xh mbx3=%xh.\n ",
+ mb[1], mb[2], mb[3]);
+
+- ha->fw_dump_mpi =
+- (IS_QLA27XX(ha) || IS_QLA28XX(ha)) &&
+- RD_REG_WORD(&reg24->mailbox7) & BIT_8;
++ if ((IS_QLA27XX(ha) || IS_QLA28XX(ha)) &&
++ RD_REG_WORD(&reg24->mailbox7) & BIT_8)
++ ha->isp_ops->mpi_fw_dump(vha, 1);
+ ha->isp_ops->fw_dump(vha, 1);
+ ha->flags.fw_init_done = 0;
+ QLA_FW_STOPPED(ha);
+@@ -1374,20 +1407,7 @@ global_port_update:
+
+ case MBA_IDC_AEN:
+ if (IS_QLA27XX(ha) || IS_QLA28XX(ha)) {
+- ha->flags.fw_init_done = 0;
+- ql_log(ql_log_warn, vha, 0xffff,
+- "MPI Heartbeat stop. Chip reset needed. MB0[%xh] MB1[%xh] MB2[%xh] MB3[%xh]\n",
+- mb[0], mb[1], mb[2], mb[3]);
+-
+- if ((mb[1] & BIT_8) ||
+- (mb[2] & BIT_8)) {
+- ql_log(ql_log_warn, vha, 0xd013,
+- "MPI Heartbeat stop. FW dump needed\n");
+- ha->fw_dump_mpi = 1;
+- ha->isp_ops->fw_dump(vha, 1);
+- }
+- set_bit(ISP_ABORT_NEEDED, &vha->dpc_flags);
+- qla2xxx_wake_dpc(vha);
++ qla27xx_handle_8200_aen(vha, mb);
+ } else if (IS_QLA83XX(ha)) {
+ mb[4] = RD_REG_WORD(&reg24->mailbox4);
+ mb[5] = RD_REG_WORD(&reg24->mailbox5);
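The new `qla27xx_handle_8200_aen()` above consolidates the MPI heartbeat handling: always take the MPI dump when the mailbox bit says one is needed, and take the full dump plus ISP reset only when `ql2xfulldump_on_mpifail` is set. A condensed sketch of that decision logic (counters stand in for the real dump and DPC machinery):

```c
#include <assert.h>
#include <stdint.h>

#define BIT_8 (1u << 8)

static int mpi_dumps, full_dumps, resets_requested;

/* Condensed 8200 AEN handler: MPI dump whenever mb1 flags it, full
 * dump and reset only when the module parameter asks for it. */
static void handle_8200_aen(uint16_t mb1, int fulldump_on_mpifail)
{
    if (!(mb1 & BIT_8))
        return;                 /* heartbeat stop, but no dump needed */
    if (fulldump_on_mpifail) {
        full_dumps++;           /* ha->isp_ops->fw_dump(vha, 1) */
        resets_requested++;     /* ISP_ABORT_NEEDED + wake DPC */
    }
    mpi_dumps++;                /* ha->isp_ops->mpi_fw_dump(vha, 1) */
}
```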
+diff --git a/drivers/scsi/qla2xxx/qla_os.c b/drivers/scsi/qla2xxx/qla_os.c
+index 9179bb4caed8..1120d133204c 100644
+--- a/drivers/scsi/qla2xxx/qla_os.c
++++ b/drivers/scsi/qla2xxx/qla_os.c
+@@ -35,6 +35,11 @@ static int apidev_major;
+ */
+ struct kmem_cache *srb_cachep;
+
++int ql2xfulldump_on_mpifail;
++module_param(ql2xfulldump_on_mpifail, int, S_IRUGO | S_IWUSR);
++MODULE_PARM_DESC(ql2xfulldump_on_mpifail,
++ "Set this to take full dump on MPI hang.");
++
+ /*
+ * CT6 CTX allocation cache
+ */
+@@ -2518,6 +2523,7 @@ static struct isp_operations qla27xx_isp_ops = {
+ .read_nvram = NULL,
+ .write_nvram = NULL,
+ .fw_dump = qla27xx_fwdump,
++ .mpi_fw_dump = qla27xx_mpi_fwdump,
+ .beacon_on = qla24xx_beacon_on,
+ .beacon_off = qla24xx_beacon_off,
+ .beacon_blink = qla83xx_beacon_blink,
+diff --git a/drivers/scsi/qla2xxx/qla_tmpl.c b/drivers/scsi/qla2xxx/qla_tmpl.c
+index 6aeb1c3fb7a8..342363862434 100644
+--- a/drivers/scsi/qla2xxx/qla_tmpl.c
++++ b/drivers/scsi/qla2xxx/qla_tmpl.c
+@@ -12,6 +12,33 @@
+ #define IOBASE(vha) IOBAR(ISPREG(vha))
+ #define INVALID_ENTRY ((struct qla27xx_fwdt_entry *)0xffffffffffffffffUL)
+
++/* hardware_lock assumed held. */
++static void
++qla27xx_write_remote_reg(struct scsi_qla_host *vha,
++ u32 addr, u32 data)
++{
++ char *reg = (char *)ISPREG(vha);
++
++ ql_dbg(ql_dbg_misc, vha, 0xd300,
++ "%s: addr/data = %xh/%xh\n", __func__, addr, data);
++
++ WRT_REG_DWORD(reg + IOBASE(vha), 0x40);
++ WRT_REG_DWORD(reg + 0xc4, data);
++ WRT_REG_DWORD(reg + 0xc0, addr);
++}
++
++void
++qla27xx_reset_mpi(scsi_qla_host_t *vha)
++{
++ ql_dbg(ql_dbg_misc + ql_dbg_verbose, vha, 0xd301,
++ "Entered %s.\n", __func__);
++
++ qla27xx_write_remote_reg(vha, 0x104050, 0x40004);
++ qla27xx_write_remote_reg(vha, 0x10405c, 0x4);
++
++ vha->hw->stat.num_mpi_reset++;
++}
++
+ static inline void
+ qla27xx_insert16(uint16_t value, void *buf, ulong *len)
+ {
+@@ -997,6 +1024,62 @@ qla27xx_fwdt_template_valid(void *p)
+ return true;
+ }
+
++void
++qla27xx_mpi_fwdump(scsi_qla_host_t *vha, int hardware_locked)
++{
++ ulong flags = 0;
++ bool need_mpi_reset = 1;
++
++#ifndef __CHECKER__
++ if (!hardware_locked)
++ spin_lock_irqsave(&vha->hw->hardware_lock, flags);
++#endif
++ if (!vha->hw->mpi_fw_dump) {
++ ql_log(ql_log_warn, vha, 0x02f3, "-> mpi_fwdump no buffer\n");
++ } else if (vha->hw->mpi_fw_dumped) {
++ ql_log(ql_log_warn, vha, 0x02f4,
++ "-> MPI firmware already dumped (%p) -- ignoring request\n",
++ vha->hw->mpi_fw_dump);
++ } else {
++ struct fwdt *fwdt = &vha->hw->fwdt[1];
++ ulong len;
++ void *buf = vha->hw->mpi_fw_dump;
++
++ ql_log(ql_log_warn, vha, 0x02f5, "-> fwdt1 running...\n");
++ if (!fwdt->template) {
++ ql_log(ql_log_warn, vha, 0x02f6,
++ "-> fwdt1 no template\n");
++ goto bailout;
++ }
++ len = qla27xx_execute_fwdt_template(vha, fwdt->template, buf);
++ if (len == 0) {
++ goto bailout;
++ } else if (len != fwdt->dump_size) {
++ ql_log(ql_log_warn, vha, 0x02f7,
++ "-> fwdt1 fwdump residual=%+ld\n",
++ fwdt->dump_size - len);
++ } else {
++ need_mpi_reset = 0;
++ }
++
++ vha->hw->mpi_fw_dump_len = len;
++ vha->hw->mpi_fw_dumped = 1;
++
++ ql_log(ql_log_warn, vha, 0x02f8,
++ "-> MPI firmware dump saved to buffer (%lu/%p)\n",
++ vha->host_no, vha->hw->mpi_fw_dump);
++ qla2x00_post_uevent_work(vha, QLA_UEVENT_CODE_FW_DUMP);
++ }
++
++bailout:
++ if (need_mpi_reset)
++ qla27xx_reset_mpi(vha);
++#ifndef __CHECKER__
++ if (!hardware_locked)
++ spin_unlock_irqrestore(&vha->hw->hardware_lock, flags);
++#endif
++}
++
+ void
+ qla27xx_fwdump(scsi_qla_host_t *vha, int hardware_locked)
+ {
+@@ -1015,30 +1098,25 @@ qla27xx_fwdump(scsi_qla_host_t *vha, int hardware_locked)
+ vha->hw->fw_dump);
+ } else {
+ struct fwdt *fwdt = vha->hw->fwdt;
+- uint j;
+ ulong len;
+ void *buf = vha->hw->fw_dump;
+- uint count = vha->hw->fw_dump_mpi ? 2 : 1;
+-
+- for (j = 0; j < count; j++, fwdt++, buf += len) {
+- ql_log(ql_log_warn, vha, 0xd011,
+- "-> fwdt%u running...\n", j);
+- if (!fwdt->template) {
+- ql_log(ql_log_warn, vha, 0xd012,
+- "-> fwdt%u no template\n", j);
+- break;
+- }
+- len = qla27xx_execute_fwdt_template(vha,
+- fwdt->template, buf);
+- if (len == 0) {
+- goto bailout;
+- } else if (len != fwdt->dump_size) {
+- ql_log(ql_log_warn, vha, 0xd013,
+- "-> fwdt%u fwdump residual=%+ld\n",
+- j, fwdt->dump_size - len);
+- }
++
++ ql_log(ql_log_warn, vha, 0xd011, "-> fwdt0 running...\n");
++ if (!fwdt->template) {
++ ql_log(ql_log_warn, vha, 0xd012,
++ "-> fwdt0 no template\n");
++ goto bailout;
+ }
+- vha->hw->fw_dump_len = buf - (void *)vha->hw->fw_dump;
++ len = qla27xx_execute_fwdt_template(vha, fwdt->template, buf);
++ if (len == 0) {
++ goto bailout;
++ } else if (len != fwdt->dump_size) {
++ ql_log(ql_log_warn, vha, 0xd013,
++ "-> fwdt0 fwdump residual=%+ld\n",
++ fwdt->dump_size - len);
++ }
++
++ vha->hw->fw_dump_len = len;
+ vha->hw->fw_dumped = 1;
+
+ ql_log(ql_log_warn, vha, 0xd015,
+@@ -1048,7 +1126,6 @@ qla27xx_fwdump(scsi_qla_host_t *vha, int hardware_locked)
+ }
+
+ bailout:
+- vha->hw->fw_dump_mpi = 0;
+ #ifndef __CHECKER__
+ if (!hardware_locked)
+ spin_unlock_irqrestore(&vha->hw->hardware_lock, flags);
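Both `qla27xx_mpi_fwdump()` and `qla27xx_fwdump()` above take `hardware_lock` only when the caller does not already hold it, signalled by the `hardware_locked` flag. A minimal model of that conditional-locking pattern (the lock is simulated with a flag):

```c
#include <assert.h>

static int hw_lock_held;
static int dumps;

static void lock(void)   { assert(!hw_lock_held); hw_lock_held = 1; }
static void unlock(void) { assert(hw_lock_held);  hw_lock_held = 0; }

/* Acquire the lock only when the caller does not already hold it,
 * mirroring the hardware_locked flag in the qla27xx dump helpers. */
static void do_dump(int hardware_locked)
{
    if (!hardware_locked)
        lock();
    assert(hw_lock_held);   /* critical section always runs locked */
    dumps++;
    if (!hardware_locked)
        unlock();
}
```

The flag lets one function serve both interrupt-context callers (lock already held) and process-context callers without deadlocking on a non-recursive lock.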
+diff --git a/drivers/spi/spi-fsl-dspi.c b/drivers/spi/spi-fsl-dspi.c
+index 856a4a0edcc7..38d337f0967d 100644
+--- a/drivers/spi/spi-fsl-dspi.c
++++ b/drivers/spi/spi-fsl-dspi.c
+@@ -1,6 +1,7 @@
+ // SPDX-License-Identifier: GPL-2.0+
+ //
+ // Copyright 2013 Freescale Semiconductor, Inc.
++// Copyright 2020 NXP
+ //
+ // Freescale DSPI driver
+ // This file contains a driver for the Freescale DSPI
+@@ -26,6 +27,9 @@
+ #define SPI_MCR_CLR_TXF BIT(11)
+ #define SPI_MCR_CLR_RXF BIT(10)
+ #define SPI_MCR_XSPI BIT(3)
++#define SPI_MCR_DIS_TXF BIT(13)
++#define SPI_MCR_DIS_RXF BIT(12)
++#define SPI_MCR_HALT BIT(0)
+
+ #define SPI_TCR 0x08
+ #define SPI_TCR_GET_TCNT(x) (((x) & GENMASK(31, 16)) >> 16)
+@@ -1437,15 +1441,42 @@ static int dspi_remove(struct platform_device *pdev)
+ struct fsl_dspi *dspi = spi_controller_get_devdata(ctlr);
+
+ /* Disconnect from the SPI framework */
++ spi_unregister_controller(dspi->ctlr);
++
++ /* Disable RX and TX */
++ regmap_update_bits(dspi->regmap, SPI_MCR,
++ SPI_MCR_DIS_TXF | SPI_MCR_DIS_RXF,
++ SPI_MCR_DIS_TXF | SPI_MCR_DIS_RXF);
++
++ /* Stop Running */
++ regmap_update_bits(dspi->regmap, SPI_MCR, SPI_MCR_HALT, SPI_MCR_HALT);
++
+ dspi_release_dma(dspi);
+ if (dspi->irq)
+ free_irq(dspi->irq, dspi);
+ clk_disable_unprepare(dspi->clk);
+- spi_unregister_controller(dspi->ctlr);
+
+ return 0;
+ }
+
++static void dspi_shutdown(struct platform_device *pdev)
++{
++ struct spi_controller *ctlr = platform_get_drvdata(pdev);
++ struct fsl_dspi *dspi = spi_controller_get_devdata(ctlr);
++
++ /* Disable RX and TX */
++ regmap_update_bits(dspi->regmap, SPI_MCR,
++ SPI_MCR_DIS_TXF | SPI_MCR_DIS_RXF,
++ SPI_MCR_DIS_TXF | SPI_MCR_DIS_RXF);
++
++ /* Stop Running */
++ regmap_update_bits(dspi->regmap, SPI_MCR, SPI_MCR_HALT, SPI_MCR_HALT);
++
++ dspi_release_dma(dspi);
++ clk_disable_unprepare(dspi->clk);
++ spi_unregister_controller(dspi->ctlr);
++}
++
+ static struct platform_driver fsl_dspi_driver = {
+ .driver.name = DRIVER_NAME,
+ .driver.of_match_table = fsl_dspi_dt_ids,
+@@ -1453,6 +1484,7 @@ static struct platform_driver fsl_dspi_driver = {
+ .driver.pm = &dspi_pm,
+ .probe = dspi_probe,
+ .remove = dspi_remove,
++ .shutdown = dspi_shutdown,
+ };
+ module_platform_driver(fsl_dspi_driver);
+
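The new `dspi_shutdown()` and reordered `dspi_remove()` above quiesce the controller with two masked updates to `SPI_MCR`: disable the FIFOs, then halt the module. A sketch of that `regmap_update_bits`-style sequence on a plain register (bit positions copied from the defines in the hunk):

```c
#include <assert.h>
#include <stdint.h>

#define SPI_MCR_DIS_TXF (1u << 13)
#define SPI_MCR_DIS_RXF (1u << 12)
#define SPI_MCR_HALT    (1u << 0)

/* Minimal regmap_update_bits-style helper: modify only the masked bits. */
static void update_bits(uint32_t *reg, uint32_t mask, uint32_t val)
{
    *reg = (*reg & ~mask) | (val & mask);
}

/* Quiesce sequence from the patch: disable RX/TX FIFOs, then halt. */
static void dspi_quiesce(uint32_t *mcr)
{
    update_bits(mcr, SPI_MCR_DIS_TXF | SPI_MCR_DIS_RXF,
                SPI_MCR_DIS_TXF | SPI_MCR_DIS_RXF);
    update_bits(mcr, SPI_MCR_HALT, SPI_MCR_HALT);
}
```

Masked updates leave the rest of `SPI_MCR` untouched, which matters because the register also carries configuration bits set at probe time.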
+diff --git a/drivers/spi/spidev.c b/drivers/spi/spidev.c
+index 80dd1025b953..012a89123067 100644
+--- a/drivers/spi/spidev.c
++++ b/drivers/spi/spidev.c
+@@ -608,15 +608,20 @@ err_find_dev:
+ static int spidev_release(struct inode *inode, struct file *filp)
+ {
+ struct spidev_data *spidev;
++ int dofree;
+
+ mutex_lock(&device_list_lock);
+ spidev = filp->private_data;
+ filp->private_data = NULL;
+
++ spin_lock_irq(&spidev->spi_lock);
++ /* ... after we unbound from the underlying device? */
++ dofree = (spidev->spi == NULL);
++ spin_unlock_irq(&spidev->spi_lock);
++
+ /* last close? */
+ spidev->users--;
+ if (!spidev->users) {
+- int dofree;
+
+ kfree(spidev->tx_buffer);
+ spidev->tx_buffer = NULL;
+@@ -624,19 +629,14 @@ static int spidev_release(struct inode *inode, struct file *filp)
+ kfree(spidev->rx_buffer);
+ spidev->rx_buffer = NULL;
+
+- spin_lock_irq(&spidev->spi_lock);
+- if (spidev->spi)
+- spidev->speed_hz = spidev->spi->max_speed_hz;
+-
+- /* ... after we unbound from the underlying device? */
+- dofree = (spidev->spi == NULL);
+- spin_unlock_irq(&spidev->spi_lock);
+-
+ if (dofree)
+ kfree(spidev);
++ else
++ spidev->speed_hz = spidev->spi->max_speed_hz;
+ }
+ #ifdef CONFIG_SPI_SLAVE
+- spi_slave_abort(spidev->spi);
++ if (!dofree)
++ spi_slave_abort(spidev->spi);
+ #endif
+ mutex_unlock(&device_list_lock);
+
+@@ -786,13 +786,13 @@ static int spidev_remove(struct spi_device *spi)
+ {
+ struct spidev_data *spidev = spi_get_drvdata(spi);
+
++ /* prevent new opens */
++ mutex_lock(&device_list_lock);
+ /* make sure ops on existing fds can abort cleanly */
+ spin_lock_irq(&spidev->spi_lock);
+ spidev->spi = NULL;
+ spin_unlock_irq(&spidev->spi_lock);
+
+- /* prevent new opens */
+- mutex_lock(&device_list_lock);
+ list_del(&spidev->device_entry);
+ device_destroy(spidev_class, spidev->devt);
+ clear_bit(MINOR(spidev->devt), minors);
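The spidev fix above computes `dofree` once, under the spinlock and before any use of `spidev->spi`, so the decision cannot race with `spidev_remove()` clearing the pointer. A condensed sketch of that decide-once-then-act pattern (locking elided, buffers omitted):

```c
#include <assert.h>
#include <stdlib.h>

struct spidev { void *spi; int users; };

/* Decide once (under spi_lock in the real driver) whether the
 * underlying device is gone; only then is it safe either to free the
 * wrapper or to keep dereferencing ->spi. */
static int release(struct spidev *dev)
{
    int dofree = (dev->spi == NULL);   /* read under spi_lock upstream */

    dev->users--;
    if (!dev->users && dofree) {
        free(dev);
        return 1;                      /* wrapper freed */
    }
    return 0;
}
```

The old code read `spidev->spi` again later (for `spi_slave_abort()` and `speed_hz`), which could dereference a pointer that had been cleared in between.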
+diff --git a/drivers/staging/wfx/hif_tx.c b/drivers/staging/wfx/hif_tx.c
+index 20b3045d7667..15ff60a58466 100644
+--- a/drivers/staging/wfx/hif_tx.c
++++ b/drivers/staging/wfx/hif_tx.c
+@@ -222,7 +222,7 @@ int hif_write_mib(struct wfx_dev *wdev, int vif_id, u16 mib_id, void *val,
+ }
+
+ int hif_scan(struct wfx_vif *wvif, struct cfg80211_scan_request *req,
+- int chan_start_idx, int chan_num)
++ int chan_start_idx, int chan_num, int *timeout)
+ {
+ int ret, i;
+ struct hif_msg *hif;
+@@ -269,11 +269,13 @@ int hif_scan(struct wfx_vif *wvif, struct cfg80211_scan_request *req,
+ tmo_chan_fg = 512 * USEC_PER_TU + body->probe_delay;
+ tmo_chan_fg *= body->num_of_probe_requests;
+ tmo = chan_num * max(tmo_chan_bg, tmo_chan_fg) + 512 * USEC_PER_TU;
++ if (timeout)
++ *timeout = usecs_to_jiffies(tmo);
+
+ wfx_fill_header(hif, wvif->id, HIF_REQ_ID_START_SCAN, buf_len);
+ ret = wfx_cmd_send(wvif->wdev, hif, NULL, 0, false);
+ kfree(hif);
+- return ret ? ret : usecs_to_jiffies(tmo);
++ return ret;
+ }
+
+ int hif_stop_scan(struct wfx_vif *wvif)
+diff --git a/drivers/staging/wfx/hif_tx.h b/drivers/staging/wfx/hif_tx.h
+index f8520a14c14c..7a21338470ee 100644
+--- a/drivers/staging/wfx/hif_tx.h
++++ b/drivers/staging/wfx/hif_tx.h
+@@ -43,7 +43,7 @@ int hif_read_mib(struct wfx_dev *wdev, int vif_id, u16 mib_id,
+ int hif_write_mib(struct wfx_dev *wdev, int vif_id, u16 mib_id,
+ void *buf, size_t buf_size);
+ int hif_scan(struct wfx_vif *wvif, struct cfg80211_scan_request *req80211,
+- int chan_start, int chan_num);
++ int chan_start, int chan_num, int *timeout);
+ int hif_stop_scan(struct wfx_vif *wvif);
+ int hif_join(struct wfx_vif *wvif, const struct ieee80211_bss_conf *conf,
+ struct ieee80211_channel *channel, const u8 *ssid, int ssidlen);
+diff --git a/drivers/staging/wfx/scan.c b/drivers/staging/wfx/scan.c
+index 9aa14331affd..d47b8a3ba403 100644
+--- a/drivers/staging/wfx/scan.c
++++ b/drivers/staging/wfx/scan.c
+@@ -56,10 +56,10 @@ static int send_scan_req(struct wfx_vif *wvif,
+ wfx_tx_lock_flush(wvif->wdev);
+ wvif->scan_abort = false;
+ reinit_completion(&wvif->scan_complete);
+- timeout = hif_scan(wvif, req, start_idx, i - start_idx);
+- if (timeout < 0) {
++ ret = hif_scan(wvif, req, start_idx, i - start_idx, &timeout);
++ if (ret) {
+ wfx_tx_unlock(wvif->wdev);
+- return timeout;
++ return -EIO;
+ }
+ ret = wait_for_completion_timeout(&wvif->scan_complete, timeout);
+ if (req->channels[start_idx]->max_power != wvif->vif->bss_conf.txpower)
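The wfx change above stops overloading the return value of `hif_scan()`: previously it returned either a negative error or a jiffies timeout, so the two value spaces could collide; now the status is the return value and the timeout travels through an optional out-parameter. A sketch of that signature shape (the timeout formula is illustrative):

```c
#include <assert.h>

/* Return only a status code; report the computed timeout through an
 * optional out-parameter, as the patched hif_scan does. */
static int scan(int chan_num, int *timeout)
{
    int tmo = chan_num * 512;          /* illustrative timeout formula */

    if (chan_num <= 0)
        return -22;                    /* -EINVAL */
    if (timeout)
        *timeout = tmo;
    return 0;
}
```

Callers that do not care about the timeout simply pass `NULL`, and error handling becomes the usual `if (ret)` test instead of a sign check on a mixed-meaning value.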
+diff --git a/drivers/usb/dwc3/dwc3-pci.c b/drivers/usb/dwc3/dwc3-pci.c
+index b67372737dc9..96c05b121fac 100644
+--- a/drivers/usb/dwc3/dwc3-pci.c
++++ b/drivers/usb/dwc3/dwc3-pci.c
+@@ -206,8 +206,10 @@ static void dwc3_pci_resume_work(struct work_struct *work)
+ int ret;
+
+ ret = pm_runtime_get_sync(&dwc3->dev);
+- if (ret)
++ if (ret) {
++ pm_runtime_put_sync_autosuspend(&dwc3->dev);
+ return;
++ }
+
+ pm_runtime_mark_last_busy(&dwc3->dev);
+ pm_runtime_put_sync_autosuspend(&dwc3->dev);
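The dwc3-pci fix above rests on a property of runtime PM: `pm_runtime_get_sync()` increments the usage count even when the resume fails, so the error path must drop the reference it just took. A toy model of that get/put balance (the PM core is reduced to a counter):

```c
#include <assert.h>

static int usage_count;

/* pm_runtime_get_sync always raises the usage count, even on failure. */
static int get_sync(int fail) { usage_count++; return fail ? -5 : 0; }
static void put(void)         { usage_count--; }

/* Resume-work shape from the patch: balance the reference on both the
 * success path and the error path. */
static void resume_work(int fail)
{
    if (get_sync(fail)) {
        put();          /* the patch adds this put on the error path */
        return;
    }
    /* ... do the resume work ... */
    put();
}
```

Without the added `put()`, each failed resume would leave the usage count permanently elevated and the device could never autosuspend again.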
+diff --git a/fs/btrfs/discard.c b/fs/btrfs/discard.c
+index 5615320fa659..741c7e19c32f 100644
+--- a/fs/btrfs/discard.c
++++ b/fs/btrfs/discard.c
+@@ -619,6 +619,7 @@ void btrfs_discard_punt_unused_bgs_list(struct btrfs_fs_info *fs_info)
+ list_for_each_entry_safe(block_group, next, &fs_info->unused_bgs,
+ bg_list) {
+ list_del_init(&block_group->bg_list);
++ btrfs_put_block_group(block_group);
+ btrfs_discard_queue_work(&fs_info->discard_ctl, block_group);
+ }
+ spin_unlock(&fs_info->unused_bgs_lock);
+diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
+index 91def9fd9456..f71e4dbe1d8a 100644
+--- a/fs/btrfs/disk-io.c
++++ b/fs/btrfs/disk-io.c
+@@ -2583,10 +2583,12 @@ static int __cold init_tree_roots(struct btrfs_fs_info *fs_info)
+ !extent_buffer_uptodate(tree_root->node)) {
+ handle_error = true;
+
+- if (IS_ERR(tree_root->node))
++ if (IS_ERR(tree_root->node)) {
+ ret = PTR_ERR(tree_root->node);
+- else if (!extent_buffer_uptodate(tree_root->node))
++ tree_root->node = NULL;
++ } else if (!extent_buffer_uptodate(tree_root->node)) {
+ ret = -EUCLEAN;
++ }
+
+ btrfs_warn(fs_info, "failed to read tree root");
+ continue;
+diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
+index 39e45b8a5031..6e17a92869ad 100644
+--- a/fs/btrfs/extent_io.c
++++ b/fs/btrfs/extent_io.c
+@@ -5063,25 +5063,28 @@ struct extent_buffer *alloc_dummy_extent_buffer(struct btrfs_fs_info *fs_info,
+ static void check_buffer_tree_ref(struct extent_buffer *eb)
+ {
+ int refs;
+- /* the ref bit is tricky. We have to make sure it is set
+- * if we have the buffer dirty. Otherwise the
+- * code to free a buffer can end up dropping a dirty
+- * page
++ /*
++ * The TREE_REF bit is first set when the extent_buffer is added
++ * to the radix tree. It is also reset, if unset, when a new reference
++ * is created by find_extent_buffer.
+ *
+- * Once the ref bit is set, it won't go away while the
+- * buffer is dirty or in writeback, and it also won't
+- * go away while we have the reference count on the
+- * eb bumped.
++ * It is only cleared in two cases: freeing the last non-tree
++ * reference to the extent_buffer when its STALE bit is set or
++ * calling releasepage when the tree reference is the only reference.
+ *
+- * We can't just set the ref bit without bumping the
+- * ref on the eb because free_extent_buffer might
+- * see the ref bit and try to clear it. If this happens
+- * free_extent_buffer might end up dropping our original
+- * ref by mistake and freeing the page before we are able
+- * to add one more ref.
++ * In both cases, care is taken to ensure that the extent_buffer's
++ * pages are not under io. However, releasepage can be concurrently
++ * called with creating new references, which is prone to race
++ * conditions between the calls to check_buffer_tree_ref in those
++ * codepaths and clearing TREE_REF in try_release_extent_buffer.
+ *
+- * So bump the ref count first, then set the bit. If someone
+- * beat us to it, drop the ref we added.
++ * The actual lifetime of the extent_buffer in the radix tree is
++ * adequately protected by the refcount, but the TREE_REF bit and
++ * its corresponding reference are not. To protect against this
++ * class of races, we call check_buffer_tree_ref from the codepaths
++ * which trigger io after they set eb->io_pages. Note that once io is
++ * initiated, TREE_REF can no longer be cleared, so that is the
++ * moment at which any such race is best fixed.
+ */
+ refs = atomic_read(&eb->refs);
+ if (refs >= 2 && test_bit(EXTENT_BUFFER_TREE_REF, &eb->bflags))
+@@ -5532,6 +5535,11 @@ int read_extent_buffer_pages(struct extent_buffer *eb, int wait, int mirror_num)
+ clear_bit(EXTENT_BUFFER_READ_ERR, &eb->bflags);
+ eb->read_mirror = 0;
+ atomic_set(&eb->io_pages, num_reads);
++ /*
++ * It is possible for releasepage to clear the TREE_REF bit before we
++ * set io_pages. See check_buffer_tree_ref for a more detailed comment.
++ */
++ check_buffer_tree_ref(eb);
+ for (i = 0; i < num_pages; i++) {
+ page = eb->pages[i];
+
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index 6aa200e373c8..e7bdda3ed069 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -1690,12 +1690,8 @@ out_check:
+ ret = fallback_to_cow(inode, locked_page, cow_start,
+ found_key.offset - 1,
+ page_started, nr_written);
+- if (ret) {
+- if (nocow)
+- btrfs_dec_nocow_writers(fs_info,
+- disk_bytenr);
++ if (ret)
+ goto error;
+- }
+ cow_start = (u64)-1;
+ }
+
+@@ -1711,9 +1707,6 @@ out_check:
+ ram_bytes, BTRFS_COMPRESS_NONE,
+ BTRFS_ORDERED_PREALLOC);
+ if (IS_ERR(em)) {
+- if (nocow)
+- btrfs_dec_nocow_writers(fs_info,
+- disk_bytenr);
+ ret = PTR_ERR(em);
+ goto error;
+ }
+diff --git a/fs/btrfs/space-info.c b/fs/btrfs/space-info.c
+index eee6748c49e4..756950aba1a6 100644
+--- a/fs/btrfs/space-info.c
++++ b/fs/btrfs/space-info.c
+@@ -879,8 +879,8 @@ static bool steal_from_global_rsv(struct btrfs_fs_info *fs_info,
+ return false;
+ }
+ global_rsv->reserved -= ticket->bytes;
++ remove_ticket(space_info, ticket);
+ ticket->bytes = 0;
+- list_del_init(&ticket->list);
+ wake_up(&ticket->wait);
+ space_info->tickets_id++;
+ if (global_rsv->reserved < global_rsv->size)
+diff --git a/fs/cifs/inode.c b/fs/cifs/inode.c
+index 430b0b125654..44a57b65915b 100644
+--- a/fs/cifs/inode.c
++++ b/fs/cifs/inode.c
+@@ -2350,6 +2350,15 @@ set_size_out:
+ if (rc == 0) {
+ cifsInode->server_eof = attrs->ia_size;
+ cifs_setsize(inode, attrs->ia_size);
++
++ /*
++ * The man page of truncate says if the size changed,
++ * then the st_ctime and st_mtime fields for the file
++ * are updated.
++ */
++ attrs->ia_ctime = attrs->ia_mtime = current_time(inode);
++ attrs->ia_valid |= ATTR_CTIME | ATTR_MTIME;
++
+ cifs_truncate_page(inode->i_mapping, inode->i_size);
+ }
+
+diff --git a/fs/cifs/ioctl.c b/fs/cifs/ioctl.c
+index 4a73e63c4d43..dcde44ff6cf9 100644
+--- a/fs/cifs/ioctl.c
++++ b/fs/cifs/ioctl.c
+@@ -169,6 +169,7 @@ long cifs_ioctl(struct file *filep, unsigned int command, unsigned long arg)
+ unsigned int xid;
+ struct cifsFileInfo *pSMBFile = filep->private_data;
+ struct cifs_tcon *tcon;
++ struct tcon_link *tlink;
+ struct cifs_sb_info *cifs_sb;
+ __u64 ExtAttrBits = 0;
+ __u64 caps;
+@@ -307,13 +308,19 @@ long cifs_ioctl(struct file *filep, unsigned int command, unsigned long arg)
+ break;
+ }
+ cifs_sb = CIFS_SB(inode->i_sb);
+- tcon = tlink_tcon(cifs_sb_tlink(cifs_sb));
++ tlink = cifs_sb_tlink(cifs_sb);
++ if (IS_ERR(tlink)) {
++ rc = PTR_ERR(tlink);
++ break;
++ }
++ tcon = tlink_tcon(tlink);
+ if (tcon && tcon->ses->server->ops->notify) {
+ rc = tcon->ses->server->ops->notify(xid,
+ filep, (void __user *)arg);
+ cifs_dbg(FYI, "ioctl notify rc %d\n", rc);
+ } else
+ rc = -EOPNOTSUPP;
++ cifs_put_tlink(tlink);
+ break;
+ default:
+ cifs_dbg(FYI, "unsupported ioctl\n");
+diff --git a/fs/cifs/smb2misc.c b/fs/cifs/smb2misc.c
+index 497afb0b9960..44fca24d993e 100644
+--- a/fs/cifs/smb2misc.c
++++ b/fs/cifs/smb2misc.c
+@@ -354,9 +354,13 @@ smb2_get_data_area_len(int *off, int *len, struct smb2_sync_hdr *shdr)
+ ((struct smb2_ioctl_rsp *)shdr)->OutputCount);
+ break;
+ case SMB2_CHANGE_NOTIFY:
++ *off = le16_to_cpu(
++ ((struct smb2_change_notify_rsp *)shdr)->OutputBufferOffset);
++ *len = le32_to_cpu(
++ ((struct smb2_change_notify_rsp *)shdr)->OutputBufferLength);
++ break;
+ default:
+- /* BB FIXME for unimplemented cases above */
+- cifs_dbg(VFS, "no length check for command\n");
++ cifs_dbg(VFS, "no length check for command %d\n", le16_to_cpu(shdr->Command));
+ break;
+ }
+
+diff --git a/fs/cifs/smb2ops.c b/fs/cifs/smb2ops.c
+index 6fc69c3b2749..bf13917ec1a4 100644
+--- a/fs/cifs/smb2ops.c
++++ b/fs/cifs/smb2ops.c
+@@ -2119,7 +2119,7 @@ smb3_notify(const unsigned int xid, struct file *pfile,
+
+ tcon = cifs_sb_master_tcon(cifs_sb);
+ oparms.tcon = tcon;
+- oparms.desired_access = FILE_READ_ATTRIBUTES;
++ oparms.desired_access = FILE_READ_ATTRIBUTES | FILE_READ_DATA;
+ oparms.disposition = FILE_OPEN;
+ oparms.create_options = cifs_create_options(cifs_sb, 0);
+ oparms.fid = &fid;
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+index 2be6ea010340..ba2184841cb5 100644
+--- a/fs/io_uring.c
++++ b/fs/io_uring.c
+@@ -3587,6 +3587,7 @@ static int io_sendmsg_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+ if (req->flags & REQ_F_NEED_CLEANUP)
+ return 0;
+
++ io->msg.msg.msg_name = &io->msg.addr;
+ io->msg.iov = io->msg.fast_iov;
+ ret = sendmsg_copy_msghdr(&io->msg.msg, sr->msg, sr->msg_flags,
+ &io->msg.iov);
+@@ -3774,6 +3775,7 @@ static int __io_compat_recvmsg_copy_hdr(struct io_kiocb *req,
+
+ static int io_recvmsg_copy_hdr(struct io_kiocb *req, struct io_async_ctx *io)
+ {
++ io->msg.msg.msg_name = &io->msg.addr;
+ io->msg.iov = io->msg.fast_iov;
+
+ #ifdef CONFIG_COMPAT
+@@ -6751,6 +6753,7 @@ static int io_sqe_files_register(struct io_ring_ctx *ctx, void __user *arg,
+ for (i = 0; i < nr_tables; i++)
+ kfree(ctx->file_data->table[i].files);
+
++ percpu_ref_exit(&ctx->file_data->refs);
+ kfree(ctx->file_data->table);
+ kfree(ctx->file_data);
+ ctx->file_data = NULL;
+@@ -6904,8 +6907,10 @@ static int __io_sqe_files_update(struct io_ring_ctx *ctx,
+ }
+ table->files[index] = file;
+ err = io_sqe_file_register(ctx, file, i);
+- if (err)
++ if (err) {
++ fput(file);
+ break;
++ }
+ }
+ nr_args--;
+ done++;
+@@ -7400,9 +7405,6 @@ static void io_ring_ctx_free(struct io_ring_ctx *ctx)
+ io_mem_free(ctx->sq_sqes);
+
+ percpu_ref_exit(&ctx->refs);
+- if (ctx->account_mem)
+- io_unaccount_mem(ctx->user,
+- ring_pages(ctx->sq_entries, ctx->cq_entries));
+ free_uid(ctx->user);
+ put_cred(ctx->creds);
+ kfree(ctx->completions);
+@@ -7498,6 +7500,16 @@ static void io_ring_ctx_wait_and_kill(struct io_ring_ctx *ctx)
+ if (ctx->rings)
+ io_cqring_overflow_flush(ctx, true);
+ idr_for_each(&ctx->personality_idr, io_remove_personalities, ctx);
++
++ /*
++ * Do this upfront, so we won't have a grace period where the ring
++ * is closed but resources aren't reaped yet. This can cause
++ * spurious failure in setting up a new ring.
++ */
++ if (ctx->account_mem)
++ io_unaccount_mem(ctx->user,
++ ring_pages(ctx->sq_entries, ctx->cq_entries));
++
+ INIT_WORK(&ctx->exit_work, io_ring_exit_work);
+ queue_work(system_wq, &ctx->exit_work);
+ }
+diff --git a/fs/nfs/nfs4namespace.c b/fs/nfs/nfs4namespace.c
+index a3ab6e219061..873342308dc0 100644
+--- a/fs/nfs/nfs4namespace.c
++++ b/fs/nfs/nfs4namespace.c
+@@ -308,6 +308,7 @@ static int try_location(struct fs_context *fc,
+ if (IS_ERR(export_path))
+ return PTR_ERR(export_path);
+
++ kfree(ctx->nfs_server.export_path);
+ ctx->nfs_server.export_path = export_path;
+
+ source = kmalloc(len + 1 + ctx->nfs_server.export_path_len + 1,
+diff --git a/include/linux/btf.h b/include/linux/btf.h
+index 5c1ea99b480f..8b81fbb4497c 100644
+--- a/include/linux/btf.h
++++ b/include/linux/btf.h
+@@ -82,6 +82,11 @@ static inline bool btf_type_is_int(const struct btf_type *t)
+ return BTF_INFO_KIND(t->info) == BTF_KIND_INT;
+ }
+
++static inline bool btf_type_is_small_int(const struct btf_type *t)
++{
++ return btf_type_is_int(t) && t->size <= sizeof(u64);
++}
++
+ static inline bool btf_type_is_enum(const struct btf_type *t)
+ {
+ return BTF_INFO_KIND(t->info) == BTF_KIND_ENUM;
+diff --git a/include/linux/filter.h b/include/linux/filter.h
+index 9b5aa5c483cc..ccbba0adc0da 100644
+--- a/include/linux/filter.h
++++ b/include/linux/filter.h
+@@ -888,12 +888,12 @@ void bpf_jit_compile(struct bpf_prog *prog);
+ bool bpf_jit_needs_zext(void);
+ bool bpf_helper_changes_pkt_data(void *func);
+
+-static inline bool bpf_dump_raw_ok(void)
++static inline bool bpf_dump_raw_ok(const struct cred *cred)
+ {
+ /* Reconstruction of call-sites is dependent on kallsyms,
+ * thus make dump the same restriction.
+ */
+- return kallsyms_show_value() == 1;
++ return kallsyms_show_value(cred);
+ }
+
+ struct bpf_prog *bpf_patch_insn_single(struct bpf_prog *prog, u32 off,
+diff --git a/include/linux/kallsyms.h b/include/linux/kallsyms.h
+index 657a83b943f0..1f96ce2b47df 100644
+--- a/include/linux/kallsyms.h
++++ b/include/linux/kallsyms.h
+@@ -18,6 +18,7 @@
+ #define KSYM_SYMBOL_LEN (sizeof("%s+%#lx/%#lx [%s]") + (KSYM_NAME_LEN - 1) + \
+ 2*(BITS_PER_LONG*3/10) + (MODULE_NAME_LEN - 1) + 1)
+
++struct cred;
+ struct module;
+
+ static inline int is_kernel_inittext(unsigned long addr)
+@@ -98,7 +99,7 @@ int lookup_symbol_name(unsigned long addr, char *symname);
+ int lookup_symbol_attrs(unsigned long addr, unsigned long *size, unsigned long *offset, char *modname, char *name);
+
+ /* How and when do we show kallsyms values? */
+-extern int kallsyms_show_value(void);
++extern bool kallsyms_show_value(const struct cred *cred);
+
+ #else /* !CONFIG_KALLSYMS */
+
+@@ -158,7 +159,7 @@ static inline int lookup_symbol_attrs(unsigned long addr, unsigned long *size, u
+ return -ERANGE;
+ }
+
+-static inline int kallsyms_show_value(void)
++static inline bool kallsyms_show_value(const struct cred *cred)
+ {
+ return false;
+ }
+diff --git a/include/sound/compress_driver.h b/include/sound/compress_driver.h
+index 6ce8effa0b12..70cbc5095e72 100644
+--- a/include/sound/compress_driver.h
++++ b/include/sound/compress_driver.h
+@@ -66,6 +66,7 @@ struct snd_compr_runtime {
+ * @direction: stream direction, playback/recording
+ * @metadata_set: metadata set flag, true when set
+ * @next_track: has userspace signal next track transition, true when set
++ * @partial_drain: undergoing partial_drain for stream, true when set
+ * @private_data: pointer to DSP private data
+ * @dma_buffer: allocated buffer if any
+ */
+@@ -78,6 +79,7 @@ struct snd_compr_stream {
+ enum snd_compr_direction direction;
+ bool metadata_set;
+ bool next_track;
++ bool partial_drain;
+ void *private_data;
+ struct snd_dma_buffer dma_buffer;
+ };
+@@ -182,7 +184,13 @@ static inline void snd_compr_drain_notify(struct snd_compr_stream *stream)
+ if (snd_BUG_ON(!stream))
+ return;
+
+- stream->runtime->state = SNDRV_PCM_STATE_SETUP;
++ /* for partial_drain case we are back to running state on success */
++ if (stream->partial_drain) {
++ stream->runtime->state = SNDRV_PCM_STATE_RUNNING;
++ stream->partial_drain = false; /* clear this flag as well */
++ } else {
++ stream->runtime->state = SNDRV_PCM_STATE_SETUP;
++ }
+
+ wake_up(&stream->runtime->sleep);
+ }
+diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
+index d65c6912bdaf..d1f5d428c9fe 100644
+--- a/kernel/bpf/btf.c
++++ b/kernel/bpf/btf.c
+@@ -3744,7 +3744,7 @@ bool btf_ctx_access(int off, int size, enum bpf_access_type type,
+ return false;
+
+ t = btf_type_skip_modifiers(btf, t->type, NULL);
+- if (!btf_type_is_int(t)) {
++ if (!btf_type_is_small_int(t)) {
+ bpf_log(log,
+ "ret type %s not allowed for fmod_ret\n",
+ btf_kind_str[BTF_INFO_KIND(t->info)]);
+@@ -3766,7 +3766,7 @@ bool btf_ctx_access(int off, int size, enum bpf_access_type type,
+ /* skip modifiers */
+ while (btf_type_is_modifier(t))
+ t = btf_type_by_id(btf, t->type);
+- if (btf_type_is_int(t) || btf_type_is_enum(t))
++ if (btf_type_is_small_int(t) || btf_type_is_enum(t))
+ /* accessing a scalar */
+ return true;
+ if (!btf_type_is_ptr(t)) {
+diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
+index c8acc8f37583..0e4d99cfac93 100644
+--- a/kernel/bpf/syscall.c
++++ b/kernel/bpf/syscall.c
+@@ -2918,7 +2918,8 @@ static const struct bpf_map *bpf_map_from_imm(const struct bpf_prog *prog,
+ return NULL;
+ }
+
+-static struct bpf_insn *bpf_insn_prepare_dump(const struct bpf_prog *prog)
++static struct bpf_insn *bpf_insn_prepare_dump(const struct bpf_prog *prog,
++ const struct cred *f_cred)
+ {
+ const struct bpf_map *map;
+ struct bpf_insn *insns;
+@@ -2944,7 +2945,7 @@ static struct bpf_insn *bpf_insn_prepare_dump(const struct bpf_prog *prog)
+ code == (BPF_JMP | BPF_CALL_ARGS)) {
+ if (code == (BPF_JMP | BPF_CALL_ARGS))
+ insns[i].code = BPF_JMP | BPF_CALL;
+- if (!bpf_dump_raw_ok())
++ if (!bpf_dump_raw_ok(f_cred))
+ insns[i].imm = 0;
+ continue;
+ }
+@@ -3000,7 +3001,8 @@ static int set_info_rec_size(struct bpf_prog_info *info)
+ return 0;
+ }
+
+-static int bpf_prog_get_info_by_fd(struct bpf_prog *prog,
++static int bpf_prog_get_info_by_fd(struct file *file,
++ struct bpf_prog *prog,
+ const union bpf_attr *attr,
+ union bpf_attr __user *uattr)
+ {
+@@ -3069,11 +3071,11 @@ static int bpf_prog_get_info_by_fd(struct bpf_prog *prog,
+ struct bpf_insn *insns_sanitized;
+ bool fault;
+
+- if (prog->blinded && !bpf_dump_raw_ok()) {
++ if (prog->blinded && !bpf_dump_raw_ok(file->f_cred)) {
+ info.xlated_prog_insns = 0;
+ goto done;
+ }
+- insns_sanitized = bpf_insn_prepare_dump(prog);
++ insns_sanitized = bpf_insn_prepare_dump(prog, file->f_cred);
+ if (!insns_sanitized)
+ return -ENOMEM;
+ uinsns = u64_to_user_ptr(info.xlated_prog_insns);
+@@ -3107,7 +3109,7 @@ static int bpf_prog_get_info_by_fd(struct bpf_prog *prog,
+ }
+
+ if (info.jited_prog_len && ulen) {
+- if (bpf_dump_raw_ok()) {
++ if (bpf_dump_raw_ok(file->f_cred)) {
+ uinsns = u64_to_user_ptr(info.jited_prog_insns);
+ ulen = min_t(u32, info.jited_prog_len, ulen);
+
+@@ -3142,7 +3144,7 @@ static int bpf_prog_get_info_by_fd(struct bpf_prog *prog,
+ ulen = info.nr_jited_ksyms;
+ info.nr_jited_ksyms = prog->aux->func_cnt ? : 1;
+ if (ulen) {
+- if (bpf_dump_raw_ok()) {
++ if (bpf_dump_raw_ok(file->f_cred)) {
+ unsigned long ksym_addr;
+ u64 __user *user_ksyms;
+ u32 i;
+@@ -3173,7 +3175,7 @@ static int bpf_prog_get_info_by_fd(struct bpf_prog *prog,
+ ulen = info.nr_jited_func_lens;
+ info.nr_jited_func_lens = prog->aux->func_cnt ? : 1;
+ if (ulen) {
+- if (bpf_dump_raw_ok()) {
++ if (bpf_dump_raw_ok(file->f_cred)) {
+ u32 __user *user_lens;
+ u32 func_len, i;
+
+@@ -3230,7 +3232,7 @@ static int bpf_prog_get_info_by_fd(struct bpf_prog *prog,
+ else
+ info.nr_jited_line_info = 0;
+ if (info.nr_jited_line_info && ulen) {
+- if (bpf_dump_raw_ok()) {
++ if (bpf_dump_raw_ok(file->f_cred)) {
+ __u64 __user *user_linfo;
+ u32 i;
+
+@@ -3276,7 +3278,8 @@ done:
+ return 0;
+ }
+
+-static int bpf_map_get_info_by_fd(struct bpf_map *map,
++static int bpf_map_get_info_by_fd(struct file *file,
++ struct bpf_map *map,
+ const union bpf_attr *attr,
+ union bpf_attr __user *uattr)
+ {
+@@ -3319,7 +3322,8 @@ static int bpf_map_get_info_by_fd(struct bpf_map *map,
+ return 0;
+ }
+
+-static int bpf_btf_get_info_by_fd(struct btf *btf,
++static int bpf_btf_get_info_by_fd(struct file *file,
++ struct btf *btf,
+ const union bpf_attr *attr,
+ union bpf_attr __user *uattr)
+ {
+@@ -3351,13 +3355,13 @@ static int bpf_obj_get_info_by_fd(const union bpf_attr *attr,
+ return -EBADFD;
+
+ if (f.file->f_op == &bpf_prog_fops)
+- err = bpf_prog_get_info_by_fd(f.file->private_data, attr,
++ err = bpf_prog_get_info_by_fd(f.file, f.file->private_data, attr,
+ uattr);
+ else if (f.file->f_op == &bpf_map_fops)
+- err = bpf_map_get_info_by_fd(f.file->private_data, attr,
++ err = bpf_map_get_info_by_fd(f.file, f.file->private_data, attr,
+ uattr);
+ else if (f.file->f_op == &btf_fops)
+- err = bpf_btf_get_info_by_fd(f.file->private_data, attr, uattr);
++ err = bpf_btf_get_info_by_fd(f.file, f.file->private_data, attr, uattr);
+ else
+ err = -EINVAL;
+
+diff --git a/kernel/kallsyms.c b/kernel/kallsyms.c
+index 16c8c605f4b0..bb14e64f62a4 100644
+--- a/kernel/kallsyms.c
++++ b/kernel/kallsyms.c
+@@ -644,19 +644,20 @@ static inline int kallsyms_for_perf(void)
+ * Otherwise, require CAP_SYSLOG (assuming kptr_restrict isn't set to
+ * block even that).
+ */
+-int kallsyms_show_value(void)
++bool kallsyms_show_value(const struct cred *cred)
+ {
+ switch (kptr_restrict) {
+ case 0:
+ if (kallsyms_for_perf())
+- return 1;
++ return true;
+ /* fallthrough */
+ case 1:
+- if (has_capability_noaudit(current, CAP_SYSLOG))
+- return 1;
++ if (security_capable(cred, &init_user_ns, CAP_SYSLOG,
++ CAP_OPT_NOAUDIT) == 0)
++ return true;
+ /* fallthrough */
+ default:
+- return 0;
++ return false;
+ }
+ }
+
+@@ -673,7 +674,11 @@ static int kallsyms_open(struct inode *inode, struct file *file)
+ return -ENOMEM;
+ reset_iter(iter, 0);
+
+- iter->show_value = kallsyms_show_value();
++ /*
++ * Instead of checking this on every s_show() call, cache
++ * the result here at open time.
++ */
++ iter->show_value = kallsyms_show_value(file->f_cred);
+ return 0;
+ }
+
+diff --git a/kernel/kprobes.c b/kernel/kprobes.c
+index 950a5cfd262c..0a967db226d8 100644
+--- a/kernel/kprobes.c
++++ b/kernel/kprobes.c
+@@ -2362,7 +2362,7 @@ static void report_probe(struct seq_file *pi, struct kprobe *p,
+ else
+ kprobe_type = "k";
+
+- if (!kallsyms_show_value())
++ if (!kallsyms_show_value(pi->file->f_cred))
+ addr = NULL;
+
+ if (sym)
+@@ -2463,7 +2463,7 @@ static int kprobe_blacklist_seq_show(struct seq_file *m, void *v)
+ * If /proc/kallsyms is not showing kernel address, we won't
+ * show them here either.
+ */
+- if (!kallsyms_show_value())
++ if (!kallsyms_show_value(m->file->f_cred))
+ seq_printf(m, "0x%px-0x%px\t%ps\n", NULL, NULL,
+ (void *)ent->start_addr);
+ else
+diff --git a/kernel/module.c b/kernel/module.c
+index 646f1e2330d2..af59c86f1547 100644
+--- a/kernel/module.c
++++ b/kernel/module.c
+@@ -1507,8 +1507,7 @@ static inline bool sect_empty(const Elf_Shdr *sect)
+ }
+
+ struct module_sect_attr {
+- struct module_attribute mattr;
+- char *name;
++ struct bin_attribute battr;
+ unsigned long address;
+ };
+
+@@ -1518,13 +1517,18 @@ struct module_sect_attrs {
+ struct module_sect_attr attrs[];
+ };
+
+-static ssize_t module_sect_show(struct module_attribute *mattr,
+- struct module_kobject *mk, char *buf)
++static ssize_t module_sect_read(struct file *file, struct kobject *kobj,
++ struct bin_attribute *battr,
++ char *buf, loff_t pos, size_t count)
+ {
+ struct module_sect_attr *sattr =
+- container_of(mattr, struct module_sect_attr, mattr);
+- return sprintf(buf, "0x%px\n", kptr_restrict < 2 ?
+- (void *)sattr->address : NULL);
++ container_of(battr, struct module_sect_attr, battr);
++
++ if (pos != 0)
++ return -EINVAL;
++
++ return sprintf(buf, "0x%px\n",
++ kallsyms_show_value(file->f_cred) ? (void *)sattr->address : NULL);
+ }
+
+ static void free_sect_attrs(struct module_sect_attrs *sect_attrs)
+@@ -1532,7 +1536,7 @@ static void free_sect_attrs(struct module_sect_attrs *sect_attrs)
+ unsigned int section;
+
+ for (section = 0; section < sect_attrs->nsections; section++)
+- kfree(sect_attrs->attrs[section].name);
++ kfree(sect_attrs->attrs[section].battr.attr.name);
+ kfree(sect_attrs);
+ }
+
+@@ -1541,42 +1545,41 @@ static void add_sect_attrs(struct module *mod, const struct load_info *info)
+ unsigned int nloaded = 0, i, size[2];
+ struct module_sect_attrs *sect_attrs;
+ struct module_sect_attr *sattr;
+- struct attribute **gattr;
++ struct bin_attribute **gattr;
+
+ /* Count loaded sections and allocate structures */
+ for (i = 0; i < info->hdr->e_shnum; i++)
+ if (!sect_empty(&info->sechdrs[i]))
+ nloaded++;
+ size[0] = ALIGN(struct_size(sect_attrs, attrs, nloaded),
+- sizeof(sect_attrs->grp.attrs[0]));
+- size[1] = (nloaded + 1) * sizeof(sect_attrs->grp.attrs[0]);
++ sizeof(sect_attrs->grp.bin_attrs[0]));
++ size[1] = (nloaded + 1) * sizeof(sect_attrs->grp.bin_attrs[0]);
+ sect_attrs = kzalloc(size[0] + size[1], GFP_KERNEL);
+ if (sect_attrs == NULL)
+ return;
+
+ /* Setup section attributes. */
+ sect_attrs->grp.name = "sections";
+- sect_attrs->grp.attrs = (void *)sect_attrs + size[0];
++ sect_attrs->grp.bin_attrs = (void *)sect_attrs + size[0];
+
+ sect_attrs->nsections = 0;
+ sattr = &sect_attrs->attrs[0];
+- gattr = §_attrs->grp.attrs[0];
+- gattr = &sect_attrs->grp.attrs[0];
+ for (i = 0; i < info->hdr->e_shnum; i++) {
+ Elf_Shdr *sec = &info->sechdrs[i];
+ if (sect_empty(sec))
+ continue;
++ sysfs_bin_attr_init(&sattr->battr);
+ sattr->address = sec->sh_addr;
+- sattr->name = kstrdup(info->secstrings + sec->sh_name,
+- GFP_KERNEL);
+- if (sattr->name == NULL)
++ sattr->battr.attr.name =
++ kstrdup(info->secstrings + sec->sh_name, GFP_KERNEL);
++ if (sattr->battr.attr.name == NULL)
+ goto out;
+ sect_attrs->nsections++;
+- sysfs_attr_init(&sattr->mattr.attr);
+- sattr->mattr.show = module_sect_show;
+- sattr->mattr.store = NULL;
+- sattr->mattr.attr.name = sattr->name;
+- sattr->mattr.attr.mode = S_IRUSR;
+- *(gattr++) = &(sattr++)->mattr.attr;
++ sattr->battr.read = module_sect_read;
++ sattr->battr.size = 3 /* "0x", "\n" */ + (BITS_PER_LONG / 4);
++ sattr->battr.attr.mode = 0400;
++ *(gattr++) = &(sattr++)->battr;
+ }
+ *gattr = NULL;
+
+@@ -1666,7 +1669,7 @@ static void add_notes_attrs(struct module *mod, const struct load_info *info)
+ continue;
+ if (info->sechdrs[i].sh_type == SHT_NOTE) {
+ sysfs_bin_attr_init(nattr);
+- nattr->attr.name = mod->sect_attrs->attrs[loaded].name;
++ nattr->attr.name = mod->sect_attrs->attrs[loaded].battr.attr.name;
+ nattr->attr.mode = S_IRUGO;
+ nattr->size = info->sechdrs[i].sh_size;
+ nattr->private = (void *) info->sechdrs[i].sh_addr;
+@@ -4348,7 +4351,7 @@ static int modules_open(struct inode *inode, struct file *file)
+
+ if (!err) {
+ struct seq_file *m = file->private_data;
+- m->private = kallsyms_show_value() ? NULL : (void *)8ul;
++ m->private = kallsyms_show_value(file->f_cred) ? NULL : (void *)8ul;
+ }
+
+ return err;
+diff --git a/kernel/sched/core.c b/kernel/sched/core.c
+index f2618ade8047..8034434b1040 100644
+--- a/kernel/sched/core.c
++++ b/kernel/sched/core.c
+@@ -1637,7 +1637,7 @@ static int __set_cpus_allowed_ptr(struct task_struct *p,
+ goto out;
+ }
+
+- if (cpumask_equal(p->cpus_ptr, new_mask))
++ if (cpumask_equal(&p->cpus_mask, new_mask))
+ goto out;
+
+ /*
+diff --git a/net/core/skmsg.c b/net/core/skmsg.c
+index 351afbf6bfba..6a32a1fd34f8 100644
+--- a/net/core/skmsg.c
++++ b/net/core/skmsg.c
+@@ -683,7 +683,7 @@ static struct sk_psock *sk_psock_from_strp(struct strparser *strp)
+ return container_of(parser, struct sk_psock, parser);
+ }
+
+-static void sk_psock_skb_redirect(struct sk_psock *psock, struct sk_buff *skb)
++static void sk_psock_skb_redirect(struct sk_buff *skb)
+ {
+ struct sk_psock *psock_other;
+ struct sock *sk_other;
+@@ -715,12 +715,11 @@ static void sk_psock_skb_redirect(struct sk_psock *psock, struct sk_buff *skb)
+ }
+ }
+
+-static void sk_psock_tls_verdict_apply(struct sk_psock *psock,
+- struct sk_buff *skb, int verdict)
++static void sk_psock_tls_verdict_apply(struct sk_buff *skb, int verdict)
+ {
+ switch (verdict) {
+ case __SK_REDIRECT:
+- sk_psock_skb_redirect(psock, skb);
++ sk_psock_skb_redirect(skb);
+ break;
+ case __SK_PASS:
+ case __SK_DROP:
+@@ -741,8 +740,8 @@ int sk_psock_tls_strp_read(struct sk_psock *psock, struct sk_buff *skb)
+ ret = sk_psock_bpf_run(psock, prog, skb);
+ ret = sk_psock_map_verd(ret, tcp_skb_bpf_redirect_fetch(skb));
+ }
++ sk_psock_tls_verdict_apply(skb, ret);
+ rcu_read_unlock();
+- sk_psock_tls_verdict_apply(psock, skb, ret);
+ return ret;
+ }
+ EXPORT_SYMBOL_GPL(sk_psock_tls_strp_read);
+@@ -770,7 +769,7 @@ static void sk_psock_verdict_apply(struct sk_psock *psock,
+ }
+ goto out_free;
+ case __SK_REDIRECT:
+- sk_psock_skb_redirect(psock, skb);
++ sk_psock_skb_redirect(skb);
+ break;
+ case __SK_DROP:
+ /* fall-through */
+@@ -782,11 +781,18 @@ out_free:
+
+ static void sk_psock_strp_read(struct strparser *strp, struct sk_buff *skb)
+ {
+- struct sk_psock *psock = sk_psock_from_strp(strp);
++ struct sk_psock *psock;
+ struct bpf_prog *prog;
+ int ret = __SK_DROP;
++ struct sock *sk;
+
+ rcu_read_lock();
++ sk = strp->sk;
++ psock = sk_psock(sk);
++ if (unlikely(!psock)) {
++ kfree_skb(skb);
++ goto out;
++ }
+ prog = READ_ONCE(psock->progs.skb_verdict);
+ if (likely(prog)) {
+ skb_orphan(skb);
+@@ -794,8 +800,9 @@ static void sk_psock_strp_read(struct strparser *strp, struct sk_buff *skb)
+ ret = sk_psock_bpf_run(psock, prog, skb);
+ ret = sk_psock_map_verd(ret, tcp_skb_bpf_redirect_fetch(skb));
+ }
+- rcu_read_unlock();
+ sk_psock_verdict_apply(psock, skb, ret);
++out:
++ rcu_read_unlock();
+ }
+
+ static int sk_psock_strp_read_done(struct strparser *strp, int err)
+diff --git a/net/core/sysctl_net_core.c b/net/core/sysctl_net_core.c
+index 9f9e00ba3ad7..669cbe1609d9 100644
+--- a/net/core/sysctl_net_core.c
++++ b/net/core/sysctl_net_core.c
+@@ -277,7 +277,7 @@ static int proc_dointvec_minmax_bpf_enable(struct ctl_table *table, int write,
+ ret = proc_dointvec_minmax(&tmp, write, buffer, lenp, ppos);
+ if (write && !ret) {
+ if (jit_enable < 2 ||
+- (jit_enable == 2 && bpf_dump_raw_ok())) {
++ (jit_enable == 2 && bpf_dump_raw_ok(current_cred()))) {
+ *(int *)table->data = jit_enable;
+ if (jit_enable == 2)
+ pr_warn("bpf_jit_enable = 2 was set! NEVER use this in production, only for JIT debugging!\n");
+diff --git a/net/mac80211/tx.c b/net/mac80211/tx.c
+index 82846aca86d9..6ab33d9904ee 100644
+--- a/net/mac80211/tx.c
++++ b/net/mac80211/tx.c
+@@ -4192,7 +4192,7 @@ static void ieee80211_8023_xmit(struct ieee80211_sub_if_data *sdata,
+ (!sta || !test_sta_flag(sta, WLAN_STA_TDLS_PEER)))
+ ra = sdata->u.mgd.bssid;
+
+- if (!is_valid_ether_addr(ra))
++ if (is_zero_ether_addr(ra))
+ goto out_free;
+
+ multicast = is_multicast_ether_addr(ra);
+diff --git a/net/netfilter/ipset/ip_set_bitmap_ip.c b/net/netfilter/ipset/ip_set_bitmap_ip.c
+index 486959f70cf3..a8ce04a4bb72 100644
+--- a/net/netfilter/ipset/ip_set_bitmap_ip.c
++++ b/net/netfilter/ipset/ip_set_bitmap_ip.c
+@@ -326,7 +326,7 @@ bitmap_ip_create(struct net *net, struct ip_set *set, struct nlattr *tb[],
+ set->variant = &bitmap_ip;
+ if (!init_map_ip(set, map, first_ip, last_ip,
+ elements, hosts, netmask)) {
+- kfree(map);
++ ip_set_free(map);
+ return -ENOMEM;
+ }
+ if (tb[IPSET_ATTR_TIMEOUT]) {
+diff --git a/net/netfilter/ipset/ip_set_bitmap_ipmac.c b/net/netfilter/ipset/ip_set_bitmap_ipmac.c
+index 2310a316e0af..2c625e0f49ec 100644
+--- a/net/netfilter/ipset/ip_set_bitmap_ipmac.c
++++ b/net/netfilter/ipset/ip_set_bitmap_ipmac.c
+@@ -363,7 +363,7 @@ bitmap_ipmac_create(struct net *net, struct ip_set *set, struct nlattr *tb[],
+ map->memsize = BITS_TO_LONGS(elements) * sizeof(unsigned long);
+ set->variant = &bitmap_ipmac;
+ if (!init_map_ipmac(set, map, first_ip, last_ip, elements)) {
+- kfree(map);
++ ip_set_free(map);
+ return -ENOMEM;
+ }
+ if (tb[IPSET_ATTR_TIMEOUT]) {
+diff --git a/net/netfilter/ipset/ip_set_bitmap_port.c b/net/netfilter/ipset/ip_set_bitmap_port.c
+index e56ced66f202..7138e080def4 100644
+--- a/net/netfilter/ipset/ip_set_bitmap_port.c
++++ b/net/netfilter/ipset/ip_set_bitmap_port.c
+@@ -274,7 +274,7 @@ bitmap_port_create(struct net *net, struct ip_set *set, struct nlattr *tb[],
+ map->memsize = BITS_TO_LONGS(elements) * sizeof(unsigned long);
+ set->variant = &bitmap_port;
+ if (!init_map_port(set, map, first_port, last_port)) {
+- kfree(map);
++ ip_set_free(map);
+ return -ENOMEM;
+ }
+ if (tb[IPSET_ATTR_TIMEOUT]) {
+diff --git a/net/netfilter/ipset/ip_set_hash_gen.h b/net/netfilter/ipset/ip_set_hash_gen.h
+index 1ee43752d6d3..521e970be402 100644
+--- a/net/netfilter/ipset/ip_set_hash_gen.h
++++ b/net/netfilter/ipset/ip_set_hash_gen.h
+@@ -682,7 +682,7 @@ retry:
+ }
+ t->hregion = ip_set_alloc(ahash_sizeof_regions(htable_bits));
+ if (!t->hregion) {
+- kfree(t);
++ ip_set_free(t);
+ ret = -ENOMEM;
+ goto out;
+ }
+@@ -1533,7 +1533,7 @@ IPSET_TOKEN(HTYPE, _create)(struct net *net, struct ip_set *set,
+ }
+ t->hregion = ip_set_alloc(ahash_sizeof_regions(hbits));
+ if (!t->hregion) {
+- kfree(t);
++ ip_set_free(t);
+ kfree(h);
+ return -ENOMEM;
+ }
+diff --git a/net/netfilter/nf_conntrack_core.c b/net/netfilter/nf_conntrack_core.c
+index bb72ca5f3999..3ab6dbb6588e 100644
+--- a/net/netfilter/nf_conntrack_core.c
++++ b/net/netfilter/nf_conntrack_core.c
+@@ -2149,6 +2149,8 @@ static int nf_conntrack_update(struct net *net, struct sk_buff *skb)
+ err = __nf_conntrack_update(net, skb, ct, ctinfo);
+ if (err < 0)
+ return err;
++
++ ct = nf_ct_get(skb, &ctinfo);
+ }
+
+ return nf_confirm_cthelper(skb, ct, ctinfo);
+diff --git a/net/qrtr/qrtr.c b/net/qrtr/qrtr.c
+index 2d8d6131bc5f..7eccbbf6f8ad 100644
+--- a/net/qrtr/qrtr.c
++++ b/net/qrtr/qrtr.c
+@@ -427,7 +427,7 @@ int qrtr_endpoint_post(struct qrtr_endpoint *ep, const void *data, size_t len)
+ unsigned int ver;
+ size_t hdrlen;
+
+- if (len & 3)
++ if (len == 0 || len & 3)
+ return -EINVAL;
+
+ skb = netdev_alloc_skb(NULL, len);
+@@ -441,6 +441,8 @@ int qrtr_endpoint_post(struct qrtr_endpoint *ep, const void *data, size_t len)
+
+ switch (ver) {
+ case QRTR_PROTO_VER_1:
++ if (len < sizeof(*v1))
++ goto err;
+ v1 = data;
+ hdrlen = sizeof(*v1);
+
+@@ -454,6 +456,8 @@ int qrtr_endpoint_post(struct qrtr_endpoint *ep, const void *data, size_t len)
+ size = le32_to_cpu(v1->size);
+ break;
+ case QRTR_PROTO_VER_2:
++ if (len < sizeof(*v2))
++ goto err;
+ v2 = data;
+ hdrlen = sizeof(*v2) + v2->optlen;
+
+diff --git a/net/sunrpc/xprtrdma/verbs.c b/net/sunrpc/xprtrdma/verbs.c
+index 05c4d3a9cda2..db0259c6467e 100644
+--- a/net/sunrpc/xprtrdma/verbs.c
++++ b/net/sunrpc/xprtrdma/verbs.c
+@@ -84,7 +84,8 @@ static void rpcrdma_rep_destroy(struct rpcrdma_rep *rep);
+ static void rpcrdma_reps_unmap(struct rpcrdma_xprt *r_xprt);
+ static void rpcrdma_mrs_create(struct rpcrdma_xprt *r_xprt);
+ static void rpcrdma_mrs_destroy(struct rpcrdma_xprt *r_xprt);
+-static int rpcrdma_ep_destroy(struct rpcrdma_ep *ep);
++static void rpcrdma_ep_get(struct rpcrdma_ep *ep);
++static int rpcrdma_ep_put(struct rpcrdma_ep *ep);
+ static struct rpcrdma_regbuf *
+ rpcrdma_regbuf_alloc(size_t size, enum dma_data_direction direction,
+ gfp_t flags);
+@@ -97,7 +98,8 @@ static void rpcrdma_regbuf_free(struct rpcrdma_regbuf *rb);
+ */
+ static void rpcrdma_xprt_drain(struct rpcrdma_xprt *r_xprt)
+ {
+- struct rdma_cm_id *id = r_xprt->rx_ep->re_id;
++ struct rpcrdma_ep *ep = r_xprt->rx_ep;
++ struct rdma_cm_id *id = ep->re_id;
+
+ /* Flush Receives, then wait for deferred Reply work
+ * to complete.
+@@ -108,6 +110,8 @@ static void rpcrdma_xprt_drain(struct rpcrdma_xprt *r_xprt)
+ * local invalidations.
+ */
+ ib_drain_sq(id->qp);
++
++ rpcrdma_ep_put(ep);
+ }
+
+ /**
+@@ -267,7 +271,7 @@ rpcrdma_cm_event_handler(struct rdma_cm_id *id, struct rdma_cm_event *event)
+ xprt_force_disconnect(xprt);
+ goto disconnected;
+ case RDMA_CM_EVENT_ESTABLISHED:
+- kref_get(&ep->re_kref);
++ rpcrdma_ep_get(ep);
+ ep->re_connect_status = 1;
+ rpcrdma_update_cm_private(ep, &event->param.conn);
+ trace_xprtrdma_inline_thresh(ep);
+@@ -290,7 +294,7 @@ rpcrdma_cm_event_handler(struct rdma_cm_id *id, struct rdma_cm_event *event)
+ ep->re_connect_status = -ECONNABORTED;
+ disconnected:
+ xprt_force_disconnect(xprt);
+- return rpcrdma_ep_destroy(ep);
++ return rpcrdma_ep_put(ep);
+ default:
+ break;
+ }
+@@ -346,7 +350,7 @@ out:
+ return ERR_PTR(rc);
+ }
+
+-static void rpcrdma_ep_put(struct kref *kref)
++static void rpcrdma_ep_destroy(struct kref *kref)
+ {
+ struct rpcrdma_ep *ep = container_of(kref, struct rpcrdma_ep, re_kref);
+
+@@ -370,13 +374,18 @@ static void rpcrdma_ep_put(struct kref *kref)
+ module_put(THIS_MODULE);
+ }
+
++static noinline void rpcrdma_ep_get(struct rpcrdma_ep *ep)
++{
++ kref_get(&ep->re_kref);
++}
++
+ /* Returns:
+ * %0 if @ep still has a positive kref count, or
+ * %1 if @ep was destroyed successfully.
+ */
+-static int rpcrdma_ep_destroy(struct rpcrdma_ep *ep)
++static noinline int rpcrdma_ep_put(struct rpcrdma_ep *ep)
+ {
+- return kref_put(&ep->re_kref, rpcrdma_ep_put);
++ return kref_put(&ep->re_kref, rpcrdma_ep_destroy);
+ }
+
+ static int rpcrdma_ep_create(struct rpcrdma_xprt *r_xprt)
+@@ -493,7 +502,7 @@ static int rpcrdma_ep_create(struct rpcrdma_xprt *r_xprt)
+ return 0;
+
+ out_destroy:
+- rpcrdma_ep_destroy(ep);
++ rpcrdma_ep_put(ep);
+ rdma_destroy_id(id);
+ out_free:
+ kfree(ep);
+@@ -522,8 +531,12 @@ retry:
+
+ ep->re_connect_status = 0;
+ xprt_clear_connected(xprt);
+-
+ rpcrdma_reset_cwnd(r_xprt);
++
++ /* Bump the ep's reference count while there are
++ * outstanding Receives.
++ */
++ rpcrdma_ep_get(ep);
+ rpcrdma_post_recvs(r_xprt, true);
+
+ rc = rpcrdma_sendctxs_create(r_xprt);
+@@ -588,7 +601,7 @@ void rpcrdma_xprt_disconnect(struct rpcrdma_xprt *r_xprt)
+ rpcrdma_mrs_destroy(r_xprt);
+ rpcrdma_sendctxs_destroy(r_xprt);
+
+- if (rpcrdma_ep_destroy(ep))
++ if (rpcrdma_ep_put(ep))
+ rdma_destroy_id(id);
+
+ r_xprt->rx_ep = NULL;
+diff --git a/net/wireless/nl80211.c b/net/wireless/nl80211.c
+index 692bcd35f809..7ae6b90e0d26 100644
+--- a/net/wireless/nl80211.c
++++ b/net/wireless/nl80211.c
+@@ -5004,7 +5004,8 @@ static int nl80211_start_ap(struct sk_buff *skb, struct genl_info *info)
+ err = nl80211_parse_he_obss_pd(
+ info->attrs[NL80211_ATTR_HE_OBSS_PD],
+ &params.he_obss_pd);
+- goto out;
++ if (err)
++ goto out;
+ }
+
+ if (info->attrs[NL80211_ATTR_HE_BSS_COLOR]) {
+@@ -5012,7 +5013,7 @@ static int nl80211_start_ap(struct sk_buff *skb, struct genl_info *info)
+ info->attrs[NL80211_ATTR_HE_BSS_COLOR],
+ &params.he_bss_color);
+ if (err)
+- return err;
++ goto out;
+ }
+
+ nl80211_calculate_ap_params(&params);
+diff --git a/sound/core/compress_offload.c b/sound/core/compress_offload.c
+index 509290f2efa8..0e53f6f31916 100644
+--- a/sound/core/compress_offload.c
++++ b/sound/core/compress_offload.c
+@@ -764,6 +764,9 @@ static int snd_compr_stop(struct snd_compr_stream *stream)
+
+ retval = stream->ops->trigger(stream, SNDRV_PCM_TRIGGER_STOP);
+ if (!retval) {
++ /* clear flags and stop any drain wait */
++ stream->partial_drain = false;
++ stream->metadata_set = false;
+ snd_compr_drain_notify(stream);
+ stream->runtime->total_bytes_available = 0;
+ stream->runtime->total_bytes_transferred = 0;
+@@ -921,6 +924,7 @@ static int snd_compr_partial_drain(struct snd_compr_stream *stream)
+ if (stream->next_track == false)
+ return -EPERM;
+
++ stream->partial_drain = true;
+ retval = stream->ops->trigger(stream, SND_COMPR_TRIGGER_PARTIAL_DRAIN);
+ if (retval) {
+ pr_debug("Partial drain returned failure\n");
+diff --git a/sound/drivers/opl3/opl3_synth.c b/sound/drivers/opl3/opl3_synth.c
+index e69a4ef0d6bd..08c10ac9d6c8 100644
+--- a/sound/drivers/opl3/opl3_synth.c
++++ b/sound/drivers/opl3/opl3_synth.c
+@@ -91,6 +91,8 @@ int snd_opl3_ioctl(struct snd_hwdep * hw, struct file *file,
+ {
+ struct snd_dm_fm_info info;
+
++ memset(&info, 0, sizeof(info));
++
+ info.fm_mode = opl3->fm_mode;
+ info.rhythm = opl3->rhythm;
+ if (copy_to_user(argp, &info, sizeof(struct snd_dm_fm_info)))
+diff --git a/sound/pci/hda/hda_auto_parser.c b/sound/pci/hda/hda_auto_parser.c
+index 2c6d2becfe1a..824f4ac1a8ce 100644
+--- a/sound/pci/hda/hda_auto_parser.c
++++ b/sound/pci/hda/hda_auto_parser.c
+@@ -72,6 +72,12 @@ static int compare_input_type(const void *ap, const void *bp)
+ if (a->type != b->type)
+ return (int)(a->type - b->type);
+
++ /* If has both hs_mic and hp_mic, pick the hs_mic ahead of hp_mic. */
++ if (a->is_headset_mic && b->is_headphone_mic)
++ return -1; /* don't swap */
++ else if (a->is_headphone_mic && b->is_headset_mic)
++ return 1; /* swap */
++
+ /* In case one has boost and the other one has not,
+ pick the one with boost first. */
+ return (int)(b->has_boost_on_pin - a->has_boost_on_pin);
+diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c
+index 41a03c61a74b..11ec5c56c80e 100644
+--- a/sound/pci/hda/hda_intel.c
++++ b/sound/pci/hda/hda_intel.c
+@@ -2470,6 +2470,9 @@ static const struct pci_device_id azx_ids[] = {
+ /* Icelake */
+ { PCI_DEVICE(0x8086, 0x34c8),
+ .driver_data = AZX_DRIVER_SKL | AZX_DCAPS_INTEL_SKYLAKE},
++ /* Icelake-H */
++ { PCI_DEVICE(0x8086, 0x3dc8),
++ .driver_data = AZX_DRIVER_SKL | AZX_DCAPS_INTEL_SKYLAKE},
+ /* Jasperlake */
+ { PCI_DEVICE(0x8086, 0x38c8),
+ .driver_data = AZX_DRIVER_SKL | AZX_DCAPS_INTEL_SKYLAKE},
+@@ -2478,9 +2481,14 @@ static const struct pci_device_id azx_ids[] = {
+ /* Tigerlake */
+ { PCI_DEVICE(0x8086, 0xa0c8),
+ .driver_data = AZX_DRIVER_SKL | AZX_DCAPS_INTEL_SKYLAKE},
++ /* Tigerlake-H */
++ { PCI_DEVICE(0x8086, 0x43c8),
++ .driver_data = AZX_DRIVER_SKL | AZX_DCAPS_INTEL_SKYLAKE},
+ /* Elkhart Lake */
+ { PCI_DEVICE(0x8086, 0x4b55),
+ .driver_data = AZX_DRIVER_SKL | AZX_DCAPS_INTEL_SKYLAKE},
++ { PCI_DEVICE(0x8086, 0x4b58),
++ .driver_data = AZX_DRIVER_SKL | AZX_DCAPS_INTEL_SKYLAKE},
+ /* Broxton-P(Apollolake) */
+ { PCI_DEVICE(0x8086, 0x5a98),
+ .driver_data = AZX_DRIVER_SKL | AZX_DCAPS_INTEL_BROXTON },
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index cb689878ba20..16ecc8515db8 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -6114,6 +6114,9 @@ enum {
+ ALC236_FIXUP_HP_MUTE_LED,
+ ALC298_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET,
+ ALC295_FIXUP_ASUS_MIC_NO_PRESENCE,
++ ALC269VC_FIXUP_ACER_VCOPPERBOX_PINS,
++ ALC269VC_FIXUP_ACER_HEADSET_MIC,
++ ALC269VC_FIXUP_ACER_MIC_NO_PRESENCE,
+ };
+
+ static const struct hda_fixup alc269_fixups[] = {
+@@ -7292,6 +7295,35 @@ static const struct hda_fixup alc269_fixups[] = {
+ .chained = true,
+ .chain_id = ALC269_FIXUP_HEADSET_MODE
+ },
++ [ALC269VC_FIXUP_ACER_VCOPPERBOX_PINS] = {
++ .type = HDA_FIXUP_PINS,
++ .v.pins = (const struct hda_pintbl[]) {
++ { 0x14, 0x90100120 }, /* use as internal speaker */
++ { 0x18, 0x02a111f0 }, /* use as headset mic, without its own jack detect */
++ { 0x1a, 0x01011020 }, /* use as line out */
++ { },
++ },
++ .chained = true,
++ .chain_id = ALC269_FIXUP_HEADSET_MIC
++ },
++ [ALC269VC_FIXUP_ACER_HEADSET_MIC] = {
++ .type = HDA_FIXUP_PINS,
++ .v.pins = (const struct hda_pintbl[]) {
++ { 0x18, 0x02a11030 }, /* use as headset mic */
++ { }
++ },
++ .chained = true,
++ .chain_id = ALC269_FIXUP_HEADSET_MIC
++ },
++ [ALC269VC_FIXUP_ACER_MIC_NO_PRESENCE] = {
++ .type = HDA_FIXUP_PINS,
++ .v.pins = (const struct hda_pintbl[]) {
++ { 0x18, 0x01a11130 }, /* use as headset mic, without its own jack detect */
++ { }
++ },
++ .chained = true,
++ .chain_id = ALC269_FIXUP_HEADSET_MIC
++ },
+ };
+
+ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+@@ -7307,10 +7339,13 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x1025, 0x0775, "Acer Aspire E1-572", ALC271_FIXUP_HP_GATE_MIC_JACK_E1_572),
+ SND_PCI_QUIRK(0x1025, 0x079b, "Acer Aspire V5-573G", ALC282_FIXUP_ASPIRE_V5_PINS),
+ SND_PCI_QUIRK(0x1025, 0x102b, "Acer Aspire C24-860", ALC286_FIXUP_ACER_AIO_MIC_NO_PRESENCE),
++ SND_PCI_QUIRK(0x1025, 0x1065, "Acer Aspire C20-820", ALC269VC_FIXUP_ACER_HEADSET_MIC),
+ SND_PCI_QUIRK(0x1025, 0x106d, "Acer Cloudbook 14", ALC283_FIXUP_CHROME_BOOK),
+ SND_PCI_QUIRK(0x1025, 0x1099, "Acer Aspire E5-523G", ALC255_FIXUP_ACER_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1025, 0x110e, "Acer Aspire ES1-432", ALC255_FIXUP_ACER_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1025, 0x1246, "Acer Predator Helios 500", ALC299_FIXUP_PREDATOR_SPK),
++ SND_PCI_QUIRK(0x1025, 0x1247, "Acer vCopperbox", ALC269VC_FIXUP_ACER_VCOPPERBOX_PINS),
++ SND_PCI_QUIRK(0x1025, 0x1248, "Acer Veriton N4660G", ALC269VC_FIXUP_ACER_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1025, 0x128f, "Acer Veriton Z6860G", ALC286_FIXUP_ACER_AIO_HEADSET_MIC),
+ SND_PCI_QUIRK(0x1025, 0x1290, "Acer Veriton Z4860G", ALC286_FIXUP_ACER_AIO_HEADSET_MIC),
+ SND_PCI_QUIRK(0x1025, 0x1291, "Acer Veriton Z4660G", ALC286_FIXUP_ACER_AIO_HEADSET_MIC),
+@@ -7536,8 +7571,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x17aa, 0x224c, "Thinkpad", ALC298_FIXUP_TPT470_DOCK),
+ SND_PCI_QUIRK(0x17aa, 0x224d, "Thinkpad", ALC298_FIXUP_TPT470_DOCK),
+ SND_PCI_QUIRK(0x17aa, 0x225d, "Thinkpad T480", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
+- SND_PCI_QUIRK(0x17aa, 0x2292, "Thinkpad X1 Yoga 7th", ALC285_FIXUP_THINKPAD_HEADSET_JACK),
+- SND_PCI_QUIRK(0x17aa, 0x2293, "Thinkpad X1 Carbon 7th", ALC285_FIXUP_THINKPAD_HEADSET_JACK),
++ SND_PCI_QUIRK(0x17aa, 0x2292, "Thinkpad X1 Carbon 7th", ALC285_FIXUP_THINKPAD_HEADSET_JACK),
+ SND_PCI_QUIRK(0x17aa, 0x22be, "Thinkpad X1 Carbon 8th", ALC285_FIXUP_THINKPAD_HEADSET_JACK),
+ SND_PCI_QUIRK(0x17aa, 0x30bb, "ThinkCentre AIO", ALC233_FIXUP_LENOVO_LINE2_MIC_HOTKEY),
+ SND_PCI_QUIRK(0x17aa, 0x30e2, "ThinkCentre AIO", ALC233_FIXUP_LENOVO_LINE2_MIC_HOTKEY),
+diff --git a/sound/soc/codecs/hdac_hda.c b/sound/soc/codecs/hdac_hda.c
+index de003acb1951..473efe9ef998 100644
+--- a/sound/soc/codecs/hdac_hda.c
++++ b/sound/soc/codecs/hdac_hda.c
+@@ -441,13 +441,13 @@ static int hdac_hda_codec_probe(struct snd_soc_component *component)
+ ret = snd_hda_codec_set_name(hcodec, hcodec->preset->name);
+ if (ret < 0) {
+ dev_err(&hdev->dev, "name failed %s\n", hcodec->preset->name);
+- goto error;
++ goto error_pm;
+ }
+
+ ret = snd_hdac_regmap_init(&hcodec->core);
+ if (ret < 0) {
+ dev_err(&hdev->dev, "regmap init failed\n");
+- goto error;
++ goto error_pm;
+ }
+
+ patch = (hda_codec_patch_t)hcodec->preset->driver_data;
+@@ -455,7 +455,7 @@ static int hdac_hda_codec_probe(struct snd_soc_component *component)
+ ret = patch(hcodec);
+ if (ret < 0) {
+ dev_err(&hdev->dev, "patch failed %d\n", ret);
+- goto error;
++ goto error_regmap;
+ }
+ } else {
+ dev_dbg(&hdev->dev, "no patch file found\n");
+@@ -467,7 +467,7 @@ static int hdac_hda_codec_probe(struct snd_soc_component *component)
+ ret = snd_hda_codec_parse_pcms(hcodec);
+ if (ret < 0) {
+ dev_err(&hdev->dev, "unable to map pcms to dai %d\n", ret);
+- goto error;
++ goto error_regmap;
+ }
+
+ /* HDMI controls need to be created in machine drivers */
+@@ -476,7 +476,7 @@ static int hdac_hda_codec_probe(struct snd_soc_component *component)
+ if (ret < 0) {
+ dev_err(&hdev->dev, "unable to create controls %d\n",
+ ret);
+- goto error;
++ goto error_regmap;
+ }
+ }
+
+@@ -496,7 +496,9 @@ static int hdac_hda_codec_probe(struct snd_soc_component *component)
+
+ return 0;
+
+-error:
++error_regmap:
++ snd_hdac_regmap_exit(hdev);
++error_pm:
+ pm_runtime_put(&hdev->dev);
+ error_no_pm:
+ snd_hdac_ext_bus_link_put(hdev->bus, hlink);
+@@ -518,6 +520,8 @@ static void hdac_hda_codec_remove(struct snd_soc_component *component)
+
+ pm_runtime_disable(&hdev->dev);
+ snd_hdac_ext_bus_link_put(hdev->bus, hlink);
++
++ snd_hdac_regmap_exit(hdev);
+ }
+
+ static const struct snd_soc_dapm_route hdac_hda_dapm_routes[] = {
+diff --git a/sound/soc/fsl/fsl_mqs.c b/sound/soc/fsl/fsl_mqs.c
+index 0c813a45bba7..69aeb0e71844 100644
+--- a/sound/soc/fsl/fsl_mqs.c
++++ b/sound/soc/fsl/fsl_mqs.c
+@@ -265,12 +265,20 @@ static int fsl_mqs_remove(struct platform_device *pdev)
+ static int fsl_mqs_runtime_resume(struct device *dev)
+ {
+ struct fsl_mqs *mqs_priv = dev_get_drvdata(dev);
++ int ret;
+
+- if (mqs_priv->ipg)
+- clk_prepare_enable(mqs_priv->ipg);
++ ret = clk_prepare_enable(mqs_priv->ipg);
++ if (ret) {
++ dev_err(dev, "failed to enable ipg clock\n");
++ return ret;
++ }
+
+- if (mqs_priv->mclk)
+- clk_prepare_enable(mqs_priv->mclk);
++ ret = clk_prepare_enable(mqs_priv->mclk);
++ if (ret) {
++ dev_err(dev, "failed to enable mclk clock\n");
++ clk_disable_unprepare(mqs_priv->ipg);
++ return ret;
++ }
+
+ if (mqs_priv->use_gpr)
+ regmap_write(mqs_priv->regmap, IOMUXC_GPR2,
+@@ -292,11 +300,8 @@ static int fsl_mqs_runtime_suspend(struct device *dev)
+ regmap_read(mqs_priv->regmap, REG_MQS_CTRL,
+ &mqs_priv->reg_mqs_ctrl);
+
+- if (mqs_priv->mclk)
+- clk_disable_unprepare(mqs_priv->mclk);
+-
+- if (mqs_priv->ipg)
+- clk_disable_unprepare(mqs_priv->ipg);
++ clk_disable_unprepare(mqs_priv->mclk);
++ clk_disable_unprepare(mqs_priv->ipg);
+
+ return 0;
+ }
+diff --git a/sound/soc/sof/sof-pci-dev.c b/sound/soc/sof/sof-pci-dev.c
+index cec631a1389b..7b1846aeadd5 100644
+--- a/sound/soc/sof/sof-pci-dev.c
++++ b/sound/soc/sof/sof-pci-dev.c
+@@ -427,6 +427,8 @@ static const struct pci_device_id sof_pci_ids[] = {
+ #if IS_ENABLED(CONFIG_SND_SOC_SOF_COMETLAKE_H)
+ { PCI_DEVICE(0x8086, 0x06c8),
+ .driver_data = (unsigned long)&cml_desc},
++ { PCI_DEVICE(0x8086, 0xa3f0), /* CML-S */
++ .driver_data = (unsigned long)&cml_desc},
+ #endif
+ #if IS_ENABLED(CONFIG_SND_SOC_SOF_TIGERLAKE)
+ { PCI_DEVICE(0x8086, 0xa0c8),
+diff --git a/sound/usb/pcm.c b/sound/usb/pcm.c
+index c73efdf7545e..9702c4311b91 100644
+--- a/sound/usb/pcm.c
++++ b/sound/usb/pcm.c
+@@ -368,6 +368,7 @@ static int set_sync_ep_implicit_fb_quirk(struct snd_usb_substream *subs,
+ goto add_sync_ep_from_ifnum;
+ case USB_ID(0x07fd, 0x0008): /* MOTU M Series */
+ case USB_ID(0x31e9, 0x0002): /* Solid State Logic SSL2+ */
++ case USB_ID(0x0d9a, 0x00df): /* RTX6001 */
+ ep = 0x81;
+ ifnum = 2;
+ goto add_sync_ep_from_ifnum;
+diff --git a/sound/usb/quirks-table.h b/sound/usb/quirks-table.h
+index 0bf370d89556..562179492a33 100644
+--- a/sound/usb/quirks-table.h
++++ b/sound/usb/quirks-table.h
+@@ -3611,4 +3611,56 @@ ALC1220_VB_DESKTOP(0x26ce, 0x0a01), /* Asrock TRX40 Creator */
+ }
+ },
+
++/*
++ * MacroSilicon MS2109 based HDMI capture cards
++ *
++ * These claim 96kHz 1ch in the descriptors, but are actually 48kHz 2ch.
++ * They also need QUIRK_AUDIO_ALIGN_TRANSFER, which makes one wonder if
++ * they pretend to be 96kHz mono as a workaround for stereo being broken
++ * by that...
++ *
++ * They also have swapped L-R channels, but that's for userspace to deal
++ * with.
++ */
++{
++ USB_DEVICE(0x534d, 0x2109),
++ .driver_info = (unsigned long) &(const struct snd_usb_audio_quirk) {
++ .vendor_name = "MacroSilicon",
++ .product_name = "MS2109",
++ .ifnum = QUIRK_ANY_INTERFACE,
++ .type = QUIRK_COMPOSITE,
++ .data = &(const struct snd_usb_audio_quirk[]) {
++ {
++ .ifnum = 2,
++ .type = QUIRK_AUDIO_ALIGN_TRANSFER,
++ },
++ {
++ .ifnum = 2,
++ .type = QUIRK_AUDIO_STANDARD_MIXER,
++ },
++ {
++ .ifnum = 3,
++ .type = QUIRK_AUDIO_FIXED_ENDPOINT,
++ .data = &(const struct audioformat) {
++ .formats = SNDRV_PCM_FMTBIT_S16_LE,
++ .channels = 2,
++ .iface = 3,
++ .altsetting = 1,
++ .altset_idx = 1,
++ .attributes = 0,
++ .endpoint = 0x82,
++ .ep_attr = USB_ENDPOINT_XFER_ISOC |
++ USB_ENDPOINT_SYNC_ASYNC,
++ .rates = SNDRV_PCM_RATE_CONTINUOUS,
++ .rate_min = 48000,
++ .rate_max = 48000,
++ }
++ },
++ {
++ .ifnum = -1
++ }
++ }
++ }
++},
++
+ #undef USB_DEVICE_VENDOR_SPEC
+diff --git a/tools/perf/arch/x86/util/intel-pt.c b/tools/perf/arch/x86/util/intel-pt.c
+index 1643aed8c4c8..2a548fbdf2a2 100644
+--- a/tools/perf/arch/x86/util/intel-pt.c
++++ b/tools/perf/arch/x86/util/intel-pt.c
+@@ -634,6 +634,7 @@ static int intel_pt_recording_options(struct auxtrace_record *itr,
+ }
+ evsel->core.attr.freq = 0;
+ evsel->core.attr.sample_period = 1;
++ evsel->no_aux_samples = true;
+ intel_pt_evsel = evsel;
+ opts->full_auxtrace = true;
+ }
+diff --git a/tools/perf/scripts/python/export-to-postgresql.py b/tools/perf/scripts/python/export-to-postgresql.py
+index 7bd73a904b4e..d187e46c2683 100644
+--- a/tools/perf/scripts/python/export-to-postgresql.py
++++ b/tools/perf/scripts/python/export-to-postgresql.py
+@@ -1055,7 +1055,7 @@ def cbr(id, raw_buf):
+ cbr = data[0]
+ MHz = (data[4] + 500) / 1000
+ percent = ((cbr * 1000 / data[2]) + 5) / 10
+- value = struct.pack("!hiqiiiiii", 4, 8, id, 4, cbr, 4, MHz, 4, percent)
++ value = struct.pack("!hiqiiiiii", 4, 8, id, 4, cbr, 4, int(MHz), 4, int(percent))
+ cbr_file.write(value)
+
+ def mwait(id, raw_buf):
+diff --git a/tools/perf/scripts/python/exported-sql-viewer.py b/tools/perf/scripts/python/exported-sql-viewer.py
+index 26d7be785288..7daa8bb70a5a 100755
+--- a/tools/perf/scripts/python/exported-sql-viewer.py
++++ b/tools/perf/scripts/python/exported-sql-viewer.py
+@@ -768,7 +768,8 @@ class CallGraphModel(CallGraphModelBase):
+ " FROM calls"
+ " INNER JOIN call_paths ON calls.call_path_id = call_paths.id"
+ " INNER JOIN symbols ON call_paths.symbol_id = symbols.id"
+- " WHERE symbols.name" + match +
++ " WHERE calls.id <> 0"
++ " AND symbols.name" + match +
+ " GROUP BY comm_id, thread_id, call_path_id"
+ " ORDER BY comm_id, thread_id, call_path_id")
+
+@@ -963,7 +964,8 @@ class CallTreeModel(CallGraphModelBase):
+ " FROM calls"
+ " INNER JOIN call_paths ON calls.call_path_id = call_paths.id"
+ " INNER JOIN symbols ON call_paths.symbol_id = symbols.id"
+- " WHERE symbols.name" + match +
++ " WHERE calls.id <> 0"
++ " AND symbols.name" + match +
+ " ORDER BY comm_id, thread_id, call_time, calls.id")
+
+ def FindPath(self, query):
+@@ -1050,6 +1052,7 @@ class TreeWindowBase(QMdiSubWindow):
+ child = self.model.index(row, 0, parent)
+ if child.internalPointer().dbid == dbid:
+ found = True
++ self.view.setExpanded(parent, True)
+ self.view.setCurrentIndex(child)
+ parent = child
+ break
+@@ -1127,6 +1130,7 @@ class CallTreeWindow(TreeWindowBase):
+ child = self.model.index(row, 0, parent)
+ if child.internalPointer().dbid == dbid:
+ found = True
++ self.view.setExpanded(parent, True)
+ self.view.setCurrentIndex(child)
+ parent = child
+ break
+@@ -1139,6 +1143,7 @@ class CallTreeWindow(TreeWindowBase):
+ return
+ last_child = None
+ for row in xrange(n):
++ self.view.setExpanded(parent, True)
+ child = self.model.index(row, 0, parent)
+ child_call_time = child.internalPointer().call_time
+ if child_call_time < time:
+@@ -1151,9 +1156,11 @@ class CallTreeWindow(TreeWindowBase):
+ if not last_child:
+ if not found:
+ child = self.model.index(0, 0, parent)
++ self.view.setExpanded(parent, True)
+ self.view.setCurrentIndex(child)
+ return
+ found = True
++ self.view.setExpanded(parent, True)
+ self.view.setCurrentIndex(last_child)
+ parent = last_child
+
+diff --git a/tools/perf/ui/browsers/hists.c b/tools/perf/ui/browsers/hists.c
+index 487e54ef56a9..2101b6b770d8 100644
+--- a/tools/perf/ui/browsers/hists.c
++++ b/tools/perf/ui/browsers/hists.c
+@@ -2288,6 +2288,11 @@ static struct thread *hist_browser__selected_thread(struct hist_browser *browser
+ return browser->he_selection->thread;
+ }
+
++static struct res_sample *hist_browser__selected_res_sample(struct hist_browser *browser)
++{
++ return browser->he_selection ? browser->he_selection->res_samples : NULL;
++}
++
+ /* Check whether the browser is for 'top' or 'report' */
+ static inline bool is_report_browser(void *timer)
+ {
+@@ -3357,16 +3362,16 @@ skip_annotation:
+ &options[nr_options], NULL, NULL, evsel);
+ nr_options += add_res_sample_opt(browser, &actions[nr_options],
+ &options[nr_options],
+- hist_browser__selected_entry(browser)->res_samples,
+- evsel, A_NORMAL);
++ hist_browser__selected_res_sample(browser),
++ evsel, A_NORMAL);
+ nr_options += add_res_sample_opt(browser, &actions[nr_options],
+ &options[nr_options],
+- hist_browser__selected_entry(browser)->res_samples,
+- evsel, A_ASM);
++ hist_browser__selected_res_sample(browser),
++ evsel, A_ASM);
+ nr_options += add_res_sample_opt(browser, &actions[nr_options],
+ &options[nr_options],
+- hist_browser__selected_entry(browser)->res_samples,
+- evsel, A_SOURCE);
++ hist_browser__selected_res_sample(browser),
++ evsel, A_SOURCE);
+ nr_options += add_switch_opt(browser, &actions[nr_options],
+ &options[nr_options]);
+ skip_scripting:
+diff --git a/tools/perf/util/evsel.c b/tools/perf/util/evsel.c
+index eb880efbce16..386950f29792 100644
+--- a/tools/perf/util/evsel.c
++++ b/tools/perf/util/evsel.c
+@@ -1048,12 +1048,12 @@ void perf_evsel__config(struct evsel *evsel, struct record_opts *opts,
+ if (callchain && callchain->enabled && !evsel->no_aux_samples)
+ perf_evsel__config_callchain(evsel, opts, callchain);
+
+- if (opts->sample_intr_regs) {
++ if (opts->sample_intr_regs && !evsel->no_aux_samples) {
+ attr->sample_regs_intr = opts->sample_intr_regs;
+ perf_evsel__set_sample_bit(evsel, REGS_INTR);
+ }
+
+- if (opts->sample_user_regs) {
++ if (opts->sample_user_regs && !evsel->no_aux_samples) {
+ attr->sample_regs_user |= opts->sample_user_regs;
+ perf_evsel__set_sample_bit(evsel, REGS_USER);
+ }
+diff --git a/tools/perf/util/intel-pt.c b/tools/perf/util/intel-pt.c
+index 23c8289c2472..545d1cdc0ec8 100644
+--- a/tools/perf/util/intel-pt.c
++++ b/tools/perf/util/intel-pt.c
+@@ -1731,6 +1731,7 @@ static int intel_pt_synth_pebs_sample(struct intel_pt_queue *ptq)
+ u64 sample_type = evsel->core.attr.sample_type;
+ u64 id = evsel->core.id[0];
+ u8 cpumode;
++ u64 regs[8 * sizeof(sample.intr_regs.mask)];
+
+ if (intel_pt_skip_event(pt))
+ return 0;
+@@ -1780,8 +1781,8 @@ static int intel_pt_synth_pebs_sample(struct intel_pt_queue *ptq)
+ }
+
+ if (sample_type & PERF_SAMPLE_REGS_INTR &&
+- items->mask[INTEL_PT_GP_REGS_POS]) {
+- u64 regs[sizeof(sample.intr_regs.mask)];
++ (items->mask[INTEL_PT_GP_REGS_POS] ||
++ items->mask[INTEL_PT_XMM_POS])) {
+ u64 regs_mask = evsel->core.attr.sample_regs_intr;
+ u64 *pos;
+
+diff --git a/tools/testing/selftests/bpf/test_maps.c b/tools/testing/selftests/bpf/test_maps.c
+index c6766b2cff85..9990e91c18df 100644
+--- a/tools/testing/selftests/bpf/test_maps.c
++++ b/tools/testing/selftests/bpf/test_maps.c
+@@ -789,19 +789,19 @@ static void test_sockmap(unsigned int tasks, void *data)
+ }
+
+ err = bpf_prog_detach(fd, BPF_SK_SKB_STREAM_PARSER);
+- if (err) {
++ if (!err) {
+ printf("Failed empty parser prog detach\n");
+ goto out_sockmap;
+ }
+
+ err = bpf_prog_detach(fd, BPF_SK_SKB_STREAM_VERDICT);
+- if (err) {
++ if (!err) {
+ printf("Failed empty verdict prog detach\n");
+ goto out_sockmap;
+ }
+
+ err = bpf_prog_detach(fd, BPF_SK_MSG_VERDICT);
+- if (err) {
++ if (!err) {
+ printf("Failed empty msg verdict prog detach\n");
+ goto out_sockmap;
+ }
+@@ -1090,19 +1090,19 @@ static void test_sockmap(unsigned int tasks, void *data)
+ assert(status == 0);
+ }
+
+- err = bpf_prog_detach(map_fd_rx, __MAX_BPF_ATTACH_TYPE);
++ err = bpf_prog_detach2(parse_prog, map_fd_rx, __MAX_BPF_ATTACH_TYPE);
+ if (!err) {
+ printf("Detached an invalid prog type.\n");
+ goto out_sockmap;
+ }
+
+- err = bpf_prog_detach(map_fd_rx, BPF_SK_SKB_STREAM_PARSER);
++ err = bpf_prog_detach2(parse_prog, map_fd_rx, BPF_SK_SKB_STREAM_PARSER);
+ if (err) {
+ printf("Failed parser prog detach\n");
+ goto out_sockmap;
+ }
+
+- err = bpf_prog_detach(map_fd_rx, BPF_SK_SKB_STREAM_VERDICT);
++ err = bpf_prog_detach2(verdict_prog, map_fd_rx, BPF_SK_SKB_STREAM_VERDICT);
+ if (err) {
+ printf("Failed parser prog detach\n");
+ goto out_sockmap;
+diff --git a/virt/kvm/arm/vgic/vgic-v4.c b/virt/kvm/arm/vgic/vgic-v4.c
+index 27ac833e5ec7..b5fa73c9fd35 100644
+--- a/virt/kvm/arm/vgic/vgic-v4.c
++++ b/virt/kvm/arm/vgic/vgic-v4.c
+@@ -90,7 +90,15 @@ static irqreturn_t vgic_v4_doorbell_handler(int irq, void *info)
+ !irqd_irq_disabled(&irq_to_desc(irq)->irq_data))
+ disable_irq_nosync(irq);
+
++ /*
++ * The v4.1 doorbell can fire concurrently with the vPE being
++ * made non-resident. Ensure we only update pending_last
++ * *after* the non-residency sequence has completed.
++ */
++ raw_spin_lock(&vcpu->arch.vgic_cpu.vgic_v3.its_vpe.vpe_lock);
+ vcpu->arch.vgic_cpu.vgic_v3.its_vpe.pending_last = true;
++ raw_spin_unlock(&vcpu->arch.vgic_cpu.vgic_v3.its_vpe.vpe_lock);
++
+ kvm_make_request(KVM_REQ_IRQ_PENDING, vcpu);
+ kvm_vcpu_kick(vcpu);
+
^ permalink raw reply related [flat|nested] 25+ messages in thread
* [gentoo-commits] proj/linux-patches:5.7 commit in: /
@ 2020-07-22 12:59 Mike Pagano
0 siblings, 0 replies; 25+ messages in thread
From: Mike Pagano @ 2020-07-22 12:59 UTC (permalink / raw
To: gentoo-commits
commit: a367b5a8f0ff97119cf528647086b7ad4b670728
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Jul 22 12:59:38 2020 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Jul 22 12:59:38 2020 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=a367b5a8
Linux patch 5.7.10
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1009_linux-5.7.10.patch | 9109 +++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 9113 insertions(+)
diff --git a/0000_README b/0000_README
index 527d714..c2d1f0c 100644
--- a/0000_README
+++ b/0000_README
@@ -79,6 +79,10 @@ Patch: 1008_linux-5.7.9.patch
From: http://www.kernel.org
Desc: Linux 5.7.9
+Patch: 1009_linux-5.7.10.patch
+From: http://www.kernel.org
+Desc: Linux 5.7.10
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1009_linux-5.7.10.patch b/1009_linux-5.7.10.patch
new file mode 100644
index 0000000..2219fb5
--- /dev/null
+++ b/1009_linux-5.7.10.patch
@@ -0,0 +1,9109 @@
+diff --git a/Documentation/arm64/silicon-errata.rst b/Documentation/arm64/silicon-errata.rst
+index 2c08c628febd..7dc8f8ac69ee 100644
+--- a/Documentation/arm64/silicon-errata.rst
++++ b/Documentation/arm64/silicon-errata.rst
+@@ -147,6 +147,14 @@ stable kernels.
+ +----------------+-----------------+-----------------+-----------------------------+
+ | Qualcomm Tech. | Falkor v{1,2} | E1041 | QCOM_FALKOR_ERRATUM_1041 |
+ +----------------+-----------------+-----------------+-----------------------------+
++| Qualcomm Tech. | Kryo4xx Gold | N/A | ARM64_ERRATUM_1463225 |
+++----------------+-----------------+-----------------+-----------------------------+
++| Qualcomm Tech. | Kryo4xx Gold | N/A | ARM64_ERRATUM_1418040 |
+++----------------+-----------------+-----------------+-----------------------------+
++| Qualcomm Tech. | Kryo4xx Silver | N/A | ARM64_ERRATUM_1530923 |
+++----------------+-----------------+-----------------+-----------------------------+
++| Qualcomm Tech. | Kryo4xx Silver | N/A | ARM64_ERRATUM_1024718 |
+++----------------+-----------------+-----------------+-----------------------------+
+ +----------------+-----------------+-----------------+-----------------------------+
+ | Fujitsu | A64FX | E#010001 | FUJITSU_ERRATUM_010001 |
+ +----------------+-----------------+-----------------+-----------------------------+
+diff --git a/Documentation/devicetree/bindings/Makefile b/Documentation/devicetree/bindings/Makefile
+index 7782d9985082..b03d58c6b072 100644
+--- a/Documentation/devicetree/bindings/Makefile
++++ b/Documentation/devicetree/bindings/Makefile
+@@ -45,3 +45,8 @@ $(obj)/processed-schema.yaml: $(DT_SCHEMA_FILES) FORCE
+ $(call if_changed,mk_schema)
+
+ extra-y += processed-schema.yaml
++
++# Hack: avoid 'Argument list too long' error for 'make clean'. Remove most of
++# build artifacts here before they are processed by scripts/Makefile.clean
++clean-files = $(shell find $(obj) \( -name '*.example.dts' -o \
++ -name '*.example.dt.yaml' \) -delete 2>/dev/null)
+diff --git a/Documentation/devicetree/bindings/bus/socionext,uniphier-system-bus.yaml b/Documentation/devicetree/bindings/bus/socionext,uniphier-system-bus.yaml
+index c4c9119e4a20..a0c6c5d2b70f 100644
+--- a/Documentation/devicetree/bindings/bus/socionext,uniphier-system-bus.yaml
++++ b/Documentation/devicetree/bindings/bus/socionext,uniphier-system-bus.yaml
+@@ -80,14 +80,14 @@ examples:
+ ranges = <1 0x00000000 0x42000000 0x02000000>,
+ <5 0x00000000 0x46000000 0x01000000>;
+
+- ethernet@1,01f00000 {
++ ethernet@1,1f00000 {
+ compatible = "smsc,lan9115";
+ reg = <1 0x01f00000 0x1000>;
+ interrupts = <0 48 4>;
+ phy-mode = "mii";
+ };
+
+- uart@5,00200000 {
++ serial@5,200000 {
+ compatible = "ns16550a";
+ reg = <5 0x00200000 0x20>;
+ interrupts = <0 49 4>;
+diff --git a/Documentation/devicetree/bindings/mailbox/xlnx,zynqmp-ipi-mailbox.txt b/Documentation/devicetree/bindings/mailbox/xlnx,zynqmp-ipi-mailbox.txt
+index 4438432bfe9b..ad76edccf881 100644
+--- a/Documentation/devicetree/bindings/mailbox/xlnx,zynqmp-ipi-mailbox.txt
++++ b/Documentation/devicetree/bindings/mailbox/xlnx,zynqmp-ipi-mailbox.txt
+@@ -87,7 +87,7 @@ Example:
+ ranges;
+
+ /* APU<->RPU0 IPI mailbox controller */
+- ipi_mailbox_rpu0: mailbox@ff90400 {
++ ipi_mailbox_rpu0: mailbox@ff990400 {
+ reg = <0xff990400 0x20>,
+ <0xff990420 0x20>,
+ <0xff990080 0x20>,
+diff --git a/Makefile b/Makefile
+index fb3a747575b5..e622e084e7e2 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 7
+-SUBLEVEL = 9
++SUBLEVEL = 10
+ EXTRAVERSION =
+ NAME = Kleptomaniac Octopus
+
+diff --git a/arch/arm/boot/dts/am437x-l4.dtsi b/arch/arm/boot/dts/am437x-l4.dtsi
+index 49c6a872052e..e30089e96263 100644
+--- a/arch/arm/boot/dts/am437x-l4.dtsi
++++ b/arch/arm/boot/dts/am437x-l4.dtsi
+@@ -1544,8 +1544,9 @@
+ reg = <0xcc020 0x4>;
+ reg-names = "rev";
+ /* Domains (P, C): per_pwrdm, l4ls_clkdm */
+- clocks = <&l4ls_clkctrl AM4_L4LS_D_CAN0_CLKCTRL 0>;
+- clock-names = "fck";
++ clocks = <&l4ls_clkctrl AM4_L4LS_D_CAN0_CLKCTRL 0>,
++ <&dcan0_fck>;
++ clock-names = "fck", "osc";
+ #address-cells = <1>;
+ #size-cells = <1>;
+ ranges = <0x0 0xcc000 0x2000>;
+@@ -1553,6 +1554,8 @@
+ dcan0: can@0 {
+ compatible = "ti,am4372-d_can", "ti,am3352-d_can";
+ reg = <0x0 0x2000>;
++ clocks = <&dcan0_fck>;
++ clock-names = "fck";
+ syscon-raminit = <&scm_conf 0x644 0>;
+ interrupts = <GIC_SPI 52 IRQ_TYPE_LEVEL_HIGH>;
+ status = "disabled";
+@@ -1564,8 +1567,9 @@
+ reg = <0xd0020 0x4>;
+ reg-names = "rev";
+ /* Domains (P, C): per_pwrdm, l4ls_clkdm */
+- clocks = <&l4ls_clkctrl AM4_L4LS_D_CAN1_CLKCTRL 0>;
+- clock-names = "fck";
++ clocks = <&l4ls_clkctrl AM4_L4LS_D_CAN1_CLKCTRL 0>,
++ <&dcan1_fck>;
++ clock-names = "fck", "osc";
+ #address-cells = <1>;
+ #size-cells = <1>;
+ ranges = <0x0 0xd0000 0x2000>;
+@@ -1573,6 +1577,8 @@
+ dcan1: can@0 {
+ compatible = "ti,am4372-d_can", "ti,am3352-d_can";
+ reg = <0x0 0x2000>;
++ clocks = <&dcan1_fck>;
++ clock-name = "fck";
+ syscon-raminit = <&scm_conf 0x644 1>;
+ interrupts = <GIC_SPI 49 IRQ_TYPE_LEVEL_HIGH>;
+ status = "disabled";
+diff --git a/arch/arm/boot/dts/imx6qdl-gw551x.dtsi b/arch/arm/boot/dts/imx6qdl-gw551x.dtsi
+index c38e86eedcc0..8c33510c9519 100644
+--- a/arch/arm/boot/dts/imx6qdl-gw551x.dtsi
++++ b/arch/arm/boot/dts/imx6qdl-gw551x.dtsi
+@@ -110,7 +110,7 @@
+ simple-audio-card,frame-master = <&sound_codec>;
+
+ sound_cpu: simple-audio-card,cpu {
+- sound-dai = <&ssi2>;
++ sound-dai = <&ssi1>;
+ };
+
+ sound_codec: simple-audio-card,codec {
+diff --git a/arch/arm/boot/dts/mt7623n-rfb-emmc.dts b/arch/arm/boot/dts/mt7623n-rfb-emmc.dts
+index b7606130ade9..0447748f9fa0 100644
+--- a/arch/arm/boot/dts/mt7623n-rfb-emmc.dts
++++ b/arch/arm/boot/dts/mt7623n-rfb-emmc.dts
+@@ -138,6 +138,7 @@
+ mac@1 {
+ compatible = "mediatek,eth-mac";
+ reg = <1>;
++ phy-mode = "rgmii";
+ phy-handle = <&phy5>;
+ };
+
+diff --git a/arch/arm/boot/dts/socfpga.dtsi b/arch/arm/boot/dts/socfpga.dtsi
+index 4f3993cc0227..451030897220 100644
+--- a/arch/arm/boot/dts/socfpga.dtsi
++++ b/arch/arm/boot/dts/socfpga.dtsi
+@@ -710,7 +710,7 @@
+ };
+ };
+
+- L2: l2-cache@fffef000 {
++ L2: cache-controller@fffef000 {
+ compatible = "arm,pl310-cache";
+ reg = <0xfffef000 0x1000>;
+ interrupts = <0 38 0x04>;
+diff --git a/arch/arm/boot/dts/socfpga_arria10.dtsi b/arch/arm/boot/dts/socfpga_arria10.dtsi
+index 3b8571b8b412..8f614c4b0e3e 100644
+--- a/arch/arm/boot/dts/socfpga_arria10.dtsi
++++ b/arch/arm/boot/dts/socfpga_arria10.dtsi
+@@ -636,7 +636,7 @@
+ reg = <0xffcfb100 0x80>;
+ };
+
+- L2: l2-cache@fffff000 {
++ L2: cache-controller@fffff000 {
+ compatible = "arm,pl310-cache";
+ reg = <0xfffff000 0x1000>;
+ interrupts = <0 18 IRQ_TYPE_LEVEL_HIGH>;
+diff --git a/arch/arm64/boot/dts/altera/socfpga_stratix10.dtsi b/arch/arm64/boot/dts/altera/socfpga_stratix10.dtsi
+index d1fc9c2055f4..9498d1de730c 100644
+--- a/arch/arm64/boot/dts/altera/socfpga_stratix10.dtsi
++++ b/arch/arm64/boot/dts/altera/socfpga_stratix10.dtsi
+@@ -77,7 +77,7 @@
+ method = "smc";
+ };
+
+- intc: intc@fffc1000 {
++ intc: interrupt-controller@fffc1000 {
+ compatible = "arm,gic-400", "arm,cortex-a15-gic";
+ #interrupt-cells = <3>;
+ interrupt-controller;
+@@ -302,7 +302,7 @@
+ status = "disabled";
+ };
+
+- nand: nand@ffb90000 {
++ nand: nand-controller@ffb90000 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ compatible = "altr,socfpga-denali-nand";
+@@ -445,7 +445,7 @@
+ clock-names = "timer";
+ };
+
+- uart0: serial0@ffc02000 {
++ uart0: serial@ffc02000 {
+ compatible = "snps,dw-apb-uart";
+ reg = <0xffc02000 0x100>;
+ interrupts = <0 108 4>;
+@@ -456,7 +456,7 @@
+ status = "disabled";
+ };
+
+- uart1: serial1@ffc02100 {
++ uart1: serial@ffc02100 {
+ compatible = "snps,dw-apb-uart";
+ reg = <0xffc02100 0x100>;
+ interrupts = <0 109 4>;
+diff --git a/arch/arm64/boot/dts/altera/socfpga_stratix10_socdk.dts b/arch/arm64/boot/dts/altera/socfpga_stratix10_socdk.dts
+index f6c4a15079d3..feadd21bc0dc 100644
+--- a/arch/arm64/boot/dts/altera/socfpga_stratix10_socdk.dts
++++ b/arch/arm64/boot/dts/altera/socfpga_stratix10_socdk.dts
+@@ -155,6 +155,7 @@
+ };
+
+ &qspi {
++ status = "okay";
+ flash@0 {
+ #address-cells = <1>;
+ #size-cells = <1>;
+diff --git a/arch/arm64/boot/dts/altera/socfpga_stratix10_socdk_nand.dts b/arch/arm64/boot/dts/altera/socfpga_stratix10_socdk_nand.dts
+index 9946515b8afd..c07966740e14 100644
+--- a/arch/arm64/boot/dts/altera/socfpga_stratix10_socdk_nand.dts
++++ b/arch/arm64/boot/dts/altera/socfpga_stratix10_socdk_nand.dts
+@@ -188,6 +188,7 @@
+ };
+
+ &qspi {
++ status = "okay";
+ flash@0 {
+ #address-cells = <1>;
+ #size-cells = <1>;
+@@ -211,12 +212,12 @@
+
+ qspi_boot: partition@0 {
+ label = "Boot and fpga data";
+- reg = <0x0 0x034B0000>;
++ reg = <0x0 0x03FE0000>;
+ };
+
+- qspi_rootfs: partition@4000000 {
++ qspi_rootfs: partition@3FE0000 {
+ label = "Root Filesystem - JFFS2";
+- reg = <0x034B0000 0x0EB50000>;
++ reg = <0x03FE0000 0x0C020000>;
+ };
+ };
+ };
+diff --git a/arch/arm64/boot/dts/amlogic/meson-gxl-s805x-libretech-ac.dts b/arch/arm64/boot/dts/amlogic/meson-gxl-s805x-libretech-ac.dts
+index 4d5949496596..c6ae5622a532 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-gxl-s805x-libretech-ac.dts
++++ b/arch/arm64/boot/dts/amlogic/meson-gxl-s805x-libretech-ac.dts
+@@ -9,7 +9,7 @@
+
+ #include <dt-bindings/input/input.h>
+
+-#include "meson-gxl-s905x.dtsi"
++#include "meson-gxl-s805x.dtsi"
+
+ / {
+ compatible = "libretech,aml-s805x-ac", "amlogic,s805x",
+diff --git a/arch/arm64/boot/dts/amlogic/meson-gxl-s805x-p241.dts b/arch/arm64/boot/dts/amlogic/meson-gxl-s805x-p241.dts
+index a1119cfb0280..85f78a945407 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-gxl-s805x-p241.dts
++++ b/arch/arm64/boot/dts/amlogic/meson-gxl-s805x-p241.dts
+@@ -9,7 +9,7 @@
+
+ #include <dt-bindings/input/input.h>
+
+-#include "meson-gxl-s905x.dtsi"
++#include "meson-gxl-s805x.dtsi"
+
+ / {
+ compatible = "amlogic,p241", "amlogic,s805x", "amlogic,meson-gxl";
+diff --git a/arch/arm64/boot/dts/amlogic/meson-gxl-s805x.dtsi b/arch/arm64/boot/dts/amlogic/meson-gxl-s805x.dtsi
+new file mode 100644
+index 000000000000..f9d705648426
+--- /dev/null
++++ b/arch/arm64/boot/dts/amlogic/meson-gxl-s805x.dtsi
+@@ -0,0 +1,24 @@
++// SPDX-License-Identifier: (GPL-2.0+ OR MIT)
++/*
++ * Copyright (c) 2020 BayLibre SAS
++ * Author: Neil Armstrong <narmstrong@baylibre.com>
++ */
++
++#include "meson-gxl-s905x.dtsi"
++
++/ {
++ compatible = "amlogic,s805x", "amlogic,meson-gxl";
++};
++
++/* The S805X Package doesn't seem to handle the 744MHz OPP correctly */
++&mali {
++ assigned-clocks = <&clkc CLKID_MALI_0_SEL>,
++ <&clkc CLKID_MALI_0>,
++ <&clkc CLKID_MALI>; /* Glitch free mux */
++ assigned-clock-parents = <&clkc CLKID_FCLK_DIV3>,
++ <0>, /* Do Nothing */
++ <&clkc CLKID_MALI_0>;
++ assigned-clock-rates = <0>, /* Do Nothing */
++ <666666666>,
++ <0>; /* Do Nothing */
++};
+diff --git a/arch/arm64/boot/dts/amlogic/meson-gxl.dtsi b/arch/arm64/boot/dts/amlogic/meson-gxl.dtsi
+index 259d86399390..887c43119e63 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-gxl.dtsi
++++ b/arch/arm64/boot/dts/amlogic/meson-gxl.dtsi
+@@ -298,6 +298,11 @@
+ };
+ };
+
++&hwrng {
++ clocks = <&clkc CLKID_RNG0>;
++ clock-names = "core";
++};
++
+ &i2c_A {
+ clocks = <&clkc CLKID_I2C>;
+ };
+diff --git a/arch/arm64/boot/dts/intel/socfpga_agilex_socdk.dts b/arch/arm64/boot/dts/intel/socfpga_agilex_socdk.dts
+index 51d948323bfd..92f478def723 100644
+--- a/arch/arm64/boot/dts/intel/socfpga_agilex_socdk.dts
++++ b/arch/arm64/boot/dts/intel/socfpga_agilex_socdk.dts
+@@ -98,6 +98,7 @@
+ };
+
+ &qspi {
++ status = "okay";
+ flash@0 {
+ #address-cells = <1>;
+ #size-cells = <1>;
+diff --git a/arch/arm64/include/asm/alternative.h b/arch/arm64/include/asm/alternative.h
+index 5e5dc05d63a0..12f0eb56a1cc 100644
+--- a/arch/arm64/include/asm/alternative.h
++++ b/arch/arm64/include/asm/alternative.h
+@@ -73,11 +73,11 @@ static inline void apply_alternatives_module(void *start, size_t length) { }
+ ".pushsection .altinstructions,\"a\"\n" \
+ ALTINSTR_ENTRY(feature) \
+ ".popsection\n" \
+- ".pushsection .altinstr_replacement, \"a\"\n" \
++ ".subsection 1\n" \
+ "663:\n\t" \
+ newinstr "\n" \
+ "664:\n\t" \
+- ".popsection\n\t" \
++ ".previous\n\t" \
+ ".org . - (664b-663b) + (662b-661b)\n\t" \
+ ".org . - (662b-661b) + (664b-663b)\n" \
+ ".endif\n"
+@@ -117,9 +117,9 @@ static inline void apply_alternatives_module(void *start, size_t length) { }
+ 662: .pushsection .altinstructions, "a"
+ altinstruction_entry 661b, 663f, \cap, 662b-661b, 664f-663f
+ .popsection
+- .pushsection .altinstr_replacement, "ax"
++ .subsection 1
+ 663: \insn2
+-664: .popsection
++664: .previous
+ .org . - (664b-663b) + (662b-661b)
+ .org . - (662b-661b) + (664b-663b)
+ .endif
+@@ -160,7 +160,7 @@ static inline void apply_alternatives_module(void *start, size_t length) { }
+ .pushsection .altinstructions, "a"
+ altinstruction_entry 663f, 661f, \cap, 664f-663f, 662f-661f
+ .popsection
+- .pushsection .altinstr_replacement, "ax"
++ .subsection 1
+ .align 2 /* So GAS knows label 661 is suitably aligned */
+ 661:
+ .endm
+@@ -179,9 +179,9 @@ static inline void apply_alternatives_module(void *start, size_t length) { }
+ .macro alternative_else
+ 662:
+ .if .Lasm_alt_mode==0
+- .pushsection .altinstr_replacement, "ax"
++ .subsection 1
+ .else
+- .popsection
++ .previous
+ .endif
+ 663:
+ .endm
+@@ -192,7 +192,7 @@ static inline void apply_alternatives_module(void *start, size_t length) { }
+ .macro alternative_endif
+ 664:
+ .if .Lasm_alt_mode==0
+- .popsection
++ .previous
+ .endif
+ .org . - (664b-663b) + (662b-661b)
+ .org . - (662b-661b) + (664b-663b)
+diff --git a/arch/arm64/include/asm/cputype.h b/arch/arm64/include/asm/cputype.h
+index a87a93f67671..7219cddeba66 100644
+--- a/arch/arm64/include/asm/cputype.h
++++ b/arch/arm64/include/asm/cputype.h
+@@ -86,6 +86,7 @@
+ #define QCOM_CPU_PART_FALKOR 0xC00
+ #define QCOM_CPU_PART_KRYO 0x200
+ #define QCOM_CPU_PART_KRYO_3XX_SILVER 0x803
++#define QCOM_CPU_PART_KRYO_4XX_GOLD 0x804
+ #define QCOM_CPU_PART_KRYO_4XX_SILVER 0x805
+
+ #define NVIDIA_CPU_PART_DENVER 0x003
+@@ -114,6 +115,7 @@
+ #define MIDR_QCOM_FALKOR MIDR_CPU_MODEL(ARM_CPU_IMP_QCOM, QCOM_CPU_PART_FALKOR)
+ #define MIDR_QCOM_KRYO MIDR_CPU_MODEL(ARM_CPU_IMP_QCOM, QCOM_CPU_PART_KRYO)
+ #define MIDR_QCOM_KRYO_3XX_SILVER MIDR_CPU_MODEL(ARM_CPU_IMP_QCOM, QCOM_CPU_PART_KRYO_3XX_SILVER)
++#define MIDR_QCOM_KRYO_4XX_GOLD MIDR_CPU_MODEL(ARM_CPU_IMP_QCOM, QCOM_CPU_PART_KRYO_4XX_GOLD)
+ #define MIDR_QCOM_KRYO_4XX_SILVER MIDR_CPU_MODEL(ARM_CPU_IMP_QCOM, QCOM_CPU_PART_KRYO_4XX_SILVER)
+ #define MIDR_NVIDIA_DENVER MIDR_CPU_MODEL(ARM_CPU_IMP_NVIDIA, NVIDIA_CPU_PART_DENVER)
+ #define MIDR_NVIDIA_CARMEL MIDR_CPU_MODEL(ARM_CPU_IMP_NVIDIA, NVIDIA_CPU_PART_CARMEL)
+diff --git a/arch/arm64/include/asm/debug-monitors.h b/arch/arm64/include/asm/debug-monitors.h
+index 7619f473155f..d825e3585e28 100644
+--- a/arch/arm64/include/asm/debug-monitors.h
++++ b/arch/arm64/include/asm/debug-monitors.h
+@@ -109,6 +109,8 @@ void disable_debug_monitors(enum dbg_active_el el);
+
+ void user_rewind_single_step(struct task_struct *task);
+ void user_fastforward_single_step(struct task_struct *task);
++void user_regs_reset_single_step(struct user_pt_regs *regs,
++ struct task_struct *task);
+
+ void kernel_enable_single_step(struct pt_regs *regs);
+ void kernel_disable_single_step(void);
+diff --git a/arch/arm64/include/asm/syscall.h b/arch/arm64/include/asm/syscall.h
+index 65299a2dcf9c..cfc0672013f6 100644
+--- a/arch/arm64/include/asm/syscall.h
++++ b/arch/arm64/include/asm/syscall.h
+@@ -34,6 +34,10 @@ static inline long syscall_get_error(struct task_struct *task,
+ struct pt_regs *regs)
+ {
+ unsigned long error = regs->regs[0];
++
++ if (is_compat_thread(task_thread_info(task)))
++ error = sign_extend64(error, 31);
++
+ return IS_ERR_VALUE(error) ? error : 0;
+ }
+
+@@ -47,7 +51,13 @@ static inline void syscall_set_return_value(struct task_struct *task,
+ struct pt_regs *regs,
+ int error, long val)
+ {
+- regs->regs[0] = (long) error ? error : val;
++ if (error)
++ val = error;
++
++ if (is_compat_thread(task_thread_info(task)))
++ val = lower_32_bits(val);
++
++ regs->regs[0] = val;
+ }
+
+ #define SYSCALL_MAX_ARGS 6
+diff --git a/arch/arm64/include/asm/thread_info.h b/arch/arm64/include/asm/thread_info.h
+index 512174a8e789..62aca2a50ad7 100644
+--- a/arch/arm64/include/asm/thread_info.h
++++ b/arch/arm64/include/asm/thread_info.h
+@@ -89,6 +89,7 @@ void arch_release_task_struct(struct task_struct *tsk);
+ #define _TIF_SYSCALL_EMU (1 << TIF_SYSCALL_EMU)
+ #define _TIF_UPROBE (1 << TIF_UPROBE)
+ #define _TIF_FSCHECK (1 << TIF_FSCHECK)
++#define _TIF_SINGLESTEP (1 << TIF_SINGLESTEP)
+ #define _TIF_32BIT (1 << TIF_32BIT)
+ #define _TIF_SVE (1 << TIF_SVE)
+
+diff --git a/arch/arm64/kernel/alternative.c b/arch/arm64/kernel/alternative.c
+index d1757ef1b1e7..73039949b5ce 100644
+--- a/arch/arm64/kernel/alternative.c
++++ b/arch/arm64/kernel/alternative.c
+@@ -43,20 +43,8 @@ bool alternative_is_applied(u16 cpufeature)
+ */
+ static bool branch_insn_requires_update(struct alt_instr *alt, unsigned long pc)
+ {
+- unsigned long replptr;
+-
+- if (kernel_text_address(pc))
+- return true;
+-
+- replptr = (unsigned long)ALT_REPL_PTR(alt);
+- if (pc >= replptr && pc <= (replptr + alt->alt_len))
+- return false;
+-
+- /*
+- * Branching into *another* alternate sequence is doomed, and
+- * we're not even trying to fix it up.
+- */
+- BUG();
++ unsigned long replptr = (unsigned long)ALT_REPL_PTR(alt);
++ return !(pc >= replptr && pc <= (replptr + alt->alt_len));
+ }
+
+ #define align_down(x, a) ((unsigned long)(x) & ~(((unsigned long)(a)) - 1))
+diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
+index 0f37045fafab..f9387c125232 100644
+--- a/arch/arm64/kernel/cpu_errata.c
++++ b/arch/arm64/kernel/cpu_errata.c
+@@ -472,12 +472,7 @@ static bool
+ has_cortex_a76_erratum_1463225(const struct arm64_cpu_capabilities *entry,
+ int scope)
+ {
+- u32 midr = read_cpuid_id();
+- /* Cortex-A76 r0p0 - r3p1 */
+- struct midr_range range = MIDR_RANGE(MIDR_CORTEX_A76, 0, 0, 3, 1);
+-
+- WARN_ON(scope != SCOPE_LOCAL_CPU || preemptible());
+- return is_midr_in_range(midr, &range) && is_kernel_in_hyp_mode();
++ return is_affected_midr_range_list(entry, scope) && is_kernel_in_hyp_mode();
+ }
+ #endif
+
+@@ -728,6 +723,8 @@ static const struct midr_range erratum_1418040_list[] = {
+ MIDR_RANGE(MIDR_CORTEX_A76, 0, 0, 3, 1),
+ /* Neoverse-N1 r0p0 to r3p1 */
+ MIDR_RANGE(MIDR_NEOVERSE_N1, 0, 0, 3, 1),
++ /* Kryo4xx Gold (rcpe to rfpf) => (r0p0 to r3p1) */
++ MIDR_RANGE(MIDR_QCOM_KRYO_4XX_GOLD, 0xc, 0xe, 0xf, 0xf),
+ {},
+ };
+ #endif
+@@ -768,11 +765,23 @@ static const struct midr_range erratum_speculative_at_vhe_list[] = {
+ #ifdef CONFIG_ARM64_ERRATUM_1530923
+ /* Cortex A55 r0p0 to r2p0 */
+ MIDR_RANGE(MIDR_CORTEX_A55, 0, 0, 2, 0),
++ /* Kryo4xx Silver (rdpe => r1p0) */
++ MIDR_REV(MIDR_QCOM_KRYO_4XX_SILVER, 0xd, 0xe),
+ #endif
+ {},
+ };
+ #endif
+
++#ifdef CONFIG_ARM64_ERRATUM_1463225
++static const struct midr_range erratum_1463225[] = {
++ /* Cortex-A76 r0p0 - r3p1 */
++ MIDR_RANGE(MIDR_CORTEX_A76, 0, 0, 3, 1),
++ /* Kryo4xx Gold (rcpe to rfpf) => (r0p0 to r3p1) */
++ MIDR_RANGE(MIDR_QCOM_KRYO_4XX_GOLD, 0xc, 0xe, 0xf, 0xf),
++ {},
++};
++#endif
++
+ const struct arm64_cpu_capabilities arm64_errata[] = {
+ #ifdef CONFIG_ARM64_WORKAROUND_CLEAN_CACHE
+ {
+@@ -912,6 +921,7 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
+ .capability = ARM64_WORKAROUND_1463225,
+ .type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,
+ .matches = has_cortex_a76_erratum_1463225,
++ .midr_range_list = erratum_1463225,
+ },
+ #endif
+ #ifdef CONFIG_CAVIUM_TX2_ERRATUM_219
+diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
+index b0fb1d5bf223..cadc9d9a7477 100644
+--- a/arch/arm64/kernel/cpufeature.c
++++ b/arch/arm64/kernel/cpufeature.c
+@@ -1177,6 +1177,8 @@ static bool cpu_has_broken_dbm(void)
+ static const struct midr_range cpus[] = {
+ #ifdef CONFIG_ARM64_ERRATUM_1024718
+ MIDR_RANGE(MIDR_CORTEX_A55, 0, 0, 1, 0), // A55 r0p0 -r1p0
++ /* Kryo4xx Silver (rdpe => r1p0) */
++ MIDR_REV(MIDR_QCOM_KRYO_4XX_SILVER, 0xd, 0xe),
+ #endif
+ {},
+ };
+diff --git a/arch/arm64/kernel/debug-monitors.c b/arch/arm64/kernel/debug-monitors.c
+index 48222a4760c2..7569deb1eac1 100644
+--- a/arch/arm64/kernel/debug-monitors.c
++++ b/arch/arm64/kernel/debug-monitors.c
+@@ -141,17 +141,20 @@ postcore_initcall(debug_monitors_init);
+ /*
+ * Single step API and exception handling.
+ */
+-static void set_regs_spsr_ss(struct pt_regs *regs)
++static void set_user_regs_spsr_ss(struct user_pt_regs *regs)
+ {
+ regs->pstate |= DBG_SPSR_SS;
+ }
+-NOKPROBE_SYMBOL(set_regs_spsr_ss);
++NOKPROBE_SYMBOL(set_user_regs_spsr_ss);
+
+-static void clear_regs_spsr_ss(struct pt_regs *regs)
++static void clear_user_regs_spsr_ss(struct user_pt_regs *regs)
+ {
+ regs->pstate &= ~DBG_SPSR_SS;
+ }
+-NOKPROBE_SYMBOL(clear_regs_spsr_ss);
++NOKPROBE_SYMBOL(clear_user_regs_spsr_ss);
++
++#define set_regs_spsr_ss(r) set_user_regs_spsr_ss(&(r)->user_regs)
++#define clear_regs_spsr_ss(r) clear_user_regs_spsr_ss(&(r)->user_regs)
+
+ static DEFINE_SPINLOCK(debug_hook_lock);
+ static LIST_HEAD(user_step_hook);
+@@ -404,6 +407,15 @@ void user_fastforward_single_step(struct task_struct *task)
+ clear_regs_spsr_ss(task_pt_regs(task));
+ }
+
++void user_regs_reset_single_step(struct user_pt_regs *regs,
++ struct task_struct *task)
++{
++ if (test_tsk_thread_flag(task, TIF_SINGLESTEP))
++ set_user_regs_spsr_ss(regs);
++ else
++ clear_user_regs_spsr_ss(regs);
++}
++
+ /* Kernel API */
+ void kernel_enable_single_step(struct pt_regs *regs)
+ {
+diff --git a/arch/arm64/kernel/ptrace.c b/arch/arm64/kernel/ptrace.c
+index e7b01904f180..cd1b47d5198a 100644
+--- a/arch/arm64/kernel/ptrace.c
++++ b/arch/arm64/kernel/ptrace.c
+@@ -1819,12 +1819,23 @@ static void tracehook_report_syscall(struct pt_regs *regs,
+ saved_reg = regs->regs[regno];
+ regs->regs[regno] = dir;
+
+- if (dir == PTRACE_SYSCALL_EXIT)
++ if (dir == PTRACE_SYSCALL_ENTER) {
++ if (tracehook_report_syscall_entry(regs))
++ forget_syscall(regs);
++ regs->regs[regno] = saved_reg;
++ } else if (!test_thread_flag(TIF_SINGLESTEP)) {
+ tracehook_report_syscall_exit(regs, 0);
+- else if (tracehook_report_syscall_entry(regs))
+- forget_syscall(regs);
++ regs->regs[regno] = saved_reg;
++ } else {
++ regs->regs[regno] = saved_reg;
+
+- regs->regs[regno] = saved_reg;
++ /*
++ * Signal a pseudo-step exception since we are stepping but
++ * tracer modifications to the registers may have rewound the
++ * state machine.
++ */
++ tracehook_report_syscall_exit(regs, 1);
++ }
+ }
+
+ int syscall_trace_enter(struct pt_regs *regs)
+@@ -1852,12 +1863,14 @@ int syscall_trace_enter(struct pt_regs *regs)
+
+ void syscall_trace_exit(struct pt_regs *regs)
+ {
++ unsigned long flags = READ_ONCE(current_thread_info()->flags);
++
+ audit_syscall_exit(regs);
+
+- if (test_thread_flag(TIF_SYSCALL_TRACEPOINT))
++ if (flags & _TIF_SYSCALL_TRACEPOINT)
+ trace_sys_exit(regs, regs_return_value(regs));
+
+- if (test_thread_flag(TIF_SYSCALL_TRACE))
++ if (flags & (_TIF_SYSCALL_TRACE | _TIF_SINGLESTEP))
+ tracehook_report_syscall(regs, PTRACE_SYSCALL_EXIT);
+
+ rseq_syscall(regs);
+@@ -1935,8 +1948,8 @@ static int valid_native_regs(struct user_pt_regs *regs)
+ */
+ int valid_user_regs(struct user_pt_regs *regs, struct task_struct *task)
+ {
+- if (!test_tsk_thread_flag(task, TIF_SINGLESTEP))
+- regs->pstate &= ~DBG_SPSR_SS;
++ /* https://lore.kernel.org/lkml/20191118131525.GA4180@willie-the-truck */
++ user_regs_reset_single_step(regs, task);
+
+ if (is_compat_thread(task_thread_info(task)))
+ return valid_compat_regs(regs);
+diff --git a/arch/arm64/kernel/signal.c b/arch/arm64/kernel/signal.c
+index 339882db5a91..de205ca806c1 100644
+--- a/arch/arm64/kernel/signal.c
++++ b/arch/arm64/kernel/signal.c
+@@ -784,7 +784,6 @@ static void setup_restart_syscall(struct pt_regs *regs)
+ */
+ static void handle_signal(struct ksignal *ksig, struct pt_regs *regs)
+ {
+- struct task_struct *tsk = current;
+ sigset_t *oldset = sigmask_to_save();
+ int usig = ksig->sig;
+ int ret;
+@@ -808,14 +807,8 @@ static void handle_signal(struct ksignal *ksig, struct pt_regs *regs)
+ */
+ ret |= !valid_user_regs(&regs->user_regs, current);
+
+- /*
+- * Fast forward the stepping logic so we step into the signal
+- * handler.
+- */
+- if (!ret)
+- user_fastforward_single_step(tsk);
+-
+- signal_setup_done(ret, ksig, 0);
++ /* Step into the signal handler if we are stepping */
++ signal_setup_done(ret, ksig, test_thread_flag(TIF_SINGLESTEP));
+ }
+
+ /*
+diff --git a/arch/arm64/kernel/syscall.c b/arch/arm64/kernel/syscall.c
+index a12c0c88d345..7aa7cf76367e 100644
+--- a/arch/arm64/kernel/syscall.c
++++ b/arch/arm64/kernel/syscall.c
+@@ -50,6 +50,9 @@ static void invoke_syscall(struct pt_regs *regs, unsigned int scno,
+ ret = do_ni_syscall(regs, scno);
+ }
+
++ if (is_compat_task())
++ ret = lower_32_bits(ret);
++
+ regs->regs[0] = ret;
+ }
+
+@@ -121,7 +124,7 @@ static void el0_svc_common(struct pt_regs *regs, int scno, int sc_nr,
+ if (!has_syscall_work(flags) && !IS_ENABLED(CONFIG_DEBUG_RSEQ)) {
+ local_daif_mask();
+ flags = current_thread_info()->flags;
+- if (!has_syscall_work(flags)) {
++ if (!has_syscall_work(flags) && !(flags & _TIF_SINGLESTEP)) {
+ /*
+ * We're off to userspace, where interrupts are
+ * always enabled after we restore the flags from
+diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
+index 94402aaf5f5c..9869412ac156 100644
+--- a/arch/arm64/kernel/vmlinux.lds.S
++++ b/arch/arm64/kernel/vmlinux.lds.S
+@@ -172,9 +172,6 @@ SECTIONS
+ *(.altinstructions)
+ __alt_instructions_end = .;
+ }
+- .altinstr_replacement : {
+- *(.altinstr_replacement)
+- }
+
+ . = ALIGN(PAGE_SIZE);
+ __inittext_end = .;
+diff --git a/arch/m68k/kernel/setup_no.c b/arch/m68k/kernel/setup_no.c
+index a63483de7a42..5dacba392c74 100644
+--- a/arch/m68k/kernel/setup_no.c
++++ b/arch/m68k/kernel/setup_no.c
+@@ -139,7 +139,8 @@ void __init setup_arch(char **cmdline_p)
+ pr_debug("MEMORY -> ROMFS=0x%p-0x%06lx MEM=0x%06lx-0x%06lx\n ",
+ __bss_stop, memory_start, memory_start, memory_end);
+
+- memblock_add(memory_start, memory_end - memory_start);
++ memblock_add(_rambase, memory_end - _rambase);
++ memblock_reserve(_rambase, memory_start - _rambase);
+
+ /* Keep a copy of command line */
+ *cmdline_p = &command_line[0];
+diff --git a/arch/m68k/mm/mcfmmu.c b/arch/m68k/mm/mcfmmu.c
+index 0ea375607767..2c57f46facc0 100644
+--- a/arch/m68k/mm/mcfmmu.c
++++ b/arch/m68k/mm/mcfmmu.c
+@@ -178,7 +178,7 @@ void __init cf_bootmem_alloc(void)
+ m68k_memory[0].addr = _rambase;
+ m68k_memory[0].size = _ramend - _rambase;
+
+- memblock_add(m68k_memory[0].addr, m68k_memory[0].size);
++ memblock_add_node(m68k_memory[0].addr, m68k_memory[0].size, 0);
+
+ /* compute total pages in system */
+ num_pages = PFN_DOWN(_ramend - _rambase);
+diff --git a/arch/powerpc/kernel/paca.c b/arch/powerpc/kernel/paca.c
+index 3f91ccaa9c74..4ea0cca52e16 100644
+--- a/arch/powerpc/kernel/paca.c
++++ b/arch/powerpc/kernel/paca.c
+@@ -86,7 +86,7 @@ static void *__init alloc_shared_lppaca(unsigned long size, unsigned long align,
+ * This is very early in boot, so no harm done if the kernel crashes at
+ * this point.
+ */
+- BUG_ON(shared_lppaca_size >= shared_lppaca_total_size);
++ BUG_ON(shared_lppaca_size > shared_lppaca_total_size);
+
+ return ptr;
+ }
+diff --git a/arch/powerpc/mm/book3s64/pkeys.c b/arch/powerpc/mm/book3s64/pkeys.c
+index 1199fc2bfaec..268ce9581676 100644
+--- a/arch/powerpc/mm/book3s64/pkeys.c
++++ b/arch/powerpc/mm/book3s64/pkeys.c
+@@ -357,12 +357,14 @@ static bool pkey_access_permitted(int pkey, bool write, bool execute)
+ return true;
+
+ pkey_shift = pkeyshift(pkey);
+- if (execute && !(read_iamr() & (IAMR_EX_BIT << pkey_shift)))
+- return true;
++ if (execute)
++ return !(read_iamr() & (IAMR_EX_BIT << pkey_shift));
++
++ amr = read_amr();
++ if (write)
++ return !(amr & (AMR_WR_BIT << pkey_shift));
+
+- amr = read_amr(); /* Delay reading amr until absolutely needed */
+- return ((!write && !(amr & (AMR_RD_BIT << pkey_shift))) ||
+- (write && !(amr & (AMR_WR_BIT << pkey_shift))));
++ return !(amr & (AMR_RD_BIT << pkey_shift));
+ }
+
+ bool arch_pte_access_permitted(u64 pte, bool write, bool execute)
+diff --git a/arch/riscv/include/asm/thread_info.h b/arch/riscv/include/asm/thread_info.h
+index 1dd12a0cbb2b..464a2bbc97ea 100644
+--- a/arch/riscv/include/asm/thread_info.h
++++ b/arch/riscv/include/asm/thread_info.h
+@@ -12,7 +12,11 @@
+ #include <linux/const.h>
+
+ /* thread information allocation */
++#ifdef CONFIG_64BIT
++#define THREAD_SIZE_ORDER (2)
++#else
+ #define THREAD_SIZE_ORDER (1)
++#endif
+ #define THREAD_SIZE (PAGE_SIZE << THREAD_SIZE_ORDER)
+
+ #ifndef __ASSEMBLY__
+diff --git a/arch/x86/include/asm/fpu/internal.h b/arch/x86/include/asm/fpu/internal.h
+index 44c48e34d799..00eac7f1529b 100644
+--- a/arch/x86/include/asm/fpu/internal.h
++++ b/arch/x86/include/asm/fpu/internal.h
+@@ -619,6 +619,11 @@ static inline void switch_fpu_finish(struct fpu *new_fpu)
+ * MXCSR and XCR definitions:
+ */
+
++static inline void ldmxcsr(u32 mxcsr)
++{
++ asm volatile("ldmxcsr %0" :: "m" (mxcsr));
++}
++
+ extern unsigned int mxcsr_feature_mask;
+
+ #define XCR_XFEATURE_ENABLED_MASK 0x00000000
+diff --git a/arch/x86/include/asm/io_bitmap.h b/arch/x86/include/asm/io_bitmap.h
+index ac1a99ffbd8d..7f080f5c7def 100644
+--- a/arch/x86/include/asm/io_bitmap.h
++++ b/arch/x86/include/asm/io_bitmap.h
+@@ -19,12 +19,28 @@ struct task_struct;
+ void io_bitmap_share(struct task_struct *tsk);
+ void io_bitmap_exit(struct task_struct *tsk);
+
++static inline void native_tss_invalidate_io_bitmap(void)
++{
++ /*
++ * Invalidate the I/O bitmap by moving io_bitmap_base outside the
++ * TSS limit so any subsequent I/O access from user space will
++ * trigger a #GP.
++ *
++ * This is correct even when VMEXIT rewrites the TSS limit
++ * to 0x67 as the only requirement is that the base points
++ * outside the limit.
++ */
++ this_cpu_write(cpu_tss_rw.x86_tss.io_bitmap_base,
++ IO_BITMAP_OFFSET_INVALID);
++}
++
+ void native_tss_update_io_bitmap(void);
+
+ #ifdef CONFIG_PARAVIRT_XXL
+ #include <asm/paravirt.h>
+ #else
+ #define tss_update_io_bitmap native_tss_update_io_bitmap
++#define tss_invalidate_io_bitmap native_tss_invalidate_io_bitmap
+ #endif
+
+ #else
+diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
+index 694d8daf4983..0296dd66c167 100644
+--- a/arch/x86/include/asm/paravirt.h
++++ b/arch/x86/include/asm/paravirt.h
+@@ -296,6 +296,11 @@ static inline void write_idt_entry(gate_desc *dt, int entry, const gate_desc *g)
+ }
+
+ #ifdef CONFIG_X86_IOPL_IOPERM
++static inline void tss_invalidate_io_bitmap(void)
++{
++ PVOP_VCALL0(cpu.invalidate_io_bitmap);
++}
++
+ static inline void tss_update_io_bitmap(void)
+ {
+ PVOP_VCALL0(cpu.update_io_bitmap);
+diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
+index 732f62e04ddb..8dfcb2508e6d 100644
+--- a/arch/x86/include/asm/paravirt_types.h
++++ b/arch/x86/include/asm/paravirt_types.h
+@@ -141,6 +141,7 @@ struct pv_cpu_ops {
+ void (*load_sp0)(unsigned long sp0);
+
+ #ifdef CONFIG_X86_IOPL_IOPERM
++ void (*invalidate_io_bitmap)(void);
+ void (*update_io_bitmap)(void);
+ #endif
+
+diff --git a/arch/x86/kernel/apic/vector.c b/arch/x86/kernel/apic/vector.c
+index 67768e54438b..cf8b6ebc6031 100644
+--- a/arch/x86/kernel/apic/vector.c
++++ b/arch/x86/kernel/apic/vector.c
+@@ -446,12 +446,10 @@ static int x86_vector_activate(struct irq_domain *dom, struct irq_data *irqd,
+ trace_vector_activate(irqd->irq, apicd->is_managed,
+ apicd->can_reserve, reserve);
+
+- /* Nothing to do for fixed assigned vectors */
+- if (!apicd->can_reserve && !apicd->is_managed)
+- return 0;
+-
+ raw_spin_lock_irqsave(&vector_lock, flags);
+- if (reserve || irqd_is_managed_and_shutdown(irqd))
++ if (!apicd->can_reserve && !apicd->is_managed)
++ assign_irq_vector_any_locked(irqd);
++ else if (reserve || irqd_is_managed_and_shutdown(irqd))
+ vector_assign_managed_shutdown(irqd);
+ else if (apicd->is_managed)
+ ret = activate_managed(irqd);
+@@ -775,20 +773,10 @@ void lapic_offline(void)
+ static int apic_set_affinity(struct irq_data *irqd,
+ const struct cpumask *dest, bool force)
+ {
+- struct apic_chip_data *apicd = apic_chip_data(irqd);
+ int err;
+
+- /*
+- * Core code can call here for inactive interrupts. For inactive
+- * interrupts which use managed or reservation mode there is no
+- * point in going through the vector assignment right now as the
+- * activation will assign a vector which fits the destination
+- * cpumask. Let the core code store the destination mask and be
+- * done with it.
+- */
+- if (!irqd_is_activated(irqd) &&
+- (apicd->is_managed || apicd->can_reserve))
+- return IRQ_SET_MASK_OK;
++ if (WARN_ON_ONCE(!irqd_is_activated(irqd)))
++ return -EIO;
+
+ raw_spin_lock(&vector_lock);
+ cpumask_and(vector_searchmask, dest, cpu_online_mask);
+diff --git a/arch/x86/kernel/fpu/core.c b/arch/x86/kernel/fpu/core.c
+index 12c70840980e..cd8839027f66 100644
+--- a/arch/x86/kernel/fpu/core.c
++++ b/arch/x86/kernel/fpu/core.c
+@@ -101,6 +101,12 @@ void kernel_fpu_begin(void)
+ copy_fpregs_to_fpstate(&current->thread.fpu);
+ }
+ __cpu_invalidate_fpregs_state();
++
++ if (boot_cpu_has(X86_FEATURE_XMM))
++ ldmxcsr(MXCSR_DEFAULT);
++
++ if (boot_cpu_has(X86_FEATURE_FPU))
++ asm volatile ("fninit");
+ }
+ EXPORT_SYMBOL_GPL(kernel_fpu_begin);
+
+diff --git a/arch/x86/kernel/fpu/xstate.c b/arch/x86/kernel/fpu/xstate.c
+index 6a54e83d5589..9cf40a7ff7ae 100644
+--- a/arch/x86/kernel/fpu/xstate.c
++++ b/arch/x86/kernel/fpu/xstate.c
+@@ -1022,7 +1022,7 @@ int copy_xstate_to_kernel(void *kbuf, struct xregs_state *xsave, unsigned int of
+ copy_part(offsetof(struct fxregs_state, st_space), 128,
+ &xsave->i387.st_space, &kbuf, &offset_start, &count);
+ if (header.xfeatures & XFEATURE_MASK_SSE)
+- copy_part(xstate_offsets[XFEATURE_MASK_SSE], 256,
++ copy_part(xstate_offsets[XFEATURE_SSE], 256,
+ &xsave->i387.xmm_space, &kbuf, &offset_start, &count);
+ /*
+ * Fill xsave->i387.sw_reserved value for ptrace frame:
+diff --git a/arch/x86/kernel/paravirt.c b/arch/x86/kernel/paravirt.c
+index c131ba4e70ef..97b4ce839b4c 100644
+--- a/arch/x86/kernel/paravirt.c
++++ b/arch/x86/kernel/paravirt.c
+@@ -343,7 +343,8 @@ struct paravirt_patch_template pv_ops = {
+ .cpu.swapgs = native_swapgs,
+
+ #ifdef CONFIG_X86_IOPL_IOPERM
+- .cpu.update_io_bitmap = native_tss_update_io_bitmap,
++ .cpu.invalidate_io_bitmap = native_tss_invalidate_io_bitmap,
++ .cpu.update_io_bitmap = native_tss_update_io_bitmap,
+ #endif
+
+ .cpu.start_context_switch = paravirt_nop,
+diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
+index 8f4533c1a4ec..19a94a0be3bd 100644
+--- a/arch/x86/kernel/process.c
++++ b/arch/x86/kernel/process.c
+@@ -322,20 +322,6 @@ void arch_setup_new_exec(void)
+ }
+
+ #ifdef CONFIG_X86_IOPL_IOPERM
+-static inline void tss_invalidate_io_bitmap(struct tss_struct *tss)
+-{
+- /*
+- * Invalidate the I/O bitmap by moving io_bitmap_base outside the
+- * TSS limit so any subsequent I/O access from user space will
+- * trigger a #GP.
+- *
+- * This is correct even when VMEXIT rewrites the TSS limit
+- * to 0x67 as the only requirement is that the base points
+- * outside the limit.
+- */
+- tss->x86_tss.io_bitmap_base = IO_BITMAP_OFFSET_INVALID;
+-}
+-
+ static inline void switch_to_bitmap(unsigned long tifp)
+ {
+ /*
+@@ -346,7 +332,7 @@ static inline void switch_to_bitmap(unsigned long tifp)
+ * user mode.
+ */
+ if (tifp & _TIF_IO_BITMAP)
+- tss_invalidate_io_bitmap(this_cpu_ptr(&cpu_tss_rw));
++ tss_invalidate_io_bitmap();
+ }
+
+ static void tss_copy_io_bitmap(struct tss_struct *tss, struct io_bitmap *iobm)
+@@ -380,7 +366,7 @@ void native_tss_update_io_bitmap(void)
+ u16 *base = &tss->x86_tss.io_bitmap_base;
+
+ if (!test_thread_flag(TIF_IO_BITMAP)) {
+- tss_invalidate_io_bitmap(tss);
++ native_tss_invalidate_io_bitmap();
+ return;
+ }
+
+diff --git a/arch/x86/xen/enlighten_pv.c b/arch/x86/xen/enlighten_pv.c
+index 507f4fb88fa7..9621d31104b6 100644
+--- a/arch/x86/xen/enlighten_pv.c
++++ b/arch/x86/xen/enlighten_pv.c
+@@ -841,6 +841,17 @@ static void xen_load_sp0(unsigned long sp0)
+ }
+
+ #ifdef CONFIG_X86_IOPL_IOPERM
++static void xen_invalidate_io_bitmap(void)
++{
++ struct physdev_set_iobitmap iobitmap = {
++ .bitmap = 0,
++ .nr_ports = 0,
++ };
++
++ native_tss_invalidate_io_bitmap();
++ HYPERVISOR_physdev_op(PHYSDEVOP_set_iobitmap, &iobitmap);
++}
++
+ static void xen_update_io_bitmap(void)
+ {
+ struct physdev_set_iobitmap iobitmap;
+@@ -1070,6 +1081,7 @@ static const struct pv_cpu_ops xen_cpu_ops __initconst = {
+ .load_sp0 = xen_load_sp0,
+
+ #ifdef CONFIG_X86_IOPL_IOPERM
++ .invalidate_io_bitmap = xen_invalidate_io_bitmap,
+ .update_io_bitmap = xen_update_io_bitmap,
+ #endif
+ .io_delay = xen_io_delay,
+diff --git a/block/blk-mq-debugfs.c b/block/blk-mq-debugfs.c
+index b3f2ba483992..121f4c1e0697 100644
+--- a/block/blk-mq-debugfs.c
++++ b/block/blk-mq-debugfs.c
+@@ -125,6 +125,9 @@ static const char *const blk_queue_flag_name[] = {
+ QUEUE_FLAG_NAME(REGISTERED),
+ QUEUE_FLAG_NAME(SCSI_PASSTHROUGH),
+ QUEUE_FLAG_NAME(QUIESCED),
++ QUEUE_FLAG_NAME(PCI_P2PDMA),
++ QUEUE_FLAG_NAME(ZONE_RESETALL),
++ QUEUE_FLAG_NAME(RQ_ALLOC_TIME),
+ };
+ #undef QUEUE_FLAG_NAME
+
+diff --git a/crypto/asymmetric_keys/public_key.c b/crypto/asymmetric_keys/public_key.c
+index d7f43d4ea925..e5fae4e838c0 100644
+--- a/crypto/asymmetric_keys/public_key.c
++++ b/crypto/asymmetric_keys/public_key.c
+@@ -119,6 +119,7 @@ static int software_key_query(const struct kernel_pkey_params *params,
+ if (IS_ERR(tfm))
+ return PTR_ERR(tfm);
+
++ ret = -ENOMEM;
+ key = kmalloc(pkey->keylen + sizeof(u32) * 2 + pkey->paramlen,
+ GFP_KERNEL);
+ if (!key)
+diff --git a/drivers/acpi/dptf/dptf_power.c b/drivers/acpi/dptf/dptf_power.c
+index e4e8b75d39f0..8b42f529047e 100644
+--- a/drivers/acpi/dptf/dptf_power.c
++++ b/drivers/acpi/dptf/dptf_power.c
+@@ -99,6 +99,7 @@ static int dptf_power_remove(struct platform_device *pdev)
+ static const struct acpi_device_id int3407_device_ids[] = {
+ {"INT3407", 0},
+ {"INTC1047", 0},
++ {"INTC1050", 0},
+ {"", 0},
+ };
+ MODULE_DEVICE_TABLE(acpi, int3407_device_ids);
+diff --git a/drivers/base/regmap/regmap-debugfs.c b/drivers/base/regmap/regmap-debugfs.c
+index e72843fe41df..e16afa27700d 100644
+--- a/drivers/base/regmap/regmap-debugfs.c
++++ b/drivers/base/regmap/regmap-debugfs.c
+@@ -457,29 +457,31 @@ static ssize_t regmap_cache_only_write_file(struct file *file,
+ {
+ struct regmap *map = container_of(file->private_data,
+ struct regmap, cache_only);
+- ssize_t result;
+- bool was_enabled, require_sync = false;
++ bool new_val, require_sync = false;
+ int err;
+
+- map->lock(map->lock_arg);
++ err = kstrtobool_from_user(user_buf, count, &new_val);
++ /* Ignore malforned data like debugfs_write_file_bool() */
++ if (err)
++ return count;
+
+- was_enabled = map->cache_only;
++ err = debugfs_file_get(file->f_path.dentry);
++ if (err)
++ return err;
+
+- result = debugfs_write_file_bool(file, user_buf, count, ppos);
+- if (result < 0) {
+- map->unlock(map->lock_arg);
+- return result;
+- }
++ map->lock(map->lock_arg);
+
+- if (map->cache_only && !was_enabled) {
++ if (new_val && !map->cache_only) {
+ dev_warn(map->dev, "debugfs cache_only=Y forced\n");
+ add_taint(TAINT_USER, LOCKDEP_STILL_OK);
+- } else if (!map->cache_only && was_enabled) {
++ } else if (!new_val && map->cache_only) {
+ dev_warn(map->dev, "debugfs cache_only=N forced: syncing cache\n");
+ require_sync = true;
+ }
++ map->cache_only = new_val;
+
+ map->unlock(map->lock_arg);
++ debugfs_file_put(file->f_path.dentry);
+
+ if (require_sync) {
+ err = regcache_sync(map);
+@@ -487,7 +489,7 @@ static ssize_t regmap_cache_only_write_file(struct file *file,
+ dev_err(map->dev, "Failed to sync cache %d\n", err);
+ }
+
+- return result;
++ return count;
+ }
+
+ static const struct file_operations regmap_cache_only_fops = {
+@@ -502,28 +504,32 @@ static ssize_t regmap_cache_bypass_write_file(struct file *file,
+ {
+ struct regmap *map = container_of(file->private_data,
+ struct regmap, cache_bypass);
+- ssize_t result;
+- bool was_enabled;
++ bool new_val;
++ int err;
+
+- map->lock(map->lock_arg);
++ err = kstrtobool_from_user(user_buf, count, &new_val);
++ /* Ignore malformed data like debugfs_write_file_bool() */
++ if (err)
++ return count;
+
+- was_enabled = map->cache_bypass;
++ err = debugfs_file_get(file->f_path.dentry);
++ if (err)
++ return err;
+
+- result = debugfs_write_file_bool(file, user_buf, count, ppos);
+- if (result < 0)
+- goto out;
++ map->lock(map->lock_arg);
+
+- if (map->cache_bypass && !was_enabled) {
++ if (new_val && !map->cache_bypass) {
+ dev_warn(map->dev, "debugfs cache_bypass=Y forced\n");
+ add_taint(TAINT_USER, LOCKDEP_STILL_OK);
+- } else if (!map->cache_bypass && was_enabled) {
++ } else if (!new_val && map->cache_bypass) {
+ dev_warn(map->dev, "debugfs cache_bypass=N forced\n");
+ }
++ map->cache_bypass = new_val;
+
+-out:
+ map->unlock(map->lock_arg);
++ debugfs_file_put(file->f_path.dentry);
+
+- return result;
++ return count;
+ }
+
+ static const struct file_operations regmap_cache_bypass_fops = {
+diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
+index ebb234f36909..73a5cecfa9bb 100644
+--- a/drivers/block/zram/zram_drv.c
++++ b/drivers/block/zram/zram_drv.c
+@@ -2025,7 +2025,8 @@ static ssize_t hot_add_show(struct class *class,
+ return ret;
+ return scnprintf(buf, PAGE_SIZE, "%d\n", ret);
+ }
+-static CLASS_ATTR_RO(hot_add);
++static struct class_attribute class_attr_hot_add =
++ __ATTR(hot_add, 0400, hot_add_show, NULL);
+
+ static ssize_t hot_remove_store(struct class *class,
+ struct class_attribute *attr,
+diff --git a/drivers/bus/ti-sysc.c b/drivers/bus/ti-sysc.c
+index db9541f38505..3b0417a01494 100644
+--- a/drivers/bus/ti-sysc.c
++++ b/drivers/bus/ti-sysc.c
+@@ -236,15 +236,14 @@ static int sysc_wait_softreset(struct sysc *ddata)
+ syss_done = ddata->cfg.syss_mask;
+
+ if (syss_offset >= 0) {
+- error = readx_poll_timeout(sysc_read_sysstatus, ddata, rstval,
+- (rstval & ddata->cfg.syss_mask) ==
+- syss_done,
+- 100, MAX_MODULE_SOFTRESET_WAIT);
++ error = readx_poll_timeout_atomic(sysc_read_sysstatus, ddata,
++ rstval, (rstval & ddata->cfg.syss_mask) ==
++ syss_done, 100, MAX_MODULE_SOFTRESET_WAIT);
+
+ } else if (ddata->cfg.quirks & SYSC_QUIRK_RESET_STATUS) {
+- error = readx_poll_timeout(sysc_read_sysconfig, ddata, rstval,
+- !(rstval & sysc_mask),
+- 100, MAX_MODULE_SOFTRESET_WAIT);
++ error = readx_poll_timeout_atomic(sysc_read_sysconfig, ddata,
++ rstval, !(rstval & sysc_mask),
++ 100, MAX_MODULE_SOFTRESET_WAIT);
+ }
+
+ return error;
+@@ -1279,7 +1278,8 @@ static int __maybe_unused sysc_noirq_suspend(struct device *dev)
+
+ ddata = dev_get_drvdata(dev);
+
+- if (ddata->cfg.quirks & SYSC_QUIRK_LEGACY_IDLE)
++ if (ddata->cfg.quirks &
++ (SYSC_QUIRK_LEGACY_IDLE | SYSC_QUIRK_NO_IDLE))
+ return 0;
+
+ return pm_runtime_force_suspend(dev);
+@@ -1291,7 +1291,8 @@ static int __maybe_unused sysc_noirq_resume(struct device *dev)
+
+ ddata = dev_get_drvdata(dev);
+
+- if (ddata->cfg.quirks & SYSC_QUIRK_LEGACY_IDLE)
++ if (ddata->cfg.quirks &
++ (SYSC_QUIRK_LEGACY_IDLE | SYSC_QUIRK_NO_IDLE))
+ return 0;
+
+ return pm_runtime_force_resume(dev);
+@@ -1728,8 +1729,8 @@ static void sysc_quirk_rtc(struct sysc *ddata, bool lock)
+
+ local_irq_save(flags);
+ /* RTC_STATUS BUSY bit may stay active for 1/32768 seconds (~30 usec) */
+- error = readl_poll_timeout(ddata->module_va + 0x44, val,
+- !(val & BIT(0)), 100, 50);
++ error = readl_poll_timeout_atomic(ddata->module_va + 0x44, val,
++ !(val & BIT(0)), 100, 50);
+ if (error)
+ dev_warn(ddata->dev, "rtc busy timeout\n");
+ /* Now we have ~15 microseconds to read/write various registers */
+diff --git a/drivers/char/tpm/tpm_tis_core.c b/drivers/char/tpm/tpm_tis_core.c
+index 2435216bd10a..65ab1b027949 100644
+--- a/drivers/char/tpm/tpm_tis_core.c
++++ b/drivers/char/tpm/tpm_tis_core.c
+@@ -1085,7 +1085,7 @@ int tpm_tis_core_init(struct device *dev, struct tpm_tis_data *priv, int irq,
+
+ return 0;
+ out_err:
+- if ((chip->ops != NULL) && (chip->ops->clk_enable != NULL))
++ if (chip->ops->clk_enable != NULL)
+ chip->ops->clk_enable(chip, false);
+
+ tpm_tis_remove(chip);
+diff --git a/drivers/char/virtio_console.c b/drivers/char/virtio_console.c
+index 3cbaec925606..5adad9fa3036 100644
+--- a/drivers/char/virtio_console.c
++++ b/drivers/char/virtio_console.c
+@@ -2116,6 +2116,7 @@ static struct virtio_device_id id_table[] = {
+ { VIRTIO_ID_CONSOLE, VIRTIO_DEV_ANY_ID },
+ { 0 },
+ };
++MODULE_DEVICE_TABLE(virtio, id_table);
+
+ static unsigned int features[] = {
+ VIRTIO_CONSOLE_F_SIZE,
+@@ -2128,6 +2129,7 @@ static struct virtio_device_id rproc_serial_id_table[] = {
+ #endif
+ { 0 },
+ };
++MODULE_DEVICE_TABLE(virtio, rproc_serial_id_table);
+
+ static unsigned int rproc_serial_features[] = {
+ };
+@@ -2280,6 +2282,5 @@ static void __exit fini(void)
+ module_init(init);
+ module_exit(fini);
+
+-MODULE_DEVICE_TABLE(virtio, id_table);
+ MODULE_DESCRIPTION("Virtio console driver");
+ MODULE_LICENSE("GPL");
+diff --git a/drivers/clk/clk-ast2600.c b/drivers/clk/clk-ast2600.c
+index 99afc949925f..177368cac6dd 100644
+--- a/drivers/clk/clk-ast2600.c
++++ b/drivers/clk/clk-ast2600.c
+@@ -131,6 +131,18 @@ static const struct clk_div_table ast2600_eclk_div_table[] = {
+ { 0 }
+ };
+
++static const struct clk_div_table ast2600_emmc_extclk_div_table[] = {
++ { 0x0, 2 },
++ { 0x1, 4 },
++ { 0x2, 6 },
++ { 0x3, 8 },
++ { 0x4, 10 },
++ { 0x5, 12 },
++ { 0x6, 14 },
++ { 0x7, 16 },
++ { 0 }
++};
++
+ static const struct clk_div_table ast2600_mac_div_table[] = {
+ { 0x0, 4 },
+ { 0x1, 4 },
+@@ -390,6 +402,11 @@ static struct clk_hw *aspeed_g6_clk_hw_register_gate(struct device *dev,
+ return hw;
+ }
+
++static const char *const emmc_extclk_parent_names[] = {
++ "emmc_extclk_hpll_in",
++ "mpll",
++};
++
+ static const char * const vclk_parent_names[] = {
+ "dpll",
+ "d1pll",
+@@ -459,16 +476,32 @@ static int aspeed_g6_clk_probe(struct platform_device *pdev)
+ return PTR_ERR(hw);
+ aspeed_g6_clk_data->hws[ASPEED_CLK_UARTX] = hw;
+
+- /* EMMC ext clock divider */
+- hw = clk_hw_register_gate(dev, "emmc_extclk_gate", "hpll", 0,
+- scu_g6_base + ASPEED_G6_CLK_SELECTION1, 15, 0,
+- &aspeed_g6_clk_lock);
++ /* EMMC ext clock */
++ hw = clk_hw_register_fixed_factor(dev, "emmc_extclk_hpll_in", "hpll",
++ 0, 1, 2);
+ if (IS_ERR(hw))
+ return PTR_ERR(hw);
+- hw = clk_hw_register_divider_table(dev, "emmc_extclk", "emmc_extclk_gate", 0,
+- scu_g6_base + ASPEED_G6_CLK_SELECTION1, 12, 3, 0,
+- ast2600_div_table,
+- &aspeed_g6_clk_lock);
++
++ hw = clk_hw_register_mux(dev, "emmc_extclk_mux",
++ emmc_extclk_parent_names,
++ ARRAY_SIZE(emmc_extclk_parent_names), 0,
++ scu_g6_base + ASPEED_G6_CLK_SELECTION1, 11, 1,
++ 0, &aspeed_g6_clk_lock);
++ if (IS_ERR(hw))
++ return PTR_ERR(hw);
++
++ hw = clk_hw_register_gate(dev, "emmc_extclk_gate", "emmc_extclk_mux",
++ 0, scu_g6_base + ASPEED_G6_CLK_SELECTION1,
++ 15, 0, &aspeed_g6_clk_lock);
++ if (IS_ERR(hw))
++ return PTR_ERR(hw);
++
++ hw = clk_hw_register_divider_table(dev, "emmc_extclk",
++ "emmc_extclk_gate", 0,
++ scu_g6_base +
++ ASPEED_G6_CLK_SELECTION1, 12,
++ 3, 0, ast2600_emmc_extclk_div_table,
++ &aspeed_g6_clk_lock);
+ if (IS_ERR(hw))
+ return PTR_ERR(hw);
+ aspeed_g6_clk_data->hws[ASPEED_CLK_EMMC] = hw;
+diff --git a/drivers/clk/mvebu/Kconfig b/drivers/clk/mvebu/Kconfig
+index ded07b0bd0d5..557d6213783c 100644
+--- a/drivers/clk/mvebu/Kconfig
++++ b/drivers/clk/mvebu/Kconfig
+@@ -42,6 +42,7 @@ config ARMADA_AP806_SYSCON
+
+ config ARMADA_AP_CPU_CLK
+ bool
++ select ARMADA_AP_CP_HELPER
+
+ config ARMADA_CP110_SYSCON
+ bool
+diff --git a/drivers/clk/qcom/gcc-msm8998.c b/drivers/clk/qcom/gcc-msm8998.c
+index df1d7056436c..9d7016bcd680 100644
+--- a/drivers/clk/qcom/gcc-msm8998.c
++++ b/drivers/clk/qcom/gcc-msm8998.c
+@@ -1110,6 +1110,27 @@ static struct clk_rcg2 ufs_axi_clk_src = {
+ },
+ };
+
++static const struct freq_tbl ftbl_ufs_unipro_core_clk_src[] = {
++ F(37500000, P_GPLL0_OUT_MAIN, 16, 0, 0),
++ F(75000000, P_GPLL0_OUT_MAIN, 8, 0, 0),
++ F(150000000, P_GPLL0_OUT_MAIN, 4, 0, 0),
++ { }
++};
++
++static struct clk_rcg2 ufs_unipro_core_clk_src = {
++ .cmd_rcgr = 0x76028,
++ .mnd_width = 8,
++ .hid_width = 5,
++ .parent_map = gcc_parent_map_0,
++ .freq_tbl = ftbl_ufs_unipro_core_clk_src,
++ .clkr.hw.init = &(struct clk_init_data){
++ .name = "ufs_unipro_core_clk_src",
++ .parent_names = gcc_parent_names_0,
++ .num_parents = 4,
++ .ops = &clk_rcg2_ops,
++ },
++};
++
+ static const struct freq_tbl ftbl_usb30_master_clk_src[] = {
+ F(19200000, P_XO, 1, 0, 0),
+ F(60000000, P_GPLL0_OUT_MAIN, 10, 0, 0),
+@@ -2549,6 +2570,11 @@ static struct clk_branch gcc_ufs_unipro_core_clk = {
+ .enable_mask = BIT(0),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_ufs_unipro_core_clk",
++ .parent_names = (const char *[]){
++ "ufs_unipro_core_clk_src",
++ },
++ .num_parents = 1,
++ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_branch2_ops,
+ },
+ },
+@@ -2904,6 +2930,7 @@ static struct clk_regmap *gcc_msm8998_clocks[] = {
+ [SDCC4_APPS_CLK_SRC] = &sdcc4_apps_clk_src.clkr,
+ [TSIF_REF_CLK_SRC] = &tsif_ref_clk_src.clkr,
+ [UFS_AXI_CLK_SRC] = &ufs_axi_clk_src.clkr,
++ [UFS_UNIPRO_CORE_CLK_SRC] = &ufs_unipro_core_clk_src.clkr,
+ [USB30_MASTER_CLK_SRC] = &usb30_master_clk_src.clkr,
+ [USB30_MOCK_UTMI_CLK_SRC] = &usb30_mock_utmi_clk_src.clkr,
+ [USB3_PHY_AUX_CLK_SRC] = &usb3_phy_aux_clk_src.clkr,
+diff --git a/drivers/clk/qcom/gcc-sc7180.c b/drivers/clk/qcom/gcc-sc7180.c
+index 6a51b5b5fc19..73380525cb09 100644
+--- a/drivers/clk/qcom/gcc-sc7180.c
++++ b/drivers/clk/qcom/gcc-sc7180.c
+@@ -390,6 +390,7 @@ static const struct freq_tbl ftbl_gcc_qupv3_wrap0_s0_clk_src[] = {
+ F(29491200, P_GPLL0_OUT_EVEN, 1, 1536, 15625),
+ F(32000000, P_GPLL0_OUT_EVEN, 1, 8, 75),
+ F(48000000, P_GPLL0_OUT_EVEN, 1, 4, 25),
++ F(51200000, P_GPLL6_OUT_MAIN, 7.5, 0, 0),
+ F(64000000, P_GPLL0_OUT_EVEN, 1, 16, 75),
+ F(75000000, P_GPLL0_OUT_EVEN, 4, 0, 0),
+ F(80000000, P_GPLL0_OUT_EVEN, 1, 4, 15),
+@@ -405,8 +406,8 @@ static const struct freq_tbl ftbl_gcc_qupv3_wrap0_s0_clk_src[] = {
+
+ static struct clk_init_data gcc_qupv3_wrap0_s0_clk_src_init = {
+ .name = "gcc_qupv3_wrap0_s0_clk_src",
+- .parent_data = gcc_parent_data_0,
+- .num_parents = 4,
++ .parent_data = gcc_parent_data_1,
++ .num_parents = ARRAY_SIZE(gcc_parent_data_1),
+ .ops = &clk_rcg2_ops,
+ };
+
+@@ -414,15 +415,15 @@ static struct clk_rcg2 gcc_qupv3_wrap0_s0_clk_src = {
+ .cmd_rcgr = 0x17034,
+ .mnd_width = 16,
+ .hid_width = 5,
+- .parent_map = gcc_parent_map_0,
++ .parent_map = gcc_parent_map_1,
+ .freq_tbl = ftbl_gcc_qupv3_wrap0_s0_clk_src,
+ .clkr.hw.init = &gcc_qupv3_wrap0_s0_clk_src_init,
+ };
+
+ static struct clk_init_data gcc_qupv3_wrap0_s1_clk_src_init = {
+ .name = "gcc_qupv3_wrap0_s1_clk_src",
+- .parent_data = gcc_parent_data_0,
+- .num_parents = 4,
++ .parent_data = gcc_parent_data_1,
++ .num_parents = ARRAY_SIZE(gcc_parent_data_1),
+ .ops = &clk_rcg2_ops,
+ };
+
+@@ -430,15 +431,15 @@ static struct clk_rcg2 gcc_qupv3_wrap0_s1_clk_src = {
+ .cmd_rcgr = 0x17164,
+ .mnd_width = 16,
+ .hid_width = 5,
+- .parent_map = gcc_parent_map_0,
++ .parent_map = gcc_parent_map_1,
+ .freq_tbl = ftbl_gcc_qupv3_wrap0_s0_clk_src,
+ .clkr.hw.init = &gcc_qupv3_wrap0_s1_clk_src_init,
+ };
+
+ static struct clk_init_data gcc_qupv3_wrap0_s2_clk_src_init = {
+ .name = "gcc_qupv3_wrap0_s2_clk_src",
+- .parent_data = gcc_parent_data_0,
+- .num_parents = 4,
++ .parent_data = gcc_parent_data_1,
++ .num_parents = ARRAY_SIZE(gcc_parent_data_1),
+ .ops = &clk_rcg2_ops,
+ };
+
+@@ -446,15 +447,15 @@ static struct clk_rcg2 gcc_qupv3_wrap0_s2_clk_src = {
+ .cmd_rcgr = 0x17294,
+ .mnd_width = 16,
+ .hid_width = 5,
+- .parent_map = gcc_parent_map_0,
++ .parent_map = gcc_parent_map_1,
+ .freq_tbl = ftbl_gcc_qupv3_wrap0_s0_clk_src,
+ .clkr.hw.init = &gcc_qupv3_wrap0_s2_clk_src_init,
+ };
+
+ static struct clk_init_data gcc_qupv3_wrap0_s3_clk_src_init = {
+ .name = "gcc_qupv3_wrap0_s3_clk_src",
+- .parent_data = gcc_parent_data_0,
+- .num_parents = 4,
++ .parent_data = gcc_parent_data_1,
++ .num_parents = ARRAY_SIZE(gcc_parent_data_1),
+ .ops = &clk_rcg2_ops,
+ };
+
+@@ -462,15 +463,15 @@ static struct clk_rcg2 gcc_qupv3_wrap0_s3_clk_src = {
+ .cmd_rcgr = 0x173c4,
+ .mnd_width = 16,
+ .hid_width = 5,
+- .parent_map = gcc_parent_map_0,
++ .parent_map = gcc_parent_map_1,
+ .freq_tbl = ftbl_gcc_qupv3_wrap0_s0_clk_src,
+ .clkr.hw.init = &gcc_qupv3_wrap0_s3_clk_src_init,
+ };
+
+ static struct clk_init_data gcc_qupv3_wrap0_s4_clk_src_init = {
+ .name = "gcc_qupv3_wrap0_s4_clk_src",
+- .parent_data = gcc_parent_data_0,
+- .num_parents = 4,
++ .parent_data = gcc_parent_data_1,
++ .num_parents = ARRAY_SIZE(gcc_parent_data_1),
+ .ops = &clk_rcg2_ops,
+ };
+
+@@ -478,15 +479,15 @@ static struct clk_rcg2 gcc_qupv3_wrap0_s4_clk_src = {
+ .cmd_rcgr = 0x174f4,
+ .mnd_width = 16,
+ .hid_width = 5,
+- .parent_map = gcc_parent_map_0,
++ .parent_map = gcc_parent_map_1,
+ .freq_tbl = ftbl_gcc_qupv3_wrap0_s0_clk_src,
+ .clkr.hw.init = &gcc_qupv3_wrap0_s4_clk_src_init,
+ };
+
+ static struct clk_init_data gcc_qupv3_wrap0_s5_clk_src_init = {
+ .name = "gcc_qupv3_wrap0_s5_clk_src",
+- .parent_data = gcc_parent_data_0,
+- .num_parents = 4,
++ .parent_data = gcc_parent_data_1,
++ .num_parents = ARRAY_SIZE(gcc_parent_data_1),
+ .ops = &clk_rcg2_ops,
+ };
+
+@@ -494,15 +495,15 @@ static struct clk_rcg2 gcc_qupv3_wrap0_s5_clk_src = {
+ .cmd_rcgr = 0x17624,
+ .mnd_width = 16,
+ .hid_width = 5,
+- .parent_map = gcc_parent_map_0,
++ .parent_map = gcc_parent_map_1,
+ .freq_tbl = ftbl_gcc_qupv3_wrap0_s0_clk_src,
+ .clkr.hw.init = &gcc_qupv3_wrap0_s5_clk_src_init,
+ };
+
+ static struct clk_init_data gcc_qupv3_wrap1_s0_clk_src_init = {
+ .name = "gcc_qupv3_wrap1_s0_clk_src",
+- .parent_data = gcc_parent_data_0,
+- .num_parents = 4,
++ .parent_data = gcc_parent_data_1,
++ .num_parents = ARRAY_SIZE(gcc_parent_data_1),
+ .ops = &clk_rcg2_ops,
+ };
+
+@@ -510,15 +511,15 @@ static struct clk_rcg2 gcc_qupv3_wrap1_s0_clk_src = {
+ .cmd_rcgr = 0x18018,
+ .mnd_width = 16,
+ .hid_width = 5,
+- .parent_map = gcc_parent_map_0,
++ .parent_map = gcc_parent_map_1,
+ .freq_tbl = ftbl_gcc_qupv3_wrap0_s0_clk_src,
+ .clkr.hw.init = &gcc_qupv3_wrap1_s0_clk_src_init,
+ };
+
+ static struct clk_init_data gcc_qupv3_wrap1_s1_clk_src_init = {
+ .name = "gcc_qupv3_wrap1_s1_clk_src",
+- .parent_data = gcc_parent_data_0,
+- .num_parents = 4,
++ .parent_data = gcc_parent_data_1,
++ .num_parents = ARRAY_SIZE(gcc_parent_data_1),
+ .ops = &clk_rcg2_ops,
+ };
+
+@@ -526,15 +527,15 @@ static struct clk_rcg2 gcc_qupv3_wrap1_s1_clk_src = {
+ .cmd_rcgr = 0x18148,
+ .mnd_width = 16,
+ .hid_width = 5,
+- .parent_map = gcc_parent_map_0,
++ .parent_map = gcc_parent_map_1,
+ .freq_tbl = ftbl_gcc_qupv3_wrap0_s0_clk_src,
+ .clkr.hw.init = &gcc_qupv3_wrap1_s1_clk_src_init,
+ };
+
+ static struct clk_init_data gcc_qupv3_wrap1_s2_clk_src_init = {
+ .name = "gcc_qupv3_wrap1_s2_clk_src",
+- .parent_data = gcc_parent_data_0,
+- .num_parents = 4,
++ .parent_data = gcc_parent_data_1,
++ .num_parents = ARRAY_SIZE(gcc_parent_data_1),
+ .ops = &clk_rcg2_ops,
+ };
+
+@@ -542,15 +543,15 @@ static struct clk_rcg2 gcc_qupv3_wrap1_s2_clk_src = {
+ .cmd_rcgr = 0x18278,
+ .mnd_width = 16,
+ .hid_width = 5,
+- .parent_map = gcc_parent_map_0,
++ .parent_map = gcc_parent_map_1,
+ .freq_tbl = ftbl_gcc_qupv3_wrap0_s0_clk_src,
+ .clkr.hw.init = &gcc_qupv3_wrap1_s2_clk_src_init,
+ };
+
+ static struct clk_init_data gcc_qupv3_wrap1_s3_clk_src_init = {
+ .name = "gcc_qupv3_wrap1_s3_clk_src",
+- .parent_data = gcc_parent_data_0,
+- .num_parents = 4,
++ .parent_data = gcc_parent_data_1,
++ .num_parents = ARRAY_SIZE(gcc_parent_data_1),
+ .ops = &clk_rcg2_ops,
+ };
+
+@@ -558,15 +559,15 @@ static struct clk_rcg2 gcc_qupv3_wrap1_s3_clk_src = {
+ .cmd_rcgr = 0x183a8,
+ .mnd_width = 16,
+ .hid_width = 5,
+- .parent_map = gcc_parent_map_0,
++ .parent_map = gcc_parent_map_1,
+ .freq_tbl = ftbl_gcc_qupv3_wrap0_s0_clk_src,
+ .clkr.hw.init = &gcc_qupv3_wrap1_s3_clk_src_init,
+ };
+
+ static struct clk_init_data gcc_qupv3_wrap1_s4_clk_src_init = {
+ .name = "gcc_qupv3_wrap1_s4_clk_src",
+- .parent_data = gcc_parent_data_0,
+- .num_parents = 4,
++ .parent_data = gcc_parent_data_1,
++ .num_parents = ARRAY_SIZE(gcc_parent_data_1),
+ .ops = &clk_rcg2_ops,
+ };
+
+@@ -574,15 +575,15 @@ static struct clk_rcg2 gcc_qupv3_wrap1_s4_clk_src = {
+ .cmd_rcgr = 0x184d8,
+ .mnd_width = 16,
+ .hid_width = 5,
+- .parent_map = gcc_parent_map_0,
++ .parent_map = gcc_parent_map_1,
+ .freq_tbl = ftbl_gcc_qupv3_wrap0_s0_clk_src,
+ .clkr.hw.init = &gcc_qupv3_wrap1_s4_clk_src_init,
+ };
+
+ static struct clk_init_data gcc_qupv3_wrap1_s5_clk_src_init = {
+ .name = "gcc_qupv3_wrap1_s5_clk_src",
+- .parent_data = gcc_parent_data_0,
+- .num_parents = 4,
++ .parent_data = gcc_parent_data_1,
++ .num_parents = ARRAY_SIZE(gcc_parent_data_1),
+ .ops = &clk_rcg2_ops,
+ };
+
+@@ -590,7 +591,7 @@ static struct clk_rcg2 gcc_qupv3_wrap1_s5_clk_src = {
+ .cmd_rcgr = 0x18608,
+ .mnd_width = 16,
+ .hid_width = 5,
+- .parent_map = gcc_parent_map_0,
++ .parent_map = gcc_parent_map_1,
+ .freq_tbl = ftbl_gcc_qupv3_wrap0_s0_clk_src,
+ .clkr.hw.init = &gcc_qupv3_wrap1_s5_clk_src_init,
+ };
+diff --git a/drivers/clk/qcom/gcc-sm8150.c b/drivers/clk/qcom/gcc-sm8150.c
+index 732bc7c937e6..72524cf11048 100644
+--- a/drivers/clk/qcom/gcc-sm8150.c
++++ b/drivers/clk/qcom/gcc-sm8150.c
+@@ -1616,6 +1616,36 @@ static struct clk_branch gcc_gpu_cfg_ahb_clk = {
+ },
+ };
+
++static struct clk_branch gcc_gpu_gpll0_clk_src = {
++ .clkr = {
++ .enable_reg = 0x52004,
++ .enable_mask = BIT(15),
++ .hw.init = &(struct clk_init_data){
++ .name = "gcc_gpu_gpll0_clk_src",
++ .parent_hws = (const struct clk_hw *[]){
++ &gpll0.clkr.hw },
++ .num_parents = 1,
++ .flags = CLK_SET_RATE_PARENT,
++ .ops = &clk_branch2_ops,
++ },
++ },
++};
++
++static struct clk_branch gcc_gpu_gpll0_div_clk_src = {
++ .clkr = {
++ .enable_reg = 0x52004,
++ .enable_mask = BIT(16),
++ .hw.init = &(struct clk_init_data){
++ .name = "gcc_gpu_gpll0_div_clk_src",
++ .parent_hws = (const struct clk_hw *[]){
++ &gcc_gpu_gpll0_clk_src.clkr.hw },
++ .num_parents = 1,
++ .flags = CLK_SET_RATE_PARENT,
++ .ops = &clk_branch2_ops,
++ },
++ },
++};
++
+ static struct clk_branch gcc_gpu_iref_clk = {
+ .halt_reg = 0x8c010,
+ .halt_check = BRANCH_HALT,
+@@ -1698,6 +1728,36 @@ static struct clk_branch gcc_npu_cfg_ahb_clk = {
+ },
+ };
+
++static struct clk_branch gcc_npu_gpll0_clk_src = {
++ .clkr = {
++ .enable_reg = 0x52004,
++ .enable_mask = BIT(18),
++ .hw.init = &(struct clk_init_data){
++ .name = "gcc_npu_gpll0_clk_src",
++ .parent_hws = (const struct clk_hw *[]){
++ &gpll0.clkr.hw },
++ .num_parents = 1,
++ .flags = CLK_SET_RATE_PARENT,
++ .ops = &clk_branch2_ops,
++ },
++ },
++};
++
++static struct clk_branch gcc_npu_gpll0_div_clk_src = {
++ .clkr = {
++ .enable_reg = 0x52004,
++ .enable_mask = BIT(19),
++ .hw.init = &(struct clk_init_data){
++ .name = "gcc_npu_gpll0_div_clk_src",
++ .parent_hws = (const struct clk_hw *[]){
++ &gcc_npu_gpll0_clk_src.clkr.hw },
++ .num_parents = 1,
++ .flags = CLK_SET_RATE_PARENT,
++ .ops = &clk_branch2_ops,
++ },
++ },
++};
++
+ static struct clk_branch gcc_npu_trig_clk = {
+ .halt_reg = 0x4d00c,
+ .halt_check = BRANCH_VOTED,
+@@ -2812,6 +2872,45 @@ static struct clk_branch gcc_ufs_card_phy_aux_hw_ctl_clk = {
+ },
+ };
+
++/* external clocks so add BRANCH_HALT_SKIP */
++static struct clk_branch gcc_ufs_card_rx_symbol_0_clk = {
++ .halt_check = BRANCH_HALT_SKIP,
++ .clkr = {
++ .enable_reg = 0x7501c,
++ .enable_mask = BIT(0),
++ .hw.init = &(struct clk_init_data){
++ .name = "gcc_ufs_card_rx_symbol_0_clk",
++ .ops = &clk_branch2_ops,
++ },
++ },
++};
++
++/* external clocks so add BRANCH_HALT_SKIP */
++static struct clk_branch gcc_ufs_card_rx_symbol_1_clk = {
++ .halt_check = BRANCH_HALT_SKIP,
++ .clkr = {
++ .enable_reg = 0x750ac,
++ .enable_mask = BIT(0),
++ .hw.init = &(struct clk_init_data){
++ .name = "gcc_ufs_card_rx_symbol_1_clk",
++ .ops = &clk_branch2_ops,
++ },
++ },
++};
++
++/* external clocks so add BRANCH_HALT_SKIP */
++static struct clk_branch gcc_ufs_card_tx_symbol_0_clk = {
++ .halt_check = BRANCH_HALT_SKIP,
++ .clkr = {
++ .enable_reg = 0x75018,
++ .enable_mask = BIT(0),
++ .hw.init = &(struct clk_init_data){
++ .name = "gcc_ufs_card_tx_symbol_0_clk",
++ .ops = &clk_branch2_ops,
++ },
++ },
++};
++
+ static struct clk_branch gcc_ufs_card_unipro_core_clk = {
+ .halt_reg = 0x75058,
+ .halt_check = BRANCH_HALT,
+@@ -2992,6 +3091,45 @@ static struct clk_branch gcc_ufs_phy_phy_aux_hw_ctl_clk = {
+ },
+ };
+
++/* external clocks so add BRANCH_HALT_SKIP */
++static struct clk_branch gcc_ufs_phy_rx_symbol_0_clk = {
++ .halt_check = BRANCH_HALT_SKIP,
++ .clkr = {
++ .enable_reg = 0x7701c,
++ .enable_mask = BIT(0),
++ .hw.init = &(struct clk_init_data){
++ .name = "gcc_ufs_phy_rx_symbol_0_clk",
++ .ops = &clk_branch2_ops,
++ },
++ },
++};
++
++/* external clocks so add BRANCH_HALT_SKIP */
++static struct clk_branch gcc_ufs_phy_rx_symbol_1_clk = {
++ .halt_check = BRANCH_HALT_SKIP,
++ .clkr = {
++ .enable_reg = 0x770ac,
++ .enable_mask = BIT(0),
++ .hw.init = &(struct clk_init_data){
++ .name = "gcc_ufs_phy_rx_symbol_1_clk",
++ .ops = &clk_branch2_ops,
++ },
++ },
++};
++
++/* external clocks so add BRANCH_HALT_SKIP */
++static struct clk_branch gcc_ufs_phy_tx_symbol_0_clk = {
++ .halt_check = BRANCH_HALT_SKIP,
++ .clkr = {
++ .enable_reg = 0x77018,
++ .enable_mask = BIT(0),
++ .hw.init = &(struct clk_init_data){
++ .name = "gcc_ufs_phy_tx_symbol_0_clk",
++ .ops = &clk_branch2_ops,
++ },
++ },
++};
++
+ static struct clk_branch gcc_ufs_phy_unipro_core_clk = {
+ .halt_reg = 0x77058,
+ .halt_check = BRANCH_HALT,
+@@ -3374,12 +3512,16 @@ static struct clk_regmap *gcc_sm8150_clocks[] = {
+ [GCC_GP3_CLK] = &gcc_gp3_clk.clkr,
+ [GCC_GP3_CLK_SRC] = &gcc_gp3_clk_src.clkr,
+ [GCC_GPU_CFG_AHB_CLK] = &gcc_gpu_cfg_ahb_clk.clkr,
++ [GCC_GPU_GPLL0_CLK_SRC] = &gcc_gpu_gpll0_clk_src.clkr,
++ [GCC_GPU_GPLL0_DIV_CLK_SRC] = &gcc_gpu_gpll0_div_clk_src.clkr,
+ [GCC_GPU_IREF_CLK] = &gcc_gpu_iref_clk.clkr,
+ [GCC_GPU_MEMNOC_GFX_CLK] = &gcc_gpu_memnoc_gfx_clk.clkr,
+ [GCC_GPU_SNOC_DVM_GFX_CLK] = &gcc_gpu_snoc_dvm_gfx_clk.clkr,
+ [GCC_NPU_AT_CLK] = &gcc_npu_at_clk.clkr,
+ [GCC_NPU_AXI_CLK] = &gcc_npu_axi_clk.clkr,
+ [GCC_NPU_CFG_AHB_CLK] = &gcc_npu_cfg_ahb_clk.clkr,
++ [GCC_NPU_GPLL0_CLK_SRC] = &gcc_npu_gpll0_clk_src.clkr,
++ [GCC_NPU_GPLL0_DIV_CLK_SRC] = &gcc_npu_gpll0_div_clk_src.clkr,
+ [GCC_NPU_TRIG_CLK] = &gcc_npu_trig_clk.clkr,
+ [GCC_PCIE0_PHY_REFGEN_CLK] = &gcc_pcie0_phy_refgen_clk.clkr,
+ [GCC_PCIE1_PHY_REFGEN_CLK] = &gcc_pcie1_phy_refgen_clk.clkr,
+@@ -3484,6 +3626,9 @@ static struct clk_regmap *gcc_sm8150_clocks[] = {
+ [GCC_UFS_CARD_PHY_AUX_CLK_SRC] = &gcc_ufs_card_phy_aux_clk_src.clkr,
+ [GCC_UFS_CARD_PHY_AUX_HW_CTL_CLK] =
+ &gcc_ufs_card_phy_aux_hw_ctl_clk.clkr,
++ [GCC_UFS_CARD_RX_SYMBOL_0_CLK] = &gcc_ufs_card_rx_symbol_0_clk.clkr,
++ [GCC_UFS_CARD_RX_SYMBOL_1_CLK] = &gcc_ufs_card_rx_symbol_1_clk.clkr,
++ [GCC_UFS_CARD_TX_SYMBOL_0_CLK] = &gcc_ufs_card_tx_symbol_0_clk.clkr,
+ [GCC_UFS_CARD_UNIPRO_CORE_CLK] = &gcc_ufs_card_unipro_core_clk.clkr,
+ [GCC_UFS_CARD_UNIPRO_CORE_CLK_SRC] =
+ &gcc_ufs_card_unipro_core_clk_src.clkr,
+@@ -3501,6 +3646,9 @@ static struct clk_regmap *gcc_sm8150_clocks[] = {
+ [GCC_UFS_PHY_PHY_AUX_CLK] = &gcc_ufs_phy_phy_aux_clk.clkr,
+ [GCC_UFS_PHY_PHY_AUX_CLK_SRC] = &gcc_ufs_phy_phy_aux_clk_src.clkr,
+ [GCC_UFS_PHY_PHY_AUX_HW_CTL_CLK] = &gcc_ufs_phy_phy_aux_hw_ctl_clk.clkr,
++ [GCC_UFS_PHY_RX_SYMBOL_0_CLK] = &gcc_ufs_phy_rx_symbol_0_clk.clkr,
++ [GCC_UFS_PHY_RX_SYMBOL_1_CLK] = &gcc_ufs_phy_rx_symbol_1_clk.clkr,
++ [GCC_UFS_PHY_TX_SYMBOL_0_CLK] = &gcc_ufs_phy_tx_symbol_0_clk.clkr,
+ [GCC_UFS_PHY_UNIPRO_CORE_CLK] = &gcc_ufs_phy_unipro_core_clk.clkr,
+ [GCC_UFS_PHY_UNIPRO_CORE_CLK_SRC] =
+ &gcc_ufs_phy_unipro_core_clk_src.clkr,
+diff --git a/drivers/counter/104-quad-8.c b/drivers/counter/104-quad-8.c
+index aa13708c2bc3..d22cfae1b019 100644
+--- a/drivers/counter/104-quad-8.c
++++ b/drivers/counter/104-quad-8.c
+@@ -1274,18 +1274,26 @@ static ssize_t quad8_signal_cable_fault_read(struct counter_device *counter,
+ struct counter_signal *signal,
+ void *private, char *buf)
+ {
+- const struct quad8_iio *const priv = counter->priv;
++ struct quad8_iio *const priv = counter->priv;
+ const size_t channel_id = signal->id / 2;
+- const bool disabled = !(priv->cable_fault_enable & BIT(channel_id));
++ bool disabled;
+ unsigned int status;
+ unsigned int fault;
+
+- if (disabled)
++ mutex_lock(&priv->lock);
++
++ disabled = !(priv->cable_fault_enable & BIT(channel_id));
++
++ if (disabled) {
++ mutex_unlock(&priv->lock);
+ return -EINVAL;
++ }
+
+ /* Logic 0 = cable fault */
+ status = inb(priv->base + QUAD8_DIFF_ENCODER_CABLE_STATUS);
+
++ mutex_unlock(&priv->lock);
++
+ /* Mask respective channel and invert logic */
+ fault = !(status & BIT(channel_id));
+
+@@ -1317,6 +1325,8 @@ static ssize_t quad8_signal_cable_fault_enable_write(
+ if (ret)
+ return ret;
+
++ mutex_lock(&priv->lock);
++
+ if (enable)
+ priv->cable_fault_enable |= BIT(channel_id);
+ else
+@@ -1327,6 +1337,8 @@ static ssize_t quad8_signal_cable_fault_enable_write(
+
+ outb(cable_fault_enable, priv->base + QUAD8_DIFF_ENCODER_CABLE_STATUS);
+
++ mutex_unlock(&priv->lock);
++
+ return len;
+ }
+
+@@ -1353,6 +1365,8 @@ static ssize_t quad8_signal_fck_prescaler_write(struct counter_device *counter,
+ if (ret)
+ return ret;
+
++ mutex_lock(&priv->lock);
++
+ priv->fck_prescaler[channel_id] = prescaler;
+
+ /* Reset Byte Pointer */
+@@ -1363,6 +1377,8 @@ static ssize_t quad8_signal_fck_prescaler_write(struct counter_device *counter,
+ outb(QUAD8_CTR_RLD | QUAD8_RLD_RESET_BP | QUAD8_RLD_PRESET_PSC,
+ base_offset + 1);
+
++ mutex_unlock(&priv->lock);
++
+ return len;
+ }
+
+diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
+index e782aaaf3e1f..c26937143c35 100644
+--- a/drivers/dma-buf/dma-buf.c
++++ b/drivers/dma-buf/dma-buf.c
+@@ -45,10 +45,10 @@ static char *dmabuffs_dname(struct dentry *dentry, char *buffer, int buflen)
+ size_t ret = 0;
+
+ dmabuf = dentry->d_fsdata;
+- dma_resv_lock(dmabuf->resv, NULL);
++ spin_lock(&dmabuf->name_lock);
+ if (dmabuf->name)
+ ret = strlcpy(name, dmabuf->name, DMA_BUF_NAME_LEN);
+- dma_resv_unlock(dmabuf->resv);
++ spin_unlock(&dmabuf->name_lock);
+
+ return dynamic_dname(dentry, buffer, buflen, "/%s:%s",
+ dentry->d_name.name, ret > 0 ? name : "");
+@@ -338,8 +338,10 @@ static long dma_buf_set_name(struct dma_buf *dmabuf, const char __user *buf)
+ kfree(name);
+ goto out_unlock;
+ }
++ spin_lock(&dmabuf->name_lock);
+ kfree(dmabuf->name);
+ dmabuf->name = name;
++ spin_unlock(&dmabuf->name_lock);
+
+ out_unlock:
+ dma_resv_unlock(dmabuf->resv);
+@@ -402,10 +404,10 @@ static void dma_buf_show_fdinfo(struct seq_file *m, struct file *file)
+ /* Don't count the temporary reference taken inside procfs seq_show */
+ seq_printf(m, "count:\t%ld\n", file_count(dmabuf->file) - 1);
+ seq_printf(m, "exp_name:\t%s\n", dmabuf->exp_name);
+- dma_resv_lock(dmabuf->resv, NULL);
++ spin_lock(&dmabuf->name_lock);
+ if (dmabuf->name)
+ seq_printf(m, "name:\t%s\n", dmabuf->name);
+- dma_resv_unlock(dmabuf->resv);
++ spin_unlock(&dmabuf->name_lock);
+ }
+
+ static const struct file_operations dma_buf_fops = {
+@@ -542,6 +544,7 @@ struct dma_buf *dma_buf_export(const struct dma_buf_export_info *exp_info)
+ dmabuf->size = exp_info->size;
+ dmabuf->exp_name = exp_info->exp_name;
+ dmabuf->owner = exp_info->owner;
++ spin_lock_init(&dmabuf->name_lock);
+ init_waitqueue_head(&dmabuf->poll);
+ dmabuf->cb_excl.poll = dmabuf->cb_shared.poll = &dmabuf->poll;
+ dmabuf->cb_excl.active = dmabuf->cb_shared.active = 0;
+diff --git a/drivers/dma/dmatest.c b/drivers/dma/dmatest.c
+index 0425984db118..62d9825a49e9 100644
+--- a/drivers/dma/dmatest.c
++++ b/drivers/dma/dmatest.c
+@@ -1168,6 +1168,8 @@ static int dmatest_run_set(const char *val, const struct kernel_param *kp)
+ } else if (dmatest_run) {
+ if (!is_threaded_test_pending(info)) {
+ pr_info("No channels configured, continue with any\n");
++ if (!is_threaded_test_run(info))
++ stop_threaded_test(info);
+ add_threaded_test(info);
+ }
+ start_threaded_tests(info);
+diff --git a/drivers/dma/dw/core.c b/drivers/dma/dw/core.c
+index 21cb2a58dbd2..a1b56f52db2f 100644
+--- a/drivers/dma/dw/core.c
++++ b/drivers/dma/dw/core.c
+@@ -118,16 +118,11 @@ static void dwc_initialize(struct dw_dma_chan *dwc)
+ {
+ struct dw_dma *dw = to_dw_dma(dwc->chan.device);
+
+- if (test_bit(DW_DMA_IS_INITIALIZED, &dwc->flags))
+- return;
+-
+ dw->initialize_chan(dwc);
+
+ /* Enable interrupts */
+ channel_set_bit(dw, MASK.XFER, dwc->mask);
+ channel_set_bit(dw, MASK.ERROR, dwc->mask);
+-
+- set_bit(DW_DMA_IS_INITIALIZED, &dwc->flags);
+ }
+
+ /*----------------------------------------------------------------------*/
+@@ -954,8 +949,6 @@ static void dwc_issue_pending(struct dma_chan *chan)
+
+ void do_dw_dma_off(struct dw_dma *dw)
+ {
+- unsigned int i;
+-
+ dma_writel(dw, CFG, 0);
+
+ channel_clear_bit(dw, MASK.XFER, dw->all_chan_mask);
+@@ -966,9 +959,6 @@ void do_dw_dma_off(struct dw_dma *dw)
+
+ while (dma_readl(dw, CFG) & DW_CFG_DMA_EN)
+ cpu_relax();
+-
+- for (i = 0; i < dw->dma.chancnt; i++)
+- clear_bit(DW_DMA_IS_INITIALIZED, &dw->chan[i].flags);
+ }
+
+ void do_dw_dma_on(struct dw_dma *dw)
+@@ -1032,8 +1022,6 @@ static void dwc_free_chan_resources(struct dma_chan *chan)
+ /* Clear custom channel configuration */
+ memset(&dwc->dws, 0, sizeof(struct dw_dma_slave));
+
+- clear_bit(DW_DMA_IS_INITIALIZED, &dwc->flags);
+-
+ /* Disable interrupts */
+ channel_clear_bit(dw, MASK.XFER, dwc->mask);
+ channel_clear_bit(dw, MASK.BLOCK, dwc->mask);
+diff --git a/drivers/dma/fsl-edma-common.h b/drivers/dma/fsl-edma-common.h
+index 67e422590c9a..ec1169741de1 100644
+--- a/drivers/dma/fsl-edma-common.h
++++ b/drivers/dma/fsl-edma-common.h
+@@ -33,7 +33,7 @@
+ #define EDMA_TCD_ATTR_DSIZE_16BIT BIT(0)
+ #define EDMA_TCD_ATTR_DSIZE_32BIT BIT(1)
+ #define EDMA_TCD_ATTR_DSIZE_64BIT (BIT(0) | BIT(1))
+-#define EDMA_TCD_ATTR_DSIZE_32BYTE (BIT(3) | BIT(0))
++#define EDMA_TCD_ATTR_DSIZE_32BYTE (BIT(2) | BIT(0))
+ #define EDMA_TCD_ATTR_SSIZE_8BIT 0
+ #define EDMA_TCD_ATTR_SSIZE_16BIT (EDMA_TCD_ATTR_DSIZE_16BIT << 8)
+ #define EDMA_TCD_ATTR_SSIZE_32BIT (EDMA_TCD_ATTR_DSIZE_32BIT << 8)
+diff --git a/drivers/dma/fsl-edma.c b/drivers/dma/fsl-edma.c
+index eff7ebd8cf35..90bb72af306c 100644
+--- a/drivers/dma/fsl-edma.c
++++ b/drivers/dma/fsl-edma.c
+@@ -45,6 +45,13 @@ static irqreturn_t fsl_edma_tx_handler(int irq, void *dev_id)
+ fsl_chan = &fsl_edma->chans[ch];
+
+ spin_lock(&fsl_chan->vchan.lock);
++
++ if (!fsl_chan->edesc) {
++ /* terminate_all called before */
++ spin_unlock(&fsl_chan->vchan.lock);
++ continue;
++ }
++
+ if (!fsl_chan->edesc->iscyclic) {
+ list_del(&fsl_chan->edesc->vdesc.node);
+ vchan_cookie_complete(&fsl_chan->edesc->vdesc);
+diff --git a/drivers/dma/idxd/cdev.c b/drivers/dma/idxd/cdev.c
+index ff49847e37a8..cb376cf6a2d2 100644
+--- a/drivers/dma/idxd/cdev.c
++++ b/drivers/dma/idxd/cdev.c
+@@ -74,6 +74,7 @@ static int idxd_cdev_open(struct inode *inode, struct file *filp)
+ struct idxd_device *idxd;
+ struct idxd_wq *wq;
+ struct device *dev;
++ int rc = 0;
+
+ wq = inode_wq(inode);
+ idxd = wq->idxd;
+@@ -81,17 +82,27 @@ static int idxd_cdev_open(struct inode *inode, struct file *filp)
+
+ dev_dbg(dev, "%s called: %d\n", __func__, idxd_wq_refcount(wq));
+
+- if (idxd_wq_refcount(wq) > 0 && wq_dedicated(wq))
+- return -EBUSY;
+-
+ ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
+ if (!ctx)
+ return -ENOMEM;
+
++ mutex_lock(&wq->wq_lock);
++
++ if (idxd_wq_refcount(wq) > 0 && wq_dedicated(wq)) {
++ rc = -EBUSY;
++ goto failed;
++ }
++
+ ctx->wq = wq;
+ filp->private_data = ctx;
+ idxd_wq_get(wq);
++ mutex_unlock(&wq->wq_lock);
+ return 0;
++
++ failed:
++ mutex_unlock(&wq->wq_lock);
++ kfree(ctx);
++ return rc;
+ }
+
+ static int idxd_cdev_release(struct inode *node, struct file *filep)
+@@ -105,7 +116,9 @@ static int idxd_cdev_release(struct inode *node, struct file *filep)
+ filep->private_data = NULL;
+
+ kfree(ctx);
++ mutex_lock(&wq->wq_lock);
+ idxd_wq_put(wq);
++ mutex_unlock(&wq->wq_lock);
+ return 0;
+ }
+
+diff --git a/drivers/dma/idxd/device.c b/drivers/dma/idxd/device.c
+index 8d79a8787104..8d2718c585dc 100644
+--- a/drivers/dma/idxd/device.c
++++ b/drivers/dma/idxd/device.c
+@@ -320,6 +320,31 @@ void idxd_wq_unmap_portal(struct idxd_wq *wq)
+ devm_iounmap(dev, wq->dportal);
+ }
+
++void idxd_wq_disable_cleanup(struct idxd_wq *wq)
++{
++ struct idxd_device *idxd = wq->idxd;
++ struct device *dev = &idxd->pdev->dev;
++ int i, wq_offset;
++
++ lockdep_assert_held(&idxd->dev_lock);
++ memset(&wq->wqcfg, 0, sizeof(wq->wqcfg));
++ wq->type = IDXD_WQT_NONE;
++ wq->size = 0;
++ wq->group = NULL;
++ wq->threshold = 0;
++ wq->priority = 0;
++ clear_bit(WQ_FLAG_DEDICATED, &wq->flags);
++ memset(wq->name, 0, WQ_NAME_SIZE);
++
++ for (i = 0; i < 8; i++) {
++ wq_offset = idxd->wqcfg_offset + wq->id * 32 + i * sizeof(u32);
++ iowrite32(0, idxd->reg_base + wq_offset);
++ dev_dbg(dev, "WQ[%d][%d][%#x]: %#x\n",
++ wq->id, i, wq_offset,
++ ioread32(idxd->reg_base + wq_offset));
++ }
++}
++
+ /* Device control bits */
+ static inline bool idxd_is_enabled(struct idxd_device *idxd)
+ {
+diff --git a/drivers/dma/idxd/idxd.h b/drivers/dma/idxd/idxd.h
+index b8f8a363b4a7..908c8d0ef3ab 100644
+--- a/drivers/dma/idxd/idxd.h
++++ b/drivers/dma/idxd/idxd.h
+@@ -290,6 +290,7 @@ int idxd_wq_enable(struct idxd_wq *wq);
+ int idxd_wq_disable(struct idxd_wq *wq);
+ int idxd_wq_map_portal(struct idxd_wq *wq);
+ void idxd_wq_unmap_portal(struct idxd_wq *wq);
++void idxd_wq_disable_cleanup(struct idxd_wq *wq);
+
+ /* submission */
+ int idxd_submit_desc(struct idxd_wq *wq, struct idxd_desc *desc);
+diff --git a/drivers/dma/idxd/irq.c b/drivers/dma/idxd/irq.c
+index 6510791b9921..8a35f58da689 100644
+--- a/drivers/dma/idxd/irq.c
++++ b/drivers/dma/idxd/irq.c
+@@ -141,7 +141,7 @@ irqreturn_t idxd_misc_thread(int vec, void *data)
+
+ iowrite32(cause, idxd->reg_base + IDXD_INTCAUSE_OFFSET);
+ if (!err)
+- return IRQ_HANDLED;
++ goto out;
+
+ gensts.bits = ioread32(idxd->reg_base + IDXD_GENSTATS_OFFSET);
+ if (gensts.state == IDXD_DEVICE_STATE_HALT) {
+@@ -162,6 +162,7 @@ irqreturn_t idxd_misc_thread(int vec, void *data)
+ spin_unlock_bh(&idxd->dev_lock);
+ }
+
++ out:
+ idxd_unmask_msix_vector(idxd, irq_entry->id);
+ return IRQ_HANDLED;
+ }
+diff --git a/drivers/dma/idxd/sysfs.c b/drivers/dma/idxd/sysfs.c
+index 3999827970ab..fbb455ece81e 100644
+--- a/drivers/dma/idxd/sysfs.c
++++ b/drivers/dma/idxd/sysfs.c
+@@ -315,6 +315,11 @@ static int idxd_config_bus_remove(struct device *dev)
+ idxd_unregister_dma_device(idxd);
+ spin_lock_irqsave(&idxd->dev_lock, flags);
+ rc = idxd_device_disable(idxd);
++ for (i = 0; i < idxd->max_wqs; i++) {
++ struct idxd_wq *wq = &idxd->wqs[i];
++
++ idxd_wq_disable_cleanup(wq);
++ }
+ spin_unlock_irqrestore(&idxd->dev_lock, flags);
+ module_put(THIS_MODULE);
+ if (rc < 0)
+diff --git a/drivers/dma/mcf-edma.c b/drivers/dma/mcf-edma.c
+index e15bd15a9ef6..e12b754e6398 100644
+--- a/drivers/dma/mcf-edma.c
++++ b/drivers/dma/mcf-edma.c
+@@ -35,6 +35,13 @@ static irqreturn_t mcf_edma_tx_handler(int irq, void *dev_id)
+ mcf_chan = &mcf_edma->chans[ch];
+
+ spin_lock(&mcf_chan->vchan.lock);
++
++ if (!mcf_chan->edesc) {
++ /* terminate_all called before */
++ spin_unlock(&mcf_chan->vchan.lock);
++ continue;
++ }
++
+ if (!mcf_chan->edesc->iscyclic) {
+ list_del(&mcf_chan->edesc->vdesc.node);
+ vchan_cookie_complete(&mcf_chan->edesc->vdesc);
+diff --git a/drivers/dma/sh/usb-dmac.c b/drivers/dma/sh/usb-dmac.c
+index b218a013c260..8f7ceb698226 100644
+--- a/drivers/dma/sh/usb-dmac.c
++++ b/drivers/dma/sh/usb-dmac.c
+@@ -586,6 +586,8 @@ static void usb_dmac_isr_transfer_end(struct usb_dmac_chan *chan)
+ desc->residue = usb_dmac_get_current_residue(chan, desc,
+ desc->sg_index - 1);
+ desc->done_cookie = desc->vd.tx.cookie;
++ desc->vd.tx_result.result = DMA_TRANS_NOERROR;
++ desc->vd.tx_result.residue = desc->residue;
+ vchan_cookie_complete(&desc->vd);
+
+ /* Restart the next transfer if this driver has a next desc */
+diff --git a/drivers/dma/ti/k3-udma.c b/drivers/dma/ti/k3-udma.c
+index a90e154b0ae0..7cab23fe5c73 100644
+--- a/drivers/dma/ti/k3-udma.c
++++ b/drivers/dma/ti/k3-udma.c
+@@ -1925,8 +1925,6 @@ static int udma_alloc_chan_resources(struct dma_chan *chan)
+
+ udma_reset_rings(uc);
+
+- INIT_DELAYED_WORK_ONSTACK(&uc->tx_drain.work,
+- udma_check_tx_completion);
+ return 0;
+
+ err_irq_free:
+@@ -3038,7 +3036,6 @@ static void udma_free_chan_resources(struct dma_chan *chan)
+ }
+
+ cancel_delayed_work_sync(&uc->tx_drain.work);
+- destroy_delayed_work_on_stack(&uc->tx_drain.work);
+
+ if (uc->irq_num_ring > 0) {
+ free_irq(uc->irq_num_ring, uc);
+@@ -3189,7 +3186,7 @@ static struct udma_match_data am654_main_data = {
+
+ static struct udma_match_data am654_mcu_data = {
+ .psil_base = 0x6000,
+- .enable_memcpy_support = true, /* TEST: DMA domains */
++ .enable_memcpy_support = false,
+ .statictr_z_mask = GENMASK(11, 0),
+ .rchan_oes_offset = 0x2000,
+ .tpl_levels = 2,
+@@ -3609,7 +3606,7 @@ static int udma_probe(struct platform_device *pdev)
+ return ret;
+ }
+
+- ret = of_property_read_u32(navss_node, "ti,udma-atype", &ud->atype);
++ ret = of_property_read_u32(dev->of_node, "ti,udma-atype", &ud->atype);
+ if (!ret && ud->atype > 2) {
+ dev_err(dev, "Invalid atype: %u\n", ud->atype);
+ return -EINVAL;
+@@ -3727,6 +3724,7 @@ static int udma_probe(struct platform_device *pdev)
+ tasklet_init(&uc->vc.task, udma_vchan_complete,
+ (unsigned long)&uc->vc);
+ init_completion(&uc->teardown_completed);
++ INIT_DELAYED_WORK(&uc->tx_drain.work, udma_check_tx_completion);
+ }
+
+ ret = dma_async_device_register(&ud->ddev);
+diff --git a/drivers/gpio/gpio-pca953x.c b/drivers/gpio/gpio-pca953x.c
+index 48bea0997e70..374b772150a6 100644
+--- a/drivers/gpio/gpio-pca953x.c
++++ b/drivers/gpio/gpio-pca953x.c
+@@ -399,6 +399,7 @@ static const struct regmap_config pca953x_ai_i2c_regmap = {
+ .writeable_reg = pca953x_writeable_register,
+ .volatile_reg = pca953x_volatile_register,
+
++ .disable_locking = true,
+ .cache_type = REGCACHE_RBTREE,
+ .max_register = 0x7f,
+ };
+diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c
+index 1dc57079933c..ee96f12c4aef 100644
+--- a/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c
+@@ -286,30 +286,20 @@ static uint64_t sdma_v5_0_ring_get_rptr(struct amdgpu_ring *ring)
+ static uint64_t sdma_v5_0_ring_get_wptr(struct amdgpu_ring *ring)
+ {
+ struct amdgpu_device *adev = ring->adev;
+- u64 *wptr = NULL;
+- uint64_t local_wptr = 0;
++ u64 wptr;
+
+ if (ring->use_doorbell) {
+ /* XXX check if swapping is necessary on BE */
+- wptr = ((u64 *)&adev->wb.wb[ring->wptr_offs]);
+- DRM_DEBUG("wptr/doorbell before shift == 0x%016llx\n", *wptr);
+- *wptr = (*wptr) >> 2;
+- DRM_DEBUG("wptr/doorbell after shift == 0x%016llx\n", *wptr);
++ wptr = READ_ONCE(*((u64 *)&adev->wb.wb[ring->wptr_offs]));
++ DRM_DEBUG("wptr/doorbell before shift == 0x%016llx\n", wptr);
+ } else {
+- u32 lowbit, highbit;
+-
+- wptr = &local_wptr;
+- lowbit = RREG32(sdma_v5_0_get_reg_offset(adev, ring->me, mmSDMA0_GFX_RB_WPTR)) >> 2;
+- highbit = RREG32(sdma_v5_0_get_reg_offset(adev, ring->me, mmSDMA0_GFX_RB_WPTR_HI)) >> 2;
+-
+- DRM_DEBUG("wptr [%i]high== 0x%08x low==0x%08x\n",
+- ring->me, highbit, lowbit);
+- *wptr = highbit;
+- *wptr = (*wptr) << 32;
+- *wptr |= lowbit;
++ wptr = RREG32(sdma_v5_0_get_reg_offset(adev, ring->me, mmSDMA0_GFX_RB_WPTR_HI));
++ wptr = wptr << 32;
++ wptr |= RREG32(sdma_v5_0_get_reg_offset(adev, ring->me, mmSDMA0_GFX_RB_WPTR));
++ DRM_DEBUG("wptr before shift [%i] wptr == 0x%016llx\n", ring->me, wptr);
+ }
+
+- return *wptr;
++ return wptr >> 2;
+ }
+
+ /**
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index 69b1f61928ef..d06fa6380179 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -956,6 +956,9 @@ static int amdgpu_dm_init(struct amdgpu_device *adev)
+ /* Update the actual used number of crtc */
+ adev->mode_info.num_crtc = adev->dm.display_indexes_num;
+
++ /* create fake encoders for MST */
++ dm_dp_create_fake_mst_encoders(adev);
++
+ /* TODO: Add_display_info? */
+
+ /* TODO use dynamic cursor width */
+@@ -979,6 +982,12 @@ error:
+
+ static void amdgpu_dm_fini(struct amdgpu_device *adev)
+ {
++ int i;
++
++ for (i = 0; i < adev->dm.display_indexes_num; i++) {
++ drm_encoder_cleanup(&adev->dm.mst_encoders[i].base);
++ }
++
+ amdgpu_dm_audio_fini(adev);
+
+ amdgpu_dm_destroy_drm_device(&adev->dm);
+@@ -1804,6 +1813,7 @@ static void update_connector_ext_caps(struct amdgpu_dm_connector *aconnector)
+ struct amdgpu_display_manager *dm;
+ struct drm_connector *conn_base;
+ struct amdgpu_device *adev;
++ struct dc_link *link = NULL;
+ static const u8 pre_computed_values[] = {
+ 50, 51, 52, 53, 55, 56, 57, 58, 59, 61, 62, 63, 65, 66, 68, 69,
+ 71, 72, 74, 75, 77, 79, 81, 82, 84, 86, 88, 90, 92, 94, 96, 98};
+@@ -1811,6 +1821,10 @@ static void update_connector_ext_caps(struct amdgpu_dm_connector *aconnector)
+ if (!aconnector || !aconnector->dc_link)
+ return;
+
++ link = aconnector->dc_link;
++ if (link->connector_signal != SIGNAL_TYPE_EDP)
++ return;
++
+ conn_base = &aconnector->base;
+ adev = conn_base->dev->dev_private;
+ dm = &adev->dm;
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
+index 5cab3e65d992..76f7c5275239 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
+@@ -43,6 +43,9 @@
+ */
+
+ #define AMDGPU_DM_MAX_DISPLAY_INDEX 31
++
++#define AMDGPU_DM_MAX_CRTC 6
++
+ /*
+ #include "include/amdgpu_dal_power_if.h"
+ #include "amdgpu_dm_irq.h"
+@@ -327,6 +330,13 @@ struct amdgpu_display_manager {
+ * available in FW
+ */
+ const struct gpu_info_soc_bounding_box_v1_0 *soc_bounding_box;
++
++ /**
++ * @mst_encoders:
++ *
++ * fake encoders used for DP MST.
++ */
++ struct amdgpu_encoder mst_encoders[AMDGPU_DM_MAX_CRTC];
+ };
+
+ struct amdgpu_dm_connector {
+@@ -355,7 +365,6 @@ struct amdgpu_dm_connector {
+ struct amdgpu_dm_dp_aux dm_dp_aux;
+ struct drm_dp_mst_port *port;
+ struct amdgpu_dm_connector *mst_port;
+- struct amdgpu_encoder *mst_encoder;
+ struct drm_dp_aux *dsc_aux;
+
+ /* TODO see if we can merge with ddc_bus or make a dm_connector */
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
+index d2917759b7ab..6c8e87baedd9 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
+@@ -137,13 +137,10 @@ static void
+ dm_dp_mst_connector_destroy(struct drm_connector *connector)
+ {
+ struct amdgpu_dm_connector *amdgpu_dm_connector = to_amdgpu_dm_connector(connector);
+- struct amdgpu_encoder *amdgpu_encoder = amdgpu_dm_connector->mst_encoder;
+
+ kfree(amdgpu_dm_connector->edid);
+ amdgpu_dm_connector->edid = NULL;
+
+- drm_encoder_cleanup(&amdgpu_encoder->base);
+- kfree(amdgpu_encoder);
+ drm_connector_cleanup(connector);
+ drm_dp_mst_put_port_malloc(amdgpu_dm_connector->port);
+ kfree(amdgpu_dm_connector);
+@@ -280,7 +277,11 @@ static struct drm_encoder *
+ dm_mst_atomic_best_encoder(struct drm_connector *connector,
+ struct drm_connector_state *connector_state)
+ {
+- return &to_amdgpu_dm_connector(connector)->mst_encoder->base;
++ struct drm_device *dev = connector->dev;
++ struct amdgpu_device *adev = dev->dev_private;
++ struct amdgpu_crtc *acrtc = to_amdgpu_crtc(connector_state->crtc);
++
++ return &adev->dm.mst_encoders[acrtc->crtc_id].base;
+ }
+
+ static int
+@@ -343,31 +344,27 @@ static const struct drm_encoder_funcs amdgpu_dm_encoder_funcs = {
+ .destroy = amdgpu_dm_encoder_destroy,
+ };
+
+-static struct amdgpu_encoder *
+-dm_dp_create_fake_mst_encoder(struct amdgpu_dm_connector *connector)
++void
++dm_dp_create_fake_mst_encoders(struct amdgpu_device *adev)
+ {
+- struct drm_device *dev = connector->base.dev;
+- struct amdgpu_device *adev = dev->dev_private;
+- struct amdgpu_encoder *amdgpu_encoder;
+- struct drm_encoder *encoder;
+-
+- amdgpu_encoder = kzalloc(sizeof(*amdgpu_encoder), GFP_KERNEL);
+- if (!amdgpu_encoder)
+- return NULL;
++ struct drm_device *dev = adev->ddev;
++ int i;
+
+- encoder = &amdgpu_encoder->base;
+- encoder->possible_crtcs = amdgpu_dm_get_encoder_crtc_mask(adev);
++ for (i = 0; i < adev->dm.display_indexes_num; i++) {
++ struct amdgpu_encoder *amdgpu_encoder = &adev->dm.mst_encoders[i];
++ struct drm_encoder *encoder = &amdgpu_encoder->base;
+
+- drm_encoder_init(
+- dev,
+- &amdgpu_encoder->base,
+- &amdgpu_dm_encoder_funcs,
+- DRM_MODE_ENCODER_DPMST,
+- NULL);
++ encoder->possible_crtcs = amdgpu_dm_get_encoder_crtc_mask(adev);
+
+- drm_encoder_helper_add(encoder, &amdgpu_dm_encoder_helper_funcs);
++ drm_encoder_init(
++ dev,
++ &amdgpu_encoder->base,
++ &amdgpu_dm_encoder_funcs,
++ DRM_MODE_ENCODER_DPMST,
++ NULL);
+
+- return amdgpu_encoder;
++ drm_encoder_helper_add(encoder, &amdgpu_dm_encoder_helper_funcs);
++ }
+ }
+
+ static struct drm_connector *
+@@ -380,6 +377,7 @@ dm_dp_add_mst_connector(struct drm_dp_mst_topology_mgr *mgr,
+ struct amdgpu_device *adev = dev->dev_private;
+ struct amdgpu_dm_connector *aconnector;
+ struct drm_connector *connector;
++ int i;
+
+ aconnector = kzalloc(sizeof(*aconnector), GFP_KERNEL);
+ if (!aconnector)
+@@ -406,9 +404,10 @@ dm_dp_add_mst_connector(struct drm_dp_mst_topology_mgr *mgr,
+ master->dc_link,
+ master->connector_id);
+
+- aconnector->mst_encoder = dm_dp_create_fake_mst_encoder(master);
+- drm_connector_attach_encoder(&aconnector->base,
+- &aconnector->mst_encoder->base);
++ for (i = 0; i < adev->dm.display_indexes_num; i++) {
++ drm_connector_attach_encoder(&aconnector->base,
++ &adev->dm.mst_encoders[i].base);
++ }
+
+ connector->max_bpc_property = master->base.max_bpc_property;
+ if (connector->max_bpc_property)
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.h b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.h
+index d2c56579a2cc..b38bd68121ce 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.h
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.h
+@@ -35,6 +35,9 @@ void amdgpu_dm_initialize_dp_connector(struct amdgpu_display_manager *dm,
+ struct amdgpu_dm_connector *aconnector,
+ int link_index);
+
++void
++dm_dp_create_fake_mst_encoders(struct amdgpu_device *adev);
++
+ #if defined(CONFIG_DRM_AMD_DC_DCN)
+ bool compute_mst_dsc_configs_for_state(struct drm_atomic_state *state,
+ struct dc_state *dc_state);
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_stream.c b/drivers/gpu/drm/amd/display/dc/core/dc_stream.c
+index 4f0e7203dba4..470c82794f6f 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc_stream.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc_stream.c
+@@ -56,7 +56,7 @@ void update_stream_signal(struct dc_stream_state *stream, struct dc_sink *sink)
+ }
+ }
+
+-static void dc_stream_construct(struct dc_stream_state *stream,
++static bool dc_stream_construct(struct dc_stream_state *stream,
+ struct dc_sink *dc_sink_data)
+ {
+ uint32_t i = 0;
+@@ -118,11 +118,17 @@ static void dc_stream_construct(struct dc_stream_state *stream,
+ update_stream_signal(stream, dc_sink_data);
+
+ stream->out_transfer_func = dc_create_transfer_func();
++ if (stream->out_transfer_func == NULL) {
++ dc_sink_release(dc_sink_data);
++ return false;
++ }
+ stream->out_transfer_func->type = TF_TYPE_BYPASS;
+ stream->out_transfer_func->ctx = stream->ctx;
+
+ stream->stream_id = stream->ctx->dc_stream_id_count;
+ stream->ctx->dc_stream_id_count++;
++
++ return true;
+ }
+
+ static void dc_stream_destruct(struct dc_stream_state *stream)
+@@ -164,13 +170,20 @@ struct dc_stream_state *dc_create_stream_for_sink(
+
+ stream = kzalloc(sizeof(struct dc_stream_state), GFP_KERNEL);
+ if (stream == NULL)
+- return NULL;
++ goto alloc_fail;
+
+- dc_stream_construct(stream, sink);
++ if (dc_stream_construct(stream, sink) == false)
++ goto construct_fail;
+
+ kref_init(&stream->refcount);
+
+ return stream;
++
++construct_fail:
++ kfree(stream);
++
++alloc_fail:
++ return NULL;
+ }
+
+ struct dc_stream_state *dc_copy_stream(const struct dc_stream_state *stream)
+diff --git a/drivers/gpu/drm/amd/powerplay/renoir_ppt.c b/drivers/gpu/drm/amd/powerplay/renoir_ppt.c
+index b0ed1b3fe79a..72e4d7611323 100644
+--- a/drivers/gpu/drm/amd/powerplay/renoir_ppt.c
++++ b/drivers/gpu/drm/amd/powerplay/renoir_ppt.c
+@@ -687,7 +687,7 @@ static int renoir_set_power_profile_mode(struct smu_context *smu, long *input, u
+ return -EINVAL;
+ }
+
+- ret = smu_send_smc_msg_with_param(smu, SMU_MSG_SetWorkloadMask,
++ ret = smu_send_smc_msg_with_param(smu, SMU_MSG_ActiveProcessNotify,
+ 1 << workload_type,
+ NULL);
+ if (ret) {
+diff --git a/drivers/gpu/drm/exynos/exynos_drm_dma.c b/drivers/gpu/drm/exynos/exynos_drm_dma.c
+index 619f81435c1b..58b89ec11b0e 100644
+--- a/drivers/gpu/drm/exynos/exynos_drm_dma.c
++++ b/drivers/gpu/drm/exynos/exynos_drm_dma.c
+@@ -61,7 +61,7 @@ static int drm_iommu_attach_device(struct drm_device *drm_dev,
+ struct device *subdrv_dev, void **dma_priv)
+ {
+ struct exynos_drm_private *priv = drm_dev->dev_private;
+- int ret;
++ int ret = 0;
+
+ if (get_dma_ops(priv->dma_dev) != get_dma_ops(subdrv_dev)) {
+ DRM_DEV_ERROR(subdrv_dev, "Device %s lacks support for IOMMU\n",
+@@ -92,7 +92,7 @@ static int drm_iommu_attach_device(struct drm_device *drm_dev,
+ if (ret)
+ clear_dma_max_seg_size(subdrv_dev);
+
+- return 0;
++ return ret;
+ }
+
+ /*
+diff --git a/drivers/gpu/drm/exynos/exynos_drm_mic.c b/drivers/gpu/drm/exynos/exynos_drm_mic.c
+index f41d75923557..004110c5ded4 100644
+--- a/drivers/gpu/drm/exynos/exynos_drm_mic.c
++++ b/drivers/gpu/drm/exynos/exynos_drm_mic.c
+@@ -269,8 +269,10 @@ static void mic_pre_enable(struct drm_bridge *bridge)
+ goto unlock;
+
+ ret = pm_runtime_get_sync(mic->dev);
+- if (ret < 0)
++ if (ret < 0) {
++ pm_runtime_put_noidle(mic->dev);
+ goto unlock;
++ }
+
+ mic_set_path(mic, 1);
+
+diff --git a/drivers/gpu/drm/i915/display/intel_hdmi.c b/drivers/gpu/drm/i915/display/intel_hdmi.c
+index 821411b93dac..0718ffc7829f 100644
+--- a/drivers/gpu/drm/i915/display/intel_hdmi.c
++++ b/drivers/gpu/drm/i915/display/intel_hdmi.c
+@@ -2821,19 +2821,13 @@ intel_hdmi_connector_register(struct drm_connector *connector)
+ return ret;
+ }
+
+-static void intel_hdmi_destroy(struct drm_connector *connector)
++static void intel_hdmi_connector_unregister(struct drm_connector *connector)
+ {
+ struct cec_notifier *n = intel_attached_hdmi(to_intel_connector(connector))->cec_notifier;
+
+ cec_notifier_conn_unregister(n);
+
+- intel_connector_destroy(connector);
+-}
+-
+-static void intel_hdmi_connector_unregister(struct drm_connector *connector)
+-{
+ intel_hdmi_remove_i2c_symlink(connector);
+-
+ intel_connector_unregister(connector);
+ }
+
+@@ -2845,7 +2839,7 @@ static const struct drm_connector_funcs intel_hdmi_connector_funcs = {
+ .atomic_set_property = intel_digital_connector_atomic_set_property,
+ .late_register = intel_hdmi_connector_register,
+ .early_unregister = intel_hdmi_connector_unregister,
+- .destroy = intel_hdmi_destroy,
++ .destroy = intel_connector_destroy,
+ .atomic_destroy_state = drm_atomic_helper_connector_destroy_state,
+ .atomic_duplicate_state = intel_digital_connector_duplicate_state,
+ };
+diff --git a/drivers/gpu/drm/i915/gt/intel_lrc.c b/drivers/gpu/drm/i915/gt/intel_lrc.c
+index ba82193b4e31..06922e8aae99 100644
+--- a/drivers/gpu/drm/i915/gt/intel_lrc.c
++++ b/drivers/gpu/drm/i915/gt/intel_lrc.c
+@@ -4897,13 +4897,8 @@ static void virtual_engine_initial_hint(struct virtual_engine *ve)
+ * typically be the first we inspect for submission.
+ */
+ swp = prandom_u32_max(ve->num_siblings);
+- if (!swp)
+- return;
+-
+- swap(ve->siblings[swp], ve->siblings[0]);
+- if (!intel_engine_has_relative_mmio(ve->siblings[0]))
+- virtual_update_register_offsets(ve->context.lrc_reg_state,
+- ve->siblings[0]);
++ if (swp)
++ swap(ve->siblings[swp], ve->siblings[0]);
+ }
+
+ static int virtual_context_alloc(struct intel_context *ce)
+@@ -4916,15 +4911,9 @@ static int virtual_context_alloc(struct intel_context *ce)
+ static int virtual_context_pin(struct intel_context *ce)
+ {
+ struct virtual_engine *ve = container_of(ce, typeof(*ve), context);
+- int err;
+
+ /* Note: we must use a real engine class for setting up reg state */
+- err = __execlists_context_pin(ce, ve->siblings[0]);
+- if (err)
+- return err;
+-
+- virtual_engine_initial_hint(ve);
+- return 0;
++ return __execlists_context_pin(ce, ve->siblings[0]);
+ }
+
+ static void virtual_context_enter(struct intel_context *ce)
+@@ -5188,6 +5177,7 @@ intel_execlists_create_virtual(struct intel_engine_cs **siblings,
+ intel_engine_init_active(&ve->base, ENGINE_VIRTUAL);
+ intel_engine_init_breadcrumbs(&ve->base);
+ intel_engine_init_execlists(&ve->base);
++ ve->base.breadcrumbs.irq_armed = true; /* fake HW, used for irq_work */
+
+ ve->base.cops = &virtual_context_ops;
+ ve->base.request_alloc = execlists_request_alloc;
+@@ -5269,6 +5259,7 @@ intel_execlists_create_virtual(struct intel_engine_cs **siblings,
+
+ ve->base.flags |= I915_ENGINE_IS_VIRTUAL;
+
++ virtual_engine_initial_hint(ve);
+ return &ve->context;
+
+ err_put:
+diff --git a/drivers/gpu/drm/i915/gvt/handlers.c b/drivers/gpu/drm/i915/gvt/handlers.c
+index 2faf50e1b051..6e7dc28455c4 100644
+--- a/drivers/gpu/drm/i915/gvt/handlers.c
++++ b/drivers/gpu/drm/i915/gvt/handlers.c
+@@ -3131,8 +3131,8 @@ static int init_skl_mmio_info(struct intel_gvt *gvt)
+ MMIO_DFH(GEN9_WM_CHICKEN3, D_SKL_PLUS, F_MODE_MASK | F_CMD_ACCESS,
+ NULL, NULL);
+
+- MMIO_D(GAMT_CHKN_BIT_REG, D_KBL);
+- MMIO_D(GEN9_CTX_PREEMPT_REG, D_KBL | D_SKL);
++ MMIO_D(GAMT_CHKN_BIT_REG, D_KBL | D_CFL);
++ MMIO_D(GEN9_CTX_PREEMPT_REG, D_SKL_PLUS);
+
+ return 0;
+ }
+diff --git a/drivers/gpu/drm/i915/i915_perf.c b/drivers/gpu/drm/i915/i915_perf.c
+index cf2c01f17da8..803983d9a05d 100644
+--- a/drivers/gpu/drm/i915/i915_perf.c
++++ b/drivers/gpu/drm/i915/i915_perf.c
+@@ -1645,6 +1645,7 @@ static u32 *save_restore_register(struct i915_perf_stream *stream, u32 *cs,
+ u32 d;
+
+ cmd = save ? MI_STORE_REGISTER_MEM : MI_LOAD_REGISTER_MEM;
++ cmd |= MI_SRM_LRM_GLOBAL_GTT;
+ if (INTEL_GEN(stream->perf->i915) >= 8)
+ cmd++;
+
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
+index a2f6b688a976..932dd8c3c411 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
+@@ -2126,7 +2126,6 @@ int dpu_encoder_setup(struct drm_device *dev, struct drm_encoder *enc,
+
+ dpu_enc = to_dpu_encoder_virt(enc);
+
+- mutex_init(&dpu_enc->enc_lock);
+ ret = dpu_encoder_setup_display(dpu_enc, dpu_kms, disp_info);
+ if (ret)
+ goto fail;
+@@ -2141,7 +2140,6 @@ int dpu_encoder_setup(struct drm_device *dev, struct drm_encoder *enc,
+ 0);
+
+
+- mutex_init(&dpu_enc->rc_lock);
+ INIT_DELAYED_WORK(&dpu_enc->delayed_off_work,
+ dpu_encoder_off_work);
+ dpu_enc->idle_timeout = IDLE_TIMEOUT;
+@@ -2186,6 +2184,8 @@ struct drm_encoder *dpu_encoder_init(struct drm_device *dev,
+
+ spin_lock_init(&dpu_enc->enc_spinlock);
+ dpu_enc->enabled = false;
++ mutex_init(&dpu_enc->enc_lock);
++ mutex_init(&dpu_enc->rc_lock);
+
+ return &dpu_enc->base;
+ }
+diff --git a/drivers/gpu/drm/msm/msm_submitqueue.c b/drivers/gpu/drm/msm/msm_submitqueue.c
+index 001fbf537440..a1d94be7883a 100644
+--- a/drivers/gpu/drm/msm/msm_submitqueue.c
++++ b/drivers/gpu/drm/msm/msm_submitqueue.c
+@@ -71,8 +71,10 @@ int msm_submitqueue_create(struct drm_device *drm, struct msm_file_private *ctx,
+ queue->flags = flags;
+
+ if (priv->gpu) {
+- if (prio >= priv->gpu->nr_rings)
++ if (prio >= priv->gpu->nr_rings) {
++ kfree(queue);
+ return -EINVAL;
++ }
+
+ queue->prio = prio;
+ }
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_stdu.c b/drivers/gpu/drm/vmwgfx/vmwgfx_stdu.c
+index 9ffa9c75a5da..16b385629688 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_stdu.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_stdu.c
+@@ -1069,10 +1069,6 @@ vmw_stdu_primary_plane_prepare_fb(struct drm_plane *plane,
+ if (new_content_type != SAME_AS_DISPLAY) {
+ struct vmw_surface_metadata metadata = {0};
+
+- metadata.base_size.width = hdisplay;
+- metadata.base_size.height = vdisplay;
+- metadata.base_size.depth = 1;
+-
+ /*
+ * If content buffer is a buffer object, then we have to
+ * construct surface info
+@@ -1104,6 +1100,10 @@ vmw_stdu_primary_plane_prepare_fb(struct drm_plane *plane,
+ metadata = new_vfbs->surface->metadata;
+ }
+
++ metadata.base_size.width = hdisplay;
++ metadata.base_size.height = vdisplay;
++ metadata.base_size.depth = 1;
++
+ if (vps->surf) {
+ struct drm_vmw_size cur_base_size =
+ vps->surf->metadata.base_size;
+diff --git a/drivers/hid/hid-ids.h b/drivers/hid/hid-ids.h
+index f03f1cc913ce..047abf731cf0 100644
+--- a/drivers/hid/hid-ids.h
++++ b/drivers/hid/hid-ids.h
+@@ -624,6 +624,7 @@
+ #define USB_DEVICE_ID_HOLTEK_ALT_MOUSE_A081 0xa081
+ #define USB_DEVICE_ID_HOLTEK_ALT_MOUSE_A0C2 0xa0c2
+ #define USB_DEVICE_ID_HOLTEK_ALT_KEYBOARD_A096 0xa096
++#define USB_DEVICE_ID_HOLTEK_ALT_KEYBOARD_A293 0xa293
+
+ #define USB_VENDOR_ID_IMATION 0x0718
+ #define USB_DEVICE_ID_DISC_STAKKA 0xd000
+@@ -1005,6 +1006,8 @@
+ #define USB_DEVICE_ID_ROCCAT_RYOS_MK_PRO 0x3232
+ #define USB_DEVICE_ID_ROCCAT_SAVU 0x2d5a
+
++#define USB_VENDOR_ID_SAI 0x17dd
++
+ #define USB_VENDOR_ID_SAITEK 0x06a3
+ #define USB_DEVICE_ID_SAITEK_RUMBLEPAD 0xff17
+ #define USB_DEVICE_ID_SAITEK_PS1000 0x0621
+diff --git a/drivers/hid/hid-logitech-hidpp.c b/drivers/hid/hid-logitech-hidpp.c
+index 094f4f1b6555..89d74cc264f9 100644
+--- a/drivers/hid/hid-logitech-hidpp.c
++++ b/drivers/hid/hid-logitech-hidpp.c
+@@ -3146,7 +3146,7 @@ static int hi_res_scroll_enable(struct hidpp_device *hidpp)
+ multiplier = 1;
+
+ hidpp->vertical_wheel_counter.wheel_multiplier = multiplier;
+- hid_info(hidpp->hid_dev, "multiplier = %d\n", multiplier);
++ hid_dbg(hidpp->hid_dev, "wheel multiplier = %d\n", multiplier);
+ return 0;
+ }
+
+diff --git a/drivers/hid/hid-magicmouse.c b/drivers/hid/hid-magicmouse.c
+index 34138667f8af..abd86903875f 100644
+--- a/drivers/hid/hid-magicmouse.c
++++ b/drivers/hid/hid-magicmouse.c
+@@ -535,6 +535,12 @@ static int magicmouse_setup_input(struct input_dev *input, struct hid_device *hd
+ __set_bit(MSC_RAW, input->mscbit);
+ }
+
++ /*
++ * hid-input may mark device as using autorepeat, but neither
++ * the trackpad, nor the mouse actually want it.
++ */
++ __clear_bit(EV_REP, input->evbit);
++
+ return 0;
+ }
+
+diff --git a/drivers/hid/hid-quirks.c b/drivers/hid/hid-quirks.c
+index ca8b5c261c7c..934fc0a798d4 100644
+--- a/drivers/hid/hid-quirks.c
++++ b/drivers/hid/hid-quirks.c
+@@ -88,6 +88,7 @@ static const struct hid_device_id hid_quirks[] = {
+ { HID_USB_DEVICE(USB_VENDOR_ID_HAPP, USB_DEVICE_ID_UGCI_FIGHTING), HID_QUIRK_BADPAD | HID_QUIRK_MULTI_INPUT },
+ { HID_USB_DEVICE(USB_VENDOR_ID_HAPP, USB_DEVICE_ID_UGCI_FLYING), HID_QUIRK_BADPAD | HID_QUIRK_MULTI_INPUT },
+ { HID_USB_DEVICE(USB_VENDOR_ID_HOLTEK_ALT, USB_DEVICE_ID_HOLTEK_ALT_KEYBOARD_A096), HID_QUIRK_NO_INIT_REPORTS },
++ { HID_USB_DEVICE(USB_VENDOR_ID_HOLTEK_ALT, USB_DEVICE_ID_HOLTEK_ALT_KEYBOARD_A293), HID_QUIRK_ALWAYS_POLL },
+ { HID_USB_DEVICE(USB_VENDOR_ID_HP, USB_PRODUCT_ID_HP_LOGITECH_OEM_USB_OPTICAL_MOUSE_0A4A), HID_QUIRK_ALWAYS_POLL },
+ { HID_USB_DEVICE(USB_VENDOR_ID_HP, USB_PRODUCT_ID_HP_LOGITECH_OEM_USB_OPTICAL_MOUSE_0B4A), HID_QUIRK_ALWAYS_POLL },
+ { HID_USB_DEVICE(USB_VENDOR_ID_HP, USB_PRODUCT_ID_HP_PIXART_OEM_USB_OPTICAL_MOUSE), HID_QUIRK_ALWAYS_POLL },
+@@ -832,6 +833,7 @@ static const struct hid_device_id hid_ignore_list[] = {
+ { HID_USB_DEVICE(USB_VENDOR_ID_PETZL, USB_DEVICE_ID_PETZL_HEADLAMP) },
+ { HID_USB_DEVICE(USB_VENDOR_ID_PHILIPS, USB_DEVICE_ID_PHILIPS_IEEE802154_DONGLE) },
+ { HID_USB_DEVICE(USB_VENDOR_ID_POWERCOM, USB_DEVICE_ID_POWERCOM_UPS) },
++ { HID_USB_DEVICE(USB_VENDOR_ID_SAI, USB_DEVICE_ID_CYPRESS_HIDCOM) },
+ #if IS_ENABLED(CONFIG_MOUSE_SYNAPTICS_USB)
+ { HID_USB_DEVICE(USB_VENDOR_ID_SYNAPTICS, USB_DEVICE_ID_SYNAPTICS_TP) },
+ { HID_USB_DEVICE(USB_VENDOR_ID_SYNAPTICS, USB_DEVICE_ID_SYNAPTICS_INT_TP) },
+diff --git a/drivers/hwmon/drivetemp.c b/drivers/hwmon/drivetemp.c
+index 0d4f3d97ffc6..72c760373957 100644
+--- a/drivers/hwmon/drivetemp.c
++++ b/drivers/hwmon/drivetemp.c
+@@ -285,6 +285,42 @@ static int drivetemp_get_scttemp(struct drivetemp_data *st, u32 attr, long *val)
+ return err;
+ }
+
++static const char * const sct_avoid_models[] = {
++/*
++ * These drives will have WRITE FPDMA QUEUED command timeouts and sometimes just
++ * freeze until power-cycled under heavy write loads when their temperature is
++ * getting polled in SCT mode. The SMART mode seems to be fine, though.
++ *
++ * While only the 3 TB model (DT01ACA3) was actually caught exhibiting the
++ * problem let's play safe here to avoid data corruption and ban the whole
++ * DT01ACAx family.
++
++ * The models from this array are prefix-matched.
++ */
++ "TOSHIBA DT01ACA",
++};
++
++static bool drivetemp_sct_avoid(struct drivetemp_data *st)
++{
++ struct scsi_device *sdev = st->sdev;
++ unsigned int ctr;
++
++ if (!sdev->model)
++ return false;
++
++ /*
++ * The "model" field contains just the raw SCSI INQUIRY response
++ * "product identification" field, which has a width of 16 bytes.
++ * This field is space-filled, but is NOT NULL-terminated.
++ */
++ for (ctr = 0; ctr < ARRAY_SIZE(sct_avoid_models); ctr++)
++ if (!strncmp(sdev->model, sct_avoid_models[ctr],
++ strlen(sct_avoid_models[ctr])))
++ return true;
++
++ return false;
++}
++
+ static int drivetemp_identify_sata(struct drivetemp_data *st)
+ {
+ struct scsi_device *sdev = st->sdev;
+@@ -326,6 +362,13 @@ static int drivetemp_identify_sata(struct drivetemp_data *st)
+ /* bail out if this is not a SATA device */
+ if (!is_ata || !is_sata)
+ return -ENODEV;
++
++ if (have_sct && drivetemp_sct_avoid(st)) {
++ dev_notice(&sdev->sdev_gendev,
++ "will avoid using SCT for temperature monitoring\n");
++ have_sct = false;
++ }
++
+ if (!have_sct)
+ goto skip_sct;
+
+diff --git a/drivers/hwmon/emc2103.c b/drivers/hwmon/emc2103.c
+index 491a570e8e50..924c02c1631d 100644
+--- a/drivers/hwmon/emc2103.c
++++ b/drivers/hwmon/emc2103.c
+@@ -443,7 +443,7 @@ static ssize_t pwm1_enable_store(struct device *dev,
+ }
+
+ result = read_u8_from_i2c(client, REG_FAN_CONF1, &conf_reg);
+- if (result) {
++ if (result < 0) {
+ count = result;
+ goto err;
+ }
+diff --git a/drivers/hwtracing/coresight/coresight-etm4x.c b/drivers/hwtracing/coresight/coresight-etm4x.c
+index a6d6c7a3abcb..d59e4b1e5ce5 100644
+--- a/drivers/hwtracing/coresight/coresight-etm4x.c
++++ b/drivers/hwtracing/coresight/coresight-etm4x.c
+@@ -1399,18 +1399,57 @@ static struct notifier_block etm4_cpu_pm_nb = {
+ .notifier_call = etm4_cpu_pm_notify,
+ };
+
+-static int etm4_cpu_pm_register(void)
++/* Setup PM. Called with cpus locked. Deals with error conditions and counts */
++static int etm4_pm_setup_cpuslocked(void)
+ {
+- if (IS_ENABLED(CONFIG_CPU_PM))
+- return cpu_pm_register_notifier(&etm4_cpu_pm_nb);
++ int ret;
+
+- return 0;
++ if (etm4_count++)
++ return 0;
++
++ ret = cpu_pm_register_notifier(&etm4_cpu_pm_nb);
++ if (ret)
++ goto reduce_count;
++
++ ret = cpuhp_setup_state_nocalls_cpuslocked(CPUHP_AP_ARM_CORESIGHT_STARTING,
++ "arm/coresight4:starting",
++ etm4_starting_cpu, etm4_dying_cpu);
++
++ if (ret)
++ goto unregister_notifier;
++
++ ret = cpuhp_setup_state_nocalls_cpuslocked(CPUHP_AP_ONLINE_DYN,
++ "arm/coresight4:online",
++ etm4_online_cpu, NULL);
++
++ /* HP dyn state ID returned in ret on success */
++ if (ret > 0) {
++ hp_online = ret;
++ return 0;
++ }
++
++ /* failed dyn state - remove others */
++ cpuhp_remove_state_nocalls_cpuslocked(CPUHP_AP_ARM_CORESIGHT_STARTING);
++
++unregister_notifier:
++ cpu_pm_unregister_notifier(&etm4_cpu_pm_nb);
++
++reduce_count:
++ --etm4_count;
++ return ret;
+ }
+
+-static void etm4_cpu_pm_unregister(void)
++static void etm4_pm_clear(void)
+ {
+- if (IS_ENABLED(CONFIG_CPU_PM))
+- cpu_pm_unregister_notifier(&etm4_cpu_pm_nb);
++ if (--etm4_count != 0)
++ return;
++
++ cpu_pm_unregister_notifier(&etm4_cpu_pm_nb);
++ cpuhp_remove_state_nocalls(CPUHP_AP_ARM_CORESIGHT_STARTING);
++ if (hp_online) {
++ cpuhp_remove_state_nocalls(hp_online);
++ hp_online = 0;
++ }
+ }
+
+ static int etm4_probe(struct amba_device *adev, const struct amba_id *id)
+@@ -1464,24 +1503,15 @@ static int etm4_probe(struct amba_device *adev, const struct amba_id *id)
+ etm4_init_arch_data, drvdata, 1))
+ dev_err(dev, "ETM arch init failed\n");
+
+- if (!etm4_count++) {
+- cpuhp_setup_state_nocalls_cpuslocked(CPUHP_AP_ARM_CORESIGHT_STARTING,
+- "arm/coresight4:starting",
+- etm4_starting_cpu, etm4_dying_cpu);
+- ret = cpuhp_setup_state_nocalls_cpuslocked(CPUHP_AP_ONLINE_DYN,
+- "arm/coresight4:online",
+- etm4_online_cpu, NULL);
+- if (ret < 0)
+- goto err_arch_supported;
+- hp_online = ret;
++ ret = etm4_pm_setup_cpuslocked();
++ cpus_read_unlock();
+
+- ret = etm4_cpu_pm_register();
+- if (ret)
+- goto err_arch_supported;
++ /* etm4_pm_setup_cpuslocked() does its own cleanup - exit on error */
++ if (ret) {
++ etmdrvdata[drvdata->cpu] = NULL;
++ return ret;
+ }
+
+- cpus_read_unlock();
+-
+ if (etm4_arch_supported(drvdata->arch) == false) {
+ ret = -EINVAL;
+ goto err_arch_supported;
+@@ -1528,13 +1558,7 @@ static int etm4_probe(struct amba_device *adev, const struct amba_id *id)
+
+ err_arch_supported:
+ etmdrvdata[drvdata->cpu] = NULL;
+- if (--etm4_count == 0) {
+- etm4_cpu_pm_unregister();
+-
+- cpuhp_remove_state_nocalls(CPUHP_AP_ARM_CORESIGHT_STARTING);
+- if (hp_online)
+- cpuhp_remove_state_nocalls(hp_online);
+- }
++ etm4_pm_clear();
+ return ret;
+ }
+
+diff --git a/drivers/hwtracing/intel_th/core.c b/drivers/hwtracing/intel_th/core.c
+index ca232ec565e8..c9ac3dc65113 100644
+--- a/drivers/hwtracing/intel_th/core.c
++++ b/drivers/hwtracing/intel_th/core.c
+@@ -1021,15 +1021,30 @@ int intel_th_set_output(struct intel_th_device *thdev,
+ {
+ struct intel_th_device *hub = to_intel_th_hub(thdev);
+ struct intel_th_driver *hubdrv = to_intel_th_driver(hub->dev.driver);
++ int ret;
+
+ /* In host mode, this is up to the external debugger, do nothing. */
+ if (hub->host_mode)
+ return 0;
+
+- if (!hubdrv->set_output)
+- return -ENOTSUPP;
++ /*
++ * hub is instantiated together with the source device that
++ * calls here, so guaranteed to be present.
++ */
++ hubdrv = to_intel_th_driver(hub->dev.driver);
++ if (!hubdrv || !try_module_get(hubdrv->driver.owner))
++ return -EINVAL;
++
++ if (!hubdrv->set_output) {
++ ret = -ENOTSUPP;
++ goto out;
++ }
++
++ ret = hubdrv->set_output(hub, master);
+
+- return hubdrv->set_output(hub, master);
++out:
++ module_put(hubdrv->driver.owner);
++ return ret;
+ }
+ EXPORT_SYMBOL_GPL(intel_th_set_output);
+
+diff --git a/drivers/hwtracing/intel_th/pci.c b/drivers/hwtracing/intel_th/pci.c
+index 7ccac74553a6..21fdf0b93516 100644
+--- a/drivers/hwtracing/intel_th/pci.c
++++ b/drivers/hwtracing/intel_th/pci.c
+@@ -233,11 +233,21 @@ static const struct pci_device_id intel_th_pci_id_table[] = {
+ PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0xa0a6),
+ .driver_data = (kernel_ulong_t)&intel_th_2x,
+ },
++ {
++ /* Tiger Lake PCH-H */
++ PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x43a6),
++ .driver_data = (kernel_ulong_t)&intel_th_2x,
++ },
+ {
+ /* Jasper Lake PCH */
+ PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x4da6),
+ .driver_data = (kernel_ulong_t)&intel_th_2x,
+ },
++ {
++ /* Jasper Lake CPU */
++ PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x4e29),
++ .driver_data = (kernel_ulong_t)&intel_th_2x,
++ },
+ {
+ /* Elkhart Lake CPU */
+ PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x4529),
+@@ -248,6 +258,11 @@ static const struct pci_device_id intel_th_pci_id_table[] = {
+ PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x4b26),
+ .driver_data = (kernel_ulong_t)&intel_th_2x,
+ },
++ {
++ /* Emmitsburg PCH */
++ PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x1bcc),
++ .driver_data = (kernel_ulong_t)&intel_th_2x,
++ },
+ { 0 },
+ };
+
+diff --git a/drivers/hwtracing/intel_th/sth.c b/drivers/hwtracing/intel_th/sth.c
+index 3a1f4e650378..a1529f571491 100644
+--- a/drivers/hwtracing/intel_th/sth.c
++++ b/drivers/hwtracing/intel_th/sth.c
+@@ -161,9 +161,7 @@ static int sth_stm_link(struct stm_data *stm_data, unsigned int master,
+ {
+ struct sth_device *sth = container_of(stm_data, struct sth_device, stm);
+
+- intel_th_set_output(to_intel_th_device(sth->dev), master);
+-
+- return 0;
++ return intel_th_set_output(to_intel_th_device(sth->dev), master);
+ }
+
+ static int intel_th_sw_init(struct sth_device *sth)
+diff --git a/drivers/i2c/busses/i2c-eg20t.c b/drivers/i2c/busses/i2c-eg20t.c
+index bb810dee8fb5..73f139690e4e 100644
+--- a/drivers/i2c/busses/i2c-eg20t.c
++++ b/drivers/i2c/busses/i2c-eg20t.c
+@@ -180,6 +180,7 @@ static const struct pci_device_id pch_pcidev_id[] = {
+ { PCI_VDEVICE(ROHM, PCI_DEVICE_ID_ML7831_I2C), 1, },
+ {0,}
+ };
++MODULE_DEVICE_TABLE(pci, pch_pcidev_id);
+
+ static irqreturn_t pch_i2c_handler(int irq, void *pData);
+
+diff --git a/drivers/iio/accel/mma8452.c b/drivers/iio/accel/mma8452.c
+index 00e100fc845a..813bca7cfc3e 100644
+--- a/drivers/iio/accel/mma8452.c
++++ b/drivers/iio/accel/mma8452.c
+@@ -1685,10 +1685,13 @@ static int mma8452_probe(struct i2c_client *client,
+
+ ret = mma8452_set_freefall_mode(data, false);
+ if (ret < 0)
+- goto buffer_cleanup;
++ goto unregister_device;
+
+ return 0;
+
++unregister_device:
++ iio_device_unregister(indio_dev);
++
+ buffer_cleanup:
+ iio_triggered_buffer_cleanup(indio_dev);
+
+diff --git a/drivers/iio/adc/ad7780.c b/drivers/iio/adc/ad7780.c
+index 291c1a898129..643771ed3f83 100644
+--- a/drivers/iio/adc/ad7780.c
++++ b/drivers/iio/adc/ad7780.c
+@@ -310,7 +310,7 @@ static int ad7780_probe(struct spi_device *spi)
+
+ ret = ad7780_init_gpios(&spi->dev, st);
+ if (ret)
+- goto error_cleanup_buffer_and_trigger;
++ return ret;
+
+ st->reg = devm_regulator_get(&spi->dev, "avdd");
+ if (IS_ERR(st->reg))
+diff --git a/drivers/iio/health/afe4403.c b/drivers/iio/health/afe4403.c
+index dc22dc363a99..29104656a537 100644
+--- a/drivers/iio/health/afe4403.c
++++ b/drivers/iio/health/afe4403.c
+@@ -63,6 +63,7 @@ static const struct reg_field afe4403_reg_fields[] = {
+ * @regulator: Pointer to the regulator for the IC
+ * @trig: IIO trigger for this device
+ * @irq: ADC_RDY line interrupt number
++ * @buffer: Used to construct data layout to push into IIO buffer.
+ */
+ struct afe4403_data {
+ struct device *dev;
+@@ -72,6 +73,8 @@ struct afe4403_data {
+ struct regulator *regulator;
+ struct iio_trigger *trig;
+ int irq;
++ /* Ensure suitable alignment for timestamp */
++ s32 buffer[8] __aligned(8);
+ };
+
+ enum afe4403_chan_id {
+@@ -309,7 +312,6 @@ static irqreturn_t afe4403_trigger_handler(int irq, void *private)
+ struct iio_dev *indio_dev = pf->indio_dev;
+ struct afe4403_data *afe = iio_priv(indio_dev);
+ int ret, bit, i = 0;
+- s32 buffer[8];
+ u8 tx[4] = {AFE440X_CONTROL0, 0x0, 0x0, AFE440X_CONTROL0_READ};
+ u8 rx[3];
+
+@@ -326,9 +328,9 @@ static irqreturn_t afe4403_trigger_handler(int irq, void *private)
+ if (ret)
+ goto err;
+
+- buffer[i++] = (rx[0] << 16) |
+- (rx[1] << 8) |
+- (rx[2]);
++ afe->buffer[i++] = (rx[0] << 16) |
++ (rx[1] << 8) |
++ (rx[2]);
+ }
+
+ /* Disable reading from the device */
+@@ -337,7 +339,8 @@ static irqreturn_t afe4403_trigger_handler(int irq, void *private)
+ if (ret)
+ goto err;
+
+- iio_push_to_buffers_with_timestamp(indio_dev, buffer, pf->timestamp);
++ iio_push_to_buffers_with_timestamp(indio_dev, afe->buffer,
++ pf->timestamp);
+ err:
+ iio_trigger_notify_done(indio_dev->trig);
+
+diff --git a/drivers/iio/health/afe4404.c b/drivers/iio/health/afe4404.c
+index e728bbb21ca8..cebb1fd4d0b1 100644
+--- a/drivers/iio/health/afe4404.c
++++ b/drivers/iio/health/afe4404.c
+@@ -83,6 +83,7 @@ static const struct reg_field afe4404_reg_fields[] = {
+ * @regulator: Pointer to the regulator for the IC
+ * @trig: IIO trigger for this device
+ * @irq: ADC_RDY line interrupt number
++ * @buffer: Used to construct a scan to push to the iio buffer.
+ */
+ struct afe4404_data {
+ struct device *dev;
+@@ -91,6 +92,7 @@ struct afe4404_data {
+ struct regulator *regulator;
+ struct iio_trigger *trig;
+ int irq;
++ s32 buffer[10] __aligned(8);
+ };
+
+ enum afe4404_chan_id {
+@@ -328,17 +330,17 @@ static irqreturn_t afe4404_trigger_handler(int irq, void *private)
+ struct iio_dev *indio_dev = pf->indio_dev;
+ struct afe4404_data *afe = iio_priv(indio_dev);
+ int ret, bit, i = 0;
+- s32 buffer[10];
+
+ for_each_set_bit(bit, indio_dev->active_scan_mask,
+ indio_dev->masklength) {
+ ret = regmap_read(afe->regmap, afe4404_channel_values[bit],
+- &buffer[i++]);
++ &afe->buffer[i++]);
+ if (ret)
+ goto err;
+ }
+
+- iio_push_to_buffers_with_timestamp(indio_dev, buffer, pf->timestamp);
++ iio_push_to_buffers_with_timestamp(indio_dev, afe->buffer,
++ pf->timestamp);
+ err:
+ iio_trigger_notify_done(indio_dev->trig);
+
+diff --git a/drivers/iio/humidity/hdc100x.c b/drivers/iio/humidity/hdc100x.c
+index 7ecd2ffa3132..665eb7e38293 100644
+--- a/drivers/iio/humidity/hdc100x.c
++++ b/drivers/iio/humidity/hdc100x.c
+@@ -38,6 +38,11 @@ struct hdc100x_data {
+
+ /* integration time of the sensor */
+ int adc_int_us[2];
++ /* Ensure natural alignment of timestamp */
++ struct {
++ __be16 channels[2];
++ s64 ts __aligned(8);
++ } scan;
+ };
+
+ /* integration time in us */
+@@ -322,7 +327,6 @@ static irqreturn_t hdc100x_trigger_handler(int irq, void *p)
+ struct i2c_client *client = data->client;
+ int delay = data->adc_int_us[0] + data->adc_int_us[1];
+ int ret;
+- s16 buf[8]; /* 2x s16 + padding + 8 byte timestamp */
+
+ /* dual read starts at temp register */
+ mutex_lock(&data->lock);
+@@ -333,13 +337,13 @@ static irqreturn_t hdc100x_trigger_handler(int irq, void *p)
+ }
+ usleep_range(delay, delay + 1000);
+
+- ret = i2c_master_recv(client, (u8 *)buf, 4);
++ ret = i2c_master_recv(client, (u8 *)data->scan.channels, 4);
+ if (ret < 0) {
+ dev_err(&client->dev, "cannot read sensor data\n");
+ goto err;
+ }
+
+- iio_push_to_buffers_with_timestamp(indio_dev, buf,
++ iio_push_to_buffers_with_timestamp(indio_dev, &data->scan,
+ iio_get_time_ns(indio_dev));
+ err:
+ mutex_unlock(&data->lock);
+diff --git a/drivers/iio/humidity/hts221.h b/drivers/iio/humidity/hts221.h
+index 7d6771f7cf47..b2eb5abeaccd 100644
+--- a/drivers/iio/humidity/hts221.h
++++ b/drivers/iio/humidity/hts221.h
+@@ -14,8 +14,6 @@
+
+ #include <linux/iio/iio.h>
+
+-#define HTS221_DATA_SIZE 2
+-
+ enum hts221_sensor_type {
+ HTS221_SENSOR_H,
+ HTS221_SENSOR_T,
+@@ -39,6 +37,11 @@ struct hts221_hw {
+
+ bool enabled;
+ u8 odr;
++ /* Ensure natural alignment of timestamp */
++ struct {
++ __le16 channels[2];
++ s64 ts __aligned(8);
++ } scan;
+ };
+
+ extern const struct dev_pm_ops hts221_pm_ops;
+diff --git a/drivers/iio/humidity/hts221_buffer.c b/drivers/iio/humidity/hts221_buffer.c
+index 81d50a861c22..49dcd36d8838 100644
+--- a/drivers/iio/humidity/hts221_buffer.c
++++ b/drivers/iio/humidity/hts221_buffer.c
+@@ -162,7 +162,6 @@ static const struct iio_buffer_setup_ops hts221_buffer_ops = {
+
+ static irqreturn_t hts221_buffer_handler_thread(int irq, void *p)
+ {
+- u8 buffer[ALIGN(2 * HTS221_DATA_SIZE, sizeof(s64)) + sizeof(s64)];
+ struct iio_poll_func *pf = p;
+ struct iio_dev *iio_dev = pf->indio_dev;
+ struct hts221_hw *hw = iio_priv(iio_dev);
+@@ -172,18 +171,20 @@ static irqreturn_t hts221_buffer_handler_thread(int irq, void *p)
+ /* humidity data */
+ ch = &iio_dev->channels[HTS221_SENSOR_H];
+ err = regmap_bulk_read(hw->regmap, ch->address,
+- buffer, HTS221_DATA_SIZE);
++ &hw->scan.channels[0],
++ sizeof(hw->scan.channels[0]));
+ if (err < 0)
+ goto out;
+
+ /* temperature data */
+ ch = &iio_dev->channels[HTS221_SENSOR_T];
+ err = regmap_bulk_read(hw->regmap, ch->address,
+- buffer + HTS221_DATA_SIZE, HTS221_DATA_SIZE);
++ &hw->scan.channels[1],
++ sizeof(hw->scan.channels[1]));
+ if (err < 0)
+ goto out;
+
+- iio_push_to_buffers_with_timestamp(iio_dev, buffer,
++ iio_push_to_buffers_with_timestamp(iio_dev, &hw->scan,
+ iio_get_time_ns(iio_dev));
+
+ out:
+diff --git a/drivers/iio/industrialio-core.c b/drivers/iio/industrialio-core.c
+index 24f7bbff4938..c6e36411053b 100644
+--- a/drivers/iio/industrialio-core.c
++++ b/drivers/iio/industrialio-core.c
+@@ -130,6 +130,8 @@ static const char * const iio_modifier_names[] = {
+ [IIO_MOD_PM2P5] = "pm2p5",
+ [IIO_MOD_PM4] = "pm4",
+ [IIO_MOD_PM10] = "pm10",
++ [IIO_MOD_ETHANOL] = "ethanol",
++ [IIO_MOD_H2] = "h2",
+ };
+
+ /* relies on pairs of these shared then separate */
+diff --git a/drivers/iio/magnetometer/ak8974.c b/drivers/iio/magnetometer/ak8974.c
+index d32996702110..87c15a63c1a4 100644
+--- a/drivers/iio/magnetometer/ak8974.c
++++ b/drivers/iio/magnetometer/ak8974.c
+@@ -185,6 +185,11 @@ struct ak8974 {
+ bool drdy_irq;
+ struct completion drdy_complete;
+ bool drdy_active_low;
++ /* Ensure timestamp is naturally aligned */
++ struct {
++ __le16 channels[3];
++ s64 ts __aligned(8);
++ } scan;
+ };
+
+ static const char ak8974_reg_avdd[] = "avdd";
+@@ -581,7 +586,6 @@ static void ak8974_fill_buffer(struct iio_dev *indio_dev)
+ {
+ struct ak8974 *ak8974 = iio_priv(indio_dev);
+ int ret;
+- __le16 hw_values[8]; /* Three axes + 64bit padding */
+
+ pm_runtime_get_sync(&ak8974->i2c->dev);
+ mutex_lock(&ak8974->lock);
+@@ -591,13 +595,13 @@ static void ak8974_fill_buffer(struct iio_dev *indio_dev)
+ dev_err(&ak8974->i2c->dev, "error triggering measure\n");
+ goto out_unlock;
+ }
+- ret = ak8974_getresult(ak8974, hw_values);
++ ret = ak8974_getresult(ak8974, ak8974->scan.channels);
+ if (ret) {
+ dev_err(&ak8974->i2c->dev, "error getting measures\n");
+ goto out_unlock;
+ }
+
+- iio_push_to_buffers_with_timestamp(indio_dev, hw_values,
++ iio_push_to_buffers_with_timestamp(indio_dev, &ak8974->scan,
+ iio_get_time_ns(indio_dev));
+
+ out_unlock:
+@@ -764,19 +768,21 @@ static int ak8974_probe(struct i2c_client *i2c,
+ ak8974->map = devm_regmap_init_i2c(i2c, &ak8974_regmap_config);
+ if (IS_ERR(ak8974->map)) {
+ dev_err(&i2c->dev, "failed to allocate register map\n");
++ pm_runtime_put_noidle(&i2c->dev);
++ pm_runtime_disable(&i2c->dev);
+ return PTR_ERR(ak8974->map);
+ }
+
+ ret = ak8974_set_power(ak8974, AK8974_PWR_ON);
+ if (ret) {
+ dev_err(&i2c->dev, "could not power on\n");
+- goto power_off;
++ goto disable_pm;
+ }
+
+ ret = ak8974_detect(ak8974);
+ if (ret) {
+ dev_err(&i2c->dev, "neither AK8974 nor AMI30x found\n");
+- goto power_off;
++ goto disable_pm;
+ }
+
+ ret = ak8974_selftest(ak8974);
+@@ -786,14 +792,9 @@ static int ak8974_probe(struct i2c_client *i2c,
+ ret = ak8974_reset(ak8974);
+ if (ret) {
+ dev_err(&i2c->dev, "AK8974 reset failed\n");
+- goto power_off;
++ goto disable_pm;
+ }
+
+- pm_runtime_set_autosuspend_delay(&i2c->dev,
+- AK8974_AUTOSUSPEND_DELAY);
+- pm_runtime_use_autosuspend(&i2c->dev);
+- pm_runtime_put(&i2c->dev);
+-
+ indio_dev->dev.parent = &i2c->dev;
+ indio_dev->channels = ak8974_channels;
+ indio_dev->num_channels = ARRAY_SIZE(ak8974_channels);
+@@ -846,6 +847,11 @@ no_irq:
+ goto cleanup_buffer;
+ }
+
++ pm_runtime_set_autosuspend_delay(&i2c->dev,
++ AK8974_AUTOSUSPEND_DELAY);
++ pm_runtime_use_autosuspend(&i2c->dev);
++ pm_runtime_put(&i2c->dev);
++
+ return 0;
+
+ cleanup_buffer:
+@@ -854,7 +860,6 @@ disable_pm:
+ pm_runtime_put_noidle(&i2c->dev);
+ pm_runtime_disable(&i2c->dev);
+ ak8974_set_power(ak8974, AK8974_PWR_OFF);
+-power_off:
+ regulator_bulk_disable(ARRAY_SIZE(ak8974->regs), ak8974->regs);
+
+ return ret;
+diff --git a/drivers/iio/pressure/ms5611_core.c b/drivers/iio/pressure/ms5611_core.c
+index 2f598ad91621..f5db9fa086f3 100644
+--- a/drivers/iio/pressure/ms5611_core.c
++++ b/drivers/iio/pressure/ms5611_core.c
+@@ -212,16 +212,21 @@ static irqreturn_t ms5611_trigger_handler(int irq, void *p)
+ struct iio_poll_func *pf = p;
+ struct iio_dev *indio_dev = pf->indio_dev;
+ struct ms5611_state *st = iio_priv(indio_dev);
+- s32 buf[4]; /* s32 (pressure) + s32 (temp) + 2 * s32 (timestamp) */
++ /* Ensure buffer elements are naturally aligned */
++ struct {
++ s32 channels[2];
++ s64 ts __aligned(8);
++ } scan;
+ int ret;
+
+ mutex_lock(&st->lock);
+- ret = ms5611_read_temp_and_pressure(indio_dev, &buf[1], &buf[0]);
++ ret = ms5611_read_temp_and_pressure(indio_dev, &scan.channels[1],
++ &scan.channels[0]);
+ mutex_unlock(&st->lock);
+ if (ret < 0)
+ goto err;
+
+- iio_push_to_buffers_with_timestamp(indio_dev, buf,
++ iio_push_to_buffers_with_timestamp(indio_dev, &scan,
+ iio_get_time_ns(indio_dev));
+
+ err:
+diff --git a/drivers/iio/pressure/zpa2326.c b/drivers/iio/pressure/zpa2326.c
+index 99dfe33ee402..245f2e2d412b 100644
+--- a/drivers/iio/pressure/zpa2326.c
++++ b/drivers/iio/pressure/zpa2326.c
+@@ -664,8 +664,10 @@ static int zpa2326_resume(const struct iio_dev *indio_dev)
+ int err;
+
+ err = pm_runtime_get_sync(indio_dev->dev.parent);
+- if (err < 0)
++ if (err < 0) {
++ pm_runtime_put(indio_dev->dev.parent);
+ return err;
++ }
+
+ if (err > 0) {
+ /*
+diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
+index 2210759843ba..c3521ace4d25 100644
+--- a/drivers/infiniband/hw/mlx5/qp.c
++++ b/drivers/infiniband/hw/mlx5/qp.c
+@@ -1525,6 +1525,8 @@ static int create_raw_packet_qp(struct mlx5_ib_dev *dev, struct mlx5_ib_qp *qp,
+ u16 uid = to_mpd(pd)->uid;
+ u32 out[MLX5_ST_SZ_DW(create_tir_out)] = {};
+
++ if (!qp->sq.wqe_cnt && !qp->rq.wqe_cnt)
++ return -EINVAL;
+ if (qp->sq.wqe_cnt) {
+ err = create_raw_packet_qp_tis(dev, qp, sq, tdn, pd);
+ if (err)
+diff --git a/drivers/infiniband/sw/rxe/rxe.c b/drivers/infiniband/sw/rxe/rxe.c
+index 4afdd2e20883..5642eefb4ba1 100644
+--- a/drivers/infiniband/sw/rxe/rxe.c
++++ b/drivers/infiniband/sw/rxe/rxe.c
+@@ -77,6 +77,7 @@ static void rxe_init_device_param(struct rxe_dev *rxe)
+ {
+ rxe->max_inline_data = RXE_MAX_INLINE_DATA;
+
++ rxe->attr.vendor_id = RXE_VENDOR_ID;
+ rxe->attr.max_mr_size = RXE_MAX_MR_SIZE;
+ rxe->attr.page_size_cap = RXE_PAGE_SIZE_CAP;
+ rxe->attr.max_qp = RXE_MAX_QP;
+diff --git a/drivers/infiniband/sw/rxe/rxe_param.h b/drivers/infiniband/sw/rxe/rxe_param.h
+index f59616b02477..99e9d8ba9767 100644
+--- a/drivers/infiniband/sw/rxe/rxe_param.h
++++ b/drivers/infiniband/sw/rxe/rxe_param.h
+@@ -127,6 +127,9 @@ enum rxe_device_param {
+
+ /* Delay before calling arbiter timer */
+ RXE_NSEC_ARB_TIMER_DELAY = 200,
++
++ /* IBTA v1.4 A3.3.1 VENDOR INFORMATION section */
++ RXE_VENDOR_ID = 0XFFFFFF,
+ };
+
+ /* default/initial rxe port parameters */
+diff --git a/drivers/input/serio/i8042-x86ia64io.h b/drivers/input/serio/i8042-x86ia64io.h
+index 7e048b557462..858a26302198 100644
+--- a/drivers/input/serio/i8042-x86ia64io.h
++++ b/drivers/input/serio/i8042-x86ia64io.h
+@@ -425,6 +425,13 @@ static const struct dmi_system_id __initconst i8042_dmi_nomux_table[] = {
+ DMI_MATCH(DMI_PRODUCT_NAME, "076804U"),
+ },
+ },
++ {
++ /* Lenovo XiaoXin Air 12 */
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
++ DMI_MATCH(DMI_PRODUCT_NAME, "80UN"),
++ },
++ },
+ {
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
+diff --git a/drivers/input/touchscreen/elants_i2c.c b/drivers/input/touchscreen/elants_i2c.c
+index 2289f9638116..5b14a4a87c40 100644
+--- a/drivers/input/touchscreen/elants_i2c.c
++++ b/drivers/input/touchscreen/elants_i2c.c
+@@ -1318,7 +1318,6 @@ static int elants_i2c_probe(struct i2c_client *client,
+ 0, MT_TOOL_PALM, 0, 0);
+ input_abs_set_res(ts->input, ABS_MT_POSITION_X, ts->x_res);
+ input_abs_set_res(ts->input, ABS_MT_POSITION_Y, ts->y_res);
+- input_abs_set_res(ts->input, ABS_MT_TOUCH_MAJOR, 1);
+
+ error = input_register_device(ts->input);
+ if (error) {
+diff --git a/drivers/input/touchscreen/mms114.c b/drivers/input/touchscreen/mms114.c
+index 2ef1adaed9af..5bdf4ac1a303 100644
+--- a/drivers/input/touchscreen/mms114.c
++++ b/drivers/input/touchscreen/mms114.c
+@@ -54,6 +54,7 @@
+ enum mms_type {
+ TYPE_MMS114 = 114,
+ TYPE_MMS152 = 152,
++ TYPE_MMS345L = 345,
+ };
+
+ struct mms114_data {
+@@ -250,6 +251,15 @@ static int mms114_get_version(struct mms114_data *data)
+ int error;
+
+ switch (data->type) {
++ case TYPE_MMS345L:
++ error = __mms114_read_reg(data, MMS152_FW_REV, 3, buf);
++ if (error)
++ return error;
++
++ dev_info(dev, "TSP FW Rev: bootloader 0x%x / core 0x%x / config 0x%x\n",
++ buf[0], buf[1], buf[2]);
++ break;
++
+ case TYPE_MMS152:
+ error = __mms114_read_reg(data, MMS152_FW_REV, 3, buf);
+ if (error)
+@@ -287,8 +297,8 @@ static int mms114_setup_regs(struct mms114_data *data)
+ if (error < 0)
+ return error;
+
+- /* MMS152 has no configuration or power on registers */
+- if (data->type == TYPE_MMS152)
++ /* Only MMS114 has configuration and power on registers */
++ if (data->type != TYPE_MMS114)
+ return 0;
+
+ error = mms114_set_active(data, true);
+@@ -597,6 +607,9 @@ static const struct of_device_id mms114_dt_match[] = {
+ }, {
+ .compatible = "melfas,mms152",
+ .data = (void *)TYPE_MMS152,
++ }, {
++ .compatible = "melfas,mms345l",
++ .data = (void *)TYPE_MMS345L,
+ },
+ { }
+ };
+diff --git a/drivers/iommu/Kconfig b/drivers/iommu/Kconfig
+index 2ab07ce17abb..432f3ff080c9 100644
+--- a/drivers/iommu/Kconfig
++++ b/drivers/iommu/Kconfig
+@@ -211,7 +211,7 @@ config INTEL_IOMMU_DEBUGFS
+
+ config INTEL_IOMMU_SVM
+ bool "Support for Shared Virtual Memory with Intel IOMMU"
+- depends on INTEL_IOMMU && X86
++ depends on INTEL_IOMMU && X86_64
+ select PCI_PASID
+ select PCI_PRI
+ select MMU_NOTIFIER
+diff --git a/drivers/misc/atmel-ssc.c b/drivers/misc/atmel-ssc.c
+index ab4144ea1f11..d6cd5537126c 100644
+--- a/drivers/misc/atmel-ssc.c
++++ b/drivers/misc/atmel-ssc.c
+@@ -10,7 +10,7 @@
+ #include <linux/clk.h>
+ #include <linux/err.h>
+ #include <linux/io.h>
+-#include <linux/spinlock.h>
++#include <linux/mutex.h>
+ #include <linux/atmel-ssc.h>
+ #include <linux/slab.h>
+ #include <linux/module.h>
+@@ -20,7 +20,7 @@
+ #include "../../sound/soc/atmel/atmel_ssc_dai.h"
+
+ /* Serialize access to ssc_list and user count */
+-static DEFINE_SPINLOCK(user_lock);
++static DEFINE_MUTEX(user_lock);
+ static LIST_HEAD(ssc_list);
+
+ struct ssc_device *ssc_request(unsigned int ssc_num)
+@@ -28,7 +28,7 @@ struct ssc_device *ssc_request(unsigned int ssc_num)
+ int ssc_valid = 0;
+ struct ssc_device *ssc;
+
+- spin_lock(&user_lock);
++ mutex_lock(&user_lock);
+ list_for_each_entry(ssc, &ssc_list, list) {
+ if (ssc->pdev->dev.of_node) {
+ if (of_alias_get_id(ssc->pdev->dev.of_node, "ssc")
+@@ -44,18 +44,18 @@ struct ssc_device *ssc_request(unsigned int ssc_num)
+ }
+
+ if (!ssc_valid) {
+- spin_unlock(&user_lock);
++ mutex_unlock(&user_lock);
+ pr_err("ssc: ssc%d platform device is missing\n", ssc_num);
+ return ERR_PTR(-ENODEV);
+ }
+
+ if (ssc->user) {
+- spin_unlock(&user_lock);
++ mutex_unlock(&user_lock);
+ dev_dbg(&ssc->pdev->dev, "module busy\n");
+ return ERR_PTR(-EBUSY);
+ }
+ ssc->user++;
+- spin_unlock(&user_lock);
++ mutex_unlock(&user_lock);
+
+ clk_prepare(ssc->clk);
+
+@@ -67,14 +67,14 @@ void ssc_free(struct ssc_device *ssc)
+ {
+ bool disable_clk = true;
+
+- spin_lock(&user_lock);
++ mutex_lock(&user_lock);
+ if (ssc->user)
+ ssc->user--;
+ else {
+ disable_clk = false;
+ dev_dbg(&ssc->pdev->dev, "device already free\n");
+ }
+- spin_unlock(&user_lock);
++ mutex_unlock(&user_lock);
+
+ if (disable_clk)
+ clk_unprepare(ssc->clk);
+@@ -237,9 +237,9 @@ static int ssc_probe(struct platform_device *pdev)
+ return -ENXIO;
+ }
+
+- spin_lock(&user_lock);
++ mutex_lock(&user_lock);
+ list_add_tail(&ssc->list, &ssc_list);
+- spin_unlock(&user_lock);
++ mutex_unlock(&user_lock);
+
+ platform_set_drvdata(pdev, ssc);
+
+@@ -258,9 +258,9 @@ static int ssc_remove(struct platform_device *pdev)
+
+ ssc_sound_dai_remove(ssc);
+
+- spin_lock(&user_lock);
++ mutex_lock(&user_lock);
+ list_del(&ssc->list);
+- spin_unlock(&user_lock);
++ mutex_unlock(&user_lock);
+
+ return 0;
+ }
+diff --git a/drivers/misc/habanalabs/goya/goya_security.c b/drivers/misc/habanalabs/goya/goya_security.c
+index d6ec12b3e692..08fc89ea0a0c 100644
+--- a/drivers/misc/habanalabs/goya/goya_security.c
++++ b/drivers/misc/habanalabs/goya/goya_security.c
+@@ -695,7 +695,6 @@ static void goya_init_tpc_protection_bits(struct hl_device *hdev)
+ mask |= 1 << ((mmTPC0_CFG_CFG_SUBTRACT_VALUE & 0x7F) >> 2);
+ mask |= 1 << ((mmTPC0_CFG_SM_BASE_ADDRESS_LOW & 0x7F) >> 2);
+ mask |= 1 << ((mmTPC0_CFG_SM_BASE_ADDRESS_HIGH & 0x7F) >> 2);
+- mask |= 1 << ((mmTPC0_CFG_CFG_SUBTRACT_VALUE & 0x7F) >> 2);
+ mask |= 1 << ((mmTPC0_CFG_TPC_STALL & 0x7F) >> 2);
+ mask |= 1 << ((mmTPC0_CFG_MSS_CONFIG & 0x7F) >> 2);
+ mask |= 1 << ((mmTPC0_CFG_TPC_INTR_CAUSE & 0x7F) >> 2);
+@@ -875,6 +874,16 @@ static void goya_init_tpc_protection_bits(struct hl_device *hdev)
+ goya_pb_set_block(hdev, mmTPC1_RD_REGULATOR_BASE);
+ goya_pb_set_block(hdev, mmTPC1_WR_REGULATOR_BASE);
+
++ pb_addr = (mmTPC1_CFG_SEMAPHORE & ~0xFFF) + PROT_BITS_OFFS;
++ word_offset = ((mmTPC1_CFG_SEMAPHORE & PROT_BITS_OFFS) >> 7) << 2;
++
++ mask = 1 << ((mmTPC1_CFG_SEMAPHORE & 0x7F) >> 2);
++ mask |= 1 << ((mmTPC1_CFG_VFLAGS & 0x7F) >> 2);
++ mask |= 1 << ((mmTPC1_CFG_SFLAGS & 0x7F) >> 2);
++ mask |= 1 << ((mmTPC1_CFG_STATUS & 0x7F) >> 2);
++
++ WREG32(pb_addr + word_offset, ~mask);
++
+ pb_addr = (mmTPC1_CFG_CFG_BASE_ADDRESS_HIGH & ~0xFFF) + PROT_BITS_OFFS;
+ word_offset = ((mmTPC1_CFG_CFG_BASE_ADDRESS_HIGH &
+ PROT_BITS_OFFS) >> 7) << 2;
+@@ -882,6 +891,10 @@ static void goya_init_tpc_protection_bits(struct hl_device *hdev)
+ mask |= 1 << ((mmTPC1_CFG_CFG_SUBTRACT_VALUE & 0x7F) >> 2);
+ mask |= 1 << ((mmTPC1_CFG_SM_BASE_ADDRESS_LOW & 0x7F) >> 2);
+ mask |= 1 << ((mmTPC1_CFG_SM_BASE_ADDRESS_HIGH & 0x7F) >> 2);
++ mask |= 1 << ((mmTPC1_CFG_TPC_STALL & 0x7F) >> 2);
++ mask |= 1 << ((mmTPC1_CFG_MSS_CONFIG & 0x7F) >> 2);
++ mask |= 1 << ((mmTPC1_CFG_TPC_INTR_CAUSE & 0x7F) >> 2);
++ mask |= 1 << ((mmTPC1_CFG_TPC_INTR_MASK & 0x7F) >> 2);
+
+ WREG32(pb_addr + word_offset, ~mask);
+
+@@ -1057,6 +1070,16 @@ static void goya_init_tpc_protection_bits(struct hl_device *hdev)
+ goya_pb_set_block(hdev, mmTPC2_RD_REGULATOR_BASE);
+ goya_pb_set_block(hdev, mmTPC2_WR_REGULATOR_BASE);
+
++ pb_addr = (mmTPC2_CFG_SEMAPHORE & ~0xFFF) + PROT_BITS_OFFS;
++ word_offset = ((mmTPC2_CFG_SEMAPHORE & PROT_BITS_OFFS) >> 7) << 2;
++
++ mask = 1 << ((mmTPC2_CFG_SEMAPHORE & 0x7F) >> 2);
++ mask |= 1 << ((mmTPC2_CFG_VFLAGS & 0x7F) >> 2);
++ mask |= 1 << ((mmTPC2_CFG_SFLAGS & 0x7F) >> 2);
++ mask |= 1 << ((mmTPC2_CFG_STATUS & 0x7F) >> 2);
++
++ WREG32(pb_addr + word_offset, ~mask);
++
+ pb_addr = (mmTPC2_CFG_CFG_BASE_ADDRESS_HIGH & ~0xFFF) + PROT_BITS_OFFS;
+ word_offset = ((mmTPC2_CFG_CFG_BASE_ADDRESS_HIGH &
+ PROT_BITS_OFFS) >> 7) << 2;
+@@ -1064,6 +1087,10 @@ static void goya_init_tpc_protection_bits(struct hl_device *hdev)
+ mask |= 1 << ((mmTPC2_CFG_CFG_SUBTRACT_VALUE & 0x7F) >> 2);
+ mask |= 1 << ((mmTPC2_CFG_SM_BASE_ADDRESS_LOW & 0x7F) >> 2);
+ mask |= 1 << ((mmTPC2_CFG_SM_BASE_ADDRESS_HIGH & 0x7F) >> 2);
++ mask |= 1 << ((mmTPC2_CFG_TPC_STALL & 0x7F) >> 2);
++ mask |= 1 << ((mmTPC2_CFG_MSS_CONFIG & 0x7F) >> 2);
++ mask |= 1 << ((mmTPC2_CFG_TPC_INTR_CAUSE & 0x7F) >> 2);
++ mask |= 1 << ((mmTPC2_CFG_TPC_INTR_MASK & 0x7F) >> 2);
+
+ WREG32(pb_addr + word_offset, ~mask);
+
+@@ -1239,6 +1266,16 @@ static void goya_init_tpc_protection_bits(struct hl_device *hdev)
+ goya_pb_set_block(hdev, mmTPC3_RD_REGULATOR_BASE);
+ goya_pb_set_block(hdev, mmTPC3_WR_REGULATOR_BASE);
+
++ pb_addr = (mmTPC3_CFG_SEMAPHORE & ~0xFFF) + PROT_BITS_OFFS;
++ word_offset = ((mmTPC3_CFG_SEMAPHORE & PROT_BITS_OFFS) >> 7) << 2;
++
++ mask = 1 << ((mmTPC3_CFG_SEMAPHORE & 0x7F) >> 2);
++ mask |= 1 << ((mmTPC3_CFG_VFLAGS & 0x7F) >> 2);
++ mask |= 1 << ((mmTPC3_CFG_SFLAGS & 0x7F) >> 2);
++ mask |= 1 << ((mmTPC3_CFG_STATUS & 0x7F) >> 2);
++
++ WREG32(pb_addr + word_offset, ~mask);
++
+ pb_addr = (mmTPC3_CFG_CFG_BASE_ADDRESS_HIGH & ~0xFFF) + PROT_BITS_OFFS;
+ word_offset = ((mmTPC3_CFG_CFG_BASE_ADDRESS_HIGH
+ & PROT_BITS_OFFS) >> 7) << 2;
+@@ -1246,6 +1283,10 @@ static void goya_init_tpc_protection_bits(struct hl_device *hdev)
+ mask |= 1 << ((mmTPC3_CFG_CFG_SUBTRACT_VALUE & 0x7F) >> 2);
+ mask |= 1 << ((mmTPC3_CFG_SM_BASE_ADDRESS_LOW & 0x7F) >> 2);
+ mask |= 1 << ((mmTPC3_CFG_SM_BASE_ADDRESS_HIGH & 0x7F) >> 2);
++ mask |= 1 << ((mmTPC3_CFG_TPC_STALL & 0x7F) >> 2);
++ mask |= 1 << ((mmTPC3_CFG_MSS_CONFIG & 0x7F) >> 2);
++ mask |= 1 << ((mmTPC3_CFG_TPC_INTR_CAUSE & 0x7F) >> 2);
++ mask |= 1 << ((mmTPC3_CFG_TPC_INTR_MASK & 0x7F) >> 2);
+
+ WREG32(pb_addr + word_offset, ~mask);
+
+@@ -1421,6 +1462,16 @@ static void goya_init_tpc_protection_bits(struct hl_device *hdev)
+ goya_pb_set_block(hdev, mmTPC4_RD_REGULATOR_BASE);
+ goya_pb_set_block(hdev, mmTPC4_WR_REGULATOR_BASE);
+
++ pb_addr = (mmTPC4_CFG_SEMAPHORE & ~0xFFF) + PROT_BITS_OFFS;
++ word_offset = ((mmTPC4_CFG_SEMAPHORE & PROT_BITS_OFFS) >> 7) << 2;
++
++ mask = 1 << ((mmTPC4_CFG_SEMAPHORE & 0x7F) >> 2);
++ mask |= 1 << ((mmTPC4_CFG_VFLAGS & 0x7F) >> 2);
++ mask |= 1 << ((mmTPC4_CFG_SFLAGS & 0x7F) >> 2);
++ mask |= 1 << ((mmTPC4_CFG_STATUS & 0x7F) >> 2);
++
++ WREG32(pb_addr + word_offset, ~mask);
++
+ pb_addr = (mmTPC4_CFG_CFG_BASE_ADDRESS_HIGH & ~0xFFF) + PROT_BITS_OFFS;
+ word_offset = ((mmTPC4_CFG_CFG_BASE_ADDRESS_HIGH &
+ PROT_BITS_OFFS) >> 7) << 2;
+@@ -1428,6 +1479,10 @@ static void goya_init_tpc_protection_bits(struct hl_device *hdev)
+ mask |= 1 << ((mmTPC4_CFG_CFG_SUBTRACT_VALUE & 0x7F) >> 2);
+ mask |= 1 << ((mmTPC4_CFG_SM_BASE_ADDRESS_LOW & 0x7F) >> 2);
+ mask |= 1 << ((mmTPC4_CFG_SM_BASE_ADDRESS_HIGH & 0x7F) >> 2);
++ mask |= 1 << ((mmTPC4_CFG_TPC_STALL & 0x7F) >> 2);
++ mask |= 1 << ((mmTPC4_CFG_MSS_CONFIG & 0x7F) >> 2);
++ mask |= 1 << ((mmTPC4_CFG_TPC_INTR_CAUSE & 0x7F) >> 2);
++ mask |= 1 << ((mmTPC4_CFG_TPC_INTR_MASK & 0x7F) >> 2);
+
+ WREG32(pb_addr + word_offset, ~mask);
+
+@@ -1603,6 +1658,16 @@ static void goya_init_tpc_protection_bits(struct hl_device *hdev)
+ goya_pb_set_block(hdev, mmTPC5_RD_REGULATOR_BASE);
+ goya_pb_set_block(hdev, mmTPC5_WR_REGULATOR_BASE);
+
++ pb_addr = (mmTPC5_CFG_SEMAPHORE & ~0xFFF) + PROT_BITS_OFFS;
++ word_offset = ((mmTPC5_CFG_SEMAPHORE & PROT_BITS_OFFS) >> 7) << 2;
++
++ mask = 1 << ((mmTPC5_CFG_SEMAPHORE & 0x7F) >> 2);
++ mask |= 1 << ((mmTPC5_CFG_VFLAGS & 0x7F) >> 2);
++ mask |= 1 << ((mmTPC5_CFG_SFLAGS & 0x7F) >> 2);
++ mask |= 1 << ((mmTPC5_CFG_STATUS & 0x7F) >> 2);
++
++ WREG32(pb_addr + word_offset, ~mask);
++
+ pb_addr = (mmTPC5_CFG_CFG_BASE_ADDRESS_HIGH & ~0xFFF) + PROT_BITS_OFFS;
+ word_offset = ((mmTPC5_CFG_CFG_BASE_ADDRESS_HIGH &
+ PROT_BITS_OFFS) >> 7) << 2;
+@@ -1610,6 +1675,10 @@ static void goya_init_tpc_protection_bits(struct hl_device *hdev)
+ mask |= 1 << ((mmTPC5_CFG_CFG_SUBTRACT_VALUE & 0x7F) >> 2);
+ mask |= 1 << ((mmTPC5_CFG_SM_BASE_ADDRESS_LOW & 0x7F) >> 2);
+ mask |= 1 << ((mmTPC5_CFG_SM_BASE_ADDRESS_HIGH & 0x7F) >> 2);
++ mask |= 1 << ((mmTPC5_CFG_TPC_STALL & 0x7F) >> 2);
++ mask |= 1 << ((mmTPC5_CFG_MSS_CONFIG & 0x7F) >> 2);
++ mask |= 1 << ((mmTPC5_CFG_TPC_INTR_CAUSE & 0x7F) >> 2);
++ mask |= 1 << ((mmTPC5_CFG_TPC_INTR_MASK & 0x7F) >> 2);
+
+ WREG32(pb_addr + word_offset, ~mask);
+
+@@ -1785,6 +1854,16 @@ static void goya_init_tpc_protection_bits(struct hl_device *hdev)
+ goya_pb_set_block(hdev, mmTPC6_RD_REGULATOR_BASE);
+ goya_pb_set_block(hdev, mmTPC6_WR_REGULATOR_BASE);
+
++ pb_addr = (mmTPC6_CFG_SEMAPHORE & ~0xFFF) + PROT_BITS_OFFS;
++ word_offset = ((mmTPC6_CFG_SEMAPHORE & PROT_BITS_OFFS) >> 7) << 2;
++
++ mask = 1 << ((mmTPC6_CFG_SEMAPHORE & 0x7F) >> 2);
++ mask |= 1 << ((mmTPC6_CFG_VFLAGS & 0x7F) >> 2);
++ mask |= 1 << ((mmTPC6_CFG_SFLAGS & 0x7F) >> 2);
++ mask |= 1 << ((mmTPC6_CFG_STATUS & 0x7F) >> 2);
++
++ WREG32(pb_addr + word_offset, ~mask);
++
+ pb_addr = (mmTPC6_CFG_CFG_BASE_ADDRESS_HIGH & ~0xFFF) + PROT_BITS_OFFS;
+ word_offset = ((mmTPC6_CFG_CFG_BASE_ADDRESS_HIGH &
+ PROT_BITS_OFFS) >> 7) << 2;
+@@ -1792,6 +1871,10 @@ static void goya_init_tpc_protection_bits(struct hl_device *hdev)
+ mask |= 1 << ((mmTPC6_CFG_CFG_SUBTRACT_VALUE & 0x7F) >> 2);
+ mask |= 1 << ((mmTPC6_CFG_SM_BASE_ADDRESS_LOW & 0x7F) >> 2);
+ mask |= 1 << ((mmTPC6_CFG_SM_BASE_ADDRESS_HIGH & 0x7F) >> 2);
++ mask |= 1 << ((mmTPC6_CFG_TPC_STALL & 0x7F) >> 2);
++ mask |= 1 << ((mmTPC6_CFG_MSS_CONFIG & 0x7F) >> 2);
++ mask |= 1 << ((mmTPC6_CFG_TPC_INTR_CAUSE & 0x7F) >> 2);
++ mask |= 1 << ((mmTPC6_CFG_TPC_INTR_MASK & 0x7F) >> 2);
+
+ WREG32(pb_addr + word_offset, ~mask);
+
+@@ -1967,6 +2050,16 @@ static void goya_init_tpc_protection_bits(struct hl_device *hdev)
+ goya_pb_set_block(hdev, mmTPC7_RD_REGULATOR_BASE);
+ goya_pb_set_block(hdev, mmTPC7_WR_REGULATOR_BASE);
+
++ pb_addr = (mmTPC7_CFG_SEMAPHORE & ~0xFFF) + PROT_BITS_OFFS;
++ word_offset = ((mmTPC7_CFG_SEMAPHORE & PROT_BITS_OFFS) >> 7) << 2;
++
++ mask = 1 << ((mmTPC7_CFG_SEMAPHORE & 0x7F) >> 2);
++ mask |= 1 << ((mmTPC7_CFG_VFLAGS & 0x7F) >> 2);
++ mask |= 1 << ((mmTPC7_CFG_SFLAGS & 0x7F) >> 2);
++ mask |= 1 << ((mmTPC7_CFG_STATUS & 0x7F) >> 2);
++
++ WREG32(pb_addr + word_offset, ~mask);
++
+ pb_addr = (mmTPC7_CFG_CFG_BASE_ADDRESS_HIGH & ~0xFFF) + PROT_BITS_OFFS;
+ word_offset = ((mmTPC7_CFG_CFG_BASE_ADDRESS_HIGH &
+ PROT_BITS_OFFS) >> 7) << 2;
+@@ -1974,6 +2067,10 @@ static void goya_init_tpc_protection_bits(struct hl_device *hdev)
+ mask |= 1 << ((mmTPC7_CFG_CFG_SUBTRACT_VALUE & 0x7F) >> 2);
+ mask |= 1 << ((mmTPC7_CFG_SM_BASE_ADDRESS_LOW & 0x7F) >> 2);
+ mask |= 1 << ((mmTPC7_CFG_SM_BASE_ADDRESS_HIGH & 0x7F) >> 2);
++ mask |= 1 << ((mmTPC7_CFG_TPC_STALL & 0x7F) >> 2);
++ mask |= 1 << ((mmTPC7_CFG_MSS_CONFIG & 0x7F) >> 2);
++ mask |= 1 << ((mmTPC7_CFG_TPC_INTR_CAUSE & 0x7F) >> 2);
++ mask |= 1 << ((mmTPC7_CFG_TPC_INTR_MASK & 0x7F) >> 2);
+
+ WREG32(pb_addr + word_offset, ~mask);
+
+diff --git a/drivers/misc/mei/bus.c b/drivers/misc/mei/bus.c
+index 8d468e0a950a..f476dbc7252b 100644
+--- a/drivers/misc/mei/bus.c
++++ b/drivers/misc/mei/bus.c
+@@ -745,9 +745,8 @@ static int mei_cl_device_remove(struct device *dev)
+
+ mei_cl_bus_module_put(cldev);
+ module_put(THIS_MODULE);
+- dev->driver = NULL;
+- return ret;
+
++ return ret;
+ }
+
+ static ssize_t name_show(struct device *dev, struct device_attribute *a,
+diff --git a/drivers/mtd/nand/raw/brcmnand/brcmnand.c b/drivers/mtd/nand/raw/brcmnand/brcmnand.c
+index 52402aa7b4d3..968ff7703925 100644
+--- a/drivers/mtd/nand/raw/brcmnand/brcmnand.c
++++ b/drivers/mtd/nand/raw/brcmnand/brcmnand.c
+@@ -606,8 +606,9 @@ static int brcmnand_revision_init(struct brcmnand_controller *ctrl)
+ } else {
+ ctrl->cs_offsets = brcmnand_cs_offsets;
+
+- /* v5.0 and earlier has a different CS0 offset layout */
+- if (ctrl->nand_version <= 0x0500)
++ /* v3.3-5.0 have a different CS0 offset layout */
++ if (ctrl->nand_version >= 0x0303 &&
++ ctrl->nand_version <= 0x0500)
+ ctrl->cs0_offsets = brcmnand_cs_offsets_cs0;
+ }
+
+@@ -2021,28 +2022,31 @@ static int brcmnand_read_by_pio(struct mtd_info *mtd, struct nand_chip *chip,
+ static int brcmstb_nand_verify_erased_page(struct mtd_info *mtd,
+ struct nand_chip *chip, void *buf, u64 addr)
+ {
+- int i, sas;
+- void *oob = chip->oob_poi;
++ struct mtd_oob_region ecc;
++ int i;
+ int bitflips = 0;
+ int page = addr >> chip->page_shift;
+ int ret;
++ void *ecc_bytes;
+ void *ecc_chunk;
+
+ if (!buf)
+ buf = nand_get_data_buf(chip);
+
+- sas = mtd->oobsize / chip->ecc.steps;
+-
+ /* read without ecc for verification */
+ ret = chip->ecc.read_page_raw(chip, buf, true, page);
+ if (ret)
+ return ret;
+
+- for (i = 0; i < chip->ecc.steps; i++, oob += sas) {
++ for (i = 0; i < chip->ecc.steps; i++) {
+ ecc_chunk = buf + chip->ecc.size * i;
+- ret = nand_check_erased_ecc_chunk(ecc_chunk,
+- chip->ecc.size,
+- oob, sas, NULL, 0,
++
++ mtd_ooblayout_ecc(mtd, i, &ecc);
++ ecc_bytes = chip->oob_poi + ecc.offset;
++
++ ret = nand_check_erased_ecc_chunk(ecc_chunk, chip->ecc.size,
++ ecc_bytes, ecc.length,
++ NULL, 0,
+ chip->ecc.strength);
+ if (ret < 0)
+ return ret;
+diff --git a/drivers/mtd/nand/raw/marvell_nand.c b/drivers/mtd/nand/raw/marvell_nand.c
+index 179f0ca585f8..2211a23e4d50 100644
+--- a/drivers/mtd/nand/raw/marvell_nand.c
++++ b/drivers/mtd/nand/raw/marvell_nand.c
+@@ -707,7 +707,7 @@ static int marvell_nfc_wait_op(struct nand_chip *chip, unsigned int timeout_ms)
+ * In case the interrupt was not served in the required time frame,
+ * check if the ISR was not served or if something went actually wrong.
+ */
+- if (ret && !pending) {
++ if (!ret && !pending) {
+ dev_err(nfc->dev, "Timeout waiting for RB signal\n");
+ return -ETIMEDOUT;
+ }
+@@ -2664,7 +2664,7 @@ static int marvell_nand_chip_init(struct device *dev, struct marvell_nfc *nfc,
+ ret = mtd_device_register(mtd, NULL, 0);
+ if (ret) {
+ dev_err(dev, "failed to register mtd device: %d\n", ret);
+- nand_release(chip);
++ nand_cleanup(chip);
+ return ret;
+ }
+
+@@ -2673,6 +2673,16 @@ static int marvell_nand_chip_init(struct device *dev, struct marvell_nfc *nfc,
+ return 0;
+ }
+
++static void marvell_nand_chips_cleanup(struct marvell_nfc *nfc)
++{
++ struct marvell_nand_chip *entry, *temp;
++
++ list_for_each_entry_safe(entry, temp, &nfc->chips, node) {
++ nand_release(&entry->chip);
++ list_del(&entry->node);
++ }
++}
++
+ static int marvell_nand_chips_init(struct device *dev, struct marvell_nfc *nfc)
+ {
+ struct device_node *np = dev->of_node;
+@@ -2707,21 +2717,16 @@ static int marvell_nand_chips_init(struct device *dev, struct marvell_nfc *nfc)
+ ret = marvell_nand_chip_init(dev, nfc, nand_np);
+ if (ret) {
+ of_node_put(nand_np);
+- return ret;
++ goto cleanup_chips;
+ }
+ }
+
+ return 0;
+-}
+
+-static void marvell_nand_chips_cleanup(struct marvell_nfc *nfc)
+-{
+- struct marvell_nand_chip *entry, *temp;
++cleanup_chips:
++ marvell_nand_chips_cleanup(nfc);
+
+- list_for_each_entry_safe(entry, temp, &nfc->chips, node) {
+- nand_release(&entry->chip);
+- list_del(&entry->node);
+- }
++ return ret;
+ }
+
+ static int marvell_nfc_init_dma(struct marvell_nfc *nfc)
+diff --git a/drivers/mtd/nand/raw/nand_timings.c b/drivers/mtd/nand/raw/nand_timings.c
+index f64b06a71dfa..f12b7a7844c9 100644
+--- a/drivers/mtd/nand/raw/nand_timings.c
++++ b/drivers/mtd/nand/raw/nand_timings.c
+@@ -314,10 +314,9 @@ int onfi_fill_data_interface(struct nand_chip *chip,
+ /* microseconds -> picoseconds */
+ timings->tPROG_max = 1000000ULL * ONFI_DYN_TIMING_MAX;
+ timings->tBERS_max = 1000000ULL * ONFI_DYN_TIMING_MAX;
+- timings->tR_max = 1000000ULL * 200000000ULL;
+
+- /* nanoseconds -> picoseconds */
+- timings->tCCS_min = 1000UL * 500000;
++ timings->tR_max = 200000000;
++ timings->tCCS_min = 500000;
+ }
+
+ return 0;
+diff --git a/drivers/mtd/nand/raw/oxnas_nand.c b/drivers/mtd/nand/raw/oxnas_nand.c
+index 0429d218fd9f..23c222b6c40e 100644
+--- a/drivers/mtd/nand/raw/oxnas_nand.c
++++ b/drivers/mtd/nand/raw/oxnas_nand.c
+@@ -32,6 +32,7 @@ struct oxnas_nand_ctrl {
+ void __iomem *io_base;
+ struct clk *clk;
+ struct nand_chip *chips[OXNAS_NAND_MAX_CHIPS];
++ unsigned int nchips;
+ };
+
+ static uint8_t oxnas_nand_read_byte(struct nand_chip *chip)
+@@ -79,9 +80,9 @@ static int oxnas_nand_probe(struct platform_device *pdev)
+ struct nand_chip *chip;
+ struct mtd_info *mtd;
+ struct resource *res;
+- int nchips = 0;
+ int count = 0;
+ int err = 0;
++ int i;
+
+ /* Allocate memory for the device structure (and zero it) */
+ oxnas = devm_kzalloc(&pdev->dev, sizeof(*oxnas),
+@@ -143,12 +144,12 @@ static int oxnas_nand_probe(struct platform_device *pdev)
+ if (err)
+ goto err_cleanup_nand;
+
+- oxnas->chips[nchips] = chip;
+- ++nchips;
++ oxnas->chips[oxnas->nchips] = chip;
++ ++oxnas->nchips;
+ }
+
+ /* Exit if no chips found */
+- if (!nchips) {
++ if (!oxnas->nchips) {
+ err = -ENODEV;
+ goto err_clk_unprepare;
+ }
+@@ -161,6 +162,13 @@ err_cleanup_nand:
+ nand_cleanup(chip);
+ err_release_child:
+ of_node_put(nand_np);
++
++ for (i = 0; i < oxnas->nchips; i++) {
++ chip = oxnas->chips[i];
++ WARN_ON(mtd_device_unregister(nand_to_mtd(chip)));
++ nand_cleanup(chip);
++ }
++
+ err_clk_unprepare:
+ clk_disable_unprepare(oxnas->clk);
+ return err;
+@@ -169,9 +177,13 @@ err_clk_unprepare:
+ static int oxnas_nand_remove(struct platform_device *pdev)
+ {
+ struct oxnas_nand_ctrl *oxnas = platform_get_drvdata(pdev);
++ struct nand_chip *chip;
++ int i;
+
+- if (oxnas->chips[0])
+- nand_release(oxnas->chips[0]);
++ for (i = 0; i < oxnas->nchips; i++) {
++ chip = oxnas->chips[i];
++ nand_release(chip);
++ }
+
+ clk_disable_unprepare(oxnas->clk);
+
+diff --git a/drivers/mtd/spi-nor/sfdp.c b/drivers/mtd/spi-nor/sfdp.c
+index f6038d3a3684..27838f6166bb 100644
+--- a/drivers/mtd/spi-nor/sfdp.c
++++ b/drivers/mtd/spi-nor/sfdp.c
+@@ -21,10 +21,6 @@
+ #define SFDP_4BAIT_ID 0xff84 /* 4-byte Address Instruction Table */
+
+ #define SFDP_SIGNATURE 0x50444653U
+-#define SFDP_JESD216_MAJOR 1
+-#define SFDP_JESD216_MINOR 0
+-#define SFDP_JESD216A_MINOR 5
+-#define SFDP_JESD216B_MINOR 6
+
+ struct sfdp_header {
+ u32 signature; /* Ox50444653U <=> "SFDP" */
+diff --git a/drivers/mtd/spi-nor/sfdp.h b/drivers/mtd/spi-nor/sfdp.h
+index e0a8ded04890..b84abd0b6434 100644
+--- a/drivers/mtd/spi-nor/sfdp.h
++++ b/drivers/mtd/spi-nor/sfdp.h
+@@ -7,6 +7,12 @@
+ #ifndef __LINUX_MTD_SFDP_H
+ #define __LINUX_MTD_SFDP_H
+
++/* SFDP revisions */
++#define SFDP_JESD216_MAJOR 1
++#define SFDP_JESD216_MINOR 0
++#define SFDP_JESD216A_MINOR 5
++#define SFDP_JESD216B_MINOR 6
++
+ /* Basic Flash Parameter Table */
+
+ /*
+diff --git a/drivers/mtd/spi-nor/spansion.c b/drivers/mtd/spi-nor/spansion.c
+index 6756202ace4b..eac1c22b730f 100644
+--- a/drivers/mtd/spi-nor/spansion.c
++++ b/drivers/mtd/spi-nor/spansion.c
+@@ -8,6 +8,27 @@
+
+ #include "core.h"
+
++static int
++s25fs_s_post_bfpt_fixups(struct spi_nor *nor,
++ const struct sfdp_parameter_header *bfpt_header,
++ const struct sfdp_bfpt *bfpt,
++ struct spi_nor_flash_parameter *params)
++{
++ /*
++ * The S25FS-S chip family reports 512-byte pages in BFPT but
++ * in reality the write buffer still wraps at the safe default
++ * of 256 bytes. Overwrite the page size advertised by BFPT
++ * to get the writes working.
++ */
++ params->page_size = 256;
++
++ return 0;
++}
++
++static struct spi_nor_fixups s25fs_s_fixups = {
++ .post_bfpt = s25fs_s_post_bfpt_fixups,
++};
++
+ static const struct flash_info spansion_parts[] = {
+ /* Spansion/Cypress -- single (large) sector size only, at least
+ * for the chips listed here (without boot sectors).
+@@ -30,8 +51,8 @@ static const struct flash_info spansion_parts[] = {
+ SPI_NOR_DUAL_READ | SPI_NOR_QUAD_READ |
+ SPI_NOR_HAS_LOCK | USE_CLSR) },
+ { "s25fs512s", INFO6(0x010220, 0x4d0081, 256 * 1024, 256,
+- SPI_NOR_DUAL_READ | SPI_NOR_QUAD_READ |
+- USE_CLSR) },
++ SPI_NOR_DUAL_READ | SPI_NOR_QUAD_READ | USE_CLSR)
++ .fixups = &s25fs_s_fixups, },
+ { "s70fl01gs", INFO(0x010221, 0x4d00, 256 * 1024, 256, 0) },
+ { "s25sl12800", INFO(0x012018, 0x0300, 256 * 1024, 64, 0) },
+ { "s25sl12801", INFO(0x012018, 0x0301, 64 * 1024, 256, 0) },
+diff --git a/drivers/mtd/spi-nor/winbond.c b/drivers/mtd/spi-nor/winbond.c
+index 17deabad57e1..5062af10f138 100644
+--- a/drivers/mtd/spi-nor/winbond.c
++++ b/drivers/mtd/spi-nor/winbond.c
+@@ -8,6 +8,31 @@
+
+ #include "core.h"
+
++static int
++w25q256_post_bfpt_fixups(struct spi_nor *nor,
++ const struct sfdp_parameter_header *bfpt_header,
++ const struct sfdp_bfpt *bfpt,
++ struct spi_nor_flash_parameter *params)
++{
++ /*
++ * W25Q256JV supports 4B opcodes but W25Q256FV does not.
++ * Unfortunately, Winbond has re-used the same JEDEC ID for both
++ * variants which prevents us from defining a new entry in the parts
++ * table.
++ * To differentiate between W25Q256JV and W25Q256FV check SFDP header
++ * version: only JV has JESD216A compliant structure (version 5).
++ */
++ if (bfpt_header->major == SFDP_JESD216_MAJOR &&
++ bfpt_header->minor == SFDP_JESD216A_MINOR)
++ nor->flags |= SNOR_F_4B_OPCODES;
++
++ return 0;
++}
++
++static struct spi_nor_fixups w25q256_fixups = {
++ .post_bfpt = w25q256_post_bfpt_fixups,
++};
++
+ static const struct flash_info winbond_parts[] = {
+ /* Winbond -- w25x "blocks" are 64K, "sectors" are 4KiB */
+ { "w25x05", INFO(0xef3010, 0, 64 * 1024, 1, SECT_4K) },
+@@ -53,8 +78,8 @@ static const struct flash_info winbond_parts[] = {
+ { "w25q80bl", INFO(0xef4014, 0, 64 * 1024, 16, SECT_4K) },
+ { "w25q128", INFO(0xef4018, 0, 64 * 1024, 256, SECT_4K) },
+ { "w25q256", INFO(0xef4019, 0, 64 * 1024, 512,
+- SECT_4K | SPI_NOR_DUAL_READ | SPI_NOR_QUAD_READ |
+- SPI_NOR_4B_OPCODES) },
++ SECT_4K | SPI_NOR_DUAL_READ | SPI_NOR_QUAD_READ)
++ .fixups = &w25q256_fixups },
+ { "w25q256jvm", INFO(0xef7019, 0, 64 * 1024, 512,
+ SECT_4K | SPI_NOR_DUAL_READ | SPI_NOR_QUAD_READ) },
+ { "w25q256jw", INFO(0xef6019, 0, 64 * 1024, 512,
+diff --git a/drivers/net/dsa/microchip/ksz8795.c b/drivers/net/dsa/microchip/ksz8795.c
+index 7c17b0f705ec..6aadab277abc 100644
+--- a/drivers/net/dsa/microchip/ksz8795.c
++++ b/drivers/net/dsa/microchip/ksz8795.c
+@@ -1271,6 +1271,9 @@ static int ksz8795_switch_init(struct ksz_device *dev)
+ /* set the real number of ports */
+ dev->ds->num_ports = dev->port_cnt;
+
++ /* set the real number of ports */
++ dev->ds->num_ports = dev->port_cnt;
++
+ return 0;
+ }
+
+diff --git a/drivers/net/dsa/microchip/ksz9477.c b/drivers/net/dsa/microchip/ksz9477.c
+index 8d15c3016024..65701e65b6c2 100644
+--- a/drivers/net/dsa/microchip/ksz9477.c
++++ b/drivers/net/dsa/microchip/ksz9477.c
+@@ -516,6 +516,9 @@ static int ksz9477_port_vlan_filtering(struct dsa_switch *ds, int port,
+ PORT_VLAN_LOOKUP_VID_0, false);
+ }
+
++ /* set the real number of ports */
++ dev->ds->num_ports = dev->port_cnt;
++
+ return 0;
+ }
+
+diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
+index cf26cf4e47aa..b2da295e2fc0 100644
+--- a/drivers/net/ethernet/marvell/mvneta.c
++++ b/drivers/net/ethernet/marvell/mvneta.c
+@@ -3565,7 +3565,7 @@ static int mvneta_config_interface(struct mvneta_port *pp,
+ MVNETA_HSGMII_SERDES_PROTO);
+ break;
+ default:
+- return -EINVAL;
++ break;
+ }
+ }
+
+@@ -5003,10 +5003,18 @@ static void mvneta_conf_mbus_windows(struct mvneta_port *pp,
+ }
+
+ /* Power up the port */
+-static void mvneta_port_power_up(struct mvneta_port *pp, int phy_mode)
++static int mvneta_port_power_up(struct mvneta_port *pp, int phy_mode)
+ {
+ /* MAC Cause register should be cleared */
+ mvreg_write(pp, MVNETA_UNIT_INTR_CAUSE, 0);
++
++ if (phy_mode != PHY_INTERFACE_MODE_QSGMII &&
++ phy_mode != PHY_INTERFACE_MODE_SGMII &&
++ !phy_interface_mode_is_8023z(phy_mode) &&
++ !phy_interface_mode_is_rgmii(phy_mode))
++ return -EINVAL;
++
++ return 0;
+ }
+
+ /* Device initialization routine */
+@@ -5192,7 +5200,11 @@ static int mvneta_probe(struct platform_device *pdev)
+ if (err < 0)
+ goto err_netdev;
+
+- mvneta_port_power_up(pp, phy_mode);
++ err = mvneta_port_power_up(pp, pp->phy_interface);
++ if (err < 0) {
++ dev_err(&pdev->dev, "can't power up port\n");
++ return err;
++ }
+
+ /* Armada3700 network controller does not support per-cpu
+ * operation, so only single NAPI should be initialized.
+@@ -5346,7 +5358,11 @@ static int mvneta_resume(struct device *device)
+ }
+ }
+ mvneta_defaults_set(pp);
+- mvneta_port_power_up(pp, pp->phy_interface);
++ err = mvneta_port_power_up(pp, pp->phy_interface);
++ if (err < 0) {
++ dev_err(device, "can't power up port\n");
++ return err;
++ }
+
+ netif_device_attach(dev);
+
+diff --git a/drivers/net/ethernet/pensando/ionic/ionic_lif.c b/drivers/net/ethernet/pensando/ionic/ionic_lif.c
+index b591bec0301c..7fea60fc3e08 100644
+--- a/drivers/net/ethernet/pensando/ionic/ionic_lif.c
++++ b/drivers/net/ethernet/pensando/ionic/ionic_lif.c
+@@ -85,7 +85,8 @@ static void ionic_link_status_check(struct ionic_lif *lif)
+ u16 link_status;
+ bool link_up;
+
+- if (!test_bit(IONIC_LIF_F_LINK_CHECK_REQUESTED, lif->state))
++ if (!test_bit(IONIC_LIF_F_LINK_CHECK_REQUESTED, lif->state) ||
++ test_bit(IONIC_LIF_F_QUEUE_RESET, lif->state))
+ return;
+
+ if (lif->ionic->is_mgmt_nic)
+@@ -1235,6 +1236,7 @@ static int ionic_init_nic_features(struct ionic_lif *lif)
+
+ netdev->hw_features |= netdev->hw_enc_features;
+ netdev->features |= netdev->hw_features;
++ netdev->vlan_features |= netdev->features & ~NETIF_F_VLAN_FEATURES;
+
+ netdev->priv_flags |= IFF_UNICAST_FLT |
+ IFF_LIVE_ADDR_CHANGE;
+diff --git a/drivers/net/ethernet/qualcomm/rmnet/rmnet_config.c b/drivers/net/ethernet/qualcomm/rmnet/rmnet_config.c
+index fcdecddb2812..32106f4b3370 100644
+--- a/drivers/net/ethernet/qualcomm/rmnet/rmnet_config.c
++++ b/drivers/net/ethernet/qualcomm/rmnet/rmnet_config.c
+@@ -434,6 +434,11 @@ int rmnet_add_bridge(struct net_device *rmnet_dev,
+ return -EINVAL;
+ }
+
++ if (port->rmnet_mode != RMNET_EPMODE_VND) {
++ NL_SET_ERR_MSG_MOD(extack, "more than one bridge dev attached");
++ return -EINVAL;
++ }
++
+ if (rmnet_is_real_dev_registered(slave_dev)) {
+ NL_SET_ERR_MSG_MOD(extack,
+ "slave cannot be another rmnet dev");
+diff --git a/drivers/net/ipa/gsi.c b/drivers/net/ipa/gsi.c
+index 043a675e1be1..6144a1ccc0f3 100644
+--- a/drivers/net/ipa/gsi.c
++++ b/drivers/net/ipa/gsi.c
+@@ -490,6 +490,12 @@ static int gsi_channel_stop_command(struct gsi_channel *channel)
+ enum gsi_channel_state state = channel->state;
+ int ret;
+
++ /* Channel could have entered STOPPED state since last call
++ * if it timed out. If so, we're done.
++ */
++ if (state == GSI_CHANNEL_STATE_STOPPED)
++ return 0;
++
+ if (state != GSI_CHANNEL_STATE_STARTED &&
+ state != GSI_CHANNEL_STATE_STOP_IN_PROC)
+ return -EINVAL;
+@@ -773,13 +779,6 @@ int gsi_channel_stop(struct gsi *gsi, u32 channel_id)
+
+ gsi_channel_freeze(channel);
+
+- /* Channel could have entered STOPPED state since last call if the
+- * STOP command timed out. We won't stop a channel if stopping it
+- * was successful previously (so we still want the freeze above).
+- */
+- if (channel->state == GSI_CHANNEL_STATE_STOPPED)
+- return 0;
+-
+ /* RX channels might require a little time to enter STOPPED state */
+ retries = channel->toward_ipa ? 0 : GSI_CHANNEL_STOP_RX_RETRIES;
+
+diff --git a/drivers/net/ipa/ipa_cmd.c b/drivers/net/ipa/ipa_cmd.c
+index cee417181f98..e4febda2d6b4 100644
+--- a/drivers/net/ipa/ipa_cmd.c
++++ b/drivers/net/ipa/ipa_cmd.c
+@@ -645,6 +645,21 @@ u32 ipa_cmd_tag_process_count(void)
+ return 4;
+ }
+
++void ipa_cmd_tag_process(struct ipa *ipa)
++{
++ u32 count = ipa_cmd_tag_process_count();
++ struct gsi_trans *trans;
++
++ trans = ipa_cmd_trans_alloc(ipa, count);
++ if (trans) {
++ ipa_cmd_tag_process_add(trans);
++ gsi_trans_commit_wait(trans);
++ } else {
++ dev_err(&ipa->pdev->dev,
++ "error allocating %u entry tag transaction\n", count);
++ }
++}
++
+ static struct ipa_cmd_info *
+ ipa_cmd_info_alloc(struct ipa_endpoint *endpoint, u32 tre_count)
+ {
+diff --git a/drivers/net/ipa/ipa_cmd.h b/drivers/net/ipa/ipa_cmd.h
+index 4917525b3a47..1ee9265651a1 100644
+--- a/drivers/net/ipa/ipa_cmd.h
++++ b/drivers/net/ipa/ipa_cmd.h
+@@ -182,6 +182,14 @@ void ipa_cmd_tag_process_add(struct gsi_trans *trans);
+ */
+ u32 ipa_cmd_tag_process_count(void);
+
++/**
++ * ipa_cmd_tag_process() - Perform a tag process
++ *
++ * @Return: The number of elements to allocate in a transaction
++ * to hold tag process commands
++ */
++void ipa_cmd_tag_process(struct ipa *ipa);
++
+ /**
+ * ipa_cmd_trans_alloc() - Allocate a transaction for the command TX endpoint
+ * @ipa: IPA pointer
+diff --git a/drivers/net/ipa/ipa_endpoint.c b/drivers/net/ipa/ipa_endpoint.c
+index 1d823ac0f6d6..371c93953aea 100644
+--- a/drivers/net/ipa/ipa_endpoint.c
++++ b/drivers/net/ipa/ipa_endpoint.c
+@@ -1485,6 +1485,8 @@ void ipa_endpoint_suspend(struct ipa *ipa)
+ if (ipa->modem_netdev)
+ ipa_modem_suspend(ipa->modem_netdev);
+
++ ipa_cmd_tag_process(ipa);
++
+ ipa_endpoint_suspend_one(ipa->name_map[IPA_ENDPOINT_AP_LAN_RX]);
+ ipa_endpoint_suspend_one(ipa->name_map[IPA_ENDPOINT_AP_COMMAND_TX]);
+ }
+diff --git a/drivers/net/usb/qmi_wwan.c b/drivers/net/usb/qmi_wwan.c
+index 4a2c7355be63..e57d59b0a7ae 100644
+--- a/drivers/net/usb/qmi_wwan.c
++++ b/drivers/net/usb/qmi_wwan.c
+@@ -1370,6 +1370,7 @@ static const struct usb_device_id products[] = {
+ {QMI_QUIRK_SET_DTR(0x1e0e, 0x9001, 5)}, /* SIMCom 7100E, 7230E, 7600E ++ */
+ {QMI_QUIRK_SET_DTR(0x2c7c, 0x0121, 4)}, /* Quectel EC21 Mini PCIe */
+ {QMI_QUIRK_SET_DTR(0x2c7c, 0x0191, 4)}, /* Quectel EG91 */
++ {QMI_QUIRK_SET_DTR(0x2c7c, 0x0195, 4)}, /* Quectel EG95 */
+ {QMI_FIXED_INTF(0x2c7c, 0x0296, 4)}, /* Quectel BG96 */
+ {QMI_QUIRK_SET_DTR(0x2cb7, 0x0104, 4)}, /* Fibocom NL678 series */
+ {QMI_FIXED_INTF(0x0489, 0xe0b4, 0)}, /* Foxconn T77W968 LTE */
+diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
+index 71d63ed62071..137d7bcc1358 100644
+--- a/drivers/nvme/host/core.c
++++ b/drivers/nvme/host/core.c
+@@ -1916,6 +1916,7 @@ static void __nvme_revalidate_disk(struct gendisk *disk, struct nvme_id_ns *id)
+ if (ns->head->disk) {
+ nvme_update_disk_info(ns->head->disk, ns, id);
+ blk_queue_stack_limits(ns->head->disk->queue, ns->queue);
++ nvme_mpath_update_disk_size(ns->head->disk);
+ }
+ #endif
+ }
+diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
+index 719342600be6..46f965f8c9bc 100644
+--- a/drivers/nvme/host/nvme.h
++++ b/drivers/nvme/host/nvme.h
+@@ -583,6 +583,16 @@ static inline void nvme_trace_bio_complete(struct request *req,
+ req->bio, status);
+ }
+
++static inline void nvme_mpath_update_disk_size(struct gendisk *disk)
++{
++ struct block_device *bdev = bdget_disk(disk, 0);
++
++ if (bdev) {
++ bd_set_size(bdev, get_capacity(disk) << SECTOR_SHIFT);
++ bdput(bdev);
++ }
++}
++
+ extern struct device_attribute dev_attr_ana_grpid;
+ extern struct device_attribute dev_attr_ana_state;
+ extern struct device_attribute subsys_attr_iopolicy;
+@@ -658,6 +668,9 @@ static inline void nvme_mpath_wait_freeze(struct nvme_subsystem *subsys)
+ static inline void nvme_mpath_start_freeze(struct nvme_subsystem *subsys)
+ {
+ }
++static inline void nvme_mpath_update_disk_size(struct gendisk *disk)
++{
++}
+ #endif /* CONFIG_NVME_MULTIPATH */
+
+ #ifdef CONFIG_NVM
+diff --git a/drivers/opp/of.c b/drivers/opp/of.c
+index 9cd8f0adacae..249738e1e0b7 100644
+--- a/drivers/opp/of.c
++++ b/drivers/opp/of.c
+@@ -733,6 +733,10 @@ static int _of_add_opp_table_v1(struct device *dev, struct opp_table *opp_table)
+ return -EINVAL;
+ }
+
++ mutex_lock(&opp_table->lock);
++ opp_table->parsed_static_opps = 1;
++ mutex_unlock(&opp_table->lock);
++
+ val = prop->value;
+ while (nr) {
+ unsigned long freq = be32_to_cpup(val++) * 1000;
+diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
+index 809f2584e338..d4758518a97b 100644
+--- a/drivers/pci/pci.c
++++ b/drivers/pci/pci.c
+@@ -868,7 +868,9 @@ static inline bool platform_pci_need_resume(struct pci_dev *dev)
+
+ static inline bool platform_pci_bridge_d3(struct pci_dev *dev)
+ {
+- return pci_platform_pm ? pci_platform_pm->bridge_d3(dev) : false;
++ if (pci_platform_pm && pci_platform_pm->bridge_d3)
++ return pci_platform_pm->bridge_d3(dev);
++ return false;
+ }
+
+ /**
+diff --git a/drivers/phy/allwinner/phy-sun4i-usb.c b/drivers/phy/allwinner/phy-sun4i-usb.c
+index 856927382248..e5842e48a5e0 100644
+--- a/drivers/phy/allwinner/phy-sun4i-usb.c
++++ b/drivers/phy/allwinner/phy-sun4i-usb.c
+@@ -545,13 +545,14 @@ static void sun4i_usb_phy0_id_vbus_det_scan(struct work_struct *work)
+ struct sun4i_usb_phy_data *data =
+ container_of(work, struct sun4i_usb_phy_data, detect.work);
+ struct phy *phy0 = data->phys[0].phy;
+- struct sun4i_usb_phy *phy = phy_get_drvdata(phy0);
++ struct sun4i_usb_phy *phy;
+ bool force_session_end, id_notify = false, vbus_notify = false;
+ int id_det, vbus_det;
+
+- if (phy0 == NULL)
++ if (!phy0)
+ return;
+
++ phy = phy_get_drvdata(phy0);
+ id_det = sun4i_usb_phy0_get_id_det(data);
+ vbus_det = sun4i_usb_phy0_get_vbus_det(data);
+
+diff --git a/drivers/phy/rockchip/phy-rockchip-inno-dsidphy.c b/drivers/phy/rockchip/phy-rockchip-inno-dsidphy.c
+index a7c6c940a3a8..8af8c6c5cc02 100644
+--- a/drivers/phy/rockchip/phy-rockchip-inno-dsidphy.c
++++ b/drivers/phy/rockchip/phy-rockchip-inno-dsidphy.c
+@@ -607,8 +607,8 @@ static int inno_dsidphy_probe(struct platform_device *pdev)
+ platform_set_drvdata(pdev, inno);
+
+ inno->phy_base = devm_platform_ioremap_resource(pdev, 0);
+- if (!inno->phy_base)
+- return -ENOMEM;
++ if (IS_ERR(inno->phy_base))
++ return PTR_ERR(inno->phy_base);
+
+ inno->ref_clk = devm_clk_get(dev, "ref");
+ if (IS_ERR(inno->ref_clk)) {
+diff --git a/drivers/scsi/megaraid/megaraid_sas_fusion.c b/drivers/scsi/megaraid/megaraid_sas_fusion.c
+index 03a6c86475c8..9a9d742779f6 100644
+--- a/drivers/scsi/megaraid/megaraid_sas_fusion.c
++++ b/drivers/scsi/megaraid/megaraid_sas_fusion.c
+@@ -3797,10 +3797,8 @@ static irqreturn_t megasas_isr_fusion(int irq, void *devp)
+ if (instance->mask_interrupts)
+ return IRQ_NONE;
+
+-#if defined(ENABLE_IRQ_POLL)
+ if (irq_context->irq_poll_scheduled)
+ return IRQ_HANDLED;
+-#endif
+
+ if (!instance->msix_vectors) {
+ mfiStatus = instance->instancet->clear_intr(instance);
+diff --git a/drivers/scsi/qla2xxx/qla_def.h b/drivers/scsi/qla2xxx/qla_def.h
+index daa9e936887b..172ea4e5887d 100644
+--- a/drivers/scsi/qla2xxx/qla_def.h
++++ b/drivers/scsi/qla2xxx/qla_def.h
+@@ -4248,8 +4248,8 @@ struct qla_hw_data {
+ int fw_dump_reading;
+ void *mpi_fw_dump;
+ u32 mpi_fw_dump_len;
+- int mpi_fw_dump_reading:1;
+- int mpi_fw_dumped:1;
++ unsigned int mpi_fw_dump_reading:1;
++ unsigned int mpi_fw_dumped:1;
+ int prev_minidump_failed;
+ dma_addr_t eft_dma;
+ void *eft;
+diff --git a/drivers/slimbus/core.c b/drivers/slimbus/core.c
+index 526e3215d8fe..63ee96eb58c6 100644
+--- a/drivers/slimbus/core.c
++++ b/drivers/slimbus/core.c
+@@ -283,6 +283,7 @@ EXPORT_SYMBOL_GPL(slim_register_controller);
+ /* slim_remove_device: Remove the effect of slim_add_device() */
+ static void slim_remove_device(struct slim_device *sbdev)
+ {
++ of_node_put(sbdev->dev.of_node);
+ device_unregister(&sbdev->dev);
+ }
+
+diff --git a/drivers/soc/qcom/rpmh-rsc.c b/drivers/soc/qcom/rpmh-rsc.c
+index b71822131f59..3d2104286ee9 100644
+--- a/drivers/soc/qcom/rpmh-rsc.c
++++ b/drivers/soc/qcom/rpmh-rsc.c
+@@ -148,7 +148,7 @@ int rpmh_rsc_invalidate(struct rsc_drv *drv)
+ static struct tcs_group *get_tcs_for_msg(struct rsc_drv *drv,
+ const struct tcs_request *msg)
+ {
+- int type, ret;
++ int type;
+ struct tcs_group *tcs;
+
+ switch (msg->state) {
+@@ -169,19 +169,10 @@ static struct tcs_group *get_tcs_for_msg(struct rsc_drv *drv,
+ * If we are making an active request on a RSC that does not have a
+ * dedicated TCS for active state use, then re-purpose a wake TCS to
+ * send active votes.
+- * NOTE: The driver must be aware that this RSC does not have a
+- * dedicated AMC, and therefore would invalidate the sleep and wake
+- * TCSes before making an active state request.
+ */
+ tcs = get_tcs_of_type(drv, type);
+- if (msg->state == RPMH_ACTIVE_ONLY_STATE && !tcs->num_tcs) {
++ if (msg->state == RPMH_ACTIVE_ONLY_STATE && !tcs->num_tcs)
+ tcs = get_tcs_of_type(drv, WAKE_TCS);
+- if (tcs->num_tcs) {
+- ret = rpmh_rsc_invalidate(drv);
+- if (ret)
+- return ERR_PTR(ret);
+- }
+- }
+
+ return tcs;
+ }
+@@ -201,6 +192,42 @@ static const struct tcs_request *get_req_from_tcs(struct rsc_drv *drv,
+ return NULL;
+ }
+
++static void __tcs_set_trigger(struct rsc_drv *drv, int tcs_id, bool trigger)
++{
++ u32 enable;
++
++ /*
++ * HW req: Clear the DRV_CONTROL and enable TCS again
++ * While clearing ensure that the AMC mode trigger is cleared
++ * and then the mode enable is cleared.
++ */
++ enable = read_tcs_reg(drv, RSC_DRV_CONTROL, tcs_id, 0);
++ enable &= ~TCS_AMC_MODE_TRIGGER;
++ write_tcs_reg_sync(drv, RSC_DRV_CONTROL, tcs_id, enable);
++ enable &= ~TCS_AMC_MODE_ENABLE;
++ write_tcs_reg_sync(drv, RSC_DRV_CONTROL, tcs_id, enable);
++
++ if (trigger) {
++ /* Enable the AMC mode on the TCS and then trigger the TCS */
++ enable = TCS_AMC_MODE_ENABLE;
++ write_tcs_reg_sync(drv, RSC_DRV_CONTROL, tcs_id, enable);
++ enable |= TCS_AMC_MODE_TRIGGER;
++ write_tcs_reg_sync(drv, RSC_DRV_CONTROL, tcs_id, enable);
++ }
++}
++
++static void enable_tcs_irq(struct rsc_drv *drv, int tcs_id, bool enable)
++{
++ u32 data;
++
++ data = read_tcs_reg(drv, RSC_DRV_IRQ_ENABLE, 0, 0);
++ if (enable)
++ data |= BIT(tcs_id);
++ else
++ data &= ~BIT(tcs_id);
++ write_tcs_reg(drv, RSC_DRV_IRQ_ENABLE, 0, data);
++}
++
+ /**
+ * tcs_tx_done: TX Done interrupt handler
+ */
+@@ -237,6 +264,14 @@ static irqreturn_t tcs_tx_done(int irq, void *p)
+ }
+
+ trace_rpmh_tx_done(drv, i, req, err);
++
++ /*
++ * If wake tcs was re-purposed for sending active
++ * votes, clear AMC trigger & enable modes and
++ * disable interrupt for this TCS
++ */
++ if (!drv->tcs[ACTIVE_TCS].num_tcs)
++ __tcs_set_trigger(drv, i, false);
+ skip:
+ /* Reclaim the TCS */
+ write_tcs_reg(drv, RSC_DRV_CMD_ENABLE, i, 0);
+@@ -244,6 +279,13 @@ skip:
+ write_tcs_reg(drv, RSC_DRV_IRQ_CLEAR, 0, BIT(i));
+ spin_lock(&drv->lock);
+ clear_bit(i, drv->tcs_in_use);
++ /*
++ * Disable interrupt for WAKE TCS to avoid being
++ * spammed with interrupts coming when the solver
++ * sends its wake votes.
++ */
++ if (!drv->tcs[ACTIVE_TCS].num_tcs)
++ enable_tcs_irq(drv, i, false);
+ spin_unlock(&drv->lock);
+ if (req)
+ rpmh_tx_done(req, err);
+@@ -285,28 +327,6 @@ static void __tcs_buffer_write(struct rsc_drv *drv, int tcs_id, int cmd_id,
+ write_tcs_reg(drv, RSC_DRV_CMD_ENABLE, tcs_id, cmd_enable);
+ }
+
+-static void __tcs_trigger(struct rsc_drv *drv, int tcs_id)
+-{
+- u32 enable;
+-
+- /*
+- * HW req: Clear the DRV_CONTROL and enable TCS again
+- * While clearing ensure that the AMC mode trigger is cleared
+- * and then the mode enable is cleared.
+- */
+- enable = read_tcs_reg(drv, RSC_DRV_CONTROL, tcs_id, 0);
+- enable &= ~TCS_AMC_MODE_TRIGGER;
+- write_tcs_reg_sync(drv, RSC_DRV_CONTROL, tcs_id, enable);
+- enable &= ~TCS_AMC_MODE_ENABLE;
+- write_tcs_reg_sync(drv, RSC_DRV_CONTROL, tcs_id, enable);
+-
+- /* Enable the AMC mode on the TCS and then trigger the TCS */
+- enable = TCS_AMC_MODE_ENABLE;
+- write_tcs_reg_sync(drv, RSC_DRV_CONTROL, tcs_id, enable);
+- enable |= TCS_AMC_MODE_TRIGGER;
+- write_tcs_reg_sync(drv, RSC_DRV_CONTROL, tcs_id, enable);
+-}
+-
+ static int check_for_req_inflight(struct rsc_drv *drv, struct tcs_group *tcs,
+ const struct tcs_request *msg)
+ {
+@@ -377,10 +397,20 @@ static int tcs_write(struct rsc_drv *drv, const struct tcs_request *msg)
+
+ tcs->req[tcs_id - tcs->offset] = msg;
+ set_bit(tcs_id, drv->tcs_in_use);
++ if (msg->state == RPMH_ACTIVE_ONLY_STATE && tcs->type != ACTIVE_TCS) {
++ /*
++ * Clear previously programmed WAKE commands in selected
++ * repurposed TCS to avoid triggering them. tcs->slots will be
++ * cleaned from rpmh_flush() by invoking rpmh_rsc_invalidate()
++ */
++ write_tcs_reg_sync(drv, RSC_DRV_CMD_ENABLE, tcs_id, 0);
++ write_tcs_reg_sync(drv, RSC_DRV_CMD_WAIT_FOR_CMPL, tcs_id, 0);
++ enable_tcs_irq(drv, tcs_id, true);
++ }
+ spin_unlock(&drv->lock);
+
+ __tcs_buffer_write(drv, tcs_id, 0, msg);
+- __tcs_trigger(drv, tcs_id);
++ __tcs_set_trigger(drv, tcs_id, true);
+
+ done_write:
+ spin_unlock_irqrestore(&tcs->lock, flags);
+diff --git a/drivers/soc/qcom/rpmh.c b/drivers/soc/qcom/rpmh.c
+index eb0ded059d2e..a75f3df97742 100644
+--- a/drivers/soc/qcom/rpmh.c
++++ b/drivers/soc/qcom/rpmh.c
+@@ -119,6 +119,7 @@ static struct cache_req *cache_rpm_request(struct rpmh_ctrlr *ctrlr,
+ {
+ struct cache_req *req;
+ unsigned long flags;
++ u32 old_sleep_val, old_wake_val;
+
+ spin_lock_irqsave(&ctrlr->cache_lock, flags);
+ req = __find_req(ctrlr, cmd->addr);
+@@ -133,26 +134,27 @@ static struct cache_req *cache_rpm_request(struct rpmh_ctrlr *ctrlr,
+
+ req->addr = cmd->addr;
+ req->sleep_val = req->wake_val = UINT_MAX;
+- INIT_LIST_HEAD(&req->list);
+ list_add_tail(&req->list, &ctrlr->cache);
+
+ existing:
++ old_sleep_val = req->sleep_val;
++ old_wake_val = req->wake_val;
++
+ switch (state) {
+ case RPMH_ACTIVE_ONLY_STATE:
+- if (req->sleep_val != UINT_MAX)
+- req->wake_val = cmd->data;
+- break;
+ case RPMH_WAKE_ONLY_STATE:
+ req->wake_val = cmd->data;
+ break;
+ case RPMH_SLEEP_STATE:
+ req->sleep_val = cmd->data;
+ break;
+- default:
+- break;
+ }
+
+- ctrlr->dirty = true;
++ ctrlr->dirty = (req->sleep_val != old_sleep_val ||
++ req->wake_val != old_wake_val) &&
++ req->sleep_val != UINT_MAX &&
++ req->wake_val != UINT_MAX;
++
+ unlock:
+ spin_unlock_irqrestore(&ctrlr->cache_lock, flags);
+
+@@ -287,6 +289,7 @@ static void cache_batch(struct rpmh_ctrlr *ctrlr, struct batch_cache_req *req)
+
+ spin_lock_irqsave(&ctrlr->cache_lock, flags);
+ list_add_tail(&req->list, &ctrlr->batch_cache);
++ ctrlr->dirty = true;
+ spin_unlock_irqrestore(&ctrlr->cache_lock, flags);
+ }
+
+@@ -314,18 +317,6 @@ static int flush_batch(struct rpmh_ctrlr *ctrlr)
+ return ret;
+ }
+
+-static void invalidate_batch(struct rpmh_ctrlr *ctrlr)
+-{
+- struct batch_cache_req *req, *tmp;
+- unsigned long flags;
+-
+- spin_lock_irqsave(&ctrlr->cache_lock, flags);
+- list_for_each_entry_safe(req, tmp, &ctrlr->batch_cache, list)
+- kfree(req);
+- INIT_LIST_HEAD(&ctrlr->batch_cache);
+- spin_unlock_irqrestore(&ctrlr->cache_lock, flags);
+-}
+-
+ /**
+ * rpmh_write_batch: Write multiple sets of RPMH commands and wait for the
+ * batch to finish.
+@@ -463,6 +454,13 @@ int rpmh_flush(struct rpmh_ctrlr *ctrlr)
+ return 0;
+ }
+
++ /* Invalidate the TCSes first to avoid stale data */
++ do {
++ ret = rpmh_rsc_invalidate(ctrlr_to_drv(ctrlr));
++ } while (ret == -EAGAIN);
++ if (ret)
++ return ret;
++
+ /* First flush the cached batch requests */
+ ret = flush_batch(ctrlr);
+ if (ret)
+@@ -494,25 +492,25 @@ int rpmh_flush(struct rpmh_ctrlr *ctrlr)
+ }
+
+ /**
+- * rpmh_invalidate: Invalidate all sleep and active sets
+- * sets.
++ * rpmh_invalidate: Invalidate sleep and wake sets in batch_cache
+ *
+ * @dev: The device making the request
+ *
+- * Invalidate the sleep and active values in the TCS blocks.
++ * Invalidate the sleep and wake values in batch_cache.
+ */
+ int rpmh_invalidate(const struct device *dev)
+ {
+ struct rpmh_ctrlr *ctrlr = get_rpmh_ctrlr(dev);
+- int ret;
++ struct batch_cache_req *req, *tmp;
++ unsigned long flags;
+
+- invalidate_batch(ctrlr);
++ spin_lock_irqsave(&ctrlr->cache_lock, flags);
++ list_for_each_entry_safe(req, tmp, &ctrlr->batch_cache, list)
++ kfree(req);
++ INIT_LIST_HEAD(&ctrlr->batch_cache);
+ ctrlr->dirty = true;
++ spin_unlock_irqrestore(&ctrlr->cache_lock, flags);
+
+- do {
+- ret = rpmh_rsc_invalidate(ctrlr_to_drv(ctrlr));
+- } while (ret == -EAGAIN);
+-
+- return ret;
++ return 0;
+ }
+ EXPORT_SYMBOL(rpmh_invalidate);
+diff --git a/drivers/soc/qcom/socinfo.c b/drivers/soc/qcom/socinfo.c
+index ebb49aee179b..08a4b8ae1764 100644
+--- a/drivers/soc/qcom/socinfo.c
++++ b/drivers/soc/qcom/socinfo.c
+@@ -430,6 +430,8 @@ static int qcom_socinfo_probe(struct platform_device *pdev)
+ qs->attr.family = "Snapdragon";
+ qs->attr.machine = socinfo_machine(&pdev->dev,
+ le32_to_cpu(info->id));
++ qs->attr.soc_id = devm_kasprintf(&pdev->dev, GFP_KERNEL, "%u",
++ le32_to_cpu(info->id));
+ qs->attr.revision = devm_kasprintf(&pdev->dev, GFP_KERNEL, "%u.%u",
+ SOCINFO_MAJOR(le32_to_cpu(info->ver)),
+ SOCINFO_MINOR(le32_to_cpu(info->ver)));
+diff --git a/drivers/soundwire/intel.c b/drivers/soundwire/intel.c
+index 3c83e76c6bf9..cab425070e64 100644
+--- a/drivers/soundwire/intel.c
++++ b/drivers/soundwire/intel.c
+@@ -930,8 +930,9 @@ static int intel_create_dai(struct sdw_cdns *cdns,
+
+ /* TODO: Read supported rates/formats from hardware */
+ for (i = off; i < (off + num); i++) {
+- dais[i].name = kasprintf(GFP_KERNEL, "SDW%d Pin%d",
+- cdns->instance, i);
++ dais[i].name = devm_kasprintf(cdns->dev, GFP_KERNEL,
++ "SDW%d Pin%d",
++ cdns->instance, i);
+ if (!dais[i].name)
+ return -ENOMEM;
+
+diff --git a/drivers/spi/spi-fsl-dspi.c b/drivers/spi/spi-fsl-dspi.c
+index 38d337f0967d..e0b30e4b1b69 100644
+--- a/drivers/spi/spi-fsl-dspi.c
++++ b/drivers/spi/spi-fsl-dspi.c
+@@ -1461,20 +1461,7 @@ static int dspi_remove(struct platform_device *pdev)
+
+ static void dspi_shutdown(struct platform_device *pdev)
+ {
+- struct spi_controller *ctlr = platform_get_drvdata(pdev);
+- struct fsl_dspi *dspi = spi_controller_get_devdata(ctlr);
+-
+- /* Disable RX and TX */
+- regmap_update_bits(dspi->regmap, SPI_MCR,
+- SPI_MCR_DIS_TXF | SPI_MCR_DIS_RXF,
+- SPI_MCR_DIS_TXF | SPI_MCR_DIS_RXF);
+-
+- /* Stop Running */
+- regmap_update_bits(dspi->regmap, SPI_MCR, SPI_MCR_HALT, SPI_MCR_HALT);
+-
+- dspi_release_dma(dspi);
+- clk_disable_unprepare(dspi->clk);
+- spi_unregister_controller(dspi->ctlr);
++ dspi_remove(pdev);
+ }
+
+ static struct platform_driver fsl_dspi_driver = {
+diff --git a/drivers/spi/spi-sprd-adi.c b/drivers/spi/spi-sprd-adi.c
+index 87dadb6b8ebf..8e84e25a8f7a 100644
+--- a/drivers/spi/spi-sprd-adi.c
++++ b/drivers/spi/spi-sprd-adi.c
+@@ -389,9 +389,9 @@ static int sprd_adi_restart_handler(struct notifier_block *this,
+ sprd_adi_write(sadi, sadi->slave_pbase + REG_WDG_CTRL, val);
+
+ /* Load the watchdog timeout value, 50ms is always enough. */
++ sprd_adi_write(sadi, sadi->slave_pbase + REG_WDG_LOAD_HIGH, 0);
+ sprd_adi_write(sadi, sadi->slave_pbase + REG_WDG_LOAD_LOW,
+ WDG_LOAD_VAL & WDG_LOAD_MASK);
+- sprd_adi_write(sadi, sadi->slave_pbase + REG_WDG_LOAD_HIGH, 0);
+
+ /* Start the watchdog to reset system */
+ sprd_adi_read(sadi, sadi->slave_pbase + REG_WDG_CTRL, &val);
+diff --git a/drivers/spi/spi-sun6i.c b/drivers/spi/spi-sun6i.c
+index ec7967be9e2f..956df79035d5 100644
+--- a/drivers/spi/spi-sun6i.c
++++ b/drivers/spi/spi-sun6i.c
+@@ -198,7 +198,7 @@ static int sun6i_spi_transfer_one(struct spi_master *master,
+ struct spi_transfer *tfr)
+ {
+ struct sun6i_spi *sspi = spi_master_get_devdata(master);
+- unsigned int mclk_rate, div, timeout;
++ unsigned int mclk_rate, div, div_cdr1, div_cdr2, timeout;
+ unsigned int start, end, tx_time;
+ unsigned int trig_level;
+ unsigned int tx_len = 0;
+@@ -287,14 +287,12 @@ static int sun6i_spi_transfer_one(struct spi_master *master,
+ * First try CDR2, and if we can't reach the expected
+ * frequency, fall back to CDR1.
+ */
+- div = mclk_rate / (2 * tfr->speed_hz);
+- if (div <= (SUN6I_CLK_CTL_CDR2_MASK + 1)) {
+- if (div > 0)
+- div--;
+-
+- reg = SUN6I_CLK_CTL_CDR2(div) | SUN6I_CLK_CTL_DRS;
++ div_cdr1 = DIV_ROUND_UP(mclk_rate, tfr->speed_hz);
++ div_cdr2 = DIV_ROUND_UP(div_cdr1, 2);
++ if (div_cdr2 <= (SUN6I_CLK_CTL_CDR2_MASK + 1)) {
++ reg = SUN6I_CLK_CTL_CDR2(div_cdr2 - 1) | SUN6I_CLK_CTL_DRS;
+ } else {
+- div = ilog2(mclk_rate) - ilog2(tfr->speed_hz);
++ div = min(SUN6I_CLK_CTL_CDR1_MASK, order_base_2(div_cdr1));
+ reg = SUN6I_CLK_CTL_CDR1(div);
+ }
+
+diff --git a/drivers/staging/comedi/drivers/addi_apci_1500.c b/drivers/staging/comedi/drivers/addi_apci_1500.c
+index 45ad4ba92f94..689acd69a1b9 100644
+--- a/drivers/staging/comedi/drivers/addi_apci_1500.c
++++ b/drivers/staging/comedi/drivers/addi_apci_1500.c
+@@ -456,9 +456,9 @@ static int apci1500_di_cfg_trig(struct comedi_device *dev,
+ unsigned int lo_mask = data[5] << shift;
+ unsigned int chan_mask = hi_mask | lo_mask;
+ unsigned int old_mask = (1 << shift) - 1;
+- unsigned int pm = devpriv->pm[trig] & old_mask;
+- unsigned int pt = devpriv->pt[trig] & old_mask;
+- unsigned int pp = devpriv->pp[trig] & old_mask;
++ unsigned int pm;
++ unsigned int pt;
++ unsigned int pp;
+
+ if (trig > 1) {
+ dev_dbg(dev->class_dev,
+@@ -471,6 +471,10 @@ static int apci1500_di_cfg_trig(struct comedi_device *dev,
+ return -EINVAL;
+ }
+
++ pm = devpriv->pm[trig] & old_mask;
++ pt = devpriv->pt[trig] & old_mask;
++ pp = devpriv->pp[trig] & old_mask;
++
+ switch (data[2]) {
+ case COMEDI_DIGITAL_TRIG_DISABLE:
+ /* clear trigger configuration */
+diff --git a/drivers/thermal/imx_thermal.c b/drivers/thermal/imx_thermal.c
+index e761c9b42217..1b84ea674edb 100644
+--- a/drivers/thermal/imx_thermal.c
++++ b/drivers/thermal/imx_thermal.c
+@@ -649,7 +649,7 @@ MODULE_DEVICE_TABLE(of, of_imx_thermal_match);
+ static int imx_thermal_register_legacy_cooling(struct imx_thermal_data *data)
+ {
+ struct device_node *np;
+- int ret;
++ int ret = 0;
+
+ data->policy = cpufreq_cpu_get(0);
+ if (!data->policy) {
+@@ -664,11 +664,12 @@ static int imx_thermal_register_legacy_cooling(struct imx_thermal_data *data)
+ if (IS_ERR(data->cdev)) {
+ ret = PTR_ERR(data->cdev);
+ cpufreq_cpu_put(data->policy);
+- return ret;
+ }
+ }
+
+- return 0;
++ of_node_put(np);
++
++ return ret;
+ }
+
+ static void imx_thermal_unregister_legacy_cooling(struct imx_thermal_data *data)
+diff --git a/drivers/thermal/intel/int340x_thermal/int3403_thermal.c b/drivers/thermal/intel/int340x_thermal/int3403_thermal.c
+index f86cbb125e2f..ec1d58c4ceaa 100644
+--- a/drivers/thermal/intel/int340x_thermal/int3403_thermal.c
++++ b/drivers/thermal/intel/int340x_thermal/int3403_thermal.c
+@@ -74,7 +74,7 @@ static void int3403_notify(acpi_handle handle,
+ THERMAL_TRIP_CHANGED);
+ break;
+ default:
+- dev_err(&priv->pdev->dev, "Unsupported event [0x%x]\n", event);
++ dev_dbg(&priv->pdev->dev, "Unsupported event [0x%x]\n", event);
+ break;
+ }
+ }
+diff --git a/drivers/thermal/mtk_thermal.c b/drivers/thermal/mtk_thermal.c
+index 6b7ef1993d7e..42c9cd0e5f77 100644
+--- a/drivers/thermal/mtk_thermal.c
++++ b/drivers/thermal/mtk_thermal.c
+@@ -594,8 +594,7 @@ static int mtk_thermal_bank_temperature(struct mtk_thermal_bank *bank)
+ u32 raw;
+
+ for (i = 0; i < conf->bank_data[bank->id].num_sensors; i++) {
+- raw = readl(mt->thermal_base +
+- conf->msr[conf->bank_data[bank->id].sensors[i]]);
++ raw = readl(mt->thermal_base + conf->msr[i]);
+
+ temp = raw_to_mcelsius(mt,
+ conf->bank_data[bank->id].sensors[i],
+@@ -736,8 +735,7 @@ static void mtk_thermal_init_bank(struct mtk_thermal *mt, int num,
+
+ for (i = 0; i < conf->bank_data[num].num_sensors; i++)
+ writel(conf->sensor_mux_values[conf->bank_data[num].sensors[i]],
+- mt->thermal_base +
+- conf->adcpnp[conf->bank_data[num].sensors[i]]);
++ mt->thermal_base + conf->adcpnp[i]);
+
+ writel((1 << conf->bank_data[num].num_sensors) - 1,
+ controller_base + TEMP_MONCTL0);
+diff --git a/drivers/thunderbolt/tunnel.c b/drivers/thunderbolt/tunnel.c
+index dbe90bcf4ad4..c144ca9b032c 100644
+--- a/drivers/thunderbolt/tunnel.c
++++ b/drivers/thunderbolt/tunnel.c
+@@ -913,21 +913,21 @@ struct tb_tunnel *tb_tunnel_discover_usb3(struct tb *tb, struct tb_port *down)
+ * case.
+ */
+ path = tb_path_discover(down, TB_USB3_HOPID, NULL, -1,
+- &tunnel->dst_port, "USB3 Up");
++ &tunnel->dst_port, "USB3 Down");
+ if (!path) {
+ /* Just disable the downstream port */
+ tb_usb3_port_enable(down, false);
+ goto err_free;
+ }
+- tunnel->paths[TB_USB3_PATH_UP] = path;
+- tb_usb3_init_path(tunnel->paths[TB_USB3_PATH_UP]);
++ tunnel->paths[TB_USB3_PATH_DOWN] = path;
++ tb_usb3_init_path(tunnel->paths[TB_USB3_PATH_DOWN]);
+
+ path = tb_path_discover(tunnel->dst_port, -1, down, TB_USB3_HOPID, NULL,
+- "USB3 Down");
++ "USB3 Up");
+ if (!path)
+ goto err_deactivate;
+- tunnel->paths[TB_USB3_PATH_DOWN] = path;
+- tb_usb3_init_path(tunnel->paths[TB_USB3_PATH_DOWN]);
++ tunnel->paths[TB_USB3_PATH_UP] = path;
++ tb_usb3_init_path(tunnel->paths[TB_USB3_PATH_UP]);
+
+ /* Validate that the tunnel is complete */
+ if (!tb_port_is_usb3_up(tunnel->dst_port)) {
+diff --git a/drivers/tty/serial/cpm_uart/cpm_uart_core.c b/drivers/tty/serial/cpm_uart/cpm_uart_core.c
+index a04f74d2e854..4df47d02b34b 100644
+--- a/drivers/tty/serial/cpm_uart/cpm_uart_core.c
++++ b/drivers/tty/serial/cpm_uart/cpm_uart_core.c
+@@ -1215,7 +1215,12 @@ static int cpm_uart_init_port(struct device_node *np,
+
+ pinfo->gpios[i] = NULL;
+
+- gpiod = devm_gpiod_get_index(dev, NULL, i, GPIOD_ASIS);
++ gpiod = devm_gpiod_get_index_optional(dev, NULL, i, GPIOD_ASIS);
++
++ if (IS_ERR(gpiod)) {
++ ret = PTR_ERR(gpiod);
++ goto out_irq;
++ }
+
+ if (gpiod) {
+ if (i == GPIO_RTS || i == GPIO_DTR)
+@@ -1237,6 +1242,8 @@ static int cpm_uart_init_port(struct device_node *np,
+
+ return cpm_uart_request_port(&pinfo->port);
+
++out_irq:
++ irq_dispose_mapping(pinfo->port.irq);
+ out_pram:
+ cpm_uart_unmap_pram(pinfo, pram);
+ out_mem:
+diff --git a/drivers/tty/serial/mxs-auart.c b/drivers/tty/serial/mxs-auart.c
+index b4f835e7de23..b784323a6a7b 100644
+--- a/drivers/tty/serial/mxs-auart.c
++++ b/drivers/tty/serial/mxs-auart.c
+@@ -1698,21 +1698,21 @@ static int mxs_auart_probe(struct platform_device *pdev)
+ irq = platform_get_irq(pdev, 0);
+ if (irq < 0) {
+ ret = irq;
+- goto out_disable_clks;
++ goto out_iounmap;
+ }
+
+ s->port.irq = irq;
+ ret = devm_request_irq(&pdev->dev, irq, mxs_auart_irq_handle, 0,
+ dev_name(&pdev->dev), s);
+ if (ret)
+- goto out_disable_clks;
++ goto out_iounmap;
+
+ platform_set_drvdata(pdev, s);
+
+ ret = mxs_auart_init_gpios(s, &pdev->dev);
+ if (ret) {
+ dev_err(&pdev->dev, "Failed to initialize GPIOs.\n");
+- goto out_disable_clks;
++ goto out_iounmap;
+ }
+
+ /*
+@@ -1720,7 +1720,7 @@ static int mxs_auart_probe(struct platform_device *pdev)
+ */
+ ret = mxs_auart_request_gpio_irq(s);
+ if (ret)
+- goto out_disable_clks;
++ goto out_iounmap;
+
+ auart_port[s->port.line] = s;
+
+@@ -1746,6 +1746,9 @@ out_free_qpio_irq:
+ mxs_auart_free_gpio_irq(s);
+ auart_port[pdev->id] = NULL;
+
++out_iounmap:
++ iounmap(s->port.membase);
++
+ out_disable_clks:
+ if (is_asm9260_auart(s)) {
+ clk_disable_unprepare(s->clk);
+@@ -1761,6 +1764,7 @@ static int mxs_auart_remove(struct platform_device *pdev)
+ uart_remove_one_port(&auart_driver, &s->port);
+ auart_port[pdev->id] = NULL;
+ mxs_auart_free_gpio_irq(s);
++ iounmap(s->port.membase);
+ if (is_asm9260_auart(s)) {
+ clk_disable_unprepare(s->clk);
+ clk_disable_unprepare(s->clk_ahb);
+diff --git a/drivers/tty/serial/serial_core.c b/drivers/tty/serial/serial_core.c
+index 66a5e2faf57e..01cfeece0f16 100644
+--- a/drivers/tty/serial/serial_core.c
++++ b/drivers/tty/serial/serial_core.c
+@@ -41,8 +41,6 @@ static struct lock_class_key port_lock_key;
+
+ #define HIGH_BITS_OFFSET ((sizeof(long)-sizeof(int))*8)
+
+-#define SYSRQ_TIMEOUT (HZ * 5)
+-
+ static void uart_change_speed(struct tty_struct *tty, struct uart_state *state,
+ struct ktermios *old_termios);
+ static void uart_wait_until_sent(struct tty_struct *tty, int timeout);
+@@ -1916,6 +1914,12 @@ static inline bool uart_console_enabled(struct uart_port *port)
+ return uart_console(port) && (port->cons->flags & CON_ENABLED);
+ }
+
++static void __uart_port_spin_lock_init(struct uart_port *port)
++{
++ spin_lock_init(&port->lock);
++ lockdep_set_class(&port->lock, &port_lock_key);
++}
++
+ /*
+ * Ensure that the serial console lock is initialised early.
+ * If this port is a console, then the spinlock is already initialised.
+@@ -1925,8 +1929,7 @@ static inline void uart_port_spin_lock_init(struct uart_port *port)
+ if (uart_console(port))
+ return;
+
+- spin_lock_init(&port->lock);
+- lockdep_set_class(&port->lock, &port_lock_key);
++ __uart_port_spin_lock_init(port);
+ }
+
+ #if defined(CONFIG_SERIAL_CORE_CONSOLE) || defined(CONFIG_CONSOLE_POLL)
+@@ -2372,6 +2375,13 @@ uart_configure_port(struct uart_driver *drv, struct uart_state *state,
+ /* Power up port for set_mctrl() */
+ uart_change_pm(state, UART_PM_STATE_ON);
+
++ /*
++ * If this driver supports console, and it hasn't been
++ * successfully registered yet, initialise spin lock for it.
++ */
++ if (port->cons && !(port->cons->flags & CON_ENABLED))
++ __uart_port_spin_lock_init(port);
++
+ /*
+ * Ensure that the modem control lines are de-activated.
+ * keep the DTR setting that is set in uart_set_options()
+@@ -3163,7 +3173,7 @@ static DECLARE_WORK(sysrq_enable_work, uart_sysrq_on);
+ * Returns false if @ch is out of enabling sequence and should be
+ * handled some other way, true if @ch was consumed.
+ */
+-static bool uart_try_toggle_sysrq(struct uart_port *port, unsigned int ch)
++bool uart_try_toggle_sysrq(struct uart_port *port, unsigned int ch)
+ {
+ int sysrq_toggle_seq_len = strlen(sysrq_toggle_seq);
+
+@@ -3186,99 +3196,9 @@ static bool uart_try_toggle_sysrq(struct uart_port *port, unsigned int ch)
+ port->sysrq = 0;
+ return true;
+ }
+-#else
+-static inline bool uart_try_toggle_sysrq(struct uart_port *port, unsigned int ch)
+-{
+- return false;
+-}
++EXPORT_SYMBOL_GPL(uart_try_toggle_sysrq);
+ #endif
+
+-int uart_handle_sysrq_char(struct uart_port *port, unsigned int ch)
+-{
+- if (!IS_ENABLED(CONFIG_MAGIC_SYSRQ_SERIAL))
+- return 0;
+-
+- if (!port->has_sysrq || !port->sysrq)
+- return 0;
+-
+- if (ch && time_before(jiffies, port->sysrq)) {
+- if (sysrq_mask()) {
+- handle_sysrq(ch);
+- port->sysrq = 0;
+- return 1;
+- }
+- if (uart_try_toggle_sysrq(port, ch))
+- return 1;
+- }
+- port->sysrq = 0;
+-
+- return 0;
+-}
+-EXPORT_SYMBOL_GPL(uart_handle_sysrq_char);
+-
+-int uart_prepare_sysrq_char(struct uart_port *port, unsigned int ch)
+-{
+- if (!IS_ENABLED(CONFIG_MAGIC_SYSRQ_SERIAL))
+- return 0;
+-
+- if (!port->has_sysrq || !port->sysrq)
+- return 0;
+-
+- if (ch && time_before(jiffies, port->sysrq)) {
+- if (sysrq_mask()) {
+- port->sysrq_ch = ch;
+- port->sysrq = 0;
+- return 1;
+- }
+- if (uart_try_toggle_sysrq(port, ch))
+- return 1;
+- }
+- port->sysrq = 0;
+-
+- return 0;
+-}
+-EXPORT_SYMBOL_GPL(uart_prepare_sysrq_char);
+-
+-void uart_unlock_and_check_sysrq(struct uart_port *port, unsigned long flags)
+-__releases(&port->lock)
+-{
+- if (port->has_sysrq) {
+- int sysrq_ch = port->sysrq_ch;
+-
+- port->sysrq_ch = 0;
+- spin_unlock_irqrestore(&port->lock, flags);
+- if (sysrq_ch)
+- handle_sysrq(sysrq_ch);
+- } else {
+- spin_unlock_irqrestore(&port->lock, flags);
+- }
+-}
+-EXPORT_SYMBOL_GPL(uart_unlock_and_check_sysrq);
+-
+-/*
+- * We do the SysRQ and SAK checking like this...
+- */
+-int uart_handle_break(struct uart_port *port)
+-{
+- struct uart_state *state = port->state;
+-
+- if (port->handle_break)
+- port->handle_break(port);
+-
+- if (port->has_sysrq && uart_console(port)) {
+- if (!port->sysrq) {
+- port->sysrq = jiffies + SYSRQ_TIMEOUT;
+- return 1;
+- }
+- port->sysrq = 0;
+- }
+-
+- if (port->flags & UPF_SAK)
+- do_SAK(state->port.tty);
+- return 0;
+-}
+-EXPORT_SYMBOL_GPL(uart_handle_break);
+-
+ EXPORT_SYMBOL(uart_write_wakeup);
+ EXPORT_SYMBOL(uart_register_driver);
+ EXPORT_SYMBOL(uart_unregister_driver);
+diff --git a/drivers/tty/serial/sh-sci.c b/drivers/tty/serial/sh-sci.c
+index e1179e74a2b8..204bb68ce3ca 100644
+--- a/drivers/tty/serial/sh-sci.c
++++ b/drivers/tty/serial/sh-sci.c
+@@ -3301,6 +3301,9 @@ static int sci_probe_single(struct platform_device *dev,
+ sciport->port.flags |= UPF_HARD_FLOW;
+ }
+
++ if (sci_uart_driver.cons->index == sciport->port.line)
++ spin_lock_init(&sciport->port.lock);
++
+ ret = uart_add_one_port(&sci_uart_driver, &sciport->port);
+ if (ret) {
+ sci_cleanup_single(sciport);
+diff --git a/drivers/tty/serial/xilinx_uartps.c b/drivers/tty/serial/xilinx_uartps.c
+index 35e9e8faf8de..ac137b6a1dc1 100644
+--- a/drivers/tty/serial/xilinx_uartps.c
++++ b/drivers/tty/serial/xilinx_uartps.c
+@@ -1459,7 +1459,6 @@ static int cdns_uart_probe(struct platform_device *pdev)
+ cdns_uart_uart_driver.nr = CDNS_UART_NR_PORTS;
+ #ifdef CONFIG_SERIAL_XILINX_PS_UART_CONSOLE
+ cdns_uart_uart_driver.cons = &cdns_uart_console;
+- cdns_uart_console.index = id;
+ #endif
+
+ rc = uart_register_driver(&cdns_uart_uart_driver);
+diff --git a/drivers/uio/uio_pdrv_genirq.c b/drivers/uio/uio_pdrv_genirq.c
+index ae319ef3a832..b60173bc93ce 100644
+--- a/drivers/uio/uio_pdrv_genirq.c
++++ b/drivers/uio/uio_pdrv_genirq.c
+@@ -159,9 +159,9 @@ static int uio_pdrv_genirq_probe(struct platform_device *pdev)
+ priv->pdev = pdev;
+
+ if (!uioinfo->irq) {
+- ret = platform_get_irq(pdev, 0);
++ ret = platform_get_irq_optional(pdev, 0);
+ uioinfo->irq = ret;
+- if (ret == -ENXIO && pdev->dev.of_node)
++ if (ret == -ENXIO)
+ uioinfo->irq = UIO_IRQ_NONE;
+ else if (ret == -EPROBE_DEFER)
+ return ret;
+diff --git a/drivers/usb/c67x00/c67x00-sched.c b/drivers/usb/c67x00/c67x00-sched.c
+index 633c52de3bb3..9865750bc31e 100644
+--- a/drivers/usb/c67x00/c67x00-sched.c
++++ b/drivers/usb/c67x00/c67x00-sched.c
+@@ -486,7 +486,7 @@ c67x00_giveback_urb(struct c67x00_hcd *c67x00, struct urb *urb, int status)
+ c67x00_release_urb(c67x00, urb);
+ usb_hcd_unlink_urb_from_ep(c67x00_hcd_to_hcd(c67x00), urb);
+ spin_unlock(&c67x00->lock);
+- usb_hcd_giveback_urb(c67x00_hcd_to_hcd(c67x00), urb, urbp->status);
++ usb_hcd_giveback_urb(c67x00_hcd_to_hcd(c67x00), urb, status);
+ spin_lock(&c67x00->lock);
+ }
+
+diff --git a/drivers/usb/chipidea/core.c b/drivers/usb/chipidea/core.c
+index ae0bdc036464..2a93127187ea 100644
+--- a/drivers/usb/chipidea/core.c
++++ b/drivers/usb/chipidea/core.c
+@@ -1265,6 +1265,29 @@ static void ci_controller_suspend(struct ci_hdrc *ci)
+ enable_irq(ci->irq);
+ }
+
++/*
++ * Handle the wakeup interrupt triggered by extcon connector
++ * We need to call ci_irq again for extcon since the first
++ * interrupt (wakeup int) only let the controller be out of
++ * low power mode, but not handle any interrupts.
++ */
++static void ci_extcon_wakeup_int(struct ci_hdrc *ci)
++{
++ struct ci_hdrc_cable *cable_id, *cable_vbus;
++ u32 otgsc = hw_read_otgsc(ci, ~0);
++
++ cable_id = &ci->platdata->id_extcon;
++ cable_vbus = &ci->platdata->vbus_extcon;
++
++ if (!IS_ERR(cable_id->edev) && ci->is_otg &&
++ (otgsc & OTGSC_IDIE) && (otgsc & OTGSC_IDIS))
++ ci_irq(ci->irq, ci);
++
++ if (!IS_ERR(cable_vbus->edev) && ci->is_otg &&
++ (otgsc & OTGSC_BSVIE) && (otgsc & OTGSC_BSVIS))
++ ci_irq(ci->irq, ci);
++}
++
+ static int ci_controller_resume(struct device *dev)
+ {
+ struct ci_hdrc *ci = dev_get_drvdata(dev);
+@@ -1297,6 +1320,7 @@ static int ci_controller_resume(struct device *dev)
+ enable_irq(ci->irq);
+ if (ci_otg_is_fsm_mode(ci))
+ ci_otg_fsm_wakeup_by_srp(ci);
++ ci_extcon_wakeup_int(ci);
+ }
+
+ return 0;
+diff --git a/drivers/usb/dwc2/platform.c b/drivers/usb/dwc2/platform.c
+index 5684c4781af9..797afa99ef3b 100644
+--- a/drivers/usb/dwc2/platform.c
++++ b/drivers/usb/dwc2/platform.c
+@@ -342,7 +342,8 @@ static void dwc2_driver_shutdown(struct platform_device *dev)
+ {
+ struct dwc2_hsotg *hsotg = platform_get_drvdata(dev);
+
+- disable_irq(hsotg->irq);
++ dwc2_disable_global_interrupts(hsotg);
++ synchronize_irq(hsotg->irq);
+ }
+
+ /**
+diff --git a/drivers/usb/gadget/function/f_uac1_legacy.c b/drivers/usb/gadget/function/f_uac1_legacy.c
+index 349deae7cabd..e2d7f69128a0 100644
+--- a/drivers/usb/gadget/function/f_uac1_legacy.c
++++ b/drivers/usb/gadget/function/f_uac1_legacy.c
+@@ -336,7 +336,9 @@ static int f_audio_out_ep_complete(struct usb_ep *ep, struct usb_request *req)
+
+ /* Copy buffer is full, add it to the play_queue */
+ if (audio_buf_size - copy_buf->actual < req->actual) {
++ spin_lock_irq(&audio->lock);
+ list_add_tail(©_buf->list, &audio->play_queue);
++ spin_unlock_irq(&audio->lock);
+ schedule_work(&audio->playback_work);
+ copy_buf = f_audio_buffer_alloc(audio_buf_size);
+ if (IS_ERR(copy_buf))
+diff --git a/drivers/usb/gadget/udc/atmel_usba_udc.c b/drivers/usb/gadget/udc/atmel_usba_udc.c
+index b771a854e29c..cfdc66e11871 100644
+--- a/drivers/usb/gadget/udc/atmel_usba_udc.c
++++ b/drivers/usb/gadget/udc/atmel_usba_udc.c
+@@ -871,7 +871,7 @@ static int usba_ep_dequeue(struct usb_ep *_ep, struct usb_request *_req)
+ u32 status;
+
+ DBG(DBG_GADGET | DBG_QUEUE, "ep_dequeue: %s, req %p\n",
+- ep->ep.name, req);
++ ep->ep.name, _req);
+
+ spin_lock_irqsave(&udc->lock, flags);
+
+diff --git a/drivers/usb/serial/ch341.c b/drivers/usb/serial/ch341.c
+index 89675ee29645..8fbaef5c9d69 100644
+--- a/drivers/usb/serial/ch341.c
++++ b/drivers/usb/serial/ch341.c
+@@ -77,6 +77,7 @@
+
+ static const struct usb_device_id id_table[] = {
+ { USB_DEVICE(0x4348, 0x5523) },
++ { USB_DEVICE(0x1a86, 0x7522) },
+ { USB_DEVICE(0x1a86, 0x7523) },
+ { USB_DEVICE(0x1a86, 0x5523) },
+ { },
+diff --git a/drivers/usb/serial/cypress_m8.c b/drivers/usb/serial/cypress_m8.c
+index 216edd5826ca..ecda82198798 100644
+--- a/drivers/usb/serial/cypress_m8.c
++++ b/drivers/usb/serial/cypress_m8.c
+@@ -59,6 +59,7 @@ static const struct usb_device_id id_table_earthmate[] = {
+
+ static const struct usb_device_id id_table_cyphidcomrs232[] = {
+ { USB_DEVICE(VENDOR_ID_CYPRESS, PRODUCT_ID_CYPHIDCOM) },
++ { USB_DEVICE(VENDOR_ID_SAI, PRODUCT_ID_CYPHIDCOM) },
+ { USB_DEVICE(VENDOR_ID_POWERCOM, PRODUCT_ID_UPS) },
+ { USB_DEVICE(VENDOR_ID_FRWD, PRODUCT_ID_CYPHIDCOM_FRWD) },
+ { } /* Terminating entry */
+@@ -73,6 +74,7 @@ static const struct usb_device_id id_table_combined[] = {
+ { USB_DEVICE(VENDOR_ID_DELORME, PRODUCT_ID_EARTHMATEUSB) },
+ { USB_DEVICE(VENDOR_ID_DELORME, PRODUCT_ID_EARTHMATEUSB_LT20) },
+ { USB_DEVICE(VENDOR_ID_CYPRESS, PRODUCT_ID_CYPHIDCOM) },
++ { USB_DEVICE(VENDOR_ID_SAI, PRODUCT_ID_CYPHIDCOM) },
+ { USB_DEVICE(VENDOR_ID_POWERCOM, PRODUCT_ID_UPS) },
+ { USB_DEVICE(VENDOR_ID_FRWD, PRODUCT_ID_CYPHIDCOM_FRWD) },
+ { USB_DEVICE(VENDOR_ID_DAZZLE, PRODUCT_ID_CA42) },
+diff --git a/drivers/usb/serial/cypress_m8.h b/drivers/usb/serial/cypress_m8.h
+index 35e223751c0e..16b7410ad057 100644
+--- a/drivers/usb/serial/cypress_m8.h
++++ b/drivers/usb/serial/cypress_m8.h
+@@ -25,6 +25,9 @@
+ #define VENDOR_ID_CYPRESS 0x04b4
+ #define PRODUCT_ID_CYPHIDCOM 0x5500
+
++/* Simply Automated HID->COM UPB PIM (using Cypress PID 0x5500) */
++#define VENDOR_ID_SAI 0x17dd
++
+ /* FRWD Dongle - a GPS sports watch */
+ #define VENDOR_ID_FRWD 0x6737
+ #define PRODUCT_ID_CYPHIDCOM_FRWD 0x0001
+diff --git a/drivers/usb/serial/iuu_phoenix.c b/drivers/usb/serial/iuu_phoenix.c
+index d5bff69b1769..b8dfeb4fb2ed 100644
+--- a/drivers/usb/serial/iuu_phoenix.c
++++ b/drivers/usb/serial/iuu_phoenix.c
+@@ -697,14 +697,16 @@ static int iuu_uart_write(struct tty_struct *tty, struct usb_serial_port *port,
+ struct iuu_private *priv = usb_get_serial_port_data(port);
+ unsigned long flags;
+
+- if (count > 256)
+- return -ENOMEM;
+-
+ spin_lock_irqsave(&priv->lock, flags);
+
++ count = min(count, 256 - priv->writelen);
++ if (count == 0)
++ goto out;
++
+ /* fill the buffer */
+ memcpy(priv->writebuf + priv->writelen, buf, count);
+ priv->writelen += count;
++out:
+ spin_unlock_irqrestore(&priv->lock, flags);
+
+ return count;
+diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
+index 254a8bbeea67..9b7cee98ea60 100644
+--- a/drivers/usb/serial/option.c
++++ b/drivers/usb/serial/option.c
+@@ -245,6 +245,7 @@ static void option_instat_callback(struct urb *urb);
+ /* These Quectel products use Quectel's vendor ID */
+ #define QUECTEL_PRODUCT_EC21 0x0121
+ #define QUECTEL_PRODUCT_EC25 0x0125
++#define QUECTEL_PRODUCT_EG95 0x0195
+ #define QUECTEL_PRODUCT_BG96 0x0296
+ #define QUECTEL_PRODUCT_EP06 0x0306
+ #define QUECTEL_PRODUCT_EM12 0x0512
+@@ -1097,6 +1098,8 @@ static const struct usb_device_id option_ids[] = {
+ .driver_info = RSVD(4) },
+ { USB_DEVICE(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EC25),
+ .driver_info = RSVD(4) },
++ { USB_DEVICE(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EG95),
++ .driver_info = RSVD(4) },
+ { USB_DEVICE(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_BG96),
+ .driver_info = RSVD(4) },
+ { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EP06, 0xff, 0xff, 0xff),
+@@ -2028,6 +2031,9 @@ static const struct usb_device_id option_ids[] = {
+ .driver_info = RSVD(4) | RSVD(5) },
+ { USB_DEVICE_INTERFACE_CLASS(0x2cb7, 0x0105, 0xff), /* Fibocom NL678 series */
+ .driver_info = RSVD(6) },
++ { USB_DEVICE_INTERFACE_CLASS(0x305a, 0x1404, 0xff) }, /* GosunCn GM500 RNDIS */
++ { USB_DEVICE_INTERFACE_CLASS(0x305a, 0x1405, 0xff) }, /* GosunCn GM500 MBIM */
++ { USB_DEVICE_INTERFACE_CLASS(0x305a, 0x1406, 0xff) }, /* GosunCn GM500 ECM/NCM */
+ { } /* Terminating entry */
+ };
+ MODULE_DEVICE_TABLE(usb, option_ids);
+diff --git a/drivers/virt/vboxguest/vboxguest_core.c b/drivers/virt/vboxguest/vboxguest_core.c
+index b690a8a4bf9e..18ebd7a6af98 100644
+--- a/drivers/virt/vboxguest/vboxguest_core.c
++++ b/drivers/virt/vboxguest/vboxguest_core.c
+@@ -1444,7 +1444,7 @@ static int vbg_ioctl_change_guest_capabilities(struct vbg_dev *gdev,
+ or_mask = caps->u.in.or_mask;
+ not_mask = caps->u.in.not_mask;
+
+- if ((or_mask | not_mask) & ~VMMDEV_EVENT_VALID_EVENT_MASK)
++ if ((or_mask | not_mask) & ~VMMDEV_GUEST_CAPABILITIES_MASK)
+ return -EINVAL;
+
+ ret = vbg_set_session_capabilities(gdev, session, or_mask, not_mask,
+@@ -1520,7 +1520,8 @@ int vbg_core_ioctl(struct vbg_session *session, unsigned int req, void *data)
+
+ /* For VMMDEV_REQUEST hdr->type != VBG_IOCTL_HDR_TYPE_DEFAULT */
+ if (req_no_size == VBG_IOCTL_VMMDEV_REQUEST(0) ||
+- req == VBG_IOCTL_VMMDEV_REQUEST_BIG)
++ req == VBG_IOCTL_VMMDEV_REQUEST_BIG ||
++ req == VBG_IOCTL_VMMDEV_REQUEST_BIG_ALT)
+ return vbg_ioctl_vmmrequest(gdev, session, data);
+
+ if (hdr->type != VBG_IOCTL_HDR_TYPE_DEFAULT)
+@@ -1558,6 +1559,7 @@ int vbg_core_ioctl(struct vbg_session *session, unsigned int req, void *data)
+ case VBG_IOCTL_HGCM_CALL(0):
+ return vbg_ioctl_hgcm_call(gdev, session, f32bit, data);
+ case VBG_IOCTL_LOG(0):
++ case VBG_IOCTL_LOG_ALT(0):
+ return vbg_ioctl_log(data);
+ }
+
+diff --git a/drivers/virt/vboxguest/vboxguest_core.h b/drivers/virt/vboxguest/vboxguest_core.h
+index 4188c12b839f..77c3a9c8255d 100644
+--- a/drivers/virt/vboxguest/vboxguest_core.h
++++ b/drivers/virt/vboxguest/vboxguest_core.h
+@@ -15,6 +15,21 @@
+ #include <linux/vboxguest.h>
+ #include "vmmdev.h"
+
++/*
++ * The mainline kernel version (this version) of the vboxguest module
++ * contained a bug where it defined VBGL_IOCTL_VMMDEV_REQUEST_BIG and
++ * VBGL_IOCTL_LOG using _IOC(_IOC_READ | _IOC_WRITE, 'V', ...) instead
++ * of _IO(V, ...) as the out of tree VirtualBox upstream version does.
++ *
++ * These _ALT definitions keep compatibility with the wrong defines the
++ * mainline kernel version used for a while.
++ * Note the VirtualBox userspace bits have always been built against
++ * VirtualBox upstream's headers, so this is likely not necessary. But
++ * we must never break our ABI so we keep these around to be 100% sure.
++ */
++#define VBG_IOCTL_VMMDEV_REQUEST_BIG_ALT _IOC(_IOC_READ | _IOC_WRITE, 'V', 3, 0)
++#define VBG_IOCTL_LOG_ALT(s) _IOC(_IOC_READ | _IOC_WRITE, 'V', 9, s)
++
+ struct vbg_session;
+
+ /** VBox guest memory balloon. */
+diff --git a/drivers/virt/vboxguest/vboxguest_linux.c b/drivers/virt/vboxguest/vboxguest_linux.c
+index 6e8c0f1c1056..32c2c52f7e84 100644
+--- a/drivers/virt/vboxguest/vboxguest_linux.c
++++ b/drivers/virt/vboxguest/vboxguest_linux.c
+@@ -131,7 +131,8 @@ static long vbg_misc_device_ioctl(struct file *filp, unsigned int req,
+ * the need for a bounce-buffer and another copy later on.
+ */
+ is_vmmdev_req = (req & ~IOCSIZE_MASK) == VBG_IOCTL_VMMDEV_REQUEST(0) ||
+- req == VBG_IOCTL_VMMDEV_REQUEST_BIG;
++ req == VBG_IOCTL_VMMDEV_REQUEST_BIG ||
++ req == VBG_IOCTL_VMMDEV_REQUEST_BIG_ALT;
+
+ if (is_vmmdev_req)
+ buf = vbg_req_alloc(size, VBG_IOCTL_HDR_TYPE_DEFAULT,
+diff --git a/drivers/virt/vboxguest/vmmdev.h b/drivers/virt/vboxguest/vmmdev.h
+index 6337b8d75d96..21f408120e3f 100644
+--- a/drivers/virt/vboxguest/vmmdev.h
++++ b/drivers/virt/vboxguest/vmmdev.h
+@@ -206,6 +206,8 @@ VMMDEV_ASSERT_SIZE(vmmdev_mask, 24 + 8);
+ * not.
+ */
+ #define VMMDEV_GUEST_SUPPORTS_GRAPHICS BIT(2)
++/* The mask of valid capabilities, for sanity checking. */
++#define VMMDEV_GUEST_CAPABILITIES_MASK 0x00000007U
+
+ /** struct vmmdev_hypervisorinfo - Hypervisor info structure. */
+ struct vmmdev_hypervisorinfo {
+diff --git a/drivers/xen/xenbus/xenbus_client.c b/drivers/xen/xenbus/xenbus_client.c
+index 040d2a43e8e3..786fbb7d8be0 100644
+--- a/drivers/xen/xenbus/xenbus_client.c
++++ b/drivers/xen/xenbus/xenbus_client.c
+@@ -69,11 +69,27 @@ struct xenbus_map_node {
+ unsigned int nr_handles;
+ };
+
++struct map_ring_valloc {
++ struct xenbus_map_node *node;
++
++ /* Why do we need two arrays? See comment of __xenbus_map_ring */
++ union {
++ unsigned long addrs[XENBUS_MAX_RING_GRANTS];
++ pte_t *ptes[XENBUS_MAX_RING_GRANTS];
++ };
++ phys_addr_t phys_addrs[XENBUS_MAX_RING_GRANTS];
++
++ struct gnttab_map_grant_ref map[XENBUS_MAX_RING_GRANTS];
++ struct gnttab_unmap_grant_ref unmap[XENBUS_MAX_RING_GRANTS];
++
++ unsigned int idx; /* HVM only. */
++};
++
+ static DEFINE_SPINLOCK(xenbus_valloc_lock);
+ static LIST_HEAD(xenbus_valloc_pages);
+
+ struct xenbus_ring_ops {
+- int (*map)(struct xenbus_device *dev,
++ int (*map)(struct xenbus_device *dev, struct map_ring_valloc *info,
+ grant_ref_t *gnt_refs, unsigned int nr_grefs,
+ void **vaddr);
+ int (*unmap)(struct xenbus_device *dev, void *vaddr);
+@@ -440,8 +456,7 @@ EXPORT_SYMBOL_GPL(xenbus_free_evtchn);
+ * Map @nr_grefs pages of memory into this domain from another
+ * domain's grant table. xenbus_map_ring_valloc allocates @nr_grefs
+ * pages of virtual address space, maps the pages to that address, and
+- * sets *vaddr to that address. Returns 0 on success, and GNTST_*
+- * (see xen/include/interface/grant_table.h) or -ENOMEM / -EINVAL on
++ * sets *vaddr to that address. Returns 0 on success, and -errno on
+ * error. If an error is returned, device will switch to
+ * XenbusStateClosing and the error message will be saved in XenStore.
+ */
+@@ -449,12 +464,25 @@ int xenbus_map_ring_valloc(struct xenbus_device *dev, grant_ref_t *gnt_refs,
+ unsigned int nr_grefs, void **vaddr)
+ {
+ int err;
++ struct map_ring_valloc *info;
++
++ *vaddr = NULL;
++
++ if (nr_grefs > XENBUS_MAX_RING_GRANTS)
++ return -EINVAL;
++
++ info = kzalloc(sizeof(*info), GFP_KERNEL);
++ if (!info)
++ return -ENOMEM;
+
+- err = ring_ops->map(dev, gnt_refs, nr_grefs, vaddr);
+- /* Some hypervisors are buggy and can return 1. */
+- if (err > 0)
+- err = GNTST_general_error;
++ info->node = kzalloc(sizeof(*info->node), GFP_KERNEL);
++ if (!info->node)
++ err = -ENOMEM;
++ else
++ err = ring_ops->map(dev, info, gnt_refs, nr_grefs, vaddr);
+
++ kfree(info->node);
++ kfree(info);
+ return err;
+ }
+ EXPORT_SYMBOL_GPL(xenbus_map_ring_valloc);
+@@ -466,62 +494,57 @@ static int __xenbus_map_ring(struct xenbus_device *dev,
+ grant_ref_t *gnt_refs,
+ unsigned int nr_grefs,
+ grant_handle_t *handles,
+- phys_addr_t *addrs,
++ struct map_ring_valloc *info,
+ unsigned int flags,
+ bool *leaked)
+ {
+- struct gnttab_map_grant_ref map[XENBUS_MAX_RING_GRANTS];
+- struct gnttab_unmap_grant_ref unmap[XENBUS_MAX_RING_GRANTS];
+ int i, j;
+- int err = GNTST_okay;
+
+ if (nr_grefs > XENBUS_MAX_RING_GRANTS)
+ return -EINVAL;
+
+ for (i = 0; i < nr_grefs; i++) {
+- memset(&map[i], 0, sizeof(map[i]));
+- gnttab_set_map_op(&map[i], addrs[i], flags, gnt_refs[i],
+- dev->otherend_id);
++ gnttab_set_map_op(&info->map[i], info->phys_addrs[i], flags,
++ gnt_refs[i], dev->otherend_id);
+ handles[i] = INVALID_GRANT_HANDLE;
+ }
+
+- gnttab_batch_map(map, i);
++ gnttab_batch_map(info->map, i);
+
+ for (i = 0; i < nr_grefs; i++) {
+- if (map[i].status != GNTST_okay) {
+- err = map[i].status;
+- xenbus_dev_fatal(dev, map[i].status,
++ if (info->map[i].status != GNTST_okay) {
++ xenbus_dev_fatal(dev, info->map[i].status,
+ "mapping in shared page %d from domain %d",
+ gnt_refs[i], dev->otherend_id);
+ goto fail;
+ } else
+- handles[i] = map[i].handle;
++ handles[i] = info->map[i].handle;
+ }
+
+- return GNTST_okay;
++ return 0;
+
+ fail:
+ for (i = j = 0; i < nr_grefs; i++) {
+ if (handles[i] != INVALID_GRANT_HANDLE) {
+- memset(&unmap[j], 0, sizeof(unmap[j]));
+- gnttab_set_unmap_op(&unmap[j], (phys_addr_t)addrs[i],
++ gnttab_set_unmap_op(&info->unmap[j],
++ info->phys_addrs[i],
+ GNTMAP_host_map, handles[i]);
+ j++;
+ }
+ }
+
+- if (HYPERVISOR_grant_table_op(GNTTABOP_unmap_grant_ref, unmap, j))
++ if (HYPERVISOR_grant_table_op(GNTTABOP_unmap_grant_ref, info->unmap, j))
+ BUG();
+
+ *leaked = false;
+ for (i = 0; i < j; i++) {
+- if (unmap[i].status != GNTST_okay) {
++ if (info->unmap[i].status != GNTST_okay) {
+ *leaked = true;
+ break;
+ }
+ }
+
+- return err;
++ return -ENOENT;
+ }
+
+ /**
+@@ -566,21 +589,12 @@ static int xenbus_unmap_ring(struct xenbus_device *dev, grant_handle_t *handles,
+ return err;
+ }
+
+-struct map_ring_valloc_hvm
+-{
+- unsigned int idx;
+-
+- /* Why do we need two arrays? See comment of __xenbus_map_ring */
+- phys_addr_t phys_addrs[XENBUS_MAX_RING_GRANTS];
+- unsigned long addrs[XENBUS_MAX_RING_GRANTS];
+-};
+-
+ static void xenbus_map_ring_setup_grant_hvm(unsigned long gfn,
+ unsigned int goffset,
+ unsigned int len,
+ void *data)
+ {
+- struct map_ring_valloc_hvm *info = data;
++ struct map_ring_valloc *info = data;
+ unsigned long vaddr = (unsigned long)gfn_to_virt(gfn);
+
+ info->phys_addrs[info->idx] = vaddr;
+@@ -589,39 +603,28 @@ static void xenbus_map_ring_setup_grant_hvm(unsigned long gfn,
+ info->idx++;
+ }
+
+-static int xenbus_map_ring_valloc_hvm(struct xenbus_device *dev,
+- grant_ref_t *gnt_ref,
+- unsigned int nr_grefs,
+- void **vaddr)
++static int xenbus_map_ring_hvm(struct xenbus_device *dev,
++ struct map_ring_valloc *info,
++ grant_ref_t *gnt_ref,
++ unsigned int nr_grefs,
++ void **vaddr)
+ {
+- struct xenbus_map_node *node;
++ struct xenbus_map_node *node = info->node;
+ int err;
+ void *addr;
+ bool leaked = false;
+- struct map_ring_valloc_hvm info = {
+- .idx = 0,
+- };
+ unsigned int nr_pages = XENBUS_PAGES(nr_grefs);
+
+- if (nr_grefs > XENBUS_MAX_RING_GRANTS)
+- return -EINVAL;
+-
+- *vaddr = NULL;
+-
+- node = kzalloc(sizeof(*node), GFP_KERNEL);
+- if (!node)
+- return -ENOMEM;
+-
+ err = alloc_xenballooned_pages(nr_pages, node->hvm.pages);
+ if (err)
+ goto out_err;
+
+ gnttab_foreach_grant(node->hvm.pages, nr_grefs,
+ xenbus_map_ring_setup_grant_hvm,
+- &info);
++ info);
+
+ err = __xenbus_map_ring(dev, gnt_ref, nr_grefs, node->handles,
+- info.phys_addrs, GNTMAP_host_map, &leaked);
++ info, GNTMAP_host_map, &leaked);
+ node->nr_handles = nr_grefs;
+
+ if (err)
+@@ -641,11 +644,13 @@ static int xenbus_map_ring_valloc_hvm(struct xenbus_device *dev,
+ spin_unlock(&xenbus_valloc_lock);
+
+ *vaddr = addr;
++ info->node = NULL;
++
+ return 0;
+
+ out_xenbus_unmap_ring:
+ if (!leaked)
+- xenbus_unmap_ring(dev, node->handles, nr_grefs, info.addrs);
++ xenbus_unmap_ring(dev, node->handles, nr_grefs, info->addrs);
+ else
+ pr_alert("leaking %p size %u page(s)",
+ addr, nr_pages);
+@@ -653,7 +658,6 @@ static int xenbus_map_ring_valloc_hvm(struct xenbus_device *dev,
+ if (!leaked)
+ free_xenballooned_pages(nr_pages, node->hvm.pages);
+ out_err:
+- kfree(node);
+ return err;
+ }
+
+@@ -676,40 +680,28 @@ int xenbus_unmap_ring_vfree(struct xenbus_device *dev, void *vaddr)
+ EXPORT_SYMBOL_GPL(xenbus_unmap_ring_vfree);
+
+ #ifdef CONFIG_XEN_PV
+-static int xenbus_map_ring_valloc_pv(struct xenbus_device *dev,
+- grant_ref_t *gnt_refs,
+- unsigned int nr_grefs,
+- void **vaddr)
++static int xenbus_map_ring_pv(struct xenbus_device *dev,
++ struct map_ring_valloc *info,
++ grant_ref_t *gnt_refs,
++ unsigned int nr_grefs,
++ void **vaddr)
+ {
+- struct xenbus_map_node *node;
++ struct xenbus_map_node *node = info->node;
+ struct vm_struct *area;
+- pte_t *ptes[XENBUS_MAX_RING_GRANTS];
+- phys_addr_t phys_addrs[XENBUS_MAX_RING_GRANTS];
+ int err = GNTST_okay;
+ int i;
+ bool leaked;
+
+- *vaddr = NULL;
+-
+- if (nr_grefs > XENBUS_MAX_RING_GRANTS)
+- return -EINVAL;
+-
+- node = kzalloc(sizeof(*node), GFP_KERNEL);
+- if (!node)
++ area = alloc_vm_area(XEN_PAGE_SIZE * nr_grefs, info->ptes);
++ if (!area)
+ return -ENOMEM;
+
+- area = alloc_vm_area(XEN_PAGE_SIZE * nr_grefs, ptes);
+- if (!area) {
+- kfree(node);
+- return -ENOMEM;
+- }
+-
+ for (i = 0; i < nr_grefs; i++)
+- phys_addrs[i] = arbitrary_virt_to_machine(ptes[i]).maddr;
++ info->phys_addrs[i] =
++ arbitrary_virt_to_machine(info->ptes[i]).maddr;
+
+ err = __xenbus_map_ring(dev, gnt_refs, nr_grefs, node->handles,
+- phys_addrs,
+- GNTMAP_host_map | GNTMAP_contains_pte,
++ info, GNTMAP_host_map | GNTMAP_contains_pte,
+ &leaked);
+ if (err)
+ goto failed;
+@@ -722,6 +714,8 @@ static int xenbus_map_ring_valloc_pv(struct xenbus_device *dev,
+ spin_unlock(&xenbus_valloc_lock);
+
+ *vaddr = area->addr;
++ info->node = NULL;
++
+ return 0;
+
+ failed:
+@@ -730,11 +724,10 @@ failed:
+ else
+ pr_alert("leaking VM area %p size %u page(s)", area, nr_grefs);
+
+- kfree(node);
+ return err;
+ }
+
+-static int xenbus_unmap_ring_vfree_pv(struct xenbus_device *dev, void *vaddr)
++static int xenbus_unmap_ring_pv(struct xenbus_device *dev, void *vaddr)
+ {
+ struct xenbus_map_node *node;
+ struct gnttab_unmap_grant_ref unmap[XENBUS_MAX_RING_GRANTS];
+@@ -798,12 +791,12 @@ static int xenbus_unmap_ring_vfree_pv(struct xenbus_device *dev, void *vaddr)
+ }
+
+ static const struct xenbus_ring_ops ring_ops_pv = {
+- .map = xenbus_map_ring_valloc_pv,
+- .unmap = xenbus_unmap_ring_vfree_pv,
++ .map = xenbus_map_ring_pv,
++ .unmap = xenbus_unmap_ring_pv,
+ };
+ #endif
+
+-struct unmap_ring_vfree_hvm
++struct unmap_ring_hvm
+ {
+ unsigned int idx;
+ unsigned long addrs[XENBUS_MAX_RING_GRANTS];
+@@ -814,19 +807,19 @@ static void xenbus_unmap_ring_setup_grant_hvm(unsigned long gfn,
+ unsigned int len,
+ void *data)
+ {
+- struct unmap_ring_vfree_hvm *info = data;
++ struct unmap_ring_hvm *info = data;
+
+ info->addrs[info->idx] = (unsigned long)gfn_to_virt(gfn);
+
+ info->idx++;
+ }
+
+-static int xenbus_unmap_ring_vfree_hvm(struct xenbus_device *dev, void *vaddr)
++static int xenbus_unmap_ring_hvm(struct xenbus_device *dev, void *vaddr)
+ {
+ int rv;
+ struct xenbus_map_node *node;
+ void *addr;
+- struct unmap_ring_vfree_hvm info = {
++ struct unmap_ring_hvm info = {
+ .idx = 0,
+ };
+ unsigned int nr_pages;
+@@ -887,8 +880,8 @@ enum xenbus_state xenbus_read_driver_state(const char *path)
+ EXPORT_SYMBOL_GPL(xenbus_read_driver_state);
+
+ static const struct xenbus_ring_ops ring_ops_hvm = {
+- .map = xenbus_map_ring_valloc_hvm,
+- .unmap = xenbus_unmap_ring_vfree_hvm,
++ .map = xenbus_map_ring_hvm,
++ .unmap = xenbus_unmap_ring_hvm,
+ };
+
+ void __init xenbus_ring_ops_init(void)
+diff --git a/fs/cifs/transport.c b/fs/cifs/transport.c
+index c97570eb2c18..7fefd2bd111c 100644
+--- a/fs/cifs/transport.c
++++ b/fs/cifs/transport.c
+@@ -528,7 +528,7 @@ wait_for_free_credits(struct TCP_Server_Info *server, const int num_credits,
+ const int timeout, const int flags,
+ unsigned int *instance)
+ {
+- int rc;
++ long rc;
+ int *credits;
+ int optype;
+ long int t;
+diff --git a/fs/fuse/file.c b/fs/fuse/file.c
+index e3afceecaa6b..0df03bc520a9 100644
+--- a/fs/fuse/file.c
++++ b/fs/fuse/file.c
+@@ -18,6 +18,7 @@
+ #include <linux/swap.h>
+ #include <linux/falloc.h>
+ #include <linux/uio.h>
++#include <linux/fs.h>
+
+ static struct page **fuse_pages_alloc(unsigned int npages, gfp_t flags,
+ struct fuse_page_desc **desc)
+@@ -2148,10 +2149,8 @@ static int fuse_writepages(struct address_space *mapping,
+
+ err = write_cache_pages(mapping, wbc, fuse_writepages_fill, &data);
+ if (data.wpa) {
+- /* Ignore errors if we can write at least one page */
+ WARN_ON(!data.wpa->ia.ap.num_pages);
+ fuse_writepages_send(&data);
+- err = 0;
+ }
+ if (data.ff)
+ fuse_file_put(data.ff, false, false);
+@@ -2760,7 +2759,16 @@ long fuse_do_ioctl(struct file *file, unsigned int cmd, unsigned long arg,
+ struct iovec *iov = iov_page;
+
+ iov->iov_base = (void __user *)arg;
+- iov->iov_len = _IOC_SIZE(cmd);
++
++ switch (cmd) {
++ case FS_IOC_GETFLAGS:
++ case FS_IOC_SETFLAGS:
++ iov->iov_len = sizeof(int);
++ break;
++ default:
++ iov->iov_len = _IOC_SIZE(cmd);
++ break;
++ }
+
+ if (_IOC_DIR(cmd) & _IOC_WRITE) {
+ in_iov = iov;
+diff --git a/fs/fuse/inode.c b/fs/fuse/inode.c
+index 95d712d44ca1..46dde0b659ec 100644
+--- a/fs/fuse/inode.c
++++ b/fs/fuse/inode.c
+@@ -121,10 +121,12 @@ static void fuse_evict_inode(struct inode *inode)
+ }
+ }
+
+-static int fuse_remount_fs(struct super_block *sb, int *flags, char *data)
++static int fuse_reconfigure(struct fs_context *fc)
+ {
++ struct super_block *sb = fc->root->d_sb;
++
+ sync_filesystem(sb);
+- if (*flags & SB_MANDLOCK)
++ if (fc->sb_flags & SB_MANDLOCK)
+ return -EINVAL;
+
+ return 0;
+@@ -468,6 +470,13 @@ static int fuse_parse_param(struct fs_context *fc, struct fs_parameter *param)
+ struct fuse_fs_context *ctx = fc->fs_private;
+ int opt;
+
++ /*
++ * Ignore options coming from mount(MS_REMOUNT) for backward
++ * compatibility.
++ */
++ if (fc->purpose == FS_CONTEXT_FOR_RECONFIGURE)
++ return 0;
++
+ opt = fs_parse(fc, fuse_fs_parameters, param, &result);
+ if (opt < 0)
+ return opt;
+@@ -810,7 +819,6 @@ static const struct super_operations fuse_super_operations = {
+ .evict_inode = fuse_evict_inode,
+ .write_inode = fuse_write_inode,
+ .drop_inode = generic_delete_inode,
+- .remount_fs = fuse_remount_fs,
+ .put_super = fuse_put_super,
+ .umount_begin = fuse_umount_begin,
+ .statfs = fuse_statfs,
+@@ -1284,6 +1292,7 @@ static int fuse_get_tree(struct fs_context *fc)
+ static const struct fs_context_operations fuse_context_ops = {
+ .free = fuse_free_fc,
+ .parse_param = fuse_parse_param,
++ .reconfigure = fuse_reconfigure,
+ .get_tree = fuse_get_tree,
+ };
+
+diff --git a/fs/gfs2/glops.c b/fs/gfs2/glops.c
+index 9e9c7a4b8c66..fc97c4d24dc5 100644
+--- a/fs/gfs2/glops.c
++++ b/fs/gfs2/glops.c
+@@ -527,8 +527,7 @@ static int freeze_go_sync(struct gfs2_glock *gl)
+ int error = 0;
+ struct gfs2_sbd *sdp = gl->gl_name.ln_sbd;
+
+- if (gl->gl_state == LM_ST_SHARED && !gfs2_withdrawn(sdp) &&
+- test_bit(SDF_JOURNAL_LIVE, &sdp->sd_flags)) {
++ if (gl->gl_req == LM_ST_EXCLUSIVE && !gfs2_withdrawn(sdp)) {
+ atomic_set(&sdp->sd_freeze_state, SFS_STARTING_FREEZE);
+ error = freeze_super(sdp->sd_vfs);
+ if (error) {
+@@ -541,8 +540,11 @@ static int freeze_go_sync(struct gfs2_glock *gl)
+ gfs2_assert_withdraw(sdp, 0);
+ }
+ queue_work(gfs2_freeze_wq, &sdp->sd_freeze_work);
+- gfs2_log_flush(sdp, NULL, GFS2_LOG_HEAD_FLUSH_FREEZE |
+- GFS2_LFC_FREEZE_GO_SYNC);
++ if (test_bit(SDF_JOURNAL_LIVE, &sdp->sd_flags))
++ gfs2_log_flush(sdp, NULL, GFS2_LOG_HEAD_FLUSH_FREEZE |
++ GFS2_LFC_FREEZE_GO_SYNC);
++ else /* read-only mounts */
++ atomic_set(&sdp->sd_freeze_state, SFS_FROZEN);
+ }
+ return 0;
+ }
+diff --git a/fs/gfs2/incore.h b/fs/gfs2/incore.h
+index 84a824293a78..013c029dd16b 100644
+--- a/fs/gfs2/incore.h
++++ b/fs/gfs2/incore.h
+@@ -395,7 +395,6 @@ enum {
+ GIF_QD_LOCKED = 1,
+ GIF_ALLOC_FAILED = 2,
+ GIF_SW_PAGED = 3,
+- GIF_ORDERED = 4,
+ GIF_FREE_VFS_INODE = 5,
+ GIF_GLOP_PENDING = 6,
+ };
+diff --git a/fs/gfs2/log.c b/fs/gfs2/log.c
+index 04882712cd66..62a73bd6575c 100644
+--- a/fs/gfs2/log.c
++++ b/fs/gfs2/log.c
+@@ -613,6 +613,12 @@ static int ip_cmp(void *priv, struct list_head *a, struct list_head *b)
+ return 0;
+ }
+
++static void __ordered_del_inode(struct gfs2_inode *ip)
++{
++ if (!list_empty(&ip->i_ordered))
++ list_del_init(&ip->i_ordered);
++}
++
+ static void gfs2_ordered_write(struct gfs2_sbd *sdp)
+ {
+ struct gfs2_inode *ip;
+@@ -623,8 +629,7 @@ static void gfs2_ordered_write(struct gfs2_sbd *sdp)
+ while (!list_empty(&sdp->sd_log_ordered)) {
+ ip = list_first_entry(&sdp->sd_log_ordered, struct gfs2_inode, i_ordered);
+ if (ip->i_inode.i_mapping->nrpages == 0) {
+- test_and_clear_bit(GIF_ORDERED, &ip->i_flags);
+- list_del(&ip->i_ordered);
++ __ordered_del_inode(ip);
+ continue;
+ }
+ list_move(&ip->i_ordered, &written);
+@@ -643,8 +648,7 @@ static void gfs2_ordered_wait(struct gfs2_sbd *sdp)
+ spin_lock(&sdp->sd_ordered_lock);
+ while (!list_empty(&sdp->sd_log_ordered)) {
+ ip = list_first_entry(&sdp->sd_log_ordered, struct gfs2_inode, i_ordered);
+- list_del(&ip->i_ordered);
+- WARN_ON(!test_and_clear_bit(GIF_ORDERED, &ip->i_flags));
++ __ordered_del_inode(ip);
+ if (ip->i_inode.i_mapping->nrpages == 0)
+ continue;
+ spin_unlock(&sdp->sd_ordered_lock);
+@@ -659,8 +663,7 @@ void gfs2_ordered_del_inode(struct gfs2_inode *ip)
+ struct gfs2_sbd *sdp = GFS2_SB(&ip->i_inode);
+
+ spin_lock(&sdp->sd_ordered_lock);
+- if (test_and_clear_bit(GIF_ORDERED, &ip->i_flags))
+- list_del(&ip->i_ordered);
++ __ordered_del_inode(ip);
+ spin_unlock(&sdp->sd_ordered_lock);
+ }
+
+diff --git a/fs/gfs2/log.h b/fs/gfs2/log.h
+index c1cd6ae17659..8965c751a303 100644
+--- a/fs/gfs2/log.h
++++ b/fs/gfs2/log.h
+@@ -53,9 +53,9 @@ static inline void gfs2_ordered_add_inode(struct gfs2_inode *ip)
+ if (gfs2_is_jdata(ip) || !gfs2_is_ordered(sdp))
+ return;
+
+- if (!test_bit(GIF_ORDERED, &ip->i_flags)) {
++ if (list_empty(&ip->i_ordered)) {
+ spin_lock(&sdp->sd_ordered_lock);
+- if (!test_and_set_bit(GIF_ORDERED, &ip->i_flags))
++ if (list_empty(&ip->i_ordered))
+ list_add(&ip->i_ordered, &sdp->sd_log_ordered);
+ spin_unlock(&sdp->sd_ordered_lock);
+ }
+diff --git a/fs/gfs2/main.c b/fs/gfs2/main.c
+index a1a295b739fb..4f2edb777a72 100644
+--- a/fs/gfs2/main.c
++++ b/fs/gfs2/main.c
+@@ -39,6 +39,7 @@ static void gfs2_init_inode_once(void *foo)
+ atomic_set(&ip->i_sizehint, 0);
+ init_rwsem(&ip->i_rw_mutex);
+ INIT_LIST_HEAD(&ip->i_trunc_list);
++ INIT_LIST_HEAD(&ip->i_ordered);
+ ip->i_qadata = NULL;
+ gfs2_holder_mark_uninitialized(&ip->i_rgd_gh);
+ memset(&ip->i_res, 0, sizeof(ip->i_res));
+diff --git a/fs/gfs2/ops_fstype.c b/fs/gfs2/ops_fstype.c
+index 094f5fe7c009..6d18d2c91add 100644
+--- a/fs/gfs2/ops_fstype.c
++++ b/fs/gfs2/ops_fstype.c
+@@ -1136,7 +1136,18 @@ static int gfs2_fill_super(struct super_block *sb, struct fs_context *fc)
+ goto fail_per_node;
+ }
+
+- if (!sb_rdonly(sb)) {
++ if (sb_rdonly(sb)) {
++ struct gfs2_holder freeze_gh;
++
++ error = gfs2_glock_nq_init(sdp->sd_freeze_gl, LM_ST_SHARED,
++ LM_FLAG_NOEXP | GL_EXACT,
++ &freeze_gh);
++ if (error) {
++ fs_err(sdp, "can't make FS RO: %d\n", error);
++ goto fail_per_node;
++ }
++ gfs2_glock_dq_uninit(&freeze_gh);
++ } else {
+ error = gfs2_make_fs_rw(sdp);
+ if (error) {
+ fs_err(sdp, "can't make FS RW: %d\n", error);
+diff --git a/fs/gfs2/recovery.c b/fs/gfs2/recovery.c
+index 96c345f49273..390ea79d682c 100644
+--- a/fs/gfs2/recovery.c
++++ b/fs/gfs2/recovery.c
+@@ -364,8 +364,8 @@ void gfs2_recover_func(struct work_struct *work)
+ /* Acquire a shared hold on the freeze lock */
+
+ error = gfs2_glock_nq_init(sdp->sd_freeze_gl, LM_ST_SHARED,
+- LM_FLAG_NOEXP | LM_FLAG_PRIORITY,
+- &thaw_gh);
++ LM_FLAG_NOEXP | LM_FLAG_PRIORITY |
++ GL_EXACT, &thaw_gh);
+ if (error)
+ goto fail_gunlock_ji;
+
+diff --git a/fs/gfs2/super.c b/fs/gfs2/super.c
+index 956fced0a8ec..160bb4598b48 100644
+--- a/fs/gfs2/super.c
++++ b/fs/gfs2/super.c
+@@ -167,7 +167,8 @@ int gfs2_make_fs_rw(struct gfs2_sbd *sdp)
+ if (error)
+ return error;
+
+- error = gfs2_glock_nq_init(sdp->sd_freeze_gl, LM_ST_SHARED, 0,
++ error = gfs2_glock_nq_init(sdp->sd_freeze_gl, LM_ST_SHARED,
++ LM_FLAG_NOEXP | GL_EXACT,
+ &freeze_gh);
+ if (error)
+ goto fail_threads;
+@@ -203,7 +204,6 @@ int gfs2_make_fs_rw(struct gfs2_sbd *sdp)
+ return 0;
+
+ fail:
+- freeze_gh.gh_flags |= GL_NOCACHE;
+ gfs2_glock_dq_uninit(&freeze_gh);
+ fail_threads:
+ if (sdp->sd_quotad_process)
+@@ -430,7 +430,7 @@ static int gfs2_lock_fs_check_clean(struct gfs2_sbd *sdp)
+ }
+
+ error = gfs2_glock_nq_init(sdp->sd_freeze_gl, LM_ST_EXCLUSIVE,
+- GL_NOCACHE, &sdp->sd_freeze_gh);
++ LM_FLAG_NOEXP, &sdp->sd_freeze_gh);
+ if (error)
+ goto out;
+
+@@ -613,13 +613,15 @@ int gfs2_make_fs_ro(struct gfs2_sbd *sdp)
+ !gfs2_glock_is_locked_by_me(sdp->sd_freeze_gl)) {
+ if (!log_write_allowed) {
+ error = gfs2_glock_nq_init(sdp->sd_freeze_gl,
+- LM_ST_SHARED, GL_NOCACHE |
+- LM_FLAG_TRY, &freeze_gh);
++ LM_ST_SHARED, LM_FLAG_TRY |
++ LM_FLAG_NOEXP | GL_EXACT,
++ &freeze_gh);
+ if (error == GLR_TRYFAILED)
+ error = 0;
+ } else {
+ error = gfs2_glock_nq_init(sdp->sd_freeze_gl,
+- LM_ST_SHARED, GL_NOCACHE,
++ LM_ST_SHARED,
++ LM_FLAG_NOEXP | GL_EXACT,
+ &freeze_gh);
+ if (error && !gfs2_withdrawn(sdp))
+ return error;
+@@ -761,8 +763,8 @@ void gfs2_freeze_func(struct work_struct *work)
+ struct super_block *sb = sdp->sd_vfs;
+
+ atomic_inc(&sb->s_active);
+- error = gfs2_glock_nq_init(sdp->sd_freeze_gl, LM_ST_SHARED, 0,
+- &freeze_gh);
++ error = gfs2_glock_nq_init(sdp->sd_freeze_gl, LM_ST_SHARED,
++ LM_FLAG_NOEXP | GL_EXACT, &freeze_gh);
+ if (error) {
+ fs_info(sdp, "GFS2: couldn't get freeze lock : %d\n", error);
+ gfs2_assert_withdraw(sdp, 0);
+@@ -774,8 +776,6 @@ void gfs2_freeze_func(struct work_struct *work)
+ error);
+ gfs2_assert_withdraw(sdp, 0);
+ }
+- if (!test_bit(SDF_JOURNAL_LIVE, &sdp->sd_flags))
+- freeze_gh.gh_flags |= GL_NOCACHE;
+ gfs2_glock_dq_uninit(&freeze_gh);
+ }
+ deactivate_super(sb);
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+index ba2184841cb5..51be3a20ade1 100644
+--- a/fs/io_uring.c
++++ b/fs/io_uring.c
+@@ -3884,10 +3884,16 @@ static int io_recvmsg(struct io_kiocb *req, bool force_nonblock)
+
+ ret = __sys_recvmsg_sock(sock, &kmsg->msg, req->sr_msg.msg,
+ kmsg->uaddr, flags);
+- if (force_nonblock && ret == -EAGAIN)
+- return io_setup_async_msg(req, kmsg);
++ if (force_nonblock && ret == -EAGAIN) {
++ ret = io_setup_async_msg(req, kmsg);
++ if (ret != -EAGAIN)
++ kfree(kbuf);
++ return ret;
++ }
+ if (ret == -ERESTARTSYS)
+ ret = -EINTR;
++ if (kbuf)
++ kfree(kbuf);
+ }
+
+ if (kmsg && kmsg->iov != kmsg->fast_iov)
+diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
+index e32717fd1169..2e2dac29a9e9 100644
+--- a/fs/nfs/nfs4proc.c
++++ b/fs/nfs/nfs4proc.c
+@@ -774,6 +774,14 @@ static void nfs4_slot_sequence_acked(struct nfs4_slot *slot,
+ slot->seq_nr_last_acked = seqnr;
+ }
+
++static void nfs4_probe_sequence(struct nfs_client *client, const struct cred *cred,
++ struct nfs4_slot *slot)
++{
++ struct rpc_task *task = _nfs41_proc_sequence(client, cred, slot, true);
++ if (!IS_ERR(task))
++ rpc_put_task_async(task);
++}
++
+ static int nfs41_sequence_process(struct rpc_task *task,
+ struct nfs4_sequence_res *res)
+ {
+@@ -790,6 +798,7 @@ static int nfs41_sequence_process(struct rpc_task *task,
+ goto out;
+
+ session = slot->table->session;
++ clp = session->clp;
+
+ trace_nfs4_sequence_done(session, res);
+
+@@ -804,7 +813,6 @@ static int nfs41_sequence_process(struct rpc_task *task,
+ nfs4_slot_sequence_acked(slot, slot->seq_nr);
+ /* Update the slot's sequence and clientid lease timer */
+ slot->seq_done = 1;
+- clp = session->clp;
+ do_renew_lease(clp, res->sr_timestamp);
+ /* Check sequence flags */
+ nfs41_handle_sequence_flag_errors(clp, res->sr_status_flags,
+@@ -852,10 +860,18 @@ static int nfs41_sequence_process(struct rpc_task *task,
+ /*
+ * Were one or more calls using this slot interrupted?
+ * If the server never received the request, then our
+- * transmitted slot sequence number may be too high.
++ * transmitted slot sequence number may be too high. However,
++ * if the server did receive the request then it might
++ * accidentally give us a reply with a mismatched operation.
++ * We can sort this out by sending a lone sequence operation
++ * to the server on the same slot.
+ */
+ if ((s32)(slot->seq_nr - slot->seq_nr_last_acked) > 1) {
+ slot->seq_nr--;
++ if (task->tk_msg.rpc_proc != &nfs4_procedures[NFSPROC4_CLNT_SEQUENCE]) {
++ nfs4_probe_sequence(clp, task->tk_msg.rpc_cred, slot);
++ res->sr_slot = NULL;
++ }
+ goto retry_nowait;
+ }
+ /*
+diff --git a/fs/overlayfs/export.c b/fs/overlayfs/export.c
+index ed5c1078919c..c19531dc62ef 100644
+--- a/fs/overlayfs/export.c
++++ b/fs/overlayfs/export.c
+@@ -478,7 +478,7 @@ static struct dentry *ovl_lookup_real_inode(struct super_block *sb,
+ if (IS_ERR_OR_NULL(this))
+ return this;
+
+- if (WARN_ON(ovl_dentry_real_at(this, layer->idx) != real)) {
++ if (ovl_dentry_real_at(this, layer->idx) != real) {
+ dput(this);
+ this = ERR_PTR(-EIO);
+ }
+diff --git a/fs/overlayfs/file.c b/fs/overlayfs/file.c
+index 87c362f65448..6804e55db217 100644
+--- a/fs/overlayfs/file.c
++++ b/fs/overlayfs/file.c
+@@ -32,13 +32,16 @@ static char ovl_whatisit(struct inode *inode, struct inode *realinode)
+ return 'm';
+ }
+
++/* No atime modification nor notify on underlying */
++#define OVL_OPEN_FLAGS (O_NOATIME | FMODE_NONOTIFY)
++
+ static struct file *ovl_open_realfile(const struct file *file,
+ struct inode *realinode)
+ {
+ struct inode *inode = file_inode(file);
+ struct file *realfile;
+ const struct cred *old_cred;
+- int flags = file->f_flags | O_NOATIME | FMODE_NONOTIFY;
++ int flags = file->f_flags | OVL_OPEN_FLAGS;
+
+ old_cred = ovl_override_creds(inode->i_sb);
+ realfile = open_with_fake_path(&file->f_path, flags, realinode,
+@@ -59,8 +62,7 @@ static int ovl_change_flags(struct file *file, unsigned int flags)
+ struct inode *inode = file_inode(file);
+ int err;
+
+- /* No atime modificaton on underlying */
+- flags |= O_NOATIME | FMODE_NONOTIFY;
++ flags |= OVL_OPEN_FLAGS;
+
+ /* If some flag changed that cannot be changed then something's amiss */
+ if (WARN_ON((file->f_flags ^ flags) & ~OVL_SETFL_MASK))
+@@ -113,7 +115,7 @@ static int ovl_real_fdget_meta(const struct file *file, struct fd *real,
+ }
+
+ /* Did the flags change since open? */
+- if (unlikely((file->f_flags ^ real->file->f_flags) & ~O_NOATIME))
++ if (unlikely((file->f_flags ^ real->file->f_flags) & ~OVL_OPEN_FLAGS))
+ return ovl_change_flags(real->file, file->f_flags);
+
+ return 0;
+diff --git a/fs/overlayfs/super.c b/fs/overlayfs/super.c
+index 732ad5495c92..72395e42e897 100644
+--- a/fs/overlayfs/super.c
++++ b/fs/overlayfs/super.c
+@@ -1331,6 +1331,18 @@ static bool ovl_lower_uuid_ok(struct ovl_fs *ofs, const uuid_t *uuid)
+ if (!ofs->config.nfs_export && !ofs->upper_mnt)
+ return true;
+
++ /*
++ * We allow using single lower with null uuid for index and nfs_export
++ * for example to support those features with single lower squashfs.
++ * To avoid regressions in setups of overlay with re-formatted lower
++ * squashfs, do not allow decoding origin with lower null uuid unless
++ * user opted-in to one of the new features that require following the
++ * lower inode of non-dir upper.
++ */
++ if (!ofs->config.index && !ofs->config.metacopy && !ofs->config.xino &&
++ uuid_is_null(uuid))
++ return false;
++
+ for (i = 0; i < ofs->numfs; i++) {
+ /*
+ * We use uuid to associate an overlay lower file handle with a
+@@ -1438,14 +1450,23 @@ static int ovl_get_layers(struct super_block *sb, struct ovl_fs *ofs,
+ if (err < 0)
+ goto out;
+
++ /*
++ * Check if lower root conflicts with this overlay layers before
++ * checking if it is in-use as upperdir/workdir of "another"
++ * mount, because we do not bother to check in ovl_is_inuse() if
++ * the upperdir/workdir is in fact in-use by our
++ * upperdir/workdir.
++ */
+ err = ovl_setup_trap(sb, stack[i].dentry, &trap, "lowerdir");
+ if (err)
+ goto out;
+
+ if (ovl_is_inuse(stack[i].dentry)) {
+ err = ovl_report_in_use(ofs, "lowerdir");
+- if (err)
++ if (err) {
++ iput(trap);
+ goto out;
++ }
+ }
+
+ mnt = clone_private_mount(&stack[i]);
+diff --git a/include/dt-bindings/clock/qcom,gcc-msm8998.h b/include/dt-bindings/clock/qcom,gcc-msm8998.h
+index 63e02dc32a0b..6a73a174f049 100644
+--- a/include/dt-bindings/clock/qcom,gcc-msm8998.h
++++ b/include/dt-bindings/clock/qcom,gcc-msm8998.h
+@@ -183,6 +183,7 @@
+ #define GCC_MSS_SNOC_AXI_CLK 174
+ #define GCC_MSS_MNOC_BIMC_AXI_CLK 175
+ #define GCC_BIMC_GFX_CLK 176
++#define UFS_UNIPRO_CORE_CLK_SRC 177
+
+ #define PCIE_0_GDSC 0
+ #define UFS_GDSC 1
+diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
+index 32868fbedc9e..02809e4dd661 100644
+--- a/include/linux/blkdev.h
++++ b/include/linux/blkdev.h
+@@ -585,6 +585,7 @@ struct request_queue {
+ u64 write_hints[BLK_MAX_WRITE_HINTS];
+ };
+
++/* Keep blk_queue_flag_name[] in sync with the definitions below */
+ #define QUEUE_FLAG_STOPPED 0 /* queue is stopped */
+ #define QUEUE_FLAG_DYING 1 /* queue being torn down */
+ #define QUEUE_FLAG_NOMERGES 3 /* disable merge attempts */
+diff --git a/include/linux/bpf.h b/include/linux/bpf.h
+index fd2b2322412d..746bae8624a8 100644
+--- a/include/linux/bpf.h
++++ b/include/linux/bpf.h
+@@ -1444,13 +1444,16 @@ static inline void bpf_map_offload_map_free(struct bpf_map *map)
+ #endif /* CONFIG_NET && CONFIG_BPF_SYSCALL */
+
+ #if defined(CONFIG_BPF_STREAM_PARSER)
+-int sock_map_prog_update(struct bpf_map *map, struct bpf_prog *prog, u32 which);
++int sock_map_prog_update(struct bpf_map *map, struct bpf_prog *prog,
++ struct bpf_prog *old, u32 which);
+ int sock_map_get_from_fd(const union bpf_attr *attr, struct bpf_prog *prog);
++int sock_map_prog_detach(const union bpf_attr *attr, enum bpf_prog_type ptype);
+ void sock_map_unhash(struct sock *sk);
+ void sock_map_close(struct sock *sk, long timeout);
+ #else
+ static inline int sock_map_prog_update(struct bpf_map *map,
+- struct bpf_prog *prog, u32 which)
++ struct bpf_prog *prog,
++ struct bpf_prog *old, u32 which)
+ {
+ return -EOPNOTSUPP;
+ }
+@@ -1460,6 +1463,12 @@ static inline int sock_map_get_from_fd(const union bpf_attr *attr,
+ {
+ return -EINVAL;
+ }
++
++static inline int sock_map_prog_detach(const union bpf_attr *attr,
++ enum bpf_prog_type ptype)
++{
++ return -EOPNOTSUPP;
++}
+ #endif /* CONFIG_BPF_STREAM_PARSER */
+
+ #if defined(CONFIG_INET) && defined(CONFIG_BPF_SYSCALL)
+diff --git a/include/linux/cgroup-defs.h b/include/linux/cgroup-defs.h
+index 52661155f85f..fee0b5547cd0 100644
+--- a/include/linux/cgroup-defs.h
++++ b/include/linux/cgroup-defs.h
+@@ -790,7 +790,9 @@ struct sock_cgroup_data {
+ union {
+ #ifdef __LITTLE_ENDIAN
+ struct {
+- u8 is_data;
++ u8 is_data : 1;
++ u8 no_refcnt : 1;
++ u8 unused : 6;
+ u8 padding;
+ u16 prioidx;
+ u32 classid;
+@@ -800,7 +802,9 @@ struct sock_cgroup_data {
+ u32 classid;
+ u16 prioidx;
+ u8 padding;
+- u8 is_data;
++ u8 unused : 6;
++ u8 no_refcnt : 1;
++ u8 is_data : 1;
+ } __packed;
+ #endif
+ u64 val;
+diff --git a/include/linux/cgroup.h b/include/linux/cgroup.h
+index 4598e4da6b1b..618838c48313 100644
+--- a/include/linux/cgroup.h
++++ b/include/linux/cgroup.h
+@@ -822,6 +822,7 @@ extern spinlock_t cgroup_sk_update_lock;
+
+ void cgroup_sk_alloc_disable(void);
+ void cgroup_sk_alloc(struct sock_cgroup_data *skcd);
++void cgroup_sk_clone(struct sock_cgroup_data *skcd);
+ void cgroup_sk_free(struct sock_cgroup_data *skcd);
+
+ static inline struct cgroup *sock_cgroup_ptr(struct sock_cgroup_data *skcd)
+@@ -835,7 +836,7 @@ static inline struct cgroup *sock_cgroup_ptr(struct sock_cgroup_data *skcd)
+ */
+ v = READ_ONCE(skcd->val);
+
+- if (v & 1)
++ if (v & 3)
+ return &cgrp_dfl_root.cgrp;
+
+ return (struct cgroup *)(unsigned long)v ?: &cgrp_dfl_root.cgrp;
+@@ -847,6 +848,7 @@ static inline struct cgroup *sock_cgroup_ptr(struct sock_cgroup_data *skcd)
+ #else /* CONFIG_CGROUP_DATA */
+
+ static inline void cgroup_sk_alloc(struct sock_cgroup_data *skcd) {}
++static inline void cgroup_sk_clone(struct sock_cgroup_data *skcd) {}
+ static inline void cgroup_sk_free(struct sock_cgroup_data *skcd) {}
+
+ #endif /* CONFIG_CGROUP_DATA */
+diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h
+index 57bcef6f988a..589c4cb03372 100644
+--- a/include/linux/dma-buf.h
++++ b/include/linux/dma-buf.h
+@@ -311,6 +311,7 @@ struct dma_buf {
+ void *vmap_ptr;
+ const char *exp_name;
+ const char *name;
++ spinlock_t name_lock; /* spinlock to protect name access */
+ struct module *owner;
+ struct list_head list_node;
+ void *priv;
+diff --git a/include/linux/if_vlan.h b/include/linux/if_vlan.h
+index b05e855f1ddd..41a518336673 100644
+--- a/include/linux/if_vlan.h
++++ b/include/linux/if_vlan.h
+@@ -25,6 +25,8 @@
+ #define VLAN_ETH_DATA_LEN 1500 /* Max. octets in payload */
+ #define VLAN_ETH_FRAME_LEN 1518 /* Max. octets in frame sans FCS */
+
++#define VLAN_MAX_DEPTH 8 /* Max. number of nested VLAN tags parsed */
++
+ /*
+ * struct vlan_hdr - vlan header
+ * @h_vlan_TCI: priority and VLAN ID
+@@ -577,10 +579,10 @@ static inline int vlan_get_tag(const struct sk_buff *skb, u16 *vlan_tci)
+ * Returns the EtherType of the packet, regardless of whether it is
+ * vlan encapsulated (normal or hardware accelerated) or not.
+ */
+-static inline __be16 __vlan_get_protocol(struct sk_buff *skb, __be16 type,
++static inline __be16 __vlan_get_protocol(const struct sk_buff *skb, __be16 type,
+ int *depth)
+ {
+- unsigned int vlan_depth = skb->mac_len;
++ unsigned int vlan_depth = skb->mac_len, parse_depth = VLAN_MAX_DEPTH;
+
+ /* if type is 802.1Q/AD then the header should already be
+ * present at mac_len - VLAN_HLEN (if mac_len > 0), or at
+@@ -595,13 +597,12 @@ static inline __be16 __vlan_get_protocol(struct sk_buff *skb, __be16 type,
+ vlan_depth = ETH_HLEN;
+ }
+ do {
+- struct vlan_hdr *vh;
++ struct vlan_hdr vhdr, *vh;
+
+- if (unlikely(!pskb_may_pull(skb,
+- vlan_depth + VLAN_HLEN)))
++ vh = skb_header_pointer(skb, vlan_depth, sizeof(vhdr), &vhdr);
++ if (unlikely(!vh || !--parse_depth))
+ return 0;
+
+- vh = (struct vlan_hdr *)(skb->data + vlan_depth);
+ type = vh->h_vlan_encapsulated_proto;
+ vlan_depth += VLAN_HLEN;
+ } while (eth_type_vlan(type));
+@@ -620,11 +621,25 @@ static inline __be16 __vlan_get_protocol(struct sk_buff *skb, __be16 type,
+ * Returns the EtherType of the packet, regardless of whether it is
+ * vlan encapsulated (normal or hardware accelerated) or not.
+ */
+-static inline __be16 vlan_get_protocol(struct sk_buff *skb)
++static inline __be16 vlan_get_protocol(const struct sk_buff *skb)
+ {
+ return __vlan_get_protocol(skb, skb->protocol, NULL);
+ }
+
++/* A getter for the SKB protocol field which will handle VLAN tags consistently
++ * whether VLAN acceleration is enabled or not.
++ */
++static inline __be16 skb_protocol(const struct sk_buff *skb, bool skip_vlan)
++{
++ if (!skip_vlan)
++ /* VLAN acceleration strips the VLAN header from the skb and
++ * moves it to skb->vlan_proto
++ */
++ return skb_vlan_tag_present(skb) ? skb->vlan_proto : skb->protocol;
++
++ return vlan_get_protocol(skb);
++}
++
+ static inline void vlan_set_encap_proto(struct sk_buff *skb,
+ struct vlan_hdr *vhdr)
+ {
+diff --git a/include/linux/input/elan-i2c-ids.h b/include/linux/input/elan-i2c-ids.h
+index 1ecb6b45812c..520858d12680 100644
+--- a/include/linux/input/elan-i2c-ids.h
++++ b/include/linux/input/elan-i2c-ids.h
+@@ -67,8 +67,15 @@ static const struct acpi_device_id elan_acpi_id[] = {
+ { "ELAN062B", 0 },
+ { "ELAN062C", 0 },
+ { "ELAN062D", 0 },
++ { "ELAN062E", 0 }, /* Lenovo V340 Whiskey Lake U */
++ { "ELAN062F", 0 }, /* Lenovo V340 Comet Lake U */
+ { "ELAN0631", 0 },
+ { "ELAN0632", 0 },
++ { "ELAN0633", 0 }, /* Lenovo S145 */
++ { "ELAN0634", 0 }, /* Lenovo V340 Ice lake */
++ { "ELAN0635", 0 }, /* Lenovo V1415-IIL */
++ { "ELAN0636", 0 }, /* Lenovo V1415-Dali */
++ { "ELAN0637", 0 }, /* Lenovo V1415-IGLR */
+ { "ELAN1000", 0 },
+ { }
+ };
+diff --git a/include/linux/serial_core.h b/include/linux/serial_core.h
+index 92f5eba86052..60b1160b6d06 100644
+--- a/include/linux/serial_core.h
++++ b/include/linux/serial_core.h
+@@ -460,10 +460,104 @@ extern void uart_handle_cts_change(struct uart_port *uport,
+ extern void uart_insert_char(struct uart_port *port, unsigned int status,
+ unsigned int overrun, unsigned int ch, unsigned int flag);
+
+-extern int uart_handle_sysrq_char(struct uart_port *port, unsigned int ch);
+-extern int uart_prepare_sysrq_char(struct uart_port *port, unsigned int ch);
+-extern void uart_unlock_and_check_sysrq(struct uart_port *port, unsigned long flags);
+-extern int uart_handle_break(struct uart_port *port);
++#ifdef CONFIG_MAGIC_SYSRQ_SERIAL
++#define SYSRQ_TIMEOUT (HZ * 5)
++
++bool uart_try_toggle_sysrq(struct uart_port *port, unsigned int ch);
++
++static inline int uart_handle_sysrq_char(struct uart_port *port, unsigned int ch)
++{
++ if (!port->has_sysrq || !port->sysrq)
++ return 0;
++
++ if (ch && time_before(jiffies, port->sysrq)) {
++ if (sysrq_mask()) {
++ handle_sysrq(ch);
++ port->sysrq = 0;
++ return 1;
++ }
++ if (uart_try_toggle_sysrq(port, ch))
++ return 1;
++ }
++ port->sysrq = 0;
++
++ return 0;
++}
++
++static inline int uart_prepare_sysrq_char(struct uart_port *port, unsigned int ch)
++{
++ if (!port->has_sysrq || !port->sysrq)
++ return 0;
++
++ if (ch && time_before(jiffies, port->sysrq)) {
++ if (sysrq_mask()) {
++ port->sysrq_ch = ch;
++ port->sysrq = 0;
++ return 1;
++ }
++ if (uart_try_toggle_sysrq(port, ch))
++ return 1;
++ }
++ port->sysrq = 0;
++
++ return 0;
++}
++
++static inline void uart_unlock_and_check_sysrq(struct uart_port *port, unsigned long irqflags)
++{
++ int sysrq_ch;
++
++ if (!port->has_sysrq) {
++ spin_unlock_irqrestore(&port->lock, irqflags);
++ return;
++ }
++
++ sysrq_ch = port->sysrq_ch;
++ port->sysrq_ch = 0;
++
++ spin_unlock_irqrestore(&port->lock, irqflags);
++
++ if (sysrq_ch)
++ handle_sysrq(sysrq_ch);
++}
++#else /* CONFIG_MAGIC_SYSRQ_SERIAL */
++static inline int uart_handle_sysrq_char(struct uart_port *port, unsigned int ch)
++{
++ return 0;
++}
++static inline int uart_prepare_sysrq_char(struct uart_port *port, unsigned int ch)
++{
++ return 0;
++}
++static inline void uart_unlock_and_check_sysrq(struct uart_port *port, unsigned long irqflags)
++{
++ spin_unlock_irqrestore(&port->lock, irqflags);
++}
++#endif /* CONFIG_MAGIC_SYSRQ_SERIAL */
++
++/*
++ * We do the SysRQ and SAK checking like this...
++ */
++static inline int uart_handle_break(struct uart_port *port)
++{
++ struct uart_state *state = port->state;
++
++ if (port->handle_break)
++ port->handle_break(port);
++
++#ifdef CONFIG_MAGIC_SYSRQ_SERIAL
++ if (port->has_sysrq && uart_console(port)) {
++ if (!port->sysrq) {
++ port->sysrq = jiffies + SYSRQ_TIMEOUT;
++ return 1;
++ }
++ port->sysrq = 0;
++ }
++#endif
++ if (port->flags & UPF_SAK)
++ do_SAK(state->port.tty);
++ return 0;
++}
+
+ /*
+ * UART_ENABLE_MS - determine if port should enable modem status irqs
+diff --git a/include/linux/skmsg.h b/include/linux/skmsg.h
+index 08674cd14d5a..1e9ed840b9fc 100644
+--- a/include/linux/skmsg.h
++++ b/include/linux/skmsg.h
+@@ -430,6 +430,19 @@ static inline void psock_set_prog(struct bpf_prog **pprog,
+ bpf_prog_put(prog);
+ }
+
++static inline int psock_replace_prog(struct bpf_prog **pprog,
++ struct bpf_prog *prog,
++ struct bpf_prog *old)
++{
++ if (cmpxchg(pprog, old, prog) != old)
++ return -ENOENT;
++
++ if (old)
++ bpf_prog_put(old);
++
++ return 0;
++}
++
+ static inline void psock_progs_drop(struct sk_psock_progs *progs)
+ {
+ psock_set_prog(&progs->msg_parser, NULL);
+diff --git a/include/net/dst.h b/include/net/dst.h
+index 07adfacd8088..852d8fb36ab7 100644
+--- a/include/net/dst.h
++++ b/include/net/dst.h
+@@ -400,7 +400,15 @@ static inline struct neighbour *dst_neigh_lookup(const struct dst_entry *dst, co
+ static inline struct neighbour *dst_neigh_lookup_skb(const struct dst_entry *dst,
+ struct sk_buff *skb)
+ {
+- struct neighbour *n = dst->ops->neigh_lookup(dst, skb, NULL);
++ struct neighbour *n = NULL;
++
++ /* The packets from tunnel devices (eg bareudp) may have only
++ * metadata in the dst pointer of skb. Hence a pointer check of
++ * neigh_lookup is needed.
++ */
++ if (dst->ops->neigh_lookup)
++ n = dst->ops->neigh_lookup(dst, skb, NULL);
++
+ return IS_ERR(n) ? NULL : n;
+ }
+
+diff --git a/include/net/genetlink.h b/include/net/genetlink.h
+index 74950663bb00..6e5f1e1aa822 100644
+--- a/include/net/genetlink.h
++++ b/include/net/genetlink.h
+@@ -35,13 +35,6 @@ struct genl_info;
+ * do additional, common, filtering and return an error
+ * @post_doit: called after an operation's doit callback, it may
+ * undo operations done by pre_doit, for example release locks
+- * @mcast_bind: a socket bound to the given multicast group (which
+- * is given as the offset into the groups array)
+- * @mcast_unbind: a socket was unbound from the given multicast group.
+- * Note that unbind() will not be called symmetrically if the
+- * generic netlink family is removed while there are still open
+- * sockets.
+- * @attrbuf: buffer to store parsed attributes (private)
+ * @mcgrps: multicast groups used by this family
+ * @n_mcgrps: number of multicast groups
+ * @mcgrp_offset: starting number of multicast group IDs in this family
+@@ -64,9 +57,6 @@ struct genl_family {
+ void (*post_doit)(const struct genl_ops *ops,
+ struct sk_buff *skb,
+ struct genl_info *info);
+- int (*mcast_bind)(struct net *net, int group);
+- void (*mcast_unbind)(struct net *net, int group);
+- struct nlattr ** attrbuf; /* private */
+ const struct genl_ops * ops;
+ const struct genl_multicast_group *mcgrps;
+ unsigned int n_ops;
+diff --git a/include/net/inet_ecn.h b/include/net/inet_ecn.h
+index 0f0d1efe06dd..e1eaf1780288 100644
+--- a/include/net/inet_ecn.h
++++ b/include/net/inet_ecn.h
+@@ -4,6 +4,7 @@
+
+ #include <linux/ip.h>
+ #include <linux/skbuff.h>
++#include <linux/if_vlan.h>
+
+ #include <net/inet_sock.h>
+ #include <net/dsfield.h>
+@@ -172,7 +173,7 @@ static inline void ipv6_copy_dscp(unsigned int dscp, struct ipv6hdr *inner)
+
+ static inline int INET_ECN_set_ce(struct sk_buff *skb)
+ {
+- switch (skb->protocol) {
++ switch (skb_protocol(skb, true)) {
+ case cpu_to_be16(ETH_P_IP):
+ if (skb_network_header(skb) + sizeof(struct iphdr) <=
+ skb_tail_pointer(skb))
+@@ -191,7 +192,7 @@ static inline int INET_ECN_set_ce(struct sk_buff *skb)
+
+ static inline int INET_ECN_set_ect1(struct sk_buff *skb)
+ {
+- switch (skb->protocol) {
++ switch (skb_protocol(skb, true)) {
+ case cpu_to_be16(ETH_P_IP):
+ if (skb_network_header(skb) + sizeof(struct iphdr) <=
+ skb_tail_pointer(skb))
+@@ -272,12 +273,16 @@ static inline int IP_ECN_decapsulate(const struct iphdr *oiph,
+ {
+ __u8 inner;
+
+- if (skb->protocol == htons(ETH_P_IP))
++ switch (skb_protocol(skb, true)) {
++ case htons(ETH_P_IP):
+ inner = ip_hdr(skb)->tos;
+- else if (skb->protocol == htons(ETH_P_IPV6))
++ break;
++ case htons(ETH_P_IPV6):
+ inner = ipv6_get_dsfield(ipv6_hdr(skb));
+- else
++ break;
++ default:
+ return 0;
++ }
+
+ return INET_ECN_decapsulate(skb, oiph->tos, inner);
+ }
+@@ -287,12 +292,16 @@ static inline int IP6_ECN_decapsulate(const struct ipv6hdr *oipv6h,
+ {
+ __u8 inner;
+
+- if (skb->protocol == htons(ETH_P_IP))
++ switch (skb_protocol(skb, true)) {
++ case htons(ETH_P_IP):
+ inner = ip_hdr(skb)->tos;
+- else if (skb->protocol == htons(ETH_P_IPV6))
++ break;
++ case htons(ETH_P_IPV6):
+ inner = ipv6_get_dsfield(ipv6_hdr(skb));
+- else
++ break;
++ default:
+ return 0;
++ }
+
+ return INET_ECN_decapsulate(skb, ipv6_get_dsfield(oipv6h), inner);
+ }
+diff --git a/include/net/pkt_sched.h b/include/net/pkt_sched.h
+index 9092e697059e..ac8c890a2657 100644
+--- a/include/net/pkt_sched.h
++++ b/include/net/pkt_sched.h
+@@ -136,17 +136,6 @@ static inline void qdisc_run(struct Qdisc *q)
+ }
+ }
+
+-static inline __be16 tc_skb_protocol(const struct sk_buff *skb)
+-{
+- /* We need to take extra care in case the skb came via
+- * vlan accelerated path. In that case, use skb->vlan_proto
+- * as the original vlan header was already stripped.
+- */
+- if (skb_vlan_tag_present(skb))
+- return skb->vlan_proto;
+- return skb->protocol;
+-}
+-
+ /* Calculate maximal size of packet seen by hard_start_xmit
+ routine of this device.
+ */
+diff --git a/include/trace/events/rxrpc.h b/include/trace/events/rxrpc.h
+index ba9efdc848f9..059b6e45a028 100644
+--- a/include/trace/events/rxrpc.h
++++ b/include/trace/events/rxrpc.h
+@@ -400,7 +400,7 @@ enum rxrpc_tx_point {
+ EM(rxrpc_cong_begin_retransmission, " Retrans") \
+ EM(rxrpc_cong_cleared_nacks, " Cleared") \
+ EM(rxrpc_cong_new_low_nack, " NewLowN") \
+- EM(rxrpc_cong_no_change, "") \
++ EM(rxrpc_cong_no_change, " -") \
+ EM(rxrpc_cong_progress, " Progres") \
+ EM(rxrpc_cong_retransmit_again, " ReTxAgn") \
+ EM(rxrpc_cong_rtt_window_end, " RttWinE") \
+diff --git a/include/uapi/linux/vboxguest.h b/include/uapi/linux/vboxguest.h
+index 9cec58a6a5ea..f79d7abe27db 100644
+--- a/include/uapi/linux/vboxguest.h
++++ b/include/uapi/linux/vboxguest.h
+@@ -103,7 +103,7 @@ VMMDEV_ASSERT_SIZE(vbg_ioctl_driver_version_info, 24 + 20);
+
+
+ /* IOCTL to perform a VMM Device request larger then 1KB. */
+-#define VBG_IOCTL_VMMDEV_REQUEST_BIG _IOC(_IOC_READ | _IOC_WRITE, 'V', 3, 0)
++#define VBG_IOCTL_VMMDEV_REQUEST_BIG _IO('V', 3)
+
+
+ /** VBG_IOCTL_HGCM_CONNECT data structure. */
+@@ -198,7 +198,7 @@ struct vbg_ioctl_log {
+ } u;
+ };
+
+-#define VBG_IOCTL_LOG(s) _IOC(_IOC_READ | _IOC_WRITE, 'V', 9, s)
++#define VBG_IOCTL_LOG(s) _IO('V', 9)
+
+
+ /** VBG_IOCTL_WAIT_FOR_EVENTS data structure. */
+diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
+index 0e4d99cfac93..f9e95ea2883b 100644
+--- a/kernel/bpf/syscall.c
++++ b/kernel/bpf/syscall.c
+@@ -2695,7 +2695,7 @@ static int bpf_prog_detach(const union bpf_attr *attr)
+ switch (ptype) {
+ case BPF_PROG_TYPE_SK_MSG:
+ case BPF_PROG_TYPE_SK_SKB:
+- return sock_map_get_from_fd(attr, NULL);
++ return sock_map_prog_detach(attr, ptype);
+ case BPF_PROG_TYPE_LIRC_MODE2:
+ return lirc_prog_detach(attr);
+ case BPF_PROG_TYPE_FLOW_DISSECTOR:
+diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c
+index 06b5ea9d899d..9b46a7604e7b 100644
+--- a/kernel/cgroup/cgroup.c
++++ b/kernel/cgroup/cgroup.c
+@@ -6447,18 +6447,8 @@ void cgroup_sk_alloc_disable(void)
+
+ void cgroup_sk_alloc(struct sock_cgroup_data *skcd)
+ {
+- if (cgroup_sk_alloc_disabled)
+- return;
+-
+- /* Socket clone path */
+- if (skcd->val) {
+- /*
+- * We might be cloning a socket which is left in an empty
+- * cgroup and the cgroup might have already been rmdir'd.
+- * Don't use cgroup_get_live().
+- */
+- cgroup_get(sock_cgroup_ptr(skcd));
+- cgroup_bpf_get(sock_cgroup_ptr(skcd));
++ if (cgroup_sk_alloc_disabled) {
++ skcd->no_refcnt = 1;
+ return;
+ }
+
+@@ -6483,10 +6473,27 @@ void cgroup_sk_alloc(struct sock_cgroup_data *skcd)
+ rcu_read_unlock();
+ }
+
++void cgroup_sk_clone(struct sock_cgroup_data *skcd)
++{
++ if (skcd->val) {
++ if (skcd->no_refcnt)
++ return;
++ /*
++ * We might be cloning a socket which is left in an empty
++ * cgroup and the cgroup might have already been rmdir'd.
++ * Don't use cgroup_get_live().
++ */
++ cgroup_get(sock_cgroup_ptr(skcd));
++ cgroup_bpf_get(sock_cgroup_ptr(skcd));
++ }
++}
++
+ void cgroup_sk_free(struct sock_cgroup_data *skcd)
+ {
+ struct cgroup *cgrp = sock_cgroup_ptr(skcd);
+
++ if (skcd->no_refcnt)
++ return;
+ cgroup_bpf_put(cgrp);
+ cgroup_put(cgrp);
+ }
+diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c
+index 453a8a0f4804..dc58fd245e79 100644
+--- a/kernel/irq/manage.c
++++ b/kernel/irq/manage.c
+@@ -195,9 +195,9 @@ void irq_set_thread_affinity(struct irq_desc *desc)
+ set_bit(IRQTF_AFFINITY, &action->thread_flags);
+ }
+
++#ifdef CONFIG_GENERIC_IRQ_EFFECTIVE_AFF_MASK
+ static void irq_validate_effective_affinity(struct irq_data *data)
+ {
+-#ifdef CONFIG_GENERIC_IRQ_EFFECTIVE_AFF_MASK
+ const struct cpumask *m = irq_data_get_effective_affinity_mask(data);
+ struct irq_chip *chip = irq_data_get_irq_chip(data);
+
+@@ -205,9 +205,19 @@ static void irq_validate_effective_affinity(struct irq_data *data)
+ return;
+ pr_warn_once("irq_chip %s did not update eff. affinity mask of irq %u\n",
+ chip->name, data->irq);
+-#endif
+ }
+
++static inline void irq_init_effective_affinity(struct irq_data *data,
++ const struct cpumask *mask)
++{
++ cpumask_copy(irq_data_get_effective_affinity_mask(data), mask);
++}
++#else
++static inline void irq_validate_effective_affinity(struct irq_data *data) { }
++static inline void irq_init_effective_affinity(struct irq_data *data,
++ const struct cpumask *mask) { }
++#endif
++
+ int irq_do_set_affinity(struct irq_data *data, const struct cpumask *mask,
+ bool force)
+ {
+@@ -304,6 +314,26 @@ static int irq_try_set_affinity(struct irq_data *data,
+ return ret;
+ }
+
++static bool irq_set_affinity_deactivated(struct irq_data *data,
++ const struct cpumask *mask, bool force)
++{
++ struct irq_desc *desc = irq_data_to_desc(data);
++
++ /*
++ * If the interrupt is not yet activated, just store the affinity
++ * mask and do not call the chip driver at all. On activation the
++ * driver has to make sure anyway that the interrupt is in a
++ * useable state so startup works.
++ */
++ if (!IS_ENABLED(CONFIG_IRQ_DOMAIN_HIERARCHY) || irqd_is_activated(data))
++ return false;
++
++ cpumask_copy(desc->irq_common_data.affinity, mask);
++ irq_init_effective_affinity(data, mask);
++ irqd_set(data, IRQD_AFFINITY_SET);
++ return true;
++}
++
+ int irq_set_affinity_locked(struct irq_data *data, const struct cpumask *mask,
+ bool force)
+ {
+@@ -314,6 +344,9 @@ int irq_set_affinity_locked(struct irq_data *data, const struct cpumask *mask,
+ if (!chip || !chip->irq_set_affinity)
+ return -EINVAL;
+
++ if (irq_set_affinity_deactivated(data, mask, force))
++ return 0;
++
+ if (irq_can_move_pcntxt(data) && !irqd_is_setaffinity_pending(data)) {
+ ret = irq_try_set_affinity(data, mask, force);
+ } else {
+diff --git a/kernel/sched/core.c b/kernel/sched/core.c
+index 8034434b1040..a7ef76a62699 100644
+--- a/kernel/sched/core.c
++++ b/kernel/sched/core.c
+@@ -2876,6 +2876,7 @@ int sched_fork(unsigned long clone_flags, struct task_struct *p)
+ * Silence PROVE_RCU.
+ */
+ raw_spin_lock_irqsave(&p->pi_lock, flags);
++ rseq_migrate(p);
+ /*
+ * We're setting the CPU for the first time, we don't migrate,
+ * so use __set_task_cpu().
+@@ -2940,6 +2941,7 @@ void wake_up_new_task(struct task_struct *p)
+ * as we're not fully set-up yet.
+ */
+ p->recent_used_cpu = task_cpu(p);
++ rseq_migrate(p);
+ __set_task_cpu(p, select_task_rq(p, task_cpu(p), SD_BALANCE_FORK, 0));
+ #endif
+ rq = __task_rq_lock(p, &rf);
+diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
+index 5725199b32dc..5c31875a7d9d 100644
+--- a/kernel/sched/fair.c
++++ b/kernel/sched/fair.c
+@@ -4033,7 +4033,11 @@ static inline void update_misfit_status(struct task_struct *p, struct rq *rq)
+ return;
+ }
+
+- rq->misfit_task_load = task_h_load(p);
++ /*
++ * Make sure that misfit_task_load will not be null even if
++ * task_h_load() returns 0.
++ */
++ rq->misfit_task_load = max_t(unsigned long, task_h_load(p), 1);
+ }
+
+ #else /* CONFIG_SMP */
+@@ -7633,7 +7637,14 @@ static int detach_tasks(struct lb_env *env)
+
+ switch (env->migration_type) {
+ case migrate_load:
+- load = task_h_load(p);
++ /*
++ * Depending of the number of CPUs and tasks and the
++ * cgroup hierarchy, task_h_load() can return a null
++ * value. Make sure that env->imbalance decreases
++ * otherwise detach_tasks() will stop only after
++ * detaching up to loop_max tasks.
++ */
++ load = max_t(unsigned long, task_h_load(p), 1);
+
+ if (sched_feat(LB_MIN) &&
+ load < 16 && !env->sd->nr_balance_failed)
+diff --git a/kernel/time/timer.c b/kernel/time/timer.c
+index a5221abb4594..03c9fc395ab1 100644
+--- a/kernel/time/timer.c
++++ b/kernel/time/timer.c
+@@ -522,8 +522,8 @@ static int calc_wheel_index(unsigned long expires, unsigned long clk)
+ * Force expire obscene large timeouts to expire at the
+ * capacity limit of the wheel.
+ */
+- if (expires >= WHEEL_TIMEOUT_CUTOFF)
+- expires = WHEEL_TIMEOUT_MAX;
++ if (delta >= WHEEL_TIMEOUT_CUTOFF)
++ expires = clk + WHEEL_TIMEOUT_MAX;
+
+ idx = calc_index(expires, LVL_DEPTH - 1);
+ }
+@@ -585,7 +585,15 @@ trigger_dyntick_cpu(struct timer_base *base, struct timer_list *timer)
+ * Set the next expiry time and kick the CPU so it can reevaluate the
+ * wheel:
+ */
+- base->next_expiry = timer->expires;
++ if (time_before(timer->expires, base->clk)) {
++ /*
++ * Prevent from forward_timer_base() moving the base->clk
++ * backward
++ */
++ base->next_expiry = base->clk;
++ } else {
++ base->next_expiry = timer->expires;
++ }
+ wake_up_nohz_cpu(base->cpu);
+ }
+
+@@ -897,10 +905,13 @@ static inline void forward_timer_base(struct timer_base *base)
+ * If the next expiry value is > jiffies, then we fast forward to
+ * jiffies otherwise we forward to the next expiry value.
+ */
+- if (time_after(base->next_expiry, jnow))
++ if (time_after(base->next_expiry, jnow)) {
+ base->clk = jnow;
+- else
++ } else {
++ if (WARN_ON_ONCE(time_before(base->next_expiry, base->clk)))
++ return;
+ base->clk = base->next_expiry;
++ }
+ #endif
+ }
+
+diff --git a/mm/memory.c b/mm/memory.c
+index f703fe8c8346..22d218bc56c8 100644
+--- a/mm/memory.c
++++ b/mm/memory.c
+@@ -1501,7 +1501,7 @@ out:
+ }
+
+ #ifdef pte_index
+-static int insert_page_in_batch_locked(struct mm_struct *mm, pmd_t *pmd,
++static int insert_page_in_batch_locked(struct mm_struct *mm, pte_t *pte,
+ unsigned long addr, struct page *page, pgprot_t prot)
+ {
+ int err;
+@@ -1509,8 +1509,9 @@ static int insert_page_in_batch_locked(struct mm_struct *mm, pmd_t *pmd,
+ if (!page_count(page))
+ return -EINVAL;
+ err = validate_page_before_insert(page);
+- return err ? err : insert_page_into_pte_locked(
+- mm, pte_offset_map(pmd, addr), addr, page, prot);
++ if (err)
++ return err;
++ return insert_page_into_pte_locked(mm, pte, addr, page, prot);
+ }
+
+ /* insert_pages() amortizes the cost of spinlock operations
+@@ -1520,7 +1521,8 @@ static int insert_pages(struct vm_area_struct *vma, unsigned long addr,
+ struct page **pages, unsigned long *num, pgprot_t prot)
+ {
+ pmd_t *pmd = NULL;
+- spinlock_t *pte_lock = NULL;
++ pte_t *start_pte, *pte;
++ spinlock_t *pte_lock;
+ struct mm_struct *const mm = vma->vm_mm;
+ unsigned long curr_page_idx = 0;
+ unsigned long remaining_pages_total = *num;
+@@ -1539,18 +1541,17 @@ more:
+ ret = -ENOMEM;
+ if (pte_alloc(mm, pmd))
+ goto out;
+- pte_lock = pte_lockptr(mm, pmd);
+
+ while (pages_to_write_in_pmd) {
+ int pte_idx = 0;
+ const int batch_size = min_t(int, pages_to_write_in_pmd, 8);
+
+- spin_lock(pte_lock);
+- for (; pte_idx < batch_size; ++pte_idx) {
+- int err = insert_page_in_batch_locked(mm, pmd,
++ start_pte = pte_offset_map_lock(mm, pmd, addr, &pte_lock);
++ for (pte = start_pte; pte_idx < batch_size; ++pte, ++pte_idx) {
++ int err = insert_page_in_batch_locked(mm, pte,
+ addr, pages[curr_page_idx], prot);
+ if (unlikely(err)) {
+- spin_unlock(pte_lock);
++ pte_unmap_unlock(start_pte, pte_lock);
+ ret = err;
+ remaining_pages_total -= pte_idx;
+ goto out;
+@@ -1558,7 +1559,7 @@ more:
+ addr += PAGE_SIZE;
+ ++curr_page_idx;
+ }
+- spin_unlock(pte_lock);
++ pte_unmap_unlock(start_pte, pte_lock);
+ pages_to_write_in_pmd -= batch_size;
+ remaining_pages_total -= batch_size;
+ }
+diff --git a/net/bridge/br_multicast.c b/net/bridge/br_multicast.c
+index 83490bf73a13..4c4a93abde68 100644
+--- a/net/bridge/br_multicast.c
++++ b/net/bridge/br_multicast.c
+@@ -1007,7 +1007,7 @@ static int br_ip6_multicast_mld2_report(struct net_bridge *br,
+ nsrcs_offset = len + offsetof(struct mld2_grec, grec_nsrcs);
+
+ if (skb_transport_offset(skb) + ipv6_transport_len(skb) <
+- nsrcs_offset + sizeof(_nsrcs))
++ nsrcs_offset + sizeof(__nsrcs))
+ return -EINVAL;
+
+ _nsrcs = skb_header_pointer(skb, nsrcs_offset,
+diff --git a/net/ceph/osd_client.c b/net/ceph/osd_client.c
+index 1d4973f8cd7a..99f35b7f765d 100644
+--- a/net/ceph/osd_client.c
++++ b/net/ceph/osd_client.c
+@@ -445,6 +445,7 @@ static void target_copy(struct ceph_osd_request_target *dest,
+ dest->size = src->size;
+ dest->min_size = src->min_size;
+ dest->sort_bitwise = src->sort_bitwise;
++ dest->recovery_deletes = src->recovery_deletes;
+
+ dest->flags = src->flags;
+ dest->paused = src->paused;
+diff --git a/net/core/filter.c b/net/core/filter.c
+index 45fa65a28983..cebbb6ba9ed9 100644
+--- a/net/core/filter.c
++++ b/net/core/filter.c
+@@ -5724,12 +5724,16 @@ BPF_CALL_1(bpf_skb_ecn_set_ce, struct sk_buff *, skb)
+ {
+ unsigned int iphdr_len;
+
+- if (skb->protocol == cpu_to_be16(ETH_P_IP))
++ switch (skb_protocol(skb, true)) {
++ case cpu_to_be16(ETH_P_IP):
+ iphdr_len = sizeof(struct iphdr);
+- else if (skb->protocol == cpu_to_be16(ETH_P_IPV6))
++ break;
++ case cpu_to_be16(ETH_P_IPV6):
+ iphdr_len = sizeof(struct ipv6hdr);
+- else
++ break;
++ default:
+ return 0;
++ }
+
+ if (skb_headlen(skb) < iphdr_len)
+ return 0;
+diff --git a/net/core/sock.c b/net/core/sock.c
+index afe4a62adf8f..bc6fe4114374 100644
+--- a/net/core/sock.c
++++ b/net/core/sock.c
+@@ -1837,7 +1837,7 @@ struct sock *sk_clone_lock(const struct sock *sk, const gfp_t priority)
+ /* sk->sk_memcg will be populated at accept() time */
+ newsk->sk_memcg = NULL;
+
+- cgroup_sk_alloc(&newsk->sk_cgrp_data);
++ cgroup_sk_clone(&newsk->sk_cgrp_data);
+
+ rcu_read_lock();
+ filter = rcu_dereference(sk->sk_filter);
+diff --git a/net/core/sock_map.c b/net/core/sock_map.c
+index 591457fcbd02..4d0aba233bb4 100644
+--- a/net/core/sock_map.c
++++ b/net/core/sock_map.c
+@@ -70,11 +70,49 @@ int sock_map_get_from_fd(const union bpf_attr *attr, struct bpf_prog *prog)
+ struct fd f;
+ int ret;
+
++ if (attr->attach_flags || attr->replace_bpf_fd)
++ return -EINVAL;
++
+ f = fdget(ufd);
+ map = __bpf_map_get(f);
+ if (IS_ERR(map))
+ return PTR_ERR(map);
+- ret = sock_map_prog_update(map, prog, attr->attach_type);
++ ret = sock_map_prog_update(map, prog, NULL, attr->attach_type);
++ fdput(f);
++ return ret;
++}
++
++int sock_map_prog_detach(const union bpf_attr *attr, enum bpf_prog_type ptype)
++{
++ u32 ufd = attr->target_fd;
++ struct bpf_prog *prog;
++ struct bpf_map *map;
++ struct fd f;
++ int ret;
++
++ if (attr->attach_flags || attr->replace_bpf_fd)
++ return -EINVAL;
++
++ f = fdget(ufd);
++ map = __bpf_map_get(f);
++ if (IS_ERR(map))
++ return PTR_ERR(map);
++
++ prog = bpf_prog_get(attr->attach_bpf_fd);
++ if (IS_ERR(prog)) {
++ ret = PTR_ERR(prog);
++ goto put_map;
++ }
++
++ if (prog->type != ptype) {
++ ret = -EINVAL;
++ goto put_prog;
++ }
++
++ ret = sock_map_prog_update(map, NULL, prog, attr->attach_type);
++put_prog:
++ bpf_prog_put(prog);
++put_map:
+ fdput(f);
+ return ret;
+ }
+@@ -1189,27 +1227,32 @@ static struct sk_psock_progs *sock_map_progs(struct bpf_map *map)
+ }
+
+ int sock_map_prog_update(struct bpf_map *map, struct bpf_prog *prog,
+- u32 which)
++ struct bpf_prog *old, u32 which)
+ {
+ struct sk_psock_progs *progs = sock_map_progs(map);
++ struct bpf_prog **pprog;
+
+ if (!progs)
+ return -EOPNOTSUPP;
+
+ switch (which) {
+ case BPF_SK_MSG_VERDICT:
+- psock_set_prog(&progs->msg_parser, prog);
++ pprog = &progs->msg_parser;
+ break;
+ case BPF_SK_SKB_STREAM_PARSER:
+- psock_set_prog(&progs->skb_parser, prog);
++ pprog = &progs->skb_parser;
+ break;
+ case BPF_SK_SKB_STREAM_VERDICT:
+- psock_set_prog(&progs->skb_verdict, prog);
++ pprog = &progs->skb_verdict;
+ break;
+ default:
+ return -EOPNOTSUPP;
+ }
+
++ if (old)
++ return psock_replace_prog(pprog, prog, old);
++
++ psock_set_prog(pprog, prog);
+ return 0;
+ }
+
+diff --git a/net/ethtool/netlink.c b/net/ethtool/netlink.c
+index ed5357210193..0f4e2e106799 100644
+--- a/net/ethtool/netlink.c
++++ b/net/ethtool/netlink.c
+@@ -376,10 +376,17 @@ err_dev:
+ }
+
+ static int ethnl_default_dump_one(struct sk_buff *skb, struct net_device *dev,
+- const struct ethnl_dump_ctx *ctx)
++ const struct ethnl_dump_ctx *ctx,
++ struct netlink_callback *cb)
+ {
++ void *ehdr;
+ int ret;
+
++ ehdr = genlmsg_put(skb, NETLINK_CB(cb->skb).portid, cb->nlh->nlmsg_seq,
++ &ethtool_genl_family, 0, ctx->ops->reply_cmd);
++ if (!ehdr)
++ return -EMSGSIZE;
++
+ ethnl_init_reply_data(ctx->reply_data, ctx->ops, dev);
+ rtnl_lock();
+ ret = ctx->ops->prepare_data(ctx->req_info, ctx->reply_data, NULL);
+@@ -395,6 +402,10 @@ out:
+ if (ctx->ops->cleanup_data)
+ ctx->ops->cleanup_data(ctx->reply_data);
+ ctx->reply_data->dev = NULL;
++ if (ret < 0)
++ genlmsg_cancel(skb, ehdr);
++ else
++ genlmsg_end(skb, ehdr);
+ return ret;
+ }
+
+@@ -411,7 +422,6 @@ static int ethnl_default_dumpit(struct sk_buff *skb,
+ int s_idx = ctx->pos_idx;
+ int h, idx = 0;
+ int ret = 0;
+- void *ehdr;
+
+ rtnl_lock();
+ for (h = ctx->pos_hash; h < NETDEV_HASHENTRIES; h++, s_idx = 0) {
+@@ -431,26 +441,15 @@ restart_chain:
+ dev_hold(dev);
+ rtnl_unlock();
+
+- ehdr = genlmsg_put(skb, NETLINK_CB(cb->skb).portid,
+- cb->nlh->nlmsg_seq,
+- &ethtool_genl_family, 0,
+- ctx->ops->reply_cmd);
+- if (!ehdr) {
+- dev_put(dev);
+- ret = -EMSGSIZE;
+- goto out;
+- }
+- ret = ethnl_default_dump_one(skb, dev, ctx);
++ ret = ethnl_default_dump_one(skb, dev, ctx, cb);
+ dev_put(dev);
+ if (ret < 0) {
+- genlmsg_cancel(skb, ehdr);
+ if (ret == -EOPNOTSUPP)
+ goto lock_and_cont;
+ if (likely(skb->len))
+ ret = skb->len;
+ goto out;
+ }
+- genlmsg_end(skb, ehdr);
+ lock_and_cont:
+ rtnl_lock();
+ if (net->dev_base_seq != seq) {
+diff --git a/net/hsr/hsr_device.c b/net/hsr/hsr_device.c
+index ef100cfd2ac1..56a11341f99c 100644
+--- a/net/hsr/hsr_device.c
++++ b/net/hsr/hsr_device.c
+@@ -417,6 +417,7 @@ int hsr_dev_finalize(struct net_device *hsr_dev, struct net_device *slave[2],
+ unsigned char multicast_spec, u8 protocol_version,
+ struct netlink_ext_ack *extack)
+ {
++ bool unregister = false;
+ struct hsr_priv *hsr;
+ int res;
+
+@@ -468,25 +469,27 @@ int hsr_dev_finalize(struct net_device *hsr_dev, struct net_device *slave[2],
+ if (res)
+ goto err_unregister;
+
++ unregister = true;
++
+ res = hsr_add_port(hsr, slave[0], HSR_PT_SLAVE_A, extack);
+ if (res)
+- goto err_add_slaves;
++ goto err_unregister;
+
+ res = hsr_add_port(hsr, slave[1], HSR_PT_SLAVE_B, extack);
+ if (res)
+- goto err_add_slaves;
++ goto err_unregister;
+
+ hsr_debugfs_init(hsr, hsr_dev);
+ mod_timer(&hsr->prune_timer, jiffies + msecs_to_jiffies(PRUNE_PERIOD));
+
+ return 0;
+
+-err_add_slaves:
+- unregister_netdevice(hsr_dev);
+ err_unregister:
+ hsr_del_ports(hsr);
+ err_add_master:
+ hsr_del_self_node(hsr);
+
++ if (unregister)
++ unregister_netdevice(hsr_dev);
+ return res;
+ }
+diff --git a/net/ipv4/icmp.c b/net/ipv4/icmp.c
+index fc61f51d87a3..ca591051c656 100644
+--- a/net/ipv4/icmp.c
++++ b/net/ipv4/icmp.c
+@@ -427,7 +427,7 @@ static void icmp_reply(struct icmp_bxm *icmp_param, struct sk_buff *skb)
+
+ ipcm_init(&ipc);
+ inet->tos = ip_hdr(skb)->tos;
+- sk->sk_mark = mark;
++ ipc.sockc.mark = mark;
+ daddr = ipc.addr = ip_hdr(skb)->saddr;
+ saddr = fib_compute_spec_dst(skb);
+
+@@ -710,10 +710,10 @@ void __icmp_send(struct sk_buff *skb_in, int type, int code, __be32 info,
+ icmp_param.skb = skb_in;
+ icmp_param.offset = skb_network_offset(skb_in);
+ inet_sk(sk)->tos = tos;
+- sk->sk_mark = mark;
+ ipcm_init(&ipc);
+ ipc.addr = iph->saddr;
+ ipc.opt = &icmp_param.replyopts.opt;
++ ipc.sockc.mark = mark;
+
+ rt = icmp_route_lookup(net, &fl4, skb_in, iph, saddr, tos, mark,
+ type, code, &icmp_param);
+diff --git a/net/ipv4/ip_output.c b/net/ipv4/ip_output.c
+index 090d3097ee15..17206677d503 100644
+--- a/net/ipv4/ip_output.c
++++ b/net/ipv4/ip_output.c
+@@ -1702,7 +1702,7 @@ void ip_send_unicast_reply(struct sock *sk, struct sk_buff *skb,
+ sk->sk_protocol = ip_hdr(skb)->protocol;
+ sk->sk_bound_dev_if = arg->bound_dev_if;
+ sk->sk_sndbuf = sysctl_wmem_default;
+- sk->sk_mark = fl4.flowi4_mark;
++ ipc.sockc.mark = fl4.flowi4_mark;
+ err = ip_append_data(sk, &fl4, ip_reply_glue_bits, arg->iov->iov_base,
+ len, 0, &ipc, &rt, MSG_DONTWAIT);
+ if (unlikely(err)) {
+diff --git a/net/ipv4/ping.c b/net/ipv4/ping.c
+index 535427292194..df6fbefe44d4 100644
+--- a/net/ipv4/ping.c
++++ b/net/ipv4/ping.c
+@@ -786,6 +786,9 @@ static int ping_v4_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
+ inet_sk_flowi_flags(sk), faddr, saddr, 0, 0,
+ sk->sk_uid);
+
++ fl4.fl4_icmp_type = user_icmph.type;
++ fl4.fl4_icmp_code = user_icmph.code;
++
+ security_sk_classify_flow(sk, flowi4_to_flowi(&fl4));
+ rt = ip_route_output_flow(net, &fl4, sk);
+ if (IS_ERR(rt)) {
+diff --git a/net/ipv4/route.c b/net/ipv4/route.c
+index b73f540fa19b..abe12caf2451 100644
+--- a/net/ipv4/route.c
++++ b/net/ipv4/route.c
+@@ -2027,7 +2027,7 @@ int ip_route_use_hint(struct sk_buff *skb, __be32 daddr, __be32 saddr,
+ const struct sk_buff *hint)
+ {
+ struct in_device *in_dev = __in_dev_get_rcu(dev);
+- struct rtable *rt = (struct rtable *)hint;
++ struct rtable *rt = skb_rtable(hint);
+ struct net *net = dev_net(dev);
+ int err = -EINVAL;
+ u32 tag = 0;
+diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
+index dd401757eea1..eee18259a24e 100644
+--- a/net/ipv4/tcp.c
++++ b/net/ipv4/tcp.c
+@@ -2635,6 +2635,9 @@ int tcp_disconnect(struct sock *sk, int flags)
+ tp->window_clamp = 0;
+ tp->delivered = 0;
+ tp->delivered_ce = 0;
++ if (icsk->icsk_ca_ops->release)
++ icsk->icsk_ca_ops->release(sk);
++ memset(icsk->icsk_ca_priv, 0, sizeof(icsk->icsk_ca_priv));
+ tcp_set_ca_state(sk, TCP_CA_Open);
+ tp->is_sack_reneg = 0;
+ tcp_clear_retrans(tp);
+@@ -3090,10 +3093,7 @@ static int do_tcp_setsockopt(struct sock *sk, int level,
+ #ifdef CONFIG_TCP_MD5SIG
+ case TCP_MD5SIG:
+ case TCP_MD5SIG_EXT:
+- if ((1 << sk->sk_state) & (TCPF_CLOSE | TCPF_LISTEN))
+- err = tp->af_specific->md5_parse(sk, optname, optval, optlen);
+- else
+- err = -EINVAL;
++ err = tp->af_specific->md5_parse(sk, optname, optval, optlen);
+ break;
+ #endif
+ case TCP_USER_TIMEOUT:
+@@ -3877,10 +3877,13 @@ EXPORT_SYMBOL(tcp_md5_hash_skb_data);
+
+ int tcp_md5_hash_key(struct tcp_md5sig_pool *hp, const struct tcp_md5sig_key *key)
+ {
++ u8 keylen = READ_ONCE(key->keylen); /* paired with WRITE_ONCE() in tcp_md5_do_add */
+ struct scatterlist sg;
+
+- sg_init_one(&sg, key->key, key->keylen);
+- ahash_request_set_crypt(hp->md5_req, &sg, NULL, key->keylen);
++ sg_init_one(&sg, key->key, keylen);
++ ahash_request_set_crypt(hp->md5_req, &sg, NULL, keylen);
++
++ /* tcp_md5_do_add() might change key->key under us */
+ return crypto_ahash_update(hp->md5_req);
+ }
+ EXPORT_SYMBOL(tcp_md5_hash_key);
+diff --git a/net/ipv4/tcp_cong.c b/net/ipv4/tcp_cong.c
+index 3172e31987be..62878cf26d9c 100644
+--- a/net/ipv4/tcp_cong.c
++++ b/net/ipv4/tcp_cong.c
+@@ -197,7 +197,7 @@ static void tcp_reinit_congestion_control(struct sock *sk,
+ icsk->icsk_ca_setsockopt = 1;
+ memset(icsk->icsk_ca_priv, 0, sizeof(icsk->icsk_ca_priv));
+
+- if (sk->sk_state != TCP_CLOSE)
++ if (!((1 << sk->sk_state) & (TCPF_CLOSE | TCPF_LISTEN)))
+ tcp_init_congestion_control(sk);
+ }
+
+diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
+index 1fa009999f57..31c58e00d25b 100644
+--- a/net/ipv4/tcp_input.c
++++ b/net/ipv4/tcp_input.c
+@@ -4570,6 +4570,7 @@ static void tcp_data_queue_ofo(struct sock *sk, struct sk_buff *skb)
+
+ if (unlikely(tcp_try_rmem_schedule(sk, skb, skb->truesize))) {
+ NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPOFODROP);
++ sk->sk_data_ready(sk);
+ tcp_drop(sk, skb);
+ return;
+ }
+@@ -4816,6 +4817,7 @@ queue_and_out:
+ sk_forced_mem_schedule(sk, skb->truesize);
+ else if (tcp_try_rmem_schedule(sk, skb, skb->truesize)) {
+ NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPRCVQDROP);
++ sk->sk_data_ready(sk);
+ goto drop;
+ }
+
+diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
+index 83a5d24e13b8..4c2f2f2107a9 100644
+--- a/net/ipv4/tcp_ipv4.c
++++ b/net/ipv4/tcp_ipv4.c
+@@ -1103,9 +1103,18 @@ int tcp_md5_do_add(struct sock *sk, const union tcp_md5_addr *addr,
+
+ key = tcp_md5_do_lookup_exact(sk, addr, family, prefixlen, l3index);
+ if (key) {
+- /* Pre-existing entry - just update that one. */
++ /* Pre-existing entry - just update that one.
++ * Note that the key might be used concurrently.
++ */
+ memcpy(key->key, newkey, newkeylen);
+- key->keylen = newkeylen;
++
++ /* Pairs with READ_ONCE() in tcp_md5_hash_key().
++ * Also note that a reader could catch new key->keylen value
++ * but old key->key[], this is the reason we use __GFP_ZERO
++ * at sock_kmalloc() time below these lines.
++ */
++ WRITE_ONCE(key->keylen, newkeylen);
++
+ return 0;
+ }
+
+@@ -1121,7 +1130,7 @@ int tcp_md5_do_add(struct sock *sk, const union tcp_md5_addr *addr,
+ rcu_assign_pointer(tp->md5sig_info, md5sig);
+ }
+
+- key = sock_kmalloc(sk, sizeof(*key), gfp);
++ key = sock_kmalloc(sk, sizeof(*key), gfp | __GFP_ZERO);
+ if (!key)
+ return -ENOMEM;
+ if (!tcp_alloc_md5sig_pool()) {
+diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
+index 2f45cde168c4..bee2f9b8b8a1 100644
+--- a/net/ipv4/tcp_output.c
++++ b/net/ipv4/tcp_output.c
+@@ -700,7 +700,8 @@ static unsigned int tcp_synack_options(const struct sock *sk,
+ unsigned int mss, struct sk_buff *skb,
+ struct tcp_out_options *opts,
+ const struct tcp_md5sig_key *md5,
+- struct tcp_fastopen_cookie *foc)
++ struct tcp_fastopen_cookie *foc,
++ enum tcp_synack_type synack_type)
+ {
+ struct inet_request_sock *ireq = inet_rsk(req);
+ unsigned int remaining = MAX_TCP_OPTION_SPACE;
+@@ -715,7 +716,8 @@ static unsigned int tcp_synack_options(const struct sock *sk,
+ * rather than TS in order to fit in better with old,
+ * buggy kernels, but that was deemed to be unnecessary.
+ */
+- ireq->tstamp_ok &= !ireq->sack_ok;
++ if (synack_type != TCP_SYNACK_COOKIE)
++ ireq->tstamp_ok &= !ireq->sack_ok;
+ }
+ #endif
+
+@@ -3388,7 +3390,7 @@ struct sk_buff *tcp_make_synack(const struct sock *sk, struct dst_entry *dst,
+ #endif
+ skb_set_hash(skb, tcp_rsk(req)->txhash, PKT_HASH_TYPE_L4);
+ tcp_header_size = tcp_synack_options(sk, req, mss, skb, &opts, md5,
+- foc) + sizeof(*th);
++ foc, synack_type) + sizeof(*th);
+
+ skb_push(skb, tcp_header_size);
+ skb_reset_transport_header(skb);
+diff --git a/net/ipv6/icmp.c b/net/ipv6/icmp.c
+index fc5000370030..9df8737ae0d3 100644
+--- a/net/ipv6/icmp.c
++++ b/net/ipv6/icmp.c
+@@ -566,7 +566,6 @@ static void icmp6_send(struct sk_buff *skb, u8 type, u8 code, __u32 info,
+ fl6.mp_hash = rt6_multipath_hash(net, &fl6, skb, NULL);
+ security_skb_classify_flow(skb, flowi6_to_flowi(&fl6));
+
+- sk->sk_mark = mark;
+ np = inet6_sk(sk);
+
+ if (!icmpv6_xrlim_allow(sk, type, &fl6))
+@@ -583,6 +582,7 @@ static void icmp6_send(struct sk_buff *skb, u8 type, u8 code, __u32 info,
+ fl6.flowi6_oif = np->ucast_oif;
+
+ ipcm6_init_sk(&ipc6, np);
++ ipc6.sockc.mark = mark;
+ fl6.flowlabel = ip6_make_flowinfo(ipc6.tclass, fl6.flowlabel);
+
+ dst = icmpv6_route_lookup(net, skb, sk, &fl6);
+@@ -751,7 +751,6 @@ static void icmpv6_echo_reply(struct sk_buff *skb)
+ sk = icmpv6_xmit_lock(net);
+ if (!sk)
+ goto out_bh_enable;
+- sk->sk_mark = mark;
+ np = inet6_sk(sk);
+
+ if (!fl6.flowi6_oif && ipv6_addr_is_multicast(&fl6.daddr))
+@@ -779,6 +778,7 @@ static void icmpv6_echo_reply(struct sk_buff *skb)
+ ipcm6_init_sk(&ipc6, np);
+ ipc6.hlimit = ip6_sk_dst_hoplimit(np, &fl6, dst);
+ ipc6.tclass = ipv6_get_dsfield(ipv6_hdr(skb));
++ ipc6.sockc.mark = mark;
+
+ if (ip6_append_data(sk, icmpv6_getfrag, &msg,
+ skb->len + sizeof(struct icmp6hdr),
+diff --git a/net/ipv6/route.c b/net/ipv6/route.c
+index ff847a324220..e8a184acf668 100644
+--- a/net/ipv6/route.c
++++ b/net/ipv6/route.c
+@@ -431,9 +431,12 @@ void fib6_select_path(const struct net *net, struct fib6_result *res,
+ struct fib6_info *sibling, *next_sibling;
+ struct fib6_info *match = res->f6i;
+
+- if ((!match->fib6_nsiblings && !match->nh) || have_oif_match)
++ if (!match->nh && (!match->fib6_nsiblings || have_oif_match))
+ goto out;
+
++ if (match->nh && have_oif_match && res->nh)
++ return;
++
+ /* We might have already computed the hash for ICMPv6 errors. In such
+ * case it will always be non-zero. Otherwise now is the time to do it.
+ */
+@@ -3399,7 +3402,7 @@ static bool fib6_is_reject(u32 flags, struct net_device *dev, int addr_type)
+ if ((flags & RTF_REJECT) ||
+ (dev && (dev->flags & IFF_LOOPBACK) &&
+ !(addr_type & IPV6_ADDR_LOOPBACK) &&
+- !(flags & RTF_LOCAL)))
++ !(flags & (RTF_ANYCAST | RTF_LOCAL))))
+ return true;
+
+ return false;
+diff --git a/net/l2tp/l2tp_core.c b/net/l2tp/l2tp_core.c
+index 6d7ef78c88af..6434d17e6e8e 100644
+--- a/net/l2tp/l2tp_core.c
++++ b/net/l2tp/l2tp_core.c
+@@ -1028,6 +1028,7 @@ static void l2tp_xmit_core(struct l2tp_session *session, struct sk_buff *skb,
+
+ /* Queue the packet to IP for output */
+ skb->ignore_df = 1;
++ skb_dst_drop(skb);
+ #if IS_ENABLED(CONFIG_IPV6)
+ if (l2tp_sk_is_v6(tunnel->sock))
+ error = inet6_csk_xmit(tunnel->sock, skb, NULL);
+@@ -1099,10 +1100,6 @@ int l2tp_xmit_skb(struct l2tp_session *session, struct sk_buff *skb, int hdr_len
+ goto out_unlock;
+ }
+
+- /* Get routing info from the tunnel socket */
+- skb_dst_drop(skb);
+- skb_dst_set(skb, sk_dst_check(sk, 0));
+-
+ inet = inet_sk(sk);
+ fl = &inet->cork.fl;
+ switch (tunnel->encap) {
+diff --git a/net/llc/af_llc.c b/net/llc/af_llc.c
+index 54fb8d452a7b..6e53e43c1907 100644
+--- a/net/llc/af_llc.c
++++ b/net/llc/af_llc.c
+@@ -273,6 +273,10 @@ static int llc_ui_autobind(struct socket *sock, struct sockaddr_llc *addr)
+
+ if (!sock_flag(sk, SOCK_ZAPPED))
+ goto out;
++ if (!addr->sllc_arphrd)
++ addr->sllc_arphrd = ARPHRD_ETHER;
++ if (addr->sllc_arphrd != ARPHRD_ETHER)
++ goto out;
+ rc = -ENODEV;
+ if (sk->sk_bound_dev_if) {
+ llc->dev = dev_get_by_index(&init_net, sk->sk_bound_dev_if);
+@@ -328,7 +332,9 @@ static int llc_ui_bind(struct socket *sock, struct sockaddr *uaddr, int addrlen)
+ if (unlikely(!sock_flag(sk, SOCK_ZAPPED) || addrlen != sizeof(*addr)))
+ goto out;
+ rc = -EAFNOSUPPORT;
+- if (unlikely(addr->sllc_family != AF_LLC))
++ if (!addr->sllc_arphrd)
++ addr->sllc_arphrd = ARPHRD_ETHER;
++ if (unlikely(addr->sllc_family != AF_LLC || addr->sllc_arphrd != ARPHRD_ETHER))
+ goto out;
+ dprintk("%s: binding %02X\n", __func__, addr->sllc_sap);
+ rc = -ENODEV;
+@@ -336,8 +342,6 @@ static int llc_ui_bind(struct socket *sock, struct sockaddr *uaddr, int addrlen)
+ if (sk->sk_bound_dev_if) {
+ llc->dev = dev_get_by_index_rcu(&init_net, sk->sk_bound_dev_if);
+ if (llc->dev) {
+- if (!addr->sllc_arphrd)
+- addr->sllc_arphrd = llc->dev->type;
+ if (is_zero_ether_addr(addr->sllc_mac))
+ memcpy(addr->sllc_mac, llc->dev->dev_addr,
+ IFHWADDRLEN);
+diff --git a/net/mptcp/options.c b/net/mptcp/options.c
+index 2430bbfa3405..2b3ed0c5199d 100644
+--- a/net/mptcp/options.c
++++ b/net/mptcp/options.c
+@@ -449,9 +449,9 @@ static bool mptcp_established_options_mp(struct sock *sk, struct sk_buff *skb,
+ }
+
+ static void mptcp_write_data_fin(struct mptcp_subflow_context *subflow,
+- struct mptcp_ext *ext)
++ struct sk_buff *skb, struct mptcp_ext *ext)
+ {
+- if (!ext->use_map) {
++ if (!ext->use_map || !skb->len) {
+ /* RFC6824 requires a DSS mapping with specific values
+ * if DATA_FIN is set but no data payload is mapped
+ */
+@@ -503,7 +503,7 @@ static bool mptcp_established_options_dss(struct sock *sk, struct sk_buff *skb,
+ opts->ext_copy = *mpext;
+
+ if (skb && tcp_fin && subflow->data_fin_tx_enable)
+- mptcp_write_data_fin(subflow, &opts->ext_copy);
++ mptcp_write_data_fin(subflow, skb, &opts->ext_copy);
+ ret = true;
+ }
+
+diff --git a/net/netlink/genetlink.c b/net/netlink/genetlink.c
+index 9c1c27f3a089..cfcc518d77c0 100644
+--- a/net/netlink/genetlink.c
++++ b/net/netlink/genetlink.c
+@@ -351,22 +351,11 @@ int genl_register_family(struct genl_family *family)
+ start = end = GENL_ID_VFS_DQUOT;
+ }
+
+- if (family->maxattr && !family->parallel_ops) {
+- family->attrbuf = kmalloc_array(family->maxattr + 1,
+- sizeof(struct nlattr *),
+- GFP_KERNEL);
+- if (family->attrbuf == NULL) {
+- err = -ENOMEM;
+- goto errout_locked;
+- }
+- } else
+- family->attrbuf = NULL;
+-
+ family->id = idr_alloc_cyclic(&genl_fam_idr, family,
+ start, end + 1, GFP_KERNEL);
+ if (family->id < 0) {
+ err = family->id;
+- goto errout_free;
++ goto errout_locked;
+ }
+
+ err = genl_validate_assign_mc_groups(family);
+@@ -385,8 +374,6 @@ int genl_register_family(struct genl_family *family)
+
+ errout_remove:
+ idr_remove(&genl_fam_idr, family->id);
+-errout_free:
+- kfree(family->attrbuf);
+ errout_locked:
+ genl_unlock_all();
+ return err;
+@@ -419,8 +406,6 @@ int genl_unregister_family(const struct genl_family *family)
+ atomic_read(&genl_sk_destructing_cnt) == 0);
+ genl_unlock();
+
+- kfree(family->attrbuf);
+-
+ genl_ctrl_event(CTRL_CMD_DELFAMILY, family, NULL, 0);
+
+ return 0;
+@@ -485,30 +470,23 @@ genl_family_rcv_msg_attrs_parse(const struct genl_family *family,
+ if (!family->maxattr)
+ return NULL;
+
+- if (family->parallel_ops) {
+- attrbuf = kmalloc_array(family->maxattr + 1,
+- sizeof(struct nlattr *), GFP_KERNEL);
+- if (!attrbuf)
+- return ERR_PTR(-ENOMEM);
+- } else {
+- attrbuf = family->attrbuf;
+- }
++ attrbuf = kmalloc_array(family->maxattr + 1,
++ sizeof(struct nlattr *), GFP_KERNEL);
++ if (!attrbuf)
++ return ERR_PTR(-ENOMEM);
+
+ err = __nlmsg_parse(nlh, hdrlen, attrbuf, family->maxattr,
+ family->policy, validate, extack);
+ if (err) {
+- if (family->parallel_ops)
+- kfree(attrbuf);
++ kfree(attrbuf);
+ return ERR_PTR(err);
+ }
+ return attrbuf;
+ }
+
+-static void genl_family_rcv_msg_attrs_free(const struct genl_family *family,
+- struct nlattr **attrbuf)
++static void genl_family_rcv_msg_attrs_free(struct nlattr **attrbuf)
+ {
+- if (family->parallel_ops)
+- kfree(attrbuf);
++ kfree(attrbuf);
+ }
+
+ struct genl_start_context {
+@@ -542,7 +520,7 @@ static int genl_start(struct netlink_callback *cb)
+ no_attrs:
+ info = genl_dumpit_info_alloc();
+ if (!info) {
+- genl_family_rcv_msg_attrs_free(ctx->family, attrs);
++ genl_family_rcv_msg_attrs_free(attrs);
+ return -ENOMEM;
+ }
+ info->family = ctx->family;
+@@ -559,7 +537,7 @@ no_attrs:
+ }
+
+ if (rc) {
+- genl_family_rcv_msg_attrs_free(info->family, info->attrs);
++ genl_family_rcv_msg_attrs_free(info->attrs);
+ genl_dumpit_info_free(info);
+ cb->data = NULL;
+ }
+@@ -588,7 +566,7 @@ static int genl_lock_done(struct netlink_callback *cb)
+ rc = ops->done(cb);
+ genl_unlock();
+ }
+- genl_family_rcv_msg_attrs_free(info->family, info->attrs);
++ genl_family_rcv_msg_attrs_free(info->attrs);
+ genl_dumpit_info_free(info);
+ return rc;
+ }
+@@ -601,7 +579,7 @@ static int genl_parallel_done(struct netlink_callback *cb)
+
+ if (ops->done)
+ rc = ops->done(cb);
+- genl_family_rcv_msg_attrs_free(info->family, info->attrs);
++ genl_family_rcv_msg_attrs_free(info->attrs);
+ genl_dumpit_info_free(info);
+ return rc;
+ }
+@@ -694,7 +672,7 @@ static int genl_family_rcv_msg_doit(const struct genl_family *family,
+ family->post_doit(ops, skb, &info);
+
+ out:
+- genl_family_rcv_msg_attrs_free(family, attrbuf);
++ genl_family_rcv_msg_attrs_free(attrbuf);
+
+ return err;
+ }
+@@ -1088,60 +1066,11 @@ static struct genl_family genl_ctrl __ro_after_init = {
+ .netnsok = true,
+ };
+
+-static int genl_bind(struct net *net, int group)
+-{
+- struct genl_family *f;
+- int err = -ENOENT;
+- unsigned int id;
+-
+- down_read(&cb_lock);
+-
+- idr_for_each_entry(&genl_fam_idr, f, id) {
+- if (group >= f->mcgrp_offset &&
+- group < f->mcgrp_offset + f->n_mcgrps) {
+- int fam_grp = group - f->mcgrp_offset;
+-
+- if (!f->netnsok && net != &init_net)
+- err = -ENOENT;
+- else if (f->mcast_bind)
+- err = f->mcast_bind(net, fam_grp);
+- else
+- err = 0;
+- break;
+- }
+- }
+- up_read(&cb_lock);
+-
+- return err;
+-}
+-
+-static void genl_unbind(struct net *net, int group)
+-{
+- struct genl_family *f;
+- unsigned int id;
+-
+- down_read(&cb_lock);
+-
+- idr_for_each_entry(&genl_fam_idr, f, id) {
+- if (group >= f->mcgrp_offset &&
+- group < f->mcgrp_offset + f->n_mcgrps) {
+- int fam_grp = group - f->mcgrp_offset;
+-
+- if (f->mcast_unbind)
+- f->mcast_unbind(net, fam_grp);
+- break;
+- }
+- }
+- up_read(&cb_lock);
+-}
+-
+ static int __net_init genl_pernet_init(struct net *net)
+ {
+ struct netlink_kernel_cfg cfg = {
+ .input = genl_rcv,
+ .flags = NL_CFG_F_NONROOT_RECV,
+- .bind = genl_bind,
+- .unbind = genl_unbind,
+ };
+
+ /* we'll bump the group number right afterwards */
+diff --git a/net/qrtr/qrtr.c b/net/qrtr/qrtr.c
+index 7eccbbf6f8ad..24a8c3c6da0d 100644
+--- a/net/qrtr/qrtr.c
++++ b/net/qrtr/qrtr.c
+@@ -166,6 +166,7 @@ static void __qrtr_node_release(struct kref *kref)
+ {
+ struct qrtr_node *node = container_of(kref, struct qrtr_node, ref);
+ struct radix_tree_iter iter;
++ struct qrtr_tx_flow *flow;
+ unsigned long flags;
+ void __rcu **slot;
+
+@@ -181,8 +182,9 @@ static void __qrtr_node_release(struct kref *kref)
+
+ /* Free tx flow counters */
+ radix_tree_for_each_slot(slot, &node->qrtr_tx_flow, &iter, 0) {
++ flow = *slot;
+ radix_tree_iter_delete(&node->qrtr_tx_flow, &iter, slot);
+- kfree(*slot);
++ kfree(flow);
+ }
+ kfree(node);
+ }
+diff --git a/net/sched/act_connmark.c b/net/sched/act_connmark.c
+index 43a243081e7d..f901421b0634 100644
+--- a/net/sched/act_connmark.c
++++ b/net/sched/act_connmark.c
+@@ -43,17 +43,20 @@ static int tcf_connmark_act(struct sk_buff *skb, const struct tc_action *a,
+ tcf_lastuse_update(&ca->tcf_tm);
+ bstats_update(&ca->tcf_bstats, skb);
+
+- if (skb->protocol == htons(ETH_P_IP)) {
++ switch (skb_protocol(skb, true)) {
++ case htons(ETH_P_IP):
+ if (skb->len < sizeof(struct iphdr))
+ goto out;
+
+ proto = NFPROTO_IPV4;
+- } else if (skb->protocol == htons(ETH_P_IPV6)) {
++ break;
++ case htons(ETH_P_IPV6):
+ if (skb->len < sizeof(struct ipv6hdr))
+ goto out;
+
+ proto = NFPROTO_IPV6;
+- } else {
++ break;
++ default:
+ goto out;
+ }
+
+diff --git a/net/sched/act_csum.c b/net/sched/act_csum.c
+index cb8608f0a77a..c60674cf25c4 100644
+--- a/net/sched/act_csum.c
++++ b/net/sched/act_csum.c
+@@ -587,7 +587,7 @@ static int tcf_csum_act(struct sk_buff *skb, const struct tc_action *a,
+ goto drop;
+
+ update_flags = params->update_flags;
+- protocol = tc_skb_protocol(skb);
++ protocol = skb_protocol(skb, false);
+ again:
+ switch (protocol) {
+ case cpu_to_be16(ETH_P_IP):
+diff --git a/net/sched/act_ct.c b/net/sched/act_ct.c
+index 20577355235a..6a114f80e54b 100644
+--- a/net/sched/act_ct.c
++++ b/net/sched/act_ct.c
+@@ -622,7 +622,7 @@ static u8 tcf_ct_skb_nf_family(struct sk_buff *skb)
+ {
+ u8 family = NFPROTO_UNSPEC;
+
+- switch (skb->protocol) {
++ switch (skb_protocol(skb, true)) {
+ case htons(ETH_P_IP):
+ family = NFPROTO_IPV4;
+ break;
+@@ -746,6 +746,7 @@ static int ct_nat_execute(struct sk_buff *skb, struct nf_conn *ct,
+ const struct nf_nat_range2 *range,
+ enum nf_nat_manip_type maniptype)
+ {
++ __be16 proto = skb_protocol(skb, true);
+ int hooknum, err = NF_ACCEPT;
+
+ /* See HOOK2MANIP(). */
+@@ -757,14 +758,13 @@ static int ct_nat_execute(struct sk_buff *skb, struct nf_conn *ct,
+ switch (ctinfo) {
+ case IP_CT_RELATED:
+ case IP_CT_RELATED_REPLY:
+- if (skb->protocol == htons(ETH_P_IP) &&
++ if (proto == htons(ETH_P_IP) &&
+ ip_hdr(skb)->protocol == IPPROTO_ICMP) {
+ if (!nf_nat_icmp_reply_translation(skb, ct, ctinfo,
+ hooknum))
+ err = NF_DROP;
+ goto out;
+- } else if (IS_ENABLED(CONFIG_IPV6) &&
+- skb->protocol == htons(ETH_P_IPV6)) {
++ } else if (IS_ENABLED(CONFIG_IPV6) && proto == htons(ETH_P_IPV6)) {
+ __be16 frag_off;
+ u8 nexthdr = ipv6_hdr(skb)->nexthdr;
+ int hdrlen = ipv6_skip_exthdr(skb,
+@@ -1559,4 +1559,3 @@ MODULE_AUTHOR("Yossi Kuperman <yossiku@mellanox.com>");
+ MODULE_AUTHOR("Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>");
+ MODULE_DESCRIPTION("Connection tracking action");
+ MODULE_LICENSE("GPL v2");
+-
+diff --git a/net/sched/act_ctinfo.c b/net/sched/act_ctinfo.c
+index 19649623493b..b5042f3ea079 100644
+--- a/net/sched/act_ctinfo.c
++++ b/net/sched/act_ctinfo.c
+@@ -96,19 +96,22 @@ static int tcf_ctinfo_act(struct sk_buff *skb, const struct tc_action *a,
+ action = READ_ONCE(ca->tcf_action);
+
+ wlen = skb_network_offset(skb);
+- if (tc_skb_protocol(skb) == htons(ETH_P_IP)) {
++ switch (skb_protocol(skb, true)) {
++ case htons(ETH_P_IP):
+ wlen += sizeof(struct iphdr);
+ if (!pskb_may_pull(skb, wlen))
+ goto out;
+
+ proto = NFPROTO_IPV4;
+- } else if (tc_skb_protocol(skb) == htons(ETH_P_IPV6)) {
++ break;
++ case htons(ETH_P_IPV6):
+ wlen += sizeof(struct ipv6hdr);
+ if (!pskb_may_pull(skb, wlen))
+ goto out;
+
+ proto = NFPROTO_IPV6;
+- } else {
++ break;
++ default:
+ goto out;
+ }
+
+diff --git a/net/sched/act_mpls.c b/net/sched/act_mpls.c
+index be3f215cd027..8118e2640979 100644
+--- a/net/sched/act_mpls.c
++++ b/net/sched/act_mpls.c
+@@ -82,7 +82,7 @@ static int tcf_mpls_act(struct sk_buff *skb, const struct tc_action *a,
+ goto drop;
+ break;
+ case TCA_MPLS_ACT_PUSH:
+- new_lse = tcf_mpls_get_lse(NULL, p, !eth_p_mpls(skb->protocol));
++ new_lse = tcf_mpls_get_lse(NULL, p, !eth_p_mpls(skb_protocol(skb, true)));
+ if (skb_mpls_push(skb, new_lse, p->tcfm_proto, mac_len,
+ skb->dev && skb->dev->type == ARPHRD_ETHER))
+ goto drop;
+diff --git a/net/sched/act_skbedit.c b/net/sched/act_skbedit.c
+index b125b2be4467..b2b3faa57294 100644
+--- a/net/sched/act_skbedit.c
++++ b/net/sched/act_skbedit.c
+@@ -41,7 +41,7 @@ static int tcf_skbedit_act(struct sk_buff *skb, const struct tc_action *a,
+ if (params->flags & SKBEDIT_F_INHERITDSFIELD) {
+ int wlen = skb_network_offset(skb);
+
+- switch (tc_skb_protocol(skb)) {
++ switch (skb_protocol(skb, true)) {
+ case htons(ETH_P_IP):
+ wlen += sizeof(struct iphdr);
+ if (!pskb_may_pull(skb, wlen))
+diff --git a/net/sched/cls_api.c b/net/sched/cls_api.c
+index 0a7ecc292bd3..58d469a66896 100644
+--- a/net/sched/cls_api.c
++++ b/net/sched/cls_api.c
+@@ -1589,7 +1589,7 @@ static inline int __tcf_classify(struct sk_buff *skb,
+ reclassify:
+ #endif
+ for (; tp; tp = rcu_dereference_bh(tp->next)) {
+- __be16 protocol = tc_skb_protocol(skb);
++ __be16 protocol = skb_protocol(skb, false);
+ int err;
+
+ if (tp->protocol != protocol &&
+diff --git a/net/sched/cls_flow.c b/net/sched/cls_flow.c
+index 80ae7b9fa90a..ab53a93b2f2b 100644
+--- a/net/sched/cls_flow.c
++++ b/net/sched/cls_flow.c
+@@ -80,7 +80,7 @@ static u32 flow_get_dst(const struct sk_buff *skb, const struct flow_keys *flow)
+ if (dst)
+ return ntohl(dst);
+
+- return addr_fold(skb_dst(skb)) ^ (__force u16) tc_skb_protocol(skb);
++ return addr_fold(skb_dst(skb)) ^ (__force u16)skb_protocol(skb, true);
+ }
+
+ static u32 flow_get_proto(const struct sk_buff *skb,
+@@ -104,7 +104,7 @@ static u32 flow_get_proto_dst(const struct sk_buff *skb,
+ if (flow->ports.ports)
+ return ntohs(flow->ports.dst);
+
+- return addr_fold(skb_dst(skb)) ^ (__force u16) tc_skb_protocol(skb);
++ return addr_fold(skb_dst(skb)) ^ (__force u16)skb_protocol(skb, true);
+ }
+
+ static u32 flow_get_iif(const struct sk_buff *skb)
+@@ -151,7 +151,7 @@ static u32 flow_get_nfct(const struct sk_buff *skb)
+ static u32 flow_get_nfct_src(const struct sk_buff *skb,
+ const struct flow_keys *flow)
+ {
+- switch (tc_skb_protocol(skb)) {
++ switch (skb_protocol(skb, true)) {
+ case htons(ETH_P_IP):
+ return ntohl(CTTUPLE(skb, src.u3.ip));
+ case htons(ETH_P_IPV6):
+@@ -164,7 +164,7 @@ fallback:
+ static u32 flow_get_nfct_dst(const struct sk_buff *skb,
+ const struct flow_keys *flow)
+ {
+- switch (tc_skb_protocol(skb)) {
++ switch (skb_protocol(skb, true)) {
+ case htons(ETH_P_IP):
+ return ntohl(CTTUPLE(skb, dst.u3.ip));
+ case htons(ETH_P_IPV6):
+diff --git a/net/sched/cls_flower.c b/net/sched/cls_flower.c
+index 74a0febcafb8..3b93d95d2a56 100644
+--- a/net/sched/cls_flower.c
++++ b/net/sched/cls_flower.c
+@@ -312,7 +312,7 @@ static int fl_classify(struct sk_buff *skb, const struct tcf_proto *tp,
+ /* skb_flow_dissect() does not set n_proto in case an unknown
+ * protocol, so do it rather here.
+ */
+- skb_key.basic.n_proto = skb->protocol;
++ skb_key.basic.n_proto = skb_protocol(skb, false);
+ skb_flow_dissect_tunnel_info(skb, &mask->dissector, &skb_key);
+ skb_flow_dissect_ct(skb, &mask->dissector, &skb_key,
+ fl_ct_info_to_flower_map,
+diff --git a/net/sched/em_ipset.c b/net/sched/em_ipset.c
+index df00566d327d..c95cf86fb431 100644
+--- a/net/sched/em_ipset.c
++++ b/net/sched/em_ipset.c
+@@ -59,7 +59,7 @@ static int em_ipset_match(struct sk_buff *skb, struct tcf_ematch *em,
+ };
+ int ret, network_offset;
+
+- switch (tc_skb_protocol(skb)) {
++ switch (skb_protocol(skb, true)) {
+ case htons(ETH_P_IP):
+ state.pf = NFPROTO_IPV4;
+ if (!pskb_network_may_pull(skb, sizeof(struct iphdr)))
+diff --git a/net/sched/em_ipt.c b/net/sched/em_ipt.c
+index eecfe072c508..9405b4d88002 100644
+--- a/net/sched/em_ipt.c
++++ b/net/sched/em_ipt.c
+@@ -212,7 +212,7 @@ static int em_ipt_match(struct sk_buff *skb, struct tcf_ematch *em,
+ struct nf_hook_state state;
+ int ret;
+
+- switch (tc_skb_protocol(skb)) {
++ switch (skb_protocol(skb, true)) {
+ case htons(ETH_P_IP):
+ if (!pskb_network_may_pull(skb, sizeof(struct iphdr)))
+ return 0;
+diff --git a/net/sched/em_meta.c b/net/sched/em_meta.c
+index d99966a55c84..46254968d390 100644
+--- a/net/sched/em_meta.c
++++ b/net/sched/em_meta.c
+@@ -195,7 +195,7 @@ META_COLLECTOR(int_priority)
+ META_COLLECTOR(int_protocol)
+ {
+ /* Let userspace take care of the byte ordering */
+- dst->value = tc_skb_protocol(skb);
++ dst->value = skb_protocol(skb, false);
+ }
+
+ META_COLLECTOR(int_pkttype)
+diff --git a/net/sched/sch_atm.c b/net/sched/sch_atm.c
+index ee12ca9f55b4..1c281cc81f57 100644
+--- a/net/sched/sch_atm.c
++++ b/net/sched/sch_atm.c
+@@ -553,16 +553,16 @@ static int atm_tc_init(struct Qdisc *sch, struct nlattr *opt,
+ if (!p->link.q)
+ p->link.q = &noop_qdisc;
+ pr_debug("atm_tc_init: link (%p) qdisc %p\n", &p->link, p->link.q);
++ p->link.vcc = NULL;
++ p->link.sock = NULL;
++ p->link.common.classid = sch->handle;
++ p->link.ref = 1;
+
+ err = tcf_block_get(&p->link.block, &p->link.filter_list, sch,
+ extack);
+ if (err)
+ return err;
+
+- p->link.vcc = NULL;
+- p->link.sock = NULL;
+- p->link.common.classid = sch->handle;
+- p->link.ref = 1;
+ tasklet_init(&p->task, sch_atm_dequeue, (unsigned long)sch);
+ return 0;
+ }
+diff --git a/net/sched/sch_cake.c b/net/sched/sch_cake.c
+index 9475fa81ea7f..9bb2b8f73692 100644
+--- a/net/sched/sch_cake.c
++++ b/net/sched/sch_cake.c
+@@ -591,7 +591,7 @@ static void cake_update_flowkeys(struct flow_keys *keys,
+ struct nf_conntrack_tuple tuple = {};
+ bool rev = !skb->_nfct;
+
+- if (tc_skb_protocol(skb) != htons(ETH_P_IP))
++ if (skb_protocol(skb, true) != htons(ETH_P_IP))
+ return;
+
+ if (!nf_ct_get_tuple_skb(&tuple, skb))
+@@ -1520,7 +1520,7 @@ static u8 cake_handle_diffserv(struct sk_buff *skb, bool wash)
+ u16 *buf, buf_;
+ u8 dscp;
+
+- switch (tc_skb_protocol(skb)) {
++ switch (skb_protocol(skb, true)) {
+ case htons(ETH_P_IP):
+ buf = skb_header_pointer(skb, offset, sizeof(buf_), &buf_);
+ if (unlikely(!buf))
+diff --git a/net/sched/sch_dsmark.c b/net/sched/sch_dsmark.c
+index 05605b30bef3..2b88710994d7 100644
+--- a/net/sched/sch_dsmark.c
++++ b/net/sched/sch_dsmark.c
+@@ -210,7 +210,7 @@ static int dsmark_enqueue(struct sk_buff *skb, struct Qdisc *sch,
+ if (p->set_tc_index) {
+ int wlen = skb_network_offset(skb);
+
+- switch (tc_skb_protocol(skb)) {
++ switch (skb_protocol(skb, true)) {
+ case htons(ETH_P_IP):
+ wlen += sizeof(struct iphdr);
+ if (!pskb_may_pull(skb, wlen) ||
+@@ -303,7 +303,7 @@ static struct sk_buff *dsmark_dequeue(struct Qdisc *sch)
+ index = skb->tc_index & (p->indices - 1);
+ pr_debug("index %d->%d\n", skb->tc_index, index);
+
+- switch (tc_skb_protocol(skb)) {
++ switch (skb_protocol(skb, true)) {
+ case htons(ETH_P_IP):
+ ipv4_change_dsfield(ip_hdr(skb), p->mv[index].mask,
+ p->mv[index].value);
+@@ -320,7 +320,7 @@ static struct sk_buff *dsmark_dequeue(struct Qdisc *sch)
+ */
+ if (p->mv[index].mask != 0xff || p->mv[index].value)
+ pr_warn("%s: unsupported protocol %d\n",
+- __func__, ntohs(tc_skb_protocol(skb)));
++ __func__, ntohs(skb_protocol(skb, true)));
+ break;
+ }
+
+diff --git a/net/sched/sch_teql.c b/net/sched/sch_teql.c
+index 689ef6f3ded8..2f1f0a378408 100644
+--- a/net/sched/sch_teql.c
++++ b/net/sched/sch_teql.c
+@@ -239,7 +239,7 @@ __teql_resolve(struct sk_buff *skb, struct sk_buff *skb_res,
+ char haddr[MAX_ADDR_LEN];
+
+ neigh_ha_snapshot(haddr, n, dev);
+- err = dev_hard_header(skb, dev, ntohs(tc_skb_protocol(skb)),
++ err = dev_hard_header(skb, dev, ntohs(skb_protocol(skb, false)),
+ haddr, NULL, skb->len);
+
+ if (err < 0)
+diff --git a/net/sunrpc/xprtrdma/rpc_rdma.c b/net/sunrpc/xprtrdma/rpc_rdma.c
+index 57118e342c8e..06a8268edf3b 100644
+--- a/net/sunrpc/xprtrdma/rpc_rdma.c
++++ b/net/sunrpc/xprtrdma/rpc_rdma.c
+@@ -71,7 +71,7 @@ static unsigned int rpcrdma_max_call_header_size(unsigned int maxsegs)
+ size = RPCRDMA_HDRLEN_MIN;
+
+ /* Maximum Read list size */
+- size = maxsegs * rpcrdma_readchunk_maxsz * sizeof(__be32);
++ size += maxsegs * rpcrdma_readchunk_maxsz * sizeof(__be32);
+
+ /* Minimal Read chunk size */
+ size += sizeof(__be32); /* segment count */
+@@ -94,7 +94,7 @@ static unsigned int rpcrdma_max_reply_header_size(unsigned int maxsegs)
+ size = RPCRDMA_HDRLEN_MIN;
+
+ /* Maximum Write list size */
+- size = sizeof(__be32); /* segment count */
++ size += sizeof(__be32); /* segment count */
+ size += maxsegs * rpcrdma_segment_maxsz * sizeof(__be32);
+ size += sizeof(__be32); /* list discriminator */
+
+diff --git a/net/sunrpc/xprtrdma/transport.c b/net/sunrpc/xprtrdma/transport.c
+index 659da37020a4..3b5fb1f57aeb 100644
+--- a/net/sunrpc/xprtrdma/transport.c
++++ b/net/sunrpc/xprtrdma/transport.c
+@@ -249,6 +249,11 @@ xprt_rdma_connect_worker(struct work_struct *work)
+ xprt->stat.connect_start;
+ xprt_set_connected(xprt);
+ rc = -EAGAIN;
++ } else {
++ /* Force a call to xprt_rdma_close to clean up */
++ spin_lock(&xprt->transport_lock);
++ set_bit(XPRT_CLOSE_WAIT, &xprt->state);
++ spin_unlock(&xprt->transport_lock);
+ }
+ xprt_wake_pending_tasks(xprt, rc);
+ }
+diff --git a/net/sunrpc/xprtrdma/verbs.c b/net/sunrpc/xprtrdma/verbs.c
+index db0259c6467e..26e89c65ba56 100644
+--- a/net/sunrpc/xprtrdma/verbs.c
++++ b/net/sunrpc/xprtrdma/verbs.c
+@@ -279,17 +279,19 @@ rpcrdma_cm_event_handler(struct rdma_cm_id *id, struct rdma_cm_event *event)
+ break;
+ case RDMA_CM_EVENT_CONNECT_ERROR:
+ ep->re_connect_status = -ENOTCONN;
+- goto disconnected;
++ goto wake_connect_worker;
+ case RDMA_CM_EVENT_UNREACHABLE:
+ ep->re_connect_status = -ENETUNREACH;
+- goto disconnected;
++ goto wake_connect_worker;
+ case RDMA_CM_EVENT_REJECTED:
+ dprintk("rpcrdma: connection to %pISpc rejected: %s\n",
+ sap, rdma_reject_msg(id, event->status));
+ ep->re_connect_status = -ECONNREFUSED;
+ if (event->status == IB_CM_REJ_STALE_CONN)
+- ep->re_connect_status = -EAGAIN;
+- goto disconnected;
++ ep->re_connect_status = -ENOTCONN;
++wake_connect_worker:
++ wake_up_all(&ep->re_connect_wait);
++ return 0;
+ case RDMA_CM_EVENT_DISCONNECTED:
+ ep->re_connect_status = -ECONNABORTED;
+ disconnected:
+@@ -398,14 +400,14 @@ static int rpcrdma_ep_create(struct rpcrdma_xprt *r_xprt)
+
+ ep = kzalloc(sizeof(*ep), GFP_NOFS);
+ if (!ep)
+- return -EAGAIN;
++ return -ENOTCONN;
+ ep->re_xprt = &r_xprt->rx_xprt;
+ kref_init(&ep->re_kref);
+
+ id = rpcrdma_create_id(r_xprt, ep);
+ if (IS_ERR(id)) {
+- rc = PTR_ERR(id);
+- goto out_free;
++ kfree(ep);
++ return PTR_ERR(id);
+ }
+ __module_get(THIS_MODULE);
+ device = id->device;
+@@ -504,9 +506,6 @@ static int rpcrdma_ep_create(struct rpcrdma_xprt *r_xprt)
+ out_destroy:
+ rpcrdma_ep_put(ep);
+ rdma_destroy_id(id);
+-out_free:
+- kfree(ep);
+- r_xprt->rx_ep = NULL;
+ return rc;
+ }
+
+@@ -522,8 +521,6 @@ int rpcrdma_xprt_connect(struct rpcrdma_xprt *r_xprt)
+ struct rpcrdma_ep *ep;
+ int rc;
+
+-retry:
+- rpcrdma_xprt_disconnect(r_xprt);
+ rc = rpcrdma_ep_create(r_xprt);
+ if (rc)
+ return rc;
+@@ -539,10 +536,6 @@ retry:
+ rpcrdma_ep_get(ep);
+ rpcrdma_post_recvs(r_xprt, true);
+
+- rc = rpcrdma_sendctxs_create(r_xprt);
+- if (rc)
+- goto out;
+-
+ rc = rdma_connect(ep->re_id, &ep->re_remote_cma);
+ if (rc)
+ goto out;
+@@ -552,15 +545,19 @@ retry:
+ wait_event_interruptible(ep->re_connect_wait,
+ ep->re_connect_status != 0);
+ if (ep->re_connect_status <= 0) {
+- if (ep->re_connect_status == -EAGAIN)
+- goto retry;
+ rc = ep->re_connect_status;
+ goto out;
+ }
+
++ rc = rpcrdma_sendctxs_create(r_xprt);
++ if (rc) {
++ rc = -ENOTCONN;
++ goto out;
++ }
++
+ rc = rpcrdma_reqs_setup(r_xprt);
+ if (rc) {
+- rpcrdma_xprt_disconnect(r_xprt);
++ rc = -ENOTCONN;
+ goto out;
+ }
+ rpcrdma_mrs_create(r_xprt);
+diff --git a/security/apparmor/match.c b/security/apparmor/match.c
+index 525ce22dc0e9..5947b0a763c2 100644
+--- a/security/apparmor/match.c
++++ b/security/apparmor/match.c
+@@ -97,6 +97,9 @@ static struct table_header *unpack_table(char *blob, size_t bsize)
+ th.td_flags == YYTD_DATA8))
+ goto out;
+
++ /* if we have a table it must have some entries */
++ if (th.td_lolen == 0)
++ goto out;
+ tsize = table_size(th.td_lolen, th.td_flags);
+ if (bsize < tsize)
+ goto out;
+@@ -198,6 +201,8 @@ static int verify_dfa(struct aa_dfa *dfa)
+
+ state_count = dfa->tables[YYTD_ID_BASE]->td_lolen;
+ trans_count = dfa->tables[YYTD_ID_NXT]->td_lolen;
++ if (state_count == 0)
++ goto out;
+ for (i = 0; i < state_count; i++) {
+ if (!(BASE_TABLE(dfa)[i] & MATCH_FLAG_DIFF_ENCODE) &&
+ (DEFAULT_TABLE(dfa)[i] >= state_count))
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 16ecc8515db8..d80eed2a48a1 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -6117,6 +6117,8 @@ enum {
+ ALC269VC_FIXUP_ACER_VCOPPERBOX_PINS,
+ ALC269VC_FIXUP_ACER_HEADSET_MIC,
+ ALC269VC_FIXUP_ACER_MIC_NO_PRESENCE,
++ ALC289_FIXUP_ASUS_G401,
++ ALC256_FIXUP_ACER_MIC_NO_PRESENCE,
+ };
+
+ static const struct hda_fixup alc269_fixups[] = {
+@@ -7082,7 +7084,7 @@ static const struct hda_fixup alc269_fixups[] = {
+ { }
+ },
+ .chained = true,
+- .chain_id = ALC269_FIXUP_HEADSET_MODE_NO_HP_MIC
++ .chain_id = ALC269_FIXUP_HEADSET_MIC
+ },
+ [ALC294_FIXUP_ASUS_HEADSET_MIC] = {
+ .type = HDA_FIXUP_PINS,
+@@ -7091,7 +7093,7 @@ static const struct hda_fixup alc269_fixups[] = {
+ { }
+ },
+ .chained = true,
+- .chain_id = ALC269_FIXUP_HEADSET_MODE_NO_HP_MIC
++ .chain_id = ALC269_FIXUP_HEADSET_MIC
+ },
+ [ALC294_FIXUP_ASUS_SPK] = {
+ .type = HDA_FIXUP_VERBS,
+@@ -7099,6 +7101,8 @@ static const struct hda_fixup alc269_fixups[] = {
+ /* Set EAPD high */
+ { 0x20, AC_VERB_SET_COEF_INDEX, 0x40 },
+ { 0x20, AC_VERB_SET_PROC_COEF, 0x8800 },
++ { 0x20, AC_VERB_SET_COEF_INDEX, 0x0f },
++ { 0x20, AC_VERB_SET_PROC_COEF, 0x7774 },
+ { }
+ },
+ .chained = true,
+@@ -7324,6 +7328,22 @@ static const struct hda_fixup alc269_fixups[] = {
+ .chained = true,
+ .chain_id = ALC269_FIXUP_HEADSET_MIC
+ },
++ [ALC289_FIXUP_ASUS_G401] = {
++ .type = HDA_FIXUP_PINS,
++ .v.pins = (const struct hda_pintbl[]) {
++ { 0x19, 0x03a11020 }, /* headset mic with jack detect */
++ { }
++ },
++ },
++ [ALC256_FIXUP_ACER_MIC_NO_PRESENCE] = {
++ .type = HDA_FIXUP_PINS,
++ .v.pins = (const struct hda_pintbl[]) {
++ { 0x19, 0x02a11120 }, /* use as headset mic, without its own jack detect */
++ { }
++ },
++ .chained = true,
++ .chain_id = ALC256_FIXUP_ASUS_HEADSET_MODE
++ },
+ };
+
+ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+@@ -7352,6 +7372,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x1025, 0x1308, "Acer Aspire Z24-890", ALC286_FIXUP_ACER_AIO_HEADSET_MIC),
+ SND_PCI_QUIRK(0x1025, 0x132a, "Acer TravelMate B114-21", ALC233_FIXUP_ACER_HEADSET_MIC),
+ SND_PCI_QUIRK(0x1025, 0x1330, "Acer TravelMate X514-51T", ALC255_FIXUP_ACER_HEADSET_MIC),
++ SND_PCI_QUIRK(0x1025, 0x1430, "Acer TravelMate B311R-31", ALC256_FIXUP_ACER_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1028, 0x0470, "Dell M101z", ALC269_FIXUP_DELL_M101Z),
+ SND_PCI_QUIRK(0x1028, 0x054b, "Dell XPS one 2710", ALC275_FIXUP_DELL_XPS),
+ SND_PCI_QUIRK(0x1028, 0x05bd, "Dell Latitude E6440", ALC292_FIXUP_DELL_E7X),
+@@ -7495,6 +7516,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x1043, 0x17d1, "ASUS UX431FL", ALC294_FIXUP_ASUS_DUAL_SPK),
+ SND_PCI_QUIRK(0x1043, 0x18b1, "Asus MJ401TA", ALC256_FIXUP_ASUS_HEADSET_MIC),
+ SND_PCI_QUIRK(0x1043, 0x18f1, "Asus FX505DT", ALC256_FIXUP_ASUS_HEADSET_MIC),
++ SND_PCI_QUIRK(0x1043, 0x194e, "ASUS UX563FD", ALC294_FIXUP_ASUS_HPE),
+ SND_PCI_QUIRK(0x1043, 0x19ce, "ASUS B9450FA", ALC294_FIXUP_ASUS_HPE),
+ SND_PCI_QUIRK(0x1043, 0x19e1, "ASUS UX581LV", ALC295_FIXUP_ASUS_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1043, 0x1a13, "Asus G73Jw", ALC269_FIXUP_ASUS_G73JW),
+@@ -7504,6 +7526,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x1043, 0x1bbd, "ASUS Z550MA", ALC255_FIXUP_ASUS_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1043, 0x1c23, "Asus X55U", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
+ SND_PCI_QUIRK(0x1043, 0x1ccd, "ASUS X555UB", ALC256_FIXUP_ASUS_MIC),
++ SND_PCI_QUIRK(0x1043, 0x1f11, "ASUS Zephyrus G14", ALC289_FIXUP_ASUS_G401),
+ SND_PCI_QUIRK(0x1043, 0x3030, "ASUS ZN270IE", ALC256_FIXUP_ASUS_AIO_GPIO2),
+ SND_PCI_QUIRK(0x1043, 0x831a, "ASUS P901", ALC269_FIXUP_STEREO_DMIC),
+ SND_PCI_QUIRK(0x1043, 0x834a, "ASUS S101", ALC269_FIXUP_STEREO_DMIC),
+diff --git a/sound/usb/line6/capture.c b/sound/usb/line6/capture.c
+index 663d608c4287..970c9bdce0b2 100644
+--- a/sound/usb/line6/capture.c
++++ b/sound/usb/line6/capture.c
+@@ -286,6 +286,8 @@ int line6_create_audio_in_urbs(struct snd_line6_pcm *line6pcm)
+ urb->interval = LINE6_ISO_INTERVAL;
+ urb->error_count = 0;
+ urb->complete = audio_in_callback;
++ if (usb_urb_ep_type_check(urb))
++ return -EINVAL;
+ }
+
+ return 0;
+diff --git a/sound/usb/line6/driver.c b/sound/usb/line6/driver.c
+index 4f096685ed65..0caf53f5764c 100644
+--- a/sound/usb/line6/driver.c
++++ b/sound/usb/line6/driver.c
+@@ -820,7 +820,7 @@ void line6_disconnect(struct usb_interface *interface)
+ if (WARN_ON(usbdev != line6->usbdev))
+ return;
+
+- cancel_delayed_work(&line6->startup_work);
++ cancel_delayed_work_sync(&line6->startup_work);
+
+ if (line6->urb_listen != NULL)
+ line6_stop_listen(line6);
+diff --git a/sound/usb/line6/playback.c b/sound/usb/line6/playback.c
+index 01930ce7bd75..8233c61e23f1 100644
+--- a/sound/usb/line6/playback.c
++++ b/sound/usb/line6/playback.c
+@@ -431,6 +431,8 @@ int line6_create_audio_out_urbs(struct snd_line6_pcm *line6pcm)
+ urb->interval = LINE6_ISO_INTERVAL;
+ urb->error_count = 0;
+ urb->complete = audio_out_callback;
++ if (usb_urb_ep_type_check(urb))
++ return -EINVAL;
+ }
+
+ return 0;
+diff --git a/sound/usb/midi.c b/sound/usb/midi.c
+index 047b90595d65..354f57692938 100644
+--- a/sound/usb/midi.c
++++ b/sound/usb/midi.c
+@@ -1499,6 +1499,8 @@ void snd_usbmidi_disconnect(struct list_head *p)
+ spin_unlock_irq(&umidi->disc_lock);
+ up_write(&umidi->disc_rwsem);
+
++ del_timer_sync(&umidi->error_timer);
++
+ for (i = 0; i < MIDI_MAX_ENDPOINTS; ++i) {
+ struct snd_usb_midi_endpoint *ep = &umidi->endpoints[i];
+ if (ep->out)
+@@ -1525,7 +1527,6 @@ void snd_usbmidi_disconnect(struct list_head *p)
+ ep->in = NULL;
+ }
+ }
+- del_timer_sync(&umidi->error_timer);
+ }
+ EXPORT_SYMBOL(snd_usbmidi_disconnect);
+
+@@ -2301,16 +2302,22 @@ void snd_usbmidi_input_stop(struct list_head *p)
+ }
+ EXPORT_SYMBOL(snd_usbmidi_input_stop);
+
+-static void snd_usbmidi_input_start_ep(struct snd_usb_midi_in_endpoint *ep)
++static void snd_usbmidi_input_start_ep(struct snd_usb_midi *umidi,
++ struct snd_usb_midi_in_endpoint *ep)
+ {
+ unsigned int i;
++ unsigned long flags;
+
+ if (!ep)
+ return;
+ for (i = 0; i < INPUT_URBS; ++i) {
+ struct urb *urb = ep->urbs[i];
+- urb->dev = ep->umidi->dev;
+- snd_usbmidi_submit_urb(urb, GFP_KERNEL);
++ spin_lock_irqsave(&umidi->disc_lock, flags);
++ if (!atomic_read(&urb->use_count)) {
++ urb->dev = ep->umidi->dev;
++ snd_usbmidi_submit_urb(urb, GFP_ATOMIC);
++ }
++ spin_unlock_irqrestore(&umidi->disc_lock, flags);
+ }
+ }
+
+@@ -2326,7 +2333,7 @@ void snd_usbmidi_input_start(struct list_head *p)
+ if (umidi->input_running || !umidi->opened[1])
+ return;
+ for (i = 0; i < MIDI_MAX_ENDPOINTS; ++i)
+- snd_usbmidi_input_start_ep(umidi->endpoints[i].in);
++ snd_usbmidi_input_start_ep(umidi, umidi->endpoints[i].in);
+ umidi->input_running = 1;
+ }
+ EXPORT_SYMBOL(snd_usbmidi_input_start);
+diff --git a/tools/perf/util/stat.c b/tools/perf/util/stat.c
+index 5f26137b8d60..242476eb808c 100644
+--- a/tools/perf/util/stat.c
++++ b/tools/perf/util/stat.c
+@@ -368,8 +368,10 @@ int perf_stat_process_counter(struct perf_stat_config *config,
+ * interval mode, otherwise overall avg running
+ * averages will be shown for each interval.
+ */
+- if (config->interval)
+- init_stats(ps->res_stats);
++ if (config->interval) {
++ for (i = 0; i < 3; i++)
++ init_stats(&ps->res_stats[i]);
++ }
+
+ if (counter->per_pkg)
+ zero_per_pkg(counter);
+diff --git a/tools/testing/selftests/net/fib_nexthops.sh b/tools/testing/selftests/net/fib_nexthops.sh
+index 6560ed796ac4..09830b88ec8c 100755
+--- a/tools/testing/selftests/net/fib_nexthops.sh
++++ b/tools/testing/selftests/net/fib_nexthops.sh
+@@ -512,6 +512,19 @@ ipv6_fcnal_runtime()
+ run_cmd "$IP nexthop add id 86 via 2001:db8:91::2 dev veth1"
+ run_cmd "$IP ro add 2001:db8:101::1/128 nhid 81"
+
++ # rpfilter and default route
++ $IP nexthop flush >/dev/null 2>&1
++ run_cmd "ip netns exec me ip6tables -t mangle -I PREROUTING 1 -m rpfilter --invert -j DROP"
++ run_cmd "$IP nexthop add id 91 via 2001:db8:91::2 dev veth1"
++ run_cmd "$IP nexthop add id 92 via 2001:db8:92::2 dev veth3"
++ run_cmd "$IP nexthop add id 93 group 91/92"
++ run_cmd "$IP -6 ro add default nhid 91"
++ run_cmd "ip netns exec me ping -c1 -w1 2001:db8:101::1"
++ log_test $? 0 "Nexthop with default route and rpfilter"
++ run_cmd "$IP -6 ro replace default nhid 93"
++ run_cmd "ip netns exec me ping -c1 -w1 2001:db8:101::1"
++ log_test $? 0 "Nexthop with multipath default route and rpfilter"
++
+ # TO-DO:
+ # existing route with old nexthop; append route with new nexthop
+ # existing route with old nexthop; replace route with new
* [gentoo-commits] proj/linux-patches:5.7 commit in: /
@ 2020-07-29 12:43 Mike Pagano
From: Mike Pagano @ 2020-07-29 12:43 UTC
To: gentoo-commits
commit: 356150888a3720b32bbeb80bf06dab3a89c2e10b
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Jul 29 12:43:33 2020 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Jul 29 12:43:33 2020 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=35615088
Linux patch 5.7.11
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1010_linux-5.7.11.patch | 6182 +++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 6186 insertions(+)
diff --git a/0000_README b/0000_README
index c2d1f0c..6409a51 100644
--- a/0000_README
+++ b/0000_README
@@ -83,6 +83,10 @@ Patch: 1009_linux-5.7.10.patch
From: http://www.kernel.org
Desc: Linux 5.7.10
+Patch: 1010_linux-5.7.11.patch
+From: http://www.kernel.org
+Desc: Linux 5.7.11
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1010_linux-5.7.11.patch b/1010_linux-5.7.11.patch
new file mode 100644
index 0000000..f2f0416
--- /dev/null
+++ b/1010_linux-5.7.11.patch
@@ -0,0 +1,6182 @@
+diff --git a/Makefile b/Makefile
+index e622e084e7e2..12777a95833f 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 7
+-SUBLEVEL = 10
++SUBLEVEL = 11
+ EXTRAVERSION =
+ NAME = Kleptomaniac Octopus
+
+@@ -549,7 +549,7 @@ ifneq ($(shell $(CC) --version 2>&1 | head -n 1 | grep clang),)
+ ifneq ($(CROSS_COMPILE),)
+ CLANG_FLAGS += --target=$(notdir $(CROSS_COMPILE:%-=%))
+ GCC_TOOLCHAIN_DIR := $(dir $(shell which $(CROSS_COMPILE)elfedit))
+-CLANG_FLAGS += --prefix=$(GCC_TOOLCHAIN_DIR)
++CLANG_FLAGS += --prefix=$(GCC_TOOLCHAIN_DIR)$(notdir $(CROSS_COMPILE))
+ GCC_TOOLCHAIN := $(realpath $(GCC_TOOLCHAIN_DIR)/..)
+ endif
+ ifneq ($(GCC_TOOLCHAIN),)
+@@ -1730,7 +1730,7 @@ PHONY += descend $(build-dirs)
+ descend: $(build-dirs)
+ $(build-dirs): prepare
+ $(Q)$(MAKE) $(build)=$@ \
+- single-build=$(if $(filter-out $@/, $(filter $@/%, $(single-no-ko))),1) \
++ single-build=$(if $(filter-out $@/, $(filter $@/%, $(KBUILD_SINGLE_TARGETS))),1) \
+ need-builtin=1 need-modorder=1
+
+ clean-dirs := $(addprefix _clean_, $(clean-dirs))
+diff --git a/arch/arm/boot/dts/omap3-n900.dts b/arch/arm/boot/dts/omap3-n900.dts
+index 4089d97405c9..3dbcae3d60d2 100644
+--- a/arch/arm/boot/dts/omap3-n900.dts
++++ b/arch/arm/boot/dts/omap3-n900.dts
+@@ -105,6 +105,14 @@
+ linux,code = <SW_FRONT_PROXIMITY>;
+ linux,can-disable;
+ };
++
++ machine_cover {
++ label = "Machine Cover";
++ gpios = <&gpio6 0 GPIO_ACTIVE_LOW>; /* 160 */
++ linux,input-type = <EV_SW>;
++ linux,code = <SW_MACHINE_COVER>;
++ linux,can-disable;
++ };
+ };
+
+ isp1707: isp1707 {
+@@ -819,10 +827,6 @@
+ pinctrl-0 = <&mmc1_pins>;
+ vmmc-supply = <&vmmc1>;
+ bus-width = <4>;
+- /* For debugging, it is often good idea to remove this GPIO.
+- It means you can remove back cover (to reboot by removing
+- battery) and still use the MMC card. */
+- cd-gpios = <&gpio6 0 GPIO_ACTIVE_LOW>; /* 160 */
+ };
+
+ /* most boards use vaux3, only some old versions use vmmc2 instead */
+diff --git a/arch/arm64/boot/dts/marvell/armada-8040-clearfog-gt-8k.dts b/arch/arm64/boot/dts/marvell/armada-8040-clearfog-gt-8k.dts
+index b90d78a5724b..e32a491e909f 100644
+--- a/arch/arm64/boot/dts/marvell/armada-8040-clearfog-gt-8k.dts
++++ b/arch/arm64/boot/dts/marvell/armada-8040-clearfog-gt-8k.dts
+@@ -454,10 +454,7 @@
+ status = "okay";
+ phy-mode = "2500base-x";
+ phys = <&cp1_comphy5 2>;
+- fixed-link {
+- speed = <2500>;
+- full-duplex;
+- };
++ managed = "in-band-status";
+ };
+
+ &cp1_spi1 {
+diff --git a/arch/arm64/kernel/debug-monitors.c b/arch/arm64/kernel/debug-monitors.c
+index 7569deb1eac1..d64a3c1e1b6b 100644
+--- a/arch/arm64/kernel/debug-monitors.c
++++ b/arch/arm64/kernel/debug-monitors.c
+@@ -396,14 +396,14 @@ void user_rewind_single_step(struct task_struct *task)
+ * If single step is active for this thread, then set SPSR.SS
+ * to 1 to avoid returning to the active-pending state.
+ */
+- if (test_ti_thread_flag(task_thread_info(task), TIF_SINGLESTEP))
++ if (test_tsk_thread_flag(task, TIF_SINGLESTEP))
+ set_regs_spsr_ss(task_pt_regs(task));
+ }
+ NOKPROBE_SYMBOL(user_rewind_single_step);
+
+ void user_fastforward_single_step(struct task_struct *task)
+ {
+- if (test_ti_thread_flag(task_thread_info(task), TIF_SINGLESTEP))
++ if (test_tsk_thread_flag(task, TIF_SINGLESTEP))
+ clear_regs_spsr_ss(task_pt_regs(task));
+ }
+
+diff --git a/arch/arm64/kernel/vdso32/Makefile b/arch/arm64/kernel/vdso32/Makefile
+index 3964738ebbde..0433bb58ce52 100644
+--- a/arch/arm64/kernel/vdso32/Makefile
++++ b/arch/arm64/kernel/vdso32/Makefile
+@@ -14,7 +14,7 @@ COMPAT_GCC_TOOLCHAIN_DIR := $(dir $(shell which $(CROSS_COMPILE_COMPAT)elfedit))
+ COMPAT_GCC_TOOLCHAIN := $(realpath $(COMPAT_GCC_TOOLCHAIN_DIR)/..)
+
+ CC_COMPAT_CLANG_FLAGS := --target=$(notdir $(CROSS_COMPILE_COMPAT:%-=%))
+-CC_COMPAT_CLANG_FLAGS += --prefix=$(COMPAT_GCC_TOOLCHAIN_DIR)
++CC_COMPAT_CLANG_FLAGS += --prefix=$(COMPAT_GCC_TOOLCHAIN_DIR)$(notdir $(CROSS_COMPILE_COMPAT))
+ CC_COMPAT_CLANG_FLAGS += -no-integrated-as -Qunused-arguments
+ ifneq ($(COMPAT_GCC_TOOLCHAIN),)
+ CC_COMPAT_CLANG_FLAGS += --gcc-toolchain=$(COMPAT_GCC_TOOLCHAIN)
+diff --git a/arch/mips/pci/pci-xtalk-bridge.c b/arch/mips/pci/pci-xtalk-bridge.c
+index 3b2552fb7735..5958217861b8 100644
+--- a/arch/mips/pci/pci-xtalk-bridge.c
++++ b/arch/mips/pci/pci-xtalk-bridge.c
+@@ -627,9 +627,10 @@ static int bridge_probe(struct platform_device *pdev)
+ return -ENOMEM;
+ domain = irq_domain_create_hierarchy(parent, 0, 8, fn,
+ &bridge_domain_ops, NULL);
+- irq_domain_free_fwnode(fn);
+- if (!domain)
++ if (!domain) {
++ irq_domain_free_fwnode(fn);
+ return -ENOMEM;
++ }
+
+ pci_set_flags(PCI_PROBE_ONLY);
+
+diff --git a/arch/parisc/include/asm/atomic.h b/arch/parisc/include/asm/atomic.h
+index 118953d41763..6dd4171c9530 100644
+--- a/arch/parisc/include/asm/atomic.h
++++ b/arch/parisc/include/asm/atomic.h
+@@ -212,6 +212,8 @@ atomic64_set(atomic64_t *v, s64 i)
+ _atomic_spin_unlock_irqrestore(v, flags);
+ }
+
++#define atomic64_set_release(v, i) atomic64_set((v), (i))
++
+ static __inline__ s64
+ atomic64_read(const atomic64_t *v)
+ {
+diff --git a/arch/riscv/include/asm/barrier.h b/arch/riscv/include/asm/barrier.h
+index 3f1737f301cc..d0e24aaa2aa0 100644
+--- a/arch/riscv/include/asm/barrier.h
++++ b/arch/riscv/include/asm/barrier.h
+@@ -58,8 +58,16 @@ do { \
+ * The AQ/RL pair provides a RCpc critical section, but there's not really any
+ * way we can take advantage of that here because the ordering is only enforced
+ * on that one lock. Thus, we're just doing a full fence.
++ *
++ * Since we allow writeX to be called from preemptive regions we need at least
++ * an "o" in the predecessor set to ensure device writes are visible before the
++ * task is marked as available for scheduling on a new hart. While I don't see
++ * any concrete reason we need a full IO fence, it seems safer to just upgrade
++ * this in order to avoid any IO crossing a scheduling boundary. In both
++ * instances the scheduler pairs this with an mb(), so nothing is necessary on
++ * the new hart.
+ */
+-#define smp_mb__after_spinlock() RISCV_FENCE(rw,rw)
++#define smp_mb__after_spinlock() RISCV_FENCE(iorw,iorw)
+
+ #include <asm-generic/barrier.h>
+
+diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c
+index fdc772f57edc..81493cee0a16 100644
+--- a/arch/riscv/mm/init.c
++++ b/arch/riscv/mm/init.c
+@@ -94,19 +94,40 @@ void __init mem_init(void)
+ #ifdef CONFIG_BLK_DEV_INITRD
+ static void __init setup_initrd(void)
+ {
++ phys_addr_t start;
+ unsigned long size;
+
+- if (initrd_start >= initrd_end) {
+- pr_info("initrd not found or empty");
++ /* Ignore the virtual address computed during device tree parsing */
++ initrd_start = initrd_end = 0;
++
++ if (!phys_initrd_size)
++ return;
++ /*
++ * Round the memory region to page boundaries as per free_initrd_mem()
++ * This allows us to detect whether the pages overlapping the initrd
++ * are in use, but more importantly, reserves the entire set of pages
++ * as we don't want these pages allocated for other purposes.
++ */
++ start = round_down(phys_initrd_start, PAGE_SIZE);
++ size = phys_initrd_size + (phys_initrd_start - start);
++ size = round_up(size, PAGE_SIZE);
++
++ if (!memblock_is_region_memory(start, size)) {
++ pr_err("INITRD: 0x%08llx+0x%08lx is not a memory region",
++ (u64)start, size);
+ goto disable;
+ }
+- if (__pa_symbol(initrd_end) > PFN_PHYS(max_low_pfn)) {
+- pr_err("initrd extends beyond end of memory");
++
++ if (memblock_is_region_reserved(start, size)) {
++ pr_err("INITRD: 0x%08llx+0x%08lx overlaps in-use memory region\n",
++ (u64)start, size);
+ goto disable;
+ }
+
+- size = initrd_end - initrd_start;
+- memblock_reserve(__pa_symbol(initrd_start), size);
++ memblock_reserve(start, size);
++ /* Now convert initrd to virtual addresses */
++ initrd_start = (unsigned long)__va(phys_initrd_start);
++ initrd_end = initrd_start + phys_initrd_size;
+ initrd_below_start_ok = 1;
+
+ pr_info("Initial ramdisk at: 0x%p (%lu bytes)\n",
+diff --git a/arch/s390/kernel/perf_cpum_cf_events.c b/arch/s390/kernel/perf_cpum_cf_events.c
+index 1e3df52b2b65..37265f551a11 100644
+--- a/arch/s390/kernel/perf_cpum_cf_events.c
++++ b/arch/s390/kernel/perf_cpum_cf_events.c
+@@ -292,7 +292,7 @@ CPUMF_EVENT_ATTR(cf_z15, TX_C_TABORT_SPECIAL, 0x00f5);
+ CPUMF_EVENT_ATTR(cf_z15, DFLT_ACCESS, 0x00f7);
+ CPUMF_EVENT_ATTR(cf_z15, DFLT_CYCLES, 0x00fc);
+ CPUMF_EVENT_ATTR(cf_z15, DFLT_CC, 0x00108);
+-CPUMF_EVENT_ATTR(cf_z15, DFLT_CCERROR, 0x00109);
++CPUMF_EVENT_ATTR(cf_z15, DFLT_CCFINISH, 0x00109);
+ CPUMF_EVENT_ATTR(cf_z15, MT_DIAG_CYCLES_ONE_THR_ACTIVE, 0x01c0);
+ CPUMF_EVENT_ATTR(cf_z15, MT_DIAG_CYCLES_TWO_THR_ACTIVE, 0x01c1);
+
+@@ -629,7 +629,7 @@ static struct attribute *cpumcf_z15_pmu_event_attr[] __initdata = {
+ CPUMF_EVENT_PTR(cf_z15, DFLT_ACCESS),
+ CPUMF_EVENT_PTR(cf_z15, DFLT_CYCLES),
+ CPUMF_EVENT_PTR(cf_z15, DFLT_CC),
+- CPUMF_EVENT_PTR(cf_z15, DFLT_CCERROR),
++ CPUMF_EVENT_PTR(cf_z15, DFLT_CCFINISH),
+ CPUMF_EVENT_PTR(cf_z15, MT_DIAG_CYCLES_ONE_THR_ACTIVE),
+ CPUMF_EVENT_PTR(cf_z15, MT_DIAG_CYCLES_TWO_THR_ACTIVE),
+ NULL,
+diff --git a/arch/x86/boot/compressed/Makefile b/arch/x86/boot/compressed/Makefile
+index 5f7c262bcc99..20aac9968315 100644
+--- a/arch/x86/boot/compressed/Makefile
++++ b/arch/x86/boot/compressed/Makefile
+@@ -88,8 +88,8 @@ endif
+
+ vmlinux-objs-$(CONFIG_ACPI) += $(obj)/acpi.o
+
+-vmlinux-objs-$(CONFIG_EFI_STUB) += $(objtree)/drivers/firmware/efi/libstub/lib.a
+ vmlinux-objs-$(CONFIG_EFI_MIXED) += $(obj)/efi_thunk_$(BITS).o
++efi-obj-$(CONFIG_EFI_STUB) = $(objtree)/drivers/firmware/efi/libstub/lib.a
+
+ # The compressed kernel is built with -fPIC/-fPIE so that a boot loader
+ # can place it anywhere in memory and it will still run. However, since
+@@ -113,7 +113,7 @@ endef
+ quiet_cmd_check-and-link-vmlinux = LD $@
+ cmd_check-and-link-vmlinux = $(cmd_check_data_rel); $(cmd_ld)
+
+-$(obj)/vmlinux: $(vmlinux-objs-y) FORCE
++$(obj)/vmlinux: $(vmlinux-objs-y) $(efi-obj-y) FORCE
+ $(call if_changed,check-and-link-vmlinux)
+
+ OBJCOPYFLAGS_vmlinux.bin := -R .comment -S
+diff --git a/arch/x86/kernel/apic/io_apic.c b/arch/x86/kernel/apic/io_apic.c
+index 913c88617848..57447f03ee87 100644
+--- a/arch/x86/kernel/apic/io_apic.c
++++ b/arch/x86/kernel/apic/io_apic.c
+@@ -2329,12 +2329,12 @@ static int mp_irqdomain_create(int ioapic)
+ ip->irqdomain = irq_domain_create_linear(fn, hwirqs, cfg->ops,
+ (void *)(long)ioapic);
+
+- /* Release fw handle if it was allocated above */
+- if (!cfg->dev)
+- irq_domain_free_fwnode(fn);
+-
+- if (!ip->irqdomain)
++ if (!ip->irqdomain) {
++ /* Release fw handle if it was allocated above */
++ if (!cfg->dev)
++ irq_domain_free_fwnode(fn);
+ return -ENOMEM;
++ }
+
+ ip->irqdomain->parent = parent;
+
+diff --git a/arch/x86/kernel/apic/msi.c b/arch/x86/kernel/apic/msi.c
+index 159bd0cb8548..a20873bbbed6 100644
+--- a/arch/x86/kernel/apic/msi.c
++++ b/arch/x86/kernel/apic/msi.c
+@@ -262,12 +262,13 @@ void __init arch_init_msi_domain(struct irq_domain *parent)
+ msi_default_domain =
+ pci_msi_create_irq_domain(fn, &pci_msi_domain_info,
+ parent);
+- irq_domain_free_fwnode(fn);
+ }
+- if (!msi_default_domain)
++ if (!msi_default_domain) {
++ irq_domain_free_fwnode(fn);
+ pr_warn("failed to initialize irqdomain for MSI/MSI-x.\n");
+- else
++ } else {
+ msi_default_domain->flags |= IRQ_DOMAIN_MSI_NOMASK_QUIRK;
++ }
+ }
+
+ #ifdef CONFIG_IRQ_REMAP
+@@ -300,7 +301,8 @@ struct irq_domain *arch_create_remap_msi_irq_domain(struct irq_domain *parent,
+ if (!fn)
+ return NULL;
+ d = pci_msi_create_irq_domain(fn, &pci_msi_ir_domain_info, parent);
+- irq_domain_free_fwnode(fn);
++ if (!d)
++ irq_domain_free_fwnode(fn);
+ return d;
+ }
+ #endif
+@@ -363,7 +365,8 @@ static struct irq_domain *dmar_get_irq_domain(void)
+ if (fn) {
+ dmar_domain = msi_create_irq_domain(fn, &dmar_msi_domain_info,
+ x86_vector_domain);
+- irq_domain_free_fwnode(fn);
++ if (!dmar_domain)
++ irq_domain_free_fwnode(fn);
+ }
+ out:
+ mutex_unlock(&dmar_lock);
+@@ -488,7 +491,10 @@ struct irq_domain *hpet_create_irq_domain(int hpet_id)
+ }
+
+ d = msi_create_irq_domain(fn, domain_info, parent);
+- irq_domain_free_fwnode(fn);
++ if (!d) {
++ irq_domain_free_fwnode(fn);
++ kfree(domain_info);
++ }
+ return d;
+ }
+
+diff --git a/arch/x86/kernel/apic/vector.c b/arch/x86/kernel/apic/vector.c
+index cf8b6ebc6031..410363e60968 100644
+--- a/arch/x86/kernel/apic/vector.c
++++ b/arch/x86/kernel/apic/vector.c
+@@ -707,7 +707,6 @@ int __init arch_early_irq_init(void)
+ x86_vector_domain = irq_domain_create_tree(fn, &x86_vector_domain_ops,
+ NULL);
+ BUG_ON(x86_vector_domain == NULL);
+- irq_domain_free_fwnode(fn);
+ irq_set_default_host(x86_vector_domain);
+
+ arch_init_msi_domain(x86_vector_domain);
+diff --git a/arch/x86/kernel/vmlinux.lds.S b/arch/x86/kernel/vmlinux.lds.S
+index 7c35556c7827..1b165813892f 100644
+--- a/arch/x86/kernel/vmlinux.lds.S
++++ b/arch/x86/kernel/vmlinux.lds.S
+@@ -359,6 +359,7 @@ SECTIONS
+ .bss : AT(ADDR(.bss) - LOAD_OFFSET) {
+ __bss_start = .;
+ *(.bss..page_aligned)
++ . = ALIGN(PAGE_SIZE);
+ *(BSS_MAIN)
+ BSS_DECRYPTED
+ . = ALIGN(PAGE_SIZE);
+diff --git a/arch/x86/math-emu/wm_sqrt.S b/arch/x86/math-emu/wm_sqrt.S
+index 3b2b58164ec1..40526dd85137 100644
+--- a/arch/x86/math-emu/wm_sqrt.S
++++ b/arch/x86/math-emu/wm_sqrt.S
+@@ -209,7 +209,7 @@ sqrt_stage_2_finish:
+
+ #ifdef PARANOID
+ /* It should be possible to get here only if the arg is ffff....ffff */
+- cmp $0xffffffff,FPU_fsqrt_arg_1
++ cmpl $0xffffffff,FPU_fsqrt_arg_1
+ jnz sqrt_stage_2_error
+ #endif /* PARANOID */
+
+diff --git a/arch/x86/platform/uv/uv_irq.c b/arch/x86/platform/uv/uv_irq.c
+index fc13cbbb2dce..abb6075397f0 100644
+--- a/arch/x86/platform/uv/uv_irq.c
++++ b/arch/x86/platform/uv/uv_irq.c
+@@ -167,9 +167,10 @@ static struct irq_domain *uv_get_irq_domain(void)
+ goto out;
+
+ uv_domain = irq_domain_create_tree(fn, &uv_domain_ops, NULL);
+- irq_domain_free_fwnode(fn);
+ if (uv_domain)
+ uv_domain->parent = x86_vector_domain;
++ else
++ irq_domain_free_fwnode(fn);
+ out:
+ mutex_unlock(&uv_lock);
+
+diff --git a/arch/xtensa/kernel/setup.c b/arch/xtensa/kernel/setup.c
+index 3880c765d448..0271e2e47bcd 100644
+--- a/arch/xtensa/kernel/setup.c
++++ b/arch/xtensa/kernel/setup.c
+@@ -725,7 +725,8 @@ c_start(struct seq_file *f, loff_t *pos)
+ static void *
+ c_next(struct seq_file *f, void *v, loff_t *pos)
+ {
+- return NULL;
++ ++*pos;
++ return c_start(f, pos);
+ }
+
+ static void
+diff --git a/arch/xtensa/kernel/xtensa_ksyms.c b/arch/xtensa/kernel/xtensa_ksyms.c
+index 4092555828b1..24cf6972eace 100644
+--- a/arch/xtensa/kernel/xtensa_ksyms.c
++++ b/arch/xtensa/kernel/xtensa_ksyms.c
+@@ -87,13 +87,13 @@ void __xtensa_libgcc_window_spill(void)
+ }
+ EXPORT_SYMBOL(__xtensa_libgcc_window_spill);
+
+-unsigned long __sync_fetch_and_and_4(unsigned long *p, unsigned long v)
++unsigned int __sync_fetch_and_and_4(volatile void *p, unsigned int v)
+ {
+ BUG();
+ }
+ EXPORT_SYMBOL(__sync_fetch_and_and_4);
+
+-unsigned long __sync_fetch_and_or_4(unsigned long *p, unsigned long v)
++unsigned int __sync_fetch_and_or_4(volatile void *p, unsigned int v)
+ {
+ BUG();
+ }
+diff --git a/drivers/android/binder_alloc.c b/drivers/android/binder_alloc.c
+index 2d8b9b91dee0..1dadca3d381e 100644
+--- a/drivers/android/binder_alloc.c
++++ b/drivers/android/binder_alloc.c
+@@ -947,7 +947,7 @@ enum lru_status binder_alloc_free_page(struct list_head *item,
+ trace_binder_unmap_user_end(alloc, index);
+ }
+ up_read(&mm->mmap_sem);
+- mmput(mm);
++ mmput_async(mm);
+
+ trace_binder_unmap_kernel_start(alloc, index);
+
+diff --git a/drivers/base/regmap/regmap.c b/drivers/base/regmap/regmap.c
+index 320d23de02c2..927ebde1607b 100644
+--- a/drivers/base/regmap/regmap.c
++++ b/drivers/base/regmap/regmap.c
+@@ -1363,7 +1363,7 @@ static int dev_get_regmap_match(struct device *dev, void *res, void *data)
+
+ /* If the user didn't specify a name match any */
+ if (data)
+- return (*r)->name == data;
++ return !strcmp((*r)->name, data);
+ else
+ return 1;
+ }
+diff --git a/drivers/char/mem.c b/drivers/char/mem.c
+index 31cae88a730b..6b56bff9b68c 100644
+--- a/drivers/char/mem.c
++++ b/drivers/char/mem.c
+@@ -814,7 +814,8 @@ static struct inode *devmem_inode;
+ #ifdef CONFIG_IO_STRICT_DEVMEM
+ void revoke_devmem(struct resource *res)
+ {
+- struct inode *inode = READ_ONCE(devmem_inode);
++ /* pairs with smp_store_release() in devmem_init_inode() */
++ struct inode *inode = smp_load_acquire(&devmem_inode);
+
+ /*
+ * Check that the initialization has completed. Losing the race
+@@ -1028,8 +1029,11 @@ static int devmem_init_inode(void)
+ return rc;
+ }
+
+- /* publish /dev/mem initialized */
+- WRITE_ONCE(devmem_inode, inode);
++ /*
++ * Publish /dev/mem initialized.
++ * Pairs with smp_load_acquire() in revoke_devmem().
++ */
++ smp_store_release(&devmem_inode, inode);
+
+ return 0;
+ }
+diff --git a/drivers/crypto/chelsio/chtls/chtls_io.c b/drivers/crypto/chelsio/chtls/chtls_io.c
+index e1401d9cc756..2e9acae1cba3 100644
+--- a/drivers/crypto/chelsio/chtls/chtls_io.c
++++ b/drivers/crypto/chelsio/chtls/chtls_io.c
+@@ -1052,14 +1052,15 @@ int chtls_sendmsg(struct sock *sk, struct msghdr *msg, size_t size)
+ &record_type);
+ if (err)
+ goto out_err;
++
++ /* Avoid appending tls handshake, alert to tls data */
++ if (skb)
++ tx_skb_finalize(skb);
+ }
+
+ recordsz = size;
+ csk->tlshws.txleft = recordsz;
+ csk->tlshws.type = record_type;
+-
+- if (skb)
+- ULP_SKB_CB(skb)->ulp.tls.type = record_type;
+ }
+
+ if (!skb || (ULP_SKB_CB(skb)->flags & ULPCB_FLAG_NO_APPEND) ||
+diff --git a/drivers/dma/fsl-edma-common.c b/drivers/dma/fsl-edma-common.c
+index 5697c3622699..9285884758b2 100644
+--- a/drivers/dma/fsl-edma-common.c
++++ b/drivers/dma/fsl-edma-common.c
+@@ -352,26 +352,28 @@ static void fsl_edma_set_tcd_regs(struct fsl_edma_chan *fsl_chan,
+ /*
+ * TCD parameters are stored in struct fsl_edma_hw_tcd in little
+ * endian format. However, we need to load the TCD registers in
+- * big- or little-endian obeying the eDMA engine model endian.
++ * big- or little-endian obeying the eDMA engine model endian,
++ * and this is performed from specific edma_write functions
+ */
+ edma_writew(edma, 0, &regs->tcd[ch].csr);
+- edma_writel(edma, le32_to_cpu(tcd->saddr), &regs->tcd[ch].saddr);
+- edma_writel(edma, le32_to_cpu(tcd->daddr), &regs->tcd[ch].daddr);
+
+- edma_writew(edma, le16_to_cpu(tcd->attr), &regs->tcd[ch].attr);
+- edma_writew(edma, le16_to_cpu(tcd->soff), &regs->tcd[ch].soff);
++ edma_writel(edma, (s32)tcd->saddr, &regs->tcd[ch].saddr);
++ edma_writel(edma, (s32)tcd->daddr, &regs->tcd[ch].daddr);
+
+- edma_writel(edma, le32_to_cpu(tcd->nbytes), &regs->tcd[ch].nbytes);
+- edma_writel(edma, le32_to_cpu(tcd->slast), &regs->tcd[ch].slast);
++ edma_writew(edma, (s16)tcd->attr, &regs->tcd[ch].attr);
++ edma_writew(edma, tcd->soff, &regs->tcd[ch].soff);
+
+- edma_writew(edma, le16_to_cpu(tcd->citer), &regs->tcd[ch].citer);
+- edma_writew(edma, le16_to_cpu(tcd->biter), &regs->tcd[ch].biter);
+- edma_writew(edma, le16_to_cpu(tcd->doff), &regs->tcd[ch].doff);
++ edma_writel(edma, (s32)tcd->nbytes, &regs->tcd[ch].nbytes);
++ edma_writel(edma, (s32)tcd->slast, &regs->tcd[ch].slast);
+
+- edma_writel(edma, le32_to_cpu(tcd->dlast_sga),
++ edma_writew(edma, (s16)tcd->citer, &regs->tcd[ch].citer);
++ edma_writew(edma, (s16)tcd->biter, &regs->tcd[ch].biter);
++ edma_writew(edma, (s16)tcd->doff, &regs->tcd[ch].doff);
++
++ edma_writel(edma, (s32)tcd->dlast_sga,
+ &regs->tcd[ch].dlast_sga);
+
+- edma_writew(edma, le16_to_cpu(tcd->csr), &regs->tcd[ch].csr);
++ edma_writew(edma, (s16)tcd->csr, &regs->tcd[ch].csr);
+ }
+
+ static inline
+diff --git a/drivers/dma/ioat/dma.c b/drivers/dma/ioat/dma.c
+index 18c011e57592..8e2a4d1f0be5 100644
+--- a/drivers/dma/ioat/dma.c
++++ b/drivers/dma/ioat/dma.c
+@@ -26,6 +26,18 @@
+
+ #include "../dmaengine.h"
+
++int completion_timeout = 200;
++module_param(completion_timeout, int, 0644);
++MODULE_PARM_DESC(completion_timeout,
++ "set ioat completion timeout [msec] (default 200 [msec])");
++int idle_timeout = 2000;
++module_param(idle_timeout, int, 0644);
++MODULE_PARM_DESC(idle_timeout,
+ "set ioat idle timeout [msec] (default 2000 [msec])");
++
++#define IDLE_TIMEOUT msecs_to_jiffies(idle_timeout)
++#define COMPLETION_TIMEOUT msecs_to_jiffies(completion_timeout)
++
+ static char *chanerr_str[] = {
+ "DMA Transfer Source Address Error",
+ "DMA Transfer Destination Address Error",
+diff --git a/drivers/dma/ioat/dma.h b/drivers/dma/ioat/dma.h
+index b8e8e0b9693c..4ac9134962f3 100644
+--- a/drivers/dma/ioat/dma.h
++++ b/drivers/dma/ioat/dma.h
+@@ -99,8 +99,6 @@ struct ioatdma_chan {
+ #define IOAT_RUN 5
+ #define IOAT_CHAN_ACTIVE 6
+ struct timer_list timer;
+- #define COMPLETION_TIMEOUT msecs_to_jiffies(100)
+- #define IDLE_TIMEOUT msecs_to_jiffies(2000)
+ #define RESET_DELAY msecs_to_jiffies(100)
+ struct ioatdma_device *ioat_dma;
+ dma_addr_t completion_dma;
+diff --git a/drivers/dma/tegra210-adma.c b/drivers/dma/tegra210-adma.c
+index db58d7e4f9fe..c5fa2ef74abc 100644
+--- a/drivers/dma/tegra210-adma.c
++++ b/drivers/dma/tegra210-adma.c
+@@ -658,6 +658,7 @@ static int tegra_adma_alloc_chan_resources(struct dma_chan *dc)
+
+ ret = pm_runtime_get_sync(tdc2dev(tdc));
+ if (ret < 0) {
++ pm_runtime_put_noidle(tdc2dev(tdc));
+ free_irq(tdc->irq, tdc);
+ return ret;
+ }
+@@ -869,8 +870,10 @@ static int tegra_adma_probe(struct platform_device *pdev)
+ pm_runtime_enable(&pdev->dev);
+
+ ret = pm_runtime_get_sync(&pdev->dev);
+- if (ret < 0)
++ if (ret < 0) {
++ pm_runtime_put_noidle(&pdev->dev);
+ goto rpm_disable;
++ }
+
+ ret = tegra_adma_init(tdma);
+ if (ret)
+diff --git a/drivers/dma/ti/k3-udma-private.c b/drivers/dma/ti/k3-udma-private.c
+index 0b8f3dd6b146..77e8e67d995b 100644
+--- a/drivers/dma/ti/k3-udma-private.c
++++ b/drivers/dma/ti/k3-udma-private.c
+@@ -42,6 +42,7 @@ struct udma_dev *of_xudma_dev_get(struct device_node *np, const char *property)
+ ud = platform_get_drvdata(pdev);
+ if (!ud) {
+ pr_debug("UDMA has not been probed\n");
++ put_device(&pdev->dev);
+ return ERR_PTR(-EPROBE_DEFER);
+ }
+
+diff --git a/drivers/dma/ti/k3-udma.c b/drivers/dma/ti/k3-udma.c
+index 7cab23fe5c73..b777f1924968 100644
+--- a/drivers/dma/ti/k3-udma.c
++++ b/drivers/dma/ti/k3-udma.c
+@@ -1773,7 +1773,8 @@ static int udma_alloc_chan_resources(struct dma_chan *chan)
+ dev_err(ud->ddev.dev,
+ "Descriptor pool allocation failed\n");
+ uc->use_dma_pool = false;
+- return -ENOMEM;
++ ret = -ENOMEM;
++ goto err_cleanup;
+ }
+ }
+
+@@ -1793,16 +1794,18 @@ static int udma_alloc_chan_resources(struct dma_chan *chan)
+
+ ret = udma_get_chan_pair(uc);
+ if (ret)
+- return ret;
++ goto err_cleanup;
+
+ ret = udma_alloc_tx_resources(uc);
+- if (ret)
+- return ret;
++ if (ret) {
++ udma_put_rchan(uc);
++ goto err_cleanup;
++ }
+
+ ret = udma_alloc_rx_resources(uc);
+ if (ret) {
+ udma_free_tx_resources(uc);
+- return ret;
++ goto err_cleanup;
+ }
+
+ uc->config.src_thread = ud->psil_base + uc->tchan->id;
+@@ -1820,10 +1823,8 @@ static int udma_alloc_chan_resources(struct dma_chan *chan)
+ uc->id);
+
+ ret = udma_alloc_tx_resources(uc);
+- if (ret) {
+- uc->config.remote_thread_id = -1;
+- return ret;
+- }
++ if (ret)
++ goto err_cleanup;
+
+ uc->config.src_thread = ud->psil_base + uc->tchan->id;
+ uc->config.dst_thread = uc->config.remote_thread_id;
+@@ -1840,10 +1841,8 @@ static int udma_alloc_chan_resources(struct dma_chan *chan)
+ uc->id);
+
+ ret = udma_alloc_rx_resources(uc);
+- if (ret) {
+- uc->config.remote_thread_id = -1;
+- return ret;
+- }
++ if (ret)
++ goto err_cleanup;
+
+ uc->config.src_thread = uc->config.remote_thread_id;
+ uc->config.dst_thread = (ud->psil_base + uc->rchan->id) |
+@@ -1858,7 +1857,9 @@ static int udma_alloc_chan_resources(struct dma_chan *chan)
+ /* Can not happen */
+ dev_err(uc->ud->dev, "%s: chan%d invalid direction (%u)\n",
+ __func__, uc->id, uc->config.dir);
+- return -EINVAL;
++ ret = -EINVAL;
++ goto err_cleanup;
++
+ }
+
+ /* check if the channel configuration was successful */
+@@ -1867,7 +1868,7 @@ static int udma_alloc_chan_resources(struct dma_chan *chan)
+
+ if (udma_is_chan_running(uc)) {
+ dev_warn(ud->dev, "chan%d: is running!\n", uc->id);
+- udma_stop(uc);
++ udma_reset_chan(uc, false);
+ if (udma_is_chan_running(uc)) {
+ dev_err(ud->dev, "chan%d: won't stop!\n", uc->id);
+ goto err_res_free;
+@@ -1936,7 +1937,7 @@ err_psi_free:
+ err_res_free:
+ udma_free_tx_resources(uc);
+ udma_free_rx_resources(uc);
+-
++err_cleanup:
+ udma_reset_uchan(uc);
+
+ if (uc->use_dma_pool) {
+diff --git a/drivers/firmware/efi/efi-pstore.c b/drivers/firmware/efi/efi-pstore.c
+index c2f1d4e6630b..feb7fe6f2da7 100644
+--- a/drivers/firmware/efi/efi-pstore.c
++++ b/drivers/firmware/efi/efi-pstore.c
+@@ -356,10 +356,7 @@ static struct pstore_info efi_pstore_info = {
+
+ static __init int efivars_pstore_init(void)
+ {
+- if (!efi_rt_services_supported(EFI_RT_SUPPORTED_VARIABLE_SERVICES))
+- return 0;
+-
+- if (!efivars_kobject())
++ if (!efivars_kobject() || !efivar_supports_writes())
+ return 0;
+
+ if (efivars_pstore_disable)
+diff --git a/drivers/firmware/efi/efi.c b/drivers/firmware/efi/efi.c
+index 20a7ba47a792..99446b384726 100644
+--- a/drivers/firmware/efi/efi.c
++++ b/drivers/firmware/efi/efi.c
+@@ -176,11 +176,13 @@ static struct efivar_operations generic_ops;
+ static int generic_ops_register(void)
+ {
+ generic_ops.get_variable = efi.get_variable;
+- generic_ops.set_variable = efi.set_variable;
+- generic_ops.set_variable_nonblocking = efi.set_variable_nonblocking;
+ generic_ops.get_next_variable = efi.get_next_variable;
+ generic_ops.query_variable_store = efi_query_variable_store;
+
++ if (efi_rt_services_supported(EFI_RT_SUPPORTED_SET_VARIABLE)) {
++ generic_ops.set_variable = efi.set_variable;
++ generic_ops.set_variable_nonblocking = efi.set_variable_nonblocking;
++ }
+ return efivars_register(&generic_efivars, &generic_ops, efi_kobj);
+ }
+
+@@ -382,7 +384,8 @@ static int __init efisubsys_init(void)
+ return -ENOMEM;
+ }
+
+- if (efi_rt_services_supported(EFI_RT_SUPPORTED_VARIABLE_SERVICES)) {
++ if (efi_rt_services_supported(EFI_RT_SUPPORTED_GET_VARIABLE |
++ EFI_RT_SUPPORTED_GET_NEXT_VARIABLE_NAME)) {
+ efivar_ssdt_load();
+ error = generic_ops_register();
+ if (error)
+@@ -416,7 +419,8 @@ static int __init efisubsys_init(void)
+ err_remove_group:
+ sysfs_remove_group(efi_kobj, &efi_subsys_attr_group);
+ err_unregister:
+- if (efi_rt_services_supported(EFI_RT_SUPPORTED_VARIABLE_SERVICES))
++ if (efi_rt_services_supported(EFI_RT_SUPPORTED_GET_VARIABLE |
++ EFI_RT_SUPPORTED_GET_NEXT_VARIABLE_NAME))
+ generic_ops_unregister();
+ err_put:
+ kobject_put(efi_kobj);
+diff --git a/drivers/firmware/efi/efivars.c b/drivers/firmware/efi/efivars.c
+index 26528a46d99e..dcea137142b3 100644
+--- a/drivers/firmware/efi/efivars.c
++++ b/drivers/firmware/efi/efivars.c
+@@ -680,11 +680,8 @@ int efivars_sysfs_init(void)
+ struct kobject *parent_kobj = efivars_kobject();
+ int error = 0;
+
+- if (!efi_rt_services_supported(EFI_RT_SUPPORTED_VARIABLE_SERVICES))
+- return -ENODEV;
+-
+ /* No efivars has been registered yet */
+- if (!parent_kobj)
++ if (!parent_kobj || !efivar_supports_writes())
+ return 0;
+
+ printk(KERN_INFO "EFI Variables Facility v%s %s\n", EFIVARS_VERSION,
+diff --git a/drivers/firmware/efi/vars.c b/drivers/firmware/efi/vars.c
+index 5f2a4d162795..973eef234b36 100644
+--- a/drivers/firmware/efi/vars.c
++++ b/drivers/firmware/efi/vars.c
+@@ -1229,3 +1229,9 @@ out:
+ return rv;
+ }
+ EXPORT_SYMBOL_GPL(efivars_unregister);
++
++int efivar_supports_writes(void)
++{
++ return __efivars && __efivars->ops->set_variable;
++}
++EXPORT_SYMBOL_GPL(efivar_supports_writes);
+diff --git a/drivers/firmware/psci/psci_checker.c b/drivers/firmware/psci/psci_checker.c
+index 873841af8d57..d9b1a2d71223 100644
+--- a/drivers/firmware/psci/psci_checker.c
++++ b/drivers/firmware/psci/psci_checker.c
+@@ -157,8 +157,10 @@ static int alloc_init_cpu_groups(cpumask_var_t **pcpu_groups)
+
+ cpu_groups = kcalloc(nb_available_cpus, sizeof(cpu_groups),
+ GFP_KERNEL);
+- if (!cpu_groups)
++ if (!cpu_groups) {
++ free_cpumask_var(tmp);
+ return -ENOMEM;
++ }
+
+ cpumask_copy(tmp, cpu_online_mask);
+
+@@ -167,6 +169,7 @@ static int alloc_init_cpu_groups(cpumask_var_t **pcpu_groups)
+ topology_core_cpumask(cpumask_any(tmp));
+
+ if (!alloc_cpumask_var(&cpu_groups[num_groups], GFP_KERNEL)) {
++ free_cpumask_var(tmp);
+ free_cpu_groups(num_groups, &cpu_groups);
+ return -ENOMEM;
+ }
+diff --git a/drivers/fpga/dfl-afu-main.c b/drivers/fpga/dfl-afu-main.c
+index 65437b6a6842..77e257c88a1d 100644
+--- a/drivers/fpga/dfl-afu-main.c
++++ b/drivers/fpga/dfl-afu-main.c
+@@ -83,7 +83,8 @@ int __afu_port_disable(struct platform_device *pdev)
+ * on this port and minimum soft reset pulse width has elapsed.
+ * Driver polls port_soft_reset_ack to determine if reset done by HW.
+ */
+- if (readq_poll_timeout(base + PORT_HDR_CTRL, v, v & PORT_CTRL_SFTRST,
++ if (readq_poll_timeout(base + PORT_HDR_CTRL, v,
++ v & PORT_CTRL_SFTRST_ACK,
+ RST_POLL_INVL, RST_POLL_TIMEOUT)) {
+ dev_err(&pdev->dev, "timeout, fail to reset device\n");
+ return -ETIMEDOUT;
+diff --git a/drivers/fpga/dfl-pci.c b/drivers/fpga/dfl-pci.c
+index 538755062ab7..a78c409bf2c4 100644
+--- a/drivers/fpga/dfl-pci.c
++++ b/drivers/fpga/dfl-pci.c
+@@ -227,7 +227,6 @@ static int cci_pci_sriov_configure(struct pci_dev *pcidev, int num_vfs)
+ {
+ struct cci_drvdata *drvdata = pci_get_drvdata(pcidev);
+ struct dfl_fpga_cdev *cdev = drvdata->cdev;
+- int ret = 0;
+
+ if (!num_vfs) {
+ /*
+@@ -239,6 +238,8 @@ static int cci_pci_sriov_configure(struct pci_dev *pcidev, int num_vfs)
+ dfl_fpga_cdev_config_ports_pf(cdev);
+
+ } else {
++ int ret;
++
+ /*
+ * before enable SRIOV, put released ports into VF access mode
+ * first of all.
+diff --git a/drivers/gpio/gpio-arizona.c b/drivers/gpio/gpio-arizona.c
+index 5640efe5e750..5bda38e0780f 100644
+--- a/drivers/gpio/gpio-arizona.c
++++ b/drivers/gpio/gpio-arizona.c
+@@ -64,6 +64,7 @@ static int arizona_gpio_get(struct gpio_chip *chip, unsigned offset)
+ ret = pm_runtime_get_sync(chip->parent);
+ if (ret < 0) {
+ dev_err(chip->parent, "Failed to resume: %d\n", ret);
++ pm_runtime_put_autosuspend(chip->parent);
+ return ret;
+ }
+
+@@ -72,12 +73,15 @@ static int arizona_gpio_get(struct gpio_chip *chip, unsigned offset)
+ if (ret < 0) {
+ dev_err(chip->parent, "Failed to drop cache: %d\n",
+ ret);
++ pm_runtime_put_autosuspend(chip->parent);
+ return ret;
+ }
+
+ ret = regmap_read(arizona->regmap, reg, &val);
+- if (ret < 0)
++ if (ret < 0) {
++ pm_runtime_put_autosuspend(chip->parent);
+ return ret;
++ }
+
+ pm_runtime_mark_last_busy(chip->parent);
+ pm_runtime_put_autosuspend(chip->parent);
+@@ -106,6 +110,7 @@ static int arizona_gpio_direction_out(struct gpio_chip *chip,
+ ret = pm_runtime_get_sync(chip->parent);
+ if (ret < 0) {
+ dev_err(chip->parent, "Failed to resume: %d\n", ret);
++ pm_runtime_put(chip->parent);
+ return ret;
+ }
+ }
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
+index c0f9a651dc06..92b18c4760e5 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
+@@ -1156,27 +1156,37 @@ static void amdgpu_ib_preempt_job_recovery(struct drm_gpu_scheduler *sched)
+ static void amdgpu_ib_preempt_mark_partial_job(struct amdgpu_ring *ring)
+ {
+ struct amdgpu_job *job;
+- struct drm_sched_job *s_job;
++ struct drm_sched_job *s_job, *tmp;
+ uint32_t preempt_seq;
+ struct dma_fence *fence, **ptr;
+ struct amdgpu_fence_driver *drv = &ring->fence_drv;
+ struct drm_gpu_scheduler *sched = &ring->sched;
++ bool preempted = true;
+
+ if (ring->funcs->type != AMDGPU_RING_TYPE_GFX)
+ return;
+
+ preempt_seq = le32_to_cpu(*(drv->cpu_addr + 2));
+- if (preempt_seq <= atomic_read(&drv->last_seq))
+- return;
++ if (preempt_seq <= atomic_read(&drv->last_seq)) {
++ preempted = false;
++ goto no_preempt;
++ }
+
+ preempt_seq &= drv->num_fences_mask;
+ ptr = &drv->fences[preempt_seq];
+ fence = rcu_dereference_protected(*ptr, 1);
+
++no_preempt:
+ spin_lock(&sched->job_list_lock);
+- list_for_each_entry(s_job, &sched->ring_mirror_list, node) {
++ list_for_each_entry_safe(s_job, tmp, &sched->ring_mirror_list, node) {
++ if (dma_fence_is_signaled(&s_job->s_fence->finished)) {
++ /* remove job from ring_mirror_list */
++ list_del_init(&s_job->node);
++ sched->ops->free_job(s_job);
++ continue;
++ }
+ job = to_amdgpu_job(s_job);
+- if (job->fence == fence)
++ if (preempted && job->fence == fence)
+ /* mark the job as preempted */
+ job->preemption_status |= AMDGPU_IB_PREEMPTED;
+ }
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c
+index 96b8feb77b15..b14b0b4ffeb2 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c
+@@ -775,8 +775,7 @@ static ssize_t amdgpu_set_pp_od_clk_voltage(struct device *dev,
+ tmp_str++;
+ while (isspace(*++tmp_str));
+
+- while (tmp_str[0]) {
+- sub_str = strsep(&tmp_str, delimiter);
++ while ((sub_str = strsep(&tmp_str, delimiter)) != NULL) {
+ ret = kstrtol(sub_str, 0, &parameter[parameter_size]);
+ if (ret)
+ return -EINVAL;
+@@ -1036,8 +1035,7 @@ static ssize_t amdgpu_read_mask(const char *buf, size_t count, uint32_t *mask)
+ memcpy(buf_cpy, buf, bytes);
+ buf_cpy[bytes] = '\0';
+ tmp = buf_cpy;
+- while (tmp[0]) {
+- sub_str = strsep(&tmp, delimiter);
++ while ((sub_str = strsep(&tmp, delimiter)) != NULL) {
+ if (strlen(sub_str)) {
+ ret = kstrtol(sub_str, 0, &level);
+ if (ret)
+@@ -1634,8 +1632,7 @@ static ssize_t amdgpu_set_pp_power_profile_mode(struct device *dev,
+ i++;
+ memcpy(buf_cpy, buf, count-i);
+ tmp_str = buf_cpy;
+- while (tmp_str[0]) {
+- sub_str = strsep(&tmp_str, delimiter);
++ while ((sub_str = strsep(&tmp_str, delimiter)) != NULL) {
+ ret = kstrtol(sub_str, 0, &parameter[parameter_size]);
+ if (ret)
+ return -EINVAL;
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
+index 0e0daf0021b6..ff94f756978d 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
+@@ -4746,12 +4746,17 @@ static int gfx_v10_0_ring_preempt_ib(struct amdgpu_ring *ring)
+ struct amdgpu_device *adev = ring->adev;
+ struct amdgpu_kiq *kiq = &adev->gfx.kiq;
+ struct amdgpu_ring *kiq_ring = &kiq->ring;
++ unsigned long flags;
+
+ if (!kiq->pmf || !kiq->pmf->kiq_unmap_queues)
+ return -EINVAL;
+
+- if (amdgpu_ring_alloc(kiq_ring, kiq->pmf->unmap_queues_size))
++ spin_lock_irqsave(&kiq->ring_lock, flags);
++
++ if (amdgpu_ring_alloc(kiq_ring, kiq->pmf->unmap_queues_size)) {
++ spin_unlock_irqrestore(&kiq->ring_lock, flags);
+ return -ENOMEM;
++ }
+
+ /* assert preemption condition */
+ amdgpu_ring_set_preempt_cond_exec(ring, false);
+@@ -4762,6 +4767,8 @@ static int gfx_v10_0_ring_preempt_ib(struct amdgpu_ring *ring)
+ ++ring->trail_seq);
+ amdgpu_ring_commit(kiq_ring);
+
++ spin_unlock_irqrestore(&kiq->ring_lock, flags);
++
+ /* poll the trailing fence */
+ for (i = 0; i < adev->usec_timeout; i++) {
+ if (ring->trail_seq ==
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index d06fa6380179..837a286469ec 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -1342,9 +1342,14 @@ static int dm_late_init(void *handle)
+ struct dmcu_iram_parameters params;
+ unsigned int linear_lut[16];
+ int i;
+- struct dmcu *dmcu = adev->dm.dc->res_pool->dmcu;
++ struct dmcu *dmcu = NULL;
+ bool ret;
+
++ if (!adev->dm.fw_dmcu && !adev->dm.dmub_fw)
++ return detect_mst_link_for_all_connectors(adev->ddev);
++
++ dmcu = adev->dm.dc->res_pool->dmcu;
++
+ for (i = 0; i < 16; i++)
+ linear_lut[i] = 0xFFFF * i / 15;
+
+diff --git a/drivers/gpu/drm/amd/powerplay/smumgr/vegam_smumgr.c b/drivers/gpu/drm/amd/powerplay/smumgr/vegam_smumgr.c
+index b0e0d67cd54b..2a081a792c6b 100644
+--- a/drivers/gpu/drm/amd/powerplay/smumgr/vegam_smumgr.c
++++ b/drivers/gpu/drm/amd/powerplay/smumgr/vegam_smumgr.c
+@@ -642,9 +642,6 @@ static int vegam_get_dependency_volt_by_clk(struct pp_hwmgr *hwmgr,
+
+ /* sclk is bigger than max sclk in the dependence table */
+ *voltage |= (dep_table->entries[i - 1].vddc * VOLTAGE_SCALE) << VDDC_SHIFT;
+- vddci = phm_find_closest_vddci(&(data->vddci_voltage_table),
+- (dep_table->entries[i - 1].vddc -
+- (uint16_t)VDDC_VDDCI_DELTA));
+
+ if (SMU7_VOLTAGE_CONTROL_NONE == data->vddci_control)
+ *voltage |= (data->vbios_boot_state.vddci_bootup_value *
+@@ -652,8 +649,13 @@ static int vegam_get_dependency_volt_by_clk(struct pp_hwmgr *hwmgr,
+ else if (dep_table->entries[i - 1].vddci)
+ *voltage |= (dep_table->entries[i - 1].vddci *
+ VOLTAGE_SCALE) << VDDC_SHIFT;
+- else
++ else {
++ vddci = phm_find_closest_vddci(&(data->vddci_voltage_table),
++ (dep_table->entries[i - 1].vddc -
++ (uint16_t)VDDC_VDDCI_DELTA));
++
+ *voltage |= (vddci * VOLTAGE_SCALE) << VDDCI_SHIFT;
++ }
+
+ if (SMU7_VOLTAGE_CONTROL_NONE == data->mvdd_control)
+ *mvdd = data->vbios_boot_state.mvdd_bootup_value * VOLTAGE_SCALE;
+diff --git a/drivers/gpu/drm/nouveau/nouveau_svm.c b/drivers/gpu/drm/nouveau/nouveau_svm.c
+index 645fedd77e21..a9ce86740799 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_svm.c
++++ b/drivers/gpu/drm/nouveau/nouveau_svm.c
+@@ -534,6 +534,7 @@ static int nouveau_range_fault(struct nouveau_svmm *svmm,
+ .flags = nouveau_svm_pfn_flags,
+ .values = nouveau_svm_pfn_values,
+ .pfn_shift = NVIF_VMM_PFNMAP_V0_ADDR_SHIFT,
++ .dev_private_owner = drm->dev,
+ };
+ struct mm_struct *mm = notifier->notifier.mm;
+ long ret;
+diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/i2c/auxg94.c b/drivers/gpu/drm/nouveau/nvkm/subdev/i2c/auxg94.c
+index c8ab1b5741a3..db7769cb33eb 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/subdev/i2c/auxg94.c
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/i2c/auxg94.c
+@@ -118,10 +118,10 @@ g94_i2c_aux_xfer(struct nvkm_i2c_aux *obj, bool retry,
+ if (retries)
+ udelay(400);
+
+- /* transaction request, wait up to 1ms for it to complete */
++ /* transaction request, wait up to 2ms for it to complete */
+ nvkm_wr32(device, 0x00e4e4 + base, 0x00010000 | ctrl);
+
+- timeout = 1000;
++ timeout = 2000;
+ do {
+ ctrl = nvkm_rd32(device, 0x00e4e4 + base);
+ udelay(1);
+diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/i2c/auxgm200.c b/drivers/gpu/drm/nouveau/nvkm/subdev/i2c/auxgm200.c
+index 7ef60895f43a..edb6148cbca0 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/subdev/i2c/auxgm200.c
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/i2c/auxgm200.c
+@@ -118,10 +118,10 @@ gm200_i2c_aux_xfer(struct nvkm_i2c_aux *obj, bool retry,
+ if (retries)
+ udelay(400);
+
+- /* transaction request, wait up to 1ms for it to complete */
++ /* transaction request, wait up to 2ms for it to complete */
+ nvkm_wr32(device, 0x00d954 + base, 0x00010000 | ctrl);
+
+- timeout = 1000;
++ timeout = 2000;
+ do {
+ ctrl = nvkm_rd32(device, 0x00d954 + base);
+ udelay(1);
+diff --git a/drivers/gpu/drm/sun4i/sun4i_hdmi_enc.c b/drivers/gpu/drm/sun4i/sun4i_hdmi_enc.c
+index f07e0c32b93a..4c5072a578bf 100644
+--- a/drivers/gpu/drm/sun4i/sun4i_hdmi_enc.c
++++ b/drivers/gpu/drm/sun4i/sun4i_hdmi_enc.c
+@@ -263,7 +263,7 @@ sun4i_hdmi_connector_detect(struct drm_connector *connector, bool force)
+ unsigned long reg;
+
+ reg = readl(hdmi->base + SUN4I_HDMI_HPD_REG);
+- if (reg & SUN4I_HDMI_HPD_HIGH) {
++ if (!(reg & SUN4I_HDMI_HPD_HIGH)) {
+ cec_phys_addr_invalidate(hdmi->cec_adap);
+ return connector_status_disconnected;
+ }
+diff --git a/drivers/hid/hid-alps.c b/drivers/hid/hid-alps.c
+index b2ad319a74b9..d33f5abc8f64 100644
+--- a/drivers/hid/hid-alps.c
++++ b/drivers/hid/hid-alps.c
+@@ -25,6 +25,7 @@
+
+ #define U1_MOUSE_REPORT_ID 0x01 /* Mouse data ReportID */
+ #define U1_ABSOLUTE_REPORT_ID 0x03 /* Absolute data ReportID */
++#define U1_ABSOLUTE_REPORT_ID_SECD 0x02 /* FW-PTP Absolute data ReportID */
+ #define U1_FEATURE_REPORT_ID 0x05 /* Feature ReportID */
+ #define U1_SP_ABSOLUTE_REPORT_ID 0x06 /* Feature ReportID */
+
+@@ -368,6 +369,7 @@ static int u1_raw_event(struct alps_dev *hdata, u8 *data, int size)
+ case U1_FEATURE_REPORT_ID:
+ break;
+ case U1_ABSOLUTE_REPORT_ID:
++ case U1_ABSOLUTE_REPORT_ID_SECD:
+ for (i = 0; i < hdata->max_fingers; i++) {
+ u8 *contact = &data[i * 5];
+
+diff --git a/drivers/hid/hid-apple.c b/drivers/hid/hid-apple.c
+index d732d1d10caf..6909c045fece 100644
+--- a/drivers/hid/hid-apple.c
++++ b/drivers/hid/hid-apple.c
+@@ -54,6 +54,7 @@ MODULE_PARM_DESC(swap_opt_cmd, "Swap the Option (\"Alt\") and Command (\"Flag\")
+ struct apple_sc {
+ unsigned long quirks;
+ unsigned int fn_on;
++ unsigned int fn_found;
+ DECLARE_BITMAP(pressed_numlock, KEY_CNT);
+ };
+
+@@ -339,12 +340,15 @@ static int apple_input_mapping(struct hid_device *hdev, struct hid_input *hi,
+ struct hid_field *field, struct hid_usage *usage,
+ unsigned long **bit, int *max)
+ {
++ struct apple_sc *asc = hid_get_drvdata(hdev);
++
+ if (usage->hid == (HID_UP_CUSTOM | 0x0003) ||
+ usage->hid == (HID_UP_MSVENDOR | 0x0003) ||
+ usage->hid == (HID_UP_HPVENDOR2 | 0x0003)) {
+ /* The fn key on Apple USB keyboards */
+ set_bit(EV_REP, hi->input->evbit);
+ hid_map_usage_clear(hi, usage, bit, max, EV_KEY, KEY_FN);
++ asc->fn_found = true;
+ apple_setup_input(hi->input);
+ return 1;
+ }
+@@ -371,6 +375,19 @@ static int apple_input_mapped(struct hid_device *hdev, struct hid_input *hi,
+ return 0;
+ }
+
++static int apple_input_configured(struct hid_device *hdev,
++ struct hid_input *hidinput)
++{
++ struct apple_sc *asc = hid_get_drvdata(hdev);
++
++ if ((asc->quirks & APPLE_HAS_FN) && !asc->fn_found) {
++ hid_info(hdev, "Fn key not found (Apple Wireless Keyboard clone?), disabling Fn key handling\n");
++ asc->quirks = 0;
++ }
++
++ return 0;
++}
++
+ static int apple_probe(struct hid_device *hdev,
+ const struct hid_device_id *id)
+ {
+@@ -585,6 +602,7 @@ static struct hid_driver apple_driver = {
+ .event = apple_event,
+ .input_mapping = apple_input_mapping,
+ .input_mapped = apple_input_mapped,
++ .input_configured = apple_input_configured,
+ };
+ module_hid_driver(apple_driver);
+
+diff --git a/drivers/hid/hid-steam.c b/drivers/hid/hid-steam.c
+index 6286204d4c56..a3b151b29bd7 100644
+--- a/drivers/hid/hid-steam.c
++++ b/drivers/hid/hid-steam.c
+@@ -526,7 +526,8 @@ static int steam_register(struct steam_device *steam)
+ steam_battery_register(steam);
+
+ mutex_lock(&steam_devices_lock);
+- list_add(&steam->list, &steam_devices);
++ if (list_empty(&steam->list))
++ list_add(&steam->list, &steam_devices);
+ mutex_unlock(&steam_devices_lock);
+ }
+
+@@ -552,7 +553,7 @@ static void steam_unregister(struct steam_device *steam)
+ hid_info(steam->hdev, "Steam Controller '%s' disconnected",
+ steam->serial_no);
+ mutex_lock(&steam_devices_lock);
+- list_del(&steam->list);
++ list_del_init(&steam->list);
+ mutex_unlock(&steam_devices_lock);
+ steam->serial_no[0] = 0;
+ }
+@@ -738,6 +739,7 @@ static int steam_probe(struct hid_device *hdev,
+ mutex_init(&steam->mutex);
+ steam->quirks = id->driver_data;
+ INIT_WORK(&steam->work_connect, steam_work_connect_cb);
++ INIT_LIST_HEAD(&steam->list);
+
+ steam->client_hdev = steam_create_client_hid(hdev);
+ if (IS_ERR(steam->client_hdev)) {
+diff --git a/drivers/hid/i2c-hid/i2c-hid-dmi-quirks.c b/drivers/hid/i2c-hid/i2c-hid-dmi-quirks.c
+index ec142bc8c1da..35f3bfc3e6f5 100644
+--- a/drivers/hid/i2c-hid/i2c-hid-dmi-quirks.c
++++ b/drivers/hid/i2c-hid/i2c-hid-dmi-quirks.c
+@@ -373,6 +373,14 @@ static const struct dmi_system_id i2c_hid_dmi_desc_override_table[] = {
+ },
+ .driver_data = (void *)&sipodev_desc
+ },
++ {
++ .ident = "Mediacom FlexBook edge 13",
++ .matches = {
++ DMI_EXACT_MATCH(DMI_SYS_VENDOR, "MEDIACOM"),
++ DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "FlexBook_edge13-M-FBE13"),
++ },
++ .driver_data = (void *)&sipodev_desc
++ },
+ {
+ .ident = "Odys Winbook 13",
+ .matches = {
+diff --git a/drivers/hwmon/aspeed-pwm-tacho.c b/drivers/hwmon/aspeed-pwm-tacho.c
+index 33fb54845bf6..3d8239fd66ed 100644
+--- a/drivers/hwmon/aspeed-pwm-tacho.c
++++ b/drivers/hwmon/aspeed-pwm-tacho.c
+@@ -851,6 +851,8 @@ static int aspeed_create_fan(struct device *dev,
+ ret = of_property_read_u32(child, "reg", &pwm_port);
+ if (ret)
+ return ret;
++ if (pwm_port >= ARRAY_SIZE(pwm_port_params))
++ return -EINVAL;
+ aspeed_create_pwm_port(priv, (u8)pwm_port);
+
+ ret = of_property_count_u8_elems(child, "cooling-levels");
+diff --git a/drivers/hwmon/nct6775.c b/drivers/hwmon/nct6775.c
+index 7efa6bfef060..ba9b96973e80 100644
+--- a/drivers/hwmon/nct6775.c
++++ b/drivers/hwmon/nct6775.c
+@@ -786,13 +786,13 @@ static const char *const nct6798_temp_label[] = {
+ "Agent1 Dimm1",
+ "BYTE_TEMP0",
+ "BYTE_TEMP1",
+- "",
+- "",
++ "PECI Agent 0 Calibration", /* undocumented */
++ "PECI Agent 1 Calibration", /* undocumented */
+ "",
+ "Virtual_TEMP"
+ };
+
+-#define NCT6798_TEMP_MASK 0x8fff0ffe
++#define NCT6798_TEMP_MASK 0xbfff0ffe
+ #define NCT6798_VIRT_TEMP_MASK 0x80000c00
+
+ /* NCT6102D/NCT6106D specific data */
+diff --git a/drivers/hwmon/pmbus/adm1275.c b/drivers/hwmon/pmbus/adm1275.c
+index e25f541227da..19317575d1c6 100644
+--- a/drivers/hwmon/pmbus/adm1275.c
++++ b/drivers/hwmon/pmbus/adm1275.c
+@@ -465,6 +465,7 @@ MODULE_DEVICE_TABLE(i2c, adm1275_id);
+ static int adm1275_probe(struct i2c_client *client,
+ const struct i2c_device_id *id)
+ {
++ s32 (*config_read_fn)(const struct i2c_client *client, u8 reg);
+ u8 block_buffer[I2C_SMBUS_BLOCK_MAX + 1];
+ int config, device_config;
+ int ret;
+@@ -510,11 +511,16 @@ static int adm1275_probe(struct i2c_client *client,
+ "Device mismatch: Configured %s, detected %s\n",
+ id->name, mid->name);
+
+- config = i2c_smbus_read_byte_data(client, ADM1275_PMON_CONFIG);
++ if (mid->driver_data == adm1272 || mid->driver_data == adm1278 ||
++ mid->driver_data == adm1293 || mid->driver_data == adm1294)
++ config_read_fn = i2c_smbus_read_word_data;
++ else
++ config_read_fn = i2c_smbus_read_byte_data;
++ config = config_read_fn(client, ADM1275_PMON_CONFIG);
+ if (config < 0)
+ return config;
+
+- device_config = i2c_smbus_read_byte_data(client, ADM1275_DEVICE_CONFIG);
++ device_config = config_read_fn(client, ADM1275_DEVICE_CONFIG);
+ if (device_config < 0)
+ return device_config;
+
+diff --git a/drivers/hwmon/scmi-hwmon.c b/drivers/hwmon/scmi-hwmon.c
+index 286d3cfda7de..d421e691318b 100644
+--- a/drivers/hwmon/scmi-hwmon.c
++++ b/drivers/hwmon/scmi-hwmon.c
+@@ -147,7 +147,7 @@ static enum hwmon_sensor_types scmi_types[] = {
+ [ENERGY] = hwmon_energy,
+ };
+
+-static u32 hwmon_attributes[] = {
++static u32 hwmon_attributes[hwmon_max] = {
+ [hwmon_chip] = HWMON_C_REGISTER_TZ,
+ [hwmon_temp] = HWMON_T_INPUT | HWMON_T_LABEL,
+ [hwmon_in] = HWMON_I_INPUT | HWMON_I_LABEL,
+diff --git a/drivers/i2c/busses/i2c-qcom-geni.c b/drivers/i2c/busses/i2c-qcom-geni.c
+index 18d1e4fd4cf3..7f130829bf01 100644
+--- a/drivers/i2c/busses/i2c-qcom-geni.c
++++ b/drivers/i2c/busses/i2c-qcom-geni.c
+@@ -367,7 +367,6 @@ static int geni_i2c_rx_one_msg(struct geni_i2c_dev *gi2c, struct i2c_msg *msg,
+ geni_se_select_mode(se, GENI_SE_FIFO);
+
+ writel_relaxed(len, se->base + SE_I2C_RX_TRANS_LEN);
+- geni_se_setup_m_cmd(se, I2C_READ, m_param);
+
+ if (dma_buf && geni_se_rx_dma_prep(se, dma_buf, len, &rx_dma)) {
+ geni_se_select_mode(se, GENI_SE_FIFO);
+@@ -375,6 +374,8 @@ static int geni_i2c_rx_one_msg(struct geni_i2c_dev *gi2c, struct i2c_msg *msg,
+ dma_buf = NULL;
+ }
+
++ geni_se_setup_m_cmd(se, I2C_READ, m_param);
++
+ time_left = wait_for_completion_timeout(&gi2c->done, XFER_TIMEOUT);
+ if (!time_left)
+ geni_i2c_abort_xfer(gi2c);
+@@ -408,7 +409,6 @@ static int geni_i2c_tx_one_msg(struct geni_i2c_dev *gi2c, struct i2c_msg *msg,
+ geni_se_select_mode(se, GENI_SE_FIFO);
+
+ writel_relaxed(len, se->base + SE_I2C_TX_TRANS_LEN);
+- geni_se_setup_m_cmd(se, I2C_WRITE, m_param);
+
+ if (dma_buf && geni_se_tx_dma_prep(se, dma_buf, len, &tx_dma)) {
+ geni_se_select_mode(se, GENI_SE_FIFO);
+@@ -416,6 +416,8 @@ static int geni_i2c_tx_one_msg(struct geni_i2c_dev *gi2c, struct i2c_msg *msg,
+ dma_buf = NULL;
+ }
+
++ geni_se_setup_m_cmd(se, I2C_WRITE, m_param);
++
+ if (!dma_buf) /* Get FIFO IRQ */
+ writel_relaxed(1, se->base + SE_GENI_TX_WATERMARK_REG);
+
+diff --git a/drivers/i2c/busses/i2c-rcar.c b/drivers/i2c/busses/i2c-rcar.c
+index 3b5397aa4ca6..50dd98803ca0 100644
+--- a/drivers/i2c/busses/i2c-rcar.c
++++ b/drivers/i2c/busses/i2c-rcar.c
+@@ -868,6 +868,7 @@ static int rcar_unreg_slave(struct i2c_client *slave)
+ /* disable irqs and ensure none is running before clearing ptr */
+ rcar_i2c_write(priv, ICSIER, 0);
+ rcar_i2c_write(priv, ICSCR, 0);
++ rcar_i2c_write(priv, ICSAR, 0); /* Gen2: must be 0 if not using slave */
+
+ synchronize_irq(priv->irq);
+ priv->slave = NULL;
+@@ -971,6 +972,8 @@ static int rcar_i2c_probe(struct platform_device *pdev)
+ if (ret < 0)
+ goto out_pm_put;
+
++ rcar_i2c_write(priv, ICSAR, 0); /* Gen2: must be 0 if not using slave */
++
+ if (priv->devtype == I2C_RCAR_GEN3) {
+ priv->rstc = devm_reset_control_get_exclusive(&pdev->dev, NULL);
+ if (!IS_ERR(priv->rstc)) {
+diff --git a/drivers/infiniband/core/cm.c b/drivers/infiniband/core/cm.c
+index 1c2bf18cda9f..83b66757c7ae 100644
+--- a/drivers/infiniband/core/cm.c
++++ b/drivers/infiniband/core/cm.c
+@@ -3723,10 +3723,12 @@ static int cm_send_sidr_rep_locked(struct cm_id_private *cm_id_priv,
+ return ret;
+ }
+ cm_id_priv->id.state = IB_CM_IDLE;
++ spin_lock_irq(&cm.lock);
+ if (!RB_EMPTY_NODE(&cm_id_priv->sidr_id_node)) {
+ rb_erase(&cm_id_priv->sidr_id_node, &cm.remote_sidr_table);
+ RB_CLEAR_NODE(&cm_id_priv->sidr_id_node);
+ }
++ spin_unlock_irq(&cm.lock);
+ return 0;
+ }
+
+diff --git a/drivers/infiniband/core/rdma_core.c b/drivers/infiniband/core/rdma_core.c
+index 75bcbc625616..3ab84fcbaade 100644
+--- a/drivers/infiniband/core/rdma_core.c
++++ b/drivers/infiniband/core/rdma_core.c
+@@ -638,9 +638,6 @@ void rdma_alloc_commit_uobject(struct ib_uobject *uobj,
+ {
+ struct ib_uverbs_file *ufile = attrs->ufile;
+
+- /* alloc_commit consumes the uobj kref */
+- uobj->uapi_object->type_class->alloc_commit(uobj);
+-
+ /* kref is held so long as the uobj is on the uobj list. */
+ uverbs_uobject_get(uobj);
+ spin_lock_irq(&ufile->uobjects_lock);
+@@ -650,6 +647,9 @@ void rdma_alloc_commit_uobject(struct ib_uobject *uobj,
+ /* matches atomic_set(-1) in alloc_uobj */
+ atomic_set(&uobj->usecnt, 0);
+
++ /* alloc_commit consumes the uobj kref */
++ uobj->uapi_object->type_class->alloc_commit(uobj);
++
+ /* Matches the down_read in rdma_alloc_begin_uobject */
+ up_read(&ufile->hw_destroy_rwsem);
+ }
+diff --git a/drivers/infiniband/hw/mlx5/odp.c b/drivers/infiniband/hw/mlx5/odp.c
+index 3de7606d4a1a..bdeb6500a919 100644
+--- a/drivers/infiniband/hw/mlx5/odp.c
++++ b/drivers/infiniband/hw/mlx5/odp.c
+@@ -601,6 +601,23 @@ void mlx5_ib_free_implicit_mr(struct mlx5_ib_mr *imr)
+ */
+ synchronize_srcu(&dev->odp_srcu);
+
++ /*
++ * All work on the prefetch list must be completed, xa_erase() prevented
++ * new work from being created.
++ */
++ wait_event(imr->q_deferred_work, !atomic_read(&imr->num_deferred_work));
++
++ /*
++ * At this point it is forbidden for any other thread to enter
++ * pagefault_mr() on this imr. It is already forbidden to call
++ * pagefault_mr() on an implicit child. Due to this additions to
++ * implicit_children are prevented.
++ */
++
++ /*
++ * Block destroy_unused_implicit_child_mr() from incrementing
++ * num_deferred_work.
++ */
+ xa_lock(&imr->implicit_children);
+ xa_for_each (&imr->implicit_children, idx, mtt) {
+ __xa_erase(&imr->implicit_children, idx);
+@@ -609,9 +626,8 @@ void mlx5_ib_free_implicit_mr(struct mlx5_ib_mr *imr)
+ xa_unlock(&imr->implicit_children);
+
+ /*
+- * num_deferred_work can only be incremented inside the odp_srcu, or
+- * under xa_lock while the child is in the xarray. Thus at this point
+- * it is only decreasing, and all work holding it is now on the wq.
++ * Wait for any concurrent destroy_unused_implicit_child_mr() to
++ * complete.
+ */
+ wait_event(imr->q_deferred_work, !atomic_read(&imr->num_deferred_work));
+
+diff --git a/drivers/infiniband/hw/mlx5/srq_cmd.c b/drivers/infiniband/hw/mlx5/srq_cmd.c
+index 8fc3630a9d4c..0224231a2e6f 100644
+--- a/drivers/infiniband/hw/mlx5/srq_cmd.c
++++ b/drivers/infiniband/hw/mlx5/srq_cmd.c
+@@ -83,11 +83,11 @@ struct mlx5_core_srq *mlx5_cmd_get_srq(struct mlx5_ib_dev *dev, u32 srqn)
+ struct mlx5_srq_table *table = &dev->srq_table;
+ struct mlx5_core_srq *srq;
+
+- xa_lock(&table->array);
++ xa_lock_irq(&table->array);
+ srq = xa_load(&table->array, srqn);
+ if (srq)
+ refcount_inc(&srq->common.refcount);
+- xa_unlock(&table->array);
++ xa_unlock_irq(&table->array);
+
+ return srq;
+ }
+diff --git a/drivers/input/mouse/elan_i2c_core.c b/drivers/input/mouse/elan_i2c_core.c
+index 8719da540383..196e8505dd8d 100644
+--- a/drivers/input/mouse/elan_i2c_core.c
++++ b/drivers/input/mouse/elan_i2c_core.c
+@@ -951,6 +951,8 @@ static void elan_report_absolute(struct elan_tp_data *data, u8 *packet)
+ u8 hover_info = packet[ETP_HOVER_INFO_OFFSET];
+ bool contact_valid, hover_event;
+
++ pm_wakeup_event(&data->client->dev, 0);
++
+ hover_event = hover_info & 0x40;
+ for (i = 0; i < ETP_MAX_FINGERS; i++) {
+ contact_valid = tp_info & (1U << (3 + i));
+@@ -974,6 +976,8 @@ static void elan_report_trackpoint(struct elan_tp_data *data, u8 *report)
+ u8 *packet = &report[ETP_REPORT_ID_OFFSET + 1];
+ int x, y;
+
++ pm_wakeup_event(&data->client->dev, 0);
++
+ if (!data->tp_input) {
+ dev_warn_once(&data->client->dev,
+ "received a trackpoint report while no trackpoint device has been created. Please report upstream.\n");
+@@ -998,7 +1002,6 @@ static void elan_report_trackpoint(struct elan_tp_data *data, u8 *report)
+ static irqreturn_t elan_isr(int irq, void *dev_id)
+ {
+ struct elan_tp_data *data = dev_id;
+- struct device *dev = &data->client->dev;
+ int error;
+ u8 report[ETP_MAX_REPORT_LEN];
+
+@@ -1016,8 +1019,6 @@ static irqreturn_t elan_isr(int irq, void *dev_id)
+ if (error)
+ goto out;
+
+- pm_wakeup_event(dev, 0);
+-
+ switch (report[ETP_REPORT_ID_OFFSET]) {
+ case ETP_REPORT_ID:
+ elan_report_absolute(data, report);
+@@ -1026,7 +1027,7 @@ static irqreturn_t elan_isr(int irq, void *dev_id)
+ elan_report_trackpoint(data, report);
+ break;
+ default:
+- dev_err(dev, "invalid report id data (%x)\n",
++ dev_err(&data->client->dev, "invalid report id data (%x)\n",
+ report[ETP_REPORT_ID_OFFSET]);
+ }
+
+diff --git a/drivers/input/mouse/synaptics.c b/drivers/input/mouse/synaptics.c
+index 758dae8d6500..4b81b2d0fe06 100644
+--- a/drivers/input/mouse/synaptics.c
++++ b/drivers/input/mouse/synaptics.c
+@@ -179,6 +179,7 @@ static const char * const smbus_pnp_ids[] = {
+ "LEN0093", /* T480 */
+ "LEN0096", /* X280 */
+ "LEN0097", /* X280 -> ALPS trackpoint */
++ "LEN0099", /* X1 Extreme 1st */
+ "LEN009b", /* T580 */
+ "LEN200f", /* T450s */
+ "LEN2044", /* L470 */
+diff --git a/drivers/interconnect/qcom/msm8916.c b/drivers/interconnect/qcom/msm8916.c
+index e94f3c5228b7..42c6c5581662 100644
+--- a/drivers/interconnect/qcom/msm8916.c
++++ b/drivers/interconnect/qcom/msm8916.c
+@@ -197,13 +197,13 @@ DEFINE_QNODE(pcnoc_int_0, MSM8916_PNOC_INT_0, 8, -1, -1, MSM8916_PNOC_SNOC_MAS,
+ DEFINE_QNODE(pcnoc_int_1, MSM8916_PNOC_INT_1, 8, -1, -1, MSM8916_PNOC_SNOC_MAS);
+ DEFINE_QNODE(pcnoc_m_0, MSM8916_PNOC_MAS_0, 8, -1, -1, MSM8916_PNOC_INT_0);
+ DEFINE_QNODE(pcnoc_m_1, MSM8916_PNOC_MAS_1, 8, -1, -1, MSM8916_PNOC_SNOC_MAS);
+-DEFINE_QNODE(pcnoc_s_0, MSM8916_PNOC_SLV_0, 8, -1, -1, MSM8916_SLAVE_CLK_CTL, MSM8916_SLAVE_TLMM, MSM8916_SLAVE_TCSR, MSM8916_SLAVE_SECURITY, MSM8916_SLAVE_MSS);
+-DEFINE_QNODE(pcnoc_s_1, MSM8916_PNOC_SLV_1, 8, -1, -1, MSM8916_SLAVE_IMEM_CFG, MSM8916_SLAVE_CRYPTO_0_CFG, MSM8916_SLAVE_MSG_RAM, MSM8916_SLAVE_PDM, MSM8916_SLAVE_PRNG);
+-DEFINE_QNODE(pcnoc_s_2, MSM8916_PNOC_SLV_2, 8, -1, -1, MSM8916_SLAVE_SPDM, MSM8916_SLAVE_BOOT_ROM, MSM8916_SLAVE_BIMC_CFG, MSM8916_SLAVE_PNOC_CFG, MSM8916_SLAVE_PMIC_ARB);
+-DEFINE_QNODE(pcnoc_s_3, MSM8916_PNOC_SLV_3, 8, -1, -1, MSM8916_SLAVE_MPM, MSM8916_SLAVE_SNOC_CFG, MSM8916_SLAVE_RBCPR_CFG, MSM8916_SLAVE_QDSS_CFG, MSM8916_SLAVE_DEHR_CFG);
+-DEFINE_QNODE(pcnoc_s_4, MSM8916_PNOC_SLV_4, 8, -1, -1, MSM8916_SLAVE_VENUS_CFG, MSM8916_SLAVE_CAMERA_CFG, MSM8916_SLAVE_DISPLAY_CFG);
+-DEFINE_QNODE(pcnoc_s_8, MSM8916_PNOC_SLV_8, 8, -1, -1, MSM8916_SLAVE_USB_HS, MSM8916_SLAVE_SDCC_1, MSM8916_SLAVE_BLSP_1);
+-DEFINE_QNODE(pcnoc_s_9, MSM8916_PNOC_SLV_9, 8, -1, -1, MSM8916_SLAVE_SDCC_2, MSM8916_SLAVE_LPASS, MSM8916_SLAVE_GRAPHICS_3D_CFG);
++DEFINE_QNODE(pcnoc_s_0, MSM8916_PNOC_SLV_0, 4, -1, -1, MSM8916_SLAVE_CLK_CTL, MSM8916_SLAVE_TLMM, MSM8916_SLAVE_TCSR, MSM8916_SLAVE_SECURITY, MSM8916_SLAVE_MSS);
++DEFINE_QNODE(pcnoc_s_1, MSM8916_PNOC_SLV_1, 4, -1, -1, MSM8916_SLAVE_IMEM_CFG, MSM8916_SLAVE_CRYPTO_0_CFG, MSM8916_SLAVE_MSG_RAM, MSM8916_SLAVE_PDM, MSM8916_SLAVE_PRNG);
++DEFINE_QNODE(pcnoc_s_2, MSM8916_PNOC_SLV_2, 4, -1, -1, MSM8916_SLAVE_SPDM, MSM8916_SLAVE_BOOT_ROM, MSM8916_SLAVE_BIMC_CFG, MSM8916_SLAVE_PNOC_CFG, MSM8916_SLAVE_PMIC_ARB);
++DEFINE_QNODE(pcnoc_s_3, MSM8916_PNOC_SLV_3, 4, -1, -1, MSM8916_SLAVE_MPM, MSM8916_SLAVE_SNOC_CFG, MSM8916_SLAVE_RBCPR_CFG, MSM8916_SLAVE_QDSS_CFG, MSM8916_SLAVE_DEHR_CFG);
++DEFINE_QNODE(pcnoc_s_4, MSM8916_PNOC_SLV_4, 4, -1, -1, MSM8916_SLAVE_VENUS_CFG, MSM8916_SLAVE_CAMERA_CFG, MSM8916_SLAVE_DISPLAY_CFG);
++DEFINE_QNODE(pcnoc_s_8, MSM8916_PNOC_SLV_8, 4, -1, -1, MSM8916_SLAVE_USB_HS, MSM8916_SLAVE_SDCC_1, MSM8916_SLAVE_BLSP_1);
++DEFINE_QNODE(pcnoc_s_9, MSM8916_PNOC_SLV_9, 4, -1, -1, MSM8916_SLAVE_SDCC_2, MSM8916_SLAVE_LPASS, MSM8916_SLAVE_GRAPHICS_3D_CFG);
+ DEFINE_QNODE(pcnoc_snoc_mas, MSM8916_PNOC_SNOC_MAS, 8, 29, -1, MSM8916_PNOC_SNOC_SLV);
+ DEFINE_QNODE(pcnoc_snoc_slv, MSM8916_PNOC_SNOC_SLV, 8, -1, 45, MSM8916_SNOC_INT_0, MSM8916_SNOC_INT_BIMC, MSM8916_SNOC_INT_1);
+ DEFINE_QNODE(qdss_int, MSM8916_SNOC_QDSS_INT, 8, -1, -1, MSM8916_SNOC_INT_0, MSM8916_SNOC_INT_BIMC);
+diff --git a/drivers/iommu/amd_iommu.c b/drivers/iommu/amd_iommu.c
+index 2883ac389abb..9c2e2ed82826 100644
+--- a/drivers/iommu/amd_iommu.c
++++ b/drivers/iommu/amd_iommu.c
+@@ -4090,9 +4090,10 @@ int amd_iommu_create_irq_domain(struct amd_iommu *iommu)
+ if (!fn)
+ return -ENOMEM;
+ iommu->ir_domain = irq_domain_create_tree(fn, &amd_ir_domain_ops, iommu);
+- irq_domain_free_fwnode(fn);
+- if (!iommu->ir_domain)
++ if (!iommu->ir_domain) {
++ irq_domain_free_fwnode(fn);
+ return -ENOMEM;
++ }
+
+ iommu->ir_domain->parent = arch_get_ir_parent_domain();
+ iommu->msi_domain = arch_create_remap_msi_irq_domain(iommu->ir_domain,
+diff --git a/drivers/iommu/hyperv-iommu.c b/drivers/iommu/hyperv-iommu.c
+index a386b83e0e34..f0fe5030acd3 100644
+--- a/drivers/iommu/hyperv-iommu.c
++++ b/drivers/iommu/hyperv-iommu.c
+@@ -155,7 +155,10 @@ static int __init hyperv_prepare_irq_remapping(void)
+ 0, IOAPIC_REMAPPING_ENTRY, fn,
+ &hyperv_ir_domain_ops, NULL);
+
+- irq_domain_free_fwnode(fn);
++ if (!ioapic_ir_domain) {
++ irq_domain_free_fwnode(fn);
++ return -ENOMEM;
++ }
+
+ /*
+ * Hyper-V doesn't provide irq remapping function for
+diff --git a/drivers/iommu/intel_irq_remapping.c b/drivers/iommu/intel_irq_remapping.c
+index 81e43c1df7ec..982d796b686b 100644
+--- a/drivers/iommu/intel_irq_remapping.c
++++ b/drivers/iommu/intel_irq_remapping.c
+@@ -563,8 +563,8 @@ static int intel_setup_irq_remapping(struct intel_iommu *iommu)
+ 0, INTR_REMAP_TABLE_ENTRIES,
+ fn, &intel_ir_domain_ops,
+ iommu);
+- irq_domain_free_fwnode(fn);
+ if (!iommu->ir_domain) {
++ irq_domain_free_fwnode(fn);
+ pr_err("IR%d: failed to allocate irqdomain\n", iommu->seq_id);
+ goto out_free_bitmap;
+ }
+diff --git a/drivers/iommu/qcom_iommu.c b/drivers/iommu/qcom_iommu.c
+index 5b3b270972f8..c6277d7398f3 100644
+--- a/drivers/iommu/qcom_iommu.c
++++ b/drivers/iommu/qcom_iommu.c
+@@ -65,6 +65,7 @@ struct qcom_iommu_domain {
+ struct mutex init_mutex; /* Protects iommu pointer */
+ struct iommu_domain domain;
+ struct qcom_iommu_dev *iommu;
++ struct iommu_fwspec *fwspec;
+ };
+
+ static struct qcom_iommu_domain *to_qcom_iommu_domain(struct iommu_domain *dom)
+@@ -84,9 +85,9 @@ static struct qcom_iommu_dev * to_iommu(struct device *dev)
+ return dev_iommu_priv_get(dev);
+ }
+
+-static struct qcom_iommu_ctx * to_ctx(struct device *dev, unsigned asid)
++static struct qcom_iommu_ctx * to_ctx(struct qcom_iommu_domain *d, unsigned asid)
+ {
+- struct qcom_iommu_dev *qcom_iommu = to_iommu(dev);
++ struct qcom_iommu_dev *qcom_iommu = d->iommu;
+ if (!qcom_iommu)
+ return NULL;
+ return qcom_iommu->ctxs[asid - 1];
+@@ -118,14 +119,12 @@ iommu_readq(struct qcom_iommu_ctx *ctx, unsigned reg)
+
+ static void qcom_iommu_tlb_sync(void *cookie)
+ {
+- struct iommu_fwspec *fwspec;
+- struct device *dev = cookie;
++ struct qcom_iommu_domain *qcom_domain = cookie;
++ struct iommu_fwspec *fwspec = qcom_domain->fwspec;
+ unsigned i;
+
+- fwspec = dev_iommu_fwspec_get(dev);
+-
+ for (i = 0; i < fwspec->num_ids; i++) {
+- struct qcom_iommu_ctx *ctx = to_ctx(dev, fwspec->ids[i]);
++ struct qcom_iommu_ctx *ctx = to_ctx(qcom_domain, fwspec->ids[i]);
+ unsigned int val, ret;
+
+ iommu_writel(ctx, ARM_SMMU_CB_TLBSYNC, 0);
+@@ -139,14 +138,12 @@ static void qcom_iommu_tlb_sync(void *cookie)
+
+ static void qcom_iommu_tlb_inv_context(void *cookie)
+ {
+- struct device *dev = cookie;
+- struct iommu_fwspec *fwspec;
++ struct qcom_iommu_domain *qcom_domain = cookie;
++ struct iommu_fwspec *fwspec = qcom_domain->fwspec;
+ unsigned i;
+
+- fwspec = dev_iommu_fwspec_get(dev);
+-
+ for (i = 0; i < fwspec->num_ids; i++) {
+- struct qcom_iommu_ctx *ctx = to_ctx(dev, fwspec->ids[i]);
++ struct qcom_iommu_ctx *ctx = to_ctx(qcom_domain, fwspec->ids[i]);
+ iommu_writel(ctx, ARM_SMMU_CB_S1_TLBIASID, ctx->asid);
+ }
+
+@@ -156,16 +153,14 @@ static void qcom_iommu_tlb_inv_context(void *cookie)
+ static void qcom_iommu_tlb_inv_range_nosync(unsigned long iova, size_t size,
+ size_t granule, bool leaf, void *cookie)
+ {
+- struct device *dev = cookie;
+- struct iommu_fwspec *fwspec;
++ struct qcom_iommu_domain *qcom_domain = cookie;
++ struct iommu_fwspec *fwspec = qcom_domain->fwspec;
+ unsigned i, reg;
+
+ reg = leaf ? ARM_SMMU_CB_S1_TLBIVAL : ARM_SMMU_CB_S1_TLBIVA;
+
+- fwspec = dev_iommu_fwspec_get(dev);
+-
+ for (i = 0; i < fwspec->num_ids; i++) {
+- struct qcom_iommu_ctx *ctx = to_ctx(dev, fwspec->ids[i]);
++ struct qcom_iommu_ctx *ctx = to_ctx(qcom_domain, fwspec->ids[i]);
+ size_t s = size;
+
+ iova = (iova >> 12) << 12;
+@@ -256,7 +251,9 @@ static int qcom_iommu_init_domain(struct iommu_domain *domain,
+ };
+
+ qcom_domain->iommu = qcom_iommu;
+- pgtbl_ops = alloc_io_pgtable_ops(ARM_32_LPAE_S1, &pgtbl_cfg, dev);
++ qcom_domain->fwspec = fwspec;
++
++ pgtbl_ops = alloc_io_pgtable_ops(ARM_32_LPAE_S1, &pgtbl_cfg, qcom_domain);
+ if (!pgtbl_ops) {
+ dev_err(qcom_iommu->dev, "failed to allocate pagetable ops\n");
+ ret = -ENOMEM;
+@@ -269,7 +266,7 @@ static int qcom_iommu_init_domain(struct iommu_domain *domain,
+ domain->geometry.force_aperture = true;
+
+ for (i = 0; i < fwspec->num_ids; i++) {
+- struct qcom_iommu_ctx *ctx = to_ctx(dev, fwspec->ids[i]);
++ struct qcom_iommu_ctx *ctx = to_ctx(qcom_domain, fwspec->ids[i]);
+
+ if (!ctx->secure_init) {
+ ret = qcom_scm_restore_sec_cfg(qcom_iommu->sec_id, ctx->asid);
+@@ -419,7 +416,7 @@ static void qcom_iommu_detach_dev(struct iommu_domain *domain, struct device *de
+
+ pm_runtime_get_sync(qcom_iommu->dev);
+ for (i = 0; i < fwspec->num_ids; i++) {
+- struct qcom_iommu_ctx *ctx = to_ctx(dev, fwspec->ids[i]);
++ struct qcom_iommu_ctx *ctx = to_ctx(qcom_domain, fwspec->ids[i]);
+
+ /* Disable the context bank: */
+ iommu_writel(ctx, ARM_SMMU_CB_SCTLR, 0);
+diff --git a/drivers/md/dm-integrity.c b/drivers/md/dm-integrity.c
+index 4094c47eca7f..8588fb59a3ed 100644
+--- a/drivers/md/dm-integrity.c
++++ b/drivers/md/dm-integrity.c
+@@ -2424,7 +2424,7 @@ static void integrity_writer(struct work_struct *w)
+ unsigned prev_free_sectors;
+
+ /* the following test is not needed, but it tests the replay code */
+- if (unlikely(dm_suspended(ic->ti)) && !ic->meta_dev)
++ if (unlikely(dm_post_suspending(ic->ti)) && !ic->meta_dev)
+ return;
+
+ spin_lock_irq(&ic->endio_wait.lock);
+@@ -2485,7 +2485,7 @@ static void integrity_recalc(struct work_struct *w)
+
+ next_chunk:
+
+- if (unlikely(dm_suspended(ic->ti)))
++ if (unlikely(dm_post_suspending(ic->ti)))
+ goto unlock_ret;
+
+ range.logical_sector = le64_to_cpu(ic->sb->recalc_sector);
+diff --git a/drivers/md/dm.c b/drivers/md/dm.c
+index 05333fc2f8d2..fabcc51b468c 100644
+--- a/drivers/md/dm.c
++++ b/drivers/md/dm.c
+@@ -142,6 +142,7 @@ EXPORT_SYMBOL_GPL(dm_bio_get_target_bio_nr);
+ #define DMF_NOFLUSH_SUSPENDING 5
+ #define DMF_DEFERRED_REMOVE 6
+ #define DMF_SUSPENDED_INTERNALLY 7
++#define DMF_POST_SUSPENDING 8
+
+ #define DM_NUMA_NODE NUMA_NO_NODE
+ static int dm_numa_node = DM_NUMA_NODE;
+@@ -1446,9 +1447,6 @@ static int __send_empty_flush(struct clone_info *ci)
+ BUG_ON(bio_has_data(ci->bio));
+ while ((ti = dm_table_get_target(ci->map, target_nr++)))
+ __send_duplicate_bios(ci, ti, ti->num_flush_bios, NULL);
+-
+- bio_disassociate_blkg(ci->bio);
+-
+ return 0;
+ }
+
+@@ -1636,6 +1634,7 @@ static blk_qc_t __split_and_process_bio(struct mapped_device *md,
+ ci.bio = &flush_bio;
+ ci.sector_count = 0;
+ error = __send_empty_flush(&ci);
++ bio_uninit(ci.bio);
+ /* dec_pending submits any data associated with flush */
+ } else if (op_is_zone_mgmt(bio_op(bio))) {
+ ci.bio = bio;
+@@ -1710,6 +1709,7 @@ static blk_qc_t __process_bio(struct mapped_device *md, struct dm_table *map,
+ ci.bio = &flush_bio;
+ ci.sector_count = 0;
+ error = __send_empty_flush(&ci);
++ bio_uninit(ci.bio);
+ /* dec_pending submits any data associated with flush */
+ } else {
+ struct dm_target_io *tio;
+@@ -2399,6 +2399,7 @@ static void __dm_destroy(struct mapped_device *md, bool wait)
+ if (!dm_suspended_md(md)) {
+ dm_table_presuspend_targets(map);
+ set_bit(DMF_SUSPENDED, &md->flags);
++ set_bit(DMF_POST_SUSPENDING, &md->flags);
+ dm_table_postsuspend_targets(map);
+ }
+ /* dm_put_live_table must be before msleep, otherwise deadlock is possible */
+@@ -2721,7 +2722,9 @@ retry:
+ if (r)
+ goto out_unlock;
+
++ set_bit(DMF_POST_SUSPENDING, &md->flags);
+ dm_table_postsuspend_targets(map);
++ clear_bit(DMF_POST_SUSPENDING, &md->flags);
+
+ out_unlock:
+ mutex_unlock(&md->suspend_lock);
+@@ -2818,7 +2821,9 @@ static void __dm_internal_suspend(struct mapped_device *md, unsigned suspend_fla
+ (void) __dm_suspend(md, map, suspend_flags, TASK_UNINTERRUPTIBLE,
+ DMF_SUSPENDED_INTERNALLY);
+
++ set_bit(DMF_POST_SUSPENDING, &md->flags);
+ dm_table_postsuspend_targets(map);
++ clear_bit(DMF_POST_SUSPENDING, &md->flags);
+ }
+
+ static void __dm_internal_resume(struct mapped_device *md)
+@@ -2979,6 +2984,11 @@ int dm_suspended_md(struct mapped_device *md)
+ return test_bit(DMF_SUSPENDED, &md->flags);
+ }
+
++static int dm_post_suspending_md(struct mapped_device *md)
++{
++ return test_bit(DMF_POST_SUSPENDING, &md->flags);
++}
++
+ int dm_suspended_internally_md(struct mapped_device *md)
+ {
+ return test_bit(DMF_SUSPENDED_INTERNALLY, &md->flags);
+@@ -2995,6 +3005,12 @@ int dm_suspended(struct dm_target *ti)
+ }
+ EXPORT_SYMBOL_GPL(dm_suspended);
+
++int dm_post_suspending(struct dm_target *ti)
++{
++ return dm_post_suspending_md(dm_table_get_md(ti->table));
++}
++EXPORT_SYMBOL_GPL(dm_post_suspending);
++
+ int dm_noflush_suspending(struct dm_target *ti)
+ {
+ return __noflush_suspending(dm_table_get_md(ti->table));
+diff --git a/drivers/mfd/ioc3.c b/drivers/mfd/ioc3.c
+index 02998d4eb74b..74cee7cb0afc 100644
+--- a/drivers/mfd/ioc3.c
++++ b/drivers/mfd/ioc3.c
+@@ -142,10 +142,11 @@ static int ioc3_irq_domain_setup(struct ioc3_priv_data *ipd, int irq)
+ goto err;
+
+ domain = irq_domain_create_linear(fn, 24, &ioc3_irq_domain_ops, ipd);
+- if (!domain)
++ if (!domain) {
++ irq_domain_free_fwnode(fn);
+ goto err;
++ }
+
+- irq_domain_free_fwnode(fn);
+ ipd->domain = domain;
+
+ irq_set_chained_handler_and_data(irq, ioc3_irq_handler, domain);
+diff --git a/drivers/mmc/host/sdhci-of-aspeed.c b/drivers/mmc/host/sdhci-of-aspeed.c
+index 56912e30c47e..a1bcc0f4ba9e 100644
+--- a/drivers/mmc/host/sdhci-of-aspeed.c
++++ b/drivers/mmc/host/sdhci-of-aspeed.c
+@@ -68,7 +68,7 @@ static void aspeed_sdhci_set_clock(struct sdhci_host *host, unsigned int clock)
+ if (WARN_ON(clock > host->max_clk))
+ clock = host->max_clk;
+
+- for (div = 1; div < 256; div *= 2) {
++ for (div = 2; div < 256; div *= 2) {
+ if ((parent / div) <= clock)
+ break;
+ }
+diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
+index 2e70e43c5df5..6b40b5ab143a 100644
+--- a/drivers/net/bonding/bond_main.c
++++ b/drivers/net/bonding/bond_main.c
+@@ -4953,15 +4953,19 @@ int bond_create(struct net *net, const char *name)
+ bond_dev->rtnl_link_ops = &bond_link_ops;
+
+ res = register_netdevice(bond_dev);
++ if (res < 0) {
++ free_netdev(bond_dev);
++ rtnl_unlock();
++
++ return res;
++ }
+
+ netif_carrier_off(bond_dev);
+
+ bond_work_init_all(bond);
+
+ rtnl_unlock();
+- if (res < 0)
+- free_netdev(bond_dev);
+- return res;
++ return 0;
+ }
+
+ static int __net_init bond_net_init(struct net *net)
+diff --git a/drivers/net/bonding/bond_netlink.c b/drivers/net/bonding/bond_netlink.c
+index b43b51646b11..f0f9138e967f 100644
+--- a/drivers/net/bonding/bond_netlink.c
++++ b/drivers/net/bonding/bond_netlink.c
+@@ -456,11 +456,10 @@ static int bond_newlink(struct net *src_net, struct net_device *bond_dev,
+ return err;
+
+ err = register_netdevice(bond_dev);
+-
+- netif_carrier_off(bond_dev);
+ if (!err) {
+ struct bonding *bond = netdev_priv(bond_dev);
+
++ netif_carrier_off(bond_dev);
+ bond_work_init_all(bond);
+ }
+
+diff --git a/drivers/net/dsa/microchip/ksz9477.c b/drivers/net/dsa/microchip/ksz9477.c
+index 65701e65b6c2..95a406e2e373 100644
+--- a/drivers/net/dsa/microchip/ksz9477.c
++++ b/drivers/net/dsa/microchip/ksz9477.c
+@@ -977,23 +977,6 @@ static void ksz9477_port_mirror_del(struct dsa_switch *ds, int port,
+ PORT_MIRROR_SNIFFER, false);
+ }
+
+-static void ksz9477_phy_setup(struct ksz_device *dev, int port,
+- struct phy_device *phy)
+-{
+- /* Only apply to port with PHY. */
+- if (port >= dev->phy_port_cnt)
+- return;
+-
+- /* The MAC actually cannot run in 1000 half-duplex mode. */
+- phy_remove_link_mode(phy,
+- ETHTOOL_LINK_MODE_1000baseT_Half_BIT);
+-
+- /* PHY does not support gigabit. */
+- if (!(dev->features & GBIT_SUPPORT))
+- phy_remove_link_mode(phy,
+- ETHTOOL_LINK_MODE_1000baseT_Full_BIT);
+-}
+-
+ static bool ksz9477_get_gbit(struct ksz_device *dev, u8 data)
+ {
+ bool gbit;
+@@ -1606,7 +1589,6 @@ static const struct ksz_dev_ops ksz9477_dev_ops = {
+ .get_port_addr = ksz9477_get_port_addr,
+ .cfg_port_member = ksz9477_cfg_port_member,
+ .flush_dyn_mac_table = ksz9477_flush_dyn_mac_table,
+- .phy_setup = ksz9477_phy_setup,
+ .port_setup = ksz9477_port_setup,
+ .r_mib_cnt = ksz9477_r_mib_cnt,
+ .r_mib_pkt = ksz9477_r_mib_pkt,
+@@ -1620,7 +1602,29 @@ static const struct ksz_dev_ops ksz9477_dev_ops = {
+
+ int ksz9477_switch_register(struct ksz_device *dev)
+ {
+- return ksz_switch_register(dev, &ksz9477_dev_ops);
++ int ret, i;
++ struct phy_device *phydev;
++
++ ret = ksz_switch_register(dev, &ksz9477_dev_ops);
++ if (ret)
++ return ret;
++
++ for (i = 0; i < dev->phy_port_cnt; ++i) {
++ if (!dsa_is_user_port(dev->ds, i))
++ continue;
++
++ phydev = dsa_to_port(dev->ds, i)->slave->phydev;
++
++ /* The MAC actually cannot run in 1000 half-duplex mode. */
++ phy_remove_link_mode(phydev,
++ ETHTOOL_LINK_MODE_1000baseT_Half_BIT);
++
++ /* PHY does not support gigabit. */
++ if (!(dev->features & GBIT_SUPPORT))
++ phy_remove_link_mode(phydev,
++ ETHTOOL_LINK_MODE_1000baseT_Full_BIT);
++ }
++ return ret;
+ }
+ EXPORT_SYMBOL(ksz9477_switch_register);
+
+diff --git a/drivers/net/dsa/microchip/ksz_common.c b/drivers/net/dsa/microchip/ksz_common.c
+index fd1d6676ae4f..7b6c0dce7536 100644
+--- a/drivers/net/dsa/microchip/ksz_common.c
++++ b/drivers/net/dsa/microchip/ksz_common.c
+@@ -358,8 +358,6 @@ int ksz_enable_port(struct dsa_switch *ds, int port, struct phy_device *phy)
+
+ /* setup slave port */
+ dev->dev_ops->port_setup(dev, port, false);
+- if (dev->dev_ops->phy_setup)
+- dev->dev_ops->phy_setup(dev, port, phy);
+
+ /* port_stp_state_set() will be called after to enable the port so
+ * there is no need to do anything.
+diff --git a/drivers/net/dsa/microchip/ksz_common.h b/drivers/net/dsa/microchip/ksz_common.h
+index f2c9bb68fd33..7d11dd32ec0d 100644
+--- a/drivers/net/dsa/microchip/ksz_common.h
++++ b/drivers/net/dsa/microchip/ksz_common.h
+@@ -119,8 +119,6 @@ struct ksz_dev_ops {
+ u32 (*get_port_addr)(int port, int offset);
+ void (*cfg_port_member)(struct ksz_device *dev, int port, u8 member);
+ void (*flush_dyn_mac_table)(struct ksz_device *dev, int port);
+- void (*phy_setup)(struct ksz_device *dev, int port,
+- struct phy_device *phy);
+ void (*port_cleanup)(struct ksz_device *dev, int port);
+ void (*port_setup)(struct ksz_device *dev, int port, bool cpu_port);
+ void (*r_phy)(struct ksz_device *dev, u16 phy, u16 reg, u16 *val);
+diff --git a/drivers/net/dsa/mv88e6xxx/chip.c b/drivers/net/dsa/mv88e6xxx/chip.c
+index 2b4a723c8306..e065be419a03 100644
+--- a/drivers/net/dsa/mv88e6xxx/chip.c
++++ b/drivers/net/dsa/mv88e6xxx/chip.c
+@@ -664,8 +664,11 @@ static void mv88e6xxx_mac_config(struct dsa_switch *ds, int port,
+ const struct phylink_link_state *state)
+ {
+ struct mv88e6xxx_chip *chip = ds->priv;
++ struct mv88e6xxx_port *p;
+ int err;
+
++ p = &chip->ports[port];
++
+ /* FIXME: is this the correct test? If we're in fixed mode on an
+ * internal port, why should we process this any different from
+ * PHY mode? On the other hand, the port may be automedia between
+@@ -675,10 +678,14 @@ static void mv88e6xxx_mac_config(struct dsa_switch *ds, int port,
+ return;
+
+ mv88e6xxx_reg_lock(chip);
+- /* FIXME: should we force the link down here - but if we do, how
+- * do we restore the link force/unforce state? The driver layering
+- * gets in the way.
++ /* In inband mode, the link may come up at any time while the link
++ * is not forced down. Force the link down while we reconfigure the
++ * interface mode.
+ */
++ if (mode == MLO_AN_INBAND && p->interface != state->interface &&
++ chip->info->ops->port_set_link)
++ chip->info->ops->port_set_link(chip, port, LINK_FORCED_DOWN);
++
+ err = mv88e6xxx_port_config_interface(chip, port, state->interface);
+ if (err && err != -EOPNOTSUPP)
+ goto err_unlock;
+@@ -691,6 +698,15 @@ static void mv88e6xxx_mac_config(struct dsa_switch *ds, int port,
+ if (err > 0)
+ err = 0;
+
++ /* Undo the forced down state above after completing configuration
++ * irrespective of its state on entry, which allows the link to come up.
++ */
++ if (mode == MLO_AN_INBAND && p->interface != state->interface &&
++ chip->info->ops->port_set_link)
++ chip->info->ops->port_set_link(chip, port, LINK_UNFORCED);
++
++ p->interface = state->interface;
++
+ err_unlock:
+ mv88e6xxx_reg_unlock(chip);
+
+diff --git a/drivers/net/dsa/mv88e6xxx/chip.h b/drivers/net/dsa/mv88e6xxx/chip.h
+index e5430cf2ad71..6476524e8239 100644
+--- a/drivers/net/dsa/mv88e6xxx/chip.h
++++ b/drivers/net/dsa/mv88e6xxx/chip.h
+@@ -232,6 +232,7 @@ struct mv88e6xxx_port {
+ u64 atu_full_violation;
+ u64 vtu_member_violation;
+ u64 vtu_miss_violation;
++ phy_interface_t interface;
+ u8 cmode;
+ bool mirror_ingress;
+ bool mirror_egress;
+diff --git a/drivers/net/ethernet/atheros/ag71xx.c b/drivers/net/ethernet/atheros/ag71xx.c
+index 02b7705393ca..37a1cf63d9f7 100644
+--- a/drivers/net/ethernet/atheros/ag71xx.c
++++ b/drivers/net/ethernet/atheros/ag71xx.c
+@@ -556,7 +556,8 @@ static int ag71xx_mdio_probe(struct ag71xx *ag)
+ ag->mdio_reset = of_reset_control_get_exclusive(np, "mdio");
+ if (IS_ERR(ag->mdio_reset)) {
+ netif_err(ag, probe, ndev, "Failed to get reset mdio.\n");
+- return PTR_ERR(ag->mdio_reset);
++ err = PTR_ERR(ag->mdio_reset);
++ goto mdio_err_put_clk;
+ }
+
+ mii_bus->name = "ag71xx_mdio";
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+index b6fb5a1709c0..1656dc277af4 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+@@ -3418,7 +3418,7 @@ void bnxt_set_tpa_flags(struct bnxt *bp)
+ */
+ void bnxt_set_ring_params(struct bnxt *bp)
+ {
+- u32 ring_size, rx_size, rx_space;
++ u32 ring_size, rx_size, rx_space, max_rx_cmpl;
+ u32 agg_factor = 0, agg_ring_size = 0;
+
+ /* 8 for CRC and VLAN */
+@@ -3474,7 +3474,15 @@ void bnxt_set_ring_params(struct bnxt *bp)
+ bp->tx_nr_pages = bnxt_calc_nr_ring_pages(ring_size, TX_DESC_CNT);
+ bp->tx_ring_mask = (bp->tx_nr_pages * TX_DESC_CNT) - 1;
+
+- ring_size = bp->rx_ring_size * (2 + agg_factor) + bp->tx_ring_size;
++ max_rx_cmpl = bp->rx_ring_size;
++ /* MAX TPA needs to be added because TPA_START completions are
++ * immediately recycled, so the TPA completions are not bound by
++ * the RX ring size.
++ */
++ if (bp->flags & BNXT_FLAG_TPA)
++ max_rx_cmpl += bp->max_tpa;
++ /* RX and TPA completions are 32-byte, all others are 16-byte */
++ ring_size = max_rx_cmpl * 2 + agg_ring_size + bp->tx_ring_size;
+ bp->cp_ring_size = ring_size;
+
+ bp->cp_nr_pages = bnxt_calc_nr_ring_pages(ring_size, CP_DESC_CNT);
+@@ -10362,15 +10370,15 @@ static void bnxt_sp_task(struct work_struct *work)
+ &bp->sp_event))
+ bnxt_hwrm_phy_qcaps(bp);
+
+- if (test_and_clear_bit(BNXT_LINK_CFG_CHANGE_SP_EVENT,
+- &bp->sp_event))
+- bnxt_init_ethtool_link_settings(bp);
+-
+ rc = bnxt_update_link(bp, true);
+- mutex_unlock(&bp->link_lock);
+ if (rc)
+ netdev_err(bp->dev, "SP task can't update link (rc: %x)\n",
+ rc);
++
++ if (test_and_clear_bit(BNXT_LINK_CFG_CHANGE_SP_EVENT,
++ &bp->sp_event))
++ bnxt_init_ethtool_link_settings(bp);
++ mutex_unlock(&bp->link_lock);
+ }
+ if (test_and_clear_bit(BNXT_UPDATE_PHY_SP_EVENT, &bp->sp_event)) {
+ int rc;
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
+index 360f9a95c1d5..21cc2bd12760 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
+@@ -1687,8 +1687,11 @@ static int bnxt_set_pauseparam(struct net_device *dev,
+ if (epause->tx_pause)
+ link_info->req_flow_ctrl |= BNXT_LINK_PAUSE_TX;
+
+- if (netif_running(dev))
++ if (netif_running(dev)) {
++ mutex_lock(&bp->link_lock);
+ rc = bnxt_hwrm_set_pause(bp);
++ mutex_unlock(&bp->link_lock);
++ }
+ return rc;
+ }
+
+diff --git a/drivers/net/ethernet/broadcom/genet/bcmgenet.c b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
+index dde1c23c8e39..7b95bb77ad3b 100644
+--- a/drivers/net/ethernet/broadcom/genet/bcmgenet.c
++++ b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
+@@ -3522,7 +3522,7 @@ static int bcmgenet_probe(struct platform_device *pdev)
+ if (err)
+ err = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32));
+ if (err)
+- goto err;
++ goto err_clk_disable;
+
+ /* Mii wait queue */
+ init_waitqueue_head(&priv->wq);
+diff --git a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
+index 6bfa7575af94..5f82c1f32f09 100644
+--- a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
++++ b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
+@@ -2938,7 +2938,7 @@ static int dpaa_eth_probe(struct platform_device *pdev)
+ DMA_BIT_MASK(40));
+ if (err) {
+ netdev_err(net_dev, "dma_coerce_mask_and_coherent() failed\n");
+- return err;
++ goto free_netdev;
+ }
+
+ /* If fsl_fm_max_frm is set to a higher value than the all-common 1500,
+diff --git a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
+index 569e06d2bab2..72fa9c4e058f 100644
+--- a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
++++ b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
+@@ -3383,7 +3383,7 @@ static int dpaa2_eth_connect_mac(struct dpaa2_eth_priv *priv)
+
+ dpni_dev = to_fsl_mc_device(priv->net_dev->dev.parent);
+ dpmac_dev = fsl_mc_get_endpoint(dpni_dev);
+- if (IS_ERR(dpmac_dev) || dpmac_dev->dev.type != &fsl_mc_bus_dpmac_type)
++ if (IS_ERR_OR_NULL(dpmac_dev) || dpmac_dev->dev.type != &fsl_mc_bus_dpmac_type)
+ return 0;
+
+ if (dpaa2_mac_is_type_fixed(dpmac_dev, priv->mc_io))
+diff --git a/drivers/net/ethernet/freescale/enetc/enetc_pf.c b/drivers/net/ethernet/freescale/enetc/enetc_pf.c
+index 438648a06f2a..041e19895adf 100644
+--- a/drivers/net/ethernet/freescale/enetc/enetc_pf.c
++++ b/drivers/net/ethernet/freescale/enetc/enetc_pf.c
+@@ -919,6 +919,7 @@ static int enetc_pf_probe(struct pci_dev *pdev,
+ return 0;
+
+ err_reg_netdev:
++ enetc_mdio_remove(pf);
+ enetc_of_put_phy(priv);
+ enetc_free_msix(priv);
+ err_alloc_msix:
+diff --git a/drivers/net/ethernet/freescale/fec.h b/drivers/net/ethernet/freescale/fec.h
+index e74dd1f86bba..828eb8ce6631 100644
+--- a/drivers/net/ethernet/freescale/fec.h
++++ b/drivers/net/ethernet/freescale/fec.h
+@@ -597,6 +597,7 @@ struct fec_enet_private {
+ void fec_ptp_init(struct platform_device *pdev, int irq_idx);
+ void fec_ptp_stop(struct platform_device *pdev);
+ void fec_ptp_start_cyclecounter(struct net_device *ndev);
++void fec_ptp_disable_hwts(struct net_device *ndev);
+ int fec_ptp_set(struct net_device *ndev, struct ifreq *ifr);
+ int fec_ptp_get(struct net_device *ndev, struct ifreq *ifr);
+
+diff --git a/drivers/net/ethernet/freescale/fec_main.c b/drivers/net/ethernet/freescale/fec_main.c
+index dc6f8763a5d4..bf73bc9bf35b 100644
+--- a/drivers/net/ethernet/freescale/fec_main.c
++++ b/drivers/net/ethernet/freescale/fec_main.c
+@@ -1302,8 +1302,13 @@ fec_enet_tx_queue(struct net_device *ndev, u16 queue_id)
+ ndev->stats.tx_bytes += skb->len;
+ }
+
+- if (unlikely(skb_shinfo(skb)->tx_flags & SKBTX_IN_PROGRESS) &&
+- fep->bufdesc_ex) {
++ /* NOTE: SKBTX_IN_PROGRESS being set does not imply it's we who
++ * are to time stamp the packet, so we still need to check time
++ * stamping enabled flag.
++ */
++ if (unlikely(skb_shinfo(skb)->tx_flags & SKBTX_IN_PROGRESS &&
++ fep->hwts_tx_en) &&
++ fep->bufdesc_ex) {
+ struct skb_shared_hwtstamps shhwtstamps;
+ struct bufdesc_ex *ebdp = (struct bufdesc_ex *)bdp;
+
+@@ -2731,10 +2736,16 @@ static int fec_enet_ioctl(struct net_device *ndev, struct ifreq *rq, int cmd)
+ return -ENODEV;
+
+ if (fep->bufdesc_ex) {
+- if (cmd == SIOCSHWTSTAMP)
+- return fec_ptp_set(ndev, rq);
+- if (cmd == SIOCGHWTSTAMP)
+- return fec_ptp_get(ndev, rq);
++ bool use_fec_hwts = !phy_has_hwtstamp(phydev);
++
++ if (cmd == SIOCSHWTSTAMP) {
++ if (use_fec_hwts)
++ return fec_ptp_set(ndev, rq);
++ fec_ptp_disable_hwts(ndev);
++ } else if (cmd == SIOCGHWTSTAMP) {
++ if (use_fec_hwts)
++ return fec_ptp_get(ndev, rq);
++ }
+ }
+
+ return phy_mii_ioctl(phydev, rq, cmd);
+diff --git a/drivers/net/ethernet/freescale/fec_ptp.c b/drivers/net/ethernet/freescale/fec_ptp.c
+index 945643c02615..f8a592c96beb 100644
+--- a/drivers/net/ethernet/freescale/fec_ptp.c
++++ b/drivers/net/ethernet/freescale/fec_ptp.c
+@@ -452,6 +452,18 @@ static int fec_ptp_enable(struct ptp_clock_info *ptp,
+ return -EOPNOTSUPP;
+ }
+
++/**
++ * fec_ptp_disable_hwts - disable hardware time stamping
++ * @ndev: pointer to net_device
++ */
++void fec_ptp_disable_hwts(struct net_device *ndev)
++{
++ struct fec_enet_private *fep = netdev_priv(ndev);
++
++ fep->hwts_tx_en = 0;
++ fep->hwts_rx_en = 0;
++}
++
+ int fec_ptp_set(struct net_device *ndev, struct ifreq *ifr)
+ {
+ struct fec_enet_private *fep = netdev_priv(ndev);
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hnae3.h b/drivers/net/ethernet/hisilicon/hns3/hnae3.h
+index 5587605d6deb..cc45662f77f0 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hnae3.h
++++ b/drivers/net/ethernet/hisilicon/hns3/hnae3.h
+@@ -77,6 +77,7 @@
+ ((ring)->p = ((ring)->p - 1 + (ring)->desc_num) % (ring)->desc_num)
+
+ enum hns_desc_type {
++ DESC_TYPE_UNKNOWN,
+ DESC_TYPE_SKB,
+ DESC_TYPE_FRAGLIST_SKB,
+ DESC_TYPE_PAGE,
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
+index 3003eecd5263..df1cb0441183 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
+@@ -1140,7 +1140,7 @@ static int hns3_fill_desc(struct hns3_enet_ring *ring, void *priv,
+ }
+
+ frag_buf_num = hns3_tx_bd_count(size);
+- sizeoflast = size & HNS3_TX_LAST_SIZE_M;
++ sizeoflast = size % HNS3_MAX_BD_SIZE;
+ sizeoflast = sizeoflast ? sizeoflast : HNS3_MAX_BD_SIZE;
+
+ /* When frag size is bigger than hardware limit, split this frag */
+@@ -1351,6 +1351,10 @@ static void hns3_clear_desc(struct hns3_enet_ring *ring, int next_to_use_orig)
+ unsigned int i;
+
+ for (i = 0; i < ring->desc_num; i++) {
++ struct hns3_desc *desc = &ring->desc[ring->next_to_use];
++
++ memset(desc, 0, sizeof(*desc));
++
+ /* check if this is where we started */
+ if (ring->next_to_use == next_to_use_orig)
+ break;
+@@ -1358,6 +1362,9 @@ static void hns3_clear_desc(struct hns3_enet_ring *ring, int next_to_use_orig)
+ /* rollback one */
+ ring_ptr_move_bw(ring, next_to_use);
+
++ if (!ring->desc_cb[ring->next_to_use].dma)
++ continue;
++
+ /* unmap the descriptor dma address */
+ if (ring->desc_cb[ring->next_to_use].type == DESC_TYPE_SKB ||
+ ring->desc_cb[ring->next_to_use].type ==
+@@ -1374,6 +1381,7 @@ static void hns3_clear_desc(struct hns3_enet_ring *ring, int next_to_use_orig)
+
+ ring->desc_cb[ring->next_to_use].length = 0;
+ ring->desc_cb[ring->next_to_use].dma = 0;
++ ring->desc_cb[ring->next_to_use].type = DESC_TYPE_UNKNOWN;
+ }
+ }
+
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.h b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.h
+index abefd7a179f7..e6b29a35cdb2 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.h
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.h
+@@ -186,8 +186,6 @@ enum hns3_nic_state {
+ #define HNS3_TXD_MSS_S 0
+ #define HNS3_TXD_MSS_M (0x3fff << HNS3_TXD_MSS_S)
+
+-#define HNS3_TX_LAST_SIZE_M 0xffff
+-
+ #define HNS3_VECTOR_TX_IRQ BIT_ULL(0)
+ #define HNS3_VECTOR_RX_IRQ BIT_ULL(1)
+
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+index 4de268a87958..b66b93f320b4 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+@@ -2680,11 +2680,10 @@ void hclge_task_schedule(struct hclge_dev *hdev, unsigned long delay_time)
+ delay_time);
+ }
+
+-static int hclge_get_mac_link_status(struct hclge_dev *hdev)
++static int hclge_get_mac_link_status(struct hclge_dev *hdev, int *link_status)
+ {
+ struct hclge_link_status_cmd *req;
+ struct hclge_desc desc;
+- int link_status;
+ int ret;
+
+ hclge_cmd_setup_basic_desc(&desc, HCLGE_OPC_QUERY_LINK_STATUS, true);
+@@ -2696,33 +2695,25 @@ static int hclge_get_mac_link_status(struct hclge_dev *hdev)
+ }
+
+ req = (struct hclge_link_status_cmd *)desc.data;
+- link_status = req->status & HCLGE_LINK_STATUS_UP_M;
++ *link_status = (req->status & HCLGE_LINK_STATUS_UP_M) > 0 ?
++ HCLGE_LINK_STATUS_UP : HCLGE_LINK_STATUS_DOWN;
+
+- return !!link_status;
++ return 0;
+ }
+
+-static int hclge_get_mac_phy_link(struct hclge_dev *hdev)
++static int hclge_get_mac_phy_link(struct hclge_dev *hdev, int *link_status)
+ {
+- unsigned int mac_state;
+- int link_stat;
++ struct phy_device *phydev = hdev->hw.mac.phydev;
++
++ *link_status = HCLGE_LINK_STATUS_DOWN;
+
+ if (test_bit(HCLGE_STATE_DOWN, &hdev->state))
+ return 0;
+
+- mac_state = hclge_get_mac_link_status(hdev);
+-
+- if (hdev->hw.mac.phydev) {
+- if (hdev->hw.mac.phydev->state == PHY_RUNNING)
+- link_stat = mac_state &
+- hdev->hw.mac.phydev->link;
+- else
+- link_stat = 0;
+-
+- } else {
+- link_stat = mac_state;
+- }
++ if (phydev && (phydev->state != PHY_RUNNING || !phydev->link))
++ return 0;
+
+- return !!link_stat;
++ return hclge_get_mac_link_status(hdev, link_status);
+ }
+
+ static void hclge_update_link_status(struct hclge_dev *hdev)
+@@ -2732,6 +2723,7 @@ static void hclge_update_link_status(struct hclge_dev *hdev)
+ struct hnae3_handle *rhandle;
+ struct hnae3_handle *handle;
+ int state;
++ int ret;
+ int i;
+
+ if (!client)
+@@ -2740,7 +2732,12 @@ static void hclge_update_link_status(struct hclge_dev *hdev)
+ if (test_and_set_bit(HCLGE_STATE_LINK_UPDATING, &hdev->state))
+ return;
+
+- state = hclge_get_mac_phy_link(hdev);
++ ret = hclge_get_mac_phy_link(hdev, &state);
++ if (ret) {
++ clear_bit(HCLGE_STATE_LINK_UPDATING, &hdev->state);
++ return;
++ }
++
+ if (state != hdev->hw.mac.link) {
+ for (i = 0; i < hdev->num_vmdq_vport + 1; i++) {
+ handle = &hdev->vport[i].nic;
+@@ -6435,14 +6432,15 @@ static int hclge_mac_link_status_wait(struct hclge_dev *hdev, int link_ret)
+ {
+ #define HCLGE_MAC_LINK_STATUS_NUM 100
+
++ int link_status;
+ int i = 0;
+ int ret;
+
+ do {
+- ret = hclge_get_mac_link_status(hdev);
+- if (ret < 0)
++ ret = hclge_get_mac_link_status(hdev, &link_status);
++ if (ret)
+ return ret;
+- else if (ret == link_ret)
++ if (link_status == link_ret)
+ return 0;
+
+ msleep(HCLGE_LINK_STATUS_MS);
+@@ -6453,9 +6451,6 @@ static int hclge_mac_link_status_wait(struct hclge_dev *hdev, int link_ret)
+ static int hclge_mac_phy_link_status_wait(struct hclge_dev *hdev, bool en,
+ bool is_phy)
+ {
+-#define HCLGE_LINK_STATUS_DOWN 0
+-#define HCLGE_LINK_STATUS_UP 1
+-
+ int link_ret;
+
+ link_ret = en ? HCLGE_LINK_STATUS_UP : HCLGE_LINK_STATUS_DOWN;
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h
+index 71df23d5f1b4..8784168f8f6f 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h
+@@ -316,6 +316,9 @@ enum hclge_link_fail_code {
+ HCLGE_LF_XSFP_ABSENT,
+ };
+
++#define HCLGE_LINK_STATUS_DOWN 0
++#define HCLGE_LINK_STATUS_UP 1
++
+ #define HCLGE_PG_NUM 4
+ #define HCLGE_SCH_MODE_SP 0
+ #define HCLGE_SCH_MODE_DWRR 1
+diff --git a/drivers/net/ethernet/marvell/sky2.c b/drivers/net/ethernet/marvell/sky2.c
+index 241f00716979..fe54764caea9 100644
+--- a/drivers/net/ethernet/marvell/sky2.c
++++ b/drivers/net/ethernet/marvell/sky2.c
+@@ -203,7 +203,7 @@ io_error:
+
+ static inline u16 gm_phy_read(struct sky2_hw *hw, unsigned port, u16 reg)
+ {
+- u16 v;
++ u16 v = 0;
+ __gm_phy_read(hw, port, reg, &v);
+ return v;
+ }
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/core.c b/drivers/net/ethernet/mellanox/mlxsw/core.c
+index e9ccd333f61d..d6d6fe64887b 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/core.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/core.c
+@@ -710,7 +710,7 @@ static int mlxsw_emad_init(struct mlxsw_core *mlxsw_core)
+ err = mlxsw_core_trap_register(mlxsw_core, &mlxsw_emad_rx_listener,
+ mlxsw_core);
+ if (err)
+- return err;
++ goto err_trap_register;
+
+ err = mlxsw_core->driver->basic_trap_groups_set(mlxsw_core);
+ if (err)
+@@ -722,6 +722,7 @@ static int mlxsw_emad_init(struct mlxsw_core *mlxsw_core)
+ err_emad_trap_set:
+ mlxsw_core_trap_unregister(mlxsw_core, &mlxsw_emad_rx_listener,
+ mlxsw_core);
++err_trap_register:
+ destroy_workqueue(mlxsw_core->emad_wq);
+ return err;
+ }
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/core_env.c b/drivers/net/ethernet/mellanox/mlxsw/core_env.c
+index 08215fed193d..a7d86df7123f 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/core_env.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/core_env.c
+@@ -45,7 +45,7 @@ static int mlxsw_env_validate_cable_ident(struct mlxsw_core *core, int id,
+ static int
+ mlxsw_env_query_module_eeprom(struct mlxsw_core *mlxsw_core, int module,
+ u16 offset, u16 size, void *data,
+- unsigned int *p_read_size)
++ bool qsfp, unsigned int *p_read_size)
+ {
+ char eeprom_tmp[MLXSW_REG_MCIA_EEPROM_SIZE];
+ char mcia_pl[MLXSW_REG_MCIA_LEN];
+@@ -54,6 +54,10 @@ mlxsw_env_query_module_eeprom(struct mlxsw_core *mlxsw_core, int module,
+ int status;
+ int err;
+
++ /* MCIA register accepts buffer size <= 48. Page of size 128 should be
++ * read by chunks of size 48, 48, 32. Align the size of the last chunk
++ * to avoid reading after the end of the page.
++ */
+ size = min_t(u16, size, MLXSW_REG_MCIA_EEPROM_SIZE);
+
+ if (offset < MLXSW_REG_MCIA_EEPROM_PAGE_LENGTH &&
+@@ -63,18 +67,25 @@ mlxsw_env_query_module_eeprom(struct mlxsw_core *mlxsw_core, int module,
+
+ i2c_addr = MLXSW_REG_MCIA_I2C_ADDR_LOW;
+ if (offset >= MLXSW_REG_MCIA_EEPROM_PAGE_LENGTH) {
+- page = MLXSW_REG_MCIA_PAGE_GET(offset);
+- offset -= MLXSW_REG_MCIA_EEPROM_UP_PAGE_LENGTH * page;
+- /* When reading upper pages 1, 2 and 3 the offset starts at
+- * 128. Please refer to "QSFP+ Memory Map" figure in SFF-8436
+- * specification for graphical depiction.
+- * MCIA register accepts buffer size <= 48. Page of size 128
+- * should be read by chunks of size 48, 48, 32. Align the size
+- * of the last chunk to avoid reading after the end of the
+- * page.
+- */
+- if (offset + size > MLXSW_REG_MCIA_EEPROM_PAGE_LENGTH)
+- size = MLXSW_REG_MCIA_EEPROM_PAGE_LENGTH - offset;
++ if (qsfp) {
++ /* When reading upper pages 1, 2 and 3 the offset
++ * starts at 128. Please refer to "QSFP+ Memory Map"
++ * figure in SFF-8436 specification for graphical
++ * depiction.
++ */
++ page = MLXSW_REG_MCIA_PAGE_GET(offset);
++ offset -= MLXSW_REG_MCIA_EEPROM_UP_PAGE_LENGTH * page;
++ if (offset + size > MLXSW_REG_MCIA_EEPROM_PAGE_LENGTH)
++ size = MLXSW_REG_MCIA_EEPROM_PAGE_LENGTH - offset;
++ } else {
++ /* When reading upper pages 1, 2 and 3 the offset
++ * starts at 0 and I2C high address is used. Please refer
++ * refer to "Memory Organization" figure in SFF-8472
++ * specification for graphical depiction.
++ */
++ i2c_addr = MLXSW_REG_MCIA_I2C_ADDR_HIGH;
++ offset -= MLXSW_REG_MCIA_EEPROM_PAGE_LENGTH;
++ }
+ }
+
+ mlxsw_reg_mcia_pack(mcia_pl, module, 0, page, offset, size, i2c_addr);
+@@ -166,7 +177,7 @@ int mlxsw_env_get_module_info(struct mlxsw_core *mlxsw_core, int module,
+ int err;
+
+ err = mlxsw_env_query_module_eeprom(mlxsw_core, module, 0, offset,
+- module_info, &read_size);
++ module_info, false, &read_size);
+ if (err)
+ return err;
+
+@@ -197,7 +208,7 @@ int mlxsw_env_get_module_info(struct mlxsw_core *mlxsw_core, int module,
+ /* Verify if transceiver provides diagnostic monitoring page */
+ err = mlxsw_env_query_module_eeprom(mlxsw_core, module,
+ SFP_DIAGMON, 1, &diag_mon,
+- &read_size);
++ false, &read_size);
+ if (err)
+ return err;
+
+@@ -225,17 +236,22 @@ int mlxsw_env_get_module_eeprom(struct net_device *netdev,
+ int offset = ee->offset;
+ unsigned int read_size;
+ int i = 0;
++ bool qsfp;
+ int err;
+
+ if (!ee->len)
+ return -EINVAL;
+
+ memset(data, 0, ee->len);
++ /* Validate module identifier value. */
++ err = mlxsw_env_validate_cable_ident(mlxsw_core, module, &qsfp);
++ if (err)
++ return err;
+
+ while (i < ee->len) {
+ err = mlxsw_env_query_module_eeprom(mlxsw_core, module, offset,
+ ee->len - i, data + i,
+- &read_size);
++ qsfp, &read_size);
+ if (err) {
+ netdev_err(netdev, "Eeprom query failed\n");
+ return err;
+diff --git a/drivers/net/ethernet/pensando/ionic/ionic_ethtool.c b/drivers/net/ethernet/pensando/ionic/ionic_ethtool.c
+index 22430fa911e2..63d78519cbc6 100644
+--- a/drivers/net/ethernet/pensando/ionic/ionic_ethtool.c
++++ b/drivers/net/ethernet/pensando/ionic/ionic_ethtool.c
+@@ -102,15 +102,18 @@ static void ionic_get_regs(struct net_device *netdev, struct ethtool_regs *regs,
+ void *p)
+ {
+ struct ionic_lif *lif = netdev_priv(netdev);
++ unsigned int offset;
+ unsigned int size;
+
+ regs->version = IONIC_DEV_CMD_REG_VERSION;
+
++ offset = 0;
+ size = IONIC_DEV_INFO_REG_COUNT * sizeof(u32);
+- memcpy_fromio(p, lif->ionic->idev.dev_info_regs->words, size);
++ memcpy_fromio(p + offset, lif->ionic->idev.dev_info_regs->words, size);
+
++ offset += size;
+ size = IONIC_DEV_CMD_REG_COUNT * sizeof(u32);
+- memcpy_fromio(p, lif->ionic->idev.dev_cmd_regs->words, size);
++ memcpy_fromio(p + offset, lif->ionic->idev.dev_cmd_regs->words, size);
+ }
+
+ static int ionic_get_link_ksettings(struct net_device *netdev,
+diff --git a/drivers/net/ethernet/pensando/ionic/ionic_lif.c b/drivers/net/ethernet/pensando/ionic/ionic_lif.c
+index 7fea60fc3e08..2c3e9ef22129 100644
+--- a/drivers/net/ethernet/pensando/ionic/ionic_lif.c
++++ b/drivers/net/ethernet/pensando/ionic/ionic_lif.c
+@@ -85,8 +85,7 @@ static void ionic_link_status_check(struct ionic_lif *lif)
+ u16 link_status;
+ bool link_up;
+
+- if (!test_bit(IONIC_LIF_F_LINK_CHECK_REQUESTED, lif->state) ||
+- test_bit(IONIC_LIF_F_QUEUE_RESET, lif->state))
++ if (!test_bit(IONIC_LIF_F_LINK_CHECK_REQUESTED, lif->state))
+ return;
+
+ if (lif->ionic->is_mgmt_nic)
+@@ -106,16 +105,22 @@ static void ionic_link_status_check(struct ionic_lif *lif)
+ netif_carrier_on(netdev);
+ }
+
+- if (lif->netdev->flags & IFF_UP && netif_running(lif->netdev))
++ if (lif->netdev->flags & IFF_UP && netif_running(lif->netdev)) {
++ mutex_lock(&lif->queue_lock);
+ ionic_start_queues(lif);
++ mutex_unlock(&lif->queue_lock);
++ }
+ } else {
+ if (netif_carrier_ok(netdev)) {
+ netdev_info(netdev, "Link down\n");
+ netif_carrier_off(netdev);
+ }
+
+- if (lif->netdev->flags & IFF_UP && netif_running(lif->netdev))
++ if (lif->netdev->flags & IFF_UP && netif_running(lif->netdev)) {
++ mutex_lock(&lif->queue_lock);
+ ionic_stop_queues(lif);
++ mutex_unlock(&lif->queue_lock);
++ }
+ }
+
+ clear_bit(IONIC_LIF_F_LINK_CHECK_REQUESTED, lif->state);
+@@ -849,8 +854,7 @@ static int ionic_lif_addr_add(struct ionic_lif *lif, const u8 *addr)
+ if (f)
+ return 0;
+
+- netdev_dbg(lif->netdev, "rx_filter add ADDR %pM (id %d)\n", addr,
+- ctx.comp.rx_filter_add.filter_id);
++ netdev_dbg(lif->netdev, "rx_filter add ADDR %pM\n", addr);
+
+ memcpy(ctx.cmd.rx_filter_add.mac.addr, addr, ETH_ALEN);
+ err = ionic_adminq_post_wait(lif, &ctx);
+@@ -879,6 +883,9 @@ static int ionic_lif_addr_del(struct ionic_lif *lif, const u8 *addr)
+ return -ENOENT;
+ }
+
++ netdev_dbg(lif->netdev, "rx_filter del ADDR %pM (id %d)\n",
++ addr, f->filter_id);
++
+ ctx.cmd.rx_filter_del.filter_id = cpu_to_le32(f->filter_id);
+ ionic_rx_filter_free(lif, f);
+ spin_unlock_bh(&lif->rx_filters.lock);
+@@ -887,9 +894,6 @@ static int ionic_lif_addr_del(struct ionic_lif *lif, const u8 *addr)
+ if (err && err != -EEXIST)
+ return err;
+
+- netdev_dbg(lif->netdev, "rx_filter del ADDR %pM (id %d)\n", addr,
+- ctx.cmd.rx_filter_del.filter_id);
+-
+ return 0;
+ }
+
+@@ -1341,13 +1345,11 @@ static int ionic_vlan_rx_add_vid(struct net_device *netdev, __be16 proto,
+ };
+ int err;
+
++ netdev_dbg(netdev, "rx_filter add VLAN %d\n", vid);
+ err = ionic_adminq_post_wait(lif, &ctx);
+ if (err)
+ return err;
+
+- netdev_dbg(netdev, "rx_filter add VLAN %d (id %d)\n", vid,
+- ctx.comp.rx_filter_add.filter_id);
+-
+ return ionic_rx_filter_save(lif, 0, IONIC_RXQ_INDEX_ANY, 0, &ctx);
+ }
+
+@@ -1372,8 +1374,8 @@ static int ionic_vlan_rx_kill_vid(struct net_device *netdev, __be16 proto,
+ return -ENOENT;
+ }
+
+- netdev_dbg(netdev, "rx_filter del VLAN %d (id %d)\n", vid,
+- le32_to_cpu(ctx.cmd.rx_filter_del.filter_id));
++ netdev_dbg(netdev, "rx_filter del VLAN %d (id %d)\n",
++ vid, f->filter_id);
+
+ ctx.cmd.rx_filter_del.filter_id = cpu_to_le32(f->filter_id);
+ ionic_rx_filter_free(lif, f);
+@@ -1951,16 +1953,13 @@ int ionic_reset_queues(struct ionic_lif *lif, ionic_reset_cb cb, void *arg)
+ bool running;
+ int err = 0;
+
+- err = ionic_wait_for_bit(lif, IONIC_LIF_F_QUEUE_RESET);
+- if (err)
+- return err;
+-
++ mutex_lock(&lif->queue_lock);
+ running = netif_running(lif->netdev);
+ if (running) {
+ netif_device_detach(lif->netdev);
+ err = ionic_stop(lif->netdev);
+ if (err)
+- goto reset_out;
++ return err;
+ }
+
+ if (cb)
+@@ -1970,9 +1969,7 @@ int ionic_reset_queues(struct ionic_lif *lif, ionic_reset_cb cb, void *arg)
+ err = ionic_open(lif->netdev);
+ netif_device_attach(lif->netdev);
+ }
+-
+-reset_out:
+- clear_bit(IONIC_LIF_F_QUEUE_RESET, lif->state);
++ mutex_unlock(&lif->queue_lock);
+
+ return err;
+ }
+@@ -2111,7 +2108,9 @@ static void ionic_lif_handle_fw_down(struct ionic_lif *lif)
+
+ if (test_bit(IONIC_LIF_F_UP, lif->state)) {
+ dev_info(ionic->dev, "Surprise FW stop, stopping queues\n");
++ mutex_lock(&lif->queue_lock);
+ ionic_stop_queues(lif);
++ mutex_unlock(&lif->queue_lock);
+ }
+
+ if (netif_running(lif->netdev)) {
+@@ -2230,15 +2229,15 @@ static void ionic_lif_deinit(struct ionic_lif *lif)
+ cancel_work_sync(&lif->deferred.work);
+ cancel_work_sync(&lif->tx_timeout_work);
+ ionic_rx_filters_deinit(lif);
++ if (lif->netdev->features & NETIF_F_RXHASH)
++ ionic_lif_rss_deinit(lif);
+ }
+
+- if (lif->netdev->features & NETIF_F_RXHASH)
+- ionic_lif_rss_deinit(lif);
+-
+ napi_disable(&lif->adminqcq->napi);
+ ionic_lif_qcq_deinit(lif, lif->notifyqcq);
+ ionic_lif_qcq_deinit(lif, lif->adminqcq);
+
++ mutex_destroy(&lif->queue_lock);
+ ionic_lif_reset(lif);
+ }
+
+@@ -2414,6 +2413,7 @@ static int ionic_lif_init(struct ionic_lif *lif)
+ return err;
+
+ lif->hw_index = le16_to_cpu(comp.hw_index);
++ mutex_init(&lif->queue_lock);
+
+ /* now that we have the hw_index we can figure out our doorbell page */
+ lif->dbid_count = le32_to_cpu(lif->ionic->ident.dev.ndbpgs_per_lif);
+diff --git a/drivers/net/ethernet/pensando/ionic/ionic_lif.h b/drivers/net/ethernet/pensando/ionic/ionic_lif.h
+index 2c65cf6300db..90992614f136 100644
+--- a/drivers/net/ethernet/pensando/ionic/ionic_lif.h
++++ b/drivers/net/ethernet/pensando/ionic/ionic_lif.h
+@@ -126,7 +126,6 @@ enum ionic_lif_state_flags {
+ IONIC_LIF_F_SW_DEBUG_STATS,
+ IONIC_LIF_F_UP,
+ IONIC_LIF_F_LINK_CHECK_REQUESTED,
+- IONIC_LIF_F_QUEUE_RESET,
+ IONIC_LIF_F_FW_RESET,
+
+ /* leave this as last */
+@@ -145,6 +144,7 @@ struct ionic_lif {
+ unsigned int hw_index;
+ unsigned int kern_pid;
+ u64 __iomem *kern_dbpage;
++ struct mutex queue_lock; /* lock for queue structures */
+ spinlock_t adminq_lock; /* lock for AdminQ operations */
+ struct ionic_qcq *adminqcq;
+ struct ionic_qcq *notifyqcq;
+@@ -191,12 +191,6 @@ struct ionic_lif {
+ #define lif_to_txq(lif, i) (&lif_to_txqcq((lif), i)->q)
+ #define lif_to_rxq(lif, i) (&lif_to_txqcq((lif), i)->q)
+
+-/* return 0 if successfully set the bit, else non-zero */
+-static inline int ionic_wait_for_bit(struct ionic_lif *lif, int bitname)
+-{
+- return wait_on_bit_lock(lif->state, bitname, TASK_INTERRUPTIBLE);
+-}
+-
+ static inline u32 ionic_coal_usec_to_hw(struct ionic *ionic, u32 usecs)
+ {
+ u32 mult = le32_to_cpu(ionic->ident.dev.intr_coal_mult);
+diff --git a/drivers/net/ethernet/pensando/ionic/ionic_rx_filter.c b/drivers/net/ethernet/pensando/ionic/ionic_rx_filter.c
+index 80eeb7696e01..cd0076fc3044 100644
+--- a/drivers/net/ethernet/pensando/ionic/ionic_rx_filter.c
++++ b/drivers/net/ethernet/pensando/ionic/ionic_rx_filter.c
+@@ -21,13 +21,16 @@ void ionic_rx_filter_free(struct ionic_lif *lif, struct ionic_rx_filter *f)
+ void ionic_rx_filter_replay(struct ionic_lif *lif)
+ {
+ struct ionic_rx_filter_add_cmd *ac;
++ struct hlist_head new_id_list;
+ struct ionic_admin_ctx ctx;
+ struct ionic_rx_filter *f;
+ struct hlist_head *head;
+ struct hlist_node *tmp;
++ unsigned int key;
+ unsigned int i;
+ int err;
+
++ INIT_HLIST_HEAD(&new_id_list);
+ ac = &ctx.cmd.rx_filter_add;
+
+ for (i = 0; i < IONIC_RX_FILTER_HLISTS; i++) {
+@@ -58,9 +61,30 @@ void ionic_rx_filter_replay(struct ionic_lif *lif)
+ ac->mac.addr);
+ break;
+ }
++ spin_lock_bh(&lif->rx_filters.lock);
++ ionic_rx_filter_free(lif, f);
++ spin_unlock_bh(&lif->rx_filters.lock);
++
++ continue;
+ }
++
++ /* remove from old id list, save new id in tmp list */
++ spin_lock_bh(&lif->rx_filters.lock);
++ hlist_del(&f->by_id);
++ spin_unlock_bh(&lif->rx_filters.lock);
++ f->filter_id = le32_to_cpu(ctx.comp.rx_filter_add.filter_id);
++ hlist_add_head(&f->by_id, &new_id_list);
+ }
+ }
++
++ /* rebuild the by_id hash lists with the new filter ids */
++ spin_lock_bh(&lif->rx_filters.lock);
++ hlist_for_each_entry_safe(f, tmp, &new_id_list, by_id) {
++ key = f->filter_id & IONIC_RX_FILTER_HLISTS_MASK;
++ head = &lif->rx_filters.by_id[key];
++ hlist_add_head(&f->by_id, head);
++ }
++ spin_unlock_bh(&lif->rx_filters.lock);
+ }
+
+ int ionic_rx_filters_init(struct ionic_lif *lif)
+@@ -69,10 +93,12 @@ int ionic_rx_filters_init(struct ionic_lif *lif)
+
+ spin_lock_init(&lif->rx_filters.lock);
+
++ spin_lock_bh(&lif->rx_filters.lock);
+ for (i = 0; i < IONIC_RX_FILTER_HLISTS; i++) {
+ INIT_HLIST_HEAD(&lif->rx_filters.by_hash[i]);
+ INIT_HLIST_HEAD(&lif->rx_filters.by_id[i]);
+ }
++ spin_unlock_bh(&lif->rx_filters.lock);
+
+ return 0;
+ }
+@@ -84,11 +110,13 @@ void ionic_rx_filters_deinit(struct ionic_lif *lif)
+ struct hlist_node *tmp;
+ unsigned int i;
+
++ spin_lock_bh(&lif->rx_filters.lock);
+ for (i = 0; i < IONIC_RX_FILTER_HLISTS; i++) {
+ head = &lif->rx_filters.by_id[i];
+ hlist_for_each_entry_safe(f, tmp, head, by_id)
+ ionic_rx_filter_free(lif, f);
+ }
++ spin_unlock_bh(&lif->rx_filters.lock);
+ }
+
+ int ionic_rx_filter_save(struct ionic_lif *lif, u32 flow_id, u16 rxq_index,
+@@ -124,6 +152,7 @@ int ionic_rx_filter_save(struct ionic_lif *lif, u32 flow_id, u16 rxq_index,
+ f->filter_id = le32_to_cpu(ctx->comp.rx_filter_add.filter_id);
+ f->rxq_index = rxq_index;
+ memcpy(&f->cmd, ac, sizeof(f->cmd));
++ netdev_dbg(lif->netdev, "rx_filter add filter_id %d\n", f->filter_id);
+
+ INIT_HLIST_NODE(&f->by_hash);
+ INIT_HLIST_NODE(&f->by_id);
+diff --git a/drivers/net/ethernet/pensando/ionic/ionic_txrx.c b/drivers/net/ethernet/pensando/ionic/ionic_txrx.c
+index d233b6e77b1e..ce8e246cda07 100644
+--- a/drivers/net/ethernet/pensando/ionic/ionic_txrx.c
++++ b/drivers/net/ethernet/pensando/ionic/ionic_txrx.c
+@@ -157,12 +157,6 @@ static void ionic_rx_clean(struct ionic_queue *q, struct ionic_desc_info *desc_i
+ return;
+ }
+
+- /* no packet processing while resetting */
+- if (unlikely(test_bit(IONIC_LIF_F_QUEUE_RESET, q->lif->state))) {
+- stats->dropped++;
+- return;
+- }
+-
+ stats->pkts++;
+ stats->bytes += le16_to_cpu(comp->len);
+
+diff --git a/drivers/net/ethernet/qlogic/qed/qed_cxt.c b/drivers/net/ethernet/qlogic/qed/qed_cxt.c
+index aeed8939f410..3bbcff5f2621 100644
+--- a/drivers/net/ethernet/qlogic/qed/qed_cxt.c
++++ b/drivers/net/ethernet/qlogic/qed/qed_cxt.c
+@@ -1987,8 +1987,8 @@ static void qed_rdma_set_pf_params(struct qed_hwfn *p_hwfn,
+ num_srqs = min_t(u32, QED_RDMA_MAX_SRQS, p_params->num_srqs);
+
+ if (p_hwfn->mcp_info->func_info.protocol == QED_PCI_ETH_RDMA) {
+- DP_NOTICE(p_hwfn,
+- "Current day drivers don't support RoCE & iWARP simultaneously on the same PF. Default to RoCE-only\n");
++ DP_VERBOSE(p_hwfn, QED_MSG_SP,
++ "Current day drivers don't support RoCE & iWARP simultaneously on the same PF. Default to RoCE-only\n");
+ p_hwfn->hw_info.personality = QED_PCI_ETH_ROCE;
+ }
+
+diff --git a/drivers/net/ethernet/qlogic/qed/qed_dev.c b/drivers/net/ethernet/qlogic/qed/qed_dev.c
+index 58913fe4f345..0629dd4e18d9 100644
+--- a/drivers/net/ethernet/qlogic/qed/qed_dev.c
++++ b/drivers/net/ethernet/qlogic/qed/qed_dev.c
+@@ -3096,7 +3096,7 @@ int qed_hw_init(struct qed_dev *cdev, struct qed_hw_init_params *p_params)
+ }
+
+ /* Log and clear previous pglue_b errors if such exist */
+- qed_pglueb_rbc_attn_handler(p_hwfn, p_hwfn->p_main_ptt);
++ qed_pglueb_rbc_attn_handler(p_hwfn, p_hwfn->p_main_ptt, true);
+
+ /* Enable the PF's internal FID_enable in the PXP */
+ rc = qed_pglueb_set_pfid_enable(p_hwfn, p_hwfn->p_main_ptt,
+diff --git a/drivers/net/ethernet/qlogic/qed/qed_int.c b/drivers/net/ethernet/qlogic/qed/qed_int.c
+index 9f5113639eaf..8d106063e927 100644
+--- a/drivers/net/ethernet/qlogic/qed/qed_int.c
++++ b/drivers/net/ethernet/qlogic/qed/qed_int.c
+@@ -256,9 +256,10 @@ out:
+ #define PGLUE_ATTENTION_ZLR_VALID (1 << 25)
+ #define PGLUE_ATTENTION_ILT_VALID (1 << 23)
+
+-int qed_pglueb_rbc_attn_handler(struct qed_hwfn *p_hwfn,
+- struct qed_ptt *p_ptt)
++int qed_pglueb_rbc_attn_handler(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt,
++ bool hw_init)
+ {
++ char msg[256];
+ u32 tmp;
+
+ tmp = qed_rd(p_hwfn, p_ptt, PGLUE_B_REG_TX_ERR_WR_DETAILS2);
+@@ -272,22 +273,23 @@ int qed_pglueb_rbc_attn_handler(struct qed_hwfn *p_hwfn,
+ details = qed_rd(p_hwfn, p_ptt,
+ PGLUE_B_REG_TX_ERR_WR_DETAILS);
+
+- DP_NOTICE(p_hwfn,
+- "Illegal write by chip to [%08x:%08x] blocked.\n"
+- "Details: %08x [PFID %02x, VFID %02x, VF_VALID %02x]\n"
+- "Details2 %08x [Was_error %02x BME deassert %02x FID_enable deassert %02x]\n",
+- addr_hi, addr_lo, details,
+- (u8)GET_FIELD(details, PGLUE_ATTENTION_DETAILS_PFID),
+- (u8)GET_FIELD(details, PGLUE_ATTENTION_DETAILS_VFID),
+- GET_FIELD(details,
+- PGLUE_ATTENTION_DETAILS_VF_VALID) ? 1 : 0,
+- tmp,
+- GET_FIELD(tmp,
+- PGLUE_ATTENTION_DETAILS2_WAS_ERR) ? 1 : 0,
+- GET_FIELD(tmp,
+- PGLUE_ATTENTION_DETAILS2_BME) ? 1 : 0,
+- GET_FIELD(tmp,
+- PGLUE_ATTENTION_DETAILS2_FID_EN) ? 1 : 0);
++ snprintf(msg, sizeof(msg),
++ "Illegal write by chip to [%08x:%08x] blocked.\n"
++ "Details: %08x [PFID %02x, VFID %02x, VF_VALID %02x]\n"
++ "Details2 %08x [Was_error %02x BME deassert %02x FID_enable deassert %02x]",
++ addr_hi, addr_lo, details,
++ (u8)GET_FIELD(details, PGLUE_ATTENTION_DETAILS_PFID),
++ (u8)GET_FIELD(details, PGLUE_ATTENTION_DETAILS_VFID),
++ !!GET_FIELD(details, PGLUE_ATTENTION_DETAILS_VF_VALID),
++ tmp,
++ !!GET_FIELD(tmp, PGLUE_ATTENTION_DETAILS2_WAS_ERR),
++ !!GET_FIELD(tmp, PGLUE_ATTENTION_DETAILS2_BME),
++ !!GET_FIELD(tmp, PGLUE_ATTENTION_DETAILS2_FID_EN));
++
++ if (hw_init)
++ DP_VERBOSE(p_hwfn, NETIF_MSG_INTR, "%s\n", msg);
++ else
++ DP_NOTICE(p_hwfn, "%s\n", msg);
+ }
+
+ tmp = qed_rd(p_hwfn, p_ptt, PGLUE_B_REG_TX_ERR_RD_DETAILS2);
+@@ -320,8 +322,14 @@ int qed_pglueb_rbc_attn_handler(struct qed_hwfn *p_hwfn,
+ }
+
+ tmp = qed_rd(p_hwfn, p_ptt, PGLUE_B_REG_TX_ERR_WR_DETAILS_ICPL);
+- if (tmp & PGLUE_ATTENTION_ICPL_VALID)
+- DP_NOTICE(p_hwfn, "ICPL error - %08x\n", tmp);
++ if (tmp & PGLUE_ATTENTION_ICPL_VALID) {
++ snprintf(msg, sizeof(msg), "ICPL error - %08x", tmp);
++
++ if (hw_init)
++ DP_VERBOSE(p_hwfn, NETIF_MSG_INTR, "%s\n", msg);
++ else
++ DP_NOTICE(p_hwfn, "%s\n", msg);
++ }
+
+ tmp = qed_rd(p_hwfn, p_ptt, PGLUE_B_REG_MASTER_ZLR_ERR_DETAILS);
+ if (tmp & PGLUE_ATTENTION_ZLR_VALID) {
+@@ -360,7 +368,7 @@ int qed_pglueb_rbc_attn_handler(struct qed_hwfn *p_hwfn,
+
+ static int qed_pglueb_rbc_attn_cb(struct qed_hwfn *p_hwfn)
+ {
+- return qed_pglueb_rbc_attn_handler(p_hwfn, p_hwfn->p_dpc_ptt);
++ return qed_pglueb_rbc_attn_handler(p_hwfn, p_hwfn->p_dpc_ptt, false);
+ }
+
+ #define QED_DORQ_ATTENTION_REASON_MASK (0xfffff)
+diff --git a/drivers/net/ethernet/qlogic/qed/qed_int.h b/drivers/net/ethernet/qlogic/qed/qed_int.h
+index 9ad568d93ae6..defb0d1bc45a 100644
+--- a/drivers/net/ethernet/qlogic/qed/qed_int.h
++++ b/drivers/net/ethernet/qlogic/qed/qed_int.h
+@@ -431,7 +431,7 @@ int qed_int_set_timer_res(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt,
+
+ #define QED_MAPPING_MEMORY_SIZE(dev) (NUM_OF_SBS(dev))
+
+-int qed_pglueb_rbc_attn_handler(struct qed_hwfn *p_hwfn,
+- struct qed_ptt *p_ptt);
++int qed_pglueb_rbc_attn_handler(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt,
++ bool hw_init);
+
+ #endif
+diff --git a/drivers/net/ethernet/smsc/smc91x.c b/drivers/net/ethernet/smsc/smc91x.c
+index 90410f9d3b1a..1c4fea9c3ec4 100644
+--- a/drivers/net/ethernet/smsc/smc91x.c
++++ b/drivers/net/ethernet/smsc/smc91x.c
+@@ -2274,7 +2274,7 @@ static int smc_drv_probe(struct platform_device *pdev)
+ ret = try_toggle_control_gpio(&pdev->dev, &lp->power_gpio,
+ "power", 0, 0, 100);
+ if (ret)
+- return ret;
++ goto out_free_netdev;
+
+ /*
+ * Optional reset GPIO configured? Minimum 100 ns reset needed
+@@ -2283,7 +2283,7 @@ static int smc_drv_probe(struct platform_device *pdev)
+ ret = try_toggle_control_gpio(&pdev->dev, &lp->reset_gpio,
+ "reset", 0, 0, 100);
+ if (ret)
+- return ret;
++ goto out_free_netdev;
+
+ /*
+ * Need to wait for optional EEPROM to load, max 750 us according
+diff --git a/drivers/net/ethernet/socionext/sni_ave.c b/drivers/net/ethernet/socionext/sni_ave.c
+index 67ddf782d98a..897c895629d0 100644
+--- a/drivers/net/ethernet/socionext/sni_ave.c
++++ b/drivers/net/ethernet/socionext/sni_ave.c
+@@ -1191,7 +1191,7 @@ static int ave_init(struct net_device *ndev)
+ ret = regmap_update_bits(priv->regmap, SG_ETPINMODE,
+ priv->pinmode_mask, priv->pinmode_val);
+ if (ret)
+- return ret;
++ goto out_reset_assert;
+
+ ave_global_reset(ndev);
+
+diff --git a/drivers/net/geneve.c b/drivers/net/geneve.c
+index 4661ef865807..dec52b763d50 100644
+--- a/drivers/net/geneve.c
++++ b/drivers/net/geneve.c
+@@ -1615,11 +1615,11 @@ static int geneve_changelink(struct net_device *dev, struct nlattr *tb[],
+ struct netlink_ext_ack *extack)
+ {
+ struct geneve_dev *geneve = netdev_priv(dev);
++ enum ifla_geneve_df df = geneve->df;
+ struct geneve_sock *gs4, *gs6;
+ struct ip_tunnel_info info;
+ bool metadata;
+ bool use_udp6_rx_checksums;
+- enum ifla_geneve_df df;
+ bool ttl_inherit;
+ int err;
+
+diff --git a/drivers/net/hippi/rrunner.c b/drivers/net/hippi/rrunner.c
+index 2a6ec5394966..a4b3fce69ecd 100644
+--- a/drivers/net/hippi/rrunner.c
++++ b/drivers/net/hippi/rrunner.c
+@@ -1242,7 +1242,7 @@ static int rr_open(struct net_device *dev)
+ rrpriv->info = NULL;
+ }
+ if (rrpriv->rx_ctrl) {
+- pci_free_consistent(pdev, sizeof(struct ring_ctrl),
++ pci_free_consistent(pdev, 256 * sizeof(struct ring_ctrl),
+ rrpriv->rx_ctrl, rrpriv->rx_ctrl_dma);
+ rrpriv->rx_ctrl = NULL;
+ }
+diff --git a/drivers/net/ieee802154/adf7242.c b/drivers/net/ieee802154/adf7242.c
+index 5a37514e4234..8dbccec6ac86 100644
+--- a/drivers/net/ieee802154/adf7242.c
++++ b/drivers/net/ieee802154/adf7242.c
+@@ -1262,7 +1262,7 @@ static int adf7242_probe(struct spi_device *spi)
+ WQ_MEM_RECLAIM);
+ if (unlikely(!lp->wqueue)) {
+ ret = -ENOMEM;
+- goto err_hw_init;
++ goto err_alloc_wq;
+ }
+
+ ret = adf7242_hw_init(lp);
+@@ -1294,6 +1294,8 @@ static int adf7242_probe(struct spi_device *spi)
+ return ret;
+
+ err_hw_init:
++ destroy_workqueue(lp->wqueue);
++err_alloc_wq:
+ mutex_destroy(&lp->bmux);
+ ieee802154_free_hw(lp->hw);
+
+diff --git a/drivers/net/netdevsim/netdev.c b/drivers/net/netdevsim/netdev.c
+index 2908e0a0d6e1..23950e7a0f81 100644
+--- a/drivers/net/netdevsim/netdev.c
++++ b/drivers/net/netdevsim/netdev.c
+@@ -302,7 +302,7 @@ nsim_create(struct nsim_dev *nsim_dev, struct nsim_dev_port *nsim_dev_port)
+ rtnl_lock();
+ err = nsim_bpf_init(ns);
+ if (err)
+- goto err_free_netdev;
++ goto err_rtnl_unlock;
+
+ nsim_ipsec_init(ns);
+
+@@ -316,8 +316,8 @@ nsim_create(struct nsim_dev *nsim_dev, struct nsim_dev_port *nsim_dev_port)
+ err_ipsec_teardown:
+ nsim_ipsec_teardown(ns);
+ nsim_bpf_uninit(ns);
++err_rtnl_unlock:
+ rtnl_unlock();
+-err_free_netdev:
+ free_netdev(dev);
+ return ERR_PTR(err);
+ }
+diff --git a/drivers/net/phy/dp83640.c b/drivers/net/phy/dp83640.c
+index ecbd5e0d685c..acb0aae60755 100644
+--- a/drivers/net/phy/dp83640.c
++++ b/drivers/net/phy/dp83640.c
+@@ -1260,6 +1260,7 @@ static int dp83640_hwtstamp(struct mii_timestamper *mii_ts, struct ifreq *ifr)
+ dp83640->hwts_rx_en = 1;
+ dp83640->layer = PTP_CLASS_L4;
+ dp83640->version = PTP_CLASS_V1;
++ cfg.rx_filter = HWTSTAMP_FILTER_PTP_V1_L4_EVENT;
+ break;
+ case HWTSTAMP_FILTER_PTP_V2_L4_EVENT:
+ case HWTSTAMP_FILTER_PTP_V2_L4_SYNC:
+@@ -1267,6 +1268,7 @@ static int dp83640_hwtstamp(struct mii_timestamper *mii_ts, struct ifreq *ifr)
+ dp83640->hwts_rx_en = 1;
+ dp83640->layer = PTP_CLASS_L4;
+ dp83640->version = PTP_CLASS_V2;
++ cfg.rx_filter = HWTSTAMP_FILTER_PTP_V2_L4_EVENT;
+ break;
+ case HWTSTAMP_FILTER_PTP_V2_L2_EVENT:
+ case HWTSTAMP_FILTER_PTP_V2_L2_SYNC:
+@@ -1274,6 +1276,7 @@ static int dp83640_hwtstamp(struct mii_timestamper *mii_ts, struct ifreq *ifr)
+ dp83640->hwts_rx_en = 1;
+ dp83640->layer = PTP_CLASS_L2;
+ dp83640->version = PTP_CLASS_V2;
++ cfg.rx_filter = HWTSTAMP_FILTER_PTP_V2_L2_EVENT;
+ break;
+ case HWTSTAMP_FILTER_PTP_V2_EVENT:
+ case HWTSTAMP_FILTER_PTP_V2_SYNC:
+@@ -1281,6 +1284,7 @@ static int dp83640_hwtstamp(struct mii_timestamper *mii_ts, struct ifreq *ifr)
+ dp83640->hwts_rx_en = 1;
+ dp83640->layer = PTP_CLASS_L4 | PTP_CLASS_L2;
+ dp83640->version = PTP_CLASS_V2;
++ cfg.rx_filter = HWTSTAMP_FILTER_PTP_V2_EVENT;
+ break;
+ default:
+ return -ERANGE;
+diff --git a/drivers/net/usb/ax88172a.c b/drivers/net/usb/ax88172a.c
+index 4e514f5d7c6c..fd3a04d98dc1 100644
+--- a/drivers/net/usb/ax88172a.c
++++ b/drivers/net/usb/ax88172a.c
+@@ -187,6 +187,7 @@ static int ax88172a_bind(struct usbnet *dev, struct usb_interface *intf)
+ ret = asix_read_cmd(dev, AX_CMD_READ_NODE_ID, 0, 0, ETH_ALEN, buf, 0);
+ if (ret < ETH_ALEN) {
+ netdev_err(dev->net, "Failed to read MAC address: %d\n", ret);
++ ret = -EIO;
+ goto free;
+ }
+ memcpy(dev->net->dev_addr, buf, ETH_ALEN);
+diff --git a/drivers/net/wan/lapbether.c b/drivers/net/wan/lapbether.c
+index e30d91a38cfb..284832314f31 100644
+--- a/drivers/net/wan/lapbether.c
++++ b/drivers/net/wan/lapbether.c
+@@ -303,7 +303,6 @@ static void lapbeth_setup(struct net_device *dev)
+ dev->netdev_ops = &lapbeth_netdev_ops;
+ dev->needs_free_netdev = true;
+ dev->type = ARPHRD_X25;
+- dev->hard_header_len = 3;
+ dev->mtu = 1000;
+ dev->addr_len = 0;
+ }
+@@ -324,6 +323,14 @@ static int lapbeth_new_device(struct net_device *dev)
+ if (!ndev)
+ goto out;
+
++ /* When transmitting data:
++ * first this driver removes a pseudo header of 1 byte,
++ * then the lapb module prepends an LAPB header of at most 3 bytes,
++ * then this driver prepends a length field of 2 bytes,
++ * then the underlying Ethernet device prepends its own header.
++ */
++ ndev->hard_header_len = -1 + 3 + 2 + dev->hard_header_len;
++
+ lapbeth = netdev_priv(ndev);
+ lapbeth->axdev = ndev;
+
+diff --git a/drivers/net/wireless/ath/ath9k/hif_usb.c b/drivers/net/wireless/ath/ath9k/hif_usb.c
+index 6049d3766c64..3f563e02d17d 100644
+--- a/drivers/net/wireless/ath/ath9k/hif_usb.c
++++ b/drivers/net/wireless/ath/ath9k/hif_usb.c
+@@ -643,9 +643,9 @@ err:
+
+ static void ath9k_hif_usb_rx_cb(struct urb *urb)
+ {
+- struct sk_buff *skb = (struct sk_buff *) urb->context;
+- struct hif_device_usb *hif_dev =
+- usb_get_intfdata(usb_ifnum_to_if(urb->dev, 0));
++ struct rx_buf *rx_buf = (struct rx_buf *)urb->context;
++ struct hif_device_usb *hif_dev = rx_buf->hif_dev;
++ struct sk_buff *skb = rx_buf->skb;
+ int ret;
+
+ if (!skb)
+@@ -685,14 +685,15 @@ resubmit:
+ return;
+ free:
+ kfree_skb(skb);
++ kfree(rx_buf);
+ }
+
+ static void ath9k_hif_usb_reg_in_cb(struct urb *urb)
+ {
+- struct sk_buff *skb = (struct sk_buff *) urb->context;
++ struct rx_buf *rx_buf = (struct rx_buf *)urb->context;
++ struct hif_device_usb *hif_dev = rx_buf->hif_dev;
++ struct sk_buff *skb = rx_buf->skb;
+ struct sk_buff *nskb;
+- struct hif_device_usb *hif_dev =
+- usb_get_intfdata(usb_ifnum_to_if(urb->dev, 0));
+ int ret;
+
+ if (!skb)
+@@ -732,11 +733,13 @@ static void ath9k_hif_usb_reg_in_cb(struct urb *urb)
+ return;
+ }
+
++ rx_buf->skb = nskb;
++
+ usb_fill_int_urb(urb, hif_dev->udev,
+ usb_rcvintpipe(hif_dev->udev,
+ USB_REG_IN_PIPE),
+ nskb->data, MAX_REG_IN_BUF_SIZE,
+- ath9k_hif_usb_reg_in_cb, nskb, 1);
++ ath9k_hif_usb_reg_in_cb, rx_buf, 1);
+ }
+
+ resubmit:
+@@ -750,6 +753,7 @@ resubmit:
+ return;
+ free:
+ kfree_skb(skb);
++ kfree(rx_buf);
+ urb->context = NULL;
+ }
+
+@@ -795,7 +799,7 @@ static int ath9k_hif_usb_alloc_tx_urbs(struct hif_device_usb *hif_dev)
+ init_usb_anchor(&hif_dev->mgmt_submitted);
+
+ for (i = 0; i < MAX_TX_URB_NUM; i++) {
+- tx_buf = kzalloc(sizeof(struct tx_buf), GFP_KERNEL);
++ tx_buf = kzalloc(sizeof(*tx_buf), GFP_KERNEL);
+ if (!tx_buf)
+ goto err;
+
+@@ -832,8 +836,9 @@ static void ath9k_hif_usb_dealloc_rx_urbs(struct hif_device_usb *hif_dev)
+
+ static int ath9k_hif_usb_alloc_rx_urbs(struct hif_device_usb *hif_dev)
+ {
+- struct urb *urb = NULL;
++ struct rx_buf *rx_buf = NULL;
+ struct sk_buff *skb = NULL;
++ struct urb *urb = NULL;
+ int i, ret;
+
+ init_usb_anchor(&hif_dev->rx_submitted);
+@@ -841,6 +846,12 @@ static int ath9k_hif_usb_alloc_rx_urbs(struct hif_device_usb *hif_dev)
+
+ for (i = 0; i < MAX_RX_URB_NUM; i++) {
+
++ rx_buf = kzalloc(sizeof(*rx_buf), GFP_KERNEL);
++ if (!rx_buf) {
++ ret = -ENOMEM;
++ goto err_rxb;
++ }
++
+ /* Allocate URB */
+ urb = usb_alloc_urb(0, GFP_KERNEL);
+ if (urb == NULL) {
+@@ -855,11 +866,14 @@ static int ath9k_hif_usb_alloc_rx_urbs(struct hif_device_usb *hif_dev)
+ goto err_skb;
+ }
+
++ rx_buf->hif_dev = hif_dev;
++ rx_buf->skb = skb;
++
+ usb_fill_bulk_urb(urb, hif_dev->udev,
+ usb_rcvbulkpipe(hif_dev->udev,
+ USB_WLAN_RX_PIPE),
+ skb->data, MAX_RX_BUF_SIZE,
+- ath9k_hif_usb_rx_cb, skb);
++ ath9k_hif_usb_rx_cb, rx_buf);
+
+ /* Anchor URB */
+ usb_anchor_urb(urb, &hif_dev->rx_submitted);
+@@ -885,6 +899,8 @@ err_submit:
+ err_skb:
+ usb_free_urb(urb);
+ err_urb:
++ kfree(rx_buf);
++err_rxb:
+ ath9k_hif_usb_dealloc_rx_urbs(hif_dev);
+ return ret;
+ }
+@@ -896,14 +912,21 @@ static void ath9k_hif_usb_dealloc_reg_in_urbs(struct hif_device_usb *hif_dev)
+
+ static int ath9k_hif_usb_alloc_reg_in_urbs(struct hif_device_usb *hif_dev)
+ {
+- struct urb *urb = NULL;
++ struct rx_buf *rx_buf = NULL;
+ struct sk_buff *skb = NULL;
++ struct urb *urb = NULL;
+ int i, ret;
+
+ init_usb_anchor(&hif_dev->reg_in_submitted);
+
+ for (i = 0; i < MAX_REG_IN_URB_NUM; i++) {
+
++ rx_buf = kzalloc(sizeof(*rx_buf), GFP_KERNEL);
++ if (!rx_buf) {
++ ret = -ENOMEM;
++ goto err_rxb;
++ }
++
+ /* Allocate URB */
+ urb = usb_alloc_urb(0, GFP_KERNEL);
+ if (urb == NULL) {
+@@ -918,11 +941,14 @@ static int ath9k_hif_usb_alloc_reg_in_urbs(struct hif_device_usb *hif_dev)
+ goto err_skb;
+ }
+
++ rx_buf->hif_dev = hif_dev;
++ rx_buf->skb = skb;
++
+ usb_fill_int_urb(urb, hif_dev->udev,
+ usb_rcvintpipe(hif_dev->udev,
+ USB_REG_IN_PIPE),
+ skb->data, MAX_REG_IN_BUF_SIZE,
+- ath9k_hif_usb_reg_in_cb, skb, 1);
++ ath9k_hif_usb_reg_in_cb, rx_buf, 1);
+
+ /* Anchor URB */
+ usb_anchor_urb(urb, &hif_dev->reg_in_submitted);
+@@ -948,6 +974,8 @@ err_submit:
+ err_skb:
+ usb_free_urb(urb);
+ err_urb:
++ kfree(rx_buf);
++err_rxb:
+ ath9k_hif_usb_dealloc_reg_in_urbs(hif_dev);
+ return ret;
+ }
+diff --git a/drivers/net/wireless/ath/ath9k/hif_usb.h b/drivers/net/wireless/ath/ath9k/hif_usb.h
+index a94e7e1c86e9..5985aa15ca93 100644
+--- a/drivers/net/wireless/ath/ath9k/hif_usb.h
++++ b/drivers/net/wireless/ath/ath9k/hif_usb.h
+@@ -86,6 +86,11 @@ struct tx_buf {
+ struct list_head list;
+ };
+
++struct rx_buf {
++ struct sk_buff *skb;
++ struct hif_device_usb *hif_dev;
++};
++
+ #define HIF_USB_TX_STOP BIT(0)
+ #define HIF_USB_TX_FLUSH BIT(1)
+
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/sta.c b/drivers/net/wireless/intel/iwlwifi/mvm/sta.c
+index 07ca8c91499d..ff49750b8ee7 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/sta.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/sta.c
+@@ -1184,17 +1184,15 @@ static int iwl_mvm_inactivity_check(struct iwl_mvm *mvm, u8 alloc_for_sta)
+ for_each_set_bit(i, &changetid_queues, IWL_MAX_HW_QUEUES)
+ iwl_mvm_change_queue_tid(mvm, i);
+
++ rcu_read_unlock();
++
+ if (free_queue >= 0 && alloc_for_sta != IWL_MVM_INVALID_STA) {
+ ret = iwl_mvm_free_inactive_queue(mvm, free_queue, queue_owner,
+ alloc_for_sta);
+- if (ret) {
+- rcu_read_unlock();
++ if (ret)
+ return ret;
+- }
+ }
+
+- rcu_read_unlock();
+-
+ return free_queue;
+ }
+
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/drv.c b/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
+index 29971c25dba4..9ea3e5634672 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
+@@ -577,6 +577,8 @@ static const struct iwl_dev_info iwl_dev_info_table[] = {
+ IWL_DEV_INFO(0x30DC, 0x1552, iwl9560_2ac_cfg_soc, iwl9560_killer_1550i_name),
+ IWL_DEV_INFO(0x31DC, 0x1551, iwl9560_2ac_cfg_soc, iwl9560_killer_1550s_name),
+ IWL_DEV_INFO(0x31DC, 0x1552, iwl9560_2ac_cfg_soc, iwl9560_killer_1550i_name),
++ IWL_DEV_INFO(0xA370, 0x1551, iwl9560_2ac_cfg_soc, iwl9560_killer_1550s_name),
++ IWL_DEV_INFO(0xA370, 0x1552, iwl9560_2ac_cfg_soc, iwl9560_killer_1550i_name),
+
+ IWL_DEV_INFO(0x271C, 0x0214, iwl9260_2ac_cfg, iwl9260_1_name),
+
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76.h b/drivers/net/wireless/mediatek/mt76/mt76.h
+index 37641ad14d49..652dd05af16b 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76.h
++++ b/drivers/net/wireless/mediatek/mt76/mt76.h
+@@ -275,6 +275,7 @@ enum {
+ MT76_STATE_RUNNING,
+ MT76_STATE_MCU_RUNNING,
+ MT76_SCANNING,
++ MT76_RESTART,
+ MT76_RESET,
+ MT76_MCU_RESET,
+ MT76_REMOVED,
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76x0/pci.c b/drivers/net/wireless/mediatek/mt76/mt76x0/pci.c
+index 0b520ae08d01..57091d41eb85 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76x0/pci.c
++++ b/drivers/net/wireless/mediatek/mt76/mt76x0/pci.c
+@@ -29,6 +29,7 @@ static void mt76x0e_stop_hw(struct mt76x02_dev *dev)
+ {
+ cancel_delayed_work_sync(&dev->cal_work);
+ cancel_delayed_work_sync(&dev->mt76.mac_work);
++ clear_bit(MT76_RESTART, &dev->mphy.state);
+
+ if (!mt76_poll(dev, MT_WPDMA_GLO_CFG, MT_WPDMA_GLO_CFG_TX_DMA_BUSY,
+ 0, 1000))
+@@ -83,6 +84,7 @@ static const struct ieee80211_ops mt76x0e_ops = {
+ .set_coverage_class = mt76x02_set_coverage_class,
+ .set_rts_threshold = mt76x02_set_rts_threshold,
+ .get_antenna = mt76_get_antenna,
++ .reconfig_complete = mt76x02_reconfig_complete,
+ };
+
+ static int mt76x0e_register_device(struct mt76x02_dev *dev)
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76x02.h b/drivers/net/wireless/mediatek/mt76/mt76x02.h
+index 830532b85b58..6ea210bd3f07 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76x02.h
++++ b/drivers/net/wireless/mediatek/mt76/mt76x02.h
+@@ -187,6 +187,8 @@ void mt76x02_sta_ps(struct mt76_dev *dev, struct ieee80211_sta *sta, bool ps);
+ void mt76x02_bss_info_changed(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif,
+ struct ieee80211_bss_conf *info, u32 changed);
++void mt76x02_reconfig_complete(struct ieee80211_hw *hw,
++ enum ieee80211_reconfig_type reconfig_type);
+
+ struct beacon_bc_data {
+ struct mt76x02_dev *dev;
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76x02_mcu.c b/drivers/net/wireless/mediatek/mt76/mt76x02_mcu.c
+index 5664749ad6c1..8247611d9b18 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76x02_mcu.c
++++ b/drivers/net/wireless/mediatek/mt76/mt76x02_mcu.c
+@@ -20,6 +20,9 @@ int mt76x02_mcu_msg_send(struct mt76_dev *mdev, int cmd, const void *data,
+ int ret;
+ u8 seq;
+
++ if (mt76_is_mmio(&dev->mt76) && dev->mcu_timeout)
++ return -EIO;
++
+ skb = mt76x02_mcu_msg_alloc(data, len);
+ if (!skb)
+ return -ENOMEM;
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76x02_mmio.c b/drivers/net/wireless/mediatek/mt76/mt76x02_mmio.c
+index 7dcc5d342e9f..7e389dbccfeb 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76x02_mmio.c
++++ b/drivers/net/wireless/mediatek/mt76/mt76x02_mmio.c
+@@ -520,6 +520,7 @@ static void mt76x02_watchdog_reset(struct mt76x02_dev *dev)
+ }
+
+ if (restart) {
++ set_bit(MT76_RESTART, &dev->mphy.state);
+ mt76x02_mcu_function_select(dev, Q_SELECT, 1);
+ ieee80211_restart_hw(dev->mt76.hw);
+ } else {
+@@ -528,8 +529,23 @@ static void mt76x02_watchdog_reset(struct mt76x02_dev *dev)
+ }
+ }
+
++void mt76x02_reconfig_complete(struct ieee80211_hw *hw,
++ enum ieee80211_reconfig_type reconfig_type)
++{
++ struct mt76x02_dev *dev = hw->priv;
++
++ if (reconfig_type != IEEE80211_RECONFIG_TYPE_RESTART)
++ return;
++
++ clear_bit(MT76_RESTART, &dev->mphy.state);
++}
++EXPORT_SYMBOL_GPL(mt76x02_reconfig_complete);
++
+ static void mt76x02_check_tx_hang(struct mt76x02_dev *dev)
+ {
++ if (test_bit(MT76_RESTART, &dev->mphy.state))
++ return;
++
+ if (mt76x02_tx_hang(dev)) {
+ if (++dev->tx_hang_check >= MT_TX_HANG_TH)
+ goto restart;
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76x2/pci_init.c b/drivers/net/wireless/mediatek/mt76/mt76x2/pci_init.c
+index c69579e5f647..f27774f57438 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76x2/pci_init.c
++++ b/drivers/net/wireless/mediatek/mt76/mt76x2/pci_init.c
+@@ -256,6 +256,7 @@ void mt76x2_stop_hardware(struct mt76x02_dev *dev)
+ cancel_delayed_work_sync(&dev->cal_work);
+ cancel_delayed_work_sync(&dev->mt76.mac_work);
+ cancel_delayed_work_sync(&dev->wdt_work);
++ clear_bit(MT76_RESTART, &dev->mphy.state);
+ mt76x02_mcu_set_radio_state(dev, false);
+ mt76x2_mac_stop(dev, false);
+ }
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76x2/pci_main.c b/drivers/net/wireless/mediatek/mt76/mt76x2/pci_main.c
+index 105e5b99b3f9..a74599f7f729 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76x2/pci_main.c
++++ b/drivers/net/wireless/mediatek/mt76/mt76x2/pci_main.c
+@@ -10,12 +10,9 @@ static int
+ mt76x2_start(struct ieee80211_hw *hw)
+ {
+ struct mt76x02_dev *dev = hw->priv;
+- int ret;
+
+ mt76x02_mac_start(dev);
+- ret = mt76x2_phy_start(dev);
+- if (ret)
+- return ret;
++ mt76x2_phy_start(dev);
+
+ ieee80211_queue_delayed_work(mt76_hw(dev), &dev->mt76.mac_work,
+ MT_MAC_WORK_INTERVAL);
+@@ -35,11 +32,9 @@ mt76x2_stop(struct ieee80211_hw *hw)
+ mt76x2_stop_hardware(dev);
+ }
+
+-static int
++static void
+ mt76x2_set_channel(struct mt76x02_dev *dev, struct cfg80211_chan_def *chandef)
+ {
+- int ret;
+-
+ cancel_delayed_work_sync(&dev->cal_work);
+ tasklet_disable(&dev->mt76.pre_tbtt_tasklet);
+ tasklet_disable(&dev->dfs_pd.dfs_tasklet);
+@@ -50,7 +45,7 @@ mt76x2_set_channel(struct mt76x02_dev *dev, struct cfg80211_chan_def *chandef)
+ mt76_set_channel(&dev->mphy);
+
+ mt76x2_mac_stop(dev, true);
+- ret = mt76x2_phy_set_channel(dev, chandef);
++ mt76x2_phy_set_channel(dev, chandef);
+
+ mt76x02_mac_cc_reset(dev);
+ mt76x02_dfs_init_params(dev);
+@@ -64,15 +59,12 @@ mt76x2_set_channel(struct mt76x02_dev *dev, struct cfg80211_chan_def *chandef)
+ tasklet_enable(&dev->mt76.pre_tbtt_tasklet);
+
+ mt76_txq_schedule_all(&dev->mphy);
+-
+- return ret;
+ }
+
+ static int
+ mt76x2_config(struct ieee80211_hw *hw, u32 changed)
+ {
+ struct mt76x02_dev *dev = hw->priv;
+- int ret = 0;
+
+ mutex_lock(&dev->mt76.mutex);
+
+@@ -101,11 +93,11 @@ mt76x2_config(struct ieee80211_hw *hw, u32 changed)
+
+ if (changed & IEEE80211_CONF_CHANGE_CHANNEL) {
+ ieee80211_stop_queues(hw);
+- ret = mt76x2_set_channel(dev, &hw->conf.chandef);
++ mt76x2_set_channel(dev, &hw->conf.chandef);
+ ieee80211_wake_queues(hw);
+ }
+
+- return ret;
++ return 0;
+ }
+
+ static void
+@@ -162,5 +154,6 @@ const struct ieee80211_ops mt76x2_ops = {
+ .set_antenna = mt76x2_set_antenna,
+ .get_antenna = mt76_get_antenna,
+ .set_rts_threshold = mt76x02_set_rts_threshold,
++ .reconfig_complete = mt76x02_reconfig_complete,
+ };
+
+diff --git a/drivers/pci/controller/vmd.c b/drivers/pci/controller/vmd.c
+index e386d4eac407..9a64cf90c291 100644
+--- a/drivers/pci/controller/vmd.c
++++ b/drivers/pci/controller/vmd.c
+@@ -546,9 +546,10 @@ static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features)
+
+ vmd->irq_domain = pci_msi_create_irq_domain(fn, &vmd_msi_domain_info,
+ x86_vector_domain);
+- irq_domain_free_fwnode(fn);
+- if (!vmd->irq_domain)
++ if (!vmd->irq_domain) {
++ irq_domain_free_fwnode(fn);
+ return -ENODEV;
++ }
+
+ pci_add_resource(&resources, &vmd->resources[0]);
+ pci_add_resource_offset(&resources, &vmd->resources[1], offset[0]);
+diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
+index d4758518a97b..a4efc7e0061f 100644
+--- a/drivers/pci/pci.c
++++ b/drivers/pci/pci.c
+@@ -4662,8 +4662,7 @@ static int pci_pm_reset(struct pci_dev *dev, int probe)
+ * pcie_wait_for_link_delay - Wait until link is active or inactive
+ * @pdev: Bridge device
+ * @active: waiting for active or inactive?
+- * @delay: Delay to wait after link has become active (in ms). Specify %0
+- * for no delay.
++ * @delay: Delay to wait after link has become active (in ms)
+ *
+ * Use this to wait till link becomes active or inactive.
+ */
+@@ -4704,7 +4703,7 @@ static bool pcie_wait_for_link_delay(struct pci_dev *pdev, bool active,
+ msleep(10);
+ timeout -= 10;
+ }
+- if (active && ret && delay)
++ if (active && ret)
+ msleep(delay);
+ else if (ret != active)
+ pci_info(pdev, "Data Link Layer Link Active not %s in 1000 msec\n",
+@@ -4825,28 +4824,17 @@ void pci_bridge_wait_for_secondary_bus(struct pci_dev *dev)
+ if (!pcie_downstream_port(dev))
+ return;
+
+- /*
+- * Per PCIe r5.0, sec 6.6.1, for downstream ports that support
+- * speeds > 5 GT/s, we must wait for link training to complete
+- * before the mandatory delay.
+- *
+- * We can only tell when link training completes via DLL Link
+- * Active, which is required for downstream ports that support
+- * speeds > 5 GT/s (sec 7.5.3.6). Unfortunately some common
+- * devices do not implement Link Active reporting even when it's
+- * required, so we'll check for that directly instead of checking
+- * the supported link speed. We assume devices without Link Active
+- * reporting can train in 100 ms regardless of speed.
+- */
+- if (dev->link_active_reporting) {
+- pci_dbg(dev, "waiting for link to train\n");
+- if (!pcie_wait_for_link_delay(dev, true, 0)) {
++ if (pcie_get_speed_cap(dev) <= PCIE_SPEED_5_0GT) {
++ pci_dbg(dev, "waiting %d ms for downstream link\n", delay);
++ msleep(delay);
++ } else {
++ pci_dbg(dev, "waiting %d ms for downstream link, after activation\n",
++ delay);
++ if (!pcie_wait_for_link_delay(dev, true, delay)) {
+ /* Did not train, no need to wait any further */
+ return;
+ }
+ }
+- pci_dbg(child, "waiting %d ms to become accessible\n", delay);
+- msleep(delay);
+
+ if (!pci_device_is_present(child)) {
+ pci_dbg(child, "waiting additional %d ms to become accessible\n", delay);
+diff --git a/drivers/perf/arm-cci.c b/drivers/perf/arm-cci.c
+index 1b8e337a29ca..87c4be9dd412 100644
+--- a/drivers/perf/arm-cci.c
++++ b/drivers/perf/arm-cci.c
+@@ -1718,6 +1718,7 @@ static struct platform_driver cci_pmu_driver = {
+ .driver = {
+ .name = DRIVER_NAME,
+ .of_match_table = arm_cci_pmu_matches,
++ .suppress_bind_attrs = true,
+ },
+ .probe = cci_pmu_probe,
+ .remove = cci_pmu_remove,
+diff --git a/drivers/perf/arm-ccn.c b/drivers/perf/arm-ccn.c
+index d50edef91f59..7b7d23f25713 100644
+--- a/drivers/perf/arm-ccn.c
++++ b/drivers/perf/arm-ccn.c
+@@ -1545,6 +1545,7 @@ static struct platform_driver arm_ccn_driver = {
+ .driver = {
+ .name = "arm-ccn",
+ .of_match_table = arm_ccn_match,
++ .suppress_bind_attrs = true,
+ },
+ .probe = arm_ccn_probe,
+ .remove = arm_ccn_remove,
+diff --git a/drivers/perf/arm_dsu_pmu.c b/drivers/perf/arm_dsu_pmu.c
+index 70968c8c09d7..4594e2ed13d5 100644
+--- a/drivers/perf/arm_dsu_pmu.c
++++ b/drivers/perf/arm_dsu_pmu.c
+@@ -759,6 +759,7 @@ static struct platform_driver dsu_pmu_driver = {
+ .driver = {
+ .name = DRVNAME,
+ .of_match_table = of_match_ptr(dsu_pmu_of_match),
++ .suppress_bind_attrs = true,
+ },
+ .probe = dsu_pmu_device_probe,
+ .remove = dsu_pmu_device_remove,
+diff --git a/drivers/perf/arm_smmuv3_pmu.c b/drivers/perf/arm_smmuv3_pmu.c
+index 48e28ef93a70..4cdb35d166ac 100644
+--- a/drivers/perf/arm_smmuv3_pmu.c
++++ b/drivers/perf/arm_smmuv3_pmu.c
+@@ -742,6 +742,7 @@ static int smmu_pmu_probe(struct platform_device *pdev)
+ platform_set_drvdata(pdev, smmu_pmu);
+
+ smmu_pmu->pmu = (struct pmu) {
++ .module = THIS_MODULE,
+ .task_ctx_nr = perf_invalid_context,
+ .pmu_enable = smmu_pmu_enable,
+ .pmu_disable = smmu_pmu_disable,
+@@ -859,6 +860,7 @@ static void smmu_pmu_shutdown(struct platform_device *pdev)
+ static struct platform_driver smmu_pmu_driver = {
+ .driver = {
+ .name = "arm-smmu-v3-pmcg",
++ .suppress_bind_attrs = true,
+ },
+ .probe = smmu_pmu_probe,
+ .remove = smmu_pmu_remove,
+diff --git a/drivers/perf/arm_spe_pmu.c b/drivers/perf/arm_spe_pmu.c
+index b72c04852599..c5418fd3122c 100644
+--- a/drivers/perf/arm_spe_pmu.c
++++ b/drivers/perf/arm_spe_pmu.c
+@@ -1228,6 +1228,7 @@ static struct platform_driver arm_spe_pmu_driver = {
+ .driver = {
+ .name = DRVNAME,
+ .of_match_table = of_match_ptr(arm_spe_pmu_of_match),
++ .suppress_bind_attrs = true,
+ },
+ .probe = arm_spe_pmu_device_probe,
+ .remove = arm_spe_pmu_device_remove,
+diff --git a/drivers/perf/fsl_imx8_ddr_perf.c b/drivers/perf/fsl_imx8_ddr_perf.c
+index 90884d14f95f..397540a4b799 100644
+--- a/drivers/perf/fsl_imx8_ddr_perf.c
++++ b/drivers/perf/fsl_imx8_ddr_perf.c
+@@ -512,6 +512,7 @@ static int ddr_perf_init(struct ddr_pmu *pmu, void __iomem *base,
+ {
+ *pmu = (struct ddr_pmu) {
+ .pmu = (struct pmu) {
++ .module = THIS_MODULE,
+ .capabilities = PERF_PMU_CAP_NO_EXCLUDE,
+ .task_ctx_nr = perf_invalid_context,
+ .attr_groups = attr_groups,
+@@ -706,6 +707,7 @@ static struct platform_driver imx_ddr_pmu_driver = {
+ .driver = {
+ .name = "imx-ddr-pmu",
+ .of_match_table = imx_ddr_pmu_dt_ids,
++ .suppress_bind_attrs = true,
+ },
+ .probe = ddr_perf_probe,
+ .remove = ddr_perf_remove,
+diff --git a/drivers/perf/hisilicon/hisi_uncore_ddrc_pmu.c b/drivers/perf/hisilicon/hisi_uncore_ddrc_pmu.c
+index 453f1c6a16ca..341852736640 100644
+--- a/drivers/perf/hisilicon/hisi_uncore_ddrc_pmu.c
++++ b/drivers/perf/hisilicon/hisi_uncore_ddrc_pmu.c
+@@ -378,6 +378,7 @@ static int hisi_ddrc_pmu_probe(struct platform_device *pdev)
+ ddrc_pmu->sccl_id, ddrc_pmu->index_id);
+ ddrc_pmu->pmu = (struct pmu) {
+ .name = name,
++ .module = THIS_MODULE,
+ .task_ctx_nr = perf_invalid_context,
+ .event_init = hisi_uncore_pmu_event_init,
+ .pmu_enable = hisi_uncore_pmu_enable,
+@@ -416,6 +417,7 @@ static struct platform_driver hisi_ddrc_pmu_driver = {
+ .driver = {
+ .name = "hisi_ddrc_pmu",
+ .acpi_match_table = ACPI_PTR(hisi_ddrc_pmu_acpi_match),
++ .suppress_bind_attrs = true,
+ },
+ .probe = hisi_ddrc_pmu_probe,
+ .remove = hisi_ddrc_pmu_remove,
+diff --git a/drivers/perf/hisilicon/hisi_uncore_hha_pmu.c b/drivers/perf/hisilicon/hisi_uncore_hha_pmu.c
+index e5af9d7e6e14..375c4737a088 100644
+--- a/drivers/perf/hisilicon/hisi_uncore_hha_pmu.c
++++ b/drivers/perf/hisilicon/hisi_uncore_hha_pmu.c
+@@ -390,6 +390,7 @@ static int hisi_hha_pmu_probe(struct platform_device *pdev)
+ hha_pmu->sccl_id, hha_pmu->index_id);
+ hha_pmu->pmu = (struct pmu) {
+ .name = name,
++ .module = THIS_MODULE,
+ .task_ctx_nr = perf_invalid_context,
+ .event_init = hisi_uncore_pmu_event_init,
+ .pmu_enable = hisi_uncore_pmu_enable,
+@@ -428,6 +429,7 @@ static struct platform_driver hisi_hha_pmu_driver = {
+ .driver = {
+ .name = "hisi_hha_pmu",
+ .acpi_match_table = ACPI_PTR(hisi_hha_pmu_acpi_match),
++ .suppress_bind_attrs = true,
+ },
+ .probe = hisi_hha_pmu_probe,
+ .remove = hisi_hha_pmu_remove,
+diff --git a/drivers/perf/hisilicon/hisi_uncore_l3c_pmu.c b/drivers/perf/hisilicon/hisi_uncore_l3c_pmu.c
+index 479de4be99eb..44e8a660c5f5 100644
+--- a/drivers/perf/hisilicon/hisi_uncore_l3c_pmu.c
++++ b/drivers/perf/hisilicon/hisi_uncore_l3c_pmu.c
+@@ -380,6 +380,7 @@ static int hisi_l3c_pmu_probe(struct platform_device *pdev)
+ l3c_pmu->sccl_id, l3c_pmu->index_id);
+ l3c_pmu->pmu = (struct pmu) {
+ .name = name,
++ .module = THIS_MODULE,
+ .task_ctx_nr = perf_invalid_context,
+ .event_init = hisi_uncore_pmu_event_init,
+ .pmu_enable = hisi_uncore_pmu_enable,
+@@ -418,6 +419,7 @@ static struct platform_driver hisi_l3c_pmu_driver = {
+ .driver = {
+ .name = "hisi_l3c_pmu",
+ .acpi_match_table = ACPI_PTR(hisi_l3c_pmu_acpi_match),
++ .suppress_bind_attrs = true,
+ },
+ .probe = hisi_l3c_pmu_probe,
+ .remove = hisi_l3c_pmu_remove,
+diff --git a/drivers/perf/qcom_l2_pmu.c b/drivers/perf/qcom_l2_pmu.c
+index 21d6991dbe0b..4da37f650f98 100644
+--- a/drivers/perf/qcom_l2_pmu.c
++++ b/drivers/perf/qcom_l2_pmu.c
+@@ -1028,6 +1028,7 @@ static struct platform_driver l2_cache_pmu_driver = {
+ .driver = {
+ .name = "qcom-l2cache-pmu",
+ .acpi_match_table = ACPI_PTR(l2_cache_pmu_acpi_match),
++ .suppress_bind_attrs = true,
+ },
+ .probe = l2_cache_pmu_probe,
+ .remove = l2_cache_pmu_remove,
+diff --git a/drivers/perf/qcom_l3_pmu.c b/drivers/perf/qcom_l3_pmu.c
+index 656e830798d9..9ddb577c542b 100644
+--- a/drivers/perf/qcom_l3_pmu.c
++++ b/drivers/perf/qcom_l3_pmu.c
+@@ -814,6 +814,7 @@ static struct platform_driver qcom_l3_cache_pmu_driver = {
+ .driver = {
+ .name = "qcom-l3cache-pmu",
+ .acpi_match_table = ACPI_PTR(qcom_l3_cache_pmu_acpi_match),
++ .suppress_bind_attrs = true,
+ },
+ .probe = qcom_l3_cache_pmu_probe,
+ };
+diff --git a/drivers/perf/thunderx2_pmu.c b/drivers/perf/thunderx2_pmu.c
+index 51b31d6ff2c4..aac9823b0c6b 100644
+--- a/drivers/perf/thunderx2_pmu.c
++++ b/drivers/perf/thunderx2_pmu.c
+@@ -1017,6 +1017,7 @@ static struct platform_driver tx2_uncore_driver = {
+ .driver = {
+ .name = "tx2-uncore-pmu",
+ .acpi_match_table = ACPI_PTR(tx2_uncore_acpi_match),
++ .suppress_bind_attrs = true,
+ },
+ .probe = tx2_uncore_probe,
+ .remove = tx2_uncore_remove,
+diff --git a/drivers/perf/xgene_pmu.c b/drivers/perf/xgene_pmu.c
+index 46ee6807d533..edac28cd25dd 100644
+--- a/drivers/perf/xgene_pmu.c
++++ b/drivers/perf/xgene_pmu.c
+@@ -1975,6 +1975,7 @@ static struct platform_driver xgene_pmu_driver = {
+ .name = "xgene-pmu",
+ .of_match_table = xgene_pmu_of_match,
+ .acpi_match_table = ACPI_PTR(xgene_pmu_acpi_match),
++ .suppress_bind_attrs = true,
+ },
+ };
+
+diff --git a/drivers/pinctrl/pinctrl-amd.h b/drivers/pinctrl/pinctrl-amd.h
+index 3e5760f1a715..d4a192df5fab 100644
+--- a/drivers/pinctrl/pinctrl-amd.h
++++ b/drivers/pinctrl/pinctrl-amd.h
+@@ -252,7 +252,7 @@ static const struct amd_pingroup kerncz_groups[] = {
+ {
+ .name = "uart0",
+ .pins = uart0_pins,
+- .npins = 9,
++ .npins = 5,
+ },
+ {
+ .name = "uart1",
+diff --git a/drivers/platform/x86/asus-wmi.c b/drivers/platform/x86/asus-wmi.c
+index cd212ee210e2..537b824a1ae2 100644
+--- a/drivers/platform/x86/asus-wmi.c
++++ b/drivers/platform/x86/asus-wmi.c
+@@ -432,6 +432,7 @@ static int asus_wmi_battery_add(struct power_supply *battery)
+ * battery is named BATT.
+ */
+ if (strcmp(battery->desc->name, "BAT0") != 0 &&
++ strcmp(battery->desc->name, "BAT1") != 0 &&
+ strcmp(battery->desc->name, "BATT") != 0)
+ return -ENODEV;
+
+diff --git a/drivers/platform/x86/intel_speed_select_if/isst_if_common.h b/drivers/platform/x86/intel_speed_select_if/isst_if_common.h
+index 1409a5bb5582..4f6f7f0761fc 100644
+--- a/drivers/platform/x86/intel_speed_select_if/isst_if_common.h
++++ b/drivers/platform/x86/intel_speed_select_if/isst_if_common.h
+@@ -13,6 +13,9 @@
+ #define INTEL_RAPL_PRIO_DEVID_0 0x3451
+ #define INTEL_CFG_MBOX_DEVID_0 0x3459
+
++#define INTEL_RAPL_PRIO_DEVID_1 0x3251
++#define INTEL_CFG_MBOX_DEVID_1 0x3259
++
+ /*
+ * Validate maximum commands in a single request.
+ * This is enough to handle command to every core in one ioctl, or all
+diff --git a/drivers/platform/x86/intel_speed_select_if/isst_if_mbox_pci.c b/drivers/platform/x86/intel_speed_select_if/isst_if_mbox_pci.c
+index de4169d0796b..9a055fd54053 100644
+--- a/drivers/platform/x86/intel_speed_select_if/isst_if_mbox_pci.c
++++ b/drivers/platform/x86/intel_speed_select_if/isst_if_mbox_pci.c
+@@ -148,6 +148,7 @@ static long isst_if_mbox_proc_cmd(u8 *cmd_ptr, int *write_only, int resume)
+
+ static const struct pci_device_id isst_if_mbox_ids[] = {
+ { PCI_DEVICE(PCI_VENDOR_ID_INTEL, INTEL_CFG_MBOX_DEVID_0)},
++ { PCI_DEVICE(PCI_VENDOR_ID_INTEL, INTEL_CFG_MBOX_DEVID_1)},
+ { 0 },
+ };
+ MODULE_DEVICE_TABLE(pci, isst_if_mbox_ids);
+diff --git a/drivers/platform/x86/intel_speed_select_if/isst_if_mmio.c b/drivers/platform/x86/intel_speed_select_if/isst_if_mmio.c
+index 3584859fcc42..aa17fd7817f8 100644
+--- a/drivers/platform/x86/intel_speed_select_if/isst_if_mmio.c
++++ b/drivers/platform/x86/intel_speed_select_if/isst_if_mmio.c
+@@ -72,6 +72,7 @@ static long isst_if_mmio_rd_wr(u8 *cmd_ptr, int *write_only, int resume)
+
+ static const struct pci_device_id isst_if_ids[] = {
+ { PCI_DEVICE(PCI_VENDOR_ID_INTEL, INTEL_RAPL_PRIO_DEVID_0)},
++ { PCI_DEVICE(PCI_VENDOR_ID_INTEL, INTEL_RAPL_PRIO_DEVID_1)},
+ { 0 },
+ };
+ MODULE_DEVICE_TABLE(pci, isst_if_ids);
+diff --git a/drivers/scsi/mpt3sas/mpt3sas_ctl.c b/drivers/scsi/mpt3sas/mpt3sas_ctl.c
+index 62e552838565..983e568ff231 100644
+--- a/drivers/scsi/mpt3sas/mpt3sas_ctl.c
++++ b/drivers/scsi/mpt3sas/mpt3sas_ctl.c
+@@ -3145,19 +3145,18 @@ BRM_status_show(struct device *cdev, struct device_attribute *attr,
+ if (!ioc->is_warpdrive) {
+ ioc_err(ioc, "%s: BRM attribute is only for warpdrive\n",
+ __func__);
+- goto out;
++ return 0;
+ }
+ /* pci_access_mutex lock acquired by sysfs show path */
+ mutex_lock(&ioc->pci_access_mutex);
+- if (ioc->pci_error_recovery || ioc->remove_host) {
+- mutex_unlock(&ioc->pci_access_mutex);
+- return 0;
+- }
++ if (ioc->pci_error_recovery || ioc->remove_host)
++ goto out;
+
+ /* allocate upto GPIOVal 36 entries */
+ sz = offsetof(Mpi2IOUnitPage3_t, GPIOVal) + (sizeof(u16) * 36);
+ io_unit_pg3 = kzalloc(sz, GFP_KERNEL);
+ if (!io_unit_pg3) {
++ rc = -ENOMEM;
+ ioc_err(ioc, "%s: failed allocating memory for iounit_pg3: (%d) bytes\n",
+ __func__, sz);
+ goto out;
+@@ -3167,6 +3166,7 @@ BRM_status_show(struct device *cdev, struct device_attribute *attr,
+ 0) {
+ ioc_err(ioc, "%s: failed reading iounit_pg3\n",
+ __func__);
++ rc = -EINVAL;
+ goto out;
+ }
+
+@@ -3174,12 +3174,14 @@ BRM_status_show(struct device *cdev, struct device_attribute *attr,
+ if (ioc_status != MPI2_IOCSTATUS_SUCCESS) {
+ ioc_err(ioc, "%s: iounit_pg3 failed with ioc_status(0x%04x)\n",
+ __func__, ioc_status);
++ rc = -EINVAL;
+ goto out;
+ }
+
+ if (io_unit_pg3->GPIOCount < 25) {
+ ioc_err(ioc, "%s: iounit_pg3->GPIOCount less than 25 entries, detected (%d) entries\n",
+ __func__, io_unit_pg3->GPIOCount);
++ rc = -EINVAL;
+ goto out;
+ }
+
+diff --git a/drivers/scsi/scsi_devinfo.c b/drivers/scsi/scsi_devinfo.c
+index eed31021e788..ba84244c1b4f 100644
+--- a/drivers/scsi/scsi_devinfo.c
++++ b/drivers/scsi/scsi_devinfo.c
+@@ -239,6 +239,7 @@ static struct {
+ {"LSI", "Universal Xport", "*", BLIST_NO_ULD_ATTACH},
+ {"ENGENIO", "Universal Xport", "*", BLIST_NO_ULD_ATTACH},
+ {"LENOVO", "Universal Xport", "*", BLIST_NO_ULD_ATTACH},
++ {"FUJITSU", "Universal Xport", "*", BLIST_NO_ULD_ATTACH},
+ {"SanDisk", "Cruzer Blade", NULL, BLIST_TRY_VPD_PAGES |
+ BLIST_INQUIRY_36},
+ {"SMSC", "USB 2 HS-CF", NULL, BLIST_SPARSELUN | BLIST_INQUIRY_36},
+diff --git a/drivers/scsi/scsi_dh.c b/drivers/scsi/scsi_dh.c
+index 42f0550d6b11..6f41e4b5a2b8 100644
+--- a/drivers/scsi/scsi_dh.c
++++ b/drivers/scsi/scsi_dh.c
+@@ -63,6 +63,7 @@ static const struct scsi_dh_blist scsi_dh_blist[] = {
+ {"LSI", "INF-01-00", "rdac", },
+ {"ENGENIO", "INF-01-00", "rdac", },
+ {"LENOVO", "DE_Series", "rdac", },
++ {"FUJITSU", "ETERNUS_AHB", "rdac", },
+ {NULL, NULL, NULL },
+ };
+
+diff --git a/drivers/scsi/scsi_transport_spi.c b/drivers/scsi/scsi_transport_spi.c
+index f8661062ef95..f3d5b1bbd5aa 100644
+--- a/drivers/scsi/scsi_transport_spi.c
++++ b/drivers/scsi/scsi_transport_spi.c
+@@ -339,7 +339,7 @@ store_spi_transport_##field(struct device *dev, \
+ struct spi_transport_attrs *tp \
+ = (struct spi_transport_attrs *)&starget->starget_data; \
+ \
+- if (i->f->set_##field) \
++ if (!i->f->set_##field) \
+ return -EINVAL; \
+ val = simple_strtoul(buf, NULL, 0); \
+ if (val > tp->max_##field) \
+diff --git a/drivers/soc/amlogic/meson-gx-socinfo.c b/drivers/soc/amlogic/meson-gx-socinfo.c
+index 01fc0d20a70d..6f54bd832c8b 100644
+--- a/drivers/soc/amlogic/meson-gx-socinfo.c
++++ b/drivers/soc/amlogic/meson-gx-socinfo.c
+@@ -66,10 +66,12 @@ static const struct meson_gx_package_id {
+ { "A113D", 0x25, 0x22, 0xff },
+ { "S905D2", 0x28, 0x10, 0xf0 },
+ { "S905X2", 0x28, 0x40, 0xf0 },
+- { "S922X", 0x29, 0x40, 0xf0 },
+ { "A311D", 0x29, 0x10, 0xf0 },
+- { "S905X3", 0x2b, 0x5, 0xf },
+- { "S905D3", 0x2b, 0xb0, 0xf0 },
++ { "S922X", 0x29, 0x40, 0xf0 },
++ { "S905D3", 0x2b, 0x4, 0xf5 },
++ { "S905X3", 0x2b, 0x5, 0xf5 },
++ { "S905X3", 0x2b, 0x10, 0x3f },
++ { "S905D3", 0x2b, 0x30, 0x3f },
+ { "A113L", 0x2c, 0x0, 0xf8 },
+ };
+
+diff --git a/drivers/soc/qcom/rpmh.c b/drivers/soc/qcom/rpmh.c
+index a75f3df97742..908b74352c1b 100644
+--- a/drivers/soc/qcom/rpmh.c
++++ b/drivers/soc/qcom/rpmh.c
+@@ -150,10 +150,10 @@ existing:
+ break;
+ }
+
+- ctrlr->dirty = (req->sleep_val != old_sleep_val ||
+- req->wake_val != old_wake_val) &&
+- req->sleep_val != UINT_MAX &&
+- req->wake_val != UINT_MAX;
++ ctrlr->dirty |= (req->sleep_val != old_sleep_val ||
++ req->wake_val != old_wake_val) &&
++ req->sleep_val != UINT_MAX &&
++ req->wake_val != UINT_MAX;
+
+ unlock:
+ spin_unlock_irqrestore(&ctrlr->cache_lock, flags);
+diff --git a/drivers/spi/spi-mt65xx.c b/drivers/spi/spi-mt65xx.c
+index 6783e12c40c2..a556795caeef 100644
+--- a/drivers/spi/spi-mt65xx.c
++++ b/drivers/spi/spi-mt65xx.c
+@@ -36,7 +36,6 @@
+ #define SPI_CFG0_SCK_LOW_OFFSET 8
+ #define SPI_CFG0_CS_HOLD_OFFSET 16
+ #define SPI_CFG0_CS_SETUP_OFFSET 24
+-#define SPI_ADJUST_CFG0_SCK_LOW_OFFSET 16
+ #define SPI_ADJUST_CFG0_CS_HOLD_OFFSET 0
+ #define SPI_ADJUST_CFG0_CS_SETUP_OFFSET 16
+
+@@ -48,6 +47,8 @@
+ #define SPI_CFG1_CS_IDLE_MASK 0xff
+ #define SPI_CFG1_PACKET_LOOP_MASK 0xff00
+ #define SPI_CFG1_PACKET_LENGTH_MASK 0x3ff0000
++#define SPI_CFG2_SCK_HIGH_OFFSET 0
++#define SPI_CFG2_SCK_LOW_OFFSET 16
+
+ #define SPI_CMD_ACT BIT(0)
+ #define SPI_CMD_RESUME BIT(1)
+@@ -283,7 +284,7 @@ static void mtk_spi_set_cs(struct spi_device *spi, bool enable)
+ static void mtk_spi_prepare_transfer(struct spi_master *master,
+ struct spi_transfer *xfer)
+ {
+- u32 spi_clk_hz, div, sck_time, cs_time, reg_val = 0;
++ u32 spi_clk_hz, div, sck_time, cs_time, reg_val;
+ struct mtk_spi *mdata = spi_master_get_devdata(master);
+
+ spi_clk_hz = clk_get_rate(mdata->spi_clk);
+@@ -296,18 +297,18 @@ static void mtk_spi_prepare_transfer(struct spi_master *master,
+ cs_time = sck_time * 2;
+
+ if (mdata->dev_comp->enhance_timing) {
++ reg_val = (((sck_time - 1) & 0xffff)
++ << SPI_CFG2_SCK_HIGH_OFFSET);
+ reg_val |= (((sck_time - 1) & 0xffff)
+- << SPI_CFG0_SCK_HIGH_OFFSET);
+- reg_val |= (((sck_time - 1) & 0xffff)
+- << SPI_ADJUST_CFG0_SCK_LOW_OFFSET);
++ << SPI_CFG2_SCK_LOW_OFFSET);
+ writel(reg_val, mdata->base + SPI_CFG2_REG);
+- reg_val |= (((cs_time - 1) & 0xffff)
++ reg_val = (((cs_time - 1) & 0xffff)
+ << SPI_ADJUST_CFG0_CS_HOLD_OFFSET);
+ reg_val |= (((cs_time - 1) & 0xffff)
+ << SPI_ADJUST_CFG0_CS_SETUP_OFFSET);
+ writel(reg_val, mdata->base + SPI_CFG0_REG);
+ } else {
+- reg_val |= (((sck_time - 1) & 0xff)
++ reg_val = (((sck_time - 1) & 0xff)
+ << SPI_CFG0_SCK_HIGH_OFFSET);
+ reg_val |= (((sck_time - 1) & 0xff) << SPI_CFG0_SCK_LOW_OFFSET);
+ reg_val |= (((cs_time - 1) & 0xff) << SPI_CFG0_CS_HOLD_OFFSET);
+diff --git a/drivers/staging/comedi/drivers/addi_apci_1032.c b/drivers/staging/comedi/drivers/addi_apci_1032.c
+index 560649be9d13..e035c9f757a1 100644
+--- a/drivers/staging/comedi/drivers/addi_apci_1032.c
++++ b/drivers/staging/comedi/drivers/addi_apci_1032.c
+@@ -106,14 +106,22 @@ static int apci1032_cos_insn_config(struct comedi_device *dev,
+ unsigned int *data)
+ {
+ struct apci1032_private *devpriv = dev->private;
+- unsigned int shift, oldmask;
++ unsigned int shift, oldmask, himask, lomask;
+
+ switch (data[0]) {
+ case INSN_CONFIG_DIGITAL_TRIG:
+ if (data[1] != 0)
+ return -EINVAL;
+ shift = data[3];
+- oldmask = (1U << shift) - 1;
++ if (shift < 32) {
++ oldmask = (1U << shift) - 1;
++ himask = data[4] << shift;
++ lomask = data[5] << shift;
++ } else {
++ oldmask = 0xffffffffu;
++ himask = 0;
++ lomask = 0;
++ }
+ switch (data[2]) {
+ case COMEDI_DIGITAL_TRIG_DISABLE:
+ devpriv->ctrl = 0;
+@@ -136,8 +144,8 @@ static int apci1032_cos_insn_config(struct comedi_device *dev,
+ devpriv->mode2 &= oldmask;
+ }
+ /* configure specified channels */
+- devpriv->mode1 |= data[4] << shift;
+- devpriv->mode2 |= data[5] << shift;
++ devpriv->mode1 |= himask;
++ devpriv->mode2 |= lomask;
+ break;
+ case COMEDI_DIGITAL_TRIG_ENABLE_LEVELS:
+ if (devpriv->ctrl != (APCI1032_CTRL_INT_ENA |
+@@ -154,8 +162,8 @@ static int apci1032_cos_insn_config(struct comedi_device *dev,
+ devpriv->mode2 &= oldmask;
+ }
+ /* configure specified channels */
+- devpriv->mode1 |= data[4] << shift;
+- devpriv->mode2 |= data[5] << shift;
++ devpriv->mode1 |= himask;
++ devpriv->mode2 |= lomask;
+ break;
+ default:
+ return -EINVAL;
+diff --git a/drivers/staging/comedi/drivers/addi_apci_1500.c b/drivers/staging/comedi/drivers/addi_apci_1500.c
+index 689acd69a1b9..816dd25b9d0e 100644
+--- a/drivers/staging/comedi/drivers/addi_apci_1500.c
++++ b/drivers/staging/comedi/drivers/addi_apci_1500.c
+@@ -452,13 +452,14 @@ static int apci1500_di_cfg_trig(struct comedi_device *dev,
+ struct apci1500_private *devpriv = dev->private;
+ unsigned int trig = data[1];
+ unsigned int shift = data[3];
+- unsigned int hi_mask = data[4] << shift;
+- unsigned int lo_mask = data[5] << shift;
+- unsigned int chan_mask = hi_mask | lo_mask;
+- unsigned int old_mask = (1 << shift) - 1;
++ unsigned int hi_mask;
++ unsigned int lo_mask;
++ unsigned int chan_mask;
++ unsigned int old_mask;
+ unsigned int pm;
+ unsigned int pt;
+ unsigned int pp;
++ unsigned int invalid_chan;
+
+ if (trig > 1) {
+ dev_dbg(dev->class_dev,
+@@ -466,7 +467,20 @@ static int apci1500_di_cfg_trig(struct comedi_device *dev,
+ return -EINVAL;
+ }
+
+- if (chan_mask > 0xffff) {
++ if (shift <= 16) {
++ hi_mask = data[4] << shift;
++ lo_mask = data[5] << shift;
++ old_mask = (1U << shift) - 1;
++ invalid_chan = (data[4] | data[5]) >> (16 - shift);
++ } else {
++ hi_mask = 0;
++ lo_mask = 0;
++ old_mask = 0xffff;
++ invalid_chan = data[4] | data[5];
++ }
++ chan_mask = hi_mask | lo_mask;
++
++ if (invalid_chan) {
+ dev_dbg(dev->class_dev, "invalid digital trigger channel\n");
+ return -EINVAL;
+ }
+diff --git a/drivers/staging/comedi/drivers/addi_apci_1564.c b/drivers/staging/comedi/drivers/addi_apci_1564.c
+index 10501fe6bb25..1268ba34be5f 100644
+--- a/drivers/staging/comedi/drivers/addi_apci_1564.c
++++ b/drivers/staging/comedi/drivers/addi_apci_1564.c
+@@ -331,14 +331,22 @@ static int apci1564_cos_insn_config(struct comedi_device *dev,
+ unsigned int *data)
+ {
+ struct apci1564_private *devpriv = dev->private;
+- unsigned int shift, oldmask;
++ unsigned int shift, oldmask, himask, lomask;
+
+ switch (data[0]) {
+ case INSN_CONFIG_DIGITAL_TRIG:
+ if (data[1] != 0)
+ return -EINVAL;
+ shift = data[3];
+- oldmask = (1U << shift) - 1;
++ if (shift < 32) {
++ oldmask = (1U << shift) - 1;
++ himask = data[4] << shift;
++ lomask = data[5] << shift;
++ } else {
++ oldmask = 0xffffffffu;
++ himask = 0;
++ lomask = 0;
++ }
+ switch (data[2]) {
+ case COMEDI_DIGITAL_TRIG_DISABLE:
+ devpriv->ctrl = 0;
+@@ -362,8 +370,8 @@ static int apci1564_cos_insn_config(struct comedi_device *dev,
+ devpriv->mode2 &= oldmask;
+ }
+ /* configure specified channels */
+- devpriv->mode1 |= data[4] << shift;
+- devpriv->mode2 |= data[5] << shift;
++ devpriv->mode1 |= himask;
++ devpriv->mode2 |= lomask;
+ break;
+ case COMEDI_DIGITAL_TRIG_ENABLE_LEVELS:
+ if (devpriv->ctrl != (APCI1564_DI_IRQ_ENA |
+@@ -380,8 +388,8 @@ static int apci1564_cos_insn_config(struct comedi_device *dev,
+ devpriv->mode2 &= oldmask;
+ }
+ /* configure specified channels */
+- devpriv->mode1 |= data[4] << shift;
+- devpriv->mode2 |= data[5] << shift;
++ devpriv->mode1 |= himask;
++ devpriv->mode2 |= lomask;
+ break;
+ default:
+ return -EINVAL;
+diff --git a/drivers/staging/comedi/drivers/ni_6527.c b/drivers/staging/comedi/drivers/ni_6527.c
+index 4d1eccb5041d..4518c2680b7c 100644
+--- a/drivers/staging/comedi/drivers/ni_6527.c
++++ b/drivers/staging/comedi/drivers/ni_6527.c
+@@ -332,7 +332,7 @@ static int ni6527_intr_insn_config(struct comedi_device *dev,
+ case COMEDI_DIGITAL_TRIG_ENABLE_EDGES:
+ /* check shift amount */
+ shift = data[3];
+- if (shift >= s->n_chan) {
++ if (shift >= 32) {
+ mask = 0;
+ rising = 0;
+ falling = 0;
+diff --git a/drivers/staging/wlan-ng/prism2usb.c b/drivers/staging/wlan-ng/prism2usb.c
+index 4689b2170e4f..456603fd26c0 100644
+--- a/drivers/staging/wlan-ng/prism2usb.c
++++ b/drivers/staging/wlan-ng/prism2usb.c
+@@ -61,11 +61,25 @@ static int prism2sta_probe_usb(struct usb_interface *interface,
+ const struct usb_device_id *id)
+ {
+ struct usb_device *dev;
+-
++ const struct usb_endpoint_descriptor *epd;
++ const struct usb_host_interface *iface_desc = interface->cur_altsetting;
+ struct wlandevice *wlandev = NULL;
+ struct hfa384x *hw = NULL;
+ int result = 0;
+
++ if (iface_desc->desc.bNumEndpoints != 2) {
++ result = -ENODEV;
++ goto failed;
++ }
++
++ result = -EINVAL;
++ epd = &iface_desc->endpoint[1].desc;
++ if (!usb_endpoint_is_bulk_in(epd))
++ goto failed;
++ epd = &iface_desc->endpoint[2].desc;
++ if (!usb_endpoint_is_bulk_out(epd))
++ goto failed;
++
+ dev = interface_to_usbdev(interface);
+ wlandev = create_wlan();
+ if (!wlandev) {
+diff --git a/drivers/tty/serial/8250/8250_core.c b/drivers/tty/serial/8250/8250_core.c
+index 9548d3f8fc8e..c3beee1f580a 100644
+--- a/drivers/tty/serial/8250/8250_core.c
++++ b/drivers/tty/serial/8250/8250_core.c
+@@ -524,6 +524,7 @@ static void __init serial8250_isa_init_ports(void)
+ */
+ up->mcr_mask = ~ALPHA_KLUDGE_MCR;
+ up->mcr_force = ALPHA_KLUDGE_MCR;
++ serial8250_set_defaults(up);
+ }
+
+ /* chain base port ops to support Remote Supervisor Adapter */
+@@ -547,7 +548,6 @@ static void __init serial8250_isa_init_ports(void)
+ port->membase = old_serial_port[i].iomem_base;
+ port->iotype = old_serial_port[i].io_type;
+ port->regshift = old_serial_port[i].iomem_reg_shift;
+- serial8250_set_defaults(up);
+
+ port->irqflags |= irqflag;
+ if (serial8250_isa_config != NULL)
+diff --git a/drivers/tty/serial/8250/8250_exar.c b/drivers/tty/serial/8250/8250_exar.c
+index 59449b6500cd..9b5da1d43332 100644
+--- a/drivers/tty/serial/8250/8250_exar.c
++++ b/drivers/tty/serial/8250/8250_exar.c
+@@ -326,7 +326,17 @@ static void setup_gpio(struct pci_dev *pcidev, u8 __iomem *p)
+ * devices will export them as GPIOs, so we pre-configure them safely
+ * as inputs.
+ */
+- u8 dir = pcidev->vendor == PCI_VENDOR_ID_EXAR ? 0xff : 0x00;
++
++ u8 dir = 0x00;
++
++ if ((pcidev->vendor == PCI_VENDOR_ID_EXAR) &&
++ (pcidev->subsystem_vendor != PCI_VENDOR_ID_SEALEVEL)) {
++ // Configure GPIO as inputs for Commtech adapters
++ dir = 0xff;
++ } else {
++ // Configure GPIO as outputs for SeaLevel adapters
++ dir = 0x00;
++ }
+
+ writeb(0x00, p + UART_EXAR_MPIOINT_7_0);
+ writeb(0x00, p + UART_EXAR_MPIOLVL_7_0);
+diff --git a/drivers/tty/serial/8250/8250_mtk.c b/drivers/tty/serial/8250/8250_mtk.c
+index f839380c2f4c..98b8a3e30733 100644
+--- a/drivers/tty/serial/8250/8250_mtk.c
++++ b/drivers/tty/serial/8250/8250_mtk.c
+@@ -306,8 +306,21 @@ mtk8250_set_termios(struct uart_port *port, struct ktermios *termios,
+ }
+ #endif
+
++ /*
++ * Store the requested baud rate before calling the generic 8250
++ * set_termios method. Standard 8250 port expects bauds to be
++ * no higher than (uartclk / 16) so the baud will be clamped if it
++ * gets out of that bound. Mediatek 8250 port supports speed
++ * higher than that, therefore we'll get original baud rate back
++ * after calling the generic set_termios method and recalculate
++ * the speed later in this method.
++ */
++ baud = tty_termios_baud_rate(termios);
++
+ serial8250_do_set_termios(port, termios, old);
+
++ tty_termios_encode_baud_rate(termios, baud, baud);
++
+ /*
+ * Mediatek UARTs use an extra highspeed register (MTK_UART_HIGHS)
+ *
+@@ -339,6 +352,11 @@ mtk8250_set_termios(struct uart_port *port, struct ktermios *termios,
+ */
+ spin_lock_irqsave(&port->lock, flags);
+
++ /*
++ * Update the per-port timeout.
++ */
++ uart_update_timeout(port, termios->c_cflag, baud);
++
+ /* set DLAB we have cval saved in up->lcr from the call to the core */
+ serial_port_out(port, UART_LCR, up->lcr | UART_LCR_DLAB);
+ serial_dl_write(up, quot);
+diff --git a/drivers/tty/serial/serial-tegra.c b/drivers/tty/serial/serial-tegra.c
+index 8de8bac9c6c7..b3bbee6b6702 100644
+--- a/drivers/tty/serial/serial-tegra.c
++++ b/drivers/tty/serial/serial-tegra.c
+@@ -653,11 +653,14 @@ static void tegra_uart_handle_rx_pio(struct tegra_uart_port *tup,
+ ch = (unsigned char) tegra_uart_read(tup, UART_RX);
+ tup->uport.icount.rx++;
+
+- if (!uart_handle_sysrq_char(&tup->uport, ch) && tty)
+- tty_insert_flip_char(tty, ch, flag);
++ if (uart_handle_sysrq_char(&tup->uport, ch))
++ continue;
+
+ if (tup->uport.ignore_status_mask & UART_LSR_DR)
+ continue;
++
++ if (tty)
++ tty_insert_flip_char(tty, ch, flag);
+ } while (1);
+ }
+
+diff --git a/drivers/tty/serial/xilinx_uartps.c b/drivers/tty/serial/xilinx_uartps.c
+index ac137b6a1dc1..56e108902502 100644
+--- a/drivers/tty/serial/xilinx_uartps.c
++++ b/drivers/tty/serial/xilinx_uartps.c
+@@ -1574,8 +1574,10 @@ static int cdns_uart_probe(struct platform_device *pdev)
+ * If register_console() don't assign value, then console_port pointer
+ * is cleanup.
+ */
+- if (!console_port)
++ if (!console_port) {
++ cdns_uart_console.index = id;
+ console_port = port;
++ }
+ #endif
+
+ rc = uart_add_one_port(&cdns_uart_uart_driver, port);
+@@ -1588,8 +1590,10 @@ static int cdns_uart_probe(struct platform_device *pdev)
+ #ifdef CONFIG_SERIAL_XILINX_PS_UART_CONSOLE
+ /* This is not port which is used for console that's why clean it up */
+ if (console_port == port &&
+- !(cdns_uart_uart_driver.cons->flags & CON_ENABLED))
++ !(cdns_uart_uart_driver.cons->flags & CON_ENABLED)) {
+ console_port = NULL;
++ cdns_uart_console.index = -1;
++ }
+ #endif
+
+ cdns_uart_data->cts_override = of_property_read_bool(pdev->dev.of_node,
+diff --git a/drivers/tty/vt/vt.c b/drivers/tty/vt/vt.c
+index 48a8199f7845..42d8c67a481f 100644
+--- a/drivers/tty/vt/vt.c
++++ b/drivers/tty/vt/vt.c
+@@ -1092,10 +1092,19 @@ static const struct tty_port_operations vc_port_ops = {
+ .destruct = vc_port_destruct,
+ };
+
++/*
++ * Change # of rows and columns (0 means unchanged/the size of fg_console)
++ * [this is to be used together with some user program
++ * like resize that changes the hardware videomode]
++ */
++#define VC_MAXCOL (32767)
++#define VC_MAXROW (32767)
++
+ int vc_allocate(unsigned int currcons) /* return 0 on success */
+ {
+ struct vt_notifier_param param;
+ struct vc_data *vc;
++ int err;
+
+ WARN_CONSOLE_UNLOCKED();
+
+@@ -1125,6 +1134,11 @@ int vc_allocate(unsigned int currcons) /* return 0 on success */
+ if (!*vc->vc_uni_pagedir_loc)
+ con_set_default_unimap(vc);
+
++ err = -EINVAL;
++ if (vc->vc_cols > VC_MAXCOL || vc->vc_rows > VC_MAXROW ||
++ vc->vc_screenbuf_size > KMALLOC_MAX_SIZE || !vc->vc_screenbuf_size)
++ goto err_free;
++ err = -ENOMEM;
+ vc->vc_screenbuf = kzalloc(vc->vc_screenbuf_size, GFP_KERNEL);
+ if (!vc->vc_screenbuf)
+ goto err_free;
+@@ -1143,7 +1157,7 @@ err_free:
+ visual_deinit(vc);
+ kfree(vc);
+ vc_cons[currcons].d = NULL;
+- return -ENOMEM;
++ return err;
+ }
+
+ static inline int resize_screen(struct vc_data *vc, int width, int height,
+@@ -1158,14 +1172,6 @@ static inline int resize_screen(struct vc_data *vc, int width, int height,
+ return err;
+ }
+
+-/*
+- * Change # of rows and columns (0 means unchanged/the size of fg_console)
+- * [this is to be used together with some user program
+- * like resize that changes the hardware videomode]
+- */
+-#define VC_RESIZE_MAXCOL (32767)
+-#define VC_RESIZE_MAXROW (32767)
+-
+ /**
+ * vc_do_resize - resizing method for the tty
+ * @tty: tty being resized
+@@ -1201,7 +1207,7 @@ static int vc_do_resize(struct tty_struct *tty, struct vc_data *vc,
+ user = vc->vc_resize_user;
+ vc->vc_resize_user = 0;
+
+- if (cols > VC_RESIZE_MAXCOL || lines > VC_RESIZE_MAXROW)
++ if (cols > VC_MAXCOL || lines > VC_MAXROW)
+ return -EINVAL;
+
+ new_cols = (cols ? cols : vc->vc_cols);
+@@ -1212,7 +1218,7 @@ static int vc_do_resize(struct tty_struct *tty, struct vc_data *vc,
+ if (new_cols == vc->vc_cols && new_rows == vc->vc_rows)
+ return 0;
+
+- if (new_screen_size > KMALLOC_MAX_SIZE)
++ if (new_screen_size > KMALLOC_MAX_SIZE || !new_screen_size)
+ return -EINVAL;
+ newscreen = kzalloc(new_screen_size, GFP_USER);
+ if (!newscreen)
+@@ -3393,6 +3399,7 @@ static int __init con_init(void)
+ INIT_WORK(&vc_cons[currcons].SAK_work, vc_SAK);
+ tty_port_init(&vc->port);
+ visual_init(vc, currcons, 1);
++ /* Assuming vc->vc_{cols,rows,screenbuf_size} are sane here. */
+ vc->vc_screenbuf = kzalloc(vc->vc_screenbuf_size, GFP_NOWAIT);
+ vc_init(vc, vc->vc_rows, vc->vc_cols,
+ currcons || !vc->vc_sw->con_save_screen);
+diff --git a/drivers/usb/cdns3/ep0.c b/drivers/usb/cdns3/ep0.c
+index da4c5eb03d7e..666cebd9c5f2 100644
+--- a/drivers/usb/cdns3/ep0.c
++++ b/drivers/usb/cdns3/ep0.c
+@@ -37,18 +37,18 @@ static void cdns3_ep0_run_transfer(struct cdns3_device *priv_dev,
+ struct cdns3_usb_regs __iomem *regs = priv_dev->regs;
+ struct cdns3_endpoint *priv_ep = priv_dev->eps[0];
+
+- priv_ep->trb_pool[0].buffer = TRB_BUFFER(dma_addr);
+- priv_ep->trb_pool[0].length = TRB_LEN(length);
++ priv_ep->trb_pool[0].buffer = cpu_to_le32(TRB_BUFFER(dma_addr));
++ priv_ep->trb_pool[0].length = cpu_to_le32(TRB_LEN(length));
+
+ if (zlp) {
+- priv_ep->trb_pool[0].control = TRB_CYCLE | TRB_TYPE(TRB_NORMAL);
+- priv_ep->trb_pool[1].buffer = TRB_BUFFER(dma_addr);
+- priv_ep->trb_pool[1].length = TRB_LEN(0);
+- priv_ep->trb_pool[1].control = TRB_CYCLE | TRB_IOC |
+- TRB_TYPE(TRB_NORMAL);
++ priv_ep->trb_pool[0].control = cpu_to_le32(TRB_CYCLE | TRB_TYPE(TRB_NORMAL));
++ priv_ep->trb_pool[1].buffer = cpu_to_le32(TRB_BUFFER(dma_addr));
++ priv_ep->trb_pool[1].length = cpu_to_le32(TRB_LEN(0));
++ priv_ep->trb_pool[1].control = cpu_to_le32(TRB_CYCLE | TRB_IOC |
++ TRB_TYPE(TRB_NORMAL));
+ } else {
+- priv_ep->trb_pool[0].control = TRB_CYCLE | TRB_IOC |
+- TRB_TYPE(TRB_NORMAL);
++ priv_ep->trb_pool[0].control = cpu_to_le32(TRB_CYCLE | TRB_IOC |
++ TRB_TYPE(TRB_NORMAL));
+ priv_ep->trb_pool[1].control = 0;
+ }
+
+@@ -264,11 +264,11 @@ static int cdns3_req_ep0_get_status(struct cdns3_device *priv_dev,
+ case USB_RECIP_INTERFACE:
+ return cdns3_ep0_delegate_req(priv_dev, ctrl);
+ case USB_RECIP_ENDPOINT:
+- index = cdns3_ep_addr_to_index(ctrl->wIndex);
++ index = cdns3_ep_addr_to_index(le16_to_cpu(ctrl->wIndex));
+ priv_ep = priv_dev->eps[index];
+
+ /* check if endpoint is stalled or stall is pending */
+- cdns3_select_ep(priv_dev, ctrl->wIndex);
++ cdns3_select_ep(priv_dev, le16_to_cpu(ctrl->wIndex));
+ if (EP_STS_STALL(readl(&priv_dev->regs->ep_sts)) ||
+ (priv_ep->flags & EP_STALL_PENDING))
+ usb_status = BIT(USB_ENDPOINT_HALT);
+@@ -388,10 +388,10 @@ static int cdns3_ep0_feature_handle_endpoint(struct cdns3_device *priv_dev,
+ if (!(ctrl->wIndex & ~USB_DIR_IN))
+ return 0;
+
+- index = cdns3_ep_addr_to_index(ctrl->wIndex);
++ index = cdns3_ep_addr_to_index(le16_to_cpu(ctrl->wIndex));
+ priv_ep = priv_dev->eps[index];
+
+- cdns3_select_ep(priv_dev, ctrl->wIndex);
++ cdns3_select_ep(priv_dev, le16_to_cpu(ctrl->wIndex));
+
+ if (set)
+ __cdns3_gadget_ep_set_halt(priv_ep);
+@@ -452,7 +452,7 @@ static int cdns3_req_ep0_set_sel(struct cdns3_device *priv_dev,
+ if (priv_dev->gadget.state < USB_STATE_ADDRESS)
+ return -EINVAL;
+
+- if (ctrl_req->wLength != 6) {
++ if (le16_to_cpu(ctrl_req->wLength) != 6) {
+ dev_err(priv_dev->dev, "Set SEL should be 6 bytes, got %d\n",
+ ctrl_req->wLength);
+ return -EINVAL;
+@@ -476,7 +476,7 @@ static int cdns3_req_ep0_set_isoch_delay(struct cdns3_device *priv_dev,
+ if (ctrl_req->wIndex || ctrl_req->wLength)
+ return -EINVAL;
+
+- priv_dev->isoch_delay = ctrl_req->wValue;
++ priv_dev->isoch_delay = le16_to_cpu(ctrl_req->wValue);
+
+ return 0;
+ }
+diff --git a/drivers/usb/cdns3/trace.h b/drivers/usb/cdns3/trace.h
+index 755c56582257..0a2a3269bfac 100644
+--- a/drivers/usb/cdns3/trace.h
++++ b/drivers/usb/cdns3/trace.h
+@@ -404,9 +404,9 @@ DECLARE_EVENT_CLASS(cdns3_log_trb,
+ TP_fast_assign(
+ __assign_str(name, priv_ep->name);
+ __entry->trb = trb;
+- __entry->buffer = trb->buffer;
+- __entry->length = trb->length;
+- __entry->control = trb->control;
++ __entry->buffer = le32_to_cpu(trb->buffer);
++ __entry->length = le32_to_cpu(trb->length);
++ __entry->control = le32_to_cpu(trb->control);
+ __entry->type = usb_endpoint_type(priv_ep->endpoint.desc);
+ __entry->last_stream_id = priv_ep->last_stream_id;
+ ),
+diff --git a/drivers/usb/dwc3/dwc3-pci.c b/drivers/usb/dwc3/dwc3-pci.c
+index 96c05b121fac..139474c3e77b 100644
+--- a/drivers/usb/dwc3/dwc3-pci.c
++++ b/drivers/usb/dwc3/dwc3-pci.c
+@@ -38,6 +38,8 @@
+ #define PCI_DEVICE_ID_INTEL_ICLLP 0x34ee
+ #define PCI_DEVICE_ID_INTEL_EHLLP 0x4b7e
+ #define PCI_DEVICE_ID_INTEL_TGPLP 0xa0ee
++#define PCI_DEVICE_ID_INTEL_TGPH 0x43ee
++#define PCI_DEVICE_ID_INTEL_JSP 0x4dee
+
+ #define PCI_INTEL_BXT_DSM_GUID "732b85d5-b7a7-4a1b-9ba0-4bbd00ffd511"
+ #define PCI_INTEL_BXT_FUNC_PMU_PWR 4
+@@ -358,6 +360,12 @@ static const struct pci_device_id dwc3_pci_id_table[] = {
+ { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_TGPLP),
+ (kernel_ulong_t) &dwc3_pci_intel_properties, },
+
++ { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_TGPH),
++ (kernel_ulong_t) &dwc3_pci_intel_properties, },
++
++ { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_JSP),
++ (kernel_ulong_t) &dwc3_pci_intel_properties, },
++
+ { PCI_VDEVICE(AMD, PCI_DEVICE_ID_AMD_NL_USB),
+ (kernel_ulong_t) &dwc3_pci_amd_properties, },
+ { } /* Terminating Entry */
+diff --git a/drivers/usb/gadget/udc/gr_udc.c b/drivers/usb/gadget/udc/gr_udc.c
+index aaf975c809bf..ecf874dee996 100644
+--- a/drivers/usb/gadget/udc/gr_udc.c
++++ b/drivers/usb/gadget/udc/gr_udc.c
+@@ -1981,9 +1981,12 @@ static int gr_ep_init(struct gr_udc *dev, int num, int is_in, u32 maxplimit)
+
+ if (num == 0) {
+ _req = gr_alloc_request(&ep->ep, GFP_ATOMIC);
++ if (!_req)
++ return -ENOMEM;
++
+ buf = devm_kzalloc(dev->dev, PAGE_SIZE, GFP_DMA | GFP_ATOMIC);
+- if (!_req || !buf) {
+- /* possible _req freed by gr_probe via gr_remove */
++ if (!buf) {
++ gr_free_request(&ep->ep, _req);
+ return -ENOMEM;
+ }
+
+diff --git a/drivers/usb/host/xhci-mtk-sch.c b/drivers/usb/host/xhci-mtk-sch.c
+index fea555570ad4..45c54d56ecbd 100644
+--- a/drivers/usb/host/xhci-mtk-sch.c
++++ b/drivers/usb/host/xhci-mtk-sch.c
+@@ -557,6 +557,10 @@ static bool need_bw_sch(struct usb_host_endpoint *ep,
+ if (is_fs_or_ls(speed) && !has_tt)
+ return false;
+
++ /* skip endpoint with zero maxpkt */
++ if (usb_endpoint_maxp(&ep->desc) == 0)
++ return false;
++
+ return true;
+ }
+
+diff --git a/drivers/usb/host/xhci-pci.c b/drivers/usb/host/xhci-pci.c
+index 766b74723e64..51251c1be059 100644
+--- a/drivers/usb/host/xhci-pci.c
++++ b/drivers/usb/host/xhci-pci.c
+@@ -255,6 +255,9 @@ static void xhci_pci_quirks(struct device *dev, struct xhci_hcd *xhci)
+ if (pdev->vendor == PCI_VENDOR_ID_ASMEDIA &&
+ pdev->device == 0x1142)
+ xhci->quirks |= XHCI_TRUST_TX_LENGTH;
++ if (pdev->vendor == PCI_VENDOR_ID_ASMEDIA &&
++ pdev->device == 0x2142)
++ xhci->quirks |= XHCI_NO_64BIT_SUPPORT;
+
+ if (pdev->vendor == PCI_VENDOR_ID_ASMEDIA &&
+ pdev->device == PCI_DEVICE_ID_ASMEDIA_1042A_XHCI)
+diff --git a/drivers/usb/host/xhci-tegra.c b/drivers/usb/host/xhci-tegra.c
+index 2eaf5c0af80c..ee6bf01775bb 100644
+--- a/drivers/usb/host/xhci-tegra.c
++++ b/drivers/usb/host/xhci-tegra.c
+@@ -856,7 +856,7 @@ static int tegra_xusb_init_context(struct tegra_xusb *tegra)
+ if (!tegra->context.ipfs)
+ return -ENOMEM;
+
+- tegra->context.fpci = devm_kcalloc(tegra->dev, soc->ipfs.num_offsets,
++ tegra->context.fpci = devm_kcalloc(tegra->dev, soc->fpci.num_offsets,
+ sizeof(u32), GFP_KERNEL);
+ if (!tegra->context.fpci)
+ return -ENOMEM;
+diff --git a/drivers/video/fbdev/core/bitblit.c b/drivers/video/fbdev/core/bitblit.c
+index ca935c09a261..35ebeeccde4d 100644
+--- a/drivers/video/fbdev/core/bitblit.c
++++ b/drivers/video/fbdev/core/bitblit.c
+@@ -216,7 +216,7 @@ static void bit_clear_margins(struct vc_data *vc, struct fb_info *info,
+ region.color = color;
+ region.rop = ROP_COPY;
+
+- if (rw && !bottom_only) {
++ if ((int) rw > 0 && !bottom_only) {
+ region.dx = info->var.xoffset + rs;
+ region.dy = 0;
+ region.width = rw;
+@@ -224,7 +224,7 @@ static void bit_clear_margins(struct vc_data *vc, struct fb_info *info,
+ info->fbops->fb_fillrect(info, &region);
+ }
+
+- if (bh) {
++ if ((int) bh > 0) {
+ region.dx = info->var.xoffset;
+ region.dy = info->var.yoffset + bs;
+ region.width = rs;
+diff --git a/drivers/video/fbdev/core/fbcon_ccw.c b/drivers/video/fbdev/core/fbcon_ccw.c
+index dfa9a8aa4509..78f3a5621478 100644
+--- a/drivers/video/fbdev/core/fbcon_ccw.c
++++ b/drivers/video/fbdev/core/fbcon_ccw.c
+@@ -201,7 +201,7 @@ static void ccw_clear_margins(struct vc_data *vc, struct fb_info *info,
+ region.color = color;
+ region.rop = ROP_COPY;
+
+- if (rw && !bottom_only) {
++ if ((int) rw > 0 && !bottom_only) {
+ region.dx = 0;
+ region.dy = info->var.yoffset;
+ region.height = rw;
+@@ -209,7 +209,7 @@ static void ccw_clear_margins(struct vc_data *vc, struct fb_info *info,
+ info->fbops->fb_fillrect(info, &region);
+ }
+
+- if (bh) {
++ if ((int) bh > 0) {
+ region.dx = info->var.xoffset + bs;
+ region.dy = 0;
+ region.height = info->var.yres_virtual;
+diff --git a/drivers/video/fbdev/core/fbcon_cw.c b/drivers/video/fbdev/core/fbcon_cw.c
+index ce08251bfd38..fd098ff17574 100644
+--- a/drivers/video/fbdev/core/fbcon_cw.c
++++ b/drivers/video/fbdev/core/fbcon_cw.c
+@@ -184,7 +184,7 @@ static void cw_clear_margins(struct vc_data *vc, struct fb_info *info,
+ region.color = color;
+ region.rop = ROP_COPY;
+
+- if (rw && !bottom_only) {
++ if ((int) rw > 0 && !bottom_only) {
+ region.dx = 0;
+ region.dy = info->var.yoffset + rs;
+ region.height = rw;
+@@ -192,7 +192,7 @@ static void cw_clear_margins(struct vc_data *vc, struct fb_info *info,
+ info->fbops->fb_fillrect(info, &region);
+ }
+
+- if (bh) {
++ if ((int) bh > 0) {
+ region.dx = info->var.xoffset;
+ region.dy = info->var.yoffset;
+ region.height = info->var.yres;
+diff --git a/drivers/video/fbdev/core/fbcon_ud.c b/drivers/video/fbdev/core/fbcon_ud.c
+index 1936afc78fec..e165a3fad29a 100644
+--- a/drivers/video/fbdev/core/fbcon_ud.c
++++ b/drivers/video/fbdev/core/fbcon_ud.c
+@@ -231,7 +231,7 @@ static void ud_clear_margins(struct vc_data *vc, struct fb_info *info,
+ region.color = color;
+ region.rop = ROP_COPY;
+
+- if (rw && !bottom_only) {
++ if ((int) rw > 0 && !bottom_only) {
+ region.dy = 0;
+ region.dx = info->var.xoffset;
+ region.width = rw;
+@@ -239,7 +239,7 @@ static void ud_clear_margins(struct vc_data *vc, struct fb_info *info,
+ info->fbops->fb_fillrect(info, &region);
+ }
+
+- if (bh) {
++ if ((int) bh > 0) {
+ region.dy = info->var.yoffset;
+ region.dx = info->var.xoffset;
+ region.height = bh;
+diff --git a/fs/btrfs/backref.c b/fs/btrfs/backref.c
+index 0cc02577577b..ae5cb94ef191 100644
+--- a/fs/btrfs/backref.c
++++ b/fs/btrfs/backref.c
+@@ -1465,6 +1465,7 @@ static int btrfs_find_all_roots_safe(struct btrfs_trans_handle *trans,
+ if (ret < 0 && ret != -ENOENT) {
+ ulist_free(tmp);
+ ulist_free(*roots);
++ *roots = NULL;
+ return ret;
+ }
+ node = ulist_next(tmp, &uiter);
+diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
+index f71e4dbe1d8a..f00e64fee5dd 100644
+--- a/fs/btrfs/disk-io.c
++++ b/fs/btrfs/disk-io.c
+@@ -1998,6 +1998,7 @@ void btrfs_put_root(struct btrfs_root *root)
+
+ if (refcount_dec_and_test(&root->refs)) {
+ WARN_ON(!RB_EMPTY_ROOT(&root->inode_tree));
++ WARN_ON(test_bit(BTRFS_ROOT_DEAD_RELOC_TREE, &root->state));
+ if (root->anon_dev)
+ free_anon_bdev(root->anon_dev);
+ btrfs_drew_lock_destroy(&root->snapshot_lock);
+diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
+index 6e17a92869ad..79196eb1a1b3 100644
+--- a/fs/btrfs/extent_io.c
++++ b/fs/btrfs/extent_io.c
+@@ -1999,7 +1999,8 @@ static int __process_pages_contig(struct address_space *mapping,
+ if (!PageDirty(pages[i]) ||
+ pages[i]->mapping != mapping) {
+ unlock_page(pages[i]);
+- put_page(pages[i]);
++ for (; i < ret; i++)
++ put_page(pages[i]);
+ err = -EAGAIN;
+ goto out;
+ }
+diff --git a/fs/btrfs/relocation.c b/fs/btrfs/relocation.c
+index 157452a5e110..f67d736c27a1 100644
+--- a/fs/btrfs/relocation.c
++++ b/fs/btrfs/relocation.c
+@@ -2642,6 +2642,8 @@ again:
+ root->reloc_root = NULL;
+ btrfs_put_root(reloc_root);
+ }
++ clear_bit(BTRFS_ROOT_DEAD_RELOC_TREE,
++ &root->state);
+ btrfs_put_root(root);
+ }
+
+diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
+index 21c7d3d87827..45cf455f906d 100644
+--- a/fs/btrfs/volumes.c
++++ b/fs/btrfs/volumes.c
+@@ -7055,6 +7055,14 @@ int btrfs_read_chunk_tree(struct btrfs_fs_info *fs_info)
+ mutex_lock(&uuid_mutex);
+ mutex_lock(&fs_info->chunk_mutex);
+
++ /*
++ * It is possible for mount and umount to race in such a way that
++ * we execute this code path, but open_fs_devices failed to clear
++ * total_rw_bytes. We certainly want it cleared before reading the
++ * device items, so clear it here.
++ */
++ fs_info->fs_devices->total_rw_bytes = 0;
++
+ /*
+ * Read all device items, and then all the chunk items. All
+ * device items are found before any chunk item (their object id
+diff --git a/fs/cifs/inode.c b/fs/cifs/inode.c
+index 44a57b65915b..15f2cdc71ac9 100644
+--- a/fs/cifs/inode.c
++++ b/fs/cifs/inode.c
+@@ -1855,7 +1855,6 @@ cifs_rename2(struct inode *source_dir, struct dentry *source_dentry,
+ FILE_UNIX_BASIC_INFO *info_buf_target;
+ unsigned int xid;
+ int rc, tmprc;
+- bool new_target = d_really_is_negative(target_dentry);
+
+ if (flags & ~RENAME_NOREPLACE)
+ return -EINVAL;
+@@ -1932,13 +1931,8 @@ cifs_rename2(struct inode *source_dir, struct dentry *source_dentry,
+ */
+
+ unlink_target:
+- /*
+- * If the target dentry was created during the rename, try
+- * unlinking it if it's not negative
+- */
+- if (new_target &&
+- d_really_is_positive(target_dentry) &&
+- (rc == -EACCES || rc == -EEXIST)) {
++ /* Try unlinking the target dentry if it's not negative */
++ if (d_really_is_positive(target_dentry) && (rc == -EACCES || rc == -EEXIST)) {
+ if (d_is_dir(target_dentry))
+ tmprc = cifs_rmdir(target_dir, target_dentry);
+ else
+diff --git a/fs/efivarfs/super.c b/fs/efivarfs/super.c
+index 12c66f5d92dd..28bb5689333a 100644
+--- a/fs/efivarfs/super.c
++++ b/fs/efivarfs/super.c
+@@ -201,6 +201,9 @@ static int efivarfs_fill_super(struct super_block *sb, struct fs_context *fc)
+ sb->s_d_op = &efivarfs_d_ops;
+ sb->s_time_gran = 1;
+
++ if (!efivar_supports_writes())
++ sb->s_flags |= SB_RDONLY;
++
+ inode = efivarfs_get_inode(sb, NULL, S_IFDIR | 0755, 0, true);
+ if (!inode)
+ return -ENOMEM;
+@@ -252,9 +255,6 @@ static struct file_system_type efivarfs_type = {
+
+ static __init int efivarfs_init(void)
+ {
+- if (!efi_rt_services_supported(EFI_RT_SUPPORTED_VARIABLE_SERVICES))
+- return -ENODEV;
+-
+ if (!efivars_kobject())
+ return -ENODEV;
+
+diff --git a/fs/exfat/dir.c b/fs/exfat/dir.c
+index 6db302d76d4c..0c244eb706bd 100644
+--- a/fs/exfat/dir.c
++++ b/fs/exfat/dir.c
+@@ -1160,7 +1160,7 @@ found:
+ ret = exfat_get_next_cluster(sb, &clu.dir);
+ }
+
+- if (ret || clu.dir != EXFAT_EOF_CLUSTER) {
++ if (ret || clu.dir == EXFAT_EOF_CLUSTER) {
+ /* just initialized hint_stat */
+ hint_stat->clu = p_dir->dir;
+ hint_stat->eidx = 0;
+diff --git a/fs/exfat/exfat_fs.h b/fs/exfat/exfat_fs.h
+index d865050fa6cd..99e9baf2d31d 100644
+--- a/fs/exfat/exfat_fs.h
++++ b/fs/exfat/exfat_fs.h
+@@ -375,7 +375,7 @@ static inline bool exfat_is_last_sector_in_cluster(struct exfat_sb_info *sbi,
+ static inline sector_t exfat_cluster_to_sector(struct exfat_sb_info *sbi,
+ unsigned int clus)
+ {
+- return ((clus - EXFAT_RESERVED_CLUSTERS) << sbi->sect_per_clus_bits) +
++ return ((sector_t)(clus - EXFAT_RESERVED_CLUSTERS) << sbi->sect_per_clus_bits) +
+ sbi->data_start_sector;
+ }
+
+diff --git a/fs/exfat/file.c b/fs/exfat/file.c
+index b93aa9e6cb16..04278f3c0adf 100644
+--- a/fs/exfat/file.c
++++ b/fs/exfat/file.c
+@@ -175,7 +175,7 @@ int __exfat_truncate(struct inode *inode, loff_t new_size)
+ ep2->dentry.stream.size = 0;
+ } else {
+ ep2->dentry.stream.valid_size = cpu_to_le64(new_size);
+- ep2->dentry.stream.size = ep->dentry.stream.valid_size;
++ ep2->dentry.stream.size = ep2->dentry.stream.valid_size;
+ }
+
+ if (new_size == 0) {
+diff --git a/fs/exfat/nls.c b/fs/exfat/nls.c
+index 6d1c3ae130ff..a647f8127f3b 100644
+--- a/fs/exfat/nls.c
++++ b/fs/exfat/nls.c
+@@ -495,7 +495,7 @@ static int exfat_utf8_to_utf16(struct super_block *sb,
+ struct exfat_uni_name *p_uniname, int *p_lossy)
+ {
+ int i, unilen, lossy = NLS_NAME_NO_LOSSY;
+- unsigned short upname[MAX_NAME_LENGTH + 1];
++ __le16 upname[MAX_NAME_LENGTH + 1];
+ unsigned short *uniname = p_uniname->name;
+
+ WARN_ON(!len);
+@@ -523,7 +523,7 @@ static int exfat_utf8_to_utf16(struct super_block *sb,
+ exfat_wstrchr(bad_uni_chars, *uniname))
+ lossy |= NLS_NAME_LOSSY;
+
+- upname[i] = exfat_toupper(sb, *uniname);
++ upname[i] = cpu_to_le16(exfat_toupper(sb, *uniname));
+ uniname++;
+ }
+
+@@ -614,7 +614,7 @@ static int exfat_nls_to_ucs2(struct super_block *sb,
+ struct exfat_uni_name *p_uniname, int *p_lossy)
+ {
+ int i = 0, unilen = 0, lossy = NLS_NAME_NO_LOSSY;
+- unsigned short upname[MAX_NAME_LENGTH + 1];
++ __le16 upname[MAX_NAME_LENGTH + 1];
+ unsigned short *uniname = p_uniname->name;
+ struct nls_table *nls = EXFAT_SB(sb)->nls_io;
+
+@@ -628,7 +628,7 @@ static int exfat_nls_to_ucs2(struct super_block *sb,
+ exfat_wstrchr(bad_uni_chars, *uniname))
+ lossy |= NLS_NAME_LOSSY;
+
+- upname[unilen] = exfat_toupper(sb, *uniname);
++ upname[unilen] = cpu_to_le16(exfat_toupper(sb, *uniname));
+ uniname++;
+ unilen++;
+ }
+diff --git a/fs/fuse/dev.c b/fs/fuse/dev.c
+index 5c155437a455..ec02c3240176 100644
+--- a/fs/fuse/dev.c
++++ b/fs/fuse/dev.c
+@@ -771,7 +771,8 @@ static int fuse_check_page(struct page *page)
+ 1 << PG_uptodate |
+ 1 << PG_lru |
+ 1 << PG_active |
+- 1 << PG_reclaim))) {
++ 1 << PG_reclaim |
++ 1 << PG_waiters))) {
+ pr_warn("trying to steal weird page\n");
+ pr_warn(" page=%p index=%li flags=%08lx, count=%i, mapcount=%i, mapping=%p\n", page, page->index, page->flags, page_count(page), page_mapcount(page), page->mapping);
+ return 1;
+diff --git a/fs/nfs/direct.c b/fs/nfs/direct.c
+index f0c3f0123131..d49b1d197908 100644
+--- a/fs/nfs/direct.c
++++ b/fs/nfs/direct.c
+@@ -267,6 +267,8 @@ static void nfs_direct_complete(struct nfs_direct_req *dreq)
+ {
+ struct inode *inode = dreq->inode;
+
++ inode_dio_end(inode);
++
+ if (dreq->iocb) {
+ long res = (long) dreq->error;
+ if (dreq->count != 0) {
+@@ -278,10 +280,7 @@ static void nfs_direct_complete(struct nfs_direct_req *dreq)
+
+ complete(&dreq->completion);
+
+- igrab(inode);
+ nfs_direct_req_release(dreq);
+- inode_dio_end(inode);
+- iput(inode);
+ }
+
+ static void nfs_direct_read_completion(struct nfs_pgio_header *hdr)
+@@ -411,10 +410,8 @@ static ssize_t nfs_direct_read_schedule_iovec(struct nfs_direct_req *dreq,
+ * generic layer handle the completion.
+ */
+ if (requested_bytes == 0) {
+- igrab(inode);
+- nfs_direct_req_release(dreq);
+ inode_dio_end(inode);
+- iput(inode);
++ nfs_direct_req_release(dreq);
+ return result < 0 ? result : -EIO;
+ }
+
+@@ -867,10 +864,8 @@ static ssize_t nfs_direct_write_schedule_iovec(struct nfs_direct_req *dreq,
+ * generic layer handle the completion.
+ */
+ if (requested_bytes == 0) {
+- igrab(inode);
+- nfs_direct_req_release(dreq);
+ inode_dio_end(inode);
+- iput(inode);
++ nfs_direct_req_release(dreq);
+ return result < 0 ? result : -EIO;
+ }
+
+diff --git a/fs/nfs/file.c b/fs/nfs/file.c
+index ccd6c1637b27..f96367a2463e 100644
+--- a/fs/nfs/file.c
++++ b/fs/nfs/file.c
+@@ -83,7 +83,6 @@ nfs_file_release(struct inode *inode, struct file *filp)
+ dprintk("NFS: release(%pD2)\n", filp);
+
+ nfs_inc_stats(inode, NFSIOS_VFSRELEASE);
+- inode_dio_wait(inode);
+ nfs_file_clear_open_context(filp);
+ return 0;
+ }
+diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
+index bdfae3ba3953..0a201bb074b0 100644
+--- a/fs/nfsd/nfs4state.c
++++ b/fs/nfsd/nfs4state.c
+@@ -509,6 +509,17 @@ find_any_file(struct nfs4_file *f)
+ return ret;
+ }
+
++static struct nfsd_file *find_deleg_file(struct nfs4_file *f)
++{
++ struct nfsd_file *ret = NULL;
++
++ spin_lock(&f->fi_lock);
++ if (f->fi_deleg_file)
++ ret = nfsd_file_get(f->fi_deleg_file);
++ spin_unlock(&f->fi_lock);
++ return ret;
++}
++
+ static atomic_long_t num_delegations;
+ unsigned long max_delegations;
+
+@@ -2436,6 +2447,8 @@ static int nfs4_show_open(struct seq_file *s, struct nfs4_stid *st)
+ oo = ols->st_stateowner;
+ nf = st->sc_file;
+ file = find_any_file(nf);
++ if (!file)
++ return 0;
+
+ seq_printf(s, "- 0x%16phN: { type: open, ", &st->sc_stateid);
+
+@@ -2469,6 +2482,8 @@ static int nfs4_show_lock(struct seq_file *s, struct nfs4_stid *st)
+ oo = ols->st_stateowner;
+ nf = st->sc_file;
+ file = find_any_file(nf);
++ if (!file)
++ return 0;
+
+ seq_printf(s, "- 0x%16phN: { type: lock, ", &st->sc_stateid);
+
+@@ -2497,7 +2512,9 @@ static int nfs4_show_deleg(struct seq_file *s, struct nfs4_stid *st)
+
+ ds = delegstateid(st);
+ nf = st->sc_file;
+- file = nf->fi_deleg_file;
++ file = find_deleg_file(nf);
++ if (!file)
++ return 0;
+
+ seq_printf(s, "- 0x%16phN: { type: deleg, ", &st->sc_stateid);
+
+@@ -2509,6 +2526,7 @@ static int nfs4_show_deleg(struct seq_file *s, struct nfs4_stid *st)
+
+ nfs4_show_superblock(s, file);
+ seq_printf(s, " }\n");
++ nfsd_file_put(file);
+
+ return 0;
+ }
+diff --git a/include/asm-generic/mmiowb.h b/include/asm-generic/mmiowb.h
+index 9439ff037b2d..5698fca3bf56 100644
+--- a/include/asm-generic/mmiowb.h
++++ b/include/asm-generic/mmiowb.h
+@@ -27,7 +27,7 @@
+ #include <asm/smp.h>
+
+ DECLARE_PER_CPU(struct mmiowb_state, __mmiowb_state);
+-#define __mmiowb_state() this_cpu_ptr(&__mmiowb_state)
++#define __mmiowb_state() raw_cpu_ptr(&__mmiowb_state)
+ #else
+ #define __mmiowb_state() arch_mmiowb_state()
+ #endif /* arch_mmiowb_state */
+@@ -35,7 +35,9 @@ DECLARE_PER_CPU(struct mmiowb_state, __mmiowb_state);
+ static inline void mmiowb_set_pending(void)
+ {
+ struct mmiowb_state *ms = __mmiowb_state();
+- ms->mmiowb_pending = ms->nesting_count;
++
++ if (likely(ms->nesting_count))
++ ms->mmiowb_pending = ms->nesting_count;
+ }
+
+ static inline void mmiowb_spin_lock(void)
+diff --git a/include/asm-generic/vmlinux.lds.h b/include/asm-generic/vmlinux.lds.h
+index 71e387a5fe90..dc29044d3ed9 100644
+--- a/include/asm-generic/vmlinux.lds.h
++++ b/include/asm-generic/vmlinux.lds.h
+@@ -341,7 +341,8 @@
+
+ #define PAGE_ALIGNED_DATA(page_align) \
+ . = ALIGN(page_align); \
+- *(.data..page_aligned)
++ *(.data..page_aligned) \
++ . = ALIGN(page_align);
+
+ #define READ_MOSTLY_DATA(align) \
+ . = ALIGN(align); \
+@@ -727,7 +728,9 @@
+ . = ALIGN(bss_align); \
+ .bss : AT(ADDR(.bss) - LOAD_OFFSET) { \
+ BSS_FIRST_SECTIONS \
++ . = ALIGN(PAGE_SIZE); \
+ *(.bss..page_aligned) \
++ . = ALIGN(PAGE_SIZE); \
+ *(.dynbss) \
+ *(BSS_MAIN) \
+ *(COMMON) \
+diff --git a/include/linux/device-mapper.h b/include/linux/device-mapper.h
+index af48d9da3916..2d84ca74cc74 100644
+--- a/include/linux/device-mapper.h
++++ b/include/linux/device-mapper.h
+@@ -424,6 +424,7 @@ const char *dm_device_name(struct mapped_device *md);
+ int dm_copy_name_and_uuid(struct mapped_device *md, char *name, char *uuid);
+ struct gendisk *dm_disk(struct mapped_device *md);
+ int dm_suspended(struct dm_target *ti);
++int dm_post_suspending(struct dm_target *ti);
+ int dm_noflush_suspending(struct dm_target *ti);
+ void dm_accept_partial_bio(struct bio *bio, unsigned n_sectors);
+ union map_info *dm_get_rq_mapinfo(struct request *rq);
+diff --git a/include/linux/efi.h b/include/linux/efi.h
+index 9430d01c0c3d..650794abfa32 100644
+--- a/include/linux/efi.h
++++ b/include/linux/efi.h
+@@ -991,6 +991,7 @@ int efivars_register(struct efivars *efivars,
+ int efivars_unregister(struct efivars *efivars);
+ struct kobject *efivars_kobject(void);
+
++int efivar_supports_writes(void);
+ int efivar_init(int (*func)(efi_char16_t *, efi_guid_t, unsigned long, void *),
+ void *data, bool duplicates, struct list_head *head);
+
+diff --git a/include/linux/io-mapping.h b/include/linux/io-mapping.h
+index b336622612f3..75eebe7c12f8 100644
+--- a/include/linux/io-mapping.h
++++ b/include/linux/io-mapping.h
+@@ -107,9 +107,12 @@ io_mapping_init_wc(struct io_mapping *iomap,
+ resource_size_t base,
+ unsigned long size)
+ {
++ iomap->iomem = ioremap_wc(base, size);
++ if (!iomap->iomem)
++ return NULL;
++
+ iomap->base = base;
+ iomap->size = size;
+- iomap->iomem = ioremap_wc(base, size);
+ #if defined(pgprot_noncached_wc) /* archs can't agree on a name ... */
+ iomap->prot = pgprot_noncached_wc(PAGE_KERNEL);
+ #elif defined(pgprot_writecombine)
+diff --git a/include/linux/mod_devicetable.h b/include/linux/mod_devicetable.h
+index 0754b8d71262..8a84f11bf124 100644
+--- a/include/linux/mod_devicetable.h
++++ b/include/linux/mod_devicetable.h
+@@ -318,7 +318,7 @@ struct pcmcia_device_id {
+ #define INPUT_DEVICE_ID_LED_MAX 0x0f
+ #define INPUT_DEVICE_ID_SND_MAX 0x07
+ #define INPUT_DEVICE_ID_FF_MAX 0x7f
+-#define INPUT_DEVICE_ID_SW_MAX 0x0f
++#define INPUT_DEVICE_ID_SW_MAX 0x10
+ #define INPUT_DEVICE_ID_PROP_MAX 0x1f
+
+ #define INPUT_DEVICE_ID_MATCH_BUS 1
+diff --git a/include/linux/xattr.h b/include/linux/xattr.h
+index 47eaa34f8761..c5afaf8ca7a2 100644
+--- a/include/linux/xattr.h
++++ b/include/linux/xattr.h
+@@ -15,6 +15,7 @@
+ #include <linux/slab.h>
+ #include <linux/types.h>
+ #include <linux/spinlock.h>
++#include <linux/mm.h>
+ #include <uapi/linux/xattr.h>
+
+ struct inode;
+@@ -94,7 +95,7 @@ static inline void simple_xattrs_free(struct simple_xattrs *xattrs)
+
+ list_for_each_entry_safe(xattr, node, &xattrs->head, list) {
+ kfree(xattr->name);
+- kfree(xattr);
++ kvfree(xattr);
+ }
+ }
+
+diff --git a/include/sound/rt5670.h b/include/sound/rt5670.h
+index f9024c7a1600..02e1d7778354 100644
+--- a/include/sound/rt5670.h
++++ b/include/sound/rt5670.h
+@@ -12,6 +12,7 @@ struct rt5670_platform_data {
+ int jd_mode;
+ bool in2_diff;
+ bool dev_gpio;
++ bool gpio1_is_ext_spk_en;
+
+ bool dmic_en;
+ unsigned int dmic1_data_pin;
+diff --git a/include/uapi/linux/idxd.h b/include/uapi/linux/idxd.h
+index 1f412fbf561b..e103c1434e4b 100644
+--- a/include/uapi/linux/idxd.h
++++ b/include/uapi/linux/idxd.h
+@@ -110,9 +110,12 @@ struct dsa_hw_desc {
+ uint16_t rsvd1;
+ union {
+ uint8_t expected_res;
++ /* create delta record */
+ struct {
+ uint64_t delta_addr;
+ uint32_t max_delta_size;
++ uint32_t delt_rsvd;
++ uint8_t expected_res_mask;
+ };
+ uint32_t delta_rec_size;
+ uint64_t dest2;
+diff --git a/include/uapi/linux/input-event-codes.h b/include/uapi/linux/input-event-codes.h
+index b6a835d37826..0c2e27d28e0a 100644
+--- a/include/uapi/linux/input-event-codes.h
++++ b/include/uapi/linux/input-event-codes.h
+@@ -888,7 +888,8 @@
+ #define SW_LINEIN_INSERT 0x0d /* set = inserted */
+ #define SW_MUTE_DEVICE 0x0e /* set = device disabled */
+ #define SW_PEN_INSERTED 0x0f /* set = pen inserted */
+-#define SW_MAX 0x0f
++#define SW_MACHINE_COVER 0x10 /* set = cover closed */
++#define SW_MAX 0x10
+ #define SW_CNT (SW_MAX+1)
+
+ /*
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index 739d9ba3ba6b..eebdd5307713 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -9613,7 +9613,7 @@ static int jit_subprogs(struct bpf_verifier_env *env)
+ int i, j, subprog_start, subprog_end = 0, len, subprog;
+ struct bpf_insn *insn;
+ void *old_bpf_func;
+- int err;
++ int err, num_exentries;
+
+ if (env->subprog_cnt <= 1)
+ return 0;
+@@ -9688,6 +9688,14 @@ static int jit_subprogs(struct bpf_verifier_env *env)
+ func[i]->aux->nr_linfo = prog->aux->nr_linfo;
+ func[i]->aux->jited_linfo = prog->aux->jited_linfo;
+ func[i]->aux->linfo_idx = env->subprog_info[i].linfo_idx;
++ num_exentries = 0;
++ insn = func[i]->insnsi;
++ for (j = 0; j < func[i]->len; j++, insn++) {
++ if (BPF_CLASS(insn->code) == BPF_LDX &&
++ BPF_MODE(insn->code) == BPF_PROBE_MEM)
++ num_exentries++;
++ }
++ func[i]->aux->num_exentries = num_exentries;
+ func[i] = bpf_int_jit_compile(func[i]);
+ if (!func[i]->jited) {
+ err = -ENOTSUPP;
+diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
+index cc2095607c74..8e1f7165162c 100644
+--- a/kernel/events/uprobes.c
++++ b/kernel/events/uprobes.c
+@@ -2205,7 +2205,7 @@ static void handle_swbp(struct pt_regs *regs)
+ if (!uprobe) {
+ if (is_swbp > 0) {
+ /* No matching uprobe; signal SIGTRAP. */
+- send_sig(SIGTRAP, current, 0);
++ force_sig(SIGTRAP);
+ } else {
+ /*
+ * Either we raced with uprobe_unregister() or we can't
+diff --git a/mm/hugetlb.c b/mm/hugetlb.c
+index 4f7cdc55fbe4..461324757c75 100644
+--- a/mm/hugetlb.c
++++ b/mm/hugetlb.c
+@@ -46,7 +46,10 @@ int hugetlb_max_hstate __read_mostly;
+ unsigned int default_hstate_idx;
+ struct hstate hstates[HUGE_MAX_HSTATE];
+
++#ifdef CONFIG_CMA
+ static struct cma *hugetlb_cma[MAX_NUMNODES];
++#endif
++static unsigned long hugetlb_cma_size __initdata;
+
+ /*
+ * Minimum page order among possible hugepage sizes, set to a proper value
+@@ -1236,9 +1239,10 @@ static void free_gigantic_page(struct page *page, unsigned int order)
+ * If the page isn't allocated using the cma allocator,
+ * cma_release() returns false.
+ */
+- if (IS_ENABLED(CONFIG_CMA) &&
+- cma_release(hugetlb_cma[page_to_nid(page)], page, 1 << order))
++#ifdef CONFIG_CMA
++ if (cma_release(hugetlb_cma[page_to_nid(page)], page, 1 << order))
+ return;
++#endif
+
+ free_contig_range(page_to_pfn(page), 1 << order);
+ }
+@@ -1249,7 +1253,8 @@ static struct page *alloc_gigantic_page(struct hstate *h, gfp_t gfp_mask,
+ {
+ unsigned long nr_pages = 1UL << huge_page_order(h);
+
+- if (IS_ENABLED(CONFIG_CMA)) {
++#ifdef CONFIG_CMA
++ {
+ struct page *page;
+ int node;
+
+@@ -1263,6 +1268,7 @@ static struct page *alloc_gigantic_page(struct hstate *h, gfp_t gfp_mask,
+ return page;
+ }
+ }
++#endif
+
+ return alloc_contig_pages(nr_pages, gfp_mask, nid, nodemask);
+ }
+@@ -2572,7 +2578,7 @@ static void __init hugetlb_hstate_alloc_pages(struct hstate *h)
+
+ for (i = 0; i < h->max_huge_pages; ++i) {
+ if (hstate_is_gigantic(h)) {
+- if (IS_ENABLED(CONFIG_CMA) && hugetlb_cma[0]) {
++ if (hugetlb_cma_size) {
+ pr_warn_once("HugeTLB: hugetlb_cma is enabled, skip boot time allocation\n");
+ break;
+ }
+@@ -5548,7 +5554,6 @@ void move_hugetlb_state(struct page *oldpage, struct page *newpage, int reason)
+ }
+
+ #ifdef CONFIG_CMA
+-static unsigned long hugetlb_cma_size __initdata;
+ static bool cma_reserve_called __initdata;
+
+ static int __init cmdline_parse_hugetlb_cma(char *p)
+diff --git a/mm/khugepaged.c b/mm/khugepaged.c
+index cd280afb246e..e9e7a5659d64 100644
+--- a/mm/khugepaged.c
++++ b/mm/khugepaged.c
+@@ -873,6 +873,9 @@ static int hugepage_vma_revalidate(struct mm_struct *mm, unsigned long address,
+ return SCAN_ADDRESS_RANGE;
+ if (!hugepage_vma_check(vma, vma->vm_flags))
+ return SCAN_VMA_CHECK;
++ /* Anon VMA expected */
++ if (!vma->anon_vma || vma->vm_ops)
++ return SCAN_VMA_CHECK;
+ return 0;
+ }
+
+diff --git a/mm/memcontrol.c b/mm/memcontrol.c
+index ef0e291a8cf4..9a4a77104d16 100644
+--- a/mm/memcontrol.c
++++ b/mm/memcontrol.c
+@@ -5658,7 +5658,6 @@ static void __mem_cgroup_clear_mc(void)
+ if (!mem_cgroup_is_root(mc.to))
+ page_counter_uncharge(&mc.to->memory, mc.moved_swap);
+
+- mem_cgroup_id_get_many(mc.to, mc.moved_swap);
+ css_put_many(&mc.to->css, mc.moved_swap);
+
+ mc.moved_swap = 0;
+@@ -5849,7 +5848,8 @@ put: /* get_mctgt_type() gets the page */
+ ent = target.ent;
+ if (!mem_cgroup_move_swap_account(ent, mc.from, mc.to)) {
+ mc.precharge--;
+- /* we fixup refcnts and charges later. */
++ mem_cgroup_id_get_many(mc.to, 1);
++ /* we fixup other refcnts and charges later. */
+ mc.moved_swap++;
+ }
+ break;
+diff --git a/mm/mmap.c b/mm/mmap.c
+index f609e9ec4a25..bb1822ac9909 100644
+--- a/mm/mmap.c
++++ b/mm/mmap.c
+@@ -2620,7 +2620,7 @@ static void unmap_region(struct mm_struct *mm,
+ * Create a list of vma's touched by the unmap, removing them from the mm's
+ * vma list as we go..
+ */
+-static void
++static bool
+ detach_vmas_to_be_unmapped(struct mm_struct *mm, struct vm_area_struct *vma,
+ struct vm_area_struct *prev, unsigned long end)
+ {
+@@ -2645,6 +2645,17 @@ detach_vmas_to_be_unmapped(struct mm_struct *mm, struct vm_area_struct *vma,
+
+ /* Kill the cache */
+ vmacache_invalidate(mm);
++
++ /*
++ * Do not downgrade mmap_lock if we are next to VM_GROWSDOWN or
++ * VM_GROWSUP VMA. Such VMAs can change their size under
++ * down_read(mmap_lock) and collide with the VMA we are about to unmap.
++ */
++ if (vma && (vma->vm_flags & VM_GROWSDOWN))
++ return false;
++ if (prev && (prev->vm_flags & VM_GROWSUP))
++ return false;
++ return true;
+ }
+
+ /*
+@@ -2825,7 +2836,8 @@ int __do_munmap(struct mm_struct *mm, unsigned long start, size_t len,
+ }
+
+ /* Detach vmas from rbtree */
+- detach_vmas_to_be_unmapped(mm, vma, prev, end);
++ if (!detach_vmas_to_be_unmapped(mm, vma, prev, end))
++ downgrade = false;
+
+ if (downgrade)
+ downgrade_write(&mm->mmap_sem);
+diff --git a/mm/shmem.c b/mm/shmem.c
+index bd8840082c94..97b4a47e9767 100644
+--- a/mm/shmem.c
++++ b/mm/shmem.c
+@@ -3205,7 +3205,7 @@ static int shmem_initxattrs(struct inode *inode,
+ new_xattr->name = kmalloc(XATTR_SECURITY_PREFIX_LEN + len,
+ GFP_KERNEL);
+ if (!new_xattr->name) {
+- kfree(new_xattr);
++ kvfree(new_xattr);
+ return -ENOMEM;
+ }
+
+diff --git a/mm/slab_common.c b/mm/slab_common.c
+index 37d48a56431d..fe8b68482670 100644
+--- a/mm/slab_common.c
++++ b/mm/slab_common.c
+@@ -326,6 +326,14 @@ int slab_unmergeable(struct kmem_cache *s)
+ if (s->refcount < 0)
+ return 1;
+
++#ifdef CONFIG_MEMCG_KMEM
++ /*
++ * Skip the dying kmem_cache.
++ */
++ if (s->memcg_params.dying)
++ return 1;
++#endif
++
+ return 0;
+ }
+
+@@ -886,12 +894,15 @@ static int shutdown_memcg_caches(struct kmem_cache *s)
+ return 0;
+ }
+
+-static void flush_memcg_workqueue(struct kmem_cache *s)
++static void memcg_set_kmem_cache_dying(struct kmem_cache *s)
+ {
+ spin_lock_irq(&memcg_kmem_wq_lock);
+ s->memcg_params.dying = true;
+ spin_unlock_irq(&memcg_kmem_wq_lock);
++}
+
++static void flush_memcg_workqueue(struct kmem_cache *s)
++{
+ /*
+ * SLAB and SLUB deactivate the kmem_caches through call_rcu. Make
+ * sure all registered rcu callbacks have been invoked.
+@@ -923,10 +934,6 @@ static inline int shutdown_memcg_caches(struct kmem_cache *s)
+ {
+ return 0;
+ }
+-
+-static inline void flush_memcg_workqueue(struct kmem_cache *s)
+-{
+-}
+ #endif /* CONFIG_MEMCG_KMEM */
+
+ void slab_kmem_cache_release(struct kmem_cache *s)
+@@ -944,8 +951,6 @@ void kmem_cache_destroy(struct kmem_cache *s)
+ if (unlikely(!s))
+ return;
+
+- flush_memcg_workqueue(s);
+-
+ get_online_cpus();
+ get_online_mems();
+
+@@ -955,6 +960,22 @@ void kmem_cache_destroy(struct kmem_cache *s)
+ if (s->refcount)
+ goto out_unlock;
+
++#ifdef CONFIG_MEMCG_KMEM
++ memcg_set_kmem_cache_dying(s);
++
++ mutex_unlock(&slab_mutex);
++
++ put_online_mems();
++ put_online_cpus();
++
++ flush_memcg_workqueue(s);
++
++ get_online_cpus();
++ get_online_mems();
++
++ mutex_lock(&slab_mutex);
++#endif
++
+ err = shutdown_memcg_caches(s);
+ if (!err)
+ err = shutdown_cache(s);
+diff --git a/net/mac80211/rx.c b/net/mac80211/rx.c
+index 91a13aee4378..961f37c0701b 100644
+--- a/net/mac80211/rx.c
++++ b/net/mac80211/rx.c
+@@ -2357,6 +2357,7 @@ static int ieee80211_802_1x_port_control(struct ieee80211_rx_data *rx)
+
+ static int ieee80211_drop_unencrypted(struct ieee80211_rx_data *rx, __le16 fc)
+ {
++ struct ieee80211_hdr *hdr = (void *)rx->skb->data;
+ struct sk_buff *skb = rx->skb;
+ struct ieee80211_rx_status *status = IEEE80211_SKB_RXCB(skb);
+
+@@ -2367,6 +2368,31 @@ static int ieee80211_drop_unencrypted(struct ieee80211_rx_data *rx, __le16 fc)
+ if (status->flag & RX_FLAG_DECRYPTED)
+ return 0;
+
++ /* check mesh EAPOL frames first */
++ if (unlikely(rx->sta && ieee80211_vif_is_mesh(&rx->sdata->vif) &&
++ ieee80211_is_data(fc))) {
++ struct ieee80211s_hdr *mesh_hdr;
++ u16 hdr_len = ieee80211_hdrlen(fc);
++ u16 ethertype_offset;
++ __be16 ethertype;
++
++ if (!ether_addr_equal(hdr->addr1, rx->sdata->vif.addr))
++ goto drop_check;
++
++ /* make sure fixed part of mesh header is there, also checks skb len */
++ if (!pskb_may_pull(rx->skb, hdr_len + 6))
++ goto drop_check;
++
++ mesh_hdr = (struct ieee80211s_hdr *)(skb->data + hdr_len);
++ ethertype_offset = hdr_len + ieee80211_get_mesh_hdrlen(mesh_hdr) +
++ sizeof(rfc1042_header);
++
++	if (skb_copy_bits(rx->skb, ethertype_offset, &ethertype, 2) == 0 &&
++ ethertype == rx->sdata->control_port_protocol)
++ return 0;
++ }
++
++drop_check:
+ /* Drop unencrypted frames if key is set. */
+ if (unlikely(!ieee80211_has_protected(fc) &&
+ !ieee80211_is_any_nullfunc(fc) &&
+diff --git a/net/netfilter/ipvs/ip_vs_sync.c b/net/netfilter/ipvs/ip_vs_sync.c
+index 605e0f68f8bd..2b8abbfe018c 100644
+--- a/net/netfilter/ipvs/ip_vs_sync.c
++++ b/net/netfilter/ipvs/ip_vs_sync.c
+@@ -1717,6 +1717,8 @@ static int sync_thread_backup(void *data)
+ {
+ struct ip_vs_sync_thread_data *tinfo = data;
+ struct netns_ipvs *ipvs = tinfo->ipvs;
++ struct sock *sk = tinfo->sock->sk;
++ struct udp_sock *up = udp_sk(sk);
+ int len;
+
+ pr_info("sync thread started: state = BACKUP, mcast_ifn = %s, "
+@@ -1724,12 +1726,14 @@ static int sync_thread_backup(void *data)
+ ipvs->bcfg.mcast_ifn, ipvs->bcfg.syncid, tinfo->id);
+
+ while (!kthread_should_stop()) {
+- wait_event_interruptible(*sk_sleep(tinfo->sock->sk),
+- !skb_queue_empty(&tinfo->sock->sk->sk_receive_queue)
+- || kthread_should_stop());
++ wait_event_interruptible(*sk_sleep(sk),
++ !skb_queue_empty_lockless(&sk->sk_receive_queue) ||
++ !skb_queue_empty_lockless(&up->reader_queue) ||
++ kthread_should_stop());
+
+ /* do we have data now? */
+- while (!skb_queue_empty(&(tinfo->sock->sk->sk_receive_queue))) {
++ while (!skb_queue_empty_lockless(&sk->sk_receive_queue) ||
++ !skb_queue_empty_lockless(&up->reader_queue)) {
+ len = ip_vs_receive(tinfo->sock, tinfo->buf,
+ ipvs->bcfg.sync_maxlen);
+ if (len <= 0) {
+diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
+index 9780bd93b7e4..e1d678af8749 100644
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -188,24 +188,6 @@ static void nft_netdev_unregister_hooks(struct net *net,
+ nf_unregister_net_hook(net, &hook->ops);
+ }
+
+-static int nft_register_basechain_hooks(struct net *net, int family,
+- struct nft_base_chain *basechain)
+-{
+- if (family == NFPROTO_NETDEV)
+- return nft_netdev_register_hooks(net, &basechain->hook_list);
+-
+- return nf_register_net_hook(net, &basechain->ops);
+-}
+-
+-static void nft_unregister_basechain_hooks(struct net *net, int family,
+- struct nft_base_chain *basechain)
+-{
+- if (family == NFPROTO_NETDEV)
+- nft_netdev_unregister_hooks(net, &basechain->hook_list);
+- else
+- nf_unregister_net_hook(net, &basechain->ops);
+-}
+-
+ static int nf_tables_register_hook(struct net *net,
+ const struct nft_table *table,
+ struct nft_chain *chain)
+@@ -223,7 +205,10 @@ static int nf_tables_register_hook(struct net *net,
+ if (basechain->type->ops_register)
+ return basechain->type->ops_register(net, ops);
+
+- return nft_register_basechain_hooks(net, table->family, basechain);
++ if (table->family == NFPROTO_NETDEV)
++ return nft_netdev_register_hooks(net, &basechain->hook_list);
++
++ return nf_register_net_hook(net, &basechain->ops);
+ }
+
+ static void nf_tables_unregister_hook(struct net *net,
+@@ -242,7 +227,10 @@ static void nf_tables_unregister_hook(struct net *net,
+ if (basechain->type->ops_unregister)
+ return basechain->type->ops_unregister(net, ops);
+
+- nft_unregister_basechain_hooks(net, table->family, basechain);
++ if (table->family == NFPROTO_NETDEV)
++ nft_netdev_unregister_hooks(net, &basechain->hook_list);
++ else
++ nf_unregister_net_hook(net, &basechain->ops);
+ }
+
+ static int nft_trans_table_add(struct nft_ctx *ctx, int msg_type)
+@@ -832,8 +820,7 @@ static void nft_table_disable(struct net *net, struct nft_table *table, u32 cnt)
+ if (cnt && i++ == cnt)
+ break;
+
+- nft_unregister_basechain_hooks(net, table->family,
+- nft_base_chain(chain));
++ nf_tables_unregister_hook(net, table, chain);
+ }
+ }
+
+@@ -848,8 +835,7 @@ static int nf_tables_table_enable(struct net *net, struct nft_table *table)
+ if (!nft_is_base_chain(chain))
+ continue;
+
+- err = nft_register_basechain_hooks(net, table->family,
+- nft_base_chain(chain));
++ err = nf_tables_register_hook(net, table, chain);
+ if (err < 0)
+ goto err_register_hooks;
+
+@@ -894,11 +880,12 @@ static int nf_tables_updtable(struct nft_ctx *ctx)
+ nft_trans_table_enable(trans) = false;
+ } else if (!(flags & NFT_TABLE_F_DORMANT) &&
+ ctx->table->flags & NFT_TABLE_F_DORMANT) {
++ ctx->table->flags &= ~NFT_TABLE_F_DORMANT;
+ ret = nf_tables_table_enable(ctx->net, ctx->table);
+- if (ret >= 0) {
+- ctx->table->flags &= ~NFT_TABLE_F_DORMANT;
++ if (ret >= 0)
+ nft_trans_table_enable(trans) = true;
+- }
++ else
++ ctx->table->flags |= NFT_TABLE_F_DORMANT;
+ }
+ if (ret < 0)
+ goto err;
+diff --git a/net/vmw_vsock/virtio_transport.c b/net/vmw_vsock/virtio_transport.c
+index dfbaf6bd8b1c..2700a63ab095 100644
+--- a/net/vmw_vsock/virtio_transport.c
++++ b/net/vmw_vsock/virtio_transport.c
+@@ -22,7 +22,7 @@
+ #include <net/af_vsock.h>
+
+ static struct workqueue_struct *virtio_vsock_workqueue;
+-static struct virtio_vsock *the_virtio_vsock;
++static struct virtio_vsock __rcu *the_virtio_vsock;
+ static DEFINE_MUTEX(the_virtio_vsock_mutex); /* protects the_virtio_vsock */
+
+ struct virtio_vsock {
+diff --git a/scripts/decode_stacktrace.sh b/scripts/decode_stacktrace.sh
+index 13e5fbafdf2f..fe7076fdac8a 100755
+--- a/scripts/decode_stacktrace.sh
++++ b/scripts/decode_stacktrace.sh
+@@ -84,8 +84,8 @@ parse_symbol() {
+ return
+ fi
+
+- # Strip out the base of the path
+- code=${code#$basepath/}
++ # Strip out the base of the path on each line
++ code=$(while read -r line; do echo "${line#$basepath/}"; done <<< "$code")
+
+ # In the case of inlines, move everything to same line
+ code=${code//$'\n'/' '}
+diff --git a/scripts/gdb/linux/symbols.py b/scripts/gdb/linux/symbols.py
+index be984aa29b75..1be9763cf8bb 100644
+--- a/scripts/gdb/linux/symbols.py
++++ b/scripts/gdb/linux/symbols.py
+@@ -96,7 +96,7 @@ lx-symbols command."""
+ return ""
+ attrs = sect_attrs['attrs']
+ section_name_to_address = {
+- attrs[n]['name'].string(): attrs[n]['address']
++ attrs[n]['battr']['attr']['name'].string(): attrs[n]['address']
+ for n in range(int(sect_attrs['nsections']))}
+ args = []
+ for section_name in [".data", ".data..read_mostly", ".rodata", ".bss",
+diff --git a/sound/core/info.c b/sound/core/info.c
+index 8c6bc5241df5..9fec3070f8ba 100644
+--- a/sound/core/info.c
++++ b/sound/core/info.c
+@@ -606,7 +606,9 @@ int snd_info_get_line(struct snd_info_buffer *buffer, char *line, int len)
+ {
+ int c;
+
+- if (snd_BUG_ON(!buffer || !buffer->buffer))
++ if (snd_BUG_ON(!buffer))
++ return 1;
++ if (!buffer->buffer)
+ return 1;
+ if (len <= 0 || buffer->stop || buffer->error)
+ return 1;
+diff --git a/sound/pci/hda/patch_hdmi.c b/sound/pci/hda/patch_hdmi.c
+index 137d655fed8f..e821c9df8107 100644
+--- a/sound/pci/hda/patch_hdmi.c
++++ b/sound/pci/hda/patch_hdmi.c
+@@ -1804,33 +1804,43 @@ static int hdmi_add_cvt(struct hda_codec *codec, hda_nid_t cvt_nid)
+
+ static int hdmi_parse_codec(struct hda_codec *codec)
+ {
+- hda_nid_t nid;
++ hda_nid_t start_nid;
++ unsigned int caps;
+ int i, nodes;
+
+- nodes = snd_hda_get_sub_nodes(codec, codec->core.afg, &nid);
+- if (!nid || nodes < 0) {
++ nodes = snd_hda_get_sub_nodes(codec, codec->core.afg, &start_nid);
++ if (!start_nid || nodes < 0) {
+ codec_warn(codec, "HDMI: failed to get afg sub nodes\n");
+ return -EINVAL;
+ }
+
+- for (i = 0; i < nodes; i++, nid++) {
+- unsigned int caps;
+- unsigned int type;
++ /*
++ * hdmi_add_pin() assumes total amount of converters to
++ * be known, so first discover all converters
++ */
++ for (i = 0; i < nodes; i++) {
++ hda_nid_t nid = start_nid + i;
+
+ caps = get_wcaps(codec, nid);
+- type = get_wcaps_type(caps);
+
+ if (!(caps & AC_WCAP_DIGITAL))
+ continue;
+
+- switch (type) {
+- case AC_WID_AUD_OUT:
++ if (get_wcaps_type(caps) == AC_WID_AUD_OUT)
+ hdmi_add_cvt(codec, nid);
+- break;
+- case AC_WID_PIN:
++ }
++
++ /* discover audio pins */
++ for (i = 0; i < nodes; i++) {
++ hda_nid_t nid = start_nid + i;
++
++ caps = get_wcaps(codec, nid);
++
++ if (!(caps & AC_WCAP_DIGITAL))
++ continue;
++
++ if (get_wcaps_type(caps) == AC_WID_PIN)
+ hdmi_add_pin(codec, nid);
+- break;
+- }
+ }
+
+ return 0;
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index d80eed2a48a1..27dd8945d6e6 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -7546,11 +7546,13 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x10cf, 0x1629, "Lifebook U7x7", ALC255_FIXUP_LIFEBOOK_U7x7_HEADSET_MIC),
+ SND_PCI_QUIRK(0x10cf, 0x1845, "Lifebook U904", ALC269_FIXUP_LIFEBOOK_EXTMIC),
+ SND_PCI_QUIRK(0x10ec, 0x10f2, "Intel Reference board", ALC700_FIXUP_INTEL_REFERENCE),
++ SND_PCI_QUIRK(0x10ec, 0x1230, "Intel Reference board", ALC225_FIXUP_HEADSET_JACK),
+ SND_PCI_QUIRK(0x10f7, 0x8338, "Panasonic CF-SZ6", ALC269_FIXUP_HEADSET_MODE),
+ SND_PCI_QUIRK(0x144d, 0xc109, "Samsung Ativ book 9 (NP900X3G)", ALC269_FIXUP_INV_DMIC),
+ SND_PCI_QUIRK(0x144d, 0xc169, "Samsung Notebook 9 Pen (NP930SBE-K01US)", ALC298_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET),
+ SND_PCI_QUIRK(0x144d, 0xc176, "Samsung Notebook 9 Pro (NP930MBE-K04US)", ALC298_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET),
+ SND_PCI_QUIRK(0x144d, 0xc740, "Samsung Ativ book 8 (NP870Z5G)", ALC269_FIXUP_ATIV_BOOK_8),
++ SND_PCI_QUIRK(0x144d, 0xc812, "Samsung Notebook Pen S (NT950SBE-X58)", ALC298_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET),
+ SND_PCI_QUIRK(0x1458, 0xfa53, "Gigabyte BXBT-2807", ALC283_FIXUP_HEADSET_MIC),
+ SND_PCI_QUIRK(0x1462, 0xb120, "MSI Cubi MS-B120", ALC283_FIXUP_HEADSET_MIC),
+ SND_PCI_QUIRK(0x1462, 0xb171, "Cubi N 8GL (MS-B171)", ALC283_FIXUP_HEADSET_MIC),
+diff --git a/sound/soc/codecs/rt5670.c b/sound/soc/codecs/rt5670.c
+index 70fee6849ab0..f21181734170 100644
+--- a/sound/soc/codecs/rt5670.c
++++ b/sound/soc/codecs/rt5670.c
+@@ -31,18 +31,19 @@
+ #include "rt5670.h"
+ #include "rt5670-dsp.h"
+
+-#define RT5670_DEV_GPIO BIT(0)
+-#define RT5670_IN2_DIFF BIT(1)
+-#define RT5670_DMIC_EN BIT(2)
+-#define RT5670_DMIC1_IN2P BIT(3)
+-#define RT5670_DMIC1_GPIO6 BIT(4)
+-#define RT5670_DMIC1_GPIO7 BIT(5)
+-#define RT5670_DMIC2_INR BIT(6)
+-#define RT5670_DMIC2_GPIO8 BIT(7)
+-#define RT5670_DMIC3_GPIO5 BIT(8)
+-#define RT5670_JD_MODE1 BIT(9)
+-#define RT5670_JD_MODE2 BIT(10)
+-#define RT5670_JD_MODE3 BIT(11)
++#define RT5670_DEV_GPIO BIT(0)
++#define RT5670_IN2_DIFF BIT(1)
++#define RT5670_DMIC_EN BIT(2)
++#define RT5670_DMIC1_IN2P BIT(3)
++#define RT5670_DMIC1_GPIO6 BIT(4)
++#define RT5670_DMIC1_GPIO7 BIT(5)
++#define RT5670_DMIC2_INR BIT(6)
++#define RT5670_DMIC2_GPIO8 BIT(7)
++#define RT5670_DMIC3_GPIO5 BIT(8)
++#define RT5670_JD_MODE1 BIT(9)
++#define RT5670_JD_MODE2 BIT(10)
++#define RT5670_JD_MODE3 BIT(11)
++#define RT5670_GPIO1_IS_EXT_SPK_EN BIT(12)
+
+ static unsigned long rt5670_quirk;
+ static unsigned int quirk_override;
+@@ -1447,6 +1448,33 @@ static int rt5670_hp_event(struct snd_soc_dapm_widget *w,
+ return 0;
+ }
+
++static int rt5670_spk_event(struct snd_soc_dapm_widget *w,
++ struct snd_kcontrol *kcontrol, int event)
++{
++ struct snd_soc_component *component = snd_soc_dapm_to_component(w->dapm);
++ struct rt5670_priv *rt5670 = snd_soc_component_get_drvdata(component);
++
++ if (!rt5670->pdata.gpio1_is_ext_spk_en)
++ return 0;
++
++ switch (event) {
++ case SND_SOC_DAPM_POST_PMU:
++ regmap_update_bits(rt5670->regmap, RT5670_GPIO_CTRL2,
++ RT5670_GP1_OUT_MASK, RT5670_GP1_OUT_HI);
++ break;
++
++ case SND_SOC_DAPM_PRE_PMD:
++ regmap_update_bits(rt5670->regmap, RT5670_GPIO_CTRL2,
++ RT5670_GP1_OUT_MASK, RT5670_GP1_OUT_LO);
++ break;
++
++ default:
++ return 0;
++ }
++
++ return 0;
++}
++
+ static int rt5670_bst1_event(struct snd_soc_dapm_widget *w,
+ struct snd_kcontrol *kcontrol, int event)
+ {
+@@ -1860,7 +1888,9 @@ static const struct snd_soc_dapm_widget rt5670_specific_dapm_widgets[] = {
+ };
+
+ static const struct snd_soc_dapm_widget rt5672_specific_dapm_widgets[] = {
+- SND_SOC_DAPM_PGA("SPO Amp", SND_SOC_NOPM, 0, 0, NULL, 0),
++ SND_SOC_DAPM_PGA_E("SPO Amp", SND_SOC_NOPM, 0, 0, NULL, 0,
++ rt5670_spk_event, SND_SOC_DAPM_PRE_PMD |
++ SND_SOC_DAPM_POST_PMU),
+ SND_SOC_DAPM_OUTPUT("SPOLP"),
+ SND_SOC_DAPM_OUTPUT("SPOLN"),
+ SND_SOC_DAPM_OUTPUT("SPORP"),
+@@ -2857,14 +2887,14 @@ static const struct dmi_system_id dmi_platform_intel_quirks[] = {
+ },
+ {
+ .callback = rt5670_quirk_cb,
+- .ident = "Lenovo Thinkpad Tablet 10",
++ .ident = "Lenovo Miix 2 10",
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
+ DMI_MATCH(DMI_PRODUCT_VERSION, "Lenovo Miix 2 10"),
+ },
+ .driver_data = (unsigned long *)(RT5670_DMIC_EN |
+ RT5670_DMIC1_IN2P |
+- RT5670_DEV_GPIO |
++ RT5670_GPIO1_IS_EXT_SPK_EN |
+ RT5670_JD_MODE2),
+ },
+ {
+@@ -2924,6 +2954,10 @@ static int rt5670_i2c_probe(struct i2c_client *i2c,
+ rt5670->pdata.dev_gpio = true;
+ dev_info(&i2c->dev, "quirk dev_gpio\n");
+ }
++ if (rt5670_quirk & RT5670_GPIO1_IS_EXT_SPK_EN) {
++ rt5670->pdata.gpio1_is_ext_spk_en = true;
++ dev_info(&i2c->dev, "quirk GPIO1 is external speaker enable\n");
++ }
+ if (rt5670_quirk & RT5670_IN2_DIFF) {
+ rt5670->pdata.in2_diff = true;
+ dev_info(&i2c->dev, "quirk IN2_DIFF\n");
+@@ -3023,6 +3057,13 @@ static int rt5670_i2c_probe(struct i2c_client *i2c,
+ RT5670_GP1_PF_MASK, RT5670_GP1_PF_OUT);
+ }
+
++ if (rt5670->pdata.gpio1_is_ext_spk_en) {
++ regmap_update_bits(rt5670->regmap, RT5670_GPIO_CTRL1,
++ RT5670_GP1_PIN_MASK, RT5670_GP1_PIN_GPIO1);
++ regmap_update_bits(rt5670->regmap, RT5670_GPIO_CTRL2,
++ RT5670_GP1_PF_MASK, RT5670_GP1_PF_OUT);
++ }
++
+ if (rt5670->pdata.jd_mode) {
+ regmap_update_bits(rt5670->regmap, RT5670_GLB_CLK,
+ RT5670_SCLK_SRC_MASK, RT5670_SCLK_SRC_RCCLK);
+diff --git a/sound/soc/codecs/rt5670.h b/sound/soc/codecs/rt5670.h
+index a8c3e44770b8..de0203369b7c 100644
+--- a/sound/soc/codecs/rt5670.h
++++ b/sound/soc/codecs/rt5670.h
+@@ -757,7 +757,7 @@
+ #define RT5670_PWR_VREF2_BIT 4
+ #define RT5670_PWR_FV2 (0x1 << 3)
+ #define RT5670_PWR_FV2_BIT 3
+-#define RT5670_LDO_SEL_MASK (0x3)
++#define RT5670_LDO_SEL_MASK (0x7)
+ #define RT5670_LDO_SEL_SFT 0
+
+ /* Power Management for Analog 2 (0x64) */
+diff --git a/sound/soc/intel/boards/bdw-rt5677.c b/sound/soc/intel/boards/bdw-rt5677.c
+index cc41a348295e..fa12a85f535a 100644
+--- a/sound/soc/intel/boards/bdw-rt5677.c
++++ b/sound/soc/intel/boards/bdw-rt5677.c
+@@ -328,6 +328,7 @@ static struct snd_soc_dai_link bdw_rt5677_dais[] = {
+ {
+ .name = "Codec DSP",
+ .stream_name = "Wake on Voice",
++ .capture_only = 1,
+ .ops = &bdw_rt5677_dsp_ops,
+ SND_SOC_DAILINK_REG(dsp),
+ },
+diff --git a/sound/soc/intel/boards/bytcht_es8316.c b/sound/soc/intel/boards/bytcht_es8316.c
+index ddcd070100ef..b3fd7de594d7 100644
+--- a/sound/soc/intel/boards/bytcht_es8316.c
++++ b/sound/soc/intel/boards/bytcht_es8316.c
+@@ -543,8 +543,10 @@ static int snd_byt_cht_es8316_mc_probe(struct platform_device *pdev)
+
+ if (cnt) {
+ ret = device_add_properties(codec_dev, props);
+- if (ret)
++ if (ret) {
++ put_device(codec_dev);
+ return ret;
++ }
+ }
+
+ devm_acpi_dev_add_driver_gpios(codec_dev, byt_cht_es8316_gpios);
+diff --git a/sound/soc/intel/boards/cht_bsw_rt5672.c b/sound/soc/intel/boards/cht_bsw_rt5672.c
+index 097023a3ec14..a3aa2c1f7097 100644
+--- a/sound/soc/intel/boards/cht_bsw_rt5672.c
++++ b/sound/soc/intel/boards/cht_bsw_rt5672.c
+@@ -253,21 +253,20 @@ static int cht_codec_fixup(struct snd_soc_pcm_runtime *rtd,
+ params_set_format(params, SNDRV_PCM_FORMAT_S24_LE);
+
+ /*
+- * Default mode for SSP configuration is TDM 4 slot
++ * Default mode for SSP configuration is TDM 4 slot. One board/design,
++ * the Lenovo Miix 2 10 uses not 1 but 2 codecs connected to SSP2. The
++ * second piggy-backed, output-only codec is inside the keyboard-dock
++ * (which has extra speakers). Unlike the main rt5672 codec, we cannot
++ * configure this codec, it is hard coded to use 2 channel 24 bit I2S.
++ * Since we only support 2 channels anyways, there is no need for TDM
++ * on any cht-bsw-rt5672 designs. So we simply use I2S 2ch everywhere.
+ */
+- ret = snd_soc_dai_set_fmt(asoc_rtd_to_codec(rtd, 0),
+- SND_SOC_DAIFMT_DSP_B |
+- SND_SOC_DAIFMT_IB_NF |
++ ret = snd_soc_dai_set_fmt(asoc_rtd_to_cpu(rtd, 0),
++ SND_SOC_DAIFMT_I2S |
++ SND_SOC_DAIFMT_NB_NF |
+ SND_SOC_DAIFMT_CBS_CFS);
+ if (ret < 0) {
+- dev_err(rtd->dev, "can't set format to TDM %d\n", ret);
+- return ret;
+- }
+-
+- /* TDM 4 slots 24 bit, set Rx & Tx bitmask to 4 active slots */
+- ret = snd_soc_dai_set_tdm_slot(asoc_rtd_to_codec(rtd, 0), 0xF, 0xF, 4, 24);
+- if (ret < 0) {
+- dev_err(rtd->dev, "can't set codec TDM slot %d\n", ret);
++ dev_err(rtd->dev, "can't set format to I2S, err %d\n", ret);
+ return ret;
+ }
+
+diff --git a/sound/soc/qcom/Kconfig b/sound/soc/qcom/Kconfig
+index f51b28d1b94d..92f51d0e9fe2 100644
+--- a/sound/soc/qcom/Kconfig
++++ b/sound/soc/qcom/Kconfig
+@@ -72,7 +72,7 @@ config SND_SOC_QDSP6_ASM_DAI
+
+ config SND_SOC_QDSP6
+ tristate "SoC ALSA audio driver for QDSP6"
+- depends on QCOM_APR && HAS_DMA
++ depends on QCOM_APR
+ select SND_SOC_QDSP6_COMMON
+ select SND_SOC_QDSP6_CORE
+ select SND_SOC_QDSP6_AFE
+diff --git a/sound/soc/soc-topology.c b/sound/soc/soc-topology.c
+index 6df3b0d12d87..31250a14c21d 100644
+--- a/sound/soc/soc-topology.c
++++ b/sound/soc/soc-topology.c
+@@ -1285,17 +1285,29 @@ static int soc_tplg_dapm_graph_elems_load(struct soc_tplg *tplg,
+ list_add(&routes[i]->dobj.list, &tplg->comp->dobj_list);
+
+ ret = soc_tplg_add_route(tplg, routes[i]);
+- if (ret < 0)
++ if (ret < 0) {
++ /*
++ * this route was added to the list, it will
++ * be freed in remove_route() so increment the
++ * counter to skip it in the error handling
++ * below.
++ */
++ i++;
+ break;
++ }
+
+ /* add route, but keep going if some fail */
+ snd_soc_dapm_add_routes(dapm, routes[i], 1);
+ }
+
+- /* free memory allocated for all dapm routes in case of error */
+- if (ret < 0)
+- for (i = 0; i < count ; i++)
+- kfree(routes[i]);
++ /*
++ * free memory allocated for all dapm routes not added to the
++ * list in case of error
++ */
++ if (ret < 0) {
++ while (i < count)
++ kfree(routes[i++]);
++ }
+
+ /*
+ * free pointer to array of dapm routes as this is no longer needed.
+@@ -1383,7 +1395,6 @@ static struct snd_kcontrol_new *soc_tplg_dapm_widget_dmixer_create(
+ if (err < 0) {
+ dev_err(tplg->dev, "ASoC: failed to init %s\n",
+ mc->hdr.name);
+- soc_tplg_free_tlv(tplg, &kc[i]);
+ goto err_sm;
+ }
+ }
+@@ -1391,6 +1402,7 @@ static struct snd_kcontrol_new *soc_tplg_dapm_widget_dmixer_create(
+
+ err_sm:
+ for (; i >= 0; i--) {
++ soc_tplg_free_tlv(tplg, &kc[i]);
+ sm = (struct soc_mixer_control *)kc[i].private_value;
+ kfree(sm);
+ kfree(kc[i].name);
+diff --git a/tools/perf/pmu-events/arch/s390/cf_z15/extended.json b/tools/perf/pmu-events/arch/s390/cf_z15/extended.json
+index 2df2e231e9ee..24c4ba2a9ae5 100644
+--- a/tools/perf/pmu-events/arch/s390/cf_z15/extended.json
++++ b/tools/perf/pmu-events/arch/s390/cf_z15/extended.json
+@@ -380,7 +380,7 @@
+ {
+ "Unit": "CPU-M-CF",
+ "EventCode": "265",
+- "EventName": "DFLT_CCERROR",
++ "EventName": "DFLT_CCFINISH",
+ "BriefDescription": "Increments by one for every DEFLATE CONVERSION CALL instruction executed that ended in Condition Codes 0, 1 or 2",
+ "PublicDescription": "Increments by one for every DEFLATE CONVERSION CALL instruction executed that ended in Condition Codes 0, 1 or 2"
+ },
* [gentoo-commits] proj/linux-patches:5.7 commit in: /
@ 2020-07-31 18:07 Mike Pagano
From: Mike Pagano @ 2020-07-31 18:07 UTC
To: gentoo-commits
commit: 3019bd9ccad7fa58df721cc1831b0444e4fb1d3b
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Jul 31 18:07:39 2020 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Jul 31 18:07:39 2020 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=3019bd9c
Linux patch 5.7.12
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1011_linux-5.7.12.patch | 784 ++++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 788 insertions(+)
diff --git a/0000_README b/0000_README
index 6409a51..21eff3a 100644
--- a/0000_README
+++ b/0000_README
@@ -87,6 +87,10 @@ Patch: 1010_linux-5.7.11.patch
From: http://www.kernel.org
Desc: Linux 5.7.11
+Patch: 1011_linux-5.7.12.patch
+From: http://www.kernel.org
+Desc: Linux 5.7.12
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1011_linux-5.7.12.patch b/1011_linux-5.7.12.patch
new file mode 100644
index 0000000..bd95a59
--- /dev/null
+++ b/1011_linux-5.7.12.patch
@@ -0,0 +1,784 @@
+diff --git a/Makefile b/Makefile
+index 12777a95833f..401d58b35e61 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 7
+-SUBLEVEL = 11
++SUBLEVEL = 12
+ EXTRAVERSION =
+ NAME = Kleptomaniac Octopus
+
+diff --git a/drivers/base/regmap/regmap-debugfs.c b/drivers/base/regmap/regmap-debugfs.c
+index e16afa27700d..f58baff2be0a 100644
+--- a/drivers/base/regmap/regmap-debugfs.c
++++ b/drivers/base/regmap/regmap-debugfs.c
+@@ -227,6 +227,9 @@ static ssize_t regmap_read_debugfs(struct regmap *map, unsigned int from,
+ if (*ppos < 0 || !count)
+ return -EINVAL;
+
++ if (count > (PAGE_SIZE << (MAX_ORDER - 1)))
++ count = PAGE_SIZE << (MAX_ORDER - 1);
++
+ buf = kmalloc(count, GFP_KERNEL);
+ if (!buf)
+ return -ENOMEM;
+@@ -371,6 +374,9 @@ static ssize_t regmap_reg_ranges_read_file(struct file *file,
+ if (*ppos < 0 || !count)
+ return -EINVAL;
+
++ if (count > (PAGE_SIZE << (MAX_ORDER - 1)))
++ count = PAGE_SIZE << (MAX_ORDER - 1);
++
+ buf = kmalloc(count, GFP_KERNEL);
+ if (!buf)
+ return -ENOMEM;
+diff --git a/drivers/net/wan/x25_asy.c b/drivers/net/wan/x25_asy.c
+index 69773d228ec1..84640a0c13f3 100644
+--- a/drivers/net/wan/x25_asy.c
++++ b/drivers/net/wan/x25_asy.c
+@@ -183,7 +183,7 @@ static inline void x25_asy_unlock(struct x25_asy *sl)
+ netif_wake_queue(sl->dev);
+ }
+
+-/* Send one completely decapsulated IP datagram to the IP layer. */
++/* Send an LAPB frame to the LAPB module to process. */
+
+ static void x25_asy_bump(struct x25_asy *sl)
+ {
+@@ -195,13 +195,12 @@ static void x25_asy_bump(struct x25_asy *sl)
+ count = sl->rcount;
+ dev->stats.rx_bytes += count;
+
+- skb = dev_alloc_skb(count+1);
++ skb = dev_alloc_skb(count);
+ if (skb == NULL) {
+ netdev_warn(sl->dev, "memory squeeze, dropping packet\n");
+ dev->stats.rx_dropped++;
+ return;
+ }
+- skb_push(skb, 1); /* LAPB internal control */
+ skb_put_data(skb, sl->rbuff, count);
+ skb->protocol = x25_type_trans(skb, sl->dev);
+ err = lapb_data_received(skb->dev, skb);
+@@ -209,7 +208,6 @@ static void x25_asy_bump(struct x25_asy *sl)
+ kfree_skb(skb);
+ printk(KERN_DEBUG "x25_asy: data received err - %d\n", err);
+ } else {
+- netif_rx(skb);
+ dev->stats.rx_packets++;
+ }
+ }
+@@ -356,12 +354,21 @@ static netdev_tx_t x25_asy_xmit(struct sk_buff *skb,
+ */
+
+ /*
+- * Called when I frame data arrives. We did the work above - throw it
+- * at the net layer.
++ * Called when I frame data arrive. We add a pseudo header for upper
++ * layers and pass it to upper layers.
+ */
+
+ static int x25_asy_data_indication(struct net_device *dev, struct sk_buff *skb)
+ {
++ if (skb_cow(skb, 1)) {
++ kfree_skb(skb);
++ return NET_RX_DROP;
++ }
++ skb_push(skb, 1);
++ skb->data[0] = X25_IFACE_DATA;
++
++ skb->protocol = x25_type_trans(skb, dev);
++
+ return netif_rx(skb);
+ }
+
+@@ -657,7 +664,7 @@ static void x25_asy_unesc(struct x25_asy *sl, unsigned char s)
+ switch (s) {
+ case X25_END:
+ if (!test_and_clear_bit(SLF_ERROR, &sl->flags) &&
+- sl->rcount > 2)
++ sl->rcount >= 2)
+ x25_asy_bump(sl);
+ clear_bit(SLF_ESCAPE, &sl->flags);
+ sl->rcount = 0;
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+index 51be3a20ade1..d0d3efaaa4d4 100644
+--- a/fs/io_uring.c
++++ b/fs/io_uring.c
+@@ -581,6 +581,7 @@ enum {
+
+ struct async_poll {
+ struct io_poll_iocb poll;
++ struct io_poll_iocb *double_poll;
+ struct io_wq_work work;
+ };
+
+@@ -4220,9 +4221,9 @@ static bool io_poll_rewait(struct io_kiocb *req, struct io_poll_iocb *poll)
+ return false;
+ }
+
+-static void io_poll_remove_double(struct io_kiocb *req)
++static void io_poll_remove_double(struct io_kiocb *req, void *data)
+ {
+- struct io_poll_iocb *poll = (struct io_poll_iocb *) req->io;
++ struct io_poll_iocb *poll = data;
+
+ lockdep_assert_held(&req->ctx->completion_lock);
+
+@@ -4242,7 +4243,7 @@ static void io_poll_complete(struct io_kiocb *req, __poll_t mask, int error)
+ {
+ struct io_ring_ctx *ctx = req->ctx;
+
+- io_poll_remove_double(req);
++ io_poll_remove_double(req, req->io);
+ req->poll.done = true;
+ io_cqring_fill_event(req, error ? error : mangle_poll(mask));
+ io_commit_cqring(ctx);
+@@ -4285,21 +4286,21 @@ static int io_poll_double_wake(struct wait_queue_entry *wait, unsigned mode,
+ int sync, void *key)
+ {
+ struct io_kiocb *req = wait->private;
+- struct io_poll_iocb *poll = (struct io_poll_iocb *) req->io;
++ struct io_poll_iocb *poll = req->apoll->double_poll;
+ __poll_t mask = key_to_poll(key);
+
+ /* for instances that support it check for an event match first: */
+ if (mask && !(mask & poll->events))
+ return 0;
+
+- if (req->poll.head) {
++ if (poll && poll->head) {
+ bool done;
+
+- spin_lock(&req->poll.head->lock);
+- done = list_empty(&req->poll.wait.entry);
++ spin_lock(&poll->head->lock);
++ done = list_empty(&poll->wait.entry);
+ if (!done)
+- list_del_init(&req->poll.wait.entry);
+- spin_unlock(&req->poll.head->lock);
++ list_del_init(&poll->wait.entry);
++ spin_unlock(&poll->head->lock);
+ if (!done)
+ __io_async_wake(req, poll, mask, io_poll_task_func);
+ }
+@@ -4319,7 +4320,8 @@ static void io_init_poll_iocb(struct io_poll_iocb *poll, __poll_t events,
+ }
+
+ static void __io_queue_proc(struct io_poll_iocb *poll, struct io_poll_table *pt,
+- struct wait_queue_head *head)
++ struct wait_queue_head *head,
++ struct io_poll_iocb **poll_ptr)
+ {
+ struct io_kiocb *req = pt->req;
+
+@@ -4330,7 +4332,7 @@ static void __io_queue_proc(struct io_poll_iocb *poll, struct io_poll_table *pt,
+ */
+ if (unlikely(poll->head)) {
+ /* already have a 2nd entry, fail a third attempt */
+- if (req->io) {
++ if (*poll_ptr) {
+ pt->error = -EINVAL;
+ return;
+ }
+@@ -4342,7 +4344,7 @@ static void __io_queue_proc(struct io_poll_iocb *poll, struct io_poll_table *pt,
+ io_init_poll_iocb(poll, req->poll.events, io_poll_double_wake);
+ refcount_inc(&req->refs);
+ poll->wait.private = req;
+- req->io = (void *) poll;
++ *poll_ptr = poll;
+ }
+
+ pt->error = 0;
+@@ -4354,8 +4356,9 @@ static void io_async_queue_proc(struct file *file, struct wait_queue_head *head,
+ struct poll_table_struct *p)
+ {
+ struct io_poll_table *pt = container_of(p, struct io_poll_table, pt);
++ struct async_poll *apoll = pt->req->apoll;
+
+- __io_queue_proc(&pt->req->apoll->poll, pt, head);
++ __io_queue_proc(&apoll->poll, pt, head, &apoll->double_poll);
+ }
+
+ static void io_sq_thread_drop_mm(struct io_ring_ctx *ctx)
+@@ -4409,6 +4412,7 @@ static void io_async_task_func(struct callback_head *cb)
+ memcpy(&req->work, &apoll->work, sizeof(req->work));
+
+ if (canceled) {
++ kfree(apoll->double_poll);
+ kfree(apoll);
+ io_cqring_ev_posted(ctx);
+ end_req:
+@@ -4426,6 +4430,7 @@ end_req:
+ __io_queue_sqe(req, NULL);
+ mutex_unlock(&ctx->uring_lock);
+
++ kfree(apoll->double_poll);
+ kfree(apoll);
+ }
+
+@@ -4497,7 +4502,6 @@ static bool io_arm_poll_handler(struct io_kiocb *req)
+ struct async_poll *apoll;
+ struct io_poll_table ipt;
+ __poll_t mask, ret;
+- bool had_io;
+
+ if (!req->file || !file_can_poll(req->file))
+ return false;
+@@ -4509,10 +4513,10 @@ static bool io_arm_poll_handler(struct io_kiocb *req)
+ apoll = kmalloc(sizeof(*apoll), GFP_ATOMIC);
+ if (unlikely(!apoll))
+ return false;
++ apoll->double_poll = NULL;
+
+ req->flags |= REQ_F_POLLED;
+ memcpy(&apoll->work, &req->work, sizeof(req->work));
+- had_io = req->io != NULL;
+
+ get_task_struct(current);
+ req->task = current;
+@@ -4531,12 +4535,10 @@ static bool io_arm_poll_handler(struct io_kiocb *req)
+ ret = __io_arm_poll_handler(req, &apoll->poll, &ipt, mask,
+ io_async_wake);
+ if (ret) {
+- ipt.error = 0;
+- /* only remove double add if we did it here */
+- if (!had_io)
+- io_poll_remove_double(req);
++ io_poll_remove_double(req, apoll->double_poll);
+ spin_unlock_irq(&ctx->completion_lock);
+ memcpy(&req->work, &apoll->work, sizeof(req->work));
++ kfree(apoll->double_poll);
+ kfree(apoll);
+ return false;
+ }
+@@ -4567,11 +4569,13 @@ static bool io_poll_remove_one(struct io_kiocb *req)
+ bool do_complete;
+
+ if (req->opcode == IORING_OP_POLL_ADD) {
+- io_poll_remove_double(req);
++ io_poll_remove_double(req, req->io);
+ do_complete = __io_poll_remove_one(req, &req->poll);
+ } else {
+ struct async_poll *apoll = req->apoll;
+
++ io_poll_remove_double(req, apoll->double_poll);
++
+ /* non-poll requests have submit ref still */
+ do_complete = __io_poll_remove_one(req, &apoll->poll);
+ if (do_complete) {
+@@ -4582,6 +4586,7 @@ static bool io_poll_remove_one(struct io_kiocb *req)
+ * final reference.
+ */
+ memcpy(&req->work, &apoll->work, sizeof(req->work));
++ kfree(apoll->double_poll);
+ kfree(apoll);
+ }
+ }
+@@ -4682,7 +4687,7 @@ static void io_poll_queue_proc(struct file *file, struct wait_queue_head *head,
+ {
+ struct io_poll_table *pt = container_of(p, struct io_poll_table, pt);
+
+- __io_queue_proc(&pt->req->poll, pt, head);
++ __io_queue_proc(&pt->req->poll, pt, head, (struct io_poll_iocb **) &pt->req->io);
+ }
+
+ static int io_poll_add_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+diff --git a/include/linux/tcp.h b/include/linux/tcp.h
+index 4f8159e90ce1..0bba582e83ca 100644
+--- a/include/linux/tcp.h
++++ b/include/linux/tcp.h
+@@ -217,6 +217,8 @@ struct tcp_sock {
+ } rack;
+ u16 advmss; /* Advertised MSS */
+ u8 compressed_ack;
++ u8 tlp_retrans:1, /* TLP is a retransmission */
++ unused:7;
+ u32 chrono_start; /* Start time in jiffies of a TCP chrono */
+ u32 chrono_stat[3]; /* Time in jiffies for chrono_stat stats */
+ u8 chrono_type:2, /* current chronograph type */
+@@ -239,7 +241,7 @@ struct tcp_sock {
+ save_syn:1, /* Save headers of SYN packet */
+ is_cwnd_limited:1,/* forward progress limited by snd_cwnd? */
+ syn_smc:1; /* SYN includes SMC */
+- u32 tlp_high_seq; /* snd_nxt at the time of TLP retransmit. */
++ u32 tlp_high_seq; /* snd_nxt at the time of TLP */
+
+ u32 tcp_tx_delay; /* delay (in usec) added to TX packets */
+ u64 tcp_wstamp_ns; /* departure time for next sent data packet */
+diff --git a/net/ax25/af_ax25.c b/net/ax25/af_ax25.c
+index fd91cd34f25e..dec3f35467c9 100644
+--- a/net/ax25/af_ax25.c
++++ b/net/ax25/af_ax25.c
+@@ -1187,7 +1187,10 @@ static int __must_check ax25_connect(struct socket *sock,
+ if (addr_len > sizeof(struct sockaddr_ax25) &&
+ fsa->fsa_ax25.sax25_ndigis != 0) {
+ /* Valid number of digipeaters ? */
+- if (fsa->fsa_ax25.sax25_ndigis < 1 || fsa->fsa_ax25.sax25_ndigis > AX25_MAX_DIGIS) {
++ if (fsa->fsa_ax25.sax25_ndigis < 1 ||
++ fsa->fsa_ax25.sax25_ndigis > AX25_MAX_DIGIS ||
++ addr_len < sizeof(struct sockaddr_ax25) +
++ sizeof(ax25_address) * fsa->fsa_ax25.sax25_ndigis) {
+ err = -EINVAL;
+ goto out_release;
+ }
+@@ -1507,7 +1510,10 @@ static int ax25_sendmsg(struct socket *sock, struct msghdr *msg, size_t len)
+ struct full_sockaddr_ax25 *fsa = (struct full_sockaddr_ax25 *)usax;
+
+ /* Valid number of digipeaters ? */
+- if (usax->sax25_ndigis < 1 || usax->sax25_ndigis > AX25_MAX_DIGIS) {
++ if (usax->sax25_ndigis < 1 ||
++ usax->sax25_ndigis > AX25_MAX_DIGIS ||
++ addr_len < sizeof(struct sockaddr_ax25) +
++ sizeof(ax25_address) * usax->sax25_ndigis) {
+ err = -EINVAL;
+ goto out;
+ }
+diff --git a/net/core/dev.c b/net/core/dev.c
+index c9ee5d80d5ea..c1c2688a955c 100644
+--- a/net/core/dev.c
++++ b/net/core/dev.c
+@@ -5504,7 +5504,7 @@ static void flush_backlog(struct work_struct *work)
+ skb_queue_walk_safe(&sd->input_pkt_queue, skb, tmp) {
+ if (skb->dev->reg_state == NETREG_UNREGISTERING) {
+ __skb_unlink(skb, &sd->input_pkt_queue);
+- kfree_skb(skb);
++ dev_kfree_skb_irq(skb);
+ input_queue_head_incr(sd);
+ }
+ }
+diff --git a/net/core/net-sysfs.c b/net/core/net-sysfs.c
+index 4773ad6ec111..f67f5ca39d63 100644
+--- a/net/core/net-sysfs.c
++++ b/net/core/net-sysfs.c
+@@ -1077,7 +1077,7 @@ static ssize_t tx_timeout_show(struct netdev_queue *queue, char *buf)
+ trans_timeout = queue->trans_timeout;
+ spin_unlock_irq(&queue->_xmit_lock);
+
+- return sprintf(buf, "%lu", trans_timeout);
++ return sprintf(buf, fmt_ulong, trans_timeout);
+ }
+
+ static unsigned int get_netdev_queue_index(struct netdev_queue *queue)
+diff --git a/net/core/rtnetlink.c b/net/core/rtnetlink.c
+index 709ebbf8ab5b..78345e39e54a 100644
+--- a/net/core/rtnetlink.c
++++ b/net/core/rtnetlink.c
+@@ -3337,7 +3337,8 @@ replay:
+ */
+ if (err < 0) {
+ /* If device is not registered at all, free it now */
+- if (dev->reg_state == NETREG_UNINITIALIZED)
++ if (dev->reg_state == NETREG_UNINITIALIZED ||
++ dev->reg_state == NETREG_UNREGISTERED)
+ free_netdev(dev);
+ goto out;
+ }
+diff --git a/net/core/sock_reuseport.c b/net/core/sock_reuseport.c
+index adcb3aea576d..bbdd3c7b6cb5 100644
+--- a/net/core/sock_reuseport.c
++++ b/net/core/sock_reuseport.c
+@@ -101,6 +101,7 @@ static struct sock_reuseport *reuseport_grow(struct sock_reuseport *reuse)
+ more_reuse->prog = reuse->prog;
+ more_reuse->reuseport_id = reuse->reuseport_id;
+ more_reuse->bind_inany = reuse->bind_inany;
++ more_reuse->has_conns = reuse->has_conns;
+
+ memcpy(more_reuse->socks, reuse->socks,
+ reuse->num_socks * sizeof(struct sock *));
+diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
+index 31c58e00d25b..32ac66a8c657 100644
+--- a/net/ipv4/tcp_input.c
++++ b/net/ipv4/tcp_input.c
+@@ -3506,10 +3506,8 @@ static void tcp_replace_ts_recent(struct tcp_sock *tp, u32 seq)
+ }
+ }
+
+-/* This routine deals with acks during a TLP episode.
+- * We mark the end of a TLP episode on receiving TLP dupack or when
+- * ack is after tlp_high_seq.
+- * Ref: loss detection algorithm in draft-dukkipati-tcpm-tcp-loss-probe.
++/* This routine deals with acks during a TLP episode and ends an episode by
++ * resetting tlp_high_seq. Ref: TLP algorithm in draft-ietf-tcpm-rack
+ */
+ static void tcp_process_tlp_ack(struct sock *sk, u32 ack, int flag)
+ {
+@@ -3518,7 +3516,10 @@ static void tcp_process_tlp_ack(struct sock *sk, u32 ack, int flag)
+ if (before(ack, tp->tlp_high_seq))
+ return;
+
+- if (flag & FLAG_DSACKING_ACK) {
++ if (!tp->tlp_retrans) {
++ /* TLP of new data has been acknowledged */
++ tp->tlp_high_seq = 0;
++ } else if (flag & FLAG_DSACKING_ACK) {
+ /* This DSACK means original and TLP probe arrived; no loss */
+ tp->tlp_high_seq = 0;
+ } else if (after(ack, tp->tlp_high_seq)) {
+diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
+index bee2f9b8b8a1..b1c2484b4314 100644
+--- a/net/ipv4/tcp_output.c
++++ b/net/ipv4/tcp_output.c
+@@ -2625,6 +2625,11 @@ void tcp_send_loss_probe(struct sock *sk)
+ int pcount;
+ int mss = tcp_current_mss(sk);
+
++ /* At most one outstanding TLP */
++ if (tp->tlp_high_seq)
++ goto rearm_timer;
++
++ tp->tlp_retrans = 0;
+ skb = tcp_send_head(sk);
+ if (skb && tcp_snd_wnd_test(tp, skb, mss)) {
+ pcount = tp->packets_out;
+@@ -2642,10 +2647,6 @@ void tcp_send_loss_probe(struct sock *sk)
+ return;
+ }
+
+- /* At most one outstanding TLP retransmission. */
+- if (tp->tlp_high_seq)
+- goto rearm_timer;
+-
+ if (skb_still_in_host_queue(sk, skb))
+ goto rearm_timer;
+
+@@ -2667,10 +2668,12 @@ void tcp_send_loss_probe(struct sock *sk)
+ if (__tcp_retransmit_skb(sk, skb, 1))
+ goto rearm_timer;
+
++ tp->tlp_retrans = 1;
++
++probe_sent:
+ /* Record snd_nxt for loss detection. */
+ tp->tlp_high_seq = tp->snd_nxt;
+
+-probe_sent:
+ NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPLOSSPROBES);
+ /* Reset s.t. tcp_rearm_rto will restart timer from now */
+ inet_csk(sk)->icsk_pending = 0;
+diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
+index 32564b350823..6ffef9861fa9 100644
+--- a/net/ipv4/udp.c
++++ b/net/ipv4/udp.c
+@@ -413,7 +413,7 @@ static struct sock *udp4_lib_lookup2(struct net *net,
+ struct udp_hslot *hslot2,
+ struct sk_buff *skb)
+ {
+- struct sock *sk, *result;
++ struct sock *sk, *result, *reuseport_result;
+ int score, badness;
+ u32 hash = 0;
+
+@@ -423,17 +423,20 @@ static struct sock *udp4_lib_lookup2(struct net *net,
+ score = compute_score(sk, net, saddr, sport,
+ daddr, hnum, dif, sdif);
+ if (score > badness) {
++ reuseport_result = NULL;
++
+ if (sk->sk_reuseport &&
+ sk->sk_state != TCP_ESTABLISHED) {
+ hash = udp_ehashfn(net, daddr, hnum,
+ saddr, sport);
+- result = reuseport_select_sock(sk, hash, skb,
+- sizeof(struct udphdr));
+- if (result && !reuseport_has_conns(sk, false))
+- return result;
++ reuseport_result = reuseport_select_sock(sk, hash, skb,
++ sizeof(struct udphdr));
++ if (reuseport_result && !reuseport_has_conns(sk, false))
++ return reuseport_result;
+ }
++
++ result = reuseport_result ? : sk;
+ badness = score;
+- result = sk;
+ }
+ }
+ return result;
+@@ -2048,7 +2051,7 @@ static int udp_queue_rcv_one_skb(struct sock *sk, struct sk_buff *skb)
+ /*
+ * UDP-Lite specific tests, ignored on UDP sockets
+ */
+- if ((is_udplite & UDPLITE_RECV_CC) && UDP_SKB_CB(skb)->partial_cov) {
++ if ((up->pcflag & UDPLITE_RECV_CC) && UDP_SKB_CB(skb)->partial_cov) {
+
+ /*
+ * MIB statistics other than incrementing the error count are
+diff --git a/net/ipv6/ip6_gre.c b/net/ipv6/ip6_gre.c
+index 6532bde82b40..3a57fb9ce049 100644
+--- a/net/ipv6/ip6_gre.c
++++ b/net/ipv6/ip6_gre.c
+@@ -1562,17 +1562,18 @@ static void ip6gre_destroy_tunnels(struct net *net, struct list_head *head)
+ static int __net_init ip6gre_init_net(struct net *net)
+ {
+ struct ip6gre_net *ign = net_generic(net, ip6gre_net_id);
++ struct net_device *ndev;
+ int err;
+
+ if (!net_has_fallback_tunnels(net))
+ return 0;
+- ign->fb_tunnel_dev = alloc_netdev(sizeof(struct ip6_tnl), "ip6gre0",
+- NET_NAME_UNKNOWN,
+- ip6gre_tunnel_setup);
+- if (!ign->fb_tunnel_dev) {
++ ndev = alloc_netdev(sizeof(struct ip6_tnl), "ip6gre0",
++ NET_NAME_UNKNOWN, ip6gre_tunnel_setup);
++ if (!ndev) {
+ err = -ENOMEM;
+ goto err_alloc_dev;
+ }
++ ign->fb_tunnel_dev = ndev;
+ dev_net_set(ign->fb_tunnel_dev, net);
+ /* FB netdevice is special: we have one, and only one per netns.
+ * Allowing to move it to another netns is clearly unsafe.
+@@ -1592,7 +1593,7 @@ static int __net_init ip6gre_init_net(struct net *net)
+ return 0;
+
+ err_reg_dev:
+- free_netdev(ign->fb_tunnel_dev);
++ free_netdev(ndev);
+ err_alloc_dev:
+ return err;
+ }
+diff --git a/net/ipv6/udp.c b/net/ipv6/udp.c
+index 7d4151747340..a8d74f44056a 100644
+--- a/net/ipv6/udp.c
++++ b/net/ipv6/udp.c
+@@ -148,7 +148,7 @@ static struct sock *udp6_lib_lookup2(struct net *net,
+ int dif, int sdif, struct udp_hslot *hslot2,
+ struct sk_buff *skb)
+ {
+- struct sock *sk, *result;
++ struct sock *sk, *result, *reuseport_result;
+ int score, badness;
+ u32 hash = 0;
+
+@@ -158,17 +158,20 @@ static struct sock *udp6_lib_lookup2(struct net *net,
+ score = compute_score(sk, net, saddr, sport,
+ daddr, hnum, dif, sdif);
+ if (score > badness) {
++ reuseport_result = NULL;
++
+ if (sk->sk_reuseport &&
+ sk->sk_state != TCP_ESTABLISHED) {
+ hash = udp6_ehashfn(net, daddr, hnum,
+ saddr, sport);
+
+- result = reuseport_select_sock(sk, hash, skb,
+- sizeof(struct udphdr));
+- if (result && !reuseport_has_conns(sk, false))
+- return result;
++ reuseport_result = reuseport_select_sock(sk, hash, skb,
++ sizeof(struct udphdr));
++ if (reuseport_result && !reuseport_has_conns(sk, false))
++ return reuseport_result;
+ }
+- result = sk;
++
++ result = reuseport_result ? : sk;
+ badness = score;
+ }
+ }
+@@ -643,7 +646,7 @@ static int udpv6_queue_rcv_one_skb(struct sock *sk, struct sk_buff *skb)
+ /*
+ * UDP-Lite specific tests, ignored on UDP sockets (see net/ipv4/udp.c).
+ */
+- if ((is_udplite & UDPLITE_RECV_CC) && UDP_SKB_CB(skb)->partial_cov) {
++ if ((up->pcflag & UDPLITE_RECV_CC) && UDP_SKB_CB(skb)->partial_cov) {
+
+ if (up->pcrlen == 0) { /* full coverage was set */
+ net_dbg_ratelimited("UDPLITE6: partial coverage %d while full coverage %d requested\n",
+diff --git a/net/qrtr/qrtr.c b/net/qrtr/qrtr.c
+index 24a8c3c6da0d..300a104b9a0f 100644
+--- a/net/qrtr/qrtr.c
++++ b/net/qrtr/qrtr.c
+@@ -1180,6 +1180,7 @@ static int qrtr_release(struct socket *sock)
+ sk->sk_state_change(sk);
+
+ sock_set_flag(sk, SOCK_DEAD);
++ sock_orphan(sk);
+ sock->sk = NULL;
+
+ if (!sock_flag(sk, SOCK_ZAPPED))
+diff --git a/net/rxrpc/recvmsg.c b/net/rxrpc/recvmsg.c
+index 8578c39ec839..6896a33ef842 100644
+--- a/net/rxrpc/recvmsg.c
++++ b/net/rxrpc/recvmsg.c
+@@ -464,7 +464,7 @@ try_again:
+ list_empty(&rx->recvmsg_q) &&
+ rx->sk.sk_state != RXRPC_SERVER_LISTENING) {
+ release_sock(&rx->sk);
+- return -ENODATA;
++ return -EAGAIN;
+ }
+
+ if (list_empty(&rx->recvmsg_q)) {
+diff --git a/net/rxrpc/sendmsg.c b/net/rxrpc/sendmsg.c
+index 5e9c43d4a314..49d03c8c64da 100644
+--- a/net/rxrpc/sendmsg.c
++++ b/net/rxrpc/sendmsg.c
+@@ -306,7 +306,7 @@ static int rxrpc_send_data(struct rxrpc_sock *rx,
+ /* this should be in poll */
+ sk_clear_bit(SOCKWQ_ASYNC_NOSPACE, sk);
+
+- if (sk->sk_err || (sk->sk_shutdown & SEND_SHUTDOWN))
++ if (sk->sk_shutdown & SEND_SHUTDOWN)
+ return -EPIPE;
+
+ more = msg->msg_flags & MSG_MORE;
+diff --git a/net/sched/act_ct.c b/net/sched/act_ct.c
+index 6a114f80e54b..e191f2728389 100644
+--- a/net/sched/act_ct.c
++++ b/net/sched/act_ct.c
+@@ -671,9 +671,10 @@ static int tcf_ct_ipv6_is_fragment(struct sk_buff *skb, bool *frag)
+ }
+
+ static int tcf_ct_handle_fragments(struct net *net, struct sk_buff *skb,
+- u8 family, u16 zone)
++ u8 family, u16 zone, bool *defrag)
+ {
+ enum ip_conntrack_info ctinfo;
++ struct qdisc_skb_cb cb;
+ struct nf_conn *ct;
+ int err = 0;
+ bool frag;
+@@ -691,6 +692,7 @@ static int tcf_ct_handle_fragments(struct net *net, struct sk_buff *skb,
+ return err;
+
+ skb_get(skb);
++ cb = *qdisc_skb_cb(skb);
+
+ if (family == NFPROTO_IPV4) {
+ enum ip_defrag_users user = IP_DEFRAG_CONNTRACK_IN + zone;
+@@ -701,6 +703,9 @@ static int tcf_ct_handle_fragments(struct net *net, struct sk_buff *skb,
+ local_bh_enable();
+ if (err && err != -EINPROGRESS)
+ goto out_free;
++
++ if (!err)
++ *defrag = true;
+ } else { /* NFPROTO_IPV6 */
+ #if IS_ENABLED(CONFIG_NF_DEFRAG_IPV6)
+ enum ip6_defrag_users user = IP6_DEFRAG_CONNTRACK_IN + zone;
+@@ -709,12 +714,16 @@ static int tcf_ct_handle_fragments(struct net *net, struct sk_buff *skb,
+ err = nf_ct_frag6_gather(net, skb, user);
+ if (err && err != -EINPROGRESS)
+ goto out_free;
++
++ if (!err)
++ *defrag = true;
+ #else
+ err = -EOPNOTSUPP;
+ goto out_free;
+ #endif
+ }
+
++ *qdisc_skb_cb(skb) = cb;
+ skb_clear_hash(skb);
+ skb->ignore_df = 1;
+ return err;
+@@ -912,6 +921,7 @@ static int tcf_ct_act(struct sk_buff *skb, const struct tc_action *a,
+ int nh_ofs, err, retval;
+ struct tcf_ct_params *p;
+ bool skip_add = false;
++ bool defrag = false;
+ struct nf_conn *ct;
+ u8 family;
+
+@@ -942,7 +952,7 @@ static int tcf_ct_act(struct sk_buff *skb, const struct tc_action *a,
+ */
+ nh_ofs = skb_network_offset(skb);
+ skb_pull_rcsum(skb, nh_ofs);
+- err = tcf_ct_handle_fragments(net, skb, family, p->zone);
++ err = tcf_ct_handle_fragments(net, skb, family, p->zone, &defrag);
+ if (err == -EINPROGRESS) {
+ retval = TC_ACT_STOLEN;
+ goto out;
+@@ -1010,6 +1020,8 @@ out_push:
+
+ out:
+ tcf_action_update_bstats(&c->common, skb);
++ if (defrag)
++ qdisc_skb_cb(skb)->pkt_len = skb->len;
+ return retval;
+
+ drop:
+diff --git a/net/sctp/stream.c b/net/sctp/stream.c
+index 67f7e71f9129..bda2536dd740 100644
+--- a/net/sctp/stream.c
++++ b/net/sctp/stream.c
+@@ -22,17 +22,11 @@
+ #include <net/sctp/sm.h>
+ #include <net/sctp/stream_sched.h>
+
+-/* Migrates chunks from stream queues to new stream queues if needed,
+- * but not across associations. Also, removes those chunks to streams
+- * higher than the new max.
+- */
+-static void sctp_stream_outq_migrate(struct sctp_stream *stream,
+- struct sctp_stream *new, __u16 outcnt)
++static void sctp_stream_shrink_out(struct sctp_stream *stream, __u16 outcnt)
+ {
+ struct sctp_association *asoc;
+ struct sctp_chunk *ch, *temp;
+ struct sctp_outq *outq;
+- int i;
+
+ asoc = container_of(stream, struct sctp_association, stream);
+ outq = &asoc->outqueue;
+@@ -56,6 +50,19 @@ static void sctp_stream_outq_migrate(struct sctp_stream *stream,
+
+ sctp_chunk_free(ch);
+ }
++}
++
++/* Migrates chunks from stream queues to new stream queues if needed,
++ * but not across associations. Also, removes those chunks to streams
++ * higher than the new max.
++ */
++static void sctp_stream_outq_migrate(struct sctp_stream *stream,
++ struct sctp_stream *new, __u16 outcnt)
++{
++ int i;
++
++ if (stream->outcnt > outcnt)
++ sctp_stream_shrink_out(stream, outcnt);
+
+ if (new) {
+ /* Here we actually move the old ext stuff into the new
+@@ -1037,11 +1044,13 @@ struct sctp_chunk *sctp_process_strreset_resp(
+ nums = ntohs(addstrm->number_of_streams);
+ number = stream->outcnt - nums;
+
+- if (result == SCTP_STRRESET_PERFORMED)
++ if (result == SCTP_STRRESET_PERFORMED) {
+ for (i = number; i < stream->outcnt; i++)
+ SCTP_SO(stream, i)->state = SCTP_STREAM_OPEN;
+- else
++ } else {
++ sctp_stream_shrink_out(stream, number);
+ stream->outcnt = number;
++ }
+
+ *evp = sctp_ulpevent_make_stream_change_event(asoc, flags,
+ 0, nums, GFP_ATOMIC);
+diff --git a/net/tipc/link.c b/net/tipc/link.c
+index d4675e922a8f..e18369201a15 100644
+--- a/net/tipc/link.c
++++ b/net/tipc/link.c
+@@ -813,11 +813,11 @@ int tipc_link_timeout(struct tipc_link *l, struct sk_buff_head *xmitq)
+ state |= l->bc_rcvlink->rcv_unacked;
+ state |= l->rcv_unacked;
+ state |= !skb_queue_empty(&l->transmq);
+- state |= !skb_queue_empty(&l->deferdq);
+ probe = mstate->probing;
+ probe |= l->silent_intv_cnt;
+ if (probe || mstate->monitoring)
+ l->silent_intv_cnt++;
++ probe |= !skb_queue_empty(&l->deferdq);
+ if (l->snd_nxt == l->checkpoint) {
+ tipc_link_update_cwin(l, 0, 0);
+ probe = true;
* [gentoo-commits] proj/linux-patches:5.7 commit in: /
@ 2020-08-05 14:36 Thomas Deutschmann
0 siblings, 0 replies; 25+ messages in thread
From: Thomas Deutschmann @ 2020-08-05 14:36 UTC (permalink / raw
To: gentoo-commits
commit: 89d0e8ab4377428b936f4a21b215d245e94131a3
Author: Thomas Deutschmann <whissi <AT> gentoo <DOT> org>
AuthorDate: Wed Aug 5 14:33:53 2020 +0000
Commit: Thomas Deutschmann <whissi <AT> gentoo <DOT> org>
CommitDate: Wed Aug 5 14:35:38 2020 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=89d0e8ab
Linux patch 5.7.13
Signed-off-by: Thomas Deutschmann <whissi <AT> gentoo.org>
0000_README | 4 +
1012_linux-5.7.13.patch | 3752 +++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 3756 insertions(+)
diff --git a/0000_README b/0000_README
index 21eff3a..a388fef 100644
--- a/0000_README
+++ b/0000_README
@@ -91,6 +91,10 @@ Patch: 1011_linux-5.7.12.patch
From: http://www.kernel.org
Desc: Linux 5.7.12
+Patch: 1012_linux-5.7.13.patch
+From: http://www.kernel.org
+Desc: Linux 5.7.13
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1012_linux-5.7.13.patch b/1012_linux-5.7.13.patch
new file mode 100644
index 0000000..f28c06a
--- /dev/null
+++ b/1012_linux-5.7.13.patch
@@ -0,0 +1,3752 @@
+diff --git a/Makefile b/Makefile
+index 401d58b35e61..b77b4332a41a 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 7
+-SUBLEVEL = 12
++SUBLEVEL = 13
+ EXTRAVERSION =
+ NAME = Kleptomaniac Octopus
+
+diff --git a/arch/arm/boot/dts/armada-38x.dtsi b/arch/arm/boot/dts/armada-38x.dtsi
+index e038abc0c6b4..420ae26e846b 100644
+--- a/arch/arm/boot/dts/armada-38x.dtsi
++++ b/arch/arm/boot/dts/armada-38x.dtsi
+@@ -344,7 +344,8 @@
+
+ comphy: phy@18300 {
+ compatible = "marvell,armada-380-comphy";
+- reg = <0x18300 0x100>;
++ reg-names = "comphy", "conf";
++ reg = <0x18300 0x100>, <0x18460 4>;
+ #address-cells = <1>;
+ #size-cells = <0>;
+
+diff --git a/arch/arm/boot/dts/imx6qdl-icore.dtsi b/arch/arm/boot/dts/imx6qdl-icore.dtsi
+index 756f3a9f1b4f..12997dae35d9 100644
+--- a/arch/arm/boot/dts/imx6qdl-icore.dtsi
++++ b/arch/arm/boot/dts/imx6qdl-icore.dtsi
+@@ -397,7 +397,7 @@
+
+ pinctrl_usbotg: usbotggrp {
+ fsl,pins = <
+- MX6QDL_PAD_GPIO_1__USB_OTG_ID 0x17059
++ MX6QDL_PAD_ENET_RX_ER__USB_OTG_ID 0x17059
+ >;
+ };
+
+@@ -409,6 +409,7 @@
+ MX6QDL_PAD_SD1_DAT1__SD1_DATA1 0x17070
+ MX6QDL_PAD_SD1_DAT2__SD1_DATA2 0x17070
+ MX6QDL_PAD_SD1_DAT3__SD1_DATA3 0x17070
++ MX6QDL_PAD_GPIO_1__GPIO1_IO01 0x1b0b0
+ >;
+ };
+
+diff --git a/arch/arm/boot/dts/imx6sx-sabreauto.dts b/arch/arm/boot/dts/imx6sx-sabreauto.dts
+index 825924448ab4..14fd1de52a68 100644
+--- a/arch/arm/boot/dts/imx6sx-sabreauto.dts
++++ b/arch/arm/boot/dts/imx6sx-sabreauto.dts
+@@ -99,7 +99,7 @@
+ &fec2 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_enet2>;
+- phy-mode = "rgmii";
++ phy-mode = "rgmii-id";
+ phy-handle = <&ethphy0>;
+ fsl,magic-packet;
+ status = "okay";
+diff --git a/arch/arm/boot/dts/imx6sx-sdb.dtsi b/arch/arm/boot/dts/imx6sx-sdb.dtsi
+index 3e5fb72f21fc..c99aa273c296 100644
+--- a/arch/arm/boot/dts/imx6sx-sdb.dtsi
++++ b/arch/arm/boot/dts/imx6sx-sdb.dtsi
+@@ -213,7 +213,7 @@
+ &fec2 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_enet2>;
+- phy-mode = "rgmii";
++ phy-mode = "rgmii-id";
+ phy-handle = <&ethphy2>;
+ status = "okay";
+ };
+diff --git a/arch/arm/boot/dts/sun4i-a10.dtsi b/arch/arm/boot/dts/sun4i-a10.dtsi
+index bf531efc0610..0f95a6ef8543 100644
+--- a/arch/arm/boot/dts/sun4i-a10.dtsi
++++ b/arch/arm/boot/dts/sun4i-a10.dtsi
+@@ -198,7 +198,7 @@
+ default-pool {
+ compatible = "shared-dma-pool";
+ size = <0x6000000>;
+- alloc-ranges = <0x4a000000 0x6000000>;
++ alloc-ranges = <0x40000000 0x10000000>;
+ reusable;
+ linux,cma-default;
+ };
+diff --git a/arch/arm/boot/dts/sun5i.dtsi b/arch/arm/boot/dts/sun5i.dtsi
+index e6b036734a64..c2b4fbf552a3 100644
+--- a/arch/arm/boot/dts/sun5i.dtsi
++++ b/arch/arm/boot/dts/sun5i.dtsi
+@@ -117,7 +117,7 @@
+ default-pool {
+ compatible = "shared-dma-pool";
+ size = <0x6000000>;
+- alloc-ranges = <0x4a000000 0x6000000>;
++ alloc-ranges = <0x40000000 0x10000000>;
+ reusable;
+ linux,cma-default;
+ };
+diff --git a/arch/arm/boot/dts/sun7i-a20.dtsi b/arch/arm/boot/dts/sun7i-a20.dtsi
+index ffe1d10a1a84..6d6a37940db2 100644
+--- a/arch/arm/boot/dts/sun7i-a20.dtsi
++++ b/arch/arm/boot/dts/sun7i-a20.dtsi
+@@ -181,7 +181,7 @@
+ default-pool {
+ compatible = "shared-dma-pool";
+ size = <0x6000000>;
+- alloc-ranges = <0x4a000000 0x6000000>;
++ alloc-ranges = <0x40000000 0x10000000>;
+ reusable;
+ linux,cma-default;
+ };
+diff --git a/arch/arm/kernel/hw_breakpoint.c b/arch/arm/kernel/hw_breakpoint.c
+index 02ca7adf5375..7fff88e61252 100644
+--- a/arch/arm/kernel/hw_breakpoint.c
++++ b/arch/arm/kernel/hw_breakpoint.c
+@@ -683,6 +683,12 @@ static void disable_single_step(struct perf_event *bp)
+ arch_install_hw_breakpoint(bp);
+ }
+
++static int watchpoint_fault_on_uaccess(struct pt_regs *regs,
++ struct arch_hw_breakpoint *info)
++{
++ return !user_mode(regs) && info->ctrl.privilege == ARM_BREAKPOINT_USER;
++}
++
+ static void watchpoint_handler(unsigned long addr, unsigned int fsr,
+ struct pt_regs *regs)
+ {
+@@ -742,16 +748,27 @@ static void watchpoint_handler(unsigned long addr, unsigned int fsr,
+ }
+
+ pr_debug("watchpoint fired: address = 0x%x\n", info->trigger);
++
++ /*
++ * If we triggered a user watchpoint from a uaccess routine,
++ * then handle the stepping ourselves since userspace really
++ * can't help us with this.
++ */
++ if (watchpoint_fault_on_uaccess(regs, info))
++ goto step;
++
+ perf_bp_event(wp, regs);
+
+ /*
+- * If no overflow handler is present, insert a temporary
+- * mismatch breakpoint so we can single-step over the
+- * watchpoint trigger.
++ * Defer stepping to the overflow handler if one is installed.
++ * Otherwise, insert a temporary mismatch breakpoint so that
++ * we can single-step over the watchpoint trigger.
+ */
+- if (is_default_overflow_handler(wp))
+- enable_single_step(wp, instruction_pointer(regs));
++ if (!is_default_overflow_handler(wp))
++ goto unlock;
+
++step:
++ enable_single_step(wp, instruction_pointer(regs));
+ unlock:
+ rcu_read_unlock();
+ }
+diff --git a/arch/arm/kernel/vdso.c b/arch/arm/kernel/vdso.c
+index e0330a25e1c6..28cfe7bad1bf 100644
+--- a/arch/arm/kernel/vdso.c
++++ b/arch/arm/kernel/vdso.c
+@@ -184,6 +184,7 @@ static void __init patch_vdso(void *ehdr)
+ if (!cntvct_ok) {
+ vdso_nullpatch_one(&einfo, "__vdso_gettimeofday");
+ vdso_nullpatch_one(&einfo, "__vdso_clock_gettime");
++ vdso_nullpatch_one(&einfo, "__vdso_clock_gettime64");
+ }
+ }
+
+diff --git a/arch/arm64/include/asm/alternative.h b/arch/arm64/include/asm/alternative.h
+index 12f0eb56a1cc..619db9b4c9d5 100644
+--- a/arch/arm64/include/asm/alternative.h
++++ b/arch/arm64/include/asm/alternative.h
+@@ -77,9 +77,9 @@ static inline void apply_alternatives_module(void *start, size_t length) { }
+ "663:\n\t" \
+ newinstr "\n" \
+ "664:\n\t" \
+- ".previous\n\t" \
+ ".org . - (664b-663b) + (662b-661b)\n\t" \
+- ".org . - (662b-661b) + (664b-663b)\n" \
++ ".org . - (662b-661b) + (664b-663b)\n\t" \
++ ".previous\n" \
+ ".endif\n"
+
+ #define __ALTERNATIVE_CFG_CB(oldinstr, feature, cfg_enabled, cb) \
+diff --git a/arch/arm64/include/asm/checksum.h b/arch/arm64/include/asm/checksum.h
+index b6f7bc6da5fb..93a161b3bf3f 100644
+--- a/arch/arm64/include/asm/checksum.h
++++ b/arch/arm64/include/asm/checksum.h
+@@ -24,16 +24,17 @@ static inline __sum16 ip_fast_csum(const void *iph, unsigned int ihl)
+ {
+ __uint128_t tmp;
+ u64 sum;
++ int n = ihl; /* we want it signed */
+
+ tmp = *(const __uint128_t *)iph;
+ iph += 16;
+- ihl -= 4;
++ n -= 4;
+ tmp += ((tmp >> 64) | (tmp << 64));
+ sum = tmp >> 64;
+ do {
+ sum += *(const u32 *)iph;
+ iph += 4;
+- } while (--ihl);
++ } while (--n > 0);
+
+ sum += ((sum >> 32) | (sum << 32));
+ return csum_fold((__force u32)(sum >> 32));
+diff --git a/arch/parisc/include/asm/cmpxchg.h b/arch/parisc/include/asm/cmpxchg.h
+index ab5c215cf46c..068958575871 100644
+--- a/arch/parisc/include/asm/cmpxchg.h
++++ b/arch/parisc/include/asm/cmpxchg.h
+@@ -60,6 +60,7 @@ extern void __cmpxchg_called_with_bad_pointer(void);
+ extern unsigned long __cmpxchg_u32(volatile unsigned int *m, unsigned int old,
+ unsigned int new_);
+ extern u64 __cmpxchg_u64(volatile u64 *ptr, u64 old, u64 new_);
++extern u8 __cmpxchg_u8(volatile u8 *ptr, u8 old, u8 new_);
+
+ /* don't worry...optimizer will get rid of most of this */
+ static inline unsigned long
+@@ -71,6 +72,7 @@ __cmpxchg(volatile void *ptr, unsigned long old, unsigned long new_, int size)
+ #endif
+ case 4: return __cmpxchg_u32((unsigned int *)ptr,
+ (unsigned int)old, (unsigned int)new_);
++ case 1: return __cmpxchg_u8((u8 *)ptr, (u8)old, (u8)new_);
+ }
+ __cmpxchg_called_with_bad_pointer();
+ return old;
+diff --git a/arch/parisc/lib/bitops.c b/arch/parisc/lib/bitops.c
+index 70ffbcf889b8..2e4d1f05a926 100644
+--- a/arch/parisc/lib/bitops.c
++++ b/arch/parisc/lib/bitops.c
+@@ -79,3 +79,15 @@ unsigned long __cmpxchg_u32(volatile unsigned int *ptr, unsigned int old, unsign
+ _atomic_spin_unlock_irqrestore(ptr, flags);
+ return (unsigned long)prev;
+ }
++
++u8 __cmpxchg_u8(volatile u8 *ptr, u8 old, u8 new)
++{
++ unsigned long flags;
++ u8 prev;
++
++ _atomic_spin_lock_irqsave(ptr, flags);
++ if ((prev = *ptr) == old)
++ *ptr = new;
++ _atomic_spin_unlock_irqrestore(ptr, flags);
++ return prev;
++}
+diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c
+index 81493cee0a16..115fb9245f16 100644
+--- a/arch/riscv/mm/init.c
++++ b/arch/riscv/mm/init.c
+@@ -146,33 +146,36 @@ void __init setup_bootmem(void)
+ {
+ struct memblock_region *reg;
+ phys_addr_t mem_size = 0;
++ phys_addr_t total_mem = 0;
++ phys_addr_t mem_start, end = 0;
+ phys_addr_t vmlinux_end = __pa_symbol(&_end);
+ phys_addr_t vmlinux_start = __pa_symbol(&_start);
+
+ /* Find the memory region containing the kernel */
+ for_each_memblock(memory, reg) {
+- phys_addr_t end = reg->base + reg->size;
+-
+- if (reg->base <= vmlinux_start && vmlinux_end <= end) {
+- mem_size = min(reg->size, (phys_addr_t)-PAGE_OFFSET);
+-
+- /*
+- * Remove memblock from the end of usable area to the
+- * end of region
+- */
+- if (reg->base + mem_size < end)
+- memblock_remove(reg->base + mem_size,
+- end - reg->base - mem_size);
+- }
++ end = reg->base + reg->size;
++ if (!total_mem)
++ mem_start = reg->base;
++ if (reg->base <= vmlinux_start && vmlinux_end <= end)
++ BUG_ON(reg->size == 0);
++ total_mem = total_mem + reg->size;
+ }
+- BUG_ON(mem_size == 0);
++
++ /*
++ * Remove memblock from the end of usable area to the
++ * end of region
++ */
++ mem_size = min(total_mem, (phys_addr_t)-PAGE_OFFSET);
++ if (mem_start + mem_size < end)
++ memblock_remove(mem_start + mem_size,
++ end - mem_start - mem_size);
+
+ /* Reserve from the start of the kernel to the end of the kernel */
+ memblock_reserve(vmlinux_start, vmlinux_end - vmlinux_start);
+
+- set_max_mapnr(PFN_DOWN(mem_size));
+ max_pfn = PFN_DOWN(memblock_end_of_DRAM());
+ max_low_pfn = max_pfn;
++ set_max_mapnr(max_low_pfn);
+
+ #ifdef CONFIG_BLK_DEV_INITRD
+ setup_initrd();
+diff --git a/arch/riscv/mm/kasan_init.c b/arch/riscv/mm/kasan_init.c
+index ec0ca90dd900..7a580c8ad603 100644
+--- a/arch/riscv/mm/kasan_init.c
++++ b/arch/riscv/mm/kasan_init.c
+@@ -44,7 +44,7 @@ asmlinkage void __init kasan_early_init(void)
+ (__pa(((uintptr_t) kasan_early_shadow_pmd))),
+ __pgprot(_PAGE_TABLE)));
+
+- flush_tlb_all();
++ local_flush_tlb_all();
+ }
+
+ static void __init populate(void *start, void *end)
+@@ -79,7 +79,7 @@ static void __init populate(void *start, void *end)
+ pfn_pgd(PFN_DOWN(__pa(&pmd[offset])),
+ __pgprot(_PAGE_TABLE)));
+
+- flush_tlb_all();
++ local_flush_tlb_all();
+ memset(start, 0, end - start);
+ }
+
+diff --git a/arch/sh/include/asm/pgalloc.h b/arch/sh/include/asm/pgalloc.h
+index 22d968bfe9bb..d770da3f8b6f 100644
+--- a/arch/sh/include/asm/pgalloc.h
++++ b/arch/sh/include/asm/pgalloc.h
+@@ -12,6 +12,7 @@ extern void pgd_free(struct mm_struct *mm, pgd_t *pgd);
+ extern void pud_populate(struct mm_struct *mm, pud_t *pudp, pmd_t *pmd);
+ extern pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long address);
+ extern void pmd_free(struct mm_struct *mm, pmd_t *pmd);
++#define __pmd_free_tlb(tlb, pmdp, addr) pmd_free((tlb)->mm, (pmdp))
+ #endif
+
+ static inline void pmd_populate_kernel(struct mm_struct *mm, pmd_t *pmd,
+@@ -33,13 +34,4 @@ do { \
+ tlb_remove_page((tlb), (pte)); \
+ } while (0)
+
+-#if CONFIG_PGTABLE_LEVELS > 2
+-#define __pmd_free_tlb(tlb, pmdp, addr) \
+-do { \
+- struct page *page = virt_to_page(pmdp); \
+- pgtable_pmd_page_dtor(page); \
+- tlb_remove_page((tlb), page); \
+-} while (0);
+-#endif
+-
+ #endif /* __ASM_SH_PGALLOC_H */
+diff --git a/arch/sh/kernel/entry-common.S b/arch/sh/kernel/entry-common.S
+index 956a7a03b0c8..9bac5bbb67f3 100644
+--- a/arch/sh/kernel/entry-common.S
++++ b/arch/sh/kernel/entry-common.S
+@@ -199,7 +199,7 @@ syscall_trace_entry:
+ mov.l @(OFF_R7,r15), r7 ! arg3
+ mov.l @(OFF_R3,r15), r3 ! syscall_nr
+ !
+- mov.l 2f, r10 ! Number of syscalls
++ mov.l 6f, r10 ! Number of syscalls
+ cmp/hs r10, r3
+ bf syscall_call
+ mov #-ENOSYS, r0
+@@ -353,7 +353,7 @@ ENTRY(system_call)
+ tst r9, r8
+ bf syscall_trace_entry
+ !
+- mov.l 2f, r8 ! Number of syscalls
++ mov.l 6f, r8 ! Number of syscalls
+ cmp/hs r8, r3
+ bt syscall_badsys
+ !
+@@ -392,7 +392,7 @@ syscall_exit:
+ #if !defined(CONFIG_CPU_SH2)
+ 1: .long TRA
+ #endif
+-2: .long NR_syscalls
++6: .long NR_syscalls
+ 3: .long sys_call_table
+ 7: .long do_syscall_trace_enter
+ 8: .long do_syscall_trace_leave
+diff --git a/arch/x86/kernel/i8259.c b/arch/x86/kernel/i8259.c
+index 519649ddf100..fe522691ac71 100644
+--- a/arch/x86/kernel/i8259.c
++++ b/arch/x86/kernel/i8259.c
+@@ -207,7 +207,7 @@ spurious_8259A_irq:
+ * lets ACK and report it. [once per IRQ]
+ */
+ if (!(spurious_irq_mask & irqmask)) {
+- printk(KERN_DEBUG
++ printk_deferred(KERN_DEBUG
+ "spurious 8259A interrupt: IRQ%d.\n", irq);
+ spurious_irq_mask |= irqmask;
+ }
+diff --git a/arch/x86/kernel/stacktrace.c b/arch/x86/kernel/stacktrace.c
+index 6ad43fc44556..2fd698e28e4d 100644
+--- a/arch/x86/kernel/stacktrace.c
++++ b/arch/x86/kernel/stacktrace.c
+@@ -58,7 +58,6 @@ int arch_stack_walk_reliable(stack_trace_consume_fn consume_entry,
+ * or a page fault), which can make frame pointers
+ * unreliable.
+ */
+-
+ if (IS_ENABLED(CONFIG_FRAME_POINTER))
+ return -EINVAL;
+ }
+@@ -81,10 +80,6 @@ int arch_stack_walk_reliable(stack_trace_consume_fn consume_entry,
+ if (unwind_error(&state))
+ return -EINVAL;
+
+- /* Success path for non-user tasks, i.e. kthreads and idle tasks */
+- if (!(task->flags & (PF_KTHREAD | PF_IDLE)))
+- return -EINVAL;
+-
+ return 0;
+ }
+
+diff --git a/arch/x86/kernel/unwind_orc.c b/arch/x86/kernel/unwind_orc.c
+index 7f969b2d240f..ec88bbe08a32 100644
+--- a/arch/x86/kernel/unwind_orc.c
++++ b/arch/x86/kernel/unwind_orc.c
+@@ -440,8 +440,11 @@ bool unwind_next_frame(struct unwind_state *state)
+ /*
+ * Find the orc_entry associated with the text address.
+ *
+- * Decrement call return addresses by one so they work for sibling
+- * calls and calls to noreturn functions.
++ * For a call frame (as opposed to a signal frame), state->ip points to
++ * the instruction after the call. That instruction's stack layout
++ * could be different from the call instruction's layout, for example
++ * if the call was to a noreturn function. So get the ORC data for the
++ * call instruction itself.
+ */
+ orc = orc_find(state->signal ? state->ip : state->ip - 1);
+ if (!orc) {
+@@ -662,6 +665,7 @@ void __unwind_start(struct unwind_state *state, struct task_struct *task,
+ state->sp = task->thread.sp;
+ state->bp = READ_ONCE_NOCHECK(frame->bp);
+ state->ip = READ_ONCE_NOCHECK(frame->ret_addr);
++ state->signal = (void *)state->ip == ret_from_fork;
+ }
+
+ if (get_stack_info((unsigned long *)state->sp, state->task,
+diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
+index 8967e320a978..6b26deccedfd 100644
+--- a/arch/x86/kvm/lapic.c
++++ b/arch/x86/kvm/lapic.c
+@@ -2136,7 +2136,7 @@ void kvm_set_lapic_tscdeadline_msr(struct kvm_vcpu *vcpu, u64 data)
+ {
+ struct kvm_lapic *apic = vcpu->arch.apic;
+
+- if (!lapic_in_kernel(vcpu) || apic_lvtt_oneshot(apic) ||
++ if (!kvm_apic_present(vcpu) || apic_lvtt_oneshot(apic) ||
+ apic_lvtt_period(apic))
+ return;
+
+diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
+index a862c768fd54..7dbfc0bc738c 100644
+--- a/arch/x86/kvm/svm/svm.c
++++ b/arch/x86/kvm/svm/svm.c
+@@ -1105,7 +1105,7 @@ static void init_vmcb(struct vcpu_svm *svm)
+ svm->nested.vmcb = 0;
+ svm->vcpu.arch.hflags = 0;
+
+- if (pause_filter_count) {
++ if (!kvm_pause_in_guest(svm->vcpu.kvm)) {
+ control->pause_filter_count = pause_filter_count;
+ if (pause_filter_thresh)
+ control->pause_filter_thresh = pause_filter_thresh;
+@@ -2682,7 +2682,7 @@ static int pause_interception(struct vcpu_svm *svm)
+ struct kvm_vcpu *vcpu = &svm->vcpu;
+ bool in_kernel = (svm_get_cpl(vcpu) == 0);
+
+- if (pause_filter_thresh)
++ if (!kvm_pause_in_guest(vcpu->kvm))
+ grow_ple_window(vcpu);
+
+ kvm_vcpu_on_spin(vcpu, in_kernel);
+@@ -3727,7 +3727,7 @@ static void svm_handle_exit_irqoff(struct kvm_vcpu *vcpu,
+
+ static void svm_sched_in(struct kvm_vcpu *vcpu, int cpu)
+ {
+- if (pause_filter_thresh)
++ if (!kvm_pause_in_guest(vcpu->kvm))
+ shrink_ple_window(vcpu);
+ }
+
+@@ -3892,6 +3892,9 @@ static void svm_vm_destroy(struct kvm *kvm)
+
+ static int svm_vm_init(struct kvm *kvm)
+ {
++ if (!pause_filter_count || !pause_filter_thresh)
++ kvm->arch.pause_in_guest = true;
++
+ if (avic) {
+ int ret = avic_vm_init(kvm);
+ if (ret)
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
+index fd1dc3236eca..81f83ee4b12b 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
+@@ -692,9 +692,10 @@ static int amdgpu_info_ioctl(struct drm_device *dev, void *data, struct drm_file
+ return n ? -EFAULT : 0;
+ }
+ case AMDGPU_INFO_DEV_INFO: {
+- struct drm_amdgpu_info_device dev_info = {};
++ struct drm_amdgpu_info_device dev_info;
+ uint64_t vm_size;
+
++ memset(&dev_info, 0, sizeof(dev_info));
+ dev_info.device_id = dev->pdev->device;
+ dev_info.chip_rev = adev->rev_id;
+ dev_info.external_rev = adev->external_rev_id;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c
+index b14b0b4ffeb2..96b8feb77b15 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c
+@@ -775,7 +775,8 @@ static ssize_t amdgpu_set_pp_od_clk_voltage(struct device *dev,
+ tmp_str++;
+ while (isspace(*++tmp_str));
+
+- while ((sub_str = strsep(&tmp_str, delimiter)) != NULL) {
++ while (tmp_str[0]) {
++ sub_str = strsep(&tmp_str, delimiter);
+ ret = kstrtol(sub_str, 0, &parameter[parameter_size]);
+ if (ret)
+ return -EINVAL;
+@@ -1035,7 +1036,8 @@ static ssize_t amdgpu_read_mask(const char *buf, size_t count, uint32_t *mask)
+ memcpy(buf_cpy, buf, bytes);
+ buf_cpy[bytes] = '\0';
+ tmp = buf_cpy;
+- while ((sub_str = strsep(&tmp, delimiter)) != NULL) {
++ while (tmp[0]) {
++ sub_str = strsep(&tmp, delimiter);
+ if (strlen(sub_str)) {
+ ret = kstrtol(sub_str, 0, &level);
+ if (ret)
+@@ -1632,7 +1634,8 @@ static ssize_t amdgpu_set_pp_power_profile_mode(struct device *dev,
+ i++;
+ memcpy(buf_cpy, buf, count-i);
+ tmp_str = buf_cpy;
+- while ((sub_str = strsep(&tmp_str, delimiter)) != NULL) {
++ while (tmp_str[0]) {
++ sub_str = strsep(&tmp_str, delimiter);
+ ret = kstrtol(sub_str, 0, &parameter[parameter_size]);
+ if (ret)
+ return -EINVAL;
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index 837a286469ec..d50751ae73f1 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -8489,20 +8489,38 @@ static int amdgpu_dm_atomic_check(struct drm_device *dev,
+ * the same resource. If we have a new DC context as part of
+ * the DM atomic state from validation we need to free it and
+ * retain the existing one instead.
++ *
++ * Furthermore, since the DM atomic state only contains the DC
++ * context and can safely be annulled, we can free the state
++ * and clear the associated private object now to free
++ * some memory and avoid a possible use-after-free later.
+ */
+- struct dm_atomic_state *new_dm_state, *old_dm_state;
+
+- new_dm_state = dm_atomic_get_new_state(state);
+- old_dm_state = dm_atomic_get_old_state(state);
++ for (i = 0; i < state->num_private_objs; i++) {
++ struct drm_private_obj *obj = state->private_objs[i].ptr;
+
+- if (new_dm_state && old_dm_state) {
+- if (new_dm_state->context)
+- dc_release_state(new_dm_state->context);
++ if (obj->funcs == adev->dm.atomic_obj.funcs) {
++ int j = state->num_private_objs-1;
+
+- new_dm_state->context = old_dm_state->context;
++ dm_atomic_destroy_state(obj,
++ state->private_objs[i].state);
++
++ /* If i is not at the end of the array then the
++ * last element needs to be moved to where i was
++ * before the array can safely be truncated.
++ */
++ if (i != j)
++ state->private_objs[i] =
++ state->private_objs[j];
+
+- if (old_dm_state->context)
+- dc_retain_state(old_dm_state->context);
++ state->private_objs[j].ptr = NULL;
++ state->private_objs[j].state = NULL;
++ state->private_objs[j].old_state = NULL;
++ state->private_objs[j].new_state = NULL;
++
++ state->num_private_objs = j;
++ break;
++ }
+ }
+ }
+
+diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
+index 37627d06fb06..3087aa710e8d 100644
+--- a/drivers/gpu/drm/drm_gem.c
++++ b/drivers/gpu/drm/drm_gem.c
+@@ -872,9 +872,6 @@ err:
+ * @file_priv: drm file-private structure
+ *
+ * Open an object using the global name, returning a handle and the size.
+- *
+- * This handle (of course) holds a reference to the object, so the object
+- * will not go away until the handle is deleted.
+ */
+ int
+ drm_gem_open_ioctl(struct drm_device *dev, void *data,
+@@ -899,14 +896,15 @@ drm_gem_open_ioctl(struct drm_device *dev, void *data,
+
+ /* drm_gem_handle_create_tail unlocks dev->object_name_lock. */
+ ret = drm_gem_handle_create_tail(file_priv, obj, &handle);
+- drm_gem_object_put_unlocked(obj);
+ if (ret)
+- return ret;
++ goto err;
+
+ args->handle = handle;
+ args->size = obj->size;
+
+- return 0;
++err:
++ drm_gem_object_put_unlocked(obj);
++ return ret;
+ }
+
+ /**
+diff --git a/drivers/gpu/drm/drm_mipi_dbi.c b/drivers/gpu/drm/drm_mipi_dbi.c
+index 558baf989f5a..7d2211016eda 100644
+--- a/drivers/gpu/drm/drm_mipi_dbi.c
++++ b/drivers/gpu/drm/drm_mipi_dbi.c
+@@ -938,7 +938,7 @@ static int mipi_dbi_spi1_transfer(struct mipi_dbi *dbi, int dc,
+ }
+ }
+
+- tr.len = chunk;
++ tr.len = chunk * 2;
+ len -= chunk;
+
+ ret = spi_sync(spi, &m);
+diff --git a/drivers/gpu/drm/drm_of.c b/drivers/gpu/drm/drm_of.c
+index b50b44e76279..8fc3f67e3e76 100644
+--- a/drivers/gpu/drm/drm_of.c
++++ b/drivers/gpu/drm/drm_of.c
+@@ -322,10 +322,8 @@ static int drm_of_lvds_get_remote_pixels_type(
+ * configurations by passing the endpoints explicitly to
+ * drm_of_lvds_get_dual_link_pixel_order().
+ */
+- if (!current_pt || pixels_type != current_pt) {
+- of_node_put(remote_port);
++ if (!current_pt || pixels_type != current_pt)
+ return -EINVAL;
+- }
+ }
+
+ return pixels_type;
+diff --git a/drivers/gpu/drm/mcde/mcde_display.c b/drivers/gpu/drm/mcde/mcde_display.c
+index e59907e68854..d72ac23cd110 100644
+--- a/drivers/gpu/drm/mcde/mcde_display.c
++++ b/drivers/gpu/drm/mcde/mcde_display.c
+@@ -1060,9 +1060,14 @@ static void mcde_display_update(struct drm_simple_display_pipe *pipe,
+ */
+ if (fb) {
+ mcde_set_extsrc(mcde, drm_fb_cma_get_gem_addr(fb, pstate, 0));
+- if (!mcde->video_mode)
+- /* Send a single frame using software sync */
+- mcde_display_send_one_frame(mcde);
++ if (!mcde->video_mode) {
++ /*
++ * Send a single frame using software sync if the flow
++ * is not active yet.
++ */
++ if (mcde->flow_active == 0)
++ mcde_display_send_one_frame(mcde);
++ }
+ dev_info_once(mcde->dev, "sent first display update\n");
+ } else {
+ /*
+diff --git a/drivers/i2c/busses/i2c-cadence.c b/drivers/i2c/busses/i2c-cadence.c
+index 89d58f7d2a25..1efdabb5adca 100644
+--- a/drivers/i2c/busses/i2c-cadence.c
++++ b/drivers/i2c/busses/i2c-cadence.c
+@@ -230,20 +230,21 @@ static irqreturn_t cdns_i2c_isr(int irq, void *ptr)
+ /* Read data if receive data valid is set */
+ while (cdns_i2c_readreg(CDNS_I2C_SR_OFFSET) &
+ CDNS_I2C_SR_RXDV) {
+- /*
+- * Clear hold bit that was set for FIFO control if
+- * RX data left is less than FIFO depth, unless
+- * repeated start is selected.
+- */
+- if ((id->recv_count < CDNS_I2C_FIFO_DEPTH) &&
+- !id->bus_hold_flag)
+- cdns_i2c_clear_bus_hold(id);
+-
+ if (id->recv_count > 0) {
+ *(id->p_recv_buf)++ =
+ cdns_i2c_readreg(CDNS_I2C_DATA_OFFSET);
+ id->recv_count--;
+ id->curr_recv_count--;
++
++ /*
++ * Clear hold bit that was set for FIFO control
++ * if RX data left is less than or equal to
++ * FIFO DEPTH unless repeated start is selected
++ */
++ if (id->recv_count <= CDNS_I2C_FIFO_DEPTH &&
++ !id->bus_hold_flag)
++ cdns_i2c_clear_bus_hold(id);
++
+ } else {
+ dev_err(id->adap.dev.parent,
+ "xfer_size reg rollover. xfer aborted!\n");
+@@ -382,10 +383,8 @@ static void cdns_i2c_mrecv(struct cdns_i2c *id)
+ * Check for the message size against FIFO depth and set the
+ * 'hold bus' bit if it is greater than FIFO depth.
+ */
+- if ((id->recv_count > CDNS_I2C_FIFO_DEPTH) || id->bus_hold_flag)
++ if (id->recv_count > CDNS_I2C_FIFO_DEPTH)
+ ctrl_reg |= CDNS_I2C_CR_HOLD;
+- else
+- ctrl_reg = ctrl_reg & ~CDNS_I2C_CR_HOLD;
+
+ cdns_i2c_writereg(ctrl_reg, CDNS_I2C_CR_OFFSET);
+
+@@ -442,11 +441,8 @@ static void cdns_i2c_msend(struct cdns_i2c *id)
+ * Check for the message size against FIFO depth and set the
+ * 'hold bus' bit if it is greater than FIFO depth.
+ */
+- if ((id->send_count > CDNS_I2C_FIFO_DEPTH) || id->bus_hold_flag)
++ if (id->send_count > CDNS_I2C_FIFO_DEPTH)
+ ctrl_reg |= CDNS_I2C_CR_HOLD;
+- else
+- ctrl_reg = ctrl_reg & ~CDNS_I2C_CR_HOLD;
+-
+ cdns_i2c_writereg(ctrl_reg, CDNS_I2C_CR_OFFSET);
+
+ /* Clear the interrupts in interrupt status register. */
+diff --git a/drivers/infiniband/core/cq.c b/drivers/infiniband/core/cq.c
+index 4f25b2400694..6bb62d04030a 100644
+--- a/drivers/infiniband/core/cq.c
++++ b/drivers/infiniband/core/cq.c
+@@ -68,6 +68,15 @@ static void rdma_dim_init(struct ib_cq *cq)
+ INIT_WORK(&dim->work, ib_cq_rdma_dim_work);
+ }
+
++static void rdma_dim_destroy(struct ib_cq *cq)
++{
++ if (!cq->dim)
++ return;
++
++ cancel_work_sync(&cq->dim->work);
++ kfree(cq->dim);
++}
++
+ static int __poll_cq(struct ib_cq *cq, int num_entries, struct ib_wc *wc)
+ {
+ int rc;
+@@ -261,6 +270,7 @@ struct ib_cq *__ib_alloc_cq_user(struct ib_device *dev, void *private,
+ return cq;
+
+ out_destroy_cq:
++ rdma_dim_destroy(cq);
+ rdma_restrack_del(&cq->res);
+ cq->device->ops.destroy_cq(cq, udata);
+ out_free_wc:
+@@ -324,12 +334,10 @@ void ib_free_cq_user(struct ib_cq *cq, struct ib_udata *udata)
+ WARN_ON_ONCE(1);
+ }
+
++ rdma_dim_destroy(cq);
+ trace_cq_free(cq);
+ rdma_restrack_del(&cq->res);
+ cq->device->ops.destroy_cq(cq, udata);
+- if (cq->dim)
+- cancel_work_sync(&cq->dim->work);
+- kfree(cq->dim);
+ kfree(cq->wc);
+ kfree(cq);
+ }
+diff --git a/drivers/infiniband/hw/mlx5/odp.c b/drivers/infiniband/hw/mlx5/odp.c
+index bdeb6500a919..b56d812b8a7b 100644
+--- a/drivers/infiniband/hw/mlx5/odp.c
++++ b/drivers/infiniband/hw/mlx5/odp.c
+@@ -1798,9 +1798,7 @@ static bool init_prefetch_work(struct ib_pd *pd,
+ work->frags[i].mr =
+ get_prefetchable_mr(pd, advice, sg_list[i].lkey);
+ if (!work->frags[i].mr) {
+- work->num_sge = i - 1;
+- if (i)
+- destroy_prefetch_work(work);
++ work->num_sge = i;
+ return false;
+ }
+
+@@ -1866,6 +1864,7 @@ int mlx5_ib_advise_mr_prefetch(struct ib_pd *pd,
+ srcu_key = srcu_read_lock(&dev->odp_srcu);
+ if (!init_prefetch_work(pd, advice, pf_flags, work, sg_list, num_sge)) {
+ srcu_read_unlock(&dev->odp_srcu, srcu_key);
++ destroy_prefetch_work(work);
+ return -EINVAL;
+ }
+ queue_work(system_unbound_wq, &work->work);
+diff --git a/drivers/infiniband/sw/rdmavt/qp.c b/drivers/infiniband/sw/rdmavt/qp.c
+index ca29954a54ac..94372408cb5e 100644
+--- a/drivers/infiniband/sw/rdmavt/qp.c
++++ b/drivers/infiniband/sw/rdmavt/qp.c
+@@ -898,8 +898,6 @@ static void rvt_init_qp(struct rvt_dev_info *rdi, struct rvt_qp *qp,
+ qp->s_tail_ack_queue = 0;
+ qp->s_acked_ack_queue = 0;
+ qp->s_num_rd_atomic = 0;
+- if (qp->r_rq.kwq)
+- qp->r_rq.kwq->count = qp->r_rq.size;
+ qp->r_sge.num_sge = 0;
+ atomic_set(&qp->s_reserved_used, 0);
+ }
+@@ -2352,31 +2350,6 @@ bad_lkey:
+ return 0;
+ }
+
+-/**
+- * get_count - count numbers of request work queue entries
+- * in circular buffer
+- * @rq: data structure for request queue entry
+- * @tail: tail indices of the circular buffer
+- * @head: head indices of the circular buffer
+- *
+- * Return - total number of entries in the circular buffer
+- */
+-static u32 get_count(struct rvt_rq *rq, u32 tail, u32 head)
+-{
+- u32 count;
+-
+- count = head;
+-
+- if (count >= rq->size)
+- count = 0;
+- if (count < tail)
+- count += rq->size - tail;
+- else
+- count -= tail;
+-
+- return count;
+-}
+-
+ /**
+ * get_rvt_head - get head indices of the circular buffer
+ * @rq: data structure for request queue entry
+@@ -2451,7 +2424,7 @@ int rvt_get_rwqe(struct rvt_qp *qp, bool wr_id_only)
+
+ if (kwq->count < RVT_RWQ_COUNT_THRESHOLD) {
+ head = get_rvt_head(rq, ip);
+- kwq->count = get_count(rq, tail, head);
++ kwq->count = rvt_get_rq_count(rq, head, tail);
+ }
+ if (unlikely(kwq->count == 0)) {
+ ret = 0;
+@@ -2486,7 +2459,9 @@ int rvt_get_rwqe(struct rvt_qp *qp, bool wr_id_only)
+ * the number of remaining WQEs.
+ */
+ if (kwq->count < srq->limit) {
+- kwq->count = get_count(rq, tail, get_rvt_head(rq, ip));
++ kwq->count =
++ rvt_get_rq_count(rq,
++ get_rvt_head(rq, ip), tail);
+ if (kwq->count < srq->limit) {
+ struct ib_event ev;
+
+diff --git a/drivers/infiniband/sw/rdmavt/rc.c b/drivers/infiniband/sw/rdmavt/rc.c
+index 977906cc0d11..c58735f4c94a 100644
+--- a/drivers/infiniband/sw/rdmavt/rc.c
++++ b/drivers/infiniband/sw/rdmavt/rc.c
+@@ -127,9 +127,7 @@ __be32 rvt_compute_aeth(struct rvt_qp *qp)
+ * not atomic, which is OK, since the fuzziness is
+ * resolved as further ACKs go out.
+ */
+- credits = head - tail;
+- if ((int)credits < 0)
+- credits += qp->r_rq.size;
++ credits = rvt_get_rq_count(&qp->r_rq, head, tail);
+ }
+ /*
+ * Binary search the credit table to find the code to
+diff --git a/drivers/misc/habanalabs/command_submission.c b/drivers/misc/habanalabs/command_submission.c
+index 409276b6374d..e7c8e7473226 100644
+--- a/drivers/misc/habanalabs/command_submission.c
++++ b/drivers/misc/habanalabs/command_submission.c
+@@ -425,11 +425,19 @@ static int validate_queue_index(struct hl_device *hdev,
+ struct asic_fixed_properties *asic = &hdev->asic_prop;
+ struct hw_queue_properties *hw_queue_prop;
+
++ /* This must be checked here to prevent out-of-bounds access to
++ * hw_queues_props array
++ */
++ if (chunk->queue_index >= HL_MAX_QUEUES) {
++ dev_err(hdev->dev, "Queue index %d is invalid\n",
++ chunk->queue_index);
++ return -EINVAL;
++ }
++
+ hw_queue_prop = &asic->hw_queues_props[chunk->queue_index];
+
+- if ((chunk->queue_index >= HL_MAX_QUEUES) ||
+- (hw_queue_prop->type == QUEUE_TYPE_NA)) {
+- dev_err(hdev->dev, "Queue index %d is invalid\n",
++ if (hw_queue_prop->type == QUEUE_TYPE_NA) {
++ dev_err(hdev->dev, "Queue index %d is not applicable\n",
+ chunk->queue_index);
+ return -EINVAL;
+ }
+diff --git a/drivers/net/bareudp.c b/drivers/net/bareudp.c
+index 3dd46cd55114..88e7900853db 100644
+--- a/drivers/net/bareudp.c
++++ b/drivers/net/bareudp.c
+@@ -407,19 +407,34 @@ free_dst:
+ return err;
+ }
+
++static bool bareudp_proto_valid(struct bareudp_dev *bareudp, __be16 proto)
++{
++ if (bareudp->ethertype == proto)
++ return true;
++
++ if (!bareudp->multi_proto_mode)
++ return false;
++
++ if (bareudp->ethertype == htons(ETH_P_MPLS_UC) &&
++ proto == htons(ETH_P_MPLS_MC))
++ return true;
++
++ if (bareudp->ethertype == htons(ETH_P_IP) &&
++ proto == htons(ETH_P_IPV6))
++ return true;
++
++ return false;
++}
++
+ static netdev_tx_t bareudp_xmit(struct sk_buff *skb, struct net_device *dev)
+ {
+ struct bareudp_dev *bareudp = netdev_priv(dev);
+ struct ip_tunnel_info *info = NULL;
+ int err;
+
+- if (skb->protocol != bareudp->ethertype) {
+- if (!bareudp->multi_proto_mode ||
+- (skb->protocol != htons(ETH_P_MPLS_MC) &&
+- skb->protocol != htons(ETH_P_IPV6))) {
+- err = -EINVAL;
+- goto tx_error;
+- }
++ if (!bareudp_proto_valid(bareudp, skb->protocol)) {
++ err = -EINVAL;
++ goto tx_error;
+ }
+
+ info = skb_tunnel_info(skb);
+diff --git a/drivers/net/ethernet/chelsio/cxgb4/sge.c b/drivers/net/ethernet/chelsio/cxgb4/sge.c
+index 28ce9856a078..0f5ca68c9854 100644
+--- a/drivers/net/ethernet/chelsio/cxgb4/sge.c
++++ b/drivers/net/ethernet/chelsio/cxgb4/sge.c
+@@ -2925,6 +2925,7 @@ static inline int uld_send(struct adapter *adap, struct sk_buff *skb,
+ txq_info = adap->sge.uld_txq_info[tx_uld_type];
+ if (unlikely(!txq_info)) {
+ WARN_ON(true);
++ kfree_skb(skb);
+ return NET_XMIT_DROP;
+ }
+
+diff --git a/drivers/net/ethernet/cortina/gemini.c b/drivers/net/ethernet/cortina/gemini.c
+index 5bff5c2be88b..5359fb40578d 100644
+--- a/drivers/net/ethernet/cortina/gemini.c
++++ b/drivers/net/ethernet/cortina/gemini.c
+@@ -2445,6 +2445,7 @@ static int gemini_ethernet_port_probe(struct platform_device *pdev)
+ port->reset = devm_reset_control_get_exclusive(dev, NULL);
+ if (IS_ERR(port->reset)) {
+ dev_err(dev, "no reset\n");
++ clk_disable_unprepare(port->pclk);
+ return PTR_ERR(port->reset);
+ }
+ reset_control_reset(port->reset);
+@@ -2500,8 +2501,10 @@ static int gemini_ethernet_port_probe(struct platform_device *pdev)
+ IRQF_SHARED,
+ port_names[port->id],
+ port);
+- if (ret)
++ if (ret) {
++ clk_disable_unprepare(port->pclk);
+ return ret;
++ }
+
+ ret = register_netdev(netdev);
+ if (!ret) {
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
+index df1cb0441183..6e186aea7a2f 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
+@@ -1098,16 +1098,8 @@ static int hns3_fill_desc(struct hns3_enet_ring *ring, void *priv,
+ int k, sizeoflast;
+ dma_addr_t dma;
+
+- if (type == DESC_TYPE_SKB) {
+- struct sk_buff *skb = (struct sk_buff *)priv;
+- int ret;
+-
+- ret = hns3_fill_skb_desc(ring, skb, desc);
+- if (unlikely(ret < 0))
+- return ret;
+-
+- dma = dma_map_single(dev, skb->data, size, DMA_TO_DEVICE);
+- } else if (type == DESC_TYPE_FRAGLIST_SKB) {
++ if (type == DESC_TYPE_FRAGLIST_SKB ||
++ type == DESC_TYPE_SKB) {
+ struct sk_buff *skb = (struct sk_buff *)priv;
+
+ dma = dma_map_single(dev, skb->data, size, DMA_TO_DEVICE);
+@@ -1452,6 +1444,10 @@ netdev_tx_t hns3_nic_net_xmit(struct sk_buff *skb, struct net_device *netdev)
+
+ next_to_use_head = ring->next_to_use;
+
++ ret = hns3_fill_skb_desc(ring, skb, &ring->desc[ring->next_to_use]);
++ if (unlikely(ret < 0))
++ goto fill_err;
++
+ ret = hns3_fill_skb_to_desc(ring, skb, DESC_TYPE_SKB);
+ if (unlikely(ret < 0))
+ goto fill_err;
+@@ -4174,8 +4170,8 @@ static void hns3_link_status_change(struct hnae3_handle *handle, bool linkup)
+ return;
+
+ if (linkup) {
+- netif_carrier_on(netdev);
+ netif_tx_wake_all_queues(netdev);
++ netif_carrier_on(netdev);
+ if (netif_msg_link(handle))
+ netdev_info(netdev, "link up\n");
+ } else {
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+index b66b93f320b4..dfe247ad8475 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+@@ -5737,9 +5737,9 @@ static int hclge_add_fd_entry(struct hnae3_handle *handle,
+ /* to avoid rule conflict, when user configure rule by ethtool,
+ * we need to clear all arfs rules
+ */
++ spin_lock_bh(&hdev->fd_rule_lock);
+ hclge_clear_arfs_rules(handle);
+
+- spin_lock_bh(&hdev->fd_rule_lock);
+ ret = hclge_fd_config_rule(hdev, rule);
+
+ spin_unlock_bh(&hdev->fd_rule_lock);
+@@ -5782,6 +5782,7 @@ static int hclge_del_fd_entry(struct hnae3_handle *handle,
+ return ret;
+ }
+
++/* make sure being called while holding fd_rule_lock */
+ static void hclge_del_all_fd_entries(struct hnae3_handle *handle,
+ bool clear_list)
+ {
+@@ -5794,7 +5795,6 @@ static void hclge_del_all_fd_entries(struct hnae3_handle *handle,
+ if (!hnae3_dev_fd_supported(hdev))
+ return;
+
+- spin_lock_bh(&hdev->fd_rule_lock);
+ for_each_set_bit(location, hdev->fd_bmap,
+ hdev->fd_cfg.rule_num[HCLGE_FD_STAGE_1])
+ hclge_fd_tcam_config(hdev, HCLGE_FD_STAGE_1, true, location,
+@@ -5811,8 +5811,6 @@ static void hclge_del_all_fd_entries(struct hnae3_handle *handle,
+ bitmap_zero(hdev->fd_bmap,
+ hdev->fd_cfg.rule_num[HCLGE_FD_STAGE_1]);
+ }
+-
+- spin_unlock_bh(&hdev->fd_rule_lock);
+ }
+
+ static int hclge_restore_fd_entries(struct hnae3_handle *handle)
+@@ -6179,7 +6177,7 @@ static int hclge_add_fd_entry_by_arfs(struct hnae3_handle *handle, u16 queue_id,
+ u16 flow_id, struct flow_keys *fkeys)
+ {
+ struct hclge_vport *vport = hclge_get_vport(handle);
+- struct hclge_fd_rule_tuples new_tuples;
++ struct hclge_fd_rule_tuples new_tuples = {};
+ struct hclge_dev *hdev = vport->back;
+ struct hclge_fd_rule *rule;
+ u16 tmp_queue_id;
+@@ -6189,20 +6187,18 @@ static int hclge_add_fd_entry_by_arfs(struct hnae3_handle *handle, u16 queue_id,
+ if (!hnae3_dev_fd_supported(hdev))
+ return -EOPNOTSUPP;
+
+- memset(&new_tuples, 0, sizeof(new_tuples));
+- hclge_fd_get_flow_tuples(fkeys, &new_tuples);
+-
+- spin_lock_bh(&hdev->fd_rule_lock);
+-
+ /* when there is already fd rule existed add by user,
+ * arfs should not work
+ */
++ spin_lock_bh(&hdev->fd_rule_lock);
+ if (hdev->fd_active_type == HCLGE_FD_EP_ACTIVE) {
+ spin_unlock_bh(&hdev->fd_rule_lock);
+
+ return -EOPNOTSUPP;
+ }
+
++ hclge_fd_get_flow_tuples(fkeys, &new_tuples);
++
+ /* check is there flow director filter existed for this flow,
+ * if not, create a new filter for it;
+ * if filter exist with different queue id, modify the filter;
+@@ -6287,6 +6283,7 @@ static void hclge_rfs_filter_expire(struct hclge_dev *hdev)
+ #endif
+ }
+
++/* make sure being called while holding fd_rule_lock */
+ static void hclge_clear_arfs_rules(struct hnae3_handle *handle)
+ {
+ #ifdef CONFIG_RFS_ACCEL
+@@ -6331,10 +6328,14 @@ static void hclge_enable_fd(struct hnae3_handle *handle, bool enable)
+
+ hdev->fd_en = enable;
+ clear = hdev->fd_active_type == HCLGE_FD_ARFS_ACTIVE;
+- if (!enable)
++
++ if (!enable) {
++ spin_lock_bh(&hdev->fd_rule_lock);
+ hclge_del_all_fd_entries(handle, clear);
+- else
++ spin_unlock_bh(&hdev->fd_rule_lock);
++ } else {
+ hclge_restore_fd_entries(handle);
++ }
+ }
+
+ static void hclge_cfg_mac_mode(struct hclge_dev *hdev, bool enable)
+@@ -6799,8 +6800,9 @@ static void hclge_ae_stop(struct hnae3_handle *handle)
+ int i;
+
+ set_bit(HCLGE_STATE_DOWN, &hdev->state);
+-
++ spin_lock_bh(&hdev->fd_rule_lock);
+ hclge_clear_arfs_rules(handle);
++ spin_unlock_bh(&hdev->fd_rule_lock);
+
+ /* If it is not PF reset, the firmware will disable the MAC,
+ * so it only need to stop phy here.
+@@ -8532,11 +8534,12 @@ int hclge_set_vlan_filter(struct hnae3_handle *handle, __be16 proto,
+ bool writen_to_tbl = false;
+ int ret = 0;
+
+- /* When device is resetting, firmware is unable to handle
+- * mailbox. Just record the vlan id, and remove it after
++ /* When device is resetting or reset failed, firmware is unable to
++ * handle mailbox. Just record the vlan id, and remove it after
+ * reset finished.
+ */
+- if (test_bit(HCLGE_STATE_RST_HANDLING, &hdev->state) && is_kill) {
++ if ((test_bit(HCLGE_STATE_RST_HANDLING, &hdev->state) ||
++ test_bit(HCLGE_STATE_RST_FAIL, &hdev->state)) && is_kill) {
+ set_bit(vlan_id, vport->vlan_del_fail_bmap);
+ return -EBUSY;
+ }
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
+index e6cdd06925e6..0060fa643d0e 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
+@@ -1322,11 +1322,12 @@ static int hclgevf_set_vlan_filter(struct hnae3_handle *handle,
+ if (proto != htons(ETH_P_8021Q))
+ return -EPROTONOSUPPORT;
+
+- /* When device is resetting, firmware is unable to handle
+- * mailbox. Just record the vlan id, and remove it after
++ /* When device is resetting or reset failed, firmware is unable to
++ * handle mailbox. Just record the vlan id, and remove it after
+ * reset finished.
+ */
+- if (test_bit(HCLGEVF_STATE_RST_HANDLING, &hdev->state) && is_kill) {
++ if ((test_bit(HCLGEVF_STATE_RST_HANDLING, &hdev->state) ||
++ test_bit(HCLGEVF_STATE_RST_FAIL, &hdev->state)) && is_kill) {
+ set_bit(vlan_id, hdev->vlan_del_fail_bmap);
+ return -EBUSY;
+ }
+@@ -3142,23 +3143,36 @@ void hclgevf_update_port_base_vlan_info(struct hclgevf_dev *hdev, u16 state,
+ {
+ struct hnae3_handle *nic = &hdev->nic;
+ struct hclge_vf_to_pf_msg send_msg;
++ int ret;
+
+ rtnl_lock();
+- hclgevf_notify_client(hdev, HNAE3_DOWN_CLIENT);
+- rtnl_unlock();
++
++ if (test_bit(HCLGEVF_STATE_RST_HANDLING, &hdev->state) ||
++ test_bit(HCLGEVF_STATE_RST_FAIL, &hdev->state)) {
++ dev_warn(&hdev->pdev->dev,
++ "is resetting when updating port based vlan info\n");
++ rtnl_unlock();
++ return;
++ }
++
++ ret = hclgevf_notify_client(hdev, HNAE3_DOWN_CLIENT);
++ if (ret) {
++ rtnl_unlock();
++ return;
++ }
+
+ /* send msg to PF and wait update port based vlan info */
+ hclgevf_build_send_msg(&send_msg, HCLGE_MBX_SET_VLAN,
+ HCLGE_MBX_PORT_BASE_VLAN_CFG);
+ memcpy(send_msg.data, port_base_vlan_info, data_size);
+- hclgevf_send_mbx_msg(hdev, &send_msg, false, NULL, 0);
+-
+- if (state == HNAE3_PORT_BASE_VLAN_DISABLE)
+- nic->port_base_vlan_state = HNAE3_PORT_BASE_VLAN_DISABLE;
+- else
+- nic->port_base_vlan_state = HNAE3_PORT_BASE_VLAN_ENABLE;
++ ret = hclgevf_send_mbx_msg(hdev, &send_msg, false, NULL, 0);
++ if (!ret) {
++ if (state == HNAE3_PORT_BASE_VLAN_DISABLE)
++ nic->port_base_vlan_state = state;
++ else
++ nic->port_base_vlan_state = HNAE3_PORT_BASE_VLAN_ENABLE;
++ }
+
+- rtnl_lock();
+ hclgevf_notify_client(hdev, HNAE3_UP_CLIENT);
+ rtnl_unlock();
+ }
+diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c
+index 0fd7eae25fe9..5afb3c9c52d2 100644
+--- a/drivers/net/ethernet/ibm/ibmvnic.c
++++ b/drivers/net/ethernet/ibm/ibmvnic.c
+@@ -3206,7 +3206,7 @@ req_rx_irq_failed:
+ req_tx_irq_failed:
+ for (j = 0; j < i; j++) {
+ free_irq(adapter->tx_scrq[j]->irq, adapter->tx_scrq[j]);
+- irq_dispose_mapping(adapter->rx_scrq[j]->irq);
++ irq_dispose_mapping(adapter->tx_scrq[j]->irq);
+ }
+ release_sub_crqs(adapter, 1);
+ return rc;
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
+index 64786568af0d..75a8c407e815 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
+@@ -1730,10 +1730,12 @@ static void otx2_reset_task(struct work_struct *work)
+ if (!netif_running(pf->netdev))
+ return;
+
++ rtnl_lock();
+ otx2_stop(pf->netdev);
+ pf->reset_count++;
+ otx2_open(pf->netdev);
+ netif_trans_update(pf->netdev);
++ rtnl_unlock();
+ }
+
+ static const struct net_device_ops otx2_netdev_ops = {
+@@ -2111,6 +2113,7 @@ static void otx2_remove(struct pci_dev *pdev)
+
+ pf = netdev_priv(netdev);
+
++ cancel_work_sync(&pf->reset_task);
+ /* Disable link notifications */
+ otx2_cgx_config_linkevents(pf, false);
+
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c
+index f4227517dc8e..92a3db69a6cd 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c
+@@ -617,6 +617,8 @@ static void otx2vf_remove(struct pci_dev *pdev)
+
+ vf = netdev_priv(netdev);
+
++ cancel_work_sync(&vf->reset_task);
++ unregister_netdev(netdev);
+ otx2vf_disable_mbox_intr(vf);
+
+ otx2_detach_resources(&vf->mbox);
+diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.c b/drivers/net/ethernet/mediatek/mtk_eth_soc.c
+index 09047109d0da..b743d8b56c84 100644
+--- a/drivers/net/ethernet/mediatek/mtk_eth_soc.c
++++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.c
+@@ -2882,6 +2882,8 @@ static int mtk_add_mac(struct mtk_eth *eth, struct device_node *np)
+ eth->netdev[id]->irq = eth->irq[0];
+ eth->netdev[id]->dev.of_node = np;
+
++ eth->netdev[id]->max_mtu = MTK_MAX_RX_LENGTH - MTK_RX_ETH_HLEN;
++
+ return 0;
+
+ free_netdev:
+diff --git a/drivers/net/ethernet/mellanox/mlx4/main.c b/drivers/net/ethernet/mellanox/mlx4/main.c
+index c72c4e1ea383..598e222e0b90 100644
+--- a/drivers/net/ethernet/mellanox/mlx4/main.c
++++ b/drivers/net/ethernet/mellanox/mlx4/main.c
+@@ -4358,12 +4358,14 @@ end:
+ static void mlx4_shutdown(struct pci_dev *pdev)
+ {
+ struct mlx4_dev_persistent *persist = pci_get_drvdata(pdev);
++ struct mlx4_dev *dev = persist->dev;
+
+ mlx4_info(persist->dev, "mlx4_shutdown was called\n");
+ mutex_lock(&persist->interface_state_mutex);
+ if (persist->interface_state & MLX4_INTERFACE_STATE_UP)
+ mlx4_unload_one(pdev);
+ mutex_unlock(&persist->interface_state_mutex);
++ mlx4_pci_disable_device(dev);
+ }
+
+ static const struct pci_error_handlers mlx4_err_handler = {
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_geneve.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_geneve.c
+index 951ea26d96bc..e472ed0eacfb 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_geneve.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_geneve.c
+@@ -301,6 +301,8 @@ static int mlx5e_tc_tun_parse_geneve_params(struct mlx5e_priv *priv,
+ MLX5_SET(fte_match_set_misc, misc_v, geneve_protocol_type, ETH_P_TEB);
+ }
+
++ spec->match_criteria_enable |= MLX5_MATCH_MISC_PARAMETERS;
++
+ return 0;
+ }
+
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_gre.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_gre.c
+index 58b13192df23..2805416c32a3 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_gre.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_gre.c
+@@ -80,6 +80,8 @@ static int mlx5e_tc_tun_parse_gretap(struct mlx5e_priv *priv,
+ gre_key.key, be32_to_cpu(enc_keyid.key->keyid));
+ }
+
++ spec->match_criteria_enable |= MLX5_MATCH_MISC_PARAMETERS;
++
+ return 0;
+ }
+
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_vxlan.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_vxlan.c
+index 37b176801bcc..038a0f1cecec 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_vxlan.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_vxlan.c
+@@ -136,6 +136,8 @@ static int mlx5e_tc_tun_parse_vxlan(struct mlx5e_priv *priv,
+ MLX5_SET(fte_match_set_misc, misc_v, vxlan_vni,
+ be32_to_cpu(enc_keyid.key->keyid));
+
++ spec->match_criteria_enable |= MLX5_MATCH_MISC_PARAMETERS;
++
+ return 0;
+ }
+
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+index bc54913c5861..9861c9e42c0a 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+@@ -422,7 +422,7 @@ static int mlx5e_alloc_rq(struct mlx5e_channel *c,
+ err = mlx5_wq_ll_create(mdev, &rqp->wq, rqc_wq, &rq->mpwqe.wq,
+ &rq->wq_ctrl);
+ if (err)
+- return err;
++ goto err_rq_wq_destroy;
+
+ rq->mpwqe.wq.db = &rq->mpwqe.wq.db[MLX5_RCV_DBR];
+
+@@ -475,7 +475,7 @@ static int mlx5e_alloc_rq(struct mlx5e_channel *c,
+ err = mlx5_wq_cyc_create(mdev, &rqp->wq, rqc_wq, &rq->wqe.wq,
+ &rq->wq_ctrl);
+ if (err)
+- return err;
++ goto err_rq_wq_destroy;
+
+ rq->wqe.wq.db = &rq->wqe.wq.db[MLX5_RCV_DBR];
+
+@@ -3041,6 +3041,25 @@ void mlx5e_timestamp_init(struct mlx5e_priv *priv)
+ priv->tstamp.rx_filter = HWTSTAMP_FILTER_NONE;
+ }
+
++static void mlx5e_modify_admin_state(struct mlx5_core_dev *mdev,
++ enum mlx5_port_status state)
++{
++ struct mlx5_eswitch *esw = mdev->priv.eswitch;
++ int vport_admin_state;
++
++ mlx5_set_port_admin_status(mdev, state);
++
++ if (!MLX5_ESWITCH_MANAGER(mdev) || mlx5_eswitch_mode(esw) == MLX5_ESWITCH_OFFLOADS)
++ return;
++
++ if (state == MLX5_PORT_UP)
++ vport_admin_state = MLX5_VPORT_ADMIN_STATE_AUTO;
++ else
++ vport_admin_state = MLX5_VPORT_ADMIN_STATE_DOWN;
++
++ mlx5_eswitch_set_vport_state(esw, MLX5_VPORT_UPLINK, vport_admin_state);
++}
++
+ int mlx5e_open_locked(struct net_device *netdev)
+ {
+ struct mlx5e_priv *priv = netdev_priv(netdev);
+@@ -3073,7 +3092,7 @@ int mlx5e_open(struct net_device *netdev)
+ mutex_lock(&priv->state_lock);
+ err = mlx5e_open_locked(netdev);
+ if (!err)
+- mlx5_set_port_admin_status(priv->mdev, MLX5_PORT_UP);
++ mlx5e_modify_admin_state(priv->mdev, MLX5_PORT_UP);
+ mutex_unlock(&priv->state_lock);
+
+ return err;
+@@ -3107,7 +3126,7 @@ int mlx5e_close(struct net_device *netdev)
+ return -ENODEV;
+
+ mutex_lock(&priv->state_lock);
+- mlx5_set_port_admin_status(priv->mdev, MLX5_PORT_DOWN);
++ mlx5e_modify_admin_state(priv->mdev, MLX5_PORT_DOWN);
+ err = mlx5e_close_locked(netdev);
+ mutex_unlock(&priv->state_lock);
+
+@@ -5185,7 +5204,7 @@ static void mlx5e_nic_enable(struct mlx5e_priv *priv)
+
+ /* Marking the link as currently not needed by the Driver */
+ if (!netif_running(netdev))
+- mlx5_set_port_admin_status(mdev, MLX5_PORT_DOWN);
++ mlx5e_modify_admin_state(mdev, MLX5_PORT_DOWN);
+
+ mlx5e_set_netdev_mtu_boundaries(priv);
+ mlx5e_set_dev_port_mtu(priv);
+@@ -5395,6 +5414,8 @@ err_cleanup_tx:
+ profile->cleanup_tx(priv);
+
+ out:
++ set_bit(MLX5E_STATE_DESTROYING, &priv->state);
++ cancel_work_sync(&priv->update_stats_work);
+ return err;
+ }
+
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
+index 4a8e0dfdc5f2..e93d7430c1a3 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
+@@ -1922,6 +1922,8 @@ static void mlx5e_uplink_rep_enable(struct mlx5e_priv *priv)
+ INIT_WORK(&rpriv->uplink_priv.reoffload_flows_work,
+ mlx5e_tc_reoffload_flows_work);
+
++ mlx5_modify_vport_admin_state(mdev, MLX5_VPORT_STATE_OP_MOD_UPLINK,
++ 0, 0, MLX5_VPORT_ADMIN_STATE_AUTO);
+ mlx5_lag_add(mdev, netdev);
+ priv->events_nb.notifier_call = uplink_rep_async_event;
+ mlx5_notifier_register(mdev, &priv->events_nb);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+index 10f705761666..c0f54d2d4925 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+@@ -2256,6 +2256,7 @@ static int __parse_cls_flower(struct mlx5e_priv *priv,
+ match.key->vlan_priority);
+
+ *match_level = MLX5_MATCH_L2;
++ spec->match_criteria_enable |= MLX5_MATCH_MISC_PARAMETERS;
+ }
+ }
+
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
+index 7f618a443bfd..77a1ac1b1cc1 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
+@@ -2161,7 +2161,7 @@ abort:
+ mlx5_reload_interface(esw->dev, MLX5_INTERFACE_PROTOCOL_IB);
+ mlx5_reload_interface(esw->dev, MLX5_INTERFACE_PROTOCOL_ETH);
+ }
+-
++ esw_destroy_tsar(esw);
+ return err;
+ }
+
+@@ -2206,8 +2206,6 @@ void mlx5_eswitch_disable_locked(struct mlx5_eswitch *esw, bool clear_vf)
+ else if (esw->mode == MLX5_ESWITCH_OFFLOADS)
+ esw_offloads_disable(esw);
+
+- esw_destroy_tsar(esw);
+-
+ old_mode = esw->mode;
+ esw->mode = MLX5_ESWITCH_NONE;
+
+@@ -2217,6 +2215,8 @@ void mlx5_eswitch_disable_locked(struct mlx5_eswitch *esw, bool clear_vf)
+ mlx5_reload_interface(esw->dev, MLX5_INTERFACE_PROTOCOL_IB);
+ mlx5_reload_interface(esw->dev, MLX5_INTERFACE_PROTOCOL_ETH);
+ }
++ esw_destroy_tsar(esw);
++
+ if (clear_vf)
+ mlx5_eswitch_clear_vf_vports_info(esw);
+ }
+@@ -2374,6 +2374,8 @@ int mlx5_eswitch_set_vport_state(struct mlx5_eswitch *esw,
+ u16 vport, int link_state)
+ {
+ struct mlx5_vport *evport = mlx5_eswitch_get_vport(esw, vport);
++ int opmod = MLX5_VPORT_STATE_OP_MOD_ESW_VPORT;
++ int other_vport = 1;
+ int err = 0;
+
+ if (!ESW_ALLOWED(esw))
+@@ -2381,15 +2383,17 @@ int mlx5_eswitch_set_vport_state(struct mlx5_eswitch *esw,
+ if (IS_ERR(evport))
+ return PTR_ERR(evport);
+
++ if (vport == MLX5_VPORT_UPLINK) {
++ opmod = MLX5_VPORT_STATE_OP_MOD_UPLINK;
++ other_vport = 0;
++ vport = 0;
++ }
+ mutex_lock(&esw->state_lock);
+
+- err = mlx5_modify_vport_admin_state(esw->dev,
+- MLX5_VPORT_STATE_OP_MOD_ESW_VPORT,
+- vport, 1, link_state);
++ err = mlx5_modify_vport_admin_state(esw->dev, opmod, vport, other_vport, link_state);
+ if (err) {
+- mlx5_core_warn(esw->dev,
+- "Failed to set vport %d link state, err = %d",
+- vport, err);
++ mlx5_core_warn(esw->dev, "Failed to set vport %d link state, opmod = %d, err = %d",
++ vport, opmod, err);
+ goto unlock;
+ }
+
+@@ -2431,8 +2435,6 @@ int __mlx5_eswitch_set_vport_vlan(struct mlx5_eswitch *esw,
+ struct mlx5_vport *evport = mlx5_eswitch_get_vport(esw, vport);
+ int err = 0;
+
+- if (!ESW_ALLOWED(esw))
+- return -EPERM;
+ if (IS_ERR(evport))
+ return PTR_ERR(evport);
+ if (vlan > 4095 || qos > 7)
+@@ -2460,6 +2462,9 @@ int mlx5_eswitch_set_vport_vlan(struct mlx5_eswitch *esw,
+ u8 set_flags = 0;
+ int err;
+
++ if (!ESW_ALLOWED(esw))
++ return -EPERM;
++
+ if (vlan || qos)
+ set_flags = SET_VLAN_STRIP | SET_VLAN_INSERT;
+
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
+index c1848b57f61c..56d2a1ab9378 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
+@@ -684,6 +684,8 @@ static inline int mlx5_eswitch_enable(struct mlx5_eswitch *esw, int num_vfs) { r
+ static inline void mlx5_eswitch_disable(struct mlx5_eswitch *esw, bool clear_vf) {}
+ static inline bool mlx5_esw_lag_prereq(struct mlx5_core_dev *dev0, struct mlx5_core_dev *dev1) { return true; }
+ static inline bool mlx5_eswitch_is_funcs_handler(struct mlx5_core_dev *dev) { return false; }
++static inline
++int mlx5_eswitch_set_vport_state(struct mlx5_eswitch *esw, u16 vport, int link_state) { return 0; }
+ static inline const u32 *mlx5_esw_query_functions(struct mlx5_core_dev *dev)
+ {
+ return ERR_PTR(-EOPNOTSUPP);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
+index 5d9def18ae3a..cfc52521d775 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
+@@ -264,9 +264,6 @@ mlx5_eswitch_set_rule_source_port(struct mlx5_eswitch *esw,
+ mlx5_eswitch_get_vport_metadata_mask());
+
+ spec->match_criteria_enable |= MLX5_MATCH_MISC_PARAMETERS_2;
+- misc = MLX5_ADDR_OF(fte_match_param, spec->match_criteria, misc_parameters);
+- if (memchr_inv(misc, 0, MLX5_ST_SZ_BYTES(fte_match_set_misc)))
+- spec->match_criteria_enable |= MLX5_MATCH_MISC_PARAMETERS;
+ } else {
+ misc = MLX5_ADDR_OF(fte_match_param, spec->match_value, misc_parameters);
+ MLX5_SET(fte_match_set_misc, misc, source_port, attr->in_rep->vport);
+@@ -381,6 +378,9 @@ mlx5_eswitch_add_offloaded_rule(struct mlx5_eswitch *esw,
+ flow_act.modify_hdr = attr->modify_hdr;
+
+ if (split) {
++ if (MLX5_CAP_ESW_FLOWTABLE(esw->dev, flow_source) &&
++ attr->in_rep->vport == MLX5_VPORT_UPLINK)
++ spec->flow_context.flow_source = MLX5_FLOW_CONTEXT_FLOW_SOURCE_UPLINK;
+ fdb = esw_vport_tbl_get(esw, attr);
+ } else {
+ if (attr->chain || attr->prio)
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/clock.c b/drivers/net/ethernet/mellanox/mlx5/core/lib/clock.c
+index 43f97601b500..1d9a5117f90b 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/lib/clock.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/lib/clock.c
+@@ -252,17 +252,17 @@ static int mlx5_extts_configure(struct ptp_clock_info *ptp,
+ if (rq->extts.index >= clock->ptp_info.n_pins)
+ return -EINVAL;
+
++ pin = ptp_find_pin(clock->ptp, PTP_PF_EXTTS, rq->extts.index);
++ if (pin < 0)
++ return -EBUSY;
++
+ if (on) {
+- pin = ptp_find_pin(clock->ptp, PTP_PF_EXTTS, rq->extts.index);
+- if (pin < 0)
+- return -EBUSY;
+ pin_mode = MLX5_PIN_MODE_IN;
+ pattern = !!(rq->extts.flags & PTP_FALLING_EDGE);
+ field_select = MLX5_MTPPS_FS_PIN_MODE |
+ MLX5_MTPPS_FS_PATTERN |
+ MLX5_MTPPS_FS_ENABLE;
+ } else {
+- pin = rq->extts.index;
+ field_select = MLX5_MTPPS_FS_ENABLE;
+ }
+
+@@ -310,12 +310,12 @@ static int mlx5_perout_configure(struct ptp_clock_info *ptp,
+ if (rq->perout.index >= clock->ptp_info.n_pins)
+ return -EINVAL;
+
+- if (on) {
+- pin = ptp_find_pin(clock->ptp, PTP_PF_PEROUT,
+- rq->perout.index);
+- if (pin < 0)
+- return -EBUSY;
++ pin = ptp_find_pin(clock->ptp, PTP_PF_PEROUT,
++ rq->perout.index);
++ if (pin < 0)
++ return -EBUSY;
+
++ if (on) {
+ pin_mode = MLX5_PIN_MODE_OUT;
+ pattern = MLX5_OUT_PATTERN_PERIODIC;
+ ts.tv_sec = rq->perout.period.sec;
+@@ -341,7 +341,6 @@ static int mlx5_perout_configure(struct ptp_clock_info *ptp,
+ MLX5_MTPPS_FS_ENABLE |
+ MLX5_MTPPS_FS_TIME_STAMP;
+ } else {
+- pin = rq->perout.index;
+ field_select = MLX5_MTPPS_FS_ENABLE;
+ }
+
+@@ -388,10 +387,31 @@ static int mlx5_ptp_enable(struct ptp_clock_info *ptp,
+ return 0;
+ }
+
++enum {
++ MLX5_MTPPS_REG_CAP_PIN_X_MODE_SUPPORT_PPS_IN = BIT(0),
++ MLX5_MTPPS_REG_CAP_PIN_X_MODE_SUPPORT_PPS_OUT = BIT(1),
++};
++
+ static int mlx5_ptp_verify(struct ptp_clock_info *ptp, unsigned int pin,
+ enum ptp_pin_function func, unsigned int chan)
+ {
+- return (func == PTP_PF_PHYSYNC) ? -EOPNOTSUPP : 0;
++ struct mlx5_clock *clock = container_of(ptp, struct mlx5_clock,
++ ptp_info);
++
++ switch (func) {
++ case PTP_PF_NONE:
++ return 0;
++ case PTP_PF_EXTTS:
++ return !(clock->pps_info.pin_caps[pin] &
++ MLX5_MTPPS_REG_CAP_PIN_X_MODE_SUPPORT_PPS_IN);
++ case PTP_PF_PEROUT:
++ return !(clock->pps_info.pin_caps[pin] &
++ MLX5_MTPPS_REG_CAP_PIN_X_MODE_SUPPORT_PPS_OUT);
++ default:
++ return -EOPNOTSUPP;
++ }
++
++ return -EOPNOTSUPP;
+ }
+
+ static const struct ptp_clock_info mlx5_ptp_clock_info = {
+@@ -411,6 +431,38 @@ static const struct ptp_clock_info mlx5_ptp_clock_info = {
+ .verify = NULL,
+ };
+
++static int mlx5_query_mtpps_pin_mode(struct mlx5_core_dev *mdev, u8 pin,
++ u32 *mtpps, u32 mtpps_size)
++{
++ u32 in[MLX5_ST_SZ_DW(mtpps_reg)] = {};
++
++ MLX5_SET(mtpps_reg, in, pin, pin);
++
++ return mlx5_core_access_reg(mdev, in, sizeof(in), mtpps,
++ mtpps_size, MLX5_REG_MTPPS, 0, 0);
++}
++
++static int mlx5_get_pps_pin_mode(struct mlx5_clock *clock, u8 pin)
++{
++ struct mlx5_core_dev *mdev = clock->mdev;
++ u32 out[MLX5_ST_SZ_DW(mtpps_reg)] = {};
++ u8 mode;
++ int err;
++
++ err = mlx5_query_mtpps_pin_mode(mdev, pin, out, sizeof(out));
++ if (err || !MLX5_GET(mtpps_reg, out, enable))
++ return PTP_PF_NONE;
++
++ mode = MLX5_GET(mtpps_reg, out, pin_mode);
++
++ if (mode == MLX5_PIN_MODE_IN)
++ return PTP_PF_EXTTS;
++ else if (mode == MLX5_PIN_MODE_OUT)
++ return PTP_PF_PEROUT;
++
++ return PTP_PF_NONE;
++}
++
+ static int mlx5_init_pin_config(struct mlx5_clock *clock)
+ {
+ int i;
+@@ -430,8 +482,8 @@ static int mlx5_init_pin_config(struct mlx5_clock *clock)
+ sizeof(clock->ptp_info.pin_config[i].name),
+ "mlx5_pps%d", i);
+ clock->ptp_info.pin_config[i].index = i;
+- clock->ptp_info.pin_config[i].func = PTP_PF_NONE;
+- clock->ptp_info.pin_config[i].chan = i;
++ clock->ptp_info.pin_config[i].func = mlx5_get_pps_pin_mode(clock, i);
++ clock->ptp_info.pin_config[i].chan = 0;
+ }
+
+ return 0;
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/core.c b/drivers/net/ethernet/mellanox/mlxsw/core.c
+index d6d6fe64887b..71b6185b4904 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/core.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/core.c
+@@ -1814,7 +1814,7 @@ static int mlxsw_core_reg_access_emad(struct mlxsw_core *mlxsw_core,
+ err = mlxsw_emad_reg_access(mlxsw_core, reg, payload, type, trans,
+ bulk_list, cb, cb_priv, tid);
+ if (err) {
+- kfree(trans);
++ kfree_rcu(trans, rcu);
+ return err;
+ }
+ return 0;
+@@ -2051,11 +2051,13 @@ void mlxsw_core_skb_receive(struct mlxsw_core *mlxsw_core, struct sk_buff *skb,
+ break;
+ }
+ }
+- rcu_read_unlock();
+- if (!found)
++ if (!found) {
++ rcu_read_unlock();
+ goto drop;
++ }
+
+ rxl->func(skb, local_port, rxl_item->priv);
++ rcu_read_unlock();
+ return;
+
+ drop:
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c
+index 84b3d78a9dd8..ac1a63fe0899 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c
+@@ -8072,16 +8072,6 @@ int mlxsw_sp_router_init(struct mlxsw_sp *mlxsw_sp,
+ mlxsw_sp->router = router;
+ router->mlxsw_sp = mlxsw_sp;
+
+- router->inetaddr_nb.notifier_call = mlxsw_sp_inetaddr_event;
+- err = register_inetaddr_notifier(&router->inetaddr_nb);
+- if (err)
+- goto err_register_inetaddr_notifier;
+-
+- router->inet6addr_nb.notifier_call = mlxsw_sp_inet6addr_event;
+- err = register_inet6addr_notifier(&router->inet6addr_nb);
+- if (err)
+- goto err_register_inet6addr_notifier;
+-
+ INIT_LIST_HEAD(&mlxsw_sp->router->nexthop_neighs_list);
+ err = __mlxsw_sp_router_init(mlxsw_sp);
+ if (err)
+@@ -8122,12 +8112,6 @@ int mlxsw_sp_router_init(struct mlxsw_sp *mlxsw_sp,
+ if (err)
+ goto err_neigh_init;
+
+- mlxsw_sp->router->netevent_nb.notifier_call =
+- mlxsw_sp_router_netevent_event;
+- err = register_netevent_notifier(&mlxsw_sp->router->netevent_nb);
+- if (err)
+- goto err_register_netevent_notifier;
+-
+ err = mlxsw_sp_mp_hash_init(mlxsw_sp);
+ if (err)
+ goto err_mp_hash_init;
+@@ -8136,6 +8120,22 @@ int mlxsw_sp_router_init(struct mlxsw_sp *mlxsw_sp,
+ if (err)
+ goto err_dscp_init;
+
++ router->inetaddr_nb.notifier_call = mlxsw_sp_inetaddr_event;
++ err = register_inetaddr_notifier(&router->inetaddr_nb);
++ if (err)
++ goto err_register_inetaddr_notifier;
++
++ router->inet6addr_nb.notifier_call = mlxsw_sp_inet6addr_event;
++ err = register_inet6addr_notifier(&router->inet6addr_nb);
++ if (err)
++ goto err_register_inet6addr_notifier;
++
++ mlxsw_sp->router->netevent_nb.notifier_call =
++ mlxsw_sp_router_netevent_event;
++ err = register_netevent_notifier(&mlxsw_sp->router->netevent_nb);
++ if (err)
++ goto err_register_netevent_notifier;
++
+ mlxsw_sp->router->fib_nb.notifier_call = mlxsw_sp_router_fib_event;
+ err = register_fib_notifier(mlxsw_sp_net(mlxsw_sp),
+ &mlxsw_sp->router->fib_nb,
+@@ -8146,10 +8146,15 @@ int mlxsw_sp_router_init(struct mlxsw_sp *mlxsw_sp,
+ return 0;
+
+ err_register_fib_notifier:
+-err_dscp_init:
+-err_mp_hash_init:
+ unregister_netevent_notifier(&mlxsw_sp->router->netevent_nb);
+ err_register_netevent_notifier:
++ unregister_inet6addr_notifier(&router->inet6addr_nb);
++err_register_inet6addr_notifier:
++ unregister_inetaddr_notifier(&router->inetaddr_nb);
++err_register_inetaddr_notifier:
++ mlxsw_core_flush_owq();
++err_dscp_init:
++err_mp_hash_init:
+ mlxsw_sp_neigh_fini(mlxsw_sp);
+ err_neigh_init:
+ mlxsw_sp_vrs_fini(mlxsw_sp);
+@@ -8168,10 +8173,6 @@ err_ipips_init:
+ err_rifs_init:
+ __mlxsw_sp_router_fini(mlxsw_sp);
+ err_router_init:
+- unregister_inet6addr_notifier(&router->inet6addr_nb);
+-err_register_inet6addr_notifier:
+- unregister_inetaddr_notifier(&router->inetaddr_nb);
+-err_register_inetaddr_notifier:
+ mutex_destroy(&mlxsw_sp->router->lock);
+ kfree(mlxsw_sp->router);
+ return err;
+@@ -8182,6 +8183,9 @@ void mlxsw_sp_router_fini(struct mlxsw_sp *mlxsw_sp)
+ unregister_fib_notifier(mlxsw_sp_net(mlxsw_sp),
+ &mlxsw_sp->router->fib_nb);
+ unregister_netevent_notifier(&mlxsw_sp->router->netevent_nb);
++ unregister_inet6addr_notifier(&mlxsw_sp->router->inet6addr_nb);
++ unregister_inetaddr_notifier(&mlxsw_sp->router->inetaddr_nb);
++ mlxsw_core_flush_owq();
+ mlxsw_sp_neigh_fini(mlxsw_sp);
+ mlxsw_sp_vrs_fini(mlxsw_sp);
+ mlxsw_sp_mr_fini(mlxsw_sp);
+@@ -8191,8 +8195,6 @@ void mlxsw_sp_router_fini(struct mlxsw_sp *mlxsw_sp)
+ mlxsw_sp_ipips_fini(mlxsw_sp);
+ mlxsw_sp_rifs_fini(mlxsw_sp);
+ __mlxsw_sp_router_fini(mlxsw_sp);
+- unregister_inet6addr_notifier(&mlxsw_sp->router->inet6addr_nb);
+- unregister_inetaddr_notifier(&mlxsw_sp->router->inetaddr_nb);
+ mutex_destroy(&mlxsw_sp->router->lock);
+ kfree(mlxsw_sp->router);
+ }
+diff --git a/drivers/net/ethernet/ni/nixge.c b/drivers/net/ethernet/ni/nixge.c
+index 2fdd0753b3af..0e776131a3ef 100644
+--- a/drivers/net/ethernet/ni/nixge.c
++++ b/drivers/net/ethernet/ni/nixge.c
+@@ -1298,19 +1298,21 @@ static int nixge_probe(struct platform_device *pdev)
+ netif_napi_add(ndev, &priv->napi, nixge_poll, NAPI_POLL_WEIGHT);
+ err = nixge_of_get_resources(pdev);
+ if (err)
+- return err;
++ goto free_netdev;
+ __nixge_hw_set_mac_address(ndev);
+
+ priv->tx_irq = platform_get_irq_byname(pdev, "tx");
+ if (priv->tx_irq < 0) {
+ netdev_err(ndev, "could not find 'tx' irq");
+- return priv->tx_irq;
++ err = priv->tx_irq;
++ goto free_netdev;
+ }
+
+ priv->rx_irq = platform_get_irq_byname(pdev, "rx");
+ if (priv->rx_irq < 0) {
+ netdev_err(ndev, "could not find 'rx' irq");
+- return priv->rx_irq;
++ err = priv->rx_irq;
++ goto free_netdev;
+ }
+
+ priv->coalesce_count_rx = XAXIDMA_DFT_RX_THRESHOLD;
+diff --git a/drivers/net/ethernet/pensando/ionic/ionic_lif.c b/drivers/net/ethernet/pensando/ionic/ionic_lif.c
+index 2c3e9ef22129..337d971ffd92 100644
+--- a/drivers/net/ethernet/pensando/ionic/ionic_lif.c
++++ b/drivers/net/ethernet/pensando/ionic/ionic_lif.c
+@@ -1959,7 +1959,7 @@ int ionic_reset_queues(struct ionic_lif *lif, ionic_reset_cb cb, void *arg)
+ netif_device_detach(lif->netdev);
+ err = ionic_stop(lif->netdev);
+ if (err)
+- return err;
++ goto reset_out;
+ }
+
+ if (cb)
+@@ -1969,6 +1969,8 @@ int ionic_reset_queues(struct ionic_lif *lif, ionic_reset_cb cb, void *arg)
+ err = ionic_open(lif->netdev);
+ netif_device_attach(lif->netdev);
+ }
++
++reset_out:
+ mutex_unlock(&lif->queue_lock);
+
+ return err;
+diff --git a/drivers/net/ethernet/qlogic/qed/qed_int.c b/drivers/net/ethernet/qlogic/qed/qed_int.c
+index 8d106063e927..666e43748a5f 100644
+--- a/drivers/net/ethernet/qlogic/qed/qed_int.c
++++ b/drivers/net/ethernet/qlogic/qed/qed_int.c
+@@ -1180,7 +1180,8 @@ static int qed_int_attentions(struct qed_hwfn *p_hwfn)
+ index, attn_bits, attn_acks, asserted_bits,
+ deasserted_bits, p_sb_attn_sw->known_attn);
+ } else if (asserted_bits == 0x100) {
+- DP_INFO(p_hwfn, "MFW indication via attention\n");
++ DP_VERBOSE(p_hwfn, NETIF_MSG_INTR,
++ "MFW indication via attention\n");
+ } else {
+ DP_VERBOSE(p_hwfn, NETIF_MSG_INTR,
+ "MFW indication [deassertion]\n");
+diff --git a/drivers/net/ethernet/renesas/ravb_main.c b/drivers/net/ethernet/renesas/ravb_main.c
+index 067ad25553b9..ab335f7dab82 100644
+--- a/drivers/net/ethernet/renesas/ravb_main.c
++++ b/drivers/net/ethernet/renesas/ravb_main.c
+@@ -1444,6 +1444,7 @@ static void ravb_tx_timeout_work(struct work_struct *work)
+ struct ravb_private *priv = container_of(work, struct ravb_private,
+ work);
+ struct net_device *ndev = priv->ndev;
++ int error;
+
+ netif_tx_stop_all_queues(ndev);
+
+@@ -1452,15 +1453,36 @@ static void ravb_tx_timeout_work(struct work_struct *work)
+ ravb_ptp_stop(ndev);
+
+ /* Wait for DMA stopping */
+- ravb_stop_dma(ndev);
++ if (ravb_stop_dma(ndev)) {
++ /* If ravb_stop_dma() fails, the hardware is still operating
++ * for TX and/or RX. So, this should not call the following
++ * functions because ravb_dmac_init() could fail too.
++ * Also, this should not retry ravb_stop_dma() again and again
++ * here because it's possible to wait forever. So, this just
++ * re-enables the TX and RX and skips the following
++ * re-initialization procedure.
++ */
++ ravb_rcv_snd_enable(ndev);
++ goto out;
++ }
+
+ ravb_ring_free(ndev, RAVB_BE);
+ ravb_ring_free(ndev, RAVB_NC);
+
+ /* Device init */
+- ravb_dmac_init(ndev);
++ error = ravb_dmac_init(ndev);
++ if (error) {
++ /* If ravb_dmac_init() fails, descriptors are freed. So, this
++ * should return here to avoid re-enabling the TX and RX in
++ * ravb_emac_init().
++ */
++ netdev_err(ndev, "%s: ravb_dmac_init() failed, error %d\n",
++ __func__, error);
++ return;
++ }
+ ravb_emac_init(ndev);
+
++out:
+ /* Initialise PTP Clock driver */
+ if (priv->chip_id == RCAR_GEN2)
+ ravb_ptp_init(ndev, priv->pdev);
+diff --git a/drivers/net/usb/hso.c b/drivers/net/usb/hso.c
+index bb8c34d746ab..5f123a8cf68e 100644
+--- a/drivers/net/usb/hso.c
++++ b/drivers/net/usb/hso.c
+@@ -1390,8 +1390,9 @@ static void hso_serial_set_termios(struct tty_struct *tty, struct ktermios *old)
+ unsigned long flags;
+
+ if (old)
+- hso_dbg(0x16, "Termios called with: cflags new[%d] - old[%d]\n",
+- tty->termios.c_cflag, old->c_cflag);
++ hso_dbg(0x16, "Termios called with: cflags new[%u] - old[%u]\n",
++ (unsigned int)tty->termios.c_cflag,
++ (unsigned int)old->c_cflag);
+
+ /* the actual setup */
+ spin_lock_irqsave(&serial->serial_lock, flags);
+diff --git a/drivers/net/usb/lan78xx.c b/drivers/net/usb/lan78xx.c
+index eccbf4cd7149..ee062b27cfa7 100644
+--- a/drivers/net/usb/lan78xx.c
++++ b/drivers/net/usb/lan78xx.c
+@@ -3759,6 +3759,11 @@ static int lan78xx_probe(struct usb_interface *intf,
+ netdev->max_mtu = MAX_SINGLE_PACKET_SIZE;
+ netif_set_gso_max_size(netdev, MAX_SINGLE_PACKET_SIZE - MAX_HEADER);
+
++ if (intf->cur_altsetting->desc.bNumEndpoints < 3) {
++ ret = -ENODEV;
++ goto out3;
++ }
++
+ dev->ep_blkin = (intf->cur_altsetting)->endpoint + 0;
+ dev->ep_blkout = (intf->cur_altsetting)->endpoint + 1;
+ dev->ep_intr = (intf->cur_altsetting)->endpoint + 2;
+@@ -3783,6 +3788,7 @@ static int lan78xx_probe(struct usb_interface *intf,
+ usb_fill_int_urb(dev->urb_intr, dev->udev,
+ dev->pipe_intr, buf, maxp,
+ intr_complete, dev, period);
++ dev->urb_intr->transfer_flags |= URB_FREE_BUFFER;
+ }
+ }
+
+diff --git a/drivers/net/vxlan.c b/drivers/net/vxlan.c
+index 779e56c43d27..6e64bc8d601f 100644
+--- a/drivers/net/vxlan.c
++++ b/drivers/net/vxlan.c
+@@ -2863,8 +2863,10 @@ static void vxlan_flush(struct vxlan_dev *vxlan, bool do_all)
+ if (!do_all && (f->state & (NUD_PERMANENT | NUD_NOARP)))
+ continue;
+ /* the all_zeros_mac entry is deleted at vxlan_uninit */
+- if (!is_zero_ether_addr(f->eth_addr))
+- vxlan_fdb_destroy(vxlan, f, true, true);
++ if (is_zero_ether_addr(f->eth_addr) &&
++ f->vni == vxlan->cfg.vni)
++ continue;
++ vxlan_fdb_destroy(vxlan, f, true, true);
+ }
+ spin_unlock_bh(&vxlan->hash_lock[h]);
+ }
+diff --git a/drivers/net/wan/hdlc_x25.c b/drivers/net/wan/hdlc_x25.c
+index c84536b03aa8..f70336bb6f52 100644
+--- a/drivers/net/wan/hdlc_x25.c
++++ b/drivers/net/wan/hdlc_x25.c
+@@ -71,8 +71,10 @@ static int x25_data_indication(struct net_device *dev, struct sk_buff *skb)
+ {
+ unsigned char *ptr;
+
+- if (skb_cow(skb, 1))
++ if (skb_cow(skb, 1)) {
++ kfree_skb(skb);
+ return NET_RX_DROP;
++ }
+
+ skb_push(skb, 1);
+ skb_reset_network_header(skb);
+diff --git a/drivers/net/wan/lapbether.c b/drivers/net/wan/lapbether.c
+index 284832314f31..b2868433718f 100644
+--- a/drivers/net/wan/lapbether.c
++++ b/drivers/net/wan/lapbether.c
+@@ -128,10 +128,12 @@ static int lapbeth_data_indication(struct net_device *dev, struct sk_buff *skb)
+ {
+ unsigned char *ptr;
+
+- skb_push(skb, 1);
+-
+- if (skb_cow(skb, 1))
++ if (skb_cow(skb, 1)) {
++ kfree_skb(skb);
+ return NET_RX_DROP;
++ }
++
++ skb_push(skb, 1);
+
+ ptr = skb->data;
+ *ptr = X25_IFACE_DATA;
+diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-dbg-tlv.c b/drivers/net/wireless/intel/iwlwifi/iwl-dbg-tlv.c
+index bf2f00b89214..85b132a77787 100644
+--- a/drivers/net/wireless/intel/iwlwifi/iwl-dbg-tlv.c
++++ b/drivers/net/wireless/intel/iwlwifi/iwl-dbg-tlv.c
+@@ -263,6 +263,8 @@ static int iwl_dbg_tlv_alloc_trigger(struct iwl_trans *trans,
+ {
+ struct iwl_fw_ini_trigger_tlv *trig = (void *)tlv->data;
+ u32 tp = le32_to_cpu(trig->time_point);
++ struct iwl_ucode_tlv *dup = NULL;
++ int ret;
+
+ if (le32_to_cpu(tlv->length) < sizeof(*trig))
+ return -EINVAL;
+@@ -275,10 +277,20 @@ static int iwl_dbg_tlv_alloc_trigger(struct iwl_trans *trans,
+ return -EINVAL;
+ }
+
+- if (!le32_to_cpu(trig->occurrences))
++ if (!le32_to_cpu(trig->occurrences)) {
++ dup = kmemdup(tlv, sizeof(*tlv) + le32_to_cpu(tlv->length),
++ GFP_KERNEL);
++ if (!dup)
++ return -ENOMEM;
++ trig = (void *)dup->data;
+ trig->occurrences = cpu_to_le32(-1);
++ tlv = dup;
++ }
++
++ ret = iwl_dbg_tlv_add(tlv, &trans->dbg.time_point[tp].trig_list);
++ kfree(dup);
+
+- return iwl_dbg_tlv_add(tlv, &trans->dbg.time_point[tp].trig_list);
++ return ret;
+ }
+
+ static int (*dbg_tlv_alloc[])(struct iwl_trans *trans,
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7615/debugfs.c b/drivers/net/wireless/mediatek/mt76/mt7615/debugfs.c
+index b4d0795154e3..a2afd1a3c51b 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7615/debugfs.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7615/debugfs.c
+@@ -206,10 +206,11 @@ mt7615_queues_acq(struct seq_file *s, void *data)
+ int i;
+
+ for (i = 0; i < 16; i++) {
+- int j, acs = i / 4, index = i % 4;
++ int j, wmm_idx = i % MT7615_MAX_WMM_SETS;
++ int acs = i / MT7615_MAX_WMM_SETS;
+ u32 ctrl, val, qlen = 0;
+
+- val = mt76_rr(dev, MT_PLE_AC_QEMPTY(acs, index));
++ val = mt76_rr(dev, MT_PLE_AC_QEMPTY(acs, wmm_idx));
+ ctrl = BIT(31) | BIT(15) | (acs << 8);
+
+ for (j = 0; j < 32; j++) {
+@@ -217,11 +218,11 @@ mt7615_queues_acq(struct seq_file *s, void *data)
+ continue;
+
+ mt76_wr(dev, MT_PLE_FL_Q0_CTRL,
+- ctrl | (j + (index << 5)));
++ ctrl | (j + (wmm_idx << 5)));
+ qlen += mt76_get_field(dev, MT_PLE_FL_Q3_CTRL,
+ GENMASK(11, 0));
+ }
+- seq_printf(s, "AC%d%d: queued=%d\n", acs, index, qlen);
++ seq_printf(s, "AC%d%d: queued=%d\n", wmm_idx, acs, qlen);
+ }
+
+ return 0;
+diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
+index 482c6c8b0fb7..88280057e032 100644
+--- a/drivers/net/xen-netfront.c
++++ b/drivers/net/xen-netfront.c
+@@ -63,6 +63,8 @@ module_param_named(max_queues, xennet_max_queues, uint, 0644);
+ MODULE_PARM_DESC(max_queues,
+ "Maximum number of queues per virtual interface");
+
++#define XENNET_TIMEOUT (5 * HZ)
++
+ static const struct ethtool_ops xennet_ethtool_ops;
+
+ struct netfront_cb {
+@@ -1334,12 +1336,15 @@ static struct net_device *xennet_create_dev(struct xenbus_device *dev)
+
+ netif_carrier_off(netdev);
+
+- xenbus_switch_state(dev, XenbusStateInitialising);
+- wait_event(module_wq,
+- xenbus_read_driver_state(dev->otherend) !=
+- XenbusStateClosed &&
+- xenbus_read_driver_state(dev->otherend) !=
+- XenbusStateUnknown);
++ do {
++ xenbus_switch_state(dev, XenbusStateInitialising);
++ err = wait_event_timeout(module_wq,
++ xenbus_read_driver_state(dev->otherend) !=
++ XenbusStateClosed &&
++ xenbus_read_driver_state(dev->otherend) !=
++ XenbusStateUnknown, XENNET_TIMEOUT);
++ } while (!err);
++
+ return netdev;
+
+ exit:
+@@ -2139,28 +2144,43 @@ static const struct attribute_group xennet_dev_group = {
+ };
+ #endif /* CONFIG_SYSFS */
+
+-static int xennet_remove(struct xenbus_device *dev)
++static void xennet_bus_close(struct xenbus_device *dev)
+ {
+- struct netfront_info *info = dev_get_drvdata(&dev->dev);
+-
+- dev_dbg(&dev->dev, "%s\n", dev->nodename);
++ int ret;
+
+- if (xenbus_read_driver_state(dev->otherend) != XenbusStateClosed) {
++ if (xenbus_read_driver_state(dev->otherend) == XenbusStateClosed)
++ return;
++ do {
+ xenbus_switch_state(dev, XenbusStateClosing);
+- wait_event(module_wq,
+- xenbus_read_driver_state(dev->otherend) ==
+- XenbusStateClosing ||
+- xenbus_read_driver_state(dev->otherend) ==
+- XenbusStateUnknown);
++ ret = wait_event_timeout(module_wq,
++ xenbus_read_driver_state(dev->otherend) ==
++ XenbusStateClosing ||
++ xenbus_read_driver_state(dev->otherend) ==
++ XenbusStateClosed ||
++ xenbus_read_driver_state(dev->otherend) ==
++ XenbusStateUnknown,
++ XENNET_TIMEOUT);
++ } while (!ret);
++
++ if (xenbus_read_driver_state(dev->otherend) == XenbusStateClosed)
++ return;
+
++ do {
+ xenbus_switch_state(dev, XenbusStateClosed);
+- wait_event(module_wq,
+- xenbus_read_driver_state(dev->otherend) ==
+- XenbusStateClosed ||
+- xenbus_read_driver_state(dev->otherend) ==
+- XenbusStateUnknown);
+- }
++ ret = wait_event_timeout(module_wq,
++ xenbus_read_driver_state(dev->otherend) ==
++ XenbusStateClosed ||
++ xenbus_read_driver_state(dev->otherend) ==
++ XenbusStateUnknown,
++ XENNET_TIMEOUT);
++ } while (!ret);
++}
++
++static int xennet_remove(struct xenbus_device *dev)
++{
++ struct netfront_info *info = dev_get_drvdata(&dev->dev);
+
++ xennet_bus_close(dev);
+ xennet_disconnect_backend(info);
+
+ if (info->netdev->reg_state == NETREG_REGISTERED)
+diff --git a/drivers/nfc/s3fwrn5/core.c b/drivers/nfc/s3fwrn5/core.c
+index 91d4d5b28a7d..ba6c486d6465 100644
+--- a/drivers/nfc/s3fwrn5/core.c
++++ b/drivers/nfc/s3fwrn5/core.c
+@@ -198,6 +198,7 @@ int s3fwrn5_recv_frame(struct nci_dev *ndev, struct sk_buff *skb,
+ case S3FWRN5_MODE_FW:
+ return s3fwrn5_fw_recv_frame(ndev, skb);
+ default:
++ kfree_skb(skb);
+ return -ENODEV;
+ }
+ }
+diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
+index 137d7bcc1358..f7540a9e54fd 100644
+--- a/drivers/nvme/host/core.c
++++ b/drivers/nvme/host/core.c
+@@ -1106,6 +1106,9 @@ static int nvme_identify_ns_descs(struct nvme_ctrl *ctrl, unsigned nsid,
+ int pos;
+ int len;
+
++ if (ctrl->quirks & NVME_QUIRK_NO_NS_DESC_LIST)
++ return 0;
++
+ c.identify.opcode = nvme_admin_identify;
+ c.identify.nsid = cpu_to_le32(nsid);
+ c.identify.cns = NVME_ID_CNS_NS_DESC_LIST;
+@@ -1119,18 +1122,6 @@ static int nvme_identify_ns_descs(struct nvme_ctrl *ctrl, unsigned nsid,
+ if (status) {
+ dev_warn(ctrl->device,
+ "Identify Descriptors failed (%d)\n", status);
+- /*
+- * Don't treat non-retryable errors as fatal, as we potentially
+- * already have a NGUID or EUI-64. If we failed with DNR set,
+- * we want to silently ignore the error as we can still
+- * identify the device, but if the status has DNR set, we want
+- * to propagate the error back specifically for the disk
+- * revalidation flow to make sure we don't abandon the
+- * device just because of a temporal retry-able error (such
+- * as path of transport errors).
+- */
+- if (status > 0 && (status & NVME_SC_DNR))
+- status = 0;
+ goto free_data;
+ }
+
+diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
+index 46f965f8c9bc..8f1b0a30fd2a 100644
+--- a/drivers/nvme/host/nvme.h
++++ b/drivers/nvme/host/nvme.h
+@@ -126,6 +126,13 @@ enum nvme_quirks {
+ * Don't change the value of the temperature threshold feature
+ */
+ NVME_QUIRK_NO_TEMP_THRESH_CHANGE = (1 << 14),
++
++ /*
++ * The controller doesn't handle the Identify Namespace
++ * Identification Descriptor list subcommand despite claiming
++ * NVMe 1.3 compliance.
++ */
++ NVME_QUIRK_NO_NS_DESC_LIST = (1 << 15),
+ };
+
+ /*
+diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
+index 4ad629eb3bc6..10d65f27879f 100644
+--- a/drivers/nvme/host/pci.c
++++ b/drivers/nvme/host/pci.c
+@@ -3105,6 +3105,8 @@ static const struct pci_device_id nvme_id_table[] = {
+ { PCI_VDEVICE(INTEL, 0x5845), /* Qemu emulated controller */
+ .driver_data = NVME_QUIRK_IDENTIFY_CNS |
+ NVME_QUIRK_DISABLE_WRITE_ZEROES, },
++ { PCI_DEVICE(0x126f, 0x2263), /* Silicon Motion unidentified */
++ .driver_data = NVME_QUIRK_NO_NS_DESC_LIST, },
+ { PCI_DEVICE(0x1bb1, 0x0100), /* Seagate Nytro Flash Storage */
+ .driver_data = NVME_QUIRK_DELAY_BEFORE_CHK_RDY, },
+ { PCI_DEVICE(0x1c58, 0x0003), /* HGST adapter */
+diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
+index 4862fa962011..26461bf3fdcc 100644
+--- a/drivers/nvme/host/tcp.c
++++ b/drivers/nvme/host/tcp.c
+@@ -1392,6 +1392,9 @@ static int nvme_tcp_alloc_queue(struct nvme_ctrl *nctrl,
+ }
+ }
+
++ /* Set 10 seconds timeout for icresp recvmsg */
++ queue->sock->sk->sk_rcvtimeo = 10 * HZ;
++
+ queue->sock->sk->sk_allocation = GFP_ATOMIC;
+ nvme_tcp_set_queue_io_cpu(queue);
+ queue->request = NULL;
+diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
+index 5067562924f0..cd522dd3dd58 100644
+--- a/drivers/pci/quirks.c
++++ b/drivers/pci/quirks.c
+@@ -2330,6 +2330,19 @@ DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x10f1, quirk_disable_aspm_l0s);
+ DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x10f4, quirk_disable_aspm_l0s);
+ DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x1508, quirk_disable_aspm_l0s);
+
++static void quirk_disable_aspm_l0s_l1(struct pci_dev *dev)
++{
++ pci_info(dev, "Disabling ASPM L0s/L1\n");
++ pci_disable_link_state(dev, PCIE_LINK_STATE_L0S | PCIE_LINK_STATE_L1);
++}
++
++/*
++ * ASM1083/1085 PCIe-PCI bridge devices cause AER timeout errors on the
++ * upstream PCIe root port when ASPM is enabled. At least L0s mode is affected;
++ * disable both L0s and L1 for now to be safe.
++ */
++DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ASMEDIA, 0x1080, quirk_disable_aspm_l0s_l1);
++
+ /*
+ * Some Pericom PCIe-to-PCI bridges in reverse mode need the PCIe Retrain
+ * Link bit cleared after starting the link retrain process to allow this
+diff --git a/drivers/pinctrl/qcom/Kconfig b/drivers/pinctrl/qcom/Kconfig
+index c5d4428f1f94..2a1233b41aa4 100644
+--- a/drivers/pinctrl/qcom/Kconfig
++++ b/drivers/pinctrl/qcom/Kconfig
+@@ -7,6 +7,8 @@ config PINCTRL_MSM
+ select PINCONF
+ select GENERIC_PINCONF
+ select GPIOLIB_IRQCHIP
++ select IRQ_DOMAIN_HIERARCHY
++ select IRQ_FASTEOI_HIERARCHY_HANDLERS
+
+ config PINCTRL_APQ8064
+ tristate "Qualcomm APQ8064 pin controller driver"
+diff --git a/drivers/pinctrl/qcom/pinctrl-msm.c b/drivers/pinctrl/qcom/pinctrl-msm.c
+index 85858c1d56d0..4ebce5b73845 100644
+--- a/drivers/pinctrl/qcom/pinctrl-msm.c
++++ b/drivers/pinctrl/qcom/pinctrl-msm.c
+@@ -833,6 +833,52 @@ static void msm_gpio_irq_unmask(struct irq_data *d)
+ msm_gpio_irq_clear_unmask(d, false);
+ }
+
++/**
++ * msm_gpio_update_dual_edge_parent() - Prime next edge for IRQs handled by parent.
++ * @d: The irq data.
++ *
++ * This is much like msm_gpio_update_dual_edge_pos() but for IRQs that are
++ * normally handled by the parent irqchip. The logic here is slightly
++ * different due to what's easy to do with our parent, but in principle it's
++ * the same.
++ */
++static void msm_gpio_update_dual_edge_parent(struct irq_data *d)
++{
++ struct gpio_chip *gc = irq_data_get_irq_chip_data(d);
++ struct msm_pinctrl *pctrl = gpiochip_get_data(gc);
++ const struct msm_pingroup *g = &pctrl->soc->groups[d->hwirq];
++ int loop_limit = 100;
++ unsigned int val;
++ unsigned int type;
++
++ /* Read the value and make a guess about what edge we need to catch */
++ val = msm_readl_io(pctrl, g) & BIT(g->in_bit);
++ type = val ? IRQ_TYPE_EDGE_FALLING : IRQ_TYPE_EDGE_RISING;
++
++ do {
++ /* Set the parent to catch the next edge */
++ irq_chip_set_type_parent(d, type);
++
++ /*
++ * Possibly the line changed between when we last read "val"
++ * (and decided what edge we needed) and when we set the edge.
++ * If the value didn't change (or changed and then changed
++ * back) then we're done.
++ */
++ val = msm_readl_io(pctrl, g) & BIT(g->in_bit);
++ if (type == IRQ_TYPE_EDGE_RISING) {
++ if (!val)
++ return;
++ type = IRQ_TYPE_EDGE_FALLING;
++ } else if (type == IRQ_TYPE_EDGE_FALLING) {
++ if (val)
++ return;
++ type = IRQ_TYPE_EDGE_RISING;
++ }
++ } while (loop_limit-- > 0);
++ dev_warn_once(pctrl->dev, "dual-edge irq failed to stabilize\n");
++}
++
+ static void msm_gpio_irq_ack(struct irq_data *d)
+ {
+ struct gpio_chip *gc = irq_data_get_irq_chip_data(d);
+@@ -841,8 +887,11 @@ static void msm_gpio_irq_ack(struct irq_data *d)
+ unsigned long flags;
+ u32 val;
+
+- if (test_bit(d->hwirq, pctrl->skip_wake_irqs))
++ if (test_bit(d->hwirq, pctrl->skip_wake_irqs)) {
++ if (test_bit(d->hwirq, pctrl->dual_edge_irqs))
++ msm_gpio_update_dual_edge_parent(d);
+ return;
++ }
+
+ g = &pctrl->soc->groups[d->hwirq];
+
+@@ -861,6 +910,17 @@ static void msm_gpio_irq_ack(struct irq_data *d)
+ raw_spin_unlock_irqrestore(&pctrl->lock, flags);
+ }
+
++static bool msm_gpio_needs_dual_edge_parent_workaround(struct irq_data *d,
++ unsigned int type)
++{
++ struct gpio_chip *gc = irq_data_get_irq_chip_data(d);
++ struct msm_pinctrl *pctrl = gpiochip_get_data(gc);
++
++ return type == IRQ_TYPE_EDGE_BOTH &&
++ pctrl->soc->wakeirq_dual_edge_errata && d->parent_data &&
++ test_bit(d->hwirq, pctrl->skip_wake_irqs);
++}
++
+ static int msm_gpio_irq_set_type(struct irq_data *d, unsigned int type)
+ {
+ struct gpio_chip *gc = irq_data_get_irq_chip_data(d);
+@@ -869,11 +929,21 @@ static int msm_gpio_irq_set_type(struct irq_data *d, unsigned int type)
+ unsigned long flags;
+ u32 val;
+
++ if (msm_gpio_needs_dual_edge_parent_workaround(d, type)) {
++ set_bit(d->hwirq, pctrl->dual_edge_irqs);
++ irq_set_handler_locked(d, handle_fasteoi_ack_irq);
++ msm_gpio_update_dual_edge_parent(d);
++ return 0;
++ }
++
+ if (d->parent_data)
+ irq_chip_set_type_parent(d, type);
+
+- if (test_bit(d->hwirq, pctrl->skip_wake_irqs))
++ if (test_bit(d->hwirq, pctrl->skip_wake_irqs)) {
++ clear_bit(d->hwirq, pctrl->dual_edge_irqs);
++ irq_set_handler_locked(d, handle_fasteoi_irq);
+ return 0;
++ }
+
+ g = &pctrl->soc->groups[d->hwirq];
+
+diff --git a/drivers/pinctrl/qcom/pinctrl-msm.h b/drivers/pinctrl/qcom/pinctrl-msm.h
+index 9452da18a78b..7486fe08eb9b 100644
+--- a/drivers/pinctrl/qcom/pinctrl-msm.h
++++ b/drivers/pinctrl/qcom/pinctrl-msm.h
+@@ -113,6 +113,9 @@ struct msm_gpio_wakeirq_map {
+ * @pull_no_keeper: The SoC does not support keeper bias.
+ * @wakeirq_map: The map of wakeup capable GPIOs and the pin at PDC/MPM
+ * @nwakeirq_map: The number of entries in @wakeirq_map
++ * @wakeirq_dual_edge_errata: If true then GPIOs using the wakeirq_map need
++ * to be aware that their parent can't handle dual
++ * edge interrupts.
+ */
+ struct msm_pinctrl_soc_data {
+ const struct pinctrl_pin_desc *pins;
+@@ -128,6 +131,7 @@ struct msm_pinctrl_soc_data {
+ const int *reserved_gpios;
+ const struct msm_gpio_wakeirq_map *wakeirq_map;
+ unsigned int nwakeirq_map;
++ bool wakeirq_dual_edge_errata;
+ };
+
+ extern const struct dev_pm_ops msm_pinctrl_dev_pm_ops;
+diff --git a/drivers/pinctrl/qcom/pinctrl-sc7180.c b/drivers/pinctrl/qcom/pinctrl-sc7180.c
+index 1b6465a882f2..1d9acad3c1ce 100644
+--- a/drivers/pinctrl/qcom/pinctrl-sc7180.c
++++ b/drivers/pinctrl/qcom/pinctrl-sc7180.c
+@@ -1147,6 +1147,7 @@ static const struct msm_pinctrl_soc_data sc7180_pinctrl = {
+ .ntiles = ARRAY_SIZE(sc7180_tiles),
+ .wakeirq_map = sc7180_pdc_map,
+ .nwakeirq_map = ARRAY_SIZE(sc7180_pdc_map),
++ .wakeirq_dual_edge_errata = true,
+ };
+
+ static int sc7180_pinctrl_probe(struct platform_device *pdev)
+diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
+index b8b4366f1200..887b6a47f5da 100644
+--- a/drivers/scsi/scsi_lib.c
++++ b/drivers/scsi/scsi_lib.c
+@@ -564,6 +564,15 @@ static void scsi_mq_uninit_cmd(struct scsi_cmnd *cmd)
+ scsi_uninit_cmd(cmd);
+ }
+
++static void scsi_run_queue_async(struct scsi_device *sdev)
++{
++ if (scsi_target(sdev)->single_lun ||
++ !list_empty(&sdev->host->starved_list))
++ kblockd_schedule_work(&sdev->requeue_work);
++ else
++ blk_mq_run_hw_queues(sdev->request_queue, true);
++}
++
+ /* Returns false when no more bytes to process, true if there are more */
+ static bool scsi_end_request(struct request *req, blk_status_t error,
+ unsigned int bytes)
+@@ -608,11 +617,7 @@ static bool scsi_end_request(struct request *req, blk_status_t error,
+
+ __blk_mq_end_request(req, error);
+
+- if (scsi_target(sdev)->single_lun ||
+- !list_empty(&sdev->host->starved_list))
+- kblockd_schedule_work(&sdev->requeue_work);
+- else
+- blk_mq_run_hw_queues(q, true);
++ scsi_run_queue_async(sdev);
+
+ percpu_ref_put(&q->q_usage_counter);
+ return false;
+@@ -1706,6 +1711,7 @@ out_put_budget:
+ */
+ if (req->rq_flags & RQF_DONTPREP)
+ scsi_mq_uninit_cmd(cmd);
++ scsi_run_queue_async(sdev);
+ break;
+ }
+ return ret;
+diff --git a/drivers/vhost/scsi.c b/drivers/vhost/scsi.c
+index 8b104f76f324..675a83659c98 100644
+--- a/drivers/vhost/scsi.c
++++ b/drivers/vhost/scsi.c
+@@ -1215,7 +1215,7 @@ vhost_scsi_ctl_handle_vq(struct vhost_scsi *vs, struct vhost_virtqueue *vq)
+ continue;
+ }
+
+- switch (v_req.type) {
++ switch (vhost32_to_cpu(vq, v_req.type)) {
+ case VIRTIO_SCSI_T_TMF:
+ vc.req = &v_req.tmf;
+ vc.req_size = sizeof(struct virtio_scsi_ctrl_tmf_req);
+diff --git a/drivers/virtio/virtio_balloon.c b/drivers/virtio/virtio_balloon.c
+index 1f157d2f4952..67b002ade3e7 100644
+--- a/drivers/virtio/virtio_balloon.c
++++ b/drivers/virtio/virtio_balloon.c
+@@ -578,10 +578,14 @@ static int init_vqs(struct virtio_balloon *vb)
+ static u32 virtio_balloon_cmd_id_received(struct virtio_balloon *vb)
+ {
+ if (test_and_clear_bit(VIRTIO_BALLOON_CONFIG_READ_CMD_ID,
+- &vb->config_read_bitmap))
++ &vb->config_read_bitmap)) {
+ virtio_cread(vb->vdev, struct virtio_balloon_config,
+ free_page_hint_cmd_id,
+ &vb->cmd_id_received_cache);
++ /* Legacy balloon config space is LE, unlike all other devices. */
++ if (!virtio_has_feature(vb->vdev, VIRTIO_F_VERSION_1))
++ vb->cmd_id_received_cache = le32_to_cpu((__force __le32)vb->cmd_id_received_cache);
++ }
+
+ return vb->cmd_id_received_cache;
+ }
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+index d0d3efaaa4d4..4e09af1d5d22 100644
+--- a/fs/io_uring.c
++++ b/fs/io_uring.c
+@@ -4808,7 +4808,9 @@ static int io_timeout_remove_prep(struct io_kiocb *req,
+ {
+ if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
+ return -EINVAL;
+- if (sqe->flags || sqe->ioprio || sqe->buf_index || sqe->len)
++ if (unlikely(req->flags & (REQ_F_FIXED_FILE | REQ_F_BUFFER_SELECT)))
++ return -EINVAL;
++ if (sqe->ioprio || sqe->buf_index || sqe->len)
+ return -EINVAL;
+
+ req->timeout.addr = READ_ONCE(sqe->addr);
+@@ -5014,8 +5016,9 @@ static int io_async_cancel_prep(struct io_kiocb *req,
+ {
+ if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
+ return -EINVAL;
+- if (sqe->flags || sqe->ioprio || sqe->off || sqe->len ||
+- sqe->cancel_flags)
++ if (unlikely(req->flags & (REQ_F_FIXED_FILE | REQ_F_BUFFER_SELECT)))
++ return -EINVAL;
++ if (sqe->ioprio || sqe->off || sqe->len || sqe->cancel_flags)
+ return -EINVAL;
+
+ req->cancel.addr = READ_ONCE(sqe->addr);
+@@ -5033,7 +5036,9 @@ static int io_async_cancel(struct io_kiocb *req)
+ static int io_files_update_prep(struct io_kiocb *req,
+ const struct io_uring_sqe *sqe)
+ {
+- if (sqe->flags || sqe->ioprio || sqe->rw_flags)
++ if (unlikely(req->flags & (REQ_F_FIXED_FILE | REQ_F_BUFFER_SELECT)))
++ return -EINVAL;
++ if (sqe->ioprio || sqe->rw_flags)
+ return -EINVAL;
+
+ req->files_update.offset = READ_ONCE(sqe->off);
+diff --git a/include/linux/mlx5/mlx5_ifc.h b/include/linux/mlx5/mlx5_ifc.h
+index 69b27c7dfc3e..fb7fa1fc8e01 100644
+--- a/include/linux/mlx5/mlx5_ifc.h
++++ b/include/linux/mlx5/mlx5_ifc.h
+@@ -4347,6 +4347,7 @@ struct mlx5_ifc_query_vport_state_out_bits {
+ enum {
+ MLX5_VPORT_STATE_OP_MOD_VNIC_VPORT = 0x0,
+ MLX5_VPORT_STATE_OP_MOD_ESW_VPORT = 0x1,
++ MLX5_VPORT_STATE_OP_MOD_UPLINK = 0x2,
+ };
+
+ struct mlx5_ifc_arm_monitor_counter_in_bits {
+diff --git a/include/linux/rhashtable.h b/include/linux/rhashtable.h
+index 70ebef866cc8..e3def7bbe932 100644
+--- a/include/linux/rhashtable.h
++++ b/include/linux/rhashtable.h
+@@ -349,11 +349,11 @@ static inline void rht_unlock(struct bucket_table *tbl,
+ local_bh_enable();
+ }
+
+-static inline struct rhash_head __rcu *__rht_ptr(
+- struct rhash_lock_head *const *bkt)
++static inline struct rhash_head *__rht_ptr(
++ struct rhash_lock_head *p, struct rhash_lock_head __rcu *const *bkt)
+ {
+- return (struct rhash_head __rcu *)
+- ((unsigned long)*bkt & ~BIT(0) ?:
++ return (struct rhash_head *)
++ ((unsigned long)p & ~BIT(0) ?:
+ (unsigned long)RHT_NULLS_MARKER(bkt));
+ }
+
+@@ -365,25 +365,26 @@ static inline struct rhash_head __rcu *__rht_ptr(
+ * access is guaranteed, such as when destroying the table.
+ */
+ static inline struct rhash_head *rht_ptr_rcu(
+- struct rhash_lock_head *const *bkt)
++ struct rhash_lock_head *const *p)
+ {
+- struct rhash_head __rcu *p = __rht_ptr(bkt);
+-
+- return rcu_dereference(p);
++ struct rhash_lock_head __rcu *const *bkt = (void *)p;
++ return __rht_ptr(rcu_dereference(*bkt), bkt);
+ }
+
+ static inline struct rhash_head *rht_ptr(
+- struct rhash_lock_head *const *bkt,
++ struct rhash_lock_head *const *p,
+ struct bucket_table *tbl,
+ unsigned int hash)
+ {
+- return rht_dereference_bucket(__rht_ptr(bkt), tbl, hash);
++ struct rhash_lock_head __rcu *const *bkt = (void *)p;
++ return __rht_ptr(rht_dereference_bucket(*bkt, tbl, hash), bkt);
+ }
+
+ static inline struct rhash_head *rht_ptr_exclusive(
+- struct rhash_lock_head *const *bkt)
++ struct rhash_lock_head *const *p)
+ {
+- return rcu_dereference_protected(__rht_ptr(bkt), 1);
++ struct rhash_lock_head __rcu *const *bkt = (void *)p;
++ return __rht_ptr(rcu_dereference_protected(*bkt, 1), bkt);
+ }
+
+ static inline void rht_assign_locked(struct rhash_lock_head **bkt,
+diff --git a/include/net/xfrm.h b/include/net/xfrm.h
+index 03024701c79f..7b616e45fbfc 100644
+--- a/include/net/xfrm.h
++++ b/include/net/xfrm.h
+@@ -946,7 +946,7 @@ struct xfrm_dst {
+ static inline struct dst_entry *xfrm_dst_path(const struct dst_entry *dst)
+ {
+ #ifdef CONFIG_XFRM
+- if (dst->xfrm) {
++ if (dst->xfrm || (dst->flags & DST_XFRM_QUEUE)) {
+ const struct xfrm_dst *xdst = (const struct xfrm_dst *) dst;
+
+ return xdst->path;
+@@ -958,7 +958,7 @@ static inline struct dst_entry *xfrm_dst_path(const struct dst_entry *dst)
+ static inline struct dst_entry *xfrm_dst_child(const struct dst_entry *dst)
+ {
+ #ifdef CONFIG_XFRM
+- if (dst->xfrm) {
++ if (dst->xfrm || (dst->flags & DST_XFRM_QUEUE)) {
+ struct xfrm_dst *xdst = (struct xfrm_dst *) dst;
+ return xdst->child;
+ }
+@@ -1633,13 +1633,16 @@ int xfrm_policy_walk(struct net *net, struct xfrm_policy_walk *walk,
+ void *);
+ void xfrm_policy_walk_done(struct xfrm_policy_walk *walk, struct net *net);
+ int xfrm_policy_insert(int dir, struct xfrm_policy *policy, int excl);
+-struct xfrm_policy *xfrm_policy_bysel_ctx(struct net *net, u32 mark, u32 if_id,
+- u8 type, int dir,
++struct xfrm_policy *xfrm_policy_bysel_ctx(struct net *net,
++ const struct xfrm_mark *mark,
++ u32 if_id, u8 type, int dir,
+ struct xfrm_selector *sel,
+ struct xfrm_sec_ctx *ctx, int delete,
+ int *err);
+-struct xfrm_policy *xfrm_policy_byid(struct net *net, u32 mark, u32 if_id, u8,
+- int dir, u32 id, int delete, int *err);
++struct xfrm_policy *xfrm_policy_byid(struct net *net,
++ const struct xfrm_mark *mark, u32 if_id,
++ u8 type, int dir, u32 id, int delete,
++ int *err);
+ int xfrm_policy_flush(struct net *net, u8 type, bool task_valid);
+ void xfrm_policy_hash_rebuild(struct net *net);
+ u32 xfrm_get_acqseq(void);
+diff --git a/include/rdma/rdmavt_qp.h b/include/rdma/rdmavt_qp.h
+index 5fc10108703a..4814f1771120 100644
+--- a/include/rdma/rdmavt_qp.h
++++ b/include/rdma/rdmavt_qp.h
+@@ -278,6 +278,25 @@ struct rvt_rq {
+ spinlock_t lock ____cacheline_aligned_in_smp;
+ };
+
++/**
++ * rvt_get_rq_count - count the number of request work queue entries
++ * in circular buffer
++ * @rq: data structure for request queue entry
++ * @head: head indices of the circular buffer
++ * @tail: tail indices of the circular buffer
++ *
++ * Return - total number of entries in the Receive Queue
++ */
++
++static inline u32 rvt_get_rq_count(struct rvt_rq *rq, u32 head, u32 tail)
++{
++ u32 count = head - tail;
++
++ if ((s32)count < 0)
++ count += rq->size;
++ return count;
++}
++
+ /*
+ * This structure holds the information that the send tasklet needs
+ * to send a RDMA read response or atomic operation.
+diff --git a/kernel/audit.c b/kernel/audit.c
+index f711f424a28a..0aa0e00e4f83 100644
+--- a/kernel/audit.c
++++ b/kernel/audit.c
+@@ -1811,7 +1811,6 @@ struct audit_buffer *audit_log_start(struct audit_context *ctx, gfp_t gfp_mask,
+ }
+
+ audit_get_stamp(ab->ctx, &t, &serial);
+- audit_clear_dummy(ab->ctx);
+ audit_log_format(ab, "audit(%llu.%03lu:%u): ",
+ (unsigned long long)t.tv_sec, t.tv_nsec/1000000, serial);
+
+diff --git a/kernel/audit.h b/kernel/audit.h
+index f0233dc40b17..ddc22878433d 100644
+--- a/kernel/audit.h
++++ b/kernel/audit.h
+@@ -290,13 +290,6 @@ extern int audit_signal_info_syscall(struct task_struct *t);
+ extern void audit_filter_inodes(struct task_struct *tsk,
+ struct audit_context *ctx);
+ extern struct list_head *audit_killed_trees(void);
+-
+-static inline void audit_clear_dummy(struct audit_context *ctx)
+-{
+- if (ctx)
+- ctx->dummy = 0;
+-}
+-
+ #else /* CONFIG_AUDITSYSCALL */
+ #define auditsc_get_stamp(c, t, s) 0
+ #define audit_put_watch(w) {}
+@@ -330,7 +323,6 @@ static inline int audit_signal_info_syscall(struct task_struct *t)
+ }
+
+ #define audit_filter_inodes(t, c) AUDIT_DISABLED
+-#define audit_clear_dummy(c) {}
+ #endif /* CONFIG_AUDITSYSCALL */
+
+ extern char *audit_unpack_string(void **bufp, size_t *remain, size_t len);
+diff --git a/kernel/auditsc.c b/kernel/auditsc.c
+index 814406a35db1..4effe01ebbe2 100644
+--- a/kernel/auditsc.c
++++ b/kernel/auditsc.c
+@@ -1406,6 +1406,9 @@ static void audit_log_proctitle(void)
+ struct audit_context *context = audit_context();
+ struct audit_buffer *ab;
+
++ if (!context || context->dummy)
++ return;
++
+ ab = audit_log_start(context, GFP_KERNEL, AUDIT_PROCTITLE);
+ if (!ab)
+ return; /* audit_panic or being filtered */
+diff --git a/kernel/bpf/hashtab.c b/kernel/bpf/hashtab.c
+index d541c8486c95..5e1ac22adf7a 100644
+--- a/kernel/bpf/hashtab.c
++++ b/kernel/bpf/hashtab.c
+@@ -779,15 +779,20 @@ static void htab_elem_free_rcu(struct rcu_head *head)
+ htab_elem_free(htab, l);
+ }
+
+-static void free_htab_elem(struct bpf_htab *htab, struct htab_elem *l)
++static void htab_put_fd_value(struct bpf_htab *htab, struct htab_elem *l)
+ {
+ struct bpf_map *map = &htab->map;
++ void *ptr;
+
+ if (map->ops->map_fd_put_ptr) {
+- void *ptr = fd_htab_map_get_ptr(map, l);
+-
++ ptr = fd_htab_map_get_ptr(map, l);
+ map->ops->map_fd_put_ptr(ptr);
+ }
++}
++
++static void free_htab_elem(struct bpf_htab *htab, struct htab_elem *l)
++{
++ htab_put_fd_value(htab, l);
+
+ if (htab_is_prealloc(htab)) {
+ __pcpu_freelist_push(&htab->freelist, &l->fnode);
+@@ -839,6 +844,7 @@ static struct htab_elem *alloc_htab_elem(struct bpf_htab *htab, void *key,
+ */
+ pl_new = this_cpu_ptr(htab->extra_elems);
+ l_new = *pl_new;
++ htab_put_fd_value(htab, old_elem);
+ *pl_new = old_elem;
+ } else {
+ struct pcpu_freelist_node *l;
+diff --git a/net/9p/trans_fd.c b/net/9p/trans_fd.c
+index 13cd683a658a..3f67803123be 100644
+--- a/net/9p/trans_fd.c
++++ b/net/9p/trans_fd.c
+@@ -362,6 +362,10 @@ static void p9_read_work(struct work_struct *work)
+ if (m->rreq->status == REQ_STATUS_SENT) {
+ list_del(&m->rreq->req_list);
+ p9_client_cb(m->client, m->rreq, REQ_STATUS_RCVD);
++ } else if (m->rreq->status == REQ_STATUS_FLSHD) {
++ /* Ignore replies associated with a cancelled request. */
++ p9_debug(P9_DEBUG_TRANS,
++ "Ignore replies associated with a cancelled request\n");
+ } else {
+ spin_unlock(&m->client->lock);
+ p9_debug(P9_DEBUG_ERROR,
+@@ -703,11 +707,20 @@ static int p9_fd_cancelled(struct p9_client *client, struct p9_req_t *req)
+ {
+ p9_debug(P9_DEBUG_TRANS, "client %p req %p\n", client, req);
+
++ spin_lock(&client->lock);
++ /* Ignore cancelled request if message has been received
++ * before lock.
++ */
++ if (req->status == REQ_STATUS_RCVD) {
++ spin_unlock(&client->lock);
++ return 0;
++ }
++
+ /* we haven't received a response for oldreq,
+ * remove it from the list.
+ */
+- spin_lock(&client->lock);
+ list_del(&req->req_list);
++ req->status = REQ_STATUS_FLSHD;
+ spin_unlock(&client->lock);
+ p9_req_put(req);
+
+diff --git a/net/bluetooth/hci_event.c b/net/bluetooth/hci_event.c
+index b11f8d391ad8..fe75f435171c 100644
+--- a/net/bluetooth/hci_event.c
++++ b/net/bluetooth/hci_event.c
+@@ -1305,6 +1305,9 @@ static void store_pending_adv_report(struct hci_dev *hdev, bdaddr_t *bdaddr,
+ {
+ struct discovery_state *d = &hdev->discovery;
+
++ if (len > HCI_MAX_AD_LENGTH)
++ return;
++
+ bacpy(&d->last_adv_addr, bdaddr);
+ d->last_adv_addr_type = bdaddr_type;
+ d->last_adv_rssi = rssi;
+@@ -5317,7 +5320,8 @@ static struct hci_conn *check_pending_le_conn(struct hci_dev *hdev,
+
+ static void process_adv_report(struct hci_dev *hdev, u8 type, bdaddr_t *bdaddr,
+ u8 bdaddr_type, bdaddr_t *direct_addr,
+- u8 direct_addr_type, s8 rssi, u8 *data, u8 len)
++ u8 direct_addr_type, s8 rssi, u8 *data, u8 len,
++ bool ext_adv)
+ {
+ struct discovery_state *d = &hdev->discovery;
+ struct smp_irk *irk;
+@@ -5339,6 +5343,11 @@ static void process_adv_report(struct hci_dev *hdev, u8 type, bdaddr_t *bdaddr,
+ return;
+ }
+
++ if (!ext_adv && len > HCI_MAX_AD_LENGTH) {
++ bt_dev_err_ratelimited(hdev, "legacy adv larger than 31 bytes");
++ return;
++ }
++
+ /* Find the end of the data in case the report contains padded zero
+ * bytes at the end causing an invalid length value.
+ *
+@@ -5398,7 +5407,7 @@ static void process_adv_report(struct hci_dev *hdev, u8 type, bdaddr_t *bdaddr,
+ */
+ conn = check_pending_le_conn(hdev, bdaddr, bdaddr_type, type,
+ direct_addr);
+- if (conn && type == LE_ADV_IND) {
++ if (!ext_adv && conn && type == LE_ADV_IND && len <= HCI_MAX_AD_LENGTH) {
+ /* Store report for later inclusion by
+ * mgmt_device_connected
+ */
+@@ -5452,7 +5461,7 @@ static void process_adv_report(struct hci_dev *hdev, u8 type, bdaddr_t *bdaddr,
+ * event or send an immediate device found event if the data
+ * should not be stored for later.
+ */
+- if (!has_pending_adv_report(hdev)) {
++ if (!ext_adv && !has_pending_adv_report(hdev)) {
+ /* If the report will trigger a SCAN_REQ store it for
+ * later merging.
+ */
+@@ -5487,7 +5496,8 @@ static void process_adv_report(struct hci_dev *hdev, u8 type, bdaddr_t *bdaddr,
+ /* If the new report will trigger a SCAN_REQ store it for
+ * later merging.
+ */
+- if (type == LE_ADV_IND || type == LE_ADV_SCAN_IND) {
++ if (!ext_adv && (type == LE_ADV_IND ||
++ type == LE_ADV_SCAN_IND)) {
+ store_pending_adv_report(hdev, bdaddr, bdaddr_type,
+ rssi, flags, data, len);
+ return;
+@@ -5527,7 +5537,7 @@ static void hci_le_adv_report_evt(struct hci_dev *hdev, struct sk_buff *skb)
+ rssi = ev->data[ev->length];
+ process_adv_report(hdev, ev->evt_type, &ev->bdaddr,
+ ev->bdaddr_type, NULL, 0, rssi,
+- ev->data, ev->length);
++ ev->data, ev->length, false);
+ } else {
+ bt_dev_err(hdev, "Dropping invalid advertising data");
+ }
+@@ -5599,7 +5609,8 @@ static void hci_le_ext_adv_report_evt(struct hci_dev *hdev, struct sk_buff *skb)
+ if (legacy_evt_type != LE_ADV_INVALID) {
+ process_adv_report(hdev, legacy_evt_type, &ev->bdaddr,
+ ev->bdaddr_type, NULL, 0, ev->rssi,
+- ev->data, ev->length);
++ ev->data, ev->length,
++ !(evt_type & LE_EXT_ADV_LEGACY_PDU));
+ }
+
+ ptr += sizeof(*ev) + ev->length;
+@@ -5797,7 +5808,8 @@ static void hci_le_direct_adv_report_evt(struct hci_dev *hdev,
+
+ process_adv_report(hdev, ev->evt_type, &ev->bdaddr,
+ ev->bdaddr_type, &ev->direct_addr,
+- ev->direct_addr_type, ev->rssi, NULL, 0);
++ ev->direct_addr_type, ev->rssi, NULL, 0,
++ false);
+
+ ptr += sizeof(*ev);
+ }
+diff --git a/net/key/af_key.c b/net/key/af_key.c
+index b67ed3a8486c..979c579afc63 100644
+--- a/net/key/af_key.c
++++ b/net/key/af_key.c
+@@ -2400,7 +2400,7 @@ static int pfkey_spddelete(struct sock *sk, struct sk_buff *skb, const struct sa
+ return err;
+ }
+
+- xp = xfrm_policy_bysel_ctx(net, DUMMY_MARK, 0, XFRM_POLICY_TYPE_MAIN,
++ xp = xfrm_policy_bysel_ctx(net, &dummy_mark, 0, XFRM_POLICY_TYPE_MAIN,
+ pol->sadb_x_policy_dir - 1, &sel, pol_ctx,
+ 1, &err);
+ security_xfrm_policy_free(pol_ctx);
+@@ -2651,7 +2651,7 @@ static int pfkey_spdget(struct sock *sk, struct sk_buff *skb, const struct sadb_
+ return -EINVAL;
+
+ delete = (hdr->sadb_msg_type == SADB_X_SPDDELETE2);
+- xp = xfrm_policy_byid(net, DUMMY_MARK, 0, XFRM_POLICY_TYPE_MAIN,
++ xp = xfrm_policy_byid(net, &dummy_mark, 0, XFRM_POLICY_TYPE_MAIN,
+ dir, pol->sadb_x_policy_id, delete, &err);
+ if (xp == NULL)
+ return -ENOENT;
+diff --git a/net/mac80211/cfg.c b/net/mac80211/cfg.c
+index 0f72813fed53..4230b483168a 100644
+--- a/net/mac80211/cfg.c
++++ b/net/mac80211/cfg.c
+@@ -2140,6 +2140,7 @@ static int ieee80211_leave_mesh(struct wiphy *wiphy, struct net_device *dev)
+ ieee80211_stop_mesh(sdata);
+ mutex_lock(&sdata->local->mtx);
+ ieee80211_vif_release_channel(sdata);
++ kfree(sdata->u.mesh.ie);
+ mutex_unlock(&sdata->local->mtx);
+
+ return 0;
+diff --git a/net/mac80211/mesh_pathtbl.c b/net/mac80211/mesh_pathtbl.c
+index 117519bf33d6..aca608ae313f 100644
+--- a/net/mac80211/mesh_pathtbl.c
++++ b/net/mac80211/mesh_pathtbl.c
+@@ -521,6 +521,7 @@ static void mesh_path_free_rcu(struct mesh_table *tbl,
+ del_timer_sync(&mpath->timer);
+ atomic_dec(&sdata->u.mesh.mpaths);
+ atomic_dec(&tbl->entries);
++ mesh_path_flush_pending(mpath);
+ kfree_rcu(mpath, rcu);
+ }
+
+diff --git a/net/rds/recv.c b/net/rds/recv.c
+index c8404971d5ab..aba4afe4dfed 100644
+--- a/net/rds/recv.c
++++ b/net/rds/recv.c
+@@ -450,12 +450,13 @@ static int rds_still_queued(struct rds_sock *rs, struct rds_incoming *inc,
+ int rds_notify_queue_get(struct rds_sock *rs, struct msghdr *msghdr)
+ {
+ struct rds_notifier *notifier;
+- struct rds_rdma_notify cmsg = { 0 }; /* fill holes with zero */
++ struct rds_rdma_notify cmsg;
+ unsigned int count = 0, max_messages = ~0U;
+ unsigned long flags;
+ LIST_HEAD(copy);
+ int err = 0;
+
++ memset(&cmsg, 0, sizeof(cmsg)); /* fill holes with zero */
+
+ /* put_cmsg copies to user space and thus may sleep. We can't do this
+ * with rs_lock held, so first grab as many notifications as we can stuff
+diff --git a/net/sunrpc/sunrpc.h b/net/sunrpc/sunrpc.h
+index 47a756503d11..f6fe2e6cd65a 100644
+--- a/net/sunrpc/sunrpc.h
++++ b/net/sunrpc/sunrpc.h
+@@ -52,4 +52,5 @@ static inline int sock_is_loopback(struct sock *sk)
+
+ int rpc_clients_notifier_register(void);
+ void rpc_clients_notifier_unregister(void);
++void auth_domain_cleanup(void);
+ #endif /* _NET_SUNRPC_SUNRPC_H */
+diff --git a/net/sunrpc/sunrpc_syms.c b/net/sunrpc/sunrpc_syms.c
+index f9edaa9174a4..236fadc4a439 100644
+--- a/net/sunrpc/sunrpc_syms.c
++++ b/net/sunrpc/sunrpc_syms.c
+@@ -23,6 +23,7 @@
+ #include <linux/sunrpc/rpc_pipe_fs.h>
+ #include <linux/sunrpc/xprtsock.h>
+
++#include "sunrpc.h"
+ #include "netns.h"
+
+ unsigned int sunrpc_net_id;
+@@ -131,6 +132,7 @@ cleanup_sunrpc(void)
+ unregister_rpc_pipefs();
+ rpc_destroy_mempool();
+ unregister_pernet_subsys(&sunrpc_net_ops);
++ auth_domain_cleanup();
+ #if IS_ENABLED(CONFIG_SUNRPC_DEBUG)
+ rpc_unregister_sysctl();
+ #endif
+diff --git a/net/sunrpc/svcauth.c b/net/sunrpc/svcauth.c
+index 552617e3467b..998b196b6176 100644
+--- a/net/sunrpc/svcauth.c
++++ b/net/sunrpc/svcauth.c
+@@ -21,6 +21,8 @@
+
+ #include <trace/events/sunrpc.h>
+
++#include "sunrpc.h"
++
+ #define RPCDBG_FACILITY RPCDBG_AUTH
+
+
+@@ -205,3 +207,26 @@ struct auth_domain *auth_domain_find(char *name)
+ return NULL;
+ }
+ EXPORT_SYMBOL_GPL(auth_domain_find);
++
++/**
++ * auth_domain_cleanup - check that the auth_domain table is empty
++ *
++ * On module unload the auth_domain_table must be empty. To make it
++ * easier to catch bugs which don't clean up domains properly, we
++ * warn if anything remains in the table at cleanup time.
++ *
++ * Note that we cannot proactively remove the domains at this stage.
++ * The ->release() function might be in a module that has already been
++ * unloaded.
++ */
++
++void auth_domain_cleanup(void)
++{
++ int h;
++ struct auth_domain *hp;
++
++ for (h = 0; h < DN_HASHMAX; h++)
++ hlist_for_each_entry(hp, &auth_domain_table[h], hash)
++ pr_warn("svc: domain %s still present at module unload.\n",
++ hp->name);
++}
+diff --git a/net/x25/x25_subr.c b/net/x25/x25_subr.c
+index 0285aaa1e93c..3d424e80f16d 100644
+--- a/net/x25/x25_subr.c
++++ b/net/x25/x25_subr.c
+@@ -363,6 +363,12 @@ void x25_disconnect(struct sock *sk, int reason, unsigned char cause,
+ x25->neighbour = NULL;
+ read_unlock_bh(&x25_list_lock);
+ }
++ if (x25->neighbour) {
++ read_lock_bh(&x25_list_lock);
++ x25_neigh_put(x25->neighbour);
++ x25->neighbour = NULL;
++ read_unlock_bh(&x25_list_lock);
++ }
+ }
+
+ /*
+diff --git a/net/xfrm/espintcp.c b/net/xfrm/espintcp.c
+index 5a0ff665b71a..19396f3655c0 100644
+--- a/net/xfrm/espintcp.c
++++ b/net/xfrm/espintcp.c
+@@ -41,9 +41,32 @@ static void espintcp_rcv(struct strparser *strp, struct sk_buff *skb)
+ struct espintcp_ctx *ctx = container_of(strp, struct espintcp_ctx,
+ strp);
+ struct strp_msg *rxm = strp_msg(skb);
++ int len = rxm->full_len - 2;
+ u32 nonesp_marker;
+ int err;
+
++ /* keepalive packet? */
++ if (unlikely(len == 1)) {
++ u8 data;
++
++ err = skb_copy_bits(skb, rxm->offset + 2, &data, 1);
++ if (err < 0) {
++ kfree_skb(skb);
++ return;
++ }
++
++ if (data == 0xff) {
++ kfree_skb(skb);
++ return;
++ }
++ }
++
++ /* drop other short messages */
++ if (unlikely(len <= sizeof(nonesp_marker))) {
++ kfree_skb(skb);
++ return;
++ }
++
+ err = skb_copy_bits(skb, rxm->offset + 2, &nonesp_marker,
+ sizeof(nonesp_marker));
+ if (err < 0) {
+@@ -83,7 +106,7 @@ static int espintcp_parse(struct strparser *strp, struct sk_buff *skb)
+ return err;
+
+ len = be16_to_cpu(blen);
+- if (len < 6)
++ if (len < 2)
+ return -EINVAL;
+
+ return len;
+@@ -101,8 +124,11 @@ static int espintcp_recvmsg(struct sock *sk, struct msghdr *msg, size_t len,
+ flags |= nonblock ? MSG_DONTWAIT : 0;
+
+ skb = __skb_recv_datagram(sk, &ctx->ike_queue, flags, &off, &err);
+- if (!skb)
++ if (!skb) {
++ if (err == -EAGAIN && sk->sk_shutdown & RCV_SHUTDOWN)
++ return 0;
+ return err;
++ }
+
+ copied = len;
+ if (copied > skb->len)
+diff --git a/net/xfrm/xfrm_policy.c b/net/xfrm/xfrm_policy.c
+index 564aa6492e7c..6847b3579f54 100644
+--- a/net/xfrm/xfrm_policy.c
++++ b/net/xfrm/xfrm_policy.c
+@@ -1433,14 +1433,10 @@ static void xfrm_policy_requeue(struct xfrm_policy *old,
+ spin_unlock_bh(&pq->hold_queue.lock);
+ }
+
+-static bool xfrm_policy_mark_match(struct xfrm_policy *policy,
+- struct xfrm_policy *pol)
++static inline bool xfrm_policy_mark_match(const struct xfrm_mark *mark,
++ struct xfrm_policy *pol)
+ {
+- if (policy->mark.v == pol->mark.v &&
+- policy->priority == pol->priority)
+- return true;
+-
+- return false;
++ return mark->v == pol->mark.v && mark->m == pol->mark.m;
+ }
+
+ static u32 xfrm_pol_bin_key(const void *data, u32 len, u32 seed)
+@@ -1503,7 +1499,7 @@ static void xfrm_policy_insert_inexact_list(struct hlist_head *chain,
+ if (pol->type == policy->type &&
+ pol->if_id == policy->if_id &&
+ !selector_cmp(&pol->selector, &policy->selector) &&
+- xfrm_policy_mark_match(policy, pol) &&
++ xfrm_policy_mark_match(&policy->mark, pol) &&
+ xfrm_sec_ctx_match(pol->security, policy->security) &&
+ !WARN_ON(delpol)) {
+ delpol = pol;
+@@ -1538,7 +1534,7 @@ static struct xfrm_policy *xfrm_policy_insert_list(struct hlist_head *chain,
+ if (pol->type == policy->type &&
+ pol->if_id == policy->if_id &&
+ !selector_cmp(&pol->selector, &policy->selector) &&
+- xfrm_policy_mark_match(policy, pol) &&
++ xfrm_policy_mark_match(&policy->mark, pol) &&
+ xfrm_sec_ctx_match(pol->security, policy->security) &&
+ !WARN_ON(delpol)) {
+ if (excl)
+@@ -1610,9 +1606,8 @@ int xfrm_policy_insert(int dir, struct xfrm_policy *policy, int excl)
+ EXPORT_SYMBOL(xfrm_policy_insert);
+
+ static struct xfrm_policy *
+-__xfrm_policy_bysel_ctx(struct hlist_head *chain, u32 mark, u32 if_id,
+- u8 type, int dir,
+- struct xfrm_selector *sel,
++__xfrm_policy_bysel_ctx(struct hlist_head *chain, const struct xfrm_mark *mark,
++ u32 if_id, u8 type, int dir, struct xfrm_selector *sel,
+ struct xfrm_sec_ctx *ctx)
+ {
+ struct xfrm_policy *pol;
+@@ -1623,7 +1618,7 @@ __xfrm_policy_bysel_ctx(struct hlist_head *chain, u32 mark, u32 if_id,
+ hlist_for_each_entry(pol, chain, bydst) {
+ if (pol->type == type &&
+ pol->if_id == if_id &&
+- (mark & pol->mark.m) == pol->mark.v &&
++ xfrm_policy_mark_match(mark, pol) &&
+ !selector_cmp(sel, &pol->selector) &&
+ xfrm_sec_ctx_match(ctx, pol->security))
+ return pol;
+@@ -1632,11 +1627,10 @@ __xfrm_policy_bysel_ctx(struct hlist_head *chain, u32 mark, u32 if_id,
+ return NULL;
+ }
+
+-struct xfrm_policy *xfrm_policy_bysel_ctx(struct net *net, u32 mark, u32 if_id,
+- u8 type, int dir,
+- struct xfrm_selector *sel,
+- struct xfrm_sec_ctx *ctx, int delete,
+- int *err)
++struct xfrm_policy *
++xfrm_policy_bysel_ctx(struct net *net, const struct xfrm_mark *mark, u32 if_id,
++ u8 type, int dir, struct xfrm_selector *sel,
++ struct xfrm_sec_ctx *ctx, int delete, int *err)
+ {
+ struct xfrm_pol_inexact_bin *bin = NULL;
+ struct xfrm_policy *pol, *ret = NULL;
+@@ -1703,9 +1697,9 @@ struct xfrm_policy *xfrm_policy_bysel_ctx(struct net *net, u32 mark, u32 if_id,
+ }
+ EXPORT_SYMBOL(xfrm_policy_bysel_ctx);
+
+-struct xfrm_policy *xfrm_policy_byid(struct net *net, u32 mark, u32 if_id,
+- u8 type, int dir, u32 id, int delete,
+- int *err)
++struct xfrm_policy *
++xfrm_policy_byid(struct net *net, const struct xfrm_mark *mark, u32 if_id,
++ u8 type, int dir, u32 id, int delete, int *err)
+ {
+ struct xfrm_policy *pol, *ret;
+ struct hlist_head *chain;
+@@ -1720,8 +1714,7 @@ struct xfrm_policy *xfrm_policy_byid(struct net *net, u32 mark, u32 if_id,
+ ret = NULL;
+ hlist_for_each_entry(pol, chain, byidx) {
+ if (pol->type == type && pol->index == id &&
+- pol->if_id == if_id &&
+- (mark & pol->mark.m) == pol->mark.v) {
++ pol->if_id == if_id && xfrm_policy_mark_match(mark, pol)) {
+ xfrm_pol_hold(pol);
+ if (delete) {
+ *err = security_xfrm_policy_delete(
+diff --git a/net/xfrm/xfrm_user.c b/net/xfrm/xfrm_user.c
+index e6cfaa680ef3..fbb7d9d06478 100644
+--- a/net/xfrm/xfrm_user.c
++++ b/net/xfrm/xfrm_user.c
+@@ -1863,7 +1863,6 @@ static int xfrm_get_policy(struct sk_buff *skb, struct nlmsghdr *nlh,
+ struct km_event c;
+ int delete;
+ struct xfrm_mark m;
+- u32 mark = xfrm_mark_get(attrs, &m);
+ u32 if_id = 0;
+
+ p = nlmsg_data(nlh);
+@@ -1880,8 +1879,11 @@ static int xfrm_get_policy(struct sk_buff *skb, struct nlmsghdr *nlh,
+ if (attrs[XFRMA_IF_ID])
+ if_id = nla_get_u32(attrs[XFRMA_IF_ID]);
+
++ xfrm_mark_get(attrs, &m);
++
+ if (p->index)
+- xp = xfrm_policy_byid(net, mark, if_id, type, p->dir, p->index, delete, &err);
++ xp = xfrm_policy_byid(net, &m, if_id, type, p->dir,
++ p->index, delete, &err);
+ else {
+ struct nlattr *rt = attrs[XFRMA_SEC_CTX];
+ struct xfrm_sec_ctx *ctx;
+@@ -1898,8 +1900,8 @@ static int xfrm_get_policy(struct sk_buff *skb, struct nlmsghdr *nlh,
+ if (err)
+ return err;
+ }
+- xp = xfrm_policy_bysel_ctx(net, mark, if_id, type, p->dir, &p->sel,
+- ctx, delete, &err);
++ xp = xfrm_policy_bysel_ctx(net, &m, if_id, type, p->dir,
++ &p->sel, ctx, delete, &err);
+ security_xfrm_policy_free(ctx);
+ }
+ if (xp == NULL)
+@@ -2166,7 +2168,6 @@ static int xfrm_add_pol_expire(struct sk_buff *skb, struct nlmsghdr *nlh,
+ u8 type = XFRM_POLICY_TYPE_MAIN;
+ int err = -ENOENT;
+ struct xfrm_mark m;
+- u32 mark = xfrm_mark_get(attrs, &m);
+ u32 if_id = 0;
+
+ err = copy_from_user_policy_type(&type, attrs);
+@@ -2180,8 +2181,11 @@ static int xfrm_add_pol_expire(struct sk_buff *skb, struct nlmsghdr *nlh,
+ if (attrs[XFRMA_IF_ID])
+ if_id = nla_get_u32(attrs[XFRMA_IF_ID]);
+
++ xfrm_mark_get(attrs, &m);
++
+ if (p->index)
+- xp = xfrm_policy_byid(net, mark, if_id, type, p->dir, p->index, 0, &err);
++ xp = xfrm_policy_byid(net, &m, if_id, type, p->dir, p->index,
++ 0, &err);
+ else {
+ struct nlattr *rt = attrs[XFRMA_SEC_CTX];
+ struct xfrm_sec_ctx *ctx;
+@@ -2198,7 +2202,7 @@ static int xfrm_add_pol_expire(struct sk_buff *skb, struct nlmsghdr *nlh,
+ if (err)
+ return err;
+ }
+- xp = xfrm_policy_bysel_ctx(net, mark, if_id, type, p->dir,
++ xp = xfrm_policy_bysel_ctx(net, &m, if_id, type, p->dir,
+ &p->sel, ctx, 0, &err);
+ security_xfrm_policy_free(ctx);
+ }
+diff --git a/sound/pci/hda/hda_controller.h b/sound/pci/hda/hda_controller.h
+index 82e26442724b..a356fb0e5773 100644
+--- a/sound/pci/hda/hda_controller.h
++++ b/sound/pci/hda/hda_controller.h
+@@ -41,7 +41,7 @@
+ /* 24 unused */
+ #define AZX_DCAPS_COUNT_LPIB_DELAY (1 << 25) /* Take LPIB as delay */
+ #define AZX_DCAPS_PM_RUNTIME (1 << 26) /* runtime PM support */
+-/* 27 unused */
++#define AZX_DCAPS_SUSPEND_SPURIOUS_WAKEUP (1 << 27) /* Workaround for spurious wakeups after suspend */
+ #define AZX_DCAPS_CORBRP_SELF_CLEAR (1 << 28) /* CORBRP clears itself after reset */
+ #define AZX_DCAPS_NO_MSI64 (1 << 29) /* Stick to 32-bit MSIs */
+ #define AZX_DCAPS_SEPARATE_STREAM_TAG (1 << 30) /* capture and playback use separate stream tag */
+diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c
+index 11ec5c56c80e..9d14c40c07ea 100644
+--- a/sound/pci/hda/hda_intel.c
++++ b/sound/pci/hda/hda_intel.c
+@@ -298,7 +298,8 @@ enum {
+ /* PCH for HSW/BDW; with runtime PM */
+ /* no i915 binding for this as HSW/BDW has another controller for HDMI */
+ #define AZX_DCAPS_INTEL_PCH \
+- (AZX_DCAPS_INTEL_PCH_BASE | AZX_DCAPS_PM_RUNTIME)
++ (AZX_DCAPS_INTEL_PCH_BASE | AZX_DCAPS_PM_RUNTIME |\
++ AZX_DCAPS_SUSPEND_SPURIOUS_WAKEUP)
+
+ /* HSW HDMI */
+ #define AZX_DCAPS_INTEL_HASWELL \
+@@ -1028,7 +1029,14 @@ static int azx_suspend(struct device *dev)
+ chip = card->private_data;
+ bus = azx_bus(chip);
+ snd_power_change_state(card, SNDRV_CTL_POWER_D3hot);
+- pm_runtime_force_suspend(dev);
++ /* An ugly workaround: direct call of __azx_runtime_suspend() and
++ * __azx_runtime_resume() for old Intel platforms that suffer from
++ * spurious wakeups after S3 suspend
++ */
++ if (chip->driver_caps & AZX_DCAPS_SUSPEND_SPURIOUS_WAKEUP)
++ __azx_runtime_suspend(chip);
++ else
++ pm_runtime_force_suspend(dev);
+ if (bus->irq >= 0) {
+ free_irq(bus->irq, chip);
+ bus->irq = -1;
+@@ -1057,7 +1065,10 @@ static int azx_resume(struct device *dev)
+ if (azx_acquire_irq(chip, 1) < 0)
+ return -EIO;
+
+- pm_runtime_force_resume(dev);
++ if (chip->driver_caps & AZX_DCAPS_SUSPEND_SPURIOUS_WAKEUP)
++ __azx_runtime_resume(chip, false);
++ else
++ pm_runtime_force_resume(dev);
+ snd_power_change_state(card, SNDRV_CTL_POWER_D0);
+
+ trace_azx_resume(chip);
+diff --git a/sound/pci/hda/patch_hdmi.c b/sound/pci/hda/patch_hdmi.c
+index e821c9df8107..37391c3d2f47 100644
+--- a/sound/pci/hda/patch_hdmi.c
++++ b/sound/pci/hda/patch_hdmi.c
+@@ -2439,6 +2439,7 @@ static void generic_acomp_notifier_set(struct drm_audio_component *acomp,
+ mutex_lock(&spec->bind_lock);
+ spec->use_acomp_notifier = use_acomp;
+ spec->codec->relaxed_resume = use_acomp;
++ spec->codec->bus->keep_power = 0;
+ /* reprogram each jack detection logic depending on the notifier */
+ for (i = 0; i < spec->num_pins; i++)
+ reprogram_jack_detect(spec->codec,
+@@ -2533,7 +2534,6 @@ static void generic_acomp_init(struct hda_codec *codec,
+ if (!snd_hdac_acomp_init(&codec->bus->core, &spec->drm_audio_ops,
+ match_bound_vga, 0)) {
+ spec->acomp_registered = true;
+- codec->bus->keep_power = 0;
+ }
+ }
+
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 27dd8945d6e6..d8d018536484 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -5940,6 +5940,16 @@ static void alc_fixup_disable_mic_vref(struct hda_codec *codec,
+ snd_hda_codec_set_pin_target(codec, 0x19, PIN_VREFHIZ);
+ }
+
++static void alc285_fixup_hp_gpio_amp_init(struct hda_codec *codec,
++ const struct hda_fixup *fix, int action)
++{
++ if (action != HDA_FIXUP_ACT_INIT)
++ return;
++
++ msleep(100);
++ alc_write_coef_idx(codec, 0x65, 0x0);
++}
++
+ /* for hda_fixup_thinkpad_acpi() */
+ #include "thinkpad_helper.c"
+
+@@ -6117,8 +6127,10 @@ enum {
+ ALC269VC_FIXUP_ACER_VCOPPERBOX_PINS,
+ ALC269VC_FIXUP_ACER_HEADSET_MIC,
+ ALC269VC_FIXUP_ACER_MIC_NO_PRESENCE,
+- ALC289_FIXUP_ASUS_G401,
++ ALC289_FIXUP_ASUS_GA401,
++ ALC289_FIXUP_ASUS_GA502,
+ ALC256_FIXUP_ACER_MIC_NO_PRESENCE,
++ ALC285_FIXUP_HP_GPIO_AMP_INIT,
+ };
+
+ static const struct hda_fixup alc269_fixups[] = {
+@@ -7328,7 +7340,14 @@ static const struct hda_fixup alc269_fixups[] = {
+ .chained = true,
+ .chain_id = ALC269_FIXUP_HEADSET_MIC
+ },
+- [ALC289_FIXUP_ASUS_G401] = {
++ [ALC289_FIXUP_ASUS_GA401] = {
++ .type = HDA_FIXUP_PINS,
++ .v.pins = (const struct hda_pintbl[]) {
++ { 0x19, 0x03a11020 }, /* headset mic with jack detect */
++ { }
++ },
++ },
++ [ALC289_FIXUP_ASUS_GA502] = {
+ .type = HDA_FIXUP_PINS,
+ .v.pins = (const struct hda_pintbl[]) {
+ { 0x19, 0x03a11020 }, /* headset mic with jack detect */
+@@ -7344,6 +7363,12 @@ static const struct hda_fixup alc269_fixups[] = {
+ .chained = true,
+ .chain_id = ALC256_FIXUP_ASUS_HEADSET_MODE
+ },
++ [ALC285_FIXUP_HP_GPIO_AMP_INIT] = {
++ .type = HDA_FIXUP_FUNC,
++ .v.func = alc285_fixup_hp_gpio_amp_init,
++ .chained = true,
++ .chain_id = ALC285_FIXUP_HP_GPIO_LED
++ },
+ };
+
+ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+@@ -7494,7 +7519,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x103c, 0x84e7, "HP Pavilion 15", ALC269_FIXUP_HP_MUTE_LED_MIC3),
+ SND_PCI_QUIRK(0x103c, 0x869d, "HP", ALC236_FIXUP_HP_MUTE_LED),
+ SND_PCI_QUIRK(0x103c, 0x8729, "HP", ALC285_FIXUP_HP_GPIO_LED),
+- SND_PCI_QUIRK(0x103c, 0x8736, "HP", ALC285_FIXUP_HP_GPIO_LED),
++ SND_PCI_QUIRK(0x103c, 0x8736, "HP", ALC285_FIXUP_HP_GPIO_AMP_INIT),
+ SND_PCI_QUIRK(0x103c, 0x877a, "HP", ALC285_FIXUP_HP_MUTE_LED),
+ SND_PCI_QUIRK(0x103c, 0x877d, "HP", ALC236_FIXUP_HP_MUTE_LED),
+ SND_PCI_QUIRK(0x1043, 0x103e, "ASUS X540SA", ALC256_FIXUP_ASUS_MIC),
+@@ -7526,7 +7551,8 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x1043, 0x1bbd, "ASUS Z550MA", ALC255_FIXUP_ASUS_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1043, 0x1c23, "Asus X55U", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
+ SND_PCI_QUIRK(0x1043, 0x1ccd, "ASUS X555UB", ALC256_FIXUP_ASUS_MIC),
+- SND_PCI_QUIRK(0x1043, 0x1f11, "ASUS Zephyrus G14", ALC289_FIXUP_ASUS_G401),
++ SND_PCI_QUIRK(0x1043, 0x1e11, "ASUS Zephyrus G15", ALC289_FIXUP_ASUS_GA502),
++ SND_PCI_QUIRK(0x1043, 0x1f11, "ASUS Zephyrus G14", ALC289_FIXUP_ASUS_GA401),
+ SND_PCI_QUIRK(0x1043, 0x3030, "ASUS ZN270IE", ALC256_FIXUP_ASUS_AIO_GPIO2),
+ SND_PCI_QUIRK(0x1043, 0x831a, "ASUS P901", ALC269_FIXUP_STEREO_DMIC),
+ SND_PCI_QUIRK(0x1043, 0x834a, "ASUS S101", ALC269_FIXUP_STEREO_DMIC),
+@@ -7546,7 +7572,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x10cf, 0x1629, "Lifebook U7x7", ALC255_FIXUP_LIFEBOOK_U7x7_HEADSET_MIC),
+ SND_PCI_QUIRK(0x10cf, 0x1845, "Lifebook U904", ALC269_FIXUP_LIFEBOOK_EXTMIC),
+ SND_PCI_QUIRK(0x10ec, 0x10f2, "Intel Reference board", ALC700_FIXUP_INTEL_REFERENCE),
+- SND_PCI_QUIRK(0x10ec, 0x1230, "Intel Reference board", ALC225_FIXUP_HEADSET_JACK),
++ SND_PCI_QUIRK(0x10ec, 0x1230, "Intel Reference board", ALC295_FIXUP_CHROME_BOOK),
+ SND_PCI_QUIRK(0x10f7, 0x8338, "Panasonic CF-SZ6", ALC269_FIXUP_HEADSET_MODE),
+ SND_PCI_QUIRK(0x144d, 0xc109, "Samsung Ativ book 9 (NP900X3G)", ALC269_FIXUP_INV_DMIC),
+ SND_PCI_QUIRK(0x144d, 0xc169, "Samsung Notebook 9 Pen (NP930SBE-K01US)", ALC298_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET),
+diff --git a/sound/usb/pcm.c b/sound/usb/pcm.c
+index 9702c4311b91..0247162a9fbf 100644
+--- a/sound/usb/pcm.c
++++ b/sound/usb/pcm.c
+@@ -367,6 +367,7 @@ static int set_sync_ep_implicit_fb_quirk(struct snd_usb_substream *subs,
+ ifnum = 0;
+ goto add_sync_ep_from_ifnum;
+ case USB_ID(0x07fd, 0x0008): /* MOTU M Series */
++ case USB_ID(0x31e9, 0x0001): /* Solid State Logic SSL2 */
+ case USB_ID(0x31e9, 0x0002): /* Solid State Logic SSL2+ */
+ case USB_ID(0x0d9a, 0x00df): /* RTX6001 */
+ ep = 0x81;
+diff --git a/tools/lib/traceevent/plugins/Makefile b/tools/lib/traceevent/plugins/Makefile
+index 349bb81482ab..680d883efe05 100644
+--- a/tools/lib/traceevent/plugins/Makefile
++++ b/tools/lib/traceevent/plugins/Makefile
+@@ -197,7 +197,7 @@ define do_generate_dynamic_list_file
+ xargs echo "U w W" | tr 'w ' 'W\n' | sort -u | xargs echo`;\
+ if [ "$$symbol_type" = "U W" ];then \
+ (echo '{'; \
+- $(NM) -u -D $1 | awk 'NF>1 {print "\t"$$2";"}' | sort -u;\
++ $(NM) -u -D $1 | awk 'NF>1 {sub("@.*", "", $$2); print "\t"$$2";"}' | sort -u;\
+ echo '};'; \
+ ) > $2; \
+ else \
+diff --git a/tools/perf/arch/arm/util/auxtrace.c b/tools/perf/arch/arm/util/auxtrace.c
+index 0a6e75b8777a..28a5d0c18b1d 100644
+--- a/tools/perf/arch/arm/util/auxtrace.c
++++ b/tools/perf/arch/arm/util/auxtrace.c
+@@ -56,7 +56,7 @@ struct auxtrace_record
+ struct perf_pmu *cs_etm_pmu;
+ struct evsel *evsel;
+ bool found_etm = false;
+- bool found_spe = false;
++ struct perf_pmu *found_spe = NULL;
+ static struct perf_pmu **arm_spe_pmus = NULL;
+ static int nr_spes = 0;
+ int i = 0;
+@@ -74,12 +74,12 @@ struct auxtrace_record
+ evsel->core.attr.type == cs_etm_pmu->type)
+ found_etm = true;
+
+- if (!nr_spes)
++ if (!nr_spes || found_spe)
+ continue;
+
+ for (i = 0; i < nr_spes; i++) {
+ if (evsel->core.attr.type == arm_spe_pmus[i]->type) {
+- found_spe = true;
++ found_spe = arm_spe_pmus[i];
+ break;
+ }
+ }
+@@ -96,7 +96,7 @@ struct auxtrace_record
+
+ #if defined(__aarch64__)
+ if (found_spe)
+- return arm_spe_recording_init(err, arm_spe_pmus[i]);
++ return arm_spe_recording_init(err, found_spe);
+ #endif
+
+ /*
+diff --git a/tools/testing/selftests/bpf/test_offload.py b/tools/testing/selftests/bpf/test_offload.py
+index 8294ae3ffb3c..43c9cda199b8 100755
+--- a/tools/testing/selftests/bpf/test_offload.py
++++ b/tools/testing/selftests/bpf/test_offload.py
+@@ -318,6 +318,9 @@ class DebugfsDir:
+ continue
+
+ if os.path.isfile(p):
++ # We need to init trap_flow_action_cookie before read it
++ if f == "trap_flow_action_cookie":
++ cmd('echo deadbeef > %s/%s' % (path, f))
+ _, out = cmd('cat %s/%s' % (path, f))
+ dfs[f] = out.strip()
+ elif os.path.isdir(p):
+diff --git a/tools/testing/selftests/net/fib_nexthop_multiprefix.sh b/tools/testing/selftests/net/fib_nexthop_multiprefix.sh
+index 9dc35a16e415..51df5e305855 100755
+--- a/tools/testing/selftests/net/fib_nexthop_multiprefix.sh
++++ b/tools/testing/selftests/net/fib_nexthop_multiprefix.sh
+@@ -144,7 +144,7 @@ setup()
+
+ cleanup()
+ {
+- for n in h1 r1 h2 h3 h4
++ for n in h0 r1 h1 h2 h3
+ do
+ ip netns del ${n} 2>/dev/null
+ done
+diff --git a/tools/testing/selftests/net/forwarding/ethtool.sh b/tools/testing/selftests/net/forwarding/ethtool.sh
+index eb8e2a23bbb4..43a948feed26 100755
+--- a/tools/testing/selftests/net/forwarding/ethtool.sh
++++ b/tools/testing/selftests/net/forwarding/ethtool.sh
+@@ -252,8 +252,6 @@ check_highest_speed_is_chosen()
+ fi
+
+ local -a speeds_arr=($(common_speeds_get $h1 $h2 0 1))
+- # Remove the first speed, h1 does not advertise this speed.
+- unset speeds_arr[0]
+
+ max_speed=${speeds_arr[0]}
+ for current in ${speeds_arr[@]}; do
+diff --git a/tools/testing/selftests/net/ip_defrag.sh b/tools/testing/selftests/net/ip_defrag.sh
+index 15d3489ecd9c..ceb7ad4dbd94 100755
+--- a/tools/testing/selftests/net/ip_defrag.sh
++++ b/tools/testing/selftests/net/ip_defrag.sh
+@@ -6,6 +6,8 @@
+ set +x
+ set -e
+
++modprobe -q nf_defrag_ipv6
++
+ readonly NETNS="ns-$(mktemp -u XXXXXX)"
+
+ setup() {
+diff --git a/tools/testing/selftests/net/psock_fanout.c b/tools/testing/selftests/net/psock_fanout.c
+index 8c8c7d79c38d..2c522f7a0aec 100644
+--- a/tools/testing/selftests/net/psock_fanout.c
++++ b/tools/testing/selftests/net/psock_fanout.c
+@@ -350,7 +350,8 @@ static int test_datapath(uint16_t typeflags, int port_off,
+ int fds[2], fds_udp[2][2], ret;
+
+ fprintf(stderr, "\ntest: datapath 0x%hx ports %hu,%hu\n",
+- typeflags, PORT_BASE, PORT_BASE + port_off);
++ typeflags, (uint16_t)PORT_BASE,
++ (uint16_t)(PORT_BASE + port_off));
+
+ fds[0] = sock_fanout_open(typeflags, 0);
+ fds[1] = sock_fanout_open(typeflags, 0);
+diff --git a/tools/testing/selftests/net/rxtimestamp.c b/tools/testing/selftests/net/rxtimestamp.c
+index 422e7761254d..bcb79ba1f214 100644
+--- a/tools/testing/selftests/net/rxtimestamp.c
++++ b/tools/testing/selftests/net/rxtimestamp.c
+@@ -329,8 +329,7 @@ int main(int argc, char **argv)
+ bool all_tests = true;
+ int arg_index = 0;
+ int failures = 0;
+- int s, t;
+- char opt;
++ int s, t, opt;
+
+ while ((opt = getopt_long(argc, argv, "", long_options,
+ &arg_index)) != -1) {
+diff --git a/tools/testing/selftests/net/so_txtime.c b/tools/testing/selftests/net/so_txtime.c
+index ceaad78e9667..3155fbbf644b 100644
+--- a/tools/testing/selftests/net/so_txtime.c
++++ b/tools/testing/selftests/net/so_txtime.c
+@@ -121,7 +121,7 @@ static bool do_recv_one(int fdr, struct timed_send *ts)
+ if (rbuf[0] != ts->data)
+ error(1, 0, "payload mismatch. expected %c", ts->data);
+
+- if (labs(tstop - texpect) > cfg_variance_us)
++ if (llabs(tstop - texpect) > cfg_variance_us)
+ error(1, 0, "exceeds variance (%d us)", cfg_variance_us);
+
+ return false;
+diff --git a/tools/testing/selftests/net/tcp_mmap.c b/tools/testing/selftests/net/tcp_mmap.c
+index 4555f88252ba..a61b7b3da549 100644
+--- a/tools/testing/selftests/net/tcp_mmap.c
++++ b/tools/testing/selftests/net/tcp_mmap.c
+@@ -344,7 +344,7 @@ int main(int argc, char *argv[])
+ {
+ struct sockaddr_storage listenaddr, addr;
+ unsigned int max_pacing_rate = 0;
+- size_t total = 0;
++ uint64_t total = 0;
+ char *host = NULL;
+ int fd, c, on = 1;
+ char *buffer;
+@@ -473,12 +473,12 @@ int main(int argc, char *argv[])
+ zflg = 0;
+ }
+ while (total < FILE_SZ) {
+- ssize_t wr = FILE_SZ - total;
++ int64_t wr = FILE_SZ - total;
+
+ if (wr > chunk_size)
+ wr = chunk_size;
+ /* Note : we just want to fill the pipe with 0 bytes */
+- wr = send(fd, buffer, wr, zflg ? MSG_ZEROCOPY : 0);
++ wr = send(fd, buffer, (size_t)wr, zflg ? MSG_ZEROCOPY : 0);
+ if (wr <= 0)
+ break;
+ total += wr;
+diff --git a/tools/testing/selftests/net/txtimestamp.sh b/tools/testing/selftests/net/txtimestamp.sh
+index eea6f5193693..31637769f59f 100755
+--- a/tools/testing/selftests/net/txtimestamp.sh
++++ b/tools/testing/selftests/net/txtimestamp.sh
+@@ -75,7 +75,7 @@ main() {
+ fi
+ }
+
+-if [[ "$(ip netns identify)" == "root" ]]; then
++if [[ -z "$(ip netns identify)" ]]; then
+ ./in_netns.sh $0 $@
+ else
+ main $@
+diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
+index e3b9ee268823..8a9d13e8e904 100644
+--- a/virt/kvm/arm/mmu.c
++++ b/virt/kvm/arm/mmu.c
+@@ -1198,7 +1198,7 @@ static bool stage2_get_leaf_entry(struct kvm *kvm, phys_addr_t addr,
+ return true;
+ }
+
+-static bool stage2_is_exec(struct kvm *kvm, phys_addr_t addr)
++static bool stage2_is_exec(struct kvm *kvm, phys_addr_t addr, unsigned long sz)
+ {
+ pud_t *pudp;
+ pmd_t *pmdp;
+@@ -1210,11 +1210,11 @@ static bool stage2_is_exec(struct kvm *kvm, phys_addr_t addr)
+ return false;
+
+ if (pudp)
+- return kvm_s2pud_exec(pudp);
++ return sz <= PUD_SIZE && kvm_s2pud_exec(pudp);
+ else if (pmdp)
+- return kvm_s2pmd_exec(pmdp);
++ return sz <= PMD_SIZE && kvm_s2pmd_exec(pmdp);
+ else
+- return kvm_s2pte_exec(ptep);
++ return sz == PAGE_SIZE && kvm_s2pte_exec(ptep);
+ }
+
+ static int stage2_set_pte(struct kvm *kvm, struct kvm_mmu_memory_cache *cache,
+@@ -1801,7 +1801,8 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
+ * execute permissions, and we preserve whatever we have.
+ */
+ needs_exec = exec_fault ||
+- (fault_status == FSC_PERM && stage2_is_exec(kvm, fault_ipa));
++ (fault_status == FSC_PERM &&
++ stage2_is_exec(kvm, fault_ipa, vma_pagesize));
+
+ if (vma_pagesize == PUD_SIZE) {
+ pud_t new_pud = kvm_pfn_pud(pfn, mem_type);
^ permalink raw reply related [flat|nested] 25+ messages in thread
* [gentoo-commits] proj/linux-patches:5.7 commit in: /
@ 2020-08-07 12:13 Alice Ferrazzi
0 siblings, 0 replies; 25+ messages in thread
From: Alice Ferrazzi @ 2020-08-07 12:13 UTC (permalink / raw
To: gentoo-commits
commit: 56c5495f139f487a3030babb821d312d91a71475
Author: Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
AuthorDate: Fri Aug 7 12:12:33 2020 +0000
Commit: Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
CommitDate: Fri Aug 7 12:12:50 2020 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=56c5495f
Linux patch 5.7.14
Signed-off-by: Alice Ferrazzi <alicef <AT> gentoo.org>
0000_README | 4 +
1013_linux-5.7.14.patch | 294 ++++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 298 insertions(+)
diff --git a/0000_README b/0000_README
index a388fef..ff8860b 100644
--- a/0000_README
+++ b/0000_README
@@ -95,6 +95,10 @@ Patch: 1012_linux-5.7.13.patch
From: http://www.kernel.org
Desc: Linux 5.7.13
+Patch: 1013_linux-5.7.14.patch
+From: http://www.kernel.org
+Desc: Linux 5.7.14
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1013_linux-5.7.14.patch b/1013_linux-5.7.14.patch
new file mode 100644
index 0000000..92e5caa
--- /dev/null
+++ b/1013_linux-5.7.14.patch
@@ -0,0 +1,294 @@
+diff --git a/Makefile b/Makefile
+index b77b4332a41a..70942a6541d8 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 7
+-SUBLEVEL = 13
++SUBLEVEL = 14
+ EXTRAVERSION =
+ NAME = Kleptomaniac Octopus
+
+diff --git a/arch/arm/include/asm/percpu.h b/arch/arm/include/asm/percpu.h
+index f44f448537f2..1a3eedbac4a2 100644
+--- a/arch/arm/include/asm/percpu.h
++++ b/arch/arm/include/asm/percpu.h
+@@ -5,6 +5,8 @@
+ #ifndef _ASM_ARM_PERCPU_H_
+ #define _ASM_ARM_PERCPU_H_
+
++#include <asm/thread_info.h>
++
+ /*
+ * Same as asm-generic/percpu.h, except that we store the per cpu offset
+ * in the TPIDRPRW. TPIDRPRW only exists on V6K and V7
+diff --git a/arch/arm64/include/asm/archrandom.h b/arch/arm64/include/asm/archrandom.h
+index fc1594a0710e..44209f6146aa 100644
+--- a/arch/arm64/include/asm/archrandom.h
++++ b/arch/arm64/include/asm/archrandom.h
+@@ -6,7 +6,6 @@
+
+ #include <linux/bug.h>
+ #include <linux/kernel.h>
+-#include <linux/random.h>
+ #include <asm/cpufeature.h>
+
+ static inline bool __arm64_rndr(unsigned long *v)
+diff --git a/arch/arm64/include/asm/pointer_auth.h b/arch/arm64/include/asm/pointer_auth.h
+index c6b4f0603024..be7f853738e6 100644
+--- a/arch/arm64/include/asm/pointer_auth.h
++++ b/arch/arm64/include/asm/pointer_auth.h
+@@ -3,7 +3,6 @@
+ #define __ASM_POINTER_AUTH_H
+
+ #include <linux/bitops.h>
+-#include <linux/random.h>
+
+ #include <asm/cpufeature.h>
+ #include <asm/memory.h>
+@@ -34,6 +33,13 @@ struct ptrauth_keys_kernel {
+ struct ptrauth_key apia;
+ };
+
++/*
++ * Only include random.h once ptrauth_keys_* structures are defined
++ * to avoid yet another circular include hell (random.h * ends up
++ * including asm/smp.h, which requires ptrauth_keys_kernel).
++ */
++#include <linux/random.h>
++
+ static inline void ptrauth_keys_init_user(struct ptrauth_keys_user *keys)
+ {
+ if (system_supports_address_auth()) {
+diff --git a/arch/arm64/kernel/kaslr.c b/arch/arm64/kernel/kaslr.c
+index 91a83104c6e8..e2101440c314 100644
+--- a/arch/arm64/kernel/kaslr.c
++++ b/arch/arm64/kernel/kaslr.c
+@@ -10,8 +10,8 @@
+ #include <linux/mm_types.h>
+ #include <linux/sched.h>
+ #include <linux/types.h>
++#include <linux/random.h>
+
+-#include <asm/archrandom.h>
+ #include <asm/cacheflush.h>
+ #include <asm/fixmap.h>
+ #include <asm/kernel-pgtable.h>
+diff --git a/drivers/char/random.c b/drivers/char/random.c
+index 0d10e31fd342..344a57ebb35e 100644
+--- a/drivers/char/random.c
++++ b/drivers/char/random.c
+@@ -1277,6 +1277,7 @@ void add_interrupt_randomness(int irq, int irq_flags)
+
+ fast_mix(fast_pool);
+ add_interrupt_bench(cycles);
++ this_cpu_add(net_rand_state.s1, fast_pool->pool[cycles & 3]);
+
+ if (unlikely(crng_init == 0)) {
+ if ((fast_pool->count >= 64) &&
+diff --git a/include/linux/prandom.h b/include/linux/prandom.h
+new file mode 100644
+index 000000000000..aa16e6468f91
+--- /dev/null
++++ b/include/linux/prandom.h
+@@ -0,0 +1,78 @@
++/* SPDX-License-Identifier: GPL-2.0 */
++/*
++ * include/linux/prandom.h
++ *
++ * Include file for the fast pseudo-random 32-bit
++ * generation.
++ */
++#ifndef _LINUX_PRANDOM_H
++#define _LINUX_PRANDOM_H
++
++#include <linux/types.h>
++#include <linux/percpu.h>
++
++u32 prandom_u32(void);
++void prandom_bytes(void *buf, size_t nbytes);
++void prandom_seed(u32 seed);
++void prandom_reseed_late(void);
++
++struct rnd_state {
++ __u32 s1, s2, s3, s4;
++};
++
++DECLARE_PER_CPU(struct rnd_state, net_rand_state);
++
++u32 prandom_u32_state(struct rnd_state *state);
++void prandom_bytes_state(struct rnd_state *state, void *buf, size_t nbytes);
++void prandom_seed_full_state(struct rnd_state __percpu *pcpu_state);
++
++#define prandom_init_once(pcpu_state) \
++ DO_ONCE(prandom_seed_full_state, (pcpu_state))
++
++/**
++ * prandom_u32_max - returns a pseudo-random number in interval [0, ep_ro)
++ * @ep_ro: right open interval endpoint
++ *
++ * Returns a pseudo-random number that is in interval [0, ep_ro). Note
++ * that the result depends on PRNG being well distributed in [0, ~0U]
++ * u32 space. Here we use maximally equidistributed combined Tausworthe
++ * generator, that is, prandom_u32(). This is useful when requesting a
++ * random index of an array containing ep_ro elements, for example.
++ *
++ * Returns: pseudo-random number in interval [0, ep_ro)
++ */
++static inline u32 prandom_u32_max(u32 ep_ro)
++{
++ return (u32)(((u64) prandom_u32() * ep_ro) >> 32);
++}
++
++/*
++ * Handle minimum values for seeds
++ */
++static inline u32 __seed(u32 x, u32 m)
++{
++ return (x < m) ? x + m : x;
++}
++
++/**
++ * prandom_seed_state - set seed for prandom_u32_state().
++ * @state: pointer to state structure to receive the seed.
++ * @seed: arbitrary 64-bit value to use as a seed.
++ */
++static inline void prandom_seed_state(struct rnd_state *state, u64 seed)
++{
++ u32 i = (seed >> 32) ^ (seed << 10) ^ seed;
++
++ state->s1 = __seed(i, 2U);
++ state->s2 = __seed(i, 8U);
++ state->s3 = __seed(i, 16U);
++ state->s4 = __seed(i, 128U);
++}
++
++/* Pseudo random number generator from numerical recipes. */
++static inline u32 next_pseudo_random32(u32 seed)
++{
++ return seed * 1664525 + 1013904223;
++}
++
++#endif
+diff --git a/include/linux/random.h b/include/linux/random.h
+index 45e1f8fa742b..f45b8be3e3c4 100644
+--- a/include/linux/random.h
++++ b/include/linux/random.h
+@@ -110,61 +110,12 @@ declare_get_random_var_wait(long)
+
+ unsigned long randomize_page(unsigned long start, unsigned long range);
+
+-u32 prandom_u32(void);
+-void prandom_bytes(void *buf, size_t nbytes);
+-void prandom_seed(u32 seed);
+-void prandom_reseed_late(void);
+-
+-struct rnd_state {
+- __u32 s1, s2, s3, s4;
+-};
+-
+-u32 prandom_u32_state(struct rnd_state *state);
+-void prandom_bytes_state(struct rnd_state *state, void *buf, size_t nbytes);
+-void prandom_seed_full_state(struct rnd_state __percpu *pcpu_state);
+-
+-#define prandom_init_once(pcpu_state) \
+- DO_ONCE(prandom_seed_full_state, (pcpu_state))
+-
+-/**
+- * prandom_u32_max - returns a pseudo-random number in interval [0, ep_ro)
+- * @ep_ro: right open interval endpoint
+- *
+- * Returns a pseudo-random number that is in interval [0, ep_ro). Note
+- * that the result depends on PRNG being well distributed in [0, ~0U]
+- * u32 space. Here we use maximally equidistributed combined Tausworthe
+- * generator, that is, prandom_u32(). This is useful when requesting a
+- * random index of an array containing ep_ro elements, for example.
+- *
+- * Returns: pseudo-random number in interval [0, ep_ro)
+- */
+-static inline u32 prandom_u32_max(u32 ep_ro)
+-{
+- return (u32)(((u64) prandom_u32() * ep_ro) >> 32);
+-}
+-
+ /*
+- * Handle minimum values for seeds
++ * This is designed to be standalone for just prandom
++ * users, but for now we include it from <linux/random.h>
++ * for legacy reasons.
+ */
+-static inline u32 __seed(u32 x, u32 m)
+-{
+- return (x < m) ? x + m : x;
+-}
+-
+-/**
+- * prandom_seed_state - set seed for prandom_u32_state().
+- * @state: pointer to state structure to receive the seed.
+- * @seed: arbitrary 64-bit value to use as a seed.
+- */
+-static inline void prandom_seed_state(struct rnd_state *state, u64 seed)
+-{
+- u32 i = (seed >> 32) ^ (seed << 10) ^ seed;
+-
+- state->s1 = __seed(i, 2U);
+- state->s2 = __seed(i, 8U);
+- state->s3 = __seed(i, 16U);
+- state->s4 = __seed(i, 128U);
+-}
++#include <linux/prandom.h>
+
+ #ifdef CONFIG_ARCH_RANDOM
+ # include <asm/archrandom.h>
+@@ -207,10 +158,4 @@ static inline bool __init arch_get_random_long_early(unsigned long *v)
+ }
+ #endif
+
+-/* Pseudo random number generator from numerical recipes. */
+-static inline u32 next_pseudo_random32(u32 seed)
+-{
+- return seed * 1664525 + 1013904223;
+-}
+-
+ #endif /* _LINUX_RANDOM_H */
+diff --git a/kernel/time/timer.c b/kernel/time/timer.c
+index 03c9fc395ab1..721d5af8cfc7 100644
+--- a/kernel/time/timer.c
++++ b/kernel/time/timer.c
+@@ -43,6 +43,7 @@
+ #include <linux/sched/debug.h>
+ #include <linux/slab.h>
+ #include <linux/compat.h>
++#include <linux/random.h>
+
+ #include <linux/uaccess.h>
+ #include <asm/unistd.h>
+@@ -1743,6 +1744,13 @@ void update_process_times(int user_tick)
+ scheduler_tick();
+ if (IS_ENABLED(CONFIG_POSIX_TIMERS))
+ run_posix_cpu_timers();
++
++ /* The current CPU might make use of net randoms without receiving IRQs
++ * to renew them often enough. Let's update the net_rand_state from a
++ * non-constant value that's not affine to the number of calls to make
++ * sure it's updated when there's some activity (we don't care in idle).
++ */
++ this_cpu_add(net_rand_state.s1, rol32(jiffies, 24) + user_tick);
+ }
+
+ /**
+diff --git a/lib/random32.c b/lib/random32.c
+index 763b920a6206..3d749abb9e80 100644
+--- a/lib/random32.c
++++ b/lib/random32.c
+@@ -48,7 +48,7 @@ static inline void prandom_state_selftest(void)
+ }
+ #endif
+
+-static DEFINE_PER_CPU(struct rnd_state, net_rand_state) __latent_entropy;
++DEFINE_PER_CPU(struct rnd_state, net_rand_state);
+
+ /**
+ * prandom_u32_state - seeded pseudo-random number generator.
* [gentoo-commits] proj/linux-patches:5.7 commit in: /
@ 2020-08-12 23:32 Alice Ferrazzi
0 siblings, 0 replies; 25+ messages in thread
From: Alice Ferrazzi @ 2020-08-12 23:32 UTC (permalink / raw
To: gentoo-commits
commit: b87ca498ee3e5898428a247b44e2ab5629ce4a79
Author: Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
AuthorDate: Wed Aug 12 23:32:06 2020 +0000
Commit: Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
CommitDate: Wed Aug 12 23:32:13 2020 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=b87ca498
Linux patch 5.7.15
Signed-off-by: Alice Ferrazzi <alicef <AT> gentoo.org>
0000_README | 4 +
1014_linux-5.7.15.patch | 2993 +++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 2997 insertions(+)
diff --git a/0000_README b/0000_README
index ff8860b..dc0ff9b 100644
--- a/0000_README
+++ b/0000_README
@@ -99,6 +99,10 @@ Patch: 1013_linux-5.7.14.patch
From: http://www.kernel.org
Desc: Linux 5.7.14
+Patch: 1014_linux-5.7.15.patch
+From: http://www.kernel.org
+Desc: Linux 5.7.15
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1014_linux-5.7.15.patch b/1014_linux-5.7.15.patch
new file mode 100644
index 0000000..56dcf52
--- /dev/null
+++ b/1014_linux-5.7.15.patch
@@ -0,0 +1,2993 @@
+diff --git a/Makefile b/Makefile
+index 70942a6541d8..a2fbdb4c952d 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 7
+-SUBLEVEL = 14
++SUBLEVEL = 15
+ EXTRAVERSION =
+ NAME = Kleptomaniac Octopus
+
+diff --git a/arch/arm64/kernel/kaslr.c b/arch/arm64/kernel/kaslr.c
+index e2101440c314..b892670293a9 100644
+--- a/arch/arm64/kernel/kaslr.c
++++ b/arch/arm64/kernel/kaslr.c
+@@ -84,6 +84,7 @@ u64 __init kaslr_early_init(u64 dt_phys)
+ void *fdt;
+ u64 seed, offset, mask, module_range;
+ const u8 *cmdline, *str;
++ unsigned long raw;
+ int size;
+
+ /*
+@@ -122,15 +123,12 @@ u64 __init kaslr_early_init(u64 dt_phys)
+ }
+
+ /*
+- * Mix in any entropy obtainable architecturally, open coded
+- * since this runs extremely early.
++ * Mix in any entropy obtainable architecturally if enabled
++ * and supported.
+ */
+- if (__early_cpu_has_rndr()) {
+- unsigned long raw;
+
+- if (__arm64_rndr(&raw))
+- seed ^= raw;
+- }
++ if (arch_get_random_seed_long_early(&raw))
++ seed ^= raw;
+
+ if (!seed) {
+ kaslr_status = KASLR_DISABLED_NO_SEED;
+diff --git a/arch/powerpc/include/asm/kasan.h b/arch/powerpc/include/asm/kasan.h
+index 4769bbf7173a..fc900937f653 100644
+--- a/arch/powerpc/include/asm/kasan.h
++++ b/arch/powerpc/include/asm/kasan.h
+@@ -27,10 +27,12 @@
+
+ #ifdef CONFIG_KASAN
+ void kasan_early_init(void);
++void kasan_mmu_init(void);
+ void kasan_init(void);
+ void kasan_late_init(void);
+ #else
+ static inline void kasan_init(void) { }
++static inline void kasan_mmu_init(void) { }
+ static inline void kasan_late_init(void) { }
+ #endif
+
+diff --git a/arch/powerpc/mm/init_32.c b/arch/powerpc/mm/init_32.c
+index a6991ef8727d..872df48ae41b 100644
+--- a/arch/powerpc/mm/init_32.c
++++ b/arch/powerpc/mm/init_32.c
+@@ -170,6 +170,8 @@ void __init MMU_init(void)
+ btext_unmap();
+ #endif
+
++ kasan_mmu_init();
++
+ setup_kup();
+
+ /* Shortly after that, the entire linear mapping will be available */
+diff --git a/arch/powerpc/mm/kasan/kasan_init_32.c b/arch/powerpc/mm/kasan/kasan_init_32.c
+index b7c287adfd59..8b15fe09b967 100644
+--- a/arch/powerpc/mm/kasan/kasan_init_32.c
++++ b/arch/powerpc/mm/kasan/kasan_init_32.c
+@@ -131,7 +131,7 @@ static void __init kasan_unmap_early_shadow_vmalloc(void)
+ flush_tlb_kernel_range(k_start, k_end);
+ }
+
+-static void __init kasan_mmu_init(void)
++void __init kasan_mmu_init(void)
+ {
+ int ret;
+ struct memblock_region *reg;
+@@ -159,8 +159,6 @@ static void __init kasan_mmu_init(void)
+
+ void __init kasan_init(void)
+ {
+- kasan_mmu_init();
+-
+ kasan_remap_early_shadow_ro();
+
+ clear_page(kasan_early_shadow_page);
+diff --git a/drivers/android/binder.c b/drivers/android/binder.c
+index f50c5f182bb5..5b310eea9e52 100644
+--- a/drivers/android/binder.c
++++ b/drivers/android/binder.c
+@@ -2982,6 +2982,12 @@ static void binder_transaction(struct binder_proc *proc,
+ goto err_dead_binder;
+ }
+ e->to_node = target_node->debug_id;
++ if (WARN_ON(proc == target_proc)) {
++ return_error = BR_FAILED_REPLY;
++ return_error_param = -EINVAL;
++ return_error_line = __LINE__;
++ goto err_invalid_target_handle;
++ }
+ if (security_binder_transaction(proc->tsk,
+ target_proc->tsk) < 0) {
+ return_error = BR_FAILED_REPLY;
+@@ -3635,10 +3641,17 @@ static int binder_thread_write(struct binder_proc *proc,
+ struct binder_node *ctx_mgr_node;
+ mutex_lock(&context->context_mgr_node_lock);
+ ctx_mgr_node = context->binder_context_mgr_node;
+- if (ctx_mgr_node)
++ if (ctx_mgr_node) {
++ if (ctx_mgr_node->proc == proc) {
++ binder_user_error("%d:%d context manager tried to acquire desc 0\n",
++ proc->pid, thread->pid);
++ mutex_unlock(&context->context_mgr_node_lock);
++ return -EINVAL;
++ }
+ ret = binder_inc_ref_for_node(
+ proc, ctx_mgr_node,
+ strong, NULL, &rdata);
++ }
+ mutex_unlock(&context->context_mgr_node_lock);
+ }
+ if (ret)
+diff --git a/drivers/atm/atmtcp.c b/drivers/atm/atmtcp.c
+index d9fd70280482..7f814da3c2d0 100644
+--- a/drivers/atm/atmtcp.c
++++ b/drivers/atm/atmtcp.c
+@@ -433,9 +433,15 @@ static int atmtcp_remove_persistent(int itf)
+ return -EMEDIUMTYPE;
+ }
+ dev_data = PRIV(dev);
+- if (!dev_data->persist) return 0;
++ if (!dev_data->persist) {
++ atm_dev_put(dev);
++ return 0;
++ }
+ dev_data->persist = 0;
+- if (PRIV(dev)->vcc) return 0;
++ if (PRIV(dev)->vcc) {
++ atm_dev_put(dev);
++ return 0;
++ }
+ kfree(dev_data);
+ atm_dev_put(dev);
+ atm_dev_deregister(dev);
+diff --git a/drivers/firmware/qemu_fw_cfg.c b/drivers/firmware/qemu_fw_cfg.c
+index 039e0f91dba8..6945c3c96637 100644
+--- a/drivers/firmware/qemu_fw_cfg.c
++++ b/drivers/firmware/qemu_fw_cfg.c
+@@ -605,8 +605,10 @@ static int fw_cfg_register_file(const struct fw_cfg_file *f)
+ /* register entry under "/sys/firmware/qemu_fw_cfg/by_key/" */
+ err = kobject_init_and_add(&entry->kobj, &fw_cfg_sysfs_entry_ktype,
+ fw_cfg_sel_ko, "%d", entry->select);
+- if (err)
+- goto err_register;
++ if (err) {
++ kobject_put(&entry->kobj);
++ return err;
++ }
+
+ /* add raw binary content access */
+ err = sysfs_create_bin_file(&entry->kobj, &fw_cfg_sysfs_attr_raw);
+@@ -622,7 +624,6 @@ static int fw_cfg_register_file(const struct fw_cfg_file *f)
+
+ err_add_raw:
+ kobject_del(&entry->kobj);
+-err_register:
+ kfree(entry);
+ return err;
+ }
+diff --git a/drivers/gpio/gpio-max77620.c b/drivers/gpio/gpio-max77620.c
+index 313bd02dd893..bd6c4faea639 100644
+--- a/drivers/gpio/gpio-max77620.c
++++ b/drivers/gpio/gpio-max77620.c
+@@ -305,8 +305,9 @@ static int max77620_gpio_probe(struct platform_device *pdev)
+ gpiochip_irqchip_add_nested(&mgpio->gpio_chip, &max77620_gpio_irqchip,
+ 0, handle_edge_irq, IRQ_TYPE_NONE);
+
+- ret = request_threaded_irq(gpio_irq, NULL, max77620_gpio_irqhandler,
+- IRQF_ONESHOT, "max77620-gpio", mgpio);
++ ret = devm_request_threaded_irq(&pdev->dev, gpio_irq, NULL,
++ max77620_gpio_irqhandler, IRQF_ONESHOT,
++ "max77620-gpio", mgpio);
+ if (ret < 0) {
+ dev_err(&pdev->dev, "failed to request IRQ: %d\n", ret);
+ return ret;
+diff --git a/drivers/gpu/drm/bochs/bochs_kms.c b/drivers/gpu/drm/bochs/bochs_kms.c
+index 8066d7d370d5..200d55fa9765 100644
+--- a/drivers/gpu/drm/bochs/bochs_kms.c
++++ b/drivers/gpu/drm/bochs/bochs_kms.c
+@@ -143,6 +143,7 @@ int bochs_kms_init(struct bochs_device *bochs)
+ bochs->dev->mode_config.preferred_depth = 24;
+ bochs->dev->mode_config.prefer_shadow = 0;
+ bochs->dev->mode_config.prefer_shadow_fbdev = 1;
++ bochs->dev->mode_config.fbdev_use_iomem = true;
+ bochs->dev->mode_config.quirk_addfb_prefer_host_byte_order = true;
+
+ bochs->dev->mode_config.funcs = &bochs_mode_funcs;
+diff --git a/drivers/gpu/drm/bridge/adv7511/adv7511_drv.c b/drivers/gpu/drm/bridge/adv7511/adv7511_drv.c
+index 87b58c1acff4..648eb23d0784 100644
+--- a/drivers/gpu/drm/bridge/adv7511/adv7511_drv.c
++++ b/drivers/gpu/drm/bridge/adv7511/adv7511_drv.c
+@@ -1224,6 +1224,7 @@ static int adv7511_probe(struct i2c_client *i2c, const struct i2c_device_id *id)
+
+ adv7511->bridge.funcs = &adv7511_bridge_funcs;
+ adv7511->bridge.of_node = dev->of_node;
++ adv7511->bridge.type = DRM_MODE_CONNECTOR_HDMIA;
+
+ drm_bridge_add(&adv7511->bridge);
+
+diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c
+index c7be39a00d43..4dd12a069474 100644
+--- a/drivers/gpu/drm/drm_fb_helper.c
++++ b/drivers/gpu/drm/drm_fb_helper.c
+@@ -399,7 +399,11 @@ static void drm_fb_helper_dirty_blit_real(struct drm_fb_helper *fb_helper,
+ unsigned int y;
+
+ for (y = clip->y1; y < clip->y2; y++) {
+- memcpy(dst, src, len);
++ if (!fb_helper->dev->mode_config.fbdev_use_iomem)
++ memcpy(dst, src, len);
++ else
++ memcpy_toio((void __iomem *)dst, src, len);
++
+ src += fb->pitches[0];
+ dst += fb->pitches[0];
+ }
+diff --git a/drivers/gpu/drm/nouveau/dispnv50/disp.c b/drivers/gpu/drm/nouveau/dispnv50/disp.c
+index 2625ed84fc44..5835d19e1c45 100644
+--- a/drivers/gpu/drm/nouveau/dispnv50/disp.c
++++ b/drivers/gpu/drm/nouveau/dispnv50/disp.c
+@@ -2041,7 +2041,7 @@ nv50_disp_atomic_commit_tail(struct drm_atomic_state *state)
+ */
+ if (core->assign_windows) {
+ core->func->wndw.owner(core);
+- core->func->update(core, interlock, false);
++ nv50_disp_atomic_commit_core(state, interlock);
+ core->assign_windows = false;
+ interlock[NV50_DISP_INTERLOCK_CORE] = 0;
+ }
+diff --git a/drivers/gpu/drm/nouveau/nouveau_fbcon.c b/drivers/gpu/drm/nouveau/nouveau_fbcon.c
+index 24d543a01f43..47883f225941 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_fbcon.c
++++ b/drivers/gpu/drm/nouveau/nouveau_fbcon.c
+@@ -315,7 +315,7 @@ nouveau_fbcon_create(struct drm_fb_helper *helper,
+ struct nouveau_framebuffer *fb;
+ struct nouveau_channel *chan;
+ struct nouveau_bo *nvbo;
+- struct drm_mode_fb_cmd2 mode_cmd;
++ struct drm_mode_fb_cmd2 mode_cmd = {};
+ int ret;
+
+ mode_cmd.width = sizes->surface_width;
+@@ -588,6 +588,7 @@ fini:
+ drm_fb_helper_fini(&fbcon->helper);
+ free:
+ kfree(fbcon);
++ drm->fbcon = NULL;
+ return ret;
+ }
+
+diff --git a/drivers/gpu/drm/panel/panel-boe-tv101wum-nl6.c b/drivers/gpu/drm/panel/panel-boe-tv101wum-nl6.c
+index 48a164257d18..3edb33e61908 100644
+--- a/drivers/gpu/drm/panel/panel-boe-tv101wum-nl6.c
++++ b/drivers/gpu/drm/panel/panel-boe-tv101wum-nl6.c
+@@ -615,9 +615,9 @@ static const struct panel_desc boe_tv101wum_nl6_desc = {
+ static const struct drm_display_mode auo_kd101n80_45na_default_mode = {
+ .clock = 157000,
+ .hdisplay = 1200,
+- .hsync_start = 1200 + 80,
+- .hsync_end = 1200 + 80 + 24,
+- .htotal = 1200 + 80 + 24 + 36,
++ .hsync_start = 1200 + 60,
++ .hsync_end = 1200 + 60 + 24,
++ .htotal = 1200 + 60 + 24 + 56,
+ .vdisplay = 1920,
+ .vsync_start = 1920 + 16,
+ .vsync_end = 1920 + 16 + 4,
+diff --git a/drivers/i2c/i2c-core-slave.c b/drivers/i2c/i2c-core-slave.c
+index 5427f047faf0..1589179d5eb9 100644
+--- a/drivers/i2c/i2c-core-slave.c
++++ b/drivers/i2c/i2c-core-slave.c
+@@ -18,10 +18,8 @@ int i2c_slave_register(struct i2c_client *client, i2c_slave_cb_t slave_cb)
+ {
+ int ret;
+
+- if (!client || !slave_cb) {
+- WARN(1, "insufficient data\n");
++ if (WARN(IS_ERR_OR_NULL(client) || !slave_cb, "insufficient data\n"))
+ return -EINVAL;
+- }
+
+ if (!(client->flags & I2C_CLIENT_SLAVE))
+ dev_warn(&client->dev, "%s: client slave flag not set. You might see address collisions\n",
+@@ -60,6 +58,9 @@ int i2c_slave_unregister(struct i2c_client *client)
+ {
+ int ret;
+
++ if (IS_ERR_OR_NULL(client))
++ return -EINVAL;
++
+ if (!client->adapter->algo->unreg_slave) {
+ dev_err(&client->dev, "%s: not supported by adapter\n", __func__);
+ return -EOPNOTSUPP;
+diff --git a/drivers/leds/leds-88pm860x.c b/drivers/leds/leds-88pm860x.c
+index b3044c9a8120..465c3755cf2e 100644
+--- a/drivers/leds/leds-88pm860x.c
++++ b/drivers/leds/leds-88pm860x.c
+@@ -203,21 +203,33 @@ static int pm860x_led_probe(struct platform_device *pdev)
+ data->cdev.brightness_set_blocking = pm860x_led_set;
+ mutex_init(&data->lock);
+
+- ret = devm_led_classdev_register(chip->dev, &data->cdev);
++ ret = led_classdev_register(chip->dev, &data->cdev);
+ if (ret < 0) {
+ dev_err(&pdev->dev, "Failed to register LED: %d\n", ret);
+ return ret;
+ }
+ pm860x_led_set(&data->cdev, 0);
++
++ platform_set_drvdata(pdev, data);
++
+ return 0;
+ }
+
++static int pm860x_led_remove(struct platform_device *pdev)
++{
++ struct pm860x_led *data = platform_get_drvdata(pdev);
++
++ led_classdev_unregister(&data->cdev);
++
++ return 0;
++}
+
+ static struct platform_driver pm860x_led_driver = {
+ .driver = {
+ .name = "88pm860x-led",
+ },
+ .probe = pm860x_led_probe,
++ .remove = pm860x_led_remove,
+ };
+
+ module_platform_driver(pm860x_led_driver);
+diff --git a/drivers/leds/leds-da903x.c b/drivers/leds/leds-da903x.c
+index ed1b303f699f..2b5fb00438a2 100644
+--- a/drivers/leds/leds-da903x.c
++++ b/drivers/leds/leds-da903x.c
+@@ -110,12 +110,23 @@ static int da903x_led_probe(struct platform_device *pdev)
+ led->flags = pdata->flags;
+ led->master = pdev->dev.parent;
+
+- ret = devm_led_classdev_register(led->master, &led->cdev);
++ ret = led_classdev_register(led->master, &led->cdev);
+ if (ret) {
+ dev_err(&pdev->dev, "failed to register LED %d\n", id);
+ return ret;
+ }
+
++ platform_set_drvdata(pdev, led);
++
++ return 0;
++}
++
++static int da903x_led_remove(struct platform_device *pdev)
++{
++ struct da903x_led *led = platform_get_drvdata(pdev);
++
++ led_classdev_unregister(&led->cdev);
++
+ return 0;
+ }
+
+@@ -124,6 +135,7 @@ static struct platform_driver da903x_led_driver = {
+ .name = "da903x-led",
+ },
+ .probe = da903x_led_probe,
++ .remove = da903x_led_remove,
+ };
+
+ module_platform_driver(da903x_led_driver);
+diff --git a/drivers/leds/leds-lm3533.c b/drivers/leds/leds-lm3533.c
+index 9504ad405aef..b3edee703193 100644
+--- a/drivers/leds/leds-lm3533.c
++++ b/drivers/leds/leds-lm3533.c
+@@ -694,7 +694,7 @@ static int lm3533_led_probe(struct platform_device *pdev)
+
+ platform_set_drvdata(pdev, led);
+
+- ret = devm_led_classdev_register(pdev->dev.parent, &led->cdev);
++ ret = led_classdev_register(pdev->dev.parent, &led->cdev);
+ if (ret) {
+ dev_err(&pdev->dev, "failed to register LED %d\n", pdev->id);
+ return ret;
+@@ -704,13 +704,18 @@ static int lm3533_led_probe(struct platform_device *pdev)
+
+ ret = lm3533_led_setup(led, pdata);
+ if (ret)
+- return ret;
++ goto err_deregister;
+
+ ret = lm3533_ctrlbank_enable(&led->cb);
+ if (ret)
+- return ret;
++ goto err_deregister;
+
+ return 0;
++
++err_deregister:
++ led_classdev_unregister(&led->cdev);
++
++ return ret;
+ }
+
+ static int lm3533_led_remove(struct platform_device *pdev)
+@@ -720,6 +725,7 @@ static int lm3533_led_remove(struct platform_device *pdev)
+ dev_dbg(&pdev->dev, "%s\n", __func__);
+
+ lm3533_ctrlbank_disable(&led->cb);
++ led_classdev_unregister(&led->cdev);
+
+ return 0;
+ }
+diff --git a/drivers/leds/leds-lm36274.c b/drivers/leds/leds-lm36274.c
+index 836b60c9a2b8..db842eeb7ca2 100644
+--- a/drivers/leds/leds-lm36274.c
++++ b/drivers/leds/leds-lm36274.c
+@@ -133,7 +133,7 @@ static int lm36274_probe(struct platform_device *pdev)
+ lm36274_data->pdev = pdev;
+ lm36274_data->dev = lmu->dev;
+ lm36274_data->regmap = lmu->regmap;
+- dev_set_drvdata(&pdev->dev, lm36274_data);
++ platform_set_drvdata(pdev, lm36274_data);
+
+ ret = lm36274_parse_dt(lm36274_data);
+ if (ret) {
+@@ -147,8 +147,16 @@ static int lm36274_probe(struct platform_device *pdev)
+ return ret;
+ }
+
+- return devm_led_classdev_register(lm36274_data->dev,
+- &lm36274_data->led_dev);
++ return led_classdev_register(lm36274_data->dev, &lm36274_data->led_dev);
++}
++
++static int lm36274_remove(struct platform_device *pdev)
++{
++ struct lm36274 *lm36274_data = platform_get_drvdata(pdev);
++
++ led_classdev_unregister(&lm36274_data->led_dev);
++
++ return 0;
+ }
+
+ static const struct of_device_id of_lm36274_leds_match[] = {
+@@ -159,6 +167,7 @@ MODULE_DEVICE_TABLE(of, of_lm36274_leds_match);
+
+ static struct platform_driver lm36274_driver = {
+ .probe = lm36274_probe,
++ .remove = lm36274_remove,
+ .driver = {
+ .name = "lm36274-leds",
+ },
+diff --git a/drivers/leds/leds-wm831x-status.c b/drivers/leds/leds-wm831x-status.c
+index 082df7f1dd90..67f4235cb28a 100644
+--- a/drivers/leds/leds-wm831x-status.c
++++ b/drivers/leds/leds-wm831x-status.c
+@@ -269,12 +269,23 @@ static int wm831x_status_probe(struct platform_device *pdev)
+ drvdata->cdev.blink_set = wm831x_status_blink_set;
+ drvdata->cdev.groups = wm831x_status_groups;
+
+- ret = devm_led_classdev_register(wm831x->dev, &drvdata->cdev);
++ ret = led_classdev_register(wm831x->dev, &drvdata->cdev);
+ if (ret < 0) {
+ dev_err(&pdev->dev, "Failed to register LED: %d\n", ret);
+ return ret;
+ }
+
++ platform_set_drvdata(pdev, drvdata);
++
++ return 0;
++}
++
++static int wm831x_status_remove(struct platform_device *pdev)
++{
++ struct wm831x_status *drvdata = platform_get_drvdata(pdev);
++
++ led_classdev_unregister(&drvdata->cdev);
++
+ return 0;
+ }
+
+@@ -283,6 +294,7 @@ static struct platform_driver wm831x_status_driver = {
+ .name = "wm831x-status",
+ },
+ .probe = wm831x_status_probe,
++ .remove = wm831x_status_remove,
+ };
+
+ module_platform_driver(wm831x_status_driver);
+diff --git a/drivers/misc/lkdtm/heap.c b/drivers/misc/lkdtm/heap.c
+index 3c5cec85edce..1323bc16f113 100644
+--- a/drivers/misc/lkdtm/heap.c
++++ b/drivers/misc/lkdtm/heap.c
+@@ -58,11 +58,12 @@ void lkdtm_READ_AFTER_FREE(void)
+ int *base, *val, saw;
+ size_t len = 1024;
+ /*
+- * The slub allocator uses the first word to store the free
+- * pointer in some configurations. Use the middle of the
+- * allocation to avoid running into the freelist
++ * The slub allocator will use either the first word or
++ * the middle of the allocation to store the free pointer,
++ * depending on the configuration. Store in the second word to
++ * avoid running into the freelist.
+ */
+- size_t offset = (len / sizeof(*base)) / 2;
++ size_t offset = sizeof(*base);
+
+ base = kmalloc(len, GFP_KERNEL);
+ if (!base) {
+diff --git a/drivers/mtd/mtdchar.c b/drivers/mtd/mtdchar.c
+index c5935b2f9cd1..b40f46a43fc6 100644
+--- a/drivers/mtd/mtdchar.c
++++ b/drivers/mtd/mtdchar.c
+@@ -355,9 +355,6 @@ static int mtdchar_writeoob(struct file *file, struct mtd_info *mtd,
+ uint32_t retlen;
+ int ret = 0;
+
+- if (!(file->f_mode & FMODE_WRITE))
+- return -EPERM;
+-
+ if (length > 4096)
+ return -EINVAL;
+
+@@ -643,6 +640,48 @@ static int mtdchar_ioctl(struct file *file, u_int cmd, u_long arg)
+
+ pr_debug("MTD_ioctl\n");
+
++ /*
++ * Check the file mode to require "dangerous" commands to have write
++ * permissions.
++ */
++ switch (cmd) {
++ /* "safe" commands */
++ case MEMGETREGIONCOUNT:
++ case MEMGETREGIONINFO:
++ case MEMGETINFO:
++ case MEMREADOOB:
++ case MEMREADOOB64:
++ case MEMLOCK:
++ case MEMUNLOCK:
++ case MEMISLOCKED:
++ case MEMGETOOBSEL:
++ case MEMGETBADBLOCK:
++ case MEMSETBADBLOCK:
++ case OTPSELECT:
++ case OTPGETREGIONCOUNT:
++ case OTPGETREGIONINFO:
++ case OTPLOCK:
++ case ECCGETLAYOUT:
++ case ECCGETSTATS:
++ case MTDFILEMODE:
++ case BLKPG:
++ case BLKRRPART:
++ break;
++
++ /* "dangerous" commands */
++ case MEMERASE:
++ case MEMERASE64:
++ case MEMWRITEOOB:
++ case MEMWRITEOOB64:
++ case MEMWRITE:
++ if (!(file->f_mode & FMODE_WRITE))
++ return -EPERM;
++ break;
++
++ default:
++ return -ENOTTY;
++ }
++
+ switch (cmd) {
+ case MEMGETREGIONCOUNT:
+ if (copy_to_user(argp, &(mtd->numeraseregions), sizeof(int)))
+@@ -690,9 +729,6 @@ static int mtdchar_ioctl(struct file *file, u_int cmd, u_long arg)
+ {
+ struct erase_info *erase;
+
+- if(!(file->f_mode & FMODE_WRITE))
+- return -EPERM;
+-
+ erase=kzalloc(sizeof(struct erase_info),GFP_KERNEL);
+ if (!erase)
+ ret = -ENOMEM;
+@@ -985,9 +1021,6 @@ static int mtdchar_ioctl(struct file *file, u_int cmd, u_long arg)
+ ret = 0;
+ break;
+ }
+-
+- default:
+- ret = -ENOTTY;
+ }
+
+ return ret;
+@@ -1031,6 +1064,11 @@ static long mtdchar_compat_ioctl(struct file *file, unsigned int cmd,
+ struct mtd_oob_buf32 buf;
+ struct mtd_oob_buf32 __user *buf_user = argp;
+
++ if (!(file->f_mode & FMODE_WRITE)) {
++ ret = -EPERM;
++ break;
++ }
++
+ if (copy_from_user(&buf, argp, sizeof(buf)))
+ ret = -EFAULT;
+ else
+diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c
+index f1f0976e7669..3a157be857b0 100644
+--- a/drivers/net/ethernet/cadence/macb_main.c
++++ b/drivers/net/ethernet/cadence/macb_main.c
+@@ -578,7 +578,7 @@ static void macb_mac_config(struct phylink_config *config, unsigned int mode,
+ if (bp->caps & MACB_CAPS_MACB_IS_EMAC) {
+ if (state->interface == PHY_INTERFACE_MODE_RMII)
+ ctrl |= MACB_BIT(RM9200_RMII);
+- } else {
++ } else if (macb_is_gem(bp)) {
+ ctrl &= ~(GEM_BIT(SGMIIEN) | GEM_BIT(PCSSEL));
+
+ if (state->interface == PHY_INTERFACE_MODE_SGMII)
+@@ -639,10 +639,13 @@ static void macb_mac_link_up(struct phylink_config *config,
+ ctrl |= MACB_BIT(FD);
+
+ if (!(bp->caps & MACB_CAPS_MACB_IS_EMAC)) {
+- ctrl &= ~(GEM_BIT(GBE) | MACB_BIT(PAE));
++ ctrl &= ~MACB_BIT(PAE);
++ if (macb_is_gem(bp)) {
++ ctrl &= ~GEM_BIT(GBE);
+
+- if (speed == SPEED_1000)
+- ctrl |= GEM_BIT(GBE);
++ if (speed == SPEED_1000)
++ ctrl |= GEM_BIT(GBE);
++ }
+
+ /* We do not support MLO_PAUSE_RX yet */
+ if (tx_pause)
+diff --git a/drivers/net/ethernet/cavium/thunder/nicvf_main.c b/drivers/net/ethernet/cavium/thunder/nicvf_main.c
+index b4b33368698f..ae48f2e9265f 100644
+--- a/drivers/net/ethernet/cavium/thunder/nicvf_main.c
++++ b/drivers/net/ethernet/cavium/thunder/nicvf_main.c
+@@ -2041,11 +2041,11 @@ static void nicvf_set_rx_mode_task(struct work_struct *work_arg)
+ /* Save message data locally to prevent them from
+ * being overwritten by next ndo_set_rx_mode call().
+ */
+- spin_lock(&nic->rx_mode_wq_lock);
++ spin_lock_bh(&nic->rx_mode_wq_lock);
+ mode = vf_work->mode;
+ mc = vf_work->mc;
+ vf_work->mc = NULL;
+- spin_unlock(&nic->rx_mode_wq_lock);
++ spin_unlock_bh(&nic->rx_mode_wq_lock);
+
+ __nicvf_set_rx_mode_task(mode, mc, nic);
+ }
+diff --git a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
+index 72fa9c4e058f..b7031f8562e0 100644
+--- a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
++++ b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
+@@ -2120,7 +2120,7 @@ close:
+ free:
+ fsl_mc_object_free(dpcon);
+
+- return NULL;
++ return ERR_PTR(err);
+ }
+
+ static void free_dpcon(struct dpaa2_eth_priv *priv,
+@@ -2144,8 +2144,8 @@ alloc_channel(struct dpaa2_eth_priv *priv)
+ return NULL;
+
+ channel->dpcon = setup_dpcon(priv);
+- if (IS_ERR_OR_NULL(channel->dpcon)) {
+- err = PTR_ERR_OR_ZERO(channel->dpcon);
++ if (IS_ERR(channel->dpcon)) {
++ err = PTR_ERR(channel->dpcon);
+ goto err_setup;
+ }
+
+diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c
+index b46bff8fe056..b35d599fc78e 100644
+--- a/drivers/net/ethernet/intel/igb/igb_main.c
++++ b/drivers/net/ethernet/intel/igb/igb_main.c
+@@ -6224,9 +6224,18 @@ static void igb_reset_task(struct work_struct *work)
+ struct igb_adapter *adapter;
+ adapter = container_of(work, struct igb_adapter, reset_task);
+
++ rtnl_lock();
++ /* If we're already down or resetting, just bail */
++ if (test_bit(__IGB_DOWN, &adapter->state) ||
++ test_bit(__IGB_RESETTING, &adapter->state)) {
++ rtnl_unlock();
++ return;
++ }
++
+ igb_dump(adapter);
+ netdev_err(adapter->netdev, "Reset adapter\n");
+ igb_reinit_locked(adapter);
++ rtnl_unlock();
+ }
+
+ /**
+diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
+index 24f4d8e0da98..ee72397813d4 100644
+--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
++++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
+@@ -2981,6 +2981,7 @@ static int mvpp2_rx(struct mvpp2_port *port, struct napi_struct *napi,
+ err = mvpp2_rx_refill(port, bm_pool, pool);
+ if (err) {
+ netdev_err(port->dev, "failed to refill BM pools\n");
++ dev_kfree_skb_any(skb);
+ goto err_drop_frame;
+ }
+
+diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.c b/drivers/net/ethernet/mediatek/mtk_eth_soc.c
+index b743d8b56c84..82f5690ff4d3 100644
+--- a/drivers/net/ethernet/mediatek/mtk_eth_soc.c
++++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.c
+@@ -171,11 +171,21 @@ static int mt7621_gmac0_rgmii_adjust(struct mtk_eth *eth,
+ return 0;
+ }
+
+-static void mtk_gmac0_rgmii_adjust(struct mtk_eth *eth, int speed)
++static void mtk_gmac0_rgmii_adjust(struct mtk_eth *eth,
++ phy_interface_t interface, int speed)
+ {
+ u32 val;
+ int ret;
+
++ if (interface == PHY_INTERFACE_MODE_TRGMII) {
++ mtk_w32(eth, TRGMII_MODE, INTF_MODE);
++ val = 500000000;
++ ret = clk_set_rate(eth->clks[MTK_CLK_TRGPLL], val);
++ if (ret)
++ dev_err(eth->dev, "Failed to set trgmii pll: %d\n", ret);
++ return;
++ }
++
+ val = (speed == SPEED_1000) ?
+ INTF_MODE_RGMII_1000 : INTF_MODE_RGMII_10_100;
+ mtk_w32(eth, val, INTF_MODE);
+@@ -262,10 +272,9 @@ static void mtk_mac_config(struct phylink_config *config, unsigned int mode,
+ state->interface))
+ goto err_phy;
+ } else {
+- if (state->interface !=
+- PHY_INTERFACE_MODE_TRGMII)
+- mtk_gmac0_rgmii_adjust(mac->hw,
+- state->speed);
++ mtk_gmac0_rgmii_adjust(mac->hw,
++ state->interface,
++ state->speed);
+
+ /* mt7623_pad_clk_setup */
+ for (i = 0 ; i < NUM_TRGMII_CTRL; i++)
+diff --git a/drivers/net/ethernet/mscc/ocelot.c b/drivers/net/ethernet/mscc/ocelot.c
+index efb3965a3e42..76dbf9ac8ad5 100644
+--- a/drivers/net/ethernet/mscc/ocelot.c
++++ b/drivers/net/ethernet/mscc/ocelot.c
+@@ -749,21 +749,21 @@ void ocelot_get_txtstamp(struct ocelot *ocelot)
+
+ spin_unlock_irqrestore(&port->tx_skbs.lock, flags);
+
+- /* Next ts */
+- ocelot_write(ocelot, SYS_PTP_NXT_PTP_NXT, SYS_PTP_NXT);
++ /* Get the h/w timestamp */
++ ocelot_get_hwtimestamp(ocelot, &ts);
+
+ if (unlikely(!skb_match))
+ continue;
+
+- /* Get the h/w timestamp */
+- ocelot_get_hwtimestamp(ocelot, &ts);
+-
+ /* Set the timestamp into the skb */
+ memset(&shhwtstamps, 0, sizeof(shhwtstamps));
+ shhwtstamps.hwtstamp = ktime_set(ts.tv_sec, ts.tv_nsec);
+ skb_tstamp_tx(skb_match, &shhwtstamps);
+
+ dev_kfree_skb_any(skb_match);
++
++ /* Next ts */
++ ocelot_write(ocelot, SYS_PTP_NXT_PTP_NXT, SYS_PTP_NXT);
+ }
+ }
+ EXPORT_SYMBOL(ocelot_get_txtstamp);
+diff --git a/drivers/net/hyperv/netvsc_drv.c b/drivers/net/hyperv/netvsc_drv.c
+index ebcfbae05690..b8b7fc13b3dc 100644
+--- a/drivers/net/hyperv/netvsc_drv.c
++++ b/drivers/net/hyperv/netvsc_drv.c
+@@ -532,12 +532,13 @@ static int netvsc_xmit(struct sk_buff *skb, struct net_device *net, bool xdp_tx)
+ u32 hash;
+ struct hv_page_buffer pb[MAX_PAGE_BUFFER_COUNT];
+
+- /* if VF is present and up then redirect packets
+- * already called with rcu_read_lock_bh
++ /* If VF is present and up then redirect packets to it.
++ * Skip the VF if it is marked down or has no carrier.
++ * If netpoll is in use, then the VF cannot be used either.
+ */
+ vf_netdev = rcu_dereference_bh(net_device_ctx->vf_netdev);
+ if (vf_netdev && netif_running(vf_netdev) &&
+- !netpoll_tx_running(net))
++ netif_carrier_ok(vf_netdev) && !netpoll_tx_running(net))
+ return netvsc_vf_xmit(net, vf_netdev, skb);
+
+ /* We will atmost need two pages to describe the rndis
+diff --git a/drivers/net/usb/hso.c b/drivers/net/usb/hso.c
+index 5f123a8cf68e..d2fdb5430d27 100644
+--- a/drivers/net/usb/hso.c
++++ b/drivers/net/usb/hso.c
+@@ -2261,12 +2261,14 @@ static int hso_serial_common_create(struct hso_serial *serial, int num_urbs,
+
+ minor = get_free_serial_index();
+ if (minor < 0)
+- goto exit;
++ goto exit2;
+
+ /* register our minor number */
+ serial->parent->dev = tty_port_register_device_attr(&serial->port,
+ tty_drv, minor, &serial->parent->interface->dev,
+ serial->parent, hso_serial_dev_groups);
++ if (IS_ERR(serial->parent->dev))
++ goto exit2;
+
+ /* fill in specific data for later use */
+ serial->minor = minor;
+@@ -2311,6 +2313,7 @@ static int hso_serial_common_create(struct hso_serial *serial, int num_urbs,
+ return 0;
+ exit:
+ hso_serial_tty_unregister(serial);
++exit2:
+ hso_serial_common_free(serial);
+ return -1;
+ }
+diff --git a/drivers/net/usb/lan78xx.c b/drivers/net/usb/lan78xx.c
+index ee062b27cfa7..442507f25aad 100644
+--- a/drivers/net/usb/lan78xx.c
++++ b/drivers/net/usb/lan78xx.c
+@@ -377,10 +377,6 @@ struct lan78xx_net {
+ struct tasklet_struct bh;
+ struct delayed_work wq;
+
+- struct usb_host_endpoint *ep_blkin;
+- struct usb_host_endpoint *ep_blkout;
+- struct usb_host_endpoint *ep_intr;
+-
+ int msg_enable;
+
+ struct urb *urb_intr;
+@@ -2860,78 +2856,12 @@ lan78xx_start_xmit(struct sk_buff *skb, struct net_device *net)
+ return NETDEV_TX_OK;
+ }
+
+-static int
+-lan78xx_get_endpoints(struct lan78xx_net *dev, struct usb_interface *intf)
+-{
+- int tmp;
+- struct usb_host_interface *alt = NULL;
+- struct usb_host_endpoint *in = NULL, *out = NULL;
+- struct usb_host_endpoint *status = NULL;
+-
+- for (tmp = 0; tmp < intf->num_altsetting; tmp++) {
+- unsigned ep;
+-
+- in = NULL;
+- out = NULL;
+- status = NULL;
+- alt = intf->altsetting + tmp;
+-
+- for (ep = 0; ep < alt->desc.bNumEndpoints; ep++) {
+- struct usb_host_endpoint *e;
+- int intr = 0;
+-
+- e = alt->endpoint + ep;
+- switch (e->desc.bmAttributes) {
+- case USB_ENDPOINT_XFER_INT:
+- if (!usb_endpoint_dir_in(&e->desc))
+- continue;
+- intr = 1;
+- /* FALLTHROUGH */
+- case USB_ENDPOINT_XFER_BULK:
+- break;
+- default:
+- continue;
+- }
+- if (usb_endpoint_dir_in(&e->desc)) {
+- if (!intr && !in)
+- in = e;
+- else if (intr && !status)
+- status = e;
+- } else {
+- if (!out)
+- out = e;
+- }
+- }
+- if (in && out)
+- break;
+- }
+- if (!alt || !in || !out)
+- return -EINVAL;
+-
+- dev->pipe_in = usb_rcvbulkpipe(dev->udev,
+- in->desc.bEndpointAddress &
+- USB_ENDPOINT_NUMBER_MASK);
+- dev->pipe_out = usb_sndbulkpipe(dev->udev,
+- out->desc.bEndpointAddress &
+- USB_ENDPOINT_NUMBER_MASK);
+- dev->ep_intr = status;
+-
+- return 0;
+-}
+-
+ static int lan78xx_bind(struct lan78xx_net *dev, struct usb_interface *intf)
+ {
+ struct lan78xx_priv *pdata = NULL;
+ int ret;
+ int i;
+
+- ret = lan78xx_get_endpoints(dev, intf);
+- if (ret) {
+- netdev_warn(dev->net, "lan78xx_get_endpoints failed: %d\n",
+- ret);
+- return ret;
+- }
+-
+ dev->data[0] = (unsigned long)kzalloc(sizeof(*pdata), GFP_KERNEL);
+
+ pdata = (struct lan78xx_priv *)(dev->data[0]);
+@@ -3700,6 +3630,7 @@ static void lan78xx_stat_monitor(struct timer_list *t)
+ static int lan78xx_probe(struct usb_interface *intf,
+ const struct usb_device_id *id)
+ {
++ struct usb_host_endpoint *ep_blkin, *ep_blkout, *ep_intr;
+ struct lan78xx_net *dev;
+ struct net_device *netdev;
+ struct usb_device *udev;
+@@ -3748,6 +3679,34 @@ static int lan78xx_probe(struct usb_interface *intf,
+
+ mutex_init(&dev->stats.access_lock);
+
++ if (intf->cur_altsetting->desc.bNumEndpoints < 3) {
++ ret = -ENODEV;
++ goto out2;
++ }
++
++ dev->pipe_in = usb_rcvbulkpipe(udev, BULK_IN_PIPE);
++ ep_blkin = usb_pipe_endpoint(udev, dev->pipe_in);
++ if (!ep_blkin || !usb_endpoint_is_bulk_in(&ep_blkin->desc)) {
++ ret = -ENODEV;
++ goto out2;
++ }
++
++ dev->pipe_out = usb_sndbulkpipe(udev, BULK_OUT_PIPE);
++ ep_blkout = usb_pipe_endpoint(udev, dev->pipe_out);
++ if (!ep_blkout || !usb_endpoint_is_bulk_out(&ep_blkout->desc)) {
++ ret = -ENODEV;
++ goto out2;
++ }
++
++ ep_intr = &intf->cur_altsetting->endpoint[2];
++ if (!usb_endpoint_is_int_in(&ep_intr->desc)) {
++ ret = -ENODEV;
++ goto out2;
++ }
++
++ dev->pipe_intr = usb_rcvintpipe(dev->udev,
++ usb_endpoint_num(&ep_intr->desc));
++
+ ret = lan78xx_bind(dev, intf);
+ if (ret < 0)
+ goto out2;
+@@ -3759,23 +3718,7 @@ static int lan78xx_probe(struct usb_interface *intf,
+ netdev->max_mtu = MAX_SINGLE_PACKET_SIZE;
+ netif_set_gso_max_size(netdev, MAX_SINGLE_PACKET_SIZE - MAX_HEADER);
+
+- if (intf->cur_altsetting->desc.bNumEndpoints < 3) {
+- ret = -ENODEV;
+- goto out3;
+- }
+-
+- dev->ep_blkin = (intf->cur_altsetting)->endpoint + 0;
+- dev->ep_blkout = (intf->cur_altsetting)->endpoint + 1;
+- dev->ep_intr = (intf->cur_altsetting)->endpoint + 2;
+-
+- dev->pipe_in = usb_rcvbulkpipe(udev, BULK_IN_PIPE);
+- dev->pipe_out = usb_sndbulkpipe(udev, BULK_OUT_PIPE);
+-
+- dev->pipe_intr = usb_rcvintpipe(dev->udev,
+- dev->ep_intr->desc.bEndpointAddress &
+- USB_ENDPOINT_NUMBER_MASK);
+- period = dev->ep_intr->desc.bInterval;
+-
++ period = ep_intr->desc.bInterval;
+ maxp = usb_maxpacket(dev->udev, dev->pipe_intr, 0);
+ buf = kmalloc(maxp, GFP_KERNEL);
+ if (buf) {
+diff --git a/drivers/net/vxlan.c b/drivers/net/vxlan.c
+index 6e64bc8d601f..b78bb5c558ff 100644
+--- a/drivers/net/vxlan.c
++++ b/drivers/net/vxlan.c
+@@ -1225,6 +1225,7 @@ static int vxlan_fdb_dump(struct sk_buff *skb, struct netlink_callback *cb,
+ for (h = 0; h < FDB_HASH_SIZE; ++h) {
+ struct vxlan_fdb *f;
+
++ rcu_read_lock();
+ hlist_for_each_entry_rcu(f, &vxlan->fdb_head[h], hlist) {
+ struct vxlan_rdst *rd;
+
+@@ -1237,12 +1238,15 @@ static int vxlan_fdb_dump(struct sk_buff *skb, struct netlink_callback *cb,
+ cb->nlh->nlmsg_seq,
+ RTM_NEWNEIGH,
+ NLM_F_MULTI, rd);
+- if (err < 0)
++ if (err < 0) {
++ rcu_read_unlock();
+ goto out;
++ }
+ skip:
+ *idx += 1;
+ }
+ }
++ rcu_read_unlock();
+ }
+ out:
+ return err;
+@@ -2546,7 +2550,7 @@ static void vxlan_xmit_one(struct sk_buff *skb, struct net_device *dev,
+ ndst = &rt->dst;
+ skb_tunnel_check_pmtu(skb, ndst, VXLAN_HEADROOM);
+
+- tos = ip_tunnel_ecn_encap(RT_TOS(tos), old_iph, skb);
++ tos = ip_tunnel_ecn_encap(tos, old_iph, skb);
+ ttl = ttl ? : ip4_dst_hoplimit(&rt->dst);
+ err = vxlan_build_skb(skb, ndst, sizeof(struct iphdr),
+ vni, md, flags, udp_sum);
+@@ -2586,7 +2590,7 @@ static void vxlan_xmit_one(struct sk_buff *skb, struct net_device *dev,
+
+ skb_tunnel_check_pmtu(skb, ndst, VXLAN6_HEADROOM);
+
+- tos = ip_tunnel_ecn_encap(RT_TOS(tos), old_iph, skb);
++ tos = ip_tunnel_ecn_encap(tos, old_iph, skb);
+ ttl = ttl ? : ip6_dst_hoplimit(ndst);
+ skb_scrub_packet(skb, xnet);
+ err = vxlan_build_skb(skb, ndst, sizeof(struct ipv6hdr),
+diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
+index 10d65f27879f..45e29c6c3234 100644
+--- a/drivers/nvme/host/pci.c
++++ b/drivers/nvme/host/pci.c
+@@ -3130,6 +3130,8 @@ static const struct pci_device_id nvme_id_table[] = {
+ { PCI_DEVICE(0x1cc1, 0x8201), /* ADATA SX8200PNP 512GB */
+ .driver_data = NVME_QUIRK_NO_DEEPEST_PS |
+ NVME_QUIRK_IGNORE_DEV_SUBNQN, },
++ { PCI_DEVICE(0x1c5c, 0x1504), /* SK Hynix PC400 */
++ .driver_data = NVME_QUIRK_DISABLE_WRITE_ZEROES, },
+ { PCI_DEVICE_CLASS(PCI_CLASS_STORAGE_EXPRESS, 0xffffff) },
+ { PCI_DEVICE(PCI_VENDOR_ID_APPLE, 0x2001),
+ .driver_data = NVME_QUIRK_SINGLE_VECTOR },
+diff --git a/drivers/pci/controller/pci-tegra.c b/drivers/pci/controller/pci-tegra.c
+index 3e64ba6a36a8..a4bea7c7fb12 100644
+--- a/drivers/pci/controller/pci-tegra.c
++++ b/drivers/pci/controller/pci-tegra.c
+@@ -181,13 +181,6 @@
+
+ #define AFI_PEXBIAS_CTRL_0 0x168
+
+-#define RP_PRIV_XP_DL 0x00000494
+-#define RP_PRIV_XP_DL_GEN2_UPD_FC_TSHOLD (0x1ff << 1)
+-
+-#define RP_RX_HDR_LIMIT 0x00000e00
+-#define RP_RX_HDR_LIMIT_PW_MASK (0xff << 8)
+-#define RP_RX_HDR_LIMIT_PW (0x0e << 8)
+-
+ #define RP_ECTL_2_R1 0x00000e84
+ #define RP_ECTL_2_R1_RX_CTLE_1C_MASK 0xffff
+
+@@ -323,7 +316,6 @@ struct tegra_pcie_soc {
+ bool program_uphy;
+ bool update_clamp_threshold;
+ bool program_deskew_time;
+- bool raw_violation_fixup;
+ bool update_fc_timer;
+ bool has_cache_bars;
+ struct {
+@@ -659,23 +651,6 @@ static void tegra_pcie_apply_sw_fixup(struct tegra_pcie_port *port)
+ writel(value, port->base + RP_VEND_CTL0);
+ }
+
+- /* Fixup for read after write violation. */
+- if (soc->raw_violation_fixup) {
+- value = readl(port->base + RP_RX_HDR_LIMIT);
+- value &= ~RP_RX_HDR_LIMIT_PW_MASK;
+- value |= RP_RX_HDR_LIMIT_PW;
+- writel(value, port->base + RP_RX_HDR_LIMIT);
+-
+- value = readl(port->base + RP_PRIV_XP_DL);
+- value |= RP_PRIV_XP_DL_GEN2_UPD_FC_TSHOLD;
+- writel(value, port->base + RP_PRIV_XP_DL);
+-
+- value = readl(port->base + RP_VEND_XP);
+- value &= ~RP_VEND_XP_UPDATE_FC_THRESHOLD_MASK;
+- value |= soc->update_fc_threshold;
+- writel(value, port->base + RP_VEND_XP);
+- }
+-
+ if (soc->update_fc_timer) {
+ value = readl(port->base + RP_VEND_XP);
+ value &= ~RP_VEND_XP_UPDATE_FC_THRESHOLD_MASK;
+@@ -2416,7 +2391,6 @@ static const struct tegra_pcie_soc tegra20_pcie = {
+ .program_uphy = true,
+ .update_clamp_threshold = false,
+ .program_deskew_time = false,
+- .raw_violation_fixup = false,
+ .update_fc_timer = false,
+ .has_cache_bars = true,
+ .ectl.enable = false,
+@@ -2446,7 +2420,6 @@ static const struct tegra_pcie_soc tegra30_pcie = {
+ .program_uphy = true,
+ .update_clamp_threshold = false,
+ .program_deskew_time = false,
+- .raw_violation_fixup = false,
+ .update_fc_timer = false,
+ .has_cache_bars = false,
+ .ectl.enable = false,
+@@ -2459,8 +2432,6 @@ static const struct tegra_pcie_soc tegra124_pcie = {
+ .pads_pll_ctl = PADS_PLL_CTL_TEGRA30,
+ .tx_ref_sel = PADS_PLL_CTL_TXCLKREF_BUF_EN,
+ .pads_refclk_cfg0 = 0x44ac44ac,
+- /* FC threshold is bit[25:18] */
+- .update_fc_threshold = 0x03fc0000,
+ .has_pex_clkreq_en = true,
+ .has_pex_bias_ctrl = true,
+ .has_intr_prsnt_sense = true,
+@@ -2470,7 +2441,6 @@ static const struct tegra_pcie_soc tegra124_pcie = {
+ .program_uphy = true,
+ .update_clamp_threshold = true,
+ .program_deskew_time = false,
+- .raw_violation_fixup = true,
+ .update_fc_timer = false,
+ .has_cache_bars = false,
+ .ectl.enable = false,
+@@ -2494,7 +2464,6 @@ static const struct tegra_pcie_soc tegra210_pcie = {
+ .program_uphy = true,
+ .update_clamp_threshold = true,
+ .program_deskew_time = true,
+- .raw_violation_fixup = false,
+ .update_fc_timer = true,
+ .has_cache_bars = false,
+ .ectl = {
+@@ -2536,7 +2505,6 @@ static const struct tegra_pcie_soc tegra186_pcie = {
+ .program_uphy = false,
+ .update_clamp_threshold = false,
+ .program_deskew_time = false,
+- .raw_violation_fixup = false,
+ .update_fc_timer = false,
+ .has_cache_bars = false,
+ .ectl.enable = false,
+diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c
+index 52740b60d786..7ca32ede5e17 100644
+--- a/drivers/scsi/ufs/ufshcd.c
++++ b/drivers/scsi/ufs/ufshcd.c
+@@ -1908,8 +1908,11 @@ static void ufshcd_clk_scaling_update_busy(struct ufs_hba *hba)
+ static inline
+ void ufshcd_send_command(struct ufs_hba *hba, unsigned int task_tag)
+ {
+- hba->lrb[task_tag].issue_time_stamp = ktime_get();
+- hba->lrb[task_tag].compl_time_stamp = ktime_set(0, 0);
++ struct ufshcd_lrb *lrbp = &hba->lrb[task_tag];
++
++ lrbp->issue_time_stamp = ktime_get();
++ lrbp->compl_time_stamp = ktime_set(0, 0);
++ ufshcd_vops_setup_xfer_req(hba, task_tag, (lrbp->cmd ? true : false));
+ ufshcd_add_command_trace(hba, task_tag, "send");
+ ufshcd_clk_scaling_start_busy(hba);
+ __set_bit(task_tag, &hba->outstanding_reqs);
+@@ -2519,7 +2522,6 @@ static int ufshcd_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *cmd)
+
+ /* issue command to the controller */
+ spin_lock_irqsave(hba->host->host_lock, flags);
+- ufshcd_vops_setup_xfer_req(hba, tag, true);
+ ufshcd_send_command(hba, tag);
+ out_unlock:
+ spin_unlock_irqrestore(hba->host->host_lock, flags);
+@@ -2706,7 +2708,6 @@ static int ufshcd_exec_dev_cmd(struct ufs_hba *hba,
+ /* Make sure descriptors are ready before ringing the doorbell */
+ wmb();
+ spin_lock_irqsave(hba->host->host_lock, flags);
+- ufshcd_vops_setup_xfer_req(hba, tag, false);
+ ufshcd_send_command(hba, tag);
+ spin_unlock_irqrestore(hba->host->host_lock, flags);
+
+diff --git a/drivers/staging/android/ashmem.c b/drivers/staging/android/ashmem.c
+index 8044510d8ec6..d0d195bc3436 100644
+--- a/drivers/staging/android/ashmem.c
++++ b/drivers/staging/android/ashmem.c
+@@ -95,6 +95,15 @@ static DEFINE_MUTEX(ashmem_mutex);
+ static struct kmem_cache *ashmem_area_cachep __read_mostly;
+ static struct kmem_cache *ashmem_range_cachep __read_mostly;
+
++/*
++ * A separate lockdep class for the backing shmem inodes to resolve the lockdep
++ * warning about the race between kswapd taking fs_reclaim before inode_lock
++ * and write syscall taking inode_lock and then fs_reclaim.
++ * Note that such a race is impossible because ashmem does not support write
++ * syscalls operating on the backing shmem.
++ */
++static struct lock_class_key backing_shmem_inode_class;
++
+ static inline unsigned long range_size(struct ashmem_range *range)
+ {
+ return range->pgend - range->pgstart + 1;
+@@ -396,6 +405,7 @@ static int ashmem_mmap(struct file *file, struct vm_area_struct *vma)
+ if (!asma->file) {
+ char *name = ASHMEM_NAME_DEF;
+ struct file *vmfile;
++ struct inode *inode;
+
+ if (asma->name[ASHMEM_NAME_PREFIX_LEN] != '\0')
+ name = asma->name;
+@@ -407,6 +417,8 @@ static int ashmem_mmap(struct file *file, struct vm_area_struct *vma)
+ goto out;
+ }
+ vmfile->f_mode |= FMODE_LSEEK;
++ inode = file_inode(vmfile);
++ lockdep_set_class(&inode->i_rwsem, &backing_shmem_inode_class);
+ asma->file = vmfile;
+ /*
+ * override mmap operation of the vmfile so that it can't be
+diff --git a/drivers/staging/rtl8188eu/core/rtw_mlme.c b/drivers/staging/rtl8188eu/core/rtw_mlme.c
+index 9de2d421f6b1..4f2abe1e14d5 100644
+--- a/drivers/staging/rtl8188eu/core/rtw_mlme.c
++++ b/drivers/staging/rtl8188eu/core/rtw_mlme.c
+@@ -1729,9 +1729,11 @@ int rtw_restruct_sec_ie(struct adapter *adapter, u8 *in_ie, u8 *out_ie, uint in_
+ if ((ndisauthmode == Ndis802_11AuthModeWPA) ||
+ (ndisauthmode == Ndis802_11AuthModeWPAPSK))
+ authmode = _WPA_IE_ID_;
+- if ((ndisauthmode == Ndis802_11AuthModeWPA2) ||
++ else if ((ndisauthmode == Ndis802_11AuthModeWPA2) ||
+ (ndisauthmode == Ndis802_11AuthModeWPA2PSK))
+ authmode = _WPA2_IE_ID_;
++ else
++ authmode = 0x0;
+
+ if (check_fwstate(pmlmepriv, WIFI_UNDER_WPS)) {
+ memcpy(out_ie + ielength, psecuritypriv->wps_ie, psecuritypriv->wps_ie_len);
+diff --git a/drivers/staging/rtl8712/hal_init.c b/drivers/staging/rtl8712/hal_init.c
+index 40145c0338e4..42c0a3c947f1 100644
+--- a/drivers/staging/rtl8712/hal_init.c
++++ b/drivers/staging/rtl8712/hal_init.c
+@@ -33,7 +33,6 @@ static void rtl871x_load_fw_cb(const struct firmware *firmware, void *context)
+ {
+ struct _adapter *adapter = context;
+
+- complete(&adapter->rtl8712_fw_ready);
+ if (!firmware) {
+ struct usb_device *udev = adapter->dvobjpriv.pusbdev;
+ struct usb_interface *usb_intf = adapter->pusb_intf;
+@@ -41,11 +40,13 @@ static void rtl871x_load_fw_cb(const struct firmware *firmware, void *context)
+ dev_err(&udev->dev, "r8712u: Firmware request failed\n");
+ usb_put_dev(udev);
+ usb_set_intfdata(usb_intf, NULL);
++ complete(&adapter->rtl8712_fw_ready);
+ return;
+ }
+ adapter->fw = firmware;
+ /* firmware available - start netdev */
+ register_netdev(adapter->pnetdev);
++ complete(&adapter->rtl8712_fw_ready);
+ }
+
+ static const char firmware_file[] = "rtlwifi/rtl8712u.bin";
+diff --git a/drivers/staging/rtl8712/usb_intf.c b/drivers/staging/rtl8712/usb_intf.c
+index a87562f632a7..2fcd65260f4c 100644
+--- a/drivers/staging/rtl8712/usb_intf.c
++++ b/drivers/staging/rtl8712/usb_intf.c
+@@ -595,13 +595,17 @@ static void r871xu_dev_remove(struct usb_interface *pusb_intf)
+ if (pnetdev) {
+ struct _adapter *padapter = netdev_priv(pnetdev);
+
+- usb_set_intfdata(pusb_intf, NULL);
+- release_firmware(padapter->fw);
+ /* never exit with a firmware callback pending */
+ wait_for_completion(&padapter->rtl8712_fw_ready);
++ pnetdev = usb_get_intfdata(pusb_intf);
++ usb_set_intfdata(pusb_intf, NULL);
++ if (!pnetdev)
++ goto firmware_load_fail;
++ release_firmware(padapter->fw);
+ if (drvpriv.drv_registered)
+ padapter->surprise_removed = true;
+- unregister_netdev(pnetdev); /* will call netdev_close() */
++ if (pnetdev->reg_state != NETREG_UNINITIALIZED)
++ unregister_netdev(pnetdev); /* will call netdev_close() */
+ flush_scheduled_work();
+ udelay(1);
+ /* Stop driver mlme relation timer */
+@@ -614,6 +618,7 @@ static void r871xu_dev_remove(struct usb_interface *pusb_intf)
+ */
+ usb_put_dev(udev);
+ }
++firmware_load_fail:
+ /* If we didn't unplug usb dongle and remove/insert module, driver
+ * fails on sitesurvey for the first time when device is up.
+ * Reset usb port for sitesurvey fail issue.
+diff --git a/drivers/usb/host/xhci-pci.c b/drivers/usb/host/xhci-pci.c
+index 51251c1be059..040497de6e87 100644
+--- a/drivers/usb/host/xhci-pci.c
++++ b/drivers/usb/host/xhci-pci.c
+@@ -56,7 +56,10 @@
+ #define PCI_DEVICE_ID_AMD_PROMONTORYA_3 0x43ba
+ #define PCI_DEVICE_ID_AMD_PROMONTORYA_2 0x43bb
+ #define PCI_DEVICE_ID_AMD_PROMONTORYA_1 0x43bc
++#define PCI_DEVICE_ID_ASMEDIA_1042_XHCI 0x1042
+ #define PCI_DEVICE_ID_ASMEDIA_1042A_XHCI 0x1142
++#define PCI_DEVICE_ID_ASMEDIA_1142_XHCI 0x1242
++#define PCI_DEVICE_ID_ASMEDIA_2142_XHCI 0x2142
+
+ static const char hcd_name[] = "xhci_hcd";
+
+@@ -250,13 +253,14 @@ static void xhci_pci_quirks(struct device *dev, struct xhci_hcd *xhci)
+ xhci->quirks |= XHCI_LPM_SUPPORT;
+
+ if (pdev->vendor == PCI_VENDOR_ID_ASMEDIA &&
+- pdev->device == 0x1042)
++ pdev->device == PCI_DEVICE_ID_ASMEDIA_1042_XHCI)
+ xhci->quirks |= XHCI_BROKEN_STREAMS;
+ if (pdev->vendor == PCI_VENDOR_ID_ASMEDIA &&
+- pdev->device == 0x1142)
++ pdev->device == PCI_DEVICE_ID_ASMEDIA_1042A_XHCI)
+ xhci->quirks |= XHCI_TRUST_TX_LENGTH;
+ if (pdev->vendor == PCI_VENDOR_ID_ASMEDIA &&
+- pdev->device == 0x2142)
++ (pdev->device == PCI_DEVICE_ID_ASMEDIA_1142_XHCI ||
++ pdev->device == PCI_DEVICE_ID_ASMEDIA_2142_XHCI))
+ xhci->quirks |= XHCI_NO_64BIT_SUPPORT;
+
+ if (pdev->vendor == PCI_VENDOR_ID_ASMEDIA &&
+diff --git a/drivers/usb/misc/iowarrior.c b/drivers/usb/misc/iowarrior.c
+index dce20301e367..103c69c692ba 100644
+--- a/drivers/usb/misc/iowarrior.c
++++ b/drivers/usb/misc/iowarrior.c
+@@ -2,8 +2,9 @@
+ /*
+ * Native support for the I/O-Warrior USB devices
+ *
+- * Copyright (c) 2003-2005 Code Mercenaries GmbH
+- * written by Christian Lucht <lucht@codemercs.com>
++ * Copyright (c) 2003-2005, 2020 Code Mercenaries GmbH
++ * written by Christian Lucht <lucht@codemercs.com> and
++ * Christoph Jung <jung@codemercs.com>
+ *
+ * based on
+
+@@ -802,14 +803,28 @@ static int iowarrior_probe(struct usb_interface *interface,
+
+ /* we have to check the report_size often, so remember it in the endianness suitable for our machine */
+ dev->report_size = usb_endpoint_maxp(dev->int_in_endpoint);
+- if ((dev->interface->cur_altsetting->desc.bInterfaceNumber == 0) &&
+- ((dev->product_id == USB_DEVICE_ID_CODEMERCS_IOW56) ||
+- (dev->product_id == USB_DEVICE_ID_CODEMERCS_IOW56AM) ||
+- (dev->product_id == USB_DEVICE_ID_CODEMERCS_IOW28) ||
+- (dev->product_id == USB_DEVICE_ID_CODEMERCS_IOW28L) ||
+- (dev->product_id == USB_DEVICE_ID_CODEMERCS_IOW100)))
+- /* IOWarrior56 has wMaxPacketSize different from report size */
+- dev->report_size = 7;
++
++ /*
++ * Some devices need the report size to be different than the
++ * endpoint size.
++ */
++ if (dev->interface->cur_altsetting->desc.bInterfaceNumber == 0) {
++ switch (dev->product_id) {
++ case USB_DEVICE_ID_CODEMERCS_IOW56:
++ case USB_DEVICE_ID_CODEMERCS_IOW56AM:
++ dev->report_size = 7;
++ break;
++
++ case USB_DEVICE_ID_CODEMERCS_IOW28:
++ case USB_DEVICE_ID_CODEMERCS_IOW28L:
++ dev->report_size = 4;
++ break;
++
++ case USB_DEVICE_ID_CODEMERCS_IOW100:
++ dev->report_size = 13;
++ break;
++ }
++ }
+
+ /* create the urb and buffer for reading */
+ dev->int_in_urb = usb_alloc_urb(0, GFP_KERNEL);
+diff --git a/drivers/usb/serial/qcserial.c b/drivers/usb/serial/qcserial.c
+index d147feae83e6..0f60363c1bbc 100644
+--- a/drivers/usb/serial/qcserial.c
++++ b/drivers/usb/serial/qcserial.c
+@@ -155,6 +155,7 @@ static const struct usb_device_id id_table[] = {
+ {DEVICE_SWI(0x1199, 0x9056)}, /* Sierra Wireless Modem */
+ {DEVICE_SWI(0x1199, 0x9060)}, /* Sierra Wireless Modem */
+ {DEVICE_SWI(0x1199, 0x9061)}, /* Sierra Wireless Modem */
++ {DEVICE_SWI(0x1199, 0x9062)}, /* Sierra Wireless EM7305 QDL */
+ {DEVICE_SWI(0x1199, 0x9063)}, /* Sierra Wireless EM7305 */
+ {DEVICE_SWI(0x1199, 0x9070)}, /* Sierra Wireless MC74xx */
+ {DEVICE_SWI(0x1199, 0x9071)}, /* Sierra Wireless MC74xx */
+diff --git a/drivers/video/console/vgacon.c b/drivers/video/console/vgacon.c
+index 998b0de1812f..e9254b3085a3 100644
+--- a/drivers/video/console/vgacon.c
++++ b/drivers/video/console/vgacon.c
+@@ -251,6 +251,10 @@ static void vgacon_scrollback_update(struct vc_data *c, int t, int count)
+ p = (void *) (c->vc_origin + t * c->vc_size_row);
+
+ while (count--) {
++ if ((vgacon_scrollback_cur->tail + c->vc_size_row) >
++ vgacon_scrollback_cur->size)
++ vgacon_scrollback_cur->tail = 0;
++
+ scr_memcpyw(vgacon_scrollback_cur->data +
+ vgacon_scrollback_cur->tail,
+ p, c->vc_size_row);
+diff --git a/drivers/video/fbdev/omap2/omapfb/dss/dss.c b/drivers/video/fbdev/omap2/omapfb/dss/dss.c
+index 7252d22dd117..bfc5c4c5a26a 100644
+--- a/drivers/video/fbdev/omap2/omapfb/dss/dss.c
++++ b/drivers/video/fbdev/omap2/omapfb/dss/dss.c
+@@ -833,7 +833,7 @@ static const struct dss_features omap34xx_dss_feats = {
+ };
+
+ static const struct dss_features omap3630_dss_feats = {
+- .fck_div_max = 32,
++ .fck_div_max = 31,
+ .dss_fck_multiplier = 1,
+ .parent_clk_name = "dpll4_ck",
+ .dpi_select_source = &dss_dpi_select_source_omap2_omap3,
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+index 4e09af1d5d22..fb9dc865c9ea 100644
+--- a/fs/io_uring.c
++++ b/fs/io_uring.c
+@@ -4260,10 +4260,9 @@ static void io_poll_task_handler(struct io_kiocb *req, struct io_kiocb **nxt)
+
+ hash_del(&req->hash_node);
+ io_poll_complete(req, req->result, 0);
+- req->flags |= REQ_F_COMP_LOCKED;
+- io_put_req_find_next(req, nxt);
+ spin_unlock_irq(&ctx->completion_lock);
+
++ io_put_req_find_next(req, nxt);
+ io_cqring_ev_posted(ctx);
+ }
+
+diff --git a/fs/xattr.c b/fs/xattr.c
+index 91608d9bfc6a..95f38f57347f 100644
+--- a/fs/xattr.c
++++ b/fs/xattr.c
+@@ -204,10 +204,22 @@ int __vfs_setxattr_noperm(struct dentry *dentry, const char *name,
+ return error;
+ }
+
+-
++/**
++ * __vfs_setxattr_locked: set an extended attribute while holding the inode
++ * lock
++ *
++ * @dentry - object to perform setxattr on
++ * @name - xattr name to set
++ * @value - value to set @name to
++ * @size - size of @value
++ * @flags - flags to pass into filesystem operations
++ * @delegated_inode - on return, will contain an inode pointer that
++ * a delegation was broken on, NULL if none.
++ */
+ int
+-vfs_setxattr(struct dentry *dentry, const char *name, const void *value,
+- size_t size, int flags)
++__vfs_setxattr_locked(struct dentry *dentry, const char *name,
++ const void *value, size_t size, int flags,
++ struct inode **delegated_inode)
+ {
+ struct inode *inode = dentry->d_inode;
+ int error;
+@@ -216,15 +228,40 @@ vfs_setxattr(struct dentry *dentry, const char *name, const void *value,
+ if (error)
+ return error;
+
+- inode_lock(inode);
+ error = security_inode_setxattr(dentry, name, value, size, flags);
+ if (error)
+ goto out;
+
++ error = try_break_deleg(inode, delegated_inode);
++ if (error)
++ goto out;
++
+ error = __vfs_setxattr_noperm(dentry, name, value, size, flags);
+
+ out:
++ return error;
++}
++EXPORT_SYMBOL_GPL(__vfs_setxattr_locked);
++
++int
++vfs_setxattr(struct dentry *dentry, const char *name, const void *value,
++ size_t size, int flags)
++{
++ struct inode *inode = dentry->d_inode;
++ struct inode *delegated_inode = NULL;
++ int error;
++
++retry_deleg:
++ inode_lock(inode);
++ error = __vfs_setxattr_locked(dentry, name, value, size, flags,
++ &delegated_inode);
+ inode_unlock(inode);
++
++ if (delegated_inode) {
++ error = break_deleg_wait(&delegated_inode);
++ if (!error)
++ goto retry_deleg;
++ }
+ return error;
+ }
+ EXPORT_SYMBOL_GPL(vfs_setxattr);
+@@ -378,8 +415,18 @@ __vfs_removexattr(struct dentry *dentry, const char *name)
+ }
+ EXPORT_SYMBOL(__vfs_removexattr);
+
++/**
++ * __vfs_removexattr_locked: remove an extended attribute while holding the
++ * inode lock
++ *
++ * @dentry - object to perform removexattr on
++ * @name - name of xattr to remove
++ * @delegated_inode - on return, will contain an inode pointer that
++ * a delegation was broken on, NULL if none.
++ */
+ int
+-vfs_removexattr(struct dentry *dentry, const char *name)
++__vfs_removexattr_locked(struct dentry *dentry, const char *name,
++ struct inode **delegated_inode)
+ {
+ struct inode *inode = dentry->d_inode;
+ int error;
+@@ -388,11 +435,14 @@ vfs_removexattr(struct dentry *dentry, const char *name)
+ if (error)
+ return error;
+
+- inode_lock(inode);
+ error = security_inode_removexattr(dentry, name);
+ if (error)
+ goto out;
+
++ error = try_break_deleg(inode, delegated_inode);
++ if (error)
++ goto out;
++
+ error = __vfs_removexattr(dentry, name);
+
+ if (!error) {
+@@ -401,12 +451,32 @@ vfs_removexattr(struct dentry *dentry, const char *name)
+ }
+
+ out:
++ return error;
++}
++EXPORT_SYMBOL_GPL(__vfs_removexattr_locked);
++
++int
++vfs_removexattr(struct dentry *dentry, const char *name)
++{
++ struct inode *inode = dentry->d_inode;
++ struct inode *delegated_inode = NULL;
++ int error;
++
++retry_deleg:
++ inode_lock(inode);
++ error = __vfs_removexattr_locked(dentry, name, &delegated_inode);
+ inode_unlock(inode);
++
++ if (delegated_inode) {
++ error = break_deleg_wait(&delegated_inode);
++ if (!error)
++ goto retry_deleg;
++ }
++
+ return error;
+ }
+ EXPORT_SYMBOL_GPL(vfs_removexattr);
+
+-
+ /*
+ * Extended attribute SET operations
+ */
+diff --git a/include/drm/drm_mode_config.h b/include/drm/drm_mode_config.h
+index 3bcbe30339f0..198b9d060008 100644
+--- a/include/drm/drm_mode_config.h
++++ b/include/drm/drm_mode_config.h
+@@ -865,6 +865,18 @@ struct drm_mode_config {
+ */
+ bool prefer_shadow_fbdev;
+
++ /**
++ * @fbdev_use_iomem:
++ *
++ * Set to true if the framebuffer resides in iomem.
++ * When set to true, memcpy_toio() is used when copying the framebuffer in
++ * drm_fb_helper.drm_fb_helper_dirty_blit_real().
++ *
++ * FIXME: This should be replaced with a per-mapping is_iomem
++ * flag (like ttm does), and then used everywhere in fbdev code.
++ */
++ bool fbdev_use_iomem;
++
+ /**
+ * @quirk_addfb_prefer_xbgr_30bpp:
+ *
+diff --git a/include/linux/rhashtable.h b/include/linux/rhashtable.h
+index e3def7bbe932..83ad875a7ea2 100644
+--- a/include/linux/rhashtable.h
++++ b/include/linux/rhashtable.h
+@@ -84,7 +84,7 @@ struct bucket_table {
+
+ struct lockdep_map dep_map;
+
+- struct rhash_lock_head *buckets[] ____cacheline_aligned_in_smp;
++ struct rhash_lock_head __rcu *buckets[] ____cacheline_aligned_in_smp;
+ };
+
+ /*
+@@ -261,13 +261,12 @@ void rhashtable_free_and_destroy(struct rhashtable *ht,
+ void *arg);
+ void rhashtable_destroy(struct rhashtable *ht);
+
+-struct rhash_lock_head **rht_bucket_nested(const struct bucket_table *tbl,
+- unsigned int hash);
+-struct rhash_lock_head **__rht_bucket_nested(const struct bucket_table *tbl,
+- unsigned int hash);
+-struct rhash_lock_head **rht_bucket_nested_insert(struct rhashtable *ht,
+- struct bucket_table *tbl,
+- unsigned int hash);
++struct rhash_lock_head __rcu **rht_bucket_nested(
++ const struct bucket_table *tbl, unsigned int hash);
++struct rhash_lock_head __rcu **__rht_bucket_nested(
++ const struct bucket_table *tbl, unsigned int hash);
++struct rhash_lock_head __rcu **rht_bucket_nested_insert(
++ struct rhashtable *ht, struct bucket_table *tbl, unsigned int hash);
+
+ #define rht_dereference(p, ht) \
+ rcu_dereference_protected(p, lockdep_rht_mutex_is_held(ht))
+@@ -284,21 +283,21 @@ struct rhash_lock_head **rht_bucket_nested_insert(struct rhashtable *ht,
+ #define rht_entry(tpos, pos, member) \
+ ({ tpos = container_of(pos, typeof(*tpos), member); 1; })
+
+-static inline struct rhash_lock_head *const *rht_bucket(
++static inline struct rhash_lock_head __rcu *const *rht_bucket(
+ const struct bucket_table *tbl, unsigned int hash)
+ {
+ return unlikely(tbl->nest) ? rht_bucket_nested(tbl, hash) :
+ &tbl->buckets[hash];
+ }
+
+-static inline struct rhash_lock_head **rht_bucket_var(
++static inline struct rhash_lock_head __rcu **rht_bucket_var(
+ struct bucket_table *tbl, unsigned int hash)
+ {
+ return unlikely(tbl->nest) ? __rht_bucket_nested(tbl, hash) :
+ &tbl->buckets[hash];
+ }
+
+-static inline struct rhash_lock_head **rht_bucket_insert(
++static inline struct rhash_lock_head __rcu **rht_bucket_insert(
+ struct rhashtable *ht, struct bucket_table *tbl, unsigned int hash)
+ {
+ return unlikely(tbl->nest) ? rht_bucket_nested_insert(ht, tbl, hash) :
+@@ -325,7 +324,7 @@ static inline struct rhash_lock_head **rht_bucket_insert(
+ */
+
+ static inline void rht_lock(struct bucket_table *tbl,
+- struct rhash_lock_head **bkt)
++ struct rhash_lock_head __rcu **bkt)
+ {
+ local_bh_disable();
+ bit_spin_lock(0, (unsigned long *)bkt);
+@@ -333,7 +332,7 @@ static inline void rht_lock(struct bucket_table *tbl,
+ }
+
+ static inline void rht_lock_nested(struct bucket_table *tbl,
+- struct rhash_lock_head **bucket,
++ struct rhash_lock_head __rcu **bucket,
+ unsigned int subclass)
+ {
+ local_bh_disable();
+@@ -342,7 +341,7 @@ static inline void rht_lock_nested(struct bucket_table *tbl,
+ }
+
+ static inline void rht_unlock(struct bucket_table *tbl,
+- struct rhash_lock_head **bkt)
++ struct rhash_lock_head __rcu **bkt)
+ {
+ lock_map_release(&tbl->dep_map);
+ bit_spin_unlock(0, (unsigned long *)bkt);
+@@ -365,48 +364,41 @@ static inline struct rhash_head *__rht_ptr(
+ * access is guaranteed, such as when destroying the table.
+ */
+ static inline struct rhash_head *rht_ptr_rcu(
+- struct rhash_lock_head *const *p)
++ struct rhash_lock_head __rcu *const *bkt)
+ {
+- struct rhash_lock_head __rcu *const *bkt = (void *)p;
+ return __rht_ptr(rcu_dereference(*bkt), bkt);
+ }
+
+ static inline struct rhash_head *rht_ptr(
+- struct rhash_lock_head *const *p,
++ struct rhash_lock_head __rcu *const *bkt,
+ struct bucket_table *tbl,
+ unsigned int hash)
+ {
+- struct rhash_lock_head __rcu *const *bkt = (void *)p;
+ return __rht_ptr(rht_dereference_bucket(*bkt, tbl, hash), bkt);
+ }
+
+ static inline struct rhash_head *rht_ptr_exclusive(
+- struct rhash_lock_head *const *p)
++ struct rhash_lock_head __rcu *const *bkt)
+ {
+- struct rhash_lock_head __rcu *const *bkt = (void *)p;
+ return __rht_ptr(rcu_dereference_protected(*bkt, 1), bkt);
+ }
+
+-static inline void rht_assign_locked(struct rhash_lock_head **bkt,
++static inline void rht_assign_locked(struct rhash_lock_head __rcu **bkt,
+ struct rhash_head *obj)
+ {
+- struct rhash_head __rcu **p = (struct rhash_head __rcu **)bkt;
+-
+ if (rht_is_a_nulls(obj))
+ obj = NULL;
+- rcu_assign_pointer(*p, (void *)((unsigned long)obj | BIT(0)));
++ rcu_assign_pointer(*bkt, (void *)((unsigned long)obj | BIT(0)));
+ }
+
+ static inline void rht_assign_unlock(struct bucket_table *tbl,
+- struct rhash_lock_head **bkt,
++ struct rhash_lock_head __rcu **bkt,
+ struct rhash_head *obj)
+ {
+- struct rhash_head __rcu **p = (struct rhash_head __rcu **)bkt;
+-
+ if (rht_is_a_nulls(obj))
+ obj = NULL;
+ lock_map_release(&tbl->dep_map);
+- rcu_assign_pointer(*p, obj);
++ rcu_assign_pointer(*bkt, (void *)obj);
+ preempt_enable();
+ __release(bitlock);
+ local_bh_enable();
+@@ -594,7 +586,7 @@ static inline struct rhash_head *__rhashtable_lookup(
+ .ht = ht,
+ .key = key,
+ };
+- struct rhash_lock_head *const *bkt;
++ struct rhash_lock_head __rcu *const *bkt;
+ struct bucket_table *tbl;
+ struct rhash_head *he;
+ unsigned int hash;
+@@ -710,7 +702,7 @@ static inline void *__rhashtable_insert_fast(
+ .ht = ht,
+ .key = key,
+ };
+- struct rhash_lock_head **bkt;
++ struct rhash_lock_head __rcu **bkt;
+ struct rhash_head __rcu **pprev;
+ struct bucket_table *tbl;
+ struct rhash_head *head;
+@@ -996,7 +988,7 @@ static inline int __rhashtable_remove_fast_one(
+ struct rhash_head *obj, const struct rhashtable_params params,
+ bool rhlist)
+ {
+- struct rhash_lock_head **bkt;
++ struct rhash_lock_head __rcu **bkt;
+ struct rhash_head __rcu **pprev;
+ struct rhash_head *he;
+ unsigned int hash;
+@@ -1148,7 +1140,7 @@ static inline int __rhashtable_replace_fast(
+ struct rhash_head *obj_old, struct rhash_head *obj_new,
+ const struct rhashtable_params params)
+ {
+- struct rhash_lock_head **bkt;
++ struct rhash_lock_head __rcu **bkt;
+ struct rhash_head __rcu **pprev;
+ struct rhash_head *he;
+ unsigned int hash;
+diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
+index 7e737a94bc63..7f348591647a 100644
+--- a/include/linux/skbuff.h
++++ b/include/linux/skbuff.h
+@@ -283,6 +283,7 @@ struct nf_bridge_info {
+ */
+ struct tc_skb_ext {
+ __u32 chain;
++ __u16 mru;
+ };
+ #endif
+
+diff --git a/include/linux/xattr.h b/include/linux/xattr.h
+index c5afaf8ca7a2..902b740b6cac 100644
+--- a/include/linux/xattr.h
++++ b/include/linux/xattr.h
+@@ -52,8 +52,10 @@ ssize_t vfs_getxattr(struct dentry *, const char *, void *, size_t);
+ ssize_t vfs_listxattr(struct dentry *d, char *list, size_t size);
+ int __vfs_setxattr(struct dentry *, struct inode *, const char *, const void *, size_t, int);
+ int __vfs_setxattr_noperm(struct dentry *, const char *, const void *, size_t, int);
++int __vfs_setxattr_locked(struct dentry *, const char *, const void *, size_t, int, struct inode **);
+ int vfs_setxattr(struct dentry *, const char *, const void *, size_t, int);
+ int __vfs_removexattr(struct dentry *, const char *);
++int __vfs_removexattr_locked(struct dentry *, const char *, struct inode **);
+ int vfs_removexattr(struct dentry *, const char *);
+
+ ssize_t generic_listxattr(struct dentry *dentry, char *buffer, size_t buffer_size);
+diff --git a/include/net/addrconf.h b/include/net/addrconf.h
+index e0eabe58aa8b..d9c76c6d8f72 100644
+--- a/include/net/addrconf.h
++++ b/include/net/addrconf.h
+@@ -276,6 +276,7 @@ int ipv6_sock_ac_join(struct sock *sk, int ifindex,
+ const struct in6_addr *addr);
+ int ipv6_sock_ac_drop(struct sock *sk, int ifindex,
+ const struct in6_addr *addr);
++void __ipv6_sock_ac_close(struct sock *sk);
+ void ipv6_sock_ac_close(struct sock *sk);
+
+ int __ipv6_dev_ac_inc(struct inet6_dev *idev, const struct in6_addr *addr);
+diff --git a/include/net/sch_generic.h b/include/net/sch_generic.h
+index 8428aa614265..f6bcd3960ba8 100644
+--- a/include/net/sch_generic.h
++++ b/include/net/sch_generic.h
+@@ -380,6 +380,7 @@ struct qdisc_skb_cb {
+ };
+ #define QDISC_CB_PRIV_LEN 20
+ unsigned char data[QDISC_CB_PRIV_LEN];
++ u16 mru;
+ };
+
+ typedef void tcf_chain_head_change_t(struct tcf_proto *tp_head, void *priv);
+@@ -459,7 +460,7 @@ static inline void qdisc_cb_private_validate(const struct sk_buff *skb, int sz)
+ {
+ struct qdisc_skb_cb *qcb;
+
+- BUILD_BUG_ON(sizeof(skb->cb) < offsetof(struct qdisc_skb_cb, data) + sz);
++ BUILD_BUG_ON(sizeof(skb->cb) < sizeof(*qcb));
+ BUILD_BUG_ON(sizeof(qcb->data) < sz);
+ }
+
+diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
+index d1f5d428c9fe..6cafc596631c 100644
+--- a/kernel/bpf/btf.c
++++ b/kernel/bpf/btf.c
+@@ -4011,6 +4011,11 @@ static int __btf_resolve_helper_id(struct bpf_verifier_log *log, void *fn,
+ const char *tname, *sym;
+ u32 btf_id, i;
+
++ if (!btf_vmlinux) {
++ bpf_log(log, "btf_vmlinux doesn't exist\n");
++ return -EINVAL;
++ }
++
+ if (IS_ERR(btf_vmlinux)) {
+ bpf_log(log, "btf_vmlinux is malformed\n");
+ return -EINVAL;
+diff --git a/lib/rhashtable.c b/lib/rhashtable.c
+index bdb7e4cadf05..da531dacb496 100644
+--- a/lib/rhashtable.c
++++ b/lib/rhashtable.c
+@@ -31,7 +31,7 @@
+
+ union nested_table {
+ union nested_table __rcu *table;
+- struct rhash_lock_head *bucket;
++ struct rhash_lock_head __rcu *bucket;
+ };
+
+ static u32 head_hashfn(struct rhashtable *ht,
+@@ -213,7 +213,7 @@ static struct bucket_table *rhashtable_last_table(struct rhashtable *ht,
+ }
+
+ static int rhashtable_rehash_one(struct rhashtable *ht,
+- struct rhash_lock_head **bkt,
++ struct rhash_lock_head __rcu **bkt,
+ unsigned int old_hash)
+ {
+ struct bucket_table *old_tbl = rht_dereference(ht->tbl, ht);
+@@ -266,7 +266,7 @@ static int rhashtable_rehash_chain(struct rhashtable *ht,
+ unsigned int old_hash)
+ {
+ struct bucket_table *old_tbl = rht_dereference(ht->tbl, ht);
+- struct rhash_lock_head **bkt = rht_bucket_var(old_tbl, old_hash);
++ struct rhash_lock_head __rcu **bkt = rht_bucket_var(old_tbl, old_hash);
+ int err;
+
+ if (!bkt)
+@@ -476,7 +476,7 @@ fail:
+ }
+
+ static void *rhashtable_lookup_one(struct rhashtable *ht,
+- struct rhash_lock_head **bkt,
++ struct rhash_lock_head __rcu **bkt,
+ struct bucket_table *tbl, unsigned int hash,
+ const void *key, struct rhash_head *obj)
+ {
+@@ -526,12 +526,10 @@ static void *rhashtable_lookup_one(struct rhashtable *ht,
+ return ERR_PTR(-ENOENT);
+ }
+
+-static struct bucket_table *rhashtable_insert_one(struct rhashtable *ht,
+- struct rhash_lock_head **bkt,
+- struct bucket_table *tbl,
+- unsigned int hash,
+- struct rhash_head *obj,
+- void *data)
++static struct bucket_table *rhashtable_insert_one(
++ struct rhashtable *ht, struct rhash_lock_head __rcu **bkt,
++ struct bucket_table *tbl, unsigned int hash, struct rhash_head *obj,
++ void *data)
+ {
+ struct bucket_table *new_tbl;
+ struct rhash_head *head;
+@@ -582,7 +580,7 @@ static void *rhashtable_try_insert(struct rhashtable *ht, const void *key,
+ {
+ struct bucket_table *new_tbl;
+ struct bucket_table *tbl;
+- struct rhash_lock_head **bkt;
++ struct rhash_lock_head __rcu **bkt;
+ unsigned int hash;
+ void *data;
+
+@@ -1164,8 +1162,8 @@ void rhashtable_destroy(struct rhashtable *ht)
+ }
+ EXPORT_SYMBOL_GPL(rhashtable_destroy);
+
+-struct rhash_lock_head **__rht_bucket_nested(const struct bucket_table *tbl,
+- unsigned int hash)
++struct rhash_lock_head __rcu **__rht_bucket_nested(
++ const struct bucket_table *tbl, unsigned int hash)
+ {
+ const unsigned int shift = PAGE_SHIFT - ilog2(sizeof(void *));
+ unsigned int index = hash & ((1 << tbl->nest) - 1);
+@@ -1193,10 +1191,10 @@ struct rhash_lock_head **__rht_bucket_nested(const struct bucket_table *tbl,
+ }
+ EXPORT_SYMBOL_GPL(__rht_bucket_nested);
+
+-struct rhash_lock_head **rht_bucket_nested(const struct bucket_table *tbl,
+- unsigned int hash)
++struct rhash_lock_head __rcu **rht_bucket_nested(
++ const struct bucket_table *tbl, unsigned int hash)
+ {
+- static struct rhash_lock_head *rhnull;
++ static struct rhash_lock_head __rcu *rhnull;
+
+ if (!rhnull)
+ INIT_RHT_NULLS_HEAD(rhnull);
+@@ -1204,9 +1202,8 @@ struct rhash_lock_head **rht_bucket_nested(const struct bucket_table *tbl,
+ }
+ EXPORT_SYMBOL_GPL(rht_bucket_nested);
+
+-struct rhash_lock_head **rht_bucket_nested_insert(struct rhashtable *ht,
+- struct bucket_table *tbl,
+- unsigned int hash)
++struct rhash_lock_head __rcu **rht_bucket_nested_insert(
++ struct rhashtable *ht, struct bucket_table *tbl, unsigned int hash)
+ {
+ const unsigned int shift = PAGE_SHIFT - ilog2(sizeof(void *));
+ unsigned int index = hash & ((1 << tbl->nest) - 1);
+diff --git a/net/9p/trans_fd.c b/net/9p/trans_fd.c
+index 3f67803123be..12ecacf0c55f 100644
+--- a/net/9p/trans_fd.c
++++ b/net/9p/trans_fd.c
+@@ -816,20 +816,28 @@ static int p9_fd_open(struct p9_client *client, int rfd, int wfd)
+ return -ENOMEM;
+
+ ts->rd = fget(rfd);
++ if (!ts->rd)
++ goto out_free_ts;
++ if (!(ts->rd->f_mode & FMODE_READ))
++ goto out_put_rd;
+ ts->wr = fget(wfd);
+- if (!ts->rd || !ts->wr) {
+- if (ts->rd)
+- fput(ts->rd);
+- if (ts->wr)
+- fput(ts->wr);
+- kfree(ts);
+- return -EIO;
+- }
++ if (!ts->wr)
++ goto out_put_rd;
++ if (!(ts->wr->f_mode & FMODE_WRITE))
++ goto out_put_wr;
+
+ client->trans = ts;
+ client->status = Connected;
+
+ return 0;
++
++out_put_wr:
++ fput(ts->wr);
++out_put_rd:
++ fput(ts->rd);
++out_free_ts:
++ kfree(ts);
++ return -EIO;
+ }
+
+ static int p9_socket_open(struct p9_client *client, struct socket *csocket)
+diff --git a/net/appletalk/atalk_proc.c b/net/appletalk/atalk_proc.c
+index 550c6ca007cc..9c1241292d1d 100644
+--- a/net/appletalk/atalk_proc.c
++++ b/net/appletalk/atalk_proc.c
+@@ -229,6 +229,8 @@ int __init atalk_proc_init(void)
+ sizeof(struct aarp_iter_state), NULL))
+ goto out;
+
++ return 0;
++
+ out:
+ remove_proc_subtree("atalk", init_net.proc_net);
+ return -ENOMEM;
+diff --git a/net/bluetooth/hci_event.c b/net/bluetooth/hci_event.c
+index fe75f435171c..2e481ee9fb52 100644
+--- a/net/bluetooth/hci_event.c
++++ b/net/bluetooth/hci_event.c
+@@ -2487,7 +2487,7 @@ static void hci_inquiry_result_evt(struct hci_dev *hdev, struct sk_buff *skb)
+
+ BT_DBG("%s num_rsp %d", hdev->name, num_rsp);
+
+- if (!num_rsp)
++ if (!num_rsp || skb->len < num_rsp * sizeof(*info) + 1)
+ return;
+
+ if (hci_dev_test_flag(hdev, HCI_PERIODIC_INQ))
+@@ -4143,6 +4143,9 @@ static void hci_inquiry_result_with_rssi_evt(struct hci_dev *hdev,
+ struct inquiry_info_with_rssi_and_pscan_mode *info;
+ info = (void *) (skb->data + 1);
+
++ if (skb->len < num_rsp * sizeof(*info) + 1)
++ goto unlock;
++
+ for (; num_rsp; num_rsp--, info++) {
+ u32 flags;
+
+@@ -4164,6 +4167,9 @@ static void hci_inquiry_result_with_rssi_evt(struct hci_dev *hdev,
+ } else {
+ struct inquiry_info_with_rssi *info = (void *) (skb->data + 1);
+
++ if (skb->len < num_rsp * sizeof(*info) + 1)
++ goto unlock;
++
+ for (; num_rsp; num_rsp--, info++) {
+ u32 flags;
+
+@@ -4184,6 +4190,7 @@ static void hci_inquiry_result_with_rssi_evt(struct hci_dev *hdev,
+ }
+ }
+
++unlock:
+ hci_dev_unlock(hdev);
+ }
+
+@@ -4346,7 +4353,7 @@ static void hci_extended_inquiry_result_evt(struct hci_dev *hdev,
+
+ BT_DBG("%s num_rsp %d", hdev->name, num_rsp);
+
+- if (!num_rsp)
++ if (!num_rsp || skb->len < num_rsp * sizeof(*info) + 1)
+ return;
+
+ if (hci_dev_test_flag(hdev, HCI_PERIODIC_INQ))
+diff --git a/net/bridge/br_device.c b/net/bridge/br_device.c
+index 0e3dbc5f3c34..22a0b3173456 100644
+--- a/net/bridge/br_device.c
++++ b/net/bridge/br_device.c
+@@ -36,6 +36,8 @@ netdev_tx_t br_dev_xmit(struct sk_buff *skb, struct net_device *dev)
+ const unsigned char *dest;
+ u16 vid = 0;
+
++ memset(skb->cb, 0, sizeof(struct br_input_skb_cb));
++
+ rcu_read_lock();
+ nf_ops = rcu_dereference(nf_br_ops);
+ if (nf_ops && nf_ops->br_dev_xmit_hook(skb)) {
+diff --git a/net/core/devlink.c b/net/core/devlink.c
+index 899edcee7dab..8547da27ea47 100644
+--- a/net/core/devlink.c
++++ b/net/core/devlink.c
+@@ -1065,7 +1065,9 @@ static int devlink_nl_cmd_sb_pool_get_dumpit(struct sk_buff *msg,
+ devlink_sb,
+ NETLINK_CB(cb->skb).portid,
+ cb->nlh->nlmsg_seq);
+- if (err && err != -EOPNOTSUPP) {
++ if (err == -EOPNOTSUPP) {
++ err = 0;
++ } else if (err) {
+ mutex_unlock(&devlink->lock);
+ goto out;
+ }
+@@ -1266,7 +1268,9 @@ static int devlink_nl_cmd_sb_port_pool_get_dumpit(struct sk_buff *msg,
+ devlink, devlink_sb,
+ NETLINK_CB(cb->skb).portid,
+ cb->nlh->nlmsg_seq);
+- if (err && err != -EOPNOTSUPP) {
++ if (err == -EOPNOTSUPP) {
++ err = 0;
++ } else if (err) {
+ mutex_unlock(&devlink->lock);
+ goto out;
+ }
+@@ -1498,7 +1502,9 @@ devlink_nl_cmd_sb_tc_pool_bind_get_dumpit(struct sk_buff *msg,
+ devlink_sb,
+ NETLINK_CB(cb->skb).portid,
+ cb->nlh->nlmsg_seq);
+- if (err && err != -EOPNOTSUPP) {
++ if (err == -EOPNOTSUPP) {
++ err = 0;
++ } else if (err) {
+ mutex_unlock(&devlink->lock);
+ goto out;
+ }
+@@ -3299,7 +3305,9 @@ static int devlink_nl_cmd_param_get_dumpit(struct sk_buff *msg,
+ NETLINK_CB(cb->skb).portid,
+ cb->nlh->nlmsg_seq,
+ NLM_F_MULTI);
+- if (err && err != -EOPNOTSUPP) {
++ if (err == -EOPNOTSUPP) {
++ err = 0;
++ } else if (err) {
+ mutex_unlock(&devlink->lock);
+ goto out;
+ }
+@@ -3569,7 +3577,9 @@ static int devlink_nl_cmd_port_param_get_dumpit(struct sk_buff *msg,
+ NETLINK_CB(cb->skb).portid,
+ cb->nlh->nlmsg_seq,
+ NLM_F_MULTI);
+- if (err && err != -EOPNOTSUPP) {
++ if (err == -EOPNOTSUPP) {
++ err = 0;
++ } else if (err) {
+ mutex_unlock(&devlink->lock);
+ goto out;
+ }
+@@ -4479,7 +4489,9 @@ static int devlink_nl_cmd_info_get_dumpit(struct sk_buff *msg,
+ cb->nlh->nlmsg_seq, NLM_F_MULTI,
+ cb->extack);
+ mutex_unlock(&devlink->lock);
+- if (err && err != -EOPNOTSUPP)
++ if (err == -EOPNOTSUPP)
++ err = 0;
++ else if (err)
+ break;
+ idx++;
+ }
+diff --git a/net/ipv4/fib_trie.c b/net/ipv4/fib_trie.c
+index 248f1c1959a6..3c65f71d0e82 100644
+--- a/net/ipv4/fib_trie.c
++++ b/net/ipv4/fib_trie.c
+@@ -1864,7 +1864,7 @@ struct fib_table *fib_trie_unmerge(struct fib_table *oldtb)
+ while ((l = leaf_walk_rcu(&tp, key)) != NULL) {
+ struct key_vector *local_l = NULL, *local_tp;
+
+- hlist_for_each_entry_rcu(fa, &l->leaf, fa_list) {
++ hlist_for_each_entry(fa, &l->leaf, fa_list) {
+ struct fib_alias *new_fa;
+
+ if (local_tb->tb_id != fa->tb_id)
+diff --git a/net/ipv4/gre_offload.c b/net/ipv4/gre_offload.c
+index 2e6d1b7a7bc9..e0a246575887 100644
+--- a/net/ipv4/gre_offload.c
++++ b/net/ipv4/gre_offload.c
+@@ -15,12 +15,12 @@ static struct sk_buff *gre_gso_segment(struct sk_buff *skb,
+ netdev_features_t features)
+ {
+ int tnl_hlen = skb_inner_mac_header(skb) - skb_transport_header(skb);
++ bool need_csum, need_recompute_csum, gso_partial;
+ struct sk_buff *segs = ERR_PTR(-EINVAL);
+ u16 mac_offset = skb->mac_header;
+ __be16 protocol = skb->protocol;
+ u16 mac_len = skb->mac_len;
+ int gre_offset, outer_hlen;
+- bool need_csum, gso_partial;
+
+ if (!skb->encapsulation)
+ goto out;
+@@ -41,6 +41,7 @@ static struct sk_buff *gre_gso_segment(struct sk_buff *skb,
+ skb->protocol = skb->inner_protocol;
+
+ need_csum = !!(skb_shinfo(skb)->gso_type & SKB_GSO_GRE_CSUM);
++ need_recompute_csum = skb->csum_not_inet;
+ skb->encap_hdr_csum = need_csum;
+
+ features &= skb->dev->hw_enc_features;
+@@ -98,7 +99,15 @@ static struct sk_buff *gre_gso_segment(struct sk_buff *skb,
+ }
+
+ *(pcsum + 1) = 0;
+- *pcsum = gso_make_checksum(skb, 0);
++ if (need_recompute_csum && !skb_is_gso(skb)) {
++ __wsum csum;
++
++ csum = skb_checksum(skb, gre_offset,
++ skb->len - gre_offset, 0);
++ *pcsum = csum_fold(csum);
++ } else {
++ *pcsum = gso_make_checksum(skb, 0);
++ }
+ } while ((skb = skb->next));
+ out:
+ return segs;
+diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
+index 32ac66a8c657..afee982734be 100644
+--- a/net/ipv4/tcp_input.c
++++ b/net/ipv4/tcp_input.c
+@@ -2945,6 +2945,8 @@ static bool tcp_ack_update_rtt(struct sock *sk, const int flag,
+ u32 delta = tcp_time_stamp(tp) - tp->rx_opt.rcv_tsecr;
+
+ if (likely(delta < INT_MAX / (USEC_PER_SEC / TCP_TS_HZ))) {
++ if (!delta)
++ delta = 1;
+ seq_rtt_us = delta * (USEC_PER_SEC / TCP_TS_HZ);
+ ca_rtt_us = seq_rtt_us;
+ }
+diff --git a/net/ipv6/anycast.c b/net/ipv6/anycast.c
+index fed91ab7ec46..cf3a88a10ddd 100644
+--- a/net/ipv6/anycast.c
++++ b/net/ipv6/anycast.c
+@@ -183,7 +183,7 @@ int ipv6_sock_ac_drop(struct sock *sk, int ifindex, const struct in6_addr *addr)
+ return 0;
+ }
+
+-void ipv6_sock_ac_close(struct sock *sk)
++void __ipv6_sock_ac_close(struct sock *sk)
+ {
+ struct ipv6_pinfo *np = inet6_sk(sk);
+ struct net_device *dev = NULL;
+@@ -191,10 +191,7 @@ void ipv6_sock_ac_close(struct sock *sk)
+ struct net *net = sock_net(sk);
+ int prev_index;
+
+- if (!np->ipv6_ac_list)
+- return;
+-
+- rtnl_lock();
++ ASSERT_RTNL();
+ pac = np->ipv6_ac_list;
+ np->ipv6_ac_list = NULL;
+
+@@ -211,6 +208,16 @@ void ipv6_sock_ac_close(struct sock *sk)
+ sock_kfree_s(sk, pac, sizeof(*pac));
+ pac = next;
+ }
++}
++
++void ipv6_sock_ac_close(struct sock *sk)
++{
++ struct ipv6_pinfo *np = inet6_sk(sk);
++
++ if (!np->ipv6_ac_list)
++ return;
++ rtnl_lock();
++ __ipv6_sock_ac_close(sk);
+ rtnl_unlock();
+ }
+
+diff --git a/net/ipv6/ipv6_sockglue.c b/net/ipv6/ipv6_sockglue.c
+index ff187fd2083f..f99d1641f602 100644
+--- a/net/ipv6/ipv6_sockglue.c
++++ b/net/ipv6/ipv6_sockglue.c
+@@ -205,6 +205,7 @@ static int do_ipv6_setsockopt(struct sock *sk, int level, int optname,
+
+ fl6_free_socklist(sk);
+ __ipv6_sock_mc_close(sk);
++ __ipv6_sock_ac_close(sk);
+
+ /*
+ * Sock is moving from IPv6 to IPv4 (sk_prot), so
+diff --git a/net/ipv6/route.c b/net/ipv6/route.c
+index e8a184acf668..de25836e4dde 100644
+--- a/net/ipv6/route.c
++++ b/net/ipv6/route.c
+@@ -3677,14 +3677,14 @@ static struct fib6_info *ip6_route_info_create(struct fib6_config *cfg,
+ rt->fib6_src.plen = cfg->fc_src_len;
+ #endif
+ if (nh) {
+- if (!nexthop_get(nh)) {
+- NL_SET_ERR_MSG(extack, "Nexthop has been deleted");
+- goto out;
+- }
+ if (rt->fib6_src.plen) {
+ NL_SET_ERR_MSG(extack, "Nexthops can not be used with source routing");
+ goto out;
+ }
++ if (!nexthop_get(nh)) {
++ NL_SET_ERR_MSG(extack, "Nexthop has been deleted");
++ goto out;
++ }
+ rt->nh = nh;
+ fib6_nh = nexthop_fib6_nh(rt->nh);
+ } else {
+diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
+index 4bf4f629975d..54e200b1b742 100644
+--- a/net/mptcp/protocol.c
++++ b/net/mptcp/protocol.c
+@@ -802,7 +802,6 @@ fallback:
+
+ mptcp_set_timeout(sk, ssk);
+ if (copied) {
+- ret = copied;
+ tcp_push(ssk, msg->msg_flags, mss_now, tcp_sk(ssk)->nonagle,
+ size_goal);
+
+@@ -815,7 +814,7 @@ fallback:
+ release_sock(ssk);
+ out:
+ release_sock(sk);
+- return ret;
++ return copied ? : ret;
+ }
+
+ static void mptcp_wait_data(struct sock *sk, long *timeo)
+diff --git a/net/mptcp/subflow.c b/net/mptcp/subflow.c
+index 0112ead58fd8..bb6ccde6bf49 100644
+--- a/net/mptcp/subflow.c
++++ b/net/mptcp/subflow.c
+@@ -999,6 +999,12 @@ int mptcp_subflow_create_socket(struct sock *sk, struct socket **new_sock)
+ struct socket *sf;
+ int err;
+
++ /* un-accepted server sockets can reach here - on bad configuration
++ * bail early to avoid greater trouble later
++ */
++ if (unlikely(!sk->sk_socket))
++ return -EINVAL;
++
+ err = sock_create_kern(net, sk->sk_family, SOCK_STREAM, IPPROTO_TCP,
+ &sf);
+ if (err)
+diff --git a/net/openvswitch/conntrack.c b/net/openvswitch/conntrack.c
+index 4340f25fe390..98d393e70de3 100644
+--- a/net/openvswitch/conntrack.c
++++ b/net/openvswitch/conntrack.c
+@@ -276,10 +276,6 @@ void ovs_ct_fill_key(const struct sk_buff *skb, struct sw_flow_key *key)
+ ovs_ct_update_key(skb, NULL, key, false, false);
+ }
+
+-#define IN6_ADDR_INITIALIZER(ADDR) \
+- { (ADDR).s6_addr32[0], (ADDR).s6_addr32[1], \
+- (ADDR).s6_addr32[2], (ADDR).s6_addr32[3] }
+-
+ int ovs_ct_put_key(const struct sw_flow_key *swkey,
+ const struct sw_flow_key *output, struct sk_buff *skb)
+ {
+@@ -301,24 +297,30 @@ int ovs_ct_put_key(const struct sw_flow_key *swkey,
+
+ if (swkey->ct_orig_proto) {
+ if (swkey->eth.type == htons(ETH_P_IP)) {
+- struct ovs_key_ct_tuple_ipv4 orig = {
+- output->ipv4.ct_orig.src,
+- output->ipv4.ct_orig.dst,
+- output->ct.orig_tp.src,
+- output->ct.orig_tp.dst,
+- output->ct_orig_proto,
+- };
++ struct ovs_key_ct_tuple_ipv4 orig;
++
++ memset(&orig, 0, sizeof(orig));
++ orig.ipv4_src = output->ipv4.ct_orig.src;
++ orig.ipv4_dst = output->ipv4.ct_orig.dst;
++ orig.src_port = output->ct.orig_tp.src;
++ orig.dst_port = output->ct.orig_tp.dst;
++ orig.ipv4_proto = output->ct_orig_proto;
++
+ if (nla_put(skb, OVS_KEY_ATTR_CT_ORIG_TUPLE_IPV4,
+ sizeof(orig), &orig))
+ return -EMSGSIZE;
+ } else if (swkey->eth.type == htons(ETH_P_IPV6)) {
+- struct ovs_key_ct_tuple_ipv6 orig = {
+- IN6_ADDR_INITIALIZER(output->ipv6.ct_orig.src),
+- IN6_ADDR_INITIALIZER(output->ipv6.ct_orig.dst),
+- output->ct.orig_tp.src,
+- output->ct.orig_tp.dst,
+- output->ct_orig_proto,
+- };
++ struct ovs_key_ct_tuple_ipv6 orig;
++
++ memset(&orig, 0, sizeof(orig));
++ memcpy(orig.ipv6_src, output->ipv6.ct_orig.src.s6_addr32,
++ sizeof(orig.ipv6_src));
++ memcpy(orig.ipv6_dst, output->ipv6.ct_orig.dst.s6_addr32,
++ sizeof(orig.ipv6_dst));
++ orig.src_port = output->ct.orig_tp.src;
++ orig.dst_port = output->ct.orig_tp.dst;
++ orig.ipv6_proto = output->ct_orig_proto;
++
+ if (nla_put(skb, OVS_KEY_ATTR_CT_ORIG_TUPLE_IPV6,
+ sizeof(orig), &orig))
+ return -EMSGSIZE;
+diff --git a/net/openvswitch/flow.c b/net/openvswitch/flow.c
+index 9d375e74b607..03942c30d83e 100644
+--- a/net/openvswitch/flow.c
++++ b/net/openvswitch/flow.c
+@@ -890,6 +890,7 @@ int ovs_flow_key_extract(const struct ip_tunnel_info *tun_info,
+ if (static_branch_unlikely(&tc_recirc_sharing_support)) {
+ tc_ext = skb_ext_find(skb, TC_SKB_EXT);
+ key->recirc_id = tc_ext ? tc_ext->chain : 0;
++ OVS_CB(skb)->mru = tc_ext ? tc_ext->mru : 0;
+ } else {
+ key->recirc_id = 0;
+ }
+diff --git a/net/rxrpc/call_object.c b/net/rxrpc/call_object.c
+index f07970207b54..38a46167523f 100644
+--- a/net/rxrpc/call_object.c
++++ b/net/rxrpc/call_object.c
+@@ -288,7 +288,7 @@ struct rxrpc_call *rxrpc_new_client_call(struct rxrpc_sock *rx,
+ */
+ ret = rxrpc_connect_call(rx, call, cp, srx, gfp);
+ if (ret < 0)
+- goto error;
++ goto error_attached_to_socket;
+
+ trace_rxrpc_call(call->debug_id, rxrpc_call_connected,
+ atomic_read(&call->usage), here, NULL);
+@@ -308,18 +308,29 @@ struct rxrpc_call *rxrpc_new_client_call(struct rxrpc_sock *rx,
+ error_dup_user_ID:
+ write_unlock(&rx->call_lock);
+ release_sock(&rx->sk);
+- ret = -EEXIST;
+-
+-error:
+ __rxrpc_set_call_completion(call, RXRPC_CALL_LOCAL_ERROR,
+- RX_CALL_DEAD, ret);
++ RX_CALL_DEAD, -EEXIST);
+ trace_rxrpc_call(call->debug_id, rxrpc_call_error,
+- atomic_read(&call->usage), here, ERR_PTR(ret));
++ atomic_read(&call->usage), here, ERR_PTR(-EEXIST));
+ rxrpc_release_call(rx, call);
+ mutex_unlock(&call->user_mutex);
+ rxrpc_put_call(call, rxrpc_call_put);
+- _leave(" = %d", ret);
+- return ERR_PTR(ret);
++ _leave(" = -EEXIST");
++ return ERR_PTR(-EEXIST);
++
++ /* We got an error, but the call is attached to the socket and is in
++ * need of release. However, we might now race with recvmsg() when
++ * completing the call queues it. Return 0 from sys_sendmsg() and
++ * leave the error to recvmsg() to deal with.
++ */
++error_attached_to_socket:
++ trace_rxrpc_call(call->debug_id, rxrpc_call_error,
++ atomic_read(&call->usage), here, ERR_PTR(ret));
++ set_bit(RXRPC_CALL_DISCONNECTED, &call->flags);
++ __rxrpc_set_call_completion(call, RXRPC_CALL_LOCAL_ERROR,
++ RX_CALL_DEAD, ret);
++ _leave(" = c=%08x [err]", call->debug_id);
++ return call;
+ }
+
+ /*
+diff --git a/net/rxrpc/conn_object.c b/net/rxrpc/conn_object.c
+index 19e141eeed17..8cbe0bf20ed5 100644
+--- a/net/rxrpc/conn_object.c
++++ b/net/rxrpc/conn_object.c
+@@ -212,9 +212,11 @@ void rxrpc_disconnect_call(struct rxrpc_call *call)
+
+ call->peer->cong_cwnd = call->cong_cwnd;
+
+- spin_lock_bh(&conn->params.peer->lock);
+- hlist_del_rcu(&call->error_link);
+- spin_unlock_bh(&conn->params.peer->lock);
++ if (!hlist_unhashed(&call->error_link)) {
++ spin_lock_bh(&call->peer->lock);
++ hlist_del_rcu(&call->error_link);
++ spin_unlock_bh(&call->peer->lock);
++ }
+
+ if (rxrpc_is_client_call(call))
+ return rxrpc_disconnect_client_call(call);
+diff --git a/net/rxrpc/recvmsg.c b/net/rxrpc/recvmsg.c
+index 6896a33ef842..4f48e3bdd4b4 100644
+--- a/net/rxrpc/recvmsg.c
++++ b/net/rxrpc/recvmsg.c
+@@ -541,7 +541,7 @@ try_again:
+ goto error_unlock_call;
+ }
+
+- if (msg->msg_name) {
++ if (msg->msg_name && call->peer) {
+ struct sockaddr_rxrpc *srx = msg->msg_name;
+ size_t len = sizeof(call->peer->srx);
+
+diff --git a/net/rxrpc/sendmsg.c b/net/rxrpc/sendmsg.c
+index 49d03c8c64da..1a340eb0abf7 100644
+--- a/net/rxrpc/sendmsg.c
++++ b/net/rxrpc/sendmsg.c
+@@ -683,6 +683,9 @@ int rxrpc_do_sendmsg(struct rxrpc_sock *rx, struct msghdr *msg, size_t len)
+ if (IS_ERR(call))
+ return PTR_ERR(call);
+ /* ... and we have the call lock. */
++ ret = 0;
++ if (READ_ONCE(call->state) == RXRPC_CALL_COMPLETE)
++ goto out_put_unlock;
+ } else {
+ switch (READ_ONCE(call->state)) {
+ case RXRPC_CALL_UNINITIALISED:
+diff --git a/net/sched/act_ct.c b/net/sched/act_ct.c
+index e191f2728389..417526d7741b 100644
+--- a/net/sched/act_ct.c
++++ b/net/sched/act_ct.c
+@@ -704,8 +704,10 @@ static int tcf_ct_handle_fragments(struct net *net, struct sk_buff *skb,
+ if (err && err != -EINPROGRESS)
+ goto out_free;
+
+- if (!err)
++ if (!err) {
+ *defrag = true;
++ cb.mru = IPCB(skb)->frag_max_size;
++ }
+ } else { /* NFPROTO_IPV6 */
+ #if IS_ENABLED(CONFIG_NF_DEFRAG_IPV6)
+ enum ip6_defrag_users user = IP6_DEFRAG_CONNTRACK_IN + zone;
+@@ -715,8 +717,10 @@ static int tcf_ct_handle_fragments(struct net *net, struct sk_buff *skb,
+ if (err && err != -EINPROGRESS)
+ goto out_free;
+
+- if (!err)
++ if (!err) {
+ *defrag = true;
++ cb.mru = IP6CB(skb)->frag_max_size;
++ }
+ #else
+ err = -EOPNOTSUPP;
+ goto out_free;
+diff --git a/net/sched/cls_api.c b/net/sched/cls_api.c
+index 58d469a66896..2ef39483a8bb 100644
+--- a/net/sched/cls_api.c
++++ b/net/sched/cls_api.c
+@@ -1679,6 +1679,7 @@ int tcf_classify_ingress(struct sk_buff *skb,
+ if (WARN_ON_ONCE(!ext))
+ return TC_ACT_SHOT;
+ ext->chain = last_executed_chain;
++ ext->mru = qdisc_skb_cb(skb)->mru;
+ }
+
+ return ret;
+diff --git a/net/wireless/nl80211.c b/net/wireless/nl80211.c
+index 7ae6b90e0d26..970f05c4150e 100644
+--- a/net/wireless/nl80211.c
++++ b/net/wireless/nl80211.c
+@@ -13190,13 +13190,13 @@ static int nl80211_vendor_cmd(struct sk_buff *skb, struct genl_info *info)
+ if (!wdev_running(wdev))
+ return -ENETDOWN;
+ }
+-
+- if (!vcmd->doit)
+- return -EOPNOTSUPP;
+ } else {
+ wdev = NULL;
+ }
+
++ if (!vcmd->doit)
++ return -EOPNOTSUPP;
++
+ if (info->attrs[NL80211_ATTR_VENDOR_DATA]) {
+ data = nla_data(info->attrs[NL80211_ATTR_VENDOR_DATA]);
+ len = nla_len(info->attrs[NL80211_ATTR_VENDOR_DATA]);
+diff --git a/scripts/coccinelle/misc/add_namespace.cocci b/scripts/coccinelle/misc/add_namespace.cocci
+index 99e93a6c2e24..cbf1614163cb 100644
+--- a/scripts/coccinelle/misc/add_namespace.cocci
++++ b/scripts/coccinelle/misc/add_namespace.cocci
+@@ -6,6 +6,7 @@
+ /// add a missing namespace tag to a module source file.
+ ///
+
++virtual nsdeps
+ virtual report
+
+ @has_ns_import@
+@@ -16,10 +17,15 @@ MODULE_IMPORT_NS(ns);
+
+ // Add missing imports, but only adjacent to a MODULE_LICENSE statement.
+ // That ensures we are adding it only to the main module source file.
+-@do_import depends on !has_ns_import@
++@do_import depends on !has_ns_import && nsdeps@
+ declarer name MODULE_LICENSE;
+ expression license;
+ identifier virtual.ns;
+ @@
+ MODULE_LICENSE(license);
+ + MODULE_IMPORT_NS(ns);
++
++// Dummy rule for report mode that would otherwise be empty and make spatch
++// fail ("No rules apply.")
++@script:python depends on report@
++@@
+diff --git a/scripts/nsdeps b/scripts/nsdeps
+index 03a8e7cbe6c7..dab4c1a0e27d 100644
+--- a/scripts/nsdeps
++++ b/scripts/nsdeps
+@@ -29,7 +29,7 @@ fi
+
+ generate_deps_for_ns() {
+ $SPATCH --very-quiet --in-place --sp-file \
+- $srctree/scripts/coccinelle/misc/add_namespace.cocci -D ns=$1 $2
++ $srctree/scripts/coccinelle/misc/add_namespace.cocci -D nsdeps -D ns=$1 $2
+ }
+
+ generate_deps() {
+diff --git a/security/integrity/ima/Kconfig b/security/integrity/ima/Kconfig
+index edde88dbe576..62dc11a5af01 100644
+--- a/security/integrity/ima/Kconfig
++++ b/security/integrity/ima/Kconfig
+@@ -232,7 +232,7 @@ config IMA_APPRAISE_REQUIRE_POLICY_SIGS
+
+ config IMA_APPRAISE_BOOTPARAM
+ bool "ima_appraise boot parameter"
+- depends on IMA_APPRAISE && !IMA_ARCH_POLICY
++ depends on IMA_APPRAISE
+ default y
+ help
+ This option enables the different "ima_appraise=" modes
+diff --git a/security/integrity/ima/ima_appraise.c b/security/integrity/ima/ima_appraise.c
+index a9649b04b9f1..28a59508c6bd 100644
+--- a/security/integrity/ima/ima_appraise.c
++++ b/security/integrity/ima/ima_appraise.c
+@@ -19,6 +19,12 @@
+ static int __init default_appraise_setup(char *str)
+ {
+ #ifdef CONFIG_IMA_APPRAISE_BOOTPARAM
++ if (arch_ima_get_secureboot()) {
++ pr_info("Secure boot enabled: ignoring ima_appraise=%s boot parameter option",
++ str);
++ return 1;
++ }
++
+ if (strncmp(str, "off", 3) == 0)
+ ima_appraise = 0;
+ else if (strncmp(str, "log", 3) == 0)
+diff --git a/security/smack/smackfs.c b/security/smack/smackfs.c
+index c21b656b3263..840a192e9337 100644
+--- a/security/smack/smackfs.c
++++ b/security/smack/smackfs.c
+@@ -2720,7 +2720,6 @@ static int smk_open_relabel_self(struct inode *inode, struct file *file)
+ static ssize_t smk_write_relabel_self(struct file *file, const char __user *buf,
+ size_t count, loff_t *ppos)
+ {
+- struct task_smack *tsp = smack_cred(current_cred());
+ char *data;
+ int rc;
+ LIST_HEAD(list_tmp);
+@@ -2745,11 +2744,21 @@ static ssize_t smk_write_relabel_self(struct file *file, const char __user *buf,
+ kfree(data);
+
+ if (!rc || (rc == -EINVAL && list_empty(&list_tmp))) {
++ struct cred *new;
++ struct task_smack *tsp;
++
++ new = prepare_creds();
++ if (!new) {
++ rc = -ENOMEM;
++ goto out;
++ }
++ tsp = smack_cred(new);
+ smk_destroy_label_list(&tsp->smk_relabel);
+ list_splice(&list_tmp, &tsp->smk_relabel);
++ commit_creds(new);
+ return count;
+ }
+-
++out:
+ smk_destroy_label_list(&list_tmp);
+ return rc;
+ }
+diff --git a/sound/core/seq/oss/seq_oss.c b/sound/core/seq/oss/seq_oss.c
+index 17f913657304..c8b9c0b315d8 100644
+--- a/sound/core/seq/oss/seq_oss.c
++++ b/sound/core/seq/oss/seq_oss.c
+@@ -168,10 +168,16 @@ static long
+ odev_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
+ {
+ struct seq_oss_devinfo *dp;
++ long rc;
++
+ dp = file->private_data;
+ if (snd_BUG_ON(!dp))
+ return -ENXIO;
+- return snd_seq_oss_ioctl(dp, cmd, arg);
++
++ mutex_lock(&register_mutex);
++ rc = snd_seq_oss_ioctl(dp, cmd, arg);
++ mutex_unlock(&register_mutex);
++ return rc;
+ }
+
+ #ifdef CONFIG_COMPAT
+diff --git a/sound/pci/hda/hda_codec.c b/sound/pci/hda/hda_codec.c
+index 7e3ae4534df9..803978d69e3c 100644
+--- a/sound/pci/hda/hda_codec.c
++++ b/sound/pci/hda/hda_codec.c
+@@ -2935,6 +2935,10 @@ static int hda_codec_runtime_suspend(struct device *dev)
+ struct hda_codec *codec = dev_to_hda_codec(dev);
+ unsigned int state;
+
++ /* Nothing to do if card registration fails and the component driver never probes */
++ if (!codec->card)
++ return 0;
++
+ cancel_delayed_work_sync(&codec->jackpoll_work);
+ state = hda_call_codec_suspend(codec);
+ if (codec->link_down_at_suspend ||
+@@ -2949,6 +2953,10 @@ static int hda_codec_runtime_resume(struct device *dev)
+ {
+ struct hda_codec *codec = dev_to_hda_codec(dev);
+
++ /* Nothing to do if card registration fails and the component driver never probes */
++ if (!codec->card)
++ return 0;
++
+ codec_display_power(codec, true);
+ snd_hdac_codec_link_up(&codec->core);
+ hda_call_codec_resume(codec);
+diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c
+index 9d14c40c07ea..a8b5db70050c 100644
+--- a/sound/pci/hda/hda_intel.c
++++ b/sound/pci/hda/hda_intel.c
+@@ -2354,7 +2354,6 @@ static int azx_probe_continue(struct azx *chip)
+
+ if (azx_has_pm_runtime(chip)) {
+ pm_runtime_use_autosuspend(&pci->dev);
+- pm_runtime_allow(&pci->dev);
+ pm_runtime_put_autosuspend(&pci->dev);
+ }
+
+diff --git a/sound/pci/hda/patch_ca0132.c b/sound/pci/hda/patch_ca0132.c
+index 34fe753a46fb..6dfa864d3fe7 100644
+--- a/sound/pci/hda/patch_ca0132.c
++++ b/sound/pci/hda/patch_ca0132.c
+@@ -1182,6 +1182,7 @@ static const struct snd_pci_quirk ca0132_quirks[] = {
+ SND_PCI_QUIRK(0x1458, 0xA036, "Gigabyte GA-Z170X-Gaming 7", QUIRK_R3DI),
+ SND_PCI_QUIRK(0x3842, 0x1038, "EVGA X99 Classified", QUIRK_R3DI),
+ SND_PCI_QUIRK(0x1102, 0x0013, "Recon3D", QUIRK_R3D),
++ SND_PCI_QUIRK(0x1102, 0x0018, "Recon3D", QUIRK_R3D),
+ SND_PCI_QUIRK(0x1102, 0x0051, "Sound Blaster AE-5", QUIRK_AE5),
+ {}
+ };
+@@ -4671,7 +4672,7 @@ static int ca0132_alt_select_in(struct hda_codec *codec)
+ tmp = FLOAT_ONE;
+ break;
+ case QUIRK_AE5:
+- ca0113_mmio_command_set(codec, 0x48, 0x28, 0x00);
++ ca0113_mmio_command_set(codec, 0x30, 0x28, 0x00);
+ tmp = FLOAT_THREE;
+ break;
+ default:
+@@ -4717,7 +4718,7 @@ static int ca0132_alt_select_in(struct hda_codec *codec)
+ r3di_gpio_mic_set(codec, R3DI_REAR_MIC);
+ break;
+ case QUIRK_AE5:
+- ca0113_mmio_command_set(codec, 0x48, 0x28, 0x00);
++ ca0113_mmio_command_set(codec, 0x30, 0x28, 0x00);
+ break;
+ default:
+ break;
+@@ -4756,7 +4757,7 @@ static int ca0132_alt_select_in(struct hda_codec *codec)
+ tmp = FLOAT_ONE;
+ break;
+ case QUIRK_AE5:
+- ca0113_mmio_command_set(codec, 0x48, 0x28, 0x3f);
++ ca0113_mmio_command_set(codec, 0x30, 0x28, 0x3f);
+ tmp = FLOAT_THREE;
+ break;
+ default:
+@@ -5748,6 +5749,11 @@ static int ca0132_switch_get(struct snd_kcontrol *kcontrol,
+ return 0;
+ }
+
++ if (nid == ZXR_HEADPHONE_GAIN) {
++ *valp = spec->zxr_gain_set;
++ return 0;
++ }
++
+ return 0;
+ }
+
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index d8d018536484..b27d88c86067 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -6131,6 +6131,11 @@ enum {
+ ALC289_FIXUP_ASUS_GA502,
+ ALC256_FIXUP_ACER_MIC_NO_PRESENCE,
+ ALC285_FIXUP_HP_GPIO_AMP_INIT,
++ ALC269_FIXUP_CZC_B20,
++ ALC269_FIXUP_CZC_TMI,
++ ALC269_FIXUP_CZC_L101,
++ ALC269_FIXUP_LEMOTE_A1802,
++ ALC269_FIXUP_LEMOTE_A190X,
+ };
+
+ static const struct hda_fixup alc269_fixups[] = {
+@@ -7369,6 +7374,89 @@ static const struct hda_fixup alc269_fixups[] = {
+ .chained = true,
+ .chain_id = ALC285_FIXUP_HP_GPIO_LED
+ },
++ [ALC269_FIXUP_CZC_B20] = {
++ .type = HDA_FIXUP_PINS,
++ .v.pins = (const struct hda_pintbl[]) {
++ { 0x12, 0x411111f0 },
++ { 0x14, 0x90170110 }, /* speaker */
++ { 0x15, 0x032f1020 }, /* HP out */
++ { 0x17, 0x411111f0 },
++ { 0x18, 0x03ab1040 }, /* mic */
++ { 0x19, 0xb7a7013f },
++ { 0x1a, 0x0181305f },
++ { 0x1b, 0x411111f0 },
++ { 0x1d, 0x411111f0 },
++ { 0x1e, 0x411111f0 },
++ { }
++ },
++ .chain_id = ALC269_FIXUP_DMIC,
++ },
++ [ALC269_FIXUP_CZC_TMI] = {
++ .type = HDA_FIXUP_PINS,
++ .v.pins = (const struct hda_pintbl[]) {
++ { 0x12, 0x4000c000 },
++ { 0x14, 0x90170110 }, /* speaker */
++ { 0x15, 0x0421401f }, /* HP out */
++ { 0x17, 0x411111f0 },
++ { 0x18, 0x04a19020 }, /* mic */
++ { 0x19, 0x411111f0 },
++ { 0x1a, 0x411111f0 },
++ { 0x1b, 0x411111f0 },
++ { 0x1d, 0x40448505 },
++ { 0x1e, 0x411111f0 },
++ { 0x20, 0x8000ffff },
++ { }
++ },
++ .chain_id = ALC269_FIXUP_DMIC,
++ },
++ [ALC269_FIXUP_CZC_L101] = {
++ .type = HDA_FIXUP_PINS,
++ .v.pins = (const struct hda_pintbl[]) {
++ { 0x12, 0x40000000 },
++ { 0x14, 0x01014010 }, /* speaker */
++ { 0x15, 0x411111f0 }, /* HP out */
++ { 0x16, 0x411111f0 },
++ { 0x18, 0x01a19020 }, /* mic */
++ { 0x19, 0x02a19021 },
++ { 0x1a, 0x0181302f },
++ { 0x1b, 0x0221401f },
++ { 0x1c, 0x411111f0 },
++ { 0x1d, 0x4044c601 },
++ { 0x1e, 0x411111f0 },
++ { }
++ },
++ .chain_id = ALC269_FIXUP_DMIC,
++ },
++ [ALC269_FIXUP_LEMOTE_A1802] = {
++ .type = HDA_FIXUP_PINS,
++ .v.pins = (const struct hda_pintbl[]) {
++ { 0x12, 0x40000000 },
++ { 0x14, 0x90170110 }, /* speaker */
++ { 0x17, 0x411111f0 },
++ { 0x18, 0x03a19040 }, /* mic1 */
++ { 0x19, 0x90a70130 }, /* mic2 */
++ { 0x1a, 0x411111f0 },
++ { 0x1b, 0x411111f0 },
++ { 0x1d, 0x40489d2d },
++ { 0x1e, 0x411111f0 },
++ { 0x20, 0x0003ffff },
++ { 0x21, 0x03214020 },
++ { }
++ },
++ .chain_id = ALC269_FIXUP_DMIC,
++ },
++ [ALC269_FIXUP_LEMOTE_A190X] = {
++ .type = HDA_FIXUP_PINS,
++ .v.pins = (const struct hda_pintbl[]) {
++ { 0x14, 0x99130110 }, /* speaker */
++ { 0x15, 0x0121401f }, /* HP out */
++ { 0x18, 0x01a19c20 }, /* rear mic */
++ { 0x19, 0x99a3092f }, /* front mic */
++ { 0x1b, 0x0201401f }, /* front lineout */
++ { }
++ },
++ .chain_id = ALC269_FIXUP_DMIC,
++ },
+ };
+
+ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+@@ -7658,9 +7746,14 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x17aa, 0x3bf8, "Quanta FL1", ALC269_FIXUP_PCM_44K),
+ SND_PCI_QUIRK(0x17aa, 0x9e54, "LENOVO NB", ALC269_FIXUP_LENOVO_EAPD),
+ SND_PCI_QUIRK(0x19e5, 0x3204, "Huawei MACH-WX9", ALC256_FIXUP_HUAWEI_MACH_WX9_PINS),
++ SND_PCI_QUIRK(0x1b35, 0x1235, "CZC B20", ALC269_FIXUP_CZC_B20),
++ SND_PCI_QUIRK(0x1b35, 0x1236, "CZC TMI", ALC269_FIXUP_CZC_TMI),
++ SND_PCI_QUIRK(0x1b35, 0x1237, "CZC L101", ALC269_FIXUP_CZC_L101),
+ SND_PCI_QUIRK(0x1b7d, 0xa831, "Ordissimo EVE2 ", ALC269VB_FIXUP_ORDISSIMO_EVE2), /* Also known as Malata PC-B1303 */
+ SND_PCI_QUIRK(0x1d72, 0x1901, "RedmiBook 14", ALC256_FIXUP_ASUS_HEADSET_MIC),
+ SND_PCI_QUIRK(0x10ec, 0x118c, "Medion EE4254 MD62100", ALC256_FIXUP_MEDION_HEADSET_NO_PRESENCE),
++ SND_PCI_QUIRK(0x1c06, 0x2013, "Lemote A1802", ALC269_FIXUP_LEMOTE_A1802),
++ SND_PCI_QUIRK(0x1c06, 0x2015, "Lemote A190X", ALC269_FIXUP_LEMOTE_A190X),
+
+ #if 0
+ /* Below is a quirk table taken from the old code.
+@@ -8916,6 +9009,7 @@ enum {
+ ALC662_FIXUP_LED_GPIO1,
+ ALC662_FIXUP_IDEAPAD,
+ ALC272_FIXUP_MARIO,
++ ALC662_FIXUP_CZC_ET26,
+ ALC662_FIXUP_CZC_P10T,
+ ALC662_FIXUP_SKU_IGNORE,
+ ALC662_FIXUP_HP_RP5800,
+@@ -8985,6 +9079,25 @@ static const struct hda_fixup alc662_fixups[] = {
+ .type = HDA_FIXUP_FUNC,
+ .v.func = alc272_fixup_mario,
+ },
++ [ALC662_FIXUP_CZC_ET26] = {
++ .type = HDA_FIXUP_PINS,
++ .v.pins = (const struct hda_pintbl[]) {
++ {0x12, 0x403cc000},
++ {0x14, 0x90170110}, /* speaker */
++ {0x15, 0x411111f0},
++ {0x16, 0x411111f0},
++ {0x18, 0x01a19030}, /* mic */
++ {0x19, 0x90a7013f}, /* int-mic */
++ {0x1a, 0x01014020},
++ {0x1b, 0x0121401f},
++ {0x1c, 0x411111f0},
++ {0x1d, 0x411111f0},
++ {0x1e, 0x40478e35},
++ {}
++ },
++ .chained = true,
++ .chain_id = ALC662_FIXUP_SKU_IGNORE
++ },
+ [ALC662_FIXUP_CZC_P10T] = {
+ .type = HDA_FIXUP_VERBS,
+ .v.verbs = (const struct hda_verb[]) {
+@@ -9368,6 +9481,7 @@ static const struct snd_pci_quirk alc662_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x1849, 0x5892, "ASRock B150M", ALC892_FIXUP_ASROCK_MOBO),
+ SND_PCI_QUIRK(0x19da, 0xa130, "Zotac Z68", ALC662_FIXUP_ZOTAC_Z68),
+ SND_PCI_QUIRK(0x1b0a, 0x01b8, "ACER Veriton", ALC662_FIXUP_ACER_VERITON),
++ SND_PCI_QUIRK(0x1b35, 0x1234, "CZC ET26", ALC662_FIXUP_CZC_ET26),
+ SND_PCI_QUIRK(0x1b35, 0x2206, "CZC P10T", ALC662_FIXUP_CZC_P10T),
+ SND_PCI_QUIRK(0x1025, 0x0566, "Acer Aspire Ethos 8951G", ALC669_FIXUP_ACER_ASPIRE_ETHOS),
+
+diff --git a/tools/lib/traceevent/event-parse.c b/tools/lib/traceevent/event-parse.c
+index 010e60d5a081..cb0d29865ee9 100644
+--- a/tools/lib/traceevent/event-parse.c
++++ b/tools/lib/traceevent/event-parse.c
+@@ -2861,6 +2861,7 @@ process_dynamic_array_len(struct tep_event *event, struct tep_print_arg *arg,
+ if (read_expected(TEP_EVENT_DELIM, ")") < 0)
+ goto out_err;
+
++ free_token(token);
+ type = read_token(&token);
+ *tok = token;
+
+diff --git a/tools/testing/selftests/net/msg_zerocopy.c b/tools/testing/selftests/net/msg_zerocopy.c
+index 4b02933cab8a..bdc03a2097e8 100644
+--- a/tools/testing/selftests/net/msg_zerocopy.c
++++ b/tools/testing/selftests/net/msg_zerocopy.c
+@@ -125,9 +125,8 @@ static int do_setcpu(int cpu)
+ CPU_ZERO(&mask);
+ CPU_SET(cpu, &mask);
+ if (sched_setaffinity(0, sizeof(mask), &mask))
+- error(1, 0, "setaffinity %d", cpu);
+-
+- if (cfg_verbose)
++ fprintf(stderr, "cpu: unable to pin, may increase variance.\n");
++ else if (cfg_verbose)
+ fprintf(stderr, "cpu: %u\n", cpu);
+
+ return 0;
* [gentoo-commits] proj/linux-patches:5.7 commit in: /
@ 2020-08-19 9:31 Alice Ferrazzi
From: Alice Ferrazzi @ 2020-08-19 9:31 UTC (permalink / raw
To: gentoo-commits
commit: 8063d7761c286d92855fd78713a561a970613ebb
Author: Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
AuthorDate: Wed Aug 19 09:30:23 2020 +0000
Commit: Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
CommitDate: Wed Aug 19 09:30:44 2020 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=8063d776
Linux patch 5.7.16
Signed-off-by: Alice Ferrazzi <alicef <AT> gentoo.org>
0000_README | 4 +
1015_linux-5.7.16.patch | 14445 ++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 14449 insertions(+)
diff --git a/0000_README b/0000_README
index dc0ff9b..66f0380 100644
--- a/0000_README
+++ b/0000_README
@@ -103,6 +103,10 @@ Patch: 1014_linux-5.7.15.patch
From: http://www.kernel.org
Desc: Linux 5.7.15
+Patch: 1015_linux-5.7.16.patch
+From: http://www.kernel.org
+Desc: Linux 5.7.16
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1015_linux-5.7.16.patch b/1015_linux-5.7.16.patch
new file mode 100644
index 0000000..ff642b6
--- /dev/null
+++ b/1015_linux-5.7.16.patch
@@ -0,0 +1,14445 @@
+diff --git a/Documentation/ABI/testing/sysfs-bus-iio b/Documentation/ABI/testing/sysfs-bus-iio
+index d3e53a6d8331..5c62bfb0f3f5 100644
+--- a/Documentation/ABI/testing/sysfs-bus-iio
++++ b/Documentation/ABI/testing/sysfs-bus-iio
+@@ -1569,7 +1569,8 @@ What: /sys/bus/iio/devices/iio:deviceX/in_concentrationX_voc_raw
+ KernelVersion: 4.3
+ Contact: linux-iio@vger.kernel.org
+ Description:
+- Raw (unscaled no offset etc.) percentage reading of a substance.
++ Raw (unscaled no offset etc.) reading of a substance. Units
++ after application of scale and offset are percents.
+
+ What: /sys/bus/iio/devices/iio:deviceX/in_resistance_raw
+ What: /sys/bus/iio/devices/iio:deviceX/in_resistanceX_raw
+diff --git a/Documentation/core-api/cpu_hotplug.rst b/Documentation/core-api/cpu_hotplug.rst
+index 4a50ab7817f7..b1ae1ac159cf 100644
+--- a/Documentation/core-api/cpu_hotplug.rst
++++ b/Documentation/core-api/cpu_hotplug.rst
+@@ -50,13 +50,6 @@ Command Line Switches
+
+ This option is limited to the X86 and S390 architecture.
+
+-``cede_offline={"off","on"}``
+- Use this option to disable/enable putting offlined processors to an extended
+- ``H_CEDE`` state on supported pseries platforms. If nothing is specified,
+- ``cede_offline`` is set to "on".
+-
+- This option is limited to the PowerPC architecture.
+-
+ ``cpu0_hotplug``
+ Allow to shutdown CPU0.
+
+diff --git a/Makefile b/Makefile
+index a2fbdb4c952d..627657860aa5 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 7
+-SUBLEVEL = 15
++SUBLEVEL = 16
+ EXTRAVERSION =
+ NAME = Kleptomaniac Octopus
+
+diff --git a/arch/arm/boot/dts/exynos5422-odroid-core.dtsi b/arch/arm/boot/dts/exynos5422-odroid-core.dtsi
+index ab27ff8bc3dc..afe090578e8f 100644
+--- a/arch/arm/boot/dts/exynos5422-odroid-core.dtsi
++++ b/arch/arm/boot/dts/exynos5422-odroid-core.dtsi
+@@ -411,12 +411,6 @@
+ status = "okay";
+ };
+
+-&bus_fsys {
+- operating-points-v2 = <&bus_fsys2_opp_table>;
+- devfreq = <&bus_wcore>;
+- status = "okay";
+-};
+-
+ &bus_fsys2 {
+ operating-points-v2 = <&bus_fsys2_opp_table>;
+ devfreq = <&bus_wcore>;
+diff --git a/arch/arm/boot/dts/exynos5800.dtsi b/arch/arm/boot/dts/exynos5800.dtsi
+index dfb99ab53c3e..526729dad53f 100644
+--- a/arch/arm/boot/dts/exynos5800.dtsi
++++ b/arch/arm/boot/dts/exynos5800.dtsi
+@@ -23,17 +23,17 @@
+ &cluster_a15_opp_table {
+ opp-2000000000 {
+ opp-hz = /bits/ 64 <2000000000>;
+- opp-microvolt = <1312500>;
++ opp-microvolt = <1312500 1312500 1500000>;
+ clock-latency-ns = <140000>;
+ };
+ opp-1900000000 {
+ opp-hz = /bits/ 64 <1900000000>;
+- opp-microvolt = <1262500>;
++ opp-microvolt = <1262500 1262500 1500000>;
+ clock-latency-ns = <140000>;
+ };
+ opp-1800000000 {
+ opp-hz = /bits/ 64 <1800000000>;
+- opp-microvolt = <1237500>;
++ opp-microvolt = <1237500 1237500 1500000>;
+ clock-latency-ns = <140000>;
+ };
+ opp-1700000000 {
+diff --git a/arch/arm/boot/dts/r8a7793-gose.dts b/arch/arm/boot/dts/r8a7793-gose.dts
+index 79baf06019f5..10c3536b8e3d 100644
+--- a/arch/arm/boot/dts/r8a7793-gose.dts
++++ b/arch/arm/boot/dts/r8a7793-gose.dts
+@@ -336,7 +336,7 @@
+ reg = <0x20>;
+ remote = <&vin1>;
+
+- port {
++ ports {
+ #address-cells = <1>;
+ #size-cells = <0>;
+
+@@ -394,7 +394,7 @@
+ interrupts = <2 IRQ_TYPE_LEVEL_LOW>;
+ default-input = <0>;
+
+- port {
++ ports {
+ #address-cells = <1>;
+ #size-cells = <0>;
+
+diff --git a/arch/arm/boot/dts/stm32mp15-pinctrl.dtsi b/arch/arm/boot/dts/stm32mp15-pinctrl.dtsi
+index 73c07f0dfad2..4b67b682dd53 100644
+--- a/arch/arm/boot/dts/stm32mp15-pinctrl.dtsi
++++ b/arch/arm/boot/dts/stm32mp15-pinctrl.dtsi
+@@ -1095,15 +1095,15 @@
+
+ uart7_pins_a: uart7-0 {
+ pins1 {
+- pinmux = <STM32_PINMUX('E', 8, AF7)>; /* UART4_TX */
++ pinmux = <STM32_PINMUX('E', 8, AF7)>; /* UART7_TX */
+ bias-disable;
+ drive-push-pull;
+ slew-rate = <0>;
+ };
+ pins2 {
+- pinmux = <STM32_PINMUX('E', 7, AF7)>, /* UART4_RX */
+- <STM32_PINMUX('E', 10, AF7)>, /* UART4_CTS */
+- <STM32_PINMUX('E', 9, AF7)>; /* UART4_RTS */
++ pinmux = <STM32_PINMUX('E', 7, AF7)>, /* UART7_RX */
++ <STM32_PINMUX('E', 10, AF7)>, /* UART7_CTS */
++ <STM32_PINMUX('E', 9, AF7)>; /* UART7_RTS */
+ bias-disable;
+ };
+ };
+diff --git a/arch/arm/boot/dts/sunxi-bananapi-m2-plus-v1.2.dtsi b/arch/arm/boot/dts/sunxi-bananapi-m2-plus-v1.2.dtsi
+index 22466afd38a3..235994a4a2eb 100644
+--- a/arch/arm/boot/dts/sunxi-bananapi-m2-plus-v1.2.dtsi
++++ b/arch/arm/boot/dts/sunxi-bananapi-m2-plus-v1.2.dtsi
+@@ -16,15 +16,27 @@
+ regulator-type = "voltage";
+ regulator-boot-on;
+ regulator-always-on;
+- regulator-min-microvolt = <1100000>;
+- regulator-max-microvolt = <1300000>;
++ regulator-min-microvolt = <1108475>;
++ regulator-max-microvolt = <1308475>;
+ regulator-ramp-delay = <50>; /* 4ms */
+ gpios = <&r_pio 0 1 GPIO_ACTIVE_HIGH>; /* PL1 */
+ gpios-states = <0x1>;
+- states = <1100000 0>, <1300000 1>;
++ states = <1108475 0>, <1308475 1>;
+ };
+ };
+
+ &cpu0 {
+ cpu-supply = <&reg_vdd_cpux>;
+ };
++
++&cpu1 {
++ cpu-supply = <&reg_vdd_cpux>;
++};
++
++&cpu2 {
++ cpu-supply = <&reg_vdd_cpux>;
++};
++
++&cpu3 {
++ cpu-supply = <&reg_vdd_cpux>;
++};
+diff --git a/arch/arm/kernel/stacktrace.c b/arch/arm/kernel/stacktrace.c
+index cc726afea023..76ea4178a55c 100644
+--- a/arch/arm/kernel/stacktrace.c
++++ b/arch/arm/kernel/stacktrace.c
+@@ -22,6 +22,19 @@
+ * A simple function epilogue looks like this:
+ * ldm sp, {fp, sp, pc}
+ *
++ * When compiled with clang, pc and sp are not pushed. A simple function
++ * prologue looks like this when built with clang:
++ *
++ * stmdb {..., fp, lr}
++ * add fp, sp, #x
++ * sub sp, sp, #y
++ *
++ * A simple function epilogue looks like this when built with clang:
++ *
++ * sub sp, fp, #x
++ * ldm {..., fp, pc}
++ *
++ *
+ * Note that with framepointer enabled, even the leaf functions have the same
+ * prologue and epilogue, therefore we can ignore the LR value in this case.
+ */
+@@ -34,6 +47,16 @@ int notrace unwind_frame(struct stackframe *frame)
+ low = frame->sp;
+ high = ALIGN(low, THREAD_SIZE);
+
++#ifdef CONFIG_CC_IS_CLANG
++ /* check current frame pointer is within bounds */
++ if (fp < low + 4 || fp > high - 4)
++ return -EINVAL;
++
++ frame->sp = frame->fp;
++ frame->fp = *(unsigned long *)(fp);
++ frame->pc = frame->lr;
++ frame->lr = *(unsigned long *)(fp + 4);
++#else
+ /* check current frame pointer is within bounds */
+ if (fp < low + 12 || fp > high - 4)
+ return -EINVAL;
+@@ -42,6 +65,7 @@ int notrace unwind_frame(struct stackframe *frame)
+ frame->fp = *(unsigned long *)(fp - 12);
+ frame->sp = *(unsigned long *)(fp - 8);
+ frame->pc = *(unsigned long *)(fp - 4);
++#endif
+
+ return 0;
+ }
+diff --git a/arch/arm/mach-at91/pm.c b/arch/arm/mach-at91/pm.c
+index 074bde64064e..2aab043441e8 100644
+--- a/arch/arm/mach-at91/pm.c
++++ b/arch/arm/mach-at91/pm.c
+@@ -592,13 +592,13 @@ static void __init at91_pm_sram_init(void)
+ sram_pool = gen_pool_get(&pdev->dev, NULL);
+ if (!sram_pool) {
+ pr_warn("%s: sram pool unavailable!\n", __func__);
+- return;
++ goto out_put_device;
+ }
+
+ sram_base = gen_pool_alloc(sram_pool, at91_pm_suspend_in_sram_sz);
+ if (!sram_base) {
+ pr_warn("%s: unable to alloc sram!\n", __func__);
+- return;
++ goto out_put_device;
+ }
+
+ sram_pbase = gen_pool_virt_to_phys(sram_pool, sram_base);
+@@ -606,12 +606,17 @@ static void __init at91_pm_sram_init(void)
+ at91_pm_suspend_in_sram_sz, false);
+ if (!at91_suspend_sram_fn) {
+ pr_warn("SRAM: Could not map\n");
+- return;
++ goto out_put_device;
+ }
+
+ /* Copy the pm suspend handler to SRAM */
+ at91_suspend_sram_fn = fncpy(at91_suspend_sram_fn,
+ &at91_pm_suspend_in_sram, at91_pm_suspend_in_sram_sz);
++ return;
++
++out_put_device:
++ put_device(&pdev->dev);
++ return;
+ }
+
+ static bool __init at91_is_pm_mode_active(int pm_mode)
+diff --git a/arch/arm/mach-exynos/exynos.c b/arch/arm/mach-exynos/exynos.c
+index 7a8d1555db40..36c37444485a 100644
+--- a/arch/arm/mach-exynos/exynos.c
++++ b/arch/arm/mach-exynos/exynos.c
+@@ -193,7 +193,7 @@ static void __init exynos_dt_fixup(void)
+ }
+
+ DT_MACHINE_START(EXYNOS_DT, "Samsung Exynos (Flattened Device Tree)")
+- .l2c_aux_val = 0x3c400001,
++ .l2c_aux_val = 0x3c400000,
+ .l2c_aux_mask = 0xc20fffff,
+ .smp = smp_ops(exynos_smp_ops),
+ .map_io = exynos_init_io,
+diff --git a/arch/arm/mach-exynos/mcpm-exynos.c b/arch/arm/mach-exynos/mcpm-exynos.c
+index 9a681b421ae1..cd861c57d5ad 100644
+--- a/arch/arm/mach-exynos/mcpm-exynos.c
++++ b/arch/arm/mach-exynos/mcpm-exynos.c
+@@ -26,6 +26,7 @@
+ #define EXYNOS5420_USE_L2_COMMON_UP_STATE BIT(30)
+
+ static void __iomem *ns_sram_base_addr __ro_after_init;
++static bool secure_firmware __ro_after_init;
+
+ /*
+ * The common v7_exit_coherency_flush API could not be used because of the
+@@ -58,15 +59,16 @@ static void __iomem *ns_sram_base_addr __ro_after_init;
+ static int exynos_cpu_powerup(unsigned int cpu, unsigned int cluster)
+ {
+ unsigned int cpunr = cpu + (cluster * EXYNOS5420_CPUS_PER_CLUSTER);
++ bool state;
+
+ pr_debug("%s: cpu %u cluster %u\n", __func__, cpu, cluster);
+ if (cpu >= EXYNOS5420_CPUS_PER_CLUSTER ||
+ cluster >= EXYNOS5420_NR_CLUSTERS)
+ return -EINVAL;
+
+- if (!exynos_cpu_power_state(cpunr)) {
+- exynos_cpu_power_up(cpunr);
+-
++ state = exynos_cpu_power_state(cpunr);
++ exynos_cpu_power_up(cpunr);
++ if (!state && secure_firmware) {
+ /*
+ * This assumes the cluster number of the big cores(Cortex A15)
+ * is 0 and the Little cores(Cortex A7) is 1.
+@@ -258,6 +260,8 @@ static int __init exynos_mcpm_init(void)
+ return -ENOMEM;
+ }
+
++ secure_firmware = exynos_secure_firmware_available();
++
+ /*
+ * To increase the stability of KFC reset we need to program
+ * the PMU SPARE3 register
+diff --git a/arch/arm/mach-socfpga/pm.c b/arch/arm/mach-socfpga/pm.c
+index 6ed887cf8dc9..365c0428b21b 100644
+--- a/arch/arm/mach-socfpga/pm.c
++++ b/arch/arm/mach-socfpga/pm.c
+@@ -49,14 +49,14 @@ static int socfpga_setup_ocram_self_refresh(void)
+ if (!ocram_pool) {
+ pr_warn("%s: ocram pool unavailable!\n", __func__);
+ ret = -ENODEV;
+- goto put_node;
++ goto put_device;
+ }
+
+ ocram_base = gen_pool_alloc(ocram_pool, socfpga_sdram_self_refresh_sz);
+ if (!ocram_base) {
+ pr_warn("%s: unable to alloc ocram!\n", __func__);
+ ret = -ENOMEM;
+- goto put_node;
++ goto put_device;
+ }
+
+ ocram_pbase = gen_pool_virt_to_phys(ocram_pool, ocram_base);
+@@ -67,7 +67,7 @@ static int socfpga_setup_ocram_self_refresh(void)
+ if (!suspend_ocram_base) {
+ pr_warn("%s: __arm_ioremap_exec failed!\n", __func__);
+ ret = -ENOMEM;
+- goto put_node;
++ goto put_device;
+ }
+
+ /* Copy the code that puts DDR in self refresh to ocram */
+@@ -81,6 +81,8 @@ static int socfpga_setup_ocram_self_refresh(void)
+ if (!socfpga_sdram_self_refresh_in_ocram)
+ ret = -EFAULT;
+
++put_device:
++ put_device(&pdev->dev);
+ put_node:
+ of_node_put(np);
+
+diff --git a/arch/arm64/boot/dts/allwinner/sun50i-a64-pinephone.dtsi b/arch/arm64/boot/dts/allwinner/sun50i-a64-pinephone.dtsi
+index cefda145c3c9..342733a20c33 100644
+--- a/arch/arm64/boot/dts/allwinner/sun50i-a64-pinephone.dtsi
++++ b/arch/arm64/boot/dts/allwinner/sun50i-a64-pinephone.dtsi
+@@ -279,7 +279,7 @@
+
+ &reg_dldo4 {
+ regulator-min-microvolt = <1800000>;
+- regulator-max-microvolt = <3300000>;
++ regulator-max-microvolt = <1800000>;
+ regulator-name = "vcc-wifi-io";
+ };
+
+diff --git a/arch/arm64/boot/dts/amlogic/meson-khadas-vim3.dtsi b/arch/arm64/boot/dts/amlogic/meson-khadas-vim3.dtsi
+index 1ef1e3672b96..ff5ba85b7562 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-khadas-vim3.dtsi
++++ b/arch/arm64/boot/dts/amlogic/meson-khadas-vim3.dtsi
+@@ -270,7 +270,6 @@
+
+ bus-width = <4>;
+ cap-sd-highspeed;
+- sd-uhs-sdr50;
+ max-frequency = <100000000>;
+
+ non-removable;
+diff --git a/arch/arm64/boot/dts/amlogic/meson-sm1-khadas-vim3l.dts b/arch/arm64/boot/dts/amlogic/meson-sm1-khadas-vim3l.dts
+index dbbf29a0dbf6..026b21708b07 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-sm1-khadas-vim3l.dts
++++ b/arch/arm64/boot/dts/amlogic/meson-sm1-khadas-vim3l.dts
+@@ -88,6 +88,10 @@
+ status = "okay";
+ };
+
++&sd_emmc_a {
++ sd-uhs-sdr50;
++};
++
+ &usb {
+ phys = <&usb2_phy0>, <&usb2_phy1>;
+ phy-names = "usb2-phy0", "usb2-phy1";
+diff --git a/arch/arm64/boot/dts/exynos/exynos7-espresso.dts b/arch/arm64/boot/dts/exynos/exynos7-espresso.dts
+index 7af288fa9475..a9412805c1d6 100644
+--- a/arch/arm64/boot/dts/exynos/exynos7-espresso.dts
++++ b/arch/arm64/boot/dts/exynos/exynos7-espresso.dts
+@@ -157,6 +157,7 @@
+ regulator-min-microvolt = <700000>;
+ regulator-max-microvolt = <1150000>;
+ regulator-enable-ramp-delay = <125>;
++ regulator-always-on;
+ };
+
+ ldo8_reg: LDO8 {
+diff --git a/arch/arm64/boot/dts/hisilicon/hi3660-hikey960.dts b/arch/arm64/boot/dts/hisilicon/hi3660-hikey960.dts
+index e035cf195b19..8c4bfbaf3a80 100644
+--- a/arch/arm64/boot/dts/hisilicon/hi3660-hikey960.dts
++++ b/arch/arm64/boot/dts/hisilicon/hi3660-hikey960.dts
+@@ -530,6 +530,17 @@
+ status = "ok";
+ compatible = "adi,adv7533";
+ reg = <0x39>;
++ adi,dsi-lanes = <4>;
++ ports {
++ #address-cells = <1>;
++ #size-cells = <0>;
++ port@0 {
++ reg = <0>;
++ };
++ port@1 {
++ reg = <1>;
++ };
++ };
+ };
+ };
+
+diff --git a/arch/arm64/boot/dts/hisilicon/hi6220-hikey.dts b/arch/arm64/boot/dts/hisilicon/hi6220-hikey.dts
+index c14205cd6bf5..3e47150c05ec 100644
+--- a/arch/arm64/boot/dts/hisilicon/hi6220-hikey.dts
++++ b/arch/arm64/boot/dts/hisilicon/hi6220-hikey.dts
+@@ -516,7 +516,7 @@
+ reg = <0x39>;
+ interrupt-parent = <&gpio1>;
+ interrupts = <1 2>;
+- pd-gpio = <&gpio0 4 0>;
++ pd-gpios = <&gpio0 4 0>;
+ adi,dsi-lanes = <4>;
+ #sound-dai-cells = <0>;
+
+diff --git a/arch/arm64/boot/dts/qcom/msm8916-pins.dtsi b/arch/arm64/boot/dts/qcom/msm8916-pins.dtsi
+index 242aaea68804..1235830ffd0b 100644
+--- a/arch/arm64/boot/dts/qcom/msm8916-pins.dtsi
++++ b/arch/arm64/boot/dts/qcom/msm8916-pins.dtsi
+@@ -508,7 +508,7 @@
+ pins = "gpio63", "gpio64", "gpio65", "gpio66",
+ "gpio67", "gpio68";
+ drive-strength = <8>;
+- bias-pull-none;
++ bias-disable;
+ };
+ };
+ cdc_pdm_lines_sus: pdm_lines_off {
+@@ -537,7 +537,7 @@
+ pins = "gpio113", "gpio114", "gpio115",
+ "gpio116";
+ drive-strength = <8>;
+- bias-pull-none;
++ bias-disable;
+ };
+ };
+
+@@ -565,7 +565,7 @@
+ pinconf {
+ pins = "gpio110";
+ drive-strength = <8>;
+- bias-pull-none;
++ bias-disable;
+ };
+ };
+
+@@ -591,7 +591,7 @@
+ pinconf {
+ pins = "gpio116";
+ drive-strength = <8>;
+- bias-pull-none;
++ bias-disable;
+ };
+ };
+ ext_mclk_tlmm_lines_sus: mclk_lines_off {
+@@ -619,7 +619,7 @@
+ pins = "gpio112", "gpio117", "gpio118",
+ "gpio119";
+ drive-strength = <8>;
+- bias-pull-none;
++ bias-disable;
+ };
+ };
+ ext_sec_tlmm_lines_sus: tlmm_lines_off {
+diff --git a/arch/arm64/boot/dts/renesas/r8a774a1.dtsi b/arch/arm64/boot/dts/renesas/r8a774a1.dtsi
+index a603d947970e..16b059d7fd01 100644
+--- a/arch/arm64/boot/dts/renesas/r8a774a1.dtsi
++++ b/arch/arm64/boot/dts/renesas/r8a774a1.dtsi
+@@ -2250,7 +2250,7 @@
+ status = "disabled";
+ };
+
+- sdhi0: sd@ee100000 {
++ sdhi0: mmc@ee100000 {
+ compatible = "renesas,sdhi-r8a774a1",
+ "renesas,rcar-gen3-sdhi";
+ reg = <0 0xee100000 0 0x2000>;
+@@ -2262,7 +2262,7 @@
+ status = "disabled";
+ };
+
+- sdhi1: sd@ee120000 {
++ sdhi1: mmc@ee120000 {
+ compatible = "renesas,sdhi-r8a774a1",
+ "renesas,rcar-gen3-sdhi";
+ reg = <0 0xee120000 0 0x2000>;
+@@ -2274,7 +2274,7 @@
+ status = "disabled";
+ };
+
+- sdhi2: sd@ee140000 {
++ sdhi2: mmc@ee140000 {
+ compatible = "renesas,sdhi-r8a774a1",
+ "renesas,rcar-gen3-sdhi";
+ reg = <0 0xee140000 0 0x2000>;
+@@ -2286,7 +2286,7 @@
+ status = "disabled";
+ };
+
+- sdhi3: sd@ee160000 {
++ sdhi3: mmc@ee160000 {
+ compatible = "renesas,sdhi-r8a774a1",
+ "renesas,rcar-gen3-sdhi";
+ reg = <0 0xee160000 0 0x2000>;
+diff --git a/arch/arm64/boot/dts/renesas/r8a774b1.dtsi b/arch/arm64/boot/dts/renesas/r8a774b1.dtsi
+index 1e51855c7cd3..6db8b6a4d191 100644
+--- a/arch/arm64/boot/dts/renesas/r8a774b1.dtsi
++++ b/arch/arm64/boot/dts/renesas/r8a774b1.dtsi
+@@ -2108,7 +2108,7 @@
+ status = "disabled";
+ };
+
+- sdhi0: sd@ee100000 {
++ sdhi0: mmc@ee100000 {
+ compatible = "renesas,sdhi-r8a774b1",
+ "renesas,rcar-gen3-sdhi";
+ reg = <0 0xee100000 0 0x2000>;
+@@ -2120,7 +2120,7 @@
+ status = "disabled";
+ };
+
+- sdhi1: sd@ee120000 {
++ sdhi1: mmc@ee120000 {
+ compatible = "renesas,sdhi-r8a774b1",
+ "renesas,rcar-gen3-sdhi";
+ reg = <0 0xee120000 0 0x2000>;
+@@ -2132,7 +2132,7 @@
+ status = "disabled";
+ };
+
+- sdhi2: sd@ee140000 {
++ sdhi2: mmc@ee140000 {
+ compatible = "renesas,sdhi-r8a774b1",
+ "renesas,rcar-gen3-sdhi";
+ reg = <0 0xee140000 0 0x2000>;
+@@ -2144,7 +2144,7 @@
+ status = "disabled";
+ };
+
+- sdhi3: sd@ee160000 {
++ sdhi3: mmc@ee160000 {
+ compatible = "renesas,sdhi-r8a774b1",
+ "renesas,rcar-gen3-sdhi";
+ reg = <0 0xee160000 0 0x2000>;
+diff --git a/arch/arm64/boot/dts/renesas/r8a774c0.dtsi b/arch/arm64/boot/dts/renesas/r8a774c0.dtsi
+index 5c72a7efbb03..42171190cce4 100644
+--- a/arch/arm64/boot/dts/renesas/r8a774c0.dtsi
++++ b/arch/arm64/boot/dts/renesas/r8a774c0.dtsi
+@@ -1618,7 +1618,7 @@
+ status = "disabled";
+ };
+
+- sdhi0: sd@ee100000 {
++ sdhi0: mmc@ee100000 {
+ compatible = "renesas,sdhi-r8a774c0",
+ "renesas,rcar-gen3-sdhi";
+ reg = <0 0xee100000 0 0x2000>;
+@@ -1630,7 +1630,7 @@
+ status = "disabled";
+ };
+
+- sdhi1: sd@ee120000 {
++ sdhi1: mmc@ee120000 {
+ compatible = "renesas,sdhi-r8a774c0",
+ "renesas,rcar-gen3-sdhi";
+ reg = <0 0xee120000 0 0x2000>;
+@@ -1642,7 +1642,7 @@
+ status = "disabled";
+ };
+
+- sdhi3: sd@ee160000 {
++ sdhi3: mmc@ee160000 {
+ compatible = "renesas,sdhi-r8a774c0",
+ "renesas,rcar-gen3-sdhi";
+ reg = <0 0xee160000 0 0x2000>;
+diff --git a/arch/arm64/boot/dts/renesas/r8a77951.dtsi b/arch/arm64/boot/dts/renesas/r8a77951.dtsi
+index 61d67d9714ab..9beb8e76d923 100644
+--- a/arch/arm64/boot/dts/renesas/r8a77951.dtsi
++++ b/arch/arm64/boot/dts/renesas/r8a77951.dtsi
+@@ -2590,7 +2590,7 @@
+ status = "disabled";
+ };
+
+- sdhi0: sd@ee100000 {
++ sdhi0: mmc@ee100000 {
+ compatible = "renesas,sdhi-r8a7795",
+ "renesas,rcar-gen3-sdhi";
+ reg = <0 0xee100000 0 0x2000>;
+@@ -2603,7 +2603,7 @@
+ status = "disabled";
+ };
+
+- sdhi1: sd@ee120000 {
++ sdhi1: mmc@ee120000 {
+ compatible = "renesas,sdhi-r8a7795",
+ "renesas,rcar-gen3-sdhi";
+ reg = <0 0xee120000 0 0x2000>;
+@@ -2616,7 +2616,7 @@
+ status = "disabled";
+ };
+
+- sdhi2: sd@ee140000 {
++ sdhi2: mmc@ee140000 {
+ compatible = "renesas,sdhi-r8a7795",
+ "renesas,rcar-gen3-sdhi";
+ reg = <0 0xee140000 0 0x2000>;
+@@ -2629,7 +2629,7 @@
+ status = "disabled";
+ };
+
+- sdhi3: sd@ee160000 {
++ sdhi3: mmc@ee160000 {
+ compatible = "renesas,sdhi-r8a7795",
+ "renesas,rcar-gen3-sdhi";
+ reg = <0 0xee160000 0 0x2000>;
+diff --git a/arch/arm64/boot/dts/renesas/r8a77960.dtsi b/arch/arm64/boot/dts/renesas/r8a77960.dtsi
+index 33bf62acffbb..4dfb7f076787 100644
+--- a/arch/arm64/boot/dts/renesas/r8a77960.dtsi
++++ b/arch/arm64/boot/dts/renesas/r8a77960.dtsi
+@@ -2394,7 +2394,7 @@
+ status = "disabled";
+ };
+
+- sdhi0: sd@ee100000 {
++ sdhi0: mmc@ee100000 {
+ compatible = "renesas,sdhi-r8a7796",
+ "renesas,rcar-gen3-sdhi";
+ reg = <0 0xee100000 0 0x2000>;
+@@ -2407,7 +2407,7 @@
+ status = "disabled";
+ };
+
+- sdhi1: sd@ee120000 {
++ sdhi1: mmc@ee120000 {
+ compatible = "renesas,sdhi-r8a7796",
+ "renesas,rcar-gen3-sdhi";
+ reg = <0 0xee120000 0 0x2000>;
+@@ -2420,7 +2420,7 @@
+ status = "disabled";
+ };
+
+- sdhi2: sd@ee140000 {
++ sdhi2: mmc@ee140000 {
+ compatible = "renesas,sdhi-r8a7796",
+ "renesas,rcar-gen3-sdhi";
+ reg = <0 0xee140000 0 0x2000>;
+@@ -2433,7 +2433,7 @@
+ status = "disabled";
+ };
+
+- sdhi3: sd@ee160000 {
++ sdhi3: mmc@ee160000 {
+ compatible = "renesas,sdhi-r8a7796",
+ "renesas,rcar-gen3-sdhi";
+ reg = <0 0xee160000 0 0x2000>;
+diff --git a/arch/arm64/boot/dts/renesas/r8a77961.dtsi b/arch/arm64/boot/dts/renesas/r8a77961.dtsi
+index 0d96f2d3492b..8227b68b5646 100644
+--- a/arch/arm64/boot/dts/renesas/r8a77961.dtsi
++++ b/arch/arm64/boot/dts/renesas/r8a77961.dtsi
+@@ -928,7 +928,7 @@
+ /* placeholder */
+ };
+
+- sdhi0: sd@ee100000 {
++ sdhi0: mmc@ee100000 {
+ compatible = "renesas,sdhi-r8a77961",
+ "renesas,rcar-gen3-sdhi";
+ reg = <0 0xee100000 0 0x2000>;
+@@ -940,7 +940,7 @@
+ status = "disabled";
+ };
+
+- sdhi1: sd@ee120000 {
++ sdhi1: mmc@ee120000 {
+ compatible = "renesas,sdhi-r8a77961",
+ "renesas,rcar-gen3-sdhi";
+ reg = <0 0xee120000 0 0x2000>;
+@@ -952,7 +952,7 @@
+ status = "disabled";
+ };
+
+- sdhi2: sd@ee140000 {
++ sdhi2: mmc@ee140000 {
+ compatible = "renesas,sdhi-r8a77961",
+ "renesas,rcar-gen3-sdhi";
+ reg = <0 0xee140000 0 0x2000>;
+@@ -964,7 +964,7 @@
+ status = "disabled";
+ };
+
+- sdhi3: sd@ee160000 {
++ sdhi3: mmc@ee160000 {
+ compatible = "renesas,sdhi-r8a77961",
+ "renesas,rcar-gen3-sdhi";
+ reg = <0 0xee160000 0 0x2000>;
+diff --git a/arch/arm64/boot/dts/renesas/r8a77965.dtsi b/arch/arm64/boot/dts/renesas/r8a77965.dtsi
+index 6f7ab39fd282..fe4dc12e2bdf 100644
+--- a/arch/arm64/boot/dts/renesas/r8a77965.dtsi
++++ b/arch/arm64/boot/dts/renesas/r8a77965.dtsi
+@@ -2120,7 +2120,7 @@
+ status = "disabled";
+ };
+
+- sdhi0: sd@ee100000 {
++ sdhi0: mmc@ee100000 {
+ compatible = "renesas,sdhi-r8a77965",
+ "renesas,rcar-gen3-sdhi";
+ reg = <0 0xee100000 0 0x2000>;
+@@ -2133,7 +2133,7 @@
+ status = "disabled";
+ };
+
+- sdhi1: sd@ee120000 {
++ sdhi1: mmc@ee120000 {
+ compatible = "renesas,sdhi-r8a77965",
+ "renesas,rcar-gen3-sdhi";
+ reg = <0 0xee120000 0 0x2000>;
+@@ -2146,7 +2146,7 @@
+ status = "disabled";
+ };
+
+- sdhi2: sd@ee140000 {
++ sdhi2: mmc@ee140000 {
+ compatible = "renesas,sdhi-r8a77965",
+ "renesas,rcar-gen3-sdhi";
+ reg = <0 0xee140000 0 0x2000>;
+@@ -2159,7 +2159,7 @@
+ status = "disabled";
+ };
+
+- sdhi3: sd@ee160000 {
++ sdhi3: mmc@ee160000 {
+ compatible = "renesas,sdhi-r8a77965",
+ "renesas,rcar-gen3-sdhi";
+ reg = <0 0xee160000 0 0x2000>;
+diff --git a/arch/arm64/boot/dts/renesas/r8a77990.dtsi b/arch/arm64/boot/dts/renesas/r8a77990.dtsi
+index cd11f24744d4..1991bdc36792 100644
+--- a/arch/arm64/boot/dts/renesas/r8a77990.dtsi
++++ b/arch/arm64/boot/dts/renesas/r8a77990.dtsi
+@@ -1595,7 +1595,7 @@
+ status = "disabled";
+ };
+
+- sdhi0: sd@ee100000 {
++ sdhi0: mmc@ee100000 {
+ compatible = "renesas,sdhi-r8a77990",
+ "renesas,rcar-gen3-sdhi";
+ reg = <0 0xee100000 0 0x2000>;
+@@ -1608,7 +1608,7 @@
+ status = "disabled";
+ };
+
+- sdhi1: sd@ee120000 {
++ sdhi1: mmc@ee120000 {
+ compatible = "renesas,sdhi-r8a77990",
+ "renesas,rcar-gen3-sdhi";
+ reg = <0 0xee120000 0 0x2000>;
+@@ -1621,7 +1621,7 @@
+ status = "disabled";
+ };
+
+- sdhi3: sd@ee160000 {
++ sdhi3: mmc@ee160000 {
+ compatible = "renesas,sdhi-r8a77990",
+ "renesas,rcar-gen3-sdhi";
+ reg = <0 0xee160000 0 0x2000>;
+diff --git a/arch/arm64/boot/dts/renesas/r8a77995.dtsi b/arch/arm64/boot/dts/renesas/r8a77995.dtsi
+index e5617ec0f49c..2c2272f5f5b5 100644
+--- a/arch/arm64/boot/dts/renesas/r8a77995.dtsi
++++ b/arch/arm64/boot/dts/renesas/r8a77995.dtsi
+@@ -916,7 +916,7 @@
+ status = "disabled";
+ };
+
+- sdhi2: sd@ee140000 {
++ sdhi2: mmc@ee140000 {
+ compatible = "renesas,sdhi-r8a77995",
+ "renesas,rcar-gen3-sdhi";
+ reg = <0 0xee140000 0 0x2000>;
+diff --git a/arch/arm64/boot/dts/rockchip/rk3368-lion.dtsi b/arch/arm64/boot/dts/rockchip/rk3368-lion.dtsi
+index e17311e09082..216aafd90e7f 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3368-lion.dtsi
++++ b/arch/arm64/boot/dts/rockchip/rk3368-lion.dtsi
+@@ -156,7 +156,7 @@
+ pinctrl-0 = <&rgmii_pins>;
+ snps,reset-active-low;
+ snps,reset-delays-us = <0 10000 50000>;
+- snps,reset-gpio = <&gpio3 RK_PB3 GPIO_ACTIVE_HIGH>;
++ snps,reset-gpio = <&gpio3 RK_PB3 GPIO_ACTIVE_LOW>;
+ tx_delay = <0x10>;
+ rx_delay = <0x10>;
+ status = "okay";
+diff --git a/arch/arm64/boot/dts/rockchip/rk3399-puma.dtsi b/arch/arm64/boot/dts/rockchip/rk3399-puma.dtsi
+index 07694b196fdb..72c06abd27ea 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3399-puma.dtsi
++++ b/arch/arm64/boot/dts/rockchip/rk3399-puma.dtsi
+@@ -101,7 +101,7 @@
+
+ vcc5v0_host: vcc5v0-host-regulator {
+ compatible = "regulator-fixed";
+- gpio = <&gpio4 RK_PA3 GPIO_ACTIVE_HIGH>;
++ gpio = <&gpio4 RK_PA3 GPIO_ACTIVE_LOW>;
+ enable-active-low;
+ pinctrl-names = "default";
+ pinctrl-0 = <&vcc5v0_host_en>;
+@@ -157,7 +157,7 @@
+ phy-mode = "rgmii";
+ pinctrl-names = "default";
+ pinctrl-0 = <&rgmii_pins>;
+- snps,reset-gpio = <&gpio3 RK_PC0 GPIO_ACTIVE_HIGH>;
++ snps,reset-gpio = <&gpio3 RK_PC0 GPIO_ACTIVE_LOW>;
+ snps,reset-active-low;
+ snps,reset-delays-us = <0 10000 50000>;
+ tx_delay = <0x10>;
+diff --git a/arch/m68k/mac/iop.c b/arch/m68k/mac/iop.c
+index 9bfa17015768..c432bfafe63e 100644
+--- a/arch/m68k/mac/iop.c
++++ b/arch/m68k/mac/iop.c
+@@ -183,7 +183,7 @@ static __inline__ void iop_writeb(volatile struct mac_iop *iop, __u16 addr, __u8
+
+ static __inline__ void iop_stop(volatile struct mac_iop *iop)
+ {
+- iop->status_ctrl &= ~IOP_RUN;
++ iop->status_ctrl = IOP_AUTOINC;
+ }
+
+ static __inline__ void iop_start(volatile struct mac_iop *iop)
+@@ -191,14 +191,9 @@ static __inline__ void iop_start(volatile struct mac_iop *iop)
+ iop->status_ctrl = IOP_RUN | IOP_AUTOINC;
+ }
+
+-static __inline__ void iop_bypass(volatile struct mac_iop *iop)
+-{
+- iop->status_ctrl |= IOP_BYPASS;
+-}
+-
+ static __inline__ void iop_interrupt(volatile struct mac_iop *iop)
+ {
+- iop->status_ctrl |= IOP_IRQ;
++ iop->status_ctrl = IOP_IRQ | IOP_RUN | IOP_AUTOINC;
+ }
+
+ static int iop_alive(volatile struct mac_iop *iop)
+@@ -244,7 +239,6 @@ void __init iop_preinit(void)
+ } else {
+ iop_base[IOP_NUM_SCC] = (struct mac_iop *) SCC_IOP_BASE_QUADRA;
+ }
+- iop_base[IOP_NUM_SCC]->status_ctrl = 0x87;
+ iop_scc_present = 1;
+ } else {
+ iop_base[IOP_NUM_SCC] = NULL;
+@@ -256,7 +250,7 @@ void __init iop_preinit(void)
+ } else {
+ iop_base[IOP_NUM_ISM] = (struct mac_iop *) ISM_IOP_BASE_QUADRA;
+ }
+- iop_base[IOP_NUM_ISM]->status_ctrl = 0;
++ iop_stop(iop_base[IOP_NUM_ISM]);
+ iop_ism_present = 1;
+ } else {
+ iop_base[IOP_NUM_ISM] = NULL;
+@@ -416,7 +410,8 @@ static void iop_handle_send(uint iop_num, uint chan)
+ msg->status = IOP_MSGSTATUS_UNUSED;
+ msg = msg->next;
+ iop_send_queue[iop_num][chan] = msg;
+- if (msg) iop_do_send(msg);
++ if (msg && iop_readb(iop, IOP_ADDR_SEND_STATE + chan) == IOP_MSG_IDLE)
++ iop_do_send(msg);
+ }
+
+ /*
+@@ -490,16 +485,12 @@ int iop_send_message(uint iop_num, uint chan, void *privdata,
+
+ if (!(q = iop_send_queue[iop_num][chan])) {
+ iop_send_queue[iop_num][chan] = msg;
++ iop_do_send(msg);
+ } else {
+ while (q->next) q = q->next;
+ q->next = msg;
+ }
+
+- if (iop_readb(iop_base[iop_num],
+- IOP_ADDR_SEND_STATE + chan) == IOP_MSG_IDLE) {
+- iop_do_send(msg);
+- }
+-
+ return 0;
+ }
+
+diff --git a/arch/mips/cavium-octeon/octeon-usb.c b/arch/mips/cavium-octeon/octeon-usb.c
+index cc88a08bc1f7..4017398519cf 100644
+--- a/arch/mips/cavium-octeon/octeon-usb.c
++++ b/arch/mips/cavium-octeon/octeon-usb.c
+@@ -518,6 +518,7 @@ static int __init dwc3_octeon_device_init(void)
+
+ res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ if (res == NULL) {
++ put_device(&pdev->dev);
+ dev_err(&pdev->dev, "No memory resources\n");
+ return -ENXIO;
+ }
+@@ -529,8 +530,10 @@ static int __init dwc3_octeon_device_init(void)
+ * know the difference.
+ */
+ base = devm_ioremap_resource(&pdev->dev, res);
+- if (IS_ERR(base))
++ if (IS_ERR(base)) {
++ put_device(&pdev->dev);
+ return PTR_ERR(base);
++ }
+
+ mutex_lock(&dwc3_octeon_clocks_mutex);
+ dwc3_octeon_clocks_start(&pdev->dev, (u64)base);
+diff --git a/arch/mips/pci/pci-xtalk-bridge.c b/arch/mips/pci/pci-xtalk-bridge.c
+index 5958217861b8..9b3cc775c55e 100644
+--- a/arch/mips/pci/pci-xtalk-bridge.c
++++ b/arch/mips/pci/pci-xtalk-bridge.c
+@@ -728,6 +728,7 @@ err_free_resource:
+ pci_free_resource_list(&host->windows);
+ err_remove_domain:
+ irq_domain_remove(domain);
++ irq_domain_free_fwnode(fn);
+ return err;
+ }
+
+@@ -735,8 +736,10 @@ static int bridge_remove(struct platform_device *pdev)
+ {
+ struct pci_bus *bus = platform_get_drvdata(pdev);
+ struct bridge_controller *bc = BRIDGE_CONTROLLER(bus);
++ struct fwnode_handle *fn = bc->domain->fwnode;
+
+ irq_domain_remove(bc->domain);
++ irq_domain_free_fwnode(fn);
+ pci_lock_rescan_remove();
+ pci_stop_root_bus(bus);
+ pci_remove_root_bus(bus);
+diff --git a/arch/parisc/include/asm/barrier.h b/arch/parisc/include/asm/barrier.h
+index dbaaca84f27f..640d46edf32e 100644
+--- a/arch/parisc/include/asm/barrier.h
++++ b/arch/parisc/include/asm/barrier.h
+@@ -26,6 +26,67 @@
+ #define __smp_rmb() mb()
+ #define __smp_wmb() mb()
+
++#define __smp_store_release(p, v) \
++do { \
++ typeof(p) __p = (p); \
++ union { typeof(*p) __val; char __c[1]; } __u = \
++ { .__val = (__force typeof(*p)) (v) }; \
++ compiletime_assert_atomic_type(*p); \
++ switch (sizeof(*p)) { \
++ case 1: \
++ asm volatile("stb,ma %0,0(%1)" \
++ : : "r"(*(__u8 *)__u.__c), "r"(__p) \
++ : "memory"); \
++ break; \
++ case 2: \
++ asm volatile("sth,ma %0,0(%1)" \
++ : : "r"(*(__u16 *)__u.__c), "r"(__p) \
++ : "memory"); \
++ break; \
++ case 4: \
++ asm volatile("stw,ma %0,0(%1)" \
++ : : "r"(*(__u32 *)__u.__c), "r"(__p) \
++ : "memory"); \
++ break; \
++ case 8: \
++ if (IS_ENABLED(CONFIG_64BIT)) \
++ asm volatile("std,ma %0,0(%1)" \
++ : : "r"(*(__u64 *)__u.__c), "r"(__p) \
++ : "memory"); \
++ break; \
++ } \
++} while (0)
++
++#define __smp_load_acquire(p) \
++({ \
++ union { typeof(*p) __val; char __c[1]; } __u; \
++ typeof(p) __p = (p); \
++ compiletime_assert_atomic_type(*p); \
++ switch (sizeof(*p)) { \
++ case 1: \
++ asm volatile("ldb,ma 0(%1),%0" \
++ : "=r"(*(__u8 *)__u.__c) : "r"(__p) \
++ : "memory"); \
++ break; \
++ case 2: \
++ asm volatile("ldh,ma 0(%1),%0" \
++ : "=r"(*(__u16 *)__u.__c) : "r"(__p) \
++ : "memory"); \
++ break; \
++ case 4: \
++ asm volatile("ldw,ma 0(%1),%0" \
++ : "=r"(*(__u32 *)__u.__c) : "r"(__p) \
++ : "memory"); \
++ break; \
++ case 8: \
++ if (IS_ENABLED(CONFIG_64BIT)) \
++ asm volatile("ldd,ma 0(%1),%0" \
++ : "=r"(*(__u64 *)__u.__c) : "r"(__p) \
++ : "memory"); \
++ break; \
++ } \
++ __u.__val; \
++})
+ #include <asm-generic/barrier.h>
+
+ #endif /* !__ASSEMBLY__ */
+diff --git a/arch/parisc/include/asm/spinlock.h b/arch/parisc/include/asm/spinlock.h
+index 70fecb8dc4e2..51b6c47f802f 100644
+--- a/arch/parisc/include/asm/spinlock.h
++++ b/arch/parisc/include/asm/spinlock.h
+@@ -10,34 +10,25 @@
+ static inline int arch_spin_is_locked(arch_spinlock_t *x)
+ {
+ volatile unsigned int *a = __ldcw_align(x);
+- smp_mb();
+ return *a == 0;
+ }
+
+-static inline void arch_spin_lock(arch_spinlock_t *x)
+-{
+- volatile unsigned int *a;
+-
+- a = __ldcw_align(x);
+- while (__ldcw(a) == 0)
+- while (*a == 0)
+- cpu_relax();
+-}
++#define arch_spin_lock(lock) arch_spin_lock_flags(lock, 0)
+
+ static inline void arch_spin_lock_flags(arch_spinlock_t *x,
+ unsigned long flags)
+ {
+ volatile unsigned int *a;
+- unsigned long flags_dis;
+
+ a = __ldcw_align(x);
+- while (__ldcw(a) == 0) {
+- local_save_flags(flags_dis);
+- local_irq_restore(flags);
++ while (__ldcw(a) == 0)
+ while (*a == 0)
+- cpu_relax();
+- local_irq_restore(flags_dis);
+- }
++ if (flags & PSW_SM_I) {
++ local_irq_enable();
++ cpu_relax();
++ local_irq_disable();
++ } else
++ cpu_relax();
+ }
+ #define arch_spin_lock_flags arch_spin_lock_flags
+
+@@ -46,12 +37,8 @@ static inline void arch_spin_unlock(arch_spinlock_t *x)
+ volatile unsigned int *a;
+
+ a = __ldcw_align(x);
+-#ifdef CONFIG_SMP
+- (void) __ldcw(a);
+-#else
+- mb();
+-#endif
+- *a = 1;
++ /* Release with ordered store. */
++ __asm__ __volatile__("stw,ma %0,0(%1)" : : "r"(1), "r"(a) : "memory");
+ }
+
+ static inline int arch_spin_trylock(arch_spinlock_t *x)
+diff --git a/arch/parisc/kernel/entry.S b/arch/parisc/kernel/entry.S
+index 9a03e29c8733..755240ce671e 100644
+--- a/arch/parisc/kernel/entry.S
++++ b/arch/parisc/kernel/entry.S
+@@ -454,7 +454,6 @@
+ nop
+ LDREG 0(\ptp),\pte
+ bb,<,n \pte,_PAGE_PRESENT_BIT,3f
+- LDCW 0(\tmp),\tmp1
+ b \fault
+ stw \spc,0(\tmp)
+ 99: ALTERNATIVE(98b, 99b, ALT_COND_NO_SMP, INSN_NOP)
+@@ -464,23 +463,26 @@
+ 3:
+ .endm
+
+- /* Release pa_tlb_lock lock without reloading lock address. */
+- .macro tlb_unlock0 spc,tmp,tmp1
++ /* Release pa_tlb_lock lock without reloading lock address.
++ Note that the values in the register spc are limited to
++ NR_SPACE_IDS (262144). Thus, the stw instruction always
++ stores a nonzero value even when register spc is 64 bits.
++ We use an ordered store to ensure all prior accesses are
++ performed prior to releasing the lock. */
++ .macro tlb_unlock0 spc,tmp
+ #ifdef CONFIG_SMP
+ 98: or,COND(=) %r0,\spc,%r0
+- LDCW 0(\tmp),\tmp1
+- or,COND(=) %r0,\spc,%r0
+- stw \spc,0(\tmp)
++ stw,ma \spc,0(\tmp)
+ 99: ALTERNATIVE(98b, 99b, ALT_COND_NO_SMP, INSN_NOP)
+ #endif
+ .endm
+
+ /* Release pa_tlb_lock lock. */
+- .macro tlb_unlock1 spc,tmp,tmp1
++ .macro tlb_unlock1 spc,tmp
+ #ifdef CONFIG_SMP
+ 98: load_pa_tlb_lock \tmp
+ 99: ALTERNATIVE(98b, 99b, ALT_COND_NO_SMP, INSN_NOP)
+- tlb_unlock0 \spc,\tmp,\tmp1
++ tlb_unlock0 \spc,\tmp
+ #endif
+ .endm
+
+@@ -1163,7 +1165,7 @@ dtlb_miss_20w:
+
+ idtlbt pte,prot
+
+- tlb_unlock1 spc,t0,t1
++ tlb_unlock1 spc,t0
+ rfir
+ nop
+
+@@ -1189,7 +1191,7 @@ nadtlb_miss_20w:
+
+ idtlbt pte,prot
+
+- tlb_unlock1 spc,t0,t1
++ tlb_unlock1 spc,t0
+ rfir
+ nop
+
+@@ -1223,7 +1225,7 @@ dtlb_miss_11:
+
+ mtsp t1, %sr1 /* Restore sr1 */
+
+- tlb_unlock1 spc,t0,t1
++ tlb_unlock1 spc,t0
+ rfir
+ nop
+
+@@ -1256,7 +1258,7 @@ nadtlb_miss_11:
+
+ mtsp t1, %sr1 /* Restore sr1 */
+
+- tlb_unlock1 spc,t0,t1
++ tlb_unlock1 spc,t0
+ rfir
+ nop
+
+@@ -1285,7 +1287,7 @@ dtlb_miss_20:
+
+ idtlbt pte,prot
+
+- tlb_unlock1 spc,t0,t1
++ tlb_unlock1 spc,t0
+ rfir
+ nop
+
+@@ -1313,7 +1315,7 @@ nadtlb_miss_20:
+
+ idtlbt pte,prot
+
+- tlb_unlock1 spc,t0,t1
++ tlb_unlock1 spc,t0
+ rfir
+ nop
+
+@@ -1420,7 +1422,7 @@ itlb_miss_20w:
+
+ iitlbt pte,prot
+
+- tlb_unlock1 spc,t0,t1
++ tlb_unlock1 spc,t0
+ rfir
+ nop
+
+@@ -1444,7 +1446,7 @@ naitlb_miss_20w:
+
+ iitlbt pte,prot
+
+- tlb_unlock1 spc,t0,t1
++ tlb_unlock1 spc,t0
+ rfir
+ nop
+
+@@ -1478,7 +1480,7 @@ itlb_miss_11:
+
+ mtsp t1, %sr1 /* Restore sr1 */
+
+- tlb_unlock1 spc,t0,t1
++ tlb_unlock1 spc,t0
+ rfir
+ nop
+
+@@ -1502,7 +1504,7 @@ naitlb_miss_11:
+
+ mtsp t1, %sr1 /* Restore sr1 */
+
+- tlb_unlock1 spc,t0,t1
++ tlb_unlock1 spc,t0
+ rfir
+ nop
+
+@@ -1532,7 +1534,7 @@ itlb_miss_20:
+
+ iitlbt pte,prot
+
+- tlb_unlock1 spc,t0,t1
++ tlb_unlock1 spc,t0
+ rfir
+ nop
+
+@@ -1552,7 +1554,7 @@ naitlb_miss_20:
+
+ iitlbt pte,prot
+
+- tlb_unlock1 spc,t0,t1
++ tlb_unlock1 spc,t0
+ rfir
+ nop
+
+@@ -1582,7 +1584,7 @@ dbit_trap_20w:
+
+ idtlbt pte,prot
+
+- tlb_unlock0 spc,t0,t1
++ tlb_unlock0 spc,t0
+ rfir
+ nop
+ #else
+@@ -1608,7 +1610,7 @@ dbit_trap_11:
+
+ mtsp t1, %sr1 /* Restore sr1 */
+
+- tlb_unlock0 spc,t0,t1
++ tlb_unlock0 spc,t0
+ rfir
+ nop
+
+@@ -1628,7 +1630,7 @@ dbit_trap_20:
+
+ idtlbt pte,prot
+
+- tlb_unlock0 spc,t0,t1
++ tlb_unlock0 spc,t0
+ rfir
+ nop
+ #endif
+diff --git a/arch/parisc/kernel/syscall.S b/arch/parisc/kernel/syscall.S
+index f05c9d5b6b9e..3ad61a177f5b 100644
+--- a/arch/parisc/kernel/syscall.S
++++ b/arch/parisc/kernel/syscall.S
+@@ -640,11 +640,7 @@ cas_action:
+ sub,<> %r28, %r25, %r0
+ 2: stw %r24, 0(%r26)
+ /* Free lock */
+-#ifdef CONFIG_SMP
+-98: LDCW 0(%sr2,%r20), %r1 /* Barrier */
+-99: ALTERNATIVE(98b, 99b, ALT_COND_NO_SMP, INSN_NOP)
+-#endif
+- stw %r20, 0(%sr2,%r20)
++ stw,ma %r20, 0(%sr2,%r20)
+ #if ENABLE_LWS_DEBUG
+ /* Clear thread register indicator */
+ stw %r0, 4(%sr2,%r20)
+@@ -658,11 +654,7 @@ cas_action:
+ 3:
+ /* Error occurred on load or store */
+ /* Free lock */
+-#ifdef CONFIG_SMP
+-98: LDCW 0(%sr2,%r20), %r1 /* Barrier */
+-99: ALTERNATIVE(98b, 99b, ALT_COND_NO_SMP, INSN_NOP)
+-#endif
+- stw %r20, 0(%sr2,%r20)
++ stw,ma %r20, 0(%sr2,%r20)
+ #if ENABLE_LWS_DEBUG
+ stw %r0, 4(%sr2,%r20)
+ #endif
+@@ -863,11 +855,7 @@ cas2_action:
+
+ cas2_end:
+ /* Free lock */
+-#ifdef CONFIG_SMP
+-98: LDCW 0(%sr2,%r20), %r1 /* Barrier */
+-99: ALTERNATIVE(98b, 99b, ALT_COND_NO_SMP, INSN_NOP)
+-#endif
+- stw %r20, 0(%sr2,%r20)
++ stw,ma %r20, 0(%sr2,%r20)
+ /* Enable interrupts */
+ ssm PSW_SM_I, %r0
+ /* Return to userspace, set no error */
+@@ -877,11 +865,7 @@ cas2_end:
+ 22:
+ /* Error occurred on load or store */
+ /* Free lock */
+-#ifdef CONFIG_SMP
+-98: LDCW 0(%sr2,%r20), %r1 /* Barrier */
+-99: ALTERNATIVE(98b, 99b, ALT_COND_NO_SMP, INSN_NOP)
+-#endif
+- stw %r20, 0(%sr2,%r20)
++ stw,ma %r20, 0(%sr2,%r20)
+ ssm PSW_SM_I, %r0
+ ldo 1(%r0),%r28
+ b lws_exit
+diff --git a/arch/powerpc/boot/Makefile b/arch/powerpc/boot/Makefile
+index c53a1b8bba8b..e32a9e40a522 100644
+--- a/arch/powerpc/boot/Makefile
++++ b/arch/powerpc/boot/Makefile
+@@ -119,7 +119,7 @@ src-wlib-y := string.S crt0.S stdio.c decompress.c main.c \
+ elf_util.c $(zlib-y) devtree.c stdlib.c \
+ oflib.c ofconsole.c cuboot.c
+
+-src-wlib-$(CONFIG_PPC_MPC52XX) += mpc52xx-psc.c
++src-wlib-$(CONFIG_PPC_MPC52xx) += mpc52xx-psc.c
+ src-wlib-$(CONFIG_PPC64_BOOT_WRAPPER) += opal-calls.S opal.c
+ ifndef CONFIG_PPC64_BOOT_WRAPPER
+ src-wlib-y += crtsavres.S
+diff --git a/arch/powerpc/boot/serial.c b/arch/powerpc/boot/serial.c
+index 9457863147f9..00179cd6bdd0 100644
+--- a/arch/powerpc/boot/serial.c
++++ b/arch/powerpc/boot/serial.c
+@@ -128,7 +128,7 @@ int serial_console_init(void)
+ dt_is_compatible(devp, "fsl,cpm2-smc-uart"))
+ rc = cpm_console_init(devp, &serial_cd);
+ #endif
+-#ifdef CONFIG_PPC_MPC52XX
++#ifdef CONFIG_PPC_MPC52xx
+ else if (dt_is_compatible(devp, "fsl,mpc5200-psc-uart"))
+ rc = mpc5200_psc_console_init(devp, &serial_cd);
+ #endif
+diff --git a/arch/powerpc/include/asm/fixmap.h b/arch/powerpc/include/asm/fixmap.h
+index 2ef155a3c821..77ab25a19974 100644
+--- a/arch/powerpc/include/asm/fixmap.h
++++ b/arch/powerpc/include/asm/fixmap.h
+@@ -52,7 +52,7 @@ enum fixed_addresses {
+ FIX_HOLE,
+ /* reserve the top 128K for early debugging purposes */
+ FIX_EARLY_DEBUG_TOP = FIX_HOLE,
+- FIX_EARLY_DEBUG_BASE = FIX_EARLY_DEBUG_TOP+((128*1024)/PAGE_SIZE)-1,
++ FIX_EARLY_DEBUG_BASE = FIX_EARLY_DEBUG_TOP+(ALIGN(SZ_128, PAGE_SIZE)/PAGE_SIZE)-1,
+ #ifdef CONFIG_HIGHMEM
+ FIX_KMAP_BEGIN, /* reserved pte's for temporary kernel mappings */
+ FIX_KMAP_END = FIX_KMAP_BEGIN+(KM_TYPE_NR*NR_CPUS)-1,
+diff --git a/arch/powerpc/include/asm/perf_event.h b/arch/powerpc/include/asm/perf_event.h
+index eed3954082fa..1e8b2e1ec1db 100644
+--- a/arch/powerpc/include/asm/perf_event.h
++++ b/arch/powerpc/include/asm/perf_event.h
+@@ -12,6 +12,8 @@
+
+ #ifdef CONFIG_PPC_PERF_CTRS
+ #include <asm/perf_event_server.h>
++#else
++static inline bool is_sier_available(void) { return false; }
+ #endif
+
+ #ifdef CONFIG_FSL_EMB_PERF_EVENT
+diff --git a/arch/powerpc/include/asm/ptrace.h b/arch/powerpc/include/asm/ptrace.h
+index e0195e6b892b..71ade62fb897 100644
+--- a/arch/powerpc/include/asm/ptrace.h
++++ b/arch/powerpc/include/asm/ptrace.h
+@@ -206,7 +206,7 @@ do { \
+ #endif /* __powerpc64__ */
+
+ #define arch_has_single_step() (1)
+-#ifndef CONFIG_BOOK3S_601
++#ifndef CONFIG_PPC_BOOK3S_601
+ #define arch_has_block_step() (true)
+ #else
+ #define arch_has_block_step() (false)
+diff --git a/arch/powerpc/include/asm/rtas.h b/arch/powerpc/include/asm/rtas.h
+index 3c1887351c71..bd227e0eab07 100644
+--- a/arch/powerpc/include/asm/rtas.h
++++ b/arch/powerpc/include/asm/rtas.h
+@@ -368,8 +368,6 @@ extern int rtas_set_indicator_fast(int indicator, int index, int new_value);
+ extern void rtas_progress(char *s, unsigned short hex);
+ extern int rtas_suspend_cpu(struct rtas_suspend_me_data *data);
+ extern int rtas_suspend_last_cpu(struct rtas_suspend_me_data *data);
+-extern int rtas_online_cpus_mask(cpumask_var_t cpus);
+-extern int rtas_offline_cpus_mask(cpumask_var_t cpus);
+ extern int rtas_ibm_suspend_me(u64 handle);
+
+ struct rtc_time;
+diff --git a/arch/powerpc/include/asm/timex.h b/arch/powerpc/include/asm/timex.h
+index d2d2c4bd8435..6047402b0a4d 100644
+--- a/arch/powerpc/include/asm/timex.h
++++ b/arch/powerpc/include/asm/timex.h
+@@ -17,7 +17,7 @@ typedef unsigned long cycles_t;
+
+ static inline cycles_t get_cycles(void)
+ {
+- if (IS_ENABLED(CONFIG_BOOK3S_601))
++ if (IS_ENABLED(CONFIG_PPC_BOOK3S_601))
+ return 0;
+
+ return mftb();
+diff --git a/arch/powerpc/kernel/rtas.c b/arch/powerpc/kernel/rtas.c
+index c5fa251b8950..01210593d60c 100644
+--- a/arch/powerpc/kernel/rtas.c
++++ b/arch/powerpc/kernel/rtas.c
+@@ -842,96 +842,6 @@ static void rtas_percpu_suspend_me(void *info)
+ __rtas_suspend_cpu((struct rtas_suspend_me_data *)info, 1);
+ }
+
+-enum rtas_cpu_state {
+- DOWN,
+- UP,
+-};
+-
+-#ifndef CONFIG_SMP
+-static int rtas_cpu_state_change_mask(enum rtas_cpu_state state,
+- cpumask_var_t cpus)
+-{
+- if (!cpumask_empty(cpus)) {
+- cpumask_clear(cpus);
+- return -EINVAL;
+- } else
+- return 0;
+-}
+-#else
+-/* On return cpumask will be altered to indicate CPUs changed.
+- * CPUs with states changed will be set in the mask,
+- * CPUs with status unchanged will be unset in the mask. */
+-static int rtas_cpu_state_change_mask(enum rtas_cpu_state state,
+- cpumask_var_t cpus)
+-{
+- int cpu;
+- int cpuret = 0;
+- int ret = 0;
+-
+- if (cpumask_empty(cpus))
+- return 0;
+-
+- for_each_cpu(cpu, cpus) {
+- struct device *dev = get_cpu_device(cpu);
+-
+- switch (state) {
+- case DOWN:
+- cpuret = device_offline(dev);
+- break;
+- case UP:
+- cpuret = device_online(dev);
+- break;
+- }
+- if (cpuret < 0) {
+- pr_debug("%s: cpu_%s for cpu#%d returned %d.\n",
+- __func__,
+- ((state == UP) ? "up" : "down"),
+- cpu, cpuret);
+- if (!ret)
+- ret = cpuret;
+- if (state == UP) {
+- /* clear bits for unchanged cpus, return */
+- cpumask_shift_right(cpus, cpus, cpu);
+- cpumask_shift_left(cpus, cpus, cpu);
+- break;
+- } else {
+- /* clear bit for unchanged cpu, continue */
+- cpumask_clear_cpu(cpu, cpus);
+- }
+- }
+- cond_resched();
+- }
+-
+- return ret;
+-}
+-#endif
+-
+-int rtas_online_cpus_mask(cpumask_var_t cpus)
+-{
+- int ret;
+-
+- ret = rtas_cpu_state_change_mask(UP, cpus);
+-
+- if (ret) {
+- cpumask_var_t tmp_mask;
+-
+- if (!alloc_cpumask_var(&tmp_mask, GFP_KERNEL))
+- return ret;
+-
+- /* Use tmp_mask to preserve cpus mask from first failure */
+- cpumask_copy(tmp_mask, cpus);
+- rtas_offline_cpus_mask(tmp_mask);
+- free_cpumask_var(tmp_mask);
+- }
+-
+- return ret;
+-}
+-
+-int rtas_offline_cpus_mask(cpumask_var_t cpus)
+-{
+- return rtas_cpu_state_change_mask(DOWN, cpus);
+-}
+-
+ int rtas_ibm_suspend_me(u64 handle)
+ {
+ long state;
+@@ -939,8 +849,6 @@ int rtas_ibm_suspend_me(u64 handle)
+ unsigned long retbuf[PLPAR_HCALL_BUFSIZE];
+ struct rtas_suspend_me_data data;
+ DECLARE_COMPLETION_ONSTACK(done);
+- cpumask_var_t offline_mask;
+- int cpuret;
+
+ if (!rtas_service_present("ibm,suspend-me"))
+ return -ENOSYS;
+@@ -961,9 +869,6 @@ int rtas_ibm_suspend_me(u64 handle)
+ return -EIO;
+ }
+
+- if (!alloc_cpumask_var(&offline_mask, GFP_KERNEL))
+- return -ENOMEM;
+-
+ atomic_set(&data.working, 0);
+ atomic_set(&data.done, 0);
+ atomic_set(&data.error, 0);
+@@ -972,24 +877,8 @@ int rtas_ibm_suspend_me(u64 handle)
+
+ lock_device_hotplug();
+
+- /* All present CPUs must be online */
+- cpumask_andnot(offline_mask, cpu_present_mask, cpu_online_mask);
+- cpuret = rtas_online_cpus_mask(offline_mask);
+- if (cpuret) {
+- pr_err("%s: Could not bring present CPUs online.\n", __func__);
+- atomic_set(&data.error, cpuret);
+- goto out;
+- }
+-
+ cpu_hotplug_disable();
+
+- /* Check if we raced with a CPU-Offline Operation */
+- if (!cpumask_equal(cpu_present_mask, cpu_online_mask)) {
+- pr_info("%s: Raced against a concurrent CPU-Offline\n", __func__);
+- atomic_set(&data.error, -EAGAIN);
+- goto out_hotplug_enable;
+- }
+-
+ /* Call function on all CPUs. One of us will make the
+ * rtas call
+ */
+@@ -1000,18 +889,11 @@ int rtas_ibm_suspend_me(u64 handle)
+ if (atomic_read(&data.error) != 0)
+ printk(KERN_ERR "Error doing global join\n");
+
+-out_hotplug_enable:
+- cpu_hotplug_enable();
+
+- /* Take down CPUs not online prior to suspend */
+- cpuret = rtas_offline_cpus_mask(offline_mask);
+- if (cpuret)
+- pr_warn("%s: Could not restore CPUs to offline state.\n",
+- __func__);
++ cpu_hotplug_enable();
+
+-out:
+ unlock_device_hotplug();
+- free_cpumask_var(offline_mask);
++
+ return atomic_read(&data.error);
+ }
+ #else /* CONFIG_PPC_PSERIES */
+diff --git a/arch/powerpc/kernel/vdso.c b/arch/powerpc/kernel/vdso.c
+index f38f26e844b6..1c07df1ad560 100644
+--- a/arch/powerpc/kernel/vdso.c
++++ b/arch/powerpc/kernel/vdso.c
+@@ -678,7 +678,7 @@ int vdso_getcpu_init(void)
+ node = cpu_to_node(cpu);
+ WARN_ON_ONCE(node > 0xffff);
+
+- val = (cpu & 0xfff) | ((node & 0xffff) << 16);
++ val = (cpu & 0xffff) | ((node & 0xffff) << 16);
+ mtspr(SPRN_SPRG_VDSO_WRITE, val);
+ get_paca()->sprg_vdso = val;
+
+diff --git a/arch/powerpc/mm/book3s64/hash_utils.c b/arch/powerpc/mm/book3s64/hash_utils.c
+index 8ed2411c3f39..cf2e1b06e5d4 100644
+--- a/arch/powerpc/mm/book3s64/hash_utils.c
++++ b/arch/powerpc/mm/book3s64/hash_utils.c
+@@ -660,11 +660,10 @@ static void __init htab_init_page_sizes(void)
+ * Pick a size for the linear mapping. Currently, we only
+ * support 16M, 1M and 4K which is the default
+ */
+- if (IS_ENABLED(STRICT_KERNEL_RWX) &&
++ if (IS_ENABLED(CONFIG_STRICT_KERNEL_RWX) &&
+ (unsigned long)_stext % 0x1000000) {
+ if (mmu_psize_defs[MMU_PAGE_16M].shift)
+- pr_warn("Kernel not 16M aligned, "
+- "disabling 16M linear map alignment");
++ pr_warn("Kernel not 16M aligned, disabling 16M linear map alignment\n");
+ aligned = false;
+ }
+
+diff --git a/arch/powerpc/mm/book3s64/pkeys.c b/arch/powerpc/mm/book3s64/pkeys.c
+index 268ce9581676..fa237c8c161f 100644
+--- a/arch/powerpc/mm/book3s64/pkeys.c
++++ b/arch/powerpc/mm/book3s64/pkeys.c
+@@ -83,13 +83,17 @@ static int pkey_initialize(void)
+ scan_pkey_feature();
+
+ /*
+- * Let's assume 32 pkeys on P8 bare metal, if its not defined by device
+- * tree. We make this exception since skiboot forgot to expose this
+- * property on power8.
++ * Let's assume 32 pkeys on P8/P9 bare metal, if its not defined by device
++ * tree. We make this exception since some version of skiboot forgot to
++ * expose this property on power8/9.
+ */
+- if (!pkeys_devtree_defined && !firmware_has_feature(FW_FEATURE_LPAR) &&
+- cpu_has_feature(CPU_FTRS_POWER8))
+- pkeys_total = 32;
++ if (!pkeys_devtree_defined && !firmware_has_feature(FW_FEATURE_LPAR)) {
++ unsigned long pvr = mfspr(SPRN_PVR);
++
++ if (PVR_VER(pvr) == PVR_POWER8 || PVR_VER(pvr) == PVR_POWER8E ||
++ PVR_VER(pvr) == PVR_POWER8NVL || PVR_VER(pvr) == PVR_POWER9)
++ pkeys_total = 32;
++ }
+
+ /*
+ * Adjust the upper limit, based on the number of bits supported by
+diff --git a/arch/powerpc/platforms/pseries/hotplug-cpu.c b/arch/powerpc/platforms/pseries/hotplug-cpu.c
+index 3e8cbfe7a80f..6d4ee03d476a 100644
+--- a/arch/powerpc/platforms/pseries/hotplug-cpu.c
++++ b/arch/powerpc/platforms/pseries/hotplug-cpu.c
+@@ -35,54 +35,10 @@
+ #include <asm/topology.h>
+
+ #include "pseries.h"
+-#include "offline_states.h"
+
+ /* This version can't take the spinlock, because it never returns */
+ static int rtas_stop_self_token = RTAS_UNKNOWN_SERVICE;
+
+-static DEFINE_PER_CPU(enum cpu_state_vals, preferred_offline_state) =
+- CPU_STATE_OFFLINE;
+-static DEFINE_PER_CPU(enum cpu_state_vals, current_state) = CPU_STATE_OFFLINE;
+-
+-static enum cpu_state_vals default_offline_state = CPU_STATE_OFFLINE;
+-
+-static bool cede_offline_enabled __read_mostly = true;
+-
+-/*
+- * Enable/disable cede_offline when available.
+- */
+-static int __init setup_cede_offline(char *str)
+-{
+- return (kstrtobool(str, &cede_offline_enabled) == 0);
+-}
+-
+-__setup("cede_offline=", setup_cede_offline);
+-
+-enum cpu_state_vals get_cpu_current_state(int cpu)
+-{
+- return per_cpu(current_state, cpu);
+-}
+-
+-void set_cpu_current_state(int cpu, enum cpu_state_vals state)
+-{
+- per_cpu(current_state, cpu) = state;
+-}
+-
+-enum cpu_state_vals get_preferred_offline_state(int cpu)
+-{
+- return per_cpu(preferred_offline_state, cpu);
+-}
+-
+-void set_preferred_offline_state(int cpu, enum cpu_state_vals state)
+-{
+- per_cpu(preferred_offline_state, cpu) = state;
+-}
+-
+-void set_default_offline_state(int cpu)
+-{
+- per_cpu(preferred_offline_state, cpu) = default_offline_state;
+-}
+-
+ static void rtas_stop_self(void)
+ {
+ static struct rtas_args args;
+@@ -101,9 +57,7 @@ static void rtas_stop_self(void)
+
+ static void pseries_mach_cpu_die(void)
+ {
+- unsigned int cpu = smp_processor_id();
+ unsigned int hwcpu = hard_smp_processor_id();
+- u8 cede_latency_hint = 0;
+
+ local_irq_disable();
+ idle_task_exit();
+@@ -112,49 +66,6 @@ static void pseries_mach_cpu_die(void)
+ else
+ xics_teardown_cpu();
+
+- if (get_preferred_offline_state(cpu) == CPU_STATE_INACTIVE) {
+- set_cpu_current_state(cpu, CPU_STATE_INACTIVE);
+- if (ppc_md.suspend_disable_cpu)
+- ppc_md.suspend_disable_cpu();
+-
+- cede_latency_hint = 2;
+-
+- get_lppaca()->idle = 1;
+- if (!lppaca_shared_proc(get_lppaca()))
+- get_lppaca()->donate_dedicated_cpu = 1;
+-
+- while (get_preferred_offline_state(cpu) == CPU_STATE_INACTIVE) {
+- while (!prep_irq_for_idle()) {
+- local_irq_enable();
+- local_irq_disable();
+- }
+-
+- extended_cede_processor(cede_latency_hint);
+- }
+-
+- local_irq_disable();
+-
+- if (!lppaca_shared_proc(get_lppaca()))
+- get_lppaca()->donate_dedicated_cpu = 0;
+- get_lppaca()->idle = 0;
+-
+- if (get_preferred_offline_state(cpu) == CPU_STATE_ONLINE) {
+- unregister_slb_shadow(hwcpu);
+-
+- hard_irq_disable();
+- /*
+- * Call to start_secondary_resume() will not return.
+- * Kernel stack will be reset and start_secondary()
+- * will be called to continue the online operation.
+- */
+- start_secondary_resume();
+- }
+- }
+-
+- /* Requested state is CPU_STATE_OFFLINE at this point */
+- WARN_ON(get_preferred_offline_state(cpu) != CPU_STATE_OFFLINE);
+-
+- set_cpu_current_state(cpu, CPU_STATE_OFFLINE);
+ unregister_slb_shadow(hwcpu);
+ rtas_stop_self();
+
+@@ -200,24 +111,13 @@ static void pseries_cpu_die(unsigned int cpu)
+ int cpu_status = 1;
+ unsigned int pcpu = get_hard_smp_processor_id(cpu);
+
+- if (get_preferred_offline_state(cpu) == CPU_STATE_INACTIVE) {
+- cpu_status = 1;
+- for (tries = 0; tries < 5000; tries++) {
+- if (get_cpu_current_state(cpu) == CPU_STATE_INACTIVE) {
+- cpu_status = 0;
+- break;
+- }
+- msleep(1);
+- }
+- } else if (get_preferred_offline_state(cpu) == CPU_STATE_OFFLINE) {
++ for (tries = 0; tries < 25; tries++) {
++ cpu_status = smp_query_cpu_stopped(pcpu);
++ if (cpu_status == QCSS_STOPPED ||
++ cpu_status == QCSS_HARDWARE_ERROR)
++ break;
++ cpu_relax();
+
+- for (tries = 0; tries < 25; tries++) {
+- cpu_status = smp_query_cpu_stopped(pcpu);
+- if (cpu_status == QCSS_STOPPED ||
+- cpu_status == QCSS_HARDWARE_ERROR)
+- break;
+- cpu_relax();
+- }
+ }
+
+ if (cpu_status != 0) {
+@@ -359,28 +259,15 @@ static int dlpar_offline_cpu(struct device_node *dn)
+ if (get_hard_smp_processor_id(cpu) != thread)
+ continue;
+
+- if (get_cpu_current_state(cpu) == CPU_STATE_OFFLINE)
++ if (!cpu_online(cpu))
+ break;
+
+- if (get_cpu_current_state(cpu) == CPU_STATE_ONLINE) {
+- set_preferred_offline_state(cpu,
+- CPU_STATE_OFFLINE);
+- cpu_maps_update_done();
+- timed_topology_update(1);
+- rc = device_offline(get_cpu_device(cpu));
+- if (rc)
+- goto out;
+- cpu_maps_update_begin();
+- break;
+- }
+-
+- /*
+- * The cpu is in CPU_STATE_INACTIVE.
+- * Upgrade it's state to CPU_STATE_OFFLINE.
+- */
+- set_preferred_offline_state(cpu, CPU_STATE_OFFLINE);
+- WARN_ON(plpar_hcall_norets(H_PROD, thread) != H_SUCCESS);
+- __cpu_die(cpu);
++ cpu_maps_update_done();
++ timed_topology_update(1);
++ rc = device_offline(get_cpu_device(cpu));
++ if (rc)
++ goto out;
++ cpu_maps_update_begin();
+ break;
+ }
+ if (cpu == num_possible_cpus()) {
+@@ -414,8 +301,6 @@ static int dlpar_online_cpu(struct device_node *dn)
+ for_each_present_cpu(cpu) {
+ if (get_hard_smp_processor_id(cpu) != thread)
+ continue;
+- BUG_ON(get_cpu_current_state(cpu)
+- != CPU_STATE_OFFLINE);
+ cpu_maps_update_done();
+ timed_topology_update(1);
+ find_and_online_cpu_nid(cpu);
+@@ -854,7 +739,6 @@ static int dlpar_cpu_add_by_count(u32 cpus_to_add)
+ parent = of_find_node_by_path("/cpus");
+ if (!parent) {
+ pr_warn("Could not find CPU root node in device tree\n");
+- kfree(cpu_drcs);
+ return -1;
+ }
+
+@@ -1013,27 +897,8 @@ static struct notifier_block pseries_smp_nb = {
+ .notifier_call = pseries_smp_notifier,
+ };
+
+-#define MAX_CEDE_LATENCY_LEVELS 4
+-#define CEDE_LATENCY_PARAM_LENGTH 10
+-#define CEDE_LATENCY_PARAM_MAX_LENGTH \
+- (MAX_CEDE_LATENCY_LEVELS * CEDE_LATENCY_PARAM_LENGTH * sizeof(char))
+-#define CEDE_LATENCY_TOKEN 45
+-
+-static char cede_parameters[CEDE_LATENCY_PARAM_MAX_LENGTH];
+-
+-static int parse_cede_parameters(void)
+-{
+- memset(cede_parameters, 0, CEDE_LATENCY_PARAM_MAX_LENGTH);
+- return rtas_call(rtas_token("ibm,get-system-parameter"), 3, 1,
+- NULL,
+- CEDE_LATENCY_TOKEN,
+- __pa(cede_parameters),
+- CEDE_LATENCY_PARAM_MAX_LENGTH);
+-}
+-
+ static int __init pseries_cpu_hotplug_init(void)
+ {
+- int cpu;
+ int qcss_tok;
+
+ #ifdef CONFIG_ARCH_CPU_PROBE_RELEASE
+@@ -1056,16 +921,8 @@ static int __init pseries_cpu_hotplug_init(void)
+ smp_ops->cpu_die = pseries_cpu_die;
+
+ /* Processors can be added/removed only on LPAR */
+- if (firmware_has_feature(FW_FEATURE_LPAR)) {
++ if (firmware_has_feature(FW_FEATURE_LPAR))
+ of_reconfig_notifier_register(&pseries_smp_nb);
+- cpu_maps_update_begin();
+- if (cede_offline_enabled && parse_cede_parameters() == 0) {
+- default_offline_state = CPU_STATE_INACTIVE;
+- for_each_online_cpu(cpu)
+- set_default_offline_state(cpu);
+- }
+- cpu_maps_update_done();
+- }
+
+ return 0;
+ }
+diff --git a/arch/powerpc/platforms/pseries/offline_states.h b/arch/powerpc/platforms/pseries/offline_states.h
+deleted file mode 100644
+index 51414aee2862..000000000000
+--- a/arch/powerpc/platforms/pseries/offline_states.h
++++ /dev/null
+@@ -1,38 +0,0 @@
+-/* SPDX-License-Identifier: GPL-2.0 */
+-#ifndef _OFFLINE_STATES_H_
+-#define _OFFLINE_STATES_H_
+-
+-/* Cpu offline states go here */
+-enum cpu_state_vals {
+- CPU_STATE_OFFLINE,
+- CPU_STATE_INACTIVE,
+- CPU_STATE_ONLINE,
+- CPU_MAX_OFFLINE_STATES
+-};
+-
+-#ifdef CONFIG_HOTPLUG_CPU
+-extern enum cpu_state_vals get_cpu_current_state(int cpu);
+-extern void set_cpu_current_state(int cpu, enum cpu_state_vals state);
+-extern void set_preferred_offline_state(int cpu, enum cpu_state_vals state);
+-extern void set_default_offline_state(int cpu);
+-#else
+-static inline enum cpu_state_vals get_cpu_current_state(int cpu)
+-{
+- return CPU_STATE_ONLINE;
+-}
+-
+-static inline void set_cpu_current_state(int cpu, enum cpu_state_vals state)
+-{
+-}
+-
+-static inline void set_preferred_offline_state(int cpu, enum cpu_state_vals state)
+-{
+-}
+-
+-static inline void set_default_offline_state(int cpu)
+-{
+-}
+-#endif
+-
+-extern enum cpu_state_vals get_preferred_offline_state(int cpu);
+-#endif
+diff --git a/arch/powerpc/platforms/pseries/pmem.c b/arch/powerpc/platforms/pseries/pmem.c
+index f860a897a9e0..f827de7087e9 100644
+--- a/arch/powerpc/platforms/pseries/pmem.c
++++ b/arch/powerpc/platforms/pseries/pmem.c
+@@ -24,7 +24,6 @@
+ #include <asm/topology.h>
+
+ #include "pseries.h"
+-#include "offline_states.h"
+
+ static struct device_node *pmem_node;
+
+diff --git a/arch/powerpc/platforms/pseries/smp.c b/arch/powerpc/platforms/pseries/smp.c
+index ad61e90032da..a8a070269151 100644
+--- a/arch/powerpc/platforms/pseries/smp.c
++++ b/arch/powerpc/platforms/pseries/smp.c
+@@ -44,8 +44,6 @@
+ #include <asm/svm.h>
+
+ #include "pseries.h"
+-#include "offline_states.h"
+-
+
+ /*
+ * The Primary thread of each non-boot processor was started from the OF client
+@@ -108,10 +106,7 @@ static inline int smp_startup_cpu(unsigned int lcpu)
+
+ /* Fixup atomic count: it exited inside IRQ handler. */
+ task_thread_info(paca_ptrs[lcpu]->__current)->preempt_count = 0;
+-#ifdef CONFIG_HOTPLUG_CPU
+- if (get_cpu_current_state(lcpu) == CPU_STATE_INACTIVE)
+- goto out;
+-#endif
++
+ /*
+ * If the RTAS start-cpu token does not exist then presume the
+ * cpu is already spinning.
+@@ -126,9 +121,6 @@ static inline int smp_startup_cpu(unsigned int lcpu)
+ return 0;
+ }
+
+-#ifdef CONFIG_HOTPLUG_CPU
+-out:
+-#endif
+ return 1;
+ }
+
+@@ -143,10 +135,6 @@ static void smp_setup_cpu(int cpu)
+ vpa_init(cpu);
+
+ cpumask_clear_cpu(cpu, of_spin_mask);
+-#ifdef CONFIG_HOTPLUG_CPU
+- set_cpu_current_state(cpu, CPU_STATE_ONLINE);
+- set_default_offline_state(cpu);
+-#endif
+ }
+
+ static int smp_pSeries_kick_cpu(int nr)
+@@ -163,20 +151,6 @@ static int smp_pSeries_kick_cpu(int nr)
+ * the processor will continue on to secondary_start
+ */
+ paca_ptrs[nr]->cpu_start = 1;
+-#ifdef CONFIG_HOTPLUG_CPU
+- set_preferred_offline_state(nr, CPU_STATE_ONLINE);
+-
+- if (get_cpu_current_state(nr) == CPU_STATE_INACTIVE) {
+- long rc;
+- unsigned long hcpuid;
+-
+- hcpuid = get_hard_smp_processor_id(nr);
+- rc = plpar_hcall_norets(H_PROD, hcpuid);
+- if (rc != H_SUCCESS)
+- printk(KERN_ERR "Error: Prod to wake up processor %d "
+- "Ret= %ld\n", nr, rc);
+- }
+-#endif
+
+ return 0;
+ }
+diff --git a/arch/powerpc/platforms/pseries/suspend.c b/arch/powerpc/platforms/pseries/suspend.c
+index 0a24a5a185f0..f789693f61f4 100644
+--- a/arch/powerpc/platforms/pseries/suspend.c
++++ b/arch/powerpc/platforms/pseries/suspend.c
+@@ -132,15 +132,11 @@ static ssize_t store_hibernate(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t count)
+ {
+- cpumask_var_t offline_mask;
+ int rc;
+
+ if (!capable(CAP_SYS_ADMIN))
+ return -EPERM;
+
+- if (!alloc_cpumask_var(&offline_mask, GFP_KERNEL))
+- return -ENOMEM;
+-
+ stream_id = simple_strtoul(buf, NULL, 16);
+
+ do {
+@@ -150,32 +146,16 @@ static ssize_t store_hibernate(struct device *dev,
+ } while (rc == -EAGAIN);
+
+ if (!rc) {
+- /* All present CPUs must be online */
+- cpumask_andnot(offline_mask, cpu_present_mask,
+- cpu_online_mask);
+- rc = rtas_online_cpus_mask(offline_mask);
+- if (rc) {
+- pr_err("%s: Could not bring present CPUs online.\n",
+- __func__);
+- goto out;
+- }
+-
+ stop_topology_update();
+ rc = pm_suspend(PM_SUSPEND_MEM);
+ start_topology_update();
+-
+- /* Take down CPUs not online prior to suspend */
+- if (!rtas_offline_cpus_mask(offline_mask))
+- pr_warn("%s: Could not restore CPUs to offline "
+- "state.\n", __func__);
+ }
+
+ stream_id = 0;
+
+ if (!rc)
+ rc = count;
+-out:
+- free_cpumask_var(offline_mask);
++
+ return rc;
+ }
+
+diff --git a/arch/s390/include/asm/topology.h b/arch/s390/include/asm/topology.h
+index fbb507504a3b..3a0ac0c7a9a3 100644
+--- a/arch/s390/include/asm/topology.h
++++ b/arch/s390/include/asm/topology.h
+@@ -86,12 +86,6 @@ static inline const struct cpumask *cpumask_of_node(int node)
+
+ #define pcibus_to_node(bus) __pcibus_to_node(bus)
+
+-#define node_distance(a, b) __node_distance(a, b)
+-static inline int __node_distance(int a, int b)
+-{
+- return 0;
+-}
+-
+ #else /* !CONFIG_NUMA */
+
+ #define numa_node_id numa_node_id
+diff --git a/arch/s390/mm/gmap.c b/arch/s390/mm/gmap.c
+index 1a95d8809cc3..d035fcdcf083 100644
+--- a/arch/s390/mm/gmap.c
++++ b/arch/s390/mm/gmap.c
+@@ -2485,23 +2485,36 @@ void gmap_sync_dirty_log_pmd(struct gmap *gmap, unsigned long bitmap[4],
+ }
+ EXPORT_SYMBOL_GPL(gmap_sync_dirty_log_pmd);
+
++#ifdef CONFIG_TRANSPARENT_HUGEPAGE
++static int thp_split_walk_pmd_entry(pmd_t *pmd, unsigned long addr,
++ unsigned long end, struct mm_walk *walk)
++{
++ struct vm_area_struct *vma = walk->vma;
++
++ split_huge_pmd(vma, pmd, addr);
++ return 0;
++}
++
++static const struct mm_walk_ops thp_split_walk_ops = {
++ .pmd_entry = thp_split_walk_pmd_entry,
++};
++
+ static inline void thp_split_mm(struct mm_struct *mm)
+ {
+-#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+ struct vm_area_struct *vma;
+- unsigned long addr;
+
+ for (vma = mm->mmap; vma != NULL; vma = vma->vm_next) {
+- for (addr = vma->vm_start;
+- addr < vma->vm_end;
+- addr += PAGE_SIZE)
+- follow_page(vma, addr, FOLL_SPLIT);
+ vma->vm_flags &= ~VM_HUGEPAGE;
+ vma->vm_flags |= VM_NOHUGEPAGE;
++ walk_page_vma(vma, &thp_split_walk_ops, NULL);
+ }
+ mm->def_flags |= VM_NOHUGEPAGE;
+-#endif
+ }
++#else
++static inline void thp_split_mm(struct mm_struct *mm)
++{
++}
++#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
+
+ /*
+ * Remove all empty zero pages from the mapping for lazy refaulting
+diff --git a/arch/s390/net/bpf_jit_comp.c b/arch/s390/net/bpf_jit_comp.c
+index 0f37a1b635f8..95809599ebff 100644
+--- a/arch/s390/net/bpf_jit_comp.c
++++ b/arch/s390/net/bpf_jit_comp.c
+@@ -489,6 +489,24 @@ static void save_restore_regs(struct bpf_jit *jit, int op, u32 stack_depth)
+ } while (re <= last);
+ }
+
++static void bpf_skip(struct bpf_jit *jit, int size)
++{
++ if (size >= 6 && !is_valid_rel(size)) {
++ /* brcl 0xf,size */
++ EMIT6_PCREL_RIL(0xc0f4000000, size);
++ size -= 6;
++ } else if (size >= 4 && is_valid_rel(size)) {
++ /* brc 0xf,size */
++ EMIT4_PCREL(0xa7f40000, size);
++ size -= 4;
++ }
++ while (size >= 2) {
++ /* bcr 0,%0 */
++ _EMIT2(0x0700);
++ size -= 2;
++ }
++}
++
+ /*
+ * Emit function prologue
+ *
+@@ -1267,8 +1285,12 @@ static noinline int bpf_jit_insn(struct bpf_jit *jit, struct bpf_prog *fp,
+ last = (i == fp->len - 1) ? 1 : 0;
+ if (last)
+ break;
+- /* j <exit> */
+- EMIT4_PCREL(0xa7f40000, jit->exit_ip - jit->prg);
++ if (!is_first_pass(jit) && can_use_rel(jit, jit->exit_ip))
++ /* brc 0xf, <exit> */
++ EMIT4_PCREL_RIC(0xa7040000, 0xf, jit->exit_ip);
++ else
++ /* brcl 0xf, <exit> */
++ EMIT6_PCREL_RILC(0xc0040000, 0xf, jit->exit_ip);
+ break;
+ /*
+ * Branch relative (number of skipped instructions) to offset on
+@@ -1416,21 +1438,10 @@ branch_ks:
+ }
+ break;
+ branch_ku:
+- is_jmp32 = BPF_CLASS(insn->code) == BPF_JMP32;
+- /* clfi or clgfi %dst,imm */
+- EMIT6_IMM(is_jmp32 ? 0xc20f0000 : 0xc20e0000,
+- dst_reg, imm);
+- if (!is_first_pass(jit) &&
+- can_use_rel(jit, addrs[i + off + 1])) {
+- /* brc mask,off */
+- EMIT4_PCREL_RIC(0xa7040000,
+- mask >> 12, addrs[i + off + 1]);
+- } else {
+- /* brcl mask,off */
+- EMIT6_PCREL_RILC(0xc0040000,
+- mask >> 12, addrs[i + off + 1]);
+- }
+- break;
++ /* lgfi %w1,imm (load sign extend imm) */
++ src_reg = REG_1;
++ EMIT6_IMM(0xc0010000, src_reg, imm);
++ goto branch_xu;
+ branch_xs:
+ is_jmp32 = BPF_CLASS(insn->code) == BPF_JMP32;
+ if (!is_first_pass(jit) &&
+@@ -1509,7 +1520,14 @@ static bool bpf_is_new_addr_sane(struct bpf_jit *jit, int i)
+ */
+ static int bpf_set_addr(struct bpf_jit *jit, int i)
+ {
+- if (!bpf_is_new_addr_sane(jit, i))
++ int delta;
++
++ if (is_codegen_pass(jit)) {
++ delta = jit->prg - jit->addrs[i];
++ if (delta < 0)
++ bpf_skip(jit, -delta);
++ }
++ if (WARN_ON_ONCE(!bpf_is_new_addr_sane(jit, i)))
+ return -1;
+ jit->addrs[i] = jit->prg;
+ return 0;
+diff --git a/arch/x86/crypto/aes_ctrby8_avx-x86_64.S b/arch/x86/crypto/aes_ctrby8_avx-x86_64.S
+index ec437db1fa54..494a3bda8487 100644
+--- a/arch/x86/crypto/aes_ctrby8_avx-x86_64.S
++++ b/arch/x86/crypto/aes_ctrby8_avx-x86_64.S
+@@ -127,10 +127,6 @@ ddq_add_8:
+
+ /* generate a unique variable for ddq_add_x */
+
+-.macro setddq n
+- var_ddq_add = ddq_add_\n
+-.endm
+-
+ /* generate a unique variable for xmm register */
+ .macro setxdata n
+ var_xdata = %xmm\n
+@@ -140,9 +136,7 @@ ddq_add_8:
+
+ .macro club name, id
+ .altmacro
+- .if \name == DDQ_DATA
+- setddq %\id
+- .elseif \name == XDATA
++ .if \name == XDATA
+ setxdata %\id
+ .endif
+ .noaltmacro
+@@ -165,9 +159,8 @@ ddq_add_8:
+
+ .set i, 1
+ .rept (by - 1)
+- club DDQ_DATA, i
+ club XDATA, i
+- vpaddq var_ddq_add(%rip), xcounter, var_xdata
++ vpaddq (ddq_add_1 + 16 * (i - 1))(%rip), xcounter, var_xdata
+ vptest ddq_low_msk(%rip), var_xdata
+ jnz 1f
+ vpaddq ddq_high_add_1(%rip), var_xdata, var_xdata
+@@ -180,8 +173,7 @@ ddq_add_8:
+ vmovdqa 1*16(p_keys), xkeyA
+
+ vpxor xkey0, xdata0, xdata0
+- club DDQ_DATA, by
+- vpaddq var_ddq_add(%rip), xcounter, xcounter
++ vpaddq (ddq_add_1 + 16 * (by - 1))(%rip), xcounter, xcounter
+ vptest ddq_low_msk(%rip), xcounter
+ jnz 1f
+ vpaddq ddq_high_add_1(%rip), xcounter, xcounter
+diff --git a/arch/x86/crypto/aesni-intel_asm.S b/arch/x86/crypto/aesni-intel_asm.S
+index cad6e1bfa7d5..c216de287742 100644
+--- a/arch/x86/crypto/aesni-intel_asm.S
++++ b/arch/x86/crypto/aesni-intel_asm.S
+@@ -266,7 +266,7 @@ ALL_F: .octa 0xffffffffffffffffffffffffffffffff
+ PSHUFB_XMM %xmm2, %xmm0
+ movdqu %xmm0, CurCount(%arg2) # ctx_data.current_counter = iv
+
+- PRECOMPUTE \SUBKEY, %xmm1, %xmm2, %xmm3, %xmm4, %xmm5, %xmm6, %xmm7,
++ PRECOMPUTE \SUBKEY, %xmm1, %xmm2, %xmm3, %xmm4, %xmm5, %xmm6, %xmm7
+ movdqu HashKey(%arg2), %xmm13
+
+ CALC_AAD_HASH %xmm13, \AAD, \AADLEN, %xmm0, %xmm1, %xmm2, %xmm3, \
+@@ -978,7 +978,7 @@ _initial_blocks_done\@:
+ * arg1, %arg3, %arg4 are used as pointers only, not modified
+ * %r11 is the data offset value
+ */
+-.macro GHASH_4_ENCRYPT_4_PARALLEL_ENC TMP1 TMP2 TMP3 TMP4 TMP5 \
++.macro GHASH_4_ENCRYPT_4_PARALLEL_enc TMP1 TMP2 TMP3 TMP4 TMP5 \
+ TMP6 XMM0 XMM1 XMM2 XMM3 XMM4 XMM5 XMM6 XMM7 XMM8 operation
+
+ movdqa \XMM1, \XMM5
+@@ -1186,7 +1186,7 @@ aes_loop_par_enc_done\@:
+ * arg1, %arg3, %arg4 are used as pointers only, not modified
+ * %r11 is the data offset value
+ */
+-.macro GHASH_4_ENCRYPT_4_PARALLEL_DEC TMP1 TMP2 TMP3 TMP4 TMP5 \
++.macro GHASH_4_ENCRYPT_4_PARALLEL_dec TMP1 TMP2 TMP3 TMP4 TMP5 \
+ TMP6 XMM0 XMM1 XMM2 XMM3 XMM4 XMM5 XMM6 XMM7 XMM8 operation
+
+ movdqa \XMM1, \XMM5
+diff --git a/arch/x86/events/intel/uncore_snb.c b/arch/x86/events/intel/uncore_snb.c
+index 3de1065eefc4..1038e9f1e354 100644
+--- a/arch/x86/events/intel/uncore_snb.c
++++ b/arch/x86/events/intel/uncore_snb.c
+@@ -1085,6 +1085,7 @@ static struct pci_dev *tgl_uncore_get_mc_dev(void)
+ }
+
+ #define TGL_UNCORE_MMIO_IMC_MEM_OFFSET 0x10000
++#define TGL_UNCORE_PCI_IMC_MAP_SIZE 0xe000
+
+ static void tgl_uncore_imc_freerunning_init_box(struct intel_uncore_box *box)
+ {
+@@ -1112,7 +1113,7 @@ static void tgl_uncore_imc_freerunning_init_box(struct intel_uncore_box *box)
+ addr |= ((resource_size_t)mch_bar << 32);
+ #endif
+
+- box->io_addr = ioremap(addr, SNB_UNCORE_PCI_IMC_MAP_SIZE);
++ box->io_addr = ioremap(addr, TGL_UNCORE_PCI_IMC_MAP_SIZE);
+ }
+
+ static struct intel_uncore_ops tgl_uncore_imc_freerunning_ops = {
+diff --git a/arch/x86/include/asm/uaccess.h b/arch/x86/include/asm/uaccess.h
+index d8f283b9a569..d1323c73cf6d 100644
+--- a/arch/x86/include/asm/uaccess.h
++++ b/arch/x86/include/asm/uaccess.h
+@@ -314,11 +314,14 @@ do { \
+
+ #define __get_user_size(x, ptr, size, retval) \
+ do { \
++ unsigned char x_u8__; \
++ \
+ retval = 0; \
+ __chk_user_ptr(ptr); \
+ switch (size) { \
+ case 1: \
+- __get_user_asm(x, ptr, retval, "b", "=q"); \
++ __get_user_asm(x_u8__, ptr, retval, "b", "=q"); \
++ (x) = x_u8__; \
+ break; \
+ case 2: \
+ __get_user_asm(x, ptr, retval, "w", "=r"); \
+diff --git a/arch/x86/kernel/apic/io_apic.c b/arch/x86/kernel/apic/io_apic.c
+index 57447f03ee87..71c16618ec3c 100644
+--- a/arch/x86/kernel/apic/io_apic.c
++++ b/arch/x86/kernel/apic/io_apic.c
+@@ -2348,8 +2348,13 @@ static int mp_irqdomain_create(int ioapic)
+
+ static void ioapic_destroy_irqdomain(int idx)
+ {
++ struct ioapic_domain_cfg *cfg = &ioapics[idx].irqdomain_cfg;
++ struct fwnode_handle *fn = ioapics[idx].irqdomain->fwnode;
++
+ if (ioapics[idx].irqdomain) {
+ irq_domain_remove(ioapics[idx].irqdomain);
++ if (!cfg->dev)
++ irq_domain_free_fwnode(fn);
+ ioapics[idx].irqdomain = NULL;
+ }
+ }
+diff --git a/arch/x86/kernel/cpu/mce/inject.c b/arch/x86/kernel/cpu/mce/inject.c
+index 3413b41b8d55..dc28a615e340 100644
+--- a/arch/x86/kernel/cpu/mce/inject.c
++++ b/arch/x86/kernel/cpu/mce/inject.c
+@@ -511,7 +511,7 @@ static void do_inject(void)
+ */
+ if (inj_type == DFR_INT_INJ) {
+ i_mce.status |= MCI_STATUS_DEFERRED;
+- i_mce.status |= (i_mce.status & ~MCI_STATUS_UC);
++ i_mce.status &= ~MCI_STATUS_UC;
+ }
+
+ /*
+diff --git a/arch/x86/kernel/process_64.c b/arch/x86/kernel/process_64.c
+index 5ef9d8f25b0e..cf2cda72a75b 100644
+--- a/arch/x86/kernel/process_64.c
++++ b/arch/x86/kernel/process_64.c
+@@ -315,7 +315,7 @@ static unsigned long x86_fsgsbase_read_task(struct task_struct *task,
+ */
+ mutex_lock(&task->mm->context.lock);
+ ldt = task->mm->context.ldt;
+- if (unlikely(idx >= ldt->nr_entries))
++ if (unlikely(!ldt || idx >= ldt->nr_entries))
+ base = 0;
+ else
+ base = get_desc_base(ldt->entries + idx);
+diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
+index 2f24c334a938..e5b2b20a0aee 100644
+--- a/arch/x86/kernel/smpboot.c
++++ b/arch/x86/kernel/smpboot.c
+@@ -1974,6 +1974,7 @@ static bool core_set_max_freq_ratio(u64 *base_freq, u64 *turbo_freq)
+ static bool intel_set_max_freq_ratio(void)
+ {
+ u64 base_freq, turbo_freq;
++ u64 turbo_ratio;
+
+ if (slv_set_max_freq_ratio(&base_freq, &turbo_freq))
+ goto out;
+@@ -1999,15 +2000,23 @@ out:
+ /*
+ * Some hypervisors advertise X86_FEATURE_APERFMPERF
+ * but then fill all MSR's with zeroes.
++ * Some CPUs have turbo boost but don't declare any turbo ratio
++ * in MSR_TURBO_RATIO_LIMIT.
+ */
+- if (!base_freq) {
+- pr_debug("Couldn't determine cpu base frequency, necessary for scale-invariant accounting.\n");
++ if (!base_freq || !turbo_freq) {
++ pr_debug("Couldn't determine cpu base or turbo frequency, necessary for scale-invariant accounting.\n");
+ return false;
+ }
+
+- arch_turbo_freq_ratio = div_u64(turbo_freq * SCHED_CAPACITY_SCALE,
+- base_freq);
++ turbo_ratio = div_u64(turbo_freq * SCHED_CAPACITY_SCALE, base_freq);
++ if (!turbo_ratio) {
++ pr_debug("Non-zero turbo and base frequencies led to a 0 ratio.\n");
++ return false;
++ }
++
++ arch_turbo_freq_ratio = turbo_ratio;
+ arch_set_max_freq_ratio(turbo_disabled());
++
+ return true;
+ }
+
+diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
+index 7dbfc0bc738c..27c0cc61fb08 100644
+--- a/arch/x86/kvm/svm/svm.c
++++ b/arch/x86/kvm/svm/svm.c
+@@ -2509,7 +2509,7 @@ static int svm_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr)
+ !guest_cpuid_has(vcpu, X86_FEATURE_AMD_SSBD))
+ return 1;
+
+- if (data & ~kvm_spec_ctrl_valid_bits(vcpu))
++ if (kvm_spec_ctrl_test_value(data))
+ return 1;
+
+ svm->spec_ctrl = data;
+diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
+index 8fafcb2cd103..9938a7e698db 100644
+--- a/arch/x86/kvm/vmx/vmx.c
++++ b/arch/x86/kvm/vmx/vmx.c
+@@ -2015,7 +2015,7 @@ static int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
+ !guest_cpuid_has(vcpu, X86_FEATURE_SPEC_CTRL))
+ return 1;
+
+- if (data & ~kvm_spec_ctrl_valid_bits(vcpu))
++ if (kvm_spec_ctrl_test_value(data))
+ return 1;
+
+ vmx->spec_ctrl = data;
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index 738a558c915c..51ccb4dfaad2 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -10573,28 +10573,32 @@ bool kvm_arch_no_poll(struct kvm_vcpu *vcpu)
+ }
+ EXPORT_SYMBOL_GPL(kvm_arch_no_poll);
+
+-u64 kvm_spec_ctrl_valid_bits(struct kvm_vcpu *vcpu)
++
++int kvm_spec_ctrl_test_value(u64 value)
+ {
+- uint64_t bits = SPEC_CTRL_IBRS | SPEC_CTRL_STIBP | SPEC_CTRL_SSBD;
++ /*
++ * test that setting IA32_SPEC_CTRL to given value
++ * is allowed by the host processor
++ */
+
+- /* The STIBP bit doesn't fault even if it's not advertised */
+- if (!guest_cpuid_has(vcpu, X86_FEATURE_SPEC_CTRL) &&
+- !guest_cpuid_has(vcpu, X86_FEATURE_AMD_IBRS))
+- bits &= ~(SPEC_CTRL_IBRS | SPEC_CTRL_STIBP);
+- if (!boot_cpu_has(X86_FEATURE_SPEC_CTRL) &&
+- !boot_cpu_has(X86_FEATURE_AMD_IBRS))
+- bits &= ~(SPEC_CTRL_IBRS | SPEC_CTRL_STIBP);
++ u64 saved_value;
++ unsigned long flags;
++ int ret = 0;
+
+- if (!guest_cpuid_has(vcpu, X86_FEATURE_SPEC_CTRL_SSBD) &&
+- !guest_cpuid_has(vcpu, X86_FEATURE_AMD_SSBD))
+- bits &= ~SPEC_CTRL_SSBD;
+- if (!boot_cpu_has(X86_FEATURE_SPEC_CTRL_SSBD) &&
+- !boot_cpu_has(X86_FEATURE_AMD_SSBD))
+- bits &= ~SPEC_CTRL_SSBD;
++ local_irq_save(flags);
+
+- return bits;
++ if (rdmsrl_safe(MSR_IA32_SPEC_CTRL, &saved_value))
++ ret = 1;
++ else if (wrmsrl_safe(MSR_IA32_SPEC_CTRL, value))
++ ret = 1;
++ else
++ wrmsrl(MSR_IA32_SPEC_CTRL, saved_value);
++
++ local_irq_restore(flags);
++
++ return ret;
+ }
+-EXPORT_SYMBOL_GPL(kvm_spec_ctrl_valid_bits);
++EXPORT_SYMBOL_GPL(kvm_spec_ctrl_test_value);
+
+ EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_exit);
+ EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_fast_mmio);
+diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
+index b968acc0516f..73c62b5d2765 100644
+--- a/arch/x86/kvm/x86.h
++++ b/arch/x86/kvm/x86.h
+@@ -357,6 +357,6 @@ static inline bool kvm_dr7_valid(u64 data)
+
+ void kvm_load_guest_xsave_state(struct kvm_vcpu *vcpu);
+ void kvm_load_host_xsave_state(struct kvm_vcpu *vcpu);
+-u64 kvm_spec_ctrl_valid_bits(struct kvm_vcpu *vcpu);
++int kvm_spec_ctrl_test_value(u64 value);
+
+ #endif
+diff --git a/block/blk-iocost.c b/block/blk-iocost.c
+index ef193389fffe..b5a9cfcd75e9 100644
+--- a/block/blk-iocost.c
++++ b/block/blk-iocost.c
+@@ -1374,7 +1374,7 @@ static void ioc_timer_fn(struct timer_list *timer)
+ * should have woken up in the last period and expire idle iocgs.
+ */
+ list_for_each_entry_safe(iocg, tiocg, &ioc->active_iocgs, active_list) {
+- if (!waitqueue_active(&iocg->waitq) && iocg->abs_vdebt &&
++ if (!waitqueue_active(&iocg->waitq) && !iocg->abs_vdebt &&
+ !iocg_is_idle(iocg))
+ continue;
+
+diff --git a/block/blk-zoned.c b/block/blk-zoned.c
+index f87956e0dcaf..0dd17a6d0098 100644
+--- a/block/blk-zoned.c
++++ b/block/blk-zoned.c
+@@ -478,6 +478,9 @@ int blk_revalidate_disk_zones(struct gendisk *disk)
+ if (WARN_ON_ONCE(!queue_is_mq(q)))
+ return -EIO;
+
++ if (!get_capacity(disk))
++ return -EIO;
++
+ /*
+ * Ensure that all memory allocations in this context are done as if
+ * GFP_NOIO was specified.
+diff --git a/drivers/acpi/acpica/exprep.c b/drivers/acpi/acpica/exprep.c
+index a4e306690a21..4a0f03157e08 100644
+--- a/drivers/acpi/acpica/exprep.c
++++ b/drivers/acpi/acpica/exprep.c
+@@ -473,10 +473,6 @@ acpi_status acpi_ex_prep_field_value(struct acpi_create_field_info *info)
+ (u8)access_byte_width;
+ }
+ }
+- /* An additional reference for the container */
+-
+- acpi_ut_add_reference(obj_desc->field.region_obj);
+-
+ ACPI_DEBUG_PRINT((ACPI_DB_BFIELD,
+ "RegionField: BitOff %X, Off %X, Gran %X, Region %p\n",
+ obj_desc->field.start_field_bit_offset,
+diff --git a/drivers/acpi/acpica/utdelete.c b/drivers/acpi/acpica/utdelete.c
+index c365faf4e6cd..4c0d4e434196 100644
+--- a/drivers/acpi/acpica/utdelete.c
++++ b/drivers/acpi/acpica/utdelete.c
+@@ -568,11 +568,6 @@ acpi_ut_update_object_reference(union acpi_operand_object *object, u16 action)
+ next_object = object->buffer_field.buffer_obj;
+ break;
+
+- case ACPI_TYPE_LOCAL_REGION_FIELD:
+-
+- next_object = object->field.region_obj;
+- break;
+-
+ case ACPI_TYPE_LOCAL_BANK_FIELD:
+
+ next_object = object->bank_field.bank_obj;
+@@ -613,6 +608,7 @@ acpi_ut_update_object_reference(union acpi_operand_object *object, u16 action)
+ }
+ break;
+
++ case ACPI_TYPE_LOCAL_REGION_FIELD:
+ case ACPI_TYPE_REGION:
+ default:
+
+diff --git a/drivers/base/dd.c b/drivers/base/dd.c
+index 94037be7f5d7..60bd0a9b9918 100644
+--- a/drivers/base/dd.c
++++ b/drivers/base/dd.c
+@@ -276,7 +276,7 @@ static void deferred_probe_timeout_work_func(struct work_struct *work)
+
+ list_for_each_entry_safe(private, p, &deferred_probe_pending_list, deferred_probe)
+ dev_info(private->device, "deferred probe pending");
+- wake_up(&probe_timeout_waitqueue);
++ wake_up_all(&probe_timeout_waitqueue);
+ }
+ static DECLARE_DELAYED_WORK(deferred_probe_timeout_work, deferred_probe_timeout_work_func);
+
+@@ -487,7 +487,8 @@ static int really_probe(struct device *dev, struct device_driver *drv)
+ drv->bus->name, __func__, drv->name, dev_name(dev));
+ if (!list_empty(&dev->devres_head)) {
+ dev_crit(dev, "Resources present before probing\n");
+- return -EBUSY;
++ ret = -EBUSY;
++ goto done;
+ }
+
+ re_probe:
+@@ -608,7 +609,7 @@ pinctrl_bind_failed:
+ ret = 0;
+ done:
+ atomic_dec(&probe_count);
+- wake_up(&probe_waitqueue);
++ wake_up_all(&probe_waitqueue);
+ return ret;
+ }
+
+diff --git a/drivers/base/firmware_loader/fallback_platform.c b/drivers/base/firmware_loader/fallback_platform.c
+index c88c745590fe..723ff8bcf3e7 100644
+--- a/drivers/base/firmware_loader/fallback_platform.c
++++ b/drivers/base/firmware_loader/fallback_platform.c
+@@ -25,7 +25,10 @@ int firmware_fallback_platform(struct fw_priv *fw_priv, enum fw_opt opt_flags)
+ if (rc)
+ return rc; /* rc == -ENOENT when the fw was not found */
+
+- fw_priv->data = vmalloc(size);
++ if (fw_priv->data && size > fw_priv->allocated_size)
++ return -ENOMEM;
++ if (!fw_priv->data)
++ fw_priv->data = vmalloc(size);
+ if (!fw_priv->data)
+ return -ENOMEM;
+
+diff --git a/drivers/block/loop.c b/drivers/block/loop.c
+index 418bb4621255..6b36fc2f4edc 100644
+--- a/drivers/block/loop.c
++++ b/drivers/block/loop.c
+@@ -2333,6 +2333,8 @@ static void __exit loop_exit(void)
+
+ range = max_loop ? max_loop << part_shift : 1UL << MINORBITS;
+
++ mutex_lock(&loop_ctl_mutex);
++
+ idr_for_each(&loop_index_idr, &loop_exit_cb, NULL);
+ idr_destroy(&loop_index_idr);
+
+@@ -2340,6 +2342,8 @@ static void __exit loop_exit(void)
+ unregister_blkdev(LOOP_MAJOR, "loop");
+
+ misc_deregister(&loop_misc);
++
++ mutex_unlock(&loop_ctl_mutex);
+ }
+
+ module_init(loop_init);
+diff --git a/drivers/bluetooth/btmrvl_sdio.c b/drivers/bluetooth/btmrvl_sdio.c
+index 0f3a020703ab..4c7978cb1786 100644
+--- a/drivers/bluetooth/btmrvl_sdio.c
++++ b/drivers/bluetooth/btmrvl_sdio.c
+@@ -328,7 +328,7 @@ static const struct btmrvl_sdio_device btmrvl_sdio_sd8897 = {
+
+ static const struct btmrvl_sdio_device btmrvl_sdio_sd8977 = {
+ .helper = NULL,
+- .firmware = "mrvl/sd8977_uapsta.bin",
++ .firmware = "mrvl/sdsd8977_combo_v2.bin",
+ .reg = &btmrvl_reg_8977,
+ .support_pscan_win_report = true,
+ .sd_blksz_fw_dl = 256,
+@@ -346,7 +346,7 @@ static const struct btmrvl_sdio_device btmrvl_sdio_sd8987 = {
+
+ static const struct btmrvl_sdio_device btmrvl_sdio_sd8997 = {
+ .helper = NULL,
+- .firmware = "mrvl/sd8997_uapsta.bin",
++ .firmware = "mrvl/sdsd8997_combo_v4.bin",
+ .reg = &btmrvl_reg_8997,
+ .support_pscan_win_report = true,
+ .sd_blksz_fw_dl = 256,
+@@ -1831,6 +1831,6 @@ MODULE_FIRMWARE("mrvl/sd8787_uapsta.bin");
+ MODULE_FIRMWARE("mrvl/sd8797_uapsta.bin");
+ MODULE_FIRMWARE("mrvl/sd8887_uapsta.bin");
+ MODULE_FIRMWARE("mrvl/sd8897_uapsta.bin");
+-MODULE_FIRMWARE("mrvl/sd8977_uapsta.bin");
++MODULE_FIRMWARE("mrvl/sdsd8977_combo_v2.bin");
+ MODULE_FIRMWARE("mrvl/sd8987_uapsta.bin");
+-MODULE_FIRMWARE("mrvl/sd8997_uapsta.bin");
++MODULE_FIRMWARE("mrvl/sdsd8997_combo_v4.bin");
+diff --git a/drivers/bluetooth/btmtksdio.c b/drivers/bluetooth/btmtksdio.c
+index 519788c442ca..11494cd2a982 100644
+--- a/drivers/bluetooth/btmtksdio.c
++++ b/drivers/bluetooth/btmtksdio.c
+@@ -685,7 +685,7 @@ static int mtk_setup_firmware(struct hci_dev *hdev, const char *fwname)
+ const u8 *fw_ptr;
+ size_t fw_size;
+ int err, dlen;
+- u8 flag;
++ u8 flag, param;
+
+ err = request_firmware(&fw, fwname, &hdev->dev);
+ if (err < 0) {
+@@ -693,6 +693,20 @@ static int mtk_setup_firmware(struct hci_dev *hdev, const char *fwname)
+ return err;
+ }
+
++ /* Power on data RAM the firmware relies on. */
++ param = 1;
++ wmt_params.op = MTK_WMT_FUNC_CTRL;
++ wmt_params.flag = 3;
++ wmt_params.dlen = sizeof(param);
++ wmt_params.data = &param;
++ wmt_params.status = NULL;
++
++ err = mtk_hci_wmt_sync(hdev, &wmt_params);
++ if (err < 0) {
++ bt_dev_err(hdev, "Failed to power on data RAM (%d)", err);
++ return err;
++ }
++
+ fw_ptr = fw->data;
+ fw_size = fw->size;
+
+diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c
+index 3d9313c746f3..0c77240fd7dd 100644
+--- a/drivers/bluetooth/btusb.c
++++ b/drivers/bluetooth/btusb.c
+@@ -1631,6 +1631,7 @@ static int btusb_setup_csr(struct hci_dev *hdev)
+ {
+ struct hci_rp_read_local_version *rp;
+ struct sk_buff *skb;
++ bool is_fake = false;
+
+ BT_DBG("%s", hdev->name);
+
+@@ -1650,18 +1651,69 @@ static int btusb_setup_csr(struct hci_dev *hdev)
+
+ rp = (struct hci_rp_read_local_version *)skb->data;
+
+- /* Detect controllers which aren't real CSR ones. */
++ /* Detect a wide host of Chinese controllers that aren't CSR.
++ *
++ * Known fake bcdDevices: 0x0100, 0x0134, 0x1915, 0x2520, 0x7558, 0x8891
++ *
++ * The main thing they have in common is that these are really popular low-cost
++ * options that support newer Bluetooth versions but rely on heavy VID/PID
++ * squatting of this poor old Bluetooth 1.1 device. Even sold as such.
++ *
++ * We detect actual CSR devices by checking that the HCI manufacturer code
++ * is Cambridge Silicon Radio (10) and ensuring that LMP sub-version and
++ * HCI rev values always match. As they both store the firmware number.
++ */
+ if (le16_to_cpu(rp->manufacturer) != 10 ||
+- le16_to_cpu(rp->lmp_subver) == 0x0c5c) {
++ le16_to_cpu(rp->hci_rev) != le16_to_cpu(rp->lmp_subver))
++ is_fake = true;
++
++ /* Known legit CSR firmware build numbers and their supported BT versions:
++ * - 1.1 (0x1) -> 0x0073, 0x020d, 0x033c, 0x034e
++ * - 1.2 (0x2) -> 0x04d9, 0x0529
++ * - 2.0 (0x3) -> 0x07a6, 0x07ad, 0x0c5c
++ * - 2.1 (0x4) -> 0x149c, 0x1735, 0x1899 (0x1899 is a BlueCore4-External)
++ * - 4.0 (0x6) -> 0x1d86, 0x2031, 0x22bb
++ *
++ * e.g. Real CSR dongles with LMP subversion 0x73 are old enough that
++ * support BT 1.1 only; so it's a dead giveaway when some
++ * third-party BT 4.0 dongle reuses it.
++ */
++ else if (le16_to_cpu(rp->lmp_subver) <= 0x034e &&
++ le16_to_cpu(rp->hci_ver) > BLUETOOTH_VER_1_1)
++ is_fake = true;
++
++ else if (le16_to_cpu(rp->lmp_subver) <= 0x0529 &&
++ le16_to_cpu(rp->hci_ver) > BLUETOOTH_VER_1_2)
++ is_fake = true;
++
++ else if (le16_to_cpu(rp->lmp_subver) <= 0x0c5c &&
++ le16_to_cpu(rp->hci_ver) > BLUETOOTH_VER_2_0)
++ is_fake = true;
++
++ else if (le16_to_cpu(rp->lmp_subver) <= 0x1899 &&
++ le16_to_cpu(rp->hci_ver) > BLUETOOTH_VER_2_1)
++ is_fake = true;
++
++ else if (le16_to_cpu(rp->lmp_subver) <= 0x22bb &&
++ le16_to_cpu(rp->hci_ver) > BLUETOOTH_VER_4_0)
++ is_fake = true;
++
++ if (is_fake) {
++ bt_dev_warn(hdev, "CSR: Unbranded CSR clone detected; adding workarounds...");
++
++ /* Generally these clones have big discrepancies between
++ * advertised features and what's actually supported.
++ * Probably will need to be expanded in the future;
++ * without these the controller will lock up.
++ */
++ set_bit(HCI_QUIRK_BROKEN_STORED_LINK_KEY, &hdev->quirks);
++ set_bit(HCI_QUIRK_BROKEN_ERR_DATA_REPORTING, &hdev->quirks);
++
+ /* Clear the reset quirk since this is not an actual
+ * early Bluetooth 1.1 device from CSR.
+ */
+ clear_bit(HCI_QUIRK_RESET_ON_CLOSE, &hdev->quirks);
+-
+- /* These fake CSR controllers have all a broken
+- * stored link key handling and so just disable it.
+- */
+- set_bit(HCI_QUIRK_BROKEN_STORED_LINK_KEY, &hdev->quirks);
++ clear_bit(HCI_QUIRK_SIMULTANEOUS_DISCOVERY, &hdev->quirks);
+ }
+
+ kfree_skb(skb);
+@@ -2826,7 +2878,7 @@ static int btusb_mtk_setup_firmware(struct hci_dev *hdev, const char *fwname)
+ const u8 *fw_ptr;
+ size_t fw_size;
+ int err, dlen;
+- u8 flag;
++ u8 flag, param;
+
+ err = request_firmware(&fw, fwname, &hdev->dev);
+ if (err < 0) {
+@@ -2834,6 +2886,20 @@ static int btusb_mtk_setup_firmware(struct hci_dev *hdev, const char *fwname)
+ return err;
+ }
+
++ /* Power on data RAM the firmware relies on. */
++ param = 1;
++ wmt_params.op = BTMTK_WMT_FUNC_CTRL;
++ wmt_params.flag = 3;
++ wmt_params.dlen = sizeof(param);
++ wmt_params.data = &param;
++ wmt_params.status = NULL;
++
++ err = btusb_mtk_hci_wmt_sync(hdev, &wmt_params);
++ if (err < 0) {
++ bt_dev_err(hdev, "Failed to power on data RAM (%d)", err);
++ return err;
++ }
++
+ fw_ptr = fw->data;
+ fw_size = fw->size;
+
+@@ -3891,11 +3957,13 @@ static int btusb_probe(struct usb_interface *intf,
+ if (bcdDevice < 0x117)
+ set_bit(HCI_QUIRK_RESET_ON_CLOSE, &hdev->quirks);
+
++ /* This must be set first in case we disable it for fakes */
++ set_bit(HCI_QUIRK_SIMULTANEOUS_DISCOVERY, &hdev->quirks);
++
+ /* Fake CSR devices with broken commands */
+- if (bcdDevice <= 0x100 || bcdDevice == 0x134)
++ if (le16_to_cpu(udev->descriptor.idVendor) == 0x0a12 &&
++ le16_to_cpu(udev->descriptor.idProduct) == 0x0001)
+ hdev->setup = btusb_setup_csr;
+-
+- set_bit(HCI_QUIRK_SIMULTANEOUS_DISCOVERY, &hdev->quirks);
+ }
+
+ if (id->driver_info & BTUSB_SNIFFER) {
+diff --git a/drivers/bluetooth/hci_h5.c b/drivers/bluetooth/hci_h5.c
+index 106c110efe56..0ce3d9fe0286 100644
+--- a/drivers/bluetooth/hci_h5.c
++++ b/drivers/bluetooth/hci_h5.c
+@@ -793,7 +793,7 @@ static int h5_serdev_probe(struct serdev_device *serdev)
+ if (!h5)
+ return -ENOMEM;
+
+- set_bit(HCI_UART_RESET_ON_INIT, &h5->serdev_hu.flags);
++ set_bit(HCI_UART_RESET_ON_INIT, &h5->serdev_hu.hdev_flags);
+
+ h5->hu = &h5->serdev_hu;
+ h5->serdev_hu.serdev = serdev;
+diff --git a/drivers/bluetooth/hci_qca.c b/drivers/bluetooth/hci_qca.c
+index 0b1036e5e963..6a3c80e1b19c 100644
+--- a/drivers/bluetooth/hci_qca.c
++++ b/drivers/bluetooth/hci_qca.c
+@@ -45,7 +45,7 @@
+ #define HCI_MAX_IBS_SIZE 10
+
+ #define IBS_WAKE_RETRANS_TIMEOUT_MS 100
+-#define IBS_BTSOC_TX_IDLE_TIMEOUT_MS 40
++#define IBS_BTSOC_TX_IDLE_TIMEOUT_MS 200
+ #define IBS_HOST_TX_IDLE_TIMEOUT_MS 2000
+ #define CMD_TRANS_TIMEOUT_MS 100
+ #define MEMDUMP_TIMEOUT_MS 8000
+@@ -71,7 +71,8 @@ enum qca_flags {
+ QCA_DROP_VENDOR_EVENT,
+ QCA_SUSPENDING,
+ QCA_MEMDUMP_COLLECTION,
+- QCA_HW_ERROR_EVENT
++ QCA_HW_ERROR_EVENT,
++ QCA_SSR_TRIGGERED
+ };
+
+
+@@ -854,6 +855,13 @@ static int qca_enqueue(struct hci_uart *hu, struct sk_buff *skb)
+ BT_DBG("hu %p qca enq skb %p tx_ibs_state %d", hu, skb,
+ qca->tx_ibs_state);
+
++ if (test_bit(QCA_SSR_TRIGGERED, &qca->flags)) {
++ /* As SSR is in progress, ignore the packets */
++ bt_dev_dbg(hu->hdev, "SSR is in progress");
++ kfree_skb(skb);
++ return 0;
++ }
++
+ /* Prepend skb with frame type */
+ memcpy(skb_push(skb, 1), &hci_skb_pkt_type(skb), 1);
+
+@@ -973,8 +981,11 @@ static void qca_controller_memdump(struct work_struct *work)
+ while ((skb = skb_dequeue(&qca->rx_memdump_q))) {
+
+ mutex_lock(&qca->hci_memdump_lock);
+- /* Skip processing the received packets if timeout detected. */
+- if (qca->memdump_state == QCA_MEMDUMP_TIMEOUT) {
++ /* Skip processing the received packets if timeout detected
++ * or memdump collection completed.
++ */
++ if (qca->memdump_state == QCA_MEMDUMP_TIMEOUT ||
++ qca->memdump_state == QCA_MEMDUMP_COLLECTED) {
+ mutex_unlock(&qca->hci_memdump_lock);
+ return;
+ }
+@@ -1085,6 +1096,7 @@ static int qca_controller_memdump_event(struct hci_dev *hdev,
+ struct hci_uart *hu = hci_get_drvdata(hdev);
+ struct qca_data *qca = hu->priv;
+
++ set_bit(QCA_SSR_TRIGGERED, &qca->flags);
+ skb_queue_tail(&qca->rx_memdump_q, skb);
+ queue_work(qca->workqueue, &qca->ctrl_memdump_evt);
+
+@@ -1442,9 +1454,8 @@ static void qca_hw_error(struct hci_dev *hdev, u8 code)
+ {
+ struct hci_uart *hu = hci_get_drvdata(hdev);
+ struct qca_data *qca = hu->priv;
+- struct qca_memdump_data *qca_memdump = qca->qca_memdump;
+- char *memdump_buf = NULL;
+
++ set_bit(QCA_SSR_TRIGGERED, &qca->flags);
+ set_bit(QCA_HW_ERROR_EVENT, &qca->flags);
+ bt_dev_info(hdev, "mem_dump_status: %d", qca->memdump_state);
+
+@@ -1466,19 +1477,23 @@ static void qca_hw_error(struct hci_dev *hdev, u8 code)
+ qca_wait_for_dump_collection(hdev);
+ }
+
++ mutex_lock(&qca->hci_memdump_lock);
+ if (qca->memdump_state != QCA_MEMDUMP_COLLECTED) {
+ bt_dev_err(hu->hdev, "clearing allocated memory due to memdump timeout");
+- mutex_lock(&qca->hci_memdump_lock);
+- if (qca_memdump)
+- memdump_buf = qca_memdump->memdump_buf_head;
+- vfree(memdump_buf);
+- kfree(qca_memdump);
+- qca->qca_memdump = NULL;
++ if (qca->qca_memdump) {
++ vfree(qca->qca_memdump->memdump_buf_head);
++ kfree(qca->qca_memdump);
++ qca->qca_memdump = NULL;
++ }
+ qca->memdump_state = QCA_MEMDUMP_TIMEOUT;
+ cancel_delayed_work(&qca->ctrl_memdump_timeout);
+- skb_queue_purge(&qca->rx_memdump_q);
+- mutex_unlock(&qca->hci_memdump_lock);
++ }
++ mutex_unlock(&qca->hci_memdump_lock);
++
++ if (qca->memdump_state == QCA_MEMDUMP_TIMEOUT ||
++ qca->memdump_state == QCA_MEMDUMP_COLLECTED) {
+ cancel_work_sync(&qca->ctrl_memdump_evt);
++ skb_queue_purge(&qca->rx_memdump_q);
+ }
+
+ clear_bit(QCA_HW_ERROR_EVENT, &qca->flags);
+@@ -1489,10 +1504,30 @@ static void qca_cmd_timeout(struct hci_dev *hdev)
+ struct hci_uart *hu = hci_get_drvdata(hdev);
+ struct qca_data *qca = hu->priv;
+
+- if (qca->memdump_state == QCA_MEMDUMP_IDLE)
++ set_bit(QCA_SSR_TRIGGERED, &qca->flags);
++ if (qca->memdump_state == QCA_MEMDUMP_IDLE) {
++ set_bit(QCA_MEMDUMP_COLLECTION, &qca->flags);
+ qca_send_crashbuffer(hu);
+- else
+- bt_dev_info(hdev, "Dump collection is in process");
++ qca_wait_for_dump_collection(hdev);
++ } else if (qca->memdump_state == QCA_MEMDUMP_COLLECTING) {
++ /* Let us wait here until memory dump collected or
++ * memory dump timer expired.
++ */
++ bt_dev_info(hdev, "waiting for dump to complete");
++ qca_wait_for_dump_collection(hdev);
++ }
++
++ mutex_lock(&qca->hci_memdump_lock);
++ if (qca->memdump_state != QCA_MEMDUMP_COLLECTED) {
++ qca->memdump_state = QCA_MEMDUMP_TIMEOUT;
++ if (!test_bit(QCA_HW_ERROR_EVENT, &qca->flags)) {
++ /* Inject hw error event to reset the device
++ * and driver.
++ */
++ hci_reset_dev(hu->hdev);
++ }
++ }
++ mutex_unlock(&qca->hci_memdump_lock);
+ }
+
+ static int qca_wcn3990_init(struct hci_uart *hu)
+@@ -1598,11 +1633,15 @@ static int qca_setup(struct hci_uart *hu)
+ bt_dev_info(hdev, "setting up %s",
+ qca_is_wcn399x(soc_type) ? "wcn399x" : "ROME");
+
++ qca->memdump_state = QCA_MEMDUMP_IDLE;
++
+ retry:
+ ret = qca_power_on(hdev);
+ if (ret)
+ return ret;
+
++ clear_bit(QCA_SSR_TRIGGERED, &qca->flags);
++
+ if (qca_is_wcn399x(soc_type)) {
+ set_bit(HCI_QUIRK_USE_BDADDR_PROPERTY, &hdev->quirks);
+
+@@ -1739,9 +1778,6 @@ static void qca_power_shutdown(struct hci_uart *hu)
+ qca_flush(hu);
+ spin_unlock_irqrestore(&qca->hci_ibs_lock, flags);
+
+- hu->hdev->hw_error = NULL;
+- hu->hdev->cmd_timeout = NULL;
+-
+ /* Non-serdev device usually is powered by external power
+ * and don't need additional action in driver for power down
+ */
+@@ -1763,6 +1799,9 @@ static int qca_power_off(struct hci_dev *hdev)
+ struct qca_data *qca = hu->priv;
+ enum qca_btsoc_type soc_type = qca_soc_type(hu);
+
++ hu->hdev->hw_error = NULL;
++ hu->hdev->cmd_timeout = NULL;
++
+ /* Stop sending shutdown command if soc crashes. */
+ if (qca_is_wcn399x(soc_type)
+ && qca->memdump_state == QCA_MEMDUMP_IDLE) {
+@@ -1770,7 +1809,6 @@ static int qca_power_off(struct hci_dev *hdev)
+ usleep_range(8000, 10000);
+ }
+
+- qca->memdump_state = QCA_MEMDUMP_IDLE;
+ qca_power_shutdown(hu);
+ return 0;
+ }
+@@ -1909,17 +1947,17 @@ static int qca_serdev_probe(struct serdev_device *serdev)
+ }
+
+ qcadev->susclk = devm_clk_get_optional(&serdev->dev, NULL);
+- if (!qcadev->susclk) {
++ if (IS_ERR(qcadev->susclk)) {
+ dev_warn(&serdev->dev, "failed to acquire clk\n");
+- } else {
+- err = clk_set_rate(qcadev->susclk, SUSCLK_RATE_32KHZ);
+- if (err)
+- return err;
+-
+- err = clk_prepare_enable(qcadev->susclk);
+- if (err)
+- return err;
++ return PTR_ERR(qcadev->susclk);
+ }
++ err = clk_set_rate(qcadev->susclk, SUSCLK_RATE_32KHZ);
++ if (err)
++ return err;
++
++ err = clk_prepare_enable(qcadev->susclk);
++ if (err)
++ return err;
+
+ err = hci_uart_register_device(&qcadev->serdev_hu, &qca_proto);
+ if (err) {
+@@ -1991,8 +2029,6 @@ static int __maybe_unused qca_suspend(struct device *dev)
+
+ qca->tx_ibs_state = HCI_IBS_TX_ASLEEP;
+ qca->ibs_sent_slps++;
+-
+- qca_wq_serial_tx_clock_vote_off(&qca->ws_tx_vote_off);
+ break;
+
+ case HCI_IBS_TX_ASLEEP:
+@@ -2020,8 +2056,10 @@ static int __maybe_unused qca_suspend(struct device *dev)
+ qca->rx_ibs_state == HCI_IBS_RX_ASLEEP,
+ msecs_to_jiffies(IBS_BTSOC_TX_IDLE_TIMEOUT_MS));
+
+- if (ret > 0)
++ if (ret > 0) {
++ qca_wq_serial_tx_clock_vote_off(&qca->ws_tx_vote_off);
+ return 0;
++ }
+
+ if (ret == 0)
+ ret = -ETIMEDOUT;
+diff --git a/drivers/bluetooth/hci_serdev.c b/drivers/bluetooth/hci_serdev.c
+index 4652896d4990..ad2f26cb2622 100644
+--- a/drivers/bluetooth/hci_serdev.c
++++ b/drivers/bluetooth/hci_serdev.c
+@@ -357,7 +357,8 @@ void hci_uart_unregister_device(struct hci_uart *hu)
+ struct hci_dev *hdev = hu->hdev;
+
+ clear_bit(HCI_UART_PROTO_READY, &hu->flags);
+- hci_unregister_dev(hdev);
++ if (test_bit(HCI_UART_REGISTERED, &hu->flags))
++ hci_unregister_dev(hdev);
+ hci_free_dev(hdev);
+
+ cancel_work_sync(&hu->write_work);
+diff --git a/drivers/bus/ti-sysc.c b/drivers/bus/ti-sysc.c
+index 3b0417a01494..ae4cf4667633 100644
+--- a/drivers/bus/ti-sysc.c
++++ b/drivers/bus/ti-sysc.c
+@@ -1402,6 +1402,10 @@ static const struct sysc_revision_quirk sysc_revision_quirks[] = {
+ SYSC_QUIRK_SWSUP_SIDLE | SYSC_QUIRK_SWSUP_MSTANDBY),
+ SYSC_QUIRK("tptc", 0, 0, -ENODEV, -ENODEV, 0x40007c00, 0xffffffff,
+ SYSC_QUIRK_SWSUP_SIDLE | SYSC_QUIRK_SWSUP_MSTANDBY),
++ SYSC_QUIRK("usb_host_hs", 0, 0, 0x10, 0x14, 0x50700100, 0xffffffff,
++ SYSC_QUIRK_SWSUP_SIDLE | SYSC_QUIRK_SWSUP_MSTANDBY),
++ SYSC_QUIRK("usb_host_hs", 0, 0, 0x10, -ENODEV, 0x50700101, 0xffffffff,
++ SYSC_QUIRK_SWSUP_SIDLE | SYSC_QUIRK_SWSUP_MSTANDBY),
+ SYSC_QUIRK("usb_otg_hs", 0, 0x400, 0x404, 0x408, 0x00000050,
+ 0xffffffff, SYSC_QUIRK_SWSUP_SIDLE | SYSC_QUIRK_SWSUP_MSTANDBY),
+ SYSC_QUIRK("usb_otg_hs", 0, 0, 0x10, -ENODEV, 0x4ea2080d, 0xffffffff,
+@@ -1473,8 +1477,6 @@ static const struct sysc_revision_quirk sysc_revision_quirks[] = {
+ SYSC_QUIRK("tpcc", 0, 0, -ENODEV, -ENODEV, 0x40014c00, 0xffffffff, 0),
+ SYSC_QUIRK("usbhstll", 0, 0, 0x10, 0x14, 0x00000004, 0xffffffff, 0),
+ SYSC_QUIRK("usbhstll", 0, 0, 0x10, 0x14, 0x00000008, 0xffffffff, 0),
+- SYSC_QUIRK("usb_host_hs", 0, 0, 0x10, 0x14, 0x50700100, 0xffffffff, 0),
+- SYSC_QUIRK("usb_host_hs", 0, 0, 0x10, -ENODEV, 0x50700101, 0xffffffff, 0),
+ SYSC_QUIRK("venc", 0x58003000, 0, -ENODEV, -ENODEV, 0x00000002, 0xffffffff, 0),
+ SYSC_QUIRK("vfpe", 0, 0, 0x104, -ENODEV, 0x4d001200, 0xffffffff, 0),
+ #endif
+diff --git a/drivers/char/agp/intel-gtt.c b/drivers/char/agp/intel-gtt.c
+index 3d42fc4290bc..585451a46e44 100644
+--- a/drivers/char/agp/intel-gtt.c
++++ b/drivers/char/agp/intel-gtt.c
+@@ -304,8 +304,10 @@ static int intel_gtt_setup_scratch_page(void)
+ if (intel_private.needs_dmar) {
+ dma_addr = pci_map_page(intel_private.pcidev, page, 0,
+ PAGE_SIZE, PCI_DMA_BIDIRECTIONAL);
+- if (pci_dma_mapping_error(intel_private.pcidev, dma_addr))
++ if (pci_dma_mapping_error(intel_private.pcidev, dma_addr)) {
++ __free_page(page);
+ return -EINVAL;
++ }
+
+ intel_private.scratch_page_dma = dma_addr;
+ } else
+diff --git a/drivers/char/tpm/tpm-chip.c b/drivers/char/tpm/tpm-chip.c
+index 8c77e88012e9..ddaeceb7e109 100644
+--- a/drivers/char/tpm/tpm-chip.c
++++ b/drivers/char/tpm/tpm-chip.c
+@@ -386,13 +386,8 @@ struct tpm_chip *tpm_chip_alloc(struct device *pdev,
+ chip->cdev.owner = THIS_MODULE;
+ chip->cdevs.owner = THIS_MODULE;
+
+- chip->work_space.context_buf = kzalloc(PAGE_SIZE, GFP_KERNEL);
+- if (!chip->work_space.context_buf) {
+- rc = -ENOMEM;
+- goto out;
+- }
+- chip->work_space.session_buf = kzalloc(PAGE_SIZE, GFP_KERNEL);
+- if (!chip->work_space.session_buf) {
++ rc = tpm2_init_space(&chip->work_space, TPM2_SPACE_BUFFER_SIZE);
++ if (rc) {
+ rc = -ENOMEM;
+ goto out;
+ }
+diff --git a/drivers/char/tpm/tpm.h b/drivers/char/tpm/tpm.h
+index 0fbcede241ea..947d1db0a5cc 100644
+--- a/drivers/char/tpm/tpm.h
++++ b/drivers/char/tpm/tpm.h
+@@ -59,6 +59,9 @@ enum tpm_addr {
+
+ #define TPM_TAG_RQU_COMMAND 193
+
++/* TPM2 specific constants. */
++#define TPM2_SPACE_BUFFER_SIZE 16384 /* 16 kB */
++
+ struct stclear_flags_t {
+ __be16 tag;
+ u8 deactivated;
+@@ -228,7 +231,7 @@ unsigned long tpm2_calc_ordinal_duration(struct tpm_chip *chip, u32 ordinal);
+ int tpm2_probe(struct tpm_chip *chip);
+ int tpm2_get_cc_attrs_tbl(struct tpm_chip *chip);
+ int tpm2_find_cc(struct tpm_chip *chip, u32 cc);
+-int tpm2_init_space(struct tpm_space *space);
++int tpm2_init_space(struct tpm_space *space, unsigned int buf_size);
+ void tpm2_del_space(struct tpm_chip *chip, struct tpm_space *space);
+ void tpm2_flush_space(struct tpm_chip *chip);
+ int tpm2_prepare_space(struct tpm_chip *chip, struct tpm_space *space, u8 *cmd,
+diff --git a/drivers/char/tpm/tpm2-space.c b/drivers/char/tpm/tpm2-space.c
+index 982d341d8837..784b8b3cb903 100644
+--- a/drivers/char/tpm/tpm2-space.c
++++ b/drivers/char/tpm/tpm2-space.c
+@@ -38,18 +38,21 @@ static void tpm2_flush_sessions(struct tpm_chip *chip, struct tpm_space *space)
+ }
+ }
+
+-int tpm2_init_space(struct tpm_space *space)
++int tpm2_init_space(struct tpm_space *space, unsigned int buf_size)
+ {
+- space->context_buf = kzalloc(PAGE_SIZE, GFP_KERNEL);
++ space->context_buf = kzalloc(buf_size, GFP_KERNEL);
+ if (!space->context_buf)
+ return -ENOMEM;
+
+- space->session_buf = kzalloc(PAGE_SIZE, GFP_KERNEL);
++ space->session_buf = kzalloc(buf_size, GFP_KERNEL);
+ if (space->session_buf == NULL) {
+ kfree(space->context_buf);
++ /* Prevent caller getting a dangling pointer. */
++ space->context_buf = NULL;
+ return -ENOMEM;
+ }
+
++ space->buf_size = buf_size;
+ return 0;
+ }
+
+@@ -311,8 +314,10 @@ int tpm2_prepare_space(struct tpm_chip *chip, struct tpm_space *space, u8 *cmd,
+ sizeof(space->context_tbl));
+ memcpy(&chip->work_space.session_tbl, &space->session_tbl,
+ sizeof(space->session_tbl));
+- memcpy(chip->work_space.context_buf, space->context_buf, PAGE_SIZE);
+- memcpy(chip->work_space.session_buf, space->session_buf, PAGE_SIZE);
++ memcpy(chip->work_space.context_buf, space->context_buf,
++ space->buf_size);
++ memcpy(chip->work_space.session_buf, space->session_buf,
++ space->buf_size);
+
+ rc = tpm2_load_space(chip);
+ if (rc) {
+@@ -492,7 +497,7 @@ static int tpm2_save_space(struct tpm_chip *chip)
+ continue;
+
+ rc = tpm2_save_context(chip, space->context_tbl[i],
+- space->context_buf, PAGE_SIZE,
++ space->context_buf, space->buf_size,
+ &offset);
+ if (rc == -ENOENT) {
+ space->context_tbl[i] = 0;
+@@ -509,9 +514,8 @@ static int tpm2_save_space(struct tpm_chip *chip)
+ continue;
+
+ rc = tpm2_save_context(chip, space->session_tbl[i],
+- space->session_buf, PAGE_SIZE,
++ space->session_buf, space->buf_size,
+ &offset);
+-
+ if (rc == -ENOENT) {
+ /* handle error saving session, just forget it */
+ space->session_tbl[i] = 0;
+@@ -557,8 +561,10 @@ int tpm2_commit_space(struct tpm_chip *chip, struct tpm_space *space,
+ sizeof(space->context_tbl));
+ memcpy(&space->session_tbl, &chip->work_space.session_tbl,
+ sizeof(space->session_tbl));
+- memcpy(space->context_buf, chip->work_space.context_buf, PAGE_SIZE);
+- memcpy(space->session_buf, chip->work_space.session_buf, PAGE_SIZE);
++ memcpy(space->context_buf, chip->work_space.context_buf,
++ space->buf_size);
++ memcpy(space->session_buf, chip->work_space.session_buf,
++ space->buf_size);
+
+ return 0;
+ out:
+diff --git a/drivers/char/tpm/tpmrm-dev.c b/drivers/char/tpm/tpmrm-dev.c
+index 7a0a7051a06f..eef0fb06ea83 100644
+--- a/drivers/char/tpm/tpmrm-dev.c
++++ b/drivers/char/tpm/tpmrm-dev.c
+@@ -21,7 +21,7 @@ static int tpmrm_open(struct inode *inode, struct file *file)
+ if (priv == NULL)
+ return -ENOMEM;
+
+- rc = tpm2_init_space(&priv->space);
++ rc = tpm2_init_space(&priv->space, TPM2_SPACE_BUFFER_SIZE);
+ if (rc) {
+ kfree(priv);
+ return -ENOMEM;
+diff --git a/drivers/clk/bcm/clk-bcm63xx-gate.c b/drivers/clk/bcm/clk-bcm63xx-gate.c
+index 98e884957db8..911a29bd744e 100644
+--- a/drivers/clk/bcm/clk-bcm63xx-gate.c
++++ b/drivers/clk/bcm/clk-bcm63xx-gate.c
+@@ -155,6 +155,7 @@ static int clk_bcm63xx_probe(struct platform_device *pdev)
+
+ for (entry = table; entry->name; entry++)
+ maxbit = max_t(u8, maxbit, entry->bit);
++ maxbit++;
+
+ hw = devm_kzalloc(&pdev->dev, struct_size(hw, data.hws, maxbit),
+ GFP_KERNEL);
+diff --git a/drivers/clk/clk-scmi.c b/drivers/clk/clk-scmi.c
+index c491f5de0f3f..c754dfbb73fd 100644
+--- a/drivers/clk/clk-scmi.c
++++ b/drivers/clk/clk-scmi.c
+@@ -103,6 +103,8 @@ static const struct clk_ops scmi_clk_ops = {
+ static int scmi_clk_ops_init(struct device *dev, struct scmi_clk *sclk)
+ {
+ int ret;
++ unsigned long min_rate, max_rate;
++
+ struct clk_init_data init = {
+ .flags = CLK_GET_RATE_NOCACHE,
+ .num_parents = 0,
+@@ -112,9 +114,23 @@ static int scmi_clk_ops_init(struct device *dev, struct scmi_clk *sclk)
+
+ sclk->hw.init = &init;
+ ret = devm_clk_hw_register(dev, &sclk->hw);
+- if (!ret)
+- clk_hw_set_rate_range(&sclk->hw, sclk->info->range.min_rate,
+- sclk->info->range.max_rate);
++ if (ret)
++ return ret;
++
++ if (sclk->info->rate_discrete) {
++ int num_rates = sclk->info->list.num_rates;
++
++ if (num_rates <= 0)
++ return -EINVAL;
++
++ min_rate = sclk->info->list.rates[0];
++ max_rate = sclk->info->list.rates[num_rates - 1];
++ } else {
++ min_rate = sclk->info->range.min_rate;
++ max_rate = sclk->info->range.max_rate;
++ }
++
++ clk_hw_set_rate_range(&sclk->hw, min_rate, max_rate);
+ return ret;
+ }
+
+diff --git a/drivers/clk/qcom/gcc-sc7180.c b/drivers/clk/qcom/gcc-sc7180.c
+index 73380525cb09..b3704b685cca 100644
+--- a/drivers/clk/qcom/gcc-sc7180.c
++++ b/drivers/clk/qcom/gcc-sc7180.c
+@@ -1041,7 +1041,7 @@ static struct clk_branch gcc_disp_gpll0_clk_src = {
+ .hw = &gpll0.clkr.hw,
+ },
+ .num_parents = 1,
+- .ops = &clk_branch2_ops,
++ .ops = &clk_branch2_aon_ops,
+ },
+ },
+ };
+diff --git a/drivers/clk/qcom/gcc-sdm845.c b/drivers/clk/qcom/gcc-sdm845.c
+index f6ce888098be..90f7febaf528 100644
+--- a/drivers/clk/qcom/gcc-sdm845.c
++++ b/drivers/clk/qcom/gcc-sdm845.c
+@@ -1,6 +1,6 @@
+ // SPDX-License-Identifier: GPL-2.0
+ /*
+- * Copyright (c) 2018, The Linux Foundation. All rights reserved.
++ * Copyright (c) 2018, 2020, The Linux Foundation. All rights reserved.
+ */
+
+ #include <linux/kernel.h>
+@@ -1344,7 +1344,7 @@ static struct clk_branch gcc_disp_gpll0_clk_src = {
+ "gpll0",
+ },
+ .num_parents = 1,
+- .ops = &clk_branch2_ops,
++ .ops = &clk_branch2_aon_ops,
+ },
+ },
+ };
+diff --git a/drivers/cpufreq/Kconfig.arm b/drivers/cpufreq/Kconfig.arm
+index 15c1a1231516..c65241bfa512 100644
+--- a/drivers/cpufreq/Kconfig.arm
++++ b/drivers/cpufreq/Kconfig.arm
+@@ -41,6 +41,7 @@ config ARM_ARMADA_37XX_CPUFREQ
+ config ARM_ARMADA_8K_CPUFREQ
+ tristate "Armada 8K CPUFreq driver"
+ depends on ARCH_MVEBU && CPUFREQ_DT
++ select ARMADA_AP_CPU_CLK
+ help
+ This enables the CPUFreq driver support for Marvell
+ Armada8k SOCs.
+diff --git a/drivers/cpufreq/armada-37xx-cpufreq.c b/drivers/cpufreq/armada-37xx-cpufreq.c
+index aa0f06dec959..df1c941260d1 100644
+--- a/drivers/cpufreq/armada-37xx-cpufreq.c
++++ b/drivers/cpufreq/armada-37xx-cpufreq.c
+@@ -456,6 +456,7 @@ static int __init armada37xx_cpufreq_driver_init(void)
+ /* Now that everything is setup, enable the DVFS at hardware level */
+ armada37xx_cpufreq_enable_dvfs(nb_pm_base);
+
++ memset(&pdata, 0, sizeof(pdata));
+ pdata.suspend = armada37xx_cpufreq_suspend;
+ pdata.resume = armada37xx_cpufreq_resume;
+
+diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c
+index d03f250f68e4..e3e94a8bb499 100644
+--- a/drivers/cpufreq/cpufreq.c
++++ b/drivers/cpufreq/cpufreq.c
+@@ -621,6 +621,24 @@ static struct cpufreq_governor *find_governor(const char *str_governor)
+ return NULL;
+ }
+
++static struct cpufreq_governor *get_governor(const char *str_governor)
++{
++ struct cpufreq_governor *t;
++
++ mutex_lock(&cpufreq_governor_mutex);
++ t = find_governor(str_governor);
++ if (!t)
++ goto unlock;
++
++ if (!try_module_get(t->owner))
++ t = NULL;
++
++unlock:
++ mutex_unlock(&cpufreq_governor_mutex);
++
++ return t;
++}
++
+ static unsigned int cpufreq_parse_policy(char *str_governor)
+ {
+ if (!strncasecmp(str_governor, "performance", CPUFREQ_NAME_LEN))
+@@ -640,28 +658,14 @@ static struct cpufreq_governor *cpufreq_parse_governor(char *str_governor)
+ {
+ struct cpufreq_governor *t;
+
+- mutex_lock(&cpufreq_governor_mutex);
++ t = get_governor(str_governor);
++ if (t)
++ return t;
+
+- t = find_governor(str_governor);
+- if (!t) {
+- int ret;
+-
+- mutex_unlock(&cpufreq_governor_mutex);
+-
+- ret = request_module("cpufreq_%s", str_governor);
+- if (ret)
+- return NULL;
+-
+- mutex_lock(&cpufreq_governor_mutex);
+-
+- t = find_governor(str_governor);
+- }
+- if (t && !try_module_get(t->owner))
+- t = NULL;
+-
+- mutex_unlock(&cpufreq_governor_mutex);
++ if (request_module("cpufreq_%s", str_governor))
++ return NULL;
+
+- return t;
++ return get_governor(str_governor);
+ }
+
+ /**
+@@ -815,12 +819,14 @@ static ssize_t show_scaling_available_governors(struct cpufreq_policy *policy,
+ goto out;
+ }
+
++ mutex_lock(&cpufreq_governor_mutex);
+ for_each_governor(t) {
+ if (i >= (ssize_t) ((PAGE_SIZE / sizeof(char))
+ - (CPUFREQ_NAME_LEN + 2)))
+- goto out;
++ break;
+ i += scnprintf(&buf[i], CPUFREQ_NAME_PLEN, "%s ", t->name);
+ }
++ mutex_unlock(&cpufreq_governor_mutex);
+ out:
+ i += sprintf(&buf[i], "\n");
+ return i;
+@@ -1058,15 +1064,17 @@ static int cpufreq_init_policy(struct cpufreq_policy *policy)
+ struct cpufreq_governor *def_gov = cpufreq_default_governor();
+ struct cpufreq_governor *gov = NULL;
+ unsigned int pol = CPUFREQ_POLICY_UNKNOWN;
++ int ret;
+
+ if (has_target()) {
+ /* Update policy governor to the one used before hotplug. */
+- gov = find_governor(policy->last_governor);
++ gov = get_governor(policy->last_governor);
+ if (gov) {
+ pr_debug("Restoring governor %s for cpu %d\n",
+ policy->governor->name, policy->cpu);
+ } else if (def_gov) {
+ gov = def_gov;
++ __module_get(gov->owner);
+ } else {
+ return -ENODATA;
+ }
+@@ -1089,7 +1097,11 @@ static int cpufreq_init_policy(struct cpufreq_policy *policy)
+ return -ENODATA;
+ }
+
+- return cpufreq_set_policy(policy, gov, pol);
++ ret = cpufreq_set_policy(policy, gov, pol);
++ if (gov)
++ module_put(gov->owner);
++
++ return ret;
+ }
+
+ static int cpufreq_add_policy_cpu(struct cpufreq_policy *policy, unsigned int cpu)
+diff --git a/drivers/crypto/caam/caamalg.c b/drivers/crypto/caam/caamalg.c
+index b2f9882bc010..bf90a4fcabd1 100644
+--- a/drivers/crypto/caam/caamalg.c
++++ b/drivers/crypto/caam/caamalg.c
+@@ -838,7 +838,7 @@ static int xts_skcipher_setkey(struct crypto_skcipher *skcipher, const u8 *key,
+ u32 *desc;
+
+ if (keylen != 2 * AES_MIN_KEY_SIZE && keylen != 2 * AES_MAX_KEY_SIZE) {
+- dev_err(jrdev, "key size mismatch\n");
++ dev_dbg(jrdev, "key size mismatch\n");
+ return -EINVAL;
+ }
+
+diff --git a/drivers/crypto/caam/caamalg_qi.c b/drivers/crypto/caam/caamalg_qi.c
+index 27e36bdf6163..315d53499ce8 100644
+--- a/drivers/crypto/caam/caamalg_qi.c
++++ b/drivers/crypto/caam/caamalg_qi.c
+@@ -728,7 +728,7 @@ static int xts_skcipher_setkey(struct crypto_skcipher *skcipher, const u8 *key,
+ int ret = 0;
+
+ if (keylen != 2 * AES_MIN_KEY_SIZE && keylen != 2 * AES_MAX_KEY_SIZE) {
+- dev_err(jrdev, "key size mismatch\n");
++ dev_dbg(jrdev, "key size mismatch\n");
+ return -EINVAL;
+ }
+
+diff --git a/drivers/crypto/caam/caamalg_qi2.c b/drivers/crypto/caam/caamalg_qi2.c
+index 28669cbecf77..e1b6bc6ef091 100644
+--- a/drivers/crypto/caam/caamalg_qi2.c
++++ b/drivers/crypto/caam/caamalg_qi2.c
+@@ -1058,7 +1058,7 @@ static int xts_skcipher_setkey(struct crypto_skcipher *skcipher, const u8 *key,
+ u32 *desc;
+
+ if (keylen != 2 * AES_MIN_KEY_SIZE && keylen != 2 * AES_MAX_KEY_SIZE) {
+- dev_err(dev, "key size mismatch\n");
++ dev_dbg(dev, "key size mismatch\n");
+ return -EINVAL;
+ }
+
+diff --git a/drivers/crypto/cavium/cpt/cptvf_algs.c b/drivers/crypto/cavium/cpt/cptvf_algs.c
+index 1be1adffff1d..2e4bf90c5798 100644
+--- a/drivers/crypto/cavium/cpt/cptvf_algs.c
++++ b/drivers/crypto/cavium/cpt/cptvf_algs.c
+@@ -200,6 +200,7 @@ static inline int cvm_enc_dec(struct skcipher_request *req, u32 enc)
+ int status;
+
+ memset(req_info, 0, sizeof(struct cpt_request_info));
++ req_info->may_sleep = (req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP) != 0;
+ memset(fctx, 0, sizeof(struct fc_context));
+ create_input_list(req, enc, enc_iv_len);
+ create_output_list(req, enc_iv_len);
+diff --git a/drivers/crypto/cavium/cpt/cptvf_reqmanager.c b/drivers/crypto/cavium/cpt/cptvf_reqmanager.c
+index 7a24019356b5..e343249c8d05 100644
+--- a/drivers/crypto/cavium/cpt/cptvf_reqmanager.c
++++ b/drivers/crypto/cavium/cpt/cptvf_reqmanager.c
+@@ -133,7 +133,7 @@ static inline int setup_sgio_list(struct cpt_vf *cptvf,
+
+ /* Setup gather (input) components */
+ g_sz_bytes = ((req->incnt + 3) / 4) * sizeof(struct sglist_component);
+- info->gather_components = kzalloc(g_sz_bytes, GFP_KERNEL);
++ info->gather_components = kzalloc(g_sz_bytes, req->may_sleep ? GFP_KERNEL : GFP_ATOMIC);
+ if (!info->gather_components) {
+ ret = -ENOMEM;
+ goto scatter_gather_clean;
+@@ -150,7 +150,7 @@ static inline int setup_sgio_list(struct cpt_vf *cptvf,
+
+ /* Setup scatter (output) components */
+ s_sz_bytes = ((req->outcnt + 3) / 4) * sizeof(struct sglist_component);
+- info->scatter_components = kzalloc(s_sz_bytes, GFP_KERNEL);
++ info->scatter_components = kzalloc(s_sz_bytes, req->may_sleep ? GFP_KERNEL : GFP_ATOMIC);
+ if (!info->scatter_components) {
+ ret = -ENOMEM;
+ goto scatter_gather_clean;
+@@ -167,7 +167,7 @@ static inline int setup_sgio_list(struct cpt_vf *cptvf,
+
+ /* Create and initialize DPTR */
+ info->dlen = g_sz_bytes + s_sz_bytes + SG_LIST_HDR_SIZE;
+- info->in_buffer = kzalloc(info->dlen, GFP_KERNEL);
++ info->in_buffer = kzalloc(info->dlen, req->may_sleep ? GFP_KERNEL : GFP_ATOMIC);
+ if (!info->in_buffer) {
+ ret = -ENOMEM;
+ goto scatter_gather_clean;
+@@ -195,7 +195,7 @@ static inline int setup_sgio_list(struct cpt_vf *cptvf,
+ }
+
+ /* Create and initialize RPTR */
+- info->out_buffer = kzalloc(COMPLETION_CODE_SIZE, GFP_KERNEL);
++ info->out_buffer = kzalloc(COMPLETION_CODE_SIZE, req->may_sleep ? GFP_KERNEL : GFP_ATOMIC);
+ if (!info->out_buffer) {
+ ret = -ENOMEM;
+ goto scatter_gather_clean;
+@@ -421,7 +421,7 @@ int process_request(struct cpt_vf *cptvf, struct cpt_request_info *req)
+ struct cpt_vq_command vq_cmd;
+ union cpt_inst_s cptinst;
+
+- info = kzalloc(sizeof(*info), GFP_KERNEL);
++ info = kzalloc(sizeof(*info), req->may_sleep ? GFP_KERNEL : GFP_ATOMIC);
+ if (unlikely(!info)) {
+ dev_err(&pdev->dev, "Unable to allocate memory for info_buffer\n");
+ return -ENOMEM;
+@@ -443,7 +443,7 @@ int process_request(struct cpt_vf *cptvf, struct cpt_request_info *req)
+ * Get buffer for union cpt_res_s response
+ * structure and its physical address
+ */
+- info->completion_addr = kzalloc(sizeof(union cpt_res_s), GFP_KERNEL);
++ info->completion_addr = kzalloc(sizeof(union cpt_res_s), req->may_sleep ? GFP_KERNEL : GFP_ATOMIC);
+ if (unlikely(!info->completion_addr)) {
+ dev_err(&pdev->dev, "Unable to allocate memory for completion_addr\n");
+ ret = -ENOMEM;
+diff --git a/drivers/crypto/cavium/cpt/request_manager.h b/drivers/crypto/cavium/cpt/request_manager.h
+index 3514b082eca7..1e8dd9ebcc17 100644
+--- a/drivers/crypto/cavium/cpt/request_manager.h
++++ b/drivers/crypto/cavium/cpt/request_manager.h
+@@ -62,6 +62,8 @@ struct cpt_request_info {
+ union ctrl_info ctrl; /* User control information */
+ struct cptvf_request req; /* Request Information (Core specific) */
+
++ bool may_sleep;
++
+ struct buf_ptr in[MAX_BUF_CNT];
+ struct buf_ptr out[MAX_BUF_CNT];
+
+diff --git a/drivers/crypto/ccp/ccp-dev.h b/drivers/crypto/ccp/ccp-dev.h
+index 3f68262d9ab4..87a34d91fdf7 100644
+--- a/drivers/crypto/ccp/ccp-dev.h
++++ b/drivers/crypto/ccp/ccp-dev.h
+@@ -469,6 +469,7 @@ struct ccp_sg_workarea {
+ unsigned int sg_used;
+
+ struct scatterlist *dma_sg;
++ struct scatterlist *dma_sg_head;
+ struct device *dma_dev;
+ unsigned int dma_count;
+ enum dma_data_direction dma_dir;
+diff --git a/drivers/crypto/ccp/ccp-ops.c b/drivers/crypto/ccp/ccp-ops.c
+index 422193690fd4..64112c736810 100644
+--- a/drivers/crypto/ccp/ccp-ops.c
++++ b/drivers/crypto/ccp/ccp-ops.c
+@@ -63,7 +63,7 @@ static u32 ccp_gen_jobid(struct ccp_device *ccp)
+ static void ccp_sg_free(struct ccp_sg_workarea *wa)
+ {
+ if (wa->dma_count)
+- dma_unmap_sg(wa->dma_dev, wa->dma_sg, wa->nents, wa->dma_dir);
++ dma_unmap_sg(wa->dma_dev, wa->dma_sg_head, wa->nents, wa->dma_dir);
+
+ wa->dma_count = 0;
+ }
+@@ -92,6 +92,7 @@ static int ccp_init_sg_workarea(struct ccp_sg_workarea *wa, struct device *dev,
+ return 0;
+
+ wa->dma_sg = sg;
++ wa->dma_sg_head = sg;
+ wa->dma_dev = dev;
+ wa->dma_dir = dma_dir;
+ wa->dma_count = dma_map_sg(dev, sg, wa->nents, dma_dir);
+@@ -104,14 +105,28 @@ static int ccp_init_sg_workarea(struct ccp_sg_workarea *wa, struct device *dev,
+ static void ccp_update_sg_workarea(struct ccp_sg_workarea *wa, unsigned int len)
+ {
+ unsigned int nbytes = min_t(u64, len, wa->bytes_left);
++ unsigned int sg_combined_len = 0;
+
+ if (!wa->sg)
+ return;
+
+ wa->sg_used += nbytes;
+ wa->bytes_left -= nbytes;
+- if (wa->sg_used == wa->sg->length) {
+- wa->sg = sg_next(wa->sg);
++ if (wa->sg_used == sg_dma_len(wa->dma_sg)) {
++ /* Advance to the next DMA scatterlist entry */
++ wa->dma_sg = sg_next(wa->dma_sg);
++
++ /* In the case that the DMA mapped scatterlist has entries
++ * that have been merged, the non-DMA mapped scatterlist
++ * must be advanced multiple times for each merged entry.
++ * This ensures that the current non-DMA mapped entry
++ * corresponds to the current DMA mapped entry.
++ */
++ do {
++ sg_combined_len += wa->sg->length;
++ wa->sg = sg_next(wa->sg);
++ } while (wa->sg_used > sg_combined_len);
++
+ wa->sg_used = 0;
+ }
+ }
+@@ -299,7 +314,7 @@ static unsigned int ccp_queue_buf(struct ccp_data *data, unsigned int from)
+ /* Update the structures and generate the count */
+ buf_count = 0;
+ while (sg_wa->bytes_left && (buf_count < dm_wa->length)) {
+- nbytes = min(sg_wa->sg->length - sg_wa->sg_used,
++ nbytes = min(sg_dma_len(sg_wa->dma_sg) - sg_wa->sg_used,
+ dm_wa->length - buf_count);
+ nbytes = min_t(u64, sg_wa->bytes_left, nbytes);
+
+@@ -331,11 +346,11 @@ static void ccp_prepare_data(struct ccp_data *src, struct ccp_data *dst,
+ * and destination. The resulting len values will always be <= UINT_MAX
+ * because the dma length is an unsigned int.
+ */
+- sg_src_len = sg_dma_len(src->sg_wa.sg) - src->sg_wa.sg_used;
++ sg_src_len = sg_dma_len(src->sg_wa.dma_sg) - src->sg_wa.sg_used;
+ sg_src_len = min_t(u64, src->sg_wa.bytes_left, sg_src_len);
+
+ if (dst) {
+- sg_dst_len = sg_dma_len(dst->sg_wa.sg) - dst->sg_wa.sg_used;
++ sg_dst_len = sg_dma_len(dst->sg_wa.dma_sg) - dst->sg_wa.sg_used;
+ sg_dst_len = min_t(u64, src->sg_wa.bytes_left, sg_dst_len);
+ op_len = min(sg_src_len, sg_dst_len);
+ } else {
+@@ -365,7 +380,7 @@ static void ccp_prepare_data(struct ccp_data *src, struct ccp_data *dst,
+ /* Enough data in the sg element, but we need to
+ * adjust for any previously copied data
+ */
+- op->src.u.dma.address = sg_dma_address(src->sg_wa.sg);
++ op->src.u.dma.address = sg_dma_address(src->sg_wa.dma_sg);
+ op->src.u.dma.offset = src->sg_wa.sg_used;
+ op->src.u.dma.length = op_len & ~(block_size - 1);
+
+@@ -386,7 +401,7 @@ static void ccp_prepare_data(struct ccp_data *src, struct ccp_data *dst,
+ /* Enough room in the sg element, but we need to
+ * adjust for any previously used area
+ */
+- op->dst.u.dma.address = sg_dma_address(dst->sg_wa.sg);
++ op->dst.u.dma.address = sg_dma_address(dst->sg_wa.dma_sg);
+ op->dst.u.dma.offset = dst->sg_wa.sg_used;
+ op->dst.u.dma.length = op->src.u.dma.length;
+ }
+@@ -2028,7 +2043,7 @@ ccp_run_passthru_cmd(struct ccp_cmd_queue *cmd_q, struct ccp_cmd *cmd)
+ dst.sg_wa.sg_used = 0;
+ for (i = 1; i <= src.sg_wa.dma_count; i++) {
+ if (!dst.sg_wa.sg ||
+- (dst.sg_wa.sg->length < src.sg_wa.sg->length)) {
++ (sg_dma_len(dst.sg_wa.sg) < sg_dma_len(src.sg_wa.sg))) {
+ ret = -EINVAL;
+ goto e_dst;
+ }
+@@ -2054,8 +2069,8 @@ ccp_run_passthru_cmd(struct ccp_cmd_queue *cmd_q, struct ccp_cmd *cmd)
+ goto e_dst;
+ }
+
+- dst.sg_wa.sg_used += src.sg_wa.sg->length;
+- if (dst.sg_wa.sg_used == dst.sg_wa.sg->length) {
++ dst.sg_wa.sg_used += sg_dma_len(src.sg_wa.sg);
++ if (dst.sg_wa.sg_used == sg_dma_len(dst.sg_wa.sg)) {
+ dst.sg_wa.sg = sg_next(dst.sg_wa.sg);
+ dst.sg_wa.sg_used = 0;
+ }
+diff --git a/drivers/crypto/ccree/cc_cipher.c b/drivers/crypto/ccree/cc_cipher.c
+index a84335328f37..89f7661f0dce 100644
+--- a/drivers/crypto/ccree/cc_cipher.c
++++ b/drivers/crypto/ccree/cc_cipher.c
+@@ -159,7 +159,6 @@ static int cc_cipher_init(struct crypto_tfm *tfm)
+ skcipher_alg.base);
+ struct device *dev = drvdata_to_dev(cc_alg->drvdata);
+ unsigned int max_key_buf_size = cc_alg->skcipher_alg.max_keysize;
+- int rc = 0;
+
+ dev_dbg(dev, "Initializing context @%p for %s\n", ctx_p,
+ crypto_tfm_alg_name(tfm));
+@@ -171,10 +170,19 @@ static int cc_cipher_init(struct crypto_tfm *tfm)
+ ctx_p->flow_mode = cc_alg->flow_mode;
+ ctx_p->drvdata = cc_alg->drvdata;
+
++ if (ctx_p->cipher_mode == DRV_CIPHER_ESSIV) {
++ /* Alloc hash tfm for essiv */
++ ctx_p->shash_tfm = crypto_alloc_shash("sha256-generic", 0, 0);
++ if (IS_ERR(ctx_p->shash_tfm)) {
++ dev_err(dev, "Error allocating hash tfm for ESSIV.\n");
++ return PTR_ERR(ctx_p->shash_tfm);
++ }
++ }
++
+ /* Allocate key buffer, cache line aligned */
+ ctx_p->user.key = kmalloc(max_key_buf_size, GFP_KERNEL);
+ if (!ctx_p->user.key)
+- return -ENOMEM;
++ goto free_shash;
+
+ dev_dbg(dev, "Allocated key buffer in context. key=@%p\n",
+ ctx_p->user.key);
+@@ -186,21 +194,19 @@ static int cc_cipher_init(struct crypto_tfm *tfm)
+ if (dma_mapping_error(dev, ctx_p->user.key_dma_addr)) {
+ dev_err(dev, "Mapping Key %u B at va=%pK for DMA failed\n",
+ max_key_buf_size, ctx_p->user.key);
+- return -ENOMEM;
++ goto free_key;
+ }
+ dev_dbg(dev, "Mapped key %u B at va=%pK to dma=%pad\n",
+ max_key_buf_size, ctx_p->user.key, &ctx_p->user.key_dma_addr);
+
+- if (ctx_p->cipher_mode == DRV_CIPHER_ESSIV) {
+- /* Alloc hash tfm for essiv */
+- ctx_p->shash_tfm = crypto_alloc_shash("sha256-generic", 0, 0);
+- if (IS_ERR(ctx_p->shash_tfm)) {
+- dev_err(dev, "Error allocating hash tfm for ESSIV.\n");
+- return PTR_ERR(ctx_p->shash_tfm);
+- }
+- }
++ return 0;
+
+- return rc;
++free_key:
++ kfree(ctx_p->user.key);
++free_shash:
++ crypto_free_shash(ctx_p->shash_tfm);
++
++ return -ENOMEM;
+ }
+
+ static void cc_cipher_exit(struct crypto_tfm *tfm)
+diff --git a/drivers/crypto/hisilicon/sec/sec_algs.c b/drivers/crypto/hisilicon/sec/sec_algs.c
+index c27e7160d2df..4ad4ffd90cee 100644
+--- a/drivers/crypto/hisilicon/sec/sec_algs.c
++++ b/drivers/crypto/hisilicon/sec/sec_algs.c
+@@ -175,7 +175,8 @@ static int sec_alloc_and_fill_hw_sgl(struct sec_hw_sgl **sec_sgl,
+ dma_addr_t *psec_sgl,
+ struct scatterlist *sgl,
+ int count,
+- struct sec_dev_info *info)
++ struct sec_dev_info *info,
++ gfp_t gfp)
+ {
+ struct sec_hw_sgl *sgl_current = NULL;
+ struct sec_hw_sgl *sgl_next;
+@@ -190,7 +191,7 @@ static int sec_alloc_and_fill_hw_sgl(struct sec_hw_sgl **sec_sgl,
+ sge_index = i % SEC_MAX_SGE_NUM;
+ if (sge_index == 0) {
+ sgl_next = dma_pool_zalloc(info->hw_sgl_pool,
+- GFP_KERNEL, &sgl_next_dma);
++ gfp, &sgl_next_dma);
+ if (!sgl_next) {
+ ret = -ENOMEM;
+ goto err_free_hw_sgls;
+@@ -545,14 +546,14 @@ void sec_alg_callback(struct sec_bd_info *resp, void *shadow)
+ }
+
+ static int sec_alg_alloc_and_calc_split_sizes(int length, size_t **split_sizes,
+- int *steps)
++ int *steps, gfp_t gfp)
+ {
+ size_t *sizes;
+ int i;
+
+ /* Split into suitable sized blocks */
+ *steps = roundup(length, SEC_REQ_LIMIT) / SEC_REQ_LIMIT;
+- sizes = kcalloc(*steps, sizeof(*sizes), GFP_KERNEL);
++ sizes = kcalloc(*steps, sizeof(*sizes), gfp);
+ if (!sizes)
+ return -ENOMEM;
+
+@@ -568,7 +569,7 @@ static int sec_map_and_split_sg(struct scatterlist *sgl, size_t *split_sizes,
+ int steps, struct scatterlist ***splits,
+ int **splits_nents,
+ int sgl_len_in,
+- struct device *dev)
++ struct device *dev, gfp_t gfp)
+ {
+ int ret, count;
+
+@@ -576,12 +577,12 @@ static int sec_map_and_split_sg(struct scatterlist *sgl, size_t *split_sizes,
+ if (!count)
+ return -EINVAL;
+
+- *splits = kcalloc(steps, sizeof(struct scatterlist *), GFP_KERNEL);
++ *splits = kcalloc(steps, sizeof(struct scatterlist *), gfp);
+ if (!*splits) {
+ ret = -ENOMEM;
+ goto err_unmap_sg;
+ }
+- *splits_nents = kcalloc(steps, sizeof(int), GFP_KERNEL);
++ *splits_nents = kcalloc(steps, sizeof(int), gfp);
+ if (!*splits_nents) {
+ ret = -ENOMEM;
+ goto err_free_splits;
+@@ -589,7 +590,7 @@ static int sec_map_and_split_sg(struct scatterlist *sgl, size_t *split_sizes,
+
+ /* output the scatter list before and after this */
+ ret = sg_split(sgl, count, 0, steps, split_sizes,
+- *splits, *splits_nents, GFP_KERNEL);
++ *splits, *splits_nents, gfp);
+ if (ret) {
+ ret = -ENOMEM;
+ goto err_free_splits_nents;
+@@ -630,13 +631,13 @@ static struct sec_request_el
+ int el_size, bool different_dest,
+ struct scatterlist *sgl_in, int n_ents_in,
+ struct scatterlist *sgl_out, int n_ents_out,
+- struct sec_dev_info *info)
++ struct sec_dev_info *info, gfp_t gfp)
+ {
+ struct sec_request_el *el;
+ struct sec_bd_info *req;
+ int ret;
+
+- el = kzalloc(sizeof(*el), GFP_KERNEL);
++ el = kzalloc(sizeof(*el), gfp);
+ if (!el)
+ return ERR_PTR(-ENOMEM);
+ el->el_length = el_size;
+@@ -668,7 +669,7 @@ static struct sec_request_el
+ el->sgl_in = sgl_in;
+
+ ret = sec_alloc_and_fill_hw_sgl(&el->in, &el->dma_in, el->sgl_in,
+- n_ents_in, info);
++ n_ents_in, info, gfp);
+ if (ret)
+ goto err_free_el;
+
+@@ -679,7 +680,7 @@ static struct sec_request_el
+ el->sgl_out = sgl_out;
+ ret = sec_alloc_and_fill_hw_sgl(&el->out, &el->dma_out,
+ el->sgl_out,
+- n_ents_out, info);
++ n_ents_out, info, gfp);
+ if (ret)
+ goto err_free_hw_sgl_in;
+
+@@ -720,6 +721,7 @@ static int sec_alg_skcipher_crypto(struct skcipher_request *skreq,
+ int *splits_out_nents = NULL;
+ struct sec_request_el *el, *temp;
+ bool split = skreq->src != skreq->dst;
++ gfp_t gfp = skreq->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP ? GFP_KERNEL : GFP_ATOMIC;
+
+ mutex_init(&sec_req->lock);
+ sec_req->req_base = &skreq->base;
+@@ -728,13 +730,13 @@ static int sec_alg_skcipher_crypto(struct skcipher_request *skreq,
+ sec_req->len_in = sg_nents(skreq->src);
+
+ ret = sec_alg_alloc_and_calc_split_sizes(skreq->cryptlen, &split_sizes,
+- &steps);
++ &steps, gfp);
+ if (ret)
+ return ret;
+ sec_req->num_elements = steps;
+ ret = sec_map_and_split_sg(skreq->src, split_sizes, steps, &splits_in,
+ &splits_in_nents, sec_req->len_in,
+- info->dev);
++ info->dev, gfp);
+ if (ret)
+ goto err_free_split_sizes;
+
+@@ -742,7 +744,7 @@ static int sec_alg_skcipher_crypto(struct skcipher_request *skreq,
+ sec_req->len_out = sg_nents(skreq->dst);
+ ret = sec_map_and_split_sg(skreq->dst, split_sizes, steps,
+ &splits_out, &splits_out_nents,
+- sec_req->len_out, info->dev);
++ sec_req->len_out, info->dev, gfp);
+ if (ret)
+ goto err_unmap_in_sg;
+ }
+@@ -775,7 +777,7 @@ static int sec_alg_skcipher_crypto(struct skcipher_request *skreq,
+ splits_in[i], splits_in_nents[i],
+ split ? splits_out[i] : NULL,
+ split ? splits_out_nents[i] : 0,
+- info);
++ info, gfp);
+ if (IS_ERR(el)) {
+ ret = PTR_ERR(el);
+ goto err_free_elements;
+diff --git a/drivers/crypto/qat/qat_common/qat_algs.c b/drivers/crypto/qat/qat_common/qat_algs.c
+index e14d3dd291f0..1b050391c0c9 100644
+--- a/drivers/crypto/qat/qat_common/qat_algs.c
++++ b/drivers/crypto/qat/qat_common/qat_algs.c
+@@ -55,6 +55,7 @@
+ #include <crypto/hmac.h>
+ #include <crypto/algapi.h>
+ #include <crypto/authenc.h>
++#include <crypto/xts.h>
+ #include <linux/dma-mapping.h>
+ #include "adf_accel_devices.h"
+ #include "adf_transport.h"
+@@ -1102,6 +1103,14 @@ static int qat_alg_skcipher_blk_encrypt(struct skcipher_request *req)
+ return qat_alg_skcipher_encrypt(req);
+ }
+
++static int qat_alg_skcipher_xts_encrypt(struct skcipher_request *req)
++{
++ if (req->cryptlen < XTS_BLOCK_SIZE)
++ return -EINVAL;
++
++ return qat_alg_skcipher_encrypt(req);
++}
++
+ static int qat_alg_skcipher_decrypt(struct skcipher_request *req)
+ {
+ struct crypto_skcipher *stfm = crypto_skcipher_reqtfm(req);
+@@ -1161,6 +1170,15 @@ static int qat_alg_skcipher_blk_decrypt(struct skcipher_request *req)
+
+ return qat_alg_skcipher_decrypt(req);
+ }
++
++static int qat_alg_skcipher_xts_decrypt(struct skcipher_request *req)
++{
++ if (req->cryptlen < XTS_BLOCK_SIZE)
++ return -EINVAL;
++
++ return qat_alg_skcipher_decrypt(req);
++}
++
+ static int qat_alg_aead_init(struct crypto_aead *tfm,
+ enum icp_qat_hw_auth_algo hash,
+ const char *hash_name)
+@@ -1354,8 +1372,8 @@ static struct skcipher_alg qat_skciphers[] = { {
+ .init = qat_alg_skcipher_init_tfm,
+ .exit = qat_alg_skcipher_exit_tfm,
+ .setkey = qat_alg_skcipher_xts_setkey,
+- .decrypt = qat_alg_skcipher_blk_decrypt,
+- .encrypt = qat_alg_skcipher_blk_encrypt,
++ .decrypt = qat_alg_skcipher_xts_decrypt,
++ .encrypt = qat_alg_skcipher_xts_encrypt,
+ .min_keysize = 2 * AES_MIN_KEY_SIZE,
+ .max_keysize = 2 * AES_MAX_KEY_SIZE,
+ .ivsize = AES_BLOCK_SIZE,
+diff --git a/drivers/crypto/qat/qat_common/qat_uclo.c b/drivers/crypto/qat/qat_common/qat_uclo.c
+index 6bd8f6a2a24f..aeb03081415c 100644
+--- a/drivers/crypto/qat/qat_common/qat_uclo.c
++++ b/drivers/crypto/qat/qat_common/qat_uclo.c
+@@ -332,13 +332,18 @@ static int qat_uclo_create_batch_init_list(struct icp_qat_fw_loader_handle
+ }
+ return 0;
+ out_err:
++ /* Do not free the list head unless we allocated it. */
++ tail_old = tail_old->next;
++ if (flag) {
++ kfree(*init_tab_base);
++ *init_tab_base = NULL;
++ }
++
+ while (tail_old) {
+ mem_init = tail_old->next;
+ kfree(tail_old);
+ tail_old = mem_init;
+ }
+- if (flag)
+- kfree(*init_tab_base);
+ return -ENOMEM;
+ }
+
+diff --git a/drivers/devfreq/devfreq.c b/drivers/devfreq/devfreq.c
+index 6fecd11dafdd..8f6cd0f70fad 100644
+--- a/drivers/devfreq/devfreq.c
++++ b/drivers/devfreq/devfreq.c
+@@ -1660,8 +1660,7 @@ static int devfreq_summary_show(struct seq_file *s, void *data)
+ unsigned long cur_freq, min_freq, max_freq;
+ unsigned int polling_ms;
+
+- seq_printf(s, "%-30s %-10s %-10s %-15s %10s %12s %12s %12s\n",
+- "dev_name",
++ seq_printf(s, "%-30s %-30s %-15s %10s %12s %12s %12s\n",
+ "dev",
+ "parent_dev",
+ "governor",
+@@ -1669,10 +1668,9 @@ static int devfreq_summary_show(struct seq_file *s, void *data)
+ "cur_freq_Hz",
+ "min_freq_Hz",
+ "max_freq_Hz");
+- seq_printf(s, "%30s %10s %10s %15s %10s %12s %12s %12s\n",
++ seq_printf(s, "%30s %30s %15s %10s %12s %12s %12s\n",
++ "------------------------------",
+ "------------------------------",
+- "----------",
+- "----------",
+ "---------------",
+ "----------",
+ "------------",
+@@ -1701,8 +1699,7 @@ static int devfreq_summary_show(struct seq_file *s, void *data)
+ mutex_unlock(&devfreq->lock);
+
+ seq_printf(s,
+- "%-30s %-10s %-10s %-15s %10d %12ld %12ld %12ld\n",
+- dev_name(devfreq->dev.parent),
++ "%-30s %-30s %-15s %10d %12ld %12ld %12ld\n",
+ dev_name(&devfreq->dev),
+ p_devfreq ? dev_name(&p_devfreq->dev) : "null",
+ devfreq->governor_name,
+diff --git a/drivers/devfreq/rk3399_dmc.c b/drivers/devfreq/rk3399_dmc.c
+index 24f04f78285b..027769e39f9b 100644
+--- a/drivers/devfreq/rk3399_dmc.c
++++ b/drivers/devfreq/rk3399_dmc.c
+@@ -95,18 +95,20 @@ static int rk3399_dmcfreq_target(struct device *dev, unsigned long *freq,
+
+ mutex_lock(&dmcfreq->lock);
+
+- if (target_rate >= dmcfreq->odt_dis_freq)
+- odt_enable = true;
+-
+- /*
+- * This makes a SMC call to the TF-A to set the DDR PD (power-down)
+- * timings and to enable or disable the ODT (on-die termination)
+- * resistors.
+- */
+- arm_smccc_smc(ROCKCHIP_SIP_DRAM_FREQ, dmcfreq->odt_pd_arg0,
+- dmcfreq->odt_pd_arg1,
+- ROCKCHIP_SIP_CONFIG_DRAM_SET_ODT_PD,
+- odt_enable, 0, 0, 0, &res);
++ if (dmcfreq->regmap_pmu) {
++ if (target_rate >= dmcfreq->odt_dis_freq)
++ odt_enable = true;
++
++ /*
++ * This makes a SMC call to the TF-A to set the DDR PD
++ * (power-down) timings and to enable or disable the
++ * ODT (on-die termination) resistors.
++ */
++ arm_smccc_smc(ROCKCHIP_SIP_DRAM_FREQ, dmcfreq->odt_pd_arg0,
++ dmcfreq->odt_pd_arg1,
++ ROCKCHIP_SIP_CONFIG_DRAM_SET_ODT_PD,
++ odt_enable, 0, 0, 0, &res);
++ }
+
+ /*
+ * If frequency scaling from low to high, adjust voltage first.
+@@ -371,13 +373,14 @@ static int rk3399_dmcfreq_probe(struct platform_device *pdev)
+ }
+
+ node = of_parse_phandle(np, "rockchip,pmu", 0);
+- if (node) {
+- data->regmap_pmu = syscon_node_to_regmap(node);
+- of_node_put(node);
+- if (IS_ERR(data->regmap_pmu)) {
+- ret = PTR_ERR(data->regmap_pmu);
+- goto err_edev;
+- }
++ if (!node)
++ goto no_pmu;
++
++ data->regmap_pmu = syscon_node_to_regmap(node);
++ of_node_put(node);
++ if (IS_ERR(data->regmap_pmu)) {
++ ret = PTR_ERR(data->regmap_pmu);
++ goto err_edev;
+ }
+
+ regmap_read(data->regmap_pmu, RK3399_PMUGRF_OS_REG2, &val);
+@@ -399,6 +402,7 @@ static int rk3399_dmcfreq_probe(struct platform_device *pdev)
+ goto err_edev;
+ };
+
++no_pmu:
+ arm_smccc_smc(ROCKCHIP_SIP_DRAM_FREQ, 0, 0,
+ ROCKCHIP_SIP_CONFIG_DRAM_INIT,
+ 0, 0, 0, 0, &res);
+diff --git a/drivers/edac/edac_device_sysfs.c b/drivers/edac/edac_device_sysfs.c
+index 0e7ea3591b78..5e7593753799 100644
+--- a/drivers/edac/edac_device_sysfs.c
++++ b/drivers/edac/edac_device_sysfs.c
+@@ -275,6 +275,7 @@ int edac_device_register_sysfs_main_kobj(struct edac_device_ctl_info *edac_dev)
+
+ /* Error exit stack */
+ err_kobj_reg:
++ kobject_put(&edac_dev->kobj);
+ module_put(edac_dev->owner);
+
+ err_out:
+diff --git a/drivers/edac/edac_pci_sysfs.c b/drivers/edac/edac_pci_sysfs.c
+index 72c9eb9fdffb..53042af7262e 100644
+--- a/drivers/edac/edac_pci_sysfs.c
++++ b/drivers/edac/edac_pci_sysfs.c
+@@ -386,7 +386,7 @@ static int edac_pci_main_kobj_setup(void)
+
+ /* Error unwind statck */
+ kobject_init_and_add_fail:
+- kfree(edac_pci_top_main_kobj);
++ kobject_put(edac_pci_top_main_kobj);
+
+ kzalloc_fail:
+ module_put(THIS_MODULE);
+diff --git a/drivers/firmware/arm_scmi/scmi_pm_domain.c b/drivers/firmware/arm_scmi/scmi_pm_domain.c
+index bafbfe358f97..9e44479f0284 100644
+--- a/drivers/firmware/arm_scmi/scmi_pm_domain.c
++++ b/drivers/firmware/arm_scmi/scmi_pm_domain.c
+@@ -85,7 +85,10 @@ static int scmi_pm_domain_probe(struct scmi_device *sdev)
+ for (i = 0; i < num_domains; i++, scmi_pd++) {
+ u32 state;
+
+- domains[i] = &scmi_pd->genpd;
++ if (handle->power_ops->state_get(handle, i, &state)) {
++ dev_warn(dev, "failed to get state for domain %d\n", i);
++ continue;
++ }
+
+ scmi_pd->domain = i;
+ scmi_pd->handle = handle;
+@@ -94,13 +97,10 @@ static int scmi_pm_domain_probe(struct scmi_device *sdev)
+ scmi_pd->genpd.power_off = scmi_pd_power_off;
+ scmi_pd->genpd.power_on = scmi_pd_power_on;
+
+- if (handle->power_ops->state_get(handle, i, &state)) {
+- dev_warn(dev, "failed to get state for domain %d\n", i);
+- continue;
+- }
+-
+ pm_genpd_init(&scmi_pd->genpd, NULL,
+ state == SCMI_POWER_STATE_GENERIC_OFF);
++
++ domains[i] = &scmi_pd->genpd;
+ }
+
+ scmi_pd_data->domains = domains;
+diff --git a/drivers/firmware/qcom_scm.c b/drivers/firmware/qcom_scm.c
+index 4701487573f7..7d9596552366 100644
+--- a/drivers/firmware/qcom_scm.c
++++ b/drivers/firmware/qcom_scm.c
+@@ -391,7 +391,7 @@ static int __qcom_scm_set_dload_mode(struct device *dev, bool enable)
+
+ desc.args[1] = enable ? QCOM_SCM_BOOT_SET_DLOAD_MODE : 0;
+
+- return qcom_scm_call(__scm->dev, &desc, NULL);
++ return qcom_scm_call_atomic(__scm->dev, &desc, NULL);
+ }
+
+ static void qcom_scm_set_download_mode(bool enable)
+@@ -650,7 +650,7 @@ int qcom_scm_io_readl(phys_addr_t addr, unsigned int *val)
+ int ret;
+
+
+- ret = qcom_scm_call(__scm->dev, &desc, &res);
++ ret = qcom_scm_call_atomic(__scm->dev, &desc, &res);
+ if (ret >= 0)
+ *val = res.result[0];
+
+@@ -669,8 +669,7 @@ int qcom_scm_io_writel(phys_addr_t addr, unsigned int val)
+ .owner = ARM_SMCCC_OWNER_SIP,
+ };
+
+-
+- return qcom_scm_call(__scm->dev, &desc, NULL);
++ return qcom_scm_call_atomic(__scm->dev, &desc, NULL);
+ }
+ EXPORT_SYMBOL(qcom_scm_io_writel);
+
+diff --git a/drivers/gpio/gpiolib-devres.c b/drivers/gpio/gpiolib-devres.c
+index 5c91c4365da1..7dbce4c4ebdf 100644
+--- a/drivers/gpio/gpiolib-devres.c
++++ b/drivers/gpio/gpiolib-devres.c
+@@ -487,10 +487,12 @@ static void devm_gpio_chip_release(struct device *dev, void *res)
+ }
+
+ /**
+- * devm_gpiochip_add_data() - Resource managed gpiochip_add_data()
++ * devm_gpiochip_add_data_with_key() - Resource managed gpiochip_add_data_with_key()
+ * @dev: pointer to the device that gpio_chip belongs to.
+ * @gc: the GPIO chip to register
+ * @data: driver-private data associated with this chip
++ * @lock_key: lockdep class for IRQ lock
++ * @request_key: lockdep class for IRQ request
+ *
+ * Context: potentially before irqs will work
+ *
+@@ -501,8 +503,9 @@ static void devm_gpio_chip_release(struct device *dev, void *res)
+ * gc->base is invalid or already associated with a different chip.
+ * Otherwise it returns zero as a success code.
+ */
+-int devm_gpiochip_add_data(struct device *dev, struct gpio_chip *gc,
+- void *data)
++int devm_gpiochip_add_data_with_key(struct device *dev, struct gpio_chip *gc, void *data,
++ struct lock_class_key *lock_key,
++ struct lock_class_key *request_key)
+ {
+ struct gpio_chip **ptr;
+ int ret;
+@@ -512,7 +515,7 @@ int devm_gpiochip_add_data(struct device *dev, struct gpio_chip *gc,
+ if (!ptr)
+ return -ENOMEM;
+
+- ret = gpiochip_add_data(gc, data);
++ ret = gpiochip_add_data_with_key(gc, data, lock_key, request_key);
+ if (ret < 0) {
+ devres_free(ptr);
+ return ret;
+@@ -523,4 +526,4 @@ int devm_gpiochip_add_data(struct device *dev, struct gpio_chip *gc,
+
+ return 0;
+ }
+-EXPORT_SYMBOL_GPL(devm_gpiochip_add_data);
++EXPORT_SYMBOL_GPL(devm_gpiochip_add_data_with_key);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
+index ffeb20f11c07..728f76cc536e 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
+@@ -552,7 +552,7 @@ struct drm_gem_object *amdgpu_gem_prime_import(struct drm_device *dev,
+ attach = dma_buf_dynamic_attach(dma_buf, dev->dev,
+ &amdgpu_dma_buf_attach_ops, obj);
+ if (IS_ERR(attach)) {
+- drm_gem_object_put(obj);
++ drm_gem_object_put_unlocked(obj);
+ return ERR_CAST(attach);
+ }
+
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
+index 7531527067df..892c1e9a1eb0 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
+@@ -408,7 +408,9 @@ int amdgpu_fence_driver_start_ring(struct amdgpu_ring *ring,
+ ring->fence_drv.gpu_addr = adev->uvd.inst[ring->me].gpu_addr + index;
+ }
+ amdgpu_fence_write(ring, atomic_read(&ring->fence_drv.last_seq));
+- amdgpu_irq_get(adev, irq_src, irq_type);
++
++ if (irq_src)
++ amdgpu_irq_get(adev, irq_src, irq_type);
+
+ ring->fence_drv.irq_src = irq_src;
+ ring->fence_drv.irq_type = irq_type;
+@@ -529,8 +531,9 @@ void amdgpu_fence_driver_fini(struct amdgpu_device *adev)
+ /* no need to trigger GPU reset as we are unloading */
+ amdgpu_fence_driver_force_completion(ring);
+ }
+- amdgpu_irq_put(adev, ring->fence_drv.irq_src,
+- ring->fence_drv.irq_type);
++ if (ring->fence_drv.irq_src)
++ amdgpu_irq_put(adev, ring->fence_drv.irq_src,
++ ring->fence_drv.irq_type);
+ drm_sched_fini(&ring->sched);
+ del_timer_sync(&ring->fence_drv.fallback_timer);
+ for (j = 0; j <= ring->fence_drv.num_fences_mask; ++j)
+@@ -566,8 +569,9 @@ void amdgpu_fence_driver_suspend(struct amdgpu_device *adev)
+ }
+
+ /* disable the interrupt */
+- amdgpu_irq_put(adev, ring->fence_drv.irq_src,
+- ring->fence_drv.irq_type);
++ if (ring->fence_drv.irq_src)
++ amdgpu_irq_put(adev, ring->fence_drv.irq_src,
++ ring->fence_drv.irq_type);
+ }
+ }
+
+@@ -593,8 +597,9 @@ void amdgpu_fence_driver_resume(struct amdgpu_device *adev)
+ continue;
+
+ /* enable the interrupt */
+- amdgpu_irq_get(adev, ring->fence_drv.irq_src,
+- ring->fence_drv.irq_type);
++ if (ring->fence_drv.irq_src)
++ amdgpu_irq_get(adev, ring->fence_drv.irq_src,
++ ring->fence_drv.irq_type);
+ }
+ }
+
+diff --git a/drivers/gpu/drm/amd/amdgpu/jpeg_v2_5.c b/drivers/gpu/drm/amd/amdgpu/jpeg_v2_5.c
+index c04c2078a7c1..a7fcb55babb8 100644
+--- a/drivers/gpu/drm/amd/amdgpu/jpeg_v2_5.c
++++ b/drivers/gpu/drm/amd/amdgpu/jpeg_v2_5.c
+@@ -462,7 +462,7 @@ static int jpeg_v2_5_wait_for_idle(void *handle)
+ return ret;
+ }
+
+- return ret;
++ return 0;
+ }
+
+ static int jpeg_v2_5_set_clockgating_state(void *handle,
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_pp_smu.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_pp_smu.c
+index a2e1a73f66b8..5c6a6ae48d39 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_pp_smu.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_pp_smu.c
+@@ -106,7 +106,7 @@ bool dm_pp_apply_display_requirements(
+ adev->powerplay.pp_funcs->display_configuration_change(
+ adev->powerplay.pp_handle,
+ &adev->pm.pm_display_cfg);
+- else
++ else if (adev->smu.ppt_funcs)
+ smu_display_configuration_change(smu,
+ &adev->pm.pm_display_cfg);
+
+@@ -530,6 +530,8 @@ bool dm_pp_get_static_clocks(
+ &pp_clk_info);
+ else if (adev->smu.ppt_funcs)
+ ret = smu_get_current_clocks(&adev->smu, &pp_clk_info);
++ else
++ return false;
+ if (ret)
+ return false;
+
+@@ -590,7 +592,7 @@ void pp_rv_set_wm_ranges(struct pp_smu *pp,
+ if (pp_funcs && pp_funcs->set_watermarks_for_clocks_ranges)
+ pp_funcs->set_watermarks_for_clocks_ranges(pp_handle,
+ &wm_with_clock_ranges);
+- else
++ else if (adev->smu.ppt_funcs)
+ smu_set_watermarks_for_clock_ranges(&adev->smu,
+ &wm_with_clock_ranges);
+ }
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link.c b/drivers/gpu/drm/amd/display/dc/core/dc_link.c
+index 67cfff1586e9..3f157bcc174b 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc_link.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc_link.c
+@@ -3146,9 +3146,11 @@ void core_link_disable_stream(struct pipe_ctx *pipe_ctx)
+ write_i2c_redriver_setting(pipe_ctx, false);
+ }
+ }
+- dc->hwss.disable_stream(pipe_ctx);
+
+ disable_link(pipe_ctx->stream->link, pipe_ctx->stream->signal);
++
++ dc->hwss.disable_stream(pipe_ctx);
++
+ if (pipe_ctx->stream->timing.flags.DSC) {
+ if (dc_is_dp_signal(pipe_ctx->stream->signal))
+ dp_set_dsc_enable(pipe_ctx, false);
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c b/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
+index caa090d0b6ac..1ada01322cd2 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
+@@ -1103,6 +1103,10 @@ static inline enum link_training_result perform_link_training_int(
+ dpcd_pattern.v1_4.TRAINING_PATTERN_SET = DPCD_TRAINING_PATTERN_VIDEOIDLE;
+ dpcd_set_training_pattern(link, dpcd_pattern);
+
++ /* delay 5ms after notifying sink of idle pattern before switching output */
++ if (link->connector_signal != SIGNAL_TYPE_EDP)
++ msleep(5);
++
+ /* 4. mainlink output idle pattern*/
+ dp_set_hw_test_pattern(link, DP_TEST_PATTERN_VIDEO_MODE, NULL, 0);
+
+@@ -1552,6 +1556,12 @@ bool perform_link_training_with_retries(
+ struct dc_link *link = stream->link;
+ enum dp_panel_mode panel_mode = dp_get_panel_mode(link);
+
++ /* We need to do this before the link training to ensure the idle pattern in SST
++ * mode will be sent right after the link training
++ */
++ link->link_enc->funcs->connect_dig_be_to_fe(link->link_enc,
++ pipe_ctx->stream_res.stream_enc->id, true);
++
+ for (j = 0; j < attempts; ++j) {
+
+ dp_enable_link_phy(
+@@ -1568,12 +1578,6 @@ bool perform_link_training_with_retries(
+
+ dp_set_panel_mode(link, panel_mode);
+
+- /* We need to do this before the link training to ensure the idle pattern in SST
+- * mode will be sent right after the link training
+- */
+- link->link_enc->funcs->connect_dig_be_to_fe(link->link_enc,
+- pipe_ctx->stream_res.stream_enc->id, true);
+-
+ if (link->aux_access_disabled) {
+ dc_link_dp_perform_link_training_skip_aux(link, link_setting);
+ return true;
+diff --git a/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c b/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c
+index 10527593868c..24ca592c90df 100644
+--- a/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c
++++ b/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c
+@@ -1090,8 +1090,17 @@ void dce110_blank_stream(struct pipe_ctx *pipe_ctx)
+ dc_link_set_abm_disable(link);
+ }
+
+- if (dc_is_dp_signal(pipe_ctx->stream->signal))
++ if (dc_is_dp_signal(pipe_ctx->stream->signal)) {
+ pipe_ctx->stream_res.stream_enc->funcs->dp_blank(pipe_ctx->stream_res.stream_enc);
++
++ /*
++ * After output is idle pattern some sinks need time to recognize the stream
++ * has changed or they enter protection state and hang.
++ */
++ if (!dc_is_embedded_signal(pipe_ctx->stream->signal))
++ msleep(60);
++ }
++
+ }
+
+
+diff --git a/drivers/gpu/drm/amd/powerplay/arcturus_ppt.c b/drivers/gpu/drm/amd/powerplay/arcturus_ppt.c
+index 1ef0923f7190..9ad0e6f18be4 100644
+--- a/drivers/gpu/drm/amd/powerplay/arcturus_ppt.c
++++ b/drivers/gpu/drm/amd/powerplay/arcturus_ppt.c
+@@ -2035,8 +2035,6 @@ static void arcturus_fill_eeprom_i2c_req(SwI2cRequest_t *req, bool write,
+ {
+ int i;
+
+- BUG_ON(numbytes > MAX_SW_I2C_COMMANDS);
+-
+ req->I2CcontrollerPort = 0;
+ req->I2CSpeed = 2;
+ req->SlaveAddress = address;
+@@ -2074,6 +2072,12 @@ static int arcturus_i2c_eeprom_read_data(struct i2c_adapter *control,
+ struct smu_table_context *smu_table = &adev->smu.smu_table;
+ struct smu_table *table = &smu_table->driver_table;
+
++ if (numbytes > MAX_SW_I2C_COMMANDS) {
++ dev_err(adev->dev, "numbytes requested %d is over max allowed %d\n",
++ numbytes, MAX_SW_I2C_COMMANDS);
++ return -EINVAL;
++ }
++
+ memset(&req, 0, sizeof(req));
+ arcturus_fill_eeprom_i2c_req(&req, false, address, numbytes, data);
+
+@@ -2110,6 +2114,12 @@ static int arcturus_i2c_eeprom_write_data(struct i2c_adapter *control,
+ SwI2cRequest_t req;
+ struct amdgpu_device *adev = to_amdgpu_device(control);
+
++ if (numbytes > MAX_SW_I2C_COMMANDS) {
++ dev_err(adev->dev, "numbytes requested %d is over max allowed %d\n",
++ numbytes, MAX_SW_I2C_COMMANDS);
++ return -EINVAL;
++ }
++
+ memset(&req, 0, sizeof(req));
+ arcturus_fill_eeprom_i2c_req(&req, true, address, numbytes, data);
+
+diff --git a/drivers/gpu/drm/amd/powerplay/smu_v11_0.c b/drivers/gpu/drm/amd/powerplay/smu_v11_0.c
+index 655ba4fb05dc..48af305d42d5 100644
+--- a/drivers/gpu/drm/amd/powerplay/smu_v11_0.c
++++ b/drivers/gpu/drm/amd/powerplay/smu_v11_0.c
+@@ -159,7 +159,8 @@ int smu_v11_0_init_microcode(struct smu_context *smu)
+ chip_name = "navi12";
+ break;
+ default:
+- BUG();
++ dev_err(adev->dev, "Unsupported ASIC type %d\n", adev->asic_type);
++ return -EINVAL;
+ }
+
+ snprintf(fw_name, sizeof(fw_name), "amdgpu/%s_smc.bin", chip_name);
+diff --git a/drivers/gpu/drm/arm/malidp_planes.c b/drivers/gpu/drm/arm/malidp_planes.c
+index 37715cc6064e..ab45ac445045 100644
+--- a/drivers/gpu/drm/arm/malidp_planes.c
++++ b/drivers/gpu/drm/arm/malidp_planes.c
+@@ -928,7 +928,7 @@ int malidp_de_planes_init(struct drm_device *drm)
+ const struct malidp_hw_regmap *map = &malidp->dev->hw->map;
+ struct malidp_plane *plane = NULL;
+ enum drm_plane_type plane_type;
+- unsigned long crtcs = 1 << drm->mode_config.num_crtc;
++ unsigned long crtcs = BIT(drm->mode_config.num_crtc);
+ unsigned long flags = DRM_MODE_ROTATE_0 | DRM_MODE_ROTATE_90 | DRM_MODE_ROTATE_180 |
+ DRM_MODE_ROTATE_270 | DRM_MODE_REFLECT_X | DRM_MODE_REFLECT_Y;
+ unsigned int blend_caps = BIT(DRM_MODE_BLEND_PIXEL_NONE) |
+diff --git a/drivers/gpu/drm/bridge/sil-sii8620.c b/drivers/gpu/drm/bridge/sil-sii8620.c
+index 92acd336aa89..ca98133411aa 100644
+--- a/drivers/gpu/drm/bridge/sil-sii8620.c
++++ b/drivers/gpu/drm/bridge/sil-sii8620.c
+@@ -178,7 +178,7 @@ static void sii8620_read_buf(struct sii8620 *ctx, u16 addr, u8 *buf, int len)
+
+ static u8 sii8620_readb(struct sii8620 *ctx, u16 addr)
+ {
+- u8 ret;
++ u8 ret = 0;
+
+ sii8620_read_buf(ctx, addr, &ret, 1);
+ return ret;
+diff --git a/drivers/gpu/drm/bridge/ti-sn65dsi86.c b/drivers/gpu/drm/bridge/ti-sn65dsi86.c
+index 6ad688b320ae..8a0e34f2160a 100644
+--- a/drivers/gpu/drm/bridge/ti-sn65dsi86.c
++++ b/drivers/gpu/drm/bridge/ti-sn65dsi86.c
+@@ -475,7 +475,7 @@ static int ti_sn_bridge_calc_min_dp_rate_idx(struct ti_sn_bridge *pdata)
+ 1000 * pdata->dp_lanes * DP_CLK_FUDGE_DEN);
+
+ for (i = 1; i < ARRAY_SIZE(ti_sn_bridge_dp_rate_lut) - 1; i++)
+- if (ti_sn_bridge_dp_rate_lut[i] > dp_rate_mhz)
++ if (ti_sn_bridge_dp_rate_lut[i] >= dp_rate_mhz)
+ break;
+
+ return i;
+@@ -827,6 +827,12 @@ static ssize_t ti_sn_aux_transfer(struct drm_dp_aux *aux,
+ buf[i]);
+ }
+
++ /* Clear old status bits before start so we don't get confused */
++ regmap_write(pdata->regmap, SN_AUX_CMD_STATUS_REG,
++ AUX_IRQ_STATUS_NAT_I2C_FAIL |
++ AUX_IRQ_STATUS_AUX_RPLY_TOUT |
++ AUX_IRQ_STATUS_AUX_SHORT);
++
+ regmap_write(pdata->regmap, SN_AUX_CMD_REG, request_val | AUX_CMD_SEND);
+
+ ret = regmap_read_poll_timeout(pdata->regmap, SN_AUX_CMD_REG, val,
+diff --git a/drivers/gpu/drm/drm_debugfs.c b/drivers/gpu/drm/drm_debugfs.c
+index 4e673d318503..fb251c00fdd3 100644
+--- a/drivers/gpu/drm/drm_debugfs.c
++++ b/drivers/gpu/drm/drm_debugfs.c
+@@ -336,13 +336,13 @@ static ssize_t connector_write(struct file *file, const char __user *ubuf,
+
+ buf[len] = '\0';
+
+- if (!strcmp(buf, "on"))
++ if (sysfs_streq(buf, "on"))
+ connector->force = DRM_FORCE_ON;
+- else if (!strcmp(buf, "digital"))
++ else if (sysfs_streq(buf, "digital"))
+ connector->force = DRM_FORCE_ON_DIGITAL;
+- else if (!strcmp(buf, "off"))
++ else if (sysfs_streq(buf, "off"))
+ connector->force = DRM_FORCE_OFF;
+- else if (!strcmp(buf, "unspecified"))
++ else if (sysfs_streq(buf, "unspecified"))
+ connector->force = DRM_FORCE_UNSPECIFIED;
+ else
+ return -EINVAL;
+diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
+index 3087aa710e8d..d847540e4f8c 100644
+--- a/drivers/gpu/drm/drm_gem.c
++++ b/drivers/gpu/drm/drm_gem.c
+@@ -710,6 +710,8 @@ int drm_gem_objects_lookup(struct drm_file *filp, void __user *bo_handles,
+ if (!objs)
+ return -ENOMEM;
+
++ *objs_out = objs;
++
+ handles = kvmalloc_array(count, sizeof(u32), GFP_KERNEL);
+ if (!handles) {
+ ret = -ENOMEM;
+@@ -723,8 +725,6 @@ int drm_gem_objects_lookup(struct drm_file *filp, void __user *bo_handles,
+ }
+
+ ret = objects_lookup(filp, handles, count, objs);
+- *objs_out = objs;
+-
+ out:
+ kvfree(handles);
+ return ret;
+diff --git a/drivers/gpu/drm/drm_mipi_dsi.c b/drivers/gpu/drm/drm_mipi_dsi.c
+index 55531895dde6..37b03fefbdf6 100644
+--- a/drivers/gpu/drm/drm_mipi_dsi.c
++++ b/drivers/gpu/drm/drm_mipi_dsi.c
+@@ -1082,11 +1082,11 @@ EXPORT_SYMBOL(mipi_dsi_dcs_set_pixel_format);
+ */
+ int mipi_dsi_dcs_set_tear_scanline(struct mipi_dsi_device *dsi, u16 scanline)
+ {
+- u8 payload[3] = { MIPI_DCS_SET_TEAR_SCANLINE, scanline >> 8,
+- scanline & 0xff };
++ u8 payload[2] = { scanline >> 8, scanline & 0xff };
+ ssize_t err;
+
+- err = mipi_dsi_generic_write(dsi, payload, sizeof(payload));
++ err = mipi_dsi_dcs_write(dsi, MIPI_DCS_SET_TEAR_SCANLINE, payload,
++ sizeof(payload));
+ if (err < 0)
+ return err;
+
+diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gpu.c b/drivers/gpu/drm/etnaviv/etnaviv_gpu.c
+index a31eeff2b297..4a512b062df8 100644
+--- a/drivers/gpu/drm/etnaviv/etnaviv_gpu.c
++++ b/drivers/gpu/drm/etnaviv/etnaviv_gpu.c
+@@ -722,7 +722,7 @@ int etnaviv_gpu_init(struct etnaviv_gpu *gpu)
+ ret = pm_runtime_get_sync(gpu->dev);
+ if (ret < 0) {
+ dev_err(gpu->dev, "Failed to enable GPU power domain\n");
+- return ret;
++ goto pm_put;
+ }
+
+ etnaviv_hw_identify(gpu);
+@@ -819,6 +819,7 @@ int etnaviv_gpu_init(struct etnaviv_gpu *gpu)
+
+ fail:
+ pm_runtime_mark_last_busy(gpu->dev);
++pm_put:
+ pm_runtime_put_autosuspend(gpu->dev);
+
+ return ret;
+@@ -859,7 +860,7 @@ int etnaviv_gpu_debugfs(struct etnaviv_gpu *gpu, struct seq_file *m)
+
+ ret = pm_runtime_get_sync(gpu->dev);
+ if (ret < 0)
+- return ret;
++ goto pm_put;
+
+ dma_lo = gpu_read(gpu, VIVS_FE_DMA_LOW);
+ dma_hi = gpu_read(gpu, VIVS_FE_DMA_HIGH);
+@@ -1003,6 +1004,7 @@ int etnaviv_gpu_debugfs(struct etnaviv_gpu *gpu, struct seq_file *m)
+ ret = 0;
+
+ pm_runtime_mark_last_busy(gpu->dev);
++pm_put:
+ pm_runtime_put_autosuspend(gpu->dev);
+
+ return ret;
+@@ -1016,7 +1018,7 @@ void etnaviv_gpu_recover_hang(struct etnaviv_gpu *gpu)
+ dev_err(gpu->dev, "recover hung GPU!\n");
+
+ if (pm_runtime_get_sync(gpu->dev) < 0)
+- return;
++ goto pm_put;
+
+ mutex_lock(&gpu->lock);
+
+@@ -1035,6 +1037,7 @@ void etnaviv_gpu_recover_hang(struct etnaviv_gpu *gpu)
+
+ mutex_unlock(&gpu->lock);
+ pm_runtime_mark_last_busy(gpu->dev);
++pm_put:
+ pm_runtime_put_autosuspend(gpu->dev);
+ }
+
+@@ -1308,8 +1311,10 @@ struct dma_fence *etnaviv_gpu_submit(struct etnaviv_gem_submit *submit)
+
+ if (!submit->runtime_resumed) {
+ ret = pm_runtime_get_sync(gpu->dev);
+- if (ret < 0)
++ if (ret < 0) {
++ pm_runtime_put_noidle(gpu->dev);
+ return NULL;
++ }
+ submit->runtime_resumed = true;
+ }
+
+@@ -1326,6 +1331,7 @@ struct dma_fence *etnaviv_gpu_submit(struct etnaviv_gem_submit *submit)
+ ret = event_alloc(gpu, nr_events, event);
+ if (ret) {
+ DRM_ERROR("no free events\n");
++ pm_runtime_put_noidle(gpu->dev);
+ return NULL;
+ }
+
+@@ -1496,7 +1502,7 @@ static int etnaviv_gpu_clk_enable(struct etnaviv_gpu *gpu)
+ if (gpu->clk_bus) {
+ ret = clk_prepare_enable(gpu->clk_bus);
+ if (ret)
+- return ret;
++ goto disable_clk_reg;
+ }
+
+ if (gpu->clk_core) {
+@@ -1519,6 +1525,9 @@ disable_clk_core:
+ disable_clk_bus:
+ if (gpu->clk_bus)
+ clk_disable_unprepare(gpu->clk_bus);
++disable_clk_reg:
++ if (gpu->clk_reg)
++ clk_disable_unprepare(gpu->clk_reg);
+
+ return ret;
+ }
+diff --git a/drivers/gpu/drm/imx/dw_hdmi-imx.c b/drivers/gpu/drm/imx/dw_hdmi-imx.c
+index f22cfbf9353e..2e12a4a3bfa1 100644
+--- a/drivers/gpu/drm/imx/dw_hdmi-imx.c
++++ b/drivers/gpu/drm/imx/dw_hdmi-imx.c
+@@ -212,9 +212,8 @@ static int dw_hdmi_imx_bind(struct device *dev, struct device *master,
+ if (!pdev->dev.of_node)
+ return -ENODEV;
+
+- hdmi = devm_kzalloc(&pdev->dev, sizeof(*hdmi), GFP_KERNEL);
+- if (!hdmi)
+- return -ENOMEM;
++ hdmi = dev_get_drvdata(dev);
++ memset(hdmi, 0, sizeof(*hdmi));
+
+ match = of_match_node(dw_hdmi_imx_dt_ids, pdev->dev.of_node);
+ plat_data = match->data;
+@@ -239,8 +238,6 @@ static int dw_hdmi_imx_bind(struct device *dev, struct device *master,
+ drm_encoder_init(drm, encoder, &dw_hdmi_imx_encoder_funcs,
+ DRM_MODE_ENCODER_TMDS, NULL);
+
+- platform_set_drvdata(pdev, hdmi);
+-
+ hdmi->hdmi = dw_hdmi_bind(pdev, encoder, plat_data);
+
+ /*
+@@ -270,6 +267,14 @@ static const struct component_ops dw_hdmi_imx_ops = {
+
+ static int dw_hdmi_imx_probe(struct platform_device *pdev)
+ {
++ struct imx_hdmi *hdmi;
++
++ hdmi = devm_kzalloc(&pdev->dev, sizeof(*hdmi), GFP_KERNEL);
++ if (!hdmi)
++ return -ENOMEM;
++
++ platform_set_drvdata(pdev, hdmi);
++
+ return component_add(&pdev->dev, &dw_hdmi_imx_ops);
+ }
+
+diff --git a/drivers/gpu/drm/imx/imx-drm-core.c b/drivers/gpu/drm/imx/imx-drm-core.c
+index da87c70e413b..881c36d0f16b 100644
+--- a/drivers/gpu/drm/imx/imx-drm-core.c
++++ b/drivers/gpu/drm/imx/imx-drm-core.c
+@@ -281,9 +281,10 @@ static void imx_drm_unbind(struct device *dev)
+
+ drm_kms_helper_poll_fini(drm);
+
++ component_unbind_all(drm->dev, drm);
++
+ drm_mode_config_cleanup(drm);
+
+- component_unbind_all(drm->dev, drm);
+ dev_set_drvdata(dev, NULL);
+
+ drm_dev_put(drm);
+diff --git a/drivers/gpu/drm/imx/imx-ldb.c b/drivers/gpu/drm/imx/imx-ldb.c
+index 4da22a94790c..8e209117b049 100644
+--- a/drivers/gpu/drm/imx/imx-ldb.c
++++ b/drivers/gpu/drm/imx/imx-ldb.c
+@@ -594,9 +594,8 @@ static int imx_ldb_bind(struct device *dev, struct device *master, void *data)
+ int ret;
+ int i;
+
+- imx_ldb = devm_kzalloc(dev, sizeof(*imx_ldb), GFP_KERNEL);
+- if (!imx_ldb)
+- return -ENOMEM;
++ imx_ldb = dev_get_drvdata(dev);
++ memset(imx_ldb, 0, sizeof(*imx_ldb));
+
+ imx_ldb->regmap = syscon_regmap_lookup_by_phandle(np, "gpr");
+ if (IS_ERR(imx_ldb->regmap)) {
+@@ -704,8 +703,6 @@ static int imx_ldb_bind(struct device *dev, struct device *master, void *data)
+ }
+ }
+
+- dev_set_drvdata(dev, imx_ldb);
+-
+ return 0;
+
+ free_child:
+@@ -737,6 +734,14 @@ static const struct component_ops imx_ldb_ops = {
+
+ static int imx_ldb_probe(struct platform_device *pdev)
+ {
++ struct imx_ldb *imx_ldb;
++
++ imx_ldb = devm_kzalloc(&pdev->dev, sizeof(*imx_ldb), GFP_KERNEL);
++ if (!imx_ldb)
++ return -ENOMEM;
++
++ platform_set_drvdata(pdev, imx_ldb);
++
+ return component_add(&pdev->dev, &imx_ldb_ops);
+ }
+
+diff --git a/drivers/gpu/drm/imx/imx-tve.c b/drivers/gpu/drm/imx/imx-tve.c
+index 5bbfaa2cd0f4..f91c3eb7697b 100644
+--- a/drivers/gpu/drm/imx/imx-tve.c
++++ b/drivers/gpu/drm/imx/imx-tve.c
+@@ -494,6 +494,13 @@ static int imx_tve_register(struct drm_device *drm, struct imx_tve *tve)
+ return 0;
+ }
+
++static void imx_tve_disable_regulator(void *data)
++{
++ struct imx_tve *tve = data;
++
++ regulator_disable(tve->dac_reg);
++}
++
+ static bool imx_tve_readable_reg(struct device *dev, unsigned int reg)
+ {
+ return (reg % 4 == 0) && (reg <= 0xdc);
+@@ -546,9 +553,8 @@ static int imx_tve_bind(struct device *dev, struct device *master, void *data)
+ int irq;
+ int ret;
+
+- tve = devm_kzalloc(dev, sizeof(*tve), GFP_KERNEL);
+- if (!tve)
+- return -ENOMEM;
++ tve = dev_get_drvdata(dev);
++ memset(tve, 0, sizeof(*tve));
+
+ tve->dev = dev;
+ spin_lock_init(&tve->lock);
+@@ -618,6 +624,9 @@ static int imx_tve_bind(struct device *dev, struct device *master, void *data)
+ ret = regulator_enable(tve->dac_reg);
+ if (ret)
+ return ret;
++ ret = devm_add_action_or_reset(dev, imx_tve_disable_regulator, tve);
++ if (ret)
++ return ret;
+ }
+
+ tve->clk = devm_clk_get(dev, "tve");
+@@ -659,27 +668,23 @@ static int imx_tve_bind(struct device *dev, struct device *master, void *data)
+ if (ret)
+ return ret;
+
+- dev_set_drvdata(dev, tve);
+-
+ return 0;
+ }
+
+-static void imx_tve_unbind(struct device *dev, struct device *master,
+- void *data)
+-{
+- struct imx_tve *tve = dev_get_drvdata(dev);
+-
+- if (!IS_ERR(tve->dac_reg))
+- regulator_disable(tve->dac_reg);
+-}
+-
+ static const struct component_ops imx_tve_ops = {
+ .bind = imx_tve_bind,
+- .unbind = imx_tve_unbind,
+ };
+
+ static int imx_tve_probe(struct platform_device *pdev)
+ {
++ struct imx_tve *tve;
++
++ tve = devm_kzalloc(&pdev->dev, sizeof(*tve), GFP_KERNEL);
++ if (!tve)
++ return -ENOMEM;
++
++ platform_set_drvdata(pdev, tve);
++
+ return component_add(&pdev->dev, &imx_tve_ops);
+ }
+
+diff --git a/drivers/gpu/drm/imx/ipuv3-crtc.c b/drivers/gpu/drm/imx/ipuv3-crtc.c
+index 63c0284f8b3c..2256c9789fc2 100644
+--- a/drivers/gpu/drm/imx/ipuv3-crtc.c
++++ b/drivers/gpu/drm/imx/ipuv3-crtc.c
+@@ -438,21 +438,13 @@ static int ipu_drm_bind(struct device *dev, struct device *master, void *data)
+ struct ipu_client_platformdata *pdata = dev->platform_data;
+ struct drm_device *drm = data;
+ struct ipu_crtc *ipu_crtc;
+- int ret;
+
+- ipu_crtc = devm_kzalloc(dev, sizeof(*ipu_crtc), GFP_KERNEL);
+- if (!ipu_crtc)
+- return -ENOMEM;
++ ipu_crtc = dev_get_drvdata(dev);
++ memset(ipu_crtc, 0, sizeof(*ipu_crtc));
+
+ ipu_crtc->dev = dev;
+
+- ret = ipu_crtc_init(ipu_crtc, pdata, drm);
+- if (ret)
+- return ret;
+-
+- dev_set_drvdata(dev, ipu_crtc);
+-
+- return 0;
++ return ipu_crtc_init(ipu_crtc, pdata, drm);
+ }
+
+ static void ipu_drm_unbind(struct device *dev, struct device *master,
+@@ -474,6 +466,7 @@ static const struct component_ops ipu_crtc_ops = {
+ static int ipu_drm_probe(struct platform_device *pdev)
+ {
+ struct device *dev = &pdev->dev;
++ struct ipu_crtc *ipu_crtc;
+ int ret;
+
+ if (!dev->platform_data)
+@@ -483,6 +476,12 @@ static int ipu_drm_probe(struct platform_device *pdev)
+ if (ret)
+ return ret;
+
++ ipu_crtc = devm_kzalloc(dev, sizeof(*ipu_crtc), GFP_KERNEL);
++ if (!ipu_crtc)
++ return -ENOMEM;
++
++ dev_set_drvdata(dev, ipu_crtc);
++
+ return component_add(dev, &ipu_crtc_ops);
+ }
+
+diff --git a/drivers/gpu/drm/imx/parallel-display.c b/drivers/gpu/drm/imx/parallel-display.c
+index 08fafa4bf8c2..43e109d67fe3 100644
+--- a/drivers/gpu/drm/imx/parallel-display.c
++++ b/drivers/gpu/drm/imx/parallel-display.c
+@@ -330,9 +330,8 @@ static int imx_pd_bind(struct device *dev, struct device *master, void *data)
+ u32 bus_format = 0;
+ const char *fmt;
+
+- imxpd = devm_kzalloc(dev, sizeof(*imxpd), GFP_KERNEL);
+- if (!imxpd)
+- return -ENOMEM;
++ imxpd = dev_get_drvdata(dev);
++ memset(imxpd, 0, sizeof(*imxpd));
+
+ edidp = of_get_property(np, "edid", &imxpd->edid_len);
+ if (edidp)
+@@ -363,8 +362,6 @@ static int imx_pd_bind(struct device *dev, struct device *master, void *data)
+ if (ret)
+ return ret;
+
+- dev_set_drvdata(dev, imxpd);
+-
+ return 0;
+ }
+
+@@ -386,6 +383,14 @@ static const struct component_ops imx_pd_ops = {
+
+ static int imx_pd_probe(struct platform_device *pdev)
+ {
++ struct imx_parallel_display *imxpd;
++
++ imxpd = devm_kzalloc(&pdev->dev, sizeof(*imxpd), GFP_KERNEL);
++ if (!imxpd)
++ return -ENOMEM;
++
++ platform_set_drvdata(pdev, imxpd);
++
+ return component_add(&pdev->dev, &imx_pd_ops);
+ }
+
+diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
+index 34607a98cc7c..9a7a18951dc2 100644
+--- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
++++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
+@@ -732,10 +732,19 @@ int a6xx_gmu_resume(struct a6xx_gpu *a6xx_gpu)
+ /* Turn on the resources */
+ pm_runtime_get_sync(gmu->dev);
+
++ /*
++ * "enable" the GX power domain which won't actually do anything but it
++ * will make sure that the refcounting is correct in case we need to
++ * bring down the GX after a GMU failure
++ */
++ if (!IS_ERR_OR_NULL(gmu->gxpd))
++ pm_runtime_get_sync(gmu->gxpd);
++
+ /* Use a known rate to bring up the GMU */
+ clk_set_rate(gmu->core_clk, 200000000);
+ ret = clk_bulk_prepare_enable(gmu->nr_clocks, gmu->clocks);
+ if (ret) {
++ pm_runtime_put(gmu->gxpd);
+ pm_runtime_put(gmu->dev);
+ return ret;
+ }
+@@ -771,19 +780,12 @@ int a6xx_gmu_resume(struct a6xx_gpu *a6xx_gpu)
+ /* Set the GPU to the current freq */
+ __a6xx_gmu_set_freq(gmu, gmu->current_perf_index);
+
+- /*
+- * "enable" the GX power domain which won't actually do anything but it
+- * will make sure that the refcounting is correct in case we need to
+- * bring down the GX after a GMU failure
+- */
+- if (!IS_ERR_OR_NULL(gmu->gxpd))
+- pm_runtime_get(gmu->gxpd);
+-
+ out:
+ /* On failure, shut down the GMU to leave it in a good state */
+ if (ret) {
+ disable_irq(gmu->gmu_irq);
+ a6xx_rpmh_stop(gmu);
++ pm_runtime_put(gmu->gxpd);
+ pm_runtime_put(gmu->dev);
+ }
+
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c
+index 17448505a9b5..d263d6e69bf1 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c
+@@ -386,7 +386,7 @@ static void dpu_crtc_frame_event_cb(void *data, u32 event)
+ spin_unlock_irqrestore(&dpu_crtc->spin_lock, flags);
+
+ if (!fevent) {
+- DRM_ERROR("crtc%d event %d overflow\n", crtc->base.id, event);
++ DRM_ERROR_RATELIMITED("crtc%d event %d overflow\n", crtc->base.id, event);
+ return;
+ }
+
+diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
+index 5a6a79fbc9d6..d92a0ffe2a76 100644
+--- a/drivers/gpu/drm/msm/msm_gem.c
++++ b/drivers/gpu/drm/msm/msm_gem.c
+@@ -977,10 +977,8 @@ int msm_gem_new_handle(struct drm_device *dev, struct drm_file *file,
+
+ static int msm_gem_new_impl(struct drm_device *dev,
+ uint32_t size, uint32_t flags,
+- struct drm_gem_object **obj,
+- bool struct_mutex_locked)
++ struct drm_gem_object **obj)
+ {
+- struct msm_drm_private *priv = dev->dev_private;
+ struct msm_gem_object *msm_obj;
+
+ switch (flags & MSM_BO_CACHE_MASK) {
+@@ -1006,15 +1004,6 @@ static int msm_gem_new_impl(struct drm_device *dev,
+ INIT_LIST_HEAD(&msm_obj->submit_entry);
+ INIT_LIST_HEAD(&msm_obj->vmas);
+
+- if (struct_mutex_locked) {
+- WARN_ON(!mutex_is_locked(&dev->struct_mutex));
+- list_add_tail(&msm_obj->mm_list, &priv->inactive_list);
+- } else {
+- mutex_lock(&dev->struct_mutex);
+- list_add_tail(&msm_obj->mm_list, &priv->inactive_list);
+- mutex_unlock(&dev->struct_mutex);
+- }
+-
+ *obj = &msm_obj->base;
+
+ return 0;
+@@ -1024,6 +1013,7 @@ static struct drm_gem_object *_msm_gem_new(struct drm_device *dev,
+ uint32_t size, uint32_t flags, bool struct_mutex_locked)
+ {
+ struct msm_drm_private *priv = dev->dev_private;
++ struct msm_gem_object *msm_obj;
+ struct drm_gem_object *obj = NULL;
+ bool use_vram = false;
+ int ret;
+@@ -1044,14 +1034,15 @@ static struct drm_gem_object *_msm_gem_new(struct drm_device *dev,
+ if (size == 0)
+ return ERR_PTR(-EINVAL);
+
+- ret = msm_gem_new_impl(dev, size, flags, &obj, struct_mutex_locked);
++ ret = msm_gem_new_impl(dev, size, flags, &obj);
+ if (ret)
+ goto fail;
+
++ msm_obj = to_msm_bo(obj);
++
+ if (use_vram) {
+ struct msm_gem_vma *vma;
+ struct page **pages;
+- struct msm_gem_object *msm_obj = to_msm_bo(obj);
+
+ mutex_lock(&msm_obj->lock);
+
+@@ -1086,6 +1077,15 @@ static struct drm_gem_object *_msm_gem_new(struct drm_device *dev,
+ mapping_set_gfp_mask(obj->filp->f_mapping, GFP_HIGHUSER);
+ }
+
++ if (struct_mutex_locked) {
++ WARN_ON(!mutex_is_locked(&dev->struct_mutex));
++ list_add_tail(&msm_obj->mm_list, &priv->inactive_list);
++ } else {
++ mutex_lock(&dev->struct_mutex);
++ list_add_tail(&msm_obj->mm_list, &priv->inactive_list);
++ mutex_unlock(&dev->struct_mutex);
++ }
++
+ return obj;
+
+ fail:
+@@ -1108,6 +1108,7 @@ struct drm_gem_object *msm_gem_new(struct drm_device *dev,
+ struct drm_gem_object *msm_gem_import(struct drm_device *dev,
+ struct dma_buf *dmabuf, struct sg_table *sgt)
+ {
++ struct msm_drm_private *priv = dev->dev_private;
+ struct msm_gem_object *msm_obj;
+ struct drm_gem_object *obj;
+ uint32_t size;
+@@ -1121,7 +1122,7 @@ struct drm_gem_object *msm_gem_import(struct drm_device *dev,
+
+ size = PAGE_ALIGN(dmabuf->size);
+
+- ret = msm_gem_new_impl(dev, size, MSM_BO_WC, &obj, false);
++ ret = msm_gem_new_impl(dev, size, MSM_BO_WC, &obj);
+ if (ret)
+ goto fail;
+
+@@ -1146,6 +1147,11 @@ struct drm_gem_object *msm_gem_import(struct drm_device *dev,
+ }
+
+ mutex_unlock(&msm_obj->lock);
++
++ mutex_lock(&dev->struct_mutex);
++ list_add_tail(&msm_obj->mm_list, &priv->inactive_list);
++ mutex_unlock(&dev->struct_mutex);
++
+ return obj;
+
+ fail:
+diff --git a/drivers/gpu/drm/nouveau/dispnv50/head.c b/drivers/gpu/drm/nouveau/dispnv50/head.c
+index 8f6455697ba7..ed6819519f6d 100644
+--- a/drivers/gpu/drm/nouveau/dispnv50/head.c
++++ b/drivers/gpu/drm/nouveau/dispnv50/head.c
+@@ -84,18 +84,20 @@ nv50_head_atomic_check_dither(struct nv50_head_atom *armh,
+ {
+ u32 mode = 0x00;
+
+- if (asyc->dither.mode == DITHERING_MODE_AUTO) {
+- if (asyh->base.depth > asyh->or.bpc * 3)
+- mode = DITHERING_MODE_DYNAMIC2X2;
+- } else {
+- mode = asyc->dither.mode;
+- }
++ if (asyc->dither.mode) {
++ if (asyc->dither.mode == DITHERING_MODE_AUTO) {
++ if (asyh->base.depth > asyh->or.bpc * 3)
++ mode = DITHERING_MODE_DYNAMIC2X2;
++ } else {
++ mode = asyc->dither.mode;
++ }
+
+- if (asyc->dither.depth == DITHERING_DEPTH_AUTO) {
+- if (asyh->or.bpc >= 8)
+- mode |= DITHERING_DEPTH_8BPC;
+- } else {
+- mode |= asyc->dither.depth;
++ if (asyc->dither.depth == DITHERING_DEPTH_AUTO) {
++ if (asyh->or.bpc >= 8)
++ mode |= DITHERING_DEPTH_8BPC;
++ } else {
++ mode |= asyc->dither.depth;
++ }
+ }
+
+ asyh->dither.enable = mode;
+diff --git a/drivers/gpu/drm/nouveau/nouveau_debugfs.c b/drivers/gpu/drm/nouveau/nouveau_debugfs.c
+index 15a3d40edf02..3e15a9d5e8fa 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_debugfs.c
++++ b/drivers/gpu/drm/nouveau/nouveau_debugfs.c
+@@ -54,8 +54,10 @@ nouveau_debugfs_strap_peek(struct seq_file *m, void *data)
+ int ret;
+
+ ret = pm_runtime_get_sync(drm->dev->dev);
+- if (ret < 0 && ret != -EACCES)
++ if (ret < 0 && ret != -EACCES) {
++ pm_runtime_put_autosuspend(drm->dev->dev);
+ return ret;
++ }
+
+ seq_printf(m, "0x%08x\n",
+ nvif_rd32(&drm->client.device.object, 0x101000));
+diff --git a/drivers/gpu/drm/nouveau/nouveau_drm.c b/drivers/gpu/drm/nouveau/nouveau_drm.c
+index ca4087f5a15b..c484d21820c9 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_drm.c
++++ b/drivers/gpu/drm/nouveau/nouveau_drm.c
+@@ -1051,8 +1051,10 @@ nouveau_drm_open(struct drm_device *dev, struct drm_file *fpriv)
+
+ /* need to bring up power immediately if opening device */
+ ret = pm_runtime_get_sync(dev->dev);
+- if (ret < 0 && ret != -EACCES)
++ if (ret < 0 && ret != -EACCES) {
++ pm_runtime_put_autosuspend(dev->dev);
+ return ret;
++ }
+
+ get_task_comm(tmpname, current);
+ snprintf(name, sizeof(name), "%s[%d]", tmpname, pid_nr(fpriv->pid));
+@@ -1134,8 +1136,10 @@ nouveau_drm_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
+ long ret;
+
+ ret = pm_runtime_get_sync(dev->dev);
+- if (ret < 0 && ret != -EACCES)
++ if (ret < 0 && ret != -EACCES) {
++ pm_runtime_put_autosuspend(dev->dev);
+ return ret;
++ }
+
+ switch (_IOC_NR(cmd) - DRM_COMMAND_BASE) {
+ case DRM_NOUVEAU_NVIF:
+diff --git a/drivers/gpu/drm/nouveau/nouveau_gem.c b/drivers/gpu/drm/nouveau/nouveau_gem.c
+index f5ece1f94973..f941ce8f81e3 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_gem.c
++++ b/drivers/gpu/drm/nouveau/nouveau_gem.c
+@@ -45,8 +45,10 @@ nouveau_gem_object_del(struct drm_gem_object *gem)
+ int ret;
+
+ ret = pm_runtime_get_sync(dev);
+- if (WARN_ON(ret < 0 && ret != -EACCES))
++ if (WARN_ON(ret < 0 && ret != -EACCES)) {
++ pm_runtime_put_autosuspend(dev);
+ return;
++ }
+
+ if (gem->import_attach)
+ drm_prime_gem_destroy(gem, nvbo->bo.sg);
+diff --git a/drivers/gpu/drm/nouveau/nouveau_sgdma.c b/drivers/gpu/drm/nouveau/nouveau_sgdma.c
+index feaac908efed..34403b810dba 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_sgdma.c
++++ b/drivers/gpu/drm/nouveau/nouveau_sgdma.c
+@@ -96,12 +96,9 @@ nouveau_sgdma_create_ttm(struct ttm_buffer_object *bo, uint32_t page_flags)
+ else
+ nvbe->ttm.ttm.func = &nv50_sgdma_backend;
+
+- if (ttm_dma_tt_init(&nvbe->ttm, bo, page_flags))
+- /*
+- * A failing ttm_dma_tt_init() will call ttm_tt_destroy()
+- * and thus our nouveau_sgdma_destroy() hook, so we don't need
+- * to free nvbe here.
+- */
++ if (ttm_dma_tt_init(&nvbe->ttm, bo, page_flags)) {
++ kfree(nvbe);
+ return NULL;
++ }
+ return &nvbe->ttm.ttm;
+ }
+diff --git a/drivers/gpu/drm/panel/panel-simple.c b/drivers/gpu/drm/panel/panel-simple.c
+index db91b3c031a1..346e3f9fd505 100644
+--- a/drivers/gpu/drm/panel/panel-simple.c
++++ b/drivers/gpu/drm/panel/panel-simple.c
+@@ -2093,7 +2093,7 @@ static const struct drm_display_mode lg_lb070wv8_mode = {
+ static const struct panel_desc lg_lb070wv8 = {
+ .modes = &lg_lb070wv8_mode,
+ .num_modes = 1,
+- .bpc = 16,
++ .bpc = 8,
+ .size = {
+ .width = 151,
+ .height = 91,
+diff --git a/drivers/gpu/drm/panfrost/panfrost_job.c b/drivers/gpu/drm/panfrost/panfrost_job.c
+index 7914b1570841..f9519afca29d 100644
+--- a/drivers/gpu/drm/panfrost/panfrost_job.c
++++ b/drivers/gpu/drm/panfrost/panfrost_job.c
+@@ -145,6 +145,8 @@ static void panfrost_job_hw_submit(struct panfrost_job *job, int js)
+ u64 jc_head = job->jc;
+ int ret;
+
++ panfrost_devfreq_record_busy(pfdev);
++
+ ret = pm_runtime_get_sync(pfdev->dev);
+ if (ret < 0)
+ return;
+@@ -155,7 +157,6 @@ static void panfrost_job_hw_submit(struct panfrost_job *job, int js)
+ }
+
+ cfg = panfrost_mmu_as_get(pfdev, &job->file_priv->mmu);
+- panfrost_devfreq_record_busy(pfdev);
+
+ job_write(pfdev, JS_HEAD_NEXT_LO(js), jc_head & 0xFFFFFFFF);
+ job_write(pfdev, JS_HEAD_NEXT_HI(js), jc_head >> 32);
+@@ -410,12 +411,12 @@ static void panfrost_job_timedout(struct drm_sched_job *sched_job)
+ for (i = 0; i < NUM_JOB_SLOTS; i++) {
+ if (pfdev->jobs[i]) {
+ pm_runtime_put_noidle(pfdev->dev);
++ panfrost_devfreq_record_idle(pfdev);
+ pfdev->jobs[i] = NULL;
+ }
+ }
+ spin_unlock_irqrestore(&pfdev->js->job_lock, flags);
+
+- panfrost_devfreq_record_idle(pfdev);
+ panfrost_device_reset(pfdev);
+
+ for (i = 0; i < NUM_JOB_SLOTS; i++)
+diff --git a/drivers/gpu/drm/radeon/ci_dpm.c b/drivers/gpu/drm/radeon/ci_dpm.c
+index 30b5a59353c5..ddc9c034ff9e 100644
+--- a/drivers/gpu/drm/radeon/ci_dpm.c
++++ b/drivers/gpu/drm/radeon/ci_dpm.c
+@@ -4365,7 +4365,7 @@ static int ci_set_mc_special_registers(struct radeon_device *rdev,
+ table->mc_reg_table_entry[k].mc_data[j] |= 0x100;
+ }
+ j++;
+- if (j > SMU7_DISCRETE_MC_REGISTER_ARRAY_SIZE)
++ if (j >= SMU7_DISCRETE_MC_REGISTER_ARRAY_SIZE)
+ return -EINVAL;
+
+ if (!pi->mem_gddr5) {
+diff --git a/drivers/gpu/drm/radeon/radeon_display.c b/drivers/gpu/drm/radeon/radeon_display.c
+index 35db79a168bf..df1a7eb73651 100644
+--- a/drivers/gpu/drm/radeon/radeon_display.c
++++ b/drivers/gpu/drm/radeon/radeon_display.c
+@@ -635,8 +635,10 @@ radeon_crtc_set_config(struct drm_mode_set *set,
+ dev = set->crtc->dev;
+
+ ret = pm_runtime_get_sync(dev->dev);
+- if (ret < 0)
++ if (ret < 0) {
++ pm_runtime_put_autosuspend(dev->dev);
+ return ret;
++ }
+
+ ret = drm_crtc_helper_set_config(set, ctx);
+
+diff --git a/drivers/gpu/drm/radeon/radeon_drv.c b/drivers/gpu/drm/radeon/radeon_drv.c
+index 59f8186a2415..6f0d1971099b 100644
+--- a/drivers/gpu/drm/radeon/radeon_drv.c
++++ b/drivers/gpu/drm/radeon/radeon_drv.c
+@@ -171,12 +171,7 @@ int radeon_no_wb;
+ int radeon_modeset = -1;
+ int radeon_dynclks = -1;
+ int radeon_r4xx_atom = 0;
+-#ifdef __powerpc__
+-/* Default to PCI on PowerPC (fdo #95017) */
+ int radeon_agpmode = -1;
+-#else
+-int radeon_agpmode = 0;
+-#endif
+ int radeon_vram_limit = 0;
+ int radeon_gart_size = -1; /* auto */
+ int radeon_benchmarking = 0;
+@@ -549,8 +544,10 @@ long radeon_drm_ioctl(struct file *filp,
+ long ret;
+ dev = file_priv->minor->dev;
+ ret = pm_runtime_get_sync(dev->dev);
+- if (ret < 0)
++ if (ret < 0) {
++ pm_runtime_put_autosuspend(dev->dev);
+ return ret;
++ }
+
+ ret = drm_ioctl(filp, cmd, arg);
+
+diff --git a/drivers/gpu/drm/radeon/radeon_kms.c b/drivers/gpu/drm/radeon/radeon_kms.c
+index 58176db85952..779e4cd86245 100644
+--- a/drivers/gpu/drm/radeon/radeon_kms.c
++++ b/drivers/gpu/drm/radeon/radeon_kms.c
+@@ -638,8 +638,10 @@ int radeon_driver_open_kms(struct drm_device *dev, struct drm_file *file_priv)
+ file_priv->driver_priv = NULL;
+
+ r = pm_runtime_get_sync(dev->dev);
+- if (r < 0)
++ if (r < 0) {
++ pm_runtime_put_autosuspend(dev->dev);
+ return r;
++ }
+
+ /* new gpu have virtual address space support */
+ if (rdev->family >= CHIP_CAYMAN) {
+diff --git a/drivers/gpu/drm/stm/ltdc.c b/drivers/gpu/drm/stm/ltdc.c
+index df585fe64f61..60ffe5bbc129 100644
+--- a/drivers/gpu/drm/stm/ltdc.c
++++ b/drivers/gpu/drm/stm/ltdc.c
+@@ -425,9 +425,12 @@ static void ltdc_crtc_atomic_enable(struct drm_crtc *crtc,
+ struct drm_crtc_state *old_state)
+ {
+ struct ltdc_device *ldev = crtc_to_ltdc(crtc);
++ struct drm_device *ddev = crtc->dev;
+
+ DRM_DEBUG_DRIVER("\n");
+
++ pm_runtime_get_sync(ddev->dev);
++
+ /* Sets the background color value */
+ reg_write(ldev->regs, LTDC_BCCR, BCCR_BCBLACK);
+
+diff --git a/drivers/gpu/drm/tilcdc/tilcdc_panel.c b/drivers/gpu/drm/tilcdc/tilcdc_panel.c
+index 5584e656b857..8c4fd1aa4c2d 100644
+--- a/drivers/gpu/drm/tilcdc/tilcdc_panel.c
++++ b/drivers/gpu/drm/tilcdc/tilcdc_panel.c
+@@ -143,12 +143,16 @@ static int panel_connector_get_modes(struct drm_connector *connector)
+ int i;
+
+ for (i = 0; i < timings->num_timings; i++) {
+- struct drm_display_mode *mode = drm_mode_create(dev);
++ struct drm_display_mode *mode;
+ struct videomode vm;
+
+ if (videomode_from_timings(timings, &vm, i))
+ break;
+
++ mode = drm_mode_create(dev);
++ if (!mode)
++ break;
++
+ drm_display_mode_from_videomode(&vm, mode);
+
+ mode->type = DRM_MODE_TYPE_DRIVER;
+diff --git a/drivers/gpu/drm/ttm/ttm_tt.c b/drivers/gpu/drm/ttm/ttm_tt.c
+index 2ec448e1d663..9f296b9da05b 100644
+--- a/drivers/gpu/drm/ttm/ttm_tt.c
++++ b/drivers/gpu/drm/ttm/ttm_tt.c
+@@ -242,7 +242,6 @@ int ttm_tt_init(struct ttm_tt *ttm, struct ttm_buffer_object *bo,
+ ttm_tt_init_fields(ttm, bo, page_flags);
+
+ if (ttm_tt_alloc_page_directory(ttm)) {
+- ttm_tt_destroy(ttm);
+ pr_err("Failed allocating page table\n");
+ return -ENOMEM;
+ }
+@@ -266,7 +265,6 @@ int ttm_dma_tt_init(struct ttm_dma_tt *ttm_dma, struct ttm_buffer_object *bo,
+
+ INIT_LIST_HEAD(&ttm_dma->pages_list);
+ if (ttm_dma_tt_alloc_page_directory(ttm_dma)) {
+- ttm_tt_destroy(ttm);
+ pr_err("Failed allocating page table\n");
+ return -ENOMEM;
+ }
+@@ -288,7 +286,6 @@ int ttm_sg_tt_init(struct ttm_dma_tt *ttm_dma, struct ttm_buffer_object *bo,
+ else
+ ret = ttm_dma_tt_alloc_page_directory(ttm_dma);
+ if (ret) {
+- ttm_tt_destroy(ttm);
+ pr_err("Failed allocating page table\n");
+ return -ENOMEM;
+ }
+diff --git a/drivers/gpu/drm/xen/xen_drm_front.c b/drivers/gpu/drm/xen/xen_drm_front.c
+index 374142018171..09894a1d343f 100644
+--- a/drivers/gpu/drm/xen/xen_drm_front.c
++++ b/drivers/gpu/drm/xen/xen_drm_front.c
+@@ -400,8 +400,8 @@ static int xen_drm_drv_dumb_create(struct drm_file *filp,
+ args->size = args->pitch * args->height;
+
+ obj = xen_drm_front_gem_create(dev, args->size);
+- if (IS_ERR_OR_NULL(obj)) {
+- ret = PTR_ERR_OR_ZERO(obj);
++ if (IS_ERR(obj)) {
++ ret = PTR_ERR(obj);
+ goto fail;
+ }
+
+diff --git a/drivers/gpu/drm/xen/xen_drm_front_gem.c b/drivers/gpu/drm/xen/xen_drm_front_gem.c
+index f0b85e094111..4ec8a49241e1 100644
+--- a/drivers/gpu/drm/xen/xen_drm_front_gem.c
++++ b/drivers/gpu/drm/xen/xen_drm_front_gem.c
+@@ -83,7 +83,7 @@ static struct xen_gem_object *gem_create(struct drm_device *dev, size_t size)
+
+ size = round_up(size, PAGE_SIZE);
+ xen_obj = gem_create_obj(dev, size);
+- if (IS_ERR_OR_NULL(xen_obj))
++ if (IS_ERR(xen_obj))
+ return xen_obj;
+
+ if (drm_info->front_info->cfg.be_alloc) {
+@@ -117,7 +117,7 @@ static struct xen_gem_object *gem_create(struct drm_device *dev, size_t size)
+ */
+ xen_obj->num_pages = DIV_ROUND_UP(size, PAGE_SIZE);
+ xen_obj->pages = drm_gem_get_pages(&xen_obj->base);
+- if (IS_ERR_OR_NULL(xen_obj->pages)) {
++ if (IS_ERR(xen_obj->pages)) {
+ ret = PTR_ERR(xen_obj->pages);
+ xen_obj->pages = NULL;
+ goto fail;
+@@ -136,7 +136,7 @@ struct drm_gem_object *xen_drm_front_gem_create(struct drm_device *dev,
+ struct xen_gem_object *xen_obj;
+
+ xen_obj = gem_create(dev, size);
+- if (IS_ERR_OR_NULL(xen_obj))
++ if (IS_ERR(xen_obj))
+ return ERR_CAST(xen_obj);
+
+ return &xen_obj->base;
+@@ -194,7 +194,7 @@ xen_drm_front_gem_import_sg_table(struct drm_device *dev,
+
+ size = attach->dmabuf->size;
+ xen_obj = gem_create_obj(dev, size);
+- if (IS_ERR_OR_NULL(xen_obj))
++ if (IS_ERR(xen_obj))
+ return ERR_CAST(xen_obj);
+
+ ret = gem_alloc_pages_array(xen_obj, size);
+diff --git a/drivers/gpu/drm/xen/xen_drm_front_kms.c b/drivers/gpu/drm/xen/xen_drm_front_kms.c
+index 78096bbcd226..ef11b1e4de39 100644
+--- a/drivers/gpu/drm/xen/xen_drm_front_kms.c
++++ b/drivers/gpu/drm/xen/xen_drm_front_kms.c
+@@ -60,7 +60,7 @@ fb_create(struct drm_device *dev, struct drm_file *filp,
+ int ret;
+
+ fb = drm_gem_fb_create_with_funcs(dev, filp, mode_cmd, &fb_funcs);
+- if (IS_ERR_OR_NULL(fb))
++ if (IS_ERR(fb))
+ return fb;
+
+ gem_obj = fb->obj[0];
+diff --git a/drivers/gpu/host1x/debug.c b/drivers/gpu/host1x/debug.c
+index c0392672a842..1b4997bda1c7 100644
+--- a/drivers/gpu/host1x/debug.c
++++ b/drivers/gpu/host1x/debug.c
+@@ -16,6 +16,8 @@
+ #include "debug.h"
+ #include "channel.h"
+
++static DEFINE_MUTEX(debug_lock);
++
+ unsigned int host1x_debug_trace_cmdbuf;
+
+ static pid_t host1x_debug_force_timeout_pid;
+@@ -52,12 +54,14 @@ static int show_channel(struct host1x_channel *ch, void *data, bool show_fifo)
+ struct output *o = data;
+
+ mutex_lock(&ch->cdma.lock);
++ mutex_lock(&debug_lock);
+
+ if (show_fifo)
+ host1x_hw_show_channel_fifo(m, ch, o);
+
+ host1x_hw_show_channel_cdma(m, ch, o);
+
++ mutex_unlock(&debug_lock);
+ mutex_unlock(&ch->cdma.lock);
+
+ return 0;
+diff --git a/drivers/gpu/ipu-v3/ipu-common.c b/drivers/gpu/ipu-v3/ipu-common.c
+index ee2a025e54cf..b3dae9ec1a38 100644
+--- a/drivers/gpu/ipu-v3/ipu-common.c
++++ b/drivers/gpu/ipu-v3/ipu-common.c
+@@ -124,6 +124,8 @@ enum ipu_color_space ipu_pixelformat_to_colorspace(u32 pixelformat)
+ case V4L2_PIX_FMT_RGBX32:
+ case V4L2_PIX_FMT_ARGB32:
+ case V4L2_PIX_FMT_XRGB32:
++ case V4L2_PIX_FMT_RGB32:
++ case V4L2_PIX_FMT_BGR32:
+ return IPUV3_COLORSPACE_RGB;
+ default:
+ return IPUV3_COLORSPACE_UNKNOWN;
+diff --git a/drivers/hid/hid-input.c b/drivers/hid/hid-input.c
+index dea9cc65bf80..e8641ce677e4 100644
+--- a/drivers/hid/hid-input.c
++++ b/drivers/hid/hid-input.c
+@@ -350,13 +350,13 @@ static int hidinput_query_battery_capacity(struct hid_device *dev)
+ u8 *buf;
+ int ret;
+
+- buf = kmalloc(2, GFP_KERNEL);
++ buf = kmalloc(4, GFP_KERNEL);
+ if (!buf)
+ return -ENOMEM;
+
+- ret = hid_hw_raw_request(dev, dev->battery_report_id, buf, 2,
++ ret = hid_hw_raw_request(dev, dev->battery_report_id, buf, 4,
+ dev->battery_report_type, HID_REQ_GET_REPORT);
+- if (ret != 2) {
++ if (ret < 2) {
+ kfree(buf);
+ return -ENODATA;
+ }
+diff --git a/drivers/hwtracing/coresight/coresight-etm4x.c b/drivers/hwtracing/coresight/coresight-etm4x.c
+index d59e4b1e5ce5..13c362cddd6a 100644
+--- a/drivers/hwtracing/coresight/coresight-etm4x.c
++++ b/drivers/hwtracing/coresight/coresight-etm4x.c
+@@ -507,6 +507,12 @@ static void etm4_disable_hw(void *info)
+ readl_relaxed(drvdata->base + TRCSSCSRn(i));
+ }
+
++ /* read back the current counter values */
++ for (i = 0; i < drvdata->nr_cntr; i++) {
++ config->cntr_val[i] =
++ readl_relaxed(drvdata->base + TRCCNTVRn(i));
++ }
++
+ coresight_disclaim_device_unlocked(drvdata->base);
+
+ CS_LOCK(drvdata->base);
+@@ -1207,8 +1213,8 @@ static int etm4_cpu_save(struct etmv4_drvdata *drvdata)
+ }
+
+ for (i = 0; i < drvdata->nr_addr_cmp * 2; i++) {
+- state->trcacvr[i] = readl(drvdata->base + TRCACVRn(i));
+- state->trcacatr[i] = readl(drvdata->base + TRCACATRn(i));
++ state->trcacvr[i] = readq(drvdata->base + TRCACVRn(i));
++ state->trcacatr[i] = readq(drvdata->base + TRCACATRn(i));
+ }
+
+ /*
+@@ -1219,10 +1225,10 @@ static int etm4_cpu_save(struct etmv4_drvdata *drvdata)
+ */
+
+ for (i = 0; i < drvdata->numcidc; i++)
+- state->trccidcvr[i] = readl(drvdata->base + TRCCIDCVRn(i));
++ state->trccidcvr[i] = readq(drvdata->base + TRCCIDCVRn(i));
+
+ for (i = 0; i < drvdata->numvmidc; i++)
+- state->trcvmidcvr[i] = readl(drvdata->base + TRCVMIDCVRn(i));
++ state->trcvmidcvr[i] = readq(drvdata->base + TRCVMIDCVRn(i));
+
+ state->trccidcctlr0 = readl(drvdata->base + TRCCIDCCTLR0);
+ state->trccidcctlr1 = readl(drvdata->base + TRCCIDCCTLR1);
+@@ -1320,18 +1326,18 @@ static void etm4_cpu_restore(struct etmv4_drvdata *drvdata)
+ }
+
+ for (i = 0; i < drvdata->nr_addr_cmp * 2; i++) {
+- writel_relaxed(state->trcacvr[i],
++ writeq_relaxed(state->trcacvr[i],
+ drvdata->base + TRCACVRn(i));
+- writel_relaxed(state->trcacatr[i],
++ writeq_relaxed(state->trcacatr[i],
+ drvdata->base + TRCACATRn(i));
+ }
+
+ for (i = 0; i < drvdata->numcidc; i++)
+- writel_relaxed(state->trccidcvr[i],
++ writeq_relaxed(state->trccidcvr[i],
+ drvdata->base + TRCCIDCVRn(i));
+
+ for (i = 0; i < drvdata->numvmidc; i++)
+- writel_relaxed(state->trcvmidcvr[i],
++ writeq_relaxed(state->trcvmidcvr[i],
+ drvdata->base + TRCVMIDCVRn(i));
+
+ writel_relaxed(state->trccidcctlr0, drvdata->base + TRCCIDCCTLR0);
+diff --git a/drivers/hwtracing/coresight/coresight-etm4x.h b/drivers/hwtracing/coresight/coresight-etm4x.h
+index 4a695bf90582..47729e04aac7 100644
+--- a/drivers/hwtracing/coresight/coresight-etm4x.h
++++ b/drivers/hwtracing/coresight/coresight-etm4x.h
+@@ -133,7 +133,7 @@
+ #define ETMv4_MAX_CTXID_CMP 8
+ #define ETM_MAX_VMID_CMP 8
+ #define ETM_MAX_PE_CMP 8
+-#define ETM_MAX_RES_SEL 16
++#define ETM_MAX_RES_SEL 32
+ #define ETM_MAX_SS_CMP 8
+
+ #define ETM_ARCH_V4 0x40
+@@ -325,7 +325,7 @@ struct etmv4_save_state {
+ u32 trccntctlr[ETMv4_MAX_CNTR];
+ u32 trccntvr[ETMv4_MAX_CNTR];
+
+- u32 trcrsctlr[ETM_MAX_RES_SEL * 2];
++ u32 trcrsctlr[ETM_MAX_RES_SEL];
+
+ u32 trcssccr[ETM_MAX_SS_CMP];
+ u32 trcsscsr[ETM_MAX_SS_CMP];
+@@ -334,7 +334,7 @@ struct etmv4_save_state {
+ u64 trcacvr[ETM_MAX_SINGLE_ADDR_CMP];
+ u64 trcacatr[ETM_MAX_SINGLE_ADDR_CMP];
+ u64 trccidcvr[ETMv4_MAX_CTXID_CMP];
+- u32 trcvmidcvr[ETM_MAX_VMID_CMP];
++ u64 trcvmidcvr[ETM_MAX_VMID_CMP];
+ u32 trccidcctlr0;
+ u32 trccidcctlr1;
+ u32 trcvmidcctlr0;
+diff --git a/drivers/hwtracing/coresight/coresight-tmc-etf.c b/drivers/hwtracing/coresight/coresight-tmc-etf.c
+index 36cce2bfb744..6375504ba8b0 100644
+--- a/drivers/hwtracing/coresight/coresight-tmc-etf.c
++++ b/drivers/hwtracing/coresight/coresight-tmc-etf.c
+@@ -639,15 +639,14 @@ int tmc_read_unprepare_etb(struct tmc_drvdata *drvdata)
+
+ spin_lock_irqsave(&drvdata->spinlock, flags);
+
+- /* There is no point in reading a TMC in HW FIFO mode */
+- mode = readl_relaxed(drvdata->base + TMC_MODE);
+- if (mode != TMC_MODE_CIRCULAR_BUFFER) {
+- spin_unlock_irqrestore(&drvdata->spinlock, flags);
+- return -EINVAL;
+- }
+-
+ /* Re-enable the TMC if need be */
+ if (drvdata->mode == CS_MODE_SYSFS) {
++ /* There is no point in reading a TMC in HW FIFO mode */
++ mode = readl_relaxed(drvdata->base + TMC_MODE);
++ if (mode != TMC_MODE_CIRCULAR_BUFFER) {
++ spin_unlock_irqrestore(&drvdata->spinlock, flags);
++ return -EINVAL;
++ }
+ /*
+ * The trace run will continue with the same allocated trace
+ * buffer. As such zero-out the buffer so that we don't end
+diff --git a/drivers/iio/amplifiers/ad8366.c b/drivers/iio/amplifiers/ad8366.c
+index 62167b87caea..8345ba65d41d 100644
+--- a/drivers/iio/amplifiers/ad8366.c
++++ b/drivers/iio/amplifiers/ad8366.c
+@@ -262,8 +262,11 @@ static int ad8366_probe(struct spi_device *spi)
+ case ID_ADA4961:
+ case ID_ADL5240:
+ case ID_HMC1119:
+- st->reset_gpio = devm_gpiod_get(&spi->dev, "reset",
+- GPIOD_OUT_HIGH);
++ st->reset_gpio = devm_gpiod_get_optional(&spi->dev, "reset", GPIOD_OUT_HIGH);
++ if (IS_ERR(st->reset_gpio)) {
++ ret = PTR_ERR(st->reset_gpio);
++ goto error_disable_reg;
++ }
+ indio_dev->channels = ada4961_channels;
+ indio_dev->num_channels = ARRAY_SIZE(ada4961_channels);
+ break;
+diff --git a/drivers/infiniband/core/device.c b/drivers/infiniband/core/device.c
+index d0b3d35ad3e4..0fe3c3eb3dfd 100644
+--- a/drivers/infiniband/core/device.c
++++ b/drivers/infiniband/core/device.c
+@@ -1327,6 +1327,10 @@ out:
+ return ret;
+ }
+
++static void prevent_dealloc_device(struct ib_device *ib_dev)
++{
++}
++
+ /**
+ * ib_register_device - Register an IB device with IB core
+ * @device: Device to register
+@@ -1396,11 +1400,11 @@ int ib_register_device(struct ib_device *device, const char *name)
+ * possibility for a parallel unregistration along with this
+ * error flow. Since we have a refcount here we know any
+ * parallel flow is stopped in disable_device and will see the
+- * NULL pointers, causing the responsibility to
++ * special dealloc_driver pointer, causing the responsibility to
+ * ib_dealloc_device() to revert back to this thread.
+ */
+ dealloc_fn = device->ops.dealloc_driver;
+- device->ops.dealloc_driver = NULL;
++ device->ops.dealloc_driver = prevent_dealloc_device;
+ ib_device_put(device);
+ __ib_unregister_device(device);
+ device->ops.dealloc_driver = dealloc_fn;
+@@ -1448,7 +1452,8 @@ static void __ib_unregister_device(struct ib_device *ib_dev)
+ * Drivers using the new flow may not call ib_dealloc_device except
+ * in error unwind prior to registration success.
+ */
+- if (ib_dev->ops.dealloc_driver) {
++ if (ib_dev->ops.dealloc_driver &&
++ ib_dev->ops.dealloc_driver != prevent_dealloc_device) {
+ WARN_ON(kref_read(&ib_dev->dev.kobj.kref) <= 1);
+ ib_dealloc_device(ib_dev);
+ }
+diff --git a/drivers/infiniband/core/nldev.c b/drivers/infiniband/core/nldev.c
+index e16105be2eb2..98cd6403ca60 100644
+--- a/drivers/infiniband/core/nldev.c
++++ b/drivers/infiniband/core/nldev.c
+@@ -738,9 +738,6 @@ static int fill_stat_counter_qps(struct sk_buff *msg,
+ xa_lock(&rt->xa);
+ xa_for_each(&rt->xa, id, res) {
+ qp = container_of(res, struct ib_qp, res);
+- if (qp->qp_type == IB_QPT_RAW_PACKET && !capable(CAP_NET_RAW))
+- continue;
+-
+ if (!qp->counter || (qp->counter->id != counter->id))
+ continue;
+
+diff --git a/drivers/infiniband/core/verbs.c b/drivers/infiniband/core/verbs.c
+index 56a71337112c..cf45fd704671 100644
+--- a/drivers/infiniband/core/verbs.c
++++ b/drivers/infiniband/core/verbs.c
+@@ -1659,7 +1659,7 @@ static int _ib_modify_qp(struct ib_qp *qp, struct ib_qp_attr *attr,
+ if (!(rdma_protocol_ib(qp->device,
+ attr->alt_ah_attr.port_num) &&
+ rdma_protocol_ib(qp->device, port))) {
+- ret = EINVAL;
++ ret = -EINVAL;
+ goto out;
+ }
+ }
+diff --git a/drivers/infiniband/hw/qedr/qedr.h b/drivers/infiniband/hw/qedr/qedr.h
+index 5488dbd59d3c..8cf462a3d0f6 100644
+--- a/drivers/infiniband/hw/qedr/qedr.h
++++ b/drivers/infiniband/hw/qedr/qedr.h
+@@ -345,10 +345,10 @@ struct qedr_srq_hwq_info {
+ u32 wqe_prod;
+ u32 sge_prod;
+ u32 wr_prod_cnt;
+- u32 wr_cons_cnt;
++ atomic_t wr_cons_cnt;
+ u32 num_elems;
+
+- u32 *virt_prod_pair_addr;
++ struct rdma_srq_producers *virt_prod_pair_addr;
+ dma_addr_t phy_prod_pair_addr;
+ };
+
+diff --git a/drivers/infiniband/hw/qedr/verbs.c b/drivers/infiniband/hw/qedr/verbs.c
+index a5bd3adaf90a..ac93447c9524 100644
+--- a/drivers/infiniband/hw/qedr/verbs.c
++++ b/drivers/infiniband/hw/qedr/verbs.c
+@@ -3688,7 +3688,7 @@ static u32 qedr_srq_elem_left(struct qedr_srq_hwq_info *hw_srq)
+ * count and consumer count and subtract it from max
+ * work request supported so that we get elements left.
+ */
+- used = hw_srq->wr_prod_cnt - hw_srq->wr_cons_cnt;
++ used = hw_srq->wr_prod_cnt - (u32)atomic_read(&hw_srq->wr_cons_cnt);
+
+ return hw_srq->max_wr - used;
+ }
+@@ -3703,7 +3703,6 @@ int qedr_post_srq_recv(struct ib_srq *ibsrq, const struct ib_recv_wr *wr,
+ unsigned long flags;
+ int status = 0;
+ u32 num_sge;
+- u32 offset;
+
+ spin_lock_irqsave(&srq->lock, flags);
+
+@@ -3716,7 +3715,8 @@ int qedr_post_srq_recv(struct ib_srq *ibsrq, const struct ib_recv_wr *wr,
+ if (!qedr_srq_elem_left(hw_srq) ||
+ wr->num_sge > srq->hw_srq.max_sges) {
+ DP_ERR(dev, "Can't post WR (%d,%d) || (%d > %d)\n",
+- hw_srq->wr_prod_cnt, hw_srq->wr_cons_cnt,
++ hw_srq->wr_prod_cnt,
++ atomic_read(&hw_srq->wr_cons_cnt),
+ wr->num_sge, srq->hw_srq.max_sges);
+ status = -ENOMEM;
+ *bad_wr = wr;
+@@ -3750,22 +3750,20 @@ int qedr_post_srq_recv(struct ib_srq *ibsrq, const struct ib_recv_wr *wr,
+ hw_srq->sge_prod++;
+ }
+
+- /* Flush WQE and SGE information before
++ /* Update WQE and SGE information before
+ * updating producer.
+ */
+- wmb();
++ dma_wmb();
+
+ /* SRQ producer is 8 bytes. Need to update SGE producer index
+ * in first 4 bytes and need to update WQE producer in
+ * next 4 bytes.
+ */
+- *srq->hw_srq.virt_prod_pair_addr = hw_srq->sge_prod;
+- offset = offsetof(struct rdma_srq_producers, wqe_prod);
+- *((u8 *)srq->hw_srq.virt_prod_pair_addr + offset) =
+- hw_srq->wqe_prod;
++ srq->hw_srq.virt_prod_pair_addr->sge_prod = hw_srq->sge_prod;
++ /* Make sure sge producer is updated first */
++ dma_wmb();
++ srq->hw_srq.virt_prod_pair_addr->wqe_prod = hw_srq->wqe_prod;
+
+- /* Flush producer after updating it. */
+- wmb();
+ wr = wr->next;
+ }
+
+@@ -4184,7 +4182,7 @@ static int process_resp_one_srq(struct qedr_dev *dev, struct qedr_qp *qp,
+ } else {
+ __process_resp_one(dev, qp, cq, wc, resp, wr_id);
+ }
+- srq->hw_srq.wr_cons_cnt++;
++ atomic_inc(&srq->hw_srq.wr_cons_cnt);
+
+ return 1;
+ }
+diff --git a/drivers/infiniband/sw/rxe/rxe_recv.c b/drivers/infiniband/sw/rxe/rxe_recv.c
+index 831ad578a7b2..46e111c218fd 100644
+--- a/drivers/infiniband/sw/rxe/rxe_recv.c
++++ b/drivers/infiniband/sw/rxe/rxe_recv.c
+@@ -330,10 +330,14 @@ err1:
+
+ static int rxe_match_dgid(struct rxe_dev *rxe, struct sk_buff *skb)
+ {
++ struct rxe_pkt_info *pkt = SKB_TO_PKT(skb);
+ const struct ib_gid_attr *gid_attr;
+ union ib_gid dgid;
+ union ib_gid *pdgid;
+
++ if (pkt->mask & RXE_LOOPBACK_MASK)
++ return 0;
++
+ if (skb->protocol == htons(ETH_P_IP)) {
+ ipv6_addr_set_v4mapped(ip_hdr(skb)->daddr,
+ (struct in6_addr *)&dgid);
+@@ -366,7 +370,7 @@ void rxe_rcv(struct sk_buff *skb)
+ if (unlikely(skb->len < pkt->offset + RXE_BTH_BYTES))
+ goto drop;
+
+- if (unlikely(rxe_match_dgid(rxe, skb) < 0)) {
++ if (rxe_match_dgid(rxe, skb) < 0) {
+ pr_warn_ratelimited("failed matching dgid\n");
+ goto drop;
+ }
+diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.c b/drivers/infiniband/sw/rxe/rxe_verbs.c
+index 9dd4bd7aea92..2aaa0b592a2d 100644
+--- a/drivers/infiniband/sw/rxe/rxe_verbs.c
++++ b/drivers/infiniband/sw/rxe/rxe_verbs.c
+@@ -683,6 +683,7 @@ static int rxe_post_send_kernel(struct rxe_qp *qp, const struct ib_send_wr *wr,
+ unsigned int mask;
+ unsigned int length = 0;
+ int i;
++ struct ib_send_wr *next;
+
+ while (wr) {
+ mask = wr_opcode_mask(wr->opcode, qp);
+@@ -699,6 +700,8 @@ static int rxe_post_send_kernel(struct rxe_qp *qp, const struct ib_send_wr *wr,
+ break;
+ }
+
++ next = wr->next;
++
+ length = 0;
+ for (i = 0; i < wr->num_sge; i++)
+ length += wr->sg_list[i].length;
+@@ -709,7 +712,7 @@ static int rxe_post_send_kernel(struct rxe_qp *qp, const struct ib_send_wr *wr,
+ *bad_wr = wr;
+ break;
+ }
+- wr = wr->next;
++ wr = next;
+ }
+
+ rxe_run_task(&qp->req.task, 1);
+diff --git a/drivers/iommu/intel_irq_remapping.c b/drivers/iommu/intel_irq_remapping.c
+index 982d796b686b..6bfb283e6f28 100644
+--- a/drivers/iommu/intel_irq_remapping.c
++++ b/drivers/iommu/intel_irq_remapping.c
+@@ -628,13 +628,21 @@ out_free_table:
+
+ static void intel_teardown_irq_remapping(struct intel_iommu *iommu)
+ {
++ struct fwnode_handle *fn;
++
+ if (iommu && iommu->ir_table) {
+ if (iommu->ir_msi_domain) {
++ fn = iommu->ir_msi_domain->fwnode;
++
+ irq_domain_remove(iommu->ir_msi_domain);
++ irq_domain_free_fwnode(fn);
+ iommu->ir_msi_domain = NULL;
+ }
+ if (iommu->ir_domain) {
++ fn = iommu->ir_domain->fwnode;
++
+ irq_domain_remove(iommu->ir_domain);
++ irq_domain_free_fwnode(fn);
+ iommu->ir_domain = NULL;
+ }
+ free_pages((unsigned long)iommu->ir_table->base,
+diff --git a/drivers/irqchip/irq-bcm7038-l1.c b/drivers/irqchip/irq-bcm7038-l1.c
+index fd7c537fb42a..4127eeab10af 100644
+--- a/drivers/irqchip/irq-bcm7038-l1.c
++++ b/drivers/irqchip/irq-bcm7038-l1.c
+@@ -327,7 +327,11 @@ static int bcm7038_l1_suspend(void)
+ u32 val;
+
+ /* Wakeup interrupt should only come from the boot cpu */
++#ifdef CONFIG_SMP
+ boot_cpu = cpu_logical_map(0);
++#else
++ boot_cpu = 0;
++#endif
+
+ list_for_each_entry(intc, &bcm7038_l1_intcs_list, list) {
+ for (word = 0; word < intc->n_words; word++) {
+@@ -347,7 +351,11 @@ static void bcm7038_l1_resume(void)
+ struct bcm7038_l1_chip *intc;
+ int boot_cpu, word;
+
++#ifdef CONFIG_SMP
+ boot_cpu = cpu_logical_map(0);
++#else
++ boot_cpu = 0;
++#endif
+
+ list_for_each_entry(intc, &bcm7038_l1_intcs_list, list) {
+ for (word = 0; word < intc->n_words; word++) {
+diff --git a/drivers/irqchip/irq-gic-v3-its.c b/drivers/irqchip/irq-gic-v3-its.c
+index b99e3105bf9f..237c832acdd7 100644
+--- a/drivers/irqchip/irq-gic-v3-its.c
++++ b/drivers/irqchip/irq-gic-v3-its.c
+@@ -2690,7 +2690,7 @@ static int allocate_vpe_l1_table(void)
+ if (val & GICR_VPROPBASER_4_1_VALID)
+ goto out;
+
+- gic_data_rdist()->vpe_table_mask = kzalloc(sizeof(cpumask_t), GFP_KERNEL);
++ gic_data_rdist()->vpe_table_mask = kzalloc(sizeof(cpumask_t), GFP_ATOMIC);
+ if (!gic_data_rdist()->vpe_table_mask)
+ return -ENOMEM;
+
+@@ -2757,7 +2757,7 @@ static int allocate_vpe_l1_table(void)
+
+ pr_debug("np = %d, npg = %lld, psz = %d, epp = %d, esz = %d\n",
+ np, npg, psz, epp, esz);
+- page = alloc_pages(GFP_KERNEL | __GFP_ZERO, get_order(np * PAGE_SIZE));
++ page = alloc_pages(GFP_ATOMIC | __GFP_ZERO, get_order(np * PAGE_SIZE));
+ if (!page)
+ return -ENOMEM;
+
+diff --git a/drivers/irqchip/irq-loongson-liointc.c b/drivers/irqchip/irq-loongson-liointc.c
+index 63b61474a0cc..6ef86a334c62 100644
+--- a/drivers/irqchip/irq-loongson-liointc.c
++++ b/drivers/irqchip/irq-loongson-liointc.c
+@@ -114,6 +114,7 @@ static int liointc_set_type(struct irq_data *data, unsigned int type)
+ liointc_set_bit(gc, LIOINTC_REG_INTC_POL, mask, false);
+ break;
+ default:
++ irq_gc_unlock_irqrestore(gc, flags);
+ return -EINVAL;
+ }
+ irq_gc_unlock_irqrestore(gc, flags);
+diff --git a/drivers/irqchip/irq-mtk-sysirq.c b/drivers/irqchip/irq-mtk-sysirq.c
+index 73eae5966a40..6ff98b87e5c0 100644
+--- a/drivers/irqchip/irq-mtk-sysirq.c
++++ b/drivers/irqchip/irq-mtk-sysirq.c
+@@ -15,7 +15,7 @@
+ #include <linux/spinlock.h>
+
+ struct mtk_sysirq_chip_data {
+- spinlock_t lock;
++ raw_spinlock_t lock;
+ u32 nr_intpol_bases;
+ void __iomem **intpol_bases;
+ u32 *intpol_words;
+@@ -37,7 +37,7 @@ static int mtk_sysirq_set_type(struct irq_data *data, unsigned int type)
+ reg_index = chip_data->which_word[hwirq];
+ offset = hwirq & 0x1f;
+
+- spin_lock_irqsave(&chip_data->lock, flags);
++ raw_spin_lock_irqsave(&chip_data->lock, flags);
+ value = readl_relaxed(base + reg_index * 4);
+ if (type == IRQ_TYPE_LEVEL_LOW || type == IRQ_TYPE_EDGE_FALLING) {
+ if (type == IRQ_TYPE_LEVEL_LOW)
+@@ -53,7 +53,7 @@ static int mtk_sysirq_set_type(struct irq_data *data, unsigned int type)
+
+ data = data->parent_data;
+ ret = data->chip->irq_set_type(data, type);
+- spin_unlock_irqrestore(&chip_data->lock, flags);
++ raw_spin_unlock_irqrestore(&chip_data->lock, flags);
+ return ret;
+ }
+
+@@ -212,7 +212,7 @@ static int __init mtk_sysirq_of_init(struct device_node *node,
+ ret = -ENOMEM;
+ goto out_free_which_word;
+ }
+- spin_lock_init(&chip_data->lock);
++ raw_spin_lock_init(&chip_data->lock);
+
+ return 0;
+
+diff --git a/drivers/irqchip/irq-ti-sci-inta.c b/drivers/irqchip/irq-ti-sci-inta.c
+index 7e3ebf6ed2cd..be0a35d91796 100644
+--- a/drivers/irqchip/irq-ti-sci-inta.c
++++ b/drivers/irqchip/irq-ti-sci-inta.c
+@@ -572,7 +572,7 @@ static int ti_sci_inta_irq_domain_probe(struct platform_device *pdev)
+ res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ inta->base = devm_ioremap_resource(dev, res);
+ if (IS_ERR(inta->base))
+- return -ENODEV;
++ return PTR_ERR(inta->base);
+
+ domain = irq_domain_add_linear(dev_of_node(dev),
+ ti_sci_get_num_resources(inta->vint),
+diff --git a/drivers/leds/led-class.c b/drivers/leds/led-class.c
+index 3363a6551a70..cc3929f858b6 100644
+--- a/drivers/leds/led-class.c
++++ b/drivers/leds/led-class.c
+@@ -173,6 +173,7 @@ void led_classdev_suspend(struct led_classdev *led_cdev)
+ {
+ led_cdev->flags |= LED_SUSPENDED;
+ led_set_brightness_nopm(led_cdev, 0);
++ flush_work(&led_cdev->set_brightness_work);
+ }
+ EXPORT_SYMBOL_GPL(led_classdev_suspend);
+
+diff --git a/drivers/leds/leds-lm355x.c b/drivers/leds/leds-lm355x.c
+index a5abb499574b..129f475aebf2 100644
+--- a/drivers/leds/leds-lm355x.c
++++ b/drivers/leds/leds-lm355x.c
+@@ -165,18 +165,19 @@ static int lm355x_chip_init(struct lm355x_chip_data *chip)
+ /* input and output pins configuration */
+ switch (chip->type) {
+ case CHIP_LM3554:
+- reg_val = pdata->pin_tx2 | pdata->ntc_pin;
++ reg_val = (u32)pdata->pin_tx2 | (u32)pdata->ntc_pin;
+ ret = regmap_update_bits(chip->regmap, 0xE0, 0x28, reg_val);
+ if (ret < 0)
+ goto out;
+- reg_val = pdata->pass_mode;
++ reg_val = (u32)pdata->pass_mode;
+ ret = regmap_update_bits(chip->regmap, 0xA0, 0x04, reg_val);
+ if (ret < 0)
+ goto out;
+ break;
+
+ case CHIP_LM3556:
+- reg_val = pdata->pin_tx2 | pdata->ntc_pin | pdata->pass_mode;
++ reg_val = (u32)pdata->pin_tx2 | (u32)pdata->ntc_pin |
++ (u32)pdata->pass_mode;
+ ret = regmap_update_bits(chip->regmap, 0x0A, 0xC4, reg_val);
+ if (ret < 0)
+ goto out;
+diff --git a/drivers/macintosh/via-macii.c b/drivers/macintosh/via-macii.c
+index ac824d7b2dcf..6aa903529570 100644
+--- a/drivers/macintosh/via-macii.c
++++ b/drivers/macintosh/via-macii.c
+@@ -270,15 +270,12 @@ static int macii_autopoll(int devs)
+ unsigned long flags;
+ int err = 0;
+
++ local_irq_save(flags);
++
+ /* bit 1 == device 1, and so on. */
+ autopoll_devs = devs & 0xFFFE;
+
+- if (!autopoll_devs)
+- return 0;
+-
+- local_irq_save(flags);
+-
+- if (current_req == NULL) {
++ if (autopoll_devs && !current_req) {
+ /* Send a Talk Reg 0. The controller will repeatedly transmit
+ * this as long as it is idle.
+ */
+diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
+index a2e5a0fcd7d5..7048370331c3 100644
+--- a/drivers/md/bcache/super.c
++++ b/drivers/md/bcache/super.c
+@@ -2099,7 +2099,14 @@ found:
+ sysfs_create_link(&c->kobj, &ca->kobj, buf))
+ goto err;
+
+- if (ca->sb.seq > c->sb.seq) {
++ /*
++ * A special case is both ca->sb.seq and c->sb.seq are 0,
++ * such condition happens on a new created cache device whose
++ * super block is never flushed yet. In this case c->sb.version
++ * and other members should be updated too, otherwise we will
++ * have a mistaken super block version in cache set.
++ */
++ if (ca->sb.seq > c->sb.seq || c->sb.seq == 0) {
+ c->sb.version = ca->sb.version;
+ memcpy(c->sb.set_uuid, ca->sb.set_uuid, 16);
+ c->sb.flags = ca->sb.flags;
+diff --git a/drivers/md/md-cluster.c b/drivers/md/md-cluster.c
+index 813a99ffa86f..73fd50e77975 100644
+--- a/drivers/md/md-cluster.c
++++ b/drivers/md/md-cluster.c
+@@ -1518,6 +1518,7 @@ static void unlock_all_bitmaps(struct mddev *mddev)
+ }
+ }
+ kfree(cinfo->other_bitmap_lockres);
++ cinfo->other_bitmap_lockres = NULL;
+ }
+ }
+
+diff --git a/drivers/md/md.c b/drivers/md/md.c
+index 41eead9cbee9..d5a5c1881398 100644
+--- a/drivers/md/md.c
++++ b/drivers/md/md.c
+@@ -469,17 +469,18 @@ static blk_qc_t md_make_request(struct request_queue *q, struct bio *bio)
+ struct mddev *mddev = q->queuedata;
+ unsigned int sectors;
+
+- if (unlikely(test_bit(MD_BROKEN, &mddev->flags)) && (rw == WRITE)) {
++ if (mddev == NULL || mddev->pers == NULL) {
+ bio_io_error(bio);
+ return BLK_QC_T_NONE;
+ }
+
+- blk_queue_split(q, &bio);
+-
+- if (mddev == NULL || mddev->pers == NULL) {
++ if (unlikely(test_bit(MD_BROKEN, &mddev->flags)) && (rw == WRITE)) {
+ bio_io_error(bio);
+ return BLK_QC_T_NONE;
+ }
++
++ blk_queue_split(q, &bio);
++
+ if (mddev->ro == 1 && unlikely(rw == WRITE)) {
+ if (bio_sectors(bio) != 0)
+ bio->bi_status = BLK_STS_IOERR;
+diff --git a/drivers/media/firewire/firedtv-fw.c b/drivers/media/firewire/firedtv-fw.c
+index 97144734eb05..3f1ca40b9b98 100644
+--- a/drivers/media/firewire/firedtv-fw.c
++++ b/drivers/media/firewire/firedtv-fw.c
+@@ -272,6 +272,8 @@ static int node_probe(struct fw_unit *unit, const struct ieee1394_device_id *id)
+
+ name_len = fw_csr_string(unit->directory, CSR_MODEL,
+ name, sizeof(name));
++ if (name_len < 0)
++ return name_len;
+ for (i = ARRAY_SIZE(model_names); --i; )
+ if (strlen(model_names[i]) <= name_len &&
+ strncmp(name, model_names[i], name_len) == 0)
+diff --git a/drivers/media/i2c/tvp5150.c b/drivers/media/i2c/tvp5150.c
+index eb39cf5ea089..9df575238952 100644
+--- a/drivers/media/i2c/tvp5150.c
++++ b/drivers/media/i2c/tvp5150.c
+@@ -1664,8 +1664,10 @@ static int tvp5150_registered(struct v4l2_subdev *sd)
+ return 0;
+
+ err:
+- for (i = 0; i < decoder->connectors_num; i++)
++ for (i = 0; i < decoder->connectors_num; i++) {
+ media_device_unregister_entity(&decoder->connectors[i].ent);
++ media_entity_cleanup(&decoder->connectors[i].ent);
++ }
+ return ret;
+ #endif
+
+@@ -2248,8 +2250,10 @@ static int tvp5150_remove(struct i2c_client *c)
+
+ for (i = 0; i < decoder->connectors_num; i++)
+ v4l2_fwnode_connector_free(&decoder->connectors[i].base);
+- for (i = 0; i < decoder->connectors_num; i++)
++ for (i = 0; i < decoder->connectors_num; i++) {
+ media_device_unregister_entity(&decoder->connectors[i].ent);
++ media_entity_cleanup(&decoder->connectors[i].ent);
++ }
+ v4l2_async_unregister_subdev(sd);
+ v4l2_ctrl_handler_free(&decoder->hdl);
+ pm_runtime_disable(&c->dev);
+diff --git a/drivers/media/mc/mc-request.c b/drivers/media/mc/mc-request.c
+index e3fca436c75b..c0782fd96c59 100644
+--- a/drivers/media/mc/mc-request.c
++++ b/drivers/media/mc/mc-request.c
+@@ -296,9 +296,18 @@ int media_request_alloc(struct media_device *mdev, int *alloc_fd)
+ if (WARN_ON(!mdev->ops->req_alloc ^ !mdev->ops->req_free))
+ return -ENOMEM;
+
++ if (mdev->ops->req_alloc)
++ req = mdev->ops->req_alloc(mdev);
++ else
++ req = kzalloc(sizeof(*req), GFP_KERNEL);
++ if (!req)
++ return -ENOMEM;
++
+ fd = get_unused_fd_flags(O_CLOEXEC);
+- if (fd < 0)
+- return fd;
++ if (fd < 0) {
++ ret = fd;
++ goto err_free_req;
++ }
+
+ filp = anon_inode_getfile("request", &request_fops, NULL, O_CLOEXEC);
+ if (IS_ERR(filp)) {
+@@ -306,15 +315,6 @@ int media_request_alloc(struct media_device *mdev, int *alloc_fd)
+ goto err_put_fd;
+ }
+
+- if (mdev->ops->req_alloc)
+- req = mdev->ops->req_alloc(mdev);
+- else
+- req = kzalloc(sizeof(*req), GFP_KERNEL);
+- if (!req) {
+- ret = -ENOMEM;
+- goto err_fput;
+- }
+-
+ filp->private_data = req;
+ req->mdev = mdev;
+ req->state = MEDIA_REQUEST_STATE_IDLE;
+@@ -336,12 +336,15 @@ int media_request_alloc(struct media_device *mdev, int *alloc_fd)
+
+ return 0;
+
+-err_fput:
+- fput(filp);
+-
+ err_put_fd:
+ put_unused_fd(fd);
+
++err_free_req:
++ if (mdev->ops->req_free)
++ mdev->ops->req_free(req);
++ else
++ kfree(req);
++
+ return ret;
+ }
+
+diff --git a/drivers/media/platform/cros-ec-cec/cros-ec-cec.c b/drivers/media/platform/cros-ec-cec/cros-ec-cec.c
+index 0e7e2772f08f..2d95e16cd248 100644
+--- a/drivers/media/platform/cros-ec-cec/cros-ec-cec.c
++++ b/drivers/media/platform/cros-ec-cec/cros-ec-cec.c
+@@ -277,11 +277,7 @@ static int cros_ec_cec_probe(struct platform_device *pdev)
+ platform_set_drvdata(pdev, cros_ec_cec);
+ cros_ec_cec->cros_ec = cros_ec;
+
+- ret = device_init_wakeup(&pdev->dev, 1);
+- if (ret) {
+- dev_err(&pdev->dev, "failed to initialize wakeup\n");
+- return ret;
+- }
++ device_init_wakeup(&pdev->dev, 1);
+
+ cros_ec_cec->adap = cec_allocate_adapter(&cros_ec_cec_ops, cros_ec_cec,
+ DRV_NAME,
+diff --git a/drivers/media/platform/exynos4-is/media-dev.c b/drivers/media/platform/exynos4-is/media-dev.c
+index 9aaf3b8060d5..9c31d950cddf 100644
+--- a/drivers/media/platform/exynos4-is/media-dev.c
++++ b/drivers/media/platform/exynos4-is/media-dev.c
+@@ -1270,6 +1270,9 @@ static int fimc_md_get_pinctrl(struct fimc_md *fmd)
+
+ pctl->state_idle = pinctrl_lookup_state(pctl->pinctrl,
+ PINCTRL_STATE_IDLE);
++ if (IS_ERR(pctl->state_idle))
++ return PTR_ERR(pctl->state_idle);
++
+ return 0;
+ }
+
+diff --git a/drivers/media/platform/marvell-ccic/mcam-core.c b/drivers/media/platform/marvell-ccic/mcam-core.c
+index 09775b6624c6..326e79b8531c 100644
+--- a/drivers/media/platform/marvell-ccic/mcam-core.c
++++ b/drivers/media/platform/marvell-ccic/mcam-core.c
+@@ -1940,6 +1940,7 @@ int mccic_register(struct mcam_camera *cam)
+ out:
+ v4l2_async_notifier_unregister(&cam->notifier);
+ v4l2_device_unregister(&cam->v4l2_dev);
++ v4l2_async_notifier_cleanup(&cam->notifier);
+ return ret;
+ }
+ EXPORT_SYMBOL_GPL(mccic_register);
+@@ -1961,6 +1962,7 @@ void mccic_shutdown(struct mcam_camera *cam)
+ v4l2_ctrl_handler_free(&cam->ctrl_handler);
+ v4l2_async_notifier_unregister(&cam->notifier);
+ v4l2_device_unregister(&cam->v4l2_dev);
++ v4l2_async_notifier_cleanup(&cam->notifier);
+ }
+ EXPORT_SYMBOL_GPL(mccic_shutdown);
+
+diff --git a/drivers/media/platform/mtk-mdp/mtk_mdp_comp.c b/drivers/media/platform/mtk-mdp/mtk_mdp_comp.c
+index 14991685adb7..9b375d367753 100644
+--- a/drivers/media/platform/mtk-mdp/mtk_mdp_comp.c
++++ b/drivers/media/platform/mtk-mdp/mtk_mdp_comp.c
+@@ -96,6 +96,7 @@ int mtk_mdp_comp_init(struct device *dev, struct device_node *node,
+ {
+ struct device_node *larb_node;
+ struct platform_device *larb_pdev;
++ int ret;
+ int i;
+
+ if (comp_id < 0 || comp_id >= MTK_MDP_COMP_ID_MAX) {
+@@ -113,8 +114,8 @@ int mtk_mdp_comp_init(struct device *dev, struct device_node *node,
+ if (IS_ERR(comp->clk[i])) {
+ if (PTR_ERR(comp->clk[i]) != -EPROBE_DEFER)
+ dev_err(dev, "Failed to get clock\n");
+-
+- return PTR_ERR(comp->clk[i]);
++ ret = PTR_ERR(comp->clk[i]);
++ goto put_dev;
+ }
+
+ /* Only RDMA needs two clocks */
+@@ -133,20 +134,27 @@ int mtk_mdp_comp_init(struct device *dev, struct device_node *node,
+ if (!larb_node) {
+ dev_err(dev,
+ "Missing mediadek,larb phandle in %pOF node\n", node);
+- return -EINVAL;
++ ret = -EINVAL;
++ goto put_dev;
+ }
+
+ larb_pdev = of_find_device_by_node(larb_node);
+ if (!larb_pdev) {
+ dev_warn(dev, "Waiting for larb device %pOF\n", larb_node);
+ of_node_put(larb_node);
+- return -EPROBE_DEFER;
++ ret = -EPROBE_DEFER;
++ goto put_dev;
+ }
+ of_node_put(larb_node);
+
+ comp->larb_dev = &larb_pdev->dev;
+
+ return 0;
++
++put_dev:
++ of_node_put(comp->dev_node);
++
++ return ret;
+ }
+
+ void mtk_mdp_comp_deinit(struct device *dev, struct mtk_mdp_comp *comp)
+diff --git a/drivers/media/platform/omap3isp/isppreview.c b/drivers/media/platform/omap3isp/isppreview.c
+index 4dbdf3180d10..607b7685c982 100644
+--- a/drivers/media/platform/omap3isp/isppreview.c
++++ b/drivers/media/platform/omap3isp/isppreview.c
+@@ -2287,7 +2287,7 @@ static int preview_init_entities(struct isp_prev_device *prev)
+ me->ops = &preview_media_ops;
+ ret = media_entity_pads_init(me, PREV_PADS_NUM, pads);
+ if (ret < 0)
+- return ret;
++ goto error_handler_free;
+
+ preview_init_formats(sd, NULL);
+
+@@ -2320,6 +2320,8 @@ error_video_out:
+ omap3isp_video_cleanup(&prev->video_in);
+ error_video_in:
+ media_entity_cleanup(&prev->subdev.entity);
++error_handler_free:
++ v4l2_ctrl_handler_free(&prev->ctrls);
+ return ret;
+ }
+
+diff --git a/drivers/media/platform/s5p-g2d/g2d.c b/drivers/media/platform/s5p-g2d/g2d.c
+index 6932fd47071b..15bcb7f6e113 100644
+--- a/drivers/media/platform/s5p-g2d/g2d.c
++++ b/drivers/media/platform/s5p-g2d/g2d.c
+@@ -695,21 +695,13 @@ static int g2d_probe(struct platform_device *pdev)
+ vfd->lock = &dev->mutex;
+ vfd->v4l2_dev = &dev->v4l2_dev;
+ vfd->device_caps = V4L2_CAP_VIDEO_M2M | V4L2_CAP_STREAMING;
+- ret = video_register_device(vfd, VFL_TYPE_VIDEO, 0);
+- if (ret) {
+- v4l2_err(&dev->v4l2_dev, "Failed to register video device\n");
+- goto rel_vdev;
+- }
+- video_set_drvdata(vfd, dev);
+- dev->vfd = vfd;
+- v4l2_info(&dev->v4l2_dev, "device registered as /dev/video%d\n",
+- vfd->num);
++
+ platform_set_drvdata(pdev, dev);
+ dev->m2m_dev = v4l2_m2m_init(&g2d_m2m_ops);
+ if (IS_ERR(dev->m2m_dev)) {
+ v4l2_err(&dev->v4l2_dev, "Failed to init mem2mem device\n");
+ ret = PTR_ERR(dev->m2m_dev);
+- goto unreg_video_dev;
++ goto rel_vdev;
+ }
+
+ def_frame.stride = (def_frame.width * def_frame.fmt->depth) >> 3;
+@@ -717,14 +709,24 @@ static int g2d_probe(struct platform_device *pdev)
+ of_id = of_match_node(exynos_g2d_match, pdev->dev.of_node);
+ if (!of_id) {
+ ret = -ENODEV;
+- goto unreg_video_dev;
++ goto free_m2m;
+ }
+ dev->variant = (struct g2d_variant *)of_id->data;
+
++ ret = video_register_device(vfd, VFL_TYPE_VIDEO, 0);
++ if (ret) {
++ v4l2_err(&dev->v4l2_dev, "Failed to register video device\n");
++ goto free_m2m;
++ }
++ video_set_drvdata(vfd, dev);
++ dev->vfd = vfd;
++ v4l2_info(&dev->v4l2_dev, "device registered as /dev/video%d\n",
++ vfd->num);
++
+ return 0;
+
+-unreg_video_dev:
+- video_unregister_device(dev->vfd);
++free_m2m:
++ v4l2_m2m_release(dev->m2m_dev);
+ rel_vdev:
+ video_device_release(vfd);
+ unreg_v4l2_dev:
+diff --git a/drivers/media/usb/dvb-usb/Kconfig b/drivers/media/usb/dvb-usb/Kconfig
+index 1a3e5f965ae4..2d7a5c1c84af 100644
+--- a/drivers/media/usb/dvb-usb/Kconfig
++++ b/drivers/media/usb/dvb-usb/Kconfig
+@@ -150,6 +150,7 @@ config DVB_USB_CXUSB
+ config DVB_USB_CXUSB_ANALOG
+ bool "Analog support for the Conexant USB2.0 hybrid reference design"
+ depends on DVB_USB_CXUSB && VIDEO_V4L2
++ depends on VIDEO_V4L2=y || VIDEO_V4L2=DVB_USB_CXUSB
+ select VIDEO_CX25840
+ select VIDEOBUF2_VMALLOC
+ help
+diff --git a/drivers/media/usb/go7007/go7007-usb.c b/drivers/media/usb/go7007/go7007-usb.c
+index f889c9d740cd..dbf0455d5d50 100644
+--- a/drivers/media/usb/go7007/go7007-usb.c
++++ b/drivers/media/usb/go7007/go7007-usb.c
+@@ -1132,6 +1132,10 @@ static int go7007_usb_probe(struct usb_interface *intf,
+ go->hpi_ops = &go7007_usb_onboard_hpi_ops;
+ go->hpi_context = usb;
+
++ ep = usb->usbdev->ep_in[4];
++ if (!ep)
++ return -ENODEV;
++
+ /* Allocate the URB and buffer for receiving incoming interrupts */
+ usb->intr_urb = usb_alloc_urb(0, GFP_KERNEL);
+ if (usb->intr_urb == NULL)
+@@ -1141,7 +1145,6 @@ static int go7007_usb_probe(struct usb_interface *intf,
+ if (usb->intr_urb->transfer_buffer == NULL)
+ goto allocfail;
+
+- ep = usb->usbdev->ep_in[4];
+ if (usb_endpoint_type(&ep->desc) == USB_ENDPOINT_XFER_BULK)
+ usb_fill_bulk_urb(usb->intr_urb, usb->usbdev,
+ usb_rcvbulkpipe(usb->usbdev, 4),
+@@ -1263,9 +1266,13 @@ static int go7007_usb_probe(struct usb_interface *intf,
+
+ /* Allocate the URBs and buffers for receiving the video stream */
+ if (board->flags & GO7007_USB_EZUSB) {
++ if (!usb->usbdev->ep_in[6])
++ goto allocfail;
+ v_urb_len = 1024;
+ video_pipe = usb_rcvbulkpipe(usb->usbdev, 6);
+ } else {
++ if (!usb->usbdev->ep_in[1])
++ goto allocfail;
+ v_urb_len = 512;
+ video_pipe = usb_rcvbulkpipe(usb->usbdev, 1);
+ }
+@@ -1285,6 +1292,8 @@ static int go7007_usb_probe(struct usb_interface *intf,
+ /* Allocate the URBs and buffers for receiving the audio stream */
+ if ((board->flags & GO7007_USB_EZUSB) &&
+ (board->main_info.flags & GO7007_BOARD_HAS_AUDIO)) {
++ if (!usb->usbdev->ep_in[8])
++ goto allocfail;
+ for (i = 0; i < 8; ++i) {
+ usb->audio_urbs[i] = usb_alloc_urb(0, GFP_KERNEL);
+ if (usb->audio_urbs[i] == NULL)
+diff --git a/drivers/memory/samsung/exynos5422-dmc.c b/drivers/memory/samsung/exynos5422-dmc.c
+index 22a43d662833..3460ba55fd59 100644
+--- a/drivers/memory/samsung/exynos5422-dmc.c
++++ b/drivers/memory/samsung/exynos5422-dmc.c
+@@ -270,12 +270,14 @@ static int find_target_freq_idx(struct exynos5_dmc *dmc,
+ * This function switches between these banks according to the
+ * currently used clock source.
+ */
+-static void exynos5_switch_timing_regs(struct exynos5_dmc *dmc, bool set)
++static int exynos5_switch_timing_regs(struct exynos5_dmc *dmc, bool set)
+ {
+ unsigned int reg;
+ int ret;
+
+ ret = regmap_read(dmc->clk_regmap, CDREX_LPDDR3PHY_CON3, &reg);
++ if (ret)
++ return ret;
+
+ if (set)
+ reg |= EXYNOS5_TIMING_SET_SWI;
+@@ -283,6 +285,8 @@ static void exynos5_switch_timing_regs(struct exynos5_dmc *dmc, bool set)
+ reg &= ~EXYNOS5_TIMING_SET_SWI;
+
+ regmap_write(dmc->clk_regmap, CDREX_LPDDR3PHY_CON3, reg);
++
++ return 0;
+ }
+
+ /**
+@@ -516,7 +520,7 @@ exynos5_dmc_switch_to_bypass_configuration(struct exynos5_dmc *dmc,
+ /*
+ * Delays are long enough, so use them for the new coming clock.
+ */
+- exynos5_switch_timing_regs(dmc, USE_MX_MSPLL_TIMINGS);
++ ret = exynos5_switch_timing_regs(dmc, USE_MX_MSPLL_TIMINGS);
+
+ return ret;
+ }
+@@ -577,7 +581,9 @@ exynos5_dmc_change_freq_and_volt(struct exynos5_dmc *dmc,
+
+ clk_set_rate(dmc->fout_bpll, target_rate);
+
+- exynos5_switch_timing_regs(dmc, USE_BPLL_TIMINGS);
++ ret = exynos5_switch_timing_regs(dmc, USE_BPLL_TIMINGS);
++ if (ret)
++ goto disable_clocks;
+
+ ret = clk_set_parent(dmc->mout_mclk_cdrex, dmc->mout_bpll);
+ if (ret)
+diff --git a/drivers/memory/tegra/tegra186-emc.c b/drivers/memory/tegra/tegra186-emc.c
+index 97f26bc77ad4..c900948881d5 100644
+--- a/drivers/memory/tegra/tegra186-emc.c
++++ b/drivers/memory/tegra/tegra186-emc.c
+@@ -185,7 +185,7 @@ static int tegra186_emc_probe(struct platform_device *pdev)
+ if (IS_ERR(emc->clk)) {
+ err = PTR_ERR(emc->clk);
+ dev_err(&pdev->dev, "failed to get EMC clock: %d\n", err);
+- return err;
++ goto put_bpmp;
+ }
+
+ platform_set_drvdata(pdev, emc);
+@@ -201,7 +201,7 @@ static int tegra186_emc_probe(struct platform_device *pdev)
+ err = tegra_bpmp_transfer(emc->bpmp, &msg);
+ if (err < 0) {
+ dev_err(&pdev->dev, "failed to EMC DVFS pairs: %d\n", err);
+- return err;
++ goto put_bpmp;
+ }
+
+ emc->debugfs.min_rate = ULONG_MAX;
+@@ -211,8 +211,10 @@ static int tegra186_emc_probe(struct platform_device *pdev)
+
+ emc->dvfs = devm_kmalloc_array(&pdev->dev, emc->num_dvfs,
+ sizeof(*emc->dvfs), GFP_KERNEL);
+- if (!emc->dvfs)
+- return -ENOMEM;
++ if (!emc->dvfs) {
++ err = -ENOMEM;
++ goto put_bpmp;
++ }
+
+ dev_dbg(&pdev->dev, "%u DVFS pairs:\n", emc->num_dvfs);
+
+@@ -237,7 +239,7 @@ static int tegra186_emc_probe(struct platform_device *pdev)
+ "failed to set rate range [%lu-%lu] for %pC\n",
+ emc->debugfs.min_rate, emc->debugfs.max_rate,
+ emc->clk);
+- return err;
++ goto put_bpmp;
+ }
+
+ emc->debugfs.root = debugfs_create_dir("emc", NULL);
+@@ -254,6 +256,10 @@ static int tegra186_emc_probe(struct platform_device *pdev)
+ emc, &tegra186_emc_debug_max_rate_fops);
+
+ return 0;
++
++put_bpmp:
++ tegra_bpmp_put(emc->bpmp);
++ return err;
+ }
+
+ static int tegra186_emc_remove(struct platform_device *pdev)
+diff --git a/drivers/mfd/ioc3.c b/drivers/mfd/ioc3.c
+index 74cee7cb0afc..d939ccc46509 100644
+--- a/drivers/mfd/ioc3.c
++++ b/drivers/mfd/ioc3.c
+@@ -616,7 +616,10 @@ static int ioc3_mfd_probe(struct pci_dev *pdev,
+ /* Remove all already added MFD devices */
+ mfd_remove_devices(&ipd->pdev->dev);
+ if (ipd->domain) {
++ struct fwnode_handle *fn = ipd->domain->fwnode;
++
+ irq_domain_remove(ipd->domain);
++ irq_domain_free_fwnode(fn);
+ free_irq(ipd->domain_irq, (void *)ipd);
+ }
+ pci_iounmap(pdev, regs);
+@@ -643,7 +646,10 @@ static void ioc3_mfd_remove(struct pci_dev *pdev)
+ /* Release resources */
+ mfd_remove_devices(&ipd->pdev->dev);
+ if (ipd->domain) {
++ struct fwnode_handle *fn = ipd->domain->fwnode;
++
+ irq_domain_remove(ipd->domain);
++ irq_domain_free_fwnode(fn);
+ free_irq(ipd->domain_irq, (void *)ipd);
+ }
+ pci_iounmap(pdev, ipd->regs);
+diff --git a/drivers/misc/cxl/sysfs.c b/drivers/misc/cxl/sysfs.c
+index f0263d1a1fdf..d97a243ad30c 100644
+--- a/drivers/misc/cxl/sysfs.c
++++ b/drivers/misc/cxl/sysfs.c
+@@ -624,7 +624,7 @@ static struct afu_config_record *cxl_sysfs_afu_new_cr(struct cxl_afu *afu, int c
+ rc = kobject_init_and_add(&cr->kobj, &afu_config_record_type,
+ &afu->dev.kobj, "cr%i", cr->cr);
+ if (rc)
+- goto err;
++ goto err1;
+
+ rc = sysfs_create_bin_file(&cr->kobj, &cr->config_attr);
+ if (rc)
+diff --git a/drivers/misc/lkdtm/bugs.c b/drivers/misc/lkdtm/bugs.c
+index 886459e0ddd9..7913c9ff216c 100644
+--- a/drivers/misc/lkdtm/bugs.c
++++ b/drivers/misc/lkdtm/bugs.c
+@@ -13,7 +13,7 @@
+ #include <linux/uaccess.h>
+ #include <linux/slab.h>
+
+-#ifdef CONFIG_X86_32
++#if IS_ENABLED(CONFIG_X86_32) && !IS_ENABLED(CONFIG_UML)
+ #include <asm/desc.h>
+ #endif
+
+@@ -118,9 +118,8 @@ noinline void lkdtm_CORRUPT_STACK(void)
+ /* Use default char array length that triggers stack protection. */
+ char data[8] __aligned(sizeof(void *));
+
+- __lkdtm_CORRUPT_STACK(&data);
+-
+- pr_info("Corrupted stack containing char array ...\n");
++ pr_info("Corrupting stack containing char array ...\n");
++ __lkdtm_CORRUPT_STACK((void *)&data);
+ }
+
+ /* Same as above but will only get a canary with -fstack-protector-strong */
+@@ -131,9 +130,8 @@ noinline void lkdtm_CORRUPT_STACK_STRONG(void)
+ unsigned long *ptr;
+ } data __aligned(sizeof(void *));
+
+- __lkdtm_CORRUPT_STACK(&data);
+-
+- pr_info("Corrupted stack containing union ...\n");
++ pr_info("Corrupting stack containing union ...\n");
++ __lkdtm_CORRUPT_STACK((void *)&data);
+ }
+
+ void lkdtm_UNALIGNED_LOAD_STORE_WRITE(void)
+@@ -248,6 +246,7 @@ void lkdtm_ARRAY_BOUNDS(void)
+
+ kfree(not_checked);
+ kfree(checked);
++ pr_err("FAIL: survived array bounds overflow!\n");
+ }
+
+ void lkdtm_CORRUPT_LIST_ADD(void)
+@@ -419,7 +418,7 @@ void lkdtm_UNSET_SMEP(void)
+
+ void lkdtm_DOUBLE_FAULT(void)
+ {
+-#ifdef CONFIG_X86_32
++#if IS_ENABLED(CONFIG_X86_32) && !IS_ENABLED(CONFIG_UML)
+ /*
+ * Trigger #DF by setting the stack limit to zero. This clobbers
+ * a GDT TLS slot, which is okay because the current task will die
+@@ -454,38 +453,42 @@ void lkdtm_DOUBLE_FAULT(void)
+ #endif
+ }
+
+-#ifdef CONFIG_ARM64_PTR_AUTH
++#ifdef CONFIG_ARM64
+ static noinline void change_pac_parameters(void)
+ {
+- /* Reset the keys of current task */
+- ptrauth_thread_init_kernel(current);
+- ptrauth_thread_switch_kernel(current);
++ if (IS_ENABLED(CONFIG_ARM64_PTR_AUTH)) {
++ /* Reset the keys of current task */
++ ptrauth_thread_init_kernel(current);
++ ptrauth_thread_switch_kernel(current);
++ }
+ }
++#endif
+
+-#define CORRUPT_PAC_ITERATE 10
+ noinline void lkdtm_CORRUPT_PAC(void)
+ {
++#ifdef CONFIG_ARM64
++#define CORRUPT_PAC_ITERATE 10
+ int i;
+
++ if (!IS_ENABLED(CONFIG_ARM64_PTR_AUTH))
++ pr_err("FAIL: kernel not built with CONFIG_ARM64_PTR_AUTH\n");
++
+ if (!system_supports_address_auth()) {
+- pr_err("FAIL: arm64 pointer authentication feature not present\n");
++ pr_err("FAIL: CPU lacks pointer authentication feature\n");
+ return;
+ }
+
+- pr_info("Change the PAC parameters to force function return failure\n");
++ pr_info("changing PAC parameters to force function return failure...\n");
+ /*
+- * Pac is a hash value computed from input keys, return address and
++ * PAC is a hash value computed from input keys, return address and
+ * stack pointer. As pac has fewer bits so there is a chance of
+ * collision, so iterate few times to reduce the collision probability.
+ */
+ for (i = 0; i < CORRUPT_PAC_ITERATE; i++)
+ change_pac_parameters();
+
+- pr_err("FAIL: %s test failed. Kernel may be unstable from here\n", __func__);
+-}
+-#else /* !CONFIG_ARM64_PTR_AUTH */
+-noinline void lkdtm_CORRUPT_PAC(void)
+-{
+- pr_err("FAIL: arm64 pointer authentication config disabled\n");
+-}
++ pr_err("FAIL: survived PAC changes! Kernel may be unstable from here\n");
++#else
++ pr_err("XFAIL: this test is arm64-only\n");
+ #endif
++}
+diff --git a/drivers/misc/lkdtm/lkdtm.h b/drivers/misc/lkdtm/lkdtm.h
+index 601a2156a0d4..8878538b2c13 100644
+--- a/drivers/misc/lkdtm/lkdtm.h
++++ b/drivers/misc/lkdtm/lkdtm.h
+@@ -31,9 +31,7 @@ void lkdtm_CORRUPT_USER_DS(void);
+ void lkdtm_STACK_GUARD_PAGE_LEADING(void);
+ void lkdtm_STACK_GUARD_PAGE_TRAILING(void);
+ void lkdtm_UNSET_SMEP(void);
+-#ifdef CONFIG_X86_32
+ void lkdtm_DOUBLE_FAULT(void);
+-#endif
+ void lkdtm_CORRUPT_PAC(void);
+
+ /* lkdtm_heap.c */
+diff --git a/drivers/misc/lkdtm/perms.c b/drivers/misc/lkdtm/perms.c
+index 62f76d506f04..2dede2ef658f 100644
+--- a/drivers/misc/lkdtm/perms.c
++++ b/drivers/misc/lkdtm/perms.c
+@@ -57,6 +57,7 @@ static noinline void execute_location(void *dst, bool write)
+ }
+ pr_info("attempting bad execution at %px\n", func);
+ func();
++ pr_err("FAIL: func returned\n");
+ }
+
+ static void execute_user_location(void *dst)
+@@ -75,20 +76,22 @@ static void execute_user_location(void *dst)
+ return;
+ pr_info("attempting bad execution at %px\n", func);
+ func();
++ pr_err("FAIL: func returned\n");
+ }
+
+ void lkdtm_WRITE_RO(void)
+ {
+- /* Explicitly cast away "const" for the test. */
+- unsigned long *ptr = (unsigned long *)&rodata;
++ /* Explicitly cast away "const" for the test and make volatile. */
++ volatile unsigned long *ptr = (unsigned long *)&rodata;
+
+ pr_info("attempting bad rodata write at %px\n", ptr);
+ *ptr ^= 0xabcd1234;
++ pr_err("FAIL: survived bad write\n");
+ }
+
+ void lkdtm_WRITE_RO_AFTER_INIT(void)
+ {
+- unsigned long *ptr = &ro_after_init;
++ volatile unsigned long *ptr = &ro_after_init;
+
+ /*
+ * Verify we were written to during init. Since an Oops
+@@ -102,19 +105,21 @@ void lkdtm_WRITE_RO_AFTER_INIT(void)
+
+ pr_info("attempting bad ro_after_init write at %px\n", ptr);
+ *ptr ^= 0xabcd1234;
++ pr_err("FAIL: survived bad write\n");
+ }
+
+ void lkdtm_WRITE_KERN(void)
+ {
+ size_t size;
+- unsigned char *ptr;
++ volatile unsigned char *ptr;
+
+ size = (unsigned long)do_overwritten - (unsigned long)do_nothing;
+ ptr = (unsigned char *)do_overwritten;
+
+ pr_info("attempting bad %zu byte write at %px\n", size, ptr);
+- memcpy(ptr, (unsigned char *)do_nothing, size);
++ memcpy((void *)ptr, (unsigned char *)do_nothing, size);
+ flush_icache_range((unsigned long)ptr, (unsigned long)(ptr + size));
++ pr_err("FAIL: survived bad write\n");
+
+ do_overwritten();
+ }
+@@ -193,9 +198,11 @@ void lkdtm_ACCESS_USERSPACE(void)
+ pr_info("attempting bad read at %px\n", ptr);
+ tmp = *ptr;
+ tmp += 0xc0dec0de;
++ pr_err("FAIL: survived bad read\n");
+
+ pr_info("attempting bad write at %px\n", ptr);
+ *ptr = tmp;
++ pr_err("FAIL: survived bad write\n");
+
+ vm_munmap(user_addr, PAGE_SIZE);
+ }
+@@ -203,19 +210,20 @@ void lkdtm_ACCESS_USERSPACE(void)
+ void lkdtm_ACCESS_NULL(void)
+ {
+ unsigned long tmp;
+- unsigned long *ptr = (unsigned long *)NULL;
++ volatile unsigned long *ptr = (unsigned long *)NULL;
+
+ pr_info("attempting bad read at %px\n", ptr);
+ tmp = *ptr;
+ tmp += 0xc0dec0de;
++ pr_err("FAIL: survived bad read\n");
+
+ pr_info("attempting bad write at %px\n", ptr);
+ *ptr = tmp;
++ pr_err("FAIL: survived bad write\n");
+ }
+
+ void __init lkdtm_perms_init(void)
+ {
+ /* Make sure we can write to __ro_after_init values during __init */
+ ro_after_init |= 0xAA;
+-
+ }
+diff --git a/drivers/misc/lkdtm/usercopy.c b/drivers/misc/lkdtm/usercopy.c
+index e172719dd86d..b833367a45d0 100644
+--- a/drivers/misc/lkdtm/usercopy.c
++++ b/drivers/misc/lkdtm/usercopy.c
+@@ -304,19 +304,22 @@ void lkdtm_USERCOPY_KERNEL(void)
+ return;
+ }
+
+- pr_info("attempting good copy_to_user from kernel rodata\n");
++ pr_info("attempting good copy_to_user from kernel rodata: %px\n",
++ test_text);
+ if (copy_to_user((void __user *)user_addr, test_text,
+ unconst + sizeof(test_text))) {
+ pr_warn("copy_to_user failed unexpectedly?!\n");
+ goto free_user;
+ }
+
+- pr_info("attempting bad copy_to_user from kernel text\n");
++ pr_info("attempting bad copy_to_user from kernel text: %px\n",
++ vm_mmap);
+ if (copy_to_user((void __user *)user_addr, vm_mmap,
+ unconst + PAGE_SIZE)) {
+ pr_warn("copy_to_user failed, but lacked Oops\n");
+ goto free_user;
+ }
++ pr_err("FAIL: survived bad copy_to_user()\n");
+
+ free_user:
+ vm_munmap(user_addr, PAGE_SIZE);
+diff --git a/drivers/mmc/host/sdhci-cadence.c b/drivers/mmc/host/sdhci-cadence.c
+index 6da6d4fb5edd..07a4cb989a68 100644
+--- a/drivers/mmc/host/sdhci-cadence.c
++++ b/drivers/mmc/host/sdhci-cadence.c
+@@ -194,57 +194,6 @@ static u32 sdhci_cdns_get_emmc_mode(struct sdhci_cdns_priv *priv)
+ return FIELD_GET(SDHCI_CDNS_HRS06_MODE, tmp);
+ }
+
+-static void sdhci_cdns_set_uhs_signaling(struct sdhci_host *host,
+- unsigned int timing)
+-{
+- struct sdhci_cdns_priv *priv = sdhci_cdns_priv(host);
+- u32 mode;
+-
+- switch (timing) {
+- case MMC_TIMING_MMC_HS:
+- mode = SDHCI_CDNS_HRS06_MODE_MMC_SDR;
+- break;
+- case MMC_TIMING_MMC_DDR52:
+- mode = SDHCI_CDNS_HRS06_MODE_MMC_DDR;
+- break;
+- case MMC_TIMING_MMC_HS200:
+- mode = SDHCI_CDNS_HRS06_MODE_MMC_HS200;
+- break;
+- case MMC_TIMING_MMC_HS400:
+- if (priv->enhanced_strobe)
+- mode = SDHCI_CDNS_HRS06_MODE_MMC_HS400ES;
+- else
+- mode = SDHCI_CDNS_HRS06_MODE_MMC_HS400;
+- break;
+- default:
+- mode = SDHCI_CDNS_HRS06_MODE_SD;
+- break;
+- }
+-
+- sdhci_cdns_set_emmc_mode(priv, mode);
+-
+- /* For SD, fall back to the default handler */
+- if (mode == SDHCI_CDNS_HRS06_MODE_SD)
+- sdhci_set_uhs_signaling(host, timing);
+-}
+-
+-static const struct sdhci_ops sdhci_cdns_ops = {
+- .set_clock = sdhci_set_clock,
+- .get_timeout_clock = sdhci_cdns_get_timeout_clock,
+- .set_bus_width = sdhci_set_bus_width,
+- .reset = sdhci_reset,
+- .set_uhs_signaling = sdhci_cdns_set_uhs_signaling,
+-};
+-
+-static const struct sdhci_pltfm_data sdhci_cdns_uniphier_pltfm_data = {
+- .ops = &sdhci_cdns_ops,
+- .quirks2 = SDHCI_QUIRK2_PRESET_VALUE_BROKEN,
+-};
+-
+-static const struct sdhci_pltfm_data sdhci_cdns_pltfm_data = {
+- .ops = &sdhci_cdns_ops,
+-};
+-
+ static int sdhci_cdns_set_tune_val(struct sdhci_host *host, unsigned int val)
+ {
+ struct sdhci_cdns_priv *priv = sdhci_cdns_priv(host);
+@@ -278,23 +227,24 @@ static int sdhci_cdns_set_tune_val(struct sdhci_host *host, unsigned int val)
+ return 0;
+ }
+
+-static int sdhci_cdns_execute_tuning(struct mmc_host *mmc, u32 opcode)
++/*
++ * In SD mode, software must not use the hardware tuning and instead perform
++ * an almost identical procedure to eMMC.
++ */
++static int sdhci_cdns_execute_tuning(struct sdhci_host *host, u32 opcode)
+ {
+- struct sdhci_host *host = mmc_priv(mmc);
+ int cur_streak = 0;
+ int max_streak = 0;
+ int end_of_streak = 0;
+ int i;
+
+ /*
+- * This handler only implements the eMMC tuning that is specific to
+- * this controller. Fall back to the standard method for SD timing.
++ * Do not execute tuning for UHS_SDR50 or UHS_DDR50.
++ * The delay is set by probe, based on the DT properties.
+ */
+- if (host->timing != MMC_TIMING_MMC_HS200)
+- return sdhci_execute_tuning(mmc, opcode);
+-
+- if (WARN_ON(opcode != MMC_SEND_TUNING_BLOCK_HS200))
+- return -EINVAL;
++ if (host->timing != MMC_TIMING_MMC_HS200 &&
++ host->timing != MMC_TIMING_UHS_SDR104)
++ return 0;
+
+ for (i = 0; i < SDHCI_CDNS_MAX_TUNING_LOOP; i++) {
+ if (sdhci_cdns_set_tune_val(host, i) ||
+@@ -317,6 +267,58 @@ static int sdhci_cdns_execute_tuning(struct mmc_host *mmc, u32 opcode)
+ return sdhci_cdns_set_tune_val(host, end_of_streak - max_streak / 2);
+ }
+
++static void sdhci_cdns_set_uhs_signaling(struct sdhci_host *host,
++ unsigned int timing)
++{
++ struct sdhci_cdns_priv *priv = sdhci_cdns_priv(host);
++ u32 mode;
++
++ switch (timing) {
++ case MMC_TIMING_MMC_HS:
++ mode = SDHCI_CDNS_HRS06_MODE_MMC_SDR;
++ break;
++ case MMC_TIMING_MMC_DDR52:
++ mode = SDHCI_CDNS_HRS06_MODE_MMC_DDR;
++ break;
++ case MMC_TIMING_MMC_HS200:
++ mode = SDHCI_CDNS_HRS06_MODE_MMC_HS200;
++ break;
++ case MMC_TIMING_MMC_HS400:
++ if (priv->enhanced_strobe)
++ mode = SDHCI_CDNS_HRS06_MODE_MMC_HS400ES;
++ else
++ mode = SDHCI_CDNS_HRS06_MODE_MMC_HS400;
++ break;
++ default:
++ mode = SDHCI_CDNS_HRS06_MODE_SD;
++ break;
++ }
++
++ sdhci_cdns_set_emmc_mode(priv, mode);
++
++ /* For SD, fall back to the default handler */
++ if (mode == SDHCI_CDNS_HRS06_MODE_SD)
++ sdhci_set_uhs_signaling(host, timing);
++}
++
++static const struct sdhci_ops sdhci_cdns_ops = {
++ .set_clock = sdhci_set_clock,
++ .get_timeout_clock = sdhci_cdns_get_timeout_clock,
++ .set_bus_width = sdhci_set_bus_width,
++ .reset = sdhci_reset,
++ .platform_execute_tuning = sdhci_cdns_execute_tuning,
++ .set_uhs_signaling = sdhci_cdns_set_uhs_signaling,
++};
++
++static const struct sdhci_pltfm_data sdhci_cdns_uniphier_pltfm_data = {
++ .ops = &sdhci_cdns_ops,
++ .quirks2 = SDHCI_QUIRK2_PRESET_VALUE_BROKEN,
++};
++
++static const struct sdhci_pltfm_data sdhci_cdns_pltfm_data = {
++ .ops = &sdhci_cdns_ops,
++};
++
+ static void sdhci_cdns_hs400_enhanced_strobe(struct mmc_host *mmc,
+ struct mmc_ios *ios)
+ {
+@@ -377,7 +379,6 @@ static int sdhci_cdns_probe(struct platform_device *pdev)
+ priv->hrs_addr = host->ioaddr;
+ priv->enhanced_strobe = false;
+ host->ioaddr += SDHCI_CDNS_SRS_BASE;
+- host->mmc_host_ops.execute_tuning = sdhci_cdns_execute_tuning;
+ host->mmc_host_ops.hs400_enhanced_strobe =
+ sdhci_cdns_hs400_enhanced_strobe;
+ sdhci_enable_v4_mode(host);
+diff --git a/drivers/mmc/host/sdhci-of-arasan.c b/drivers/mmc/host/sdhci-of-arasan.c
+index d4905c106c06..28091d3f704b 100644
+--- a/drivers/mmc/host/sdhci-of-arasan.c
++++ b/drivers/mmc/host/sdhci-of-arasan.c
+@@ -1020,6 +1020,8 @@ sdhci_arasan_register_sdcardclk(struct sdhci_arasan_data *sdhci_arasan,
+ clk_data->sdcardclk_hw.init = &sdcardclk_init;
+ clk_data->sdcardclk =
+ devm_clk_register(dev, &clk_data->sdcardclk_hw);
++ if (IS_ERR(clk_data->sdcardclk))
++ return PTR_ERR(clk_data->sdcardclk);
+ clk_data->sdcardclk_hw.init = NULL;
+
+ ret = of_clk_add_provider(np, of_clk_src_simple_get,
+@@ -1072,6 +1074,8 @@ sdhci_arasan_register_sampleclk(struct sdhci_arasan_data *sdhci_arasan,
+ clk_data->sampleclk_hw.init = &sampleclk_init;
+ clk_data->sampleclk =
+ devm_clk_register(dev, &clk_data->sampleclk_hw);
++ if (IS_ERR(clk_data->sampleclk))
++ return PTR_ERR(clk_data->sampleclk);
+ clk_data->sampleclk_hw.init = NULL;
+
+ ret = of_clk_add_provider(np, of_clk_src_simple_get,
+diff --git a/drivers/mmc/host/sdhci-pci-o2micro.c b/drivers/mmc/host/sdhci-pci-o2micro.c
+index fa8105087d68..41a2394313dd 100644
+--- a/drivers/mmc/host/sdhci-pci-o2micro.c
++++ b/drivers/mmc/host/sdhci-pci-o2micro.c
+@@ -561,6 +561,12 @@ int sdhci_pci_o2_probe_slot(struct sdhci_pci_slot *slot)
+ slot->host->mmc_host_ops.get_cd = sdhci_o2_get_cd;
+ }
+
++ if (chip->pdev->device == PCI_DEVICE_ID_O2_SEABIRD1) {
++ slot->host->mmc_host_ops.get_cd = sdhci_o2_get_cd;
++ host->mmc->caps2 |= MMC_CAP2_NO_SDIO;
++ host->quirks2 |= SDHCI_QUIRK2_PRESET_VALUE_BROKEN;
++ }
++
+ host->mmc_host_ops.execute_tuning = sdhci_o2_execute_tuning;
+
+ if (chip->pdev->device != PCI_DEVICE_ID_O2_FUJIN2)
+diff --git a/drivers/most/core.c b/drivers/most/core.c
+index f781c46cd4af..353ab277cbc6 100644
+--- a/drivers/most/core.c
++++ b/drivers/most/core.c
+@@ -1283,10 +1283,8 @@ int most_register_interface(struct most_interface *iface)
+ struct most_channel *c;
+
+ if (!iface || !iface->enqueue || !iface->configure ||
+- !iface->poison_channel || (iface->num_channels > MAX_CHANNELS)) {
+- dev_err(iface->dev, "Bad interface or channel overflow\n");
++ !iface->poison_channel || (iface->num_channels > MAX_CHANNELS))
+ return -EINVAL;
+- }
+
+ id = ida_simple_get(&mdev_id, 0, 0, GFP_KERNEL);
+ if (id < 0) {
+diff --git a/drivers/mtd/nand/raw/brcmnand/brcmnand.c b/drivers/mtd/nand/raw/brcmnand/brcmnand.c
+index 968ff7703925..cdae2311a3b6 100644
+--- a/drivers/mtd/nand/raw/brcmnand/brcmnand.c
++++ b/drivers/mtd/nand/raw/brcmnand/brcmnand.c
+@@ -2960,8 +2960,9 @@ int brcmnand_probe(struct platform_device *pdev, struct brcmnand_soc *soc)
+ if (ret < 0)
+ goto err;
+
+- /* set edu transfer function to call */
+- ctrl->dma_trans = brcmnand_edu_trans;
++ if (has_edu(ctrl))
++ /* set edu transfer function to call */
++ ctrl->dma_trans = brcmnand_edu_trans;
+ }
+
+ /* Disable automatic device ID config, direct addressing */
+diff --git a/drivers/mtd/nand/raw/qcom_nandc.c b/drivers/mtd/nand/raw/qcom_nandc.c
+index 5b11c7061497..d1216aa9dc0c 100644
+--- a/drivers/mtd/nand/raw/qcom_nandc.c
++++ b/drivers/mtd/nand/raw/qcom_nandc.c
+@@ -459,11 +459,13 @@ struct qcom_nand_host {
+ * among different NAND controllers.
+ * @ecc_modes - ecc mode for NAND
+ * @is_bam - whether NAND controller is using BAM
++ * @is_qpic - whether NAND CTRL is part of qpic IP
+ * @dev_cmd_reg_start - NAND_DEV_CMD_* registers starting offset
+ */
+ struct qcom_nandc_props {
+ u32 ecc_modes;
+ bool is_bam;
++ bool is_qpic;
+ u32 dev_cmd_reg_start;
+ };
+
+@@ -2774,7 +2776,8 @@ static int qcom_nandc_setup(struct qcom_nand_controller *nandc)
+ u32 nand_ctrl;
+
+ /* kill onenand */
+- nandc_write(nandc, SFLASHC_BURST_CFG, 0);
++ if (!nandc->props->is_qpic)
++ nandc_write(nandc, SFLASHC_BURST_CFG, 0);
+ nandc_write(nandc, dev_cmd_reg_addr(nandc, NAND_DEV_CMD_VLD),
+ NAND_DEV_CMD_VLD_VAL);
+
+@@ -3030,12 +3033,14 @@ static const struct qcom_nandc_props ipq806x_nandc_props = {
+ static const struct qcom_nandc_props ipq4019_nandc_props = {
+ .ecc_modes = (ECC_BCH_4BIT | ECC_BCH_8BIT),
+ .is_bam = true,
++ .is_qpic = true,
+ .dev_cmd_reg_start = 0x0,
+ };
+
+ static const struct qcom_nandc_props ipq8074_nandc_props = {
+ .ecc_modes = (ECC_BCH_4BIT | ECC_BCH_8BIT),
+ .is_bam = true,
++ .is_qpic = true,
+ .dev_cmd_reg_start = 0x7000,
+ };
+
+diff --git a/drivers/mtd/spi-nor/controllers/intel-spi.c b/drivers/mtd/spi-nor/controllers/intel-spi.c
+index 61d2a0ad2131..3259c9fc981f 100644
+--- a/drivers/mtd/spi-nor/controllers/intel-spi.c
++++ b/drivers/mtd/spi-nor/controllers/intel-spi.c
+@@ -612,6 +612,15 @@ static int intel_spi_write_reg(struct spi_nor *nor, u8 opcode, const u8 *buf,
+ return 0;
+ }
+
++ /*
++ * We hope that HW sequencer will do the right thing automatically and
++ * with the SW sequencer we cannot use preopcode anyway, so just ignore
++ * the Write Disable operation and pretend it was completed
++ * successfully.
++ */
++ if (opcode == SPINOR_OP_WRDI)
++ return 0;
++
+ writel(0, ispi->base + FADDR);
+
+ /* Write the value beforehand */
+diff --git a/drivers/net/dsa/mv88e6xxx/chip.c b/drivers/net/dsa/mv88e6xxx/chip.c
+index e065be419a03..18c892df0a13 100644
+--- a/drivers/net/dsa/mv88e6xxx/chip.c
++++ b/drivers/net/dsa/mv88e6xxx/chip.c
+@@ -3477,7 +3477,6 @@ static const struct mv88e6xxx_ops mv88e6097_ops = {
+ .port_set_frame_mode = mv88e6351_port_set_frame_mode,
+ .port_set_egress_floods = mv88e6352_port_set_egress_floods,
+ .port_set_ether_type = mv88e6351_port_set_ether_type,
+- .port_set_jumbo_size = mv88e6165_port_set_jumbo_size,
+ .port_egress_rate_limiting = mv88e6095_port_egress_rate_limiting,
+ .port_pause_limit = mv88e6097_port_pause_limit,
+ .port_disable_learn_limit = mv88e6xxx_port_disable_learn_limit,
+diff --git a/drivers/net/dsa/rtl8366.c b/drivers/net/dsa/rtl8366.c
+index ac88caca5ad4..1368816abaed 100644
+--- a/drivers/net/dsa/rtl8366.c
++++ b/drivers/net/dsa/rtl8366.c
+@@ -43,18 +43,26 @@ int rtl8366_set_vlan(struct realtek_smi *smi, int vid, u32 member,
+ int ret;
+ int i;
+
++ dev_dbg(smi->dev,
++ "setting VLAN%d 4k members: 0x%02x, untagged: 0x%02x\n",
++ vid, member, untag);
++
+ /* Update the 4K table */
+ ret = smi->ops->get_vlan_4k(smi, vid, &vlan4k);
+ if (ret)
+ return ret;
+
+- vlan4k.member = member;
+- vlan4k.untag = untag;
++ vlan4k.member |= member;
++ vlan4k.untag |= untag;
+ vlan4k.fid = fid;
+ ret = smi->ops->set_vlan_4k(smi, &vlan4k);
+ if (ret)
+ return ret;
+
++ dev_dbg(smi->dev,
++ "resulting VLAN%d 4k members: 0x%02x, untagged: 0x%02x\n",
++ vid, vlan4k.member, vlan4k.untag);
++
+ /* Try to find an existing MC entry for this VID */
+ for (i = 0; i < smi->num_vlan_mc; i++) {
+ struct rtl8366_vlan_mc vlanmc;
+@@ -65,11 +73,16 @@ int rtl8366_set_vlan(struct realtek_smi *smi, int vid, u32 member,
+
+ if (vid == vlanmc.vid) {
+ /* update the MC entry */
+- vlanmc.member = member;
+- vlanmc.untag = untag;
++ vlanmc.member |= member;
++ vlanmc.untag |= untag;
+ vlanmc.fid = fid;
+
+ ret = smi->ops->set_vlan_mc(smi, i, &vlanmc);
++
++ dev_dbg(smi->dev,
++ "resulting VLAN%d MC members: 0x%02x, untagged: 0x%02x\n",
++ vid, vlanmc.member, vlanmc.untag);
++
+ break;
+ }
+ }
+@@ -384,7 +397,7 @@ void rtl8366_vlan_add(struct dsa_switch *ds, int port,
+ if (dsa_is_dsa_port(ds, port) || dsa_is_cpu_port(ds, port))
+ dev_err(smi->dev, "port is DSA or CPU port\n");
+
+- for (vid = vlan->vid_begin; vid <= vlan->vid_end; ++vid) {
++ for (vid = vlan->vid_begin; vid <= vlan->vid_end; vid++) {
+ int pvid_val = 0;
+
+ dev_info(smi->dev, "add VLAN %04x\n", vid);
+@@ -407,13 +420,13 @@ void rtl8366_vlan_add(struct dsa_switch *ds, int port,
+ if (ret < 0)
+ return;
+ }
+- }
+
+- ret = rtl8366_set_vlan(smi, port, member, untag, 0);
+- if (ret)
+- dev_err(smi->dev,
+- "failed to set up VLAN %04x",
+- vid);
++ ret = rtl8366_set_vlan(smi, vid, member, untag, 0);
++ if (ret)
++ dev_err(smi->dev,
++ "failed to set up VLAN %04x",
++ vid);
++ }
+ }
+ EXPORT_SYMBOL_GPL(rtl8366_vlan_add);
+
+diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_ethtool.c b/drivers/net/ethernet/aquantia/atlantic/aq_ethtool.c
+index 7241cf92b43a..446c59f2ab44 100644
+--- a/drivers/net/ethernet/aquantia/atlantic/aq_ethtool.c
++++ b/drivers/net/ethernet/aquantia/atlantic/aq_ethtool.c
+@@ -123,21 +123,21 @@ static const char aq_macsec_stat_names[][ETH_GSTRING_LEN] = {
+ "MACSec OutUnctrlHitDropRedir",
+ };
+
+-static const char *aq_macsec_txsc_stat_names[] = {
++static const char * const aq_macsec_txsc_stat_names[] = {
+ "MACSecTXSC%d ProtectedPkts",
+ "MACSecTXSC%d EncryptedPkts",
+ "MACSecTXSC%d ProtectedOctets",
+ "MACSecTXSC%d EncryptedOctets",
+ };
+
+-static const char *aq_macsec_txsa_stat_names[] = {
++static const char * const aq_macsec_txsa_stat_names[] = {
+ "MACSecTXSC%dSA%d HitDropRedirect",
+ "MACSecTXSC%dSA%d Protected2Pkts",
+ "MACSecTXSC%dSA%d ProtectedPkts",
+ "MACSecTXSC%dSA%d EncryptedPkts",
+ };
+
+-static const char *aq_macsec_rxsa_stat_names[] = {
++static const char * const aq_macsec_rxsa_stat_names[] = {
+ "MACSecRXSC%dSA%d UntaggedHitPkts",
+ "MACSecRXSC%dSA%d CtrlHitDrpRedir",
+ "MACSecRXSC%dSA%d NotUsingSa",
+diff --git a/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_a0.c b/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_a0.c
+index 9b1062b8af64..1e8b778cb9fa 100644
+--- a/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_a0.c
++++ b/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_a0.c
+@@ -782,7 +782,7 @@ static int hw_atl_a0_hw_multicast_list_set(struct aq_hw_s *self,
+ int err = 0;
+
+ if (count > (HW_ATL_A0_MAC_MAX - HW_ATL_A0_MAC_MIN)) {
+- err = EBADRQC;
++ err = -EBADRQC;
+ goto err_exit;
+ }
+ for (self->aq_nic_cfg->mc_list_count = 0U;
+diff --git a/drivers/net/ethernet/cavium/liquidio/cn23xx_pf_device.c b/drivers/net/ethernet/cavium/liquidio/cn23xx_pf_device.c
+index 43d11c38b38a..4cddd628d41b 100644
+--- a/drivers/net/ethernet/cavium/liquidio/cn23xx_pf_device.c
++++ b/drivers/net/ethernet/cavium/liquidio/cn23xx_pf_device.c
+@@ -1167,7 +1167,7 @@ static int cn23xx_get_pf_num(struct octeon_device *oct)
+ oct->pf_num = ((fdl_bit >> CN23XX_PCIE_SRIOV_FDL_BIT_POS) &
+ CN23XX_PCIE_SRIOV_FDL_MASK);
+ } else {
+- ret = EINVAL;
++ ret = -EINVAL;
+
+ /* Under some virtual environments, extended PCI regs are
+ * inaccessible, in which case the above read will have failed.
+diff --git a/drivers/net/ethernet/cavium/thunder/nicvf_main.c b/drivers/net/ethernet/cavium/thunder/nicvf_main.c
+index ae48f2e9265f..79898530760a 100644
+--- a/drivers/net/ethernet/cavium/thunder/nicvf_main.c
++++ b/drivers/net/ethernet/cavium/thunder/nicvf_main.c
+@@ -2179,6 +2179,9 @@ static int nicvf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ nic->max_queues *= 2;
+ nic->ptp_clock = ptp_clock;
+
++ /* Initialize mutex that serializes usage of VF's mailbox */
++ mutex_init(&nic->rx_mode_mtx);
++
+ /* MAP VF's configuration registers */
+ nic->reg_base = pcim_iomap(pdev, PCI_CFG_REG_BAR_NUM, 0);
+ if (!nic->reg_base) {
+@@ -2255,7 +2258,6 @@ static int nicvf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+
+ INIT_WORK(&nic->rx_mode_work.work, nicvf_set_rx_mode_task);
+ spin_lock_init(&nic->rx_mode_wq_lock);
+- mutex_init(&nic->rx_mode_mtx);
+
+ err = register_netdev(netdev);
+ if (err) {
+diff --git a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
+index b7031f8562e0..665ec7269c60 100644
+--- a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
++++ b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
+@@ -1054,7 +1054,7 @@ static void drain_bufs(struct dpaa2_eth_priv *priv, int count)
+ buf_array, count);
+ if (ret < 0) {
+ if (ret == -EBUSY &&
+- retries++ >= DPAA2_ETH_SWP_BUSY_RETRIES)
++ retries++ < DPAA2_ETH_SWP_BUSY_RETRIES)
+ continue;
+ netdev_err(priv->net_dev, "dpaa2_io_service_acquire() failed\n");
+ return;
+diff --git a/drivers/net/ethernet/freescale/fman/fman.c b/drivers/net/ethernet/freescale/fman/fman.c
+index f151d6e111dd..ef67e8599b39 100644
+--- a/drivers/net/ethernet/freescale/fman/fman.c
++++ b/drivers/net/ethernet/freescale/fman/fman.c
+@@ -1398,8 +1398,7 @@ static void enable_time_stamp(struct fman *fman)
+ {
+ struct fman_fpm_regs __iomem *fpm_rg = fman->fpm_regs;
+ u16 fm_clk_freq = fman->state->fm_clk_freq;
+- u32 tmp, intgr, ts_freq;
+- u64 frac;
++ u32 tmp, intgr, ts_freq, frac;
+
+ ts_freq = (u32)(1 << fman->state->count1_micro_bit);
+ /* configure timestamp so that bit 8 will count 1 microsecond
+diff --git a/drivers/net/ethernet/freescale/fman/fman_dtsec.c b/drivers/net/ethernet/freescale/fman/fman_dtsec.c
+index 004c266802a8..bce3c9398887 100644
+--- a/drivers/net/ethernet/freescale/fman/fman_dtsec.c
++++ b/drivers/net/ethernet/freescale/fman/fman_dtsec.c
+@@ -1200,7 +1200,7 @@ int dtsec_del_hash_mac_address(struct fman_mac *dtsec, enet_addr_t *eth_addr)
+ list_for_each(pos,
+ &dtsec->multicast_addr_hash->lsts[bucket]) {
+ hash_entry = ETH_HASH_ENTRY_OBJ(pos);
+- if (hash_entry->addr == addr) {
++ if (hash_entry && hash_entry->addr == addr) {
+ list_del_init(&hash_entry->node);
+ kfree(hash_entry);
+ break;
+@@ -1213,7 +1213,7 @@ int dtsec_del_hash_mac_address(struct fman_mac *dtsec, enet_addr_t *eth_addr)
+ list_for_each(pos,
+ &dtsec->unicast_addr_hash->lsts[bucket]) {
+ hash_entry = ETH_HASH_ENTRY_OBJ(pos);
+- if (hash_entry->addr == addr) {
++ if (hash_entry && hash_entry->addr == addr) {
+ list_del_init(&hash_entry->node);
+ kfree(hash_entry);
+ break;
+diff --git a/drivers/net/ethernet/freescale/fman/fman_mac.h b/drivers/net/ethernet/freescale/fman/fman_mac.h
+index dd6d0526f6c1..19f327efdaff 100644
+--- a/drivers/net/ethernet/freescale/fman/fman_mac.h
++++ b/drivers/net/ethernet/freescale/fman/fman_mac.h
+@@ -252,7 +252,7 @@ static inline struct eth_hash_t *alloc_hash_table(u16 size)
+ struct eth_hash_t *hash;
+
+ /* Allocate address hash table */
+- hash = kmalloc_array(size, sizeof(struct eth_hash_t *), GFP_KERNEL);
++ hash = kmalloc(sizeof(*hash), GFP_KERNEL);
+ if (!hash)
+ return NULL;
+
+diff --git a/drivers/net/ethernet/freescale/fman/fman_memac.c b/drivers/net/ethernet/freescale/fman/fman_memac.c
+index a5500ede4070..645764abdaae 100644
+--- a/drivers/net/ethernet/freescale/fman/fman_memac.c
++++ b/drivers/net/ethernet/freescale/fman/fman_memac.c
+@@ -852,7 +852,6 @@ int memac_set_tx_pause_frames(struct fman_mac *memac, u8 priority,
+
+ tmp = ioread32be(®s->command_config);
+ tmp &= ~CMD_CFG_PFC_MODE;
+- priority = 0;
+
+ iowrite32be(tmp, ®s->command_config);
+
+@@ -982,7 +981,7 @@ int memac_del_hash_mac_address(struct fman_mac *memac, enet_addr_t *eth_addr)
+
+ list_for_each(pos, &memac->multicast_addr_hash->lsts[hash]) {
+ hash_entry = ETH_HASH_ENTRY_OBJ(pos);
+- if (hash_entry->addr == addr) {
++ if (hash_entry && hash_entry->addr == addr) {
+ list_del_init(&hash_entry->node);
+ kfree(hash_entry);
+ break;
+diff --git a/drivers/net/ethernet/freescale/fman/fman_port.c b/drivers/net/ethernet/freescale/fman/fman_port.c
+index 87b26f063cc8..c27df153f895 100644
+--- a/drivers/net/ethernet/freescale/fman/fman_port.c
++++ b/drivers/net/ethernet/freescale/fman/fman_port.c
+@@ -1767,6 +1767,7 @@ static int fman_port_probe(struct platform_device *of_dev)
+ struct fman_port *port;
+ struct fman *fman;
+ struct device_node *fm_node, *port_node;
++ struct platform_device *fm_pdev;
+ struct resource res;
+ struct resource *dev_res;
+ u32 val;
+@@ -1791,8 +1792,14 @@ static int fman_port_probe(struct platform_device *of_dev)
+ goto return_err;
+ }
+
+- fman = dev_get_drvdata(&of_find_device_by_node(fm_node)->dev);
++ fm_pdev = of_find_device_by_node(fm_node);
+ of_node_put(fm_node);
++ if (!fm_pdev) {
++ err = -EINVAL;
++ goto return_err;
++ }
++
++ fman = dev_get_drvdata(&fm_pdev->dev);
+ if (!fman) {
+ err = -EINVAL;
+ goto return_err;
+diff --git a/drivers/net/ethernet/freescale/fman/fman_tgec.c b/drivers/net/ethernet/freescale/fman/fman_tgec.c
+index 8c7eb878d5b4..41946b16f6c7 100644
+--- a/drivers/net/ethernet/freescale/fman/fman_tgec.c
++++ b/drivers/net/ethernet/freescale/fman/fman_tgec.c
+@@ -626,7 +626,7 @@ int tgec_del_hash_mac_address(struct fman_mac *tgec, enet_addr_t *eth_addr)
+
+ list_for_each(pos, &tgec->multicast_addr_hash->lsts[hash]) {
+ hash_entry = ETH_HASH_ENTRY_OBJ(pos);
+- if (hash_entry->addr == addr) {
++ if (hash_entry && hash_entry->addr == addr) {
+ list_del_init(&hash_entry->node);
+ kfree(hash_entry);
+ break;
+diff --git a/drivers/net/ethernet/intel/iavf/iavf_main.c b/drivers/net/ethernet/intel/iavf/iavf_main.c
+index a21ae74bcd1b..a4b2ad29e132 100644
+--- a/drivers/net/ethernet/intel/iavf/iavf_main.c
++++ b/drivers/net/ethernet/intel/iavf/iavf_main.c
+@@ -1863,8 +1863,10 @@ static int iavf_init_get_resources(struct iavf_adapter *adapter)
+
+ adapter->rss_key = kzalloc(adapter->rss_key_size, GFP_KERNEL);
+ adapter->rss_lut = kzalloc(adapter->rss_lut_size, GFP_KERNEL);
+- if (!adapter->rss_key || !adapter->rss_lut)
++ if (!adapter->rss_key || !adapter->rss_lut) {
++ err = -ENOMEM;
+ goto err_mem;
++ }
+ if (RSS_AQ(adapter))
+ adapter->aq_required |= IAVF_FLAG_AQ_CONFIGURE_RSS;
+ else
+@@ -1946,7 +1948,10 @@ static void iavf_watchdog_task(struct work_struct *work)
+ iavf_send_api_ver(adapter);
+ }
+ } else {
+- if (!iavf_process_aq_command(adapter) &&
++ /* An error will be returned if no commands were
++ * processed; use this opportunity to update stats
++ */
++ if (iavf_process_aq_command(adapter) &&
+ adapter->state == __IAVF_RUNNING)
+ iavf_request_stats(adapter);
+ }
+diff --git a/drivers/net/ethernet/intel/ice/ice_flex_pipe.c b/drivers/net/ethernet/intel/ice/ice_flex_pipe.c
+index abfec38bb483..a9a89bdb6036 100644
+--- a/drivers/net/ethernet/intel/ice/ice_flex_pipe.c
++++ b/drivers/net/ethernet/intel/ice/ice_flex_pipe.c
+@@ -2291,6 +2291,8 @@ static void ice_free_flow_profs(struct ice_hw *hw, u8 blk_idx)
+ mutex_lock(&hw->fl_profs_locks[blk_idx]);
+ list_for_each_entry_safe(p, tmp, &hw->fl_profs[blk_idx], l_entry) {
+ list_del(&p->l_entry);
++
++ mutex_destroy(&p->entries_lock);
+ devm_kfree(ice_hw_to_dev(hw), p);
+ }
+ mutex_unlock(&hw->fl_profs_locks[blk_idx]);
+@@ -2408,7 +2410,7 @@ void ice_clear_hw_tbls(struct ice_hw *hw)
+ memset(prof_redir->t, 0,
+ prof_redir->count * sizeof(*prof_redir->t));
+
+- memset(es->t, 0, es->count * sizeof(*es->t));
++ memset(es->t, 0, es->count * sizeof(*es->t) * es->fvw);
+ memset(es->ref_count, 0, es->count * sizeof(*es->ref_count));
+ memset(es->written, 0, es->count * sizeof(*es->written));
+ }
+@@ -2519,10 +2521,12 @@ enum ice_status ice_init_hw_tbls(struct ice_hw *hw)
+ es->ref_count = devm_kcalloc(ice_hw_to_dev(hw), es->count,
+ sizeof(*es->ref_count),
+ GFP_KERNEL);
++ if (!es->ref_count)
++ goto err;
+
+ es->written = devm_kcalloc(ice_hw_to_dev(hw), es->count,
+ sizeof(*es->written), GFP_KERNEL);
+- if (!es->ref_count)
++ if (!es->written)
+ goto err;
+ }
+ return 0;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
+index 9620c8650e13..43cd379c46f3 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
+@@ -829,18 +829,15 @@ static int connect_fts_in_prio(struct mlx5_core_dev *dev,
+ {
+ struct mlx5_flow_root_namespace *root = find_root(&prio->node);
+ struct mlx5_flow_table *iter;
+- int i = 0;
+ int err;
+
+ fs_for_each_ft(iter, prio) {
+- i++;
+ err = root->cmds->modify_flow_table(root, iter, ft);
+ if (err) {
+- mlx5_core_warn(dev, "Failed to modify flow table %d\n",
+- iter->id);
++ mlx5_core_err(dev,
++ "Failed to modify flow table id %d, type %d, err %d\n",
++ iter->id, iter->type, err);
+ /* The driver is out of sync with the FW */
+- if (i > 1)
+- WARN_ON(true);
+ return err;
+ }
+ }
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/fs_dr.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/fs_dr.c
+index 3b3f5b9d4f95..2f3ee8519b22 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/fs_dr.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/fs_dr.c
+@@ -279,29 +279,9 @@ static int mlx5_cmd_dr_create_fte(struct mlx5_flow_root_namespace *ns,
+
+ /* The order of the actions are must to be keep, only the following
+ * order is supported by SW steering:
+- * TX: push vlan -> modify header -> encap
++ * TX: modify header -> push vlan -> encap
+ * RX: decap -> pop vlan -> modify header
+ */
+- if (fte->action.action & MLX5_FLOW_CONTEXT_ACTION_VLAN_PUSH) {
+- tmp_action = create_action_push_vlan(domain, &fte->action.vlan[0]);
+- if (!tmp_action) {
+- err = -ENOMEM;
+- goto free_actions;
+- }
+- fs_dr_actions[fs_dr_num_actions++] = tmp_action;
+- actions[num_actions++] = tmp_action;
+- }
+-
+- if (fte->action.action & MLX5_FLOW_CONTEXT_ACTION_VLAN_PUSH_2) {
+- tmp_action = create_action_push_vlan(domain, &fte->action.vlan[1]);
+- if (!tmp_action) {
+- err = -ENOMEM;
+- goto free_actions;
+- }
+- fs_dr_actions[fs_dr_num_actions++] = tmp_action;
+- actions[num_actions++] = tmp_action;
+- }
+-
+ if (fte->action.action & MLX5_FLOW_CONTEXT_ACTION_DECAP) {
+ enum mlx5dr_action_reformat_type decap_type =
+ DR_ACTION_REFORMAT_TYP_TNL_L2_TO_L2;
+@@ -354,6 +334,26 @@ static int mlx5_cmd_dr_create_fte(struct mlx5_flow_root_namespace *ns,
+ actions[num_actions++] =
+ fte->action.modify_hdr->action.dr_action;
+
++ if (fte->action.action & MLX5_FLOW_CONTEXT_ACTION_VLAN_PUSH) {
++ tmp_action = create_action_push_vlan(domain, &fte->action.vlan[0]);
++ if (!tmp_action) {
++ err = -ENOMEM;
++ goto free_actions;
++ }
++ fs_dr_actions[fs_dr_num_actions++] = tmp_action;
++ actions[num_actions++] = tmp_action;
++ }
++
++ if (fte->action.action & MLX5_FLOW_CONTEXT_ACTION_VLAN_PUSH_2) {
++ tmp_action = create_action_push_vlan(domain, &fte->action.vlan[1]);
++ if (!tmp_action) {
++ err = -ENOMEM;
++ goto free_actions;
++ }
++ fs_dr_actions[fs_dr_num_actions++] = tmp_action;
++ actions[num_actions++] = tmp_action;
++ }
++
+ if (delay_encap_set)
+ actions[num_actions++] =
+ fte->action.pkt_reformat->action.dr_action;
+diff --git a/drivers/net/ethernet/mscc/ocelot.c b/drivers/net/ethernet/mscc/ocelot.c
+index 76dbf9ac8ad5..1eaefc0ff87e 100644
+--- a/drivers/net/ethernet/mscc/ocelot.c
++++ b/drivers/net/ethernet/mscc/ocelot.c
+@@ -1599,14 +1599,14 @@ static int ocelot_port_obj_add_mdb(struct net_device *dev,
+ addr[0] = 0;
+
+ if (!new) {
+- addr[2] = mc->ports << 0;
+- addr[1] = mc->ports << 8;
++ addr[1] = mc->ports >> 8;
++ addr[2] = mc->ports & 0xff;
+ ocelot_mact_forget(ocelot, addr, vid);
+ }
+
+ mc->ports |= BIT(port);
+- addr[2] = mc->ports << 0;
+- addr[1] = mc->ports << 8;
++ addr[1] = mc->ports >> 8;
++ addr[2] = mc->ports & 0xff;
+
+ return ocelot_mact_learn(ocelot, 0, addr, vid, ENTRYTYPE_MACv4);
+ }
+@@ -1630,9 +1630,9 @@ static int ocelot_port_obj_del_mdb(struct net_device *dev,
+ return -ENOENT;
+
+ memcpy(addr, mc->addr, ETH_ALEN);
+- addr[2] = mc->ports << 0;
+- addr[1] = mc->ports << 8;
+ addr[0] = 0;
++ addr[1] = mc->ports >> 8;
++ addr[2] = mc->ports & 0xff;
+ ocelot_mact_forget(ocelot, addr, vid);
+
+ mc->ports &= ~BIT(port);
+@@ -1642,8 +1642,8 @@ static int ocelot_port_obj_del_mdb(struct net_device *dev,
+ return 0;
+ }
+
+- addr[2] = mc->ports << 0;
+- addr[1] = mc->ports << 8;
++ addr[1] = mc->ports >> 8;
++ addr[2] = mc->ports & 0xff;
+
+ return ocelot_mact_learn(ocelot, 0, addr, vid, ENTRYTYPE_MACv4);
+ }
+diff --git a/drivers/net/ethernet/pensando/ionic/ionic_lif.c b/drivers/net/ethernet/pensando/ionic/ionic_lif.c
+index 337d971ffd92..29f77faa808b 100644
+--- a/drivers/net/ethernet/pensando/ionic/ionic_lif.c
++++ b/drivers/net/ethernet/pensando/ionic/ionic_lif.c
+@@ -709,7 +709,7 @@ static bool ionic_notifyq_service(struct ionic_cq *cq,
+ eid = le64_to_cpu(comp->event.eid);
+
+ /* Have we run out of new completions to process? */
+- if (eid <= lif->last_eid)
++ if ((s64)(eid - lif->last_eid) <= 0)
+ return false;
+
+ lif->last_eid = eid;
+diff --git a/drivers/net/ethernet/sgi/ioc3-eth.c b/drivers/net/ethernet/sgi/ioc3-eth.c
+index 6646eba9f57f..6eef0f45b133 100644
+--- a/drivers/net/ethernet/sgi/ioc3-eth.c
++++ b/drivers/net/ethernet/sgi/ioc3-eth.c
+@@ -951,7 +951,7 @@ out_stop:
+ dma_free_coherent(ip->dma_dev, RX_RING_SIZE, ip->rxr,
+ ip->rxr_dma);
+ if (ip->tx_ring)
+- dma_free_coherent(ip->dma_dev, TX_RING_SIZE, ip->tx_ring,
++ dma_free_coherent(ip->dma_dev, TX_RING_SIZE + SZ_16K - 1, ip->tx_ring,
+ ip->txr_dma);
+ out_free:
+ free_netdev(dev);
+@@ -964,7 +964,7 @@ static int ioc3eth_remove(struct platform_device *pdev)
+ struct ioc3_private *ip = netdev_priv(dev);
+
+ dma_free_coherent(ip->dma_dev, RX_RING_SIZE, ip->rxr, ip->rxr_dma);
+- dma_free_coherent(ip->dma_dev, TX_RING_SIZE, ip->tx_ring, ip->txr_dma);
++ dma_free_coherent(ip->dma_dev, TX_RING_SIZE + SZ_16K - 1, ip->tx_ring, ip->txr_dma);
+
+ unregister_netdev(dev);
+ del_timer_sync(&ip->ioc3_timer);
+diff --git a/drivers/net/ethernet/ti/am65-cpsw-nuss.c b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
+index 3e4388e6b5fa..61b59a3b277e 100644
+--- a/drivers/net/ethernet/ti/am65-cpsw-nuss.c
++++ b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
+@@ -217,6 +217,9 @@ static int am65_cpsw_nuss_ndo_slave_add_vid(struct net_device *ndev,
+ u32 port_mask, unreg_mcast = 0;
+ int ret;
+
++ if (!netif_running(ndev) || !vid)
++ return 0;
++
+ ret = pm_runtime_get_sync(common->dev);
+ if (ret < 0) {
+ pm_runtime_put_noidle(common->dev);
+@@ -240,6 +243,9 @@ static int am65_cpsw_nuss_ndo_slave_kill_vid(struct net_device *ndev,
+ struct am65_cpsw_common *common = am65_ndev_to_common(ndev);
+ int ret;
+
++ if (!netif_running(ndev) || !vid)
++ return 0;
++
+ ret = pm_runtime_get_sync(common->dev);
+ if (ret < 0) {
+ pm_runtime_put_noidle(common->dev);
+@@ -565,6 +571,16 @@ static int am65_cpsw_nuss_ndo_slave_stop(struct net_device *ndev)
+ return 0;
+ }
+
++static int cpsw_restore_vlans(struct net_device *vdev, int vid, void *arg)
++{
++ struct am65_cpsw_port *port = arg;
++
++ if (!vdev)
++ return 0;
++
++ return am65_cpsw_nuss_ndo_slave_add_vid(port->ndev, 0, vid);
++}
++
+ static int am65_cpsw_nuss_ndo_slave_open(struct net_device *ndev)
+ {
+ struct am65_cpsw_common *common = am65_ndev_to_common(ndev);
+@@ -638,6 +654,9 @@ static int am65_cpsw_nuss_ndo_slave_open(struct net_device *ndev)
+ }
+ }
+
++ /* restore vlan configurations */
++ vlan_for_each(ndev, cpsw_restore_vlans, port);
++
+ phy_attached_info(port->slave.phy);
+ phy_start(port->slave.phy);
+
+diff --git a/drivers/net/ethernet/toshiba/spider_net.c b/drivers/net/ethernet/toshiba/spider_net.c
+index 6576271642c1..ce8b123cdbcc 100644
+--- a/drivers/net/ethernet/toshiba/spider_net.c
++++ b/drivers/net/ethernet/toshiba/spider_net.c
+@@ -283,8 +283,8 @@ spider_net_free_chain(struct spider_net_card *card,
+ descr = descr->next;
+ } while (descr != chain->ring);
+
+- dma_free_coherent(&card->pdev->dev, chain->num_desc,
+- chain->hwring, chain->dma_addr);
++ dma_free_coherent(&card->pdev->dev, chain->num_desc * sizeof(struct spider_net_hw_descr),
++ chain->hwring, chain->dma_addr);
+ }
+
+ /**
+diff --git a/drivers/net/phy/marvell10g.c b/drivers/net/phy/marvell10g.c
+index 1f1a01c98e44..06dfabe297af 100644
+--- a/drivers/net/phy/marvell10g.c
++++ b/drivers/net/phy/marvell10g.c
+@@ -205,13 +205,6 @@ static int mv3310_hwmon_config(struct phy_device *phydev, bool enable)
+ MV_V2_TEMP_CTRL_MASK, val);
+ }
+
+-static void mv3310_hwmon_disable(void *data)
+-{
+- struct phy_device *phydev = data;
+-
+- mv3310_hwmon_config(phydev, false);
+-}
+-
+ static int mv3310_hwmon_probe(struct phy_device *phydev)
+ {
+ struct device *dev = &phydev->mdio.dev;
+@@ -235,10 +228,6 @@ static int mv3310_hwmon_probe(struct phy_device *phydev)
+ if (ret)
+ return ret;
+
+- ret = devm_add_action_or_reset(dev, mv3310_hwmon_disable, phydev);
+- if (ret)
+- return ret;
+-
+ priv->hwmon_dev = devm_hwmon_device_register_with_info(dev,
+ priv->hwmon_name, phydev,
+ &mv3310_hwmon_chip_info, NULL);
+@@ -423,6 +412,11 @@ static int mv3310_probe(struct phy_device *phydev)
+ return phy_sfp_probe(phydev, &mv3310_sfp_ops);
+ }
+
++static void mv3310_remove(struct phy_device *phydev)
++{
++ mv3310_hwmon_config(phydev, false);
++}
++
+ static int mv3310_suspend(struct phy_device *phydev)
+ {
+ return mv3310_power_down(phydev);
+@@ -763,6 +757,7 @@ static struct phy_driver mv3310_drivers[] = {
+ .read_status = mv3310_read_status,
+ .get_tunable = mv3310_get_tunable,
+ .set_tunable = mv3310_set_tunable,
++ .remove = mv3310_remove,
+ },
+ {
+ .phy_id = MARVELL_PHY_ID_88E2110,
+@@ -778,6 +773,7 @@ static struct phy_driver mv3310_drivers[] = {
+ .read_status = mv3310_read_status,
+ .get_tunable = mv3310_get_tunable,
+ .set_tunable = mv3310_set_tunable,
++ .remove = mv3310_remove,
+ },
+ };
+
+diff --git a/drivers/net/phy/mscc/mscc_main.c b/drivers/net/phy/mscc/mscc_main.c
+index 485a4f8a6a9a..95bd2d277ba4 100644
+--- a/drivers/net/phy/mscc/mscc_main.c
++++ b/drivers/net/phy/mscc/mscc_main.c
+@@ -1413,6 +1413,11 @@ static int vsc8584_config_init(struct phy_device *phydev)
+ if (ret)
+ goto err;
+
++ ret = phy_base_write(phydev, MSCC_EXT_PAGE_ACCESS,
++ MSCC_PHY_PAGE_STANDARD);
++ if (ret)
++ goto err;
++
+ if (!phy_interface_is_rgmii(phydev)) {
+ val = PROC_CMD_MCB_ACCESS_MAC_CONF | PROC_CMD_RST_CONF_PORT |
+ PROC_CMD_READ_MOD_WRITE_PORT;
+@@ -1799,7 +1804,11 @@ static int vsc8514_config_init(struct phy_device *phydev)
+ val &= ~MAC_CFG_MASK;
+ val |= MAC_CFG_QSGMII;
+ ret = phy_base_write(phydev, MSCC_PHY_MAC_CFG_FASTLINK, val);
++ if (ret)
++ goto err;
+
++ ret = phy_base_write(phydev, MSCC_EXT_PAGE_ACCESS,
++ MSCC_PHY_PAGE_STANDARD);
+ if (ret)
+ goto err;
+
+diff --git a/drivers/net/phy/phy_device.c b/drivers/net/phy/phy_device.c
+index 0881b4b92363..d9bdc19b01cc 100644
+--- a/drivers/net/phy/phy_device.c
++++ b/drivers/net/phy/phy_device.c
+@@ -616,7 +616,9 @@ struct phy_device *phy_device_create(struct mii_bus *bus, int addr, u32 phy_id,
+ if (c45_ids)
+ dev->c45_ids = *c45_ids;
+ dev->irq = bus->irq[addr];
++
+ dev_set_name(&mdiodev->dev, PHY_ID_FMT, bus->id, addr);
++ device_initialize(&mdiodev->dev);
+
+ dev->state = PHY_DOWN;
+
+@@ -650,10 +652,8 @@ struct phy_device *phy_device_create(struct mii_bus *bus, int addr, u32 phy_id,
+ ret = phy_request_driver_module(dev, phy_id);
+ }
+
+- if (!ret) {
+- device_initialize(&mdiodev->dev);
+- } else {
+- kfree(dev);
++ if (ret) {
++ put_device(&mdiodev->dev);
+ dev = ERR_PTR(ret);
+ }
+
+diff --git a/drivers/net/vmxnet3/vmxnet3_drv.c b/drivers/net/vmxnet3/vmxnet3_drv.c
+index 722cb054a5cd..d42207dc25dd 100644
+--- a/drivers/net/vmxnet3/vmxnet3_drv.c
++++ b/drivers/net/vmxnet3/vmxnet3_drv.c
+@@ -861,7 +861,8 @@ vmxnet3_parse_hdr(struct sk_buff *skb, struct vmxnet3_tx_queue *tq,
+
+ switch (protocol) {
+ case IPPROTO_TCP:
+- ctx->l4_hdr_size = tcp_hdrlen(skb);
++ ctx->l4_hdr_size = skb->encapsulation ? inner_tcp_hdrlen(skb) :
++ tcp_hdrlen(skb);
+ break;
+ case IPPROTO_UDP:
+ ctx->l4_hdr_size = sizeof(struct udphdr);
+diff --git a/drivers/net/wan/lapbether.c b/drivers/net/wan/lapbether.c
+index b2868433718f..1ea15f2123ed 100644
+--- a/drivers/net/wan/lapbether.c
++++ b/drivers/net/wan/lapbether.c
+@@ -157,6 +157,12 @@ static netdev_tx_t lapbeth_xmit(struct sk_buff *skb,
+ if (!netif_running(dev))
+ goto drop;
+
++ /* There should be a pseudo header of 1 byte added by upper layers.
++ * Check to make sure it is there before reading it.
++ */
++ if (skb->len < 1)
++ goto drop;
++
+ switch (skb->data[0]) {
+ case X25_IFACE_DATA:
+ break;
+@@ -305,6 +311,7 @@ static void lapbeth_setup(struct net_device *dev)
+ dev->netdev_ops = &lapbeth_netdev_ops;
+ dev->needs_free_netdev = true;
+ dev->type = ARPHRD_X25;
++ dev->hard_header_len = 0;
+ dev->mtu = 1000;
+ dev->addr_len = 0;
+ }
+@@ -331,7 +338,8 @@ static int lapbeth_new_device(struct net_device *dev)
+ * then this driver prepends a length field of 2 bytes,
+ * then the underlying Ethernet device prepends its own header.
+ */
+- ndev->hard_header_len = -1 + 3 + 2 + dev->hard_header_len;
++ ndev->needed_headroom = -1 + 3 + 2 + dev->hard_header_len
++ + dev->needed_headroom;
+
+ lapbeth = netdev_priv(ndev);
+ lapbeth->axdev = ndev;
+diff --git a/drivers/net/wireless/ath/ath10k/htt_tx.c b/drivers/net/wireless/ath/ath10k/htt_tx.c
+index 517ee2af2231..e76b71e9326f 100644
+--- a/drivers/net/wireless/ath/ath10k/htt_tx.c
++++ b/drivers/net/wireless/ath/ath10k/htt_tx.c
+@@ -1550,7 +1550,9 @@ static int ath10k_htt_tx_32(struct ath10k_htt *htt,
+ err_unmap_msdu:
+ dma_unmap_single(dev, skb_cb->paddr, msdu->len, DMA_TO_DEVICE);
+ err_free_msdu_id:
++ spin_lock_bh(&htt->tx_lock);
+ ath10k_htt_tx_free_msdu_id(htt, msdu_id);
++ spin_unlock_bh(&htt->tx_lock);
+ err:
+ return res;
+ }
+@@ -1757,7 +1759,9 @@ static int ath10k_htt_tx_64(struct ath10k_htt *htt,
+ err_unmap_msdu:
+ dma_unmap_single(dev, skb_cb->paddr, msdu->len, DMA_TO_DEVICE);
+ err_free_msdu_id:
++ spin_lock_bh(&htt->tx_lock);
+ ath10k_htt_tx_free_msdu_id(htt, msdu_id);
++ spin_unlock_bh(&htt->tx_lock);
+ err:
+ return res;
+ }
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fwil_types.h b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fwil_types.h
+index de0ef1b545c4..2e31cc10c195 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fwil_types.h
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fwil_types.h
+@@ -19,7 +19,7 @@
+ #define BRCMF_ARP_OL_PEER_AUTO_REPLY 0x00000008
+
+ #define BRCMF_BSS_INFO_VERSION 109 /* curr ver of brcmf_bss_info_le struct */
+-#define BRCMF_BSS_RSSI_ON_CHANNEL 0x0002
++#define BRCMF_BSS_RSSI_ON_CHANNEL 0x0004
+
+ #define BRCMF_STA_BRCM 0x00000001 /* Running a Broadcom driver */
+ #define BRCMF_STA_WME 0x00000002 /* WMM association */
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fwsignal.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fwsignal.c
+index 8cc52935fd41..948840b4e38e 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fwsignal.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fwsignal.c
+@@ -643,6 +643,7 @@ static inline int brcmf_fws_hanger_poppkt(struct brcmf_fws_hanger *h,
+ static void brcmf_fws_psq_flush(struct brcmf_fws_info *fws, struct pktq *q,
+ int ifidx)
+ {
++ struct brcmf_fws_hanger_item *hi;
+ bool (*matchfn)(struct sk_buff *, void *) = NULL;
+ struct sk_buff *skb;
+ int prec;
+@@ -654,6 +655,9 @@ static void brcmf_fws_psq_flush(struct brcmf_fws_info *fws, struct pktq *q,
+ skb = brcmu_pktq_pdeq_match(q, prec, matchfn, &ifidx);
+ while (skb) {
+ hslot = brcmf_skb_htod_tag_get_field(skb, HSLOT);
++ hi = &fws->hanger.items[hslot];
++ WARN_ON(skb != hi->pkt);
++ hi->state = BRCMF_FWS_HANGER_ITEM_STATE_FREE;
+ brcmf_fws_hanger_poppkt(&fws->hanger, hslot, &skb,
+ true);
+ brcmu_pkt_buf_free_skb(skb);
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/sdio.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/sdio.c
+index 3a08252f1a53..0dbbb467c229 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/sdio.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/sdio.c
+@@ -3689,7 +3689,11 @@ static void brcmf_sdio_bus_watchdog(struct brcmf_sdio *bus)
+ if (bus->idlecount > bus->idletime) {
+ brcmf_dbg(SDIO, "idle\n");
+ sdio_claim_host(bus->sdiodev->func1);
+- brcmf_sdio_wd_timer(bus, false);
++#ifdef DEBUG
++ if (!BRCMF_FWCON_ON() ||
++ bus->console_interval == 0)
++#endif
++ brcmf_sdio_wd_timer(bus, false);
+ bus->idlecount = 0;
+ brcmf_sdio_bus_sleep(bus, true, false);
+ sdio_release_host(bus->sdiodev->func1);
+diff --git a/drivers/net/wireless/intel/iwlegacy/common.c b/drivers/net/wireless/intel/iwlegacy/common.c
+index 348c17ce72f5..f78e062df572 100644
+--- a/drivers/net/wireless/intel/iwlegacy/common.c
++++ b/drivers/net/wireless/intel/iwlegacy/common.c
+@@ -4286,8 +4286,8 @@ il_apm_init(struct il_priv *il)
+ * power savings, even without L1.
+ */
+ if (il->cfg->set_l0s) {
+- pcie_capability_read_word(il->pci_dev, PCI_EXP_LNKCTL, &lctl);
+- if (lctl & PCI_EXP_LNKCTL_ASPM_L1) {
++ ret = pcie_capability_read_word(il->pci_dev, PCI_EXP_LNKCTL, &lctl);
++ if (!ret && (lctl & PCI_EXP_LNKCTL_ASPM_L1)) {
+ /* L1-ASPM enabled; disable(!) L0S */
+ il_set_bit(il, CSR_GIO_REG,
+ CSR_GIO_REG_VAL_L0S_ENABLED);
+diff --git a/drivers/net/wireless/marvell/mwifiex/sdio.h b/drivers/net/wireless/marvell/mwifiex/sdio.h
+index 71cd8629b28e..8b476b007c5e 100644
+--- a/drivers/net/wireless/marvell/mwifiex/sdio.h
++++ b/drivers/net/wireless/marvell/mwifiex/sdio.h
+@@ -36,9 +36,9 @@
+ #define SD8897_DEFAULT_FW_NAME "mrvl/sd8897_uapsta.bin"
+ #define SD8887_DEFAULT_FW_NAME "mrvl/sd8887_uapsta.bin"
+ #define SD8801_DEFAULT_FW_NAME "mrvl/sd8801_uapsta.bin"
+-#define SD8977_DEFAULT_FW_NAME "mrvl/sd8977_uapsta.bin"
++#define SD8977_DEFAULT_FW_NAME "mrvl/sdsd8977_combo_v2.bin"
+ #define SD8987_DEFAULT_FW_NAME "mrvl/sd8987_uapsta.bin"
+-#define SD8997_DEFAULT_FW_NAME "mrvl/sd8997_uapsta.bin"
++#define SD8997_DEFAULT_FW_NAME "mrvl/sdsd8997_combo_v4.bin"
+
+ #define BLOCK_MODE 1
+ #define BYTE_MODE 0
+diff --git a/drivers/net/wireless/marvell/mwifiex/sta_cmdresp.c b/drivers/net/wireless/marvell/mwifiex/sta_cmdresp.c
+index f21660149f58..962d8bfe6f10 100644
+--- a/drivers/net/wireless/marvell/mwifiex/sta_cmdresp.c
++++ b/drivers/net/wireless/marvell/mwifiex/sta_cmdresp.c
+@@ -580,6 +580,11 @@ static int mwifiex_ret_802_11_key_material_v1(struct mwifiex_private *priv,
+ {
+ struct host_cmd_ds_802_11_key_material *key =
+ &resp->params.key_material;
++ int len;
++
++ len = le16_to_cpu(key->key_param_set.key_len);
++ if (len > sizeof(key->key_param_set.key))
++ return -EINVAL;
+
+ if (le16_to_cpu(key->action) == HostCmd_ACT_GEN_SET) {
+ if ((le16_to_cpu(key->key_param_set.key_info) & KEY_MCAST)) {
+@@ -593,9 +598,8 @@ static int mwifiex_ret_802_11_key_material_v1(struct mwifiex_private *priv,
+
+ memset(priv->aes_key.key_param_set.key, 0,
+ sizeof(key->key_param_set.key));
+- priv->aes_key.key_param_set.key_len = key->key_param_set.key_len;
+- memcpy(priv->aes_key.key_param_set.key, key->key_param_set.key,
+- le16_to_cpu(priv->aes_key.key_param_set.key_len));
++ priv->aes_key.key_param_set.key_len = cpu_to_le16(len);
++ memcpy(priv->aes_key.key_param_set.key, key->key_param_set.key, len);
+
+ return 0;
+ }
+@@ -610,9 +614,14 @@ static int mwifiex_ret_802_11_key_material_v2(struct mwifiex_private *priv,
+ struct host_cmd_ds_command *resp)
+ {
+ struct host_cmd_ds_802_11_key_material_v2 *key_v2;
+- __le16 len;
++ int len;
+
+ key_v2 = &resp->params.key_material_v2;
++
++ len = le16_to_cpu(key_v2->key_param_set.key_params.aes.key_len);
++ if (len > WLAN_KEY_LEN_CCMP)
++ return -EINVAL;
++
+ if (le16_to_cpu(key_v2->action) == HostCmd_ACT_GEN_SET) {
+ if ((le16_to_cpu(key_v2->key_param_set.key_info) & KEY_MCAST)) {
+ mwifiex_dbg(priv->adapter, INFO, "info: key: GTK is set\n");
+@@ -628,10 +637,9 @@ static int mwifiex_ret_802_11_key_material_v2(struct mwifiex_private *priv,
+ memset(priv->aes_key_v2.key_param_set.key_params.aes.key, 0,
+ WLAN_KEY_LEN_CCMP);
+ priv->aes_key_v2.key_param_set.key_params.aes.key_len =
+- key_v2->key_param_set.key_params.aes.key_len;
+- len = priv->aes_key_v2.key_param_set.key_params.aes.key_len;
++ cpu_to_le16(len);
+ memcpy(priv->aes_key_v2.key_param_set.key_params.aes.key,
+- key_v2->key_param_set.key_params.aes.key, le16_to_cpu(len));
++ key_v2->key_param_set.key_params.aes.key, len);
+
+ return 0;
+ }
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7615/mcu.c b/drivers/net/wireless/mediatek/mt76/mt7615/mcu.c
+index 29a7aaabb6da..81d6127dc6fd 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7615/mcu.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7615/mcu.c
+@@ -167,8 +167,10 @@ mt7615_mcu_parse_response(struct mt7615_dev *dev, int cmd,
+ struct mt7615_mcu_rxd *rxd = (struct mt7615_mcu_rxd *)skb->data;
+ int ret = 0;
+
+- if (seq != rxd->seq)
+- return -EAGAIN;
++ if (seq != rxd->seq) {
++ ret = -EAGAIN;
++ goto out;
++ }
+
+ switch (cmd) {
+ case MCU_CMD_PATCH_SEM_CONTROL:
+@@ -182,6 +184,7 @@ mt7615_mcu_parse_response(struct mt7615_dev *dev, int cmd,
+ default:
+ break;
+ }
++out:
+ dev_kfree_skb(skb);
+
+ return ret;
+@@ -1033,8 +1036,12 @@ mt7615_mcu_wtbl_sta_add(struct mt7615_dev *dev, struct ieee80211_vif *vif,
+ skb = enable ? wskb : sskb;
+
+ err = __mt76_mcu_skb_send_msg(&dev->mt76, skb, cmd, true);
+- if (err < 0)
++ if (err < 0) {
++ skb = enable ? sskb : wskb;
++ dev_kfree_skb(skb);
++
+ return err;
++ }
+
+ cmd = enable ? MCU_EXT_CMD_STA_REC_UPDATE : MCU_EXT_CMD_WTBL_UPDATE;
+ skb = enable ? sskb : wskb;
+diff --git a/drivers/net/wireless/quantenna/qtnfmac/core.c b/drivers/net/wireless/quantenna/qtnfmac/core.c
+index eea777f8acea..6aafff9d4231 100644
+--- a/drivers/net/wireless/quantenna/qtnfmac/core.c
++++ b/drivers/net/wireless/quantenna/qtnfmac/core.c
+@@ -446,8 +446,11 @@ static struct qtnf_wmac *qtnf_core_mac_alloc(struct qtnf_bus *bus,
+ }
+
+ wiphy = qtnf_wiphy_allocate(bus, pdev);
+- if (!wiphy)
++ if (!wiphy) {
++ if (pdev)
++ platform_device_unregister(pdev);
+ return ERR_PTR(-ENOMEM);
++ }
+
+ mac = wiphy_priv(wiphy);
+
+diff --git a/drivers/net/wireless/realtek/rtw88/coex.c b/drivers/net/wireless/realtek/rtw88/coex.c
+index 567372fb4e12..c73101afbedd 100644
+--- a/drivers/net/wireless/realtek/rtw88/coex.c
++++ b/drivers/net/wireless/realtek/rtw88/coex.c
+@@ -1920,7 +1920,8 @@ static void rtw_coex_run_coex(struct rtw_dev *rtwdev, u8 reason)
+ if (coex_stat->wl_under_ips)
+ return;
+
+- if (coex->freeze && !coex_stat->bt_setup_link)
++ if (coex->freeze && coex_dm->reason == COEX_RSN_BTINFO &&
++ !coex_stat->bt_setup_link)
+ return;
+
+ coex_stat->cnt_wl[COEX_CNT_WL_COEXRUN]++;
+diff --git a/drivers/net/wireless/realtek/rtw88/fw.c b/drivers/net/wireless/realtek/rtw88/fw.c
+index 05c430b3489c..917decdbfb72 100644
+--- a/drivers/net/wireless/realtek/rtw88/fw.c
++++ b/drivers/net/wireless/realtek/rtw88/fw.c
+@@ -444,7 +444,7 @@ void rtw_fw_send_ra_info(struct rtw_dev *rtwdev, struct rtw_sta_info *si)
+ SET_RA_INFO_INIT_RA_LVL(h2c_pkt, si->init_ra_lv);
+ SET_RA_INFO_SGI_EN(h2c_pkt, si->sgi_enable);
+ SET_RA_INFO_BW_MODE(h2c_pkt, si->bw_mode);
+- SET_RA_INFO_LDPC(h2c_pkt, si->ldpc_en);
++ SET_RA_INFO_LDPC(h2c_pkt, !!si->ldpc_en);
+ SET_RA_INFO_NO_UPDATE(h2c_pkt, no_update);
+ SET_RA_INFO_VHT_EN(h2c_pkt, si->vht_enable);
+ SET_RA_INFO_DIS_PT(h2c_pkt, disable_pt);
+diff --git a/drivers/net/wireless/realtek/rtw88/main.c b/drivers/net/wireless/realtek/rtw88/main.c
+index 7640e97706f5..72fe026e8a3c 100644
+--- a/drivers/net/wireless/realtek/rtw88/main.c
++++ b/drivers/net/wireless/realtek/rtw88/main.c
+@@ -703,8 +703,6 @@ void rtw_update_sta_info(struct rtw_dev *rtwdev, struct rtw_sta_info *si)
+ stbc_en = VHT_STBC_EN;
+ if (sta->vht_cap.cap & IEEE80211_VHT_CAP_RXLDPC)
+ ldpc_en = VHT_LDPC_EN;
+- if (sta->vht_cap.cap & IEEE80211_VHT_CAP_SHORT_GI_80)
+- is_support_sgi = true;
+ } else if (sta->ht_cap.ht_supported) {
+ ra_mask |= (sta->ht_cap.mcs.rx_mask[1] << 20) |
+ (sta->ht_cap.mcs.rx_mask[0] << 12);
+@@ -712,9 +710,6 @@ void rtw_update_sta_info(struct rtw_dev *rtwdev, struct rtw_sta_info *si)
+ stbc_en = HT_STBC_EN;
+ if (sta->ht_cap.cap & IEEE80211_HT_CAP_LDPC_CODING)
+ ldpc_en = HT_LDPC_EN;
+- if (sta->ht_cap.cap & IEEE80211_HT_CAP_SGI_20 ||
+- sta->ht_cap.cap & IEEE80211_HT_CAP_SGI_40)
+- is_support_sgi = true;
+ }
+
+ if (efuse->hw_cap.nss == 1)
+@@ -756,12 +751,18 @@ void rtw_update_sta_info(struct rtw_dev *rtwdev, struct rtw_sta_info *si)
+ switch (sta->bandwidth) {
+ case IEEE80211_STA_RX_BW_80:
+ bw_mode = RTW_CHANNEL_WIDTH_80;
++ is_support_sgi = sta->vht_cap.vht_supported &&
++ (sta->vht_cap.cap & IEEE80211_VHT_CAP_SHORT_GI_80);
+ break;
+ case IEEE80211_STA_RX_BW_40:
+ bw_mode = RTW_CHANNEL_WIDTH_40;
++ is_support_sgi = sta->ht_cap.ht_supported &&
++ (sta->ht_cap.cap & IEEE80211_HT_CAP_SGI_40);
+ break;
+ default:
+ bw_mode = RTW_CHANNEL_WIDTH_20;
++ is_support_sgi = sta->ht_cap.ht_supported &&
++ (sta->ht_cap.cap & IEEE80211_HT_CAP_SGI_20);
+ break;
+ }
+
+diff --git a/drivers/net/wireless/ti/wl1251/event.c b/drivers/net/wireless/ti/wl1251/event.c
+index 850864dbafa1..e6d426edab56 100644
+--- a/drivers/net/wireless/ti/wl1251/event.c
++++ b/drivers/net/wireless/ti/wl1251/event.c
+@@ -70,7 +70,7 @@ static int wl1251_event_ps_report(struct wl1251 *wl,
+ break;
+ }
+
+- return 0;
++ return ret;
+ }
+
+ static void wl1251_event_mbox_dump(struct event_mailbox *mbox)
+diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
+index 36db7d2e6a89..d3914b7e8f52 100644
+--- a/drivers/nvme/host/multipath.c
++++ b/drivers/nvme/host/multipath.c
+@@ -246,6 +246,12 @@ static struct nvme_ns *nvme_round_robin_path(struct nvme_ns_head *head,
+ fallback = ns;
+ }
+
++ /* No optimized path found, re-check the current path */
++ if (!nvme_path_is_disabled(old) &&
++ old->ana_state == NVME_ANA_OPTIMIZED) {
++ found = old;
++ goto out;
++ }
+ if (!fallback)
+ return NULL;
+ found = fallback;
+@@ -266,10 +272,13 @@ inline struct nvme_ns *nvme_find_path(struct nvme_ns_head *head)
+ struct nvme_ns *ns;
+
+ ns = srcu_dereference(head->current_path[node], &head->srcu);
+- if (READ_ONCE(head->subsys->iopolicy) == NVME_IOPOLICY_RR && ns)
+- ns = nvme_round_robin_path(head, node, ns);
+- if (unlikely(!ns || !nvme_path_is_optimized(ns)))
+- ns = __nvme_find_path(head, node);
++ if (unlikely(!ns))
++ return __nvme_find_path(head, node);
++
++ if (READ_ONCE(head->subsys->iopolicy) == NVME_IOPOLICY_RR)
++ return nvme_round_robin_path(head, node, ns);
++ if (unlikely(!nvme_path_is_optimized(ns)))
++ return __nvme_find_path(head, node);
+ return ns;
+ }
+
+diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
+index 1f9a45145d0d..19c94080512c 100644
+--- a/drivers/nvme/host/rdma.c
++++ b/drivers/nvme/host/rdma.c
+@@ -882,15 +882,20 @@ static int nvme_rdma_configure_io_queues(struct nvme_rdma_ctrl *ctrl, bool new)
+ ret = PTR_ERR(ctrl->ctrl.connect_q);
+ goto out_free_tag_set;
+ }
+- } else {
+- blk_mq_update_nr_hw_queues(&ctrl->tag_set,
+- ctrl->ctrl.queue_count - 1);
+ }
+
+ ret = nvme_rdma_start_io_queues(ctrl);
+ if (ret)
+ goto out_cleanup_connect_q;
+
++ if (!new) {
++ nvme_start_queues(&ctrl->ctrl);
++ nvme_wait_freeze(&ctrl->ctrl);
++ blk_mq_update_nr_hw_queues(ctrl->ctrl.tagset,
++ ctrl->ctrl.queue_count - 1);
++ nvme_unfreeze(&ctrl->ctrl);
++ }
++
+ return 0;
+
+ out_cleanup_connect_q:
+@@ -923,6 +928,7 @@ static void nvme_rdma_teardown_io_queues(struct nvme_rdma_ctrl *ctrl,
+ bool remove)
+ {
+ if (ctrl->ctrl.queue_count > 1) {
++ nvme_start_freeze(&ctrl->ctrl);
+ nvme_stop_queues(&ctrl->ctrl);
+ nvme_rdma_stop_io_queues(ctrl);
+ if (ctrl->ctrl.tagset) {
+diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
+index 26461bf3fdcc..99eaa0474e10 100644
+--- a/drivers/nvme/host/tcp.c
++++ b/drivers/nvme/host/tcp.c
+@@ -1753,15 +1753,20 @@ static int nvme_tcp_configure_io_queues(struct nvme_ctrl *ctrl, bool new)
+ ret = PTR_ERR(ctrl->connect_q);
+ goto out_free_tag_set;
+ }
+- } else {
+- blk_mq_update_nr_hw_queues(ctrl->tagset,
+- ctrl->queue_count - 1);
+ }
+
+ ret = nvme_tcp_start_io_queues(ctrl);
+ if (ret)
+ goto out_cleanup_connect_q;
+
++ if (!new) {
++ nvme_start_queues(ctrl);
++ nvme_wait_freeze(ctrl);
++ blk_mq_update_nr_hw_queues(ctrl->tagset,
++ ctrl->queue_count - 1);
++ nvme_unfreeze(ctrl);
++ }
++
+ return 0;
+
+ out_cleanup_connect_q:
+@@ -1866,6 +1871,7 @@ static void nvme_tcp_teardown_io_queues(struct nvme_ctrl *ctrl,
+ {
+ if (ctrl->queue_count <= 1)
+ return;
++ nvme_start_freeze(ctrl);
+ nvme_stop_queues(ctrl);
+ nvme_tcp_stop_io_queues(ctrl);
+ if (ctrl->tagset) {
+diff --git a/drivers/nvmem/sprd-efuse.c b/drivers/nvmem/sprd-efuse.c
+index 925feb21d5ad..59523245db8a 100644
+--- a/drivers/nvmem/sprd-efuse.c
++++ b/drivers/nvmem/sprd-efuse.c
+@@ -378,8 +378,8 @@ static int sprd_efuse_probe(struct platform_device *pdev)
+ return -ENOMEM;
+
+ efuse->base = devm_platform_ioremap_resource(pdev, 0);
+- if (!efuse->base)
+- return -ENOMEM;
++ if (IS_ERR(efuse->base))
++ return PTR_ERR(efuse->base);
+
+ ret = of_hwspin_lock_get_id(np, 0);
+ if (ret < 0) {
+diff --git a/drivers/parisc/sba_iommu.c b/drivers/parisc/sba_iommu.c
+index 7e112829d250..00785fa81ff7 100644
+--- a/drivers/parisc/sba_iommu.c
++++ b/drivers/parisc/sba_iommu.c
+@@ -1270,7 +1270,7 @@ sba_ioc_init_pluto(struct parisc_device *sba, struct ioc *ioc, int ioc_num)
+ ** (one that doesn't overlap memory or LMMIO space) in the
+ ** IBASE and IMASK registers.
+ */
+- ioc->ibase = READ_REG(ioc->ioc_hpa + IOC_IBASE);
++ ioc->ibase = READ_REG(ioc->ioc_hpa + IOC_IBASE) & ~0x1fffffULL;
+ iova_space_size = ~(READ_REG(ioc->ioc_hpa + IOC_IMASK) & 0xFFFFFFFFUL) + 1;
+
+ if ((ioc->ibase < 0xfed00000UL) && ((ioc->ibase + iova_space_size) > 0xfee00000UL)) {
+diff --git a/drivers/pci/access.c b/drivers/pci/access.c
+index 79c4a2ef269a..9793f17fa184 100644
+--- a/drivers/pci/access.c
++++ b/drivers/pci/access.c
+@@ -204,17 +204,13 @@ EXPORT_SYMBOL(pci_bus_set_ops);
+ static DECLARE_WAIT_QUEUE_HEAD(pci_cfg_wait);
+
+ static noinline void pci_wait_cfg(struct pci_dev *dev)
++ __must_hold(&pci_lock)
+ {
+- DECLARE_WAITQUEUE(wait, current);
+-
+- __add_wait_queue(&pci_cfg_wait, &wait);
+ do {
+- set_current_state(TASK_UNINTERRUPTIBLE);
+ raw_spin_unlock_irq(&pci_lock);
+- schedule();
++ wait_event(pci_cfg_wait, !dev->block_cfg_access);
+ raw_spin_lock_irq(&pci_lock);
+ } while (dev->block_cfg_access);
+- __remove_wait_queue(&pci_cfg_wait, &wait);
+ }
+
+ /* Returns 0 on success, negative values indicate error. */
+diff --git a/drivers/pci/controller/cadence/pcie-cadence-ep.c b/drivers/pci/controller/cadence/pcie-cadence-ep.c
+index 1c173dad67d1..1fdae37843ef 100644
+--- a/drivers/pci/controller/cadence/pcie-cadence-ep.c
++++ b/drivers/pci/controller/cadence/pcie-cadence-ep.c
+@@ -8,7 +8,6 @@
+ #include <linux/of.h>
+ #include <linux/pci-epc.h>
+ #include <linux/platform_device.h>
+-#include <linux/pm_runtime.h>
+ #include <linux/sizes.h>
+
+ #include "pcie-cadence.h"
+@@ -440,8 +439,7 @@ int cdns_pcie_ep_setup(struct cdns_pcie_ep *ep)
+ epc = devm_pci_epc_create(dev, &cdns_pcie_epc_ops);
+ if (IS_ERR(epc)) {
+ dev_err(dev, "failed to create epc device\n");
+- ret = PTR_ERR(epc);
+- goto err_init;
++ return PTR_ERR(epc);
+ }
+
+ epc_set_drvdata(epc, ep);
+@@ -453,7 +451,7 @@ int cdns_pcie_ep_setup(struct cdns_pcie_ep *ep)
+ resource_size(pcie->mem_res));
+ if (ret < 0) {
+ dev_err(dev, "failed to initialize the memory space\n");
+- goto err_init;
++ return ret;
+ }
+
+ ep->irq_cpu_addr = pci_epc_mem_alloc_addr(epc, &ep->irq_phys_addr,
+@@ -472,8 +470,5 @@ int cdns_pcie_ep_setup(struct cdns_pcie_ep *ep)
+ free_epc_mem:
+ pci_epc_mem_exit(epc);
+
+- err_init:
+- pm_runtime_put_sync(dev);
+-
+ return ret;
+ }
+diff --git a/drivers/pci/controller/cadence/pcie-cadence-host.c b/drivers/pci/controller/cadence/pcie-cadence-host.c
+index 9b1c3966414b..aa18fb724d2e 100644
+--- a/drivers/pci/controller/cadence/pcie-cadence-host.c
++++ b/drivers/pci/controller/cadence/pcie-cadence-host.c
+@@ -7,7 +7,6 @@
+ #include <linux/of_address.h>
+ #include <linux/of_pci.h>
+ #include <linux/platform_device.h>
+-#include <linux/pm_runtime.h>
+
+ #include "pcie-cadence.h"
+
+@@ -70,6 +69,7 @@ static int cdns_pcie_host_init_root_port(struct cdns_pcie_rc *rc)
+ {
+ struct cdns_pcie *pcie = &rc->pcie;
+ u32 value, ctrl;
++ u32 id;
+
+ /*
+ * Set the root complex BAR configuration register:
+@@ -89,8 +89,12 @@ static int cdns_pcie_host_init_root_port(struct cdns_pcie_rc *rc)
+ cdns_pcie_writel(pcie, CDNS_PCIE_LM_RC_BAR_CFG, value);
+
+ /* Set root port configuration space */
+- if (rc->vendor_id != 0xffff)
+- cdns_pcie_rp_writew(pcie, PCI_VENDOR_ID, rc->vendor_id);
++ if (rc->vendor_id != 0xffff) {
++ id = CDNS_PCIE_LM_ID_VENDOR(rc->vendor_id) |
++ CDNS_PCIE_LM_ID_SUBSYS(rc->vendor_id);
++ cdns_pcie_writel(pcie, CDNS_PCIE_LM_ID, id);
++ }
++
+ if (rc->device_id != 0xffff)
+ cdns_pcie_rp_writew(pcie, PCI_DEVICE_ID, rc->device_id);
+
+@@ -256,7 +260,7 @@ int cdns_pcie_host_setup(struct cdns_pcie_rc *rc)
+
+ ret = cdns_pcie_host_init(dev, &resources, rc);
+ if (ret)
+- goto err_init;
++ return ret;
+
+ list_splice_init(&resources, &bridge->windows);
+ bridge->dev.parent = dev;
+@@ -274,8 +278,5 @@ int cdns_pcie_host_setup(struct cdns_pcie_rc *rc)
+ err_host_probe:
+ pci_free_resource_list(&resources);
+
+- err_init:
+- pm_runtime_put_sync(dev);
+-
+ return ret;
+ }
+diff --git a/drivers/pci/controller/vmd.c b/drivers/pci/controller/vmd.c
+index 9a64cf90c291..ebec0a6e77ed 100644
+--- a/drivers/pci/controller/vmd.c
++++ b/drivers/pci/controller/vmd.c
+@@ -560,6 +560,7 @@ static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features)
+ if (!vmd->bus) {
+ pci_free_resource_list(&resources);
+ irq_domain_remove(vmd->irq_domain);
++ irq_domain_free_fwnode(fn);
+ return -ENODEV;
+ }
+
+@@ -673,6 +674,7 @@ static void vmd_cleanup_srcu(struct vmd_dev *vmd)
+ static void vmd_remove(struct pci_dev *dev)
+ {
+ struct vmd_dev *vmd = pci_get_drvdata(dev);
++ struct fwnode_handle *fn = vmd->irq_domain->fwnode;
+
+ sysfs_remove_link(&vmd->dev->dev.kobj, "domain");
+ pci_stop_root_bus(vmd->bus);
+@@ -680,6 +682,7 @@ static void vmd_remove(struct pci_dev *dev)
+ vmd_cleanup_srcu(vmd);
+ vmd_detach_resources(vmd);
+ irq_domain_remove(vmd->irq_domain);
++ irq_domain_free_fwnode(fn);
+ }
+
+ #ifdef CONFIG_PM_SLEEP
+diff --git a/drivers/pci/pcie/aspm.c b/drivers/pci/pcie/aspm.c
+index b17e5ffd31b1..253c30cc1967 100644
+--- a/drivers/pci/pcie/aspm.c
++++ b/drivers/pci/pcie/aspm.c
+@@ -1182,6 +1182,7 @@ static int pcie_aspm_get_policy(char *buffer, const struct kernel_param *kp)
+ cnt += sprintf(buffer + cnt, "[%s] ", policy_str[i]);
+ else
+ cnt += sprintf(buffer + cnt, "%s ", policy_str[i]);
++ cnt += sprintf(buffer + cnt, "\n");
+ return cnt;
+ }
+
+diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
+index cd522dd3dd58..5622603d96d4 100644
+--- a/drivers/pci/quirks.c
++++ b/drivers/pci/quirks.c
+@@ -4422,6 +4422,8 @@ static int pci_quirk_amd_sb_acs(struct pci_dev *dev, u16 acs_flags)
+ if (ACPI_FAILURE(status))
+ return -ENODEV;
+
++ acpi_put_table(header);
++
+ /* Filter out flags not applicable to multifunction */
+ acs_flags &= (PCI_ACS_RR | PCI_ACS_CR | PCI_ACS_EC | PCI_ACS_DT);
+
+diff --git a/drivers/phy/marvell/phy-armada38x-comphy.c b/drivers/phy/marvell/phy-armada38x-comphy.c
+index 6960dfd8ad8c..0fe408964334 100644
+--- a/drivers/phy/marvell/phy-armada38x-comphy.c
++++ b/drivers/phy/marvell/phy-armada38x-comphy.c
+@@ -41,6 +41,7 @@ struct a38x_comphy_lane {
+
+ struct a38x_comphy {
+ void __iomem *base;
++ void __iomem *conf;
+ struct device *dev;
+ struct a38x_comphy_lane lane[MAX_A38X_COMPHY];
+ };
+@@ -54,6 +55,21 @@ static const u8 gbe_mux[MAX_A38X_COMPHY][MAX_A38X_PORTS] = {
+ { 0, 0, 3 },
+ };
+
++static void a38x_set_conf(struct a38x_comphy_lane *lane, bool enable)
++{
++ struct a38x_comphy *priv = lane->priv;
++ u32 conf;
++
++ if (priv->conf) {
++ conf = readl_relaxed(priv->conf);
++ if (enable)
++ conf |= BIT(lane->port);
++ else
++ conf &= ~BIT(lane->port);
++ writel(conf, priv->conf);
++ }
++}
++
+ static void a38x_comphy_set_reg(struct a38x_comphy_lane *lane,
+ unsigned int offset, u32 mask, u32 value)
+ {
+@@ -97,6 +113,7 @@ static int a38x_comphy_set_mode(struct phy *phy, enum phy_mode mode, int sub)
+ {
+ struct a38x_comphy_lane *lane = phy_get_drvdata(phy);
+ unsigned int gen;
++ int ret;
+
+ if (mode != PHY_MODE_ETHERNET)
+ return -EINVAL;
+@@ -115,13 +132,20 @@ static int a38x_comphy_set_mode(struct phy *phy, enum phy_mode mode, int sub)
+ return -EINVAL;
+ }
+
++ a38x_set_conf(lane, false);
++
+ a38x_comphy_set_speed(lane, gen, gen);
+
+- return a38x_comphy_poll(lane, COMPHY_STAT1,
+- COMPHY_STAT1_PLL_RDY_TX |
+- COMPHY_STAT1_PLL_RDY_RX,
+- COMPHY_STAT1_PLL_RDY_TX |
+- COMPHY_STAT1_PLL_RDY_RX);
++ ret = a38x_comphy_poll(lane, COMPHY_STAT1,
++ COMPHY_STAT1_PLL_RDY_TX |
++ COMPHY_STAT1_PLL_RDY_RX,
++ COMPHY_STAT1_PLL_RDY_TX |
++ COMPHY_STAT1_PLL_RDY_RX);
++
++ if (ret == 0)
++ a38x_set_conf(lane, true);
++
++ return ret;
+ }
+
+ static const struct phy_ops a38x_comphy_ops = {
+@@ -174,14 +198,21 @@ static int a38x_comphy_probe(struct platform_device *pdev)
+ if (!priv)
+ return -ENOMEM;
+
+- res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+- base = devm_ioremap_resource(&pdev->dev, res);
++ base = devm_platform_ioremap_resource(pdev, 0);
+ if (IS_ERR(base))
+ return PTR_ERR(base);
+
+ priv->dev = &pdev->dev;
+ priv->base = base;
+
++ /* Optional */
++ res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "conf");
++ if (res) {
++ priv->conf = devm_ioremap_resource(&pdev->dev, res);
++ if (IS_ERR(priv->conf))
++ return PTR_ERR(priv->conf);
++ }
++
+ for_each_available_child_of_node(pdev->dev.of_node, child) {
+ struct phy *phy;
+ int ret;
+diff --git a/drivers/phy/renesas/phy-rcar-gen3-usb2.c b/drivers/phy/renesas/phy-rcar-gen3-usb2.c
+index bfb22f868857..5087b7c44d55 100644
+--- a/drivers/phy/renesas/phy-rcar-gen3-usb2.c
++++ b/drivers/phy/renesas/phy-rcar-gen3-usb2.c
+@@ -111,6 +111,7 @@ struct rcar_gen3_chan {
+ struct work_struct work;
+ struct mutex lock; /* protects rphys[...].powered */
+ enum usb_dr_mode dr_mode;
++ int irq;
+ bool extcon_host;
+ bool is_otg_channel;
+ bool uses_otg_pins;
+@@ -389,12 +390,38 @@ static void rcar_gen3_init_otg(struct rcar_gen3_chan *ch)
+ rcar_gen3_device_recognition(ch);
+ }
+
++static irqreturn_t rcar_gen3_phy_usb2_irq(int irq, void *_ch)
++{
++ struct rcar_gen3_chan *ch = _ch;
++ void __iomem *usb2_base = ch->base;
++ u32 status = readl(usb2_base + USB2_OBINTSTA);
++ irqreturn_t ret = IRQ_NONE;
++
++ if (status & USB2_OBINT_BITS) {
++ dev_vdbg(ch->dev, "%s: %08x\n", __func__, status);
++ writel(USB2_OBINT_BITS, usb2_base + USB2_OBINTSTA);
++ rcar_gen3_device_recognition(ch);
++ ret = IRQ_HANDLED;
++ }
++
++ return ret;
++}
++
+ static int rcar_gen3_phy_usb2_init(struct phy *p)
+ {
+ struct rcar_gen3_phy *rphy = phy_get_drvdata(p);
+ struct rcar_gen3_chan *channel = rphy->ch;
+ void __iomem *usb2_base = channel->base;
+ u32 val;
++ int ret;
++
++ if (!rcar_gen3_is_any_rphy_initialized(channel) && channel->irq >= 0) {
++ INIT_WORK(&channel->work, rcar_gen3_phy_usb2_work);
++ ret = request_irq(channel->irq, rcar_gen3_phy_usb2_irq,
++ IRQF_SHARED, dev_name(channel->dev), channel);
++ if (ret < 0)
++ dev_err(channel->dev, "No irq handler (%d)\n", channel->irq);
++ }
+
+ /* Initialize USB2 part */
+ val = readl(usb2_base + USB2_INT_ENABLE);
+@@ -433,6 +460,9 @@ static int rcar_gen3_phy_usb2_exit(struct phy *p)
+ val &= ~USB2_INT_ENABLE_UCOM_INTEN;
+ writel(val, usb2_base + USB2_INT_ENABLE);
+
++ if (channel->irq >= 0 && !rcar_gen3_is_any_rphy_initialized(channel))
++ free_irq(channel->irq, channel);
++
+ return 0;
+ }
+
+@@ -503,23 +533,6 @@ static const struct phy_ops rz_g1c_phy_usb2_ops = {
+ .owner = THIS_MODULE,
+ };
+
+-static irqreturn_t rcar_gen3_phy_usb2_irq(int irq, void *_ch)
+-{
+- struct rcar_gen3_chan *ch = _ch;
+- void __iomem *usb2_base = ch->base;
+- u32 status = readl(usb2_base + USB2_OBINTSTA);
+- irqreturn_t ret = IRQ_NONE;
+-
+- if (status & USB2_OBINT_BITS) {
+- dev_vdbg(ch->dev, "%s: %08x\n", __func__, status);
+- writel(USB2_OBINT_BITS, usb2_base + USB2_OBINTSTA);
+- rcar_gen3_device_recognition(ch);
+- ret = IRQ_HANDLED;
+- }
+-
+- return ret;
+-}
+-
+ static const struct of_device_id rcar_gen3_phy_usb2_match_table[] = {
+ {
+ .compatible = "renesas,usb2-phy-r8a77470",
+@@ -598,7 +611,7 @@ static int rcar_gen3_phy_usb2_probe(struct platform_device *pdev)
+ struct phy_provider *provider;
+ struct resource *res;
+ const struct phy_ops *phy_usb2_ops;
+- int irq, ret = 0, i;
++ int ret = 0, i;
+
+ if (!dev->of_node) {
+ dev_err(dev, "This driver needs device tree\n");
+@@ -614,16 +627,8 @@ static int rcar_gen3_phy_usb2_probe(struct platform_device *pdev)
+ if (IS_ERR(channel->base))
+ return PTR_ERR(channel->base);
+
+- /* call request_irq for OTG */
+- irq = platform_get_irq_optional(pdev, 0);
+- if (irq >= 0) {
+- INIT_WORK(&channel->work, rcar_gen3_phy_usb2_work);
+- irq = devm_request_irq(dev, irq, rcar_gen3_phy_usb2_irq,
+- IRQF_SHARED, dev_name(dev), channel);
+- if (irq < 0)
+- dev_err(dev, "No irq handler (%d)\n", irq);
+- }
+-
++ /* get irq number here and request_irq for OTG in phy_init */
++ channel->irq = platform_get_irq_optional(pdev, 0);
+ channel->dr_mode = rcar_gen3_get_dr_mode(dev->of_node);
+ if (channel->dr_mode != USB_DR_MODE_UNKNOWN) {
+ int ret;
+diff --git a/drivers/phy/samsung/phy-exynos5-usbdrd.c b/drivers/phy/samsung/phy-exynos5-usbdrd.c
+index e510732afb8b..7f6279fb4f8f 100644
+--- a/drivers/phy/samsung/phy-exynos5-usbdrd.c
++++ b/drivers/phy/samsung/phy-exynos5-usbdrd.c
+@@ -714,7 +714,9 @@ static int exynos5_usbdrd_phy_calibrate(struct phy *phy)
+ struct phy_usb_instance *inst = phy_get_drvdata(phy);
+ struct exynos5_usbdrd_phy *phy_drd = to_usbdrd_phy(inst);
+
+- return exynos5420_usbdrd_phy_calibrate(phy_drd);
++ if (inst->phy_cfg->id == EXYNOS5_DRDPHY_UTMI)
++ return exynos5420_usbdrd_phy_calibrate(phy_drd);
++ return 0;
+ }
+
+ static const struct phy_ops exynos5_usbdrd_phy_ops = {
+diff --git a/drivers/pinctrl/pinctrl-single.c b/drivers/pinctrl/pinctrl-single.c
+index 1e0614daee9b..a9d511982780 100644
+--- a/drivers/pinctrl/pinctrl-single.c
++++ b/drivers/pinctrl/pinctrl-single.c
+@@ -916,7 +916,7 @@ static int pcs_parse_pinconf(struct pcs_device *pcs, struct device_node *np,
+
+ /* If pinconf isn't supported, don't parse properties in below. */
+ if (!PCS_HAS_PINCONF)
+- return 0;
++ return -ENOTSUPP;
+
+ /* cacluate how much properties are supported in current node */
+ for (i = 0; i < ARRAY_SIZE(prop2); i++) {
+@@ -928,7 +928,7 @@ static int pcs_parse_pinconf(struct pcs_device *pcs, struct device_node *np,
+ nconfs++;
+ }
+ if (!nconfs)
+- return 0;
++ return -ENOTSUPP;
+
+ func->conf = devm_kcalloc(pcs->dev,
+ nconfs, sizeof(struct pcs_conf_vals),
+@@ -1056,9 +1056,12 @@ static int pcs_parse_one_pinctrl_entry(struct pcs_device *pcs,
+
+ if (PCS_HAS_PINCONF && function) {
+ res = pcs_parse_pinconf(pcs, np, function, map);
+- if (res)
++ if (res == 0)
++ *num_maps = 2;
++ else if (res == -ENOTSUPP)
++ *num_maps = 1;
++ else
+ goto free_pingroups;
+- *num_maps = 2;
+ } else {
+ *num_maps = 1;
+ }
+diff --git a/drivers/platform/x86/asus-nb-wmi.c b/drivers/platform/x86/asus-nb-wmi.c
+index c4404d9c1de4..1bb082308c20 100644
+--- a/drivers/platform/x86/asus-nb-wmi.c
++++ b/drivers/platform/x86/asus-nb-wmi.c
+@@ -110,6 +110,16 @@ static struct quirk_entry quirk_asus_forceals = {
+ .wmi_force_als_set = true,
+ };
+
++static struct quirk_entry quirk_asus_ga401i = {
++ .wmi_backlight_power = true,
++ .wmi_backlight_set_devstate = true,
++};
++
++static struct quirk_entry quirk_asus_ga502i = {
++ .wmi_backlight_power = true,
++ .wmi_backlight_set_devstate = true,
++};
++
+ static int dmi_matched(const struct dmi_system_id *dmi)
+ {
+ pr_info("Identified laptop model '%s'\n", dmi->ident);
+@@ -411,6 +421,78 @@ static const struct dmi_system_id asus_quirks[] = {
+ },
+ .driver_data = &quirk_asus_forceals,
+ },
++ {
++ .callback = dmi_matched,
++ .ident = "ASUSTeK COMPUTER INC. GA401IH",
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
++ DMI_MATCH(DMI_PRODUCT_NAME, "GA401IH"),
++ },
++ .driver_data = &quirk_asus_ga401i,
++ },
++ {
++ .callback = dmi_matched,
++ .ident = "ASUSTeK COMPUTER INC. GA401II",
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
++ DMI_MATCH(DMI_PRODUCT_NAME, "GA401II"),
++ },
++ .driver_data = &quirk_asus_ga401i,
++ },
++ {
++ .callback = dmi_matched,
++ .ident = "ASUSTeK COMPUTER INC. GA401IU",
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
++ DMI_MATCH(DMI_PRODUCT_NAME, "GA401IU"),
++ },
++ .driver_data = &quirk_asus_ga401i,
++ },
++ {
++ .callback = dmi_matched,
++ .ident = "ASUSTeK COMPUTER INC. GA401IV",
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
++ DMI_MATCH(DMI_PRODUCT_NAME, "GA401IV"),
++ },
++ .driver_data = &quirk_asus_ga401i,
++ },
++ {
++ .callback = dmi_matched,
++ .ident = "ASUSTeK COMPUTER INC. GA401IVC",
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
++ DMI_MATCH(DMI_PRODUCT_NAME, "GA401IVC"),
++ },
++ .driver_data = &quirk_asus_ga401i,
++ },
++ {
++ .callback = dmi_matched,
++ .ident = "ASUSTeK COMPUTER INC. GA502II",
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
++ DMI_MATCH(DMI_PRODUCT_NAME, "GA502II"),
++ },
++ .driver_data = &quirk_asus_ga502i,
++ },
++ {
++ .callback = dmi_matched,
++ .ident = "ASUSTeK COMPUTER INC. GA502IU",
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
++ DMI_MATCH(DMI_PRODUCT_NAME, "GA502IU"),
++ },
++ .driver_data = &quirk_asus_ga502i,
++ },
++ {
++ .callback = dmi_matched,
++ .ident = "ASUSTeK COMPUTER INC. GA502IV",
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
++ DMI_MATCH(DMI_PRODUCT_NAME, "GA502IV"),
++ },
++ .driver_data = &quirk_asus_ga502i,
++ },
+ {},
+ };
+
+diff --git a/drivers/platform/x86/intel-hid.c b/drivers/platform/x86/intel-hid.c
+index 9ee79b74311c..86261970bd8f 100644
+--- a/drivers/platform/x86/intel-hid.c
++++ b/drivers/platform/x86/intel-hid.c
+@@ -571,7 +571,7 @@ check_acpi_dev(acpi_handle handle, u32 lvl, void *context, void **rv)
+ return AE_OK;
+
+ if (acpi_match_device_ids(dev, ids) == 0)
+- if (acpi_create_platform_device(dev, NULL))
++ if (!IS_ERR_OR_NULL(acpi_create_platform_device(dev, NULL)))
+ dev_info(&dev->dev,
+ "intel-hid: created platform device\n");
+
+diff --git a/drivers/platform/x86/intel-vbtn.c b/drivers/platform/x86/intel-vbtn.c
+index a05b80955dcd..5db8b7ad1f5d 100644
+--- a/drivers/platform/x86/intel-vbtn.c
++++ b/drivers/platform/x86/intel-vbtn.c
+@@ -286,7 +286,7 @@ check_acpi_dev(acpi_handle handle, u32 lvl, void *context, void **rv)
+ return AE_OK;
+
+ if (acpi_match_device_ids(dev, ids) == 0)
+- if (acpi_create_platform_device(dev, NULL))
++ if (!IS_ERR_OR_NULL(acpi_create_platform_device(dev, NULL)))
+ dev_info(&dev->dev,
+ "intel-vbtn: created platform device\n");
+
+diff --git a/drivers/power/supply/88pm860x_battery.c b/drivers/power/supply/88pm860x_battery.c
+index 5ca047b3f58f..23e7d6447ae9 100644
+--- a/drivers/power/supply/88pm860x_battery.c
++++ b/drivers/power/supply/88pm860x_battery.c
+@@ -433,7 +433,7 @@ static void pm860x_init_battery(struct pm860x_battery_info *info)
+ int ret;
+ int data;
+ int bat_remove;
+- int soc;
++ int soc = 0;
+
+ /* measure enable on GPADC1 */
+ data = MEAS1_GP1;
+@@ -496,7 +496,9 @@ static void pm860x_init_battery(struct pm860x_battery_info *info)
+ }
+ mutex_unlock(&info->lock);
+
+- calc_soc(info, OCV_MODE_ACTIVE, &soc);
++ ret = calc_soc(info, OCV_MODE_ACTIVE, &soc);
++ if (ret < 0)
++ goto out;
+
+ data = pm860x_reg_read(info->i2c, PM8607_POWER_UP_LOG);
+ bat_remove = data & BAT_WU_LOG;
+diff --git a/drivers/regulator/core.c b/drivers/regulator/core.c
+index 7486f6e4e613..0cb99bb090ef 100644
+--- a/drivers/regulator/core.c
++++ b/drivers/regulator/core.c
+@@ -5005,7 +5005,6 @@ regulator_register(const struct regulator_desc *regulator_desc,
+ struct regulator_dev *rdev;
+ bool dangling_cfg_gpiod = false;
+ bool dangling_of_gpiod = false;
+- bool reg_device_fail = false;
+ struct device *dev;
+ int ret, i;
+
+@@ -5134,10 +5133,12 @@ regulator_register(const struct regulator_desc *regulator_desc,
+ }
+
+ /* register with sysfs */
++ device_initialize(&rdev->dev);
+ rdev->dev.class = &regulator_class;
+ rdev->dev.parent = dev;
+ dev_set_name(&rdev->dev, "regulator.%lu",
+ (unsigned long) atomic_inc_return(&regulator_no));
++ dev_set_drvdata(&rdev->dev, rdev);
+
+ /* set regulator constraints */
+ if (init_data)
+@@ -5188,12 +5189,9 @@ regulator_register(const struct regulator_desc *regulator_desc,
+ !rdev->desc->fixed_uV)
+ rdev->is_switch = true;
+
+- dev_set_drvdata(&rdev->dev, rdev);
+- ret = device_register(&rdev->dev);
+- if (ret != 0) {
+- reg_device_fail = true;
++ ret = device_add(&rdev->dev);
++ if (ret != 0)
+ goto unset_supplies;
+- }
+
+ rdev_init_debugfs(rdev);
+
+@@ -5215,17 +5213,15 @@ unset_supplies:
+ mutex_unlock(&regulator_list_mutex);
+ wash:
+ kfree(rdev->coupling_desc.coupled_rdevs);
+- kfree(rdev->constraints);
+ mutex_lock(&regulator_list_mutex);
+ regulator_ena_gpio_free(rdev);
+ mutex_unlock(&regulator_list_mutex);
++ put_device(&rdev->dev);
++ rdev = NULL;
+ clean:
+ if (dangling_of_gpiod)
+ gpiod_put(config->ena_gpiod);
+- if (reg_device_fail)
+- put_device(&rdev->dev);
+- else
+- kfree(rdev);
++ kfree(rdev);
+ kfree(config);
+ rinse:
+ if (dangling_cfg_gpiod)
+diff --git a/drivers/reset/reset-intel-gw.c b/drivers/reset/reset-intel-gw.c
+index 854238444616..effc177db80a 100644
+--- a/drivers/reset/reset-intel-gw.c
++++ b/drivers/reset/reset-intel-gw.c
+@@ -15,9 +15,9 @@
+ #define RCU_RST_STAT 0x0024
+ #define RCU_RST_REQ 0x0048
+
+-#define REG_OFFSET GENMASK(31, 16)
+-#define BIT_OFFSET GENMASK(15, 8)
+-#define STAT_BIT_OFFSET GENMASK(7, 0)
++#define REG_OFFSET_MASK GENMASK(31, 16)
++#define BIT_OFFSET_MASK GENMASK(15, 8)
++#define STAT_BIT_OFFSET_MASK GENMASK(7, 0)
+
+ #define to_reset_data(x) container_of(x, struct intel_reset_data, rcdev)
+
+@@ -51,11 +51,11 @@ static u32 id_to_reg_and_bit_offsets(struct intel_reset_data *data,
+ unsigned long id, u32 *rst_req,
+ u32 *req_bit, u32 *stat_bit)
+ {
+- *rst_req = FIELD_GET(REG_OFFSET, id);
+- *req_bit = FIELD_GET(BIT_OFFSET, id);
++ *rst_req = FIELD_GET(REG_OFFSET_MASK, id);
++ *req_bit = FIELD_GET(BIT_OFFSET_MASK, id);
+
+ if (data->soc_data->legacy)
+- *stat_bit = FIELD_GET(STAT_BIT_OFFSET, id);
++ *stat_bit = FIELD_GET(STAT_BIT_OFFSET_MASK, id);
+ else
+ *stat_bit = *req_bit;
+
+@@ -141,14 +141,14 @@ static int intel_reset_xlate(struct reset_controller_dev *rcdev,
+ if (spec->args[1] > 31)
+ return -EINVAL;
+
+- id = FIELD_PREP(REG_OFFSET, spec->args[0]);
+- id |= FIELD_PREP(BIT_OFFSET, spec->args[1]);
++ id = FIELD_PREP(REG_OFFSET_MASK, spec->args[0]);
++ id |= FIELD_PREP(BIT_OFFSET_MASK, spec->args[1]);
+
+ if (data->soc_data->legacy) {
+ if (spec->args[2] > 31)
+ return -EINVAL;
+
+- id |= FIELD_PREP(STAT_BIT_OFFSET, spec->args[2]);
++ id |= FIELD_PREP(STAT_BIT_OFFSET_MASK, spec->args[2]);
+ }
+
+ return id;
+@@ -210,11 +210,11 @@ static int intel_reset_probe(struct platform_device *pdev)
+ if (ret)
+ return ret;
+
+- data->reboot_id = FIELD_PREP(REG_OFFSET, rb_id[0]);
+- data->reboot_id |= FIELD_PREP(BIT_OFFSET, rb_id[1]);
++ data->reboot_id = FIELD_PREP(REG_OFFSET_MASK, rb_id[0]);
++ data->reboot_id |= FIELD_PREP(BIT_OFFSET_MASK, rb_id[1]);
+
+ if (data->soc_data->legacy)
+- data->reboot_id |= FIELD_PREP(STAT_BIT_OFFSET, rb_id[2]);
++ data->reboot_id |= FIELD_PREP(STAT_BIT_OFFSET_MASK, rb_id[2]);
+
+ data->restart_nb.notifier_call = intel_reset_restart_handler;
+ data->restart_nb.priority = 128;
+diff --git a/drivers/s390/block/dasd_diag.c b/drivers/s390/block/dasd_diag.c
+index facb588d09e4..069d6b39cacf 100644
+--- a/drivers/s390/block/dasd_diag.c
++++ b/drivers/s390/block/dasd_diag.c
+@@ -319,7 +319,7 @@ dasd_diag_check_device(struct dasd_device *device)
+ struct dasd_diag_characteristics *rdc_data;
+ struct vtoc_cms_label *label;
+ struct dasd_block *block;
+- struct dasd_diag_bio bio;
++ struct dasd_diag_bio *bio;
+ unsigned int sb, bsize;
+ blocknum_t end_block;
+ int rc;
+@@ -395,29 +395,36 @@ dasd_diag_check_device(struct dasd_device *device)
+ rc = -ENOMEM;
+ goto out;
+ }
++ bio = kzalloc(sizeof(*bio), GFP_KERNEL);
++ if (bio == NULL) {
++ DBF_DEV_EVENT(DBF_WARNING, device, "%s",
++ "No memory to allocate initialization bio");
++ rc = -ENOMEM;
++ goto out_label;
++ }
+ rc = 0;
+ end_block = 0;
+ /* try all sizes - needed for ECKD devices */
+ for (bsize = 512; bsize <= PAGE_SIZE; bsize <<= 1) {
+ mdsk_init_io(device, bsize, 0, &end_block);
+- memset(&bio, 0, sizeof (struct dasd_diag_bio));
+- bio.type = MDSK_READ_REQ;
+- bio.block_number = private->pt_block + 1;
+- bio.buffer = label;
++ memset(bio, 0, sizeof(*bio));
++ bio->type = MDSK_READ_REQ;
++ bio->block_number = private->pt_block + 1;
++ bio->buffer = label;
+ memset(&private->iob, 0, sizeof (struct dasd_diag_rw_io));
+ private->iob.dev_nr = rdc_data->dev_nr;
+ private->iob.key = 0;
+ private->iob.flags = 0; /* do synchronous io */
+ private->iob.block_count = 1;
+ private->iob.interrupt_params = 0;
+- private->iob.bio_list = &bio;
++ private->iob.bio_list = bio;
+ private->iob.flaga = DASD_DIAG_FLAGA_DEFAULT;
+ rc = dia250(&private->iob, RW_BIO);
+ if (rc == 3) {
+ pr_warn("%s: A 64-bit DIAG call failed\n",
+ dev_name(&device->cdev->dev));
+ rc = -EOPNOTSUPP;
+- goto out_label;
++ goto out_bio;
+ }
+ mdsk_term_io(device);
+ if (rc == 0)
+@@ -427,7 +434,7 @@ dasd_diag_check_device(struct dasd_device *device)
+ pr_warn("%s: Accessing the DASD failed because of an incorrect format (rc=%d)\n",
+ dev_name(&device->cdev->dev), rc);
+ rc = -EIO;
+- goto out_label;
++ goto out_bio;
+ }
+ /* check for label block */
+ if (memcmp(label->label_id, DASD_DIAG_CMS1,
+@@ -457,6 +464,8 @@ dasd_diag_check_device(struct dasd_device *device)
+ (rc == 4) ? ", read-only device" : "");
+ rc = 0;
+ }
++out_bio:
++ kfree(bio);
+ out_label:
+ free_page((long) label);
+ out:
+diff --git a/drivers/s390/net/qeth_core_main.c b/drivers/s390/net/qeth_core_main.c
+index 60d675fefac7..40ddd1786430 100644
+--- a/drivers/s390/net/qeth_core_main.c
++++ b/drivers/s390/net/qeth_core_main.c
+@@ -202,12 +202,17 @@ EXPORT_SYMBOL_GPL(qeth_threads_running);
+ void qeth_clear_working_pool_list(struct qeth_card *card)
+ {
+ struct qeth_buffer_pool_entry *pool_entry, *tmp;
++ struct qeth_qdio_q *queue = card->qdio.in_q;
++ unsigned int i;
+
+ QETH_CARD_TEXT(card, 5, "clwrklst");
+ list_for_each_entry_safe(pool_entry, tmp,
+ &card->qdio.in_buf_pool.entry_list, list){
+ list_del(&pool_entry->list);
+ }
++
++ for (i = 0; i < ARRAY_SIZE(queue->bufs); i++)
++ queue->bufs[i].pool_entry = NULL;
+ }
+ EXPORT_SYMBOL_GPL(qeth_clear_working_pool_list);
+
+@@ -2671,7 +2676,7 @@ static struct qeth_buffer_pool_entry *qeth_find_free_buffer_pool_entry(
+ static int qeth_init_input_buffer(struct qeth_card *card,
+ struct qeth_qdio_buffer *buf)
+ {
+- struct qeth_buffer_pool_entry *pool_entry;
++ struct qeth_buffer_pool_entry *pool_entry = buf->pool_entry;
+ int i;
+
+ if ((card->options.cq == QETH_CQ_ENABLED) && (!buf->rx_skb)) {
+@@ -2682,9 +2687,13 @@ static int qeth_init_input_buffer(struct qeth_card *card,
+ return -ENOMEM;
+ }
+
+- pool_entry = qeth_find_free_buffer_pool_entry(card);
+- if (!pool_entry)
+- return -ENOBUFS;
++ if (!pool_entry) {
++ pool_entry = qeth_find_free_buffer_pool_entry(card);
++ if (!pool_entry)
++ return -ENOBUFS;
++
++ buf->pool_entry = pool_entry;
++ }
+
+ /*
+ * since the buffer is accessed only from the input_tasklet
+@@ -2692,8 +2701,6 @@ static int qeth_init_input_buffer(struct qeth_card *card,
+ * the QETH_IN_BUF_REQUEUE_THRESHOLD we should never run out off
+ * buffers
+ */
+-
+- buf->pool_entry = pool_entry;
+ for (i = 0; i < QETH_MAX_BUFFER_ELEMENTS(card); ++i) {
+ buf->buffer->element[i].length = PAGE_SIZE;
+ buf->buffer->element[i].addr =
+@@ -5521,6 +5528,7 @@ static unsigned int qeth_rx_poll(struct qeth_card *card, int budget)
+ if (done) {
+ QETH_CARD_STAT_INC(card, rx_bufs);
+ qeth_put_buffer_pool_entry(card, buffer->pool_entry);
++ buffer->pool_entry = NULL;
+ qeth_queue_input_buffer(card, card->rx.b_index);
+ card->rx.b_count--;
+
+diff --git a/drivers/s390/net/qeth_l2_main.c b/drivers/s390/net/qeth_l2_main.c
+index 0bd5b09e7a22..37740cc7a44a 100644
+--- a/drivers/s390/net/qeth_l2_main.c
++++ b/drivers/s390/net/qeth_l2_main.c
+@@ -1071,6 +1071,10 @@ static void qeth_bridge_state_change(struct qeth_card *card,
+ int extrasize;
+
+ QETH_CARD_TEXT(card, 2, "brstchng");
++ if (qports->num_entries == 0) {
++ QETH_CARD_TEXT(card, 2, "BPempty");
++ return;
++ }
+ if (qports->entry_length != sizeof(struct qeth_sbp_port_entry)) {
+ QETH_CARD_TEXT_(card, 2, "BPsz%04x", qports->entry_length);
+ return;
+diff --git a/drivers/scsi/arm/cumana_2.c b/drivers/scsi/arm/cumana_2.c
+index a1f3e9ee4e63..14e1d001253c 100644
+--- a/drivers/scsi/arm/cumana_2.c
++++ b/drivers/scsi/arm/cumana_2.c
+@@ -450,7 +450,7 @@ static int cumanascsi2_probe(struct expansion_card *ec,
+
+ if (info->info.scsi.dma != NO_DMA)
+ free_dma(info->info.scsi.dma);
+- free_irq(ec->irq, host);
++ free_irq(ec->irq, info);
+
+ out_release:
+ fas216_release(host);
+diff --git a/drivers/scsi/arm/eesox.c b/drivers/scsi/arm/eesox.c
+index 134f040d58e2..f441ec8eb93d 100644
+--- a/drivers/scsi/arm/eesox.c
++++ b/drivers/scsi/arm/eesox.c
+@@ -571,7 +571,7 @@ static int eesoxscsi_probe(struct expansion_card *ec, const struct ecard_id *id)
+
+ if (info->info.scsi.dma != NO_DMA)
+ free_dma(info->info.scsi.dma);
+- free_irq(ec->irq, host);
++ free_irq(ec->irq, info);
+
+ out_remove:
+ fas216_remove(host);
+diff --git a/drivers/scsi/arm/powertec.c b/drivers/scsi/arm/powertec.c
+index c795537a671c..2dc0df005cb3 100644
+--- a/drivers/scsi/arm/powertec.c
++++ b/drivers/scsi/arm/powertec.c
+@@ -378,7 +378,7 @@ static int powertecscsi_probe(struct expansion_card *ec,
+
+ if (info->info.scsi.dma != NO_DMA)
+ free_dma(info->info.scsi.dma);
+- free_irq(ec->irq, host);
++ free_irq(ec->irq, info);
+
+ out_release:
+ fas216_release(host);
+diff --git a/drivers/scsi/megaraid/megaraid_sas_base.c b/drivers/scsi/megaraid/megaraid_sas_base.c
+index babe85d7b537..5a95c56ff7c2 100644
+--- a/drivers/scsi/megaraid/megaraid_sas_base.c
++++ b/drivers/scsi/megaraid/megaraid_sas_base.c
+@@ -5602,9 +5602,13 @@ megasas_setup_irqs_msix(struct megasas_instance *instance, u8 is_probe)
+ &instance->irq_context[i])) {
+ dev_err(&instance->pdev->dev,
+ "Failed to register IRQ for vector %d.\n", i);
+- for (j = 0; j < i; j++)
++ for (j = 0; j < i; j++) {
++ if (j < instance->low_latency_index_start)
++ irq_set_affinity_hint(
++ pci_irq_vector(pdev, j), NULL);
+ free_irq(pci_irq_vector(pdev, j),
+ &instance->irq_context[j]);
++ }
+ /* Retry irq register for IO_APIC*/
+ instance->msix_vectors = 0;
+ instance->msix_load_balance = false;
+@@ -5642,6 +5646,9 @@ megasas_destroy_irqs(struct megasas_instance *instance) {
+
+ if (instance->msix_vectors)
+ for (i = 0; i < instance->msix_vectors; i++) {
++ if (i < instance->low_latency_index_start)
++ irq_set_affinity_hint(
++ pci_irq_vector(instance->pdev, i), NULL);
+ free_irq(pci_irq_vector(instance->pdev, i),
+ &instance->irq_context[i]);
+ }
+diff --git a/drivers/scsi/mesh.c b/drivers/scsi/mesh.c
+index 74fb50644678..4dd50db90677 100644
+--- a/drivers/scsi/mesh.c
++++ b/drivers/scsi/mesh.c
+@@ -1045,6 +1045,8 @@ static void handle_error(struct mesh_state *ms)
+ while ((in_8(&mr->bus_status1) & BS1_RST) != 0)
+ udelay(1);
+ printk("done\n");
++ if (ms->dma_started)
++ halt_dma(ms);
+ handle_reset(ms);
+ /* request_q is empty, no point in mesh_start() */
+ return;
+@@ -1357,7 +1359,8 @@ static void halt_dma(struct mesh_state *ms)
+ ms->conn_tgt, ms->data_ptr, scsi_bufflen(cmd),
+ ms->tgts[ms->conn_tgt].data_goes_out);
+ }
+- scsi_dma_unmap(cmd);
++ if (cmd)
++ scsi_dma_unmap(cmd);
+ ms->dma_started = 0;
+ }
+
+@@ -1712,6 +1715,9 @@ static int mesh_host_reset(struct scsi_cmnd *cmd)
+
+ spin_lock_irqsave(ms->host->host_lock, flags);
+
++ if (ms->dma_started)
++ halt_dma(ms);
++
+ /* Reset the controller & dbdma channel */
+ out_le32(&md->control, (RUN|PAUSE|FLUSH|WAKE) << 16); /* stop dma */
+ out_8(&mr->exception, 0xff); /* clear all exception bits */
+diff --git a/drivers/scsi/scsi_debug.c b/drivers/scsi/scsi_debug.c
+index 4c6c448dc2df..c17ff74164e8 100644
+--- a/drivers/scsi/scsi_debug.c
++++ b/drivers/scsi/scsi_debug.c
+@@ -5297,6 +5297,12 @@ static int __init scsi_debug_init(void)
+ pr_err("submit_queues must be 1 or more\n");
+ return -EINVAL;
+ }
++
++ if ((sdebug_max_queue > SDEBUG_CANQUEUE) || (sdebug_max_queue < 1)) {
++ pr_err("max_queue must be in range [1, %d]\n", SDEBUG_CANQUEUE);
++ return -EINVAL;
++ }
++
+ sdebug_q_arr = kcalloc(submit_queues, sizeof(struct sdebug_queue),
+ GFP_KERNEL);
+ if (sdebug_q_arr == NULL)
+diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c
+index 7ca32ede5e17..477b6cfff381 100644
+--- a/drivers/scsi/ufs/ufshcd.c
++++ b/drivers/scsi/ufs/ufshcd.c
+@@ -1280,6 +1280,7 @@ static int ufshcd_devfreq_get_dev_status(struct device *dev,
+ unsigned long flags;
+ struct list_head *clk_list = &hba->clk_list_head;
+ struct ufs_clk_info *clki;
++ ktime_t curr_t;
+
+ if (!ufshcd_is_clkscaling_supported(hba))
+ return -EINVAL;
+@@ -1287,6 +1288,7 @@ static int ufshcd_devfreq_get_dev_status(struct device *dev,
+ memset(stat, 0, sizeof(*stat));
+
+ spin_lock_irqsave(hba->host->host_lock, flags);
++ curr_t = ktime_get();
+ if (!scaling->window_start_t)
+ goto start_window;
+
+@@ -1298,18 +1300,17 @@ static int ufshcd_devfreq_get_dev_status(struct device *dev,
+ */
+ stat->current_frequency = clki->curr_freq;
+ if (scaling->is_busy_started)
+- scaling->tot_busy_t += ktime_to_us(ktime_sub(ktime_get(),
+- scaling->busy_start_t));
++ scaling->tot_busy_t += ktime_us_delta(curr_t,
++ scaling->busy_start_t);
+
+- stat->total_time = jiffies_to_usecs((long)jiffies -
+- (long)scaling->window_start_t);
++ stat->total_time = ktime_us_delta(curr_t, scaling->window_start_t);
+ stat->busy_time = scaling->tot_busy_t;
+ start_window:
+- scaling->window_start_t = jiffies;
++ scaling->window_start_t = curr_t;
+ scaling->tot_busy_t = 0;
+
+ if (hba->outstanding_reqs) {
+- scaling->busy_start_t = ktime_get();
++ scaling->busy_start_t = curr_t;
+ scaling->is_busy_started = true;
+ } else {
+ scaling->busy_start_t = 0;
+@@ -1860,6 +1861,7 @@ static void ufshcd_exit_clk_gating(struct ufs_hba *hba)
+ static void ufshcd_clk_scaling_start_busy(struct ufs_hba *hba)
+ {
+ bool queue_resume_work = false;
++ ktime_t curr_t = ktime_get();
+
+ if (!ufshcd_is_clkscaling_supported(hba))
+ return;
+@@ -1875,13 +1877,13 @@ static void ufshcd_clk_scaling_start_busy(struct ufs_hba *hba)
+ &hba->clk_scaling.resume_work);
+
+ if (!hba->clk_scaling.window_start_t) {
+- hba->clk_scaling.window_start_t = jiffies;
++ hba->clk_scaling.window_start_t = curr_t;
+ hba->clk_scaling.tot_busy_t = 0;
+ hba->clk_scaling.is_busy_started = false;
+ }
+
+ if (!hba->clk_scaling.is_busy_started) {
+- hba->clk_scaling.busy_start_t = ktime_get();
++ hba->clk_scaling.busy_start_t = curr_t;
+ hba->clk_scaling.is_busy_started = true;
+ }
+ }
+diff --git a/drivers/scsi/ufs/ufshcd.h b/drivers/scsi/ufs/ufshcd.h
+index 6ffc08ad85f6..2315ecc20927 100644
+--- a/drivers/scsi/ufs/ufshcd.h
++++ b/drivers/scsi/ufs/ufshcd.h
+@@ -409,7 +409,7 @@ struct ufs_saved_pwr_info {
+ struct ufs_clk_scaling {
+ int active_reqs;
+ unsigned long tot_busy_t;
+- unsigned long window_start_t;
++ ktime_t window_start_t;
+ ktime_t busy_start_t;
+ struct device_attribute enable_attr;
+ struct ufs_saved_pwr_info saved_pwr_info;
+diff --git a/drivers/soc/qcom/pdr_interface.c b/drivers/soc/qcom/pdr_interface.c
+index 17ad3b8698e1..cd8828c85723 100644
+--- a/drivers/soc/qcom/pdr_interface.c
++++ b/drivers/soc/qcom/pdr_interface.c
+@@ -282,13 +282,15 @@ static void pdr_indack_work(struct work_struct *work)
+
+ list_for_each_entry_safe(ind, tmp, &pdr->indack_list, node) {
+ pds = ind->pds;
+- pdr_send_indack_msg(pdr, pds, ind->transaction_id);
+
+ mutex_lock(&pdr->status_lock);
+ pds->state = ind->curr_state;
+ pdr->status(pds->state, pds->service_path, pdr->priv);
+ mutex_unlock(&pdr->status_lock);
+
++ /* Ack the indication after clients release the PD resources */
++ pdr_send_indack_msg(pdr, pds, ind->transaction_id);
++
+ mutex_lock(&pdr->list_lock);
+ list_del(&ind->node);
+ mutex_unlock(&pdr->list_lock);
+diff --git a/drivers/soc/qcom/rpmh-rsc.c b/drivers/soc/qcom/rpmh-rsc.c
+index 3d2104286ee9..a9ccdf2e43b7 100644
+--- a/drivers/soc/qcom/rpmh-rsc.c
++++ b/drivers/soc/qcom/rpmh-rsc.c
+@@ -715,6 +715,7 @@ static struct platform_driver rpmh_driver = {
+ .driver = {
+ .name = "rpmh",
+ .of_match_table = rpmh_drv_match,
++ .suppress_bind_attrs = true,
+ },
+ };
+
+diff --git a/drivers/spi/spi-lantiq-ssc.c b/drivers/spi/spi-lantiq-ssc.c
+index 1fd7ee53d451..049a64451c75 100644
+--- a/drivers/spi/spi-lantiq-ssc.c
++++ b/drivers/spi/spi-lantiq-ssc.c
+@@ -184,6 +184,7 @@ struct lantiq_ssc_spi {
+ unsigned int tx_fifo_size;
+ unsigned int rx_fifo_size;
+ unsigned int base_cs;
++ unsigned int fdx_tx_level;
+ };
+
+ static u32 lantiq_ssc_readl(const struct lantiq_ssc_spi *spi, u32 reg)
+@@ -481,6 +482,7 @@ static void tx_fifo_write(struct lantiq_ssc_spi *spi)
+ u32 data;
+ unsigned int tx_free = tx_fifo_free(spi);
+
++ spi->fdx_tx_level = 0;
+ while (spi->tx_todo && tx_free) {
+ switch (spi->bits_per_word) {
+ case 2 ... 8:
+@@ -509,6 +511,7 @@ static void tx_fifo_write(struct lantiq_ssc_spi *spi)
+
+ lantiq_ssc_writel(spi, data, LTQ_SPI_TB);
+ tx_free--;
++ spi->fdx_tx_level++;
+ }
+ }
+
+@@ -520,6 +523,13 @@ static void rx_fifo_read_full_duplex(struct lantiq_ssc_spi *spi)
+ u32 data;
+ unsigned int rx_fill = rx_fifo_level(spi);
+
++ /*
++ * Wait until all expected data to be shifted in.
++ * Otherwise, rx overrun may occur.
++ */
++ while (rx_fill != spi->fdx_tx_level)
++ rx_fill = rx_fifo_level(spi);
++
+ while (rx_fill) {
+ data = lantiq_ssc_readl(spi, LTQ_SPI_RB);
+
+@@ -899,7 +909,7 @@ static int lantiq_ssc_probe(struct platform_device *pdev)
+ master->bits_per_word_mask = SPI_BPW_RANGE_MASK(2, 8) |
+ SPI_BPW_MASK(16) | SPI_BPW_MASK(32);
+
+- spi->wq = alloc_ordered_workqueue(dev_name(dev), 0);
++ spi->wq = alloc_ordered_workqueue(dev_name(dev), WQ_MEM_RECLAIM);
+ if (!spi->wq) {
+ err = -ENOMEM;
+ goto err_clk_put;
+diff --git a/drivers/spi/spi-rockchip.c b/drivers/spi/spi-rockchip.c
+index 70ef63e0b6b8..02e920535591 100644
+--- a/drivers/spi/spi-rockchip.c
++++ b/drivers/spi/spi-rockchip.c
+@@ -286,7 +286,7 @@ static void rockchip_spi_pio_writer(struct rockchip_spi *rs)
+ static void rockchip_spi_pio_reader(struct rockchip_spi *rs)
+ {
+ u32 words = readl_relaxed(rs->regs + ROCKCHIP_SPI_RXFLR);
+- u32 rx_left = rs->rx_left - words;
++ u32 rx_left = (rs->rx_left > words) ? rs->rx_left - words : 0;
+
+ /* the hardware doesn't allow us to change fifo threshold
+ * level while spi is enabled, so instead make sure to leave
+diff --git a/drivers/spi/spidev.c b/drivers/spi/spidev.c
+index 012a89123067..2400da082563 100644
+--- a/drivers/spi/spidev.c
++++ b/drivers/spi/spidev.c
+@@ -223,6 +223,11 @@ static int spidev_message(struct spidev_data *spidev,
+ for (n = n_xfers, k_tmp = k_xfers, u_tmp = u_xfers;
+ n;
+ n--, k_tmp++, u_tmp++) {
++ /* Ensure that also following allocations from rx_buf/tx_buf will meet
++ * DMA alignment requirements.
++ */
++ unsigned int len_aligned = ALIGN(u_tmp->len, ARCH_KMALLOC_MINALIGN);
++
+ k_tmp->len = u_tmp->len;
+
+ total += k_tmp->len;
+@@ -238,17 +243,17 @@ static int spidev_message(struct spidev_data *spidev,
+
+ if (u_tmp->rx_buf) {
+ /* this transfer needs space in RX bounce buffer */
+- rx_total += k_tmp->len;
++ rx_total += len_aligned;
+ if (rx_total > bufsiz) {
+ status = -EMSGSIZE;
+ goto done;
+ }
+ k_tmp->rx_buf = rx_buf;
+- rx_buf += k_tmp->len;
++ rx_buf += len_aligned;
+ }
+ if (u_tmp->tx_buf) {
+ /* this transfer needs space in TX bounce buffer */
+- tx_total += k_tmp->len;
++ tx_total += len_aligned;
+ if (tx_total > bufsiz) {
+ status = -EMSGSIZE;
+ goto done;
+@@ -258,7 +263,7 @@ static int spidev_message(struct spidev_data *spidev,
+ (uintptr_t) u_tmp->tx_buf,
+ u_tmp->len))
+ goto done;
+- tx_buf += k_tmp->len;
++ tx_buf += len_aligned;
+ }
+
+ k_tmp->cs_change = !!u_tmp->cs_change;
+@@ -292,16 +297,16 @@ static int spidev_message(struct spidev_data *spidev,
+ goto done;
+
+ /* copy any rx data out of bounce buffer */
+- rx_buf = spidev->rx_buffer;
+- for (n = n_xfers, u_tmp = u_xfers; n; n--, u_tmp++) {
++ for (n = n_xfers, k_tmp = k_xfers, u_tmp = u_xfers;
++ n;
++ n--, k_tmp++, u_tmp++) {
+ if (u_tmp->rx_buf) {
+ if (copy_to_user((u8 __user *)
+- (uintptr_t) u_tmp->rx_buf, rx_buf,
++ (uintptr_t) u_tmp->rx_buf, k_tmp->rx_buf,
+ u_tmp->len)) {
+ status = -EFAULT;
+ goto done;
+ }
+- rx_buf += u_tmp->len;
+ }
+ }
+ status = total;
+diff --git a/drivers/staging/media/allegro-dvt/allegro-core.c b/drivers/staging/media/allegro-dvt/allegro-core.c
+index 70f133a842dd..3ed66aae741d 100644
+--- a/drivers/staging/media/allegro-dvt/allegro-core.c
++++ b/drivers/staging/media/allegro-dvt/allegro-core.c
+@@ -3065,9 +3065,9 @@ static int allegro_probe(struct platform_device *pdev)
+ return -EINVAL;
+ }
+ regs = devm_ioremap(&pdev->dev, res->start, resource_size(res));
+- if (IS_ERR(regs)) {
++ if (!regs) {
+ dev_err(&pdev->dev, "failed to map registers\n");
+- return PTR_ERR(regs);
++ return -ENOMEM;
+ }
+ dev->regmap = devm_regmap_init_mmio(&pdev->dev, regs,
+ &allegro_regmap_config);
+@@ -3085,9 +3085,9 @@ static int allegro_probe(struct platform_device *pdev)
+ sram_regs = devm_ioremap(&pdev->dev,
+ sram_res->start,
+ resource_size(sram_res));
+- if (IS_ERR(sram_regs)) {
++ if (!sram_regs) {
+ dev_err(&pdev->dev, "failed to map sram\n");
+- return PTR_ERR(sram_regs);
++ return -ENOMEM;
+ }
+ dev->sram = devm_regmap_init_mmio(&pdev->dev, sram_regs,
+ &allegro_sram_config);
+diff --git a/drivers/staging/media/rkisp1/rkisp1-resizer.c b/drivers/staging/media/rkisp1/rkisp1-resizer.c
+index 87799fbf0363..26d785d98525 100644
+--- a/drivers/staging/media/rkisp1/rkisp1-resizer.c
++++ b/drivers/staging/media/rkisp1/rkisp1-resizer.c
+@@ -427,8 +427,8 @@ static int rkisp1_rsz_enum_mbus_code(struct v4l2_subdev *sd,
+ u32 pad = code->pad;
+ int ret;
+
+- /* supported mbus codes are the same in isp sink pad */
+- code->pad = RKISP1_ISP_PAD_SINK_VIDEO;
++ /* supported mbus codes are the same in isp video src pad */
++ code->pad = RKISP1_ISP_PAD_SOURCE_VIDEO;
+ ret = v4l2_subdev_call(&rsz->rkisp1->isp.sd, pad, enum_mbus_code,
+ &dummy_cfg, code);
+
+@@ -543,11 +543,11 @@ static void rkisp1_rsz_set_sink_fmt(struct rkisp1_resizer *rsz,
+ src_fmt->code = sink_fmt->code;
+
+ sink_fmt->width = clamp_t(u32, format->width,
+- rsz->config->min_rsz_width,
+- rsz->config->max_rsz_width);
++ RKISP1_ISP_MIN_WIDTH,
++ RKISP1_ISP_MAX_WIDTH);
+ sink_fmt->height = clamp_t(u32, format->height,
+- rsz->config->min_rsz_height,
+- rsz->config->max_rsz_height);
++ RKISP1_ISP_MIN_HEIGHT,
++ RKISP1_ISP_MAX_HEIGHT);
+
+ *format = *sink_fmt;
+
+diff --git a/drivers/staging/rtl8192u/r8192U_core.c b/drivers/staging/rtl8192u/r8192U_core.c
+index fcfb9024a83f..6ec65187bef9 100644
+--- a/drivers/staging/rtl8192u/r8192U_core.c
++++ b/drivers/staging/rtl8192u/r8192U_core.c
+@@ -2374,7 +2374,7 @@ static int rtl8192_read_eeprom_info(struct net_device *dev)
+ ret = eprom_read(dev, (EEPROM_TX_PW_INDEX_CCK >> 1));
+ if (ret < 0)
+ return ret;
+- priv->EEPROMTxPowerLevelCCK = ((u16)ret & 0xff) >> 8;
++ priv->EEPROMTxPowerLevelCCK = ((u16)ret & 0xff00) >> 8;
+ } else
+ priv->EEPROMTxPowerLevelCCK = 0x10;
+ RT_TRACE(COMP_EPROM, "CCK Tx Power Levl: 0x%02x\n", priv->EEPROMTxPowerLevelCCK);
+diff --git a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c
+index a1ea9777a444..73b1099c4b45 100644
+--- a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c
++++ b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c
+@@ -2803,6 +2803,7 @@ failed_platform_init:
+
+ static int vchiq_remove(struct platform_device *pdev)
+ {
++ platform_device_unregister(bcm2835_audio);
+ platform_device_unregister(bcm2835_camera);
+ vchiq_debugfs_deinit();
+ device_destroy(vchiq_class, vchiq_devid);
+diff --git a/drivers/thermal/intel/int340x_thermal/processor_thermal_device.c b/drivers/thermal/intel/int340x_thermal/processor_thermal_device.c
+index 297db1d2d960..81e8b15ef405 100644
+--- a/drivers/thermal/intel/int340x_thermal/processor_thermal_device.c
++++ b/drivers/thermal/intel/int340x_thermal/processor_thermal_device.c
+@@ -43,7 +43,7 @@
+ #define PCI_DEVICE_ID_PROC_ICL_THERMAL 0x8a03
+
+ /* JasperLake thermal reporting device */
+-#define PCI_DEVICE_ID_PROC_JSL_THERMAL 0x4503
++#define PCI_DEVICE_ID_PROC_JSL_THERMAL 0x4E03
+
+ /* TigerLake thermal reporting device */
+ #define PCI_DEVICE_ID_PROC_TGL_THERMAL 0x9A03
+diff --git a/drivers/thermal/ti-soc-thermal/ti-thermal-common.c b/drivers/thermal/ti-soc-thermal/ti-thermal-common.c
+index 85776db4bf34..2ce4b19f312a 100644
+--- a/drivers/thermal/ti-soc-thermal/ti-thermal-common.c
++++ b/drivers/thermal/ti-soc-thermal/ti-thermal-common.c
+@@ -169,7 +169,7 @@ int ti_thermal_expose_sensor(struct ti_bandgap *bgp, int id,
+
+ data = ti_bandgap_get_sensor_data(bgp, id);
+
+- if (!IS_ERR_OR_NULL(data))
++ if (IS_ERR_OR_NULL(data))
+ data = ti_thermal_build_data(bgp, id);
+
+ if (!data)
+diff --git a/drivers/usb/cdns3/gadget.c b/drivers/usb/cdns3/gadget.c
+index 4d43f3b28309..ecda80f8b308 100644
+--- a/drivers/usb/cdns3/gadget.c
++++ b/drivers/usb/cdns3/gadget.c
+@@ -242,9 +242,10 @@ int cdns3_allocate_trb_pool(struct cdns3_endpoint *priv_ep)
+ return -ENOMEM;
+
+ priv_ep->alloc_ring_size = ring_size;
+- memset(priv_ep->trb_pool, 0, ring_size);
+ }
+
++ memset(priv_ep->trb_pool, 0, ring_size);
++
+ priv_ep->num_trbs = num_trbs;
+
+ if (!priv_ep->num)
+diff --git a/drivers/usb/core/quirks.c b/drivers/usb/core/quirks.c
+index e0b77674869c..c96c50faccf7 100644
+--- a/drivers/usb/core/quirks.c
++++ b/drivers/usb/core/quirks.c
+@@ -25,17 +25,23 @@ static unsigned int quirk_count;
+
+ static char quirks_param[128];
+
+-static int quirks_param_set(const char *val, const struct kernel_param *kp)
++static int quirks_param_set(const char *value, const struct kernel_param *kp)
+ {
+- char *p, *field;
++ char *val, *p, *field;
+ u16 vid, pid;
+ u32 flags;
+ size_t i;
+ int err;
+
++ val = kstrdup(value, GFP_KERNEL);
++ if (!val)
++ return -ENOMEM;
++
+ err = param_set_copystring(val, kp);
+- if (err)
++ if (err) {
++ kfree(val);
+ return err;
++ }
+
+ mutex_lock(&quirk_mutex);
+
+@@ -60,10 +66,11 @@ static int quirks_param_set(const char *val, const struct kernel_param *kp)
+ if (!quirk_list) {
+ quirk_count = 0;
+ mutex_unlock(&quirk_mutex);
++ kfree(val);
+ return -ENOMEM;
+ }
+
+- for (i = 0, p = (char *)val; p && *p;) {
++ for (i = 0, p = val; p && *p;) {
+ /* Each entry consists of VID:PID:flags */
+ field = strsep(&p, ":");
+ if (!field)
+@@ -144,6 +151,7 @@ static int quirks_param_set(const char *val, const struct kernel_param *kp)
+
+ unlock:
+ mutex_unlock(&quirk_mutex);
++ kfree(val);
+
+ return 0;
+ }
+diff --git a/drivers/usb/dwc2/platform.c b/drivers/usb/dwc2/platform.c
+index 797afa99ef3b..4ad85fa2c932 100644
+--- a/drivers/usb/dwc2/platform.c
++++ b/drivers/usb/dwc2/platform.c
+@@ -543,6 +543,7 @@ static int dwc2_driver_probe(struct platform_device *dev)
+ if (hsotg->gadget_enabled) {
+ retval = usb_add_gadget_udc(hsotg->dev, &hsotg->gadget);
+ if (retval) {
++ hsotg->gadget.udc = NULL;
+ dwc2_hsotg_remove(hsotg);
+ goto error_init;
+ }
+@@ -554,7 +555,8 @@ error_init:
+ if (hsotg->params.activate_stm_id_vb_detection)
+ regulator_disable(hsotg->usb33d);
+ error:
+- dwc2_lowlevel_hw_disable(hsotg);
++ if (hsotg->dr_mode != USB_DR_MODE_PERIPHERAL)
++ dwc2_lowlevel_hw_disable(hsotg);
+ return retval;
+ }
+
+diff --git a/drivers/usb/gadget/function/f_uac2.c b/drivers/usb/gadget/function/f_uac2.c
+index db2d4980cb35..3633df6d7610 100644
+--- a/drivers/usb/gadget/function/f_uac2.c
++++ b/drivers/usb/gadget/function/f_uac2.c
+@@ -215,10 +215,7 @@ static struct uac2_ac_header_descriptor ac_hdr_desc = {
+ .bDescriptorSubtype = UAC_MS_HEADER,
+ .bcdADC = cpu_to_le16(0x200),
+ .bCategory = UAC2_FUNCTION_IO_BOX,
+- .wTotalLength = cpu_to_le16(sizeof in_clk_src_desc
+- + sizeof out_clk_src_desc + sizeof usb_out_it_desc
+- + sizeof io_in_it_desc + sizeof usb_in_ot_desc
+- + sizeof io_out_ot_desc),
++ /* .wTotalLength = DYNAMIC */
+ .bmControls = 0,
+ };
+
+@@ -501,7 +498,7 @@ static void setup_descriptor(struct f_uac2_opts *opts)
+ as_in_hdr_desc.bTerminalLink = usb_in_ot_desc.bTerminalID;
+
+ iad_desc.bInterfaceCount = 1;
+- ac_hdr_desc.wTotalLength = 0;
++ ac_hdr_desc.wTotalLength = cpu_to_le16(sizeof(ac_hdr_desc));
+
+ if (EPIN_EN(opts)) {
+ u16 len = le16_to_cpu(ac_hdr_desc.wTotalLength);
+diff --git a/drivers/usb/gadget/udc/bdc/bdc_core.c b/drivers/usb/gadget/udc/bdc/bdc_core.c
+index 02a3a774670b..2dca11f0a744 100644
+--- a/drivers/usb/gadget/udc/bdc/bdc_core.c
++++ b/drivers/usb/gadget/udc/bdc/bdc_core.c
+@@ -282,6 +282,7 @@ static void bdc_mem_init(struct bdc *bdc, bool reinit)
+ * in that case reinit is passed as 1
+ */
+ if (reinit) {
++ int i;
+ /* Enable interrupts */
+ temp = bdc_readl(bdc->regs, BDC_BDCSC);
+ temp |= BDC_GIE;
+@@ -291,6 +292,9 @@ static void bdc_mem_init(struct bdc *bdc, bool reinit)
+ /* Initialize SRR to 0 */
+ memset(bdc->srr.sr_bds, 0,
+ NUM_SR_ENTRIES * sizeof(struct bdc_bd));
++ /* clear ep flags to avoid post disconnect stops/deconfigs */
++ for (i = 1; i < bdc->num_eps; ++i)
++ bdc->bdc_ep_array[i]->flags = 0;
+ } else {
+ /* One time initiaization only */
+ /* Enable status report function pointers */
+@@ -599,9 +603,14 @@ static int bdc_remove(struct platform_device *pdev)
+ static int bdc_suspend(struct device *dev)
+ {
+ struct bdc *bdc = dev_get_drvdata(dev);
++ int ret;
+
+- clk_disable_unprepare(bdc->clk);
+- return 0;
++ /* Halt the controller */
++ ret = bdc_stop(bdc);
++ if (!ret)
++ clk_disable_unprepare(bdc->clk);
++
++ return ret;
+ }
+
+ static int bdc_resume(struct device *dev)
+diff --git a/drivers/usb/gadget/udc/bdc/bdc_ep.c b/drivers/usb/gadget/udc/bdc/bdc_ep.c
+index d49c6dc1082d..9ddc0b4e92c9 100644
+--- a/drivers/usb/gadget/udc/bdc/bdc_ep.c
++++ b/drivers/usb/gadget/udc/bdc/bdc_ep.c
+@@ -615,7 +615,6 @@ int bdc_ep_enable(struct bdc_ep *ep)
+ }
+ bdc_dbg_bd_list(bdc, ep);
+ /* only for ep0: config ep is called for ep0 from connect event */
+- ep->flags |= BDC_EP_ENABLED;
+ if (ep->ep_num == 1)
+ return ret;
+
+@@ -759,10 +758,13 @@ static int ep_dequeue(struct bdc_ep *ep, struct bdc_req *req)
+ __func__, ep->name, start_bdi, end_bdi);
+ dev_dbg(bdc->dev, "ep_dequeue ep=%p ep->desc=%p\n",
+ ep, (void *)ep->usb_ep.desc);
+- /* Stop the ep to see where the HW is ? */
+- ret = bdc_stop_ep(bdc, ep->ep_num);
+- /* if there is an issue with stopping ep, then no need to go further */
+- if (ret)
++ /* if still connected, stop the ep to see where the HW is ? */
++ if (!(bdc_readl(bdc->regs, BDC_USPC) & BDC_PST_MASK)) {
++ ret = bdc_stop_ep(bdc, ep->ep_num);
++ /* if there is an issue, then no need to go further */
++ if (ret)
++ return 0;
++ } else
+ return 0;
+
+ /*
+@@ -1911,7 +1913,9 @@ static int bdc_gadget_ep_disable(struct usb_ep *_ep)
+ __func__, ep->name, ep->flags);
+
+ if (!(ep->flags & BDC_EP_ENABLED)) {
+- dev_warn(bdc->dev, "%s is already disabled\n", ep->name);
++ if (bdc->gadget.speed != USB_SPEED_UNKNOWN)
++ dev_warn(bdc->dev, "%s is already disabled\n",
++ ep->name);
+ return 0;
+ }
+ spin_lock_irqsave(&bdc->lock, flags);
+diff --git a/drivers/usb/gadget/udc/net2280.c b/drivers/usb/gadget/udc/net2280.c
+index 5eff85eeaa5a..7530bd9a08c4 100644
+--- a/drivers/usb/gadget/udc/net2280.c
++++ b/drivers/usb/gadget/udc/net2280.c
+@@ -3781,8 +3781,10 @@ static int net2280_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ return 0;
+
+ done:
+- if (dev)
++ if (dev) {
+ net2280_remove(pdev);
++ kfree(dev);
++ }
+ return retval;
+ }
+
+diff --git a/drivers/usb/mtu3/mtu3_core.c b/drivers/usb/mtu3/mtu3_core.c
+index 9dd02160cca9..e3780d4d6514 100644
+--- a/drivers/usb/mtu3/mtu3_core.c
++++ b/drivers/usb/mtu3/mtu3_core.c
+@@ -131,8 +131,12 @@ static void mtu3_device_disable(struct mtu3 *mtu)
+ mtu3_setbits(ibase, SSUSB_U2_CTRL(0),
+ SSUSB_U2_PORT_DIS | SSUSB_U2_PORT_PDN);
+
+- if (mtu->ssusb->dr_mode == USB_DR_MODE_OTG)
++ if (mtu->ssusb->dr_mode == USB_DR_MODE_OTG) {
+ mtu3_clrbits(ibase, SSUSB_U2_CTRL(0), SSUSB_U2_PORT_OTG_SEL);
++ if (mtu->is_u3_ip)
++ mtu3_clrbits(ibase, SSUSB_U3_CTRL(0),
++ SSUSB_U3_PORT_DUAL_MODE);
++ }
+
+ mtu3_setbits(ibase, U3D_SSUSB_IP_PW_CTRL2, SSUSB_IP_DEV_PDN);
+ }
+diff --git a/drivers/usb/serial/cp210x.c b/drivers/usb/serial/cp210x.c
+index f5143eedbc48..a90801ef0055 100644
+--- a/drivers/usb/serial/cp210x.c
++++ b/drivers/usb/serial/cp210x.c
+@@ -272,6 +272,8 @@ static struct usb_serial_driver cp210x_device = {
+ .break_ctl = cp210x_break_ctl,
+ .set_termios = cp210x_set_termios,
+ .tx_empty = cp210x_tx_empty,
++ .throttle = usb_serial_generic_throttle,
++ .unthrottle = usb_serial_generic_unthrottle,
+ .tiocmget = cp210x_tiocmget,
+ .tiocmset = cp210x_tiocmset,
+ .attach = cp210x_attach,
+@@ -915,6 +917,7 @@ static void cp210x_get_termios_port(struct usb_serial_port *port,
+ u32 baud;
+ u16 bits;
+ u32 ctl_hs;
++ u32 flow_repl;
+
+ cp210x_read_u32_reg(port, CP210X_GET_BAUDRATE, &baud);
+
+@@ -1015,6 +1018,22 @@ static void cp210x_get_termios_port(struct usb_serial_port *port,
+ ctl_hs = le32_to_cpu(flow_ctl.ulControlHandshake);
+ if (ctl_hs & CP210X_SERIAL_CTS_HANDSHAKE) {
+ dev_dbg(dev, "%s - flow control = CRTSCTS\n", __func__);
++ /*
++ * When the port is closed, the CP210x hardware disables
++ * auto-RTS and RTS is deasserted but it leaves auto-CTS when
++ * in hardware flow control mode. When re-opening the port, if
++ * auto-CTS is enabled on the cp210x, then auto-RTS must be
++ * re-enabled in the driver.
++ */
++ flow_repl = le32_to_cpu(flow_ctl.ulFlowReplace);
++ flow_repl &= ~CP210X_SERIAL_RTS_MASK;
++ flow_repl |= CP210X_SERIAL_RTS_SHIFT(CP210X_SERIAL_RTS_FLOW_CTL);
++ flow_ctl.ulFlowReplace = cpu_to_le32(flow_repl);
++ cp210x_write_reg_block(port,
++ CP210X_SET_FLOW,
++ &flow_ctl,
++ sizeof(flow_ctl));
++
+ cflag |= CRTSCTS;
+ } else {
+ dev_dbg(dev, "%s - flow control = NONE\n", __func__);
+diff --git a/drivers/usb/serial/iuu_phoenix.c b/drivers/usb/serial/iuu_phoenix.c
+index b8dfeb4fb2ed..ffbb2a8901b2 100644
+--- a/drivers/usb/serial/iuu_phoenix.c
++++ b/drivers/usb/serial/iuu_phoenix.c
+@@ -353,10 +353,11 @@ static void iuu_led_activity_on(struct urb *urb)
+ struct usb_serial_port *port = urb->context;
+ int result;
+ char *buf_ptr = port->write_urb->transfer_buffer;
+- *buf_ptr++ = IUU_SET_LED;
++
+ if (xmas) {
+- get_random_bytes(buf_ptr, 6);
+- *(buf_ptr+7) = 1;
++ buf_ptr[0] = IUU_SET_LED;
++ get_random_bytes(buf_ptr + 1, 6);
++ buf_ptr[7] = 1;
+ } else {
+ iuu_rgbf_fill_buffer(buf_ptr, 255, 255, 0, 0, 0, 0, 255);
+ }
+@@ -374,13 +375,14 @@ static void iuu_led_activity_off(struct urb *urb)
+ struct usb_serial_port *port = urb->context;
+ int result;
+ char *buf_ptr = port->write_urb->transfer_buffer;
++
+ if (xmas) {
+ iuu_rxcmd(urb);
+ return;
+- } else {
+- *buf_ptr++ = IUU_SET_LED;
+- iuu_rgbf_fill_buffer(buf_ptr, 0, 0, 255, 255, 0, 0, 255);
+ }
++
++ iuu_rgbf_fill_buffer(buf_ptr, 0, 0, 255, 255, 0, 0, 255);
++
+ usb_fill_bulk_urb(port->write_urb, port->serial->dev,
+ usb_sndbulkpipe(port->serial->dev,
+ port->bulk_out_endpointAddress),
+diff --git a/drivers/vdpa/vdpa_sim/vdpa_sim.c b/drivers/vdpa/vdpa_sim/vdpa_sim.c
+index 01c456f7c1f7..e2dc8edd680e 100644
+--- a/drivers/vdpa/vdpa_sim/vdpa_sim.c
++++ b/drivers/vdpa/vdpa_sim/vdpa_sim.c
+@@ -70,6 +70,8 @@ struct vdpasim {
+ u32 status;
+ u32 generation;
+ u64 features;
++ /* spinlock to synchronize iommu table */
++ spinlock_t iommu_lock;
+ };
+
+ static struct vdpasim *vdpasim_dev;
+@@ -118,7 +120,9 @@ static void vdpasim_reset(struct vdpasim *vdpasim)
+ for (i = 0; i < VDPASIM_VQ_NUM; i++)
+ vdpasim_vq_reset(&vdpasim->vqs[i]);
+
++ spin_lock(&vdpasim->iommu_lock);
+ vhost_iotlb_reset(vdpasim->iommu);
++ spin_unlock(&vdpasim->iommu_lock);
+
+ vdpasim->features = 0;
+ vdpasim->status = 0;
+@@ -235,8 +239,10 @@ static dma_addr_t vdpasim_map_page(struct device *dev, struct page *page,
+ /* For simplicity, use identical mapping to avoid e.g iova
+ * allocator.
+ */
++ spin_lock(&vdpasim->iommu_lock);
+ ret = vhost_iotlb_add_range(iommu, pa, pa + size - 1,
+ pa, dir_to_perm(dir));
++ spin_unlock(&vdpasim->iommu_lock);
+ if (ret)
+ return DMA_MAPPING_ERROR;
+
+@@ -250,8 +256,10 @@ static void vdpasim_unmap_page(struct device *dev, dma_addr_t dma_addr,
+ struct vdpasim *vdpasim = dev_to_sim(dev);
+ struct vhost_iotlb *iommu = vdpasim->iommu;
+
++ spin_lock(&vdpasim->iommu_lock);
+ vhost_iotlb_del_range(iommu, (u64)dma_addr,
+ (u64)dma_addr + size - 1);
++ spin_unlock(&vdpasim->iommu_lock);
+ }
+
+ static void *vdpasim_alloc_coherent(struct device *dev, size_t size,
+@@ -263,9 +271,10 @@ static void *vdpasim_alloc_coherent(struct device *dev, size_t size,
+ void *addr = kmalloc(size, flag);
+ int ret;
+
+- if (!addr)
++ spin_lock(&vdpasim->iommu_lock);
++ if (!addr) {
+ *dma_addr = DMA_MAPPING_ERROR;
+- else {
++ } else {
+ u64 pa = virt_to_phys(addr);
+
+ ret = vhost_iotlb_add_range(iommu, (u64)pa,
+@@ -278,6 +287,7 @@ static void *vdpasim_alloc_coherent(struct device *dev, size_t size,
+ } else
+ *dma_addr = (dma_addr_t)pa;
+ }
++ spin_unlock(&vdpasim->iommu_lock);
+
+ return addr;
+ }
+@@ -289,8 +299,11 @@ static void vdpasim_free_coherent(struct device *dev, size_t size,
+ struct vdpasim *vdpasim = dev_to_sim(dev);
+ struct vhost_iotlb *iommu = vdpasim->iommu;
+
++ spin_lock(&vdpasim->iommu_lock);
+ vhost_iotlb_del_range(iommu, (u64)dma_addr,
+ (u64)dma_addr + size - 1);
++ spin_unlock(&vdpasim->iommu_lock);
++
+ kfree(phys_to_virt((uintptr_t)dma_addr));
+ }
+
+@@ -531,6 +544,7 @@ static int vdpasim_set_map(struct vdpa_device *vdpa,
+ u64 start = 0ULL, last = 0ULL - 1;
+ int ret;
+
++ spin_lock(&vdpasim->iommu_lock);
+ vhost_iotlb_reset(vdpasim->iommu);
+
+ for (map = vhost_iotlb_itree_first(iotlb, start, last); map;
+@@ -540,10 +554,12 @@ static int vdpasim_set_map(struct vdpa_device *vdpa,
+ if (ret)
+ goto err;
+ }
++ spin_unlock(&vdpasim->iommu_lock);
+ return 0;
+
+ err:
+ vhost_iotlb_reset(vdpasim->iommu);
++ spin_unlock(&vdpasim->iommu_lock);
+ return ret;
+ }
+
+@@ -551,16 +567,23 @@ static int vdpasim_dma_map(struct vdpa_device *vdpa, u64 iova, u64 size,
+ u64 pa, u32 perm)
+ {
+ struct vdpasim *vdpasim = vdpa_to_sim(vdpa);
++ int ret;
+
+- return vhost_iotlb_add_range(vdpasim->iommu, iova,
+- iova + size - 1, pa, perm);
++ spin_lock(&vdpasim->iommu_lock);
++ ret = vhost_iotlb_add_range(vdpasim->iommu, iova, iova + size - 1, pa,
++ perm);
++ spin_unlock(&vdpasim->iommu_lock);
++
++ return ret;
+ }
+
+ static int vdpasim_dma_unmap(struct vdpa_device *vdpa, u64 iova, u64 size)
+ {
+ struct vdpasim *vdpasim = vdpa_to_sim(vdpa);
+
++ spin_lock(&vdpasim->iommu_lock);
+ vhost_iotlb_del_range(vdpasim->iommu, iova, iova + size - 1);
++ spin_unlock(&vdpasim->iommu_lock);
+
+ return 0;
+ }
+diff --git a/drivers/video/console/newport_con.c b/drivers/video/console/newport_con.c
+index 00dddf6e08b0..2d2ee17052e8 100644
+--- a/drivers/video/console/newport_con.c
++++ b/drivers/video/console/newport_con.c
+@@ -32,6 +32,8 @@
+ #include <linux/linux_logo.h>
+ #include <linux/font.h>
+
++#define NEWPORT_LEN 0x10000
++
+ #define FONT_DATA ((unsigned char *)font_vga_8x16.data)
+
+ /* borrowed from fbcon.c */
+@@ -43,6 +45,7 @@
+ static unsigned char *font_data[MAX_NR_CONSOLES];
+
+ static struct newport_regs *npregs;
++static unsigned long newport_addr;
+
+ static int logo_active;
+ static int topscan;
+@@ -702,7 +705,6 @@ const struct consw newport_con = {
+ static int newport_probe(struct gio_device *dev,
+ const struct gio_device_id *id)
+ {
+- unsigned long newport_addr;
+ int err;
+
+ if (!dev->resource.start)
+@@ -712,7 +714,7 @@ static int newport_probe(struct gio_device *dev,
+ return -EBUSY; /* we only support one Newport as console */
+
+ newport_addr = dev->resource.start + 0xF0000;
+- if (!request_mem_region(newport_addr, 0x10000, "Newport"))
++ if (!request_mem_region(newport_addr, NEWPORT_LEN, "Newport"))
+ return -ENODEV;
+
+ npregs = (struct newport_regs *)/* ioremap cannot fail */
+@@ -720,6 +722,11 @@ static int newport_probe(struct gio_device *dev,
+ console_lock();
+ err = do_take_over_console(&newport_con, 0, MAX_NR_CONSOLES - 1, 1);
+ console_unlock();
++
++ if (err) {
++ iounmap((void *)npregs);
++ release_mem_region(newport_addr, NEWPORT_LEN);
++ }
+ return err;
+ }
+
+@@ -727,6 +734,7 @@ static void newport_remove(struct gio_device *dev)
+ {
+ give_up_console(&newport_con);
+ iounmap((void *)npregs);
++ release_mem_region(newport_addr, NEWPORT_LEN);
+ }
+
+ static struct gio_device_id newport_ids[] = {
+diff --git a/drivers/video/fbdev/neofb.c b/drivers/video/fbdev/neofb.c
+index e6ea853c1723..5a363ce9b4cb 100644
+--- a/drivers/video/fbdev/neofb.c
++++ b/drivers/video/fbdev/neofb.c
+@@ -1820,6 +1820,7 @@ static int neo_scan_monitor(struct fb_info *info)
+ #else
+ printk(KERN_ERR
+ "neofb: Only 640x480, 800x600/480 and 1024x768 panels are currently supported\n");
++ kfree(info->monspecs.modedb);
+ return -1;
+ #endif
+ default:
+diff --git a/drivers/video/fbdev/pxafb.c b/drivers/video/fbdev/pxafb.c
+index 00b96a78676e..6f972bed410a 100644
+--- a/drivers/video/fbdev/pxafb.c
++++ b/drivers/video/fbdev/pxafb.c
+@@ -2417,8 +2417,8 @@ static int pxafb_remove(struct platform_device *dev)
+
+ free_pages_exact(fbi->video_mem, fbi->video_mem_size);
+
+- dma_free_wc(&dev->dev, fbi->dma_buff_size, fbi->dma_buff,
+- fbi->dma_buff_phys);
++ dma_free_coherent(&dev->dev, fbi->dma_buff_size, fbi->dma_buff,
++ fbi->dma_buff_phys);
+
+ return 0;
+ }
+diff --git a/drivers/video/fbdev/savage/savagefb_driver.c b/drivers/video/fbdev/savage/savagefb_driver.c
+index aab312a7d9da..a542c33f2082 100644
+--- a/drivers/video/fbdev/savage/savagefb_driver.c
++++ b/drivers/video/fbdev/savage/savagefb_driver.c
+@@ -2158,6 +2158,8 @@ static int savage_init_fb_info(struct fb_info *info, struct pci_dev *dev,
+ info->flags |= FBINFO_HWACCEL_COPYAREA |
+ FBINFO_HWACCEL_FILLRECT |
+ FBINFO_HWACCEL_IMAGEBLIT;
++ else
++ kfree(info->pixmap.addr);
+ }
+ #endif
+ return err;
+diff --git a/drivers/video/fbdev/sm712fb.c b/drivers/video/fbdev/sm712fb.c
+index 6a1b4a853d9e..8cd655d6d628 100644
+--- a/drivers/video/fbdev/sm712fb.c
++++ b/drivers/video/fbdev/sm712fb.c
+@@ -1429,6 +1429,8 @@ static int smtc_map_smem(struct smtcfb_info *sfb,
+ static void smtc_unmap_smem(struct smtcfb_info *sfb)
+ {
+ if (sfb && sfb->fb->screen_base) {
++ if (sfb->chip_id == 0x720)
++ sfb->fb->screen_base -= 0x00200000;
+ iounmap(sfb->fb->screen_base);
+ sfb->fb->screen_base = NULL;
+ }
+diff --git a/drivers/xen/balloon.c b/drivers/xen/balloon.c
+index 0c142bcab79d..a932e75f44fc 100644
+--- a/drivers/xen/balloon.c
++++ b/drivers/xen/balloon.c
+@@ -569,11 +569,13 @@ static int add_ballooned_pages(int nr_pages)
+ if (xen_hotplug_unpopulated) {
+ st = reserve_additional_memory();
+ if (st != BP_ECANCELED) {
++ int rc;
++
+ mutex_unlock(&balloon_mutex);
+- wait_event(balloon_wq,
++ rc = wait_event_interruptible(balloon_wq,
+ !list_empty(&ballooned_pages));
+ mutex_lock(&balloon_mutex);
+- return 0;
++ return rc ? -ENOMEM : 0;
+ }
+ }
+
+@@ -631,6 +633,12 @@ int alloc_xenballooned_pages(int nr_pages, struct page **pages)
+ out_undo:
+ mutex_unlock(&balloon_mutex);
+ free_xenballooned_pages(pgno, pages);
++ /*
++ * NB: free_xenballooned_pages will only subtract pgno pages, but since
++ * target_unpopulated is incremented with nr_pages at the start we need
++ * to remove the remaining ones also, or accounting will be screwed.
++ */
++ balloon_stats.target_unpopulated -= nr_pages - pgno;
+ return ret;
+ }
+ EXPORT_SYMBOL(alloc_xenballooned_pages);
+diff --git a/drivers/xen/gntdev-dmabuf.c b/drivers/xen/gntdev-dmabuf.c
+index 75d3bb948bf3..b1b6eebafd5d 100644
+--- a/drivers/xen/gntdev-dmabuf.c
++++ b/drivers/xen/gntdev-dmabuf.c
+@@ -613,6 +613,14 @@ dmabuf_imp_to_refs(struct gntdev_dmabuf_priv *priv, struct device *dev,
+ goto fail_detach;
+ }
+
++ /* Check that we have zero offset. */
++ if (sgt->sgl->offset) {
++ ret = ERR_PTR(-EINVAL);
++ pr_debug("DMA buffer has %d bytes offset, user-space expects 0\n",
++ sgt->sgl->offset);
++ goto fail_unmap;
++ }
++
+ /* Check number of pages that imported buffer has. */
+ if (attach->dmabuf->size != gntdev_dmabuf->nr_pages << PAGE_SHIFT) {
+ ret = ERR_PTR(-EINVAL);
+diff --git a/fs/9p/v9fs.c b/fs/9p/v9fs.c
+index 15a99f9c7253..39def020a074 100644
+--- a/fs/9p/v9fs.c
++++ b/fs/9p/v9fs.c
+@@ -500,10 +500,9 @@ void v9fs_session_close(struct v9fs_session_info *v9ses)
+ }
+
+ #ifdef CONFIG_9P_FSCACHE
+- if (v9ses->fscache) {
++ if (v9ses->fscache)
+ v9fs_cache_session_put_cookie(v9ses);
+- kfree(v9ses->cachetag);
+- }
++ kfree(v9ses->cachetag);
+ #endif
+ kfree(v9ses->uname);
+ kfree(v9ses->aname);
+diff --git a/fs/btrfs/ctree.h b/fs/btrfs/ctree.h
+index 09e6dff8a8f8..68bd89e3d4f0 100644
+--- a/fs/btrfs/ctree.h
++++ b/fs/btrfs/ctree.h
+@@ -2982,6 +2982,8 @@ int btrfs_dirty_pages(struct inode *inode, struct page **pages,
+ size_t num_pages, loff_t pos, size_t write_bytes,
+ struct extent_state **cached);
+ int btrfs_fdatawrite_range(struct inode *inode, loff_t start, loff_t end);
++int btrfs_check_can_nocow(struct btrfs_inode *inode, loff_t pos,
++ size_t *write_bytes, bool nowait);
+
+ /* tree-defrag.c */
+ int btrfs_defrag_leaves(struct btrfs_trans_handle *trans,
+diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
+index 54a64d1e18c6..7c86188b33d4 100644
+--- a/fs/btrfs/extent-tree.c
++++ b/fs/btrfs/extent-tree.c
+@@ -5481,6 +5481,14 @@ int btrfs_drop_snapshot(struct btrfs_root *root, int update_ref, int for_reloc)
+ }
+ }
+
++ /*
++ * This subvolume is going to be completely dropped, and won't be
++ * recorded as dirty roots, thus pertrans meta rsv will not be freed at
++ * commit transaction time. So free it here manually.
++ */
++ btrfs_qgroup_convert_reserved_meta(root, INT_MAX);
++ btrfs_qgroup_free_meta_all_pertrans(root);
++
+ if (test_bit(BTRFS_ROOT_IN_RADIX, &root->state))
+ btrfs_add_dropped_root(trans, root);
+ else
+diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
+index 79196eb1a1b3..9d6d646e1eb0 100644
+--- a/fs/btrfs/extent_io.c
++++ b/fs/btrfs/extent_io.c
+@@ -4518,6 +4518,8 @@ int try_release_extent_mapping(struct page *page, gfp_t mask)
+
+ /* once for us */
+ free_extent_map(em);
++
++ cond_resched(); /* Allow large-extent preemption. */
+ }
+ }
+ return try_release_extent_state(tree, page, mask);
+diff --git a/fs/btrfs/file.c b/fs/btrfs/file.c
+index 93244934d4f9..1e1af0ce7077 100644
+--- a/fs/btrfs/file.c
++++ b/fs/btrfs/file.c
+@@ -1540,8 +1540,8 @@ lock_and_cleanup_extent_if_need(struct btrfs_inode *inode, struct page **pages,
+ return ret;
+ }
+
+-static noinline int check_can_nocow(struct btrfs_inode *inode, loff_t pos,
+- size_t *write_bytes, bool nowait)
++int btrfs_check_can_nocow(struct btrfs_inode *inode, loff_t pos,
++ size_t *write_bytes, bool nowait)
+ {
+ struct btrfs_fs_info *fs_info = inode->root->fs_info;
+ struct btrfs_root *root = inode->root;
+@@ -1656,8 +1656,8 @@ static noinline ssize_t btrfs_buffered_write(struct kiocb *iocb,
+ if (ret < 0) {
+ if ((BTRFS_I(inode)->flags & (BTRFS_INODE_NODATACOW |
+ BTRFS_INODE_PREALLOC)) &&
+- check_can_nocow(BTRFS_I(inode), pos,
+- &write_bytes, false) > 0) {
++ btrfs_check_can_nocow(BTRFS_I(inode), pos,
++ &write_bytes, false) > 0) {
+ /*
+ * For nodata cow case, no need to reserve
+ * data space.
+@@ -1936,8 +1936,8 @@ static ssize_t btrfs_file_write_iter(struct kiocb *iocb,
+ */
+ if (!(BTRFS_I(inode)->flags & (BTRFS_INODE_NODATACOW |
+ BTRFS_INODE_PREALLOC)) ||
+- check_can_nocow(BTRFS_I(inode), pos, &nocow_bytes,
+- true) <= 0) {
++ btrfs_check_can_nocow(BTRFS_I(inode), pos, &nocow_bytes,
++ true) <= 0) {
+ inode_unlock(inode);
+ return -EAGAIN;
+ }
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index e7bdda3ed069..6cb3dc274897 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -4520,11 +4520,13 @@ int btrfs_truncate_block(struct inode *inode, loff_t from, loff_t len,
+ struct extent_state *cached_state = NULL;
+ struct extent_changeset *data_reserved = NULL;
+ char *kaddr;
++ bool only_release_metadata = false;
+ u32 blocksize = fs_info->sectorsize;
+ pgoff_t index = from >> PAGE_SHIFT;
+ unsigned offset = from & (blocksize - 1);
+ struct page *page;
+ gfp_t mask = btrfs_alloc_write_mask(mapping);
++ size_t write_bytes = blocksize;
+ int ret = 0;
+ u64 block_start;
+ u64 block_end;
+@@ -4536,11 +4538,27 @@ int btrfs_truncate_block(struct inode *inode, loff_t from, loff_t len,
+ block_start = round_down(from, blocksize);
+ block_end = block_start + blocksize - 1;
+
+- ret = btrfs_delalloc_reserve_space(inode, &data_reserved,
+- block_start, blocksize);
+- if (ret)
+- goto out;
+
++ ret = btrfs_check_data_free_space(inode, &data_reserved, block_start,
++ blocksize);
++ if (ret < 0) {
++ if ((BTRFS_I(inode)->flags & (BTRFS_INODE_NODATACOW |
++ BTRFS_INODE_PREALLOC)) &&
++ btrfs_check_can_nocow(BTRFS_I(inode), block_start,
++ &write_bytes, false) > 0) {
++ /* For nocow case, no need to reserve data space */
++ only_release_metadata = true;
++ } else {
++ goto out;
++ }
++ }
++ ret = btrfs_delalloc_reserve_metadata(BTRFS_I(inode), blocksize);
++ if (ret < 0) {
++ if (!only_release_metadata)
++ btrfs_free_reserved_data_space(inode, data_reserved,
++ block_start, blocksize);
++ goto out;
++ }
+ again:
+ page = find_or_create_page(mapping, index, mask);
+ if (!page) {
+@@ -4609,14 +4627,26 @@ again:
+ set_page_dirty(page);
+ unlock_extent_cached(io_tree, block_start, block_end, &cached_state);
+
++ if (only_release_metadata)
++ set_extent_bit(&BTRFS_I(inode)->io_tree, block_start,
++ block_end, EXTENT_NORESERVE, NULL, NULL,
++ GFP_NOFS);
++
+ out_unlock:
+- if (ret)
+- btrfs_delalloc_release_space(inode, data_reserved, block_start,
+- blocksize, true);
++ if (ret) {
++ if (only_release_metadata)
++ btrfs_delalloc_release_metadata(BTRFS_I(inode),
++ blocksize, true);
++ else
++ btrfs_delalloc_release_space(inode, data_reserved,
++ block_start, blocksize, true);
++ }
+ btrfs_delalloc_release_extents(BTRFS_I(inode), blocksize);
+ unlock_page(page);
+ put_page(page);
+ out:
++ if (only_release_metadata)
++ btrfs_drew_write_unlock(&BTRFS_I(inode)->root->snapshot_lock);
+ extent_changeset_free(data_reserved);
+ return ret;
+ }
+diff --git a/fs/btrfs/space-info.c b/fs/btrfs/space-info.c
+index 756950aba1a6..317d1d216009 100644
+--- a/fs/btrfs/space-info.c
++++ b/fs/btrfs/space-info.c
+@@ -468,8 +468,8 @@ again:
+ "block group %llu has %llu bytes, %llu used %llu pinned %llu reserved %s",
+ cache->start, cache->length, cache->used, cache->pinned,
+ cache->reserved, cache->ro ? "[readonly]" : "");
+- btrfs_dump_free_space(cache, bytes);
+ spin_unlock(&cache->lock);
++ btrfs_dump_free_space(cache, bytes);
+ }
+ if (++index < BTRFS_NR_RAID_TYPES)
+ goto again;
+diff --git a/fs/dlm/lockspace.c b/fs/dlm/lockspace.c
+index afb8340918b8..c689359ca532 100644
+--- a/fs/dlm/lockspace.c
++++ b/fs/dlm/lockspace.c
+@@ -632,6 +632,9 @@ static int new_lockspace(const char *name, const char *cluster,
+ wait_event(ls->ls_recover_lock_wait,
+ test_bit(LSFL_RECOVER_LOCK, &ls->ls_flags));
+
++ /* let kobject handle freeing of ls if there's an error */
++ do_unreg = 1;
++
+ ls->ls_kobj.kset = dlm_kset;
+ error = kobject_init_and_add(&ls->ls_kobj, &dlm_ktype, NULL,
+ "%s", ls->ls_name);
+@@ -639,9 +642,6 @@ static int new_lockspace(const char *name, const char *cluster,
+ goto out_recoverd;
+ kobject_uevent(&ls->ls_kobj, KOBJ_ADD);
+
+- /* let kobject handle freeing of ls if there's an error */
+- do_unreg = 1;
+-
+ /* This uevent triggers dlm_controld in userspace to add us to the
+ group of nodes that are members of this lockspace (managed by the
+ cluster infrastructure.) Once it's done that, it tells us who the
+diff --git a/fs/erofs/inode.c b/fs/erofs/inode.c
+index 3350ab65d892..b36b414cd7a7 100644
+--- a/fs/erofs/inode.c
++++ b/fs/erofs/inode.c
+@@ -8,31 +8,80 @@
+
+ #include <trace/events/erofs.h>
+
+-/* no locking */
+-static int erofs_read_inode(struct inode *inode, void *data)
++/*
++ * if inode is successfully read, return its inode page (or sometimes
++ * the inode payload page if it's an extended inode) in order to fill
++ * inline data if possible.
++ */
++static struct page *erofs_read_inode(struct inode *inode,
++ unsigned int *ofs)
+ {
++ struct super_block *sb = inode->i_sb;
++ struct erofs_sb_info *sbi = EROFS_SB(sb);
+ struct erofs_inode *vi = EROFS_I(inode);
+- struct erofs_inode_compact *dic = data;
+- struct erofs_inode_extended *die;
++ const erofs_off_t inode_loc = iloc(sbi, vi->nid);
++
++ erofs_blk_t blkaddr, nblks = 0;
++ struct page *page;
++ struct erofs_inode_compact *dic;
++ struct erofs_inode_extended *die, *copied = NULL;
++ unsigned int ifmt;
++ int err;
+
+- const unsigned int ifmt = le16_to_cpu(dic->i_format);
+- struct erofs_sb_info *sbi = EROFS_SB(inode->i_sb);
+- erofs_blk_t nblks = 0;
++ blkaddr = erofs_blknr(inode_loc);
++ *ofs = erofs_blkoff(inode_loc);
+
+- vi->datalayout = erofs_inode_datalayout(ifmt);
++ erofs_dbg("%s, reading inode nid %llu at %u of blkaddr %u",
++ __func__, vi->nid, *ofs, blkaddr);
++
++ page = erofs_get_meta_page(sb, blkaddr);
++ if (IS_ERR(page)) {
++ erofs_err(sb, "failed to get inode (nid: %llu) page, err %ld",
++ vi->nid, PTR_ERR(page));
++ return page;
++ }
+
++ dic = page_address(page) + *ofs;
++ ifmt = le16_to_cpu(dic->i_format);
++
++ vi->datalayout = erofs_inode_datalayout(ifmt);
+ if (vi->datalayout >= EROFS_INODE_DATALAYOUT_MAX) {
+ erofs_err(inode->i_sb, "unsupported datalayout %u of nid %llu",
+ vi->datalayout, vi->nid);
+- DBG_BUGON(1);
+- return -EOPNOTSUPP;
++ err = -EOPNOTSUPP;
++ goto err_out;
+ }
+
+ switch (erofs_inode_version(ifmt)) {
+ case EROFS_INODE_LAYOUT_EXTENDED:
+- die = data;
+-
+ vi->inode_isize = sizeof(struct erofs_inode_extended);
++	/* check if the inode crosses page boundary */
++ if (*ofs + vi->inode_isize <= PAGE_SIZE) {
++ *ofs += vi->inode_isize;
++ die = (struct erofs_inode_extended *)dic;
++ } else {
++ const unsigned int gotten = PAGE_SIZE - *ofs;
++
++ copied = kmalloc(vi->inode_isize, GFP_NOFS);
++ if (!copied) {
++ err = -ENOMEM;
++ goto err_out;
++ }
++ memcpy(copied, dic, gotten);
++ unlock_page(page);
++ put_page(page);
++
++ page = erofs_get_meta_page(sb, blkaddr + 1);
++ if (IS_ERR(page)) {
++ erofs_err(sb, "failed to get inode payload page (nid: %llu), err %ld",
++ vi->nid, PTR_ERR(page));
++ kfree(copied);
++ return page;
++ }
++ *ofs = vi->inode_isize - gotten;
++ memcpy((u8 *)copied + gotten, page_address(page), *ofs);
++ die = copied;
++ }
+ vi->xattr_isize = erofs_xattr_ibody_size(die->i_xattr_icount);
+
+ inode->i_mode = le16_to_cpu(die->i_mode);
+@@ -69,9 +118,12 @@ static int erofs_read_inode(struct inode *inode, void *data)
+ /* total blocks for compressed files */
+ if (erofs_inode_is_data_compressed(vi->datalayout))
+ nblks = le32_to_cpu(die->i_u.compressed_blocks);
++
++ kfree(copied);
+ break;
+ case EROFS_INODE_LAYOUT_COMPACT:
+ vi->inode_isize = sizeof(struct erofs_inode_compact);
++ *ofs += vi->inode_isize;
+ vi->xattr_isize = erofs_xattr_ibody_size(dic->i_xattr_icount);
+
+ inode->i_mode = le16_to_cpu(dic->i_mode);
+@@ -111,8 +163,8 @@ static int erofs_read_inode(struct inode *inode, void *data)
+ erofs_err(inode->i_sb,
+ "unsupported on-disk inode version %u of nid %llu",
+ erofs_inode_version(ifmt), vi->nid);
+- DBG_BUGON(1);
+- return -EOPNOTSUPP;
++ err = -EOPNOTSUPP;
++ goto err_out;
+ }
+
+ if (!nblks)
+@@ -120,13 +172,18 @@ static int erofs_read_inode(struct inode *inode, void *data)
+ inode->i_blocks = roundup(inode->i_size, EROFS_BLKSIZ) >> 9;
+ else
+ inode->i_blocks = nblks << LOG_SECTORS_PER_BLOCK;
+- return 0;
++ return page;
+
+ bogusimode:
+ erofs_err(inode->i_sb, "bogus i_mode (%o) @ nid %llu",
+ inode->i_mode, vi->nid);
++ err = -EFSCORRUPTED;
++err_out:
+ DBG_BUGON(1);
+- return -EFSCORRUPTED;
++ kfree(copied);
++ unlock_page(page);
++ put_page(page);
++ return ERR_PTR(err);
+ }
+
+ static int erofs_fill_symlink(struct inode *inode, void *data,
+@@ -146,7 +203,7 @@ static int erofs_fill_symlink(struct inode *inode, void *data,
+ if (!lnk)
+ return -ENOMEM;
+
+- m_pofs += vi->inode_isize + vi->xattr_isize;
++ m_pofs += vi->xattr_isize;
+ /* inline symlink data shouldn't cross page boundary as well */
+ if (m_pofs + inode->i_size > PAGE_SIZE) {
+ kfree(lnk);
+@@ -167,37 +224,17 @@ static int erofs_fill_symlink(struct inode *inode, void *data,
+
+ static int erofs_fill_inode(struct inode *inode, int isdir)
+ {
+- struct super_block *sb = inode->i_sb;
+ struct erofs_inode *vi = EROFS_I(inode);
+ struct page *page;
+- void *data;
+- int err;
+- erofs_blk_t blkaddr;
+ unsigned int ofs;
+- erofs_off_t inode_loc;
++ int err = 0;
+
+ trace_erofs_fill_inode(inode, isdir);
+- inode_loc = iloc(EROFS_SB(sb), vi->nid);
+- blkaddr = erofs_blknr(inode_loc);
+- ofs = erofs_blkoff(inode_loc);
+-
+- erofs_dbg("%s, reading inode nid %llu at %u of blkaddr %u",
+- __func__, vi->nid, ofs, blkaddr);
+
+- page = erofs_get_meta_page(sb, blkaddr);
+-
+- if (IS_ERR(page)) {
+- erofs_err(sb, "failed to get inode (nid: %llu) page, err %ld",
+- vi->nid, PTR_ERR(page));
++ /* read inode base data from disk */
++ page = erofs_read_inode(inode, &ofs);
++ if (IS_ERR(page))
+ return PTR_ERR(page);
+- }
+-
+- DBG_BUGON(!PageUptodate(page));
+- data = page_address(page);
+-
+- err = erofs_read_inode(inode, data + ofs);
+- if (err)
+- goto out_unlock;
+
+ /* setup the new inode */
+ switch (inode->i_mode & S_IFMT) {
+@@ -210,7 +247,7 @@ static int erofs_fill_inode(struct inode *inode, int isdir)
+ inode->i_fop = &erofs_dir_fops;
+ break;
+ case S_IFLNK:
+- err = erofs_fill_symlink(inode, data, ofs);
++ err = erofs_fill_symlink(inode, page_address(page), ofs);
+ if (err)
+ goto out_unlock;
+ inode_nohighmem(inode);
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+index fb9dc865c9ea..b33d4a97a877 100644
+--- a/fs/io_uring.c
++++ b/fs/io_uring.c
+@@ -645,12 +645,12 @@ struct io_kiocb {
+ * restore the work, if needed.
+ */
+ struct {
+- struct callback_head task_work;
+ struct hlist_node hash_node;
+ struct async_poll *apoll;
+ };
+ struct io_wq_work work;
+ };
++ struct callback_head task_work;
+ };
+
+ #define IO_PLUG_THRESHOLD 2
+@@ -1484,12 +1484,9 @@ static void io_req_link_next(struct io_kiocb *req, struct io_kiocb **nxtptr)
+ /*
+ * Called if REQ_F_LINK_HEAD is set, and we fail the head request
+ */
+-static void io_fail_links(struct io_kiocb *req)
++static void __io_fail_links(struct io_kiocb *req)
+ {
+ struct io_ring_ctx *ctx = req->ctx;
+- unsigned long flags;
+-
+- spin_lock_irqsave(&ctx->completion_lock, flags);
+
+ while (!list_empty(&req->link_list)) {
+ struct io_kiocb *link = list_first_entry(&req->link_list,
+@@ -1503,13 +1500,29 @@ static void io_fail_links(struct io_kiocb *req)
+ io_link_cancel_timeout(link);
+ } else {
+ io_cqring_fill_event(link, -ECANCELED);
++ link->flags |= REQ_F_COMP_LOCKED;
+ __io_double_put_req(link);
+ }
+ req->flags &= ~REQ_F_LINK_TIMEOUT;
+ }
+
+ io_commit_cqring(ctx);
+- spin_unlock_irqrestore(&ctx->completion_lock, flags);
++}
++
++static void io_fail_links(struct io_kiocb *req)
++{
++ struct io_ring_ctx *ctx = req->ctx;
++
++ if (!(req->flags & REQ_F_COMP_LOCKED)) {
++ unsigned long flags;
++
++ spin_lock_irqsave(&ctx->completion_lock, flags);
++ __io_fail_links(req);
++ spin_unlock_irqrestore(&ctx->completion_lock, flags);
++ } else {
++ __io_fail_links(req);
++ }
++
+ io_cqring_ev_posted(ctx);
+ }
+
+@@ -1692,6 +1705,17 @@ static int io_put_kbuf(struct io_kiocb *req)
+ return cflags;
+ }
+
++static inline bool io_run_task_work(void)
++{
++ if (current->task_works) {
++ __set_current_state(TASK_RUNNING);
++ task_work_run();
++ return true;
++ }
++
++ return false;
++}
++
+ static void io_iopoll_queue(struct list_head *again)
+ {
+ struct io_kiocb *req;
+@@ -1881,6 +1905,7 @@ static int io_iopoll_check(struct io_ring_ctx *ctx, unsigned *nr_events,
+ */
+ if (!(++iters & 7)) {
+ mutex_unlock(&ctx->uring_lock);
++ io_run_task_work();
+ mutex_lock(&ctx->uring_lock);
+ }
+
+@@ -2602,8 +2627,10 @@ static int io_read(struct io_kiocb *req, bool force_nonblock)
+
+ if (req->file->f_op->read_iter)
+ ret2 = call_read_iter(req->file, kiocb, &iter);
+- else
++ else if (req->file->f_op->read)
+ ret2 = loop_rw_iter(READ, req->file, kiocb, &iter);
++ else
++ ret2 = -EINVAL;
+
+ /* Catch -EAGAIN return for forced non-blocking submission */
+ if (!force_nonblock || ret2 != -EAGAIN) {
+@@ -2717,8 +2744,10 @@ static int io_write(struct io_kiocb *req, bool force_nonblock)
+
+ if (req->file->f_op->write_iter)
+ ret2 = call_write_iter(req->file, kiocb, &iter);
+- else
++ else if (req->file->f_op->write)
+ ret2 = loop_rw_iter(WRITE, req->file, kiocb, &iter);
++ else
++ ret2 = -EINVAL;
+
+ if (!force_nonblock)
+ current->signal->rlim[RLIMIT_FSIZE].rlim_cur = RLIM_INFINITY;
+@@ -4149,22 +4178,22 @@ static int io_req_task_work_add(struct io_kiocb *req, struct callback_head *cb)
+ {
+ struct task_struct *tsk = req->task;
+ struct io_ring_ctx *ctx = req->ctx;
+- int ret, notify = TWA_RESUME;
++ int ret, notify;
+
+ /*
+- * SQPOLL kernel thread doesn't need notification, just a wakeup.
+- * If we're not using an eventfd, then TWA_RESUME is always fine,
+- * as we won't have dependencies between request completions for
+- * other kernel wait conditions.
++ * SQPOLL kernel thread doesn't need notification, just a wakeup. For
++ * all other cases, use TWA_SIGNAL unconditionally to ensure we're
++ * processing task_work. There's no reliable way to tell if TWA_RESUME
++ * will do the job.
+ */
+- if (ctx->flags & IORING_SETUP_SQPOLL)
+- notify = 0;
+- else if (ctx->cq_ev_fd)
++ notify = 0;
++ if (!(ctx->flags & IORING_SETUP_SQPOLL))
+ notify = TWA_SIGNAL;
+
+ ret = task_work_add(tsk, cb, notify);
+ if (!ret)
+ wake_up_process(tsk);
++
+ return ret;
+ }
+
+@@ -4185,6 +4214,8 @@ static int __io_async_wake(struct io_kiocb *req, struct io_poll_iocb *poll,
+ tsk = req->task;
+ req->result = mask;
+ init_task_work(&req->task_work, func);
++ percpu_ref_get(&req->ctx->refs);
++
+ /*
+ * If this fails, then the task is exiting. When a task exits, the
+ * work gets canceled, so just cancel this request as well instead
+@@ -4221,9 +4252,24 @@ static bool io_poll_rewait(struct io_kiocb *req, struct io_poll_iocb *poll)
+ return false;
+ }
+
+-static void io_poll_remove_double(struct io_kiocb *req, void *data)
++static struct io_poll_iocb *io_poll_get_double(struct io_kiocb *req)
++{
++ /* pure poll stashes this in ->io, poll driven retry elsewhere */
++ if (req->opcode == IORING_OP_POLL_ADD)
++ return (struct io_poll_iocb *) req->io;
++ return req->apoll->double_poll;
++}
++
++static struct io_poll_iocb *io_poll_get_single(struct io_kiocb *req)
+ {
+- struct io_poll_iocb *poll = data;
++ if (req->opcode == IORING_OP_POLL_ADD)
++ return &req->poll;
++ return &req->apoll->poll;
++}
++
++static void io_poll_remove_double(struct io_kiocb *req)
++{
++ struct io_poll_iocb *poll = io_poll_get_double(req);
+
+ lockdep_assert_held(&req->ctx->completion_lock);
+
+@@ -4243,7 +4289,7 @@ static void io_poll_complete(struct io_kiocb *req, __poll_t mask, int error)
+ {
+ struct io_ring_ctx *ctx = req->ctx;
+
+- io_poll_remove_double(req, req->io);
++ io_poll_remove_double(req);
+ req->poll.done = true;
+ io_cqring_fill_event(req, error ? error : mangle_poll(mask));
+ io_commit_cqring(ctx);
+@@ -4269,6 +4315,7 @@ static void io_poll_task_handler(struct io_kiocb *req, struct io_kiocb **nxt)
+ static void io_poll_task_func(struct callback_head *cb)
+ {
+ struct io_kiocb *req = container_of(cb, struct io_kiocb, task_work);
++ struct io_ring_ctx *ctx = req->ctx;
+ struct io_kiocb *nxt = NULL;
+
+ io_poll_task_handler(req, &nxt);
+@@ -4279,13 +4326,14 @@ static void io_poll_task_func(struct callback_head *cb)
+ __io_queue_sqe(nxt, NULL);
+ mutex_unlock(&ctx->uring_lock);
+ }
++ percpu_ref_put(&ctx->refs);
+ }
+
+ static int io_poll_double_wake(struct wait_queue_entry *wait, unsigned mode,
+ int sync, void *key)
+ {
+ struct io_kiocb *req = wait->private;
+- struct io_poll_iocb *poll = req->apoll->double_poll;
++ struct io_poll_iocb *poll = io_poll_get_single(req);
+ __poll_t mask = key_to_poll(key);
+
+ /* for instances that support it check for an event match first: */
+@@ -4299,6 +4347,8 @@ static int io_poll_double_wake(struct wait_queue_entry *wait, unsigned mode,
+ done = list_empty(&poll->wait.entry);
+ if (!done)
+ list_del_init(&poll->wait.entry);
++ /* make sure double remove sees this as being gone */
++ wait->private = NULL;
+ spin_unlock(&poll->head->lock);
+ if (!done)
+ __io_async_wake(req, poll, mask, io_poll_task_func);
+@@ -4393,6 +4443,7 @@ static void io_async_task_func(struct callback_head *cb)
+
+ if (io_poll_rewait(req, &apoll->poll)) {
+ spin_unlock_irq(&ctx->completion_lock);
++ percpu_ref_put(&ctx->refs);
+ return;
+ }
+
+@@ -4420,7 +4471,6 @@ end_req:
+ return;
+ }
+
+- __set_current_state(TASK_RUNNING);
+ if (io_sq_thread_acquire_mm(ctx, req)) {
+ io_cqring_add_event(req, -EFAULT);
+ goto end_req;
+@@ -4431,6 +4481,7 @@ end_req:
+
+ kfree(apoll->double_poll);
+ kfree(apoll);
++ percpu_ref_put(&ctx->refs);
+ }
+
+ static int io_async_wake(struct wait_queue_entry *wait, unsigned mode, int sync,
+@@ -4533,8 +4584,8 @@ static bool io_arm_poll_handler(struct io_kiocb *req)
+
+ ret = __io_arm_poll_handler(req, &apoll->poll, &ipt, mask,
+ io_async_wake);
+- if (ret) {
+- io_poll_remove_double(req, apoll->double_poll);
++ if (ret || ipt.error) {
++ io_poll_remove_double(req);
+ spin_unlock_irq(&ctx->completion_lock);
+ memcpy(&req->work, &apoll->work, sizeof(req->work));
+ kfree(apoll->double_poll);
+@@ -4567,14 +4618,13 @@ static bool io_poll_remove_one(struct io_kiocb *req)
+ {
+ bool do_complete;
+
++ io_poll_remove_double(req);
++
+ if (req->opcode == IORING_OP_POLL_ADD) {
+- io_poll_remove_double(req, req->io);
+ do_complete = __io_poll_remove_one(req, &req->poll);
+ } else {
+ struct async_poll *apoll = req->apoll;
+
+- io_poll_remove_double(req, apoll->double_poll);
+-
+ /* non-poll requests have submit ref still */
+ do_complete = __io_poll_remove_one(req, &apoll->poll);
+ if (do_complete) {
+@@ -4594,6 +4644,7 @@ static bool io_poll_remove_one(struct io_kiocb *req)
+ io_cqring_fill_event(req, -ECANCELED);
+ io_commit_cqring(req->ctx);
+ req->flags |= REQ_F_COMP_LOCKED;
++ req_set_fail_links(req);
+ io_put_req(req);
+ }
+
+@@ -4776,6 +4827,23 @@ static enum hrtimer_restart io_timeout_fn(struct hrtimer *timer)
+ return HRTIMER_NORESTART;
+ }
+
++static int __io_timeout_cancel(struct io_kiocb *req)
++{
++ int ret;
++
++ list_del_init(&req->list);
++
++ ret = hrtimer_try_to_cancel(&req->io->timeout.timer);
++ if (ret == -1)
++ return -EALREADY;
++
++ req_set_fail_links(req);
++ req->flags |= REQ_F_COMP_LOCKED;
++ io_cqring_fill_event(req, -ECANCELED);
++ io_put_req(req);
++ return 0;
++}
++
+ static int io_timeout_cancel(struct io_ring_ctx *ctx, __u64 user_data)
+ {
+ struct io_kiocb *req;
+@@ -4783,7 +4851,6 @@ static int io_timeout_cancel(struct io_ring_ctx *ctx, __u64 user_data)
+
+ list_for_each_entry(req, &ctx->timeout_list, list) {
+ if (user_data == req->user_data) {
+- list_del_init(&req->list);
+ ret = 0;
+ break;
+ }
+@@ -4792,14 +4859,7 @@ static int io_timeout_cancel(struct io_ring_ctx *ctx, __u64 user_data)
+ if (ret == -ENOENT)
+ return ret;
+
+- ret = hrtimer_try_to_cancel(&req->io->timeout.timer);
+- if (ret == -1)
+- return -EALREADY;
+-
+- req_set_fail_links(req);
+- io_cqring_fill_event(req, -ECANCELED);
+- io_put_req(req);
+- return 0;
++ return __io_timeout_cancel(req);
+ }
+
+ static int io_timeout_remove_prep(struct io_kiocb *req,
+@@ -6152,8 +6212,7 @@ static int io_sq_thread(void *data)
+ if (!list_empty(&ctx->poll_list) || need_resched() ||
+ (!time_after(jiffies, timeout) && ret != -EBUSY &&
+ !percpu_ref_is_dying(&ctx->refs))) {
+- if (current->task_works)
+- task_work_run();
++ io_run_task_work();
+ cond_resched();
+ continue;
+ }
+@@ -6185,8 +6244,7 @@ static int io_sq_thread(void *data)
+ finish_wait(&ctx->sqo_wait, &wait);
+ break;
+ }
+- if (current->task_works) {
+- task_work_run();
++ if (io_run_task_work()) {
+ finish_wait(&ctx->sqo_wait, &wait);
+ continue;
+ }
+@@ -6210,8 +6268,7 @@ static int io_sq_thread(void *data)
+ timeout = jiffies + ctx->sq_thread_idle;
+ }
+
+- if (current->task_works)
+- task_work_run();
++ io_run_task_work();
+
+ set_fs(old_fs);
+ io_sq_thread_drop_mm(ctx);
+@@ -6277,9 +6334,8 @@ static int io_cqring_wait(struct io_ring_ctx *ctx, int min_events,
+ do {
+ if (io_cqring_events(ctx, false) >= min_events)
+ return 0;
+- if (!current->task_works)
++ if (!io_run_task_work())
+ break;
+- task_work_run();
+ } while (1);
+
+ if (sig) {
+@@ -6301,8 +6357,8 @@ static int io_cqring_wait(struct io_ring_ctx *ctx, int min_events,
+ prepare_to_wait_exclusive(&ctx->wait, &iowq.wq,
+ TASK_INTERRUPTIBLE);
+ /* make sure we run task_work before checking for signals */
+- if (current->task_works)
+- task_work_run();
++ if (io_run_task_work())
++ continue;
+ if (signal_pending(current)) {
+ if (current->jobctl & JOBCTL_TASK_WORK) {
+ 				spin_lock_irq(&current->sighand->siglock);
+@@ -7132,6 +7188,9 @@ static unsigned long rings_size(unsigned sq_entries, unsigned cq_entries,
+ return SIZE_MAX;
+ #endif
+
++ if (sq_offset)
++ *sq_offset = off;
++
+ sq_array_size = array_size(sizeof(u32), sq_entries);
+ if (sq_array_size == SIZE_MAX)
+ return SIZE_MAX;
+@@ -7139,9 +7198,6 @@ static unsigned long rings_size(unsigned sq_entries, unsigned cq_entries,
+ if (check_add_overflow(off, sq_array_size, &off))
+ return SIZE_MAX;
+
+- if (sq_offset)
+- *sq_offset = off;
+-
+ return off;
+ }
+
+@@ -7538,6 +7594,71 @@ static int io_uring_release(struct inode *inode, struct file *file)
+ return 0;
+ }
+
++/*
++ * Returns true if 'preq' is the link parent of 'req'
++ */
++static bool io_match_link(struct io_kiocb *preq, struct io_kiocb *req)
++{
++ struct io_kiocb *link;
++
++ if (!(preq->flags & REQ_F_LINK_HEAD))
++ return false;
++
++ list_for_each_entry(link, &preq->link_list, link_list) {
++ if (link == req)
++ return true;
++ }
++
++ return false;
++}
++
++/*
++ * We're looking to cancel 'req' because it's holding on to our files, but
++ * 'req' could be a link to another request. See if it is, and cancel that
++ * parent request if so.
++ */
++static bool io_poll_remove_link(struct io_ring_ctx *ctx, struct io_kiocb *req)
++{
++ struct hlist_node *tmp;
++ struct io_kiocb *preq;
++ bool found = false;
++ int i;
++
++ spin_lock_irq(&ctx->completion_lock);
++ for (i = 0; i < (1U << ctx->cancel_hash_bits); i++) {
++ struct hlist_head *list;
++
++ list = &ctx->cancel_hash[i];
++ hlist_for_each_entry_safe(preq, tmp, list, hash_node) {
++ found = io_match_link(preq, req);
++ if (found) {
++ io_poll_remove_one(preq);
++ break;
++ }
++ }
++ }
++ spin_unlock_irq(&ctx->completion_lock);
++ return found;
++}
++
++static bool io_timeout_remove_link(struct io_ring_ctx *ctx,
++ struct io_kiocb *req)
++{
++ struct io_kiocb *preq;
++ bool found = false;
++
++ spin_lock_irq(&ctx->completion_lock);
++ list_for_each_entry(preq, &ctx->timeout_list, list) {
++ found = io_match_link(preq, req);
++ if (found) {
++ __io_timeout_cancel(preq);
++ break;
++ }
++ }
++ spin_unlock_irq(&ctx->completion_lock);
++ return found;
++}
++
+ static void io_uring_cancel_files(struct io_ring_ctx *ctx,
+ struct files_struct *files)
+ {
+@@ -7572,10 +7693,10 @@ static void io_uring_cancel_files(struct io_ring_ctx *ctx,
+ clear_bit(0, &ctx->sq_check_overflow);
+ clear_bit(0, &ctx->cq_check_overflow);
+ }
+- spin_unlock_irq(&ctx->completion_lock);
+-
+ WRITE_ONCE(ctx->rings->cq_overflow,
+ atomic_inc_return(&ctx->cached_cq_overflow));
++ io_commit_cqring(ctx);
++ spin_unlock_irq(&ctx->completion_lock);
+
+ /*
+ * Put inflight ref and overflow ref. If that's
+@@ -7588,6 +7709,9 @@ static void io_uring_cancel_files(struct io_ring_ctx *ctx,
+ }
+ } else {
+ io_wq_cancel_work(ctx->io_wq, &cancel_req->work);
++ /* could be a link, check and remove if it is */
++ if (!io_poll_remove_link(ctx, cancel_req))
++ io_timeout_remove_link(ctx, cancel_req);
+ io_put_req(cancel_req);
+ }
+
+@@ -7690,8 +7814,7 @@ SYSCALL_DEFINE6(io_uring_enter, unsigned int, fd, u32, to_submit,
+ int submitted = 0;
+ struct fd f;
+
+- if (current->task_works)
+- task_work_run();
++ io_run_task_work();
+
+ if (flags & ~(IORING_ENTER_GETEVENTS | IORING_ENTER_SQ_WAKEUP))
+ return -EINVAL;
+@@ -7863,6 +7986,10 @@ static int io_allocate_scq_urings(struct io_ring_ctx *ctx,
+ struct io_rings *rings;
+ size_t size, sq_array_offset;
+
++ /* make sure these are sane, as we already accounted them */
++ ctx->sq_entries = p->sq_entries;
++ ctx->cq_entries = p->cq_entries;
++
+ size = rings_size(p->sq_entries, p->cq_entries, &sq_array_offset);
+ if (size == SIZE_MAX)
+ return -EOVERFLOW;
+@@ -7879,8 +8006,6 @@ static int io_allocate_scq_urings(struct io_ring_ctx *ctx,
+ rings->cq_ring_entries = p->cq_entries;
+ ctx->sq_mask = rings->sq_ring_mask;
+ ctx->cq_mask = rings->cq_ring_mask;
+- ctx->sq_entries = rings->sq_ring_entries;
+- ctx->cq_entries = rings->cq_ring_entries;
+
+ size = array_size(sizeof(struct io_uring_sqe), p->sq_entries);
+ if (size == SIZE_MAX) {
+diff --git a/fs/kernfs/file.c b/fs/kernfs/file.c
+index 34366db3620d..2a1879a6e795 100644
+--- a/fs/kernfs/file.c
++++ b/fs/kernfs/file.c
+@@ -912,7 +912,7 @@ repeat:
+ }
+
+ fsnotify(inode, FS_MODIFY, inode, FSNOTIFY_EVENT_INODE,
+- &name, 0);
++ NULL, 0);
+ iput(inode);
+ }
+
+diff --git a/fs/minix/inode.c b/fs/minix/inode.c
+index 7cb5fd38eb14..0dd929346f3f 100644
+--- a/fs/minix/inode.c
++++ b/fs/minix/inode.c
+@@ -150,6 +150,23 @@ static int minix_remount (struct super_block * sb, int * flags, char * data)
+ return 0;
+ }
+
++static bool minix_check_superblock(struct minix_sb_info *sbi)
++{
++ if (sbi->s_imap_blocks == 0 || sbi->s_zmap_blocks == 0)
++ return false;
++
++ /*
++ * s_max_size must not exceed the block mapping limitation. This check
++ * is only needed for V1 filesystems, since V2/V3 support an extra level
++ * of indirect blocks which places the limit well above U32_MAX.
++ */
++ if (sbi->s_version == MINIX_V1 &&
++ sbi->s_max_size > (7 + 512 + 512*512) * BLOCK_SIZE)
++ return false;
++
++ return true;
++}
++
+ static int minix_fill_super(struct super_block *s, void *data, int silent)
+ {
+ struct buffer_head *bh;
+@@ -228,11 +245,12 @@ static int minix_fill_super(struct super_block *s, void *data, int silent)
+ } else
+ goto out_no_fs;
+
++ if (!minix_check_superblock(sbi))
++ goto out_illegal_sb;
++
+ /*
+ * Allocate the buffer map to keep the superblock small.
+ */
+- if (sbi->s_imap_blocks == 0 || sbi->s_zmap_blocks == 0)
+- goto out_illegal_sb;
+ i = (sbi->s_imap_blocks + sbi->s_zmap_blocks) * sizeof(bh);
+ map = kzalloc(i, GFP_KERNEL);
+ if (!map)
+@@ -468,6 +486,13 @@ static struct inode *V1_minix_iget(struct inode *inode)
+ iget_failed(inode);
+ return ERR_PTR(-EIO);
+ }
++ if (raw_inode->i_nlinks == 0) {
++ printk("MINIX-fs: deleted inode referenced: %lu\n",
++ inode->i_ino);
++ brelse(bh);
++ iget_failed(inode);
++ return ERR_PTR(-ESTALE);
++ }
+ inode->i_mode = raw_inode->i_mode;
+ i_uid_write(inode, raw_inode->i_uid);
+ i_gid_write(inode, raw_inode->i_gid);
+@@ -501,6 +526,13 @@ static struct inode *V2_minix_iget(struct inode *inode)
+ iget_failed(inode);
+ return ERR_PTR(-EIO);
+ }
++ if (raw_inode->i_nlinks == 0) {
++ printk("MINIX-fs: deleted inode referenced: %lu\n",
++ inode->i_ino);
++ brelse(bh);
++ iget_failed(inode);
++ return ERR_PTR(-ESTALE);
++ }
+ inode->i_mode = raw_inode->i_mode;
+ i_uid_write(inode, raw_inode->i_uid);
+ i_gid_write(inode, raw_inode->i_gid);
+diff --git a/fs/minix/itree_common.c b/fs/minix/itree_common.c
+index 043c3fdbc8e7..446148792f41 100644
+--- a/fs/minix/itree_common.c
++++ b/fs/minix/itree_common.c
+@@ -75,6 +75,7 @@ static int alloc_branch(struct inode *inode,
+ int n = 0;
+ int i;
+ int parent = minix_new_block(inode);
++ int err = -ENOSPC;
+
+ branch[0].key = cpu_to_block(parent);
+ if (parent) for (n = 1; n < num; n++) {
+@@ -85,6 +86,11 @@ static int alloc_branch(struct inode *inode,
+ break;
+ branch[n].key = cpu_to_block(nr);
+ bh = sb_getblk(inode->i_sb, parent);
++ if (!bh) {
++ minix_free_block(inode, nr);
++ err = -ENOMEM;
++ break;
++ }
+ lock_buffer(bh);
+ memset(bh->b_data, 0, bh->b_size);
+ branch[n].bh = bh;
+@@ -103,7 +109,7 @@ static int alloc_branch(struct inode *inode,
+ bforget(branch[i].bh);
+ for (i = 0; i < n; i++)
+ minix_free_block(inode, block_to_cpu(branch[i].key));
+- return -ENOSPC;
++ return err;
+ }
+
+ static inline int splice_branch(struct inode *inode,
+diff --git a/fs/nfs/pnfs.c b/fs/nfs/pnfs.c
+index dd2e14f5875d..d61dac48dff5 100644
+--- a/fs/nfs/pnfs.c
++++ b/fs/nfs/pnfs.c
+@@ -1226,31 +1226,27 @@ out:
+ return status;
+ }
+
++static bool
++pnfs_layout_segments_returnable(struct pnfs_layout_hdr *lo,
++ enum pnfs_iomode iomode,
++ u32 seq)
++{
++ struct pnfs_layout_range recall_range = {
++ .length = NFS4_MAX_UINT64,
++ .iomode = iomode,
++ };
++ return pnfs_mark_matching_lsegs_return(lo, &lo->plh_return_segs,
++ &recall_range, seq) != -EBUSY;
++}
++
+ /* Return true if layoutreturn is needed */
+ static bool
+ pnfs_layout_need_return(struct pnfs_layout_hdr *lo)
+ {
+- struct pnfs_layout_segment *s;
+- enum pnfs_iomode iomode;
+- u32 seq;
+-
+ if (!test_bit(NFS_LAYOUT_RETURN_REQUESTED, &lo->plh_flags))
+ return false;
+-
+- seq = lo->plh_return_seq;
+- iomode = lo->plh_return_iomode;
+-
+- /* Defer layoutreturn until all recalled lsegs are done */
+- list_for_each_entry(s, &lo->plh_segs, pls_list) {
+- if (seq && pnfs_seqid_is_newer(s->pls_seq, seq))
+- continue;
+- if (iomode != IOMODE_ANY && s->pls_range.iomode != iomode)
+- continue;
+- if (test_bit(NFS_LSEG_LAYOUTRETURN, &s->pls_flags))
+- return false;
+- }
+-
+- return true;
++ return pnfs_layout_segments_returnable(lo, lo->plh_return_iomode,
++ lo->plh_return_seq);
+ }
+
+ static void pnfs_layoutreturn_before_put_layout_hdr(struct pnfs_layout_hdr *lo)
+@@ -2392,16 +2388,6 @@ out_forget:
+ return ERR_PTR(-EAGAIN);
+ }
+
+-static int
+-mark_lseg_invalid_or_return(struct pnfs_layout_segment *lseg,
+- struct list_head *tmp_list)
+-{
+- if (!mark_lseg_invalid(lseg, tmp_list))
+- return 0;
+- pnfs_cache_lseg_for_layoutreturn(lseg->pls_layout, lseg);
+- return 1;
+-}
+-
+ /**
+ * pnfs_mark_matching_lsegs_return - Free or return matching layout segments
+ * @lo: pointer to layout header
+@@ -2438,7 +2424,7 @@ pnfs_mark_matching_lsegs_return(struct pnfs_layout_hdr *lo,
+ lseg, lseg->pls_range.iomode,
+ lseg->pls_range.offset,
+ lseg->pls_range.length);
+- if (mark_lseg_invalid_or_return(lseg, tmp_list))
++ if (mark_lseg_invalid(lseg, tmp_list))
+ continue;
+ remaining++;
+ set_bit(NFS_LSEG_LAYOUTRETURN, &lseg->pls_flags);
+diff --git a/fs/nfsd/nfs4recover.c b/fs/nfsd/nfs4recover.c
+index a8fb18609146..82679990dd9b 100644
+--- a/fs/nfsd/nfs4recover.c
++++ b/fs/nfsd/nfs4recover.c
+@@ -755,13 +755,11 @@ struct cld_upcall {
+ };
+
+ static int
+-__cld_pipe_upcall(struct rpc_pipe *pipe, void *cmsg)
++__cld_pipe_upcall(struct rpc_pipe *pipe, void *cmsg, struct nfsd_net *nn)
+ {
+ int ret;
+ struct rpc_pipe_msg msg;
+ struct cld_upcall *cup = container_of(cmsg, struct cld_upcall, cu_u);
+- struct nfsd_net *nn = net_generic(pipe->dentry->d_sb->s_fs_info,
+- nfsd_net_id);
+
+ memset(&msg, 0, sizeof(msg));
+ msg.data = cmsg;
+@@ -781,7 +779,7 @@ out:
+ }
+
+ static int
+-cld_pipe_upcall(struct rpc_pipe *pipe, void *cmsg)
++cld_pipe_upcall(struct rpc_pipe *pipe, void *cmsg, struct nfsd_net *nn)
+ {
+ int ret;
+
+@@ -790,7 +788,7 @@ cld_pipe_upcall(struct rpc_pipe *pipe, void *cmsg)
+ * upcalls queued.
+ */
+ do {
+- ret = __cld_pipe_upcall(pipe, cmsg);
++ ret = __cld_pipe_upcall(pipe, cmsg, nn);
+ } while (ret == -EAGAIN);
+
+ return ret;
+@@ -1123,7 +1121,7 @@ nfsd4_cld_create(struct nfs4_client *clp)
+ memcpy(cup->cu_u.cu_msg.cm_u.cm_name.cn_id, clp->cl_name.data,
+ clp->cl_name.len);
+
+- ret = cld_pipe_upcall(cn->cn_pipe, &cup->cu_u.cu_msg);
++ ret = cld_pipe_upcall(cn->cn_pipe, &cup->cu_u.cu_msg, nn);
+ if (!ret) {
+ ret = cup->cu_u.cu_msg.cm_status;
+ set_bit(NFSD4_CLIENT_STABLE, &clp->cl_flags);
+@@ -1191,7 +1189,7 @@ nfsd4_cld_create_v2(struct nfs4_client *clp)
+ } else
+ cmsg->cm_u.cm_clntinfo.cc_princhash.cp_len = 0;
+
+- ret = cld_pipe_upcall(cn->cn_pipe, cmsg);
++ ret = cld_pipe_upcall(cn->cn_pipe, cmsg, nn);
+ if (!ret) {
+ ret = cmsg->cm_status;
+ set_bit(NFSD4_CLIENT_STABLE, &clp->cl_flags);
+@@ -1229,7 +1227,7 @@ nfsd4_cld_remove(struct nfs4_client *clp)
+ memcpy(cup->cu_u.cu_msg.cm_u.cm_name.cn_id, clp->cl_name.data,
+ clp->cl_name.len);
+
+- ret = cld_pipe_upcall(cn->cn_pipe, &cup->cu_u.cu_msg);
++ ret = cld_pipe_upcall(cn->cn_pipe, &cup->cu_u.cu_msg, nn);
+ if (!ret) {
+ ret = cup->cu_u.cu_msg.cm_status;
+ clear_bit(NFSD4_CLIENT_STABLE, &clp->cl_flags);
+@@ -1272,7 +1270,7 @@ nfsd4_cld_check_v0(struct nfs4_client *clp)
+ memcpy(cup->cu_u.cu_msg.cm_u.cm_name.cn_id, clp->cl_name.data,
+ clp->cl_name.len);
+
+- ret = cld_pipe_upcall(cn->cn_pipe, &cup->cu_u.cu_msg);
++ ret = cld_pipe_upcall(cn->cn_pipe, &cup->cu_u.cu_msg, nn);
+ if (!ret) {
+ ret = cup->cu_u.cu_msg.cm_status;
+ set_bit(NFSD4_CLIENT_STABLE, &clp->cl_flags);
+@@ -1418,7 +1416,7 @@ nfsd4_cld_grace_start(struct nfsd_net *nn)
+ }
+
+ cup->cu_u.cu_msg.cm_cmd = Cld_GraceStart;
+- ret = cld_pipe_upcall(cn->cn_pipe, &cup->cu_u.cu_msg);
++ ret = cld_pipe_upcall(cn->cn_pipe, &cup->cu_u.cu_msg, nn);
+ if (!ret)
+ ret = cup->cu_u.cu_msg.cm_status;
+
+@@ -1446,7 +1444,7 @@ nfsd4_cld_grace_done_v0(struct nfsd_net *nn)
+
+ cup->cu_u.cu_msg.cm_cmd = Cld_GraceDone;
+ cup->cu_u.cu_msg.cm_u.cm_gracetime = nn->boot_time;
+- ret = cld_pipe_upcall(cn->cn_pipe, &cup->cu_u.cu_msg);
++ ret = cld_pipe_upcall(cn->cn_pipe, &cup->cu_u.cu_msg, nn);
+ if (!ret)
+ ret = cup->cu_u.cu_msg.cm_status;
+
+@@ -1474,7 +1472,7 @@ nfsd4_cld_grace_done(struct nfsd_net *nn)
+ }
+
+ cup->cu_u.cu_msg.cm_cmd = Cld_GraceDone;
+- ret = cld_pipe_upcall(cn->cn_pipe, &cup->cu_u.cu_msg);
++ ret = cld_pipe_upcall(cn->cn_pipe, &cup->cu_u.cu_msg, nn);
+ if (!ret)
+ ret = cup->cu_u.cu_msg.cm_status;
+
+@@ -1538,7 +1536,7 @@ nfsd4_cld_get_version(struct nfsd_net *nn)
+ goto out_err;
+ }
+ cup->cu_u.cu_msg.cm_cmd = Cld_GetVersion;
+- ret = cld_pipe_upcall(cn->cn_pipe, &cup->cu_u.cu_msg);
++ ret = cld_pipe_upcall(cn->cn_pipe, &cup->cu_u.cu_msg, nn);
+ if (!ret) {
+ ret = cup->cu_u.cu_msg.cm_status;
+ if (ret)
+diff --git a/fs/ocfs2/dlmglue.c b/fs/ocfs2/dlmglue.c
+index 751bc4dc7466..8e3a369086db 100644
+--- a/fs/ocfs2/dlmglue.c
++++ b/fs/ocfs2/dlmglue.c
+@@ -2871,9 +2871,15 @@ int ocfs2_nfs_sync_lock(struct ocfs2_super *osb, int ex)
+
+ status = ocfs2_cluster_lock(osb, lockres, ex ? LKM_EXMODE : LKM_PRMODE,
+ 0, 0);
+- if (status < 0)
++ if (status < 0) {
+ mlog(ML_ERROR, "lock on nfs sync lock failed %d\n", status);
+
++ if (ex)
++ up_write(&osb->nfs_sync_rwlock);
++ else
++ up_read(&osb->nfs_sync_rwlock);
++ }
++
+ return status;
+ }
+
+diff --git a/fs/pstore/platform.c b/fs/pstore/platform.c
+index 408277ee3cdb..ef49703c9676 100644
+--- a/fs/pstore/platform.c
++++ b/fs/pstore/platform.c
+@@ -275,6 +275,9 @@ static int pstore_compress(const void *in, void *out,
+ {
+ int ret;
+
++ if (!IS_ENABLED(CONFIG_PSTORE_COMPRESSION))
++ return -EINVAL;
++
+ ret = crypto_comp_compress(tfm, in, inlen, out, &outlen);
+ if (ret) {
+ pr_err("crypto_comp_compress failed, ret = %d!\n", ret);
+@@ -661,7 +664,7 @@ static void decompress_record(struct pstore_record *record)
+ int unzipped_len;
+ char *unzipped, *workspace;
+
+- if (!record->compressed)
++ if (!IS_ENABLED(CONFIG_PSTORE_COMPRESSION) || !record->compressed)
+ return;
+
+ /* Only PSTORE_TYPE_DMESG support compression. */
+diff --git a/fs/xfs/libxfs/xfs_trans_space.h b/fs/xfs/libxfs/xfs_trans_space.h
+index 88221c7a04cc..c6df01a2a158 100644
+--- a/fs/xfs/libxfs/xfs_trans_space.h
++++ b/fs/xfs/libxfs/xfs_trans_space.h
+@@ -57,7 +57,7 @@
+ XFS_DAREMOVE_SPACE_RES(mp, XFS_DATA_FORK)
+ #define XFS_IALLOC_SPACE_RES(mp) \
+ (M_IGEO(mp)->ialloc_blks + \
+- (xfs_sb_version_hasfinobt(&mp->m_sb) ? 2 : 1 * \
++ ((xfs_sb_version_hasfinobt(&mp->m_sb) ? 2 : 1) * \
+ (M_IGEO(mp)->inobt_maxlevels - 1)))
+
+ /*
+diff --git a/fs/xfs/scrub/bmap.c b/fs/xfs/scrub/bmap.c
+index add8598eacd5..c4788d244de3 100644
+--- a/fs/xfs/scrub/bmap.c
++++ b/fs/xfs/scrub/bmap.c
+@@ -45,9 +45,27 @@ xchk_setup_inode_bmap(
+ */
+ if (S_ISREG(VFS_I(sc->ip)->i_mode) &&
+ sc->sm->sm_type == XFS_SCRUB_TYPE_BMBTD) {
++ struct address_space *mapping = VFS_I(sc->ip)->i_mapping;
++
+ inode_dio_wait(VFS_I(sc->ip));
+- error = filemap_write_and_wait(VFS_I(sc->ip)->i_mapping);
+- if (error)
++
++ /*
++ * Try to flush all incore state to disk before we examine the
++ * space mappings for the data fork. Leave accumulated errors
++ * in the mapping for the writer threads to consume.
++ *
++ * On ENOSPC or EIO writeback errors, we continue into the
++ * extent mapping checks because write failures do not
++ * necessarily imply anything about the correctness of the file
++ * metadata. The metadata and the file data could be on
++ * completely separate devices; a media failure might only
++ * affect a subset of the disk, etc. We can handle delalloc
++ * extents in the scrubber, so leaving them in memory is fine.
++ */
++ error = filemap_fdatawrite(mapping);
++ if (!error)
++ error = filemap_fdatawait_keep_errors(mapping);
++ if (error && (error != -ENOSPC && error != -EIO))
+ goto out;
+ }
+
+diff --git a/fs/xfs/xfs_qm.c b/fs/xfs/xfs_qm.c
+index c225691fad15..2a0cdca80f86 100644
+--- a/fs/xfs/xfs_qm.c
++++ b/fs/xfs/xfs_qm.c
+@@ -148,6 +148,7 @@ xfs_qm_dqpurge(
+ error = xfs_bwrite(bp);
+ xfs_buf_relse(bp);
+ } else if (error == -EAGAIN) {
++ dqp->dq_flags &= ~XFS_DQ_FREEING;
+ goto out_unlock;
+ }
+ xfs_dqflock(dqp);
+diff --git a/fs/xfs/xfs_reflink.c b/fs/xfs/xfs_reflink.c
+index 107bf2a2f344..d89201d40891 100644
+--- a/fs/xfs/xfs_reflink.c
++++ b/fs/xfs/xfs_reflink.c
+@@ -1003,6 +1003,7 @@ xfs_reflink_remap_extent(
+ xfs_filblks_t rlen;
+ xfs_filblks_t unmap_len;
+ xfs_off_t newlen;
++ int64_t qres;
+ int error;
+
+ unmap_len = irec->br_startoff + irec->br_blockcount - destoff;
+@@ -1025,13 +1026,19 @@ xfs_reflink_remap_extent(
+ xfs_ilock(ip, XFS_ILOCK_EXCL);
+ xfs_trans_ijoin(tp, ip, 0);
+
+- /* If we're not just clearing space, then do we have enough quota? */
+- if (real_extent) {
+- error = xfs_trans_reserve_quota_nblks(tp, ip,
+- irec->br_blockcount, 0, XFS_QMOPT_RES_REGBLKS);
+- if (error)
+- goto out_cancel;
+- }
++ /*
++ * Reserve quota for this operation. We don't know if the first unmap
++ * in the dest file will cause a bmap btree split, so we always reserve
++ * at least enough blocks for that split. If the extent being mapped
++ * in is written, we need to reserve quota for that too.
++ */
++ qres = XFS_EXTENTADD_SPACE_RES(mp, XFS_DATA_FORK);
++ if (real_extent)
++ qres += irec->br_blockcount;
++ error = xfs_trans_reserve_quota_nblks(tp, ip, qres, 0,
++ XFS_QMOPT_RES_REGBLKS);
++ if (error)
++ goto out_cancel;
+
+ trace_xfs_reflink_remap(ip, irec->br_startoff,
+ irec->br_blockcount, irec->br_startblock);
+diff --git a/include/asm-generic/vmlinux.lds.h b/include/asm-generic/vmlinux.lds.h
+index dc29044d3ed9..271d56ff3316 100644
+--- a/include/asm-generic/vmlinux.lds.h
++++ b/include/asm-generic/vmlinux.lds.h
+@@ -375,6 +375,7 @@
+ */
+ #ifndef RO_AFTER_INIT_DATA
+ #define RO_AFTER_INIT_DATA \
++ . = ALIGN(8); \
+ __start_ro_after_init = .; \
+ *(.data..ro_after_init) \
+ JUMP_TABLE_DATA \
+diff --git a/include/linux/bitfield.h b/include/linux/bitfield.h
+index 48ea093ff04c..4e035aca6f7e 100644
+--- a/include/linux/bitfield.h
++++ b/include/linux/bitfield.h
+@@ -77,7 +77,7 @@
+ */
+ #define FIELD_FIT(_mask, _val) \
+ ({ \
+- __BF_FIELD_CHECK(_mask, 0ULL, _val, "FIELD_FIT: "); \
++ __BF_FIELD_CHECK(_mask, 0ULL, 0ULL, "FIELD_FIT: "); \
+ !((((typeof(_mask))_val) << __bf_shf(_mask)) & ~(_mask)); \
+ })
+
+diff --git a/include/linux/gpio/driver.h b/include/linux/gpio/driver.h
+index b8fc92c177eb..e4a00bb42427 100644
+--- a/include/linux/gpio/driver.h
++++ b/include/linux/gpio/driver.h
+@@ -496,8 +496,16 @@ extern int gpiochip_add_data_with_key(struct gpio_chip *gc, void *data,
+ gpiochip_add_data_with_key(gc, data, &lock_key, \
+ &request_key); \
+ })
++#define devm_gpiochip_add_data(dev, gc, data) ({ \
++ static struct lock_class_key lock_key; \
++ static struct lock_class_key request_key; \
++ devm_gpiochip_add_data_with_key(dev, gc, data, &lock_key, \
++ &request_key); \
++ })
+ #else
+ #define gpiochip_add_data(gc, data) gpiochip_add_data_with_key(gc, data, NULL, NULL)
++#define devm_gpiochip_add_data(dev, gc, data) \
++ devm_gpiochip_add_data_with_key(dev, gc, data, NULL, NULL)
+ #endif /* CONFIG_LOCKDEP */
+
+ static inline int gpiochip_add(struct gpio_chip *gc)
+@@ -505,8 +513,9 @@ static inline int gpiochip_add(struct gpio_chip *gc)
+ return gpiochip_add_data(gc, NULL);
+ }
+ extern void gpiochip_remove(struct gpio_chip *gc);
+-extern int devm_gpiochip_add_data(struct device *dev, struct gpio_chip *gc,
+- void *data);
++extern int devm_gpiochip_add_data_with_key(struct device *dev, struct gpio_chip *gc, void *data,
++ struct lock_class_key *lock_key,
++ struct lock_class_key *request_key);
+
+ extern struct gpio_chip *gpiochip_find(void *data,
+ int (*match)(struct gpio_chip *gc, void *data));
+diff --git a/include/linux/tpm.h b/include/linux/tpm.h
+index 03e9b184411b..8f4ff39f51e7 100644
+--- a/include/linux/tpm.h
++++ b/include/linux/tpm.h
+@@ -96,6 +96,7 @@ struct tpm_space {
+ u8 *context_buf;
+ u32 session_tbl[3];
+ u8 *session_buf;
++ u32 buf_size;
+ };
+
+ struct tpm_bios_log {
+diff --git a/include/linux/tpm_eventlog.h b/include/linux/tpm_eventlog.h
+index 96d36b7a1344..6f1d1b7f8b42 100644
+--- a/include/linux/tpm_eventlog.h
++++ b/include/linux/tpm_eventlog.h
+@@ -211,9 +211,16 @@ static inline int __calc_tpm2_event_size(struct tcg_pcr_event2_head *event,
+
+ efispecid = (struct tcg_efi_specid_event_head *)event_header->event;
+
+- /* Check if event is malformed. */
++ /*
++ * Perform validation of the event in order to identify malformed
++ * events. This function may be asked to parse arbitrary byte sequences
++ * immediately following a valid event log. The caller expects this
++ * function to recognize that the byte sequence is not a valid event
++ * and to return an event size of 0.
++ */
+ if (memcmp(efispecid->signature, TCG_SPECID_SIG,
+- sizeof(TCG_SPECID_SIG)) || count > efispecid->num_algs) {
++ sizeof(TCG_SPECID_SIG)) ||
++ !efispecid->num_algs || count != efispecid->num_algs) {
+ size = 0;
+ goto out;
+ }
+diff --git a/include/linux/tracepoint.h b/include/linux/tracepoint.h
+index a1fecf311621..3a5b717d92e8 100644
+--- a/include/linux/tracepoint.h
++++ b/include/linux/tracepoint.h
+@@ -361,7 +361,7 @@ static inline struct tracepoint *tracepoint_ptr_deref(tracepoint_ptr_t *p)
+ static const char *___tp_str __tracepoint_string = str; \
+ ___tp_str; \
+ })
+-#define __tracepoint_string __attribute__((section("__tracepoint_str")))
++#define __tracepoint_string __attribute__((section("__tracepoint_str"), used))
+ #else
+ /*
+ * tracepoint_string() is used to save the string address for userspace
+diff --git a/include/net/bluetooth/bluetooth.h b/include/net/bluetooth/bluetooth.h
+index 1576353a2773..15c54deb2b8e 100644
+--- a/include/net/bluetooth/bluetooth.h
++++ b/include/net/bluetooth/bluetooth.h
+@@ -41,6 +41,8 @@
+ #define BLUETOOTH_VER_1_1 1
+ #define BLUETOOTH_VER_1_2 2
+ #define BLUETOOTH_VER_2_0 3
++#define BLUETOOTH_VER_2_1 4
++#define BLUETOOTH_VER_4_0 6
+
+ /* Reserv for core and drivers use */
+ #define BT_SKB_RESERVE 8
+diff --git a/include/net/bluetooth/hci.h b/include/net/bluetooth/hci.h
+index 25c2e5ee81dc..b2c567fc3338 100644
+--- a/include/net/bluetooth/hci.h
++++ b/include/net/bluetooth/hci.h
+@@ -223,6 +223,17 @@ enum {
+ * supported.
+ */
+ HCI_QUIRK_VALID_LE_STATES,
++
++ /* When this quirk is set, then erroneous data reporting
++ * is ignored. This is mainly due to the fact that the HCI
++ * Read Default Erroneous Data Reporting command is advertised,
++ * but not supported; these controllers often reply with unknown
++ * command and tend to lock up randomly. Needing a hard reset.
++ *
++ * This quirk can be set before hci_register_dev is called or
++ * during the hdev->setup vendor callback.
++ */
++ HCI_QUIRK_BROKEN_ERR_DATA_REPORTING,
+ };
+
+ /* HCI device flags */
+diff --git a/include/net/inet_connection_sock.h b/include/net/inet_connection_sock.h
+index a3f076befa4f..cceec467ed9e 100644
+--- a/include/net/inet_connection_sock.h
++++ b/include/net/inet_connection_sock.h
+@@ -309,6 +309,10 @@ int inet_csk_compat_getsockopt(struct sock *sk, int level, int optname,
+ int inet_csk_compat_setsockopt(struct sock *sk, int level, int optname,
+ char __user *optval, unsigned int optlen);
+
++/* update the fast reuse flag when adding a socket */
++void inet_csk_update_fastreuse(struct inet_bind_bucket *tb,
++ struct sock *sk);
++
+ struct dst_entry *inet_csk_update_pmtu(struct sock *sk, u32 mtu);
+
+ #define TCP_PINGPONG_THRESH 3
+diff --git a/include/net/ip_vs.h b/include/net/ip_vs.h
+index 83be2d93b407..fe96aa462d05 100644
+--- a/include/net/ip_vs.h
++++ b/include/net/ip_vs.h
+@@ -1624,18 +1624,16 @@ static inline void ip_vs_conn_drop_conntrack(struct ip_vs_conn *cp)
+ }
+ #endif /* CONFIG_IP_VS_NFCT */
+
+-/* Really using conntrack? */
+-static inline bool ip_vs_conn_uses_conntrack(struct ip_vs_conn *cp,
+- struct sk_buff *skb)
++/* Using old conntrack that can not be redirected to another real server? */
++static inline bool ip_vs_conn_uses_old_conntrack(struct ip_vs_conn *cp,
++ struct sk_buff *skb)
+ {
+ #ifdef CONFIG_IP_VS_NFCT
+ enum ip_conntrack_info ctinfo;
+ struct nf_conn *ct;
+
+- if (!(cp->flags & IP_VS_CONN_F_NFCT))
+- return false;
+ ct = nf_ct_get(skb, &ctinfo);
+- if (ct)
++ if (ct && nf_ct_is_confirmed(ct))
+ return true;
+ #endif
+ return false;
+diff --git a/include/net/tcp.h b/include/net/tcp.h
+index 6f8e60c6fbc7..ecb66d01135e 100644
+--- a/include/net/tcp.h
++++ b/include/net/tcp.h
+@@ -1669,6 +1669,8 @@ void tcp_fastopen_destroy_cipher(struct sock *sk);
+ void tcp_fastopen_ctx_destroy(struct net *net);
+ int tcp_fastopen_reset_cipher(struct net *net, struct sock *sk,
+ void *primary_key, void *backup_key);
++int tcp_fastopen_get_cipher(struct net *net, struct inet_connection_sock *icsk,
++ u64 *key);
+ void tcp_fastopen_add_skb(struct sock *sk, struct sk_buff *skb);
+ struct sock *tcp_try_fastopen(struct sock *sk, struct sk_buff *skb,
+ struct request_sock *req,
+diff --git a/include/uapi/linux/seccomp.h b/include/uapi/linux/seccomp.h
+index c1735455bc53..965290f7dcc2 100644
+--- a/include/uapi/linux/seccomp.h
++++ b/include/uapi/linux/seccomp.h
+@@ -123,5 +123,6 @@ struct seccomp_notif_resp {
+ #define SECCOMP_IOCTL_NOTIF_RECV SECCOMP_IOWR(0, struct seccomp_notif)
+ #define SECCOMP_IOCTL_NOTIF_SEND SECCOMP_IOWR(1, \
+ struct seccomp_notif_resp)
+-#define SECCOMP_IOCTL_NOTIF_ID_VALID SECCOMP_IOR(2, __u64)
++#define SECCOMP_IOCTL_NOTIF_ID_VALID SECCOMP_IOW(2, __u64)
++
+ #endif /* _UAPI_LINUX_SECCOMP_H */
+diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
+index d9a49cd6065a..a3aa129cc8f5 100644
+--- a/kernel/rcu/tree.c
++++ b/kernel/rcu/tree.c
+@@ -2895,7 +2895,7 @@ static void kfree_rcu_work(struct work_struct *work)
+ static inline bool queue_kfree_rcu_work(struct kfree_rcu_cpu *krcp)
+ {
+ struct kfree_rcu_cpu_work *krwp;
+- bool queued = false;
++ bool repeat = false;
+ int i;
+
+ lockdep_assert_held(&krcp->lock);
+@@ -2931,11 +2931,14 @@ static inline bool queue_kfree_rcu_work(struct kfree_rcu_cpu *krcp)
+ * been detached following each other, one by one.
+ */
+ queue_rcu_work(system_wq, &krwp->rcu_work);
+- queued = true;
+ }
++
++ /* Repeat if any "free" corresponding channel is still busy. */
++ if (krcp->bhead || krcp->head)
++ repeat = true;
+ }
+
+- return queued;
++ return !repeat;
+ }
+
+ static inline void kfree_rcu_drain_unlock(struct kfree_rcu_cpu *krcp,
+diff --git a/kernel/sched/core.c b/kernel/sched/core.c
+index a7ef76a62699..1bae86fc128b 100644
+--- a/kernel/sched/core.c
++++ b/kernel/sched/core.c
+@@ -1237,6 +1237,20 @@ static void uclamp_fork(struct task_struct *p)
+ }
+ }
+
++static void __init init_uclamp_rq(struct rq *rq)
++{
++ enum uclamp_id clamp_id;
++ struct uclamp_rq *uc_rq = rq->uclamp;
++
++ for_each_clamp_id(clamp_id) {
++ uc_rq[clamp_id] = (struct uclamp_rq) {
++ .value = uclamp_none(clamp_id)
++ };
++ }
++
++ rq->uclamp_flags = 0;
++}
++
+ static void __init init_uclamp(void)
+ {
+ struct uclamp_se uc_max = {};
+@@ -1245,11 +1259,8 @@ static void __init init_uclamp(void)
+
+ mutex_init(&uclamp_mutex);
+
+- for_each_possible_cpu(cpu) {
+- memset(&cpu_rq(cpu)->uclamp, 0,
+- sizeof(struct uclamp_rq)*UCLAMP_CNT);
+- cpu_rq(cpu)->uclamp_flags = 0;
+- }
++ for_each_possible_cpu(cpu)
++ init_uclamp_rq(cpu_rq(cpu));
+
+ for_each_clamp_id(clamp_id) {
+ uclamp_se_set(&init_task.uclamp_req[clamp_id],
+diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
+index 5c31875a7d9d..e44332b829b4 100644
+--- a/kernel/sched/fair.c
++++ b/kernel/sched/fair.c
+@@ -10033,7 +10033,12 @@ static void kick_ilb(unsigned int flags)
+ {
+ int ilb_cpu;
+
+- nohz.next_balance++;
++ /*
++ * Increase nohz.next_balance only when if full ilb is triggered but
++ * not if we only update stats.
++ */
++ if (flags & NOHZ_BALANCE_KICK)
++ nohz.next_balance = jiffies+1;
+
+ ilb_cpu = find_new_ilb();
+
+@@ -10351,6 +10356,14 @@ static bool _nohz_idle_balance(struct rq *this_rq, unsigned int flags,
+ }
+ }
+
++ /*
++ * next_balance will be updated only when there is a need.
++ * When the CPU is attached to null domain for ex, it will not be
++ * updated.
++ */
++ if (likely(update_next_balance))
++ nohz.next_balance = next_balance;
++
+ /* Newly idle CPU doesn't need an update */
+ if (idle != CPU_NEWLY_IDLE) {
+ update_blocked_averages(this_cpu);
+@@ -10371,14 +10384,6 @@ abort:
+ if (has_blocked_load)
+ WRITE_ONCE(nohz.has_blocked, 1);
+
+- /*
+- * next_balance will be updated only when there is a need.
+- * When the CPU is attached to null domain for ex, it will not be
+- * updated.
+- */
+- if (likely(update_next_balance))
+- nohz.next_balance = next_balance;
+-
+ return ret;
+ }
+
+diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
+index 8344757bba6e..160178d6eb20 100644
+--- a/kernel/sched/topology.c
++++ b/kernel/sched/topology.c
+@@ -1338,7 +1338,7 @@ sd_init(struct sched_domain_topology_level *tl,
+ sd_flags = (*tl->sd_flags)();
+ if (WARN_ONCE(sd_flags & ~TOPOLOGY_SD_FLAGS,
+ "wrong sd_flags in topology description\n"))
+- sd_flags &= ~TOPOLOGY_SD_FLAGS;
++ sd_flags &= TOPOLOGY_SD_FLAGS;
+
+ /* Apply detected topology flags */
+ sd_flags |= dflags;
+diff --git a/kernel/seccomp.c b/kernel/seccomp.c
+index 55a6184f5990..63e283c4c58e 100644
+--- a/kernel/seccomp.c
++++ b/kernel/seccomp.c
+@@ -42,6 +42,14 @@
+ #include <linux/uaccess.h>
+ #include <linux/anon_inodes.h>
+
++/*
++ * When SECCOMP_IOCTL_NOTIF_ID_VALID was first introduced, it had the
++ * wrong direction flag in the ioctl number. This is the broken one,
++ * which the kernel needs to keep supporting until all userspaces stop
++ * using the wrong command number.
++ */
++#define SECCOMP_IOCTL_NOTIF_ID_VALID_WRONG_DIR SECCOMP_IOR(2, __u64)
++
+ enum notify_state {
+ SECCOMP_NOTIFY_INIT,
+ SECCOMP_NOTIFY_SENT,
+@@ -1186,6 +1194,7 @@ static long seccomp_notify_ioctl(struct file *file, unsigned int cmd,
+ return seccomp_notify_recv(filter, buf);
+ case SECCOMP_IOCTL_NOTIF_SEND:
+ return seccomp_notify_send(filter, buf);
++ case SECCOMP_IOCTL_NOTIF_ID_VALID_WRONG_DIR:
+ case SECCOMP_IOCTL_NOTIF_ID_VALID:
+ return seccomp_notify_id_valid(filter, buf);
+ default:
+diff --git a/kernel/signal.c b/kernel/signal.c
+index d5feb34b5e15..6c793322e01b 100644
+--- a/kernel/signal.c
++++ b/kernel/signal.c
+@@ -2541,7 +2541,21 @@ bool get_signal(struct ksignal *ksig)
+
+ relock:
+ spin_lock_irq(&sighand->siglock);
+- current->jobctl &= ~JOBCTL_TASK_WORK;
++ /*
++ * Make sure we can safely read ->jobctl() in task_work add. As Oleg
++ * states:
++ *
++ * It pairs with mb (implied by cmpxchg) before READ_ONCE. So we
++ * roughly have
++ *
++ * task_work_add: get_signal:
++ * STORE(task->task_works, new_work); STORE(task->jobctl);
++ * mb(); mb();
++ * LOAD(task->jobctl); LOAD(task->task_works);
++ *
++ * and we can rely on STORE-MB-LOAD [ in task_work_add].
++ */
++ smp_store_mb(current->jobctl, current->jobctl & ~JOBCTL_TASK_WORK);
+ if (unlikely(current->task_works)) {
+ spin_unlock_irq(&sighand->siglock);
+ task_work_run();
+diff --git a/kernel/task_work.c b/kernel/task_work.c
+index 5c0848ca1287..613b2d634af8 100644
+--- a/kernel/task_work.c
++++ b/kernel/task_work.c
+@@ -42,7 +42,13 @@ task_work_add(struct task_struct *task, struct callback_head *work, int notify)
+ set_notify_resume(task);
+ break;
+ case TWA_SIGNAL:
+- if (lock_task_sighand(task, &flags)) {
++ /*
++ * Only grab the sighand lock if we don't already have some
++ * task_work pending. This pairs with the smp_store_mb()
++ * in get_signal(), see comment there.
++ */
++ if (!(READ_ONCE(task->jobctl) & JOBCTL_TASK_WORK) &&
++ lock_task_sighand(task, &flags)) {
+ task->jobctl |= JOBCTL_TASK_WORK;
+ signal_wake_up(task, 0);
+ unlock_task_sighand(task, &flags);
+diff --git a/kernel/time/tick-sched.c b/kernel/time/tick-sched.c
+index 3e2dc9b8858c..f0199a4ba1ad 100644
+--- a/kernel/time/tick-sched.c
++++ b/kernel/time/tick-sched.c
+@@ -351,16 +351,24 @@ void tick_nohz_dep_clear_cpu(int cpu, enum tick_dep_bits bit)
+ EXPORT_SYMBOL_GPL(tick_nohz_dep_clear_cpu);
+
+ /*
+- * Set a per-task tick dependency. Posix CPU timers need this in order to elapse
+- * per task timers.
++ * Set a per-task tick dependency. RCU need this. Also posix CPU timers
++ * in order to elapse per task timers.
+ */
+ void tick_nohz_dep_set_task(struct task_struct *tsk, enum tick_dep_bits bit)
+ {
+- /*
+- * We could optimize this with just kicking the target running the task
+- * if that noise matters for nohz full users.
+- */
+- tick_nohz_dep_set_all(&tsk->tick_dep_mask, bit);
++ if (!atomic_fetch_or(BIT(bit), &tsk->tick_dep_mask)) {
++ if (tsk == current) {
++ preempt_disable();
++ tick_nohz_full_kick();
++ preempt_enable();
++ } else {
++ /*
++ * Some future tick_nohz_full_kick_task()
++ * should optimize this.
++ */
++ tick_nohz_full_kick_all();
++ }
++ }
+ }
+ EXPORT_SYMBOL_GPL(tick_nohz_dep_set_task);
+
+diff --git a/kernel/trace/blktrace.c b/kernel/trace/blktrace.c
+index 085fceca3377..ac59476c77ae 100644
+--- a/kernel/trace/blktrace.c
++++ b/kernel/trace/blktrace.c
+@@ -520,10 +520,18 @@ static int do_blk_trace_setup(struct request_queue *q, char *name, dev_t dev,
+ if (!bt->msg_data)
+ goto err;
+
+- ret = -ENOENT;
+-
+- dir = debugfs_lookup(buts->name, blk_debugfs_root);
+- if (!dir)
++#ifdef CONFIG_BLK_DEBUG_FS
++ /*
++ * When tracing whole make_request drivers (multiqueue) block devices,
++ * reuse the existing debugfs directory created by the block layer on
++ * init. For request-based block devices, all partitions block devices,
++ * and scsi-generic block devices we create a temporary new debugfs
++ * directory that will be removed once the trace ends.
++ */
++ if (queue_is_mq(q) && bdev && bdev == bdev->bd_contains)
++ dir = q->debugfs_dir;
++ else
++#endif
+ bt->dir = dir = debugfs_create_dir(buts->name, blk_debugfs_root);
+
+ bt->dev = dev;
+@@ -564,8 +572,6 @@ static int do_blk_trace_setup(struct request_queue *q, char *name, dev_t dev,
+
+ ret = 0;
+ err:
+- if (dir && !bt->dir)
+- dput(dir);
+ if (ret)
+ blk_trace_free(bt);
+ return ret;
+diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
+index bd030b1b9514..baa7c050dc7b 100644
+--- a/kernel/trace/ftrace.c
++++ b/kernel/trace/ftrace.c
+@@ -139,9 +139,6 @@ static inline void ftrace_ops_init(struct ftrace_ops *ops)
+ #endif
+ }
+
+-#define FTRACE_PID_IGNORE -1
+-#define FTRACE_PID_TRACE -2
+-
+ static void ftrace_pid_func(unsigned long ip, unsigned long parent_ip,
+ struct ftrace_ops *op, struct pt_regs *regs)
+ {
+diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
+index 29615f15a820..5c56c1e2f273 100644
+--- a/kernel/trace/trace.c
++++ b/kernel/trace/trace.c
+@@ -5885,7 +5885,7 @@ int tracing_set_tracer(struct trace_array *tr, const char *buf)
+ }
+
+ /* If trace pipe files are being read, we can't change the tracer */
+- if (tr->current_trace->ref) {
++ if (tr->trace_ref) {
+ ret = -EBUSY;
+ goto out;
+ }
+@@ -6101,7 +6101,7 @@ static int tracing_open_pipe(struct inode *inode, struct file *filp)
+
+ nonseekable_open(inode, filp);
+
+- tr->current_trace->ref++;
++ tr->trace_ref++;
+ out:
+ mutex_unlock(&trace_types_lock);
+ return ret;
+@@ -6120,7 +6120,7 @@ static int tracing_release_pipe(struct inode *inode, struct file *file)
+
+ mutex_lock(&trace_types_lock);
+
+- tr->current_trace->ref--;
++ tr->trace_ref--;
+
+ if (iter->trace->pipe_close)
+ iter->trace->pipe_close(iter);
+@@ -7429,7 +7429,7 @@ static int tracing_buffers_open(struct inode *inode, struct file *filp)
+
+ filp->private_data = info;
+
+- tr->current_trace->ref++;
++ tr->trace_ref++;
+
+ mutex_unlock(&trace_types_lock);
+
+@@ -7530,7 +7530,7 @@ static int tracing_buffers_release(struct inode *inode, struct file *file)
+
+ mutex_lock(&trace_types_lock);
+
+- iter->tr->current_trace->ref--;
++ iter->tr->trace_ref--;
+
+ __trace_array_put(iter->tr);
+
+@@ -8752,7 +8752,7 @@ static int __remove_instance(struct trace_array *tr)
+ int i;
+
+ /* Reference counter for a newly created trace array = 1. */
+- if (tr->ref > 1 || (tr->current_trace && tr->current_trace->ref))
++ if (tr->ref > 1 || (tr->current_trace && tr->trace_ref))
+ return -EBUSY;
+
+ list_del(&tr->list);
+diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
+index 7fb2f4c1bc49..6b9acbf95cbc 100644
+--- a/kernel/trace/trace.h
++++ b/kernel/trace/trace.h
+@@ -356,6 +356,7 @@ struct trace_array {
+ struct trace_event_file *trace_marker_file;
+ cpumask_var_t tracing_cpumask; /* only trace on set CPUs */
+ int ref;
++ int trace_ref;
+ #ifdef CONFIG_FUNCTION_TRACER
+ struct ftrace_ops *ops;
+ struct trace_pid_list __rcu *function_pids;
+@@ -547,7 +548,6 @@ struct tracer {
+ struct tracer *next;
+ struct tracer_flags *flags;
+ int enabled;
+- int ref;
+ bool print_max;
+ bool allow_instances;
+ #ifdef CONFIG_TRACER_MAX_TRACE
+@@ -1103,6 +1103,10 @@ print_graph_function_flags(struct trace_iterator *iter, u32 flags)
+ extern struct list_head ftrace_pids;
+
+ #ifdef CONFIG_FUNCTION_TRACER
++
++#define FTRACE_PID_IGNORE -1
++#define FTRACE_PID_TRACE -2
++
+ struct ftrace_func_command {
+ struct list_head list;
+ char *name;
+@@ -1114,7 +1118,8 @@ struct ftrace_func_command {
+ extern bool ftrace_filter_param __initdata;
+ static inline int ftrace_trace_task(struct trace_array *tr)
+ {
+- return !this_cpu_read(tr->array_buffer.data->ftrace_ignore_pid);
++ return this_cpu_read(tr->array_buffer.data->ftrace_ignore_pid) !=
++ FTRACE_PID_IGNORE;
+ }
+ extern int ftrace_is_dead(void);
+ int ftrace_create_function_files(struct trace_array *tr,
+diff --git a/lib/crc-t10dif.c b/lib/crc-t10dif.c
+index 8cc01a603416..c9acf1c12cfc 100644
+--- a/lib/crc-t10dif.c
++++ b/lib/crc-t10dif.c
+@@ -19,39 +19,46 @@
+ static struct crypto_shash __rcu *crct10dif_tfm;
+ static struct static_key crct10dif_fallback __read_mostly;
+ static DEFINE_MUTEX(crc_t10dif_mutex);
++static struct work_struct crct10dif_rehash_work;
+
+-static int crc_t10dif_rehash(struct notifier_block *self, unsigned long val, void *data)
++static int crc_t10dif_notify(struct notifier_block *self, unsigned long val, void *data)
+ {
+ struct crypto_alg *alg = data;
+- struct crypto_shash *new, *old;
+
+ if (val != CRYPTO_MSG_ALG_LOADED ||
+ static_key_false(&crct10dif_fallback) ||
+ strncmp(alg->cra_name, CRC_T10DIF_STRING, strlen(CRC_T10DIF_STRING)))
+ return 0;
+
++ schedule_work(&crct10dif_rehash_work);
++ return 0;
++}
++
++static void crc_t10dif_rehash(struct work_struct *work)
++{
++ struct crypto_shash *new, *old;
++
+ mutex_lock(&crc_t10dif_mutex);
+ old = rcu_dereference_protected(crct10dif_tfm,
+ lockdep_is_held(&crc_t10dif_mutex));
+ if (!old) {
+ mutex_unlock(&crc_t10dif_mutex);
+- return 0;
++ return;
+ }
+ new = crypto_alloc_shash("crct10dif", 0, 0);
+ if (IS_ERR(new)) {
+ mutex_unlock(&crc_t10dif_mutex);
+- return 0;
++ return;
+ }
+ rcu_assign_pointer(crct10dif_tfm, new);
+ mutex_unlock(&crc_t10dif_mutex);
+
+ synchronize_rcu();
+ crypto_free_shash(old);
+- return 0;
+ }
+
+ static struct notifier_block crc_t10dif_nb = {
+- .notifier_call = crc_t10dif_rehash,
++ .notifier_call = crc_t10dif_notify,
+ };
+
+ __u16 crc_t10dif_update(__u16 crc, const unsigned char *buffer, size_t len)
+@@ -86,19 +93,26 @@ EXPORT_SYMBOL(crc_t10dif);
+
+ static int __init crc_t10dif_mod_init(void)
+ {
++ struct crypto_shash *tfm;
++
++ INIT_WORK(&crct10dif_rehash_work, crc_t10dif_rehash);
+ crypto_register_notifier(&crc_t10dif_nb);
+- crct10dif_tfm = crypto_alloc_shash("crct10dif", 0, 0);
+- if (IS_ERR(crct10dif_tfm)) {
++ mutex_lock(&crc_t10dif_mutex);
++ tfm = crypto_alloc_shash("crct10dif", 0, 0);
++ if (IS_ERR(tfm)) {
+ static_key_slow_inc(&crct10dif_fallback);
+- crct10dif_tfm = NULL;
++ tfm = NULL;
+ }
++ RCU_INIT_POINTER(crct10dif_tfm, tfm);
++ mutex_unlock(&crc_t10dif_mutex);
+ return 0;
+ }
+
+ static void __exit crc_t10dif_mod_fini(void)
+ {
+ crypto_unregister_notifier(&crc_t10dif_nb);
+- crypto_free_shash(crct10dif_tfm);
++ cancel_work_sync(&crct10dif_rehash_work);
++ crypto_free_shash(rcu_dereference_protected(crct10dif_tfm, 1));
+ }
+
+ module_init(crc_t10dif_mod_init);
+@@ -106,11 +120,27 @@ module_exit(crc_t10dif_mod_fini);
+
+ static int crc_t10dif_transform_show(char *buffer, const struct kernel_param *kp)
+ {
++ struct crypto_shash *tfm;
++ const char *name;
++ int len;
++
+ if (static_key_false(&crct10dif_fallback))
+ return sprintf(buffer, "fallback\n");
+
+- return sprintf(buffer, "%s\n",
+- crypto_tfm_alg_driver_name(crypto_shash_tfm(crct10dif_tfm)));
++ rcu_read_lock();
++ tfm = rcu_dereference(crct10dif_tfm);
++ if (!tfm) {
++ len = sprintf(buffer, "init\n");
++ goto unlock;
++ }
++
++ name = crypto_tfm_alg_driver_name(crypto_shash_tfm(tfm));
++ len = sprintf(buffer, "%s\n", name);
++
++unlock:
++ rcu_read_unlock();
++
++ return len;
+ }
+
+ module_param_call(transform, NULL, crc_t10dif_transform_show, NULL, 0644);
+diff --git a/lib/dynamic_debug.c b/lib/dynamic_debug.c
+index 8f199f403ab5..e3755d1f74bd 100644
+--- a/lib/dynamic_debug.c
++++ b/lib/dynamic_debug.c
+@@ -87,22 +87,22 @@ static struct { unsigned flag:8; char opt_char; } opt_array[] = {
+ { _DPRINTK_FLAGS_NONE, '_' },
+ };
+
++struct flagsbuf { char buf[ARRAY_SIZE(opt_array)+1]; };
++
+ /* format a string into buf[] which describes the _ddebug's flags */
+-static char *ddebug_describe_flags(struct _ddebug *dp, char *buf,
+- size_t maxlen)
++static char *ddebug_describe_flags(unsigned int flags, struct flagsbuf *fb)
+ {
+- char *p = buf;
++ char *p = fb->buf;
+ int i;
+
+- BUG_ON(maxlen < 6);
+ for (i = 0; i < ARRAY_SIZE(opt_array); ++i)
+- if (dp->flags & opt_array[i].flag)
++ if (flags & opt_array[i].flag)
+ *p++ = opt_array[i].opt_char;
+- if (p == buf)
++ if (p == fb->buf)
+ *p++ = '_';
+ *p = '\0';
+
+- return buf;
++ return fb->buf;
+ }
+
+ #define vpr_info(fmt, ...) \
+@@ -144,7 +144,7 @@ static int ddebug_change(const struct ddebug_query *query,
+ struct ddebug_table *dt;
+ unsigned int newflags;
+ unsigned int nfound = 0;
+- char flagbuf[10];
++ struct flagsbuf fbuf;
+
+ /* search for matching ddebugs */
+ mutex_lock(&ddebug_lock);
+@@ -201,8 +201,7 @@ static int ddebug_change(const struct ddebug_query *query,
+ vpr_info("changed %s:%d [%s]%s =%s\n",
+ trim_prefix(dp->filename), dp->lineno,
+ dt->mod_name, dp->function,
+- ddebug_describe_flags(dp, flagbuf,
+- sizeof(flagbuf)));
++ ddebug_describe_flags(dp->flags, &fbuf));
+ }
+ }
+ mutex_unlock(&ddebug_lock);
+@@ -816,7 +815,7 @@ static int ddebug_proc_show(struct seq_file *m, void *p)
+ {
+ struct ddebug_iter *iter = m->private;
+ struct _ddebug *dp = p;
+- char flagsbuf[10];
++ struct flagsbuf flags;
+
+ vpr_info("called m=%p p=%p\n", m, p);
+
+@@ -829,7 +828,7 @@ static int ddebug_proc_show(struct seq_file *m, void *p)
+ seq_printf(m, "%s:%u [%s]%s =%s \"",
+ trim_prefix(dp->filename), dp->lineno,
+ iter->table->mod_name, dp->function,
+- ddebug_describe_flags(dp, flagsbuf, sizeof(flagsbuf)));
++ ddebug_describe_flags(dp->flags, &flags));
+ seq_escape(m, dp->format, "\t\r\n\"");
+ seq_puts(m, "\"\n");
+
+diff --git a/lib/kobject.c b/lib/kobject.c
+index 83198cb37d8d..386873bdd51c 100644
+--- a/lib/kobject.c
++++ b/lib/kobject.c
+@@ -599,14 +599,7 @@ out:
+ }
+ EXPORT_SYMBOL_GPL(kobject_move);
+
+-/**
+- * kobject_del() - Unlink kobject from hierarchy.
+- * @kobj: object.
+- *
+- * This is the function that should be called to delete an object
+- * successfully added via kobject_add().
+- */
+-void kobject_del(struct kobject *kobj)
++static void __kobject_del(struct kobject *kobj)
+ {
+ struct kernfs_node *sd;
+ const struct kobj_type *ktype;
+@@ -625,9 +618,23 @@ void kobject_del(struct kobject *kobj)
+
+ kobj->state_in_sysfs = 0;
+ kobj_kset_leave(kobj);
+- kobject_put(kobj->parent);
+ kobj->parent = NULL;
+ }
++
++/**
++ * kobject_del() - Unlink kobject from hierarchy.
++ * @kobj: object.
++ *
++ * This is the function that should be called to delete an object
++ * successfully added via kobject_add().
++ */
++void kobject_del(struct kobject *kobj)
++{
++ struct kobject *parent = kobj->parent;
++
++ __kobject_del(kobj);
++ kobject_put(parent);
++}
+ EXPORT_SYMBOL(kobject_del);
+
+ /**
+@@ -663,6 +670,7 @@ EXPORT_SYMBOL(kobject_get_unless_zero);
+ */
+ static void kobject_cleanup(struct kobject *kobj)
+ {
++ struct kobject *parent = kobj->parent;
+ struct kobj_type *t = get_ktype(kobj);
+ const char *name = kobj->name;
+
+@@ -684,7 +692,10 @@ static void kobject_cleanup(struct kobject *kobj)
+ if (kobj->state_in_sysfs) {
+ pr_debug("kobject: '%s' (%p): auto cleanup kobject_del\n",
+ kobject_name(kobj), kobj);
+- kobject_del(kobj);
++ __kobject_del(kobj);
++ } else {
++ /* avoid dropping the parent reference unnecessarily */
++ parent = NULL;
+ }
+
+ if (t && t->release) {
+@@ -698,6 +709,8 @@ static void kobject_cleanup(struct kobject *kobj)
+ pr_debug("kobject: '%s': free name\n", name);
+ kfree_const(name);
+ }
++
++ kobject_put(parent);
+ }
+
+ #ifdef CONFIG_DEBUG_KOBJECT_RELEASE
+diff --git a/mm/mmap.c b/mm/mmap.c
+index bb1822ac9909..55bb456fd0d0 100644
+--- a/mm/mmap.c
++++ b/mm/mmap.c
+@@ -3171,6 +3171,7 @@ void exit_mmap(struct mm_struct *mm)
+ if (vma->vm_flags & VM_ACCOUNT)
+ nr_accounted += vma_pages(vma);
+ vma = remove_vma(vma);
++ cond_resched();
+ }
+ vm_unacct_memory(nr_accounted);
+ }
+diff --git a/net/bluetooth/6lowpan.c b/net/bluetooth/6lowpan.c
+index 4febc82a7c76..52fb6d6d6d58 100644
+--- a/net/bluetooth/6lowpan.c
++++ b/net/bluetooth/6lowpan.c
+@@ -50,6 +50,7 @@ static bool enable_6lowpan;
+ /* We are listening for incoming connections via this channel
+ */
+ static struct l2cap_chan *listen_chan;
++static DEFINE_MUTEX(set_lock);
+
+ struct lowpan_peer {
+ struct list_head list;
+@@ -1070,12 +1071,14 @@ static void do_enable_set(struct work_struct *work)
+
+ enable_6lowpan = set_enable->flag;
+
++ mutex_lock(&set_lock);
+ if (listen_chan) {
+ l2cap_chan_close(listen_chan, 0);
+ l2cap_chan_put(listen_chan);
+ }
+
+ listen_chan = bt_6lowpan_listen();
++ mutex_unlock(&set_lock);
+
+ kfree(set_enable);
+ }
+@@ -1127,11 +1130,13 @@ static ssize_t lowpan_control_write(struct file *fp,
+ if (ret == -EINVAL)
+ return ret;
+
++ mutex_lock(&set_lock);
+ if (listen_chan) {
+ l2cap_chan_close(listen_chan, 0);
+ l2cap_chan_put(listen_chan);
+ listen_chan = NULL;
+ }
++ mutex_unlock(&set_lock);
+
+ if (conn) {
+ struct lowpan_peer *peer;
+diff --git a/net/bluetooth/hci_core.c b/net/bluetooth/hci_core.c
+index 2e7bc2da8371..c17e1a3e8218 100644
+--- a/net/bluetooth/hci_core.c
++++ b/net/bluetooth/hci_core.c
+@@ -605,7 +605,8 @@ static int hci_init3_req(struct hci_request *req, unsigned long opt)
+ if (hdev->commands[8] & 0x01)
+ hci_req_add(req, HCI_OP_READ_PAGE_SCAN_ACTIVITY, 0, NULL);
+
+- if (hdev->commands[18] & 0x04)
++ if (hdev->commands[18] & 0x04 &&
++ !test_bit(HCI_QUIRK_BROKEN_ERR_DATA_REPORTING, &hdev->quirks))
+ hci_req_add(req, HCI_OP_READ_DEF_ERR_DATA_REPORTING, 0, NULL);
+
+ /* Some older Broadcom based Bluetooth 1.2 controllers do not
+@@ -846,7 +847,8 @@ static int hci_init4_req(struct hci_request *req, unsigned long opt)
+ /* Set erroneous data reporting if supported to the wideband speech
+ * setting value
+ */
+- if (hdev->commands[18] & 0x08) {
++ if (hdev->commands[18] & 0x08 &&
++ !test_bit(HCI_QUIRK_BROKEN_ERR_DATA_REPORTING, &hdev->quirks)) {
+ bool enabled = hci_dev_test_flag(hdev,
+ HCI_WIDEBAND_SPEECH_ENABLED);
+
+@@ -3280,10 +3282,10 @@ static int hci_suspend_wait_event(struct hci_dev *hdev)
+ WAKE_COND, SUSPEND_NOTIFIER_TIMEOUT);
+
+ if (ret == 0) {
+- bt_dev_dbg(hdev, "Timed out waiting for suspend");
++ bt_dev_err(hdev, "Timed out waiting for suspend events");
+ for (i = 0; i < __SUSPEND_NUM_TASKS; ++i) {
+ if (test_bit(i, hdev->suspend_tasks))
+- bt_dev_dbg(hdev, "Bit %d is set", i);
++ bt_dev_err(hdev, "Suspend timeout bit: %d", i);
+ clear_bit(i, hdev->suspend_tasks);
+ }
+
+@@ -3349,12 +3351,15 @@ static int hci_suspend_notifier(struct notifier_block *nb, unsigned long action,
+ ret = hci_change_suspend_state(hdev, BT_RUNNING);
+ }
+
+- /* If suspend failed, restore it to running */
+- if (ret && action == PM_SUSPEND_PREPARE)
+- hci_change_suspend_state(hdev, BT_RUNNING);
+-
+ done:
+- return ret ? notifier_from_errno(-EBUSY) : NOTIFY_STOP;
++ /* We always allow suspend even if suspend preparation failed and
++ * attempt to recover in resume.
++ */
++ if (ret)
++ bt_dev_err(hdev, "Suspend notifier action (%lu) failed: %d",
++ action, ret);
++
++ return NOTIFY_STOP;
+ }
+
+ /* Alloc HCI device */
+@@ -3592,9 +3597,10 @@ void hci_unregister_dev(struct hci_dev *hdev)
+
+ cancel_work_sync(&hdev->power_on);
+
+- hci_dev_do_close(hdev);
+-
+ unregister_pm_notifier(&hdev->suspend_notifier);
++ cancel_work_sync(&hdev->suspend_prepare);
++
++ hci_dev_do_close(hdev);
+
+ if (!test_bit(HCI_INIT, &hdev->flags) &&
+ !hci_dev_test_flag(hdev, HCI_SETUP) &&
+diff --git a/net/core/sock.c b/net/core/sock.c
+index bc6fe4114374..7b0feeea61b6 100644
+--- a/net/core/sock.c
++++ b/net/core/sock.c
+@@ -3354,6 +3354,16 @@ static void sock_inuse_add(struct net *net, int val)
+ }
+ #endif
+
++static void tw_prot_cleanup(struct timewait_sock_ops *twsk_prot)
++{
++ if (!twsk_prot)
++ return;
++ kfree(twsk_prot->twsk_slab_name);
++ twsk_prot->twsk_slab_name = NULL;
++ kmem_cache_destroy(twsk_prot->twsk_slab);
++ twsk_prot->twsk_slab = NULL;
++}
++
+ static void req_prot_cleanup(struct request_sock_ops *rsk_prot)
+ {
+ if (!rsk_prot)
+@@ -3424,7 +3434,7 @@ int proto_register(struct proto *prot, int alloc_slab)
+ prot->slab_flags,
+ NULL);
+ if (prot->twsk_prot->twsk_slab == NULL)
+- goto out_free_timewait_sock_slab_name;
++ goto out_free_timewait_sock_slab;
+ }
+ }
+
+@@ -3432,15 +3442,15 @@ int proto_register(struct proto *prot, int alloc_slab)
+ ret = assign_proto_idx(prot);
+ if (ret) {
+ mutex_unlock(&proto_list_mutex);
+- goto out_free_timewait_sock_slab_name;
++ goto out_free_timewait_sock_slab;
+ }
+ list_add(&prot->node, &proto_list);
+ mutex_unlock(&proto_list_mutex);
+ return ret;
+
+-out_free_timewait_sock_slab_name:
++out_free_timewait_sock_slab:
+ if (alloc_slab && prot->twsk_prot)
+- kfree(prot->twsk_prot->twsk_slab_name);
++ tw_prot_cleanup(prot->twsk_prot);
+ out_free_request_sock_slab:
+ if (alloc_slab) {
+ req_prot_cleanup(prot->rsk_prot);
+@@ -3464,12 +3474,7 @@ void proto_unregister(struct proto *prot)
+ prot->slab = NULL;
+
+ req_prot_cleanup(prot->rsk_prot);
+-
+- if (prot->twsk_prot != NULL && prot->twsk_prot->twsk_slab != NULL) {
+- kmem_cache_destroy(prot->twsk_prot->twsk_slab);
+- kfree(prot->twsk_prot->twsk_slab_name);
+- prot->twsk_prot->twsk_slab = NULL;
+- }
++ tw_prot_cleanup(prot->twsk_prot);
+ }
+ EXPORT_SYMBOL(proto_unregister);
+
+diff --git a/net/ipv4/inet_connection_sock.c b/net/ipv4/inet_connection_sock.c
+index 65c29f2bd89f..98aa90a28691 100644
+--- a/net/ipv4/inet_connection_sock.c
++++ b/net/ipv4/inet_connection_sock.c
+@@ -296,6 +296,57 @@ static inline int sk_reuseport_match(struct inet_bind_bucket *tb,
+ ipv6_only_sock(sk), true, false);
+ }
+
++void inet_csk_update_fastreuse(struct inet_bind_bucket *tb,
++ struct sock *sk)
++{
++ kuid_t uid = sock_i_uid(sk);
++ bool reuse = sk->sk_reuse && sk->sk_state != TCP_LISTEN;
++
++ if (hlist_empty(&tb->owners)) {
++ tb->fastreuse = reuse;
++ if (sk->sk_reuseport) {
++ tb->fastreuseport = FASTREUSEPORT_ANY;
++ tb->fastuid = uid;
++ tb->fast_rcv_saddr = sk->sk_rcv_saddr;
++ tb->fast_ipv6_only = ipv6_only_sock(sk);
++ tb->fast_sk_family = sk->sk_family;
++#if IS_ENABLED(CONFIG_IPV6)
++ tb->fast_v6_rcv_saddr = sk->sk_v6_rcv_saddr;
++#endif
++ } else {
++ tb->fastreuseport = 0;
++ }
++ } else {
++ if (!reuse)
++ tb->fastreuse = 0;
++ if (sk->sk_reuseport) {
++ /* We didn't match or we don't have fastreuseport set on
++ * the tb, but we have sk_reuseport set on this socket
++ * and we know that there are no bind conflicts with
++ * this socket in this tb, so reset our tb's reuseport
++ * settings so that any subsequent sockets that match
++ * our current socket will be put on the fast path.
++ *
++ * If we reset we need to set FASTREUSEPORT_STRICT so we
++ * do extra checking for all subsequent sk_reuseport
++ * socks.
++ */
++ if (!sk_reuseport_match(tb, sk)) {
++ tb->fastreuseport = FASTREUSEPORT_STRICT;
++ tb->fastuid = uid;
++ tb->fast_rcv_saddr = sk->sk_rcv_saddr;
++ tb->fast_ipv6_only = ipv6_only_sock(sk);
++ tb->fast_sk_family = sk->sk_family;
++#if IS_ENABLED(CONFIG_IPV6)
++ tb->fast_v6_rcv_saddr = sk->sk_v6_rcv_saddr;
++#endif
++ }
++ } else {
++ tb->fastreuseport = 0;
++ }
++ }
++}
++
+ /* Obtain a reference to a local port for the given sock,
+ * if snum is zero it means select any available local port.
+ * We try to allocate an odd port (and leave even ports for connect())
+@@ -308,7 +359,6 @@ int inet_csk_get_port(struct sock *sk, unsigned short snum)
+ struct inet_bind_hashbucket *head;
+ struct net *net = sock_net(sk);
+ struct inet_bind_bucket *tb = NULL;
+- kuid_t uid = sock_i_uid(sk);
+ int l3mdev;
+
+ l3mdev = inet_sk_bound_l3mdev(sk);
+@@ -345,49 +395,8 @@ tb_found:
+ goto fail_unlock;
+ }
+ success:
+- if (hlist_empty(&tb->owners)) {
+- tb->fastreuse = reuse;
+- if (sk->sk_reuseport) {
+- tb->fastreuseport = FASTREUSEPORT_ANY;
+- tb->fastuid = uid;
+- tb->fast_rcv_saddr = sk->sk_rcv_saddr;
+- tb->fast_ipv6_only = ipv6_only_sock(sk);
+- tb->fast_sk_family = sk->sk_family;
+-#if IS_ENABLED(CONFIG_IPV6)
+- tb->fast_v6_rcv_saddr = sk->sk_v6_rcv_saddr;
+-#endif
+- } else {
+- tb->fastreuseport = 0;
+- }
+- } else {
+- if (!reuse)
+- tb->fastreuse = 0;
+- if (sk->sk_reuseport) {
+- /* We didn't match or we don't have fastreuseport set on
+- * the tb, but we have sk_reuseport set on this socket
+- * and we know that there are no bind conflicts with
+- * this socket in this tb, so reset our tb's reuseport
+- * settings so that any subsequent sockets that match
+- * our current socket will be put on the fast path.
+- *
+- * If we reset we need to set FASTREUSEPORT_STRICT so we
+- * do extra checking for all subsequent sk_reuseport
+- * socks.
+- */
+- if (!sk_reuseport_match(tb, sk)) {
+- tb->fastreuseport = FASTREUSEPORT_STRICT;
+- tb->fastuid = uid;
+- tb->fast_rcv_saddr = sk->sk_rcv_saddr;
+- tb->fast_ipv6_only = ipv6_only_sock(sk);
+- tb->fast_sk_family = sk->sk_family;
+-#if IS_ENABLED(CONFIG_IPV6)
+- tb->fast_v6_rcv_saddr = sk->sk_v6_rcv_saddr;
+-#endif
+- }
+- } else {
+- tb->fastreuseport = 0;
+- }
+- }
++ inet_csk_update_fastreuse(tb, sk);
++
+ if (!inet_csk(sk)->icsk_bind_hash)
+ inet_bind_hash(sk, tb, port);
+ WARN_ON(inet_csk(sk)->icsk_bind_hash != tb);
+diff --git a/net/ipv4/inet_hashtables.c b/net/ipv4/inet_hashtables.c
+index 2bbaaf0c7176..006a34b18537 100644
+--- a/net/ipv4/inet_hashtables.c
++++ b/net/ipv4/inet_hashtables.c
+@@ -163,6 +163,7 @@ int __inet_inherit_port(const struct sock *sk, struct sock *child)
+ return -ENOMEM;
+ }
+ }
++ inet_csk_update_fastreuse(tb, child);
+ }
+ inet_bind_hash(child, tb, port);
+ spin_unlock(&head->lock);
+diff --git a/net/ipv4/sysctl_net_ipv4.c b/net/ipv4/sysctl_net_ipv4.c
+index 81b267e990a1..e07c1b429b09 100644
+--- a/net/ipv4/sysctl_net_ipv4.c
++++ b/net/ipv4/sysctl_net_ipv4.c
+@@ -307,24 +307,16 @@ static int proc_tcp_fastopen_key(struct ctl_table *table, int write,
+ struct ctl_table tbl = { .maxlen = ((TCP_FASTOPEN_KEY_LENGTH *
+ 2 * TCP_FASTOPEN_KEY_MAX) +
+ (TCP_FASTOPEN_KEY_MAX * 5)) };
+- struct tcp_fastopen_context *ctx;
+- u32 user_key[TCP_FASTOPEN_KEY_MAX * 4];
+- __le32 key[TCP_FASTOPEN_KEY_MAX * 4];
++ u32 user_key[TCP_FASTOPEN_KEY_BUF_LENGTH / sizeof(u32)];
++ __le32 key[TCP_FASTOPEN_KEY_BUF_LENGTH / sizeof(__le32)];
+ char *backup_data;
+- int ret, i = 0, off = 0, n_keys = 0;
++ int ret, i = 0, off = 0, n_keys;
+
+ tbl.data = kmalloc(tbl.maxlen, GFP_KERNEL);
+ if (!tbl.data)
+ return -ENOMEM;
+
+- rcu_read_lock();
+- ctx = rcu_dereference(net->ipv4.tcp_fastopen_ctx);
+- if (ctx) {
+- n_keys = tcp_fastopen_context_len(ctx);
+- memcpy(&key[0], &ctx->key[0], TCP_FASTOPEN_KEY_LENGTH * n_keys);
+- }
+- rcu_read_unlock();
+-
++ n_keys = tcp_fastopen_get_cipher(net, NULL, (u64 *)key);
+ if (!n_keys) {
+ memset(&key[0], 0, TCP_FASTOPEN_KEY_LENGTH);
+ n_keys = 1;
+diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
+index eee18259a24e..4f11e68a4efa 100644
+--- a/net/ipv4/tcp.c
++++ b/net/ipv4/tcp.c
+@@ -3538,22 +3538,14 @@ static int do_tcp_getsockopt(struct sock *sk, int level,
+ return 0;
+
+ case TCP_FASTOPEN_KEY: {
+- __u8 key[TCP_FASTOPEN_KEY_BUF_LENGTH];
+- struct tcp_fastopen_context *ctx;
+- unsigned int key_len = 0;
++ u64 key[TCP_FASTOPEN_KEY_BUF_LENGTH / sizeof(u64)];
++ unsigned int key_len;
+
+ if (get_user(len, optlen))
+ return -EFAULT;
+
+- rcu_read_lock();
+- ctx = rcu_dereference(icsk->icsk_accept_queue.fastopenq.ctx);
+- if (ctx) {
+- key_len = tcp_fastopen_context_len(ctx) *
+- TCP_FASTOPEN_KEY_LENGTH;
+- memcpy(&key[0], &ctx->key[0], key_len);
+- }
+- rcu_read_unlock();
+-
++ key_len = tcp_fastopen_get_cipher(net, icsk, key) *
++ TCP_FASTOPEN_KEY_LENGTH;
+ len = min_t(unsigned int, len, key_len);
+ if (put_user(len, optlen))
+ return -EFAULT;
+diff --git a/net/ipv4/tcp_fastopen.c b/net/ipv4/tcp_fastopen.c
+index 19ad9586c720..1bb85821f1e6 100644
+--- a/net/ipv4/tcp_fastopen.c
++++ b/net/ipv4/tcp_fastopen.c
+@@ -108,6 +108,29 @@ out:
+ return err;
+ }
+
++int tcp_fastopen_get_cipher(struct net *net, struct inet_connection_sock *icsk,
++ u64 *key)
++{
++ struct tcp_fastopen_context *ctx;
++ int n_keys = 0, i;
++
++ rcu_read_lock();
++ if (icsk)
++ ctx = rcu_dereference(icsk->icsk_accept_queue.fastopenq.ctx);
++ else
++ ctx = rcu_dereference(net->ipv4.tcp_fastopen_ctx);
++ if (ctx) {
++ n_keys = tcp_fastopen_context_len(ctx);
++ for (i = 0; i < n_keys; i++) {
++ put_unaligned_le64(ctx->key[i].key[0], key + (i * 2));
++ put_unaligned_le64(ctx->key[i].key[1], key + (i * 2) + 1);
++ }
++ }
++ rcu_read_unlock();
++
++ return n_keys;
++}
++
+ static bool __tcp_fastopen_cookie_gen_cipher(struct request_sock *req,
+ struct sk_buff *syn,
+ const siphash_key_t *key,
+diff --git a/net/netfilter/ipvs/ip_vs_core.c b/net/netfilter/ipvs/ip_vs_core.c
+index aa6a603a2425..517f6a2ac15a 100644
+--- a/net/netfilter/ipvs/ip_vs_core.c
++++ b/net/netfilter/ipvs/ip_vs_core.c
+@@ -2066,14 +2066,14 @@ ip_vs_in(struct netns_ipvs *ipvs, unsigned int hooknum, struct sk_buff *skb, int
+
+ conn_reuse_mode = sysctl_conn_reuse_mode(ipvs);
+ if (conn_reuse_mode && !iph.fragoffs && is_new_conn(skb, &iph) && cp) {
+- bool uses_ct = false, resched = false;
++ bool old_ct = false, resched = false;
+
+ if (unlikely(sysctl_expire_nodest_conn(ipvs)) && cp->dest &&
+ unlikely(!atomic_read(&cp->dest->weight))) {
+ resched = true;
+- uses_ct = ip_vs_conn_uses_conntrack(cp, skb);
++ old_ct = ip_vs_conn_uses_old_conntrack(cp, skb);
+ } else if (is_new_conn_expected(cp, conn_reuse_mode)) {
+- uses_ct = ip_vs_conn_uses_conntrack(cp, skb);
++ old_ct = ip_vs_conn_uses_old_conntrack(cp, skb);
+ if (!atomic_read(&cp->n_control)) {
+ resched = true;
+ } else {
+@@ -2081,15 +2081,17 @@ ip_vs_in(struct netns_ipvs *ipvs, unsigned int hooknum, struct sk_buff *skb, int
+ * that uses conntrack while it is still
+ * referenced by controlled connection(s).
+ */
+- resched = !uses_ct;
++ resched = !old_ct;
+ }
+ }
+
+ if (resched) {
++ if (!old_ct)
++ cp->flags &= ~IP_VS_CONN_F_NFCT;
+ if (!atomic_read(&cp->n_control))
+ ip_vs_conn_expire_now(cp);
+ __ip_vs_conn_put(cp);
+- if (uses_ct)
++ if (old_ct)
+ return NF_DROP;
+ cp = NULL;
+ }
+diff --git a/net/netfilter/nft_meta.c b/net/netfilter/nft_meta.c
+index 951b6e87ed5d..7bc6537f3ccb 100644
+--- a/net/netfilter/nft_meta.c
++++ b/net/netfilter/nft_meta.c
+@@ -253,7 +253,7 @@ static bool nft_meta_get_eval_ifname(enum nft_meta_keys key, u32 *dest,
+ return false;
+ break;
+ case NFT_META_IIFGROUP:
+- if (!nft_meta_store_ifgroup(dest, nft_out(pkt)))
++ if (!nft_meta_store_ifgroup(dest, nft_in(pkt)))
+ return false;
+ break;
+ case NFT_META_OIFGROUP:
+diff --git a/net/nfc/rawsock.c b/net/nfc/rawsock.c
+index ba5ffd3badd3..b5c867fe3232 100644
+--- a/net/nfc/rawsock.c
++++ b/net/nfc/rawsock.c
+@@ -332,10 +332,13 @@ static int rawsock_create(struct net *net, struct socket *sock,
+ if ((sock->type != SOCK_SEQPACKET) && (sock->type != SOCK_RAW))
+ return -ESOCKTNOSUPPORT;
+
+- if (sock->type == SOCK_RAW)
++ if (sock->type == SOCK_RAW) {
++ if (!capable(CAP_NET_RAW))
++ return -EPERM;
+ sock->ops = &rawsock_raw_ops;
+- else
++ } else {
+ sock->ops = &rawsock_ops;
++ }
+
+ sk = sk_alloc(net, PF_NFC, GFP_ATOMIC, nfc_proto->proto, kern);
+ if (!sk)
+diff --git a/net/packet/af_packet.c b/net/packet/af_packet.c
+index 29bd405adbbd..301f41d4929b 100644
+--- a/net/packet/af_packet.c
++++ b/net/packet/af_packet.c
+@@ -942,6 +942,7 @@ static int prb_queue_frozen(struct tpacket_kbdq_core *pkc)
+ }
+
+ static void prb_clear_blk_fill_status(struct packet_ring_buffer *rb)
++ __releases(&pkc->blk_fill_in_prog_lock)
+ {
+ struct tpacket_kbdq_core *pkc = GET_PBDQC_FROM_RB(rb);
+ atomic_dec(&pkc->blk_fill_in_prog);
+@@ -989,6 +990,7 @@ static void prb_fill_curr_block(char *curr,
+ struct tpacket_kbdq_core *pkc,
+ struct tpacket_block_desc *pbd,
+ unsigned int len)
++ __acquires(&pkc->blk_fill_in_prog_lock)
+ {
+ struct tpacket3_hdr *ppd;
+
+@@ -2286,8 +2288,11 @@ static int tpacket_rcv(struct sk_buff *skb, struct net_device *dev,
+ if (do_vnet &&
+ virtio_net_hdr_from_skb(skb, h.raw + macoff -
+ sizeof(struct virtio_net_hdr),
+- vio_le(), true, 0))
++ vio_le(), true, 0)) {
++ if (po->tp_version == TPACKET_V3)
++ prb_clear_blk_fill_status(&po->rx_ring);
+ goto drop_n_account;
++ }
+
+ if (po->tp_version <= TPACKET_V2) {
+ packet_increment_rx_head(po, &po->rx_ring);
+@@ -2393,7 +2398,7 @@ static int tpacket_rcv(struct sk_buff *skb, struct net_device *dev,
+ __clear_bit(slot_id, po->rx_ring.rx_owner_map);
+ spin_unlock(&sk->sk_receive_queue.lock);
+ sk->sk_data_ready(sk);
+- } else {
++ } else if (po->tp_version == TPACKET_V3) {
+ prb_clear_blk_fill_status(&po->rx_ring);
+ }
+
+diff --git a/net/socket.c b/net/socket.c
+index 2dd739fba866..90e212410d37 100644
+--- a/net/socket.c
++++ b/net/socket.c
+@@ -500,7 +500,7 @@ static struct socket *sockfd_lookup_light(int fd, int *err, int *fput_needed)
+ if (f.file) {
+ sock = sock_from_file(f.file, err);
+ if (likely(sock)) {
+- *fput_needed = f.flags;
++ *fput_needed = f.flags & FDPUT_FPUT;
+ return sock;
+ }
+ fdput(f);
+diff --git a/net/sunrpc/auth_gss/gss_krb5_wrap.c b/net/sunrpc/auth_gss/gss_krb5_wrap.c
+index cf0fd170ac18..90b8329fef82 100644
+--- a/net/sunrpc/auth_gss/gss_krb5_wrap.c
++++ b/net/sunrpc/auth_gss/gss_krb5_wrap.c
+@@ -584,7 +584,7 @@ gss_unwrap_kerberos_v2(struct krb5_ctx *kctx, int offset, int len,
+ buf->head[0].iov_len);
+ memmove(ptr, ptr + GSS_KRB5_TOK_HDR_LEN + headskip, movelen);
+ buf->head[0].iov_len -= GSS_KRB5_TOK_HDR_LEN + headskip;
+- buf->len = len - GSS_KRB5_TOK_HDR_LEN + headskip;
++ buf->len = len - (GSS_KRB5_TOK_HDR_LEN + headskip);
+
+ /* Trim off the trailing "extra count" and checksum blob */
+ xdr_buf_trim(buf, ec + GSS_KRB5_TOK_HDR_LEN + tailskip);
+diff --git a/net/sunrpc/auth_gss/svcauth_gss.c b/net/sunrpc/auth_gss/svcauth_gss.c
+index 46027d0c903f..c28051f7d217 100644
+--- a/net/sunrpc/auth_gss/svcauth_gss.c
++++ b/net/sunrpc/auth_gss/svcauth_gss.c
+@@ -958,7 +958,6 @@ unwrap_priv_data(struct svc_rqst *rqstp, struct xdr_buf *buf, u32 seq, struct gs
+
+ maj_stat = gss_unwrap(ctx, 0, priv_len, buf);
+ pad = priv_len - buf->len;
+- buf->len -= pad;
+ /* The upper layers assume the buffer is aligned on 4-byte boundaries.
+ * In the krb5p case, at least, the data ends up offset, so we need to
+ * move it around. */
+diff --git a/net/sunrpc/xprtrdma/svc_rdma_rw.c b/net/sunrpc/xprtrdma/svc_rdma_rw.c
+index 23c2d3ce0dc9..e0a0ae39848c 100644
+--- a/net/sunrpc/xprtrdma/svc_rdma_rw.c
++++ b/net/sunrpc/xprtrdma/svc_rdma_rw.c
+@@ -678,7 +678,6 @@ static int svc_rdma_build_read_chunk(struct svc_rqst *rqstp,
+ struct svc_rdma_read_info *info,
+ __be32 *p)
+ {
+- unsigned int i;
+ int ret;
+
+ ret = -EINVAL;
+@@ -701,12 +700,6 @@ static int svc_rdma_build_read_chunk(struct svc_rqst *rqstp,
+ info->ri_chunklen += rs_length;
+ }
+
+- /* Pages under I/O have been copied to head->rc_pages.
+- * Prevent their premature release by svc_xprt_release() .
+- */
+- for (i = 0; i < info->ri_readctxt->rc_page_count; i++)
+- rqstp->rq_pages[i] = NULL;
+-
+ return ret;
+ }
+
+@@ -801,6 +794,26 @@ out:
+ return ret;
+ }
+
++/* Pages under I/O have been copied to head->rc_pages. Ensure they
++ * are not released by svc_xprt_release() until the I/O is complete.
++ *
++ * This has to be done after all Read WRs are constructed to properly
++ * handle a page that is part of I/O on behalf of two different RDMA
++ * segments.
++ *
++ * Do this only if I/O has been posted. Otherwise, we do indeed want
++ * svc_xprt_release() to clean things up properly.
++ */
++static void svc_rdma_save_io_pages(struct svc_rqst *rqstp,
++ const unsigned int start,
++ const unsigned int num_pages)
++{
++ unsigned int i;
++
++ for (i = start; i < num_pages + start; i++)
++ rqstp->rq_pages[i] = NULL;
++}
++
+ /**
+ * svc_rdma_recv_read_chunk - Pull a Read chunk from the client
+ * @rdma: controlling RDMA transport
+@@ -854,6 +867,7 @@ int svc_rdma_recv_read_chunk(struct svcxprt_rdma *rdma, struct svc_rqst *rqstp,
+ ret = svc_rdma_post_chunk_ctxt(&info->ri_cc);
+ if (ret < 0)
+ goto out_err;
++ svc_rdma_save_io_pages(rqstp, 0, head->rc_page_count);
+ return 0;
+
+ out_err:
+diff --git a/net/tls/tls_device.c b/net/tls/tls_device.c
+index a562ebaaa33c..0ad8b53a8ca4 100644
+--- a/net/tls/tls_device.c
++++ b/net/tls/tls_device.c
+@@ -561,7 +561,7 @@ int tls_device_sendpage(struct sock *sk, struct page *page,
+ {
+ struct tls_context *tls_ctx = tls_get_ctx(sk);
+ struct iov_iter msg_iter;
+- char *kaddr = kmap(page);
++ char *kaddr;
+ struct kvec iov;
+ int rc;
+
+@@ -576,6 +576,7 @@ int tls_device_sendpage(struct sock *sk, struct page *page,
+ goto out;
+ }
+
++ kaddr = kmap(page);
+ iov.iov_base = kaddr + offset;
+ iov.iov_len = size;
+ iov_iter_kvec(&msg_iter, WRITE, &iov, 1, size);
+diff --git a/net/vmw_vsock/af_vsock.c b/net/vmw_vsock/af_vsock.c
+index 626bf9044418..6cd0df1c5caf 100644
+--- a/net/vmw_vsock/af_vsock.c
++++ b/net/vmw_vsock/af_vsock.c
+@@ -1032,7 +1032,7 @@ static __poll_t vsock_poll(struct file *file, struct socket *sock,
+ }
+
+ /* Connected sockets that can produce data can be written. */
+- if (sk->sk_state == TCP_ESTABLISHED) {
++ if (transport && sk->sk_state == TCP_ESTABLISHED) {
+ if (!(sk->sk_shutdown & SEND_SHUTDOWN)) {
+ bool space_avail_now = false;
+ int ret = transport->notify_poll_out(
+diff --git a/samples/bpf/fds_example.c b/samples/bpf/fds_example.c
+index d5992f787232..59f45fef5110 100644
+--- a/samples/bpf/fds_example.c
++++ b/samples/bpf/fds_example.c
+@@ -30,6 +30,8 @@
+ #define BPF_M_MAP 1
+ #define BPF_M_PROG 2
+
++char bpf_log_buf[BPF_LOG_BUF_SIZE];
++
+ static void usage(void)
+ {
+ printf("Usage: fds_example [...]\n");
+@@ -57,7 +59,6 @@ static int bpf_prog_create(const char *object)
+ BPF_EXIT_INSN(),
+ };
+ size_t insns_cnt = sizeof(insns) / sizeof(struct bpf_insn);
+- char bpf_log_buf[BPF_LOG_BUF_SIZE];
+ struct bpf_object *obj;
+ int prog_fd;
+
+diff --git a/samples/bpf/map_perf_test_kern.c b/samples/bpf/map_perf_test_kern.c
+index 12e91ae64d4d..c9b31193ca12 100644
+--- a/samples/bpf/map_perf_test_kern.c
++++ b/samples/bpf/map_perf_test_kern.c
+@@ -11,6 +11,8 @@
+ #include <bpf/bpf_helpers.h>
+ #include "bpf_legacy.h"
+ #include <bpf/bpf_tracing.h>
++#include <bpf/bpf_core_read.h>
++#include "trace_common.h"
+
+ #define MAX_ENTRIES 1000
+ #define MAX_NR_CPUS 1024
+@@ -154,9 +156,10 @@ int stress_percpu_hmap_alloc(struct pt_regs *ctx)
+ return 0;
+ }
+
+-SEC("kprobe/sys_connect")
++SEC("kprobe/" SYSCALL(sys_connect))
+ int stress_lru_hmap_alloc(struct pt_regs *ctx)
+ {
++ struct pt_regs *real_regs = (struct pt_regs *)PT_REGS_PARM1_CORE(ctx);
+ 	char fmt[] = "Failed at stress_lru_hmap_alloc. ret:%d\n";
+ union {
+ u16 dst6[8];
+@@ -175,8 +178,8 @@ int stress_lru_hmap_alloc(struct pt_regs *ctx)
+ long val = 1;
+ u32 key = 0;
+
+- in6 = (struct sockaddr_in6 *)PT_REGS_PARM2(ctx);
+- addrlen = (int)PT_REGS_PARM3(ctx);
++ in6 = (struct sockaddr_in6 *)PT_REGS_PARM2_CORE(real_regs);
++ addrlen = (int)PT_REGS_PARM3_CORE(real_regs);
+
+ if (addrlen != sizeof(*in6))
+ return 0;
+diff --git a/samples/bpf/test_map_in_map_kern.c b/samples/bpf/test_map_in_map_kern.c
+index 6cee61e8ce9b..36a203e69064 100644
+--- a/samples/bpf/test_map_in_map_kern.c
++++ b/samples/bpf/test_map_in_map_kern.c
+@@ -13,6 +13,8 @@
+ #include <bpf/bpf_helpers.h>
+ #include "bpf_legacy.h"
+ #include <bpf/bpf_tracing.h>
++#include <bpf/bpf_core_read.h>
++#include "trace_common.h"
+
+ #define MAX_NR_PORTS 65536
+
+@@ -102,9 +104,10 @@ static __always_inline int do_inline_hash_lookup(void *inner_map, u32 port)
+ return result ? *result : -ENOENT;
+ }
+
+-SEC("kprobe/sys_connect")
++SEC("kprobe/" SYSCALL(sys_connect))
+ int trace_sys_connect(struct pt_regs *ctx)
+ {
++ struct pt_regs *real_regs = (struct pt_regs *)PT_REGS_PARM1_CORE(ctx);
+ struct sockaddr_in6 *in6;
+ u16 test_case, port, dst6[8];
+ int addrlen, ret, inline_ret, ret_key = 0;
+@@ -112,8 +115,8 @@ int trace_sys_connect(struct pt_regs *ctx)
+ void *outer_map, *inner_map;
+ bool inline_hash = false;
+
+- in6 = (struct sockaddr_in6 *)PT_REGS_PARM2(ctx);
+- addrlen = (int)PT_REGS_PARM3(ctx);
++ in6 = (struct sockaddr_in6 *)PT_REGS_PARM2_CORE(real_regs);
++ addrlen = (int)PT_REGS_PARM3_CORE(real_regs);
+
+ if (addrlen != sizeof(*in6))
+ return 0;
+diff --git a/samples/bpf/test_probe_write_user_kern.c b/samples/bpf/test_probe_write_user_kern.c
+index f033f36a13a3..fd651a65281e 100644
+--- a/samples/bpf/test_probe_write_user_kern.c
++++ b/samples/bpf/test_probe_write_user_kern.c
+@@ -10,6 +10,8 @@
+ #include <linux/version.h>
+ #include <bpf/bpf_helpers.h>
+ #include <bpf/bpf_tracing.h>
++#include <bpf/bpf_core_read.h>
++#include "trace_common.h"
+
+ struct bpf_map_def SEC("maps") dnat_map = {
+ .type = BPF_MAP_TYPE_HASH,
+@@ -26,13 +28,14 @@ struct bpf_map_def SEC("maps") dnat_map = {
+ * This example sits on a syscall, and the syscall ABI is relatively stable
+ * of course, across platforms, and over time, the ABI may change.
+ */
+-SEC("kprobe/sys_connect")
++SEC("kprobe/" SYSCALL(sys_connect))
+ int bpf_prog1(struct pt_regs *ctx)
+ {
++ struct pt_regs *real_regs = (struct pt_regs *)PT_REGS_PARM1_CORE(ctx);
++ void *sockaddr_arg = (void *)PT_REGS_PARM2_CORE(real_regs);
++ int sockaddr_len = (int)PT_REGS_PARM3_CORE(real_regs);
+ struct sockaddr_in new_addr, orig_addr = {};
+ struct sockaddr_in *mapped_addr;
+- void *sockaddr_arg = (void *)PT_REGS_PARM2(ctx);
+- int sockaddr_len = (int)PT_REGS_PARM3(ctx);
+
+ if (sockaddr_len > sizeof(orig_addr))
+ return 0;
+diff --git a/scripts/recordmcount.c b/scripts/recordmcount.c
+index 7225107a9aaf..e59022b3f125 100644
+--- a/scripts/recordmcount.c
++++ b/scripts/recordmcount.c
+@@ -434,6 +434,11 @@ static int arm_is_fake_mcount(Elf32_Rel const *rp)
+ return 1;
+ }
+
++static int arm64_is_fake_mcount(Elf64_Rel const *rp)
++{
++ return ELF64_R_TYPE(w(rp->r_info)) != R_AARCH64_CALL26;
++}
++
+ /* 64-bit EM_MIPS has weird ELF64_Rela.r_info.
+ * http://techpubs.sgi.com/library/manuals/4000/007-4658-001/pdf/007-4658-001.pdf
+ * We interpret Table 29 Relocation Operation (Elf64_Rel, Elf64_Rela) [p.40]
+@@ -547,6 +552,7 @@ static int do_file(char const *const fname)
+ make_nop = make_nop_arm64;
+ rel_type_nop = R_AARCH64_NONE;
+ ideal_nop = ideal_nop4_arm64;
++ is_fake_mcount64 = arm64_is_fake_mcount;
+ break;
+ case EM_IA_64: reltype = R_IA64_IMM64; break;
+ case EM_MIPS: /* reltype: e_class */ break;
+diff --git a/scripts/selinux/mdp/mdp.c b/scripts/selinux/mdp/mdp.c
+index 576d11a60417..6ceb88eb9b59 100644
+--- a/scripts/selinux/mdp/mdp.c
++++ b/scripts/selinux/mdp/mdp.c
+@@ -67,8 +67,14 @@ int main(int argc, char *argv[])
+
+ initial_sid_to_string_len = sizeof(initial_sid_to_string) / sizeof (char *);
+ /* print out the sids */
+- for (i = 1; i < initial_sid_to_string_len; i++)
+- fprintf(fout, "sid %s\n", initial_sid_to_string[i]);
++ for (i = 1; i < initial_sid_to_string_len; i++) {
++ const char *name = initial_sid_to_string[i];
++
++ if (name)
++ fprintf(fout, "sid %s\n", name);
++ else
++ fprintf(fout, "sid unused%d\n", i);
++ }
+ fprintf(fout, "\n");
+
+ /* print out the class permissions */
+@@ -126,9 +132,16 @@ int main(int argc, char *argv[])
+ #define OBJUSERROLETYPE "user_u:object_r:base_t"
+
+ /* default sids */
+- for (i = 1; i < initial_sid_to_string_len; i++)
+- fprintf(fout, "sid %s " SUBJUSERROLETYPE "%s\n",
+- initial_sid_to_string[i], mls ? ":" SYSTEMLOW : "");
++ for (i = 1; i < initial_sid_to_string_len; i++) {
++ const char *name = initial_sid_to_string[i];
++
++ if (name)
++ fprintf(fout, "sid %s ", name);
++ else
++ fprintf(fout, "sid unused%d\n", i);
++ fprintf(fout, SUBJUSERROLETYPE "%s\n",
++ mls ? ":" SYSTEMLOW : "");
++ }
+ fprintf(fout, "\n");
+
+ #define FS_USE(behavior, fstype) \
+diff --git a/security/integrity/ima/ima.h b/security/integrity/ima/ima.h
+index 495e28bd488e..04c246b2b767 100644
+--- a/security/integrity/ima/ima.h
++++ b/security/integrity/ima/ima.h
+@@ -400,6 +400,7 @@ static inline void ima_free_modsig(struct modsig *modsig)
+ #ifdef CONFIG_IMA_LSM_RULES
+
+ #define security_filter_rule_init security_audit_rule_init
++#define security_filter_rule_free security_audit_rule_free
+ #define security_filter_rule_match security_audit_rule_match
+
+ #else
+@@ -410,6 +411,10 @@ static inline int security_filter_rule_init(u32 field, u32 op, char *rulestr,
+ return -EINVAL;
+ }
+
++static inline void security_filter_rule_free(void *lsmrule)
++{
++}
++
+ static inline int security_filter_rule_match(u32 secid, u32 field, u32 op,
+ void *lsmrule)
+ {
+diff --git a/security/integrity/ima/ima_policy.c b/security/integrity/ima/ima_policy.c
+index e493063a3c34..3e3e568c8130 100644
+--- a/security/integrity/ima/ima_policy.c
++++ b/security/integrity/ima/ima_policy.c
+@@ -258,9 +258,24 @@ static void ima_lsm_free_rule(struct ima_rule_entry *entry)
+ int i;
+
+ for (i = 0; i < MAX_LSM_RULES; i++) {
+- kfree(entry->lsm[i].rule);
++ security_filter_rule_free(entry->lsm[i].rule);
+ kfree(entry->lsm[i].args_p);
+ }
++}
++
++static void ima_free_rule(struct ima_rule_entry *entry)
++{
++ if (!entry)
++ return;
++
++ /*
++ * entry->template->fields may be allocated in ima_parse_rule() but that
++ * reference is owned by the corresponding ima_template_desc element in
++ * the defined_templates list and cannot be freed here
++ */
++ kfree(entry->fsname);
++ kfree(entry->keyrings);
++ ima_lsm_free_rule(entry);
+ kfree(entry);
+ }
+
+@@ -302,6 +317,7 @@ static struct ima_rule_entry *ima_lsm_copy_rule(struct ima_rule_entry *entry)
+
+ out_err:
+ ima_lsm_free_rule(nentry);
++ kfree(nentry);
+ return NULL;
+ }
+
+@@ -315,11 +331,29 @@ static int ima_lsm_update_rule(struct ima_rule_entry *entry)
+
+ list_replace_rcu(&entry->list, &nentry->list);
+ synchronize_rcu();
++ /*
++ * ima_lsm_copy_rule() shallow copied all references, except for the
++ * LSM references, from entry to nentry so we only want to free the LSM
++ * references and the entry itself. All other memory references will now
++ * be owned by nentry.
++ */
+ ima_lsm_free_rule(entry);
++ kfree(entry);
+
+ return 0;
+ }
+
++static bool ima_rule_contains_lsm_cond(struct ima_rule_entry *entry)
++{
++ int i;
++
++ for (i = 0; i < MAX_LSM_RULES; i++)
++ if (entry->lsm[i].args_p)
++ return true;
++
++ return false;
++}
++
+ /*
+ * The LSM policy can be reloaded, leaving the IMA LSM based rules referring
+ * to the old, stale LSM policy. Update the IMA LSM based rules to reflect
+@@ -890,6 +924,7 @@ static int ima_lsm_rule_init(struct ima_rule_entry *entry,
+
+ if (ima_rules == &ima_default_rules) {
+ kfree(entry->lsm[lsm_rule].args_p);
++ entry->lsm[lsm_rule].args_p = NULL;
+ result = -EINVAL;
+ } else
+ result = 0;
+@@ -949,6 +984,60 @@ static void check_template_modsig(const struct ima_template_desc *template)
+ #undef MSG
+ }
+
++static bool ima_validate_rule(struct ima_rule_entry *entry)
++{
++ /* Ensure that the action is set */
++ if (entry->action == UNKNOWN)
++ return false;
++
++ /*
++ * Ensure that the hook function is compatible with the other
++ * components of the rule
++ */
++ switch (entry->func) {
++ case NONE:
++ case FILE_CHECK:
++ case MMAP_CHECK:
++ case BPRM_CHECK:
++ case CREDS_CHECK:
++ case POST_SETATTR:
++ case MODULE_CHECK:
++ case FIRMWARE_CHECK:
++ case KEXEC_KERNEL_CHECK:
++ case KEXEC_INITRAMFS_CHECK:
++ case POLICY_CHECK:
++ /* Validation of these hook functions is in ima_parse_rule() */
++ break;
++ case KEXEC_CMDLINE:
++ if (entry->action & ~(MEASURE | DONT_MEASURE))
++ return false;
++
++ if (entry->flags & ~(IMA_FUNC | IMA_PCR))
++ return false;
++
++ if (ima_rule_contains_lsm_cond(entry))
++ return false;
++
++ break;
++ case KEY_CHECK:
++ if (entry->action & ~(MEASURE | DONT_MEASURE))
++ return false;
++
++ if (entry->flags & ~(IMA_FUNC | IMA_UID | IMA_PCR |
++ IMA_KEYRINGS))
++ return false;
++
++ if (ima_rule_contains_lsm_cond(entry))
++ return false;
++
++ break;
++ default:
++ return false;
++ }
++
++ return true;
++}
++
+ static int ima_parse_rule(char *rule, struct ima_rule_entry *entry)
+ {
+ struct audit_buffer *ab;
+@@ -1126,7 +1215,6 @@ static int ima_parse_rule(char *rule, struct ima_rule_entry *entry)
+ keyrings_len = strlen(args[0].from) + 1;
+
+ if ((entry->keyrings) ||
+- (entry->action != MEASURE) ||
+ (entry->func != KEY_CHECK) ||
+ (keyrings_len < 2)) {
+ result = -EINVAL;
+@@ -1332,7 +1420,7 @@ static int ima_parse_rule(char *rule, struct ima_rule_entry *entry)
+ break;
+ }
+ }
+- if (!result && (entry->action == UNKNOWN))
++ if (!result && !ima_validate_rule(entry))
+ result = -EINVAL;
+ else if (entry->action == APPRAISE)
+ temp_ima_appraise |= ima_appraise_flag(entry->func);
+@@ -1381,7 +1469,7 @@ ssize_t ima_parse_add_rule(char *rule)
+
+ result = ima_parse_rule(p, entry);
+ if (result) {
+- kfree(entry);
++ ima_free_rule(entry);
+ integrity_audit_msg(AUDIT_INTEGRITY_STATUS, NULL,
+ NULL, op, "invalid-policy", result,
+ audit_info);
+@@ -1402,15 +1490,11 @@ ssize_t ima_parse_add_rule(char *rule)
+ void ima_delete_rules(void)
+ {
+ struct ima_rule_entry *entry, *tmp;
+- int i;
+
+ temp_ima_appraise = 0;
+ list_for_each_entry_safe(entry, tmp, &ima_temp_rules, list) {
+- for (i = 0; i < MAX_LSM_RULES; i++)
+- kfree(entry->lsm[i].args_p);
+-
+ list_del(&entry->list);
+- kfree(entry);
++ ima_free_rule(entry);
+ }
+ }
+
+diff --git a/security/smack/smackfs.c b/security/smack/smackfs.c
+index 840a192e9337..9c4308077574 100644
+--- a/security/smack/smackfs.c
++++ b/security/smack/smackfs.c
+@@ -884,7 +884,7 @@ static ssize_t smk_set_cipso(struct file *file, const char __user *buf,
+ }
+
+ ret = sscanf(rule, "%d", &maplevel);
+- if (ret != 1 || maplevel > SMACK_CIPSO_MAXLEVEL)
++ if (ret != 1 || maplevel < 0 || maplevel > SMACK_CIPSO_MAXLEVEL)
+ goto out;
+
+ rule += SMK_DIGITLEN;
+@@ -905,6 +905,10 @@ static ssize_t smk_set_cipso(struct file *file, const char __user *buf,
+
+ for (i = 0; i < catlen; i++) {
+ rule += SMK_DIGITLEN;
++ if (rule > data + count) {
++ rc = -EOVERFLOW;
++ goto out;
++ }
+ ret = sscanf(rule, "%u", &cat);
+ if (ret != 1 || cat > SMACK_CIPSO_MAXCATNUM)
+ goto out;
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index b27d88c86067..313eecfb91b4 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -4391,6 +4391,7 @@ static void alc233_fixup_lenovo_line2_mic_hotkey(struct hda_codec *codec,
+ {
+ struct alc_spec *spec = codec->spec;
+
++ spec->micmute_led_polarity = 1;
+ alc_fixup_hp_gpio_led(codec, action, 0, 0x04);
+ if (action == HDA_FIXUP_ACT_PRE_PROBE) {
+ spec->init_amp = ALC_INIT_DEFAULT;
+diff --git a/sound/soc/codecs/hdac_hda.c b/sound/soc/codecs/hdac_hda.c
+index 473efe9ef998..b0370bb10c14 100644
+--- a/sound/soc/codecs/hdac_hda.c
++++ b/sound/soc/codecs/hdac_hda.c
+@@ -289,7 +289,6 @@ static int hdac_hda_dai_open(struct snd_pcm_substream *substream,
+ struct hdac_hda_priv *hda_pvt;
+ struct hda_pcm_stream *hda_stream;
+ struct hda_pcm *pcm;
+- int ret;
+
+ hda_pvt = snd_soc_component_get_drvdata(component);
+ pcm = snd_soc_find_pcm_from_dai(hda_pvt, dai);
+@@ -300,11 +299,7 @@ static int hdac_hda_dai_open(struct snd_pcm_substream *substream,
+
+ hda_stream = &pcm->stream[substream->stream];
+
+- ret = hda_stream->ops.open(hda_stream, &hda_pvt->codec, substream);
+- if (ret < 0)
+- snd_hda_codec_pcm_put(pcm);
+-
+- return ret;
++ return hda_stream->ops.open(hda_stream, &hda_pvt->codec, substream);
+ }
+
+ static void hdac_hda_dai_close(struct snd_pcm_substream *substream,
+diff --git a/sound/soc/codecs/tas2770.c b/sound/soc/codecs/tas2770.c
+index 54c8135fe43c..cf071121c839 100644
+--- a/sound/soc/codecs/tas2770.c
++++ b/sound/soc/codecs/tas2770.c
+@@ -758,8 +758,7 @@ static int tas2770_i2c_probe(struct i2c_client *client,
+ }
+ }
+
+- tas2770->reset_gpio = devm_gpiod_get_optional(tas2770->dev,
+- "reset-gpio",
++ tas2770->reset_gpio = devm_gpiod_get_optional(tas2770->dev, "reset",
+ GPIOD_OUT_HIGH);
+ if (IS_ERR(tas2770->reset_gpio)) {
+ if (PTR_ERR(tas2770->reset_gpio) == -EPROBE_DEFER) {
+diff --git a/sound/soc/fsl/fsl_sai.c b/sound/soc/fsl/fsl_sai.c
+index 9d436b0c5718..7031869a023a 100644
+--- a/sound/soc/fsl/fsl_sai.c
++++ b/sound/soc/fsl/fsl_sai.c
+@@ -680,10 +680,11 @@ static int fsl_sai_dai_probe(struct snd_soc_dai *cpu_dai)
+ regmap_write(sai->regmap, FSL_SAI_RCSR(ofs), 0);
+
+ regmap_update_bits(sai->regmap, FSL_SAI_TCR1(ofs),
+- FSL_SAI_CR1_RFW_MASK,
++ FSL_SAI_CR1_RFW_MASK(sai->soc_data->fifo_depth),
+ sai->soc_data->fifo_depth - FSL_SAI_MAXBURST_TX);
+ regmap_update_bits(sai->regmap, FSL_SAI_RCR1(ofs),
+- FSL_SAI_CR1_RFW_MASK, FSL_SAI_MAXBURST_RX - 1);
++ FSL_SAI_CR1_RFW_MASK(sai->soc_data->fifo_depth),
++ FSL_SAI_MAXBURST_RX - 1);
+
+ snd_soc_dai_init_dma_data(cpu_dai, &sai->dma_params_tx,
+ &sai->dma_params_rx);
+diff --git a/sound/soc/fsl/fsl_sai.h b/sound/soc/fsl/fsl_sai.h
+index 76b15deea80c..6aba7d28f5f3 100644
+--- a/sound/soc/fsl/fsl_sai.h
++++ b/sound/soc/fsl/fsl_sai.h
+@@ -94,7 +94,7 @@
+ #define FSL_SAI_CSR_FRDE BIT(0)
+
+ /* SAI Transmit and Receive Configuration 1 Register */
+-#define FSL_SAI_CR1_RFW_MASK 0x1f
++#define FSL_SAI_CR1_RFW_MASK(x) ((x) - 1)
+
+ /* SAI Transmit and Receive Configuration 2 Register */
+ #define FSL_SAI_CR2_SYNC BIT(30)
+diff --git a/sound/soc/intel/boards/bxt_rt298.c b/sound/soc/intel/boards/bxt_rt298.c
+index 7a4decf34191..c84c60df17db 100644
+--- a/sound/soc/intel/boards/bxt_rt298.c
++++ b/sound/soc/intel/boards/bxt_rt298.c
+@@ -565,6 +565,7 @@ static int bxt_card_late_probe(struct snd_soc_card *card)
+ /* broxton audio machine driver for SPT + RT298S */
+ static struct snd_soc_card broxton_rt298 = {
+ .name = "broxton-rt298",
++ .owner = THIS_MODULE,
+ .dai_link = broxton_rt298_dais,
+ .num_links = ARRAY_SIZE(broxton_rt298_dais),
+ .controls = broxton_controls,
+@@ -580,6 +581,7 @@ static struct snd_soc_card broxton_rt298 = {
+
+ static struct snd_soc_card geminilake_rt298 = {
+ .name = "geminilake-rt298",
++ .owner = THIS_MODULE,
+ .dai_link = broxton_rt298_dais,
+ .num_links = ARRAY_SIZE(broxton_rt298_dais),
+ .controls = broxton_controls,
+diff --git a/sound/soc/intel/boards/cml_rt1011_rt5682.c b/sound/soc/intel/boards/cml_rt1011_rt5682.c
+index 8167b2977e1d..7d811090e4fb 100644
+--- a/sound/soc/intel/boards/cml_rt1011_rt5682.c
++++ b/sound/soc/intel/boards/cml_rt1011_rt5682.c
+@@ -425,6 +425,7 @@ static struct snd_soc_codec_conf rt1011_conf[] = {
+ /* Cometlake audio machine driver for RT1011 and RT5682 */
+ static struct snd_soc_card snd_soc_card_cml = {
+ .name = "cml_rt1011_rt5682",
++ .owner = THIS_MODULE,
+ .dai_link = cml_rt1011_rt5682_dailink,
+ .num_links = ARRAY_SIZE(cml_rt1011_rt5682_dailink),
+ .codec_conf = rt1011_conf,
+diff --git a/sound/soc/intel/boards/sof_sdw.c b/sound/soc/intel/boards/sof_sdw.c
+index a64dc563b47e..61b5bced29b7 100644
+--- a/sound/soc/intel/boards/sof_sdw.c
++++ b/sound/soc/intel/boards/sof_sdw.c
+@@ -888,6 +888,7 @@ static const char sdw_card_long_name[] = "Intel Soundwire SOF";
+
+ static struct snd_soc_card card_sof_sdw = {
+ .name = "soundwire",
++ .owner = THIS_MODULE,
+ .late_probe = sof_sdw_hdmi_card_late_probe,
+ .codec_conf = codec_conf,
+ .num_configs = ARRAY_SIZE(codec_conf),
+diff --git a/sound/soc/meson/axg-card.c b/sound/soc/meson/axg-card.c
+index 89f7f64747cd..47f2d93224fe 100644
+--- a/sound/soc/meson/axg-card.c
++++ b/sound/soc/meson/axg-card.c
+@@ -116,7 +116,7 @@ static int axg_card_add_tdm_loopback(struct snd_soc_card *card,
+
+ lb = &card->dai_link[*index + 1];
+
+- lb->name = kasprintf(GFP_KERNEL, "%s-lb", pad->name);
++ lb->name = devm_kasprintf(card->dev, GFP_KERNEL, "%s-lb", pad->name);
+ if (!lb->name)
+ return -ENOMEM;
+
+diff --git a/sound/soc/meson/axg-tdm-formatter.c b/sound/soc/meson/axg-tdm-formatter.c
+index 358c8c0d861c..f7e8e9da68a0 100644
+--- a/sound/soc/meson/axg-tdm-formatter.c
++++ b/sound/soc/meson/axg-tdm-formatter.c
+@@ -70,7 +70,7 @@ EXPORT_SYMBOL_GPL(axg_tdm_formatter_set_channel_masks);
+ static int axg_tdm_formatter_enable(struct axg_tdm_formatter *formatter)
+ {
+ struct axg_tdm_stream *ts = formatter->stream;
+- bool invert = formatter->drv->quirks->invert_sclk;
++ bool invert;
+ int ret;
+
+ /* Do nothing if the formatter is already enabled */
+@@ -96,11 +96,12 @@ static int axg_tdm_formatter_enable(struct axg_tdm_formatter *formatter)
+ return ret;
+
+ /*
+- * If sclk is inverted, invert it back and provide the inversion
+- * required by the formatter
++ * If sclk is inverted, it means the bit should be latched on the
++ * rising edge which is what our HW expects. If not, we need to
++ * invert it before the formatter.
+ */
+- invert ^= axg_tdm_sclk_invert(ts->iface->fmt);
+- ret = clk_set_phase(formatter->sclk, invert ? 180 : 0);
++ invert = axg_tdm_sclk_invert(ts->iface->fmt);
++ ret = clk_set_phase(formatter->sclk, invert ? 0 : 180);
+ if (ret)
+ return ret;
+
+diff --git a/sound/soc/meson/axg-tdm-formatter.h b/sound/soc/meson/axg-tdm-formatter.h
+index 9ef98e955cb2..a1f0dcc0ff13 100644
+--- a/sound/soc/meson/axg-tdm-formatter.h
++++ b/sound/soc/meson/axg-tdm-formatter.h
+@@ -16,7 +16,6 @@ struct snd_kcontrol;
+
+ struct axg_tdm_formatter_hw {
+ unsigned int skew_offset;
+- bool invert_sclk;
+ };
+
+ struct axg_tdm_formatter_ops {
+diff --git a/sound/soc/meson/axg-tdm-interface.c b/sound/soc/meson/axg-tdm-interface.c
+index d51f3344be7c..e25336f73912 100644
+--- a/sound/soc/meson/axg-tdm-interface.c
++++ b/sound/soc/meson/axg-tdm-interface.c
+@@ -119,18 +119,25 @@ static int axg_tdm_iface_set_fmt(struct snd_soc_dai *dai, unsigned int fmt)
+ {
+ struct axg_tdm_iface *iface = snd_soc_dai_get_drvdata(dai);
+
+- /* These modes are not supported */
+- if (fmt & (SND_SOC_DAIFMT_CBS_CFM | SND_SOC_DAIFMT_CBM_CFS)) {
++ switch (fmt & SND_SOC_DAIFMT_MASTER_MASK) {
++ case SND_SOC_DAIFMT_CBS_CFS:
++ if (!iface->mclk) {
++ dev_err(dai->dev, "cpu clock master: mclk missing\n");
++ return -ENODEV;
++ }
++ break;
++
++ case SND_SOC_DAIFMT_CBM_CFM:
++ break;
++
++ case SND_SOC_DAIFMT_CBS_CFM:
++ case SND_SOC_DAIFMT_CBM_CFS:
+ dev_err(dai->dev, "only CBS_CFS and CBM_CFM are supported\n");
++ /* Fall-through */
++ default:
+ return -EINVAL;
+ }
+
+- /* If the TDM interface is the clock master, it requires mclk */
+- if (!iface->mclk && (fmt & SND_SOC_DAIFMT_CBS_CFS)) {
+- dev_err(dai->dev, "cpu clock master: mclk missing\n");
+- return -ENODEV;
+- }
+-
+ iface->fmt = fmt;
+ return 0;
+ }
+@@ -319,7 +326,8 @@ static int axg_tdm_iface_hw_params(struct snd_pcm_substream *substream,
+ if (ret)
+ return ret;
+
+- if (iface->fmt & SND_SOC_DAIFMT_CBS_CFS) {
++ if ((iface->fmt & SND_SOC_DAIFMT_MASTER_MASK) ==
++ SND_SOC_DAIFMT_CBS_CFS) {
+ ret = axg_tdm_iface_set_sclk(dai, params);
+ if (ret)
+ return ret;
+diff --git a/sound/soc/meson/axg-tdmin.c b/sound/soc/meson/axg-tdmin.c
+index 973d4c02ef8d..88ed95ae886b 100644
+--- a/sound/soc/meson/axg-tdmin.c
++++ b/sound/soc/meson/axg-tdmin.c
+@@ -228,15 +228,29 @@ static const struct axg_tdm_formatter_driver axg_tdmin_drv = {
+ .regmap_cfg = &axg_tdmin_regmap_cfg,
+ .ops = &axg_tdmin_ops,
+ .quirks = &(const struct axg_tdm_formatter_hw) {
+- .invert_sclk = false,
+ .skew_offset = 2,
+ },
+ };
+
++static const struct axg_tdm_formatter_driver g12a_tdmin_drv = {
++ .component_drv = &axg_tdmin_component_drv,
++ .regmap_cfg = &axg_tdmin_regmap_cfg,
++ .ops = &axg_tdmin_ops,
++ .quirks = &(const struct axg_tdm_formatter_hw) {
++ .skew_offset = 3,
++ },
++};
++
+ static const struct of_device_id axg_tdmin_of_match[] = {
+ {
+ .compatible = "amlogic,axg-tdmin",
+ .data = &axg_tdmin_drv,
++ }, {
++ .compatible = "amlogic,g12a-tdmin",
++ .data = &g12a_tdmin_drv,
++ }, {
++ .compatible = "amlogic,sm1-tdmin",
++ .data = &g12a_tdmin_drv,
+ }, {}
+ };
+ MODULE_DEVICE_TABLE(of, axg_tdmin_of_match);
+diff --git a/sound/soc/meson/axg-tdmout.c b/sound/soc/meson/axg-tdmout.c
+index 418ec314b37d..3ceabddae629 100644
+--- a/sound/soc/meson/axg-tdmout.c
++++ b/sound/soc/meson/axg-tdmout.c
+@@ -238,7 +238,6 @@ static const struct axg_tdm_formatter_driver axg_tdmout_drv = {
+ .regmap_cfg = &axg_tdmout_regmap_cfg,
+ .ops = &axg_tdmout_ops,
+ .quirks = &(const struct axg_tdm_formatter_hw) {
+- .invert_sclk = true,
+ .skew_offset = 1,
+ },
+ };
+@@ -248,7 +247,6 @@ static const struct axg_tdm_formatter_driver g12a_tdmout_drv = {
+ .regmap_cfg = &axg_tdmout_regmap_cfg,
+ .ops = &axg_tdmout_ops,
+ .quirks = &(const struct axg_tdm_formatter_hw) {
+- .invert_sclk = true,
+ .skew_offset = 2,
+ },
+ };
+@@ -309,7 +307,6 @@ static const struct axg_tdm_formatter_driver sm1_tdmout_drv = {
+ .regmap_cfg = &axg_tdmout_regmap_cfg,
+ .ops = &axg_tdmout_ops,
+ .quirks = &(const struct axg_tdm_formatter_hw) {
+- .invert_sclk = true,
+ .skew_offset = 2,
+ },
+ };
+diff --git a/sound/soc/soc-core.c b/sound/soc/soc-core.c
+index e5433e8fcf19..b5c4473f1e49 100644
+--- a/sound/soc/soc-core.c
++++ b/sound/soc/soc-core.c
+@@ -443,7 +443,6 @@ static struct snd_soc_pcm_runtime *soc_new_pcm_runtime(
+
+ dev->parent = card->dev;
+ dev->release = soc_release_rtd_dev;
+- dev->groups = soc_dev_attr_groups;
+
+ dev_set_name(dev, "%s", dai_link->name);
+
+@@ -502,6 +501,10 @@ static struct snd_soc_pcm_runtime *soc_new_pcm_runtime(
+ rtd->num = card->num_rtd;
+ card->num_rtd++;
+
++ ret = device_add_groups(dev, soc_dev_attr_groups);
++ if (ret < 0)
++ goto free_rtd;
++
+ return rtd;
+
+ free_rtd:
+diff --git a/sound/soc/sof/nocodec.c b/sound/soc/sof/nocodec.c
+index 71cf5f9db79d..849c3bcdca9e 100644
+--- a/sound/soc/sof/nocodec.c
++++ b/sound/soc/sof/nocodec.c
+@@ -14,6 +14,7 @@
+
+ static struct snd_soc_card sof_nocodec_card = {
+ .name = "nocodec", /* the sof- prefix is added by the core */
++ .owner = THIS_MODULE
+ };
+
+ static int sof_nocodec_bes_setup(struct device *dev,
+diff --git a/sound/usb/card.h b/sound/usb/card.h
+index f39f23e3525d..d8ec5caf464d 100644
+--- a/sound/usb/card.h
++++ b/sound/usb/card.h
+@@ -133,6 +133,7 @@ struct snd_usb_substream {
+ unsigned int tx_length_quirk:1; /* add length specifier to transfers */
+ unsigned int fmt_type; /* USB audio format type (1-3) */
+ unsigned int pkt_offset_adj; /* Bytes to drop from beginning of packets (for non-compliant devices) */
++ unsigned int stream_offset_adj; /* Bytes to drop from beginning of stream (for non-compliant devices) */
+
+ unsigned int running: 1; /* running status */
+
+diff --git a/sound/usb/mixer_quirks.c b/sound/usb/mixer_quirks.c
+index 260607144f56..fed610d4162d 100644
+--- a/sound/usb/mixer_quirks.c
++++ b/sound/usb/mixer_quirks.c
+@@ -185,6 +185,7 @@ static const struct rc_config {
+ { USB_ID(0x041e, 0x3042), 0, 1, 1, 1, 1, 0x000d }, /* Usb X-Fi S51 */
+ { USB_ID(0x041e, 0x30df), 0, 1, 1, 1, 1, 0x000d }, /* Usb X-Fi S51 Pro */
+ { USB_ID(0x041e, 0x3237), 0, 1, 1, 1, 1, 0x000d }, /* Usb X-Fi S51 Pro */
++ { USB_ID(0x041e, 0x3263), 0, 1, 1, 1, 1, 0x000d }, /* Usb X-Fi S51 Pro */
+ { USB_ID(0x041e, 0x3048), 2, 2, 6, 6, 2, 0x6e91 }, /* Toshiba SB0500 */
+ };
+
+diff --git a/sound/usb/pcm.c b/sound/usb/pcm.c
+index 0247162a9fbf..9538684c9b4e 100644
+--- a/sound/usb/pcm.c
++++ b/sound/usb/pcm.c
+@@ -1416,6 +1416,12 @@ static void retire_capture_urb(struct snd_usb_substream *subs,
+ // continue;
+ }
+ bytes = urb->iso_frame_desc[i].actual_length;
++ if (subs->stream_offset_adj > 0) {
++ unsigned int adj = min(subs->stream_offset_adj, bytes);
++ cp += adj;
++ bytes -= adj;
++ subs->stream_offset_adj -= adj;
++ }
+ frames = bytes / stride;
+ if (!subs->txfr_quirk)
+ bytes = frames * stride;
+diff --git a/sound/usb/quirks-table.h b/sound/usb/quirks-table.h
+index 562179492a33..1573229d8cf4 100644
+--- a/sound/usb/quirks-table.h
++++ b/sound/usb/quirks-table.h
+@@ -3570,6 +3570,62 @@ AU0828_DEVICE(0x2040, 0x7270, "Hauppauge", "HVR-950Q"),
+ }
+ }
+ },
++{
++ /*
++ * PIONEER DJ DDJ-RB
++ * PCM is 4 channels out, 2 dummy channels in @ 44.1 fixed
++ * The feedback for the output is the dummy input.
++ */
++ USB_DEVICE_VENDOR_SPEC(0x2b73, 0x000e),
++ .driver_info = (unsigned long) &(const struct snd_usb_audio_quirk) {
++ .ifnum = QUIRK_ANY_INTERFACE,
++ .type = QUIRK_COMPOSITE,
++ .data = (const struct snd_usb_audio_quirk[]) {
++ {
++ .ifnum = 0,
++ .type = QUIRK_AUDIO_FIXED_ENDPOINT,
++ .data = &(const struct audioformat) {
++ .formats = SNDRV_PCM_FMTBIT_S24_3LE,
++ .channels = 4,
++ .iface = 0,
++ .altsetting = 1,
++ .altset_idx = 1,
++ .endpoint = 0x01,
++ .ep_attr = USB_ENDPOINT_XFER_ISOC|
++ USB_ENDPOINT_SYNC_ASYNC,
++ .rates = SNDRV_PCM_RATE_44100,
++ .rate_min = 44100,
++ .rate_max = 44100,
++ .nr_rates = 1,
++ .rate_table = (unsigned int[]) { 44100 }
++ }
++ },
++ {
++ .ifnum = 0,
++ .type = QUIRK_AUDIO_FIXED_ENDPOINT,
++ .data = &(const struct audioformat) {
++ .formats = SNDRV_PCM_FMTBIT_S24_3LE,
++ .channels = 2,
++ .iface = 0,
++ .altsetting = 1,
++ .altset_idx = 1,
++ .endpoint = 0x82,
++ .ep_attr = USB_ENDPOINT_XFER_ISOC|
++ USB_ENDPOINT_SYNC_ASYNC|
++ USB_ENDPOINT_USAGE_IMPLICIT_FB,
++ .rates = SNDRV_PCM_RATE_44100,
++ .rate_min = 44100,
++ .rate_max = 44100,
++ .nr_rates = 1,
++ .rate_table = (unsigned int[]) { 44100 }
++ }
++ },
++ {
++ .ifnum = -1
++ }
++ }
++ }
++},
+
+ #define ALC1220_VB_DESKTOP(vend, prod) { \
+ USB_DEVICE(vend, prod), \
+@@ -3623,7 +3679,13 @@ ALC1220_VB_DESKTOP(0x26ce, 0x0a01), /* Asrock TRX40 Creator */
+ * with.
+ */
+ {
+- USB_DEVICE(0x534d, 0x2109),
++ .match_flags = USB_DEVICE_ID_MATCH_DEVICE |
++ USB_DEVICE_ID_MATCH_INT_CLASS |
++ USB_DEVICE_ID_MATCH_INT_SUBCLASS,
++ .idVendor = 0x534d,
++ .idProduct = 0x2109,
++ .bInterfaceClass = USB_CLASS_AUDIO,
++ .bInterfaceSubClass = USB_SUBCLASS_AUDIOCONTROL,
+ .driver_info = (unsigned long) &(const struct snd_usb_audio_quirk) {
+ .vendor_name = "MacroSilicon",
+ .product_name = "MS2109",
+diff --git a/sound/usb/quirks.c b/sound/usb/quirks.c
+index d7d900ebcf37..497a7249f258 100644
+--- a/sound/usb/quirks.c
++++ b/sound/usb/quirks.c
+@@ -1468,6 +1468,9 @@ void snd_usb_set_format_quirk(struct snd_usb_substream *subs,
+ case USB_ID(0x041e, 0x3f19): /* E-Mu 0204 USB */
+ set_format_emu_quirk(subs, fmt);
+ break;
++ case USB_ID(0x534d, 0x2109): /* MacroSilicon MS2109 */
++ subs->stream_offset_adj = 2;
++ break;
+ }
+ }
+
+diff --git a/sound/usb/stream.c b/sound/usb/stream.c
+index 15296f2c902c..e03ff2a7a73f 100644
+--- a/sound/usb/stream.c
++++ b/sound/usb/stream.c
+@@ -94,6 +94,7 @@ static void snd_usb_init_substream(struct snd_usb_stream *as,
+ subs->tx_length_quirk = as->chip->tx_length_quirk;
+ subs->speed = snd_usb_get_speed(subs->dev);
+ subs->pkt_offset_adj = 0;
++ subs->stream_offset_adj = 0;
+
+ snd_usb_set_pcm_ops(as->pcm, stream);
+
+diff --git a/tools/bpf/bpftool/btf.c b/tools/bpf/bpftool/btf.c
+index bcaf55b59498..81a77475bea6 100644
+--- a/tools/bpf/bpftool/btf.c
++++ b/tools/bpf/bpftool/btf.c
+@@ -597,7 +597,7 @@ static int do_dump(int argc, char **argv)
+ goto done;
+ }
+ if (!btf) {
+- err = ENOENT;
++ err = -ENOENT;
+ p_err("can't find btf with ID (%u)", btf_id);
+ goto done;
+ }
+diff --git a/tools/bpf/bpftool/gen.c b/tools/bpf/bpftool/gen.c
+index f5960b48c861..5ff951e08c74 100644
+--- a/tools/bpf/bpftool/gen.c
++++ b/tools/bpf/bpftool/gen.c
+@@ -307,8 +307,11 @@ static int do_skeleton(int argc, char **argv)
+ opts.object_name = obj_name;
+ obj = bpf_object__open_mem(obj_data, file_sz, &opts);
+ if (IS_ERR(obj)) {
++ char err_buf[256];
++
++ libbpf_strerror(PTR_ERR(obj), err_buf, sizeof(err_buf));
++ p_err("failed to open BPF object file: %s", err_buf);
+ obj = NULL;
+- p_err("failed to open BPF object file: %ld", PTR_ERR(obj));
+ goto out;
+ }
+
+diff --git a/tools/build/Build.include b/tools/build/Build.include
+index 9ec01f4454f9..585486e40995 100644
+--- a/tools/build/Build.include
++++ b/tools/build/Build.include
+@@ -74,7 +74,8 @@ dep-cmd = $(if $(wildcard $(fixdep)),
+ # dependencies in the cmd file
+ if_changed_dep = $(if $(strip $(any-prereq) $(arg-check)), \
+ @set -e; \
+- $(echo-cmd) $(cmd_$(1)) && $(dep-cmd))
++ $(echo-cmd) $(cmd_$(1)); \
++ $(dep-cmd))
+
+ # if_changed - execute command if any prerequisite is newer than
+ # target, or command line has changed
+diff --git a/tools/lib/bpf/bpf_tracing.h b/tools/lib/bpf/bpf_tracing.h
+index 48a9c7c69ef1..e6ec7cb4aa4a 100644
+--- a/tools/lib/bpf/bpf_tracing.h
++++ b/tools/lib/bpf/bpf_tracing.h
+@@ -215,7 +215,7 @@ struct pt_regs;
+ #define PT_REGS_PARM5(x) ((x)->regs[8])
+ #define PT_REGS_RET(x) ((x)->regs[31])
+ #define PT_REGS_FP(x) ((x)->regs[30]) /* Works only with CONFIG_FRAME_POINTER */
+-#define PT_REGS_RC(x) ((x)->regs[1])
++#define PT_REGS_RC(x) ((x)->regs[2])
+ #define PT_REGS_SP(x) ((x)->regs[29])
+ #define PT_REGS_IP(x) ((x)->cp0_epc)
+
+@@ -226,7 +226,7 @@ struct pt_regs;
+ #define PT_REGS_PARM5_CORE(x) BPF_CORE_READ((x), regs[8])
+ #define PT_REGS_RET_CORE(x) BPF_CORE_READ((x), regs[31])
+ #define PT_REGS_FP_CORE(x) BPF_CORE_READ((x), regs[30])
+-#define PT_REGS_RC_CORE(x) BPF_CORE_READ((x), regs[1])
++#define PT_REGS_RC_CORE(x) BPF_CORE_READ((x), regs[2])
+ #define PT_REGS_SP_CORE(x) BPF_CORE_READ((x), regs[29])
+ #define PT_REGS_IP_CORE(x) BPF_CORE_READ((x), cp0_epc)
+
+diff --git a/tools/testing/kunit/kunit_kernel.py b/tools/testing/kunit/kunit_kernel.py
+index 63dbda2d029f..e20e2056cb38 100644
+--- a/tools/testing/kunit/kunit_kernel.py
++++ b/tools/testing/kunit/kunit_kernel.py
+@@ -34,7 +34,7 @@ class LinuxSourceTreeOperations(object):
+
+ def make_mrproper(self):
+ try:
+- subprocess.check_output(['make', 'mrproper'])
++ subprocess.check_output(['make', 'mrproper'], stderr=subprocess.STDOUT)
+ except OSError as e:
+ raise ConfigError('Could not call make command: ' + e)
+ except subprocess.CalledProcessError as e:
+@@ -47,7 +47,7 @@ class LinuxSourceTreeOperations(object):
+ if build_dir:
+ command += ['O=' + build_dir]
+ try:
+- subprocess.check_output(command, stderr=subprocess.PIPE)
++ subprocess.check_output(command, stderr=subprocess.STDOUT)
+ except OSError as e:
+ raise ConfigError('Could not call make command: ' + e)
+ except subprocess.CalledProcessError as e:
+@@ -77,7 +77,7 @@ class LinuxSourceTreeOperations(object):
+ if build_dir:
+ command += ['O=' + build_dir]
+ try:
+- subprocess.check_output(command)
++ subprocess.check_output(command, stderr=subprocess.STDOUT)
+ except OSError as e:
+ raise BuildError('Could not call execute make: ' + e)
+ except subprocess.CalledProcessError as e:
+diff --git a/tools/testing/selftests/lkdtm/run.sh b/tools/testing/selftests/lkdtm/run.sh
+index ee64ff8df8f4..8383eb89d88a 100755
+--- a/tools/testing/selftests/lkdtm/run.sh
++++ b/tools/testing/selftests/lkdtm/run.sh
+@@ -8,6 +8,7 @@
+ #
+ set -e
+ TRIGGER=/sys/kernel/debug/provoke-crash/DIRECT
++CLEAR_ONCE=/sys/kernel/debug/clear_warn_once
+ KSELFTEST_SKIP_TEST=4
+
+ # Verify we have LKDTM available in the kernel.
+@@ -67,6 +68,11 @@ cleanup() {
+ }
+ trap cleanup EXIT
+
++# Reset WARN_ONCE counters so we trip it each time this runs.
++if [ -w $CLEAR_ONCE ] ; then
++ echo 1 > $CLEAR_ONCE
++fi
++
+ # Save existing dmesg so we can detect new content below
+ dmesg > "$DMESG"
+
+diff --git a/tools/testing/selftests/lkdtm/tests.txt b/tools/testing/selftests/lkdtm/tests.txt
+index 92ca32143ae5..9d266e79c6a2 100644
+--- a/tools/testing/selftests/lkdtm/tests.txt
++++ b/tools/testing/selftests/lkdtm/tests.txt
+@@ -14,6 +14,7 @@ STACK_GUARD_PAGE_LEADING
+ STACK_GUARD_PAGE_TRAILING
+ UNSET_SMEP CR4 bits went missing
+ DOUBLE_FAULT
++CORRUPT_PAC
+ UNALIGNED_LOAD_STORE_WRITE
+ #OVERWRITE_ALLOCATION Corrupts memory on failure
+ #WRITE_AFTER_FREE Corrupts memory on failure
+diff --git a/tools/testing/selftests/powerpc/benchmarks/context_switch.c b/tools/testing/selftests/powerpc/benchmarks/context_switch.c
+index a2e8c9da7fa5..d50cc05df495 100644
+--- a/tools/testing/selftests/powerpc/benchmarks/context_switch.c
++++ b/tools/testing/selftests/powerpc/benchmarks/context_switch.c
+@@ -19,6 +19,7 @@
+ #include <limits.h>
+ #include <sys/time.h>
+ #include <sys/syscall.h>
++#include <sys/sysinfo.h>
+ #include <sys/types.h>
+ #include <sys/shm.h>
+ #include <linux/futex.h>
+@@ -104,8 +105,9 @@ static void start_thread_on(void *(*fn)(void *), void *arg, unsigned long cpu)
+
+ static void start_process_on(void *(*fn)(void *), void *arg, unsigned long cpu)
+ {
+- int pid;
+- cpu_set_t cpuset;
++ int pid, ncpus;
++ cpu_set_t *cpuset;
++ size_t size;
+
+ pid = fork();
+ if (pid == -1) {
+@@ -116,14 +118,23 @@ static void start_process_on(void *(*fn)(void *), void *arg, unsigned long cpu)
+ if (pid)
+ return;
+
+- CPU_ZERO(&cpuset);
+- CPU_SET(cpu, &cpuset);
++ ncpus = get_nprocs();
++ size = CPU_ALLOC_SIZE(ncpus);
++ cpuset = CPU_ALLOC(ncpus);
++ if (!cpuset) {
++ perror("malloc");
++ exit(1);
++ }
++ CPU_ZERO_S(size, cpuset);
++ CPU_SET_S(cpu, size, cpuset);
+
+- if (sched_setaffinity(0, sizeof(cpuset), &cpuset)) {
++ if (sched_setaffinity(0, size, cpuset)) {
+ perror("sched_setaffinity");
++ CPU_FREE(cpuset);
+ exit(1);
+ }
+
++ CPU_FREE(cpuset);
+ fn(arg);
+
+ exit(0);
+diff --git a/tools/testing/selftests/powerpc/eeh/eeh-functions.sh b/tools/testing/selftests/powerpc/eeh/eeh-functions.sh
+index f52ed92b53e7..00dc32c0ed75 100755
+--- a/tools/testing/selftests/powerpc/eeh/eeh-functions.sh
++++ b/tools/testing/selftests/powerpc/eeh/eeh-functions.sh
+@@ -5,12 +5,17 @@ pe_ok() {
+ local dev="$1"
+ local path="/sys/bus/pci/devices/$dev/eeh_pe_state"
+
+- if ! [ -e "$path" ] ; then
++ # if a driver doesn't support the error handling callbacks then the
++ # device is recovered by removing and re-probing it. This causes the
++ # sysfs directory to disappear so read the PE state once and squash
++ # any potential error messages
++ local eeh_state="$(cat $path 2>/dev/null)"
++ if [ -z "$eeh_state" ]; then
+ return 1;
+ fi
+
+- local fw_state="$(cut -d' ' -f1 < $path)"
+- local sw_state="$(cut -d' ' -f2 < $path)"
++ local fw_state="$(echo $eeh_state | cut -d' ' -f1)"
++ local sw_state="$(echo $eeh_state | cut -d' ' -f2)"
+
+ # If EEH_PE_ISOLATED or EEH_PE_RECOVERING are set then the PE is in an
+ # error state or being recovered. Either way, not ok.
+diff --git a/tools/testing/selftests/powerpc/utils.c b/tools/testing/selftests/powerpc/utils.c
+index 5ee0e98c4896..eb530e73e02c 100644
+--- a/tools/testing/selftests/powerpc/utils.c
++++ b/tools/testing/selftests/powerpc/utils.c
+@@ -16,6 +16,7 @@
+ #include <string.h>
+ #include <sys/ioctl.h>
+ #include <sys/stat.h>
++#include <sys/sysinfo.h>
+ #include <sys/types.h>
+ #include <sys/utsname.h>
+ #include <unistd.h>
+@@ -88,28 +89,40 @@ void *get_auxv_entry(int type)
+
+ int pick_online_cpu(void)
+ {
+- cpu_set_t mask;
+- int cpu;
++ int ncpus, cpu = -1;
++ cpu_set_t *mask;
++ size_t size;
++
++ ncpus = get_nprocs_conf();
++ size = CPU_ALLOC_SIZE(ncpus);
++ mask = CPU_ALLOC(ncpus);
++ if (!mask) {
++ perror("malloc");
++ return -1;
++ }
+
+- CPU_ZERO(&mask);
++ CPU_ZERO_S(size, mask);
+
+- if (sched_getaffinity(0, sizeof(mask), &mask)) {
++ if (sched_getaffinity(0, size, mask)) {
+ perror("sched_getaffinity");
+- return -1;
++ goto done;
+ }
+
+ /* We prefer a primary thread, but skip 0 */
+- for (cpu = 8; cpu < CPU_SETSIZE; cpu += 8)
+- if (CPU_ISSET(cpu, &mask))
+- return cpu;
++ for (cpu = 8; cpu < ncpus; cpu += 8)
++ if (CPU_ISSET_S(cpu, size, mask))
++ goto done;
+
+ /* Search for anything, but in reverse */
+- for (cpu = CPU_SETSIZE - 1; cpu >= 0; cpu--)
+- if (CPU_ISSET(cpu, &mask))
+- return cpu;
++ for (cpu = ncpus - 1; cpu >= 0; cpu--)
++ if (CPU_ISSET_S(cpu, size, mask))
++ goto done;
+
+ printf("No cpus in affinity mask?!\n");
+- return -1;
++
++done:
++ CPU_FREE(mask);
++ return cpu;
+ }
+
+ bool is_ppc64le(void)
+diff --git a/tools/testing/selftests/seccomp/seccomp_bpf.c b/tools/testing/selftests/seccomp/seccomp_bpf.c
+index c0aa46ce14f6..c84c7b50331c 100644
+--- a/tools/testing/selftests/seccomp/seccomp_bpf.c
++++ b/tools/testing/selftests/seccomp/seccomp_bpf.c
+@@ -180,7 +180,7 @@ struct seccomp_metadata {
+ #define SECCOMP_IOCTL_NOTIF_RECV SECCOMP_IOWR(0, struct seccomp_notif)
+ #define SECCOMP_IOCTL_NOTIF_SEND SECCOMP_IOWR(1, \
+ struct seccomp_notif_resp)
+-#define SECCOMP_IOCTL_NOTIF_ID_VALID SECCOMP_IOR(2, __u64)
++#define SECCOMP_IOCTL_NOTIF_ID_VALID SECCOMP_IOW(2, __u64)
+
+ struct seccomp_notif {
+ __u64 id;
* [gentoo-commits] proj/linux-patches:5.7 commit in: /
@ 2020-08-19 14:55 Mike Pagano
0 siblings, 0 replies; 25+ messages in thread
From: Mike Pagano @ 2020-08-19 14:55 UTC (permalink / raw
To: gentoo-commits
commit: 6a863c711c1ead870c516cb00e0e69e7a4a9f7b1
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Aug 19 14:55:19 2020 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Aug 19 14:55:19 2020 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=6a863c71
Remove redundant patch. See bug #738002
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 ----
2910_TVP5150-Fix-build-issue-by-selecting-REGMAP-I2C.patch | 10 ----------
2 files changed, 14 deletions(-)
diff --git a/0000_README b/0000_README
index 66f0380..1bfc2e2 100644
--- a/0000_README
+++ b/0000_README
@@ -127,10 +127,6 @@ Patch: 2900_tmp513-Fix-build-issue-by-selecting-CONFIG_REG.patch
From: https://bugs.gentoo.org/710790
Desc: tmp513 requies REGMAP_I2C to build. Select it by default in Kconfig. See bug #710790. Thanks to Phil Stracchino
-Patch: 2910_TVP5150-Fix-build-issue-by-selecting-REGMAP-I2C.patch
-From: https://bugs.gentoo.org/721096
-Desc: VIDEO_TVP5150 requies REGMAP_I2C to build. Select it by default in Kconfig. See bug #721096. Thanks to Max Steel
-
Patch: 2920_sign-file-patch-for-libressl.patch
From: https://bugs.gentoo.org/717166
Desc: sign-file: full functionality with modern LibreSSL
diff --git a/2910_TVP5150-Fix-build-issue-by-selecting-REGMAP-I2C.patch b/2910_TVP5150-Fix-build-issue-by-selecting-REGMAP-I2C.patch
deleted file mode 100644
index 1bc058e..0000000
--- a/2910_TVP5150-Fix-build-issue-by-selecting-REGMAP-I2C.patch
+++ /dev/null
@@ -1,10 +0,0 @@
---- a/drivers/media/i2c/Kconfig 2020-05-13 12:38:05.102903309 -0400
-+++ b/drivers/media/i2c/Kconfig 2020-05-13 12:38:51.283171977 -0400
-@@ -378,6 +378,7 @@ config VIDEO_TVP514X
- config VIDEO_TVP5150
- tristate "Texas Instruments TVP5150 video decoder"
- depends on VIDEO_V4L2 && I2C
-+ select REGMAP_I2C
- select V4L2_FWNODE
- help
- Support for the Texas Instruments TVP5150 video decoder.
* [gentoo-commits] proj/linux-patches:5.7 commit in: /
@ 2020-08-21 11:43 Alice Ferrazzi
0 siblings, 0 replies; 25+ messages in thread
From: Alice Ferrazzi @ 2020-08-21 11:43 UTC (permalink / raw
To: gentoo-commits
commit: a02b235e592f91d1cc07f284754d422947791d7b
Author: Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
AuthorDate: Fri Aug 21 11:43:03 2020 +0000
Commit: Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
CommitDate: Fri Aug 21 11:43:09 2020 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=a02b235e
Linux patch 5.7.17
Signed-off-by: Alice Ferrazzi <alicef <AT> gentoo.org>
0000_README | 4 +
1016_linux-5.7.17.patch | 7578 +++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 7582 insertions(+)
diff --git a/0000_README b/0000_README
index 1bfc2e2..18ff2b2 100644
--- a/0000_README
+++ b/0000_README
@@ -107,6 +107,10 @@ Patch: 1015_linux-5.7.16.patch
From: http://www.kernel.org
Desc: Linux 5.7.16
+Patch: 1016_linux-5.7.17.patch
+From: http://www.kernel.org
+Desc: Linux 5.7.17
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1016_linux-5.7.17.patch b/1016_linux-5.7.17.patch
new file mode 100644
index 0000000..e5861a0
--- /dev/null
+++ b/1016_linux-5.7.17.patch
@@ -0,0 +1,7578 @@
+diff --git a/Documentation/admin-guide/hw-vuln/multihit.rst b/Documentation/admin-guide/hw-vuln/multihit.rst
+index ba9988d8bce50..140e4cec38c33 100644
+--- a/Documentation/admin-guide/hw-vuln/multihit.rst
++++ b/Documentation/admin-guide/hw-vuln/multihit.rst
+@@ -80,6 +80,10 @@ The possible values in this file are:
+ - The processor is not vulnerable.
+ * - KVM: Mitigation: Split huge pages
+ - Software changes mitigate this issue.
++ * - KVM: Mitigation: VMX unsupported
++ - KVM is not vulnerable because Virtual Machine Extensions (VMX) is not supported.
++ * - KVM: Mitigation: VMX disabled
++ - KVM is not vulnerable because Virtual Machine Extensions (VMX) is disabled.
+ * - KVM: Vulnerable
+ - The processor is vulnerable, but no mitigation enabled
+
+diff --git a/Documentation/devicetree/bindings/iio/multiplexer/io-channel-mux.txt b/Documentation/devicetree/bindings/iio/multiplexer/io-channel-mux.txt
+index c82794002595f..89647d7143879 100644
+--- a/Documentation/devicetree/bindings/iio/multiplexer/io-channel-mux.txt
++++ b/Documentation/devicetree/bindings/iio/multiplexer/io-channel-mux.txt
+@@ -21,7 +21,7 @@ controller state. The mux controller state is described in
+
+ Example:
+ mux: mux-controller {
+- compatible = "mux-gpio";
++ compatible = "gpio-mux";
+ #mux-control-cells = <0>;
+
+ mux-gpios = <&pioA 0 GPIO_ACTIVE_HIGH>,
+diff --git a/Makefile b/Makefile
+index 627657860aa54..c0d34d03ab5f1 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 7
+-SUBLEVEL = 16
++SUBLEVEL = 17
+ EXTRAVERSION =
+ NAME = Kleptomaniac Octopus
+
+diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c
+index 4d7879484cecc..581602413a130 100644
+--- a/arch/arm64/kernel/perf_event.c
++++ b/arch/arm64/kernel/perf_event.c
+@@ -155,7 +155,7 @@ armv8pmu_events_sysfs_show(struct device *dev,
+
+ pmu_attr = container_of(attr, struct perf_pmu_events_attr, attr);
+
+- return sprintf(page, "event=0x%03llx\n", pmu_attr->id);
++ return sprintf(page, "event=0x%04llx\n", pmu_attr->id);
+ }
+
+ #define ARMV8_EVENT_ATTR(name, config) \
+@@ -244,10 +244,13 @@ armv8pmu_event_attr_is_visible(struct kobject *kobj,
+ test_bit(pmu_attr->id, cpu_pmu->pmceid_bitmap))
+ return attr->mode;
+
+- pmu_attr->id -= ARMV8_PMUV3_EXT_COMMON_EVENT_BASE;
+- if (pmu_attr->id < ARMV8_PMUV3_MAX_COMMON_EVENTS &&
+- test_bit(pmu_attr->id, cpu_pmu->pmceid_ext_bitmap))
+- return attr->mode;
++ if (pmu_attr->id >= ARMV8_PMUV3_EXT_COMMON_EVENT_BASE) {
++ u64 id = pmu_attr->id - ARMV8_PMUV3_EXT_COMMON_EVENT_BASE;
++
++ if (id < ARMV8_PMUV3_MAX_COMMON_EVENTS &&
++ test_bit(id, cpu_pmu->pmceid_ext_bitmap))
++ return attr->mode;
++ }
+
+ return 0;
+ }
+diff --git a/arch/mips/Kconfig b/arch/mips/Kconfig
+index 690718b3701af..1db782a08c6e5 100644
+--- a/arch/mips/Kconfig
++++ b/arch/mips/Kconfig
+@@ -722,6 +722,7 @@ config SGI_IP27
+ select SYS_SUPPORTS_NUMA
+ select SYS_SUPPORTS_SMP
+ select MIPS_L1_CACHE_SHIFT_7
++ select NUMA
+ help
+ This are the SGI Origin 200, Origin 2000 and Onyx 2 Graphics
+ workstations. To compile a Linux kernel that runs on these, say Y
+diff --git a/arch/mips/boot/dts/ingenic/qi_lb60.dts b/arch/mips/boot/dts/ingenic/qi_lb60.dts
+index 7a371d9c5a33f..eda37fb516f0e 100644
+--- a/arch/mips/boot/dts/ingenic/qi_lb60.dts
++++ b/arch/mips/boot/dts/ingenic/qi_lb60.dts
+@@ -69,7 +69,7 @@
+ "Speaker", "OUTL",
+ "Speaker", "OUTR",
+ "INL", "LOUT",
+- "INL", "ROUT";
++ "INR", "ROUT";
+
+ simple-audio-card,aux-devs = <&>;
+
+diff --git a/arch/mips/kernel/topology.c b/arch/mips/kernel/topology.c
+index cd3e1f82e1a5d..08ad6371fbe08 100644
+--- a/arch/mips/kernel/topology.c
++++ b/arch/mips/kernel/topology.c
+@@ -20,7 +20,7 @@ static int __init topology_init(void)
+ for_each_present_cpu(i) {
+ struct cpu *c = &per_cpu(cpu_devices, i);
+
+- c->hotpluggable = 1;
++ c->hotpluggable = !!i;
+ ret = register_cpu(c, i);
+ if (ret)
+ printk(KERN_WARNING "topology_init: register_cpu %d "
+diff --git a/arch/openrisc/kernel/stacktrace.c b/arch/openrisc/kernel/stacktrace.c
+index 43f140a28bc72..54d38809e22cb 100644
+--- a/arch/openrisc/kernel/stacktrace.c
++++ b/arch/openrisc/kernel/stacktrace.c
+@@ -13,6 +13,7 @@
+ #include <linux/export.h>
+ #include <linux/sched.h>
+ #include <linux/sched/debug.h>
++#include <linux/sched/task_stack.h>
+ #include <linux/stacktrace.h>
+
+ #include <asm/processor.h>
+@@ -68,12 +69,25 @@ void save_stack_trace_tsk(struct task_struct *tsk, struct stack_trace *trace)
+ {
+ unsigned long *sp = NULL;
+
++ if (!try_get_task_stack(tsk))
++ return;
++
+ if (tsk == current)
+ sp = (unsigned long *) &sp;
+- else
+- sp = (unsigned long *) KSTK_ESP(tsk);
++ else {
++ unsigned long ksp;
++
++ /* Locate stack from kernel context */
++ ksp = task_thread_info(tsk)->ksp;
++ ksp += STACK_FRAME_OVERHEAD; /* redzone */
++ ksp += sizeof(struct pt_regs);
++
++ sp = (unsigned long *) ksp;
++ }
+
+ unwind_stack(trace, sp, save_stack_address_nosched);
++
++ put_task_stack(tsk);
+ }
+ EXPORT_SYMBOL_GPL(save_stack_trace_tsk);
+
+diff --git a/arch/powerpc/include/asm/percpu.h b/arch/powerpc/include/asm/percpu.h
+index dce863a7635cd..8e5b7d0b851c6 100644
+--- a/arch/powerpc/include/asm/percpu.h
++++ b/arch/powerpc/include/asm/percpu.h
+@@ -10,8 +10,6 @@
+
+ #ifdef CONFIG_SMP
+
+-#include <asm/paca.h>
+-
+ #define __my_cpu_offset local_paca->data_offset
+
+ #endif /* CONFIG_SMP */
+@@ -19,4 +17,6 @@
+
+ #include <asm-generic/percpu.h>
+
++#include <asm/paca.h>
++
+ #endif /* _ASM_POWERPC_PERCPU_H_ */
+diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c
+index 84af6c8eecf71..0a539afc8f4fa 100644
+--- a/arch/powerpc/mm/fault.c
++++ b/arch/powerpc/mm/fault.c
+@@ -241,6 +241,9 @@ static bool bad_kernel_fault(struct pt_regs *regs, unsigned long error_code,
+ return false;
+ }
+
++// This comes from 64-bit struct rt_sigframe + __SIGNAL_FRAMESIZE
++#define SIGFRAME_MAX_SIZE (4096 + 128)
++
+ static bool bad_stack_expansion(struct pt_regs *regs, unsigned long address,
+ struct vm_area_struct *vma, unsigned int flags,
+ bool *must_retry)
+@@ -248,7 +251,7 @@ static bool bad_stack_expansion(struct pt_regs *regs, unsigned long address,
+ /*
+ * N.B. The POWER/Open ABI allows programs to access up to
+ * 288 bytes below the stack pointer.
+- * The kernel signal delivery code writes up to about 1.5kB
++ * The kernel signal delivery code writes a bit over 4KB
+ * below the stack pointer (r1) before decrementing it.
+ * The exec code can write slightly over 640kB to the stack
+ * before setting the user r1. Thus we allow the stack to
+@@ -273,7 +276,7 @@ static bool bad_stack_expansion(struct pt_regs *regs, unsigned long address,
+ * between the last mapped region and the stack will
+ * expand the stack rather than segfaulting.
+ */
+- if (address + 2048 >= uregs->gpr[1])
++ if (address + SIGFRAME_MAX_SIZE >= uregs->gpr[1])
+ return false;
+
+ if ((flags & FAULT_FLAG_WRITE) && (flags & FAULT_FLAG_USER) &&
+diff --git a/arch/powerpc/mm/ptdump/hashpagetable.c b/arch/powerpc/mm/ptdump/hashpagetable.c
+index b6ed9578382ff..18f9586fbb935 100644
+--- a/arch/powerpc/mm/ptdump/hashpagetable.c
++++ b/arch/powerpc/mm/ptdump/hashpagetable.c
+@@ -259,7 +259,7 @@ static int pseries_find(unsigned long ea, int psize, bool primary, u64 *v, u64 *
+ for (i = 0; i < HPTES_PER_GROUP; i += 4, hpte_group += 4) {
+ lpar_rc = plpar_pte_read_4(0, hpte_group, (void *)ptes);
+
+- if (lpar_rc != H_SUCCESS)
++ if (lpar_rc)
+ continue;
+ for (j = 0; j < 4; j++) {
+ if (HPTE_V_COMPARE(ptes[j].v, want_v) &&
+diff --git a/arch/powerpc/platforms/pseries/hotplug-memory.c b/arch/powerpc/platforms/pseries/hotplug-memory.c
+index b2cde17323015..6d912db46deb7 100644
+--- a/arch/powerpc/platforms/pseries/hotplug-memory.c
++++ b/arch/powerpc/platforms/pseries/hotplug-memory.c
+@@ -27,7 +27,7 @@ static bool rtas_hp_event;
+ unsigned long pseries_memory_block_size(void)
+ {
+ struct device_node *np;
+- unsigned int memblock_size = MIN_MEMORY_BLOCK_SIZE;
++ u64 memblock_size = MIN_MEMORY_BLOCK_SIZE;
+ struct resource r;
+
+ np = of_find_node_by_path("/ibm,dynamic-reconfiguration-memory");
+diff --git a/arch/s390/Kconfig b/arch/s390/Kconfig
+index ae01be202204b..03e491c103e76 100644
+--- a/arch/s390/Kconfig
++++ b/arch/s390/Kconfig
+@@ -769,6 +769,7 @@ config VFIO_AP
+ def_tristate n
+ prompt "VFIO support for AP devices"
+ depends on S390_AP_IOMMU && VFIO_MDEV_DEVICE && KVM
++ depends on ZCRYPT
+ help
+ This driver grants access to Adjunct Processor (AP) devices
+ via the VFIO mediated device interface.
+diff --git a/arch/s390/lib/test_unwind.c b/arch/s390/lib/test_unwind.c
+index 32b7a30b2485d..b0b12b46bc572 100644
+--- a/arch/s390/lib/test_unwind.c
++++ b/arch/s390/lib/test_unwind.c
+@@ -63,6 +63,7 @@ static noinline int test_unwind(struct task_struct *task, struct pt_regs *regs,
+ break;
+ if (state.reliable && !addr) {
+ pr_err("unwind state reliable but addr is 0\n");
++ kfree(bt);
+ return -EINVAL;
+ }
+ sprint_symbol(sym, addr);
+diff --git a/arch/sh/boards/mach-landisk/setup.c b/arch/sh/boards/mach-landisk/setup.c
+index 16b4d8b0bb850..2c44b94f82fb2 100644
+--- a/arch/sh/boards/mach-landisk/setup.c
++++ b/arch/sh/boards/mach-landisk/setup.c
+@@ -82,6 +82,9 @@ device_initcall(landisk_devices_setup);
+
+ static void __init landisk_setup(char **cmdline_p)
+ {
++ /* I/O port identity mapping */
++ __set_io_port_base(0);
++
+ /* LED ON */
+ __raw_writeb(__raw_readb(PA_LED) | 0x03, PA_LED);
+
+diff --git a/arch/x86/events/rapl.c b/arch/x86/events/rapl.c
+index ece043fb7b494..fbc32b28f4cb8 100644
+--- a/arch/x86/events/rapl.c
++++ b/arch/x86/events/rapl.c
+@@ -642,7 +642,7 @@ static const struct attribute_group *rapl_attr_update[] = {
+ &rapl_events_pkg_group,
+ &rapl_events_ram_group,
+ &rapl_events_gpu_group,
+- &rapl_events_gpu_group,
++ &rapl_events_psys_group,
+ NULL,
+ };
+
+diff --git a/arch/x86/kernel/apic/vector.c b/arch/x86/kernel/apic/vector.c
+index 410363e60968f..3ee4830ebfd31 100644
+--- a/arch/x86/kernel/apic/vector.c
++++ b/arch/x86/kernel/apic/vector.c
+@@ -560,6 +560,10 @@ static int x86_vector_alloc_irqs(struct irq_domain *domain, unsigned int virq,
+ * as that can corrupt the affinity move state.
+ */
+ irqd_set_handle_enforce_irqctx(irqd);
++
++ /* Don't invoke affinity setter on deactivated interrupts */
++ irqd_set_affinity_on_activate(irqd);
++
+ /*
+ * Legacy vectors are already assigned when the IOAPIC
+ * takes them over. They stay on the same vector. This is
+diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
+index b53dcff21438c..8c963ea39f9df 100644
+--- a/arch/x86/kernel/cpu/bugs.c
++++ b/arch/x86/kernel/cpu/bugs.c
+@@ -31,6 +31,7 @@
+ #include <asm/intel-family.h>
+ #include <asm/e820/api.h>
+ #include <asm/hypervisor.h>
++#include <asm/tlbflush.h>
+
+ #include "cpu.h"
+
+@@ -1556,7 +1557,12 @@ static ssize_t l1tf_show_state(char *buf)
+
+ static ssize_t itlb_multihit_show_state(char *buf)
+ {
+- if (itlb_multihit_kvm_mitigation)
++ if (!boot_cpu_has(X86_FEATURE_MSR_IA32_FEAT_CTL) ||
++ !boot_cpu_has(X86_FEATURE_VMX))
++ return sprintf(buf, "KVM: Mitigation: VMX unsupported\n");
++ else if (!(cr4_read_shadow() & X86_CR4_VMXE))
++ return sprintf(buf, "KVM: Mitigation: VMX disabled\n");
++ else if (itlb_multihit_kvm_mitigation)
+ return sprintf(buf, "KVM: Mitigation: Split huge pages\n");
+ else
+ return sprintf(buf, "KVM: Vulnerable\n");
+diff --git a/arch/x86/kernel/tsc_msr.c b/arch/x86/kernel/tsc_msr.c
+index 4fec6f3a1858b..a654a9b4b77c0 100644
+--- a/arch/x86/kernel/tsc_msr.c
++++ b/arch/x86/kernel/tsc_msr.c
+@@ -133,10 +133,15 @@ static const struct freq_desc freq_desc_ann = {
+ .mask = 0x0f,
+ };
+
+-/* 24 MHz crystal? : 24 * 13 / 4 = 78 MHz */
++/*
++ * 24 MHz crystal? : 24 * 13 / 4 = 78 MHz
++ * Frequency step for Lightning Mountain SoC is fixed to 78 MHz,
++ * so all the frequency entries are 78000.
++ */
+ static const struct freq_desc freq_desc_lgm = {
+ .use_msr_plat = true,
+- .freqs = { 78000, 78000, 78000, 78000, 78000, 78000, 78000, 78000 },
++ .freqs = { 78000, 78000, 78000, 78000, 78000, 78000, 78000, 78000,
++ 78000, 78000, 78000, 78000, 78000, 78000, 78000, 78000 },
+ .mask = 0x0f,
+ };
+
+diff --git a/arch/xtensa/include/asm/thread_info.h b/arch/xtensa/include/asm/thread_info.h
+index f092cc3f4e66d..956d4d47c6cd1 100644
+--- a/arch/xtensa/include/asm/thread_info.h
++++ b/arch/xtensa/include/asm/thread_info.h
+@@ -55,6 +55,10 @@ struct thread_info {
+ mm_segment_t addr_limit; /* thread address space */
+
+ unsigned long cpenable;
++#if XCHAL_HAVE_EXCLUSIVE
++ /* result of the most recent exclusive store */
++ unsigned long atomctl8;
++#endif
+
+ /* Allocate storage for extra user states and coprocessor states. */
+ #if XTENSA_HAVE_COPROCESSORS
+diff --git a/arch/xtensa/kernel/asm-offsets.c b/arch/xtensa/kernel/asm-offsets.c
+index 33a257b33723a..dc5c83cad9be8 100644
+--- a/arch/xtensa/kernel/asm-offsets.c
++++ b/arch/xtensa/kernel/asm-offsets.c
+@@ -93,6 +93,9 @@ int main(void)
+ DEFINE(THREAD_RA, offsetof (struct task_struct, thread.ra));
+ DEFINE(THREAD_SP, offsetof (struct task_struct, thread.sp));
+ DEFINE(THREAD_CPENABLE, offsetof (struct thread_info, cpenable));
++#if XCHAL_HAVE_EXCLUSIVE
++ DEFINE(THREAD_ATOMCTL8, offsetof (struct thread_info, atomctl8));
++#endif
+ #if XTENSA_HAVE_COPROCESSORS
+ DEFINE(THREAD_XTREGS_CP0, offsetof(struct thread_info, xtregs_cp.cp0));
+ DEFINE(THREAD_XTREGS_CP1, offsetof(struct thread_info, xtregs_cp.cp1));
+diff --git a/arch/xtensa/kernel/entry.S b/arch/xtensa/kernel/entry.S
+index 06fbb0a171f1e..26e2869d255b0 100644
+--- a/arch/xtensa/kernel/entry.S
++++ b/arch/xtensa/kernel/entry.S
+@@ -374,6 +374,11 @@ common_exception:
+ s32i a2, a1, PT_LCOUNT
+ #endif
+
++#if XCHAL_HAVE_EXCLUSIVE
++ /* Clear exclusive access monitor set by interrupted code */
++ clrex
++#endif
++
+ /* It is now save to restore the EXC_TABLE_FIXUP variable. */
+
+ rsr a2, exccause
+@@ -2020,6 +2025,12 @@ ENTRY(_switch_to)
+ s32i a3, a4, THREAD_CPENABLE
+ #endif
+
++#if XCHAL_HAVE_EXCLUSIVE
++ l32i a3, a5, THREAD_ATOMCTL8
++ getex a3
++ s32i a3, a4, THREAD_ATOMCTL8
++#endif
++
+ /* Flush register file. */
+
+ spill_registers_kernel
+diff --git a/arch/xtensa/kernel/perf_event.c b/arch/xtensa/kernel/perf_event.c
+index 9bae79f703013..86c9ba9631551 100644
+--- a/arch/xtensa/kernel/perf_event.c
++++ b/arch/xtensa/kernel/perf_event.c
+@@ -401,7 +401,7 @@ static struct pmu xtensa_pmu = {
+ .read = xtensa_pmu_read,
+ };
+
+-static int xtensa_pmu_setup(int cpu)
++static int xtensa_pmu_setup(unsigned int cpu)
+ {
+ unsigned i;
+
+diff --git a/crypto/af_alg.c b/crypto/af_alg.c
+index 28fc323e3fe30..5882ed46f1adb 100644
+--- a/crypto/af_alg.c
++++ b/crypto/af_alg.c
+@@ -635,6 +635,7 @@ void af_alg_pull_tsgl(struct sock *sk, size_t used, struct scatterlist *dst,
+
+ if (!ctx->used)
+ ctx->merge = 0;
++ ctx->init = ctx->more;
+ }
+ EXPORT_SYMBOL_GPL(af_alg_pull_tsgl);
+
+@@ -734,9 +735,10 @@ EXPORT_SYMBOL_GPL(af_alg_wmem_wakeup);
+ *
+ * @sk socket of connection to user space
+ * @flags If MSG_DONTWAIT is set, then only report if function would sleep
++ * @min Set to minimum request size if partial requests are allowed.
+ * @return 0 when writable memory is available, < 0 upon error
+ */
+-int af_alg_wait_for_data(struct sock *sk, unsigned flags)
++int af_alg_wait_for_data(struct sock *sk, unsigned flags, unsigned min)
+ {
+ DEFINE_WAIT_FUNC(wait, woken_wake_function);
+ struct alg_sock *ask = alg_sk(sk);
+@@ -754,7 +756,9 @@ int af_alg_wait_for_data(struct sock *sk, unsigned flags)
+ if (signal_pending(current))
+ break;
+ timeout = MAX_SCHEDULE_TIMEOUT;
+- if (sk_wait_event(sk, &timeout, (ctx->used || !ctx->more),
++ if (sk_wait_event(sk, &timeout,
++ ctx->init && (!ctx->more ||
++ (min && ctx->used >= min)),
+ &wait)) {
+ err = 0;
+ break;
+@@ -843,10 +847,11 @@ int af_alg_sendmsg(struct socket *sock, struct msghdr *msg, size_t size,
+ }
+
+ lock_sock(sk);
+- if (!ctx->more && ctx->used) {
++ if (ctx->init && (init || !ctx->more)) {
+ err = -EINVAL;
+ goto unlock;
+ }
++ ctx->init = true;
+
+ if (init) {
+ ctx->enc = enc;
+diff --git a/crypto/algif_aead.c b/crypto/algif_aead.c
+index 0ae000a61c7f5..43c6aa784858b 100644
+--- a/crypto/algif_aead.c
++++ b/crypto/algif_aead.c
+@@ -106,8 +106,8 @@ static int _aead_recvmsg(struct socket *sock, struct msghdr *msg,
+ size_t usedpages = 0; /* [in] RX bufs to be used from user */
+ size_t processed = 0; /* [in] TX bufs to be consumed */
+
+- if (!ctx->used) {
+- err = af_alg_wait_for_data(sk, flags);
++ if (!ctx->init || ctx->more) {
++ err = af_alg_wait_for_data(sk, flags, 0);
+ if (err)
+ return err;
+ }
+@@ -558,12 +558,6 @@ static int aead_accept_parent_nokey(void *private, struct sock *sk)
+
+ INIT_LIST_HEAD(&ctx->tsgl_list);
+ ctx->len = len;
+- ctx->used = 0;
+- atomic_set(&ctx->rcvused, 0);
+- ctx->more = 0;
+- ctx->merge = 0;
+- ctx->enc = 0;
+- ctx->aead_assoclen = 0;
+ crypto_init_wait(&ctx->wait);
+
+ ask->private = ctx;
+diff --git a/crypto/algif_skcipher.c b/crypto/algif_skcipher.c
+index ec5567c87a6df..81c4022285a7c 100644
+--- a/crypto/algif_skcipher.c
++++ b/crypto/algif_skcipher.c
+@@ -61,8 +61,8 @@ static int _skcipher_recvmsg(struct socket *sock, struct msghdr *msg,
+ int err = 0;
+ size_t len = 0;
+
+- if (!ctx->used) {
+- err = af_alg_wait_for_data(sk, flags);
++ if (!ctx->init || (ctx->more && ctx->used < bs)) {
++ err = af_alg_wait_for_data(sk, flags, bs);
+ if (err)
+ return err;
+ }
+@@ -333,6 +333,7 @@ static int skcipher_accept_parent_nokey(void *private, struct sock *sk)
+ ctx = sock_kmalloc(sk, len, GFP_KERNEL);
+ if (!ctx)
+ return -ENOMEM;
++ memset(ctx, 0, len);
+
+ ctx->iv = sock_kmalloc(sk, crypto_skcipher_ivsize(tfm),
+ GFP_KERNEL);
+@@ -340,16 +341,10 @@ static int skcipher_accept_parent_nokey(void *private, struct sock *sk)
+ sock_kfree_s(sk, ctx, len);
+ return -ENOMEM;
+ }
+-
+ memset(ctx->iv, 0, crypto_skcipher_ivsize(tfm));
+
+ INIT_LIST_HEAD(&ctx->tsgl_list);
+ ctx->len = len;
+- ctx->used = 0;
+- atomic_set(&ctx->rcvused, 0);
+- ctx->more = 0;
+- ctx->merge = 0;
+- ctx->enc = 0;
+ crypto_init_wait(&ctx->wait);
+
+ ask->private = ctx;
+diff --git a/drivers/base/dd.c b/drivers/base/dd.c
+index 60bd0a9b9918b..da5a8e90a8852 100644
+--- a/drivers/base/dd.c
++++ b/drivers/base/dd.c
+@@ -846,7 +846,9 @@ static int __device_attach(struct device *dev, bool allow_async)
+ int ret = 0;
+
+ device_lock(dev);
+- if (dev->driver) {
++ if (dev->p->dead) {
++ goto out_unlock;
++ } else if (dev->driver) {
+ if (device_is_bound(dev)) {
+ ret = 1;
+ goto out_unlock;
+diff --git a/drivers/clk/actions/owl-s500.c b/drivers/clk/actions/owl-s500.c
+index e2007ac4d235d..0eb83a0b70bcc 100644
+--- a/drivers/clk/actions/owl-s500.c
++++ b/drivers/clk/actions/owl-s500.c
+@@ -183,7 +183,7 @@ static OWL_GATE(timer_clk, "timer_clk", "hosc", CMU_DEVCLKEN1, 27, 0, 0);
+ static OWL_GATE(hdmi_clk, "hdmi_clk", "hosc", CMU_DEVCLKEN1, 3, 0, 0);
+
+ /* divider clocks */
+-static OWL_DIVIDER(h_clk, "h_clk", "ahbprevdiv_clk", CMU_BUSCLK1, 12, 2, NULL, 0, 0);
++static OWL_DIVIDER(h_clk, "h_clk", "ahbprediv_clk", CMU_BUSCLK1, 12, 2, NULL, 0, 0);
+ static OWL_DIVIDER(rmii_ref_clk, "rmii_ref_clk", "ethernet_pll_clk", CMU_ETHERNETPLL, 1, 1, rmii_ref_div_table, 0, 0);
+
+ /* factor clocks */
+diff --git a/drivers/clk/bcm/clk-bcm2835.c b/drivers/clk/bcm/clk-bcm2835.c
+index 7c845c293af00..798f0b419c79f 100644
+--- a/drivers/clk/bcm/clk-bcm2835.c
++++ b/drivers/clk/bcm/clk-bcm2835.c
+@@ -314,6 +314,7 @@ struct bcm2835_cprman {
+ struct device *dev;
+ void __iomem *regs;
+ spinlock_t regs_lock; /* spinlock for all clocks */
++ unsigned int soc;
+
+ /*
+ * Real names of cprman clock parents looked up through
+@@ -525,6 +526,20 @@ static int bcm2835_pll_is_on(struct clk_hw *hw)
+ A2W_PLL_CTRL_PRST_DISABLE;
+ }
+
++static u32 bcm2835_pll_get_prediv_mask(struct bcm2835_cprman *cprman,
++ const struct bcm2835_pll_data *data)
++{
++ /*
++ * On BCM2711 there isn't a pre-divisor available in the PLL feedback
++ * loop. Bits 13:14 of ANA1 (PLLA,PLLB,PLLC,PLLD) have been re-purposed
++ * for to for VCO RANGE bits.
++ */
++ if (cprman->soc & SOC_BCM2711)
++ return 0;
++
++ return data->ana->fb_prediv_mask;
++}
++
+ static void bcm2835_pll_choose_ndiv_and_fdiv(unsigned long rate,
+ unsigned long parent_rate,
+ u32 *ndiv, u32 *fdiv)
+@@ -582,7 +597,7 @@ static unsigned long bcm2835_pll_get_rate(struct clk_hw *hw,
+ ndiv = (a2wctrl & A2W_PLL_CTRL_NDIV_MASK) >> A2W_PLL_CTRL_NDIV_SHIFT;
+ pdiv = (a2wctrl & A2W_PLL_CTRL_PDIV_MASK) >> A2W_PLL_CTRL_PDIV_SHIFT;
+ using_prediv = cprman_read(cprman, data->ana_reg_base + 4) &
+- data->ana->fb_prediv_mask;
++ bcm2835_pll_get_prediv_mask(cprman, data);
+
+ if (using_prediv) {
+ ndiv *= 2;
+@@ -665,6 +680,7 @@ static int bcm2835_pll_set_rate(struct clk_hw *hw,
+ struct bcm2835_pll *pll = container_of(hw, struct bcm2835_pll, hw);
+ struct bcm2835_cprman *cprman = pll->cprman;
+ const struct bcm2835_pll_data *data = pll->data;
++ u32 prediv_mask = bcm2835_pll_get_prediv_mask(cprman, data);
+ bool was_using_prediv, use_fb_prediv, do_ana_setup_first;
+ u32 ndiv, fdiv, a2w_ctl;
+ u32 ana[4];
+@@ -682,7 +698,7 @@ static int bcm2835_pll_set_rate(struct clk_hw *hw,
+ for (i = 3; i >= 0; i--)
+ ana[i] = cprman_read(cprman, data->ana_reg_base + i * 4);
+
+- was_using_prediv = ana[1] & data->ana->fb_prediv_mask;
++ was_using_prediv = ana[1] & prediv_mask;
+
+ ana[0] &= ~data->ana->mask0;
+ ana[0] |= data->ana->set0;
+@@ -692,10 +708,10 @@ static int bcm2835_pll_set_rate(struct clk_hw *hw,
+ ana[3] |= data->ana->set3;
+
+ if (was_using_prediv && !use_fb_prediv) {
+- ana[1] &= ~data->ana->fb_prediv_mask;
++ ana[1] &= ~prediv_mask;
+ do_ana_setup_first = true;
+ } else if (!was_using_prediv && use_fb_prediv) {
+- ana[1] |= data->ana->fb_prediv_mask;
++ ana[1] |= prediv_mask;
+ do_ana_setup_first = false;
+ } else {
+ do_ana_setup_first = true;
+@@ -2232,6 +2248,7 @@ static int bcm2835_clk_probe(struct platform_device *pdev)
+ platform_set_drvdata(pdev, cprman);
+
+ cprman->onecell.num = asize;
++ cprman->soc = pdata->soc;
+ hws = cprman->onecell.hws;
+
+ for (i = 0; i < asize; i++) {
+diff --git a/drivers/clk/qcom/clk-alpha-pll.c b/drivers/clk/qcom/clk-alpha-pll.c
+index 9b2dfa08acb2a..1325139173c95 100644
+--- a/drivers/clk/qcom/clk-alpha-pll.c
++++ b/drivers/clk/qcom/clk-alpha-pll.c
+@@ -56,7 +56,6 @@
+ #define PLL_STATUS(p) ((p)->offset + (p)->regs[PLL_OFF_STATUS])
+ #define PLL_OPMODE(p) ((p)->offset + (p)->regs[PLL_OFF_OPMODE])
+ #define PLL_FRAC(p) ((p)->offset + (p)->regs[PLL_OFF_FRAC])
+-#define PLL_CAL_VAL(p) ((p)->offset + (p)->regs[PLL_OFF_CAL_VAL])
+
+ const u8 clk_alpha_pll_regs[][PLL_OFF_MAX_REGS] = {
+ [CLK_ALPHA_PLL_TYPE_DEFAULT] = {
+@@ -115,7 +114,6 @@ const u8 clk_alpha_pll_regs[][PLL_OFF_MAX_REGS] = {
+ [PLL_OFF_STATUS] = 0x30,
+ [PLL_OFF_OPMODE] = 0x38,
+ [PLL_OFF_ALPHA_VAL] = 0x40,
+- [PLL_OFF_CAL_VAL] = 0x44,
+ },
+ [CLK_ALPHA_PLL_TYPE_LUCID] = {
+ [PLL_OFF_L_VAL] = 0x04,
+diff --git a/drivers/clk/qcom/gcc-sdm660.c b/drivers/clk/qcom/gcc-sdm660.c
+index bf5730832ef3d..c6fb57cd576f5 100644
+--- a/drivers/clk/qcom/gcc-sdm660.c
++++ b/drivers/clk/qcom/gcc-sdm660.c
+@@ -1715,6 +1715,9 @@ static struct clk_branch gcc_mss_cfg_ahb_clk = {
+
+ static struct clk_branch gcc_mss_mnoc_bimc_axi_clk = {
+ .halt_reg = 0x8a004,
++ .halt_check = BRANCH_HALT,
++ .hwcg_reg = 0x8a004,
++ .hwcg_bit = 1,
+ .clkr = {
+ .enable_reg = 0x8a004,
+ .enable_mask = BIT(0),
+diff --git a/drivers/clk/qcom/gcc-sm8150.c b/drivers/clk/qcom/gcc-sm8150.c
+index 72524cf110487..55e9d6d75a0cd 100644
+--- a/drivers/clk/qcom/gcc-sm8150.c
++++ b/drivers/clk/qcom/gcc-sm8150.c
+@@ -1617,6 +1617,7 @@ static struct clk_branch gcc_gpu_cfg_ahb_clk = {
+ };
+
+ static struct clk_branch gcc_gpu_gpll0_clk_src = {
++ .halt_check = BRANCH_HALT_SKIP,
+ .clkr = {
+ .enable_reg = 0x52004,
+ .enable_mask = BIT(15),
+@@ -1632,13 +1633,14 @@ static struct clk_branch gcc_gpu_gpll0_clk_src = {
+ };
+
+ static struct clk_branch gcc_gpu_gpll0_div_clk_src = {
++ .halt_check = BRANCH_HALT_SKIP,
+ .clkr = {
+ .enable_reg = 0x52004,
+ .enable_mask = BIT(16),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_gpu_gpll0_div_clk_src",
+ .parent_hws = (const struct clk_hw *[]){
+- &gcc_gpu_gpll0_clk_src.clkr.hw },
++ &gpll0_out_even.clkr.hw },
+ .num_parents = 1,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_branch2_ops,
+@@ -1729,6 +1731,7 @@ static struct clk_branch gcc_npu_cfg_ahb_clk = {
+ };
+
+ static struct clk_branch gcc_npu_gpll0_clk_src = {
++ .halt_check = BRANCH_HALT_SKIP,
+ .clkr = {
+ .enable_reg = 0x52004,
+ .enable_mask = BIT(18),
+@@ -1744,13 +1747,14 @@ static struct clk_branch gcc_npu_gpll0_clk_src = {
+ };
+
+ static struct clk_branch gcc_npu_gpll0_div_clk_src = {
++ .halt_check = BRANCH_HALT_SKIP,
+ .clkr = {
+ .enable_reg = 0x52004,
+ .enable_mask = BIT(19),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_npu_gpll0_div_clk_src",
+ .parent_hws = (const struct clk_hw *[]){
+- &gcc_npu_gpll0_clk_src.clkr.hw },
++ &gpll0_out_even.clkr.hw },
+ .num_parents = 1,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_branch2_ops,
+diff --git a/drivers/clk/sirf/clk-atlas6.c b/drivers/clk/sirf/clk-atlas6.c
+index c84d5bab7ac28..b95483bb6a5ec 100644
+--- a/drivers/clk/sirf/clk-atlas6.c
++++ b/drivers/clk/sirf/clk-atlas6.c
+@@ -135,7 +135,7 @@ static void __init atlas6_clk_init(struct device_node *np)
+
+ for (i = pll1; i < maxclk; i++) {
+ atlas6_clks[i] = clk_register(NULL, atlas6_clk_hw_array[i]);
+- BUG_ON(!atlas6_clks[i]);
++ BUG_ON(IS_ERR(atlas6_clks[i]));
+ }
+ clk_register_clkdev(atlas6_clks[cpu], NULL, "cpu");
+ clk_register_clkdev(atlas6_clks[io], NULL, "io");
+diff --git a/drivers/crypto/caam/caamalg.c b/drivers/crypto/caam/caamalg.c
+index bf90a4fcabd1f..8149ac4d6ef22 100644
+--- a/drivers/crypto/caam/caamalg.c
++++ b/drivers/crypto/caam/caamalg.c
+@@ -810,12 +810,6 @@ static int ctr_skcipher_setkey(struct crypto_skcipher *skcipher,
+ return skcipher_setkey(skcipher, key, keylen, ctx1_iv_off);
+ }
+
+-static int arc4_skcipher_setkey(struct crypto_skcipher *skcipher,
+- const u8 *key, unsigned int keylen)
+-{
+- return skcipher_setkey(skcipher, key, keylen, 0);
+-}
+-
+ static int des_skcipher_setkey(struct crypto_skcipher *skcipher,
+ const u8 *key, unsigned int keylen)
+ {
+@@ -1967,21 +1961,6 @@ static struct caam_skcipher_alg driver_algs[] = {
+ },
+ .caam.class1_alg_type = OP_ALG_ALGSEL_3DES | OP_ALG_AAI_ECB,
+ },
+- {
+- .skcipher = {
+- .base = {
+- .cra_name = "ecb(arc4)",
+- .cra_driver_name = "ecb-arc4-caam",
+- .cra_blocksize = ARC4_BLOCK_SIZE,
+- },
+- .setkey = arc4_skcipher_setkey,
+- .encrypt = skcipher_encrypt,
+- .decrypt = skcipher_decrypt,
+- .min_keysize = ARC4_MIN_KEY_SIZE,
+- .max_keysize = ARC4_MAX_KEY_SIZE,
+- },
+- .caam.class1_alg_type = OP_ALG_ALGSEL_ARC4 | OP_ALG_AAI_ECB,
+- },
+ };
+
+ static struct caam_aead_alg driver_aeads[] = {
+@@ -3457,7 +3436,6 @@ int caam_algapi_init(struct device *ctrldev)
+ struct caam_drv_private *priv = dev_get_drvdata(ctrldev);
+ int i = 0, err = 0;
+ u32 aes_vid, aes_inst, des_inst, md_vid, md_inst, ccha_inst, ptha_inst;
+- u32 arc4_inst;
+ unsigned int md_limit = SHA512_DIGEST_SIZE;
+ bool registered = false, gcm_support;
+
+@@ -3477,8 +3455,6 @@ int caam_algapi_init(struct device *ctrldev)
+ CHA_ID_LS_DES_SHIFT;
+ aes_inst = cha_inst & CHA_ID_LS_AES_MASK;
+ md_inst = (cha_inst & CHA_ID_LS_MD_MASK) >> CHA_ID_LS_MD_SHIFT;
+- arc4_inst = (cha_inst & CHA_ID_LS_ARC4_MASK) >>
+- CHA_ID_LS_ARC4_SHIFT;
+ ccha_inst = 0;
+ ptha_inst = 0;
+
+@@ -3499,7 +3475,6 @@ int caam_algapi_init(struct device *ctrldev)
+ md_inst = mdha & CHA_VER_NUM_MASK;
+ ccha_inst = rd_reg32(&priv->ctrl->vreg.ccha) & CHA_VER_NUM_MASK;
+ ptha_inst = rd_reg32(&priv->ctrl->vreg.ptha) & CHA_VER_NUM_MASK;
+- arc4_inst = rd_reg32(&priv->ctrl->vreg.afha) & CHA_VER_NUM_MASK;
+
+ gcm_support = aesa & CHA_VER_MISC_AES_GCM;
+ }
+@@ -3522,10 +3497,6 @@ int caam_algapi_init(struct device *ctrldev)
+ if (!aes_inst && (alg_sel == OP_ALG_ALGSEL_AES))
+ continue;
+
+- /* Skip ARC4 algorithms if not supported by device */
+- if (!arc4_inst && alg_sel == OP_ALG_ALGSEL_ARC4)
+- continue;
+-
+ /*
+ * Check support for AES modes not available
+ * on LP devices.
+diff --git a/drivers/crypto/caam/compat.h b/drivers/crypto/caam/compat.h
+index 60e2a54c19f11..c3c22a8de4c00 100644
+--- a/drivers/crypto/caam/compat.h
++++ b/drivers/crypto/caam/compat.h
+@@ -43,7 +43,6 @@
+ #include <crypto/akcipher.h>
+ #include <crypto/scatterwalk.h>
+ #include <crypto/skcipher.h>
+-#include <crypto/arc4.h>
+ #include <crypto/internal/skcipher.h>
+ #include <crypto/internal/hash.h>
+ #include <crypto/internal/rsa.h>
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
+index 3c6f60c5b1a5a..088f43ebdceb6 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
+@@ -1679,15 +1679,15 @@ static int psp_suspend(void *handle)
+ }
+ }
+
+- ret = psp_tmr_terminate(psp);
++ ret = psp_asd_unload(psp);
+ if (ret) {
+- DRM_ERROR("Falied to terminate tmr\n");
++ DRM_ERROR("Failed to unload asd\n");
+ return ret;
+ }
+
+- ret = psp_asd_unload(psp);
++ ret = psp_tmr_terminate(psp);
+ if (ret) {
+- DRM_ERROR("Failed to unload asd\n");
++ DRM_ERROR("Falied to terminate tmr\n");
+ return ret;
+ }
+
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index d50751ae73f1b..7cb4fe479614e 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -8458,6 +8458,29 @@ static int amdgpu_dm_atomic_check(struct drm_device *dev,
+ if (ret)
+ goto fail;
+
++ /* Check connector changes */
++ for_each_oldnew_connector_in_state(state, connector, old_con_state, new_con_state, i) {
++ struct dm_connector_state *dm_old_con_state = to_dm_connector_state(old_con_state);
++ struct dm_connector_state *dm_new_con_state = to_dm_connector_state(new_con_state);
++
++ /* Skip connectors that are disabled or part of modeset already. */
++ if (!old_con_state->crtc && !new_con_state->crtc)
++ continue;
++
++ if (!new_con_state->crtc)
++ continue;
++
++ new_crtc_state = drm_atomic_get_crtc_state(state, new_con_state->crtc);
++ if (IS_ERR(new_crtc_state)) {
++ ret = PTR_ERR(new_crtc_state);
++ goto fail;
++ }
++
++ if (dm_old_con_state->abm_level !=
++ dm_new_con_state->abm_level)
++ new_crtc_state->connectors_changed = true;
++ }
++
+ #if defined(CONFIG_DRM_AMD_DC_DCN)
+ if (!compute_mst_dsc_configs_for_state(state, dm_state->context))
+ goto fail;
+diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn10/rv1_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn10/rv1_clk_mgr.c
+index 3fab9296918ab..e133edc587d31 100644
+--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn10/rv1_clk_mgr.c
++++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn10/rv1_clk_mgr.c
+@@ -85,12 +85,77 @@ static int rv1_determine_dppclk_threshold(struct clk_mgr_internal *clk_mgr, stru
+ return disp_clk_threshold;
+ }
+
+-static void ramp_up_dispclk_with_dpp(struct clk_mgr_internal *clk_mgr, struct dc *dc, struct dc_clocks *new_clocks)
++static void ramp_up_dispclk_with_dpp(
++ struct clk_mgr_internal *clk_mgr,
++ struct dc *dc,
++ struct dc_clocks *new_clocks,
++ bool safe_to_lower)
+ {
+ int i;
+ int dispclk_to_dpp_threshold = rv1_determine_dppclk_threshold(clk_mgr, new_clocks);
+ bool request_dpp_div = new_clocks->dispclk_khz > new_clocks->dppclk_khz;
+
++ /* this function is to change dispclk, dppclk and dprefclk according to
++ * bandwidth requirement. Its call stack is rv1_update_clocks -->
++ * update_clocks --> dcn10_prepare_bandwidth / dcn10_optimize_bandwidth
++ * --> prepare_bandwidth / optimize_bandwidth. before change dcn hw,
++ * prepare_bandwidth will be called first to allow enough clock,
++ * watermark for change, after end of dcn hw change, optimize_bandwidth
++ * is executed to lower clock to save power for new dcn hw settings.
++ *
++ * below is sequence of commit_planes_for_stream:
++ *
++ * step 1: prepare_bandwidth - raise clock to have enough bandwidth
++ * step 2: lock_doublebuffer_enable
++ * step 3: pipe_control_lock(true) - make dchubp register change will
++ * not take effect right way
++ * step 4: apply_ctx_for_surface - program dchubp
++ * step 5: pipe_control_lock(false) - dchubp register change take effect
++ * step 6: optimize_bandwidth --> dc_post_update_surfaces_to_stream
++ * for full_date, optimize clock to save power
++ *
++ * at end of step 1, dcn clocks (dprefclk, dispclk, dppclk) may be
++ * changed for new dchubp configuration. but real dcn hub dchubps are
++ * still running with old configuration until end of step 5. this need
++ * clocks settings at step 1 should not less than that before step 1.
++ * this is checked by two conditions: 1. if (should_set_clock(safe_to_lower
++ * , new_clocks->dispclk_khz, clk_mgr_base->clks.dispclk_khz) ||
++ * new_clocks->dispclk_khz == clk_mgr_base->clks.dispclk_khz)
++ * 2. request_dpp_div = new_clocks->dispclk_khz > new_clocks->dppclk_khz
++ *
++ * the second condition is based on new dchubp configuration. dppclk
++ * for new dchubp may be different from dppclk before step 1.
++ * for example, before step 1, dchubps are as below:
++ * pipe 0: recout=(0,40,1920,980) viewport=(0,0,1920,979)
++ * pipe 1: recout=(0,0,1920,1080) viewport=(0,0,1920,1080)
++ * for dppclk for pipe0 need dppclk = dispclk
++ *
++ * new dchubp pipe split configuration:
++ * pipe 0: recout=(0,0,960,1080) viewport=(0,0,960,1080)
++ * pipe 1: recout=(960,0,960,1080) viewport=(960,0,960,1080)
++ * dppclk only needs dppclk = dispclk /2.
++ *
++ * dispclk, dppclk are not lock by otg master lock. they take effect
++ * after step 1. during this transition, dispclk are the same, but
++ * dppclk is changed to half of previous clock for old dchubp
++ * configuration between step 1 and step 6. This may cause p-state
++ * warning intermittently.
++ *
++ * for new_clocks->dispclk_khz == clk_mgr_base->clks.dispclk_khz, we
++ * need make sure dppclk are not changed to less between step 1 and 6.
++ * for new_clocks->dispclk_khz > clk_mgr_base->clks.dispclk_khz,
++ * new display clock is raised, but we do not know ratio of
++ * new_clocks->dispclk_khz and clk_mgr_base->clks.dispclk_khz,
++ * new_clocks->dispclk_khz /2 does not guarantee equal or higher than
++ * old dppclk. we could ignore power saving different between
++ * dppclk = displck and dppclk = dispclk / 2 between step 1 and step 6.
++ * as long as safe_to_lower = false, set dpclk = dispclk to simplify
++ * condition check.
++ * todo: review this change for other asic.
++ **/
++ if (!safe_to_lower)
++ request_dpp_div = false;
++
+ /* set disp clk to dpp clk threshold */
+
+ clk_mgr->funcs->set_dispclk(clk_mgr, dispclk_to_dpp_threshold);
+@@ -209,7 +274,7 @@ static void rv1_update_clocks(struct clk_mgr *clk_mgr_base,
+ /* program dispclk on = as a w/a for sleep resume clock ramping issues */
+ if (should_set_clock(safe_to_lower, new_clocks->dispclk_khz, clk_mgr_base->clks.dispclk_khz)
+ || new_clocks->dispclk_khz == clk_mgr_base->clks.dispclk_khz) {
+- ramp_up_dispclk_with_dpp(clk_mgr, dc, new_clocks);
++ ramp_up_dispclk_with_dpp(clk_mgr, dc, new_clocks, safe_to_lower);
+ clk_mgr_base->clks.dispclk_khz = new_clocks->dispclk_khz;
+ send_request_to_lower = true;
+ }
+diff --git a/drivers/gpu/drm/amd/powerplay/smumgr/ci_smumgr.c b/drivers/gpu/drm/amd/powerplay/smumgr/ci_smumgr.c
+index 7c3e903230ca1..47eead0961297 100644
+--- a/drivers/gpu/drm/amd/powerplay/smumgr/ci_smumgr.c
++++ b/drivers/gpu/drm/amd/powerplay/smumgr/ci_smumgr.c
+@@ -2725,7 +2725,10 @@ static int ci_initialize_mc_reg_table(struct pp_hwmgr *hwmgr)
+
+ static bool ci_is_dpm_running(struct pp_hwmgr *hwmgr)
+ {
+- return ci_is_smc_ram_running(hwmgr);
++ return (1 == PHM_READ_INDIRECT_FIELD(hwmgr->device,
++ CGS_IND_REG__SMC, FEATURE_STATUS,
++ VOLTAGE_CONTROLLER_ON))
++ ? true : false;
+ }
+
+ static int ci_smu_init(struct pp_hwmgr *hwmgr)
+diff --git a/drivers/gpu/drm/drm_dp_mst_topology.c b/drivers/gpu/drm/drm_dp_mst_topology.c
+index abb1f358ec6df..252fc4b567007 100644
+--- a/drivers/gpu/drm/drm_dp_mst_topology.c
++++ b/drivers/gpu/drm/drm_dp_mst_topology.c
+@@ -88,8 +88,8 @@ static int drm_dp_send_enum_path_resources(struct drm_dp_mst_topology_mgr *mgr,
+ static bool drm_dp_validate_guid(struct drm_dp_mst_topology_mgr *mgr,
+ u8 *guid);
+
+-static int drm_dp_mst_register_i2c_bus(struct drm_dp_aux *aux);
+-static void drm_dp_mst_unregister_i2c_bus(struct drm_dp_aux *aux);
++static int drm_dp_mst_register_i2c_bus(struct drm_dp_mst_port *port);
++static void drm_dp_mst_unregister_i2c_bus(struct drm_dp_mst_port *port);
+ static void drm_dp_mst_kick_tx(struct drm_dp_mst_topology_mgr *mgr);
+
+ #define DBG_PREFIX "[dp_mst]"
+@@ -1981,7 +1981,7 @@ drm_dp_port_set_pdt(struct drm_dp_mst_port *port, u8 new_pdt,
+ }
+
+ /* remove i2c over sideband */
+- drm_dp_mst_unregister_i2c_bus(&port->aux);
++ drm_dp_mst_unregister_i2c_bus(port);
+ } else {
+ mutex_lock(&mgr->lock);
+ drm_dp_mst_topology_put_mstb(port->mstb);
+@@ -1996,7 +1996,7 @@ drm_dp_port_set_pdt(struct drm_dp_mst_port *port, u8 new_pdt,
+ if (port->pdt != DP_PEER_DEVICE_NONE) {
+ if (drm_dp_mst_is_end_device(port->pdt, port->mcs)) {
+ /* add i2c over sideband */
+- ret = drm_dp_mst_register_i2c_bus(&port->aux);
++ ret = drm_dp_mst_register_i2c_bus(port);
+ } else {
+ lct = drm_dp_calculate_rad(port, rad);
+ mstb = drm_dp_add_mst_branch_device(lct, rad);
+@@ -4319,11 +4319,11 @@ bool drm_dp_mst_allocate_vcpi(struct drm_dp_mst_topology_mgr *mgr,
+ {
+ int ret;
+
+- port = drm_dp_mst_topology_get_port_validated(mgr, port);
+- if (!port)
++ if (slots < 0)
+ return false;
+
+- if (slots < 0)
++ port = drm_dp_mst_topology_get_port_validated(mgr, port);
++ if (!port)
+ return false;
+
+ if (port->vcpi.vcpi > 0) {
+@@ -4339,6 +4339,7 @@ bool drm_dp_mst_allocate_vcpi(struct drm_dp_mst_topology_mgr *mgr,
+ if (ret) {
+ DRM_DEBUG_KMS("failed to init vcpi slots=%d max=63 ret=%d\n",
+ DIV_ROUND_UP(pbn, mgr->pbn_div), ret);
++ drm_dp_mst_topology_put_port(port);
+ goto out;
+ }
+ DRM_DEBUG_KMS("initing vcpi for pbn=%d slots=%d\n",
+@@ -5406,22 +5407,26 @@ static const struct i2c_algorithm drm_dp_mst_i2c_algo = {
+
+ /**
+ * drm_dp_mst_register_i2c_bus() - register an I2C adapter for I2C-over-AUX
+- * @aux: DisplayPort AUX channel
++ * @port: The port to add the I2C bus on
+ *
+ * Returns 0 on success or a negative error code on failure.
+ */
+-static int drm_dp_mst_register_i2c_bus(struct drm_dp_aux *aux)
++static int drm_dp_mst_register_i2c_bus(struct drm_dp_mst_port *port)
+ {
++ struct drm_dp_aux *aux = &port->aux;
++ struct device *parent_dev = port->mgr->dev->dev;
++
+ aux->ddc.algo = &drm_dp_mst_i2c_algo;
+ aux->ddc.algo_data = aux;
+ aux->ddc.retries = 3;
+
+ aux->ddc.class = I2C_CLASS_DDC;
+ aux->ddc.owner = THIS_MODULE;
+- aux->ddc.dev.parent = aux->dev;
+- aux->ddc.dev.of_node = aux->dev->of_node;
++ /* FIXME: set the kdev of the port's connector as parent */
++ aux->ddc.dev.parent = parent_dev;
++ aux->ddc.dev.of_node = parent_dev->of_node;
+
+- strlcpy(aux->ddc.name, aux->name ? aux->name : dev_name(aux->dev),
++ strlcpy(aux->ddc.name, aux->name ? aux->name : dev_name(parent_dev),
+ sizeof(aux->ddc.name));
+
+ return i2c_add_adapter(&aux->ddc);
+@@ -5429,11 +5434,11 @@ static int drm_dp_mst_register_i2c_bus(struct drm_dp_aux *aux)
+
+ /**
+ * drm_dp_mst_unregister_i2c_bus() - unregister an I2C-over-AUX adapter
+- * @aux: DisplayPort AUX channel
++ * @port: The port to remove the I2C bus from
+ */
+-static void drm_dp_mst_unregister_i2c_bus(struct drm_dp_aux *aux)
++static void drm_dp_mst_unregister_i2c_bus(struct drm_dp_mst_port *port)
+ {
+- i2c_del_adapter(&aux->ddc);
++ i2c_del_adapter(&port->aux.ddc);
+ }
+
+ /**
+diff --git a/drivers/gpu/drm/drm_panel_orientation_quirks.c b/drivers/gpu/drm/drm_panel_orientation_quirks.c
+index d00ea384dcbfe..58f5dc2f6dd52 100644
+--- a/drivers/gpu/drm/drm_panel_orientation_quirks.c
++++ b/drivers/gpu/drm/drm_panel_orientation_quirks.c
+@@ -121,6 +121,12 @@ static const struct dmi_system_id orientation_data[] = {
+ DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "T101HA"),
+ },
+ .driver_data = (void *)&lcd800x1280_rightside_up,
++ }, { /* Asus T103HAF */
++ .matches = {
++ DMI_EXACT_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
++ DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "T103HAF"),
++ },
++ .driver_data = (void *)&lcd800x1280_rightside_up,
+ }, { /* GPD MicroPC (generic strings, also match on bios date) */
+ .matches = {
+ DMI_EXACT_MATCH(DMI_SYS_VENDOR, "Default string"),
+diff --git a/drivers/gpu/drm/i915/gt/intel_gt.c b/drivers/gpu/drm/i915/gt/intel_gt.c
+index d09f7596cb98b..1c2f7a5b1e94a 100644
+--- a/drivers/gpu/drm/i915/gt/intel_gt.c
++++ b/drivers/gpu/drm/i915/gt/intel_gt.c
+@@ -656,6 +656,11 @@ void intel_gt_driver_unregister(struct intel_gt *gt)
+ void intel_gt_driver_release(struct intel_gt *gt)
+ {
+ struct i915_address_space *vm;
++ intel_wakeref_t wakeref;
++
++ /* Scrub all HW state upon release */
++ with_intel_runtime_pm(gt->uncore->rpm, wakeref)
++ __intel_gt_reset(gt, ALL_ENGINES);
+
+ vm = fetch_and_zero(>->vm);
+ if (vm) /* FIXME being called twice on error paths :( */
+diff --git a/drivers/gpu/drm/imx/imx-ldb.c b/drivers/gpu/drm/imx/imx-ldb.c
+index 8e209117b049a..819a858764d93 100644
+--- a/drivers/gpu/drm/imx/imx-ldb.c
++++ b/drivers/gpu/drm/imx/imx-ldb.c
+@@ -303,18 +303,19 @@ static void imx_ldb_encoder_disable(struct drm_encoder *encoder)
+ {
+ struct imx_ldb_channel *imx_ldb_ch = enc_to_imx_ldb_ch(encoder);
+ struct imx_ldb *ldb = imx_ldb_ch->ldb;
++ int dual = ldb->ldb_ctrl & LDB_SPLIT_MODE_EN;
+ int mux, ret;
+
+ drm_panel_disable(imx_ldb_ch->panel);
+
+- if (imx_ldb_ch == &ldb->channel[0])
++ if (imx_ldb_ch == &ldb->channel[0] || dual)
+ ldb->ldb_ctrl &= ~LDB_CH0_MODE_EN_MASK;
+- else if (imx_ldb_ch == &ldb->channel[1])
++ if (imx_ldb_ch == &ldb->channel[1] || dual)
+ ldb->ldb_ctrl &= ~LDB_CH1_MODE_EN_MASK;
+
+ regmap_write(ldb->regmap, IOMUXC_GPR2, ldb->ldb_ctrl);
+
+- if (ldb->ldb_ctrl & LDB_SPLIT_MODE_EN) {
++ if (dual) {
+ clk_disable_unprepare(ldb->clk[0]);
+ clk_disable_unprepare(ldb->clk[1]);
+ }
+diff --git a/drivers/gpu/drm/ingenic/ingenic-drm.c b/drivers/gpu/drm/ingenic/ingenic-drm.c
+index 548cc25ea4abe..e525260c31b2b 100644
+--- a/drivers/gpu/drm/ingenic/ingenic-drm.c
++++ b/drivers/gpu/drm/ingenic/ingenic-drm.c
+@@ -384,7 +384,7 @@ static void ingenic_drm_plane_atomic_update(struct drm_plane *plane,
+ addr = drm_fb_cma_get_gem_addr(state->fb, state, 0);
+ width = state->src_w >> 16;
+ height = state->src_h >> 16;
+- cpp = state->fb->format->cpp[plane->index];
++ cpp = state->fb->format->cpp[0];
+
+ priv->dma_hwdesc->addr = addr;
+ priv->dma_hwdesc->cmd = width * height * cpp / 4;
+diff --git a/drivers/gpu/drm/omapdrm/dss/dispc.c b/drivers/gpu/drm/omapdrm/dss/dispc.c
+index dbb90f2d2ccde..7782b163dd721 100644
+--- a/drivers/gpu/drm/omapdrm/dss/dispc.c
++++ b/drivers/gpu/drm/omapdrm/dss/dispc.c
+@@ -4936,6 +4936,7 @@ static int dispc_runtime_resume(struct device *dev)
+ static const struct dev_pm_ops dispc_pm_ops = {
+ .runtime_suspend = dispc_runtime_suspend,
+ .runtime_resume = dispc_runtime_resume,
++ SET_LATE_SYSTEM_SLEEP_PM_OPS(pm_runtime_force_suspend, pm_runtime_force_resume)
+ };
+
+ struct platform_driver omap_dispchw_driver = {
+diff --git a/drivers/gpu/drm/omapdrm/dss/dsi.c b/drivers/gpu/drm/omapdrm/dss/dsi.c
+index 79ddfbfd1b588..eeccf40bae416 100644
+--- a/drivers/gpu/drm/omapdrm/dss/dsi.c
++++ b/drivers/gpu/drm/omapdrm/dss/dsi.c
+@@ -5467,6 +5467,7 @@ static int dsi_runtime_resume(struct device *dev)
+ static const struct dev_pm_ops dsi_pm_ops = {
+ .runtime_suspend = dsi_runtime_suspend,
+ .runtime_resume = dsi_runtime_resume,
++ SET_LATE_SYSTEM_SLEEP_PM_OPS(pm_runtime_force_suspend, pm_runtime_force_resume)
+ };
+
+ struct platform_driver omap_dsihw_driver = {
+diff --git a/drivers/gpu/drm/omapdrm/dss/dss.c b/drivers/gpu/drm/omapdrm/dss/dss.c
+index 4d5739fa4a5d8..6ccbc29c4ce4b 100644
+--- a/drivers/gpu/drm/omapdrm/dss/dss.c
++++ b/drivers/gpu/drm/omapdrm/dss/dss.c
+@@ -1614,6 +1614,7 @@ static int dss_runtime_resume(struct device *dev)
+ static const struct dev_pm_ops dss_pm_ops = {
+ .runtime_suspend = dss_runtime_suspend,
+ .runtime_resume = dss_runtime_resume,
++ SET_LATE_SYSTEM_SLEEP_PM_OPS(pm_runtime_force_suspend, pm_runtime_force_resume)
+ };
+
+ struct platform_driver omap_dsshw_driver = {
+diff --git a/drivers/gpu/drm/omapdrm/dss/venc.c b/drivers/gpu/drm/omapdrm/dss/venc.c
+index 766553bb2f87b..4d3e7a72435f3 100644
+--- a/drivers/gpu/drm/omapdrm/dss/venc.c
++++ b/drivers/gpu/drm/omapdrm/dss/venc.c
+@@ -945,6 +945,7 @@ static int venc_runtime_resume(struct device *dev)
+ static const struct dev_pm_ops venc_pm_ops = {
+ .runtime_suspend = venc_runtime_suspend,
+ .runtime_resume = venc_runtime_resume,
++ SET_LATE_SYSTEM_SLEEP_PM_OPS(pm_runtime_force_suspend, pm_runtime_force_resume)
+ };
+
+ static const struct of_device_id venc_of_match[] = {
+diff --git a/drivers/gpu/drm/panfrost/panfrost_gem.c b/drivers/gpu/drm/panfrost/panfrost_gem.c
+index 17b654e1eb942..556181ea4a073 100644
+--- a/drivers/gpu/drm/panfrost/panfrost_gem.c
++++ b/drivers/gpu/drm/panfrost/panfrost_gem.c
+@@ -46,7 +46,7 @@ static void panfrost_gem_free_object(struct drm_gem_object *obj)
+ sg_free_table(&bo->sgts[i]);
+ }
+ }
+- kfree(bo->sgts);
++ kvfree(bo->sgts);
+ }
+
+ drm_gem_shmem_free_object(obj);
+diff --git a/drivers/gpu/drm/panfrost/panfrost_mmu.c b/drivers/gpu/drm/panfrost/panfrost_mmu.c
+index ed28aeba6d59a..3c8ae7411c800 100644
+--- a/drivers/gpu/drm/panfrost/panfrost_mmu.c
++++ b/drivers/gpu/drm/panfrost/panfrost_mmu.c
+@@ -486,7 +486,7 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
+ pages = kvmalloc_array(bo->base.base.size >> PAGE_SHIFT,
+ sizeof(struct page *), GFP_KERNEL | __GFP_ZERO);
+ if (!pages) {
+- kfree(bo->sgts);
++ kvfree(bo->sgts);
+ bo->sgts = NULL;
+ mutex_unlock(&bo->base.pages_lock);
+ ret = -ENOMEM;
+diff --git a/drivers/gpu/drm/tidss/tidss_kms.c b/drivers/gpu/drm/tidss/tidss_kms.c
+index 7d419960b0309..74467f6eafee8 100644
+--- a/drivers/gpu/drm/tidss/tidss_kms.c
++++ b/drivers/gpu/drm/tidss/tidss_kms.c
+@@ -154,7 +154,7 @@ static int tidss_dispc_modeset_init(struct tidss_device *tidss)
+ break;
+ case DISPC_VP_DPI:
+ enc_type = DRM_MODE_ENCODER_DPI;
+- conn_type = DRM_MODE_CONNECTOR_LVDS;
++ conn_type = DRM_MODE_CONNECTOR_DPI;
+ break;
+ default:
+ WARN_ON(1);
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c b/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c
+index 04d66592f6050..b7a9cee69ea72 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c
+@@ -2578,7 +2578,7 @@ int vmw_kms_fbdev_init_data(struct vmw_private *dev_priv,
+ ++i;
+ }
+
+- if (i != unit) {
++ if (&con->head == &dev_priv->dev->mode_config.connector_list) {
+ DRM_ERROR("Could not find initial display unit.\n");
+ ret = -EINVAL;
+ goto out_unlock;
+@@ -2602,13 +2602,13 @@ int vmw_kms_fbdev_init_data(struct vmw_private *dev_priv,
+ break;
+ }
+
+- if (mode->type & DRM_MODE_TYPE_PREFERRED)
+- *p_mode = mode;
+- else {
++ if (&mode->head == &con->modes) {
+ WARN_ONCE(true, "Could not find initial preferred mode.\n");
+ *p_mode = list_first_entry(&con->modes,
+ struct drm_display_mode,
+ head);
++ } else {
++ *p_mode = mode;
+ }
+
+ out_unlock:
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_ldu.c b/drivers/gpu/drm/vmwgfx/vmwgfx_ldu.c
+index 16dafff5cab19..009f1742bed51 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_ldu.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_ldu.c
+@@ -81,7 +81,7 @@ static int vmw_ldu_commit_list(struct vmw_private *dev_priv)
+ struct vmw_legacy_display_unit *entry;
+ struct drm_framebuffer *fb = NULL;
+ struct drm_crtc *crtc = NULL;
+- int i = 0;
++ int i;
+
+ /* If there is no display topology the host just assumes
+ * that the guest will set the same layout as the host.
+@@ -92,12 +92,11 @@ static int vmw_ldu_commit_list(struct vmw_private *dev_priv)
+ crtc = &entry->base.crtc;
+ w = max(w, crtc->x + crtc->mode.hdisplay);
+ h = max(h, crtc->y + crtc->mode.vdisplay);
+- i++;
+ }
+
+ if (crtc == NULL)
+ return 0;
+- fb = entry->base.crtc.primary->state->fb;
++ fb = crtc->primary->state->fb;
+
+ return vmw_kms_write_svga(dev_priv, w, h, fb->pitches[0],
+ fb->format->cpp[0] * 8,
+diff --git a/drivers/gpu/ipu-v3/ipu-image-convert.c b/drivers/gpu/ipu-v3/ipu-image-convert.c
+index eeca50d9a1ee4..aa1d4b6d278f7 100644
+--- a/drivers/gpu/ipu-v3/ipu-image-convert.c
++++ b/drivers/gpu/ipu-v3/ipu-image-convert.c
+@@ -137,6 +137,17 @@ struct ipu_image_convert_ctx;
+ struct ipu_image_convert_chan;
+ struct ipu_image_convert_priv;
+
++enum eof_irq_mask {
++ EOF_IRQ_IN = BIT(0),
++ EOF_IRQ_ROT_IN = BIT(1),
++ EOF_IRQ_OUT = BIT(2),
++ EOF_IRQ_ROT_OUT = BIT(3),
++};
++
++#define EOF_IRQ_COMPLETE (EOF_IRQ_IN | EOF_IRQ_OUT)
++#define EOF_IRQ_ROT_COMPLETE (EOF_IRQ_IN | EOF_IRQ_OUT | \
++ EOF_IRQ_ROT_IN | EOF_IRQ_ROT_OUT)
++
+ struct ipu_image_convert_ctx {
+ struct ipu_image_convert_chan *chan;
+
+@@ -173,6 +184,9 @@ struct ipu_image_convert_ctx {
+ /* where to place converted tile in dest image */
+ unsigned int out_tile_map[MAX_TILES];
+
++ /* mask of completed EOF irqs at every tile conversion */
++ enum eof_irq_mask eof_mask;
++
+ struct list_head list;
+ };
+
+@@ -189,6 +203,8 @@ struct ipu_image_convert_chan {
+ struct ipuv3_channel *rotation_out_chan;
+
+ /* the IPU end-of-frame irqs */
++ int in_eof_irq;
++ int rot_in_eof_irq;
+ int out_eof_irq;
+ int rot_out_eof_irq;
+
+@@ -1380,6 +1396,9 @@ static int convert_start(struct ipu_image_convert_run *run, unsigned int tile)
+ dev_dbg(priv->ipu->dev, "%s: task %u: starting ctx %p run %p tile %u -> %u\n",
+ __func__, chan->ic_task, ctx, run, tile, dst_tile);
+
++ /* clear EOF irq mask */
++ ctx->eof_mask = 0;
++
+ if (ipu_rot_mode_is_irt(ctx->rot_mode)) {
+ /* swap width/height for resizer */
+ dest_width = d_image->tile[dst_tile].height;
+@@ -1615,7 +1634,7 @@ static bool ic_settings_changed(struct ipu_image_convert_ctx *ctx)
+ }
+
+ /* hold irqlock when calling */
+-static irqreturn_t do_irq(struct ipu_image_convert_run *run)
++static irqreturn_t do_tile_complete(struct ipu_image_convert_run *run)
+ {
+ struct ipu_image_convert_ctx *ctx = run->ctx;
+ struct ipu_image_convert_chan *chan = ctx->chan;
+@@ -1700,6 +1719,7 @@ static irqreturn_t do_irq(struct ipu_image_convert_run *run)
+ ctx->cur_buf_num ^= 1;
+ }
+
++ ctx->eof_mask = 0; /* clear EOF irq mask for next tile */
+ ctx->next_tile++;
+ return IRQ_HANDLED;
+ done:
+@@ -1709,13 +1729,15 @@ done:
+ return IRQ_WAKE_THREAD;
+ }
+
+-static irqreturn_t norotate_irq(int irq, void *data)
++static irqreturn_t eof_irq(int irq, void *data)
+ {
+ struct ipu_image_convert_chan *chan = data;
++ struct ipu_image_convert_priv *priv = chan->priv;
+ struct ipu_image_convert_ctx *ctx;
+ struct ipu_image_convert_run *run;
++ irqreturn_t ret = IRQ_HANDLED;
++ bool tile_complete = false;
+ unsigned long flags;
+- irqreturn_t ret;
+
+ spin_lock_irqsave(&chan->irqlock, flags);
+
+@@ -1728,46 +1750,33 @@ static irqreturn_t norotate_irq(int irq, void *data)
+
+ ctx = run->ctx;
+
+- if (ipu_rot_mode_is_irt(ctx->rot_mode)) {
+- /* this is a rotation operation, just ignore */
+- spin_unlock_irqrestore(&chan->irqlock, flags);
+- return IRQ_HANDLED;
+- }
+-
+- ret = do_irq(run);
+-out:
+- spin_unlock_irqrestore(&chan->irqlock, flags);
+- return ret;
+-}
+-
+-static irqreturn_t rotate_irq(int irq, void *data)
+-{
+- struct ipu_image_convert_chan *chan = data;
+- struct ipu_image_convert_priv *priv = chan->priv;
+- struct ipu_image_convert_ctx *ctx;
+- struct ipu_image_convert_run *run;
+- unsigned long flags;
+- irqreturn_t ret;
+-
+- spin_lock_irqsave(&chan->irqlock, flags);
+-
+- /* get current run and its context */
+- run = chan->current_run;
+- if (!run) {
++ if (irq == chan->in_eof_irq) {
++ ctx->eof_mask |= EOF_IRQ_IN;
++ } else if (irq == chan->out_eof_irq) {
++ ctx->eof_mask |= EOF_IRQ_OUT;
++ } else if (irq == chan->rot_in_eof_irq ||
++ irq == chan->rot_out_eof_irq) {
++ if (!ipu_rot_mode_is_irt(ctx->rot_mode)) {
++ /* this was NOT a rotation op, shouldn't happen */
++ dev_err(priv->ipu->dev,
++ "Unexpected rotation interrupt\n");
++ goto out;
++ }
++ ctx->eof_mask |= (irq == chan->rot_in_eof_irq) ?
++ EOF_IRQ_ROT_IN : EOF_IRQ_ROT_OUT;
++ } else {
++ dev_err(priv->ipu->dev, "Received unknown irq %d\n", irq);
+ ret = IRQ_NONE;
+ goto out;
+ }
+
+- ctx = run->ctx;
+-
+- if (!ipu_rot_mode_is_irt(ctx->rot_mode)) {
+- /* this was NOT a rotation operation, shouldn't happen */
+- dev_err(priv->ipu->dev, "Unexpected rotation interrupt\n");
+- spin_unlock_irqrestore(&chan->irqlock, flags);
+- return IRQ_HANDLED;
+- }
++ if (ipu_rot_mode_is_irt(ctx->rot_mode))
++ tile_complete = (ctx->eof_mask == EOF_IRQ_ROT_COMPLETE);
++ else
++ tile_complete = (ctx->eof_mask == EOF_IRQ_COMPLETE);
+
+- ret = do_irq(run);
++ if (tile_complete)
++ ret = do_tile_complete(run);
+ out:
+ spin_unlock_irqrestore(&chan->irqlock, flags);
+ return ret;
+@@ -1801,6 +1810,10 @@ static void force_abort(struct ipu_image_convert_ctx *ctx)
+
+ static void release_ipu_resources(struct ipu_image_convert_chan *chan)
+ {
++ if (chan->in_eof_irq >= 0)
++ free_irq(chan->in_eof_irq, chan);
++ if (chan->rot_in_eof_irq >= 0)
++ free_irq(chan->rot_in_eof_irq, chan);
+ if (chan->out_eof_irq >= 0)
+ free_irq(chan->out_eof_irq, chan);
+ if (chan->rot_out_eof_irq >= 0)
+@@ -1819,7 +1832,27 @@ static void release_ipu_resources(struct ipu_image_convert_chan *chan)
+
+ chan->in_chan = chan->out_chan = chan->rotation_in_chan =
+ chan->rotation_out_chan = NULL;
+- chan->out_eof_irq = chan->rot_out_eof_irq = -1;
++ chan->in_eof_irq = -1;
++ chan->rot_in_eof_irq = -1;
++ chan->out_eof_irq = -1;
++ chan->rot_out_eof_irq = -1;
++}
++
++static int get_eof_irq(struct ipu_image_convert_chan *chan,
++ struct ipuv3_channel *channel)
++{
++ struct ipu_image_convert_priv *priv = chan->priv;
++ int ret, irq;
++
++ irq = ipu_idmac_channel_irq(priv->ipu, channel, IPU_IRQ_EOF);
++
++ ret = request_threaded_irq(irq, eof_irq, do_bh, 0, "ipu-ic", chan);
++ if (ret < 0) {
++ dev_err(priv->ipu->dev, "could not acquire irq %d\n", irq);
++ return ret;
++ }
++
++ return irq;
+ }
+
+ static int get_ipu_resources(struct ipu_image_convert_chan *chan)
+@@ -1855,31 +1888,33 @@ static int get_ipu_resources(struct ipu_image_convert_chan *chan)
+ }
+
+ /* acquire the EOF interrupts */
+- chan->out_eof_irq = ipu_idmac_channel_irq(priv->ipu,
+- chan->out_chan,
+- IPU_IRQ_EOF);
++ ret = get_eof_irq(chan, chan->in_chan);
++ if (ret < 0) {
++ chan->in_eof_irq = -1;
++ goto err;
++ }
++ chan->in_eof_irq = ret;
+
+- ret = request_threaded_irq(chan->out_eof_irq, norotate_irq, do_bh,
+- 0, "ipu-ic", chan);
++ ret = get_eof_irq(chan, chan->rotation_in_chan);
+ if (ret < 0) {
+- dev_err(priv->ipu->dev, "could not acquire irq %d\n",
+- chan->out_eof_irq);
+- chan->out_eof_irq = -1;
++ chan->rot_in_eof_irq = -1;
+ goto err;
+ }
++ chan->rot_in_eof_irq = ret;
+
+- chan->rot_out_eof_irq = ipu_idmac_channel_irq(priv->ipu,
+- chan->rotation_out_chan,
+- IPU_IRQ_EOF);
++ ret = get_eof_irq(chan, chan->out_chan);
++ if (ret < 0) {
++ chan->out_eof_irq = -1;
++ goto err;
++ }
++ chan->out_eof_irq = ret;
+
+- ret = request_threaded_irq(chan->rot_out_eof_irq, rotate_irq, do_bh,
+- 0, "ipu-ic", chan);
++ ret = get_eof_irq(chan, chan->rotation_out_chan);
+ if (ret < 0) {
+- dev_err(priv->ipu->dev, "could not acquire irq %d\n",
+- chan->rot_out_eof_irq);
+ chan->rot_out_eof_irq = -1;
+ goto err;
+ }
++ chan->rot_out_eof_irq = ret;
+
+ return 0;
+ err:
+@@ -2458,6 +2493,8 @@ int ipu_image_convert_init(struct ipu_soc *ipu, struct device *dev)
+ chan->ic_task = i;
+ chan->priv = priv;
+ chan->dma_ch = &image_convert_dma_chan[i];
++ chan->in_eof_irq = -1;
++ chan->rot_in_eof_irq = -1;
+ chan->out_eof_irq = -1;
+ chan->rot_out_eof_irq = -1;
+
+diff --git a/drivers/i2c/busses/i2c-bcm-iproc.c b/drivers/i2c/busses/i2c-bcm-iproc.c
+index d091a12596ad2..85aee6d365b40 100644
+--- a/drivers/i2c/busses/i2c-bcm-iproc.c
++++ b/drivers/i2c/busses/i2c-bcm-iproc.c
+@@ -1074,7 +1074,7 @@ static int bcm_iproc_i2c_unreg_slave(struct i2c_client *slave)
+ if (!iproc_i2c->slave)
+ return -EINVAL;
+
+- iproc_i2c->slave = NULL;
++ disable_irq(iproc_i2c->irq);
+
+ /* disable all slave interrupts */
+ tmp = iproc_i2c_rd_reg(iproc_i2c, IE_OFFSET);
+@@ -1087,6 +1087,17 @@ static int bcm_iproc_i2c_unreg_slave(struct i2c_client *slave)
+ tmp &= ~BIT(S_CFG_EN_NIC_SMB_ADDR3_SHIFT);
+ iproc_i2c_wr_reg(iproc_i2c, S_CFG_SMBUS_ADDR_OFFSET, tmp);
+
++ /* flush TX/RX FIFOs */
++ tmp = (BIT(S_FIFO_RX_FLUSH_SHIFT) | BIT(S_FIFO_TX_FLUSH_SHIFT));
++ iproc_i2c_wr_reg(iproc_i2c, S_FIFO_CTRL_OFFSET, tmp);
++
++ /* clear all pending slave interrupts */
++ iproc_i2c_wr_reg(iproc_i2c, IS_OFFSET, ISR_MASK_SLAVE);
++
++ iproc_i2c->slave = NULL;
++
++ enable_irq(iproc_i2c->irq);
++
+ return 0;
+ }
+
+diff --git a/drivers/i2c/busses/i2c-rcar.c b/drivers/i2c/busses/i2c-rcar.c
+index 50dd98803ca0c..5615e7c43b436 100644
+--- a/drivers/i2c/busses/i2c-rcar.c
++++ b/drivers/i2c/busses/i2c-rcar.c
+@@ -583,13 +583,14 @@ static bool rcar_i2c_slave_irq(struct rcar_i2c_priv *priv)
+ rcar_i2c_write(priv, ICSIER, SDR | SSR | SAR);
+ }
+
+- rcar_i2c_write(priv, ICSSR, ~SAR & 0xff);
++ /* Clear SSR, too, because of old STOPs to other clients than us */
++ rcar_i2c_write(priv, ICSSR, ~(SAR | SSR) & 0xff);
+ }
+
+ /* master sent stop */
+ if (ssr_filtered & SSR) {
+ i2c_slave_event(priv->slave, I2C_SLAVE_STOP, &value);
+- rcar_i2c_write(priv, ICSIER, SAR | SSR);
++ rcar_i2c_write(priv, ICSIER, SAR);
+ rcar_i2c_write(priv, ICSSR, ~SSR & 0xff);
+ }
+
+@@ -853,7 +854,7 @@ static int rcar_reg_slave(struct i2c_client *slave)
+ priv->slave = slave;
+ rcar_i2c_write(priv, ICSAR, slave->addr);
+ rcar_i2c_write(priv, ICSSR, 0);
+- rcar_i2c_write(priv, ICSIER, SAR | SSR);
++ rcar_i2c_write(priv, ICSIER, SAR);
+ rcar_i2c_write(priv, ICSCR, SIE | SDBS);
+
+ return 0;
+@@ -865,12 +866,14 @@ static int rcar_unreg_slave(struct i2c_client *slave)
+
+ WARN_ON(!priv->slave);
+
+- /* disable irqs and ensure none is running before clearing ptr */
++ /* ensure no irq is running before clearing ptr */
++ disable_irq(priv->irq);
+ rcar_i2c_write(priv, ICSIER, 0);
+- rcar_i2c_write(priv, ICSCR, 0);
++ rcar_i2c_write(priv, ICSSR, 0);
++ enable_irq(priv->irq);
++ rcar_i2c_write(priv, ICSCR, SDBS);
+ rcar_i2c_write(priv, ICSAR, 0); /* Gen2: must be 0 if not using slave */
+
+- synchronize_irq(priv->irq);
+ priv->slave = NULL;
+
+ pm_runtime_put(rcar_i2c_priv_to_dev(priv));
+diff --git a/drivers/iio/dac/ad5592r-base.c b/drivers/iio/dac/ad5592r-base.c
+index e2110113e8848..6044711feea3c 100644
+--- a/drivers/iio/dac/ad5592r-base.c
++++ b/drivers/iio/dac/ad5592r-base.c
+@@ -415,7 +415,7 @@ static int ad5592r_read_raw(struct iio_dev *iio_dev,
+ s64 tmp = *val * (3767897513LL / 25LL);
+ *val = div_s64_rem(tmp, 1000000000LL, val2);
+
+- ret = IIO_VAL_INT_PLUS_MICRO;
++ return IIO_VAL_INT_PLUS_MICRO;
+ } else {
+ int mult;
+
+@@ -446,7 +446,7 @@ static int ad5592r_read_raw(struct iio_dev *iio_dev,
+ ret = IIO_VAL_INT;
+ break;
+ default:
+- ret = -EINVAL;
++ return -EINVAL;
+ }
+
+ unlock:
+diff --git a/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx.h b/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx.h
+index 41cb20cb3809a..6c1fe72f2b807 100644
+--- a/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx.h
++++ b/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx.h
+@@ -436,8 +436,7 @@ int st_lsm6dsx_update_watermark(struct st_lsm6dsx_sensor *sensor,
+ u16 watermark);
+ int st_lsm6dsx_update_fifo(struct st_lsm6dsx_sensor *sensor, bool enable);
+ int st_lsm6dsx_flush_fifo(struct st_lsm6dsx_hw *hw);
+-int st_lsm6dsx_set_fifo_mode(struct st_lsm6dsx_hw *hw,
+- enum st_lsm6dsx_fifo_mode fifo_mode);
++int st_lsm6dsx_resume_fifo(struct st_lsm6dsx_hw *hw);
+ int st_lsm6dsx_read_fifo(struct st_lsm6dsx_hw *hw);
+ int st_lsm6dsx_read_tagged_fifo(struct st_lsm6dsx_hw *hw);
+ int st_lsm6dsx_check_odr(struct st_lsm6dsx_sensor *sensor, u32 odr, u8 *val);
+diff --git a/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_buffer.c b/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_buffer.c
+index afd00daeefb2d..7de10bd636ea0 100644
+--- a/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_buffer.c
++++ b/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_buffer.c
+@@ -184,8 +184,8 @@ static int st_lsm6dsx_update_decimators(struct st_lsm6dsx_hw *hw)
+ return err;
+ }
+
+-int st_lsm6dsx_set_fifo_mode(struct st_lsm6dsx_hw *hw,
+- enum st_lsm6dsx_fifo_mode fifo_mode)
++static int st_lsm6dsx_set_fifo_mode(struct st_lsm6dsx_hw *hw,
++ enum st_lsm6dsx_fifo_mode fifo_mode)
+ {
+ unsigned int data;
+
+@@ -302,6 +302,18 @@ static int st_lsm6dsx_reset_hw_ts(struct st_lsm6dsx_hw *hw)
+ return 0;
+ }
+
++int st_lsm6dsx_resume_fifo(struct st_lsm6dsx_hw *hw)
++{
++ int err;
++
++ /* reset hw ts counter */
++ err = st_lsm6dsx_reset_hw_ts(hw);
++ if (err < 0)
++ return err;
++
++ return st_lsm6dsx_set_fifo_mode(hw, ST_LSM6DSX_FIFO_CONT);
++}
++
+ /*
+ * Set max bulk read to ST_LSM6DSX_MAX_WORD_LEN/ST_LSM6DSX_MAX_TAGGED_WORD_LEN
+ * in order to avoid a kmalloc for each bus access
+@@ -675,12 +687,7 @@ int st_lsm6dsx_update_fifo(struct st_lsm6dsx_sensor *sensor, bool enable)
+ goto out;
+
+ if (fifo_mask) {
+- /* reset hw ts counter */
+- err = st_lsm6dsx_reset_hw_ts(hw);
+- if (err < 0)
+- goto out;
+-
+- err = st_lsm6dsx_set_fifo_mode(hw, ST_LSM6DSX_FIFO_CONT);
++ err = st_lsm6dsx_resume_fifo(hw);
+ if (err < 0)
+ goto out;
+ }
+diff --git a/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_core.c b/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_core.c
+index 4426524b59f28..fa02e90e95c37 100644
+--- a/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_core.c
++++ b/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_core.c
+@@ -2451,7 +2451,7 @@ static int __maybe_unused st_lsm6dsx_resume(struct device *dev)
+ }
+
+ if (hw->fifo_mask)
+- err = st_lsm6dsx_set_fifo_mode(hw, ST_LSM6DSX_FIFO_CONT);
++ err = st_lsm6dsx_resume_fifo(hw);
+
+ return err;
+ }
+diff --git a/drivers/infiniband/core/counters.c b/drivers/infiniband/core/counters.c
+index 738d1faf4bba5..417ebf4d8ba9b 100644
+--- a/drivers/infiniband/core/counters.c
++++ b/drivers/infiniband/core/counters.c
+@@ -288,7 +288,7 @@ int rdma_counter_bind_qp_auto(struct ib_qp *qp, u8 port)
+ struct rdma_counter *counter;
+ int ret;
+
+- if (!qp->res.valid)
++ if (!qp->res.valid || rdma_is_kernel_res(&qp->res))
+ return 0;
+
+ if (!rdma_is_port_valid(dev, port))
+@@ -483,7 +483,7 @@ int rdma_counter_bind_qpn(struct ib_device *dev, u8 port,
+ goto err;
+ }
+
+- if (counter->res.task != qp->res.task) {
++ if (rdma_is_kernel_res(&counter->res) != rdma_is_kernel_res(&qp->res)) {
+ ret = -EINVAL;
+ goto err_task;
+ }
+diff --git a/drivers/infiniband/core/uverbs_cmd.c b/drivers/infiniband/core/uverbs_cmd.c
+index d6e9cc94dd900..b2eb87d18e602 100644
+--- a/drivers/infiniband/core/uverbs_cmd.c
++++ b/drivers/infiniband/core/uverbs_cmd.c
+@@ -772,6 +772,7 @@ static int ib_uverbs_reg_mr(struct uverbs_attr_bundle *attrs)
+ mr->uobject = uobj;
+ atomic_inc(&pd->usecnt);
+ mr->res.type = RDMA_RESTRACK_MR;
++ mr->iova = cmd.hca_va;
+ rdma_restrack_uadd(&mr->res);
+
+ uobj->object = mr;
+@@ -863,6 +864,9 @@ static int ib_uverbs_rereg_mr(struct uverbs_attr_bundle *attrs)
+ atomic_dec(&old_pd->usecnt);
+ }
+
++ if (cmd.flags & IB_MR_REREG_TRANS)
++ mr->iova = cmd.hca_va;
++
+ memset(&resp, 0, sizeof(resp));
+ resp.lkey = mr->lkey;
+ resp.rkey = mr->rkey;
+diff --git a/drivers/infiniband/hw/cxgb4/mem.c b/drivers/infiniband/hw/cxgb4/mem.c
+index 962dc97a8ff2b..1e4f4e5255980 100644
+--- a/drivers/infiniband/hw/cxgb4/mem.c
++++ b/drivers/infiniband/hw/cxgb4/mem.c
+@@ -399,7 +399,6 @@ static int finish_mem_reg(struct c4iw_mr *mhp, u32 stag)
+ mmid = stag >> 8;
+ mhp->ibmr.rkey = mhp->ibmr.lkey = stag;
+ mhp->ibmr.length = mhp->attr.len;
+- mhp->ibmr.iova = mhp->attr.va_fbo;
+ mhp->ibmr.page_size = 1U << (mhp->attr.page_size + 12);
+ pr_debug("mmid 0x%x mhp %p\n", mmid, mhp);
+ return xa_insert_irq(&mhp->rhp->mrs, mmid, mhp, GFP_KERNEL);
+diff --git a/drivers/infiniband/hw/mlx4/mr.c b/drivers/infiniband/hw/mlx4/mr.c
+index b0121c90c561f..184a281f89ec8 100644
+--- a/drivers/infiniband/hw/mlx4/mr.c
++++ b/drivers/infiniband/hw/mlx4/mr.c
+@@ -439,7 +439,6 @@ struct ib_mr *mlx4_ib_reg_user_mr(struct ib_pd *pd, u64 start, u64 length,
+
+ mr->ibmr.rkey = mr->ibmr.lkey = mr->mmr.key;
+ mr->ibmr.length = length;
+- mr->ibmr.iova = virt_addr;
+ mr->ibmr.page_size = 1U << shift;
+
+ return &mr->ibmr;
+diff --git a/drivers/infiniband/ulp/ipoib/ipoib.h b/drivers/infiniband/ulp/ipoib/ipoib.h
+index 9a3379c49541f..9ce6a36fe48ed 100644
+--- a/drivers/infiniband/ulp/ipoib/ipoib.h
++++ b/drivers/infiniband/ulp/ipoib/ipoib.h
+@@ -515,7 +515,7 @@ void ipoib_ib_dev_cleanup(struct net_device *dev);
+
+ int ipoib_ib_dev_open_default(struct net_device *dev);
+ int ipoib_ib_dev_open(struct net_device *dev);
+-int ipoib_ib_dev_stop(struct net_device *dev);
++void ipoib_ib_dev_stop(struct net_device *dev);
+ void ipoib_ib_dev_up(struct net_device *dev);
+ void ipoib_ib_dev_down(struct net_device *dev);
+ int ipoib_ib_dev_stop_default(struct net_device *dev);
+diff --git a/drivers/infiniband/ulp/ipoib/ipoib_ib.c b/drivers/infiniband/ulp/ipoib/ipoib_ib.c
+index da3c5315bbb51..494f413dc3c6c 100644
+--- a/drivers/infiniband/ulp/ipoib/ipoib_ib.c
++++ b/drivers/infiniband/ulp/ipoib/ipoib_ib.c
+@@ -670,13 +670,12 @@ int ipoib_send(struct net_device *dev, struct sk_buff *skb,
+ return rc;
+ }
+
+-static void __ipoib_reap_ah(struct net_device *dev)
++static void ipoib_reap_dead_ahs(struct ipoib_dev_priv *priv)
+ {
+- struct ipoib_dev_priv *priv = ipoib_priv(dev);
+ struct ipoib_ah *ah, *tah;
+ unsigned long flags;
+
+- netif_tx_lock_bh(dev);
++ netif_tx_lock_bh(priv->dev);
+ spin_lock_irqsave(&priv->lock, flags);
+
+ list_for_each_entry_safe(ah, tah, &priv->dead_ahs, list)
+@@ -687,37 +686,37 @@ static void __ipoib_reap_ah(struct net_device *dev)
+ }
+
+ spin_unlock_irqrestore(&priv->lock, flags);
+- netif_tx_unlock_bh(dev);
++ netif_tx_unlock_bh(priv->dev);
+ }
+
+ void ipoib_reap_ah(struct work_struct *work)
+ {
+ struct ipoib_dev_priv *priv =
+ container_of(work, struct ipoib_dev_priv, ah_reap_task.work);
+- struct net_device *dev = priv->dev;
+
+- __ipoib_reap_ah(dev);
++ ipoib_reap_dead_ahs(priv);
+
+ if (!test_bit(IPOIB_STOP_REAPER, &priv->flags))
+ queue_delayed_work(priv->wq, &priv->ah_reap_task,
+ round_jiffies_relative(HZ));
+ }
+
+-static void ipoib_flush_ah(struct net_device *dev)
++static void ipoib_start_ah_reaper(struct ipoib_dev_priv *priv)
+ {
+- struct ipoib_dev_priv *priv = ipoib_priv(dev);
+-
+- cancel_delayed_work(&priv->ah_reap_task);
+- flush_workqueue(priv->wq);
+- ipoib_reap_ah(&priv->ah_reap_task.work);
++ clear_bit(IPOIB_STOP_REAPER, &priv->flags);
++ queue_delayed_work(priv->wq, &priv->ah_reap_task,
++ round_jiffies_relative(HZ));
+ }
+
+-static void ipoib_stop_ah(struct net_device *dev)
++static void ipoib_stop_ah_reaper(struct ipoib_dev_priv *priv)
+ {
+- struct ipoib_dev_priv *priv = ipoib_priv(dev);
+-
+ set_bit(IPOIB_STOP_REAPER, &priv->flags);
+- ipoib_flush_ah(dev);
++ cancel_delayed_work(&priv->ah_reap_task);
++ /*
++ * After ipoib_stop_ah_reaper() we always go through
++ * ipoib_reap_dead_ahs() which ensures the work is really stopped and
++ * does a final flush out of the dead_ah's list
++ */
+ }
+
+ static int recvs_pending(struct net_device *dev)
+@@ -846,18 +845,6 @@ timeout:
+ return 0;
+ }
+
+-int ipoib_ib_dev_stop(struct net_device *dev)
+-{
+- struct ipoib_dev_priv *priv = ipoib_priv(dev);
+-
+- priv->rn_ops->ndo_stop(dev);
+-
+- clear_bit(IPOIB_FLAG_INITIALIZED, &priv->flags);
+- ipoib_flush_ah(dev);
+-
+- return 0;
+-}
+-
+ int ipoib_ib_dev_open_default(struct net_device *dev)
+ {
+ struct ipoib_dev_priv *priv = ipoib_priv(dev);
+@@ -901,10 +888,7 @@ int ipoib_ib_dev_open(struct net_device *dev)
+ return -1;
+ }
+
+- clear_bit(IPOIB_STOP_REAPER, &priv->flags);
+- queue_delayed_work(priv->wq, &priv->ah_reap_task,
+- round_jiffies_relative(HZ));
+-
++ ipoib_start_ah_reaper(priv);
+ if (priv->rn_ops->ndo_open(dev)) {
+ pr_warn("%s: Failed to open dev\n", dev->name);
+ goto dev_stop;
+@@ -915,13 +899,20 @@ int ipoib_ib_dev_open(struct net_device *dev)
+ return 0;
+
+ dev_stop:
+- set_bit(IPOIB_STOP_REAPER, &priv->flags);
+- cancel_delayed_work(&priv->ah_reap_task);
+- set_bit(IPOIB_FLAG_INITIALIZED, &priv->flags);
+- ipoib_ib_dev_stop(dev);
++ ipoib_stop_ah_reaper(priv);
+ return -1;
+ }
+
++void ipoib_ib_dev_stop(struct net_device *dev)
++{
++ struct ipoib_dev_priv *priv = ipoib_priv(dev);
++
++ priv->rn_ops->ndo_stop(dev);
++
++ clear_bit(IPOIB_FLAG_INITIALIZED, &priv->flags);
++ ipoib_stop_ah_reaper(priv);
++}
++
+ void ipoib_pkey_dev_check_presence(struct net_device *dev)
+ {
+ struct ipoib_dev_priv *priv = ipoib_priv(dev);
+@@ -1232,7 +1223,7 @@ static void __ipoib_ib_dev_flush(struct ipoib_dev_priv *priv,
+ ipoib_mcast_dev_flush(dev);
+ if (oper_up)
+ set_bit(IPOIB_FLAG_OPER_UP, &priv->flags);
+- ipoib_flush_ah(dev);
++ ipoib_reap_dead_ahs(priv);
+ }
+
+ if (level >= IPOIB_FLUSH_NORMAL)
+@@ -1307,7 +1298,7 @@ void ipoib_ib_dev_cleanup(struct net_device *dev)
+ * the neighbor garbage collection is stopped and reaped.
+ * That should all be done now, so make a final ah flush.
+ */
+- ipoib_stop_ah(dev);
++ ipoib_reap_dead_ahs(priv);
+
+ clear_bit(IPOIB_PKEY_ASSIGNED, &priv->flags);
+
+diff --git a/drivers/infiniband/ulp/ipoib/ipoib_main.c b/drivers/infiniband/ulp/ipoib/ipoib_main.c
+index ceec24d451858..29ad4129d2f48 100644
+--- a/drivers/infiniband/ulp/ipoib/ipoib_main.c
++++ b/drivers/infiniband/ulp/ipoib/ipoib_main.c
+@@ -1975,6 +1975,8 @@ static void ipoib_ndo_uninit(struct net_device *dev)
+
+ /* no more works over the priv->wq */
+ if (priv->wq) {
++ /* See ipoib_mcast_carrier_on_task() */
++ WARN_ON(test_bit(IPOIB_FLAG_OPER_UP, &priv->flags));
+ flush_workqueue(priv->wq);
+ destroy_workqueue(priv->wq);
+ priv->wq = NULL;
+diff --git a/drivers/input/mouse/sentelic.c b/drivers/input/mouse/sentelic.c
+index e99d9bf1a267d..e78c4c7eda34d 100644
+--- a/drivers/input/mouse/sentelic.c
++++ b/drivers/input/mouse/sentelic.c
+@@ -441,7 +441,7 @@ static ssize_t fsp_attr_set_setreg(struct psmouse *psmouse, void *data,
+
+ fsp_reg_write_enable(psmouse, false);
+
+- return count;
++ return retval;
+ }
+
+ PSMOUSE_DEFINE_WO_ATTR(setreg, S_IWUSR, NULL, fsp_attr_set_setreg);
+diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
+index 2acf2842c3bd2..71a7605defdab 100644
+--- a/drivers/iommu/intel-iommu.c
++++ b/drivers/iommu/intel-iommu.c
+@@ -2645,7 +2645,7 @@ static struct dmar_domain *dmar_insert_one_dev_info(struct intel_iommu *iommu,
+ }
+
+ if (info->ats_supported && ecap_prs(iommu->ecap) &&
+- pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_PRI))
++ pci_pri_supported(pdev))
+ info->pri_supported = 1;
+ }
+ }
+diff --git a/drivers/iommu/omap-iommu-debug.c b/drivers/iommu/omap-iommu-debug.c
+index 8e19bfa94121e..a99afb5d9011c 100644
+--- a/drivers/iommu/omap-iommu-debug.c
++++ b/drivers/iommu/omap-iommu-debug.c
+@@ -98,8 +98,11 @@ static ssize_t debug_read_regs(struct file *file, char __user *userbuf,
+ mutex_lock(&iommu_debug_lock);
+
+ bytes = omap_iommu_dump_ctx(obj, p, count);
++ if (bytes < 0)
++ goto err;
+ bytes = simple_read_from_buffer(userbuf, count, ppos, buf, bytes);
+
++err:
+ mutex_unlock(&iommu_debug_lock);
+ kfree(buf);
+
+diff --git a/drivers/irqchip/irq-gic-v3-its.c b/drivers/irqchip/irq-gic-v3-its.c
+index 237c832acdd77..0082192503d14 100644
+--- a/drivers/irqchip/irq-gic-v3-its.c
++++ b/drivers/irqchip/irq-gic-v3-its.c
+@@ -3399,6 +3399,7 @@ static int its_irq_domain_alloc(struct irq_domain *domain, unsigned int virq,
+ msi_alloc_info_t *info = args;
+ struct its_device *its_dev = info->scratchpad[0].ptr;
+ struct its_node *its = its_dev->its;
++ struct irq_data *irqd;
+ irq_hw_number_t hwirq;
+ int err;
+ int i;
+@@ -3418,7 +3419,9 @@ static int its_irq_domain_alloc(struct irq_domain *domain, unsigned int virq,
+
+ irq_domain_set_hwirq_and_chip(domain, virq + i,
+ hwirq + i, &its_irq_chip, its_dev);
+- irqd_set_single_target(irq_desc_get_irq_data(irq_to_desc(virq + i)));
++ irqd = irq_get_irq_data(virq + i);
++ irqd_set_single_target(irqd);
++ irqd_set_affinity_on_activate(irqd);
+ pr_debug("ID:%d pID:%d vID:%d\n",
+ (int)(hwirq + i - its_dev->event_map.lpi_base),
+ (int)(hwirq + i), virq + i);
+@@ -3971,18 +3974,22 @@ static void its_vpe_4_1_deschedule(struct its_vpe *vpe,
+ static void its_vpe_4_1_invall(struct its_vpe *vpe)
+ {
+ void __iomem *rdbase;
++ unsigned long flags;
+ u64 val;
++ int cpu;
+
+ val = GICR_INVALLR_V;
+ val |= FIELD_PREP(GICR_INVALLR_VPEID, vpe->vpe_id);
+
+ /* Target the redistributor this vPE is currently known on */
+- raw_spin_lock(&gic_data_rdist_cpu(vpe->col_idx)->rd_lock);
+- rdbase = per_cpu_ptr(gic_rdists->rdist, vpe->col_idx)->rd_base;
++ cpu = vpe_to_cpuid_lock(vpe, &flags);
++ raw_spin_lock(&gic_data_rdist_cpu(cpu)->rd_lock);
++ rdbase = per_cpu_ptr(gic_rdists->rdist, cpu)->rd_base;
+ gic_write_lpir(val, rdbase + GICR_INVALLR);
+
+ wait_for_syncr(rdbase);
+- raw_spin_unlock(&gic_data_rdist_cpu(vpe->col_idx)->rd_lock);
++ raw_spin_unlock(&gic_data_rdist_cpu(cpu)->rd_lock);
++ vpe_to_cpuid_unlock(vpe, flags);
+ }
+
+ static int its_vpe_4_1_set_vcpu_affinity(struct irq_data *d, void *vcpu_info)
+diff --git a/drivers/irqchip/irq-loongson-liointc.c b/drivers/irqchip/irq-loongson-liointc.c
+index 6ef86a334c62d..9ed1bc4736634 100644
+--- a/drivers/irqchip/irq-loongson-liointc.c
++++ b/drivers/irqchip/irq-loongson-liointc.c
+@@ -60,7 +60,7 @@ static void liointc_chained_handle_irq(struct irq_desc *desc)
+ if (!pending) {
+ /* Always blame LPC IRQ if we have that bug */
+ if (handler->priv->has_lpc_irq_errata &&
+- (handler->parent_int_map & ~gc->mask_cache &
++ (handler->parent_int_map & gc->mask_cache &
+ BIT(LIOINTC_ERRATA_IRQ)))
+ pending = BIT(LIOINTC_ERRATA_IRQ);
+ else
+@@ -132,11 +132,11 @@ static void liointc_resume(struct irq_chip_generic *gc)
+ irq_gc_lock_irqsave(gc, flags);
+ /* Disable all at first */
+ writel(0xffffffff, gc->reg_base + LIOINTC_REG_INTC_DISABLE);
+- /* Revert map cache */
++ /* Restore map cache */
+ for (i = 0; i < LIOINTC_CHIP_IRQ; i++)
+ writeb(priv->map_cache[i], gc->reg_base + i);
+- /* Revert mask cache */
+- writel(~gc->mask_cache, gc->reg_base + LIOINTC_REG_INTC_ENABLE);
++ /* Restore mask cache */
++ writel(gc->mask_cache, gc->reg_base + LIOINTC_REG_INTC_ENABLE);
+ irq_gc_unlock_irqrestore(gc, flags);
+ }
+
+@@ -244,7 +244,7 @@ int __init liointc_of_init(struct device_node *node,
+ ct->chip.irq_mask_ack = irq_gc_mask_disable_reg;
+ ct->chip.irq_set_type = liointc_set_type;
+
+- gc->mask_cache = 0xffffffff;
++ gc->mask_cache = 0;
+ priv->gc = gc;
+
+ for (i = 0; i < LIOINTC_NUM_PARENT; i++) {
+diff --git a/drivers/md/bcache/bcache.h b/drivers/md/bcache/bcache.h
+index 74a9849ea164a..756fc5425d9ba 100644
+--- a/drivers/md/bcache/bcache.h
++++ b/drivers/md/bcache/bcache.h
+@@ -264,7 +264,7 @@ struct bcache_device {
+ #define BCACHE_DEV_UNLINK_DONE 2
+ #define BCACHE_DEV_WB_RUNNING 3
+ #define BCACHE_DEV_RATE_DW_RUNNING 4
+- unsigned int nr_stripes;
++ int nr_stripes;
+ unsigned int stripe_size;
+ atomic_t *stripe_sectors_dirty;
+ unsigned long *full_dirty_stripes;
+diff --git a/drivers/md/bcache/bset.c b/drivers/md/bcache/bset.c
+index 4385303836d8e..ae4cd74c8001e 100644
+--- a/drivers/md/bcache/bset.c
++++ b/drivers/md/bcache/bset.c
+@@ -322,7 +322,7 @@ int bch_btree_keys_alloc(struct btree_keys *b,
+
+ b->page_order = page_order;
+
+- t->data = (void *) __get_free_pages(gfp, b->page_order);
++ t->data = (void *) __get_free_pages(__GFP_COMP|gfp, b->page_order);
+ if (!t->data)
+ goto err;
+
+diff --git a/drivers/md/bcache/btree.c b/drivers/md/bcache/btree.c
+index fd1f288fd8015..2f68cc51bbc70 100644
+--- a/drivers/md/bcache/btree.c
++++ b/drivers/md/bcache/btree.c
+@@ -785,7 +785,7 @@ int bch_btree_cache_alloc(struct cache_set *c)
+ mutex_init(&c->verify_lock);
+
+ c->verify_ondisk = (void *)
+- __get_free_pages(GFP_KERNEL, ilog2(bucket_pages(c)));
++ __get_free_pages(GFP_KERNEL|__GFP_COMP, ilog2(bucket_pages(c)));
+
+ c->verify_data = mca_bucket_alloc(c, &ZERO_KEY, GFP_KERNEL);
+
+diff --git a/drivers/md/bcache/journal.c b/drivers/md/bcache/journal.c
+index 0e3ff9745ac74..9179638b33874 100644
+--- a/drivers/md/bcache/journal.c
++++ b/drivers/md/bcache/journal.c
+@@ -999,8 +999,8 @@ int bch_journal_alloc(struct cache_set *c)
+ j->w[1].c = c;
+
+ if (!(init_fifo(&j->pin, JOURNAL_PIN, GFP_KERNEL)) ||
+- !(j->w[0].data = (void *) __get_free_pages(GFP_KERNEL, JSET_BITS)) ||
+- !(j->w[1].data = (void *) __get_free_pages(GFP_KERNEL, JSET_BITS)))
++ !(j->w[0].data = (void *) __get_free_pages(GFP_KERNEL|__GFP_COMP, JSET_BITS)) ||
++ !(j->w[1].data = (void *) __get_free_pages(GFP_KERNEL|__GFP_COMP, JSET_BITS)))
+ return -ENOMEM;
+
+ return 0;
+diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
+index 7048370331c38..b4d23d9f30f9b 100644
+--- a/drivers/md/bcache/super.c
++++ b/drivers/md/bcache/super.c
+@@ -1775,7 +1775,7 @@ void bch_cache_set_unregister(struct cache_set *c)
+ }
+
+ #define alloc_bucket_pages(gfp, c) \
+- ((void *) __get_free_pages(__GFP_ZERO|gfp, ilog2(bucket_pages(c))))
++ ((void *) __get_free_pages(__GFP_ZERO|__GFP_COMP|gfp, ilog2(bucket_pages(c))))
+
+ struct cache_set *bch_cache_set_alloc(struct cache_sb *sb)
+ {
+diff --git a/drivers/md/bcache/writeback.c b/drivers/md/bcache/writeback.c
+index 3f7641fb28d53..c0b3c36bb040b 100644
+--- a/drivers/md/bcache/writeback.c
++++ b/drivers/md/bcache/writeback.c
+@@ -523,15 +523,19 @@ void bcache_dev_sectors_dirty_add(struct cache_set *c, unsigned int inode,
+ uint64_t offset, int nr_sectors)
+ {
+ struct bcache_device *d = c->devices[inode];
+- unsigned int stripe_offset, stripe, sectors_dirty;
++ unsigned int stripe_offset, sectors_dirty;
++ int stripe;
+
+ if (!d)
+ return;
+
++ stripe = offset_to_stripe(d, offset);
++ if (stripe < 0)
++ return;
++
+ if (UUID_FLASH_ONLY(&c->uuids[inode]))
+ atomic_long_add(nr_sectors, &c->flash_dev_dirty_sectors);
+
+- stripe = offset_to_stripe(d, offset);
+ stripe_offset = offset & (d->stripe_size - 1);
+
+ while (nr_sectors) {
+@@ -571,12 +575,12 @@ static bool dirty_pred(struct keybuf *buf, struct bkey *k)
+ static void refill_full_stripes(struct cached_dev *dc)
+ {
+ struct keybuf *buf = &dc->writeback_keys;
+- unsigned int start_stripe, stripe, next_stripe;
++ unsigned int start_stripe, next_stripe;
++ int stripe;
+ bool wrapped = false;
+
+ stripe = offset_to_stripe(&dc->disk, KEY_OFFSET(&buf->last_scanned));
+-
+- if (stripe >= dc->disk.nr_stripes)
++ if (stripe < 0)
+ stripe = 0;
+
+ start_stripe = stripe;
+diff --git a/drivers/md/bcache/writeback.h b/drivers/md/bcache/writeback.h
+index b029843ce5b6f..3f1230e22de01 100644
+--- a/drivers/md/bcache/writeback.h
++++ b/drivers/md/bcache/writeback.h
+@@ -52,10 +52,22 @@ static inline uint64_t bcache_dev_sectors_dirty(struct bcache_device *d)
+ return ret;
+ }
+
+-static inline unsigned int offset_to_stripe(struct bcache_device *d,
++static inline int offset_to_stripe(struct bcache_device *d,
+ uint64_t offset)
+ {
+ do_div(offset, d->stripe_size);
++
++ /* d->nr_stripes is in range [1, INT_MAX] */
++ if (unlikely(offset >= d->nr_stripes)) {
++ pr_err("Invalid stripe %llu (>= nr_stripes %d).\n",
++ offset, d->nr_stripes);
++ return -EINVAL;
++ }
++
++	/*
++	 * Here offset is definitely smaller than INT_MAX,
++	 * return it as int will never overflow.
++	 */
+ return offset;
+ }
+
+@@ -63,7 +75,10 @@ static inline bool bcache_dev_stripe_dirty(struct cached_dev *dc,
+ uint64_t offset,
+ unsigned int nr_sectors)
+ {
+- unsigned int stripe = offset_to_stripe(&dc->disk, offset);
++ int stripe = offset_to_stripe(&dc->disk, offset);
++
++ if (stripe < 0)
++ return false;
+
+ while (1) {
+ if (atomic_read(dc->disk.stripe_sectors_dirty + stripe))
+diff --git a/drivers/md/dm-rq.c b/drivers/md/dm-rq.c
+index 3f8577e2c13be..2bd2444ad99c6 100644
+--- a/drivers/md/dm-rq.c
++++ b/drivers/md/dm-rq.c
+@@ -70,9 +70,6 @@ void dm_start_queue(struct request_queue *q)
+
+ void dm_stop_queue(struct request_queue *q)
+ {
+- if (blk_mq_queue_stopped(q))
+- return;
+-
+ blk_mq_quiesce_queue(q);
+ }
+
+diff --git a/drivers/md/dm.c b/drivers/md/dm.c
+index fabcc51b468c9..8d952bf059bea 100644
+--- a/drivers/md/dm.c
++++ b/drivers/md/dm.c
+@@ -503,7 +503,8 @@ static int dm_blk_report_zones(struct gendisk *disk, sector_t sector,
+ }
+
+ args.tgt = tgt;
+- ret = tgt->type->report_zones(tgt, &args, nr_zones);
++ ret = tgt->type->report_zones(tgt, &args,
++ nr_zones - args.zone_idx);
+ if (ret < 0)
+ goto out;
+ } while (args.zone_idx < nr_zones &&
+diff --git a/drivers/md/md-cluster.c b/drivers/md/md-cluster.c
+index 73fd50e779754..d50737ec40394 100644
+--- a/drivers/md/md-cluster.c
++++ b/drivers/md/md-cluster.c
+@@ -1139,6 +1139,7 @@ static int resize_bitmaps(struct mddev *mddev, sector_t newsize, sector_t oldsiz
+ bitmap = get_bitmap_from_slot(mddev, i);
+ if (IS_ERR(bitmap)) {
+ pr_err("can't get bitmap from slot %d\n", i);
++ bitmap = NULL;
+ goto out;
+ }
+ counts = &bitmap->counts;
+diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
+index 190dd70db514b..554e7f15325fe 100644
+--- a/drivers/md/raid5.c
++++ b/drivers/md/raid5.c
+@@ -3604,6 +3604,7 @@ static int need_this_block(struct stripe_head *sh, struct stripe_head_state *s,
+ * is missing/faulty, then we need to read everything we can.
+ */
+ if (sh->raid_conf->level != 6 &&
++ sh->raid_conf->rmw_level != PARITY_DISABLE_RMW &&
+ sh->sector < sh->raid_conf->mddev->recovery_cp)
+ /* reconstruct-write isn't being forced */
+ return 0;
+@@ -4839,7 +4840,7 @@ static void handle_stripe(struct stripe_head *sh)
+ * or to load a block that is being partially written.
+ */
+ if (s.to_read || s.non_overwrite
+- || (conf->level == 6 && s.to_write && s.failed)
++ || (s.to_write && s.failed)
+ || (s.syncing && (s.uptodate + s.compute < disks))
+ || s.replacing
+ || s.expanding)
+diff --git a/drivers/media/platform/qcom/venus/pm_helpers.c b/drivers/media/platform/qcom/venus/pm_helpers.c
+index abf93158857b9..531e7a41658f7 100644
+--- a/drivers/media/platform/qcom/venus/pm_helpers.c
++++ b/drivers/media/platform/qcom/venus/pm_helpers.c
+@@ -496,6 +496,10 @@ min_loaded_core(struct venus_inst *inst, u32 *min_coreid, u32 *min_load)
+ list_for_each_entry(inst_pos, &core->instances, list) {
+ if (inst_pos == inst)
+ continue;
++
++ if (inst_pos->state != INST_START)
++ continue;
++
+ vpp_freq = inst_pos->clk_data.codec_freq_data->vpp_freq;
+ coreid = inst_pos->clk_data.core_id;
+
+diff --git a/drivers/media/platform/rockchip/rga/rga-hw.c b/drivers/media/platform/rockchip/rga/rga-hw.c
+index 4be6dcf292fff..aaa96f256356b 100644
+--- a/drivers/media/platform/rockchip/rga/rga-hw.c
++++ b/drivers/media/platform/rockchip/rga/rga-hw.c
+@@ -200,22 +200,25 @@ static void rga_cmd_set_trans_info(struct rga_ctx *ctx)
+ dst_info.data.format = ctx->out.fmt->hw_format;
+ dst_info.data.swap = ctx->out.fmt->color_swap;
+
+- if (ctx->in.fmt->hw_format >= RGA_COLOR_FMT_YUV422SP) {
+- if (ctx->out.fmt->hw_format < RGA_COLOR_FMT_YUV422SP) {
+- switch (ctx->in.colorspace) {
+- case V4L2_COLORSPACE_REC709:
+- src_info.data.csc_mode =
+- RGA_SRC_CSC_MODE_BT709_R0;
+- break;
+- default:
+- src_info.data.csc_mode =
+- RGA_SRC_CSC_MODE_BT601_R0;
+- break;
+- }
++ /*
++ * CSC mode must only be set when the colorspace families differ between
++ * input and output. It must remain unset (zeroed) if both are the same.
++ */
++
++ if (RGA_COLOR_FMT_IS_YUV(ctx->in.fmt->hw_format) &&
++ RGA_COLOR_FMT_IS_RGB(ctx->out.fmt->hw_format)) {
++ switch (ctx->in.colorspace) {
++ case V4L2_COLORSPACE_REC709:
++ src_info.data.csc_mode = RGA_SRC_CSC_MODE_BT709_R0;
++ break;
++ default:
++ src_info.data.csc_mode = RGA_SRC_CSC_MODE_BT601_R0;
++ break;
+ }
+ }
+
+- if (ctx->out.fmt->hw_format >= RGA_COLOR_FMT_YUV422SP) {
++ if (RGA_COLOR_FMT_IS_RGB(ctx->in.fmt->hw_format) &&
++ RGA_COLOR_FMT_IS_YUV(ctx->out.fmt->hw_format)) {
+ switch (ctx->out.colorspace) {
+ case V4L2_COLORSPACE_REC709:
+ dst_info.data.csc_mode = RGA_SRC_CSC_MODE_BT709_R0;
+diff --git a/drivers/media/platform/rockchip/rga/rga-hw.h b/drivers/media/platform/rockchip/rga/rga-hw.h
+index 96cb0314dfa70..e8917e5630a48 100644
+--- a/drivers/media/platform/rockchip/rga/rga-hw.h
++++ b/drivers/media/platform/rockchip/rga/rga-hw.h
+@@ -95,6 +95,11 @@
+ #define RGA_COLOR_FMT_CP_8BPP 15
+ #define RGA_COLOR_FMT_MASK 15
+
++#define RGA_COLOR_FMT_IS_YUV(fmt) \
++ (((fmt) >= RGA_COLOR_FMT_YUV422SP) && ((fmt) < RGA_COLOR_FMT_CP_1BPP))
++#define RGA_COLOR_FMT_IS_RGB(fmt) \
++ ((fmt) < RGA_COLOR_FMT_YUV422SP)
++
+ #define RGA_COLOR_NONE_SWAP 0
+ #define RGA_COLOR_RB_SWAP 1
+ #define RGA_COLOR_ALPHA_SWAP 2
+diff --git a/drivers/media/platform/vsp1/vsp1_dl.c b/drivers/media/platform/vsp1/vsp1_dl.c
+index d7b43037e500a..e07b135613eb5 100644
+--- a/drivers/media/platform/vsp1/vsp1_dl.c
++++ b/drivers/media/platform/vsp1/vsp1_dl.c
+@@ -431,6 +431,8 @@ vsp1_dl_cmd_pool_create(struct vsp1_device *vsp1, enum vsp1_extcmd_type type,
+ if (!pool)
+ return NULL;
+
++ pool->vsp1 = vsp1;
++
+ spin_lock_init(&pool->lock);
+ INIT_LIST_HEAD(&pool->free);
+
+diff --git a/drivers/mfd/arizona-core.c b/drivers/mfd/arizona-core.c
+index f73cf76d1373d..a5e443110fc3d 100644
+--- a/drivers/mfd/arizona-core.c
++++ b/drivers/mfd/arizona-core.c
+@@ -1426,6 +1426,15 @@ err_irq:
+ arizona_irq_exit(arizona);
+ err_pm:
+ pm_runtime_disable(arizona->dev);
++
++ switch (arizona->pdata.clk32k_src) {
++ case ARIZONA_32KZ_MCLK1:
++ case ARIZONA_32KZ_MCLK2:
++ arizona_clk32k_disable(arizona);
++ break;
++ default:
++ break;
++ }
+ err_reset:
+ arizona_enable_reset(arizona);
+ regulator_disable(arizona->dcvdd);
+@@ -1448,6 +1457,15 @@ int arizona_dev_exit(struct arizona *arizona)
+ regulator_disable(arizona->dcvdd);
+ regulator_put(arizona->dcvdd);
+
++ switch (arizona->pdata.clk32k_src) {
++ case ARIZONA_32KZ_MCLK1:
++ case ARIZONA_32KZ_MCLK2:
++ arizona_clk32k_disable(arizona);
++ break;
++ default:
++ break;
++ }
++
+ mfd_remove_devices(arizona->dev);
+ arizona_free_irq(arizona, ARIZONA_IRQ_UNDERCLOCKED, arizona);
+ arizona_free_irq(arizona, ARIZONA_IRQ_OVERCLOCKED, arizona);
+diff --git a/drivers/mfd/dln2.c b/drivers/mfd/dln2.c
+index 39276fa626d2b..83e676a096dc1 100644
+--- a/drivers/mfd/dln2.c
++++ b/drivers/mfd/dln2.c
+@@ -287,7 +287,11 @@ static void dln2_rx(struct urb *urb)
+ len = urb->actual_length - sizeof(struct dln2_header);
+
+ if (handle == DLN2_HANDLE_EVENT) {
++ unsigned long flags;
++
++ spin_lock_irqsave(&dln2->event_cb_lock, flags);
+ dln2_run_event_callbacks(dln2, id, echo, data, len);
++ spin_unlock_irqrestore(&dln2->event_cb_lock, flags);
+ } else {
+ /* URB will be re-submitted in _dln2_transfer (free_rx_slot) */
+ if (dln2_transfer_complete(dln2, urb, handle, echo))
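The dln2 change above takes `event_cb_lock` around `dln2_run_event_callbacks()` so an event handler cannot be unregistered while it is being dispatched. A minimal userspace sketch of that pattern (stand-in lock functions and illustrative names, not the driver's API; a depth counter stands in for the spinlock so the callback can verify the lock is held):

```c
#include <stddef.h>
#include <assert.h>

/* Stand-in for spin_lock_irqsave()/spin_unlock_irqrestore(). */
static int lock_depth;
static void cb_lock(void)   { lock_depth++; }
static void cb_unlock(void) { lock_depth--; }

struct event_cb {
	struct event_cb *next;
	int id;
	void (*fn)(int data);
};

static struct event_cb *cb_list;

/* Registration and dispatch both run under the same lock, so a
 * callback can never be torn down mid-dispatch. */
static void register_cb(struct event_cb *cb)
{
	cb_lock();
	cb->next = cb_list;
	cb_list = cb;
	cb_unlock();
}

static int run_event_callbacks(int id, int data)
{
	struct event_cb *cb;
	int ran = 0;

	cb_lock();
	for (cb = cb_list; cb; cb = cb->next)
		if (cb->id == id) {
			assert(lock_depth > 0);	/* held across the call */
			cb->fn(data);
			ran++;
		}
	cb_unlock();
	return ran;
}

/* sample callback used for demonstration */
static int last_data;
static void record(int data) { last_data = data; }
static struct event_cb sample_cb = { NULL, 7, record };
```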
+diff --git a/drivers/mmc/host/renesas_sdhi_internal_dmac.c b/drivers/mmc/host/renesas_sdhi_internal_dmac.c
+index 47ac53e912411..201b8ed37f2e0 100644
+--- a/drivers/mmc/host/renesas_sdhi_internal_dmac.c
++++ b/drivers/mmc/host/renesas_sdhi_internal_dmac.c
+@@ -229,15 +229,12 @@ static void renesas_sdhi_internal_dmac_issue_tasklet_fn(unsigned long arg)
+ DTRAN_CTRL_DM_START);
+ }
+
+-static void renesas_sdhi_internal_dmac_complete_tasklet_fn(unsigned long arg)
++static bool renesas_sdhi_internal_dmac_complete(struct tmio_mmc_host *host)
+ {
+- struct tmio_mmc_host *host = (struct tmio_mmc_host *)arg;
+ enum dma_data_direction dir;
+
+- spin_lock_irq(&host->lock);
+-
+ if (!host->data)
+- goto out;
++ return false;
+
+ if (host->data->flags & MMC_DATA_READ)
+ dir = DMA_FROM_DEVICE;
+@@ -250,6 +247,17 @@ static void renesas_sdhi_internal_dmac_complete_tasklet_fn(unsigned long arg)
+ if (dir == DMA_FROM_DEVICE)
+ clear_bit(SDHI_INTERNAL_DMAC_RX_IN_USE, &global_flags);
+
++ return true;
++}
++
++static void renesas_sdhi_internal_dmac_complete_tasklet_fn(unsigned long arg)
++{
++ struct tmio_mmc_host *host = (struct tmio_mmc_host *)arg;
++
++ spin_lock_irq(&host->lock);
++ if (!renesas_sdhi_internal_dmac_complete(host))
++ goto out;
++
+ tmio_mmc_do_data_irq(host);
+ out:
+ spin_unlock_irq(&host->lock);
+diff --git a/drivers/mtd/nand/raw/brcmnand/brcmnand.c b/drivers/mtd/nand/raw/brcmnand/brcmnand.c
+index cdae2311a3b69..0b1ea965cba08 100644
+--- a/drivers/mtd/nand/raw/brcmnand/brcmnand.c
++++ b/drivers/mtd/nand/raw/brcmnand/brcmnand.c
+@@ -1859,6 +1859,22 @@ static int brcmnand_edu_trans(struct brcmnand_host *host, u64 addr, u32 *buf,
+ edu_writel(ctrl, EDU_STOP, 0); /* force stop */
+ edu_readl(ctrl, EDU_STOP);
+
++ if (!ret && edu_cmd == EDU_CMD_READ) {
++ u64 err_addr = 0;
++
++ /*
++ * check for ECC errors here; subpage ECC errors are
++ * retained in the ECC error address register
++ */
++ err_addr = brcmnand_get_uncorrecc_addr(ctrl);
++ if (!err_addr) {
++ err_addr = brcmnand_get_correcc_addr(ctrl);
++ if (err_addr)
++ ret = -EUCLEAN;
++ } else
++ ret = -EBADMSG;
++ }
++
+ return ret;
+ }
+
+@@ -2065,6 +2081,7 @@ static int brcmnand_read(struct mtd_info *mtd, struct nand_chip *chip,
+ u64 err_addr = 0;
+ int err;
+ bool retry = true;
++ bool edu_err = false;
+
+ dev_dbg(ctrl->dev, "read %llx -> %p\n", (unsigned long long)addr, buf);
+
+@@ -2082,6 +2099,10 @@ try_dmaread:
+ else
+ return -EIO;
+ }
++
++ if (has_edu(ctrl) && err_addr)
++ edu_err = true;
++
+ } else {
+ if (oob)
+ memset(oob, 0x99, mtd->oobsize);
+@@ -2129,6 +2150,11 @@ try_dmaread:
+ if (mtd_is_bitflip(err)) {
+ unsigned int corrected = brcmnand_count_corrected(ctrl);
+
++ /* in case of EDU correctable error we read again using PIO */
++ if (edu_err)
++ err = brcmnand_read_by_pio(mtd, chip, addr, trans, buf,
++ oob, &err_addr);
++
+ dev_dbg(ctrl->dev, "corrected error at 0x%llx\n",
+ (unsigned long long)err_addr);
+ mtd->ecc_stats.corrected += corrected;
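The EDU read path above maps the controller's error-address registers onto the standard MTD return codes, and the decision order matters: an uncorrectable error (-EBADMSG) takes precedence over a corrected one (-EUCLEAN). A sketch of that classification (hypothetical helper; errno values from `<errno.h>`):

```c
#include <errno.h>
#include <stdint.h>

/* Mirror of the patch's post-read check: a nonzero uncorrectable
 * error address wins over a nonzero corrected one. */
static int classify_ecc(uint64_t uncorr_addr, uint64_t corr_addr)
{
	if (uncorr_addr)
		return -EBADMSG;
	if (corr_addr)
		return -EUCLEAN;
	return 0;
}
```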
+diff --git a/drivers/mtd/nand/raw/fsl_upm.c b/drivers/mtd/nand/raw/fsl_upm.c
+index f31fae3a4c689..6b8ec72686e29 100644
+--- a/drivers/mtd/nand/raw/fsl_upm.c
++++ b/drivers/mtd/nand/raw/fsl_upm.c
+@@ -62,7 +62,6 @@ static int fun_chip_ready(struct nand_chip *chip)
+ static void fun_wait_rnb(struct fsl_upm_nand *fun)
+ {
+ if (fun->rnb_gpio[fun->mchip_number] >= 0) {
+- struct mtd_info *mtd = nand_to_mtd(&fun->chip);
+ int cnt = 1000000;
+
+ while (--cnt && !fun_chip_ready(&fun->chip))
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/common.h b/drivers/net/ethernet/marvell/octeontx2/af/common.h
+index cd33c2e6ca5fc..f48eb66ed021b 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/common.h
++++ b/drivers/net/ethernet/marvell/octeontx2/af/common.h
+@@ -43,7 +43,7 @@ struct qmem {
+ void *base;
+ dma_addr_t iova;
+ int alloc_sz;
+- u8 entry_sz;
++ u16 entry_sz;
+ u8 align;
+ u32 qsize;
+ };
+diff --git a/drivers/net/ethernet/qualcomm/emac/emac.c b/drivers/net/ethernet/qualcomm/emac/emac.c
+index 18b0c7a2d6dcb..90e794c79f667 100644
+--- a/drivers/net/ethernet/qualcomm/emac/emac.c
++++ b/drivers/net/ethernet/qualcomm/emac/emac.c
+@@ -473,13 +473,24 @@ static int emac_clks_phase1_init(struct platform_device *pdev,
+
+ ret = clk_prepare_enable(adpt->clk[EMAC_CLK_CFG_AHB]);
+ if (ret)
+- return ret;
++ goto disable_clk_axi;
+
+ ret = clk_set_rate(adpt->clk[EMAC_CLK_HIGH_SPEED], 19200000);
+ if (ret)
+- return ret;
++ goto disable_clk_cfg_ahb;
++
++ ret = clk_prepare_enable(adpt->clk[EMAC_CLK_HIGH_SPEED]);
++ if (ret)
++ goto disable_clk_cfg_ahb;
+
+- return clk_prepare_enable(adpt->clk[EMAC_CLK_HIGH_SPEED]);
++ return 0;
++
++disable_clk_cfg_ahb:
++ clk_disable_unprepare(adpt->clk[EMAC_CLK_CFG_AHB]);
++disable_clk_axi:
++ clk_disable_unprepare(adpt->clk[EMAC_CLK_AXI]);
++
++ return ret;
+ }
+
+ /* Enable clocks; needs emac_clks_phase1_init to be called before */
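The emac fix above converts early `return ret` statements into a goto ladder so each failure point releases exactly the clocks enabled before it. The shape of that idiom, reduced to a sketch (`fail_at` picks which step fails; `acquired[]` is illustrative bookkeeping, not the driver's state):

```c
/* Three ordered acquisitions with unwind labels in reverse order,
 * in the style of emac_clks_phase1_init(). */
static int acquired[3];

static int acquire(int i, int fail_at)
{
	if (i == fail_at)
		return -1;
	acquired[i] = 1;
	return 0;
}

static void release(int i)
{
	acquired[i] = 0;
}

static int phase1_init(int fail_at)
{
	int ret;

	ret = acquire(0, fail_at);	/* AXI clock */
	if (ret)
		return ret;
	ret = acquire(1, fail_at);	/* CFG_AHB clock */
	if (ret)
		goto disable_axi;
	ret = acquire(2, fail_at);	/* HIGH_SPEED clock */
	if (ret)
		goto disable_cfg_ahb;
	return 0;

disable_cfg_ahb:
	release(1);
disable_axi:
	release(0);
	return ret;
}
```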
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-ipq806x.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-ipq806x.c
+index 02102c781a8cf..bf3250e0e59ca 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-ipq806x.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-ipq806x.c
+@@ -351,6 +351,7 @@ static int ipq806x_gmac_probe(struct platform_device *pdev)
+ plat_dat->has_gmac = true;
+ plat_dat->bsp_priv = gmac;
+ plat_dat->fix_mac_speed = ipq806x_gmac_fix_mac_speed;
++ plat_dat->multicast_filter_bins = 0;
+
+ err = stmmac_dvr_probe(&pdev->dev, plat_dat, &stmmac_res);
+ if (err)
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac1000_core.c b/drivers/net/ethernet/stmicro/stmmac/dwmac1000_core.c
+index efc6ec1b8027c..fc8759f146c7c 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac1000_core.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac1000_core.c
+@@ -164,6 +164,9 @@ static void dwmac1000_set_filter(struct mac_device_info *hw,
+ value = GMAC_FRAME_FILTER_PR | GMAC_FRAME_FILTER_PCF;
+ } else if (dev->flags & IFF_ALLMULTI) {
+ value = GMAC_FRAME_FILTER_PM; /* pass all multi */
++ } else if (!netdev_mc_empty(dev) && (mcbitslog2 == 0)) {
++ /* Fall back to all multicast if we've no filter */
++ value = GMAC_FRAME_FILTER_PM;
+ } else if (!netdev_mc_empty(dev)) {
+ struct netdev_hw_addr *ha;
+
+diff --git a/drivers/net/wireless/realtek/rtw88/pci.c b/drivers/net/wireless/realtek/rtw88/pci.c
+index d735f3127fe8f..6c24ddc2a9751 100644
+--- a/drivers/net/wireless/realtek/rtw88/pci.c
++++ b/drivers/net/wireless/realtek/rtw88/pci.c
+@@ -14,8 +14,11 @@
+ #include "debug.h"
+
+ static bool rtw_disable_msi;
++static bool rtw_pci_disable_aspm;
+ module_param_named(disable_msi, rtw_disable_msi, bool, 0644);
++module_param_named(disable_aspm, rtw_pci_disable_aspm, bool, 0644);
+ MODULE_PARM_DESC(disable_msi, "Set Y to disable MSI interrupt support");
++MODULE_PARM_DESC(disable_aspm, "Set Y to disable PCI ASPM support");
+
+ static u32 rtw_pci_tx_queue_idx_addr[] = {
+ [RTW_TX_QUEUE_BK] = RTK_PCI_TXBD_IDX_BKQ,
+@@ -1189,6 +1192,9 @@ static void rtw_pci_clkreq_set(struct rtw_dev *rtwdev, bool enable)
+ u8 value;
+ int ret;
+
++ if (rtw_pci_disable_aspm)
++ return;
++
+ ret = rtw_dbi_read8(rtwdev, RTK_PCIE_LINK_CFG, &value);
+ if (ret) {
+ rtw_err(rtwdev, "failed to read CLKREQ_L1, ret=%d", ret);
+@@ -1208,6 +1214,9 @@ static void rtw_pci_aspm_set(struct rtw_dev *rtwdev, bool enable)
+ u8 value;
+ int ret;
+
++ if (rtw_pci_disable_aspm)
++ return;
++
+ ret = rtw_dbi_read8(rtwdev, RTK_PCIE_LINK_CFG, &value);
+ if (ret) {
+ rtw_err(rtwdev, "failed to read ASPM, ret=%d", ret);
+diff --git a/drivers/nvdimm/security.c b/drivers/nvdimm/security.c
+index 89b85970912db..35d265014e1ec 100644
+--- a/drivers/nvdimm/security.c
++++ b/drivers/nvdimm/security.c
+@@ -450,14 +450,19 @@ void __nvdimm_security_overwrite_query(struct nvdimm *nvdimm)
+ else
+ dev_dbg(&nvdimm->dev, "overwrite completed\n");
+
+- if (nvdimm->sec.overwrite_state)
+- sysfs_notify_dirent(nvdimm->sec.overwrite_state);
++ /*
++ * Mark the overwrite work done and update dimm security flags,
++ * then send a sysfs event notification to wake up userspace
++ * poll threads so they pick up the changed state.
++ */
+ nvdimm->sec.overwrite_tmo = 0;
+ clear_bit(NDD_SECURITY_OVERWRITE, &nvdimm->flags);
+ clear_bit(NDD_WORK_PENDING, &nvdimm->flags);
+- put_device(&nvdimm->dev);
+ nvdimm->sec.flags = nvdimm_security_flags(nvdimm, NVDIMM_USER);
+- nvdimm->sec.flags = nvdimm_security_flags(nvdimm, NVDIMM_MASTER);
++ nvdimm->sec.ext_flags = nvdimm_security_flags(nvdimm, NVDIMM_MASTER);
++ if (nvdimm->sec.overwrite_state)
++ sysfs_notify_dirent(nvdimm->sec.overwrite_state);
++ put_device(&nvdimm->dev);
+ }
+
+ void nvdimm_security_overwrite_query(struct work_struct *work)
+diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
+index f7540a9e54fd2..ee67113d96b1b 100644
+--- a/drivers/nvme/host/core.c
++++ b/drivers/nvme/host/core.c
+@@ -368,6 +368,16 @@ bool nvme_change_ctrl_state(struct nvme_ctrl *ctrl,
+ break;
+ }
+ break;
++ case NVME_CTRL_DELETING_NOIO:
++ switch (old_state) {
++ case NVME_CTRL_DELETING:
++ case NVME_CTRL_DEAD:
++ changed = true;
++ /* FALLTHRU */
++ default:
++ break;
++ }
++ break;
+ case NVME_CTRL_DEAD:
+ switch (old_state) {
+ case NVME_CTRL_DELETING:
+@@ -405,6 +415,7 @@ static bool nvme_state_terminal(struct nvme_ctrl *ctrl)
+ case NVME_CTRL_CONNECTING:
+ return false;
+ case NVME_CTRL_DELETING:
++ case NVME_CTRL_DELETING_NOIO:
+ case NVME_CTRL_DEAD:
+ return true;
+ default:
+@@ -3280,6 +3291,7 @@ static ssize_t nvme_sysfs_show_state(struct device *dev,
+ [NVME_CTRL_RESETTING] = "resetting",
+ [NVME_CTRL_CONNECTING] = "connecting",
+ [NVME_CTRL_DELETING] = "deleting",
++ [NVME_CTRL_DELETING_NOIO] = "deleting (no IO)",
+ [NVME_CTRL_DEAD] = "dead",
+ };
+
+@@ -3860,6 +3872,9 @@ void nvme_remove_namespaces(struct nvme_ctrl *ctrl)
+ if (ctrl->state == NVME_CTRL_DEAD)
+ nvme_kill_queues(ctrl);
+
++ /* this is a no-op when called from the controller reset handler */
++ nvme_change_ctrl_state(ctrl, NVME_CTRL_DELETING_NOIO);
++
+ down_write(&ctrl->namespaces_rwsem);
+ list_splice_init(&ctrl->namespaces, &ns_list);
+ up_write(&ctrl->namespaces_rwsem);
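The new `NVME_CTRL_DELETING_NOIO` case above follows the file's existing pattern: a nested switch that whitelists the legal old states for each transition. A reduced sketch of that check (abbreviated state names, not the driver's enum):

```c
#include <stdbool.h>

enum ctrl_state { LIVE, RESETTING, CONNECTING, DELETING, DELETING_NOIO, DEAD };

/* DELETING_NOIO may only be entered from DELETING or DEAD,
 * matching the whitelist added in nvme_change_ctrl_state(). */
static bool can_enter_deleting_noio(enum ctrl_state old)
{
	switch (old) {
	case DELETING:
	case DEAD:
		return true;
	default:
		return false;
	}
}
```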
+diff --git a/drivers/nvme/host/fabrics.c b/drivers/nvme/host/fabrics.c
+index 2a6c8190eeb76..4ec4829d62334 100644
+--- a/drivers/nvme/host/fabrics.c
++++ b/drivers/nvme/host/fabrics.c
+@@ -547,7 +547,7 @@ static struct nvmf_transport_ops *nvmf_lookup_transport(
+ blk_status_t nvmf_fail_nonready_command(struct nvme_ctrl *ctrl,
+ struct request *rq)
+ {
+- if (ctrl->state != NVME_CTRL_DELETING &&
++ if (ctrl->state != NVME_CTRL_DELETING_NOIO &&
+ ctrl->state != NVME_CTRL_DEAD &&
+ !blk_noretry_request(rq) && !(rq->cmd_flags & REQ_NVME_MPATH))
+ return BLK_STS_RESOURCE;
+diff --git a/drivers/nvme/host/fabrics.h b/drivers/nvme/host/fabrics.h
+index a0ec40ab62eeb..a9c1e3b4585ec 100644
+--- a/drivers/nvme/host/fabrics.h
++++ b/drivers/nvme/host/fabrics.h
+@@ -182,7 +182,8 @@ bool nvmf_ip_options_match(struct nvme_ctrl *ctrl,
+ static inline bool nvmf_check_ready(struct nvme_ctrl *ctrl, struct request *rq,
+ bool queue_live)
+ {
+- if (likely(ctrl->state == NVME_CTRL_LIVE))
++ if (likely(ctrl->state == NVME_CTRL_LIVE ||
++ ctrl->state == NVME_CTRL_DELETING))
+ return true;
+ return __nvmf_check_ready(ctrl, rq, queue_live);
+ }
+diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c
+index 564e3f220ac79..a70220df1f570 100644
+--- a/drivers/nvme/host/fc.c
++++ b/drivers/nvme/host/fc.c
+@@ -800,6 +800,7 @@ nvme_fc_ctrl_connectivity_loss(struct nvme_fc_ctrl *ctrl)
+ break;
+
+ case NVME_CTRL_DELETING:
++ case NVME_CTRL_DELETING_NOIO:
+ default:
+ /* no action to take - let it delete */
+ break;
+diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
+index d3914b7e8f52c..8f235fbfe44ee 100644
+--- a/drivers/nvme/host/multipath.c
++++ b/drivers/nvme/host/multipath.c
+@@ -167,9 +167,18 @@ void nvme_mpath_clear_ctrl_paths(struct nvme_ctrl *ctrl)
+
+ static bool nvme_path_is_disabled(struct nvme_ns *ns)
+ {
+- return ns->ctrl->state != NVME_CTRL_LIVE ||
+- test_bit(NVME_NS_ANA_PENDING, &ns->flags) ||
+- test_bit(NVME_NS_REMOVING, &ns->flags);
++ /*
++ * We don't treat NVME_CTRL_DELETING as a disabled path as I/O should
++ * still be able to complete assuming that the controller is connected.
++ * Otherwise it will fail immediately and return to the requeue list.
++ */
++ if (ns->ctrl->state != NVME_CTRL_LIVE &&
++ ns->ctrl->state != NVME_CTRL_DELETING)
++ return true;
++ if (test_bit(NVME_NS_ANA_PENDING, &ns->flags) ||
++ test_bit(NVME_NS_REMOVING, &ns->flags))
++ return true;
++ return false;
+ }
+
+ static struct nvme_ns *__nvme_find_path(struct nvme_ns_head *head, int node)
+@@ -575,6 +584,9 @@ static void nvme_ana_work(struct work_struct *work)
+ {
+ struct nvme_ctrl *ctrl = container_of(work, struct nvme_ctrl, ana_work);
+
++ if (ctrl->state != NVME_CTRL_LIVE)
++ return;
++
+ nvme_read_ana_log(ctrl);
+ }
+
+diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
+index 8f1b0a30fd2a6..ff0b4079e8d6d 100644
+--- a/drivers/nvme/host/nvme.h
++++ b/drivers/nvme/host/nvme.h
+@@ -183,6 +183,7 @@ enum nvme_ctrl_state {
+ NVME_CTRL_RESETTING,
+ NVME_CTRL_CONNECTING,
+ NVME_CTRL_DELETING,
++ NVME_CTRL_DELETING_NOIO,
+ NVME_CTRL_DEAD,
+ };
+
+diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
+index 19c94080512cf..fdab0054cd809 100644
+--- a/drivers/nvme/host/rdma.c
++++ b/drivers/nvme/host/rdma.c
+@@ -1023,11 +1023,12 @@ static int nvme_rdma_setup_ctrl(struct nvme_rdma_ctrl *ctrl, bool new)
+ changed = nvme_change_ctrl_state(&ctrl->ctrl, NVME_CTRL_LIVE);
+ if (!changed) {
+ /*
+- * state change failure is ok if we're in DELETING state,
++ * state change failure is ok if we started ctrl delete,
+ * unless we're during creation of a new controller to
+ * avoid races with teardown flow.
+ */
+- WARN_ON_ONCE(ctrl->ctrl.state != NVME_CTRL_DELETING);
++ WARN_ON_ONCE(ctrl->ctrl.state != NVME_CTRL_DELETING &&
++ ctrl->ctrl.state != NVME_CTRL_DELETING_NOIO);
+ WARN_ON_ONCE(new);
+ ret = -EINVAL;
+ goto destroy_io;
+@@ -1080,8 +1081,9 @@ static void nvme_rdma_error_recovery_work(struct work_struct *work)
+ blk_mq_unquiesce_queue(ctrl->ctrl.admin_q);
+
+ if (!nvme_change_ctrl_state(&ctrl->ctrl, NVME_CTRL_CONNECTING)) {
+- /* state change failure is ok if we're in DELETING state */
+- WARN_ON_ONCE(ctrl->ctrl.state != NVME_CTRL_DELETING);
++ /* state change failure is ok if we started ctrl delete */
++ WARN_ON_ONCE(ctrl->ctrl.state != NVME_CTRL_DELETING &&
++ ctrl->ctrl.state != NVME_CTRL_DELETING_NOIO);
+ return;
+ }
+
+diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
+index 99eaa0474e10b..06d6c1c6de35b 100644
+--- a/drivers/nvme/host/tcp.c
++++ b/drivers/nvme/host/tcp.c
+@@ -1938,11 +1938,12 @@ static int nvme_tcp_setup_ctrl(struct nvme_ctrl *ctrl, bool new)
+
+ if (!nvme_change_ctrl_state(ctrl, NVME_CTRL_LIVE)) {
+ /*
+- * state change failure is ok if we're in DELETING state,
++ * state change failure is ok if we started ctrl delete,
+ * unless we're during creation of a new controller to
+ * avoid races with teardown flow.
+ */
+- WARN_ON_ONCE(ctrl->state != NVME_CTRL_DELETING);
++ WARN_ON_ONCE(ctrl->state != NVME_CTRL_DELETING &&
++ ctrl->state != NVME_CTRL_DELETING_NOIO);
+ WARN_ON_ONCE(new);
+ ret = -EINVAL;
+ goto destroy_io;
+@@ -1998,8 +1999,9 @@ static void nvme_tcp_error_recovery_work(struct work_struct *work)
+ blk_mq_unquiesce_queue(ctrl->admin_q);
+
+ if (!nvme_change_ctrl_state(ctrl, NVME_CTRL_CONNECTING)) {
+- /* state change failure is ok if we're in DELETING state */
+- WARN_ON_ONCE(ctrl->state != NVME_CTRL_DELETING);
++ /* state change failure is ok if we started ctrl delete */
++ WARN_ON_ONCE(ctrl->state != NVME_CTRL_DELETING &&
++ ctrl->state != NVME_CTRL_DELETING_NOIO);
+ return;
+ }
+
+@@ -2034,8 +2036,9 @@ static void nvme_reset_ctrl_work(struct work_struct *work)
+ nvme_tcp_teardown_ctrl(ctrl, false);
+
+ if (!nvme_change_ctrl_state(ctrl, NVME_CTRL_CONNECTING)) {
+- /* state change failure is ok if we're in DELETING state */
+- WARN_ON_ONCE(ctrl->state != NVME_CTRL_DELETING);
++ /* state change failure is ok if we started ctrl delete */
++ WARN_ON_ONCE(ctrl->state != NVME_CTRL_DELETING &&
++ ctrl->state != NVME_CTRL_DELETING_NOIO);
+ return;
+ }
+
+diff --git a/drivers/pci/ats.c b/drivers/pci/ats.c
+index 390e92f2d8d1f..7ce08da1c6cb1 100644
+--- a/drivers/pci/ats.c
++++ b/drivers/pci/ats.c
+@@ -309,6 +309,21 @@ int pci_prg_resp_pasid_required(struct pci_dev *pdev)
+
+ return pdev->pasid_required;
+ }
++
++/**
++ * pci_pri_supported - Check if PRI is supported.
++ * @pdev: PCI device structure
++ *
++ * Returns true if PRI capability is present, false otherwise.
++ */
++bool pci_pri_supported(struct pci_dev *pdev)
++{
++ /* VFs share the PF PRI */
++ if (pci_physfn(pdev)->pri_cap)
++ return true;
++ return false;
++}
++EXPORT_SYMBOL_GPL(pci_pri_supported);
+ #endif /* CONFIG_PCI_PRI */
+
+ #ifdef CONFIG_PCI_PASID
+diff --git a/drivers/pci/bus.c b/drivers/pci/bus.c
+index 8e40b3e6da77d..3cef835b375fd 100644
+--- a/drivers/pci/bus.c
++++ b/drivers/pci/bus.c
+@@ -322,12 +322,8 @@ void pci_bus_add_device(struct pci_dev *dev)
+
+ dev->match_driver = true;
+ retval = device_attach(&dev->dev);
+- if (retval < 0 && retval != -EPROBE_DEFER) {
++ if (retval < 0 && retval != -EPROBE_DEFER)
+ pci_warn(dev, "device attach failed (%d)\n", retval);
+- pci_proc_detach_device(dev);
+- pci_remove_sysfs_dev_files(dev);
+- return;
+- }
+
+ pci_dev_assign_added(dev, true);
+ }
+diff --git a/drivers/pci/controller/dwc/pcie-qcom.c b/drivers/pci/controller/dwc/pcie-qcom.c
+index 138e1a2d21ccd..5dd1740855770 100644
+--- a/drivers/pci/controller/dwc/pcie-qcom.c
++++ b/drivers/pci/controller/dwc/pcie-qcom.c
+@@ -45,7 +45,13 @@
+ #define PCIE_CAP_CPL_TIMEOUT_DISABLE 0x10
+
+ #define PCIE20_PARF_PHY_CTRL 0x40
++#define PHY_CTRL_PHY_TX0_TERM_OFFSET_MASK GENMASK(20, 16)
++#define PHY_CTRL_PHY_TX0_TERM_OFFSET(x) ((x) << 16)
++
+ #define PCIE20_PARF_PHY_REFCLK 0x4C
++#define PHY_REFCLK_SSP_EN BIT(16)
++#define PHY_REFCLK_USE_PAD BIT(12)
++
+ #define PCIE20_PARF_DBI_BASE_ADDR 0x168
+ #define PCIE20_PARF_SLV_ADDR_SPACE_SIZE 0x16C
+ #define PCIE20_PARF_MHI_CLOCK_RESET_CTRL 0x174
+@@ -77,6 +83,18 @@
+ #define DBI_RO_WR_EN 1
+
+ #define PERST_DELAY_US 1000
++/* PARF registers */
++#define PCIE20_PARF_PCS_DEEMPH 0x34
++#define PCS_DEEMPH_TX_DEEMPH_GEN1(x) ((x) << 16)
++#define PCS_DEEMPH_TX_DEEMPH_GEN2_3_5DB(x) ((x) << 8)
++#define PCS_DEEMPH_TX_DEEMPH_GEN2_6DB(x) ((x) << 0)
++
++#define PCIE20_PARF_PCS_SWING 0x38
++#define PCS_SWING_TX_SWING_FULL(x) ((x) << 8)
++#define PCS_SWING_TX_SWING_LOW(x) ((x) << 0)
++
++#define PCIE20_PARF_CONFIG_BITS 0x50
++#define PHY_RX0_EQ(x) ((x) << 24)
+
+ #define PCIE20_v3_PARF_SLV_ADDR_SPACE_SIZE 0x358
+ #define SLV_ADDR_SPACE_SZ 0x10000000
+@@ -286,6 +304,7 @@ static int qcom_pcie_init_2_1_0(struct qcom_pcie *pcie)
+ struct qcom_pcie_resources_2_1_0 *res = &pcie->res.v2_1_0;
+ struct dw_pcie *pci = pcie->pci;
+ struct device *dev = pci->dev;
++ struct device_node *node = dev->of_node;
+ u32 val;
+ int ret;
+
+@@ -330,9 +349,29 @@ static int qcom_pcie_init_2_1_0(struct qcom_pcie *pcie)
+ val &= ~BIT(0);
+ writel(val, pcie->parf + PCIE20_PARF_PHY_CTRL);
+
++ if (of_device_is_compatible(node, "qcom,pcie-ipq8064")) {
++ writel(PCS_DEEMPH_TX_DEEMPH_GEN1(24) |
++ PCS_DEEMPH_TX_DEEMPH_GEN2_3_5DB(24) |
++ PCS_DEEMPH_TX_DEEMPH_GEN2_6DB(34),
++ pcie->parf + PCIE20_PARF_PCS_DEEMPH);
++ writel(PCS_SWING_TX_SWING_FULL(120) |
++ PCS_SWING_TX_SWING_LOW(120),
++ pcie->parf + PCIE20_PARF_PCS_SWING);
++ writel(PHY_RX0_EQ(4), pcie->parf + PCIE20_PARF_CONFIG_BITS);
++ }
++
++ if (of_device_is_compatible(node, "qcom,pcie-ipq8064")) {
++ /* set TX termination offset */
++ val = readl(pcie->parf + PCIE20_PARF_PHY_CTRL);
++ val &= ~PHY_CTRL_PHY_TX0_TERM_OFFSET_MASK;
++ val |= PHY_CTRL_PHY_TX0_TERM_OFFSET(7);
++ writel(val, pcie->parf + PCIE20_PARF_PHY_CTRL);
++ }
++
+ /* enable external reference clock */
+ val = readl(pcie->parf + PCIE20_PARF_PHY_REFCLK);
+- val |= BIT(16);
++ val &= ~PHY_REFCLK_USE_PAD;
++ val |= PHY_REFCLK_SSP_EN;
+ writel(val, pcie->parf + PCIE20_PARF_PHY_REFCLK);
+
+ ret = reset_control_deassert(res->phy_reset);
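The PARF register writes above all follow one read-modify-write shape: clear a field with its GENMASK-derived mask, then OR in the shifted value. A standalone sketch of that shape using the TX term-offset field (the mask literal expands GENMASK(20, 16); the helper name is illustrative):

```c
#include <stdint.h>

#define TX0_TERM_OFFSET_MASK	(0x1fu << 16)	/* GENMASK(20, 16) */
#define TX0_TERM_OFFSET(x)	((uint32_t)(x) << 16)

/* Read-modify-write of a single register field, as done for
 * PCIE20_PARF_PHY_CTRL in the patch. */
static uint32_t set_term_offset(uint32_t reg, unsigned int off)
{
	reg &= ~TX0_TERM_OFFSET_MASK;
	reg |= TX0_TERM_OFFSET(off) & TX0_TERM_OFFSET_MASK;
	return reg;
}
```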
+diff --git a/drivers/pci/hotplug/acpiphp_glue.c b/drivers/pci/hotplug/acpiphp_glue.c
+index b3869951c0eb7..6e60b4b1bf53b 100644
+--- a/drivers/pci/hotplug/acpiphp_glue.c
++++ b/drivers/pci/hotplug/acpiphp_glue.c
+@@ -122,13 +122,21 @@ static struct acpiphp_context *acpiphp_grab_context(struct acpi_device *adev)
+ struct acpiphp_context *context;
+
+ acpi_lock_hp_context();
++
+ context = acpiphp_get_context(adev);
+- if (!context || context->func.parent->is_going_away) {
+- acpi_unlock_hp_context();
+- return NULL;
++ if (!context)
++ goto unlock;
++
++ if (context->func.parent->is_going_away) {
++ acpiphp_put_context(context);
++ context = NULL;
++ goto unlock;
+ }
++
+ get_bridge(context->func.parent);
+ acpiphp_put_context(context);
++
++unlock:
+ acpi_unlock_hp_context();
+ return context;
+ }
+diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
+index 5622603d96d4e..136d25acff567 100644
+--- a/drivers/pci/quirks.c
++++ b/drivers/pci/quirks.c
+@@ -5207,7 +5207,8 @@ DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_SERVERWORKS, 0x0422, quirk_no_ext_tags);
+ */
+ static void quirk_amd_harvest_no_ats(struct pci_dev *pdev)
+ {
+- if (pdev->device == 0x7340 && pdev->revision != 0xc5)
++ if ((pdev->device == 0x7312 && pdev->revision != 0x00) ||
++ (pdev->device == 0x7340 && pdev->revision != 0xc5))
+ return;
+
+ pci_info(pdev, "disabling ATS\n");
+@@ -5218,6 +5219,8 @@ static void quirk_amd_harvest_no_ats(struct pci_dev *pdev)
+ DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x98e4, quirk_amd_harvest_no_ats);
+ /* AMD Iceland dGPU */
+ DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x6900, quirk_amd_harvest_no_ats);
++/* AMD Navi10 dGPU */
++DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x7312, quirk_amd_harvest_no_ats);
+ /* AMD Navi14 dGPU */
+ DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x7340, quirk_amd_harvest_no_ats);
+ #endif /* CONFIG_PCI_ATS */
+diff --git a/drivers/pinctrl/pinctrl-ingenic.c b/drivers/pinctrl/pinctrl-ingenic.c
+index e5dcf77fe43de..fdfe549794f30 100644
+--- a/drivers/pinctrl/pinctrl-ingenic.c
++++ b/drivers/pinctrl/pinctrl-ingenic.c
+@@ -1810,9 +1810,9 @@ static void ingenic_gpio_irq_ack(struct irq_data *irqd)
+ */
+ high = ingenic_gpio_get_value(jzgc, irq);
+ if (high)
+- irq_set_type(jzgc, irq, IRQ_TYPE_EDGE_FALLING);
++ irq_set_type(jzgc, irq, IRQ_TYPE_LEVEL_LOW);
+ else
+- irq_set_type(jzgc, irq, IRQ_TYPE_EDGE_RISING);
++ irq_set_type(jzgc, irq, IRQ_TYPE_LEVEL_HIGH);
+ }
+
+ if (jzgc->jzpc->info->version >= ID_JZ4760)
+@@ -1848,7 +1848,7 @@ static int ingenic_gpio_irq_set_type(struct irq_data *irqd, unsigned int type)
+ */
+ bool high = ingenic_gpio_get_value(jzgc, irqd->hwirq);
+
+- type = high ? IRQ_TYPE_EDGE_FALLING : IRQ_TYPE_EDGE_RISING;
++ type = high ? IRQ_TYPE_LEVEL_LOW : IRQ_TYPE_LEVEL_HIGH;
+ }
+
+ irq_set_type(jzgc, irqd->hwirq, type);
+@@ -1955,7 +1955,8 @@ static int ingenic_gpio_get_direction(struct gpio_chip *gc, unsigned int offset)
+ unsigned int pin = gc->base + offset;
+
+ if (jzpc->info->version >= ID_JZ4760) {
+- if (ingenic_get_pin_config(jzpc, pin, JZ4760_GPIO_PAT1))
++ if (ingenic_get_pin_config(jzpc, pin, JZ4760_GPIO_INT) ||
++ ingenic_get_pin_config(jzpc, pin, JZ4760_GPIO_PAT1))
+ return GPIO_LINE_DIRECTION_IN;
+ return GPIO_LINE_DIRECTION_OUT;
+ }
+diff --git a/drivers/platform/chrome/cros_ec_ishtp.c b/drivers/platform/chrome/cros_ec_ishtp.c
+index 93a71e93a2f15..41d60af618c9d 100644
+--- a/drivers/platform/chrome/cros_ec_ishtp.c
++++ b/drivers/platform/chrome/cros_ec_ishtp.c
+@@ -660,8 +660,10 @@ static int cros_ec_ishtp_probe(struct ishtp_cl_device *cl_device)
+
+ /* Register croc_ec_dev mfd */
+ rv = cros_ec_dev_init(client_data);
+- if (rv)
++ if (rv) {
++ down_write(&init_lock);
+ goto end_cros_ec_dev_init_error;
++ }
+
+ return 0;
+
+diff --git a/drivers/pwm/pwm-bcm-iproc.c b/drivers/pwm/pwm-bcm-iproc.c
+index 1f829edd8ee70..d392a828fc493 100644
+--- a/drivers/pwm/pwm-bcm-iproc.c
++++ b/drivers/pwm/pwm-bcm-iproc.c
+@@ -85,8 +85,6 @@ static void iproc_pwmc_get_state(struct pwm_chip *chip, struct pwm_device *pwm,
+ u64 tmp, multi, rate;
+ u32 value, prescale;
+
+- rate = clk_get_rate(ip->clk);
+-
+ value = readl(ip->base + IPROC_PWM_CTRL_OFFSET);
+
+ if (value & BIT(IPROC_PWM_CTRL_EN_SHIFT(pwm->hwpwm)))
+@@ -99,6 +97,13 @@ static void iproc_pwmc_get_state(struct pwm_chip *chip, struct pwm_device *pwm,
+ else
+ state->polarity = PWM_POLARITY_INVERSED;
+
++ rate = clk_get_rate(ip->clk);
++ if (rate == 0) {
++ state->period = 0;
++ state->duty_cycle = 0;
++ return;
++ }
++
+ value = readl(ip->base + IPROC_PWM_PRESCALE_OFFSET);
+ prescale = value >> IPROC_PWM_PRESCALE_SHIFT(pwm->hwpwm);
+ prescale &= IPROC_PWM_PRESCALE_MAX;
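The PWM fix above bails out of the state read when `clk_get_rate()` returns 0, which would otherwise divide by zero when converting counter ticks to nanoseconds. A sketch of that conversion with the guard (the `prescale + 1` divisor is illustrative, not the iproc hardware formula):

```c
#include <stdint.h>

#define NSEC_PER_SEC 1000000000ULL

/* Period in ns from a tick count, prescaler, and clock rate; returns 0
 * when the clock rate is unknown, matching the patch's early-out. */
static uint64_t ticks_to_period_ns(uint64_t ticks, uint32_t prescale,
				   uint64_t rate)
{
	if (rate == 0)
		return 0;
	return ticks * (prescale + 1) * NSEC_PER_SEC / rate;
}
```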
+diff --git a/drivers/remoteproc/qcom_q6v5.c b/drivers/remoteproc/qcom_q6v5.c
+index 111a442c993c4..fd6fd36268d93 100644
+--- a/drivers/remoteproc/qcom_q6v5.c
++++ b/drivers/remoteproc/qcom_q6v5.c
+@@ -153,6 +153,8 @@ int qcom_q6v5_request_stop(struct qcom_q6v5 *q6v5)
+ {
+ int ret;
+
++ q6v5->running = false;
++
+ qcom_smem_state_update_bits(q6v5->state,
+ BIT(q6v5->stop_bit), BIT(q6v5->stop_bit));
+
+diff --git a/drivers/remoteproc/qcom_q6v5_mss.c b/drivers/remoteproc/qcom_q6v5_mss.c
+index 629abcee2c1d5..dc95cad40bd58 100644
+--- a/drivers/remoteproc/qcom_q6v5_mss.c
++++ b/drivers/remoteproc/qcom_q6v5_mss.c
+@@ -408,6 +408,12 @@ static int q6v5_load(struct rproc *rproc, const struct firmware *fw)
+ {
+ struct q6v5 *qproc = rproc->priv;
+
++ /* MBA is restricted to a maximum size of 1M */
++ if (fw->size > qproc->mba_size || fw->size > SZ_1M) {
++ dev_err(qproc->dev, "MBA firmware load failed\n");
++ return -EINVAL;
++ }
++
+ memcpy(qproc->mba_region, fw->data, fw->size);
+
+ return 0;
+@@ -1139,15 +1145,14 @@ static int q6v5_mpss_load(struct q6v5 *qproc)
+ } else if (phdr->p_filesz) {
+ /* Replace "xxx.xxx" with "xxx.bxx" */
+ sprintf(fw_name + fw_name_len - 3, "b%02d", i);
+- ret = request_firmware(&seg_fw, fw_name, qproc->dev);
++ ret = request_firmware_into_buf(&seg_fw, fw_name, qproc->dev,
++ ptr, phdr->p_filesz);
+ if (ret) {
+ dev_err(qproc->dev, "failed to load %s\n", fw_name);
+ iounmap(ptr);
+ goto release_firmware;
+ }
+
+- memcpy(ptr, seg_fw->data, seg_fw->size);
+-
+ release_firmware(seg_fw);
+ }
+
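The MBA check above guards a `memcpy()` into a fixed-size region with both the region size and a hard 1M cap before copying firmware data. Reduced to a sketch (hypothetical helper; returns -1 where the driver returns -EINVAL):

```c
#include <stddef.h>
#include <string.h>

#define MBA_MAX_SIZE (1024 * 1024)	/* 1M limit from the patch */

/* Copy fw into a fixed region only if it fits both the region and
 * the hard cap; reject oversized images before touching memory. */
static int load_mba(char *region, size_t region_size,
		    const char *fw, size_t fw_size)
{
	if (fw_size > region_size || fw_size > MBA_MAX_SIZE)
		return -1;
	memcpy(region, fw, fw_size);
	return 0;
}
```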
+diff --git a/drivers/rtc/rtc-cpcap.c b/drivers/rtc/rtc-cpcap.c
+index a603f1f211250..800667d73a6fb 100644
+--- a/drivers/rtc/rtc-cpcap.c
++++ b/drivers/rtc/rtc-cpcap.c
+@@ -261,7 +261,7 @@ static int cpcap_rtc_probe(struct platform_device *pdev)
+ return PTR_ERR(rtc->rtc_dev);
+
+ rtc->rtc_dev->ops = &cpcap_rtc_ops;
+- rtc->rtc_dev->range_max = (1 << 14) * SECS_PER_DAY - 1;
++ rtc->rtc_dev->range_max = (timeu64_t) (DAY_MASK + 1) * SECS_PER_DAY - 1;
+
+ err = cpcap_get_vendor(dev, rtc->regmap, &rtc->vendor);
+ if (err)
+diff --git a/drivers/rtc/rtc-pl031.c b/drivers/rtc/rtc-pl031.c
+index 40d7450a1ce49..c6b89273feba8 100644
+--- a/drivers/rtc/rtc-pl031.c
++++ b/drivers/rtc/rtc-pl031.c
+@@ -275,6 +275,7 @@ static int pl031_set_alarm(struct device *dev, struct rtc_wkalrm *alarm)
+ struct pl031_local *ldata = dev_get_drvdata(dev);
+
+ writel(rtc_tm_to_time64(&alarm->time), ldata->base + RTC_MR);
++ pl031_alarm_irq_enable(dev, alarm->enabled);
+
+ return 0;
+ }
+diff --git a/drivers/scsi/lpfc/lpfc_nvmet.c b/drivers/scsi/lpfc/lpfc_nvmet.c
+index 565419bf8d74a..40b2df6e304ad 100644
+--- a/drivers/scsi/lpfc/lpfc_nvmet.c
++++ b/drivers/scsi/lpfc/lpfc_nvmet.c
+@@ -1914,7 +1914,7 @@ lpfc_nvmet_destroy_targetport(struct lpfc_hba *phba)
+ }
+ tgtp->tport_unreg_cmp = &tport_unreg_cmp;
+ nvmet_fc_unregister_targetport(phba->targetport);
+- if (!wait_for_completion_timeout(tgtp->tport_unreg_cmp,
++ if (!wait_for_completion_timeout(&tport_unreg_cmp,
+ msecs_to_jiffies(LPFC_NVMET_WAIT_TMO)))
+ lpfc_printf_log(phba, KERN_ERR, LOG_NVME,
+ "6179 Unreg targetport x%px timeout "
+diff --git a/drivers/staging/media/rkisp1/rkisp1-isp.c b/drivers/staging/media/rkisp1/rkisp1-isp.c
+index fa53f05e37d81..31c5ae2aa29fb 100644
+--- a/drivers/staging/media/rkisp1/rkisp1-isp.c
++++ b/drivers/staging/media/rkisp1/rkisp1-isp.c
+@@ -25,7 +25,6 @@
+
+ #define RKISP1_DIR_SRC BIT(0)
+ #define RKISP1_DIR_SINK BIT(1)
+-#define RKISP1_DIR_SINK_SRC (RKISP1_DIR_SINK | RKISP1_DIR_SRC)
+
+ /*
+ * NOTE: MIPI controller and input MUX are also configured in this file.
+@@ -69,84 +68,84 @@ static const struct rkisp1_isp_mbus_info rkisp1_isp_formats[] = {
+ .mipi_dt = RKISP1_CIF_CSI2_DT_RAW10,
+ .bayer_pat = RKISP1_RAW_RGGB,
+ .bus_width = 10,
+- .direction = RKISP1_DIR_SINK_SRC,
++ .direction = RKISP1_DIR_SINK | RKISP1_DIR_SRC,
+ }, {
+ .mbus_code = MEDIA_BUS_FMT_SBGGR10_1X10,
+ .fmt_type = RKISP1_FMT_BAYER,
+ .mipi_dt = RKISP1_CIF_CSI2_DT_RAW10,
+ .bayer_pat = RKISP1_RAW_BGGR,
+ .bus_width = 10,
+- .direction = RKISP1_DIR_SINK_SRC,
++ .direction = RKISP1_DIR_SINK | RKISP1_DIR_SRC,
+ }, {
+ .mbus_code = MEDIA_BUS_FMT_SGBRG10_1X10,
+ .fmt_type = RKISP1_FMT_BAYER,
+ .mipi_dt = RKISP1_CIF_CSI2_DT_RAW10,
+ .bayer_pat = RKISP1_RAW_GBRG,
+ .bus_width = 10,
+- .direction = RKISP1_DIR_SINK_SRC,
++ .direction = RKISP1_DIR_SINK | RKISP1_DIR_SRC,
+ }, {
+ .mbus_code = MEDIA_BUS_FMT_SGRBG10_1X10,
+ .fmt_type = RKISP1_FMT_BAYER,
+ .mipi_dt = RKISP1_CIF_CSI2_DT_RAW10,
+ .bayer_pat = RKISP1_RAW_GRBG,
+ .bus_width = 10,
+- .direction = RKISP1_DIR_SINK_SRC,
++ .direction = RKISP1_DIR_SINK | RKISP1_DIR_SRC,
+ }, {
+ .mbus_code = MEDIA_BUS_FMT_SRGGB12_1X12,
+ .fmt_type = RKISP1_FMT_BAYER,
+ .mipi_dt = RKISP1_CIF_CSI2_DT_RAW12,
+ .bayer_pat = RKISP1_RAW_RGGB,
+ .bus_width = 12,
+- .direction = RKISP1_DIR_SINK_SRC,
++ .direction = RKISP1_DIR_SINK | RKISP1_DIR_SRC,
+ }, {
+ .mbus_code = MEDIA_BUS_FMT_SBGGR12_1X12,
+ .fmt_type = RKISP1_FMT_BAYER,
+ .mipi_dt = RKISP1_CIF_CSI2_DT_RAW12,
+ .bayer_pat = RKISP1_RAW_BGGR,
+ .bus_width = 12,
+- .direction = RKISP1_DIR_SINK_SRC,
++ .direction = RKISP1_DIR_SINK | RKISP1_DIR_SRC,
+ }, {
+ .mbus_code = MEDIA_BUS_FMT_SGBRG12_1X12,
+ .fmt_type = RKISP1_FMT_BAYER,
+ .mipi_dt = RKISP1_CIF_CSI2_DT_RAW12,
+ .bayer_pat = RKISP1_RAW_GBRG,
+ .bus_width = 12,
+- .direction = RKISP1_DIR_SINK_SRC,
++ .direction = RKISP1_DIR_SINK | RKISP1_DIR_SRC,
+ }, {
+ .mbus_code = MEDIA_BUS_FMT_SGRBG12_1X12,
+ .fmt_type = RKISP1_FMT_BAYER,
+ .mipi_dt = RKISP1_CIF_CSI2_DT_RAW12,
+ .bayer_pat = RKISP1_RAW_GRBG,
+ .bus_width = 12,
+- .direction = RKISP1_DIR_SINK_SRC,
++ .direction = RKISP1_DIR_SINK | RKISP1_DIR_SRC,
+ }, {
+ .mbus_code = MEDIA_BUS_FMT_SRGGB8_1X8,
+ .fmt_type = RKISP1_FMT_BAYER,
+ .mipi_dt = RKISP1_CIF_CSI2_DT_RAW8,
+ .bayer_pat = RKISP1_RAW_RGGB,
+ .bus_width = 8,
+- .direction = RKISP1_DIR_SINK_SRC,
++ .direction = RKISP1_DIR_SINK | RKISP1_DIR_SRC,
+ }, {
+ .mbus_code = MEDIA_BUS_FMT_SBGGR8_1X8,
+ .fmt_type = RKISP1_FMT_BAYER,
+ .mipi_dt = RKISP1_CIF_CSI2_DT_RAW8,
+ .bayer_pat = RKISP1_RAW_BGGR,
+ .bus_width = 8,
+- .direction = RKISP1_DIR_SINK_SRC,
++ .direction = RKISP1_DIR_SINK | RKISP1_DIR_SRC,
+ }, {
+ .mbus_code = MEDIA_BUS_FMT_SGBRG8_1X8,
+ .fmt_type = RKISP1_FMT_BAYER,
+ .mipi_dt = RKISP1_CIF_CSI2_DT_RAW8,
+ .bayer_pat = RKISP1_RAW_GBRG,
+ .bus_width = 8,
+- .direction = RKISP1_DIR_SINK_SRC,
++ .direction = RKISP1_DIR_SINK | RKISP1_DIR_SRC,
+ }, {
+ .mbus_code = MEDIA_BUS_FMT_SGRBG8_1X8,
+ .fmt_type = RKISP1_FMT_BAYER,
+ .mipi_dt = RKISP1_CIF_CSI2_DT_RAW8,
+ .bayer_pat = RKISP1_RAW_GRBG,
+ .bus_width = 8,
+- .direction = RKISP1_DIR_SINK_SRC,
++ .direction = RKISP1_DIR_SINK | RKISP1_DIR_SRC,
+ }, {
+ .mbus_code = MEDIA_BUS_FMT_YUYV8_1X16,
+ .fmt_type = RKISP1_FMT_YUV,
+diff --git a/drivers/usb/serial/ftdi_sio.c b/drivers/usb/serial/ftdi_sio.c
+index 9ad44a96dfe3a..33f1cca7eaa61 100644
+--- a/drivers/usb/serial/ftdi_sio.c
++++ b/drivers/usb/serial/ftdi_sio.c
+@@ -2480,12 +2480,11 @@ static int ftdi_prepare_write_buffer(struct usb_serial_port *port,
+ #define FTDI_RS_ERR_MASK (FTDI_RS_BI | FTDI_RS_PE | FTDI_RS_FE | FTDI_RS_OE)
+
+ static int ftdi_process_packet(struct usb_serial_port *port,
+- struct ftdi_private *priv, char *packet, int len)
++ struct ftdi_private *priv, unsigned char *buf, int len)
+ {
++ unsigned char status;
+ int i;
+- char status;
+ char flag;
+- char *ch;
+
+ if (len < 2) {
+ dev_dbg(&port->dev, "malformed packet\n");
+@@ -2495,7 +2494,7 @@ static int ftdi_process_packet(struct usb_serial_port *port,
+ /* Compare new line status to the old one, signal if different/
+ N.B. packet may be processed more than once, but differences
+ are only processed once. */
+- status = packet[0] & FTDI_STATUS_B0_MASK;
++ status = buf[0] & FTDI_STATUS_B0_MASK;
+ if (status != priv->prev_status) {
+ char diff_status = status ^ priv->prev_status;
+
+@@ -2521,13 +2520,12 @@ static int ftdi_process_packet(struct usb_serial_port *port,
+ }
+
+ /* save if the transmitter is empty or not */
+- if (packet[1] & FTDI_RS_TEMT)
++ if (buf[1] & FTDI_RS_TEMT)
+ priv->transmit_empty = 1;
+ else
+ priv->transmit_empty = 0;
+
+- len -= 2;
+- if (!len)
++ if (len == 2)
+ return 0; /* status only */
+
+ /*
+@@ -2535,40 +2533,41 @@ static int ftdi_process_packet(struct usb_serial_port *port,
+ * data payload to avoid over-reporting.
+ */
+ flag = TTY_NORMAL;
+- if (packet[1] & FTDI_RS_ERR_MASK) {
++ if (buf[1] & FTDI_RS_ERR_MASK) {
+ /* Break takes precedence over parity, which takes precedence
+ * over framing errors */
+- if (packet[1] & FTDI_RS_BI) {
++ if (buf[1] & FTDI_RS_BI) {
+ flag = TTY_BREAK;
+ port->icount.brk++;
+ usb_serial_handle_break(port);
+- } else if (packet[1] & FTDI_RS_PE) {
++ } else if (buf[1] & FTDI_RS_PE) {
+ flag = TTY_PARITY;
+ port->icount.parity++;
+- } else if (packet[1] & FTDI_RS_FE) {
++ } else if (buf[1] & FTDI_RS_FE) {
+ flag = TTY_FRAME;
+ port->icount.frame++;
+ }
+ /* Overrun is special, not associated with a char */
+- if (packet[1] & FTDI_RS_OE) {
++ if (buf[1] & FTDI_RS_OE) {
+ port->icount.overrun++;
+ tty_insert_flip_char(&port->port, 0, TTY_OVERRUN);
+ }
+ }
+
+- port->icount.rx += len;
+- ch = packet + 2;
++ port->icount.rx += len - 2;
+
+ if (port->port.console && port->sysrq) {
+- for (i = 0; i < len; i++, ch++) {
+- if (!usb_serial_handle_sysrq_char(port, *ch))
+- tty_insert_flip_char(&port->port, *ch, flag);
++ for (i = 2; i < len; i++) {
++ if (usb_serial_handle_sysrq_char(port, buf[i]))
++ continue;
++ tty_insert_flip_char(&port->port, buf[i], flag);
+ }
+ } else {
+- tty_insert_flip_string_fixed_flag(&port->port, ch, flag, len);
++ tty_insert_flip_string_fixed_flag(&port->port, buf + 2, flag,
++ len - 2);
+ }
+
+- return len;
++ return len - 2;
+ }
+
+ static void ftdi_process_read_urb(struct urb *urb)
+diff --git a/drivers/vdpa/vdpa_sim/vdpa_sim.c b/drivers/vdpa/vdpa_sim/vdpa_sim.c
+index e2dc8edd680e0..4907c1cfe6671 100644
+--- a/drivers/vdpa/vdpa_sim/vdpa_sim.c
++++ b/drivers/vdpa/vdpa_sim/vdpa_sim.c
+@@ -330,6 +330,7 @@ static struct vdpasim *vdpasim_create(void)
+
+ INIT_WORK(&vdpasim->work, vdpasim_work);
+ spin_lock_init(&vdpasim->lock);
++ spin_lock_init(&vdpasim->iommu_lock);
+
+ dev = &vdpasim->vdpa.dev;
+ dev->coherent_dma_mask = DMA_BIT_MASK(64);
+@@ -520,7 +521,7 @@ static void vdpasim_get_config(struct vdpa_device *vdpa, unsigned int offset,
+ struct vdpasim *vdpasim = vdpa_to_sim(vdpa);
+
+ if (offset + len < sizeof(struct virtio_net_config))
+- memcpy(buf, &vdpasim->config + offset, len);
++ memcpy(buf, (u8 *)&vdpasim->config + offset, len);
+ }
+
+ static void vdpasim_set_config(struct vdpa_device *vdpa, unsigned int offset,
+diff --git a/drivers/watchdog/f71808e_wdt.c b/drivers/watchdog/f71808e_wdt.c
+index a3c44d75d80eb..26bf366aebc23 100644
+--- a/drivers/watchdog/f71808e_wdt.c
++++ b/drivers/watchdog/f71808e_wdt.c
+@@ -690,9 +690,9 @@ static int __init watchdog_init(int sioaddr)
+ * into the module have been registered yet.
+ */
+ watchdog.sioaddr = sioaddr;
+- watchdog.ident.options = WDIOC_SETTIMEOUT
+- | WDIOF_MAGICCLOSE
+- | WDIOF_KEEPALIVEPING;
++ watchdog.ident.options = WDIOF_MAGICCLOSE
++ | WDIOF_KEEPALIVEPING
++ | WDIOF_CARDRESET;
+
+ snprintf(watchdog.ident.identity,
+ sizeof(watchdog.ident.identity), "%s watchdog",
+@@ -706,6 +706,13 @@ static int __init watchdog_init(int sioaddr)
+ wdt_conf = superio_inb(sioaddr, F71808FG_REG_WDT_CONF);
+ watchdog.caused_reboot = wdt_conf & BIT(F71808FG_FLAG_WDTMOUT_STS);
+
++ /*
++ * We don't want WDTMOUT_STS to stick around till regular reboot.
++ * Write 1 to the bit to clear it to zero.
++ */
++ superio_outb(sioaddr, F71808FG_REG_WDT_CONF,
++ wdt_conf | BIT(F71808FG_FLAG_WDTMOUT_STS));
++
+ superio_exit(sioaddr);
+
+ err = watchdog_set_timeout(timeout);
+diff --git a/drivers/watchdog/rti_wdt.c b/drivers/watchdog/rti_wdt.c
+index d456dd72d99a0..c904496fff65e 100644
+--- a/drivers/watchdog/rti_wdt.c
++++ b/drivers/watchdog/rti_wdt.c
+@@ -211,6 +211,7 @@ static int rti_wdt_probe(struct platform_device *pdev)
+
+ err_iomap:
+ pm_runtime_put_sync(&pdev->dev);
++ pm_runtime_disable(&pdev->dev);
+
+ return ret;
+ }
+@@ -221,6 +222,7 @@ static int rti_wdt_remove(struct platform_device *pdev)
+
+ watchdog_unregister_device(&wdt->wdd);
+ pm_runtime_put(&pdev->dev);
++ pm_runtime_disable(&pdev->dev);
+
+ return 0;
+ }
+diff --git a/drivers/watchdog/watchdog_dev.c b/drivers/watchdog/watchdog_dev.c
+index 7e4cd34a8c20e..b535f5fa279b9 100644
+--- a/drivers/watchdog/watchdog_dev.c
++++ b/drivers/watchdog/watchdog_dev.c
+@@ -994,6 +994,15 @@ static int watchdog_cdev_register(struct watchdog_device *wdd)
+ if (IS_ERR_OR_NULL(watchdog_kworker))
+ return -ENODEV;
+
++ device_initialize(&wd_data->dev);
++ wd_data->dev.devt = MKDEV(MAJOR(watchdog_devt), wdd->id);
++ wd_data->dev.class = &watchdog_class;
++ wd_data->dev.parent = wdd->parent;
++ wd_data->dev.groups = wdd->groups;
++ wd_data->dev.release = watchdog_core_data_release;
++ dev_set_drvdata(&wd_data->dev, wdd);
++ dev_set_name(&wd_data->dev, "watchdog%d", wdd->id);
++
+ kthread_init_work(&wd_data->work, watchdog_ping_work);
+ hrtimer_init(&wd_data->timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL_HARD);
+ wd_data->timer.function = watchdog_timer_expired;
+@@ -1014,15 +1023,6 @@ static int watchdog_cdev_register(struct watchdog_device *wdd)
+ }
+ }
+
+- device_initialize(&wd_data->dev);
+- wd_data->dev.devt = MKDEV(MAJOR(watchdog_devt), wdd->id);
+- wd_data->dev.class = &watchdog_class;
+- wd_data->dev.parent = wdd->parent;
+- wd_data->dev.groups = wdd->groups;
+- wd_data->dev.release = watchdog_core_data_release;
+- dev_set_drvdata(&wd_data->dev, wdd);
+- dev_set_name(&wd_data->dev, "watchdog%d", wdd->id);
+-
+ /* Fill in the data structures */
+ cdev_init(&wd_data->cdev, &watchdog_fops);
+
+diff --git a/fs/btrfs/ctree.h b/fs/btrfs/ctree.h
+index 68bd89e3d4f09..562c1d61bb8b5 100644
+--- a/fs/btrfs/ctree.h
++++ b/fs/btrfs/ctree.h
+@@ -1038,8 +1038,10 @@ struct btrfs_root {
+ wait_queue_head_t log_writer_wait;
+ wait_queue_head_t log_commit_wait[2];
+ struct list_head log_ctxs[2];
++ /* Used only for log trees of subvolumes, not for the log root tree */
+ atomic_t log_writers;
+ atomic_t log_commit[2];
++ /* Used only for log trees of subvolumes, not for the log root tree */
+ atomic_t log_batch;
+ int log_transid;
+ /* No matter the commit succeeds or not*/
+@@ -3196,7 +3198,7 @@ do { \
+ /* Report first abort since mount */ \
+ if (!test_and_set_bit(BTRFS_FS_STATE_TRANS_ABORTED, \
+ &((trans)->fs_info->fs_state))) { \
+- if ((errno) != -EIO) { \
++ if ((errno) != -EIO && (errno) != -EROFS) { \
+ WARN(1, KERN_DEBUG \
+ "BTRFS: Transaction aborted (error %d)\n", \
+ (errno)); \
+diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
+index f00e64fee5ddb..f35be66413f95 100644
+--- a/fs/btrfs/disk-io.c
++++ b/fs/btrfs/disk-io.c
+@@ -1432,9 +1432,16 @@ static int btrfs_init_fs_root(struct btrfs_root *root)
+ spin_lock_init(&root->ino_cache_lock);
+ init_waitqueue_head(&root->ino_cache_wait);
+
+- ret = get_anon_bdev(&root->anon_dev);
+- if (ret)
+- goto fail;
++ /*
++ * Don't assign anonymous block device to roots that are not exposed to
++ * userspace, the id pool is limited to 1M
++ */
++ if (is_fstree(root->root_key.objectid) &&
++ btrfs_root_refs(&root->root_item) > 0) {
++ ret = get_anon_bdev(&root->anon_dev);
++ if (ret)
++ goto fail;
++ }
+
+ mutex_lock(&root->objectid_mutex);
+ ret = btrfs_find_highest_objectid(root,
+diff --git a/fs/btrfs/extent-io-tree.h b/fs/btrfs/extent-io-tree.h
+index b6561455b3c42..8bbb734f3f514 100644
+--- a/fs/btrfs/extent-io-tree.h
++++ b/fs/btrfs/extent-io-tree.h
+@@ -34,6 +34,8 @@ struct io_failure_record;
+ */
+ #define CHUNK_ALLOCATED EXTENT_DIRTY
+ #define CHUNK_TRIMMED EXTENT_DEFRAG
++#define CHUNK_STATE_MASK (CHUNK_ALLOCATED | \
++ CHUNK_TRIMMED)
+
+ enum {
+ IO_TREE_FS_PINNED_EXTENTS,
+diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
+index 7c86188b33d43..1409bbbdeb664 100644
+--- a/fs/btrfs/extent-tree.c
++++ b/fs/btrfs/extent-tree.c
+@@ -33,6 +33,7 @@
+ #include "delalloc-space.h"
+ #include "block-group.h"
+ #include "discard.h"
++#include "rcu-string.h"
+
+ #undef SCRAMBLE_DELAYED_REFS
+
+@@ -5313,7 +5314,14 @@ int btrfs_drop_snapshot(struct btrfs_root *root, int update_ref, int for_reloc)
+ goto out;
+ }
+
+- trans = btrfs_start_transaction(tree_root, 0);
++ /*
++ * Use join to avoid potential EINTR from transaction start. See
++ * wait_reserve_ticket and the whole reservation callchain.
++ */
++ if (for_reloc)
++ trans = btrfs_join_transaction(tree_root);
++ else
++ trans = btrfs_start_transaction(tree_root, 0);
+ if (IS_ERR(trans)) {
+ err = PTR_ERR(trans);
+ goto out_free;
+@@ -5678,6 +5686,19 @@ static int btrfs_trim_free_extents(struct btrfs_device *device, u64 *trimmed)
+ &start, &end,
+ CHUNK_TRIMMED | CHUNK_ALLOCATED);
+
++ /* Check if there are any CHUNK_* bits left */
++ if (start > device->total_bytes) {
++ WARN_ON(IS_ENABLED(CONFIG_BTRFS_DEBUG));
++ btrfs_warn_in_rcu(fs_info,
++"ignoring attempt to trim beyond device size: offset %llu length %llu device %s device size %llu",
++ start, end - start + 1,
++ rcu_str_deref(device->name),
++ device->total_bytes);
++ mutex_unlock(&fs_info->chunk_mutex);
++ ret = 0;
++ break;
++ }
++
+ /* Ensure we skip the reserved area in the first 1M */
+ start = max_t(u64, start, SZ_1M);
+
+diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
+index 9d6d646e1eb08..e95aa02ad6396 100644
+--- a/fs/btrfs/extent_io.c
++++ b/fs/btrfs/extent_io.c
+@@ -4110,7 +4110,7 @@ retry:
+ if (!test_bit(BTRFS_FS_STATE_ERROR, &fs_info->fs_state)) {
+ ret = flush_write_bio(&epd);
+ } else {
+- ret = -EUCLEAN;
++ ret = -EROFS;
+ end_write_bio(&epd, ret);
+ }
+ return ret;
+@@ -4504,15 +4504,25 @@ int try_release_extent_mapping(struct page *page, gfp_t mask)
+ free_extent_map(em);
+ break;
+ }
+- if (!test_range_bit(tree, em->start,
+- extent_map_end(em) - 1,
+- EXTENT_LOCKED, 0, NULL)) {
++ if (test_range_bit(tree, em->start,
++ extent_map_end(em) - 1,
++ EXTENT_LOCKED, 0, NULL))
++ goto next;
++ /*
++ * If it's not in the list of modified extents, used
++ * by a fast fsync, we can remove it. If it's being
++ * logged we can safely remove it since fsync took an
++ * extra reference on the em.
++ */
++ if (list_empty(&em->list) ||
++ test_bit(EXTENT_FLAG_LOGGING, &em->flags)) {
+ set_bit(BTRFS_INODE_NEEDS_FULL_SYNC,
+ &btrfs_inode->runtime_flags);
+ remove_extent_mapping(map, em);
+ /* once for the rb tree */
+ free_extent_map(em);
+ }
++next:
+ start = extent_map_end(em);
+ write_unlock(&map->lock);
+
+diff --git a/fs/btrfs/free-space-cache.c b/fs/btrfs/free-space-cache.c
+index 3613da065a737..e4f495d3cb894 100644
+--- a/fs/btrfs/free-space-cache.c
++++ b/fs/btrfs/free-space-cache.c
+@@ -2286,7 +2286,7 @@ out:
+ static bool try_merge_free_space(struct btrfs_free_space_ctl *ctl,
+ struct btrfs_free_space *info, bool update_stat)
+ {
+- struct btrfs_free_space *left_info;
++ struct btrfs_free_space *left_info = NULL;
+ struct btrfs_free_space *right_info;
+ bool merged = false;
+ u64 offset = info->offset;
+@@ -2302,7 +2302,7 @@ static bool try_merge_free_space(struct btrfs_free_space_ctl *ctl,
+ if (right_info && rb_prev(&right_info->offset_index))
+ left_info = rb_entry(rb_prev(&right_info->offset_index),
+ struct btrfs_free_space, offset_index);
+- else
++ else if (!right_info)
+ left_info = tree_search_offset(ctl, offset - 1, 0, 0);
+
+ /* See try_merge_free_space() comment. */
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index 6cb3dc2748974..2ccfa424a892a 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -650,12 +650,18 @@ cont:
+ page_error_op |
+ PAGE_END_WRITEBACK);
+
+- for (i = 0; i < nr_pages; i++) {
+- WARN_ON(pages[i]->mapping);
+- put_page(pages[i]);
++ /*
++ * Ensure we only free the compressed pages if we have
++ * them allocated, as we can still reach here with
++ * inode_need_compress() == false.
++ */
++ if (pages) {
++ for (i = 0; i < nr_pages; i++) {
++ WARN_ON(pages[i]->mapping);
++ put_page(pages[i]);
++ }
++ kfree(pages);
+ }
+- kfree(pages);
+-
+ return 0;
+ }
+ }
+@@ -4049,6 +4055,8 @@ int btrfs_delete_subvolume(struct inode *dir, struct dentry *dentry)
+ }
+ }
+
++ free_anon_bdev(dest->anon_dev);
++ dest->anon_dev = 0;
+ out_end_trans:
+ trans->block_rsv = NULL;
+ trans->bytes_reserved = 0;
+@@ -6632,7 +6640,7 @@ struct extent_map *btrfs_get_extent(struct btrfs_inode *inode,
+ extent_type == BTRFS_FILE_EXTENT_PREALLOC) {
+ /* Only regular file could have regular/prealloc extent */
+ if (!S_ISREG(inode->vfs_inode.i_mode)) {
+- ret = -EUCLEAN;
++ err = -EUCLEAN;
+ btrfs_crit(fs_info,
+ "regular/prealloc extent found for non-regular inode %llu",
+ btrfs_ino(inode));
+diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
+index 40b729dce91cd..92289adfee95a 100644
+--- a/fs/btrfs/ioctl.c
++++ b/fs/btrfs/ioctl.c
+@@ -164,8 +164,11 @@ static int btrfs_ioctl_getflags(struct file *file, void __user *arg)
+ return 0;
+ }
+
+-/* Check if @flags are a supported and valid set of FS_*_FL flags */
+-static int check_fsflags(unsigned int flags)
++/*
++ * Check if @flags are a supported and valid set of FS_*_FL flags and that
++ * the old and new flags are not conflicting
++ */
++static int check_fsflags(unsigned int old_flags, unsigned int flags)
+ {
+ if (flags & ~(FS_IMMUTABLE_FL | FS_APPEND_FL | \
+ FS_NOATIME_FL | FS_NODUMP_FL | \
+@@ -174,9 +177,19 @@ static int check_fsflags(unsigned int flags)
+ FS_NOCOW_FL))
+ return -EOPNOTSUPP;
+
++ /* COMPR and NOCOMP on new/old are valid */
+ if ((flags & FS_NOCOMP_FL) && (flags & FS_COMPR_FL))
+ return -EINVAL;
+
++ if ((flags & FS_COMPR_FL) && (flags & FS_NOCOW_FL))
++ return -EINVAL;
++
++ /* NOCOW and compression options are mutually exclusive */
++ if ((old_flags & FS_NOCOW_FL) && (flags & (FS_COMPR_FL | FS_NOCOMP_FL)))
++ return -EINVAL;
++ if ((flags & FS_NOCOW_FL) && (old_flags & (FS_COMPR_FL | FS_NOCOMP_FL)))
++ return -EINVAL;
++
+ return 0;
+ }
+
+@@ -190,7 +203,7 @@ static int btrfs_ioctl_setflags(struct file *file, void __user *arg)
+ unsigned int fsflags, old_fsflags;
+ int ret;
+ const char *comp = NULL;
+- u32 binode_flags = binode->flags;
++ u32 binode_flags;
+
+ if (!inode_owner_or_capable(inode))
+ return -EPERM;
+@@ -201,22 +214,23 @@ static int btrfs_ioctl_setflags(struct file *file, void __user *arg)
+ if (copy_from_user(&fsflags, arg, sizeof(fsflags)))
+ return -EFAULT;
+
+- ret = check_fsflags(fsflags);
+- if (ret)
+- return ret;
+-
+ ret = mnt_want_write_file(file);
+ if (ret)
+ return ret;
+
+ inode_lock(inode);
+-
+ fsflags = btrfs_mask_fsflags_for_type(inode, fsflags);
+ old_fsflags = btrfs_inode_flags_to_fsflags(binode->flags);
++
+ ret = vfs_ioc_setflags_prepare(inode, old_fsflags, fsflags);
+ if (ret)
+ goto out_unlock;
+
++ ret = check_fsflags(old_fsflags, fsflags);
++ if (ret)
++ goto out_unlock;
++
++ binode_flags = binode->flags;
+ if (fsflags & FS_SYNC_FL)
+ binode_flags |= BTRFS_INODE_SYNC;
+ else
+@@ -3197,11 +3211,15 @@ static long btrfs_ioctl_fs_info(struct btrfs_fs_info *fs_info,
+ struct btrfs_ioctl_fs_info_args *fi_args;
+ struct btrfs_device *device;
+ struct btrfs_fs_devices *fs_devices = fs_info->fs_devices;
++ u64 flags_in;
+ int ret = 0;
+
+- fi_args = kzalloc(sizeof(*fi_args), GFP_KERNEL);
+- if (!fi_args)
+- return -ENOMEM;
++ fi_args = memdup_user(arg, sizeof(*fi_args));
++ if (IS_ERR(fi_args))
++ return PTR_ERR(fi_args);
++
++ flags_in = fi_args->flags;
++ memset(fi_args, 0, sizeof(*fi_args));
+
+ rcu_read_lock();
+ fi_args->num_devices = fs_devices->num_devices;
+@@ -3217,6 +3235,12 @@ static long btrfs_ioctl_fs_info(struct btrfs_fs_info *fs_info,
+ fi_args->sectorsize = fs_info->sectorsize;
+ fi_args->clone_alignment = fs_info->sectorsize;
+
++ if (flags_in & BTRFS_FS_INFO_FLAG_CSUM_INFO) {
++ fi_args->csum_type = btrfs_super_csum_type(fs_info->super_copy);
++ fi_args->csum_size = btrfs_super_csum_size(fs_info->super_copy);
++ fi_args->flags |= BTRFS_FS_INFO_FLAG_CSUM_INFO;
++ }
++
+ if (copy_to_user(arg, fi_args, sizeof(*fi_args)))
+ ret = -EFAULT;
+
+diff --git a/fs/btrfs/ref-verify.c b/fs/btrfs/ref-verify.c
+index 7887317033c98..452ca955eb75e 100644
+--- a/fs/btrfs/ref-verify.c
++++ b/fs/btrfs/ref-verify.c
+@@ -286,6 +286,8 @@ static struct block_entry *add_block_entry(struct btrfs_fs_info *fs_info,
+ exist_re = insert_root_entry(&exist->roots, re);
+ if (exist_re)
+ kfree(re);
++ } else {
++ kfree(re);
+ }
+ kfree(be);
+ return exist;
+diff --git a/fs/btrfs/relocation.c b/fs/btrfs/relocation.c
+index f67d736c27a12..8e9c2142c66a8 100644
+--- a/fs/btrfs/relocation.c
++++ b/fs/btrfs/relocation.c
+@@ -2402,12 +2402,20 @@ static noinline_for_stack int merge_reloc_root(struct reloc_control *rc,
+ btrfs_unlock_up_safe(path, 0);
+ }
+
+- min_reserved = fs_info->nodesize * (BTRFS_MAX_LEVEL - 1) * 2;
++ /*
++ * In merge_reloc_root(), we modify the upper level pointer to swap the
++ * tree blocks between reloc tree and subvolume tree. Thus for tree
++ * block COW, we COW at most from level 1 to root level for each tree.
++ *
++ * Thus the needed metadata size is at most root_level * nodesize,
++ * and * 2 since we have two trees to COW.
++ */
++ min_reserved = fs_info->nodesize * btrfs_root_level(root_item) * 2;
+ memset(&next_key, 0, sizeof(next_key));
+
+ while (1) {
+ ret = btrfs_block_rsv_refill(root, rc->block_rsv, min_reserved,
+- BTRFS_RESERVE_FLUSH_ALL);
++ BTRFS_RESERVE_FLUSH_LIMIT);
+ if (ret) {
+ err = ret;
+ goto out;
+diff --git a/fs/btrfs/scrub.c b/fs/btrfs/scrub.c
+index 7c50ac5b68762..f2b9c4ec302d3 100644
+--- a/fs/btrfs/scrub.c
++++ b/fs/btrfs/scrub.c
+@@ -3761,7 +3761,7 @@ static noinline_for_stack int scrub_supers(struct scrub_ctx *sctx,
+ struct btrfs_fs_info *fs_info = sctx->fs_info;
+
+ if (test_bit(BTRFS_FS_STATE_ERROR, &fs_info->fs_state))
+- return -EIO;
++ return -EROFS;
+
+ /* Seed devices of a new filesystem has their own generation. */
+ if (scrub_dev->fs_devices != fs_info->fs_devices)
+diff --git a/fs/btrfs/super.c b/fs/btrfs/super.c
+index 7932d8d07cffe..6ca9bc3f51be1 100644
+--- a/fs/btrfs/super.c
++++ b/fs/btrfs/super.c
+@@ -440,6 +440,7 @@ int btrfs_parse_options(struct btrfs_fs_info *info, char *options,
+ char *compress_type;
+ bool compress_force = false;
+ enum btrfs_compression_type saved_compress_type;
++ int saved_compress_level;
+ bool saved_compress_force;
+ int no_compress = 0;
+
+@@ -522,6 +523,7 @@ int btrfs_parse_options(struct btrfs_fs_info *info, char *options,
+ info->compress_type : BTRFS_COMPRESS_NONE;
+ saved_compress_force =
+ btrfs_test_opt(info, FORCE_COMPRESS);
++ saved_compress_level = info->compress_level;
+ if (token == Opt_compress ||
+ token == Opt_compress_force ||
+ strncmp(args[0].from, "zlib", 4) == 0) {
+@@ -566,6 +568,8 @@ int btrfs_parse_options(struct btrfs_fs_info *info, char *options,
+ no_compress = 0;
+ } else if (strncmp(args[0].from, "no", 2) == 0) {
+ compress_type = "no";
++ info->compress_level = 0;
++ info->compress_type = 0;
+ btrfs_clear_opt(info->mount_opt, COMPRESS);
+ btrfs_clear_opt(info->mount_opt, FORCE_COMPRESS);
+ compress_force = false;
+@@ -586,11 +590,11 @@ int btrfs_parse_options(struct btrfs_fs_info *info, char *options,
+ */
+ btrfs_clear_opt(info->mount_opt, FORCE_COMPRESS);
+ }
+- if ((btrfs_test_opt(info, COMPRESS) &&
+- (info->compress_type != saved_compress_type ||
+- compress_force != saved_compress_force)) ||
+- (!btrfs_test_opt(info, COMPRESS) &&
+- no_compress == 1)) {
++ if (no_compress == 1) {
++ btrfs_info(info, "use no compression");
++ } else if ((info->compress_type != saved_compress_type) ||
++ (compress_force != saved_compress_force) ||
++ (info->compress_level != saved_compress_level)) {
+ btrfs_info(info, "%s %s compression, level %d",
+ (compress_force) ? "force" : "use",
+ compress_type, info->compress_level);
+@@ -1310,6 +1314,7 @@ static int btrfs_show_options(struct seq_file *seq, struct dentry *dentry)
+ {
+ struct btrfs_fs_info *info = btrfs_sb(dentry->d_sb);
+ const char *compress_type;
++ const char *subvol_name;
+
+ if (btrfs_test_opt(info, DEGRADED))
+ seq_puts(seq, ",degraded");
+@@ -1396,8 +1401,13 @@ static int btrfs_show_options(struct seq_file *seq, struct dentry *dentry)
+ seq_puts(seq, ",ref_verify");
+ seq_printf(seq, ",subvolid=%llu",
+ BTRFS_I(d_inode(dentry))->root->root_key.objectid);
+- seq_puts(seq, ",subvol=");
+- seq_dentry(seq, dentry, " \t\n\\");
++ subvol_name = btrfs_get_subvol_name_from_objectid(info,
++ BTRFS_I(d_inode(dentry))->root->root_key.objectid);
++ if (!IS_ERR(subvol_name)) {
++ seq_puts(seq, ",subvol=");
++ seq_escape(seq, subvol_name, " \t\n\\");
++ kfree(subvol_name);
++ }
+ return 0;
+ }
+
+@@ -1885,6 +1895,12 @@ static int btrfs_remount(struct super_block *sb, int *flags, char *data)
+ set_bit(BTRFS_FS_OPEN, &fs_info->flags);
+ }
+ out:
++ /*
++ * We need to set SB_I_VERSION here otherwise it'll get cleared by VFS,
++ * since the absence of the flag means it can be toggled off by remount.
++ */
++ *flags |= SB_I_VERSION;
++
+ wake_up_process(fs_info->transaction_kthread);
+ btrfs_remount_cleanup(fs_info, old_opts);
+ return 0;
+@@ -2294,9 +2310,7 @@ static int btrfs_unfreeze(struct super_block *sb)
+ static int btrfs_show_devname(struct seq_file *m, struct dentry *root)
+ {
+ struct btrfs_fs_info *fs_info = btrfs_sb(root->d_sb);
+- struct btrfs_fs_devices *cur_devices;
+ struct btrfs_device *dev, *first_dev = NULL;
+- struct list_head *head;
+
+ /*
+ * Lightweight locking of the devices. We should not need
+@@ -2306,18 +2320,13 @@ static int btrfs_show_devname(struct seq_file *m, struct dentry *root)
+ * least until the rcu_read_unlock.
+ */
+ rcu_read_lock();
+- cur_devices = fs_info->fs_devices;
+- while (cur_devices) {
+- head = &cur_devices->devices;
+- list_for_each_entry_rcu(dev, head, dev_list) {
+- if (test_bit(BTRFS_DEV_STATE_MISSING, &dev->dev_state))
+- continue;
+- if (!dev->name)
+- continue;
+- if (!first_dev || dev->devid < first_dev->devid)
+- first_dev = dev;
+- }
+- cur_devices = cur_devices->seed;
++ list_for_each_entry_rcu(dev, &fs_info->fs_devices->devices, dev_list) {
++ if (test_bit(BTRFS_DEV_STATE_MISSING, &dev->dev_state))
++ continue;
++ if (!dev->name)
++ continue;
++ if (!first_dev || dev->devid < first_dev->devid)
++ first_dev = dev;
+ }
+
+ if (first_dev)
+diff --git a/fs/btrfs/sysfs.c b/fs/btrfs/sysfs.c
+index a39bff64ff24e..abc4a8fd6df65 100644
+--- a/fs/btrfs/sysfs.c
++++ b/fs/btrfs/sysfs.c
+@@ -1273,7 +1273,9 @@ int btrfs_sysfs_add_devices_dir(struct btrfs_fs_devices *fs_devices,
+ {
+ int error = 0;
+ struct btrfs_device *dev;
++ unsigned int nofs_flag;
+
++ nofs_flag = memalloc_nofs_save();
+ list_for_each_entry(dev, &fs_devices->devices, dev_list) {
+
+ if (one_device && one_device != dev)
+@@ -1301,6 +1303,7 @@ int btrfs_sysfs_add_devices_dir(struct btrfs_fs_devices *fs_devices,
+ break;
+ }
+ }
++ memalloc_nofs_restore(nofs_flag);
+
+ return error;
+ }
+diff --git a/fs/btrfs/transaction.c b/fs/btrfs/transaction.c
+index 96eb313a50801..7253f7a6a1e33 100644
+--- a/fs/btrfs/transaction.c
++++ b/fs/btrfs/transaction.c
+@@ -937,7 +937,10 @@ static int __btrfs_end_transaction(struct btrfs_trans_handle *trans,
+ if (TRANS_ABORTED(trans) ||
+ test_bit(BTRFS_FS_STATE_ERROR, &info->fs_state)) {
+ wake_up_process(info->transaction_kthread);
+- err = -EIO;
++ if (TRANS_ABORTED(trans))
++ err = trans->aborted;
++ else
++ err = -EROFS;
+ }
+
+ kmem_cache_free(btrfs_trans_handle_cachep, trans);
+diff --git a/fs/btrfs/tree-log.c b/fs/btrfs/tree-log.c
+index bdfc421494481..3795fede53ae0 100644
+--- a/fs/btrfs/tree-log.c
++++ b/fs/btrfs/tree-log.c
+@@ -3125,29 +3125,17 @@ int btrfs_sync_log(struct btrfs_trans_handle *trans,
+ btrfs_init_log_ctx(&root_log_ctx, NULL);
+
+ mutex_lock(&log_root_tree->log_mutex);
+- atomic_inc(&log_root_tree->log_batch);
+- atomic_inc(&log_root_tree->log_writers);
+
+ index2 = log_root_tree->log_transid % 2;
+ list_add_tail(&root_log_ctx.list, &log_root_tree->log_ctxs[index2]);
+ root_log_ctx.log_transid = log_root_tree->log_transid;
+
+- mutex_unlock(&log_root_tree->log_mutex);
+-
+- mutex_lock(&log_root_tree->log_mutex);
+-
+ /*
+ * Now we are safe to update the log_root_tree because we're under the
+ * log_mutex, and we're a current writer so we're holding the commit
+ * open until we drop the log_mutex.
+ */
+ ret = update_log_root(trans, log, &new_root_item);
+-
+- if (atomic_dec_and_test(&log_root_tree->log_writers)) {
+- /* atomic_dec_and_test implies a barrier */
+- cond_wake_up_nomb(&log_root_tree->log_writer_wait);
+- }
+-
+ if (ret) {
+ if (!list_empty(&root_log_ctx.list))
+ list_del_init(&root_log_ctx.list);
+@@ -3193,8 +3181,6 @@ int btrfs_sync_log(struct btrfs_trans_handle *trans,
+ root_log_ctx.log_transid - 1);
+ }
+
+- wait_for_writer(log_root_tree);
+-
+ /*
+ * now that we've moved on to the tree of log tree roots,
+ * check the full commit flag again
+@@ -4054,11 +4040,8 @@ static noinline int copy_items(struct btrfs_trans_handle *trans,
+ fs_info->csum_root,
+ ds + cs, ds + cs + cl - 1,
+ &ordered_sums, 0);
+- if (ret) {
+- btrfs_release_path(dst_path);
+- kfree(ins_data);
+- return ret;
+- }
++ if (ret)
++ break;
+ }
+ }
+ }
+@@ -4071,7 +4054,6 @@ static noinline int copy_items(struct btrfs_trans_handle *trans,
+ * we have to do this after the loop above to avoid changing the
+ * log tree while trying to change the log tree.
+ */
+- ret = 0;
+ while (!list_empty(&ordered_sums)) {
+ struct btrfs_ordered_sum *sums = list_entry(ordered_sums.next,
+ struct btrfs_ordered_sum,
+@@ -5151,14 +5133,13 @@ static int btrfs_log_inode(struct btrfs_trans_handle *trans,
+ const loff_t end,
+ struct btrfs_log_ctx *ctx)
+ {
+- struct btrfs_fs_info *fs_info = root->fs_info;
+ struct btrfs_path *path;
+ struct btrfs_path *dst_path;
+ struct btrfs_key min_key;
+ struct btrfs_key max_key;
+ struct btrfs_root *log = root->log_root;
+ int err = 0;
+- int ret;
++ int ret = 0;
+ bool fast_search = false;
+ u64 ino = btrfs_ino(inode);
+ struct extent_map_tree *em_tree = &inode->extent_tree;
+@@ -5194,15 +5175,19 @@ static int btrfs_log_inode(struct btrfs_trans_handle *trans,
+ max_key.offset = (u64)-1;
+
+ /*
+- * Only run delayed items if we are a dir or a new file.
+- * Otherwise commit the delayed inode only, which is needed in
+- * order for the log replay code to mark inodes for link count
+- * fixup (create temporary BTRFS_TREE_LOG_FIXUP_OBJECTID items).
++ * Only run delayed items if we are a directory. We want to make sure
++ * all directory indexes hit the fs/subvolume tree so we can find them
++ * and figure out which index ranges have to be logged.
++ *
++ * Otherwise commit the delayed inode only if the full sync flag is set,
++ * as we want to make sure an up to date version is in the subvolume
++ * tree so copy_inode_items_to_log() / copy_items() can find it and copy
++ * it to the log tree. For a non full sync, we always log the inode item
++ * based on the in-memory struct btrfs_inode which is always up to date.
+ */
+- if (S_ISDIR(inode->vfs_inode.i_mode) ||
+- inode->generation > fs_info->last_trans_committed)
++ if (S_ISDIR(inode->vfs_inode.i_mode))
+ ret = btrfs_commit_inode_delayed_items(trans, inode);
+- else
++ else if (test_bit(BTRFS_INODE_NEEDS_FULL_SYNC, &inode->runtime_flags))
+ ret = btrfs_commit_inode_delayed_inode(inode);
+
+ if (ret) {
+diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
+index 45cf455f906dd..ac80297bcafe7 100644
+--- a/fs/btrfs/volumes.c
++++ b/fs/btrfs/volumes.c
+@@ -245,7 +245,9 @@ static int __btrfs_map_block(struct btrfs_fs_info *fs_info,
+ *
+ * global::fs_devs - add, remove, updates to the global list
+ *
+- * does not protect: manipulation of the fs_devices::devices list!
++ * does not protect: manipulation of the fs_devices::devices list in general
++ * but in mount context it could be used to exclude list modifications by eg.
++ * scan ioctl
+ *
+ * btrfs_device::name - renames (write side), read is RCU
+ *
+@@ -258,6 +260,9 @@ static int __btrfs_map_block(struct btrfs_fs_info *fs_info,
+ * may be used to exclude some operations from running concurrently without any
+ * modifications to the list (see write_all_supers)
+ *
++ * Is not required at mount and close times, because our device list is
++ * protected by the uuid_mutex at that point.
++ *
+ * balance_mutex
+ * -------------
+ * protects balance structures (status, state) and context accessed from
+@@ -603,6 +608,11 @@ static int btrfs_free_stale_devices(const char *path,
+ return ret;
+ }
+
++/*
++ * This is only used on mount, and we are protected from competing things
++ * messing with our fs_devices by the uuid_mutex, thus we do not need the
++ * fs_devices->device_list_mutex here.
++ */
+ static int btrfs_open_one_device(struct btrfs_fs_devices *fs_devices,
+ struct btrfs_device *device, fmode_t flags,
+ void *holder)
+@@ -1232,8 +1242,14 @@ int btrfs_open_devices(struct btrfs_fs_devices *fs_devices,
+ int ret;
+
+ lockdep_assert_held(&uuid_mutex);
++ /*
++ * The device_list_mutex cannot be taken here in case opening the
++ * underlying device takes further locks like bd_mutex.
++ *
++ * We also don't need the lock here as this is called during mount and
++ * exclusion is provided by uuid_mutex
++ */
+
+- mutex_lock(&fs_devices->device_list_mutex);
+ if (fs_devices->opened) {
+ fs_devices->opened++;
+ ret = 0;
+@@ -1241,7 +1257,6 @@ int btrfs_open_devices(struct btrfs_fs_devices *fs_devices,
+ list_sort(NULL, &fs_devices->devices, devid_cmp);
+ ret = open_fs_devices(fs_devices, flags, holder);
+ }
+- mutex_unlock(&fs_devices->device_list_mutex);
+
+ return ret;
+ }
+@@ -3235,7 +3250,7 @@ static int del_balance_item(struct btrfs_fs_info *fs_info)
+ if (!path)
+ return -ENOMEM;
+
+- trans = btrfs_start_transaction(root, 0);
++ trans = btrfs_start_transaction_fallback_global_rsv(root, 0);
+ if (IS_ERR(trans)) {
+ btrfs_free_path(path);
+ return PTR_ERR(trans);
+@@ -4139,7 +4154,22 @@ int btrfs_balance(struct btrfs_fs_info *fs_info,
+ mutex_lock(&fs_info->balance_mutex);
+ if (ret == -ECANCELED && atomic_read(&fs_info->balance_pause_req))
+ btrfs_info(fs_info, "balance: paused");
+- else if (ret == -ECANCELED && atomic_read(&fs_info->balance_cancel_req))
++ /*
++ * Balance can be canceled by:
++ *
++ * - Regular cancel request
++ * Then ret == -ECANCELED and balance_cancel_req > 0
++ *
++ * - Fatal signal to "btrfs" process
++ * Either the signal caught by wait_reserve_ticket() and callers
++ * got -EINTR, or caught by btrfs_should_cancel_balance() and
++ * got -ECANCELED.
++ * Either way, in this case balance_cancel_req = 0, and
++ * ret == -EINTR or ret == -ECANCELED.
++ *
++ * So here we only check the return value to catch canceled balance.
++ */
++ else if (ret == -ECANCELED || ret == -EINTR)
+ btrfs_info(fs_info, "balance: canceled");
+ else
+ btrfs_info(fs_info, "balance: ended with status: %d", ret);
+@@ -4694,6 +4724,10 @@ again:
+ }
+
+ mutex_lock(&fs_info->chunk_mutex);
++ /* Clear all state bits beyond the shrunk device size */
++ clear_extent_bits(&device->alloc_state, new_size, (u64)-1,
++ CHUNK_STATE_MASK);
++
+ btrfs_device_set_disk_total_bytes(device, new_size);
+ if (list_empty(&device->post_commit_list))
+ list_add_tail(&device->post_commit_list,
+@@ -7053,7 +7087,6 @@ int btrfs_read_chunk_tree(struct btrfs_fs_info *fs_info)
+ * otherwise we don't need it.
+ */
+ mutex_lock(&uuid_mutex);
+- mutex_lock(&fs_info->chunk_mutex);
+
+ /*
+ * It is possible for mount and umount to race in such a way that
+@@ -7098,7 +7131,9 @@ int btrfs_read_chunk_tree(struct btrfs_fs_info *fs_info)
+ } else if (found_key.type == BTRFS_CHUNK_ITEM_KEY) {
+ struct btrfs_chunk *chunk;
+ chunk = btrfs_item_ptr(leaf, slot, struct btrfs_chunk);
++ mutex_lock(&fs_info->chunk_mutex);
+ ret = read_one_chunk(&found_key, leaf, chunk);
++ mutex_unlock(&fs_info->chunk_mutex);
+ if (ret)
+ goto error;
+ }
+@@ -7128,7 +7163,6 @@ int btrfs_read_chunk_tree(struct btrfs_fs_info *fs_info)
+ }
+ ret = 0;
+ error:
+- mutex_unlock(&fs_info->chunk_mutex);
+ mutex_unlock(&uuid_mutex);
+
+ btrfs_free_path(path);
+diff --git a/fs/ceph/dir.c b/fs/ceph/dir.c
+index 4c4202c93b715..775fa63afdfd8 100644
+--- a/fs/ceph/dir.c
++++ b/fs/ceph/dir.c
+@@ -924,6 +924,10 @@ static int ceph_symlink(struct inode *dir, struct dentry *dentry,
+ req->r_num_caps = 2;
+ req->r_dentry_drop = CEPH_CAP_FILE_SHARED | CEPH_CAP_AUTH_EXCL;
+ req->r_dentry_unless = CEPH_CAP_FILE_EXCL;
++ if (as_ctx.pagelist) {
++ req->r_pagelist = as_ctx.pagelist;
++ as_ctx.pagelist = NULL;
++ }
+ err = ceph_mdsc_do_request(mdsc, dir, req);
+ if (!err && !req->r_reply_info.head->is_dentry)
+ err = ceph_handle_notrace_create(dir, dentry);
+diff --git a/fs/ceph/mds_client.c b/fs/ceph/mds_client.c
+index 7c63abf5bea91..95272ae36b058 100644
+--- a/fs/ceph/mds_client.c
++++ b/fs/ceph/mds_client.c
+@@ -3270,8 +3270,10 @@ static void handle_session(struct ceph_mds_session *session,
+ goto bad;
+ /* version >= 3, feature bits */
+ ceph_decode_32_safe(&p, end, len, bad);
+- ceph_decode_64_safe(&p, end, features, bad);
+- p += len - sizeof(features);
++ if (len) {
++ ceph_decode_64_safe(&p, end, features, bad);
++ p += len - sizeof(features);
++ }
+ }
+
+ mutex_lock(&mdsc->mutex);
+diff --git a/fs/cifs/smb2misc.c b/fs/cifs/smb2misc.c
+index 44fca24d993e2..c617091b02bf6 100644
+--- a/fs/cifs/smb2misc.c
++++ b/fs/cifs/smb2misc.c
+@@ -508,15 +508,31 @@ cifs_ses_oplock_break(struct work_struct *work)
+ kfree(lw);
+ }
+
++static void
++smb2_queue_pending_open_break(struct tcon_link *tlink, __u8 *lease_key,
++ __le32 new_lease_state)
++{
++ struct smb2_lease_break_work *lw;
++
++ lw = kmalloc(sizeof(struct smb2_lease_break_work), GFP_KERNEL);
++ if (!lw) {
++ cifs_put_tlink(tlink);
++ return;
++ }
++
++ INIT_WORK(&lw->lease_break, cifs_ses_oplock_break);
++ lw->tlink = tlink;
++ lw->lease_state = new_lease_state;
++ memcpy(lw->lease_key, lease_key, SMB2_LEASE_KEY_SIZE);
++ queue_work(cifsiod_wq, &lw->lease_break);
++}
++
+ static bool
+-smb2_tcon_has_lease(struct cifs_tcon *tcon, struct smb2_lease_break *rsp,
+- struct smb2_lease_break_work *lw)
++smb2_tcon_has_lease(struct cifs_tcon *tcon, struct smb2_lease_break *rsp)
+ {
+- bool found;
+ __u8 lease_state;
+ struct list_head *tmp;
+ struct cifsFileInfo *cfile;
+- struct cifs_pending_open *open;
+ struct cifsInodeInfo *cinode;
+ int ack_req = le32_to_cpu(rsp->Flags &
+ SMB2_NOTIFY_BREAK_LEASE_FLAG_ACK_REQUIRED);
+@@ -546,22 +562,29 @@ smb2_tcon_has_lease(struct cifs_tcon *tcon, struct smb2_lease_break *rsp,
+ cfile->oplock_level = lease_state;
+
+ cifs_queue_oplock_break(cfile);
+- kfree(lw);
+ return true;
+ }
+
+- found = false;
++ return false;
++}
++
++static struct cifs_pending_open *
++smb2_tcon_find_pending_open_lease(struct cifs_tcon *tcon,
++ struct smb2_lease_break *rsp)
++{
++ __u8 lease_state = le32_to_cpu(rsp->NewLeaseState);
++ int ack_req = le32_to_cpu(rsp->Flags &
++ SMB2_NOTIFY_BREAK_LEASE_FLAG_ACK_REQUIRED);
++ struct cifs_pending_open *open;
++ struct cifs_pending_open *found = NULL;
++
+ list_for_each_entry(open, &tcon->pending_opens, olist) {
+ if (memcmp(open->lease_key, rsp->LeaseKey,
+ SMB2_LEASE_KEY_SIZE))
+ continue;
+
+ if (!found && ack_req) {
+- found = true;
+- memcpy(lw->lease_key, open->lease_key,
+- SMB2_LEASE_KEY_SIZE);
+- lw->tlink = cifs_get_tlink(open->tlink);
+- queue_work(cifsiod_wq, &lw->lease_break);
++ found = open;
+ }
+
+ cifs_dbg(FYI, "found in the pending open list\n");
+@@ -582,14 +605,7 @@ smb2_is_valid_lease_break(char *buffer)
+ struct TCP_Server_Info *server;
+ struct cifs_ses *ses;
+ struct cifs_tcon *tcon;
+- struct smb2_lease_break_work *lw;
+-
+- lw = kmalloc(sizeof(struct smb2_lease_break_work), GFP_KERNEL);
+- if (!lw)
+- return false;
+-
+- INIT_WORK(&lw->lease_break, cifs_ses_oplock_break);
+- lw->lease_state = rsp->NewLeaseState;
++ struct cifs_pending_open *open;
+
+ cifs_dbg(FYI, "Checking for lease break\n");
+
+@@ -607,11 +623,27 @@ smb2_is_valid_lease_break(char *buffer)
+ spin_lock(&tcon->open_file_lock);
+ cifs_stats_inc(
+ &tcon->stats.cifs_stats.num_oplock_brks);
+- if (smb2_tcon_has_lease(tcon, rsp, lw)) {
++ if (smb2_tcon_has_lease(tcon, rsp)) {
+ spin_unlock(&tcon->open_file_lock);
+ spin_unlock(&cifs_tcp_ses_lock);
+ return true;
+ }
++ open = smb2_tcon_find_pending_open_lease(tcon,
++ rsp);
++ if (open) {
++ __u8 lease_key[SMB2_LEASE_KEY_SIZE];
++ struct tcon_link *tlink;
++
++ tlink = cifs_get_tlink(open->tlink);
++ memcpy(lease_key, open->lease_key,
++ SMB2_LEASE_KEY_SIZE);
++ spin_unlock(&tcon->open_file_lock);
++ spin_unlock(&cifs_tcp_ses_lock);
++ smb2_queue_pending_open_break(tlink,
++ lease_key,
++ rsp->NewLeaseState);
++ return true;
++ }
+ spin_unlock(&tcon->open_file_lock);
+
+ if (tcon->crfid.is_valid &&
+@@ -629,7 +661,6 @@ smb2_is_valid_lease_break(char *buffer)
+ }
+ }
+ spin_unlock(&cifs_tcp_ses_lock);
+- kfree(lw);
+ cifs_dbg(FYI, "Can not process lease break - no lease matched\n");
+ return false;
+ }
+diff --git a/fs/cifs/smb2pdu.c b/fs/cifs/smb2pdu.c
+index cdad4d933bce0..cac1eaa2a7183 100644
+--- a/fs/cifs/smb2pdu.c
++++ b/fs/cifs/smb2pdu.c
+@@ -1347,6 +1347,8 @@ SMB2_auth_kerberos(struct SMB2_sess_data *sess_data)
+ spnego_key = cifs_get_spnego_key(ses);
+ if (IS_ERR(spnego_key)) {
+ rc = PTR_ERR(spnego_key);
++ if (rc == -ENOKEY)
++ cifs_dbg(VFS, "Verify user has a krb5 ticket and keyutils is installed\n");
+ spnego_key = NULL;
+ goto out;
+ }
+diff --git a/fs/ext2/ialloc.c b/fs/ext2/ialloc.c
+index fda7d3f5b4be5..432c3febea6df 100644
+--- a/fs/ext2/ialloc.c
++++ b/fs/ext2/ialloc.c
+@@ -80,6 +80,7 @@ static void ext2_release_inode(struct super_block *sb, int group, int dir)
+ if (dir)
+ le16_add_cpu(&desc->bg_used_dirs_count, -1);
+ spin_unlock(sb_bgl_lock(EXT2_SB(sb), group));
++ percpu_counter_inc(&EXT2_SB(sb)->s_freeinodes_counter);
+ if (dir)
+ percpu_counter_dec(&EXT2_SB(sb)->s_dirs_counter);
+ mark_buffer_dirty(bh);
+@@ -528,7 +529,7 @@ got:
+ goto fail;
+ }
+
+- percpu_counter_add(&sbi->s_freeinodes_counter, -1);
++ percpu_counter_dec(&sbi->s_freeinodes_counter);
+ if (S_ISDIR(mode))
+ percpu_counter_inc(&sbi->s_dirs_counter);
+
+diff --git a/fs/f2fs/compress.c b/fs/f2fs/compress.c
+index a5b2e72174bb1..527d50edcb956 100644
+--- a/fs/f2fs/compress.c
++++ b/fs/f2fs/compress.c
+@@ -1250,6 +1250,8 @@ int f2fs_write_multi_pages(struct compress_ctx *cc,
+ err = f2fs_write_compressed_pages(cc, submitted,
+ wbc, io_type);
+ cops->destroy_compress_ctx(cc);
++ kfree(cc->cpages);
++ cc->cpages = NULL;
+ if (!err)
+ return 0;
+ f2fs_bug_on(F2FS_I_SB(cc->inode), err != -EAGAIN);
+diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
+index 10491ae1cb850..329afa55a581c 100644
+--- a/fs/f2fs/data.c
++++ b/fs/f2fs/data.c
+@@ -3353,6 +3353,10 @@ static int f2fs_write_end(struct file *file,
+ if (f2fs_compressed_file(inode) && fsdata) {
+ f2fs_compress_write_end(inode, fsdata, page->index, copied);
+ f2fs_update_time(F2FS_I_SB(inode), REQ_TIME);
++
++ if (pos + copied > i_size_read(inode) &&
++ !f2fs_verity_in_progress(inode))
++ f2fs_i_size_write(inode, pos + copied);
+ return copied;
+ }
+ #endif
+diff --git a/fs/gfs2/bmap.c b/fs/gfs2/bmap.c
+index 6306eaae378b2..6d2ea788d0a17 100644
+--- a/fs/gfs2/bmap.c
++++ b/fs/gfs2/bmap.c
+@@ -1351,9 +1351,15 @@ int gfs2_extent_map(struct inode *inode, u64 lblock, int *new, u64 *dblock, unsi
+ return ret;
+ }
+
++/*
++ * NOTE: Never call gfs2_block_zero_range with an open transaction because it
++ * uses iomap write to perform its actions, which begin their own transactions
++ * (iomap_begin, page_prepare, etc.)
++ */
+ static int gfs2_block_zero_range(struct inode *inode, loff_t from,
+ unsigned int length)
+ {
++ BUG_ON(current->journal_info);
+ return iomap_zero_range(inode, from, length, NULL, &gfs2_iomap_ops);
+ }
+
+@@ -1414,6 +1420,16 @@ static int trunc_start(struct inode *inode, u64 newsize)
+ u64 oldsize = inode->i_size;
+ int error;
+
++ if (!gfs2_is_stuffed(ip)) {
++ unsigned int blocksize = i_blocksize(inode);
++ unsigned int offs = newsize & (blocksize - 1);
++ if (offs) {
++ error = gfs2_block_zero_range(inode, newsize,
++ blocksize - offs);
++ if (error)
++ return error;
++ }
++ }
+ if (journaled)
+ error = gfs2_trans_begin(sdp, RES_DINODE + RES_JDATA, GFS2_JTRUNC_REVOKES);
+ else
+@@ -1427,19 +1443,10 @@ static int trunc_start(struct inode *inode, u64 newsize)
+
+ gfs2_trans_add_meta(ip->i_gl, dibh);
+
+- if (gfs2_is_stuffed(ip)) {
++ if (gfs2_is_stuffed(ip))
+ gfs2_buffer_clear_tail(dibh, sizeof(struct gfs2_dinode) + newsize);
+- } else {
+- unsigned int blocksize = i_blocksize(inode);
+- unsigned int offs = newsize & (blocksize - 1);
+- if (offs) {
+- error = gfs2_block_zero_range(inode, newsize,
+- blocksize - offs);
+- if (error)
+- goto out;
+- }
++ else
+ ip->i_diskflags |= GFS2_DIF_TRUNC_IN_PROG;
+- }
+
+ i_size_write(inode, newsize);
+ ip->i_inode.i_mtime = ip->i_inode.i_ctime = current_time(&ip->i_inode);
+@@ -2448,25 +2455,7 @@ int __gfs2_punch_hole(struct file *file, loff_t offset, loff_t length)
+ loff_t start, end;
+ int error;
+
+- start = round_down(offset, blocksize);
+- end = round_up(offset + length, blocksize) - 1;
+- error = filemap_write_and_wait_range(inode->i_mapping, start, end);
+- if (error)
+- return error;
+-
+- if (gfs2_is_jdata(ip))
+- error = gfs2_trans_begin(sdp, RES_DINODE + 2 * RES_JDATA,
+- GFS2_JTRUNC_REVOKES);
+- else
+- error = gfs2_trans_begin(sdp, RES_DINODE, 0);
+- if (error)
+- return error;
+-
+- if (gfs2_is_stuffed(ip)) {
+- error = stuffed_zero_range(inode, offset, length);
+- if (error)
+- goto out;
+- } else {
++ if (!gfs2_is_stuffed(ip)) {
+ unsigned int start_off, end_len;
+
+ start_off = offset & (blocksize - 1);
+@@ -2489,6 +2478,26 @@ int __gfs2_punch_hole(struct file *file, loff_t offset, loff_t length)
+ }
+ }
+
++ start = round_down(offset, blocksize);
++ end = round_up(offset + length, blocksize) - 1;
++ error = filemap_write_and_wait_range(inode->i_mapping, start, end);
++ if (error)
++ return error;
++
++ if (gfs2_is_jdata(ip))
++ error = gfs2_trans_begin(sdp, RES_DINODE + 2 * RES_JDATA,
++ GFS2_JTRUNC_REVOKES);
++ else
++ error = gfs2_trans_begin(sdp, RES_DINODE, 0);
++ if (error)
++ return error;
++
++ if (gfs2_is_stuffed(ip)) {
++ error = stuffed_zero_range(inode, offset, length);
++ if (error)
++ goto out;
++ }
++
+ if (gfs2_is_jdata(ip)) {
+ BUG_ON(!current->journal_info);
+ gfs2_journaled_truncate_range(inode, offset, length);
+diff --git a/fs/minix/inode.c b/fs/minix/inode.c
+index 0dd929346f3f3..7b09a9158e401 100644
+--- a/fs/minix/inode.c
++++ b/fs/minix/inode.c
+@@ -150,8 +150,10 @@ static int minix_remount (struct super_block * sb, int * flags, char * data)
+ return 0;
+ }
+
+-static bool minix_check_superblock(struct minix_sb_info *sbi)
++static bool minix_check_superblock(struct super_block *sb)
+ {
++ struct minix_sb_info *sbi = minix_sb(sb);
++
+ if (sbi->s_imap_blocks == 0 || sbi->s_zmap_blocks == 0)
+ return false;
+
+@@ -161,7 +163,7 @@ static bool minix_check_superblock(struct minix_sb_info *sbi)
+ * of indirect blocks which places the limit well above U32_MAX.
+ */
+ if (sbi->s_version == MINIX_V1 &&
+- sbi->s_max_size > (7 + 512 + 512*512) * BLOCK_SIZE)
++ sb->s_maxbytes > (7 + 512 + 512*512) * BLOCK_SIZE)
+ return false;
+
+ return true;
+@@ -202,7 +204,7 @@ static int minix_fill_super(struct super_block *s, void *data, int silent)
+ sbi->s_zmap_blocks = ms->s_zmap_blocks;
+ sbi->s_firstdatazone = ms->s_firstdatazone;
+ sbi->s_log_zone_size = ms->s_log_zone_size;
+- sbi->s_max_size = ms->s_max_size;
++ s->s_maxbytes = ms->s_max_size;
+ s->s_magic = ms->s_magic;
+ if (s->s_magic == MINIX_SUPER_MAGIC) {
+ sbi->s_version = MINIX_V1;
+@@ -233,7 +235,7 @@ static int minix_fill_super(struct super_block *s, void *data, int silent)
+ sbi->s_zmap_blocks = m3s->s_zmap_blocks;
+ sbi->s_firstdatazone = m3s->s_firstdatazone;
+ sbi->s_log_zone_size = m3s->s_log_zone_size;
+- sbi->s_max_size = m3s->s_max_size;
++ s->s_maxbytes = m3s->s_max_size;
+ sbi->s_ninodes = m3s->s_ninodes;
+ sbi->s_nzones = m3s->s_zones;
+ sbi->s_dirsize = 64;
+@@ -245,7 +247,7 @@ static int minix_fill_super(struct super_block *s, void *data, int silent)
+ } else
+ goto out_no_fs;
+
+- if (!minix_check_superblock(sbi))
++ if (!minix_check_superblock(s))
+ goto out_illegal_sb;
+
+ /*
+diff --git a/fs/minix/itree_v1.c b/fs/minix/itree_v1.c
+index 046cc96ee7adb..1fed906042aa8 100644
+--- a/fs/minix/itree_v1.c
++++ b/fs/minix/itree_v1.c
+@@ -29,12 +29,12 @@ static int block_to_path(struct inode * inode, long block, int offsets[DEPTH])
+ if (block < 0) {
+ printk("MINIX-fs: block_to_path: block %ld < 0 on dev %pg\n",
+ block, inode->i_sb->s_bdev);
+- } else if (block >= (minix_sb(inode->i_sb)->s_max_size/BLOCK_SIZE)) {
+- if (printk_ratelimit())
+- printk("MINIX-fs: block_to_path: "
+- "block %ld too big on dev %pg\n",
+- block, inode->i_sb->s_bdev);
+- } else if (block < 7) {
++ return 0;
++ }
++ if ((u64)block * BLOCK_SIZE >= inode->i_sb->s_maxbytes)
++ return 0;
++
++ if (block < 7) {
+ offsets[n++] = block;
+ } else if ((block -= 7) < 512) {
+ offsets[n++] = 7;
+diff --git a/fs/minix/itree_v2.c b/fs/minix/itree_v2.c
+index f7fc7eccccccd..9d00f31a2d9d1 100644
+--- a/fs/minix/itree_v2.c
++++ b/fs/minix/itree_v2.c
+@@ -32,13 +32,12 @@ static int block_to_path(struct inode * inode, long block, int offsets[DEPTH])
+ if (block < 0) {
+ printk("MINIX-fs: block_to_path: block %ld < 0 on dev %pg\n",
+ block, sb->s_bdev);
+- } else if ((u64)block * (u64)sb->s_blocksize >=
+- minix_sb(sb)->s_max_size) {
+- if (printk_ratelimit())
+- printk("MINIX-fs: block_to_path: "
+- "block %ld too big on dev %pg\n",
+- block, sb->s_bdev);
+- } else if (block < DIRCOUNT) {
++ return 0;
++ }
++ if ((u64)block * (u64)sb->s_blocksize >= sb->s_maxbytes)
++ return 0;
++
++ if (block < DIRCOUNT) {
+ offsets[n++] = block;
+ } else if ((block -= DIRCOUNT) < INDIRCOUNT(sb)) {
+ offsets[n++] = DIRCOUNT;
+diff --git a/fs/minix/minix.h b/fs/minix/minix.h
+index df081e8afcc3c..168d45d3de73e 100644
+--- a/fs/minix/minix.h
++++ b/fs/minix/minix.h
+@@ -32,7 +32,6 @@ struct minix_sb_info {
+ unsigned long s_zmap_blocks;
+ unsigned long s_firstdatazone;
+ unsigned long s_log_zone_size;
+- unsigned long s_max_size;
+ int s_dirsize;
+ int s_namelen;
+ struct buffer_head ** s_imap;
+diff --git a/fs/nfs/file.c b/fs/nfs/file.c
+index f96367a2463e3..63940a7a70be1 100644
+--- a/fs/nfs/file.c
++++ b/fs/nfs/file.c
+@@ -140,6 +140,7 @@ static int
+ nfs_file_flush(struct file *file, fl_owner_t id)
+ {
+ struct inode *inode = file_inode(file);
++ errseq_t since;
+
+ dprintk("NFS: flush(%pD2)\n", file);
+
+@@ -148,7 +149,9 @@ nfs_file_flush(struct file *file, fl_owner_t id)
+ return 0;
+
+ /* Flush writes to the server and return any errors */
+- return nfs_wb_all(inode);
++ since = filemap_sample_wb_err(file->f_mapping);
++ nfs_wb_all(inode);
++ return filemap_check_wb_err(file->f_mapping, since);
+ }
+
+ ssize_t
+@@ -587,12 +590,14 @@ static const struct vm_operations_struct nfs_file_vm_ops = {
+ .page_mkwrite = nfs_vm_page_mkwrite,
+ };
+
+-static int nfs_need_check_write(struct file *filp, struct inode *inode)
++static int nfs_need_check_write(struct file *filp, struct inode *inode,
++ int error)
+ {
+ struct nfs_open_context *ctx;
+
+ ctx = nfs_file_open_context(filp);
+- if (nfs_ctx_key_to_expire(ctx, inode))
++ if (nfs_error_is_fatal_on_server(error) ||
++ nfs_ctx_key_to_expire(ctx, inode))
+ return 1;
+ return 0;
+ }
+@@ -603,6 +608,8 @@ ssize_t nfs_file_write(struct kiocb *iocb, struct iov_iter *from)
+ struct inode *inode = file_inode(file);
+ unsigned long written = 0;
+ ssize_t result;
++ errseq_t since;
++ int error;
+
+ result = nfs_key_timeout_notify(file, inode);
+ if (result)
+@@ -627,6 +634,7 @@ ssize_t nfs_file_write(struct kiocb *iocb, struct iov_iter *from)
+ if (iocb->ki_pos > i_size_read(inode))
+ nfs_revalidate_mapping(inode, file->f_mapping);
+
++ since = filemap_sample_wb_err(file->f_mapping);
+ nfs_start_io_write(inode);
+ result = generic_write_checks(iocb, from);
+ if (result > 0) {
+@@ -645,7 +653,8 @@ ssize_t nfs_file_write(struct kiocb *iocb, struct iov_iter *from)
+ goto out;
+
+ /* Return error values */
+- if (nfs_need_check_write(file, inode)) {
++ error = filemap_check_wb_err(file->f_mapping, since);
++ if (nfs_need_check_write(file, inode, error)) {
+ int err = nfs_wb_all(inode);
+ if (err < 0)
+ result = err;
+diff --git a/fs/nfs/flexfilelayout/flexfilelayout.c b/fs/nfs/flexfilelayout/flexfilelayout.c
+index de03e440b7eef..048272d60a165 100644
+--- a/fs/nfs/flexfilelayout/flexfilelayout.c
++++ b/fs/nfs/flexfilelayout/flexfilelayout.c
+@@ -790,6 +790,19 @@ ff_layout_choose_best_ds_for_read(struct pnfs_layout_segment *lseg,
+ return ff_layout_choose_any_ds_for_read(lseg, start_idx, best_idx);
+ }
+
++static struct nfs4_pnfs_ds *
++ff_layout_get_ds_for_read(struct nfs_pageio_descriptor *pgio, int *best_idx)
++{
++ struct pnfs_layout_segment *lseg = pgio->pg_lseg;
++ struct nfs4_pnfs_ds *ds;
++
++ ds = ff_layout_choose_best_ds_for_read(lseg, pgio->pg_mirror_idx,
++ best_idx);
++ if (ds || !pgio->pg_mirror_idx)
++ return ds;
++ return ff_layout_choose_best_ds_for_read(lseg, 0, best_idx);
++}
++
+ static void
+ ff_layout_pg_get_read(struct nfs_pageio_descriptor *pgio,
+ struct nfs_page *req,
+@@ -840,7 +853,7 @@ retry:
+ goto out_nolseg;
+ }
+
+- ds = ff_layout_choose_best_ds_for_read(pgio->pg_lseg, 0, &ds_idx);
++ ds = ff_layout_get_ds_for_read(pgio, &ds_idx);
+ if (!ds) {
+ if (!ff_layout_no_fallback_to_mds(pgio->pg_lseg))
+ goto out_mds;
+@@ -1028,11 +1041,24 @@ static void ff_layout_reset_write(struct nfs_pgio_header *hdr, bool retry_pnfs)
+ }
+ }
+
++static void ff_layout_resend_pnfs_read(struct nfs_pgio_header *hdr)
++{
++ u32 idx = hdr->pgio_mirror_idx + 1;
++ int new_idx = 0;
++
++ if (ff_layout_choose_any_ds_for_read(hdr->lseg, idx + 1, &new_idx))
++ ff_layout_send_layouterror(hdr->lseg);
++ else
++ pnfs_error_mark_layout_for_return(hdr->inode, hdr->lseg);
++ pnfs_read_resend_pnfs(hdr, new_idx);
++}
++
+ static void ff_layout_reset_read(struct nfs_pgio_header *hdr)
+ {
+ struct rpc_task *task = &hdr->task;
+
+ pnfs_layoutcommit_inode(hdr->inode, false);
++ pnfs_error_mark_layout_for_return(hdr->inode, hdr->lseg);
+
+ if (!test_and_set_bit(NFS_IOHDR_REDO, &hdr->flags)) {
+ dprintk("%s Reset task %5u for i/o through MDS "
+@@ -1234,6 +1260,12 @@ static void ff_layout_io_track_ds_error(struct pnfs_layout_segment *lseg,
+ break;
+ case NFS4ERR_NXIO:
+ ff_layout_mark_ds_unreachable(lseg, idx);
++ /*
++ * Don't return the layout if this is a read and we still
++ * have layouts to try
++ */
++ if (opnum == OP_READ)
++ break;
+ /* Fallthrough */
+ default:
+ pnfs_error_mark_layout_for_return(lseg->pls_layout->plh_inode,
+@@ -1247,7 +1279,6 @@ static void ff_layout_io_track_ds_error(struct pnfs_layout_segment *lseg,
+ static int ff_layout_read_done_cb(struct rpc_task *task,
+ struct nfs_pgio_header *hdr)
+ {
+- int new_idx = hdr->pgio_mirror_idx;
+ int err;
+
+ if (task->tk_status < 0) {
+@@ -1267,10 +1298,6 @@ static int ff_layout_read_done_cb(struct rpc_task *task,
+ clear_bit(NFS_IOHDR_RESEND_MDS, &hdr->flags);
+ switch (err) {
+ case -NFS4ERR_RESET_TO_PNFS:
+- if (ff_layout_choose_best_ds_for_read(hdr->lseg,
+- hdr->pgio_mirror_idx + 1,
+- &new_idx))
+- goto out_layouterror;
+ set_bit(NFS_IOHDR_RESEND_PNFS, &hdr->flags);
+ return task->tk_status;
+ case -NFS4ERR_RESET_TO_MDS:
+@@ -1281,10 +1308,6 @@ static int ff_layout_read_done_cb(struct rpc_task *task,
+ }
+
+ return 0;
+-out_layouterror:
+- ff_layout_read_record_layoutstats_done(task, hdr);
+- ff_layout_send_layouterror(hdr->lseg);
+- hdr->pgio_mirror_idx = new_idx;
+ out_eagain:
+ rpc_restart_call_prepare(task);
+ return -EAGAIN;
+@@ -1411,10 +1434,9 @@ static void ff_layout_read_release(void *data)
+ struct nfs_pgio_header *hdr = data;
+
+ ff_layout_read_record_layoutstats_done(&hdr->task, hdr);
+- if (test_bit(NFS_IOHDR_RESEND_PNFS, &hdr->flags)) {
+- ff_layout_send_layouterror(hdr->lseg);
+- pnfs_read_resend_pnfs(hdr);
+- } else if (test_bit(NFS_IOHDR_RESEND_MDS, &hdr->flags))
++ if (test_bit(NFS_IOHDR_RESEND_PNFS, &hdr->flags))
++ ff_layout_resend_pnfs_read(hdr);
++ else if (test_bit(NFS_IOHDR_RESEND_MDS, &hdr->flags))
+ ff_layout_reset_read(hdr);
+ pnfs_generic_rw_release(data);
+ }
+diff --git a/fs/nfs/nfs4file.c b/fs/nfs/nfs4file.c
+index 8e5d6223ddd35..a339707654673 100644
+--- a/fs/nfs/nfs4file.c
++++ b/fs/nfs/nfs4file.c
+@@ -110,6 +110,7 @@ static int
+ nfs4_file_flush(struct file *file, fl_owner_t id)
+ {
+ struct inode *inode = file_inode(file);
++ errseq_t since;
+
+ dprintk("NFS: flush(%pD2)\n", file);
+
+@@ -125,7 +126,9 @@ nfs4_file_flush(struct file *file, fl_owner_t id)
+ return filemap_fdatawrite(file->f_mapping);
+
+ /* Flush writes to the server and return any errors */
+- return nfs_wb_all(inode);
++ since = filemap_sample_wb_err(file->f_mapping);
++ nfs_wb_all(inode);
++ return filemap_check_wb_err(file->f_mapping, since);
+ }
+
+ #ifdef CONFIG_NFS_V4_2
+diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
+index 2e2dac29a9e91..45e0585e0667c 100644
+--- a/fs/nfs/nfs4proc.c
++++ b/fs/nfs/nfs4proc.c
+@@ -5845,8 +5845,6 @@ static int _nfs4_get_security_label(struct inode *inode, void *buf,
+ return ret;
+ if (!(fattr.valid & NFS_ATTR_FATTR_V4_SECURITY_LABEL))
+ return -ENOENT;
+- if (buflen < label.len)
+- return -ERANGE;
+ return 0;
+ }
+
+diff --git a/fs/nfs/nfs4xdr.c b/fs/nfs/nfs4xdr.c
+index 47817ef0aadb1..4e0d8a3b89b67 100644
+--- a/fs/nfs/nfs4xdr.c
++++ b/fs/nfs/nfs4xdr.c
+@@ -4166,7 +4166,11 @@ static int decode_attr_security_label(struct xdr_stream *xdr, uint32_t *bitmap,
+ return -EIO;
+ if (len < NFS4_MAXLABELLEN) {
+ if (label) {
+- memcpy(label->label, p, len);
++ if (label->len) {
++ if (label->len < len)
++ return -ERANGE;
++ memcpy(label->label, p, len);
++ }
+ label->len = len;
+ label->pi = pi;
+ label->lfs = lfs;
+diff --git a/fs/nfs/pnfs.c b/fs/nfs/pnfs.c
+index d61dac48dff50..75e988caf3cd7 100644
+--- a/fs/nfs/pnfs.c
++++ b/fs/nfs/pnfs.c
+@@ -2939,7 +2939,8 @@ pnfs_try_to_read_data(struct nfs_pgio_header *hdr,
+ }
+
+ /* Resend all requests through pnfs. */
+-void pnfs_read_resend_pnfs(struct nfs_pgio_header *hdr)
++void pnfs_read_resend_pnfs(struct nfs_pgio_header *hdr,
++ unsigned int mirror_idx)
+ {
+ struct nfs_pageio_descriptor pgio;
+
+@@ -2950,6 +2951,7 @@ void pnfs_read_resend_pnfs(struct nfs_pgio_header *hdr)
+
+ nfs_pageio_init_read(&pgio, hdr->inode, false,
+ hdr->completion_ops);
++ pgio.pg_mirror_idx = mirror_idx;
+ hdr->task.tk_status = nfs_pageio_resend(&pgio, hdr);
+ }
+ }
+diff --git a/fs/nfs/pnfs.h b/fs/nfs/pnfs.h
+index 8e0ada581b92e..2661c44c62db4 100644
+--- a/fs/nfs/pnfs.h
++++ b/fs/nfs/pnfs.h
+@@ -311,7 +311,7 @@ int _pnfs_return_layout(struct inode *);
+ int pnfs_commit_and_return_layout(struct inode *);
+ void pnfs_ld_write_done(struct nfs_pgio_header *);
+ void pnfs_ld_read_done(struct nfs_pgio_header *);
+-void pnfs_read_resend_pnfs(struct nfs_pgio_header *);
++void pnfs_read_resend_pnfs(struct nfs_pgio_header *, unsigned int mirror_idx);
+ struct pnfs_layout_segment *pnfs_update_layout(struct inode *ino,
+ struct nfs_open_context *ctx,
+ loff_t pos,
+diff --git a/fs/ocfs2/ocfs2.h b/fs/ocfs2/ocfs2.h
+index 9461bd3e1c0c8..0a8cd8e59a92c 100644
+--- a/fs/ocfs2/ocfs2.h
++++ b/fs/ocfs2/ocfs2.h
+@@ -326,8 +326,8 @@ struct ocfs2_super
+ spinlock_t osb_lock;
+ u32 s_next_generation;
+ unsigned long osb_flags;
+- s16 s_inode_steal_slot;
+- s16 s_meta_steal_slot;
++ u16 s_inode_steal_slot;
++ u16 s_meta_steal_slot;
+ atomic_t s_num_inodes_stolen;
+ atomic_t s_num_meta_stolen;
+
+diff --git a/fs/ocfs2/suballoc.c b/fs/ocfs2/suballoc.c
+index 45745cc3408a5..8c8cf7f4eb34e 100644
+--- a/fs/ocfs2/suballoc.c
++++ b/fs/ocfs2/suballoc.c
+@@ -879,9 +879,9 @@ static void __ocfs2_set_steal_slot(struct ocfs2_super *osb, int slot, int type)
+ {
+ spin_lock(&osb->osb_lock);
+ if (type == INODE_ALLOC_SYSTEM_INODE)
+- osb->s_inode_steal_slot = slot;
++ osb->s_inode_steal_slot = (u16)slot;
+ else if (type == EXTENT_ALLOC_SYSTEM_INODE)
+- osb->s_meta_steal_slot = slot;
++ osb->s_meta_steal_slot = (u16)slot;
+ spin_unlock(&osb->osb_lock);
+ }
+
+diff --git a/fs/ocfs2/super.c b/fs/ocfs2/super.c
+index ac61eeaf38374..b74c5b25726f5 100644
+--- a/fs/ocfs2/super.c
++++ b/fs/ocfs2/super.c
+@@ -78,7 +78,7 @@ struct mount_options
+ unsigned long commit_interval;
+ unsigned long mount_opt;
+ unsigned int atime_quantum;
+- signed short slot;
++ unsigned short slot;
+ int localalloc_opt;
+ unsigned int resv_level;
+ int dir_resv_level;
+@@ -1334,7 +1334,7 @@ static int ocfs2_parse_options(struct super_block *sb,
+ goto bail;
+ }
+ if (option)
+- mopt->slot = (s16)option;
++ mopt->slot = (u16)option;
+ break;
+ case Opt_commit:
+ if (match_int(&args[0], &option)) {
+diff --git a/fs/ubifs/journal.c b/fs/ubifs/journal.c
+index e5ec1afe1c668..2cf05f87565c2 100644
+--- a/fs/ubifs/journal.c
++++ b/fs/ubifs/journal.c
+@@ -539,7 +539,7 @@ int ubifs_jnl_update(struct ubifs_info *c, const struct inode *dir,
+ const struct fscrypt_name *nm, const struct inode *inode,
+ int deletion, int xent)
+ {
+- int err, dlen, ilen, len, lnum, ino_offs, dent_offs;
++ int err, dlen, ilen, len, lnum, ino_offs, dent_offs, orphan_added = 0;
+ int aligned_dlen, aligned_ilen, sync = IS_DIRSYNC(dir);
+ int last_reference = !!(deletion && inode->i_nlink == 0);
+ struct ubifs_inode *ui = ubifs_inode(inode);
+@@ -630,6 +630,7 @@ int ubifs_jnl_update(struct ubifs_info *c, const struct inode *dir,
+ goto out_finish;
+ }
+ ui->del_cmtno = c->cmt_no;
++ orphan_added = 1;
+ }
+
+ err = write_head(c, BASEHD, dent, len, &lnum, &dent_offs, sync);
+@@ -702,7 +703,7 @@ out_release:
+ kfree(dent);
+ out_ro:
+ ubifs_ro_mode(c, err);
+- if (last_reference)
++ if (orphan_added)
+ ubifs_delete_orphan(c, inode->i_ino);
+ finish_reservation(c);
+ return err;
+@@ -1218,7 +1219,7 @@ int ubifs_jnl_rename(struct ubifs_info *c, const struct inode *old_dir,
+ void *p;
+ union ubifs_key key;
+ struct ubifs_dent_node *dent, *dent2;
+- int err, dlen1, dlen2, ilen, lnum, offs, len;
++ int err, dlen1, dlen2, ilen, lnum, offs, len, orphan_added = 0;
+ int aligned_dlen1, aligned_dlen2, plen = UBIFS_INO_NODE_SZ;
+ int last_reference = !!(new_inode && new_inode->i_nlink == 0);
+ int move = (old_dir != new_dir);
+@@ -1334,6 +1335,7 @@ int ubifs_jnl_rename(struct ubifs_info *c, const struct inode *old_dir,
+ goto out_finish;
+ }
+ new_ui->del_cmtno = c->cmt_no;
++ orphan_added = 1;
+ }
+
+ err = write_head(c, BASEHD, dent, len, &lnum, &offs, sync);
+@@ -1415,7 +1417,7 @@ out_release:
+ release_head(c, BASEHD);
+ out_ro:
+ ubifs_ro_mode(c, err);
+- if (last_reference)
++ if (orphan_added)
+ ubifs_delete_orphan(c, new_inode->i_ino);
+ out_finish:
+ finish_reservation(c);
+diff --git a/fs/ufs/super.c b/fs/ufs/super.c
+index 1da0be667409b..e3b69fb280e8c 100644
+--- a/fs/ufs/super.c
++++ b/fs/ufs/super.c
+@@ -101,7 +101,7 @@ static struct inode *ufs_nfs_get_inode(struct super_block *sb, u64 ino, u32 gene
+ struct ufs_sb_private_info *uspi = UFS_SB(sb)->s_uspi;
+ struct inode *inode;
+
+- if (ino < UFS_ROOTINO || ino > uspi->s_ncg * uspi->s_ipg)
++ if (ino < UFS_ROOTINO || ino > (u64)uspi->s_ncg * uspi->s_ipg)
+ return ERR_PTR(-ESTALE);
+
+ inode = ufs_iget(sb, ino);
+diff --git a/include/crypto/if_alg.h b/include/crypto/if_alg.h
+index 088c1ded27148..ee6412314f8f3 100644
+--- a/include/crypto/if_alg.h
++++ b/include/crypto/if_alg.h
+@@ -135,6 +135,7 @@ struct af_alg_async_req {
+ * SG?
+ * @enc: Cryptographic operation to be performed when
+ * recvmsg is invoked.
++ * @init: True if metadata has been sent.
+ * @len: Length of memory allocated for this data structure.
+ */
+ struct af_alg_ctx {
+@@ -151,6 +152,7 @@ struct af_alg_ctx {
+ bool more;
+ bool merge;
+ bool enc;
++ bool init;
+
+ unsigned int len;
+ };
+@@ -226,7 +228,7 @@ unsigned int af_alg_count_tsgl(struct sock *sk, size_t bytes, size_t offset);
+ void af_alg_pull_tsgl(struct sock *sk, size_t used, struct scatterlist *dst,
+ size_t dst_offset);
+ void af_alg_wmem_wakeup(struct sock *sk);
+-int af_alg_wait_for_data(struct sock *sk, unsigned flags);
++int af_alg_wait_for_data(struct sock *sk, unsigned flags, unsigned min);
+ int af_alg_sendmsg(struct socket *sock, struct msghdr *msg, size_t size,
+ unsigned int ivsize);
+ ssize_t af_alg_sendpage(struct socket *sock, struct page *page,
+diff --git a/include/linux/fs.h b/include/linux/fs.h
+index 45cc10cdf6ddd..db58786c660bf 100644
+--- a/include/linux/fs.h
++++ b/include/linux/fs.h
+@@ -546,6 +546,16 @@ static inline void i_mmap_unlock_read(struct address_space *mapping)
+ up_read(&mapping->i_mmap_rwsem);
+ }
+
++static inline void i_mmap_assert_locked(struct address_space *mapping)
++{
++ lockdep_assert_held(&mapping->i_mmap_rwsem);
++}
++
++static inline void i_mmap_assert_write_locked(struct address_space *mapping)
++{
++ lockdep_assert_held_write(&mapping->i_mmap_rwsem);
++}
++
+ /*
+ * Might pages of this file be mapped into userspace?
+ */
+diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
+index 43a1cef8f0f16..214f509bcb88f 100644
+--- a/include/linux/hugetlb.h
++++ b/include/linux/hugetlb.h
+@@ -165,7 +165,8 @@ pte_t *huge_pte_alloc(struct mm_struct *mm,
+ unsigned long addr, unsigned long sz);
+ pte_t *huge_pte_offset(struct mm_struct *mm,
+ unsigned long addr, unsigned long sz);
+-int huge_pmd_unshare(struct mm_struct *mm, unsigned long *addr, pte_t *ptep);
++int huge_pmd_unshare(struct mm_struct *mm, struct vm_area_struct *vma,
++ unsigned long *addr, pte_t *ptep);
+ void adjust_range_if_pmd_sharing_possible(struct vm_area_struct *vma,
+ unsigned long *start, unsigned long *end);
+ struct page *follow_huge_addr(struct mm_struct *mm, unsigned long address,
+@@ -204,8 +205,9 @@ static inline struct address_space *hugetlb_page_mapping_lock_write(
+ return NULL;
+ }
+
+-static inline int huge_pmd_unshare(struct mm_struct *mm, unsigned long *addr,
+- pte_t *ptep)
++static inline int huge_pmd_unshare(struct mm_struct *mm,
++ struct vm_area_struct *vma,
++ unsigned long *addr, pte_t *ptep)
+ {
+ return 0;
+ }
+diff --git a/include/linux/intel-iommu.h b/include/linux/intel-iommu.h
+index 64a5335046b00..bc1abbc041092 100644
+--- a/include/linux/intel-iommu.h
++++ b/include/linux/intel-iommu.h
+@@ -363,8 +363,8 @@ enum {
+
+ #define QI_DEV_EIOTLB_ADDR(a) ((u64)(a) & VTD_PAGE_MASK)
+ #define QI_DEV_EIOTLB_SIZE (((u64)1) << 11)
+-#define QI_DEV_EIOTLB_GLOB(g) ((u64)g)
+-#define QI_DEV_EIOTLB_PASID(p) (((u64)p) << 32)
++#define QI_DEV_EIOTLB_GLOB(g) ((u64)(g) & 0x1)
++#define QI_DEV_EIOTLB_PASID(p) ((u64)((p) & 0xfffff) << 32)
+ #define QI_DEV_EIOTLB_SID(sid) ((u64)((sid) & 0xffff) << 16)
+ #define QI_DEV_EIOTLB_QDEP(qd) ((u64)((qd) & 0x1f) << 4)
+ #define QI_DEV_EIOTLB_PFSID(pfsid) (((u64)(pfsid & 0xf) << 12) | \
+diff --git a/include/linux/irq.h b/include/linux/irq.h
+index 8d5bc2c237d74..1b7f4dfee35b3 100644
+--- a/include/linux/irq.h
++++ b/include/linux/irq.h
+@@ -213,6 +213,8 @@ struct irq_data {
+ * required
+ * IRQD_HANDLE_ENFORCE_IRQCTX - Enforce that handle_irq_*() is only invoked
+ * from actual interrupt context.
++ * IRQD_AFFINITY_ON_ACTIVATE - Affinity is set on activation. Don't call
++ * irq_chip::irq_set_affinity() when deactivated.
+ */
+ enum {
+ IRQD_TRIGGER_MASK = 0xf,
+@@ -237,6 +239,7 @@ enum {
+ IRQD_CAN_RESERVE = (1 << 26),
+ IRQD_MSI_NOMASK_QUIRK = (1 << 27),
+ IRQD_HANDLE_ENFORCE_IRQCTX = (1 << 28),
++ IRQD_AFFINITY_ON_ACTIVATE = (1 << 29),
+ };
+
+ #define __irqd_to_state(d) ACCESS_PRIVATE((d)->common, state_use_accessors)
+@@ -421,6 +424,16 @@ static inline bool irqd_msi_nomask_quirk(struct irq_data *d)
+ return __irqd_to_state(d) & IRQD_MSI_NOMASK_QUIRK;
+ }
+
++static inline void irqd_set_affinity_on_activate(struct irq_data *d)
++{
++ __irqd_to_state(d) |= IRQD_AFFINITY_ON_ACTIVATE;
++}
++
++static inline bool irqd_affinity_on_activate(struct irq_data *d)
++{
++ return __irqd_to_state(d) & IRQD_AFFINITY_ON_ACTIVATE;
++}
++
+ #undef __irqd_to_state
+
+ static inline irq_hw_number_t irqd_to_hwirq(struct irq_data *d)
+diff --git a/include/linux/pci-ats.h b/include/linux/pci-ats.h
+index d08f0869f1213..54c57a523ccec 100644
+--- a/include/linux/pci-ats.h
++++ b/include/linux/pci-ats.h
+@@ -25,6 +25,10 @@ int pci_enable_pri(struct pci_dev *pdev, u32 reqs);
+ void pci_disable_pri(struct pci_dev *pdev);
+ int pci_reset_pri(struct pci_dev *pdev);
+ int pci_prg_resp_pasid_required(struct pci_dev *pdev);
++bool pci_pri_supported(struct pci_dev *pdev);
++#else
++static inline bool pci_pri_supported(struct pci_dev *pdev)
++{ return false; }
+ #endif /* CONFIG_PCI_PRI */
+
+ #ifdef CONFIG_PCI_PASID
+diff --git a/include/net/sock.h b/include/net/sock.h
+index 46423e86dba50..2a1e8a683336e 100644
+--- a/include/net/sock.h
++++ b/include/net/sock.h
+@@ -890,6 +890,8 @@ static inline int sk_memalloc_socks(void)
+ {
+ return static_branch_unlikely(&memalloc_socks_key);
+ }
++
++void __receive_sock(struct file *file);
+ #else
+
+ static inline int sk_memalloc_socks(void)
+@@ -897,6 +899,8 @@ static inline int sk_memalloc_socks(void)
+ return 0;
+ }
+
++static inline void __receive_sock(struct file *file)
++{ }
+ #endif
+
+ static inline gfp_t sk_gfp_mask(const struct sock *sk, gfp_t gfp_mask)
+diff --git a/include/uapi/linux/btrfs.h b/include/uapi/linux/btrfs.h
+index e6b6cb0f8bc6a..24f6848ad78ec 100644
+--- a/include/uapi/linux/btrfs.h
++++ b/include/uapi/linux/btrfs.h
+@@ -243,6 +243,13 @@ struct btrfs_ioctl_dev_info_args {
+ __u8 path[BTRFS_DEVICE_PATH_NAME_MAX]; /* out */
+ };
+
++/*
++ * Retrieve information about the filesystem
++ */
++
++/* Request information about checksum type and size */
++#define BTRFS_FS_INFO_FLAG_CSUM_INFO (1 << 0)
++
+ struct btrfs_ioctl_fs_info_args {
+ __u64 max_id; /* out */
+ __u64 num_devices; /* out */
+@@ -250,8 +257,11 @@ struct btrfs_ioctl_fs_info_args {
+ __u32 nodesize; /* out */
+ __u32 sectorsize; /* out */
+ __u32 clone_alignment; /* out */
+- __u32 reserved32;
+- __u64 reserved[122]; /* pad to 1k */
++ /* See BTRFS_FS_INFO_FLAG_* */
++ __u16 csum_type; /* out */
++ __u16 csum_size; /* out */
++ __u64 flags; /* in/out */
++ __u8 reserved[968]; /* pad to 1k */
+ };
+
+ /*
+diff --git a/init/main.c b/init/main.c
+index 03371976d3872..567f7694b8044 100644
+--- a/init/main.c
++++ b/init/main.c
+@@ -385,8 +385,6 @@ static int __init bootconfig_params(char *param, char *val,
+ {
+ if (strcmp(param, "bootconfig") == 0) {
+ bootconfig_found = true;
+- } else if (strcmp(param, "--") == 0) {
+- initargs_found = true;
+ }
+ return 0;
+ }
+@@ -397,19 +395,23 @@ static void __init setup_boot_config(const char *cmdline)
+ const char *msg;
+ int pos;
+ u32 size, csum;
+- char *data, *copy;
++ char *data, *copy, *err;
+ int ret;
+
+ /* Cut out the bootconfig data even if we have no bootconfig option */
+ data = get_boot_config_from_initrd(&size, &csum);
+
+ strlcpy(tmp_cmdline, boot_command_line, COMMAND_LINE_SIZE);
+- parse_args("bootconfig", tmp_cmdline, NULL, 0, 0, 0, NULL,
+- bootconfig_params);
++ err = parse_args("bootconfig", tmp_cmdline, NULL, 0, 0, 0, NULL,
++ bootconfig_params);
+
+- if (!bootconfig_found)
++ if (IS_ERR(err) || !bootconfig_found)
+ return;
+
++ /* parse_args() stops at '--' and returns an address */
++ if (err)
++ initargs_found = true;
++
+ if (!data) {
+ pr_err("'bootconfig' found on command line, but no bootconfig found\n");
+ return;
+diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c
+index dc58fd245e798..c48864ae6413c 100644
+--- a/kernel/irq/manage.c
++++ b/kernel/irq/manage.c
+@@ -320,12 +320,16 @@ static bool irq_set_affinity_deactivated(struct irq_data *data,
+ struct irq_desc *desc = irq_data_to_desc(data);
+
+ /*
++ * Handle irq chips which can handle affinity only in activated
++ * state correctly
++ *
+ * If the interrupt is not yet activated, just store the affinity
+ * mask and do not call the chip driver at all. On activation the
+ * driver has to make sure anyway that the interrupt is in a
+ * useable state so startup works.
+ */
+- if (!IS_ENABLED(CONFIG_IRQ_DOMAIN_HIERARCHY) || irqd_is_activated(data))
++ if (!IS_ENABLED(CONFIG_IRQ_DOMAIN_HIERARCHY) ||
++ irqd_is_activated(data) || !irqd_affinity_on_activate(data))
+ return false;
+
+ cpumask_copy(desc->irq_common_data.affinity, mask);
+diff --git a/kernel/irq/pm.c b/kernel/irq/pm.c
+index 8f557fa1f4fe4..c6c7e187ae748 100644
+--- a/kernel/irq/pm.c
++++ b/kernel/irq/pm.c
+@@ -185,14 +185,18 @@ void rearm_wake_irq(unsigned int irq)
+ unsigned long flags;
+ struct irq_desc *desc = irq_get_desc_buslock(irq, &flags, IRQ_GET_DESC_CHECK_GLOBAL);
+
+- if (!desc || !(desc->istate & IRQS_SUSPENDED) ||
+- !irqd_is_wakeup_set(&desc->irq_data))
++ if (!desc)
+ return;
+
++ if (!(desc->istate & IRQS_SUSPENDED) ||
++ !irqd_is_wakeup_set(&desc->irq_data))
++ goto unlock;
++
+ desc->istate &= ~IRQS_SUSPENDED;
+ irqd_set(&desc->irq_data, IRQD_WAKEUP_ARMED);
+ __enable_irq(desc);
+
++unlock:
+ irq_put_desc_busunlock(desc, flags);
+ }
+
+diff --git a/kernel/kprobes.c b/kernel/kprobes.c
+index 0a967db226d8a..bbff4bccb885d 100644
+--- a/kernel/kprobes.c
++++ b/kernel/kprobes.c
+@@ -2104,6 +2104,13 @@ static void kill_kprobe(struct kprobe *p)
+ * the original probed function (which will be freed soon) any more.
+ */
+ arch_remove_kprobe(p);
++
++ /*
++ * The module is going away. We should disarm the kprobe which
++ * is using ftrace.
++ */
++ if (kprobe_ftrace(p))
++ disarm_kprobe_ftrace(p);
+ }
+
+ /* Disable one kprobe */
+diff --git a/kernel/module.c b/kernel/module.c
+index af59c86f1547f..8814c21266384 100644
+--- a/kernel/module.c
++++ b/kernel/module.c
+@@ -1517,18 +1517,34 @@ struct module_sect_attrs {
+ struct module_sect_attr attrs[];
+ };
+
++#define MODULE_SECT_READ_SIZE (3 /* "0x", "\n" */ + (BITS_PER_LONG / 4))
+ static ssize_t module_sect_read(struct file *file, struct kobject *kobj,
+ struct bin_attribute *battr,
+ char *buf, loff_t pos, size_t count)
+ {
+ struct module_sect_attr *sattr =
+ container_of(battr, struct module_sect_attr, battr);
++ char bounce[MODULE_SECT_READ_SIZE + 1];
++ size_t wrote;
+
+ if (pos != 0)
+ return -EINVAL;
+
+- return sprintf(buf, "0x%px\n",
+- kallsyms_show_value(file->f_cred) ? (void *)sattr->address : NULL);
++ /*
++ * Since we're a binary read handler, we must account for the
++ * trailing NUL byte that sprintf will write: if "buf" is
++ * too small to hold the NUL, or the NUL is exactly the last
++ * byte, the read will look like it got truncated by one byte.
++ * Since there is no way to ask sprintf nicely to not write
++ * the NUL, we have to use a bounce buffer.
++ */
++ wrote = scnprintf(bounce, sizeof(bounce), "0x%px\n",
++ kallsyms_show_value(file->f_cred)
++ ? (void *)sattr->address : NULL);
++ count = min(count, wrote);
++ memcpy(buf, bounce, count);
++
++ return count;
+ }
+
+ static void free_sect_attrs(struct module_sect_attrs *sect_attrs)
+@@ -1577,7 +1593,7 @@ static void add_sect_attrs(struct module *mod, const struct load_info *info)
+ goto out;
+ sect_attrs->nsections++;
+ sattr->battr.read = module_sect_read;
+- sattr->battr.size = 3 /* "0x", "\n" */ + (BITS_PER_LONG / 4);
++ sattr->battr.size = MODULE_SECT_READ_SIZE;
+ sattr->battr.attr.mode = 0400;
+ *(gattr++) = &(sattr++)->battr;
+ }
+diff --git a/kernel/pid.c b/kernel/pid.c
+index c835b844aca7c..5506efe93dd2f 100644
+--- a/kernel/pid.c
++++ b/kernel/pid.c
+@@ -42,6 +42,7 @@
+ #include <linux/sched/signal.h>
+ #include <linux/sched/task.h>
+ #include <linux/idr.h>
++#include <net/sock.h>
+
+ struct pid init_struct_pid = {
+ .count = REFCOUNT_INIT(1),
+@@ -624,10 +625,12 @@ static int pidfd_getfd(struct pid *pid, int fd)
+ }
+
+ ret = get_unused_fd_flags(O_CLOEXEC);
+- if (ret < 0)
++ if (ret < 0) {
+ fput(file);
+- else
++ } else {
++ __receive_sock(file);
+ fd_install(ret, file);
++ }
+
+ return ret;
+ }
+diff --git a/kernel/sched/core.c b/kernel/sched/core.c
+index 1bae86fc128b2..ebecf1cc3b788 100644
+--- a/kernel/sched/core.c
++++ b/kernel/sched/core.c
+@@ -793,6 +793,26 @@ unsigned int sysctl_sched_uclamp_util_max = SCHED_CAPACITY_SCALE;
+ /* All clamps are required to be less or equal than these values */
+ static struct uclamp_se uclamp_default[UCLAMP_CNT];
+
++/*
++ * This static key is used to reduce the uclamp overhead in the fast path. It
++ * primarily disables the call to uclamp_rq_{inc, dec}() in
++ * enqueue/dequeue_task().
++ *
++ * This allows users to continue to enable uclamp in their kernel config with
++ * minimum uclamp overhead in the fast path.
++ *
++ * As soon as userspace modifies any of the uclamp knobs, the static key is
++ * enabled, since we have actual users that make use of uclamp
++ * functionality.
++ *
++ * The knobs that would enable this static key are:
++ *
++ * * A task modifying its uclamp value with sched_setattr().
++ * * An admin modifying the sysctl_sched_uclamp_{min, max} via procfs.
++ * * An admin modifying the cgroup cpu.uclamp.{min, max}
++ */
++DEFINE_STATIC_KEY_FALSE(sched_uclamp_used);
++
+ /* Integer rounded range for each bucket */
+ #define UCLAMP_BUCKET_DELTA DIV_ROUND_CLOSEST(SCHED_CAPACITY_SCALE, UCLAMP_BUCKETS)
+
+@@ -989,10 +1009,38 @@ static inline void uclamp_rq_dec_id(struct rq *rq, struct task_struct *p,
+
+ lockdep_assert_held(&rq->lock);
+
++ /*
++ * If sched_uclamp_used was enabled after task @p was enqueued,
++ * we could end up with unbalanced call to uclamp_rq_dec_id().
++ *
++ * In this case the uc_se->active flag should be false since no uclamp
++ * accounting was performed at enqueue time and we can just return
++ * here.
++ *
++ * Need to be careful of the following enqueue/dequeue ordering
++ * problem too
++ *
++ * enqueue(taskA)
++ * // sched_uclamp_used gets enabled
++ * enqueue(taskB)
++ * dequeue(taskA)
++ * // Must not decrement bucket->tasks here
++ * dequeue(taskB)
++ *
++ * where we could end up with stale data in uc_se and
++ * bucket[uc_se->bucket_id].
++ *
++ * The following check here eliminates the possibility of such race.
++ */
++ if (unlikely(!uc_se->active))
++ return;
++
+ bucket = &uc_rq->bucket[uc_se->bucket_id];
++
+ SCHED_WARN_ON(!bucket->tasks);
+ if (likely(bucket->tasks))
+ bucket->tasks--;
++
+ uc_se->active = false;
+
+ /*
+@@ -1020,6 +1068,15 @@ static inline void uclamp_rq_inc(struct rq *rq, struct task_struct *p)
+ {
+ enum uclamp_id clamp_id;
+
++ /*
++ * Avoid any overhead until uclamp is actually used by the userspace.
++ *
++ * The condition is constructed such that a NOP is generated when
++ * sched_uclamp_used is disabled.
++ */
++ if (!static_branch_unlikely(&sched_uclamp_used))
++ return;
++
+ if (unlikely(!p->sched_class->uclamp_enabled))
+ return;
+
+@@ -1035,6 +1092,15 @@ static inline void uclamp_rq_dec(struct rq *rq, struct task_struct *p)
+ {
+ enum uclamp_id clamp_id;
+
++ /*
++ * Avoid any overhead until uclamp is actually used by the userspace.
++ *
++ * The condition is constructed such that a NOP is generated when
++ * sched_uclamp_used is disabled.
++ */
++ if (!static_branch_unlikely(&sched_uclamp_used))
++ return;
++
+ if (unlikely(!p->sched_class->uclamp_enabled))
+ return;
+
+@@ -1144,8 +1210,10 @@ int sysctl_sched_uclamp_handler(struct ctl_table *table, int write,
+ update_root_tg = true;
+ }
+
+- if (update_root_tg)
++ if (update_root_tg) {
++ static_branch_enable(&sched_uclamp_used);
+ uclamp_update_root_tg();
++ }
+
+ /*
+ * We update all RUNNABLE tasks only when task groups are in use.
+@@ -1180,6 +1248,15 @@ static int uclamp_validate(struct task_struct *p,
+ if (upper_bound > SCHED_CAPACITY_SCALE)
+ return -EINVAL;
+
++ /*
++ * We have valid uclamp attributes; make sure uclamp is enabled.
++ *
++ * We need to do that here, because enabling static branches is a
++ * blocking operation which obviously cannot be done while holding
++ * scheduler locks.
++ */
++ static_branch_enable(&sched_uclamp_used);
++
+ return 0;
+ }
+
+@@ -7306,6 +7383,8 @@ static ssize_t cpu_uclamp_write(struct kernfs_open_file *of, char *buf,
+ if (req.ret)
+ return req.ret;
+
++ static_branch_enable(&sched_uclamp_used);
++
+ mutex_lock(&uclamp_mutex);
+ rcu_read_lock();
+
+diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
+index 7fbaee24c824f..dc6835bc64907 100644
+--- a/kernel/sched/cpufreq_schedutil.c
++++ b/kernel/sched/cpufreq_schedutil.c
+@@ -210,7 +210,7 @@ unsigned long schedutil_cpu_util(int cpu, unsigned long util_cfs,
+ unsigned long dl_util, util, irq;
+ struct rq *rq = cpu_rq(cpu);
+
+- if (!IS_BUILTIN(CONFIG_UCLAMP_TASK) &&
++ if (!uclamp_is_used() &&
+ type == FREQUENCY_UTIL && rt_rq_is_runnable(&rq->rt)) {
+ return max;
+ }
+diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
+index 1f58677a8f233..2a52710d2f526 100644
+--- a/kernel/sched/sched.h
++++ b/kernel/sched/sched.h
+@@ -863,6 +863,8 @@ struct uclamp_rq {
+ unsigned int value;
+ struct uclamp_bucket bucket[UCLAMP_BUCKETS];
+ };
++
++DECLARE_STATIC_KEY_FALSE(sched_uclamp_used);
+ #endif /* CONFIG_UCLAMP_TASK */
+
+ /*
+@@ -2355,12 +2357,35 @@ static inline void cpufreq_update_util(struct rq *rq, unsigned int flags) {}
+ #ifdef CONFIG_UCLAMP_TASK
+ unsigned long uclamp_eff_value(struct task_struct *p, enum uclamp_id clamp_id);
+
++/**
++ * uclamp_rq_util_with - clamp @util with @rq and @p effective uclamp values.
++ * @rq: The rq to clamp against. Must not be NULL.
++ * @util: The util value to clamp.
++ * @p: The task to clamp against. Can be NULL if you want to clamp
++ * against @rq only.
++ *
++ * Clamps the passed @util to the max(@rq, @p) effective uclamp values.
++ *
++ * If sched_uclamp_used static key is disabled, then just return the util
++ * without any clamping since uclamp aggregation at the rq level in the fast
++ * path is disabled, rendering this operation a NOP.
++ *
++ * Use uclamp_eff_value() if you don't care about uclamp values at rq level. It
++ * will return the correct effective uclamp value of the task even if the
++ * static key is disabled.
++ */
+ static __always_inline
+ unsigned long uclamp_rq_util_with(struct rq *rq, unsigned long util,
+ struct task_struct *p)
+ {
+- unsigned long min_util = READ_ONCE(rq->uclamp[UCLAMP_MIN].value);
+- unsigned long max_util = READ_ONCE(rq->uclamp[UCLAMP_MAX].value);
++ unsigned long min_util;
++ unsigned long max_util;
++
++ if (!static_branch_likely(&sched_uclamp_used))
++ return util;
++
++ min_util = READ_ONCE(rq->uclamp[UCLAMP_MIN].value);
++ max_util = READ_ONCE(rq->uclamp[UCLAMP_MAX].value);
+
+ if (p) {
+ min_util = max(min_util, uclamp_eff_value(p, UCLAMP_MIN));
+@@ -2377,6 +2402,19 @@ unsigned long uclamp_rq_util_with(struct rq *rq, unsigned long util,
+
+ return clamp(util, min_util, max_util);
+ }
++
++/*
++ * When uclamp is compiled in, the aggregation at rq level is 'turned off'
++ * by default in the fast path and only gets turned on once userspace performs
++ * an operation that requires it.
++ *
++ * Returns true if userspace opted-in to use uclamp and aggregation at rq level
++ * hence is active.
++ */
++static inline bool uclamp_is_used(void)
++{
++ return static_branch_likely(&sched_uclamp_used);
++}
+ #else /* CONFIG_UCLAMP_TASK */
+ static inline
+ unsigned long uclamp_rq_util_with(struct rq *rq, unsigned long util,
+@@ -2384,6 +2422,11 @@ unsigned long uclamp_rq_util_with(struct rq *rq, unsigned long util,
+ {
+ return util;
+ }
++
++static inline bool uclamp_is_used(void)
++{
++ return false;
++}
+ #endif /* CONFIG_UCLAMP_TASK */
+
+ #ifdef arch_scale_freq_capacity
+diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
+index baa7c050dc7bc..8fbe83b7f57ca 100644
+--- a/kernel/trace/ftrace.c
++++ b/kernel/trace/ftrace.c
+@@ -6198,8 +6198,11 @@ static int referenced_filters(struct dyn_ftrace *rec)
+ int cnt = 0;
+
+ for (ops = ftrace_ops_list; ops != &ftrace_list_end; ops = ops->next) {
+- if (ops_references_rec(ops, rec))
+- cnt++;
++ if (ops_references_rec(ops, rec)) {
++ cnt++;
++ if (ops->flags & FTRACE_OPS_FL_SAVE_REGS)
++ rec->flags |= FTRACE_FL_REGS;
++ }
+ }
+
+ return cnt;
+@@ -6378,8 +6381,8 @@ void ftrace_module_enable(struct module *mod)
+ if (ftrace_start_up)
+ cnt += referenced_filters(rec);
+
+- /* This clears FTRACE_FL_DISABLED */
+- rec->flags = cnt;
++ rec->flags &= ~FTRACE_FL_DISABLED;
++ rec->flags += cnt;
+
+ if (ftrace_start_up && cnt) {
+ int failed = __ftrace_replace_code(rec, 1);
+@@ -6977,12 +6980,12 @@ void ftrace_pid_follow_fork(struct trace_array *tr, bool enable)
+ if (enable) {
+ register_trace_sched_process_fork(ftrace_pid_follow_sched_process_fork,
+ tr);
+- register_trace_sched_process_exit(ftrace_pid_follow_sched_process_exit,
++ register_trace_sched_process_free(ftrace_pid_follow_sched_process_exit,
+ tr);
+ } else {
+ unregister_trace_sched_process_fork(ftrace_pid_follow_sched_process_fork,
+ tr);
+- unregister_trace_sched_process_exit(ftrace_pid_follow_sched_process_exit,
++ unregister_trace_sched_process_free(ftrace_pid_follow_sched_process_exit,
+ tr);
+ }
+ }
+diff --git a/kernel/trace/trace_events.c b/kernel/trace/trace_events.c
+index 242f59e7f17d5..671f564c33c40 100644
+--- a/kernel/trace/trace_events.c
++++ b/kernel/trace/trace_events.c
+@@ -538,12 +538,12 @@ void trace_event_follow_fork(struct trace_array *tr, bool enable)
+ if (enable) {
+ register_trace_prio_sched_process_fork(event_filter_pid_sched_process_fork,
+ tr, INT_MIN);
+- register_trace_prio_sched_process_exit(event_filter_pid_sched_process_exit,
++ register_trace_prio_sched_process_free(event_filter_pid_sched_process_exit,
+ tr, INT_MAX);
+ } else {
+ unregister_trace_sched_process_fork(event_filter_pid_sched_process_fork,
+ tr);
+- unregister_trace_sched_process_exit(event_filter_pid_sched_process_exit,
++ unregister_trace_sched_process_free(event_filter_pid_sched_process_exit,
+ tr);
+ }
+ }
+diff --git a/kernel/trace/trace_hwlat.c b/kernel/trace/trace_hwlat.c
+index e2be7bb7ef7e2..17e1e49e5b936 100644
+--- a/kernel/trace/trace_hwlat.c
++++ b/kernel/trace/trace_hwlat.c
+@@ -283,6 +283,7 @@ static bool disable_migrate;
+ static void move_to_next_cpu(void)
+ {
+ struct cpumask *current_mask = &save_cpumask;
++ struct trace_array *tr = hwlat_trace;
+ int next_cpu;
+
+ if (disable_migrate)
+@@ -296,7 +297,7 @@ static void move_to_next_cpu(void)
+ goto disable;
+
+ get_online_cpus();
+- cpumask_and(current_mask, cpu_online_mask, tracing_buffer_mask);
++ cpumask_and(current_mask, cpu_online_mask, tr->tracing_cpumask);
+ next_cpu = cpumask_next(smp_processor_id(), current_mask);
+ put_online_cpus();
+
+@@ -373,7 +374,7 @@ static int start_kthread(struct trace_array *tr)
+ /* Just pick the first CPU on first iteration */
+ current_mask = &save_cpumask;
+ get_online_cpus();
+- cpumask_and(current_mask, cpu_online_mask, tracing_buffer_mask);
++ cpumask_and(current_mask, cpu_online_mask, tr->tracing_cpumask);
+ put_online_cpus();
+ next_cpu = cpumask_first(current_mask);
+
+diff --git a/lib/devres.c b/lib/devres.c
+index 6ef51f159c54b..ca0d28727ccef 100644
+--- a/lib/devres.c
++++ b/lib/devres.c
+@@ -119,6 +119,7 @@ __devm_ioremap_resource(struct device *dev, const struct resource *res,
+ {
+ resource_size_t size;
+ void __iomem *dest_ptr;
++ char *pretty_name;
+
+ BUG_ON(!dev);
+
+@@ -129,7 +130,15 @@ __devm_ioremap_resource(struct device *dev, const struct resource *res,
+
+ size = resource_size(res);
+
+- if (!devm_request_mem_region(dev, res->start, size, dev_name(dev))) {
++ if (res->name)
++ pretty_name = devm_kasprintf(dev, GFP_KERNEL, "%s %s",
++ dev_name(dev), res->name);
++ else
++ pretty_name = devm_kstrdup(dev, dev_name(dev), GFP_KERNEL);
++ if (!pretty_name)
++ return IOMEM_ERR_PTR(-ENOMEM);
++
++ if (!devm_request_mem_region(dev, res->start, size, pretty_name)) {
+ dev_err(dev, "can't request region for resource %pR\n", res);
+ return IOMEM_ERR_PTR(-EBUSY);
+ }
+diff --git a/lib/test_kmod.c b/lib/test_kmod.c
+index e651c37d56dbd..eab52770070d6 100644
+--- a/lib/test_kmod.c
++++ b/lib/test_kmod.c
+@@ -745,7 +745,7 @@ static int trigger_config_run_type(struct kmod_test_device *test_dev,
+ break;
+ case TEST_KMOD_FS_TYPE:
+ kfree_const(config->test_fs);
+- config->test_driver = NULL;
++ config->test_fs = NULL;
+ copied = config_copy_test_fs(config, test_str,
+ strlen(test_str));
+ break;
+diff --git a/lib/test_lockup.c b/lib/test_lockup.c
+index ea09ca335b214..69ef1c17edf64 100644
+--- a/lib/test_lockup.c
++++ b/lib/test_lockup.c
+@@ -512,8 +512,8 @@ static int __init test_lockup_init(void)
+ if (test_file_path[0]) {
+ test_file = filp_open(test_file_path, O_RDONLY, 0);
+ if (IS_ERR(test_file)) {
+- pr_err("cannot find file_path\n");
+- return -EINVAL;
++ pr_err("failed to open %s: %ld\n", test_file_path, PTR_ERR(test_file));
++ return PTR_ERR(test_file);
+ }
+ test_inode = file_inode(test_file);
+ } else if (test_lock_inode ||
+diff --git a/mm/cma.c b/mm/cma.c
+index 26ecff8188817..0963c0f9c5022 100644
+--- a/mm/cma.c
++++ b/mm/cma.c
+@@ -93,17 +93,15 @@ static void cma_clear_bitmap(struct cma *cma, unsigned long pfn,
+ mutex_unlock(&cma->lock);
+ }
+
+-static int __init cma_activate_area(struct cma *cma)
++static void __init cma_activate_area(struct cma *cma)
+ {
+ unsigned long base_pfn = cma->base_pfn, pfn = base_pfn;
+ unsigned i = cma->count >> pageblock_order;
+ struct zone *zone;
+
+ cma->bitmap = bitmap_zalloc(cma_bitmap_maxno(cma), GFP_KERNEL);
+- if (!cma->bitmap) {
+- cma->count = 0;
+- return -ENOMEM;
+- }
++ if (!cma->bitmap)
++ goto out_error;
+
+ WARN_ON_ONCE(!pfn_valid(pfn));
+ zone = page_zone(pfn_to_page(pfn));
+@@ -133,25 +131,22 @@ static int __init cma_activate_area(struct cma *cma)
+ spin_lock_init(&cma->mem_head_lock);
+ #endif
+
+- return 0;
++ return;
+
+ not_in_zone:
+- pr_err("CMA area %s could not be activated\n", cma->name);
+ bitmap_free(cma->bitmap);
++out_error:
+ cma->count = 0;
+- return -EINVAL;
++ pr_err("CMA area %s could not be activated\n", cma->name);
++ return;
+ }
+
+ static int __init cma_init_reserved_areas(void)
+ {
+ int i;
+
+- for (i = 0; i < cma_area_count; i++) {
+- int ret = cma_activate_area(&cma_areas[i]);
+-
+- if (ret)
+- return ret;
+- }
++ for (i = 0; i < cma_area_count; i++)
++ cma_activate_area(&cma_areas[i]);
+
+ return 0;
+ }
+diff --git a/mm/hugetlb.c b/mm/hugetlb.c
+index 461324757c750..e4599bc61e718 100644
+--- a/mm/hugetlb.c
++++ b/mm/hugetlb.c
+@@ -3840,7 +3840,7 @@ void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
+ continue;
+
+ ptl = huge_pte_lock(h, mm, ptep);
+- if (huge_pmd_unshare(mm, &address, ptep)) {
++ if (huge_pmd_unshare(mm, vma, &address, ptep)) {
+ spin_unlock(ptl);
+ /*
+ * We just unmapped a page of PMDs by clearing a PUD.
+@@ -4427,10 +4427,6 @@ vm_fault_t hugetlb_fault(struct mm_struct *mm, struct vm_area_struct *vma,
+ } else if (unlikely(is_hugetlb_entry_hwpoisoned(entry)))
+ return VM_FAULT_HWPOISON_LARGE |
+ VM_FAULT_SET_HINDEX(hstate_index(h));
+- } else {
+- ptep = huge_pte_alloc(mm, haddr, huge_page_size(h));
+- if (!ptep)
+- return VM_FAULT_OOM;
+ }
+
+ /*
+@@ -4907,7 +4903,7 @@ unsigned long hugetlb_change_protection(struct vm_area_struct *vma,
+ if (!ptep)
+ continue;
+ ptl = huge_pte_lock(h, mm, ptep);
+- if (huge_pmd_unshare(mm, &address, ptep)) {
++ if (huge_pmd_unshare(mm, vma, &address, ptep)) {
+ pages++;
+ spin_unlock(ptl);
+ shared_pmd = true;
+@@ -5201,25 +5197,21 @@ static bool vma_shareable(struct vm_area_struct *vma, unsigned long addr)
+ void adjust_range_if_pmd_sharing_possible(struct vm_area_struct *vma,
+ unsigned long *start, unsigned long *end)
+ {
+- unsigned long check_addr;
++ unsigned long a_start, a_end;
+
+ if (!(vma->vm_flags & VM_MAYSHARE))
+ return;
+
+- for (check_addr = *start; check_addr < *end; check_addr += PUD_SIZE) {
+- unsigned long a_start = check_addr & PUD_MASK;
+- unsigned long a_end = a_start + PUD_SIZE;
++ /* Extend the range to be PUD aligned for a worst case scenario */
++ a_start = ALIGN_DOWN(*start, PUD_SIZE);
++ a_end = ALIGN(*end, PUD_SIZE);
+
+- /*
+- * If sharing is possible, adjust start/end if necessary.
+- */
+- if (range_in_vma(vma, a_start, a_end)) {
+- if (a_start < *start)
+- *start = a_start;
+- if (a_end > *end)
+- *end = a_end;
+- }
+- }
++ /*
++ * Intersect the range with the vma range, since pmd sharing won't be
++ * across vma after all
++ */
++ *start = max(vma->vm_start, a_start);
++ *end = min(vma->vm_end, a_end);
+ }
+
+ /*
+@@ -5292,12 +5284,14 @@ out:
+ * returns: 1 successfully unmapped a shared pte page
+ * 0 the underlying pte page is not shared, or it is the last user
+ */
+-int huge_pmd_unshare(struct mm_struct *mm, unsigned long *addr, pte_t *ptep)
++int huge_pmd_unshare(struct mm_struct *mm, struct vm_area_struct *vma,
++ unsigned long *addr, pte_t *ptep)
+ {
+ pgd_t *pgd = pgd_offset(mm, *addr);
+ p4d_t *p4d = p4d_offset(pgd, *addr);
+ pud_t *pud = pud_offset(p4d, *addr);
+
++ i_mmap_assert_write_locked(vma->vm_file->f_mapping);
+ BUG_ON(page_count(virt_to_page(ptep)) == 0);
+ if (page_count(virt_to_page(ptep)) == 1)
+ return 0;
+@@ -5315,7 +5309,8 @@ pte_t *huge_pmd_share(struct mm_struct *mm, unsigned long addr, pud_t *pud)
+ return NULL;
+ }
+
+-int huge_pmd_unshare(struct mm_struct *mm, unsigned long *addr, pte_t *ptep)
++int huge_pmd_unshare(struct mm_struct *mm, struct vm_area_struct *vma,
++ unsigned long *addr, pte_t *ptep)
+ {
+ return 0;
+ }
+diff --git a/mm/khugepaged.c b/mm/khugepaged.c
+index e9e7a5659d647..38874fe112d58 100644
+--- a/mm/khugepaged.c
++++ b/mm/khugepaged.c
+@@ -1313,7 +1313,7 @@ void collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr)
+ {
+ unsigned long haddr = addr & HPAGE_PMD_MASK;
+ struct vm_area_struct *vma = find_vma(mm, haddr);
+- struct page *hpage = NULL;
++ struct page *hpage;
+ pte_t *start_pte, *pte;
+ pmd_t *pmd, _pmd;
+ spinlock_t *ptl;
+@@ -1333,9 +1333,17 @@ void collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr)
+ if (!hugepage_vma_check(vma, vma->vm_flags | VM_HUGEPAGE))
+ return;
+
++ hpage = find_lock_page(vma->vm_file->f_mapping,
++ linear_page_index(vma, haddr));
++ if (!hpage)
++ return;
++
++ if (!PageHead(hpage))
++ goto drop_hpage;
++
+ pmd = mm_find_pmd(mm, haddr);
+ if (!pmd)
+- return;
++ goto drop_hpage;
+
+ start_pte = pte_offset_map_lock(mm, pmd, haddr, &ptl);
+
+@@ -1354,30 +1362,11 @@ void collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr)
+
+ page = vm_normal_page(vma, addr, *pte);
+
+- if (!page || !PageCompound(page))
+- goto abort;
+-
+- if (!hpage) {
+- hpage = compound_head(page);
+- /*
+- * The mapping of the THP should not change.
+- *
+- * Note that uprobe, debugger, or MAP_PRIVATE may
+- * change the page table, but the new page will
+- * not pass PageCompound() check.
+- */
+- if (WARN_ON(hpage->mapping != vma->vm_file->f_mapping))
+- goto abort;
+- }
+-
+ /*
+- * Confirm the page maps to the correct subpage.
+- *
+- * Note that uprobe, debugger, or MAP_PRIVATE may change
+- * the page table, but the new page will not pass
+- * PageCompound() check.
++ * Note that uprobe, debugger, or MAP_PRIVATE may change the
++ * page table, but the new page will not be a subpage of hpage.
+ */
+- if (WARN_ON(hpage + i != page))
++ if (hpage + i != page)
+ goto abort;
+ count++;
+ }
+@@ -1396,21 +1385,26 @@ void collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr)
+ pte_unmap_unlock(start_pte, ptl);
+
+ /* step 3: set proper refcount and mm_counters. */
+- if (hpage) {
++ if (count) {
+ page_ref_sub(hpage, count);
+ add_mm_counter(vma->vm_mm, mm_counter_file(hpage), -count);
+ }
+
+ /* step 4: collapse pmd */
+ ptl = pmd_lock(vma->vm_mm, pmd);
+- _pmd = pmdp_collapse_flush(vma, addr, pmd);
++ _pmd = pmdp_collapse_flush(vma, haddr, pmd);
+ spin_unlock(ptl);
+ mm_dec_nr_ptes(mm);
+ pte_free(mm, pmd_pgtable(_pmd));
++
++drop_hpage:
++ unlock_page(hpage);
++ put_page(hpage);
+ return;
+
+ abort:
+ pte_unmap_unlock(start_pte, ptl);
++ goto drop_hpage;
+ }
+
+ static int khugepaged_collapse_pte_mapped_thps(struct mm_slot *mm_slot)
+@@ -1439,6 +1433,7 @@ out:
+ static void retract_page_tables(struct address_space *mapping, pgoff_t pgoff)
+ {
+ struct vm_area_struct *vma;
++ struct mm_struct *mm;
+ unsigned long addr;
+ pmd_t *pmd, _pmd;
+
+@@ -1467,7 +1462,8 @@ static void retract_page_tables(struct address_space *mapping, pgoff_t pgoff)
+ continue;
+ if (vma->vm_end < addr + HPAGE_PMD_SIZE)
+ continue;
+- pmd = mm_find_pmd(vma->vm_mm, addr);
++ mm = vma->vm_mm;
++ pmd = mm_find_pmd(mm, addr);
+ if (!pmd)
+ continue;
+ /*
+@@ -1477,17 +1473,19 @@ static void retract_page_tables(struct address_space *mapping, pgoff_t pgoff)
+ * mmap_sem while holding page lock. Fault path does it in
+ * reverse order. Trylock is a way to avoid deadlock.
+ */
+- if (down_write_trylock(&vma->vm_mm->mmap_sem)) {
+- spinlock_t *ptl = pmd_lock(vma->vm_mm, pmd);
+- /* assume page table is clear */
+- _pmd = pmdp_collapse_flush(vma, addr, pmd);
+- spin_unlock(ptl);
+- up_write(&vma->vm_mm->mmap_sem);
+- mm_dec_nr_ptes(vma->vm_mm);
+- pte_free(vma->vm_mm, pmd_pgtable(_pmd));
++ if (down_write_trylock(&mm->mmap_sem)) {
++ if (!khugepaged_test_exit(mm)) {
++ spinlock_t *ptl = pmd_lock(mm, pmd);
++ /* assume page table is clear */
++ _pmd = pmdp_collapse_flush(vma, addr, pmd);
++ spin_unlock(ptl);
++ mm_dec_nr_ptes(mm);
++ pte_free(mm, pmd_pgtable(_pmd));
++ }
++ up_write(&mm->mmap_sem);
+ } else {
+ /* Try again later */
+- khugepaged_add_pte_mapped_thp(vma->vm_mm, addr);
++ khugepaged_add_pte_mapped_thp(mm, addr);
+ }
+ }
+ i_mmap_unlock_write(mapping);
+diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
+index 744a3ea284b78..7f28c5f7e4bb8 100644
+--- a/mm/memory_hotplug.c
++++ b/mm/memory_hotplug.c
+@@ -1745,7 +1745,7 @@ static int __ref try_remove_memory(int nid, u64 start, u64 size)
+ */
+ rc = walk_memory_blocks(start, size, NULL, check_memblock_offlined_cb);
+ if (rc)
+- goto done;
++ return rc;
+
+ /* remove memmap entry */
+ firmware_map_remove(start, start + size, "System RAM");
+@@ -1765,9 +1765,8 @@ static int __ref try_remove_memory(int nid, u64 start, u64 size)
+
+ try_offline_node(nid);
+
+-done:
+ mem_hotplug_done();
+- return rc;
++ return 0;
+ }
+
+ /**
+diff --git a/mm/page_counter.c b/mm/page_counter.c
+index c56db2d5e1592..b4663844c9b37 100644
+--- a/mm/page_counter.c
++++ b/mm/page_counter.c
+@@ -72,7 +72,7 @@ void page_counter_charge(struct page_counter *counter, unsigned long nr_pages)
+ long new;
+
+ new = atomic_long_add_return(nr_pages, &c->usage);
+- propagate_protected_usage(counter, new);
++ propagate_protected_usage(c, new);
+ /*
+ * This is indeed racy, but we can live with some
+ * inaccuracy in the watermark.
+@@ -116,7 +116,7 @@ bool page_counter_try_charge(struct page_counter *counter,
+ new = atomic_long_add_return(nr_pages, &c->usage);
+ if (new > c->max) {
+ atomic_long_sub(nr_pages, &c->usage);
+- propagate_protected_usage(counter, new);
++ propagate_protected_usage(c, new);
+ /*
+ * This is racy, but we can live with some
+ * inaccuracy in the failcnt.
+@@ -125,7 +125,7 @@ bool page_counter_try_charge(struct page_counter *counter,
+ *fail = c;
+ goto failed;
+ }
+- propagate_protected_usage(counter, new);
++ propagate_protected_usage(c, new);
+ /*
+ * Just like with failcnt, we can live with some
+ * inaccuracy in the watermark.
+diff --git a/mm/rmap.c b/mm/rmap.c
+index f79a206b271a6..f3c5562bc5f40 100644
+--- a/mm/rmap.c
++++ b/mm/rmap.c
+@@ -1458,7 +1458,7 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
+ * do this outside rmap routines.
+ */
+ VM_BUG_ON(!(flags & TTU_RMAP_LOCKED));
+- if (huge_pmd_unshare(mm, &address, pvmw.pte)) {
++ if (huge_pmd_unshare(mm, vma, &address, pvmw.pte)) {
+ /*
+ * huge_pmd_unshare unmapped an entire PMD
+ * page. There is no way of knowing exactly
+diff --git a/mm/shuffle.c b/mm/shuffle.c
+index 44406d9977c77..dd13ab851b3ee 100644
+--- a/mm/shuffle.c
++++ b/mm/shuffle.c
+@@ -58,25 +58,25 @@ module_param_call(shuffle, shuffle_store, shuffle_show, &shuffle_param, 0400);
+ * For two pages to be swapped in the shuffle, they must be free (on a
+ * 'free_area' lru), have the same order, and have the same migratetype.
+ */
+-static struct page * __meminit shuffle_valid_page(unsigned long pfn, int order)
++static struct page * __meminit shuffle_valid_page(struct zone *zone,
++ unsigned long pfn, int order)
+ {
+- struct page *page;
++ struct page *page = pfn_to_online_page(pfn);
+
+ /*
+ * Given we're dealing with randomly selected pfns in a zone we
+ * need to ask questions like...
+ */
+
+- /* ...is the pfn even in the memmap? */
+- if (!pfn_valid_within(pfn))
++ /* ... is the page managed by the buddy? */
++ if (!page)
+ return NULL;
+
+- /* ...is the pfn in a present section or a hole? */
+- if (!pfn_in_present_section(pfn))
++ /* ... is the page assigned to the same zone? */
++ if (page_zone(page) != zone)
+ return NULL;
+
+ /* ...is the page free and currently on a free_area list? */
+- page = pfn_to_page(pfn);
+ if (!PageBuddy(page))
+ return NULL;
+
+@@ -123,7 +123,7 @@ void __meminit __shuffle_zone(struct zone *z)
+ * page_j randomly selected in the span @zone_start_pfn to
+ * @spanned_pages.
+ */
+- page_i = shuffle_valid_page(i, order);
++ page_i = shuffle_valid_page(z, i, order);
+ if (!page_i)
+ continue;
+
+@@ -137,7 +137,7 @@ void __meminit __shuffle_zone(struct zone *z)
+ j = z->zone_start_pfn +
+ ALIGN_DOWN(get_random_long() % z->spanned_pages,
+ order_pages);
+- page_j = shuffle_valid_page(j, order);
++ page_j = shuffle_valid_page(z, j, order);
+ if (page_j && page_j != page_i)
+ break;
+ }
+diff --git a/net/compat.c b/net/compat.c
+index 4bed96e84d9a6..32ea0a04a665c 100644
+--- a/net/compat.c
++++ b/net/compat.c
+@@ -307,6 +307,7 @@ void scm_detach_fds_compat(struct msghdr *kmsg, struct scm_cookie *scm)
+ break;
+ }
+ /* Bump the usage count and install the file. */
++ __receive_sock(fp[i]);
+ fd_install(new_fd, get_file(fp[i]));
+ }
+
+diff --git a/net/core/sock.c b/net/core/sock.c
+index 7b0feeea61b6b..f97c5af8961ca 100644
+--- a/net/core/sock.c
++++ b/net/core/sock.c
+@@ -2753,6 +2753,27 @@ int sock_no_mmap(struct file *file, struct socket *sock, struct vm_area_struct *
+ }
+ EXPORT_SYMBOL(sock_no_mmap);
+
++/*
++ * When a file is received (via SCM_RIGHTS, etc), we must bump the
++ * various sock-based usage counts.
++ */
++void __receive_sock(struct file *file)
++{
++ struct socket *sock;
++ int error;
++
++ /*
++ * The resulting value of "error" is ignored here since we only
++ * need to take action when the file is a socket and testing
++ * "sock" for NULL is sufficient.
++ */
++ sock = sock_from_file(file, &error);
++ if (sock) {
++ sock_update_netprioidx(&sock->sk->sk_cgrp_data);
++ sock_update_classid(&sock->sk->sk_cgrp_data);
++ }
++}
++
+ ssize_t sock_no_sendpage(struct socket *sock, struct page *page, int offset, size_t size, int flags)
+ {
+ ssize_t res;
+diff --git a/net/mac80211/sta_info.c b/net/mac80211/sta_info.c
+index cd8487bc6fc2e..032ec0303a1f7 100644
+--- a/net/mac80211/sta_info.c
++++ b/net/mac80211/sta_info.c
+@@ -1050,7 +1050,7 @@ static void __sta_info_destroy_part2(struct sta_info *sta)
+ might_sleep();
+ lockdep_assert_held(&local->sta_mtx);
+
+- while (sta->sta_state == IEEE80211_STA_AUTHORIZED) {
++ if (sta->sta_state == IEEE80211_STA_AUTHORIZED) {
+ ret = sta_info_move_state(sta, IEEE80211_STA_ASSOC);
+ WARN_ON_ONCE(ret);
+ }
+diff --git a/scripts/recordmcount.c b/scripts/recordmcount.c
+index e59022b3f1254..b9c2ee7ab43fa 100644
+--- a/scripts/recordmcount.c
++++ b/scripts/recordmcount.c
+@@ -42,6 +42,8 @@
+ #define R_ARM_THM_CALL 10
+ #define R_ARM_CALL 28
+
++#define R_AARCH64_CALL26 283
++
+ static int fd_map; /* File descriptor for file being modified. */
+ static int mmap_failed; /* Boolean flag. */
+ static char gpfx; /* prefix for global symbol name (sometimes '_') */
+diff --git a/security/integrity/ima/ima_policy.c b/security/integrity/ima/ima_policy.c
+index 3e3e568c81309..a59bf2f5b2d4f 100644
+--- a/security/integrity/ima/ima_policy.c
++++ b/security/integrity/ima/ima_policy.c
+@@ -1035,6 +1035,11 @@ static bool ima_validate_rule(struct ima_rule_entry *entry)
+ return false;
+ }
+
++ /* Ensure that combinations of flags are compatible with each other */
++ if (entry->flags & IMA_CHECK_BLACKLIST &&
++ !(entry->flags & IMA_MODSIG_ALLOWED))
++ return false;
++
+ return true;
+ }
+
+@@ -1371,9 +1376,17 @@ static int ima_parse_rule(char *rule, struct ima_rule_entry *entry)
+ result = -EINVAL;
+ break;
+ case Opt_appraise_flag:
++ if (entry->action != APPRAISE) {
++ result = -EINVAL;
++ break;
++ }
++
+ ima_log_string(ab, "appraise_flag", args[0].from);
+- if (strstr(args[0].from, "blacklist"))
++ if (IS_ENABLED(CONFIG_IMA_APPRAISE_MODSIG) &&
++ strstr(args[0].from, "blacklist"))
+ entry->flags |= IMA_CHECK_BLACKLIST;
++ else
++ result = -EINVAL;
+ break;
+ case Opt_permit_directio:
+ entry->flags |= IMA_PERMIT_DIRECTIO;
+diff --git a/sound/pci/echoaudio/echoaudio.c b/sound/pci/echoaudio/echoaudio.c
+index 0941a7a17623a..456219a665a79 100644
+--- a/sound/pci/echoaudio/echoaudio.c
++++ b/sound/pci/echoaudio/echoaudio.c
+@@ -2158,7 +2158,6 @@ static int snd_echo_resume(struct device *dev)
+ if (err < 0) {
+ kfree(commpage_bak);
+ dev_err(dev, "resume init_hw err=%d\n", err);
+- snd_echo_free(chip);
+ return err;
+ }
+
+@@ -2185,7 +2184,6 @@ static int snd_echo_resume(struct device *dev)
+ if (request_irq(pci->irq, snd_echo_interrupt, IRQF_SHARED,
+ KBUILD_MODNAME, chip)) {
+ dev_err(chip->card->dev, "cannot grab irq\n");
+- snd_echo_free(chip);
+ return -EBUSY;
+ }
+ chip->irq = pci->irq;
+diff --git a/sound/soc/tegra/tegra_alc5632.c b/sound/soc/tegra/tegra_alc5632.c
+index ec39ecba1e8b8..2839c6cb8c386 100644
+--- a/sound/soc/tegra/tegra_alc5632.c
++++ b/sound/soc/tegra/tegra_alc5632.c
+@@ -205,13 +205,11 @@ static int tegra_alc5632_probe(struct platform_device *pdev)
+ if (ret) {
+ dev_err(&pdev->dev, "snd_soc_register_card failed (%d)\n",
+ ret);
+- goto err_fini_utils;
++ goto err_put_cpu_of_node;
+ }
+
+ return 0;
+
+-err_fini_utils:
+- tegra_asoc_utils_fini(&alc5632->util_data);
+ err_put_cpu_of_node:
+ of_node_put(tegra_alc5632_dai.cpus->of_node);
+ tegra_alc5632_dai.cpus->of_node = NULL;
+@@ -226,12 +224,9 @@ err:
+ static int tegra_alc5632_remove(struct platform_device *pdev)
+ {
+ struct snd_soc_card *card = platform_get_drvdata(pdev);
+- struct tegra_alc5632 *machine = snd_soc_card_get_drvdata(card);
+
+ snd_soc_unregister_card(card);
+
+- tegra_asoc_utils_fini(&machine->util_data);
+-
+ of_node_put(tegra_alc5632_dai.cpus->of_node);
+ tegra_alc5632_dai.cpus->of_node = NULL;
+ tegra_alc5632_dai.platforms->of_node = NULL;
+diff --git a/sound/soc/tegra/tegra_asoc_utils.c b/sound/soc/tegra/tegra_asoc_utils.c
+index 536a578e95126..587f62a288d14 100644
+--- a/sound/soc/tegra/tegra_asoc_utils.c
++++ b/sound/soc/tegra/tegra_asoc_utils.c
+@@ -60,8 +60,6 @@ int tegra_asoc_utils_set_rate(struct tegra_asoc_utils_data *data, int srate,
+ data->set_mclk = 0;
+
+ clk_disable_unprepare(data->clk_cdev1);
+- clk_disable_unprepare(data->clk_pll_a_out0);
+- clk_disable_unprepare(data->clk_pll_a);
+
+ err = clk_set_rate(data->clk_pll_a, new_baseclock);
+ if (err) {
+@@ -77,18 +75,6 @@ int tegra_asoc_utils_set_rate(struct tegra_asoc_utils_data *data, int srate,
+
+ /* Don't set cdev1/extern1 rate; it's locked to pll_a_out0 */
+
+- err = clk_prepare_enable(data->clk_pll_a);
+- if (err) {
+- dev_err(data->dev, "Can't enable pll_a: %d\n", err);
+- return err;
+- }
+-
+- err = clk_prepare_enable(data->clk_pll_a_out0);
+- if (err) {
+- dev_err(data->dev, "Can't enable pll_a_out0: %d\n", err);
+- return err;
+- }
+-
+ err = clk_prepare_enable(data->clk_cdev1);
+ if (err) {
+ dev_err(data->dev, "Can't enable cdev1: %d\n", err);
+@@ -109,8 +95,6 @@ int tegra_asoc_utils_set_ac97_rate(struct tegra_asoc_utils_data *data)
+ int err;
+
+ clk_disable_unprepare(data->clk_cdev1);
+- clk_disable_unprepare(data->clk_pll_a_out0);
+- clk_disable_unprepare(data->clk_pll_a);
+
+ /*
+ * AC97 rate is fixed at 24.576MHz and is used for both the host
+@@ -130,18 +114,6 @@ int tegra_asoc_utils_set_ac97_rate(struct tegra_asoc_utils_data *data)
+
+ /* Don't set cdev1/extern1 rate; it's locked to pll_a_out0 */
+
+- err = clk_prepare_enable(data->clk_pll_a);
+- if (err) {
+- dev_err(data->dev, "Can't enable pll_a: %d\n", err);
+- return err;
+- }
+-
+- err = clk_prepare_enable(data->clk_pll_a_out0);
+- if (err) {
+- dev_err(data->dev, "Can't enable pll_a_out0: %d\n", err);
+- return err;
+- }
+-
+ err = clk_prepare_enable(data->clk_cdev1);
+ if (err) {
+ dev_err(data->dev, "Can't enable cdev1: %d\n", err);
+@@ -158,6 +130,7 @@ EXPORT_SYMBOL_GPL(tegra_asoc_utils_set_ac97_rate);
+ int tegra_asoc_utils_init(struct tegra_asoc_utils_data *data,
+ struct device *dev)
+ {
++ struct clk *clk_out_1, *clk_extern1;
+ int ret;
+
+ data->dev = dev;
+@@ -175,52 +148,78 @@ int tegra_asoc_utils_init(struct tegra_asoc_utils_data *data,
+ return -EINVAL;
+ }
+
+- data->clk_pll_a = clk_get(dev, "pll_a");
++ data->clk_pll_a = devm_clk_get(dev, "pll_a");
+ if (IS_ERR(data->clk_pll_a)) {
+ dev_err(data->dev, "Can't retrieve clk pll_a\n");
+- ret = PTR_ERR(data->clk_pll_a);
+- goto err;
++ return PTR_ERR(data->clk_pll_a);
+ }
+
+- data->clk_pll_a_out0 = clk_get(dev, "pll_a_out0");
++ data->clk_pll_a_out0 = devm_clk_get(dev, "pll_a_out0");
+ if (IS_ERR(data->clk_pll_a_out0)) {
+ dev_err(data->dev, "Can't retrieve clk pll_a_out0\n");
+- ret = PTR_ERR(data->clk_pll_a_out0);
+- goto err_put_pll_a;
++ return PTR_ERR(data->clk_pll_a_out0);
+ }
+
+- data->clk_cdev1 = clk_get(dev, "mclk");
++ data->clk_cdev1 = devm_clk_get(dev, "mclk");
+ if (IS_ERR(data->clk_cdev1)) {
+ dev_err(data->dev, "Can't retrieve clk cdev1\n");
+- ret = PTR_ERR(data->clk_cdev1);
+- goto err_put_pll_a_out0;
++ return PTR_ERR(data->clk_cdev1);
+ }
+
+- ret = tegra_asoc_utils_set_rate(data, 44100, 256 * 44100);
+- if (ret)
+- goto err_put_cdev1;
++ /*
++ * If clock parents are not set in DT, configure here to use clk_out_1
++ * as mclk and extern1 as parent for Tegra30 and higher.
++ */
++ if (!of_find_property(dev->of_node, "assigned-clock-parents", NULL) &&
++ data->soc > TEGRA_ASOC_UTILS_SOC_TEGRA20) {
++ dev_warn(data->dev,
++ "Configuring clocks for a legacy device-tree\n");
++ dev_warn(data->dev,
++ "Please update DT to use assigned-clock-parents\n");
++ clk_extern1 = devm_clk_get(dev, "extern1");
++ if (IS_ERR(clk_extern1)) {
++ dev_err(data->dev, "Can't retrieve clk extern1\n");
++ return PTR_ERR(clk_extern1);
++ }
++
++ ret = clk_set_parent(clk_extern1, data->clk_pll_a_out0);
++ if (ret < 0) {
++ dev_err(data->dev,
++ "Set parent failed for clk extern1\n");
++ return ret;
++ }
++
++ clk_out_1 = devm_clk_get(dev, "pmc_clk_out_1");
++ if (IS_ERR(clk_out_1)) {
++ dev_err(data->dev, "Can't retrieve pmc_clk_out_1\n");
++ return PTR_ERR(clk_out_1);
++ }
++
++ ret = clk_set_parent(clk_out_1, clk_extern1);
++ if (ret < 0) {
++ dev_err(data->dev,
++ "Set parent failed for pmc_clk_out_1\n");
++ return ret;
++ }
++
++ data->clk_cdev1 = clk_out_1;
++ }
+
+- return 0;
++ /*
++ * FIXME: There is some unknown dependency between audio mclk disable
++ * and suspend-resume functionality on Tegra30, although audio mclk is
++ * only needed for audio.
++ */
++ ret = clk_prepare_enable(data->clk_cdev1);
++ if (ret) {
++ dev_err(data->dev, "Can't enable cdev1: %d\n", ret);
++ return ret;
++ }
+
+-err_put_cdev1:
+- clk_put(data->clk_cdev1);
+-err_put_pll_a_out0:
+- clk_put(data->clk_pll_a_out0);
+-err_put_pll_a:
+- clk_put(data->clk_pll_a);
+-err:
+- return ret;
++ return 0;
+ }
+ EXPORT_SYMBOL_GPL(tegra_asoc_utils_init);
+
+-void tegra_asoc_utils_fini(struct tegra_asoc_utils_data *data)
+-{
+- clk_put(data->clk_cdev1);
+- clk_put(data->clk_pll_a_out0);
+- clk_put(data->clk_pll_a);
+-}
+-EXPORT_SYMBOL_GPL(tegra_asoc_utils_fini);
+-
+ MODULE_AUTHOR("Stephen Warren <swarren@nvidia.com>");
+ MODULE_DESCRIPTION("Tegra ASoC utility code");
+ MODULE_LICENSE("GPL");
+diff --git a/sound/soc/tegra/tegra_asoc_utils.h b/sound/soc/tegra/tegra_asoc_utils.h
+index 0c13818dee759..a34439587d59f 100644
+--- a/sound/soc/tegra/tegra_asoc_utils.h
++++ b/sound/soc/tegra/tegra_asoc_utils.h
+@@ -34,6 +34,5 @@ int tegra_asoc_utils_set_rate(struct tegra_asoc_utils_data *data, int srate,
+ int tegra_asoc_utils_set_ac97_rate(struct tegra_asoc_utils_data *data);
+ int tegra_asoc_utils_init(struct tegra_asoc_utils_data *data,
+ struct device *dev);
+-void tegra_asoc_utils_fini(struct tegra_asoc_utils_data *data);
+
+ #endif
+diff --git a/sound/soc/tegra/tegra_max98090.c b/sound/soc/tegra/tegra_max98090.c
+index d800b62b36f83..ec9050516cd7e 100644
+--- a/sound/soc/tegra/tegra_max98090.c
++++ b/sound/soc/tegra/tegra_max98090.c
+@@ -218,19 +218,18 @@ static int tegra_max98090_probe(struct platform_device *pdev)
+
+ ret = snd_soc_of_parse_card_name(card, "nvidia,model");
+ if (ret)
+- goto err;
++ return ret;
+
+ ret = snd_soc_of_parse_audio_routing(card, "nvidia,audio-routing");
+ if (ret)
+- goto err;
++ return ret;
+
+ tegra_max98090_dai.codecs->of_node = of_parse_phandle(np,
+ "nvidia,audio-codec", 0);
+ if (!tegra_max98090_dai.codecs->of_node) {
+ dev_err(&pdev->dev,
+ "Property 'nvidia,audio-codec' missing or invalid\n");
+- ret = -EINVAL;
+- goto err;
++ return -EINVAL;
+ }
+
+ tegra_max98090_dai.cpus->of_node = of_parse_phandle(np,
+@@ -238,40 +237,31 @@ static int tegra_max98090_probe(struct platform_device *pdev)
+ if (!tegra_max98090_dai.cpus->of_node) {
+ dev_err(&pdev->dev,
+ "Property 'nvidia,i2s-controller' missing or invalid\n");
+- ret = -EINVAL;
+- goto err;
++ return -EINVAL;
+ }
+
+ tegra_max98090_dai.platforms->of_node = tegra_max98090_dai.cpus->of_node;
+
+ ret = tegra_asoc_utils_init(&machine->util_data, &pdev->dev);
+ if (ret)
+- goto err;
++ return ret;
+
+ ret = snd_soc_register_card(card);
+ if (ret) {
+ dev_err(&pdev->dev, "snd_soc_register_card failed (%d)\n",
+ ret);
+- goto err_fini_utils;
++ return ret;
+ }
+
+ return 0;
+-
+-err_fini_utils:
+- tegra_asoc_utils_fini(&machine->util_data);
+-err:
+- return ret;
+ }
+
+ static int tegra_max98090_remove(struct platform_device *pdev)
+ {
+ struct snd_soc_card *card = platform_get_drvdata(pdev);
+- struct tegra_max98090 *machine = snd_soc_card_get_drvdata(card);
+
+ snd_soc_unregister_card(card);
+
+- tegra_asoc_utils_fini(&machine->util_data);
+-
+ return 0;
+ }
+
+diff --git a/sound/soc/tegra/tegra_rt5640.c b/sound/soc/tegra/tegra_rt5640.c
+index 9878bc3eb89e9..201d132731f9b 100644
+--- a/sound/soc/tegra/tegra_rt5640.c
++++ b/sound/soc/tegra/tegra_rt5640.c
+@@ -164,19 +164,18 @@ static int tegra_rt5640_probe(struct platform_device *pdev)
+
+ ret = snd_soc_of_parse_card_name(card, "nvidia,model");
+ if (ret)
+- goto err;
++ return ret;
+
+ ret = snd_soc_of_parse_audio_routing(card, "nvidia,audio-routing");
+ if (ret)
+- goto err;
++ return ret;
+
+ tegra_rt5640_dai.codecs->of_node = of_parse_phandle(np,
+ "nvidia,audio-codec", 0);
+ if (!tegra_rt5640_dai.codecs->of_node) {
+ dev_err(&pdev->dev,
+ "Property 'nvidia,audio-codec' missing or invalid\n");
+- ret = -EINVAL;
+- goto err;
++ return -EINVAL;
+ }
+
+ tegra_rt5640_dai.cpus->of_node = of_parse_phandle(np,
+@@ -184,40 +183,31 @@ static int tegra_rt5640_probe(struct platform_device *pdev)
+ if (!tegra_rt5640_dai.cpus->of_node) {
+ dev_err(&pdev->dev,
+ "Property 'nvidia,i2s-controller' missing or invalid\n");
+- ret = -EINVAL;
+- goto err;
++ return -EINVAL;
+ }
+
+ tegra_rt5640_dai.platforms->of_node = tegra_rt5640_dai.cpus->of_node;
+
+ ret = tegra_asoc_utils_init(&machine->util_data, &pdev->dev);
+ if (ret)
+- goto err;
++ return ret;
+
+ ret = snd_soc_register_card(card);
+ if (ret) {
+ dev_err(&pdev->dev, "snd_soc_register_card failed (%d)\n",
+ ret);
+- goto err_fini_utils;
++ return ret;
+ }
+
+ return 0;
+-
+-err_fini_utils:
+- tegra_asoc_utils_fini(&machine->util_data);
+-err:
+- return ret;
+ }
+
+ static int tegra_rt5640_remove(struct platform_device *pdev)
+ {
+ struct snd_soc_card *card = platform_get_drvdata(pdev);
+- struct tegra_rt5640 *machine = snd_soc_card_get_drvdata(card);
+
+ snd_soc_unregister_card(card);
+
+- tegra_asoc_utils_fini(&machine->util_data);
+-
+ return 0;
+ }
+
+diff --git a/sound/soc/tegra/tegra_rt5677.c b/sound/soc/tegra/tegra_rt5677.c
+index 5821313db977a..8f71e21f6ee97 100644
+--- a/sound/soc/tegra/tegra_rt5677.c
++++ b/sound/soc/tegra/tegra_rt5677.c
+@@ -270,13 +270,11 @@ static int tegra_rt5677_probe(struct platform_device *pdev)
+ if (ret) {
+ dev_err(&pdev->dev, "snd_soc_register_card failed (%d)\n",
+ ret);
+- goto err_fini_utils;
++ goto err_put_cpu_of_node;
+ }
+
+ return 0;
+
+-err_fini_utils:
+- tegra_asoc_utils_fini(&machine->util_data);
+ err_put_cpu_of_node:
+ of_node_put(tegra_rt5677_dai.cpus->of_node);
+ tegra_rt5677_dai.cpus->of_node = NULL;
+@@ -291,12 +289,9 @@ err:
+ static int tegra_rt5677_remove(struct platform_device *pdev)
+ {
+ struct snd_soc_card *card = platform_get_drvdata(pdev);
+- struct tegra_rt5677 *machine = snd_soc_card_get_drvdata(card);
+
+ snd_soc_unregister_card(card);
+
+- tegra_asoc_utils_fini(&machine->util_data);
+-
+ tegra_rt5677_dai.platforms->of_node = NULL;
+ of_node_put(tegra_rt5677_dai.codecs->of_node);
+ tegra_rt5677_dai.codecs->of_node = NULL;
+diff --git a/sound/soc/tegra/tegra_sgtl5000.c b/sound/soc/tegra/tegra_sgtl5000.c
+index dc411ba2e36d5..692fcc3d7d6e6 100644
+--- a/sound/soc/tegra/tegra_sgtl5000.c
++++ b/sound/soc/tegra/tegra_sgtl5000.c
+@@ -156,13 +156,11 @@ static int tegra_sgtl5000_driver_probe(struct platform_device *pdev)
+ if (ret) {
+ dev_err(&pdev->dev, "snd_soc_register_card failed (%d)\n",
+ ret);
+- goto err_fini_utils;
++ goto err_put_cpu_of_node;
+ }
+
+ return 0;
+
+-err_fini_utils:
+- tegra_asoc_utils_fini(&machine->util_data);
+ err_put_cpu_of_node:
+ of_node_put(tegra_sgtl5000_dai.cpus->of_node);
+ tegra_sgtl5000_dai.cpus->of_node = NULL;
+@@ -177,13 +175,10 @@ err:
+ static int tegra_sgtl5000_driver_remove(struct platform_device *pdev)
+ {
+ struct snd_soc_card *card = platform_get_drvdata(pdev);
+- struct tegra_sgtl5000 *machine = snd_soc_card_get_drvdata(card);
+ int ret;
+
+ ret = snd_soc_unregister_card(card);
+
+- tegra_asoc_utils_fini(&machine->util_data);
+-
+ of_node_put(tegra_sgtl5000_dai.cpus->of_node);
+ tegra_sgtl5000_dai.cpus->of_node = NULL;
+ tegra_sgtl5000_dai.platforms->of_node = NULL;
+diff --git a/sound/soc/tegra/tegra_wm8753.c b/sound/soc/tegra/tegra_wm8753.c
+index 0d653a605358c..2ee2ed190872d 100644
+--- a/sound/soc/tegra/tegra_wm8753.c
++++ b/sound/soc/tegra/tegra_wm8753.c
+@@ -127,19 +127,18 @@ static int tegra_wm8753_driver_probe(struct platform_device *pdev)
+
+ ret = snd_soc_of_parse_card_name(card, "nvidia,model");
+ if (ret)
+- goto err;
++ return ret;
+
+ ret = snd_soc_of_parse_audio_routing(card, "nvidia,audio-routing");
+ if (ret)
+- goto err;
++ return ret;
+
+ tegra_wm8753_dai.codecs->of_node = of_parse_phandle(np,
+ "nvidia,audio-codec", 0);
+ if (!tegra_wm8753_dai.codecs->of_node) {
+ dev_err(&pdev->dev,
+ "Property 'nvidia,audio-codec' missing or invalid\n");
+- ret = -EINVAL;
+- goto err;
++ return -EINVAL;
+ }
+
+ tegra_wm8753_dai.cpus->of_node = of_parse_phandle(np,
+@@ -147,40 +146,31 @@ static int tegra_wm8753_driver_probe(struct platform_device *pdev)
+ if (!tegra_wm8753_dai.cpus->of_node) {
+ dev_err(&pdev->dev,
+ "Property 'nvidia,i2s-controller' missing or invalid\n");
+- ret = -EINVAL;
+- goto err;
++ return -EINVAL;
+ }
+
+ tegra_wm8753_dai.platforms->of_node = tegra_wm8753_dai.cpus->of_node;
+
+ ret = tegra_asoc_utils_init(&machine->util_data, &pdev->dev);
+ if (ret)
+- goto err;
++ return ret;
+
+ ret = snd_soc_register_card(card);
+ if (ret) {
+ dev_err(&pdev->dev, "snd_soc_register_card failed (%d)\n",
+ ret);
+- goto err_fini_utils;
++ return ret;
+ }
+
+ return 0;
+-
+-err_fini_utils:
+- tegra_asoc_utils_fini(&machine->util_data);
+-err:
+- return ret;
+ }
+
+ static int tegra_wm8753_driver_remove(struct platform_device *pdev)
+ {
+ struct snd_soc_card *card = platform_get_drvdata(pdev);
+- struct tegra_wm8753 *machine = snd_soc_card_get_drvdata(card);
+
+ snd_soc_unregister_card(card);
+
+- tegra_asoc_utils_fini(&machine->util_data);
+-
+ return 0;
+ }
+
+diff --git a/sound/soc/tegra/tegra_wm8903.c b/sound/soc/tegra/tegra_wm8903.c
+index 3aca354f9e08b..7bf159965c4dd 100644
+--- a/sound/soc/tegra/tegra_wm8903.c
++++ b/sound/soc/tegra/tegra_wm8903.c
+@@ -323,19 +323,18 @@ static int tegra_wm8903_driver_probe(struct platform_device *pdev)
+
+ ret = snd_soc_of_parse_card_name(card, "nvidia,model");
+ if (ret)
+- goto err;
++ return ret;
+
+ ret = snd_soc_of_parse_audio_routing(card, "nvidia,audio-routing");
+ if (ret)
+- goto err;
++ return ret;
+
+ tegra_wm8903_dai.codecs->of_node = of_parse_phandle(np,
+ "nvidia,audio-codec", 0);
+ if (!tegra_wm8903_dai.codecs->of_node) {
+ dev_err(&pdev->dev,
+ "Property 'nvidia,audio-codec' missing or invalid\n");
+- ret = -EINVAL;
+- goto err;
++ return -EINVAL;
+ }
+
+ tegra_wm8903_dai.cpus->of_node = of_parse_phandle(np,
+@@ -343,40 +342,31 @@ static int tegra_wm8903_driver_probe(struct platform_device *pdev)
+ if (!tegra_wm8903_dai.cpus->of_node) {
+ dev_err(&pdev->dev,
+ "Property 'nvidia,i2s-controller' missing or invalid\n");
+- ret = -EINVAL;
+- goto err;
++ return -EINVAL;
+ }
+
+ tegra_wm8903_dai.platforms->of_node = tegra_wm8903_dai.cpus->of_node;
+
+ ret = tegra_asoc_utils_init(&machine->util_data, &pdev->dev);
+ if (ret)
+- goto err;
++ return ret;
+
+ ret = snd_soc_register_card(card);
+ if (ret) {
+ dev_err(&pdev->dev, "snd_soc_register_card failed (%d)\n",
+ ret);
+- goto err_fini_utils;
++ return ret;
+ }
+
+ return 0;
+-
+-err_fini_utils:
+- tegra_asoc_utils_fini(&machine->util_data);
+-err:
+- return ret;
+ }
+
+ static int tegra_wm8903_driver_remove(struct platform_device *pdev)
+ {
+ struct snd_soc_card *card = platform_get_drvdata(pdev);
+- struct tegra_wm8903 *machine = snd_soc_card_get_drvdata(card);
+
+ snd_soc_unregister_card(card);
+
+- tegra_asoc_utils_fini(&machine->util_data);
+-
+ return 0;
+ }
+
+diff --git a/sound/soc/tegra/tegra_wm9712.c b/sound/soc/tegra/tegra_wm9712.c
+index b85bd9f890737..726edfa21a29d 100644
+--- a/sound/soc/tegra/tegra_wm9712.c
++++ b/sound/soc/tegra/tegra_wm9712.c
+@@ -113,19 +113,17 @@ static int tegra_wm9712_driver_probe(struct platform_device *pdev)
+
+ ret = tegra_asoc_utils_set_ac97_rate(&machine->util_data);
+ if (ret)
+- goto asoc_utils_fini;
++ goto codec_unregister;
+
+ ret = snd_soc_register_card(card);
+ if (ret) {
+ dev_err(&pdev->dev, "snd_soc_register_card failed (%d)\n",
+ ret);
+- goto asoc_utils_fini;
++ goto codec_unregister;
+ }
+
+ return 0;
+
+-asoc_utils_fini:
+- tegra_asoc_utils_fini(&machine->util_data);
+ codec_unregister:
+ platform_device_del(machine->codec);
+ codec_put:
+@@ -140,8 +138,6 @@ static int tegra_wm9712_driver_remove(struct platform_device *pdev)
+
+ snd_soc_unregister_card(card);
+
+- tegra_asoc_utils_fini(&machine->util_data);
+-
+ platform_device_unregister(machine->codec);
+
+ return 0;
+diff --git a/sound/soc/tegra/trimslice.c b/sound/soc/tegra/trimslice.c
+index f9834afaa2e8b..6dca6836aa048 100644
+--- a/sound/soc/tegra/trimslice.c
++++ b/sound/soc/tegra/trimslice.c
+@@ -125,8 +125,7 @@ static int tegra_snd_trimslice_probe(struct platform_device *pdev)
+ if (!trimslice_tlv320aic23_dai.codecs->of_node) {
+ dev_err(&pdev->dev,
+ "Property 'nvidia,audio-codec' missing or invalid\n");
+- ret = -EINVAL;
+- goto err;
++ return -EINVAL;
+ }
+
+ trimslice_tlv320aic23_dai.cpus->of_node = of_parse_phandle(np,
+@@ -134,8 +133,7 @@ static int tegra_snd_trimslice_probe(struct platform_device *pdev)
+ if (!trimslice_tlv320aic23_dai.cpus->of_node) {
+ dev_err(&pdev->dev,
+ "Property 'nvidia,i2s-controller' missing or invalid\n");
+- ret = -EINVAL;
+- goto err;
++ return -EINVAL;
+ }
+
+ trimslice_tlv320aic23_dai.platforms->of_node =
+@@ -143,32 +141,24 @@ static int tegra_snd_trimslice_probe(struct platform_device *pdev)
+
+ ret = tegra_asoc_utils_init(&trimslice->util_data, &pdev->dev);
+ if (ret)
+- goto err;
++ return ret;
+
+ ret = snd_soc_register_card(card);
+ if (ret) {
+ dev_err(&pdev->dev, "snd_soc_register_card failed (%d)\n",
+ ret);
+- goto err_fini_utils;
++ return ret;
+ }
+
+ return 0;
+-
+-err_fini_utils:
+- tegra_asoc_utils_fini(&trimslice->util_data);
+-err:
+- return ret;
+ }
+
+ static int tegra_snd_trimslice_remove(struct platform_device *pdev)
+ {
+ struct snd_soc_card *card = platform_get_drvdata(pdev);
+- struct tegra_trimslice *trimslice = snd_soc_card_get_drvdata(card);
+
+ snd_soc_unregister_card(card);
+
+- tegra_asoc_utils_fini(&trimslice->util_data);
+-
+ return 0;
+ }
+
+diff --git a/tools/build/Makefile.feature b/tools/build/Makefile.feature
+index 3e0c019ef2971..82d6c43333fbd 100644
+--- a/tools/build/Makefile.feature
++++ b/tools/build/Makefile.feature
+@@ -8,7 +8,7 @@ endif
+
+ feature_check = $(eval $(feature_check_code))
+ define feature_check_code
+- feature-$(1) := $(shell $(MAKE) OUTPUT=$(OUTPUT_FEATURES) CFLAGS="$(EXTRA_CFLAGS) $(FEATURE_CHECK_CFLAGS-$(1))" CXXFLAGS="$(EXTRA_CXXFLAGS) $(FEATURE_CHECK_CXXFLAGS-$(1))" LDFLAGS="$(LDFLAGS) $(FEATURE_CHECK_LDFLAGS-$(1))" -C $(feature_dir) $(OUTPUT_FEATURES)test-$1.bin >/dev/null 2>/dev/null && echo 1 || echo 0)
++ feature-$(1) := $(shell $(MAKE) OUTPUT=$(OUTPUT_FEATURES) CC="$(CC)" CXX="$(CXX)" CFLAGS="$(EXTRA_CFLAGS) $(FEATURE_CHECK_CFLAGS-$(1))" CXXFLAGS="$(EXTRA_CXXFLAGS) $(FEATURE_CHECK_CXXFLAGS-$(1))" LDFLAGS="$(LDFLAGS) $(FEATURE_CHECK_LDFLAGS-$(1))" -C $(feature_dir) $(OUTPUT_FEATURES)test-$1.bin >/dev/null 2>/dev/null && echo 1 || echo 0)
+ endef
+
+ feature_set = $(eval $(feature_set_code))
+diff --git a/tools/build/feature/Makefile b/tools/build/feature/Makefile
+index 92012381393ad..ef4ca6e408427 100644
+--- a/tools/build/feature/Makefile
++++ b/tools/build/feature/Makefile
+@@ -73,8 +73,6 @@ FILES= \
+
+ FILES := $(addprefix $(OUTPUT),$(FILES))
+
+-CC ?= $(CROSS_COMPILE)gcc
+-CXX ?= $(CROSS_COMPILE)g++
+ PKG_CONFIG ?= $(CROSS_COMPILE)pkg-config
+ LLVM_CONFIG ?= llvm-config
+ CLANG ?= clang
+diff --git a/tools/perf/bench/mem-functions.c b/tools/perf/bench/mem-functions.c
+index 9235b76501be8..19d45c377ac18 100644
+--- a/tools/perf/bench/mem-functions.c
++++ b/tools/perf/bench/mem-functions.c
+@@ -223,12 +223,8 @@ static int bench_mem_common(int argc, const char **argv, struct bench_mem_info *
+ return 0;
+ }
+
+-static u64 do_memcpy_cycles(const struct function *r, size_t size, void *src, void *dst)
++static void memcpy_prefault(memcpy_t fn, size_t size, void *src, void *dst)
+ {
+- u64 cycle_start = 0ULL, cycle_end = 0ULL;
+- memcpy_t fn = r->fn.memcpy;
+- int i;
+-
+ /* Make sure to always prefault zero pages even if MMAP_THRESH is crossed: */
+ memset(src, 0, size);
+
+@@ -237,6 +233,15 @@ static u64 do_memcpy_cycles(const struct function *r, size_t size, void *src, vo
+ * to not measure page fault overhead:
+ */
+ fn(dst, src, size);
++}
++
++static u64 do_memcpy_cycles(const struct function *r, size_t size, void *src, void *dst)
++{
++ u64 cycle_start = 0ULL, cycle_end = 0ULL;
++ memcpy_t fn = r->fn.memcpy;
++ int i;
++
++ memcpy_prefault(fn, size, src, dst);
+
+ cycle_start = get_cycles();
+ for (i = 0; i < nr_loops; ++i)
+@@ -252,11 +257,7 @@ static double do_memcpy_gettimeofday(const struct function *r, size_t size, void
+ memcpy_t fn = r->fn.memcpy;
+ int i;
+
+- /*
+- * We prefault the freshly allocated memory range here,
+- * to not measure page fault overhead:
+- */
+- fn(dst, src, size);
++ memcpy_prefault(fn, size, src, dst);
+
+ BUG_ON(gettimeofday(&tv_start, NULL));
+ for (i = 0; i < nr_loops; ++i)
+diff --git a/tools/perf/util/intel-pt-decoder/intel-pt-decoder.c b/tools/perf/util/intel-pt-decoder/intel-pt-decoder.c
+index f8ccfd6be0eee..7ffcbd6fcd1ae 100644
+--- a/tools/perf/util/intel-pt-decoder/intel-pt-decoder.c
++++ b/tools/perf/util/intel-pt-decoder/intel-pt-decoder.c
+@@ -1164,6 +1164,7 @@ static int intel_pt_walk_fup(struct intel_pt_decoder *decoder)
+ return 0;
+ if (err == -EAGAIN ||
+ intel_pt_fup_with_nlip(decoder, &intel_pt_insn, ip, err)) {
++ decoder->pkt_state = INTEL_PT_STATE_IN_SYNC;
+ if (intel_pt_fup_event(decoder))
+ return 0;
+ return -EAGAIN;
+@@ -1942,17 +1943,13 @@ next:
+ }
+ if (decoder->set_fup_mwait)
+ no_tip = true;
++ if (no_tip)
++ decoder->pkt_state = INTEL_PT_STATE_FUP_NO_TIP;
++ else
++ decoder->pkt_state = INTEL_PT_STATE_FUP;
+ err = intel_pt_walk_fup(decoder);
+- if (err != -EAGAIN) {
+- if (err)
+- return err;
+- if (no_tip)
+- decoder->pkt_state =
+- INTEL_PT_STATE_FUP_NO_TIP;
+- else
+- decoder->pkt_state = INTEL_PT_STATE_FUP;
+- return 0;
+- }
++ if (err != -EAGAIN)
++ return err;
+ if (no_tip) {
+ no_tip = false;
+ break;
+@@ -1980,8 +1977,10 @@ next:
+ * possibility of another CBR change that gets caught up
+ * in the PSB+.
+ */
+- if (decoder->cbr != decoder->cbr_seen)
++ if (decoder->cbr != decoder->cbr_seen) {
++ decoder->state.type = 0;
+ return 0;
++ }
+ break;
+
+ case INTEL_PT_PIP:
+@@ -2022,8 +2021,10 @@ next:
+
+ case INTEL_PT_CBR:
+ intel_pt_calc_cbr(decoder);
+- if (decoder->cbr != decoder->cbr_seen)
++ if (decoder->cbr != decoder->cbr_seen) {
++ decoder->state.type = 0;
+ return 0;
++ }
+ break;
+
+ case INTEL_PT_MODE_EXEC:
+@@ -2599,15 +2600,11 @@ const struct intel_pt_state *intel_pt_decode(struct intel_pt_decoder *decoder)
+ err = intel_pt_walk_tip(decoder);
+ break;
+ case INTEL_PT_STATE_FUP:
+- decoder->pkt_state = INTEL_PT_STATE_IN_SYNC;
+ err = intel_pt_walk_fup(decoder);
+ if (err == -EAGAIN)
+ err = intel_pt_walk_fup_tip(decoder);
+- else if (!err)
+- decoder->pkt_state = INTEL_PT_STATE_FUP;
+ break;
+ case INTEL_PT_STATE_FUP_NO_TIP:
+- decoder->pkt_state = INTEL_PT_STATE_IN_SYNC;
+ err = intel_pt_walk_fup(decoder);
+ if (err == -EAGAIN)
+ err = intel_pt_walk_trace(decoder);
+diff --git a/tools/perf/util/probe-finder.c b/tools/perf/util/probe-finder.c
+index 55924255c5355..659024342e9ac 100644
+--- a/tools/perf/util/probe-finder.c
++++ b/tools/perf/util/probe-finder.c
+@@ -1408,6 +1408,9 @@ static int fill_empty_trace_arg(struct perf_probe_event *pev,
+ char *type;
+ int i, j, ret;
+
++ if (!ntevs)
++ return -ENOENT;
++
+ for (i = 0; i < pev->nargs; i++) {
+ type = NULL;
+ for (j = 0; j < ntevs; j++) {
+@@ -1464,7 +1467,7 @@ int debuginfo__find_trace_events(struct debuginfo *dbg,
+ if (ret >= 0 && tf.pf.skip_empty_arg)
+ ret = fill_empty_trace_arg(pev, tf.tevs, tf.ntevs);
+
+- if (ret < 0) {
++ if (ret < 0 || tf.ntevs == 0) {
+ for (i = 0; i < tf.ntevs; i++)
+ clear_probe_trace_event(&tf.tevs[i]);
+ zfree(tevs);
+diff --git a/tools/testing/selftests/bpf/Makefile b/tools/testing/selftests/bpf/Makefile
+index af139d0e2e0c6..666b1b786bd29 100644
+--- a/tools/testing/selftests/bpf/Makefile
++++ b/tools/testing/selftests/bpf/Makefile
+@@ -138,7 +138,9 @@ VMLINUX_BTF_PATHS := $(if $(O),$(O)/vmlinux) \
+ /boot/vmlinux-$(shell uname -r)
+ VMLINUX_BTF := $(abspath $(firstword $(wildcard $(VMLINUX_BTF_PATHS))))
+
+-$(OUTPUT)/runqslower: $(BPFOBJ)
++DEFAULT_BPFTOOL := $(SCRATCH_DIR)/sbin/bpftool
++
++$(OUTPUT)/runqslower: $(BPFOBJ) | $(DEFAULT_BPFTOOL)
+ $(Q)$(MAKE) $(submake_extras) -C $(TOOLSDIR)/bpf/runqslower \
+ OUTPUT=$(SCRATCH_DIR)/ VMLINUX_BTF=$(VMLINUX_BTF) \
+ BPFOBJ=$(BPFOBJ) BPF_INCLUDE=$(INCLUDE_DIR) && \
+@@ -160,7 +162,6 @@ $(OUTPUT)/test_netcnt: cgroup_helpers.c
+ $(OUTPUT)/test_sock_fields: cgroup_helpers.c
+ $(OUTPUT)/test_sysctl: cgroup_helpers.c
+
+-DEFAULT_BPFTOOL := $(SCRATCH_DIR)/sbin/bpftool
+ BPFTOOL ?= $(DEFAULT_BPFTOOL)
+ $(DEFAULT_BPFTOOL): $(wildcard $(BPFTOOLDIR)/*.[ch] $(BPFTOOLDIR)/Makefile) \
+ $(BPFOBJ) | $(BUILD_DIR)/bpftool
+diff --git a/tools/testing/selftests/bpf/test_progs.c b/tools/testing/selftests/bpf/test_progs.c
+index 93970ec1c9e94..c2eb58382113a 100644
+--- a/tools/testing/selftests/bpf/test_progs.c
++++ b/tools/testing/selftests/bpf/test_progs.c
+@@ -12,6 +12,9 @@
+ #include <string.h>
+ #include <execinfo.h> /* backtrace */
+
++#define EXIT_NO_TEST 2
++#define EXIT_ERR_SETUP_INFRA 3
++
+ /* defined in test_progs.h */
+ struct test_env env = {};
+
+@@ -111,13 +114,31 @@ static void reset_affinity() {
+ if (err < 0) {
+ stdio_restore();
+ fprintf(stderr, "Failed to reset process affinity: %d!\n", err);
+- exit(-1);
++ exit(EXIT_ERR_SETUP_INFRA);
+ }
+ err = pthread_setaffinity_np(pthread_self(), sizeof(cpuset), &cpuset);
+ if (err < 0) {
+ stdio_restore();
+ fprintf(stderr, "Failed to reset thread affinity: %d!\n", err);
+- exit(-1);
++ exit(EXIT_ERR_SETUP_INFRA);
++ }
++}
++
++static void save_netns(void)
++{
++ env.saved_netns_fd = open("/proc/self/ns/net", O_RDONLY);
++ if (env.saved_netns_fd == -1) {
++ perror("open(/proc/self/ns/net)");
++ exit(EXIT_ERR_SETUP_INFRA);
++ }
++}
++
++static void restore_netns(void)
++{
++ if (setns(env.saved_netns_fd, CLONE_NEWNET) == -1) {
++ stdio_restore();
++ perror("setns(CLONE_NEWNS)");
++ exit(EXIT_ERR_SETUP_INFRA);
+ }
+ }
+
+@@ -138,8 +159,6 @@ void test__end_subtest()
+ test->test_num, test->subtest_num,
+ test->subtest_name, sub_error_cnt ? "FAIL" : "OK");
+
+- reset_affinity();
+-
+ free(test->subtest_name);
+ test->subtest_name = NULL;
+ }
+@@ -732,6 +751,7 @@ int main(int argc, char **argv)
+ return -1;
+ }
+
++ save_netns();
+ stdio_hijack();
+ for (i = 0; i < prog_test_cnt; i++) {
+ struct prog_test_def *test = &prog_test_defs[i];
+@@ -762,6 +782,7 @@ int main(int argc, char **argv)
+ test->error_cnt ? "FAIL" : "OK");
+
+ reset_affinity();
++ restore_netns();
+ if (test->need_cgroup_cleanup)
+ cleanup_cgroup_environment();
+ }
+@@ -775,6 +796,10 @@ int main(int argc, char **argv)
+ free_str_set(&env.subtest_selector.blacklist);
+ free_str_set(&env.subtest_selector.whitelist);
+ free(env.subtest_selector.num_set);
++ close(env.saved_netns_fd);
++
++ if (env.succ_cnt + env.fail_cnt + env.skip_cnt == 0)
++ return EXIT_NO_TEST;
+
+ return env.fail_cnt ? EXIT_FAILURE : EXIT_SUCCESS;
+ }
+diff --git a/tools/testing/selftests/bpf/test_progs.h b/tools/testing/selftests/bpf/test_progs.h
+index f4aff6b8284be..3817667deb103 100644
+--- a/tools/testing/selftests/bpf/test_progs.h
++++ b/tools/testing/selftests/bpf/test_progs.h
+@@ -77,6 +77,8 @@ struct test_env {
+ int sub_succ_cnt; /* successful sub-tests */
+ int fail_cnt; /* total failed tests + sub-tests */
+ int skip_cnt; /* skipped tests */
++
++ int saved_netns_fd;
+ };
+
+ extern struct test_env env;
+diff --git a/tools/testing/selftests/powerpc/ptrace/ptrace-pkey.c b/tools/testing/selftests/powerpc/ptrace/ptrace-pkey.c
+index bdbbbe8431e03..3694613f418f6 100644
+--- a/tools/testing/selftests/powerpc/ptrace/ptrace-pkey.c
++++ b/tools/testing/selftests/powerpc/ptrace/ptrace-pkey.c
+@@ -44,7 +44,7 @@ struct shared_info {
+ unsigned long amr2;
+
+ /* AMR value that ptrace should refuse to write to the child. */
+- unsigned long amr3;
++ unsigned long invalid_amr;
+
+ /* IAMR value the parent expects to read from the child. */
+ unsigned long expected_iamr;
+@@ -57,8 +57,8 @@ struct shared_info {
+ * (even though they're valid ones) because userspace doesn't have
+ * access to those registers.
+ */
+- unsigned long new_iamr;
+- unsigned long new_uamor;
++ unsigned long invalid_iamr;
++ unsigned long invalid_uamor;
+ };
+
+ static int sys_pkey_alloc(unsigned long flags, unsigned long init_access_rights)
+@@ -66,11 +66,6 @@ static int sys_pkey_alloc(unsigned long flags, unsigned long init_access_rights)
+ return syscall(__NR_pkey_alloc, flags, init_access_rights);
+ }
+
+-static int sys_pkey_free(int pkey)
+-{
+- return syscall(__NR_pkey_free, pkey);
+-}
+-
+ static int child(struct shared_info *info)
+ {
+ unsigned long reg;
+@@ -100,28 +95,32 @@ static int child(struct shared_info *info)
+
+ info->amr1 |= 3ul << pkeyshift(pkey1);
+ info->amr2 |= 3ul << pkeyshift(pkey2);
+- info->amr3 |= info->amr2 | 3ul << pkeyshift(pkey3);
++ /*
++ * invalid amr value where we try to force write
++ * things which are deined by a uamor setting.
++ */
++ info->invalid_amr = info->amr2 | (~0x0UL & ~info->expected_uamor);
+
++ /*
++ * if PKEY_DISABLE_EXECUTE succeeded we should update the expected_iamr
++ */
+ if (disable_execute)
+ info->expected_iamr |= 1ul << pkeyshift(pkey1);
+ else
+ info->expected_iamr &= ~(1ul << pkeyshift(pkey1));
+
+- info->expected_iamr &= ~(1ul << pkeyshift(pkey2) | 1ul << pkeyshift(pkey3));
+-
+- info->expected_uamor |= 3ul << pkeyshift(pkey1) |
+- 3ul << pkeyshift(pkey2);
+- info->new_iamr |= 1ul << pkeyshift(pkey1) | 1ul << pkeyshift(pkey2);
+- info->new_uamor |= 3ul << pkeyshift(pkey1);
++ /*
++ * We allocated pkey2 and pkey 3 above. Clear the IAMR bits.
++ */
++ info->expected_iamr &= ~(1ul << pkeyshift(pkey2));
++ info->expected_iamr &= ~(1ul << pkeyshift(pkey3));
+
+ /*
+- * We won't use pkey3. We just want a plausible but invalid key to test
+- * whether ptrace will let us write to AMR bits we are not supposed to.
+- *
+- * This also tests whether the kernel restores the UAMOR permissions
+- * after a key is freed.
++ * Create an IAMR value different from expected value.
++ * Kernel will reject an IAMR and UAMOR change.
+ */
+- sys_pkey_free(pkey3);
++ info->invalid_iamr = info->expected_iamr | (1ul << pkeyshift(pkey1) | 1ul << pkeyshift(pkey2));
++ info->invalid_uamor = info->expected_uamor & ~(0x3ul << pkeyshift(pkey1));
+
+ printf("%-30s AMR: %016lx pkey1: %d pkey2: %d pkey3: %d\n",
+ user_write, info->amr1, pkey1, pkey2, pkey3);
+@@ -196,9 +195,9 @@ static int parent(struct shared_info *info, pid_t pid)
+ PARENT_SKIP_IF_UNSUPPORTED(ret, &info->child_sync);
+ PARENT_FAIL_IF(ret, &info->child_sync);
+
+- info->amr1 = info->amr2 = info->amr3 = regs[0];
+- info->expected_iamr = info->new_iamr = regs[1];
+- info->expected_uamor = info->new_uamor = regs[2];
++ info->amr1 = info->amr2 = regs[0];
++ info->expected_iamr = regs[1];
++ info->expected_uamor = regs[2];
+
+ /* Wake up child so that it can set itself up. */
+ ret = prod_child(&info->child_sync);
+@@ -234,10 +233,10 @@ static int parent(struct shared_info *info, pid_t pid)
+ return ret;
+
+ /* Write invalid AMR value in child. */
+- ret = ptrace_write_regs(pid, NT_PPC_PKEY, &info->amr3, 1);
++ ret = ptrace_write_regs(pid, NT_PPC_PKEY, &info->invalid_amr, 1);
+ PARENT_FAIL_IF(ret, &info->child_sync);
+
+- printf("%-30s AMR: %016lx\n", ptrace_write_running, info->amr3);
++ printf("%-30s AMR: %016lx\n", ptrace_write_running, info->invalid_amr);
+
+ /* Wake up child so that it can verify it didn't change. */
+ ret = prod_child(&info->child_sync);
+@@ -249,7 +248,7 @@ static int parent(struct shared_info *info, pid_t pid)
+
+ /* Try to write to IAMR. */
+ regs[0] = info->amr1;
+- regs[1] = info->new_iamr;
++ regs[1] = info->invalid_iamr;
+ ret = ptrace_write_regs(pid, NT_PPC_PKEY, regs, 2);
+ PARENT_FAIL_IF(!ret, &info->child_sync);
+
+@@ -257,7 +256,7 @@ static int parent(struct shared_info *info, pid_t pid)
+ ptrace_write_running, regs[0], regs[1]);
+
+ /* Try to write to IAMR and UAMOR. */
+- regs[2] = info->new_uamor;
++ regs[2] = info->invalid_uamor;
+ ret = ptrace_write_regs(pid, NT_PPC_PKEY, regs, 3);
+ PARENT_FAIL_IF(!ret, &info->child_sync);
+
+diff --git a/tools/testing/selftests/seccomp/seccomp_bpf.c b/tools/testing/selftests/seccomp/seccomp_bpf.c
+index c84c7b50331c6..cdab315244540 100644
+--- a/tools/testing/selftests/seccomp/seccomp_bpf.c
++++ b/tools/testing/selftests/seccomp/seccomp_bpf.c
+@@ -3257,6 +3257,11 @@ TEST(user_notification_with_tsync)
+ int ret;
+ unsigned int flags;
+
++ ret = prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0);
++ ASSERT_EQ(0, ret) {
++ TH_LOG("Kernel does not support PR_SET_NO_NEW_PRIVS!");
++ }
++
+ /* these were exclusive */
+ flags = SECCOMP_FILTER_FLAG_NEW_LISTENER |
+ SECCOMP_FILTER_FLAG_TSYNC;
* [gentoo-commits] proj/linux-patches:5.7 commit in: /
@ 2020-08-26 11:17 Mike Pagano
0 siblings, 0 replies; 25+ messages in thread
From: Mike Pagano @ 2020-08-26 11:17 UTC (permalink / raw
To: gentoo-commits
commit: b450d96181ba92bf82a87234f78c4273df4d711b
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Aug 26 11:17:12 2020 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Aug 26 11:17:12 2020 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=b450d961
Linux patch 5.7.18
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1017_linux-5.7.18.patch | 4218 +++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 4222 insertions(+)
diff --git a/0000_README b/0000_README
index 18ff2b2..1ab468f 100644
--- a/0000_README
+++ b/0000_README
@@ -111,6 +111,10 @@ Patch: 1016_linux-5.7.17.patch
From: http://www.kernel.org
Desc: Linux 5.7.17
+Patch: 1017_linux-5.7.18.patch
+From: http://www.kernel.org
+Desc: Linux 5.7.18
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1017_linux-5.7.18.patch b/1017_linux-5.7.18.patch
new file mode 100644
index 0000000..8256bbf
--- /dev/null
+++ b/1017_linux-5.7.18.patch
@@ -0,0 +1,4218 @@
+diff --git a/Makefile b/Makefile
+index c0d34d03ab5f1..b56456c45c97f 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 7
+-SUBLEVEL = 17
++SUBLEVEL = 18
+ EXTRAVERSION =
+ NAME = Kleptomaniac Octopus
+
+diff --git a/arch/alpha/include/asm/io.h b/arch/alpha/include/asm/io.h
+index e6225cf40de57..b09dd6bc98a12 100644
+--- a/arch/alpha/include/asm/io.h
++++ b/arch/alpha/include/asm/io.h
+@@ -490,10 +490,10 @@ extern inline void writeq(u64 b, volatile void __iomem *addr)
+ }
+ #endif
+
+-#define ioread16be(p) be16_to_cpu(ioread16(p))
+-#define ioread32be(p) be32_to_cpu(ioread32(p))
+-#define iowrite16be(v,p) iowrite16(cpu_to_be16(v), (p))
+-#define iowrite32be(v,p) iowrite32(cpu_to_be32(v), (p))
++#define ioread16be(p) swab16(ioread16(p))
++#define ioread32be(p) swab32(ioread32(p))
++#define iowrite16be(v,p) iowrite16(swab16(v), (p))
++#define iowrite32be(v,p) iowrite32(swab32(v), (p))
+
+ #define inb_p inb
+ #define inw_p inw
+diff --git a/arch/arm64/Makefile b/arch/arm64/Makefile
+index 85e4149cc5d5c..d3c7ffa72902d 100644
+--- a/arch/arm64/Makefile
++++ b/arch/arm64/Makefile
+@@ -156,6 +156,7 @@ zinstall install:
+ PHONY += vdso_install
+ vdso_install:
+ $(Q)$(MAKE) $(build)=arch/arm64/kernel/vdso $@
++ $(Q)$(MAKE) $(build)=arch/arm64/kernel/vdso32 $@
+
+ # We use MRPROPER_FILES and CLEAN_FILES now
+ archclean:
+diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
+index 26fca93cd6972..397e20a359752 100644
+--- a/arch/arm64/include/asm/kvm_host.h
++++ b/arch/arm64/include/asm/kvm_host.h
+@@ -440,7 +440,7 @@ int __kvm_arm_vcpu_set_events(struct kvm_vcpu *vcpu,
+
+ #define KVM_ARCH_WANT_MMU_NOTIFIER
+ int kvm_unmap_hva_range(struct kvm *kvm,
+- unsigned long start, unsigned long end);
++ unsigned long start, unsigned long end, unsigned flags);
+ int kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte);
+ int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end);
+ int kvm_test_age_hva(struct kvm *kvm, unsigned long hva);
+diff --git a/arch/arm64/kernel/vdso32/Makefile b/arch/arm64/kernel/vdso32/Makefile
+index 0433bb58ce52c..601c075f1f476 100644
+--- a/arch/arm64/kernel/vdso32/Makefile
++++ b/arch/arm64/kernel/vdso32/Makefile
+@@ -201,7 +201,7 @@ quiet_cmd_vdsosym = VDSOSYM $@
+ cmd_vdsosym = $(NM) $< | $(gen-vdsosym) | LC_ALL=C sort > $@
+
+ # Install commands for the unstripped file
+-quiet_cmd_vdso_install = INSTALL $@
++quiet_cmd_vdso_install = INSTALL32 $@
+ cmd_vdso_install = cp $(obj)/$@.dbg $(MODLIB)/vdso/vdso32.so
+
+ vdso.so: $(obj)/vdso.so.dbg
+diff --git a/arch/m68k/include/asm/m53xxacr.h b/arch/m68k/include/asm/m53xxacr.h
+index 9138a624c5c81..692f90e7fecc1 100644
+--- a/arch/m68k/include/asm/m53xxacr.h
++++ b/arch/m68k/include/asm/m53xxacr.h
+@@ -89,9 +89,9 @@
+ * coherency though in all cases. And for copyback caches we will need
+ * to push cached data as well.
+ */
+-#define CACHE_INIT CACR_CINVA
+-#define CACHE_INVALIDATE CACR_CINVA
+-#define CACHE_INVALIDATED CACR_CINVA
++#define CACHE_INIT (CACHE_MODE + CACR_CINVA - CACR_EC)
++#define CACHE_INVALIDATE (CACHE_MODE + CACR_CINVA)
++#define CACHE_INVALIDATED (CACHE_MODE + CACR_CINVA)
+
+ #define ACR0_MODE ((CONFIG_RAMBASE & 0xff000000) + \
+ (0x000f0000) + \
+diff --git a/arch/mips/include/asm/kvm_host.h b/arch/mips/include/asm/kvm_host.h
+index caa2b936125cc..8861e9d4eb1f9 100644
+--- a/arch/mips/include/asm/kvm_host.h
++++ b/arch/mips/include/asm/kvm_host.h
+@@ -939,7 +939,7 @@ enum kvm_mips_fault_result kvm_trap_emul_gva_fault(struct kvm_vcpu *vcpu,
+
+ #define KVM_ARCH_WANT_MMU_NOTIFIER
+ int kvm_unmap_hva_range(struct kvm *kvm,
+- unsigned long start, unsigned long end);
++ unsigned long start, unsigned long end, unsigned flags);
+ int kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte);
+ int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end);
+ int kvm_test_age_hva(struct kvm *kvm, unsigned long hva);
+diff --git a/arch/mips/kernel/setup.c b/arch/mips/kernel/setup.c
+index 573509e0f2d4e..3ace115740dd1 100644
+--- a/arch/mips/kernel/setup.c
++++ b/arch/mips/kernel/setup.c
+@@ -497,7 +497,7 @@ static void __init mips_parse_crashkernel(void)
+ if (ret != 0 || crash_size <= 0)
+ return;
+
+- if (!memblock_find_in_range(crash_base, crash_base + crash_size, crash_size, 0)) {
++ if (!memblock_find_in_range(crash_base, crash_base + crash_size, crash_size, 1)) {
+ pr_warn("Invalid memory region reserved for crash kernel\n");
+ return;
+ }
+diff --git a/arch/mips/kvm/mmu.c b/arch/mips/kvm/mmu.c
+index 7dad7a293eae9..2514e51d908b4 100644
+--- a/arch/mips/kvm/mmu.c
++++ b/arch/mips/kvm/mmu.c
+@@ -518,7 +518,8 @@ static int kvm_unmap_hva_handler(struct kvm *kvm, gfn_t gfn, gfn_t gfn_end,
+ return 1;
+ }
+
+-int kvm_unmap_hva_range(struct kvm *kvm, unsigned long start, unsigned long end)
++int kvm_unmap_hva_range(struct kvm *kvm, unsigned long start, unsigned long end,
++ unsigned flags)
+ {
+ handle_hva_to_gpa(kvm, start, end, &kvm_unmap_hva_handler, NULL);
+
+diff --git a/arch/powerpc/include/asm/fixmap.h b/arch/powerpc/include/asm/fixmap.h
+index 77ab25a199740..e808461e6532e 100644
+--- a/arch/powerpc/include/asm/fixmap.h
++++ b/arch/powerpc/include/asm/fixmap.h
+@@ -52,7 +52,7 @@ enum fixed_addresses {
+ FIX_HOLE,
+ /* reserve the top 128K for early debugging purposes */
+ FIX_EARLY_DEBUG_TOP = FIX_HOLE,
+- FIX_EARLY_DEBUG_BASE = FIX_EARLY_DEBUG_TOP+(ALIGN(SZ_128, PAGE_SIZE)/PAGE_SIZE)-1,
++ FIX_EARLY_DEBUG_BASE = FIX_EARLY_DEBUG_TOP+(ALIGN(SZ_128K, PAGE_SIZE)/PAGE_SIZE)-1,
+ #ifdef CONFIG_HIGHMEM
+ FIX_KMAP_BEGIN, /* reserved pte's for temporary kernel mappings */
+ FIX_KMAP_END = FIX_KMAP_BEGIN+(KM_TYPE_NR*NR_CPUS)-1,
+diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h
+index 1dc63101ffe18..b82e46ecd7fbd 100644
+--- a/arch/powerpc/include/asm/kvm_host.h
++++ b/arch/powerpc/include/asm/kvm_host.h
+@@ -58,7 +58,8 @@
+ #define KVM_ARCH_WANT_MMU_NOTIFIER
+
+ extern int kvm_unmap_hva_range(struct kvm *kvm,
+- unsigned long start, unsigned long end);
++ unsigned long start, unsigned long end,
++ unsigned flags);
+ extern int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end);
+ extern int kvm_test_age_hva(struct kvm *kvm, unsigned long hva);
+ extern int kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte);
+diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c
+index 5690a1f9b9767..13f107dff880e 100644
+--- a/arch/powerpc/kvm/book3s.c
++++ b/arch/powerpc/kvm/book3s.c
+@@ -837,7 +837,8 @@ void kvmppc_core_commit_memory_region(struct kvm *kvm,
+ kvm->arch.kvm_ops->commit_memory_region(kvm, mem, old, new, change);
+ }
+
+-int kvm_unmap_hva_range(struct kvm *kvm, unsigned long start, unsigned long end)
++int kvm_unmap_hva_range(struct kvm *kvm, unsigned long start, unsigned long end,
++ unsigned flags)
+ {
+ return kvm->arch.kvm_ops->unmap_hva_range(kvm, start, end);
+ }
+diff --git a/arch/powerpc/kvm/e500_mmu_host.c b/arch/powerpc/kvm/e500_mmu_host.c
+index df9989cf7ba3f..9b402c345154f 100644
+--- a/arch/powerpc/kvm/e500_mmu_host.c
++++ b/arch/powerpc/kvm/e500_mmu_host.c
+@@ -734,7 +734,8 @@ static int kvm_unmap_hva(struct kvm *kvm, unsigned long hva)
+ return 0;
+ }
+
+-int kvm_unmap_hva_range(struct kvm *kvm, unsigned long start, unsigned long end)
++int kvm_unmap_hva_range(struct kvm *kvm, unsigned long start, unsigned long end,
++ unsigned flags)
+ {
+ /* kvm_unmap_hva flushes everything anyways */
+ kvm_unmap_hva(kvm, start);
+diff --git a/arch/powerpc/platforms/pseries/hotplug-cpu.c b/arch/powerpc/platforms/pseries/hotplug-cpu.c
+index 6d4ee03d476a9..ec04fc7f5a641 100644
+--- a/arch/powerpc/platforms/pseries/hotplug-cpu.c
++++ b/arch/powerpc/platforms/pseries/hotplug-cpu.c
+@@ -107,22 +107,28 @@ static int pseries_cpu_disable(void)
+ */
+ static void pseries_cpu_die(unsigned int cpu)
+ {
+- int tries;
+ int cpu_status = 1;
+ unsigned int pcpu = get_hard_smp_processor_id(cpu);
++ unsigned long timeout = jiffies + msecs_to_jiffies(120000);
+
+- for (tries = 0; tries < 25; tries++) {
++ while (true) {
+ cpu_status = smp_query_cpu_stopped(pcpu);
+ if (cpu_status == QCSS_STOPPED ||
+ cpu_status == QCSS_HARDWARE_ERROR)
+ break;
+- cpu_relax();
+
++ if (time_after(jiffies, timeout)) {
++ pr_warn("CPU %i (hwid %i) didn't die after 120 seconds\n",
++ cpu, pcpu);
++ timeout = jiffies + msecs_to_jiffies(120000);
++ }
++
++ cond_resched();
+ }
+
+- if (cpu_status != 0) {
+- printk("Querying DEAD? cpu %i (%i) shows %i\n",
+- cpu, pcpu, cpu_status);
++ if (cpu_status == QCSS_HARDWARE_ERROR) {
++ pr_warn("CPU %i (hwid %i) reported error while dying\n",
++ cpu, pcpu);
+ }
+
+ /* Isolation and deallocation are definitely done by
+diff --git a/arch/powerpc/platforms/pseries/ras.c b/arch/powerpc/platforms/pseries/ras.c
+index 16ba5c542e55c..988e9b75ff642 100644
+--- a/arch/powerpc/platforms/pseries/ras.c
++++ b/arch/powerpc/platforms/pseries/ras.c
+@@ -184,7 +184,6 @@ static void handle_system_shutdown(char event_modifier)
+ case EPOW_SHUTDOWN_ON_UPS:
+ pr_emerg("Loss of system power detected. System is running on"
+ " UPS/battery. Check RTAS error log for details\n");
+- orderly_poweroff(true);
+ break;
+
+ case EPOW_SHUTDOWN_LOSS_OF_CRITICAL_FUNCTIONS:
+diff --git a/arch/riscv/kernel/vmlinux.lds.S b/arch/riscv/kernel/vmlinux.lds.S
+index 0339b6bbe11ab..bf3f34dbe630b 100644
+--- a/arch/riscv/kernel/vmlinux.lds.S
++++ b/arch/riscv/kernel/vmlinux.lds.S
+@@ -22,6 +22,7 @@ SECTIONS
+ /* Beginning of code and text segment */
+ . = LOAD_OFFSET;
+ _start = .;
++ _stext = .;
+ HEAD_TEXT_SECTION
+ . = ALIGN(PAGE_SIZE);
+
+@@ -49,7 +50,6 @@ SECTIONS
+ . = ALIGN(SECTION_ALIGN);
+ .text : {
+ _text = .;
+- _stext = .;
+ TEXT_TEXT
+ SCHED_TEXT
+ CPUIDLE_TEXT
+diff --git a/arch/s390/kernel/ptrace.c b/arch/s390/kernel/ptrace.c
+index e007224b65bb2..a266ffed04df5 100644
+--- a/arch/s390/kernel/ptrace.c
++++ b/arch/s390/kernel/ptrace.c
+@@ -1311,7 +1311,6 @@ static bool is_ri_cb_valid(struct runtime_instr_cb *cb)
+ cb->pc == 1 &&
+ cb->qc == 0 &&
+ cb->reserved2 == 0 &&
+- cb->key == PAGE_DEFAULT_KEY &&
+ cb->reserved3 == 0 &&
+ cb->reserved4 == 0 &&
+ cb->reserved5 == 0 &&
+@@ -1375,7 +1374,11 @@ static int s390_runtime_instr_set(struct task_struct *target,
+ kfree(data);
+ return -EINVAL;
+ }
+-
++ /*
++ * Override access key in any case, since user space should
++ * not be able to set it, nor should it care about it.
++ */
++ ri_cb.key = PAGE_DEFAULT_KEY >> 4;
+ preempt_disable();
+ if (!target->thread.ri_cb)
+ target->thread.ri_cb = data;
+diff --git a/arch/s390/kernel/runtime_instr.c b/arch/s390/kernel/runtime_instr.c
+index 125c7f6e87150..1788a5454b6fc 100644
+--- a/arch/s390/kernel/runtime_instr.c
++++ b/arch/s390/kernel/runtime_instr.c
+@@ -57,7 +57,7 @@ static void init_runtime_instr_cb(struct runtime_instr_cb *cb)
+ cb->k = 1;
+ cb->ps = 1;
+ cb->pc = 1;
+- cb->key = PAGE_DEFAULT_KEY;
++ cb->key = PAGE_DEFAULT_KEY >> 4;
+ cb->v = 1;
+ }
+
+diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
+index 86e2e0272c576..d4c5d1d6c6f55 100644
+--- a/arch/x86/include/asm/kvm_host.h
++++ b/arch/x86/include/asm/kvm_host.h
+@@ -1606,7 +1606,8 @@ asmlinkage void kvm_spurious_fault(void);
+ _ASM_EXTABLE(666b, 667b)
+
+ #define KVM_ARCH_WANT_MMU_NOTIFIER
+-int kvm_unmap_hva_range(struct kvm *kvm, unsigned long start, unsigned long end);
++int kvm_unmap_hva_range(struct kvm *kvm, unsigned long start, unsigned long end,
++ unsigned flags);
+ int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end);
+ int kvm_test_age_hva(struct kvm *kvm, unsigned long hva);
+ int kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte);
+diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
+index 70cf2c1a1423c..59d096cacb26c 100644
+--- a/arch/x86/kvm/mmu/mmu.c
++++ b/arch/x86/kvm/mmu/mmu.c
+@@ -1972,7 +1972,8 @@ static int kvm_handle_hva(struct kvm *kvm, unsigned long hva,
+ return kvm_handle_hva_range(kvm, hva, hva + 1, data, handler);
+ }
+
+-int kvm_unmap_hva_range(struct kvm *kvm, unsigned long start, unsigned long end)
++int kvm_unmap_hva_range(struct kvm *kvm, unsigned long start, unsigned long end,
++ unsigned flags)
+ {
+ return kvm_handle_hva_range(kvm, start, end, 0, kvm_unmap_rmapp);
+ }
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index 51ccb4dfaad26..be195e63f1e69 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -956,7 +956,7 @@ int kvm_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
+ {
+ unsigned long old_cr4 = kvm_read_cr4(vcpu);
+ unsigned long pdptr_bits = X86_CR4_PGE | X86_CR4_PSE | X86_CR4_PAE |
+- X86_CR4_SMEP | X86_CR4_SMAP | X86_CR4_PKE;
++ X86_CR4_SMEP;
+
+ if (kvm_valid_cr4(vcpu, cr4))
+ return 1;
+diff --git a/arch/x86/pci/xen.c b/arch/x86/pci/xen.c
+index 91220cc258547..5c11ae66b5d8e 100644
+--- a/arch/x86/pci/xen.c
++++ b/arch/x86/pci/xen.c
+@@ -26,6 +26,7 @@
+ #include <asm/xen/pci.h>
+ #include <asm/xen/cpuid.h>
+ #include <asm/apic.h>
++#include <asm/acpi.h>
+ #include <asm/i8259.h>
+
+ static int xen_pcifront_enable_irq(struct pci_dev *dev)
+diff --git a/arch/x86/platform/efi/efi_64.c b/arch/x86/platform/efi/efi_64.c
+index c5e393f8bb3f6..3b0a84c88b7d9 100644
+--- a/arch/x86/platform/efi/efi_64.c
++++ b/arch/x86/platform/efi/efi_64.c
+@@ -269,6 +269,8 @@ int __init efi_setup_page_tables(unsigned long pa_memmap, unsigned num_pages)
+ npages = (__end_rodata - __start_rodata) >> PAGE_SHIFT;
+ rodata = __pa(__start_rodata);
+ pfn = rodata >> PAGE_SHIFT;
++
++ pf = _PAGE_NX | _PAGE_ENC;
+ if (kernel_map_pages_in_pgd(pgd, pfn, rodata, npages, pf)) {
+ pr_err("Failed to map kernel rodata 1:1\n");
+ return 1;
+diff --git a/drivers/cpufreq/intel_pstate.c b/drivers/cpufreq/intel_pstate.c
+index 4d3429b2058fc..8c4d86032c7a3 100644
+--- a/drivers/cpufreq/intel_pstate.c
++++ b/drivers/cpufreq/intel_pstate.c
+@@ -1572,6 +1572,7 @@ static void intel_pstate_get_cpu_pstates(struct cpudata *cpu)
+
+ 	intel_pstate_get_hwp_max(cpu->cpu, &phy_max, &current_max);
+ cpu->pstate.turbo_freq = phy_max * cpu->pstate.scaling;
++ cpu->pstate.turbo_pstate = phy_max;
+ } else {
+ cpu->pstate.turbo_freq = cpu->pstate.turbo_pstate * cpu->pstate.scaling;
+ }
+diff --git a/drivers/edac/i7core_edac.c b/drivers/edac/i7core_edac.c
+index b3135b208f9a0..0710b1a069270 100644
+--- a/drivers/edac/i7core_edac.c
++++ b/drivers/edac/i7core_edac.c
+@@ -1710,9 +1710,9 @@ static void i7core_mce_output_error(struct mem_ctl_info *mci,
+ if (uncorrected_error) {
+ core_err_cnt = 1;
+ if (ripv)
+- tp_event = HW_EVENT_ERR_FATAL;
+- else
+ tp_event = HW_EVENT_ERR_UNCORRECTED;
++ else
++ tp_event = HW_EVENT_ERR_FATAL;
+ } else {
+ tp_event = HW_EVENT_ERR_CORRECTED;
+ }
+diff --git a/drivers/edac/pnd2_edac.c b/drivers/edac/pnd2_edac.c
+index bc47328eb4858..fdf214ab8ce44 100644
+--- a/drivers/edac/pnd2_edac.c
++++ b/drivers/edac/pnd2_edac.c
+@@ -1155,7 +1155,7 @@ static void pnd2_mce_output_error(struct mem_ctl_info *mci, const struct mce *m,
+ u32 optypenum = GET_BITFIELD(m->status, 4, 6);
+ int rc;
+
+- tp_event = uc_err ? (ripv ? HW_EVENT_ERR_FATAL : HW_EVENT_ERR_UNCORRECTED) :
++ tp_event = uc_err ? (ripv ? HW_EVENT_ERR_UNCORRECTED : HW_EVENT_ERR_FATAL) :
+ HW_EVENT_ERR_CORRECTED;
+
+ /*
+diff --git a/drivers/edac/sb_edac.c b/drivers/edac/sb_edac.c
+index 7d51c82be62ba..6ab863ab6d862 100644
+--- a/drivers/edac/sb_edac.c
++++ b/drivers/edac/sb_edac.c
+@@ -2982,9 +2982,9 @@ static void sbridge_mce_output_error(struct mem_ctl_info *mci,
+ if (uncorrected_error) {
+ core_err_cnt = 1;
+ if (ripv) {
+- tp_event = HW_EVENT_ERR_FATAL;
+- } else {
+ tp_event = HW_EVENT_ERR_UNCORRECTED;
++ } else {
++ tp_event = HW_EVENT_ERR_FATAL;
+ }
+ } else {
+ tp_event = HW_EVENT_ERR_CORRECTED;
+diff --git a/drivers/edac/skx_common.c b/drivers/edac/skx_common.c
+index 412c651bef26b..dfeefacc90d6d 100644
+--- a/drivers/edac/skx_common.c
++++ b/drivers/edac/skx_common.c
+@@ -494,9 +494,9 @@ static void skx_mce_output_error(struct mem_ctl_info *mci,
+ if (uncorrected_error) {
+ core_err_cnt = 1;
+ if (ripv) {
+- tp_event = HW_EVENT_ERR_FATAL;
+- } else {
+ tp_event = HW_EVENT_ERR_UNCORRECTED;
++ } else {
++ tp_event = HW_EVENT_ERR_FATAL;
+ }
+ } else {
+ tp_event = HW_EVENT_ERR_CORRECTED;
+diff --git a/drivers/firmware/efi/efi.c b/drivers/firmware/efi/efi.c
+index 99446b3847265..9a0b614e99073 100644
+--- a/drivers/firmware/efi/efi.c
++++ b/drivers/firmware/efi/efi.c
+@@ -381,6 +381,7 @@ static int __init efisubsys_init(void)
+ efi_kobj = kobject_create_and_add("efi", firmware_kobj);
+ if (!efi_kobj) {
+ pr_err("efi: Firmware registration failed.\n");
++ destroy_workqueue(efi_rts_wq);
+ return -ENOMEM;
+ }
+
+@@ -424,6 +425,7 @@ err_unregister:
+ generic_ops_unregister();
+ err_put:
+ kobject_put(efi_kobj);
++ destroy_workqueue(efi_rts_wq);
+ return error;
+ }
+
+diff --git a/drivers/firmware/efi/libstub/efi-stub-helper.c b/drivers/firmware/efi/libstub/efi-stub-helper.c
+index 9f34c72429397..cac64fdfc3ae4 100644
+--- a/drivers/firmware/efi/libstub/efi-stub-helper.c
++++ b/drivers/firmware/efi/libstub/efi-stub-helper.c
+@@ -73,10 +73,14 @@ void efi_printk(char *str)
+ */
+ efi_status_t efi_parse_options(char const *cmdline)
+ {
+- size_t len = strlen(cmdline) + 1;
++ size_t len;
+ efi_status_t status;
+ char *str, *buf;
+
++ if (!cmdline)
++ return EFI_SUCCESS;
++
++ len = strlen(cmdline) + 1;
+ status = efi_bs_call(allocate_pool, EFI_LOADER_DATA, len, (void **)&buf);
+ if (status != EFI_SUCCESS)
+ return status;
+@@ -87,6 +91,8 @@ efi_status_t efi_parse_options(char const *cmdline)
+ char *param, *val;
+
+ 		str = next_arg(str, &param, &val);
++ if (!val && !strcmp(param, "--"))
++ break;
+
+ if (!strcmp(param, "nokaslr")) {
+ efi_nokaslr = true;
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index 7cb4fe479614e..debad34015913 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -1983,6 +1983,7 @@ void amdgpu_dm_update_connector_after_detect(
+
+ drm_connector_update_edid_property(connector,
+ aconnector->edid);
++ drm_add_edid_modes(connector, aconnector->edid);
+
+ if (aconnector->dc_link->aux_mode)
+ drm_dp_cec_set_edid(&aconnector->dm_dp_aux.aux,
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link.c b/drivers/gpu/drm/amd/display/dc/core/dc_link.c
+index 3f157bcc174b9..92079e2fa515a 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc_link.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc_link.c
+@@ -3113,12 +3113,11 @@ void core_link_disable_stream(struct pipe_ctx *pipe_ctx)
+ dc_is_virtual_signal(pipe_ctx->stream->signal))
+ return;
+
++ dc->hwss.blank_stream(pipe_ctx);
+ #if defined(CONFIG_DRM_AMD_DC_HDCP)
+ update_psp_stream_config(pipe_ctx, true);
+ #endif
+
+- dc->hwss.blank_stream(pipe_ctx);
+-
+ if (pipe_ctx->stream->signal == SIGNAL_TYPE_DISPLAY_PORT_MST)
+ deallocate_mst_payload(pipe_ctx);
+
+@@ -3146,11 +3145,9 @@ void core_link_disable_stream(struct pipe_ctx *pipe_ctx)
+ write_i2c_redriver_setting(pipe_ctx, false);
+ }
+ }
+-
+- disable_link(pipe_ctx->stream->link, pipe_ctx->stream->signal);
+-
+ dc->hwss.disable_stream(pipe_ctx);
+
++ disable_link(pipe_ctx->stream->link, pipe_ctx->stream->signal);
+ if (pipe_ctx->stream->timing.flags.DSC) {
+ if (dc_is_dp_signal(pipe_ctx->stream->signal))
+ dp_set_dsc_enable(pipe_ctx, false);
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c b/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
+index 1ada01322cd2c..caa090d0b6acc 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
+@@ -1103,10 +1103,6 @@ static inline enum link_training_result perform_link_training_int(
+ dpcd_pattern.v1_4.TRAINING_PATTERN_SET = DPCD_TRAINING_PATTERN_VIDEOIDLE;
+ dpcd_set_training_pattern(link, dpcd_pattern);
+
+- /* delay 5ms after notifying sink of idle pattern before switching output */
+- if (link->connector_signal != SIGNAL_TYPE_EDP)
+- msleep(5);
+-
+ /* 4. mainlink output idle pattern*/
+ dp_set_hw_test_pattern(link, DP_TEST_PATTERN_VIDEO_MODE, NULL, 0);
+
+@@ -1556,12 +1552,6 @@ bool perform_link_training_with_retries(
+ struct dc_link *link = stream->link;
+ enum dp_panel_mode panel_mode = dp_get_panel_mode(link);
+
+- /* We need to do this before the link training to ensure the idle pattern in SST
+- * mode will be sent right after the link training
+- */
+- link->link_enc->funcs->connect_dig_be_to_fe(link->link_enc,
+- pipe_ctx->stream_res.stream_enc->id, true);
+-
+ for (j = 0; j < attempts; ++j) {
+
+ dp_enable_link_phy(
+@@ -1578,6 +1568,12 @@ bool perform_link_training_with_retries(
+
+ dp_set_panel_mode(link, panel_mode);
+
++ /* We need to do this before the link training to ensure the idle pattern in SST
++ * mode will be sent right after the link training
++ */
++ link->link_enc->funcs->connect_dig_be_to_fe(link->link_enc,
++ pipe_ctx->stream_res.stream_enc->id, true);
++
+ if (link->aux_access_disabled) {
+ dc_link_dp_perform_link_training_skip_aux(link, link_setting);
+ return true;
+diff --git a/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c b/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c
+index 24ca592c90df5..10527593868cc 100644
+--- a/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c
++++ b/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c
+@@ -1090,17 +1090,8 @@ void dce110_blank_stream(struct pipe_ctx *pipe_ctx)
+ dc_link_set_abm_disable(link);
+ }
+
+- if (dc_is_dp_signal(pipe_ctx->stream->signal)) {
++ if (dc_is_dp_signal(pipe_ctx->stream->signal))
+ pipe_ctx->stream_res.stream_enc->funcs->dp_blank(pipe_ctx->stream_res.stream_enc);
+-
+- /*
+- * After output is idle pattern some sinks need time to recognize the stream
+- * has changed or they enter protection state and hang.
+- */
+- if (!dc_is_embedded_signal(pipe_ctx->stream->signal))
+- msleep(60);
+- }
+-
+ }
+
+
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c
+index c4fa13e4eaf96..ab93cecb78f68 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c
+@@ -1386,8 +1386,8 @@ static void dcn20_update_dchubp_dpp(
+
+ /* Any updates are handled in dc interface, just need to apply existing for plane enable */
+ if ((pipe_ctx->update_flags.bits.enable || pipe_ctx->update_flags.bits.opp_changed ||
+- pipe_ctx->update_flags.bits.scaler || pipe_ctx->update_flags.bits.viewport)
+- && pipe_ctx->stream->cursor_attributes.address.quad_part != 0) {
++ pipe_ctx->update_flags.bits.scaler || viewport_changed == true) &&
++ pipe_ctx->stream->cursor_attributes.address.quad_part != 0) {
+ dc->hwss.set_cursor_position(pipe_ctx);
+ dc->hwss.set_cursor_attribute(pipe_ctx);
+
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c
+index 2719cdecc1cb0..d37ede03510ff 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c
+@@ -3031,7 +3031,7 @@ static bool dcn20_validate_bandwidth_internal(struct dc *dc, struct dc_state *co
+ int vlevel = 0;
+ int pipe_split_from[MAX_PIPES];
+ int pipe_cnt = 0;
+- display_e2e_pipe_params_st *pipes = kzalloc(dc->res_pool->pipe_count * sizeof(display_e2e_pipe_params_st), GFP_KERNEL);
++ display_e2e_pipe_params_st *pipes = kzalloc(dc->res_pool->pipe_count * sizeof(display_e2e_pipe_params_st), GFP_ATOMIC);
+ DC_LOGGER_INIT(dc->ctx->logger);
+
+ BW_VAL_TRACE_COUNT();
+diff --git a/drivers/gpu/drm/amd/display/include/fixed31_32.h b/drivers/gpu/drm/amd/display/include/fixed31_32.h
+index 89ef9f6860e5b..16df2a485dd0d 100644
+--- a/drivers/gpu/drm/amd/display/include/fixed31_32.h
++++ b/drivers/gpu/drm/amd/display/include/fixed31_32.h
+@@ -431,6 +431,9 @@ struct fixed31_32 dc_fixpt_log(struct fixed31_32 arg);
+ */
+ static inline struct fixed31_32 dc_fixpt_pow(struct fixed31_32 arg1, struct fixed31_32 arg2)
+ {
++ if (arg1.value == 0)
++ return arg2.value == 0 ? dc_fixpt_one : dc_fixpt_zero;
++
+ return dc_fixpt_exp(
+ dc_fixpt_mul(
+ dc_fixpt_log(arg1),
+diff --git a/drivers/gpu/drm/panel/panel-simple.c b/drivers/gpu/drm/panel/panel-simple.c
+index 346e3f9fd505a..a68eff1fb4297 100644
+--- a/drivers/gpu/drm/panel/panel-simple.c
++++ b/drivers/gpu/drm/panel/panel-simple.c
+@@ -1537,7 +1537,7 @@ static const struct drm_display_mode frida_frd350h54004_mode = {
+ .vsync_end = 240 + 2 + 6,
+ .vtotal = 240 + 2 + 6 + 2,
+ .vrefresh = 60,
+- .flags = DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC,
++ .flags = DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC,
+ };
+
+ static const struct panel_desc frida_frd350h54004 = {
+diff --git a/drivers/gpu/drm/ttm/ttm_bo_vm.c b/drivers/gpu/drm/ttm/ttm_bo_vm.c
+index 72100b84c7a90..b08fdfa4291b2 100644
+--- a/drivers/gpu/drm/ttm/ttm_bo_vm.c
++++ b/drivers/gpu/drm/ttm/ttm_bo_vm.c
+@@ -505,8 +505,10 @@ static int ttm_bo_vm_access_kmap(struct ttm_buffer_object *bo,
+ int ttm_bo_vm_access(struct vm_area_struct *vma, unsigned long addr,
+ void *buf, int len, int write)
+ {
+- unsigned long offset = (addr) - vma->vm_start;
+ struct ttm_buffer_object *bo = vma->vm_private_data;
++ unsigned long offset = (addr) - vma->vm_start +
++ ((vma->vm_pgoff - drm_vma_node_start(&bo->base.vma_node))
++ << PAGE_SHIFT);
+ int ret;
+
+ if (len < 1 || (offset + len) >> PAGE_SHIFT > bo->num_pages)
+diff --git a/drivers/gpu/drm/vgem/vgem_drv.c b/drivers/gpu/drm/vgem/vgem_drv.c
+index 909eba43664a2..204d1df5a21d1 100644
+--- a/drivers/gpu/drm/vgem/vgem_drv.c
++++ b/drivers/gpu/drm/vgem/vgem_drv.c
+@@ -229,32 +229,6 @@ static int vgem_gem_dumb_create(struct drm_file *file, struct drm_device *dev,
+ return 0;
+ }
+
+-static int vgem_gem_dumb_map(struct drm_file *file, struct drm_device *dev,
+- uint32_t handle, uint64_t *offset)
+-{
+- struct drm_gem_object *obj;
+- int ret;
+-
+- obj = drm_gem_object_lookup(file, handle);
+- if (!obj)
+- return -ENOENT;
+-
+- if (!obj->filp) {
+- ret = -EINVAL;
+- goto unref;
+- }
+-
+- ret = drm_gem_create_mmap_offset(obj);
+- if (ret)
+- goto unref;
+-
+- *offset = drm_vma_node_offset_addr(&obj->vma_node);
+-unref:
+- drm_gem_object_put_unlocked(obj);
+-
+- return ret;
+-}
+-
+ static struct drm_ioctl_desc vgem_ioctls[] = {
+ DRM_IOCTL_DEF_DRV(VGEM_FENCE_ATTACH, vgem_fence_attach_ioctl, DRM_RENDER_ALLOW),
+ DRM_IOCTL_DEF_DRV(VGEM_FENCE_SIGNAL, vgem_fence_signal_ioctl, DRM_RENDER_ALLOW),
+@@ -448,7 +422,6 @@ static struct drm_driver vgem_driver = {
+ .fops = &vgem_driver_fops,
+
+ .dumb_create = vgem_gem_dumb_create,
+- .dumb_map_offset = vgem_gem_dumb_map,
+
+ .prime_handle_to_fd = drm_gem_prime_handle_to_fd,
+ .prime_fd_to_handle = drm_gem_prime_fd_to_handle,
+diff --git a/drivers/gpu/drm/virtio/virtgpu_ioctl.c b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
+index 512daff920387..1fc3fa00685d0 100644
+--- a/drivers/gpu/drm/virtio/virtgpu_ioctl.c
++++ b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
+@@ -180,6 +180,7 @@ static int virtio_gpu_execbuffer_ioctl(struct drm_device *dev, void *data,
+
+ virtio_gpu_cmd_submit(vgdev, buf, exbuf->size,
+ vfpriv->ctx_id, buflist, out_fence);
++ dma_fence_put(&out_fence->f);
+ virtio_gpu_notify(vgdev);
+ return 0;
+
+diff --git a/drivers/infiniband/hw/bnxt_re/main.c b/drivers/infiniband/hw/bnxt_re/main.c
+index b12fbc857f942..5c41e13496a02 100644
+--- a/drivers/infiniband/hw/bnxt_re/main.c
++++ b/drivers/infiniband/hw/bnxt_re/main.c
+@@ -811,7 +811,8 @@ static int bnxt_re_handle_qp_async_event(struct creq_qp_event *qp_event,
+ struct ib_event event;
+ unsigned int flags;
+
+- if (qp->qplib_qp.state == CMDQ_MODIFY_QP_NEW_STATE_ERR) {
++ if (qp->qplib_qp.state == CMDQ_MODIFY_QP_NEW_STATE_ERR &&
++ rdma_is_kernel_res(&qp->ib_qp.res)) {
+ flags = bnxt_re_lock_cqs(qp);
+ bnxt_qplib_add_flush_qp(&qp->qplib_qp);
+ bnxt_re_unlock_cqs(qp, flags);
+diff --git a/drivers/infiniband/hw/hfi1/tid_rdma.c b/drivers/infiniband/hw/hfi1/tid_rdma.c
+index 7c6fd720fb2ea..c018fc633cca3 100644
+--- a/drivers/infiniband/hw/hfi1/tid_rdma.c
++++ b/drivers/infiniband/hw/hfi1/tid_rdma.c
+@@ -3215,6 +3215,7 @@ bool hfi1_tid_rdma_wqe_interlock(struct rvt_qp *qp, struct rvt_swqe *wqe)
+ case IB_WR_ATOMIC_CMP_AND_SWP:
+ case IB_WR_ATOMIC_FETCH_AND_ADD:
+ case IB_WR_RDMA_WRITE:
++ case IB_WR_RDMA_WRITE_WITH_IMM:
+ switch (prev->wr.opcode) {
+ case IB_WR_TID_RDMA_WRITE:
+ req = wqe_to_tid_req(prev);
+diff --git a/drivers/input/mouse/psmouse-base.c b/drivers/input/mouse/psmouse-base.c
+index 527ae0b9a191e..0b4a3039f312f 100644
+--- a/drivers/input/mouse/psmouse-base.c
++++ b/drivers/input/mouse/psmouse-base.c
+@@ -2042,7 +2042,7 @@ static int psmouse_get_maxproto(char *buffer, const struct kernel_param *kp)
+ {
+ int type = *((unsigned int *)kp->arg);
+
+- return sprintf(buffer, "%s", psmouse_protocol_by_type(type)->name);
++ return sprintf(buffer, "%s\n", psmouse_protocol_by_type(type)->name);
+ }
+
+ static int __init psmouse_init(void)
+diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
+index b4d23d9f30f9b..d5477faa14edd 100644
+--- a/drivers/md/bcache/super.c
++++ b/drivers/md/bcache/super.c
+@@ -825,19 +825,19 @@ static int bcache_device_init(struct bcache_device *d, unsigned int block_size,
+ struct request_queue *q;
+ const size_t max_stripes = min_t(size_t, INT_MAX,
+ SIZE_MAX / sizeof(atomic_t));
+- size_t n;
++ uint64_t n;
+ int idx;
+
+ if (!d->stripe_size)
+ d->stripe_size = 1 << 31;
+
+- d->nr_stripes = DIV_ROUND_UP_ULL(sectors, d->stripe_size);
+-
+- if (!d->nr_stripes || d->nr_stripes > max_stripes) {
+- pr_err("nr_stripes too large or invalid: %u (start sector beyond end of disk?)",
+- (unsigned int)d->nr_stripes);
++ n = DIV_ROUND_UP_ULL(sectors, d->stripe_size);
++ if (!n || n > max_stripes) {
++ pr_err("nr_stripes too large or invalid: %llu (start sector beyond end of disk?)\n",
++ n);
+ return -ENOMEM;
+ }
++ d->nr_stripes = n;
+
+ n = d->nr_stripes * sizeof(atomic_t);
+ d->stripe_sectors_dirty = kvzalloc(n, GFP_KERNEL);
+diff --git a/drivers/media/pci/ttpci/budget-core.c b/drivers/media/pci/ttpci/budget-core.c
+index fadbdeeb44955..293867b9e7961 100644
+--- a/drivers/media/pci/ttpci/budget-core.c
++++ b/drivers/media/pci/ttpci/budget-core.c
+@@ -369,20 +369,25 @@ static int budget_register(struct budget *budget)
+ ret = dvbdemux->dmx.add_frontend(&dvbdemux->dmx, &budget->hw_frontend);
+
+ if (ret < 0)
+- return ret;
++ goto err_release_dmx;
+
+ budget->mem_frontend.source = DMX_MEMORY_FE;
+ ret = dvbdemux->dmx.add_frontend(&dvbdemux->dmx, &budget->mem_frontend);
+ if (ret < 0)
+- return ret;
++ goto err_release_dmx;
+
+ ret = dvbdemux->dmx.connect_frontend(&dvbdemux->dmx, &budget->hw_frontend);
+ if (ret < 0)
+- return ret;
++ goto err_release_dmx;
+
+ dvb_net_init(&budget->dvb_adapter, &budget->dvb_net, &dvbdemux->dmx);
+
+ return 0;
++
++err_release_dmx:
++ dvb_dmxdev_release(&budget->dmxdev);
++ dvb_dmx_release(&budget->demux);
++ return ret;
+ }
+
+ static void budget_unregister(struct budget *budget)
+diff --git a/drivers/media/platform/davinci/vpss.c b/drivers/media/platform/davinci/vpss.c
+index d38d2bbb6f0f8..7000f0bf0b353 100644
+--- a/drivers/media/platform/davinci/vpss.c
++++ b/drivers/media/platform/davinci/vpss.c
+@@ -505,19 +505,31 @@ static void vpss_exit(void)
+
+ static int __init vpss_init(void)
+ {
++ int ret;
++
+ if (!request_mem_region(VPSS_CLK_CTRL, 4, "vpss_clock_control"))
+ return -EBUSY;
+
+ oper_cfg.vpss_regs_base2 = ioremap(VPSS_CLK_CTRL, 4);
+ if (unlikely(!oper_cfg.vpss_regs_base2)) {
+- release_mem_region(VPSS_CLK_CTRL, 4);
+- return -ENOMEM;
++ ret = -ENOMEM;
++ goto err_ioremap;
+ }
+
+ writel(VPSS_CLK_CTRL_VENCCLKEN |
+- VPSS_CLK_CTRL_DACCLKEN, oper_cfg.vpss_regs_base2);
++ VPSS_CLK_CTRL_DACCLKEN, oper_cfg.vpss_regs_base2);
++
++ ret = platform_driver_register(&vpss_driver);
++ if (ret)
++ goto err_pd_register;
++
++ return 0;
+
+- return platform_driver_register(&vpss_driver);
++err_pd_register:
++ iounmap(oper_cfg.vpss_regs_base2);
++err_ioremap:
++ release_mem_region(VPSS_CLK_CTRL, 4);
++ return ret;
+ }
+ subsys_initcall(vpss_init);
+ module_exit(vpss_exit);
+diff --git a/drivers/media/platform/qcom/camss/camss.c b/drivers/media/platform/qcom/camss/camss.c
+index 3fdc9f964a3c6..2483641799dfb 100644
+--- a/drivers/media/platform/qcom/camss/camss.c
++++ b/drivers/media/platform/qcom/camss/camss.c
+@@ -504,7 +504,6 @@ static int camss_of_parse_ports(struct camss *camss)
+ return num_subdevs;
+
+ err_cleanup:
+- v4l2_async_notifier_cleanup(&camss->notifier);
+ of_node_put(node);
+ return ret;
+ }
+@@ -835,29 +834,38 @@ static int camss_probe(struct platform_device *pdev)
+ camss->csid_num = 4;
+ camss->vfe_num = 2;
+ } else {
+- return -EINVAL;
++ ret = -EINVAL;
++ goto err_free;
+ }
+
+ camss->csiphy = devm_kcalloc(dev, camss->csiphy_num,
+ sizeof(*camss->csiphy), GFP_KERNEL);
+- if (!camss->csiphy)
+- return -ENOMEM;
++ if (!camss->csiphy) {
++ ret = -ENOMEM;
++ goto err_free;
++ }
+
+ camss->csid = devm_kcalloc(dev, camss->csid_num, sizeof(*camss->csid),
+ GFP_KERNEL);
+- if (!camss->csid)
+- return -ENOMEM;
++ if (!camss->csid) {
++ ret = -ENOMEM;
++ goto err_free;
++ }
+
+ camss->vfe = devm_kcalloc(dev, camss->vfe_num, sizeof(*camss->vfe),
+ GFP_KERNEL);
+- if (!camss->vfe)
+- return -ENOMEM;
++ if (!camss->vfe) {
++ ret = -ENOMEM;
++ goto err_free;
++ }
+
+ v4l2_async_notifier_init(&camss->notifier);
+
+ num_subdevs = camss_of_parse_ports(camss);
+- if (num_subdevs < 0)
+- return num_subdevs;
++ if (num_subdevs < 0) {
++ ret = num_subdevs;
++ goto err_cleanup;
++ }
+
+ ret = camss_init_subdevices(camss);
+ if (ret < 0)
+@@ -936,6 +944,8 @@ err_register_entities:
+ v4l2_device_unregister(&camss->v4l2_dev);
+ err_cleanup:
+ v4l2_async_notifier_cleanup(&camss->notifier);
++err_free:
++ kfree(camss);
+
+ return ret;
+ }
+diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
+index 6b40b5ab143a7..07624e89b96d6 100644
+--- a/drivers/net/bonding/bond_main.c
++++ b/drivers/net/bonding/bond_main.c
+@@ -2084,7 +2084,8 @@ static int bond_release_and_destroy(struct net_device *bond_dev,
+ int ret;
+
+ ret = __bond_release_one(bond_dev, slave_dev, false, true);
+- if (ret == 0 && !bond_has_slaves(bond)) {
++ if (ret == 0 && !bond_has_slaves(bond) &&
++ bond_dev->reg_state != NETREG_UNREGISTERING) {
+ bond_dev->priv_flags |= IFF_DISABLE_NETPOLL;
+ netdev_info(bond_dev, "Destroying bond\n");
+ bond_remove_proc_entry(bond);
+@@ -2824,6 +2825,9 @@ static int bond_ab_arp_inspect(struct bonding *bond)
+ if (bond_time_in_interval(bond, last_rx, 1)) {
+ bond_propose_link_state(slave, BOND_LINK_UP);
+ commit++;
++ } else if (slave->link == BOND_LINK_BACK) {
++ bond_propose_link_state(slave, BOND_LINK_FAIL);
++ commit++;
+ }
+ continue;
+ }
+@@ -2932,6 +2936,19 @@ static void bond_ab_arp_commit(struct bonding *bond)
+
+ continue;
+
++ case BOND_LINK_FAIL:
++ bond_set_slave_link_state(slave, BOND_LINK_FAIL,
++ BOND_SLAVE_NOTIFY_NOW);
++ bond_set_slave_inactive_flags(slave,
++ BOND_SLAVE_NOTIFY_NOW);
++
++ /* A slave has just been enslaved and has become
++ * the current active slave.
++ */
++ if (rtnl_dereference(bond->curr_active_slave))
++ RCU_INIT_POINTER(bond->current_arp_slave, NULL);
++ continue;
++
+ default:
+ slave_err(bond->dev, slave->dev,
+ "impossible: link_new_state %d on slave\n",
+@@ -2982,8 +2999,6 @@ static bool bond_ab_arp_probe(struct bonding *bond)
+ return should_notify_rtnl;
+ }
+
+- bond_set_slave_inactive_flags(curr_arp_slave, BOND_SLAVE_NOTIFY_LATER);
+-
+ bond_for_each_slave_rcu(bond, slave, iter) {
+ if (!found && !before && bond_slave_is_up(slave))
+ before = slave;
+@@ -4336,13 +4351,23 @@ static netdev_tx_t bond_start_xmit(struct sk_buff *skb, struct net_device *dev)
+ return ret;
+ }
+
++static u32 bond_mode_bcast_speed(struct slave *slave, u32 speed)
++{
++ if (speed == 0 || speed == SPEED_UNKNOWN)
++ speed = slave->speed;
++ else
++ speed = min(speed, slave->speed);
++
++ return speed;
++}
++
+ static int bond_ethtool_get_link_ksettings(struct net_device *bond_dev,
+ struct ethtool_link_ksettings *cmd)
+ {
+ struct bonding *bond = netdev_priv(bond_dev);
+- unsigned long speed = 0;
+ struct list_head *iter;
+ struct slave *slave;
++ u32 speed = 0;
+
+ cmd->base.duplex = DUPLEX_UNKNOWN;
+ cmd->base.port = PORT_OTHER;
+@@ -4354,8 +4379,13 @@ static int bond_ethtool_get_link_ksettings(struct net_device *bond_dev,
+ */
+ bond_for_each_slave(bond, slave, iter) {
+ if (bond_slave_can_tx(slave)) {
+- if (slave->speed != SPEED_UNKNOWN)
+- speed += slave->speed;
++ if (slave->speed != SPEED_UNKNOWN) {
++ if (BOND_MODE(bond) == BOND_MODE_BROADCAST)
++ speed = bond_mode_bcast_speed(slave,
++ speed);
++ else
++ speed += slave->speed;
++ }
+ if (cmd->base.duplex == DUPLEX_UNKNOWN &&
+ slave->duplex != DUPLEX_UNKNOWN)
+ cmd->base.duplex = slave->duplex;
+diff --git a/drivers/net/dsa/b53/b53_common.c b/drivers/net/dsa/b53/b53_common.c
+index c283593bef17e..dc1979096302b 100644
+--- a/drivers/net/dsa/b53/b53_common.c
++++ b/drivers/net/dsa/b53/b53_common.c
+@@ -1556,6 +1556,8 @@ static int b53_arl_op(struct b53_device *dev, int op, int port,
+ return ret;
+
+ switch (ret) {
++ case -ETIMEDOUT:
++ return ret;
+ case -ENOSPC:
+ dev_dbg(dev->dev, "{%pM,%.4d} no space left in ARL\n",
+ addr, vid);
+diff --git a/drivers/net/ethernet/amazon/ena/ena_netdev.c b/drivers/net/ethernet/amazon/ena/ena_netdev.c
+index 15ce93be05eac..c501a4edc34d6 100644
+--- a/drivers/net/ethernet/amazon/ena/ena_netdev.c
++++ b/drivers/net/ethernet/amazon/ena/ena_netdev.c
+@@ -2166,13 +2166,10 @@ static void ena_del_napi_in_range(struct ena_adapter *adapter,
+ int i;
+
+ for (i = first_index; i < first_index + count; i++) {
+- /* Check if napi was initialized before */
+- if (!ENA_IS_XDP_INDEX(adapter, i) ||
+- adapter->ena_napi[i].xdp_ring)
+- netif_napi_del(&adapter->ena_napi[i].napi);
+- else
+- WARN_ON(ENA_IS_XDP_INDEX(adapter, i) &&
+- adapter->ena_napi[i].xdp_ring);
++ netif_napi_del(&adapter->ena_napi[i].napi);
++
++ WARN_ON(!ENA_IS_XDP_INDEX(adapter, i) &&
++ adapter->ena_napi[i].xdp_ring);
+ }
+ }
+
+@@ -3508,16 +3505,14 @@ static void ena_fw_reset_device(struct work_struct *work)
+ {
+ struct ena_adapter *adapter =
+ container_of(work, struct ena_adapter, reset_task);
+- struct pci_dev *pdev = adapter->pdev;
+
+- if (unlikely(!test_bit(ENA_FLAG_TRIGGER_RESET, &adapter->flags))) {
+- dev_err(&pdev->dev,
+- "device reset schedule while reset bit is off\n");
+- return;
+- }
+ rtnl_lock();
+- ena_destroy_device(adapter, false);
+- ena_restore_device(adapter);
++
++ if (likely(test_bit(ENA_FLAG_TRIGGER_RESET, &adapter->flags))) {
++ ena_destroy_device(adapter, false);
++ ena_restore_device(adapter);
++ }
++
+ rtnl_unlock();
+ }
+
+@@ -4351,8 +4346,11 @@ static void __ena_shutoff(struct pci_dev *pdev, bool shutdown)
+ netdev->rx_cpu_rmap = NULL;
+ }
+ #endif /* CONFIG_RFS_ACCEL */
+- del_timer_sync(&adapter->timer_service);
+
++ /* Make sure timer and reset routine won't be called after
++ * freeing device resources.
++ */
++ del_timer_sync(&adapter->timer_service);
+ cancel_work_sync(&adapter->reset_task);
+
+ rtnl_lock(); /* lock released inside the below if-else block */
+diff --git a/drivers/net/ethernet/cortina/gemini.c b/drivers/net/ethernet/cortina/gemini.c
+index 5359fb40578db..e641890e9702f 100644
+--- a/drivers/net/ethernet/cortina/gemini.c
++++ b/drivers/net/ethernet/cortina/gemini.c
+@@ -2388,7 +2388,7 @@ static int gemini_ethernet_port_probe(struct platform_device *pdev)
+
+ dev_info(dev, "probe %s ID %d\n", dev_name(dev), id);
+
+- netdev = alloc_etherdev_mq(sizeof(*port), TX_QUEUE_NUM);
++ netdev = devm_alloc_etherdev_mqs(dev, sizeof(*port), TX_QUEUE_NUM, TX_QUEUE_NUM);
+ if (!netdev) {
+ dev_err(dev, "Can't allocate ethernet device #%d\n", id);
+ return -ENOMEM;
+@@ -2520,7 +2520,6 @@ static int gemini_ethernet_port_probe(struct platform_device *pdev)
+ }
+
+ port->netdev = NULL;
+- free_netdev(netdev);
+ return ret;
+ }
+
+@@ -2529,7 +2528,6 @@ static int gemini_ethernet_port_remove(struct platform_device *pdev)
+ struct gemini_ethernet_port *port = platform_get_drvdata(pdev);
+
+ gemini_port_remove(port);
+- free_netdev(port->netdev);
+ return 0;
+ }
+
+diff --git a/drivers/net/ethernet/freescale/fec_main.c b/drivers/net/ethernet/freescale/fec_main.c
+index bf73bc9bf35b9..76abafd099e22 100644
+--- a/drivers/net/ethernet/freescale/fec_main.c
++++ b/drivers/net/ethernet/freescale/fec_main.c
+@@ -3719,11 +3719,11 @@ failed_mii_init:
+ failed_irq:
+ failed_init:
+ fec_ptp_stop(pdev);
+- if (fep->reg_phy)
+- regulator_disable(fep->reg_phy);
+ failed_reset:
+ pm_runtime_put_noidle(&pdev->dev);
+ pm_runtime_disable(&pdev->dev);
++ if (fep->reg_phy)
++ regulator_disable(fep->reg_phy);
+ failed_regulator:
+ clk_disable_unprepare(fep->clk_ahb);
+ failed_clk_ahb:
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_adminq_cmd.h b/drivers/net/ethernet/intel/i40e/i40e_adminq_cmd.h
+index aa5f1c0aa7215..0921785a10795 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_adminq_cmd.h
++++ b/drivers/net/ethernet/intel/i40e/i40e_adminq_cmd.h
+@@ -1211,7 +1211,7 @@ struct i40e_aqc_set_vsi_promiscuous_modes {
+ #define I40E_AQC_SET_VSI_PROMISC_BROADCAST 0x04
+ #define I40E_AQC_SET_VSI_DEFAULT 0x08
+ #define I40E_AQC_SET_VSI_PROMISC_VLAN 0x10
+-#define I40E_AQC_SET_VSI_PROMISC_TX 0x8000
++#define I40E_AQC_SET_VSI_PROMISC_RX_ONLY 0x8000
+ __le16 seid;
+ #define I40E_AQC_VSI_PROM_CMD_SEID_MASK 0x3FF
+ __le16 vlan_tag;
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_common.c b/drivers/net/ethernet/intel/i40e/i40e_common.c
+index 45b90eb11adba..21e44c6cd5eac 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_common.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_common.c
+@@ -1969,6 +1969,21 @@ i40e_status i40e_aq_set_phy_debug(struct i40e_hw *hw, u8 cmd_flags,
+ return status;
+ }
+
++/**
++ * i40e_is_aq_api_ver_ge
++ * @aq: pointer to AdminQ info containing HW API version to compare
++ * @maj: API major value
++ * @min: API minor value
++ *
++ * Assert whether current HW API version is greater/equal than provided.
++ **/
++static bool i40e_is_aq_api_ver_ge(struct i40e_adminq_info *aq, u16 maj,
++ u16 min)
++{
++ return (aq->api_maj_ver > maj ||
++ (aq->api_maj_ver == maj && aq->api_min_ver >= min));
++}
++
+ /**
+ * i40e_aq_add_vsi
+ * @hw: pointer to the hw struct
+@@ -2094,18 +2109,16 @@ i40e_status i40e_aq_set_vsi_unicast_promiscuous(struct i40e_hw *hw,
+
+ if (set) {
+ flags |= I40E_AQC_SET_VSI_PROMISC_UNICAST;
+- if (rx_only_promisc &&
+- (((hw->aq.api_maj_ver == 1) && (hw->aq.api_min_ver >= 5)) ||
+- (hw->aq.api_maj_ver > 1)))
+- flags |= I40E_AQC_SET_VSI_PROMISC_TX;
++ if (rx_only_promisc && i40e_is_aq_api_ver_ge(&hw->aq, 1, 5))
++ flags |= I40E_AQC_SET_VSI_PROMISC_RX_ONLY;
+ }
+
+ cmd->promiscuous_flags = cpu_to_le16(flags);
+
+ cmd->valid_flags = cpu_to_le16(I40E_AQC_SET_VSI_PROMISC_UNICAST);
+- if (((hw->aq.api_maj_ver >= 1) && (hw->aq.api_min_ver >= 5)) ||
+- (hw->aq.api_maj_ver > 1))
+- cmd->valid_flags |= cpu_to_le16(I40E_AQC_SET_VSI_PROMISC_TX);
++ if (i40e_is_aq_api_ver_ge(&hw->aq, 1, 5))
++ cmd->valid_flags |=
++ cpu_to_le16(I40E_AQC_SET_VSI_PROMISC_RX_ONLY);
+
+ cmd->seid = cpu_to_le16(seid);
+ status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details);
+@@ -2202,11 +2215,17 @@ enum i40e_status_code i40e_aq_set_vsi_uc_promisc_on_vlan(struct i40e_hw *hw,
+ i40e_fill_default_direct_cmd_desc(&desc,
+ i40e_aqc_opc_set_vsi_promiscuous_modes);
+
+- if (enable)
++ if (enable) {
+ flags |= I40E_AQC_SET_VSI_PROMISC_UNICAST;
++ if (i40e_is_aq_api_ver_ge(&hw->aq, 1, 5))
++ flags |= I40E_AQC_SET_VSI_PROMISC_RX_ONLY;
++ }
+
+ cmd->promiscuous_flags = cpu_to_le16(flags);
+ cmd->valid_flags = cpu_to_le16(I40E_AQC_SET_VSI_PROMISC_UNICAST);
++ if (i40e_is_aq_api_ver_ge(&hw->aq, 1, 5))
++ cmd->valid_flags |=
++ cpu_to_le16(I40E_AQC_SET_VSI_PROMISC_RX_ONLY);
+ cmd->seid = cpu_to_le16(seid);
+ cmd->vlan_tag = cpu_to_le16(vid | I40E_AQC_SET_VSI_VLAN_VALID);
+
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c
+index 80dc5fcb82db7..deb2d77ef975e 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_main.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_main.c
+@@ -15344,6 +15344,9 @@ static void i40e_remove(struct pci_dev *pdev)
+ i40e_write_rx_ctl(hw, I40E_PFQF_HENA(0), 0);
+ i40e_write_rx_ctl(hw, I40E_PFQF_HENA(1), 0);
+
++ while (test_bit(__I40E_RESET_RECOVERY_PENDING, pf->state))
++ usleep_range(1000, 2000);
++
+ /* no more scheduling of any task */
+ set_bit(__I40E_SUSPENDED, pf->state);
+ set_bit(__I40E_DOWN, pf->state);
+diff --git a/drivers/net/ethernet/intel/igc/igc_main.c b/drivers/net/ethernet/intel/igc/igc_main.c
+index c7020ff2f490d..2ec89c99b6444 100644
+--- a/drivers/net/ethernet/intel/igc/igc_main.c
++++ b/drivers/net/ethernet/intel/igc/igc_main.c
+@@ -4801,6 +4801,8 @@ static int igc_probe(struct pci_dev *pdev,
+ device_set_wakeup_enable(&adapter->pdev->dev,
+ adapter->flags & IGC_FLAG_WOL_SUPPORTED);
+
++ igc_ptp_init(adapter);
++
+ /* reset the hardware with the new settings */
+ igc_reset(adapter);
+
+@@ -4817,9 +4819,6 @@ static int igc_probe(struct pci_dev *pdev,
+ /* carrier off reporting is important to ethtool even BEFORE open */
+ netif_carrier_off(netdev);
+
+- /* do hw tstamp init after resetting */
+- igc_ptp_init(adapter);
+-
+ /* Check if Media Autosense is enabled */
+ adapter->ei = *ei;
+
+diff --git a/drivers/net/ethernet/intel/igc/igc_ptp.c b/drivers/net/ethernet/intel/igc/igc_ptp.c
+index f99c514ad0f47..4f67bdd1948b5 100644
+--- a/drivers/net/ethernet/intel/igc/igc_ptp.c
++++ b/drivers/net/ethernet/intel/igc/igc_ptp.c
+@@ -620,8 +620,6 @@ void igc_ptp_init(struct igc_adapter *adapter)
+ adapter->tstamp_config.rx_filter = HWTSTAMP_FILTER_NONE;
+ adapter->tstamp_config.tx_type = HWTSTAMP_TX_OFF;
+
+- igc_ptp_reset(adapter);
+-
+ adapter->ptp_clock = ptp_clock_register(&adapter->ptp_caps,
+ &adapter->pdev->dev);
+ if (IS_ERR(adapter->ptp_clock)) {
+diff --git a/drivers/net/hyperv/netvsc_drv.c b/drivers/net/hyperv/netvsc_drv.c
+index b8b7fc13b3dc4..016fec19063a5 100644
+--- a/drivers/net/hyperv/netvsc_drv.c
++++ b/drivers/net/hyperv/netvsc_drv.c
+@@ -502,7 +502,7 @@ static int netvsc_vf_xmit(struct net_device *net, struct net_device *vf_netdev,
+ int rc;
+
+ skb->dev = vf_netdev;
+- skb->queue_mapping = qdisc_skb_cb(skb)->slave_dev_queue_mapping;
++ skb_record_rx_queue(skb, qdisc_skb_cb(skb)->slave_dev_queue_mapping);
+
+ rc = dev_queue_xmit(skb);
+ if (likely(rc == NET_XMIT_SUCCESS || rc == NET_XMIT_CN)) {
+diff --git a/drivers/net/ipvlan/ipvlan_main.c b/drivers/net/ipvlan/ipvlan_main.c
+index f195f278a83aa..7768f1120c1f6 100644
+--- a/drivers/net/ipvlan/ipvlan_main.c
++++ b/drivers/net/ipvlan/ipvlan_main.c
+@@ -106,12 +106,21 @@ static void ipvlan_port_destroy(struct net_device *dev)
+ kfree(port);
+ }
+
++#define IPVLAN_ALWAYS_ON_OFLOADS \
++ (NETIF_F_SG | NETIF_F_HW_CSUM | \
++ NETIF_F_GSO_ROBUST | NETIF_F_GSO_SOFTWARE | NETIF_F_GSO_ENCAP_ALL)
++
++#define IPVLAN_ALWAYS_ON \
++ (IPVLAN_ALWAYS_ON_OFLOADS | NETIF_F_LLTX | NETIF_F_VLAN_CHALLENGED)
++
+ #define IPVLAN_FEATURES \
+- (NETIF_F_SG | NETIF_F_CSUM_MASK | NETIF_F_HIGHDMA | NETIF_F_FRAGLIST | \
++ (NETIF_F_SG | NETIF_F_HW_CSUM | NETIF_F_HIGHDMA | NETIF_F_FRAGLIST | \
+ NETIF_F_GSO | NETIF_F_ALL_TSO | NETIF_F_GSO_ROBUST | \
+ NETIF_F_GRO | NETIF_F_RXCSUM | \
+ NETIF_F_HW_VLAN_CTAG_FILTER | NETIF_F_HW_VLAN_STAG_FILTER)
+
++ /* NETIF_F_GSO_ENCAP_ALL NETIF_F_GSO_SOFTWARE Newly added */
++
+ #define IPVLAN_STATE_MASK \
+ ((1<<__LINK_STATE_NOCARRIER) | (1<<__LINK_STATE_DORMANT))
+
+@@ -125,7 +134,9 @@ static int ipvlan_init(struct net_device *dev)
+ dev->state = (dev->state & ~IPVLAN_STATE_MASK) |
+ (phy_dev->state & IPVLAN_STATE_MASK);
+ dev->features = phy_dev->features & IPVLAN_FEATURES;
+- dev->features |= NETIF_F_LLTX | NETIF_F_VLAN_CHALLENGED;
++ dev->features |= IPVLAN_ALWAYS_ON;
++ dev->vlan_features = phy_dev->vlan_features & IPVLAN_FEATURES;
++ dev->vlan_features |= IPVLAN_ALWAYS_ON_OFLOADS;
+ dev->hw_enc_features |= dev->features;
+ dev->gso_max_size = phy_dev->gso_max_size;
+ dev->gso_max_segs = phy_dev->gso_max_segs;
+@@ -225,7 +236,14 @@ static netdev_features_t ipvlan_fix_features(struct net_device *dev,
+ {
+ struct ipvl_dev *ipvlan = netdev_priv(dev);
+
+- return features & (ipvlan->sfeatures | ~IPVLAN_FEATURES);
++ features |= NETIF_F_ALL_FOR_ALL;
++ features &= (ipvlan->sfeatures | ~IPVLAN_FEATURES);
++ features = netdev_increment_features(ipvlan->phy_dev->features,
++ features, features);
++ features |= IPVLAN_ALWAYS_ON;
++ features &= (IPVLAN_FEATURES | IPVLAN_ALWAYS_ON);
++
++ return features;
+ }
+
+ static void ipvlan_change_rx_flags(struct net_device *dev, int change)
+@@ -732,10 +750,9 @@ static int ipvlan_device_event(struct notifier_block *unused,
+
+ case NETDEV_FEAT_CHANGE:
+ list_for_each_entry(ipvlan, &port->ipvlans, pnode) {
+- ipvlan->dev->features = dev->features & IPVLAN_FEATURES;
+ ipvlan->dev->gso_max_size = dev->gso_max_size;
+ ipvlan->dev->gso_max_segs = dev->gso_max_segs;
+- netdev_features_change(ipvlan->dev);
++ netdev_update_features(ipvlan->dev);
+ }
+ break;
+
+diff --git a/drivers/of/address.c b/drivers/of/address.c
+index 8eea3f6e29a44..340d3051b1ce2 100644
+--- a/drivers/of/address.c
++++ b/drivers/of/address.c
+@@ -980,6 +980,11 @@ int of_dma_get_range(struct device_node *np, u64 *dma_addr, u64 *paddr, u64 *siz
+ /* Don't error out as we'd break some existing DTs */
+ continue;
+ }
++ if (range.cpu_addr == OF_BAD_ADDR) {
++ pr_err("translation of DMA address(%llx) to CPU address failed node(%pOF)\n",
++ range.bus_addr, node);
++ continue;
++ }
+ dma_offset = range.cpu_addr - range.bus_addr;
+
+ /* Take lower and upper limits */
+diff --git a/drivers/opp/core.c b/drivers/opp/core.c
+index e4f01e7771a22..a55d083e5be21 100644
+--- a/drivers/opp/core.c
++++ b/drivers/opp/core.c
+@@ -817,15 +817,23 @@ int dev_pm_opp_set_rate(struct device *dev, unsigned long target_freq)
+ }
+
+ if (unlikely(!target_freq)) {
+- if (opp_table->required_opp_tables) {
+- ret = _set_required_opps(dev, opp_table, NULL);
+- } else if (!_get_opp_count(opp_table)) {
+- return 0;
+- } else {
++ /*
++ * Some drivers need to support cases where some platforms may
++ * have OPP table for the device, while others don't and
++ * opp_set_rate() just needs to behave like clk_set_rate().
++ */
++ if (!_get_opp_count(opp_table)) {
++ ret = 0;
++ goto put_opp_table;
++ }
++
++ if (!opp_table->required_opp_tables) {
+ dev_err(dev, "target frequency can't be 0\n");
+ ret = -EINVAL;
++ goto put_opp_table;
+ }
+
++ ret = _set_required_opps(dev, opp_table, NULL);
+ goto put_opp_table;
+ }
+
+@@ -845,10 +853,12 @@ int dev_pm_opp_set_rate(struct device *dev, unsigned long target_freq)
+
+ /* Return early if nothing to do */
+ if (old_freq == freq) {
+- dev_dbg(dev, "%s: old/new frequencies (%lu Hz) are same, nothing to do\n",
+- __func__, freq);
+- ret = 0;
+- goto put_opp_table;
++ if (!opp_table->required_opp_tables && !opp_table->regulators) {
++ dev_dbg(dev, "%s: old/new frequencies (%lu Hz) are same, nothing to do\n",
++ __func__, freq);
++ ret = 0;
++ goto put_opp_table;
++ }
+ }
+
+ /*
+diff --git a/drivers/rtc/rtc-goldfish.c b/drivers/rtc/rtc-goldfish.c
+index cb6b0ad7ec3f2..5dd92147f1680 100644
+--- a/drivers/rtc/rtc-goldfish.c
++++ b/drivers/rtc/rtc-goldfish.c
+@@ -73,6 +73,7 @@ static int goldfish_rtc_set_alarm(struct device *dev,
+ rtc_alarm64 = rtc_tm_to_time64(&alrm->time) * NSEC_PER_SEC;
+ writel((rtc_alarm64 >> 32), base + TIMER_ALARM_HIGH);
+ writel(rtc_alarm64, base + TIMER_ALARM_LOW);
++ writel(1, base + TIMER_IRQ_ENABLED);
+ } else {
+ /*
+ * if this function was called with enabled=0
+diff --git a/drivers/s390/scsi/zfcp_fsf.c b/drivers/s390/scsi/zfcp_fsf.c
+index 111fe3fc32d76..e5d2299a7a39b 100644
+--- a/drivers/s390/scsi/zfcp_fsf.c
++++ b/drivers/s390/scsi/zfcp_fsf.c
+@@ -430,7 +430,7 @@ static void zfcp_fsf_req_complete(struct zfcp_fsf_req *req)
+ return;
+ }
+
+- del_timer(&req->timer);
++ del_timer_sync(&req->timer);
+ zfcp_fsf_protstatus_eval(req);
+ zfcp_fsf_fsfstatus_eval(req);
+ req->handler(req);
+@@ -905,7 +905,7 @@ static int zfcp_fsf_req_send(struct zfcp_fsf_req *req)
+ req->qdio_req.qdio_outb_usage = atomic_read(&qdio->req_q_free);
+ req->issued = get_tod_clock();
+ if (zfcp_qdio_send(qdio, &req->qdio_req)) {
+- del_timer(&req->timer);
++ del_timer_sync(&req->timer);
+ /* lookup request again, list might have changed */
+ zfcp_reqlist_find_rm(adapter->req_list, req_id);
+ zfcp_erp_adapter_reopen(adapter, 0, "fsrs__1");
+diff --git a/drivers/scsi/libfc/fc_disc.c b/drivers/scsi/libfc/fc_disc.c
+index 2b865c6423e29..e00dc4693fcbd 100644
+--- a/drivers/scsi/libfc/fc_disc.c
++++ b/drivers/scsi/libfc/fc_disc.c
+@@ -581,8 +581,12 @@ static void fc_disc_gpn_id_resp(struct fc_seq *sp, struct fc_frame *fp,
+
+ if (PTR_ERR(fp) == -FC_EX_CLOSED)
+ goto out;
+- if (IS_ERR(fp))
+- goto redisc;
++ if (IS_ERR(fp)) {
++ mutex_lock(&disc->disc_mutex);
++ fc_disc_restart(disc);
++ mutex_unlock(&disc->disc_mutex);
++ goto out;
++ }
+
+ cp = fc_frame_payload_get(fp, sizeof(*cp));
+ if (!cp)
+@@ -609,7 +613,7 @@ static void fc_disc_gpn_id_resp(struct fc_seq *sp, struct fc_frame *fp,
+ new_rdata->disc_id = disc->disc_id;
+ fc_rport_login(new_rdata);
+ }
+- goto out;
++ goto free_fp;
+ }
+ rdata->disc_id = disc->disc_id;
+ mutex_unlock(&rdata->rp_mutex);
+@@ -626,6 +630,8 @@ redisc:
+ fc_disc_restart(disc);
+ mutex_unlock(&disc->disc_mutex);
+ }
++free_fp:
++ fc_frame_free(fp);
+ out:
+ kref_put(&rdata->kref, fc_rport_destroy);
+ if (!IS_ERR(fp))
+diff --git a/drivers/scsi/qla2xxx/qla_os.c b/drivers/scsi/qla2xxx/qla_os.c
+index 1120d133204c2..20e3048276a01 100644
+--- a/drivers/scsi/qla2xxx/qla_os.c
++++ b/drivers/scsi/qla2xxx/qla_os.c
+@@ -2829,10 +2829,6 @@ qla2x00_probe_one(struct pci_dev *pdev, const struct pci_device_id *id)
+ /* This may fail but that's ok */
+ pci_enable_pcie_error_reporting(pdev);
+
+- /* Turn off T10-DIF when FC-NVMe is enabled */
+- if (ql2xnvmeenable)
+- ql2xenabledif = 0;
+-
+ ha = kzalloc(sizeof(struct qla_hw_data), GFP_KERNEL);
+ if (!ha) {
+ ql_log_pci(ql_log_fatal, pdev, 0x0009,
+diff --git a/drivers/scsi/ufs/ti-j721e-ufs.c b/drivers/scsi/ufs/ti-j721e-ufs.c
+index 46bb905b4d6a9..eafe0db98d542 100644
+--- a/drivers/scsi/ufs/ti-j721e-ufs.c
++++ b/drivers/scsi/ufs/ti-j721e-ufs.c
+@@ -38,6 +38,7 @@ static int ti_j721e_ufs_probe(struct platform_device *pdev)
+ /* Select MPHY refclk frequency */
+ clk = devm_clk_get(dev, NULL);
+ if (IS_ERR(clk)) {
++ ret = PTR_ERR(clk);
+ dev_err(dev, "Cannot claim MPHY clock.\n");
+ goto clk_err;
+ }
+diff --git a/drivers/scsi/ufs/ufs_quirks.h b/drivers/scsi/ufs/ufs_quirks.h
+index df7a1e6805a3b..c3af72c58805d 100644
+--- a/drivers/scsi/ufs/ufs_quirks.h
++++ b/drivers/scsi/ufs/ufs_quirks.h
+@@ -12,6 +12,7 @@
+ #define UFS_ANY_VENDOR 0xFFFF
+ #define UFS_ANY_MODEL "ANY_MODEL"
+
++#define UFS_VENDOR_MICRON 0x12C
+ #define UFS_VENDOR_TOSHIBA 0x198
+ #define UFS_VENDOR_SAMSUNG 0x1CE
+ #define UFS_VENDOR_SKHYNIX 0x1AD
+diff --git a/drivers/scsi/ufs/ufshcd-pci.c b/drivers/scsi/ufs/ufshcd-pci.c
+index 8f78a81514991..b220666774ce8 100644
+--- a/drivers/scsi/ufs/ufshcd-pci.c
++++ b/drivers/scsi/ufs/ufshcd-pci.c
+@@ -67,11 +67,23 @@ static int ufs_intel_link_startup_notify(struct ufs_hba *hba,
+ return err;
+ }
+
++static int ufs_intel_ehl_init(struct ufs_hba *hba)
++{
++ hba->quirks |= UFSHCD_QUIRK_BROKEN_AUTO_HIBERN8;
++ return 0;
++}
++
+ static struct ufs_hba_variant_ops ufs_intel_cnl_hba_vops = {
+ .name = "intel-pci",
+ .link_startup_notify = ufs_intel_link_startup_notify,
+ };
+
++static struct ufs_hba_variant_ops ufs_intel_ehl_hba_vops = {
++ .name = "intel-pci",
++ .init = ufs_intel_ehl_init,
++ .link_startup_notify = ufs_intel_link_startup_notify,
++};
++
+ #ifdef CONFIG_PM_SLEEP
+ /**
+ * ufshcd_pci_suspend - suspend power management function
+@@ -200,8 +212,8 @@ static const struct dev_pm_ops ufshcd_pci_pm_ops = {
+ static const struct pci_device_id ufshcd_pci_tbl[] = {
+ { PCI_VENDOR_ID_SAMSUNG, 0xC00C, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0 },
+ { PCI_VDEVICE(INTEL, 0x9DFA), (kernel_ulong_t)&ufs_intel_cnl_hba_vops },
+- { PCI_VDEVICE(INTEL, 0x4B41), (kernel_ulong_t)&ufs_intel_cnl_hba_vops },
+- { PCI_VDEVICE(INTEL, 0x4B43), (kernel_ulong_t)&ufs_intel_cnl_hba_vops },
++ { PCI_VDEVICE(INTEL, 0x4B41), (kernel_ulong_t)&ufs_intel_ehl_hba_vops },
++ { PCI_VDEVICE(INTEL, 0x4B43), (kernel_ulong_t)&ufs_intel_ehl_hba_vops },
+ { } /* terminate list */
+ };
+
+diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c
+index 477b6cfff381b..3b80d692dd2e7 100644
+--- a/drivers/scsi/ufs/ufshcd.c
++++ b/drivers/scsi/ufs/ufshcd.c
+@@ -211,6 +211,8 @@ ufs_get_desired_pm_lvl_for_dev_link_state(enum ufs_dev_pwr_mode dev_state,
+
+ static struct ufs_dev_fix ufs_fixups[] = {
+ /* UFS cards deviations table */
++ UFS_FIX(UFS_VENDOR_MICRON, UFS_ANY_MODEL,
++ UFS_DEVICE_QUIRK_DELAY_BEFORE_LPM),
+ UFS_FIX(UFS_VENDOR_SAMSUNG, UFS_ANY_MODEL,
+ UFS_DEVICE_QUIRK_DELAY_BEFORE_LPM),
+ UFS_FIX(UFS_VENDOR_SAMSUNG, UFS_ANY_MODEL,
+@@ -645,7 +647,11 @@ static inline int ufshcd_get_tr_ocs(struct ufshcd_lrb *lrbp)
+ */
+ static inline void ufshcd_utrl_clear(struct ufs_hba *hba, u32 pos)
+ {
+- ufshcd_writel(hba, ~(1 << pos), REG_UTP_TRANSFER_REQ_LIST_CLEAR);
++ if (hba->quirks & UFSHCI_QUIRK_BROKEN_REQ_LIST_CLR)
++ ufshcd_writel(hba, (1 << pos), REG_UTP_TRANSFER_REQ_LIST_CLEAR);
++ else
++ ufshcd_writel(hba, ~(1 << pos),
++ REG_UTP_TRANSFER_REQ_LIST_CLEAR);
+ }
+
+ /**
+@@ -655,7 +661,10 @@ static inline void ufshcd_utrl_clear(struct ufs_hba *hba, u32 pos)
+ */
+ static inline void ufshcd_utmrl_clear(struct ufs_hba *hba, u32 pos)
+ {
+- ufshcd_writel(hba, ~(1 << pos), REG_UTP_TASK_REQ_LIST_CLEAR);
++ if (hba->quirks & UFSHCI_QUIRK_BROKEN_REQ_LIST_CLR)
++ ufshcd_writel(hba, (1 << pos), REG_UTP_TASK_REQ_LIST_CLEAR);
++ else
++ ufshcd_writel(hba, ~(1 << pos), REG_UTP_TASK_REQ_LIST_CLEAR);
+ }
+
+ /**
+@@ -2149,8 +2158,14 @@ static int ufshcd_map_sg(struct ufs_hba *hba, struct ufshcd_lrb *lrbp)
+ return sg_segments;
+
+ if (sg_segments) {
+- lrbp->utr_descriptor_ptr->prd_table_length =
+- cpu_to_le16((u16)sg_segments);
++
++ if (hba->quirks & UFSHCD_QUIRK_PRDT_BYTE_GRAN)
++ lrbp->utr_descriptor_ptr->prd_table_length =
++ cpu_to_le16((sg_segments *
++ sizeof(struct ufshcd_sg_entry)));
++ else
++ lrbp->utr_descriptor_ptr->prd_table_length =
++ cpu_to_le16((u16) (sg_segments));
+
+ prd_table = (struct ufshcd_sg_entry *)lrbp->ucd_prdt_ptr;
+
+@@ -3496,11 +3511,21 @@ static void ufshcd_host_memory_configure(struct ufs_hba *hba)
+ cpu_to_le32(upper_32_bits(cmd_desc_element_addr));
+
+ /* Response upiu and prdt offset should be in double words */
+- utrdlp[i].response_upiu_offset =
+- cpu_to_le16(response_offset >> 2);
+- utrdlp[i].prd_table_offset = cpu_to_le16(prdt_offset >> 2);
+- utrdlp[i].response_upiu_length =
+- cpu_to_le16(ALIGNED_UPIU_SIZE >> 2);
++ if (hba->quirks & UFSHCD_QUIRK_PRDT_BYTE_GRAN) {
++ utrdlp[i].response_upiu_offset =
++ cpu_to_le16(response_offset);
++ utrdlp[i].prd_table_offset =
++ cpu_to_le16(prdt_offset);
++ utrdlp[i].response_upiu_length =
++ cpu_to_le16(ALIGNED_UPIU_SIZE);
++ } else {
++ utrdlp[i].response_upiu_offset =
++ cpu_to_le16(response_offset >> 2);
++ utrdlp[i].prd_table_offset =
++ cpu_to_le16(prdt_offset >> 2);
++ utrdlp[i].response_upiu_length =
++ cpu_to_le16(ALIGNED_UPIU_SIZE >> 2);
++ }
+
+ ufshcd_init_lrb(hba, &hba->lrb[i], i);
+ }
+@@ -3530,6 +3555,52 @@ static int ufshcd_dme_link_startup(struct ufs_hba *hba)
+ "dme-link-startup: error code %d\n", ret);
+ return ret;
+ }
++/**
++ * ufshcd_dme_reset - UIC command for DME_RESET
++ * @hba: per adapter instance
++ *
++ * DME_RESET command is issued in order to reset UniPro stack.
++ * This function now deals with cold reset.
++ *
++ * Returns 0 on success, non-zero value on failure
++ */
++static int ufshcd_dme_reset(struct ufs_hba *hba)
++{
++ struct uic_command uic_cmd = {0};
++ int ret;
++
++ uic_cmd.command = UIC_CMD_DME_RESET;
++
++ ret = ufshcd_send_uic_cmd(hba, &uic_cmd);
++ if (ret)
++ dev_err(hba->dev,
++ "dme-reset: error code %d\n", ret);
++
++ return ret;
++}
++
++/**
++ * ufshcd_dme_enable - UIC command for DME_ENABLE
++ * @hba: per adapter instance
++ *
++ * DME_ENABLE command is issued in order to enable UniPro stack.
++ *
++ * Returns 0 on success, non-zero value on failure
++ */
++static int ufshcd_dme_enable(struct ufs_hba *hba)
++{
++ struct uic_command uic_cmd = {0};
++ int ret;
++
++ uic_cmd.command = UIC_CMD_DME_ENABLE;
++
++ ret = ufshcd_send_uic_cmd(hba, &uic_cmd);
++ if (ret)
++ dev_err(hba->dev,
++ "dme-enable: error code %d\n", ret);
++
++ return ret;
++}
+
+ static inline void ufshcd_add_delay_before_dme_cmd(struct ufs_hba *hba)
+ {
+@@ -4247,7 +4318,7 @@ static inline void ufshcd_hba_stop(struct ufs_hba *hba, bool can_sleep)
+ }
+
+ /**
+- * ufshcd_hba_enable - initialize the controller
++ * ufshcd_hba_execute_hce - initialize the controller
+ * @hba: per adapter instance
+ *
+ * The controller resets itself and controller firmware initialization
+@@ -4256,7 +4327,7 @@ static inline void ufshcd_hba_stop(struct ufs_hba *hba, bool can_sleep)
+ *
+ * Returns 0 on success, non-zero value on failure
+ */
+-int ufshcd_hba_enable(struct ufs_hba *hba)
++static int ufshcd_hba_execute_hce(struct ufs_hba *hba)
+ {
+ int retry;
+
+@@ -4304,6 +4375,32 @@ int ufshcd_hba_enable(struct ufs_hba *hba)
+
+ return 0;
+ }
++
++int ufshcd_hba_enable(struct ufs_hba *hba)
++{
++ int ret;
++
++ if (hba->quirks & UFSHCI_QUIRK_BROKEN_HCE) {
++ ufshcd_set_link_off(hba);
++ ufshcd_vops_hce_enable_notify(hba, PRE_CHANGE);
++
++ /* enable UIC related interrupts */
++ ufshcd_enable_intr(hba, UFSHCD_UIC_MASK);
++ ret = ufshcd_dme_reset(hba);
++ if (!ret) {
++ ret = ufshcd_dme_enable(hba);
++ if (!ret)
++ ufshcd_vops_hce_enable_notify(hba, POST_CHANGE);
++ if (ret)
++ dev_err(hba->dev,
++ "Host controller enable failed with non-hce\n");
++ }
++ } else {
++ ret = ufshcd_hba_execute_hce(hba);
++ }
++
++ return ret;
++}
+ EXPORT_SYMBOL_GPL(ufshcd_hba_enable);
+
+ static int ufshcd_disable_tx_lcc(struct ufs_hba *hba, bool peer)
+@@ -4702,6 +4799,12 @@ ufshcd_transfer_rsp_status(struct ufs_hba *hba, struct ufshcd_lrb *lrbp)
+ /* overall command status of utrd */
+ ocs = ufshcd_get_tr_ocs(lrbp);
+
++ if (hba->quirks & UFSHCD_QUIRK_BROKEN_OCS_FATAL_ERROR) {
++ if (be32_to_cpu(lrbp->ucd_rsp_ptr->header.dword_1) &
++ MASK_RSP_UPIU_RESULT)
++ ocs = OCS_SUCCESS;
++ }
++
+ switch (ocs) {
+ case OCS_SUCCESS:
+ result = ufshcd_get_req_rsp(lrbp->ucd_rsp_ptr);
+@@ -4880,7 +4983,8 @@ static irqreturn_t ufshcd_transfer_req_compl(struct ufs_hba *hba)
+ * false interrupt if device completes another request after resetting
+ * aggregation and before reading the DB.
+ */
+- if (ufshcd_is_intr_aggr_allowed(hba))
++ if (ufshcd_is_intr_aggr_allowed(hba) &&
++ !(hba->quirks & UFSHCI_QUIRK_SKIP_RESET_INTR_AGGR))
+ ufshcd_reset_intr_aggr(hba);
+
+ tr_doorbell = ufshcd_readl(hba, REG_UTP_TRANSFER_REQ_DOOR_BELL);
+@@ -5699,7 +5803,7 @@ static irqreturn_t ufshcd_intr(int irq, void *__hba)
+ intr_status = ufshcd_readl(hba, REG_INTERRUPT_STATUS);
+ } while (intr_status && --retries);
+
+- if (retval == IRQ_NONE) {
++ if (enabled_intr_status && retval == IRQ_NONE) {
+ dev_err(hba->dev, "%s: Unhandled interrupt 0x%08x\n",
+ __func__, intr_status);
+ ufshcd_dump_regs(hba, 0, UFSHCI_REG_SPACE_SIZE, "host_regs: ");
+diff --git a/drivers/scsi/ufs/ufshcd.h b/drivers/scsi/ufs/ufshcd.h
+index 2315ecc209272..ccbeae4f8325d 100644
+--- a/drivers/scsi/ufs/ufshcd.h
++++ b/drivers/scsi/ufs/ufshcd.h
+@@ -518,6 +518,41 @@ enum ufshcd_quirks {
+ * ops (get_ufs_hci_version) to get the correct version.
+ */
+ UFSHCD_QUIRK_BROKEN_UFS_HCI_VERSION = 1 << 5,
++
++ /*
++ * Clear handling for transfer/task request list is just opposite.
++ */
++ UFSHCI_QUIRK_BROKEN_REQ_LIST_CLR = 1 << 6,
++
++ /*
++ * This quirk needs to be enabled if host controller doesn't allow
++ * that the interrupt aggregation timer and counter are reset by s/w.
++ */
++ UFSHCI_QUIRK_SKIP_RESET_INTR_AGGR = 1 << 7,
++
++ /*
++ * This quirks needs to be enabled if host controller cannot be
++ * enabled via HCE register.
++ */
++ UFSHCI_QUIRK_BROKEN_HCE = 1 << 8,
++
++ /*
++ * This quirk needs to be enabled if the host controller regards
++ * resolution of the values of PRDTO and PRDTL in UTRD as byte.
++ */
++ UFSHCD_QUIRK_PRDT_BYTE_GRAN = 1 << 9,
++
++ /*
++ * This quirk needs to be enabled if the host controller reports
++ * OCS FATAL ERROR with device error through sense data
++ */
++ UFSHCD_QUIRK_BROKEN_OCS_FATAL_ERROR = 1 << 10,
++
++ /*
++ * This quirk needs to be enabled if the host controller has
++ * auto-hibernate capability but it doesn't work.
++ */
++ UFSHCD_QUIRK_BROKEN_AUTO_HIBERN8 = 1 << 11,
+ };
+
+ enum ufshcd_caps {
+@@ -767,7 +802,8 @@ return true;
+
+ static inline bool ufshcd_is_auto_hibern8_supported(struct ufs_hba *hba)
+ {
+- return (hba->capabilities & MASK_AUTO_HIBERN8_SUPPORT);
++ return (hba->capabilities & MASK_AUTO_HIBERN8_SUPPORT) &&
++ !(hba->quirks & UFSHCD_QUIRK_BROKEN_AUTO_HIBERN8);
+ }
+
+ static inline bool ufshcd_is_auto_hibern8_enabled(struct ufs_hba *hba)
+diff --git a/drivers/spi/Kconfig b/drivers/spi/Kconfig
+index 741b9140992a8..11279dcc4a3e9 100644
+--- a/drivers/spi/Kconfig
++++ b/drivers/spi/Kconfig
+@@ -989,4 +989,7 @@ config SPI_SLAVE_SYSTEM_CONTROL
+
+ endif # SPI_SLAVE
+
++config SPI_DYNAMIC
++ def_bool ACPI || OF_DYNAMIC || SPI_SLAVE
++
+ endif # SPI
+diff --git a/drivers/spi/spi-stm32.c b/drivers/spi/spi-stm32.c
+index 44ac6eb3298d4..e29818abbeaf4 100644
+--- a/drivers/spi/spi-stm32.c
++++ b/drivers/spi/spi-stm32.c
+@@ -13,6 +13,7 @@
+ #include <linux/iopoll.h>
+ #include <linux/module.h>
+ #include <linux/of_platform.h>
++#include <linux/pinctrl/consumer.h>
+ #include <linux/pm_runtime.h>
+ #include <linux/reset.h>
+ #include <linux/spi/spi.h>
+@@ -1985,6 +1986,8 @@ static int stm32_spi_remove(struct platform_device *pdev)
+
+ pm_runtime_disable(&pdev->dev);
+
++ pinctrl_pm_select_sleep_state(&pdev->dev);
++
+ return 0;
+ }
+
+@@ -1996,13 +1999,18 @@ static int stm32_spi_runtime_suspend(struct device *dev)
+
+ clk_disable_unprepare(spi->clk);
+
+- return 0;
++ return pinctrl_pm_select_sleep_state(dev);
+ }
+
+ static int stm32_spi_runtime_resume(struct device *dev)
+ {
+ struct spi_master *master = dev_get_drvdata(dev);
+ struct stm32_spi *spi = spi_master_get_devdata(master);
++ int ret;
++
++ ret = pinctrl_pm_select_default_state(dev);
++ if (ret)
++ return ret;
+
+ return clk_prepare_enable(spi->clk);
+ }
+@@ -2032,10 +2040,23 @@ static int stm32_spi_resume(struct device *dev)
+ return ret;
+
+ ret = spi_master_resume(master);
+- if (ret)
++ if (ret) {
+ clk_disable_unprepare(spi->clk);
++ return ret;
++ }
+
+- return ret;
++ ret = pm_runtime_get_sync(dev);
++ if (ret) {
++ dev_err(dev, "Unable to power device:%d\n", ret);
++ return ret;
++ }
++
++ spi->cfg->config(spi);
++
++ pm_runtime_mark_last_busy(dev);
++ pm_runtime_put_autosuspend(dev);
++
++ return 0;
+ }
+ #endif
+
+diff --git a/drivers/spi/spi.c b/drivers/spi/spi.c
+index 299384c91917a..a6e16c138845a 100644
+--- a/drivers/spi/spi.c
++++ b/drivers/spi/spi.c
+@@ -475,6 +475,12 @@ static LIST_HEAD(spi_controller_list);
+ */
+ static DEFINE_MUTEX(board_lock);
+
++/*
++ * Prevents addition of devices with same chip select and
++ * addition of devices below an unregistering controller.
++ */
++static DEFINE_MUTEX(spi_add_lock);
++
+ /**
+ * spi_alloc_device - Allocate a new SPI device
+ * @ctlr: Controller to which device is connected
+@@ -554,7 +560,6 @@ static int spi_dev_check(struct device *dev, void *data)
+ */
+ int spi_add_device(struct spi_device *spi)
+ {
+- static DEFINE_MUTEX(spi_add_lock);
+ struct spi_controller *ctlr = spi->controller;
+ struct device *dev = ctlr->dev.parent;
+ int status;
+@@ -582,6 +587,13 @@ int spi_add_device(struct spi_device *spi)
+ goto done;
+ }
+
++ /* Controller may unregister concurrently */
++ if (IS_ENABLED(CONFIG_SPI_DYNAMIC) &&
++ !device_is_registered(&ctlr->dev)) {
++ status = -ENODEV;
++ goto done;
++ }
++
+ /* Descriptors take precedence */
+ if (ctlr->cs_gpiods)
+ spi->cs_gpiod = ctlr->cs_gpiods[spi->chip_select];
+@@ -2761,6 +2773,10 @@ void spi_unregister_controller(struct spi_controller *ctlr)
+ struct spi_controller *found;
+ int id = ctlr->bus_num;
+
++ /* Prevent addition of new devices, unregister existing ones */
++ if (IS_ENABLED(CONFIG_SPI_DYNAMIC))
++ mutex_lock(&spi_add_lock);
++
+ device_for_each_child(&ctlr->dev, NULL, __unregister);
+
+ /* First make sure that this controller was ever added */
+@@ -2781,6 +2797,9 @@ void spi_unregister_controller(struct spi_controller *ctlr)
+ if (found == ctlr)
+ idr_remove(&spi_master_idr, id);
+ mutex_unlock(&board_lock);
++
++ if (IS_ENABLED(CONFIG_SPI_DYNAMIC))
++ mutex_unlock(&spi_add_lock);
+ }
+ EXPORT_SYMBOL_GPL(spi_unregister_controller);
+
+diff --git a/drivers/target/target_core_user.c b/drivers/target/target_core_user.c
+index b63a1e0c4aa6d..a55114975b00d 100644
+--- a/drivers/target/target_core_user.c
++++ b/drivers/target/target_core_user.c
+@@ -601,7 +601,7 @@ static inline void tcmu_flush_dcache_range(void *vaddr, size_t size)
+ size = round_up(size+offset, PAGE_SIZE);
+
+ while (size) {
+- flush_dcache_page(virt_to_page(start));
++ flush_dcache_page(vmalloc_to_page(start));
+ start += PAGE_SIZE;
+ size -= PAGE_SIZE;
+ }
+diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
+index cc1d64765ce79..c244e0ecf9f42 100644
+--- a/drivers/vfio/vfio_iommu_type1.c
++++ b/drivers/vfio/vfio_iommu_type1.c
+@@ -1149,13 +1149,16 @@ static int vfio_bus_type(struct device *dev, void *data)
+ static int vfio_iommu_replay(struct vfio_iommu *iommu,
+ struct vfio_domain *domain)
+ {
+- struct vfio_domain *d;
++ struct vfio_domain *d = NULL;
+ struct rb_node *n;
+ unsigned long limit = rlimit(RLIMIT_MEMLOCK) >> PAGE_SHIFT;
+ int ret;
+
+ /* Arbitrarily pick the first domain in the list for lookups */
+- d = list_first_entry(&iommu->domain_list, struct vfio_domain, next);
++ if (!list_empty(&iommu->domain_list))
++ d = list_first_entry(&iommu->domain_list,
++ struct vfio_domain, next);
++
+ n = rb_first(&iommu->dma_list);
+
+ for (; n; n = rb_next(n)) {
+@@ -1173,6 +1176,11 @@ static int vfio_iommu_replay(struct vfio_iommu *iommu,
+ phys_addr_t p;
+ dma_addr_t i;
+
++ if (WARN_ON(!d)) { /* mapped w/o a domain?! */
++ ret = -EINVAL;
++ goto unwind;
++ }
++
+ phys = iommu_iova_to_phys(d->domain, iova);
+
+ if (WARN_ON(!phys)) {
+@@ -1202,7 +1210,7 @@ static int vfio_iommu_replay(struct vfio_iommu *iommu,
+ if (npage <= 0) {
+ WARN_ON(!npage);
+ ret = (int)npage;
+- return ret;
++ goto unwind;
+ }
+
+ phys = pfn << PAGE_SHIFT;
+@@ -1211,14 +1219,67 @@ static int vfio_iommu_replay(struct vfio_iommu *iommu,
+
+ ret = iommu_map(domain->domain, iova, phys,
+ size, dma->prot | domain->prot);
+- if (ret)
+- return ret;
++ if (ret) {
++ if (!dma->iommu_mapped)
++ vfio_unpin_pages_remote(dma, iova,
++ phys >> PAGE_SHIFT,
++ size >> PAGE_SHIFT,
++ true);
++ goto unwind;
++ }
+
+ iova += size;
+ }
++ }
++
++ /* All dmas are now mapped, defer to second tree walk for unwind */
++ for (n = rb_first(&iommu->dma_list); n; n = rb_next(n)) {
++ struct vfio_dma *dma = rb_entry(n, struct vfio_dma, node);
++
+ dma->iommu_mapped = true;
+ }
++
+ return 0;
++
++unwind:
++ for (; n; n = rb_prev(n)) {
++ struct vfio_dma *dma = rb_entry(n, struct vfio_dma, node);
++ dma_addr_t iova;
++
++ if (dma->iommu_mapped) {
++ iommu_unmap(domain->domain, dma->iova, dma->size);
++ continue;
++ }
++
++ iova = dma->iova;
++ while (iova < dma->iova + dma->size) {
++ phys_addr_t phys, p;
++ size_t size;
++ dma_addr_t i;
++
++ phys = iommu_iova_to_phys(domain->domain, iova);
++ if (!phys) {
++ iova += PAGE_SIZE;
++ continue;
++ }
++
++ size = PAGE_SIZE;
++ p = phys + size;
++ i = iova + size;
++ while (i < dma->iova + dma->size &&
++ p == iommu_iova_to_phys(domain->domain, i)) {
++ size += PAGE_SIZE;
++ p += PAGE_SIZE;
++ i += PAGE_SIZE;
++ }
++
++ iommu_unmap(domain->domain, iova, size);
++ vfio_unpin_pages_remote(dma, iova, phys >> PAGE_SHIFT,
++ size >> PAGE_SHIFT, true);
++ }
++ }
++
++ return ret;
+ }
+
+ /*
+diff --git a/drivers/video/fbdev/efifb.c b/drivers/video/fbdev/efifb.c
+index 65491ae74808d..e57c00824965c 100644
+--- a/drivers/video/fbdev/efifb.c
++++ b/drivers/video/fbdev/efifb.c
+@@ -453,7 +453,7 @@ static int efifb_probe(struct platform_device *dev)
+ info->apertures->ranges[0].base = efifb_fix.smem_start;
+ info->apertures->ranges[0].size = size_remap;
+
+- if (efi_enabled(EFI_BOOT) &&
++ if (efi_enabled(EFI_MEMMAP) &&
+ !efi_mem_desc_lookup(efifb_fix.smem_start, &md)) {
+ if ((efifb_fix.smem_start + efifb_fix.smem_len) >
+ (md.phys_addr + (md.num_pages << EFI_PAGE_SHIFT))) {
+diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
+index 58b96baa8d488..4f7c73e6052f6 100644
+--- a/drivers/virtio/virtio_ring.c
++++ b/drivers/virtio/virtio_ring.c
+@@ -1960,6 +1960,9 @@ bool virtqueue_poll(struct virtqueue *_vq, unsigned last_used_idx)
+ {
+ struct vring_virtqueue *vq = to_vvq(_vq);
+
++ if (unlikely(vq->broken))
++ return false;
++
+ virtio_mb(vq->weak_barriers);
+ return vq->packed_ring ? virtqueue_poll_packed(_vq, last_used_idx) :
+ virtqueue_poll_split(_vq, last_used_idx);
+diff --git a/drivers/xen/preempt.c b/drivers/xen/preempt.c
+index 17240c5325a30..6ad87b5c95ed3 100644
+--- a/drivers/xen/preempt.c
++++ b/drivers/xen/preempt.c
+@@ -27,7 +27,7 @@ EXPORT_SYMBOL_GPL(xen_in_preemptible_hcall);
+ asmlinkage __visible void xen_maybe_preempt_hcall(void)
+ {
+ if (unlikely(__this_cpu_read(xen_in_preemptible_hcall)
+- && need_resched())) {
++ && need_resched() && !preempt_count())) {
+ /*
+ * Clear flag as we may be rescheduled on a different
+ * cpu.
+diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
+index b6d27762c6f8c..5fbadd07819bd 100644
+--- a/drivers/xen/swiotlb-xen.c
++++ b/drivers/xen/swiotlb-xen.c
+@@ -335,6 +335,7 @@ xen_swiotlb_free_coherent(struct device *hwdev, size_t size, void *vaddr,
+ int order = get_order(size);
+ phys_addr_t phys;
+ u64 dma_mask = DMA_BIT_MASK(32);
++ struct page *page;
+
+ if (hwdev && hwdev->coherent_dma_mask)
+ dma_mask = hwdev->coherent_dma_mask;
+@@ -346,9 +347,14 @@ xen_swiotlb_free_coherent(struct device *hwdev, size_t size, void *vaddr,
+ /* Convert the size to actually allocated. */
+ size = 1UL << (order + XEN_PAGE_SHIFT);
+
++ if (is_vmalloc_addr(vaddr))
++ page = vmalloc_to_page(vaddr);
++ else
++ page = virt_to_page(vaddr);
++
+ if (!WARN_ON((dev_addr + size - 1 > dma_mask) ||
+ range_straddles_page_boundary(phys, size)) &&
+- TestClearPageXenRemapped(virt_to_page(vaddr)))
++ TestClearPageXenRemapped(page))
+ xen_destroy_contiguous_region(phys, order);
+
+ xen_free_coherent_pages(hwdev, size, vaddr, (dma_addr_t)phys, attrs);
+diff --git a/fs/afs/dynroot.c b/fs/afs/dynroot.c
+index 7503899c0a1b5..f07e53ab808e3 100644
+--- a/fs/afs/dynroot.c
++++ b/fs/afs/dynroot.c
+@@ -289,15 +289,17 @@ void afs_dynroot_depopulate(struct super_block *sb)
+ net->dynroot_sb = NULL;
+ mutex_unlock(&net->proc_cells_lock);
+
+- inode_lock(root->d_inode);
+-
+- /* Remove all the pins for dirs created for manually added cells */
+- list_for_each_entry_safe(subdir, tmp, &root->d_subdirs, d_child) {
+- if (subdir->d_fsdata) {
+- subdir->d_fsdata = NULL;
+- dput(subdir);
++ if (root) {
++ inode_lock(root->d_inode);
++
++ /* Remove all the pins for dirs created for manually added cells */
++ list_for_each_entry_safe(subdir, tmp, &root->d_subdirs, d_child) {
++ if (subdir->d_fsdata) {
++ subdir->d_fsdata = NULL;
++ dput(subdir);
++ }
+ }
+- }
+
+- inode_unlock(root->d_inode);
++ inode_unlock(root->d_inode);
++ }
+ }
+diff --git a/fs/ceph/mds_client.c b/fs/ceph/mds_client.c
+index 95272ae36b058..e32935b68d0a4 100644
+--- a/fs/ceph/mds_client.c
++++ b/fs/ceph/mds_client.c
+@@ -4337,7 +4337,6 @@ int ceph_mdsc_init(struct ceph_fs_client *fsc)
+ return -ENOMEM;
+ }
+
+- fsc->mdsc = mdsc;
+ init_completion(&mdsc->safe_umount_waiters);
+ init_waitqueue_head(&mdsc->session_close_wq);
+ INIT_LIST_HEAD(&mdsc->waiting_for_map);
+@@ -4390,6 +4389,8 @@ int ceph_mdsc_init(struct ceph_fs_client *fsc)
+
+ strscpy(mdsc->nodename, utsname()->nodename,
+ sizeof(mdsc->nodename));
++
++ fsc->mdsc = mdsc;
+ return 0;
+ }
+
+diff --git a/fs/eventpoll.c b/fs/eventpoll.c
+index 12eebcdea9c8a..e0decff22ae27 100644
+--- a/fs/eventpoll.c
++++ b/fs/eventpoll.c
+@@ -1994,9 +1994,11 @@ static int ep_loop_check_proc(void *priv, void *cookie, int call_nests)
+ * not already there, and calling reverse_path_check()
+ * during ep_insert().
+ */
+- if (list_empty(&epi->ffd.file->f_tfile_llink))
++ if (list_empty(&epi->ffd.file->f_tfile_llink)) {
++ get_file(epi->ffd.file);
+ list_add(&epi->ffd.file->f_tfile_llink,
+ &tfile_check_list);
++ }
+ }
+ }
+ mutex_unlock(&ep->mtx);
+@@ -2040,6 +2042,7 @@ static void clear_tfile_check_list(void)
+ file = list_first_entry(&tfile_check_list, struct file,
+ f_tfile_llink);
+ list_del_init(&file->f_tfile_llink);
++ fput(file);
+ }
+ INIT_LIST_HEAD(&tfile_check_list);
+ }
+@@ -2200,25 +2203,22 @@ int do_epoll_ctl(int epfd, int op, int fd, struct epoll_event *epds,
+ full_check = 1;
+ if (is_file_epoll(tf.file)) {
+ error = -ELOOP;
+- if (ep_loop_check(ep, tf.file) != 0) {
+- clear_tfile_check_list();
++ if (ep_loop_check(ep, tf.file) != 0)
+ goto error_tgt_fput;
+- }
+- } else
++ } else {
++ get_file(tf.file);
+ list_add(&tf.file->f_tfile_llink,
+ &tfile_check_list);
++ }
+ error = epoll_mutex_lock(&ep->mtx, 0, nonblock);
+- if (error) {
+-out_del:
+- list_del(&tf.file->f_tfile_llink);
++ if (error)
+ goto error_tgt_fput;
+- }
+ if (is_file_epoll(tf.file)) {
+ tep = tf.file->private_data;
+ error = epoll_mutex_lock(&tep->mtx, 1, nonblock);
+ if (error) {
+ mutex_unlock(&ep->mtx);
+- goto out_del;
++ goto error_tgt_fput;
+ }
+ }
+ }
+@@ -2239,8 +2239,6 @@ out_del:
+ error = ep_insert(ep, epds, tf.file, fd, full_check);
+ } else
+ error = -EEXIST;
+- if (full_check)
+- clear_tfile_check_list();
+ break;
+ case EPOLL_CTL_DEL:
+ if (epi)
+@@ -2263,8 +2261,10 @@ out_del:
+ mutex_unlock(&ep->mtx);
+
+ error_tgt_fput:
+- if (full_check)
++ if (full_check) {
++ clear_tfile_check_list();
+ mutex_unlock(&epmutex);
++ }
+
+ fdput(tf);
+ error_fput:
+diff --git a/fs/ext4/block_validity.c b/fs/ext4/block_validity.c
+index 16e9b2fda03ae..e830a9d4e10d3 100644
+--- a/fs/ext4/block_validity.c
++++ b/fs/ext4/block_validity.c
+@@ -24,6 +24,7 @@ struct ext4_system_zone {
+ struct rb_node node;
+ ext4_fsblk_t start_blk;
+ unsigned int count;
++ u32 ino;
+ };
+
+ static struct kmem_cache *ext4_system_zone_cachep;
+@@ -45,7 +46,8 @@ void ext4_exit_system_zone(void)
+ static inline int can_merge(struct ext4_system_zone *entry1,
+ struct ext4_system_zone *entry2)
+ {
+- if ((entry1->start_blk + entry1->count) == entry2->start_blk)
++ if ((entry1->start_blk + entry1->count) == entry2->start_blk &&
++ entry1->ino == entry2->ino)
+ return 1;
+ return 0;
+ }
+@@ -66,9 +68,9 @@ static void release_system_zone(struct ext4_system_blocks *system_blks)
+ */
+ static int add_system_zone(struct ext4_system_blocks *system_blks,
+ ext4_fsblk_t start_blk,
+- unsigned int count)
++ unsigned int count, u32 ino)
+ {
+- struct ext4_system_zone *new_entry = NULL, *entry;
++ struct ext4_system_zone *new_entry, *entry;
+ struct rb_node **n = &system_blks->root.rb_node, *node;
+ struct rb_node *parent = NULL, *new_node = NULL;
+
+@@ -79,30 +81,21 @@ static int add_system_zone(struct ext4_system_blocks *system_blks,
+ n = &(*n)->rb_left;
+ else if (start_blk >= (entry->start_blk + entry->count))
+ n = &(*n)->rb_right;
+- else {
+- if (start_blk + count > (entry->start_blk +
+- entry->count))
+- entry->count = (start_blk + count -
+- entry->start_blk);
+- new_node = *n;
+- new_entry = rb_entry(new_node, struct ext4_system_zone,
+- node);
+- break;
+- }
++ else /* Unexpected overlap of system zones. */
++ return -EFSCORRUPTED;
+ }
+
+- if (!new_entry) {
+- new_entry = kmem_cache_alloc(ext4_system_zone_cachep,
+- GFP_KERNEL);
+- if (!new_entry)
+- return -ENOMEM;
+- new_entry->start_blk = start_blk;
+- new_entry->count = count;
+- new_node = &new_entry->node;
+-
+- rb_link_node(new_node, parent, n);
+- rb_insert_color(new_node, &system_blks->root);
+- }
++ new_entry = kmem_cache_alloc(ext4_system_zone_cachep,
++ GFP_KERNEL);
++ if (!new_entry)
++ return -ENOMEM;
++ new_entry->start_blk = start_blk;
++ new_entry->count = count;
++ new_entry->ino = ino;
++ new_node = &new_entry->node;
++
++ rb_link_node(new_node, parent, n);
++ rb_insert_color(new_node, &system_blks->root);
+
+ /* Can we merge to the left? */
+ node = rb_prev(new_node);
+@@ -159,7 +152,7 @@ static void debug_print_tree(struct ext4_sb_info *sbi)
+ static int ext4_data_block_valid_rcu(struct ext4_sb_info *sbi,
+ struct ext4_system_blocks *system_blks,
+ ext4_fsblk_t start_blk,
+- unsigned int count)
++ unsigned int count, ino_t ino)
+ {
+ struct ext4_system_zone *entry;
+ struct rb_node *n;
+@@ -180,7 +173,7 @@ static int ext4_data_block_valid_rcu(struct ext4_sb_info *sbi,
+ else if (start_blk >= (entry->start_blk + entry->count))
+ n = n->rb_right;
+ else
+- return 0;
++ return entry->ino == ino;
+ }
+ return 1;
+ }
+@@ -214,19 +207,18 @@ static int ext4_protect_reserved_inode(struct super_block *sb,
+ if (n == 0) {
+ i++;
+ } else {
+- if (!ext4_data_block_valid_rcu(sbi, system_blks,
+- map.m_pblk, n)) {
+- err = -EFSCORRUPTED;
+- __ext4_error(sb, __func__, __LINE__, -err,
+- map.m_pblk, "blocks %llu-%llu "
+- "from inode %u overlap system zone",
+- map.m_pblk,
+- map.m_pblk + map.m_len - 1, ino);
++ err = add_system_zone(system_blks, map.m_pblk, n, ino);
++ if (err < 0) {
++ if (err == -EFSCORRUPTED) {
++ __ext4_error(sb, __func__, __LINE__,
++ -err, map.m_pblk,
++ "blocks %llu-%llu from inode %u overlap system zone",
++ map.m_pblk,
++ map.m_pblk + map.m_len - 1,
++ ino);
++ }
+ break;
+ }
+- err = add_system_zone(system_blks, map.m_pblk, n);
+- if (err < 0)
+- break;
+ i += n;
+ }
+ }
+@@ -280,19 +272,19 @@ int ext4_setup_system_zone(struct super_block *sb)
+ ((i < 5) || ((i % flex_size) == 0)))
+ add_system_zone(system_blks,
+ ext4_group_first_block_no(sb, i),
+- ext4_bg_num_gdb(sb, i) + 1);
++ ext4_bg_num_gdb(sb, i) + 1, 0);
+ gdp = ext4_get_group_desc(sb, i, NULL);
+ ret = add_system_zone(system_blks,
+- ext4_block_bitmap(sb, gdp), 1);
++ ext4_block_bitmap(sb, gdp), 1, 0);
+ if (ret)
+ goto err;
+ ret = add_system_zone(system_blks,
+- ext4_inode_bitmap(sb, gdp), 1);
++ ext4_inode_bitmap(sb, gdp), 1, 0);
+ if (ret)
+ goto err;
+ ret = add_system_zone(system_blks,
+ ext4_inode_table(sb, gdp),
+- sbi->s_itb_per_group);
++ sbi->s_itb_per_group, 0);
+ if (ret)
+ goto err;
+ }
+@@ -341,7 +333,7 @@ void ext4_release_system_zone(struct super_block *sb)
+ call_rcu(&system_blks->rcu, ext4_destroy_system_zone);
+ }
+
+-int ext4_data_block_valid(struct ext4_sb_info *sbi, ext4_fsblk_t start_blk,
++int ext4_inode_block_valid(struct inode *inode, ext4_fsblk_t start_blk,
+ unsigned int count)
+ {
+ struct ext4_system_blocks *system_blks;
+@@ -353,9 +345,9 @@ int ext4_data_block_valid(struct ext4_sb_info *sbi, ext4_fsblk_t start_blk,
+ * mount option.
+ */
+ rcu_read_lock();
+- system_blks = rcu_dereference(sbi->system_blks);
+- ret = ext4_data_block_valid_rcu(sbi, system_blks, start_blk,
+- count);
++ system_blks = rcu_dereference(EXT4_SB(inode->i_sb)->system_blks);
++ ret = ext4_data_block_valid_rcu(EXT4_SB(inode->i_sb), system_blks,
++ start_blk, count, inode->i_ino);
+ rcu_read_unlock();
+ return ret;
+ }
+@@ -374,8 +366,7 @@ int ext4_check_blockref(const char *function, unsigned int line,
+ while (bref < p+max) {
+ blk = le32_to_cpu(*bref++);
+ if (blk &&
+- unlikely(!ext4_data_block_valid(EXT4_SB(inode->i_sb),
+- blk, 1))) {
++ unlikely(!ext4_inode_block_valid(inode, blk, 1))) {
+ ext4_error_inode(inode, function, line, blk,
+ "invalid block");
+ return -EFSCORRUPTED;
+diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h
+index 51a85b50033a7..98b44322a3c18 100644
+--- a/fs/ext4/ext4.h
++++ b/fs/ext4/ext4.h
+@@ -3338,9 +3338,9 @@ extern void ext4_release_system_zone(struct super_block *sb);
+ extern int ext4_setup_system_zone(struct super_block *sb);
+ extern int __init ext4_init_system_zone(void);
+ extern void ext4_exit_system_zone(void);
+-extern int ext4_data_block_valid(struct ext4_sb_info *sbi,
+- ext4_fsblk_t start_blk,
+- unsigned int count);
++extern int ext4_inode_block_valid(struct inode *inode,
++ ext4_fsblk_t start_blk,
++ unsigned int count);
+ extern int ext4_check_blockref(const char *, unsigned int,
+ struct inode *, __le32 *, unsigned int);
+
+diff --git a/fs/ext4/extents.c b/fs/ext4/extents.c
+index d5453072eb635..910574aa6a903 100644
+--- a/fs/ext4/extents.c
++++ b/fs/ext4/extents.c
+@@ -337,7 +337,7 @@ static int ext4_valid_extent(struct inode *inode, struct ext4_extent *ext)
+ */
+ if (lblock + len <= lblock)
+ return 0;
+- return ext4_data_block_valid(EXT4_SB(inode->i_sb), block, len);
++ return ext4_inode_block_valid(inode, block, len);
+ }
+
+ static int ext4_valid_extent_idx(struct inode *inode,
+@@ -345,7 +345,7 @@ static int ext4_valid_extent_idx(struct inode *inode,
+ {
+ ext4_fsblk_t block = ext4_idx_pblock(ext_idx);
+
+- return ext4_data_block_valid(EXT4_SB(inode->i_sb), block, 1);
++ return ext4_inode_block_valid(inode, block, 1);
+ }
+
+ static int ext4_valid_extent_entries(struct inode *inode,
+@@ -500,14 +500,10 @@ __read_extent_tree_block(const char *function, unsigned int line,
+ }
+ if (buffer_verified(bh) && !(flags & EXT4_EX_FORCE_CACHE))
+ return bh;
+- if (!ext4_has_feature_journal(inode->i_sb) ||
+- (inode->i_ino !=
+- le32_to_cpu(EXT4_SB(inode->i_sb)->s_es->s_journal_inum))) {
+- err = __ext4_ext_check(function, line, inode,
+- ext_block_hdr(bh), depth, pblk);
+- if (err)
+- goto errout;
+- }
++ err = __ext4_ext_check(function, line, inode,
++ ext_block_hdr(bh), depth, pblk);
++ if (err)
++ goto errout;
+ set_buffer_verified(bh);
+ /*
+ * If this is a leaf block, cache all of its entries
+diff --git a/fs/ext4/file.c b/fs/ext4/file.c
+index 2a01e31a032c4..8f742b53f1d40 100644
+--- a/fs/ext4/file.c
++++ b/fs/ext4/file.c
+@@ -428,6 +428,10 @@ restart:
+ */
+ if (*ilock_shared && (!IS_NOSEC(inode) || *extend ||
+ !ext4_overwrite_io(inode, offset, count))) {
++ if (iocb->ki_flags & IOCB_NOWAIT) {
++ ret = -EAGAIN;
++ goto out;
++ }
+ inode_unlock_shared(inode);
+ *ilock_shared = false;
+ inode_lock(inode);
+diff --git a/fs/ext4/indirect.c b/fs/ext4/indirect.c
+index be2b66eb65f7a..4026418257121 100644
+--- a/fs/ext4/indirect.c
++++ b/fs/ext4/indirect.c
+@@ -858,8 +858,7 @@ static int ext4_clear_blocks(handle_t *handle, struct inode *inode,
+ else if (ext4_should_journal_data(inode))
+ flags |= EXT4_FREE_BLOCKS_FORGET;
+
+- if (!ext4_data_block_valid(EXT4_SB(inode->i_sb), block_to_free,
+- count)) {
++ if (!ext4_inode_block_valid(inode, block_to_free, count)) {
+ EXT4_ERROR_INODE(inode, "attempt to clear invalid "
+ "blocks %llu len %lu",
+ (unsigned long long) block_to_free, count);
+@@ -1004,8 +1003,7 @@ static void ext4_free_branches(handle_t *handle, struct inode *inode,
+ if (!nr)
+ continue; /* A hole */
+
+- if (!ext4_data_block_valid(EXT4_SB(inode->i_sb),
+- nr, 1)) {
++ if (!ext4_inode_block_valid(inode, nr, 1)) {
+ EXT4_ERROR_INODE(inode,
+ "invalid indirect mapped "
+ "block %lu (level %d)",
+diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
+index 87430d276bccc..d074ee4a7085a 100644
+--- a/fs/ext4/inode.c
++++ b/fs/ext4/inode.c
+@@ -384,8 +384,7 @@ static int __check_block_validity(struct inode *inode, const char *func,
+ (inode->i_ino ==
+ le32_to_cpu(EXT4_SB(inode->i_sb)->s_es->s_journal_inum)))
+ return 0;
+- if (!ext4_data_block_valid(EXT4_SB(inode->i_sb), map->m_pblk,
+- map->m_len)) {
++ if (!ext4_inode_block_valid(inode, map->m_pblk, map->m_len)) {
+ ext4_error_inode(inode, func, line, map->m_pblk,
+ "lblock %lu mapped to illegal pblock %llu "
+ "(length %d)", (unsigned long) map->m_lblk,
+@@ -4747,7 +4746,7 @@ struct inode *__ext4_iget(struct super_block *sb, unsigned long ino,
+
+ ret = 0;
+ if (ei->i_file_acl &&
+- !ext4_data_block_valid(EXT4_SB(sb), ei->i_file_acl, 1)) {
++ !ext4_inode_block_valid(inode, ei->i_file_acl, 1)) {
+ ext4_error_inode(inode, function, line, 0,
+ "iget: bad extended attribute block %llu",
+ ei->i_file_acl);
+diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c
+index 30d5d97548c42..0461e82aba352 100644
+--- a/fs/ext4/mballoc.c
++++ b/fs/ext4/mballoc.c
+@@ -2992,7 +2992,7 @@ ext4_mb_mark_diskspace_used(struct ext4_allocation_context *ac,
+ block = ext4_grp_offs_to_block(sb, &ac->ac_b_ex);
+
+ len = EXT4_C2B(sbi, ac->ac_b_ex.fe_len);
+- if (!ext4_data_block_valid(sbi, block, len)) {
++ if (!ext4_inode_block_valid(ac->ac_inode, block, len)) {
+ ext4_error(sb, "Allocating blocks %llu-%llu which overlap "
+ "fs metadata", block, block+len);
+ /* File system mounted not to panic on error
+@@ -4759,7 +4759,7 @@ void ext4_free_blocks(handle_t *handle, struct inode *inode,
+
+ sbi = EXT4_SB(sb);
+ if (!(flags & EXT4_FREE_BLOCKS_VALIDATED) &&
+- !ext4_data_block_valid(sbi, block, count)) {
++ !ext4_inode_block_valid(inode, block, count)) {
+ ext4_error(sb, "Freeing blocks not in datazone - "
+ "block = %llu, count = %lu", block, count);
+ goto error_return;
+diff --git a/fs/ext4/namei.c b/fs/ext4/namei.c
+index 56738b538ddf4..a91a5bb8c3a2b 100644
+--- a/fs/ext4/namei.c
++++ b/fs/ext4/namei.c
+@@ -1396,8 +1396,8 @@ int ext4_search_dir(struct buffer_head *bh, char *search_buf, int buf_size,
+ ext4_match(dir, fname, de)) {
+ /* found a match - just to be sure, do
+ * a full check */
+- if (ext4_check_dir_entry(dir, NULL, de, bh, bh->b_data,
+- bh->b_size, offset))
++ if (ext4_check_dir_entry(dir, NULL, de, bh, search_buf,
++ buf_size, offset))
+ return -1;
+ *res_dir = de;
+ return 1;
+@@ -1858,7 +1858,7 @@ static struct ext4_dir_entry_2 *do_split(handle_t *handle, struct inode *dir,
+ blocksize, hinfo, map);
+ map -= count;
+ dx_sort_map(map, count);
+- /* Split the existing block in the middle, size-wise */
++ /* Ensure that neither split block is over half full */
+ size = 0;
+ move = 0;
+ for (i = count-1; i >= 0; i--) {
+@@ -1868,8 +1868,18 @@ static struct ext4_dir_entry_2 *do_split(handle_t *handle, struct inode *dir,
+ size += map[i].size;
+ move++;
+ }
+- /* map index at which we will split */
+- split = count - move;
++ /*
++ * map index at which we will split
++ *
++ * If the sum of active entries didn't exceed half the block size, just
++ * split it in half by count; each resulting block will have at least
++ * half the space free.
++ */
++ if (i > 0)
++ split = count - move;
++ else
++ split = count/2;
++
+ hash2 = map[split].hash;
+ continued = hash2 == map[split - 1].hash;
+ dxtrace(printk(KERN_INFO "Split block %lu at %x, %i/%i\n",
+@@ -2472,7 +2482,7 @@ int ext4_generic_delete_entry(handle_t *handle,
+ de = (struct ext4_dir_entry_2 *)entry_buf;
+ while (i < buf_size - csum_size) {
+ if (ext4_check_dir_entry(dir, NULL, de, bh,
+- bh->b_data, bh->b_size, i))
++ entry_buf, buf_size, i))
+ return -EFSCORRUPTED;
+ if (de == de_del) {
+ if (pde)
+diff --git a/fs/f2fs/compress.c b/fs/f2fs/compress.c
+index 527d50edcb956..b397121dfa107 100644
+--- a/fs/f2fs/compress.c
++++ b/fs/f2fs/compress.c
+@@ -1207,6 +1207,12 @@ retry_write:
+ congestion_wait(BLK_RW_ASYNC,
+ DEFAULT_IO_TIMEOUT);
+ lock_page(cc->rpages[i]);
++
++ if (!PageDirty(cc->rpages[i])) {
++ unlock_page(cc->rpages[i]);
++ continue;
++ }
++
+ clear_page_dirty_for_io(cc->rpages[i]);
+ goto retry_write;
+ }
+diff --git a/fs/io-wq.c b/fs/io-wq.c
+index 4023c98468608..2bfa9117bc289 100644
+--- a/fs/io-wq.c
++++ b/fs/io-wq.c
+@@ -907,13 +907,15 @@ void io_wq_cancel_all(struct io_wq *wq)
+ struct io_cb_cancel_data {
+ work_cancel_fn *fn;
+ void *data;
++ int nr_running;
++ int nr_pending;
++ bool cancel_all;
+ };
+
+ static bool io_wq_worker_cancel(struct io_worker *worker, void *data)
+ {
+ struct io_cb_cancel_data *match = data;
+ unsigned long flags;
+- bool ret = false;
+
+ /*
+ * Hold the lock to avoid ->cur_work going out of scope, caller
+@@ -924,74 +926,90 @@ static bool io_wq_worker_cancel(struct io_worker *worker, void *data)
+ !(worker->cur_work->flags & IO_WQ_WORK_NO_CANCEL) &&
+ match->fn(worker->cur_work, match->data)) {
+ send_sig(SIGINT, worker->task, 1);
+- ret = true;
++ match->nr_running++;
+ }
+ spin_unlock_irqrestore(&worker->lock, flags);
+
+- return ret;
++ return match->nr_running && !match->cancel_all;
+ }
+
+-static enum io_wq_cancel io_wqe_cancel_work(struct io_wqe *wqe,
+- struct io_cb_cancel_data *match)
++static void io_wqe_cancel_pending_work(struct io_wqe *wqe,
++ struct io_cb_cancel_data *match)
+ {
+ struct io_wq_work_node *node, *prev;
+ struct io_wq_work *work;
+ unsigned long flags;
+- bool found = false;
+
+- /*
+- * First check pending list, if we're lucky we can just remove it
+- * from there. CANCEL_OK means that the work is returned as-new,
+- * no completion will be posted for it.
+- */
++retry:
+ spin_lock_irqsave(&wqe->lock, flags);
+ wq_list_for_each(node, prev, &wqe->work_list) {
+ work = container_of(node, struct io_wq_work, list);
++ if (!match->fn(work, match->data))
++ continue;
+
+- if (match->fn(work, match->data)) {
+- wq_list_del(&wqe->work_list, node, prev);
+- found = true;
+- break;
+- }
+- }
+- spin_unlock_irqrestore(&wqe->lock, flags);
+-
+- if (found) {
++ wq_list_del(&wqe->work_list, node, prev);
++ spin_unlock_irqrestore(&wqe->lock, flags);
+ io_run_cancel(work, wqe);
+- return IO_WQ_CANCEL_OK;
++ match->nr_pending++;
++ if (!match->cancel_all)
++ return;
++
++ /* not safe to continue after unlock */
++ goto retry;
+ }
++ spin_unlock_irqrestore(&wqe->lock, flags);
++}
+
+- /*
+- * Now check if a free (going busy) or busy worker has the work
+- * currently running. If we find it there, we'll return CANCEL_RUNNING
+- * as an indication that we attempt to signal cancellation. The
+- * completion will run normally in this case.
+- */
++static void io_wqe_cancel_running_work(struct io_wqe *wqe,
++ struct io_cb_cancel_data *match)
++{
+ rcu_read_lock();
+- found = io_wq_for_each_worker(wqe, io_wq_worker_cancel, match);
++ io_wq_for_each_worker(wqe, io_wq_worker_cancel, match);
+ rcu_read_unlock();
+- return found ? IO_WQ_CANCEL_RUNNING : IO_WQ_CANCEL_NOTFOUND;
+ }
+
+ enum io_wq_cancel io_wq_cancel_cb(struct io_wq *wq, work_cancel_fn *cancel,
+- void *data)
++ void *data, bool cancel_all)
+ {
+ struct io_cb_cancel_data match = {
+- .fn = cancel,
+- .data = data,
++ .fn = cancel,
++ .data = data,
++ .cancel_all = cancel_all,
+ };
+- enum io_wq_cancel ret = IO_WQ_CANCEL_NOTFOUND;
+ int node;
+
++ /*
++ * First check pending list, if we're lucky we can just remove it
++ * from there. CANCEL_OK means that the work is returned as-new,
++ * no completion will be posted for it.
++ */
+ for_each_node(node) {
+ struct io_wqe *wqe = wq->wqes[node];
+
+- ret = io_wqe_cancel_work(wqe, &match);
+- if (ret != IO_WQ_CANCEL_NOTFOUND)
+- break;
++ io_wqe_cancel_pending_work(wqe, &match);
++ if (match.nr_pending && !match.cancel_all)
++ return IO_WQ_CANCEL_OK;
+ }
+
+- return ret;
++ /*
++ * Now check if a free (going busy) or busy worker has the work
++ * currently running. If we find it there, we'll return CANCEL_RUNNING
++ * as an indication that we attempt to signal cancellation. The
++ * completion will run normally in this case.
++ */
++ for_each_node(node) {
++ struct io_wqe *wqe = wq->wqes[node];
++
++ io_wqe_cancel_running_work(wqe, &match);
++ if (match.nr_running && !match.cancel_all)
++ return IO_WQ_CANCEL_RUNNING;
++ }
++
++ if (match.nr_running)
++ return IO_WQ_CANCEL_RUNNING;
++ if (match.nr_pending)
++ return IO_WQ_CANCEL_OK;
++ return IO_WQ_CANCEL_NOTFOUND;
+ }
+
+ static bool io_wq_io_cb_cancel_data(struct io_wq_work *work, void *data)
+@@ -1001,21 +1019,7 @@ static bool io_wq_io_cb_cancel_data(struct io_wq_work *work, void *data)
+
+ enum io_wq_cancel io_wq_cancel_work(struct io_wq *wq, struct io_wq_work *cwork)
+ {
+- return io_wq_cancel_cb(wq, io_wq_io_cb_cancel_data, (void *)cwork);
+-}
+-
+-static bool io_wq_pid_match(struct io_wq_work *work, void *data)
+-{
+- pid_t pid = (pid_t) (unsigned long) data;
+-
+- return work->task_pid == pid;
+-}
+-
+-enum io_wq_cancel io_wq_cancel_pid(struct io_wq *wq, pid_t pid)
+-{
+- void *data = (void *) (unsigned long) pid;
+-
+- return io_wq_cancel_cb(wq, io_wq_pid_match, data);
++ return io_wq_cancel_cb(wq, io_wq_io_cb_cancel_data, (void *)cwork, false);
+ }
+
+ struct io_wq *io_wq_create(unsigned bounded, struct io_wq_data *data)
+diff --git a/fs/io-wq.h b/fs/io-wq.h
+index 5ba12de7572f0..df8a4cd3236db 100644
+--- a/fs/io-wq.h
++++ b/fs/io-wq.h
+@@ -129,12 +129,11 @@ static inline bool io_wq_is_hashed(struct io_wq_work *work)
+
+ void io_wq_cancel_all(struct io_wq *wq);
+ enum io_wq_cancel io_wq_cancel_work(struct io_wq *wq, struct io_wq_work *cwork);
+-enum io_wq_cancel io_wq_cancel_pid(struct io_wq *wq, pid_t pid);
+
+ typedef bool (work_cancel_fn)(struct io_wq_work *, void *);
+
+ enum io_wq_cancel io_wq_cancel_cb(struct io_wq *wq, work_cancel_fn *cancel,
+- void *data);
++ void *data, bool cancel_all);
+
+ struct task_struct *io_wq_get_task(struct io_wq *wq);
+
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+index b33d4a97a8774..0822a16bed9aa 100644
+--- a/fs/io_uring.c
++++ b/fs/io_uring.c
+@@ -5023,7 +5023,7 @@ static int io_async_cancel_one(struct io_ring_ctx *ctx, void *sqe_addr)
+ enum io_wq_cancel cancel_ret;
+ int ret = 0;
+
+- cancel_ret = io_wq_cancel_cb(ctx->io_wq, io_cancel_cb, sqe_addr);
++ cancel_ret = io_wq_cancel_cb(ctx->io_wq, io_cancel_cb, sqe_addr, false);
+ switch (cancel_ret) {
+ case IO_WQ_CANCEL_OK:
+ ret = 0;
+@@ -7659,6 +7659,33 @@ static bool io_timeout_remove_link(struct io_ring_ctx *ctx,
+ return found;
+ }
+
++static bool io_cancel_link_cb(struct io_wq_work *work, void *data)
++{
++ return io_match_link(container_of(work, struct io_kiocb, work), data);
++}
++
++static void io_attempt_cancel(struct io_ring_ctx *ctx, struct io_kiocb *req)
++{
++ enum io_wq_cancel cret;
++
++ /* cancel this particular work, if it's running */
++ cret = io_wq_cancel_work(ctx->io_wq, &req->work);
++ if (cret != IO_WQ_CANCEL_NOTFOUND)
++ return;
++
++ /* find links that hold this pending, cancel those */
++ cret = io_wq_cancel_cb(ctx->io_wq, io_cancel_link_cb, req, true);
++ if (cret != IO_WQ_CANCEL_NOTFOUND)
++ return;
++
++ /* if we have a poll link holding this pending, cancel that */
++ if (io_poll_remove_link(ctx, req))
++ return;
++
++ /* final option, timeout link is holding this req pending */
++ io_timeout_remove_link(ctx, req);
++}
++
+ static void io_uring_cancel_files(struct io_ring_ctx *ctx,
+ struct files_struct *files)
+ {
+@@ -7708,10 +7735,8 @@ static void io_uring_cancel_files(struct io_ring_ctx *ctx,
+ continue;
+ }
+ } else {
+- io_wq_cancel_work(ctx->io_wq, &cancel_req->work);
+- /* could be a link, check and remove if it is */
+- if (!io_poll_remove_link(ctx, cancel_req))
+- io_timeout_remove_link(ctx, cancel_req);
++ /* cancel this request, or head link requests */
++ io_attempt_cancel(ctx, cancel_req);
+ io_put_req(cancel_req);
+ }
+
+@@ -7720,6 +7745,13 @@ static void io_uring_cancel_files(struct io_ring_ctx *ctx,
+ }
+ }
+
++static bool io_cancel_pid_cb(struct io_wq_work *work, void *data)
++{
++ pid_t pid = (pid_t) (unsigned long) data;
++
++ return work->task_pid == pid;
++}
++
+ static int io_uring_flush(struct file *file, void *data)
+ {
+ struct io_ring_ctx *ctx = file->private_data;
+@@ -7729,8 +7761,11 @@ static int io_uring_flush(struct file *file, void *data)
+ /*
+ * If the task is going away, cancel work it may have pending
+ */
+- if (fatal_signal_pending(current) || (current->flags & PF_EXITING))
+- io_wq_cancel_pid(ctx->io_wq, task_pid_vnr(current));
++ if (fatal_signal_pending(current) || (current->flags & PF_EXITING)) {
++ void *data = (void *) (unsigned long)task_pid_vnr(current);
++
++ io_wq_cancel_cb(ctx->io_wq, io_cancel_pid_cb, data, true);
++ }
+
+ return 0;
+ }
+diff --git a/fs/jbd2/journal.c b/fs/jbd2/journal.c
+index e4944436e733d..5493a0da23ddd 100644
+--- a/fs/jbd2/journal.c
++++ b/fs/jbd2/journal.c
+@@ -1367,8 +1367,10 @@ static int jbd2_write_superblock(journal_t *journal, int write_flags)
+ int ret;
+
+ /* Buffer got discarded which means block device got invalidated */
+- if (!buffer_mapped(bh))
++ if (!buffer_mapped(bh)) {
++ unlock_buffer(bh);
+ return -EIO;
++ }
+
+ trace_jbd2_write_superblock(journal, write_flags);
+ if (!(journal->j_flags & JBD2_BARRIER))
+diff --git a/fs/jffs2/dir.c b/fs/jffs2/dir.c
+index f20cff1194bb6..776493713153f 100644
+--- a/fs/jffs2/dir.c
++++ b/fs/jffs2/dir.c
+@@ -590,10 +590,14 @@ static int jffs2_rmdir (struct inode *dir_i, struct dentry *dentry)
+ int ret;
+ uint32_t now = JFFS2_NOW();
+
++ mutex_lock(&f->sem);
+ for (fd = f->dents ; fd; fd = fd->next) {
+- if (fd->ino)
++ if (fd->ino) {
++ mutex_unlock(&f->sem);
+ return -ENOTEMPTY;
++ }
+ }
++ mutex_unlock(&f->sem);
+
+ ret = jffs2_do_unlink(c, dir_f, dentry->d_name.name,
+ dentry->d_name.len, f, now);
+diff --git a/fs/romfs/storage.c b/fs/romfs/storage.c
+index 6b2b4362089e6..b57b3ffcbc327 100644
+--- a/fs/romfs/storage.c
++++ b/fs/romfs/storage.c
+@@ -217,10 +217,8 @@ int romfs_dev_read(struct super_block *sb, unsigned long pos,
+ size_t limit;
+
+ limit = romfs_maxsize(sb);
+- if (pos >= limit)
++ if (pos >= limit || buflen > limit - pos)
+ return -EIO;
+- if (buflen > limit - pos)
+- buflen = limit - pos;
+
+ #ifdef CONFIG_ROMFS_ON_MTD
+ if (sb->s_mtd)
+diff --git a/fs/signalfd.c b/fs/signalfd.c
+index 44b6845b071c3..5b78719be4455 100644
+--- a/fs/signalfd.c
++++ b/fs/signalfd.c
+@@ -314,9 +314,10 @@ SYSCALL_DEFINE4(signalfd4, int, ufd, sigset_t __user *, user_mask,
+ {
+ sigset_t mask;
+
+- if (sizemask != sizeof(sigset_t) ||
+- copy_from_user(&mask, user_mask, sizeof(mask)))
++ if (sizemask != sizeof(sigset_t))
+ return -EINVAL;
++ if (copy_from_user(&mask, user_mask, sizeof(mask)))
++ return -EFAULT;
+ return do_signalfd4(ufd, &mask, flags);
+ }
+
+@@ -325,9 +326,10 @@ SYSCALL_DEFINE3(signalfd, int, ufd, sigset_t __user *, user_mask,
+ {
+ sigset_t mask;
+
+- if (sizemask != sizeof(sigset_t) ||
+- copy_from_user(&mask, user_mask, sizeof(mask)))
++ if (sizemask != sizeof(sigset_t))
+ return -EINVAL;
++ if (copy_from_user(&mask, user_mask, sizeof(mask)))
++ return -EFAULT;
+ return do_signalfd4(ufd, &mask, 0);
+ }
+
+diff --git a/fs/xfs/xfs_sysfs.h b/fs/xfs/xfs_sysfs.h
+index e9f810fc67317..43585850f1546 100644
+--- a/fs/xfs/xfs_sysfs.h
++++ b/fs/xfs/xfs_sysfs.h
+@@ -32,9 +32,11 @@ xfs_sysfs_init(
+ struct xfs_kobj *parent_kobj,
+ const char *name)
+ {
++ struct kobject *parent;
++
++ parent = parent_kobj ? &parent_kobj->kobject : NULL;
+ init_completion(&kobj->complete);
+- return kobject_init_and_add(&kobj->kobject, ktype,
+- &parent_kobj->kobject, "%s", name);
++ return kobject_init_and_add(&kobj->kobject, ktype, parent, "%s", name);
+ }
+
+ static inline void
+diff --git a/fs/xfs/xfs_trans_dquot.c b/fs/xfs/xfs_trans_dquot.c
+index d1b9869bc5fa6..af3636a99bf60 100644
+--- a/fs/xfs/xfs_trans_dquot.c
++++ b/fs/xfs/xfs_trans_dquot.c
+@@ -647,7 +647,7 @@ xfs_trans_dqresv(
+ }
+ }
+ if (ninos > 0) {
+- total_count = be64_to_cpu(dqp->q_core.d_icount) + ninos;
++ total_count = dqp->q_res_icount + ninos;
+ timer = be32_to_cpu(dqp->q_core.d_itimer);
+ warns = be16_to_cpu(dqp->q_core.d_iwarns);
+ warnlimit = dqp->q_mount->m_quotainfo->qi_iwarnlimit;
+diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
+index 8e1f7165162c3..d92d3e729bc7f 100644
+--- a/kernel/events/uprobes.c
++++ b/kernel/events/uprobes.c
+@@ -211,7 +211,7 @@ static int __replace_page(struct vm_area_struct *vma, unsigned long addr,
+ try_to_free_swap(old_page);
+ page_vma_mapped_walk_done(&pvmw);
+
+- if (vma->vm_flags & VM_LOCKED)
++ if ((vma->vm_flags & VM_LOCKED) && !PageCompound(old_page))
+ munlock_vma_page(old_page);
+ put_page(old_page);
+
+diff --git a/kernel/relay.c b/kernel/relay.c
+index 4b760ec163426..d3940becf2fc3 100644
+--- a/kernel/relay.c
++++ b/kernel/relay.c
+@@ -197,6 +197,7 @@ free_buf:
+ static void relay_destroy_channel(struct kref *kref)
+ {
+ struct rchan *chan = container_of(kref, struct rchan, kref);
++ free_percpu(chan->buf);
+ kfree(chan);
+ }
+
+diff --git a/mm/khugepaged.c b/mm/khugepaged.c
+index 38874fe112d58..cb17091d0a202 100644
+--- a/mm/khugepaged.c
++++ b/mm/khugepaged.c
+@@ -400,7 +400,7 @@ static void insert_to_mm_slots_hash(struct mm_struct *mm,
+
+ static inline int khugepaged_test_exit(struct mm_struct *mm)
+ {
+- return atomic_read(&mm->mm_users) == 0;
++ return atomic_read(&mm->mm_users) == 0 || !mmget_still_valid(mm);
+ }
+
+ static bool hugepage_vma_check(struct vm_area_struct *vma,
+@@ -435,7 +435,7 @@ int __khugepaged_enter(struct mm_struct *mm)
+ return -ENOMEM;
+
+ /* __khugepaged_exit() must not run from under us */
+- VM_BUG_ON_MM(khugepaged_test_exit(mm), mm);
++ VM_BUG_ON_MM(atomic_read(&mm->mm_users) == 0, mm);
+ if (unlikely(test_and_set_bit(MMF_VM_HUGEPAGE, &mm->flags))) {
+ free_mm_slot(mm_slot);
+ return 0;
+@@ -1016,9 +1016,6 @@ static void collapse_huge_page(struct mm_struct *mm,
+ * handled by the anon_vma lock + PG_lock.
+ */
+ down_write(&mm->mmap_sem);
+- result = SCAN_ANY_PROCESS;
+- if (!mmget_still_valid(mm))
+- goto out;
+ result = hugepage_vma_revalidate(mm, address, &vma);
+ if (result)
+ goto out;
+diff --git a/mm/memory.c b/mm/memory.c
+index 22d218bc56c8a..44d848b291b48 100644
+--- a/mm/memory.c
++++ b/mm/memory.c
+@@ -4237,6 +4237,9 @@ static vm_fault_t handle_pte_fault(struct vm_fault *vmf)
+ vmf->flags & FAULT_FLAG_WRITE)) {
+ update_mmu_cache(vmf->vma, vmf->address, vmf->pte);
+ } else {
++ /* Skip spurious TLB flush for retried page fault */
++ if (vmf->flags & FAULT_FLAG_TRIED)
++ goto unlock;
+ /*
+ * This is needed only for protection faults but the arch code
+ * is not yet telling us if this is a protection fault or not.
+diff --git a/mm/page_alloc.c b/mm/page_alloc.c
+index d0c0d9364aa6d..398dd6c90ad0f 100644
+--- a/mm/page_alloc.c
++++ b/mm/page_alloc.c
+@@ -1308,6 +1308,11 @@ static void free_pcppages_bulk(struct zone *zone, int count,
+ struct page *page, *tmp;
+ LIST_HEAD(head);
+
++ /*
++ * Ensure proper count is passed which otherwise would stuck in the
++ * below while (list_empty(list)) loop.
++ */
++ count = min(pcp->count, count);
+ while (count) {
+ struct list_head *list;
+
+@@ -7959,7 +7964,7 @@ int __meminit init_per_zone_wmark_min(void)
+
+ return 0;
+ }
+-core_initcall(init_per_zone_wmark_min)
++postcore_initcall(init_per_zone_wmark_min)
+
+ /*
+ * min_free_kbytes_sysctl_handler - just a wrapper around proc_dointvec() so
+diff --git a/net/can/j1939/socket.c b/net/can/j1939/socket.c
+index f7587428febdd..bf9fd6ee88fe0 100644
+--- a/net/can/j1939/socket.c
++++ b/net/can/j1939/socket.c
+@@ -398,6 +398,7 @@ static int j1939_sk_init(struct sock *sk)
+ spin_lock_init(&jsk->sk_session_queue_lock);
+ INIT_LIST_HEAD(&jsk->sk_session_queue);
+ sk->sk_destruct = j1939_sk_sock_destruct;
++ sk->sk_protocol = CAN_J1939;
+
+ return 0;
+ }
+@@ -466,6 +467,14 @@ static int j1939_sk_bind(struct socket *sock, struct sockaddr *uaddr, int len)
+ goto out_release_sock;
+ }
+
++ if (!ndev->ml_priv) {
++ netdev_warn_once(ndev,
++ "No CAN mid layer private allocated, please fix your driver and use alloc_candev()!\n");
++ dev_put(ndev);
++ ret = -ENODEV;
++ goto out_release_sock;
++ }
++
+ priv = j1939_netdev_start(ndev);
+ dev_put(ndev);
+ if (IS_ERR(priv)) {
+@@ -553,6 +562,11 @@ static int j1939_sk_connect(struct socket *sock, struct sockaddr *uaddr,
+ static void j1939_sk_sock2sockaddr_can(struct sockaddr_can *addr,
+ const struct j1939_sock *jsk, int peer)
+ {
++ /* There are two holes (2 bytes and 3 bytes) to clear to avoid
++ * leaking kernel information to user space.
++ */
++ memset(addr, 0, J1939_MIN_NAMELEN);
++
+ addr->can_family = AF_CAN;
+ addr->can_ifindex = jsk->ifindex;
+ addr->can_addr.j1939.pgn = jsk->addr.pgn;
+diff --git a/net/can/j1939/transport.c b/net/can/j1939/transport.c
+index 9f99af5b0b11e..dbd215cbc53d8 100644
+--- a/net/can/j1939/transport.c
++++ b/net/can/j1939/transport.c
+@@ -352,17 +352,16 @@ void j1939_session_skb_queue(struct j1939_session *session,
+ skb_queue_tail(&session->skb_queue, skb);
+ }
+
+-static struct sk_buff *j1939_session_skb_find(struct j1939_session *session)
++static struct
++sk_buff *j1939_session_skb_find_by_offset(struct j1939_session *session,
++ unsigned int offset_start)
+ {
+ struct j1939_priv *priv = session->priv;
++ struct j1939_sk_buff_cb *do_skcb;
+ struct sk_buff *skb = NULL;
+ struct sk_buff *do_skb;
+- struct j1939_sk_buff_cb *do_skcb;
+- unsigned int offset_start;
+ unsigned long flags;
+
+- offset_start = session->pkt.dpo * 7;
+-
+ spin_lock_irqsave(&session->skb_queue.lock, flags);
+ skb_queue_walk(&session->skb_queue, do_skb) {
+ do_skcb = j1939_skb_to_cb(do_skb);
+@@ -382,6 +381,14 @@ static struct sk_buff *j1939_session_skb_find(struct j1939_session *session)
+ return skb;
+ }
+
++static struct sk_buff *j1939_session_skb_find(struct j1939_session *session)
++{
++ unsigned int offset_start;
++
++ offset_start = session->pkt.dpo * 7;
++ return j1939_session_skb_find_by_offset(session, offset_start);
++}
++
+ /* see if we are receiver
+ * returns 0 for broadcasts, although we will receive them
+ */
+@@ -716,10 +723,12 @@ static int j1939_session_tx_rts(struct j1939_session *session)
+ return ret;
+
+ session->last_txcmd = dat[0];
+- if (dat[0] == J1939_TP_CMD_BAM)
++ if (dat[0] == J1939_TP_CMD_BAM) {
+ j1939_tp_schedule_txtimer(session, 50);
+-
+- j1939_tp_set_rxtimeout(session, 1250);
++ j1939_tp_set_rxtimeout(session, 250);
++ } else {
++ j1939_tp_set_rxtimeout(session, 1250);
++ }
+
+ netdev_dbg(session->priv->ndev, "%s: 0x%p\n", __func__, session);
+
+@@ -766,7 +775,7 @@ static int j1939_session_tx_dat(struct j1939_session *session)
+ int ret = 0;
+ u8 dat[8];
+
+- se_skb = j1939_session_skb_find(session);
++ se_skb = j1939_session_skb_find_by_offset(session, session->pkt.tx * 7);
+ if (!se_skb)
+ return -ENOBUFS;
+
+@@ -787,6 +796,18 @@ static int j1939_session_tx_dat(struct j1939_session *session)
+ if (len > 7)
+ len = 7;
+
++ if (offset + len > se_skb->len) {
++ netdev_err_once(priv->ndev,
++ "%s: 0x%p: requested data outside of queued buffer: offset %i, len %i, pkt.tx: %i\n",
++ __func__, session, skcb->offset, se_skb->len , session->pkt.tx);
++ return -EOVERFLOW;
++ }
++
++ if (!len) {
++ ret = -ENOBUFS;
++ break;
++ }
++
+ memcpy(&dat[1], &tpdat[offset], len);
+ ret = j1939_tp_tx_dat(session, dat, len + 1);
+ if (ret < 0) {
+@@ -1055,9 +1076,9 @@ static void __j1939_session_cancel(struct j1939_session *session,
+ lockdep_assert_held(&session->priv->active_session_list_lock);
+
+ session->err = j1939_xtp_abort_to_errno(priv, err);
++ session->state = J1939_SESSION_WAITING_ABORT;
+ /* do not send aborts on incoming broadcasts */
+ if (!j1939_cb_is_broadcast(&session->skcb)) {
+- session->state = J1939_SESSION_WAITING_ABORT;
+ j1939_xtp_tx_abort(priv, &session->skcb,
+ !session->transmission,
+ err, session->skcb.addr.pgn);
+@@ -1120,6 +1141,9 @@ static enum hrtimer_restart j1939_tp_txtimer(struct hrtimer *hrtimer)
+ * cleanup including propagation of the error to user space.
+ */
+ break;
++ case -EOVERFLOW:
++ j1939_session_cancel(session, J1939_XTP_ABORT_ECTS_TOO_BIG);
++ break;
+ case 0:
+ session->tx_retry = 0;
+ break;
+@@ -1651,8 +1675,12 @@ static void j1939_xtp_rx_rts(struct j1939_priv *priv, struct sk_buff *skb,
+ return;
+ }
+ session = j1939_xtp_rx_rts_session_new(priv, skb);
+- if (!session)
++ if (!session) {
++ if (cmd == J1939_TP_CMD_BAM && j1939_sk_recv_match(priv, skcb))
++ netdev_info(priv->ndev, "%s: failed to create TP BAM session\n",
++ __func__);
+ return;
++ }
+ } else {
+ if (j1939_xtp_rx_rts_session_active(session, skb)) {
+ j1939_session_put(session);
+@@ -1661,11 +1689,15 @@ static void j1939_xtp_rx_rts(struct j1939_priv *priv, struct sk_buff *skb,
+ }
+ session->last_cmd = cmd;
+
+- j1939_tp_set_rxtimeout(session, 1250);
+-
+- if (cmd != J1939_TP_CMD_BAM && !session->transmission) {
+- j1939_session_txtimer_cancel(session);
+- j1939_tp_schedule_txtimer(session, 0);
++ if (cmd == J1939_TP_CMD_BAM) {
++ if (!session->transmission)
++ j1939_tp_set_rxtimeout(session, 750);
++ } else {
++ if (!session->transmission) {
++ j1939_session_txtimer_cancel(session);
++ j1939_tp_schedule_txtimer(session, 0);
++ }
++ j1939_tp_set_rxtimeout(session, 1250);
+ }
+
+ j1939_session_put(session);
+@@ -1716,6 +1748,7 @@ static void j1939_xtp_rx_dat_one(struct j1939_session *session,
+ int offset;
+ int nbytes;
+ bool final = false;
++ bool remain = false;
+ bool do_cts_eoma = false;
+ int packet;
+
+@@ -1750,7 +1783,8 @@ static void j1939_xtp_rx_dat_one(struct j1939_session *session,
+ __func__, session);
+ goto out_session_cancel;
+ }
+- se_skb = j1939_session_skb_find(session);
++
++ se_skb = j1939_session_skb_find_by_offset(session, packet * 7);
+ if (!se_skb) {
+ netdev_warn(priv->ndev, "%s: 0x%p: no skb found\n", __func__,
+ session);
+@@ -1777,6 +1811,8 @@ static void j1939_xtp_rx_dat_one(struct j1939_session *session,
+ j1939_cb_is_broadcast(&session->skcb)) {
+ if (session->pkt.rx >= session->pkt.total)
+ final = true;
++ else
++ remain = true;
+ } else {
+ /* never final, an EOMA must follow */
+ if (session->pkt.rx >= session->pkt.last)
+@@ -1784,7 +1820,11 @@ static void j1939_xtp_rx_dat_one(struct j1939_session *session,
+ }
+
+ if (final) {
++ j1939_session_timers_cancel(session);
+ j1939_session_completed(session);
++ } else if (remain) {
++ if (!session->transmission)
++ j1939_tp_set_rxtimeout(session, 750);
+ } else if (do_cts_eoma) {
+ j1939_tp_set_rxtimeout(session, 1250);
+ if (!session->transmission)
+@@ -1829,6 +1869,13 @@ static void j1939_xtp_rx_dat(struct j1939_priv *priv, struct sk_buff *skb)
+ else
+ j1939_xtp_rx_dat_one(session, skb);
+ }
++
++ if (j1939_cb_is_broadcast(skcb)) {
++ session = j1939_session_get_by_addr(priv, &skcb->addr, false,
++ false);
++ if (session)
++ j1939_xtp_rx_dat_one(session, skb);
++ }
+ }
+
+ /* j1939 main intf */
+@@ -1920,7 +1967,7 @@ static void j1939_tp_cmd_recv(struct j1939_priv *priv, struct sk_buff *skb)
+ if (j1939_tp_im_transmitter(skcb))
+ j1939_xtp_rx_rts(priv, skb, true);
+
+- if (j1939_tp_im_receiver(skcb))
++ if (j1939_tp_im_receiver(skcb) || j1939_cb_is_broadcast(skcb))
+ j1939_xtp_rx_rts(priv, skb, false);
+
+ break;
+@@ -1984,7 +2031,7 @@ int j1939_tp_recv(struct j1939_priv *priv, struct sk_buff *skb)
+ {
+ struct j1939_sk_buff_cb *skcb = j1939_skb_to_cb(skb);
+
+- if (!j1939_tp_im_involved_anydir(skcb))
++ if (!j1939_tp_im_involved_anydir(skcb) && !j1939_cb_is_broadcast(skcb))
+ return 0;
+
+ switch (skcb->addr.pgn) {
+@@ -2017,6 +2064,10 @@ void j1939_simple_recv(struct j1939_priv *priv, struct sk_buff *skb)
+ if (!skb->sk)
+ return;
+
++ if (skb->sk->sk_family != AF_CAN ||
++ skb->sk->sk_protocol != CAN_J1939)
++ return;
++
+ j1939_session_list_lock(priv);
+ session = j1939_session_get_simple(priv, skb);
+ j1939_session_list_unlock(priv);
+diff --git a/net/core/filter.c b/net/core/filter.c
+index cebbb6ba9ed92..9c03702600128 100644
+--- a/net/core/filter.c
++++ b/net/core/filter.c
+@@ -8066,15 +8066,31 @@ static u32 sock_ops_convert_ctx_access(enum bpf_access_type type,
+ /* Helper macro for adding read access to tcp_sock or sock fields. */
+ #define SOCK_OPS_GET_FIELD(BPF_FIELD, OBJ_FIELD, OBJ) \
+ do { \
++ int fullsock_reg = si->dst_reg, reg = BPF_REG_9, jmp = 2; \
+ BUILD_BUG_ON(sizeof_field(OBJ, OBJ_FIELD) > \
+ sizeof_field(struct bpf_sock_ops, BPF_FIELD)); \
++ if (si->dst_reg == reg || si->src_reg == reg) \
++ reg--; \
++ if (si->dst_reg == reg || si->src_reg == reg) \
++ reg--; \
++ if (si->dst_reg == si->src_reg) { \
++ *insn++ = BPF_STX_MEM(BPF_DW, si->src_reg, reg, \
++ offsetof(struct bpf_sock_ops_kern, \
++ temp)); \
++ fullsock_reg = reg; \
++ jmp += 2; \
++ } \
+ *insn++ = BPF_LDX_MEM(BPF_FIELD_SIZEOF( \
+ struct bpf_sock_ops_kern, \
+ is_fullsock), \
+- si->dst_reg, si->src_reg, \
++ fullsock_reg, si->src_reg, \
+ offsetof(struct bpf_sock_ops_kern, \
+ is_fullsock)); \
+- *insn++ = BPF_JMP_IMM(BPF_JEQ, si->dst_reg, 0, 2); \
++ *insn++ = BPF_JMP_IMM(BPF_JEQ, fullsock_reg, 0, jmp); \
++ if (si->dst_reg == si->src_reg) \
++ *insn++ = BPF_LDX_MEM(BPF_DW, reg, si->src_reg, \
++ offsetof(struct bpf_sock_ops_kern, \
++ temp)); \
+ *insn++ = BPF_LDX_MEM(BPF_FIELD_SIZEOF( \
+ struct bpf_sock_ops_kern, sk),\
+ si->dst_reg, si->src_reg, \
+@@ -8083,6 +8099,49 @@ static u32 sock_ops_convert_ctx_access(enum bpf_access_type type,
+ OBJ_FIELD), \
+ si->dst_reg, si->dst_reg, \
+ offsetof(OBJ, OBJ_FIELD)); \
++ if (si->dst_reg == si->src_reg) { \
++ *insn++ = BPF_JMP_A(1); \
++ *insn++ = BPF_LDX_MEM(BPF_DW, reg, si->src_reg, \
++ offsetof(struct bpf_sock_ops_kern, \
++ temp)); \
++ } \
++ } while (0)
++
++#define SOCK_OPS_GET_SK() \
++ do { \
++ int fullsock_reg = si->dst_reg, reg = BPF_REG_9, jmp = 1; \
++ if (si->dst_reg == reg || si->src_reg == reg) \
++ reg--; \
++ if (si->dst_reg == reg || si->src_reg == reg) \
++ reg--; \
++ if (si->dst_reg == si->src_reg) { \
++ *insn++ = BPF_STX_MEM(BPF_DW, si->src_reg, reg, \
++ offsetof(struct bpf_sock_ops_kern, \
++ temp)); \
++ fullsock_reg = reg; \
++ jmp += 2; \
++ } \
++ *insn++ = BPF_LDX_MEM(BPF_FIELD_SIZEOF( \
++ struct bpf_sock_ops_kern, \
++ is_fullsock), \
++ fullsock_reg, si->src_reg, \
++ offsetof(struct bpf_sock_ops_kern, \
++ is_fullsock)); \
++ *insn++ = BPF_JMP_IMM(BPF_JEQ, fullsock_reg, 0, jmp); \
++ if (si->dst_reg == si->src_reg) \
++ *insn++ = BPF_LDX_MEM(BPF_DW, reg, si->src_reg, \
++ offsetof(struct bpf_sock_ops_kern, \
++ temp)); \
++ *insn++ = BPF_LDX_MEM(BPF_FIELD_SIZEOF( \
++ struct bpf_sock_ops_kern, sk),\
++ si->dst_reg, si->src_reg, \
++ offsetof(struct bpf_sock_ops_kern, sk));\
++ if (si->dst_reg == si->src_reg) { \
++ *insn++ = BPF_JMP_A(1); \
++ *insn++ = BPF_LDX_MEM(BPF_DW, reg, si->src_reg, \
++ offsetof(struct bpf_sock_ops_kern, \
++ temp)); \
++ } \
+ } while (0)
+
+ #define SOCK_OPS_GET_TCP_SOCK_FIELD(FIELD) \
+@@ -8369,17 +8428,7 @@ static u32 sock_ops_convert_ctx_access(enum bpf_access_type type,
+ SOCK_OPS_GET_TCP_SOCK_FIELD(bytes_acked);
+ break;
+ case offsetof(struct bpf_sock_ops, sk):
+- *insn++ = BPF_LDX_MEM(BPF_FIELD_SIZEOF(
+- struct bpf_sock_ops_kern,
+- is_fullsock),
+- si->dst_reg, si->src_reg,
+- offsetof(struct bpf_sock_ops_kern,
+- is_fullsock));
+- *insn++ = BPF_JMP_IMM(BPF_JEQ, si->dst_reg, 0, 1);
+- *insn++ = BPF_LDX_MEM(BPF_FIELD_SIZEOF(
+- struct bpf_sock_ops_kern, sk),
+- si->dst_reg, si->src_reg,
+- offsetof(struct bpf_sock_ops_kern, sk));
++ SOCK_OPS_GET_SK();
+ break;
+ }
+ return insn - insn_buf;
+diff --git a/net/netfilter/nft_exthdr.c b/net/netfilter/nft_exthdr.c
+index 07782836fad6e..3c48cdc8935df 100644
+--- a/net/netfilter/nft_exthdr.c
++++ b/net/netfilter/nft_exthdr.c
+@@ -44,7 +44,7 @@ static void nft_exthdr_ipv6_eval(const struct nft_expr *expr,
+
+ err = ipv6_find_hdr(pkt->skb, &offset, priv->type, NULL, NULL);
+ if (priv->flags & NFT_EXTHDR_F_PRESENT) {
+- *dest = (err >= 0);
++ nft_reg_store8(dest, err >= 0);
+ return;
+ } else if (err < 0) {
+ goto err;
+@@ -141,7 +141,7 @@ static void nft_exthdr_ipv4_eval(const struct nft_expr *expr,
+
+ err = ipv4_find_option(nft_net(pkt), skb, &offset, priv->type);
+ if (priv->flags & NFT_EXTHDR_F_PRESENT) {
+- *dest = (err >= 0);
++ nft_reg_store8(dest, err >= 0);
+ return;
+ } else if (err < 0) {
+ goto err;
+diff --git a/net/sunrpc/xprtrdma/svc_rdma_recvfrom.c b/net/sunrpc/xprtrdma/svc_rdma_recvfrom.c
+index efa5fcb5793f7..952b8f1908500 100644
+--- a/net/sunrpc/xprtrdma/svc_rdma_recvfrom.c
++++ b/net/sunrpc/xprtrdma/svc_rdma_recvfrom.c
+@@ -265,6 +265,8 @@ static int svc_rdma_post_recv(struct svcxprt_rdma *rdma)
+ {
+ struct svc_rdma_recv_ctxt *ctxt;
+
++ if (test_bit(XPT_CLOSE, &rdma->sc_xprt.xpt_flags))
++ return 0;
+ ctxt = svc_rdma_recv_ctxt_get(rdma);
+ if (!ctxt)
+ return -ENOMEM;
+diff --git a/scripts/kconfig/qconf.cc b/scripts/kconfig/qconf.cc
+index c0ac8f7b5f1ab..1eb076c7eae17 100644
+--- a/scripts/kconfig/qconf.cc
++++ b/scripts/kconfig/qconf.cc
+@@ -881,40 +881,40 @@ void ConfigList::focusInEvent(QFocusEvent *e)
+
+ void ConfigList::contextMenuEvent(QContextMenuEvent *e)
+ {
+- if (e->y() <= header()->geometry().bottom()) {
+- if (!headerPopup) {
+- QAction *action;
+-
+- headerPopup = new QMenu(this);
+- action = new QAction("Show Name", this);
+- action->setCheckable(true);
+- connect(action, SIGNAL(toggled(bool)),
+- parent(), SLOT(setShowName(bool)));
+- connect(parent(), SIGNAL(showNameChanged(bool)),
+- action, SLOT(setOn(bool)));
+- action->setChecked(showName);
+- headerPopup->addAction(action);
+- action = new QAction("Show Range", this);
+- action->setCheckable(true);
+- connect(action, SIGNAL(toggled(bool)),
+- parent(), SLOT(setShowRange(bool)));
+- connect(parent(), SIGNAL(showRangeChanged(bool)),
+- action, SLOT(setOn(bool)));
+- action->setChecked(showRange);
+- headerPopup->addAction(action);
+- action = new QAction("Show Data", this);
+- action->setCheckable(true);
+- connect(action, SIGNAL(toggled(bool)),
+- parent(), SLOT(setShowData(bool)));
+- connect(parent(), SIGNAL(showDataChanged(bool)),
+- action, SLOT(setOn(bool)));
+- action->setChecked(showData);
+- headerPopup->addAction(action);
+- }
+- headerPopup->exec(e->globalPos());
+- e->accept();
+- } else
+- e->ignore();
++ if (!headerPopup) {
++ QAction *action;
++
++ headerPopup = new QMenu(this);
++ action = new QAction("Show Name", this);
++ action->setCheckable(true);
++ connect(action, SIGNAL(toggled(bool)),
++ parent(), SLOT(setShowName(bool)));
++ connect(parent(), SIGNAL(showNameChanged(bool)),
++ action, SLOT(setChecked(bool)));
++ action->setChecked(showName);
++ headerPopup->addAction(action);
++
++ action = new QAction("Show Range", this);
++ action->setCheckable(true);
++ connect(action, SIGNAL(toggled(bool)),
++ parent(), SLOT(setShowRange(bool)));
++ connect(parent(), SIGNAL(showRangeChanged(bool)),
++ action, SLOT(setChecked(bool)));
++ action->setChecked(showRange);
++ headerPopup->addAction(action);
++
++ action = new QAction("Show Data", this);
++ action->setCheckable(true);
++ connect(action, SIGNAL(toggled(bool)),
++ parent(), SLOT(setShowData(bool)));
++ connect(parent(), SIGNAL(showDataChanged(bool)),
++ action, SLOT(setChecked(bool)));
++ action->setChecked(showData);
++ headerPopup->addAction(action);
++ }
++
++ headerPopup->exec(e->globalPos());
++ e->accept();
+ }
+
+ ConfigView*ConfigView::viewList;
+@@ -1240,7 +1240,7 @@ QMenu* ConfigInfoView::createStandardContextMenu(const QPoint & pos)
+
+ action->setCheckable(true);
+ connect(action, SIGNAL(toggled(bool)), SLOT(setShowDebug(bool)));
+- connect(this, SIGNAL(showDebugChanged(bool)), action, SLOT(setOn(bool)));
++ connect(this, SIGNAL(showDebugChanged(bool)), action, SLOT(setChecked(bool)));
+ action->setChecked(showDebug());
+ popup->addSeparator();
+ popup->addAction(action);
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 313eecfb91b44..fe6db8b171e41 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -7666,6 +7666,8 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x144d, 0xc109, "Samsung Ativ book 9 (NP900X3G)", ALC269_FIXUP_INV_DMIC),
+ SND_PCI_QUIRK(0x144d, 0xc169, "Samsung Notebook 9 Pen (NP930SBE-K01US)", ALC298_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET),
+ SND_PCI_QUIRK(0x144d, 0xc176, "Samsung Notebook 9 Pro (NP930MBE-K04US)", ALC298_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET),
++ SND_PCI_QUIRK(0x144d, 0xc189, "Samsung Galaxy Flex Book (NT950QCG-X716)", ALC298_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET),
++ SND_PCI_QUIRK(0x144d, 0xc18a, "Samsung Galaxy Book Ion (NT950XCJ-X716A)", ALC298_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET),
+ SND_PCI_QUIRK(0x144d, 0xc740, "Samsung Ativ book 8 (NP870Z5G)", ALC269_FIXUP_ATIV_BOOK_8),
+ SND_PCI_QUIRK(0x144d, 0xc812, "Samsung Notebook Pen S (NT950SBE-X58)", ALC298_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET),
+ SND_PCI_QUIRK(0x1458, 0xfa53, "Gigabyte BXBT-2807", ALC283_FIXUP_HEADSET_MIC),
+diff --git a/sound/soc/codecs/msm8916-wcd-analog.c b/sound/soc/codecs/msm8916-wcd-analog.c
+index 85bc7ae4d2671..26cf372ccda6f 100644
+--- a/sound/soc/codecs/msm8916-wcd-analog.c
++++ b/sound/soc/codecs/msm8916-wcd-analog.c
+@@ -19,8 +19,8 @@
+
+ #define CDC_D_REVISION1 (0xf000)
+ #define CDC_D_PERPH_SUBTYPE (0xf005)
+-#define CDC_D_INT_EN_SET (0x015)
+-#define CDC_D_INT_EN_CLR (0x016)
++#define CDC_D_INT_EN_SET (0xf015)
++#define CDC_D_INT_EN_CLR (0xf016)
+ #define MBHC_SWITCH_INT BIT(7)
+ #define MBHC_MIC_ELECTRICAL_INS_REM_DET BIT(6)
+ #define MBHC_BUTTON_PRESS_DET BIT(5)
+diff --git a/sound/soc/intel/atom/sst-mfld-platform-pcm.c b/sound/soc/intel/atom/sst-mfld-platform-pcm.c
+index 82f2b6357778d..a3cb05d925846 100644
+--- a/sound/soc/intel/atom/sst-mfld-platform-pcm.c
++++ b/sound/soc/intel/atom/sst-mfld-platform-pcm.c
+@@ -331,7 +331,7 @@ static int sst_media_open(struct snd_pcm_substream *substream,
+
+ ret_val = power_up_sst(stream);
+ if (ret_val < 0)
+- return ret_val;
++ goto out_power_up;
+
+ /* Make sure, that the period size is always even */
+ snd_pcm_hw_constraint_step(substream->runtime, 0,
+@@ -340,8 +340,9 @@ static int sst_media_open(struct snd_pcm_substream *substream,
+ return snd_pcm_hw_constraint_integer(runtime,
+ SNDRV_PCM_HW_PARAM_PERIODS);
+ out_ops:
+- kfree(stream);
+ mutex_unlock(&sst_lock);
++out_power_up:
++ kfree(stream);
+ return ret_val;
+ }
+
+diff --git a/sound/soc/qcom/qdsp6/q6afe-dai.c b/sound/soc/qcom/qdsp6/q6afe-dai.c
+index 2a5302f1db98a..0168af8492727 100644
+--- a/sound/soc/qcom/qdsp6/q6afe-dai.c
++++ b/sound/soc/qcom/qdsp6/q6afe-dai.c
+@@ -1150,206 +1150,206 @@ static int q6afe_of_xlate_dai_name(struct snd_soc_component *component,
+ }
+
+ static const struct snd_soc_dapm_widget q6afe_dai_widgets[] = {
+- SND_SOC_DAPM_AIF_IN("HDMI_RX", NULL, 0, 0, 0, 0),
+- SND_SOC_DAPM_AIF_IN("SLIMBUS_0_RX", NULL, 0, 0, 0, 0),
+- SND_SOC_DAPM_AIF_IN("SLIMBUS_1_RX", NULL, 0, 0, 0, 0),
+- SND_SOC_DAPM_AIF_IN("SLIMBUS_2_RX", NULL, 0, 0, 0, 0),
+- SND_SOC_DAPM_AIF_IN("SLIMBUS_3_RX", NULL, 0, 0, 0, 0),
+- SND_SOC_DAPM_AIF_IN("SLIMBUS_4_RX", NULL, 0, 0, 0, 0),
+- SND_SOC_DAPM_AIF_IN("SLIMBUS_5_RX", NULL, 0, 0, 0, 0),
+- SND_SOC_DAPM_AIF_IN("SLIMBUS_6_RX", NULL, 0, 0, 0, 0),
+- SND_SOC_DAPM_AIF_OUT("SLIMBUS_0_TX", NULL, 0, 0, 0, 0),
+- SND_SOC_DAPM_AIF_OUT("SLIMBUS_1_TX", NULL, 0, 0, 0, 0),
+- SND_SOC_DAPM_AIF_OUT("SLIMBUS_2_TX", NULL, 0, 0, 0, 0),
+- SND_SOC_DAPM_AIF_OUT("SLIMBUS_3_TX", NULL, 0, 0, 0, 0),
+- SND_SOC_DAPM_AIF_OUT("SLIMBUS_4_TX", NULL, 0, 0, 0, 0),
+- SND_SOC_DAPM_AIF_OUT("SLIMBUS_5_TX", NULL, 0, 0, 0, 0),
+- SND_SOC_DAPM_AIF_OUT("SLIMBUS_6_TX", NULL, 0, 0, 0, 0),
++ SND_SOC_DAPM_AIF_IN("HDMI_RX", NULL, 0, SND_SOC_NOPM, 0, 0),
++ SND_SOC_DAPM_AIF_IN("SLIMBUS_0_RX", NULL, 0, SND_SOC_NOPM, 0, 0),
++ SND_SOC_DAPM_AIF_IN("SLIMBUS_1_RX", NULL, 0, SND_SOC_NOPM, 0, 0),
++ SND_SOC_DAPM_AIF_IN("SLIMBUS_2_RX", NULL, 0, SND_SOC_NOPM, 0, 0),
++ SND_SOC_DAPM_AIF_IN("SLIMBUS_3_RX", NULL, 0, SND_SOC_NOPM, 0, 0),
++ SND_SOC_DAPM_AIF_IN("SLIMBUS_4_RX", NULL, 0, SND_SOC_NOPM, 0, 0),
++ SND_SOC_DAPM_AIF_IN("SLIMBUS_5_RX", NULL, 0, SND_SOC_NOPM, 0, 0),
++ SND_SOC_DAPM_AIF_IN("SLIMBUS_6_RX", NULL, 0, SND_SOC_NOPM, 0, 0),
++ SND_SOC_DAPM_AIF_OUT("SLIMBUS_0_TX", NULL, 0, SND_SOC_NOPM, 0, 0),
++ SND_SOC_DAPM_AIF_OUT("SLIMBUS_1_TX", NULL, 0, SND_SOC_NOPM, 0, 0),
++ SND_SOC_DAPM_AIF_OUT("SLIMBUS_2_TX", NULL, 0, SND_SOC_NOPM, 0, 0),
++ SND_SOC_DAPM_AIF_OUT("SLIMBUS_3_TX", NULL, 0, SND_SOC_NOPM, 0, 0),
++ SND_SOC_DAPM_AIF_OUT("SLIMBUS_4_TX", NULL, 0, SND_SOC_NOPM, 0, 0),
++ SND_SOC_DAPM_AIF_OUT("SLIMBUS_5_TX", NULL, 0, SND_SOC_NOPM, 0, 0),
++ SND_SOC_DAPM_AIF_OUT("SLIMBUS_6_TX", NULL, 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_IN("QUAT_MI2S_RX", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_OUT("QUAT_MI2S_TX", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_IN("TERT_MI2S_RX", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_OUT("TERT_MI2S_TX", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_IN("SEC_MI2S_RX", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_OUT("SEC_MI2S_TX", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_IN("SEC_MI2S_RX_SD1",
+ "Secondary MI2S Playback SD1",
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_IN("PRI_MI2S_RX", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_OUT("PRI_MI2S_TX", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+
+ SND_SOC_DAPM_AIF_IN("PRIMARY_TDM_RX_0", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_IN("PRIMARY_TDM_RX_1", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_IN("PRIMARY_TDM_RX_2", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_IN("PRIMARY_TDM_RX_3", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_IN("PRIMARY_TDM_RX_4", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_IN("PRIMARY_TDM_RX_5", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_IN("PRIMARY_TDM_RX_6", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_IN("PRIMARY_TDM_RX_7", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_OUT("PRIMARY_TDM_TX_0", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_OUT("PRIMARY_TDM_TX_1", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_OUT("PRIMARY_TDM_TX_2", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_OUT("PRIMARY_TDM_TX_3", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_OUT("PRIMARY_TDM_TX_4", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_OUT("PRIMARY_TDM_TX_5", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_OUT("PRIMARY_TDM_TX_6", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_OUT("PRIMARY_TDM_TX_7", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+
+ SND_SOC_DAPM_AIF_IN("SEC_TDM_RX_0", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_IN("SEC_TDM_RX_1", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_IN("SEC_TDM_RX_2", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_IN("SEC_TDM_RX_3", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_IN("SEC_TDM_RX_4", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_IN("SEC_TDM_RX_5", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_IN("SEC_TDM_RX_6", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_IN("SEC_TDM_RX_7", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_OUT("SEC_TDM_TX_0", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_OUT("SEC_TDM_TX_1", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_OUT("SEC_TDM_TX_2", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_OUT("SEC_TDM_TX_3", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_OUT("SEC_TDM_TX_4", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_OUT("SEC_TDM_TX_5", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_OUT("SEC_TDM_TX_6", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_OUT("SEC_TDM_TX_7", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+
+ SND_SOC_DAPM_AIF_IN("TERT_TDM_RX_0", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_IN("TERT_TDM_RX_1", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_IN("TERT_TDM_RX_2", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_IN("TERT_TDM_RX_3", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_IN("TERT_TDM_RX_4", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_IN("TERT_TDM_RX_5", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_IN("TERT_TDM_RX_6", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_IN("TERT_TDM_RX_7", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_OUT("TERT_TDM_TX_0", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_OUT("TERT_TDM_TX_1", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_OUT("TERT_TDM_TX_2", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_OUT("TERT_TDM_TX_3", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_OUT("TERT_TDM_TX_4", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_OUT("TERT_TDM_TX_5", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_OUT("TERT_TDM_TX_6", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_OUT("TERT_TDM_TX_7", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+
+ SND_SOC_DAPM_AIF_IN("QUAT_TDM_RX_0", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_IN("QUAT_TDM_RX_1", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_IN("QUAT_TDM_RX_2", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_IN("QUAT_TDM_RX_3", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_IN("QUAT_TDM_RX_4", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_IN("QUAT_TDM_RX_5", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_IN("QUAT_TDM_RX_6", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_IN("QUAT_TDM_RX_7", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_OUT("QUAT_TDM_TX_0", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_OUT("QUAT_TDM_TX_1", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_OUT("QUAT_TDM_TX_2", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_OUT("QUAT_TDM_TX_3", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_OUT("QUAT_TDM_TX_4", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_OUT("QUAT_TDM_TX_5", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_OUT("QUAT_TDM_TX_6", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_OUT("QUAT_TDM_TX_7", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+
+ SND_SOC_DAPM_AIF_IN("QUIN_TDM_RX_0", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_IN("QUIN_TDM_RX_1", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_IN("QUIN_TDM_RX_2", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_IN("QUIN_TDM_RX_3", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_IN("QUIN_TDM_RX_4", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_IN("QUIN_TDM_RX_5", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_IN("QUIN_TDM_RX_6", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_IN("QUIN_TDM_RX_7", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_OUT("QUIN_TDM_TX_0", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_OUT("QUIN_TDM_TX_1", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_OUT("QUIN_TDM_TX_2", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_OUT("QUIN_TDM_TX_3", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_OUT("QUIN_TDM_TX_4", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_OUT("QUIN_TDM_TX_5", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_OUT("QUIN_TDM_TX_6", NULL,
+- 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_OUT("QUIN_TDM_TX_7", NULL,
+- 0, 0, 0, 0),
+- SND_SOC_DAPM_AIF_OUT("DISPLAY_PORT_RX", "NULL", 0, 0, 0, 0),
++ 0, SND_SOC_NOPM, 0, 0),
++ SND_SOC_DAPM_AIF_OUT("DISPLAY_PORT_RX", "NULL", 0, SND_SOC_NOPM, 0, 0),
+ };
+
+ static const struct snd_soc_component_driver q6afe_dai_component = {
+diff --git a/sound/soc/qcom/qdsp6/q6routing.c b/sound/soc/qcom/qdsp6/q6routing.c
+index 46e50612b92c1..750e6a30444eb 100644
+--- a/sound/soc/qcom/qdsp6/q6routing.c
++++ b/sound/soc/qcom/qdsp6/q6routing.c
+@@ -973,6 +973,20 @@ static int msm_routing_probe(struct snd_soc_component *c)
+ return 0;
+ }
+
++static unsigned int q6routing_reg_read(struct snd_soc_component *component,
++ unsigned int reg)
++{
++ /* default value */
++ return 0;
++}
++
++static int q6routing_reg_write(struct snd_soc_component *component,
++ unsigned int reg, unsigned int val)
++{
++ /* dummy */
++ return 0;
++}
++
+ static const struct snd_soc_component_driver msm_soc_routing_component = {
+ .probe = msm_routing_probe,
+ .name = DRV_NAME,
+@@ -981,6 +995,8 @@ static const struct snd_soc_component_driver msm_soc_routing_component = {
+ .num_dapm_widgets = ARRAY_SIZE(msm_qdsp6_widgets),
+ .dapm_routes = intercon,
+ .num_dapm_routes = ARRAY_SIZE(intercon),
++ .read = q6routing_reg_read,
++ .write = q6routing_reg_write,
+ };
+
+ static int q6pcm_routing_probe(struct platform_device *pdev)
+diff --git a/tools/bpf/bpftool/gen.c b/tools/bpf/bpftool/gen.c
+index 5ff951e08c740..52ebe400e9ca4 100644
+--- a/tools/bpf/bpftool/gen.c
++++ b/tools/bpf/bpftool/gen.c
+@@ -402,7 +402,7 @@ static int do_skeleton(int argc, char **argv)
+ { \n\
+ struct %1$s *obj; \n\
+ \n\
+- obj = (typeof(obj))calloc(1, sizeof(*obj)); \n\
++ obj = (struct %1$s *)calloc(1, sizeof(*obj)); \n\
+ if (!obj) \n\
+ return NULL; \n\
+ if (%1$s__create_skeleton(obj)) \n\
+@@ -466,7 +466,7 @@ static int do_skeleton(int argc, char **argv)
+ { \n\
+ struct bpf_object_skeleton *s; \n\
+ \n\
+- s = (typeof(s))calloc(1, sizeof(*s)); \n\
++ s = (struct bpf_object_skeleton *)calloc(1, sizeof(*s));\n\
+ if (!s) \n\
+ return -1; \n\
+ obj->skeleton = s; \n\
+@@ -484,7 +484,7 @@ static int do_skeleton(int argc, char **argv)
+ /* maps */ \n\
+ s->map_cnt = %zu; \n\
+ s->map_skel_sz = sizeof(*s->maps); \n\
+- s->maps = (typeof(s->maps))calloc(s->map_cnt, s->map_skel_sz);\n\
++ s->maps = (struct bpf_map_skeleton *)calloc(s->map_cnt, s->map_skel_sz);\n\
+ if (!s->maps) \n\
+ goto err; \n\
+ ",
+@@ -520,7 +520,7 @@ static int do_skeleton(int argc, char **argv)
+ /* programs */ \n\
+ s->prog_cnt = %zu; \n\
+ s->prog_skel_sz = sizeof(*s->progs); \n\
+- s->progs = (typeof(s->progs))calloc(s->prog_cnt, s->prog_skel_sz);\n\
++ s->progs = (struct bpf_prog_skeleton *)calloc(s->prog_cnt, s->prog_skel_sz);\n\
+ if (!s->progs) \n\
+ goto err; \n\
+ ",
+diff --git a/tools/testing/selftests/cgroup/cgroup_util.c b/tools/testing/selftests/cgroup/cgroup_util.c
+index 8a637ca7d73a4..05853b0b88318 100644
+--- a/tools/testing/selftests/cgroup/cgroup_util.c
++++ b/tools/testing/selftests/cgroup/cgroup_util.c
+@@ -106,7 +106,7 @@ int cg_read_strcmp(const char *cgroup, const char *control,
+
+ /* Handle the case of comparing against empty string */
+ if (!expected)
+- size = 32;
++ return -1;
+ else
+ size = strlen(expected) + 1;
+
+diff --git a/tools/testing/selftests/kvm/x86_64/debug_regs.c b/tools/testing/selftests/kvm/x86_64/debug_regs.c
+index 8162c58a1234e..b8d14f9db5f9e 100644
+--- a/tools/testing/selftests/kvm/x86_64/debug_regs.c
++++ b/tools/testing/selftests/kvm/x86_64/debug_regs.c
+@@ -40,11 +40,11 @@ static void guest_code(void)
+
+ /* Single step test, covers 2 basic instructions and 2 emulated */
+ asm volatile("ss_start: "
+- "xor %%rax,%%rax\n\t"
++ "xor %%eax,%%eax\n\t"
+ "cpuid\n\t"
+ "movl $0x1a0,%%ecx\n\t"
+ "rdmsr\n\t"
+- : : : "rax", "ecx");
++ : : : "eax", "ebx", "ecx", "edx");
+
+ /* DR6.BD test */
+ asm volatile("bd_start: mov %%dr0, %%rax" : : : "rax");
+diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
+index 8a9d13e8e904f..b005685a6de42 100644
+--- a/virt/kvm/arm/mmu.c
++++ b/virt/kvm/arm/mmu.c
+@@ -331,7 +331,8 @@ static void unmap_stage2_puds(struct kvm *kvm, pgd_t *pgd,
+ * destroying the VM), otherwise another faulting VCPU may come in and mess
+ * with things behind our backs.
+ */
+-static void unmap_stage2_range(struct kvm *kvm, phys_addr_t start, u64 size)
++static void __unmap_stage2_range(struct kvm *kvm, phys_addr_t start, u64 size,
++ bool may_block)
+ {
+ pgd_t *pgd;
+ phys_addr_t addr = start, end = start + size;
+@@ -356,11 +357,16 @@ static void unmap_stage2_range(struct kvm *kvm, phys_addr_t start, u64 size)
+ * If the range is too large, release the kvm->mmu_lock
+ * to prevent starvation and lockup detector warnings.
+ */
+- if (next != end)
++ if (may_block && next != end)
+ cond_resched_lock(&kvm->mmu_lock);
+ } while (pgd++, addr = next, addr != end);
+ }
+
++static void unmap_stage2_range(struct kvm *kvm, phys_addr_t start, u64 size)
++{
++ __unmap_stage2_range(kvm, start, size, true);
++}
++
+ static void stage2_flush_ptes(struct kvm *kvm, pmd_t *pmd,
+ phys_addr_t addr, phys_addr_t end)
+ {
+@@ -2041,18 +2047,21 @@ static int handle_hva_to_gpa(struct kvm *kvm,
+
+ static int kvm_unmap_hva_handler(struct kvm *kvm, gpa_t gpa, u64 size, void *data)
+ {
+- unmap_stage2_range(kvm, gpa, size);
++ unsigned flags = *(unsigned *)data;
++ bool may_block = flags & MMU_NOTIFIER_RANGE_BLOCKABLE;
++
++ __unmap_stage2_range(kvm, gpa, size, may_block);
+ return 0;
+ }
+
+ int kvm_unmap_hva_range(struct kvm *kvm,
+- unsigned long start, unsigned long end)
++ unsigned long start, unsigned long end, unsigned flags)
+ {
+ if (!kvm->arch.pgd)
+ return 0;
+
+ trace_kvm_unmap_hva_range(start, end);
+- handle_hva_to_gpa(kvm, start, end, &kvm_unmap_hva_handler, NULL);
++ handle_hva_to_gpa(kvm, start, end, &kvm_unmap_hva_handler, &flags);
+ return 0;
+ }
+
+diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
+index 77aa91fb08d2b..66b7a9dbb77dc 100644
+--- a/virt/kvm/kvm_main.c
++++ b/virt/kvm/kvm_main.c
+@@ -428,7 +428,8 @@ static int kvm_mmu_notifier_invalidate_range_start(struct mmu_notifier *mn,
+ * count is also read inside the mmu_lock critical section.
+ */
+ kvm->mmu_notifier_count++;
+- need_tlb_flush = kvm_unmap_hva_range(kvm, range->start, range->end);
++ need_tlb_flush = kvm_unmap_hva_range(kvm, range->start, range->end,
++ range->flags);
+ need_tlb_flush |= kvm->tlbs_dirty;
+ /* we've to flush the tlb before the pages can be freed */
+ if (need_tlb_flush)
* [gentoo-commits] proj/linux-patches:5.7 commit in: /
@ 2020-08-27 13:19 Mike Pagano
From: Mike Pagano @ 2020-08-27 13:19 UTC (permalink / raw
To: gentoo-commits
commit: 20bfdd395abdeb66604fa2dfa14eaf8a0e3d508c
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Aug 27 13:19:09 2020 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Aug 27 13:19:09 2020 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=20bfdd39
Linux patch 5.7.19
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1018_linux-5.7.19.patch | 382 ++++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 386 insertions(+)
diff --git a/0000_README b/0000_README
index 1ab468f..11a3e8d 100644
--- a/0000_README
+++ b/0000_README
@@ -115,6 +115,10 @@ Patch: 1017_linux-5.7.18.patch
From: http://www.kernel.org
Desc: Linux 5.7.18
+Patch: 1018_linux-5.7.19.patch
+From: http://www.kernel.org
+Desc: Linux 5.7.19
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1018_linux-5.7.19.patch b/1018_linux-5.7.19.patch
new file mode 100644
index 0000000..005c31b
--- /dev/null
+++ b/1018_linux-5.7.19.patch
@@ -0,0 +1,382 @@
+diff --git a/Makefile b/Makefile
+index b56456c45c97f..b60ba59cfb196 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 7
+-SUBLEVEL = 18
++SUBLEVEL = 19
+ EXTRAVERSION =
+ NAME = Kleptomaniac Octopus
+
+diff --git a/arch/powerpc/kernel/cpu_setup_power.S b/arch/powerpc/kernel/cpu_setup_power.S
+index a460298c7ddb4..f91ecb10d0ae7 100644
+--- a/arch/powerpc/kernel/cpu_setup_power.S
++++ b/arch/powerpc/kernel/cpu_setup_power.S
+@@ -184,7 +184,7 @@ __init_LPCR_ISA300:
+
+ __init_FSCR:
+ mfspr r3,SPRN_FSCR
+- ori r3,r3,FSCR_TAR|FSCR_DSCR|FSCR_EBB
++ ori r3,r3,FSCR_TAR|FSCR_EBB
+ mtspr SPRN_FSCR,r3
+ blr
+
+diff --git a/drivers/net/ethernet/amazon/ena/ena_netdev.c b/drivers/net/ethernet/amazon/ena/ena_netdev.c
+index c501a4edc34d6..51b9b49a295e9 100644
+--- a/drivers/net/ethernet/amazon/ena/ena_netdev.c
++++ b/drivers/net/ethernet/amazon/ena/ena_netdev.c
+@@ -3594,7 +3594,7 @@ static int check_missing_comp_in_tx_queue(struct ena_adapter *adapter,
+ }
+
+ u64_stats_update_begin(&tx_ring->syncp);
+- tx_ring->tx_stats.missed_tx = missed_tx;
++ tx_ring->tx_stats.missed_tx += missed_tx;
+ u64_stats_update_end(&tx_ring->syncp);
+
+ return rc;
+@@ -4519,6 +4519,9 @@ static void ena_keep_alive_wd(void *adapter_data,
+ rx_drops = ((u64)desc->rx_drops_high << 32) | desc->rx_drops_low;
+
+ u64_stats_update_begin(&adapter->syncp);
++ /* These stats are accumulated by the device, so the counters indicate
++ * all drops since last reset.
++ */
+ adapter->dev_stats.rx_drops = rx_drops;
+ u64_stats_update_end(&adapter->syncp);
+ }
+diff --git a/fs/binfmt_flat.c b/fs/binfmt_flat.c
+index 831a2b25ba79f..196f9f64d075c 100644
+--- a/fs/binfmt_flat.c
++++ b/fs/binfmt_flat.c
+@@ -571,7 +571,7 @@ static int load_flat_file(struct linux_binprm *bprm,
+ goto err;
+ }
+
+- len = data_len + extra;
++ len = data_len + extra + MAX_SHARED_LIBS * sizeof(unsigned long);
+ len = PAGE_ALIGN(len);
+ realdatastart = vm_mmap(NULL, 0, len,
+ PROT_READ|PROT_WRITE|PROT_EXEC, MAP_PRIVATE, 0);
+@@ -585,7 +585,9 @@ static int load_flat_file(struct linux_binprm *bprm,
+ vm_munmap(textpos, text_len);
+ goto err;
+ }
+- datapos = ALIGN(realdatastart, FLAT_DATA_ALIGN);
++ datapos = ALIGN(realdatastart +
++ MAX_SHARED_LIBS * sizeof(unsigned long),
++ FLAT_DATA_ALIGN);
+
+ pr_debug("Allocated data+bss+stack (%u bytes): %lx\n",
+ data_len + bss_len + stack_len, datapos);
+@@ -615,7 +617,7 @@ static int load_flat_file(struct linux_binprm *bprm,
+ memp_size = len;
+ } else {
+
+- len = text_len + data_len + extra;
++ len = text_len + data_len + extra + MAX_SHARED_LIBS * sizeof(u32);
+ len = PAGE_ALIGN(len);
+ textpos = vm_mmap(NULL, 0, len,
+ PROT_READ | PROT_EXEC | PROT_WRITE, MAP_PRIVATE, 0);
+@@ -630,7 +632,9 @@ static int load_flat_file(struct linux_binprm *bprm,
+ }
+
+ realdatastart = textpos + ntohl(hdr->data_start);
+- datapos = ALIGN(realdatastart, FLAT_DATA_ALIGN);
++ datapos = ALIGN(realdatastart +
++ MAX_SHARED_LIBS * sizeof(u32),
++ FLAT_DATA_ALIGN);
+
+ reloc = (__be32 __user *)
+ (datapos + (ntohl(hdr->reloc_start) - text_len));
+@@ -647,9 +651,8 @@ static int load_flat_file(struct linux_binprm *bprm,
+ (text_len + full_data
+ - sizeof(struct flat_hdr)),
+ 0);
+- if (datapos != realdatastart)
+- memmove((void *)datapos, (void *)realdatastart,
+- full_data);
++ memmove((void *) datapos, (void *) realdatastart,
++ full_data);
+ #else
+ /*
+ * This is used on MMU systems mainly for testing.
+@@ -705,7 +708,8 @@ static int load_flat_file(struct linux_binprm *bprm,
+ if (IS_ERR_VALUE(result)) {
+ ret = result;
+ pr_err("Unable to read code+data+bss, errno %d\n", ret);
+- vm_munmap(textpos, text_len + data_len + extra);
++ vm_munmap(textpos, text_len + data_len + extra +
++ MAX_SHARED_LIBS * sizeof(u32));
+ goto err;
+ }
+ }
+diff --git a/net/core/skbuff.c b/net/core/skbuff.c
+index 7e29590482ce5..115f3fde314f3 100644
+--- a/net/core/skbuff.c
++++ b/net/core/skbuff.c
+@@ -5421,8 +5421,8 @@ struct sk_buff *skb_vlan_untag(struct sk_buff *skb)
+ skb = skb_share_check(skb, GFP_ATOMIC);
+ if (unlikely(!skb))
+ goto err_free;
+-
+- if (unlikely(!pskb_may_pull(skb, VLAN_HLEN)))
++ /* We may access the two bytes after vlan_hdr in vlan_set_encap_proto(). */
++ if (unlikely(!pskb_may_pull(skb, VLAN_HLEN + sizeof(unsigned short))))
+ goto err_free;
+
+ vhdr = (struct vlan_hdr *)skb->data;
+diff --git a/net/ethtool/features.c b/net/ethtool/features.c
+index 4e632dc987d85..495635f152ba6 100644
+--- a/net/ethtool/features.c
++++ b/net/ethtool/features.c
+@@ -224,7 +224,9 @@ int ethnl_set_features(struct sk_buff *skb, struct genl_info *info)
+ DECLARE_BITMAP(wanted_diff_mask, NETDEV_FEATURE_COUNT);
+ DECLARE_BITMAP(active_diff_mask, NETDEV_FEATURE_COUNT);
+ DECLARE_BITMAP(old_active, NETDEV_FEATURE_COUNT);
++ DECLARE_BITMAP(old_wanted, NETDEV_FEATURE_COUNT);
+ DECLARE_BITMAP(new_active, NETDEV_FEATURE_COUNT);
++ DECLARE_BITMAP(new_wanted, NETDEV_FEATURE_COUNT);
+ DECLARE_BITMAP(req_wanted, NETDEV_FEATURE_COUNT);
+ DECLARE_BITMAP(req_mask, NETDEV_FEATURE_COUNT);
+ struct nlattr *tb[ETHTOOL_A_FEATURES_MAX + 1];
+@@ -250,6 +252,7 @@ int ethnl_set_features(struct sk_buff *skb, struct genl_info *info)
+
+ rtnl_lock();
+ ethnl_features_to_bitmap(old_active, dev->features);
++ ethnl_features_to_bitmap(old_wanted, dev->wanted_features);
+ ret = ethnl_parse_bitset(req_wanted, req_mask, NETDEV_FEATURE_COUNT,
+ tb[ETHTOOL_A_FEATURES_WANTED],
+ netdev_features_strings, info->extack);
+@@ -261,17 +264,15 @@ int ethnl_set_features(struct sk_buff *skb, struct genl_info *info)
+ goto out_rtnl;
+ }
+
+- /* set req_wanted bits not in req_mask from old_active */
++ /* set req_wanted bits not in req_mask from old_wanted */
+ bitmap_and(req_wanted, req_wanted, req_mask, NETDEV_FEATURE_COUNT);
+- bitmap_andnot(new_active, old_active, req_mask, NETDEV_FEATURE_COUNT);
+- bitmap_or(req_wanted, new_active, req_wanted, NETDEV_FEATURE_COUNT);
+- if (bitmap_equal(req_wanted, old_active, NETDEV_FEATURE_COUNT)) {
+- ret = 0;
+- goto out_rtnl;
++ bitmap_andnot(new_wanted, old_wanted, req_mask, NETDEV_FEATURE_COUNT);
++ bitmap_or(req_wanted, new_wanted, req_wanted, NETDEV_FEATURE_COUNT);
++ if (!bitmap_equal(req_wanted, old_wanted, NETDEV_FEATURE_COUNT)) {
++ dev->wanted_features &= ~dev->hw_features;
++ dev->wanted_features |= ethnl_bitmap_to_features(req_wanted) & dev->hw_features;
++ __netdev_update_features(dev);
+ }
+-
+- dev->wanted_features = ethnl_bitmap_to_features(req_wanted);
+- __netdev_update_features(dev);
+ ethnl_features_to_bitmap(new_active, dev->features);
+ mod = !bitmap_equal(old_active, new_active, NETDEV_FEATURE_COUNT);
+
+diff --git a/net/ipv4/nexthop.c b/net/ipv4/nexthop.c
+index 563f71bcb2d74..c97069e799811 100644
+--- a/net/ipv4/nexthop.c
++++ b/net/ipv4/nexthop.c
+@@ -402,7 +402,7 @@ static int nh_check_attr_group(struct net *net, struct nlattr *tb[],
+ struct nexthop_grp *nhg;
+ unsigned int i, j;
+
+- if (len & (sizeof(struct nexthop_grp) - 1)) {
++ if (!len || len & (sizeof(struct nexthop_grp) - 1)) {
+ NL_SET_ERR_MSG(extack,
+ "Invalid length for nexthop group attribute");
+ return -EINVAL;
+@@ -1104,6 +1104,9 @@ static struct nexthop *nexthop_create_group(struct net *net,
+ struct nexthop *nh;
+ int i;
+
++ if (WARN_ON(!num_nh))
++ return ERR_PTR(-EINVAL);
++
+ nh = nexthop_alloc();
+ if (!nh)
+ return ERR_PTR(-ENOMEM);
+diff --git a/net/ipv6/ip6_tunnel.c b/net/ipv6/ip6_tunnel.c
+index 4703b09808d0a..84f90b8b88903 100644
+--- a/net/ipv6/ip6_tunnel.c
++++ b/net/ipv6/ip6_tunnel.c
+@@ -886,7 +886,15 @@ int ip6_tnl_rcv(struct ip6_tnl *t, struct sk_buff *skb,
+ struct metadata_dst *tun_dst,
+ bool log_ecn_err)
+ {
+- return __ip6_tnl_rcv(t, skb, tpi, tun_dst, ip6ip6_dscp_ecn_decapsulate,
++ int (*dscp_ecn_decapsulate)(const struct ip6_tnl *t,
++ const struct ipv6hdr *ipv6h,
++ struct sk_buff *skb);
++
++ dscp_ecn_decapsulate = ip6ip6_dscp_ecn_decapsulate;
++ if (tpi->proto == htons(ETH_P_IP))
++ dscp_ecn_decapsulate = ip4ip6_dscp_ecn_decapsulate;
++
++ return __ip6_tnl_rcv(t, skb, tpi, tun_dst, dscp_ecn_decapsulate,
+ log_ecn_err);
+ }
+ EXPORT_SYMBOL(ip6_tnl_rcv);
+diff --git a/net/qrtr/qrtr.c b/net/qrtr/qrtr.c
+index 300a104b9a0fb..85ab4559f0577 100644
+--- a/net/qrtr/qrtr.c
++++ b/net/qrtr/qrtr.c
+@@ -692,23 +692,25 @@ static void qrtr_port_remove(struct qrtr_sock *ipc)
+ */
+ static int qrtr_port_assign(struct qrtr_sock *ipc, int *port)
+ {
++ u32 min_port;
+ int rc;
+
+ mutex_lock(&qrtr_port_lock);
+ if (!*port) {
+- rc = idr_alloc(&qrtr_ports, ipc,
+- QRTR_MIN_EPH_SOCKET, QRTR_MAX_EPH_SOCKET + 1,
+- GFP_ATOMIC);
+- if (rc >= 0)
+- *port = rc;
++ min_port = QRTR_MIN_EPH_SOCKET;
++ rc = idr_alloc_u32(&qrtr_ports, ipc, &min_port, QRTR_MAX_EPH_SOCKET, GFP_ATOMIC);
++ if (!rc)
++ *port = min_port;
+ } else if (*port < QRTR_MIN_EPH_SOCKET && !capable(CAP_NET_ADMIN)) {
+ rc = -EACCES;
+ } else if (*port == QRTR_PORT_CTRL) {
+- rc = idr_alloc(&qrtr_ports, ipc, 0, 1, GFP_ATOMIC);
++ min_port = 0;
++ rc = idr_alloc_u32(&qrtr_ports, ipc, &min_port, 0, GFP_ATOMIC);
+ } else {
+- rc = idr_alloc(&qrtr_ports, ipc, *port, *port + 1, GFP_ATOMIC);
+- if (rc >= 0)
+- *port = rc;
++ min_port = *port;
++ rc = idr_alloc_u32(&qrtr_ports, ipc, &min_port, *port, GFP_ATOMIC);
++ if (!rc)
++ *port = min_port;
+ }
+ mutex_unlock(&qrtr_port_lock);
+
+diff --git a/net/sched/act_ct.c b/net/sched/act_ct.c
+index 417526d7741bf..16bc5b0d1eaaa 100644
+--- a/net/sched/act_ct.c
++++ b/net/sched/act_ct.c
+@@ -702,7 +702,7 @@ static int tcf_ct_handle_fragments(struct net *net, struct sk_buff *skb,
+ err = ip_defrag(net, skb, user);
+ local_bh_enable();
+ if (err && err != -EINPROGRESS)
+- goto out_free;
++ return err;
+
+ if (!err) {
+ *defrag = true;
+diff --git a/net/sctp/stream.c b/net/sctp/stream.c
+index bda2536dd740f..6dc95dcc0ff4f 100644
+--- a/net/sctp/stream.c
++++ b/net/sctp/stream.c
+@@ -88,12 +88,13 @@ static int sctp_stream_alloc_out(struct sctp_stream *stream, __u16 outcnt,
+ int ret;
+
+ if (outcnt <= stream->outcnt)
+- return 0;
++ goto out;
+
+ ret = genradix_prealloc(&stream->out, outcnt, gfp);
+ if (ret)
+ return ret;
+
++out:
+ stream->outcnt = outcnt;
+ return 0;
+ }
+@@ -104,12 +105,13 @@ static int sctp_stream_alloc_in(struct sctp_stream *stream, __u16 incnt,
+ int ret;
+
+ if (incnt <= stream->incnt)
+- return 0;
++ goto out;
+
+ ret = genradix_prealloc(&stream->in, incnt, gfp);
+ if (ret)
+ return ret;
+
++out:
+ stream->incnt = incnt;
+ return 0;
+ }
+diff --git a/net/smc/smc_diag.c b/net/smc/smc_diag.c
+index e1f64f4ba2361..da9ba6d1679b7 100644
+--- a/net/smc/smc_diag.c
++++ b/net/smc/smc_diag.c
+@@ -170,13 +170,15 @@ static int __smc_diag_dump(struct sock *sk, struct sk_buff *skb,
+ (req->diag_ext & (1 << (SMC_DIAG_DMBINFO - 1))) &&
+ !list_empty(&smc->conn.lgr->list)) {
+ struct smc_connection *conn = &smc->conn;
+- struct smcd_diag_dmbinfo dinfo = {
+- .linkid = *((u32 *)conn->lgr->id),
+- .peer_gid = conn->lgr->peer_gid,
+- .my_gid = conn->lgr->smcd->local_gid,
+- .token = conn->rmb_desc->token,
+- .peer_token = conn->peer_token
+- };
++ struct smcd_diag_dmbinfo dinfo;
++
++ memset(&dinfo, 0, sizeof(dinfo));
++
++ dinfo.linkid = *((u32 *)conn->lgr->id);
++ dinfo.peer_gid = conn->lgr->peer_gid;
++ dinfo.my_gid = conn->lgr->smcd->local_gid;
++ dinfo.token = conn->rmb_desc->token;
++ dinfo.peer_token = conn->peer_token;
+
+ if (nla_put(skb, SMC_DIAG_DMBINFO, sizeof(dinfo), &dinfo) < 0)
+ goto errout;
+diff --git a/net/tipc/crypto.c b/net/tipc/crypto.c
+index 8c47ded2edb61..b214b898d11ad 100644
+--- a/net/tipc/crypto.c
++++ b/net/tipc/crypto.c
+@@ -757,10 +757,12 @@ static void tipc_aead_encrypt_done(struct crypto_async_request *base, int err)
+ switch (err) {
+ case 0:
+ this_cpu_inc(tx->stats->stat[STAT_ASYNC_OK]);
++ rcu_read_lock();
+ if (likely(test_bit(0, &b->up)))
+ b->media->send_msg(net, skb, b, &tx_ctx->dst);
+ else
+ kfree_skb(skb);
++ rcu_read_unlock();
+ break;
+ case -EINPROGRESS:
+ return;
+diff --git a/net/tipc/netlink_compat.c b/net/tipc/netlink_compat.c
+index 217516357ef26..90e3c70a91ad0 100644
+--- a/net/tipc/netlink_compat.c
++++ b/net/tipc/netlink_compat.c
+@@ -275,8 +275,9 @@ err_out:
+ static int tipc_nl_compat_dumpit(struct tipc_nl_compat_cmd_dump *cmd,
+ struct tipc_nl_compat_msg *msg)
+ {
+- int err;
++ struct nlmsghdr *nlh;
+ struct sk_buff *arg;
++ int err;
+
+ if (msg->req_type && (!msg->req_size ||
+ !TLV_CHECK_TYPE(msg->req, msg->req_type)))
+@@ -305,6 +306,15 @@ static int tipc_nl_compat_dumpit(struct tipc_nl_compat_cmd_dump *cmd,
+ return -ENOMEM;
+ }
+
++ nlh = nlmsg_put(arg, 0, 0, tipc_genl_family.id, 0, NLM_F_MULTI);
++ if (!nlh) {
++ kfree_skb(arg);
++ kfree_skb(msg->rep);
++ msg->rep = NULL;
++ return -EMSGSIZE;
++ }
++ nlmsg_end(arg, nlh);
++
+ err = __tipc_nl_compat_dumpit(cmd, msg, arg);
+ if (err) {
+ kfree_skb(msg->rep);